For years, it has been debated who is liable when harm is caused by a machinery product that has an AI-related component. Beyond the Product Liability Directive itself, the Machinery Directive's gaps and its underdevelopment with regard to AI have left too much room for speculation. Now, as the EU shapes its approach to AI, the regulatory contours are gradually becoming apparent. So what do the new rules on machinery allow us to expect for AI-enabled machinery?

The general concept is as follows.

Software ensuring safety functions is now a new item in the list of high-risk machinery products and in the indicative list of safety components, and AI systems are included therein.

Machinery embedding AI systems that ensure safety functions is another new item in the list of high-risk machinery products.

Such AI systems, and machinery embedding them, must therefore meet certain requirements in order to pass the high-level user protection threshold before being placed on the market or put into service.

In light of the new rules, the general principle is now as follows.

Where a machinery product integrates an AI system, the machinery risk assessment shall take into account the risk assessment for that AI system carried out under the AI Act, which is another part (indeed the core) of the EU approach to AI.

While carrying out the machinery risk assessment, the manufacturer shall identify the hazards that the machinery product may generate and the hazardous situations associated with it. These include hazards that may arise during the lifecycle of the machinery product and that are foreseeable at the time it is placed on the market as an intended evolution of its fully or partially evolving behaviour or logic, resulting from a design for operation with varying levels of autonomy.

Having carried out the risk assessment, the manufacturer then determines the health and safety requirements that apply to the machinery product. The product shall then be designed and constructed to prevent or minimise all the relevant risks in line with the risk assessment results.

The new rules also address technical documentation, which shall specify the means used by the manufacturer to ensure the conformity of the machinery product with those health and safety requirements.

Where the safety-related software includes an AI system, and the machinery product or partly completed machinery is sensor-fed, remotely driven, or autonomous, with safety-related operations controlled by sensor data, the technical documentation shall include a description, where appropriate, of the general characteristics, capabilities, and limitations of the system, as well as the data, development, testing, and validation processes used.

There are also some features that complete the concept.

Software updates that were not considered in the initial risk assessment and that have an impact on safety should be regarded as a substantial modification, requiring a new risk assessment to be carried out.

When machinery is substantially modified, the party that modifies it becomes the manufacturer and must comply with the corresponding obligations.

Third parties involved in the machinery supply chain are generally obliged to cooperate.

So the regulatory contours are shifting, and AI-enabled machinery is not left behind.

The new rules on machinery will need some time for adoption, and will become applicable two and a half years after their entry into force.

That gives manufacturers and third parties involved in machinery supply chains time to adapt. Adaptation, however, is not that fast for those who were not subject to such requirements before. This is why AI-related manufacturers and third parties should start considering the requirements and implementing them in their business and development processes even before the adoption takes place.

Artem Taranowski

Years in the profession allow him to focus on data and artificial intelligence at the intersection of law, technology, and science from a real-life point of view.