The European Commission has unveiled its view of how AI is to be regulated. Just a year ago it was a white paper; now it is a regulation in the making, one that may well matter just as much as the GDPR does. Either way, the EU is very serious about taking the lead on the approach to AI.

While new rules on machinery products and a new coordinated plan on AI with the Member States are also part of the package, it is the legal framework on AI that is in focus. Proposed as a regulation, it shall be directly applicable across the EU, and its reach will extend beyond it, just as the GDPR’s does.

The focal point of the legal framework is ensuring that AI can be trusted and that its risks are mitigated.

The general approach is risk-based: requirements vary depending on where AI is used, for what purpose and how.

The higher the risk, the stricter the requirements. Accordingly, all AI systems are to be classified into four categories:

  • Minimal risk
  • Limited risk
  • High risk
  • Unacceptable risk

What minimal-risk AI systems are about

These are the vast majority of AI applications: systems that represent minimal or no risk at all, such as AI-enabled spam filters, video games, or industrial applications that minimise waste and optimise resources.

There is no intervention here: such systems can be used freely, with no restrictions beyond the legal rules that already exist to protect consumers.

What limited-risk AI systems are about

These are AI systems subject to specific transparency obligations, such as AI-enabled chatbots and systems that assist in buying tickets, booking hotels, finding the closest store, or choosing and purchasing goods.

These are also allowed, but the transparency obligations ensure that users know they are interacting with a machine and can take an informed decision to continue or step back.

What high-risk AI systems are about

This is the most important category and the main focus of the legal framework, because such AI systems interfere with important aspects of human life. It covers AI-enabled technologies used in:

  • critical infrastructure, e.g. transport or health care, be it self-driving cars or medical devices, where a failure could put life and health at risk
  • education, where the scoring of exams may determine access to education or a professional course
  • safety components of products, e.g. AI applications in robot-assisted surgery
  • employment and staff management, e.g. CV sorting in recruitment procedures or employee evaluation for career decisions
  • essential private and public services, e.g. credit scoring that may cost someone a loan or welfare benefits
  • law enforcement, where it may interfere with fundamental rights, e.g. the evaluation of the reliability of evidence
  • migration, asylum and border control management, e.g. verification of the authenticity of travel documents
  • administration of justice and democratic processes, e.g. applying the law to a particular set of facts and circumstances

Because these systems can have a huge impact, they are subject to a set of strict obligations before they can be put on the market or into service. AI providers are required to:

  • feed the system with high-quality data, making sure that its results are not biased or discriminatory
  • supply detailed documentation about how the AI system works and what its purpose is, so that authorities can assess its compliance and the system’s decisions can be explained
  • provide users with substantial, clear and adequate information to help them understand and use the AI system
  • ensure a proper level of human oversight both in the design and in the implementation of the AI
  • respect the highest standards of cybersecurity, robustness and accuracy

Providers must also implement appropriate logging of activity to ensure the traceability of results, along with adequate risk assessment and mitigation.

For example, remote biometric identification systems as such are considered high risk and are subject to even stricter obligations, while their live use in publicly accessible spaces for law enforcement purposes is prohibited altogether, except in a few strictly defined and regulated cases where the use is authorised by a court or another independent body and is limited in time, geographic reach and scope of search. The message is crystal clear: there is no room for mass surveillance in society.

What unacceptable-risk AI systems are about

These are AI-enabled systems considered a clear threat to the safety, livelihoods and rights of people. They include AI systems or applications that manipulate human behaviour to circumvent users’ free will, e.g. ones that encourage dangerous behaviour by minors or mislead the elderly, as well as systems that allow so-called social scoring by governments, ranking people in ways that go against fundamental values. This is also the category for the aforementioned live remote biometric identification systems used by law enforcement in public places, which verge on mass surveillance.

Such systems are deemed unacceptable, have no place in society and are therefore proposed to be banned.

How is it going to be governed and enforced

At the national level, the competent authorities of the Member States are to supervise the new rules and check whether AI systems meet the requirements. At the EU level, a new European Artificial Intelligence Board will facilitate the implementation of the regulation and the development of AI-related standards.

If need be, it is the national authorities that would decide whether AI-enabled products are to be removed from the market.

An AI provider that does not comply with the prohibition of certain AI practices can be fined up to 6% of its yearly global turnover.

What are the next steps in terms of implementation

The European Commission’s proposal for the regulation is to be adopted by the European Parliament and the Member States under the ordinary legislative procedure.

The draft regulation reads that it shall enter into force 20 days after its official publication and shall apply 2 years after its entry into force, just the same way the GDPR did.

Why it matters for Ukraine

Ukraine has adopted its artificial intelligence development concept, but the approach to AI unveiled by the EU makes it clear that the national concept is not aligned with the course the EU is taking.

Given Ukraine’s ambitions of integration into the EU and its desire for even deeper integration into the Digital Single Market, this may well be exactly the moment to review the national concept. All the more so because the action plan for the national concept has not been approved yet, so there is enough time to do it without rushing. However quickly the EU adopts the regulation on the legal framework for AI, and even though it enters into force almost immediately, it would be another 2 years before it applies.

Artem Taranowski

Years in the profession allow him to focus on data and artificial intelligence at the intersection of law, technology and science, from a real-life point of view.