Daily News


EU agrees on landmark deal over rules for trustworthy AI

11 December 2023

Members of the European Parliament struck a political deal with the European Council on a bill for harmonised rules on artificial intelligence (AI).

On Friday, 9 December, Parliament and Council negotiators reached a provisional agreement on the Artificial Intelligence Act. The agreement categorises recruitment as a ‘high-risk’ AI system.

This regulation aims to ensure that fundamental rights, democracy, the rule of law and environmental sustainability are protected from high-risk AI, while boosting innovation and ensuring businesses can thrive and expand. The rules establish obligations for AI based on its potential risks and level of impact.

These new rules will be applied directly in the same way across all EU member states, based on a future-proof definition of AI. They follow a risk-based approach with the main idea of regulating AI based on its capacity to cause harm to society following the approach of: the higher the risk, the stricter the rules. Classifications include minimal risk, high-risk, unacceptable risk and specific transparency risk.

Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal-risk applications such as AI-enabled recommender systems or spam filters will face no new obligations, as these systems present only minimal or no risk to citizens' rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.

High-risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems.

Notably for recruiters, the rules highlight examples of high-risk AI systems, including systems used to determine access to educational institutions or to recruit people. Moreover, biometric identification, categorisation and emotion recognition systems are also considered high-risk.

For AI systems classified as high-risk (due to their significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law), clear obligations were agreed.

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. These include emotion recognition systems in the workplace and educational institutions. The co-legislators have agreed to prohibit biometric categorisation systems that use sensitive characteristics (e.g. political, religious, philosophical beliefs, sexual orientation, race). The ban also includes untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases; social scoring based on social behaviour or personal characteristics; as well as AI systems that manipulate human behaviour to circumvent their free will; and AI used to exploit the vulnerabilities of people (due to their age, disability, social or economic situation).

Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. 

John Nurthen, executive director global research at SIA commented: “Staffing firms and employers should take note that they are firmly in the cross-hairs of this new regulation and exercise extreme caution when using AI-enabled technology during any part of the recruiting process. You should ensure your vendors are taking proper steps as defined in the AI Act to mitigate risk. This includes having control measures in place, and ensuring training data sets are relevant, representative, free of errors and able to detect bias. Software needs to operate transparently with event logs and technical documentation available, while AI processes must always be overseen by natural persons”.

Companies that do not comply with the rules face fines ranging from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the infringement and the size of the company.
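As an illustration of how the tiered penalties scale, the sketch below computes a fine ceiling as the higher of a fixed amount or a share of global turnover. The two tiers and figures are taken from the amounts quoted above, but the "whichever is higher" rule and the tier names are assumptions for illustration, not a legal reference.

```python
# Illustrative sketch only: fine ceilings under the AI Act's tiered scheme
# as quoted in this article. The "whichever is higher" interpretation and
# the tier labels are assumptions, not the Act's exact wording.

FINE_TIERS = {
    # tier: (fixed amount in euros, share of global annual turnover)
    "prohibited_practice": (35_000_000, 0.07),    # most serious tier quoted
    "incorrect_information": (7_500_000, 0.015),  # least serious tier quoted
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Return the fine ceiling: the higher of the fixed amount or the
    turnover-based amount for the given infringement tier."""
    fixed, share = FINE_TIERS[tier]
    return max(fixed, share * global_turnover_eur)

# Example: a firm with €1 billion global turnover facing the top tier.
# 7% of €1bn is €70m, which exceeds the €35m fixed amount.
print(max_fine("prohibited_practice", 1_000_000_000))  # 70000000.0
```

For smaller firms the fixed amount dominates; for large multinationals the turnover-based percentage quickly becomes the binding figure.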

Ursula von der Leyen, President of the European Commission, said of the AI Act, “By guaranteeing the safety and fundamental rights of people and businesses, it will support the development, deployment and take-up of trustworthy AI in the EU. Our AI Act will make a substantial contribution to the development of global rules and principles for human-centric AI.”

The agreement also ensures that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems; the compromise aligns the definition with the approach proposed by the OECD.

To account for the wide range of tasks AI systems can accomplish and the quick expansion of its capabilities, it was agreed that general-purpose AI (GPAI) systems, and the GPAI models they are based on, will have to adhere to transparency requirements as initially proposed by Parliament. These include drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.

The political agreement is now subject to formal approval by the European Parliament and the Council and will enter into force 20 days after publication in the Official Journal. The AI Act would then become applicable two years after its entry into force, with some exceptions: prohibitions will apply after six months, while the rules on general-purpose AI will apply after 12 months.

To bridge the transitional period before the Regulation becomes generally applicable, the Commission will be launching an AI Pact. It will convene AI developers from Europe and around the world who commit on a voluntary basis to implement key obligations of the AI Act ahead of the legal deadlines.

Thierry Breton, EU Commissioner for Internal Market, said, “I welcome this historic deal.”

“The AI Act is much more than a rulebook — it's a launchpad for EU startups and researchers to lead the global race for trustworthy AI,” Breton said. “This Act is not an end in itself; it's the beginning of a new era in responsible and innovative AI development – fuelling growth and innovation for Europe.”

Věra Jourová, European Commission Vice-President for Values and Transparency, said, “AI is already truly transformative, and we, the Europeans, must have a legal way to protect ourselves from the most harmful impacts of AI. Our approach is risk-based and innovation-friendly. We simply want people to retain their rights and uphold fundamental rights in the digital age. We can do that not only by having laws, but also by cherishing tech developers and working with them, so they have a human-centric approach when they design and implement AI technology.”