Daily News


EU approves landmark Artificial Intelligence Act

14 March 2024

The European Parliament approved its landmark Artificial Intelligence Act introducing obligations for AI based on its potential risks and level of impact.

The regulation aims to “protect fundamental rights, democracy, the rule of law and environmental sustainability from high-risk AI, while boosting innovation and establishing Europe as a leader in the field.”

Staffing firms and other workforce intermediaries should note that the agreement categorises AI systems used in recruitment as ‘high-risk’. It describes high-risk systems as posing significant potential harm to health, safety, fundamental rights, the environment, democracy and the rule of law.

Law firm Pinsent Masons notes that the legislation lists example uses in an employment context, including placing targeted job advertisements, analysing and filtering job applications, and evaluating candidates. AI systems intended to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, the allocation of tasks based on individual behaviour or personal traits or characteristics, or the monitoring and evaluation of the performance and behaviour of persons in such relationships will also potentially be high-risk AI systems.

Pinsent Masons goes on to say that, as a high-risk AI system, recruitment technology will need to comply with certain requirements – including around risk management, data quality, transparency, human oversight and accuracy – while the businesses providing or deploying that technology will face obligations around registration, quality management, monitoring, record-keeping, and incident reporting. Additional duties will fall on importers and distributors of high-risk AI systems – and on other businesses that supply systems, tools, services, components, or processes that providers incorporate into their high-risk AI systems, such as to facilitate the training, testing and development of the AI model.

The landmark regulation was agreed in negotiations with member states in December 2023 and was endorsed by members of European Parliament with 523 votes in favour, 46 against and 49 abstentions.

Internal Market Committee co-rapporteur Brando Benifei said in a press release, “We finally have the world’s first binding law on artificial intelligence, to reduce risks, create opportunities, combat discrimination, and bring transparency. Thanks to Parliament, unacceptable AI practices will be banned in Europe and the rights of workers and citizens will be protected. The AI Office will now be set up to support companies to start complying with the rules before they enter into force. We ensured that human beings and European values are at the very centre of AI’s development”.

Benifei added, “Today is again an historic day on our long path towards regulation of AI.”

The agreement outlines examples of high-risk AI use. These include critical infrastructure, education and vocational training, employment, essential private and public services (e.g. healthcare, banking), certain systems in law enforcement, migration and border management, justice and democratic processes (e.g. influencing elections).

On obligations for high-risk systems, the agreement states that such systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.

The new rules also ban certain AI applications that threaten citizens’ rights, including biometric categorisation systems based on sensitive characteristics and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases. Emotion recognition in the workplace and schools, social scoring, predictive policing (when it is based solely on profiling a person or assessing their characteristics), and AI that manipulates human behaviour or exploits people’s vulnerabilities will also be forbidden.

The agreement also outlined transparency requirements. General-purpose AI (GPAI) systems, and the GPAI models they are based on, must meet certain transparency requirements, including compliance with EU copyright law and publishing detailed summaries of the content used for training.

The more powerful GPAI models that could pose systemic risks will face additional requirements, including performing model evaluations, assessing and mitigating systemic risks, and reporting on incidents. Additionally, artificial or manipulated images, audio or video content (“deepfakes”) need to be clearly labelled as such.

On law enforcement exemptions, the agreement states that the use of remote biometric identification (RBI) systems by law enforcement is prohibited in principle, except in exhaustively listed and narrowly defined situations. “Real-time” RBI can only be deployed if strict safeguards are met, e.g. its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorisation.

The European Parliament added that regulatory sandboxes and real-world testing will have to be established at the national level, and made accessible to SMEs and start-ups, to develop and train innovative AI before its placement on the market.

Civil Liberties Committee co-rapporteur Dragos Tudorache said in a press release, “We have linked the concept of artificial intelligence to the fundamental values that form the basis of our societies. However, much work lies ahead that goes beyond the AI Act itself. AI will push us to rethink the social contract at the heart of our democracies, our education models, labour markets, and the way we conduct warfare. The AI Act is a starting point for a new model of governance built around technology. We must now focus on putting this law into practice”.

According to France24, the EU had been subject to intense lobbying over the legislation. Watchdogs on Tuesday pointed to campaigning by French AI startup Mistral AI and Germany's Aleph Alpha as well as US-based tech giants like Google and Microsoft.

However, EU internal market commissioner Thierry Breton said that the EU "withstood the special interests and lobbyists calling to exclude large AI models from the regulation", adding, "The result is a balanced, risk-based and future-proof regulation."

The regulation is still subject to final checks and is expected to be adopted before the end of the legislature. The law also needs to be formally endorsed by the Council.