To the alarm of the mostly US-based tech industry, the EU AI Act gets a preliminary nod to control the emerging technology – and sets the stage for months of negotiations in Brussels and Washington
The world’s first law to regulate artificial intelligence moved closer to adoption, as two European Parliament committees approved a draft text with tough controls to protect privacy, restrict misinformation and make it clear to people when the technology is being used.
The law, first proposed by the European Commission in 2021, would classify AI systems by the risk they pose of harming people or infringing rights – and the Parliament committees further toughened the draft with restrictions on AI in biometric identification, facial recognition and routine police work.
The joint vote in the Internal Market and the Civil Liberties committees was 84 to 7, with 12 abstentions.
The law is “very likely the most important piece of legislation” in the current Parliament, said Dragos Tudorache of Romania, one of the MEPs leading work on it. “It’s the first legislation of this kind worldwide, which means that the EU can lead the way in making AI human-centric, trustworthy and safe.”
The law, strenuously opposed by the mostly American tech companies leading the AI field, will intensify trans-Atlantic conflicts over the technology, which has burst into global public debate over the past year with the advance of generative AI models such as ChatGPT. So far, at the urging of US companies, the Biden administration has shied away from proposing new laws of its own, focusing instead on voluntary cooperation from industry combined with tougher policing of existing laws against fraud and privacy invasion.
Attitudes elsewhere around the world vary widely, as governments struggle to come to grips with the technology’s implications. So far, the UK has been taking a relatively laissez-faire approach. But Italy’s data-protection agency announced a temporary ban on access to ChatGPT in the country at the beginning of April, subsequently restoring access at the end of the month.
The EU law has set off one of the most intense lobbying battles in Brussels history, with organisations on either side of the issue loudly condemning or praising the draft. Against it, the Washington-based Center for Data Innovation think tank estimates it will cost European businesses €10.9 billion a year, further weaken Europe’s own computing industry and slow the pace of innovation. But in Brussels, the Centre for European Policy Studies estimated annual costs ranging from €176 million to €725 million. The Commission itself puts the cost even lower, and in any case argues that the long-term harm of unregulated AI could be much greater than any short-term costs.
The Parliament committees’ vote opens the next act in the drama. A plenary vote is scheduled for Parliament’s 12 to 15 June session, after which months of negotiation between Parliament, the EU Council and Commission will start.
With that long endgame in view, the industry’s biggest lobby group in Brussels, Digital Europe, issued a deliberately measured statement, saying the text “strikes a good balance but we will continue to work on the finer details […]. Great efforts should also be made to align with international partners, for example at the upcoming Trade and Technology Council meeting with the US.”
Work on the EU law began in 2018, and the current draft would categorise AI applications by their risk of harm – regardless of the industry involved. Uses judged an unacceptable risk, such as “social scoring” to determine eligibility for government benefits, are flatly banned. Also banned is using AI to manipulate the behaviour of vulnerable people, such as children or people with disabilities. Activities perceived as less risky would face looser controls.
Parliament expanded the Commission’s list of banned or restricted activities to include the use of biometric identification in public spaces – and would require police to get a judge’s authorisation to use it. Using AI to categorise people by gender, race, ethnicity, citizenship, religion or political orientation would also be banned, as would predictive policing – using AI to guess who might commit, or has committed, a crime – and “emotion recognition” systems.
The law would require those who develop AI “foundation models” – the coding heart of AI systems – to register in an EU database and would oblige them to “guarantee robust protection of fundamental rights.” Generative foundation models, like ChatGPT, would have to disclose when content is AI-generated, prevent the model from producing illegal content, and publish summaries of the copyrighted data the developers used when training the system.
Some exemptions, however, are provided for research or open-source AI models. And the law promotes “regulatory sandboxes,” in which authorities allow companies to test a potentially risky product in controlled circumstances. A new EU AI Office would be set up to police the act, much as the data protection authorities today police the EU’s General Data Protection Regulation.