The European Parliament has approved the Artificial Intelligence Act, but worries are growing about the burden of compliance and how, combined with a raft of other digital legislation, companies now face a 'regulatory spaghetti bowl' to untangle.
A month ahead of schedule, MEPs adopted the AI Act with a resounding majority, voting through the world's first law of its kind to regulate artificial intelligence by 523 votes to 46, with 49 abstentions.
The regulation aims to harmonise rules on AI systems across the 27 member states, protecting fundamental rights and EU values from the risks of the technology, while also boosting innovation and establishing Europe as a front runner in the field alongside the US and China.
"We finally have the world's first binding law on artificial intelligence to reduce risks, create opportunities, combat discrimination, and bring transparency," said Brando Benifei, internal market committee co-rapporteur on the AI Act.
The European Commission proposed the legislation in April 2021, and the act was approved on Wednesday after the European Parliament reached an agreement with the Council in December 2023.
The legislation plays a crucial role in research, as it includes various initiatives to aid AI start-ups and foster innovation. These include the creation of 'AI factories' equipped with supercomputers that are freely accessible to start-ups and SMEs.
However, the law has sparked controversy and opposing reactions, prompting calls to make compliance with the AI Act easier by avoiding unnecessary bureaucracy, clarifying legal uncertainties and providing more funding for AI research.
Cecilia Bonefeld-Dahl, director general of industry association Digital Europe, expressed her concerns in a reaction published after the vote.
"The AI Act, if implemented smoothly, can be a positive force for AI uptake and innovation in Europe," Bonefeld-Dahl told Science|Business. But she said, "Being so horizontal, the AI Act touches upon so many sectors and their existing legislation (like medical devices, machinery or toy safety). This is on top of the unprecedented number of digital laws we've seen this term, such as the Cyber Resilience Act and the Data Act. It's like a regulatory spaghetti bowl and a lot to digest - the next Commission will have to focus on untangling it.”
Axel Voss, MEP and coordinator of the European People's Party in the committee on legal affairs, voted in favour of the legislation but expressed doubts about the capacity of the product safety approach to regulate an evolving technology. "Our AI developers will often not know how to comply with the AI Act and who to turn to if they face problems," he said.
Those issues, together with the compliance burden of a binding regulation, could push start-ups and SMEs outside Europe to jurisdictions with fewer rules, pointing to the need for broader support measures for companies and more funding.
In January, the Commission launched an AI innovation package to support European start-ups and SMEs that addresses some of the concerns about the AI Act. In addition to access to supercomputers, this includes the formation of an AI Office within the Commission to support governance bodies and companies in complying with the rules for general purpose AI models.
The package also earmarks an additional €4 billion in combined public and private investment by 2027, allocated via Horizon Europe and the Digital Europe programme specifically for generative AI.
Stef van Grieken, co-founder and chief executive of the AI protein design company Cradle, previously told Science|Business of concerns that European funding does not match that provided by governments in leading AI countries, including the US, China and the UK.