EU is ‘losing the narrative battle’ over AI Act, says UN adviser

05 Dec 2024 | News

European start-ups are being misled into thinking EU AI regulations are killing innovation, says former AI minister of Spain

Carme Artigas, co-chair of the United Nations advisory board on artificial intelligence. Eskinder Debebe / UN Photo

European companies believe the “absolute lie” that the EU AI Act is killing innovation, Carme Artigas, co-chair of the United Nations advisory board on artificial intelligence, has warned.

“We are losing the battle of the narrative,” Artigas said last week at the Europe Startup Nations Alliance forum. 

As Spain’s AI minister, Artigas led negotiations on the AI Act in the EU Council. She denounced accusations that the act has resulted in the over-regulation of digital technologies and that it is pushing companies to set up abroad.

That narrative “is not innocent at all”, she said. It has been “promoted by the US - and our start-ups are buying that narrative.”

“What is the end game of this narrative? To disincentivise investment in Europe and make our start-ups cheaper to buy,” said Artigas.

In his report on EU competitiveness, Mario Draghi says the ambitions of the AI Act are “commendable”, but warns of overlaps and possible inconsistencies with the General Data Protection Regulation (GDPR). 

This creates a risk of “European companies being excluded from early AI innovations because of uncertainty of regulatory frameworks as well as higher burdens for EU researchers and innovators to develop homegrown AI”, the report says.

But for Artigas, the main objective of the legislation is “giving certainty to the citizens to enable massive adoption.” As things stand, she said, “the reality is nobody is using AI mainstream, no single important industry.”

Lucilla Sioli, head of the European Commission’s AI Office, set up to enforce the AI Act and support innovation, agreed companies require certainty that consumers will trust products and services using AI. “You need the regulation to create trust, and that trust will stimulate innovation,” she told the forum.

In 2023, only 8% of EU companies used AI technologies. Sioli wants this to rise to three quarters.

She claimed the AI Act, which entered into force on 1 August, is less complicated than it appears and mainly consists of self-assessment.

The AI Act is the world’s first binding law of its kind, regulating AI systems based on their risk. Most systems face no obligations, while those deemed high-risk must comply with a range of requirements including risk mitigation systems and high-quality data sets. Systems with an “unacceptable” level of risk, such as those which allow social scoring, are banned completely.

Even for high-risk applications, the requirements are not that onerous, Sioli said. “Mostly [companies] have to document what they are doing, which is what I think any normal, serious data scientist developing an artificial intelligence application in a high-risk space would actually do.”

The Commission needs “to really explain these facts, because otherwise the impression is the AI Act is another GDPR, and in reality, it affects only a really limited number of companies, and the implementation and the compliance required for the AI Act are not too complicated,” said Sioli.

Kernel of truth

Holger Hoos, a founder of the Confederation of Laboratories for Artificial Intelligence Research in Europe, agreed it is in the interests of US tech companies to promote a narrative that Europe is stifling innovation in AI.

“They know Europe has lots of talent, and every so often they buy into companies using this talent, Mistral being the best example,” he told Science|Business.

Nevertheless, there is a “kernel of truth” to this narrative. “We’re in the early phases of implementation of the AI Act, and I believe there are reasons to be concerned that there is a really negative impact on certain parts of the AI ecosystem,” Hoos said.

One way to reassure European companies that there is no competitive disadvantage would be to deliver on President Ursula von der Leyen’s promise to set up a European AI Research Council to pool resources on the model of CERN, thus placing responsibility for driving AI development outside of the Commission.

“The Commission has had six years to implement an AI strategy and all we have to show for it is an AI Act that’s not even properly implemented yet,” said Hoos. 

The European AI strategy should put more focus on boosting investment, said Clark Parsons, CEO of the European Startup Network. “Why Europe fixed an unregulated AI market instead of fixing its innovation financing ecosystem first is a question for historians. It might end up costing us 20 years of future AI industry marginalisation,” he said.

Parsons believes regulations themselves are not to blame for European AI companies leaving London and Paris “almost weekly” to go to San Francisco.

“It’s the combination of access to way more capital, easier access to early-adopting customers, the benefits of being in the top cluster of talent, and many more prospects for easy initial public offerings or trade sales,” he said.

It remains to be seen whether the regulations have a negative impact on innovation, he said. “The AI Act tried to take a light touch and only focus on risky applications. But compared to ‘no touch’ non-regulation in the US, light touch regulation is still regulation in the eyes of the tech community,” Parsons said.
