AI Act agreement gets mixed reaction from European tech

12 Dec 2023 | News

Despite last-minute lobbying, the legislation will impose extra checks on cutting-edge general purpose AI that is owned by a handful of US and Chinese companies. However, there’s no right for researchers to dig into these models.

The press conference following the EU's trilogue discussions on the AI Act. Thierry Breton, European internal market commissioner (right), looks across at Carme Artigas Brugal, Spanish state secretary for digitalisation and AI, and Brando Benifei, European Parliament co-rapporteur for the AI Act. Photo: European Union

European technology firms have given a mixed reaction to a crucial agreement on the EU’s AI Act, reached after a marathon series of all-night trilogue negotiations between MEPs, member states and the Commission that concluded late in the evening of 8 December.

Despite last-minute French and German lobbying, the Act will impose special safety regulations on ‘high end’ general purpose AI systems (GPAIs) that could pose a “systemic risk”, such as OpenAI’s GPT-4. The systems are so called because they can be used for a variety of different tasks.

This handful of ultra-powerful systems, defined initially by the amount of computing power it takes to train them, will be subject to extra testing to scope out dangerous capabilities and will, for example, have to report serious safety incidents. A scientific panel will update the criteria for what counts as a systemic risk, so that the rules keep pace with the fast-moving field of AI.

For other, less powerful systems, the amount of regulation depends on how risky they are judged to be. “Minimal risk” AI systems, like an email spam blocker, will get a “free pass” from regulation, according to the Commission’s explanation of the agreement.

High risk systems, for example AI involved in critical infrastructure, recruitment processes, or border control, will be subject to a much broader range of regulations, like requirements for human oversight, quality control of datasets, and risk-mitigation systems.

In addition to this tiered system, some applications are outright banned, like emotion recognition in the workplace or educational institutions, or biometric categorisation systems that sort people on the basis of ‘sensitive’ characteristics like race or sexual orientation.

On the whole, European digital companies are happy with this tiered risk-based approach, and that general purpose AI models, currently in the hands of US tech giants, are not going to be exempted. They had worried an exemption would leave the entire regulatory burden on downstream European companies that use general purpose AI models to build new applications.

“We welcome the regulation of applications on the basis of risk. We also welcome the regulation of foundation models that dominate the market, allowing digital SMEs to innovate and compete on a more fair basis,” said Oliver Grün, president of the European Digital SME Alliance.

But with the full text of the agreement yet to be officially released, there’s still no clarity on exactly which systems will count as high-risk, warned Victoria de Posson, secretary general of the European Tech Alliance, a lobbying group for established European tech firms.

“It still remains unclear what will be in the high-risk category,” she told Science|Business. For example, it was still fuzzy whether recommendation systems would be included, she added.

There’s also a fear that compliance tasks for high-risk systems, like impact assessments, will fall to developers, making European tech companies less attractive places to work.

“It’s not the most sexy of tasks,” she said. “It’s actually very difficult to keep developers and stay attractive.”

“The new requirements – on top of other sweeping new laws like the Data Act – will take a lot of resources for companies to comply with, resources that will be spent on lawyers instead of hiring AI engineers,” said Cecilia Bonefeld-Dahl, director general of DigitalEurope, a group of large companies with a presence in Europe.

Testing and scrutiny

Germany and France had lobbied to exempt general purpose AI systems from the Act, worried that regulation would slow down home-grown AI companies like Paris-based Mistral AI that hope to develop such systems, and stop them catching up with US rivals.

But including general purpose systems – also referred to as foundation models – in the regulation could actually lessen the regulatory burden on European firms, said Mark Brakel, director of policy at the Future of Life Institute, a think tank concerned with the potentially extreme risks of AI systems.

“Most companies will not be providers but deployers” of general purpose AI systems, he pointed out. Demanding greater testing and scrutiny of these systems means that European companies can use them to innovate with more security. “It’s not right to say this is purely adding compliance burden,” Brakel said.

The Act also includes provisions for national authorities to create regulatory sandboxes, enabling European companies to test out AI systems in real world conditions before putting them on the market.

These measures “should have a positive effect on SMEs by reducing the regulatory burden and providing some market access,” said David Marti, AI programme manager at the Swiss think tank Pour Demain.

“The US tech giants which develop GPAIs are expected to demonstrate how they deal with the systemic risk their models may pose. Nonetheless, the compliance costs for these model developers are expected to remain negligible,” he said.

Although pleased the Act imposes some restraints on the most powerful systems, Brakel said some checks and balances are missing. For example, there’s no right for academic researchers to run tests on leading-edge systems to uncover dangerous capabilities or biases. There’s also no requirement for independent auditors to oversee the internal testing of the most powerful models, he said.

Outright bans

Despite the bleary-eyed triumph of negotiators last Friday night, the agreement on the Act is still only provisional, and needs to be formally approved by the European Parliament and the Council.

Even when finally signed off, it will then take two years to come into force, with a few exceptions: the outright bans on uses like emotion recognition will start after six months, and the regulations on general purpose AI systems after 12 months.

Still, the agreement gives the EU back the initiative on AI governance after recent moves by the US and UK to place some constraints on the technology, albeit far short of actual legislation.

“The EU's AI Act is the first-ever comprehensive legal framework on artificial intelligence worldwide. So, this is a historic moment. The AI Act transposes European values to a new era,” said Commission president Ursula von der Leyen after the agreement was reached.
