Officials want to rein in applications such as live facial scanning, but pledge to promote AI with ‘sandboxes’ that allow companies to have early discussions with regulators on acceptable uses
Brussels will rein in the use of "unacceptable" artificial intelligence (AI) that tracks people and rates their behaviour, while proposing fines up to 6% of a company's turnover for violations, as part of first-of-its-kind AI rules announced on Wednesday.
The Commission would ban "AI systems considered a clear threat to the safety, livelihoods and rights of people," according to a briefing on the proposed regulations.
The EU rules, which face a lengthy approval process, will introduce prior authorisation requirements for the use of AI in areas considered to be high risk, such as national infrastructure, education, employment, finance, and law enforcement. The set of proposals has prompted concerns from the sector that they could stifle innovation.
Officials hope the early introduction of the first major AI rulebook will set a global standard for AI regulation. "With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted," said the EU’s digital chief, Margrethe Vestager.
Controversial "remote biometric identification" systems, such as the use of facial recognition by police (though not a fingerprint used to unlock a phone or a face scan at passport control), would be "subject to strict requirements", the Commission says.
EU policymakers will also move to put limits on social scoring systems such as those under development in China, where the government has controversially used AI tools to identify pro-democracy protesters in Hong Kong, and for racial profiling and control of Uighur Muslims.
Violations of the EU rules could result in fines of up to €30 million or, for companies, up to 6% of their global annual revenue, whichever is higher, although authorities would first ask providers to fix their AI products or remove them from the market.
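For scale, that penalty ceiling is simply the larger of two quantities. A minimal sketch of the calculation, assuming the €30 million flat cap described above (the function name and example revenue figure are our own, purely illustrative):

```python
def max_fine_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound on a fine under the proposal: the higher of a
    EUR 30 million flat cap and 6% of global annual revenue.
    Illustrative helper; the name and interface are our own."""
    return max(30_000_000, 0.06 * global_annual_revenue_eur)

# A hypothetical company with EUR 2 billion in annual revenue:
# 6% (EUR 120 million) exceeds the flat cap, so the higher figure applies.
print(max_fine_eur(2_000_000_000))  # 120000000.0
```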
Vestager said use of the surveillance systems would be allowed temporarily in certain circumstances, such as when police need to find a missing child in a city.
But digital rights groups are concerned that allowing such exceptions could open the way to widespread use of the technology in future.
"Biometric and mass surveillance technology in our public spaces undermines our freedoms and threatens our open societies," said Patrick Breyer, a green member of the European Parliament. "We cannot allow the discrimination of certain groups of people and the false incrimination of countless individuals by these technologies."
Other activists said it was a shame that the new rules did not cover AI weapons development. “Permitting machines to take human life is unconscionable and why the Campaign to Stop Killer Robots is working for a new international ban treaty to retain meaningful human control over the use of force,” said Mary Wareham, arms advocacy director at Human Rights Watch.
Innovation-killer?
The sector is worried that the new rules will put up barriers to AI development.
“After reading this regulation, it is still an open question whether future start-up founders in ‘high risk’ areas will decide to launch their business in Europe,” said Cecilia Bonefeld-Dahl, director general of the Digital Europe industry body and a member of the Commission’s high-level expert group on AI.
Recalling the “bumpy implementation” of the EU’s strict data protection rules, Bonefeld-Dahl said smaller companies would need guidance, financial support and “simple and streamlined processes” to be able to navigate these requirements. “We need to nurture smaller companies through effective ‘sandboxing’, not bury them in new rules,” she said.
Other industry groups, including Dot Europe, whose members include Airbnb, Apple, Facebook, Google, Microsoft and others, said they are still formulating a response to the proposed regulation.
Commission officials argue the rules won’t be an innovation-killer, but rather will encourage the industry’s growth by raising trust in AI and providing legal clarity for companies.
Part of the regulation deals with measures to support AI development, pushing member states to establish "regulatory sandboxing schemes", where start-ups and SMEs can test AI systems before bringing them to market.
In these sandboxes, companies can interact directly with regulators earlier in the development cycle, the Commission says. This ensures that companies are not heading down the wrong path and that regulators learn throughout the process.
“We’re not regulating a technology; we’re regulating use cases that may be problematic,” a senior EU official said. “Take university enrolment. When you get admitted to university, will someone check that the AI that enrolled you has got it right? This is what we want. Also, if you’re hired or fired by AI, it needs to be managed carefully,” the official said.
The European Parliament and 27 member states will now consider the proposals, a process that could take several years. If an AI law is eventually passed, the rules would apply to providers of AI systems, irrespective of whether they are established within or outside the EU.
Sandbox experience in Norway
The proposal encourages member states to establish regulatory sandboxes under the supervision of public authorities.
The sandbox concept, which involves a collaborative approach between regulators and the companies they oversee, has become a popular tool for nurturing emerging technologies in several European countries, including the UK and France.
In Norway, four AI companies are currently going through a sandbox process with the authorities.
“Twenty-five companies applied for the first round of our sandbox; we picked four. We will work with them for three to six months,” said Kari Laumann, head of research, analysis and policy for the sandbox, which is being run by the Norwegian data protection authority.
Norway’s AI sandbox is not a physical location. “We go to them, and we talk with them about the grey areas of their tech, and help them with challenges. It’s very resource-intensive,” said Laumann.
Discussions between the start-ups and regulators in the sandbox process range from the theoretical to the practical.
One participating company, Secure Practice from Trondheim, is exploring whether it can legally profile other companies’ employees and then provide these workers with tailored cybersecurity training materials.
“It is maybe the most controversial project we have because profiling employees is not something we’d normally want to do,” Laumann said. “We discuss with the company how it might work, what safeguards are needed, and who should ultimately have access to employee data.”
Another participant, a company called Age Labs, collects blood samples to predict an individual’s likelihood of disease. “We’re exploring with this group how they can use anonymised data in order for their algorithm to learn and improve,” said Laumann.
A third project is developing online education resources, and its creators are similarly figuring out what data they should and shouldn’t feed into their algorithm.
“In all these cases, we are looking at transparency, fairness, responsibility, and whether the algorithms discriminate against anyone, for example, against people who don’t have Norwegian as a primary language,” said Laumann.
“I think it’s necessary to put some checks and balances on this. We want innovation, but we want it to be in line with our values,” she said.
“It’s important these rules are accessible and understandable for companies. Sandboxes are one way to make it easier to uncover real-life examples of how all this can be done,” Laumann added.
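As a rough illustration of the kind of discrimination check Laumann describes, one common approach is to compare outcome rates across groups. The sketch below is our own construction, not the Norwegian authority’s method; the group labels and sample data are hypothetical:

```python
from collections import defaultdict

def positive_rate_gap(records):
    """Given (group, outcome) pairs, return the spread between the highest
    and lowest positive-outcome rates across groups, plus the per-group rates.
    A large gap is a signal to investigate, not proof of discrimination."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit data: did the system give the user a positive outcome?
sample = [("norwegian", 1), ("norwegian", 1), ("norwegian", 0),
          ("non_norwegian", 1), ("non_norwegian", 0), ("non_norwegian", 0)]
gap, rates = positive_rate_gap(sample)
print(rates, f"gap={gap:.2f}")  # gap=0.33 here
```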