Leaked legislative proposal shows bloc leaning towards bans on surveillance and social scoring applications, and a broad interpretation of what makes for a high-risk AI system
Brussels is poised to ban the use of artificial intelligence (AI) to track people and rank their behaviour, while proposing fines of up to 4% of a company's turnover for violations, as part of first-of-its-kind AI rules expected to be announced next week.
The proposed rules, which can still be amended, will introduce prior authorisation requirements for the use of AI in applications considered to be high risk. However, the regulation will not cover AI systems “exclusively used for the operation of weapons or other military purposes”.
The proposal envisages the creation of a vast new regulatory regime, with rules and roles laid out for AI providers, importers, distributors and users, and broad oversight including assessment boards, national supervisory bodies, and a newly created European AI Board.
The draft legislation, which runs to 81 pages, describes AI as “a very powerful family of computer programming techniques that can be deployed in many fields of human activity for desirable uses, as well as more critical and harmful ones.”
AI applications used in remote biometric identification systems, job recruitment, admission to educational institutions, creditworthiness assessment, and asylum and visa applications are considered high-risk. Data used in these systems should be free of bias, the proposal says.
These systems should be overseen by humans, with national bodies set up to assess certain high-risk systems and issue certificates for them. Other high-risk systems will be allowed to carry out self-assessments.
The rules would apply to providers of AI systems irrespective of whether they are established within or outside the EU.
The Commission argues that its overarching goal is to boost public trust in AI, via a system of compliance checks.
Supporters of regulation have long argued that proper human oversight is needed for a rapidly developing technology that presents new risks to individual privacy and livelihoods. Others warn that the new rules could stifle innovation, with lasting economic consequences.
Commission officials argue the rules won't be an innovation-killer. In fact, part of the regulation deals with measures to support AI development, pushing member states to establish "regulatory sandboxing schemes" in which start-ups and SMEs can test AI systems before bringing them to market.
Much attention will inevitably go to article four in the text, which lays out a list of prohibited AI systems, including those that “manipulate human behaviour, opinions or decisions”.
EU policymakers and the public are concerned about applications including government social scoring systems such as those under development in China. The Chinese government has controversially used AI tools to identify pro-democracy protesters in Hong Kong, and for racial profiling and control of Uighur Muslims.
Commercial applications of mass surveillance systems, and general purpose social scoring systems that could lead to discrimination, are two examples of AI use that will be banned, the text says. However, the draft carves out an exception: these practices would be allowed when carried out by public authorities “for the purpose of safeguarding public security and subject to appropriate safeguards”.
Some legal commentators on Twitter are already warning that the language used in article four is vague. Digital rights activists say the bans are too narrowly defined and may leave significant loopholes.
The EU proposal has been more than two years in the making. Earlier leaked drafts floated the idea, since dropped, of a three-to-five-year period in which the use of facial recognition technology in public places could be prohibited.
Silicon Valley wary
How the EU legislates on AI will matter a great deal to tech companies in and outside Europe.
Search giant Google has criticised measures in the Commission's AI white paper, published last year, which it says could harm the sector. The company has also issued its own guidance on the technology, arguing that although it comes with hazards, existing rules and self-regulation will be sufficient “in the vast majority of instances.”
In its response to the Commission's proposal, Microsoft similarly urged the EU to rely on existing laws and regulatory frameworks “as much as possible”. However, the US tech company added that developers should be “transparent about limitations and risks inherent in the use of any AI system. If this is not done voluntarily, it should be mandated by law, at least for high-risk use cases.”
US Representative Robin Kelly, speaking to legislators in the European Parliament in March, sounded a note of caution on EU legislation, warning the bloc not to take its own path on the technology without consulting its allies.
“There’s a real danger of over-prescriptive policies,” Kelly said, echoing the fears of big American tech companies such as Google and Microsoft, which have made large investments in new AI applications and are wary of the EU’s plans to regulate. Kelly called on EU officials to seek American input when drafting AI regulations.