How the EU could improve its proposal to regulate the use of artificial intelligence
Artificial Intelligence (AI) has been compared to electricity: it is a general-purpose technology with applications in all domains of human activity. Electricity found uses that no one envisaged when the first electrical systems were designed, and life today would be profoundly different without it.
Ideally, the Act would have developed the two central ideas of the White Paper: creating legislation that stimulates innovation while guaranteeing trust. However, in its current form, the document has a few drawbacks and needs to mature to meet the expectations of the AI community in particular, and of society in general.
The main sections of the Act are concerned with prohibited practices, high-risk systems, transparency requirements, and governance. It is hard to argue against the need to regulate the use of algorithms in general, and of AI in particular. As the field develops and societies become more digital, the risks posed by AI-based systems grow rapidly. Quite rightly, the Act aims at regulating systems that can, and will, if left unchecked, compromise rights that we take for granted in democratic societies, such as the rights to privacy, safety, and security, and to unbiased judgments and fair assessments in many different situations. The extensive use of AI-based algorithms also raises the issues of unfair competition, business disruption, and growing skill inequalities within the workforce.
Important gaps in the Act
The Act addresses several of these issues. However, it could be improved to provide the credible guarantees that one would expect from such an important piece of legislation. The section on prohibited practices focuses mainly on restricting the use of biometrics, citizen-scoring software, and systems that, subliminally or otherwise, unduly influence vulnerable groups of people. It does not address, in any significant way, several other important issues, such as the social problems caused by the information bubbles that result from targeted content delivery, or the unfair use of data to compete in specific markets.
The section on transparency could, conceptually, have been used to deal with such matters, but it is restricted to a few limited cases, such as deepfakes and emotion recognition. From past experience, we know it is challenging to define a set of robust general principles that determine which AI solutions are acceptable and which are not. Listing the practices identified so far is therefore a way forward, although it has obvious limitations. Still, the advantage of an explicit list is that it can be easily recognised, modified, and extended.
The largest part of the Act is dedicated to “high-risk” systems, a definition that is both comprehensive and easy to circumvent. The list of high-risk AI systems includes programmes used for recruitment, credit assessment, migration control, predictive policing, and legal advice, among several others. Providers of high-risk systems are supposed to fulfil several requirements in terms of transparency and risk assessment, but the definition of high-risk remains vague, and the criteria used to compile the proposed list of systems are left mostly unspecified. No doubt, with some ingenuity, providers will be able to argue either that their systems do not fit the definition of high-risk or that they are subject only to the relatively mild obligation of self-assessment.
The INESC group has been investing heavily in research and development in areas related to the regulation, safety, security, and explainability of AI applications in critical fields. Our results show that self-regulation can only be one component of a more comprehensive framework, which should apply, with little leeway, to all the actors involved.
Creating an ecosystem of AI excellence
Focused on regulation, the Act, perhaps understandably, pays little attention to the first major objective stated in the White Paper: developing AI excellence in Europe. After an opening paragraph dedicated to the strategic importance of AI for the European economy, as well as for addressing climate change, the environment, health, and central sectors of society, only a very short section of the Act is dedicated to supporting innovation in this area. It is important, however, to keep in mind one of the central aims of this process: developing a European AI ecosystem of excellence.
Digital intelligence will almost always be cheaper, more efficient, and more effective than the natural kind in well-defined, specific activities. It is therefore no surprise that all major economic blocs, including the United States and China, are pushing forward their agendas on AI research, supported by ambitious financial frameworks.
Researchers in several units of the INESC group have been involved in the development of AI applications for many decades. As such, they have witnessed the growing relevance of research institutions and companies from the U.S. and China while Europe lagged behind. The Act should play a role in reversing this trend.
All in all, the draft Act now available for public discussion represents a significant first step, in that it addresses some important issues. However, it needs to be improved to become a key piece of legislation that will support the development of AI in Europe, while ensuring the continent remains in the vanguard of defending the rights of its citizens.
To learn more about the INESC institutes, visit the website of the INESC Brussels HUB.