Research should be outside the scope of the AI Act, but MEPs have agreed that the EU legislation should impose a total ban on the implementation of AI systems for biometric surveillance, emotion recognition and predictive policing
Artificial intelligence (AI) systems developed for the sole purpose of scientific research and development should be exempted from the scope of the upcoming EU regulation on the rapidly developing technology, MEPs agreed this week.
The move comes as the EU scrambles to pass legislation limiting potential negative impacts of AI, after rapid advances in the power of the technology enabled new products, such as the ChatGPT chatbot that can ‘write’ novels, hack computer code and automate creative processes that were previously limited to humans.
If deployed without appropriate checks, these products could be used to spread misinformation or become embedded, but hidden from view, in decision-making processes. Leading AI experts, including the CEOs of ChatGPT developer OpenAI and of Google’s DeepMind AI division, recently warned that AI could lead to the extinction of humanity.
In the face of these concerns, MEPs agreed the EU legislation should impose a full ban on AI systems for biometric surveillance, emotion recognition and predictive policing. In addition, generative AI systems such as OpenAI’s ChatGPT or Google’s Bard must disclose whenever content is AI-generated.
“The placing on the market, putting into service or use of certain AI systems with the objective to or the effect of materially distorting human behaviour, whereby physical or psychological harms are likely to occur, should be forbidden,” the draft legal text says.
However, research on these topics should not be halted. “Research for legitimate purposes in relation to such AI systems should not be stifled by the prohibition, if such research does not amount to use of the AI system in human machine relations that exposes natural persons to harm,” the draft says.
MEPs want member states to invest more in AI research that is conducted jointly by AI developers, academics, experts in inequality and non-discrimination and other representatives of civil society.
“The AI Act will set the tone worldwide in the development and governance of artificial intelligence, ensuring that this technology, set to radically transform our societies through the massive benefits it can offer, evolves and is used in accordance with the European values of democracy, fundamental rights, and the rule of law,” said Dragoș Tudorache MEP, co-rapporteur on the file.
This week’s vote in the European Parliament brings the EU one step closer to finalising the legislation which was first proposed by the Commission in April 2021. Commission leaders hope a deal could be reached with the parliament and member states by the end of the year but say it may take another two or three years before the new rules are implemented across the 27 countries.
In March, a group of scientists and executives in the AI industry signed an open letter calling for a moratorium on big AI experiments, including ChatGPT, until experts are confident that the risks posed by the technology are manageable. The signatories said AI labs and independent experts should use a six-month pause to jointly develop shared safety protocols for the use of AI products. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter said.
In the meantime, the EU and US governments are trying to organise an international coalition of governments and companies to formulate, and sign up to, a voluntary code of conduct.