Experts want to see restrictions on systems they say are error-ridden, invasive and – in the wrong hands – authoritarian
The European Commission is considering a temporary ban on the use of facial recognition in public areas for up to five years.
According to an 18-page draft circulated last week, the ban, which would last between three and five years, would give the EU time so that "a sound methodology for assessing the impacts of this technology and possible risk management measures could be identified and developed".
There are grave concerns about this controversial technology, which uses surveillance cameras, computer vision, and predictive imaging to keep tabs on large groups of people.
Several state and local governments in the US have stopped law-enforcement officers from using facial-recognition databases. Trials of the technology in Europe have provoked public backlashes.
“I would be in favour of a ban – and not just a temporary one,” said Patrick Breyer, a German digital rights activist and a member of the European Parliament. “The technology creates a feeling of permanent invasion. It’s untargeted, so it affects everybody.”
The tech, if widely deployed, would have a perverse effect on population behaviour, Breyer argues. “What it will create is a pressure to act uniformly. You could be someone placing a bag somewhere or you could be standing still for a little too long. Such behaviour can be perfectly normal, but perhaps it won’t be in the future.”
Privacy activists argue that the tech is potentially authoritarian, because it captures images without consent.
“With one single image, authorities can find out everything about you. That seems terrifying. It’s the normalisation of mass surveillance,” says Diego Naranjo, head of policy at European Digital Rights, an association of civil and human rights organisations.
Those worried about the technology see its nightmare potential in China, an enthusiastic promoter of facial recognition.
The Chinese government has used the tool to identify pro-democracy protesters in Hong Kong, and for racial profiling and control of Uighur Muslims.
Face scans in China are used to pick out and fine jaywalkers, and citizens in Shanghai will soon have to verify their identity in pharmacies by scanning their faces.
“What the EU is doing is trying to prevent abuses [like these] before it’s too late,” said Veronica Barassi, senior lecturer in the Department of Media and Communications at Goldsmiths, University of London. “The downsides are phenomenal. It’s time to take a step back.”
Tech travels around Europe
But face scan software is already creeping into everyday life in Europe, operating with little oversight.
“Countries such as France, Sweden and the UK are deploying this technology without any sort of impact assessments,” says Naranjo.
So far, EU countries are handling the technology very differently. “The commission wants to get back in front of the national legislatures, which are all going their own way on this,” says Jack Vernon, a senior research analyst with the International Data Corporation, a research firm.
Vernon cites a case in Wales where judges ruled against a shopper who brought a legal challenge against police use of the technology.
“The police were doing a trial of the tech – a guy hid his face, he was arrested,” Vernon said. The man’s legal challenge, which ultimately failed, argued that the use of the tool breached his right to privacy.
Swedish officials took a different view in a similar case, when the Data Protection Authority halted a trial and fined the local authority responsible for using the tech on secondary school students to keep track of attendance.
Here, the data authority cited the EU’s general data protection regulation (GDPR), which classes facial images and other biometric information as being a special category of data, with added restrictions on its use.
Limited rollouts of the tech are being discussed in France and in one of the continent’s most privacy-minded countries, Germany.
“I’d be surprised if these projects were to be dropped. For that reason, I don’t think an EU ban is that likely,” Vernon argued.
Police forces in almost all EU countries already use face recognition tech or plan to introduce it – and none of them are being fully transparent about their use, says Nicolas Kayser-Bril, a data journalist with Algorithm Watch, a non-profit research and advocacy organisation.
“Two police forces, in Poland and Lithuania, even declared that whether or not they used face recognition was a secret,” he said.
Kayser-Bril questions whether a moratorium would make a difference. “It would require a vast amount of wishful thinking to imagine that a legal ban would lead to face recognition not being used by the police. Before thinking about a ban, lawmakers should take stock of current uses and enforce the provisions of the GDPR regarding the right to access personal data,” he said.
Police officials have argued that facial recognition makes the public safer, but Kayser-Bril disputes the claim.
“[Some] 100,000 passengers go through Paris Charles de Gaulle airport every day. Face recognition software with an error rate of 0.1 per cent would wrongly flag 100 of them daily, which would be worse than useless for security personnel because they would have to go through these false positives on top of their existing workload,” argues Kayser-Bril.
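Kayser-Bril’s arithmetic is easy to verify. A minimal sketch (the figures are from his quote; the function name is illustrative):

```python
def expected_false_positives(passengers: int, false_positive_rate: float) -> float:
    """Expected number of people wrongly flagged per day."""
    return passengers * false_positive_rate

# Figures from Kayser-Bril's Charles de Gaulle example:
# 100,000 passengers per day, 0.1 per cent error rate.
print(expected_false_positives(100_000, 0.001))  # → 100.0 wrongly flagged per day
```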
Falling behind the curve
Others however cite the benefits of face-scanning technology.
“Computer vision as applied to humans is increasingly seen as dystopian – but it’s also seen as part of the developing social contract between humans and animals,” said David Hunt, president of Cainthus, an artificial intelligence company based in Dublin.
Cainthus is developing “pixel pattern detecting” systems to track livestock and analyse their behaviour, enabling farmers to monitor the health, eating and drinking habits, and inter-cow behaviour of their herds.
Hunt argues that facial recognition tech, which is already embedded in apps and smartphones, can be “used for good or bad”.
“We’re on a long term trend of certain companies being able to track and follow you at all times. Google and Apple know everything about us through the phones in our pockets.
“If the EU puts a moratorium on facial recognition, it doesn’t stop the tech, it means Europe is behind the curve on it,” said Hunt.
Limiting the spread of the technology would inevitably harm European competition in AI, agreed Eline Chivot, senior policy analyst with the Centre for Data Innovation.
“This is at odds with the EU’s goal to lead in AI, and this means other countries like China will be making the rules for AI,” she argued.
Ban EU research?
Experts however raise concerns that face scan technology has a racial bias. If a system is trained primarily on white male faces, with fewer women and people of colour in the training data, it will be less accurate for those underrepresented groups.
Less accuracy means more misidentifications. An investigation by the UK civil liberties group Big Brother Watch found that the automated system used by the Metropolitan Police Service in London had a false-positive rate of 98 per cent, and that the police retained images of thousands of innocent citizens, for future searches.
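A false-positive rate of 98 per cent among matches is largely a base-rate effect: when almost nobody in a scanned crowd is actually on a watchlist, even a reasonably accurate system produces alerts that are overwhelmingly false. A hedged sketch with illustrative numbers (none of these figures are from the Big Brother Watch report):

```python
def share_of_matches_false(crowd: int, wanted: int,
                           true_positive_rate: float,
                           false_positive_rate: float) -> float:
    """Fraction of alerts that are false positives (base-rate calculation)."""
    true_alerts = wanted * true_positive_rate
    false_alerts = (crowd - wanted) * false_positive_rate
    return false_alerts / (true_alerts + false_alerts)

# Illustrative: 50,000 people scanned, 1 actually wanted,
# 90 per cent detection rate, 0.1 per cent false-positive rate.
print(round(share_of_matches_false(50_000, 1, 0.9, 0.001), 2))  # → 0.98
```

Even under these generous accuracy assumptions, roughly 98 per cent of alerts would be wrong, which is why a low raw error rate can still understate the operational problem.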
Given the errors, discrimination and privacy invasions associated with face scan systems, Breyer believes the Commission should halt all research into the systems.
The German MEP has filed a complaint with the European Court of First Instance over an EU-funded project called iBorder Ctrl, which received some €4.5 million from the Horizon 2020 research programme.
The project, which concluded last year, aimed to detect deception by immigrants through video recordings of their faces. Breyer says there is no sign that strong ethical protections were put in place before the research started.
“That project is unacceptable. How can you accurately tell if someone is lying? There’s a reason lie detectors are not admissible evidence in court. The EU should never have funded it,” he said. The Commission did not respond to a request for comment.
The project, which involved Manchester Metropolitan University, Leibniz University and others, aimed to develop technology that “quantifies the probability of deceit in interviews by analysing interviewees’ non-verbal micro expressions personalised to gender and language of the traveller.”
The results of these assessments were to be shared with border control staff, who would decide whether to permit entry to migrants. The tech was trialled in real operational scenarios in Hungary, Greece and Latvia.
The Commission, Breyer says, has refused him access to a legal assessment and an ethics report on the project.
“I want to know all about the research, the results, who they’ve been talking to,” he said. “The Commission tells me this information would harm the commercial information of the participants. But what way do we have of knowing if this tech will someday be exported to an unfriendly country?” Breyer asked.
A temporary EU ban on the technology, were it to happen, would likely include an exception for research – more of which, many argue, is badly needed for the fast-growing tech.
“Face recognition does have technical problems, especially when it comes to identifying non-white, non-male people, and funding in this area would be welcome,” says Kayser-Bril. “However, most of the issues raised are not technical but political and social. Research to explore these is very urgent.”