The ethics of neurotechnology come under sharper scrutiny

09 Jan 2020 | News

Data privacy, liability – and the possibility of mind control – are among concerns experts raise, as OECD issues first international recommendations on the emerging technology

After gene editing and artificial intelligence, neurotechnology has become the next emerging technology to generate international concern about ethical risks.

Over the past few years, a growing number of experts have sounded Orwellian alarms that brain implants and monitors could one day be used to manipulate human behaviour or attitudes. In the first quasi-official recognition of that risk, on 11 December the 36 member states of the Organisation for Economic Co-operation and Development issued a formal recommendation that governments, companies and researchers worldwide pay greater attention to governance of the possible misuse of neurotechnology.

The technology “raises a range of unique ethical, legal and societal questions,” the Paris-based organisation’s governing council said in a statement. While it can be applied for good, to treat mental illness or understand the mind better, it can also have malign uses, such as invading privacy or controlling human behaviour. The statement spells out nine basic ethical principles to follow – such as safeguarding privacy and safety.

The OECD recommendation represents the first formally agreed international statement on the topic, though it is not legally binding on anyone. It is “a very broad document” about general principles rather than specific proposals for action, but it is important nonetheless because it prompts governments to think now about the potential consequences of this technology, said Dylan Roskams-Edris, open science specialist at the Montreal Neurological Institute-Hospital, McGill University, Montreal.

“A while ago you would have thought people putting their information on social media accounts wouldn’t have been such a big risk – then you see things like the Cambridge Analytica [election-influence] scandal,” Roskams-Edris said. “It’s unlikely that in two years everybody’s going to have a chip in their head, but there’s the possibility in 10 years that it’s going to be way more likely for wearable or implanted chips to have a significant impact on the way people live.”

For governments today, he said, “Having an eye on the future instead of being reactionary is a good thing.”

An emerging trend

The OECD statement follows heightened scrutiny of ethics in other emerging technologies, such as an agreement last year by the Group of 20 largest economies on a set of general ethical principles for the safe use of artificial intelligence. Whether in AI, gene editing or neurotech, “you see an uptick in the number of statements on the ethics of emerging technologies” worldwide, said David Winickoff, lead secretary for the OECD group that has worked on the topic. “I think the rate of public concern and interest is going up.”

Neurotechnology covers a broad sweep of research and products. Some applications are already in use clinically, such as wearable devices to monitor patients’ brain activity or implants to help people move disabled hands or legs. It is at the heart of several mega-research efforts – such as the European Commission’s big Human Brain Project and the US BRAIN initiative to map, model and understand how the brain works. Some companies have developed AI tools to analyse patients’ brain waves, help diagnose mental disorders and personalise antidepressant treatment.

Patenting is racing ahead, with inventors in the US leading on 7,775 neurotech patents filed from 2008 to 2016. Chinese inventors come second, with 3,226 filings. Europeans are lagging behind, with Germans responsible for 555 patents and French inventors for 239, according to an OECD report. By far the biggest corporate filers are US-based medical-device companies Boston Scientific and Medtronic.

So far, the applications have been benign. But experts worry about how these and as-yet undeveloped technologies could be used in future.

For instance, in theory neurotechnologies could one day be used to enhance human mental powers, change people’s personalities or alter how they perceive the world. Brain data could be used to categorise people by intelligence or temperament – so companies could target marketing individual-by-individual, or authoritarian governments could control citizens. Police could try to predict crimes and detain people in advance, or develop powerful lie detectors. In the process, innocent people could be charged, unfair biases amplified, or freedoms curtailed.

The oops! factor

Then there are the simple mistakes that might arise – or what Roskams-Edris calls “unintended consequences.” For instance, last year researchers at the University of Zurich studied nine Parkinson’s disease patients around the world who had received brain implants to control their tremors. The implants improved their symptoms but had an unexpected side-effect: though all had previously been good swimmers, they could no longer swim. At least one nearly drowned when he jumped into the water and suddenly discovered the problem.

Another worrisome case: last year, the Wall Street Journal reported that a primary school near Shanghai was putting electroencephalography headbands on children during classes to measure their brain activity. The headsets flash a red light when the students are paying attention and white when they are not. The stated aim is to help the students do better in class; the information is relayed to teachers and parents. But an ensuing international outcry over privacy concerns led Chinese authorities to ban the headsets.

Similar concerns have been raised by many groups of late. As far back as 2013, the Nuffield Council on Bioethics in the UK sounded one of the first warnings. A neurotech group at Columbia University in New York last year launched the NeuroRights Initiative, promoting a declaration of human rights in the face of brain science. Other researchers and some companies in Canada, the US and elsewhere have also proposed ways to prevent nightmare scenarios becoming real.

As with most discussions of emerging technologies, the proposed remedies range from laissez-faire self-regulation to international bans on some uses.

As the first internationally negotiated consensus on the topic, the OECD statement is intended to promote what it calls “responsible innovation”. Given its focus on economic development, the OECD’s approach is that the technology is important and potentially useful. But it seeks global discussion on what is, and is not, ethical behaviour.

As a result, the specific recommendations are fairly general. For instance, they urge “relevant actors” to “prevent neurotechnology innovation that seeks to affect freedom and self-determination, particularly where this would foster or exacerbate bias for discrimination or exclusion.” They also urge that safety be prioritised and personal brain data be safeguarded.

For the OECD, the next steps will include gathering more data and sharing information internationally about the technologies and the quandaries they raise. But there is not yet any agreement among the countries leading neurotech research about whether or when to translate any of this into hard regulation. Either way, further discussions are likely. At a baseline level, these technologies could affect our “understanding of what it is to be human,” said the OECD’s Winickoff.

“It doesn’t take a huge stretch of the imagination to think of a pretty bad future. It’s time for serious social deliberation on these technologies,” he said.