Time is running out to regulate neurotechnology

09 Dec 2021 | News

Headbands for scanning brain activity are already on sale to the public. But regulations for how such products should be used are lacking. Experts worry the industry could go down the same road as social media – eventually triggering a public ‘techlash’

Two years on from the world’s first official ethical recommendations for mind-reading technology, experts in the field worry that the industry is still an unregulated Wild West, with the risk that it will go down the same privacy-invading, manipulative path taken by social media.

In December 2019, the OECD issued its first recommendation on neurotechnology, warning that although the technology held “great promise” in treating disorders like dementia and paralysis, it also carried serious risks, including political or commercial manipulation.

Since then, there has been a rash of articles about the danger that brain-scanning headbands could be used to harvest sensitive mental data, or even directly manipulate our minds, pushing the issue into the public consciousness.

But apart from a law proposed in Chile to guarantee citizens so-called neurorights, precious little has been achieved in the past two years, with the private sector in particular sometimes still oblivious to the need for an ethical framework along the lines of the one proposed by the OECD.

“The danger is that these frameworks remain academic exercises,” said Diana Saville, co-founder of the brain science accelerator BrainMind, at an OECD-convened conference on 7 December called ‘Technology in and for Society’.

“We learned by speaking with members of our community, the groups who are translating science out of the lab – the scientists, the entrepreneurs, the investors – they’re unaware of the guidelines and standards, and they don’t understand how to use them, or why they need to use them in their day-to-day decision making,” she said.

The dystopian potential of technology that can track brain activity is obvious enough. It could be used to detect our mental state and target advertising at our most susceptible moments. Unscrupulous bosses could use headbands to monitor their workers’ attention. And at the most extreme, it could even directly plant images in our minds – something that has already been demonstrated in mice.

Nightmare scenario

Since the OECD launched its recommendation, the world’s biggest tech players have unveiled increasingly eye-popping possibilities. Elon Musk’s Neuralink shocked the world earlier this year with a video of a monkey appearing to play Pong using wireless implants in its brain.

The nightmare scenario that such machines could read our thoughts is still seen as “hypothetical” and has the “patina of the science fictional,” said Dylan Roskams-Edris, an open science specialist in neuroscience at McGill University.

But “for those of us who pay attention to the advances in brain-reading technologies and the misuse of data, it seems like an obvious problem,” he said. There has simply not been “sufficient urgency” in trying to head off ethical abuses, he added.

In August this year, the UN’s International Bioethics Committee issued a draft report that found there were “few regulations on neurotechnology outside of regulation on medical devices”, and warned of a plethora of risks, including “neurosurveillance” at school or work.

And in June, a group of neurotechnology experts raised the alarm that the private sector in particular lacked ethical oversight.

“Companies tend to sit in a blind spot between early-stage research ethics and post hoc regulatory responses,” they said in Nature Biotechnology.

They recommended a series of ways companies could develop neurotech with ethics in mind, complementing state regulation.

These include convening advisory boards that bring in social scientists and ethicists – an approach taken by Mindstrong, a US-based mental health company that, among other things, measures wellbeing through users’ smartphone use – and encouraging universities to make sure ideas are spun out of the lab in an ethical direction.

This hybrid approach is seen by some as the best shot at putting the industry on an ethical path. “We can’t wait for regulation, it won’t be enough. We really need socially enforced norms in the community,” said Saville.

But since this call to action in the summer, there hasn’t been any significant progress on either the corporate or the public side, said Nina Frahm, a research associate at the Technical University of Munich (TUM) and one of the authors of the Nature Biotechnology piece.

Techlash

This is in part a “deliberate” move by hands-off legislators who want to see neurotechnology grow into a profitable new market, she said. Innovation is an explicit aim of two big neuroscience projects on either side of the Atlantic, she pointed out: the US BRAIN Initiative and the EU’s Human Brain Project.

The risk, she believes, is that regulation will therefore only harden after a public “techlash” against neurotech. “Neurotech in this sense is no different from social media governance that encouraged innovation to develop relatively freely while potential ethical quandaries were left to the downstream future,” said Frahm.

To be fair, many in the private sector do want to take ethics seriously. “No one wants to become the next Theranos or Facebook,” said Sebastian Pfotenhauer, a professor of innovation research at TUM, and another of the authors.

The question is whether neurotechnology startups, in a hyper-competitive world where they are always after the next funding round, can afford to do anything that will constrain their growth.

Ethics is “definitely something people talk about,” said David Benrimoh, a psychiatry resident at McGill University and chief scientific officer of Aifred Health, which uses AI to personalise depression treatment. “But as you grow, and you have a certain amount of revenue you need to have, how much lip service do you pay it? Which is why regulation and transparency are essential.”

“Many neurotech companies I have interviewed over the last two years are asking for clearer regulatory pathways or at least regulatory guidance,” said Frahm. Yet they fear too many rules will stop their products from getting to market, she added.

Some are sceptical that ethical self-regulation by neurotech firms will ever seriously put the brakes on mind-reading devices, when there is so much lucrative personal data to harvest.

“I’ve yet to see companies take a very progressive and proactive approach to shaping this agenda,” said Philipp Kellmeyer, a neurology researcher at the University Medical Centre Freiburg. “I’m largely seeing reactive approaches, driven by societal pushback, often driven by outrageous scandals like Cambridge Analytica.”

For now, the most dystopian scenarios of mind-reading technology are still a little way off technologically. It is still not possible to glean specific details about someone’s thoughts from an EEG recording, though other signals, such as sleep and wake cycles or general states like stress, can be measured, said Benrimoh. In terms of mind-reading, “I don’t think it’s quite there yet,” he said.

And devices that are explicitly for medical use – stimulating the brain to ameliorate the symptoms of Parkinson’s disease, or to bypass a spinal cord injury, for example – will run into a thicket of existing medical regulation that should put at least some brakes on data harvesting, he predicts.

One slice or two?

But consumer headbands that monitor, but don’t try to change, brain activity are relatively unregulated and have already become a “fringe consumer market” with the potential to harvest brain activity data en masse, he said.

So far, these are mere “gizmos”, Benrimoh said – fun but far from essential gadgets, like fidget spinners.

But one day a company will produce a brain computer interface that provides as much convenience as the smartphone, he said. “Maybe you can think to your toaster, make me some toast,” he said. And that device will open the door to mass corporate surveillance of brain activity, he fears.

There are some signs that neurotechnology is beginning to creep its way up government agendas. The OECD’s recommendation on neurotechnology “hasn’t gotten as much attention” as its focus on artificial intelligence, “but we think is quite important as well,” said Tarun Chhabra, senior director for technology and national security at the US National Security Council.

Hervé Chneiweiss, a neuroscientist and a member of the UN’s International Bioethics Committee, believes the world needs an equivalent of the Intergovernmental Panel on Climate Change for neurotechnology. “We absolutely need an international trustworthy group of scientists and people who know the field to inform the basis of these policies,” he told the conference.

Meanwhile, Saville’s BrainMind accelerator is trying to organise a conference for 2023 to hammer out a private sector ethics charter based on the OECD’s recommendation.

But for now, the rules and norms of neurotechnology remain largely unwritten.

“That is what is at stake today,” said Gabriel Villafuerte, chief science officer of neurotech firm Actipulse. “We have the unique opportunity to prevent or control black swan events in neurotechnology,” he said – referring to devastating but unpredictable events that may be lurking in the coming decades.
