Hey computer, am I okay?

18 Jun 2021 | News

One of the biggest challenges facing artificial intelligence: Figuring out when someone is struggling.

Intelligent digital assistants could help people cope with everything from loneliness and insomnia to frailty and dementia. But helping individuals look after themselves will be one of the toughest arenas for both developers and regulators of artificial intelligence (AI). Given the enormous variety of conditions that can affect people’s health and wellbeing, automated assistants will need to be highly adaptable and dynamic, while also being safe and trusted.

How to find a balance between these sometimes conflicting objectives was the main subject of debate in a Science|Business webinar entitled AI and the Individual, the latest in a series produced by the Science|Business Data Rules group.

While recognising the need to regulate AI, John Olwal, global head of digital ethics, risk & compliance at Novartis, highlighted the difficulty of effectively certifying AI tools that learn and change over time. “Maybe they have become better, maybe they have become worse,” he noted. “That is really where we need to have a conversation. How do we regulate that kind of a set-up?”

Having drafted legislation to govern the use of AI, the European Commission is wrestling with this challenge, among others. Yordanka Ivanova, legal and policy officer, DG Connect, European Commission, told the webinar that there need to be built-in constraints limiting how much AI tools used in “critical situations” can change after they are placed on the market. AI users also have to ensure “additional monitoring and human oversight”, she added, while developers need to continue to “monitor how their system behaves even if it's already placed on the market.”

A right to know

Other potential safeguards include requirements around transparency and interpretability. EU citizens will also have the right to know when they are interacting with AI systems, “so we know that this is not a real human” and “whether our emotions are being recognised” and categorised, Ivanova said, adding that it is important that users also understand the limitations of AI systems.

Novartis is taking a “human-centred approach” to AI to optimise its internal processes, support the development of new drugs and facilitate innovation. To help anticipate how people might engage with AI systems, Olwal highlighted the importance of having multi-disciplinary groups, including engineers, psychologists and philosophers, involved in the development of such tools.

This approach might help alleviate concerns about AI “black boxes” that appear to be completely opaque. Although he expects interpretability to be a mandated requirement for most AI tools involved in healthcare, Stefan Feuerriegel, assistant professor, chair of management information systems, ETH Zurich, noted we may grow to trust AI systems over time. “I am trusting other managers and decision makers in my organisation when they give me good advice. Maybe I'm not questioning how they come and arrive at this advice.”

For some AI use cases, an interpretability requirement could result in reduced performance. But “in medicine, interpretable models are often better at capturing the dynamics of the disease and then they also function better,” Feuerriegel added. “Sometimes it's actually not either-or, but it’s a win-win situation. So there's no one answer to it.”

Although it is generally regarded as a good thing, transparency can also cause problems. Some patients with cognitive disorders, such as dementia, may not be comfortable knowing they are being monitored by an AI system, which may need to spend months collecting data about a patient before it can begin to understand the needs of that individual. Telling someone suffering from psychosis “that they are being watched by a machine won't help them,” said Félix Pageau, geriatrician & research associate, Institute for Biomedical Ethics, University of Basel. “They don't share the same reality as we do.” There is a risk that being watched by an AI system could worsen their psychological symptoms, he noted.

In such cases, it may be necessary to inform a guardian or family member about the involvement of an AI tool, rather than the patient themselves. “You need the help of informal carers or family, people that were there before, to help you implement care in the right way for the person,” Pageau explained, noting that people surrounding the patient can help to adapt an AI system and adjust it over time. “I think leaving the patient with AI is not a good idea, especially with dementia,” he added.

Although AI could help vulnerable people to live more independent and fulfilling lives, developers of such systems face an ethical minefield. For example, should an elderly person be obliged to accept a monitoring system in their home? “Is that really autonomy, is that freedom or only security? Security for who, the family, the patient?” Pageau asked. “Do you need consent, full consent, rational and free consent, or do you only need assent? Can someone refuse such care if it's for their security, and who are the ones to decide?” He also warned that technologies can sometimes isolate people, as well as empower them.

Falling back on expert intermediaries

Of course, one way to mitigate these concerns is to ensure an expert clinician acts as an intermediary between the AI system and the patient. Indeed, research by Viktor Kaldo, professor of clinical psychology at the Karolinska Institutet, has found that AI tools can help clinicians predict whether a course of treatment will actually work. “Therapists are not good at predicting,” he said. “They are usually too optimistic. So in that case, we can say, we're already better… AI is not better at providing therapy,” but is clearly better at warning when the treatment might not work out for the patient.

Still, Kaldo believes developers of AI systems should be required to prove through clinical trials that their tools are better than alternatives, in the same way that pharmaceutical companies are required to show the efficacy of their medicines. If the trials are of sufficient duration, this approach could help assess the implications of any change in strategy by the AI tool as it learns over time.

Although patients should be aware that clinicians are using these tools, Kaldo stressed that the communication of treatment predictions needs to be handled with care. “If you present it like: well, you're going to fail whatever you do, that's a problem,” he explained. “But, of course, you don't do that.”

Up close and personal

Another major opportunity and challenge in this field is how to implement personalisation. One of the promises of AI systems is that they can learn the behaviour of individuals and then detect abnormalities that might signal something is wrong. For some people, such as those who suffer periodic disorientation and confusion, this could be very beneficial.

Whereas clinical treatments have often been determined by a single discrete piece of data, such as the results of a blood test, AI tools could analyse data from many different sources to help identify a more personalised treatment. Describing traditional health data as a photo of the patient’s condition, Feuerriegel said AI might be able to provide clinicians with a “photo album of the patient.”

But the subtleties of human behaviour mean there is a significant risk that the AI system will make mistakes. Moreover, in some cases, personalisation isn’t that important, Kaldo noted. When it comes to helping people sleep, for example, the “principles are almost always the same,” he explained. “Personalisation might be a bit overrated when you look for efficiency. You can do a lot of things before you start getting perfect personalisation in a way. But then the finish of it might be much better with the personalised user interface or interaction.”

For Kaldo, one of the big leaps forward in recent years has been AI systems’ ability to accurately understand speech. He noted that computers can possibly provide “a more structured and unbiased interpretation” of language than a therapist. “That’s probably going to enhance our predictions more, because it's relevant data and it's from another source,” he added. “When you combine it with predictors we already have, then I think you have a chance to increase the overall prediction and the accuracy in classifying patients.” However, Kaldo stressed it will be a long time before chatbots are capable of replacing therapists. 

Risks, rewards and regulation

In the wake of the pandemic, highly stretched healthcare systems are likely to try to automate many more processes to alleviate the pressure on clinical staff. How fast that happens could be determined by regulators.

One of the controversial aspects of the Commission’s draft Artificial Intelligence Act is that AI applications with implications for fundamental rights are treated in the same way as products that must be checked for safety before they are placed on the market. Ivanova of the Commission defended this approach. “If you train the AI system with unrepresentative data, this could lead to discrimination” and “safety problems because the system does not function properly,” she noted, adding that both AI-based vehicle safety features and recruitment systems, for example, need to be transparent, auditable, secure and accurate.

More broadly, the EU seems likely to serve as a regulatory trendsetter for the rest of the world. The EU “is leading in this area of, at least, trying to define the direction when it comes to trusted or ethical AI,” noted Olwal of Novartis. “There are discussions in other regions and the suspicion we have is that other regions will start coming up with something similar to what the European Union is trying to do.”