A new study shows patients mistrust the use of artificial intelligence tools in healthcare - and points to a lack of transparency as the cause
AI may be slowly revolutionising healthcare, but patients place less trust in algorithm-based tools than in their doctors. The cautious approach to what is seen as a black-box technology may be understandable, but the authors of the study believe a little more transparency could significantly increase trust and promote uptake.
“Many AI-driven diagnostics can perform comparably to, or even better than, human specialists, as well as providing easier access to information for patients outside of clinical settings, via smart devices,” said Romain Cadario, assistant professor at Erasmus University, Rotterdam School of Management.
“However, the clarity of these benefits is often matched by the opacity with which they are delivered, which presents a major barrier to their adoption,” Cadario said.
The call for people to have greater understanding of the AI applications impacting their lives is nothing new. Numerous ethics committees around the world have called for more transparency in how algorithms work and what kind of data they are fed.
What the study shows is that greater openness would be beneficial for developers and companies alike, Cadario told Science|Business. “[The policy debate] echoes what we find,” he said. From an ethical perspective, it is clear people have the right to an explanation. “What we argue is that it’s not only beneficial [in terms of ethics], but it’s beneficial for the applications and AI developers, because it will drive adoption.”
No blind trust
AI is poised to make a significant contribution to healthcare. It is already being used to improve cancer diagnosis, as a tool in drug discovery, and to help predict and manage future pandemics. This potential drew Cadario and his colleagues Chiara Longoni and Carey Morewedge at Boston University to study it.
AI is much more useful in healthcare than in running Amazon services, Cadario said. But many see AI as a danger rather than an aid. AI applications are black boxes whose internal workings are often a mystery, even to the developers creating them. For example, an AI tool can diagnose skin cancer on the basis of images of cancerous moles and healthy skin it has been fed. Loaded with millions of examples, the algorithm learns to pick out cancerous moles from other skin lesions, giving a probability of malignancy. But how exactly it arrives at that conclusion is not evident to a patient at the point of diagnosis.
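To make the black-box point concrete, the hedged sketch below shows the kind of classifier described above. It is not the study's diagnostic: it trains a simple model on synthetic stand-in data rather than real lesion images, but it illustrates how such a tool returns only a probability of malignancy while its reasoning stays buried in thousands of learned parameters.

```python
# Minimal sketch (not the study's tool) of a classifier trained on labelled
# images that outputs a probability of malignancy. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for flattened 32x32 greyscale lesion images (1024 pixel values each).
images = rng.random((1000, 1024))
labels = rng.integers(0, 2, size=1000)  # 1 = malignant, 0 = benign (synthetic)

model = LogisticRegression(max_iter=1000).fit(images, labels)

# For a new image, the patient sees only the final probability...
new_image = rng.random((1, 1024))
p_malignant = model.predict_proba(new_image)[0, 1]
print(f"Estimated probability of malignancy: {p_malignant:.2f}")

# ...while the reasoning lives in the learned weights, opaque to a lay reader.
print(f"Number of learned parameters: {model.coef_.size + model.intercept_.size}")
```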
For Cadario, this does not mean transparency cannot be achieved. While algorithms are difficult to dissect, a simple explanation of the mechanics of an AI tool can boost patients’ trust.
In one experiment, the researchers ran two Google adverts for an AI-based skin cancer diagnostic. One explained how the algorithm worked, while the other did not. The former attracted significantly more traffic.
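The study's own figures are not reproduced here, but a small, hypothetical sketch of how such an A/B comparison between the two adverts can be checked for a significant difference, using invented click and impression counts, might look like this:

```python
# Hypothetical A/B evaluation: did the ad that explains the algorithm attract
# significantly more clicks than the one that did not? Counts are invented
# for illustration and are not taken from the study.
from scipy.stats import chi2_contingency

clicks_explained, impressions_explained = 230, 10_000  # ad explaining the algorithm
clicks_control, impressions_control = 160, 10_000      # ad without the explanation

table = [
    [clicks_explained, impressions_explained - clicks_explained],
    [clicks_control, impressions_control - clicks_control],
]

chi2, p_value, _, _ = chi2_contingency(table)
print(f"Click-through: {clicks_explained / impressions_explained:.2%} vs "
      f"{clicks_control / impressions_control:.2%}, p = {p_value:.4f}")
```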
EU to the rescue
The trust issue is not news to Brussels: earlier this year, the European Commission’s in-house science hub, the Joint Research Centre, published its own study into AI applications in healthcare.
It found that while skills, data and technological capacity play a role in AI adoption in health, trust is a key aspect too. To increase trust, the paper argued, the EU should boost transparency and deal with issues relating to data protection. This in turn would accelerate the adoption of AI tools in the health sector.
Meanwhile, the European Parliament heard about the importance of user trust back in December when Jelena Malinina, digital health policy officer at the European Consumer Organisation, a lobby group, made the case to MEPs that patients should have the right to explanations and transparency. “[When] we go to the doctor, we can always ask for our doctor to explain how the diagnosis was made. So the same must apply to AI,” Malinina said.
She also highlighted accountability, thorough assessment of AI-based tools before their roll-out, and ensuring bias and discrimination are avoided in the process as other key aspects of building trust.
The weight of this should not fall solely on the shoulders of consumers and doctors. With many surveys finding people do not know which side of the body the heart is on, expecting patients to understand how AI works might be too much to ask, Malinina suggested. Instead, she proposed that intermediaries who understand the technology could fill the gap, aiding both doctors and patients.
The issue of transparency and user trust is likely to be addressed in a future rulebook for AI. In a bid to lead the way, the Commission earlier this year proposed new rules for the technology, which are now being considered by member states and the European Parliament before legislation is enacted.