Scientists call for greater efforts to embed human values into emerging technologies
New technologies, such as AI, are developing rapidly and finding their way into every aspect of our lives.
To safeguard fundamental human values, such as privacy, justice, sustainability and wellbeing, experts believe the ethical and societal implications of emerging technologies need to be examined more proactively. “If we don’t have the human perspective on new technologies, we risk becoming blind and deploying the technology without knowing its possible implications,” warns Svend Brinkmann, professor of psychology at Aalborg University in Denmark.
As a member of the Danish Council on Ethics, which advises the Parliament, ministers, and public authorities on ethical issues and challenges, Brinkmann calls for more thorough ethical reflection to be built into technological developments. “As an ethical committee, we often feel pressure from politicians to provide fast answers, but ethical reflection is a slow process,” Brinkmann stresses. With some new developments, particularly in the field of AI, it’s becoming more and more challenging to provide timely advice. “The technology is moving so quickly that we don’t have time to think about it.”
Shengnan Han, professor of computer and systems sciences at Stockholm University, also argues for a more proactive approach and for incorporating “a dialogue at the very beginning” of technological development. “The aim of computer scientists has always been to design something for the good of society, but, in reality, this hasn’t always been accomplished,” she says, pointing to negative side effects of computer systems, such as cybercrime and widespread misinformation.
Start from an SSAH standpoint
As technology and society interact, new problems emerge, she adds. “Afterwards, we try to address these problems. This has happened many times with technology, and this is a very passive pattern. We need to be more proactive and train scientists in considering societal issues beforehand.” The involvement of social sciences, arts and humanities (SSAH) is crucial in this regard. “SSAH should set the goals and the rest follows,” says Han.
Wendy Hui Kyong Chun, the founding director of the Digital Democracies Institute at Simon Fraser University in Canada, agrees that ethical, legal, and social implications of emerging technologies are not sufficiently considered before these technologies are released. However, these implications cannot be determined in advance, she notes. “They depend on how the technology is used. Which is why it must be an ongoing conversation.”
The conversation should include both SSAH and STEM disciplines, urges Chun. “It’s not about adding human values to technology, but about questioning the values that are already there.” To this end, Chun suggests establishing an international organisation that provides independent input and expert advice on emerging technologies. “I envision an international body like IPCC, but focused on AI and other new technologies. For example, the International Panel on the Information Environment (IPIE) is a body that could serve in that capacity.”
Her suggestion echoes a 2022 study by the European Parliament’s Panel for the Future of Science and Technology (STOA), which proposed establishing “an EU observatory of converging digital technologies” tasked with monitoring relevant developments and carrying out interdisciplinary research into ethical, legal and social issues.
More than human – how far is too far?
After all, humans and technology are deeply intertwined, points out Brinkmann of Aalborg University. “Some say that we have always lived in a ‘more than human’ world. It’s in our nature to expand our skills and capabilities through tools and technologies. It is human to be more than human, but there needs to be a discussion on what the limits are.”
If these questions aren’t carefully considered, human values are at risk of being lost, he warns. “I’m not afraid that machines will gain consciousness and become human. I’m afraid that we will become less human. I’m concerned that humans will stop thinking now that we have thinking machines.”
Everyone should be mindful of these issues, adds Han of Stockholm University. “New technologies like AI learn directly from us. We are their data source. If we all behave well, if we don’t show bias or violence, AI will never learn this type of behaviour. That’s why we all need to be aware of the problem. We cannot depend on computer scientists to fix it.”
Echoing this view, Chun of Simon Fraser University calls on all stakeholders to take responsibility. If there are issues with AI or other new technologies, everyone should be involved in fixing them, she says. “It should not be only up to policymakers or the tech companies. It should be up to developers, legislators, users, and everybody else.”
In line with the recent Artificial Intelligence Act, the world’s first comprehensive AI law, the EU appears committed to creating responsible AI that serves societal needs. What those needs are, however, must remain open to discussion, stress the experts. “Together, we need to decide what type of world we want to live in,” says Brinkmann. “Do we want faster and more efficient processes run by machines, or do we want humans who can be held accountable? As scientists, we can’t make these decisions alone. It needs to be a public discussion.”
Human Values and Grand Challenges conference
The theme of preserving human values within new technologies will be thoroughly discussed at the Human Values and Grand Challenges conference, hosted by Aalborg University on December 1-2, 2025, in Copenhagen as part of Denmark's Presidency of the Council of the European Union.
The public conference aims to have a direct impact on the European Research Area (ERA) by promoting the active inclusion of human values and relevant SSAH disciplines, and it will put forward specific recommendations on how to achieve more human-centric research and innovation in Europe.
Registration for the conference is available here.