Two wars hung over this year’s get-together of scientists and politicians in Berlin. But there was still plenty of excitement, and trepidation, about new breakthroughs in AI, solar energy, and transatlantic science
EU universities have been all but left out of the UK’s AI safety summit. Arguably more important, though, is a ground-breaking new US executive order, which begins to demand outside scrutiny of the potentially dangerous capabilities of AI models
As part of measures regulating artificial intelligence, Washington will move to improve surveillance of mail-order DNA. Scientists have long warned the current global system is full of loopholes. Now, the US government says the risks are ‘potentially made worse by AI’
Co-funded by Horizon Europe and the US National Science Foundation, the project reflects a wider strategy of deepening transatlantic cooperation on key technologies, to reduce dependence on China. The value of the project was not disclosed
Since the ‘AI made in Europe’ strategy launched in February 2020, the US has pulled further ahead. The EU’s problem is a lack of scale and focus. The answer is to adopt CERN’s approach to running large, coordinated and highly ambitious projects
European and Japanese scientists will fine-tune their scientific models on each other’s machines, hopefully boosting performance and future-proofing code. It’s the latest push from Brussels to create stronger research links with ‘like-minded’ democracies
Politicians must not be allowed to harness fears around artificial intelligence to divide people, says Dragoș Tudorache MEP, who is leading Europe’s charge to regulate this powerful technology