The EU Artificial Intelligence Act is set to cover many more companies than first expected, loading them with a regulatory burden that could deter investors
Concerns are growing that the EU’s Artificial Intelligence (AI) Act will have a much greater impact on start-ups than expected, imposing additional responsibilities and costs on a wide range of companies.
One knock-on effect may be to make them less attractive to investors, according to a survey of 15 venture capitalist firms presented to the European AI Forum in December. This shows there is pessimism about the Act’s likely impact, with 73% of respondents expecting it to reduce or significantly reduce the competitiveness of European AI start-ups.
The replies go on to suggest that this view will influence future investment decisions. “There are a couple of VCs who say they will invest more in AI start-ups from Europe, but the majority say that they will either invest in other, non-AI start-ups within Europe, or stay with AI but outside of Europe,” said Till Klein of AppliedAI, which carried out the survey. “So, we would see a shift in investments, potentially, with less investment available for AI start-ups within Europe.”
A number of VCs also said they would shy away from start-ups falling into the AI Act’s high-risk category. “The number one answer is that they will focus investments more on low-risk applications, so it is important for start-ups to know where their use case stands,” Klein told the forum.
These concerns are being raised late in the day, with the Act currently making its way through the legislative process. The European Council adopted a common position on the proposal in December, and the European Parliament is expected to follow suit this spring.
Adding further disquiet, a separate survey of 113 start-ups presented at the forum is at odds with the European Commission’s impact assessment for the AI Act. While that estimated 5 - 15% of systems would fall into the high-risk category, the survey found that 33% of companies are sure their systems would be high-risk, and another 15% are as yet unsure.
Being categorised as high-risk will involve a steep learning curve, with companies needing to come to terms with how they work with the authorities to demonstrate compliance.
This will be unknown territory for many tech start-ups, and a significant number responding to the survey said it would put a brake on their progress. “Fifty per cent said they expect to slow down, which is not surprising looking at the increase in complexity, both on the technical and organisational level,” Klein said. “Some are even considering relocating outside of the EU, and a few say they may stop working on AI because it is getting too hard.”
AppliedAI concedes that at 113 start-ups, the sample size is modest. But it also points out this is the sharp end of the European AI industry, and the source of much of its innovation. “Feedback from established SMEs as well as larger European companies indicates similar challenges,” says the survey report.
A minority of start-ups foresee the additional regulation having no effect, or even a benefit. “Some say there will be a positive effect, for example making their solution particularly explainable, which they can leverage as an opportunity,” Klein said.
The same goes for VCs that specialise in AI, who see the regulation imposing a necessary discipline on the sector.
“The EU AI Act is likely to influence our investment decisions as we aim to support founders in building responsible applications of AI, in line with European values and regulations,” Herman Kienhuis, managing partner at AI-focused venture capital fund Curiosity, told Science Business. “We want our portfolio companies to not only seek compliance with the EU AI Act, but also implement an AI ethics policy and make responsible AI part of their company DNA. The European AI Act provides a basic guideline for that.”
Falling into the high-risk category would not be a mark against start-ups seeking investment. “We’re not shying away from high-risk applications where it is even more important to ensure fairness and safety and where compliance with EU regulations could be a relevant unique selling point,” Kienhuis said.
Overall, Kienhuis thinks that the EU AI Act will be good for Europe’s ecosystem, setting a new international standard that might help European start-ups to thrive and compete globally. “We believe that creating rules for fair and safe use of AI is actually good for innovation and business, as it drives a more holistic product design, generates higher trust and adoption and creates a level playing field for companies to thrive in,” he said.
Another reason that the EU AI Act will have a wider impact than previously anticipated is its extension to general purpose AI, an idea introduced in the European Council’s December common position.
General purpose AI covers systems that handle tasks such as language processing, image and speech recognition, pattern detection, and question answering. Including these in the scope of the Act will draw in companies where AI is not front and centre in the business, but part of the machinery. This is a particular concern for financial technology companies which may use AI systems to carry out secondary tasks.
“This defies the logic of the risk-based approach of the AI Act, and the Commission’s own drafting,” Maria Staszkiewicz, president of the European Digital Finance Association, told the AI Forum. “This may cause European companies, and Europe as such, to be cut off from systems that use general purpose AI in high-risk areas, and will definitely result in a very differential market for various AI products. And we don’t know, for example, how the regulation of general purpose AI systems will affect open source solutions.”
She also expressed concern at the Council proposal that details of compliance assessment for general purpose AI should be worked out by the Commission after the Act enters into force. “This is a huge responsibility to put in the hands of the Commission, and the question is whether this should not be a decision for the Council and the European Parliament.”
The broad scope of the high-risk categories is also a concern for the fintech sector, in particular where the AI Act appears to overlap with existing financial regulations. One example is creditworthiness assessment, which is already covered by the Consumer Credit Directive and European Banking Authority guidelines on loan origination. The AI Act will mean that any systems using AI to carry out this task will have another layer of regulation, and another supervisory body to answer to.
“Not only will companies working in the financial sector have to check again whether their creditworthiness assessment tools comply with the new regulation, but they will most likely be dependent on yet another authority interpreting the rules,” said Staszkiewicz.
Lobbying to reverse these developments is likely to continue throughout the year. “Despite the train already having left the station, we should definitely still try to make the legislators understand the specificities of each sector and the burden that yet another act may bring, especially on small and medium sized enterprises,” Staszkiewicz said.
Elsewhere in the Ecosystem…
- Quantum computing start-up Oxford Ionics has raised £30 million in series A funding. The cash will support the company’s expansion, with recruitment planned in roles ranging from software developers and engineers to designers, scientists and back office staff. The company was started in 2019 by two Oxford University researchers working on trapped ion quantum computing. The technology stands out for its potential to build quantum processors in standard semiconductor foundries. The round was led by Oxford Science Enterprises and Braavos Investment Advisers.
- ExeVir Bio, a spin-out from Belgian life sciences research institute VIB, raised €25 million in venture debt from the European Investment Bank to advance development of its COVID-19 therapeutic, XVR012, and move it into clinical trials. ExeVir is also planning to expand its pipeline to other infectious diseases. The financing comes from the InnovFin Infectious Diseases Finance Facility, which is part of Horizon 2020.
- The European Patent Office (EPO) has been helping assess applications to the European Innovation Council’s Transition programme, which supports projects aiming to mature a novel technology and develop a business case to bring it to market. EPO experts were asked to provide a non-binding view on shortlisted proposals in the September call, in particular looking at their technological novelty, inventive merit and intellectual property strategies. The collaboration is seen as a pilot that could lead to a longer-term arrangement, allowing both organisations to stay on top of technology developments.