Washington needs researchers in Europe to join the US in trimming artificial intelligence collaboration with China; otherwise, any controls will be ineffective. It’s unclear whether academics in the EU would get on board with this agenda
Existing artificial intelligence research ties between China and democratic states are “unsustainable”, and western universities should consider banning Chinese Communist Party (CCP) members from AI programmes, according to one of the US’s most influential think tanks.
The Brookings Institution warns the US cannot restrict AI collaboration with China alone and needs European and other democratic states on board too. Yet some European AI experts are sceptical that trimming ties with China is the best way to avoid becoming dependent on Chinese technology.
“The conventional wisdom within parts of the technical community that scientific AI research should have no borders must be re-examined,” the report says. “Business as usual with China is unsustainable.” Not only could AI tools be used for surveillance and control within China, but the US could also lose its technological edge.
The report, ‘Can Democracies Cooperate with China on AI Research?’, is part of a growing clamour in the US to re-examine the open research policies of the last decade, which some believe have allowed the Chinese military to hoover up western knowledge. Last week US politicians created a new committee on ‘Strategic Competition Between the US and the Chinese Communist Party’, with technology being one of the issues at stake.
Recent German research ties to military-linked Chinese universities also came under fire last week in a report from another US think tank, the Center for Research Security and Integrity. It called oversight of collaborations with China “wholly inadequate.”
The Brookings report made a number of recommendations for AI researchers in democratic countries that would rein in, although not stop, collaboration.
Although the institution is generally favourable towards Chinese students studying AI in the US – because they often stay – universities should consider screening out “high risk students”, like members of the CCP.
With nearly 100 million party members in China, and nearly as many in its youth league, this could severely restrict universities’ freedom of intake.
“It's certainly something to consider,” said Cameron Kerry, a visiting fellow at Brookings and one of the report’s authors. With Chinese state control over AI research in companies and universities increasing, “party members are a risk that needs to be considered.”
Screening research projects
Brookings also recommends AI researchers screen joint work to check whether it gives Chinese collaborators “access to new resources” like “novel engineering knowhow” or private datasets, which they could not obtain by simply reading the published final paper.
Giving Chinese researchers new knowledge through a joint project shouldn’t automatically preclude collaboration, the report stresses. But consideration should be given to whether collaborators have military or security ties, or whether the applications of the work could be misused – particularly to crush internal dissent in China.
“As a minimum baseline, however, we suggest institutions at least review research partnerships with the so-called Seven Sons of National Defence,” the report says, referring to seven universities with strong ties to China’s military. Microsoft has already reportedly stopped recruiting interns from these universities for its labs in Beijing and Shanghai.
The question of whether Chinese researchers might have military or security links is a murky one, however. Under Beijing’s policy of ‘civil military fusion’, whereby the country’s entire civilian technology ecosystem is supposed to feed into and bolster its defence industry, Chinese AI capabilities anywhere could in theory be roped into contributing to the Chinese military.
However, the Brookings report argues that in practice this is not the case. “Not everything done within a company or research institution is done at the behest of the state.”
“The party and state simply do not have the expertise, resources, or desire to dig into the operational details of every Chinese institution,” it says, although it stresses that academics need to understand their Chinese partners’ relationship to the state.
Getting Europe on board
The institution stresses repeatedly that the US can’t “rebalance” its AI relationship with China alone, and that “alignment” with other AI powers like the EU, Japan and the UK is essential. “That’s one of the lessons from export controls,” said Kerry.
If the US decided to trim ties with China alone, Chinese researchers could simply pivot their collaborations towards other research powers like the EU, the report points out. “Any US action to reduce risks from AI knowledge transfers to Chinese researchers will require international cooperation to avoid leakage,” it said.
This coordination could take place at the Trade and Technology Council, a regular dialogue between Washington and Brussels that held its third meeting last December, Kerry suggested.
Whether European AI researchers would be happy to go along with an agenda set in Washington is less clear. Holger Hoos, an AI professor at RWTH Aachen University, said it is “good to see a clear acknowledgement of the importance of international alignment.”
But he called the Brookings report “US-centric” and said the best way to avoid technological dependence on Chinese AI technology is “not to limit cooperation or exchange of ideas” but instead invest more money, and invest it more effectively, in technological leadership in Europe.
Germany, for example, already has strict export controls on imparting AI techniques to countries like Iran, he said. These could be applied to China too, although it would require “very careful consideration” given the inevitable bureaucracy and hit to international collaboration.
“However, I can imagine circumstances under which teaching advanced AI techniques to students with ties to the Chinese government would become untenable,” Hoos said.
Then there is the question of how academics, whether in the US, Europe or elsewhere, could be persuaded to limit their partnerships with Chinese counterparts, particularly when there are grants and prestigious publications on the line.
“Democracy is not a suicide pact,” responded Kerry, when asked whether academic freedom would stop efforts to rebalance ties with China. The discussion is “part and parcel of the broader discussion about artificial intelligence ethics.”
The Brookings analysis does not propose any kind of law or mandatory enforcement for academics, but rather encourages guidelines and intelligence sharing to nudge them into making better decisions. It suggests, for example, the creation of a new public-private research security institution.
The report also acknowledges that many forums for AI research will have to remain open and international, even if this allows AI knowledge to leak to China.
It would be futile to try to restrict China-based researchers from reading AI papers by walling them off geographically, it says. International conferences should remain open, “besides screening for clearly bad actors, such as spies.”
It also stops short of calling for US corporate AI labs in China to shut down. “There was a time when the US could have significantly slowed the growth of China's AI capabilities by eliminating these overseas labs, but that time has likely passed,” the report says. “Today, transfers of knowledge often flow in reverse, from the China-based labs back to the US company and the global research ecosystem.”