Government leaders in both the UK and the US have addressed the safety of AI over the past week as they respond to demands for guidelines for a technology that continues to develop at a rapid pace. The activity comes ahead of this week’s AI Safety Summit, hosted by the UK at Bletchley Park starting Wednesday.

Prime Minister Rishi Sunak, in a speech to the Royal Society Thursday, said the government was publishing its analysis on the risks of AI, including assessments from intelligence agencies. Sunak acknowledged the risks of AI “super intelligence” and the fact that a regulatory structure for the technology is currently lacking. 

“Right now, the only people testing the safety of AI are the very organisations developing it,” he said, adding: “Even they don’t always fully understand what their models could become capable of.” 

At the same time, he reiterated that while it is the government’s responsibility to address safety risks, the UK’s policy is “not to rush to regulate”.

On Monday, the government announced a £100 million fund to develop AI’s potential in life sciences and healthcare. The UK has already invested £100 million in its Foundation Model Taskforce, and Sunak said the government will also establish the world’s first AI Safety Institute.

The institute, he said, will “carefully examine, evaluate and test new types of AI so that we understand what each new model is capable of – exploring all the risks, from social harms like bias and misinformation, through to the most extreme risks of all.” 

US taking more activist role  

The US has taken a more aggressive stance on AI safety, with President Joe Biden issuing an executive order last week aimed at protecting Americans from the outset against potential risks posed by AI. The document addresses a range of topics, including overall standards, privacy, equity and civil rights, protections for consumers, workers and patients, and innovation and competition.

Among the measures outlined in the executive order: a requirement that developers of the most powerful AI systems share their safety tests and “other critical information” with the US government; the development of “standards, tools and tests” to ensure the safety, security and trustworthiness of AI systems; and protecting against the risks of using AI to engineer dangerous biological materials. 

The US Department of Health and Human Services will also set up a safety program to receive reports of – and act to remedy – harms or “unsafe healthcare practices” related to AI, the document said. 

UK companies react

Healthcare companies working in the UK have generally welcomed the prime minister’s approach, although some have cautioned that government policy should balance the focus on making the country a competitive home for new technology with the need to protect patients.

“While the government’s broad pro-technology stance and optimism for the future is appreciated, we’d highlight that investment in AI has the potential to deliver meaningful change for the UK’s public health today and is not just about investing in the future,” said Paul McGinness, co-founder and CEO of health data company Lenus Health.

“Focusing a significant proportion of the government’s £100m AI investment in the NHS can save lives this winter with ongoing staffing shortages, long waits and spikes in hospital admissions related to chronic respiratory conditions like COPD.”  

McGinness welcomed “the pragmatic approach to regulation to this end so Lenus Health, as a UK startup working hand in hand with our NHS collaborators, can continue to lead predictive AI for chronic conditions”. 

Michael Measures, director of technology at digital consultancy Answer Digital, said he appreciated Sunak’s distinction between different types of AI: “The kind we should treat with caution, and the type we should be pushing forward. The kinds we are using in healthcare are most certainly the latter. 

“AI has already begun to make its mark in healthcare in imaging, diagnostics, and drug testing,” he added, “but it holds so many possibilities for making changes throughout the system, from enhancing clinical decision support to natural language processing”.

Measures noted, however, that while the prime minister’s emphasis on innovation over regulation was important to allow the technology to flourish, “keeping these innovations within safe guardrails should not be overlooked.”

He added that he hoped the £100 million in funding would be used to mitigate risks associated with AI deployment – including inherent biases – to build public trust in the technology.

“Investing in safe, democratic ways to test, train, and scale AI should be a top priority, with open-source AI deployment engines being a key enabler to support robust evidence gathering and post-market surveillance throughout an AI model’s life cycle.”

Jardine Barrington-Cook, head of interoperability and data at Access Health, Support and Care, warned of a different danger – that of failing to focus new government funding commitments on the solutions with the greatest chance of leading to progress.  

“To unlock this potential, we need to shift our focus from testing lots of small-scale proofs of concept to using high-quality, large data sets and incremental approaches to deployment, or we will find ourselves in yet another hype cycle of unrealised potential, which will only be a detriment to patients and clinicians in the long run.”