- Over £100 million to support regulators and advance research and innovation on AI, including Hubs in healthcare and chemical discovery
- Key regulators asked to publish plans by end of April for how they are responding to AI risks and opportunities
- UK government makes case for introducing future targeted, binding requirements for most advanced general-purpose AI systems
The UK is on course for more agile AI regulation, backing regulators with the skills and tools they need to address the risks and opportunities of AI, as part of the government’s response to the AI Regulation White Paper consultation today (6 February).
It comes as £10 million is announced to prepare and upskill regulators to address the risks and harness the opportunities of this defining technology. The fund will help regulators develop cutting-edge research and practical tools to monitor and address risks and opportunities in their sectors, from telecoms and healthcare to finance and education. For example, this might include new technical tools for examining AI systems.
Many regulators have already taken action. For example, the Information Commissioner’s Office has updated its guidance on how the UK’s strong data protection laws apply to AI systems that process personal data, including on fairness, and has continued to hold organisations to account, such as through the issuing of enforcement notices. However, the UK government wants to build on this by further equipping regulators for the age of AI as use of the technology ramps up. The UK’s agile regulatory system will allow regulators to respond rapidly to emerging risks, while giving developers room to innovate and grow in the UK.
In a drive to boost transparency and provide confidence to British businesses and citizens, key regulators, including Ofcom and the Competition and Markets Authority, have been asked to publish their approach to managing the technology by 30 April. They will set out AI-related risks in their areas, detail their current skillset and expertise to address them, and outline a plan for how they will regulate AI over the coming year.
This forms part of the AI regulation white paper consultation response, published today, which carves out the UK’s own approach to regulation and which will ensure it can quickly adapt to emerging issues and avoid placing burdens on business which could stifle innovation. This approach to AI regulation will mean the UK can be more agile than competitor nations, while also leading on AI safety research and evaluation, charting a bold course for the UK to become a leader in safe, responsible AI innovation.
The technology is rapidly developing, and the risks and most appropriate mitigations are still not fully understood. The UK government will not rush to legislate, or risk implementing ‘quick-fix’ rules that would soon become outdated or ineffective. Instead, the government’s context-based approach means existing regulators are empowered to address AI risks in a targeted way.
The UK government has, however, for the first time set out its initial thinking on future binding requirements which could be introduced for developers building the most advanced AI systems, to ensure they are accountable for making these technologies sufficiently safe.
Secretary of State for Science, Innovation and Technology Michelle Donelan said:
The UK’s innovative approach to AI regulation has made us a world leader in both AI safety and AI development.
I am personally driven by AI’s potential to transform our public services and the economy for the better – leading to new treatments for cruel diseases like cancer and dementia, and opening the door to advanced skills and technology that will power the British economy of the future.
AI is moving fast, but we have shown that humans can move just as fast. By taking an agile, sector-specific approach, we have begun to grip the risks immediately, which in turn is paving the way for the UK to become one of the first countries in the world to reap the benefits of AI safely.
Meanwhile, nearly £90 million will go towards launching nine new research hubs across the UK and a partnership with the US on responsible AI. The hubs will support British AI expertise in harnessing the technology across areas including healthcare, chemistry, and mathematics.
£2 million of Arts and Humanities Research Council (AHRC) funding is also being announced today, which will support new research projects that will help to define what responsible AI looks like across sectors such as education, policing and the creative industries. These projects are part of the AHRC’s Bridging Responsible AI Divides (BRAID) programme.
£19 million will also go towards 21 projects to develop innovative trusted and responsible AI and machine learning solutions to accelerate deployment of these technologies and drive productivity. This will be funded through the Accelerating Trustworthy AI Phase 2 competition, supported through the UKRI Technology Missions Fund, and delivered by the Innovate UK BridgeAI programme.
The government will also launch a steering committee in the spring to support and guide the activities of a formal regulator coordination structure within government.
These measures sit alongside the £100 million invested by the government in the world’s first AI Safety Institute to evaluate the risks of new AI models, and the global leadership shown by hosting the world’s first major summit on AI safety at Bletchley Park in November.
The groundbreaking International Scientific Report on Advanced AI Safety, which was unveiled at the summit, will also help to build a shared evidence-based understanding of frontier AI, while the work of the AI Safety Institute will see the UK working closely with international partners to boost our ability to evaluate and research AI models.
The UK further commits to this approach today with an investment of £9 million through the government’s International Science Partnerships Fund, bringing together researchers and innovators in the UK and the United States to focus on developing safe, responsible, and trustworthy AI.
The government’s response lays out a pro-innovation case for further targeted binding requirements on the small number of organisations that are currently developing highly capable general-purpose AI systems, to ensure that they are accountable for making these technologies sufficiently safe. This would build on steps the UK’s expert regulators are already taking to respond to AI risks and opportunities in their domains.
Hugh Milward, Vice-President, External Affairs, Microsoft UK, said:
The decisions we take now will determine AI’s potential to grow our economy, revolutionise public services and tackle major societal challenges, and we welcome the government’s response to the AI White Paper.
Seizing this opportunity will require responsible and flexible regulation that supports the UK’s global leadership in the era of AI.
Aidan Gomez, Co-Founder and CEO of Cohere, said:
By reaffirming its commitment to an agile, principles- and context-based regulatory approach to keep pace with a rapidly advancing technology, the UK government is emerging as a global leader in AI policy.
The UK is building an AI governance framework that embraces the transformative benefits of AI while being able to address emerging risks.
Lila Ibrahim, Chief Operating Officer, Google DeepMind, said:
I welcome the UK government’s statement on the next steps for AI regulation, and the balance it strikes between supporting innovation and ensuring AI is used safely and responsibly.
The hub and spoke model will help the UK benefit from the domain expertise of regulators, as well as provide clarity to the AI ecosystem – and I’m particularly supportive of the commitment to support regulators with further resources.
AI represents an opportunity to drive progress for humanity, and we look forward to working with the government to ensure that the UK can continue to be a global leader in AI research and set the standard for good regulation.
Tommy Shaffer Shane, AI Policy Advisor at the Centre for Long-Term Resilience, said:
We’re pleased to see this update to the government’s thinking on AI regulation, and especially the firm recognition that new legislation will be needed to address the risks posed by rapid developments in highly capable general-purpose systems.
Moving quickly here while thinking carefully about the details will be crucial to balancing innovation and risk mitigation, and to the UK’s international leadership in AI governance more broadly.
We look forward to seeing the government work through this challenge at pace, and to further updates on the approach to new legislation in the coming weeks and months.
Julian David, CEO at techUK, said:
techUK welcomes the government’s commitment to the pro-innovation and pro-safety approach set out in the AI Whitepaper. We now need to move forward at speed, delivering the additional funding for regulators and getting the Central Function up and running. Our next steps must also include bringing a range of expertise into government, identifying the gaps in our regulatory system and assessing the immediate risks.
If we achieve this, the Whitepaper is well placed to provide the regulatory clarity needed to support innovation, and the adoption of AI technologies, that promises such vast potential for the UK.
Kate Jones, Chief Executive of the Digital Regulation Cooperation Forum (DRCF), said:
The DRCF member regulators are all keen to maximise the benefits of AI for individuals, society and the economy, while managing its risks effectively and proportionately.
To that end, we are taking significant steps to implement the White Paper principles, and are collaborating closely on areas of shared interest including our forthcoming AI and Digital Hub pilot service for innovators.
John Boumphrey, UK Country Manager of Amazon, said:
Amazon supports the UK’s efforts to establish guardrails for AI, while also allowing for continued innovation. As one of the world’s leading developers and deployers of AI tools and services, trust in our products is one of our core tenets and we welcome the overarching goal of the white paper.
We encourage policymakers to continue pursuing an innovation-friendly and internationally coordinated approach, and we are committed to collaborating with government and industry to support the safe, secure, and responsible development of AI technology.
Markus Anderljung, Head of Policy, Centre for the Governance of AI, said:
The UK’s approach to AI regulation is evolving in a positive direction: it relies heavily on existing regulators and takes concrete steps to support them, while also investing in identifying and addressing gaps in the regulatory ecosystem.
I am particularly pleased that the response acknowledges the need to address one such gap that has become more apparent since the white paper’s publication: how the most impactful and compute-intensive AI systems are developed and deployed onto the market.
The consultation has highlighted the strong support for the five cross-sectoral principles which are the foundation of the UK’s approach and include safety, transparency, fairness, and accountability.
The publication of the AI Regulation White Paper last March laid the foundations for the UK’s approach to regulating AI by driving safe, responsible innovation. This common sense, pragmatic approach will now be further strengthened by robust regulator expertise, allowing people across the country to safely harness the benefits of AI for years to come.