The UK government published a new AI paper today outlining its approach to regulating the technology in the UK. The proposed rules address future risks and opportunities so that businesses are clear on how they can develop and use AI systems, and consumers are confident those systems are safe and robust.
The approach is based on six core principles that regulators must apply, with the flexibility to implement these in ways that best meet the use of AI in their sectors.
The proposals focus on supporting growth and avoiding unnecessary barriers being placed on businesses. This could see businesses sharing information about how they test their AI’s reliability as well as following the guidance set by UK regulators to ensure AI is safe and avoids unfair bias.
Digital Minister Damian Collins said: “We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work. It is vital that our rules offer clarity to businesses, confidence to investors and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”
Research this year predicted more than 1.3 million UK businesses will be using artificial intelligence and investing over £200 billion in the technology by 2040.
Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in their own settings. This better reflects the growing use of AI across a range of sectors.
This approach will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to boost productivity and growth. The core principles require developers and users to: ensure that AI is used safely; ensure that AI is technically secure and functions as designed; make sure that AI is appropriately transparent and explainable; consider fairness; identify a legal person to be responsible for AI; and clarify routes to redress or contestability.
Regulators - such as Ofcom, the Competition and Markets Authority, the Information Commissioner’s Office, the Financial Conduct Authority and the Medicines and Healthcare products Regulatory Agency - will be asked to interpret and implement the principles.
They will be encouraged to consider lighter-touch options, which could include guidance, voluntary measures or sandboxes - trial environments where businesses can check the safety and reliability of AI technology before introducing it to the market.
Industry experts, academics and civil society organisations focusing on this technology can share their views on putting this approach into practice through a call for evidence launching today.
Responses will be considered alongside further development of the framework in the forthcoming AI White Paper, which will explore how to put the principles into practice.
The government will consider ways to encourage coordination between regulators and will examine their capabilities to ensure they are equipped to deliver a world-leading AI regulatory framework.