The government has published plans for regulating the use of artificial intelligence (AI) in the UK.

The proposals come as the Data Protection and Digital Information Bill is introduced to Parliament; the bill includes measures to support the responsible use of AI while reducing compliance burdens on businesses.

The move is an attempt to nurture innovation and growth in the AI sector, while still protecting the public from the risks of automated bias and flawed data.

According to the government, “The proposals focus on supporting growth and avoiding unnecessary barriers being placed on businesses.

“This could see businesses sharing information about how they test their AI’s reliability as well as following guidance set by UK regulators to ensure AI is safe and avoids unfair bias.”

This requirement for regulators to support growth and innovation has been a constant theme of announcements from the Johnson administration, as it seeks to distance itself from what it regards as the overly bureaucratic EU.

For example, the new Information Commissioner was appointed with a brief to foster growth and innovation.

“Instead of giving responsibility for AI governance to a central regulatory body, as the EU is doing through its AI Act, the government’s proposals will allow different regulators to take a tailored approach to the use of AI in a range of settings,” explained the government.

“This approach will create proportionate and adaptable regulation so that AI continues to be rapidly adopted in the UK to boost productivity and growth.”

However, Europe’s centralised approach is designed to protect citizens from the overreach of data-centric US technology giants, such as Google/Alphabet, Facebook/Meta, and Amazon.

As a result, privacy and rights campaigners will be concerned that the UK may be abandoning those principles, in the context of further proposals to tear up human rights legislation.

Also of concern will be the strain that any wholesale move away from European standards may place on the EU’s data adequacy decision for the UK, which allows personal data to flow freely between the two jurisdictions.

Under the new proposals, regulators such as Ofcom and the Competition and Markets Authority (CMA) will apply six overarching principles.

These are to:

  • Ensure that AI is used safely
  • Ensure that AI is technically secure, and functions as designed
  • Make sure that AI is appropriately transparent and explainable
  • Consider fairness
  • Identify a legal person to be responsible for AI
  • Clarify routes for redress or contestability

However, the instruction to merely “consider” fairness may be an issue for privacy and rights campaigners, along with the need for “appropriate” transparency.

The government’s quest for clarity may, in fact, create grey areas in which businesses can act irresponsibly while claiming compliance.

Digital Minister Damian Collins said, “We want to make sure the UK has the right rules to empower businesses and protect people as AI and the use of data keeps changing the ways we live and work.

“It is vital that our rules offer clarity to businesses, confidence to investors, and boost public trust. Our flexible approach will help us shape the future of AI and cement our global position as a science and tech superpower.”

According to the government, the UK leads Europe in AI innovation and investment, and ranks behind only the US and China for funds raised, with domestic firms attracting $4.65 billion in venture capital last year.