The British government has published a new standard for transparency in the algorithms that organisations use to support decision-making.

Whitehall’s Central Digital and Data Office aims to help public sector organisations provide clear information about the algorithmic tools they use, especially when those tools are likely to have economic or other impacts on individuals or groups.

The standard – which will be tested and developed in partnership with the public sector and ethical and data science organisations – has two tiers. 

The first features a short description of the algorithm in question, including how and why it is being used. The second includes more detailed information about how the system works, the datasets that have been used to train the model, and the level of human oversight. 
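For illustration, a record under the standard might be structured along the following lines. This is a minimal Python sketch; the field names are assumptions drawn from the description above, not the standard’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class Tier1Summary:
    """Tier 1: short, plain-language description of the tool."""
    name: str
    description: str   # what the algorithm does
    how_used: str      # how it feeds into decision-making
    why_used: str      # rationale for deploying it

@dataclass
class Tier2Detail:
    """Tier 2: more detailed technical and governance information."""
    how_it_works: str             # model type, logic, and inputs
    training_datasets: list[str]  # datasets used to train the model
    human_oversight: str          # level and nature of human review

@dataclass
class TransparencyRecord:
    """One published record combining both tiers (hypothetical layout)."""
    tier1: Tier1Summary
    tier2: Tier2Detail
```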

The move comes as the use of automation, artificial intelligence (AI), and related technologies, such as machine learning and facial recognition, is spreading, while the government has a stated aim of making the UK an “AI superpower”.

The risks from badly designed algorithms, incorrect assumptions that lead to poor design, or flawed data – including datasets that are populated with historic biases – are legion and could lead to systemic or personal biases being automated, either accidentally or intentionally. 

As a result, both individuals and groups could be excluded or subjected to prejudicial treatment by automated systems. 

For example, if historic data shows that job applicants from a particular area have always fared badly, an automated CV screening system might reject all individuals from that area in future, denying skilled people a fair chance. Anecdotal reports from the employment sector suggest this has happened in some organisations.

The underlying issue is that historic data might include deliberate human bias – a desire to screen out certain applicants in the past. That bias might then be automated by software, rigging the system invisibly in the present day – perhaps without the algorithm’s designers being aware of the risk.
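A minimal, self-contained Python sketch of this failure mode, using entirely hypothetical hiring data and a deliberately naive screening rule, shows how past rejections can be replayed as future ones:

```python
# Toy illustration with hypothetical data: a screener trained on historically
# biased hiring decisions learns to reject applicants by postcode area.
from collections import Counter

# Historic decisions: (postcode_area, skilled, hired).
# Area "B" applicants were rejected in the past regardless of skill.
history = [
    ("A", True,  True), ("A", False, False), ("A", True, True),
    ("B", True,  False), ("B", True, False), ("B", False, False),
]

# A naive "model": tally historic hire rates per area.
hire_rate: dict[str, Counter] = {}
for area, _, hired in history:
    hire_rate.setdefault(area, Counter())[hired] += 1

def screen(area: str) -> bool:
    """Pass an applicant only if their area's historic hire rate exceeds 50%."""
    c = hire_rate.get(area, Counter())
    total = c[True] + c[False]
    return total > 0 and c[True] / total > 0.5

print(screen("A"))  # True  - area A applicants pass
print(screen("B"))  # False - skilled area B applicants are rejected anyway
```

Note that the rule never looks at skill at all: the historic bias is carried forward invisibly, exactly as described above.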

In the US last decade, a sentencing advice algorithm used by judges in court was found to have recommended harsher penalties for black Americans and more lenient ones for white Americans, due to inequities in historic data.

Problems like this demand that the choices made in algorithm design are examined deeply, along with their potential consequences. As a result, both the design itself and the data populating the system need to be transparent.

Whitehall’s own record in algorithm design has been poor in the Covid/lockdown era. Notoriously, the 2020 A-level results scandal saw a grades standardisation system designed by the regulator Ofqual downgrade state school pupils and upgrade those from private/independent schools, causing a public outcry.

The algorithm was designed to prevent grade inflation, but the inclusion of class sizes as a moderating factor on results was one reason the system in effect favoured independent schools over state institutions, while downgrading many pupils who had been assessed highly by their teachers.
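As a much-simplified sketch of the reported mechanism (the threshold and logic here are illustrative assumptions, not Ofqual’s actual implementation), small cohorts kept their teacher-assessed grades while larger ones were pulled towards the school’s historic results:

```python
# Much-simplified, hypothetical sketch of class-size moderation.
# The threshold and fallback logic are assumptions for illustration only.

def moderated_grade(teacher_grade: str, class_size: int,
                    historic_centre_grade: str,
                    small_class_threshold: int = 5) -> str:
    """Return the grade after moderation by class size."""
    if class_size <= small_class_threshold:
        # Small cohorts (more common in independent schools) kept
        # their teacher-assessed grades.
        return teacher_grade
    # Larger cohorts (typical of state schools) were moderated towards
    # the centre's historic distribution, often downgrading pupils.
    return historic_centre_grade

print(moderated_grade("A", class_size=4,  historic_centre_grade="B"))  # A
print(moderated_grade("A", class_size=28, historic_centre_grade="B"))  # B
```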

This widening of the opportunity gap for some pupils was unfair, and 2020’s ‘results’ – more accurately, grade assessments derived from data sources – were eventually recalculated. By then, however, many teenagers’ university options had already been damaged.

The debacle serves as a cautionary tale about how a combination of historic data, poor assumptions, and bad algorithm design can have a massive impact on citizens’ lives. 

To minimise such problems in the future, the Central Digital and Data Office has developed the new algorithmic transparency standard, in partnership with the Centre for Data Ethics and Innovation.

It will be piloted by several public sector organisations and developed based on feedback, according to an announcement from the government.

Whitehall says that the move “delivers on commitments made in the National AI Strategy and National Data Strategy, and strengthens the UK’s position as a global leader in trustworthy AI.

“In its landmark review into bias in algorithmic decision-making, the Centre for Data Ethics and Innovation (CDEI) recommended that the UK government should place a mandatory transparency obligation on public sector organisations using algorithms to support significant decisions affecting individuals.”

The standard comes with support from The Alan Turing Institute and the Ada Lovelace Institute, as well as international organisations such as the OECD and the Open Government Partnership.

Lord Agnew, Minister of State at the Cabinet Office, said, “Algorithms can be harnessed by public sector organisations to help them make fairer decisions, improve the efficiency of public services and lower the cost associated with delivery. 

“However, they must be used in decision-making processes in a way that manages risks, upholds the highest standards of transparency and accountability, and builds clear evidence of impact.”

Imogen Parker, Associate Director (Policy) at the Ada Lovelace Institute, said, “Meaningful transparency in the use of algorithmic tools in the public sector is an essential part of a trustworthy digital public sector. 

“The Ada Lovelace Institute has called for a transparency register of public sector algorithms to allow the public – and civil society who act on their behalf – to know what systems are in use, where, and why. 

“The UK government’s investment in developing this transparency standard is an important step towards achieving this objective, and a valuable contribution to the wider conversation on algorithmic accountability in the public sector.”

Adrian Weller, Programme Director for AI at The Alan Turing Institute, and Member of the Centre for Data Ethics and Innovation’s Advisory Board, added, “Organisations are increasingly turning to algorithms to automate or support decision-making. We have a window of opportunity to put the right governance mechanisms in place as adoption increases. 

“This is why I’m delighted to see the UK government publish one of the world’s first national algorithmic transparency standards. This is a pioneering move by the government, which will not only help to build appropriate trust in the use of algorithmic decision-making by the public sector but will also act as a lever to raise transparency standards in the private sector.”

Tabitha Goldstaub, Chair of the AI Council, said, “In the AI Council’s AI Roadmap, we highlighted the need for new transparency mechanisms to ensure accountability and public scrutiny of algorithmic decision-making; and encouraged the UK government to consider analysis and recommendations from the Centre for Data Ethics and Innovation, and the Committee on Standards in Public Life. 

“I’m thrilled to see the UK government acting swiftly on this.”