With ChatGPT hype swirling, UK government urges regulators to come up with rules for AI
In a white paper to be put forward to Parliament, the Department for Science, Innovation and Technology (DSIT) will outline five principles it wants companies to follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Rather than establishing new rules, the government is calling on existing regulators to apply the laws already on the books and to inform companies of their obligations under the white paper.
It has tasked the Health and Safety Executive, the Equality and Human Rights Commission, and the Competition and Markets Authority with coming up with “tailored, context-specific approaches that suit the way AI is actually being used in their sectors.”
“Over the next twelve months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors,” the government said.
“When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.”
Maya Pindeus, CEO and co-founder of AI startup Humanising Autonomy, said the government’s move marked a “first step” toward regulating AI.
“There does need to be a bit of a stronger narrative,” she said. “I hope to see that. This is kind of planting the seeds for this.”
However, she added, “Regulating technology as technology is incredibly difficult. You want it to advance; you don’t want to hinder any advancements when it impacts us in certain ways.”