The UK government released recommendations for the artificial intelligence industry on Wednesday, outlining an all-encompassing approach to regulating the technology at a time when hype around it has reached fever pitch.
In the white paper, the Department for Science, Innovation and Technology (DSIT) outlined five principles that companies should follow. They are: safety, security and robustness; transparency and explainability; fairness; accountability and governance; and contestability and redress.
Instead of issuing new regulations, the government is asking regulators to apply existing regulations and inform companies of their obligations under the white paper.
It has mandated the Health and Safety Executive, the Equality and Human Rights Commission and the Competition and Markets Authority to “develop bespoke, context-specific approaches that match the way AI is actually being used in their sectors”.
“Over the next 12 months, regulators will issue practical guidance for organizations, as well as other tools and resources such as risk assessment templates, to set out how these principles can be implemented in their sectors,” the government said.
“If parliamentary time permits, legislation could be introduced to ensure regulators consistently adhere to the principles.”
The recommendations arrive at a timely moment. ChatGPT, the popular AI chatbot developed by the Microsoft-backed company OpenAI, has sparked a surge in demand for the technology, and people are using the tool for everything from writing school essays to drafting legal opinions.
ChatGPT has already become one of the fastest-growing consumer applications of all time, attracting 100 million monthly active users as of February. However, experts have raised concerns about the negative effects of the technology, including the potential for plagiarism and discrimination against women and ethnic minorities.
AI ethicists are concerned about bias in the data used to train AI models. Algorithms have been shown to skew in favor of men, especially white men, putting women and minorities at a disadvantage.
Fears have also been raised that jobs could be lost to automation. On Tuesday, Goldman Sachs warned that up to 300 million jobs could be at risk of being wiped out by generative AI products.
The government wants companies that integrate AI into their business to ensure they offer a high level of transparency into how their algorithms are being developed and used. Organizations “should be able to communicate when and how it will be used and to explain a system’s decision-making process at an appropriate level of detail commensurate with the risks posed by the use of AI,” the DSIT said.
Companies should also offer users the ability to challenge decisions made by AI-based tools, the DSIT said. User-generated content platforms like Facebook, TikTok and YouTube often use automated systems to remove content reported as violating their policies.
AI, which is believed to contribute £3.7 billion ($4.6 billion) to the UK economy each year, should also be “used in a manner consistent with existing UK laws, such as the Equality Act 2010 or the UK GDPR, and must not discriminate against individuals or achieve unfair commercial results,” added the DSIT.
On Monday, Secretary of State Michelle Donelan visited the offices of AI startup DeepMind in London, a government spokesman said.
“Artificial intelligence isn’t just science fiction anymore, and the pace of AI development is breathtaking, so we need rules to ensure it’s being developed safely,” Donelan said in a statement on Wednesday.
“Our new approach is based on strong principles, so people can have confidence in companies to unleash this technology of tomorrow.”
Lila Ibrahim, chief operating officer of DeepMind and a member of the UK AI Council, said AI is a “transformational technology” but that it “can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly.”
“The contextual approach proposed by the UK will help regulation keep up with the evolution of AI, support innovation and mitigate future risks,” said Ibrahim.
The move comes after other countries have developed their own rules for AI. In China, the government has required tech companies to hand over details of their prized recommendation algorithms, while the European Union has proposed regulations of its own for the industry.
Not everyone is convinced of the UK government’s approach to regulating AI. John Buyers, head of AI at law firm Osborne Clarke, said the move to delegate responsibility for overseeing the technology among regulators risks creating a “complicated regulatory patchwork full of holes”.
“The risk with the current approach is that a problematic AI system needs to present itself in the right format to trigger a regulator’s jurisdiction, and moreover, the regulator in question needs to have the right enforcement powers to take decisive and effective action to repair the damage done and provide a sufficient deterrent effect to incentivize compliance in the industry,” Buyers told CNBC via email.
In contrast, the EU has proposed a “top-down regulatory framework” when it comes to AI, he added.
Source: www.cnbc.com