ROME (AP) – Italy is temporarily blocking the artificial intelligence software ChatGPT in the wake of a data breach as it investigates a possible violation of strict European Union privacy rules, the government’s data protection authority said on Friday.
Italy’s data protection authority said it was taking interim measures “until ChatGPT respects privacy,” including temporarily restricting the company’s processing of Italian user data.
US-based OpenAI, which developed ChatGPT, did not immediately respond to a request for comment on Friday.
While some public schools and universities around the world have banned the ChatGPT website from their local networks due to student plagiarism issues, it’s not clear how Italy would ban it nationwide.
The move is also unlikely to affect applications from companies that already have licenses with OpenAI to use the same technology powering the chatbot, such as Microsoft’s Bing search engine.
The AI systems that power such chatbots, known as large language models, are able to mimic human writing styles based on the vast trove of digital books and online writings they have ingested.
The Italian regulator said OpenAI must report within 20 days what measures it has taken to ensure the privacy of users’ data, or face a fine of up to 20 million euros (nearly $22 million) or 4% of its annual global revenue.
The agency’s statement, citing the EU’s General Data Protection Regulation, noted that on March 20, ChatGPT suffered a data breach affecting “user conversations” and information about subscriber payments.
OpenAI previously announced that it had to take ChatGPT offline on March 20 to fix a bug that allowed some people to see the titles or subject lines of other users’ chat history.
“Our research also found that 1.2% of ChatGPT Plus users may have shared personal information with another user,” the company said. “We believe the number of users whose data was actually shared with someone else is extremely small and we have contacted those who may have been affected.”
Italy’s data protection authority said there is no legal basis to justify OpenAI’s “massive collection and processing of personal data” used to train the platform’s algorithms, and that the company fails to notify the users whose data it collects.
The agency also said that ChatGPT can sometimes generate and store false information about individuals.
Finally, the agency found that there is no system in place to verify users’ ages, exposing children to answers that are “completely inappropriate for their age and awareness.”
The watchdog’s move comes as concerns mount over the artificial intelligence boom. A group of academics and tech industry leaders released a letter on Wednesday urging companies like OpenAI to pause development of more powerful AI models until the fall to give society time to weigh the risks.
“While it is not clear how enforceable these decisions will be, the mere fact that there appears to be a mismatch between the technological reality on the ground and the legal framework in Europe” shows there may be some merit to the letter’s call for a pause, “so our cultural tools can catch up,” said Nello Cristianini, an AI professor at the University of Bath.
Sam Altman, CEO of San Francisco-based OpenAI, announced this week that he will travel across six continents in May to talk to users and developers about the technology. The trip includes a planned stop in Brussels, where European Union lawmakers are negotiating sweeping new rules to limit high-risk AI tools, as well as visits to Madrid, Munich, London and Paris.
European consumer group BEUC on Thursday called on EU authorities and the bloc’s 27 member states to investigate ChatGPT and similar AI chatbots. BEUC said it could take years for EU AI legislation to come into force, requiring authorities to act faster to protect consumers from potential risks.
“In just a few months, we’ve seen massive adoption of ChatGPT, and this is just the beginning,” said Deputy Director General Ursula Pachl.
Waiting for the EU’s AI law “is not good enough as there are serious concerns about how ChatGPT and similar chatbots could deceive and manipulate people.”
O’Brien reported from Providence, Rhode Island. AP Business Writer Kelvin Chan contributed from London.
Copyright © 2023 The Washington Times, LLC.