After a data breach, Italy’s privacy authority has temporarily blocked the AI program ChatGPT while it examines whether the company behind it violated the European Union’s strict data protection rules.
The Italian Data Protection Authority announced that it would take temporary measures “until ChatGPT respects privacy,” including barring the firm from processing the data of Italian users.
OpenAI, the U.S. company that developed the chatbot, said late Friday that it had blocked ChatGPT for Italian users at the request of the Italian authorities. The firm said it is confident its practices comply with European privacy rules and that it is working to restore access to ChatGPT as quickly as possible.
Several public and private colleges and universities throughout the world have banned ChatGPT due to worries about student plagiarism, but Italy’s move is “the first nation-scale restriction of a mainstream AI platform by a democracy,” stated Alp Toker of NetBlocks, an organization that tracks internet censorship throughout the world.
The ban impacts the web-based version of ChatGPT, a popular writing helper, but it is unlikely to disrupt software applications from firms that already have licenses with OpenAI to utilize the same technology driving the chatbot, including Microsoft’s Bing search engine.
Large language models, the type of AI that powers such chatbots, may replicate human writing styles by learning from a vast library of digital books and online writings.
The Italian regulator has ordered OpenAI to report within 20 days on the steps it has taken to protect users’ data, or risk a fine of up to 20 million euros (almost $22 million) or 4% of annual global revenue, whichever is greater.
In a statement citing the EU’s General Data Protection Regulation, the agency pointed to a recent data breach involving ChatGPT “users’ conversations” and subscriber payment information.
OpenAI said it took ChatGPT offline on March 20 to fix a bug that allowed some users to see the titles, or subject lines, of other users’ chat histories.
“Our investigation has also found that 1.2% of ChatGPT Plus users might have had personal data revealed to another user,” the company said. “We believe the number of users whose data was actually revealed to someone else is extremely low and we have contacted those who might be impacted.”
The Garante, Italy’s data protection authority, questioned the legal basis for the “massive collection and processing of personal data” used to train the platform’s algorithms. It also noted that ChatGPT can generate and store false information about individuals.
The report concluded that because there is no way to confirm users’ ages, children may get comments that are “totally inappropriate to their age and knowledge.”
In response, OpenAI said it works “to reduce personal data in training our AI systems like ChatGPT because we want our AI to learn about the world, not about private individuals.”
“We also believe that AI regulation is necessary — so we look forward to working closely with the Garante and educating them on how our systems are built and used,” according to the firm.
Italy’s action comes amid growing concern over the rapid spread of AI. In an open letter issued on Wednesday, a group of academics and tech industry figures urged companies such as OpenAI to pause development of more powerful AI models until the autumn, giving society more time to weigh the potential dangers.
In a television interview on Friday evening, the head of Italy’s data protection agency, Pasquale Stanzione, confirmed that he had signed the petition, saying he did so because “it’s not apparent what interests are being pursued” by those developing AI.
Stanzione called AI that interferes with people’s right to “self-determination” “extremely hazardous,” and said the absence of filters to keep out users younger than 13 was “very grave.”
Last week, Sam Altman, CEO of San Francisco-based OpenAI, announced that he will travel across six continents in May to meet with users and developers of the company’s artificial intelligence software.
The itinerary includes stops in Madrid, Munich, London, and Paris, as well as Brussels, where EU legislators have been debating sweeping new restrictions to curb high-risk AI capabilities.
The European Consumer Organization BEUC demanded on Thursday that authorities in the European Union and its 27 member states investigate ChatGPT and other artificial intelligence chatbots. It may be years before the EU’s AI law takes effect, BEUC said, so authorities need to act quickly to protect consumers.
“In only a few months, we have seen a massive take-up of ChatGPT, and this is only the beginning,” said BEUC’s Ursula Pachl.
Waiting for the European Union’s Artificial Intelligence Act, she added, “is not good enough as there are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people.”