If ChatGPT’s developer, OpenAI, takes the necessary steps to satisfy Italian authorities, the artificial intelligence program could soon be allowed back into the country, where it was temporarily blocked over privacy concerns.
On Wednesday, Italy’s data protection authority detailed a number of conditions that OpenAI must meet by April 30 to have the ban on its chatbot lifted.
Last month, in the midst of an investigation into a suspected data breach, the Italian regulator, known as the Garante, ordered the company to temporarily stop processing the personal information of Italian users.
The agency stressed the need to comply with the European Union’s stringent data privacy rules but said it did not want to impede AI research.
On Wednesday, OpenAI welcomed the regulator’s decision to address its concerns, after responding with a list of proposed solutions.
“We are happy that the Italian Garante is reconsidering their decision and we look forward to working with them to make ChatGPT available to our customers in Italy again soon,” OpenAI declared.
The proliferation of AI has drawn rising international scrutiny, with governments from France to Canada investigating or taking a closer look at so-called generative AI technologies like ChatGPT.
The chatbot is “trained” on massive amounts of data, including digital books and online writings, and can produce prose eerily similar to human writing.
Under the watchdog’s conditions, OpenAI must give both users and non-users access to their personal data, along with the means to correct or delete it, and must disclose how and why the company handles such data.
According to the watchdog, the company cannot process people’s data to train ChatGPT’s algorithms without either their explicit consent or a “legitimate interest” in using it.
The Italian authorities were concerned that ChatGPT could generate misleading information about individuals, and they questioned OpenAI’s legal right to gather the huge volumes of data needed to train the system’s algorithms.
Italy’s watchdog also requires San Francisco-based OpenAI to run a publicity campaign across radio, television, newspapers, and the internet by May 15 to inform the public about how it uses personal data to train its algorithms.
OpenAI must also verify users’ ages and put in place a system to filter out children under 13, as well as users between the ages of 13 and 18 who lack parental permission to use the service.