On Tuesday, China unveiled its draft assessment procedures for generative AI tools, requiring companies to submit their technologies for review before releasing them to the public.
The South China Morning Post reports that the Cyberspace Administration of China (CAC) proposed the restrictions to prohibit discriminatory content, false information, and anything that could infringe on personal privacy or intellectual property.
The CAC considers these precautions necessary to prevent AI products from appearing to advocate the overthrow of the government or from disrupting the established social and economic order.
Baidu, SenseTime, and Alibaba are just a few of the Chinese tech giants that have recently demonstrated cutting-edge artificial intelligence (AI) models powering everything from chatbots to image generators, and this rapid progress has regulators worried about the country’s impending AI boom.
As reported by Reuters, the CAC also emphasized that such products must be consistent with the country’s core socialist values. Providers found in violation of the guidelines may face fines, suspension of services, or even criminal prosecution.
The CAC has instructed digital firms to fix their systems within three months if their platforms produce illegal or otherwise objectionable content.
The draft guidelines state that the public has until May 10 to comment on the proposals and that the measures will take effect “sometime this year.”
Concerns over AI’s capabilities have increasingly dominated public conversation, particularly following an open letter from industry experts and executives calling for a six-month pause in AI development while governments and tech companies grappled with the broader ramifications of tools like ChatGPT.
Because ChatGPT is unavailable in China, numerous domestic companies have been scrambling to introduce competing artificial intelligence (AI) solutions.
Ernie Bot, released by Baidu last month, was quickly followed by Tongyi Qianwen, developed by Alibaba, and SenseNova, developed by SenseTime.
According to the Post, Beijing remains wary of the risks posed by generative AI, with state media warning of a “market bubble” and “excessive hype” around the technology and expressing fears that it could corrupt users’ “moral judgment.”
Several incidents involving ChatGPT, such as the alleged collection of Canadians’ personal information without consent and the fabrication of false sexual harassment allegations against law professor Jonathan Turley, have already caused a stir and raised concerns about the technology’s potential for harm.
According to research out of Germany’s Technische Hochschule Ingolstadt, ChatGPT may influence how people make ethical decisions: participants were shown randomly selected arguments, attributed to ChatGPT, for and against the trolley problem’s central dilemma, in which one person’s life must be sacrificed to save five others.
The study found that participants’ opinions on whether it was appropriate to sacrifice one life to save five shifted depending on whether the argument they read supported or opposed the sacrifice, even when the argument was credited to ChatGPT.