ChatGPT Has Upended the EU’s Plan to Regulate AI

The latest buzz in AI, the super-talkative chatbot called ChatGPT, has European regulators scrambling to rethink their approach to AI law.

In recent months, the chatbot stunned the internet with its ability to quickly produce human-sounding language. It publicly professed its feelings for a reporter from The New York Times.

It captured the story of escaped monkeys in a haiku. Two German MEPs brought the issue up in the European Parliament, using speeches written by ChatGPT to emphasize the need to regulate AI.

But after months of internet lolz, and doomsaying from critics, European Union regulators now face the technology’s perplexing question: how do we bring this thing under control?

The technology has already derailed the Artificial Intelligence Act, the European Union’s draft rulebook for AI. The Commission’s 2021 proposal set out to prohibit certain applications, including some forms of facial recognition, manipulation, and social scoring.

Some applications of AI would be labeled “high risk,” subjecting their creators to additional regulations governing transparency, security, and human oversight.

Where’s the catch? Both good and bad actors can use ChatGPT to their advantage.

Large language models are a type of artificial intelligence that can be turned to a wide variety of tasks: writing computer code, policy papers, false news stories, and even court judgments, as one Colombian judge has confessed.

Other models trained on images rather than text can produce anything from cartoons to fake images of politicians, fueling concerns about the spread of false information.

The new Bing search engine, which is driven by ChatGPT’s technology, once threatened to “hack” and “ruin” a researcher. In another example, the Lensa image-to-cartoon converter program artificially enhanced the sexuality of shots of Asian women.

“These systems have no ethical understanding of the world, have no sense of truth, and they’re not reliable,” said Gary Marcus, an AI specialist and outspoken critic.

Such AIs “are like engines. They are very powerful engines and algorithms that can do quite a number of things and which themselves are not yet allocated to a purpose,” said Dragoș Tudorache, a Romanian liberal MEP who, along with Brando Benifei, an Italian S&D legislator, is responsible for steering the AI Act through the European Parliament.

The new technology has already forced EU institutions to revise their preliminary plans. In December, the Council of the EU (which represents national governments) adopted its version of the proposed AI Act, which would give the Commission authority to regulate cybersecurity, transparency, and risk management for all AIs.

ChatGPT’s success has pushed the European Parliament toward a similar stance. In February, Benifei and Tudorache, the lead MEPs on the AI Act, proposed adding AI systems that generate complex texts without human oversight to the “high-risk” list, in a bid to stop ChatGPT from mass-producing falsehoods.

Conservative factions in the European Parliament and several members of Tudorache’s own Liberal group were skeptical of the plan. Right-of-center politician Axel Voss, who has a formal say in Parliament’s position, claimed the proposal “would make many things high-risk, that are not risky at all.”

But critics and observers say the proposal only scratched the surface of the problem with general-purpose AI. “It’s not great to just put text-making systems on the high-risk list: you have other general-purpose AI systems that present risks and also ought to be regulated,” said Mark Brakel, director of policy at the Future of Life Institute and an AI policy expert.

The two lead MEPs are also pushing for tougher regulations on the creation and use of ChatGPT and comparable AI models, including better risk management and more openness about the technology’s inner workings.

Additionally, they are attempting to impose stricter regulations on major service providers while maintaining a less stringent regime for regular users experimenting with the technology.

Those who work in fields like teaching, hiring, finance, and law enforcement need to be informed “of what it entails to use this kind of system for purposes that have a significant risk for the fundamental rights of individuals,” Benifei declared. 

As Parliament struggles to pin down its position on ChatGPT, Brussels is preparing for the negotiations that will follow.

Trilateral discussions between the European Commission, the Council of the EU, and the European Parliament are expected to begin in April. There, the three parties may find themselves at an impasse as they try to agree on how to handle ChatGPT.

Big Tech companies are watching from the sidelines, especially those with skin in the game like Microsoft and Google.

Microsoft’s Chief Responsible AI Officer Natasha Crampton has said that the EU’s AI Act should “maintain its focus on high-risk use cases,” implying that general-purpose AI systems like ChatGPT are rarely used for risky activities, serving instead for mundane tasks like drafting documents and helping write code.

“We want to make sure that high-value, low-risk use cases continue to be available for Europeans,” Crampton remarked.

Microsoft has invested in ChatGPT, which was developed by the U.S. research lab OpenAI and is now central to the company’s plan to revitalize Bing as a search engine. OpenAI did not respond to a request for comment.
