Vice President Kamala Harris will meet with the chief executives of four leading firms developing artificial intelligence, as the Biden administration rolls out a set of measures aimed at ensuring the rapidly advancing technology improves lives without jeopardizing people's rights and safety.
The Democratic administration plans to announce a $140 million investment to establish seven new AI research institutes, according to administration officials who previewed the effort for reporters.
In addition, the White House Office of Management and Budget is expected to issue guidance in the next several months on how federal agencies can use AI tools.
Leading AI developers have also separately committed to participate in a public evaluation of their systems in August at the DEF CON hacker convention in Las Vegas.
On Thursday, Harris and other administration officials plan to meet with the CEOs of Alphabet, Anthropic, Microsoft, and OpenAI to discuss the dangers they see in the current state of AI development.
The administration's message to the firms is that they have a role to play in reducing AI's risks and that they can work with the government to do so.
Authorities in the United Kingdom are also examining the risks posed by AI. Britain's competition watchdog said it is opening a review of the AI market, focusing on the technology underlying chatbots such as OpenAI's ChatGPT.
President Joe Biden said last month that AI can help address disease and climate change, but it could also harm national security and disrupt the economy in destabilizing ways.
The release of the ChatGPT chatbot this year has intensified debate about AI and the government's role in overseeing the technology.
AI raises ethical and societal concerns because it can generate convincing fake images and human-like text.
OpenAI has kept secret the data on which its AI systems have been trained. That makes it hard for those outside the company to understand why ChatGPT produces biased or false answers to prompts, or to address concerns about whether it draws on copyrighted works.
Margaret Mitchell, chief ethics scientist at the AI startup Hugging Face, said companies worried about being held liable for anything in their training data also have little incentive to rigorously track it.
“I think it might not be possible for OpenAI to actually detail all of its training data at a level of detail that would be really useful in terms of some of the concerns around consent and privacy and licensing,” Mitchell said in an interview Tuesday. “From what I know of tech culture, that just isn’t done.”
In theory, at least, some kind of disclosure law could compel AI providers to open their systems to third-party scrutiny. But because AI systems are built on top of earlier models, providing greater transparency after the fact won't be straightforward for companies.
“I think it’s really going to be up to the governments to decide whether this means that you have to trash all the work you’ve done or not,” Mitchell stated.
“Of course, I kind of imagine that at least in the U.S., the decisions will lean towards the corporations and be supportive of the fact that it’s already been done,” she added. “It would have such massive ramifications if all these companies had to essentially trash all of this work and start over.”