If AI goes wrong, it can go quite wrong: OpenAI CEO to US lawmakers

Sam Altman, who testified at a hearing in the US Senate in Washington, DC, late on Tuesday, said that the AI industry needs to be regulated by the government as AI becomes "increasingly powerful". "If this technology goes wrong, it can go quite wrong," Altman told them.

Sam Altman, CEO of Microsoft-backed OpenAI, has admitted that if generative artificial intelligence (AI) technology goes wrong, it can go quite wrong, as US senators expressed their fears about AI chatbots like ChatGPT.


US senators grilled him about the potential threats AI poses and raised fears about the 2024 US election.


"If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine," said US Senator Richard Blumenthal.

He added that AI is no longer just a research experiment but a real and present technology.


Altman said he doesn't make money from OpenAI.

He told the committee that OpenAI is working on a copyright system to compensate artists whose art was used to create something new.


"Creators deserve control," he said.

OpenAI's losses reportedly swelled to nearly $540 million last year and are likely to keep rising.


According to The Information, OpenAI's losses doubled as it developed ChatGPT and hired key employees from Google.

In February this year, OpenAI launched a subscription plan, ChatGPT Plus, available for $20 a month.


Meanwhile, Altman's comments came as the WHO said it is imperative to carefully examine the risks of using artificial intelligence (AI) tools such as ChatGPT and Bard in healthcare.

"There is concern that caution that would normally be exercised for any new technology is not being exercised consistently with large language model (LLM) tools," the global health agency said.
