6 Jun, 2024 14:35

ChatGPT maker ignoring fatal threat posed by AI – insider

Future advances in AI technology could destroy or catastrophically harm humanity, a researcher told the NYT
OpenAI is aware of major risks if it succeeds in building an artificial general intelligence (AGI) system, but is ignoring them, Daniel Kokotajlo, a former researcher at the US technology firm, has warned in an interview with the New York Times.

AGI is a hypothetical type of artificial intelligence capable of understanding, learning, and reasoning across a broad range of tasks. The technology, if successfully created, would replicate or forecast human behaviour.

According to Kokotajlo, who left OpenAI’s governance team in April, the chance that “the advanced AI” will wreck humanity is around 70%, but the San Francisco-based developer is pushing ahead regardless.

“OpenAI is really excited about building AGI, and they are recklessly racing to be the first there,” the former employee told the paper.

The 31-year-old researcher joined OpenAI two years ago and was tasked with forecasting the technology’s progress. He told the NYT he had concluded not only that the industry would develop AGI by 2027, but that there was a strong chance the technology would catastrophically harm or even destroy humanity.

The former staffer said he told OpenAI CEO Sam Altman that the corporation should “pivot to safety” and spend more time and resources on countering the risks posed by AI rather than continuing to make it smarter. Kokotajlo claimed Altman agreed with him, but that nothing had changed since then.

Kokotajlo is part of a group of OpenAI insiders who recently released an open letter urging AI developers – including OpenAI – to establish greater transparency and more protections for whistleblowers.

OpenAI has defended its safety record amid employee criticism and public scrutiny, saying it is proud of its track record of providing the most capable and safest AI systems, and that it believes in its scientific approach to addressing risks.

“We agree that rigorous debate is crucial given the significance of this technology, and we’ll continue to engage with governments, civil society and other communities around the world,” the NYT cited the tech firm as saying.
