OpenAI Says It’s ‘Dedicated’ to Making Sure Its AI Tools Don’t Cause Harm in Letter to US Lawmakers


OpenAI, responding to questions from US lawmakers, said it’s dedicated to making sure its powerful AI tools don’t cause harm, and that employees have ways to raise concerns about safety practices.

The startup sought to reassure lawmakers of its commitment to safety after five senators including Senator Brian Schatz, a Democrat from Hawaii, raised questions about OpenAI’s policies in a letter addressed to Chief Executive Officer Sam Altman.

“Our mission is to ensure artificial intelligence benefits all of humanity, and we are dedicated to implementing rigorous safety protocols at every stage of our process,” Chief Strategy Officer Jason Kwon said Wednesday in a letter to the lawmakers.

Specifically, OpenAI said it will continue to uphold its promise to allocate 20 percent of its computing resources toward safety-related research over multiple years. The company also pledged in its letter that it won't enforce non-disparagement agreements against current and former employees, except in specific cases of a mutual non-disparagement agreement. OpenAI's former restrictions on departing employees had come under scrutiny for being unusually restrictive; the company has since said it changed those policies.

Altman later elaborated on the company's strategy on social media.

“Our team has been working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations,” he wrote on X.

Kwon, in his letter, also cited the recent creation of a safety and security committee, which is currently undergoing a review of OpenAI’s processes and policies.

In recent months, OpenAI has faced a series of controversies over its commitment to safety and its employees' ability to speak out on the topic. Several key members of its safety-related teams resigned, including co-founder and chief scientist Ilya Sutskever and Jan Leike, who led the company's team devoted to assessing long-term safety risks and publicly shared concerns that the company was prioritizing product development over safety.

© 2024 Bloomberg LP





