A former lead scientist at OpenAI says he's struggled to secure resources to research existential AI risk, as the startup ...
The entire OpenAI team focused on the existential dangers of AI has either resigned or been absorbed into other research ...
"Over the past years, safety culture and processes have taken a backseat to shiny products," one team member wrote.
OpenAI eliminated a team focused on the risks posed by advanced artificial intelligence less than a year after it was formed ...
The company ended the project less than a year after it started.
OpenAI's Superalignment team was formed in July 2023 to mitigate AI risks, like "rogue" behavior. OpenAI has reportedly ...
In a move that has stunned the tech community, OpenAI, a leading artificial intelligence research lab, has reportedly ...
OpenAI says it is now integrating its Superalignment group more deeply across its research efforts to help the company ...
A new report claims OpenAI has disbanded its Superalignment team, which was dedicated to mitigating the risk of a superhuman ...
OpenAI dissolves 'superalignment team' led by Ilya Sutskever and Jan Leike; safety efforts now led by John Schulman. Departures ...