
A former OpenAI employee has slammed the company, likening it to the builders of the Titanic. 🚢⚠️

William Saunders, a former safety researcher at OpenAI, recently likened the AI company to White Star Line, the builder of the Titanic. Having spent three years on OpenAI’s superalignment team, Saunders revealed in a recent interview that he resigned to avoid “working for the AI Titanic.”

During his tenure at OpenAI, William Saunders debated whether the company was more akin to NASA’s Apollo program or the Titanic. His concerns centered on OpenAI’s pursuit of Artificial General Intelligence (AGI) while also launching paid products. Saunders argued that the company focuses on creating “shiny products” rather than emphasizing safety and risk assessment, making it unlike the Apollo program.

He highlighted that the Apollo program meticulously predicted and assessed risks, maintaining “sufficient redundancy” to handle serious problems, as demonstrated by Apollo 13. In contrast, the White Star Line built the Titanic with watertight compartments and marketed it as unsinkable but failed to provide enough lifeboats, leading to disaster.

Saunders fears that OpenAI relies too heavily on its current, inadequate safety measures and suggested that the company should postpone releasing new AI models until it has explored their potential risks. As the leader of a team focused on understanding AI language model behaviors, Saunders stressed the importance of developing techniques to evaluate whether these systems “hide dangerous capabilities or motivations.”

Saunders expressed disappointment with OpenAI’s recent actions. He left the company in February, and in May, OpenAI disbanded the superalignment team shortly after releasing its advanced AI model, GPT-4o. OpenAI’s approach to safety concerns and its rapid development pace have faced criticism from various employees and experts, who advocate for increased government regulation to prevent potential disasters.

In early June, current and former employees of DeepMind and OpenAI published an open letter arguing that existing oversight mechanisms are inadequate to prevent a catastrophe for humanity. Additionally, Ilya Sutskever, co-founder and former chief scientist of OpenAI, resigned to establish Safe Superintelligence, a startup dedicated to prioritizing AI safety research.
