
🚀🚀 Safe Superintelligence, founded by an OpenAI co-founder, secures a whopping $1B in funding! The AI race is heating up! 🔥

According to a Reuters report, OpenAI co-founder Ilya Sutskever’s new AI start-up, Safe Superintelligence (SSI), has secured $1 billion in funding. The company aims to use these funds to create advanced AI systems that prioritize safety while exceeding human capabilities.

Although the company hasn’t officially disclosed the valuation at which the funding was raised, sources told Reuters that SSI is now valued at $5 billion. The company plans to use the funds to recruit top talent and build a strong team of engineers and researchers in locations like Palo Alto, California, and Tel Aviv, Israel, according to the Reuters report.

The funding round saw participation from prominent investors, including Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. NFDG also joined the round.

The Founders of Safe Superintelligence

Safe Superintelligence, an AI start-up, was founded in June of this year by Ilya Sutskever, Daniel Gross, and Daniel Levy. Levy, a former OpenAI researcher, is a co-founder and leads optimization at Safe Superintelligence. Gross, an entrepreneur and the company’s technology strategist, previously co-founded another AI start-up, Cue, which was later acquired by Apple.

Sutskever, co-founder of OpenAI, left the company in May this year to launch his own AI venture. Shortly after his resignation, he posted on X, “After almost a decade, I have made the decision to leave OpenAI. The company’s trajectory has been nothing short of miraculous, and I’m confident that OpenAI will build AGI that is both safe and beneficial.”

In response, OpenAI CEO Sam Altman wrote on X, “Ilya and OpenAI are going to part ways. This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light in our field, and a dear friend.”

Previously, OpenAI faced leadership challenges when the board accused Sam Altman of lacking transparency. Media reports suggested a divide, with Sutskever concentrating on AI safety while Altman and others focused on developing new technologies. After significant upheaval, Altman, who had been abruptly fired in November 2023, was reinstated as CEO within days and rejoined the company’s board in March.

Safe Superintelligence’s Sole Focus on Pushing AI Boundaries

According to the co-founders, Safe Superintelligence aims to develop ‘superintelligence,’ meaning AI that surpasses human intelligence. In a company blog post, they noted, “Our singular focus ensures we are not distracted by management overhead or product cycles, and our business model protects safety, security, and progress from short-term commercial pressures.”

This start-up’s funding comes amid a significant surge in AI start-up investments. From April to June, investments in AI start-ups reportedly rose to $24 billion, more than double the previous quarter’s total, according to Crunchbase. Furthermore, Crunchbase data reveals that the first half of this year saw $35.6 billion invested in AI start-ups, a 24% increase from the $28.7 billion invested in the same period last year.