Months after resigning from AI developer OpenAI, former chief scientist Ilya Sutskever has raised $1 billion for his new venture, Safe Superintelligence (SSI), the company announced on Wednesday.
According to SSI, the funding round included investments from NFDG, a16z, Sequoia, DST Global, and SV Angel. Reuters, citing sources “close to the matter,” reported that SSI is already valued at $5 billion.
SSI is building a straight shot to safe superintelligence.
We’ve raised $1B from NFDG, a16z, Sequoia, DST Global, and SV Angel.
We’re hiring: https://t.co/DmFWnrc1Kr
— SSI Inc. (@ssi) September 4, 2024
“Mountain: identified,” Sutskever tweeted on Wednesday. “Time to climb.”
Safe Superintelligence did not immediately respond to a request for comment from Decrypt.
In May, Sutskever and Jan Leike resigned from OpenAI, following the departure of Andrej Karpathy in February. In a post on Twitter, Leike cited a lack of resources and safety focus as reasons for his decision to leave the ChatGPT developer.
“Stepping away from this job has been one of the hardest things I have ever done,” Leike wrote. “Because we urgently need to figure out how to steer and control AI systems much smarter than us.”
Sutskever’s departure came, according to a report by The New York Times, after he joined with a handful of OpenAI board members and executives to oust co-founder and CEO Sam Altman in November 2023. Altman was reinstated a week later.
In June, Sutskever announced the launch of his new AI development company, Safe Superintelligence Inc., which was co-founded by Daniel Gross, a former AI lead at Apple, and Daniel Levy, who also previously worked at OpenAI.
According to Reuters, Sutskever serves as SSI’s chief scientist, with Levy as principal scientist, and Gross handling computing power and fundraising.
“SSI is our mission, our name, and our entire product roadmap, because it is our sole focus,” Safe Superintelligence wrote on Twitter in June. “Our team, investors, and business model are all aligned to achieve SSI.”
With generative AI becoming more ubiquitous, developers have looked for ways to assure consumers and regulators that their products are safe.
In August, OpenAI and Claude AI developer Anthropic announced agreements with the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) to establish formal collaborations with the U.S. AI Safety Institute (AISI) that would give the agency access to major new AI models from both companies.
“We are happy to have reached an agreement with the U.S. AI Safety Institute for pre-release testing of our future models,” OpenAI co-founder and CEO Sam Altman wrote on Twitter. “For many reasons, we think it’s important that this happens at the national level. [The] U.S. needs to continue to lead.”