TLDR
- Ilya Sutskever, co-founder and former chief scientist of OpenAI, has launched a new AI startup called Safe Superintelligence Inc. (SSI).
- SSI aims to create a safe and powerful AI system, prioritizing safety over commercial pressures and product cycles.
- Sutskever is starting the company with Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked at OpenAI.
- The company’s sole focus is on developing safe superintelligence, and it plans to advance capabilities while ensuring safety remains a top priority.
- Sutskever’s departure from OpenAI in May followed a disagreement with CEO Sam Altman over the company’s approach to AI safety.
Ilya Sutskever, one of the co-founders and former chief scientist of OpenAI, has launched a new AI startup called Safe Superintelligence Inc. (SSI).
The company, which Sutskever announced on Wednesday, June 19, 2024, aims to create a safe and powerful AI system, prioritizing safety over commercial pressures and product cycles.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It's called Safe Superintelligence…
— SSI Inc. (@ssi) June 19, 2024
Sutskever, who left OpenAI in May following a disagreement with CEO Sam Altman over the company’s approach to AI safety, is starting SSI with Daniel Gross, a former AI lead at Apple, and Daniel Levy, who previously worked as a member of technical staff at OpenAI.
The company has offices in Palo Alto, California, and Tel Aviv, Israel, where it is currently recruiting technical talent.
In a post announcing the launch of SSI, Sutskever emphasized the company’s singular focus on developing safe superintelligence, stating that it is “our mission, our name, and our entire product roadmap.”
The company plans to advance AI capabilities as quickly as possible while ensuring that safety remains a top priority, allowing it to “scale in peace.”
SSI’s approach to AI development sets it apart from other prominent AI companies, such as OpenAI, Google, and Microsoft, which often face external pressure to deliver products and meet commercial demands.
By focusing solely on safe superintelligence, SSI aims to avoid distractions from management overhead and product cycles, ensuring that safety, security, and progress are insulated from short-term commercial pressures.
Sutskever’s decision to launch SSI comes after his departure from OpenAI, where he co-led the company’s Superalignment team alongside Jan Leike as part of its efforts to improve AI safety. Both researchers left the company in May, and Leike has since joined rival AI company Anthropic.
As SSI begins its journey to develop safe superintelligence, the company is likely to attract significant interest from investors, given the growing demand for advanced AI systems and the impressive credentials of its founding team.
While Sutskever declined to discuss SSI’s funding situation or valuation in an interview with Bloomberg, co-founder Daniel Gross stated that raising capital is not expected to be a challenge for the company.