
Former OpenAI chief scientist Ilya Sutskever’s startup raises $1 billion


Safe Superintelligence (SSI), a newly established artificial intelligence startup co-founded by Ilya Sutskever, the former chief scientist of OpenAI, has raised $1 billion in funding.

The substantial capital injection will be used to develop AI systems that not only surpass human capabilities but also prioritize safety, a growing concern in the AI community.

According to a Reuters report, SSI, which currently operates with a small team of 10 employees, plans to allocate the funds towards acquiring high-performance computing power and recruiting top-tier talent.

The startup is building its team of researchers and engineers across offices in Palo Alto, California, and Tel Aviv, Israel.

Despite the significant funding, SSI has not disclosed its current valuation, though sources close to the company estimate it at approximately $5 billion.

The successful funding round underscores the continued confidence of investors in AI’s transformative potential, particularly in startups led by exceptional talent.

“Our goal is to focus on research and development for a few years before bringing our product to market,” said Daniel Gross, SSI’s co-founder.

This approach aligns with the broader industry trend of ensuring AI safety amid growing concerns about the potential risks posed by advanced AI systems.

AI safety has become a critical topic, driven by fears that advanced AI systems could operate in ways that are detrimental to human interests or even pose existential threats.

The debate over AI safety is also influencing regulatory discussions, as seen in California, where a proposed bill seeks to impose safety regulations on AI companies.

The bill has divided the industry, with companies like OpenAI and Google opposing it, while others like Anthropic and Elon Musk’s xAI support the initiative.

Sutskever’s departure from OpenAI earlier this year marked a significant shift in his career, setting the stage for SSI’s founding.
