Ilya Sutskever’s New Venture: Safe Superintelligence Inc.

Digi Asia News

The artificial intelligence (AI) community has been buzzing with the question, “Where’s Ilya?” For months, the absence of Ilya Sutskever, a co-founder of OpenAI, left many in the tech world wondering about his next move. Following his involvement in the dramatic board ouster and subsequent reinstatement of Sam Altman as OpenAI’s CEO in late 2023, Sutskever went quiet, leaving his future plans shrouded in mystery. In June 2024, however, Sutskever broke his silence to announce a new project: Safe Superintelligence Inc.

The Vision of Safe Superintelligence Inc.

Sutskever’s new venture aims to create a safe and powerful AI system within a research-focused organization. Unlike competitors such as OpenAI, Google, and Anthropic, Safe Superintelligence Inc. has no immediate plans to commercialize its AI products. Instead, the company is dedicated solely to developing a “safe superintelligence.”

A Unique Approach to AI Safety

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then,” Sutskever explained in an exclusive interview. The goal is to insulate the organization from the external pressures that come with product development and market competition. By prioritizing safety from the outset, Sutskever hopes to achieve engineering breakthroughs that build safety into the system itself, rather than relying on reactive measures applied after the fact.

The Founding Team

Joining Sutskever in this ambitious endeavor are two co-founders: Daniel Gross and Daniel Levy.

Daniel Gross: The Investor and AI Leader

Daniel Gross, a former AI lead at Apple and a prominent investor, brings deep experience and a keen eye for high-profile AI startups. Gross has backed numerous AI ventures, including Keen Technologies, founded by John Carmack, the renowned programmer and video game pioneer. His expertise in AI investment and his vision for the field make him a valuable asset to Safe Superintelligence Inc.

Daniel Levy: The AI Model Expert

Daniel Levy, who worked alongside Sutskever at OpenAI, has built a strong reputation for training large AI models. Levy shares Sutskever’s vision of a lean, focused team dedicated to the single objective of creating a safe superintelligence. “I think the time is right to have such a project,” Levy said. “My vision is exactly the same as Ilya’s: a small, lean cracked team with everyone focused on the single objective of a safe superintelligence.”

The Quest for Safe Superintelligence

Sutskever’s focus on AI safety is not just a priority but the core mission of Safe Superintelligence Inc. The challenge lies in defining and achieving a level of safety that ensures AI does not harm humanity. “By safe, we mean safe like nuclear safety as opposed to safe as in ‘trust and safety,’” Sutskever clarified.

Engineering Safety into AI

Sutskever suggests that Safe Superintelligence Inc. will pursue safety through engineering breakthroughs that are intrinsic to the AI system itself. This proactive approach contrasts with the more common practice of bolting safety measures on as afterthoughts. The goal is an AI that not only avoids harm but also operates on principles aligned with human values such as liberty and democracy.

The Journey Ahead

The AI industry has long grappled with the question of how to make AI systems safer. While theoretical discussions have dominated the conversation, deep engineering solutions have been less common. Sutskever believes that his years of contemplation on AI safety will now translate into practical approaches at Safe Superintelligence Inc.

The Economic Reality

Safe Superintelligence Inc. faces significant economic challenges. Unlike OpenAI, which partnered with Microsoft to secure funding for its computational needs, Safe Superintelligence Inc. will need to raise capital without the prospect of immediate financial returns. Investors are betting on Sutskever’s reputation and the potential for groundbreaking advancements in AI safety.

A Throwback to OpenAI’s Original Vision

In many ways, Safe Superintelligence Inc. is a return to the original vision of OpenAI: a research organization focused on creating an artificial general intelligence (AGI) that surpasses human capabilities. Over time, OpenAI had to adapt to the financial demands of AI research, leading to partnerships and product development. Safe Superintelligence Inc. aims to avoid these distractions and stay true to its mission.

The Fascination with Sutskever’s Plans

Sutskever’s near-mythical status in the AI industry has made his plans a topic of intense speculation. As a key figure in the development of AI at Google and OpenAI, his decision to start Safe Superintelligence Inc. is seen as a significant development. His advocacy for building ever-larger models has already left a lasting impact on the AI landscape, most notably with the rise of ChatGPT.

Personal Reflections on the Journey

Reflecting on the past few months, Sutskever described his experience as “very strange.” While he remains tight-lipped about the specifics of his departure from OpenAI and his relationship with Altman, he is clearly focused on the future. “It’s very strange. I don’t know if I can give a much better answer than that,” he said, summarizing his recent experiences.

Safe Superintelligence Inc. represents a bold and ambitious attempt to create a safe and powerful AI system. Under the leadership of Ilya Sutskever and his co-founders, the company aims to address one of the most pressing challenges in the AI industry: safety. By prioritizing research and avoiding the commercial pressures its competitors face, Safe Superintelligence Inc. hopes to achieve breakthroughs that could shape the future of AI.

As the company embarks on this journey, it remains to be seen whether it can meet its lofty goals. However, with Sutskever’s expertise and the backing of a talented team, Safe Superintelligence Inc. is poised to make significant contributions to the field. The AI community and investors alike will be watching closely as this new venture unfolds, hopeful that it will pave the way for a safer and more powerful AI future.

 
