Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
[email protected]
Ilya Sutskever, a co-founder of OpenAI, is well known in the AI world for pushing the boundaries of machine learning and artificial intelligence. Recently, he made headlines for his role in the board's brief ouster of Sam Altman as OpenAI's CEO, then kept a low profile, leaving many to wonder what he was planning next.
In mid-May, Sutskever surprised everyone by leaving OpenAI. For a while, no one knew what he was up to, sparking plenty of speculation about his next move in AI.
Sutskever is now pouring his energy into his new venture, Safe Superintelligence Inc. The company aims to build powerful AI that is also safe to use. The move signals his interest in research that is free from the usual commercial pressures and competitive dynamics of the tech world.
Safe Superintelligence Inc. has a clear goal: create AI that is safe first and foremost. That goal reflects growing concerns about the ethical implications of AI and the risks that come with increasingly advanced systems.
With Sutskever's new project emphasizing safety and ethics, it could reshape how the AI field thinks about and handles these issues, and it might even push other companies and research groups to take safety more seriously.
Even though Safe Superintelligence Inc. isn't looking to commercialize its research right now, its work could meaningfully influence the AI industry down the line, especially in how businesses think about adding AI to their products and services.
Source: Bloomberg