Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
[email protected]
I’m sure you’ve heard the news… AI labs are sprinting toward superintelligence. Now, MIT physicist Max Tegmark warns they must do what Oppenheimer did before Trinity: crunch the odds of runaway AI.
In 1945, J. Robert Oppenheimer’s team ran safety calculations before the first atomic blast. Today, Tegmark and his students say AI builders need their own “Compton constant”—named for physicist Arthur Compton, who weighed the odds that the Trinity test would ignite the atmosphere—measuring the probability that an all-powerful AI slips our leash.
Tegmark helped draft the Singapore Consensus, which unites experts from OpenAI, DeepMind, and other leading labs around shared AI-safety research priorities: assessing the risks posed by advanced models, building systems that behave as intended, and keeping them under human control.
Q1: What is the Compton constant?
It’s a risk metric—modeled after nuclear-test odds—that estimates how likely an advanced AI is to escape human control.
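The article doesn’t spell out how such a probability would be computed, but one standard piece of risk arithmetic shows why even a tiny per-deployment estimate matters: independent chances compound. The sketch below is a toy illustration under that independence assumption, not Tegmark’s actual methodology, and the probability `p` is purely hypothetical.

```python
# Toy illustration (NOT Tegmark's method): if each deployment of an
# advanced AI carries an independent probability p of a loss-of-control
# event, the chance of at least one such event across n deployments is
# 1 - (1 - p)^n, which grows quickly even for small p.

def cumulative_escape_risk(p: float, n: int) -> float:
    """Probability of at least one loss-of-control event in n
    independent trials, each with per-trial probability p."""
    return 1 - (1 - p) ** n

if __name__ == "__main__":
    p = 0.001  # hypothetical per-deployment estimate
    for n in (1, 100, 1000):
        print(f"n={n}: {cumulative_escape_risk(p, n):.3f}")
```

With a one-in-a-thousand per-deployment risk, a thousand deployments push the cumulative odds above 60 percent, which is the kind of compounding argument behind calls for explicit risk numbers.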
Q2: Who is Max Tegmark?
He’s an MIT physicist and AI-safety expert who co-founded the Future of Life Institute and co-authored the Singapore Consensus on AI safety.
Q3: What happens if firms don’t calculate these odds?
Without clear risk numbers, policymakers may lack the urgency to agree on global safety rules—raising the chance of an unchecked AI crisis.
Source: The Guardian