I’m sure you’ve heard the news… AI labs are sprinting toward superintelligence. Now, MIT physicist Max Tegmark warns they must do what Oppenheimer did before Trinity: crunch the odds of runaway AI.

From Trinity to the Next Big Test

In 1945, J. Robert Oppenheimer’s team ran safety calculations before the first atomic blast; physicist Arthur Compton famously put the odds of the test igniting the atmosphere at less than one in three million. Today, Tegmark and his students say AI builders need their own “Compton constant”: the probability that an all-powerful AI slips our leash.

Calculating the Compton Constant

  • Define the odds: Use data on model behavior to estimate the chance an AI system evades shutdown commands (see the sketch after this list).
  • Cross-check results: Multiple firms should share their calculations to build political will for global rules.
  • Human in the loop: Regulators, ethicists, and engineers review each estimate before moving forward.
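
To make the first bullet concrete, here is a minimal sketch of what such an estimate could look like, assuming a hypothetical red-team protocol that counts shutdown-evasion incidents across repeated trials. The function name, the counts, and the choice of a one-sided Clopper-Pearson bound are all illustrative assumptions, not Tegmark's actual method.

```python
from scipy.stats import beta

def compton_constant_upper_bound(evasions: int, trials: int,
                                 confidence: float = 0.95) -> float:
    """Hypothetical estimator: a one-sided Clopper-Pearson upper bound
    on the probability that a model evades a shutdown command, given
    red-team trial counts. Illustrative only, not Tegmark's method."""
    if evasions >= trials:
        return 1.0
    # Exact binomial upper limit via the beta distribution quantile.
    return beta.ppf(confidence, evasions + 1, trials - evasions)

# Example with made-up numbers: zero evasions in 3,000 trials still
# leaves roughly a 0.1% upper bound at 95% confidence.
print(compton_constant_upper_bound(0, 3000))  # ~0.000998
```

One takeaway from a bound like this: with zero observed failures it reduces to the classic "rule of three" (roughly 3/N), so even a spotless test record only caps the risk rather than certifying zero risk, which is exactly why the proposal stresses cross-checking estimates across firms.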

A Global Safety Roadmap

Tegmark helped draft the Singapore Consensus, uniting experts from OpenAI, DeepMind, and other top labs. It calls for:

  1. Measuring impact: Track current and future AI risks.
  2. Setting behavior: Specify and test how AI should act.
  3. Controlling systems: Build fail-safes into every major AI release.

Frequently Asked Questions

Q1: What is the Compton constant?
It’s a risk metric—modeled after nuclear-test odds—that estimates how likely an advanced AI is to escape human control.

Q2: Who is Max Tegmark?
He’s an MIT physicist and AI-safety expert who co-founded the Future of Life Institute and co-authored the Singapore Consensus on AI safety.

Q3: What happens if firms don’t calculate these odds?
Without clear risk numbers, policymakers may lack the urgency to agree on global safety rules—raising the chance of an unchecked AI crisis.

Source: The Guardian
