Address
33-17, Q Sentral.
2A, Jalan Stesen Sentral 2, Kuala Lumpur Sentral,
50470 Federal Territory of Kuala Lumpur
Contact
+603-2701-3606
info@linkdood.com
The new international treaty on artificial intelligence (AI) is all about keeping AI in check to make sure it’s safe and fair for everyone. The UK, along with big players like the EU, US, and Israel, signed this treaty to tackle the problems that can arise from AI tech. This is a big deal because it’s the first agreement of its kind that actually has legal power behind it. Spearheaded by the Council of Europe, which is big on human rights, the treaty fills in the blanks left by how fast AI is moving, meshing well with other rules like the EU AI Act.
The treaty lays down some ground rules for AI. It says that AI should protect our personal data, not discriminate, and respect human dignity. Governments need to make sure AI doesn’t spread fake news or make unfair decisions, like messing up job or benefit applications. It’s all about keeping things transparent and making sure we can hold AI accountable.
The treaty isn’t just for government bodies; it covers private companies too. Anyone making or using AI has to check how it might affect our rights and democracy and share their findings. If AI makes a decision about you, you should be able to challenge it and complain if something seems off. Also, when you’re dealing with AI, like in customer service, they have to make it clear you’re talking to a machine, not a person.
Companies have to be open about how they build and use AI. They have to make sure their AI systems don’t unfairly target or discriminate against people, especially in areas like hiring, banking, or government benefits. The treaty pushes for a deep dive into these AI tools to keep them ethical and transparent.
Now that the UK has signed this treaty, it needs to check its own rules to make sure they line up with the treaty’s standards. The government is reviewing current laws and planning to bring in a new AI law shaped by what people say during public consultations.
If a company doesn’t follow the treaty’s rules, it could face penalties. For example, the EU AI Act bans AI that scrapes facial images from public cameras or the internet to build face-recognition databases. AI that scores people based on their social behavior in unclear or unfair ways is also a big no-no.
The treaty keeps a strong focus on human rights. AI must respect fundamental rights like privacy, fairness, and freedom of speech. If an AI tool steps out of line here, it could get into serious trouble.
By signing this treaty, the UK is helping set a global standard that protects our rights and keeps AI development safe and responsible.
1. What is the main goal of the AI Safeguard Treaty?
The AI Safeguard Treaty is designed to make sure that AI technology doesn’t violate human rights, harm democracy, or disrupt the rule of law. It sets clear rules that both governments and private companies must follow, aiming to ensure that AI systems are fair, transparent, and accountable.
2. How does the treaty affect companies using AI?
Companies must be open about how their AI systems work, ensuring they don’t promote discrimination or bias, especially in critical areas like hiring or financial services. They’re also required to review their AI tools for fairness and transparency, and let people know when they’re interacting with an AI system instead of a human.
3. What happens if a company or government doesn’t follow the treaty’s guidelines?
If an organization breaks the rules set out by the treaty, it could face penalties. For example, AI systems that use face recognition from public sources or unfairly classify people based on behaviors could be banned under the EU’s regulations that align with the treaty.
Sources: The Guardian