In July 2024, the European Union took a major step in governing artificial intelligence (AI) by publishing the Artificial Intelligence Act (AI Act). The new rules are a big deal because they aim to make sure AI is developed and used safely and fairly across Europe. Think of it as setting ground rules so that, as AI technology grows, it does so in a way that's good for everyone.
Different Rules for Different Risks: The AI Act sorts AI applications into categories based on how risky they are. High-risk AI, like systems that handle sensitive data or affect people's rights, must meet strict requirements. Less risky AI has fewer obligations, focused mostly on being transparent about what the AI does and protecting user data.
No-Go Zones for AI: Some uses of AI are banned outright. For example, AI can't be used to scrape biometric data indiscriminately, score people socially like in some dystopian movie, or manipulate people through hidden or deceptive techniques. These rules are there to protect people from AI that could invade their privacy or exploit them.
Who's in Charge?: The Act sets up several bodies to oversee AI's rollout, including a dedicated AI Office within the European Commission, a panel of independent experts, and a board with representatives from each EU country. Together they will make sure everyone follows the rules and offer guidance on using AI responsibly.
Breaking the Rules is Costly: A company that doesn't follow the rules can be fined up to 35 million euros or 7% of its global annual turnover, whichever is higher. That's a lot of money, and it shows how seriously the EU takes these rules.
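To make the "whichever is higher" rule concrete, here is a minimal illustrative sketch in Python. The function name and the turnover figures are hypothetical examples chosen for illustration, not taken from the Act or from any real company.

```python
def max_ai_act_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative sketch of the AI Act's top fine tier:
    35 million euros or 7% of global annual turnover, whichever is higher."""
    flat_cap = 35_000_000
    turnover_cap = 0.07 * global_annual_turnover_eur
    return max(flat_cap, turnover_cap)

# Hypothetical turnover figures, in euros:
print(max_ai_act_fine(100_000_000))    # 35,000,000  (flat cap is higher)
print(max_ai_act_fine(2_000_000_000))  # 140,000,000 (7% of turnover is higher)
```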
Getting Ready for Change: The rules begin to apply in stages from 2025, giving companies the rest of 2024 to prepare. This transition period helps businesses adjust without immediate penalties.
Tech and Innovation: The rules aim to create a safe space for new technology to grow without crossing ethical lines or infringing human rights. It's about balancing innovation with responsibility.
Banking and Finance: Banks and other financial institutions need to be clear and accountable when they use AI, especially when making decisions that affect customers. The Act pushes for better controls and oversight on how AI is used in finance.
Public Sector: AI used in government services faces extra scrutiny to make sure it doesn't violate people's rights. These systems will undergo thorough checks to ensure they are transparent and fair.
With these rules, the EU isn't just protecting its own citizens; it's also setting an example for other countries to follow in regulating AI. This could pave the way for a global approach to keeping AI ethical and trustworthy.
1. What is the purpose of the EU AI Act?
The EU AI Act aims to regulate the development and use of artificial intelligence (AI) within the European Union to ensure safety, transparency, and trustworthiness. It sets rules to protect people from the potential risks and harms of AI while encouraging innovation and technological advancement in a responsible way.
2. How does the AI Act classify and regulate AI applications?
The AI Act uses a risk-based approach to classify AI applications into different categories based on their potential risk levels. High-risk AI applications, such as those affecting fundamental rights or using sensitive biometric data, must comply with stringent requirements. Lower-risk applications have fewer rules but still need to ensure transparency and data protection.
3. What are the penalties for not complying with the AI Act?
Companies that fail to comply with the AI Act can face significant fines, up to 35 million euros or 7% of their global annual turnover, whichever is higher. This underscores the EU’s commitment to enforcing these regulations and ensuring that AI is used responsibly and ethically.
Sources: Bloomberg