
By Meghna Pradhan

The rapid integration of Artificial Intelligence (AI) into various sectors has raised concerns about accountability, transparency, and fairness. The European Union's AI Act, approved in May 2024 and entering into force in August 2024, aims to regulate AI usage while balancing innovation and governance. This article explores the implications of the AI Act for both innovation and regulation.

The Regulatory Framework for AI

The AI Act, proposed in April 2021 and approved in May 2024, defines AI as a machine-based system operating with varying levels of autonomy. It categorizes AI systems by risk level: unacceptable, high, limited, and minimal. Systems posing unacceptable risk, such as those that manipulate human decision-making or conduct unauthorized facial recognition, are prohibited outright. High-risk applications include AI used in sensitive domains such as healthcare and education, while minimal-risk systems cover general-purpose tools such as music recommendation engines.

The Act applies to AI systems placed on the market or used within the EU, regardless of where the provider is established. It excludes military applications and systems developed solely for scientific research. Violations incur fines of up to EUR 35 million or 7% of global annual turnover for prohibited practices, with lower caps, up to EUR 15 million or 3% of turnover, for other infringements, and proportionately reduced penalties for small businesses.

Implications

The AI Act aims to foster innovation without compromising human rights. However, its definition of AI may not fully capture the technology's trajectory: AI's rapidly evolving nature poses regulatory challenges, as the rise of generative AI systems such as ChatGPT has shown. Compliance costs could stifle competition, particularly for smaller firms, and the exceptions carved out for national security raise concerns about surveillance.

While the Act fills a clear regulatory gap, AI's unpredictability remains a challenge. The EU's approach may shape global standards, as its data-protection rules did before it, but the technology's rapid evolution could outpace the regulation itself.

Views expressed are of the author and do not necessarily reflect the views of the Manohar Parrikar IDSA or of the Government of India.

About the author: Ms Meghna Pradhan, Research Associate–LAWS Project, MP-IDSA

Source: This article was published by Manohar Parrikar IDSA