In the United States, Congress faces persistent gridlock over a comprehensive artificial intelligence bill. Meanwhile, the European Union (EU) has taken decisive steps towards establishing safeguards as the technology continues to expand globally.
Experts, including Reiko Feaver of CM Law, warn that a hands-off approach in the U.S. could leave companies navigating a ‘patchwork’ of state laws, especially against the backdrop of international legislation like the EU AI Act.
With several U.S. states already enacting AI-centric legislation, companies operating on a global scale will soon face varied, and sometimes conflicting, legal frameworks governing the deployment of AI technology.
According to Helen Christakos from A&O Shearman, there are consistent themes across different jurisdictions: “There are threads of similarity that we’re seeing across jurisdictions, and some of those threads are around transparency and bias. These are common themes, but the implementation is a bit different in different jurisdictions, so the focus is on how we implement.”
When the EU AI Act passed, experts anticipated a ripple effect across nations and hoped it might guide U.S. legislators toward a robust AI regulatory framework. With the incoming Trump-Vance administration, however, those hopes seem increasingly distant.
Nevertheless, the EU AI Act holds significance for U.S. entities that provide AI-driven services to European consumers, setting a possible precedent even as domestic regulations remain undefined.
The EU AI Act introduces a risk classification system, grading AI systems on a scale from ‘low risk’ up to ‘unacceptable risk.’ Applications at the higher-risk end of that scale include biometric identification and workplace emotion recognition.
Regulating Models Under the EU AI Act
As Di Lorenzo notes, use cases in fashion and apparel, such as using AI to improve warehouse efficiency, will likely fall under the ‘low risk’ category, exempting them from the EU AI Act’s most stringent requirements. Other applications, like AI-generated digital models, will still require transparency measures even if they are classified as low risk.
She added, “If you create deepfakes—images that present something that has never happened—you need to qualify them, indicating, for example, ‘Generated by AI.’” Transparency provisions, however, will not take effect until 2026.
In the interim, both Christakos and Di Lorenzo urge brands and retailers to assess their AI systems to ensure compliance with upcoming regulations.
A Pew Research report found that 20 percent of U.S. adults use ChatGPT for work-related tasks. That widespread workplace use matters because companies deploying AI technologies that affect EU residents assume responsibility for the risk levels associated with those systems.
Di Lorenzo explained that companies must develop guidelines for employees’ use of general-purpose AI models like ChatGPT, so that potentially high-risk uses do not occur without oversight.
Some provisions of the EU AI Act, particularly those concerning systems posing an ‘unacceptable risk,’ take effect in 2025, but most are scheduled for implementation in 2026. As preparations continue across sectors, companies worldwide will need to remain vigilant about evolving regulations.
“The important thing is to identify which are the laws that apply to you, and try to define a common denominator under those laws,” advised Di Lorenzo.