The European Union and its member states face a critical challenge: implementing the AI Act in a timely and user-friendly manner. Our new study highlights the importance of aligning existing European legislation more effectively with the new regulatory requirements.
With the adoption of the Artificial Intelligence Act (AI Act) in 2024, Europe took a pivotal step toward regulating AI systems to ensure their safe use in line with European values. As a complementary piece of the puzzle, the AI Act builds on efforts from previous legislative periods towards making the European Union and, in particular, its single market fit for the digital age.
By August 2026, policymakers must roll out the AI Act incrementally while clarifying how its provisions translate into practical applications. However, as with any complex regulatory framework, early signs suggest that certain components may not fit together seamlessly. Inconsistencies, overlaps and ambiguities risk hampering smooth implementation and creating legal uncertainty – an outcome that should be avoided.
Identifying conflicts between digital and sectoral regulations
On behalf of the Bertelsmann Stiftung’s reframe[Tech] – Algorithms for the Common Good project, Prof. Dr. Philipp Hacker has examined the key challenges and synergies between the AI Act and existing laws in the study “The AI Act between Digital and Sectoral Regulations”. The study underscores the complexity of regulating AI applications that fall under the AI Act’s horizontal approach while also being subject to other digital and sector-specific requirements. It explores these tensions through selected digital regulations – such as the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA) – as well as sectoral laws in finance, healthcare and the automotive industry.
Key findings include:
- The risk assessment duties outlined in the DSA and AI Act can overlap, particularly for platforms integrating generative AI technologies. This creates a challenge in aligning platform-specific risks with AI-related risks.
- To date, there are no clear guidelines on reusing personal data for training generative AI models, complicating compliance with both GDPR and AI Act requirements.
- In the financial sector, differing obligations under data protection laws and the AI Act could lead to overlaps that complicate AI-driven risk assessments.
- In the automotive industry, integrating driver-assistance systems into existing product safety and liability frameworks poses a significant dual regulatory challenge.
- In the healthcare sector, contradictory requirements, combined with already strained approval mechanisms, could slow the adoption of AI-based medical applications. These include AI systems for diagnosing cancer or generating medical reports.
Each of these areas highlights how the need for adjustments and refinements varies by sector. Despite these differences, common structural measures can be identified across digital and sector-specific regulatory frameworks:
In the short term, existing regulations should be better integrated to eliminate redundancies and improve efficiency. In one case, the AI Act has already achieved this effectively: for financial institutions, existing rules on internal organization are explicitly sufficient to meet the AI Act’s requirements for a quality management system – provided they comply with EU standards. Similar integrations could largely be achieved through implementing regulations by the European Commission or through guidelines from national supervisory authorities tailored to the AI Act’s application in specific sectors.
Both national and European approaches are needed to align AI regulation with other legal frameworks and resolve contradictions sustainably. In addition, regulatory frameworks should be regularly reviewed to ensure they keep pace with technological and societal developments.
Avoiding fragmentation and regulatory arbitrage through coherent implementation
This effort is not just about legal precision. The AI Act’s impact reaches across key economic and societal domains. Regulatory contradictions and uncertainties risk undermining business innovation and regulatory efficiency. They could also lead to fragmented responsibilities and regulatory arbitrage, where businesses may be tempted to sidestep stricter requirements. To effectively unify the diverse demands of AI regulation, a dialogue among all relevant stakeholders in Europe and its member states is essential. This includes lawmakers, supervisory authorities, businesses and civil society. Only through such an integrated approach can the AI Act realize its full potential.
This text is licensed under a Creative Commons Attribution 4.0 International License