
EU AI Act Omnibus: High-Risk Deadline Moved | What It Means for Irish Organisations


In this article, Ger Perdisatt, AI Advisor, experienced COO and board member, and IoD Faculty member, outlines the new EU Digital Omnibus deal and what it means for AI compliance. He addresses what changed, what did not change, and what Irish organisations should do now.

At the time of writing, the Omnibus agreement remains a provisional political deal and still requires formal approval and legal-linguistic finalisation. However, the direction of travel is clear enough for boards and senior leadership teams to understand the practical implications.


What Did Change

The original AI Act brought many high-risk AI obligations into application from 2 August 2026. The Omnibus deal now separates the timetable more clearly:

  • Stand-alone high-risk AI systems: compliance deadline extended to 2 December 2027
  • AI embedded in regulated products: compliance deadline extended to 2 August 2028

The immediate compliance reason is technical: the harmonised standards and implementation tools that providers need to demonstrate conformity are not yet ready. Without those standards, organisations would have been required to interpret complex obligations against a moving target.

The deal also tries to reduce overlap between the AI Act and existing EU product-safety rules. Machinery receives the clearest carve-out, but other regulated product sectors are not removed wholesale from the AI Act framework. The Commission is expected to clarify the interaction with sectoral rules by August 2027.

On generative AI, the deal makes a narrow intervention. Transparency and watermarking requirements for AI-generated content are given additional implementation time, with the practical deadline moving to 2 December 2026. This does not remove the obligation. It gives providers more time to comply with technical guidance that is still being finalised.

The one substantive new prohibition is a ban on AI systems generating non-consensual intimate imagery and child sexual abuse material, with a December 2026 compliance window. That provision responds to a documented harm rather than a calendar problem.

What Did Not Change

The extension applies specifically to high-risk AI obligations under Chapter III of the Act, including conformity assessments, technical documentation, human oversight mechanisms and registration requirements. It does not pause the rest of the framework.
The following remain in force on their original timelines:

  • Article 5 - Prohibited practices: Prohibited practices have been in force since February 2025. Social scoring, certain uses of real-time biometric surveillance in public spaces, and manipulative AI systems targeting vulnerable groups remain prohibited. The Omnibus deal does not alter this.
  • Article 4 - AI literacy obligation: Providers and deployers of AI systems are required to ensure, to the best of their ability, that staff and others dealing with AI systems on their behalf have sufficient AI literacy. This obligation applies regardless of whether the system is classified as high-risk. It has not been extended.
  • Articles 51–55 - General-purpose AI models: General-purpose AI obligations, including transparency and copyright-related requirements for model providers, remain on their existing timeline. The main change is the additional implementation time for AI-generated content marking.
  • Ireland’s AI Office: Ireland’s implementing legislation is intended to establish the AI Office of Ireland, with an establishment date on or before 1 August 2026. That domestic implementation timeline is separate from the Omnibus extension for high-risk AI compliance. Irish organisations should therefore not assume that the European timetable change delays the creation of the national supervisory framework.

What Does This Mean in Practice for Irish Organisations?

For organisations that have been focused on the August 2026 high-risk compliance deadline, the extension creates planning room. Conformity assessments, technical documentation packages and formal registration can now be planned more realistically into 2027 rather than hurriedly completed by August 2026.
That does not mean the compliance work goes away. It means the sequence changes.

The foundations that were worth building before August remain the right starting point: AI inventory, risk classification, governance frameworks, oversight policies and clear accountability. Understanding which AI systems are in use, which may be high-risk, and what controls are already in place remains essential.
Three practical implications follow.

  1. The literacy and governance obligations are unaffected. Boards still need to govern AI. Staff still need adequate AI literacy. Ireland’s AI Office is still expected to be established in 2026. Governance and literacy are not merely precursors to high-risk compliance; they sit on a separate and more immediate timeline.
  2. The high-risk extension matters most to organisations that provide or deploy high-risk AI systems. Many Irish SMEs and professional services firms may not be directly affected by the new high-risk dates. For them, the more immediate obligations remain prohibited practices, staff literacy, responsible data handling and basic governance.
  3. The deal is not the final word. Further clarification is expected on how AI Act obligations will interact with sectoral rules. Organisations with complex regulated products, embedded AI or high-risk use cases should treat the new dates as planning markers, not as a reason to pause.

Governance of AI

The argument for building AI governance now was never purely about meeting an August 2026 enforcement deadline. It was about the operational reality that ungoverned AI adoption creates risk before any regulator issues a fine.

Those risks include reputational exposure, poor data handling, hallucinated outputs, over-reliance on opaque systems, weak human oversight, and liability for decisions made on the basis of flawed or poorly understood AI-generated analysis.

Those risks exist now. The Omnibus extends a compliance window. It does not reduce the underlying organisational risk.

For Irish organisations with high-risk AI in scope, the extension creates time to build compliance properly rather than hurriedly. That is worth using. For organisations across the broader economy, AI literacy, governance, policy and practical oversight remain the near-term priorities.

The message is simple: the deadline moved, but the work did not disappear.

This article is the view of the author(s) and does not necessarily reflect IoD Ireland’s policy or position.

About the Author

Ger Perdisatt is the Founder and CEO of Acuity AI Advisory, a leading AI strategy and governance consultancy based in Dublin. 

Prior to founding Acuity, he spent 15 years at Microsoft, including 5 years as COO for the Enterprise business across Western Europe and 3 years leading the 180-person technology strategy function. That role required governing technology adoption at scale, managing operational risk across diverse regulatory environments, and translating complex strategic objectives into measurable outcomes. 

Ger currently serves as a Board Member and Non-Executive Director at DAA plc, where he chairs the Strategic Infrastructure & Sustainability Committee overseeing a €5bn capital investment programme, and as a board member at Tailte Éireann, the national land registry, mapping and valuation authority. He is a former Executive Committee Member of University College Dublin's Finance, Remuneration and Asset Management Committee.

Internationally, he has provided AI advisory to enterprises and regulated state bodies around AI governance, innovation and decision intelligence. 

He advises boards and senior leadership teams on AI readiness, governance, and practical implementation. He has delivered AI governance programmes for a range of organisations across financial services, infrastructure, and professional services. 

He holds an MBA from UCD Smurfit Business School.