In this article, Ger Perdisatt, AI Advisor, experienced COO and board member, and IoD Faculty member, outlines the new EU Digital Omnibus deal and what it means for AI compliance. He addresses what changed, what did not, and what Irish organisations should do now.
At the time of writing, the Omnibus agreement remains a provisional political deal and still requires formal approval and legal-linguistic finalisation. However, the direction of travel is clear enough for boards and senior leadership teams to understand the practical implications.
The original AI Act brought many high-risk AI obligations into application from 2 August 2026. The Omnibus deal now separates that timetable, pushing the high-risk obligations into 2027 while leaving most of the rest of the framework on its original schedule.
The immediate reason for the extension is technical: the harmonised standards and implementation tools that providers need to demonstrate conformity are not yet ready. Without those standards, organisations would have been forced to interpret complex obligations against a moving target.
The deal also tries to reduce overlap between the AI Act and existing EU product-safety rules. Machinery receives the clearest carve-out, but other regulated product sectors are not removed wholesale from the AI Act framework. The Commission is expected to clarify the interaction with sectoral rules by August 2027.
On generative AI, the deal makes a narrow intervention. Transparency and watermarking requirements for AI-generated content are given additional implementation time, with the practical deadline moving to 2 December 2026. This does not remove the obligation. It gives providers more time to comply with technical guidance that is still being finalised.
The one substantive new prohibition is a ban on AI systems generating non-consensual intimate imagery and child sexual abuse material, with a December 2026 compliance window. That provision responds to a documented harm rather than a calendar problem.
The extension applies specifically to high-risk AI obligations under Chapter III of the Act, including conformity assessments, technical documentation, human oversight mechanisms and registration requirements. It does not pause the rest of the framework.
Obligations outside Chapter III remain in force on their original timelines.
For organisations that have been focused on the August 2026 high-risk compliance deadline, the extension creates planning room. Conformity assessments, technical documentation packages and formal registration can now be planned more realistically into 2027 rather than hurriedly completed by August 2026.
That does not mean the compliance work goes away. It means the sequence changes.
The foundations that were worth building before August remain the right starting point: AI inventory, risk classification, governance frameworks, oversight policies and clear accountability. Understanding which AI systems are in use, which may be high-risk, and what controls are already in place remains essential.
Three practical implications follow.
The argument for building AI governance now was never purely about meeting an August 2026 enforcement deadline. It was about the operational reality that ungoverned AI adoption creates risk before any regulator issues a fine.
Those risks include reputational exposure, poor data handling, hallucinated outputs, over-reliance on opaque systems, weak human oversight, and liability for decisions made on the basis of flawed or poorly understood AI-generated analysis.
Those risks exist now. The Omnibus extends a compliance window. It does not reduce the underlying organisational risk.
For Irish organisations with high-risk AI in scope, the extension creates time to build compliance properly rather than hurriedly. That is worth using. For organisations across the broader economy, AI literacy, governance, policy and practical oversight remain the near-term priorities.
The message is simple: the deadline moved, but the work did not disappear.
This article is the view of the author(s) and does not necessarily reflect IoD Ireland’s policy or position.

Ger Perdisatt is the Founder and CEO of Acuity AI Advisory, a leading AI strategy and governance consultancy based in Dublin.
Prior to founding Acuity, he spent 15 years at Microsoft, including 5 years as COO for the Enterprise business across Western Europe and 3 years leading the 180-person technology strategy function. That role required governing technology adoption at scale, managing operational risk across diverse regulatory environments, and translating complex strategic objectives into measurable outcomes.
Ger currently serves as a Board Member and Non-Executive Director at DAA plc, where he chairs the Strategic Infrastructure & Sustainability Committee overseeing a €5bn capital investment programme, and as a board member at Tailte Éireann, the national land registry, mapping and valuation authority. He is a former member of University College Dublin's Finance, Remuneration and Asset Management Committee.
Internationally, he has advised enterprises and regulated state bodies on AI governance, innovation and decision intelligence.
He advises boards and senior leadership teams on AI readiness, governance, and practical implementation. He has delivered AI governance programmes for a range of organisations across financial services, infrastructure, and professional services.
He holds an MBA from UCD Smurfit Business School.