Several Irish regulators and professional bodies have issued sector-specific AI guidance. If your organisation operates in a regulated sector, your policy should incorporate these as overlays. Use the notes field in each domain to capture additional obligations.
- The Central Bank's February 2025 Regulatory & Supervisory Outlook includes a dedicated AI spotlight chapter, and the Central Bank has been designated as a market surveillance authority under the AI Act (S.I. No. 366/2025). Key expectations for regulated firms include: transparency about AI use in decision-making, clear accountability for each AI deployment, mandatory human oversight (particularly for fund management and credit scoring), and robust risk assessment covering data quality, bias, and cyber resilience. The Central Bank will engage with firms through its Innovation Hub and regular supervisory cycles. Financial services boards should ensure this policy maps to the Central Bank's documented AI risk categories.
- The Law Society published formal Generative AI Guidance for solicitors in November 2025, covering ethical and responsible use of tools like ChatGPT and Copilot in legal practice. The full guidelines (PDF) map AI use to core professional obligations under the Solicitors' Guide to Professional Conduct, with specific attention to hallucination risk in AI-generated case law. A Legal Tech Hub has been launched alongside practical CPD workshops running through April 2026. Boards of law firms, or organisations with in-house legal teams using AI, should ensure their policy reflects these professional conduct obligations.
- Published 11 March 2026, Ireland's first national AI healthcare strategy, "AI for Care", sets out how AI will improve clinical care, operations, research and public health through 2030. The strategy is fully aligned with the EU AI Act and introduces strong oversight requirements for high-risk AI in healthcare. HIQA is developing National Guidance on Responsible and Safe Use of AI, and the HSE is publishing an AI Implementation Framework to ensure consistent rollout across health regions. Healthcare boards should note that the HSE has been assigned market surveillance responsibility for certain high-risk AI uses in essential public health services and emergency triage.
- The Workplace Relations Commission is designated as a market surveillance authority for AI in employment under the AI Bill. AI systems used in recruitment, performance management, or workforce allocation are classified as high-risk under the EU AI Act. Ibec's workplace AI guidance provides practical support for employers navigating these obligations. Organisations using AI in any HR or people-management context should ensure their policy covers the high-risk classification requirements and maintains human oversight of employment-affecting decisions.
- The Commission for Regulation of Utilities (CRU) is designated as the AI market surveillance authority for the energy and water sectors under the AI Bill. AI used in grid operation, network planning, demand forecasting, and customer management will sit under CRU supervision. Any AI applied to critical infrastructure management is likely to be classified as high-risk under the EU AI Act. Energy boards should ensure this policy accounts for CRU's oversight role and the heightened obligations for critical-infrastructure AI systems.
- Construction, retail and logistics do not yet have bespoke AI regulations in Ireland. They sit under the EU AI Act plus existing sector-specific laws (health and safety, product safety, consumer protection). AI used for site safety monitoring, worker management systems, and critical-infrastructure tools in construction may fall into high-risk categories, with requirements for documentation, human oversight and robustness. In retail and logistics, AI in demand forecasting, dynamic pricing, and supply-chain optimisation is governed by the AI Act's general obligations plus GDPR and consumer-protection law. Boards in these sectors should monitor the AI Bill's progress for any sector-specific designations as the legislation develops.
- Article 4 of the EU AI Act took effect on 2 February 2025 and applies to every organisation that provides or uses AI systems, including general-purpose tools like chatbots. It requires organisations to ensure AI literacy among staff, contractors and service providers dealing with AI. This is not sector-specific: it applies to all IoD member organisations regardless of industry. Your AI use policy should reference this obligation and link to whatever training or awareness programme your organisation has in place (or plans to implement).
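For organisations maintaining their policy as a structured document, the overlay approach described above — a notes field per domain capturing sector-specific obligations on top of the baseline policy — can be sketched in code. This is a minimal, hypothetical illustration: the `PolicyDomain` structure, field names, and example values are assumptions for demonstration, not part of the toolkit itself.

```python
# Hypothetical sketch of a policy domain record with a notes field for
# sector-specific overlays. Structure and names are illustrative only.
from dataclasses import dataclass, field


@dataclass
class PolicyDomain:
    name: str             # e.g. "Recruitment and HR"
    risk_level: str       # e.g. "high-risk" under the EU AI Act
    oversight_body: str   # e.g. the designated market surveillance authority
    notes: list[str] = field(default_factory=list)  # sector overlays


# Capturing an additional obligation drawn from sector guidance:
hr = PolicyDomain(
    name="Recruitment and HR",
    risk_level="high-risk",
    oversight_body="Workplace Relations Commission",
)
hr.notes.append("Maintain human oversight of employment-affecting decisions")

for note in hr.notes:
    print(f"{hr.name}: {note}")
```

Kept this simple deliberately: a per-domain list of free-text notes mirrors the toolkit's "notes field in each domain", and new obligations can be appended as regulators issue further guidance without restructuring the baseline policy.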
Disclaimer
The AI Governance Toolkit is provided by the Institute of Directors (IoD) Ireland for general informational purposes only. It is intended as a practical guide to support members. The toolkit does not constitute legal, regulatory or professional advice. IoD Ireland accepts no liability for any loss, damage or consequence arising from the use of, or reliance on, this material.