
How to prepare your business for the EU Artificial Intelligence Act


Expert insights from Barry Scannell, Solicitor, Consultant - Technology Department, William Fry LLP. This blog has been written exclusively for IoD Ireland.

Introduction

Artificial intelligence (AI) is transforming the way businesses operate, innovate, and compete. However, AI also poses significant challenges and risks for society, such as its potential impact on human dignity, autonomy, privacy, safety, and the environment. It is therefore essential that businesses use AI in a responsible and ethical manner, in line with EU values and the rule of law.

The EU Artificial Intelligence Act (AI Act) is a regulation that creates a harmonised legal framework for the development and use of AI technologies in the EU. The European Commission published its proposal on 21 April 2021, as part of a wider package of initiatives on the digital transformation of Europe, and the EU Parliament recently passed the Act. The AI Act sets a global benchmark for the ethical and responsible use of AI technologies, and it has significant implications for businesses that develop, deploy, or use AI systems in the EU.

Directors and senior executives need to understand the AI Act: its objectives, scope, and provisions, and how it will affect their business strategy, operations, and reputation. They also need to take proactive steps to prepare for the AI Act and ensure compliance, both to avoid potential penalties, legal risks, and reputational damage, and to capture the opportunities and benefits that the AI Act can offer.

What is the AI Act and what are its main objectives?

The AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

The main objective of the AI Act is to foster an ecosystem of trustworthy and human-centric AI, while ensuring a high level of protection of health, safety, fundamental rights, and the environment against the harmful effects of AI systems. The AI Act follows a risk-based approach, which tailors the type and content of the rules to the intensity and scope of the risks that AI systems can generate. It distinguishes between four categories of AI systems: unacceptable-risk, high-risk, limited-risk, and minimal-risk, with each category facing a different set of rules.

How does the AI Act categorise AI systems?

Unacceptable AI systems

These are AI systems that are considered to cause unacceptable harm to EU values, fundamental rights, or important public interests, and are therefore prohibited. The AI Act lists the following examples of unacceptable AI practices:

  • AI systems that manipulate human behaviour, opinions or decisions through choice architectures, nudges, or subliminal techniques in a manner that causes or is likely to cause physical or psychological harm;
  • AI systems that exploit information or predictions about natural persons or groups of natural persons to target their vulnerabilities or special circumstances, and cause or are likely to cause them or another person physical or psychological harm;
  • AI systems that evaluate the trustworthiness of natural persons over a certain period of time based on their social behaviour or personality characteristics, and lead to detrimental or unfavourable treatment in social contexts which are unrelated to the contexts in which the data was originally generated or collected;
  • AI systems that use real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes, unless they are authorised by law and subject to appropriate safeguards.

High-risk AI systems

These are AI systems that are used for certain purposes, or in certain contexts, that are considered to pose a significant risk to the health, safety, fundamental rights, or important public interests of natural persons, and are therefore subject to mandatory requirements and obligations.

The AI Act provides a list of use cases that are considered high-risk, such as AI systems used for the following (a rough classification sketch follows the list):

  • Safety components of products or services;
  • Management and operation of critical infrastructure;
  • Education and vocational training;
  • Employment, workers' management and access to self-employment;
  • Essential private and public services;
  • Law enforcement;
  • Migration, asylum and border control;
  • Administration of justice and democratic processes.
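
As a first step, businesses can screen their AI inventory against these categories. Below is a minimal triage sketch in Python; the purpose labels are hypothetical shorthand distilled from the lists above, the Act's actual annexes are longer and more precise, and the final classification should rest on legal analysis rather than a lookup table:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Hypothetical shorthand for the prohibited practices listed earlier.
PROHIBITED_PRACTICES = {
    "subliminal_manipulation",
    "exploiting_vulnerabilities",
    "social_scoring",
    "realtime_remote_biometric_id_for_law_enforcement",
}

# Hypothetical shorthand for the high-risk use cases listed above.
HIGH_RISK_PURPOSES = {
    "product_safety_component",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "justice_and_democratic_processes",
}

def triage(purpose: str, interacts_with_people: bool) -> RiskTier:
    """Rough first-pass classification of an AI system under the Act."""
    if purpose in PROHIBITED_PRACTICES:
        return RiskTier.UNACCEPTABLE
    if purpose in HIGH_RISK_PURPOSES:
        return RiskTier.HIGH
    # Systems that interact with people (e.g. chatbots) attract
    # transparency duties even when they are not high-risk.
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("employment_and_worker_management", True))  # RiskTier.HIGH
```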

What does the AI Act do?

The AI Act does the following:

First, it prohibits certain AI practices that are considered to cause unacceptable harm, such as manipulative or deceptive practices, social scoring, indiscriminate surveillance, or biometric categorisation based on sensitive data.

For example, AI systems that manipulate human behaviour, opinions or decisions through choice architectures, nudges, or subliminal techniques in a manner that causes or is likely to cause physical or psychological harm are prohibited under the AI Act. These practices can undermine the trust, autonomy, and dignity of your customers, employees, and stakeholders, and expose your business to legal and reputational risks.

Second, it lays down mandatory requirements for high-risk AI systems, such as those used in critical infrastructure, education, employment, law enforcement, and essential private and public services. These requirements include ensuring human oversight, technical robustness, data governance, and transparency.

High-risk AI systems must be designed and developed in such a way that they are resilient to errors, faults, inconsistencies, attacks, or changes in the environment. Additionally, they must achieve an appropriate level of accuracy, reliability, and security. These requirements can help you enhance the quality, safety, and performance of your products and services, and increase the confidence and satisfaction of your customers, employees, and stakeholders.

Third, it establishes a set of obligations for the providers and users of high-risk AI systems, such as conducting conformity assessments, registering the systems in a European database, providing technical documentation, informing users and affected persons, monitoring and reporting incidents, and cooperating with authorities.

Providers of high-risk AI systems must conduct a conformity assessment before placing the system on the market or putting it into service, to verify that the system meets the mandatory requirements and complies with the relevant EU legislation. These obligations can help you demonstrate your compliance and accountability, and facilitate your access to the EU single market and international trade.

Fourth, it sets out transparency obligations for general-purpose AI systems, such as the models behind the likes of ChatGPT. For example, AI systems that generate or manipulate image, audio, or video content, such as deepfakes, synthetic media, or content creation tools, must disclose that the content is AI-generated or manipulated, either in the content's metadata or, in certain cases, by informing the natural persons who are exposed to or use such content, and must disclose the relevant capabilities and limitations of the system. These obligations can help you increase the transparency and explainability of your AI systems, and foster the trust and consent of your customers, employees, and stakeholders.
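
One way to operationalise this duty is to attach a machine-readable notice to every generated asset. The sketch below writes such a notice as a JSON sidecar file; the field names and the model name are illustrative assumptions, as the AI Act imposes the disclosure obligation but does not prescribe this particular format:

```python
import json
from datetime import datetime, timezone

def disclosure_record(model_name: str, capabilities: str, limitations: str) -> dict:
    """Build a machine-readable notice that a piece of content is AI-generated.

    The field names are illustrative, not a format mandated by the AI Act.
    """
    return {
        "ai_generated": True,
        "generator": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "capabilities": capabilities,
        "limitations": limitations,
    }

# Write the notice as a sidecar file next to the generated asset.
record = disclosure_record(
    model_name="acme-image-gen-v2",  # hypothetical model name
    capabilities="photorealistic image synthesis",
    limitations="may distort text and fine detail",
)
with open("generated_image.disclosure.json", "w") as f:
    json.dump(record, f, indent=2)
```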

Fifth, it establishes a governance system and an enforcement mechanism to ensure the effective implementation and oversight of the rules. The governance system consists of a European Artificial Intelligence Board, national supervisory authorities, and notified bodies. The enforcement mechanism consists of administrative sanctions and corrective measures, a right to lodge a complaint, and a cooperation and assistance mechanism.

For example, the national supervisory authorities will have the power to impose administrative sanctions and corrective measures on businesses that breach the obligations under the AI Act, such as warnings, orders to comply, temporary or permanent bans, or, for the most serious breaches, fines of up to €35 million or 7% of annual worldwide turnover, whichever is higher. These mechanisms can help you ensure the compliance and accountability of your AI systems, and avoid potential penalties, legal risks, and reputational damage.
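
The worst-case exposure is simple arithmetic, as the short illustration below shows for a business with €800 million in turnover (a hypothetical figure); the €35 million / 7% ceiling applies to the most serious breaches, with lower tiers for other infringements:

```python
def max_fine_exposure(worldwide_turnover_eur: float) -> float:
    """Ceiling on fines for the most serious breaches of the AI Act:
    7% of annual worldwide turnover or EUR 35 million, whichever is higher."""
    return max(0.07 * worldwide_turnover_eur, 35_000_000)

# A business with EUR 800m turnover: 7% = EUR 56m, above the EUR 35m floor.
print(f"EUR {max_fine_exposure(800_000_000):,.0f}")  # EUR 56,000,000
```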

In short, the AI Act categorises AI systems according to their risk level, based on their potential impact on the health, safety, fundamental rights, and important public interests of natural persons.

What are the benefits and challenges of the AI Act for businesses?

The AI Act can offer several benefits and opportunities for businesses that develop, deploy, or use AI systems in the EU, such as:

  • Creating a level playing field and a harmonised legal framework for AI in the EU, reducing the fragmentation and complexity of the regulatory landscape, and facilitating access to the EU single market and international trade.
  • Enhancing the quality, safety, and performance of AI systems, increasing the confidence and satisfaction of customers, employees, and stakeholders, and improving the competitiveness and innovation potential of businesses.
  • Fostering the trust and consent of natural persons who are exposed to or use AI systems, strengthening customer loyalty and retention, and enhancing the reputation and social responsibility of businesses.
  • Providing legal certainty and guidance for businesses on the ethical and responsible use of AI, reducing the legal risks and liabilities, and avoiding potential penalties, litigation, and reputational damage.

However, the AI Act can also pose some challenges and costs for businesses, such as:

  • Complying with the mandatory requirements and obligations for high-risk AI systems, which may entail significant investments in human, technical, and financial resources, and may affect the time-to-market and the scalability of AI systems.
  • Adapting to the changing and evolving regulatory environment, which may require continuous monitoring, updating, and testing of AI systems, and may create uncertainty and complexity for businesses.
  • Balancing the transparency and explainability of AI systems with the protection of trade secrets, intellectual property rights, and competitive advantages, which may require careful design and communication of AI systems and may pose some trade-offs and dilemmas for businesses.
  • Addressing the potential gaps, inconsistencies, or conflicts between the AI Act and other EU or national laws or regulations, which may create legal challenges and risks for businesses, and may require coordination and cooperation with the relevant authorities.

What steps should businesses take to prepare for the AI Act and ensure compliance?

Businesses that develop, deploy, or use AI systems in the EU should start preparing for the AI Act by taking the following steps:

  1. Assess whether their AI systems fall under the scope of the AI Act, and if so, under which risk category. This involves identifying the purpose, context, and impact of the AI systems, and checking the criteria and the list of use cases established in the AI Act.
  2. Integrate compliance into their AI strategy from the outset, ensuring that their AI systems meet the mandatory requirements for safety, transparency, and data protection. This involves implementing a risk management system, ensuring technical robustness and accuracy, complying with the EU rules on the protection of personal data and privacy, and providing clear and accurate information to users and affected persons.
  3. Establish a post-market monitoring system for their AI systems, collecting feedback and data on the performance, safety, and impact of the system, and enabling corrective actions if needed. This also involves reporting any serious incidents or malfunctioning of the system to the competent authorities without undue delay (a minimal logging sketch follows this list).
  4. Engage proactively with the regulatory bodies, staying abreast of the developments and guidance on the AI Act, and cooperating with the national supervisory authorities, the European Artificial Intelligence Board, and the notified bodies in case of market surveillance or enforcement actions.
  5. Align their AI systems with the ethical principles and values of the EU, such as respect for human dignity, democracy, equality, and the rule of law. This involves adopting a human-centric and responsible approach to AI, ensuring that their AI systems respect the fundamental rights and interests of natural persons, and contribute to the social good and the sustainable development of the EU.
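
To make step 3 concrete, the sketch below logs incidents to an append-only audit file and flags serious ones for regulatory reporting. The schema, severity threshold, and system name are assumptions for illustration; the AI Act imposes the reporting duty but does not prescribe an internal data format:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Incident:
    system_id: str    # internal identifier of the AI system (hypothetical)
    severity: str     # "serious" triggers reporting to the authority
    description: str
    detected_at: str

def log_incident(incident: Incident, log_path: str = "ai_incidents.jsonl") -> None:
    """Append the incident to an audit log and flag serious ones for reporting."""
    with open(log_path, "a") as f:
        f.write(json.dumps(asdict(incident)) + "\n")
    if incident.severity == "serious":
        # Placeholder: connect this to your actual regulatory reporting workflow.
        print(f"Report incident on {incident.system_id} to the competent authority")

log_incident(Incident(
    system_id="cv-screening-v1",  # hypothetical system
    severity="serious",
    description="systematic mis-ranking of candidates detected",
    detected_at=datetime.now(timezone.utc).isoformat(),
))
```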

Timeline

The rules on prohibited systems will apply six months after the AI Act enters into force, the rules on general-purpose AI systems twelve months after, and most of the rules on high-risk AI systems twenty-four months after entry into force, in mid-2026.

Conclusion

The EU Artificial Intelligence Act is a landmark regulation that aims to create a harmonised legal framework for the development and use of AI technologies in the EU. The AI Act sets a global benchmark for the ethical and responsible use of AI technologies, and has significant implications for businesses that develop, deploy, or use AI systems in the EU.

For directors and senior executives, this represents both a challenge and an opportunity. By embracing the principles of the AI Act and integrating compliance into their strategic planning, they can not only avoid potential penalties but also enhance their reputation as leaders in the responsible use of AI.