
Corporate Boards and AI Risk – A Pragmatic Approach


Expert insights from Barry Scannell, Solicitor, Consultant - Technology Department, William Fry LLP. Barry specialises in artificial intelligence (AI), copyright, IP, technology law and data protection.

In today's rapidly evolving digital landscape, the emergence of artificial intelligence (AI) as a dominant force in business innovation and operations has ushered in a new era of corporate governance challenges. This article explains why corporate boards must not only be aware of these challenges but also actively manage them, fulfilling their fiduciary and legal obligations in an AI-integrated business environment.

The integration of AI into business processes and decision-making poses complex challenges that extend beyond traditional risk management paradigms. These challenges encompass ethical considerations, legal compliance, operational integrity, and strategic positioning. The pace at which AI is advancing further complicates these challenges, as does the diverse range of AI applications across different industries.

Corporate boards are tasked with the fiduciary duty to act in the best interest of the company and its stakeholders. This duty encompasses the responsibility to oversee and manage significant risks. In the context of AI, this translates into a requirement to understand and govern the risks and implications of AI deployment in business operations. The legal landscape surrounding AI is rapidly evolving, with new regulations and standards emerging, particularly in the areas of data protection and the ethical use of AI. Boards need to ensure that their companies comply with these regulations.

AI presents both risks and opportunities. While AI can drive innovation, efficiency, and competitive advantage, it also introduces risks such as operational failures, ethical breaches, data security vulnerabilities, and compliance issues. Boards must therefore engage in strategic oversight, ensuring that the adoption and integration of AI align with the company’s long-term strategy and values. This involves not only mitigating risks but also exploring how AI can be leveraged to enhance business performance and stakeholder value.

Considering the Risks

Specific risks that corporate boards may wish to consider include:

1. Operational Risks: Evolving Beyond Conventional Protocols

Operational risks in AI are not static; they evolve as AI systems learn and adapt. To effectively map these risks, boards must understand not just the 'what' but the 'how' of AI operations. This involves scrutinising the underlying algorithms, data sources, and decision-making processes. AI validation processes require a continuous, iterative approach, considering that AI's operational parameters can shift over time. Change management is particularly crucial; AI systems are frequently updated, often autonomously, making it imperative to have a protocol that ensures these updates adhere to legal and ethical standards. This involves a dynamic approach to oversight, transcending traditional risk management practices.
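By way of illustration only, the sketch below shows one common form such continuous, iterative validation can take in practice: a population stability index (PSI) check comparing a model's live output distribution against the distribution recorded when the system was last validated. The data, thresholds, and escalation wording are hypothetical assumptions, not a prescribed standard.

```python
import numpy as np

def population_stability_index(baseline, live, bins=10):
    """Measure drift between a validated baseline distribution and
    live model outputs. A common rule of thumb: PSI < 0.1 is stable,
    0.1-0.25 warrants monitoring, > 0.25 warrants investigation."""
    # Bin edges come from the baseline so both samples are measured
    # against the same reference segments.
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # A small floor avoids division by zero in empty bins.
    base_pct = np.clip(base_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - base_pct) * np.log(live_pct / base_pct)))

# Hypothetical model scores: recorded at validation time vs. observed today.
baseline_scores = np.random.default_rng(0).beta(2.0, 5.0, 10_000)
live_scores = np.random.default_rng(1).beta(2.5, 5.0, 10_000)

psi = population_stability_index(baseline_scores, live_scores)
if psi > 0.25:
    print(f"PSI = {psi:.3f}: material drift - escalate for re-validation")
else:
    print(f"PSI = {psi:.3f}: within tolerance")
```

The value of a check like this from a governance perspective is that it converts "the AI may have changed since we approved it" into a reportable number with a pre-agreed escalation threshold.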

2. Use Case Risks: Anticipating the Unintended

Scenario planning for AI use cases demands an imaginative yet critical approach. Boards should not only consider direct legal risks but also the broader implications of how AI might interact with users and other systems. Social and ethical implications are paramount. AI's potential for bias and discrimination, for instance, is not merely a legal issue but a reputational and moral one. Boards must insist on transparent, accountable AI processes that respect social norms and values. This involves a thorough understanding of the societal context in which AI operates, going beyond mere compliance to ethical leadership.
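To make the bias point tangible: one simple, widely used screening metric that boards can ask management to report is the demographic parity gap, the spread in favourable-outcome rates across groups. The sketch below is a minimal, hypothetical illustration; the records and the choice of metric are assumptions, and no single metric settles a legal question of discrimination.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the gap between the highest and lowest approval rates
    across groups. 0.0 means identical rates; larger gaps warrant
    scrutiny (a screening signal, not a legal test)."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, outcome in decisions:
        total[group] += 1
        approved[group] += int(outcome)
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical (group, approved?) records from an AI screening tool.
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(records)
print(f"approval rates: {rates}, parity gap: {gap:.2f}")
```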

3. End User Risks: Fostering Trust through Transparency

User agreement reviews should focus on clarity and fairness, ensuring that users understand how their data is used and how AI decisions are made. This is not just a legal requirement but a trust-building exercise. In the realm of data rights management, the board must ensure that policies reflect the nuanced nature of AI-generated data, balancing innovation with user rights and privacy. This involves a deep engagement with emerging legal concepts like digital personhood and data ownership, areas where the law is still evolving.

4. Intellectual Property Risks: Charting New Territories

Intellectual property risks in AI are complex and multifaceted. An IP audit must consider not just traditional aspects like patent infringement but also emerging issues like the ownership of AI-generated creations and the use of copyright works in text and data mining and machine learning. The board must grapple with questions of creatorship in AI, which challenge conventional notions of authorship and ownership. This requires a forward-thinking approach to IP policy, one that anticipates future legal developments and shapes them.

5. Liability Risks: Preparing for the Unknown

Product liability analysis for AI is a challenging area, as AI systems can evolve in unpredictable ways. Boards must understand the implications of the Revised Product Liability Directive and the AI Liability Directive, interpreting how these apply to AI products that are constantly 'learning' and changing. Risk allocation between AI providers and users must be clear, fair, and transparent, considering the unique nature of AI systems.

6. Regulatory Risks: Staying Ahead of the Curve

With AI regulation rapidly evolving, boards must adopt a proactive approach to regulatory monitoring and compliance. This involves not just understanding current laws but anticipating future ones. A detailed compliance strategy for AI should be agile, adaptable to new regulations as they emerge. Boards should also actively engage in policy discussions, shaping the legal landscape in which AI operates.

7. Contractual Risks: Crafting Future-Proof Contracts

AI-related contracts require a level of specificity and foresight that goes beyond standard legal agreements. Boards must ensure that these contracts address performance metrics, data usage, and IP rights in the context of AI's evolving nature. Contracts with third-party AI service providers must be scrutinised for compliance and risk allocation, considering the interconnected nature of AI ecosystems.

8. Cybersecurity Risks: Anticipating AI-Specific Threats

Cybersecurity in AI is not just about protecting data but also about safeguarding the integrity of AI systems themselves. Threat analysis must consider AI-specific vulnerabilities, such as adversarial attacks that target AI's learning processes. An incident response plan must include AI-specific scenarios, preparing for the possibility that AI itself could be the source of a cybersecurity incident.
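As a concrete (and deliberately simplified) illustration of such an attack, the sketch below applies the classic fast gradient sign method (FGSM) to a toy logistic-regression classifier: a bounded perturbation, computed from the model's own gradients, flips its prediction. The model weights, input, and attack budget are all hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A toy trained linear classifier: confidence = sigmoid(w . x + b).
w = np.array([2.0, -1.5, 0.5])
b = -0.2

x = np.array([0.4, -0.3, 0.8])   # an input the model classifies correctly
clean_pred = sigmoid(w @ x + b)  # ~0.81: confidently class 1

# FGSM: step each feature in the direction that most increases the loss.
# For a linear model, the gradient of the logistic loss with respect to
# the input is (p - y) * w.
true_label = 1
grad = (sigmoid(w @ x + b) - true_label) * w
epsilon = 0.5                    # attack budget per feature
x_adv = x + epsilon * np.sign(grad)

adv_pred = sigmoid(w @ x_adv + b)  # ~0.37: the prediction flips
print(f"clean confidence: {clean_pred:.2f} -> adversarial: {adv_pred:.2f}")
```

The point for boards is not the mathematics but the category of threat: the attack needs no access to the data store, only the ability to probe the model, which is why conventional data-centric security controls do not fully cover it.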

Conclusion

As AI continues to reshape the business world, corporate boards must rise to the challenge of managing its associated risks. This requires a blend of legal expertise, technological understanding, and ethical foresight. By proactively addressing the unique challenges of AI, boards can not only protect their organisations but also position them to leverage AI's transformative potential responsibly and ethically. This strategic approach is not just a compliance exercise; it's a cornerstone of sustainable business practice in the digital age.
