If there is one statistic that sums up the increasing pace of technological change, it might well be this: Gartner forecasts that by 2026, more than 80 per cent of businesses will have implemented generative AI in their production environments, up from just 5 per cent in 2023. This rapid transformation creates a challenge for boards tasked with balancing emerging risks against strategic opportunities, all while navigating an increasingly complex regulatory landscape.
Dr Valerie Lyons, COO of BH Consulting, explored these critical topics in a recent webinar for the Institute of Directors Ireland. In a presentation titled ‘Digital governance for boards and senior executives: AI, cybersecurity, and privacy’, she drew on her extensive experience advising boards in these areas. She also spoke about the ethical frameworks shaping the new rules. Dr Lyons has spoken on this subject at major industry events, including the prestigious RSA Conference in San Francisco.
Boards and senior executives face several questions about how best to approach the challenges of cybersecurity, privacy, and AI governance. Where should they focus their attention? How should they approach the varied legislative requirements being thrust upon them from the EU and beyond? And what technical and organisational measures do boards need to oversee to make sure their organisations comply with the various regulations and frameworks?
Setting the regulatory ground rules
Among the key pieces of legislation is the Cyber Resilience Act (CRA), which establishes mandatory cybersecurity requirements for hardware and software products across the EU. It aims to ensure products are secure throughout their lifecycle, with fewer vulnerabilities and improved transparency. The Data Governance Act creates a framework to facilitate trustworthy data sharing across the EU. It’s intended to balance increasing data availability with safeguarding privacy and confidentiality.
The Digital Services Act regulates online services to enhance digital trust. It sets clear responsibilities for online platforms to protect users from illegal content and ensure transparency. It introduces accountability measures for large platforms, and strengthens users’ rights. The Data Act enhances access to and use of non-personal data across sectors. It gives users greater control over data generated by connected devices, mandates data sharing under fair conditions, and aims to boost innovation and competition in the EU’s data-driven economy.
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. It classifies AI systems by risk and imposes obligations accordingly, aiming to ensure safety, fundamental rights, and trustworthy innovation. Other frameworks that boards need to be aware of include the recently introduced NIS2 Directive, which focuses on cybersecurity risk management and incident reporting for essential sectors like healthcare and energy.
Organisations can use certifications and standards to guide them on their governance journey. These include ISO 27001, the independent standard for information security; ISO 27701, which covers privacy; and ISO/IEC 42001, which addresses AI governance. The NIST frameworks encompass cybersecurity, privacy, and AI governance, and some industries have specific options such as PCI DSS, COBIT, and Digital Trust Frameworks.
Why AI governance is a key challenge for business leaders
The most recent of these challenges for boards and senior executives is AI governance. Although the concept of artificial intelligence has been with us for decades, adoption has accelerated sharply with the rise of generative AI (GenAI). As the Gartner forecast cited above shows, more than 80 per cent of businesses are expected to have GenAI APIs in production environments by 2026, up from just 5 per cent in 2023.
However, AI governance remains a challenge. Here are six key concerns:
- Profiling and bias: AI models can infer personal details and make decisions based on biased datasets
- Data privacy and data protection: Training data is often collected without user consent, which raises ethical concerns
- Accountability: Companies must establish policies to ensure transparency and fairness in AI decision-making
- Bulk data collection: AI models are trained on vast datasets, often without explicit consent
- Persistent tracking: Tools like ChatGPT collect user conversations, IP addresses, and browser activity
- Regulatory gaps: Existing laws struggle to keep up with fast-evolving technologies.
Four key questions that members of the board and senior executives need to ask about AI governance are:
- Training: Are employees, board members, and senior executives adequately trained on AI risks?
- Policies: Do organisations have clear AI policies that are communicated, activated, and monitored?
- Guardrails: Do organisations manage and limit the use of AI and GenAI, and is this policed and enforced?
- Licences: Are the AI models being used within legal and ethical boundaries?
How should boards approach digital risks?
Boards play a crucial role in governing digital transformation. McKinsey & Co outlines four key strategies for directors:
- Close the insights gap: keep up with technological advancements
- Understand business model disruptions: identify how digital changes impact operations
- Engage in strategy and risk management: involve cybersecurity in decision-making
- Improve board expertise: onboard directors with relevant digital knowledge.
Boards can make real progress by implementing seven best practices for cybersecurity, data protection and AI governance.
- Make digital risk a board-level responsibility
Directors should integrate AI governance, cybersecurity, and data protection into corporate strategy.
- Appoint compliance officers
Assign key roles, such as a Chief Information Security Officer (CISO) for NIS2 and CRA, an AI Compliance Officer for AI Act oversight, and a Data Protection Officer (DPO) for GDPR compliance.
- Take a risk-based approach
Identify, assess, and mitigate risks related to cybersecurity, AI ethics, and personal data protection.
- Develop and enforce policies
Regularly update policies for cybersecurity (NIS2/CRA), AI governance (AI Act), and data protection (GDPR).
- Build incident management and reporting capabilities
These capabilities should have three key elements:
- Timely response: Cyber incidents (NIS2/CRA) must be reported within 24 hours, AI failures (AI Act) must be documented, and data breaches (GDPR) must be disclosed within 72 hours.
- Incident response plan: Establish crisis management strategies.
- Crisis training: Prepare teams to handle cybersecurity and AI-related failures.
- Adopt transparency and ethical AI practices
Focus on the following three areas:
- Explainability: AI models should be interpretable and fair
- Bias reduction: Ensure ethical AI decision-making
- Privacy protection: Strengthen data security.
- Enforce third-party risk management
Recent regulations increasingly require organisations to check that suppliers and key service providers also follow good practice in cybersecurity, AI fairness, and data protection.
By proactively addressing these challenges, boards can ensure the businesses they oversee comply with the rules, mitigate risks, and build resilient foundations for a future that’s increasingly digital.
