The ISACA Ireland Chapter Conference on 11 April brought together thought leaders in AI, cybersecurity, auditing, governance, and quantum computing. What stood out wasn’t just the topics, but how interconnected these seemingly distinct domains are becoming in today’s fast-evolving digital landscape. I had the incredible opportunity to attend the event, and here’s a reflection on the sessions that made the strongest impression.

Paul Hare of Deloitte kicked off the event with a warm welcome, setting the tone for a day packed with innovation, responsibility, and forward thinking. Dr. Wendy Ng’s keynote, ‘Driving the Next Industrial Revolution with AI and Automation’, was one of the most thought-provoking. She spoke passionately about how AI is the most transformative force of our time, reshaping industries, governance models, and the future of cybersecurity. She clearly emphasised that although retail, finance and operational technologies are rapidly adopting AI, human expertise remains irreplaceable.

Key insights included:

  • We are in the Fourth Industrial Revolution, where AI joins the lineage of steam, mechanised labour, and mass production
  • Businesses like Marks & Spencer are adopting AI not just for efficiency but as a strategy to gain competitive edge while carefully managing associated risks
  • AI is redefining roles across cybersecurity, operational continuity, and fraud detection; retailers, in particular, are at the forefront of AI adoption.

Dr. Ng emphasised the balancing act between innovation and risk. Regulatory uncertainty remains a barrier, yet she urged industries to press forward responsibly, highlighting examples like robotic surgery and diagnostic algorithms that are transforming healthcare.

The role of AI auditors

Peter Hill, a director at Data Protection Schemes, demystified AI auditing, which is often perceived as a grey zone. His session, ‘Navigating the Future: The Role of AI Auditors’, framed AI auditing as the essential backbone of trustworthy AI adoption, with auditors playing a key role in ensuring accountability, transparency, and compliance. He clarified that auditing AI involves far more than basic checks: drawing on COBIT and other governance models, he emphasised the importance of role definition, validation, and clarity in any auditing process.

Takeaways included:

  • Auditing mirrors AI governance frameworks and should involve validation, testing, and alignment with business goals
  • Key components like software development lifecycle methodologies, security, robustness, transparency, and change controls are non-negotiable
  • Regulations and frameworks like the AI Act, GDPR, ISO 42001, RMS/QMS, and post-market monitoring systems are critical for standardisation
  • AI audits can encompass everything from input data quality to output reasoning and compliance alignment.

As Peter rightly put it, the depth and clarity of AI audits will define the public’s trust in future AI systems.

Future focus: what DORA does differently

Dr Paul Lambert, a published author and speaker on data protection and information technology law, introduced us to DORA (the Digital Operational Resilience Act) with a visionary mindset. Instead of rigid rule-following, DORA encourages a resilient, horizon-thinking approach, with cooperation and adaptive governance at its core. Dr Lambert urged companies to embrace cooperation over rules, lean on technical solutions, and prioritise preparedness.

He emphasised that laws should be built to endure future scenarios, not just today’s threats. He pointed out that DORA promotes technical solutions over rigid bureaucracy, prioritising operational resilience and real-time collaboration. In essence, Dr Lambert reframed DORA as not just a legal requirement but a strategic opportunity.

Cloud calls for cooperation in a changed risk landscape

Has computing really changed with the cloud? That was the critical question Bruno Barros, a partner with Forvis Mazars, raised. Although the core architecture hasn’t shifted drastically, he said the risk landscape has. Cloud computing brings with it a shared responsibility model, which demands tight cooperation between vendors and businesses.

He discussed core risks including virtualisation loopholes, data in transit vs. data at rest, logging and reversibility. He also covered vendor lock-in and interoperability, along with SOC (System and Organization Controls) reports, audit rights, and data assurance mechanisms. Barros reminded us that security in the cloud isn’t about data storage: it’s about ongoing risk visibility and control.

Frequency hopping: the role of radio in security

Next was a surprising, fascinating and unique session on how radio frequencies still play a massive role in global security and privacy. Ian Cortina, assistant principal for cybersecurity at the Department of Agriculture, took us on a journey through amateur radio, shortwave listening and their evolving security implications. Starting with amateur radio, he explored the basics of setting up radio communications and frameworks like HAREC and IRTS certifications.

He then covered over-the-air (OTA) listening using blogs, books and videos, followed by shortwave listening, which, interestingly, requires no licence. The talk grew more intriguing when it turned to RAMBO, a type of attack that uses radio frequencies emitted by a computer’s Random Access Memory, which can lead to secrets being leaked across air gaps.

Key moments:

  • The world of shortwave and amateur radio continues to enable access to sensitive communications
  • Examples of “radio laundry” included listening to cross-border data, RF emissions, and geo-mapping critical supply chains
  • Ethical dilemmas surfaced: How do we balance freedom to explore airwaves vs. respecting privacy?
  • Cases on TETRA communications were placed under scrutiny.

His storytelling and radio demos ranged from storm disruptions to experimental antennas, reminding us that radio tech is far from obsolete.

Where governance fits in an AI world

Next, a panel discussed ‘Governance 2035: Risk and Resilience’. Consultant Joe Mayo, Kamil Mahajan, CEO of Advice Bytes, cybersecurity professional Stefania Lauciello, and John Brady, a risk programme manager, covered a wide range of topics with an eye on strategic foresight.

  • AI governance is not just about models but about organisational maturity
  • Stefania emphasised starting with compliance maturity as a base
  • The “black box” problem was a focal point: how do we ensure transparency in AI decision-making?
  • The panel agreed that AI compliance must scale with innovation, and success lies in engaging governance early.

Ransomware: the AI angle

Faithful Chiagoziem, a PhD researcher at University College Dublin, delivered a gripping and eye-opening session entitled ‘Ransomware Attacks in the Age of AI’. He walked us through the evolution of ransomware, from its early days in 1989 through CryptoLocker to modern AI-powered ransomware.

His session was a chilling reminder that AI is a double-edged sword in the cybersecurity space. He said that ransomware is now intelligent, self-adaptive, and automated. Two other points stood out from his talk: first, the rise of autonomous attack systems that mimic voices or generate phishing content is terrifying; second, the future of ransomware lies in targeting trust, with systems that go beyond files to manipulate decision-making processes.

Ann Leslie, a cloud risk and controls expert with IBM, delivered a high-impact session, stating plainly: “Your AI business strategy can’t succeed without AI governance.” Her insights included a finding that 75 per cent of CEOs say GenAI both surprises and scares them, yet few are prepared to govern it. She said enterprises need to define an AI pipeline that includes adoption, security, and resilience.

She said 70 per cent of digital transformation efforts fail without proper governance, and urged organisations to build a “dream team” for AI governance that includes legal, security, and ethical leaders.

Quantum clarity and a call to action

The next session tackled quantum computing in a refreshingly clear manner. Alisha Sharma, a longstanding cybersecurity practitioner and expert in governance, risk and compliance, took on this notoriously complex area. Her talk covered subjects like quantum entanglement, Shor’s versus Grover’s algorithms, and post-quantum cryptography (PQC) using lattice-based, code-based, and multivariate algorithms. She also spoke about NIST’s PQC selections: Kyber, Dilithium, Falcon, and SPHINCS+. Her core message was that the cryptographic foundation we rely on today will not hold up in a quantum-ready world. That’s why we must start preparing now.
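To make the contrast between the two algorithms concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the talk): Shor’s algorithm breaks RSA and elliptic-curve cryptography outright, while Grover’s search roughly halves the effective strength of a symmetric key.

```python
# Illustrative arithmetic only: effective security of symmetric keys
# under a quantum adversary, per the standard Grover result.

def grover_effective_bits(key_bits: int) -> int:
    """Grover's algorithm searches N = 2^k keys in roughly sqrt(N) steps,
    so a k-bit symmetric key offers about k/2 bits of quantum security."""
    return key_bits // 2

# AES-128 drops to ~64-bit security (uncomfortable); AES-256 retains ~128 bits.
print(grover_effective_bits(128))  # -> 64
print(grover_effective_bits(256))  # -> 128
```

This arithmetic is why symmetric schemes like AES-256 are generally considered quantum-resilient with larger keys, while RSA and ECC must be replaced outright by PQC schemes such as the NIST selections mentioned above.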

The final panel of the day was hosted by Lee Bristow with speakers Jennifer O’Brien, head of technology audit with Canada Life, and Dr. Wendy Ng. Their discussion focused on digital twins and their transformative potential across industries. This was followed by the showstopper: Laura, an AI model that mimicked a human, co-presented alongside Tony Hughes, as the pair highlighted the origins of AI, the changing workforce, and how automation frees humans.

The closing message of “save the humans” was both clever and sobering. It reminded us that in the race toward innovation, our humanity must remain the North Star. This conference wasn’t just about buzzwords. It was a call to action for professionals, regulators, technologists, and businesses to come together and shape a responsible digital future. The AI revolution isn’t coming. It’s here. And the decisions we make today will determine whether it empowers or endangers us.

Let’s make them wisely.

Pameela George is a junior data protection consultant with BH Consulting.
