From 15 to 17 October, I had the privilege of being part of the ISACA Europe Conference 2025, an event that brought together some of the most insightful voices in technology, privacy, and governance.
For me, it was not just about attending sessions; it was about sharing the stage. As a first-time speaker representing BH Consulting, I delivered a session entitled “The Dark Side of AI: Trading Privacy for Protection”, exploring the ethical dilemmas that arise when Artificial Intelligence moves faster than regulation. But what made this experience truly enriching was the opportunity to listen, learn, and engage with other thought leaders who are shaping the conversation around AI governance, cyber resilience, and human-centric ethics.
Opening Remarks – Erik Prusch, ISACA CEO
Erik Prusch kicked off the event with a warm welcome, setting the stage for days packed with innovation, responsibility, and forward thinking. His vision for the future of technology was both thought-provoking and valuable.
Debra Searle – Choose Your Attitude: It’s Not Magic, It’s Your Mindset
The opening keynote by Debra Searle, adventurer and entrepreneur, set the tone for the entire conference. Her story of rowing solo across the Atlantic was more than an account of endurance; it was a lesson in mindset. She reminded us that attitude is everything, especially when facing the storms of uncertainty: “We can’t control the waves, but we can control how we respond to them.” Her words resonated deeply, particularly as I reflected on my own journey of resilience: balancing personal challenges, professional growth, and the pursuit of purpose.
Eric Jeffrey — AI and Cybersecurity: Balancing Opportunity and Threat
Eric Jeffrey’s session dived deep into the complex intersection of artificial intelligence and cybersecurity. He explored how AI simultaneously strengthens and challenges security ecosystems, from predictive defence and automated detection to adversarial attacks and data-poisoning risks. His key point was clear: AI’s strength is only as reliable as our understanding of it. Organisations must ensure AI explainability, transparency, and accountability, particularly when these tools are used in critical defence contexts.
Steven Lawrence — The Right Attitude in Risk Management
Steven Lawrence brought the conversation from systems to mindsets. His session on “The Right Attitude and Approach in Risk Management” emphasised that effective risk control begins with people, not policies. He explained the different AI models in detail and highlighted how governance is sustained through culture: a culture where awareness, curiosity, and ownership replace checkbox compliance. His insights resonated deeply with our work in data protection and AI governance at BH Consulting, reinforcing that resilience thrives where trust and communication are built into everyday operations.
Dr. Valerie Lyons — DPIA and FRIA: Embedding Ethics into Compliance
Valerie’s session took the audience deep into the evolving regulatory landscape of Data Protection Impact Assessments (DPIAs) under Article 35 of the GDPR and Fundamental Rights Impact Assessments (FRIAs) under Article 27 of the EU AI Act. She began by highlighting that these are two parallel pieces of EU legislation that must increasingly interact. While DPIAs focus on data protection, FRIAs expand the scope to include broader human rights concerns such as fairness, dignity, and non-discrimination.
Dr. Valerie Lyons quoted Giovanni Buttarelli’s 2016 plenary remark:
“It’s not enough to follow the letter of the law, we must also follow its spirit.”
She illustrated this with a compelling real-world case: a Dutch driver was fined for allegedly using her phone while driving, but was later proven innocent, as she had merely been holding an ice pack after a dental procedure. The example demonstrated how AI systems and automated decision-making can misinterpret context, underscoring the need for fundamental rights safeguards alongside data protection. Referencing Alessandro Mantelero’s 2018 ‘Blueprint for AI’, Valerie explained how the FRIA requirement emerged to ensure human rights are integrated into AI risk governance from design to deployment.
Her “FRIA 101” slide broke down the essentials:
- Who? Deployers of high-risk AI systems (including public bodies and employers) must conduct FRIAs.
- When? FRIAs must be completed before deployment.
- What? They must document how the system is used, who may be affected, the nature of the risks involved, and what happens if things go wrong, including governance and redress mechanisms.
 
She also walked us through the 10 criteria that elevate risk, such as evaluation and scoring, automated decision-making with legal effect, systematic monitoring, sensitive or large-scale data use, cross-border transfers, and processing that prevents individuals from exercising their rights.
Dr. Tapiwa Chiwewe: Quantum Computing and the Next Frontier
Dr. Tapiwa Chiwewe took us into the fascinating world of quantum computing, bridging theory and real-world applications. His talk highlighted how this emerging technology could revolutionise optimisation, logistics, machine learning, and cybersecurity while also demanding new ways of thinking about preparedness and collaboration. The key takeaway? Quantum computing isn’t just a technological leap; it’s a mindset shift. It challenges us to re-imagine what’s possible when science, ethics, and innovation meet.
Panel session: Third-Party AI and the Governance Challenge
The session “Global Perspectives on Third-Party AI” brought together Mary Carmichael, Sanaz Sadoughi, and Hayriye Cinar for a powerful discussion on managing AI sourced from external vendors, exploring how most companies are now buying or inheriting AI tools. It was one of the most relatable sessions for many of us working at the intersection of governance and innovation. The speakers began with a simple but confronting question: “How many organisations here actually have an AI policy in place?” Using OpenAI’s ChatGPT as a live example, the panel explored how companies are embedding generative AI into fraud monitoring, customer service, and analytics to improve cost efficiency and time performance. Yet this very speed of adoption has created governance blind spots, where security, ethical use, and regulatory readiness lag behind innovation. They also discussed Shadow AI, the unapproved or unmonitored use of AI systems within organisations, and how it poses serious risks to confidentiality, fairness, and compliance. The session underscored three essentials for every enterprise adopting third-party AI:
- Visibility: Know where and how AI tools are used across the organisation.
- Inventory: Maintain a central register of all deployed or tested AI systems.
- Contractual Requirements: Ensure vendor agreements cover accountability, explainability, and ongoing monitoring.
 
The speakers called for the development of a clear internal AI framework or policy to guide responsible adoption. They stressed the importance of transparency about what data goes into AI tools, ensuring vendor accountability, and continuously evaluating bias and fairness in outcomes. At the governance level, boards were urged to categorise AI-related risks, address confidentiality exposures, and recognise that algorithmic bias can cause regulatory damage just as much as data breaches can. This panel was a strong reminder that AI governance is no longer a technical project; it is a boardroom priority demanding cross-functional ownership, visibility, and ethical foresight.
Panel session: Strengthening Cyber Resilience through Policy and Regulation
Moderated by Beth Maundrill with insights from Chris Dimitriadis (ISACA) and John Maguire (UK Department for Science, Innovation and Technology), this discussion addressed the urgent need for resilience in an era of escalating cyber threats. With 74% of large UK businesses and 67% of medium-sized firms reporting breaches last year, the panel emphasised that cyber resilience must now be seen as a leadership issue, not merely an IT one. Chris spoke about the delicate balance between regulation and innovation, while John outlined how public-private partnerships are essential to sustaining trust. The consensus was clear: collaboration beats isolation. Resilience isn’t built overnight; it’s cultivated through shared knowledge, adaptive policy, and a commitment to continuous improvement.
Keren Elazari: The Hacker’s Perspective — Trust and the Human Element
One of the most electrifying sessions at ISACA London 2025 was led by Keren Elazari, cybersecurity expert, researcher, and self-described “friendly hacker.” Drawing on her personal journey, from her early fascination with computers to her evolution into one of the world’s most respected hacker voices, she invited the audience to look at the digital landscape through the eyes of a hacker. Keren began with a powerful reflection: all of the world’s knowledge once sat in computers, but so did all of its vulnerabilities. She devoted her life to hacking not as an act of intrusion, but as a means to expose the fragility of technology.
Her talk explored the concept of “malicious innovation”, where AI, automation, and adversarial machine learning are exploited for ransomware, misinformation, and fraud. She described ransomware not as a tool but as a service industry, with professionalised ecosystems, affiliate programs, and even help desks for attackers. Groups like Scattered Spider and Dark Angels were mentioned as Gen Z-driven ransomware collectives whose sophistication has transformed cyberattacks into multi-million-dollar operations. The 2023 ransomware strikes on Caesars Entertainment and MGM Resorts served as real-world examples of how ransomware has evolved into a strategic, not merely technical, threat. Keren also detailed how AI is being used by both defenders and attackers, calling this the “AI paradox”: the same tools that detect vulnerabilities can also be weaponised to exploit them. From crypto-mining transactions to deepfake avatars, hackers are now leveraging generative AI to create synthetic IDs, impersonate employees remotely, and infiltrate corporate systems with alarming ease.
She coined the term “Infection Vectors 2.0” to describe how AI-enabled tools have broadened attack surfaces, merging old-school hacking tactics with modern automation.
Her message, however, was not one of fear but of empowerment: “AI will not replace humans, but humans who know how to use AI will replace those who don’t.”
Keren urged cybersecurity professionals to embrace AI literacy, maintain visibility into their own systems, and stay alert to Shadow AI: the ungoverned use of AI tools that may expose organisations to new threat vectors. She ended on an empowering note: “Change is the only constant. Keep calm, carry on, and adapt, because humans, not AI, should remain the decision-makers.”
This session wasn’t just a technical briefing; it was a wake-up call. Keren reminded everyone that the future of cybersecurity is not man versus machine but man and machine, responsibly aligned.
Closing Reflection
As I walked out of ISACA London 2025, one thought stayed with me: technology will keep evolving, but ethics and empathy must evolve with it. From the courage of Debra Searle to the insights of Dr Valerie Lyons and Keren Elazari, every session reinforced a single truth: the future belongs to those who lead with conscience. To everyone at ISACA London and my incredible BH Consulting team, thank you for making this experience one to remember.
This blog was written by Pameela George, Data Protection Analyst with BH Consulting.