Deepfakes are in more places than we realise, and they’re more convincing than we expect. Some time ago, I bought an electric guitar that looked like a genuine name-brand classic, right down to the familiar logo on the headstock; it turned out to be an elaborate knock-off. The line between the real and the artificial is becoming blurrier and harder to discern. And that brings me to cybersecurity.

We’re all seeing the impact of artificial intelligence in business, with its potential to boost productivity, save time and create economic growth. As adoption of AI tools continues to grow, businesses are using them to create more personalised experiences. The flipside is that the same capability can be put to negative use. The technology has a lot of business leaders excited, but there’s another eager group of early adopters: cybercriminals.

I’ve been researching some of their AI-assisted attack techniques, and I’ve found that many of them mimic conventional technology, making it harder for the average person to tell what’s real from what’s fake.

Phishing: now it’s personal

One of the most common legitimate examples of AI is chatbots that create a personalised experience for someone visiting a website. Consumer-facing businesses in particular are starting to use generative AI to increase customer retention. But cybercriminals will do something similar in a different context.

Classic social engineering uses ‘hooks’ to lure the unsuspecting reader into acting, creating a sense of urgency that compels the victim to respond. The typical ‘spoofed’ message invokes a figure of authority such as a senior manager, head of finance or CEO. The email usually targets a junior member of staff in the accounts department, with an urgent instruction to pay an invoice straight away to a new supplier or an unfamiliar bank account.

Phishing scams are almost as old as the internet itself and remain a favourite technique for cybercriminals, but there were usually telltale signs that something was off. Now, however, generative AI can eliminate the traditional red flags, such as obvious misspellings or grammatical errors, and give the messages a more professional writing style. The effect is to make the fakes far harder to detect.

AI deepfakes: fool me once…

Where I see AI really contributing to the problem of business email compromise is by allowing attackers to make their phishing and scam emails much more specific and harder to detect. They can use large language models that can absorb vast amounts of real-time information from news and from various social media channels. This way, AI-generated scam emails can incorporate in-the-moment details that make the lures more sophisticated and the messages more believable.

The technology isn’t restricted to the written word, either. Generative AI can now create realistic deepfake likenesses of a person. It can take as little as three seconds of audio to clone someone’s voice well enough to trick a listener. It’s not hard to see the potential for very effective voice scams or, if the target is a high-value individual, video scams.

So how can we counter the problem? From what I have seen, a number of factors cause phishing issues, particularly in Microsoft 365 environments.

  • Advancements in AI
  • Cybercriminals adopting AI at a faster pace
  • Implementing AI without appropriate security ‘guardrails’

Here are some cyber hygiene measures which can reduce the risk.

Assess your entire Microsoft 365 platform

Email is a primary communication method for many businesses, so it’s also a prime target for attackers. Although Microsoft 365 has many native security features, including Exchange Online Protection, it’s worth assessing your entire M365 platform to help mitigate the risk of successful AI phishing attempts. An important element of this will be to ensure DMARC (supported by and aligned to SPF and DKIM) is set up. Although DMARC itself doesn’t directly combat AI, it strengthens email authentication, making it harder for attackers to spoof legitimate sender domains. It’s also particularly helpful if you use subdomains.
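To make that concrete: a DMARC policy is simply a DNS TXT record published at `_dmarc.<yourdomain>`, made up of tag=value pairs. The record below is purely illustrative (the domain and reporting address are assumptions, not real values), and the small Python sketch shows how such a record breaks down into its tags:

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record (as published at _dmarc.<domain>) into its tags."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags

# Illustrative record: reject mail that fails authentication, apply the same
# policy to subdomains (sp=reject), and email aggregate reports (rua).
record = "v=DMARC1; p=reject; sp=reject; rua=mailto:dmarc-reports@example.com"
policy = parse_dmarc(record)
```

The `p` tag is the key decision: `none` only monitors, `quarantine` sends failures to junk, and `reject` refuses them outright; the `sp` tag covers the subdomain case mentioned above.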

Examine all security configurations

In fact, to protect the business from these increased risks, I would argue it’s worth carrying out a cradle-to-grave examination of the entire security configuration within M365, including SharePoint, OneDrive and Teams, covering identity and access management controls and how your data is protected. To avoid being left exposed, I suggest getting a security professional to check these settings.

Enable multi-factor authentication for all accounts 

Make sure you have the appropriate type of multi-factor authentication (MFA) enabled for all email accounts. This helps to prevent an unauthorised person from accessing an account even if the password is compromised. The second factor might be a one-time password sent by text message to the user’s phone. I also advise using MFA together with appropriate conditional access policies based on the user’s location or IP address. (Done well, much of this security layer stays invisible to the end user.)
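As an aside on how those one-time codes work under the hood: authenticator apps typically implement the HOTP standard (RFC 4226), deriving a short code from a shared secret and a counter with HMAC-SHA1 (the time-based variant, TOTP, just uses the current 30-second window as the counter). A minimal sketch, for illustration only:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Derive an RFC 4226 one-time password from a shared secret and counter."""
    # HMAC-SHA1 over the counter encoded as a big-endian 8-byte integer
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation: the low 4 bits of the last byte choose an offset
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    # Keep the last `digits` decimal digits, zero-padded
    return str(code % 10 ** digits).zfill(digits)
```

Because both sides hold the secret and agree on the counter, the server can verify the code without it ever travelling over a network in reusable form, which is what makes this factor stronger than an SMS code that can be intercepted.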

Implement a comprehensive anti-phishing policy using Microsoft Defender

Microsoft Defender can add a significant layer of protection:

  • First contact safety tip: If someone receives an email from a sender for the first time, Microsoft will add a banner at the top of the email to notify the recipient, so they can exercise extra caution with emails from unknown senders.
  • User impersonation safety tip: If the sender was faked, Microsoft Defender will add the following text to the email to warn the recipient: “This sender appears similar to someone who previously sent you an email but may not be that person.”
  • Unauthenticated senders symbol for “spoof”: This adds a question mark to the sender’s photo in the “From” box if the message fails SPF or DKIM checks and also fails DMARC.
  • Unusual character safety tip: If the sender’s email address contains special, unusual characters (e.g., maths symbols), Microsoft Defender will add the following text to the email to warn the recipient: “The email address <email address> includes unexpected letters or numbers. We recommend you don’t interact with this message.”

Configure your security settings to suit your needs

Microsoft Defender’s email and collaboration settings are very granular, letting you choose from a wide range of policies and rules covering data loss prevention and phishing protection. We make this part of a security assessment, and we recommend configuring the additional policies in your Microsoft 365 tenancy to reflect the threats we’re seeing.

There’s no simple solution to tackling AI-based cybercrime, and no single toolset to stop it. It needs a number of things to come together. As well as the technical controls, another vital element is an informed user base. I’m a firm believer that a fully engaged workforce is a more vigilant one. So, carrying out phishing awareness campaigns, together with training built on real-world scenarios drawn from known AI-enabled attacks, can help everyone in the organisation navigate the new and emerging threats.

I’ll return to the theme of training in a future blog as it’s critical to take the right approach. Anyone can make a mistake – especially with AI’s ability to create convincing fakes. It’s why I believe a no-blame culture is vital to fostering positive security.

And speaking of fakes: about that guitar I bought? The low price told me the deal was too good to be true. In this case, I went in with my eyes open, and that story ends well because I can get a tune out of it. I accepted the risk and willingly parted with my money. But when it comes to scams, we’re right to be vigilant: eyes open, wallets closed.

John McWade is Head of Technical Strategy and Services

 

About the Author: Gordon Smith

Gordon Smith is a freelance journalist, copywriter and content consultant based in Ireland. He has covered information security, cyber risk and data privacy in print and online for over two decades, from national media including the Irish Times, Irish Independent, and Business Post, to specialist online news sites and titles including Siliconrepublic.com, TechPro, Help Net Security and the Law Society Gazette. He also hosts the annual IRISSCON conference in Dublin – Ireland’s longest running infosecurity event – and has produced content for a number of security industry organisations and business groups.
