Google takes social engineering to task, says no more deceptive download buttons

In life there are only a few certainties – death, taxes and online deception.

Fortunately, Google has decided to take one of those to task. It may not be the scariest of the three – after all, not even Google can avoid paying its dues – but it is an important move nonetheless.

Yesterday, the search giant announced that it was going to crack down on what it calls “social engineering ads.”

What does it mean by that?

Well, we’ve all seen plenty of examples – buttons on websites that say “Download,” “Play,” “Update,” “Install,” etc.


But instead of being genuinely useful links to more content hosted by the site owner, they are actually adverts that, if you are lucky, lead to sellers of related products looking to entice you to buy from them.

Lucas Ballard, a software engineer on the company’s Safe Browsing team, explained that the new policy is an extension of the social engineering policy revealed in November, which aims to protect people from tactics that attempt to trick them into giving up passwords and other sensitive information.

Ballard said embedded web content would also now be classified as social engineering should it:

  • Pretend to act, or look and feel, like a trusted entity — like your own device or browser, or the website itself, or

  • Try to trick you into doing something you’d only do for a trusted entity — like sharing a password or calling tech support

Should you attempt to visit such a site in the near future you will be presented with an unmissable warning that the site in question is deceptive in nature:


Google isn’t stopping there though – the tech giant says it plans to keep on making further improvements to its Safe Browsing – which already warns people if they are about to surf onto a site infected with malware – in order to improve web safety.
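For the curious, the verdicts behind those Safe Browsing warnings can also be queried programmatically via Google’s Safe Browsing Lookup API (v4). Below is a minimal sketch of the request body that API expects – the client ID, client version and example URL are placeholders of my own, and a real API key is needed to actually send the request:

```python
def threat_request(urls, client_id="example-client", client_version="1.0"):
    """Build a Safe Browsing v4 threatMatches:find request body
    checking the given URLs for social engineering and malware."""
    return {
        "client": {"clientId": client_id, "clientVersion": client_version},
        "threatInfo": {
            "threatTypes": ["SOCIAL_ENGINEERING", "MALWARE"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            "threatEntries": [{"url": u} for u in urls],
        },
    }

# The body would then be POSTed (with a valid API key) to:
#   https://safebrowsing.googleapis.com/v4/threatMatches:find?key=YOUR_API_KEY
# A non-empty "matches" array in the response means a URL is flagged.
```

A non-browser client – a mail gateway, say – could use the same lookup to reject deceptive links before a user ever sees them.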

While the above is undoubtedly a great initiative that will help keep people away from sites that obviously don’t have their best interests at heart, I can only hope that all these additional web warnings will not instil a false sense of security in the average web surfer.

Google warnings do a great job of directing people away from websites that are bad for their computer’s health, but I personally know some people who treat them as something of a crutch: they assume the absence of a warning means a site must be legit, and you can imagine the potential problems with that.

So, to wrap up, nice move Google, but please don’t forget that browser warnings are not an excuse for abdicating responsibility on the web – keep your eyes open and your wits about you. Don’t open unfamiliar sites, don’t click strange links found in emails and never give your personal information away to any site you don’t trust implicitly.

Why not negotiating with ransomware can still cost your business dear

Last week I wrote about Lincolnshire County Council’s ransomware problem which, at the time, looked to involve a demand for a million quid in return for unlocking its data.

Well, it seems that figure may have been slightly exaggerated – reports now suggest the figure may have been closer to £350 ($500).

Even so, the council’s leaders were determined not to pay up – which is the best response in my opinion, given the fact that ransomware is created by criminals who, by definition, are not the most trustworthy of souls.

And so, after spending the entire weekend working “24/7,” the IT team had the computer systems back up and running, according to Judith Hetherington-Smith, the council’s chief information officer.

Reiterating comments she made last week, Hetherington-Smith once again said no personal data had been compromised:

We’ve done a lot of checking and we, and the police, are confident that the data is safe. Nothing has been lost.

So, ransom payment avoided, all data safe.

Complete success, huh?

Not so fast with that assessment I say.

Because this episode was still exceedingly costly for Lincolnshire County Council.

From Tuesday through to the weekend, staff had to leave their computer systems well alone and revert to using pen and paper. Given how computerised just about every working environment is these days, how many hours of work were lost there? Hundreds I would guess, if not more.

Then, I presume, there would have been a not insignificant additional wage bill arising from all that 24/7 weekend overtime – I’d be absolutely gobsmacked if any UK council worker got out of bed on a Saturday or Sunday purely for the love of the job, or simply to collect their usual salary come the end of the month.

And then there are the questions raised by what’s happened here.

Post-ransomware, Hetherington-Smith said the council would be reviewing its security systems, presumably to discover, first, why an infected email attachment landed in somebody’s inbox at all and, second, why that attachment was opened.

Then there is the quote about ensuring that, in future, the anti-virus software would be the latest available – which suggests to me that updates were not being installed as and when they became available, something even home users know to do, hopefully.

So, overall, quite a sorry episode for the council methinks, but not that surprising either.

I suspect there are many other councils, corporations and small to medium businesses in exactly the same boat; they just don’t know it yet.

Will they find out before it’s too late?

Will you?

Hand over £1 million or all your sausage are belong to us

Well, ok, not your sausages, I mean data, but with this story being about Lincolnshire County Council, I just couldn’t resist.

Earlier this week, 300 of the council’s computers were infected with the same piece of malware, leaving the authority with no choice but to unplug its entire system.

According to The Lincolnite, a suspected breach exposed emails, medical records, addresses and bank details of local residents, though the BBC later reported the issue as ransomware.

Speaking for the council, chief information officer Judith Hetherington-Smith said only a small number of files had been affected, though she added that “people can only use pens and paper, we’ve gone back a few years.”

Hetherington-Smith went on to explain that the attack was quick but, as soon as it was identified, the network plug was pulled in a bid to save as much data as possible, adding:

Some damage is always done before you get to that point – and some files have been locked by the software.

Fortunately, the council, which denies finding any evidence of a breach, had followed rule number one of data protection – keeping regular backups – and so it expects most of the infected files will be available for use again by the beginning of next week.
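Rule number one is worth automating. The sketch below is a minimal, hypothetical illustration of backup verification in Python – hash every file into a manifest at backup time, then re-hash later to spot files that ransomware has encrypted or deleted (the function names are my own):

```python
import hashlib
from pathlib import Path

def make_manifest(root):
    """SHA-256 digest of every file under root, keyed by relative path."""
    root = Path(root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify_backup(root, manifest):
    """Return relative paths that are missing or whose contents changed
    since the manifest was taken."""
    current = make_manifest(root)
    return sorted(
        path for path, digest in manifest.items()
        if current.get(path) != digest
    )
```

Run against a restored backup, a non-empty result from `verify_backup` tells you exactly which files cannot be trusted – far better than discovering the damage one locked spreadsheet at a time.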

As for how the council systems became infected with the ransomware in the first place, I guess there is both good news and bad.

The good news, as far as the phrase can be stretched, is the fact that the ransomware appears to be a new strain, never before seen by security experts, or at least not the ones on the authority’s payroll at any rate.

The bad news, however, is the means by which the council became “the first victim” of this “zero-day malware” – it appears as though a staff member forgot their security awareness training and opened a dodgy email attachment.

Even so, Lincolnshire County Council says it has every faith in its security procedures and, with the ICO aware and the police investigating, we’ll find out how true that is, soon enough.

Let he who maintains his own privacy throw the first stone

As a society, we continue to be in a state of conflict when it comes to data. On the one hand, we’re often outraged over regular news of data breaches, while on the other hand we think nothing of trading our identities for a chocolate bar or less, often volunteering intimate data such as medical or financial information.

So said Raj Samani in a Computer Weekly article tied to Data Protection Day.

And you know what? I think he is completely correct.

While data breaches are bad news for everyone – the companies concerned, their employees, shareholders and, most of all, those whose data was compromised – the sad truth is that we humans have double standards when it comes to our own personal information: we complain that our telco was hacked while taking to Twitter to announce where we are, who we’re with and what we’re likely to be doing for the next three weeks.


But then we’ve always had an uncomfortable relationship with privacy, haven’t we?

Or at least in evaluating its worth.

Those who give up their privacy for security deserve neither.

It’s not like the privacy debate is new – we’ve been here many times before and discussions about its worth certainly pre-date the internet but, with the advent of the web, the discussions have certainly become more frequent. Or more visible at least.

Such that we are at the point where every tiny technological advance is accompanied by questions about the effect it may have on that most valuable of worthless commodities we hold so dear, only to toss away when everyone is looking.

Take fitness trackers for example.

The world and his dog wrote about the privacy concerns surrounding such devices when they first appeared on the market, noting how valuable the accompanying data could be to insurers and how the consumer could ultimately end up paying a high price for sharing their heartbeat with a server they never bothered to check the location of.

Yet no-one tossed their bands in the bin.

They just upgraded them to watches instead.

And we were all up late at night pulling our hair out (you wondered why everyone working in InfoSec was follicly challenged, right?) when Snowden told us all what we all secretly already knew about the NSA, GCHQ, etc.

But here we are on the internet still. Business as usual.

How many of you are using a VPN? PGP? Tor?

Yeah, that’s what I thought.

So, as Data Protection Day draws to a close, I say it’s time to think.

Why do you recognise these 24 hours as significant?

Is it because you were asked to write an article about it? Or did you think reading about it was the ‘cool’ thing to do today?

Or do you genuinely care about your privacy?

If you do, remember this isn’t January the first: you don’t get to make promises today that you have no intention of keeping tomorrow – not if you want to maintain the minimal amount of privacy you still have.

And if you are loose with your own personal information you don’t get to throw a stone next time a big data breach hits the news.

That’s only fair, right?

Smartphone shopping: “Advert to aisle 4 please”

If you’ve ever worked strange hours you may well have experienced the weirdness of late night shopping.

Navigating pallets, cages and assorted debris on the floor while staff – who are best left out of the limelight of day time trading – scurry around filling shelves is a challenge on a par with finding any kind of customer service in the dimly moonlit hours.

Survive the health and safety at work challenge though, and you’ll be rewarded with a boot load of beer and, if you’ve been especially forward in your thinking, perhaps a couple of days’ food too. Failing that, the garage on the way home sells Pringles, eh.

But whatever you come away with, you’ll be relieved that you weren’t followed around the store – after all, all the other people shopping at 1 am are weirdos right?

Not so fast, buster… you probably were tailed; you just weren’t aware enough to notice.

After all, it wasn’t that 18-stone unwashed guy with nothing but an axe head in his basket who was watching you. It wasn’t the security guard tilting and panning his cameras as you navigated aisle 14 either.

It was your phone.

If you’d shopped during the day like sane people do, you’d have been awake enough to realise that a 3-for-2 offer display lit up in front of your favourite beverage, but only when YOU approached it.


Because retailers are becoming increasingly interested in the next logical step up from loyalty cards – they not only want to know who you are and what you buy, they also want to know whereabouts you are in the store.

Ostensibly a way of encouraging extra sales and thus ‘helping’ the customer, such technology nonetheless has to be kept in check.

Even though it has some benefits – helping add more to a customer’s trolley, crowd control and informing retail buying decisions – it also has the potential to infringe upon consumers’ privacy if not reined in with checks and balances.

As Dr Simon Rice, Group Manager for the Technology team which provides technical expertise to the ICO, says:

When this type of technology is used to generate aggregate statistics about daily visitor numbers or to generate an alert if an area is overcrowded, it can be done in a privacy-friendly manner.


Even if the identification of individuals is not the intended purpose, the implications of intelligent video analytics for privacy, data protection, and other human rights are still significant.

Rice offered up a list of recommendations to help tackle the potential privacy issues caused by this type of tech, highlighting how the key point was transparency – individuals should not be kept in the dark when it comes to having their data collected. Likewise they should be informed how that data will be used.

Whether the thought of retail tracking enthrals or enrages you, it’s here to stay. If you don’t believe me, just take a look at the US, where it’s been prevalent for some time now, along with its fair share of privacy issues.

TalkTalk customers WalkWalk after data breach

Data breaches, though unfortunate, don’t have to signal the death knell for your business.

As many companies have discovered, the fallout can be severe but, in time, recovery can be possible, as long as lessons are learned and, far more importantly, the initial incident response is sound.

As any business owner or senior executive will tell you, having a well-drilled incident response plan in place, which has been practiced and ingrained into the minds of the incident response team, is a key part of any company’s long-term planning.

But what happens when an organisation has no plan in place, or fails to execute with any kind of, erm, professionalism?


TalkTalk, as I’m sure you all know, was hacked back in October 2015.

All in, around 150,000 of its customers saw their personal details accessed by hackers, resulting in losses of between £30m and £35m for the company.

Save for a small amount of residual resentment, that should have been that for TalkTalk but, alas, Dido Harding, the firm’s chief executive, began putting in the sort of media performances that left the security community wincing… multiple times.

And so the upshot was…

… around a quarter of a million people left TalkTalk in the wake of the hack.

Despite some gains over the same period, the operator lost more customers than all the other UK broadband providers combined.

And I suspect not all of them left for pastures new due to the allure of faster broadband or Premier League footie!

Much more likely, according to Imran Choudhary, consumer insight director at Kantar Worldpanel ComTech, is the spectre of brand damage:

Customers have lost faith in TalkTalk as a trustworthy brand.

TalkTalk continues to offer some of the most attractive promotions across the home services market and almost a third of its new customers did choose it for this reason, but there can be no doubt that it lost potential customers following the major data hack. If it’s to recover from recent events TalkTalk will need to offer more than just good value.

I personally don’t disagree with Choudhary, who rightly makes the point that TalkTalk is pulling in new customers with aggressive promotions, but, whichever way you want to look at the post-hack telecoms company, the truth is that it has lost a whole heap of money.

How different things would have been with a different approach in the immediate aftermath of the breach we’ll never know, but if I were a betting man I’d say the company would be in a much better position financially and its reputation would be somewhat less tattered than I believe it currently is.

So, the questions today are, how is your incident response plan? Who are your incident responders? Have they been trained? Have you tested the plan?

If not, what are you waiting for?

A data breach?

21% rise in phishing costs UK consumers £174 million

According to figures from GetSafeOnline, a UK government-backed initiative, 2015 was a very good year indeed for cyber criminals, or at least that subset which sends out phishing emails.

Brit computer users spent the last year looking at 21% more fraudulent emails than the year before and, collectively, lost £174.4 million to them.

As a result, GetSafeOnline has partnered up with the National Fraud Intelligence Bureau, as well as other leading law authorities, to launch an awareness campaign designed to educate the British public about the dangers of phishing scams, as well as other social engineering scams such as fake phone calls.

And such a campaign is extremely welcome, given that the year from November 2014 to October 2015 saw a total of 95,556 phishing scams reported to Action Fraud (and you can bet the unreported total was a whole lot larger than that!)

To put those figures into some sort of context, research from GetSafeOnline shows that just over one quarter (26%) of all reported online crime victims say they have fallen prey to either phishing emails and/or bogus telephone calls.

More alarmingly, over 77% of all the reported scams were of the phishing variety, with a further 12% being delivered over the phone.

Beyond simply stealing money and personal information, 29% of the reported phishing emails were also found to be delivering malware, either in the form of an attachment or through a malicious link embedded in the message.
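Spotting one classic phishing trick – link text that names one site while the href quietly points somewhere else – can be sketched with nothing but the Python standard library. This is an illustrative toy, not a real mail filter, and the class and function names are my own:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

def _hostname(s):
    """Hostname of s if it looks like a URL, else None."""
    s = s.strip()
    if s.startswith("www."):
        s = "http://" + s
    if not s.startswith(("http://", "https://")):
        return None
    return urlparse(s).hostname

class LinkAuditor(HTMLParser):
    """Collects anchors whose visible text names a different host
    than the one the href actually points at."""
    def __init__(self):
        super().__init__()
        self._href = None
        self._text = []
        self.suspicious = []  # (visible text, actual href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href:
            text = "".join(self._text).strip()
            shown, real = _hostname(text), _hostname(self._href)
            if shown and real and shown != real:
                self.suspicious.append((text, self._href))
            self._href = None

def audit_html(html):
    auditor = LinkAuditor()
    auditor.feed(html)
    return auditor.suspicious
```

Feed it the HTML body of a message and any pairs it returns are links whose displayed destination doesn’t match the real one – exactly the bait a “your bank” phishing email relies on.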

As for lures, you won’t be surprised to learn that criminals continue to leverage popular brands and government bodies including Apple, BT and HMRC.

As for when the phishing emails arrive in mailboxes, there isn’t much detail in general (I’d put money on late Friday afternoons being popular), but one interesting fact that came to light was that the most popular day bar none was 21 October – can anyone guess what happened on that day and why it was relevant?*

Interestingly, while awareness of how to avoid phishing scams appears to be quite low, there is an acknowledgement of the issue among the public – GetSafeOnline reports that 22% of people are more concerned about phishing than anything else, a key reason, no doubt, why it is now looking to raise understanding with its campaign.

Speaking of which, Tony Neate, CEO of Get Safe Online said:

Social engineering is becoming ever more targeted and personal, which is why it’s no surprise that the number of cases is on the rise

What’s worrying, however, is the complex nature of these scams and how they tap perfectly into feelings that make us panic – if we get an email purporting to come from someone we trust (such as our bank) about something that is emotive to us all (money) and then demand that we act urgently, it’s almost like the perfect storm.

That’s why we’re so pleased to be teaming up with the banks, City of London Police, Cifas and FFAUK to encourage people to think twice before they act and not to let panic override common sense.

If you have fallen victim to a phishing email, are a banking fraud victim or spot unusual activity on your account, contact your bank immediately in order to limit the potential damage. You can also contact Action Fraud, the UK’s national fraud reporting centre, by calling 0300 123 20 40 or by visiting

*21 Oct was the day the TalkTalk breach came to light, which shows how cybercriminals leverage the news to increase their profits

Paying the penalty for not reading the security policies

So, your company is struggling with security.

You’ve identified risks and, perhaps unsurprisingly, many of them revolve around the weakest part of the network – the human operating system.

So you decide to do something about it and draw up some pretty comprehensive security policies and procedures which is, of course, a good starting point.

The problem, you see, is that no-one outside of your security function has the faintest bit of interest in reading them.

And who can blame them, really? If you don’t work in InfoSec, I can imagine they come across as being a tad… dull.

So what can you do about that?

Well, you have a few choices really.

Option number one involves minimal work and a short annual test so you can ‘say’ your business has a security training and awareness program. Well done, you’ve passed, but let’s face it, your program isn’t all that, is it?

With the second option you’ll have to put some work in, and likely a large wad of cash too as you set up a comprehensive program that attempts to engage your employees, emphasising how security is a feature of their lives away from work too, as you look to develop a culture of security over time. This is by far and away the best option – trust me, I know.

But there is a third option which is a bit radical to say the least.

This final approach, mooted in a study with a ridiculously long name (Defending Data: Turning Cybersecurity Inside Out With Corporate Leadership Perspectives on Reshaping Our Information Protection Practices) is less carrot and far more stick.

The survey, commissioned by Nuix, determined that human behaviour was the biggest threat to an organisation’s security, and that employers were increasingly intolerant of the lax cybersecurity practices bred from employees’ attitudes toward safeguarding the company’s information and other digital assets.

As a result, Nuix suggests that corporations will become far more intolerant of risky behaviour and, as such, will likely begin penalising staff who “invite a data breach.”

Hmmm, ok, I guess you do need something in place to deal with recklessness and stupidity, but Nuix goes on to suggest employees should also be penalised if they “misunderstand, misinterpret, or miscalculate longstanding security policies and procedures.”

Is that fair?

The answer isn’t perhaps as obvious as you may first think – each case would have to be judged on its merits, after all – but I would also wonder how many companies have sufficiently communicated their security policies and procedures in the first place.

I would guess the answer would actually amount to a small percentage, leading me to conclude that any notions of ‘fault’ should actually lie elsewhere.

So, if companies (including yours) are going to penalise employees for not being up to date on all of their security policies, who is going to police the writing and dissemination of those documents in the first place?

And is anyone on hand to check that the policies have been written in a manner that not only makes them clear to the non-technical among your staff, but also that they are delivered in a way that is sufficiently engaging that the information within them is retained for any length of time?

Speaking at the National Data Protection Conference

The 8th annual National Data Protection Conference will be held on January 27th and 28th at the Aviva Stadium. I am delighted to say I have been invited to address the conference again, on the topic of cybercrime and its impact on data protection.

There are a number of other excellent speakers from many different organisations and I am looking forward to hearing their perspectives, in particular on the new Data Protection regulations and directives that have recently been announced.

If you will be attending, do say hello.

Learn How to Move Securely to the Cloud

On February 18th and 19th in Maastricht I will be running a course on “Moving Securely to the Cloud – Key Issues, Risks and Compliance”. This course is being run with the European Institute of Public Administration (EIPA) and is aimed at those who are responsible for moving or managing data within the cloud. It will be of interest to EU officials, data privacy specialists, national civil servants, consultants, information professionals, as well as people working in the commercial sector and for NGOs.

This course provides participants with a comprehensive and structured guide on how to successfully move data to a cloud service provider while gaining assurance that the security of that data will not be compromised. The objective of the course is to enable participants to understand the concepts of security, risk and compliance, as they apply to the various cloud computing environments.

Learning methodology
This course will cover a series of topics that are particularly relevant to managing security in the cloud. The course will be lecture-driven, provided by experienced and recognised experts in the field. There will also be workshops to enable participants to apply what they have learnt in a meaningful and timely manner.

At the end of the training course, participants will have a good knowledge of the key security issues to consider when engaging with a cloud service provider; will understand the concepts of security, privacy, compliance and risk as they apply to cloud computing; will be able to relate those security concepts to an enterprise cloud computing environment; will appreciate the unique security risks and challenges that cloud computing brings; and will be prepared to respond correctly should a security incident occur within their cloud computing environment.

You can register for the course on the EIPA website