So, your company is struggling with security.
You’ve identified risks and, perhaps unsurprisingly, many of them revolve around the weakest part of the network – the human operating system.
So you decide to do something about it and draw up some pretty comprehensive security policies and procedures, which is, of course, a good starting point.
The only problem, you see, is that no-one outside of your security function has the faintest bit of interest in reading them.
And who can blame them really? If you don’t work in InfoSec I can imagine they come across as being a tad… dull.
So what can you do about that?
Well, you have a few choices really.
Option number one involves minimal work and a short annual test so you can ‘say’ your business has a security training and awareness program. Well done, you’ve passed, but let’s face it, your program isn’t all that, is it?
With the second option you’ll have to put some work in, and likely a large wad of cash too, as you set up a comprehensive program that attempts to engage your employees, emphasising how security is a feature of their lives away from work too, as you look to develop a culture of security over time. This is far and away the best option – trust me, I know.
But there is a third option which is a bit radical to say the least.
This final approach, mooted in a study with a ridiculously long name (Defending Data: Turning Cybersecurity Inside Out With Corporate Leadership Perspectives on Reshaping Our Information Protection Practices) is less carrot and far more stick.
The survey, commissioned by Nuix, determined that human behaviour was the biggest threat to an organisation’s security, and that employers were increasingly less tolerant of the lax cybersecurity practices bred from their employees’ attitude toward safeguarding the company’s information and other digital assets.
As a result, Nuix suggests that corporations will become far more intolerant toward risky behaviour and, as such, will likely begin penalising staff who “invite a data breach.”
Hmmm, ok, I guess you do need something in place to deal with recklessness and stupidity, but Nuix goes on to suggest that employees should also be penalised if they “misunderstand, misinterpret, or miscalculate longstanding security policies and procedures.”
Is that fair?
The answer perhaps isn’t as obvious as you may first think – each case would have to be judged on its merits, after all – but I would also wonder how many companies have sufficiently communicated their security policies and procedures in the first place.
I would guess the answer amounts to a small percentage, leading me to conclude that any notion of ‘fault’ should actually lie elsewhere.
So, if companies (including yours) are going to penalise employees for not being up to date on all of their security policies, who is going to police the writing and dissemination of those documents in the first place?
And is anyone on hand to check that the policies have been written in a manner that not only makes them clear to the non-technical among your staff, but also that they are delivered in a way that is sufficiently engaging that the information within them is retained for any length of time?