Facebook’s Exploding Posts: Mission Impossible vs. Robin Of Sherwood

This Facebook post will self-destruct in 5 seconds.

Well, ok, maybe not 5 seconds. But your latest Facebook post could soon be gone in a timescale chosen by you (well, ok, anywhere between 1 hour and a week).

A small number of users spotted the new feature in Facebook’s iOS app earlier this week; it allows users to set a deletion date at the time they create a new post.

A Facebook spokesperson confirmed the existence of the trial feature, saying:

“We’re running a small pilot of a feature on Facebook for iOS that lets people schedule deletion of their posts in advance.”

Small-scale trials of new features are nothing new for the social networking giant, which is constantly looking to evolve. Facebook users will be grateful, however, that this one is not as secret as, say, testing how users react to positive and negative news – the covert emotion experiment which recently surfaced and did little to enhance the reputation of a company that many fail to associate with privacy protection.

That said, Facebook may be learning what its users want, as evidenced by the recent addition of the ‘privacy dinosaur’ aka the new privacy checkup tool.

So, that means all users will be able to self-destruct all of their postings in the future, wiping them off Facebook’s servers for ever more, right?

Well, before you see Facebook as a means for posting questionable or sensitive content, you may wish to consider that the answer to that question is not clear. It looks like the removal from a user’s timeline will be permanent, but I’d be very surprised if Facebook would want to let anything fall off its own servers (we know, for example, that it keeps a record of anything typed into the status box, regardless of whether the user subsequently decides to publish it).

Then of course there is the fact that virtually nothing shared on the web is private ever again anyway – the kids of today ain’t half bright you know and they can take screenshots and everything.

So before you even start contemplating using Facebook’s potential new service, or the Slingshot app, or Snapchat to post something you otherwise may have kept to yourself (or should have) remember that nothing that is published can be unpublished and privacy can sometimes be an illusion. Or, as my boyhood hero Michael Praed would say, “Nothing is forgotten. Nothing is ever forgotten.”

Microsoft Sings We’re Not Gonna Take It, Invites Contempt

“See you in court!”

Aaargggh, no thanks, that sounds like a mighty stressful and bank balance-busting exercise in futility to me.

But then again, I’m not Microsoft so perhaps I’ve got good reason to not want to end up in front of a judge. Not that I’ve done anything wrong of course. Honest. Just ask GCHQ – its Minority Report division already knows I’m a saint now and will continue to be so in the future too.

Microsoft, however, is so keen to have its say in court that it has invited proceedings upon itself. Kind of.

After US authorities made demands over emails stored on a Microsoft server in Dublin, Ireland, the software giant said no dice and has now taken the unusual step of asking the US government to hold it in contempt of court so that it can accelerate the privacy-based case onto the appeals stage.

The case centres on a series of emails which are said to be relevant to an investigation into drug trafficking but, despite the potential gravity of that case, Microsoft disagrees with the government view that data held overseas is there to be grabbed, instead suggesting that US jurisdiction should terminate in line with its physical borders.

An outstanding warrant, about which almost nothing is known publicly, has caused Microsoft much consternation with the company promising to appeal any adverse ruling “promptly.” The company objected to the search on many levels, including the fact that it believes an existing precedent applies:

“The U.S. has entered into many bilateral agreements establishing specific procedures for obtaining physical evidence in another country including a recently-updated agreement with Ireland. We think the same procedures should apply in the online world.”

In a blog post, the company also highlights how it is taking the moral high ground in making a stand for privacy and also cites backers such as Apple, Cisco and the EFF.

None of this is to say that Microsoft feels it is above the law though, merely that it believes that government should play by the rules and follow established processes:

“We appreciate the vital importance of public safety, and we believe the government should be able to obtain evidence necessary to investigate a possible crime. We just believe the government should follow the processes it has established for obtaining physical evidence outside the United States.”

Now, after some procedural confusion, US District Judge Loretta Preska has found Microsoft in contempt, allowing the company to proceed with its appeal immediately. Meanwhile, Microsoft has come to an agreement with the Department of Justice that allows it to escape punishment for that ruling, though the government said it retains the right to seek sanctions at a later date if it feels that necessary. The full stipulation says:

  1. Microsoft has not fully complied with the warrant, and Microsoft does not intend to comply while it in good faith seeks further review of this Court’s July 31 decision rejecting Microsoft’s challenge to the Warrant.
  2. While Microsoft continues to believe that a contempt order is not required to perfect an appeal, it agrees that the entry of an order of contempt would eliminate any jurisdictional issues on appeal. Thus, while reserving its rights to appeal any contempt order and the underlying July 31 ruling, Microsoft concurs with the Government that entry of such an order will avoid delays and facilitate a prompt appeal in this case.
  3. The parties further agree that contempt sanctions need not be imposed at this time. The Government, however, reserves its right to seek sanctions, in addition to the contempt order, in the case of (a) materially changed circumstances in the underlying investigation, or (b) the Second Circuit’s issuance of the mandate in the appeal, if this Court’s order is affirmed and Microsoft continues not to comply with it.

Trading Privacy For Security In the Job Market

Personal data from Facebook, Twitter and other social media sites will be monitored more closely by employers over the next decade, according to a new report from PwC, which says that one third of young people would happily trade their privacy for a little job security.

The future of work: A journey to 2022 report surveyed 10,000 workers around the world, as well as 500 human resources professionals, in order to gauge their attitudes towards their social media use being monitored by their employers.

The report suggests that data available through Facebook, Twitter and other social channels could be used by employers to gain an insight into what motivates their workforce, along with other information including why staff change jobs and what could be done to improve their wellbeing within the organisation.

John Harding, human resource services partner at PwC in Manchester, said:

“Just as advertisers and retailers are using data from customers’ online and social media activity to tailor their shopping experience, organisations could soon start using workers’ personal data (with their permission) to measure and anticipate performance and retention issues.

This sort of data profiling could also extend to real-time monitoring of employees’ health, with proactive health guidance to help reduce sick leave. Key to the success of organisations being able to use employee data will be developing measurable benefits for those who hand over their data and building trust through clear rules about how data is acquired, used and shared.”

According to the research, half of the global workforce will be aged 32 or under by 2020, which will see a shift in attitude towards the use of technology and personal data. The PwC report says that these younger workers are far more relaxed about the sharing of data than previous generations, with 36% saying their employer is welcome to their personal data.

Whilst I can see why an employer would love to gain access to an employee’s social postings, either by viewing what is publicly available or via explicit consent, I struggle to see how the staff member gains from such an agreement.

By giving an employer permission to access their social media accounts, the individual would be giving up their privacy for very little return. The employer would gain all sorts of insight into how their staff think and what they do with their time away from the workplace, but I fail to see how that could be used to motivate them further, or increase their feeling of wellbeing. How giving up access to their social media accounts would lead to the claimed increase in job security, I do not know.

This just seems to be another case of the general populace giving up their rights for very little in return. Or, as Benjamin Franklin might have said, “Those who surrender their social media accounts for job security will not have, nor do they deserve, either one.”

Considering the laid back attitude many youngsters have towards the sharing of their personal data these days I do wonder if, in the future, that approach will come back to bite them where it hurts.

Tesco Hudl – Every Little Data Reset Flaw DOESN’T Help

If you have some old tech you want to sell then eBay may be your first port of call. As much as I dislike the site and some of its practices, it still presents a means of putting unwanted goods in front of a huge number of eyeballs. But the problem is that it has generated a marketplace that appeals to a massive number of people, many of whom are not as security conscious as perhaps they could be.

I myself have bought a second-hand laptop in years gone by, only to discover that the previous owner had made absolutely no attempt whatsoever to clear their private data from the machine. I discovered his favourite websites (I hope he visited THAT site when his wife wasn’t around), I know who he banked with, I wasn’t partial to his taste in music, but I did agree strongly with the Liverpool FC background he left on it.

Ultimately, what I learned is that some people lack the security awareness, or are too lazy, to wipe their personal data from computers and other devices before disposing of them via an auction site or the local tip. Based upon hard drives I’ve been given by friends, it is a widespread problem which we can only hope to eradicate by raising the issue and educating people.

But sometimes education isn’t enough.

Take the Hudl tablet for example. Ken Munro of Pen Test Partners recently conducted an experiment, in conjunction with the BBC, in which he examined the data deletion systems on Android devices.

Purchasing second-hand Hudls from eBay, Munro discovered that even those previous owners who had wiped the device before shipping were at risk of having their confidential data accessed.

Munro found that the device retained information even after a factory reset due to a flaw in the Rockchip processor’s firmware. The known bug allowed him to read and write to the device using freely available software. Extracting information only took minutes but the analysis of the data typically took a couple of hours per machine. Once done, however, Munro was able to determine PIN codes, wi-fi keys, cookies and other browsing data that would have allowed him to spoof the original owner.

A Tesco spokesperson told the BBC that:

“Customers should always ensure all personal information is removed prior to giving away or selling any mobile device. To guarantee this, customers should use a data wipe program.”

The spokesperson went on to say that any Hudls returned to Tesco would be securely wiped by the company, but urged users to visit the Get Safe Online website if they have any further privacy-related concerns.

Marc Rogers, principal researcher at Lookout, explained further, saying that a secure wipe should be used before disposing of any data-storing device. Such a wipe will overwrite all onboard memory with ones and zeroes, rendering it useless to any third party that later tries to access it. Unfortunately, though, most manufacturers have adopted a different approach to factory resets, he said:

“There’s an Android function to wipe data and most manufacturers are using that. But all that does is remove the index of where data is and does not delete data at all.”
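The distinction Rogers describes can be sketched in a few lines. This is a toy model of my own, not Android’s actual implementation: a “factory reset” that merely clears the index leaves the underlying bytes intact for forensic tools to find, whereas a secure wipe overwrites them.

```python
# Toy model: index-only "factory reset" vs an overwriting secure wipe.
# The storage contents and index layout here are entirely made up.
storage = bytearray(b"PIN:1234;WIFI:hunter2;" + bytes(10))
index = {"secrets": (0, 22)}   # filename -> (offset, length)

def factory_reset():
    index.clear()              # forget where the data is; bytes untouched

def secure_wipe():
    for i in range(len(storage)):
        storage[i] = 0         # overwrite every byte

factory_reset()
assert b"PIN:1234" in storage      # still recoverable after the "reset"
secure_wipe()
assert b"PIN:1234" not in storage  # actually gone
```

The first assertion is the whole problem: after the index-only reset the PIN is still sitting in flash, exactly as Munro found on the Hudls.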

Lookout also noted that police had revealed the average underground price for a second-hand smartphone with personal data on it to be around £600, which just goes to show the potential value of that data to the crook who ends up buying it.

As sales of smartphones and tablets increase, in part due to their convenience and portability, it is increasingly likely that owners will entrust more and more data to them. When those devices are subsequently sold on the selfies left in memory may provide the new owner with a few chuckles, but there is a chance that the banking data, credit card numbers and less than safe for work snaps may leave the original owner with something far more tangible than the thought of a stranger laughing at them.

So, if you are selling a Hudl, or any other device that has previously held your personal data, ensure that you wipe it securely before placing that listing.

Yahoo Set To Enable Email Encryption For All Users By 2015

In the same week that Google announced that it will give a search ranking boost to security-conscious websites, Yahoo has now revealed that it too will take a proactive stance on encryption.

The company announced at Black Hat that it will apply end-to-end encryption to its email services before the end of 2015.

The move is likely in response to the Edward Snowden revelations about government surveillance that have prompted many tech firms to assess their stance on privacy and encryption.

Thus far, Google has taken the biggest strides, with the aforementioned ranking change following previous announcements of support for end-to-end encryption in its Mail, Drive and Search products.

The change will likely be welcomed by Yahoo’s 273 million email account holders who had previously been left behind as other email providers adopted encryption.

Yahoo’s encryption will not hide details such as who has emailed who, or the contents of the subject line, but the contents of the message will be covered by a version of PGP encryption which has so far not been cracked.

In an interview with the Wall Street Journal, Yahoo chief information security officer Alex Stamos said:

“We have to make it clear to people it is not secret you’re emailing your priest. But the content of what you’re emailing him is secret.”

PGP relies upon both the sender and receiver of an email having their own encryption key, which could potentially lead to similar problems to those experienced at Lavabit, which closed down after being forced to hand its keys over to the authorities.
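For the curious, the key-pair idea at the heart of PGP can be illustrated with textbook RSA using deliberately tiny primes. This is a toy sketch for illustration only – hopelessly insecure and nothing like the key sizes or hybrid scheme real PGP uses:

```python
# Textbook RSA with toy primes -- illustration only, utterly insecure.
p, q = 61, 53
n = p * q                  # modulus, part of the public key
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: anyone may encrypt with (n, e)
d = pow(e, -1, phi)        # private exponent: only the key holder decrypts

message = 65
ciphertext = pow(message, e, n)    # sender uses the recipient's public key
plaintext = pow(ciphertext, d, n)  # recipient uses their private key
assert plaintext == message
```

The asymmetry is the point: anyone can write to you, but only the holder of the private exponent can read it – and, conversely, whoever is compelled to surrender a private key gives up every message encrypted to it.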

Yahoo and Google, however, both claim that they will not hand keys over, not least because they are massive companies with the funds required to finance a large number of lawyers, with Stamos saying:

“That’s very different from a publicly traded multibillion dollar company with an army of lawyers who would love to take this argument all the way to the Supreme Court.”

Mark James, security specialist at ESET, welcomed the news but pointed out that the average man in the street may not understand how to take advantage of the change:

“It’s great that two of the largest internet email providers will be offering us the ability to send end-to-end encrypted emails to each other. After Google announcing it was doing the same thing a few months ago it is good to see another leading email provider following suit.

It won’t mean a lot to the average user but anyone who wants to protect their emails when using these providers will be able to do so by using these browser extensions.

So what does it actually mean? Well once the browser extension is added and configured you will be able to send an email with the contents completely scrambled to anyone except the sender and receiver. No one will be able to read the content. There are many encryption tools available for those that want to install and use them but for the average user they are often scary to set up. I for one welcome any type of “easy” security.”

I personally hope that Yahoo and Google do make their email encryption easily understandable by the less savvy web users out there, because we seemingly live in a society where having nothing to hide doesn’t mean no-one will go looking anyway.

Raspberry Pi, Wearables And The Low Cost Of Surveillance

Are you whiling the time away until you get your first smartwatch or preparing to run to the local store to buy the latest fitness tracker?

If so, you may wish to know that snoops can track such devices at a fraction of the price you will be paying for the latest in wearable tech.

New research from Symantec has shown that it is possible to track individuals, even in crowded places, via cheap and readily accessible hardware.

The security firm took a Raspberry Pi and added components including a Bluetooth 4.0 adapter, SD card and battery pack. All-in, the home-made tracker cost around $75, which is about £44 or €56.

The company took a number of such devices to busy public locations in both Switzerland and Ireland, as well as a major sporting event, and ran them in passive mode. By simply scanning the airwaves for signals broadcast by wearables, the RasPis were able to successfully track each and every one of them via their serial numbers or a combination of other factors, prompting the researchers to say:

“In our testing, we found that all the devices we encountered can be easily tracked using the unique hardware address that they transmit. Some devices (depending on configuration) may allow for remote querying, through which information such as the serial number or a combination of characteristics of the device can be discovered by a third party from a short distance away without making any physical contact with the device.”
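The passive attack Symantec describes is conceptually simple: each Bluetooth LE advertisement carries a hardware address, so a scanner only needs to log (location, address) pairs and group them. A minimal sketch with made-up data – the addresses, locations and sighting records below are all invented:

```python
from collections import defaultdict

# Hypothetical sighting log: (scanner_location, advertised_hardware_address).
sightings = [
    ("zurich_station", "AA:BB:CC:00:11:22"),
    ("dublin_park",    "AA:BB:CC:00:11:22"),
    ("dublin_park",    "DE:AD:BE:EF:00:01"),
]

# Group sightings by address: a fixed address links one wearable's movements.
tracks = defaultdict(list)
for location, addr in sightings:
    tracks[addr].append(location)

assert tracks["AA:BB:CC:00:11:22"] == ["zurich_station", "dublin_park"]
```

No pairing, querying or physical contact is needed for this grouping step – which is why a fixed, broadcast hardware address is enough to follow a device between cities.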

The researchers also delved further into wearable tech and the associated apps, looking for other potential security and privacy concerns, and found several.

Symantec discovered that 52% of the self-tracking apps it examined did not have a privacy policy which, it says, may suggest that the developers do not take security and privacy as seriously as they perhaps could.

Researchers also discovered a large amount of unintentional data leakage with the average app contacting 5 domains (one even contacted 14 domains) in a short period of time. Whilst there may be legitimate reasons for a fitness or other tracking app to contact a number of domains for the transmission of data or to serve ads, for instance, Symantec said that the number of domains being contacted increased the risks of data leakage through human error, social engineering or careless or malicious handling of data.

The researchers also discovered other concerns, such as weak session management, which could lead to session hijacking and, in turn, further problems.

Symantec’s blog post ends with the company pointing out that self-tracking apps and devices are not synonymous with privacy and suggesting that those who value their privacy will not get involved in self-tracking in the first place (I agree).

However, knowing that many users will continue to use fitness trackers, smartwatches, etc., regardless, the company offers up the following tips which I would describe as being little more than damage limitation rather than a security solution:

  • Use a screen lock or password to prevent unauthorized access to your device
  • Do not reuse the same user name and password between different sites
  • Use strong passwords
  • Turn off Bluetooth when not required
  • Be wary of sites and services asking for unnecessary or excessive information
  • Be careful when using social sharing features
  • Avoid sharing location details on social media
  • Avoid apps and services that do not prominently display a privacy policy
  • Read and understand the privacy policy of app and services
  • Install app and operating system updates when available
  • Use a device-based security solution if available
  • Use full device encryption if available

If you would like more information on Symantec’s research a whitepaper can be found here.

Information Commissioner’s Office Reports On Big Data And Privacy

The Information Commissioner’s Office (ICO) has today released a new report that considers how big data will operate within existing data protection laws which ensure that personal information is:

  • Fairly and lawfully processed
  • Processed for limited purposes
  • Adequate, relevant and not excessive
  • Accurate and up to date
  • Not kept for longer than is necessary
  • Processed in line with your rights
  • Secure
  • Not transferred to other countries without adequate protection

The Big data and data protection report accepts that the use of big data can bring benefits to companies and doesn’t wish to stifle innovation. That said, the ICO is keen to point out that organisations still have an obligation to keep information both private and secure, offering the following practical advice for dealing with personal information used in big data analytics:

  • Personal data – Does your big data project need to use personal data at all? If you are using personal data, can it be anonymised? If you are processing personal data you have to comply with the Data Protection Act.
  • Privacy impact assessments – Carry out a privacy impact assessment to understand how the processing will affect the people concerned. Are you using personal data to identify general trends or to make decisions that affect individuals?
  • Repurposing data – If you are repurposing data, consider whether the new purpose is incompatible with the original purpose, in data protection terms, and whether you need to get consent. If you are buying in personal data from elsewhere, you need to practice due diligence and ensure that you have a data protection condition for your processing.
  • Data minimisation – Big data analytics is not an excuse for stockpiling data or keeping it longer than you need for your business purposes, just in case it might be useful. Long term uses must be articulated or justifiable, even if all the detail of the future use is not known.
  • Transparency – Be as transparent and open as possible about what you are doing. Explain the purposes, implications and benefits of the analytics. Think of innovative and effective ways to convey this to the people concerned.
  • Subject access – People have a right to see the data you are processing about them. Design systems that make it easy for you to collate this information. Think about enabling people to access their data online in a re-usable format.
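The ICO’s first question – can the personal data be anonymised? – is often approached via pseudonymisation. Here is a minimal keyed-hashing sketch of my own (not from the ICO report); note that pseudonymised data can still count as personal data if individuals remain re-identifiable, so this alone does not discharge the Data Protection Act obligations:

```python
import hashlib
import hmac
import os

# Pseudonymisation via HMAC: the same customer ID always maps to the same
# token (so trends can still be analysed), but the token cannot be reversed
# without the secret key. The identifier names are illustrative.
key = os.urandom(32)  # secret key, to be stored separately from the data

def pseudonymise(customer_id: str) -> str:
    return hmac.new(key, customer_id.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("customer-42")
assert pseudonymise("customer-42") == token   # stable: analytics still work
assert pseudonymise("customer-43") != token   # distinct IDs stay distinct
```

A keyed hash rather than a plain one matters here: without the key, anyone holding the dataset could simply hash candidate IDs and match them back to individuals.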

The ICO’s head of policy delivery, Steve Wood, says that there is a buzz around how big data can be used for social benefits as well as the more obvious economic advantages it can provide. He did, however, highlight how organisations are struggling to understand how they can put big data to innovative new uses without falling foul of the law. Individuals, too, are expressing concern over how their personal data is being used in big data scenarios.

The answer, he says, begins with organisations being more transparent about how they are using big data:

“What we’re saying in this report is that many of the challenges of compliance can be overcome by being open about what you’re doing. Organisations need to think of innovative ways to tell customers what they want to do and what they’re hoping to achieve.

Not only does that go a long way toward complying with the law, but there are benefits from being seen as responsible custodians of data.”

The ICO report says that openness is a key factor, pointing out how organisations need to ensure that personal information is only used in ways previously communicated to users. The complexity of big data, it says, should not be used as an excuse to use data without consent.

Responding to concerns that existing data protection law is insufficient in the face of big data, Wood added that:

“Big data can work within the established data protection principles. The basic data protection principles already established in UK and EU law are flexible enough to cover big data. Applying those principles involves asking all the questions that anyone undertaking big data ought to be asking. Big data is not a game that is played by different rules.
The principles are still fit for purpose but organisations need to innovate when applying them.”

The organisation notes how the area of big data is fast-evolving, leading it to conclude that its guidance will likely change over time. In light of that, the ICO positively encourages feedback which can be sent to [email protected] up until September 12 of this year.

Are We The Architects Of Our Own Insecurity?

It’s a well-known fact that men are obsessed with something. (Note to self: make that two things but don’t mention the first).

Go to any shopping centre on a Saturday and you’ll notice all manner of sideways glances, secret peeks and longing stares as men of all ages centre their attention on anything but their significant others.

The object of their desire, of course, is technology. Like bees around a honey pot, we can’t help ourselves – new tech captivates us in ways we cannot explain and creates a longing and desire that nothing else can satisfy.

Boredom with the old and interest in the new is fed by some sort of crazy attention deficit that is ingrained into our very DNA, I swear.

Technology manufacturers love it though. Such an interest, that is almost always backed up by demand where funds permit, drives them to create new products like there is no tomorrow.

But the never-ending rush to bring new ‘toys’ to market does have drawbacks.

The biggest one that I can see is the fact that the security issues surrounding new technology never seem to be given the attention they deserve ahead of a product release and are instead only considered later, in response to particular incidents or third-party research (think IoT for instance).

Additionally, as we now know thanks to Edward Snowden, some governments have their own agendas when it comes to technology, seeing computers, phones and tablets as an extension to their national surveillance campaigns.

Some nations are not standing for it though, as evidenced by China’s claims on Friday that the iPhone represents a security threat to the state. The national TV broadcaster criticised the iPhone’s “Frequent Locations” function, saying that access to the data “could glean sensitive information such as the country’s economic situation or ‘even state secrets.'”

Apple hit back by saying that it “does not track users’ locations – Apple has never done so and has no plans to ever do so,” adding that it had “never worked with any government agency from any country to create a backdoor in any of our products or services. We have also never allowed access to our servers. And we never will. It’s something we feel very strongly about.”

Whilst the iPhone is still freely available for sale in the country at this time I would not be surprised if that changes very soon, given the fact that China also moved quickly to outlaw the use of Windows 8 within government agencies.

The same state TV service branded Microsoft’s operating system as a threat to the nation’s cybersecurity, saying that it posed a “big challenge” and suggesting that the NSA may be using it to gather data.

Then there is the case of Russia which, shortly after Snowden’s defection, swiftly swapped computer hardware within the Kremlin for good old-fashioned typewriters in order to improve its security whilst creating a means for linking any created documents to a particular machine.

By way of contrast, the United Kingdom government is having a whale of a time with all this new technology, seizing upon the perceived threat of terrorism, paedophiles, etc., to rush in a law – which I think is draconian in nature – that will allow it to hold onto metadata for an entire year (if you want to know why that should concern you, whether or not you think you have ‘something to hide’, and why it may pose a threat to democracy, then I highly recommend this recent post from Sarah Clarke, in which she looks into the proposals in detail).

The fact that the UK government is doing a rush job on getting the proposals through Parliament leaves little to no opportunity for MPs to debate the Bill, and just as little time for us mere mortals to do the same. What is noticeable, though, is that we, as a nation, are not standing up when practices that threaten our security and privacy are brought to our attention, in the way that the likes of China and Russia are.

Maybe we don’t need to because, after all, we live in a democracy and our elected officials are there at our whim to do as we ask.

But then again I don’t feel that way myself – I think that we are allowing technology to control our lives to some degree rather than make them simpler and we are too blind to see what is happening.

I believe that new technology is a good thing but the way in which much of it is utilised these days warrants a level of scrutiny and subsequent control that just isn’t there right now. Alternatively, the insecurity could be all mine.

Is Privacy Now The Preserve Of The Rich, Famous And Scandalous?

Article 8 of the Human Rights Act 1998 says (emphasis mine):

Everyone has the right to respect for his private and family life, his home and his correspondence.

There shall be no interference by a public authority with the exercise of this right except such as is in accordance with the law and is necessary in a democratic society in the interests of national security, public safety or the economic well-being of the country, for the prevention of disorder or crime, for the protection of health or morals, or for the protection of the rights and freedoms of others.

Now I don’t know about you, but I think that sounds pretty good in principle – we all have the same right to privacy, just as we all have the same rights under the legal system (cough).

And, considering the multiple revelations from NSA whistleblowing ex-contractor Edward Snowden, it’s just as well too, given how our privacy is being walked all over every day.

But, just like the legal system (arguably), the right to privacy seems to operate on a tiered system in which the mere act of being human doesn’t seem to equal a level playing field in terms of rights.

As I’m sure most people are already aware, Google recently launched its ‘right to be forgotten’ request form in response to a ruling by the European Court of Justice (ECJ) which affords citizens the ability to attempt to magic away search results they don’t like.

Whilst I can see many good reasons for such a form to exist, and some genuine reasons why someone would wish to remove certain topics from Google’s search results, it seems as though most removal requests surround past indiscretions and unfavourable news, neither of which is wholly what the ECJ had in mind when it handed down its ruling – in my opinion.

Now we have the news today that the rich and famous have been having their homes blurred out from Google’s Street View. En masse.

The likes of Tony Blair, Sir Paul McCartney and Lily Allen have all had their properties effectively removed from the mapping program, presumably on the grounds that they are somehow special.

Now I know that Google will consider requests from the little man in the street when it comes to Street View – a friend of mine discovered that a car which shouldn’t really have been parked on his drive was there for all to see when the service first launched, and he now has an updated view of his property, sans said vehicle – but the web giant isn’t about to remove my house or his entirely, is it?

Well, apparently it will. Google says:

“We provide easily accessible tools allowing users to request further blurring of any image that features the user, their family, their car or their home. In addition to the automatic blurring of faces and license plates, we will blur the entire car, house, or person when a user makes this request for additional blurring. Users can also request the removal of images that feature inappropriate content (for example: nudity or violence).”

And, if you’ve found an image that you would like further blurred, or an image that you believe contains objectionable content, just follow these steps:

  1. Locate the image in Street View.
  2. Click “Report a problem” in the bottom-right of the image window.
  3. Complete the form and click “Submit”.

That’s it. We’ll review your report promptly.

But good luck with that – there are many reports on the web of people submitting such requests, sometimes multiple times, and getting no joy whatsoever, or seeing the images of their houses return after a while.

I guess they just aren’t rich or famous enough huh? Or is it because the rich and famous can back up requests with letters from lawyers? I don’t know about that but it certainly seems to me that some people are more equal than others in the world of privacy – so take it upon yourselves to do what you can to maintain what little you have left of yours.

Google Glass Didn’t Kill The Video Star

Have you just splurged £1,000 on Google Glass? If so, you may be tempted to wear your expensive fashion faux pas everywhere in order to show off how much of a [insert appropriate noun here] you really are.

But one place you won’t be able to wear the headset is in your local cinema.

The UK’s filmhouses won’t allow it, you see. Not because they think it is more annoying than popcorn-munching, screaming kids who cannot be controlled by their parents, but rather because they are worried about piracy.

Several movie theatres decided to ban Google’s controversial headgear just six days after it made its debut in Blighty, echoing similar moves in the US.

The fears over piracy are overstated of course – Google Glass can only record about 30-45 minutes of video before the battery runs out – and who in this day and age wants to watch a jerky movie full of people’s heads, filmed by a cameraman with a nervous twitch?

Not only that, but the device makes no secret of when it’s active, as described by a Google spokesperson:

“We recommend any cinemas concerned about Glass to treat the device as they treat similar devices like mobile phones: simply ask wearers to turn it off before the film starts. Broadly speaking, we also think it’s best to have direct and first-hand experience with Glass before creating policies around it. The fact that Glass is worn above the eyes and the screen lights up whenever it’s activated makes it a fairly lousy device for recording things secretly.”

But the news does make the point about how technology can often be seen as a threat, especially when it is perceived to threaten the livelihood of big businesses.

Whilst I believe that the way in which cinemas have reacted is somewhat over the top, the situation does demonstrate an effective use of information security – the theatres have obviously assessed a risk to the intellectual property under their control and have put measures in place to mitigate that threat. As long as they have ushers on hand to ensure that their policy of not allowing Google Glass to be deployed within their environment is enforced, they will have successfully protected the data broadcast over the large screen and it will be mission accomplished.

It is a shame, and a reflection on society as a whole, that securing something of financial value is given such a high priority in comparison to protecting privacy rights though.

I’m sure many of you have heard of Glass wearers being referred to as “glassholes” and that derision exists for a reason – wearers of Google’s eyewear can potentially use the device to take pictures and video of people without their knowledge.

Recent research has also demonstrated how a wearer can capture a PIN code typed onto an iPad from a distance of 3 metres, and there is no reason to believe that the device couldn’t snaffle up an ATM PIN just as easily, something which prompted Brian Honan to tell The Register that:

“Devices that can capture images such as camera, mobile phones, PCs, etc. have always posed a threat to sensitive information. Anyone with a device with these capabilities can record sensitive data or capture other information such as PINs or passwords.

People should always be aware of their environment when entering passwords or PIN numbers to ensure they are not overlooked. Even when there is no-one around you should assume that someone could observe what you are doing by cameras with zoom lens, Google Glass, hidden cameras, or indeed CCTV cameras.”

And today a Dutch researcher has demonstrated how an attacker could compromise Google Glass and effectively see through the eyes of the wearer, taking pictures and video, and sending them back to a computer under their control.

Whilst such privacy concerns will undoubtedly be addressed in the not-too-distant future, they certainly will not be overcome within a mere six days.

Whilst security is undoubtedly important, I can only hope that privacy doesn’t become the poor relative when it comes to regulating and controlling new technology and systems.

But I won’t hold my breath – after all, money talks far louder than the man or woman in the street being ogled by a glasshole.