The Electronic Frontier Foundation has a fascinating graphic on which companies are doing what things to protect their customers’ – our – data in the post Prism/Snowden era.
What really leaps out is that the companies that provide consumer cloud services are on our side (Dropbox, Facebook, Google and Twitter); telecommunication companies are on their side (AT&T, Comcast, Verizon); and the main OS providers (Microsoft and Apple) aren’t really sure which side their bread is buttered on.
I finally got the email I’ve been waiting for. It’s from Adobe. It starts
As we announced on 3 October 2013, we recently discovered that an attacker illegally entered our network and may have obtained access to your Adobe ID and encrypted password. We currently have no indication that there has been unauthorised activity on your account.
To prevent unauthorised access to your account, we have reset your password…
Let’s have a look at this. “As we announced on 3 October 2013, we recently discovered…” What does recently mean? They announced on 3 October not because they had discovered the hack, but because Brian Krebs told the world that he had found stolen Adobe data on the internet. So when the data was actually stolen (it could have been months earlier) and when Adobe actually became aware of the theft (again, it could have been months earlier) are not known.
Let’s be charitable and say Adobe knew about it by 1 October.
They said that just under 3 million usernames and encrypted passwords may have been stolen. Since I don’t have an Adobe account, and since 3 million is relatively few in the overall scheme of things, I thought no more about it.
A few weeks later Adobe admitted that the true figure is nearer 38 million. That’s getting a bit more worrying, so I checked my browser’s stored passwords and my more recently adopted password manager. Still nothing. No Adobe account. And anyway, Adobe said very clearly that the company had reset all the passwords and notified the 38 million users. I had not been notified. I had nothing to worry about.
But then, about a week later, it emerged that it wasn’t a mere 3 million, nor a more worrying 38 million, but a colossal 150 million. Adobe had notified 38 million out of 150 million – but that is by no means the worst of it. When Paul Ducklin got hold of the database of stolen data, now easily available if you know where to look, a quick analysis showed the user’s email in plaintext, an encrypted password, and the user’s password hint in plaintext.
email addresses – you can infer a lot from an address: usually the user’s name and company. For example, Ken Westin at Tripwire looked through the Adobe hack and found 89,997 military addresses. “This is in addition to the more than 6,000 accounts from defense contractors such as Raytheon, Northrup Gruman [sic], General Dynamics and BAE Systems we also found,” he wrote. “Also, on the federal side, there were 433 FBI accounts, 82 NSA accounts and 5,000 NASA accounts.” So, choose your company, guess the user’s name, look through LinkedIn and Facebook and you’ve got enough for a pretty compelling targeted phishing attack.
encrypted passwords – passwords should be hashed and salted with a slow hashing algorithm; they should not be encrypted. Hashing means 150 million passwords need to be cracked; encryption means that one key needs to be cracked and all 150 million passwords are known.
password hints in plaintext – oh, really! Why bother cracking the passwords when the hint will let you guess it? What do you think is the password when the hint is ‘57’; or ‘the bad disciple’?
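The hashing-versus-encryption distinction is worth making concrete. Below is a minimal sketch of the salted, slow hashing Adobe should have used, built on Python’s standard-library PBKDF2; the iteration count and salt size here are illustrative assumptions, not a description of any real Adobe scheme:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # A fresh random salt per user means identical passwords
    # produce different stored hashes, defeating precomputed tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    _, digest = hash_password(password, salt, iterations)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Stored this way, there is no master key whose compromise reveals all 150 million passwords at once: each hash must be attacked individually, at 200,000 hash operations per guess.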
So Adobe really cocked up. They didn’t protect the data, they didn’t store it correctly, and they tried to minimise the apparent extent of the damage. And still it gets worse, because they then tried to suggest that we needn’t worry: most of these accounts aren’t real; they belong to people who just signed up to get promotions or freebies.
Here’s the real danger. In that great mass of one-off freebie-chasing accounts – numbering anything between 38 million and 150 million – are people who signed up, used a password that they can’t remember, and are completely unaware that their password is now compromised. What if these people signed up years ago, before password thefts became a dime a dozen, and lazily used the same password as they use for their email account? They have no way of retrieving that password, and so no way of knowing which, if any, of their other accounts have been compromised by Adobe’s failure to adequately protect it.
One final point. I said at the beginning that I had been expecting the email from Adobe. That’s because I checked with LastPass (which has a little routine that will tell you whether you’re included in the hacked data) and learnt that although I couldn’t remember ever creating an Adobe account, at some point I must have done, because there I was.
So, at least six weeks after it knew of the breach, Adobe bothers to tell me that someone “may have obtained access to [my] Adobe ID and encrypted password” when the world and his dog has access to that encrypted password. I know; Ken Westin, Brian Krebs and Paul Ducklin almost certainly know; LastPass and the hackers most definitely know; and anyone who cares to look will also know. Adobe, however, doesn’t know, and continues to insist that “an attacker… may have obtained access”.
How dare they, after all this time and all these mistakes, still try to save face at my expense?
I’ve had a comment on my latest Dropbox post (Is it safe to carry on using Dropbox (post Prism)? Yes and No: Part III) that I have rejected. This is a very heavily moderated blog, but I thought I’d explain why I rejected this one.
The comment started by saying, “As Dropbox stands today on its own, yes, completely agree that there is the *possibility* of your data being “looked in on” by people without your knowledge or permission.” It then added, “However, there are 3rd party services out there like xyznnn (www.xyznnn.com) that are completely tapproof, i.e. YOU hold the keys, not Dropbox or the 3rd party vendor. Meaning that your data cannot be accessed without you knowing about it. Read more in this blog post: xyznnn.”
It was, naturally, submitted by a member of the marketing department of the xyznnn company; so it is absolutely an attempt at advertising to the readers of this blog. That, in itself, is not enough for me to reject it. If such a comment adds value to the subject or will genuinely help the reader, I will still generally allow it.
But this one is flatly wrong. First of all, never trust anyone who says or implies that any security is unbreakable. In fact, if anyone says that, you can begin to distrust their understanding of security. So, rather than helping the readers, I consider claims such as “completely tapproof” and “your data cannot be accessed” to be misleading and potentially dangerous.
I will not knowingly help promote products that make statements I consider to verge on hyperbole and to be fundamentally inaccurate; there are simply no absolutes in security. And that is why this particular comment was rejected.
I got this note from a PR company working for a quite major security company. It said, “In response to the MOD being the victim of a cyber espionage attack that has led to the theft of key data…” and pointed to an article on V3.
That article does indeed say,
The Ministry of Defence (MoD) was the victim of a cyber espionage attack that led to the theft of key data, in the latest evidence of the sustained cyber threats facing the UK.
The comment from the PR company talked about the importance of protecting encryption keys. “Failure to retain custody of your encryption keys is a huge issue that essentially negates the benefits of encryption,” said the spokesman.
This is, of course, perfectly true and valid. But we should go back to the source of V3’s article: the latest 2013 annual report from the UK’s Intelligence and Security Committee. Not once does it use the phrase ‘key data’. In fact, not once does it mention encryption.
In fact it doesn’t even say that the MoD was the victim. What it says is,
Government departments are also targeted via attacks on industry suppliers which may hold government information on their own systems. We have been told that cyber espionage “[has] resulted in MOD data being stolen, ***. This has both security and financial consequences for the UK.”
So this should be a story about supply chain security, not about encryption keys.
It is from such little misunderstandings that global cyberwar evolves…
Over the last few days numerous IT magazines have run a story about a surge in customers for Swiss hosting companies. For example, “Artmotion has witnessed a 45% growth in revenue amid this new demand for heightened privacy,” says Computer Weekly.
Most of these stories have come from, yes, a post-PRISM press release issued by Artmotion. “Artmotion, for example,” says the press release, “has witnessed 45 per cent growth in revenue amid this new demand for heightened privacy.”
Why are companies moving to Switzerland? Well, remember that we now live in post-Snowden enlightenment. “The desire for data privacy has therefore seen a surge in large corporations turning to Switzerland to take advantage of its privacy culture. Enterprises can host data in Switzerland clouds without fear of it being accessed by foreign governments,” says Computer Weekly.
“The desire for data privacy has therefore seen a surge in large corporations turning to ‘Silicon’ Switzerland to take advantage of the country’s renowned privacy culture. Here they can host data without fear of it being accessed by foreign governments,” says the press release.
Computer Weekly and the press release then both quote Mateo Meier, director at Artmotion:
Unlike the US or the rest of Europe, Switzerland offers many data security benefits. For instance, as the country is not a member of the EU, the only way to gain access to the data hosted within a Swiss Datacenter is if the company receives an official court order proving guilt or liability.
But my question is this: how do you get the data to Switzerland? Even if PRISM can’t get it when it’s there, Tempora will get it en route. And the NSA and GCHQ are in bed together in such an incestuous relationship that it would make a great movie (first available on The Pirate Bay).
That means that data in transit to and from the host will need to be encrypted (outside of the browser because we know we cannot trust either Google or Microsoft) in true and genuine end-to-end encryption. That won’t work for a traditional public-facing website.
What about a private cloud not open to the public? Still won’t work without encryption unless all of the users have a secure link to the server – and the only way to do that is with encryption.
What about secure back-up of company data? No, you still have to encrypt it to get it to and from the host securely.
So it doesn’t matter where you host your data, the only way it can be secure is if you encrypt it. But if you encrypt it, it doesn’t matter where you host it (provided of course the NSA/GCHQ doesn’t have a backdoor into the encryption itself).
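The principle can be sketched in a few lines. This assumes the third-party Python `cryptography` package purely for illustration; the point is only that encryption and decryption happen on your own machine, so the host (Swiss or otherwise) only ever sees ciphertext:

```python
from cryptography.fernet import Fernet

# The key is generated and kept locally; it never travels to the host.
key = Fernet.generate_key()
f = Fernet(key)

plaintext = b"confidential company data"
ciphertext = f.encrypt(plaintext)  # this is all the hosting provider ever stores

# Only someone holding the local key can recover the data.
assert f.decrypt(ciphertext) == plaintext
assert plaintext not in ciphertext
```

With this arrangement the jurisdiction of the data centre is largely irrelevant: a court order served on the host yields only ciphertext, while a compromise of the key on your own machine defeats the scheme wherever the data sits.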
I’m all in favour of Switzerland trying to make hay from the PRISM/Tempora fall out – but don’t assume that your data is safe just because of Swiss privacy laws. You need encryption, not geography, to be private.
One of the first rules of security is that you never use a product that employs any form of proprietary cryptography. And if a security guy then says ‘be careful’, you’d best be very very careful — no matter how many magazines or newspapers say the product is the real deal.
That’s what happened with Cryptocat, a secure chat product that “could save your life and help overthrow your government,” according to Wired – it could “save lives, subvert governments and frustrate marketers.” Forbes said that it “establishes a secure, encrypted chat session that is not subject to commercial or government surveillance.” Sounds good.
But security folk weren’t so sure. “Since Cryptocat was first released,” warned Christopher Soghoian in July 2012, “security experts have criticized the web-based app, which is vulnerable to several attacks, some possible using automated tools.”
Patrick Ball expanded in August 2012:
CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”… Unfortunately, these tools are subject to a well-known attack… but the short version is if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
When It Comes to Human Rights, There Are No Online Security Shortcuts
Security professionals, then, were not surprised when last week Steve Thomas wrote about his DecryptoCat — which does what it says on the can: it cracks the keys that let you read the messages.
If you used group chat in Cryptocat from October 17th, 2011 to June 15th, 2013 assume your messages were compromised. Also if you or the person you are talking to has a version from that time span, then assume your messages are being compromised. Lastly I think everyone involved with Cryptocat are incompetent.
This is a big deal, because Cryptocat has been marketed towards dissidents operating in repressive regimes. As Soghoian wrote:
We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn’t if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then the users of the software are put in danger.
Tech journalists: Stop hyping unproven security tools
Add to that the current revelations of NSA/GCHQ mass surveillance, and our understanding from last week’s Snowden revelations that the NSA automatically and indefinitely retains encrypted messages, and we can say with pretty near certainty that if you have been using Cryptocat, at least the US and UK governments are aware of everything you said.
Have a look at this flow chart. It’s taken from the Intelligence and Security Committee’s report on the Communications Bill, and “illustrates how the elements of the Bill would work in practice.” It is UK democracy at work.
It starts with the authorities deciding that certain data is wanted. If the service provider objects, see how he has the right to appeal. If he accepts the request, he gets a notice to retain the required data. If he rejects the request, he gets a notice to retain the required data. If he still objects, it might go to court. If he wins the case, “HMG negotiates and serves a notice on a different Service Provider to collect and retain some or all of the required data using Deep Packet Inspection (DPI) or similar techniques.” In other words, they choose a different ISP and start all over again. That’s pure democracy: keep asking the question until you get the answer you want.
Of course, it’s not the only worrying thing about the Access to communications data by the intelligence and security Agencies report. My primary concern is that it is not an investigation into the Communications Bill at all. It is really a paean to the Bill. Its only criticism is that the government should have done a better job at selling the Bill to the population. But having said that, the Committee still tries to obfuscate and mislead.
Consider the section on ‘encryption’. It is heavily redacted (another example of modern democracy: ‘don’t tell the plebs what we’re doing’). It effectively says little more than that the authorities have ‘options’ for requiring service providers to provide an unencrypted version of encrypted communications data.
Makes you wonder, doesn’t it? What are those ‘options’? And what does it mean for the service provider to provide an unencrypted version of encrypted communications data?
I asked two interested experts to explain the issue for me – and rather than try to comment on their replies, I’m reproducing them in full.
James Firth, CEO of the Open Digital Policy Organisation Ltd
The problem arises because communications can be – effectively – layered. So a portion of ‘content’ at one (higher) layer is actually communications data at a lower layer.
A classic example is webmail. Alice logs in to mail service provider G via secure HTTPS and sends a message to Bob. All her ISP knows is that Alice has a web session with G.
Various proposals being touted – sometimes in the various drafts – include forcing mail service providers to abide by the Comms Bill. Jurisdiction could be enforced in some way for any company with a UK presence, but it would be impractical as the Bad Guys would just use other companies.
But there have been scarier options touted.
If Alice didn’t use a secure method to connect to her mail service provider her ISP could be forced to scan all CONTENT in order to detect nested communications data. In this model her ISP would scrape the email “to” and “from” fields from her web session. I doubt this would pass muster with various EU Directives but that’s been suggested.
Even scarier suggestion – if Alice DID use SSL to connect to her mail service provider it has been suggested – I have a very good contact confirming this – that legislation could be introduced to force her ISP to store the whole encrypted transaction, even though this includes the content.
The idea being that HMG could get a court order – here or in e.g. the US – at a later date to force her mail service provider to disclose their private SSL key. From that it would – in *some cases* – be possible to replay the SSL transaction to discover the session key and decrypt the contents, then extract the communications data, and – honest guv – ignore any content.
The major flaw in this plan is that Google, as one mail service provider, has rolled-out a feature called forward secrecy (http://en.wikipedia.org/wiki/Perfect_forward_secrecy).
Forward secrecy introduces, essentially, a second negotiated secret into the SSL transaction; a secret known only by the client web browser. The protracted SSL handshake with forward secrecy ensures that if one private key was later compromised – e.g. the mail service provider’s key – an attacker would still not be able to reproduce the plaintext from a captured encrypted session.
Clever mail service providers would never want to be in a position where they are forced by a court to hand over their private keys, so forward secrecy is actually in their interests as it devalues their private key as it won’t alone decrypt a captured session. In fact the second secret needed to decrypt a session with perfect forward secrecy should have been destroyed by the client and the server as soon as the session ends.
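Firth’s forward-secrecy point can be illustrated with a toy ephemeral key exchange. This is stdlib Python with a prime far too small for real use (real TLS negotiates large, vetted groups), so treat it purely as a sketch of why the server’s long-term private key alone cannot decrypt a recorded session:

```python
import secrets

# Illustrative parameters only: real deployments use standardised, much
# larger groups (e.g. RFC 3526).
p = 2**127 - 1   # a Mersenne prime; cryptographically inadequate in practice
g = 3

a = secrets.randbelow(p - 2) + 2   # Alice's ephemeral secret
b = secrets.randbelow(p - 2) + 2   # the server's ephemeral secret

A = pow(g, a, p)   # exchanged in the clear during the handshake
B = pow(g, b, p)

session_key_alice = pow(B, a, p)
session_key_server = pow(A, b, p)
assert session_key_alice == session_key_server  # both derive the same key

# Both sides now discard a and b. The server's long-term private key only
# signed the handshake; compromising it later reveals neither a, b, nor
# the session key. That is the whole point of forward secrecy.
```

An eavesdropper who recorded the handshake sees only g, p, A and B; recovering the session key from those is the discrete logarithm problem, and once the ephemeral secrets are destroyed there is nothing left for a court order to compel.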
Alexander Hanff, managing director at Think Privacy Ltd
When an SSL connection is made a ‘tunnel’ is set up and as a result all of the HTTP headers are encrypted (these include the specific web page you are requesting from the server) but the TCP and IP data is not encrypted as they exist on different OSI layers – HTTP/SSL exist in layers 5-7 whereas TCP is layer 4 and IP is layer 3 (if my memory serves me correctly you might want to double-check that).
As you correctly state, it would be impossible to get the encrypted content to the server if the 3rd and 4th OSI layers were encrypted, so yes there is still a DNS lookup (which means the ISP knows the domain and the IP address you are trying to communicate with) but they would not know which web page you are trying to access within that domain.
The same should be true of emails which are not hosted by the ISP (for example, I run my own email servers): they would know which email server I am trying to communicate with, but since my emails are transported over SSL/TLS they would not know whom I am communicating with or anything else that is held within the encrypted packets (I run my own email servers to circumvent the Data Retention aspects of RIPA/the Data Retention Directive).
That said, the ISP could easily set up man-in-the-middle attacks, similar to how Phorm did with their DPI boxes, which would allow them to decrypt everything (including the content) – which is what I presume was the redacted content in the report released yesterday, where they were talking about the government having a feasible method of dealing with encryption. This of course would be completely illegal under RIPA (without a warrant), so they would need to introduce legislation to do it (which would put them in breach of the EU Data Retention Directive, but as we know the UK gov are not good at complying with EU Directives so they probably wouldn’t care).
So in summary, yes encryption restricts them from obtaining some of the higher level data they want but not the low-level data such as domain and IP.
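Hanff’s layering point can be made concrete. Over HTTPS, roughly speaking, everything up to the hostname is visible to the ISP (via the DNS lookup and the IP/TCP headers), while the path, query and content travel inside the encrypted tunnel. A rough stdlib sketch, ignoring real-world complications such as SNI and shared hosting:

```python
from urllib.parse import urlsplit

def isp_view(url):
    """Split a URL into what an ISP can and cannot see over HTTPS."""
    parts = urlsplit(url)
    visible = parts.hostname  # learned from the DNS lookup / connection setup
    hidden = parts.path + (("?" + parts.query) if parts.query else "")
    return visible, hidden

visible, hidden = isp_view("https://mail.example.com/inbox/message?id=42")
# The ISP learns the domain...
assert visible == "mail.example.com"
# ...but not which page, message or mailbox is being accessed.
assert hidden == "/inbox/message?id=42"
```

This is exactly the split both experts describe: encryption hides the higher-layer communications data, but the low-level metadata (domain and IP) leaks regardless.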
Notice that there is one common element in these two replies: further legislation. It seems highly likely, then, that this Communications Bill will be supplemented by altogether more intrusive legislation (the ‘options’) when the authorities finally realise what everyone has been telling them: this Communications Bill isn’t what they say it is, and will not work. And that, of course, is democracy at work: keep asking the question until you get the response you want.
The Data Protection Regulation should be amended to force companies to disclose how passwords are stored
Over the last couple of days it has been disclosed that an amazing amount of personal data on 1.1 million Americans has been lifted from the US Nationwide insurance group. Passwords do not appear to be involved – it’s a store of data rather than an interactive site. But the point is that this data would appear to have been unencrypted – at least the company concerned hasn’t specified one way or the other; and that’s the problem.
Time and again we learn of plaintext passwords being stolen. Plaintext is unacceptable, but it happens. Sometimes they are stored hashed with SHA1. This is also unacceptable, because dictionary attacks and Jens Steube’s newly announced brute-force attack make them surprisingly vulnerable; but it happens. At the very least, passwords should be stored salted and hashed – with SHA1 as the bare minimum, and preferably something better.
I for one would be reluctant to commit my password to any site that stores that password with anything less than salted SHA2. But they don’t tell us, do they?
So I call now for the European Commission to amend the proposed Data Protection Regulation to include a requirement for all sites that store user passwords to make it clear on their site, at registration, precisely how those passwords are stored: plaintext, hashed (with what), or hashed and salted. This is the only way we will be able to force vendors to improve the way in which they handle our data.
Ever since the news of a potential breach at Dropbox emerged, my old post “Is it safe to carry on using Dropbox?” has been getting an elevated number of hits. It is perhaps time for an update.
Firstly, what’s this about a breach? Well, Dropbox wasn’t breached in the traditional sense of the word. The likelihood is that a number of Dropbox users had the same log-in credentials (email address and password) that they used on a different web account that was breached. The criminals were able to reuse the credentials stolen from elsewhere, and gain access to a number of Dropbox accounts.
Unfortunately, one of these accounts belonged to a Dropbox employee. The criminals gained access to his account and found a file containing an unknown number of users’ email addresses. It was probably these users that were subsequently spammed, leading to the suggestion that Dropbox had been hacked.
This leaves us two questions: is Dropbox safe to use; and what lessons should we learn?
Dropbox is no more nor less safe than it was before; that is, it is not safe. This is for two reasons: firstly, it is in the cloud; and secondly, Dropbox is a US company. You don’t know what is happening in a cloud that is not your own, so it is not safe. Dropbox is registered in the US and is subject to the PATRIOT Act – the US authorities are able to demand details of you and your account simply because they want them. So Dropbox is just not safe for confidential or incriminating content (and nor, note, is any other US-based cloud company).
But why worry if the data you store is neither of these? You can increase the level of security by locally encrypting the files (with something like TrueCrypt) and storing only encrypted files. The basic rule is simple: if it is important that nobody else ever sees the data, don’t use Dropbox; if it doesn’t matter if other people see your files, you can use Dropbox. If you’re somewhere in-between, encrypt.
What should we learn from this? Well, it is good that Dropbox has initiated, or will initiate, additional security measures – including two-factor authentication. This will make your data safer from hackers, but it has no effect on law enforcement intrusion. And judging from the take-up of Google’s 2FA, few people will bother using it.
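For the curious, two-factor codes of the kind Dropbox and Google issue are typically TOTP (RFC 6238), which is just HOTP (RFC 4226) keyed to the current time. A stdlib sketch, for illustration only, never for production use:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, step=30, digits=6):
    """RFC 6238: HOTP over the number of 30-second steps since the epoch."""
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vectors for the shared secret "12345678901234567890"
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

The security property is modest but useful: a stolen password alone is no longer enough, because the attacker also needs the shared secret that generates the rolling six-digit code.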
I also very much like the new security page (partial screenshot below). It’s available at your Dropbox settings location, and shows who has recently accessed your account and who is currently accessing your account. This is certainly worth checking regularly. Note also that this is where you change your Dropbox password.
But despite this good response from Dropbox, the fact remains that these are reactive and not proactive steps. Security is still an afterthought, added on to systems rather than designed into them. That’s one lesson we don’t seem able to learn. Secondly, it is sad that a Dropbox employee should be guilty of fundamental security no-nos: he stored a file with user emails in plaintext; and he was reusing the same password on at least two different accounts.
These are the main lessons that we all need to learn: do not trust other people or systems to do security for you. It is your, not their, responsibility (or at least, even if it is their responsibility, you cannot assume they will do it).
And finally, and fundamentally, and beyond all others: when will we ever learn to stop re-using the same password on multiple accounts? Tens of millions of passwords have been stolen from tens of major providers this year alone – and that’s just the ones we know about. Are you sure that your own password is not included? If it is, and you re-use it on multiple accounts, then you simply don’t know who has access to your accounts. And if that includes your email account or bank account, not to put too fine a point on it, you’re screwed.
So, is Dropbox safe? Probably not; but that doesn’t mean we shouldn’t use it under certain circumstances. I shall certainly carry on using it. But are we safe? Absolutely not until we start using unique, strong passwords for every different account. Hint. Use a good password manager.
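A good password manager will do this for you, but the underlying idea is simply a unique random password per account, never derived from or reused anywhere else. A minimal stdlib sketch:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a unique, random password; make a fresh one per account."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw1 = generate_password()
pw2 = generate_password()
assert len(pw1) == 20
assert pw1 != pw2   # vanishingly unlikely to collide
```

Generated this way, a breach at one provider compromises exactly one account, which is the whole point.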
Update: the revelations from Edward Snowden concerning US government access to cloud services, which will include Dropbox, add new urgency to considering the use of Dropbox. See our latest commentary following Edward Snowden’s Prism revelations: Is it safe to carry on using Dropbox (post Prism)? Yes and No: Part III