TrueCrypt, the free open source full disk encryption program favoured by many security-savvy people, including apparently Edward Snowden, is no more. Its website now redirects to its SourceForge page which starts with this message:
WARNING: Using TrueCrypt is not secure as it may contain unfixed security issues
This page exists only to help migrate existing data encrypted by TrueCrypt.
The development of TrueCrypt was ended in 5/2014 after Microsoft terminated support of Windows XP. Windows 8/7/Vista and later offer integrated support for encrypted disks and virtual disk images. Such integrated support is also available on other platforms (click here for more information). You should migrate any data encrypted by TrueCrypt to encrypted disks or virtual disk images supported on your platform.
This statement is so full of problems it is difficult to know where to start.
Is it a canary?
Canaries are warnings delivered by indirect means (if a canary died in a mine, the likelihood was that poison gas, as yet undetected, was present). So one suggestion is that this message indicates government interference, and that, like Levison and Lavabit, TrueCrypt has been shut down to protect its users. (Levison said, “I have been forced to make a difficult decision: to become complicit in crimes against the American people or walk away from nearly ten years of hard work by shutting down Lavabit.”) Some have gone so far as to suggest a more explicit warning hidden in TrueCrypt’s first paragraph: “not secure as”.
But for me the strongest suggestion that this might be a canary warning is the recommendation for Microsoft’s BitLocker. The message says “You should migrate any data encrypted by TrueCrypt to encrypted disks or virtual disk images supported on your platform.” It then proceeds to give a step-by-step how-to for migrating to BitLocker.
My problem is two-fold. Firstly, I find it difficult to believe that the developers of open-source cryptography would voluntarily recommend placing faith in a closed-source solution — and one from Microsoft to boot. Secondly, BitLocker gives up the ground won with such difficulty during the First Crypto Wars against Clinton’s Clipper chip and key escrow demands — BitLocker escrows the keys either with the IT department or with Microsoft’s cloud services. From both locations, using the PATRIOT Act, government agencies can retrieve those keys effectively on demand. This recommendation doesn’t make sense from a purely ‘security’ viewpoint.
Against this, however, we should note that ‘David’ (apparently a, or the, TrueCrypt developer) has told @stevebarnhart that there has been no government contact except one inquiry about a ‘support contract’; that “BitLocker is ‘good enough’ and Windows was the original ‘goal of the project’”; and that “There is no longer interest.” But whether ‘David’ is who he says he is, or whether what he says is true, is anyone’s guess.
I find myself conflicted. This time my heart says, don’t think conspiracy; but my head says, this isn’t right.
For whatever reason, TrueCrypt can no longer be trusted. If we take David at face value, he has simply lost interest in the project and bowed out in a most unsatisfactory manner. That would imply that you can carry on using TrueCrypt; but that like XP, any future issues will not be resolved. So it’s probably best not to wait for them.
But if you were savvy enough to install TrueCrypt you will be savvy enough to migrate to an alternative without being persuaded into using BitLocker. BitLocker works with the Trusted Platform Module (TPM), a motherboard chip that to my mind turns Windows 8 into an NSA trojan. (See Is Windows 8 an NSA trojan?) This latest development merely reinforces my opinion.
It would be tempting to say it is time to migrate away from Windows altogether — perhaps to Linux. The reality, however, is that nothing is secure. What can be made by software can be unmade by software; that which can be built by computer power can be demolished by computer power. The unmakers have a thousand times the resources of the makers.
The solution is political, not technological. We the people have to reassert our role over the politicians. They are our servants. We pay them to do our bidding. And we have to make it absolutely clear that government interference and surveillance is unacceptable and must stop.
There are two functions to PR: the first is to shout the good news from the hilltops, while the second is to bury the bad. When bad news hits, PR says very little.
Bad news has hit eBay. It admitted Wednesday that it had been hacked – but it gives very little information. This is a mistake. It means that people will comb the words it used, looking for clues to what has actually happened. The result is conjecture; but what follows is the conjecture of some very clever security people.
Three things leap out from the eBay statement. The first is the repeated use of the word ‘encrypted’, with no mention of hashing for the passwords. The second is the duration of the breach – it occurred in February/March, but was only discovered a couple of weeks ago. And the third is the mention of the database – not part of, nor a geographical region, but the (whole?) database. So what can we surmise from all this?
Firstly, were the passwords encrypted or hashed? It makes a difference. The implication from the statement is that they were encrypted. Most security experts believe that this would be a mistake – passwords should be hashed and salted. In fact, Ian Pratt, co-founder of Bromium, goes so far as to suggest, “It would be rather unusual to encrypt passwords rather than hash them; it’s probably just lack of precision in the statement.”
But that’s what we said about the Adobe breach – and it turned out that the passwords were indeed encrypted rather than hashed. The opinion among the experts I talked to is fairly evenly balanced – while eBay’s semantics suggest they used encryption, many experts find it hard to believe. “This heavily implies that the passwords were not hashed,” said Chris Oakley, principal security consultant at Nettitude. “eBay’s report suggests that the passwords were encrypted rather than hashed,” added Brendan Rizzo, Technical Director EMEA for Voltage Security. Sati Bains, COO of Sestus, said, “Yes… it appears from the comment that they did [encrypt rather than hash].”
“Encryption and hashing are often confused with each other,” explains Jon French, a security analyst at AppRiver. “But from the sounds of [eBay’s] press release, it seems they were using some sort of encryption.”
Andrey Dulkin, senior director of cyber innovation at CyberArk, is in no doubt. “Indeed, from the eBay statements we understand that the passwords were encrypted, rather than hashed. The fact that the statements repeatedly use the words ‘encrypted’ and ‘decrypted’ supports this interpretation.”
It is, of course, possible that eBay is simply not differentiating between the two processes, since most of its customers will not understand the difference. “The public understand the word ‘encrypted’ more than hashed – so encrypt is frequently used in place of hashed. But it is believed they were hashed,” suggests Guy Bunker, spokesperson for the Jericho Forum and a cyber security expert at Clearswift.
Ilia Kolochenko, founder and CEO of High-Tech Bridge (HTB), doesn’t believe we can tell from eBay’s comments. “The difference isn’t easily understood by users. Even the spokesperson might not be aware. It’s quite possible that the company simply didn’t want to introduce the complexity of describing the technicalities of hashing and salting in a brief announcement.”
What’s the difference, and why does it matter?
The primary operational difference is that encryption can be decrypted; that is, the original plaintext can be retrieved from the ciphertext through the use of the encryption key. Hashed outputs cannot be mathematically returned to the original plaintext.
In practice, an entire database of passwords would be encrypted via a single encryption key. But if hashing was used, each individual password would ideally have an unknown value added to it (a ‘salt’) and the results would be separately hashed. “This salt,” explains Voltage’s Rizzo, “is a way to make sure that the hash of a particular password cannot be compared to the known hash of that same password by the attacker through the use of rainbow tables.”
This means that if an encrypted database is stolen, only one key needs to be found to unlock every password in the database. If the passwords are hashed, every single password needs to be cracked individually.
“The advantages to hashing,” Nick Piagentini, senior solutions architect at CloudPassage, told me, “are one, there is no need to manage sensitive encryption keys; two, hashing processes have less overhead to run than encryption processes; and three, there is no need to reconstruct the password data from the hash. Encryption would only be used if there was a need to get the original password back.”
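The distinction can be sketched in a few lines of Python. This is an illustrative sketch only — SHA-256 is used here for brevity, whereas a real system would use a deliberately slow function such as PBKDF2 or bcrypt:

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Salted, one-way hash: the plaintext cannot be recovered from the digest."""
    if salt is None:
        salt = os.urandom(16)  # a fresh random salt for each password
    digest = hashlib.sha256(salt + password.encode()).digest()
    return salt, digest

def verify_password(candidate: str, salt: bytes, digest: bytes) -> bool:
    # Verification simply re-computes the hash; there is no 'decrypt' step.
    return hashlib.sha256(salt + candidate.encode()).digest() == digest
```

Because the same password hashed with two different salts produces two different digests, precomputed rainbow tables are useless, and an attacker must attack each record individually — whereas with encryption one recovered key unlocks the whole table.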
Could the hackers have the encryption key?
This is the 64 million dollar question (and is only relevant if the passwords were encrypted). We don’t know, and we may never know. But it is certainly possible. There are two possibilities: it could have been cracked or it could have been stolen.
Reuters spoke to eBay spokeswoman Amanda Miller:
She said the hackers gained access to 145 million records of which they copied “a large part”. Those records contained passwords as well as email addresses, birth dates, mailing addresses and other personal information, but not financial data such as credit card numbers.
Hackers raid eBay in historic breach, access 145 million records
eBay says the database was compromised some time around late February or early March; but the breach wasn’t discovered until about two weeks ago. What we don’t know is whether the compromise was still in active use by the hackers, what else they did during the two months they were undetected, or whether they left something unwelcome behind. Frankly, I find it hard to believe that, having gained access without being discovered, the hackers did not have a good look round.
(Incidentally, it is worth pointing out at this point another comment from HTB’s Kolochenko. Basically, eBay’s statement that financial details were safely stored on a separate server is pretty meaningless. “The two servers would have to communicate,” he explained. “The hackers could have installed some malware to listen to the communication between the servers, and sniffed the plaintext traveling between them.”)
So could they have found the encryption key? Opinion is divided. “This is a primary argument for using hashing over encryption for password storage,” comments Nettitude’s Oakley; “an attacker who is able to compromise the database may also be in a position to obtain the encryption key(s).” (Incidentally, if the passwords were hashed rather than encrypted, the hackers could just as likely have found the salt or salt mechanism, making the hashed passwords considerably easier to crack by dictionary attack.)
On the other hand, “I would hope they [eBay] didn’t ‘tape the key to the door of the safe’”, comments Trey Ford, global security strategist at Rapid7. “eBay and PayPal have solid security teams, and go through regular third-party assessments. I refuse to believe they would handle encryption key materials that poorly.”
And yet they left the users’ email addresses and other personal information unencrypted. If they were using encryption seriously, they would have used a hardware security module (HSM) to house the keys, and would have encrypted everything. “They do not seem to be very confident about their encryption system,” comments Sebastian Munoz, CEO of REALSEC, “when they suggest their customers reset passwords. If efficiently encrypted, using specific certified hardware, there would be no need to reset the passwords, since protection is guaranteed. When you use a Hardware Security Module (HSM) and not a simple and insecure encryption-by-software process, there is no way that hackers can gain access to the encryption keys.”
Munoz further suspects that software-based encryption was used, since only the passwords were encrypted. Software encryption impacts performance, so cost arguments come into play.
So, given the duration of the breach and the probable lack of an HSM, it is perfectly possible that the hackers also found the encryption key – and if this is the case, they now have access to the greater part of 145 million passwords, along with ‘email address, physical address, phone number and date of birth’.
If they did not find the key, could they crack the encryption? Again, opinion is divided – it all depends upon what encryption algorithm was used. Older encryption algorithms might be susceptible to a ‘known plaintext’ attack (see Wikipedia for details). Getting the necessary plaintext would be no problem. The most popular passwords are remarkably consistent – so a simple analysis with something like DigiNinja’s Pipal on an existing cracked database would provide a fair sampling of plaintext.
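That kind of frequency analysis is trivial to sketch. The toy corpus below is invented for illustration; a real analysis would run a tool such as Pipal over millions of leaked entries:

```python
from collections import Counter

# Toy stand-in for a previously cracked password dump.
cracked_dump = [
    "123456", "password", "123456", "qwerty",
    "123456", "password", "letmein",
]

# The most common plaintexts double as high-confidence guesses for a
# known-plaintext attack against an encrypted password column.
likely_plaintexts = Counter(cracked_dump).most_common(3)
```

Password choices are so consistent across breaches that the top entries of one dump are a reliable predictor of the top entries of the next.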
“However,” notes Bromium’s Ian Pratt, “assuming any kind of modern encryption (e.g. AES-128) was used then a known plaintext attack should not be feasible to recover the key and hence reveal other passwords.”
“Another approach,” suggested Clearswift’s Bunker, “is to ‘inject’ known passwords (either the hash or the encrypted version) into the database. This would create the equivalent of denial of service for the individual but would allow the attacker free rein over the account.”
The problem is that we simply do not know what has happened. eBay’s attempts to downplay the incident are simply leading to conjecture.
While writing this report, Rapid7’s Trey Ford noticed adverts for the sale of eBay’s stolen database beginning to appear on Pastebin. “There has now been a posting on pastebin claiming to offer ‘145 312 663 unique records’ relating to the eBay breach,” he told me by email. We don’t know if they’re genuine: “it’s possible that a criminal has just spotted an opportunity to cash in on the attack with some other credentials dump they have.”
An analysis of the sample provided is inconclusive – the records are possibly genuine but not certainly genuine. But Ford had a look at the sample:
The sample that has been shared indicates that cracking the passwords will take considerable time. This is nothing like what we saw when LinkedIn was breached and the stolen credentials were quickly cracked due to only SHA-1 hashing being used for storage. In contrast, this credentials set is using PBKDF2 (Password-Based Key Derivation Function 2) SHA-256 hashes, which means they employ a strong hash function and also intentionally make cracking them more difficult and slow by individually salting and using a high number of hash iterations. The method used can be regarded as the state-of-the-art way to store passwords on web applications. Again though, we don’t know that these are credentials taken from the eBay breach, and no details have come from eBay on how they secure passwords.
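The scheme Ford describes — PBKDF2 with per-user salts and a high iteration count — is available directly in Python’s standard library. A minimal sketch (the parameter choices here are illustrative, not anything eBay has confirmed):

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)    # unique per user, stored alongside the hash
iterations = 100_000     # deliberately slow: every guess pays this cost

# PBKDF2-HMAC-SHA256: the salt defeats rainbow tables, and the iteration
# count turns a fast hash into an intentionally expensive one.
stored_hash = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
```

An attacker cracking such a store must pay the full iteration cost for every guess against every individual record, which is why a PBKDF2-protected dump takes so much longer to crack than the plain SHA-1 hashes stolen from LinkedIn.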
This would fit in with eBay’s apparent confidence that the passwords cannot be cracked. However, Reuters spoke to eBay about the sample, and
eBay’s [spokesperson Amanda] Miller said the information was not authentic.
U.S. states probe eBay cyber attack as customers complain
AppRiver’s Jon French also noticed the Pastebin offer. He told me by email,
I’ll be wary of anything like this until I see people saying they see their own names (or if I end up seeing mine). Eventually if the Pastebin offer is legit, someone will post the file for free somewhere or some security company that buys it will verify authenticity.
His colleague Troy Gill, a senior security analyst at AppRiver, also suggested something that serious criminals will be well aware of: “There is always the remote possibility that this is a honey pot set by authorities to lure in would-be buyers.”
eBay is taking the standard route for crisis management: say nothing. This is hugely disrespectful to its customers, who need and have a right to know everything possible. But eBay is also making a mistake in trying to downplay the effect of the stolen data. It says it has “no evidence of the compromise resulting in unauthorized activity for eBay users, and no evidence of any unauthorized access to financial or credit card information.” This is meant to make its customers feel better – the danger is that it might.
What eBay isn’t saying is that the unencrypted personal data also stolen (email address, physical address, phone number and date of birth) is a phisher’s wet dream. Armed with that information, criminals will be able to concoct very compelling emails and cold calls. This is likely to happen on a vast scale and very soon. eBay might feel confident about its own business, but the data it has lost puts millions of individuals and other companies in danger.
“When companies like eBay keep silent about the details,” commented High-Tech Bridge’s Kolochenko, “I would tend to expect the worst.” It is perhaps worth remembering the Adobe incident, which started off as a breach of a couple of million accounts and slowly escalated into one of the worst breaches in history.
The Electronic Frontier Foundation has a fascinating graphic on which companies are doing what things to protect their customers’ – our – data in the post Prism/Snowden era.
What really leaps out is that the companies that provide consumer cloud services are on our side (Dropbox, Facebook, Google and Twitter); the telecommunications companies are on their side (AT&T, Comcast, Verizon); and the main OS providers (Microsoft and Apple) aren’t really sure which side their bread is buttered on.
I finally got the email I’ve been waiting for. It’s from Adobe. It starts
As we announced on 3 October 2013, we recently discovered that an attacker illegally entered our network and may have obtained access to your Adobe ID and encrypted password. We currently have no indication that there has been unauthorised activity on your account.
To prevent unauthorised access to your account, we have reset your password…
Let’s have a look at this. “As we announced on 3 October 2013, we recently discovered…” What does ‘recently’ mean? They announced on 3 October not because they had discovered the hack, but because Brian Krebs told the world that he had found stolen Adobe data on the internet. So when it was actually stolen (could have been months earlier) and when Adobe actually became aware of the theft (could have been months earlier) is not known.
Let’s be charitable and say Adobe knew about it by 1 October.
They said that just under 3 million usernames and encrypted passwords may have been stolen. Since I don’t have an Adobe account, and since 3 million is relatively few in the overall scheme of things, I thought no more about it.
A few weeks later Adobe admitted that the true figure is nearer 38 million. That’s getting a bit more worrying, so I checked my browser’s stored passwords and my more recently adopted password manager. Still nothing. No Adobe account. And anyway, Adobe said very clearly that the company had reset all the passwords and notified the 38 million users. I had not been notified. I had nothing to worry about.
But then, about a week later, it emerged that it wasn’t a mere 3 million, nor a more worrying 38 million, but a colossal 150 million. Adobe had notified 38 million out of 150 million – but that is by no means the worst of it. When Paul Ducklin got hold of the database of stolen data, now easily available if you know where to look, a quick analysis showed the user’s email in plaintext, an encrypted password, and the user’s password hint in plaintext.
email addresses – you can infer a lot from an address: usually the user’s name and company. For example, Ken Westin at Tripwire looked through the Adobe hack and found 89,997 military addresses. “This is in addition to the more than 6,000 accounts from defense contractors such as Raytheon, Northrup Gruman [sic], General Dynamics and BAE Systems we also found,” he wrote. “Also, on the federal side, there were 433 FBI accounts, 82 NSA accounts and 5,000 NASA accounts.” So, choose your company, guess the user’s name, look through LinkedIn and Facebook and you’ve got enough for a pretty compelling targeted phishing attack.
encrypted passwords – passwords should be hashed and salted with a slow hashing algorithm; they should not be encrypted. Hashing means 150 million passwords need to be cracked; encryption means that one key needs to be cracked and all 150 million passwords are known.
password hints in plaintext – oh, really! Why bother cracking the passwords when the hint will let you guess it? What do you think is the password when the hint is ‘57’; or ‘the bad disciple’?
So Adobe really cocked up. They didn’t protect the data, they didn’t store it correctly, and they tried to minimise the extent of the damage. And still it gets worse, because they then tried to suggest: don’t worry, most of these accounts aren’t real; they belong to people who just signed up to get promotions or freebies.
Here’s the real danger. In that great mass of one-off freebie-chasing accounts numbering anything between 38 million and 150 million are people who signed up, used a password that they can’t remember, and are completely unaware that their password is now compromised. What if these people signed up years ago before password thefts became a dime a dozen, and lazily used the same password as they use on their email address? There is no way that they can retrieve that password. They now have no way of knowing whether any or which or all of their other accounts have been compromised by Adobe’s failure to adequately protect this password.
One final point. I said at the beginning that I had been expecting the email from Adobe. That’s because I checked with LastPass (which has a little routine that will tell you whether you’re included in the hacked data) and learnt that although I couldn’t ever remember creating an Adobe account, at some point I must have done, because there I was.
So, at least six weeks after it knew of the breach, Adobe bothers to tell me that someone “may have obtained access to [my] Adobe ID and encrypted password” when the world and his dog has access to that encrypted password. I know; Ken Westin, Brian Krebs and Paul Ducklin almost certainly know; LastPass and the hackers most definitely know; and anyone who cares to look will also know. Adobe, however, doesn’t know and continues to insist that “an attacker… may have obtained access.”
How dare they, after all this time and all these mistakes, still try to save face at my expense?
I’ve had a comment on my latest Dropbox post (Is it safe to carry on using Dropbox (post Prism)? Yes and No: Part III) that I have rejected. This is a very heavily moderated blog, but I thought I’d explain why I rejected this one.
The comment started by saying, “As Dropbox stands today on its own, yes, completely agree that there is the *possibility* of your data being “looked in on” by people without your knowledge or permission.” It then added, “However, there are 3rd party services out there like xyznnn (www.xyznnn.com) that are completely tapproof, i.e. YOU hold the keys, not Dropbox or the 3rd party vendor. Meaning that your data cannot be accessed without you knowing about it. Read more in this blog post: xyznnn.”
It was, naturally, submitted by a member of the marketing department of the xyznnn company; so it is absolutely an attempt at advertising to the readers of this blog. That, in itself, is not enough for me to reject it. If such a comment adds value to the subject or will genuinely help the reader, I will still generally allow it.
But this one is flatly wrong. First of all, never trust anyone who says or implies that any security is unbreakable. In fact, if anyone says that, you can begin to distrust their understanding of security. So, rather than helping the readers, I consider claims such as “completely tapproof” and “your data cannot be accessed” to be misleading and potentially dangerous.
I will not knowingly help promote products that make what I consider to be statements verging on hyperbole and are fundamentally inaccurate — there are simply no absolutes in security. And that is why this particular comment was rejected.
I got this note from a PR company working for a quite major security company. It said, “In response to the MOD being the victim of a cyber espionage attack that has led to the theft of key data…” and pointed to an article on V3.
That article does indeed say,
The Ministry of Defence (MoD) was the victim of a cyber espionage attack that led to the theft of key data, in the latest evidence of the sustained cyber threats facing the UK.
The comment from the PR company talked about the importance of protecting encryption keys. “Failure to retain custody of your encryption keys is a huge issue that essentially negates the benefits of encryption,” said the spokesman.
This is, of course, perfectly true and valid. But we should go back to the source of V3’s article, the latest 2013 annual report from the UK’s Intelligence and Security Committee. Not once does it use the phrase ‘key data’. In fact, not once does it mention encryption.
In fact it doesn’t even say that the MoD was the victim. What it says is,
Government departments are also targeted via attacks on industry suppliers which may hold government information on their own systems. We have been told that cyber espionage “[has] resulted in MOD data being stolen ***”. This has both security and financial consequences for the UK.
So this should be a story about supply chain security, not about encryption keys.
It is from such little misunderstandings that global cyberwar evolves…
Over the last few days numerous IT magazines have run a story about a surge in customers for Swiss hosting companies. For example, “Artmotion has witnessed a 45% growth in revenue amid this new demand for heightened privacy,” says Computer Weekly.
Most of these stories have come from, yes, a post-PRISM press release issued by Artmotion. “Artmotion, for example,” says the press release, “has witnessed 45 per cent growth in revenue amid this new demand for heightened privacy.”
Why are companies moving to Switzerland? Well, remember that we now live in post-Snowden enlightenment. “The desire for data privacy has therefore seen a surge in large corporations turning to Switzerland to take advantage of its privacy culture. Enterprises can host data in Switzerland clouds without fear of it being accessed by foreign governments,” says Computer Weekly.
“The desire for data privacy has therefore seen a surge in large corporations turning to ‘Silicon’ Switzerland to take advantage of the country’s renowned privacy culture. Here they can host data without fear of it being accessed by foreign governments,” says the press release.
Computer Weekly and the press release then both quote Mateo Meier, director at Artmotion:
Unlike the US or the rest of Europe, Switzerland offers many data security benefits. For instance, as the country is not a member of the EU, the only way to gain access to the data hosted within a Swiss Datacenter is if the company receives an official court order proving guilt or liability.
But my question is this: how do you get the data to Switzerland? Even if PRISM can’t get it when it’s there, Tempora will get it en route. And the NSA and GCHQ are in bed together in such an incestuous relationship that it would make a great movie (first available on The Pirate Bay).
That means that data in transit to and from the host will need to be encrypted (outside of the browser because we know we cannot trust either Google or Microsoft) in true and genuine end-to-end encryption. That won’t work for a traditional public-facing website.
What about a private cloud not open to the public? Still won’t work without encryption unless all of the users have a secure link to the server – and the only way to do that is with encryption.
What about secure back-up of company data? No, you still have to encrypt it to get it to and from the host securely.
So it doesn’t matter where you host your data, the only way it can be secure is if you encrypt it. But if you encrypt it, it doesn’t matter where you host it (provided of course the NSA/GCHQ doesn’t have a backdoor into the encryption itself).
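The principle is simply that the key never leaves the client, so the host only ever stores ciphertext. A one-time pad is the simplest possible illustration of that property — this is a conceptual sketch, not something to deploy; in practice you would use an audited AES implementation:

```python
import secrets

def encrypt_for_upload(plaintext: bytes) -> tuple:
    """One-time-pad sketch: the key stays on the client; only the
    ciphertext travels to (and rests on) the hosting provider."""
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def decrypt_after_download(ciphertext: bytes, key: bytes) -> bytes:
    # XOR with the same key recovers the plaintext -- but only the
    # key-holder can do this, wherever the ciphertext is stored.
    return bytes(c ^ k for c, k in zip(ciphertext, key))
```

Whether the server holding the ciphertext sits in Zurich or Virginia is then irrelevant: neither PRISM at rest nor Tempora in transit sees anything but noise.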
I’m all in favour of Switzerland trying to make hay from the PRISM/Tempora fall out – but don’t assume that your data is safe just because of Swiss privacy laws. You need encryption, not geography, to be private.
One of the first rules of security is that you never use a product that employs any form of proprietary cryptography. And if a security guy then says ‘be careful’, you’d best be very very careful — no matter how many magazines or newspapers say the product is the real deal.
That’s what happened with Cryptocat, a secure chat product that “could save your life and help overthrow your government,” according to Wired — it could “save lives, subvert governments and frustrate marketers.” Forbes said that it “establishes a secure, encrypted chat session that is not subject to commercial or government surveillance.” Sounds good.
But security folk weren’t so sure. “Since Cryptocat was first released,” warned Christopher Soghoian in July 2012, “security experts have criticized the web-based app, which is vulnerable to several attacks, some possible using automated tools.”
Patrick Ball expanded in August 2012:
CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”… Unfortunately, these tools are subject to a well-known attack… but the short version is if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
When It Comes to Human Rights, There Are No Online Security Shortcuts
Security professionals, then, were not surprised when last week Steve Thomas wrote about his DecryptoCat — which does what it says on the can: it cracks the keys that let you read the messages.
If you used group chat in Cryptocat from October 17th, 2011 to June 15th, 2013 assume your messages were compromised. Also if you or the person you are talking to has a version from that time span, then assume your messages are being compromised. Lastly I think everyone involved with Cryptocat are incompetent.
This is a big deal, because Cryptocat has been marketed towards dissidents operating in repressive regimes. As Soghoian wrote:
We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn’t if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then the users of the software are put in danger.
Tech journalists: Stop hyping unproven security tools
Add to that the current revelations of NSA/GCHQ mass surveillance, and our understanding from last week’s Snowden revelations that the NSA automatically and indefinitely retains encrypted messages, and we can say with pretty near certainty that if you have been using Cryptocat, at least the US and UK governments are aware of everything you said.
Have a look at this flow chart. It’s taken from the Intelligence and Security Committee’s report on the Communications Bill, and “illustrates how the elements of the Bill would work in practice.” It is UK democracy at work.
It starts with the authorities deciding that certain data is wanted. If the service provider objects, see how he has the right to appeal. If he accepts the request, he gets a notice to retain the required data. If he rejects the request, he gets a notice to retain the required data. If he still objects, it might go to court. If he wins the case, “HMG negotiates and serves a notice on a different Service Provider to collect and retain some or all of the required data using Deep Packet Inspection (DPI) or similar techniques.” In other words, they choose a different ISP and start all over again. That’s pure democracy: keep asking the question until you get the answer you want.
Of course, it’s not the only worrying thing about the Access to communications data by the intelligence and security Agencies report. My primary concern is that it is not an investigation into the Communications Bill at all. It is really a paean to the Bill. Its only criticism is that the government should have done a better job at selling the Bill to the population. But having said that, the Committee still tries to obfuscate and mislead.
Consider the section on ‘encryption’. It is heavily redacted (another example of modern democracy: ‘don’t tell the plebs what we’re doing’). It effectively says little more than that the agencies have ‘options’, and that a service provider could be required to provide an unencrypted version of encrypted communications data.
Makes you wonder, doesn’t it. What are those ‘options’? And what does it mean for the service provider to provide an unencrypted version of encrypted communications data?
I asked two interested experts to explain the issue for me – and rather than try to comment on their replies, I’m reproducing them in full.
James Firth, CEO of the Open Digital Policy Organisation Ltd
The problem arises because communications can be – effectively – layered. So a portion of ‘content’ at one (higher) layer is actually communications data at a lower layer.
A classic example is webmail. Alice logs in to mail service provider G via secure HTTPS and sends a message to Bob. All her ISP knows is that Alice has a web session with G.
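The split Firth describes can be made concrete. Below is a toy Python sketch (the URL and field names are hypothetical) showing which parts of Alice’s webmail request are visible to her ISP and which stay inside the HTTPS tunnel:

```python
from urllib.parse import urlparse

def observer_view(url: str) -> dict:
    """Split a URL into what an on-path observer (e.g. an ISP) can and
    cannot see when the connection is made over HTTPS."""
    p = urlparse(url)
    return {
        # Visible: the DNS lookup and TCP connection reveal these.
        "visible": {"scheme": p.scheme, "host": p.hostname, "port": p.port or 443},
        # Hidden: everything inside the TLS tunnel, including the path,
        # the query string, and therefore the message and its recipient.
        "hidden": {"path": p.path, "query": p.query},
    }

view = observer_view("https://mail.example.com/compose?to=bob@example.org")
print(view["visible"]["host"])  # mail.example.com -- the ISP sees this
print(view["hidden"]["query"])  # to=bob@example.org -- the ISP does not
```

In other words, the ‘communications data’ the Bill wants (who Alice emailed) sits one layer above what the ISP can lawfully observe.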
Various proposals being touted – some of them appearing in the various drafts – include forcing mail service providers to abide by the Comms Bill. Jurisdiction could be enforced in some way for any company with a UK presence, but it would be impractical as the Bad Guys would just use other companies.
But there have been scarier options touted.
If Alice didn’t use a secure method to connect to her mail service provider her ISP could be forced to scan all CONTENT in order to detect nested communications data. In this model her ISP would scrape the email “to” and “from” fields from her web session. I doubt this would pass muster with various EU Directives but that’s been suggested.
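A sketch of what that scraping would look like in practice, assuming (hypothetically) that the DPI box sees an RFC 822-style message in the captured plaintext request body:

```python
from email.parser import HeaderParser

# A captured plaintext webmail submission (simplified, hypothetical format)
# as a DPI box on the ISP's network might see it.
captured_body = """\
From: alice@example.com
To: bob@example.org
Subject: lunch?

See you at noon.
"""

headers = HeaderParser().parsestr(captured_body)
# The DPI box keeps only the nested communications data...
comms_data = {"from": headers["From"], "to": headers["To"]}
# ...and (honest guv) discards the content.
print(comms_data)
```

The point of the sketch is that there is no technical boundary here: the box that extracts the ‘to’ and ‘from’ fields necessarily reads the whole message to find them.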
Even scarier suggestion – if Alice DID use SSL to connect to her mail service provider it has been suggested – I have a very good contact confirming this – that legislation could be introduced to force her ISP to store the whole encrypted transaction, even though this includes the content.
The idea being that HMG could get a court order – here or in e.g. the US – at a later date to force her mail service provider to disclose their private SSL key. From that it would – in *some cases* – be possible to replay the SSL transaction to discover the session key and decrypt the contents, then extract the communications data, and – honest guv – ignore any content.
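The replay Firth describes works because, with the old RSA key exchange, the client’s session secret is encrypted directly under the server’s long-term public key. Here is a deliberately toy model of the ‘store now, decrypt later’ scheme, using textbook RSA with tiny primes and an XOR stream as a stand-in cipher (utterly insecure; illustration only):

```python
import hashlib

# Server's long-term RSA key pair: n = p*q, public exponent e, private d.
p, q, e = 61, 53, 17
n = p * q                           # 3233 (a toy modulus)
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Stand-in for the symmetric record cipher keyed by the session secret.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# --- At capture time: the ISP stores the whole encrypted transaction. ---
premaster = 42                            # client's secret, sent RSA-encrypted
stored_handshake = pow(premaster, e, n)   # only the private key can invert this
stored_records = xor_stream(premaster.to_bytes(2, "big"), b"To: bob@example.org")

# --- Later: a court order yields the server's private key d. ---
recovered = pow(stored_handshake, d, n)   # replay the handshake
plaintext = xor_stream(recovered.to_bytes(2, "big"), stored_records)
print(plaintext)  # the content falls out along with the communications data
```

Note the last line: once the session key is recovered, there is no way to extract only the ‘to’ and ‘from’ fields without also decrypting the content.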
The major flaw in this plan is that Google, as one mail service provider, has rolled out a feature called forward secrecy (http://en.wikipedia.org/wiki/Perfect_forward_secrecy).
Forward secrecy introduces, essentially, a second negotiated secret into the SSL transaction; a secret known only by the client web browser. The protracted SSL handshake with forward secrecy ensures that if one private key was later compromised – e.g. the mail service provider’s key – an attacker would still not be able to reproduce the plaintext from a captured encrypted session.
Clever mail service providers would never want to be in a position where they are forced by a court to hand over their private keys, so forward secrecy is actually in their interests: it devalues their private key, which alone can no longer decrypt a captured session. In fact the second secret needed to decrypt a session with perfect forward secrecy should have been destroyed by the client and the server as soon as the session ends.
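The mechanism behind this is ephemeral Diffie-Hellman. A sketch, using a small 64-bit prime far too small for real use (real TLS uses 2048-bit groups or elliptic curves), of why handing over the long-term key no longer helps:

```python
import secrets

# Public group parameters: a small prime P and generator G (toy values).
P, G = 0xFFFFFFFFFFFFFFC5, 2

# Each side generates a fresh, ephemeral secret for this session only.
a = secrets.randbelow(P - 2) + 1   # client's ephemeral exponent
b = secrets.randbelow(P - 2) + 1   # server's ephemeral exponent

A = pow(G, a, P)   # sent on the wire; the ISP records this
B = pow(G, b, P)   # sent on the wire; the ISP records this

# Both sides derive the same session key. The server's long-term key is
# used only to SIGN B, never to encrypt the session secret itself.
client_key = pow(B, a, P)
server_key = pow(A, b, P)
assert client_key == server_key

# Session over: both sides destroy their ephemeral exponents.
del a, b
# The ISP now holds A, B, the signature, and perhaps even the server's
# long-term private key, yet recovering the session key from A and B
# alone means solving the Diffie-Hellman problem. Nothing disclosable
# survives the session.
```

This is exactly why the ‘store the encrypted transaction, demand the key later’ plan fails against providers who deploy forward secrecy.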
Alexander Hanff, managing director at Think Privacy Ltd
When an SSL connection is made a ‘tunnel’ is set up and as a result all of the HTTP headers are encrypted (these include the specific web page you are requesting from the server), but the TCP and IP data is not encrypted as they exist on different OSI layers – HTTP/SSL exist in layers 5-7 whereas TCP is layer 4 and IP is layer 3 (if my memory serves me correctly; you might want to double-check that).
As you correctly state, it would be impossible to get the encrypted content to the server if the 3rd and 4th OSI layers were encrypted, so yes there is still a DNS lookup (which means the ISP knows the domain and the IP address you are trying to communicate with) but they would not know which web page you are trying to access within that domain.
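Hanff’s layering point can be shown at the byte level. This is a drastically simplified, hypothetical packet layout (real IP and TCP headers carry many more fields), but it captures the split: the observer parses layers 3 and 4 in the clear, while layers 5-7 are opaque ciphertext.

```python
import struct

def build_packet(dst_ip: str, dst_port: int, tls_ciphertext: bytes) -> bytes:
    """Toy encapsulation: cleartext L3/L4 headers around an encrypted payload."""
    ip_header = bytes(int(o) for o in dst_ip.split("."))   # layer 3 (simplified)
    tcp_header = struct.pack("!H", dst_port)               # layer 4 (simplified)
    return ip_header + tcp_header + tls_ciphertext         # layers 5-7, opaque

packet = build_packet("93.184.216.34", 443, b"\x17\x03\x03...encrypted HTTP...")

# An on-path observer parses the cleartext headers...
dst_ip = ".".join(str(b) for b in packet[:4])
(dst_port,) = struct.unpack("!H", packet[4:6])
print(dst_ip, dst_port)   # visible: where the traffic is going
# ...but everything from byte 6 onward is TLS ciphertext: the requested
# page, the HTTP headers and the content are all inside the tunnel.
```

So, as both replies agree, the ISP can always retain the ‘where’ (domain and IP); it is the ‘what’ and ‘to whom’ that encryption puts out of reach.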
The same should be true of emails which are not hosted by the ISP (for example, I run my own email servers): they would know which email server I am trying to communicate with, but since my emails are transported over SSL/TLS they would not know whom I am communicating with or anything else held within the encrypted packets (I run my own email servers to circumvent the data retention aspects of RIPA/the Data Retention Directive).
That said, the ISP could easily set up man-in-the-middle attacks, similar to how Phorm did with their DPI boxes, which would allow them to decrypt everything (including the content). I presume this is the redacted content in the report released yesterday, where they talk about the government having a feasible method of dealing with encryption. This would of course be completely illegal under RIPA (without a warrant), so they would need to introduce legislation to do it (which would put them in breach of the EU Data Retention Directive, but as we know the UK government is not good at complying with EU Directives, so they probably wouldn’t care).
So in summary, yes encryption restricts them from obtaining some of the higher level data they want but not the low-level data such as domain and IP.
Notice that there is one common element in these two replies: further legislation. It seems highly likely, then, that this Communications Bill will be supplemented by altogether more intrusive legislation (the ‘options’) when the authorities finally realise what everyone has been telling them: this Communications Bill isn’t what they say and will not work. And that, of course, is democracy at work: keep asking the question until you get the response you want.