Working in security is guaranteed to do one thing – it makes you cynical and distrustful. Snowden has quantified and multiplied that distrust – so it is perfectly possible that my cynicism has more to do with personal paranoia. But…
Is Mandiant really worth $1 billion? That’s what FireEye paid for it. Not in cash, but in shares – and it would have been more had FireEye’s shares not fallen just prior to the announcement. Shares, incidentally, that FireEye only has because it went public less than four months ago. It couldn’t buy Mandiant with cash because FireEye has never yet turned a profit.
Mandiant is privately held, and the big winners in the acquisition will be Mr. Mandia, the company’s founder, Mr. DeWalt, who joined Mandiant’s board as chairman in 2012, and the company’s venture backers. Mandiant has raised $70 million from Kleiner Perkins Caufield & Byers, the venture capital firm, and One Equity Partners, an investment arm of JPMorgan Chase.
FireEye Computer Security Firm Acquires Mandiant: New York Times
But hang on a bit. Isn’t Mr DeWalt chairman of FireEye? Yes he is. On 23 June 2012 he said (here), “Greetings everyone – Dave DeWalt here, the new chairman of FireEye. I have to tell you I’m very excited to join this hot security company…”
But just over a month earlier he blogged about his happiness in becoming Mandiant’s chairman:
Now, for Dave DeWalt to be a ‘big winner’ in the sale of Mandiant, he has to have a percentage of that company – which would not be surprising for the chairman of that company. So what we have is a timeline of Mr DeWalt becoming chairman of Mandiant (and some time thereabouts receiving a percentage of or investing some money in ownership of Mandiant); that same Mr DeWalt becoming chairman of FireEye approximately six weeks later; and then the second Mr DeWalt buying the first Mr DeWalt’s part of Mandiant for a very large sum of money just 18 months later.
I have, of course, asked FireEye if Mr DeWalt was still a member of the board of Mandiant at the time of the purchase, and how much he made from the sale – but have not at the time of writing this received a reply.
One of the first rules of security is that you never use a product that employs any form of proprietary cryptography. And if a security guy then says ‘be careful’, you’d best be very, very careful — no matter how many magazines or newspapers say the product is the real deal.
That’s what happened with Cryptocat, a secure chat product that “could save your life and help overthrow your government,” according to Wired — it could “save lives, subvert governments and frustrate marketers.” Forbes said that it “establishes a secure, encrypted chat session that is not subject to commercial or government surveillance.” Sounds good.
But security folk weren’t so sure. “Since Cryptocat was first released,” warned Christopher Soghoian in July 2012, “security experts have criticized the web-based app, which is vulnerable to several attacks, some possible using automated tools.”
Patrick Ball expanded in August 2012:
CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”… Unfortunately, these tools are subject to a well-known attack… but the short version is if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.
When It Comes to Human Rights, There Are No Online Security Shortcuts
Security professionals, then, were not surprised when last week Steve Thomas wrote about his DecryptoCat — which does what it says on the tin: it cracks the keys that let you read the messages.
If you used group chat in Cryptocat from October 17th, 2011 to June 15th, 2013 assume your messages were compromised. Also if you or the person you are talking to has a version from that time span, then assume your messages are being compromised. Lastly I think everyone involved with Cryptocat are incompetent.
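DecryptoCat’s attack comes down to keyspace arithmetic. The sketch below is purely illustrative – the digit count is an assumption for the sake of example, not Thomas’s actual figure – but it shows why generating key material from decimal digits rather than full random bytes is fatal:

```python
import math

# Illustrative figures only (assumptions, not DecryptoCat's exact numbers):
# a key intended to offer 256 bits of security, whose material is drawn
# from decimal digits (10 values each) instead of full bytes (256 values).
INTENDED_BITS = 256
KEY_DIGITS = 64  # hypothetical: key built from 64 decimal digits

# Each decimal digit carries log2(10) ~ 3.32 bits, not 8 bits.
effective_bits = KEY_DIGITS * math.log2(10)

print(f"intended strength:  2^{INTENDED_BITS}")
print(f"effective strength: ~2^{effective_bits:.0f}")

# Cryptanalytic shortcuts (e.g. meet-in-the-middle) can cut the work
# factor further still -- which is what moves an attack into range of
# commodity hardware.
```

The point is not the exact numbers but the gap: every bit lost halves the attacker’s work, and a generation flaw like this loses dozens of bits at a stroke.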
This is a big deal, because Cryptocat has been marketed towards dissidents operating in repressive regimes. As Soghoian wrote:
We also engage in risk compensation with security software. When we think our communications are secure, we are probably more likely to say things that we wouldn’t if our calls were going over a telephone line or via Facebook. However, if the security software people are using is in fact insecure, then the users of the software are put in danger.
Tech journalists: Stop hyping unproven security tools
Add to that the current revelations of NSA/GCHQ mass surveillance, and our understanding from last week’s Snowden leaks that the NSA automatically and indefinitely retains encrypted messages, and we can say with near certainty that if you have been using Cryptocat, at least the US and UK governments are aware of everything you said.
Briefly, towards the end of last year, I contributed a newsy column to the print version of Infosecurity Magazine. The magazine has now kindly allowed me to post them here. There are eight items in total; viz,
Just in case you missed any of them…
Kettling is an emotive issue. It is a police tactic used around the world to contain and limit protests. The theory is pretty good. If any area of the protest is over-heating, isolate it and separate it and allow it to fizzle out. But the practice is not so simple. Innocent bystanders can be caught. Human rights can be violated. And in the UK, it is illegal unless the police have genuine reason to believe it is necessary to prevent violence.
The Corporate Greed demonstration in London on Saturday 15 October could hardly be called a violent protest.
Earlier today protestors were peacefully prevented from gaining access to Paternoster Square, and there has been no major disorder.
Met Police statement: Update on protests in City of London
That suggests that kettling would be illegal. And indeed, according to the BBC, there was no kettling.
But police at the scene said a “kettling” technique had not been used and that protesters were free to leave the square.
Occupy London protests in financial district
But, admitted the Met:
There is currently a containment at St Paul’s Churchyard to prevent breach of the peace. We will look to disperse anyone being held as soon as we can.
A containment officer is on the scene to make sure this process works effectively.
We will attempt to communicate with people within the containment area and will provide water and toilets for those being contained.
Those who are suspected of being involved in disorder may be questioned or arrested as they leave the containment.
That’s a kettle described by a PR man. But isn’t this part of the cause of the protests? The way we are fed half-truths and misleading information to keep us quiet?
Patrick Gray (Risky Business) has gone to town on statistics released by Norton. The claim is that cybercrime now costs us almost as much as the illicit drugs trade.
If the numbers are to be believed, these reports say, that means cybercrime costs us nearly as much as the global trade in illicit drugs. It’s a sensational claim and makes an awesome headline, but any way you slice or dice the numbers they just simply don’t stack up.
Norton’s cybercrime numbers don’t add up
Patrick points out that Norton’s figures include ‘indirect’ costs while the drug figures do not.
…But if you add the USD$114bn figure for direct cybercrime losses to the USD$274bn “time lost” figure, you wind up with a total just under the figure for drug sales (USD$402bn).
…Just think of the harm being inflicted on Mexico right now by the drug cartels, not to mention narco-related drama in countries like Afghanistan. Then there’s the money spent on the “War on Drugs,” keeping drug dealers in prison and the productive capacity society loses to all those dope-smoking young males glued to their PlayStation 3s.
What comes out of this post is the extent to which vested interests manipulate figures to suit themselves. It happens all the time and everywhere: in politics and throughout industry. One of the most ridiculous, overblown and absurd figures comes from the rightsholders seeking to justify ACTA. They take an area with a high use of pirated goods, extrapolate that across the world, multiply the result by the retail value of the goods concerned, and claim that they are losing the full amount to piracy. First, this is an extrapolation built on ridiculously inflated assumptions; second, it assumes that everybody using pirated goods would have paid the full price if the ‘free’ version were unavailable.
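That methodology reduces to a few lines of arithmetic. Every number below is invented purely to illustrate the shape of the claim, not taken from any real report:

```python
# Invented figures, for illustration only.
piracy_rate_sampled = 0.80       # piracy rate measured in a high-piracy region
global_users = 500_000_000       # then extrapolated to every user worldwide
retail_price = 50.0              # full retail price per unit

# The rightsholders' method: assume the sampled rate holds globally,
# and that every pirated copy is a lost full-price sale.
claimed_loss = piracy_rate_sampled * global_users * retail_price

# A more honest estimate: a lower real-world piracy rate, and only a
# fraction of pirates who would ever have bought at full price.
realistic_rate = 0.15
conversion = 0.10                # pirates who would actually have paid
realistic_loss = realistic_rate * global_users * conversion * retail_price

print(f"claimed loss:   ${claimed_loss / 1e9:.2f}bn")
print(f"realistic loss: ${realistic_loss / 1e9:.2f}bn")
```

With these invented inputs the claimed figure comes out more than fifty times the realistic one, which is exactly how a headline-grabbing loss number is manufactured.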
The tragedy is that it has worked. ACTA is on the point of being signed. As far as Europe is concerned, this is undemocratic, secretive and illegal. As far as America is concerned,
The United States finds itself in a particularly bizarre situation – on the one hand, it claims that the Agreement is fully in line with domestic law while, on the other, it is reportedly not prepared to be bound by the Agreement and is treating the text as a non-binding “Executive Agreement.” The USA does, however, expect the other signatories of the Agreement to consider themselves legally bound.
Countries start signing ACTA, preparatory docs still secret
Marnix Dekker, one of the authors of the ENISA report on Appstore Security, has responded to my post in its comments (Appstore security: a new report from ENISA) and I would like to thank him for doing so. It’s worth reading, and I reproduce it here in full:
As one of the authors, allow me to briefly reply to your comments.
First of all thank you for reviewing the paper, I appreciate the feedback. Rants can be very refreshing.
It is true that the lines of defense are not in any way controversial and may seem obvious. We felt that there was the need to outline the different defenses that can be used, as most of the app stores and platforms are not very explicit about these defenses. This is confusing for consumers.
Allow me to comment on your criticism of the killswitch. I would like to note that we do not exclude that there are other (than military) settings where a killswitch is unwanted. Bear in mind that most users do not want to keep malware on their device. We even mention that an opt-out should be offered where appropriate.
About jails: We are not saying that jailbreaking should be illegal, or that consumers should have no means of using alternative appstores… only that this should not be made so easy as to allow drive-by download attacks (email+link, genuine looking appstore, install approval, click, infected).
Your alternative proposal, to hold appstores liable for software vulnerabilities, is really a legal solution. I think it is a very interesting subject, but (big disclaimer) I am not a legal expert:
Some issues with this:
- It would be easy to set up a rogue appstore, run it from some obscure country, fill it with some infected apps. It would also be relatively easy to trick users into installing from there. Your solution, to simply find a suspect, and a court to fine, sounds to me a bit complicated. Just think of all the extradition procedures, harmonisation of laws, etc. that would be needed. Let’s ignore rogue appstores in the sequel.
- If I look at other platforms/software I do not see many consumers being granted compensation by courts, nor do I see many software vendors being fined for selling/distributing flawed software. Now this could change in the future, but I think we should address security in the meantime as well.
- Secondly, judges usually start fining people when it is clear they have been negligent or had malicious intent. That requires some kind of definition/agreement of what are best practices and sufficient measures/defences.
- Another issue with liability is – I think – the following: Imagine the opensourcing of software to continue. Android, Linux, Openoffice, etc, are examples of this trend: A couple of volunteers decide to solve a problem (text editing say) by writing some software routines (say openmoko)… they publish them free of charge and they disclaim that you should only use this software at your own risk. Would you think it is fair to still fine them for flaws? What I am trying to say is that there are numerous examples of free opensource software/apps/platforms, and that we still need to address security there as well. Do you agree that the liability solution would only work for commercial software/platforms? In that case, what do we do about the rest?
Looking forward to discussing this with you – software liability is a fascinating topic
What have we actually learned from the phone-hacking scandal so far? That this is a game changer? Don’t believe it. That Murdoch won’t get BSkyB? Don’t believe it. He will eventually, one way or another. This current furore will blow over, die down and be replaced by another story. The abdication of Gaddafi? The overthrow of Assad? The implosion of the Eurozone?
What worries me most is that ultimately, nothing much will change. Tabloid journalists will just find other ways to get the scoop. And we’ll not change our phone habits. We’ll lose them. We won’t secure them. And we won’t adopt voice encryption. We’ll also just accept the new joined-up Police National Database (PND) despite the proven corruptibility of the police that access it. So let’s look at those two things: mobile phone security, and the PND.
Mobile phone security
Clearly we need to take this more seriously. Strong access control to use it, anti-malware to protect it, data encryption for our sensitive information, remote shutdown in case of loss. All of this is available, and much of it can be got free of charge. But we should now also be thinking in terms of voice encryption to protect our conversations; which sadly is not yet so simple.
One option would be to use Skype VoIP, with its built-in encryption. This would effectively be free (and probably lower our telephony costs as well); but it is far from ideal. I asked Konstantin König, sales and marketing manager at GSMK CryptoPhone, for his view on this.
Skype encryption only helps “against the neighbour”, like an amateur sniffing on your network segment. It is however not strong and well protected enough against determined attackers, especially those with a good technical background or even intelligence background. With more and more nations engaging in economic espionage and countries either helping out their championed companies or directly getting involved in strategic takeovers, this is not a theoretical risk. Skype is also not protected against attacks with Trojan horse software, which is frequently used to snoop into users’ Skype communications. Trojans are being offered even commercially today and are in widespread use by all kinds of attackers, down to jealous spouses.
You will not be surprised to know that GSMK offers a particularly strong product for mobile phone voice encryption.
The GSMK CryptoPhone comes with very strong encryption that has been designed to withstand even attacks from nation states and has a hardened operating system, that provides a high degree of protection against attacks with Trojan horse software, as long as the user maintains physical control of the phone.
I asked Constantin Graf zu Stolberg, executive partner at merchant bank Moorgate and Co and a customer of GSMK why he had adopted the CryptoPhone voice encryption. “Customer pressure,” he answered simply. “Our customers need to talk to us in total confidence about their finances, corporate takeovers, mergers and so on.” Either one or other party would have to travel potentially hundreds of miles in order for them to speak in guaranteed privacy – or they would need good strong encryption. In this case, economy pointed to the latter. I suspect the same economy of security will apply to many large companies. For the rest of us, sadly, it is simply too expensive. But perhaps it shows a market opportunity for the development of seriously strong but inexpensive voice encryption for the masses – something I’m sure that our law enforcement and intelligence agencies will not welcome. Which leads neatly to the second point – the PND.
Police National Database
I have no problem with the idea of a single joined up national police database. My problem is with the data stored on it and the access to it. First of all, it will contain unproven accusations against people never found guilty of the crime in question. And then something like 12,000 police officers will be able to access it.
The PND will face attack on two fronts: from external hackers (including foreign nations), and from the proverbial insider threat of corrupt, inquisitive or dissatisfied police officers. Anybody who thinks or claims that the PND is external hack-proof is living in cloud cuckoo land. It will happen sooner or later: accusations, founded or unfounded, of things like sexual misdemeanours (especially involving minors) are valuable information for foreign agencies, competitive companies and organised gangs.
However, the insider threat from police officers themselves is just as severe. I asked the NPIA for some further information. Are there any basic rules that define what information is, and what information is not, added to the PND?
Existing information held on local police databases which supports the custody, crime, intelligence, domestic violence and child abuse police business areas has been loaded. The PND can be used for any policing purpose, but the initial business focus will be in three key areas of policing: safeguarding children and vulnerable adults, countering terrorism and preventing and disrupting serious and organised crime. The PND is primarily an intelligence tool and will be used mainly by police investigators and analysts.
So that confirms that you don’t need to be guilty of an offence, merely to have had a complaint made against you, for your details to be held on the PND. How many people can access the PND? How is a user ‘authorised and appropriately vetted’?
Up to 12,000 vetted and authorised individuals will have access. Access will be strictly limited to those whose roles require it. Extensive background checks are made on all users, and the system is fully and heavily audited. PND can only be used for policing purposes and accessed by police forces or police organisations such as CEOP or SOCA. Role-based access dictates what level of information a user can view in terms of its security marking, not what business area the information is linked to.
At which point, think back to Steven Gerrard’s affray…
In 2009 Gerrard was tried and subsequently cleared of any misdemeanour relating to an altercation between himself and Marcus McGee…
The case generated such interest within Merseyside Police that officers and staff, with no involvement in the case, breached data protection laws by accessing Gerrard’s file. Following an audit by Merseyside’s professional standards directorate, 130 officers have been cited as being involved in the data breach, with the file containing information such as Gerrard’s Date of Birth, address, the allegations against him and the photograph of him taken upon his arrest…
The report says that these breaches are common across Britain with the Lancashire Force experiencing a total of 84 breaches over a three-year period. This included one officer running checks on his daughter’s boyfriend to expose the man’s criminal record for sex offences.
FOI Act reveals Merseyside Police breach Data Protection Laws for information on Steven Gerrard
So we can pretty well expect PND breaches out of simple curiosity. Now think of the current phone hacking scandal, with allegations of cash paid to policemen for peoples’ personal phone numbers.
Brian Paddick, formerly a senior police commander, told the BBC that journalists make clandestine cash payoffs to police in envelopes, which are handed over at a drive-thru fast food restaurant near the News International headquarters.
Sometimes the reporters get information about celebrities in trouble — he cited a car crash involving singer George Michael, who was using marijuana and alcohol at the time — and sometimes it deals with ongoing investigations.
He said there are cases when payoffs are “jeopardizing serious criminal investigations by giving out confidential information that could be useful to criminals.”
Police officials have said only a handful of police are suspected of receiving payments, but declined to say how many.
Paddick, a former London mayoral candidate who may run again in 2012, said one journalist said he had paid 30,000 pounds (about $50,000) for information.
Focus of UK phone hacking scandal shifts to possible police corruption as tabloid closes
So I think we can say quite categorically that the PND will be hacked from the outside, and illegally accessed from the inside. Frankly, if someone has been convicted of a crime, that should be a matter of public record; so I’m not particularly concerned if it does get hacked. But a lot of the information will be unproven, possibly false and malicious accusations and hearsay: fodder for the intelligence tool. And theft of that sort of information is not merely worrying, it is potentially dangerous and life-destroying for totally innocent people.
Back to our original question: what have we learned? Probably nothing – but what we should have learned is that we need to increase our security stance for our mobile phones; and that the Police National Database should not be allowed to contain information that has not been verified by a successful criminal prosecution.
Security. If you look up ‘security’ in a thesaurus, one word you are unlikely to find is ‘control’. And yet the two words, security and control, are synonymous. We say we wish to attain security because it is more acceptable than saying we wish to gain control: but security of our data is control over our data, and user security is control of our users. Whichever way you look at it, security/control is a huge motivating factor in all of our actions.
Two contemporary worlds in which these two words collide are ‘cloud’ and ‘data centres’. Are we concerned about moving our data into the cloud because of concern over its security or because we fear losing control of that data? In reality it is the same thing.
In data centres it appears in the whole debate: build (that is, keep control over the data), or buy (that is, delegate control to a data centre provider). When you look at the arguments, there are no objective reasons for the average company to build its own data centre; just the overriding subjective need for control. That argument is enough to make many companies who should know better choose the more dangerous route of building their own. They don’t actually want to build, they just need to keep control. For security reasons. And that’s a big problem for the specialist providers.
Enter Colt and its new modular data centres. These are complete prefabricated modular data centres delivered not to your door, but installed inside your own premises. And if your floor won’t take the weight, well you never had any choice anyway – you put the same prefabricated data centre in Colt’s premises.
What do you get? You get a self-contained data centre complete with security and its own power supply and back-up; and its own fire detection and suppression system. You get a specialist data centre provided in your own premises.
“What we’re doing with modularisation is new and market-changing,” explained Bernard Geoghegan, VP. But, he added, it shouldn’t be confused with other companies’ containerisation. “We deliver traditional data centre space in a modular fashion. Our delivery method is similar to containers, but the product that you get is a traditional data centre space. The cleverness is around the delivery and assembly rather than any new materials.”
This modular approach can be used in either the customers’ premises or Colt’s own premises. “We do both,” continued Bernard. “In looking at our own requirements internally over the last few years, our guys put together a team that built a solution – and what we’re doing is productising that solution. It allows us to deliver both into our own space and into customer space. One of the big advantages is in fast delivery: traditional data centre projects tend to take around 18 months to 2 years – but we can deliver a fully commissioned data centre in four months.”
Guy Ruddock, vice president of operations, explained: “It’s like the oil industry of the 1980s. Building an oil rig on-site in the North Sea was simply too expensive and too slow – so the solution was to build the platform in its entirety on land, and then ship the whole thing to site. That’s what we do. But in a form that will take additional modules that can simply bolt together to provide even more space as and when required.”
That’s it – the circle is squared. You don’t need to choose; you can have both: a dedicated data centre provided by a specialist data centre provider in your own premises – and no loss of security/control.