There are two great drivers in current computing: a general migration into the public cloud, and the growth of mobile computing. The two are connected, for it is because of cloud computing that mobile computing is evolving so fast: ubiquitous access to web technology makes the immediacy and geo-freedom of mobile computing attractive and inevitable. But both cloud computing and mobile computing have their own, different security issues – and both need to be solved before the full potential of either can be realised. That’s what we’re going to discuss here: how the cloud can be secured for the mobile workforce; and how the mobile workforce can be secured for the cloud. We’ll start with security in the cloud.
Public cloud computing and security
The basic question is simple: how can you be secure in the public cloud? And the basic answer is also simple. “You can’t,” says Bruce Schneier. “In the cloud you’ve given your data to someone else. How can you secure what you don’t have? You don’t even know where it is.”
This is a purist’s view: there is no such thing as absolute security. It doesn’t mean that you have to abandon the cloud, it means that you must understand the problems in order to be adequately secure. There are two key points here:
- absolute security is impossible (as in any form of computing). In the cloud this is primarily because you have to place your trust in a third party
- where is your data? Knowledge of its geo-location is essential in order to ensure compliance with laws such as the EU’s data protection regulations
Neither of these problems will stop the migration to the cloud – the economic arguments are too compelling. “When you look at the incredible amount of scalability, and the flexibility and cost savings that combining the cloud model with mobile computing will bring,” says William Beer, director, OneSecurity practice, PricewaterhouseCoopers, “I’m convinced that this is just going to lead to some very positive changes in the whole way we conduct business.”
The key to being adequately secure in the cloud is having trust in your supplier. Philippe Courtot, chairman and CEO of Qualys, doesn’t believe this should be too difficult. “If you look at the current environment,” he argues, “first you design the network, then you choose your applications and integrate those applications and finally you secure them. This is actually beyond the capability of most companies. Think of new vulnerabilities and how long it takes to implement workarounds and then patch in the updates. Cloud computing can greatly simplify the problem by creating an environment that is ready-made and better controlled by security specialists.”
PwC’s Beer counsels that you must then look beyond the infrastructure at other aspects, such as forensics. “What happens,” he asks, “when something goes wrong?” [Is this a Freudian slip: not ‘if’, but ‘when’?] “What happens when you run into a problem and you need to move out of one cloud provider into another cloud provider? There are storm clouds on the horizon if we don’t consider these issues.”
So the choice of your cloud provider is important. You have to be able to trust that provider. The key here is your contract, your service level agreement (SLA) with that provider; if you’re not happy, shop around until you are. One area in particular you must explore is the geo-location of your data. This is particularly relevant for European companies that need to comply with very strict regulations on the ‘export’ of personal information. In order to maximise the cost-saving potential of cloud computing, the provider must be able to move data to the most efficient location – which could be a server in a foreign land; and that might be in breach of European law.
Here’s an example of the complexity. Google is a major cloud player. Google Apps is a major cloud application. But what if an EU company stores personal customer data in a Google Apps spreadsheet? Since the user doesn’t know where that is, isn’t it effectively illegal?
“We don’t think it is,” says Eran Feigenbaum, Google Apps’ director of security. “We don’t believe that there is a problem here. We have Safe Harbor certification [the agreement between the EU and USA that the US company is acceptable to EU data protection requirements].” Feigenbaum contends that since Google’s Apps servers are primarily located in the US and Europe, and since Google has held Safe Harbor certification since 2004, use of Google Apps automatically complies with EU data protection requirements. Is he right? There has been no legal confirmation. Can we take the risk? Can we afford not to?
But it would be wrong to think that moving to the cloud is just a security danger – it is also a security opportunity. Eric Baize, senior director of the Product Security Office at EMC, points out that when computer systems were first designed, security wasn’t part of the process. “You had computers, and network components, and applications and you integrated them and got them all working together – and then you went to separate suppliers to add security on the top. You don’t do that anymore. Today most security companies have become divisions within infrastructure suppliers. EMC was one of the first when it took over RSA; more recently Intel has acquired McAfee and HP has bought ArcSight.” Baize believes that before long, the only independent security companies will be start-ups. The effect though, is that security is now an integral part of the infrastructure. “When you buy a storage device today, you don’t have to add encryption afterwards; encryption is already a feature of storage.”
Moving to the cloud is, then, an enormous opportunity to get security right from the ground up. We always should have been concentrating on people and data, but we never did. Because security was separate to the infrastructure, and because the infrastructure was visible and the data wasn’t, we concentrated on trying to secure the infrastructure. Now things have reversed: the security is integral to an infrastructure that we cannot see. “Today,” says Baize, “we can have a security aware infrastructure rather than a security layer on top of the infrastructure.”
In theory then, there is no reason why cloud security should not be as good if not better than computer room security. But what about the geo-location issue? In a private cloud it is not an issue: you still own the infrastructure and ‘cloud’ is just a computing approach. You can use data loss prevention technology to prevent personal data leaving your network. Baize believes the same approach should be taken in the public cloud. “Look at data loss prevention (DLP) technology,” he says. “We are teaching our systems to be aware of content, and to respond to that content.” In DLP, that awareness prevents sensitive data from leaving the corporate network; in the cloud it could be used to prevent European data from leaving the EU.
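A minimal sketch of what such a content-and-geography-aware policy check might look like. Everything here is an illustrative assumption rather than any vendor’s actual DLP engine: the UK National Insurance number pattern stands in for “personal data”, and the region names are hypothetical:

```python
import re

# Illustrative pattern: a UK National Insurance number, as a stand-in
# for the kind of personal data EU rules restrict from leaving the EU.
PERSONAL_DATA_PATTERN = re.compile(r"\b[A-Z]{2}\d{6}[A-Z]\b")

# Hypothetical set of regions the policy treats as acceptable destinations.
EU_REGIONS = {"eu-west", "eu-central", "eu-north"}

def may_replicate(content: str, destination_region: str) -> bool:
    """Allow replication unless the content looks personal AND the
    destination is outside the EU - the DLP idea applied to the cloud."""
    contains_personal_data = bool(PERSONAL_DATA_PATTERN.search(content))
    if contains_personal_data and destination_region not in EU_REGIONS:
        return False
    return True
```

The point is not the regex but the shape of the decision: the infrastructure inspects content before it moves, rather than a bolted-on layer inspecting it afterwards.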
Securing the mobile user
A security aware cloud is one thing. That will help the user have confidence in his data. But with a growing mobile workforce, and the growing consumerization of computing, we won’t have security unless the data can also have confidence in the user. Once again, PwC’s Beer points out that not enough of us have considered the liability aspects of this development.
“Consumerization means that the lines are becoming blurred between personal and professional computing. Companies are allowing their staff to choose and use their own mobile devices. The same iPad could be used to update the company Facebook page, the member of staff’s personal Facebook page, the member of staff’s partner’s personal Facebook page, and the company website. What happens if this personal use then causes a problem for the company? Who is liable? What redress is possible?”
Apart from liability, there are two issues here: what can be done with the mobile device, and who is using it. In terms of what can be done, Philippe Courtot sees a growing relevance for thin computing. “I’ve been saying for ten years now that we are not meant to be dependent upon a huge complex operating system like Windows on the desktop, and that, essentially, most of our computing will be in the cloud.” He was a bit ahead of his time, but this is finally happening. Users are abandoning expensive, bloated desktop applications for the free (at least at the personal level) Google Apps and Microsoft Office Web Apps. When you think about it, all you need is a browser; and all you need for that is a smartphone or tablet.
Small handheld devices like this are ideal for the mobile worker; and they can more easily be locked down by the manufacturer. Consider the iPhone and the iPad, and the efforts taken by Apple to lock down its products. This can be annoying for the personal user; but a blessing for the company. However, it doesn’t solve the second issue: since mobile devices are so easily lost or stolen, how can the corporate data in the cloud be confident that it is the authorised user operating the mobile device? We’re talking, of course, about authentication: not just authenticating the device, but authenticating the user to the device – and it’s one of the hottest issues of the day.
For example, one possible route would be biometrics. Most governments are sufficiently confident in biometrics to promote their use for authenticating citizens with ID cards. Most security professionals, however, are less convinced. For example, if the user’s biometric template is stored centrally it is subject to loss, theft, alteration and corruption just like any other data. If your password is compromised, you change it. But you cannot change your biometric template.
Now consider Apple. Within the last few weeks it has patented the idea of using heartbeats as a biometric measure, and also bought a face recognition company. At this stage it is pure conjecture – but could we be moving towards biometric authentication of the user by the device taking periodic snapshots of the user’s face and simultaneously monitoring heartbeat rhythms? If the user is not the registered owner or, in motor insurance terms, a named other driver, then the device could be shut down (and maybe its location sent to the police).
This could be one approach. An alternative could be that just announced by Google: two-factor authentication (2FA) for its Google Apps. “A couple of years ago,” explained Eran Feigenbaum, “we sat down and discussed what we could do to improve user security. We came to the conclusion that the weakest link in the security chain is the user password. Every day thousands of accounts are phished, hacked or guessed. So we looked at two-factor authentication. 2FA is not new – it’s been around for a while – but by and large it hasn’t really taken off. One reason is the cost – so we wanted to have a service that is free. A second problem is complexity, both for the admins and the user – so we wanted to make things easy as well as free. It has to be easy – we find that if you make it easy for users to do the right thing, they tend to. If you don’t, they’ll find ways around it.”
Google’s solution is an out-of-band six-digit code, either generated by a BlackBerry, iPhone or Android app, or generated by Google and sent to the user’s mobile phone whenever a login is attempted. It simply ensures that the person attempting the login is actually the owner of the account. This, combined with Google Apps session encryption, raises Google’s cloud security to a level similar to that used by many banks. But there is still one problem common to both biometrics on Apple devices and 2FA on Google: they are limiting – Apple to Apple and Google to Google. What the cloud needs is something that transcends individual devices and service providers, and places no limits on its users.
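Google has not published the internals of its code generation here, but the general shape of such a six-digit one-time code is well known: the HOTP construction from RFC 4226, an HMAC-SHA1 over a shared secret and a moving counter (for a time-based or SMS variant, the counter is typically the current time window). A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """One-time code per RFC 4226: HMAC-SHA1 over the counter,
    dynamically truncated to a short decimal code."""
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Because only the secret-holder (the server) and the registered phone can produce the current code, a phished password alone is no longer enough to log in.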
EMC’s Eric Baize thinks the answer could lie in what he describes as a ‘risk score’. This involves “looking at multiple aspects of the transaction and the connection to make sure that we have adequate assurance that the user behind the device is not someone who has stolen the password,” he explains. “We’d be looking at things like are you connecting from the same device as the last time you connected; are you connecting from the same geo-location as before; are you using the same browser; are you generating the same type of transaction as you usually generate. It’s an accumulation of factors to create a composite score that reflects the risk involved. If the risk score is low enough, the connection is allowed; if it is a high risk score, the connection is disallowed.” It’s a technology already used effectively by the banks, and there’s no reason that such an established and proven technology should not be applied to the cloud.
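Baize doesn’t describe the actual scoring model, but the accumulation of factors he lists can be sketched as a simple weighted score against a threshold. The factors come from his quote; the weights and threshold below are illustrative assumptions only:

```python
def risk_score(session: dict, history: dict) -> int:
    """Accumulate risk points for each factor that differs from the
    user's established pattern; a higher score means a riskier login."""
    weights = {
        "device_id": 40,          # an unfamiliar device is the strongest signal
        "geo_location": 30,
        "browser": 15,
        "transaction_type": 15,
    }
    return sum(w for factor, w in weights.items()
               if session.get(factor) != history.get(factor))

def allow_connection(session: dict, history: dict, threshold: int = 50) -> bool:
    # Low enough score -> allow; high score -> disallow (or demand step-up auth).
    return risk_score(session, history) < threshold
```

A real system would score continuously and probabilistically rather than with fixed points, but the composite-threshold idea is the same.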
It’s going to be difficult. We’re going to need to ask questions we’ve never asked before. But marrying mobile and cloud computing will revolutionise the way we do business. And if we do it right, we now have the skills and technology to build security into the infrastructure rather than applying it over the top.
Over the last couple of days we have been hearing news about the seizure of more than 100 servers by the Dutch police. These servers were involved in the control of a huge number of Bredolab bots; so this can only be seen as a Good Thing.
However, the problem with taking down Command & Control servers is that it leaves the botnet itself in place. It can spring to life again when the criminals set up new C&C servers. So the real solution is to find and cleanse the bots themselves.
Well, the Dutch police attempted to do just this. With help from the Dutch Infosec company Fox-IT and the ISP LeaseWeb, the authorities uploaded their own code – effectively their own trojan – to the infected PCs. The payload is obviously benign. It simply sends the users to a Dutch Police page that explains that they are infected, and provides a link to information on removing the infection. By removing the bots rather than just the servers, the botnet is well and truly dismantled.
Well, I know nothing about Dutch law. But notice that the landing page is in English (there is, of course, also a Dutch version). It is perfectly clear, then, that the Dutch authorities were well aware that they would be ‘infecting’ PCs outside of The Netherlands – and quite likely some in the UK. So those of us in the UK can look at this from a UK point of view, and not just a Dutch one.
And what I want to know is whether the Dutch police action is legal, and/or acceptable. Most people in the security industry will automatically say it is acceptable. After all, it is their job to protect us, and this is a good way of going about it. And the security industry has been ‘infecting’ command and control servers for years – so this is just a small expansion from the servers to the bots. But I’m not so sure it is acceptable – and I’m pretty certain that in the UK it is illegal: that is, the Dutch authorities have broken UK laws if they have infected any UK PCs.
I asked leading lawyer Nicholas Bohm for his view. “Infecting a computer with a trojan would involve offences under the Computer Misuse legislation,” he explained, “unless carried out with some form of lawful authority. In the UK this is available under Part III of the Police Act 1997 (as amended). Authority may be given by chief constables, and others of equivalent rank.
“These powers were primarily introduced to cover the installation of viewing or listening devices in the premises or vehicles of suspects, but they seem to me capable of extending to planting trojans, keystroke loggers etc.”
Yaman Akdeniz, Associate Professor of Law, Faculty of Law, Istanbul Bilgi University, and Director, Cyber-Rights.Org, has similar concerns under the Computer Misuse Act: “Well, there is no ‘good hacker’ or ‘ethical hacker’ defence built into the Computer Misuse Act 1990, nor into the provisions of the Council of Europe CyberCrime Convention for example. So, whatever their intentions are, the access by the Dutch Police into the infected PCs of computer users would be unauthorised in the UK.
“On top of that their ‘modification’ of the content of the infected PCs can also be regarded in breach of the CMA 1990. So, from a legal point of view I find this approach problematic. What if they damage the computers? One may argue that the damage is already done with the initial infection but the access remains unauthorised whether by the bad guys or the good guys.”
So, on balance, the CMA forbids the covert installation of trojans, even if with the best of intentions by the good guys, but could be overridden by ‘chief constables, and others of equivalent rank’ under the Police Act 1997. But Bohm doesn’t believe that the Dutch police behaviour is automatically or necessarily bad. “Some such powers seem to me necessary, just as search warrants are. But I would rather see them controlled judicially – I am unconvinced by the use of retired judges as commissioners to supervise them, and would prefer the decisions involved to be subject to judicial review.”
It seems to me, then, that the Dutch police have broken UK law if they have uploaded their friendly trojan to any UK PCs; and have probably broken other laws all round the world. Judicial oversight may make such behaviour more acceptable; but without it, it should be abhorred. Accepting such behaviour from the authorities, who will always say ‘it is for your own good’ is a dangerous step. Every software developer in the world is aware of the dangers of ‘feature creep’. This sort of behaviour by the authorities is equally subject to feature creep – otherwise known as the slippery slope into authoritarianism.
Now I know I am just a grumpy old git – but this sort of thing so annoys me. I got this in my email today:
Technically, it’s spam, because it is certainly unsolicited. But I usually take a fairly relaxed view over what is spam and what is not. This is something I would tend to call marketing – if it were targeted to people who might be interested. But I am not interested. And for one very good reason. I don’t have an iPhone. In fact, I don’t have any smartphone. (Readers may recall my aversion to fried brains.)
So why have they sent me this? The answer may be at the bottom of the email:
First of all it says my email address has been given as a referral from a friend. That’s a lie. I don’t have any friends. Or rather, hopefully, it’s a lie because I don’t have any iPhones. But then it says the mailing is from a company called MailChimp.
The point here is that one of them, either iFindiPhone or MailChimp, is lying. I have no way of knowing which – so I put both into the same category. Both companies are lying spammers. That’s the impression I get. It is possible that only one of them is. But the net effect is that I won’t touch iFindiPhone with a barge pole. When a marketing company tells lies, it becomes a spamming company.
The first I heard of it was a Tweet from Luis Corrons towards the end of last week:
A bit cryptic, but the reference to me is almost certainly in relation to (one of) my two previous criticisms of AMTSO: firstly that its membership is almost entirely incestuous and that without involvement from outside of the industry its recommendations cannot be trusted; and secondly that use of any testing that is allowed to suggest 100% efficacy against viruses in the wild is disingenuous (see AMTSO: a serious attempt to clean up anti-malware testing; or just a great big con? and Anti Malware Testing Standards Organization: a dissenting view).
But now AMTSO itself has released details (this is on the former criticism, not the latter):
The Anti-Malware Testing Standards Organization (AMTSO, www.amtso.org), an international organization that encourages improved methodologies for testing security programs, announced today the imminent availability of a new subscription model that will open up membership to a wider audience.
Neil Rubenking, Lead Analyst at PC Magazine, commented, “As a member of AMTSO’s Advisory Board I’ve been privileged to interact and work with the group’s members and committees. AMTSO membership is open to individuals, but the 2,000 €/year price puts full membership out of reach for all but the most dedicated. The new subscription model will now allow all interested parties to make a marked contribution to the development of better testing methodologies.”
The new membership model will apparently cost just €20 (presumably per year), and is clearly a move in the right direction. But from the released information you don’t seem to get much for this. You get access to the
…educational resources that are already freely available on the AMTSO website [and] the development of documentation and participation on AMTSO’s email discussion boards, where some of the world’s foremost experts in the anti-malware industry and the testing industry leave vendor bias aside, in order to pursue lively conversations on the intricacies of malware testing, its fallacies and real-world ways in which to improve it.
In short, you get access to a specialist mailing list, and “the right to attend meetings, though not as voting members.”
I don’t want to sound churlish, because this is a major movement for AMTSO, but you get to speak your mind with no guarantee that anyone will listen, and certainly no say in what AMTSO actually does. It is nowhere near what I personally would like to see: the recruitment of senior technicians from some of the major corporate AV users; with full voting rights. If this simply isn’t possible, perhaps AMTSO could tell us why?
So, all in all, a tiny step in the right direction.
There has been so much doom and gloom over Osborne’s Cuts that it’s really getting quite depressing. The Labour Party is warning of a double-dip depression, the Unions are threatening outright industrial war, and every company in the country is saying ‘we can help, just buy our products or rent our services and we can help you make money out of the cuts’.
So Paul Winchester’s comments are worth repeating in full:
When it comes to IT and telecoms, the private sector will easily make up the jobs lost in the public sector. The private sector job market is in rude health and the industry is big enough to take care of itself. There is no reason to imagine the pace of consolidation envisaged in the Budget will undermine the recovery.
The IT industry employs one in twenty of the country’s workforce. If we assume roughly 490,000 public sector workers are going to lose their jobs over the next four years as a result of the review, that’s about 6,125 IT professionals every year leaving the public sector.
At first glance that looks bad – but if the growth experienced in the first three-quarters of this year continues, the private sector will hire a further 7,500 IT professionals in the fourth quarter of 2010 alone. Q3 2010 saw a 12% year-on-year increase in the number of roles we had on our books. So the private sector should be more than capable of finding jobs to replace those lost in the public sector, and the redeployment of people to more productive activities will improve economic performance, and in turn generate more employment opportunities.
Paul Winchester, managing director of IT and telecoms recruiter Greythorn
I write about IT. And telecoms (if you like). Am I safe, please?
Today is the Day of Cuts, the day when George Osborne has announced how he is going to reduce the budget deficit. I’m not an economist, so I’m going to make no comment whatsoever about the pros and cons of what he’s doing. Instead, I want to talk about Cameron’s Statement on Strategic Defence and Security Review, also published today. Specifically,
Over the next four years, we will invest over half a billion pounds of new money in a national cyber security programme.
This will significantly enhance our ability to detect and defend against cyber attacks and fix shortfalls in the critical cyber infrastructure on which the whole country now depends.
We already have an organization that is designed to protect the critical cyber infrastructure: CPNI, the Centre for the Protection of the National Infrastructure. I wonder how much of this money will actually go to CPNI? Or will it instead end up in the pockets of government-favoured businesses (like BAE’s Detica, or Microsoft or Intel et al). Detica already sounds as if it owns a slice:
Detica welcomes the £650m announced in the Strategic Defence and Security Review, as an important catalyst to protect the UK and build much needed UK cyber capabilities…
Cyber crime is one of the Nation’s greatest threats and we therefore welcome the Government’s commitment to improving cyber security in the UK. By partnering with the UK Government, we will be able to share capabilities and critical information to ensure that the UK can protect its critical national infrastructure and drive exports.
Martin Sutherland, Managing Director of Detica
Microsoft, of course, has already made its pitch with the Internet Health Certificate proposal; and the Microsoft/Intel Trusted Computing Platform would solve all the problems anyway! Either way, Microsoft will undoubtedly do anything it can in order to remain the primary supplier to government and especially education (which is ridiculous in hard times when Linux and OpenOffice are free).
But CPNI already has a programme. It’s called WARP: warning, advice and reporting point. WARPs are like-minded, trusted niche communities that share security information among their members and with other WARPs. For those who understand the concept of a CERT, a WARP is a CERT writ small – so that any organization can afford one. If you don’t understand CERTs, think of it as a sort of cyber neighbourhood watch. It’s actually a very, very good idea. And you’ve heard of the WARP, yes? Right. No, I’m sure you haven’t.
And that’s because of an almost total lack of funding for a very good idea that would have the potential to dramatically reduce the impact of computer infections across the country for a very minimal cost. There are other problems, of course – like the UK’s endemic attitude towards secrecy so that CESG could release security information to local authority WARPs but not private sector WARPs; and the lack of money to establish a centralised security store for all WARPs; and the lack of funding to allow the programme to evolve as fast as the threat landscape evolves… But all of these problems could be solved with just a tiny fraction of this new funding.
So will CPNI be getting the money? And will it pass any on to the WARP programme? I thought I’d ask. The CPNI website says it doesn’t speak to the press; enquiries are handled by the Home Office. So I tried the Home Office.
“WARP? What’s that?”
“CPNI? What’s that?”
“I think that’s the Cabinet Office.”
Not according to CPNI, I said.
“Oh, well the Chancellor’s on his feet at the moment, so we can’t say anything until he’s finished.”
I couldn’t face trying to explain that I was after information about the SDSR, not the CSR. So I asked if they would be able to help after Osborne had sat down.
“I doubt it.”
The Home Office hasn’t even heard about the Centre for the Protection of the National Infrastructure. So I very much doubt that it will get much money. And I hope I’m wrong, but I rather think that the worthy WARP programme will get even less. Instead, as always, the money will disappear in projects aimed at providing draconian security in return for further erosion of personal liberty. Same as it ever was. It’s all so very sad.
Readers of this blog will know that I am not the greatest fan of the Information Commissioner’s Office. It’s not entirely the staffers’ fault – if you create a guard dog without teeth it cannot bite; and what use is a guard dog that cannot or will not bite?
Here’s yet another case in point:
A doctor at North West London Hospitals NHS Trust breached the Data Protection Act by leaving medical information about 56 patients on the tube, the Information Commissioner’s Office (ICO) said today.
Is there much that is more personal, more sensitive and more private than your medical information? I think not. So the ICO has come down hard on the culprit:
Fiona Wise, Chief Executive of The North West London Hospitals NHS Trust, has signal [sic - signed?] a formal undertaking outlining that the organisation will ensure that personal data is processed in accordance with the Data Protection Act.
Now that’s gonna hurt. But what else can the ICO do? If it fines the NHS, we pay. If it sacks the doctor, we pay for a new one. But nothing the ICO has done to other data protection cowboys has had much effect – it certainly didn’t protect these 56 patients. Ollie Hart, head of public sector, Sophos, thinks the solution is at least partly in user education:
It is of paramount importance to educate users within the NHS of the risks of moving around patient and organisational information and how to protect such data. Having the right data protection software is vital but it also requires much more than just putting software in place. Alongside this, it is key to establish the right procedures and processes to protect the data, as well as educating users, across the organisation.
Well, Hart is of course absolutely right that this should be done; and if it were done… ’twere well it were done quickly. But why wasn’t it already being done? And will being told to do it now (when, potentially, the horse has already bolted) protect the personal data of those 56 patients? It will not. My opinion, then, mirrors that of Hugo Harber, Star’s Director of Convergence and Network Strategy: “If you don’t fine these companies that lose sensitive data, if you don’t make it very painful, then the IT director will not get budget next year to put in DLP or encryption or some similar system to fulfill the company’s duty of care.” (See here)
Obviously there’s no point in fining the NHS; so, hard as it may seem, doctors who lose their patients’ medical records need to be sacked. And that applies to anybody who loses the personal data of others. It’s the only way.
You may have noticed that some parts of the AV industry have not liked some of my posts. One example is the post on a Cyveillance study (Anti-virus is essential – it’s just not as good as they tell us). Please have a look at the comments to see how upset some people became.
My problem is that I have tried very hard to understand why the AV companies are so upset over this; and I cannot. I asked Luis Corrons, Technical Director of PandaLabs, to explain it to me; and he very kindly agreed (and I am particularly grateful since he is tackling the explanation in a foreign language). I have to say that I am still personally not convinced – but this is how the conversation went.
It would appear that Cyveillance have only tested static signature detection capabilities. This may be due to the fact they do not have the time or money required to perform detailed testing such as that from companies like AV-Test.org.
Static signature detection is the very first technology implemented in an antivirus and although it is good at detecting known malware, this is clearly not enough. Panda Security (as well as many other vendors) have been recognising this for years, which is why most of the major vendors have been developing proactive technologies such as behaviour analysis/blocking and cloud based detections, etc. Had Cyveillance performed tests that looked at other technologies aside from the static signature detection, I believe the detection results would have been decidedly different and the media attention would no doubt diminish.
The bit that I’ve never really understood about the AV industry’s concerns about this report is this: if Cyveillance took brand new copies of the AV packages and installed them on clean PCs, and then chucked the virus samples at them, how is this a false test?
Granted, it does not say that this package is better than that one – but that wasn’t the purpose. The purpose was to say: OK, AV #1 took n days to learn about this new virus where AV #2 took n+4 days to learn about the same virus. I simply cannot see where this is wrong.
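The measurement itself is trivial: for each sample, the lag between its first appearance and each vendor adding detection. A minimal sketch, with hypothetical dates:

```python
from datetime import date

# Hypothetical first-seen date for one new sample, and the (invented)
# dates on which each product started detecting it.
first_seen = date(2010, 10, 1)
detected = {
    "AV #1": date(2010, 10, 3),   # learned about the sample in 2 days
    "AV #2": date(2010, 10, 7),   # took 4 days longer
}

# Detection lag in days per product.
lag = {av: (d - first_seen).days for av, d in detected.items()}
print(lag)  # {'AV #1': 2, 'AV #2': 6}
```

Averaged over many samples, that lag is the only thing this kind of test claims to measure.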
Imagine you are writing about cars: “Airbags are essential – they’re just not as good as they tell us”, because a company doing business in the same sector has run a test with crash-test dummies and shown that the airbag alone was only saving lives 15% of the time. I can understand that some car makers would get upset with that; they would say that the safety of the person in the car should be measured globally, using all the safety measures it has, as happens in real life. Using the airbag together with the seat-belt would increase the percentage, the same with the ABS-EDS, etc.
The same is happening with antivirus. Cyveillance says that proactive technologies are really important, and that relying on signatures is a mistake. And they are 100% right. I’ve been saying the same for the last seven years: signature detection techniques are great, but they are more than 20 years old now and they have their limitations. Antivirus vendors know that, and that’s why most antivirus solutions include a number of proactive protection layers.
I understand that testing a security product in real-life circumstances is really hard; not many people are able to do it, and it takes a lot of time and resources. So why wouldn’t Cyveillance do a real-life test? Because it’s more expensive, and the results wouldn’t be that bad – which wouldn’t really benefit Cyveillance, since they want to “show” how bad antivirus solutions are, whether or not that’s true.
That doesn’t really wash. You are saying that only the airbag manufacturer can test whether it works or not, because only the airbag manufacturer understands how airbags work and how difficult testing airbags in genuine crash conditions really is. But in this analogy, the problem is that the airbag manufacturer is saying that the airbags will protect you 100% of the time (that’s my translation of what the VB100 award says), when in real life people are still getting injured despite the airbags. Similarly, people with AV products are still getting infected, and VirusTotal clearly shows that AV products DO NOT work against 100% of viruses. Airbags (like AV) are essential; but they are not as good as they tell us (if they say that they guarantee 100% protection during a crash, when I can demonstrate this is not the case).
I’m afraid I did not explain myself properly. In my analogy I was talking about car manufacturers, not airbag manufacturers. Users don’t buy airbags, they buy cars; and car manufacturers go to third-party companies to obtain safety certifications. And then a company that claims it can improve safety by building better roads says that airbags are not good enough. That company should be talking about the safety of the whole car, not just airbags; and even though the results would look much better from the car manufacturer’s point of view, they would still show that better roads are needed.
Regarding VB100, I think it is a test that measures some of the features in the products, much as Cyveillance is doing. I think the best test is one that really reproduces what happens in real life: the threat is sent to the user, the user receives it, it tries to enter and install itself on the system, and so on. In these kinds of tests it is almost impossible for any vendor to obtain a 100% result, but they show what happens in real life. No more, no less.
I would like everybody to sleep easy tonight. Our esteemed government has published its National Security Strategy.
…the National Security Council has overseen the development of a proper National Security Strategy, for the first time in this country’s history.
Right from the outset, this promises to be something momentous. And to a degree, it is. ‘Cyber security’ is elevated to the second highest threat facing the United Kingdom, second only to terrorism, for the next five years. This is new. This is exciting for anyone involved in cyber security. But I’m afraid to say, that’s just about as far as it goes.
Oh yes, there is a short passage describing the cyber threat (about half the size and quality of a sixth form homework essay); but just about nothing on what to do. Except, perhaps, for one sentence:
For example, business and government will need to work much more closely together to strengthen our defence against cyber attack and to prepare for the worst…
That’s a bit worrying. Whenever government says that business must work with government, it really means that business must pay for what government wants. But business never pays for anything – it just passes the cost on to the consumer. And that’s you and me. So here’s my interpretation of the National (cyber) Security Strategy: as soon as we can get away with it, we’ll implement something like Scott Charney’s Internet Health Certificate and make the ISPs pay for it.
In fairness, the document does eventually specify the new strategy. It concludes that the National Security Council will
…develop a transformative programme for cyber security, which addresses threats from states, criminals and terrorists; and seizes the opportunities which cyber space provides for our future prosperity and for advancing our security interests
And what this tells us is that the spin doctors are still running the asylum. What security man would ever say ‘develop a transformative programme’, or ‘seize the opportunities’? Meaningless spin. (Incidentally, the National Security Strategy is copy protected. It’s obviously in the national security interest that I have to retype all the quotes I’ve used rather than just select and copy and paste. And look carefully – there’s a £14.75 cover charge on the document, because God forbid that we should encourage people to read it.)
Do you think I exaggerate the absurdity of the UK’s official security stance? If you do, let me ask you a question: did you know that this is national ID fraud prevention week? According to http://www.stop-IDfraud.co.uk,
This awareness drive has been put in place by an expert group of public and private sector partners, including the CIFAS, The Association of Chief Police Officers, The City of London Police, The Metropolitan Police, National fraud Authority, The Identity and Passport Service, The British Retail Consortium, The Federation for Small Business, Fellowes, Callcredit, Experian, Equifax, British Chamber of Commerce and Royal Mail.
Impressive, yes? But if you want to learn how to protect your identity, by downloading the identity fraud protection guide, you have to give these people your personal information. Why? Why do they need that? Well, they don’t. But it’s not actually any of this ‘expert group of public and private sector partners’ such as ‘CIFAS, The Association of Chief Police Officers, The City of London Police, The Metropolitan Police, National fraud Authority, The Identity and Passport Service’ and so on that are getting the information. The site was registered and seems to be run by none of these – but by a PR company (actually the PR company for Fellowes, who you will find mid-way in the list of ‘experts’). Fellowes is a paper-shredder manufacturer.
And wait a minute – there’s another site: http://www.national-identity-fraud-prevention-week.co.uk/. This one is a front for ABT Office Supplies (which, incidentally, stocks Fellowes’ products, and majors on paper shredders). So the clear implication is that the national ID fraud prevention week is a fraud operating for the benefit of paper shredders. And if we combine this fraud with the National Security No-Strategy we can come to only one conclusion: our new national security strategy is run by spin doctors for the benefit of government and business.
So sleep easy – we’re in good hands.
I was talking to Amit Klein, the CTO of Trusteer, because I wanted a better understanding of how Rapport works. Rapport is Trusteer’s anti-banking trojan product. It’s free if your bank is a participating bank. The product prevents online bank transaction fraud; so it saves the banks money. If it saves the banks money, it is only fair that they pay for it. You get it free.
It works by protecting your browser. It recognises worrying behaviour and stops it. So, if I’m infected with Zeus (or some other bank trojan) and start an online bank transaction, Rapport sees Zeus trying to interfere and steps in to protect me.
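Trusteer has not published Rapport’s internals, but the general pattern described – intercept actions against a protected resource and block the ones that match known-bad behaviour – can be sketched in a few lines. Every name here is invented for illustration:

```python
# Toy policy monitor illustrating behaviour-based blocking.
# The protected resource and "known-bad action" list are hypothetical.
PROTECTED = {"banking-session"}
KNOWN_BAD_ACTIONS = {"inject_code", "hook_keyboard", "modify_form"}

def request(actor: str, action: str, target: str) -> str:
    """Allow an action unless it is a known-bad behaviour aimed at a
    protected resource, in which case refuse it and report the actor."""
    if target in PROTECTED and action in KNOWN_BAD_ACTIONS:
        return f"BLOCKED: {actor} tried to {action} on {target}"
    return "allowed"

print(request("browser", "render_page", "banking-session"))  # allowed
print(request("zeus", "modify_form", "banking-session"))
# BLOCKED: zeus tried to modify_form on banking-session
```

Note that, exactly as in the conversation below, a monitor like this only sees actions against the protected target – malware quietly reading credentials from elsewhere on the disk never triggers it.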
Ah, I said. OK, you can protect my browser/bank interaction; but what if I’ve got a completely separate root-kit infection that doesn’t try to interfere with the transaction, just tries to steal my credentials?
Amit was very polite. He said, “We will protect your credentials when you’re online to your bank. But if you leave them lying around in some file on your computer…”
What he was saying was that security software can do what it is designed to do: but no software can protect against user stupidity. And that’s something we sometimes forget. We can install all the security we want: it won’t work if we forget to teach our users about security awareness.