The granddaddy of security software is the venerable anti-virus. But the mother of all attacks is the zero-day targeted exploit. Vendors of new products specifically designed to protect against the latter continuously insinuate that anti-virus no longer works — ergo you need to buy their shiny new product to stay safe.
These vendors point out that the attacker merely needs to modify the malware, changing its signature, to create an instant pseudo-0-day that defeats AV signature engines. And to prove their point, they will submit the pseudo or actual 0-day to VirusTotal to demonstrate that few, if any, AV products actually detect it.
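The evasion the vendors describe can be sketched in a few lines. This is a toy illustration, assuming a naive hash-based signature database; real AV signatures are considerably more sophisticated than a single file hash, which is precisely why a one-byte change is enough to defeat this naive version:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for a known malware binary, and a naive hash-based signature database.
original = b"MZ\x90\x00" + b"payload-bytes" * 10
sig_db = {sha256(original)}

# The attacker flips a single byte; the program's behaviour could be entirely unchanged.
mutated = bytearray(original)
mutated[-1] ^= 0xFF

print(sha256(original) in sig_db)        # the known sample is detected
print(sha256(bytes(mutated)) in sig_db)  # the one-byte variant evades the hash check
```

Any change to the file, however trivial, yields a completely different digest, so a purely hash-matching engine sees the variant as an unknown file.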
This gives a false impression. VT essentially runs the sample through each product’s static signature engine — which won’t detect 0-days. But the AV industry long ago accepted that signatures alone are not enough, and built additional behavioural defences into its products. These are not generally exercised by VT.
So when a VirusTotal report says a particular sample was not detected by your own AV software, that doesn’t necessarily mean that it would not be detected by the AV product’s behavioural methods in situ on your PC. It’s a difficult thing to prove, and it has left the anti-virus industry disadvantaged against the arguments of the newer products.
Now F-Secure has tested it. There is a new 0-day MS Word/RTF vulnerability that Microsoft is expected to fix in this week’s Patch Tuesday updates. For the moment, it remains a 0-day.
“Now that we got our hands on a sample of the latest Word zero-day exploit (CVE-2014-1761),” reported Timo Hirvonen, senior researcher at F-Secure, yesterday, “we can finally address a frequently asked question: does F-Secure protect against this threat? To find out the answer, I opened the exploit on a system protected with F-Secure Internet Security 2014, and here is the result:
I would suggest that F-Secure is not the only AV software able to detect the worrying behaviour, if not the signature, of the 0-day without ever seeing the malware.
The reality is that no software can guarantee to stop all malware; but anti-virus software remains the bedrock of good security. Adding to it is prudent; replacing it is foolhardy.
“Know your enemy,” says Sun Tzu in the Art of War. Simplistically speaking, in the current cyberwar the enemy comprises the bots, the trojans, the worms and viruses and all the other malware that seek to breach our cyber defences. The clear implication is a need to monitor and understand these threats. But the threats are continuously evolving, changing and increasing; so the solution would appear to be ‘continuous threat monitoring’.
There are many ways this can be done: by signing up to the ‘alerts’ RSS feeds almost always provided by the major systems and software providers; by monitoring the national CERT pages and in particular the one hosted by Carnegie Mellon university in the USA; or by subscribing to one or more of the alert providers such as Secunia. An alternative or additional approach is to monitor the blogs of leading security researchers, such as David Harley (ESET), Luis Corrons (PandaLabs), Rik Ferguson (Trend Micro) and Graham Cluley (Sophos); all of whom provide insight and commentary on the current threat environment.
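A minimal sketch of the alerts-feed approach, using only the standard library: parse an RSS 2.0 alerts feed into (title, link) pairs that could then be filtered or forwarded. The feed content and URLs below are invented for illustration; in practice you would fetch a real feed (such as a CERT advisory feed) with `urllib.request` and run the parse on a schedule:

```python
import xml.etree.ElementTree as ET

# Invented sample feed; a real deployment would download the XML from an alerts URL.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example security alerts</title>
  <item><title>Advisory: Word RTF 0-day</title>
        <link>https://example.org/alert/1</link></item>
  <item><title>Advisory: LNK shortcut flaw</title>
        <link>https://example.org/alert/2</link></item>
</channel></rss>"""

def parse_alerts(feed_xml: str):
    """Return (title, link) pairs from an RSS 2.0 alerts feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in parse_alerts(SAMPLE_FEED):
    print(title, "->", link)
```

The same loop could match titles against a watchlist of products the business actually runs, which is where the risk-management filtering discussed below comes in.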
But we said at the beginning: ‘simplistically speaking’. The enemy isn’t just the threats: it includes time, your time to do all of this. Amanda Finch, general manager at the Institute of Information Security Professionals, suggests a risk management approach to ease the burden. Continuous threat management should depend on the business and the risks it faces. “For example,” she says, “in manufacturing this is probably not necessary or cost-effective; but for utilities or banks, or high security situations, it may be. With the sophistication of the cyber threat and the techniques, methods and tools available to attackers, the days of retrospectively checking configuration, incident and event logs is wholly inadequate for most business, certainly where monetary value, IP, or sensitive personal information is involved.”
But still this is too simplistic. The enemy isn’t merely the malware, or the time to monitor all the threats – the real enemies are the vulnerabilities that allow the malware into the system; and the user. Microsoft research shows that the vast majority of breaches depend upon the user doing something he or she should not; and that a statistically insignificant number of breaches are caused by the infamous 0-day threat. Further research shows that the bulk of detected exploit threats appear after the vulnerability is patched by the vendor.
Stuart Aston, chief security advisor at Microsoft, takes up the story. “You have to start from a thorough understanding of the risk. If you understand your risk, it will help you understand how to monitor the threats. For example, a large percentage of breaches come from end users actively doing something they shouldn’t. Similarly, 99% of breaches occur via patched vulnerabilities. It follows that improving your users’ security awareness together with religious patching will defend against the majority of security attacks. This, coupled with a good defence in depth, is the best way to not merely monitor threats, but to defeat them.” In other words, it is an effective use of time to let the vendors and security researchers monitor and alleviate the threats, provided the company then acts on the findings, and patches its software.
Continuous threat monitoring, then, should be a combination of watching the industry, using risk management techniques to concentrate on the most pertinent areas and, perhaps most importantly, keeping all systems and software fully upgraded and patched.
Understanding the threat
If we look at security today there is one conclusion we simply cannot avoid: it is not working. Despite the $20bn invested in IT security in 2010 (FireEye Advanced Threat Report – 1H 2011), the cost of cyber crime to the UK economy alone is estimated to be £27bn per annum (The Cost of Cyber Crime: a Detica report in partnership with the Office of Cyber Security and Information Security in the Cabinet Office). We need to understand what is going wrong in order to reverse this. And to understand that, we need to examine the evolving threat landscape.
It is tempting to blame the emergence of the advanced persistent threat (APT), a highly targeted, sophisticated attack aimed at large corporates. Hardly a week passes without news of a new APT attack on a household name: Google, Sony, Nintendo, RSA, Mitsubishi. And it is easy to support this idea with current statistics. FireEye divides current threats into two primary categories: ‘wide and shallow’, and ‘narrow but deep’. The first is the traditional approach: a wide net is thrown to catch as many targets as possible; but the actual loss is relatively small. The second is the specifically aimed attack on an individual organization that goes deeper and steals more – the APT.
It’s a description that is recognised by Detica’s Henry Harrison. “Of the £27bn annual loss to the UK economy,” he comments, “£17bn comes from theft of intellectual property and espionage – the typical narrow but deep targets of APT attacks.”
But while we must be aware of the threat of APT, we should not be diverted by it. The exploits and methodologies used are not new. What is new is the manner in which they are combined; the targets at which they are aimed; and, it has to be said, the almost military intelligence and precision with which they are controlled. (It’s worth noting that ‘APT’ is a military term first coined by the US Air Force.)
Successful security should stop APT just as much as it should stop common-or-garden malware. Consider the banking trojan Zeus. Worldwide, RSA’s security and fraud expert Uri Rivner told me, “there are some five million PCs infected with Zeus”. Clearly our security defences stop neither wide and shallow nor narrow but deep attacks; and we need to understand the reason.
One clue can be found in PricewaterhouseCoopers’ 2012 Global State of Information Security Survey. “A clear majority of [9,600 CEOs, CFOs, CIOs, CISOs, CSOs worldwide],” it states, “are confident that their organization’s information security activities are effective.” This is despite the unambiguous empirical evidence to the contrary.
The problem is that we are stuck in an old security paradigm when the paradigm itself is changing. We grew up with our servers in the computer room and our users in the same building. The concept of security was simple: we put a barrier around our IT infrastructure to keep the bad things on the outside and the good things on the inside. Since the good things were all in one building it was conceptually simple. And since the technology to achieve this barrier is mature and effective – firewalls, anti-malware, intrusion prevention, content filters – and since we have all installed this technology, we believe we are secure.
It is a false sense of security that leaves us terribly exposed. Computing is no longer that simple. Cloud computing means that our data could be anywhere. Mobile computing means that our users could be anywhere. Consumerization means that our access devices could be anything that has internet connectivity. Where now can we effectively place a barrier? It’s not impossible, it’s just different; and we’re not keeping pace. But all of this pales into comparative insignificance in the face of a major new weakness: us. The rise of social networking combined with the consumerization of devices and mobile computing means that we are as likely to socialise at work as we are to work at home. There is no longer even a virtual boundary between work and home.
“There has been a seismic shift in the threat landscape,” explains Rivner. “The criminals are no longer attacking the IT infrastructure. They are attacking the users.” It is social networking that provides the information that allows the criminal to bypass our security defences and get into our networks via our users. We have become nonchalant over the amount of personal information we effectively broadcast to all and sundry: our likes, our dislikes, what we do, what we want, where we are, where we’re going…
Armed with this information and basic social engineering skills it is easy for the criminal to trick us into doing something we shouldn’t, like going to a compromised website or opening a poisoned attachment. The malware itself stays ahead of us through rapid, automatic changes designed to defeat signature-based defences; and it succeeds. FireEye points out that 90% of malicious executables and malicious domains change in just a few hours, and that today’s criminals are almost 100% successful at breaking into our networks.
The criminal no longer seeks to find a way through our security defences; social engineering has shown him a way round them. The difference with APT is that the criminal will now try to hide his presence and will take his time to find and steal what he wants. Unless we change our approach, and adapt our security to the changing threat landscape, the cost of crime will continue to escalate.
Tackling the threat
As things stand today, any company targeted by APT or simple spear phishing will almost certainly succumb. But it doesn’t have to be that way. There are things we can do. Absolutely central to this is continuous staff security awareness training to defeat that initial social engineering. It would be best not to do this yourself – use an expert to test both your defences and your staff. “First,” says David Hobson, the sales director of Global Secure Systems, “we test/audit your security systems and bring them up to speed. Then we’ll test your staff – and bring them up to speed.”
But that’s not enough; security awareness will not prevent all people-hacking. This summer RSA and TechAmerica hosted an Advanced Persistent Threats Summit in Washington, D.C. One of the takeaways is this: Organizations should plan and act as though they have already been breached (APT Summit Findings, RSA). Statistically, you probably have. So if existing defences aren’t working, go back to basics and start again. Security is not an end in itself: it is the risk mitigation aspect of risk management. Use risk management techniques to understand what is of most value. David Hobson uses an analogy with medieval castles. “You take your crown jewels and keep them separate in the best defended part of your castle, in the Keep.”
One method of segregating your networks is to colocate, wholly or partially, with a specialist data centre provider. It’s a way of providing greater physical security for your servers than you could probably do alone. “We use 24-hour manned security and biometric authentication (palm readers) for access to our data centres and to individual client suites, cages or racks,” explains Brian Packer of provider BIS.
There’s a second implication from the APT Summit: if you are already breached, it would be good to know about it as soon as possible. You need to shine a light inside your network, to see what is happening, to look out for anomalies and recognise any intrusion before any data loss. There are several new and very advanced security products that can help you here from companies like Detica and FireEye.
Rivner believes that virtualization can also help. “A virtual desktop infrastructure (VDI) could prevent malware getting onto the desktop and from there to the server; and it certainly makes patching and upgrading the entire infrastructure an easy task.” Bear in mind that the Google Aurora hack would not have succeeded if the target were not still using an old and outdated version of Internet Explorer. ‘Patch your software’ should be a way of life.
But virtualization is only as good as its implementation and your understanding of its components. “An APT or any other security threat,” explains Mike Atkins of Orange IS Security Solutions, “is likely to focus on the weaknesses that can be found in the target systems and processes, and then seek to leverage 0-hour exploits. The key to protecting a virtualised environment is to similarly focus on the weaknesses of the system and then mitigate as fully as possible any attacker’s ability to leverage those weaknesses.”
There is, however, one weakness in all of these approaches. Necessary and good though they be, they effectively use the same old security paradigm: wait for, recognise and respond to an attack. And that might be too late. In this new security paradigm we need to accept that our attackers are more sophisticated, better resourced and organized, and more patient and persistent than are we. “We need,” says RSA’s Uri Rivner, “global information sharing. It will be difficult, coping with the different privacy requirements in multiple jurisdictions, but it can be done. The banks are already doing it. When we all do it, we will have the necessary intelligence to cope with today’s evolving threat landscape.”
A few days ago I wrote about the LNK 0-day Windows vulnerability:
So forget about workarounds and concentrate on keeping your AV defences up to date – and hope that your AV supplier gets and stays on top of the problem at least until Microsoft patches the problem.
Some of the issues around the LNK zero-day vulnerability
Sophos has done just that by releasing a free tool to protect users.
So far we have seen the Stuxnet and Dulkis worms, as well as the Chymin Trojan horse, exploiting the shortcut vulnerability to help them spread and infect computer systems. Stuxnet made the headlines because it targeted the Siemens SCADA systems that look after critical infrastructure like power plants – but there’s a warning for all computer users here. Details of how to exploit the security hole are now published on the web, meaning it is child’s play for other hackers to take advantage and create attacks.
No-one knows when Microsoft will roll out a proper patch for this critical security hole, and its current workaround leaves systems almost unworkable with broken-looking icons. The free tool from Sophos can be run alongside any existing anti-virus software, providing generic protection against the exploit. Unlike Microsoft’s workaround, it doesn’t blank out all the shortcuts on your Windows Start Menu – meaning your life – and that of your users – will be less stressful.
Graham Cluley, senior technology consultant at Sophos
Much of the security industry is talking about the .LNK zero-day vulnerability currently affecting all Windows platforms. There are several issues here. For a start, you don’t need to click anything to get infected: all that is necessary is the presence of a malformed Windows shortcut file. As Rahul Kashyap of McAfee Avert Labs comments:
This flaw can be triggered when explorer.exe (Windows Explorer) or iexplorer.exe (Internet Explorer) tries to render a malformed .LNK file that points to a malicious executable. The user need not double click on the .LNK file to trigger the vulnerability; just opening the folder containing the malicious shortcut is enough to get infected.
Microsoft 0day: Malformed Shortcut Vulnerability
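As a purely illustrative defensive sketch (not the method used by any of the vendors mentioned here): a script can enumerate the shortcut files in a folder without asking the shell to render their icons, which is the step that triggers the flaw. Note that this only lists shortcuts; deciding whether a given .LNK is actually malformed would require parsing the binary shortcut format itself:

```python
import os
import tempfile

def shortcut_files(folder: str):
    """List .lnk files in a folder without asking the shell to render them."""
    return sorted(entry.name for entry in os.scandir(folder)
                  if entry.is_file() and entry.name.lower().endswith(".lnk"))

# Demonstration in a throwaway directory with one harmless file and one shortcut.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "readme.txt"), "w").close()
    open(os.path.join(d, "suspicious.LNK"), "w").close()
    print(shortcut_files(d))  # ['suspicious.LNK']
```

The point of the sketch is that plain directory enumeration is safe; it is Explorer’s icon rendering of the shortcut, not the mere presence of the file on disk, that executes the exploit.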
A second issue is that since the problem is basically a design flaw in Windows itself, there is no easy workaround. If you open a folder containing one of these malformed shortcut files – bang – you’re infected. But how do you know if a malformed .LNK is in the folder until you open it? Here’s the best workaround: switch off your computer.
So forget about workarounds and concentrate on keeping your AV defences up to date – and hope that your AV supplier gets and stays on top of the problem at least until Microsoft patches the problem. But this is the next issue: if you’re an XP SP2 user, there won’t be a patch. Eddy Willems thinks that this vulnerability will finally kill off XP SP2:
Take it from me: In the long end this lnk problem will kill MS Win2000 and MS Windows XP SP2 earlier as expected as this OS’ses will have no support or critical update anymore except if MS decides to make an exception, however I doubt it!
Also the number of Windows XP SP2 users is still very high… and do you really think that they care or are aware of their ‘not’ supported OS. Most of them don’t even know that they are using Windows XP, ‘they use Windows’.
The Microsoft LNK / USB worm / rootkit ‘issue’ will kill WIN XP SP2 and WIN2000 earlier…
The issue that intrigues me most, however, is the one raised by ESET’s Randy Abrams. In effect, we owe the perpetrator of the worm carrying this vulnerability a debt of gratitude. This is a vulnerability to kill for (and possibly not just figuratively). It is the sort of attack potential that governments and secret services and organised crime would pay a lot of money for; and they would then guard it jealously.
I would imagine there is at least one intelligence person somewhere in the world with the singular goal of finding and executing whoever used the vulnerability as they did. It isn’t an affinity for SCADA systems that has them pissed off, it is the waste of an NSA grade exploit. This was a very, very potent weapon. In the hands of a skilled professional an exploit of this grade would do something like install remote access software on a target PC and then eliminate all traces of its existence. Think spy novel… Malicious files with the LNK vulnerability are left on a USB drive for the target to put in their PC. Immediately an undetected bot is installed with a rootkit and the lnk files are wiped from the drive. Why? Because you don’t want anyone to know that you can infect their computer just by having them look at the contents of the USB drive. By coupling this exploit with self-replication, a worm, the exploit is all over the world and certain to be discovered. Whoever is behind win32Stuxnet did not even realize what they actually had and of what value it really was. Well, there is another explanation. By making the malware spread all over the place they could obscure a specific target. Perhaps the attacker was going after one specific target and everything else was collateral damage. Possible, but still, you don’t waste an exploit this valuable in that manner.
It Wasn’t an Army
If you think about it, we don’t know that said secret services did not already know about, and were using, this vulnerability. And we absolutely don’t know how many other unknown vulnerabilities they are secretly, and protectively, using right now.
From a journalist we expect facts. We use those facts to inform our opinions and define how we interact with the world. From a blogger we expect entertainment; a voyeuristic view into somebody else’s opinions. We tend not to define our day based on the blogs we read.
Both of these statements are generalisations. We allow our journalists to have some opinions and we expect bloggers to justify theirs with some facts. Nevertheless it is a broadly accurate distinction. And problems can arise when bloggers stray into journalism; and to a lesser degree when journalists become bloggers.
There is an example current today. On 10 June 2010, Tavis Ormandy, an English-born security researcher based in Switzerland, disclosed a hitherto unknown vulnerability in Windows XP and Server 2003. He waited five days from the time he reported the vulnerability until the time he invoked full disclosure. Those are the basic facts. We’re going to have a look at how those facts have been treated by three separate bloggers:
- Brian Krebs, one time journalist with the Wall Street Journal, now mainstream blogger
- Graham Cluley, award-winning security blogger
- Roman Kenke, blogger
Last week, Google researcher Tavis Ormandy disclosed the details of a flaw in the Microsoft Help & Support Center on Windows XP and Server 2003 systems that he showed could be used to remotely compromise affected systems. Today, experts at security firm Sophos reported that they’re seeing the first malicious and/or hacked sites beginning to exploit the bug.
These are facts – blogged by a journalist. I have a slight concern over tagging Ormandy as a ‘Google researcher’ because it is not relevant to the facts – but nevertheless of interest to the reader.
A Google security engineer, Tavis Ormandy, sent details of a zero-day vulnerability he had discovered in Windows XP to Microsoft on Saturday June 5th… In the early hours of Thursday (June 10th), just five days after informing Microsoft of the security hole, the Google researcher decided to make his findings public – posting details of the vulnerability and proof-of-concept code to the Full Disclosure mailing list.
There are facts included here; but note the concentration on ‘Google’. Note also the tone (which is clearly very negative towards Ormandy), and the semantically less stringent use of language. The implication is that Ormandy woke up on Thursday morning and decided on the spur of the moment to release his findings. I see no evidence for this; and strongly suspect that the events of the previous five days were implicit in Thursday’s actions.
Tavis Ormandy: Asshole at work… Just some weeks ago this so called security expert (and Google employee) disclosed security problems in Java Webstart, today he disclosed security problem in Windows Help. The problem is not so much that he discloses security issues, but the way he does it. The pattern seems to be similar in both cases. He notifies the company of the security issue, giving them some time (in Java’s case it was at least a month) and then goes on to publish the full disclosure just a couple of days later for idiotic reasons.
This is a blogger. It is stronger on personal opinions and emotive language than facts; and some of these opinions are presented as facts (‘so called security expert’; well, Tavis Ormandy genuinely is a security expert). The language is contradictory: ‘The pattern seems to be similar’ when one is disclosed ‘a couple of days later’ while ‘in Java’s case it was at least a month’.
So what should we make of these three different treatments of the Ormandy story? I’m going to take the Kenke publication out of the argument because it is a blog and we know it is a blog. We’re not looking for facts; we’re looking either to enjoy the entry or to reinforce or upset our existing prejudices. It is true to its genre.
The Krebs story is a journalist at work. He states the facts without imposing his own opinions. If I want to know what happened, I would read Krebs.
The problem comes with Cluley. Don’t get me wrong; I read and enjoy Graham Cluley’s blog. But here is a blogger who has been so successful that he is beginning to be treated as a journalist. People read Graham Cluley’s blog for facts. He has become a journalist. This is not his fault – it is the outcome of his own success.
But journalists have different responsibilities. Opinions must be justified, and counter opinions given space. Emotive language should be excluded.
Here’s an example. Twice in this extract Cluley links Ormandy to Google. The reader has to assume that this is relevant. So what is this relevance? A reasonable inference is that Cluley is associating Google with outing Microsoft. But a journalist cannot make such suggestions without evidence; and nowhere, in these or any other accounts, have I come across any proof that Google is at all involved.
So, first of all I apologise to these three authors. I have used their writing somewhat out of context to illustrate my own concern: when does a blogger become a journalist? There’s no easy answer. Krebs shows that a journalist is always a journalist; Kenke shows that a blogger is always a blogger. The difficulty comes with Graham Cluley: a blogger who is so successful that he is treated as a journalist; a source of facts. When this happens, the honorary journalist is honour-bound to relinquish his opinions and deal in facts. Or at least make it very clear that his writing is his own prejudiced (as all opinions are by definition) opinions. And as readers it is incumbent upon us to be aware of whether we are reading opinions or facts.