Is the anti-virus industry in bed with the NSA – why do CIPAV, FinFisher and DaVinci still defeat AV?
September 2013 is the month in which the extent of direct government hacking – as opposed to traffic surveillance – became known.
4 September – WikiLeaks releases Spy Files 3, demonstrating increasing use of third-party hacking tools, such as FinFisher.
6 September – Bruce Schneier writes in the Guardian
The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you’re running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.
7 September – details of an NSA MITM operation against Google users in Brazil revealed.
12 September – FBI admits that it hacked Freedom Hosting. The implication is that it inserted the malware that monitored visitors; the near certainty is that the malware was CIPAV.
FinFisher and CIPAV stand out as government-operated spyware; but there are others: RCS (DaVinci), Bundestrojaner and more – and, of course, Stuxnet and Flame. We’ve known about them for a long time: see
- CIPAV: FBI, CIPAV spyware, and the anti-virus companies (this site, May 2011)
- RCS: Hacking Team’s RCS: hype or horror; fear or FUD? (this site, Nov 2011)
- FinFisher: Use of FinFisher spy kit in Bahrain exposed (Infosecurity Mag, August 2012)
- Bundestrojaner: Chaos Computer Club warns on “German government” communications trojan (Infosecurity Mag, Oct 2011)
This raises a major question: if we’ve known about this malware for so long, how come it can still be used? Why doesn’t anti-malware software stop it?
There are two possible reasons that we’ll explore:
- the AV industry, like so many others, is in bed with the NSA
- the AV industry is not as good as the ‘stops 100% of known malware’ claims that it makes – or put another way, virus writers are generally one step ahead of the AV industry
In bed with the NSA
This has been vehemently denied by every AV company I have spoken to (see the articles on CIPAV and RCS for examples). Bruce Schneier doesn’t believe it either:
I actually believe that AV is less likely to be compromised, because there are different companies in mutually antagonistic countries competing with each other in the marketplace. While the U.S. might be able to convince Symantec to ignore its secret malware, they wouldn’t be able to convince the Russian company Kaspersky to do the same. And likewise, Kaspersky might be convinced to ignore Russian malware but Symantec would not. These differences are likely to show up in product comparisons, which gives both companies an incentive to be honest. But I don’t know.
Explaining the latest NSA revelations – Q&A with internet privacy experts
And yet the possibility lingers. When Flame was ‘discovered’, Mikko Hypponen issued a mea culpa for the industry. Admitting that F-Secure had Flame samples on record for two years, he said,
Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.
What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.
It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild…
Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet
Forget the ‘hand on heart’ for a moment, and consider: those are the two major government-sponsored malware samples, known about and ignored by multiple AV companies for several years. Coincidence? Maybe. But to echo Schneier’s last sentence, I don’t know.
Malware writers are one step ahead of the AV industry
If you listen to the AV marketers, this cannot be true. Every month we hear claims that AV products stop 99.9% to 100% of all known viruses (remember that they ‘knew’ about Stuxnet and Flame, but did nothing). I’ve written on my dismay at this sort of advertising elsewhere (for example, Anti Malware Testing Standards Organization: a dissenting view).
However, if you listen to the foot-soldier researchers – and sometimes even higher – within the individual companies, you realise that it is absolutely, inherently, and unavoidably true. Luis Corrons, the technical director at PandaLabs, puts it like this:
The effectiveness of any malware sample is directly proportional to the resources spent. When we talk about targeted attacks (and [CIPAV and FinFisher] are developed to perform targeted attacks) the most important part is the ability to be undetected. Bypassing signature detection is trivial, although it is almost useless too, as most anti-malware programs have several different layers of protection which do not rely on signatures.
The attackers probably know which security solution(s) the potential victim is using. Then it is as ‘simple’ as replicating the same scenario (operating system, security solution, etc.) and verifying that the malware is not being detected. As soon as it is flagged they will change it to avoid detection, until they have the final version.
Once they are done, they will infect the victim and will be spying on / stealing information from them until they are detected. This could be a matter of days, months or even years.
Claudio Guarnieri of Rapid7 said something very similar:
Since FinFisher, just as any other commercial spyware, is a very targeted and sophisticated (besides expensive) malware, it’s part of Gamma’s development lifecycle to make sure that they tweaked all the different components to avoid antiviruses before shipping the new FinFisher out to the customers.
The developers likely have their own internal systems to do these testings: think of something as a private VirusTotal. Every time they develop a new feature or a new release, they’ll test it against as many antiviruses as possible and if something gets detected, they debug and try to understand why and find a way around it.
The ‘problem’ with this approach is that they rely on the AV industry not knowing and not having access to their malware: whenever that happens AV vendors react pretty effectively and in fact if you look at FinFisher samples discovered 1 year ago they are now largely detected by most antivirus products.
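The rewrite-and-retest loop that both Corrons and Guarnieri describe can be sketched in a few lines of Python. Everything here is hypothetical – the ‘engines’ are just hash sets and the ‘mutation’ is a trivial re-versioning – but the control flow is the point: scan, tweak, repeat until no engine flags the build.

```python
import hashlib

# Hypothetical signature "engines": each knows a set of payload hashes.
KNOWN_BAD_HASHES = {
    "engine_a": {hashlib.sha256(b"dropper-v1").hexdigest()},
    "engine_b": {hashlib.sha256(b"dropper-v1").hexdigest(),
                 hashlib.sha256(b"dropper-v2").hexdigest()},
}

def is_detected(payload: bytes) -> bool:
    """True if any engine's signature set matches the payload's hash."""
    digest = hashlib.sha256(payload).hexdigest()
    return any(digest in sigs for sigs in KNOWN_BAD_HASHES.values())

def evade(payload: bytes, max_rounds: int = 10) -> bytes:
    """Mutate the payload until no engine flags it (a private-VirusTotal loop)."""
    for round_no in range(max_rounds):
        if not is_detected(payload):
            return payload  # final, undetected build ships to the target
        # Stand-in for real repacking/obfuscation: any change to the bytes
        # produces a new hash that the signature sets have never seen.
        payload = payload + f"-r{round_no}".encode()
    raise RuntimeError("could not evade within budget")

final_build = evade(b"dropper-v1")
print(is_detected(final_build))  # False: every engine now misses it
```

One trivial mutation defeats every hash-based signature at once, which is why, as Guarnieri notes, detection only catches up after the AV industry obtains a sample.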
Is the AV industry in bed with the NSA? The simple fact is that we just do not know. The industry itself denies it – but, well, it would, wouldn’t it? Statistically, since almost every other aspect of the security industry collaborates with or has been subverted by the NSA, my suspicion is that it is. At the very least, I suspect it engages in ‘tacit connivance’.
Are malware developers one step ahead of the AV industry? That depends. As Corrons says, it depends on the resources available to the bad guys, whether that’s NSA, FBI, GCHQ or the Russian Business Network. Well-resourced bad guys will always get in. As Schneier puts it, “if the NSA wants in to your computer, it’s in. Period.” But that probably applies to all governments and all seriously organized criminal gangs engaged in targeted hacking.
But one final comment: nothing said here should be taken to suggest that we don’t need the AV industry. It may not be able to stop the NSA, but it can and does stop a million script kiddie wannabe hackers every day.
I wrote about the March 8 deadline for remaining DNSChanger victims to get clean or lose their internet in Infosecurity Magazine: DNSChanger poses a new threat to its victims.
But I had two late comments from anti-virus people currently in the States and separated by the trans-Atlantic time difference. They both echo the comment from Graham Cluley of Sophos that “if this is the only way to wake the affected users into sorting out the problem, so be it.”
Panda Labs’ Luis Corrons used remarkably similar language. “At least this will make affected people react and secure their computers,” he told me.
And ESET’s David Harley said, “Pragmatically, I don’t have a problem with this: law enforcement doesn’t have a specific responsibility for maintaining service for infected machines.”
But reading between the lines, I suspect that any anger is really directed not at the infected users being apathetic with their own security, but that the nature of the infection makes further infection likely. Such users are being apathetic with other users’ security; and that’s really not on.
“The UK will become the best country in the world for e-commerce, the prime minister has promised.” His promise includes “a raft of measures to boost internet use in the UK, including a £1bn drive to get all government services online [within three years] and £15m to help businesses make the most of the web.”
This is not from the new UK cyber security strategy published by the Cabinet Office last week. It came from Tony Blair in 2002. And it didn’t happen.
Last week, the Cabinet Office explained that “Our vision is for the UK in 2015 to derive huge economic and social value from a vibrant, resilient and secure cyberspace, where our actions, guided by our core values of liberty, fairness, transparency and the rule of law, enhance prosperity, national security and a strong society.”
Cameron is perhaps less ambitious than Blair, allows more time (four rather than three years) and is focused on security. But is the end any more achievable?
I doubt it – and offer a few observations. Firstly, by far the majority of security companies and their experts have openly welcomed and praised this report. They have no option. The power of government purchasing makes it difficult for any business to openly criticise government. Indeed, the report acknowledges this lever:
To ensure smaller companies can play their part as drivers of new ideas and innovation we will bring forward proposals as part of the Growth Review to help small and medium sized enterprises fully access the value of public procurement.
However, regardless of what they say in public, many of these security experts have serious doubts. One, whose company statement had him praising the initiative, privately mailed me worrying about how many different government departments, quangos, committees, off-shoots and different law enforcement and intelligence agencies are involved in this strategy. It is always the joints that provide the weaknesses, and this strategy has many joints.
The second observation – which, to his credit, has also been highlighted by Amichai Shulman, CTO and co-founder of Imperva – is that there is no emphasis on protecting the individual.
The strategy has given only a few insights on how government is going to help businesses and individuals protect themselves. In fact, it has taken the traditional approach of a non-intrusive, general advisor, with tasks left to the individuals to do, e.g., keep safe and stay current with the latest threats. As we know, most consumers and enterprises don’t do that, which explains why we’re in the cyber crime mess we live in today.
Amichai Shulman, Imperva
It would appear from the report that the government expects its GetSafeOnline website to be sufficient to protect the public. (You can see my attitude to GetSafeOnline here: UK Internet Security: State of the Nation – The Get Safe Online Report, November 2011.) I have serious doubts about its effectiveness. But I am more concerned there is no mention anywhere in the new cyber security strategy report of an existing CPNI-inaugurated initiative that has the potential to help the individual: the Warning Advice and Reporting Point, or WARP.
The WARP project is stagnating if not contracting. But the concept is still good. Given the right input and impetus WARPs could develop into a form of security-based social networking system, where individuals would share threat experiences between themselves, learn about new threats, and automatically report them back up the line eventually to CPNI. By sharing their information, by warning others, by offering help and advice to colleagues within any particular WARP, the individual security stance becomes much stronger.
This approach could help protect home computers from being recruited into botnets; and fewer active botnets means a more secure national infrastructure. I am worried that if the new strategy isn’t aimed at protecting the NI by protecting individuals, how else is it to do it? Possibly by ramping up co-operation with and control over the ISPs. We will, says the report:
Seek agreement with Internet Service Providers (ISPs) on the support they might offer to internet users to help them identify, address, and protect themselves from malicious activity on their systems.
It is too easy to move from this position to one of getting the ISPs to cut off infected users until they can prove their system is clean.
But it’s not all depressing; I have always known that there are comedians in government. Always leave them laughing when you say goodbye. And this report does just that:
- the Ministry of Justice will develop ‘cyber-tags’ as a form of online ASBO
- police forces are to recruit ‘cyber-specials’ (the internet traffic warden?)
- ‘kitemarks’ to help consumers distinguish between genuinely helpful products and advice and the purveyors of ‘scareware’
- “…partnerships between the public and private sectors to share information on threats, manage cyber incidents, develop trend analysis and build cyber security capability and capacity.”
CESG share intelligence with the private sector? Now that one really made me laugh.
“Our little gnomes in the backroom,” says the excellent Shadowserver in an announcement headed ‘New AV Test Suite’, “have been working feverishly for the last several months to put the finishing touches on our new Anti-Virus backend test systems.”
Malware testing, as we know, is a tricky business. AMTSO, the Anti-malware Testing Standards Organization, has expended much energy and expertise in developing detailed methodologies designed to ensure fair, unbiased and accurate anti-virus tests. But do we get this from Shadowserver? Do we get a new AV comparison source that we can realistically access for accurate unbiased information on the different AV products available to us? Let’s see.
Shadowserver starts off with a fair comment.
No single vendor detects 100%, nor can they ever. To expect complete protection will always be science-fiction.
That being said, it goes on…
…you can see the different statistics of the different vendors in our charts.
Here are a couple of examples.
The one thing that really leaps out here is that Panda apparently misses (shown in green) far more of the test samples than Avira. This is counterintuitive. Panda is a commercial product backed by one of the world’s leading security companies. Avira, which I personally trust sufficiently to use on my XP netbook, is a free product. Shadowserver provides a partial answer:
The longest running issue has been our inability to use Windows based AV applications. We can now handle that, however it is still not what you might buy for home or commercial use. We are utilizing a special command-line-interface version from each of the vendors that we are using. This is not something you can purchase or utilize normally. These are all special versions, but most of them do use the same engines and signatures that the commercial products use.
This is important. Luis Corrons, technical director at PandaLabs, elaborated:
What ShadowServer does is not an antivirus test. As they say, they do not even use commercial products, but special versions. Furthermore, it is static analysis of files they capture. It is a statistic. But the data cannot be used to say “product x detects more than product y” or “product x detects this percentage” as they are not using any of the other security layers used in real products (behavioural analysis/blocking, firewall, URL filtering, etc). The most you can say with this system is product x was able to detect y percent of files using their signatures and heuristics (the oldest antivirus technologies).
The point matters. The AV companies have long recognised that the original signature-database approach cannot match the speed with which new signatures are required for polymorphic virus families, so they have supplemented signature detection with more advanced and sophisticated methodologies.
In our case (Panda) ShadowServer is using an engine which is a few years old (at least 5) and of course is not using the cloud, so I can guarantee that our results are going to be awful. We have been asking SS for years to use a new version, but they were not supporting Windows. Now that they are supporting it, they forgot to mention it, but it’s not a problem as we’ll be sending them a new version with cloud connection. Anyway, even though in that way the results will be way better, or even if we are the number 1 vendor, that doesn’t mean anything, as it is only a static analysis of some files.
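Corrons’ distinction can be illustrated with a toy scanner in Python (samples, hashes and markers are all invented for the example): an exact-hash signature lookup misses even a one-byte repack, which is exactly why commercial products layer heuristic and behavioural checks on top – and why a signatures-only test understates them.

```python
import hashlib

# Blacklist of known builds, keyed by file hash (the oldest AV technique).
SIGNATURE_DB = {hashlib.md5(b"MZ...payload-v1").hexdigest()}

def signature_scan(sample: bytes) -> bool:
    """Exact-match lookup: only catches byte-identical known samples."""
    return hashlib.md5(sample).hexdigest() in SIGNATURE_DB

def heuristic_scan(sample: bytes) -> bool:
    """Crude stand-in for the heuristic/behavioural layers real products add."""
    return b"payload" in sample  # flag a suspicious marker regardless of hash

original = b"MZ...payload-v1"
variant  = original + b"\x00"  # one-byte repack: new hash, same behaviour

print(signature_scan(original), signature_scan(variant))  # True False
print(heuristic_scan(variant))                            # True
```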
One solution would be for Shadowserver to work more closely with AMTSO. Shadowserver is not currently a member of AMTSO. I urge it to join. And I urge AMTSO to waive all membership fees so that this non-profit free service organization can do so. Both parties would benefit enormously. In the meantime, I asked David Harley, a director of AMTSO and research fellow at ESET, for his personal thoughts.
Shadowserver has never been discussed within AMTSO, that I remember… In the past they’ve shied away from suggesting that their statistics are suitable for direct comparison of vendor performance. One of the reasons they cited for that is that their testing has been focused on Linux/gateway versions, and you can’t assume that desktop versions will perform in the same way across a range of products. Including some Windows products will make a difference in that respect, but I can’t say how much, because I don’t know which versions they’re using. Where gateway products are used, it’s unlikely that the whole range of detection techniques are used that an end-point product uses. Detection is often dependent on execution context, certainly where detection depends on some form of dynamic analysis. A gateway product on an OS where the binary can’t execute may not detect what its desktop equivalent does, because the context is inappropriate. On the other hand, the gateway product’s heuristics may be more paranoid. Either way, there’s a possibility for statistical bias…
This isn’t a criticism of Shadowserver, which does some really useful work. I just don’t think I could recommend this as a realistic guide to comparative performance assessment…
Neither Luis nor David is known to shy away from the truth, whether about themselves or their products. But both seem fairly clear: Shadowserver is good; but this service is not yet ready. Shadowserver’s AV test suite will not give a realistic view of different AV products’ actual capabilities. Not yet. It needs more work. I’m certain that will happen. But for the time being at least, don’t use Shadowserver’s statistics to form an opinion on the relative merits of different AV products.
UPDATE from Shadowserver
It is difficult to not compare one vendor to the next due to how we have the data structured on the pages. It would be impossible not to try and derive conclusions from those results. While that is the case, our goal is not to create a real comparison site for everyone to try and compete to see which AV vendor is better than the next…

That is not our purpose…

That being said, our purposes in doing AV testing is simple. We wanted to know what each malware was supposed to be for categorization purposes, and of course just to see what happened. We collect a lot of malware daily and trying to find ways of tying our data together is important.

Because we are volunteers and a non-profit we really enjoy sharing what we find no matter how odd. We even enjoy talking about when we screw something up or when we encounter something exciting. Everything here is for you our public to enjoy, discuss, and even criticize…
Shadowserver, 8 September.
The fundamental principle that underpins all security is the need to stop bad people or processes while allowing good people or processes. So security is about access control; and access control starts with identity. But identity on its own is not enough – we also need to understand purpose. We need to identify the person or process, and decide whether the intent is good or bad.
Consider passports in the physical world. They prove identity, but do not tell us intent: is the intent of that identity to do good or bad? We reinforce identity with lists of known intent: a whitelist of frequent flyers or VIPs whose intent is known to be good, and a blacklist of terrorists and bad people whose intent is known to be bad.
Cyber security is the same: based on identity and intent we maintain whitelists of known good (or at least acceptable) behavior, and blacklists of known bad (or unacceptable) behavior. Security is largely based on how we use these lists. In the main, we either allow what’s on the whitelist and prevent everything else; or we prevent what’s on the blacklist, and allow everything else. We tend to concentrate on one approach or the other: whitelisting or blacklisting.
Keeping our computers clean is a good example. In the beginning the anti-malware industry simply blacklisted the bad things. But now the alternative is gaining traction: whitelisting the good things. We need to know which is best for maximum security.
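The two default postures can be reduced to a few lines of Python (application names invented for the illustration). The asymmetry shows up immediately with anything neither list has seen:

```python
BLACKLIST = {"keylogger.exe", "worm.exe"}   # known bad
WHITELIST = {"word.exe", "excel.exe"}       # known good

def blacklist_allows(app: str) -> bool:
    return app not in BLACKLIST  # default-allow: unknown things run

def whitelist_allows(app: str) -> bool:
    return app in WHITELIST      # default-deny: unknown things are stopped

zero_day = "never-seen-before.exe"
print(blacklist_allows(zero_day))  # True  – the insecure default
print(whitelist_allows(zero_day))  # False – the secure default
```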
In favor of blacklisting
The basis of anti-virus security is a blacklist of all known malware. The technology is based on blacklisting because in the beginning there were very few viruses. A primary advantage of blacklisting is that it is conceptually simple to recognize a few bad things, stop them, and allow everything else.
A second argument in favor of blacklisting is ‘administrative ease’. The maintenance of blacklists is something we can delegate to trusted third parties – in this instance the anti-virus companies. They in turn, particularly with the advent of the internet, can automatically update the blacklist for us. Basically, we don’t have to do anything.
Whitelisting is different: it is difficult to delegate to a third party the decision on which applications we need. “Whitelisting would be the perfect solution if people only have one computer that is never patched and never changed,” explains Dan Power, UK regional manager for anti-spam company Spamina. “Intellectually it makes perfect sense to only allow execution of the files that you know to be good.” But maintaining this whitelist is difficult. “The problem comes when you have to register or re-register every DLL every time you install a new, or patch an existing, application. Which people do you allow to install their own software, and which people do you stop? And which bits of software can make changes and which can’t? It becomes more of an administrative rather than intellectual issue.”
David Harley, senior research fellow at ESET LLC, agrees: “Whitelisting – which isn’t much different in principle to the integrity checking of yesteryear, requires more work by internal support teams and interferes with the end-users’ God-given right to install anything they like; which is more of a problem in some environments than in others.”
That’s not to say that some people consider such delegation to be impossible. Last year Microsoft’s Scott Charney proposed a form of whitelisting for access to the internet; that is, only users with an internet health certificate for their computer should be allowed access. He has few supporters in the security industry. Power, again: “If computers were like televisions, with just one base operating system that was never changed, then it’s doable. But in the real world there are just so many variables associated with Windows and all the bits of software that have ever been written for Windows, that it’s almost impossible to be able to say what is and what is not a clean or healthy computer.”
Jennifer Gilburg, director of marketing at Intel, sees a different problem with this type of whitelisting. “Think of e-commerce,” she said. “An online trader would rather take the occasional fraudulent transaction than risk turning away a good transaction. So the thought of blocking a user from coming onto the internet until they are trusted would terrify many of the e-commerce providers who make their livelihood on the basis of the more users the better. I suspect that most of the e-commerce world would be lobbying very hard to put down this version of whitelisting.” So one of the strongest arguments in favor of blacklisting is the problems concerned with whitelisting.
In favor of whitelisting
However, Henry Harrison, technical director at Detica, points to a specific problem with blacklisting. “Anti-virus blacklisting,” he says, “is based on the idea of detecting things that are known to be bad and stopping them. But it simply cannot detect things that are bad, but not known.” Zero-day threats are not known simply because they are zero-day threats – and blacklisting merely lets them in as if they were good. “What we are seeing today,” continued Harrison, “is a lot of targeted, covert attacks – external infiltration into corporate networks with a view to the theft of valuable information using techniques that are specifically designed to evade blacklisting – and one possible response to zero-day threats is whitelisting.”
Lumension’s senior vice president Alan Bentley, points to the sheer volume of malware as a problem for blacklisting. “Blacklisting,” he explains, “is threat centric. Whitelisting is completely the opposite: it’s trust centric. While blacklisting malware used to be adequate, the whole threat arena in the cyberworld has exploded to such an extent we now have to question whether blacklisting alone is still good enough.”
This is what Lumension does: it protects end-points (such as the PC on your desk) by making it administratively easy to create and maintain a whitelist of acceptable applications while supporting that with a blacklist of malware. “We believe that if you look at the two things together, whitelisting should absolutely be the first line of defense for any organization, because it simply stops everything that isn’t approved. But what it cannot do is remove malware once it has embedded itself into a machine.”
Bit9, like Lumension, is a company that concentrates on whitelisting. “The premise of application whitelisting is very simple,” explains Harry Sverdlove, chief technology officer. “What you want running on your system is a much smaller set than what you don’t want. We apply this model to other aspects of security in our life. For example, who do you let into your home? You don’t keep a list of everyone bad in the world. Rather, you only allow people into your home whom you trust.”
What we’re seeing is that the explosion in malware (in excess of 2 million new pieces of malware every month) is exactly what makes us question whether blacklisting remains realistic. “As a general rule, whitelisting is always more secure than blacklisting,” continues Sverdlove. “But it requires you to think more about how software arrives on your systems and whether or not it is trustworthy. That’s why a software reputation database can be an invaluable aid in whitelisting – it provides a trust rating on software, like a trusted advisor or background security check service, that can make the process more manageable. If everything you run comes from a well-known third party, approving software almost exclusively from a cloud based reputation service can be enough. In most cases, however, you also have your own custom or proprietary software. An effective and robust whitelisting solution allows you to combine both your own policies along with those from a reputation database.”
So we should ask ourselves whether we can harness the power of cloud-based reputation systems to generate our whitelists. Spamina already uses this methodology to produce its blacklist of spam sources, calling on six separate reputation blacklists, but never relying on just one (thus minimizing the chance of false positives).
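The Spamina approach – consult several reputation lists and never trust just one – can be sketched like this (domains and the threshold are invented for the example). Requiring agreement between independent lists is what keeps the false-positive rate down:

```python
def reputation_verdict(sender: str, blocklists, min_hits: int = 2) -> bool:
    """Flag a sender only if at least `min_hits` independent lists agree."""
    hits = sum(1 for listed in blocklists if sender in listed)
    return hits >= min_hits

# Three hypothetical reputation blacklists (Spamina consults six).
lists = [
    {"spam.example", "botnet.example"},
    {"spam.example"},
    {"phish.example"},
]
print(reputation_verdict("spam.example", lists))   # True  – 2 of 3 lists agree
print(reputation_verdict("phish.example", lists))  # False – only 1 list; possible false positive
```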
The anti-virus industry
“I’ve never advocated AV as a single defensive layer,” says ESET’s Harley. “Whitelisting can and does work for businesses, though it works best where there’s an authoritarian IT culture, rather than laissez-faire: restricted privileges and so on. I wouldn’t generally recommend it as a complete substitute for AV, but if it’s implemented properly, it’s a rational multi-layering strategy. It does, at a stroke, obviate most of the risk from social-engineering-dependent threats. In fact, most AV nowadays does have some whitelisting ability, though how it’s done and to what extent varies enormously.”
Ram Herkanaidu, security researcher at Kaspersky Lab UK, has a similar viewpoint and acknowledges the increasing relevance of whitelisting. “As the amount of malware increases,” he said, “I can see at some point it could be more efficient to only allow whitelisted files to be run in an organization. The idea has been around for a while but many things have to be taken into consideration, like software updates (especially Windows updates), remote users, smartphone and non-standard users. Ideally as well as using the vendor’s whitelist you could have a local whitelist too. So while the idea of having a ‘trusted environment’ is very appealing, in practice it is difficult to achieve.”
Kaspersky, like other AV companies, is already looking into whitelisting. “We have been running a whitelist program to collect information about all known good files,” continued Herkanaidu. “The files are sent to us by our whitelist partners and also through our Kaspersky Security Network (KSN). This is our ‘neighborhood watch’ which users become part of when they install Kaspersky Internet Security. Information about all unknown files is sent to our ‘in the cloud’ service and automatically analyzed. If malicious, all computers within the network are protected. If it is not malicious it will be added to our whitelist. This has two benefits for our customers: it will reduce the risk of false positives, and will increase scan speeds. In this way we have been able to collect information – not the files themselves – about millions of files.”
Whitelisting or blacklisting?
So what’s our conclusion? Whitelisting is fundamentally the better security solution. If something isn’t on the list, it gets stopped – the default position for whitelisting is secure. But with blacklisting, if something isn’t on the list it gets allowed – the default position for blacklisting is insecure. Against this, the administrative effort involved in blacklisting is minimal compared to whitelisting; and the difference increases as the size of the whitelist increases. However, the efficiency of blacklisting decreases as its size increases. You could almost say that whitelisting is best with a small whitelist, while blacklisting is best with a small blacklist. However, since neither of these situations is likely to occur in the real world, our conclusion is simple: you need both.
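A minimal sketch of that ‘you need both’ layering, in Python with invented hashes: consult the whitelist first, fall back to the blacklist, and treat anything neither list knows as suspect rather than silently allowing it.

```python
from enum import Enum

class Verdict(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    QUARANTINE = "quarantine"  # unknown to both lists: hold for analysis

WHITELIST = {"a1b2"}  # hashes of approved applications
BLACKLIST = {"c3d4"}  # hashes of known malware

def check(file_hash: str) -> Verdict:
    if file_hash in WHITELIST:
        return Verdict.ALLOW   # first line of defense: only approved code runs
    if file_hash in BLACKLIST:
        return Verdict.BLOCK   # second line: known malware is removed
    return Verdict.QUARANTINE  # neither default-allow nor default-deny blindly

print(check("a1b2").value, check("c3d4").value, check("ffff").value)
# allow block quarantine
```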
Sophos showed the way. It was the first major anti-virus company with free AV for Mac. In a masterly PR stroke it gave away what it could not sell: Mac AV for home users. The rest of the industry was thrown into catch-up.
And finally it has started. AVAST Software has launched a new free Mac AV product, available for OS X 10.5 and newer.
“It’s time for Mac users to start thinking about an antivirus app and this beta shows what they will need for their protection,” said Ondrej Vlcek, CTO of AVAST Software. “The Mac has long had a ‘cloak of invulnerability’ because its small market share made it a fringe target for malware. As Mac sales surge it is becoming a natural target for malware such as the Pinhead and Boonana Trojans or the MacDefender fake antivirus.”
For the moment it’s still in beta – and if you want to try it you can get it here. Otherwise you’ll have to wait for the released version, which should be announced soon.
On 26 January, ComputerWorld published details of an interview with Justin Rattner, CTO at Intel. Rattner described a new hardware-based solution to malware:
“Right now, anti-malware depends on signatures, so if you haven’t seen the attack before, it goes right past you unnoticed,” said Rattner, who called the technology “radically different”.
“We’ve found a new approach that stops the most virulent attacks. It will stop zero-day scenarios. Even if we’ve never seen it, we can stop it dead in its tracks,” he said.
Intel developing security ‘game-changer’
Rattner didn’t really give much away – except that he hopes the technology will be available this year. So what is it?
Alan Bentley, SVP International of endpoint security firm, Lumension, clearly believes that Intel’s solution is some form of whitelisting (allowing only known good things and stopping everything else) rather than our current blacklisting approach (allowing everything by default, but using AV to stop known bad things).
“A shift in security thinking needs to happen to keep malware off our devices and away from our critical data,” he explained. “Trying to shut malware out with signature-based security technologies is like herding jelly. Signature-based security was not designed to protect against the volumes of malware that we are seeing today. When you consider that known malware signatures grow by more than 30,000 each day, the concept of protecting against unidentified malware is more than difficult, especially if you are trying to predict what malware might look like. With this approach, it is little surprise that malware has a habit of falling through the cracks.
“If you flip security on its head and only allow the known good onto a device or a computer network, malware protection is significantly improved.”
In the interview, Rattner seems to indicate that McAfee (which Intel bought for $7.7bn last year) is not really involved in this project.
> Rattner said Intel researchers were working on the new security technology before the company moved to buy security software maker McAfee. However, he said that doesn’t mean that McAfee might not somehow be involved.
But if this technology predates the McAfee acquisition, then it is most likely to operate at chip level rather than software level. And that rather suggests it has to do with trusted computing, as specified by the Trusted Computing Group (TCG). The biggest problem with trusted computing is that it involves one person or organisation imposing its own views on others. Now, if that’s a company controlling how its own computers are used, there’s not too much wrong with that. But if it’s Intel or Microsoft – or government – controlling what can run on privately owned computers, then that’s highly dangerous.
Lumension’s CEO, Pat Clawson, has worries along these lines, suggesting that a “pressing concern is whether it is socially acceptable for Intel to impose security on the device. Whilst it might make sense in the consumer mobility space, governments and enterprises will surely want to make their own security decisions, not have it forced on them at the chip level.” But it is indeed the consumer level – that’s you and me, by the way – that is possibly under the greatest threat. “It makes sense,” he adds, “for its focus to be on the consumer market, which represents a significant portion of both Intel and McAfee’s revenues.
“Security innovation on the mobile device would certainly be an interesting and most likely welcome addition to the consumer handset market. With current security models proving ineffective, new levels of intelligence and a change in mindset are needed to protect today’s IT environment.”
What worries me is that this vision seems to be exactly what Scott Charney was proposing with his ‘internet health certificate’; that is, we will not be allowed to access the internet unless we can prove our computers are clean – and one way to prove they are clean is with Intel controlling what can run on them. Was Charney playing John the Baptist to Rattner’s Christ: a voice crying in the wilderness, but preparing the way for our security saviour?
The economic downturn is affecting everyone; there’s just not a lot of money going around these days. So spare a thought for the criminal – with less money to steal, he has to work harder for his living.
And that is certainly the conclusion to be drawn from Panda Security’s latest report: The Cyber-Crime Black Market: Uncovered. We have already seen in a previous post that complex viruses are being developed and used, probably to run interference for the trojans. By tying up the AV industry’s top engineers in locating, unravelling and disarming these viruses, the online criminals hope to keep the work of their data-stealing trojans operational over a longer period. But just consider the organizational skills that this requires: it’s an underground black market industry that mirrors the organisation of legitimate industry. Whatever you want, you can have: at the market price.
The report comes out of Panda’s decision to have a closer look at the internet’s black market, “to see what kind of services they are selling today,” explained technical director Luis Corrons. “We found that basically it is the same as what’s been going on for years – only now the service is more specialised, and the availability more widespread. In the past they sold things like infection kits, spam services, stolen credit cards; and yes that is still available. But now the criminals are offering far more services. In the past you could buy a number of credit cards; and depending on the amount you buy, the price goes up or down.” This is still available said Corrons, “but now the criminals have started offering guarantees. OK, you can pay, say, $2 for a credit card; but if you want access to a bank account with a certain amount of money guaranteed, you pay more – and you can have that. You can even request bank accounts with more than $80,000 – but to get such credentials you’d have to pay $700.”
Apart from the guarantees, the whole process has become more integrated. “You can buy the credit card details which you can use,” he continued, “and then you can hire additional services so they will take care of and make all the money transfers for an additional fee. Or, let’s say you buy some luxury items with these stolen credit card details – such as a big LCD TV. Well you can’t have it sent straight to your house because obviously the police can track it. No problem. There are people offering to do this for you. You want to buy this – we’ll do it for you, and take care of sending it to your house.”
> From the comfort of an office or bedroom, with a single computer and spurred on by the lack of international legislation or cooperation between countries to facilitate investigations and arrests, cyber-criminals have been making a lucrative living from these activities.
Corrons even described a site in Russia that offers to provide you with anything you want for just 20% of the usual cost. How? Well FAQs on the site explain that they will use stolen credit cards to buy primarily from US online stores, and let you have the goods for 20% of the cost – which they take from you for their fee. This Russian site only supplies Russian residents, but is indicative of how the black market is evolving.
| Service | Price |
|---|---|
| Credit card details | From $2 to $90 |
| Physical credit cards | From $190 + cost of details |
| Card cloners | From $200 to $1,000 |
| Fake ATMs | Up to $35,000 |
| Bank credentials | From $80 to $700 (with guaranteed balance) |
| Bank transfers and cashing | From 10% to 40% of the total; from $10 for a simple account without guaranteed balance |
| Online stores and pay platforms | From $80 to $1,500 with guaranteed balance |
| Design and publishing of fake online stores | According to the project (not specified) |
| Purchase and forwarding of products | From $30 to $300 (depending on the project) |
| Spam rental | From $15 |
| SMTP rental | From $20 to $40 for three months |
| VPN rental | $20 for three months |
Clearly, there is so much money to be made from these activities, even in difficult times, that what Panda is describing is the beginning of a new industry: cloud-based Fraud as a Service.
“In the last two years, we have seen a growing number of new viruses…” Panda’s Luis Corrons explains
Differentiating between one type of malware and another is neither easy nor, ultimately, particularly useful. Nevertheless, there is a temptation to say that the purpose of a virus is to attack and probably harm the target, while the purpose of a Trojan is to steal from the target. In other words, a virus is a weapon and a Trojan is a tool.
The very nature of cybercrime is changing: it is evolving from the indiscriminate carnage wreaked by earlier viruses, into an organised criminal business. It is little wonder then, that by 2005, the number of new viruses (weapons) was being dwarfed by the number of new Trojans (business tools). Figures from Panda’s new report, The Cyber-Crime Black Market: Uncovered (of which more in a later post) show that the generation of new viruses was so small that it had to be included in the ‘other malware’ category. Trojans, however, accounted for nearly half (49%) of all new malware.
Luis Corrons, PandaLabs’ technical director, told me that the latest figures show an even greater dominance of new Trojans, so that by 2010 Trojans account for just about 56% of all new malware. Again, this is not surprising. Trojans are the tool by which cybercriminals extort, steal and fraudulently obtain their income. The surprise, however, is that the virus is showing signs of a recovery: no longer lost within the 10% of other malware, during 2010 it accounted for more than 22% of all new malware.
Why? Why should an uneconomic attack weapon resurface when logic suggests it should continue to decline? I asked Luis Corrons to explain.
“We used to get a lot of viruses in the past; and then everything became Trojans and worms, and there were only a few new viruses,” he said. “But in the last 2 years we have seen a growing number of new viruses appearing; not necessarily many different ones, but many new variants of the same ones.”
The cause remains a mystery. “We often ask ourselves, why should this happen?” His answer is a bit surprising. “The virus is, for us, a really painful process; even though as an industry it’s where we come from. A virus is far more complex to detect than any other threat, such as a Trojan or a worm. In the final analysis,” he continued, “with a Trojan or a worm, the whole file is malicious.”
Viruses are different. “The virus embeds itself into good files, making detection considerably more tricky. But the bottom line for us isn’t just detection; it’s disinfection. And we have to remove every trace of the virus from the file, returning it to a clean state similar to its condition before the infection. This takes a lot of time and is something that we cannot completely automate. So it involves high level engineers spending a lot of time on the problem.”
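A toy example makes Corrons’ point concrete. Here a hypothetical appending infector attaches itself to a host file: detecting the infection is a simple whole-file check, but disinfection must restore the original bytes exactly. The marker-based layout is invented purely for illustration – real file infectors are far messier, which is precisely why cleaning them resists automation.

```python
VIRAL_CODE = b"<<VIRUS>>"

def infect(host: bytes) -> bytes:
    # A toy appending infector: the host file still "works", but carries a payload.
    return host + VIRAL_CODE

def is_infected(data: bytes) -> bool:
    # Detection is comparatively easy: look for the viral bytes.
    return VIRAL_CODE in data

def disinfect(data: bytes) -> bytes:
    # Disinfection must return the file to its pre-infection state;
    # get the boundary wrong and you corrupt a legitimate file.
    if data.endswith(VIRAL_CODE):
        return data[: -len(VIRAL_CODE)]
    return data
```

With a Trojan, by contrast, the whole file is malicious, so the "cure" is simply deletion – there is nothing to restore.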
In short, Corrons is explaining that a disproportionate amount of time and expertise has to be spent on anti-virus rather than anti-Trojan activities. But here’s the anomaly. “Some of the new viruses we are seeing these days are really, really complex and could only be written by very skilled people. But most of these viruses don’t have any Trojan content, so financial gain is not a motivation.”
So, what is the motivation?
“Our guess,” he suggests, “although we don’t have any hard proof of this, is that the trojan criminals are also engaged in the creation of these high level computer viruses so that it takes a lot of our time and resources to prevent us focusing on their real business: Trojans and financial theft. We’ve tried to find a better explanation, but we really cannot.”
And that’s a bit worrying. It suggests that the criminal gangs are more organised, better resourced, and more determined than I for one had realised.
Anti-virus software is possibly the archetypal security product. It was the first, is the most ubiquitous and certainly the best known defence against the bad guys. But with so many high-profile malware successes (such as Stuxnet and Zeus and other botnets that comprise millions of infected computers) we need to ask ourselves if it is still up to the job. Are the bad guys winning the arms race? What are the latest developments in malware, and what is the AV industry doing to combat them? These are the questions we need to examine before answering the ultimate question: is anti-virus software still relevant?
In this article we are going to use ‘virus’ and ‘malware’ interchangeably. There is a technical difference between a virus, a worm and a trojan. But for the user there is no meaningful difference: they are all malware and all bad for you. “The key thing to recognise,” says James Lyne, senior technologist at Sophos, “is that these things are now so inextricably linked together that this aged distinction between things like viruses, worms, trojans and spam actually doesn’t make a lot of sense at all – it’s all really just ‘bad stuff’.” For example, he explained, bots on compromised PCs are used to deliver spam that contains social engineering scams designed to trick users into visiting malicious websites that will infect the user with a trojan that opens a back door to allow in a rootkit containing a keylogger and spyware. Anti-virus software doesn’t just seek to protect you from viruses – it seeks to protect you from all of this bad stuff. We’ll just call it all ‘malware’.
Current developments in malware: what are the attackers doing?
Modern malware has evolved from a demonstration of personal prowess into a serious, organised, criminal business; and is driven by the same motives as any legitimate business – a desire to maximise ROI. This explains the two primary characteristics of today’s malware: it follows the market; and is increasingly sophisticated.
Follows the market
Wherever there are large concentrations of users, there will also be malware. This explains the malware campaigns on Facebook and Twitter. But it also tells us what is likely to happen next, starting with increasing malware for the Mac (a new Mac version of KoobFace has just been discovered by Intego, a Mac security specialist, as I write this article). The criminals will follow the numbers: as the Mac and other Apple products increase in popularity, so the criminals will start to attack them. One of the biggest computing movements today is ‘mobilization’ – the growth of mobile computing using smartphones and tablets. As these markets grow, so they will attract malware. Similarly, market growth in virtual machines will lead to attacks on the hypervisor. The AV industry is aware of proof-of-concept attacks on virtual machines, but nothing has yet been found in the wild. It will happen, though; and it is an area where all AV companies are watching – and waiting.
It is only with a degree of tongue in cheek that Luis Corrons, technical director of PandaLabs, comments, “We’re becoming evermore interconnected. Everything is connected to everything else – and it’s all connected to the internet. I don’t know that we’re going to install anti-virus for the fridge – but who knows.” Basically, when there are enough fridges connected to the internet, there will be fridge malware.
James Lyne described one example of the increasing sophistication in malware. “Polymorphism,” he said, “has been around for about 20 years. It’s where the malware continually changes itself to avoid detection – but it has been easy for the AV vendors to defeat it. We’d get hold of a copy, extract and analyse the engine that creates the new copies and work out all the possible future versions. That would give us generic detection for that whole polymorphic family. But today the bad guys are using server-side polymorphism where the engine is not in the malware but on legitimate business websites. Every time it is refreshed, what is downloaded is different in content to the previous download – and after a couple of hundred downloads, they kill that site and move on to another. That way none of us vendors can get hold of the engine to write any form of generic protection.”
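The asymmetry Lyne describes can be demonstrated with a few lines. In this sketch a "server" re-packs the same payload with a fresh key on every download, so a hash signature taken from one download never matches the next. The XOR packing and counter-based key are toy stand-ins for a real polymorphic engine, which of course the defenders never get to see.

```python
import hashlib
from itertools import count

PAYLOAD = b"malicious-logic"
_key_source = count(1)

def serve_polymorphic_copy() -> bytes:
    # A fresh "packing" key per download: every client receives different bytes.
    key = next(_key_source) % 255 + 1
    return bytes([key]) + bytes(b ^ key for b in PAYLOAD)

def signature_of(sample: bytes) -> str:
    return hashlib.sha256(sample).hexdigest()

def hash_scanner(sample: bytes, known_signatures: set) -> bool:
    # Classic signature matching: only exact, previously seen bytes match.
    return signature_of(sample) in known_signatures
```

Capture one sample and you can detect that sample – and nothing else. Without the engine itself, there is no family to generalise over, which is Lyne’s point.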
Current developments in anti-malware: what are the defenders doing?
There doesn’t appear to be a major advance in AV technology on the near horizon. “Right now,” says David Harley, ESET research fellow and director of malware intelligence, “it’s more a case of multiple/hybrid technologies (found in nearly any modern AV) advancing by improving individual components. Obviously, some products stress certain components more than others.”
Christopher Boyd, GFI senior threat researcher, suggests “virtual sandboxing, which allows threats to be intercepted and executed inside a virtual machine running a Windows-like pseudo environment, allowing for more accurate detection and safer quarantine and disposal.”
But probably the biggest single development has been the evolution of product-based reputation feed back (not to be confused with community-based reputation systems such as the Web of Trust). Rik Ferguson Trend Micro’s, senior security advisor, explains his own company’s reputation system. It is born out of the marriage, in the cloud, of three separate databases: bad emails, bad URLs and bad files. “Let’s take a hypothetical worst-case scenario,” he said. “You get an email from a bot that has only just been infected – and the email is well-crafted so that it looks OK. We can’t see anything wrong with it, so we allow it. In this case, email reputation has failed. The email contains a link to a malicious website that has only just been registered. Again, we don’t yet know it’s bad – so we allow you to click the link, and again the reputation system has failed. You click the link and visit the website which uses a zero-day exploit to infect you with a new trojan that the bad guys have already tested against all the AV products. We haven’t seen this trojan, so we allow you to download it – and you’re infected. Email, URL and file reputation systems have all failed. But,” he stresses, “the first thing that the trojan will seek to do is phone home, either to tell its owner that it has landed, or to download additional components. At this point we will almost certainly recognise this as suspicious behaviour and block it. We will also relay the URL source of the suspect file to TrendLabs who will download the page content and analyse it.” Instantly, the URL database and file database are updated with the new reputations. And, “if a new email comes in pointing to that URL that we now know to be suspicious, we can recognise the email as also suspicious and can add details to our email reputation system. 
And all of this is based on the behaviour of a file that we had previously thought was OK; and all of these new reputations are, thanks to the cloud, instantly available to all of our other customers.”
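The feedback loop Ferguson walks through can be reduced to a sketch: the email, URL and file reputation stores all start out ignorant of a brand-new attack, but a single behavioural observation – the trojan phoning home – updates them together, so the next customer is protected. The data structures and function names here are my own illustration, not Trend Micro’s architecture.

```python
# Shared "in the cloud" reputation stores (illustrative).
bad_urls: set = set()
bad_files: set = set()

def email_suspicious(email: dict) -> bool:
    # Email reputation now benefits from URL reputation:
    # a message linking to a known-bad URL is itself suspicious.
    return email["link"] in bad_urls

def on_phone_home(file_hash: str, source_url: str) -> None:
    # One behavioural detection feeds every reputation database at once.
    bad_files.add(file_hash)
    bad_urls.add(source_url)
```

Before the phone-home event, an email carrying the malicious link passes every check; afterwards, the same email is flagged – without any new signature being written.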
Future solutions for the malware problem
We have a choice. We can carry on as we are, trying to improve our anti-malware defences in a perpetual leapfrogging process with the bad guys – or we can think outside the box and be radical. One approach could be Trusteer’s Rapport product. Its purpose is not primarily to find and eliminate viruses, but specifically to protect online bank transactions from malware (such as Zeus). Rapport is anti-malware, but not as we know it. Its primary purpose is to protect the browser. It doesn’t go looking for malware on your PC. Rather it defines a browser behavioural policy – and if the browser tries to behave differently, it knows that there is malware involved. “It’s like behavioural detection,” explains Amit Klein, Trusteer’s chief technology officer, “but it’s not behavioural in the sense that we monitor all the behaviour of a suspicious binary – rather we wait for the malware to come to us – for it to ‘attack’ the browser; and that’s where we stop it cold.”
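The inversion Klein describes – policing what the browser is allowed to do rather than hunting malware on disk – is essentially a whitelist applied to behaviour. The policy entries and event names below are invented for illustration; Rapport’s real policy is naturally far richer.

```python
# A hypothetical behavioural policy: the set of things the browser may do.
ALLOWED_BROWSER_ACTIONS = {
    "render_page",
    "submit_form_over_tls",
    "store_cookie",
}

def check_browser_event(action: str) -> str:
    if action in ALLOWED_BROWSER_ACTIONS:
        return "permit"
    # Anything outside the policy - e.g. a Zeus-style hook injecting itself
    # into form submission - is treated as evidence of malware at work.
    return "block"
```

Note that this never needs to identify *which* malware is present: any deviation from policy is blocked, known or unknown.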
Scott Charney’s Internet Health Certificate
A more radical approach could be the Internet Health Certificate proposal put forward by Microsoft’s Scott Charney (Collective Defense: Applying Global Health Models to the Internet). Charney’s idea is that we should take a lead from the World Health Organization: you may need to prove your health before you can do certain things or go to certain places. In other words, users may need a health certificate for their computers before they are allowed access to the internet. The AV industry is not generally impressed. Who says a computer is healthy? Who defines computer health? “I’d be pretty unhappy if it turned out that the health of my systems was being certified by someone whose knowledge of security wasn’t much higher than the average,” comments ESET’s David Harley. “Or even the sysadmin responsible for the Microsoft servers that are used to relay spam…”
Nor is the technical problem trivial. “The technical issue is the volume of edge cases,” continues Harley. “I don’t think a ‘just about good enough’ heuristic approach combines well with a utilitarian ‘greatest good for the greatest number’ approach, in this case.”
Trend Micro’s Rik Ferguson raises a practical issue. “What happens,” he asks, “in the case of false positives? If users are incorrectly quarantined, will they be able to claim something back in lost productivity, lost purchases on eBay, or whatever it may be?”
“It’s an interesting idea,” concedes Trusteer’s Klein. “But with the current infection rates where your machine can be clean one day and infected the next, I’m worried about the implications for an ISP handling millions of customers, some of whom keep getting re-infected. In practice, I’m not sure how we can really adopt this – I’m not sure how the ISP, where the rubber meets the road, will be able to handle this under current pricing structures.”
With apparently so little going for this idea, you have to wonder how it got air time. The answer might be in Scott Charney’s title: vice president of trustworthy computing. Microsoft, of course, is a leading member of the Trusted Computing Group (TCG). The TCG has developed specifications for how to control what can and cannot run on a computer – and this can already be achieved via Intel chips (Intel is another member of the TCG) installed on the majority of the world’s PCs. So if a third-party (your company? Microsoft? Intel? Your ISP? the Government?) defines what can run on your PC for you to be allowed access to the internet, you automatically have a health certificate because nothing else, neither malware, nor pirated software, nor illegal music, nor porn, nor any new software not sanctioned by the controlling organization, is capable of running. The problem is solved. Some might say at the cost of personal freedom.
Some of the marketing hype around anti-virus products seems to imply that AV software is all you need to be safe. It is not. You need layers of different security. In fairness to them, none of the anti-virus technologists will suggest that AV is enough. You need to complement it with data loss prevention technologies, ID theft prevention, firewalls, URL filters and more. How will the market develop? “Slowly and painfully,” says Harley. “Customers who expect 100% success will continue to be disappointed. Pure AV will become rarer: the technology will continue to be further integrated with other defensive technologies.”
New technologies such as Rapport can help in niche areas; ideas such as trusted computing could solve the problem but at the cost of personal liberty. Now I am not the biggest fan of the way in which the anti-virus industry markets itself. But of this I am certain: we cannot, and must not try to, do without it. The anti-virus industry is not merely relevant; it is still essential.