Is the anti-virus industry in bed with the NSA – why do CIPAV, FinFisher and DaVinci still defeat AV?
September 2013 is the month in which the extent of direct government hacking – as opposed to traffic surveillance – became known.
4 September – WikiLeaks releases Spy Files 3, demonstrating increasing use of third-party hacking tools, such as FinFisher.
6 September – Bruce Schneier writes in the Guardian
The NSA also devotes considerable resources to attacking endpoint computers. This kind of thing is done by its TAO – Tailored Access Operations – group. TAO has a menu of exploits it can serve up against your computer – whether you’re running Windows, Mac OS, Linux, iOS, or something else – and a variety of tricks to get them on to your computer. Your anti-virus software won’t detect them, and you’d have trouble finding them even if you knew where to look. These are hacker tools designed by hackers with an essentially unlimited budget. What I took away from reading the Snowden documents was that if the NSA wants in to your computer, it’s in. Period.
7 September – details of an NSA MITM operation against Google users in Brazil revealed.
12 September – FBI admits that it hacked Freedom Hosting. The implication is that the FBI inserted the malware that monitored visitors; and it is almost certain that the malware was CIPAV.
FinFisher and CIPAV stand out as government operated spyware; but there are others: RCS (DaVinci), Bundestrojaner etcetera – and, of course, Stuxnet and Flame. We’ve known about them for a long time: see
- CIPAV: FBI, CIPAV spyware, and the anti-virus companies (this site, May 2011)
- RCS: Hacking Team’s RCS: hype or horror; fear or FUD? (this site, Nov 2011)
- FinFisher: Use of FinFisher spy kit in Bahrain exposed (Infosecurity Mag, August 2012)
- Bundestrojaner: Chaos Computer Club warns on “German government” communications trojan (Infosecurity Mag, Oct 2011)
This raises a major question: if we’ve known about this malware for so long, how come it can still be used? Why doesn’t anti-malware software stop it?
There are two possible reasons that we’ll explore:
- the AV industry, like so many others, is in bed with the NSA
- the AV industry is not as good as the ‘stops 100% of known malware’ claims that it makes – or, put another way, virus writers are generally one step ahead of the AV industry
In bed with the NSA
This has been vehemently denied by every AV company I have spoken to (see the articles on CIPAV and RCS for examples). Bruce Schneier doesn’t believe it either:
I actually believe that AV is less likely to be compromised, because there are different companies in mutually antagonistic countries competing with each other in the marketplace. While the U.S. might be able to convince Symantec to ignore its secret malware, they wouldn’t be able to convince the Russian company Kaspersky to do the same. And likewise, Kaspersky might be convinced to ignore Russian malware but Symantec would not. These differences are likely to show up in product comparisons, which gives both companies an incentive to be honest. But I don’t know.
Explaining the latest NSA revelations – Q&A with internet privacy experts
And yet the possibility lingers. When Flame was ‘discovered’, Mikko Hypponen issued a mea culpa for the industry. Admitting that F-Secure had Flame samples on record for two years, he said,
Researchers at other antivirus firms have found evidence that they received samples of the malware even earlier than this, indicating that the malware was older than 2010.
What this means is that all of us had missed detecting this malware for two years, or more. That’s a spectacular failure for our company, and for the antivirus industry in general.
It wasn’t the first time this has happened, either. Stuxnet went undetected for more than a year after it was unleashed in the wild…
Why Antivirus Companies Like Mine Failed to Catch Flame and Stuxnet
Forget the ‘hand on heart’ for a moment, and consider: that’s the two major government-sponsored malware samples, known about and ignored by multiple AV companies for several years. Coincidence? Maybe. But to echo Schneier’s last sentence, I don’t know.
Malware writers are one step ahead of the AV industry
If you listen to the AV marketers, this cannot be true. Every month we hear claims that AV products stop 99.9% to 100% of all known viruses (remember that they ‘knew’ about Stuxnet and Flame, but did nothing). I’ve written about my dismay at this sort of advertising elsewhere (for example, Anti Malware Testing Standards Organization: a dissenting view).
However, if you listen to the foot-soldier researchers – and sometimes even higher – within the individual companies, you realise that it is absolutely, inherently, and unavoidably true. Luis Corrons, technical director at PandaLabs, puts it like this:
The effectiveness of any malware sample is directly proportional to the resources spent. When we talk about targeted attacks (and [CIPAV and FinFisher] are developed to perform targeted attacks) the most important part is the ability to be undetected. Bypassing signature detection is trivial, although it is almost useless too, as most anti-malware programs have several different layers of protection which do not rely on signatures.
The attackers probably know which security solution(s) the potential victim is using. Then it is as ‘simple’ as replicating the same scenario (operating system, security solution, etc.) and verifying that the malware is not being detected. As soon as it is flagged they will change it to avoid detection, until they have the final version.
Once they are done, they will infect the victim and will be spying on / stealing information from them until they are detected. This could be a matter of days, months or even years.
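The iterative test-and-tweak loop Corrons describes can be caricatured in a few lines. Everything below is a toy model of the process, not real attack code: the ‘scanner’ is a trivial byte-pattern matcher standing in for a signature engine, and `repack` stands in for whatever transformation an attacker would actually use.

```python
# Toy model of the evasion loop: test a sample against a simulated
# signature scanner and transform it until it is no longer flagged.
KNOWN_SIGNATURES = [b"\xde\xad\xbe\xef", b"EVIL_MARKER"]

def scanner_flags(sample: bytes) -> bool:
    """Simulated signature engine: flag if any known pattern appears."""
    return any(sig in sample for sig in KNOWN_SIGNATURES)

def repack(sample: bytes, key: int) -> bytes:
    """Stand-in for the attackers' 'change it' step: XOR-encode the body."""
    return bytes(b ^ key for b in sample)

def evade(sample: bytes, max_rounds: int = 255) -> bytes:
    """Iterate until the simulated scanner no longer detects the sample."""
    candidate = sample
    for key in range(1, max_rounds + 1):
        if not scanner_flags(candidate):
            return candidate        # the 'final version', in Corrons' terms
        candidate = repack(sample, key)
    raise RuntimeError("no undetected variant found")

payload = b"header" + b"EVIL_MARKER" + b"body"
final = evade(payload)
assert not scanner_flags(final)     # ships only once it goes undetected
```

The point of the sketch is the feedback loop, not the transformation: because the attacker controls when to stop iterating, the shipped sample is undetected by construction.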
Claudio Guarnieri of Rapid7 said something very similar:
Since FinFisher, just as any other commercial spyware, is a very targeted and sophisticated (besides expensive) malware, it’s part of Gamma’s development lifecycle to make sure that they tweaked all the different components to avoid antiviruses before shipping the new FinFisher out to the customers.
The developers likely have their own internal systems to do these testings: think of something as a private VirusTotal. Every time they develop a new feature or a new release, they’ll test it against as many antiviruses as possible and if something gets detected, they debug and try to understand why and find a way around it.
The ‘problem’ with this approach is that they rely on the AV industry not knowing and not having access to their malware: whenever that happens AV vendors react pretty effectively and in fact if you look at FinFisher samples discovered 1 year ago they are now largely detected by most antivirus products.
Is the AV industry in bed with the NSA? The simple fact is that we just do not know. The industry itself denies it – but, well, it would, wouldn’t it? Statistically, since almost every other aspect of the security industry collaborates with or has been subverted by the NSA, my suspicion is that it is. At the very least, I suspect it engages in ‘tacit connivance’.
Are malware developers one step ahead of the AV industry? That depends. As Corrons says, it depends on the resources available to the bad guys, whether that’s NSA, FBI, GCHQ or the Russian Business Network. Well-resourced bad guys will always get in. As Schneier puts it, “if the NSA wants in to your computer, it’s in. Period.” But that probably applies to all governments and all seriously organized criminal gangs engaged in targeted hacking.
But one final comment: nothing said here should be taken to suggest that we don’t need the AV industry. It may not be able to stop the NSA, but it can and does stop a million script kiddie wannabe hackers every day.
I wrote about the March 8 deadline for remaining DNSChanger victims to get clean or lose their internet in Infosecurity Magazine: DNSChanger poses a new threat to its victims.
But I had two late comments from anti-virus people currently in the States, separated from me by the trans-Atlantic time difference. Both echo the comment from Sophos’ Graham Cluley that “if this is the only way to wake the affected users into sorting out the problem, so be it.”
PandaLabs’ Luis Corrons used remarkably similar language. “At least this will make affected people react and secure their computers,” he told me.
And ESET’s David Harley said, “Pragmatically, I don’t have a problem with this: law enforcement doesn’t have a specific responsibility for maintaining service for infected machines.”
But reading between the lines, I suspect that any anger is really directed not at infected users being apathetic about their own security, but at the fact that the nature of the infection makes further infection likely. Such users are being apathetic about other users’ security; and that’s really not on.
“The UK will become the best country in the world for e-commerce, the prime minister has promised.” His promise includes “a raft of measures to boost internet use in the UK, including a £1bn drive to get all government services online [within three years] and £15m to help businesses make the most of the web.”
This is not from the new UK cyber security strategy published by the Cabinet Office last week. It came from Tony Blair in 2002. And it didn’t happen.
Last week, the Cabinet Office explained that “Our vision is for the UK in 2015 to derive huge economic and social value from a vibrant, resilient and secure cyberspace, where our actions, guided by our core values of liberty, fairness, transparency and the rule of law, enhance prosperity, national security and a strong society.”
Cameron is perhaps less ambitious than Blair, allows more time (four rather than three years) and is focused on security. But is the end any more achievable?
I doubt it – and offer a few observations. Firstly, the great majority of security companies and their experts have openly welcomed and praised this report. They have no option: the power of government purchasing makes it difficult for any business to criticise government openly. Indeed, the report acknowledges this lever:
To ensure smaller companies can play their part as drivers of new ideas and innovation we will bring forward proposals as part of the Growth Review to help small and medium sized enterprises fully access the value of public procurement.
However, regardless of what they say in public, many of these security experts have serious doubts. One, whose company statement had him praising the initiative, privately mailed me worrying about how many different government departments, quangos, committees, off-shoots and different law enforcement and intelligence agencies are involved in this strategy. It is always the joints that provide the weaknesses, and this strategy has many joints.
The second observation – which, to his credit, has also been highlighted by Amichai Shulman, CTO and co-founder of Imperva – is that there is no emphasis on protecting the individual.
The strategy has given only a few insights on how government is going to help businesses and individuals protect themselves. In fact, it has taken the traditional approach of non-intrusive, general advice for tasks left to individuals to do, e.g. keep safe and stay current with the latest threats. As we know, most consumers and enterprises don’t do that, which explains why we’re in the cyber crime mess we live in today.
Amichai Shulman, Imperva
It would appear from the report that the government expects its GetSafeOnline website to be sufficient to protect the public. (You can see my attitude to GetSafeOnline here: UK Internet Security: State of the Nation – The Get Safe Online Report, November 2011.) I have serious doubts about its effectiveness. But I am more concerned there is no mention anywhere in the new cyber security strategy report of an existing CPNI-inaugurated initiative that has the potential to help the individual: the Warning Advice and Reporting Point, or WARP.
The WARP project is stagnating if not contracting. But the concept is still good. Given the right input and impetus WARPs could develop into a form of security-based social networking system, where individuals would share threat experiences between themselves, learn about new threats, and automatically report them back up the line eventually to CPNI. By sharing their information, by warning others, by offering help and advice to colleagues within any particular WARP, the individual security stance becomes much stronger.
This approach could help protect home computers from being recruited into botnets; and fewer active botnets means a more secure national infrastructure. I am worried that if the new strategy isn’t aimed at protecting the national infrastructure by protecting individuals, how else is it to do it? Possibly by ramping up co-operation with, and control over, the ISPs. We will, says the report:
Seek agreement with Internet Service Providers (ISPs) on the support they might offer to internet users to help them identify, address, and protect themselves from malicious activity on their systems.
It is too easy to move from this position to one of getting the ISPs to cut off infected users until they can prove their system is clean.
But it’s not all depressing; I have always known that there are comedians in government. Always leave them laughing when you say goodbye. And this report does just that:
- the Ministry of Justice will develop ‘cyber-tags’ as a form of online ASBO
- police forces are to recruit ‘cyber-specials’ (the internet traffic warden?)
- ‘kitemarks’ to help consumers distinguish between genuinely helpful products and advice and the purveyors of ‘scareware’
- “…partnerships between the public and private sectors to share information on threats, manage cyber incidents, develop trend analysis and build cyber security capability and capacity.”
CESG share intelligence with the private sector? Now that one really made me laugh.
“Our little gnomes in the backroom,” says the excellent Shadowserver in an announcement headed ‘New AV Test Suite’, “have been working feverishly for the last several months to put the finishing touches on our new Anti-Virus backend test systems.”
Malware testing, as we know, is a tricky business. AMTSO, the Anti-malware Testing Standards Organization, has expended much energy and expertise in developing detailed methodologies designed to ensure fair, unbiased and accurate anti-virus tests. But do we get this from Shadowserver? Do we get a new AV comparison source that we can realistically access for accurate unbiased information on the different AV products available to us? Let’s see.
Shadowserver starts off with a fair comment.
No single vendor detects 100%, nor can they ever. To expect complete protection will always be science-fiction.
That being said, it goes on…
…you can see the different statistics of the different vendors in our charts.
Here are a couple of examples.
The one thing that really leaps out here is that Panda apparently misses (shown in green) far more of the test samples than Avira. This is counterintuitive. Panda is a commercial product backed by one of the world’s leading security companies. Avira, which I personally trust sufficiently to use on my XP netbook, is a free product. Shadowserver provides a partial answer:
The longest running issue has been our inability to use Windows based AV applications. We can now handle that; however, it is still not what you might buy for home or commercial use. We are utilizing a special command-line-interface version from each of the vendors that we are using. This is not something you can purchase or utilize normally. These are all special versions, but most of them do use the same engines and signatures that the commercial products use.
This is important. Luis Corrons, technical director at PandaLabs, elaborated:
What ShadowServer does is not an antivirus test. As they say, they do not even use commercial products, but special versions. Furthermore, it is static analysis of files they capture. It is a statistic. But the data cannot be used to say “product x detects more than product y” or “product x detects this percentage” as they are not using any of the other security layers used in real products (behavioural analysis/blocking, firewall, URL filtering, etc). The most you can say with this system is product x was able to detect y percent of files using their signatures and heuristics (the oldest antivirus technologies).
The distinction matters. The AV companies have long recognised that the original signature-database approach to malware cannot match the speed with which new signatures are required for polymorphic virus families, so they have supplemented signature detection with more advanced and sophisticated methodologies.
In our case (Panda) ShadowServer is using an engine which is a few years old (at least 5) and of course is not using the cloud, so I can guarantee that our results are going to be awful. We have been asking SS for years to use a new version, but they were not supporting Windows. Now that they are supporting it, they forgot to mention it, but it’s not a problem as we’ll be sending them a new version with cloud connection. Anyway, even though in that way the results will be way better, or even if we are the number 1 vendor, that doesn’t mean anything, as it is only a static analysis of some files.
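The gap between static signature matching and the additional layers is easy to caricature. In this sketch the ‘signature engine’ is an exact-hash blacklist, so any byte-level change defeats it, while a crude, entirely invented heuristic rule can still score a suspicious trait. Neither resembles a real product; both exist only to show why a signatures-only test understates a layered product.

```python
import hashlib

# Hashes of known-bad files -- a stand-in for a vendor's signature database.
SIGNATURE_DB = {
    hashlib.sha256(b"known-bad-sample-v1").hexdigest(),
}

def signature_scan(data: bytes) -> bool:
    """Exact-match layer: only catches byte-identical known samples."""
    return hashlib.sha256(data).hexdigest() in SIGNATURE_DB

def heuristic_scan(data: bytes) -> bool:
    """Invented heuristic for illustration: an executable marker plus a
    long NOP-like padding run scores as suspicious."""
    return data.startswith(b"MZ") and b"\x90" * 16 in data

original = b"known-bad-sample-v1"
variant  = b"known-bad-sample-v2"          # one byte changed

assert signature_scan(original)            # exact match: caught
assert not signature_scan(variant)         # trivial change: signature missed
assert heuristic_scan(b"MZ" + b"\x90" * 16)  # heuristic layer still fires
```

This is why Corrons’ objection holds: a test that exercises only the oldest layer says little about what the shipping product, with its behavioural, cloud and URL-filtering layers, would actually block.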
One solution would be for Shadowserver to work more closely with AMTSO. Shadowserver is not currently a member of AMTSO. I urge it to join. And I urge AMTSO to waive all membership fees so that this non-profit free service organization can do so. Both parties would benefit enormously. In the meantime, I asked David Harley, a director of AMTSO and research fellow at ESET, for his personal thoughts.
Shadowserver has never been discussed within AMTSO, that I remember… In the past they’ve shied away from suggesting that their statistics are suitable for direct comparison of vendor performance. One of the reasons they cited for that is that their testing has been focused on Linux/gateway versions, and you can’t assume that desktop versions will perform in the same way across a range of products. Including some Windows products will make a difference in that respect, but I can’t say how much, because I don’t know which versions they’re using. Where gateway products are used, it’s unlikely that the whole range of detection techniques are used that an end-point product uses. Detection is often dependent on execution context, certainly where detection depends on some form of dynamic analysis. A gateway product on an OS where the binary can’t execute may not detect what its desktop equivalent does, because the context is inappropriate. On the other hand, the gateway product’s heuristics may be more paranoid. Either way, there’s a possibility for statistical bias…
This isn’t a criticism of Shadowserver, which does some really useful work. I just don’t think I could recommend this as a realistic guide to comparative performance assessment…
Neither Luis nor David is known to shy away from the truth, whether about themselves or their products. But both seem fairly clear: Shadowserver is good; but this service is not yet ready. Shadowserver’s AV test suite will not give a realistic view of different AV products’ actual capabilities. Not yet. It needs more work. I’m certain that will happen. But for the time being at least, don’t use Shadowserver’s statistics to form an opinion on the relative merits of different AV products.
UPDATE from Shadowserver
It is difficult not to compare one vendor to the next, due to how we have the data structured on the pages. It would be impossible not to try and derive conclusions from those results. While that is the case, our goal is not to create a real comparison site for everyone to try and compete to see which AV vendor is better than the next… That is not our purpose…

That being said, our purpose in doing AV testing is simple. We wanted to know what each malware was supposed to be for categorization purposes, and of course just to see what happened. We collect a lot of malware daily and trying to find ways of tying our data together is important.

Because we are volunteers and a non-profit we really enjoy sharing what we find, no matter how odd. We even enjoy talking about when we screw something up or when we encounter something exciting. Everything here is for you, our public, to enjoy, discuss, and even criticize…
Shadowserver, 8 September.
The fundamental principle that underpins all security is the need to stop bad people or processes while allowing good people or processes. So security is about access control; and access control starts with identity. But identity on its own is not enough – we also need to understand purpose. We need to identify the person or process, and decide whether the intent is good or bad.
Consider passports in the physical world. They prove identity, but do not tell us intent: is the intent of that identity to do good or bad? We reinforce identity with lists of known intent: a whitelist of frequent flyers or VIPs whose intent is known to be good, and a blacklist of terrorists and bad people whose intent is known to be bad.
Cyber security is the same: based on identity and intent we maintain whitelists of known good (or at least acceptable) behavior, and blacklists of known bad (or unacceptable) behavior. Security is largely based on how we use these lists. In the main, we either allow what’s on the whitelist and prevent everything else; or we prevent what’s on the blacklist, and allow everything else. We tend to concentrate on one approach or the other: whitelisting or blacklisting.
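The difference in default posture between the two approaches can be stated in a few lines of code. This is a minimal sketch, with invented identities, purely to make the asymmetry concrete:

```python
# Minimal sketch of the two list-based policies described above.
WHITELIST = {"alice", "bob"}       # known-good identities
BLACKLIST = {"mallory"}            # known-bad identities

def allow_whitelist(identity: str) -> bool:
    """Default-deny: only listed identities get through."""
    return identity in WHITELIST

def allow_blacklist(identity: str) -> bool:
    """Default-allow: everything passes unless explicitly listed."""
    return identity not in BLACKLIST

# An unknown identity exposes the difference in default posture:
assert allow_whitelist("eve") is False   # unknown -> blocked
assert allow_blacklist("eve") is True    # unknown -> allowed
```

Everything that follows – zero-day threats slipping past AV, the administrative burden of approving software – flows from how each approach treats the unknown case.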
Keeping our computers clean is a good example. In the beginning the anti-malware industry simply blacklisted the bad things. But now the alternative is gaining traction: whitelisting the good things. We need to know which is best for maximum security.
In favor of blacklisting
The basis of anti-virus security is a blacklist of all known malware. The technology is based on blacklisting because in the beginning there were very few viruses. A primary advantage of blacklisting is that it is conceptually simple to recognize a few bad things, stop them, and allow everything else.
A second argument in favor of blacklisting is ‘administrative ease’. The maintenance of blacklists is something we can delegate to trusted third parties – in this instance the anti-virus companies. They in turn, particularly with the advent of the internet, can automatically update the blacklist for us. Basically, we don’t have to do anything.
Whitelisting is different: it is difficult to delegate to a third party the decision on which applications we need. “Whitelisting would be the perfect solution if people only have one computer that is never patched and never changed,” explains Dan Power, UK regional manager for anti-spam company Spamina. “Intellectually it makes perfect sense to only allow execution of the files that you know to be good.” But maintaining this whitelist is difficult. “The problem comes when you have to register or re-register every DLL every time you install a new, or patch an existing, application. Which people do you allow to install their own software, and which people do you stop? And which bits of software can make changes and which can’t? It becomes more of an administrative rather than intellectual issue.”
David Harley, senior research fellow at ESET LLC, agrees: “Whitelisting – which isn’t much different in principle to the integrity checking of yesteryear, requires more work by internal support teams and interferes with the end-users’ God-given right to install anything they like; which is more of a problem in some environments than in others.”
That’s not to say that everyone considers such delegation impossible. Last year Microsoft’s Scott Charney proposed a form of whitelisting for access to the internet; that is, only users with an internet health certificate for their computer should be allowed access. He has few supporters in the security industry. Power, again: “If computers were like televisions, with just one base operating system that was never changed, then it’s doable. But in the real world there are just so many variables associated with Windows and all the bits of software that have ever been written for Windows, that it’s almost impossible to be able to say what is and what is not a clean or healthy computer.”
Jennifer Gilburg, director of marketing at Intel, sees a different problem with this type of whitelisting. “Think of e-commerce,” she said. “An online trader would rather take the occasional fraudulent transaction than risk turning away a good transaction. So the thought of blocking a user from coming onto the internet until they are trusted would terrify many of the e-commerce providers who make their livelihood on the basis of the more users the better. I suspect that most of the e-commerce world would be lobbying very hard to put down this version of whitelisting.” So one of the strongest arguments in favor of blacklisting is the problems concerned with whitelisting.
In favor of whitelisting
However, Henry Harrison, technical director at Detica, points to a specific problem with blacklisting. “Anti-virus blacklisting,” he says, “is based on the idea of detecting things that are known to be bad and stopping them. But it simply cannot detect things that are bad, but not known.” Zero-day threats are not known simply because they are zero-day threats – and blacklisting merely lets them in as if they were good. “What we are seeing today,” continued Harrison, “is a lot of targeted, covert attacks – external infiltration into corporate networks with a view to the theft of valuable information using techniques that are specifically designed to evade blacklisting – and one possible response to zero-day threats is whitelisting.”
Lumension’s senior vice president, Alan Bentley, points to the sheer volume of malware as a problem for blacklisting. “Blacklisting,” he explains, “is threat centric. Whitelisting is completely the opposite: it’s trust centric. While blacklisting malware used to be adequate, the whole threat arena in the cyberworld has exploded to such an extent we now have to question whether blacklisting alone is still good enough.”
This is what Lumension does: it protects end-points (such as the PC on your desk) by making it administratively easy to create and maintain a whitelist of acceptable applications while supporting that with a blacklist of malware. “We believe that if you look at the two things together, whitelisting should absolutely be the first line of defense for any organization, because it simply stops everything that isn’t approved. But what it cannot do is remove malware once it has embedded itself into a machine.”
Bit9, like Lumension, is a company that concentrates on whitelisting. “The premise of application whitelisting is very simple,” explains Harry Sverdlove, chief technology officer. “What you want running on your system is a much smaller set than what you don’t want. We apply this model to other aspects of security in our life. For example, who do you let into your home? You don’t keep a list of everyone bad in the world. Rather, you only allow people into your home whom you trust.”
What we’re seeing is that the explosion in malware (in excess of 2 million new pieces of malware every month) is exactly what makes us question whether blacklisting remains realistic. “As a general rule, whitelisting is always more secure than blacklisting,” continues Sverdlove. “But it requires you to think more about how software arrives on your systems and whether or not it is trustworthy. That’s why a software reputation database can be an invaluable aid in whitelisting – it provides a trust rating on software, like a trusted advisor or background security check service, that can make the process more manageable. If everything you run comes from a well-known third party, approving software almost exclusively from a cloud based reputation service can be enough. In most cases, however, you also have your own custom or proprietary software. An effective and robust whitelisting solution allows you to combine both your own policies along with those from a reputation database.”
So we should ask ourselves whether we can harness the power of cloud-based reputation systems to generate our whitelists. Spamina already uses this methodology to produce its blacklist of spam sources, calling on six separate reputation blacklists, but never relying on just one (thus minimizing the chance of false positives).
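The multi-list idea is straightforward to sketch: only treat a source as bad when several independent reputation lists agree, so a false positive on any single list does not block a legitimate source. The list contents and threshold below are invented for illustration:

```python
# Three hypothetical, independently maintained reputation blacklists.
REPUTATION_LISTS = [
    {"198.51.100.7", "203.0.113.9"},
    {"198.51.100.7"},
    {"198.51.100.7", "192.0.2.55"},
]

def is_spam_source(ip: str, quorum: int = 2) -> bool:
    """Block only when at least `quorum` lists independently agree."""
    votes = sum(ip in blacklist for blacklist in REPUTATION_LISTS)
    return votes >= quorum

assert is_spam_source("198.51.100.7")      # on all three lists: blocked
assert not is_spam_source("203.0.113.9")   # on one list only: allowed
```

Raising the quorum trades detection rate for fewer false positives, which is exactly the balance Spamina is described as striking by consulting six lists rather than trusting any one.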
The anti-virus industry
“I’ve never advocated AV as a single defensive layer,” says ESET’s Harley. “Whitelisting can and does work for businesses, though it works best where there’s an authoritarian IT culture, rather than laissez-faire: restricted privileges and so on. I wouldn’t generally recommend it as a complete substitute for AV, but if it’s implemented properly, it’s a rational multi-layering strategy. It does, at a stroke, obviate most of the risk from social-engineering-dependent threats. In fact, most AV nowadays does have some whitelisting ability, though how it’s done and to what extent varies enormously.”
Ram Herkanaidu, security researcher at Kaspersky Lab UK, has a similar viewpoint and acknowledges the increasing relevance of whitelisting. “As the amount of malware increases,” he said, “I can see at some point it could be more efficient to only allow whitelisted files to be run in an organization. The idea has been around for a while but many things have to be taken into consideration, like software updates (especially Windows updates), remote users, smartphones and non-standard users. Ideally, as well as using the vendor’s whitelist you could have a local whitelist too. So while the idea of having a ‘trusted environment’ is very appealing, in practice it is difficult to achieve.”
Kaspersky, like other AV companies, is already looking into whitelisting. “We have been running a whitelist program to collect information about all known good files,” continued Herkanaidu. “The files are sent to us by our whitelist partners and also through our Kaspersky Security Network (KSN). This is our ‘neighborhood watch’ which users become part of when they install Kaspersky Internet Security. Information about all unknown files is sent to our ‘in the cloud’ service and automatically analyzed. If malicious, all computers within the network are protected. If it is not malicious it will be added to our whitelist. This has two benefits for our customers: it will reduce the risk of false positives, and will increase scan speeds. In this way we have been able to collect information – not the files themselves – about millions of files.”
Whitelisting or blacklisting?
So what’s our conclusion? Whitelisting is fundamentally the better security solution. If something isn’t on the list, it gets stopped – the default position for whitelisting is secure. But with blacklisting, if something isn’t on the list it gets allowed – the default position for blacklisting is insecure. Against this, the administrative effort involved in blacklisting is minimal compared to whitelisting; and the difference increases as the size of the whitelist increases. However, the efficiency of blacklisting decreases as its size increases. You could almost say that whitelisting is best with a small whitelist, while blacklisting is best with a small blacklist. However, since neither of these situations is likely to occur in the real world, our conclusion is simple: you need both.
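One way to read that conclusion in code: check the blacklist first, so known malware is actively identified and removed even if it somehow reached the machine, then fall back to the whitelist’s safe default-deny for everything unknown. The hashes and policy names below are illustrative only, not any vendor’s actual design:

```python
# Layered decision combining both lists, per the conclusion above.
WHITELIST = {"app-hash-1", "app-hash-2"}   # approved applications
BLACKLIST = {"malware-hash-9"}             # known-bad fingerprints

def decide(file_hash: str) -> str:
    if file_hash in BLACKLIST:
        return "block-and-clean"   # blacklist layer: removal, not mere refusal
    if file_hash in WHITELIST:
        return "run"
    return "deny"                  # unknown: whitelisting's secure default

assert decide("app-hash-1") == "run"
assert decide("malware-hash-9") == "block-and-clean"
assert decide("never-seen") == "deny"
```

The ordering matters: putting the blacklist check first captures Lumension’s point that whitelisting alone cannot remove malware already embedded in a machine.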
Sophos showed the way. It was the first major anti-virus company with free AV for Mac. In a masterly PR stroke it gave away what it could not sell: Mac AV for home users. The rest of the industry was thrown into catch-up.
And finally it has started. AVAST Software has launched a new free Mac AV product, available to users of OS X 10.5 and newer.
“It’s time for Mac users to start thinking about an antivirus app and this beta shows what they will need for their protection,” said Ondrej Vlcek, CTO of AVAST Software. “The Mac has long had a ‘cloak of invulnerability’ because its small market share made it a fringe target for malware. As Mac sales surge it is becoming a natural target for malware such as the Pinhead and Boonana Trojans or the MacDefender fake antivirus.”
For the moment it’s still in beta – if you want to try it, you can get it here. Otherwise you’ll have to wait for the release version, which should be announced soon.
On 26 January, ComputerWorld published details of an interview with Justin Rattner, CTO at Intel. Rattner described a new hardware-based solution to malware:
“Right now, anti-malware depends on signatures, so if you haven’t seen the attack before, it goes right past you unnoticed,” said Rattner, who called the technology “radically different”.
“We’ve found a new approach that stops the most virulent attacks. It will stop zero-day scenarios. Even if we’ve never seen it, we can stop it dead in its tracks,” he said.
Intel developing security ‘game-changer’
Rattner didn’t really give much away – except that he hopes the technology will be available this year. So what is it?
Alan Bentley, SVP International at endpoint security firm Lumension, clearly believes that Intel’s solution is some form of whitelisting (allowing only known good things and stopping everything else) rather than our current blacklisting approach (allowing everything by default, but using AV to stop known bad things).
“A shift in security thinking needs to happen to keep malware off our devices and away from our critical data,” he explained. “Trying to shut malware out with signature-based security technologies is like herding jelly. Signature-based security was not designed to protect against the volumes of malware that we are seeing today. If you think that known malware signatures exceed 30,000 each day, the concept of protecting against unidentified malware is more than difficult, especially if you are trying to predict what malware might look like. With this approach, it is of little surprise that malware has a habit of falling through the cracks.
“If you flip security on its head and only allow the known good onto a device or a computer network, malware protection is significantly improved.”
In the interview, Rattner seems to indicate that McAfee (which Intel bought for $7.7bn last year) is not really involved in this project.
Rattner said Intel researchers were working on the new security technology before the company moved to buy security software maker McAfee. However, he said that doesn’t mean that McAfee might not somehow be involved.
But if this technology predates the McAfee acquisition, then it is most likely to be at chip level rather than the software level. And that rather suggests that it is to do with the Trusted Computing Platform (TCP). The biggest problem with TCP is that it involves one person or organisation imposing its own views on others. Now, if that’s a company controlling how its own computers are used, well there’s not too much wrong with that. But if it’s Intel or Microsoft – or government – controlling what can be run on privately owned computers, then that’s highly dangerous.
Lumension’s CEO, Pat Clawson, has worries along these lines, suggesting that a “pressing concern is whether it is socially acceptable for Intel to impose security on the device. Whilst it might make sense in the consumer mobility space, governments and enterprises will surely want to make their own security decisions, not have it forced on them at the chip level.” But it is indeed the consumer level – that’s you and me, by the way – that is possibly under the greatest threat. “It makes sense,” he adds, “for its focus to be on the consumer market, which represents a significant portion of both Intel and McAfee’s revenues.
“Security innovation on the mobile device would certainly be an interesting and most likely welcome addition to the consumer handset market. With current security models proving ineffective, new levels of intelligence and a change in mindset are needed to protect today’s IT environment.”
What worries me is that this vision seems to be exactly what Scott Charney was proposing with his ‘internet health certificate'; that is, we will not be allowed to access the internet unless we can prove our computers are clean – and one way to prove they are clean is with Intel controlling what can run on them. Was Charney playing John the Baptist to Rattner’s Christ: a voice crying in the wilderness, but preparing the way for our security saviour?