Anti-Malware Testing Standards Organization: a dissenting view
On June 15 I posted the article AMTSO: a serious attempt to clean up anti-malware testing; or just a great big con? The purpose of that article was to look at AMTSO, the Anti-Malware Testing Standards Organization, and I invited AMTSO members to justify themselves. Now I want to give a dissenting view, largely my own, and to look at AMTSO from outside the tent. I shall be asking two principal questions:
- is AMTSO serious about improving the value of anti-malware testing?
- who does AMTSO serve?
Is AMTSO serious about improving the value of anti-malware testing?
I recently blogged about two new threats discovered in the wild by M86 Security: Asprox returns: fast-flux SQL injection attack; and Skype: old vulnerability, new exploit – in the wild. In both cases, M86 ran the malware it had discovered through VirusTotal (a respected site you can use to see what anti-malware products make of any submitted file). For the former, VirusTotal showed that only 7 out of 42 anti-malware products detected the Asprox malware; for the latter, only one product out of the 42 detected the Skype malware.
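The arithmetic behind those VirusTotal figures is easy to sanity-check. A minimal Python sketch (the function name and the shape of the data are mine, not VirusTotal's; the detection counts are the ones reported above):

```python
# Hypothetical sketch: turning per-file detection counts of the kind
# VirusTotal reports into a percentage. The figures below are the two
# M86 submissions discussed in the article.

def detection_rate(detected: int, total: int) -> float:
    """Return the percentage of scanning engines that flagged the sample."""
    if total <= 0:
        raise ValueError("total engine count must be positive")
    return 100.0 * detected / total

asprox_rate = detection_rate(7, 42)   # fast-flux SQL injection sample
skype_rate = detection_rate(1, 42)    # Skype exploit sample

print(f"Asprox sample: {asprox_rate:.1f}% of engines detected it")
print(f"Skype sample:  {skype_rate:.1f}% of engines detected it")
```

Roughly 17 per cent and 2 per cent respectively: a long way from the 97-to-100 per cent figures quoted in vendor marketing.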
This would seem at odds with all of those marketing claims we see from the anti-malware industry, which state that their particular product detects between 97 and 100 per cent of all malware in the wild. An example is the VB100 award issued by VirusBulletin, one of the leading anti-malware test organisations. In VB’s own words:
The VB100 award was first introduced in 1998. In order to display the VB100 logo, an anti-virus product must have demonstrated in our tests that:
- It detects all In the Wild viruses during both on-demand and on-access scanning.
- It generates no false positives when scanning a set of clean files.
The product must fulfil these criteria in its default state.
I cannot think of a single anti-malware product that doesn't boast similarly high scores, if not from VB, then from ICSA or West Coast Labs. But Virus Bulletin and VirusTotal cannot both be right. The explanation lies in Virus Bulletin's phrase 'in the wild', which links to this:
The WildList Organization collects monthly virus reports from anti-virus experts around the world. The data from the reports are compiled to produce The WildList – a list of those viruses currently spreading throughout a diverse user population. A virus that is reported by two or more of the WildList reporters will appear in the top-half of the list and is deemed to be ‘In the Wild’.
In recent times, the list has been used by Virus Bulletin and other anti-virus product testers [such as ICSA and West Coast Labs] as the definitive guide to the viruses found in the real world.
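The two-reporter inclusion rule quoted above can be sketched in a few lines. This is my own illustration, with invented virus and lab names, not WildList Organization code:

```python
# Sketch of the WildList inclusion rule: a sample is deemed 'In the Wild'
# (top half of the list) only when reported by two or more distinct
# reporters. All names and data here are invented for illustration.
from collections import defaultdict

def in_the_wild(reports):
    """reports: iterable of (virus_name, reporter) pairs.
    Returns the set of viruses reported by >= 2 distinct reporters."""
    reporters = defaultdict(set)
    for virus, reporter in reports:
        reporters[virus].add(reporter)
    return {virus for virus, who in reporters.items() if len(who) >= 2}

reports = [
    ("W32/Example-A", "lab1"),
    ("W32/Example-A", "lab2"),   # two distinct reporters -> 'In the Wild'
    ("W32/Example-B", "lab1"),
    ("W32/Example-B", "lab1"),   # same reporter twice -> not counted
]
print(in_the_wild(reports))      # {'W32/Example-A'}
```

Note what the rule excludes: any malware seen by only one researcher, however widespread it may actually be.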
So 'in the wild' is really only a subset of the viruses genuinely in the wild: it means just those viruses included in the WildList. It gets worse.
- the WildList requires submission of a virus sample from at least two separate researchers
- many of the researchers are the anti-virus companies themselves
- in-built latency in the process means it can take three months from the detection of a new virus to its inclusion in the WildList used in a test
- this latency means that, almost by definition, the WildList includes little, if any, of the biggest threat to end-users: zero-day malware
- members of the WildList Organization get to see the WildList when it is published; and yes, that includes the majority of AV companies
So what does this all mean? It means that the WildList is not a list of viruses in the wild, but a list of the majority of viruses that were in the wild several months ago. It means that the anti-virus test is against a set of viruses that the anti-virus companies already know about. It means that anything less than 100% success against the WildList is probably down to incompetence in the anti-virus company. It means that the average anti-virus buyer is being conned about the true situation.
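The gap this creates can be made concrete with a toy model: a signature scanner whose database contains exactly the published WildList (which, as noted, the vendors have all seen) and nothing newer. All names below are invented:

```python
# Toy model (all names invented): a scanner that recognises every sample
# on the published, months-old WildList, but nothing discovered since.
wildlist = {"Virus-A", "Virus-B", "Virus-C"}        # the shared, published list
zero_day = {"Fresh-Asprox", "Fresh-Skype"}           # new, unshared malware
signatures = set(wildlist)                           # vendor has seen the list

def score(samples, signatures):
    """Percentage of the sample set the scanner detects."""
    hits = sum(1 for s in samples if s in signatures)
    return 100.0 * hits / len(samples)

print(score(wildlist, signatures))                   # 100.0 -> a VB100-style pass
print(score(wildlist | zero_day, signatures))        # 60.0 -> the user's reality
```

A perfect score against the test set, and a very different score against what users actually face: exactly the distortion described above.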
So the answer to my first question, is AMTSO serious about improving the quality of anti-malware testing, is 'no'. If it were, it would not allow its own members to use a test process that so clearly misleads the public.
Who does AMTSO serve?
Let’s not prevaricate: the question is ‘does AMTSO serve the anti-malware user, or itself, the anti-malware industry?’ To answer this question I’m going to look at two things: the AMTSO Fundamental Principles of Testing, and the application of those principles by its Review Board.
The very first principle, headlined Testing must not endanger the public, includes the categorical statement: "In addition, new malware must not be created for testing purposes." Why not? How can you test the true heuristic and behavioral capabilities of an AV product without testing it against a brand new sample that you absolutely know it has never experienced before? To include this restriction under the banner of not endangering the public is also misleading: there is nothing inherently incompatible between developing a new virus in the lab and keeping the public safe.
I am not alone in being puzzled by this. Ed Moyle of SecurityCurve is similarly surprised:
Yes, yes… it’s terrible to create new malware – completely unethical. Yup, under any circumstances. Even if it doesn’t leave the lab, even if it doesn’t replicate, and even if it doesn’t have a hostile payload. Yep – still terrible. We know this because shady, fly-by-night organizations like Consumer Reports, University of Calgary, or Sanoma State are always springing up like mushrooms. Their clear intent is to bring down the Internet, wreak havoc, and otherwise mock everything that is just and holy… Sigh. I just can’t get my head around the argument.
SecurityCurve, June 16th, 2010
The problem for AMTSO is that one very obvious reason comes to mind. Could it be that the inclusion of new samples would increase the number of 'fails' in the test, and thereby lower the success rates so beloved by the industry for marketing purposes? AMTSO could respond that it isn't a real 'fail' since the malware doesn't actually exist in the wild; but as a user I would reply that it is more important to get an idea of how the product might respond to zero-day threats. So is this an example of AMTSO looking after itself?
Let’s move on to the Review Board. There are at the time of writing just two reviews: one on a Dennis Technology Labs test report, and one on an NSS Labs report. One AMTSO review is favourable and the other is not. I do not know enough about either of the testing companies or their test methodologies to comment on the reports themselves, but I think it is illuminating to compare the framework of the AMTSO reviews.
Dennis Technology Labs is a member of AMTSO. The testing was paid for by Symantec, a member of AMTSO. Symantec performed very well in these tests. The review of the report by AMTSO was requested by Dennis Technology Labs. The review was favourable. The test report is effectively endorsed.
NSS Labs is not a member of AMTSO (although it used to be). The testing was paid for by NSS independent of any anti-malware vendor (in the hope of recouping the cost via sales of the report). Sophos performed very badly in the report. The review of the report by AMTSO was requested by Sophos. The review was not favourable. The test report is effectively dismissed.
What does this look like? To me it looks like a duck; and if it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck. AMTSO has its say about the NSS test report in its published review. I asked Rick Moy, President of NSS Labs, for his view of the AMTSO review. On AMTSO itself, he commented: "I have had drinks and long discussions with 90% of the folks in AMTSO. There is some very old-school thinking afoot, and a fair amount of protectionism. While they have good intentions, there is probably just too much business interest being represented."
But what about their review of his test report?
Every vendor reviewed the methodology before. In fact I had sent it to them in 2008 and solicited comments before running the test. Every vendor but Sophos cooperated and gave us software and reviewed settings of the products. None complained about the methodology… But when the results came out, folks from AVG, ESET, Symantec and especially Sophos went crazy.
I cooperated for months of craziness. They all essentially demanded we give them free consulting and tear through samples to find what was wrong with our test. Well, it was a real-world test of fresh malware that had not been shared around amongst the vendors, that simple. Sophos even made brazen false claims that we had not contacted them. After much harangue, we produced email correspondence with the chairman of AMTSO and Lab Director at Sophos showing that we had, multiple times, and even reversed samples with them to help them troubleshoot. No sanctions or reprimand was made. Instead they redoubled their efforts to discredit the test.
Rick Moy, President, NSS Labs
So the answer to my second question, who does AMTSO serve, is that it serves the anti-malware industry: it is self-serving. In fairness, it rarely claims to act in the best interests of the user (except when it is trying to justify its guidelines). There are no user members, and it is not open to users: "AMTSO membership is open to academics, reviewers, publications, testers and vendors, subject to guidelines determined by AMTSO." But in that case, it should keep itself to itself: it should not send out press releases, nor make its website and its judgments available to users.
There are three main conclusions I draw from this look at AMTSO.
Firstly, the biggest problem I have with AMTSO is that it declares itself to be the sole arbiter of what is good in anti-malware testing: it is the prosecutor, judge and jury. I find this intensely arrogant. The sole judge of a test should be the user. The tester has to prove to the user that the tests are valid. If the vendor objects, he has to prove to the user that the tests are invalid. The idea that the vendor has only to prove his case to other vendors with identical vested interests is patently absurd and would be dismissed in any other industry.
Secondly, if AMTSO were serious about setting and maintaining testing standards for anti-malware products in accordance with its own charter, it would ban the WildList in its current form. WildList testing is dangerous: users who buy security on the basis of 'detects ALL viruses in the wild' are likely to believe that they are completely safe from viruses when they most certainly are not, and may consequently behave less carefully on the internet.
And thirdly, AMTSO should immediately recuse itself from setting anti-malware testing standards unless and until an open, independent, user-centric body can be established. To this body the vendors should have every right to make representation; and to this body the testing industry (separately) should have every right to make representation. Only then are we likely to have anti-malware testing standards that are independent, valid and trustworthy.
I have no beef with any of the anti-malware companies. They are essential to our security; and we all, every one of us, must have at least one of their anti-malware products installed on our computers for our security. I have no beef with any of the individuals within AMTSO. They all have far greater knowledge of threats and solutions on the internet than do I. My beef is with AMTSO itself. It is, in its present form, a stain on an otherwise excellent industry.