
Product testing: valuable or meaningless?

February 16, 2010

Last Thursday Mike Rothman wrote a thoughtful piece on The Death of Product Reviews (Securosis blog). I’d like to take up this argument. “Unfortunately, the patient (the product review) has died,” he says. “The autopsy is still in process, but I suspect the product review died of starvation. There just hasn’t been enough food around to sustain this legend of media output.” He’s right: there simply isn’t enough advertising revenue to pay for genuine magazine reviews.

But I think the subject has evolved rather than died, and the evolved form already has the worm of sickness at its heart. The product review has been replaced by the product certification, the product test. Government, as always, gets the blame (although in fairness, what has happened is not really government's fault): certifying or testing products is now essential because of government requirements. As a result, the money that could have gone into the advertising that once supported independent magazine reviews now funds the far more expensive, but much more necessary, government certification schemes.

In the main, certification is less testing than validation: validation of marketing claims. Here's the problem: the vendor simply asserts its product's strengths and ignores its weaknesses, and hey presto, it gets a government-recognized seal of approval, irrespective of otherwise glaring faults that are simply never examined.

That's not the worst of it. If the product in question is in any way anti-malware, the vendor can simply claim that it kills 99% of all known germs. The validation process will inevitably 'prove' this to be true, and the company gains a marketing bonus that is actually meaningless. Why? Because the product will inevitably be tested against the Wild List.

There are two problems here. Firstly, it takes several months for the latest Wild List to be compiled and released, so it is unlikely to contain any seriously testing zero-day malware. Secondly, the company being tested is more than likely a member of the WildList Organization, and as such will have already seen the list before the test. In those circumstances, anything less than 100% success should be seen as incompetence.

The bottom line is that magazine reviews have been replaced by independent testing that purports to be more rigorous and exacting, but ultimately means less. The true value of certification is not what it says about the product, but merely that the manufacturer has sufficient financial stability to afford the cost.

  1. March 28, 2010 at 5:45 pm

    Kevin and Mike are correct. Tests and certifications based on the Wild List are almost meaningless indicators of protection in the real world. Can anyone really argue that a few hundred pre-agreed-upon samples compare to what is really IN THE WILD? These tests are favored by vendors because they offer 'validation' to the buyer, who generally has no idea how the list is created or by whom. #falsesenseofsecurity

    That’s why NSS Labs started testing with the gloves off. Free report: nsslabs.com/anti-malware


    • April 4, 2010 at 1:28 pm

      All sample sets are “pre-agreed-upon”. The difference here is between samples that are selected and (hopefully) verified and classified by the tester, and samples that have been through a community validation process. That doesn’t make a static ItW test better than a good dynamic test. The problem comes when the tester’s validation and classification process is a black box, so you have to trust that it’s competently executed.

      Actually, “In the Wild” has very little meaning in today’s threat landscape. And that’s a theme I’ll be coming back to later this year.


    • April 5, 2010 at 12:31 am

      Interestingly (although slightly out of context) Luis Corrons, technical director at PandaLabs, made the following comment: “Using the Wild List as a test to say this particular anti-virus is good is impossible – and most of the tests used could be considered useless anyway.”



      • April 5, 2010 at 2:22 pm

        I don’t disagree. WL testing tells you very little about the effectiveness of detection using reputation services. And in general, detection testing has a margin for error so wide that its usefulness is indeed doubtful.


  2. Nuno Mendes
    February 18, 2010 at 2:00 am

    I'm from Portugal, and you definitely wouldn't want to compare a known test set (the Wild List) with whatever magazines select as a 'test set'. In my opinion, it's perfectly fine for magazines to keep doing reviews (comparative or standalone), but it's rare for them to publish the methodology they adopt. I definitely trust something I know more than something I don't 🙂


  3. February 17, 2010 at 12:57 pm

    In fact, I don't disagree on every point. It's more a matter of emphasis. But that's a big topic for (my) short blog, and I hope to come back to it later and in much more detail.
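One reply above notes that detection testing has "a margin for error so wide that its usefulness is indeed doubtful". As a rough editorial illustration of why (this sketch is not from the original discussion, and the sample counts are invented): with a test set of only a few hundred samples, the binomial confidence interval around a measured detection rate is wide enough that a two-point difference between products can be statistical noise.

```python
import math

def detection_ci(detected, total, z=1.96):
    """Normal-approximation 95% confidence interval for a detection rate."""
    p = detected / total
    margin = z * math.sqrt(p * (1 - p) / total)
    return max(0.0, p - margin), min(1.0, p + margin)

# Two hypothetical products tested against the same 300-sample set:
lo_a, hi_a = detection_ci(291, 300)  # 97.0% measured detection
lo_b, hi_b = detection_ci(285, 300)  # 95.0% measured detection
print(f"Product A: {lo_a:.3f} to {hi_a:.3f}")
print(f"Product B: {lo_b:.3f} to {hi_b:.3f}")
# The two intervals overlap, so the apparent 2-point gap
# between A and B may tell you nothing at all.
```

A Wilson score interval would be slightly more accurate near 100%, but the point stands either way: small sample sets make fine-grained rankings unreliable.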


