Ed posted comparing IBM and Microsoft’s security update records. He missed a few important details, though that’s understandable given that he’s not a security dude. Just to set the record straight, though, I wanted to point out something that security folks learn pretty quickly: simplistic comparisons that claim that “vendor X has better security than vendor Y based on patches” are worthless. Any time you see one, there are some hard questions you should be asking.
First, what products are included? We don’t know what criteria McAfee used to make their pretty graphs. Did they include Office updates? Updates for Windows 2000 before it went EOL? Windows Media Player? Who knows? Reputable researchers and vendors will always include their source data; if you don’t see it, you should be wary.
Second, what basis of comparison is being used? Most broad-based comparisons of vendors are flawed because they mix dissimilar items, usually applications and OSes. You can say “Microsoft had to issue more patches than IBM”, but that’s meaningless unless you’re talking about specific products. A more interesting question would be to ask something like “Who had more patches to install: an Exchange 2003 admin on Windows 2003, or a Lotus Domino 6.5 admin on RHEL?” Well, according to Secunia, the numbers break down like this:
- 8 patches for Exchange 2003 + 100 patches for Windows Server 2003 Standard, versus
- 22 patches for Domino 6.5 + 212 patches for Red Hat Enterprise Linux 4
All of a sudden the comparison doesn’t favor IBM quite so much! A fairer comparison might leave the operating system out of it entirely (after all, there are more Notes seats on Windows than on Linux), but even then there’s still room for argument: Secunia doesn’t break down Domino R6 vs 6.5, so the count of 22 may include vulns that aren’t relevant to a 6.5 shop.
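If you want to play along at home, the roll-up is trivial; the product names and counts below are just the Secunia figures quoted in the list above, summed per stack:

```python
# Patch counts per product, as quoted from Secunia above.
exchange_stack = {"Exchange 2003": 8, "Windows Server 2003 Standard": 100}
domino_stack = {"Domino 6.5": 22, "RHEL 4": 212}

for name, stack in (("Exchange on Windows", exchange_stack),
                    ("Domino on RHEL", domino_stack)):
    print(f"{name}: {sum(stack.values())} patches total")
# → Exchange on Windows: 108 patches total
# → Domino on RHEL: 234 patches total
```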
Third, counting patches alone leaves out some important dimensions. It’s like counting the money in your wallet by counting bills and ignoring denominations: would you rather have ten $1 bills or one $100 bill? Other factors to evaluate include the severity of each vulnerability and how long it takes, from a vuln’s emergence (or disclosure), for the vendor to get a patch out: the so-called “days of risk” metric.
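To make that concrete, here’s a minimal sketch of the difference between a raw patch count, a severity-weighted count, and a days-of-risk total. The severity weights and dates are invented for illustration; in practice the ratings would come from the vendor or a tracker like Secunia:

```python
from datetime import date

# Hypothetical records: (severity label, weight, disclosure date, patch date).
# Weights are made up for illustration, not any vendor's official scale.
patches = [
    ("critical", 10, date(2006, 1, 3), date(2006, 1, 10)),
    ("moderate", 3, date(2006, 2, 1), date(2006, 3, 15)),
    ("low", 1, date(2006, 2, 20), date(2006, 2, 22)),
]

raw_count = len(patches)                                     # what the graphs count
weighted = sum(w for _, w, _, _ in patches)                  # severity-weighted count
days_of_risk = sum((fixed - disclosed).days
                   for _, _, disclosed, fixed in patches)    # exposure window, summed

print(raw_count, weighted, days_of_risk)
# → 3 14 51
```

Three metrics, three very different stories: a vendor with fewer patches can still leave you exposed longer, and a pile of low-severity fixes isn’t the same as one critical.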
Fourth, not all vendors tell the truth. More kindly, not all vendors tell the whole truth and nothing but. For example, IBM doesn’t include severity ratings on its security page, so you can’t judge the severity of a reported vuln unless you’re already pretty knowledgeable. Oracle is flat-out dishonest in some of its security patch release notes. When you’re comparing vendor security, you should include the nature, frequency, and accuracy of their security-related disclosures and communications.