Mozilla releases anti-phishing report

So, the fine folks at Mozilla have released their own study comparing the anti-phishing features of Firefox 2.0 with those of IE 7. Unsurprisingly, they claim that Firefox beats IE 7 handily, which is the opposite of what we found in 3Sharp’s report.

First off, I’m glad the Firefox team is doing this kind of testing. I always want to see as much data (and as much data about the data) as possible. That’s why I like to read both Car and Driver and Road & Track: to see where their data agree and where they don’t.

Anyway, reviewing the study didn’t take long, as it’s only 3 pages. (Interestingly, SmartWare, the company that authored the study, doesn’t seem to be distributing it; the only copy I could find is at the Washington Post. It’s not available yet from Mozilla, either. Go figure.) Here are my initial thoughts:

  • They didn’t make any attempt to score false positives. This is a critical omission, because a filter that produces a significant number of false positives will quickly train users to ignore its legitimate warnings. (Interestingly, PhishTank’s own FAQ agrees with me.) IMHO, any study that doesn’t include false positive data is meaningless; there’s a quick sketch of what such scoring might look like after this list.
  • Speaking of “doesn’t include”: the report only looked at IE and Firefox. I would have liked to see some other products (note: not SiteAdvisor) included to give a broader basis for comparison.
  • The Firefox report mentions that IE can warn or block, but it doesn’t credit IE with any actual warnings. This is a significant omission, although we can’t tell how significant because…
  • The Firefox report doesn’t include any information about the actual URLs used. They promise to publish this data “soon”, but until they do there’s no way to gauge its quality. (I understand that they’ll publish the data later today; it’ll be interesting to see the raw stuff.) Of course, we published all our URL data in our report.
  • Speaking of data: the Firefox team used 1,040 phish from PhishTank, a community phish-reporting service, gathered over a two-week period. That’s a good number of phish, but the study period was awfully short, and all of the phish came from a single source. We used multiple sources, including honeypots and user reports, to generate our list.
  • Because they used a community-generated feed of phish, there’s no way to tell which of the phish had also (or already) been reported to other systems that may have fed into the “Ask Google” or Microsoft data feeds. By contrast, we took great pains to try to find phish that we knew hadn’t been submitted to Microsoft’s URL reputation service.
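To make the false-positive point concrete, here’s a minimal sketch of what scoring both sides of a filter might look like. Everything in it is hypothetical (the check_url callable, the URL lists); none of it comes from any of the studies discussed. The idea is simply that you run the filter against a known-bad list and a known-good list, and you report both numbers.

```python
# Hypothetical scoring harness. check_url stands in for whatever
# filter is under test (IE 7, Firefox 2.0, a toolbar) and returns
# True when the filter flags a URL as a phish.
def score_filter(check_url, known_phish, known_good):
    caught = sum(1 for url in known_phish if check_url(url))
    false_alarms = sum(1 for url in known_good if check_url(url))
    catch_rate = caught / len(known_phish)
    false_positive_rate = false_alarms / len(known_good)
    return catch_rate, false_positive_rate

# Toy example: a filter backed by a static blocklist.
blocklist = {"http://phish.example/login"}
catch, fp = score_filter(
    lambda url: url in blocklist,
    known_phish=["http://phish.example/login",
                 "http://phish.example/verify"],
    known_good=["http://bank.example/"],
)
print(f"catch rate: {catch:.0%}, false positive rate: {fp:.0%}")
```

A filter that catches 95% of phish but also flags a few percent of legitimate sites may well be worse in practice than one that catches somewhat less with no false alarms, because users stop trusting the warnings. That’s why reporting only the catch rate tells half the story.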

So, my personal opinion is that this study isn’t as rigorous as the 3Sharp study or the one done by Dr. Lorrie Cranor et al. of Carnegie Mellon. Both our studies found that the version of the Google Toolbar available at the time lagged other products, sometimes by a wide margin. Some of the difference between Mozilla’s results and the ones we and CMU obtained is due to updates in the tool, but some is no doubt due to differences in methodology as well, and those are very difficult to discount.

Update: looks like Sandi independently came up with many of the same objections.
