Matt Cutts has a blog post entitled Closing the loop on malware where he writes

Suppose you worked at a search engine and someone dropped a high-accuracy way to detect malware on the web in your lap (see this USENIX paper [PDF] for some of the details)? Is it better to start protecting users immediately, or to wait until your solution is perfectly polished for both users and site owners? Remember that the longer you delay, the more users potentially visit malware-laden web pages and get infected themselves.

Google chose to protect users first and then quickly iterate to improve things for site owners. I think that’s the right choice, but it’s still a tough question. Google started flagging sites where we detected malware in August of last year.

When I got home yesterday, my fiancée informed me that her laptop was infected with spyware. I asked how it happened and she mentioned that she'd been searching for sites to pimp her MySpace profile. Since we'd talked in the past about visiting suspicious websites, I wondered why she'd chosen to ignore my advice. Her response? "Google didn't put the 'This Site May Harm Your Computer' warning on the link so I thought the site was safe. Google failed me."

I find this interesting on several levels. There's the fact that this feature is really useful and engenders a sense of trust in Google's users. Then there's the palpable sense of betrayal on the user's part when Google's "not yet perfectly polished" algorithms for detecting malicious software fail to flag a bad site. Finally, there's the observation that instead of blaming Microsoft, which produces both the operating system and the Web browser that were infected by the spyware, she chose to blame Google, which produced the search engine that led her to the malicious site. Why do you think this is? I have my theories…

Now playing: Hurricane Chris - Ay Bay Bay