I often worry about the consequences of what Siva Vaidhyanathan calls Googlization, the way Google is changing and disrupting the creation and dissemination of ideas. I've resisted using Google services like Gmail and Google Docs, despite their popularity and, in some cases, their convenience. I've mostly been unwilling to allow Google to mine and profit from my information, but this week a new concern reared its head.
Google attempts to protect people from malware by using its indexing system to detect malware on sites and to mark them as potentially dangerous. You can see this in Google search results marked "This site may harm your computer."
All the popular web browsers, including Firefox, Safari, and Chrome, rely on Google's Safe Browsing reports of unsafe sites. When a user tries to visit such a site, these browsers display a full-page warning message instead.
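For the curious: the mechanism browsers and other services consult is now exposed as a public lookup API. A minimal sketch of what such a query looks like, using the request shape of Google's current Safe Browsing v4 `threatMatches:find` endpoint (which postdates this post; the client ID and URL here are placeholders):

```python
import json

def build_threat_lookup_request(urls):
    """Build the JSON body for a Safe Browsing v4 threatMatches:find
    lookup. Illustrative only: field names follow Google's current
    public API; "example-client" is a placeholder client ID."""
    return {
        "client": {"clientId": "example-client", "clientVersion": "1.0"},
        "threatInfo": {
            "threatTypes": ["MALWARE", "SOCIAL_ENGINEERING"],
            "platformTypes": ["ANY_PLATFORM"],
            "threatEntryTypes": ["URL"],
            # One entry per URL to be checked against Google's lists.
            "threatEntries": [{"url": u} for u in urls],
        },
    }

body = build_threat_lookup_request(["http://example.com/"])
print(json.dumps(body, indent=2))
```

In practice this body is POSTed (with an API key) to Google, which answers with any matching threat entries; an empty response means the URL is not currently flagged.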
Because my site is popular enough to be indexed frequently by Google, its system had already flagged the site as a malware distributor before I was even aware that the server had been compromised. This was inconvenient, to say the least, because removing the warning requires notifying Google that the site has been cleaned so that it can initiate a fresh check of the contents.
Take close note of this process: one must sign up for a Google account in order to rescue one's site from having been marked unsafe by Google.
Unbeknownst to me, Twitter also uses Google's unsafe-site warnings to flag accounts as spammers or malware distributors. Before I had even finished restoring the hacked files on my site, Twitter had suspended my account because my profile links to this site.
Google reindexed and cleared my site quickly. When I contacted Twitter, they sent the following "helpful" advice:
If you feel you've been suspended in error, please reply to this email with a short explanation if you haven't already, and don't forget to include your user name. We will do our best to get back to you within 30 days.
My site is more important than my Twitter account, but given that I spoke at a conference yesterday at which Twitter was in common use, not to mention that my annual performance of Twittering Rocks is only days away, it is inconvenient to have my account disabled. It's also mildly embarrassing, because the big red notice on my account page suggests that I am a spammer.
Thanks go to the kind souls who tried to advocate for me on Twitter, an act that seems to have done no good. Perhaps my Twitter followers (who seem to be disappearing slowly, for obvious reasons) might consider tweeting this post as a way of spurring discussion about the policy. Except, as Clint Hocking points out in a comment below, you can't link to this site from Twitter, since Twitter still seems to believe the site is dangerous; a URL shortener works around the block. (Update: Twitter restored my account in four days rather than thirty.)
This particular sort of cascading failure sheds light on an often unseen power Google holds, one that extends far beyond privacy and personal information. Web browsers and services like Twitter trust Google's reports of online danger implicitly. Yet Google's system makes no distinction between people who run malicious sites and people who get hacked and then fix their sites. Neither Google nor Twitter notified me at all, despite the fact that both have my email address from my accounts with those services, nor did either give me fair warning to remedy the problem before taking action. Instead, they simply treated me like a cybercriminal.
As Google offers more and more business-to-business services like malware detection, and as more and more third parties use those services, this particular type of Googlization can only grow in impact. And the worst part is, you can't do anything about it. You can choose not to maintain a Google account or to use Google services, but you can't prevent Google from maintaining you.