Steven Levy’s In the Plex does an excellent job of giving an overview of Google as it evolved as a search product, a company, a technological powerhouse, and even as a culture. He clearly benefited from a lot of access to top-level officials and a deep understanding of how Google works and what it aims to do — first be the most comprehensive search engine out there, then get people to use the Internet more often (including browsers and phones) and in ways that would allow Google to use its AdWords technology, and then to use the Internet and search to promote democracy and do other good deeds. (Note that one mission does not replace the other; each builds on the foundation already in place.)
But I wish there were more dialogue included with people outside of Google. I could never fully assess critiques of Google because the only people who really got a sturdy soapbox were those who were committed to Google and its success.
Because of this, I am left with some uncomfortable questions about Google and its algorithms.
The Importance of Defining Value In Google’s Decision Making
Early in the book, Levy talks about how comprehensiveness and relevancy were key to a successful search engine and to pushing Google ahead of its competitors, but it wasn’t until I heard Nicole Wong, former Deputy General Counsel for Google, speak at ONA 11’s Law School For Digital Journalists that I fully grasped the significance of comprehensiveness in ensuring that the algorithm is not tampered with.
Wong explained that with Google’s search engine, the value Google wanted to offer users was comprehensive search. That meant the algorithm had to be king, and that Google regularly turns down requests to remove links (she said there is a spike in those requests whenever it is election season anywhere in the world) and fights for the right to keep linking to some documents. (In Europe, for example, once a convicted felon has served his time, he has the right to have the record of that crime removed from the public record. Google has over 50 cases pending in Spain alone in which it is defending the right to keep linking to newspaper articles and court records with descriptions of crimes.)
On the other hand, Wong said, other Google products, like Google+, Orkut, and YouTube (yes, she did need to explain to the tech-savvy audience what Orkut was), do have content restrictions because the value Google is seeking to offer users of those products is not comprehensiveness. It’s something else, and the restrictions on content reflect that.
A great example of this is that Google+ prohibits hate speech, while searching for “Jew” on Google still brings up the anti-Semitic site Jew Watch (I’m not adding a link; I do not want to feed the bots), and the ad linking to a page with Google’s explanation about the algorithm is still there.
Later during the Law School For Digital Journalists panel, the moderator, Harvard Law professor Jonathan Zittrain, asked Wong if there was any law preventing Google from manipulating the search results. She said there wasn’t. Zittrain then suggested that it’s not a legal fear that keeps Google from manipulating a search, but the knowledge that the engineers at Google would rise up in protest. Wong agreed, confirming another one of Levy’s recurring themes — that Google’s culture is shaped by the engineers and driven by respect for them.
But I went back to Wong’s earlier comments about value, and realized that it’s not fear of an engineer uprising that leaves the search results untouched; it’s business. Messing with search results reduces the value of the search, making Google less useful and potentially driving away users. So Google doesn’t do it.
Except when they do.
It’s Not Just The Algorithm
Leaving aside the fact that Google wrote an algorithm designed to alter search results for China, Google also uses other ways, albeit rarely, to alter what a user sees when using search.
The examples in the book include posting the number for the National Suicide Prevention Lifeline on the top of the results when someone searches for “suicide” and the aforementioned ad about the search results for “Jew.” Similarly, when a Google Image search for “Michelle Obama” had a photo of a monkey as the first hit, Google posted an ad explaining that result.
Those types of decisions, presumably, are not made by the algorithm. They are made by a person. But who? And why? Why aren’t search results for “I want to kill my baby” topped by resources for postpartum depression? (I spent a lot of time trying to figure out what other words might trigger a suicide-type resource. Good thing there is no human tracking my searches.) Does Google run an ad every time a racist hit makes it to the top of search results? How many people have to complain before the ad is run?
In the scheme of things, these are small questions, and, as someone who uses Android, Gmail, Gchat, Google+, Picasa, Google Voice, and Google Docs, it’s clear I’ve decided I trust Google enough to store my life on its servers. But they are questions for which Levy offers no answers, questions that are not answered by “it’s the algorithm.” They indicate that the algorithm doesn’t actually decide everything, and so might not be the foolproof defense Google thinks it is.
For a book that focuses a lot on the people behind Google, In the Plex left me with some real concerns about the lack of transparency about how those people make decisions that affect billions of people.