Is It Really Just The Algorithm?

Sergey Brin, by Flickr user Ptufts (Some Rights Reserved).
When asked how Google determines the application of its famous unofficial motto [“don’t be evil”], Eric Schmidt told a reporter, “Evil is what Sergey says is evil.” ~ In the Plex, Kindle edition, location 5645*
*I know that this caption is kind of misleading, but I was just so struck by this quote, and by the idea that there was one man — or, more broadly, one specific type of person — who was really putting meaning into Google’s famous “Don’t be evil” mantra.

Steven Levy’s In the Plex does an excellent job of giving an overview of Google as it evolved as a search product, a company, a technological powerhouse, and even a culture. He clearly benefited from a lot of access to top-level officials and a deep understanding of how Google works and what it aims to do: first, to be the most comprehensive search engine out there; then, to get people to use the Internet more often (including through browsers and phones) and in ways that would allow Google to use its AdWords technology; and then, to use the Internet and search to promote democracy and do other good deeds. (Note that one mission does not replace another; each builds on the foundation already in place.)

But I wish there were more dialogue included with people outside of Google. I could never fully assess critiques of Google because the only people who really got a sturdy soapbox were those committed to Google and its success.

Because of this, I am left with some uncomfortable questions about Google, and its algorithms.

The Importance of Defining Value In Google’s Decision Making

Early in the book, Levy talks about how comprehensiveness and relevancy were key to a successful search engine and to pushing Google ahead of its competitors, but it wasn’t until I heard Nicole Wong, former Deputy General Counsel for Google, speak at ONA 11’s Law School for Digital Journalists that I fully grasped the significance of comprehensiveness in ensuring that the algorithm is not tampered with.

Wong explained that with Google’s search engine, the value Google wanted to offer users was comprehensive search. That meant the algorithm had to be king, and Google regularly turns down requests to remove links (she said there is a spike in those requests whenever it is election season anywhere in the world) and fights for the right to keep linking to some documents. (In Europe, for example, once a convicted felon has served his time, he has the right to have the record of that crime removed from the public record. Google has over 50 cases pending in Spain alone in which it is defending the right to keep linking to newspaper articles and court records with descriptions of crimes.)

On the other hand, Wong said, other Google products, like Google+, Orkut, and YouTube (yes, she did need to explain to the tech-savvy audience what Orkut was), do have content restrictions, because the value Google is seeking to offer users of those products is not comprehensiveness. It’s something else, and the restrictions on content reflect that.

A great example of this is that Google+ prohibits hate speech, while searching for “Jew” on Google still brings up the anti-Semitic site Jew Watch (I’m not adding a link; I do not want to feed the bots), and the ad linking to a page with Google’s explanation of the algorithm is still there.

Later during the Law School for Digital Journalists panel, the moderator, Harvard Law professor Jonathan Zittrain, asked Wong if there was any law preventing Google from manipulating the search results. She said there wasn’t. Zittrain then suggested that it’s not a legal fear that keeps Google from manipulating a search, but the knowledge that the engineers at Google would rise up in protest. Wong agreed, confirming another of Levy’s recurring themes — that Google’s culture is shaped by the engineers and driven by respect for them.

But I went back to Wong’s earlier comments about value and realized that it’s not fear of an engineer uprising that leaves the search results untouched; it’s business. Messing with search results reduces the value of the search, thereby making Google less useful and potentially driving away users. So Google doesn’t do it.

Except when they do.

It’s Not Just The Algorithm

Leaving aside the fact that Google wrote an algorithm designed to alter search results for China, Google also uses other ways — albeit rarely — to alter what a user sees when using search.

The examples in the book include posting the number for the National Suicide Prevention Lifeline on the top of the results when someone searches for “suicide” and the aforementioned ad about the search results for “Jew.” Similarly, when a Google Image search for “Michelle Obama” had a photo of a monkey as the first hit, Google posted an ad explaining that result.

Those types of decisions, presumably, are not made by the algorithm. They are made by a person. But who? And why? Why doesn’t a search for “I want to kill my baby” get topped by resources for postpartum depression? (I spent a lot of time trying to figure out what other words might trigger a suicide-type resource. Good thing there is no human tracking my searches.) Does Google run an ad every time a racist hit makes it to the top of search results? How many people have to complain before the ad is run?

In the scheme of things, these are small questions, and, as someone who uses Android, Gmail, Gchat, Google+, Picasa, Google Voice, and Google Docs, it’s clear I’ve decided I trust Google enough to store my life on its servers. But they are questions for which Levy offers no answers, questions that are not answered by “it’s the algorithm,” and they indicate that the algorithm doesn’t actually decide everything, so it might not be the foolproof defense Google thinks it is.

For a book that focuses a lot on the people behind Google, In the Plex left me with some real concerns about the lack of transparency about how those people make decisions that affect billions of people.

Storming The Newsroom

This is a bonus post, because what I am really interested in is newsroom dynamics, but I had to cut this from the original blog post in hopes of having only a very long post rather than a ridiculously long post. The class assignment post is below this one (and linked here).

Letting The Public Into The Newsroom

In Here Comes Everybody, Clay Shirky notes that “journalist” has become harder to define now that the scarcity of the resource that defined it — people who wrote with access to publishers — has disappeared with the advent of self-publishing on the Internet.

The debate rages on; I am particularly tickled by the straightforward, rolling-my-eyes-because-it’s-so-obvious answer offered at compared to the hand-wringing over at BuzzMachine, where Jeff Jarvis writes, “I am coming to wonder whether we should even reconsider the word journalism, as it carries more baggage than a Dreamliner.”

But for me, the issue is not “what is a journalist?” because I haven’t run up against any libel suits or need for a shield law. For me the question is, “If I consider myself a journalist, how has my responsibility changed in light of an expectation that communication on the Internet can go in more than one direction and that group formation and personal blogging are changing the definition of news?”

Shirky offers the example of Trent Lott revealing his segregationist sympathies in a birthday party speech that was at first ignored by the mainstream media and picked up only after Lott responded to what was a blog-driven outcry. From the intervening years, I would tentatively offer John Edwards’s affair (it too was ignored by the mainstream media, but it was broken by the Enquirer, not by blogs, though blogs fanned the flames).

To me, the biggest change is that newspapers can no longer say with bravado that they know what is news.

When I was working at the Columbia Spectator, there was a quote hanging on the wall that I believed in wholeheartedly. I incorporated it into cover letters and quoted it righteously as justification for writing another story about the flaws in the New York City Gifted and Talented program.

“Give the public what it wants to have and part of what it ought to have whether it wants it or not.” ~ Herbert Bayard Swope, editor of the New York World

To the college-aged version of me, this quote from a dead editor of a shuttered newspaper was the epitome of doing journalism right. Journalists were the arbiters of news, and readers were going to learn something whether they expected to or not. The Internet not only gives readers a way to ignore news journalists think they ought to have, but it also provides newspaper editors with a clear way to find out what the public (or at least a vocal online section of the public) actually wants. When Swope made that statement, he was presumably making arbitrary decisions about both what the public ought to know and what it wanted to know (well, blood and sex sell newspapers, so that could have been a rubric, but not a very precise one). Now, the first is harder to provide and the second is harder to fudge.

How does that change the way newspapers are run? It would make them a lot more reactive to the Internet than they are now, and a lot less self-righteous. The question is whether newspapers really need to go down that road and whether they risk losing out on an important part of their mission by catering to the voices and groups rising up from the Internet. I don’t know the answer.

Here Comes Everybody (Though I’m Still Not Sure How To Get Them Here).

Clay Shirky’s book Here Comes Everybody posits that the Internet, particularly social tools or social media (broadly defined), has changed society’s expectations of what can be accomplished through group effort (and what is worth the effort) and how widely information or resources can be disseminated.

The ramifications of those changes are that the concepts of hierarchy, group dynamics, management, and expertise have all been radically altered.

Groups are no longer reliant on management to sort through what is and isn’t feasible and to guide the process. Group formation no longer has to start with one person reaching out to one other person. Instead, the Internet allows anyone to propose an idea and anyone else to support it, repeat it, organize around it.

With the management structure completely removed from the equation (or at least management in the traditional sense of an organizational hierarchy chart), there is no task too small or too unprofitable to gather around. The Internet makes it easy for one person to reach out to many, or for many people to reach out to each other, to collect and curate information and to disseminate or access that information as needed.

Using the examples of Wikipedia and Flickr, Shirky points out that participation in social media is unbalanced; a small number of users make the majority of changes to any given Wikipedia article. Most people posted and tagged only one photo of the Mermaid Parade, but one user posted over 200 photos. Still, there is a committed community participating at various levels of engagement without any resentment, because there are no expectations of equally shared responsibilities.

Another notable element addressed in the book is that when organizing through the web or mobile phones (which can, as in the case of Voice of the Faithful or the flash mobs in Belarus, have a significant offline presence), there is no arbiter of what is a worthy cause. In traditional organizing, there were financial limits on what was worth organizing around.

As Shirky notes, because management took time, resources, and money, some efforts — collecting all the photos taken at a parade or a natural disaster, writing an article about asphalt — were never worth pursuing. Now, since the cost of participation is nothing, those activities come into being, and impromptu communities (such as the people looking for loved ones after a tsunami by scanning photos and comments on Flickr) are formed and dispersed as needed.

“[S]ocial tools don’t create collective action,” Shirky wrote; “they merely remove obstacles to it” (end of chapter 6).

The idea of nonexistent costs eliminating the need for a manager expands beyond group and community forming. For me, the most obvious impact this has had on my life is that it has transformed my industry.

What Are The Limits of Organic, Self-Policing Online Communities?

I would have liked to see discussion of the middle ground: communities that are not totally organic but that people care deeply about and participate in nonetheless. There isn’t a lot of talk in the Wikipedia chapter about the role of the moderators at Wikipedia, but they surely play a role. It might be true that only 0.5 percent of articles on Wikipedia are protected (is that only the strongest form of protection? Shirky wasn’t clear, but I think that must be what he was referring to), but I would venture a guess that those are also some of the most visited pages on Wikipedia. (A quick set of guesses led me to the pages for Sarah Palin, Michele Bachmann, Barack Obama, Michelle Obama, Pakistan, Islam, the September 11 attacks, and climate change, all of which were at least semi-protected. The page for evolution had no protection.)

To edit a semi-protected article, one need only register with Wikipedia, but that requirement automatically separates those who will fix a typo when they see one from those who intend to be more active participants. In other words, it makes the flow from consumer to contributor a little less fluid.

That flow is even more viscous on the Gawker Media sites, of which the feminist gossip blog Jezebel is one. It takes a conscious effort to become an approved commenter (which is the only way to really participate in the conversation, because otherwise only a limited group of people can see your posts), and it takes a conscious effort to become a starred commenter (or at least a commitment to posting regularly in hopes that you find the silver bullet; after a year or so of writing thought-out but unfunny comments and responses on posts about religion, feminism, or body acceptance, I was starred after rewriting the caption on a cartoon about how girls wear their hair).

There is some element of a committed group protecting what it loves. Just try slut-shaming or body-snarking on a Jezebel comment section, and see how fast you’d be torn down by other commenters. But the editors of Jezebel are very clear that they do not consider the site a self-policing community.

In the site’s commenting guidelines, the editors wrote: “This is probably obvious but bears repeating: This is our website, and we will moderate it as we see fit.” (Emphasis theirs.)

In fact, the decisions have sometimes seemed totally arbitrary and have forced some commenters to find another outlet or to attempt to organize against the editors’ decisions, as was the case with the banning of the user “MizJenkins,” an example that is still referred to occasionally when new rumblings against the editors surface among Jezzies. It does not appear to me that the editors are all that responsive, though I have not asked them or looked extensively into that.

When participation is less fluid, does that change the group dynamic? Did Jezebel create two online communities, one where the commenters talk amongst themselves and one where the editors decide what to post as articles? Shirky talks about the pro-anorexia group leaving the moderated space of YM but essentially reconvening elsewhere on the Internet. That certainly happens on Jezebel — in the wake of a recent redesign, people actually used Jezebel to post links to other forums for Jezzies to migrate to — but the Jezebel community doesn’t die. What is the role for those semi-policed communities?

And what happens when the only people who love something are the people in charge? Or, in other words, how does a community like Wikipedia, whose members are committed to the success of the product or some subset of the product, form? How could the LA Times have made its editorial wiki experiment a success? Or, if that’s impossible, how can a newspaper like the Chicago Tribune make commenting a vibrant form of discussion?

When I was interning at a national newspaper in 2007, editors were trying to figure out a way to make comments a place for thoughtful discussion while keeping out racist comments (computer programs designed for the latter task regularly missed comments such as “Elect Obama and he will only serve fried chicken in the White House,” because neither “fried” nor “chicken” is in and of itself a racially charged word, and the newspaper editor did not have time in his day to moderate comments himself). How does a mainstream organization like a national newspaper cultivate the community driven by love that Shirky talks about in his book? Is it possible? Do newspaper readers have enough in common? (Maybe the link is actually readers of a single section or article.) Is it even desirable? If there is no way to prevent racist comments (or, in the only-sort-of-equivalent case of the Pro-Ana message board on YM, encouragement for teen girls to starve themselves) without interfering with the more organic group dynamic, maybe it means that the group dynamic is not worth pursuing.
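The failure mode of those filtering programs is easy to see in a few lines of Python. This is a hypothetical blocklist filter of my own construction, not the newspaper’s actual software: it flags a comment only if the comment contains a word from a fixed list, so a racist comment built entirely out of innocuous words sails right through.

```python
# A naive blocklist moderator: flags a comment only when it contains
# a word from a fixed list. The entries below are placeholders standing
# in for actual slurs.
BLOCKED_WORDS = {"slur1", "slur2"}

def is_flagged(comment: str) -> bool:
    # Normalize each word: strip trailing punctuation, lowercase it.
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCKED_WORDS)

# The dog-whistle comment from the newsroom anecdote slips through,
# because no individual word in it is on the list.
print(is_flagged(
    "Elect Obama and he will only serve fried chicken in the White House"
))  # False
```

The racism lives in the combination and context of the words, not in any single token, which is exactly what a word-level filter cannot see; that is why the editor was stuck choosing between hand moderation and letting comments like this through.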

But I find that hard to believe.