who’s afraid of the big bad algorithm?

Today I watched a TED presentation featuring Eli Pariser, the executive director of moveon.org and author of the just-released “The Filter Bubble: What the Internet Is Hiding from You.”

Eli says that he has a problem with the ways in which companies like Facebook and Google use algorithms to filter and present personalized information to their end users. His primary concern is algorithmic filtering based on “relevancy,” and he shares a quote from Mark Zuckerberg, founder of Facebook, in his opening remarks.

“A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.”

Eli argues that using “relevancy” as the primary filter to identify and present information is problematic because it may leave out important information we need to see. It may also result in a situation where each of us is surrounded by our own “filter bubble” – our own “personal unique universe of information” that we live in online. Although the information in our filter bubble is determined by who we are and what we do, Eli is concerned that we don’t get to control what gets into our bubble and, importantly, we don’t get to see what gets left out.

In his closing remarks, Eli addresses the leaders at Facebook and Google and asks them to ensure that the algorithms “have encoded in them a sense of public life, a sense of civic responsibility.” That they are “transparent enough that we can see what the rules are that determine what gets through our filters.” And finally, that we are given “some control, so that we can decide what gets through and what doesn’t.”

Ok. After I first watched Eli’s talk, I had mixed responses.

Part of me, a large part, was totally on board with Eli’s premise and his heartfelt request to the leaders of  Facebook and Google for transparency and control. I mean, of course I want to know what’s in my filter bubble and what’s not. And, of course I want to be able to control it. Who wouldn’t want that?

But another part of me was not on board with Eli’s arguments, just the opposite. This other part of me was annoyed with him for his presentation because, in his attempt to educate (and sell his book), Eli left out (i.e., “filtered”) a bunch of important information himself. Ironic, I know, given his thesis. I’m sure he filtered for different reasons. He only had like five minutes up there on stage, so he had to be succinct and tell a good story. But beyond that, there was information that was filtered out which, had it been included, would have complicated his story and made it a lot more difficult to sell. I’m in marketing. I get it.

Anyway, it’s my annoyed self that wrote the responses below.

Pot: meet kettle

In his attempt to educate people about the potential negative consequences of using algorithmic filtering on the Web, Eli ends up doing some filtering himself. I’m referring specifically to his discussion around the Facebook newsfeed filtering. He tells his story:

“I’m progressive politically, big surprise. But I’ve always gone out of my way to meet conservatives. I like hearing what they’re thinking about. I like seeing what they link to. I like learning a thing or two. And so I was kind of surprised when I noticed one day that the conservatives had disappeared from my Facebook feed.”

With his choice of words “one day the conservatives had disappeared from my Facebook feed,” Eli’s implying that the algorithm somehow intentionally censored only the “conservative view” from his newsfeed. You know, the big bad algorithm at work censoring the “other point of view.”

But as Eli himself says, the filtering is based on prior clicks. So, if he didn’t click on his very liberal, yet not very interesting Aunt Jenny’s posts or his super progressive, but ultimately annoying brother Joe’s posts, well, then their posts would have “disappeared” from his wall, too. But Eli chose not to share that information. And while his “filtering” helps to create a sensationalist slant on Facebook filtering (“oh my gosh, I can’t believe they did that!”), it doesn’t help to educate people on the ways in which algorithms really work.
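To make that point concrete, here’s a minimal sketch of what purely click-based feed filtering looks like. It’s a toy, not Facebook’s actual algorithm, and every name in it is invented: anyone you stop clicking on fades from the feed, whatever their politics.

```python
# Toy sketch of click-frequency feed filtering (all names invented; this is
# not Facebook's real ranking): friends you never click on drop out of the
# feed, regardless of whether they are liberal or conservative.
from collections import Counter

recent_clicks = ["college_pal", "college_pal", "news_junkie"]
clicks = Counter(recent_clicks)

friends = ["aunt_jenny", "brother_joe", "college_pal",
           "news_junkie", "conservative_colleague"]

# Keep only friends with at least one recent click.
feed = [friend for friend in friends if clicks[friend] > 0]

print(feed)  # ['college_pal', 'news_junkie']
# Aunt Jenny and brother Joe "disappear" right alongside the conservative friend.
```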

Relevance: What you talking about, Eli?

Eli uses the word “relevance” multiple times during his presentation, but never once actually defines the word in the context of his argument. For example, he says:

“So if algorithms are going to curate the world for us, if they’re going to decide what we get to see and what we don’t get to see, then we need to make sure that they’re not just keyed to relevance.”

Ok, we don’t really know what he means by relevance, but let’s assume that he’s talking about filtering based on whether or not users click on links, like the Facebook example he gave earlier in his discussion.

So he wants algorithms to be keyed to more than clicks (relevance). Ok, that’s totally reasonable. Of course we want information to be presented that has been filtered based on more criteria than just what we’ve clicked on in the past.

But the thing is, Eli’s contradicting himself. Earlier in his discussion he stated that Google uses 57 different signals to decide what search results to return to a user (everything from computer and browser type to location). Eli knows that Google’s algorithm is already based on more than relevance. So why does he keep referring to relevance throughout his talk as if that’s the only criterion used in these algorithms?
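For a sense of what “more than one signal” means in practice, here’s a toy scoring function that blends a click-based signal with location and device signals. The signals and weights are made up for illustration; they are not Google’s.

```python
# Illustrative only: a toy scoring function combining several signals,
# loosely in the spirit of "57 signals." The signals and weights here are
# invented for the example; they are not Google's.
def rank_score(result: dict, user: dict) -> float:
    score = 0.0
    score += 3.0 * result["past_click_rate"]                          # relevance-style signal
    score += 1.5 * (result["region"] == user["region"])               # location signal
    score += 0.5 * (result["mobile_friendly"] and user["on_mobile"])  # device signal
    return score

results = [
    {"url": "local-news.example",  "past_click_rate": 0.1, "region": "US", "mobile_friendly": True},
    {"url": "global-news.example", "past_click_rate": 0.4, "region": "UK", "mobile_friendly": False},
]
user = {"region": "US", "on_mobile": True}

ranked = sorted(results, key=lambda r: rank_score(r, user), reverse=True)
print([r["url"] for r in ranked])  # ['local-news.example', 'global-news.example']
```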

He continues.

“We need to make sure that they also show us things that are uncomfortable or challenging or important …”

Ok, here we go again with the use of undefined terms that, in the context of his discussion, really mean nothing. What does he mean by uncomfortable or challenging or important? Aren’t these terms relative? What might be challenging to one person is simply not to another. And what is important to one, not to another. If this is true, then how can Eli assume that the algorithms are not showing information that is, on a personalized level, in fact, all of the above?

Myth making: nostalgia for the past we wanted, but never was…

During his discussion of the move from “human gatekeepers” of information to “algorithmic gatekeepers,” he states:

“In a broadcast society – this is how the founding mythology goes – in a broadcast society, there were these gatekeepers, the editors, and they controlled the flows of information.”

Ok, so he acknowledges that this story is a “founding mythology,” then continues:

“And along came the Internet and it swept them out of the way, and it allowed all of us to connect together, and it was awesome.”

Ok, so the myth states that the Internet swept away the human gatekeepers who controlled the information and it was “awesome.” He continues:

“But that’s not actually what’s happening right now.”

Ummm, right. Because it was a myth. Remember? He goes on:

“What we’re seeing is more of a passing of the torch from human gatekeepers to algorithmic ones. And the thing is that the algorithms don’t yet have the kind of embedded ethics that the editors did.”

Wait. I’m confused. First he says that it was “awesome” to have the Internet free us from those information controlling human gatekeepers, but then he says that those gatekeepers actually had some sort of embedded ethics that algorithms don’t yet have and therefore, were superior to the algorithmic gatekeepers. Hmmmmm…

Humans and algorithms and ethics, oh my!


And, do we know that those human gatekeepers were (and are) ethical? Certainly not all are. Which ones are and which ones aren’t? And, let’s assume we can identify those who are ethical. What is it that they do, these ethical human gatekeepers, that results in a better selection of information for the masses (in a broadcast society) than the algorithms are able to select for the individual (in a personalized algorithmic environment)? Eli doesn’t say.

There’s another critical element that Eli’s filtering out when he talks about “human vs. algorithmic” gatekeepers. Remember, it’s humans who are writing the code for these algorithms. So, ultimately, it’s still humans who are the gatekeepers for how information gets filtered. The idea that humans are not in control is fantasy, science fiction. It’s sensationalist and fun to think about and helps to sell books, but it’s not accurate.

And, what about what we do control!?

To take this last thought one step further, I’d add that it isn’t just the human code writers who are in “control” of our filter bubbles; it’s we, the end users, too. Ultimately, we are the ones who control what we click on, what we don’t click on, and what search terms we use.

Our browsing and search behavior is a general indicator of our interests. It’s certainly not the full expanse of our interests, but it does reflect our interest in the information in front of us at a specific point in time, relative to the other information options on the same Web page.

So, once we’re aware (and awareness will arise from education) that algorithms take our clicking behavior into consideration as one piece of input when deciding what other information to “show,” then we do have quite a bit of agency, through our behavior, in what information is presented to us. If we want to keep seeing news headlines, then we should click on news headlines more often than not, to make sure that we’re representing the full expanse of ourselves to the algorithm and to ensure that our click behavior will “teach” the algorithm what we want to see in the future.
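As a rough illustration of that “teaching” loop, here’s a toy sketch of a per-topic preference score that rises with every click and shapes what ranks high next time. It’s entirely made up for illustration; real recommendation systems are far more complex.

```python
# Toy sketch of the "teach the algorithm with your clicks" idea: a running
# per-topic preference that rises with every click and shapes what ranks
# high next time. Entirely invented; real recommenders are far more complex.
preferences = {"news": 0.2, "entertainment": 0.8}  # learned from past clicks

def record_click(topic: str, rate: float = 0.1) -> None:
    """Nudge the clicked topic up and decay everything else slightly."""
    for t in preferences:
        preferences[t] *= (1 - rate)
    preferences[topic] += rate

# Deliberately clicking news headlines shifts what gets surfaced later.
for _ in range(5):
    record_click("news")

print(max(preferences, key=preferences.get))  # 'news' now outranks 'entertainment'
```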

I know that statement will be problematic for many people; and even as I write it, I’m only partially comfortable with it myself. It’s very futuristic, very cyber-forward to state that somehow we should be collaborating with technology (in this case, algorithms), allowing it to take an even greater role than it has in the past in delivering information to us in new ways. But, remember, when the telegraph was invented, people were certain that it would be the end of privacy and the end of newspapers. These are concerns we still grapple with today.

I guess my point is: Let’s evolve.

 
