Fighting with Algorithms

October 23rd, 2011

When I was young, whenever my brother and I would play video games and either of us would lose, we’d immediately accuse the game itself of cheating. Most of the time this was just us expressing our frustrations, but sometimes game creators do give their AI all-knowing powers, which can seem supernatural and unfair. Eventually, after playing a game long enough, I’d start to predict the AI’s behavior and play it against itself. No program is omniscient, because it can’t truly understand what I, the user, am thinking.

I’m noticing lately that a lot of the services I use online are starting to cheat me out of what I wanted. It is getting to the point where a good third of my Google searches return results for things I did not search for. They’re close, sure, but that extra keyword or seemingly misspelled word was intentional. It used to be that the “did you mean” link at the top was as far as Google would go in manipulating the keywords themselves. Then they started sending me straight there if my search yielded no results, and later they started sending me straight to the alternative results with a reverse “did you mean” link back to my actual results. Now, oftentimes, they don’t even let me know they’re selectively changing my results.

Still, I could always add a + in front of keywords. Like the AI in the games I’d play, I’m manipulating the system to get what I want. This is exactly what Google shouldn’t want me to do: I’m not playing a game, I’m searching for something, and I’m supposed to get exactly what I want with the least amount of work. Recently, in their ongoing fight to give me less relevant results, they “deprecated” the + symbol, encouraging the use of double quotes instead.

Facebook is possibly the worst offender here. The problem is best expressed in this TED Talk by Eli Pariser. In the video he talks about how friends with opinions differing from his own slowly disappeared from his news feed, even though he never specifically unsubscribed from them. I’m aware of this effect, so I make a point of visiting the profiles of friends I don’t want to disappear. Facebook’s ability to predict what I want to read is pretty good, but it is flawed enough that I wouldn’t want it making decisions for me; it does anyway.

I know why Google and Facebook do this. They are constantly measuring the success of their services and constantly trying to improve those metrics. The results undoubtedly show things like: if a user misspells a word in a search, they are more likely to find what they are looking for when Google automatically corrects the spelling. This comes at the expense of the minority who intended to search for the misspelled word, or who want to hear the opinions of people they don’t agree with.

Basically, I want services like these to stop making implicit assumptions about my explicit interactions. No matter how advanced their ability to predict my needs is, it can never be perfect. In the end, I searched for what I searched for, and I subscribed to the friends I want to hear from. Google could, for example, give me the option to disable these kinds of presumptions. Facebook could still prioritize friends it thinks I am interested in, but limit this to a certain number of users (who are clearly identified) so I can skip past its predictions and start seeing news from all my friends instead of just some of them.

A predictive algorithm is ultimately only as good as the data it has, and there is never enough data. This problem should be assumed to exist, even when it seems like it doesn’t, and accounted for in the UI design of services like Google and Facebook.