The biased machines

M. T. Anderson, the author of the dystopian novel Feed, imagines a future society whose functioning relies entirely on a technology that connects the human brain to a computer network called the feednet. This fictional technology not only makes everyone smart by feeding the brain any answer its users wish to look up, but also learns to predict what the users want based on the behavior profiles stored on its servers (Anderson 48). Sometimes it supplies a word a user wants to look up; sometimes it offers purchase recommendations when users are having a hard time deciding for themselves. This idea of an intelligent recommendation system is not far-fetched at all. In fact, its real-life counterpart can be found in the booming industry of machine-learned algorithms and mathematical models used to predict people’s behavior and decision-making.

In 2009, Netflix announced the winner of its one-million-dollar prize for improving its viewing recommendation algorithm. The winning team, “BellKor’s Pragmatic Chaos,” had competed against more than 1,000 other teams and improved on the previous model’s performance by 10.06%. Although the winning algorithm itself was intricate, the data fed into it were not. Each contestant was provided with a huge dataset of user ratings, in which each row included the ratings for the last nine movies that particular user had rated.

[Image: Netflix awards the $1 million prize]

This almost oversimplified representation of a person’s movie taste was then fed into a complicated equation to produce the final result: a prediction of the rating this viewer would give to a particular movie. But one might ask: how are a person’s last nine ratings enough to predict their opinion of a totally different movie? They are not, if you only have one or two people’s data. The idea is that, when provided with enough data (thousands or even millions of entries), a mathematical model trained on that data can minimize the effect of outliers and become good enough to produce convincing predictions about any given individual, even one whose data were never used in the modeling process.
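To make this concrete, below is a minimal sketch of one common way such a prediction can be computed: user-based collaborative filtering. This is not the BellKor’s Pragmatic Chaos algorithm (the winning entry blended hundreds of models), and the toy ratings matrix and the predict function here are invented purely for illustration.

```python
# A minimal sketch of rating prediction via user-based collaborative
# filtering. This is NOT Netflix's prize-winning algorithm; it only
# illustrates the general idea: predicting one person's rating from
# patterns found in many strangers' ratings.
import numpy as np

# Toy ratings matrix: rows are users, columns are movies, 0 = "unrated".
# Real datasets hold millions of rows; these numbers are made up.
ratings = np.array([
    [5, 4, 0, 1],   # user 0
    [4, 5, 1, 0],   # user 1
    [1, 0, 5, 4],   # user 2
    [0, 1, 4, 5],   # user 3
], dtype=float)

def predict(user: int, movie: int) -> float:
    """Predict ratings[user, movie] as a similarity-weighted average
    of what other users gave that movie."""
    target = ratings[user]
    total, weights = 0.0, 0.0
    for other in range(ratings.shape[0]):
        if other == user or ratings[other, movie] == 0:
            continue  # skip self and users who never rated this movie
        both = (target > 0) & (ratings[other] > 0)  # movies both rated
        if not both.any():
            continue
        a, b = target[both], ratings[other][both]
        sim = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        total += sim * ratings[other, movie]
        weights += sim
    return total / weights if weights else 0.0

# User 0 never rated movie 2; the model infers a low-ish score (~2.6)
# because the most similar user (user 1) rated it a 1.
print(round(predict(user=0, movie=2), 2))
```

The cosine similarity here is just one of many possible choices; the actual Netflix Prize entries used far richer features and ensembles of models, but the underlying logic of borrowing strangers’ data is the same.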

This means that the recommendation systems many people believe are tailored to them personally are actually developed from data generated by complete strangers. But why does it matter? It is just a video recommendation system, after all, right? That may be true in the case of Netflix, since there is no real consequence if the recommendation occasionally gets it wrong. But when similar algorithms are used to decide whether a person should be fired from a job, things start to get problematic.

 

[Image: Michelle Rhee on Time magazine cover]

Cathy O’Neil, an American mathematician, calls attention to the potential danger of relying too heavily on machines and data to predict human behavior and performance in her book Weapons of Math Destruction. In chapter 2 of the book, she describes the educational reform that began in Washington, D.C. in 2007. Led by education reformer Michelle Rhee, the operation aimed to weed out “low-performing” teachers. To make its implementation more efficient, algorithms were used to assess teachers’ performance, and numerous teachers were fired because the computers predicted that they would not do well teaching kids in the future. Despite the reformer’s good intention of addressing the problem of teachers not doing their jobs, this method of quickly weeding out “low-performing” teachers could be unfair to many of them. According to O’Neil, the algorithm placed a heavy emphasis on how much students’ grades had improved, while completely disregarding some students’ learning disabilities, their access to resources, and their socioeconomic status. Moreover, it ignored other important things students might gain from their teachers, such as discipline and good morals.

[Image: Cathy O’Neil’s book Weapons of Math Destruction]

One teacher she interviewed, who was on the verge of being fired from his school, scored below 10 on a scale of 100 and then scored above 90 the next year without changing his teaching style. The difference was in the students he taught each time.
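To see how the same teacher can score terribly one year and brilliantly the next, consider the toy model below. It is emphatically not the district’s actual evaluation formula; it is a hypothetical, grades-only score invented here to show how a metric driven solely by test-score improvement swings with the cohort rather than with the teacher.

```python
# A deliberately crude, hypothetical "value-added" score that depends only
# on average grade improvement. It is NOT the model D.C. actually used;
# it exists to show how such a score tracks the cohort, not the teacher.
def teacher_score(start_grades, end_grades, max_gain=30.0):
    """Map average per-student grade improvement onto a 0-100 scale,
    with 50 meaning 'no change' and max_gain pinning the scale's ends."""
    gains = [end - start for start, end in zip(start_grades, end_grades)]
    avg_gain = sum(gains) / len(gains)
    return max(0.0, min(100.0, 50.0 + 50.0 * avg_gain / max_gain))

# The same (hypothetical) teacher, teaching the same way, two cohorts.
# Year 1: students arrive with inflated prior scores, so "growth" is negative.
print(round(teacher_score([85, 88, 90], [60, 63, 66])))   # ≈ 9: "fire them"
# Year 2: students arrive with honest low scores and improve normally.
print(round(teacher_score([55, 60, 58], [80, 84, 83])))   # ≈ 91: "star teacher"
```

Nothing about the teaching changes between the two calls; only the incoming grades do, which mirrors the swing that O’Neil’s interviewee experienced.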

 

We may not have a feed that connects our brains directly to an influx of information, as Anderson imagined in his novel. But we are not far from that fantasy of a know-it-all, artificial-intelligence-powered society, either. “Scientific” numbers and algorithms can disguise the biased nature of these artificially intelligent systems. At the end of the day, we should be aware that even though numbers don’t lie, the people who use those numbers to build their own tools might.

 

References:

“Netflix Prize: Forum.” Netflix Prize, Netflix. Archived at web.archive.org/web/20090924184639/http://www.netflixprize.com/community/viewtopic.php?id=1537.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books, 2018.
