Machine Learning Bias

Algorithms and models are reflections of human biases, and this sometimes leads to unintended repercussions, such as toxic feedback loops. A toxic feedback loop, as described by Cathy O’Neil in Weapons of Math Destruction, occurs when a model’s own output reinforces the negative behavior the model is trying to predict. That behavior is then fed back into the model, artificially boosting the apparent predictive power of its inputs, so the model in effect becomes a self-fulfilling prophecy. Such models then appear more successful and may be used more widely, sustaining the cycle. These loops are dangerous to data interpretation because it becomes difficult to separate the influence of the model from what would have happened anyway (or even to notice the model’s effect at all).

O’Neil’s book discusses how toxic feedback loops can arise in predictive policing and criminal recidivism. For instance, if a person is deemed highly likely to commit another crime (because of social factors, past convictions, etc.), they will be given a longer prison sentence. And spending more time in prison may itself make them more likely to commit another crime.
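To make the mechanism concrete, here is a toy simulation (my own illustration, not an example from O’Neil’s book). It assumes a hypothetical risk score that lengthens sentences, and assumes that longer sentences raise the chance of reoffending, so the score helps produce the very outcome it is later judged against.

```python
import random

random.seed(0)

# Toy sketch of a toxic feedback loop: the risk score drives sentence length,
# and (by assumption) longer sentences raise the probability of reoffending,
# so the model contributes to the outcome it claims to predict.

def risk_score(prior_convictions):
    """Hypothetical model: risk grows with prior convictions."""
    return min(1.0, 0.2 + 0.15 * prior_convictions)

def sentence_years(score):
    """Higher scores lead to longer sentences."""
    return 1 + round(4 * score)

def reoffends(years):
    """Assumed causal link: each extra year in prison adds to the
    baseline probability of reoffending."""
    return random.random() < 0.2 + 0.05 * years

people = [random.randint(0, 5) for _ in range(10_000)]  # prior convictions
high_risk_total = 0
high_risk_reoffend = 0
for priors in people:
    score = risk_score(priors)
    years = sentence_years(score)
    if score > 0.5:
        high_risk_total += 1
        high_risk_reoffend += reoffends(years)

print(f"Reoffence rate among 'high risk': "
      f"{high_risk_reoffend / high_risk_total:.2f}")
# The high rate looks like validation of the model, but part of it is caused
# by the longer sentences the model itself triggered.
```

The point of the sketch is that the measured “accuracy” of the score cannot be separated from the score’s own influence on sentencing, which is exactly what makes these loops hard to detect.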

The image below presents a subtler toxic feedback loop. It is a screenshot posted by Katrina Lake, the female CEO of Stitch Fix, showing that the suggested emoji for “CEO” is a man in a white-collared shirt. If girls are shown more images of men in certain positions, such as “CEO” or other leadership or tech roles, it may create the perception that these are male jobs and lead to more men occupying those positions. As Rachel Thomas discusses in her blog, the engineers at Meetup deliberately chose not to create a feedback loop in which men, by expressing more interest in tech meetups than women, would cause fewer tech meetups to be recommended to women. That could lead to fewer women knowing about these events and attending them, creating a vicious cycle. Such factors may be difficult to spot when designing algorithms, but it is important to address them in order to stop perpetuating bias.
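Here is a rough sketch of the kind of loop Thomas describes, with made-up numbers rather than Meetup’s actual system. It compares a recommender that feeds gender-skewed engagement back into who sees tech meetups against one that ignores gender, as Meetup’s engineers chose to do.

```python
# Toy recommender loop (hypothetical numbers, not Meetup's real system):
# men start with slightly higher expressed interest in tech meetups; if the
# recommender conditions on that signal, the exposure gap widens each round.

interest = {"men": 0.35, "women": 0.30}   # assumed underlying interest rates
exposure = {"men": 0.50, "women": 0.50}   # share of tech-meetup recommendations shown to each group

def run(rounds, use_gender_signal):
    exp = dict(exposure)
    for _ in range(rounds):
        signups = {g: interest[g] * exp[g] for g in exp}  # observed engagement
        if use_gender_signal:
            # recommend in proportion to each group's observed signups
            total = sum(signups.values())
            exp = {g: signups[g] / total for g in exp}
        else:
            # Meetup-style choice: do not condition recommendations on gender
            exp = {g: 0.50 for g in exp}
    return {g: round(v, 2) for g, v in exp.items()}

print("gender used as a signal:", run(10, use_gender_signal=True))
print("gender excluded        :", run(10, use_gender_signal=False))
```

In the first case a small initial interest gap compounds into a large exposure gap; in the second, everyone keeps seeing tech meetups and the loop never starts.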

Both articles point out that models are not neutral, because they make choices about what information to emphasize. As a result, the creators of algorithms have an ethical responsibility to consider both the intended and unintended consequences of their work, because those consequences have serious repercussions for people. I think that algorithms are useful and can be beneficial, but it is important to remember that they are simplifications of a complex real world and need human oversight. When algorithms are used to judge people, their designers should be cautious about relying on factors that are merely correlated with an outcome, particularly factors outside a person’s control. Otherwise, these algorithms risk being even more unfair than the systems they were designed to replace.

https://www.forbes.com/sites/parmyolson/2018/02/15/the-algorithm-that-helped-google-translate-become-sexist/#ac652037daa2
