Ramblings on STET + AI Ethics

Perhaps the most prominent fear around the growing artificial intelligence industry focuses on the autonomy of robots.  More specifically, there seems to be a lot of anxiety about the unpredictability, or lack of morals, that these robots may exhibit once they are capable of making their own decisions.  As humans, we question whether machines can make rational, ethical decisions that also align with our own conceptions of morality.  However, figuring out just what our own societal morals and ethics are may be harder than we originally thought.

In the STET piece we read for class, Ed, who appears to be commenting on a piece written by the fictional Anna, expresses concern “about subjectivity intruding into some of the analysis” at the top of the article.  At first, I was highly confused about what this website was supposed to be, but I began to get a better idea once I figured out that “stet” is a mark writers use to tell editors to let the original phrase stand.  As I reevaluated the article with this knowledge, I noticed that Anna writes “STET” in response to every comment Ed makes.  Ed gets progressively more personal with Anna as the comments continue, ending with “We haven’t seen you in months, Anna. Everyone here cares about you. Please let us help?”  Predictably, Anna responds “STET,” meaning, as ever, that she wants her text to stand as written.

So where do ethics regarding artificial intelligence fit into this?  Going back to the quote about subjectivity mentioned earlier, this brief exchange over a proofread between Anna and Ed shows how complex morality is and how much it varies from one human to the next.  Unfairly, artificial intelligence is expected to have some objective, unflawed sense of ethics that is somehow more developed than our own.  If Ed and Anna are unable to reach an agreement, then how can the technology we program avoid our own biases?  Ed and Anna represent a microcosm of the greater crisis we face: what ought our morals to be?  Highlighting this dilemma also raises the question of why we, as individuals, place value on certain things but not on others.  The issue is complex, and seemingly every answer raises at least five more questions.

With certain artificial intelligence like self-driving cars, the ethics typically follow a relatively straightforward model: keep the passengers safe and save as many pedestrian lives as possible.  When the situation is not so clear-cut, all expectations are thrown out the window.  For example, suppose an artificially intelligent lawnmower must run over either a cat or a dog in order to avoid hitting a rock and killing the man walking down the street.  The mower must have some program that allows it to make this decision in a split second.  Being the cat lover that I am, I’d program the AI to save the cat over the dog every time.  But obviously, that preference is not universally shared.  So what should the robot do?  Is there even a ‘right’ answer?  Ed and Anna show us that an agreement will probably never be reached.
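To make that concrete, here is a toy sketch of my own (not anything a real mower actually runs): the “program” could be as crude as a hard-coded ranking of what the machine should try hardest not to hit.  The names and the ordering below are entirely my invention and my bias; swap the cat and the dog and you get a different moral agent.

```python
# My (subjective!) ranking of what the mower should most want to spare,
# from most valued to least valued.
SPARE_PRIORITY = ["human", "cat", "dog", "rock"]

def choose_path(obstacles_by_path: dict[str, str]) -> str:
    """Swerve down the path whose obstacle we value least."""
    def cost_of_sparing(path: str) -> int:
        obstacle = obstacles_by_path[path]
        # A lower index means more valued, so hitting it is worse.
        if obstacle in SPARE_PRIORITY:
            return SPARE_PRIORITY.index(obstacle)
        return len(SPARE_PRIORITY)  # unknown obstacles are valued least
    # Pick the path whose obstacle sits lowest on the priority list.
    return max(obstacles_by_path, key=cost_of_sparing)

# The lawnmower's dilemma: the cat is on the left path, the dog on the right.
print(choose_path({"left": "cat", "right": "dog"}))  # prints "right": the cat is spared
```

The point of the sketch is not the code itself but that every line of it is an opinion; whoever types out that list is quietly deciding the ethics for everyone downwind of the machine.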