Is AI blameworthy?

When I first opened the website for today’s reading, I was a little confused about what it was. But it didn’t take long before I understood that it is a tragic and emotional story about humans in an age of ubiquitous technology. Before sharing my own thoughts about the story, I would like to sum up the narrative. The story is set in a futuristic world in which autonomous vehicles are an ordinary means of transportation. From the fictional scholarly works Anna cites, we may presume that the Artificial Intelligence (AI) driving the autonomous cars in that world is so advanced that it learns from every human datum and is capable of “calculating” the consequences of accidents. In other words, when faced with a dilemma in which it (I am using the pronoun “it” here) will inevitably kill someone or something regardless of which choice it makes, it can calculate the best outcome and act accordingly. In the story, Anna, the fictional “writer”, loses her three-year-old daughter Ursula because of a decision the vehicle’s AI makes. Ursula falls in front of an autonomous car along with an endangered bird, and the AI decides that the bird, which is listed as a protected animal under four Wildlife Preservation Acts, should be saved rather than the girl, thus killing her. What we read is the scholarly essay Anna writes about how autonomous vehicles are dangerous to people.

When I read it for the first time, I was saddened by the hidden story and felt for Anna’s rage. However, when I read it again, I started to think about the other “protagonist” in this story – the AI. Undoubtedly, this tragic story may immediately arouse discontent with and fear of AI, given strong words like “murder”, “manslaughter” and “kill”, which usually refer to human acts. However, does AI really bear moral responsibility as a human does?

As the story is set in the future and most of the “books” cited in the text are fictional, I do not know how human-like the AI is in this context (the fictional citations seem to suggest that AI in that world has consciousness, but we do not know the details). Nevertheless, if I try to understand AI with today’s knowledge, I do not think AI has consciousness. Moral responsibility also requires free will. Yet all of the AI’s decision-making processes are programmed; it does not have free will in the common sense (I am not going to go deep into the philosophical debate about free will here). Moreover, “murder” requires that whoever commits it has “intention”, and I do not believe that the AI kills intentionally, even though it can “think”.

Moreover, even if we assume that AI in that future world can be so advanced and human-like, why do humans ask it to answer a question we humans cannot? When you are caught in a trolley problem, you are already in an ethically impossible position – whether you act or not, you kill. There is no morally correct answer to the trolley problem for humans. And since AI is created by humans and learns from human culture, why should we expect it to give a “correct” answer and blame it more severely than we blame human drivers? Yes, AI may be able to process more information in a second – if not all of it – than a human ever could. However, even if one day we had all the information, every cause and effect, within our knowledge, I do not think there will ever be a correct answer to the trolley problem, because there will never be an objective measurement for lives. Even if we adopt one of the most famous consequentialist approaches, Utilitarianism, Ursula’s case would not be solved either. Since some utilitarian theories count the pain and happiness of all lives, humans and birds included, how can we determine that a bird’s life (not to mention an endangered one, whose loss may affect the whole ecosystem) is not more valuable than that of a three-year-old human being?

So who is really blameworthy here? In my opinion, not the AI, but the people who think that AI can be a god and solve all human problems, and who therefore rely solely on it to do our job – thinking, feeling and trying to do the moral thing. It does not matter whether AI has consciousness or not. At the end of the day, every individual needs to decide on their own and take moral responsibility. Anna’s story is indeed heart-breaking, but I don’t think we should believe that AI is the culprit in this case.