AI

Is it really possible to train an AI?
I found Stet to be a really interesting piece; its format initially confused me until I understood what the author was trying to do. Throughout the article, I kept wondering what a human being would do in the AI's place in the same situation. My guess is that they would obviously save the child and not the woodpecker. In other scenarios, though, to what extent are humans just as prone, or even more prone, to cause accidents than an AI? Still, precisely because we are beings capable of empathy, morality, and individual, unpredictable decision-making, I believe humans can own up to their mistakes and take responsibility, and therefore should be the only ones driving vehicles rather than AI-driven cars.
Even in shocking future scenarios like this one, the responsibility lies with the people who manufactured the car and programmed the AI's morals. An AI can only act within its programming and cannot generate thoughts of its own, and no programmer can anticipate every possible scenario and write rules for it in advance. People, on the other hand, can think for themselves and are not confined to the limits of a "program."
What really troubles me is the phrase "how they decide who to murder." The fact that a programmer is entering data that decides whom the AI should "kill" in a given scenario already has me questioning whether it is ethical at all to program an AI with judgments about which human being or thing is more valuable than another. Who is making that decision in the first place, and who is approving it?
You also cannot overlook the race, sex, gender, and other biases that come into play in deciding "how they decide who to murder." And how does an AI identify which human is more "valuable" than another? How does it even recognize who is a human being while traveling at high speed?
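To make concrete what "programming an AI's morals" could even look like, here is a purely hypothetical sketch in Python. The labels, weights, and function names are all invented for illustration; nothing here comes from Stet or from any real self-driving system. Seeing it written out as code is exactly what unsettles me: at some point, somebody typed those numbers.

```python
# Purely hypothetical sketch of a hard-coded "value ranking."
# Every label and weight here is made up to illustrate the ethical problem,
# not to describe how any real autonomous-vehicle system works.

from dataclasses import dataclass

# A human somewhere chose these numbers. That choice is the whole problem.
VALUE_RANKING = {
    "child": 1.0,
    "adult": 0.8,
    "animal": 0.1,
}

@dataclass
class DetectedObject:
    label: str         # whatever the perception system thinks it sees
    confidence: float  # how sure it is, at high speed, in a split second

def choose_what_to_protect(detections: list[DetectedObject]) -> DetectedObject:
    """Return the detection the car prioritizes protecting.

    Note everything smuggled into this one line: that the labels are right,
    that the ranking is legitimate, that "value" can be a number at all.
    """
    return max(detections, key=lambda d: VALUE_RANKING.get(d.label, 0.0) * d.confidence)

if __name__ == "__main__":
    scene = [
        DetectedObject("child", 0.62),   # partially hidden, low confidence
        DetectedObject("animal", 0.97),  # the woodpecker, clearly visible
    ]
    print(choose_what_to_protect(scene).label)
```

Even in this toy version, the questions from above are unavoidable: who wrote the ranking, who approved it, and what happens when the perception system mislabels a person in the fraction of a second before impact?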