Machine Learns Death

We have discussed the idea of death and its many interpretations through the lens of today's technology. I would like to discuss a machine learning algorithm that can predict our premature death (https://phys.org/news/2019-03-artificial-intelligence-premature-death.html).

This algorithm was designed to predict premature deaths in order to support preventative healthcare in the future. The researchers tested “random forest” and “deep learning” models trained on health data from over 500,000 people between the ages of 40 and 69. The resulting models were able to flag warning signs of serious disease better than expert-built models and standard regressions. Because these models can be further “tuned” with more data, the accuracy and reliability of their predictions can keep improving. The researchers believe that AI models like these will play an essential part in treatment because they can personalize care around an individual's lifestyle choices. This brings forth images of doctor visits that include a questionnaire on one's current eating and exercise habits, adding data points to the models for future predictions: a somewhat constant exposure to the idea of death, slowly pushed back through the model with a healthy diet and rigorous exercise.
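To make the “random forest” idea concrete, here is a minimal sketch of how such a risk model might be trained, assuming Python with scikit-learn and entirely made-up stand-in data; the study's actual features, cohort, and validation were far more involved than this.

```python
# Minimal sketch of a random-forest mortality-risk model.
# The features and labels below are random placeholders, NOT the study's data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-ins for demographic/lifestyle features
# (age, BMI, smoking status, etc.); real health records would go here.
X = rng.normal(size=(1000, 6))
y = rng.integers(0, 2, size=1000)  # 1 = premature death during follow-up

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Predicted probability of premature death for unseen individuals;
# feeding in more (real) data is how such a model gets "tuned".
risk = model.predict_proba(X_test)[:, 1]
print(f"Mean predicted risk: {risk.mean():.2f}")
```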

It is interesting to see how predictors of serious and fatal diseases can be modeled and used to our advantage with today's technology. This technology can only improve, and more sophisticated algorithms will spring forth in the future. It is strange and somewhat empowering to have gained, through technology, another avenue in life that can ward off the eventuality of our demise.

Virtual Reality Gaming Fizzling Out

We’ve discussed video games and their representations of death as a gameplay mechanic or theme, but I would like to discuss the death of a gameplay platform. Many of us have heard of the virtual reality (VR) revolution brought about by Oculus and HTC in 2016, and some have even had the pleasure of being immersed in their virtual worlds. Many, including me, were hopeful and excited about what that would mean for the state of gaming. However, the platforms are very expensive and complicated, and they have several issues that have not been worked out yet (https://uploadvr.com/is-vr-dead/).

The headsets work out to about $349 for the Oculus and $499 for the HTC Vive, and that does not include the price of a decently powered PC to run the VR programs and games. As one can guess, sales figures do not look great because of this, which gives developers little incentive to build programs for such a small audience.

This does not bode well for the gaming industry, but the professional world has invested heavily in VR because of its enormous potential for training. This area of development was one of the original target audiences, and its adoption rate has been insane compared to the lackluster turnout for gaming. We have essentially witnessed the shift and end of a technological movement, caused by lack of interest and logistical difficulties.

Death of Traditional Driving

We’ve discussed the end of traditional driving as we know it in class with the rise of autonomous vehicles, but recently, one of the industry's leading figures, Elon Musk, has been receiving criticism over his plan for the future (https://www.businessinsider.com/waymos-execs-defend-lidar-2019-5).

Elon Musk, CEO of Tesla, has completely removed lidar from his plans for autonomous cars, claiming that multiple cameras plus AI recognition are the way of the future. Lidar uses sensors that emit laser pulses and detect the light that bounces back, letting the car “see” what is around it. Musk says the technology is too expensive and a “fool's errand”. Waymo, one of the companies that pioneered lidar, called this approach “very, very risky”, arguing that one would need the very best camera systems to handle autonomous driving and that a combination of lidar and cameras provides the safest experience. Waymo also brings up one of lidar's biggest strengths: it still manages to detect objects in inclement weather and poor lighting conditions. These are fundamentally different approaches to achieving the end of traditional driving as we know it. Musk is promising 1 million autonomous taxis by 2020 while being tight-lipped about the details, and the other companies have been even more reserved about their projections, asserting that “deployments are gated by safety”. We are witnessing the plans for the death of traditional driving, and the success of private companies will decide how the cars of the future avoid hitting each other.
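As a rough illustration of the time-of-flight principle behind lidar (my own simplification, not anything from the article): the sensor times how long a laser pulse takes to bounce back, and the distance falls out of the speed of light.

```python
# Toy time-of-flight calculation: distance = (speed of light * round trip) / 2.
# Real lidar units fire huge numbers of pulses per second across many angles
# to build a 3D point cloud; the echo time below is made up.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in meters."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A pulse returning after ~200 nanoseconds means an object ~30 m away.
print(f"{distance_from_echo(200e-9):.1f} m")
```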

A.I. Justice

STET was definitely a bit confusing for me. At first, I thought STET was an AI that engineers were developing to help edit a story, and I figured they were having a conversation while editing. It turns out “stet” means “let it stand” in proofreading and is an instruction to disregard someone's edit. Once I got over that confusing hurdle, I realized that the story was in the footnotes and article headlines and that it was about AI and self-driving cars.

STET shows the birth and growth of AI, their implementation in cars, and their effects on humanity and on Anna personally. The footnotes link to AI being created, then accepted into cars, and then to the people killed by the self-driving cars. Anna clearly wants to debate the ethics of allowing self-driving cars on the street, whether they have consciousness, and whether they can be held responsible for the deaths of the people killed by these cars.

There is also a personal aspect to this story. Anna is telling it, at least partly, to vent about the death of her daughter, who was hit by a self-driving car. Anna slowly begins to lose her cool as the footnotes go on. She constantly rejects the edits by “Ed”, the editor, and you can feel her annoyance and anger at both the AI and the editor. There is drama and history between them, and it is seeping into the short story. I think Anna, on some level, blames the editor for her daughter's death. Maybe the editor was in the car. She is definitely upset with the editor for not attending her daughter's funeral.

At the core of this story are the ethics of allowing AI in cars and giving them choice. Can you blame the car? Can you blame the AI? Can you blame the humans? These are the questions the short story is asking. My opinion is that since all of this technology is man-made or man-controlled, humans are to blame for every fault technology has and every mistake technology makes.

STET – Toyota vs. Tesla

Today’s reading is STET, a short futuristic science-fiction story by Sarah Gailey that was nominated for the 2019 Hugo Award. The story is written in the style of a literary-analysis excerpt from a textbook, complete with references to articles and their corresponding footnotes. At first, when I opened the website, I was very confused: I kept looking for the start of the short story and figured there was an error on the page. After taking some time to look over what was written, I realized that the story is actually told through the footnotes, which are fictional articles about how AI has been used in autonomous cars.

What struck me the most about the piece was the level of detail that Sarah Gailey included. In a story told in little more than a paragraph, every detail is important, and it is evident that Gailey put a lot of thought into each one. The author of the citation in footnote 11 is Elon Musk, who supposedly wrote the book Driven: A Memoir. This detail made me laugh, as I am sure it did for a lot of other people, because at the time this story was written (2018), Musk and his Tesla cars were the most prominent figures in the area of autonomous cars. However, the car that Gailey uses in the story is a Toyota, not a Tesla. This initially struck me as odd, because Toyota does not seem anywhere near releasing an autonomous car. Then again, Toyotas are very relatable cars, and they represent the type of car an average American would own much better than Teslas do.

The reference that the fictional writer, Anna, makes in her article, however, seems a little more targeted at Toyota than it just being an average, relatable car company, as she writes that the textbook is “drier than a Toyota executive’s apology.” After doing a bit of research, I learned that about 10 years ago, some cars made by Toyota had flaws that led to fatal crashes, including one in particular in which a family of four died from uncontrollable acceleration after the gas pedal stuck to the floor (https://www.nytimes.com/2009/10/03/business/global/03toyota.html). Only after a few months of pressure did a Toyota executive apologize, describing the incident as “extremely regrettable,” which in my opinion seems pretty dry and not very empathetic. While that incident is unrelated to AI and autonomous cars, in both cases the car became a killing machine due to lack of oversight by the company's engineers. Thus, I have to wonder if Gailey also chose Toyota because of this and because of the poor response by Toyota's executive. Maybe by calling out Toyota, Gailey is illustrating that cars will not only hurt people in the future, but that they are already doing it now.

AI

Is it really possible to train an AI?
I found STET to be a really cool piece that had me initially confused by its format until I understood what the author was trying to do. While reading, I kept wondering what a human being would do in the AI's place in the same situation. My guess is that they would obviously save the child and not the woodpecker. In other scenarios, though, to what extent are humans just as prone, or even more prone, to cause accidents than an AI? But precisely because we are beings capable of empathy, morality, and individual decision-making (and unpredictability), I believe that humans can own up to their mistakes and take responsibility, and hence should be the only ones driving vehicles rather than AI-driven cars.
Even in shocking future scenarios like this one, the responsibility lies with the people who manufactured the AI and programmed its morals. Because an AI can only function within its programming and cannot generate its own thoughts, and because people cannot account for every possible scenario and program for it in advance, the AI will inevitably face situations its creators never anticipated. People, on the other hand, can think for themselves and are not confined to the limits of a “program.”
What really troubles me is the question of “how they decide who to murder.” The fact that a programmer is inputting data to decide whom the AI should choose to “kill” in a possible scenario already has me questioning whether it is truly ethical to program an AI with data about which human being or thing is more valuable than another. Who is deciding that, first of all, and who is approving it?
You cannot overlook the race, sex, gender, etc., that come into play when they decide “how they decide who to murder.” And how do AIs identify which human is more “valuable” than another? How do they even recognize who is a human being while traveling at high speed?

Exchange of Death with Autonomous Vehicles

Something I briefly brought up in class is agency in vehicle-related deaths and how it changes with autonomous cars. I will acknowledge that, ultimately, people are expected to pay attention and be accountable for the car while the autopilot is on; however, the action is still the vehicle killing a human (for instance, the vehicle accelerating itself into a tractor trailer and killing the driver). Most vehicle deaths today feature the vehicle as the tool or medium by which one person was killed by another. How would opinions on driving change if a hypothetical 10,000 deaths from drunk drivers were prevented while 1,000 new deaths were caused by glitches in the vehicle itself?

One thing that I think is worth considering is the nature of these vehicle deaths. If a function similar to the activity we did in class were ever implemented, would it really be ethical to kill an obese old woman legally going through a crosswalk if it saves a family of four in the vehicle? Perhaps the death toll is reduced to a quarter of what it would otherwise have been, but is forcing someone who did nothing wrong to bear the burden of a family that did not keep their vehicle in proper condition really the proper action? Since the vehicle is their property, it, along with all its failures, should be their burden. Ultimately, the value of a life is put on a scale against other lives in a way that we do not currently have to face.
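To make this concrete, here is a minimal sketch of the kind of decision function that class activity implied. Every name and weight in it is an assumption I invented for illustration, and that is exactly the problem: someone has to pick these numbers.

```python
# Hypothetical harm-minimizing chooser for a crash dilemma.
# The weights are arbitrary; changing them changes who dies.
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    lives_lost: int
    at_fault: bool  # did these people contribute to the hazard?

def harm_score(outcome: Outcome) -> float:
    # Made-up weighting: each life counts as 1, discounted slightly
    # when the victims bear some fault (e.g., an unmaintained vehicle).
    fault_discount = 0.8 if outcome.at_fault else 1.0
    return outcome.lives_lost * fault_discount

swerve = Outcome("hit the pedestrian in the crosswalk", 1, at_fault=False)
stay = Outcome("crash, killing the family of four", 4, at_fault=True)

# The car "chooses" whichever outcome scores lower.
print(min([swerve, stay], key=harm_score).description)
```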

While these are serious issues that I think are worth contemplating, I do believe that successful autopiloting would be a great addition to a society where so much death is caused by vehicles, namely deaths from drunk and distracted driving.


Is AI blameworthy?

When I first opened the website for today's reading, I was a little confused about what it was. But it didn't take long before I understood that it is a tragic and emotional story about humans living with the wide use of technology. Before sharing my own thoughts about the story, I would like to sum up the narrative. The story is set in a futuristic world in which autonomous vehicles are an ordinary means of transportation. From the fictional scholarly works Anna cites, we may presume that the artificial intelligence (AI) driving the autopilot cars in that world is so advanced that it learns from every human datum and is capable of “calculating” the consequences of accidents. In other words, when it faces a dilemma in which it will definitely kill something regardless of which choice it makes (I am using the pronoun “it” here), it can calculate the best outcome and act accordingly. In the story, Anna, the fictional “writer”, loses her three-year-old daughter Ursula because of a decision the vehicle's AI makes. Ursula falls in front of an autopilot car along with an endangered bird, and the AI decides that the bird, which is listed as protected by four Wildlife Preservation Acts, should be saved rather than the girl, thus killing her. What we read is the scholarly essay Anna writes about how autonomous vehicles are dangerous to people.

When I read it for the first time, I was saddened by the hidden story and felt for Anna's rage. However, when I read it again, I started to think about the other “protagonist” in this story: the AI. Undoubtedly, this tragic story may immediately arouse discontent and fear of AI, with strong words like “murder”, “manslaughter”, and “kill”, which usually refer to humans. However, does AI really have moral responsibility the way a human does?

As the story is set in the future and most of the “books” cited in the text are fictional, I do not know how human-like AI is in this context (the fictional citations seem to suggest that AI in that world has consciousness, but we do not know the details). Nevertheless, if I try to understand AI with today's knowledge, I think AI does not have consciousness. When we talk about moral responsibility, free will is also required. Yet all of an AI's decision-making processes are programmed; it does not have free will in the common sense (I am not going to go deep into the philosophical debate about free will here). Moreover, “murder” requires that the thing committing it have “intention”, and I do not believe the AI intentionally kills, even though it can “think”.

Moreover, even if we assume that AI in a future world can be so advanced and human-like, why do humans ask it to answer questions we humans can't? When you are caught in a trolley problem, you are already in an unethical position: whatever you do or don't do, you kill. There is no morally correct answer to the trolley problem for humans. And since AI is created by humans and learns from human culture, why should we expect it to give a “correct” answer and blame it more severely than we blame human drivers? Yes, AI may be able to process more information in a second, if not all of it, than a human ever can. But even if one day we have all the information, causes, and effects within our knowledge, I do not think there will ever be a correct answer to the trolley problem, because there will never be an objective measurement of lives. Even under one of the most famous consequentialist approaches, utilitarianism, Ursula's case cannot be solved either: since in some utilitarian theories every pain and happiness from all lives is counted, humans' and birds' alike, how can we determine that a bird's life (not to mention an endangered one that may affect the whole ecosystem) is not more valuable than a three-year-old human being's?

So who is really blameworthy here? In my opinion, not the AI, but the people who think AI can be a god that solves all human problems, and who rely solely on it to do our job: thinking, feeling, and trying to do the moral thing. It does not matter whether the AI has consciousness or not. At the end of the day, every individual needs to decide on their own and take moral responsibility. Anna's story is indeed heartbreaking, but I don't think we should believe that the AI is the culprit in this case.

Ramblings on STET + AI Ethics

Perhaps the most prominent fear around the growing artificial intelligence industry focuses on the autonomy of the robots.  More specifically, there seems to be a lot of anxiety about the unpredictability, or lack of morals, that these robots may exhibit when capable of making their own decisions.  As humans, we question the ability of machines to make rational, ethical decisions, which also must align with our own conceptions of morality.  However, figuring out just what our own societal morals and ethics are may be harder than originally thought.

In the STET piece we read for class, Ed, who appears to be commenting on the writing of the fictional Anna, expresses concern “about subjectivity intruding into some of the analysis” at the top of the article.  At first, I was highly confused by what this website was supposed to be, but I began to get a better idea once I figured out that “stet” is used by writers to tell editors to let the marked phrase stand.  As I reevaluated the article with this knowledge, I noticed that Anna writes “STET” in response to every comment Ed makes.  Ed gets progressively more personal with Anna as the comments continue, ending with “We haven’t seen you in months, Anna. Everyone here cares about you. Please let us help?”.  Predictably, Anna responds “STET”, insisting that her original words stand as written.

So where do ethics regarding artificial intelligence fit into this?  Going back to the quote about subjectivity mentioned earlier, this brief exchange over a proofread between Anna and Ed shows the complex morality that varies from one human to another.  Unfairly, artificial intelligence is expected to have some objective, unflawed sense of ethics that is somehow more developed than that of humans.  If Ed and Anna are unable to reach an agreement, then how can the technology we program avoid our own biases?  Ed and Anna are a microcosm of the greater crisis we face: what ought our morals to be?  Highlighting this dilemma also raises the question of why we, as individuals, place value on certain things but not others.  The issue is complex, and seemingly every answer raises at least five more questions.

With certain artificial intelligence, like self-driving cars, the ethics typically follow a model of keeping the passengers safe and saving as many pedestrian lives as possible, which is relatively straightforward.  When the situation is not so clear-cut, all expectations are thrown out the window.  For example, suppose an artificially intelligent lawnmower must choose between running over a cat or a dog in order to avoid hitting a rock and killing the man walking down the street.  The mower must have some program that allows it to make this decision in a split second.  Being the cat lover that I am, I’d program the AI to save the cat over the dog every time.  But obviously, that preference is not universally popular.  So what should the robot do?  Is there even a ‘right’ answer?  Ed and Anna show us that an agreement will probably never be reached.
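A toy sketch of what that hard-coded preference might look like (entirely my own invention, cat bias included):

```python
# A fixed ranking decides what the mower spares first; the discomfort
# is that somebody has to write this list.
SPARE_PRIORITY = ["human", "cat", "dog", "rock"]  # lower index = spared first

def choose_to_spare(obstacles: list[str]) -> str:
    """Return the obstacle the mower should swerve to protect."""
    return min(obstacles, key=SPARE_PRIORITY.index)

print(choose_to_spare(["dog", "cat"]))  # -> "cat", saved every time
```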

Code, Consumerism, & Capitalism

The two articles we read for today were jarring in that these two people had to write them in the first place before anything could be done about the ways in which the code and algorithms that track your online presence do no good for people in grief. Personally, even as someone who isn't going through what these people have gone through, I find these “Year in Review” videos, as well as Facebook's “friendship” celebration videos and “how many likes you got this year” videos, to be freaky and unnecessary. I do not feel any warmth or remembrance while watching them; rather, I just see a collection of pictures I am tagged in on Facebook, which Facebook can track and declare part of the significance of my year in review, even though those moments may not be what I really recall from that year. As Facebook tries to make its site more personable, “caring,” and “friendly,” I find it all the more weird and creepy that it does so by taking my own information and doing something more with it. For me, this comes across as a statement that my information really isn't mine once it's out on Facebook, or anywhere else, as these social media corporations turn me into data and then own it.
From what I know, there is still very little you can do to turn off the “Year in Review,” considering that article was written four years ago. The only apology Facebook has made was to the person who wrote “Inadvertent Algorithmic Cruelty”: “[The app] was awesome for a lot of people, but clearly in this case we brought him grief rather than joy,” Gheller said. “It’s valuable feedback. We can do better — I’m very grateful he took the time in his grief to write the blog post.”
It was jarring that he took the article as “valuable feedback,” as though someone's grief is just more valuable data for Facebook to incorporate for the well-being of its company.

As we live in a capitalistic and consumeristic society, everything we do online, especially whatever we search for, buy, or show interest in, is monitored and collected as data, so that even if you searched for, say, a certain pair of shoes only once, it pops up as an ad as you scroll through Facebook or any other website. As amazing as the internet is, it is also an outlet for advertisements that feed consumerism: constant reminders that you need this, you need that, and oh, that thing you searched for, you should get it. What these corporations prioritize is not the well-being of people but turning every chance they get into a possibility for profit. It is sad that even though tech companies absolutely knew that Gillian Brockell's child was stillborn, they do not care about that data, because it does nothing for their profit. Turning off pregnancy ads serves them nothing, as they only really care about data that can be used for consumeristic purposes. I'm not sure there is even a way to completely disable ads, since websites use advertisements for income, and people only have the option to report an ad or close it. What else can we do to make sure things like this don't happen, beyond downloading a program that blocks ads on your laptop (which would also prevent you from accessing some sites)?

My main question is: why are these responsibilities always put on the individual rather than on the corporations that cause these problems in the first place yet do nothing about them? A side note as an example: people urge each other not to use straws, which is great, but my problem is that all the responsibility for consumerism and the environment is put onto the individual (and yes, we as individuals need to be responsible), while we rarely ask what the corporations are doing to change the ways they manufacture products and preserve the environment. I feel like the sad reality of capitalism is that profit is always prioritized over the well-being of anything.