Ironically Mending the Physical Using Digital Tech

We’ve discussed human death, the death of digital media, and the death of physical artifacts. What’s interesting is exploring the intersection of the two: using the digital to mend the destruction and death of physical things.

On April 15, 2019, one of the world’s great landmarks, Notre Dame Cathedral, suffered “colossal damages” after a fire spread through its roof. Ironically, technological advancement may in fact be to blame for the tragic accident.

Having survived wars, natural disasters, and more, the cathedral nearly met its demise following the installation of electronic technology – a rather ironic turn. In fact, the cathedral’s rector reported that a “computer glitch” may be to blame. From reports, it sounds as though the elevator system in the cathedral may have suffered a short circuit, sparking the fire.

It’s not the end of the landmark, though. Thankfully.

Ubisoft, maker of the historical video game series “Assassin’s Creed,” has announced a pledge of over $500,000 toward the rebuilding of the monument.

That’s not all – during the 14-month creation of Assassin’s Creed Unity, the studio digitally reconstructed Notre Dame to scale. As a result, many people who own the game have been able to relive the experience of visiting the cathedral in a virtual space.

The similarity is incredible, even for a game that’s more than four years old. In an interesting turn of events, the use of computers might just help rebuild what was destroyed by a computer. It’s possible that the cathedral’s future reconstruction will be guided by the game’s 3D model.

It’s quite encouraging to hear of such a positive use of digital technology. Will the cathedral ever be the same, though? And what would you make of having only the digital version to relive exactly how it was? In another one of my attempts at being hopeful about future technology, I think that eventually, using technology like VR, we may be able to experience digitally what is otherwise lost or damaged in our physical world.

Wild West of Autonomy

As technology becomes smarter and smarter, society has hit a point where moral dilemmas are a hot topic in the world of automation. One of the most exciting new technologies, I think, is the self-driving car. Cars have had technology-assisted driving for almost a decade now (rear-view backup cameras, self-parallel parking, remote start, etc.), but we have finally reached the point where self-driving cars have hit the market. Unfortunately, that means we have also reached the point where a self-driving car has gotten someone killed. So what should we do?

As the articles state, Tesla’s Autopilot had been used on public highways for tens of millions of miles before causing a single fatality. I think this reflects a lot of the misunderstanding around automation and where society stands in implementing it. Automation is a very new technology in the realm of day-to-day life, so it still seems very unorganized to many people. To me, it feels like we are living in the “wild west” days of robotics, because we simply do not have many rules about what robots can or cannot do. This matters because the laws and regulations we put in place for robots now will dictate the kinds of advancements that can be made in this area of technology. Because of that, I think it is very important for people to understand just what kind of technology they have access to (driver assist, not full autopilot) in order to use it properly and not misattribute unfortunate events to the technology itself.

On the topic of self-driving cars, I do think this is one of the most important areas where this kind of technology can be implemented. Humans are bad drivers, and there is almost no way to communicate with other cars on the road (except brake lights and blinkers, but humans don’t even use those all the time). With self-driving cars, we have the ability to create a network of cars that can all communicate with one another – no more accidentally changing lanes into someone else’s car or failing to see that the cars in front of you just came to a sudden stop. Autonomous cars can already help with these things. The really interesting question is what this means for traffic. Cars that can communicate and adjust speeds accordingly would make things like highway merging more efficient (read: less slowing down), and intersections wouldn’t need stop signs or stoplights. These are also the areas of roads that see the most human error and the most accidents, which could be greatly reduced through the use of computers and inter-car connections. Obviously, this kind of stoplight-eliminating technology is a long way off, but it could be possible if companies are given enough time to develop technology that is affordable for average drivers.
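Just to make the inter-car communication idea concrete, here is a minimal sketch in Python. The message format and function names are entirely hypothetical – real vehicle-to-vehicle systems are far more complex – but it shows the basic payoff: a follower can slow down the instant a car ahead reports hard braking, with no brake lights or human reaction time involved.

```python
# Hypothetical sketch of inter-car communication: each car broadcasts its
# state, and a follower slows as soon as a car ahead reports hard braking.

from dataclasses import dataclass

@dataclass
class V2VMessage:
    car_id: str
    lane: int
    speed_mph: float
    hard_braking: bool

def adjust_speed(my_speed: float, my_lane: int,
                 nearby: list[V2VMessage]) -> float:
    """Return a new target speed given broadcasts from nearby cars."""
    for msg in nearby:
        if msg.lane == my_lane and msg.hard_braking:
            # Match the braking car's speed instead of waiting to see it.
            return min(my_speed, msg.speed_mph)
    return my_speed

# Example: a car in our lane broadcasts that it has braked to 30 mph.
print(adjust_speed(65.0, 2, [V2VMessage("A", 2, 30.0, True)]))  # -> 30.0
```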

Turning 1s and 0s into Life and Death

As a computer science major, I’m a pretty big supporter of technological advancement. I’d like to see autonomous robots helping us improve daily life, cars driving themselves, implanted health monitors – but I find articles like today’s readings almost more important than the technology itself. Robin Henig, author of the NY Times article we read for today, mentions that it’s not that big of a leap from a Roomba to an “autonomous home-health aide.”

It’s not so much that technology is holding us back; instead, we’ve slowed down to solve the moral dilemmas that stem from digital autonomy – which is warranted. I’m therefore a huge fan of the collaboration mentioned in the article between computer scientists and “philosophers, psychologists, linguists, lawyers, theologians and human rights experts,” mostly because I think we all recognize that we’re bound to make these technological improvements. I don’t think ‘forgetting to program’ something will be the issue; what resonated with me the most is the question of how the bot justifies its actions. Right now it’s all human-built logic, and not a very good replica of human judgment. The robots have to learn pretty binary decisions – like whether to dispense medicine or withhold it – through preset rules communicated to them, be it through connectivity, programming, learning, whatever. The good news is that maybe in the future the robot won’t need to contact a supervisor before giving a patient medicine; instead, it can double-check its offline machine-learning model to decide whether it should act.
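To illustrate the pattern I mean, here is a minimal sketch in Python. All the names and thresholds are hypothetical (nothing here comes from the article): preset safety rules gate the decision first, and a learned model’s confidence decides whether the bot may act without paging a supervisor.

```python
# Hypothetical sketch: preset rules gate the decision, and an offline
# model's confidence decides whether the bot may act on its own.

DOSE_LIMIT_MG = 500        # hard rule: never exceed this single dose
MIN_HOURS_BETWEEN = 4      # hard rule: minimum spacing between doses
CONFIDENCE_THRESHOLD = 0.95

def should_dispense(requested_mg: float, hours_since_last: float,
                    model_confidence: float) -> str:
    """Decide among dispensing, withholding, or deferring to a human."""
    # Preset rules written in advance by programmers.
    if requested_mg > DOSE_LIMIT_MG or hours_since_last < MIN_HOURS_BETWEEN:
        return "withhold"
    # Offline model check: act autonomously only when confident.
    if model_confidence >= CONFIDENCE_THRESHOLD:
        return "dispense"
    return "ask supervisor"

print(should_dispense(200, 6.5, 0.97))  # -> "dispense"
print(should_dispense(200, 6.5, 0.60))  # -> "ask supervisor"
```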

I mention this a lot in these blogs, but we live in a cool time where we are connected to limitless information, which makes these decisions easier and easier to support (even if the decision itself is hard). We have access to all sorts of stats and models that increase our understanding of the repercussions. We’ve also reached a really cool level in tech where we can model things digitally before putting them into practice. Take self-driving cars, for example: without risking any physical damage, coders and programmers can test algorithms in video games. One person was able to test his driving algorithms in Grand Theft Auto, for example. Or take OpenAI, the project Elon Musk co-founded: they programmed a bot that replicates human movement and decision-making seamlessly in the video game Dota 2. While not perfect examples, they definitely illustrate how doing things digitally gives tech people room to code and incorporate emotion without endangering physical lives. Another plus is that you can test things at a personal level, not just at big corporations like Tesla.
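As a toy illustration of the “test it in a game first” idea, here is a sketch of evaluating a lane-keeping policy entirely in simulation. The environment and the policy are invented for this post (they are not taken from the GTA or OpenAI projects), but the principle is the same: score the algorithm thousands of times before it ever touches a real road.

```python
# Hypothetical sketch: score a steering policy in a simulated
# lane-keeping loop, with zero physical risk.

import random

def steering_policy(lane_offset: float) -> float:
    """Steer proportionally back toward the lane center."""
    return -0.5 * lane_offset

def simulate(steps: int = 1000) -> int:
    """Count steps where the car drifts out of the lane (|offset| >= 1)."""
    offset, violations = 0.0, 0
    for _ in range(steps):
        # Policy correction plus random noise standing in for wind/bumps.
        offset += steering_policy(offset) + random.gauss(0, 0.1)
        if abs(offset) >= 1.0:
            violations += 1
    return violations

print(f"Lane violations: {simulate()} out of 1000 steps")
```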

I want to point out, though, that these articles were written a couple of years ago, which shows how close we were to autonomy even just four years ago. I’d like to think we’ve come a long way in replicating human decisions since then, so the 2015 New York Times article’s tone about robots lacking emotional replication feels a little out of date. I like how Jack Stewart puts it in Wired: “the system isn’t flawed. It just isn’t fully advanced yet.” One last point I want to bring up is this quote from the Times article: “a robot’s lack of emotion is precisely what makes many people uncomfortable.” I do think it’s still hard to fathom, let alone code, how a robot feels. I think the best practice is to model off of humans’ emotions, and even that is scary. In any case, we’re turning 1s and 0s into life and death, and I’m glad to see society is discussing that.

Can robots have ethical dilemmas?

Robin Marantz Henig’s article “Death by Robot” discussed the myriad ethical dilemmas posed by those trying to blend the worlds of morality and machinery. Should Fabulon give a patient medicine to ease their suffering, even without permission? Should a military robot shoot someone because of the weapon they’re carrying, or because of what they’re wearing? Should a car on autopilot choose to hit another car or a pedestrian? The questions about self-driving cars seemed the most pressing because that technology was the closest to being actualized.

Illustration by Erik T. Johnson via The New Yorker

However, I think that conflating the ethical dilemmas faced by humans with those faced by self-driving cars is inappropriate. Although John Seabrook’s personal story, “Black Ice, Near-Death, and Transcendence on I-91,” shows just how much your brain slows down during a car accident, I think that the decisions a driver makes are mostly instinctual reactions to an unfamiliar situation. You have a split second to choose whether you’ll hit a barrier, a car, or even another person. Often, this probably isn’t much of a choice at all, but more of a subconscious response where our own internal autopilots take over. This is much different from engineers or ethicists thinking about a hypothetical car crash and programming in a response. The difference is that the former is spontaneous while the latter is predetermined – and the former is somehow an easier decision to make, and to live with, than the latter.

In another vein, I wonder whether programming cars to hit safer models over less-safe ones – say, a Volvo instead of a Mini Cooper – would discourage people from buying the safer models. If safety turns you into a target in accidents, why not go with something that is objectively more vulnerable but might be less prone to being hit in the long run?

Deadly Liabilities

Reading about the infamous death of the 40-year-old man killed in a self-driving car accident raises questions about a technology that is supposedly meant to make driving safer. As incidents like these arise with this new type of technology, it comes down to liability: who really killed the driver, who’s to blame, and why. One of the more interesting things I’ve been analyzing is how a traumatic accident like this is supposedly “positive” for Tesla and its outlook on these types of cars. According to “After Probing Tesla’s Deadly Crash, Feds Say Yay to Self-Driving,” this crash was not a negative experience for the company, but rather a beneficial one. The author mentions how the attention the crash brought to self-driving emphasizes how rare an occurrence like this is, as well as how safe the car still is in comparison to regular cars. Yes, Tesla may be facing this legally, but why should they be worried?

Allowing this crash to boost the company’s economic standing, rather than bring attention to the possible dangers of this technology, reduces the man who died to a money booster rather than a genuine human being. People are concentrating on money and power and overlooking the fact that an innocent soul died because of the inaccuracy of a technological product. With this in mind, who should ethically be the one to blame: the driver for turning on self-driving mode, or the car company whose technology caused the car to malfunction?

Autopilot Speeds Up Into Wall?

I think the section “A Cautious Future” in the Wired article brought up some really important points: what Tesla calls “Autopilot” is “great for backing up humans whose attention drifts away from the road, but not yet good enough to take over full-time” (Stewart par. 15). It makes me wonder how important marketing and the idea behind the word “autopilot” are, and what effect they have on how the consumer treats the car. If Tesla were to re-brand the feature and call it something to the effect of “assisted cruise control,” would consumers give the car less control?

While reading the story about Joshua Brown, a related article appeared about a more recent autonomous-car death: that of Apple engineer Walter Huang, 38 years old. Huang’s Tesla was in Autopilot mode on US Highway 101 in Mountain View, California, in the second lane from the left. As the car approached the gore area (the triangular area dividing a split between lanes) separating the main lanes from the exit ramp, it drifted into the gore area and collided with a crash attenuator. A Mazda and an Audi subsequently ran into it.

The odd thing about Huang’s accident was that in the last 3 seconds before the crash, the car sped up from 62 mph (under the speed limit of 65) to 70.8 mph. The crash report by the National Transportation Safety Board offers a step-by-step look at the events leading up to the crash.
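To put those numbers in perspective, here is a quick back-of-the-envelope calculation – just arithmetic on the speeds quoted above:

```python
# Back-of-the-envelope arithmetic on the NTSB speed figures.
MPH_TO_MS = 0.44704                    # 1 mph in meters per second
delta_v = (70.8 - 62.0) * MPH_TO_MS    # speed gained, in m/s
accel = delta_v / 3.0                  # spread over the final 3 seconds
print(f"{accel:.2f} m/s^2")            # ~1.31 m/s^2, a noticeable surge
```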

Like Joshua Brown, Huang did not survive, so we can never really know what was going through the driver’s mind or his level of engagement. The crash report indicates that Huang was alert, because his hands were detected on the wheel during the 60 seconds prior to the crash – though not in the final 6 seconds. In an effort to pin the blame on Huang, Tesla said, “The driver had about 5 seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken” (McCarthy par. 15). It is odd that the driver did not do anything, but Tesla’s argument kind of defeats the whole purpose of a car that drives defensively. Six seconds is a very short time and could fall into the category of distraction, not reliance. In both Brown’s and Huang’s cases, Tesla seemed quick to convey condolences but pin the blame on the driver. While it’s cool that we can access the car’s recorded performance data and learn more about what happened leading up to the crash, it still raises a lot of questions about the car’s technology. Why did the Tesla speed up? Why didn’t it detect that it was not in a lane? Why didn’t it detect the barrier? Was it trying to take the exit ramp, or was it somehow affected by Huang’s intended route?

Works Cited

McCarthy, Kieren. “Oddly Enough, When a Tesla Accelerates at a Barrier, Someone Dies: Autopilot Report Lands.” The Register, 7 June 2018. Web.

“Preliminary Report Highway HWY18FH011.” National Transportation Safety Board.

Stewart, Jack. “After Probing Tesla’s Deadly Crash, Feds Say Yay to Self-Driving.” Wired, 20 Jan. 2017. Web.

Online Memorialization in the Physical World

“Grave sites and websites: a comparison of memorialisation” and “Beyond the Grave: Facebook as a Site for the Expansion of Death and Mourning” both explore the details surrounding interactions between the living and the dead on social media platforms like Facebook. This is obviously a relatively new way to mourn the loss of someone, but it has already become a popular way for people to memorialize loved ones and connect with others who may be going through a difficult time.

Upon reading these articles, I decided to check my own social media profiles to see if I could find examples of this type of memorialization in my own life. Unfortunately, there are two such profiles on Instagram (a Facebook-owned platform) that I follow. It’s been years since those profiles were transitioned to their current status, but I found that revisiting them was still a pretty interesting experience. The profiles, as highlighted in “Grave sites and websites,” bring back mostly positive memories of the deceased, even though their images do not appear in my feed and the profiles themselves remain dormant – each is essentially a snapshot of a life.

I often find readings like these interesting because of the heavy topics involved. Rarely, however, do I get the chance to experience something like this first-hand. These articles gave me a greater sense of awareness while I was scrolling through my friends’ old pictures and comments. This isn’t the first time I’ve looked back through these profiles, but this time I was more conscious of the source of my feelings rather than just accepting them for what they were. These profiles are important to me and to the hundreds of people who still follow them because they do a great job of preserving something created entirely by the deceased. It made me feel even more strongly about how crucial these accounts can be for the families and close friends who still have access to them, alongside the physical belongings that may have become symbols in a more “traditional” memorial.

Instagram Memorials

The reading by Jed Brubaker, Gillian Hayes, and Paul Dourish, “Beyond the Grave: Facebook as a Site for the Expansion of Death and Mourning,” explored the ways that the living interact with the dead on Facebook. Since Facebook also owns Instagram, I decided to look at the way Instagram memorializes accounts.

Interestingly, they are very explicit about their policies. On the page entitled “What happens when a deceased person’s account is memorialized?” they explain the features of a memorialized account. These accounts don’t look any different from regular accounts, and all of the posts the deceased person shared remain visible. In order to memorialize an account, living users have to contact Instagram with proof of death, such as “a link to an obituary or news article.” Instagram also won’t release the login credentials of a deceased person, but they will allow someone who proves they’re an immediate family member of the deceased to have the account removed from the site.

I think that the distinctions between a memorialized account and a regular one are very interesting. Memorialized accounts won’t appear on the “explore” page, and no changes can be made to them (including comments and followers). In this way, Instagram seems to be trying to preserve the virtual image the deceased curated before their passing.

Despite all of the procedures required to memorialize an account, things still seem to go wrong occasionally. The help page “My Instagram profile has been memorialized.” assists users who have been incorrectly (or morbidly, prematurely) memorialized by Instagram. Personally, I can hardly imagine how uncanny it would be to log in to Instagram and find out that you have been deemed dead.

Frankenstein and Facebook

Because of its involvement with death, resurrection has inevitably become tied to many horror story plots. One of the most famous cases of resurrection comes from Mary Shelley’s Frankenstein, which would become the inspiration for many later horror works. The “Black Mirror” episode “Be Right Back” follows the same story as Shelley’s, with Martha as Dr. Frankenstein, social media posts as the graveyard body parts, and the new Ash as the Monster. As with any good adaptation, the themes and details within the story have changed to more effectively explore contemporary fears and curiosities; in this case, “Be Right Back” not only explores the possibility of resurrection but also looks at the disconnect between someone’s social media presence and their personality.

One of the defining features of this episode is its simplicity. With the exception of the cliff picnic scene, the sets are all bland and forgettable. The characters live very average lives; in fact, the characters themselves aren’t anything to remember. Even the insane technology used to resurrect someone is brushed aside and never explained in depth. Unlike other “Black Mirror” episodes, there is no physical conflict or grotesque action (like the bestiality in S1E1, the genocide in S3E3, or the sadistic Star-Trek-inspired prison in S4E1). Instead, this episode relies more heavily on the emotions surrounding the situations the characters are in – something that can’t be shown with images, but can have just as big an impact.

Fake Ash represents two things: the idealized form of the dead and the shallowness of our online personalities. He is a picture-perfect copy of his human counterpart but is lacking in just about every human capacity possible. The major aspects of Ash’s personality, like his wittiness and kindness, are there, but the more ignorable or less desirable features that would never be presented on social media are missing. Martha can’t handle having only the enjoyable aspects of Ash because she already knew and appreciated the traits that made Ash less perfect. His idealized form is very one-dimensional, so the relationship between the two also becomes one-dimensional and unenjoyable. Martha quickly realizes the shallowness of his online persona as fake Ash seems to be constantly reminding her of his inauthenticity. Fake Ash is essentially a form of human hypermediacy. All of this reminds the viewer that, for many people, online profiles are NOT them, only the idealized and shallowest versions of themselves.

This episode does what “Black Mirror” does best: it forces you to think about your own choices in the real world. Maybe you don’t have the desire to resurrect a loved one, but you probably do have a Facebook, Instagram, or Twitter account. As Martha steps up into her attic-prison and the screen cuts to black, I like to think the directors wanted the audience left staring into that black mirror while considering their own digital presence. Ash is representative of how dangerous and harmful the idealized versions of people can be, so I definitely found myself filled with existential dread (most episodes do this, though) while looking through my own Twitter and Instagram feeds.

Foreshadowing, Technological Similarity, and Tackling the Taboo

This was my first time watching “Be Right Back,” and I find it one of the most chillingly plausible uses of current technology in “Black Mirror.” A few things jumped out at me.

One thing I noticed, which I normally don’t pick up on, is the use of foreshadowing. When Martha is driving, she says she will “crash this van.” Watching with subtitles, I noticed that the song playing in the van when this happens is the Bee Gees’ “How Deep Is Your Love.” That’s an interesting question for the episode to raise: how deep is your love? Is it so deeply intertwined with your reality that when a loved one passes you are constantly reminded of them? Would it hurt less to get to talk to them one last time?

I also noticed how obvious the episode makes the fact that Ash’s brother died at a young age – and his father, too – and how his mother handled those losses by putting their memories in the attic. This is paralleled at the end by Martha banishing AshBot to the attic, an action I think is meant to provoke a deeper narrative. It’s one of those “hard to chew” moments that gets the viewer thinking about something taboo: how do we handle mourning?

There’s also a point where Martha walks through the countryside with AshBot on the phone. She ends up sitting near the suicide cliff, and it was quite clear that this would become a point of climax later on, especially when the bot points out that people usually jump alone. Whether it would be Martha or some form of Ash, I made a quick note that one of them surely would jump.

From the get-go, I also noticed how disconnected Ash was. Martha asks him bizarre questions, like whether he wants his soup served in a shoe, and he isn’t present; he just responds with a simple “yes,” not having heard the question. Kind of robotic. His answering machine message is also “I’m too busy or lazy to answer so leave a message.” This was particularly moving to me when AshBot replies to Martha on the cliff that he only wishes to serve. The bot’s actions are clearly not those of Ash, only what the bot could learn from him – all meant as a tool for Martha.

And in classic “Black Mirror” fashion, the technology doesn’t seem so far-fetched. It reminded me a lot of natural language processing and Twitter bots. The idea is that you can train a computer on a corpus of someone’s work – like Shakespeare’s – and recreate sentences similar to what might have appeared in the original. The product and the original aren’t quite the same; in fact, the output is normally quite strange.
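To make that concrete, here is a minimal sketch of one common way such text generators can work: a simple Markov chain. (I’m not claiming this is how any particular Twitter bot is actually implemented – it’s just the textbook version of “train on a corpus, recreate similar sentences.”)

```python
# Hypothetical sketch: learn which words follow which in a corpus, then
# walk those transitions to produce similar-but-strange new sentences.

import random
from collections import defaultdict

def train(text: str) -> dict:
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain: dict, start: str, length: int = 12) -> str:
    """Walk the chain, picking a random observed successor at each step."""
    output = [start]
    for _ in range(length - 1):
        successors = chain.get(output[-1])
        if not successors:
            break
        output.append(random.choice(successors))
    return " ".join(output)

corpus = ("Call me Ishmael. Some years ago, never mind how long precisely, "
          "having little or no money in my purse, I thought I would sail.")
print(generate(train(corpus), start="Call"))
```

With a corpus this small the output is nearly a quote; fed years of someone’s posts, the same walk starts producing those uncanny almost-them sentences.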

There’s a pretty funny example of this from @KeatonPatti on Twitter: “forcing a bot to watch over 1,000 hours of _” – Olive Garden, porn, Trump’s State of the Union. Each product is probably a human-altered script made by a bot using NLP, but they’re fun to read. While these are pretty silly examples, I know, Dr. Sample, that you’ve also got a few bots that operate in a similar way: your Moby Dick bot (https://twitter.com/MobyDickatSea) and your favorite-things bot, which run on their own. It’s interesting to think of bots like these eventually being used to learn from our digital writing (Facebook, Twitter, etc.) and perhaps recreate our sentiments postmortem.