We’ve discussed human death, the death of digital media, and the death of physical artifacts. What’s interesting to explore now is the intersection: using the digital to mend the destruction and death of physical things.
On April 15, 2019, one of the world’s great ancient landmarks, Notre-Dame Cathedral, suffered “colossal damages” in a fire that spread through its roof. Ironically, technological advancement may in fact be to blame for the tragic accident.
Having survived many wars and natural disasters over the centuries, the cathedral met its demise only after the installation of electronic technology. The cathedral’s rector reported that a “computer glitch” may be to blame; from early reports, it sounds as though the building’s elevator system suffered a short-circuit, sparking the fire.
Ubisoft, maker of the historical video game series Assassin’s Creed, has pledged €500,000 to contribute to the rebuilding of the monument.
That’s not all – during the 14-month creation of Assassin’s Creed Unity, the studio digitally reconstructed Notre-Dame to scale. As a result, many people who own the game have gotten to relive the experience of visiting the cathedral in a virtual space.
The similarity is incredible – even for a game that’s more than four years old. In an interesting turn of events, the use of computers might just help rebuild what a computer destroyed. The cathedral’s future construction might even be guided by the 3D scans behind the video game.
It’s quite encouraging to hear of such a positive use of digital technology. Will the cathedral ever be the same, though? And what would you make of having only the digital version to relive exactly how it was? In another of my attempts at being hopeful about future technology, I think that eventually, using technology like VR, we may be able to experience digitally what is otherwise lost or damaged in our physical world.
It was fun reading through all of the posts in one place! Like a trip down memory lane.
I’ve always enjoyed writing about digital technology, and recognize a clear bias I have toward wanting more digital things in the world that do cool stuff. I’m definitely the last person to write off these things as unnecessary.
I think I successfully tied one digital artifact into each of my blog posts – with the exception of the post about the themes in our monster readings – which I wouldn’t even have said was my goal, but it’s definitely cool to look back and see I accomplished it. I’m actually quite proud of that monster post, though. I don’t think I usually share that sort of synthesis: digging through a text.
Some themes I notice myself drawn to: universality of the digital and the idea of “the other” (or unknown). And I think if I had to make one more content-oriented post it would be on the connection between the two, and how everyone is using technology but we leave it to do its own work inside of a black box.
As a computer science major, I’m a pretty big supporter of technological advancement. I’d like to see autonomous robots helping us improve daily life, cars driving themselves, implanted health monitors — but I find articles like the ones we read for today almost more important than the technology itself. Robin Henig, author of the NY Times article we read for today, mentions that it’s not that big of a leap from a Roomba to an “autonomous home-health aide”.
It’s not so much that technology is holding us back; rather, we’ve slowed down to solve the moral dilemmas that stem from digital autonomy – which is warranted. I’m therefore a huge fan of the collaboration mentioned in the article between computer scientists and “philosophers, psychologists, linguists, lawyers, theologians and human rights experts,” mostly because I think we all recognize that these technological improvements are bound to happen. I don’t think ‘forgetting to program’ something will be the issue; what resonated with me most is the question of how the bot supports its actions. Right now it’s all human-built, but it isn’t yet a good replica of human judgment. The robots have to learn fairly binary decisions – like whether to dispense medicine or withhold it – through preset rules communicated to them, be it through connectivity, programming, or learning. The good news is that maybe in the future the robot doesn’t need to contact the supervisor before giving the patient medicine, but can instead double-check its offline machine learning model to reinforce whether it should act.
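The decision pattern I have in mind could be sketched in a few lines of code. This is purely a hypothetical illustration – every name and threshold here is invented, not from the article: preset safety rules run first, and instead of phoning a human supervisor, the bot consults a local model’s confidence score.

```python
# Hypothetical sketch: preset rules first, then an offline model check
# in place of contacting a human supervisor. All names, fields, and the
# 0.9 threshold are invented for illustration.

def should_dispense(patient, local_model_score):
    """Return True if the bot may dispense medicine on its own."""
    # Hard preset rules communicated to the robot up front.
    if patient["allergic"]:
        return False
    if patient["doses_today"] >= patient["max_doses"]:
        return False
    # Instead of calling a supervisor, double-check the offline
    # model's confidence that dispensing is appropriate right now.
    return local_model_score >= 0.9

patient = {"allergic": False, "doses_today": 1, "max_doses": 3}
print(should_dispense(patient, local_model_score=0.95))  # True
print(should_dispense(patient, local_model_score=0.40))  # False
```

Even in this toy version, the interesting part is the last line: the moral weight sits in how that confidence score was learned, not in the rules themselves.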
I mention this a lot in these blogs, but we live in a cool time where we are connected to limitless information, which makes these decisions easier and easier to support (even if the decision itself is hard). We have access to all sorts of stats and models that increase our understanding of the repercussions. We’ve also reached a really cool point in tech where we can model things digitally before putting them into practice. Take self-driving cars, for example: without risking any physical damage, programmers can test algorithms in video games. One developer tested his driving algorithms in Grand Theft Auto. Or take Elon Musk’s OpenAI project, which built a bot that replicates human movement and decision-making remarkably well in the video game Dota 2. While not perfect examples, they illustrate how working digitally gives tech people room to code and incorporate emotion without endangering physical lives. Another plus is that you can test things at a personal level, not just at big corporations like Tesla.
I want to point out, though, that these articles were written a couple of years ago – which shows how close we already were to autonomy even four years ago. I’d like to think we’ve come a long way in replicating human decisions since then, so the 2015 New York Times article’s tone about robots lacking emotional replication feels a little out of date. I like how Jack from Wired puts it: “the system isn’t flawed. It just isn’t fully advanced yet.” One last point I want to bring up is this quote from the Times article: “a robot’s lack of emotion is precisely what makes many people uncomfortable.” I do think it’s still hard to fathom, let alone code, how a robot feels. The best practice is probably to model off of humans’ emotions, and even that is scary. In any case, we’re turning 1s and 0s into life and death, and I’m glad to see society is discussing that.
As a form of haunted media I created a theoretical Twitter-timeline-style stream of people passing away in real time, titled “ObituaRSS”. My motivation was to put an unsettling twist on social media while also demonstrating that the idea isn’t far-fetched. The project started with a random-information generator that could spit out a random name and date of birth. I expanded it to include the current time (the time of death) and a cause of death. After spending so long rerunning code and double-checking that the generator worked, I built a front-end UI to display the information and continued expanding the appearance and components of the piece.
From there the morbid dead info stream was born.
The backbone of the project is built in Python, using its built-in random-number methods. I also wanted to add a human element to the interaction with the stream, so photos were added from the StyleGAN project. I added around 300 faces, so repetitions are common, but it’s mostly a proof of concept. What’s cool about StyleGAN, as mentioned in class, is that the faces are generated using machine learning: although they may look 100% real, they are 100% constructed by a computer program. I hooked these Python programs into a NodeJS application and hosted it on my Davidson Domain.
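The core of a generator like the one described above can be sketched in a few lines of Python. This is a minimal illustration, not the project’s actual code – the name lists and year range here are placeholders:

```python
import random
from datetime import datetime

# Minimal sketch of the kind of record generator the project describes:
# a random name, a random birth year, and the current time as the "time
# of death". The name lists below are placeholders, not the real ones.
FIRST_NAMES = ["John", "Jane", "Ada", "Grace", "Alan"]
LAST_NAMES = ["Smith", "Jones", "Lovelace", "Hopper", "Turing"]

def random_person():
    return {
        "name": f"{random.choice(FIRST_NAMES)} {random.choice(LAST_NAMES)}",
        "born": random.randint(1920, 2010),  # random birth year
        "died": datetime.now().isoformat(timespec="seconds"),
    }

print(random_person())
```

A front end then only needs to poll for new records and render them as timeline entries.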
My first reaction while formulating the project was how eerie the piece is. A viewer can see ordinarily private information about each fictional, but seemingly real, death in real time. This blurred line between reality and fiction was a big motivation to keep building the project, and it reminded me of our class discussions surrounding the eerie. The most critical point from those discussions was the idea of something existing that shouldn’t normally exist – giving life to death. Adding a personal aspect with the images did this twofold: you can see something dead and give it a face, and yet the faces aren’t photographs of real people. It’s eerie because I give existence to something nonexistent. A condition for the eerie is being able to understand something “without the need for specific forms of cultural mediation” (Fisher 61). The faces were used to further establish a connection with the viewer and to really capture their attention. “The face is the predominant way we recognize people” (Sanders 1), and I wanted viewers to know they were looking at people. Because identifying faces is second nature to humans, there is no need for specific cultural mediation; the faces are not just good stand-ins for people but also quite eerie.
The part of my project that motivates this idea of the eerie the most is that the probability a given cause of death appears in the stream matches the rates reported by the CDC in 2016.
Simulating and modeling this statistic brings the data point to life. It’s no longer a number on a screen; now it’s attached to a face scrolling down the stream of information.
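The weighted sampling itself is straightforward with Python’s `random.choices`. A sketch follows – note that the categories and weights below are illustrative placeholders, not the actual CDC 2016 figures the project drew on:

```python
import random

# Illustrative cause-of-death sampling weighted by relative frequency.
# These categories and weights are placeholders for the sketch, NOT
# the actual CDC 2016 mortality figures.
CAUSES = ["heart disease", "cancer", "accidents", "stroke", "other"]
WEIGHTS = [23, 22, 6, 5, 44]  # rough relative frequencies, illustration only

def random_cause():
    # random.choices draws one item proportionally to the given weights.
    return random.choices(CAUSES, weights=WEIGHTS, k=1)[0]

print(random_cause())
```

Over many draws, the stream’s distribution of causes converges on the weights, which is exactly what makes the statistic feel embodied rather than abstract.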
Further, attaching names and faces to the data points represents a life (or death) from anywhere in the country. The goal of the project was to represent death nationwide, not in any one location; ordinarily, the specifics of who has died are limited to listings in newspaper obituaries or whatever information is voluntarily put onto online obituary sites. That information certainly doesn’t reside in any single, continuously updating website, gathered involuntarily, as I propose my project would take form. It’s also interesting to think of the implications of having a single place where you can receive this data: how is it gathered, is it voluntary, who has access to it, is it anonymized?
Because of how morbid the piece is, I left in some functions that ground it in the absurd, or arguably the comical. For example, some images are of children’s faces while the listed date of birth may be 1927; or a name may be inherently masculine, like John, or feminine, like Jane, but appear below an image of the opposite sex. Although not obvious at first glance, when you finally realize that things aren’t exactly adding up, you get a psychological distraction to focus on rather than the grim subject at hand. There are validations and checks I could have constructed to completely solidify “the impression of an encounter with the real” (Kirkland 117), but this interruption of the digital was a clear example of hypermediation.
I wanted the user to be aware that they were looking at something that was fabricated. It may look real, but somebody had to make it. Harping, again, on the idea of the eerie and unsettling.
Another thing I realized while working through the project is that the concept isn’t far-fetched. Morbid as it is, who’s to say a hacker with access to this kind of data wouldn’t dump information about the deceased in a similar way? Or, thinking about the future of technology, who’s to say we won’t someday have computer chips in us that report when we die, so that someone, somewhere, sees a stream of information just like this?
Some additions and improvements that I wish I had implemented are mostly related to user experience. Although a tall order, layering some sort of data visualization on top of the stream might have motivated the statistical element of the project more. In retrospect, adding specific social media functions like “following” a cause of death or “blocking” one might also have led to interesting phenomena. Finally, designing a logo would have strengthened the social-media-facing aspect as well, though it would have cost a lot of time and energy.
Fisher, Mark. “Approaching the Eerie.” The Weird and the Eerie, Repeater Books, 2016, pp. 61–64.
Kirkland, E. “Resident Evil’s Typewriter: Survival Horror and Its Remediations.” Games and Culture, vol. 4, no. 2, 2008, pp. 115–126., doi:10.1177/1555412008325483.
Sanders, Robert. “Human Faces Are so Variable Because We Evolved to Look Unique.” Berkeley News, 9 July 2015, news.berkeley.edu/2014/09/16/human-faces-are-so-variable-because-we-evolved-to-look-unique/.
Evans, Timothy H. “Slender Man, H. P. Lovecraft, and the Dynamics of Horror Cultures.” Slender Man Is Coming: Creepypasta and Contemporary Legends on the Internet, edited by Trevor J. Blank and Lynne S. McNeill, University Press of Colorado, Logan, 2018, pp. 128–140. JSTOR, www.jstor.org/stable/j.ctv5jxq0m.10.
Pinedo, Isabel. “Recreational Terror: Postmodern Elements of the Contemporary Horror Film.” Journal of Film and Video, vol. 48, no. 1/2, 1996, pp. 17–31. JSTOR, www.jstor.org/stable/20688091.
Tolbert, Jeffrey A. “‘The Sort of Story That Has You Covering Your Mirrors’: The Case of Slender Man.” Slender Man Is Coming: Creepypasta and Contemporary Legends on the Internet, edited by Trevor J. Blank and Lynne S. McNeill, University Press of Colorado, Logan, 2018, pp. 25–50. JSTOR, www.jstor.org/stable/j.ctv5jxq0m.5.
This was my first time watching “Be Right Back”, and I find it one of the most chillingly plausible uses of current technology in Black Mirror. A few things jumped out at me along the way.
One thing I noticed that I normally don’t pick up on is the use of foreshadowing. While driving, Martha says she will “crash this van”. Watching with subtitles, I noticed that the song playing in the van at that moment is the Bee Gees’ “How Deep Is Your Love”. That’s an interesting question for the episode to raise: how deep is your love? Is it so deeply intertwined with your reality that when a loved one passes you are constantly reminded of them? Would it hurt less to get to talk to them one last time?
I also noticed how obvious they made the fact that Ash’s brother – and father – had died when he was young, and how his mother handled those losses by putting the memories in the attic. This is paralleled at the end by Martha banishing AshBot to the attic, an action I think is meant to provoke a deeper narrative. It’s one of those “hard to chew” moments that gets the viewer thinking about something taboo: how do we handle mourning?
There’s also a scene where Martha walks through the countryside with AshBot on the phone. She ends up sitting near the suicide cliff, and it was quite clear this would become a point of climax for the character later on – especially when the bot points out that people usually jump alone. Whether it would be Martha or some form of Ash, I made a quick note that one of them surely would jump.
From the get-go I also noticed how disconnected Ash was. Martha asks him bizarre questions – like whether he wants his soup served in a shoe – and he isn’t present; he just responds with a simple “yes”, having not heard the question. Kind of robotic. His answering machine message is also “I’m too busy or lazy to answer so leave a message”. This was particularly moving to me when AshBot replies to Martha on the cliff that he only wishes to serve. The bot’s actions are clearly not those of Ash, but only what the bot could learn from him – all meant as a tool for Martha.
And in classic Black Mirror fashion, the technology doesn’t seem so far-fetched. It reminded me a lot of natural language processing and Twitter bots: the idea that you can train a computer on a corpus of someone’s work – like Shakespeare’s – and generate sentences similar to what might have appeared in the original. The product and the original aren’t quite the same – in fact, normally they are quite strange.
There’s a pretty funny example of this from @KeatonPatti on Twitter: “forcing a bot to watch over 1,000 hours of _” – Olive Garden, Porn, Trump’s State of the Union. The product is probably a human-altered script made by a bot using NLP, but they’re fun to read. While these are pretty silly examples, I know Dr. Sample you’ve also got a few bots that operate in a similar way on their own, like your Moby Dick bot https://twitter.com/MobyDickatSea and your favorite-things bot. It’s interesting to imagine them eventually learning from our digital writing (Facebook, Twitter, etc.) and perhaps recreating our sentiments postmortem.
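A toy version of the text-generation technique behind bots like these is a word-level Markov chain: record which words follow which in a source text, then walk those transitions at random. This sketch is a deliberately tiny illustration, far simpler than what any real bot uses:

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: learn word-to-word transitions from a
# source text, then generate new text by walking them at random.

def build_chain(text):
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)  # record that word b follows word a
    return chain

def generate(chain, start, length=8):
    word, out = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: no recorded follower for this word
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

corpus = "to be or not to be that is the question"
chain = build_chain(corpus)
print(generate(chain, "to"))
```

The output resembles the source but drifts into strangeness, which is exactly the “almost but not quite the original” quality these bots trade on.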
The synthesis in “Monster Culture (Seven Theses)” by Jeffrey Jerome Cohen is very tightly packed, and for a moment I just want to unwrap some of what it made me think about. Throughout his argument my mind kept returning to the idea of “the Other” – the outsider, the complete opposite of the human self. I started to think about what it means to call another human a monster, and even more so about monsters embodying human difference, particularly in the last four theses of his argument.
Cohen uses examples of beasts and creatures: Godzilla, Nosferatu, the Alien. While monsters can be literally different in form, they also all personify a difference from one’s self, which is much of what Cohen is getting at. Just as someone can be scared of monsters, it seems fair to say that humans are scared of difference – scared of change.
Cohen’s argument that the monster dwells in difference demonstrates humans’ more artistic rendering of the fear of the Other. “In medieval France the chansons de geste celebrated the crusades by transforming Muslims into demonic caricatures whose menacing lack of humanity was readable from their bestial attributes; by culturally glossing ‘Saracens’ as ‘monstra,’ propagandists rendered rhetorically admissible the annexation of the East by the West.” (8) His example of caricature makes me think of political cartoons, especially given his brief discussion of “political or ideological difference” as being as “much a catalyst to monstrous representation on a micro level as cultural alterity in the macrocosm.” (8) It is a dehumanizing representation of humans, especially those whose arguments and decisions are not in line with your own.
In his thesis about monsters policing the borders of the possible, Cohen explains: “the monsters are here, as elsewhere, expedient representations of other cultures, generalized and demonized to enforce a strict notion of group sameness. The fears of contamination, impurity, and loss of identity that produce stories like the Genesis episode are strong, and they reappear incessantly.” (16) As a religious text, the Bible is something many philosophers turn to when analyzing social patterns; it holds some of the earliest examples of embodied fear of difference with consequence. It’s through this example that I started to think about how we use the word monster in the modern world and how human a monster can be.
One thought sums up his section on fear of the monster being an embodiment of desire: in putting another down, one attempts to raise oneself. Humans pick on others out of jealousy and envy.
As for defining monsters as mystical, I agree with Cohen’s final argument – that the more fantastical monsters exist through knowledge and in the mind. And I find his concluding statement one last reference to the idea of monsters as an embodiment of the fear of difference: “Monsters are our children… These monsters ask us how we perceive the world, and how we have misrepresented what we have attempted to place.” (20)
Tasha Robinson, in her article “Modern Horror Films Are Finding Their Scares in Dead Phone Batteries” for The Verge, describes cutting off connection as a way of “establishing sympathies.” Part of the psychology behind horror, and what can make it so scary, is the connection to the main character. The fact that the vast majority of people watching these movies have a cell phone gives the producers of these films something to tug at. While every viewer might not react the same way, there are certain strings that manipulate people and make them malleable – mainly, the fear of not being able to use your mobile device in an emergency. That, after all, is one of the main reasons phones were created in the first place: to call for help and get information while on the go. Robinson points this out (and ties the idea together) as well: “[the producers] aren’t just tapping into a tired cliché. They’re channeling the low-key real-world anxiety of needing a phone for a specific purpose and suddenly not being sure whether it has the juice to perform.”
And it’s not just cell phones – it’s things like power or a working car. These, among others, are the “clichés” that producers work with to establish sympathy, all technologies that I think we can mostly agree are taken for granted. It doesn’t surprise me that as we use things like Instagram, video chats, or cell phones, they get integrated into the films – especially as the majority of the population begins to use these digital technologies on a day-to-day basis.
Horror films are all about taking an aspect of real life and turning it on its head. You’re supposed to get close to the characters so that everything that happens to them feels like it could happen to you. That’s establishing sympathy.
One last thing I want to expand upon: technology isn’t just reshaping the horror genre, it’s starting to change how we receive it. I’ve encountered posts on Instagram and Snapchat with snippets of horror movies made specifically for the platform, revolving around texting or using some app on your phone – producers using the platform itself to connect you to the character even more literally. Being able to scare someone with the very medium you’re scaring them on is not only meta but a clever way to strengthen the impact of the scare.