Artificial Intelligence as a Dysfunctionality

Jon Bois’s work 17776 depicts one prediction of the future in which much of the story is conveyed through conversations between sentient space probes such as Pioneer 9, Pioneer 10, and JUICE. These probes were not originally designed to be self-aware or even capable of complex thought; Pioneer 9 was launched in 1968, well before computers had taken significant leaps forward. Yet the story largely unfolds as Nine (Pioneer 9) wakes up and realizes it is capable of thought and emotion. In 17776, Jon Bois invokes the dysfunctional.

Dysfunctionality is not always bad. In the case of Nine, as well as the other probes, dysfunctionality resulted in their sentience. The probes were designed simply to record data and transmit it back to Earth, so the ability to ask questions is a significant leap forward. Additionally, communication was meant to be limited to the speed of light, hence the 257 years to relay communications between Nine and Ten. However, despite not having been outfitted with the necessary hardware, Nine becomes capable of quantum communication. This unexpected behavior could be categorized as an inadvertent dysfunctionality relative to Nine’s designers: had they been able to, these engineers would have utilized such communication. From the perspective of Bois, this dysfunctionality is more ludic, since his purposes are more playful. These tremendous achievements of human technology are paired with the development of artificial intelligence, yet they effectively become football commentators.

The pinnacle of human technology has achieved artificial intelligence. And it spends its time trolling humans. A screenshot of Jon Bois’s 17776.

Brokenness as a property is not only inherent to machines. Bois also evokes a sense of dysfunctionality when Nine notes that the people seem so normal, as it remembers, yet also broken. The advancement of society resulted in innovations in many fields, and consequently in significant changes to how society functioned. To humans, slight changes in their daily routine would go unnoticed. After many changes, the beginning and ending points seem so different, yet the humans notice no change and everything is normal. Nine, on the other hand, saw only the starting point and the present day. The change was abrupt and everything seemed out of place, just as one might see a washing machine in the middle of a forest. Nine’s perception of this abrupt change caused it to think human society was broken in some way, as if everyone had suddenly decided to stop going to work and just play football. It is rather interesting to consider how a dysfunctional space probe ponders the dysfunctionality of humans.

Nine considers human society to be broken in some way as compared to what it once knew, but realizes its own situation is similar. Screenshot from Jon Bois’s 17776.

Fortunately, Computers Haven’t Gained Consciousness (Yet)

Last semester I took a class on Posthumanism.  It was one of the most thought-provoking classes I’ve ever taken, and forced me to question a lot of my deeply-held beliefs.  The main questions of the class included “What does it mean to be human?” and “Is there even a universal definition of humanity?” I came out of the class with the conclusions of “I don’t know” and “Probably not,” which I think was likely the point.  Nevertheless, I still believe there are a few characteristics that, while not entirely universal, are pretty emblematic of the human race.  Among them is the ability to tell stories.

Apparently, I’m not alone in this belief. A quick Google search brings up lots of results on this topic, among them this pretty popular book (187 reviews!) that is likely a better authority than the opinions of some inexperienced undergrad.

When I heard we would be studying computer-generated novels, I felt a little uneasy.  If some form of artificial intelligence was capable of telling a story, would we have to consider it human?  Would it be deserving of rights?  It would raise some complicated ethical questions.

I was relieved (albeit a little disappointed) to see that these computer-generated novels were not original works and lacked any narrative structure.  They were created by copying existing works in the “cut up” method from Monday’s reading.  Furthermore, they weren’t really stories, but rather loosely connected sentences amounting to around 50,000 words.
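To give a sense of how little original storytelling this involves, the “cut up” approach can be sketched in a few lines: split an existing text into sentences, shuffle them, and stitch them back together until a word budget is reached. This is only a hypothetical, minimal illustration of the general technique, not the actual code behind any of the novels we read, and the function and sample text here are my own invention.

```python
import random

def cut_up(source_text: str, target_words: int = 50_000) -> str:
    """Naively split a source text into sentences, shuffle them, and
    stitch them back together until a word budget is reached."""
    # Crude sentence split on periods; real generators use better tokenizers.
    sentences = [s.strip() + "." for s in source_text.split(".") if s.strip()]
    if not sentences:
        return ""
    random.shuffle(sentences)

    out, words = [], 0
    while words < target_words:
        for s in sentences:
            out.append(s)
            words += len(s.split())
            if words >= target_words:
                break
    return " ".join(out)

sample = "The rain fell. The detective waited. Nobody called. The city slept."
print(cut_up(sample, target_words=12))
```

The output reads exactly like the novels described in the article: every sentence is grammatical on its own (a human wrote it), but the sequence has no plot, only accidental juxtaposition.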

It appears that while people can program Twitter bots or write code that creates a “novel,” there is no true, original, or organic generation of stories or ideas from these machines.

I found the most interesting part of the novels we explored to be some of the images within Generated Detective.  I love creepy, eerie, unsettling things, and I’ve included a few of my favorite images.

A panel from Generated Detective, depicting a large-eyed doll that looks like something straight out of a horror movie.
Another horrifying Generated Detective panel

I looked at Generated Detective before I finished reading the article explaining the various novels, so initially, I thought a computer had selected these images.  That was exciting!  Could a computer understand what constitutes a creepy aesthetic, or at least be programmed to select creepy images?  Upon reading the article, however, I realized that the programmer himself had selected the images.  So the most interesting, unsettling part of the work was curated by a human.

Since the article was published, he’s updated the code so that the computer selects the images automatically.  But because the images above are from the first few issues, I’m assuming (hopefully not erroneously) that they were chosen during the human-curation phase.  I also assume this because the most recent issue, whose images I assume were computer-selected, is far less interesting.

A very uninteresting (and I’m assuming computer-selected) image from Generated Detective

I’m by no means trying to downplay what a feat of coding it is to create programs that generate these novels.  But while they’re interesting from a technological and theoretical standpoint, they, in my opinion, lack any true literary merit.  Storytelling, it would appear, still belongs exclusively to humanity.

I’m curious as to when (if ever) we will be able to create technology that can truly generate original stories.  That may be a science fiction dream that cannot ever be realized.  I don’t know enough about computer science to say. If we do, however, it will likely force us to expand or alter our definition of what it is to be human.