Combinatory Tweets: “Slotted” Technique vs. Markov Chain

I’ve enjoyed learning about combinatory writing more than previous topics in class because I have a better understanding of the code that produces it and because it is somewhat familiar to me. In a previous class I learned about artificial intelligence and natural language processing, both of which are intertwined with combinatory poetics and have had major influences on it. As we dove into discussion in class this week and viewed Dr. Sample’s Don’t Drink the Water, I was immediately reminded of a combinatory writing example that a friend showed me last year: Automatic Donald Trump, created by Filip Hráček. Trump being famous for his Twitter presence among other things, the site lets you generate a realistically fake Trump tweet with just the click of a button.

As I read further into the code and its explanation, I found that this generator was built using a Markov chain. The author explained a Markov chain in simple terms, comparing it to how an autosuggesting keyboard works on an iPhone when you’re texting: after you type one word, the keyboard offers options based on common sentence structure and on what you’ve typed before. This specific Donald Trump tweet generator uses all of Donald Trump’s previous tweets as its corpus. Therefore every word it generates has already been tweeted by Donald Trump, just most likely not in the same combination.
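The idea can be sketched in a few lines of Python. This is not Hráček’s actual code, and the tiny corpus here is invented for illustration; a real generator would train on the full tweet archive. The core of a word-level Markov chain is just a table mapping each word to the words that have followed it, sampled one step at a time:

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain: each new word depends only on the previous word."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:  # dead end: no recorded follower for this word
            break
        word = rng.choice(followers)
        out.append(word)
    return " ".join(out)

# Hypothetical stand-in for the tweet archive.
corpus = "we will make trade great again and we will win again"
chain = build_chain(corpus)
print(generate(chain, "we", length=6, seed=1))
```

Because words that appear more often as followers show up more often in the lists, common transitions are automatically more likely, which is what makes the output feel plausibly like the source author.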

I compared the Markov chain style to the random “slotting” technique that we are using for our Tracery projects. Both have random aspects, but the Tracery project is more random in the sense that each generated word does not depend on the previous one. It would be interesting to see if there was code that allowed us to create a dependence between some of the slots in our projects.
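One way to get that dependence, sketched in Python rather than Tracery (Tracery itself can do something similar with actions that save a chosen symbol): pick one slot first, then draw a dependent slot from a pool keyed on that choice. The template and word lists below are made up for illustration.

```python
import random

TEMPLATE = "The {animal} {verb} over the {obstacle}."

SLOTS = {
    "animal": ["fox", "frog", "horse"],
    "obstacle": ["fence", "river", "log"],
}

# Dependent slot: which verbs are allowed depends on the animal picked.
VERBS_BY_ANIMAL = {
    "fox": ["leaps", "darts"],
    "frog": ["hops", "springs"],
    "horse": ["jumps", "gallops"],
}

def fill(seed=None):
    rng = random.Random(seed)
    animal = rng.choice(SLOTS["animal"])
    verb = rng.choice(VERBS_BY_ANIMAL[animal])  # depends on the animal slot
    obstacle = rng.choice(SLOTS["obstacle"])    # independent slot, pure "slotting"
    return TEMPLATE.format(animal=animal, verb=verb, obstacle=obstacle)

print(fill(seed=42))
```

The `obstacle` slot works like our Tracery projects, fully independent, while the `verb` slot behaves a little like one step of a Markov chain: its options are conditioned on an earlier choice.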
