turning 1’s and 0’s into life and death

As a computer science major, I’m a pretty big supporter of technological advancement. I’d like to see autonomous robots helping us improve daily life, cars driving themselves, and implanted health monitors, but I find articles like the ones we read for today almost more important than the technology itself. Robin Henig, the author of the NY Times article, mentions that it’s not that big of a leap from a Roomba to an “autonomous home-health aide.”

It’s not so much that technology is holding us back; instead, we’ve slowed down to solve the moral dilemmas that stem from digital autonomy, which is warranted. I’m therefore a huge fan of the collaboration mentioned in the article between computer scientists and “philosophers, psychologists, linguists, lawyers, theologians and human rights experts,” mostly because I think we all recognize that we’re bound to make these technological improvements. I don’t think ‘forgetting to program’ something will be the issue; what resonated with me the most is the question of how the bot justifies its actions. Right now that justification is entirely human-built, and it isn’t a great replication of how humans actually reason. The robots are learning pretty binary decisions (dispense medicine or withhold it) through preset rules communicated to them, whether via connectivity, programming, or training. The good news is that maybe in the future the robot won’t need to contact a supervisor before giving the patient medicine, but can instead double-check its offline machine learning model to confirm whether it should act.
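To make that contrast concrete, here’s a toy sketch of the two decision styles; every name in it is hypothetical (nothing here comes from the article). The preset-rule robot escalates anything outside its rules to a human, while the model-backed robot consults an offline-trained model and still defers to a human when it isn’t confident.

```python
# Toy sketch, hypothetical names throughout: preset rules vs. an offline model.

def decide_by_rules(dose_due: bool, allergy_flag: bool) -> str:
    """Preset-rule robot: a binary choice, escalate anything ambiguous."""
    if dose_due and not allergy_flag:
        return "dispense"
    return "contact supervisor"  # the human stays in the loop

def decide_by_model(features: list[float], model, threshold: float = 0.95) -> str:
    """Model-backed robot: act only when its offline model is very confident."""
    confidence = model.predict_confidence(features)  # trained offline, runs locally
    if confidence >= threshold:
        return "dispense"
    return "contact supervisor"  # low confidence still defers to a human

class StubModel:
    """Stand-in for a classifier trained offline ahead of time."""
    def predict_confidence(self, features: list[float]) -> float:
        return 0.97  # pretend the model is confident here

print(decide_by_rules(dose_due=True, allergy_flag=False))  # dispense
print(decide_by_model([0.2, 0.8], StubModel()))            # dispense
```

Either way the fallback is a human, which is exactly the kind of design choice those interdisciplinary collaborations are meant to debate.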

I mention this a lot in these blogs, but we live in a cool time where we are connected to limitless information, which makes these decisions easier and easier to support (even if the decision itself is hard). We have access to all sorts of stats and models now that definitely increase our understanding of the repercussions. We’ve also reached a really cool level in tech where we can model things digitally before putting them into practice. Take self-driving cars, for example: without risking any physical damage, programmers can test algorithms in video games. One developer was able to test his driving algorithms in Grand Theft Auto, for example. Or take Elon Musk’s OpenAI project, which built a bot that replicates human movement and decision making in the video game Dota 2. While not perfect examples, they definitely illustrate how working digitally gives tech people room to code, and even to try to incorporate emotion, without endangering physical lives. Another plus is that individuals can test things at a personal level, not just bigger corporations like Tesla.
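As a rough illustration of why simulated testing is so appealing, here’s a minimal sketch (all names hypothetical, not tied to any of the projects above) where a stand-in driving policy is evaluated over a thousand simulated drives. The worst possible outcome is a counter ticking up, not physical damage.

```python
# Minimal sketch: the policy only ever touches a simulated world.
import random

def toy_policy(distance_to_obstacle: float) -> str:
    """A stand-in driving policy: brake when something is close."""
    return "brake" if distance_to_obstacle < 10.0 else "accelerate"

def simulate_episode(steps: int = 100) -> bool:
    """Run one simulated drive; return True if no crash occurred."""
    distance = 50.0
    for _ in range(steps):
        if toy_policy(distance) == "accelerate":
            distance -= random.uniform(0.0, 5.0)  # closing in on an obstacle
        else:
            distance += random.uniform(0.0, 3.0)  # braking opens the gap
        if distance <= 0:
            return False  # a crash, but only in software
    return True

crashes = sum(not simulate_episode() for _ in range(1000))
print(f"crashed in {crashes}/1000 simulated runs")  # no physical risk taken
```

Swap in a smarter policy and rerun it a million times overnight; that iteration speed is the whole point of testing in a game world first.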

I want to point out, though, that these articles were written a few years ago, which shows how close we were to autonomy even just four years back. I’d like to think we’ve come a long way in replicating human decisions since then, so the 2015 New York Times article’s tone, that robots can’t yet replicate emotion, feels a little out of date. I like Jack from Wired’s way of putting it: “the system isn’t flawed. It just isn’t fully advanced yet.” One last point I want to bring up is this quote from the Times article: “a robot’s lack of emotion is precisely what makes many people uncomfortable.” I do think it’s still hard to fathom, let alone code, how a robot feels. I think the best practice is to model robot emotions on human ones, and even that is scary. In any case, we’re turning 1’s and 0’s into life and death, and I’m glad to see society discussing that.