Are You Human? and Other Questions

Like other pieces from the franchise, the final cut of Ridley Scott’s Blade Runner contains many tests and questions, like the Voight-Kampff test, which aims to distinguish replicants from non-replicants. The questions often refer to animals and, less often, family members, in order to evoke emotions in the test-taker and separate humans, who are meant to feel empathy and emotion, from replicants, who were not necessarily programmed to ‘replicate’ emotions but, at least in the Nexus-6 models, were thought capable of developing emotions of their own over time.

Throughout the semester, we’ve seen various portrayals of emotion and empathy in A.I. characters. Samantha feels love for Theodore (and 641 others); Ava seems to feel fear, sadness, love, desire, and more, though these are framed as possible acts of self-preservation; and David feels love for his mother, empathy (or fear) at the sight of Mecha being torn apart, and the desire to be a real boy. Maybe these feelings are genuine, the result of some sort of synthetic neurons firing in response to hormones, interactions, and more. Maybe they are the product of complex algorithms designed to give the impression of emotion.

Whatever the case, we as viewers (and the humans in the films) don’t fully know whether these emotions are genuine, or how it should influence us if they are not. As I wrote in a previous blog post, Ava’s emotions are often written off as fake and manipulative, a mere means of escaping Nathan. Frankly, even if that is the case, and her emotions stem from a programmed self-preservation instinct, I, as a human (as far as I know; Rachael has me thinking about this), can feel for Ava and her situation, and the small chance that these feelings may be genuine is enough for me to feel alongside her.

A human in the world of A.I. Artificial Intelligence or Blade Runner, where artificial beings are hunted, might experience something similar. Even if a being is not genuinely experiencing emotions, its expression of emotion is enough to cast doubt on the idea that it must be ‘retired’, and to make doing so feel wrong. Harrison Ford’s Rick Deckard talks about quitting his career as a blade runner after the replicants began to ‘feel’ too physically real. Perhaps this is explored more in other parts of the franchise; empathy for replicants felt quite minimal in this movie outside of a few scenes, and those scenes mostly sparked empathy in the viewers rather than in the characters.

Back to the tests: these questions measure quite a few things, among them the ability to empathize with humans and non-humans. Isn’t it odd that the humans of the film don’t empathize with replicants, who are non-humans?

I found this thread listing the elements of the test across various pieces from the franchise. I’m a big fan of weird icebreaker questions, but, at least for me, most of these evoked only a deep sense of discomfort and disgust. Some focused on recounting positive experiences, like experiencing love, but most seemed to focus on shock and fear. Are those supposed to be markers of humanity, or something replicants would struggle to understand?

Additionally, the test material from the props on Holden’s desk, as shown in the above thread, is very academic. Leon mentions that he has already had an IQ test that year, suggesting that the practice, which many people today go their whole lives without, is as routine as a physical in this universe. This, coupled with Sebastian revealing that his accelerated aging kept him from leaving the planet, suggests that these tests also serve to determine the value of a person. The test seems to calculate that value as a product of emotion and reactions, academic knowledge (but not the ability to recall that knowledge too quickly; that might make you a replicant!), and possibly physical capability, since the test emphasizes ‘visuospatial function’ and pupil reactions.

I’m not truly sure what to make of all this; it seems quite universe-specific. This version of a Turing test relies heavily on the scarcity of animals in that world and the implied crowded-but-isolated human experience within it. A version for today’s world would probably consist of some sort of biological testing, language-pattern analysis, and some sort of technological process. The language-pattern analysis would pick apart speech patterns and word choices analytically rather than qualitatively. If we were to use the question about a wasp on one’s arm, we likely wouldn’t pick apart the reaction to the question itself, or ask whether the answer shows empathy for wasps, but would instead look at how the answer is constructed.

We’re not yet at a point where A.I. is so indistinguishable from humanity that we must poke and prod at one’s feelings to find an answer. We are, however, looking at data-processing (and learning) abilities and reaction speed. I wonder: as A.I.s continue to evolve, what will these tests look like in the future? And how could the tests in Blade Runner be more efficient? Should they even be there at all?
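To make that idea concrete, here is a minimal sketch of what “looking at how an answer is constructed” could mean in practice. The function name and the specific statistics (vocabulary variety and sentence-length variation) are toy choices of my own, not any real detector’s method; actual A.I.-text detection relies on trained models, but even crude numbers like these illustrate analyzing the form of a response rather than its empathy content.

```python
import statistics

def construction_features(answer: str) -> dict:
    """Toy stylometric features for a free-text answer.

    These hand-picked statistics only illustrate the idea of
    analyzing *how* an answer is built rather than what it says;
    a real detector would use trained language models.
    """
    words = answer.split()
    # Crude sentence split on terminal punctuation.
    sentences = [s for s in answer.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return {
        # Vocabulary variety: the share of distinct words in the answer.
        "type_token_ratio": len({w.lower() for w in words}) / len(words),
        # "Burstiness": humans tend to vary sentence length more than
        # generated text, which often reads unusually uniform.
        "sentence_length_stdev": statistics.pstdev(lengths) if len(lengths) > 1 else 0.0,
        "mean_sentence_length": statistics.fmean(lengths),
    }

# A hypothetical answer to the wasp question from the film.
answer = ("I'd flinch and shake my arm before I even thought about it. "
          "Wasps scare me. I got stung as a kid and I still remember it.")
print(construction_features(answer))
```

Whether statistics like these could actually separate human answers from machine ones is very much an open question; they are just a stand-in for the analytical, rather than qualitative, reading imagined above.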

On another note, I’d love to share this TikTok from an interesting account that posts paranormal-meets-Black Mirror shorts. It’s associated with a video-making app and another A.I. movie-making app that’s in beta. The TikTok plays with interesting ideas about humanity, physical and technological ability, A.I., and human-technology relationships.

