The film “Twilight: Breaking Dawn — Part 2” had one element that was freakishly creepy — and it had nothing to do with vampires, werewolves or death. It was, in fact, a newborn baby. 

The protagonists’ human-vampire daughter, Renesmee, was nicknamed Chuckesmee by the film’s cast and crew. Why? Because attempts to create the baby using animatronics and computer-generated imagery, or CGI, were remarkably unsuccessful. The technicians’ goal was to produce scenes featuring an extremely precocious hybrid baby, but they ran up against the limits of the technology of the time. What we got, as a result, was a baby instantly identifiable as not human.

This is a striking example of a well-known phenomenon in cognitive science. Coined by robotics professor Masahiro Mori in 1970, the expression “uncanny valley” refers to the eeriness we feel when something approaches human appearance but fails to attain a fully lifelike quality. Mori created a graph that plots affinity as a function of human likeness, and hypothesized that the curve rises with likeness until, just short of full realism, it plummets into a valley.

What is fascinating about the uncanny valley hypothesis is that it is intuitively perceptible. A minion from “Despicable Me,” or a stuffed animal? Cute, likable. But increase the human similarity to that of Renesmee, or of the cat-human hybrids in the 2019 film adaptation of the musical “Cats,” and you can sense the strangeness.

The list of uncanny valley examples in film and television goes on, most likely because movement intensifies the effect, as Mori observed. Movement is central to our behavior, yet extremely difficult to replicate in a humanlike fashion. A moving prosthetic hand or a humanoid robot attempting a smile is far more uncanny than its static counterpart.

Yet despite the pervasiveness of the uncanny valley, recent technological developments have made important strides toward artificial faces that pass for real human ones. Generative adversarial networks, or GANs, use machine learning to achieve this. They couple two neural networks: a generator, trained on an extensive database of faces, that produces novel images, and a discriminator that judges whether a face is real or fake. The generator’s goal, in short, is to fool the discriminator. And if it fools a neural network, it ends up fooling us as well: a 2022 study found that people were more likely to judge GAN-generated faces as real than actual human faces.
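For the technically curious, here is a minimal sketch of that adversarial tug-of-war in Python, using PyTorch. The tiny layer sizes and the `real_faces` batch are placeholders for illustration, not anything a production face generator would use.

```python
# Minimal GAN training step (illustrative sketch; sizes are toy values).
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 28 * 28  # placeholder sizes, not face-scale

# Generator: turns random noise into a fake image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, image_dim), nn.Tanh())
# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

loss = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_faces):
    batch = real_faces.size(0)
    real, fake = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1. Train the discriminator to tell real faces from generated ones.
    opt_D.zero_grad()
    noise = torch.randn(batch, latent_dim)
    generated = G(noise)
    d_loss = loss(D(real_faces), real) + loss(D(generated.detach()), fake)
    d_loss.backward()
    opt_D.step()

    # 2. Train the generator to fool the discriminator: its fakes
    #    should be scored as "real."
    opt_G.zero_grad()
    g_loss = loss(D(G(noise)), real)
    g_loss.backward()
    opt_G.step()
    return d_loss.item(), g_loss.item()
```

Each call pushes the two networks in opposite directions, and the generator’s images grow more convincing precisely because the discriminator keeps getting harder to fool.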

So now we’re playing at the limits of the uncanny valley. But does this mean the phenomenon has been overcome? In my opinion, no. Why? Three words: large language models, or LLMs.

A trending topic and an important breakthrough in artificial intelligence, LLMs are built to comprehend and generate text the way a person would. Trained on vast amounts of text, LLMs can summarize content, aid in creative assignments and answer coherently when prompted. Seeing these tools in action is especially impressive, and humbling, given their striking proficiency at fulfilling complex requests.
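Under the hood, that fluency comes from one simple loop: predict the next token, append it and repeat. The sketch below shows that loop with GPT-2, a small public model used here purely for illustration; the prompt and token count are arbitrary choices.

```python
# Conceptual sketch of LLM text generation: greedy next-token prediction.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")   # small public model
lm = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The uncanny valley is", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(20):                            # generate 20 tokens
        logits = lm(ids).logits[:, -1, :]          # scores for the next token
        next_id = logits.argmax(dim=-1, keepdim=True)  # pick the likeliest
        ids = torch.cat([ids, next_id], dim=1)     # append and repeat
print(tok.decode(ids[0]))
```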

I’ve experienced, however, a novel version of the uncanny valley in past “interactions” with LLMs. On multiple occasions, it was clear that my correspondent was a program, not a person. LLMs generally follow grammatical rules and provide contextually sound responses, but they lack the sarcasm and spontaneity of a human. Our real-life interactions consist of more than adequate, precise answers. We spark novel discussions, we interrupt and, maybe most importantly, we change the subject. That’s what makes human conversation interesting, dynamic and enriching.

Still, LLMs are improving continuously, and updated versions of these tools arrive with striking frequency. If GAN technologies could overcome certain effects of the uncanny valley, maybe our uncanny chats won’t last much longer. But for now, I think I prefer being able to distinguish a human conversation from one with an artificial intelligence tool. Unpredictable interactions make us more than an extrapolation of the data we take in. If the Renesmee-like quality of our chats disappears, that will pose questions about what makes us different from our own creations.

LAURA WAGNER is a sophomore in Benjamin Franklin College. Her fortnightly column, “Metamorphosis,” promotes insights about adapting to technological innovation and future change, based on personal experiences at Yale and beyond. Contact her at laura.wagner@yale.edu.