Climb out of the uncanny valley with these 4 facial animation tips
As animation becomes more realistic, our expectations of how digital characters perform rise as well. From video game dialogue to virtual assistants, animation gaffes such as unsynced audio, stiff faces, and the dreaded uncanny valley cause projects to fall short of lifelike results. Below, we discuss key concepts that often get overlooked in facial animation, and how applying these tips can help your digital characters come across as more authentic and deepen your audience’s immersion.
Eyes are one of the biggest factors when determining the uncanniness of a digital character. “Dead eyes” contribute to an uneasy feeling in the audience, whereas eyes with subtle movement help to convey the emotions and intention of a character. [ref 1]
An emotional piece of dialogue can fall flat if the emotion conveyed in the text doesn’t reach the character’s eyes. Whether a character is yelling, upset, or having a friendly conversation, eye movements should match the context of the scene for impactful animation. For example, a character in the middle of a heated disagreement might roll their eyes or glare in contempt.
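To make the idea concrete, here is a minimal, hypothetical sketch (in Python, not tied to any particular engine or to Rapport’s technology) of how procedural micro-saccades and blinks might be scheduled so that eyes never sit perfectly still. The timing ranges and function names are illustrative assumptions, not measured values.

```python
import random

def generate_eye_motion(duration_s, seed=0):
    """Build a timeline of procedural eye events: micro-saccades (small,
    quick gaze shifts) and blinks at loosely naturalistic random intervals.
    All timing ranges are illustrative placeholders."""
    rng = random.Random(seed)
    events = []
    t = 0.0
    next_blink = rng.uniform(2.0, 6.0)   # assumed blink gap, in seconds
    while t < duration_s:
        t += rng.uniform(0.5, 2.0)       # assumed gap between micro-saccades
        if t >= next_blink:
            events.append((round(next_blink, 2), "blink", None))
            next_blink += rng.uniform(2.0, 6.0)
        # Small gaze offset in degrees; real saccades are fast, so the
        # animation curve would snap over a few frames rather than ease.
        offset = (rng.uniform(-1.5, 1.5), rng.uniform(-1.0, 1.0))
        events.append((round(t, 2), "saccade", offset))
    return events

timeline = generate_eye_motion(10.0)
```

An animator (or runtime system) could layer a timeline like this on top of the deliberate, emotion-driven gaze choices described above, so the eyes stay alive even between acted beats.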
While a character is speaking, subtle head movements help reduce a stiff, robotic look. As with eye movement, these motions should match the context of the scene or the character’s dialogue. This is especially important when a character is meant to be listening or thinking about their reply.
In the example below, note how Rowan, an Unreal Metahuman digital character powered by Rapport’s facial animation, moves his head slightly as he talks or pauses. These subtle movements give the character warmth and help to make the dialogue feel more conversational.
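One common way to approximate this kind of subtle sway procedurally is to sum a few unsynchronized sine waves as a cheap stand-in for smooth noise. The sketch below is a hypothetical illustration in Python; the amplitudes, frequencies, and the `speaking` flag are assumptions for demonstration, not Rapport’s actual method.

```python
import math

def head_sway(t, speaking=True):
    """Return (yaw, pitch) head rotation offsets in degrees at time t.
    Summing sines with unrelated frequencies avoids an obviously
    repeating loop; a listening character gets smaller motion."""
    amp = 1.5 if speaking else 0.8  # illustrative amplitudes, in degrees
    yaw = amp * (0.6 * math.sin(0.7 * t) + 0.4 * math.sin(1.3 * t + 1.0))
    pitch = 0.7 * amp * (0.5 * math.sin(0.9 * t + 2.0) + 0.5 * math.sin(1.7 * t))
    return yaw, pitch
```

Sampling `head_sway` every frame and adding the result to the head bone’s rotation gives a gentle drift that never exceeds a degree or two, which is usually enough to keep a talking or listening character from looking frozen.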
In order to tap into the global gaming market, many video game studios are looking to expand their animated language offerings. But game localization in multiple languages takes significant time and resources to do well. AI-powered facial animation trained with a variety of languages can help to reduce the cost of animating additional languages and empower studios to increase their audience with fewer hurdles.
“I think language- and culture-specific nuances are too often overlooked in animation, which has long focused on a select few widely spoken languages,” said Georgia Clarke, Lead Data Engineer at Rapport. “However, we are seeing more and more languages represented in video games and other media.”
“Animators must be able to adapt to this challenge,” continued Georgia. “For example, the same phonemes (perceptually distinct units of sound) used in one language will not simply translate to another. Each language has its own set of phonemes - and this can differ further between dialects, between people, and even within a person’s own speech.
“There isn’t a one-size-fits-all solution for human speech: it varies and it’s slippery. Animators should think beyond traditional viseme-based approaches to animation and consider language-, dialect-, or even speaker-specific phonemes and their corresponding muscle movements.”
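In code, the simplest expression of this idea is a phoneme-to-viseme lookup that is keyed by language rather than shared globally. The sketch below is a minimal, assumed example; the phoneme symbols, viseme names, and two-language table are placeholders, not a real production mapping.

```python
# Hypothetical per-language viseme tables. A real system would cover full
# phoneme inventories (and likely dialect variants); these few entries are
# only to show the structure.
VISEME_TABLES = {
    "en": {"p": "MBP", "b": "MBP", "m": "MBP", "f": "FV", "v": "FV", "i": "EE"},
    "ko": {"p": "MBP_narrow", "m": "MBP_narrow", "i": "EE_narrow"},
}

def phoneme_to_viseme(phoneme, language="en"):
    """Look up a viseme for a phoneme, preferring the language-specific
    table and falling back to a neutral rest pose for unknown phonemes."""
    table = VISEME_TABLES.get(language, VISEME_TABLES["en"])
    return table.get(phoneme, "REST")
```

The point of the structure is that the same phoneme (here, "p") can resolve to different mouth shapes per language, rather than reusing one table everywhere; speaker- or dialect-level keys could extend the same pattern.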
There are also cultural nuances to remember when animating a digital character’s face. Georgia recounted a conversation with a native speaker of Korean about the animation of a piece of dialogue in Korean.
“They mentioned finding the openness of the mouth movements - especially the visibility of teeth - during Korean speech very jarring,” Georgia said. “They suspected this could be because in Korean culture showing your teeth can be thought of as impolite. Interestingly, they did not feel the same when looking at English speech animation.”
Creating lifelike facial animation takes significant time and resources, but it is necessary to build an immersive experience for your growing audience.
“I remember the days when I was impressed by games that had hundreds of lines of spoken and animated dialogue. But nowadays, it's commonplace to see numbers in the tens of thousands,” said Zhi Chen, Rapport’s Technical Art Director. “Game studios and their animation teams can't really keep up with this exponential growth in content. At a certain point, they either run out of money or run out of animators to hire.”
Rapport’s facial animation technology, backed by 20 years of research, helps animators produce realistic and accurate animation for their digital characters.
“This is an area where our technology really shines,” said Zhi. “Our AI-driven solutions provide a way to generate high-quality facial animation at a scale that would otherwise be impossible to achieve.”
Ready to bring your digital characters to life? Get connected with our team today to get started.