Virtual Avatars
A virtual avatar is a digital representation of a user (or an autonomous agent) that stands in for a person within a computer-mediated environment such as VR, AR, games, or social platforms. Avatars vary in visual fidelity (from abstract cartoons to photoreal humans) and behavioural fidelity (from simple canned animations to rich, performance-captured motion and facial expression). These choices influence presence, embodiment, social copresence, emotion recognition, and even bias and self-perception during interaction (Latoschik et al., 2017). Prior research shows that increasing realism can improve body ownership and social interaction in some contexts, but may also trigger aversion if the appearance or motion lands in the "almost human" zone (Higgins et al., 2021).
The Creation of Digital Avatars
Creating a digital avatar involves several steps, starting with defining the avatar's appearance, which can include everything from facial features and body shape to clothing and accessories. This is typically done using software that allows for a high degree of customization, enabling users to create avatars that closely resemble themselves or embody a completely different identity.
Advanced avatars, particularly those used in virtual reality (VR) and extended reality (XR) environments, often employ 3D modelling techniques. These avatars are built using specialized software like Blender or Maya, which allow designers to create detailed and realistic models. The avatars are then rigged with a skeleton structure that enables movement and interaction within virtual spaces. In some cases, these avatars can also be animated using motion capture technology, which records the movements of a human actor and applies them to the digital avatar.
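The rig described above is, at its core, a hierarchy of bones in which each child joint inherits its parent's motion. The sketch below illustrates that structure in plain Python; the bone names and the tiny hierarchy are illustrative assumptions, not a rigging standard.

```python
from dataclasses import dataclass, field

@dataclass
class Bone:
    """One joint in an avatar skeleton; children move with their parent."""
    name: str
    children: list = field(default_factory=list)

    def attach(self, child: "Bone") -> "Bone":
        """Parent `child` under this bone and return the child for chaining."""
        self.children.append(child)
        return child

# A minimal humanoid hierarchy (names are illustrative, not a standard).
root = Bone("pelvis")
spine = root.attach(Bone("spine"))
head = spine.attach(Bone("head"))
forearm = spine.attach(Bone("upper_arm_l")).attach(Bone("forearm_l"))

def bone_count(bone: Bone) -> int:
    """Total number of bones in the hierarchy rooted at `bone`."""
    return 1 + sum(bone_count(c) for c in bone.children)

print(bone_count(root))  # 5: pelvis, spine, head, upper_arm_l, forearm_l
```

Production rigs in Blender or Maya carry dozens of bones plus facial controls, but the parent-child principle, and the way motion-capture data is retargeted onto named joints, follows this same tree structure.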
Avatars in VR and XR
In VR and XR, avatars serve as the user's embodiment within a virtual space, enabling them to interact with the environment and other users. The role of avatars in these spaces is critical, as they help bridge the gap between the digital and physical worlds, providing a sense of presence and identity. Key features that enhance the effectiveness of avatars in VR include realistic movement, facial expressions, and the ability to interact with objects in a natural way.
One of the main challenges in creating avatars for VR is achieving a high level of realism without sacrificing performance. Realistic avatars require significant computational power, especially when they need to replicate complex human behaviours like speech and emotional expression. This has led to a focus on optimizing avatars for real-time interaction, where the balance between visual fidelity and responsiveness is crucial.
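One standard way to trade visual fidelity against responsiveness is level-of-detail (LOD) switching: avatars far from the viewer are rendered with cheaper meshes. A minimal sketch, with distance thresholds that are illustrative assumptions rather than engine defaults:

```python
def select_lod(distance_m: float) -> int:
    """Pick a level of detail for an avatar by viewer distance.

    LOD 0 is the full-fidelity mesh; higher LOD numbers are cheaper
    approximations. Thresholds are illustrative, not engine defaults.
    """
    thresholds = [2.0, 8.0, 20.0]  # metres
    for lod, limit in enumerate(thresholds):
        if distance_m < limit:
            return lod
    return len(thresholds)  # cheapest representation beyond the last threshold

print([select_lod(d) for d in (1.0, 5.0, 15.0, 50.0)])  # [0, 1, 2, 3]
```

Engines typically apply the same idea to animation as well, e.g. skipping facial-expression updates for distant avatars, so that only nearby characters pay the full cost of realism.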
The uncanny valley: definition and mechanisms
The uncanny valley is the hypothesis that affinity for artificial humans rises with human-likeness up to a point, then drops sharply when the entity is almost, but not quite, human, before rising again as it becomes indistinguishable from a real person (Mori et al., 2012). The effect is often amplified by movement: mismatches in timing, dynamics, or micro-expressions make "near-human" agents feel eerie. Contemporary reviews and meta-analyses emphasize that valley responses depend on multiple cues (appearance, animation quality, and context) and are not inevitable if appearance-behaviour congruence is high (Diel et al., 2021).
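Mori proposed the valley qualitatively and gave no equation, but its shape (a rising trend with a sharp dip just before full human-likeness) can be sketched numerically. The formula below is purely illustrative, chosen only to reproduce that qualitative shape:

```python
import math

def affinity(human_likeness: float) -> float:
    """Qualitative sketch of Mori's curve on a 0..1 human-likeness scale.

    A linear upward trend minus a narrow Gaussian dip centred in the
    near-human region. Illustrative only; Mori gave no formula.
    """
    rising = human_likeness  # affinity grows with human-likeness overall
    valley = 0.8 * math.exp(-((human_likeness - 0.85) ** 2) / 0.005)
    return rising - valley

# A stylized character (0.5) is rated higher than a near-human one (0.85),
# and a fully human-like one (1.0) recovers past the dip.
print(affinity(0.5) > affinity(0.85))   # True
print(affinity(1.0) > affinity(0.85))   # True
```

The dip's location and depth are free parameters here; empirically they shift with motion quality and context, which is exactly why appearance-behaviour congruence matters.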
Key Features and Research Focus
Several key features are essential for effective avatars in VR and XR. Realism is among the most important, as more lifelike avatars can enhance user immersion and the overall experience; this involves not just visual accuracy but also realistic movement and behaviour. Interactivity is another critical feature, as avatars must be able to engage with both the virtual environment and other avatars in a convincing manner. Customization is also vital, allowing users to create avatars that reflect their identity or a desired persona.
Current research in the field of digital avatars focuses on improving realism, particularly through advancements in facial expression capture and body movement simulation. Researchers are also exploring how avatars can convey social cues more effectively, which is essential for applications like virtual meetings and social VR platforms. Another area of interest is the ethical implications of avatars, such as the impact of avatar appearance on user behaviour and identity, and the potential for misuse in deceptive practices.
From concept to tooling: Unreal's MetaHuman
MetaHuman (Epic Games) is a framework and toolset for creating fully rigged, high-fidelity digital humans with controllable face, body, hair, clothing, and materials, plus pipelines for animation (including webcam/iPhone capture and pro-grade solvers). As of Unreal Engine 5.6, MetaHuman Creator is embedded directly in the engine, with expanded body authoring and permissive integration/licensing across tools and engines (Epic Games Documentation, 2025; The Verge, 2025). This integration matters for uncanny-valley mitigation: it shortens iteration loops on appearance-motion coherence, enables realistic facial performance capture, and improves animation quality, a variable tightly linked to uncanny responses.
The ALIVE project (Savickaite et al.)
1) Proof-of-concept emotion-recognition and realism study (iLRN 2022)
ALIVE: Avatar Learning Impact assessment for Virtual Environments tested whether avatar realism (MetaHuman vs. cartoon) influences recognition of basic emotions and immersion in VR. The project positioned MetaHumans as an accessible route to soft-skills training and social interaction scenarios, emphasizing low-cost prototyping and user testing (Savickaite et al., 2022). The 2022 practitioner paper outlines the motivation and design space and situates ALIVE within literature on avatar realism, presence, and emotion (e.g., Latoschik et al., 2017; Higgins et al., 2021). Key contribution: scoping and validating a workflow to compare photorealistic and stylized avatars for emotion recognition tasks inside VR, with implications for training and accessibility.
2) Uncanny valley, neurodiversity, and virtual avatars (iLRN 2025)
A follow-up iLRN practitioner paper examined uncanny responses and emotion recognition using high-fidelity MetaHuman avatars across neurodivergent (ND) and neurotypical (NT) participants. With 126 trials per participant depicting Ekman's seven emotions, results showed ND participants rated avatars significantly more uncanny and exhibited lower emotion-recognition accuracy than NT participants; negative emotions (notably contempt and disgust) were judged more uncanny and harder to recognize. Importantly, uncanny ratings correlated strongly with animation quality, underscoring that motion fidelity can be as consequential as visual realism for inclusive design (Savickaite et al., 2025). These findings extend Mori's hypothesis into accessibility research and suggest targeted improvements (e.g., optimizing facial animation of specific emotions) to reduce uncanny responses for ND users.
Challenges and Future Directions
Despite significant advancements, there are still several challenges associated with the development and use of digital avatars. Latency is a major issue in VR, where even slight delays between a user's movement and the avatar's response can break the sense of immersion. Scalability is another challenge, particularly in large virtual environments where many avatars must interact simultaneously without degrading performance. Privacy and security concerns also arise, especially as avatars become more realistic and are used in sensitive contexts like healthcare or professional settings.
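The latency constraint above is concrete: at a typical 90 Hz VR refresh rate, tracking, animation, and rendering must all fit within roughly an 11 ms frame budget. A minimal budget check, where the per-stage timings are hypothetical numbers for illustration:

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time available per frame at a given display refresh rate."""
    return 1000.0 / refresh_hz

# Hypothetical per-frame costs for one avatar pipeline, in milliseconds.
stages = {"tracking": 2.0, "animation": 3.5, "rendering": 4.0}
total = sum(stages.values())
budget = frame_budget_ms(90)

print(f"{total:.1f} ms used of {budget:.1f} ms budget")
print(total <= budget)  # fits at 90 Hz with this hypothetical breakdown
```

Scaling to many simultaneous avatars tightens this budget further, since animation and rendering costs grow with the number of visible characters, which is what motivates the level-of-detail optimizations discussed earlier.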