<p dir="ltr"><u>Abstract</u>: We compared the body movements of five VR avatar representations in a user study (N=53) to ascertain how well these representations could convey body motions associated with different emotions: one head-and-hands representation using only tracking data, one upper-body representation using inverse kinematics (IK), and three full-body representations using IK, motion capture, and the state-of-the-art deep-learning model AGRoL. Participants' emotion-detection accuracies were similar for the IK and AGRoL representations, highest for the full-body motion-capture representation, and lowest for the head-and-hands representation. Our findings suggest that, from the perspective of emotion expressivity, connected upper-body parts that provide visual continuity improve clarity, and that current techniques for algorithmically animating the lower body are ineffective. In particular, the deep-learning technique studied did not produce more expressive results, suggesting the need for training data made specifically for social VR applications.</p><p dir="ltr">This repository contains the Unity projects for the Emotion Legibility study. <u>Read the individual READMEs in the .zip folders for more details about each project.</u></p><p dir="ltr"><i>AGRoL-Unity</i> contains the AGRoL implementation, with a Unity server and a Python client for sending motion data from Unity to the AGRoL network and back.</p><p dir="ltr"><i>IKRepresentations-Unity</i> contains the other four avatar representations (FBMC, UBIK, FBIK, HH) and the code to animate them and create the videos for the study.</p><p dir="ltr"><i>StudyApp-WebGL-Unity</i> contains the Unity project and the WebGL build for the user study. 
It also contains all the videos from the pre-study and the main study.</p><p dir="ltr"><i>participant_accuracies.csv</i> contains the participant data from the main user study.</p><p dir="ltr"><br></p><p dir="ltr"><b>Licenses</b>:</p><p dir="ltr">The AGRoL code is licensed under CC-BY-NC; parts of it are licensed under different terms (see the <a href="https://github.com/facebookresearch/AGRoL" rel="noreferrer" target="_blank">AGRoL GitHub page</a>).</p><p dir="ltr">The dataset we used was the <a href="https://www.physionet.org/content/kinematic-actors-emotions/2.1.0/" rel="noreferrer" target="_blank">"Kinematic dataset of actors expressing emotions"</a>, licensed under the <a href="https://www.physionet.org/content/kinematic-actors-emotions/view-license/2.1.0/" rel="noreferrer" target="_blank">PhysioNet Restricted Health Data License 1.5.0</a>.</p><p dir="ltr">We used the SMPL avatar model for our user study. A SMPL Unity project can be downloaded from the <a href="https://smpl.is.tue.mpg.de/download.php" rel="noreferrer" target="_blank">MPG website</a> (login required).</p><p dir="ltr">SMPL-Body is licensed under the <a href="http://creativecommons.org/licenses/by/4.0/" rel="noreferrer" target="_blank">Creative Commons Attribution 4.0 International License</a>.</p><p dir="ltr">The textures used in the Unity projects are from <a href="http://www.kenney.nl" rel="noreferrer" target="_blank">Kenney</a> (CC0).</p>