The University of Glasgow has won funding for a project that aims to eradicate awkward virtual meetings by better portraying non-verbal cues.
The FUSION project has been awarded £1.75m to create hybrid meeting spaces that combine virtual and physical environments, in which people will interact with each other both physically and as avatars.
Over the course of the next five years, the project will utilise cameras and sensors to observe volunteers as they interact with each other, both in person and online, while wearing headsets. These observations will help to develop models of social signals, including voice, gestures, and positions, between individuals and across different realities. The aim is to make avatars better represent non-verbal cues.
Dr Julie Williamson from the University of Glasgow, who is leading the project, said: “Many of us became very familiar with virtual meeting software like Zoom and Skype to help us maintain contact with friends, family and co-workers during covid lockdowns.
“While those tools can be very useful, they can also be frustrating experiences. People talk over each other or don’t make consistent eye contact with their cameras, for example, and it’s impossible to see non-verbal cues like body language if you’re restricted to only seeing people’s faces.
“More advanced technologies like virtual reality headsets can allow users to feel more present together, but they’re still very crude approximations compared to face-to-face interactions.
“Social signals like gestures, eye contact and personal space are currently very difficult to recreate in virtual spaces, which often prevents interactions with other people from feeling realistic. What we’re aiming to do with FUSION is dissolve the barriers between virtual and physical realities to create social experiences that accurately capture the nuances of human behaviour.”
Analysing these social cues will allow the researchers to build a new database of behavioural patterns that persist across virtual and in-person spaces. This will help create software that improves mixed-reality communication and stabilises interactions for more immersive experiences, the university said.
For instance, users’ positions could be subtly adjusted to create more effective group set-ups, or their eye lines could be tweaked to better simulate face-to-face eye contact. Additionally, when multiple people are talking at once, the software could manipulate the audio to focus the group’s attention on a single speaker.
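The audio-focus idea described above can be sketched in a few lines. The following is a hypothetical illustration, not the FUSION project's actual software: it assumes each participant contributes a separate audio signal, and that the system picks one focus speaker and attenuates everyone else so a single voice dominates the mix. The function names, the attenuation factor, and the list-of-samples representation are all assumptions made for the sake of the example.

```python
def focus_gains(speakers, focus, attenuation=0.2):
    """Return a per-speaker gain map: full volume for the chosen
    focus speaker, reduced volume for everyone else."""
    return {s: (1.0 if s == focus else attenuation) for s in speakers}


def apply_gains(samples, gains):
    """Mix per-speaker audio (lists of samples) into one signal,
    scaling each speaker's contribution by their assigned gain."""
    length = max(len(signal) for signal in samples.values())
    mixed = [0.0] * length
    for speaker, signal in samples.items():
        gain = gains.get(speaker, 1.0)
        for i, sample in enumerate(signal):
            mixed[i] += gain * sample
    return mixed
```

In a real system the focus speaker would be selected dynamically (for example, by voice-activity detection) and the gains smoothed over time to avoid audible jumps; this sketch only shows the core weighting step.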