User (Dis)Satisfaction Estimation in Human-Robot Interactions

A designed interaction with an aim to collect data on users’ reactions to human-robot failures

CLIENT

Furhat Robotics

PROJECT

KTH Royal Institute of Technology,
Multimodal Interactions and Interfaces

TEAM

Alice Borg          
Marie Guldevall
Carl Leanderson 

DATE + DURATION

2020, 4 weeks

TOOLS

The Social Robot: Furhat, Blockly, OpenFace, Google Sheets 

ABOUT

Using Furhat, a social robot with human-like conversational competencies, the project aimed to collect authentic data and analyze differences in user satisfaction and dissatisfaction when interacting with social robots. As human-robot interactions become more common, so will failures. It will therefore be important for robots to detect users’ reactions to these failures: a successful detection increases the opportunity to recover from the damage done and hence create better user experiences.


PROCESS

To elicit both user satisfaction and dissatisfaction, two variations of one scenario were designed, taking the form of a fictitious restaurant visit. The experiment context and task were selected on the criteria of being simple, interactive, and recognizable to as many people as possible. The qualitative analysis included semi-structured interviews and careful observation of the recorded material. The quantitative analysis involved questionnaires and analysis of the recorded material with OpenFace, an open-source toolkit for facial behavior analysis.
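As a rough illustration of this kind of quantitative step: OpenFace writes one row per video frame, including intensities for facial action units such as AU04 (brow lowerer) and AU12 (lip corner puller). The sketch below is a hypothetical aggregation of such per-frame data, not the project's actual analysis code; the dictionary keys mirror OpenFace's column names.

```python
# Hypothetical sketch: averaging one action unit's intensity across frames,
# skipping frames OpenFace flagged as unsuccessful. This is an assumed
# post-processing step, not code from the project itself.

def mean_au_intensity(frames, au):
    """Average the intensity of one action unit over successfully tracked frames."""
    values = [frame[au] for frame in frames if frame.get("success", 1) == 1]
    return sum(values) / len(values) if values else 0.0

# Toy per-frame data standing in for parsed OpenFace CSV rows.
frames = [
    {"success": 1, "AU04_r": 1.2, "AU12_r": 0.3},
    {"success": 1, "AU04_r": 2.0, "AU12_r": 0.1},
    {"success": 0, "AU04_r": 0.0, "AU12_r": 0.0},  # tracking failed, skipped
]
print(round(mean_au_intensity(frames, "AU04_r"), 2))  # 1.6
```

Summary statistics like this can then be compared across the satisfied and dissatisfied scenario variations.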


UX–CHALLENGES

One of the biggest UX–challenges was finding a balance between exaggerating enough to make the users dissatisfied without making the interaction ridiculous. The main goal was to collect data on actual dissatisfaction, and prior research indicated that this is somewhat difficult. Since most people still have little experience communicating with social robots, some simulated failures go unnoticed: users assume the failures are a natural part of the interaction and hence do not become dissatisfied. Prior research and repeated pilot tests were used to prevent this from happening.

Another UX–challenge was to make the users as comfortable as possible in the non-naturalistic environment. For ethical reasons, the participants were informed in advance that they would be filmed. When reviewing the videos in retrospect, it was clear that almost every participant adjusted their behavior when reminded of the cameras. To alleviate this, information describing the purpose and procedure was sent out well in advance, giving participants the opportunity to familiarize themselves with the task and ask any questions beforehand.


RESEARCH CONCLUSIONS

A lot has happened in the development of human-robot interaction and multimodal interfaces in recent years. Despite this rapid development, some areas remain relatively unexplored, and one of them is social perception. These aspects motivated us to carry out further research in human-robot interaction with a focus on human behavior. It is easy to forget the human aspects when developing social robots, and we wanted to emphasize their importance.


UX–SOLUTION

The project aimed to design an interaction that generated data on both user satisfaction and dissatisfaction in human-robot collaboration. The findings clarified that it is not possible to estimate user (dis)satisfaction from facial expressions alone. However, given the large amount of collected data, there is still considerable potential for further analysis. Future research should focus on body pose detection, audio feature extraction, and contextualized speech recognition. If pursued, there are good reasons to believe the findings could further improve the adaptability of social robots.


REFLECTIONS + TAKEAWAYS

The project taught me the importance of multimodality. Humans are multimodal by nature in the way we communicate and interact with the surrounding world. Consequently, it reminds me to always emphasize and include the human aspects when designing, even if the field in question normally does not. It further highlighted the strength of combining qualitative and quantitative analysis: the qualitative analysis answers the important, and so often forgotten, question of why, while the quantitative analysis enables a more objective perspective. The strength thus lies in the combination of the two.
