State-of-the-art robots are not yet fully equipped to automatically correct their policy when they encounter new situations during deployment. We argue that in common everyday robot tasks, failures may be resolved by knowledge that non-experts could provide. Our research aims to integrate elements of formal synthesis approaches into computational human-robot interaction to develop verifiable robots that can automatically correct their policy using non-expert feedback on the fly. Preliminary results from two online studies show that non-experts can indeed correct failures and that robots can use the feedback to automatically synthesize correction mechanisms to avoid failures.
@inproceedings{van2022leveraging,title={Leveraging Non-Experts and Formal Methods to Automatically Correct Robot Failures},author={Van Waveren, Sanne},booktitle={Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction},pages={1182--1184},year={2022},html={https://sannevw.github.io/assets/pdf/vanWaveren_HRI_Pioneers_2022_PP.pdf}}
Correct Me If I’m Wrong: Using Non-Experts to Repair Reinforcement Learning Policies
Van Waveren, Sanne, Pek, Christian, Tumova, Jana, and Leite, Iolanda
In Proceedings of the 17th ACM/IEEE International Conference on Human-Robot Interaction 2022
Reinforcement learning has shown great potential for learning sequential decision-making tasks. Yet, it is difficult to anticipate all possible real-world scenarios during training, causing robots to inevitably fail in the long run. Many of these failures are due to variations in the robot’s environment. Usually experts are called to correct the robot’s behavior; however, some of these failures do not necessarily require an expert to solve them. In this work, we query non-experts online for help and explore 1) if/how non-experts can provide feedback to the robot after a failure and 2) how the robot can use this feedback to avoid such failures in the future by generating shields that restrict or correct its high-level actions. We demonstrate our approach on common daily scenarios of a simulated kitchen robot. The results indicate that non-experts can indeed understand and repair robot failures. Our generated shields accelerate learning and improve data-efficiency during retraining.
@inproceedings{van2022correct,title={Correct Me If I'm Wrong: Using Non-Experts to Repair Reinforcement Learning Policies},author={Van Waveren, Sanne and Pek, Christian and Tumova, Jana and Leite, Iolanda},booktitle={Proceedings of the 17th ACM/IEEE International Conference on Human-Robot Interaction},pages={1--9},year={2022},html={https://sannevw.github.io/assets/pdf/vanWaveren_Hri2022_Final_PP.pdf},selected={true}}
Large-Scale Scenario Generation for Robotic Manipulation via Conditioned Generative Models
van Waveren, Sanne, Pek, Christian, Leite, Iolanda, Tumova, Jana, and Kragic, Danica
2022
@article{van2022large,title={Large-Scale Scenario Generation for Robotic Manipulation via Conditioned Generative Models},author={van Waveren, Sanne and Pek, Christian and Leite, Iolanda and Tumova, Jana and Kragic, Danica},year={2022}}
Increasing Perceived Safety in Motion Planning for Human-Drone Interaction
van Waveren, Sanne, Rudling, Rasmus, Leite, Iolanda, Jensfelt, Patric, and Pek, Christian
2022
@article{van2022increasing,title={Increasing Perceived Safety in Motion Planning for Human-Drone Interaction},author={van Waveren, Sanne and Rudling, Rasmus and Leite, Iolanda and Jensfelt, Patric and Pek, Christian},year={2022}}
2021
Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles
Karlsson, Jesper*, van Waveren*, Sanne, Pek, Christian, Torre, Ilaria, Leite, Iolanda, and Tumova, Jana
In IEEE International Conference on Robotics and Automation (ICRA) 2021
Driving styles play a major role in the acceptance and use of autonomous vehicles. Yet, existing motion planning techniques can often only incorporate simple driving styles that are modeled by the developers of the planner and not tailored to the passenger. We present a new approach to encode human driving styles through the use of signal temporal logic and its robustness metrics. Specifically, we use a penalty structure that can be used in many motion planning frameworks, and calibrate its parameters to model different automated driving styles. We combine this penalty structure with a set of signal temporal logic formulas, based on the Responsibility-Sensitive Safety model, to generate trajectories that we expected to correlate with three different driving styles: aggressive, neutral, and defensive. An online study showed that people perceived different parameterizations of the motion planner as unique driving styles, and that most people tend to prefer a more defensive automated driving style, which correlated with their self-reported driving style.
@inproceedings{karlsson2021encoding,title={Encoding Human Driving Styles in Motion Planning for Autonomous Vehicles},author={Karlsson, Jesper* and {van Waveren}*, Sanne and Pek, Christian and Torre, Ilaria and Leite, Iolanda and Tumova, Jana},booktitle={ICRA International Conference on Robotics and Automation},year={2021},selected={true},html={https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9561777},video={https://youtu.be/z5IXIdPKX_o}}
Exploring Non-Expert Robot Programming Through Crowdsourcing
Van Waveren, Sanne, Carter, Elizabeth J, Örnberg, Oscar, and Leite, Iolanda
A longstanding barrier to deploying robots in the real world is the ongoing need to author robot behavior. Remote data collection, particularly crowdsourcing, is increasingly receiving interest. In this paper, we make the argument to scale robot programming to the crowd and present an initial investigation of the feasibility of this proposed method. Using an off-the-shelf visual programming interface, non-experts created simple robot programs for two typical robot tasks (navigation and pick-and-place). Each task required four subtasks with an increasing number of programming statements (if statement, while loop, variables) for successful completion of the programs. Initial findings of an online study (N = 279) indicate that non-experts, after minimal instruction, were able to create simple programs using an off-the-shelf visual programming interface. We discuss our findings and identify future avenues for this line of research.
@article{van2021exploring,title={Exploring Non-Expert Robot Programming Through Crowdsourcing},author={Van Waveren, Sanne and Carter, Elizabeth J and {\"O}rnberg, Oscar and Leite, Iolanda},journal={Frontiers in Robotics and AI},pages={242},year={2021},publisher={Frontiers},html={https://www.frontiersin.org/articles/10.3389/frobt.2021.646002/full}}
2020
Gesticulator: A framework for semantically-aware speech-driven gesture generation
Kucherenko, Taras, Jonell, Patrik, van Waveren, Sanne, Henter, Gustav Eje, Alexandersson, Simon, Leite, Iolanda, and Kjellström, Hedvig
In Proceedings of the 2020 International Conference on Multimodal Interaction 2020
During speech, people spontaneously gesticulate, which plays a key role in conveying information. Similarly, realistic co-speech gestures are crucial to enable natural and smooth interactions with social agents. Current data-driven co-speech gesture generation systems use a single modality for representing speech: either audio or text. These systems are therefore confined to producing either acoustically-linked beat gestures or semantically-linked gesticulation (e.g., raising a hand when saying “high”): they cannot appropriately learn to generate both gesture types. We present a model designed to produce arbitrary beat and semantic gestures together. Our deep-learning based model takes both acoustic and semantic representations of speech as input, and generates gestures as a sequence of joint angle rotations as output. The resulting gestures can be applied to both virtual agents and humanoid robots. Subjective and objective evaluations confirm the success of our approach.
@inproceedings{kucherenko2020gesticulator,title={Gesticulator: A framework for semantically-aware speech-driven gesture generation},author={Kucherenko, Taras and Jonell, Patrik and {van Waveren}, Sanne and Henter, Gustav Eje and Alexandersson, Simon and Leite, Iolanda and Kjellstr{\"o}m, Hedvig},booktitle={Proceedings of the 2020 International Conference on Multimodal Interaction},pages={242--250},year={2020},selected={true},html={https://svito-zar.github.io/papers/Gesticulator_ICMI_2020.pdf},code={https://github.com/Svito-zar/gesticulator},award={Best paper award}}
Behavioural responses to robot conversational failures
Kontogiorgos, Dimosthenis, Pereira, Andre, Sahindal, Boran, van Waveren, Sanne, and Gustafson, Joakim
In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction 2020
Humans and robots will increasingly collaborate in domestic environments which will cause users to encounter more failures in interactions. Robots should be able to infer conversational failures by detecting human users’ behavioural and social signals. In this paper, we study and analyse these behavioural cues in response to robot conversational failures. Using a guided task corpus, where robot embodiment and time pressure are manipulated, we ask human annotators to estimate whether user affective states differ during various types of robot failures. We also train a random forest classifier to detect whether a robot failure has occurred and compare results to human annotator benchmarks. Our findings show that human-like robots augment users’ reactions to failures, as shown in users’ visual attention, in comparison to non-human-like smart-speaker embodiments. The results further suggest that speech behaviours are utilised more in responses to failures when non-human-like designs are present. This is particularly important to robot failure detection mechanisms that may need to consider the robot’s physical design in its failure detection model.
@inproceedings{kontogiorgos2020behavioural,title={Behavioural responses to robot conversational failures},author={Kontogiorgos, Dimosthenis and Pereira, Andre and Sahindal, Boran and {van Waveren}, Sanne and Gustafson, Joakim},booktitle={Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction},pages={53--62},year={2020},selected={true},html={https://dl.acm.org/doi/pdf/10.1145/3319502.337478}}
Embodiment effects in interactions with failing robots
Kontogiorgos, Dimosthenis, van Waveren, Sanne, Wallberg, Olle, Pereira, Andre, Leite, Iolanda, and Gustafson, Joakim
In Proceedings of the 2020 CHI conference on human factors in computing systems 2020
The increasing use of robots in real-world applications will inevitably cause users to encounter more failures in interactions. While there is a longstanding effort in bringing human-likeness to robots, how robot embodiment affects users’ perception of failures remains largely unexplored. In this paper, we extend prior work on robot failures by assessing the impact that embodiment and failure severity have on people’s behaviours and their perception of robots. Our findings show that when using a smart-speaker embodiment, failures negatively affect users’ intention to frequently interact with the device, however not when using a human-like robot embodiment. Additionally, users significantly rate the human-like robot higher in terms of perceived intelligence and social presence. Our results further suggest that in higher severity situations, human-likeness is distracting and detrimental to the interaction. Drawing on quantitative findings, we discuss benefits and drawbacks of embodiment in robot failures that occur in guided tasks.
@inproceedings{kontogiorgos2020embodiment,title={Embodiment effects in interactions with failing robots},author={Kontogiorgos, Dimosthenis and {van Waveren}, Sanne and Wallberg, Olle and Pereira, Andre and Leite, Iolanda and Gustafson, Joakim},booktitle={Proceedings of the 2020 CHI conference on human factors in computing systems},pages={1--14},year={2020},html={https://dl.acm.org/doi/pdf/10.1145/3313831.3376372},video={https://youtu.be/MxGerI0Bc-U}}
2019
Comparing Human-Robot Proxemics between Virtual Reality and the Real World
Li, Rui, van Almkerk, Marc, van Waveren, Sanne, Carter, Elizabeth, and Leite, Iolanda
In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2019
Virtual Reality (VR) can greatly benefit Human-Robot Interaction (HRI) as a tool to effectively iterate across robot designs. However, possible system limitations of VR could influence the results such that they do not fully reflect real-life encounters with robots. In order to better deploy VR in HRI, we need to establish a basic understanding of the differences between HRI studies in the real world and in VR. This paper investigates the differences between real life and VR with a focus on proxemic preferences, in combination with exploring the effects of visual familiarity and spatial sound within the VR experience. Results suggested that people prefer closer interaction distances with a real, physical robot than with a virtual robot in VR. Additionally, the virtual robot was perceived as more discomforting than the real robot, which could explain the differences in proxemics. Overall, these results indicate that the perception of the robot has to be evaluated before the interaction can be studied. However, the results also suggested that VR settings with different visual familiarities are consistent with each other in how they affect HRI proxemics and virtual robot perceptions, indicating the freedom to study HRI in various scenarios in VR. The effect of spatial sound in VR drew a more complex picture and thus calls for more in-depth research to understand its influence on HRI in VR.
@inproceedings{li2019comparing,title={Comparing Human-Robot Proxemics between Virtual Reality and the Real World},author={Li, Rui and van Almkerk, Marc and {van Waveren}, Sanne and Carter, Elizabeth and Leite, Iolanda},booktitle={2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI)},pages={431--439},year={2019},organization={IEEE},html={https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8673116}}
Take One For the Team: The Effects of Error Severity in Collaborative Tasks with Social Robots
Van Waveren, Sanne, Carter, Elizabeth J, and Leite, Iolanda
In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents 2019
We explore the effects of robot failure severity (no failure vs. low-impact vs. high-impact) on people’s subjective ratings of the robot. We designed an escape room scenario in which one participant teams up with a remotely-controlled Pepper robot. We manipulated the robot’s performance at the end of the game: the robot would either correctly follow the participant’s instructions (control condition), the robot would fail but people could still complete the task of escaping the room (low-impact condition), or the robot’s failure would cause the game to be lost (high-impact condition). Results showed no difference across conditions for people’s ratings of the robot in terms of warmth, competence, and discomfort. However, people in the low-impact condition had significantly less faith in the robot’s robustness in future escape room scenarios. Open-ended questions revealed interesting trends that are worth pursuing in the future: people may view task performance as a team effort and may blame their team or themselves more for the robot failure in case of a high-impact failure as compared to the low-impact failure.
@inproceedings{van2019take,title={Take One For the Team: The Effects of Error Severity in Collaborative Tasks with Social Robots},author={Van Waveren, Sanne and Carter, Elizabeth J and Leite, Iolanda},booktitle={Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents},pages={151--158},year={2019},organization={ACM},html={https://dl.acm.org/doi/pdf/10.1145/3308532.3329475}}
Knock on wood: The effects of material choice on the perception of social robots
Van Waveren, Sanne, Björklund, Linnéa, Carter, Elizabeth J, and Leite, Iolanda
In International Conference on Social Robotics 2019
Many people who interact with robots in the near future will not have prior experience, and they are likely to intuitively form their first impressions of the robot based on its appearance. This paper explores the effects of component material on people’s perception of the robots in terms of social attributes and willingness to interact. Participants watched videos of three robots with different outer materials: wood, synthetic fur, and plastic. The results showed that people rated the perceived warmth of a plastic robot lower than a wooden or furry robot. Ratings of perceived competence and discomfort did not differ between the three robots.
@inproceedings{van2019knock,title={Knock on wood: The effects of material choice on the perception of social robots},author={Van Waveren, Sanne and Bj{\"o}rklund, Linn{\'e}a and Carter, Elizabeth J and Leite, Iolanda},booktitle={International Conference on Social Robotics},pages={211--221},year={2019},organization={Springer, Cham}}
2017
Groups in conflict at the airport: How people think a robot should act
Joosse, M, van Waveren, S, Zaga, C, and Evers, V
In CSCW’17 workshop on robots in groups and teams 2017
As robots provide services to groups of people, conflicts can arise, forcing the robots to decide which action to take. In this extended abstract we present a study in which we collected qualitative data on people’s ideas of appropriate robot responses when encountering conflict situations while guiding passengers at an airport. We collected, coded, and analyzed free-text data from 118 participants, and report on the dataset and general results. In general, participants believed the needs of the many outweigh the needs of the few: the robot should move on.
@inproceedings{joosse2017groups,title={Groups in conflict at the airport: How people think a robot should act},author={Joosse, M and {van Waveren}, S and Zaga, C and Evers, V},booktitle={CSCW’17 workshop on robots in groups and teams},volume={26},year={2017},html={https://sannevw.github.io/assets/pdf/Joosse_etal_2017.pdf}}