Research


Publications

Sydney Thompson, Alexander Lew, Rohan Phanse, Alex Huang, Elizabeth Stanish, Yifan Li, Marynel Vázquez
26th ACM International Conference on Multimodal Interaction (ICMI) 2024


This work studies the problem of predicting human intent to interact with a robot in a public environment. To facilitate research in this problem domain, we first contribute the People Approaching Robots Database (PAR-D), a new collection of datasets for intent prediction in Human-Robot Interaction. The database includes a subset of the ATC Approach Trajectory dataset with augmented ground truth labels. It also includes two new datasets collected with a robot photographer at two locations on a university campus. Then, we contribute a novel human-annotated baseline for predicting intent. Our results suggest that the robot's environment and the amount of time that a person is visible impact human performance in this prediction task. We also provide computational baselines for intent prediction in PAR-D by comparing the performance of several machine learning models, including ones that directly model pedestrian interaction intent and others that predict motion trajectories as an intermediary step. From these models, we find that trajectory prediction seems useful for inferring intent to interact with a robot in a public environment.
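For readers curious about the two-stage idea, here is a minimal sketch (PyTorch, with made-up module names and feature shapes) of feeding a trajectory predictor's output into a binary intent classifier. It is illustrative only and does not reproduce the models benchmarked in the paper.

```python
# Illustrative two-stage baseline: predict future motion, then classify intent.
import torch
import torch.nn as nn

class TrajectoryPredictor(nn.Module):
    """Predicts future (x, y) positions from an observed track (hypothetical)."""
    def __init__(self, hidden=64, future_steps=8):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, future_steps * 2)
        self.future_steps = future_steps

    def forward(self, track):                  # track: (batch, T_obs, 2)
        _, h = self.encoder(track)             # h: (1, batch, hidden)
        out = self.head(h.squeeze(0))          # (batch, future_steps * 2)
        return out.view(-1, self.future_steps, 2)

class IntentClassifier(nn.Module):
    """Scores intent to interact from observed plus predicted motion (hypothetical)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, observed, predicted):    # both: (batch, steps, 2)
        feats = torch.cat([observed, predicted], dim=1)
        return torch.sigmoid(self.mlp(feats))  # probability of intent to interact

observed = torch.randn(4, 12, 2)               # 4 tracks, 12 observed positions each
predictor, classifier = TrajectoryPredictor(), IntentClassifier()
p_interact = classifier(observed, predictor(observed))
```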

Sydney Thompson, Austin Narcomey, Alexander Lew, Marynel Vázquez
ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2024


Deploying robots in-the-wild is critical for studying human-robot interaction, since human behavior varies between lab settings and public settings. Though robots that have been used in-the-wild exist, many of these robots are proprietary, expensive, or unavailable. We introduce Shutter, a low-cost, flexible social robot platform for in-the-wild experiments on human-robot interaction. Our demonstration will include a Shutter robot, which consists of a 4-DOF arm with a face screen, and a Kinect sensor. We will demonstrate two different interactions with Shutter: a photo-taking interaction and an embodied explanations interaction. Both interactions have been publicly deployed on the Shutter system.

Sydney Thompson, Abhijit Gupta, Anjali W. Gupta, Austin Chen, Marynel Vázquez
23rd ACM International Conference on Multimodal Interaction (ICMI) 2021


We study conversational group detection in varied social scenes using a message-passing Graph Neural Network (GNN) in combination with the Dominant Sets clustering algorithm. Our approach first describes a scene as an interaction graph, where nodes encode individual features and edges encode pairwise relationship data. Then, it uses a GNN to predict pairwise affinity values that represent the likelihood of two people interacting together, and computes non-overlapping group assignments based on these affinities. We evaluate the proposed approach on the Cocktail Party and MatchNMingle datasets. Our results suggest that using GNNs to leverage both individual and relationship features when computing groups is beneficial, especially when more features are available for each individual.
Paper | Code
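As a rough illustration of the affinity-prediction step described above, the sketch below (plain PyTorch, hypothetical feature dimensions, not the paper's architecture) runs one round of message passing over an interaction graph and scores pairwise affinities with an edge readout.

```python
# Minimal sketch: message passing over an interaction graph, then an edge head
# that scores how likely two people are to be in the same conversational group.
import torch
import torch.nn as nn

class AffinityGNN(nn.Module):
    def __init__(self, node_dim=6, edge_dim=4, hidden=64):
        super().__init__()
        # one round of message passing: messages from neighbors along edges
        self.msg = nn.Sequential(nn.Linear(2 * node_dim + edge_dim, hidden), nn.ReLU())
        self.update = nn.Sequential(nn.Linear(node_dim + hidden, hidden), nn.ReLU())
        # edge readout: pairwise affinity in [0, 1]
        self.edge_head = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                       nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, x, edge_feats):
        # x: (N, node_dim) individual features (e.g., position, body orientation)
        # edge_feats: (N, N, edge_dim) pairwise features (e.g., distance, relative angle)
        N = x.size(0)
        xi = x.unsqueeze(1).expand(N, N, -1)                    # sender copies
        xj = x.unsqueeze(0).expand(N, N, -1)                    # receiver copies
        m = self.msg(torch.cat([xi, xj, edge_feats], dim=-1))   # (N, N, hidden)
        agg = m.mean(dim=0)                                     # aggregate incoming messages
        h = self.update(torch.cat([x, agg], dim=-1))            # updated node embeddings
        hi = h.unsqueeze(1).expand(N, N, -1)
        hj = h.unsqueeze(0).expand(N, N, -1)
        return self.edge_head(torch.cat([hi, hj], dim=-1)).squeeze(-1)  # (N, N)

x = torch.randn(5, 6)                # 5 people in the scene
e = torch.randn(5, 5, 4)             # pairwise relationship features
affinities = AffinityGNN()(x, e)     # pairwise affinity matrix
```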

Nathan Tsoi, Joe Connolly, Emmanuel Adéníran, Amanda Hansen, Kaitlynn Taylor Pineda, Timothy Adamson, Sydney Thompson, Rebecca Ramnauth, Marynel Vázquez, Brian Scassellati
ACM/IEEE International Conference on Human-Robot Interaction (HRI) 2021
Best Paper Candidate
The practice of social distancing during the COVID-19 pandemic resulted in billions of people quarantined in their homes. In response, we designed and deployed VectorConnect, a robot teleoperation system intended to help combat the effects of social distancing in children during the pandemic. VectorConnect uses the off-the-shelf Vector robot to allow its users to engage in physical play while being geographically separated. We distributed the system to hundreds of users in a matter of weeks. This paper details the development and deployment of the system, our accomplishments, and the obstacles encountered throughout this process. Also, it provides recommendations to best facilitate similar deployments in the future. We hope that this case study about Human-Robot Interaction practice serves as inspiration to innovate in times of global crises.

Mason Swofford, John Peruzzi, Nathan Tsoi, Sydney Thompson, Roberto Martín-Martín, Silvio Savarese, Marynel Vázquez
Proceedings of the ACM on Human-Computer Interaction, Vol. 4 (CSCW1), 2020

We propose a data-driven approach to detect conversational groups by identifying spatial arrangements typical of these focused social encounters. Our approach uses a novel Deep Affinity Network (DANTE) to predict the likelihood that two individuals in a scene are part of the same conversational group, considering their social context. The predicted pair-wise affinities are then used in a graph clustering framework to identify both small (e.g., dyads) and large groups. The results from our evaluation on multiple, established benchmarks suggest that combining powerful deep learning methods with classical clustering techniques can improve the detection of conversational groups in comparison to prior approaches. Finally, we demonstrate the practicality of our approach in a human-robot interaction scenario. Our efforts show that our work advances group detection not only in theory, but also in practice.
Website | Code | Paper
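One classical choice for the graph clustering step over predicted pair-wise affinities is the dominant-sets procedure (the same family of methods named in the ICMI 2021 entry above). Below is a simplified NumPy sketch of that step: replicator dynamics finds a cohesive cluster in the affinity matrix, the cluster is peeled off, and the process repeats. Thresholds and stopping rules here are illustrative rather than the exact settings used in the paper.

```python
# Simplified dominant-sets clustering over a pairwise affinity matrix.
import numpy as np

def dominant_set(A, tol=1e-6, max_iter=1000):
    """Run replicator dynamics on affinity matrix A; return membership weights."""
    n = A.shape[0]
    x = np.full(n, 1.0 / n)                    # start from uniform weights
    for _ in range(max_iter):
        x_new = x * (A @ x)
        s = x_new.sum()
        if s == 0:                             # degenerate case: no affinities
            break
        x_new /= s
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return x

def extract_groups(A, participation_thresh=1e-3):
    """Peel off dominant sets until every person is assigned to a group."""
    remaining = list(range(A.shape[0]))
    groups = []
    while remaining:
        sub = A[np.ix_(remaining, remaining)]
        weights = dominant_set(sub)
        members = [remaining[i] for i, w in enumerate(weights) if w > participation_thresh]
        if not members:                        # fall back to a singleton group
            members = [remaining[0]]
        groups.append(members)
        remaining = [p for p in remaining if p not in members]
    return groups

# Example: a 4-person scene where persons 0-1 and 2-3 form two conversational groups.
A = np.array([[0.0, 0.9, 0.1, 0.1],
              [0.9, 0.0, 0.1, 0.1],
              [0.1, 0.1, 0.0, 0.8],
              [0.1, 0.1, 0.8, 0.0]])
print(extract_groups(A))                       # e.g., [[0, 1], [2, 3]]
```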


Projects

Shutter the Robot Photographer
A low-cost, open-source robot photographer platform for in-lab or in-the-wild human-robot interaction.


Robots For Good: Fighting Social Isolation with Robots
Robotic telepresence for elementary school-aged children, enabling socially distant play.



Awards

HRI 2024 - Honorable Mention, Best Demo
For the work: Shutter: A Low-Cost and Flexible Social Robot Platform for In-the-Wild Deployments by S. Thompson, A. Narcomey, A. Lew, M. Vázquez

HRI 2021 - Best Paper Award Candidate
For the work: Challenges Deploying Robots During a Pandemic: An Effort to Fight Social Isolation Among Children by N. Tsoi, J. Connolly, E. Adéníran, A. Hansen, K. T. Pineda, T. Adamson, S. Thompson, R. Ramnauth, M. Vázquez, B. Scassellati