The following papers were reviewed by 1-2 experts from the program committee and evaluated by a meta-reviewer for publication decisions.
We are proud to showcase such excellent contributions. These spotlight contributions were selected by the reviewers to be streamed during the spotlight session of the live workshop on Monday, May 23rd, from 16:25-16:45 ET.
Interactive Asynchronous Feedback: All workshop participants can ask questions to the authors of accepted papers and authors will be able to answer in the following sli.do event "Accepted Paper Track": https://app.sli.do/event/ebRS9onreKch3i5RAFmfRd
This sli.do event link will go live on May 23rd and will remain open through May 25th to enable delayed remote engagement.
Insights from an Industrial Collaborative Assembly Project: Lessons in Research and Collaboration
Authors: Tan Chen, Zhe Huang, James Motes, Junyi Geng (CMU), Quang Minh Ta, Holly Dinkel, Hameed Abdul-Rashid, Jessica Myers, Ye-Ji Mun, Wei-che Lin*, Yuan-yung Huang*, Sizhe Liu*, Marco Morales, Nancy M. Amato, Katherine Driggs-Campbell, and Timothy Bretl (University of Illinois at Urbana-Champaign & *Foxconn Interconnect Technology, USA)
Abstract: Significant progress in robotics reveals new opportunities to advance manufacturing. Next-generation industrial automation will require both integration of distinct robotic technologies and their application to challenging industrial environments. This paper presents lessons from a collaborative assembly project between three academic research groups and an industry partner. The goal of the project is to develop a flexible, safe, and productive manufacturing cell for sub-centimeter precision assembly. Solving this problem in a high-mix, low-volume production line motivates multiple research thrusts in robotics. This work identifies new directions in collaborative robotics for industrial applications and offers insight toward strengthening collaborations between institutions in academia and industry on the development of new technologies.
Authors: Anna Waldman-Brown, Lindsay Sanneman (Massachusetts Institute of Technology, USA), Simon Schumacher, and Roland Hall (Fraunhofer Institute for Manufacturing Engineering and Automation, Germany)
Abstract: Drawing from 17 interviews with cobot integrators and cobot-adopting small and medium enterprises, this paper provides a preliminary typology of cobot tasks that relates who controls the cobot with the degree of task repeatability. The typology consists of three potential groups of cobot controllers (shop floor workers, dedicated technologists within the firm, and experts outside the firm) and three types of cobot tasks (high repeatability, medium repeatability, and low repeatability). High repeatability tasks require little to no cobot operator knowledge within the firm, medium repeatability tasks require experts within the firm who are not necessarily on the shop floor, and low repeatability tasks require frequent shop floor expertise.
Conversational Programming for Collaborative Robots
Authors: Maike Paetzel-Prüsmann, Julie Hunter*, Kranti Chalamalasetti, Kate Thompson*, Alexandros Nicolaou, Ozan Güngör (Synergeticon, Germany), David Schlangen, and Nicholas Asher** (University of Potsdam, Germany; *LINAGORA Labs & **Centre National de Recherche Scientifique)
Abstract: In this position paper, we describe a novel approach to programming industrial robots via conversational dialogue. We believe that conversational programming, unlike other interfaces for reprogramming industrial robots, will enable novices to teach a robot complex new procedures without requiring any programming knowledge. Using a sample conversation between a human user and an industrial robotic arm, we discuss how our approach differs from other (spoken) human-robot interfaces and why it has the potential to solve the difficulties such interfaces face when learning to abstract from specific examples. We also describe the unique challenges conversational programming involves and how, once these are solved, it could be integrated into the industrial settings of the future.
Towards Explainable and Trustworthy Collaborative Robots through Embodied Question Answering
Authors: Lars Kunze, Omer Gunes, Dylan Hillier, Matthew Munks, Helena Webb*, Pericle Salvini, Daniel Omeiza, and Marina Jirotka (University of Oxford & *University of Nottingham, UK)
Abstract: Collaborative robots (or cobots) will offer significant societal benefits, but their large-scale deployments may also lead to unintended consequences. The ability to query, analyse, and understand data from cobots will be a fundamental requirement for ensuring safety, accountability, and trust. To this end, we propose embodied question answering as a means to enable cobots to explain themselves and make them trustworthy. Our approach is founded in responsible research and innovation and thereby will shape the future of responsible robotics design, development, and deployment for cobots. In this paper, we first provide some background on responsible robotics. Second, we elaborate on the need for explanations. Third, we describe our approach to embodied question answering, and finally, we discuss open challenges before we conclude.
Reshaping Robot Trajectories Using Natural Language Commands: A Study of Multi-Modal Data Alignment Using Transformers
Authors: Arthur Bucker, Luis Figueredo, Sami Haddadin (TUM, Germany), Ashish Kapoor, Shuang Ma, Rogerio Bonatti (Microsoft)
Abstract: Natural language is the most intuitive medium for us to interact with other people when expressing commands and instructions. However, using language is seldom an easy task when humans need to express their intent towards robots, since most of the current language interfaces require rigid templates with a static set of action targets and commands. In this work, we provide a flexible language-based interface for human-robot collaboration, which allows a user to reshape existing trajectories for an autonomous agent. We take advantage of recent advancements in the field of large language models (BERT and CLIP) to encode the user command, and then combine these features with trajectory information using multi-modal attention transformers. We train the model using imitation learning over a dataset containing robot trajectories modified by language commands, and treat the trajectory generation process as a sequence prediction problem, analogously to how language generation architectures operate. We evaluate the system in multiple simulated trajectory scenarios, and show a significant performance increase of our model over baseline approaches. In addition, our real-world experiments with a robot arm show that users significantly prefer our natural language interface over traditional methods such as kinesthetic teaching or cost-function programming. Our study shows how the field of robotics can take advantage of large pre-trained language models towards creating more intuitive interfaces between humans and robots.
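The multi-modal fusion step described in this abstract can be sketched, at a toy scale, as trajectory waypoints cross-attending over language-token features. This is an illustrative sketch only: the random arrays below stand in for the BERT/CLIP embeddings and trained weights of the authors' actual system, which is trained end-to-end with imitation learning.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(traj_emb, text_emb):
    """Waypoints (queries) attend over command tokens (keys/values)."""
    d = traj_emb.shape[-1]
    scores = traj_emb @ text_emb.T / np.sqrt(d)  # (T, L) attention logits
    return softmax(scores, axis=-1) @ text_emb   # (T, d) fused features

# Toy stand-ins: T waypoints, L command tokens, embedding dim d.
T, L, d = 20, 6, 32
text_emb = rng.normal(size=(L, d))  # would come from a frozen language model
traj_emb = rng.normal(size=(T, d))  # e.g. a linear projection of (x, y, z) waypoints

fused = traj_emb + cross_attention(traj_emb, text_emb)  # residual fusion
W_out = rng.normal(size=(d, 3)) * 0.01  # hypothetical trained output head
delta = fused @ W_out                   # per-waypoint 3-D trajectory offsets
print(delta.shape)                      # (20, 3)
```

In the paper's framing, predicting such per-waypoint offsets sequentially is what makes trajectory reshaping analogous to language generation.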
Explicit Reference Governor for Certified Safe, Fast, and Real-Time Cobot Control in Human-Robot Shared Workspaces
Authors: Kelly Merckaert, Bryan Convens, Marco M. Nicotra*, and Bram Vanderborght (Vrije Universiteit Brussel, Belgium & *University of Colorado Boulder, USA)
Abstract: In the past years, manufacturers have been moving from automated mass production to automated mass customization, where robotic manipulators work side-by-side with human operators. However, a challenging aspect of this collaboration is to ensure human safety while achieving efficient task realization. In this paper, we introduce a computationally efficient control scheme that relies on the Explicit Reference Governor (ERG) formalism to enforce input and state constraints in real-time. The resulting constrained control method can steer the robot arm to the desired end-effector pose (or a steady-state admissible approximation thereof) in the presence of actuator saturation, limited joint ranges, speed limits, static obstacles, and humans without compromising the robotic performance goals.
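The core ERG idea of throttling an applied reference by a safety margin can be sketched in a few lines. This is a heavily simplified, hypothetical 2-D version: the margin here is a plain distance from the applied reference to one static obstacle, whereas the paper's formalism computes Lyapunov-based margins that cover actuator saturation, joint limits, speed limits, and humans.

```python
import numpy as np

def erg_step(v, r, x, obstacle, d_safe=0.5, k=1.0, eta=1e-6, dt=0.01):
    """One governor update: v moves toward the desired reference r at a
    rate scaled by a safety margin, freezing before constraint violation.
    Simplification: margin is the applied reference's clearance to one
    obstacle; the real ERG uses a Lyapunov-based dynamic safety margin."""
    margin = max(np.linalg.norm(v - obstacle) - d_safe, 0.0)
    direction = (r - v) / max(np.linalg.norm(r - v), eta)  # navigation field
    return v + dt * k * margin * direction

# Toy scenario: a static obstacle sits directly on the straight-line path.
r = np.array([2.0, 0.0])          # desired setpoint
obstacle = np.array([1.0, 0.0])   # static obstacle
x = np.zeros(2)                   # closed-loop "robot" state
v = np.zeros(2)                   # applied (governed) reference
for _ in range(5000):
    v = erg_step(v, r, x, obstacle)
    x = x + 0.05 * (v - x)        # simple pre-stabilized state dynamics
print(round(float(np.linalg.norm(x - obstacle)), 3))
```

The reference stops at the edge of the keep-out ball instead of driving the robot into it; the real method recovers progress around obstacles via its navigation field rather than the straight line used here.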
Perceptions of Task Allocation Methods for Human-Robot Teams
Authors: Arsha Ali, Dawn M. Tilbury and Lionel P. Robert Jr. (University of Michigan, USA)
Abstract: Humans and robots that cooperate or collaborate must have a method to allocate indivisible tasks among themselves. Allocating tasks in a way that makes good use of each agent's capabilities is essential for effective teamwork and performance. Thus, a task allocation method that can learn and adapt its allocation to the capabilities of a specific individual, especially as these capabilities change, is needed to effectively allocate tasks in heterogeneous human-robot teams. This paper outlines a user study that manipulates the task allocation method (between-subjects) to measure the impact on performance, preference, and satisfaction. The study is set in a warehouse distribution context where packages of varying weights and labels are to be sorted. During the experiment, the participant's capabilities can grow from being able to identify only labels with Arabic numerals to also identifying the Chinese numerals used in this study. We anticipate that a task allocation method that adapts to participant capabilities will result in higher performance, preference, and satisfaction compared to a static task allocation method and to task allocation done by the participant. If performance, preference, and satisfaction are indeed higher under the adaptive method, this would further motivate the need to develop task allocation methods that account for dynamically evolving capabilities. Then, robots using such task allocation methods can be better integrated within existing work systems.
SAFETY & LEARNING CONTRIBUTIONS FOR IMPROVED HUMAN-ROBOT COLLABORATION
On Physical Compatibility of Robots in Human-Robot Collaboration Settings
Authors: Pranav Pandey, Ramviyas Parasuraman and Prashant Doshi (University of Georgia, USA)
Abstract: Human-Robot Interaction (HRI) is a multidisciplinary field. It has become essential for robots to work with humans in collaboration and teamwork settings, such as collaborative assembly, where they share tasks in an overlapping workspace. While extensive research is available to ensure successful HRI, primarily focusing on safety factors, our objective is to provide a comprehensive perspective on a robot's compatibility with humans in such settings. Specifically, we highlight the key pillars and elements of Physical Human-Robot Interaction (pHRI) and discuss valuable metrics for evaluating such systems. To achieve compatibility, we propose that the robot ensure humans' safety, flexibility in tasks, and robustness to changes in the environment. Ultimately, these elements will help assess robots' awareness of humans and their surroundings and help increase the trustworthiness of robots among human collaborators.
An Integrated Safe Task Planning Approach for Human-Robot Collaboration
Authors: Andrea Pupa and Cristian Secchi (University of Modena and Reggio Emilia, Italy)
Abstract: In new collaborative robotic applications, humans and robots cooperate to accomplish a common job composed of a set of tasks. In this context, both a proper task scheduling strategy and a task execution strategy are important to enable the collaboration. The first addresses the uncertainties of the two agents, while the second ensures safety. However, to make the most of the collaboration, it is also crucial to integrate these strategies.
In this paper, we propose an integrated architecture that exploits the task execution information to enrich the task scheduling procedure, improving the overall collaboration.
Aligning Robot Representations with Humans
Authors: Andreea Bobu (University of California Berkeley, USA) and Andi Peng (Massachusetts Institute of Technology, USA)
Abstract: As robots are increasingly deployed in real-world scenarios, a key question is how to best transfer knowledge learned in one environment to another, where shifting constraints and human preferences render adaptation challenging. A central challenge remains that often, it is difficult (perhaps even impossible) to capture the full complexity of the deployment environment, and therefore the desired tasks, at training time. Consequently, the representation, or abstraction, of the tasks the human hopes for the robot to perform in one environment may be misaligned with the representation of the tasks that the robot has learned in another. We postulate that because humans will be the ultimate evaluator of system success in the world, they are best suited to communicating the aspects of the tasks that matter to the robot. Our key insight is that effective learning from human input requires first explicitly learning good intermediate representations and then using those representations for solving downstream tasks. We highlight three areas where we can use this approach to build interactive systems and offer future directions of work to better create advanced collaborative robots.
CONTROL & SENSING CONTRIBUTIONS FOR IMPROVED HUMAN-ROBOT COLLABORATION
Workspace Nonlinear Disturbance Observer Robust Against Flexibility and Singularity for Flexible Joint Robot
Authors: Deokjin Lee and Sehoon Oh (Department of Robotics Engineering, DGIST, Korea)
Abstract: Although flexible joint robots are widely used for human-robot interaction owing to their inherent flexibility, that flexibility limits position control. Many advanced control algorithms have been developed to overcome the flexibility problem; however, they face a critical limitation: singularities occurring in the workspace. Singularities can be avoided, but avoidance restricts the motion range of the robot, which directly deteriorates the variety, performance, and efficiency of the tasks the robot can perform. In addition, the advantage of singular configurations, namely that a large force can be generated with a small motor torque, cannot be exploited. This paper proposes a Rotating coordinated Workspace based Nonlinear Disturbance Observer (RWNDO) for flexible joint robots to achieve robust position control that overcomes both flexibility and singularity. The performance and robustness of the proposed RWNDO are analyzed and verified.
Nondecoupling Reservation (NDR) Algorithm for Robotics Providing the Direct Uniqueness and Novel Orientation Representation
Authors: Yaolun Zhang (Shanghai Qizhi Institute, China) and Jianyu Chen (Tsinghua University, China)
Abstract: The decoupling method is popular in robotics, especially for the inverse kinematics (IK) of serial manipulators. According to our research, it is the key reason why complete uniqueness remains unresolved without selecting and matching among multiple solutions. Herein, the Nondecoupling Reservation (NDR) method is proposed to tackle inverse kinematics with a hybrid algebraic and analytic-geometric strategy, which can directly specify a unique solution without any selection and matching among multiple solutions. Meanwhile, we also propose the Dynamic Reference Orientation (DRO) algorithm to plan orientations for kinematically deficient manipulators in Cartesian space; it is coupled with the end-effector position and can efficiently plan non-singular paths. Finally, all the above algorithms are verified through experiments on the KUKA youBot mobile manipulator. Moreover, using NDR for direct uniqueness, we discover that the number of IK solutions for a UR-like cobot may be more than 8 and fewer than 16 over the complete workspace.
Dynamic Contact Force Estimation Using Soft Tactile Sensor Based on Fiber Bragg Grating and Series Elastic Actuator
Authors: Hyunbin Na, Hyunwook Lee, Chang Hyun Park*, Gyeong Hun Kim*, Chang-Seok Kim*, and Sehoon Oh (Department of Robotics Engineering, DGIST & *Pusan National University, Korea)
Abstract: This study proposes a novel force sensing mechanism and algorithm integrating a soft tactile sensor and a series elastic actuator (SEA). The proposed mechanism can exploit the advantages of the soft tactile sensor which is able to measure the contact location all over the robot link and the advantages of the SEA which is able to measure the precise torque. To this end, the soft tactile sensor is designed using Fiber Bragg Grating (FBG) to measure contact force and its location, and this tactile sensor is attached to a SEA-driven robot link. A deep neural network is designed to estimate the force and its contact location from the tactile sensor. Then, a novel state-space observer is designed based on the dynamic characteristics of the robot link and the torque measurement of SEA, and thus it can enhance the accuracy of the force estimation and the location estimation. The feature of the proposed mechanism and observer is the ability to estimate the precise force and the application location.
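The paper's observer fuses SEA torque measurements with link dynamics; a minimal stand-in is a Luenberger-style disturbance observer that treats the contact force as a constant state and corrects it with the position innovation. All parameters below are invented for illustration, and the contact lever arm r is assumed to come from the tactile sensor; the authors' design additionally uses a learned FBG force/location estimate.

```python
# Toy parameters (illustrative, not from the paper).
J, r = 0.5, 0.3          # link inertia [kg m^2], contact lever arm [m]
F_true, tau = 10.0, 5.0  # true contact force [N], SEA-measured torque [N m]
dt = 1e-3

# Observer gains placing the estimation-error poles at s = -5 (triple pole):
# s^3 + l1*s^2 + l2*s + (r/J)*l3 = (s + 5)^3
l1, l2 = 15.0, 75.0
l3 = 125.0 * J / r

q = dq = 0.0             # true link state (position, velocity)
qh = dqh = Fh = 0.0      # observer estimates, force estimate starts at 0
for _ in range(3000):    # 3 s of simulation
    # True link dynamics: J * ddq = tau - r * F
    q, dq = q + dt * dq, dq + dt * (tau - r * F_true) / J
    e = q - qh           # position innovation
    qh += dt * (dqh + l1 * e)
    dqh += dt * ((tau - r * Fh) / J + l2 * e)
    Fh -= dt * l3 * e    # force estimate driven by the innovation

print(round(Fh, 3))      # converges toward the true 10 N contact force
```

If the true acceleration lags the model's prediction, the innovation goes negative and the force estimate is pushed upward, which is the same correction direction the paper's state-space observer exploits.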
CONTRIBUTIONS ON COLLABORATIVE MULTI-ROBOT APPLICATIONS FOR THE WORK OF THE FUTURE
Human-robot Matching and Routing for Multi-robot Tour Guiding under Time Uncertainty
Authors: Bo Fu, Tribhi Kathuria, Denise Rizzo*, Matthew Castanier*, X. Jessie Yang, Maani Ghaffari and Kira Barton (University of Michigan & *US Army DEVCOM Ground Vehicle Systems Center, USA)
Abstract: This work presents a framework for multi-robot tour guidance in a partially known environment with uncertainty, such as a museum. A simultaneous matching and routing problem (SMRP) is formulated to match the humans with robot guides according to their requested places of interest (POIs) and generate the routes for the robots according to uncertain time estimation. A large neighborhood search algorithm is developed to efficiently find sub-optimal low-cost solutions for the SMRP. The scalability and optimality of the multi-robot planner are evaluated computationally. The largest case tested involves 50 robots, 250 humans, and 50 POIs. A photo-realistic multi-robot simulation was developed to verify the tour guiding performance in an uncertain indoor environment. Supplementary video: https://youtu.be/jx1RtK0g6fo
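A minimal sketch of a large neighborhood search on a simplified matching problem may help fix ideas (this is our toy proxy, not the paper's SMRP): humans carry POI requests, each robot's cost is the number of distinct POIs it must visit, and the destroy/repair loop unassigns a few humans and greedily reinserts them.

```python
import random

def cost(assignment, requests):
    """Proxy tour cost: each robot visits the union of its guests' POIs."""
    tours = {}
    for human, robot in assignment.items():
        tours.setdefault(robot, set()).update(requests[human])
    return sum(len(pois) for pois in tours.values())

def greedy_insert(assignment, human, requests, robots, capacity):
    """Assign one human to the robot with the smallest marginal cost
    (assumes a feasible robot with spare capacity always exists)."""
    best, best_cost = None, float("inf")
    for robot in robots:
        load = sum(1 for r in assignment.values() if r == robot)
        if load >= capacity:
            continue
        trial = dict(assignment)
        trial[human] = robot
        c = cost(trial, requests)
        if c < best_cost:
            best, best_cost = robot, c
    assignment[human] = best

def lns(requests, robots, capacity, iters=200, destroy=2, seed=0):
    rng = random.Random(seed)
    humans = list(requests)
    assignment = {}
    for h in humans:                      # initial greedy construction
        greedy_insert(assignment, h, requests, robots, capacity)
    best = dict(assignment)
    for _ in range(iters):
        removed = rng.sample(humans, destroy)  # destroy: unassign a few humans
        for h in removed:
            del assignment[h]
        for h in removed:                      # repair: greedy reinsertion
            greedy_insert(assignment, h, requests, robots, capacity)
        if cost(assignment, requests) <= cost(best, requests):
            best = dict(assignment)
        else:
            assignment = dict(best)            # reject worse neighborhoods
    return best

requests = {"h1": {"A", "B"}, "h2": {"A"}, "h3": {"C"}, "h4": {"B", "C"}}
match = lns(requests, robots=["r1", "r2"], capacity=3)
print(cost(match, requests))
```

The paper's solver works on a much richer objective (uncertain travel times, requested POI coverage), but the same destroy-and-repair loop is what lets it scale to the 50-robot, 250-human cases reported.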
Task Scheduling Problem for Heterogeneous Multi-Robot Garment Mass Customization
Authors: Ranulfo Bezerra, Kazunori Ohno, Shotaro Kojima, Hanif A. Aryadi, Kenta Gunji, Masao Kuwahara, Yoshito Okada, Masashi Konyo and Satoshi Tadokoro (Tohoku University & New Industry Creation Hatchery Center, Japan)
Abstract: Industrial environments that rely on Mass Customization are characterized by a high variety of product models and reduced batch sizes, demanding prompt adaptation of resources to a new product model. In such environments, it is important to schedule tasks that require manual procedures with different levels of complexity and repetitiveness. In a garment mass customization scenario, task scheduling needs to take into consideration the dependency of the tasks, meaning that in order to initiate a certain task, materials from previous tasks may be required. Therefore, to carry out a smooth scheduling process within a garment mass customization factory, not only the tasks but also the transportation of the materials needed to perform them must be scheduled, to static and mobile robots respectively. This paper describes this problem in the context of the logistics of an automated garment factory in a mass customization scenario.
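A greedy, dependency-aware list scheduler gives a minimal sketch of the dependency constraint described above (hypothetical tasks and durations; the paper targets a far richer setting that also schedules material transport to mobile robots): dependency-ready tasks are dispatched to whichever robot frees up first, and a task never starts before its predecessors finish.

```python
import heapq
from collections import defaultdict

def schedule(tasks, deps, durations, robots):
    """Greedy list scheduling under precedence constraints."""
    parents, children = defaultdict(list), defaultdict(list)
    indeg = {t: 0 for t in tasks}
    for before, after in deps:
        parents[after].append(before)
        children[before].append(after)
        indeg[after] += 1
    free = [(0.0, r) for r in robots]   # (time robot becomes free, robot)
    heapq.heapify(free)
    ready = [t for t in tasks if indeg[t] == 0]
    finish, plan = {}, {}
    while ready:
        task = ready.pop(0)
        t_free, robot = heapq.heappop(free)
        # Start only after the robot is free AND all predecessors are done.
        start = max(t_free, max((finish[p] for p in parents[task]), default=0.0))
        end = start + durations[task]
        finish[task], plan[task] = end, (robot, start, end)
        heapq.heappush(free, (end, robot))
        for child in children[task]:
            indeg[child] -= 1
            if indeg[child] == 0:
                ready.append(child)
    return plan, max(finish.values())

# Hypothetical garment tasks with material dependencies.
tasks = ["cut", "sew_body", "sew_sleeve", "assemble", "pack"]
deps = [("cut", "sew_body"), ("cut", "sew_sleeve"),
        ("sew_body", "assemble"), ("sew_sleeve", "assemble"),
        ("assemble", "pack")]
durations = {"cut": 2, "sew_body": 3, "sew_sleeve": 2, "assemble": 2, "pack": 1}
plan, makespan = schedule(tasks, deps, durations, robots=["r1", "r2"])
print(makespan)  # critical path: cut + sew_body + assemble + pack = 8
```

The two sewing steps run in parallel on the two robots, but assembly must wait for both, which is exactly the material-dependency coupling the abstract highlights.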