Blending BIP

Blending Movement Primitives for HRI

Bayesian Interaction Primitives (BIP) is an Imitation Learning framework for Human-Robot Interaction (HRI), where a robot can learn complex, nonlinear interactions from only a few dozen expert demonstrations. BIP was developed by members of the Interactive Robotics Lab and extends the capabilities of Interaction Primitives and Dynamic Movement Primitives. Below are two example scenarios where I’ve deployed BIP: a hugging interaction, and an interaction in which two UR5 robots meet at an arbitrary point between them.

Deploying Bayesian Interaction Primitives (ROS framework) for a hugging interaction and for two UR5s in CoppeliaSim.

The interaction scenarios pictured above are interesting to study for a number of reasons. With regard to the hugging interaction: first, a hug typically lasts only two to three seconds, so real-time inference and control are necessary, and the learning algorithm must not require expensive computations at every time step. Second, the hug is a close-proximity interaction with the robot, so safety is paramount. The Baxter robot (featured in the image above, dressed as a bear) is particularly suitable for these interactions because it provides compliant responses and safety-critical features. Finally, the setup allows many high-resolution demonstrations to be collected in a small, controlled environment using a motion capture system. Millimeter-level subtleties in the collected trajectories can be captured and accounted for, providing a solid foundation for data-driven studies of interaction trajectories. For this interaction, 30 demonstrations can be collected in under 10 minutes, and a model for the robot can be generated almost instantly on a laptop. The ability to collect demonstrations quickly makes this a convenient interaction for user studies.
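
To make the training step concrete, the sketch below shows roughly how a set of recorded demonstrations can be turned into a model: each demonstration (human and robot degrees of freedom, resampled to a common length) is projected onto a fixed set of basis functions, and the weight vectors across demonstrations define a Gaussian prior over the interaction. This is a minimal numpy sketch of the idea rather than the actual BIP implementation; the function names, dimensions, and basis parameters are illustrative assumptions.

```python
# Minimal sketch of the training step: project each demo onto Gaussian basis
# functions, then model the weight vectors across demos as a Gaussian prior.
# Names, dimensions, and basis parameters are illustrative, not the real API.
import numpy as np

def basis_matrix(num_samples, num_basis, width=0.01):
    """Gaussian RBFs evaluated over a normalized phase in [0, 1]."""
    phase = np.linspace(0.0, 1.0, num_samples)
    centers = np.linspace(0.0, 1.0, num_basis)
    return np.exp(-0.5 * (phase[:, None] - centers[None, :]) ** 2 / width)

def fit_weights(demo, num_basis=8):
    """Least-squares basis weights for one demo of shape (T, num_dof)."""
    Phi = basis_matrix(demo.shape[0], num_basis)        # (T, B)
    W, *_ = np.linalg.lstsq(Phi, demo, rcond=None)      # (B, num_dof)
    return W.T.ravel()                                  # flatten, DoF-major

def train_prior(demos, num_basis=8):
    """Mean and covariance of the weight distribution across demonstrations."""
    weights = np.stack([fit_weights(d, num_basis) for d in demos])
    return weights.mean(axis=0), np.cov(weights, rowvar=False)

# e.g., 30 demos, each resampled to 200 time steps of 14 DoFs (placeholder data)
demos = [np.random.randn(200, 14) for _ in range(30)]
mean_w, cov_w = train_prior(demos)
```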

Demonstrations for the dual UR5 interaction in CoppeliaSim can be collected without human input, so the training procedure can be automated and a model can be created quickly. During training, the Controlled and Observed Agents meet at a random point, and the robot controls are provided by a generic controller (e.g., operational space control or inverse kinematics). A BIP model for the controlled robot is then created; during testing, the Controlled Agent observes the Observed Agent’s joint angles. An example test run is shown in the GIF above, where the Observed Agent moves to an arbitrary waypoint and the Controlled Agent successfully follows suit.
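
To illustrate the test-time behavior, the sketch below shows a simplified, batch version of the inference: the Controlled Agent conditions the Gaussian weight prior on a few observed samples of the Observed Agent's joint angles and decodes its own trajectory from the posterior. The actual BIP formulation performs this recursively with a filter and also estimates the interaction phase; here the phase values are assumed known, and the dimensions, helper names, and noise values are assumptions for illustration.

```python
# Simplified test-time inference: condition the weight prior on partial
# observations of the Observed Agent, then decode the Controlled Agent's
# trajectory from the posterior mean. Phase is assumed known for clarity.
import numpy as np

NUM_BASIS, NUM_DOF = 8, 12        # e.g., 6 observed + 6 controlled UR5 joints
OBSERVED = list(range(6))         # indices of the Observed Agent's DoFs
CONTROLLED = list(range(6, 12))   # indices of the Controlled Agent's DoFs

def basis_row(phase, width=0.01):
    centers = np.linspace(0.0, 1.0, NUM_BASIS)
    return np.exp(-0.5 * (phase - centers) ** 2 / width)

def observation_matrix(phases):
    """Maps the flattened (DoF-major) weight vector to observed joint values."""
    rows = []
    for t in phases:
        phi = basis_row(t)
        for d in OBSERVED:
            row = np.zeros(NUM_DOF * NUM_BASIS)
            row[d * NUM_BASIS:(d + 1) * NUM_BASIS] = phi
            rows.append(row)
    return np.stack(rows)

def condition(mean_w, cov_w, phases, y, obs_noise=1e-2):
    """Gaussian conditioning (Kalman-style update) on partial observations y."""
    H = observation_matrix(phases)
    S = H @ cov_w @ H.T + obs_noise * np.eye(len(y))
    K = cov_w @ H.T @ np.linalg.inv(S)
    return mean_w + K @ (y - H @ mean_w), cov_w - K @ H @ cov_w

def controlled_trajectory(mean_w, num_samples=200):
    """Decode the Controlled Agent's joint trajectory from a weight vector."""
    Phi = np.stack([basis_row(t) for t in np.linspace(0.0, 1.0, num_samples)])
    W = mean_w.reshape(NUM_DOF, NUM_BASIS)
    return Phi @ W[CONTROLLED].T       # (num_samples, number of controlled DoFs)
```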

Overview

One of my projects during my Master’s involved generalizing BIP to multiple interactions. This approach was first demonstrated on the hugging interaction but can be extended to other scenarios as well. Here, the robot is trained on three different types of interactions:

1) A left-hand-high hug, where the human approaches with the left hand higher than the right, then hugs with the left hand over the bear’s right shoulder and the right hand near the bear’s lower-left torso;

2) A middle hug, where the human hugs the robot around the center of the torso with both arms at equal heights; and

3) A right-hand-high hug, which is essentially the mirror of the left-hand-high hug.

In all training scenarios, the robot is given a control trajectory corresponding to the matching interaction type. During testing, the robot must generalize, in both space and time, to the three interaction types and identify the correct one. As shown below, the robot successfully generalizes to the different spatial states at the beginning of the interaction (transitioning from left-high, to right-high, and then middle) while inferring and maintaining the correct starting phase of the interaction. The user finally approaches the robot for a middle hug, and the robot correctly generalizes across time by progressing and localizing the phase of the interaction.
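
One simple way to realize this kind of blending is to train one primitive per interaction type and weight their predictions by how well each explains the observations seen so far. The sketch below illustrates that idea with Gaussian observation likelihoods and likelihood-weighted mixing, reusing the helpers from the previous sketch; it is a simplified stand-in for the blending formulation, not the published method.

```python
# Illustrative blending of several trained primitives (one per hug type):
# score the observations under each model, turn the scores into mixture
# weights, and blend the per-model predicted robot trajectories.
# Reuses observation_matrix, condition, and controlled_trajectory from above.
import numpy as np
from scipy.stats import multivariate_normal

def log_evidence(mean_w, cov_w, phases, y, obs_noise=1e-2):
    """Marginal log-likelihood of the observations under one primitive."""
    H = observation_matrix(phases)
    S = H @ cov_w @ H.T + obs_noise * np.eye(len(y))
    return multivariate_normal.logpdf(y, mean=H @ mean_w, cov=S)

def blend(models, phases, y):
    """models: list of (mean_w, cov_w) priors, one per interaction type."""
    log_w = np.array([log_evidence(m, c, phases, y) for m, c in models])
    weights = np.exp(log_w - log_w.max())
    weights /= weights.sum()                      # responsibility per type
    trajs = []
    for (m, c), w in zip(models, weights):
        m_post, _ = condition(m, c, phases, y)    # condition each primitive
        trajs.append(w * controlled_trajectory(m_post))
    return np.sum(trajs, axis=0)                  # blended robot trajectory
```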

