Humans are excellent at collaboration. For example, when two people work together to lift a heavy object like a sofa or a table, they tend to automatically synchronize their motions, continuously recalibrating to ensure both their hands are at the same height. People’s natural ability to make these types of modifications allows them to work together on tasks both big and small.
Lead author Joseph DelPreto demonstrates the system’s ability to mirror his movements by monitoring muscle activity. (Photo: Joseph DelPreto/MIT CSAIL)
However, a computer or a robot still cannot follow a human’s lead easily. Humans typically either program them explicitly in machine code, or train them to understand spoken commands, à la virtual assistants like Alexa or Siri.
In contrast, scientists at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently demonstrated that a smoother robot-human partnership is achievable with a new system in which machines help people carry objects by observing their muscle movements.
Named RoboRaise, the system involves placing electromyography (EMG) sensors on a user’s biceps and triceps to track muscle activity. Its algorithms then continuously detect changes in the person’s arm height, as well as distinct up-and-down hand gestures the user might make for finer motor control.
The team used the system for a series of tasks that included picking up and assembling mock airplane parts. In experiments, users performed these tasks with the robot and were able to guide it to within a few inches of the desired heights by lifting and then tensing their arm. Control was more precise when gestures were used, and the robot responded correctly to roughly 70% of all gestures.
Graduate student Joseph DelPreto says he could envision people using RoboRaise to help in manufacturing and construction situations, or even as an assistant in a household.
“Our approach to lifting objects with a robot aims to be intuitive and similar to how you might lift something with another person — roughly copying each other’s motions while inferring helpful adjustments,” says DelPreto, lead author on a new paper about the project with MIT Professor and CSAIL Director Daniela Rus. “The key insight is to use nonverbal cues that encode instructions for how to coordinate, for example, to lift a little higher or lower. Using muscle signals to communicate almost makes the robot an extension of yourself that you can fluidly control.”
The project builds on the team’s earlier system that lets users quickly correct robot mistakes with hand gestures and brainwaves, now supporting continuous motion in a more collaborative way. “We aim to develop human-robot interaction where the robot adapts to the human, rather than the other way around. This way the robot becomes an intelligent tool for physical work,” says Rus.
EMG signals can be complicated to work with: They are often very noisy, and it can be hard to predict precisely how a limb is moving based on muscle activity alone. Even if one can roughly estimate how a person is moving, it may still be unclear how the person wants the robot to respond.
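A common first step for taming this kind of noise is to rectify the raw EMG trace and smooth it into an activation envelope. The sketch below is illustrative only (the paper’s actual signal-processing pipeline is not described in this article), using a simple moving-average filter on synthetic data:

```python
import numpy as np

def emg_envelope(raw, window=50):
    """Estimate a muscle-activation envelope from a raw EMG trace.

    Rectify (absolute value after removing the DC offset), then smooth
    with a moving average -- a standard first step before mapping
    noisy EMG readings to motion commands.
    """
    rectified = np.abs(raw - np.mean(raw))
    kernel = np.ones(window) / window
    return np.convolve(rectified, kernel, mode="same")

# Synthetic demo: a burst of "muscle activity" in an otherwise quiet trace.
rng = np.random.default_rng(0)
signal = rng.normal(0, 0.05, 1000)          # baseline sensor noise
signal[400:600] += rng.normal(0, 1.0, 200)  # simulated contraction
env = emg_envelope(signal)                  # envelope rises during the burst
```

The envelope stays near zero while the muscle is relaxed and rises sharply during the simulated contraction, which is the kind of clean signal a controller can act on.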
RoboRaise overcomes this by putting the human in control. The system created by the team uses noninvasive, on-body sensors that sense the firing of neurons as one flexes or relaxes muscles. Wearables can also be used to overcome problems of ambient noise or occlusions, which can confuse tasks involving speech or vision.
RoboRaise’s algorithm then processes biceps activity to estimate how the person’s arm is moving so the robot can roughly mimic it, and the person can slightly tense or relax their arm to move the robot up or down. If a user wants the robot to move farther away from their own position, or to hold a pose for a while, they can simply gesture up or down for finer control; a neural network detects these gestures at any time based on biceps and triceps activity.
A new user can start using the system quickly, with minimal calibration. After donning the sensors, they only need to tense and relax their arm a few times, then lift a light weight to a few heights. The neural network that detects gestures is trained only on data from previous users.
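The gesture detector itself is a neural network trained on earlier users’ data; as a rough stand-in, the idea of matching a new user’s signals against patterns learned from others can be sketched with a nearest-centroid rule. The feature values below are invented purely for illustration:

```python
import numpy as np

# Hypothetical per-gesture centroids in (biceps, triceps) activation space,
# imagined as learned from previous users' recordings. Values are made up.
CENTROIDS = {
    "up":   np.array([0.9, 0.2]),   # strong biceps burst
    "down": np.array([0.2, 0.9]),   # strong triceps burst
    "none": np.array([0.1, 0.1]),   # both muscles quiet
}

def classify_gesture(biceps, triceps):
    """Label a (biceps, triceps) activation pair with its nearest centroid.

    A toy stand-in for the trained neural network: a new user's signals
    are compared against templates built from other users' data, which is
    why the system needs so little per-user calibration.
    """
    x = np.array([biceps, triceps])
    return min(CENTROIDS, key=lambda g: np.linalg.norm(x - CENTROIDS[g]))

print(classify_gesture(0.85, 0.15))  # -> "up"
```

Because the templates come from other people, a fresh user gets working gesture recognition without recording their own training set, mirroring the minimal-calibration behavior described above.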
The researchers carried out a string of three lifting experiments with 10 users using the system: one where the robot did not move at all, another where the robot moved in reaction to their muscles but did not help lift the object, and a third where the robot and person lifted an object together.
When the person had feedback from the robot (when they could see it moving, or when they were lifting an object together), the attained height was considerably more accurate than with no feedback.
The team also tested RoboRaise on assembly operations, such as lifting a rubber sheet onto a base structure. It was able to successfully lift both flexible and rigid objects onto the bases. RoboRaise was implemented on the team’s Baxter humanoid robot, but the team says it could be adapted to any robotic platform.
Going forward, the team believes that adding more muscles or a range of different sensors to the system will boost the degrees of freedom, with the ultimate goal of performing even more intricate tasks. Signals like fatigue or exertion from muscle activity could also help robots offer more intuitive support. The team tested one model of the system that uses biceps and triceps levels to tell the robot how rigidly the person is holding their end of the object; together, the human and machine could smoothly drag an object around or tightly pull it taut.
The team will showcase its efforts at the International Conference on Robotics and Automation in Montreal, Canada. The project was sponsored partly by The Boeing Company.