(Image: NC State University)

Researchers have demonstrated a new method that leverages artificial intelligence (AI) and computer simulations to train robotic exoskeletons to autonomously help users save energy while walking, running, and climbing stairs.

“This work proposes and demonstrates a new machine-learning framework that bridges the gap between simulation and reality to autonomously control wearable robots to improve mobility and health of humans,” said corresponding author Hao Su, an associate professor at North Carolina State University.

“Exoskeletons have enormous potential to improve human locomotive performance,” added Su. “However, their development and broad dissemination are limited by the requirement for lengthy human tests and handcrafted control laws.

“The key idea here is that the embodied AI in a portable exoskeleton is learning how to help people walk, run, or climb in a computer simulation, without requiring any experiments.”

Specifically, the researchers focused on improving autonomous control of embodied AI systems, which are systems where an AI program is integrated into a physical robot. In this case, that meant teaching robotic exoskeletons how to assist able-bodied people with various movements. Normally, users have to spend hours “training” an exoskeleton so that the technology knows how much force is needed, and when to apply that force, to help users walk, run, or climb stairs. The new method allows users to put on the exoskeleton and use it immediately.

For example, in testing with human subjects, the researchers found that study participants used 24.3 percent less metabolic energy when walking in the robotic exoskeleton than without the exoskeleton. Participants used 13.1 percent less energy when running in the exoskeleton, and 15.4 percent less energy when climbing stairs.

Here is an exclusive Tech Briefs interview — edited for length and clarity — with Su.

Tech Briefs: What was the biggest technical challenge you faced while developing this new method and how did you overcome it?

Su: In the field, it takes about an hour or even longer for a robot to figure out how to coordinate with a human, that is, how the robot can assist that person. For example, if I put an exoskeleton on you, you would need to walk on a treadmill for an hour so that the brain of the robotic control system could figure out how to personalize the control strategy. That was the state of the art, and the challenge.

In the beginning, when we developed this learning and simulation framework, the major challenge we faced was that we didn't know whether it would work. That's a major challenge because we all know there's a huge gap between simulation and reality.

About five years ago, we worked with a professor from the New Jersey Institute of Technology (NJIT). When I told him about it, his challenge to us was that it sounded like a good idea, but it also sounded too good to be true. We were very skeptical about this method for the first three years. So, basically, the biggest technical challenge was how to build a high-fidelity simulation of the human, a high-fidelity simulation of the robot, and also a high-fidelity simulation of the interaction between the human and the robot. Three components.

Even now, building this kind of high-fidelity system is still challenging research. For example, our human simulation has more than 200 muscles, so it's very computationally expensive. Usually, people don't know how to build this kind of model. My collaborator from NJIT is an expert in computational biomechanics. He built the model, and I designed the robot and the controller. That's how we solved this problem.

Tech Briefs: Can you explain in simple terms how it works?

Su: For the first step, we created the learning and simulation framework: a virtual human, a virtual robot, and the virtual physical interaction between the two. The second step, in this simulation environment, is to have the virtual human learn to walk, and then to have the virtual robot figure out, through a learning process, how to coordinate with and help that virtual human. It's like a baby learning to walk; at first, it fails. The process took about eight hours, and the robot figured out a control policy based on a deep neural network structure; that network is the controller. For the third step, we deploy this learned control policy on the physical robot. When anyone wears the exoskeleton, it works immediately and can significantly save human energy during walking, running, and stair climbing. So it's essentially a three-step procedure.
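To make those three steps concrete, here is a minimal Python sketch of the procedure as Su describes it: build a virtual human-robot simulation, learn a control policy purely in that simulation, and deploy the result on the physical robot. Every name below is a hypothetical placeholder, and a toy random-search loop merely stands in for the deep reinforcement learning the team actually uses.

import numpy as np

class VirtualHumanRobotSim:
    """Step 1 (toy stand-in): virtual human, virtual robot, and their
    physical interaction, reduced here to trivial placeholder dynamics."""
    def reset(self):
        return np.zeros(4)                      # simplified gait state

    def step(self, assist_torque):
        next_state = np.random.randn(4) * 0.1   # placeholder dynamics
        reward = -abs(assist_torque - 1.0)      # placeholder energy-saving reward
        return next_state, reward

def train_policy(sim, iterations=8000):
    """Step 2 (toy stand-in): learn a control policy purely in simulation.
    Random search here replaces the team's deep reinforcement learning."""
    best_params, best_return = np.zeros(4), float("-inf")
    for _ in range(iterations):
        params = best_params + np.random.randn(4) * 0.05
        state, total = sim.reset(), 0.0
        for _ in range(100):
            state, reward = sim.step(float(params @ state))
            total += reward
        if total > best_return:
            best_params, best_return = params, total
    return best_params

policy = train_policy(VirtualHumanRobotSim())
# Step 3: deploy `policy` on the physical exoskeleton. In the team's method,
# the learned controller works immediately, with no per-user treadmill training.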

Tech Briefs: You’re quoted in the article as saying, ‘We are in the early stages of testing the new methods for performance in robotic exoskeletons being used by older adults and people with neurological conditions such as cerebral palsy.’ How is that coming along? Do you have any updates you could share?

Su: With this work, we primarily focused on an able-bodied population, but the method is very generic. For this learning and simulation framework, we can use anyone's motion data to learn a control policy for the robot, so the robot can understand and figure out how to help each person. To help the elderly and children with cerebral palsy, we currently have two projects, sponsored by the National Science Foundation and the National Institutes of Health. One project is to study the exoskeleton for the elderly.

The other project is for children with mobility issues. The idea is that we can use the elderly person's data, or we can use the child's data, and import that data into the simulator; it's kind of like a video game.

We can take a one- or two-minute video of a child or an elderly person walking and import it into the simulator, which can understand that person's movement. Then the learning process starts over, which will probably take another eight hours. Generally, if the person is able-bodied, we only need a one-time simulation. If a person has a disability, however, we have to tailor the control policy to that individual.
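As a rough illustration of that personalization workflow, and not the team's actual software, the branching Su describes might look like this in Python; every function name here is hypothetical:

def extract_gait_from_video(video_path):
    """Hypothetical: derive motion data from a one- or two-minute video."""
    return [0.0] * 120                        # placeholder joint-angle trajectory

def train_in_simulation(motion_data, hours=8):
    """Hypothetical: rerun the learning process on subject-specific data."""
    return {"policy": "personalized", "training_hours": hours}

def get_control_policy(video_path, able_bodied):
    # Able-bodied users share one policy from a one-time simulation;
    # users with disabilities get a policy tailored from their own motion.
    if able_bodied:
        return {"policy": "generic", "training_hours": 0}
    motion = extract_gait_from_video(video_path)
    return train_in_simulation(motion)

policy = get_control_policy("patient_walk.mp4", able_bodied=False)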

Tech Briefs: You’re also quoted as saying, ‘We are also interested in exploring how the method could improve the performance of robotic prosthetic devices for amputee populations.’ Do you have any plans for next steps/further research/etc.?

Su: Another research area in our lab is robotic prostheses. We developed, for example, a knee prosthesis for amputees. What we want to understand is whether this learning and simulation method can also work for the amputee population. We think it will very likely work, but we don't know for sure yet.

We need to import the robotic prosthesis model into the simulation; we need to model the prosthesis, and we need to model the amputee. That part is still ongoing; it may be our next step after the elderly and children with cerebral palsy.

Another aspect we are working on: this robot is already the most lightweight robotic exoskeleton, but we want to further reduce the mass and cost of the device and also improve its comfort. Right now, the device costs maybe $10,000, which is still kind of expensive. Most commercially available exoskeletons range in price from about $50,000 to $120,000.

Our next step is to design a new actuator and a new sensor, and to make the system smaller, more affordable, and ultimately more accessible for everyone, not only in lab or clinic settings but also in home and community settings for everyday use.



Transcript

00:00:05 Exoskeletons can improve human mobility, but developing exoskeleton controllers requires lengthy human tests and handcrafted control laws. We developed a learning and simulation method that leverages data-driven and physics-informed reinforcement learning to obtain a controller purely from the simulation. Our method consisted of three

00:00:27 neural networks, namely motion imitation, muscle coordination, and exoskeleton control networks, which were trained simultaneously in a closed-loop manner. To facilitate deployment on a physical robot, our trained controller only requires one sensor per leg and can be implemented on a portable microcontroller. Our controller only needed to be trained once, for eight hours, in

00:00:52 the simulation. During this learning process, it gradually learned to generate effective assistance to help humans with three activities. Our method also simultaneously trained a high-fidelity human musculoskeletal model to simulate the human response to the assistance. This model can faithfully reproduce human kinematics and biomechanics, which enabled our

00:01:14 controller to learn and identify effective control strategies. We evaluated the performance of our trained controller for walking, running, and stair climbing at different speeds. Metabolic rate was recorded to quantify the energetic benefits the participants obtained from the assistance. Our trained controller substantially reduced energy expenditure

00:01:43 by 24.3%, 13.1%, and 15.4% during walking, running, and stair climbing, compared with no-exoskeleton conditions. This is the largest reduction reported in the literature with any hip exoskeleton or any lower-limb exoskeleton, either tethered or portable. Our trained controller could also

00:02:04 produce smooth and continuous assistance for the three activities, and especially their transitions, without the need for discrete activity classification or gait-phase segmentation. The assistance profile exhibited distinct shapes for each activity, highlighting its ability to adapt automatically to the changing requirements of the wearer.

00:02:31 This is the first work in wearable robotics to show that it is feasible to formulate an end-to-end controller to synergistically assist multimodal locomotion. We envision that this generalized, simple, and easy-to-use learning method can also be applied to a wide variety of assistive technologies, such as prostheses, to restore mobility for people with disabilities.
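For readers who want a sense of the architecture the transcript describes, here is a schematic Python sketch of how data could flow through the three networks (motion imitation, muscle coordination, and exoskeleton control) in a closed loop. The dimensions and wiring are illustrative assumptions, and the reinforcement-learning updates that train all three networks simultaneously are reduced to a comment.

import numpy as np

rng = np.random.default_rng(0)

def make_net(in_dim, out_dim, hidden=32):
    """One-hidden-layer network, standing in for the paper's deeper models."""
    return {"w1": rng.normal(0.0, 0.1, (in_dim, hidden)),
            "w2": rng.normal(0.0, 0.1, (hidden, out_dim))}

def forward(net, x):
    return np.tanh(x @ net["w1"]) @ net["w2"]

# The three networks named in the transcript (dimensions are assumptions).
motion_imitation    = make_net(10, 6)   # reproduces human kinematics
muscle_coordination = make_net(6, 8)    # drives the musculoskeletal model
exo_control         = make_net(2, 1)    # one sensor per leg -> assist torque

state = rng.normal(size=10)
for _ in range(1000):                   # closed-loop co-simulation
    kinematics = forward(motion_imitation, state)
    muscle_activations = forward(muscle_coordination, kinematics)
    torque = forward(exo_control, kinematics[:2])   # e.g., hip angle per leg
    # In the actual method, reinforcement learning would update all three
    # networks here, simultaneously, from the simulated human's response.
    state = rng.normal(size=10)         # placeholder for simulated dynamics

print("final assist torque:", torque.item())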