Using cameras and AI to help exoskeletons adapt to their environment

Researchers at Canada’s University of Waterloo are showcasing work on prostheses and exoskeletons that uses cameras and AI to deliver more natural human movement. The ExoNet project feeds video captured by a wearable camera through a deep learning model to mimic the way humans adapt and adjust their movements based on their environment.
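
At a high level, the pipeline the researchers describe looks something like the sketch below: classify each camera frame into a terrain type, then map that terrain to a locomotion mode. The model choice, class labels and mode mapping here are illustrative assumptions, not details published by the team.

```python
# Minimal sketch of a camera-to-locomotion-mode pipeline in the spirit of
# ExoNet. The terrain classes, backbone and mode mapping are assumptions
# for illustration, not the Waterloo team's actual design.
import torch
from torchvision import models, transforms
from PIL import Image

TERRAIN_CLASSES = ["level_ground", "incline_stairs", "decline_stairs"]  # assumed labels
LOCOMOTION_MODES = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
}

# Assumed: a standard ImageNet-pretrained backbone with its head replaced,
# fine-tuned offline on labeled walking-environment images.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = torch.nn.Linear(model.last_channel, len(TERRAIN_CLASSES))
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def select_mode(frame: Image.Image) -> str:
    """Classify one wearable-camera frame and return a locomotion mode."""
    batch = preprocess(frame).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        logits = model(batch)
    terrain = TERRAIN_CLASSES[int(logits.argmax(dim=1))]
    return LOCOMOTION_MODES[terrain]
```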

The project is an attempt to adjust locomotion on the fly more naturally than current systems, which rely on connected smartphone apps or other external controllers.

“That can be inconvenient and cognitively demanding,” Waterloo PhD candidate Brokoslaw Laschowski said in a release tied to the research. “Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode.”

The research primarily focuses on robotic exoskeletons, which a number of companies are developing to assist people with impaired mobility. The hope is that the ExoNet system could eventually replace the need for external control by the wearer, creating more natural locomotion.

Of course, there’s still a lot of work to be done. Naturally, the system handles flat terrain most easily. Next steps involve adapting it to environments that tend to give people with limited mobility difficulty, including stairs and other obstacles. A final version of the system would anticipate those transitions and adapt accordingly.
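
One way such a system could handle transitions safely, sketched below under assumptions of our own: rather than switching modes on a single frame’s prediction, the controller commits to a new mode only once it dominates a short window of recent classifications, so one misread frame near a stair edge can’t trigger an abrupt change.

```python
from collections import Counter, deque

# Illustrative debouncing logic (an assumption, not the team's controller):
# hold the current locomotion mode until a new prediction clearly wins out
# over a short window of recent per-frame classifications.
class ModeDebouncer:
    def __init__(self, window: int = 10, threshold: float = 0.8):
        self.history = deque(maxlen=window)  # most recent predictions
        self.threshold = threshold           # fraction needed to switch
        self.current_mode = "walk"

    def update(self, predicted_mode: str) -> str:
        self.history.append(predicted_mode)
        mode, count = Counter(self.history).most_common(1)[0]
        # Switch only when the winning mode fills enough of the window.
        if mode != self.current_mode and count / self.history.maxlen >= self.threshold:
            self.current_mode = mode
        return self.current_mode
```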

“Our control approach wouldn’t necessarily require human thought,” Laschowski adds. “Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves.”

There are other challenges as well. Battery life is one: the team is looking to improve longevity by experimenting with a system that recharges using energy from the wearer’s movement.