Science-fiction stories are filled with tales of human-like robots who can interact with people on an equal level, who can recognize and understand visual cues to provide the proper responses when interacting with real humans. It’s the stuff of fantasy, the mere dreams of writers who envision a world that is decades in the future.
Well, maybe not. Not if Michael Ryoo, an assistant professor in the School of Informatics and Computing, has anything to do with it.
Ryoo recently received recognition from the Institute of Electrical and Electronics Engineers (IEEE) Robotics and Automation Society for his paper “Multi-type Activity Recognition in Robot-Centric Scenarios,” presented in mid-May at the International Conference on Robotics and Automation (ICRA 2016), widely considered the premier conference in the field of robotics. Ryoo was honored with the conference’s “Best Vision Paper” award.
Ryoo’s research focuses on enabling robots to automatically recognize human activities, providing the kind of activity-level situation awareness needed for intelligent operation, including human-robot interaction. The paper considered the different types of human activity recognition a robot might face during its operation. In real-world scenarios, Ryoo noted, a robot may observe a single person acting alone (human actions), two humans interacting with one another (human-human interactions), and humans interacting with the robot itself from the robot’s own point of view (first-person activities).
“Although recognition of each of these activity types separately has been studied, recognizing all these multiple different types occurring concurrently or sequentially has not been attempted previously,” Ryoo said.
Ryoo began the work at NASA’s Jet Propulsion Laboratory at the California Institute of Technology, prior to arriving at IU in 2015, collaborating with Ilaria Gori and J.K. Aggarwal of the University of Texas at Austin and Larry Matthies of NASA-JPL. The research introduced a new unified feature representation, called Relational History Images, that captures changes in the relative relationships between different human body parts and regions.
“Experiments were conducted not only with our new multi-type dataset but also with multiple public video datasets, confirming that our proposed approach performs at a superior level vs. previous methods,” Ryoo said.
Ryoo previously studied the first-person recognition of human-robot interactions in a 2013 paper, and the latest study expands on that work. The research is funded by the Robotics Collaborative Technology Alliance program supported by the Army Research Lab.
“The honor for Michael and his collaborators is truly deserved,” said Erik Stolterman, the chair of Informatics at SoIC. “Their hard work is a testament to their vision of what is possible in the fields of both computer vision and robotics as research progresses.”