Robots have got pretty good at picking up objects. But give them something shiny or clear, and the poor droids will likely lose their grip. Not ideal if you want a kitchen robot that can slice you a piece of pie.
Their confusion often stems from their depth camera systems. These cameras shine infrared light on an object to detect its shape, which works pretty well on opaque items. But point them at a transparent object and the light passes straight through; aim them at a shiny one and it scatters off the reflective surface. Either way, the camera struggles to calculate the item’s shape.
Researchers from Carnegie Mellon University have discovered a pretty simple solution: adding consumer color cameras to the mix. Their system pairs the cameras with machine learning algorithms that infer the shape of transparent and reflective objects from ordinary color images.
The team trained the system on a combination of depth camera images of opaque objects and color images of the same items. This allowed it to infer different 3D shapes from the images — and the best spots to grip.
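The core idea can be sketched in a few lines. This is not the CMU team’s code: it is a toy illustration of the supervision scheme, where depth-camera readings of opaque copies of an object serve as ground truth for predicting depth from color images. A linear least-squares model stands in for the paper’s neural network, and all data here is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired dataset: flattened 8x8 "color" images, and the depth maps a
# depth camera recorded for matching opaque copies of the same objects.
n_samples, n_pixels = 200, 64
colors = rng.random((n_samples, n_pixels))       # stand-in for RGB features
true_map = rng.random((n_pixels, n_pixels))      # hidden color -> depth relation
depths = colors @ true_map                       # stand-in ground-truth depth maps

# "Train" a model to map color images to depth maps. Here that is just a
# linear least-squares fit; the real system learns a far richer mapping.
weights, *_ = np.linalg.lstsq(colors, depths, rcond=None)

# At test time, a "transparent" object gives the depth camera nothing useful,
# but its color image alone yields a predicted depth map to plan a grasp from.
test_color = rng.random((1, n_pixels))
pred_depth = test_color @ weights
err = np.abs(pred_depth - test_color @ true_map).mean()
print(f"mean depth error: {err:.2e}")
```

Since the toy data is exactly linear, the fit recovers the hidden mapping almost perfectly; the point is only the shape of the pipeline, where color images plus opaque-object depth labels let a model fill in the depth that infrared sensing can’t provide.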
The robots can now pick up individual shiny and clear objects, even if the items are in a pile of clutter. Check it out in action in the video below:
[embedded content]
The team admits that their system is still far from perfect. “We do sometimes miss, but for the most part it did a pretty good job, much better than any previous system for grasping transparent or reflective objects,” said David Held, an assistant professor at CMU’s Robotics Institute.
I’m still not sure I’d trust it with a razor-sharp kitchen knife. Unless I was really hungry and unwilling to leave the couch.
Published July 14, 2020 — 17:54 UTC