Carnegie Mellon Robot Uses Non-Visual Data To Identify Objects

A robot can struggle to discover objects in its surroundings when it relies on computer vision alone. But by taking advantage of all of the information available to it – an object’s location, size, shape and even whether it can be lifted – a robot can continually discover and refine its understanding of objects, say researchers at Carnegie Mellon University’s Robotics Institute.

The Lifelong Robotic Object Discovery (LROD) process developed by the research team enabled a two-armed mobile robot to use color video, a Kinect depth camera and non-visual information to discover more than 100 objects in a home-like laboratory, including computer monitors, plants and food items.

Normally, the CMU researchers build digital models and images of objects and load them into the memory of HERB – the Home-Exploring Robot Butler – so the robot can recognize objects that it needs to manipulate. Virtually all roboticists do something similar to help their robots recognize objects. With the team’s implementation of LROD, called HerbDisc, the robot can now discover these objects on its own.

The robot’s ability to discover objects on its own sometimes takes even the researchers by surprise, says Siddhartha Srinivasa, associate professor of robotics and head of the Personal Robotics Lab, where HERB is being developed. In one case, some students left the remains of lunch – a pineapple and a bag of bagels – in the lab when they went home for the evening. The next morning, they returned to find that HERB had built digital models of both the pineapple and the bag and had figured out how it could pick up each one.

Discovering and understanding objects in places filled with hundreds or thousands of things will be a crucial capability once robots begin working in the home and expanding their role in the workplace.  

Object recognition has long been a challenging area of inquiry for computer vision researchers. Recognizing objects based on vision alone quickly becomes an intractable computational problem in a cluttered environment, but humans don’t rely on sight alone to understand objects; babies will squeeze a rubber ducky, beat it against the tub, dunk it – even stick it in their mouth. Robots, too, have a lot of “domain knowledge” about their environment that they can use to discover objects. 

Depth measurements from HERB’s Kinect sensors proved to be particularly important, providing three-dimensional shape data that is highly discriminative for household items. Other domain knowledge available to HERB includes location – whether something is on a table, on the floor or in a cupboard. The robot can see whether a potential object moves on its own, or is movable at all. It can note whether something is in a particular place at a particular time. And it can use its arms to see if it can lift the object – the ultimate test of its “objectness.”
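The article doesn’t spell out HerbDisc’s internals, but the filtering idea can be sketched in a few lines of code. The Python below is an illustration under assumed cue names, weights and thresholds – not HerbDisc’s actual implementation: each candidate segment pulled from the RGB-D video is scored against cheap domain-knowledge cues, and only strong candidates survive to the expensive step of building a full object model.

```python
# Illustrative sketch only: the cue names, weights and thresholds here are
# invented for this example and are not HerbDisc's actual implementation.

from dataclasses import dataclass

@dataclass
class Candidate:
    """A segment of the RGB-D scene hypothesized to be an object."""
    height_m: float             # 3-D extent measured by the depth camera
    on_support_surface: bool    # resting on a table, shelf or counter
    moved_between_visits: bool  # appeared or shifted across observations
    robot_could_lift: bool      # outcome of an (optional) grasp attempt

def objectness_score(c: Candidate) -> float:
    """Combine domain-knowledge cues into a single score in [0, 1]."""
    score = 0.0
    if 0.05 <= c.height_m <= 0.5:    # household-object scale
        score += 0.3
    if c.on_support_surface:         # objects tend to sit on surfaces
        score += 0.2
    if c.moved_between_visits:       # movement strongly suggests an object
        score += 0.2
    if c.robot_could_lift:           # the ultimate test of "objectness"
        score += 0.3
    return score

# Cheap cue checks prune the candidate set before any expensive 3-D
# model building, which is where large processing-time savings come from.
candidates = [
    Candidate(0.12, True, True, True),     # e.g. a coffee mug
    Candidate(2.10, False, False, False),  # e.g. a section of wall
]
kept = [c for c in candidates if objectness_score(c) >= 0.5]
print(f"kept {len(kept)} of {len(candidates)} candidates")
```

In this toy version, the wall-sized segment is rejected while the mug-sized, movable, liftable segment is kept; HerbDisc’s real cues and weighting are richer, but the pruning principle is the same.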

“The first time HERB looks at the video, everything ‘lights up’ as a possible object,” says Srinivasa. But as the robot applies its domain knowledge, it becomes clearer what is and isn’t an object. The team found that adding domain knowledge to the video input almost tripled the number of objects HERB could discover and reduced computer processing time by a factor of 190.

Though the capability is not yet implemented, HERB and other robots could one day use the Internet to build an even richer understanding of objects. Earlier work by Srinivasa showed that robots can use crowdsourcing via Amazon Mechanical Turk to help understand objects. Likewise, a robot might access online databases such as RoboEarth, ImageNet or 3D Warehouse to find the name of an object, or to get images of parts of the object it can’t see.

Bo Xiong, a student at Connecticut College, and Corina Gurau, a student at Jacobs University in Bremen, Germany, also contributed to this study.

HERB is a project of the Quality of Life Technology Center, a National Science Foundation engineering research center operated by Carnegie Mellon and the University of Pittsburgh. The center focuses on developing intelligent systems that improve quality of life for everyone while enabling older adults and people with disabilities to live more independently.

[Image courtesy: Carnegie Mellon University]

Just in

Vercel raises $250M

San Francisco-based Vercel, a frontend cloud platform provider, has secured $250 million in Series E funding, bringing the company's valuation to $3.25 billion.

Worky raises $6M (Mexico)

Mexico City-based Worky, a provider of HR and payroll software solutions for Mexican companies, has closed a $6 million Series A financing round.

Amazon announces $1.31B investment in France

Amazon has announced a new investment of about $1.31 billion (€1.2 billion) in France, which the company says will lead to the creation of over 3,000 permanent jobs in the country.

Amazon Web Services CEO Adam Selipsky to step down — CNBC

Adam Selipsky, CEO of Amazon’s cloud computing business, will step down from his role next month. Matt Garman, senior vice president of sales and marketing at Amazon Web Services, will succeed Mr. Selipsky, who exits the company on June 3, writes Annie Palmer.

Palo Alto Networks, Accenture expand alliance to offer generative AI services

Palo Alto Networks and Accenture have announced the expansion of their strategic alliance to provide new offerings that combine Palo Alto Networks' Precision AI technology with Accenture's secure generative AI services.