AsiaIndustrial NetNews: By learning to grasp virtual objects in a simulated environment, robots are bringing closer the era in which machine learning and cloud services transform traditional manual labor. In a laboratory at the University of California, Berkeley, an ordinary robot is picking up oddly shaped objects. Remarkably, the robot learned to do so entirely by manipulating virtual objects.
The robot has a large dataset of 3D shapes and associated grasping skills, and can judge how much grip force to apply to different objects. The Berkeley researchers fed these images into the robot's deep learning system, which is connected to the robot's arm and 3D sensors. When a new object is placed in front of the robot, the system quickly matches it to the most similar known shape and grasping technique, and instructs the arm how to act.
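The matching step described above can be sketched as a nearest-neighbor lookup. This is an illustrative toy, not the Berkeley system: the feature vectors, the library contents, and the function names (`GRASP_LIBRARY`, `nearest_grasp`) are all hypothetical stand-ins for the learned 3D representations.

```python
import math

# Hypothetical library mapping a shape's feature vector to a stored
# grasp technique; a real system would learn these from 3D data.
GRASP_LIBRARY = {
    (1.0, 0.2, 0.1): "pinch",
    (0.1, 1.0, 0.8): "wrap",
    (0.5, 0.5, 0.5): "suction",
}

def nearest_grasp(features):
    """Return the grasp technique of the most similar known shape,
    using plain Euclidean distance over the feature vectors."""
    best = min(GRASP_LIBRARY, key=lambda k: math.dist(k, features))
    return GRASP_LIBRARY[best]
```

For a new object with features `(0.9, 0.3, 0.1)`, the closest library entry is the "pinch" shape, so that grasp is reused.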
The robot's grasping performance is reportedly the best recorded to date. In tests, whenever the robot judged an object's grasp success probability to be above 50%, it lifted the object successfully; even with some shaking, the object was dropped only 2% of the time. If the robot judges an object too difficult to grasp, it nudges the object into a more graspable pose, and after this angle adjustment the grasp success rate reaches 99%.
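The decision rule above, lift only when predicted success clears a threshold, otherwise nudge and re-evaluate, can be sketched as follows. This is a simplified illustration under assumed names (`predict`, `choose_action`, `grasp`), not the published method.

```python
def choose_action(success_prob, threshold=0.5):
    """Lift if the predicted grasp quality clears the threshold,
    otherwise nudge the object to expose a better grasp angle."""
    return "lift" if success_prob >= threshold else "nudge"

def grasp(predict, max_nudges=3, threshold=0.5):
    """Attempt a grasp, nudging and re-predicting up to max_nudges
    times; predict() stands in for the deep-learning quality model."""
    for _ in range(max_nudges + 1):
        if choose_action(predict(), threshold) == "lift":
            return "lifted"
        # A nudge changes the object's pose, so the next predict()
        # call is assumed to see a new configuration.
    return "gave up"
```

For example, a predictor that returns 0.3 on the first view and 0.7 after one nudge leads to a successful lift on the second attempt.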
Most current robotic arms can grasp only "familiar" objects, and researchers typically must give them extensive practice, a time-consuming process. The new Berkeley work demonstrates a different approach: using deep learning and cloud services for robotic grasping. This avoids lengthy physical training while improving the robot's usefulness in factories and warehouses, and deep learning could even let robots work in new settings such as hospitals and homes.
UC Berkeley professor Ken Goldberg said that unlike traditional robots, which require months of physical experiments, this robot needs no field practice: it learns the 3D geometry, visual appearance, and grasping skills contained in the dataset entirely in a simulated environment, and "training" takes only one day.
Berkeley professor Ken Goldberg (left) and Siemens research group leader Juan Aparicio
A paper on the study is reportedly due to be published in July of this year, at which point Professor Goldberg and his fellow researchers will release the robot's 3D dataset to help advance computer vision research.
At present, machine vision technology is still in its infancy, relevant data is extremely scarce in academia, and systematic datasets are urgently needed. Advances in deep learning, control algorithms, and new materials will enable new kinds of physical robots and usher in an era of "machine substitution" with significant economic impact. Although robots have been in warehouses and on production lines for decades, such as Amazon's Kiva robots, most of today's robots can only carry items, and their sorting skills remain quite clumsy.
A laboratory at MIT is conducting related experiments, and Berkeley's results have impressed its researchers. A German company has already approached Berkeley about commercializing the technology.
Dexterous hands were crucial to the development of human intelligence: over a long evolutionary process, hand, eye, and brain formed a virtuous feedback loop in which vision grew sharper and brainpower greatly increased. Grasping may look simple, but it will likely play a similar role in the evolution of artificial intelligence.