November 21, 2019 feature

A multi-camera optical tactile sensor that could enable vision-based robotic skins
An image of the multi-camera sensor. Credit: Trueeb, Sferrazza & D’Andrea.

A team of researchers at ETH Zurich in Switzerland has recently developed a multi-camera optical tactile sensor (i.e., a tactile sensor based on optical devices) that collects information about the contact force distribution applied to its surface. This sensor, presented in a paper prepublished on arXiv, could be used to develop soft robotic skins based on computer vision algorithms.

"Compared to the vision capabilities that robots can achieve using modern cameras, the sense of touch in robots is very under-developed," Camill Trueeb, Carmelo Sferrazza and Raffaello D"Andrea, the researchers who carried out the study, told Tech Xplore via email. "Vision-based tactile skins aim at bridging this gap, exploiting the capabilities of vision sensors and state-of-the-art artificial intelligence algorithms, benefiting from the accessibility of large amounts of data and computational power."

The optical tactile sensor developed by Trueeb, Sferrazza and D'Andrea consists of four cameras placed underneath a soft, transparent material with spherical particles embedded in it. The cameras track the motion of these spherical particles, which arises from the deformation of the material when a force is applied to it.
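This tracking step lends itself to standard computer-vision tooling. The following is a minimal sketch, assuming OpenCV and grayscale camera frames, of how the displacement of the particle pattern between two frames could be estimated with dense optical flow; the function name, frame size and Farneback parameters are illustrative and not the authors' pipeline.

import cv2
import numpy as np

def particle_motion(prev_frame, curr_frame):
    # Dense Farneback optical flow: returns an (H, W, 2) array holding the
    # x/y displacement of the particle pattern at every pixel.
    return cv2.calcOpticalFlowFarneback(
        prev_frame, curr_frame, None,
        0.5,   # pyramid scale
        3,     # pyramid levels
        15,    # averaging window size
        3,     # iterations per pyramid level
        5,     # pixel neighborhood for polynomial expansion
        1.2,   # Gaussian sigma for the expansion
        0,     # flags
    )

# Synthetic stand-in for two grayscale camera frames of the particle layer.
rng = np.random.default_rng(0)
prev = (rng.random((128, 128)) * 255).astype(np.uint8)
curr = np.roll(prev, 2, axis=1)           # fake a 2-pixel shift under load
print(particle_motion(prev, curr).shape)  # (128, 128, 2)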

The researchers also developed a machine learning (ML) architecture that analyzes the motion of the spherical particles in the material and, from that motion, reconstructs the forces causing the material to deform, also known as the contact force distribution.
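As a rough illustration of the idea (not the architecture described in the paper), the sketch below, assuming PyTorch, shows a small convolutional network that maps a two-channel particle-motion field to a three-channel force map, with one normal and two tangential force components per surface location; the layer sizes are arbitrary assumptions.

import torch
import torch.nn as nn

class FlowToForce(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1),   # input: x/y particle motion
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=1),              # output: normal + 2 shear maps
        )

    def forward(self, flow):
        # flow: (batch, 2, H, W) -> force distribution: (batch, 3, H, W)
        return self.net(flow)

model = FlowToForce()
dummy_flow = torch.randn(1, 2, 128, 128)
print(model(dummy_flow).shape)  # torch.Size([1, 3, 128, 128])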

"We use relatively inexpensive cameras that simultaneously provide images for a total of about 65,000 pixels," the researchers explained. "Therefore, they generate a large amount of information at very high resolution, which is ideal for a data-driven approach to tactile sensing."

Instead of only providing total force values, like the majority of existing standard force sensors for robotics applications, the sensor developed by the researchers offers feedback on the distribution of all the forces applied to its soft surface, decoupling normal and tangential components. Due to its structure and unique design, the new multi-camera sensor also exhibits a larger contact surface and a thinner structure than other camera-based tactile sensors, without requiring additional reflecting components (e.g., mirrors).
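To make that distinction concrete, the short snippet below (with a hypothetical surface discretization and made-up values) shows how a force distribution stores normal and tangential components per surface bin, and how summing over the surface recovers only the totals that a conventional force sensor would report.

import numpy as np

H, W = 32, 32                      # hypothetical discretization of the soft surface
force_map = np.zeros((3, H, W))    # channels: [normal, shear_x, shear_y]
force_map[0, 10:14, 10:14] = 0.5   # a small normal press on a 4x4 patch
force_map[1, 10:14, 10:14] = 0.1   # with a slight tangential drag

total_force = force_map.sum(axis=(1, 2))   # what a total-force sensor would report
print(total_force)                         # [8.0, 1.6, 0.0]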

"The use of multiple cameras makes it possible to use this type of to cover larger areas with arbitrary shapes," the researchers said. "This work shows how the knowledge acquired on a subset of the cameras can be transferred to additional cameras, resulting in a scalable, data-efficient approach."

The AI-powered, multi-camera sensor could ultimately be scaled to larger surfaces, enabling the creation of soft and sensing robotic skins. In their recent paper, the researchers discuss how their ML architecture could be adapted to facilitate these applications in the future.

"We now plan to extend the capabilities of the sensor in order to reconstruct information about the contact with objects of complex and generic shapes," the researchers said. "We believe that the development of sensing algorithms should always take into account the data efficiency component to facilitate widespread use in robotics, and we will therefore pursue this direction in future work, as well."



More information: Towards vision-based robotic skins: a data-driven, multi-camera tactile sensor. arXiv:1910.14526 [cs.RO]. arxiv.org/abs/1910.14526

© 2019 Science X Network

