Leonardo robot
A socially intelligent animatronic robot.
MIT
Specifications
Degrees of freedom | 69 |
Height | 76.6 cm |
Overview
Leonardo has 69 degrees of freedom — 32 of those are in the face alone. Leonardo is capable of near-human facial expression (constrained by its creature-like appearance). Although highly articulated, Leonardo is not designed to walk. Instead, its degrees of freedom were selected for their expressive and communicative functions. It can gesture and is able to manipulate objects in simple ways.
The robot has an organic look. Leonardo was given a youthful appearance to encourage people to playfully interact with it, much as one might with a young child.
Leonardo has a real-time face recognition system that can be trained on the fly via a simple social interaction with the robot. The interaction allows people to introduce themselves and others to Leonardo, who tries to memorize their faces for use in subsequent interactions.
The system receives images from the camera mounted in Leo’s right eye. Faces are isolated from these images using data provided by a facial feature tracker. Isolated face images are resampled into small greyscale images and projected onto the first 40 principal components of the face image data set. These face image projections are matched against appearance manifold splines to produce a classification, retrieving the name associated with the given face.
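The projection-and-match step can be illustrated with a short sketch. This is a minimal illustration rather than the actual Leonardo code: it uses plain NumPy, substitutes a nearest-neighbor lookup in PCA space for the appearance manifold splines, and assumes the face crops have already been isolated, converted to greyscale, resampled to a fixed size, and flattened. All function names here are hypothetical.

```python
import numpy as np

N_COMPONENTS = 40  # project onto the first 40 principal components, as in the text

def fit_pca(face_images):
    """face_images: (n_samples, n_pixels) array of flattened greyscale crops.
    Returns the mean face and the first N_COMPONENTS principal components."""
    mean = face_images.mean(axis=0)
    centered = face_images - mean
    # SVD of the centered data; the rows of vt are the principal components
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:N_COMPONENTS]

def project(image, mean, components):
    """Project one flattened face crop into the 40-D PCA subspace."""
    return components @ (image - mean)

def classify(image, mean, components, models):
    """models: dict mapping a person's name to an array of stored projections.
    The real system matches against appearance manifold splines; a plain
    nearest-neighbor match over stored projections stands in for that here."""
    query = project(image, mean, components)
    best_name, best_dist = None, np.inf
    for name, projections in models.items():
        dist = np.min(np.linalg.norm(projections - query, axis=1))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name, best_dist
```

In practice a distance threshold would also be needed to decide between "recognized" and "unknown face" before retrieving a name.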
In order to learn new faces, Leo keeps a buffer of up to 200 temporally contiguous or near-contiguous views of the currently tracked face. This buffer is used to create a new face model whenever the person introduces themselves via speech. When a new model is created, principal component analysis (PCA) is performed on the entire face image data set, and a spline manifold is fitted to the images of the new face. The appearance manifold splines for the other face models are also recomputed at this time. This full model-building process takes about 15 seconds. Since the face recognition module runs as a separate process from Leo's other cognitive modules, a new face model can be added without stalling the robot or the interaction.
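The non-blocking rebuild can be sketched as follows. Again, this is an assumption-laden illustration rather than the actual architecture: a bounded buffer and a Python worker thread stand in for Leo's separate recognition process, and the hypothetical fit_pca helper from the previous sketch is reused.

```python
import threading
from collections import deque

import numpy as np

BUFFER_SIZE = 200  # up to 200 temporally contiguous views, per the text

class FaceLearner:
    """Illustrative only: raw face crops are kept per person so that PCA can
    be recomputed over the entire face data set when a new face is added."""

    def __init__(self):
        self.view_buffer = deque(maxlen=BUFFER_SIZE)
        self.images = {}        # name -> (n_views, n_pixels) raw crops
        self.recognizer = None  # (mean, components, projections-by-name)
        self._lock = threading.Lock()

    def add_view(self, face_crop):
        """Called each frame with the currently tracked face crop (flattened)."""
        self.view_buffer.append(face_crop)

    def on_introduction(self, name):
        """Triggered when a person introduces themselves via speech. The slow
        rebuild (about 15 s in the real system) runs on a worker thread so
        the robot and the interaction are never stalled."""
        views = np.array(self.view_buffer)
        threading.Thread(target=self._rebuild, args=(name, views), daemon=True).start()

    def _rebuild(self, name, views):
        with self._lock:
            self.images[name] = views
            data = np.concatenate(list(self.images.values()))
            mean, components = fit_pca(data)  # from the previous sketch
            models = {n: (imgs - mean) @ components.T
                      for n, imgs in self.images.items()}
            self.recognizer = (mean, components, models)  # swap in new models
```

Recognition keeps reading the old recognizer tuple until the rebuild swaps in the new one, which mirrors how the separate recognition process avoids stalling the interaction.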
The goal of this project is to develop a synthetic skin capable of detecting temperature, proximity, and pressure with acceptable resolution over the entire body, while still retaining the look and feel of its organic counterpart. Toward this end, we are experimenting with layering silicone materials (such as those used for make-up effects in the special-effects industry) over force-sensitive resistors (FSRs), quantum tunneling composites (QTCs), temperature sensors, and capacitive sensing technologies.
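As a rough illustration of how one of these sensing layers might be read out, the sketch below converts an ADC reading from an FSR in a simple voltage-divider circuit into an approximate force value. All pin assignments, supply voltages, and calibration constants here are hypothetical; the source does not describe the skin electronics at this level of detail.

```python
V_SUPPLY = 5.0      # divider supply voltage (hypothetical)
R_FIXED = 10_000.0  # fixed divider resistor in ohms (hypothetical)
ADC_MAX = 1023      # 10-bit ADC (hypothetical)

def fsr_resistance(adc_reading):
    """FSR resistance from the divider's midpoint voltage.
    Circuit assumed: V_SUPPLY -> FSR -> ADC tap -> R_FIXED -> ground."""
    v_out = V_SUPPLY * adc_reading / ADC_MAX
    if v_out <= 0.0:
        return float("inf")  # no pressure: the FSR is effectively open
    return R_FIXED * (V_SUPPLY - v_out) / v_out

def fsr_force_estimate(adc_reading, k=1.0e6):
    """Very rough force estimate: FSR conductance is roughly proportional
    to applied force over part of its range; k is a per-sensor calibration
    constant that would be fit empirically."""
    r = fsr_resistance(adc_reading)
    return 0.0 if r == float("inf") else k / r
```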
References
Describes steps toward the realization of a fully "sensitive skin" for robots, in which somatic sensors of varying modalities such as touch, temperature, pain, and proprioception combine, as if letters in an alphabet, to create a more vivid depiction of the world and foster richer human-robot interaction.
Describes the project in detail, going deeper into the body, the vision system, and the skin.