I was able to teach the robot a small vocabulary of shape and
color words by show-and-tell. This result, however, did not
seem significant since, as the designer of the robot, I of course
knew how to interact with it to get it to learn. I could
ask others to interact with the robot as a form of evaluation,
but essentially the same problem would arise: the robot's
performance would depend to a great degree on how well I
coached others on interacting with it (a problem that seems
to plague most work in human-robot interaction). And so the
idea was born of using child data to evaluate the robot. If the
robot could learn from the same audio-visual input as a child,
then its perceptual processing and learning algorithms
would demonstrate an interesting level of capability.