In the current state of the art, smart companions lack the ability to detect what
is arguably the most important factor present in normal human interaction: the
mood of the speaker. Speech and non-verbal analysis methods can be extended to detect mood or emotions, and such affect recognition can be further enhanced by combining it with image analysis for face and gesture recognition. These affect detection techniques have been the object of extensive research, but their computational cost has generally restricted their application to relatively powerful computing platforms, confining the interaction illusion to taking place ‘via’ the computer.