Reference Information:
Title: Addressing the Problems of Data-Centric Physiology-Affect Relations Modeling
Authors: Roberto Legaspi, Ken-ichi Fukui, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao, Merlin Suarez
Presentation Venue: IUI’10, February 7–10, 2010, Hong Kong, China
Summary:
Affective computing matters because it can allow a computer to interact more effectively with its user. The data-centric approach discussed here collects physiological readings of the autonomic nervous system as the user's emotions change. The paper identifies three problems with this kind of modeling: feature optimization, the use of discrete affect classes, and the restriction to small datasets.
The data is taken from sensors, including an electroencephalography (EEG) device that monitors brainwaves. These sensors can be used to infer the emotions that content evokes in the user.
Because sensors are expensive, they tend to be placed only at reading points already proven useful, such as the electromyogram (EMG), electrocardiogram (ECG), blood volume pulse (BVP), skin conductance (SC), and/or respiration sensors. There may be other useful measurement points that are still unknown, and it is also unknown whether the commonly used points are optimal.
The paper also questions whether emotion is continuous or discrete. The continuous view makes more logical sense and better matches what is known about neural and physiological processes.
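One way to picture the difference (an illustrative sketch, not from the paper; the quadrant labels are hypothetical): a continuous valence–arousal reading can always be collapsed into discrete affect classes, but the collapse throws away detail that a continuous model keeps.

```python
# Illustrative sketch: the same affect reading, represented continuously
# versus forced into one of four coarse discrete classes.
def discretize(valence: float, arousal: float) -> str:
    """Collapse a continuous valence-arousal point (each in [-1, 1])
    into one of four hypothetical affect quadrants."""
    if valence >= 0:
        return "excited" if arousal >= 0 else "content"
    return "distressed" if arousal >= 0 else "depressed"

# The continuous representation keeps the full reading...
reading = (0.3, -0.7)  # (valence, arousal)
# ...while discretization maps many distinct readings to one label.
label = discretize(*reading)
print(label)  # content
```

Note how (0.3, -0.7) and (0.9, -0.1) would receive the same discrete label despite being physiologically quite different readings.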
The last question is whether the model is restricted by dataset size: the larger the dataset, the slower the matching process becomes.
The paper's proposed solutions to these problems include automatic feature selection to find near-optimal features, and fast approximate matching algorithms to handle very large datasets. Because only near-optimal features are selected, this approach is also more cost-efficient than previous methods.
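The feature-selection idea can be sketched in miniature (this is an illustrative toy, not the paper's actual algorithm): score each sensor feature by how strongly it tracks the affect labels, then keep only the top-scoring ones, so cheaper sensor setups can drop the rest.

```python
# Toy automatic feature selection: rank feature columns by absolute
# Pearson correlation with the labels and keep the k best.
def score(feature_values, labels):
    """Relevance score: absolute Pearson correlation with the labels."""
    n = len(labels)
    mx = sum(feature_values) / n
    my = sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(feature_values, labels))
    sx = sum((x - mx) ** 2 for x in feature_values) ** 0.5
    sy = sum((y - my) ** 2 for y in labels) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def select_features(data, labels, k):
    """Return the indices of the k individually most relevant columns."""
    n_features = len(data[0])
    columns = {j: [row[j] for row in data] for j in range(n_features)}
    ranked = sorted(columns, key=lambda j: score(columns[j], labels),
                    reverse=True)
    return ranked[:k]

# Toy dataset: 3 sensor features; the first tracks the label,
# the second is weakly anti-correlated, the third is constant noise.
data = [[0.1, 5.0, 2.0], [0.9, 4.0, 2.0], [0.2, 6.0, 2.0], [0.8, 5.0, 2.0]]
labels = [0, 1, 0, 1]
print(select_features(data, labels, 1))  # [0]
```

Ranking features individually like this is the simplest filter-style selection; the paper's method is more sophisticated, but the cost argument is the same: once the near-optimal subset is known, the unselected sensors need not be deployed at all.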
Discussion:
I did not like reading this paper at all. I had a hard time following it, and I'm still not 100% sure what it's all about.
I thought the idea of measuring signals from the human body to see what emotion is being felt was pretty cool. Because using so many sensors is expensive, it was interesting to see how optimization was used to make the approach more cost-efficient.
They're basically trying to refine voice and facial recognition software to detect the user's emotion. If people can do it, then computers should be able to as well. These guys were using brainwave profiling and extended facial recognition to narrow down the detection ranges. But yeah, the paper did suck.