Ubiquitous Technologies for Emotion Recognition
Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understand human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.
Saved in:
Editor: | Banos, Oresti; Castro, Luis A.; Villalonga, Claudia |
---|---|
Other: | Banos, Oresti; Castro, Luis A.; Villalonga, Claudia |
Year of Publication: | 2021 |
Language: | English |
Physical Description: | 1 electronic resource (174 p.) |
id |
993546003204498 |
---|---|
ctrlnum |
(CKB)5400000000042228 (oapen)https://directory.doabooks.org/handle/20.500.12854/76689 (EXLCZ)995400000000042228 |
collection |
bib_alma |
record_format |
marc |
spelling |
Banos, Oresti edt Ubiquitous Technologies for Emotion Recognition Basel, Switzerland MDPI - Multidisciplinary Digital Publishing Institute 2021 1 electronic resource (174 p.) text txt rdacontent computer c rdamedia online resource cr rdacarrier Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understand human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions. English Information technology industries bicssc self-management interview application emotion analysis facial recognition image-mining deep convolutional neural network emotion recognition pattern recognition texture descriptors mobile tool neuromarketing brain computer interface (BCI) consumer preferences EEG signal deep learning deep neural network (DNN) electroencephalogram (EEG) logistic regression Gaussian kernel Laplacian prior affective computing human–robot interaction thermal IR imaging social robots facial expression analysis line segment feature analysis dimensionality reduction convolutional recurrent neural network driver health risk intelligent speech signal processing human computer interaction supervised learning computer vision optical flow micro facial expressions real-time processing driver stress state IR imaging machine learning support vector machine (SVR) advanced driver-assistance systems (ADAS) artificial intelligence image processing video processing 3-0365-1802-9 
3-0365-1801-0 Castro, Luis A. edt Villalonga, Claudia edt Banos, Oresti oth Castro, Luis A. oth Villalonga, Claudia oth |
language |
English |
format |
eBook |
author2 |
Castro, Luis A. Villalonga, Claudia Banos, Oresti Castro, Luis A. Villalonga, Claudia |
author_facet |
Castro, Luis A. Villalonga, Claudia Banos, Oresti Castro, Luis A. Villalonga, Claudia |
author2_variant |
o b ob l a c la lac c v cv |
author2_role |
HerausgeberIn HerausgeberIn Sonstige Sonstige Sonstige |
title |
Ubiquitous Technologies for Emotion Recognition |
spellingShingle |
Ubiquitous Technologies for Emotion Recognition |
title_full |
Ubiquitous Technologies for Emotion Recognition |
title_fullStr |
Ubiquitous Technologies for Emotion Recognition |
title_full_unstemmed |
Ubiquitous Technologies for Emotion Recognition |
title_auth |
Ubiquitous Technologies for Emotion Recognition |
title_new |
Ubiquitous Technologies for Emotion Recognition |
title_sort |
ubiquitous technologies for emotion recognition |
publisher |
MDPI - Multidisciplinary Digital Publishing Institute |
publishDate |
2021 |
physical |
1 electronic resource (174 p.) |
isbn |
3-0365-1802-9 3-0365-1801-0 |
illustrated |
Not Illustrated |
work_keys_str_mv |
AT banosoresti ubiquitoustechnologiesforemotionrecognition AT castroluisa ubiquitoustechnologiesforemotionrecognition AT villalongaclaudia ubiquitoustechnologiesforemotionrecognition |
status_str |
n |
ids_txt_mv |
(CKB)5400000000042228 (oapen)https://directory.doabooks.org/handle/20.500.12854/76689 (EXLCZ)995400000000042228 |
carrierType_str_mv |
cr |
is_hierarchy_title |
Ubiquitous Technologies for Emotion Recognition |
author2_original_writing_str_mv |
noLinkedField noLinkedField noLinkedField noLinkedField noLinkedField |
_version_ |
1796648786766856192 |
fullrecord |
<?xml version="1.0" encoding="UTF-8"?><collection xmlns="http://www.loc.gov/MARC21/slim"><record><leader>03456nam-a2200853z--4500</leader><controlfield tag="001">993546003204498</controlfield><controlfield tag="005">20231214133410.0</controlfield><controlfield tag="006">m o d </controlfield><controlfield tag="007">cr|mn|---annan</controlfield><controlfield tag="008">202201s2021 xx |||||o ||| 0|eng d</controlfield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(CKB)5400000000042228</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(oapen)https://directory.doabooks.org/handle/20.500.12854/76689</subfield></datafield><datafield tag="035" ind1=" " ind2=" "><subfield code="a">(EXLCZ)995400000000042228</subfield></datafield><datafield tag="041" ind1="0" ind2=" "><subfield code="a">eng</subfield></datafield><datafield tag="100" ind1="1" ind2=" "><subfield code="a">Banos, Oresti</subfield><subfield code="4">edt</subfield></datafield><datafield tag="245" ind1="1" ind2="0"><subfield code="a">Ubiquitous Technologies for Emotion Recognition</subfield></datafield><datafield tag="260" ind1=" " ind2=" "><subfield code="a">Basel, Switzerland</subfield><subfield code="b">MDPI - Multidisciplinary Digital Publishing Institute</subfield><subfield code="c">2021</subfield></datafield><datafield tag="300" ind1=" " ind2=" "><subfield code="a">1 electronic resource (174 p.)</subfield></datafield><datafield tag="336" ind1=" " ind2=" "><subfield code="a">text</subfield><subfield code="b">txt</subfield><subfield code="2">rdacontent</subfield></datafield><datafield tag="337" ind1=" " ind2=" "><subfield code="a">computer</subfield><subfield code="b">c</subfield><subfield code="2">rdamedia</subfield></datafield><datafield tag="338" ind1=" " ind2=" "><subfield code="a">online resource</subfield><subfield code="b">cr</subfield><subfield code="2">rdacarrier</subfield></datafield><datafield tag="520" ind1=" " ind2=" "><subfield code="a">Emotions play a very 
important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance to understand human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions, continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.</subfield></datafield><datafield tag="546" ind1=" " ind2=" "><subfield code="a">English</subfield></datafield><datafield tag="650" ind1=" " ind2="7"><subfield code="a">Information technology industries</subfield><subfield code="2">bicssc</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">self-management interview application</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">emotion analysis</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">facial recognition</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">image-mining</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">deep convolutional neural network</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">emotion recognition</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">pattern recognition</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">texture descriptors</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">mobile tool</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield 
code="a">neuromarketing</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">brain computer interface (BCI)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">consumer preferences</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">EEG signal</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">deep learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">deep neural network (DNN)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">electroencephalogram (EEG)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">logistic regression</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Gaussian kernel</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">Laplacian prior</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">affective computing</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">human–robot interaction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">thermal IR imaging</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">social robots</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">facial expression analysis</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">line segment feature analysis</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">dimensionality reduction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">convolutional recurrent neural network</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">driver health risk</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">intelligent speech signal 
processing</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">human computer interaction</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">supervised learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">computer vision</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">optical flow</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">micro facial expressions</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">real-time processing</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">driver stress state</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">IR imaging</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">machine learning</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">support vector machine (SVR)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">advanced driver-assistance systems (ADAS)</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">artificial intelligence</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">image processing</subfield></datafield><datafield tag="653" ind1=" " ind2=" "><subfield code="a">video processing</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">3-0365-1802-9</subfield></datafield><datafield tag="776" ind1=" " ind2=" "><subfield code="z">3-0365-1801-0</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Castro, Luis A.</subfield><subfield code="4">edt</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Villalonga, Claudia</subfield><subfield code="4">edt</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Banos, Oresti</subfield><subfield 
code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Castro, Luis A.</subfield><subfield code="4">oth</subfield></datafield><datafield tag="700" ind1="1" ind2=" "><subfield code="a">Villalonga, Claudia</subfield><subfield code="4">oth</subfield></datafield><datafield tag="906" ind1=" " ind2=" "><subfield code="a">BOOK</subfield></datafield><datafield tag="ADM" ind1=" " ind2=" "><subfield code="b">2023-12-15 05:52:57 Europe/Vienna</subfield><subfield code="f">system</subfield><subfield code="c">marc21</subfield><subfield code="a">2022-04-04 09:22:53 Europe/Vienna</subfield><subfield code="g">false</subfield></datafield><datafield tag="AVE" ind1=" " ind2=" "><subfield code="i">DOAB Directory of Open Access Books</subfield><subfield code="P">DOAB Directory of Open Access Books</subfield><subfield code="x">https://eu02.alma.exlibrisgroup.com/view/uresolver/43ACC_OEAW/openurl?u.ignore_date_coverage=true&amp;portfolio_pid=5338063520004498&amp;Force_direct=true</subfield><subfield code="Z">5338063520004498</subfield><subfield code="b">Available</subfield><subfield code="8">5338063520004498</subfield></datafield></record></collection> |
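The fullrecord field above embeds a MARC21-slim XML document. A minimal, stdlib-only sketch of pulling subfield values out of such a blob (the `sample` string and the `subfields` helper are illustrative assumptions, not part of this record):

```python
# Extract MARC datafield/subfield values from a MARC21-slim XML string.
import xml.etree.ElementTree as ET

# MARC21-slim namespace, as declared in the fullrecord blob above.
NS = {"m": "http://www.loc.gov/MARC21/slim"}

# Tiny illustrative excerpt of such a record (not the full blob).
sample = """<collection xmlns="http://www.loc.gov/MARC21/slim"><record>
<datafield tag="245" ind1="1" ind2="0">
  <subfield code="a">Ubiquitous Technologies for Emotion Recognition</subfield>
</datafield>
<datafield tag="776" ind1=" " ind2=" ">
  <subfield code="z">3-0365-1802-9</subfield>
</datafield>
</record></collection>"""

def subfields(record_xml, tag, code):
    """Return all values of datafield `tag`, subfield `code`."""
    root = ET.fromstring(record_xml)
    xpath = f".//m:datafield[@tag='{tag}']/m:subfield[@code='{code}']"
    return [el.text for el in root.findall(xpath, NS)]

print(subfields(sample, "245", "a"))  # title proper (245 $a)
print(subfields(sample, "776", "z"))  # related-edition ISBN (776 $z)
```

Note that `ET.fromstring` requires well-formed XML, so any raw `&` in embedded URLs (as in the uresolver link) must be escaped as `&amp;` before parsing.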