
dc.contributor.author: Eroglu Erdem, Cigdem
dc.contributor.author: Turan, Cigdem
dc.contributor.author: Aydin, Zafer
dc.date.accessioned: 2024-06-12T08:20:00Z
dc.date.available: 2024-06-12T08:20:00Z
dc.date.issued: 2015
dc.identifier.issn: 1380-7501
dc.identifier.uri: https://doi.org/10.1007/s11042-014-1986-2
dc.identifier.uri: https://link.springer.com/article/10.1007/s11042-014-1986-2
dc.identifier.uri: https://hdl.handle.net/20.500.12573/2202
dc.description.abstract: Access to audio-visual databases that contain sufficient variety and are richly annotated is essential for assessing the performance of algorithms in affective computing applications that require emotion recognition from face and/or speech data. Most databases available today were recorded under tightly controlled conditions, are mostly acted, and do not contain speech data. We first present a semi-automatic method that can extract audio-visual facial video clips from movies and TV programs in any language. The method is based on automatic detection and tracking of faces in a movie until the face is occluded or a scene cut occurs. We also created a video-based database, named BAUM-2, which consists of annotated audio-visual facial clips in several languages. The collected clips simulate real-world conditions by containing various head poses, illumination conditions, accessories, temporary occlusions, and subjects with a wide range of ages. The proposed semi-automatic affective clip extraction method can easily be used to extend the database with clips in other languages. We also created an image-based facial expression database, named BAUM-2i, from the peak frames of the video clips. Baseline image- and video-based facial expression recognition results using state-of-the-art features and classifiers indicate that facial expression recognition under tough, close-to-natural conditions is quite challenging.
dc.language.iso: eng
dc.publisher: Kluwer Academic Publishers (SpringerLink)
dc.relation.isversionof: 10.1007/s11042-014-1986-2
dc.rights: info:eu-repo/semantics/closedAccess
dc.subject: Affective database
dc.subject: Audio-visual affective database
dc.subject: Facial expression recognition
dc.title: BAUM-2: a multilingual audio-visual affective face database
dc.type: article
dc.contributor.department: AGÜ, Faculty of Engineering, Department of Computer Engineering
dc.contributor.authorID: 0000-0001-7686-6298
dc.contributor.institutionauthor: Aydin, Zafer
dc.identifier.volume: 74
dc.identifier.startpage: 7429
dc.identifier.endpage: 7459
dc.relation.journal: Multimedia Tools and Applications
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member

