Seminar announcement - Human Behaviour Understanding from First Person (Egocentric) Vision

by Giuseppe Vizzari

I have received and am forwarding a seminar announcement that may be of interest to you.

Kind regards
Giuseppe Vizzari

I'd like to inform and/or remind you that tomorrow, Monday 9/5/2022 at 14:40, we will have an online seminar by Dr Francesco Ragusa (CEO of NEXT VISION s.r.l. / postdoc at UniCT) on the application of AI and Computer Vision technologies in industrial and cultural settings.

I would really appreciate it if you could share this invitation on the relevant departmental mailing lists and with any colleague or student who may be interested.


Title: Human Behaviour Understanding from First Person (Egocentric) Vision

 

Abstract:

The First Person (Egocentric) Vision (FPV) paradigm allows an intelligent system to observe the scene from the point of view of an agent equipped with a camera. Wearable cameras make it possible to collect images and videos from the human's perspective, which can be processed with Computer Vision and Machine Learning to enable automated analysis of human behavior. To study human behavior from the first-person point of view, we considered both the cultural heritage and industrial domains.

Equipping visitors of a cultural site with a wearable device makes it easy to collect information which can be used both online, to assist the visitor, and offline, to support the manager of the site. Despite the positive impact such technologies can have on cultural heritage, the topic is currently understudied due to the limited number of public datasets suitable for studying the considered problems. To address this issue, we proposed two egocentric datasets for visitors' behavior understanding in cultural sites. Building on these studies, we developed the VEDI System, the final integrated wearable system designed to assist visitors of cultural sites.

While human-object interactions have been thoroughly investigated in third-person vision, the problem has been understudied in egocentric settings and in industrial scenarios. To fill this gap, we present MECCANO, the first dataset of egocentric videos composed of multimodal data for studying human-object interactions in industrial-like settings. We report a benchmark aimed at studying egocentric human-object interactions in industrial-like domains, which shows that current state-of-the-art approaches achieve limited performance on this challenging dataset.

 


Bio:

Francesco Ragusa is a postdoc at the University of Catania. He has been a member of the IPLAB research group (University of Catania) since 2015. He completed an Industrial Doctorate in Computer Science in 2021 under the supervision of Professor Giovanni Maria Farinella. During his PhD studies, he spent a period as a Research Student at the University of Hertfordshire, UK. He received his master’s degree in Computer Science (cum laude) in 2017 from the University of Catania. Francesco has authored one patent and more than 10 papers in international journals and international conference proceedings. He serves as a reviewer for several international conferences in the fields of computer vision and multimedia, such as CVPR, ECCV, BMVC, WACV, ACM Multimedia, ICPR, and ICIAP, and for international journals, including Pattern Recognition Letters and IET Computer Vision. Francesco Ragusa is a member of IEEE, CVF, and CVPL. He has been involved in different research projects and has focused on the problem of human-object interaction anticipation from egocentric videos as a key to analyzing and understanding human behavior in industrial workplaces. He has been co-founder and CEO of NEXT VISION s.r.l., an academic spin-off of the University of Catania, since 2021. His research interests concern Computer Vision, Pattern Recognition, and Machine Learning, with a focus on First Person Vision.