Title: Interacting in Extended Reality: perception and action from real to virtual
Speaker: Prof Manuela Chessa, University of Genoa
Venue: U6-38
Date: Fri 5 April 2023
Streaming Link: https://meet.google.com/vnk-npek-che
For more information: dimitri.ognibene@unimib.it
Abstract: Many everyday actions, such as grasping, picking up objects,
walking, or sitting on a chair, are performed without much effort or
appreciable error. Visual information is essential in the first stages
of a movement, e.g., when planning to grasp an object. Beyond common
real-world experience, Virtual Reality (VR) systems are now widespread
in many contexts, e.g., training, simulation, and digital twinning.
Various forms of interaction allow users to act inside virtual
environments (VEs) and manipulate objects. Solutions allowing natural
interaction, e.g., with bare hands, are still less robust than
standard, controller-based ones. Many factors affect the grasping of
virtual objects: errors and inconsistencies in tracking the users'
fingers, and thus in their replicas inside the VE; the lack of tactile
and haptic feedback; and the absence of friction and weight. In this
context, Extended Reality (XR) with passive haptics, i.e., the
combination of VR and real-world elements, appears promising. The main
challenge is maintaining the alignment between the virtual and real
reference frames, so as to preserve the perceptual (visual) coherence
of the XR environment. In XR, it then becomes possible to modify the
appearance of real objects: their physical properties are preserved
while their visual aspect is modulated and augmented. Both natural and
supernatural situations can be simulated, allowing the creation of
novel interactive systems and the study of the interplay between
visual perception and grasping actions.
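The reference-frame alignment mentioned in the abstract can be illustrated with a short sketch. Given a few 3D points tracked in both the real and the virtual coordinate frames (e.g., marker or fingertip positions), a rigid transform mapping one frame onto the other can be estimated with the Kabsch algorithm. This is a generic illustrative example, not the speaker's method; the function name and the use of point correspondences are assumptions.

```python
import numpy as np

def align_frames(real_pts, virtual_pts):
    """Estimate the rigid transform (R, t) mapping real-frame points onto
    virtual-frame points via the Kabsch algorithm (SVD of the
    cross-covariance of the centered point sets).

    Illustrative sketch: assumes point correspondences are already known.
    """
    real_pts = np.asarray(real_pts, dtype=float)
    virtual_pts = np.asarray(virtual_pts, dtype=float)

    # Center both point clouds on their centroids.
    cr = real_pts.mean(axis=0)
    cv = virtual_pts.mean(axis=0)
    A = real_pts - cr
    B = virtual_pts - cv

    # SVD of the cross-covariance matrix.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)

    # Correct for a possible reflection so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T

    # Translation aligning the centroids after rotation.
    t = cv - R @ cr
    return R, t
```

In an XR setting, a transform like this would be re-estimated (or drift-corrected) whenever tracking updates arrive, so that the virtual replica of a real object stays visually coherent with its physical counterpart.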