78 results for Image Processing, Visual Prostheses, Visual Information, Artificial Human Vision, Visual Perception
Abstract:
Individuals with vestibular dysfunction may experience visual vertigo (VV), in which symptoms are provoked or exacerbated by excessive or disorientating visual stimuli (e.g. supermarkets). VV can significantly improve when customized vestibular rehabilitation exercises are combined with exposure to optokinetic stimuli. Virtual reality (VR), which immerses patients in realistic, visually challenging environments, has also been suggested as an adjunct to vestibular rehabilitation to improve VV symptoms. This pilot study compared the responses of sixteen patients with unilateral peripheral vestibular disorder randomly allocated to a VR regime incorporating exposure to a static (Group S) or dynamic (Group D) VR environment. Participants practiced vestibular exercises, twice weekly for four weeks, inside a static (Group S) or dynamic (Group D) virtual crowded square environment, presented in an immersive projection theatre (IPT), and received a vestibular exercise program to practice on days not attending clinic. A third group (Group D1) completed both the static and dynamic VR training. Treatment response was assessed with the Dynamic Gait Index and questionnaires concerning symptom triggers and psychological state. At final assessment, significant between-group differences in VV symptoms were noted for Groups D (p = 0.001) and D1 (p = 0.03) compared to Group S, with the former two showing significant improvements of 59.2% and 25.8% respectively, compared to 1.6% for the latter. Depression scores improved only for Group S (p = 0.01), while a trend towards significance was noted for Group D regarding anxiety scores (p = 0.07). Conclusion: Exposure to dynamic VR environments should be considered as a useful adjunct to vestibular rehabilitation programs for patients with peripheral vestibular disorders and VV symptoms.
Abstract:
Previous work has reported that it is not difficult to give people the illusion of ownership over an artificial body, providing a powerful tool for the investigation of the neural and cognitive mechanisms underlying body perception and self-consciousness. We present an experimental study that uses immersive virtual reality (IVR) focused on identifying the perceptual building blocks of this illusion. We systematically manipulated visuotactile and visual sensorimotor contingencies, visual perspective, and the appearance of the virtual body in order to assess their relative role and mutual interaction. Consistent results from subjective reports and physiological measures showed that a first-person perspective over a fake humanoid body is essential for eliciting a body ownership illusion. We found that the illusion of ownership can be generated when the virtual body has a realistic skin tone and spatially substitutes the real body seen from a first-person perspective. In this case there is no need for an additional contribution of congruent visuotactile or sensorimotor cues. Additionally, we found that the processing of incongruent perceptual cues can be modulated by the level of the illusion: when the illusion is strong, incongruent cues are not experienced as incorrect. Participants exposed to asynchronous visuotactile stimulation can experience the ownership illusion and perceive touch as originating from an object seen to contact the virtual body. Analogously, when the level of realism of the virtual body is not high enough and/or when there is no spatial overlap between the two bodies, then the contribution of congruent multisensory and/or sensorimotor cues is required for evoking the illusion. On the basis of these results and inspired by findings from neurophysiological recordings in the monkey, we propose a model that accounts for many of the results reported in the literature.
Abstract:
Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination embodied as a robotic device, and where typically participants have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented in the destination by a physical robot (TO) and simultaneously the remote place and entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but where his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, but the human interacting with the rat on a human scale, and the rat interacting with the human on the rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and also a study that was designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of and interaction with animals but at human scale.
Abstract:
Observers are often required to adjust actions with objects that change their speed. However, no evidence for a direct sense of acceleration has been found so far. Instead, observers seem to detect changes in velocity within a temporal window when confronted with motion in the frontal plane (2D motion). Furthermore, recent studies suggest that motion-in-depth is detected by tracking changes of position in depth. Therefore, in order to sense acceleration in depth, a kind of second-order computation would have to be carried out by the visual system. In two experiments, we show that observers misperceive acceleration of head-on approaches, at least within the ranges we used (600-800 ms), resulting in an overestimation of arrival time. Regardless of the viewing condition (monocular only, or monocular and binocular), the response pattern conformed to a constant-velocity strategy. However, when binocular information was available, the overestimation was greatly reduced.
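The overestimation reported here is exactly what a constant-velocity strategy predicts: extrapolating arrival time from the current distance and speed, while the approach is in fact accelerating, yields a later arrival time than the true one. A minimal numerical sketch (the distance, speed and acceleration below are illustrative values, not the stimuli used in the study):

import math

# Hypothetical head-on approach: the object accelerates towards the observer.
d0 = 6.0   # current distance (m)
v0 = 7.5   # current approach speed (m/s)
a = 3.0    # constant acceleration (m/s^2)

# Constant-velocity strategy: extrapolate arrival time ignoring acceleration.
ttc_const_vel = d0 / v0

# True arrival time: solve d0 = v0*t + 0.5*a*t**2 for the positive root.
ttc_true = (-v0 + math.sqrt(v0 ** 2 + 2 * a * d0)) / a

print(f"constant-velocity estimate: {ttc_const_vel:.3f} s")   # 0.800 s
print(f"true arrival time:          {ttc_true:.3f} s")        # ~0.702 s
print(f"overestimation:             {ttc_const_vel - ttc_true:+.3f} s")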
Abstract:
Participants in an immersive virtual environment interact with the scene from an egocentric point of view, that is, from where their bodies appear to be located, rather than from outside as if looking through a window. People interact through normal body movements, such as head-turning, reaching, and bending, and, within the tracking limitations, move through the environment or effect changes within it in natural ways.
Abstract:
Does realistic lighting in an immersive virtual reality application enhance presence, where participants feel that they are in the scene and behave correspondingly? Our previous study indicated that presence is more likely with real-time ray tracing compared with ray casting, but we could not separate the effects of overall quality of illumination from the dynamic effects of real-time shadows and reflections. Here we describe an experiment where 20 people experienced a scene rendered with global or local illumination. However, in both conditions there were dynamically changing shadows and reflections. We found that the quality of illumination did not impact presence, so the earlier result must have been due to dynamic shadows and reflections. However, global illumination resulted in greater plausibility: participants were more likely to respond as if the virtual events were real. We conclude that global illumination does impact the responses of participants and is worth the effort.
Abstract:
The visual angle that is projected by an object (e.g. a ball) on the retina depends on the object's size and distance. Without further information, however, the visual angle is ambiguous with respect to size and distance, because equal visual angles can be obtained from a big ball at a longer distance and a smaller one at a correspondingly shorter distance. Failure to recover the true 3D structure of the object causing the ambiguous retinal image (e.g. a ball's physical size) can lead to a timing error when catching the ball. Two opposing views currently prevail on how people resolve this ambiguity when estimating time to contact. One explanation challenges any inference about what causes the retinal image (i.e. the necessity to recover this 3D structure) and instead favors a direct analysis of optic flow. In contrast, the second view suggests that action timing could instead be based on obtaining an estimate of the 3D structure of the scene. Under the latter view, systematic errors are predicted if our inference of the 3D structure fails to reveal the underlying cause of the retinal image. Here we show that hand closure in catching virtual balls is triggered by visual angle, using an assumption of a constant ball size. As a consequence of this assumption, hand closure starts when the ball is at a similar distance across trials. From that distance on, the remaining arrival time therefore depends on the ball's speed. In order to time the catch successfully, closing time was coupled with the ball's speed during the motor phase. This strategy led to increased precision in catching, but at the cost of committing systematic errors.
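The visual-angle ambiguity and the constant-size assumption can be made concrete with a short sketch; the ball diameters and distances below are illustrative and are not taken from the experiment:

import math

def visual_angle(diameter, distance):
    # Angle (in radians) subtended by a ball of a given diameter at a given distance.
    return 2 * math.atan(diameter / (2 * distance))

# Ambiguity: a big far ball and a small near ball project the same angle.
print(visual_angle(0.22, 4.0))   # e.g. a 22 cm ball at 4 m
print(visual_angle(0.11, 2.0))   # half the size at half the distance -> same angle

# Constant-size assumption: invert the relation with an assumed diameter.
ASSUMED_DIAMETER = 0.22

def inferred_distance(theta, diameter=ASSUMED_DIAMETER):
    return diameter / (2 * math.tan(theta / 2))

# Triggering hand closure at a fixed visual angle is then equivalent to
# triggering it at a fixed inferred distance, whatever the true ball size.
theta_trigger = visual_angle(ASSUMED_DIAMETER, 1.0)
print(inferred_distance(theta_trigger))   # -> 1.0 m for any ball producing this angle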
Abstract:
In this paper a colour texture segmentation method which unifies region and boundary information is proposed. The algorithm uses a coarse detection of the perceptual (colour and texture) edges of the image to adequately place and initialise a set of active regions. The colour texture of regions is modelled by the conjunction of non-parametric kernel density estimation techniques (which allow the colour behaviour to be estimated) and classical co-occurrence matrix based texture features. Region information is thus defined, and accurate boundary information can be extracted to guide the segmentation process. Regions concurrently compete for the image pixels in order to segment the whole image, taking both information sources into account. Furthermore, experimental results are presented which demonstrate the performance of the proposed method.
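As a rough illustration of the two region cues named above (a non-parametric kernel density estimate of colour together with classical co-occurrence texture statistics), the following sketch uses scipy.stats.gaussian_kde and the graycomatrix/graycoprops routines of scikit-image; it is not the authors' implementation, and the active-region competition itself is omitted:

import numpy as np
from scipy.stats import gaussian_kde
from skimage.feature import graycomatrix, graycoprops

def colour_model(region_pixels):
    # Kernel density estimate of a region's colour distribution.
    # region_pixels: (N, 3) array of RGB values in [0, 1].
    # Returns a callable giving the estimated density of new colours.
    kde = gaussian_kde(region_pixels.T)          # KDE over the 3-D colour space
    return lambda pixels: kde(pixels.T)

def texture_features(gray_patch):
    # Classical co-occurrence (GLCM) statistics of a grey-level patch.
    glcm = graycomatrix(gray_patch, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p).mean()
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

# Toy usage: score a candidate colour against a region's colour model.
rng = np.random.default_rng(0)
region = rng.uniform(0.4, 0.6, size=(500, 3))    # pixels of a greyish region
density = colour_model(region)
print(density(np.array([[0.5, 0.5, 0.5]])))      # high density: colour fits the region
print(density(np.array([[0.0, 0.0, 1.0]])))      # low density: colour does not fit

patch = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
print(texture_features(patch))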
Abstract:
An unsupervised approach to image segmentation which fuses region and boundary information is presented. The proposed approach takes advantage of the combined use of three different strategies: the guidance of seed placement, the control of the decision criterion, and boundary refinement. The new algorithm uses the boundary information to initialize a set of active regions which compete for the pixels in order to segment the whole image. The method is implemented on a multiresolution representation which ensures noise robustness as well as computational efficiency. The accuracy of the segmentation results has been demonstrated through an objective comparative evaluation of the method.
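A loose analogue of the boundary-guided seed placement idea (not the authors' algorithm, and without the multiresolution machinery) can be sketched with standard scikit-image routines: the boundary response indicates where region interiors lie, seeds are planted there, and marker-based region growing lets the regions compete for the remaining pixels:

import numpy as np
from skimage import data, filters, measure, segmentation

# Boundary information: a gradient map whose high values mark the edges.
image = data.coins()
gradient = filters.sobel(image)

# Seed placement guided by the boundary map: connected areas with a weak
# boundary response are taken as region interiors and labelled as seeds.
interior = gradient < np.percentile(gradient, 30)
markers = measure.label(interior)

# Region growing from the seeds: regions compete for pixels and meet along
# strong boundaries (here via a marker-based watershed on the gradient map).
labels = segmentation.watershed(gradient, markers)
print(labels.max(), "regions")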
Abstract:
This work presents a system for detecting and classifying binary objects according to their shape. In the first step of the procedure, a filter is applied to extract the object's contour. From the shape-point information, a BSM descriptor with highly descriptive, universal, and invariant features is obtained. In the second phase of the system, the descriptor information is learned and classified using AdaBoost and Error-Correcting Output Codes. Public databases, in both grayscale and colour, have been used to validate the implementation of the designed system. In addition, the system provides an interactive interface in which different image processing methods can be applied.
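A rough sketch of the learning stage described above, shape descriptors classified with AdaBoost wrapped in error-correcting output codes, using scikit-learn. The descriptor below is a simplified zoned contour histogram standing in for the BSM descriptor, and the contours are synthetic toy shapes rather than the public databases mentioned:

import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.multiclass import OutputCodeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def zoned_histogram(points, grid=4):
    # Simplified stand-in for a BSM-style descriptor: normalise the contour
    # points to the unit square and histogram them over a grid x grid layout.
    pts = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-9)
    hist, _, _ = np.histogram2d(pts[:, 0], pts[:, 1], bins=grid, range=[[0, 1], [0, 1]])
    return (hist / hist.sum()).ravel()

def synthetic_contour(label, n=200, rng=None):
    # Toy contours: class 0 is a circle, class 1 a square (illustration only).
    rng = rng if rng is not None else np.random.default_rng()
    if label == 0:
        t = np.linspace(0, 2 * np.pi, n, endpoint=False)
        xy = np.c_[np.cos(t), np.sin(t)]
    else:
        s = np.linspace(0, 4, n, endpoint=False)
        x = np.where(s < 1, s, np.where(s < 2, 1, np.where(s < 3, 3 - s, 0)))
        y = np.where(s < 1, 0, np.where(s < 2, s - 1, np.where(s < 3, 1, 4 - s)))
        xy = np.c_[x, y]
    return xy + rng.normal(scale=0.02, size=xy.shape)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=300)
X = np.array([zoned_histogram(synthetic_contour(k, rng=rng)) for k in y])
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# AdaBoost as the base learner, wrapped in an error-correcting output code scheme.
clf = OutputCodeClassifier(AdaBoostClassifier(n_estimators=50), code_size=2, random_state=0)
clf.fit(Xtr, ytr)
print("accuracy:", accuracy_score(yte, clf.predict(Xte)))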