940 results for experimental visual perception
Abstract:
This paper describes a simple low-cost approach to adding an element of haptic interaction within a virtual environment. Using off-the-shelf hardware and software, we describe a simple setup that can be used to physically explore virtual objects in space. The setup comprises a prototype glove with a number of vibrating actuators that provide the haptic feedback, a Kinect camera that tracks the user's hand, and a virtual reality development environment. As a proof of concept, and to test the efficiency of the system as well as its potential applications, we developed a simple application in which four different shapes were created within a virtual environment so that users could explore them and guess their shape through touch alone.
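As a concrete illustration of the kind of logic such a setup needs, the sketch below compares a tracked fingertip position against a virtual sphere and converts the distance to the surface into an actuator drive level. This is a minimal sketch under assumed conventions (metres, a single sphere, a linear ramp near the surface), not the authors' implementation; all names and thresholds are hypothetical.

import math

def vibration_intensity(fingertip, sphere_center, sphere_radius, falloff=0.05):
    """Return an actuator drive level in [0, 1] for one tracked fingertip."""
    dx, dy, dz = (f - c for f, c in zip(fingertip, sphere_center))
    dist_to_surface = math.sqrt(dx * dx + dy * dy + dz * dz) - sphere_radius
    if dist_to_surface <= 0.0:          # fingertip touching or inside the virtual object
        return 1.0
    if dist_to_surface >= falloff:      # too far from the surface: no feedback
        return 0.0
    return 1.0 - dist_to_surface / falloff  # linear ramp near the surface

# Example: fingertip 2 cm from the surface of a 10 cm sphere at the origin -> 0.6
print(vibration_intensity((0.0, 0.0, 0.12), (0.0, 0.0, 0.0), 0.10))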
Abstract:
An experiment was carried out to examine the impact on the electrodermal activity of people approached by groups of one or four virtual characters at varying distances. The premise, based on proxemics theory, was that the closer the virtual characters approached the participant, the greater the level of physiological arousal. Physiological arousal was measured by the number of skin conductance responses within a short time period after the approach, and by the maximum change in skin conductance level 5 s after the approach. The virtual characters were each either female or a cylinder of human size, and one or four characters approached each subject a total of 12 times. Twelve male subjects were recruited for the experiment. The results suggest that the number of skin conductance responses after the approach and the change in skin conductance level increased the closer the virtual characters came to the participants. Moreover, these response variables were inversely correlated with the number of visits, showing a typical adaptation effect. There was some evidence to suggest that the number of characters who approached simultaneously (one or four) was positively associated with the responses. Surprisingly, there was no evidence of a difference in response between the humanoid characters and the cylinders on the basis of this physiological data. It is suggested that the similarity in this quantitative arousal response to virtual characters and virtual objects might mask a profound difference in qualitative response, an interpretation supported by questionnaire and interview results. Overall, the experiment supported the premise that people exhibit heightened physiological arousal the closer they are approached by virtual characters.
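For readers unfamiliar with these two electrodermal measures, the sketch below shows one plausible way to compute them from a raw skin conductance trace: counting threshold-exceeding rises within a post-event window, and taking the maximum change in level within 5 s of the event. It is a minimal sketch, not the authors' analysis pipeline; the sampling rate, window length and 0.05 microsiemens amplitude criterion are assumptions.

import numpy as np

def scr_count(eda, event_idx, fs=32.0, window_s=5.0, min_amp=0.05):
    """Count rises exceeding min_amp (microsiemens) inside the post-event window."""
    seg = eda[event_idx:event_idx + int(window_s * fs)]
    rising = np.diff(seg) > 0
    count, trough = 0, seg[0]
    for i in range(1, len(seg)):
        if not rising[i - 1]:                # signal falling: track the latest trough
            trough = min(trough, seg[i])
        elif seg[i] - trough >= min_amp:     # rise from the trough exceeds the criterion
            count += 1
            trough = float("inf")            # require a new trough before counting again
    return count

def max_scl_change(eda, event_idx, fs=32.0, window_s=5.0):
    """Maximum deviation of SCL from its value at the event, within window_s seconds."""
    seg = eda[event_idx:event_idx + int(window_s * fs)]
    return float(np.max(np.abs(seg - seg[0])))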
Abstract:
Immersive virtual reality (IVR) typically generates the illusion in participants that they are in the displayed virtual scene, where they can experience and interact in events as if they were really happening. Teleoperator (TO) systems place people at a remote physical destination, embodied as a robotic device, where participants typically have the sensation of being at the destination, with the ability to interact with entities there. In this paper, we show how to combine IVR and TO to allow a new class of application. The participant in the IVR is represented at the destination by a physical robot (TO), and simultaneously the remote place and the entities within it are represented to the participant in the IVR. Hence, the IVR participant has a normal virtual reality experience, but one in which his or her actions and behaviour control the remote robot and can therefore have physical consequences. Here, we show how such a system can be deployed to allow a human and a rat to operate together, with the human interacting with the rat on a human scale and the rat interacting with the human on a rat scale. The human is represented in a rat arena by a small robot that is slaved to the human's movements, whereas the tracked rat is represented to the human in the virtual reality by a humanoid avatar. We describe the system and a study designed to test whether humans can successfully play a game with the rat. The results show that the system functioned well and that the humans were able to interact with the rat to fulfil the tasks of the game. This system opens up the possibility of new applications in the life sciences involving participant observation of, and interaction with, animals, but at human scale.
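At the core of such a shared human/rat space is a coordinate mapping between the two scales: the tracked human position is scaled down into the rat arena to drive the small robot, while the tracked rat position is scaled up into the virtual room to place the humanoid avatar. The sketch below shows the basic linear mapping; the arena and room dimensions are made-up examples, not values from the paper.

def map_between_spaces(pos, src_size, dst_size):
    """Linearly map a 2D position from one rectangular space to another."""
    return tuple(p / s * d for p, s, d in zip(pos, src_size, dst_size))

ROOM_M = (6.0, 6.0)     # tracked human space, metres (assumed)
ARENA_M = (1.5, 1.5)    # rat arena, metres (assumed)

robot_target = map_between_spaces((3.0, 1.5), ROOM_M, ARENA_M)     # -> (0.75, 0.375)
avatar_pos = map_between_spaces((0.75, 0.375), ARENA_M, ROOM_M)    # -> (3.0, 1.5)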
Abstract:
Research on face recognition and social judgment usually addresses the manipulation of facial features (eyes, nose, mouth, etc.). Using a procedure based on a Stroop-like task, Montepare and Opeyo (J Nonverbal Behav 26(1):43-59, 2002) established a hierarchy of the relative salience of cues based on facial attributes when differentiating faces. Using the same perceptual interference task, we established a hierarchy of facial features. Twenty-three participants (13 men and 10 women) volunteered for an experiment in which they compared pairs of frontal faces. The participants had to judge whether the eyes, nose, mouth and chin in the pair of images were the same or different. The factors manipulated were the target-distracter combination (4 face components × 3 distracters), interference (absent vs. present) and correct answer (same vs. different). The analysis of reaction times and errors showed that the eyes and mouth were processed before the chin and nose, thus highlighting the critical importance of the eyes and mouth, as shown by previous research.
Abstract:
Brain-computer interfaces (BCIs) are becoming more and more popular as an input device for virtual worlds and computer games. Depending on their function, a major drawback is the mental workload associated with their use, and significant effort and training are required to control them effectively. In this paper, we present two studies assessing how the mental workload of a P300-based BCI affects participants' reported sense of presence in a virtual environment (VE). In the first study, we employ a BCI exploiting the P300 event-related potential (ERP) that allows control of over 200 items in a virtual apartment. In the second study, the BCI is replaced by a gaze-based selection method coupled with wand navigation. In both studies, overall performance is measured and individual presence scores are assessed by means of a short questionnaire. The results suggest that there is no immediate benefit to visualizing events in the VE triggered by the BCI and that no learning about the layout of the virtual space takes place. To alleviate this, we propose that future P300-based BCIs in VR be set up so as to require users to make some inference about the virtual space, so that they become aware of it, which is likely to lead to higher reported presence.
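For context, the P300 principle such a BCI relies on can be illustrated with very little code: EEG epochs time-locked to flashes of the attended item contain a positive deflection roughly 300 ms after the flash, so averaging epochs per item and comparing amplitude in a late window is the simplest way to pick the intended item. The sketch below is a generic illustration, not the system used in these studies; the sampling rate and window bounds are assumptions.

import numpy as np

def score_items(epochs_by_item, fs=256.0, win=(0.25, 0.45)):
    """epochs_by_item: dict item -> array (n_epochs, n_samples), one channel, stimulus-locked.
    Returns the item whose averaged epoch has the largest mean amplitude in the late window."""
    lo, hi = int(win[0] * fs), int(win[1] * fs)
    scores = {item: float(np.mean(ep, axis=0)[lo:hi].mean())   # average epochs, then window mean
              for item, ep in epochs_by_item.items()}
    best = max(scores, key=scores.get)
    return best, scores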
Abstract:
Interception requires precise estimation of time-to-contact (TTC) information. A long-standing view posits that all relevant information for extracting TTC is available in the angular variables that result from the projection of distal objects onto the retina. The different timing models rooted in this tradition have consequently relied on combining visual angle and its rate of expansion in different ways, with tau being the most well-known solution for TTC...
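For reference, the standard derivation behind tau (general optics, not specific to this paper): for an object of physical size D approaching at constant speed v from distance Z, the visual angle and its rate of expansion yield time to contact directly, without recovering D or Z separately.

\theta \approx \frac{D}{Z}, \qquad
\dot{\theta} \approx \frac{D\,v}{Z^{2}}, \qquad
\tau \equiv \frac{\theta}{\dot{\theta}} \approx \frac{Z}{v} = \mathrm{TTC}.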
Abstract:
The integration of the human brain with computers is an interesting new area of applied neuroscience, where one application is the replacement of a person's real body by a virtual representation. Here we demonstrate that a virtual limb can be made to feel part of your body if appropriate multisensory correlations are provided. We report an illusion that is invoked through tactile stimulation of a person's hidden real right hand with synchronous virtual visual stimulation of an aligned 3D stereo virtual arm projecting horizontally out of their shoulder. An experiment with 21 male participants showed displacement of ownership towards the virtual hand, as illustrated by questionnaire responses and proprioceptive drift. A control experiment with asynchronous tapping was carried out with a different set of 20 male participants, who did not experience the illusion. After 5 min of stimulation the virtual arm rotated. Evidence suggests that the extent of the illusion was correlated with the degree of muscle activity onset in the right arm, as measured by EMG during the period in which the arm was rotating, for the synchronous but not the asynchronous condition. A completely virtual object can therefore be experienced as part of one's self, which opens up the possibility that an entire virtual body could be felt as one's own in future virtual reality applications or online games, and could be an invaluable tool for understanding the brain mechanisms underlying body ownership.
Abstract:
Different asymmetries between expansion and contraction (radial motions) have been reported in the literature. Often these patterns have been regarded as implying different channels for each radial direction (outward versus inward) operating at a higher level of visual motion processing. In two experiments (detection and discrimination tasks) we report reaction time asymmetries between expansion and contraction. Power functions were fitted to the data. While an exponent of 0.5 accounted better for the expansion data, a value of unity yielded the best fit for the contraction data. Instead of interpreting these differences as corresponding to different higher-order motion detectors, we regard these findings as reflecting the fact that expansion and contraction tap two distinct psychophysical input channels underlying the processing of fast and slow velocities, respectively.
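As an illustration of the exponent comparison described above, the sketch below fits a power function with the exponent held fixed at 0.5 and at 1.0 and compares residual error. It assumes reaction time is modeled as a decreasing power function of stimulus speed, RT = t0 + k * speed^(-beta); the actual predictor and parameterisation used in the paper may differ, and the data here are made up.

import numpy as np
from scipy.optimize import curve_fit

def rt_model(speed, t0, k, beta):
    return t0 + k * speed ** (-beta)

def fit_fixed_beta(speed, rt, beta):
    """Least-squares fit of t0 and k with the exponent held fixed; returns params and SSE."""
    f = lambda s, t0, k: rt_model(s, t0, k, beta)
    (t0, k), _ = curve_fit(f, speed, rt, p0=(0.2, 0.1))
    sse = float(np.sum((rt - f(speed, t0, k)) ** 2))
    return (t0, k), sse

# Example with made-up data: prefer the exponent giving the lower residual error.
speed = np.array([2.0, 4.0, 8.0, 16.0])
rt = np.array([0.45, 0.38, 0.33, 0.30])
for beta in (0.5, 1.0):
    params, sse = fit_fixed_beta(speed, rt, beta)
    print(beta, params, sse)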
Abstract:
Covert spatial attention produces biases in perceptual and neural responses in the absence of overt orienting movements. The neural mechanism that gives rise to these effects is poorly understood. Here we report the relation between fixational eye movements, namely eye vergence, and covert attention. Visual stimuli modulate the angle of eye vergence as a function of their ability to capture attention. This illustrates the relation between eye vergence and bottom-up attention. In visual and auditory cue/no-cue paradigms, the angle of vergence is greater in the cue condition than in the no-cue condition. This shows a top-down attention component. In conclusion, our observations reveal a close link between covert attention and the modulation of eye vergence during fixation. Our study suggests a basis for the use of eye vergence as a tool for measuring attention and may provide new insights into attention and perceptual disorders.
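As a point of reference for how an eye-vergence angle can be obtained from binocular eye-tracking data, the sketch below takes it as the angle between the left- and right-eye gaze direction vectors. This is generic geometry, not the specific measure used in the study.

import numpy as np

def vergence_angle_deg(gaze_left, gaze_right):
    """Angle in degrees between the two 3D gaze direction vectors (need not be unit length)."""
    l = np.asarray(gaze_left, dtype=float)
    r = np.asarray(gaze_right, dtype=float)
    cosang = np.dot(l, r) / (np.linalg.norm(l) * np.linalg.norm(r))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Both eyes converging slightly toward a near point straight ahead: about 2.3 degrees
print(vergence_angle_deg((0.02, 0.0, 1.0), (-0.02, 0.0, 1.0)))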
Abstract:
The visual angle projected by an object (e.g. a ball) on the retina depends on the object's size and distance. Without further information, however, the visual angle is ambiguous with respect to size and distance, because equal visual angles can be obtained from a big ball at a longer distance and a smaller one at a correspondingly shorter distance. Failure to recover the true 3D structure of the object (e.g. a ball's physical size) causing the ambiguous retinal image can lead to a timing error when catching the ball. Two opposing views currently prevail on how people resolve this ambiguity when estimating time to contact. One explanation challenges any inference about what causes the retinal image (i.e. the necessity to recover this 3D structure) and instead favors a direct analysis of optic flow. In contrast, the second view suggests that action timing could instead be based on obtaining an estimate of the 3D structure of the scene. On the latter view, systematic errors are predicted if our inference of the 3D structure fails to reveal the underlying cause of the retinal image. Here we show that hand closure in catching virtual balls is triggered by visual angle, using an assumption of a constant ball size. As a consequence of this assumption, hand closure starts when the ball is at a similar distance across trials. From that distance on, the remaining arrival time therefore depends on the ball's speed. In order to time the catch successfully, closing time was coupled with the ball's speed during the motor phase. This strategy led to increased precision in catching, but at the cost of committing systematic errors.
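The size-distance ambiguity behind this argument is standard geometry: a ball of radius R at distance d subtends a visual angle that is unchanged when size and distance are scaled together, so under an assumed fixed size R0 a criterion angle corresponds to a fixed implied distance.

\theta = 2\arctan\!\left(\frac{R}{d}\right), \qquad
\theta(R, d) = \theta(cR, cd)\ \ \text{for all } c > 0, \qquad
\hat{d} = \frac{R_{0}}{\tan(\theta_{c}/2)}.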
Abstract:
The role of grammatical class in lexical access and representation is still not well understood. Grammatical effects obtained in picture-word interference experiments have been argued to show the operation of grammatical constraints during lexicalization when syntactic integration is required by the task. Alternative views hold that the ostensibly grammatical effects actually derive from the coincidence of semantic and grammatical differences between lexical candidates. We present three picture-word interference experiments conducted in Spanish. In the first two, the semantic relatedness (related or unrelated) and the grammatical class (nouns or verbs) of the target and the distracter were manipulated in an infinitive-form action naming task in order to disentangle their contributions to verb lexical access. In the third experiment, a possible confound between grammatical class and semantic domain (objects or actions) was eliminated by using action-nouns as distracters. A condition in which participants were asked to name the action pictures using an inflected form of the verb was also included, to explore whether the need for syntactic integration modulated the appearance of grammatical effects. Whereas action-words (nouns or verbs), but not object-nouns, produced longer reaction times irrespective of their grammatical class in the infinitive condition, only verbs slowed latencies in the inflected-form condition. Our results suggest that speech production relies on the exclusion of candidate responses that do not fulfil task-pertinent criteria such as membership in the appropriate semantic domain or grammatical class. Taken together, these findings are explained by a response-exclusion account of speech output. This and alternative hypotheses are discussed.
Abstract:
Print quality and the printability of paper are very important attributes when modern printing applications are considered. In prints containing images, high print quality is a basic requirement. Tone unevenness and non-uniform glossiness of printed products are the most disturbing factors influencing overall print quality. These defects are caused by non-ideal interactions of paper, ink and printing devices in high-speed printing processes. Since print quality is a perceptual characteristic, measuring unevenness in accordance with human vision is a significant problem. In this thesis, the mottling phenomenon is studied. Mottling is a printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Print mottle is usually the result of uneven ink lay-down or non-uniform ink absorption across the paper surface, and it is especially visible in mid-tone imagery or areas of uniform color, such as solids and continuous-tone screen builds. By using existing knowledge of visual perception and known methods for quantifying print tone variation, a new method for evaluating print unevenness is introduced. The method is compared to previous results in the field and is supported by psychometric experiments. Pilot studies were made to estimate the effect of the optical characteristics of the paper, prior to printing, on the unevenness of the printed area after printing. Instrumental methods for print unevenness evaluation have been compared, and the results of the comparison indicate that the proposed method produces better results in terms of correspondence with visual evaluation. The method has been successfully implemented as an industrial application and has proved to be a reliable substitute for visual expertise.
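To make the measurement problem concrete, the sketch below shows one common, generic way to quantify print-tone unevenness (in the spirit of tile-based mottle measures such as ISO 13660, not the specific method this thesis proposes): divide a scanned solid-print area into tiles and report the variation of tile-mean lightness relative to the overall mean. The tile size and the use of a plain coefficient of variation are assumptions.

import numpy as np

def mottle_index(lightness, tile=32):
    """lightness: 2D array of a scanned uniform print area. Returns the CV of tile means (%)."""
    h, w = lightness.shape
    h, w = h - h % tile, w - w % tile            # crop to a whole number of tiles
    tiles = lightness[:h, :w].reshape(h // tile, tile, w // tile, tile)
    tile_means = tiles.mean(axis=(1, 3))         # mean lightness per tile
    return float(100.0 * tile_means.std() / tile_means.mean())

# Example: a nearly flat synthetic print scores close to zero; blotchy prints score higher.
rng = np.random.default_rng(0)
flat = np.full((256, 256), 128.0) + rng.normal(0.0, 1.0, (256, 256))
print(mottle_index(flat))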