98 results for visual integration
in University of Queensland eSpace - Australia
Abstract:
The McGurk effect, in which auditory [ba] dubbed onto [ga] lip movements is perceived as 'da' or 'tha', was employed in a real-time task to investigate auditory-visual speech perception in prelingual infants. Experiments 1A and 1B established the validity of real-time dubbing for producing the effect. In Experiment 2, 4½-month-olds were tested in a habituation-test paradigm, in which an auditory-visual stimulus was presented contingent upon visual fixation of a live face. The experimental group was habituated to a McGurk stimulus (auditory [ba] visual [ga]), and the control group to matching auditory-visual [ba]. Each group was then presented with three auditory-only test trials, [ba], [da], and [ða] (as in then). Visual-fixation durations in test trials showed that the experimental group treated the emergent percept in the McGurk effect, [da] or [ða], as familiar (even though they had not heard these sounds previously) and [ba] as novel. For control group infants [da] and [ða] were no more familiar than [ba]. These results are consistent with infants' perception of the McGurk effect, and support the conclusion that prelinguistic infants integrate auditory and visual speech information. (C) 2004 Wiley Periodicals, Inc.
Abstract:
Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training. (C) 2004 Elsevier Ltd. All rights reserved.
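A minimal sketch of what an integration model of this kind predicts, assuming an FLMP-style multiplicative combination rule (an assumption; the abstract does not name the specific model) and purely hypothetical accuracy values rather than the study's data:

```python
# Sketch of a multiplicative (FLMP-style) integration prediction.
# Assumption: unimodal support values are combined multiplicatively;
# the accuracy values below are hypothetical, not data from the study.

def bimodal_prediction(p_auditory: float, p_visual: float) -> float:
    """Predicted probability of a correct bimodal response given
    unimodal auditory and visual identification accuracies."""
    support = p_auditory * p_visual
    return support / (support + (1.0 - p_auditory) * (1.0 - p_visual))

# Lower unimodal accuracy (as reported for the ASD group) still yields a
# bimodal gain if the two sources are integrated.
print(bimodal_prediction(0.60, 0.55))   # ~0.65
print(bimodal_prediction(0.80, 0.75))   # ~0.92
```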
Abstract:
The extrastriate cortex near the dorsal midline has been described as part of an 'express' pathway that provides visual input to the premotor cortex. This pathway is considered important for the integration of sensory information about the visual field periphery and the skeletomotor system, especially in relation to the control of arm movements. However, a better understanding of the functional contributions of different parts of this complex has been hampered by the lack of data on the extent and boundaries of its constituent visual areas. Recent studies in macaques have provided the first detailed view of the topographical organization of this region in Old World monkeys. Despite differences in nomenclature, a comparison of the visuotopic organization, myeloarchitecture and connections of the relevant visual areas with those previously studied in New World monkeys reveals a remarkable degree of similarity and helps to clarify the subdivision of function between different areas of the dorsomedial complex. A caudal visual area, named DM or V6, appears to be important for the detection of coherent patterns of movement across wide regions of the visual field, such as those induced during self-motion. A rostral area, named M or V6A, is more directly involved with visuomotor integration. This area receives projections both from DM/V6 and from a separate motion analysis channel, centred on the middle temporal visual area (or V5), which detects the movement of objects in extrapersonal space. These results support the suggestion, made earlier on the basis of more fragmentary evidence, that the areas rostral to the second visual area in dorsal cortex are homologous in all simian primates. Moreover, they emphasize the importance of determining the anatomical organization of the cortex as a prerequisite for elucidating the function of different cortical areas.
Abstract:
Most Internet search engines are keyword-based. They are not efficient for queries where geographical location is important, such as finding hotels within an area or close to a place of interest. A natural interface for spatial searching is a map, which can be used not only to display the locations of search results but also to assist in forming search conditions. A map-based search engine requires a well-designed visual interface that is intuitive to use yet flexible and expressive enough to support various types of spatial queries as well as aspatial queries. Similar to hyperlinks for text and images in an HTML page, spatial objects in a map should support hyperlinks. Such an interface needs to be scalable with the size of the geographical regions and the number of websites it covers. Despite typically handling a very large amount of spatial data, a map-based search interface should meet the expectation of fast response time for interactive applications. In this paper we discuss general requirements and the design for a new map-based web search interface, focusing on integration with the WWW and a visual spatial query interface. A number of current and future research issues are discussed, and a prototype for the University of Queensland is presented. (C) 2001 Published by Elsevier Science Ltd.
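As a rough illustration of the kind of request a map interaction must be translated into, the sketch below filters indexed sites by a bounding box drawn on the map plus a keyword; the data structures and function names are hypothetical and are not taken from the prototype described in the paper.

```python
# Hypothetical combined spatial + keyword filter of the kind a map-based
# search interface must issue; not the prototype's actual code.
from dataclasses import dataclass

@dataclass
class Site:
    url: str
    lat: float
    lon: float
    keywords: set[str]

def search(sites: list[Site], bbox: tuple[float, float, float, float],
           term: str) -> list[Site]:
    """Return sites inside bbox (min_lat, min_lon, max_lat, max_lon)
    whose keywords contain the query term."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return [s for s in sites
            if min_lat <= s.lat <= max_lat
            and min_lon <= s.lon <= max_lon
            and term in s.keywords]

# Example: hotels within a map window around the University of Queensland.
sites = [Site("http://example.edu/hotel-a", -27.49, 153.01, {"hotel"}),
         Site("http://example.edu/museum", -27.47, 153.02, {"museum"})]
print(search(sites, (-27.50, 153.00, -27.45, 153.05), "hotel"))
```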
Abstract:
Spatio-temporal maps of the occipital cortex of macaque monkeys were analyzed using optical imaging of intrinsic signals. The images obtained during localized visual stimulation (IS) were compared with the images obtained on presentation of a blank screen (IB). We first investigated spontaneous variations of the intrinsic signals by analyzing the 100 IBs for each of the three cortical areas. Slow periodic activation was observed in alternation over the cortical areas. Cross-correlation analysis indicated that synchronization of spontaneous activation only took place within each cortical area, but not between them. When a small, drifting grating (2° × 2°) was presented on the fovea, a dark spot appeared in the optical image at the cortical representation of this retinal location. It spread bilaterally along the border between V1 and V2, continuing as a number of parallel dark bands covering a large area of the lateral surface of V1. Cross-correlation analysis showed that during visual stimulation the intrinsic signals over all of the three cortical areas were synchronized, with in-phase activation of V1 and V2 and anti-phase activation of V4 and V1/V2. The significance of these extensive synergistic and antagonistic interactions between different cortical areas is discussed. (C) 2003 Elsevier B.V. All rights reserved.
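The cross-correlation measure referred to can be illustrated with simulated traces standing in for the intrinsic-signal time series of two areas (the data below are synthetic, not the imaging results):

```python
# Zero-lag correlation between simulated intrinsic-signal time series;
# synthetic data only, illustrating in-phase vs anti-phase relationships.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 100, 1000)
slow_wave = np.sin(2 * np.pi * 0.05 * t)              # slow periodic activation
v1 = slow_wave + 0.3 * rng.standard_normal(t.size)
v2 = slow_wave + 0.3 * rng.standard_normal(t.size)    # in phase with V1
v4 = -slow_wave + 0.3 * rng.standard_normal(t.size)   # anti-phase with V1/V2

def corr(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.corrcoef(a, b)[0, 1])

print(corr(v1, v2))   # strongly positive: synchronized, in phase
print(corr(v1, v4))   # strongly negative: anti-phase activation
```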
Abstract:
Some motor tasks can be completed, quite literally, with our eyes shut. Most people can touch their nose without looking or reach for an object after only a brief glance at its location. This distinction leads to one of the defining questions of movement control: is information gleaned prior to starting the movement sufficient to complete the task (open loop), or is feedback about the progress of the movement required (closed loop)? One task that has commanded considerable interest in the literature over the years is that of steering a vehicle, in particular lane-correction and lane-changing tasks. Recent work has suggested that this type of task can proceed in a fundamentally open loop manner [1 and 2], with feedback mainly serving to correct minor, accumulating errors. This paper reevaluates the conclusions of these studies by conducting a new set of experiments in a driving simulator. We demonstrate that, in fact, drivers rely on regular visual feedback, even during the well-practiced steering task of lane changing. Without feedback, drivers fail to initiate the return phase of the maneuver, resulting in systematic errors in final heading. The results provide new insight into the control of vehicle heading, suggesting that drivers employ a simple policy of “turn and see,” with only limited understanding of the relationship between steering angle and vehicle heading.
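The open-loop versus closed-loop distinction can be made concrete with a toy lane-change controller: an open-loop version plays back a pre-planned steering profile (and cannot correct a mis-timed return phase), whereas a closed-loop version steers from visual feedback of lateral and heading error. This is an illustrative sketch only, not the control model or simulator used in the study.

```python
# Toy contrast between open-loop and closed-loop steering for a lane change.
# Illustrative only; gains, timings and units are arbitrary assumptions.

def open_loop_steer(t: float) -> float:
    """Pre-planned profile: turn toward the new lane, then turn back.
    Without visual feedback, errors in the return phase go uncorrected."""
    if t < 1.0:
        return 0.05      # steering angle (rad) toward the new lane
    elif t < 2.0:
        return -0.05     # return phase, executed blind
    return 0.0

def closed_loop_steer(lateral_error: float, heading_error: float) -> float:
    """Feedback law in the spirit of 'turn and see': steer in proportion
    to the currently visible lateral and heading errors."""
    return -0.5 * lateral_error - 1.0 * heading_error
```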
Abstract:
We examined the influence of backrest inclination and vergence demand on the posture and gaze angle that workers adopt to view visual targets placed in different vertical locations. In the study 12 participants viewed a small video monitor placed in 7 locations around a 0.65-m radius arc (from 65° below to 30° above horizontal eye height). Trunk posture was manipulated by changing the backrest inclination of an adjustable chair. Vergence demand was manipulated by using ophthalmic lenses and prisms to mimic the visual consequences of varying target distance. Changes in vertical target location caused large changes in atlanto-occipital posture and gaze angle. Cervical posture was altered to a lesser extent by changes in vertical target location. Participants compensated for changes in backrest inclination by changing cervical posture, though they did not significantly alter atlanto-occipital posture and gaze angle. The posture adopted to view any target represents a compromise between visual and musculoskeletal demands. These results provide support for the argument that the optimal location of visual targets is at least 15° below horizontal eye level. Actual or potential applications of this work include the layout of computer workstations and the viewing of displays from a seated posture.
Abstract:
Extracting human postural information from video sequences has proved a difficult research question. The most successful approaches to date have been based on particle filtering, whereby the underlying probability distribution is approximated by a set of particles. The shape of the underlying observational probability distribution plays a significant role in determining the success, in both accuracy and efficiency, of any visual tracker. In this paper we compare approaches used by other authors and present a cost-path approach, which is commonly used in image segmentation problems but is not yet widely used in tracking applications.
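For readers unfamiliar with the framework being compared, the generic predict-weight-resample loop is sketched below; the observation likelihood is the plug-in point where a cost-path measure (or any of the other authors' observational models) would enter. This is a schematic, not the trackers evaluated in the paper.

```python
# Generic particle-filter iteration for visual tracking (schematic only).
import numpy as np

def particle_filter_step(particles, likelihood, motion_noise, rng):
    """One predict-weight-resample pass over pose particles
    (array of shape n_particles x n_dims)."""
    # Predict: diffuse particles under a simple random-walk motion model.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)
    # Weight: score each particle against the current image observation.
    weights = np.array([likelihood(p) for p in particles])
    weights = weights / weights.sum()
    # Resample: draw particles in proportion to their weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```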
Abstract:
It is known that some Virtual Reality (VR) head-mounted displays (HMDs) can cause temporary deficits in binocular vision. On the other hand, the precise mechanism by which visual stress occurs is unclear. This paper is concerned with a potential source of visual stress that has not been previously considered with regard to VR systems: inappropriate vertical gaze angle. As vertical gaze angle is raised or lowered the 'effort' required of the binocular system also changes. The extent to which changes in vertical gaze angle alter the demands placed upon the vergence eye movement system was explored. The results suggested that visual stress may depend, in part, on vertical gaze angle. The proximity of the display screens within an HMD means that a VR headset should be in the correct vertical location for any individual user. This factor may explain some previous empirical results and has important implications for headset design. Fortuitously, a reasonably simple solution exists.
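The vergence demand imposed by a near target follows from simple geometry relating interpupillary distance to viewing distance, as sketched below with hypothetical values; the dependence on vertical gaze angle investigated in the paper is the empirical question and is not captured by this formula.

```python
# Vergence angle required to fixate a target at a given distance
# (standard geometry; the numbers are hypothetical examples).
import math

def vergence_deg(ipd_m: float, distance_m: float) -> float:
    """Vergence angle in degrees for an interpupillary distance ipd_m
    and a fixation distance distance_m."""
    return math.degrees(2 * math.atan(ipd_m / (2 * distance_m)))

print(vergence_deg(0.063, 0.05))   # very near target (5 cm): ~64 deg
print(vergence_deg(0.063, 2.0))    # target at 2 m: ~1.8 deg
```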
Abstract:
The deep-sea pearleye, Scopelarchus michaelsarsi (Scopelarchidae), is a mesopelagic teleost with asymmetric or tubular eyes. The main retina subtends a large dorsal binocular field, while the accessory retina subtends a restricted monocular field of lateral visual space. Ocular specializations to increase the lateral visual field include an oblique pupil and a corneal lens pad. A detailed morphological and topographic study of the photoreceptors and retinal ganglion cells reveals seven specializations: a centronasal region of the main retina with ungrouped rod-like photoreceptors overlying a retinal tapetum; a region of high ganglion cell density (area centralis of 56.1 × 10³ cells per mm²) in the centrolateral region of the main retina; a centrotemporal region of the main retina with grouped rod-like photoreceptors; a region (area gigantocellularis) of large (32.2 ± 5.6 μm²), alpha-like ganglion cells arranged in a regular array (nearest-neighbour distance 53.5 ± 9.3 μm with a conformity ratio of 5.8) in the temporal main retina; an accessory retina with grouped rod-like photoreceptors; a nasotemporal band of a mixture of rod- and cone-like photoreceptors restricted to the ventral accessory retina; and a retinal diverticulum comprising a ventral region of differentiated accessory retina located medial to the optic nerve head. Retrograde labelling from the optic nerve with DiI shows that approximately 14% of the cells in the ganglion cell layer of the main retina are displaced amacrine cells at 1.5 mm eccentricity. Cryosectioning of the tubular eye confirms Matthiessen's ratio (2.59), and calculations of the spatial resolving power suggest that the function of the area centralis (7.4 cycles per degree / 8.1 minutes of arc) and the cohort of temporal alpha-like ganglion cells (0.85 cycles per degree / 70.6 minutes of arc) in the main retina may be different. Low summation ratios in these various retinal zones suggest that each zone may mediate distinct visual tasks in a certain region of the visual field by optimizing sensitivity and/or resolving power.
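The quoted resolving-power figures follow from the standard calculation based on peak ganglion cell density and the eye's focal length via Matthiessen's ratio. The sketch below reproduces that calculation with the abstract's values, under assumptions that are ours rather than the paper's: a square cell lattice, and a hypothetical lens radius (the abstract does not report it).

```python
# Back-of-envelope spatial resolving power from ganglion cell density.
# Assumptions: square lattice sampling; the lens radius is a hypothetical
# value, not reported in the abstract.
import math

density_mm2 = 56.1e3                    # peak ganglion cell density (cells/mm^2)
lens_radius_mm = 1.4                    # hypothetical lens radius
focal_len_mm = 2.59 * lens_radius_mm    # Matthiessen's ratio from the abstract

cells_per_mm = math.sqrt(density_mm2)   # linear density on a square lattice
nyquist_cyc_per_mm = cells_per_mm / 2.0 # two cells per grating cycle
mm_per_deg = focal_len_mm * math.tan(math.radians(1.0))
print(nyquist_cyc_per_mm * mm_per_deg)  # ~7.5 cycles/degree, near the 7.4 quoted
```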
Abstract:
A dissociation between two putative measures of resource allocation, skin conductance responding and secondary-task reaction time (RT), has been observed during auditory discrimination tasks. Four experiments investigated the time course of the dissociation effect with a visual discrimination task. Participants were presented with circles and ellipses and instructed to count the number of longer-than-usual presentations of one shape (task-relevant) and to ignore presentations of the other shape (task-irrelevant). Concurrent with this task, participants made a speeded motor response to an auditory probe. Experiment 1 showed that skin conductance responses were larger during task-relevant stimuli than during task-irrelevant stimuli, whereas RT to probes presented at 150 ms following shape onset was slower during task-irrelevant stimuli. Experiments 2 to 4 found slower RT during task-irrelevant stimuli at probes presented from 300 ms before shape onset until 150 ms following shape onset. At probes presented 3,000 and 4,000 ms following shape onset, probe RT was slower during task-relevant stimuli. The similarities between the observed time course and the so-called psychological refractory period (PRP) effect are discussed.
Abstract:
Deep-sea fish, defined as those living below 200 m, inhabit a most unusual photic environment, being exposed to two sources of visible radiation: very dim downwelling sunlight and bioluminescence, both of which are, in most cases, maximal at wavelengths around 450-500 nm. This paper summarises the reflective properties of the ocular tapeta often found in these animals, the pigmentation of their lenses, and the absorption characteristics of their visual pigments. Deep-sea tapeta usually appear blue to the human observer, reflecting mainly shortwave radiation. However, reflection in other parts of the spectrum is not uncommon and uneven tapetal distribution across the retina is widespread. Perhaps surprisingly, given the fact that they live in a photon-limited environment, the lenses of some deep-sea teleosts are bright yellow, absorbing much of the shortwave part of the spectrum. Such lenses contain a variety of biochemically distinct pigments which most likely serve to enhance the visibility of bioluminescent signals. Of the 195 different visual pigments characterised by either detergent extract or microspectrophotometry in the retinae of deep-sea fishes, ca. 87% have peak absorbances within the range 468-494 nm. Modelling shows that this is most likely an adaptation for the detection of bioluminescence. Around 13% of deep-sea fish have retinae containing more than one visual pigment. Of these, we highlight three genera of stomiid dragonfishes, which uniquely produce far-red bioluminescence from suborbital photophores. Using a combination of longwave-shifted visual pigments and, in one species (Malacosteus niger), a chlorophyll-related photosensitizer, these fish have evolved extreme red sensitivity enabling them to see their own bioluminescence and giving them a private spectral waveband invisible to other inhabitants of the deep ocean. (C) 1998 Elsevier Science Ltd. All rights reserved.
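A crude illustration of the modelling argument (that a pigment peaking near the wavelength of downwelling light and typical bioluminescence catches more photons than a longer-wave pigment) is sketched below; Gaussian curves are only a rough stand-in for real pigment absorbance and emission templates, and the widths are assumptions.

```python
# Relative photon catch of two hypothetical pigments against a light field
# peaking near 480 nm. Gaussians are a crude stand-in for real templates.
import numpy as np

wl = np.arange(400, 701)    # wavelength, nm

def gaussian(peak_nm: float, width_nm: float) -> np.ndarray:
    return np.exp(-0.5 * ((wl - peak_nm) / width_nm) ** 2)

light = gaussian(480, 30)   # dim downwelling light / typical bioluminescence

def relative_catch(pigment_peak_nm: float) -> float:
    return float(np.trapz(gaussian(pigment_peak_nm, 40) * light, wl))

# A 480 nm pigment catches roughly 2-3x more of this light field than a
# 550 nm pigment would.
print(relative_catch(480) / relative_catch(550))
```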