827 results for Flys Visual-system


Relevance: 80.00%

Abstract:

Although the role of ophthalmic factors in dyslexia remains the subject of controversy, recent research has indicated that the correlates of dyslexia may include binocular dysfunction, unstable motor ocular dominance, a deficit of the transient visual subsystem, and an anomaly that can be treated with tinted lenses. These features have typically been studied in isolation, and their inter-relationship has received little attention. The aim of the present research was to investigate ophthalmic factors in dyslexia, with a particular emphasis on the interaction between optometric variables. Further aims were to establish the most appropriate investigative techniques for optometric practice and to explore the relationship between optometric and psychometric variables. A pilot study was used to refine the experimental design for a subsequent detailed study of 39 children with a specific reading disability and 43 good readers, who were selected from 240 children. The groups were matched for age, sex, and performance IQ. The following factors emerged as correlates of dyslexia: slightly impaired visual acuity; reduced vergence amplitudes; increased vergence instability; decreased accommodative amplitude; poor performance on tests designed to assess the function of the transient visual system; and slightly slower performance on a non-verbal simulated-reading visual search task. The 'transient system deficit', as measured by reduced flicker sensitivity, was significantly associated with decreased accommodative and vergence amplitudes, linking the motor and sensory visual correlates of dyslexia. Although the binocular dysfunction was correlated with increased symptoms, the difference in the groups' simulated-reading visual search task performance was largely attributable to psychometric variables. The results suggest that optometric problems may be a contributory factor in dyslexia but are unlikely to play a key causative role. Several optometric variables were confounded by psychometric parameters, and this interaction should be a priority for future investigation.

Relevance: 80.00%

Abstract:

The transmission of weak signals through the visual system is limited by internal noise. Its level can be estimated by adding external noise, which increases the variance within the detecting mechanism, causing masking. But experiments with white noise fail to meet three predictions: (a) noise has too small an influence on the slope of the psychometric function, (b) masking occurs even when the noise sample is identical in each two-alternative forced-choice (2AFC) interval, and (c) double-pass consistency is too low. We show that much of the energy of 2D white noise masks extends well beyond the pass-band of plausible detecting mechanisms and that this suppresses signal activity. These problems are avoided by restricting the external noise energy to the target mechanisms by introducing a pedestal with a mean contrast of 0% and independent contrast jitter in each 2AFC interval (termed zero-dimensional [0D] noise). We compared the jitter condition to masking from 2D white noise in double-pass masking and (novel) contrast matching experiments. Zero-dimensional noise produced the strongest masking, greatest double-pass consistency, and no suppression of perceived contrast, consistent with a noisy ideal observer. Deviations from this behavior for 2D white noise were explained by cross-channel suppression with no need to appeal to induced internal noise or uncertainty. We conclude that (a) results from previous experiments using white pixel noise should be re-evaluated and (b) 0D noise provides a cleaner method for investigating internal variability than pixel noise. Ironically then, the best external noise stimulus does not look noisy.
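
To make the distinction between the two noise types concrete, here is a minimal Python sketch of how a single 2AFC trial could be built with 0D noise versus 2D pixel noise. The contrast values, jitter standard deviation, and array size are illustrative placeholders, not the authors' stimulus parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def zero_d_noise_contrast(signal_contrast, jitter_sd):
    # 0D noise: the whole target template is jittered in contrast.
    # The "noise" is a pedestal with 0% mean contrast whose contrast is drawn
    # independently on every 2AFC interval, so all of the external variance
    # lands inside the detecting mechanism.
    return signal_contrast + rng.normal(0.0, jitter_sd)

def white_noise_field(size, noise_sd):
    # 2D white pixel noise: an independent contrast sample at every pixel.
    # Much of its energy falls outside the pass-band of a narrow-band
    # detecting mechanism, which is why it can suppress as well as mask.
    return rng.normal(0.0, noise_sd, size=(size, size))

# One simulated 2AFC trial with 0D noise: target interval vs. non-target interval.
target_interval = zero_d_noise_contrast(0.05, 0.02)     # signal contrast + jitter
nontarget_interval = zero_d_noise_contrast(0.00, 0.02)  # jitter only
print(target_interval, nontarget_interval, white_noise_field(64, 0.1).shape)
```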

Relevance: 80.00%

Abstract:

The visual system dissects the retinal image into millions of local analyses along numerous visual dimensions. However, our perceptions of the world are not fragmentary, so further processes must be involved in stitching it all back together. Simply summing up the responses would not work because this would convey an increase in image contrast with an increase in the number of mechanisms stimulated. Here, we consider a generic model of signal combination and counter-suppression designed to address this problem. The model is derived and tested for simple stimulus pairings (e.g. A + B), but is readily extended over multiple analysers. The model can account for nonlinear contrast transduction, dilution masking, and signal combination at threshold and above. It also predicts nonmonotonic psychometric functions where sensitivity to signal A in the presence of pedestal B first declines with increasing signal strength (paradoxically dropping below 50% correct in two-interval forced choice), but then rises back up again, producing a contour that follows the wings and neck of a swan. We looked for and found these "swan" functions in four different stimulus dimensions (ocularity, space, orientation, and time), providing some support for our proposal.

Relevance: 80.00%

Abstract:

PURPOSE: To validate a new miniaturised, open-field wavefront device that has been developed with the capacity to be attached to an ophthalmic surgical microscope or slit-lamp. SETTING: Solihull Hospital and Aston University, Birmingham, UK. DESIGN: Comparative non-interventional study. METHODS: The dynamic range of the Aston Aberrometer was assessed using a calibrated model eye. The validity of the Aston Aberrometer was compared to a conventional desk-mounted Shack-Hartmann aberrometer (Topcon KR1W) by measuring the refractive error and higher-order aberrations of 75 dilated eyes with both instruments in random order. The Aston Aberrometer measurements were repeated five times to assess intra-session repeatability. Data were converted to vector form for analysis. RESULTS: The Aston Aberrometer had a large dynamic range of at least +21.0 D to -25.0 D. It gave similar measurements to the conventional aberrometer for mean spherical equivalent (mean difference ± 95% confidence interval: 0.02 ± 0.49 D; correlation: r=0.995, p<0.001), astigmatic components (J0: 0.02 ± 0.15 D; r=0.977, p<0.001; J45: 0.03 ± 0.28 D; r=0.666, p<0.001) and higher-order aberration RMS (0.02 ± 0.20 D; r=0.620, p<0.001). Intraclass correlation coefficient assessments of intra-session repeatability for the Aston Aberrometer were excellent (spherical equivalent = 1.000, p<0.001; astigmatic components J0 = 0.998, p<0.001, J45 = 0.980, p<0.01; higher-order aberration RMS = 0.961, p<0.001). CONCLUSIONS: The Aston Aberrometer gives valid and repeatable measures of refractive error and higher-order aberrations over a large range. Because it can measure continuously, it can provide surgeons with direct feedback on the optical status of the visual system during intraocular lens implantation and corneal surgery.
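
The abstract notes that the refraction data were converted to vector form; the conventional conversion is into the power-vector components M, J0 and J45. A short sketch of that standard transformation (not code from the study itself) is given below.

```python
import math

def power_vectors(sphere, cyl, axis_deg):
    # Standard power-vector decomposition of a sphero-cylindrical refraction
    # (S / C x axis): M is the mean spherical equivalent, J0 and J45 are the
    # two astigmatic components used for statistical analysis.
    a = math.radians(axis_deg)
    m = sphere + cyl / 2.0
    j0 = -(cyl / 2.0) * math.cos(2.0 * a)
    j45 = -(cyl / 2.0) * math.sin(2.0 * a)
    return m, j0, j45

# Example: -2.00 / -1.00 x 90 gives (M, J0, J45) = (-2.50, -0.50, ~0.00).
print(power_vectors(-2.00, -1.00, 90))
```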

Relevance: 80.00%

Abstract:

Richard Armstrong was educated at King’s College London (1968-1971) and subsequently at St. Catherine’s College Oxford (1972-1976). His early research involved the application of statistical methods to problems in botany and ecology. For the last 34 years, he has been a lecturer in Botany, Microbiology, Ecology, Neuroscience, and Optometry at the University of Aston. His current research interests include the application of quantitative methods to the study of neuropathology of neurodegenerative diseases with special reference to vision and the visual system.

Relevance: 80.00%

Abstract:

Golfers, coaches, and researchers alike have keyed in on putting as an important aspect of overall golf performance. Of the three principal putting tasks (green reading, alignment, and the putting action phase), the putting action phase has attracted the most attention from coaches, players, and researchers. This phase includes the alignment of the club with the ball, the swing, and ball contact. A significant amount of research in this area has focused on measuring golfers' vision strategies with eye-tracking equipment. Unfortunately, this research suffers from a number of shortcomings that limit its usefulness. The purpose of this thesis was to address some of these shortcomings. The primary objective was to re-evaluate golfers' putting vision strategies using binocular eye-tracking equipment and to define a new, optimal putting vision strategy associated with both higher skill and greater success. To facilitate this research, bespoke computer software was developed and validated, and new gaze behaviour criteria were defined. Additionally, the effects of training (habitual) and competition conditions on the putting vision strategy were examined, as was the effect of ocular dominance. Finally, methods for improving golfers' binocular vision strategies are discussed, and a clinical plan for the optometric management of the golfer's vision is presented. The clinical management plan includes the correction of fundamental aspects of golfers' vision, including monocular refractive errors and binocular vision defects, as well as enhancement of their putting vision strategy, with the overall aim of improving performance on the golf course. This research was undertaken to gain a better understanding of the human visual system and how it relates to the sport performance of golfers specifically. Ultimately, the analysis techniques and methods developed are applicable to the assessment of visual performance in all sports.

Relevance: 80.00%

Abstract:

Background - When a moving stimulus and a briefly flashed static stimulus are physically aligned in space, the static stimulus is perceived as lagging behind the moving stimulus. This widely replicated phenomenon is known as the Flash-Lag Effect (FLE). For the first time, we employed biological motion as the moving stimulus, which is important for two reasons. First, biological motion is processed by visual as well as somatosensory brain areas, making it a prime candidate for elucidating the interplay between the two systems with respect to the FLE. Second, discussions about the mechanisms of the FLE tend to resort to evolutionary arguments, while most studies employ highly artificial stimuli with constant velocities. Methodology/Principal Findings - Since biological motion is ecologically valid, it follows complex patterns with changing velocity. We therefore compared biological to symbolic motion with the same acceleration profile. Our results with 16 observers revealed a qualitatively different pattern for biological compared to symbolic motion, and this pattern was predicted by the characteristics of motor resonance: the amount of anticipatory processing of perceived actions, based on the induced perspective and agency, modulated the FLE. Conclusions/Significance - Our study provides the first evidence for an FLE with non-linear motion in general and with biological motion in particular. Our results suggest that predictive coding within the sensorimotor system alone cannot explain the FLE. Our findings are compatible with visual prediction (Nijhawan, 2008), which assumes that extrapolated motion representations within the visual system generate the FLE. These representations are modulated by sudden visual input (e.g. offset signals) or by input from other systems (e.g. sensorimotor) that can boost or attenuate overshooting representations in accordance with biased neural competition (Desimone & Duncan, 1995).

Relevance: 80.00%

Abstract:

The visual system pools information from local samples to calculate textural properties. We used a novel stimulus to investigate how signals are combined to improve estimates of global orientation. Stimuli were 29 × 29 element arrays of 4 c/deg log Gabors, spaced 1° apart. A proportion of these elements had a coherent orientation (horizontal/vertical), with the remainder assigned random orientations. The observer's task was to identify the global orientation. The spatial configuration of the signal was modulated by a checkerboard pattern of square checks containing potential signal elements. The other locations contained either randomly oriented elements ('noise check') or were blank ('blank check'). The distribution of signal elements was manipulated by varying the size and location of the checks within a fixed-diameter stimulus. An ideal detector would pool responses only from potential signal elements. Humans did this for medium check sizes, and for large check sizes when a signal was presented in the fovea. For small check sizes, however, pooling occurred indiscriminately over relevant and irrelevant locations. For these check sizes, thresholds for the noise-check and blank-check conditions were similar, suggesting that the limiting noise is not induced by the response to the noise elements. The results are described by a model that filters the stimulus at the potential target orientations and then combines the signals over space in two stages. The first is a mandatory integration of local signals over a fixed area, limited by internal noise at each location. The second is a task-dependent combination of the outputs from the first stage. © 2014 ARVO.
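
As an illustration of the two-stage pooling idea described above (and only that; the fitted model in the paper is more detailed), the following sketch applies mandatory local integration with per-location internal noise, followed by a task-dependent combination restricted to relevant locations. The ±1 coding of element responses, the grid size, noise level, and pool size are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def two_stage_pooling(element_signals, relevant_mask, local_noise_sd=1.0, pool=4):
    # element_signals: 2D array of local filter responses, coded +1 for
    # horizontal elements, -1 for vertical elements, 0 for blanks.
    # relevant_mask:   True where a potential signal element could appear.
    noisy = element_signals + rng.normal(0.0, local_noise_sd, element_signals.shape)
    h, w = element_signals.shape
    decision = 0.0
    for i in range(0, h, pool):
        for j in range(0, w, pool):
            # Stage 1: mandatory integration over a fixed local area,
            # limited by the internal noise added at each location.
            local_sum = noisy[i:i + pool, j:j + pool].sum()
            # Stage 2: task-dependent combination - only pools containing
            # potential signal locations are passed on (an ideal strategy).
            if relevant_mask[i:i + pool, j:j + pool].any():
                decision += local_sum
    return "horizontal" if decision > 0 else "vertical"

# Toy stimulus: 28 x 28 elements, coherent (horizontal) elements in the relevant region.
signals = rng.choice([-1.0, 1.0], size=(28, 28))
relevant = np.zeros((28, 28), dtype=bool)
relevant[:14, :14] = True
signals[:14, :14] = 1.0
print(two_stage_pooling(signals, relevant))
```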

Relevance: 80.00%

Abstract:

The visual system combines spatial signals from the two eyes to achieve single vision. But if binocular disparity is too large, this perceptual fusion gives way to diplopia. We studied and modelled the processes underlying fusion and the transition to diplopia. The likely basis for fusion is linear summation of inputs onto binocular cortical cells. Previous studies of perceived position, contrast matching and contrast discrimination imply the computation of a dynamically weighted sum, where the weights vary with relative contrast. For gratings, perceived contrast was almost constant across all disparities, and this can be modelled by allowing the ocular weights to increase with disparity (Zhou, Georgeson & Hess, 2014). However, when a single Gaussian-blurred edge was shown to each eye, perceived blur was invariant with disparity (Georgeson & Wallis, ECVP 2012) – not consistent with linear summation (which predicts that perceived blur increases with disparity). This blur constancy is consistent with a multiplicative form of combination (the contrast-weighted geometric mean), but that is hard to reconcile with the evidence favouring linear combination. We describe a two-stage spatial filtering model with linear binocular combination and suggest that nonlinear output transduction (e.g. 'half-squaring') at each stage may account for the blur constancy.
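
To make the contrast between the two candidate combination rules explicit, here is a toy sketch applying them to scalar monocular estimates (e.g. of edge blur). It is not the two-stage filtering model itself; the weighting by relative contrast is the only feature carried over from the text.

```python
def weighted_linear_sum(b_left, b_right, c_left, c_right):
    # Dynamically weighted linear combination: weights depend on relative contrast.
    w = c_left / (c_left + c_right)
    return w * b_left + (1.0 - w) * b_right

def weighted_geometric_mean(b_left, b_right, c_left, c_right):
    # Contrast-weighted geometric mean: the multiplicative rule that is
    # consistent with blur constancy across disparity.
    w = c_left / (c_left + c_right)
    return (b_left ** w) * (b_right ** (1.0 - w))

# Equal-contrast example with 4 and 16 arcmin monocular blurs:
# the linear rule gives 10.0, the geometric rule gives 8.0.
print(weighted_linear_sum(4.0, 16.0, 0.5, 0.5),
      weighted_geometric_mean(4.0, 16.0, 0.5, 0.5))
```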

Relevance: 80.00%

Abstract:

Purpose: Traditionally, it has been thought that no binocular combination occurs in amblyopia. However, there is a growing body of evidence that there are intact binocular mechanisms in amblyopia that are rendered inactive under normal viewing conditions by imbalanced monocular inputs. Georgeson and Wallis (2014) recently introduced a novel method to investigate fusion, suppression and diplopia in the normal population. We have modified this method to assess binocular interactions in amblyopia. Methods: Ten amblyopic and ten control subjects viewed briefly presented (200 ms) pairs of dichoptically separated horizontal Gaussian-blurred edges. Subjects reported one central edge, one offset edge, or a double edge as the vertical disparity was manipulated. The experiment was conducted at a range of spatial scales (blur widths of 4, 8, 16, and 32 arcmin) and contrasts. Our model, based on Georgeson and Wallis (2014), converted subjects' responses into probabilities of fusion, suppression, and diplopia. Results: When the normal participants were presented with equal contrast to each eye, the probability of fusion gradually decreased with increasing disparity as the probability of diplopia gradually increased. Normal participants experienced suppression in only a small proportion of trials. The pattern was consistent across all edge blurs. Interestingly, the majority of amblyopes showed a comparable pattern of fusion, i.e. decreasing probability with increasing disparity. However, with increasing disparity the amblyopes tended to suppress the amblyopic eye, experiencing diplopia in only a small proportion of trials, particularly at large blurs. Increasing the interocular contrast offset in favour of the amblyopic eye normalised the pattern of data, making it similar to that of the normal participants. There were some interesting exceptions: strong suppressors for whom our contrast range was inadequate, and one case in which diplopia dominated. Conclusions: This task is suitable for assessing binocular interactions in amblyopic participants and provides a way to quantify the relationship between fusion, suppression and diplopia. In agreement with previous studies, our data indicate the presence of binocular mechanisms in amblyopia. A contrast offset favouring the amblyopic eye normalises the measured binocular interactions in the amblyopic visual system.
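
The mapping from the three possible reports to the three percept categories, and the trial-level bookkeeping it implies, can be sketched as follows. The published model infers these probabilities within a formal observer model, so this is only the descriptive accounting step, with made-up response counts.

```python
from collections import Counter

# Hypothetical coding of reports into percepts, following the task description.
REPORT_TO_PERCEPT = {
    "one central edge": "fusion",      # the two eyes' edges are combined
    "one offset edge": "suppression",  # one eye's edge dominates
    "double edge": "diplopia",         # both edges are seen separately
}

def percept_probabilities(reports):
    counts = Counter(REPORT_TO_PERCEPT[r] for r in reports)
    n = len(reports)
    return {p: counts.get(p, 0) / n for p in ("fusion", "suppression", "diplopia")}

# Example: one disparity level with 10 trials (7 fused, 3 diplopic reports).
print(percept_probabilities(["one central edge"] * 7 + ["double edge"] * 3))
```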

Relevance: 80.00%

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include combining stimuli of different modalities (such as visual and auditory), combining multiple stimuli of the same modality (such as two concurrent sounds), and integrating stimuli arriving at the sensory organs (i.e. the ears) with stimuli delivered by brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually guided auditory learning, a problem with implications for the general question of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound - an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound-location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound, but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
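
For reference, the quoted percentages follow directly from the reported shifts relative to the 6-degree mismatch:

```latex
\frac{1.3^\circ}{6^\circ} \approx 0.22, \qquad \frac{1.7^\circ}{6^\circ} \approx 0.28
```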

My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information before it reaches the forebrain: almost all auditory signals pass through it on their way up. It is therefore an ideal structure for understanding the format of the inputs to the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, which made it an attractive target for studying stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
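
A minimal sketch of the frequency-tagging logic is given below, assuming a vector-strength-style measure of entrainment at each sound's modulation frequency. The 20 Hz and 31 Hz tag frequencies and the spike train are invented for the example, and the thesis' actual analysis pipeline may differ.

```python
import numpy as np

def tag_power(spike_times, mod_freq_hz):
    # Entrainment of a spike train to one amplitude-modulation ("tag")
    # frequency: the squared magnitude of the summed spike phase vectors,
    # i.e. a vector-strength-style statistic scaled by spike count.
    phases = 2.0 * np.pi * mod_freq_hz * np.asarray(spike_times)
    return np.abs(np.sum(np.exp(1j * phases))) ** 2 / max(len(spike_times), 1)

def attribute_response(spike_times, freq_a_hz, freq_b_hz):
    # Compare entrainment to the two sounds' modulation frequencies to ask
    # which source is driving the neuron on this trial.
    pa = tag_power(spike_times, freq_a_hz)
    pb = tag_power(spike_times, freq_b_hz)
    return ("A" if pa > pb else "B"), pa, pb

# Example: spikes locked to a 20 Hz modulation are attributed to source A.
spikes = np.arange(0.0, 1.0, 1.0 / 20.0) + 0.002   # spike times in seconds
print(attribute_response(spikes, freq_a_hz=20.0, freq_b_hz=31.0))
```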

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance: 80.00%

Abstract:

Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF’s local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.

Relevance: 80.00%

Abstract:

Our visual system ordinarily extracts low spatial frequency (SF) information before high-SF information. The global information extracted early can thus activate hypotheses about the identity of the object and subsequently guide the extraction of finer, more specific information. In autism spectrum disorder (ASD), however, SF perception is atypical. Moreover, the perception of individuals with ASD appears to be less influenced by their priors and prior knowledge. In the study described in the body of this thesis, our aim was to verify whether the prior of processing information from low to high SFs is present in individuals with ASD. We compared the time course of SF use in neurotypical subjects and subjects with ASD by randomly and exhaustively sampling the time x SF space. Neurotypical subjects extracted low SFs before higher ones: we were thus able to replicate the result of several earlier studies, while characterizing it with greater precision than ever before. Subjects with ASD, by contrast, extracted all useful SFs, low and high, from the outset, indicating that they did not possess the prior present in neurotypicals. It thus appears that individuals with ASD extract SFs in a purely bottom-up manner, with extraction not guided by the activation of hypotheses.

Relevance: 80.00%

Abstract:

The police use both subjective (i.e. police staff) and automated (e.g. face recognition systems) methods for the completion of visual tasks (e.g. person identification). Image quality for police tasks has been defined as image usefulness, or the suitability of the visual material to satisfy a visual task. It is not necessarily affected by artefacts that degrade visual image quality (i.e. decrease fidelity), as long as those artefacts do not affect the information that is useful for the task. The capture of useful information is affected by the unconstrained conditions commonly encountered by CCTV systems, such as variations in illumination and high compression levels. The main aim of this thesis is to investigate aspects of image quality and video compression that may affect the completion of police visual tasks/applications with respect to CCTV imagery. This is accomplished by investigating three specific police areas/tasks utilising: 1) the human visual system (HVS) for a face recognition task, 2) automated face recognition systems, and 3) automated human detection systems. These systems (HVS and automated) were assessed with defined scene content properties and video compression, i.e. H.264/MPEG-4 AVC. The performance of imaging systems/processes (e.g. subjective investigations, performance of compression algorithms) is affected by scene content properties. No other investigation has been identified that takes scene content properties into consideration to the same extent. Results have shown that the HVS is more sensitive to compression effects than the automated systems. In automated face recognition systems, 'mixed lightness' scenes were the most affected and 'low lightness' scenes the least affected by compression. In contrast, for the HVS face recognition task, 'low lightness' scenes were the most affected and 'medium lightness' scenes the least affected. For the automated human detection systems, 'close distance' and 'run approach' were among the most commonly affected scenes. The findings have the potential to broaden the methods used for testing imaging systems for security applications.

Relevance: 80.00%

Abstract:

Digital image processing is a rapidly evolving field with growing applications in science and engineering. It involves changing the nature of an image in order to either improve its pictorial information for human interpretation or render it more suitable for autonomous machine perception. One of the major areas of image processing for human vision applications is image enhancement. The principal goal of image enhancement is to improve the visual quality of an image, typically by taking advantage of the response of the human visual system. Image enhancement methods are usually carried out in the pixel domain. Transform-domain methods can often provide another way to interpret and understand image contents. A suitable transform, thus selected, should have low computational complexity. A sequency-ordered arrangement of unique MRT (Mapped Real Transform) coefficients gives rise to an integer-to-integer transform, named the Sequency-based unique MRT (SMRT), suitable for image processing applications. The development of the SMRT from the UMRT (Unique MRT), the forward and inverse SMRT algorithms, and the basis functions are introduced. A few properties of the SMRT are explored, and its scope in lossless text compression is presented.