104 results for Human Visual System
Abstract:
High-level cognitive factors, including self-awareness, are believed to play an important role in human visual perception. The principal aim of this study was to determine whether oscillatory brain rhythms play a role in the neural processes involved in self-monitoring attentional status. To do so, we measured cortical activity using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) while participants were asked to self-monitor their internal status, only initiating the presentation of a stimulus when they perceived their attentional focus to be maximal. We employed a hierarchical Bayesian method that uses fMRI results as soft-constrained spatial information to solve the MEG inverse problem, allowing us to estimate cortical currents on the order of millimeters and milliseconds. Our results show that, during self-monitoring of internal status, there was a sustained decrease in power within the 7-13 Hz (alpha) range in the rostral cingulate motor area (rCMA) on the human medial wall, beginning approximately 430 msec after the trial start (p < 0.05, FDR corrected). We also show that gamma-band power (41-47 Hz) within this area was positively correlated with task performance from 40 to 640 msec after the trial start (r = 0.71, p < 0.05). We conclude: (1) the rCMA is involved in processes governing self-monitoring of internal status; and (2) the qualitative differences between alpha and gamma activity are reflective of their different roles in self-monitoring internal states. We suggest that alpha suppression may reflect a strengthening of top-down interareal connections, while a positive correlation between gamma activity and task performance indicates that gamma may play an important role in guiding visuomotor behavior. © 2013 Yamagishi et al.
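As an illustrative aside on the band-limited measures reported above: the sketch below shows one standard way to estimate power in the 7-13 Hz and 41-47 Hz ranges from a single estimated source time course using Welch's method. It is not the hierarchical Bayesian MEG-fMRI source estimation used in the study; the sampling rate and the stand-in signal are assumptions.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
source = rng.standard_normal(2 * int(fs))    # stand-in for one estimated cortical current (2 s)

def band_power(x, fs, lo, hi):
    """Integrate the Welch power spectral density between lo and hi Hz."""
    f, pxx = welch(x, fs=fs, nperseg=512)
    mask = (f >= lo) & (f <= hi)
    return np.sum(pxx[mask]) * (f[1] - f[0])

alpha_power = band_power(source, fs, 7, 13)    # alpha range used in the study
gamma_power = band_power(source, fs, 41, 47)   # gamma range used in the study
print(alpha_power, gamma_power)
```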
Abstract:
Purpose: Many practitioners base the prescription of near vision additions on the assertion that only one half or two-thirds of an individual’s amplitude of accommodation is sustainable for a prolonged period. To better understand how much eye focus needs to be restored for presbyopic corrections to be adequate, this study investigated the robustness of the pre-presbyopic human accommodative system during a sustained and intensive near vision task. Methods: Twenty-one pre-presbyopic volunteers (aged 26.1 ± 4.7 years) participated in the study. Binocular subjective amplitude of accommodation was measured before and after a prolonged reading exercise, using the RAF rule. During the 30 min reading task, the subject’s closest comfortable eye-to-text distance and pupil size were monitored. Accommodative accuracy to 0.2, 1.0, 2.0, 3.0 and 4.0 D stimuli was determined objectively using a validated binocular open-view autorefractor immediately before and after the reading task. Results: Amplitude of accommodation (p = 0.09) and accommodative accuracy (p > 0.05) were statistically unchanged following the intensive near task. The mean proportion of accommodation exerted throughout the near exercise was 80.6% (range 45.3 ± 3.7 to 96.6 ± 4.3%), which increased as the task progressed (F = 2.24, p = 0.02). The mean percentage of accommodation utilised increased with subject age (r = 0.517, p = 0.016). Conclusion: The pre-presbyopic human accommodative system is robust to fatigue during intense and prolonged near work. A greater proportion of one’s amplitude of accommodation may be continuously exerted than previously suggested.
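As a purely illustrative worked example of the arithmetic plausibly behind the "proportion of accommodation exerted" (the abstract does not spell the calculation out, so this is an assumption): the accommodative demand in dioptres is the reciprocal of the eye-to-text distance in metres, and the proportion is that demand divided by the measured amplitude of accommodation.

```latex
% Hypothetical values, for illustration only (not data from the study):
\[
  D = \frac{1}{d} = \frac{1}{0.25\ \text{m}} = 4.0\ \text{D},
  \qquad
  \text{proportion exerted} \approx \frac{D}{A} = \frac{4.0\ \text{D}}{5.0\ \text{D}} = 80\%.
\]
```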
Abstract:
Adapting to blurred images makes in-focus images look too sharp, and vice versa (Webster et al., 2002 Nature Neuroscience 5 839 - 840). We asked how such blur adaptation is related to contrast adaptation. Georgeson (1985 Spatial Vision 1 103 - 112) found that grating contrast adaptation followed a subtractive rule: perceived (matched) contrast of a grating was fairly well predicted by subtracting some fraction k (~0.3) of the adapting contrast from the test contrast. Here we apply that rule to the responses of a set of spatial filters at different scales and orientations. Blur is encoded by the pattern of filter response magnitudes over scale. We tested two versions - the 'norm model' and the 'fatigue model' - against blur-matching data obtained after adaptation to sharpened, in-focus or blurred images. In the fatigue model, filter responses are simply reduced by exposure to the adapter. In the norm model, (a) the visual system is pre-adapted to a focused world and (b) discrepancy between observed and expected responses to the experimental adapter leads to additional reduction (or enhancement) of filter responses during experimental adaptation. The two models are closely related, but only the norm model gave a satisfactory account of results across the four experiments analysed, with one free parameter k. This model implies that the visual system is pre-adapted to focused images, that adapting to in-focus or blank images produces no change in adaptation, and that adapting to sharpened or blurred images changes the state of adaptation, leading to changes in perceived blur or sharpness.
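A minimal numerical sketch (not the authors' implementation) of how a subtractive rule of this kind can be applied to multi-scale filter responses in the 'fatigue' spirit: each filter's test response is reduced by a fraction k of its response to the adapter, and blur is then read out from the distribution of responses over scale. The filter bank, the centroid read-out and the parameter values are assumptions.

```python
import numpy as np

k = 0.3                                          # adaptation fraction, order of magnitude from Georgeson (1985)
scales = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # assumed filter scales (arbitrary blur units)

def filter_responses(blur):
    """Toy response magnitudes of scale-tuned filters to an image of the given blur."""
    return np.exp(-(np.log(scales) - np.log(blur)) ** 2)

adapter_resp = filter_responses(8.0)             # responses to a blurred adapter
test_resp = filter_responses(4.0)                # responses to a more focused test

adapted = np.maximum(test_resp - k * adapter_resp, 0.0)   # subtractive ('fatigue') adaptation

# Log-scale centroid as a crude blur read-out: it shifts below 4 after blur adaptation,
# i.e. the test is coded as sharper than it really is.
perceived_scale = np.exp(np.sum(adapted * np.log(scales)) / np.sum(adapted))
print(perceived_scale)
```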
Abstract:
When a textured surface is modulated in depth and illuminated, the level of illumination varies across the surface, producing coarse-scale luminance modulations (LM) and amplitude modulation (AM) of the fine-scale texture. If the surface has an albedo texture (reflectance variation) then the LM and AM components are always in-phase, but if the surface has a relief texture the phase relation between LM and AM varies with the direction and nature of the illuminant. We showed observers sinusoidal luminance and amplitude modulations of a binary noise texture, in various phase relationships, in a paired-comparisons design. In the first experiment, the combinations under test were presented in different temporal intervals. Observers indicated which interval contained the more depthy stimulus. LM and AM in-phase were seen as more depthy than LM alone, which was in turn more depthy than LM and AM in anti-phase, but the differences were weak. In the second experiment, the combinations under test were presented in a single interval on opposite obliques of a plaid pattern. Observers were asked to indicate the more depthy oblique. Observers produced the same depth rankings as before, but now the effects were more robust and significant. Intermediate LM/AM phase relationships were also tested: phase differences less than 90 deg were seen as more depthy than LM-only, while those greater than 90 deg were seen as less depthy. We conjecture that the visual system construes phase offsets between LM and AM as indicating relief texture and thus perceives these combinations as depthy even when their phase relationship is other than zero. However, when different LM/AM pairs are combined in a plaid, the signals on the obliques are unlikely to indicate corrugations of the same texture, and in this case the out-of-phase pairing is seen as flat. [Supported by the Engineering and Physical Sciences Research Council (EPSRC)].
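A minimal sketch of how a 1-D analogue of such a stimulus might be constructed (noise type, modulation depths and the combination rule are assumptions, not the stimuli used in the study): binary noise whose amplitude is modulated sinusoidally (AM) is added to a sinusoidal luminance modulation (LM), with a controllable LM/AM phase offset.

```python
import numpy as np

n = 512                                   # samples across the image (assumed)
x = np.arange(n)
f = 4.0 / n                               # modulation frequency: 4 cycles across the image (assumed)

rng = np.random.default_rng(0)
noise = rng.choice([-1.0, 1.0], size=n)   # binary noise carrier

m_lm, m_am = 0.2, 0.5                     # LM and AM modulation depths (assumed)
phase = 0.0                               # LM/AM phase offset in radians (0 = in-phase)

lm = m_lm * np.sin(2 * np.pi * f * x)                  # luminance modulation
am = 1.0 + m_am * np.sin(2 * np.pi * f * x + phase)    # amplitude envelope for the noise
image = 0.5 * (1.0 + lm + 0.25 * am * noise)           # combine; 0.25 sets texture contrast

image = np.clip(image, 0.0, 1.0)          # keep within the displayable range
```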
Abstract:
The ability to distinguish one visual stimulus from another slightly different one depends on the variability of their internal representations. In a recent paper on human visual-contrast discrimination, Kontsevich et al. (2002 Vision Research 42 1771 - 1784) re-considered the long-standing question of whether the internal noise that limits discrimination is fixed (contrast-invariant) or variable (contrast-dependent). They tested discrimination performance for 3 cycles deg⁻¹ gratings over a wide range of incremental contrast levels at three masking contrasts, and showed that a simple model with an expansive response function and response-dependent noise could fit the data very well. Their conclusion - that noise in visual-discrimination tasks increases markedly with contrast - has profound implications for our understanding and modelling of vision. Here, however, we re-analyse their data, and report that a standard gain-control model with a compressive response function and fixed additive noise can also fit the data remarkably well. Thus these experimental data do not allow us to decide between the two models. The question remains open. [Supported by EPSRC grant GR/S74515/01]
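To make the contrast between the two accounts concrete, here is a schematic sketch with generic functional forms and parameter values (not the fitted models from either paper): an expansive transducer with response-dependent noise versus a compressive gain-control transducer with fixed additive noise, with discriminability taken as the response difference divided by the noise at the pedestal.

```python
import numpy as np

def expansive_response(c, p=2.4):
    """Expansive transducer (generic form, assumed exponent)."""
    return c ** p

def gain_control_response(c, p=2.4, q=2.0, z=0.01):
    """Compressive gain-control transducer (generic Naka-Rushton-like form)."""
    return c ** p / (z + c ** q)

def dprime(model, noise, c_ped, dc):
    """d' for detecting an increment dc on a pedestal contrast c_ped."""
    r0, r1 = model(c_ped), model(c_ped + dc)
    return (r1 - r0) / noise(r0)

fixed_noise = lambda r: 1e-3                 # contrast-invariant additive noise
variable_noise = lambda r: 1e-3 + 0.1 * r    # response-dependent noise

for c in (0.01, 0.1, 0.3):
    print(c,
          dprime(gain_control_response, fixed_noise, c, 0.01),   # compressive + fixed noise
          dprime(expansive_response, variable_noise, c, 0.01))   # expansive + variable noise
```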
Abstract:
We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the 'signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it not only suppresses negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.
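A minimal numerical sketch of the contrast effect described above (first derivative, then half-wave rectification with a small positive threshold). The second differentiation and template-matching stages are omitted here; the spatial extent of the surviving gradient is used as a crude proxy for the coded blur, and the threshold value and edge parameters are assumptions for illustration only.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-60, 60, 1201)             # position in min of arc (assumed sampling)

def edge(contrast, blur):
    """Gaussian-blurred edge; contrast in 0-1, blur = Gaussian sigma in arcmin."""
    return 0.5 + 0.5 * contrast * erf(x / (np.sqrt(2) * blur))

def rectified_gradient(profile, threshold=2e-4):
    """1st derivative followed by a half-wave rectifier with a small positive threshold."""
    g1 = np.gradient(profile, x)
    return np.where(g1 > threshold, g1 - threshold, 0.0)   # removes negatives AND small positives

def effective_width(profile):
    """Spatial extent of the gradient signal surviving the rectifier (proxy for coded blur)."""
    r = rectified_gradient(profile)
    survivors = x[r > 0]
    return survivors.max() - survivors.min() if survivors.size else 0.0

# Same physical blur (16 arcmin), different contrasts: at low contrast more of the
# gradient falls below threshold, so the surviving profile is narrower (coded as sharper).
print(effective_width(edge(0.40, 16)), effective_width(edge(0.05, 16)))
```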
Abstract:
Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
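A minimal sketch of the template idea (the 1-D template shape, the fitting rule and all parameters are illustrative assumptions; the length/width covariation of the 2-D receptive fields is not modelled): the edge's second-derivative 'signature' is compared with unit-norm Gaussian-derivative-shaped templates of different scales, and the best-matching scale is taken as the blur code.

```python
import numpy as np
from scipy.special import erf

x = np.linspace(-60, 60, 1201)                    # position in min of arc (assumed sampling)

def edge_profile(contrast, blur):
    """Gaussian-blurred edge; blur = Gaussian sigma in arcmin."""
    return 0.5 + 0.5 * contrast * erf(x / (np.sqrt(2) * blur))

def signature(profile):
    """2nd derivative of the luminance profile, taken here as the input signature."""
    return np.gradient(np.gradient(profile, x), x)

def template(sigma):
    """Odd Gaussian-derivative-shaped template of scale sigma (unit norm); a 1-D stand-in
    matched to the signature of a Gaussian-blurred edge."""
    t = -x * np.exp(-x ** 2 / (2 * sigma ** 2))
    return t / np.linalg.norm(t)

def best_template_scale(profile, scales=np.arange(2.0, 40.0, 0.5)):
    """Scale of the template best matching the edge signature at its zero-crossing."""
    sig = signature(profile)
    fits = [np.dot(sig, template(s)) for s in scales]
    return scales[np.argmax(fits)]

print(best_template_scale(edge_profile(0.30, 10.0)))   # recovers roughly the 10-arcmin blur
```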
Abstract:
To investigate amblyopic contrast vision at threshold and above, we performed pedestal-masking (contrast discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3 c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.
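To illustrate the kind of 'lesioning' exercise described (this is a generic two-stage gain-control sketch with assumed functional forms and parameters, not the published model or its fitted values): the amblyopic eye's signal is attenuated and the noise is increased, while interocular suppression and binocular summation are left intact; for simplicity the extra noise is lumped at the decision stage rather than placed monocularly.

```python
import numpy as np

def binocular_response(cL, cR, attenL=1.0, s=0.01, z=0.08):
    """Generic two-stage binocular contrast gain control (illustrative, NOT the published
    parameterization): stage 1 applies interocular divisive suppression to each eye's
    (possibly attenuated) input; stage 2 sums the eyes and compresses."""
    L, R = attenL * cL, cR
    stage1_L = L ** 2.0 / (s + L + R)     # stage 1, left eye, suppressed by both eyes
    stage1_R = R ** 2.0 / (s + L + R)     # stage 1, right eye
    b = stage1_L + stage1_R               # stage 2: binocular summation ...
    return b ** 1.3 / (z + b)             # ... followed by a further nonlinearity

def dprime(c_ped, dc, attenL=1.0, sigma=1e-3):
    """d' for a binocular contrast increment; sigma is late noise, increased here as a
    simple stand-in for extra noise in the amblyopic pathway."""
    return (binocular_response(c_ped + dc, c_ped + dc, attenL)
            - binocular_response(c_ped, c_ped, attenL)) / sigma

print(dprime(0.1, 0.02))                           # 'normal' observer
print(dprime(0.1, 0.02, attenL=0.5, sigma=3e-3))   # lesioned: attenuation + extra noise -> lower d'
```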
Abstract:
The roots of the concept of cortical columns stretch far back into the history of neuroscience. The impulse to compartmentalise the cortex into functional units can be seen at work in the phrenology of the beginning of the nineteenth century. At the beginning of the next century Korbinian Brodmann and several others published treatises on cortical architectonics. Later, in the middle of that century, Lorente de Nó wrote of chains of ‘reverberatory’ neurons orthogonal to the pial surface of the cortex and called them ‘elementary units of cortical activity’. This is the first hint that a columnar organisation might exist. With the advent of microelectrode recording, first Vernon Mountcastle (1957) and then David Hubel and Torsten Wiesel provided evidence consistent with the idea that columns might constitute units of physiological activity. This idea was backed up in the 1970s by clever histochemical techniques and culminated in Hubel and Wiesel’s well-known ‘ice-cube’ model of the cortex and Szentágothai’s brilliant iconography. The cortical column can thus be seen as the terminus ad quem of several great lines of neuroscientific research: currents originating in phrenology and passing through cytoarchitectonics; currents originating in neurocytology and passing through Lorente de Nó. Famously, Huxley noted the tragedy of a beautiful hypothesis destroyed by an ugly fact. Famously, too, human visual perception is orientated toward seeing edges and demarcations when, perhaps, they are not there. Recently the concept of cortical columns has come in for the same radical criticism that undermined the architectonics of the early part of the twentieth century. Does history repeat itself? This paper reviews that history and asks the question.
Abstract:
The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Due to the mixing of the activities of large neuronal populations in the observed macroscopic aggregate, recovering the underlying network that generates the signal in the absence of any additional information represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518–530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are to be viewed as macroscopic aggregates arising from networks of coupled systems as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can indeed be recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state dependent changes in the degree of local synchrony in the underlying network.
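A minimal simulation sketch of the framework (the choice of coupled system, the coupling strength and the dimensionality estimate are all illustrative assumptions): a network of coupled phase oscillators whose mean field plays the role of the macroscopic aggregate, with the aggregate's delay-embedded singular spectrum used as a rough proxy for the number of effective degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(1)
n, steps, dt = 50, 5000, 0.01
omega = rng.normal(10.0, 1.0, n)          # natural frequencies (rad/s), assumed
K = 4.0                                   # coupling strength, assumed
theta = rng.uniform(0, 2 * np.pi, n)

aggregate = np.empty(steps)
for t in range(steps):                    # Kuramoto-style network, Euler integration
    coupling = (K / n) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)
    theta = theta + dt * (omega + coupling)
    aggregate[t] = np.cos(theta).mean()   # macroscopic aggregate ("MEG-like" signal)

# Delay embedding of the aggregate; the number of dominant singular values is a
# crude stand-in for the effective degrees of freedom recoverable from the aggregate.
m, lag = 20, 5
rows = steps - (m - 1) * lag
embedded = np.stack([aggregate[i * lag: i * lag + rows] for i in range(m)], axis=1)
sv = np.linalg.svd(embedded - embedded.mean(0), compute_uv=False)
effective_dof = int(np.sum(sv > 0.05 * sv[0]))
print(effective_dof)
```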
Abstract:
It is well known that optic flow - the smooth transformation of the retinal image experienced by a moving observer - contains valuable information about the three-dimensional layout of the environment. From psychophysical and neurophysiological experiments, specialised mechanisms responsive to components of optic flow (sometimes called complex motion) such as expansion and rotation have been inferred. However, it remains unclear (a) whether the visual system has mechanisms for processing the component of deformation and (b) whether there are multiple mechanisms that function independently of each other. Here, we investigate these issues using random-dot patterns and a forced-choice subthreshold summation technique. In experiment 1, we manipulated the size of a test region that was permitted to contain signal and found substantial spatial summation for signal components of translation, expansion, rotation, and deformation embedded in noise. In experiment 2, little or no summation was found for the superposition of orthogonal pairs of complex motion patterns (eg expansion and rotation), consistent with probability summation between pairs of independent detectors. Our results suggest that optic-flow components are detected by mechanisms that are specialised for particular patterns of complex motion.
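The probability-summation benchmark invoked here is commonly formalised as Minkowski ('Quick pooling') combination of component sensitivities; the exponent below is an assumed value in the typical range, not one fitted to these data.

```latex
% Quick-pooling approximation to probability summation over independent detectors:
\[
  S_{A+B} = \left( S_A^{\beta} + S_B^{\beta} \right)^{1/\beta},
  \qquad
  S_A = S_B \;\Rightarrow\; \frac{S_{A+B}}{S_A} = 2^{1/\beta} \approx 1.19
  \quad (\beta = 4),
\]
% whereas summation within a single linear mechanism typically predicts a larger ratio.
```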
Abstract:
Growing evidence from psychophysics and single-unit recordings suggests specialised mechanisms in the primate visual system for the detection of complex motion patterns such as expansion and rotation. Here we used a subthreshold summation technique to determine the direction tuning functions of the detecting mechanisms. We measured thresholds for discriminating noise and signal + noise for pairs of superimposed complex motion patterns (signals A and B) carried by random-dot stimuli in a circular 5° field. For expansion, rotation, deformation and translation we found broad tuning functions approximated by cos(d), where d is the difference in dot directions for signals A and B. These data were well described by models in which either (a) cardinal mechanisms had direction bandwidths (half-widths) of around 60°, or (b) the number of mechanisms was increased and their half-width was reduced to about 40°. When d = 180° we found summation to be greater than probability summation for expansion, rotation and translation, consistent with the idea that mechanisms for these stimuli are constructed from subunits responsive to relative motion. For deformation, however, we found sensitivity declined when d = 180°, suggesting antagonistic input from directional subunits in the deformation mechanism. This is a necessary property for a mechanism whose job is to extract the deformation component from the optic flow field. © 2001 Elsevier Science Ltd.
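A minimal sketch of how summation-ratio predictions of this kind can be generated from a hypothesised mechanism layout (the tuning shape, the Minkowski pooling exponent and the within-mechanism combination rule are assumptions; the four cardinal mechanisms with 60° half-widths follow the abstract's model (a)):

```python
import numpy as np

def gaussian_tuning(pref, direction, half_width=60.0):
    """Mechanism response to a dot direction; Gaussian tuning with the given half-width (deg)."""
    sigma = half_width / np.sqrt(2.0 * np.log(2.0))       # half-width at half-height -> sigma
    d = (direction - pref + 180.0) % 360.0 - 180.0        # wrapped direction difference
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

prefs = np.arange(0.0, 360.0, 90.0)      # four cardinal mechanisms
beta = 4.0                               # assumed Minkowski pooling exponent

single = np.sum(gaussian_tuning(prefs, 0.0) ** beta) ** (1.0 / beta)
for d in (0.0, 45.0, 90.0, 135.0, 180.0):
    resp = gaussian_tuning(prefs, 0.0) + gaussian_tuning(prefs, d)   # linear sum within mechanisms
    pooled = np.sum(resp ** beta) ** (1.0 / beta)
    print(int(d), pooled / single)       # predicted summation ratio vs. one component alone
```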
Abstract:
The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color, where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found substantial evidence of cognitive color differences between different language communities, but concerns remained as to how representative a tiny, extremely remote community might be. The present study replicates and extends previous findings using additional paradigms among a larger community in a different visual environment. Adult semi-nomadic tribesmen in Southern Africa carried out similarity judgments, short-term memory and long-term learning tasks. They showed a cognitive organization of color different from that of both English and another language with five color terms. Moreover, Categorical Perception effects were found to differ even between languages with broadly similar color categories. The results provide further evidence of the tight relationship between language and cognition.
Abstract:
Adapting to blurred or sharpened images alters perceived blur of a focused image (M. A. Webster, M. A. Georgeson, & S. M. Webster, 2002). We asked whether blur adaptation results in (a) renormalization of perceived focus or (b) a repulsion aftereffect. Images were checkerboards or 2-D Gaussian noise, whose amplitude spectra had (log-log) slopes from -2 (strongly blurred) to 0 (strongly sharpened). Observers adjusted the spectral slope of a comparison image to match different test slopes after adaptation to blurred or sharpened images. Results did not show repulsion effects but were consistent with some renormalization. Test blur levels at and near a blurred or sharpened adaptation level were matched by more focused slopes (closer to 1/f) but with little or no change in appearance after adaptation to focused (1/f) images. A model of contrast adaptation and blur coding by multiple-scale spatial filters predicts these blur aftereffects and those of Webster et al. (2002). A key proposal is that observers are pre-adapted to natural spectra, and blurred or sharpened spectra induce changes in the state of adaptation. The model illustrates how norms might be encoded and recalibrated in the visual system even when they are represented only implicitly by the distribution of responses across multiple channels.
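A minimal sketch of how noise images with a specified (log-log) amplitude-spectrum slope can be generated (image size, method and normalisation are assumptions): filter white Gaussian noise in the Fourier domain so that amplitude falls as f^slope, where a slope of -1 corresponds to a focused 1/f image, -2 to strong blur and 0 to strong sharpening.

```python
import numpy as np

def noise_with_spectral_slope(size=256, slope=-1.0, seed=0):
    """2-D Gaussian noise whose amplitude spectrum falls as f**slope (slope = -1 ~ focused 1/f)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((size, size))
    fx = np.fft.fftfreq(size)
    f = np.sqrt(fx[None, :] ** 2 + fx[:, None] ** 2)
    f[0, 0] = 1.0                                  # avoid divide-by-zero at DC
    spectrum = np.fft.fft2(noise) * f ** slope     # impose the desired amplitude slope
    img = np.real(np.fft.ifft2(spectrum))
    return (img - img.mean()) / img.std()          # normalise contrast

blurred, focused, sharpened = (noise_with_spectral_slope(slope=s) for s in (-2.0, -1.0, 0.0))
```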
Abstract:
The timeline imposed by recent worldwide chemical legislation is not amenable to conventional in vivo toxicity testing, requiring the development of rapid, economical in vitro screening strategies that have acceptable predictive capacities. When acquiring regulatory neurotoxicity data, it is essential to distinguish whether a toxic agent affects neurons and/or astrocytes. This study evaluated neurofilament (NF) and glial fibrillary acidic protein (GFAP) directed single-cell (S-C) ELISA and flow cytometry as methods for distinguishing cell-specific cytoskeletal responses, using the established human NT2 neuronal/astrocytic (NT2.N/A) co-culture model and a range of neurotoxic (acrylamide, atropine, caffeine, chloroquine, nicotine) and non-neurotoxic (chloramphenicol, rifampicin, verapamil) test chemicals. NF and GFAP directed flow cytometry was able to identify several of the test chemicals as being specifically neurotoxic (chloroquine, nicotine) or astrocytoxic (atropine, chloramphenicol) via quantification of cell death in the NT2.N/A model at cytotoxic concentrations using the resazurin cytotoxicity assay. Those neurotoxicants with low associated cytotoxicity are the most significant in terms of potential hazard to the human nervous system. The NF and GFAP directed S-C ELISA data predominantly demonstrated that only the known neurotoxicants affected the neuronal and/or astrocytic cytoskeleton in the NT2.N/A cell model at concentrations below those affecting cell viability. This report concluded that NF and GFAP directed S-C ELISA and flow cytometric methods may prove to be valuable additions to an in vitro screening strategy for differentiating cytotoxicity from specific neuronal and/or astrocytic toxicity. Further work using the NT2.N/A model and a broader array of toxicants is appropriate in order to confirm the applicability of these methods.