946 results for Human Visual System
Abstract:
Colour changes in fiddler crabs have long been noted, but a functional interpretation is still lacking. Here we report that neighbouring populations of Uca vomeris in Australia exhibit different degrees of carapace colouration, ranging from dull mottled to brilliant blue and white. We determined the spectral characteristics of the mud substratum and of the carapace colours of U. vomeris and found that the mottled colours of crabs are cryptic against this background, while display colours provide strong colour contrast for both birds and crabs, but luminance contrast only for a crab visual system. We tested whether crab populations may become cryptic under the influence of bird predation by counting birds overflying or feeding on differently coloured colonies. Colonies with cryptically coloured crabs indeed experience a much higher level of bird presence than colourful colonies. We show, in addition, that colourful individuals subjected to dummy bird predation change their body colouration within a matter of days. The crabs thus appear to modify their social signalling system depending on their assessment of predation risk.
Abstract:
The goals of this study are to determine relationships between synaptogenesis and morphogenesis within the mushroom body calyx of the honeybee Apis mellifera and to find out how the microglomerular structure characteristic of the mature calyx is established during metamorphosis. We show that synaptogenesis in the mushroom body calycal neuropile starts in early metamorphosis (stages P1-P3), before the microglomerular structure of the neuropile is established. The initial step of synaptogenesis is characterized by the rare occurrence of distinct synaptic contacts. A massive synaptogenesis starts at stage P5, which coincides with the formation of microglomeruli, structural units of the calyx that are composed of centrally located presynaptic boutons surrounded by spiny postsynaptic endings. Microglomeruli are assembled either via accumulation of fine postsynaptic processes around preexisting presynaptic boutons or via ingrowth of thin neurites of presynaptic neurons into premicroglomeruli, tightly packed groups of spiny endings. During late pupal stages (P8-P9), addition of new synapses and microglomeruli is likely to continue. Most of the synaptic appositions formed there are made by boutons (putative extrinsic mushroom body neurons) onto small postsynaptic profiles that do not exhibit presynaptic specializations (putative intrinsic mushroom body neurons). Synapses between presynaptic boutons, characteristic of the adult calyx, first appear at stage P8 but remain rare toward the end of metamorphosis. Our observations are consistent with the hypothesis that most of the synapses established during metamorphosis provide the structural basis for afferent information flow to calyces, whereas maturation of local synaptic circuitry is likely to occur after adult emergence.
Abstract:
Research on sensory processing, that is, the way animals see, hear, smell, taste, feel and electrically and magnetically sense their environment, has advanced a great deal over the last fifteen years. This book discusses the most important themes that have emerged from recent research and provides a summary of likely future directions. The book starts with two sections on the detection of sensory signals over long and short ranges by aquatic animals, covering the topics of navigation, communication, and finding food and other localized sources. The next section, on the co-evolution of signal and sense, deals with how animals decide whether the source is prey, predator or mate by utilizing receptors that have evolved to take full advantage of the acoustical properties of the signal. Organisms living in the deep-sea environment have also received a lot of recent attention, so the next section deals with visual adaptations to limited-light environments where sunlight is replaced by bioluminescence and the visual system has undergone changes to optimize light capture and sensitivity. The last section, on central co-ordination of sensory systems, covers how signals are processed and filtered for use by the animal. This book will be essential reading for all researchers and graduate students interested in sensory systems.
Abstract:
High-level cognitive factors, including self-awareness, are believed to play an important role in human visual perception. The principal aim of this study was to determine whether oscillatory brain rhythms play a role in the neural processes involved in self-monitoring attentional status. To do so we measured cortical activity using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) while participants were asked to self-monitor their internal status, only initiating the presentation of a stimulus when they perceived their attentional focus to be maximal. We employed a hierarchical Bayesian method that uses fMRI results as soft-constrained spatial information to solve the MEG inverse problem, allowing us to estimate cortical currents on the order of millimeters and milliseconds. Our results show that, during self-monitoring of internal status, there was a sustained decrease in power within the 7-13 Hz (alpha) range in the rostral cingulate motor area (rCMA) on the human medial wall, beginning approximately 430 msec after the trial start (p < 0.05, FDR corrected). We also show that gamma-band power (41-47 Hz) within this area was positively correlated with task performance from 40 to 640 msec after the trial start (r = 0.71, p < 0.05). We conclude: (1) the rCMA is involved in processes governing self-monitoring of internal status; and (2) the qualitative differences between alpha and gamma activity are reflective of their different roles in self-monitoring internal states. We suggest that alpha suppression may reflect a strengthening of top-down interareal connections, while a positive correlation between gamma activity and task performance indicates that gamma may play an important role in guiding visuomotor behavior. © 2013 Yamagishi et al.
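The authors' hierarchical Bayesian inverse method is not reproduced here; as a rough illustration of the general idea of using fMRI as a soft spatial constraint on the MEG inverse problem, the sketch below implements a much simpler fMRI-weighted minimum-norm estimate. The lead field, weights and regularisation constant are invented placeholders, not the method used in the study.

```python
import numpy as np

# Toy dimensions: 50 MEG sensors, 500 cortical sources, one time sample.
rng = np.random.default_rng(0)
n_sensors, n_sources = 50, 500
leadfield = rng.standard_normal((n_sensors, n_sources))   # forward model (assumed known)
fmri_map = rng.random(n_sources)                           # fMRI activation in [0, 1]

# Soft constraint: prior source variances biased toward fMRI-active locations,
# but never zero, so the MEG data can still override the prior.
W = np.diag(0.1 + 0.9 * fmri_map)

# Simulate a measurement from a few active sources plus sensor noise.
j_true = np.zeros(n_sources)
j_true[rng.choice(n_sources, 5, replace=False)] = 1.0
b = leadfield @ j_true + 0.05 * rng.standard_normal(n_sensors)

# Weighted minimum-norm estimate: J = W L^T (L W L^T + lam * I)^{-1} b
lam = 1.0
gram = leadfield @ W @ leadfield.T + lam * np.eye(n_sensors)
j_hat = W @ leadfield.T @ np.linalg.solve(gram, b)
print("correlation with simulated sources:", round(float(np.corrcoef(j_hat, j_true)[0, 1]), 2))
```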
Abstract:
Purpose: Many practitioners base the prescription of near vision additions on the assertion that only one half or two-thirds of an individual’s amplitude of accommodation is sustainable for a prolonged period. To better understand how much eye focus needs to be restored for presbyopic corrections to be adequate, this study investigated the robustness of the pre-presbyopic human accommodative system during a sustained and intensive near vision task. Methods: Twenty-one pre-presbyopic volunteers (aged 26.1 ± 4.7 years) participated in the study. Binocular subjective amplitude of accommodation was measured before and after a prolonged reading exercise, using the RAF rule. During the 30 min reading task, the subject’s closest comfortable eye-to-text distance and pupil size were monitored. Accommodative accuracy to 0.2, 1.0, 2.0, 3.0 and 4.0 D stimuli was determined objectively using a validated binocular open-view autorefractor immediately before and after the reading task. Results: Amplitude of accommodation (p = 0.09) and accommodative accuracy (p > 0.05) were statistically unchanged following the intensive near task. The mean proportion of accommodation exerted throughout the near exercise was 80.6% (range 45.3 ± 3.7 to 96.6 ± 4.3%), which increased as the task progressed (F = 2.24, p = 0.02). The mean percentage of accommodation utilised increased with subject age (r = 0.517, p = 0.016). Conclusion: The pre-presbyopic human accommodative system is robust to fatigue during intense and prolonged near work. A greater proportion of one’s amplitude of accommodation may be continuously exerted than previously suggested.
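As a worked illustration of the quantity reported above (the proportion of the accommodative amplitude exerted), the snippet below converts a working distance into accommodative demand in dioptres and expresses it as a fraction of a subjective amplitude; the numbers are illustrative and not data from the study.

```python
# Accommodative demand (D, dioptres) is the reciprocal of the working distance in metres.
def proportion_exerted(working_distance_cm: float, amplitude_D: float) -> float:
    demand_D = 100.0 / working_distance_cm
    return demand_D / amplitude_D

# Illustrative numbers only: a 12.5 cm closest comfortable text distance (8 D demand)
# against a 10 D subjective amplitude means 80% of the amplitude is in use.
print(proportion_exerted(12.5, 10.0))   # 0.8
```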
Abstract:
Adapting to blurred images makes in-focus images look too sharp, and vice versa (Webster et al, 2002 Nature Neuroscience 5 839 - 840). We asked how such blur adaptation is related to contrast adaptation. Georgeson (1985 Spatial Vision 1 103 - 112) found that grating contrast adaptation followed a subtractive rule: perceived (matched) contrast of a grating was fairly well predicted by subtracting some fraction k (~0.3) of the adapting contrast from the test contrast. Here we apply that rule to the responses of a set of spatial filters at different scales and orientations. Blur is encoded by the pattern of filter response magnitudes over scale. We tested two versions - the 'norm model' and 'fatigue model' - against blur-matching data obtained after adaptation to sharpened, in-focus or blurred images. In the fatigue model, filter responses are simply reduced by exposure to the adapter. In the norm model, (a) the visual system is pre-adapted to a focused world and (b) discrepancy between observed and expected responses to the experimental adapter leads to additional reduction (or enhancement) of filter responses during experimental adaptation. The two models are closely related, but only the norm model gave a satisfactory account of results across the four experiments analysed, with one free parameter k. This model implies that the visual system is pre-adapted to focused images, that adapting to in-focus or blank images produces no change in adaptation, and that adapting to sharpened or blurred images changes the state of adaptation, leading to changes in perceived blur or sharpness.
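A minimal numerical sketch of the subtractive rule and of the fatigue/norm distinction is given below. The fraction k = 0.3 and the idea of coding blur by the pattern of filter responses over scale follow the abstract; the filter bank, the log-Gaussian response profiles and the blur read-out are toy assumptions.

```python
import numpy as np

scales = np.array([1.0, 2.0, 4.0, 8.0, 16.0])    # filter scales (arbitrary units)

def responses(blur):
    # Toy filter-response pattern over scale: blurrier images push energy to coarser scales.
    return np.exp(-0.5 * np.log(scales / blur) ** 2)

k = 0.3                                  # subtractive fraction, as in Georgeson (1985)
norm = responses(4.0)                    # expected responses to a focused world
adapter = responses(8.0)                 # blurred adapting image
test = responses(4.0)                    # in-focus test image

fatigue_model = test - k * adapter               # responses simply reduced by the adapter
norm_model = test - k * (adapter - norm)         # reduced only by the deviation from the norm

def perceived_blur(resp):
    resp = np.clip(resp, 0.0, None)
    return np.sum(scales * resp) / np.sum(resp)  # blur read out as response-weighted mean scale

print(perceived_blur(test), perceived_blur(fatigue_model), perceived_blur(norm_model))
```

With an in-focus adapter (equal to the norm), the norm model predicts no change in perceived blur while the fatigue model still produces a shift, which is the key difference between the two accounts.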
Abstract:
When a textured surface is modulated in depth and illuminated, the level of illumination varies across the surface, producing coarse-scale luminance modulations (LM) and amplitude modulation (AM) of the fine-scale texture. If the surface has an albedo texture (reflectance variation) then the LM and AM components are always in phase, but if the surface has a relief texture the phase relation between LM and AM varies with the direction and nature of the illuminant. We showed observers sinusoidal luminance and amplitude modulations of a binary noise texture, in various phase relationships, in a paired-comparisons design. In the first experiment, the combinations under test were presented in different temporal intervals. Observers indicated which interval contained the more depthy stimulus. LM and AM in phase were seen as more depthy than LM alone, which was in turn more depthy than LM and AM in anti-phase, but the differences were weak. In the second experiment the combinations under test were presented in a single interval on opposite obliques of a plaid pattern. Observers were asked to indicate the more depthy oblique. Observers produced the same depth rankings as before, but now the effects were more robust and significant. Intermediate LM/AM phase relationships were also tested: phase differences less than 90 deg were seen as more depthy than LM-only, while those greater than 90 deg were seen as less depthy. We conjecture that the visual system construes phase offsets between LM and AM as indicating relief texture and thus perceives these combinations as depthy even when their phase relationship is other than zero. However, when different LM/AM pairs are combined in a plaid, the signals on the obliques are unlikely to indicate corrugations of the same texture, and in this case the out-of-phase pairing is seen as flat. [Supported by the Engineering and Physical Sciences Research Council (EPSRC)].
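To make the stimulus construction concrete, the sketch below applies a sinusoidal luminance modulation (LM) and an amplitude modulation (AM) to a binary noise carrier at a chosen LM/AM phase offset. Spatial frequency, modulation depths and image size are arbitrary choices, not the experimental values.

```python
import numpy as np

size, cycles = 256, 4.0        # image size in pixels; modulation cycles across the image
m_lm, m_am = 0.2, 0.5          # LM and AM modulation depths (arbitrary)
carrier_contrast = 0.3
phase_offset = 0.0             # 0 = in-phase, np.pi = anti-phase

rng = np.random.default_rng(1)
noise = rng.choice([-1.0, 1.0], size=(size, size))          # binary noise carrier
phase = np.linspace(0.0, 2.0 * np.pi * cycles, size, endpoint=False)

lm = m_lm * np.sin(phase)                                    # coarse-scale luminance modulation
am = 1.0 + m_am * np.sin(phase + phase_offset)               # modulation of carrier amplitude
image = 1.0 + lm[None, :] + carrier_contrast * am[None, :] * noise
```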
Abstract:
The ability to distinguish one visual stimulus from another slightly different one depends on the variability of their internal representations. In a recent paper on human visual-contrast discrimination, Kontsevich et al (2002 Vision Research 42 1771 - 1784) reconsidered the long-standing question of whether the internal noise that limits discrimination is fixed (contrast-invariant) or variable (contrast-dependent). They tested discrimination performance for 3 cycles per degree gratings over a wide range of incremental contrast levels at three masking contrasts, and showed that a simple model with an expansive response function and response-dependent noise could fit the data very well. Their conclusion - that noise in visual-discrimination tasks increases markedly with contrast - has profound implications for our understanding and modelling of vision. Here, however, we re-analyse their data, and report that a standard gain-control model with a compressive response function and fixed additive noise can also fit the data remarkably well. Thus these experimental data do not allow us to decide between the two models. The question remains open. [Supported by EPSRC grant GR/S74515/01]
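The two candidate accounts can be written down compactly. The sketch below implements a generic compressive gain-control transducer paired with fixed additive noise, and an expansive transducer paired with response-dependent noise, and derives a discrimination threshold at each pedestal contrast from a d' = 1 criterion; the exponents and constants are placeholders, not the fitted values from either paper.

```python
import numpy as np

def r_gain(c, p=2.4, q=2.0, z=0.01):
    # Compressive gain-control transducer; paired with fixed additive noise.
    return c ** p / (z + c ** q)

def r_expansive(c, p=2.5):
    # Expansive transducer; paired with response-dependent noise.
    return c ** p

def threshold(pedestal, response, noise_at):
    # Smallest contrast increment dc for which d' = delta response / noise reaches 1.
    dc = np.linspace(1e-5, 1.0, 200_000)
    dprime = (response(pedestal + dc) - response(pedestal)) / noise_at(pedestal + dc)
    return dc[np.argmax(dprime >= 1.0)]

fixed_noise = lambda c: 0.01                            # contrast-invariant noise (placeholder)
growing_noise = lambda c: 0.001 + 0.3 * r_expansive(c)  # noise grows with response (placeholder)

for pedestal in [0.0, 0.01, 0.03, 0.1, 0.3]:
    print(pedestal,
          round(float(threshold(pedestal, r_gain, fixed_noise)), 4),
          round(float(threshold(pedestal, r_expansive, growing_noise)), 4))
```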
Abstract:
We studied the visual mechanisms that encode edge blur in images. Our previous work suggested that the visual system spatially differentiates the luminance profile twice to create the 'signature' of the edge, and then evaluates the spatial scale of this signature profile by applying Gaussian derivative templates of different sizes. The scale of the best-fitting template indicates the blur of the edge. In blur-matching experiments, a staircase procedure was used to adjust the blur of a comparison edge (40% contrast, 0.3 s duration) until it appeared to match the blur of test edges at different contrasts (5% - 40%) and blurs (6 - 32 min of arc). Results showed that lower-contrast edges looked progressively sharper. We also added a linear luminance gradient to blurred test edges. When the added gradient was of opposite polarity to the edge gradient, it made the edge look progressively sharper. Both effects can be explained quantitatively by the action of a half-wave rectifying nonlinearity that sits between the first and second (linear) differentiating stages. This rectifier was introduced to account for a range of other effects on perceived blur (Barbieri-Hesse and Georgeson, 2002 Perception 31 Supplement, 54), but it readily predicts the influence of the negative ramp. The effect of contrast arises because the rectifier has a threshold: it suppresses not only negative values but also small positive values. At low contrasts, more of the gradient profile falls below threshold and its effective spatial scale shrinks in size, leading to perceived sharpening.
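The proposed pipeline (differentiate, half-wave rectify with a threshold, differentiate again, then read a spatial scale from the resulting signature) can be sketched as follows; the stimulus values, the rectifier threshold and the scale read-out are illustrative assumptions rather than the fitted model.

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 1201)          # position in min of arc

def blurred_edge(contrast, blur):
    # Step of amplitude `contrast`, blurred by a Gaussian of s.d. `blur` (min of arc).
    step = np.where(x >= 0.0, 0.5 * contrast, -0.5 * contrast)
    g = np.exp(-0.5 * (x / blur) ** 2)
    return np.convolve(step, g / g.sum(), mode="same")

def signature_scale(contrast, blur, threshold=0.001):
    d1 = np.gradient(blurred_edge(contrast, blur), x)       # first (linear) derivative
    d1 = np.where(d1 > threshold, d1 - threshold, 0.0)      # half-wave rectifier with a threshold
    d2 = np.gradient(d1, x)                                  # second (linear) derivative
    return x[np.argmin(d2)] - x[np.argmax(d2)]               # peak-to-trough span of the signature

print(signature_scale(0.40, 16.0))   # high-contrast edge: larger signature scale
print(signature_scale(0.05, 16.0))   # low-contrast edge: scale shrinks, so it looks sharper
```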
Abstract:
Edge blur is an important perceptual cue, but how does the visual system encode the degree of blur at edges? Blur could be measured by the width of the luminance gradient profile, peak-to-trough separation in the 2nd derivative profile, or the ratio of 1st-to-3rd derivative magnitudes. In template models, the system would store a set of templates of different sizes and find which one best fits the 'signature' of the edge. The signature could be the luminance profile itself, or one of its spatial derivatives. I tested these possibilities in blur-matching experiments. In a 2AFC staircase procedure, observers adjusted the blur of Gaussian edges (30% contrast) to match the perceived blur of various non-Gaussian test edges. In experiment 1, test stimuli were mixtures of 2 Gaussian edges (eg 10 and 30 min of arc blur) at the same location, while in experiment 2, test stimuli were formed from a blurred edge sharpened to different extents by a compressive transformation. Predictions of the various models were tested against the blur-matching data, but only one model was strongly supported. This was the template model, in which the input signature is the 2nd derivative of the luminance profile, and the templates are applied to this signature at the zero-crossings. The templates are Gaussian derivative receptive fields that covary in width and length to form a self-similar set (ie same shape, different sizes). This naturally predicts that shorter edges should look sharper. As edge length gets shorter, responses of longer templates drop more than shorter ones, and so the response distribution shifts towards shorter (smaller) templates, signalling a sharper edge. The data confirmed this, including the scale-invariance implied by self-similarity, and a good fit was obtained from templates with a length-to-width ratio of about 1. The simultaneous analysis of edge blur and edge location may offer a new solution to the multiscale problem in edge detection.
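A toy version of the favoured template model is sketched below: the second-derivative 'signature' of a blurred edge is correlated with odd Gaussian-derivative templates of different sizes, and the best-fitting scale is taken as the blur estimate. The template set, scales and matching score are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-80.0, 80.0, 641)                 # position in min of arc

def signature(contrast, blur):
    # The 'signature': second spatial derivative of a Gaussian-blurred edge profile.
    g = np.exp(-0.5 * (x / blur) ** 2)
    edge = contrast * (np.cumsum(g) / np.sum(g) - 0.5)
    return np.gradient(np.gradient(edge, x), x)

def template(scale):
    # Odd Gaussian-derivative template; unit norm, so templates form a self-similar set.
    t = -x * np.exp(-0.5 * (x / scale) ** 2)
    return t / np.linalg.norm(t)

scales = np.arange(4.0, 40.0, 1.0)
sig = signature(0.30, 16.0)
scores = [abs(np.dot(template(s), sig)) for s in scales]
print("best-fitting template scale:", scales[int(np.argmax(scores))])   # ~16 min of arc
```

In this 1-D toy the best-fitting scale tracks the edge's Gaussian blur; the covariation of template length and width that predicts the effect of edge length would need a 2-D version.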
Abstract:
To investigate amblyopic contrast vision at threshold and above we performed pedestal-masking (contrast-discrimination) experiments with a group of eight strabismic amblyopes using horizontal sinusoidal gratings (mainly 3 c/deg) in monocular, binocular and dichoptic configurations balanced across eye (i.e. five conditions). With some exceptions in some observers, the four main results were as follows. (1) For the monocular and dichoptic conditions, sensitivity was less in the amblyopic eye than in the good eye at all mask contrasts. (2) Binocular and monocular dipper functions superimposed in the good eye. (3) Monocular masking functions had a normal dipper shape in the good eye, but facilitation was diminished in the amblyopic eye. (4) A less consistent result was normal facilitation in dichoptic masking when testing the good eye, but a loss of this when testing the amblyopic eye. This pattern of amblyopic results was replicated in a normal observer by placing a neutral density filter in front of one eye. The two-stage model of binocular contrast gain control [Meese, T.S., Georgeson, M.A. & Baker, D.H. (2006). Binocular contrast vision at and above threshold. Journal of Vision 6, 1224-1243.] was 'lesioned' in several ways to assess the form of the amblyopic deficit. The most successful model involves attenuation of signal and an increase in noise in the amblyopic eye, and intact stages of interocular suppression and binocular summation. This implies a behavioural influence from monocular noise in the amblyopic visual system as well as in normal observers with an ND filter over one eye.
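A schematic rendering of the kind of two-stage architecture described (interocular suppression at stage 1, binocular summation and a second gain-control stage after it), with the favoured 'lesion' implemented as signal attenuation plus extra noise in the amblyopic eye, is sketched below. The equations, exponents and constants are placeholders in the spirit of the model, not the published parameter values.

```python
import numpy as np

def stage1(c_this, c_other, s=1.0, m=1.3):
    # Stage 1: monocular contrast response with interocular suppression (contrast in %).
    return c_this ** m / (s + c_this + c_other)

def binocular_response(c_left, c_right, att_left=1.0, att_right=1.0, z=0.1, p=8.0, q=6.5):
    # Stage 2: binocular summation followed by a second gain-control stage.
    b = (stage1(att_left * c_left, att_right * c_right)
         + stage1(att_right * c_right, att_left * c_left))
    return b ** p / (z + b ** q)

def dprime(pedestal, increment, noise=0.2, **kw):
    # d' for detecting a contrast increment on a monocular pedestal shown to the left eye.
    return (binocular_response(pedestal + increment, 0.0, **kw)
            - binocular_response(pedestal, 0.0, **kw)) / noise

for pedestal in [0.0, 5.0, 20.0]:
    good_eye = dprime(pedestal, 2.0)
    amblyopic = dprime(pedestal, 2.0, noise=0.4, att_left=0.5)   # the 'lesion': attenuation + noise
    print(pedestal, round(float(good_eye), 2), round(float(amblyopic), 2))
```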
Abstract:
The roots of the concept of cortical columns stretch far back into the history of neuroscience. The impulse to compartmentalise the cortex into functional units can be seen at work in the phrenology of the beginning of the nineteenth century. At the beginning of the next century, Korbinian Brodmann and several others published treatises on cortical architectonics. Later, in the middle of that century, Lorente de No wrote of chains of ‘reverberatory’ neurons orthogonal to the pial surface of the cortex and called them ‘elementary units of cortical activity’. This is the first hint that a columnar organisation might exist. With the advent of microelectrode recording, first Vernon Mountcastle (1957) and then David Hubel and Torsten Wiesel provided evidence consistent with the idea that columns might constitute units of physiological activity. This idea was backed up in the 1970s by clever histochemical techniques and culminated in Hubel and Wiesel’s well-known ‘ice-cube’ model of the cortex and Szentágothai’s brilliant iconography. The cortical column can thus be seen as the terminus ad quem of several great lines of neuroscientific research: currents originating in phrenology and passing through cytoarchitectonics; currents originating in neurocytology and passing through Lorente de No. Famously, Huxley noted the tragedy of a beautiful hypothesis destroyed by an ugly fact. Famously, too, human visual perception is orientated toward seeing edges and demarcations when, perhaps, they are not there. Recently the concept of cortical columns has come in for the same radical criticism that undermined the architectonics of the early part of the twentieth century. Does history repeat itself? This paper reviews this history and asks the question.
Abstract:
The fundamental problem faced by noninvasive neuroimaging techniques such as EEG/MEG is to elucidate functionally important aspects of the microscopic neuronal network dynamics from macroscopic aggregate measurements. Due to the mixing of the activities of large neuronal populations in the observed macroscopic aggregate, recovering the underlying network that generates the signal in the absence of any additional information represents a considerable challenge. Recent MEG studies have shown that macroscopic measurements contain sufficient information to allow the differentiation between patterns of activity, which are likely to represent different stimulus-specific collective modes in the underlying network (Hadjipapas, A., Adjamian, P., Swettenham, J.B., Holliday, I.E., Barnes, G.R., 2007. Stimuli of varying spatial scale induce gamma activity with distinct temporal characteristics in human visual cortex. NeuroImage 35, 518–530). The next question arising in this context is whether aspects of collective network activity can be recovered from a macroscopic aggregate signal. We propose that this issue is most appropriately addressed if MEG/EEG signals are to be viewed as macroscopic aggregates arising from networks of coupled systems as opposed to aggregates across a mass of largely independent neural systems. We show that collective modes arising in a network of simulated coupled systems can be indeed recovered from the macroscopic aggregate. Moreover, we show that nonlinear state space methods yield a good approximation of the number of effective degrees of freedom in the network. Importantly, information about hidden variables, which do not directly contribute to the aggregate signal, can also be recovered. Finally, this theoretical framework can be applied to experimental MEG/EEG data in the future, enabling the inference of state dependent changes in the degree of local synchrony in the underlying network.
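As a toy version of the claim that collective modes of a coupled network can be recovered from a single macroscopic aggregate, the sketch below simulates a small network of globally coupled logistic maps, sums their activity into one 'sensor' signal, delay-embeds it, and counts effective degrees of freedom from the singular-value spectrum. The network, coupling strength and variance criterion are arbitrary choices, not the simulations reported in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, coupling = 20, 4000, 0.2
x = rng.random(n)
aggregate = np.empty(steps)

for t in range(steps):
    mean_field = x.mean()
    # Globally coupled logistic maps: local chaotic dynamics plus a common drive.
    x = (1 - coupling) * 3.9 * x * (1 - x) + coupling * 3.9 * mean_field * (1 - mean_field)
    aggregate[t] = x.sum()              # macroscopic aggregate "measurement"

# Delay embedding of the single aggregate signal.
dim, lag = 12, 1
emb = np.stack(
    [aggregate[i * lag : i * lag + steps - dim * lag] for i in range(dim)], axis=1
)
emb -= emb.mean(axis=0)

# Effective degrees of freedom: number of singular values needed to reach 95% variance.
s = np.linalg.svd(emb, compute_uv=False)
var = np.cumsum(s ** 2) / np.sum(s ** 2)
print("effective dimensions:", int(np.searchsorted(var, 0.95)) + 1)
```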
Abstract:
It is well known that optic flow - the smooth transformation of the retinal image experienced by a moving observer - contains valuable information about the three-dimensional layout of the environment. From psychophysical and neurophysiological experiments, specialised mechanisms responsive to components of optic flow (sometimes called complex motion) such as expansion and rotation have been inferred. However, it remains unclear (a) whether the visual system has mechanisms for processing the component of deformation and (b) whether there are multiple mechanisms that function independently from each other. Here, we investigate these issues using random-dot patterns and a forced-choice subthreshold summation technique. In experiment 1, we manipulated the size of a test region that was permitted to contain signal and found substantial spatial summation for signal components of translation, expansion, rotation, and deformation embedded in noise. In experiment 2, little or no summation was found for the superposition of orthogonal pairs of complex motion patterns (eg expansion and rotation), consistent with probability summation between pairs of independent detectors. Our results suggest that optic-flow components are detected by mechanisms that are specialised for particular patterns of complex motion.
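To make the optic-flow components concrete, the sketch below constructs dot velocity fields for translation, expansion, rotation and deformation over a random-dot field and checks that paired components such as expansion and rotation are orthogonal as vector fields; dot numbers and field size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_dots = 2000
xy = rng.uniform(-1.0, 1.0, size=(n_dots, 2))         # dot positions in a unit field
x, y = xy[:, 0], xy[:, 1]

flows = {
    "translation": np.stack([np.ones(n_dots), np.zeros(n_dots)], axis=1),
    "expansion":   np.stack([x, y], axis=1),             # radial outflow from the centre
    "rotation":    np.stack([-y, x], axis=1),             # solid-body rotation
    "deformation": np.stack([x, -y], axis=1),             # expand along x, contract along y
}

def inner(a, b):
    # Mean inner product of two velocity fields over the dot sample.
    return float(np.mean(np.sum(a * b, axis=1)))

print("expansion . rotation   :", round(inner(flows["expansion"], flows["rotation"]), 3))
print("expansion . deformation:", round(inner(flows["expansion"], flows["deformation"]), 3))
print("expansion . expansion  :", round(inner(flows["expansion"], flows["expansion"]), 3))
```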
Abstract:
Growing evidence from psychophysics and single-unit recordings suggests specialised mechanisms in the primate visual system for the detection of complex motion patterns such as expansion and rotation. Here we used a subthreshold summation technique to determine the direction tuning functions of the detecting mechanisms. We measured thresholds for discriminating noise and signal + noise for pairs of superimposed complex motion patterns (signal A and B) carried by random-dot stimuli in a circular 5° field. For expansion, rotation, deformation and translation we found broad tuning functions approximated by cos(d), where d is the difference in dot directions for signal A and B. These data were well described by models in which either: (a) cardinal mechanisms had direction bandwidths (half-widths) of around 60° or (b) the number of mechanisms was increased and their half-width was reduced to about 40°. When d = 180° we found summation to be greater than probability summation for expansion, rotation and translation, consistent with the idea that mechanisms for these stimuli are constructed from subunits responsive to relative motion. For deformation, however, we found sensitivity declined when d = 180°, suggesting antagonistic input from directional subunits in the deformation mechanism. This is a necessary property for a mechanism whose job is to extract the deformation component from the optic flow field. © 2001 Elsevier Science Ltd.
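The joint role of tuning width and pooling rule can be explored with a toy model: mechanisms with half-wave-rectified cosine direction tuning (half-width 60 deg, as in option (a) above) respond to a superimposed pair of patterns whose directions differ by d, and their outputs are pooled with a Minkowski exponent as a stand-in for probability summation. The mechanism lattice and exponent are placeholders, not fitted values.

```python
import numpy as np

prefs = np.deg2rad(np.arange(0.0, 360.0, 30.0))   # preferred directions of the mechanisms
minkowski_m = 4.0                                  # pooling exponent (~ probability summation)

def tuning(direction, pref):
    # Half-wave-rectified cosine tuning: half-width at half-height of 60 degrees.
    return np.maximum(0.0, np.cos(direction - pref))

def pooled(resp):
    return np.sum(resp ** minkowski_m) ** (1.0 / minkowski_m)

def summation_ratio(d_deg):
    # Sensitivity to the superimposed pair (signal A at 0 deg, signal B at d) relative to A alone.
    d = np.deg2rad(d_deg)
    alone = tuning(0.0, prefs)
    pair = tuning(0.0, prefs) + tuning(d, prefs)
    return pooled(pair) / pooled(alone)

for d in [0, 45, 90, 135, 180]:
    print(d, round(float(summation_ratio(d)), 2))
```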