970 results for Receptive Fields
Abstract:
Motion is a powerful cue for figure-ground segregation, allowing the recognition of shapes even if the luminance and texture characteristics of the stimulus and background are matched. In order to investigate the neural processes underlying early stages of the cue-invariant processing of form, we compared the responses of neurons in the striate cortex (V1) of anaesthetized marmosets to two types of moving stimuli: bars defined by differences in luminance, and bars defined solely by the coherent motion of random patterns that matched the texture and temporal modulation of the background. A population of form-cue-invariant (FCI) neurons was identified, which demonstrated similar tuning to the length of contours defined by first- and second-order cues. FCI neurons were relatively common in the supragranular layers (where they corresponded to 28% of the recorded units), but were absent from layer 4. Most had complex receptive fields, which were significantly larger than those of other V1 neurons. The majority of FCI neurons demonstrated end-inhibition in response to long first- and second-order bars, and were strongly direction selective. Thus, even at the level of V1 there are cells whose variations in response level appear to be determined by the shape and motion of the entire second-order object, rather than by its parts (i.e. the individual textural components). These results are compatible with the existence of an output channel from V1 to the ventral stream of extrastriate areas, which already encodes the basic building blocks of the image in an invariant manner.
Abstract:
The type 1 polyaxonal (PA1) cell is a distinct type of axon-bearing amacrine cell whose soma commonly occupies an interstitial position in the inner plexiform layer; the proximal branches of the sparse dendritic tree produce 1-4 axon-like processes, which form an extensive axonal arbor that is concentric with the smaller dendritic tree (Dacey, 1989; Famiglietti, 1992a,b). In this study, intracellular injections of Neurobiotin have revealed the complete dendritic and axonal morphology of the PA1 cells in the rabbit retina, as well as labeling the local array of PA1 cells through homologous tracer coupling. The dendritic-field area of the PA1 cells increased from a minimum of 0.15 mm² (0.44-mm equivalent diameter) on the visual streak to a maximum of 0.67 mm² (0.92-mm diameter) in the far periphery; the axonal-field area also showed a 3-fold variation across the retina, ranging from 3.1 mm² (2.0-mm diameter) to 10.2 mm² (3.6-mm diameter). The increase in dendritic- and axonal-field size was accompanied by a reduction in cell density, from 60 cells/mm² in the visual streak to 20 cells/mm² in the far periphery, so that the PA1 cells showed a 12-fold overlap of their dendritic fields across the retina and a 200- to 300-fold overlap of their axonal fields. Consequently, the axonal plexus was much denser than the dendritic plexus, with each square millimeter of retina containing ~100 mm of dendrites and ~1000 mm of axonal processes. The strong homologous tracer coupling revealed that ~45% of the PA1 somata were located in the inner nuclear layer, ~50% in the inner plexiform layer, and ~5% in the ganglion cell layer. In addition, the Neurobiotin-injected PA1 cells sometimes showed clear heterologous tracer coupling to a regular array of small ganglion cells, which were present at half the density of the PA1 cells.
The PA1 cells were also shown to contain elevated levels of gamma-aminobutyric acid (GABA), like other axon-bearing amacrine cells.
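The quoted densities and field sizes imply the overlap (coverage) factors directly, since coverage = local cell density × field area. A quick check of the arithmetic, using only the values given in the abstract:

```python
def coverage(density_per_mm2, area_mm2):
    # Coverage (overlap) factor = cell density (cells/mm^2) x field area (mm^2).
    return density_per_mm2 * area_mm2

# Dendritic fields: ~9x on the visual streak, ~13x in the far periphery,
# consistent with the "12-fold overlap" figure quoted above.
streak_dendritic = coverage(60, 0.15)
periphery_dendritic = coverage(20, 0.67)

# Axonal fields: ~190-200x, consistent with the "200- to 300-fold" figure.
streak_axonal = coverage(60, 3.1)
periphery_axonal = coverage(20, 10.2)
```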
Abstract:
We tested current hypotheses on the functional organization of the third visual complex, a particularly controversial region of the primate extrastriate cortex. In anatomical experiments, injections of retrograde tracers were placed in the dorsal cortex immediately rostral to the second visual area (V2) of New World monkeys (Callithrix jacchus), revealing the topography of interconnections between the third tier cortex and the primary visual area (V1). The data indicate the presence of a dorsomedial area (DM), which represents the entire upper and lower quadrants of the visual field, and which receives strong, topographically organized projections from the superficial layers of V1. The visuotopic organization and boundaries of DM were confirmed by electrophysiological recordings in the same animals and by architectural characteristics which were distinct from those found in ventral extrastriate cortex rostral to V2. There was no electrophysiological or histological evidence for a transitional area between V2 and DM. In particular, the central representation of the upper quadrant in DM was directly adjacent to the representation of the horizontal meridian that marks the rostral border of V2. The present results argue in favor of the hypothesis that the third visual complex in New World monkeys contains different areas in its dorsal and ventral components: area DM, near the dorsal midline, and a homolog of area 19 of other mammals, located more lateral and ventrally. The characteristics of DM suggest that it may correspond to visual area 6 (V6) of Old World monkeys. (C) 2005 Wiley-Liss, Inc.
Abstract:
Marked phenotypic variation has been reported in pyramidal cells in the primate cerebral cortex. The extent and systematic nature of these specializations suggest that they are important for specialized aspects of cortical processing. However, it remains unknown whether regional variations in the pyramidal cell phenotype are unique to primates or whether they are widespread amongst mammalian species. In the present study we determined the receptive fields of neurons in striate and extrastriate visual cortex, and quantified pyramidal cell structure in these cortical regions, in the diurnal, large-brained, South American rodent Dasyprocta primnolopha. We found evidence for a first, second and third visual area (V1, V2 and V3, respectively) forming a lateral progression from the occipital pole to the temporal pole. Pyramidal cell structure became increasingly more complex through these areas, suggesting that regional specialization in pyramidal cell phenotype is not restricted to primates. However, cells in V1, V2 and V3 of the agouti were considerably more spinous than their counterparts in primates, suggesting that different evolutionary and developmental influences may act on cortical microcircuitry in rodents and primates. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
Visual acuity is limited by the size and density of the smallest retinal ganglion cells, which correspond to the midget ganglion cells in primate retina and the beta-ganglion cells in cat retina, both of which have concentric receptive fields that respond at either light-On or light-Off. In contrast, the smallest ganglion cells in the rabbit retina are the local edge detectors (LEDs), which respond to spot illumination at both light-On and light-Off. However, the LEDs do not predominate in the rabbit retina and the question arises: what role do they play in fine spatial vision? We studied the morphology and physiology of LEDs in the isolated rabbit retina and examined how their response properties are shaped by the excitatory and inhibitory inputs. Although the LEDs comprise only ~15% of the ganglion cells, neighboring LEDs are separated by 30-40 µm on the visual streak, which is sufficient to account for the grating acuity of the rabbit. The spatial and temporal receptive-field properties of LEDs are generated by distinct inhibitory mechanisms. The strong inhibitory surround acts presynaptically to suppress both the excitation and the inhibition elicited by center stimulation. The temporal properties, characterized by sluggish onset, sustained firing, and low bandwidth, are mediated by the temporal properties of the bipolar cells and by postsynaptic interactions between the excitatory and inhibitory inputs. We propose that the LEDs signal fine spatial detail during visual fixation, when high temporal frequencies are minimal.
Abstract:
The storage capacity of multilayer networks with overlapping receptive fields is investigated for a constructive algorithm within a one-step replica symmetry breaking (RSB) treatment. We find that the storage capacity increases logarithmically with the number of hidden units K without saturating the Mitchison-Durbin bound. The slope of the logarithmic increase decays exponentially with the stability with which the patterns have been stored.
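Schematically, the reported scaling can be written as follows. This is a sketch only: the prefactor c(κ) and the decay constant a are placeholders, since the abstract does not quote explicit constants.

```latex
\alpha_c(K) \simeq c(\kappa)\,\ln K,
\qquad c(\kappa) \propto e^{-a\kappa},
```

where κ is the stability with which the patterns are stored; the capacity grows logarithmically in K while remaining below the Mitchison-Durbin bound, which itself grows logarithmically in K.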
Abstract:
We describe a template model for perception of edge blur and identify a crucial early nonlinearity in this process. The main principle is to spatially filter the edge image to produce a 'signature', and then find which of a set of templates best fits that signature. Psychophysical blur-matching data strongly support the use of a second-derivative signature, coupled to Gaussian first-derivative templates. The spatial scale of the best-fitting template signals the edge blur. This model predicts blur-matching data accurately for a wide variety of Gaussian and non-Gaussian edges, but it suffers a bias when edges of opposite sign come close together in sine-wave gratings and other periodic images. This anomaly suggests a second general principle: the region of an image that 'belongs' to a given edge should have a consistent sign or direction of luminance gradient. Segmentation of the gradient profile into regions of common sign is achieved by implementing the second-derivative 'signature' operator as two first-derivative operators separated by a half-wave rectifier. This multiscale system of nonlinear filters predicts perceived blur accurately for periodic and aperiodic waveforms. We also outline its extension to 2-D images and infer the 2-D shape of the receptive fields.
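The template-matching principle can be sketched numerically. This is a minimal illustration under simplifying assumptions (an idealized Gaussian-blurred edge, arbitrary grid and scale ranges); it implements only the linear signature-plus-template stage, omitting the half-wave-rectified two-operator implementation and the 2-D extension described in the abstract.

```python
import numpy as np
from math import erf

def gaussian_edge(x, blur):
    # Luminance profile of an edge blurred by a Gaussian of scale `blur`
    # (a cumulative-Gaussian ramp from 0 to 1).
    return np.array([0.5 * (1.0 + erf(xi / (blur * np.sqrt(2.0)))) for xi in x])

def d1_gaussian_template(x, scale):
    # Gaussian first-derivative template at the given spatial scale,
    # normalized to unit energy.
    t = -x * np.exp(-x**2 / (2.0 * scale**2))
    return t / np.linalg.norm(t)

def estimate_blur(x, profile, scales):
    # 'Signature' = second spatial derivative of the edge image; the scale
    # of the best-fitting template is read out as the perceived edge blur.
    dx = x[1] - x[0]
    signature = np.gradient(np.gradient(profile, dx), dx)
    signature = signature / np.linalg.norm(signature)
    fits = [np.dot(signature, d1_gaussian_template(x, s)) for s in scales]
    return scales[int(np.argmax(fits))]

x = np.linspace(-5.0, 5.0, 1001)
scales = np.linspace(0.2, 2.0, 91)
estimated = estimate_blur(x, gaussian_edge(x, 0.8), scales)  # recovers ~0.8
```

For a Gaussian edge the second-derivative signature is itself a scaled Gaussian first derivative, so the best-fitting template scale recovers the edge blur.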
Abstract:
Recently there has been a surge of interest in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, there is no general consensus as to how best to process sequences using topographic maps, and this topic remains an active focus of neurocomputational research. The representational capabilities and internal representations of the models are not well understood. Here, we rigorously analyze a generalization of the self-organizing map (SOM) for processing sequential data, recursive SOM (RecSOM) (Voegtlin, 2002), as a nonautonomous dynamical system consisting of a set of fixed input maps. We argue that contractive fixed-input maps are likely to produce Markovian organizations of receptive fields on the RecSOM map. We derive bounds on parameter β (weighting the importance of importing past information when processing sequences) under which contractiveness of the fixed-input maps is guaranteed. Some generalizations of SOM contain a dynamic module responsible for processing temporal contexts as an integral part of the model. We show that Markovian topographic maps of sequential data can be produced using a simple fixed (nonadaptable) dynamic module externally feeding a standard topographic model designed to process static vectorial data of fixed dimensionality (e.g., SOM). However, by allowing trainable feedback connections, one can obtain Markovian maps with superior memory depth and topography preservation. We elaborate on the importance of non-Markovian organizations in topographic maps of sequential data. © 2006 Massachusetts Institute of Technology.
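The fixed-input dynamics under analysis can be sketched as follows. This is an illustrative implementation of the RecSOM activation rule only (training of the weights is omitted, and the map size, α, and β values are arbitrary): each unit i holds an input weight vector w_i and a context vector c_i that is compared against the previous activation of the whole map.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, d_in = 25, 3
W = rng.normal(size=(n_units, d_in))     # input weights w_i
C = rng.normal(size=(n_units, n_units))  # context weights c_i
alpha, beta = 2.0, 0.5  # beta weights the imported past information

def recsom_step(x, y_prev):
    # Fixed-input map: unit i responds with
    # y_i = exp(-alpha * ||x - w_i||^2 - beta * ||y_prev - c_i||^2).
    d_input = np.sum((W - x) ** 2, axis=1)
    d_context = np.sum((C - y_prev) ** 2, axis=1)
    return np.exp(-alpha * d_input - beta * d_context)

y = np.zeros(n_units)
for x in rng.normal(size=(10, d_in)):  # drive the map with a sequence
    y = recsom_step(x, y)
winner = int(np.argmax(y))
```

Small β de-emphasizes the context term, making the fixed-input maps more contractive and hence, per the analysis above, more likely to yield Markovian receptive-field organizations.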
Abstract:
Recently, there has been considerable research activity in extending topographic maps of vectorial data to more general data structures, such as sequences or trees. However, the representational capabilities and internal representations of the models are not well understood. We rigorously analyze a generalization of the Self-Organizing Map (SOM) for processing sequential data, Recursive SOM (RecSOM [1]), as a non-autonomous dynamical system consisting of a set of fixed input maps. We show that contractive fixed input maps are likely to produce Markovian organizations of receptive fields on the RecSOM map. We derive bounds on parameter β (weighting the importance of importing past information when processing sequences) under which contractiveness of the fixed input maps is guaranteed.
Abstract:
Classical studies of area summation measure contrast detection thresholds as a function of grating diameter. Unfortunately, (i) this approach is compromised by retinal inhomogeneity and (ii) it potentially confounds summation of signal with summation of internal noise. The Swiss cheese stimulus of T. S. Meese and R. J. Summers (2007) and the closely related Battenberg stimulus of T. S. Meese (2010) were designed to avoid these problems by keeping target diameter constant and modulating interdigitated checks of first-order carrier contrast within the stimulus region. This approach has revealed a contrast integration process with greater potency than the classical model of spatial probability summation. Here, we used Swiss cheese stimuli to investigate the spatial limits of contrast integration over a range of carrier frequencies (1–16 c/deg) and raised plaid modulator frequencies (0.25–32 cycles/check). Subthreshold summation for interdigitated carrier pairs remained strong (~4 to 6 dB) up to 4 to 8 cycles/check. Our computational analysis of these results implied linear signal combination (following square-law transduction) over either (i) 12 carrier cycles or more or (ii) 1.27 deg or more. Our model has three stages of summation: short-range summation within linear receptive fields, medium-range integration to compute contrast energy for multiple patches of the image, and long-range pooling of the contrast integrators by probability summation. Our analysis legitimizes the inclusion of widespread integration of signal (and noise) within hierarchical image processing models. It also confirms the individual differences in the spatial extent of integration that emerge from our approach.
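The three-stage arrangement can be sketched as follows. This is a toy illustration, not the paper's fitted model: the Minkowski exponent m = 4 (a common approximation to probability summation) and the patch and cycle counts are assumptions.

```python
import numpy as np

def contrast_energy(filter_responses):
    # Stages 1-2: square-law transduction of linear receptive-field
    # responses, then linear summation within a patch -> contrast energy.
    return np.sum(np.asarray(filter_responses, dtype=float) ** 2)

def minkowski_pool(patch_energies, m=4.0):
    # Stage 3: long-range pooling across the patch-wise contrast
    # integrators; a high Minkowski exponent approximates probability
    # summation.
    e = np.asarray(patch_energies, dtype=float)
    return np.sum(e ** m) ** (1.0 / m)

# Linear integration within a patch grows directly with the number of
# carrier cycles covered, whereas adding a second identical patch at the
# pooling stage raises the response only by 2**(1/m).
one_patch = minkowski_pool([contrast_energy([1.0] * 12)])
two_patches = minkowski_pool([contrast_energy([1.0] * 12)] * 2)
ratio = two_patches / one_patch  # ~2**0.25
```

The contrast between strong within-patch integration and weak across-patch pooling is what distinguishes the contrast integration process from classical spatial probability summation.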
Abstract:
Visual perception begins by dissecting the retinal image into millions of small patches for local analyses by local receptive fields. However, image structures extend well beyond these receptive fields and so further processes must be involved in sewing the image fragments back together to derive representations of higher order (more global) structures. To investigate the integration process, we also need to understand the opposite process of suppression. To investigate both processes together, we measured triplets of dipper functions for targets and pedestals involving interdigitated stimulus pairs (A, B). Previous work has shown that summation and suppression operate over the full contrast range for the domains of ocularity and space. Here, we extend that work to include orientation and time domains. Temporal stimuli were 15-Hz counter-phase sine-wave gratings, where A and B were the positive and negative phases of the oscillation, respectively. For orientation, we used orthogonally oriented contrast patches (A, B) whose sum was an isotropic difference of Gaussians. Results from all four domains could be understood within a common framework in which summation operates separately within the numerator and denominator of a contrast gain control equation. This simple arrangement of summation and counter-suppression achieves integration of various stimulus attributes without distorting the underlying contrast code.
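The proposed arrangement can be sketched as a single equation. This is a schematic only: the exponents p and q and the saturation constant z are placeholder values, not the fitted parameters.

```python
def gain_control_response(a, b, p=2.4, q=2.0, z=1.0):
    # Components A and B of an interdigitated stimulus pair are summed
    # separately within the numerator (excitatory drive) and the
    # denominator (suppressive gain pool) of a contrast gain control
    # equation, so integration and counter-suppression co-occur without
    # distorting the underlying contrast code.
    excitation = (a + b) ** p
    suppression = z + (a + b) ** q
    return excitation / suppression
```

Because both paths see the same summed signal, adding component B raises the response relative to A alone (for p ≥ q), while the divisive pool keeps the contrast response function compressive.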
Abstract:
The article explores the possibilities of formalizing and explaining the mechanisms that support spatial and social perspective alignment sustained over the duration of a social interaction. The basic proposed principle is that in social contexts the mechanisms for sensorimotor transformations and multisensory integration (learn to) incorporate information relative to the other actor(s), similar to the "re-calibration" of visual receptive fields in response to repeated tool use. This process aligns or merges the co-actors' spatial representations and creates a "Shared Action Space" (SAS) supporting key computations of social interactions and joint actions; for example, the remapping between the coordinate systems and frames of reference of the co-actors, including perspective taking, the sensorimotor transformations required for lifting jointly an object, and the predictions of the sensory effects of such joint action. The social re-calibration is proposed to be based on common basis function maps (BFMs) and could constitute an optimal solution to sensorimotor transformation and multisensory integration in joint action or, more generally, in social interaction contexts. However, certain situations such as discrepant postural and viewpoint alignment and associated differences in perspectives between the co-actors could constrain the process quite differently. We discuss how alignment is achieved in the first place, and how it is maintained over time, providing a taxonomy of various forms and mechanisms of space alignment and overlap based, for instance, on automaticity vs. control of the transformations between the two agents. Finally, we discuss the link between low-level mechanisms for the sharing of space and high-level mechanisms for the sharing of cognitive representations. © 2013 Pezzulo, Iodice, Ferraina and Kessler.
Abstract:
Simple features such as edges are the building blocks of spatial vision, and so I ask: how are visual features and their properties (location, blur and contrast) derived from the responses of spatial filters in early vision; how are these elementary visual signals combined across the two eyes; and when are they not combined? Our psychophysical evidence from blur-matching experiments strongly supports a model in which edges are found at the spatial peaks of response of odd-symmetric receptive fields (gradient operators), and their blur B is given by the spatial scale of the most active operator. This model can explain some surprising aspects of blur perception: edges look sharper when they are low contrast, and when their length is made shorter. Our experiments on binocular fusion of blurred edges show that single vision is maintained for disparities up to about 2.5*B, followed by diplopia or suppression of one edge at larger disparities. Edges of opposite polarity never fuse. Fusion may be served by binocular combination of monocular gradient operators, but that combination - involving binocular summation and interocular suppression - is not completely understood. In particular, linear summation (supported by psychophysical and physiological evidence) predicts that fused edges should look more blurred with increasing disparity (up to 2.5*B), but results surprisingly show that edge blur appears constant across all disparities, whether fused or diplopic. Finally, when edges of very different blur are shown to the left and right eyes fusion may not occur, but perceived blur is not simply given by the sharper edge, nor by the higher contrast. Instead, it is the ratio of contrast to blur that matters: the edge with the steeper gradient dominates perception. The early stages of binocular spatial vision speak the language of luminance gradients.
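The final dominance rule admits a one-line sketch (the numeric values and field names here are invented for the example):

```python
def dominant_edge(edges):
    # Reported rule: with differing blur and contrast in the two eyes,
    # the edge with the steeper luminance gradient (contrast / blur)
    # dominates the fused percept -- not simply the sharper edge, nor
    # the higher-contrast edge.
    return max(edges, key=lambda e: e["contrast"] / e["blur"])

left = {"eye": "left", "contrast": 0.4, "blur": 2.0}    # gradient 0.2
right = {"eye": "right", "contrast": 0.3, "blur": 1.0}  # gradient 0.3
winner = dominant_edge([left, right])  # the right eye's edge dominates
```

Note that the winning edge here has both lower contrast and smaller blur than its rival; only the ratio decides.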
Abstract:
Saccadic eye movements rapidly displace the image of the world that is projected onto the retinas. In anticipation of each saccade, many neurons in the visual system shift their receptive fields. This presaccadic change in visual sensitivity, known as remapping, was first documented in the parietal cortex and has been studied in many other brain regions. Remapping requires information about upcoming saccades via corollary discharge. Analyses of neurons in a corollary discharge pathway that targets the frontal eye field (FEF) suggest that remapping may be assembled in the FEF’s local microcircuitry. Complementary data from reversible inactivation, neural recording, and modeling studies provide evidence that remapping contributes to transsaccadic continuity of action and perception. Multiple forms of remapping have been reported in the FEF and other brain areas, however, and questions remain about reasons for these differences. In this review of recent progress, we identify three hypotheses that may help to guide further investigations into the structure and function of circuits for remapping.