28 results for Eye Movement

at Duke University


Relevance: 70.00%

Abstract:

As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies.
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
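The stabilizing computation described above can be reduced to a toy coordinate sketch. This is not the paper's sheet-based network; the names and vectors below are illustrative. The point is only that subtracting the corollary-discharge saccade vector (remapping) and adding eye position leaves the world-referenced pointing target unchanged across a saccade:

```python
# Toy coordinate sketch (not the paper's network). Vectors are (x, y)
# tuples in degrees; all names and values are illustrative.

def remap(retinal_target, saccade_vector):
    """Presaccadic remapping: predict where the target will fall on the
    retina after the saccade, using corollary discharge of the command."""
    return (retinal_target[0] - saccade_vector[0],
            retinal_target[1] - saccade_vector[1])

def world_position(retinal_target, eye_position):
    """Combine a retinal signal with eye position to get a coordinate
    frame suitable for guiding a pointing movement."""
    return (retinal_target[0] + eye_position[0],
            retinal_target[1] + eye_position[1])

eye = (0.0, 0.0)
target_on_retina = (10.0, 5.0)   # target at (10, 5) with eyes centered
saccade = (8.0, 0.0)             # command to look 8 degrees rightward

before = world_position(target_on_retina, eye)
predicted = remap(target_on_retina, saccade)
new_eye = (eye[0] + saccade[0], eye[1] + saccade[1])
after = world_position(predicted, new_eye)

# "useful stability": the pointing target is unchanged across the saccade
assert before == after == (10.0, 5.0)
```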

Relevance: 60.00%

Abstract:

BACKGROUND: The superior colliculus (SC) has been shown to play a crucial role in the initiation and coordination of eye and head movements. Knowledge about the function of this structure is based mainly on single-unit recordings in animals, with relatively few neuroimaging studies investigating eye-movement-related brain activity in humans. METHODOLOGY/PRINCIPAL FINDINGS: The present study employed high-field (7 Tesla) functional magnetic resonance imaging (fMRI) to investigate SC responses during endogenously cued saccades in humans. In response to centrally presented instructional cues, subjects either performed saccades away from (centrifugal) or towards (centripetal) the center of straight gaze or maintained fixation at the center position. Compared to central fixation, the execution of saccades elicited hemodynamic activity within a network of cortical and subcortical areas that included the SC, lateral geniculate nucleus (LGN), occipital cortex, striatum, and the pulvinar. CONCLUSIONS/SIGNIFICANCE: Activity in the SC was enhanced contralateral to the direction of the saccade (i.e., greater activity in the right as compared to left SC during leftward saccades and vice versa) during both centrifugal and centripetal saccades, thereby demonstrating that the contralateral predominance for saccade execution that has been shown to exist in animals is also present in the human SC. In addition, centrifugal saccades elicited greater activity in the SC than did centripetal saccades, while also being accompanied by an enhanced deactivation within the prefrontal default-mode network. This pattern of brain activity might reflect the reduced processing effort required to move the eyes toward as compared to away from the center of straight gaze, a position that might serve as a spatial baseline in which the retinotopic and craniotopic reference frames are aligned.

Relevance: 60.00%

Abstract:

BACKGROUND: Scale-invariant neuronal avalanches have been observed in cell cultures and slices as well as anesthetized and awake brains, suggesting that the brain operates near criticality, i.e., within a narrow margin between avalanche propagation and extinction. In theory, criticality provides many desirable features for the behaving brain, optimizing computational capabilities, information transmission, sensitivity to sensory stimuli and size of memory repertoires. However, a thorough characterization of neuronal avalanches in freely-behaving (FB) animals is still missing, thus raising doubts about their relevance for brain function. METHODOLOGY/PRINCIPAL FINDINGS: To address this issue, we employed chronically implanted multielectrode arrays (MEA) to record avalanches of action potentials (spikes) from the cerebral cortex and hippocampus of 14 rats, as they spontaneously traversed the wake-sleep cycle, explored novel objects or were subjected to anesthesia (AN). We then modeled spike avalanches to evaluate the impact of sparse MEA sampling on their statistics. We found that the size distributions of spike avalanches are well fit by lognormal distributions in FB animals and by truncated power laws in the AN group. FB data surrogation markedly decreases the tail of the distribution, i.e., spike shuffling destroys the largest avalanches. The FB data are also characterized by multiple key features compatible with criticality in the temporal domain, such as 1/f spectra and long-term correlations as measured by detrended fluctuation analysis. These signatures are very stable across waking, slow-wave sleep and rapid-eye-movement sleep, but collapse during anesthesia. Likewise, waiting time distributions obey a single scaling function during all natural behavioral states, but not during anesthesia. Results are equivalent for neuronal ensembles recorded from visual and tactile areas of the cerebral cortex, as well as the hippocampus.
CONCLUSIONS/SIGNIFICANCE: Altogether, the data provide a comprehensive link between behavior and brain criticality, revealing a unique scale-invariant regime of spike avalanches across all major behaviors.
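The lognormal-versus-power-law comparison above can be illustrated with a minimal model-selection sketch. The avalanche sizes here are synthetic (drawn from a lognormal), not recorded data, and the power-law fit uses the standard continuous maximum-likelihood estimator rather than the paper's exact procedure:

```python
# Sketch of the model-comparison logic (lognormal vs. power law) used to
# classify avalanche size distributions. Sizes are synthetic stand-ins.
import math
import random

random.seed(0)
sizes = [random.lognormvariate(mu=1.0, sigma=0.8) for _ in range(5000)]

def loglik_lognormal(xs):
    """Log-likelihood of xs under a maximum-likelihood lognormal fit."""
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    sigma = math.sqrt(var)
    return sum(-math.log(x * sigma * math.sqrt(2 * math.pi))
               - (math.log(x) - mu) ** 2 / (2 * var) for x in xs)

def loglik_powerlaw(xs):
    """Log-likelihood under a continuous power law p(x) ~ x^(-alpha),
    with the standard MLE for alpha and xmin taken as min(xs)."""
    xmin = min(xs)
    alpha = 1.0 + len(xs) / sum(math.log(x / xmin) for x in xs)
    return sum(math.log((alpha - 1) / xmin) - alpha * math.log(x / xmin)
               for x in xs)

# For lognormal samples, the lognormal model should win the comparison:
assert loglik_lognormal(sizes) > loglik_powerlaw(sizes)
```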

Relevance: 60.00%

Abstract:

This study investigated whether rhesus monkeys show evidence of metacognition in a reduced, visual oculomotor task that is particularly suitable for use in fMRI and electrophysiology. The 2-stage task involved punctate visual stimulation and saccadic eye movement responses. In each trial, monkeys made a decision and then made a bet. To earn maximum reward, they had to monitor their decision and use that information to bet advantageously. Two monkeys learned to base their bets on their decisions within a few weeks. We implemented an operational definition of metacognitive behavior that relied on trial-by-trial analyses and signal detection theory. Both monkeys exhibited metacognition according to these quantitative criteria. Neither external visual cues nor potential reaction time cues explained the betting behavior; the animals seemed to rely exclusively on internal traces of their decisions. We documented the learning process of one monkey. During a 10-session transition phase, betting switched from random to a decision-based strategy. The results reinforce previous findings of metacognitive ability in monkeys and may facilitate the neurophysiological investigation of metacognitive functions.
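One simple trial-by-trial criterion in the spirit of the operational definition above (an assumed stand-in, not necessarily the study's exact statistic) is the phi correlation between decision accuracy and betting high; a positive phi on the hypothetical trial counts below indicates decision-informed betting:

```python
# Illustrative metacognition criterion: do high bets track correct
# decisions? Trial counts here are hypothetical, not the study's data.
import math

# (decision_correct, bet_high) for 100 hypothetical trials
trials = [(True, True)] * 40 + [(True, False)] * 10 + \
         [(False, True)] * 15 + [(False, False)] * 35

def phi(trials):
    """Phi correlation between correctness and betting high."""
    a = sum(c and b for c, b in trials)          # correct, bet high
    b_ = sum(c and not b for c, b in trials)     # correct, bet low
    c_ = sum((not c) and b for c, b in trials)   # wrong, bet high
    d = sum((not c) and (not b) for c, b in trials)  # wrong, bet low
    num = a * d - b_ * c_
    den = math.sqrt((a + b_) * (c_ + d) * (a + c_) * (b_ + d))
    return num / den

# phi > 0: bets are informed by an internal trace of the decision
assert phi(trials) > 0
```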

Relevance: 60.00%

Abstract:

Once thought to be predominantly the domain of cortex, multisensory integration has now been found at numerous sub-cortical locations in the auditory pathway. Prominent ascending and descending connections within the pathway suggest that the system may utilize non-auditory activity to help filter incoming sounds as they first enter the ear. Active mechanisms in the periphery, particularly the outer hair cells (OHCs) of the cochlea and middle ear muscles (MEMs), are capable of modulating the sensitivity of other peripheral mechanisms involved in the transduction of sound into the system. Through indirect mechanical coupling of the OHCs and MEMs to the eardrum, motion of these mechanisms can be recorded as acoustic signals in the ear canal. Here, we utilize this recording technique to describe three different experiments that demonstrate novel multisensory interactions occurring at the level of the eardrum. 1) In the first experiment, measurements in humans and monkeys performing a saccadic eye movement task to visual targets indicate that the eardrum oscillates in conjunction with eye movements. The amplitude and phase of the eardrum movement, which we dub the Oscillatory Saccadic Eardrum Associated Response or OSEAR, depended on the direction and horizontal amplitude of the saccade and occurred in the absence of any externally delivered sounds. 2) For the second experiment, we use an audiovisual cueing task to demonstrate a dynamic change to pressure levels in the ear when a sound is expected versus when one is not. Specifically, we observe a drop in frequency power and variability from 0.1 to 4 kHz around the time when the sound is expected to occur, in contrast to a slight increase in power at both lower and higher frequencies.
3) For the third experiment, we show that seeing a speaker say a syllable that is incongruent with the accompanying audio can alter the response patterns of the auditory periphery, particularly during the most relevant moments in the speech stream. These visually influenced changes may contribute to the altered percept of the speech sound. Collectively, we presume that these findings represent the combined effect of OHCs and MEMs acting in tandem in response to various non-auditory signals in order to manipulate the receptive properties of the auditory system. These influences may have a profound, and previously unrecognized, impact on how the auditory system processes sounds from initial sensory transduction all the way to perception and behavior. Moreover, we demonstrate that the entire auditory system is, fundamentally, a multisensory system.
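A rough sketch of the band-power measure implied by experiment 2, assuming a digitized ear-canal recording; the sample rate, signal, and frequencies below are synthetic, and a naive DFT stands in for whatever spectral estimator was actually used:

```python
# Band power of a (synthetic) ear-canal signal via a naive DFT.
# Sample rate, signal content, and band edges are illustrative only.
import math

fs = 16000                      # assumed sample rate, Hz
n = 512
# synthetic signal: a 1 kHz component plus a weaker 6 kHz component
x = [math.sin(2 * math.pi * 1000 * t / fs) +
     0.3 * math.sin(2 * math.pi * 6000 * t / fs) for t in range(n)]

def band_power(x, fs, lo, hi):
    """Sum of normalized DFT power over bins whose frequency is in [lo, hi]."""
    n = len(x)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if lo <= f <= hi:
            re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += (re * re + im * im) / n ** 2
    return total

# most of this signal's power lies in the 0.1-4 kHz band of interest
assert band_power(x, fs, 100, 4000) > band_power(x, fs, 4000, 8000)
```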

Relevance: 60.00%

Abstract:

Integrating information from multiple sources is a crucial function of the brain. Examples of such integration include multiple stimuli of different modalities, such as visual and auditory; multiple stimuli of the same modality, such as two concurrent auditory stimuli; and integrating stimuli from the sensory organs (i.e., the ears) with stimuli delivered from brain-machine interfaces.

The overall aim of this body of work is to empirically examine stimulus integration in these three domains to inform our broader understanding of how and when the brain combines information from multiple sources.

First, I examine visually guided auditory learning, a problem with implications for the general problem in learning of how the brain determines which lessons to learn (and which not to learn). For example, sound localization is a behavior that is partially learned with the aid of vision. This process requires correctly matching a visual location to that of a sound, an intrinsically circular problem when sound location is itself uncertain and the visual scene is rife with possible visual matches. Here, we develop a simple paradigm using visual guidance of sound localization to gain insight into how the brain confronts this type of circularity. We tested two competing hypotheses. 1: The brain guides sound location learning based on the synchrony or simultaneity of auditory-visual stimuli, potentially involving a Hebbian associative mechanism. 2: The brain uses a 'guess and check' heuristic in which visual feedback obtained after an eye movement to a sound alters future performance, perhaps by recruiting the brain's reward-related circuitry. We assessed the effects of exposure to visual stimuli spatially mismatched from sounds on performance of an interleaved auditory-only saccade task. We found that when humans and monkeys were provided the visual stimulus asynchronously with the sound but as feedback to an auditory-guided saccade, they shifted their subsequent auditory-only performance toward the direction of the visual cue by 1.3-1.7 degrees, or 22-28% of the original 6-degree visual-auditory mismatch. In contrast, when the visual stimulus was presented synchronously with the sound but extinguished too quickly to provide this feedback, there was little change in subsequent auditory-only performance. Our results suggest that the outcome of our own actions is vital to localizing sounds correctly. Contrary to previous expectations, visual calibration of auditory space does not appear to require visual-auditory associations based on synchrony or simultaneity.
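The adaptation percentages quoted above follow directly from the observed shift sizes and the 6-degree mismatch:

```python
# The fractions quoted above, made explicit (illustrative arithmetic only).
mismatch = 6.0                     # imposed visual-auditory offset, degrees
for shift in (1.3, 1.7):           # observed auditory-only shifts, degrees
    print(f"{shift} deg shift = {100 * shift / mismatch:.0f}% adaptation")
# 1.3/6 rounds to 22%, 1.7/6 rounds to 28%
```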

My next line of research examines how electrical stimulation of the inferior colliculus influences the perception of sounds in a nonhuman primate. The central nucleus of the inferior colliculus is the major ascending relay of auditory information: almost all auditory signals pass through it before reaching the forebrain. It is therefore an ideal structure for understanding the format of the inputs into the forebrain and, by extension, the processing of auditory scenes that occurs in the brainstem, making it an attractive target for understanding stimulus integration in the ascending auditory pathway.

Moreover, understanding the relationship between the auditory selectivity of neurons and their contribution to perception is critical to the design of effective auditory brain prosthetics. These prosthetics seek to mimic natural activity patterns to achieve desired perceptual outcomes. We measured the contribution of inferior colliculus (IC) sites to perception using combined recording and electrical stimulation. Monkeys performed a frequency-based discrimination task, reporting whether a probe sound was higher or lower in frequency than a reference sound. Stimulation pulses were paired with the probe sound on 50% of trials (0.5-80 µA, 100-300 Hz, n=172 IC locations in 3 rhesus monkeys). Electrical stimulation tended to bias the animals’ judgments in a fashion that was coarsely but significantly correlated with the best frequency of the stimulation site in comparison to the reference frequency employed in the task. Although there was considerable variability in the effects of stimulation (including impairments in performance and shifts in performance away from the direction predicted based on the site’s response properties), the results indicate that stimulation of the IC can evoke percepts correlated with the frequency tuning properties of the IC. Consistent with the implications of recent human studies, the main avenue for improvement for the auditory midbrain implant suggested by our findings is to increase the number and spatial extent of electrodes, to increase the size of the region that can be electrically activated and provide a greater range of evoked percepts.

My next line of research employs a frequency-tagging approach to examine the extent to which multiple sound sources are combined (or segregated) in the nonhuman primate inferior colliculus. In the single-sound case, most inferior colliculus neurons respond and entrain to sounds in a very broad region of space, and many are entirely spatially insensitive, so it is unknown how the neurons will respond to a situation with more than one sound. I use multiple AM stimuli of different frequencies, which the inferior colliculus represents using a spike timing code. This allows me to measure spike timing in the inferior colliculus to determine which sound source is responsible for neural activity in an auditory scene containing multiple sounds. Using this approach, I find that the same neurons that are tuned to broad regions of space in the single sound condition become dramatically more selective in the dual sound condition, preferentially entraining spikes to stimuli from a smaller region of space. I will examine the possibility that there may be a conceptual linkage between this finding and the finding of receptive field shifts in the visual system.
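The frequency-tagging logic above can be sketched with a standard vector-strength measure of entrainment; the spike times below are synthetic, and the two modulation rates are arbitrary stand-ins for the tagged sources:

```python
# Sketch of frequency tagging: if two sources are AM-modulated at different
# rates, vector strength at each rate indicates which source the neuron's
# spikes entrain to. Spike times and rates here are synthetic.
import math
import random

random.seed(1)

def vector_strength(spike_times, freq):
    """Mean resultant length of spike phases relative to one AM cycle
    (1.0 = perfect phase locking, 0.0 = no locking)."""
    phases = [2 * math.pi * freq * t for t in spike_times]
    c = sum(math.cos(p) for p in phases) / len(phases)
    s = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(c, s)

# spikes locked (with jitter) to a 20 Hz modulation, ignoring a 28 Hz source
spikes = [k / 20 + random.gauss(0, 0.003) for k in range(200)]

# spikes are attributed to the 20 Hz source, not the 28 Hz one
assert vector_strength(spikes, 20) > vector_strength(spikes, 28)
```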

In chapter 5, I will comment on these findings more generally, compare them to existing theoretical models, and discuss what these results tell us about processing in the central nervous system in a multi-stimulus situation. My results suggest that the brain is flexible in its processing and can adapt its integration schema to fit the available cues and the demands of the task.

Relevance: 30.00%

Abstract:

Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between the activity and the saccade vector suggest that auditory signals are tailored to roughly match the strength of the visual signals present in the FEF, facilitating access to a common motor output pathway.

Relevance: 20.00%

Abstract:

In this paper, we propose generalized sampling approaches for measuring a multi-dimensional object using a compact compound-eye imaging system called thin observation module by bound optics (TOMBO). This paper presents the proposed system model, physical examples, and simulations verifying TOMBO imaging using generalized sampling. In the system, an object is modulated and multiplied by a weight distribution with physical coding, and the coded optical signal is integrated onto a detector array. A numerical estimation algorithm employing a sparsity constraint is used for object reconstruction.
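A toy version of that reconstruction step, assuming the common linear model y = A x with a sparse object x: iterative soft-thresholding (ISTA) enforces the sparsity constraint. The matrix sizes, values, and parameters below are illustrative, not those of the TOMBO hardware:

```python
# Toy sparse reconstruction via ISTA. Sizes, weights, and step/threshold
# parameters are illustrative, not TOMBO's actual coding or solver.
import random

random.seed(0)
n, m = 12, 10                              # object length, detector count
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]  # coding weights
x_true = [0.0] * n
x_true[2], x_true[9] = 1.5, -1.0           # sparse object
# each detector integrates a weighted (coded) version of the object
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]

def ista(A, y, lam=0.02, step=0.01, iters=3000):
    """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [xi - step * gi for xi, gi in zip(x, g)]
        # soft-thresholding enforces the sparsity constraint
        x = [(abs(v) - lam * step if abs(v) > lam * step else 0.0) *
             (1 if v > 0 else -1) for v in x]
    return x

x_hat = ista(A, y)
support = sorted(range(n), key=lambda j: -abs(x_hat[j]))[:2]
assert set(support) == {2, 9}              # the sparse object is located
```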

Relevance: 20.00%

Abstract:

From tendencies to reduce the Underground Railroad to the imperative "follow the north star" to the iconic images of Ruby Bridges' 1960 "step forward" on the stairs of William Frantz Elementary School, America prefers to picture freedom as an upwardly mobile development. This preoccupation with the subtractive and linear force of development makes it hard to hear the palpable steps of so many truant children marching in the Movement and renders illegible the nonlinear movements of minors in the Underground. Yet a black fugitive hugging a tree, a white boy walking alone in a field, or even pieces of a discarded raft floating downstream like remnants of child's play are constitutive gestures of the Underground's networks of care and escape. Responding to 19th-century Americanists and cultural studies scholars' important illumination of the child as central to national narratives of development and freedom, "Minor Moves" reads major literary narratives not for the child and development but for the fugitive trace of minor and growth.

In four chapters, I trace the physical gestures of Nathaniel Hawthorne's Pearl, Harriet Beecher Stowe's Topsy, Harriet Wilson's Frado, and Mark Twain's Huck against the historical backdrop of the Fugitive Slave Act and the passing of the first compulsory education bills that made truancy illegal. I ask how, within a discourse of independence that fails to imagine any serious movements in the minor, we might understand the depictions of moving children as interrupting a U.S. preoccupation with normative development and recognize in them the emergence of an alternative imaginary. To attend to the movement of the minor is to attend to what the discursive order of a development-centered imaginary deems inconsequential and what its grammar can render only as mistakes. Engaging the insights of performance studies, I regard what these narratives depict as childish missteps (Topsy's spins, Frado's climbing the roof) as dances that trouble the narrative's discursive order. At the same time, drawing upon the observations of black studies and literary theory, I take note of the pressure these "minor moves" put on the literal grammar of the text (Stowe's run-on sentences and Hawthorne's shaky subject-verb agreements). I regard these ungrammatical moves as poetic ruptures from which emerges an alternative and prior force of the imaginary at work in these narratives--a force I call "growth."

Reading these "minor moves" holds open the possibility of thinking about a generative association between blackness and childishness, one that neither supports racist ideas of biological inferiority nor mandates in the name of political uplift the subsequent repudiation of childishness. I argue that recognizing the fugitive force of growth indicated in the interplay between the conceptual and grammatical disjunctures of these minor moves opens a deeper understanding of agency and dependency that exceeds notions of arrested development and social death. For once we interrupt the desire to picture development (which is to say the desire to picture), dependency is no longer a state (of social death or arrested development) of what does not belong, but rather it is what Édouard Glissant might have called a "departure" (from "be[ing] a single being"). Topsy's hard-to-see pick-pocketing and Pearl's running amok with brown men in the market are not moves out of dependency but indeed social turns (a dance) by way of dependency. Dependent, moving and ungrammatical, the growth evidenced in these childish ruptures enables different stories about slavery, freedom, and childishness--ones that do not necessitate a repudiation of childishness in the name of freedom, but recognize in such minor moves a fugitive way out.

Relevance: 20.00%

Abstract:

The foraging activity of many organisms reveals strategic movement patterns, showing efficient use of spatially distributed resources. The underlying mechanisms behind these movement patterns, such as the use of spatial memory, are topics of considerable debate. To augment existing evidence of spatial memory use in primates, we generated movement patterns from simulated primate agents with simple sensory and behavioral capabilities. We developed agents representing various hypotheses of memory use and compared the movement patterns of simulated groups to those of an observed group of red colobus monkeys (Procolobus rufomitratus), testing for the effects of memory type (Euclidean or landmark-based), the amount of memory retention, and the effects of social rules in making foraging choices at the scale of the group (independent or leader-led). Our results indicate that red colobus movement patterns fit best with simulated groups that have landmark-based memory and a follow-the-leader foraging strategy. Comparisons between simulated agents revealed that social rules had the greatest impact on a group's step length, whereas the type of memory had the greatest impact on a group's path tortuosity and cohesion. Using simulation studies as experimental trials to test theories of spatial memory use allows the development of insight into the behavioral mechanisms behind animal movement, yielding case-specific results as well as general results about how changes to perception and behavior influence movement patterns.
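Two of the movement statistics compared above, step length and path tortuosity, are simple to compute; the trajectory below is made up, and tortuosity is taken here as path length over net displacement (1.0 for a straight path):

```python
# Step length and tortuosity for a made-up 2-D trajectory (illustrative).
import math

path = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 3), (0, 3)]

def step_lengths(path):
    """Distance covered between consecutive positions."""
    return [math.dist(a, b) for a, b in zip(path, path[1:])]

def tortuosity(path):
    """Path length divided by net displacement: 1.0 for a straight path,
    larger for a winding one."""
    return sum(step_lengths(path)) / math.dist(path[0], path[-1])

steps = step_lengths(path)
print(f"mean step length: {sum(steps) / len(steps):.2f}")   # ~1.17
print(f"tortuosity: {tortuosity(path):.2f}")                # ~1.94
```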

Relevance: 20.00%

Abstract:

Vocal learning is a critical behavioral substrate for spoken human language. It is a rare trait found in three distantly related groups of birds: songbirds, hummingbirds, and parrots. These avian groups have remarkably similar systems of cerebral vocal nuclei for the control of learned vocalizations that are not found in their more closely related vocal non-learning relatives. These findings led to the hypothesis that brain pathways for vocal learning in different groups evolved independently from a common ancestor but under pre-existing constraints. Here, we suggest one such constraint: a pre-existing system for movement control. Using behavioral molecular mapping, we discovered that in songbirds, parrots, and hummingbirds, all cerebral vocal learning nuclei are adjacent to discrete brain areas active during limb and body movements. Similar to the relationships between vocal nuclei activation and singing, activation in the adjacent areas correlated with the amount of movement performed and was independent of auditory and visual input. These same movement-associated brain areas were also present in female songbirds that do not learn vocalizations and have atrophied cerebral vocal nuclei, and in ring doves that are vocal non-learners and do not have cerebral vocal nuclei. A compilation of previous neural tracing experiments in songbirds suggests that the movement-associated areas are connected in a network that runs in parallel with the adjacent vocal learning system. This study is, to our knowledge, the first global mapping of movement-associated areas of the avian cerebrum, and it indicates that the brain systems controlling vocal learning in distantly related birds are directly adjacent to brain systems involved in movement control.
Based upon these findings, we propose a motor theory for the origin of vocal learning: the brain areas specialized for vocal learning in vocal learners evolved as specializations of a pre-existing motor pathway that controls movement.

Relevance: 20.00%

Abstract:

OBJECTIVE: To review the experience at a single institution with motor evoked potential (MEP) monitoring during intracranial aneurysm surgery to determine the incidence of unacceptable movement. METHODS: Neurophysiology event logs and anesthetic records from 220 craniotomies for aneurysm clipping were reviewed for unacceptable patient movement or reasons for cessation of MEPs. Muscle relaxants were not given after intubation. Transcranial MEPs were recorded from the bilateral abductor hallucis and abductor pollicis muscles. MEP stimulus intensity was increased up to 500 V until evoked potential responses were detectable. RESULTS: Of 220 patients, 7 (3.2%) exhibited unacceptable movement with MEP stimulation: 2 had nociception-induced movement and 5 had excessive field movement. In all but one case, MEP monitoring could be resumed, yielding a 99.5% monitoring rate. CONCLUSIONS: With this anesthetic and monitoring regimen, the authors were able to record MEPs of the upper and lower extremities in all patients and found that only 3.2% demonstrated unacceptable movement. With a suitable anesthetic technique, MEP monitoring in the upper and lower extremities appears to be feasible in most patients and should not be withheld because of concern for movement during neurovascular surgery.

Relevance: 20.00%

Abstract:

The cognitive control of behavior was long considered to be centralized in cerebral cortex. More recently, subcortical structures such as cerebellum and basal ganglia have been implicated in cognitive functions as well. The fact that subcortico-cortical circuits for the control of movement involve the thalamus prompts the notion that activity in movement-related thalamus may also reflect elements of cognitive behavior. Yet this hypothesis has rarely been investigated. Using the pathways linking cerebellum to cerebral cortex via the thalamus as a template, we review evidence that the motor thalamus, together with movement-related central thalamus have the requisite connectivity and activity to mediate cognitive aspects of movement control.

Relevance: 20.00%

Abstract:

Our percept of visual stability across saccadic eye movements may be mediated by presaccadic remapping. Just before a saccade, neurons that remap become visually responsive at a future field (FF), which anticipates the saccade vector. Hence, the neurons use corollary discharge of saccades. Many of the neurons also decrease their response at the receptive field (RF). Presaccadic remapping occurs in several brain areas including the frontal eye field (FEF), which receives corollary discharge of saccades in its layer IV from a collicular-thalamic pathway. We studied, at two levels, the microcircuitry of remapping in the FEF. At the laminar level, we compared remapping between layers IV and V. At the cellular level, we compared remapping between different neuron types of layer IV. In the FEF in four monkeys (Macaca mulatta), we identified 27 layer IV neurons with orthodromic stimulation and 57 layer V neurons with antidromic stimulation from the superior colliculus. With the use of established criteria, we classified the layer IV neurons as putative excitatory (n = 11), putative inhibitory (n = 12), or ambiguous (n = 4). We found that just before a saccade, putative excitatory neurons increased their visual response at the RF, putative inhibitory neurons showed no change, and ambiguous neurons increased their visual response at the FF. None of the neurons showed presaccadic visual changes at both RF and FF. In contrast, neurons in layer V showed full remapping (at both the RF and FF). Our data suggest that elemental signals for remapping are distributed across neuron types in early cortical processing and combined in later stages of cortical microcircuitry.