31 results for UNCONDITIONED STIMULUS
Abstract:
In fear extinction, an animal learns that a conditioned stimulus (CS) no longer predicts a noxious stimulus [unconditioned stimulus (UCS)] with which it had previously been associated, leading to inhibition of the conditioned response (CR). Extinction creates a new CS-noUCS memory trace, competing with the initial fear (CS-UCS) memory. Recall of extinction memory and, hence, CR inhibition at later CS encounters is facilitated by contextual stimuli present during extinction training. In line with theoretical predictions derived from animal studies, we show that, after extinction, a CS-evoked engagement of human ventromedial prefrontal cortex (VMPFC) and hippocampus is context dependent, being expressed in an extinction, but not a conditioning, context. Likewise, a positive correlation between VMPFC and hippocampal activity is extinction context dependent. Thus, a VMPFC-hippocampal network provides for context-dependent recall of human extinction memory, consistent with a view that hippocampus confers context dependence on VMPFC.
Abstract:
Simultaneous recording from multiple single neurones presents many technical difficulties. However, obtaining such data has many advantages, which make it highly worthwhile to overcome the technical problems. This report describes methods which we have developed to permit recordings in awake behaving monkeys using the 'Eckhorn' 16 electrode microdrive. Structural magnetic resonance images are collected to guide electrode placement. Head fixation is achieved using a specially designed headpiece, modified for the multiple electrode approach, and access to the cortex is provided via a novel recording chamber. Growth of scar tissue over the exposed dura mater is reduced using an anti-mitotic compound. Control of the microdrive is achieved by a computerised system which permits several experimenters to move different electrodes simultaneously, considerably reducing the load on an individual operator. Neurones are identified as pyramidal tract neurones by antidromic stimulation through chronically implanted electrodes; stimulus control is integrated into the computerised system. Finally, analysis of multiple single unit recordings requires accurate methods to correct for non-stationarity in unit firing. A novel technique for such correction is discussed.
Abstract:
Learning is often understood as an organism's gradual acquisition of the association between a given sensory stimulus and the correct motor response. Mathematically, this corresponds to regressing a mapping between the set of observations and the set of actions. Recently, however, it has been shown both in cognitive and motor neuroscience that humans are not only able to learn particular stimulus-response mappings, but are also able to extract abstract structural invariants that facilitate generalization to novel tasks. Here we show how such structure learning can enhance facilitation in a sensorimotor association task performed by human subjects. Using regression and reinforcement learning models we show that the observed facilitation cannot be explained by these basic models of learning stimulus-response associations. We show, however, that the observed data can be explained by a hierarchical Bayesian model that performs structure learning. In line with previous results from cognitive tasks, this suggests that hierarchical Bayesian inference might provide a common framework to explain both the learning of specific stimulus-response associations and the learning of abstract structures that are shared by different task environments.
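The hierarchical idea in this abstract can be illustrated with a toy two-level sketch: a Gaussian hyperprior over task parameters (the "abstract structure") is estimated from previously learned tasks, and that prior then speeds up learning of a new stimulus-response mapping. The conjugate-update form, parameter names, and noise values below are illustrative assumptions, not the authors' model.

```python
import statistics

def structure_prior(learned_slopes):
    """Fit a Gaussian hyperprior over task parameters (here, a
    stimulus-response slope) from tasks learned earlier. This plays
    the role of the shared 'abstract structure'."""
    mu = statistics.mean(learned_slopes)
    var = statistics.pvariance(learned_slopes) + 1e-6  # guard against zero variance
    return mu, var

def posterior_slope(xy_pairs, prior_mu, prior_var, noise_var=1.0):
    """Bayesian linear regression with known observation noise:
    combine the structural prior with a new task's (stimulus,
    response) data via precision weighting."""
    sxx = sum(x * x for x, _ in xy_pairs)
    sxy = sum(x * y for x, y in xy_pairs)
    post_precision = 1.0 / prior_var + sxx / noise_var
    post_mean = (prior_mu / prior_var + sxy / noise_var) / post_precision
    return post_mean, 1.0 / post_precision
```

With a tight prior learned from earlier tasks, a single noisy observation in a new task already yields an accurate estimate, which is the kind of facilitation the abstract describes.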
Abstract:
A decision is a commitment to a proposition or plan of action based on evidence and the expected costs and benefits associated with the outcome. Progress in a variety of fields has led to a quantitative understanding of the mechanisms that evaluate evidence and reach a decision. Several formalisms propose that a representation of noisy evidence is evaluated against a criterion to produce a decision. Without additional evidence, however, these formalisms fail to explain why a decision-maker would change their mind. Here we extend a model, developed to account for both the timing and the accuracy of the initial decision, to explain subsequent changes of mind. Subjects made decisions about a noisy visual stimulus, which they indicated by moving a handle. Although they received no additional information after initiating their movement, their hand trajectories betrayed a change of mind in some trials. We propose that noisy evidence is accumulated over time until it reaches a criterion level, or bound, which determines the initial decision, and that the brain exploits information that is in the processing pipeline when the initial decision is made to subsequently either reverse or reaffirm the initial decision. The model explains both the frequency of changes of mind as well as their dependence on both task difficulty and whether the initial decision was accurate or erroneous. The theoretical and experimental findings advance the understanding of decision-making to the highly flexible and cognitive acts of vacillation and self-correction.
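The extended model described above can be sketched as bounded evidence accumulation followed by continued processing of pipeline evidence, which can carry the accumulator across a change-of-mind bound. All parameter values (bound, noise, pipeline length, change bound) are illustrative assumptions, not the paper's fitted values.

```python
import random

_rng = random.Random(0)

def change_of_mind_trial(drift, bound=1.0, dt=0.01, noise=1.0,
                         pipeline_steps=30, change_bound=0.5):
    """One trial: accumulate to a bound for the initial decision, then
    process pipeline evidence that may reverse it.

    Returns (initial_choice, final_choice), each +1 or -1.
    """
    x = 0.0
    # Accumulate noisy momentary evidence to a decision bound.
    while abs(x) < bound:
        x += drift * dt + noise * dt ** 0.5 * _rng.gauss(0, 1)
    initial = 1 if x > 0 else -1
    # Evidence still in the sensory pipeline continues to arrive after
    # movement initiation and can cross a change-of-mind bound.
    final = initial
    for _ in range(pipeline_steps):
        x += drift * dt + noise * dt ** 0.5 * _rng.gauss(0, 1)
        if x * initial < -change_bound:
            final = -initial
            break
    return initial, final
```

Run over many trials, changes of mind are rare and become more frequent as the drift (stimulus strength) weakens, matching the qualitative pattern the abstract reports.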
Abstract:
When one finger touches the other, the resulting tactile sensation is perceived as weaker than the same stimulus externally imposed. This attenuation of sensation could result from a predictive process that subtracts the expected sensory consequences of the action, or from a postdictive process that alters the perception of sensations that are judged after the event to be self-generated. In this study we observe attenuation even when the fingers unexpectedly fail to make contact, supporting a predictive process. This predictive attenuation of self-generated sensation may have evolved to enhance the perception of sensations with an external cause.
Abstract:
We investigated whether stimulation of the pyramidal tract (PT) could reset the phase of 15-30 Hz beta oscillations observed in the macaque motor cortex. We recorded local field potentials (LFPs) and multiple single-unit activity from two conscious macaque monkeys performing a precision grip task. EMG activity was also recorded from the second animal. Single PT stimuli were delivered during the hold period of the task, when oscillations in the LFP were most prominent. Stimulus-triggered averaging of the LFP showed a phase-locked oscillatory response to PT stimulation. Frequency domain analysis revealed two components within the response: a 15-30 Hz component, which represented resetting of on-going beta rhythms, and a lower frequency 10 Hz response. Only the higher frequency could be observed in the EMG activity, at stronger stimulus intensities than were required for resetting the cortical rhythm. Stimulation of the PT during movement elicited a greatly reduced oscillatory response. Analysis of single-unit discharge confirmed that PT stimulation was capable of resetting periodic activity in motor cortex. The firing patterns of pyramidal tract neurones (PTNs) and unidentified neurones exhibited successive cycles of suppression and facilitation, time locked to the stimulus. We conclude that PTN activity directly influences the generation of the 15-30 Hz rhythm. These PTNs facilitate EMG activity in upper limb muscles, contributing to corticomuscular coherence at this same frequency. Since the earliest oscillatory effect observed following stimulation was a suppression of firing, we speculate that inhibitory feedback may be the key mechanism generating such oscillations in the motor cortex.
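The stimulus-triggered averaging used in this study can be sketched as follows: segments of the LFP aligned on stimulus onset are averaged, so a rhythm whose phase is reset by the stimulus survives averaging while a rhythm with random phase across stimuli averages toward zero. Sampling rate, window lengths, and the synthetic signal in the test are illustrative.

```python
import math

def stimulus_triggered_average(lfp, stim_indices, pre=10, post=40):
    """Average LFP segments aligned on stimulus onset.

    lfp: sequence of samples; stim_indices: sample indices of stimuli.
    Returns the mean segment of length pre + post (lag 0 at index pre),
    i.e., the phase-locked component of the response."""
    segments = [lfp[i - pre:i + post] for i in stim_indices
                if i - pre >= 0 and i + post <= len(lfp)]
    n = len(segments)
    return [sum(seg[k] for seg in segments) / n for k in range(pre + post)]
```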
Abstract:
A self-produced tactile stimulus is perceived as less ticklish than the same stimulus generated externally. We used fMRI to examine neural responses when subjects experienced a tactile stimulus that was either self-produced or externally produced. More activity was found in somatosensory cortex when the stimulus was externally produced. In the cerebellum, less activity was associated with a movement that generated a tactile stimulus than with a movement that did not. This difference suggests that the cerebellum is involved in predicting the specific sensory consequences of movements, providing the signal that is used to cancel the sensory response to self-generated stimulation.
Abstract:
Molecular self-organization has the potential to serve as an efficient and versatile tool for the spontaneous creation of low-dimensional nanostructures on surfaces. We demonstrate how the subtle balance between intermolecular interactions and molecule-surface interactions can be altered by modifying the environment or through manipulation by means of the tip in a scanning tunnelling microscope (STM) at room temperature. We show how this leads to the distinctive ordering and disordering of a triangular nanographene molecule, the trizigzag-hexa-peri-hexabenzocoronenes-phenyl-6 (trizigzagHBC-Ph6), on two different surfaces: graphite and Au(111). The assembly of submonolayer films on graphite reveals a sixfold packing symmetry under UHV conditions, whereas at the graphite-phenyloctane interface, they reorganize into a fourfold packing symmetry, mediated by the solvent molecules. On Au(111) under UHV conditions in the multilayer films we investigated, although disorder prevails with the molecules being randomly distributed, their packing behaviour can be altered by the scanning motion of the tip. The asymmetric diode-like current-voltage characteristics of the molecules are retained when deposited on both substrates. This paper highlights the importance of the surrounding medium and any external stimulus in influencing the molecular organization process, and offers a unique approach for controlling the assembly of molecules at a desired location on a substrate.
Abstract:
Perceptual learning improves perception through training. Perceptual learning improves with most stimulus types but fails when certain stimulus types are mixed during training (roving). This result is surprising because classical supervised and unsupervised neural network models can cope easily with roving conditions. What makes humans so inferior compared to these models? As experimental and conceptual work has shown, human perceptual learning is neither supervised nor unsupervised but reward-based learning. Reward-based learning suffers from the so-called unsupervised bias, i.e., to prevent synaptic "drift", the average reward has to be estimated exactly. However, this is impossible when two or more stimulus types with different rewards are presented during training (and the reward is estimated by a running average). For this reason, we propose that no learning occurs in roving conditions. However, roving hinders perceptual learning only for combinations of similar stimulus types, not for dissimilar ones. In this latter case, we propose that a critic can estimate the reward for each stimulus type separately. One implication of our analysis is that the critic cannot be located in the visual system. © 2011 Elsevier Ltd.
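The running-average argument above is easy to make concrete: when two interleaved stimulus types carry different reward rates, a single shared estimate settles between the two true rates, so each type's reward prediction error carries a systematic bias. Function name, learning rate, and reward rates below are illustrative assumptions, and rewards are deterministic for clarity.

```python
def shared_reward_estimate(reward_rates, n_trials=10000, alpha=0.01):
    """Roving with one shared running-average reward estimate.

    Two (or more) stimulus types with different reward rates are
    interleaved; the shared estimate converges to a value between the
    true rates, leaving a per-type bias in the prediction error.

    Returns (shared_estimate, per_type_biases)."""
    r_hat = 0.0
    for trial in range(n_trials):
        r = reward_rates[trial % len(reward_rates)]   # alternate stimulus types
        r_hat += alpha * (r - r_hat)                  # running average
    return r_hat, [rate - r_hat for rate in reward_rates]
```

With rates 0.9 and 0.5 the shared estimate ends near 0.7, so one type is persistently under-predicted and the other over-predicted, which is the "unsupervised bias" the abstract invokes.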
Abstract:
Decisions about noisy stimuli require evidence integration over time. Traditionally, evidence integration and decision making are described as a one-stage process: a decision is made when evidence for the presence of a stimulus crosses a threshold. Here, we show that one-stage models cannot explain psychophysical experiments on feature fusion, where two visual stimuli are presented in rapid succession. Paradoxically, the second stimulus biases decisions more strongly than the first one, contrary to predictions of one-stage models and intuition. We present a two-stage model where sensory information is integrated and buffered before it is fed into a drift diffusion process. The model is tested in a series of psychophysical experiments and explains both accuracy and reaction time distributions. © 2012 Rüter et al.
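The two-stage idea above can be sketched with a leaky sensory buffer feeding a drift diffusion process: because the buffer leaks, the first stimulus has decayed more by read-out time, so the second stimulus biases the decision more strongly. All parameter values are illustrative assumptions, not the paper's fitted ones.

```python
import random

_rng = random.Random(1)

def two_stage_trial(stim1, stim2, dur=10, leak=0.2, gain=0.1,
                    noise=0.3, bound=3.0):
    """Two-stage sketch: integrate-and-buffer, then drift diffusion.

    stim1/stim2: signed feature values of two brief successive stimuli.
    Returns (choice, decision_steps); choice is +1 or -1."""
    # Stage 1: leaky integration of the two stimuli into a buffer.
    b = 0.0
    for s in [stim1] * dur + [stim2] * dur:
        b += s - leak * b
    # Stage 2: the buffered value sets the drift of a diffusion process.
    x, steps = 0.0, 0
    while abs(x) < bound:
        x += gain * b + noise * _rng.gauss(0, 1)
        steps += 1
    return (1 if x > 0 else -1), steps
```

With two equal-strength, opposite-sign stimuli the buffered value takes the sign of the second stimulus, reproducing the paradoxical recency bias the abstract describes.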
Abstract:
It is commonly believed that visual short-term memory (VSTM) consists of a fixed number of "slots" in which items can be stored. An alternative theory in which memory resource is a continuous quantity distributed over all items seems to be refuted by the appearance of guessing in human responses. Here, we introduce a model in which resource is not only continuous but also variable across items and trials, causing random fluctuations in encoding precision. We tested this model against previous models using two VSTM paradigms and two feature dimensions. Our model accurately accounts for all aspects of the data, including apparent guessing, and outperforms slot models in formal model comparison. At the neural level, variability in precision might correspond to variability in neural population gain and doubly stochastic stimulus representation. Our results suggest that VSTM resource is continuous and variable rather than discrete and fixed and might explain why subjective experience of VSTM is not all or none.
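The core of the variable-precision proposal can be sketched in a few lines: encoding precision is itself a random variable (here gamma-distributed across items and trials), and the measurement is the stimulus plus Gaussian noise whose variance is the inverse of that draw. The gamma parameterization and values are illustrative assumptions.

```python
import math
import random

_rng = random.Random(0)

def vp_encode(stimulus, mean_precision=4.0, shape=2.0):
    """Variable-precision encoding of one item on one trial.

    Precision J ~ Gamma(shape, mean_precision / shape), so its mean is
    mean_precision; the measurement has noise variance 1/J. Low-J draws
    produce large errors that can masquerade as guesses under a
    fixed-capacity analysis. Returns (measurement, J)."""
    J = _rng.gammavariate(shape, mean_precision / shape)
    measurement = stimulus + _rng.gauss(0.0, 1.0 / math.sqrt(J))
    return measurement, J
```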
Abstract:
The brain encodes visual information with limited precision. Contradictory evidence exists as to whether the precision with which an item is encoded depends on the number of stimuli in a display (set size). Some studies have found evidence that precision decreases with set size, but others have reported constant precision. These groups of studies differed in two ways. The studies that reported a decrease used displays with heterogeneous stimuli and tasks with a short-term memory component, while the ones that reported constancy used homogeneous stimuli and tasks that did not require short-term memory. To disentangle the effects of heterogeneity and short-term memory involvement, we conducted two main experiments. In Experiment 1, stimuli were heterogeneous, and we compared a condition in which target identity was revealed before the stimulus display with one in which it was revealed afterward. In Experiment 2, target identity was fixed, and we compared heterogeneous and homogeneous distractor conditions. In both experiments, we compared an optimal-observer model in which precision is constant with set size with one in which it depends on set size. We found that precision decreases with set size when the distractors are heterogeneous, regardless of whether short-term memory is involved, but not when they are homogeneous. This suggests that heterogeneity, not short-term memory, is the critical factor. In addition, we found that precision exhibits variability across items and trials, which may partly be caused by attentional fluctuations.
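The two hypotheses compared in this abstract are often parameterized with a power law over set size, which nests both models: a zero exponent recovers constant precision, a positive exponent gives the decrease reported for heterogeneous distractors. The parameterization and values below are an illustrative assumption, not the paper's fitted model.

```python
def encoding_precision(set_size, j1=10.0, alpha=1.0):
    """Power-law precision: J(N) = J1 * N**(-alpha).

    alpha = 0 is the constant-precision model; alpha > 0 makes
    precision fall with set size. j1 and alpha are nominal values."""
    return j1 * set_size ** (-alpha)
```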
Abstract:
A visual target is more difficult to recognize when it is surrounded by other, similar objects. This breakdown in object recognition is known as crowding. Despite a long history of experimental work, computational models of crowding are still sparse. Specifically, few studies have examined crowding using an ideal-observer approach. Here, we compare crowding in ideal observers with crowding in humans. We derived an ideal-observer model for target identification under conditions of position and identity uncertainty. Simulations showed that this model reproduces the hallmark of crowding, namely a critical spacing that scales with viewing eccentricity. To examine how well the model fits quantitatively to human data, we performed three experiments. In Experiments 1 and 2, we measured observers' perceptual uncertainty about stimulus positions and identities, respectively, for a target in isolation. In Experiment 3, observers identified a target that was flanked by two distractors. We found that about half of the errors in Experiment 3 could be accounted for by the perceptual uncertainty measured in Experiments 1 and 2. The remainder of the errors could be accounted for by assuming that uncertainty (i.e., the width of the internal noise distribution) about stimulus positions and identities depends on flanker proximity. Our results provide a mathematical restatement of the crowding problem and support the hypothesis that crowding behavior is a sign of optimality rather than a perceptual defect.
Abstract:
Deciding whether a set of objects are the same or different is a cornerstone of perception and cognition. Surprisingly, no principled quantitative model of sameness judgment exists. We tested whether human sameness judgment under sensory noise can be modeled as a form of probabilistically optimal inference. An optimal observer would compare the reliability-weighted variance of the sensory measurements with a set size-dependent criterion. We conducted two experiments, in which we varied set size and individual stimulus reliabilities. We found that the optimal-observer model accurately describes human behavior, outperforms plausible alternatives in a rigorous model comparison, and accounts for three key findings in the animal cognition literature. Our results provide a normative footing for the study of sameness judgment and indicate that the notion of perception as near-optimal inference extends to abstract relations.
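The decision rule named in this abstract can be written out directly: respond "same" when the reliability-weighted variance of the sensory measurements falls below a criterion. In the full model the criterion depends on set size and the reliabilities; in this sketch it is simply passed in, and the function name is an assumption.

```python
def sameness_judgment(measurements, reliabilities, criterion):
    """Reliability-weighted variance test for same/different.

    measurements: noisy sensory values, one per item.
    reliabilities: precision weights, one per item.
    Returns (response, weighted_variance)."""
    total = sum(reliabilities)
    w = [r / total for r in reliabilities]
    mean = sum(wi * m for wi, m in zip(w, measurements))
    weighted_var = sum(wi * (m - mean) ** 2 for wi, m in zip(w, measurements))
    return ("same" if weighted_var < criterion else "different"), weighted_var
```

Identical measurements give zero weighted variance and therefore a "same" response; spread-out measurements exceed any modest criterion and yield "different".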
On the generality of crowding: visual crowding in size, saturation, and hue compared to orientation.
Abstract:
Perception of peripherally viewed shapes is impaired when surrounded by similar shapes. This phenomenon is commonly referred to as "crowding". Although studied extensively for perception of characters (mainly letters) and, to a lesser extent, for orientation, little is known about whether and how crowding affects perception of other features. Nevertheless, current crowding models suggest that the effect should be rather general and thus not restricted to letters and orientation. Here, we report on a series of experiments investigating crowding in the following elementary feature dimensions: size, hue, and saturation. Crowding effects in these dimensions were benchmarked against those in the orientation domain. Our primary finding is that all features studied show clear signs of crowding. First, identification thresholds increase with decreasing mask spacing. Second, for all tested features, critical spacing appears to be roughly half the viewing eccentricity and independent of stimulus size, a property previously proposed as the hallmark of crowding. Interestingly, although critical spacings are highly comparable, crowding magnitude differs across features: Size crowding is almost as strong as orientation crowding, whereas the effect is much weaker for saturation and hue. We suggest that future theories and models of crowding should be able to accommodate these differences in crowding effects.
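The regularity reported above, a critical spacing of roughly half the viewing eccentricity that is independent of stimulus size, can be expressed as a one-line predicate. The constant b is a nominal assumption here; the paper reports it as roughly 0.5 across size, hue, saturation, and orientation.

```python
def is_crowded(eccentricity_deg, flanker_spacing_deg, b=0.5):
    """Bouma-style critical-spacing rule: flankers closer than about
    b * eccentricity (degrees of visual angle) crowd the target,
    independent of stimulus size. b is illustrative, not fitted."""
    return flanker_spacing_deg < b * eccentricity_deg
```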