242 results for Sensory modifier


Relevance: 10.00%

Abstract:

Dissociable processes for conscious perception (“what” processing) and guidance of action (“how” processing) have been identified in the visual, auditory, and somatosensory systems. The present study was designed to find a similar dissociation within whole-body movements, in which the presence of vestibular information creates a unique perceptual condition. In two experiments, blindfolded participants walked along a linear path and specified the walked distance by verbally estimating it (“what” measure) and by pulling a length of tape to match the walked distance (“how” measure). Although these two measures yielded largely comparable responses under a normal walking condition, variability in verbal estimates showed a qualitatively different pattern from variability in tape-pulling when sensory input during walking was altered by having participants wear a heavy backpack. This suggests that the “what” versus “how” dissociation exists in whole-body movements as well, supporting the claim that it is a general organizing principle of perceptual systems.
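The key analysis above compares response variability across the two measures under the two walking conditions. A minimal sketch of that comparison using the coefficient of variation, with toy data; the values and the direction of the effect are illustrative assumptions, not taken from the study.

```python
import numpy as np

def coefficient_of_variation(responses: np.ndarray) -> float:
    """Variability of distance responses relative to their mean (SD / mean)."""
    return float(np.std(responses, ddof=1) / np.mean(responses))

# Toy distance responses (metres) for one walked distance; values are illustrative only
conditions = {
    "verbal / normal":   np.array([9.5, 11.0, 10.2, 8.8, 10.6]),
    "verbal / backpack": np.array([7.0, 12.5, 9.8, 14.0, 8.2]),
    "tape / normal":     np.array([10.1, 9.9, 10.4, 9.7, 10.2]),
    "tape / backpack":   np.array([10.3, 9.6, 10.5, 9.8, 10.1]),
}

for label, data in conditions.items():
    print(f"{label}: CV = {coefficient_of_variation(data):.3f}")
```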

Relevance: 10.00%

Abstract:

In this paper we provide normative data along multiple cognitive and affective dimensions for a set of 110 sounds, including living and manmade stimuli. Environmental sounds are increasingly used as stimuli in the cognitive, neuropsychological and neuroimaging fields, yet no comprehensive set of normative information for this type of stimulus is available for use across these experimental domains. Experiment 1 collected data from 162 participants in an online questionnaire, which included measures of identification and categorization as well as cognitive and affective variables. A subsequent experiment collected response times to these sounds. Sounds were normalized to the same length (1 second) in order to maximize usage across multiple paradigms and experimental fields. These sounds can be freely downloaded for use, and all response data have also been made available so that researchers can choose one or many of the cognitive and affective dimensions along which they would like to control their stimuli. Our hope is that the availability of such information will assist researchers in the fields of cognitive and clinical psychology and the neuroimaging community in choosing well-controlled environmental sound stimuli, and allow comparison across multiple studies.
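The length normalization described above amounts to trimming or padding each waveform to exactly one second. A minimal sketch, assuming mono input and a 44.1 kHz sample rate (neither is specified in the abstract):

```python
import numpy as np

def normalize_length(waveform: np.ndarray, sample_rate: int = 44100,
                     target_seconds: float = 1.0) -> np.ndarray:
    """Trim or zero-pad a mono waveform to exactly target_seconds."""
    target_len = int(round(sample_rate * target_seconds))
    if len(waveform) >= target_len:
        return waveform[:target_len]                           # trim longer sounds
    return np.pad(waveform, (0, target_len - len(waveform)))   # zero-pad shorter ones

# Example: a 0.7 s sound padded up to 1.0 s
sound = np.random.randn(int(0.7 * 44100))
print(normalize_length(sound).shape)  # (44100,)
```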

Relevance: 10.00%

Abstract:

Semantic knowledge is supported by a widely distributed neuronal network, with differential patterns of activation depending upon experimental stimulus or task demands. Despite a wide body of knowledge on semantic object processing from the visual modality, the response of this semantic network to environmental sounds remains relatively unknown. Here, we used fMRI to investigate how access to different conceptual attributes from environmental sound input modulates this semantic network. Using a range of living and manmade sounds, we scanned participants whilst they carried out an object attribute verification task. Specifically, we tested visual perceptual, encyclopedic, and categorical attributes about living and manmade objects relative to a high-level auditory perceptual baseline to investigate the differential patterns of response to these contrasting types of object-related attributes, whilst keeping stimulus input constant across conditions. Within the bilateral distributed network engaged for processing environmental sounds across all conditions, we report here a highly significant dissociation within the left hemisphere between the processing of visual perceptual and encyclopedic attributes of objects.

Relevance: 10.00%

Abstract:

In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.

Relevance: 10.00%

Abstract:

Neuropsychological tests requiring patients to find a path through a maze can be used to assess visuospatial memory performance in temporal lobe pathology, particularly in the hippocampus. Alternatively, they have been used as a task sensitive to executive function in patients with frontal lobe damage. We measured performance on the Austin Maze in patients with unilateral left and right temporal lobe epilepsy (TLE), with and without hippocampal sclerosis, compared to healthy controls. Performance was correlated with a number of other neuropsychological tests to identify the cognitive components that may be associated with poor Austin Maze performance. Patients with right TLE were significantly impaired on the Austin Maze task relative to patients with left TLE and controls, and error scores correlated with their performance on the Block Design task. The performance of patients with left TLE was also impaired relative to controls; however, errors correlated with performance on tests of executive function and delayed recall. The presence of hippocampal sclerosis did not have an impact on maze performance. A discriminant function analysis indicated that the Austin Maze alone correctly classified 73.5% of patients as having right TLE. In summary, impaired performance on the Austin Maze task is more suggestive of right than left TLE; however, impaired performance on this visuospatial task does not necessarily involve the hippocampus. The relationship of the Austin Maze task with other neuropsychological tests suggests that differential cognitive components may underlie performance decrements in right versus left TLE.

Relevance: 10.00%

Abstract:

By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal (visual-tactile) integration and thus indicate a functional homology between human and monkey perirhinal cortices.

Relevance: 10.00%

Abstract:

To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).

Relevance: 10.00%

Abstract:

This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest-ranked hypothesis at any moment becomes the active representation that is used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners. In our approach we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparing the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot's sensory horizon during navigation tasks to demonstrate the proposed hypothesis-selection approach.
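A minimal sketch of the ranking step described above, assuming each map hypothesis already yields a localization-perturbed version of the locally accurate odometry window; the scoring metric and all names are illustrative, not taken from the paper.

```python
import numpy as np

def hypothesis_score(original_odom: np.ndarray, perturbed_odom: np.ndarray) -> float:
    """Mean Euclidean deviation between original and localization-perturbed
    (x, y) poses over the locally accurate window; a lower score means the
    map hypothesis agrees better with the odometry."""
    return float(np.mean(np.linalg.norm(original_odom - perturbed_odom, axis=1)))

def rank_hypotheses(original_odom: np.ndarray, perturbed: dict) -> list:
    """Rank map hypotheses (name -> perturbed odometry) by their score."""
    scores = {name: hypothesis_score(original_odom, odom) for name, odom in perturbed.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Toy example: two hypotheses over a 5-pose window
odom = np.cumsum(np.full((5, 2), 0.1), axis=0)
ranked = rank_hypotheses(odom, {
    "map_A": odom + 0.01,   # localization barely perturbs the odometry
    "map_B": odom + 0.50,   # localization heavily distorts the odometry
})
print(ranked[0][0])  # "map_A" becomes the active representation
```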

Relevance: 10.00%

Abstract:

Introduction: With the ever-increasing global burden of retinal disease, there is an urgent need to vastly improve formulation strategies that enhance posterior eye delivery of therapeutics. Despite intravitreal administration having demonstrated notable superiority over other routes in enhancing retinal drug availability, there still exist various significant physical/biochemical barriers preventing optimal drug delivery into the retina. A further complication lies in the inability to reliably translate laboratory-based retinal models into a clinical setting. Several formulation approaches have recently been evaluated to improve intravitreal therapeutic outcomes, and our aim in this review is to highlight strategies that hold the most promise. Areas covered: We discuss the complex barriers faced by the intravitreal route and examine how formulation strategies including implants, nanoparticulate carriers, viral vectors and sonotherapy have been utilized to attain both sustained delivery and enhanced penetration through to the retina. We conclude by highlighting the advances and limitations of current in vitro, ex vivo and in vivo retinal models in use by researchers globally. Expert opinion: Various nanoparticle compositions have demonstrated the ability to overcome the retinal barriers successfully; however, their utility is limited to the laboratory setting. Optimization of these formulations and the development of more robust experimental retinal models are necessary to translate success in the laboratory into clinically efficacious outcomes.

Relevance: 10.00%

Abstract:

Supported by contemporary theories of architectural aesthetics and neuro-aesthetics, this paper presents a case for the use of portable fNIRS imaging in the assessment of emotional responses to spatial environments experienced by both blind and sighted people. The aim of the paper is to outline the implications of fNIRS for spatial research and practice within the field of architecture, thereby suggesting a potential taxonomy of particular formations of space and affect. Empirical neurological study of affect and spatial experience from an architectural design perspective remains in many instances uncharted. Clinical research using the portable, non-invasive neuro-imaging device functional near infrared spectroscopy (fNIRS) is proving convincing in its ability to detect emotional responses to visual, spatio-auditory and task-based stimuli, providing a firm basis to potentially track cortical activity in the appraisal of architectural environments. Additionally, recent neurological studies have sought to explore the manifold sensory abilities of the visually impaired to better understand spatial perception in general. Key studies reveal that early blind participants perform as well as sighted participants due to higher auditory and somato-sensory spatial acuity. For instance, face vision allows the visually impaired to detect environments through skin pressure, at times giving an instantaneous impression of the layout of an unfamiliar environment. Studies also report pleasant and unpleasant emotional responses such as ‘weightedness’ or ‘claustrophobia’ within certain interior environments, revealing a deeper perceptual sensitivity than would be expected. We conclude that comparative fNIRS studies of spatial experience between sighted and blind participants have the potential to provide greater understanding of emotional responses to architectural environments.

Relevance: 10.00%

Abstract:

We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either a static depiction of a low-intensity expressed emotion or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at high intensity. The probe face had either the same or a different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent with the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.

Relevance: 10.00%

Abstract:

Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia than in healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups, involved reduced hemispheric asymmetry, and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density (CSD) model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right-hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal responses and no lateralisation in frontal ROIs. For the late MMN, patients had reduced bilateral parietal responses and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However, amplitude reductions were observed in both primary and secondary auditory cortex. These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, with impaired primary auditory cortex or impaired cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia.
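For context, the MMN referred to above is conventionally computed as the difference wave between the averaged ERPs to deviant and standard tones. A minimal numpy sketch; the epoch counts, channel count and sample count are chosen for illustration only.

```python
import numpy as np

def mismatch_negativity(standard_epochs: np.ndarray,
                        deviant_epochs: np.ndarray) -> np.ndarray:
    """Difference wave (deviant minus standard), averaged over trials.
    Epoch arrays have shape (n_trials, n_channels, n_samples)."""
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Toy example: 100 standard and 20 duration-deviant trials, 32 channels, 250 samples
rng = np.random.default_rng(0)
standards = rng.normal(size=(100, 32, 250))
deviants = rng.normal(size=(20, 32, 250))

mmn = mismatch_negativity(standards, deviants)   # shape (32, 250)
peak_amplitude = mmn.min(axis=1)                 # MMN is a negative deflection
print(mmn.shape, peak_amplitude.shape)           # (32, 250) (32,)
```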

Relevance: 10.00%

Abstract:

The reinforcing effects of aversive outcomes on avoidance behaviour are well established. However, their influence on perceptual processes is less well explored, especially during the transition from adolescence to adulthood. Using electroencephalography, we examined whether learning to actively or passively avoid harm can modulate early visual responses in adolescents and adults. The task included two avoidance conditions, active and passive, where two different warning stimuli predicted the imminent, but avoidable, presentation of an aversive tone. To avoid the aversive outcome, participants had to learn to emit an action (active avoidance) for one of the warning stimuli and omit an action for the other (passive avoidance). Both adults and adolescents performed the task with a high degree of accuracy. For both adolescents and adults, increased N170 event-related potential amplitudes were found for both the active and the passive warning stimuli compared with control conditions. Moreover, the potentiation of the N170 to the warning stimuli was stable and long lasting. Developmental differences were also observed; adolescents showed greater potentiation of the N170 component to danger signals. These findings demonstrate, for the first time, that learned danger signals in an instrumental avoidance task can influence early visual sensory processes in both adults and adolescents.

Relevance: 10.00%

Abstract:

Emotionally arousing events can distort our sense of time. We used a mixed block/event-related fMRI design to establish the neural basis for this effect. Nineteen participants were asked to judge whether angry, happy and neutral facial expressions that varied in duration (from 400 to 1,600 ms) were closer in duration to a short or a long duration they had learnt previously. Time was overestimated for both angry and happy expressions compared to neutral expressions. For faces presented for 700 ms, facial emotion modulated activity in regions of the timing network (Wiener et al., NeuroImage 49(2):1728–1740, 2010), namely the right supplementary motor area (SMA) and the junction of the right inferior frontal gyrus and anterior insula (IFG/AI). Reaction times were slowest when faces were displayed for 700 ms, indicating increased decision-making difficulty. Taken together with existing electrophysiological evidence (Ng et al., Neuroscience, doi: 10.3389/fnint.2011.00077, 2011), the effects are consistent with the idea that facial emotion moderates temporal decision making and that the right SMA and right IFG/AI are key neural structures responsible for this effect.
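The judgment described above is a temporal bisection task, and the overestimation effect appears as a higher proportion of "long" responses for emotional faces, equivalently a lower bisection point. A minimal sketch with illustrative response proportions; the numbers are assumptions, not the study's data.

```python
import numpy as np

durations = np.array([400, 600, 700, 800, 1000, 1200, 1400, 1600])  # probe durations, ms

def bisection_point(durations: np.ndarray, p_long: np.ndarray) -> float:
    """Duration at which 'long' responses reach 50%, by linear interpolation."""
    return float(np.interp(0.5, p_long, durations))

# Illustrative proportions of 'long' responses per duration
p_long_neutral = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.85, 0.92, 0.97])
p_long_angry   = np.array([0.10, 0.28, 0.50, 0.70, 0.85, 0.92, 0.96, 0.99])

print("neutral bisection point:", round(bisection_point(durations, p_long_neutral)))
print("angry bisection point:  ", round(bisection_point(durations, p_long_angry)))
# A lower bisection point for angry faces reflects temporal overestimation.
```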

Relevance: 10.00%

Abstract:

There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.