292 results for sensory acceptability
Abstract:
In this study we investigate previous claims that a region in the left posterior superior temporal sulcus (pSTS) is more activated by audiovisual than unimodal processing. First, we compare audiovisual to visual-visual and auditory-auditory conceptual matching using auditory or visual object names that are paired with pictures of objects or their environmental sounds. Second, we compare congruent and incongruent audiovisual trials when presentation is simultaneous or sequential. Third, we compare audiovisual stimuli that are either verbal (auditory and visual words) or nonverbal (pictures of objects and their associated sounds). The results demonstrate that, when task, attention, and stimuli are controlled, pSTS activation for audiovisual conceptual matching is 1) identical to that observed for intramodal conceptual matching, 2) greater for incongruent than congruent trials when auditory and visual stimuli are simultaneously presented, and 3) identical for verbal and nonverbal stimuli. These results are not consistent with previous claims that pSTS activation reflects the active formation of an integrated audiovisual representation. After a discussion of the stimulus and task factors that modulate activation, we conclude that, when stimulus input, task, and attention are controlled, pSTS is part of a distributed set of regions involved in conceptual matching, irrespective of whether the stimuli are audiovisual, auditory-auditory or visual-visual.
Abstract:
Neuropsychological tests requiring patients to find a path through a maze can be used to assess visuospatial memory performance in temporal lobe pathology, particularly in the hippocampus. Alternatively, they have been used as a task sensitive to executive function in patients with frontal lobe damage. We measured performance on the Austin Maze in patients with unilateral left and right temporal lobe epilepsy (TLE), with and without hippocampal sclerosis, compared to healthy controls. Performance was correlated with a number of other neuropsychological tests to identify the cognitive components that may be associated with poor Austin Maze performance. Patients with right TLE were significantly impaired on the Austin Maze task relative to patients with left TLE and controls, and error scores correlated with their performance on the Block Design task. The performance of patients with left TLE was also impaired relative to controls; however, errors correlated with performance on tests of executive function and delayed recall. The presence of hippocampal sclerosis did not have an impact on maze performance. A discriminant function analysis indicated that the Austin Maze alone correctly classified 73.5% of patients as having right TLE. In summary, impaired performance on the Austin Maze task is more suggestive of right than left TLE; however, impaired performance on this visuospatial task does not necessarily involve the hippocampus. The relationship of the Austin Maze task with other neuropsychological tests suggests that differential cognitive components may underlie performance decrements in right versus left TLE.
Abstract:
By virtue of its widespread afferent projections, perirhinal cortex is thought to bind polymodal information into abstract object-level representations. Consistent with this proposal, deficits in cross-modal integration have been reported after perirhinal lesions in nonhuman primates. It is therefore surprising that imaging studies of humans have not observed perirhinal activation during visual-tactile object matching. Critically, however, these studies did not differentiate between congruent and incongruent trials. This is important because successful integration can only occur when polymodal information indicates a single object (congruent) rather than different objects (incongruent). We scanned neurologically intact individuals using functional magnetic resonance imaging (fMRI) while they matched shapes. We found higher perirhinal activation bilaterally for cross-modal (visual-tactile) than unimodal (visual-visual or tactile-tactile) matching, but only when visual and tactile attributes were congruent. Our results demonstrate that the human perirhinal cortex is involved in cross-modal, visual-tactile, integration and, thus, indicate a functional homology between human and monkey perirhinal cortices.
Abstract:
To identify and categorize complex stimuli such as familiar objects or speech, the human brain integrates information that is abstracted at multiple levels from its sensory inputs. Using cross-modal priming for spoken words and sounds, this functional magnetic resonance imaging study identified 3 distinct classes of visuoauditory incongruency effects: visuoauditory incongruency effects were selective for 1) spoken words in the left superior temporal sulcus (STS), 2) environmental sounds in the left angular gyrus (AG), and 3) both words and sounds in the lateral and medial prefrontal cortices (IFS/mPFC). From a cognitive perspective, these incongruency effects suggest that prior visual information influences the neural processes underlying speech and sound recognition at multiple levels, with the STS being involved in phonological, AG in semantic, and mPFC/IFS in higher conceptual processing. In terms of neural mechanisms, effective connectivity analyses (dynamic causal modeling) suggest that these incongruency effects may emerge via greater bottom-up effects from early auditory regions to intermediate multisensory integration areas (i.e., STS and AG). This is consistent with a predictive coding perspective on hierarchical Bayesian inference in the cortex where the domain of the prediction error (phonological vs. semantic) determines its regional expression (middle temporal gyrus/STS vs. AG/intraparietal sulcus).
Abstract:
This paper presents a novel method to rank map hypotheses by the quality of localization they afford. The highest-ranked hypothesis at any moment becomes the active representation used to guide the robot to its goal location. A single static representation is insufficient for navigation in dynamic environments where paths can be blocked periodically, a common scenario that poses significant challenges for typical planners. In our approach, we simultaneously rank multiple map hypotheses by the influence that localization in each of them has on locally accurate odometry. This is done online for the current locally accurate window by formulating a factor graph of odometry relaxed by localization constraints. Comparing the resulting perturbed odometry of each hypothesis with the original odometry yields a score that can be used to rank map hypotheses by their utility. We deploy the proposed approach on a real robot navigating a structurally noisy office environment. The configuration of the environment is physically altered outside the robot's sensory horizon during navigation tasks to demonstrate the proposed approach of hypothesis selection.
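The ranking idea above can be sketched in a few lines: each map hypothesis perturbs the locally accurate odometry window, and hypotheses whose localization constraints perturb it least rank highest. The sketch below is a minimal illustration under assumed names and a simple sum-of-squared-pose-differences score; the paper itself uses a factor graph of odometry constraints relaxed by localization constraints, not this simplified scoring.

```python
# Illustrative sketch (not the authors' implementation): rank map
# hypotheses by how little localization against each map perturbs a
# locally accurate odometry window. Poses are (x, y, heading) tuples.

def perturbation_score(odometry, perturbed):
    """Sum of squared pose differences over the window; lower means the
    hypothesis's localization constraints agree better with odometry."""
    return sum(
        (ox - px) ** 2 + (oy - py) ** 2 + (oh - ph) ** 2
        for (ox, oy, oh), (px, py, ph) in zip(odometry, perturbed)
    )

def rank_hypotheses(odometry, perturbed_by_hypothesis):
    """Return hypothesis ids ordered best-first (least perturbation)."""
    scores = {
        hyp: perturbation_score(odometry, perturbed)
        for hyp, perturbed in perturbed_by_hypothesis.items()
    }
    return sorted(scores, key=scores.get)
```

In this toy form, the top-ranked hypothesis would become the active map used for planning until a re-ranking event.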
Abstract:
Using mixed methods, this research investigated why consumers engage in deviant behaviours. It found significant variation in how consumers perceive right and wrong, which calls for more tailored deterrence strategies to challenge how consumers justify deviant behaviours. Specifically, individuals draw on a number of factors when assessing right and wrong. While individuals agree on the polar acceptable and unacceptable behaviours, behaviours in between are questionable. When social consensus on a behaviour's acceptability varies, so too do the predictors of deviant behaviour. These findings contribute to consumer deviance and consumer ethics research.
Abstract:
Background Chronic respiratory illnesses are the most common group of childhood chronic health conditions and are overrepresented in socially isolated groups. Objective To conduct a randomized controlled pilot trial to evaluate the efficacy of Breathe Easier Online (BEO), an Internet-based problem-solving program with minimal facilitator involvement, to improve psychosocial well-being in children and adolescents with a chronic respiratory condition. Methods We randomly assigned 42 socially isolated children and adolescents (18 males), aged between 10 and 17 years, to either a BEO (final n = 19) or a wait-list control (final n = 20) condition. In total, 3 participants (2 from BEO and 1 from control) did not complete the intervention. Psychosocial well-being was operationalized through self-reported scores on depression symptoms and social problem solving. Secondary outcome measures included self-reported attitudes toward their illness and spirometry results. Paper-and-pencil questionnaires were completed at the hospital when participants attended a briefing session at baseline (time 1) and in their homes after the intervention for the BEO group or a matched 9-week time period for the wait-list group (time 2). Results The two groups were comparable at baseline across all demographic measures (all F < 1). For the primary outcome measures, there were no significant group differences on depression (P = .17) or social problem solving (P = .61). However, following the online intervention, those in the BEO group reported significantly lower depression (P = .04), less impulsive/careless problem solving (P = .01), and an improvement in positive attitude toward their illness (P = .04) compared with baseline. The wait-list group did not show these differences. Children in the BEO group and their parents rated the online modules very favorably.
Conclusions Although there were no significant group differences on primary outcome measures, our pilot data provide tentative support for the feasibility (acceptability and user satisfaction) and initial efficacy of an Internet-based intervention for improving well-being in children and adolescents with a chronic respiratory condition. Trial registration: Australian New Zealand Clinical Trials Registry number ACTRN12610000214033.
Abstract:
Introduction With the ever-increasing global burden of retinal disease, there is an urgent need to vastly improve formulation strategies that enhance posterior eye delivery of therapeutics. Despite intravitreal administration having demonstrated notable superiority over other routes in enhancing retinal drug availability, there still exist various significant physical/biochemical barriers preventing optimal drug delivery into the retina. A further complication lies with an inability to reliably translate laboratory-based retinal models into a clinical setting. Several formulation approaches have recently been evaluated to improve intravitreal therapeutic outcomes, and our aim in this review is to highlight strategies that hold the most promise. Areas covered We discuss the complex barriers faced by the intravitreal route and examine how formulation strategies including implants, nanoparticulate carriers, viral vectors and sonotherapy have been utilized to attain both sustained delivery and enhanced penetration through to the retina. We conclude by highlighting the advances and limitations of current in vitro, ex vivo and in vivo retinal models in use by researchers globally. Expert opinion Various nanoparticle compositions have demonstrated the ability to overcome the retinal barriers successfully; however, their utility is limited to the laboratory setting. Optimization of these formulations and the development of more robust experimental retinal models are necessary to translate success in the laboratory into clinically efficacious outcomes.
Abstract:
Supported by contemporary theories of architectural aesthetics and neuro-aesthetics, this paper presents a case for the use of portable fNIRS imaging in the assessment of emotional responses to spatial environments experienced by both blind and sighted people. The aim of the paper is to outline the implications of fNIRS for spatial research and practice within the field of architecture, thereby suggesting a potential taxonomy of particular formations of space and affect. Empirical neurological study of affect and spatial experience from an architectural design perspective remains in many instances uncharted. Clinical research using the portable, non-invasive neuro-imaging device functional near infrared spectroscopy (fNIRS) is proving convincing in its ability to detect emotional responses to visual, spatio-auditory and task-based stimuli, providing a firm basis to potentially track cortical activity in the appraisal of architectural environments. Additionally, recent neurological studies have sought to explore the manifold sensory abilities of the visually impaired to better understand spatial perception in general. Key studies reveal that early blind participants perform as well as sighted participants due to higher auditory and somato-sensory spatial acuity. For instance, face vision enables the visually impaired to detect environments through skin pressure, enabling at times an instantaneous impression of the layout of an unfamiliar environment. Studies also report pleasant and unpleasant emotional responses such as 'weightedness' or 'claustrophobia' within certain interior environments, revealing a deeper perceptual sensitivity than would be expected. We conclude with justification that comparative fNIRS studies between the sighted and blind concerning spatial experience have the potential to provide greater understanding of emotional responses to architectural environments.
Abstract:
We employed a novel cueing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either the static depiction of a low-intensity expressed emotion, or a dynamic sequence evolving from a neutral expression to the low-intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue, although expressed at a high intensity. The probe face had either the same or a different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same-identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent to the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.
Abstract:
Mismatch negativity (MMN) is a component of the event-related potential elicited by deviant auditory stimuli. It is presumed to index pre-attentive monitoring of changes in the auditory environment. MMN amplitude is smaller in groups of individuals with schizophrenia compared to healthy controls. We compared duration-deviant MMN in 16 recent-onset and 19 chronic schizophrenia patients versus age- and sex-matched controls. Reduced frontal MMN was found in both patient groups; it involved reduced hemispheric asymmetry and was correlated with Global Assessment of Functioning (GAF) and negative symptom ratings. A cortically-constrained LORETA analysis, incorporating anatomical data from each individual's MRI, was performed to generate a current source density (CSD) model of the MMN response over time. This model suggested MMN generation within a temporal, parietal and frontal network, which was right-hemisphere dominant only in controls. An exploratory analysis revealed reduced CSD in patients in superior and middle temporal cortex, inferior and superior parietal cortex, precuneus, anterior cingulate, and superior and middle frontal cortex. A region of interest (ROI) analysis was performed. For the early phase of the MMN, patients had reduced bilateral temporal and parietal response and no lateralisation in frontal ROIs. For late MMN, patients had reduced bilateral parietal response and no lateralisation in temporal ROIs. In patients, correlations revealed a link between GAF and the MMN response in parietal cortex. In controls, the frontal response onset was 17 ms later than the temporal and parietal response. In patients, onset latency of the MMN response was delayed in secondary, but not primary, auditory cortex. However, amplitude reductions were observed in both primary and secondary auditory cortex.
These latency delays may indicate relatively intact information processing upstream of the primary auditory cortex, but impaired primary auditory cortex or cortico-cortical or thalamo-cortical communication with higher auditory cortices as a core deficit in schizophrenia.
Abstract:
The reinforcing effects of aversive outcomes on avoidance behaviour are well established. However, their influence on perceptual processes is less well explored, especially during the transition from adolescence to adulthood. Using electroencephalography, we examined whether learning to actively or passively avoid harm can modulate early visual responses in adolescents and adults. The task included two avoidance conditions, active and passive, where two different warning stimuli predicted the imminent, but avoidable, presentation of an aversive tone. To avoid the aversive outcome, participants had to learn to emit an action (active avoidance) for one of the warning stimuli and omit an action for the other (passive avoidance). Both adults and adolescents performed the task with a high degree of accuracy. For both adolescents and adults, increased N170 event-related potential amplitudes were found for both the active and the passive warning stimuli compared with control conditions. Moreover, the potentiation of the N170 to the warning stimuli was stable and long lasting. Developmental differences were also observed; adolescents showed greater potentiation of the N170 component to danger signals. These findings demonstrate, for the first time, that learned danger signals in an instrumental avoidance task can influence early visual sensory processes in both adults and adolescents.
Abstract:
Emotionally arousing events can distort our sense of time. We used a mixed block/event-related fMRI design to establish the neural basis for this effect. Nineteen participants were asked to judge whether angry, happy and neutral facial expressions that varied in duration (from 400 to 1,600 ms) were closer in duration to a short or a long duration they had learnt previously. Time was overestimated for both angry and happy expressions compared to neutral expressions. For faces presented for 700 ms, facial emotion modulated activity in regions of the timing network (Wiener et al., NeuroImage 49(2):1728–1740, 2010), namely the right supplementary motor area (SMA) and the junction of the right inferior frontal gyrus and anterior insula (IFG/AI). Reaction times were slowest when faces were displayed for 700 ms, indicating increased decision-making difficulty. Taken together with existing electrophysiological evidence (Ng et al., Neuroscience, doi: 10.3389/fnint.2011.00077, 2011), the effects are consistent with the idea that facial emotion moderates temporal decision making and that the right SMA and right IFG/AI are key neural structures responsible for this effect.
Abstract:
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and were disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. Among both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.
Abstract:
The stop-signal paradigm is increasingly being used as a probe of response inhibition in basic and clinical neuroimaging research. The critical feature of this task is that a cued response is countermanded by a secondary 'stop-signal' stimulus offset from the first by a 'stop-signal delay'. Here we explored the role of task difficulty in the stop-signal task with the hypothesis that what is critical for successful inhibition is the time available for stopping, which we define as the difference between stop-signal onset and the expected response time (approximated by the reaction time from the previous trial). We also used functional magnetic resonance imaging (fMRI) to examine how the time available for stopping affects activity in the putative right inferior frontal gyrus and presupplementary motor area (right IFG-preSMA) network that is known to support stopping. While undergoing fMRI scanning, participants performed a stop-signal variant where the time available for stopping was kept approximately constant across participants, which enabled us to compare how the time available for stopping affected stop-signal task difficulty both within and between subjects. Importantly, all behavioural and neuroimaging data were consistent with previous findings. We found that the time available for stopping distinguished successful from unsuccessful inhibition trials, was independent of stop-signal delay, and affected successful inhibition depending upon individual stop-signal reaction time (SSRT). We also found that right IFG and adjacent anterior insula were more strongly activated during more difficult stopping. These findings may have critical implications for stop-signal studies that compare different patient or other groups using fixed stop-signal delays.
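The quantity defined above can be made concrete with a small sketch: the time available for stopping is the gap between stop-signal onset (the stop-signal delay, SSD) and the expected response time, approximated by the previous trial's reaction time; a stop is expected to succeed when the stop process (SSRT) completes within that gap. Function names and the success rule are illustrative assumptions based on the standard race-model account, not the authors' analysis code.

```python
# Illustrative sketch of the "time available for stopping" quantity.
# All times in milliseconds.

def time_available_for_stopping(prev_rt_ms, ssd_ms):
    """Expected response time (approximated by the previous trial's RT)
    minus stop-signal onset (the stop-signal delay)."""
    return prev_rt_ms - ssd_ms

def stop_predicted_successful(prev_rt_ms, ssd_ms, ssrt_ms):
    """Under a race-model view, stopping succeeds when the stop process
    (SSRT) finishes before the expected response."""
    return ssrt_ms < time_available_for_stopping(prev_rt_ms, ssd_ms)
```

For example, with a previous-trial RT of 500 ms and an SSD of 200 ms, 300 ms remain for stopping, so a participant with an SSRT of 250 ms would be expected to inhibit successfully, whereas at an SSD of 350 ms only 150 ms remain and inhibition would be expected to fail.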