876 results for sensory authenticity
Abstract:
Emotionally arousing events can distort our sense of time. We used a mixed block/event-related fMRI design to establish the neural basis for this effect. Nineteen participants were asked to judge whether angry, happy and neutral facial expressions that varied in duration (from 400 to 1,600 ms) were closer in duration to a short or a long duration they had learnt previously. Time was overestimated for both angry and happy expressions compared to neutral expressions. For faces presented for 700 ms, facial emotion modulated activity in regions of the timing network (Wiener et al., NeuroImage 49(2):1728–1740, 2010), namely the right supplementary motor area (SMA) and the junction of the right inferior frontal gyrus and anterior insula (IFG/AI). Reaction times were slowest when faces were displayed for 700 ms, indicating increased decision-making difficulty. Taken together with existing electrophysiological evidence (Ng et al., Neuroscience, doi: 10.3389/fnint.2011.00077, 2011), the effects are consistent with the idea that facial emotion moderates temporal decision making and that the right SMA and right IFG/AI are key neural structures responsible for this effect.
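The bisection analysis behind this overestimation effect can be illustrated with a short sketch. The code below is not the authors' analysis: the proportions of "long" responses, the logistic form of the psychometric function and the starting parameter values are all hypothetical, chosen only to show how a bisection point is estimated per emotion condition and why a lower bisection point indicates subjective lengthening of duration.

```python
# Minimal sketch of a temporal bisection analysis (all data invented).
import numpy as np
from scipy.optimize import curve_fit

durations = np.array([400, 600, 800, 1000, 1200, 1400, 1600])  # ms

# Hypothetical proportion of "long" responses per emotion condition
p_long = {
    "neutral": np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97]),
    "angry":   np.array([0.08, 0.22, 0.45, 0.65, 0.82, 0.93, 0.98]),
}

def logistic(d, bp, slope):
    """Psychometric function: p('long') as a function of duration d.
    bp is the bisection point (duration judged 'long' 50% of the time)."""
    return 1.0 / (1.0 + np.exp(-(d - bp) / slope))

for condition, p in p_long.items():
    (bp, slope), _ = curve_fit(logistic, durations, p, p0=[1000, 200])
    print(f"{condition:8s} bisection point ~ {bp:.0f} ms")

# A lower bisection point for angry than for neutral faces indicates that
# emotional expressions were subjectively lengthened (time overestimation).
```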
Abstract:
There is substantial evidence for facial emotion recognition (FER) deficits in autism spectrum disorder (ASD). The extent of this impairment, however, remains unclear, and there is some suggestion that clinical groups might benefit from the use of dynamic rather than static images. High-functioning individuals with ASD (n = 36) and typically developing controls (n = 36) completed a computerised FER task involving static and dynamic expressions of the six basic emotions. The ASD group showed poorer overall performance in identifying anger and disgust and was disadvantaged by dynamic (relative to static) stimuli when presented with sad expressions. In both groups, however, dynamic stimuli appeared to improve recognition of anger. This research provides further evidence of specific impairment in the recognition of negative emotions in ASD, but argues against any broad advantages associated with the use of dynamic displays.
Abstract:
The stop-signal paradigm is increasingly being used as a probe of response inhibition in basic and clinical neuroimaging research. The critical feature of this task is that a cued response is countermanded by a secondary ‘stop-signal’ stimulus offset from the first by a ‘stop-signal delay’. Here we explored the role of task difficulty in the stop-signal task with the hypothesis that what is critical for successful inhibition is the time available for stopping, which we define as the difference between stop-signal onset and the expected response time (approximated by the reaction time from the previous trial). We also used functional magnetic resonance imaging (fMRI) to examine how the time available for stopping affects activity in the putative right inferior frontal gyrus and presupplementary motor area (right IFG-preSMA) network that is known to support stopping. While undergoing fMRI scanning, participants performed a stop-signal variant in which the time available for stopping was kept approximately constant across participants, which enabled us to compare how the time available for stopping affected stop-signal task difficulty both within and between subjects. Importantly, all behavioural and neuroimaging data were consistent with previous findings. We found that the time available for stopping distinguished successful from unsuccessful inhibition trials, was independent of stop-signal delay, and affected successful inhibition depending upon the individual stop-signal reaction time (SSRT). We also found that the right IFG and adjacent anterior insula were more strongly activated during more difficult stopping. These findings may have critical implications for stop-signal studies that compare different patient or other groups using fixed stop-signal delays.
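The paper's central quantity, the time available for stopping, reduces to a simple per-trial subtraction. The following sketch uses invented trial data and a hypothetical race-model check; it illustrates the definition given in the abstract, not the study's analysis pipeline.

```python
# Sketch of the "time available for stopping" computation (data invented).
import numpy as np

previous_go_rt = np.array([520, 480, 610, 455, 540])  # ms, hypothetical
ssd            = np.array([200, 250, 200, 300, 150])  # stop-signal delay, ms
inhibited      = np.array([True, False, True, False, True])

# Time available = expected response time (previous go RT) minus stop-signal onset
time_available = previous_go_rt - ssd
print("Time available for stopping (ms):", time_available)

# Under a race-model account, stopping should succeed when the time available
# exceeds the individual's stop-signal reaction time (SSRT).
print("Mean time available, successful stops:", time_available[inhibited].mean())
print("Mean time available, failed stops:    ", time_available[~inhibited].mean())
```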
Abstract:
Because moving depictions of face emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage ‘authentic’ mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment wherein participants made emotion discrimination and sex discrimination judgements to static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task-by-motion interaction. These regions were most active during emotion judgements to static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.
Abstract:
The current research was designed to establish whether individual differences in timing performance predict neural activation in the areas that subserve the perception of short durations ranging between 400 and 1600 milliseconds. Seventeen participants completed both a temporal bisection task and a control task in a mixed fMRI design. In keeping with previous research, there was increased activation in a network of regions typically active during time perception, including the right supplementary motor area (SMA), right pre-SMA and basal ganglia (including the putamen and right pallidum). Furthermore, correlations between neural activity in the right inferior frontal gyrus and SMA and timing performance corroborate the results of a recent meta-analysis and are further evidence that the SMA forms part of a neural clock that is responsible for the accumulation of temporal information. Specifically, subjective lengthening of perceived duration was associated with increased activation in both the right SMA (and right pre-SMA) and the right inferior frontal gyrus.
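A brain–behaviour correlation of the kind reported here is typically a per-participant Pearson correlation between a timing measure and an extracted activation estimate. The sketch below is purely illustrative: bisection_point and sma_beta are invented stand-ins for the behavioural measure and an SMA parameter estimate, and the negative relationship is simulated only to mirror the direction of the reported effect.

```python
# Illustrative brain-behaviour correlation (all values simulated).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
bisection_point = rng.normal(950, 80, size=17)                 # ms, invented
sma_beta = -0.004 * bisection_point + rng.normal(0, 0.2, 17)   # invented ROI estimate

r, p = pearsonr(bisection_point, sma_beta)
print(f"r = {r:.2f}, p = {p:.3f}")
# A negative correlation here would mean that participants with lower bisection
# points (i.e., who subjectively lengthen durations) show greater SMA activation,
# consistent with the accumulator interpretation described above.
```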
Abstract:
Isolating processes within the brain that are specific to human behavior is a key goal for social neuroscience. The current research was an attempt to test whether recent findings of enhanced negative ERPs in response to unexpected human gaze are unique to eye gaze stimuli by comparing the effects of gaze cues with the effects of an arrow cue. ERPs were recorded while participants (N = 30) observed a virtual actor or an arrow that gazed (or pointed) either toward (object congruent) or away from (object incongruent) a flashing checkerboard. An enhanced negative ERP (N300) in response to object incongruent compared to object congruent trials was recorded for both eye gaze and arrow stimuli. The findings are interpreted as reflecting a domain general mechanism for detecting unexpected events.
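The congruency effect described here is conventionally quantified as a difference wave. The sketch below is a generic illustration with simulated single-channel data, not the study's pipeline; the sampling rate, epoch limits and the N300 window (250–350 ms) are assumptions.

```python
# Difference-wave sketch for an incongruent-minus-congruent N300 effect
# (simulated single-channel trials; not the study's data or pipeline).
import numpy as np

fs = 500                         # assumed sampling rate, Hz
t = np.arange(-0.1, 0.6, 1/fs)   # epoch time axis, seconds

rng = np.random.default_rng(1)
# trials x timepoints; incongruent trials carry an extra negative deflection
# around 300 ms to mimic the reported N300 enhancement
congruent   = rng.normal(0, 1, (100, t.size))
incongruent = rng.normal(0, 1, (100, t.size)) - 2.0 * np.exp(-((t - 0.3) / 0.05) ** 2)

diff_wave = incongruent.mean(axis=0) - congruent.mean(axis=0)
window = (t >= 0.25) & (t <= 0.35)
print(f"Mean N300 difference (incongruent - congruent): {diff_wave[window].mean():.2f} uV")
```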
Abstract:
Inhibitory control deficits are well documented in schizophrenia, supported by impairment in an established measure of response inhibition, the stop-signal reaction time (SSRT). We investigated the neural basis of this impairment by comparing schizophrenia patients and controls matched for age, sex and education on behavioural, functional magnetic resonance imaging (fMRI) and event-related potential (ERP) indices of stop-signal task performance. Compared to controls, patients exhibited slower SSRT and reduced right inferior frontal gyrus (rIFG) activation, but rIFG activation correlated with SSRT in both groups. Go-stimulus and stop-signal ERP components (N1/P3) were smaller in patients, and the peak latencies of the stop-signal N1 and P3 were delayed, indicating impairment early in stop-signal processing. Additionally, response-locked lateralised readiness potentials indicated that response preparation was prolonged in patients. An inability to engage the rIFG may underlie slowed inhibition in patients; however, multiple spatiotemporal irregularities in the networks underpinning stop-signal task performance may contribute to this deficit.
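The lateralised readiness potential mentioned here has a standard double-subtraction derivation over electrodes C3 and C4 (contralateral minus ipsilateral to the responding hand, averaged over hands). The sketch below illustrates that derivation with simulated waveforms; the electrode pair, the onset criterion and all numbers are assumptions, not values from the study.

```python
# Response-locked LRP derivation sketch (simulated averaged ERPs).
import numpy as np

t = np.arange(-0.5, 0.1, 0.002)          # seconds relative to the response
rng = np.random.default_rng(2)

# Invented waveforms: a negative-going ramp over the motor cortex
# contralateral to the responding hand, beginning ~300 ms before the response
ramp = -3.0 * np.clip((t + 0.3) / 0.3, 0, 1)
c3_right_hand = ramp + rng.normal(0, 0.2, t.size)   # C3 contralateral to right hand
c4_right_hand = rng.normal(0, 0.2, t.size)
c4_left_hand  = ramp + rng.normal(0, 0.2, t.size)   # C4 contralateral to left hand
c3_left_hand  = rng.normal(0, 0.2, t.size)

# Double subtraction: average the contralateral-minus-ipsilateral difference over hands
lrp = ((c3_right_hand - c4_right_hand) + (c4_left_hand - c3_left_hand)) / 2.0
onset = t[np.argmax(lrp < -1.0)]   # crude onset estimate: first crossing of -1 uV
print(f"Approximate LRP onset: {-onset * 1000:.0f} ms before the response")
```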
Abstract:
Sleep loss, widespread in today’s society and associated with a number of clinical conditions, has a detrimental effect on a variety of cognitive domains, including attention. This study examined the sequelae of sleep deprivation upon BOLD fMRI activation during divided attention. Twelve healthy males completed two randomized sessions: one after 27 h of sleep deprivation and one after a normal night of sleep. During each session, BOLD fMRI was measured while subjects completed a cross-modal divided attention task (visual and auditory). After normal sleep, increased BOLD activation was observed bilaterally in the superior frontal gyrus and the inferior parietal lobe during divided attention performance. Subjects reported feeling significantly sleepier in the sleep deprivation session, and there was a trend towards poorer divided attention task performance. Sleep deprivation led to a down-regulation of activation in the left superior frontal gyrus, possibly reflecting an attenuation of top-down control mechanisms on the attentional system. These findings have implications for understanding the neural correlates of divided attention and the neurofunctional changes that occur in individuals who are sleep deprived.
Abstract:
Collaboration between neuroscience and architecture is emerging as a key field of research, as demonstrated in recent times by the development of the Academy of Neuroscience for Architecture (ANFA) and other societies. Neurological enquiry into affect and spatial experience from a design perspective remains in many instances uncharted. Research using portable functional near-infrared spectroscopy (fNIRS), an emerging non-invasive neuroimaging technique, is proving convincing in its ability to detect emotional responses to visual, spatio-auditory and task-based stimuli. This innovation provides a firm basis for potentially tracking cortical activity in the appraisal of architectural environments. Additionally, recent neurological studies have sought to explore the manifold sensory abilities of the visually impaired to better understand spatial perception in general. Key studies reveal that early blind participants perform as well as sighted participants owing to higher auditory and somatosensory spatial acuity. Studies also report pleasant and unpleasant emotional responses within certain interior environments, revealing a deeper perceptual sensitivity than would be expected. Comparative fNIRS studies between sighted and blind participants concerning spatial experience have the potential to provide greater understanding of emotional responses to architectural environments. Supported by contemporary theories of architectural aesthetics, this paper presents a case for the use of portable fNIRS imaging in the assessment of emotional responses to spatial environments experienced by both blind and sighted individuals. The aim of the paper is to outline the implications of fNIRS for spatial research and practice within the field of architecture and to point to a potential taxonomy of particular formations of space and affect.
Abstract:
Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. Three matching tasks (i.e., facial emotion matching, facial identity matching and butterfly wing matching) were developed to include stimuli of a similar level of discriminability and were equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5–15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.
Abstract:
The Thatcher Illusion is generally discussed as a phenomenon related to face perception. Nonetheless, we show that compellingly strong Thatcher Effects can be elicited with non-face stimuli, provided that the stimulus set has a familiar standard configuration and a canonical view. Apparently, the Thatcher Illusion is not about faces, nor is it about Thatcher. It just might, however, be about Britain...
Abstract:
As a social species in a constantly changing environment, humans rely heavily on the informational richness and communicative capacity of the face. Thus, understanding how the brain processes information about faces in real time is of paramount importance. The N170 is a high-temporal-resolution electrophysiological index of the brain's early response to visual stimuli that is reliably elicited in carefully controlled laboratory-based studies. Although the N170 has often been reported to be of greatest amplitude to faces, there has been debate regarding whether this effect might be an artifact of certain aspects of the controlled experimental stimulation schedules and materials. To investigate whether the N170 can be identified in more realistic conditions, with highly variable and cluttered visual images and accompanying auditory stimuli, we recorded EEG 'in the wild' while participants watched pop videos. Scene-cuts to faces generated a clear N170 response, and this was larger than the N170 to transitions where the videos cut to non-face stimuli. Within participants, wild-type face N170 amplitudes were moderately correlated with those observed in a typical laboratory experiment. Thus, we demonstrate that the face N170 is a robust and ecologically valid phenomenon and not an artifact arising as an unintended consequence of some property of the more typical laboratory paradigm.
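The 'in the wild' approach amounts to epoching continuous EEG around scene-cut events and measuring the N170 in a post-cut window. The sketch below illustrates that logic with synthetic data; the sampling rate, event times, epoch limits and 140–200 ms measurement window are assumptions rather than the authors' parameters.

```python
# Epoch continuous EEG around scene-cut onsets and measure an N170-like minimum
# (synthetic signal and made-up event times; illustration only).
import numpy as np

fs = 250                                                  # assumed sampling rate, Hz
eeg = np.random.default_rng(3).normal(0, 5, fs * 180)     # 3 min of fake EEG (uV)
face_cut_samples = np.array([10, 35, 72, 110, 150]) * fs  # invented cut times (s -> samples)

pre, post = int(0.2 * fs), int(0.5 * fs)                  # epoch: -200 ms to +500 ms
epochs = np.stack([eeg[s - pre:s + post] for s in face_cut_samples])
epochs -= epochs[:, :pre].mean(axis=1, keepdims=True)     # baseline correction

t = np.arange(-pre, post) / fs
erp = epochs.mean(axis=0)
n170_window = (t >= 0.14) & (t <= 0.20)
print(f"N170 amplitude to face cuts: {erp[n170_window].min():.2f} uV")
```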
Abstract:
Wi-Fi is a commonly available source of localization information in urban environments but is challenging to integrate into conventional mapping architectures. Current state-of-the-art probabilistic Wi-Fi SLAM algorithms are limited by spatial resolution and an inability to remove the accumulation of rotational error, inherent limitations of the Wi-Fi architecture. In this paper we leverage the low-quality sensory requirements and coarse metric properties of RatSLAM to localize using Wi-Fi fingerprints. To further improve performance, we present a novel sensor fusion technique that integrates camera and Wi-Fi to improve localization specificity, and we use compass sensor data to remove orientation drift. We evaluate the algorithms in diverse real-world indoor and outdoor environments, including an office floor, a university campus and a visually aliased circular building loop. The algorithms produce topologically correct maps that are superior to those produced using only a single sensor modality.
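One way Wi-Fi fingerprints can feed a RatSLAM-style system is as a place-recognition signal: each stored place keeps a map of access-point signal strengths, and the current scan is matched against those templates. The sketch below illustrates this idea only; the Fingerprint type, the similarity function and the 30 dB normalisation are hypothetical choices, not the paper's algorithm.

```python
# Wi-Fi fingerprint matching sketch for place recognition (hypothetical data).
from typing import Dict

Fingerprint = Dict[str, float]   # BSSID -> received signal strength (dBm)

def fingerprint_similarity(current: Fingerprint, template: Fingerprint) -> float:
    """Return a similarity in [0, 1]; 1 means identical RSSI on all shared APs."""
    shared = set(current) & set(template)
    if not shared:
        return 0.0
    # Mean absolute RSSI difference over shared APs, squashed to [0, 1]
    mad = sum(abs(current[b] - template[b]) for b in shared) / len(shared)
    return max(0.0, 1.0 - mad / 30.0)    # 30 dB mean difference -> zero similarity

# Invented scans: the current reading best matches stored place "corridor_A"
current_scan = {"aa:bb:01": -48.0, "aa:bb:02": -67.0, "aa:bb:03": -80.0}
places = {
    "corridor_A": {"aa:bb:01": -50.0, "aa:bb:02": -65.0},
    "lab_B":      {"aa:bb:01": -75.0, "aa:bb:03": -55.0},
}
best = max(places, key=lambda p: fingerprint_similarity(current_scan, places[p]))
print("Best matching place:", best)
```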
Abstract:
Pavlovian fear conditioning is an evolutionarily conserved and extensively studied form of associative learning and memory. In mammals, the lateral amygdala (LA) is an essential locus for Pavlovian fear learning and memory. Despite significant progress in unraveling the cellular mechanisms responsible for fear conditioning, very little is known about the anatomical organization of neurons encoding fear conditioning in the LA. One key question is how fear conditioning to different sensory stimuli is organized in LA neuronal ensembles. Here we show that Pavlovian fear conditioning, formed through either the auditory or the visual sensory modality, activates a similar density of LA neurons expressing a learning-induced phosphorylated extracellular signal-regulated kinase (p-ERK1/2). While the size of the neuron population specific to either memory was similar, the anatomical distribution differed. Several discrete sites in the LA contained a small but significant number of p-ERK1/2-expressing neurons specific to either sensory modality. The sites were anatomically localized to different levels of the longitudinal plane and were independent of both memory strength and the relative size of the activated neuronal population, suggesting that some portion of the memory trace for auditory and visually cued fear conditioning is allocated differently in the LA. Presenting the visual stimulus by itself did not activate the same p-ERK1/2 neuron density or pattern, confirming that the novelty of light alone cannot account for the specific pattern of activated neurons after visual fear conditioning. Together, these findings reveal an anatomical distribution of visual and auditory fear conditioning at the level of neuronal ensembles in the LA.