982 results for JavaServer Faces
Abstract:
This chapter explores some of the implications of adopting a research approach that focuses on people and their livelihoods in the rice-wheat system of the Indo-Gangetic Plains. We draw on information from a study undertaken by the authors in Bangladesh and then consider the transferability of our findings to other situations. We conclude that if our research is to bridge the researcher-farmer interface, ongoing technical research must be supported by research that explores how institutional, policy, and communication strategies determine livelihood outcomes. The challenge that now faces researchers is to move beyond their involvement in participatory research to understand how to facilitate a process in which they provide information and products for others to test. Building capacity at various levels for openness in sharing information and products (seeing research as a public good for all) seems to be a prerequisite for more effective dissemination of the available information and technologies.
Abstract:
In this article, we present FACSGen 2.0, new animation software for creating static and dynamic three-dimensional facial expressions on the basis of the Facial Action Coding System (FACS). FACSGen permits total control over the action units (AUs), which can be animated at all levels of intensity and applied alone or in combination to an infinite number of faces. In two studies, we tested the validity of the software for the AU appearance defined in the FACS manual and the conveyed emotionality of FACSGen expressions. In Experiment 1, four FACS-certified coders evaluated the complete set of 35 single AUs and 54 AU combinations for AU presence or absence, appearance quality, intensity, and asymmetry. In Experiment 2, lay participants performed a recognition task on emotional expressions created with FACSGen software and rated the similarity of expressions displayed by human and FACSGen faces. Results showed good to excellent classification levels for all AUs by the four FACS coders, suggesting that the AUs are valid exemplars of FACS specifications. Lay participants' recognition rates for nine emotions were high, and human and FACSGen expressions were rated as highly similar. The findings demonstrate the effectiveness of the software in producing reliable and emotionally valid expressions, and suggest its application in numerous scientific areas, including perception, emotion, and clinical and neuroscience research.
Abstract:
The human mirror neuron system (hMNS) has been associated with various forms of social cognition and affective processing, including vicarious experience. It has also been proposed that a faulty hMNS may underlie some of the deficits seen in the autism spectrum disorders (ASDs). In the present study we set out to investigate whether emotional facial expressions could modulate a putative EEG index of hMNS activation (mu suppression) and, if so, whether this would differ according to the individual level of autistic traits [high versus low Autism Spectrum Quotient (AQ) score]. Participants were presented with 3 s films of actors opening and closing their hands (classic hMNS mu-suppression protocol) while simultaneously wearing happy, angry, or neutral expressions. Mu suppression was measured in the alpha and low beta bands. The low AQ group displayed greater low beta event-related desynchronization (ERD) to both angry and neutral expressions. The high AQ group displayed greater low beta ERD to angry than to happy expressions. There was also significantly more low beta ERD to happy faces for the low than for the high AQ group. In conclusion, an interesting interaction between AQ group and emotional expression revealed that hMNS activation can be modulated by emotional facial expressions and that this is differentiated according to individual differences in the level of autistic traits. The EEG index of hMNS activation (mu suppression) seems to be a sensitive measure of the variability in facial processing in typically developing individuals with high and low self-reported traits of autism.
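Mu suppression of the kind described above is conventionally quantified as event-related desynchronization: band power during the observation condition relative to a resting baseline, often expressed as a log ratio, with negative values indicating suppression. The sketch below is illustrative only; the synthetic signals, band edges, and function names are assumptions, not taken from the study:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Mean spectral power of `signal` within the [low, high] Hz band (FFT-based)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return power[mask].mean()

def mu_suppression(event, baseline, fs, band=(8.0, 13.0)):
    """Log ratio of mu-band power: negative values indicate suppression (ERD)."""
    return np.log(band_power(event, fs, *band) /
                  band_power(baseline, fs, *band))

# Synthetic demo: a 10 Hz oscillation whose amplitude drops during observation.
fs = 250
t = np.arange(0, 3, 1 / fs)
rng = np.random.default_rng(0)
baseline = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
event = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(mu_suppression(event, baseline, fs))  # negative => mu suppression
```

The same computation with the low beta band (roughly 13-20 Hz) in place of the 8-13 Hz default would correspond to the low beta ERD measure reported in the abstract.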
Abstract:
Emotional reactivity and the time taken to recover, particularly from negative, stressful events, are inextricably linked, and both are crucial for maintaining well-being. It is unclear, however, to what extent emotional reactivity during stimulus onset predicts the time course of recovery after stimulus offset. To address this question, 25 participants viewed arousing (negative and positive) and neutral pictures from the International Affective Picture System (IAPS) followed by task-relevant face targets, which were to be gender categorized. Faces were presented early (400-1500 ms) or late (2400-3500 ms) after picture offset to capture the time course of recovery from emotional stimuli. Measures of reaction time (RT), as well as face-locked N170 and P3 components, were taken as indicators of the impact of lingering emotion on attentional facilitation or interference. Electrophysiological effects revealed that negative and positive images facilitated face-target processing on the P3 component, regardless of temporal interval. At the individual level, increased reactivity to (1) negative pictures, quantified via the IAPS picture-locked Late Positive Potential (LPP), predicted larger attentional interference on the face-locked P3 component for faces presented in the late time window after picture offset, and (2) positive pictures, likewise indexed by the LPP, predicted larger facilitation on the face-locked P3 component for faces presented in the earlier time window after picture offset. These results suggest that subsequent processing is still impacted up to 3500 ms after the offset of negative pictures and up to 1500 ms after the offset of positive pictures for individuals reacting more strongly to these pictures, respectively. Such findings emphasize the importance of individual differences in reactivity when predicting the temporality of emotional recovery.
The current experimental model provides a novel basis for future research aiming to identify profiles of adaptive and maladaptive recovery.
Abstract:
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye-fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to right FEF increased the blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended to. These TMS effects on the BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.
Abstract:
Three experiments examined the cultural relativity of emotion recognition using the visual search task. Caucasian-English and Japanese participants were required to search for an angry or happy discrepant face target against an array of competing distractor faces. Both cultural groups performed the task with displays that consisted of Caucasian and Japanese faces in order to investigate the effects of racial congruence on emotion detection performance. Under high perceptual load conditions, both cultural groups detected the happy face more efficiently than the angry face. When perceptual load was reduced such that target detection could be achieved by feature-matching, the English group continued to show a happiness advantage in search performance that was more strongly pronounced for other race faces. Japanese participants showed search time equivalence for happy and angry targets. Experiment 3 encouraged participants to adopt a perceptual based strategy for target detection by removing the term 'emotion' from the instructions. Whilst this manipulation did not alter the happiness advantage displayed by our English group, it reinstated it for our Japanese group, who showed a detection advantage for happiness only for other race faces. The results demonstrate cultural and linguistic modifiers on the perceptual saliency of the emotional signal and provide new converging evidence from cognitive psychology for the interactionist perspective on emotional expression recognition.
Abstract:
As people get older, they tend to remember more positive than negative information. This age-by-valence interaction has been called “positivity effect.” The current study addressed the hypotheses that baseline functional connectivity at rest is predictive of older adults' brain activity when learning emotional information and their positivity effect in memory. Using fMRI, we examined the relationship among resting-state functional connectivity, subsequent brain activity when learning emotional faces, and individual differences in the positivity effect (the relative tendency to remember faces expressing positive vs. negative emotions). Consistent with our hypothesis, older adults with a stronger positivity effect had increased functional coupling between amygdala and medial PFC (MPFC) during rest. In contrast, younger adults did not show the association between resting connectivity and memory positivity. A similar age-by-memory positivity interaction was also found when learning emotional faces. That is, memory positivity in older adults was associated with (a) enhanced MPFC activity when learning emotional faces and (b) increased negative functional coupling between amygdala and MPFC when learning negative faces. In contrast, memory positivity in younger adults was related to neither enhanced MPFC activity to emotional faces, nor MPFC–amygdala connectivity to negative faces. Furthermore, stronger MPFC–amygdala connectivity during rest was predictive of subsequent greater MPFC activity when learning emotional faces. Thus, emotion–memory interaction in older adults depends not only on the task-related brain activity but also on the baseline functional connectivity.
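Resting-state functional coupling between two regions such as the amygdala and MPFC is commonly quantified as the Pearson correlation of their ROI-averaged BOLD time series. A minimal sketch under that standard definition; the signals are synthetic and the variable names are illustrative, not taken from the study:

```python
import numpy as np

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two ROI-averaged BOLD time series.
    Values near +1 indicate strong positive coupling; negative values
    indicate anticorrelated activity."""
    return float(np.corrcoef(ts_a, ts_b)[0, 1])

# Synthetic example: a shared slow fluctuation plus independent noise.
rng = np.random.default_rng(1)
shared = np.sin(np.linspace(0, 6 * np.pi, 200))
amygdala_ts = shared + 0.5 * rng.standard_normal(200)
mpfc_ts = shared + 0.5 * rng.standard_normal(200)
print(functional_connectivity(amygdala_ts, mpfc_ts))  # strongly positive
```

In practice such correlations are computed after standard preprocessing (motion correction, filtering, nuisance regression), which the sketch assumes has already happened.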
Abstract:
There is increasing evidence that Williams syndrome (WS) is associated with elevated anxiety that is non-social in nature, including generalised anxiety and fears. To date, very little research has examined the cognitive processes associated with this anxiety. In the present research, attentional bias for non-social threatening images in WS was examined using a dot-probe paradigm. Participants were 16 individuals with WS aged between 13 and 34 years and two groups of typically developing controls matched to the WS group on chronological age and attentional control ability, respectively. The WS group exhibited a significant attention bias towards threatening images. In contrast, no bias was found for the group matched on attentional control, and a slight bias away from threat was found in the chronological-age-matched group. The results are contrasted with recent findings suggesting that individuals with WS do not show an attention bias for threatening faces and are discussed in relation to neuroimaging research showing elevated amygdala activation in response to threatening non-social scenes in WS.
Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
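In dot-probe tasks like the one described above, an attention-bias score is conventionally computed from reaction times: mean RT when the probe replaces the neutral face minus mean RT when it replaces the emotional face, so that positive scores indicate attention drawn toward the emotion. A minimal sketch of that convention; the trial fields, values, and function name are assumptions, not taken from the study:

```python
from statistics import mean

def bias_score(trials):
    """Dot-probe attention bias: mean RT on incongruent trials (probe behind
    the neutral face) minus mean RT on congruent trials (probe behind the
    emotional face). Positive => attention drawn toward the emotional face."""
    congruent = [t["rt_ms"] for t in trials if t["probe_at"] == "emotional"]
    incongruent = [t["rt_ms"] for t in trials if t["probe_at"] == "neutral"]
    return mean(incongruent) - mean(congruent)

# Hypothetical trials in which the probe is detected faster behind the
# emotional (e.g. happy) face than behind the neutral face.
trials = [
    {"probe_at": "emotional", "rt_ms": 410},
    {"probe_at": "emotional", "rt_ms": 430},
    {"probe_at": "neutral", "rt_ms": 470},
    {"probe_at": "neutral", "rt_ms": 490},
]
print(bias_score(trials))  # 60.0 => bias toward the emotional face
```

A greater bias for happy faces in the WS group, as reported above, would correspond to a larger positive score on happy-neutral trials relative to both control groups.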
Abstract:
The mean wind direction within an urban canopy changes with height when the incoming flow is not orthogonal to obstacle faces. This wind-turning effect is induced by complex processes, and its modelling in urban-canopy (UC) parametrizations is difficult. Here we focus on the analysis of the spatially-averaged flow properties over an aligned array of cubes and their variation with incoming wind direction. For this purpose, Reynolds-averaged Navier-Stokes simulations, previously compared against direct numerical simulation results for a reduced number of incident wind directions, are used. The drag formulation of a UC parametrization is modified and different drag coefficients are tested in order to reproduce the wind-turning effect within the canopy for oblique wind directions. The simulations carried out for a UC parametrization in one-dimensional mode indicate that a height-dependent drag coefficient is needed to capture this effect.
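In canopy parametrizations of this kind, obstacle drag typically enters the momentum equations as a sink of the form -Cd(z) a(z) |U| u_i, and the abstract's conclusion amounts to letting Cd vary with height. A schematic illustration of that drag term under assumed profiles; the numerical values and the linear Cd(z) profile are illustrative, not taken from the simulations:

```python
import math

def drag_sink(u, v, cd, frontal_area_density):
    """Canopy-drag momentum sink per unit mass: -Cd * a * |U| * u_i for each
    horizontal component. With a height-dependent Cd, the sink decelerates the
    spatially averaged wind differently at each level, which is how a UC
    parametrization can represent wind turning for oblique flow."""
    speed = math.hypot(u, v)
    return (-cd * frontal_area_density * speed * u,
            -cd * frontal_area_density * speed * v)

# Illustrative column: an assumed Cd decreasing linearly with height inside
# a canopy of height h, applied to the same oblique wind at three levels.
h = 20.0   # canopy height (m), illustrative
a = 0.25   # frontal area density (1/m), illustrative
for z in (5.0, 10.0, 15.0):
    cd = 1.2 * (1.0 - z / h)  # assumed height-dependent drag coefficient
    du, dv = drag_sink(3.0, 1.0, cd, a)
    print(f"z={z:5.1f} m  du={du:7.3f}  dv={dv:7.3f}")
```

Because the sink scales with Cd, stronger drag near the ground retards the along-obstacle component more at low levels, so the spatially averaged wind vector rotates with height.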
Abstract:
This chapter assesses the recent dramatic rise of the Greek Golden Dawn by examining its background, electoral base and policies/agenda.
Abstract:
Infant faces elicit early, specific activity in the orbitofrontal cortex (OFC), a key cortical region for reward and affective processing. A test of the causal relationship between infant facial configuration and OFC activity is provided by naturally occurring disruptions to the face structure. One such disruption is cleft lip, a small change to one facial feature, shown to disrupt parenting. Using magnetoencephalography, we investigated neural responses to infant faces with cleft lip compared with typical infant and adult faces. We found activity in the right OFC at 140 ms in response to typical infant faces but diminished activity to infant faces with cleft lip or adult faces. Activity in the right fusiform face area was of similar magnitude for typical adult and infant faces but was significantly lower for infant faces with cleft lip. This is the first evidence that a minor change to the infant face can disrupt neural activity potentially implicated in caregiving.
Abstract:
Smart healthcare is a complex domain for systems integration owing to the human and technical factors and heterogeneous data sources involved. As part of a smart city, it is an area in which clinical functions require smart multi-system collaboration for effective communication among departments, and radiology is one of the areas that relies most heavily on intelligent information integration and communication. It therefore faces many challenges to integration and interoperability, such as information collision, heterogeneous data sources, policy obstacles, and procedure mismanagement. The purpose of this study is to analyse the data, semantic, and pragmatic interoperability of systems integration in a radiology department, and to develop a pragmatic interoperability framework to guide the integration. We selected an ongoing project at a local hospital as our case study. The project aims to achieve data sharing and interoperability among Radiology Information Systems (RIS), Electronic Patient Records (EPR), and Picture Archiving and Communication Systems (PACS). Qualitative data collection and analysis methods were used. The data sources consisted of documentation (including publications and internal working papers), one year of non-participant observations, and 37 interviews with radiologists, clinicians, directors of IT services, referring clinicians, radiographers, receptionists, and secretaries. We identified four primary phases in the data analysis process for the case study: requirements and barriers identification, integration approach, interoperability measurements, and knowledge foundations. Each phase is discussed and supported by qualitative data. Through the analysis we also develop a pragmatic interoperability framework that summarizes the empirical findings and proposes recommendations for guiding integration in the radiology context.
Abstract:
The arousal-biased competition model predicts that arousal increases the gain on neural competition between stimulus representations. Thus, the model predicts that arousal simultaneously enhances processing of salient stimuli and impairs processing of relatively less-salient stimuli. We tested this model with a simple dot-probe task. On each trial, participants were simultaneously exposed to one face image as a salient cue stimulus and one place image as a non-salient stimulus. A border around the face cue location further increased its bottom-up saliency. Before these visual stimuli were shown, one of two tones played: one that predicted a shock (increasing arousal) or one that did not. An arousal-by-saliency interaction in category-specific brain regions (fusiform face area for salient faces and parahippocampal place area for non-salient places) indicated that brain activation associated with processing the salient stimulus was enhanced under arousal whereas activation associated with processing the non-salient stimulus was suppressed under arousal. This is the first functional magnetic resonance imaging study to demonstrate that arousal can enhance information processing for prioritized stimuli while simultaneously impairing processing of non-prioritized stimuli. Thus, it goes beyond previous research to show that arousal does not uniformly enhance perceptual processing, but instead does so selectively in ways that optimize attention to highly salient stimuli.
Abstract:
Research on the flexibility of race-based processing offers divergent results. Some studies find that race affects processing in an obligatory fashion. Other studies suggest dramatic flexibility. The current study attempts to clarify this divergence by examining a process that may mediate flexibility in race-based processing: the engagement of visual attention. In this study, White participants completed an exogenous cuing task designed to measure attention to White and Black faces. Participants in the control condition showed a pronounced bias to attend to Black faces. Critically, participants in a goal condition were asked to process a feature of the stimulus that was unrelated to race. The induction of this goal eliminated differential attention to Black faces, suggesting that attentional engagement responds flexibly to top-down goals, rather than obligatorily to bottom-up racial cues.