990 results for Emotion Processing
Abstract:
One of the fundamental problems with image processing of petrographic thin sections is that the appearance (colour/intensity) of a mineral grain will vary with the orientation of the crystal lattice to the preferred direction of the polarizing filters on a petrographic microscope. This makes it very difficult to determine grain boundaries, grain orientation and mineral species from a single captured image. To overcome this problem, the Rotating Polarizer Stage was used to replace the fixed polarizer and analyzer on a standard petrographic microscope. The Rotating Polarizer Stage rotates the polarizers while the thin section remains stationary, allowing for better data gathering possibilities. Instead of capturing a single image of a thin section, six composite data sets are created by rotating the polarizers through 90° (or 180° if quartz c-axis measurements need to be taken) in both plane and cross polarized light. The composite data sets can be viewed as separate images and consist of the average intensity image, the maximum intensity image, the minimum intensity image, the maximum position image, the minimum position image and the gradient image. The overall strategy used by the image processing system is to gather the composite data sets, determine the grain boundaries using the gradient image, classify the different mineral species present using the minimum and maximum intensity images and then perform measurements of grain shape and, where possible, partial crystallographic orientation using the maximum intensity and maximum position images.
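As a concrete illustration of how such composite data sets might be assembled, the following Python sketch derives all six from a stack of frames captured at successive polarizer angles. The array shapes, angle step, and gradient measure are assumptions for illustration, not details of the original system.

```python
import numpy as np

def composite_datasets(frames, angles_deg):
    """Derive the six composite images from a stack of polarizer-rotation frames.

    frames:     array of shape (n_angles, H, W), greyscale intensity
                captured at each polarizer rotation step.
    angles_deg: 1-D array giving the polarizer angle of each frame.
    """
    frames = np.asarray(frames, dtype=float)
    angles = np.asarray(angles_deg, dtype=float)

    avg_intensity = frames.mean(axis=0)           # average intensity image
    max_intensity = frames.max(axis=0)            # maximum intensity image
    min_intensity = frames.min(axis=0)            # minimum intensity image
    max_position = angles[frames.argmax(axis=0)]  # angle at which each pixel peaks
    min_position = angles[frames.argmin(axis=0)]  # angle at which each pixel is darkest

    # Gradient image: edge strength of the average image, a simple
    # stand-in for whatever edge measure the original system used.
    gy, gx = np.gradient(avg_intensity)
    gradient = np.hypot(gx, gy)

    return (avg_intensity, max_intensity, min_intensity,
            max_position, min_position, gradient)

# Example: 19 frames sweeping 0-90 degrees in 5-degree steps (placeholder data).
frames = np.random.rand(19, 480, 640)
outputs = composite_datasets(frames, np.arange(0, 95, 5))
```

Grain-boundary detection would then threshold the gradient image, while the min/max intensity images feed mineral classification, as outlined above.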
Abstract:
The purpose of this multiple case study was 1) to explore the effectiveness of an emotions recognition program for preschoolers with Autism Spectrum Disorders (ASD), and 2) to investigate one parent's perception of the emotions program. To address these objectives, the emotion unit scores of 7 preschoolers with ASD aged 3 to 5 years old (1 female, 6 males) were graphed and analyzed using visual inspection. In addition, the mother of 1 participant was interviewed to explore her perceptions of the emotions program and emotional learning. Overall, results revealed that participants' emotion recognition scores increased over the course of the emotions unit. The parent reported improvements in her son's expression and understanding of emotion, but noted that he continued to have difficulty with regulation of emotion. Implications for theory, education, and future research are discussed.
Abstract:
Individuals who have sustained a traumatic brain injury (TBI) often complain of trouble sleeping and daytime fatigue, but little is known about the neurophysiological underpinnings of these sleep difficulties. The fragile sleep of those with a TBI was predicted to be characterized by impairments in gating, hyperarousal and a breakdown in sleep homeostatic mechanisms. To test these hypotheses, 20 individuals with a TBI (18-64 years old, 10 men) and 20 age-matched controls (18-61 years old, 9 men) took part in a comprehensive investigation of their sleep. While TBI participants were not recruited based on sleep complaint, the final sample comprised individuals with a variety of sleep complaints, across a range of injury severities. Rigorous screening procedures were used to reduce potential confounds (e.g., medication). Sleep and waking data were recorded with a 20-channel montage on three consecutive nights. Results showed dysregulation in sleep/wake mechanisms. The sleep of individuals with a TBI was less efficient than that of controls, as measured by sleep architecture variables. There was a clear breakdown in both spontaneous and evoked K-complexes in those with a TBI. Greater injury severities were associated with reductions in spindle density, though sleep spindles in slow wave sleep were longer for individuals with TBI than controls. Quantitative EEG revealed an impairment in sleep homeostatic mechanisms during sleep in the TBI group. As well, results showed the presence of hyperarousal based on quantitative EEG during sleep. In wakefulness, quantitative EEG showed a clear dissociation in arousal level between TBI participants with complaints of insomnia and those with daytime fatigue. In addition, ERPs indicated that the experience of hyperarousal in persons with a TBI was supported by neural evidence, particularly in wakefulness and Stage 2 sleep, and especially for those with insomnia symptoms. ERPs during sleep suggested that individuals with a TBI experienced impairments in information processing and sensory gating. Whereas neuropsychological testing and subjective data confirmed predicted deficits in the waking function of those with a TBI, particularly for those with more severe injuries, there were few group differences on laboratory computer-based tasks. Finally, the use of correlation analyses confirmed distinct sleep-wake relationships for each group. In sum, the mechanisms contributing to sleep disruption in TBI are particular to this condition, and unique neurobiological mechanisms predict the experience of insomnia versus daytime fatigue following a TBI. An understanding of how sleep becomes disrupted after a TBI is important to directing future research and neurorehabilitation.
Abstract:
The initial timing of face-specific effects in event-related potentials (ERPs) is a point of contention in face processing research. Although effects during the time of the N170 are robust in the literature, inconsistent effects during the time of the P100 challenge the interpretation of the N170 as the initial face-specific ERP effect. Early P100 effects are often attributed to low-level differences between face stimuli and a host of other image categories. Research using sophisticated controls for low-level stimulus characteristics (Rousselet, Husk, Bennett, & Sekuler, 2008) reports robust face effects starting at around 130 ms following stimulus onset. The present study examines the independent components (ICs) of the P100 and N170 complex in the context of a minimally controlled low-level stimulus set and a clear P100 effect for faces versus houses at the scalp. Results indicate that four ICs account for the ERPs to faces and houses in the first 200 ms following stimulus onset. The IC that accounts for the majority of the scalp N170 (icN1a) begins dissociating stimulus conditions at approximately 130 ms, closely replicating the scalp results of Rousselet et al. (2008). The scalp effects at the time of the P100 are accounted for by two constituent ICs (icP1a and icP1b). The IC that projects the greatest voltage at the scalp during the P100 (icP1a) shows a face-minus-house effect over the period of the P100 that is less robust than the N170 effect of icN1a when measured as the average of single-subject differential activation robustness. The second constituent process of the P100 (icP1b), although projecting a smaller voltage to the scalp than icP1a, shows a more robust effect for the face-minus-house contrast starting prior to 100 ms following stimulus onset. Further, the effect expressed by icP1b takes the form of a larger negative projection to medial occipital sites for houses over faces, partially canceling the larger projection of icP1a and thereby enhancing the face positivity at this time. These findings have three main implications for ERP research on face processing. First, the ICs that constitute the face-minus-house P100 effect are independent from the ICs that constitute the N170 effect, suggesting that the P100 effect and the N170 effect are anatomically independent. Second, the timing of the N170 effect can be recovered from scalp ERPs that have spatio-temporally overlapping effects possibly associated with low-level stimulus characteristics. This unmixing of the EEG signals may reduce the need for highly constrained stimulus sets, a constraint that is not always desirable for a topic so closely coupled to ecological validity. Third, unmixing the constituent processes of the EEG signals makes new analysis strategies available. In particular, exploring the relationship between cortical processes over the period of the P100 and N170 ERP complex (and beyond) may provide previously inaccessible answers to questions such as: is the face effect a special relationship between low-level and high-level processes along the visual stream?
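The kind of decomposition described here can be approximated with off-the-shelf ICA. The sketch below, using placeholder data and assumed channel and epoch sizes, shows one way to unmix concatenated EEG epochs and recover per-component ERPs; it is not the study's actual pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

# epochs: (n_trials, n_channels, n_samples), EEG time-locked to stimulus onset.
# Placeholder random data; a real analysis would load recorded epochs here.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((200, 32, 256))
n_trials, n_channels, n_samples = epochs.shape

# Concatenate trials in time so ICA sees one long multichannel recording.
continuous = epochs.transpose(1, 0, 2).reshape(n_channels, -1)

ica = FastICA(n_components=20, random_state=0)
sources = ica.fit_transform(continuous.T).T     # (n_components, time)

# Back to per-trial source activations, then average to get per-IC "ERPs";
# condition labels would let one contrast face vs. house trials per IC.
ic_epochs = sources.reshape(20, n_trials, n_samples)
ic_erps = ic_epochs.mean(axis=1)                # (n_components, n_samples)
```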
Abstract:
Previously, studies investigating emotional face perception - regardless of whether they involved adults or children - presented participants with static photos of faces in isolation. In the natural world, faces are rarely encountered in isolation. In the few studies that have presented faces in context, the perception of emotional facial expressions is altered when paired with an incongruent context. For both adults and 8-year-old children, reaction times increase and accuracy decreases when facial expressions are presented in an incongruent context depicting a similar emotion (e.g., a sad face on a fear body) compared to a congruent context (e.g., a sad face on a sad body; Meeren, van Heijnsbergen, & de Gelder, 2005; Mondloch, 2012). This effect is called a congruency effect and does not exist for dissimilar emotions (e.g., happy and sad; Mondloch, 2012). Two models characterize similarity between emotional expressions differently: the emotional seed model bases similarity on physical features, whereas the dimensional model bases similarity on the underlying dimensions of valence and arousal. Study 1 investigated the emergence of an adult-like pattern of congruency effects in preschool-aged children. Using a child-friendly sorting task, we identified the youngest age at which children could accurately sort isolated facial expressions and body postures and then measured whether an incongruent context disrupted the perception of emotional facial expressions. Six-year-old children showed congruency effects for sad/fear, but 4-year-old children did not for sad/happy. This pattern of congruency effects is consistent with both models and indicates that an adult-like pattern exists at the youngest age at which children can reliably sort emotional expressions in isolation. In Study 2, we compared the two models to determine their predictive abilities. The two models make different predictions about the size of congruency effects for three emotions: sad, anger, and fear. The emotional seed model predicts larger congruency effects when sad is paired with either anger or fear than when anger and fear are paired with each other. The dimensional model predicts larger congruency effects when anger and fear are paired together than when either is paired with sad. In both a speeded and an unspeeded task the results failed to support either model, but the pattern of results indicated that fearful bodies have a special effect. Fearful bodies reduced accuracy, increased reaction times more than any other posture, and shifted the pattern of errors. To determine whether the results were specific to bodies, we ran the reverse task to determine whether faces could disrupt the perception of body postures. This experiment did not produce congruency effects, meaning faces do not influence the perception of body postures. In the final experiment, participants performed a flanker task to determine whether the effect of fearful bodies was specific to faces or whether fearful bodies would also produce a larger effect in an unrelated task in which faces were absent. Reaction times did not differ across trials, meaning fearful bodies' large effect is specific to situations with faces. Collectively, these studies provide novel insights, both developmental and theoretical, into how emotional faces are perceived in context.
Abstract:
As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task. The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards, between 200 and 300 ms. Social cognitive factors can also influence face processing; the effect is already evident as early as 130-200 ms, at the structural encoding stage.
Abstract:
When the second of two targets (T2) is presented temporally close to the first target (T1) in rapid serial visual presentation, accuracy to detect/identify T2 is markedly reduced as compared to longer target separations. This is known as the attentional blink (AB), and is thought to reflect a limitation of selective attention. While most individuals show an AB, research has demonstrated that individuals are variously susceptible to this effect. To explain these differences, Dale and Arnell (2010) examined whether dispositional differences in attentional breadth, as measured by the Navon letter task, could predict individual AB magnitude. They found that individuals who showed a natural bias toward the broad, global level of Navon letter stimuli were less susceptible to the AB as compared to individuals who showed a natural bias toward the detailed, local aspects of Navon letter stimuli. This suggests that individuals who naturally broaden their attention can overcome the AB. However, it was unclear how stable these individual differences were over time, and whether a variety of global/local tasks could predict AB performance. As such, the purpose of this dissertation was to investigate, through four empirical studies, the nature of individual differences in both global/local bias and the AB, and how these differences in attentional breadth can modulate AB performance. Study 1 was designed to examine the stability of dispositional global/local biases over time, as well as the relationships among three different global/local processing measures. Study 2 examined the stability of individual differences in the AB, as well as the relationship among two distinct AB tasks. Study 3 examined whether the three distinct global/local tasks used in Study 1 could predict performance on the two AB tasks from Study 2. Finally, Study 4 explored whether individual differences in global/local bias could be manipulated by exposing participants to high/low spatial frequencies and Navon stimuli. In Study 1, I showed that dispositional differences in global/local bias were reliable over a period of at least a week, demonstrating that these individual biases may be trait-like. However, the three tasks that purportedly measure global/local bias were unrelated to each other, suggesting that they measure unique aspects of global/local processing. In Study 2, I found that individual variation in AB performance was also reliable over a period of at least a week, and that the two AB task versions were correlated. Study 3 showed that dispositional global/local biases, as measured by the three tasks from Study 1, predicted AB magnitude, such that individuals who were naturally globally biased had smaller ABs. Finally, in Study 4 I demonstrated that these dispositional global/local biases are resistant to both spatial frequency and Navon letter manipulations, indicating that these differences are robust and intractable. Overall, the results of the four studies in this dissertation help clarify the role of individual differences in attentional breadth in selective attention.
Abstract:
The present work suggests that sentence processing requires both heuristic and algorithmic processing streams, where the heuristic processing strategy precedes the algorithmic phase. This conclusion is based on three self-paced reading experiments in which the processing of two-sentence discourses was investigated, where context sentences exhibited quantifier scope ambiguity. Experiment 1 demonstrates that such sentences are processed in a shallow manner. Experiment 2 uses the same stimuli as Experiment 1 but adds questions to ensure deeper processing. Results indicate that reading times are consistent with a lexical-pragmatic interpretation of number associated with context sentences, but responses to questions are consistent with the algorithmic computation of quantifier scope. Experiment 3 shows the same pattern of results as Experiment 2, despite using stimuli with different lexical-pragmatic biases. These effects suggest that language processing can be superficial, and that deeper processing, which is sensitive to structure, only occurs if required. Implications for recent studies of quantifier scope ambiguity are discussed.
Abstract:
Based on the theoretical framework of Dressler and Dziubalska-Kołaczyk (2006a,b), the Strong Morphonotactic Hypothesis will be tested. It assumes that phonotactics helps in the decomposition of words into morphemes: if a certain sequence occurs exclusively or by default across a morpheme boundary, and is thus a prototypical morphonotactic sequence, it should be processed faster and more accurately than a purely phonotactic sequence. Studies on typical and atypical first language acquisition in English, Lithuanian and Polish have shown significant differences between the acquisition of morphonotactic and phonotactic consonant clusters: morphonotactic clusters are acquired earlier and faster by typically developing children, but are more problematic for children with Specific Language Impairment. However, results on acquisition are less clear for German. The focus of this contribution is whether and how German-speaking adults differentiate between morphonotactic and phonotactic consonant clusters and vowel-consonant sequences in visual word recognition. It investigates whether sub-lexical letter sequences are found faster when the target sequence is separated from the word stem by a morphological boundary than when it is part of a morphological root. An additional factor concerns the position of the target cluster in the word. Due to the bathtub effect, sequences in peripheral positions in a word are more salient and thus facilitate processing more than word-internal positions. Moreover, for adults the primacy effect most favors word-initial position (whereas for young children the recency effect most favors word-final position). Our study discusses effects of phonotactic vs. morphonotactic cluster status and of position within the word.
Abstract:
Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces, developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into the iconically related relative sizes of the eye, mouth, and ear, respectively. The balance of abilities in bilinguals can be captured by creating composite facial profiles, or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production.
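A minimal sketch of the Facial Profile idea follows: three ability scores are recoded as eye, mouth, and ear sizes on a schematic face. The drawing primitives and score scaling are illustrative assumptions, not the authors' implementation.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening):
    """Draw one schematic profile; scores are assumed to lie in [0, 1].

    reading -> eye size, speaking -> mouth size, listening -> ear size,
    mirroring the iconic mapping described in the abstract.
    """
    ax.add_patch(Circle((0.5, 0.5), 0.35, fill=False))               # head outline
    for x in (0.38, 0.62):                                           # eyes
        ax.add_patch(Circle((x, 0.6), 0.02 + 0.05 * reading, fill=False))
    ax.add_patch(Ellipse((0.5, 0.35), 0.05 + 0.2 * speaking,         # mouth
                         0.05, fill=False))
    for x in (0.15, 0.85):                                           # ears
        ax.add_patch(Ellipse((x, 0.5), 0.06, 0.06 + 0.12 * listening,
                             fill=False))
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")
    ax.axis("off")

# Two hypothetical participants with different ability balances.
fig, axes = plt.subplots(1, 2)
facial_profile(axes[0], reading=0.9, speaking=0.3, listening=0.6)
facial_profile(axes[1], reading=0.4, speaking=0.8, listening=0.5)
plt.show()
```

A Janus Facial Profile could then be approximated by drawing the two language profiles of one bilingual back to back.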
Abstract:
This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers’ locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.
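The mixed-effects analysis described here could be sketched along the following lines; the predictor names and synthetic data are illustrative assumptions, and the actual study's model structure may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for trial-level lexical decision data; columns are
# illustrative only (frequencies standardized, log-transformed RTs).
rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "subject": rng.integers(0, 30, n).astype(str),
    "word_freq": rng.normal(size=n),
    "left_char_freq": rng.normal(size=n),
    "right_char_freq": rng.normal(size=n),
    "radical_freq": rng.normal(size=n),
    "font_size": rng.choice(["small", "large"], n),
})
df["log_rt"] = (6.5 - 0.05 * df["word_freq"] - 0.08 * df["left_char_freq"]
                - 0.04 * df["right_char_freq"] + rng.normal(0, 0.2, n))

# Random intercept per participant; an analogous model could be fit to
# per-character subgaze durations instead of whole-trial response times.
model = smf.mixedlm(
    "log_rt ~ word_freq + left_char_freq + right_char_freq"
    " + radical_freq + font_size",
    data=df, groups=df["subject"],
)
print(model.fit().summary())
```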
Abstract:
Neural models of the processing of illusory contours (ICs) diverge from one another in terms of their emphasis on bottom-up versus top-down constituents. The current study uses a dichoptic fusion paradigm to block top-down awareness of ICs in order to examine possible bottom-up effects. Group results indicate that the N170 ERP component is particularly sensitive to ICs at central occipital sites when top-down awareness of the stimulus is permitted. Furthermore, single-subject statistics reveal that the IC N170 ERP effect is highly variable across individuals in terms of timing and topographical spread. The results suggest that the ubiquitous N170 effect to ICs found in the literature depends, at least in part, on participants' awareness of the stimulus. Therefore, a strong bottom-up model of IC processing at the time of the N170 is unlikely.
Abstract:
The use of information and communication technologies in the health and social service sectors, and the development of multi-centred and international research networks, present many benefits for society: for example, better follow-up of an individual's state of health, better quality of care, better control of expenses, and better communication between healthcare professionals. However, this approach raises issues concerning the protection of privacy, more specifically the processing of individual health information.
Abstract:
Pain is a perceptual experience comprising multiple dimensions. These dimensions are interrelated and recruit neural networks that process the corresponding information. Elucidating the functional architecture supporting the different perceptual aspects of the experience is therefore a fundamental step toward understanding the functional role of the various regions of the cerebral pain matrix within the cortical circuits that underlie the subjective experience of pain. Among the brain regions involved in processing nociceptive information, the primary and secondary somatosensory cortices (S1 and S2) are the principal regions generally associated with processing the sensory-discriminative aspect of pain. However, the functional organization within these somatosensory regions is not entirely clear, and relatively few studies have directly examined the integration of information across somatosensory regions. Several questions therefore remain concerning the hierarchical relationship between S1 and S2, as well as the functional role of inter-hemispheric connections between homologous somatosensory regions. Likewise, whether processing within the somatosensory system is serial or parallel is a further open question requiring closer examination. The goal of the present study was to test a number of hypotheses about causality in the functional interactions between S1 and S2 while subjects received painful electric shocks. We implemented a connectivity-modelling method that uses a causal description of the system's dynamics to study the interactions between activation sites defined from a functional imaging dataset. Our paradigm consisted of three experimental sessions using electric shocks at three intensity levels: moderately painful (level 3), mildly painful (level 2), and entirely non-painful (level 1). The paradigm thus allowed us to study how stimulus intensity is encoded in our network of interest, and how the connectivity of the different regions is modulated across stimulation conditions. Our results favour a serial mode of processing of nociceptive somatosensory information, with a predominant thalamocortical input to S1 contralateral to the stimulation site. They imply that information propagates from contralateral S1 through our network of interest, composed of bilateral S1 and S2. Our analysis indicates that the S1→S2 connection is strengthened by pain, suggesting that S2 sits higher in the pain-processing hierarchy than S1, consistent with previous neurophysiological and magnetoencephalography findings. Finally, our analysis provides evidence that somatosensory information enters the hemisphere contralateral to the stimulation side, with inter-hemispheric connections responsible for transferring the information to the ipsilateral hemisphere.
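A toy simulation of the bilinear state equation that underlies this family of causal connectivity models, dz/dt = (A + u_mod B)z + C u_stim, can make the modulation idea concrete; the regions, coupling strengths, and inputs below are illustrative assumptions, not the study's estimated parameters.

```python
import numpy as np

# Regions: 0 = contralateral S1, 1 = contralateral S2, 2 = ipsilateral S2.
A = np.array([[-1.0, 0.0, 0.0],    # intrinsic (fixed) coupling
              [0.4, -1.0, 0.0],    # S1 -> contralateral S2
              [0.0, 0.3, -1.0]])   # inter-hemispheric S2 -> S2
B = np.zeros((3, 3))
B[1, 0] = 0.3                      # pain strengthens the S1 -> S2 connection
C = np.array([1.0, 0.0, 0.0])      # thalamocortical input enters at S1

def simulate(painful, t_max=10.0, dt=0.01):
    """Euler-integrate the bilinear model for one stimulation condition."""
    z = np.zeros(3)
    trajectory = []
    for step in range(int(t_max / dt)):
        t = step * dt
        u_stim = 1.0 if 1.0 <= t <= 1.2 else 0.0   # brief electric shock
        u_mod = 1.0 if painful else 0.0            # condition-dependent modulation
        z = z + dt * ((A + u_mod * B) @ z + C * u_stim)
        trajectory.append(z.copy())
    return np.array(trajectory)

resp_pain = simulate(painful=True)
resp_innocuous = simulate(painful=False)  # compare S2 responses across conditions
```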
Abstract:
In conversational interaction, language is supported by non-verbal communication, which plays a central role in human social behaviour by providing feedback and managing synchronization, thereby reinforcing the content and meaning of the discourse. Indeed, 55% of a message is conveyed by facial expressions, whereas only 7% is carried by the linguistic message and 38% by paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, few measurement instruments are specifically dedicated to this type of behaviour. In computer vision, much effort has gone into developing systems for the automatic analysis of prototypical facial expressions for applications in human-computer interaction, meeting-video analysis, security, and even clinical settings. In the present research, to capture such observable indicators, we set out to implement a system able to build a consistent and relatively exhaustive source of visual information, one capable of distinguishing facial features and their deformations and thereby recognizing the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: gradient orientations are used to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is building a generic model that is independent of a person's identity and of face geometry and size. The approach we propose rests on constructing a prototypical reference frame through SIFT-flow registration, which this thesis shows to be superior to conventional alignment based on eye positions. The second approach uses a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to the next but also dependent on context, this approach presents a personalized facial-expression recognition system whose overall performance depends directly on the performance of tracking a set of characteristic facial points. This tracking is carried out by a modified form of a disparity-estimation technique based on Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement-estimation procedure, yielding more robust tracking than the original methods.
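To make the Gabor representation concrete, here is a minimal sketch of a Gabor filter bank applied to an image patch; the kernel sizes and parameters are illustrative assumptions, not those of the thesis.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(ksize, sigma, theta, wavelength, psi=0.0):
    """Real part of a 2-D Gabor kernel at orientation theta (radians)."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))   # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / wavelength + psi)    # sinusoidal carrier
    return envelope * carrier

def gabor_features(patch, n_orientations=8):
    """Filter a patch with a small Gabor bank; returns rectified response maps."""
    responses = []
    for k in range(n_orientations):
        theta = k * np.pi / n_orientations
        kern = gabor_kernel(ksize=21, sigma=4.0, theta=theta, wavelength=10.0)
        responses.append(np.abs(fftconvolve(patch, kern, mode="same")))
    return np.stack(responses)

patch = np.random.rand(48, 48)     # placeholder for a facial landmark patch
features = gabor_features(patch)   # (8, 48, 48) orientation response maps
```

In a tracking setting of the kind described above, the phase of the complex Gabor response (rather than the rectified magnitude shown here) would drive the displacement estimate between frames.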