913 results for Speech emotion recognition
Abstract:
Background Children with callous-unemotional (CU) traits, a proposed precursor to adult psychopathy, are characterized by impaired emotion recognition, reduced responsiveness to others’ distress, and a lack of guilt or empathy. Reduced attention to faces, and more specifically to the eye region, has been proposed to underlie these difficulties, although this has never been tested longitudinally from infancy. Attention to faces occurs within the context of dyadic caregiver interactions, and early environment including parenting characteristics has been associated with CU traits. The present study tested whether infants’ preferential tracking of a face with direct gaze and levels of maternal sensitivity predict later CU traits. Methods Data were analyzed from a stratified random sample of 213 participants drawn from a population-based sample of 1233 first-time mothers. Infants’ preferential face tracking at 5 weeks and maternal sensitivity at 29 weeks were entered into a weighted linear regression as predictors of CU traits at 2.5 years. Results Controlling for a range of confounders (e.g., deprivation), lower preferential face tracking predicted higher CU traits (p = .001). Higher maternal sensitivity predicted lower CU traits in girls (p = .009), but not boys. No significant interaction between face tracking and maternal sensitivity was found. Conclusions This is the first study to show that attention to social features during infancy as well as early sensitive parenting predict the subsequent development of CU traits. Identifying such early atypicalities offers the potential for developing parent-mediated interventions in children at risk for developing CU traits.
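A minimal sketch (not the study's actual analysis pipeline) of the kind of weighted linear regression described above: CU traits regressed on infant face tracking, maternal sensitivity and a confounder, using weights from the stratified sampling design. All column names, values and weights are illustrative placeholders.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data standing in for the study's variables
df = pd.DataFrame({
    "cu_traits":       [2.1, 3.4, 1.8, 4.0, 2.7, 3.1],  # CU traits at 2.5 years
    "face_tracking":   [0.8, 0.4, 0.9, 0.3, 0.6, 0.5],  # preferential face tracking at 5 weeks
    "maternal_sens":   [4.2, 3.1, 4.5, 2.8, 3.9, 3.3],  # maternal sensitivity at 29 weeks
    "deprivation":     [0.2, 0.7, 0.1, 0.8, 0.4, 0.6],  # example confounder
    "sampling_weight": [1.3, 0.9, 1.1, 0.8, 1.0, 1.2],  # stratification weights
})

X = sm.add_constant(df[["face_tracking", "maternal_sens", "deprivation"]])
model = sm.WLS(df["cu_traits"], X, weights=df["sampling_weight"]).fit()
print(model.summary())
```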
Abstract:
Background There is a need to develop and adapt therapies for use with people with learning disabilities who have mental health problems. Aims To examine the performance of people with learning disabilities on two cognitive therapy tasks (emotion recognition and discrimination among thoughts, feelings and behaviours). We hypothesized that cognitive therapy task performance would be significantly correlated with IQ and receptive vocabulary, and that providing a visual cue would improve performance. Method Fifty-nine people with learning disabilities were assessed on the Wechsler Abbreviated Scale of Intelligence (WASI), the British Picture Vocabulary Scale-II (BPVS-II), a test of emotion recognition and a task requiring participants to discriminate among thoughts, feelings and behaviours. In the discrimination task, participants were randomly assigned to a visual cue condition or a no-cue condition. Results There was considerable variability in performance. Emotion recognition was significantly associated with receptive vocabulary, and discriminating among thoughts, feelings and behaviours was significantly associated with vocabulary and IQ. There was no effect of the cue on the discrimination task. Conclusion People with learning disabilities with higher IQs and good receptive vocabulary were more likely to be able to identify different emotions and to discriminate among thoughts, feelings and behaviours. This implies that they may more easily understand the cognitive model. Structured ways of simplifying the concepts used in cognitive therapy and methods of socialization and education in the cognitive model are required to aid participation of people with learning disabilities.
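An illustrative sketch of the correlational analysis reported above: relating emotion-recognition scores to BPVS-II receptive vocabulary and WASI IQ. The score arrays are made-up placeholders, not study data.

```python
from scipy.stats import pearsonr

emotion_recognition = [12, 15, 9, 18, 14, 11, 16, 13]
receptive_vocab     = [78, 85, 70, 95, 82, 75, 90, 80]   # BPVS-II scores (illustrative)
full_scale_iq       = [62, 68, 58, 74, 66, 60, 71, 64]   # WASI IQ (illustrative)

for name, scores in [("vocabulary", receptive_vocab), ("IQ", full_scale_iq)]:
    r, p = pearsonr(emotion_recognition, scores)
    print(f"emotion recognition vs {name}: r = {r:.2f}, p = {p:.3f}")
```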
Abstract:
This study aimed to measure, using fMRI, the effect of diazepam on the haemodynamic response to emotional faces. Twelve healthy male volunteers (mean age = 24.83 +/- 3.16 years) were evaluated in a randomized, balanced-order, double-blind, placebo-controlled crossover design. Diazepam (10 mg) or placebo was given 1 h before neuroimaging acquisition. In a blocked-design covert emotional face task, subjects were presented with neutral (A) and aversive (B) (angry or fearful) faces. Participants also completed an explicit emotional face recognition task, and subjective anxiety was evaluated throughout the procedures. Diazepam attenuated the activation of the right amygdala and right orbitofrontal cortex and enhanced the activation of the right anterior cingulate cortex (ACC) in response to fearful faces. In contrast, diazepam enhanced the activation of the posterior left insula and attenuated the activation of the bilateral ACC in response to angry faces. In the behavioural task, diazepam impaired the recognition of fear in female faces. Under the action of diazepam, volunteers were less anxious at the end of the experimental session. These results suggest that benzodiazepines can differentially modulate brain activation to aversive stimuli depending on stimulus features, and indicate a role for the amygdala and insula in the anxiolytic action of benzodiazepines.
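A hedged sketch of one simple way to test a within-subject (crossover) drug effect on a region-of-interest response: a paired t-test on per-subject amygdala activation estimates under placebo versus diazepam. The values are invented for illustration; the study itself relied on whole-brain fMRI models rather than this simplified comparison.

```python
from scipy.stats import ttest_rel

# Hypothetical per-subject ROI activation estimates (12 subjects, 2 sessions)
amygdala_placebo  = [0.42, 0.55, 0.38, 0.61, 0.47, 0.50, 0.44, 0.58, 0.40, 0.53, 0.49, 0.46]
amygdala_diazepam = [0.31, 0.40, 0.33, 0.45, 0.36, 0.39, 0.35, 0.42, 0.30, 0.41, 0.37, 0.34]

t, p = ttest_rel(amygdala_placebo, amygdala_diazepam)
print(f"paired t({len(amygdala_placebo) - 1}) = {t:.2f}, p = {p:.4f}")
```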
Abstract:
In this paper we have quantified the consistency of word usage in written texts represented by complex networks, where words were taken as nodes, by measuring the degree of preservation of the node neighborhood. Words were considered highly consistent if the authors used them with the same neighborhood. When ranked according to the consistency of use, the words obeyed a log-normal distribution, in contrast to Zipf's law that applies to the frequency of use. Consistency correlated positively with the familiarity and frequency of use, and negatively with ambiguity and age of acquisition. An inspection of some highly consistent words confirmed that they are used in very limited semantic contexts. A comparison of consistency indices for eight authors indicated that these indices may be employed for author recognition. Indeed, as expected, authors of novels could be distinguished from those who wrote scientific texts. Our analysis demonstrated the suitability of the consistency indices, which can now be applied in other tasks, such as emotion recognition.
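A minimal sketch of the neighborhood-consistency idea: each text is turned into a word-adjacency network, and a word's consistency is measured as the overlap (here, a Jaccard index) between its neighborhoods in two texts by the same author. The example texts and the exact overlap measure are illustrative choices, not the paper's precise formulation.

```python
from collections import defaultdict

def neighborhoods(text):
    """Build a word-adjacency network: adjacent words become linked nodes."""
    words = text.lower().split()
    nbrs = defaultdict(set)
    for a, b in zip(words, words[1:]):
        nbrs[a].add(b)
        nbrs[b].add(a)
    return nbrs

def consistency(word, text_a, text_b):
    """Jaccard overlap of the word's neighborhoods across the two texts."""
    na, nb = neighborhoods(text_a)[word], neighborhoods(text_b)[word]
    union = na | nb
    return len(na & nb) / len(union) if union else 0.0

text1 = "the old house stood by the river and the river ran fast"
text2 = "the river ran past the old mill and the house by the river"
print(consistency("river", text1, text2))
```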
Resumo:
La tesi tratta i temi di computer vision connessi alle problematiche di inserimento in una piattaforma Web. Nel testo sono spiegate alcune soluzioni per includere una libreria software per l'emotion recognition in un'applicazione web e tecnologie per la registrazione di un video, catturando le immagine da una webcam.
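A small sketch of the capture step described above: grabbing frames from a webcam so they can be handed to an emotion-recognition library. OpenCV is used here as a stand-in; in the web platform the thesis targets, the browser's getUserMedia API would play this role instead.

```python
import cv2

cap = cv2.VideoCapture(0)            # open the default webcam
ok, frame = cap.read()               # grab one BGR frame
if ok:
    cv2.imwrite("frame.png", frame)  # the frame could instead be passed to the recognizer
cap.release()
```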
Abstract:
Empathy is a core prerequisite for human social behavior. Relatively little is known about how empathy is influenced by social stress and its associated neuroendocrine alterations. The current study was designed to test the impact of acute stress on emotional and cognitive empathy. Healthy male participants were exposed to a psychosocial laboratory stressor (Trier Social Stress Test, TSST) or a well-matched control condition (Placebo-TSST). Afterwards they completed an empathy test measuring emotional and cognitive empathy (Multifaceted Empathy Test, MET). Stress exposure caused an increase in negative affect, a rise in salivary alpha-amylase and a rise in cortisol. Participants exposed to stress reported more emotional empathy in response to pictures displaying both positive and negative emotional social scenes. Cognitive empathy (emotion recognition), in contrast, did not differ between the stress and the control group. The current findings provide initial evidence for enhanced emotional empathy after acute psychosocial stress.
Abstract:
Gamma-hydroxybutyrate (GHB) is a GHB-/GABAB-receptor agonist. Reports from GHB abusers indicate euphoric, prosocial, and empathogenic effects of the drug. We measured the effects of GHB on mood, prosocial behavior, social and non-social cognition and assessed potential underlying neuroendocrine mechanisms. GHB (20 mg/kg) was tested in 16 healthy males, using a randomized, placebo-controlled, cross-over design. Subjective effects on mood were assessed by visual analogue scales and the GHB-Specific Questionnaire. Prosocial behavior was examined by the Charity Donation Task, the Social Value Orientation test, and the Reciprocity Task. Reaction time, memory, empathy, and theory of mind were also tested. Blood plasma levels of GHB, oxytocin, testosterone, progesterone, dehydroepiandrosterone (DHEA), cortisol, aldosterone, and adrenocorticotropic hormone (ACTH) were determined. GHB showed stimulating and sedating effects, and elicited euphoria, disinhibition, and enhanced vitality. In participants with low prosociality, the drug increased donations and prosocial money distributions. In contrast, social cognitive abilities such as emotion recognition, empathy, and theory of mind, as well as basal cognitive functions, were not affected. GHB increased plasma progesterone, while oxytocin, testosterone, cortisol, aldosterone, DHEA, and ACTH levels remained unaffected. GHB has mood-enhancing and prosocial effects without affecting social hormones such as oxytocin and testosterone. These data suggest a potential involvement of GHB-/GABAB-receptors and progesterone in mood and prosocial behavior.
Abstract:
This work provides a theoretical basis (syntax and semantics) and a practical implementation of a framework for encoding reasoning over a fuzzy representation of the world (as human beings understand it). The interest in this work comes from two sources: removing the complexity involved when such systems are implemented in a general-purpose programming language (one without special constructs for representing fuzzy information), and providing a tool intelligent enough to answer fuzzy queries over conventional data in a constructive way. The framework, RFuzzy, allows rules and queries to be encoded in a syntax very close to the natural language humans use to express their thoughts, but it offers more than that. It can represent very useful concepts, such as fuzzifications (functions that convert crisp concepts into fuzzy ones), default values (used to return somewhat less adequate but still valid results when the information needed to compute better ones is missing), similarity between attributes (used to search the database for individuals with a characteristic similar to the one sought), and synonyms or antonyms, and it allows the set of connectives and modifiers (including negation modifiers) usable in rules and queries to be extended. Personalization of the definitions of fuzzy concepts is also included; this is very useful for dealing with the subjective character of fuzziness, where calling someone "tall" depends on the height of the person making the judgement. In addition, RFuzzy implements the multi-adjoint semantics, whose interest lies in the fact that, beyond computing the degree of satisfaction of a rule from the universe modelled in the program, it makes it possible to derive the credibility of a rule from a given rule and a set of data. In this way the credibility of a rule for a particular situation can be obtained automatically. Although the theoretical contribution of the thesis is interesting in itself, especially the inclusion of the negation modifier, its practical uses are equally important. Among the different applications of the framework we highlight emotion recognition, robot (RoboCup) control, granularity control in parallel/distributed computing, and fuzzy (flexible) searches in databases.
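A conceptual sketch, in Python rather than RFuzzy's actual Prolog-based syntax, of three ideas from the abstract: a fuzzification function turning a crisp height into a degree of "tall", personalization of that definition by the observer, a negation modifier, and a default value when data is missing. Function names, thresholds and data are all illustrative.

```python
def tall(height_cm, observer_height_cm=170):
    """Piecewise-linear membership degree, shifted by who is asking (personalization)."""
    lo, hi = observer_height_cm - 10, observer_height_cm + 20
    if height_cm <= lo:
        return 0.0
    if height_cm >= hi:
        return 1.0
    return (height_cm - lo) / (hi - lo)

def not_(degree):
    """Negation modifier over a truth degree."""
    return 1.0 - degree

DEFAULT_TALL = 0.5   # default truth degree used when the height is unknown

people = {"ana": 185, "luis": 172, "eva": None}
for name, h in people.items():
    degree = tall(h) if h is not None else DEFAULT_TALL
    print(name, round(degree, 2), "not tall:", round(not_(degree), 2))
```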
Abstract:
For more than 20 years, many research groups have studied techniques for automatic facial expression recognition. In recent years, methodological advances have made it possible to rapidly detect the faces present in an image and have produced algorithms for classifying expressions. This project surveys the state of the art in automatic emotion recognition in order to review the methods available for facial analysis and emotion classification. To allow these and future methods to be compared, a modular and extensible tool is implemented. It integrates a feature extraction method that obtains 19 facial feature points, together with two expression classifiers: one based on comparing the displacements of the facial points, and one based on detecting specific movements called Action Units. The Cohn-Kanade+ and JAFFE databases, both freely available to the scientific community, are used to train and evaluate the system. The methods are then evaluated using different parameters and databases and varying the number of emotions. Finally, conclusions are drawn from the work and its evaluation, and necessary improvements and future research are proposed.
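An illustrative sketch of the displacement-based classification idea: facial feature points tracked from a neutral frame to an expressive frame yield a displacement vector, which a standard classifier maps to an emotion label. The features and labels below are toy values, not Cohn-Kanade+ or JAFFE data, and the classifier choice is an assumption for illustration.

```python
import numpy as np
from sklearn.svm import SVC

# Each row: flattened (dx, dy) displacements of a few landmarks relative to a neutral frame
X_train = np.array([
    [0.0, -2.1,  1.5, -1.8, 0.2, 0.1],   # mouth corners pulled up -> happiness
    [0.0,  1.9, -1.2,  1.7, 0.1, 0.0],   # brows lowered, mouth tightened -> anger
    [0.1, -2.0,  1.4, -1.6, 0.0, 0.2],
    [0.0,  2.1, -1.4,  1.5, 0.2, 0.1],
])
y_train = ["happiness", "anger", "happiness", "anger"]

clf = SVC(kernel="linear").fit(X_train, y_train)
print(clf.predict([[0.1, -1.9, 1.3, -1.7, 0.1, 0.0]]))   # expected: happiness
```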
Abstract:
Introduction: The aim of this study was to investigate whether deficits in facial emotion recognition are associated with deficits in mental flexibility and in social adjustment in euthymic patients with bipolar I disorder compared with control subjects without mental disorders. Methods: 65 euthymic patients with bipolar I disorder and 95 controls without mental disorders were assessed for facial emotion recognition, mental flexibility and social adjustment through clinical and neuropsychological evaluations. Affective symptoms were assessed with the Hamilton Depression Rating Scale and the Young Mania Rating Scale, facial emotion recognition with the Facial Expressions of Emotion: Stimuli and Tests, mental flexibility with the Wisconsin Card Sorting Test, and social adjustment with the Self-Report Social Adjustment Scale. Results: Euthymic patients with bipolar I disorder showed a stronger association than controls between facial emotion recognition and mental flexibility, indicating that the more preserved the mental flexibility, the better the ability to recognize facial emotions. In this group, all emotions correlated positively with the number of correct responses and categories completed, and negatively with perseverative responses, total errors, perseverative errors and non-perseverative errors. There was no correlation between facial emotion recognition and social adjustment, even though euthymic bipolar I patients showed poorer social adjustment, suggesting that their poorer social adjustment does not appear to result from difficulty in recognizing and correctly interpreting facial expressions. Euthymic bipolar I patients did not differ significantly from controls in facial emotion recognition, although for the surprise subtest (p = 0.080) the difference approached statistical significance, indicating that euthymic bipolar I patients tend to perform worse than controls in recognizing surprise. Conclusion: Our results reinforce the hypothesis that facial emotion recognition is associated with preserved executive functioning, more precisely mental flexibility, indicating that the greater the mental flexibility, the better the ability to recognize facial emotions and the better the patient's functional performance. Euthymic bipolar I patients show poorer social adjustment than controls, which may be a consequence of bipolar disorder and underscores the need for prompt and effective therapeutic intervention in these patients.
Abstract:
Bipolar disorder (BD) type I is characterized by recurrent episodes of mania and depression, with marked impairment of global functioning and compromise of cognitive functions. In addition, the number of pathological mood episodes over the lifetime is known to influence these individuals' cognitive functioning. This has motivated the search for genetic markers of cognitive dysfunction in BD. Among the candidate genes that may influence cognition are functional polymorphisms of brain-derived neurotrophic factor (BDNF), catechol-O-methyltransferase (COMT), apolipoprotein E (APOE) and the low-voltage calcium channel subunit 1-C (CACNA1C). Markers of oxidative stress are also known to be increased in BD across all phases of the illness, but their impact on cognitive dysfunction in BD is unclear. The aim of this thesis was to evaluate the cognitive performance of young patients with bipolar I disorder and its association with BDNF, COMT, APOE and CACNA1C genotypes, as well as with plasma levels of guanosine oxidation (8-OHdG) and cytosine (5-MeC), during mood episodes, in euthymia, and in controls. To investigate this association, 116 patients with a DSM-IV-TR diagnosis of bipolar I disorder (79 in a pathological mood episode and 37 euthymic) and 97 healthy controls underwent neuropsychological assessment and blood collection for DNA extraction and genotyping of BDNF (rs6265), COMT (rs4680; rs165599), APOE (rs429358 and rs7412) and CACNA1C (rs1006737), together with measurement of 8-OHdG and 5-MeC. The analysis showed that patients carrying the COMT Met/Met rs4680/rs165599 genotype had more severe cognitive impairment (executive function, verbal fluency, memory and intelligence) than carriers of the Val/Met or Val/Val genotypes during manic or mixed episodes. In the same direction, patients carrying the COMT rs4680 Met allele showed impaired facial emotion recognition during manic and depressive episodes. No COMT effect was observed in controls. The CACNA1C Met risk allele was associated with worse executive impairment independent of manic or depressive symptoms in BD, but no effect was observed in controls. The BDNF rs6265 Met allele and the presence of the APOE ε4 allele did not identify a group with distinct cognitive performance across the phases of BD or in controls. Subjects with BD had higher 8-OHdG levels, and these levels were directly proportional to the number of manic episodes over the lifetime, suggesting a role for hyperdopaminergic episodes in the oxidation of DNA bases. It is concluded that genotyping COMT and CACNA1C in patients with BD may identify a group of patients with worse cognitive dysfunction during the manic and mixed phases of BD. This finding may indicate the involvement of the dopaminergic system and low-voltage calcium channels in the pathophysiology of cognitive dysfunction in BD and should be explored in further studies.
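A hedged sketch of the kind of genotype-by-cognition comparison described above: a one-way ANOVA of executive-function scores across COMT rs4680 genotype groups. The scores are invented placeholders, not data from the study, and the study's actual models also accounted for mood state.

```python
from scipy.stats import f_oneway

# Hypothetical executive-function scores grouped by COMT rs4680 genotype
met_met = [38, 41, 35, 39, 36]
val_met = [44, 47, 43, 45, 46]
val_val = [48, 50, 46, 49, 47]

f, p = f_oneway(met_met, val_met, val_val)
print(f"F = {f:.2f}, p = {p:.4f}")
```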
Abstract:
This work evaluates how human emotions expressed through facial movements can influence the decision making of computer systems, with the goal of improving the user experience. Three modules were developed. The first is an assistive computing system: a digital augmentative and alternative communication board. The second, here called the Affective Module, is an affective computing system that uses computer vision to capture the user's facial movements and classify his or her emotional state. This module was implemented in two stages, both inspired by the Facial Action Coding System (FACS), which identifies facial expressions based on the human cognitive system. In the first stage, the Affective Module infers the basic emotional states: happiness, surprise, anger, fear, sadness, disgust, and the neutral state. According to most researchers in the field, the basic emotions are innate and universal, which makes the Affective Module generalizable to any population. Tests with the proposed model yielded results 10.9% above those of approaches using similar methodologies. Spontaneous emotions were also analyzed, and the computational results approached human accuracy rates. In the second stage of the Affective Module, the goal was to identify facial expressions reflecting a person's dissatisfaction or difficulty while using computer systems, and the first model was adjusted for this purpose. Finally, a Decision-Making Module was developed that receives information from the Affective Module and intervenes in the computer system. Parameters such as icon size, drag converted into click, and scanning speed are changed in real time by the Decision-Making Module in the assistive computer system, according to the information generated by the Affective Module. Since the Affective Module has no training stage for inferring the emotional state, a neutral-face algorithm was proposed to solve the problem of initialization with faces already expressing emotions. This work also proposes dividing rapid facial signals into baseline signals (tics and other facial movements that are not emotional signals) and emotional signals. The results of case studies conducted with students of the APAE of Presidente Prudente showed that it is possible to improve the user experience by configuring a computer system with emotional information expressed through facial movements.
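A conceptual sketch of the FACS-inspired inference step: detected Action Units (AUs) are matched against commonly cited AU combinations for the basic emotions. The mapping and threshold below are a simplified illustration of the idea, not the thesis's actual model.

```python
# Commonly cited AU combinations for a few basic emotions (simplified)
EMOTION_RULES = {
    "happiness": {6, 12},          # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "anger":     {4, 5, 7, 23},    # brow lowerer + lid raiser/tightener + lip tightener
}

def infer_emotion(detected_aus):
    """Return the emotion whose AU set best overlaps the detected AUs."""
    best, score = "neutral", 0.0
    for emotion, aus in EMOTION_RULES.items():
        overlap = len(aus & detected_aus) / len(aus)
        if overlap > score:
            best, score = emotion, overlap
    return best if score >= 0.5 else "neutral"

print(infer_emotion({6, 12, 25}))   # -> happiness
```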
Abstract:
This study aimed to: i) determine if the attention bias towards angry faces reported in eating disorders generalises to a non-clinical sample varying in eating disorder-related symptoms; ii) examine if the bias occurs during initial orientation or later strategic processing; and iii) confirm previous findings of impaired facial emotion recognition in non-clinical disordered eating. Fifty-two females viewed a series of face-pairs (happy or angry paired with neutral) whilst their attentional deployment was continuously monitored using an eye-tracker. They subsequently identified the emotion portrayed in a separate series of faces. The highest (n=18) and lowest scorers (n=17) on the Eating Disorders Inventory (EDI) were compared on the attention and facial emotion recognition tasks. Those with relatively high scores exhibited impaired facial emotion recognition, confirming previous findings in similar non-clinical samples. They also displayed biased attention away from emotional faces during later strategic processing, which is consistent with previously observed impairments in clinical samples. These differences were related to drive-for-thinness. Although we found no evidence of a bias towards angry faces, it is plausible that the observed impairments in emotion recognition and avoidance of emotional faces could disrupt social functioning and act as a risk factor for the development of eating disorders.
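A minimal sketch of an attentional-bias measure derived from gaze data like that described above: the share of dwell time on the emotional face versus the neutral face in each pair, averaged over trials. Sample counts are invented; 0.5 means no bias, and lower values indicate relative avoidance of the emotional face.

```python
def attention_bias(trials):
    """trials: list of (gaze_samples_on_emotional_face, gaze_samples_on_neutral_face)."""
    props = [e / (e + n) for e, n in trials if e + n > 0]
    return sum(props) / len(props)

# Hypothetical dwell-time samples per trial for two groups
high_edi_trials = [(120, 210), (90, 240), (150, 180), (80, 260)]
low_edi_trials  = [(180, 150), (160, 170), (200, 140), (170, 160)]

print("high EDI bias:", round(attention_bias(high_edi_trials), 2))
print("low EDI bias:",  round(attention_bias(low_edi_trials), 2))
```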
Abstract:
The need to provide computers with the ability to distinguish the affective state of their users is a major requirement for the practical implementation of affective computing concepts. This dissertation proposes the application of signal processing methods on physiological signals to extract from them features that can be processed by learning pattern recognition systems to provide cues about a person's affective state. In particular, combining physiological information sensed from a user's left hand in a non-invasive way with the pupil diameter information from an eye-tracking system may provide a computer with an awareness of its user's affective responses in the course of human-computer interactions. In this study an integrated hardware-software setup was developed to achieve automatic assessment of the affective status of a computer user. A computer-based "Paced Stroop Test" was designed as a stimulus to elicit emotional stress in the subject during the experiment. Four signals: the Galvanic Skin Response (GSR), the Blood Volume Pulse (BVP), the Skin Temperature (ST) and the Pupil Diameter (PD), were monitored and analyzed to differentiate affective states in the user. Several signal processing techniques were applied on the collected signals to extract their most relevant features. These features were analyzed with learning classification systems, to accomplish the affective state identification. Three learning algorithms: Naïve Bayes, Decision Tree and Support Vector Machine were applied to this identification process and their levels of classification accuracy were compared. The results achieved indicate that the physiological signals monitored do, in fact, have a strong correlation with the changes in the emotional states of the experimental subjects. These results also revealed that the inclusion of pupil diameter information significantly improved the performance of the emotion recognition system.
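A hedged sketch of the classifier comparison described above: features extracted from GSR, BVP, skin temperature and pupil diameter feed three learning algorithms whose cross-validated accuracy is compared. The feature matrix here is random placeholder data, not the study's signals, and the exact features and validation scheme are assumptions.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))         # e.g. per-segment GSR/BVP/ST/PD features (placeholder)
y = rng.integers(0, 2, size=60)      # 0 = relaxed segment, 1 = stressed (Stroop) segment

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Decision Tree", DecisionTreeClassifier()),
                  ("SVM", SVC(kernel="rbf"))]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```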
Abstract:
Parents throughout the world sing and talk to their babies. These two types of vocalization directed at preverbal infants share several similarities as well as differences, but their consequences for infants remain poorly understood. The aim of this thesis was to document the relative effectiveness of singing and speech in capturing infants' attention over short periods of time (Study 1) and in regulating infants' affect by maintaining a state of contentment over an extended period (Study 2). The first study explored the attentional reactions of infants exposed to unfamiliar audio recordings of singing and speech. In Experiment 1, infants aged 4 to 13 months were exposed to happy-sounding infant-directed speech (sequences of syllables) and to hummed lullabies produced by the same woman. They listened significantly longer to the speech, which contained much more acoustic variability and expressiveness than the lullabies. In Experiment 2, infants of comparable ages showed no differential listening to spoken versus sung versions of a Turkish children's song, both rendered in a happy/joyful manner. Infants in Experiment 3, who heard the sung version of the Turkish song along with a spoken version that was either affectively neutral or adult-directed, listened significantly longer to the sung version. Overall, a joyful vocal quality rather than the vocal mode (sung versus spoken) was the main determinant of infant attention, regardless of age. In the second study, infants' affect regulation was explored as a function of exposure to unfamiliar audio recordings of singing or speech. Infants were exposed to singing or speech until they met a criterion of dissatisfaction expressed in the face. In Experiment 1, infants aged 7 to 10 months listened to recordings of infant-directed speech, adult-directed speech, or singing in an unfamiliar language (Turkish). Infants listened to the singing nearly twice as long as to the speech before showing dissatisfaction. In Experiment 2, infants were exposed to recordings of speech or singing taken from natural mother-infant interactions, in a familiar language. As in Experiment 1, infant-directed singing was considerably more effective than speech in delaying the onset of discontent. The temporal structure of singing, notably its regular rhythm, stable tempo and repetition, may play an important role in affect regulation by sustaining attention, enhancing familiarity, or promoting predictive listening and entrainment. In sum, the studies presented in this thesis reveal, for the first time, that singing is a powerful parental tool, just as effective as speech in capturing attention and more effective than speech in keeping infants in a calm state. These findings underscore the usefulness of singing in everyday life and its potential usefulness in a variety of therapeutic contexts involving infants.