872 results for facial expressions
Abstract:
A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated as color image registration in the texture map of a 3D surface model. The model's appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure, which provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
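The abstract does not specify the estimator, so purely as an illustration of the robust-minimization idea it invokes, the following sketch runs iteratively reweighted least squares with a Huber weight on a hypothetical 1-D translation model (the actual system registers 2-D texture maps with a richer warp):

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weights: 1 inside the inlier band, k/|r| outside,
    so outliers (occlusions, highlights) get down-weighted."""
    a = np.abs(r)
    w = np.ones_like(a)
    mask = a > k
    w[mask] = k / a[mask]
    return w

def robust_shift(template, image, iters=10):
    """Estimate a 1-D sub-pixel shift aligning `image` to `template`
    by iteratively reweighted Gauss-Newton least squares."""
    d = 0.0
    x = np.arange(len(template), dtype=float)
    for _ in range(iters):
        warped = np.interp(x + d, x, image)
        r = warped - template          # photometric residual
        g = np.gradient(warped)        # derivative of warp w.r.t. d
        w = huber_weights(r / (r.std() + 1e-9))
        # weighted normal equation: (g' W g) delta = -(g' W r)
        delta = -(w * g * r).sum() / ((w * g * g).sum() + 1e-9)
        d += delta
    return d
```

The same structure scales to the paper's setting by replacing the scalar shift with the head-pose parameters and the 1-D gradient with the Jacobian of the texture-map warp.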
Abstract:
We introduce Active Hidden Models (AHM) that utilize kernel methods traditionally associated with classification. We use AHMs to track deformable objects in video sequences by leveraging kernel projections. We introduce the "subset projection" method which improves the efficiency of our tracking approach by a factor of ten. We successfully tested our method on facial tracking with extreme head movements (including full 180-degree head rotation), facial expressions, and deformable objects. Given a kernel and a set of training observations, we derive unbiased estimates of the accuracy of the AHM tracker. Kernels are generally used in classification methods to make training data linearly separable. We prove that the optimal (minimum variance) tracking kernels are those that make the training observations linearly dependent.
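The AHM formulation itself is not given in the abstract; as a generic illustration of the kernel-projection idea it builds on, the sketch below scores how well a new observation is explained by the span of the training observations in an RBF feature space (all names and parameter values are illustrative, not from the paper):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_projection_residual(X_train, x_new, gamma=0.5):
    """Squared feature-space distance from phi(x_new) to the span of
    {phi(x_i)}: small residual = observation well explained by training set."""
    K = rbf_kernel(X_train, X_train, gamma)
    k = rbf_kernel(X_train, x_new[None, :], gamma).ravel()
    alpha = np.linalg.solve(K + 1e-8 * np.eye(len(K)), k)
    # ||phi(x) - P phi(x)||^2 = k(x, x) - k' K^{-1} k
    return float(rbf_kernel(x_new[None, :], x_new[None, :], gamma)[0, 0]
                 - k @ alpha)
```

A residual near zero for observations resembling the training set, and near one for novel ones, is the kind of projection score a tracker can minimize over candidate object states.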
Abstract:
An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder, and tracking is formulated as an image registration problem in the cylinder's texture map. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual registration error is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least-squares minimization of the registration error. The regularization term limits potential ambiguities that arise in the warping and illumination templates, enabling stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of the texture-mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported, and the sensitivity of the technique to illumination, regularization parameters, and errors in initial positioning and internal camera parameters is analyzed. Examples and applications of tracking are reported.
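The regularized, weighted least-squares step described above has a standard closed form. In the minimal sketch below, `B` stands for a matrix whose columns are the warping and illumination templates and `r` for the registration residual; both are hypothetical placeholders, and the ridge penalty stands in for the paper's regularization term:

```python
import numpy as np

def solve_registration(B, r, w, lam=0.1):
    """Regularized weighted least squares: fit the residual `r` as a
    linear combination of the template columns of B.
    Minimizes ||B c - r||_W^2 + lam * ||c||^2, with W = diag(w)."""
    W = np.diag(w)
    A = B.T @ W @ B + lam * np.eye(B.shape[1])
    b = B.T @ W @ r
    return np.linalg.solve(A, b)  # c = (B'WB + lam I)^{-1} B'Wr
```

The per-pixel weights `w` carry the robustness, while `lam` damps the ambiguous directions shared by the warping and illumination templates, which is what makes long-sequence tracking stable.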
Abstract:
The act of playing percussion is innately theatrical. The motions by which a percussionist plays his instruments often resemble a carefully choreographed dance. Our bodies are freed from having to hold or blow into an instrument. We are free to move to and from our instruments, to change our facial expressions, and to vocalize. This performance dissertation consists of three recitals featuring theatrical music for solo percussion. The selections are limited to works in which the composer instructs the solo percussion performer to perform in an extra-musical capacity, whether it be by singing, speaking, moving, acting, dancing, or any other action. On all three recitals, I performed established works for solo theatrical percussion. However, on the first recital, I also played the world premiere performance of The Authors, a new work by Stuart Saunders Smith. This half-hour long marimba opera requires the solo performer to speak, sing, and act while playing the marimba. For the final recital, I featured pieces commissioned from two professional composers, John Leupold II and Daniel Adams. Through this series, I hope to both pay tribute to existing great works and to encourage new composition of solo theatrical percussion music.
Abstract:
The original article is available open access on the Springer website at the following link: http://link.springer.com/article/10.1007/s10639-015-9388-2
Abstract:
Emotion research has long been dominated by the “standard method” of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it cannot capture the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, no consensus has been reached on the statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models (GAMs) and Generalized Additive Mixed Models (GAMMs) as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed-model GAMM approach is preferred, as it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modelled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of Generalized Additive Models to assess the effect size of shared perceived emotion and discuss sample sizes. Finally, we address additional uses: the inference of feature detection, continuous-variable interactions, and the measurement of ambiguity.
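The building block of a GAM is a penalized smoother of a response over time. The numpy sketch below fits one such smooth to synthetic data; it is illustrative only (a hinge basis with a ridge penalty), whereas a real analysis of continuous emotion traces would use a full GAMM implementation with random effects and autocorrelation structure:

```python
import numpy as np

def fit_gam_smooth(t, y, n_knots=12, lam=1.0):
    """Penalized-spline smoother, the core of a GAM term.
    Fits y ~ f(t), with f a linear combination of an intercept, a
    linear trend, and hinge functions at evenly spaced knots; the
    hinge coefficients get a ridge penalty to control wiggliness."""
    knots = np.linspace(t.min(), t.max(), n_knots)
    X = np.column_stack([np.ones_like(t), t] +
                        [np.maximum(t - k, 0.0) for k in knots])
    P = np.eye(X.shape[1])
    P[0, 0] = P[1, 1] = 0.0      # leave the linear part unpenalized
    beta = np.linalg.solve(X.T @ X + lam * P, X.T @ y)
    return X @ beta              # fitted smooth evaluated at t
```

Increasing `lam` shrinks the fit toward a straight line; choosing it by cross-validation or REML is what distinguishes a principled GAM fit from ad-hoc smoothing.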
Abstract:
The importance of, and concern for, the autonomy and independence of elderly people and of patients with some form of disability has grown significantly over recent decades. Intelligent wheelchairs (IWs) are technologies that can help this population increase their autonomy, and they are currently a very active research area. However, the adaptation of IWs to specific patients and experimentation with real users remain understudied topics. The intelligent wheelchair developed within the IntellWheels Project is controlled at a high level through a flexible multimodal interface, using voice commands, facial expressions, head movements, and a joystick. The goal of this work was the automatic adaptation of the IW to the characteristics of its potential users. A methodology capable of building a user model was developed. The research was based on a data-collection system that acquires and stores data on patients' voice, facial expressions, and head and body movements. The IW can be used in different situations in real and simulated environments, and a serious game was developed that allows a set of tasks to be specified for users to perform. The data were analysed with knowledge-extraction methods in order to obtain the user model. Using the results of the classification system, a methodology was created to select the best interface and command language of the wheelchair for each user. The validation of the approach was carried out within Project FCT/RIPD/ADA/109636/2009 - "IntellWheels - Intelligent Wheelchair with Flexible Multimodal Interface".
The experiments involved a wide range of individuals with various levels of disability, in close collaboration with the Escola Superior de Tecnologia de Saúde do Porto and the Associação do Porto de Paralisia Cerebral. The data collected through the IW navigation experiments were complemented by questionnaires completed by the users. These data were analysed statistically in order to demonstrate the effectiveness and usability of matching the IW interface to the user. In a simulated environment, the results showed a system usability score of 67, based on the opinions of a sample of patients with cerebral palsy of degrees IV and V (the most severe). It was also shown statistically that the interface assigned automatically by the tool was rated higher than the one suggested by occupational therapy professionals, demonstrating the possibility of automatically assigning a command language adapted to each user. Experiments with different control modes revealed the users' preference for shared control, with a level of assistance matched to the patient's level of impairment. In conclusion, this work demonstrates that it is possible to automatically adapt an IW to its user, with clear benefits in usability and safety.
Abstract:
This work was prepared as part of the dissertation for the Master's degree in Specialised Translation and Interpreting. Its aim is to study the mockumentary format, on which my analysis is based, and to understand the specificities of translating the satirical element this hybrid genre can carry as a driver of humour. Audiovisual translation is a phenomenon we encounter daily, whether on a television, a computer screen, or a mobile device, where we face a diverse set of semiotic elements brought together by an essential combination of four channels: visual, acoustic, verbal, and non-verbal. The translator must take into account not only the verbal code but also elements that are not strictly linguistic, such as gestures, music, facial expressions, etc. Since Uma Família Muito Moderna, on which I based my analysis, is a comedy series with a strong satirical bent, it presented itself as the primary object of study for describing the mockumentary and analysing how this humour comes across in the Portuguese version. This requires addressing the translation of humour, in this case wordplay, puns, and cases of polysemy, as well as the translation of cultural and intertextual references. To that end, I carried out a contrastive analysis between the original English-language text and the subtitles in the target language, Portuguese, to assess how the satirical, humour-generating tone is recovered. Together, these elements build a text that, in the case of this mockumentary, takes on a strongly satirical tone - mock - in order to create humour, presenting a series that, although undoubtedly fictional, displays certain traits associated with informative television formats - documentary - giving the viewer an illusion of reality and factuality.
Abstract:
Traumatic brain injury (TBI) often affects social adaptive functioning, and these changes in social adaptability are usually associated with general damage to the frontal cortex. Recent evidence suggests that certain neurons within the orbitofrontal cortex appear to be specialized for the processing of faces and facial expressions. The orbitofrontal cortex also appears to be involved in self-initiated somatic activation to emotionally-charged stimuli. According to Somatic Marker Theory (Damasio, 1994), reduced physiological activation fails to provide an individual with appropriate somatic cues to personally-relevant stimuli, and this, in turn, may result in maladaptive behaviour. Given the susceptibility of the orbitofrontal cortex in TBI, it was hypothesized that impaired perception of, and reactivity to, socially-relevant information might be responsible for some of the social difficulties encountered after TBI. Fifteen persons who had sustained a moderate to severe brain injury were compared to age- and education-matched control participants. In the first study, both groups were presented with photographs of models displaying the major emotions and either asked to identify the emotions or simply view the faces passively. In a second study, participants were asked to select cards from decks that varied in terms of how much money could be won or lost; decks with higher losses were considered high-risk decks. Electrodermal activity was measured concurrently in both situations. Relative to controls, TBI participants were found to have difficulty identifying expressions of surprise, sadness, anger, and fear. TBI persons were also found to be under-reactive, as measured by electrodermal activity, while passively viewing slides of negative expressions. No group difference in reactivity to high-risk card decks was observed.
The ability to identify emotions in the face, and electrodermal reactivity to faces and to high-risk decks in the card game, were examined in relation to social monitoring and empathy as described by family members or friends on the Brock Adaptive Functioning Questionnaire (BAFQ). Difficulties identifying negative expressions (i.e., sadness, anger, fear, and disgust) predicted problems in monitoring social situations. As well, a modest relationship was observed between hypo-arousal to negative faces and problems with social monitoring. Finally, hypo-arousal in anticipation of risk during the card game was related to problems in empathy. In summary, these data are consistent with the view that alterations in the ability to perceive emotional expressions in the face, and the disruption in arousal to personally-relevant information, may account for some of the difficulties in social adaptation often observed in persons who have sustained a TBI. Furthermore, these data provide modest support for Damasio's Somatic Marker Theory, in that physiological reactivity to socially-relevant information has some value in predicting social function. Therefore, the assessment of TBI persons, particularly those with adaptive behavioural problems, should be expanded to determine whether alterations in perception of and reactivity to socially-relevant stimuli have occurred. When this is the case, rehabilitative strategies aimed more specifically at these difficulties should be considered.
Abstract:
Psychopathy is associated with well-known characteristics such as a lack of empathy and impulsive behaviour, but it has also been associated with impaired recognition of emotional facial expressions. The use of event-related potentials (ERPs) to examine this phenomenon could shed light on the specific time course and neural activation associated with emotion recognition processes as they relate to psychopathic traits. In the current study we examined the P1, N170, and vertex positive potential (VPP) ERP components and behavioural performance with respect to scores on the Self-Report Psychopathy (SRP-III) questionnaire. Thirty undergraduates completed two tasks, the first of which required the recognition and categorization of affective face stimuli under varying presentation conditions. Happy, angry, or fearful faces were presented with attention directed to the mouth, nose, or eye region and with varied stimulus exposure durations (30, 75, or 150 ms). We found behavioural performance to be unrelated to psychopathic personality traits in all conditions, but there was a trend for the N170 to peak later in response to fearful and happy facial expressions for individuals high in psychopathic traits. However, the amplitude of the VPP was significantly negatively associated with psychopathic traits, but only in response to stimuli presented under a nose-level fixation. Finally, psychopathic traits were found to be associated with longer N170 latencies in response to stimuli presented at the 30 ms exposure duration. In the second task, participants were required to inhibit processing of irrelevant affective and scrambled face distractors while categorizing unrelated word stimuli as living or nonliving.
Psychopathic traits were hypothesized to be positively associated with behavioural performance, as it was proposed that individuals high in psychopathic traits would be less likely to automatically attend to task-irrelevant affective distractors, facilitating word categorization. Decreased interference would thus be reflected in smaller N170 components, indicating less neural activity associated with the processing of distractor faces. We found that overall performance decreased in the presence of angry and fearful distractor faces as psychopathic traits increased. In addition, the amplitude of the N170 decreased and its latency increased in response to affective distractor faces for individuals with higher levels of psychopathic traits. Although we failed to find the predicted behavioural deficit in emotion recognition in Task 1 and the predicted facilitation effect in Task 2, the findings of increased N170 and VPP latencies in response to emotional faces are consistent with the proposition that abnormal emotion recognition processes may in fact be inherent to psychopathy as a continuous personality trait.
Abstract:
Previously, studies investigating emotional face perception - regardless of whether they involved adults or children - presented participants with static photos of faces in isolation. In the natural world, faces are rarely encountered in isolation. In the few studies that have presented faces in context, the perception of emotional facial expressions is altered when paired with an incongruent context. For both adults and 8-year-old children, reaction times increase and accuracy decreases when facial expressions are presented in an incongruent context depicting a similar emotion (e.g., a sad face on a fearful body) compared to a congruent context (e.g., a sad face on a sad body; Meeren, van Heijnsbergen, & de Gelder, 2005; Mondloch, 2012). This congruency effect does not exist for dissimilar emotions (e.g., happy and sad; Mondloch, 2012). Two models characterize similarity between emotional expressions differently: the emotional seed model bases similarity on physical features, whereas the dimensional model bases similarity on the underlying dimensions of valence and arousal. Study 1 investigated the emergence of an adult-like pattern of congruency effects in pre-school-aged children. Using a child-friendly sorting task, we identified the youngest age at which children could accurately sort isolated facial expressions and body postures, and then measured whether an incongruent context disrupted the perception of emotional facial expressions. Six-year-old children showed congruency effects for sad/fear, but 4-year-old children did not for sad/happy. This pattern of congruency effects is consistent with both models and indicates that an adult-like pattern exists at the youngest age at which children can reliably sort emotional expressions in isolation. In Study 2, we compared the two models to determine their predictive abilities.
The two models make different predictions about the size of congruency effects for three emotions: sad, anger, and fear. The emotional seed model predicts larger congruency effects when sad is paired with either anger or fear compared to when anger and fear are paired with each other. The dimensional model predicts larger congruency effects when anger and fear are paired together compared to when either is paired with sad. In both a speeded and unspeeded task the results failed to support either model, but the pattern of results indicated fearful bodies have a special effect. Fearful bodies reduced accuracy, increased reaction times more than any other posture, and shifted the pattern of errors. To determine whether the results were specific to bodies, we ran the reverse task to determine if faces could disrupt the perception of body postures. This experiment did not produce congruency effects, meaning faces do not influence the perception of body postures. In the final experiment, participants performed a flanker task to determine whether the effect of fearful bodies was specific to faces or whether fearful bodies would also produce a larger effect in an unrelated task in which faces were absent. Reaction times did not differ across trials, meaning fearful bodies' large effect is specific to situations with faces. Collectively, these studies provide novel insights, both developmentally and theoretically, into how emotional faces are perceived in context.
Abstract:
In residential and long-term care centres (CHSLDs), up to 80% of admitted persons live with dementia (Conseil des aînés, 2007). Among these elderly persons, many scream. This behaviour has been the subject of a few studies, but those studies do not provide an understanding of the meanings of these screams that could guide the choice of interventions for these persons and those around them. Drawing on Leininger's nursing perspective (2001) and on Adams and Gardiner's theory of communication and interaction within triads (2005), the aim of this study was to describe the meanings of the screams of elderly persons living with dementia, and the factors influencing them, taking into account the perspectives of the elderly persons, their family caregivers, and their formal caregivers. To this end, critical ethnography was chosen as the method. The study took place in a CHSLD, in which seven triads, composed of elderly persons living with dementia who scream, their primary family caregivers, and formal caregivers, were recruited through theoretical sampling. Various data-collection methods were used, including observation of the elderly persons and their environment, and semi-structured interviews with the family and formal caregivers. Data were analysed using the four approaches proposed by Spradley (1979): domain, taxonomic, componential, and thematic analysis. The results identified themes concerning the meanings of the screams and the factors influencing them. Screams can convey the vulnerability, suffering, and loss of meaning experienced by the elderly person. This behaviour can also serve various purposes, for example the expression of dissatisfaction or of emotions. These purposes can be distinguished using modulation criteria such as the predictability of the screams or an emotive facial expression.
Moreover, various aspects make the meanings of the screams singular to each person. Stability and flexibility in the way care is organised, as well as the reciprocal effect between the elderly persons who scream and the other persons in the CHSLD, emerged as factors influencing the meanings of the screams. It also emerged that each elderly person's screams can be considered a unique language, one that family and formal caregivers are able to learn and that shapes the interpretation of the meanings of the screams. Accompanying the elderly person while taking into account their wishes, needs, and personality, fluctuations in power relations within the elder-family-caregiver triads, and the caregivers' feelings of powerlessness and guilt are other factors that influence the screams. The knowledge arising from this study deepens our understanding of the meanings of the screams of elderly persons living with dementia. It has implications for nurses' various fields of activity and can contribute to providing culturally coherent and caring care for elderly persons living with dementia and those around them.
Abstract:
In this thesis, the tales of three contemporary Québec storytellers - Jos Gallant by André Lemelin, Ti Pinge by Joujou Turenne, and L'entrain à vapeur by Fred Pellerin - are examined above all through a pragmatic reading, in order to better understand how a storyteller who performs from an outline conveys a fiction to a live audience or a readership. The study first presents a comparative analysis of each performance against the published version of the same tale, highlighting their points of convergence and divergence. The hypothesis advanced is that analysing the performances of storytellers who follow an outline reveals how the performative dimensions and the articulations of fictional discourse manifest themselves. Correlatively, examining the relationship between the storyteller and the audience then makes it possible to question the status of the narrator and to see in what way, and how, the fiction is shared with the audience during the performance. The analysis of performative utterances, inspired by the work of Kerbrat-Orecchioni, and the vectorisation dynamic proposed by Pavis for the study of gesture, facial expressions, and voice are brought to bear, with the further aim of identifying tools for analysing storytelling performances. At the end of this research, the author demonstrates the advantages of the outline, notably in the interactions it fosters with the audience and in the freedom it provides, allowing the storyteller to modify or adapt the discourse and expressive resources at each performance.
Abstract:
Children with dysphasia (developmental language disorder) are often said to have difficulty interacting socially. These difficulties are generally attributed to their language impairment, but they could also stem from a problem decoding other people's emotions. The aim of the present research is to explore this avenue in dysphasic children aged 9 to 12. Various emotional stimuli were presented to them, and to children in a control group, in video form under five conditions: unfiltered speech, filtered speech, dynamic face, dynamic face with unfiltered speech, and dynamic face with filtered speech. The dysphasic children and the control children did not behave significantly differently across the emotions presented and the conditions tested. However, a subgroup of children with mixed dysphasia made significantly more errors over the task as a whole than the subgroup of non-dysphasic children of the same chronological age. In fact, only some of the children with mixed dysphasia had lower scores. These same children had a low non-verbal IQ, while their language comprehension was equivalent to that of their subgroup (children with mixed dysphasia). Despite these significant differences, the scores of the children with mixed dysphasia remained relatively high, and the difficulties observed were subtle. Clinically, practitioners (speech-language pathologists, psychologists, educators) should systematically assess emotion-decoding skills in dysphasic children, whose difficulties are not necessarily evident in everyday life. Research should develop a screening tool sensitive to emotion-decoding disorders, along with adapted therapeutic strategies.
Abstract:
In conversational interaction, language is supported by non-verbal communication that plays a central role in human social behaviour, providing feedback and managing synchronisation, thereby reinforcing the content and meaning of the speech. Indeed, 55% of a message is conveyed by facial expressions, while only 7% is carried by the linguistic message and 38% by paralanguage. Information about a person's emotional state is generally inferred from facial attributes. However, there are few measurement instruments specifically dedicated to this type of behaviour. In computer vision, attention has focused on developing systems for the automatic analysis of prototypical facial expressions, for applications in human-computer interaction, meeting-video analysis, security, and even clinical settings. In the present research, to capture such observable indicators, we set out to implement a system capable of building a consistent and relatively exhaustive source of visual information, one able to distinguish facial features and their deformations and thus to recognise the presence or absence of a particular facial action. A review of existing techniques led us to explore two different approaches. The first is appearance-based: gradient orientations are used to derive a dense representation of facial attributes. Beyond the facial representation itself, the main difficulty for a system intended to be general is building a generic model that is independent of the person's identity and of the geometry and size of faces.
The approach we propose rests on building a prototypical reference frame from a SIFT-flow registration, whose superiority over conventional eye-position alignment is demonstrated in this thesis. In a second approach, we use a geometric model in which facial primitives are represented by Gabor filtering. Motivated by the fact that facial expressions are not only ambiguous and inconsistent from one person to another but also dependent on context, this approach presents a personalised facial expression recognition system whose overall performance depends directly on the performance of tracking a set of facial landmark points. This tracking is performed by a modified form of a disparity-estimation technique involving the Gabor phase. In this thesis, we propose a redefinition of the confidence measure and introduce an iterative, conditional displacement-estimation procedure that yields more robust tracking than the original methods.
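The Gabor-phase idea behind such disparity-based tracking can be sketched compactly: a complex Gabor filter's phase shifts in proportion to small displacements of the underlying pattern. The code below is a generic illustration with arbitrary kernel parameters, not the thesis's modified estimator:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Complex Gabor kernel: a Gaussian-windowed complex sinusoid.
    The phase of its response shifts with sub-pixel displacement of
    the pattern; the magnitude encodes local contrast."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    carrier = np.exp(1j * 2 * np.pi * xr / wavelength)
    return envelope * carrier

def gabor_response(patch, kernel):
    """Inner product of an image patch with the kernel:
    returns (magnitude, phase) of the complex response."""
    r = np.sum(patch * np.conj(kernel))
    return np.abs(r), np.angle(r)
```

Shifting a pattern by `d` pixels along the filter orientation rotates the response phase by about `-2*pi*d/wavelength`, which is the relation a phase-based disparity estimator inverts to recover the displacement of each landmark.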