Abstract:
Introduction: Observations of behaviour and research using eye-tracking technology have shown that individuals with Williams syndrome (WS) pay an unusual amount of attention to other people’s faces. The present research examines whether this attention to faces is moderated by the valence of emotional expression. Method: Sixteen participants with WS aged between 13 and 29 years (Mean=19 years 9 months) completed a dot-probe task in which pairs of faces displaying happy, angry and neutral expressions were presented. The performance of the WS group was compared to two groups of typically developing control participants, individually matched to the participants in the WS group on either chronological age or mental age. General mental age was assessed in the WS group using the Woodcock Johnson Test of Cognitive Ability Revised (WJ-COG-R; Woodcock & Johnson, 1989; 1990). Results: Compared to both control groups, the WS group exhibited a greater attention bias for happy faces. In contrast, no between-group differences in bias for angry faces were obtained. Conclusions: The results are discussed in relation to recent neuroimaging findings and the hypersocial behaviour that is characteristic of the WS population.
Abstract:
Objective. Interferences from spatially adjacent non-target stimuli are known to evoke event-related potentials (ERPs) during non-target flashes and, therefore, lead to false positives. This phenomenon is commonly seen in visual attention-based brain–computer interfaces (BCIs) using conspicuous stimuli and is known to adversely affect the performance of BCI systems. Although users try to focus on the target stimulus, they cannot help but be affected by conspicuous changes of the stimuli (such as flashes or presented images) that are adjacent to the target stimulus. Furthermore, subjects have reported that conspicuous stimuli made them tired and annoyed. In view of this, the aim of this study was to reduce adjacent interference, annoyance and fatigue using a new stimulus presentation pattern based upon facial expression changes. Our goal was not to design a new pattern which could evoke larger ERPs than the face pattern, but to design a new pattern which could reduce adjacent interference, annoyance and fatigue, and evoke ERPs as good as those observed during the face pattern. Approach. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, a facial expression change pattern between positive and negative facial expressions was used to attempt to minimize interference effects. This was compared against two different conditions: a shuffled pattern containing the same shapes and colours as the facial expression change pattern, but without the semantic content associated with a change in expression, and a face versus no-face pattern. Comparisons were made in terms of classification accuracy and information transfer rate as well as user-supplied subjective measures. Main results.
The results showed that interferences from adjacent stimuli, annoyance and the fatigue experienced by the subjects could be reduced significantly (p < 0.05) by using the facial expression change patterns in comparison with the face pattern. The offline results show that the classification accuracy of the facial expression change pattern was significantly better than that of the shuffled pattern (p < 0.05) and the face pattern (p < 0.05). Significance. The facial expression change pattern presented in this paper reduced interference from adjacent stimuli and decreased the fatigue and annoyance experienced by BCI users significantly (p < 0.05) compared to the face pattern.
Abstract:
Interferences from spatially adjacent non-target stimuli evoke ERPs during non-target sub-trials and lead to false positives. This phenomenon is commonly seen in visual attention-based BCIs and affects the performance of the BCI system. Although users tried to focus on the target stimulus, they still could not help being affected by conspicuous changes of the stimuli (flashes or presented images) that were adjacent to the target stimulus. In view of this, the aim of this study was to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared across objective measures such as classification accuracy and information transfer rate as well as subjective measures. Pattern_1 was a “flash-only” pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change patterns, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that the interferences from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the “flash-only” patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue could be significantly decreased (p < 0.05) using the new stimulus presentation pattern presented in this paper.
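The abstract above reports bit rates for the competing stimulus patterns. Assuming the standard Wolpaw definition of information transfer rate (the usual choice in P300-style BCI studies; the target count, accuracy and selection time below are illustrative, not taken from the study), a minimal sketch is:

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, trial_secs: float) -> float:
    """Wolpaw information transfer rate in bits/min.

    n_targets: number of selectable stimuli; accuracy: classification
    accuracy P in (0, 1]; trial_secs: time needed for one selection.
    """
    n, p = n_targets, accuracy
    if p >= 1.0:
        bits = math.log2(n)                      # perfect accuracy
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_secs

# Illustrative numbers only: a 36-item speller, 90% accuracy, 10 s/selection
print(round(wolpaw_itr(36, 0.90, 10.0), 2))      # → 25.13
```

Note that this formula assumes all targets are equally likely and all errors equally probable, which is why "practical bit rate" (accounting for error correction) is often reported alongside it.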
Abstract:
Joint attention (JA) and spontaneous facial mimicry (SFM) are fundamental processes in social interactions, and they are closely related to empathic abilities. When tested independently, both of these processes have usually been observed to be atypical in individuals with autism spectrum conditions (ASC). However, it is not known how these processes interact with each other in relation to autistic traits. This study addresses this question by testing the impact of JA on SFM of happy faces using a truly interactive paradigm. Sixty-two neurotypical participants engaged in gaze-based social interaction with an anthropomorphic, gaze-contingent virtual agent. The agent either established JA by initiating eye contact or looked away, before looking at an object and expressing happiness or disgust. Eye tracking was used to make the agent's gaze behavior and facial actions contingent on the participants' gaze. SFM of happy expressions was measured by electromyography (EMG) recorded over the zygomaticus major muscle. Results showed that JA augments SFM in individuals with low compared with high autistic traits. These findings are in line with reports of a reduced impact of JA on action imitation in individuals with ASC. Moreover, they suggest that investigating atypical interactions between empathic processes, instead of testing these processes individually, might be crucial to understanding the nature of social deficits in autism.
Abstract:
Background: Some studies have shown that a conventional visual brain–computer interface (BCI) based on overt attention cannot be used effectively when eye movement control is not possible. To solve this problem, a novel visual BCI system based on covert attention and feature attention has been proposed, called the gaze-independent BCI. Color and shape differences between stimuli and backgrounds have generally been used in examples of gaze-independent BCIs. Recently, a new paradigm based on facial expression changes has been presented and has obtained high performance. However, some facial expressions were so similar that users could not tell them apart, especially when they were presented at the same position in a rapid serial visual presentation (RSVP) paradigm. Consequently, the performance of the BCI is reduced. New Method: In this paper, we combined facial expressions and colors to optimize the stimulus presentation in the gaze-independent BCI. This optimized paradigm was called the colored dummy face pattern. It is suggested that different colors and facial expressions could help users to locate the target and evoke larger event-related potentials (ERPs). In order to evaluate the performance of this new paradigm, two other paradigms were presented, called the gray dummy face pattern and the colored ball pattern. Comparison with Existing Method(s): The key point that determined the value of the colored dummy face stimuli in BCI systems was whether they could obtain higher performance than gray face or colored ball stimuli. Ten healthy participants (seven male, aged 21–26 years, mean 24.5 ± 1.25) participated in our experiment. Online and offline results of the different paradigms were obtained and comparatively analyzed. Results: The results showed that the colored dummy face pattern could evoke higher P300 and N400 ERP amplitudes, compared with the gray dummy face pattern and the colored ball pattern.
Online results showed that the colored dummy face pattern had a significant advantage in terms of classification accuracy (p < 0.05) and information transfer rate (p < 0.05) compared to the other two patterns. Conclusions: The stimuli used in the colored dummy face paradigm combined color and facial expressions. Compared with the colored ball and gray dummy face stimuli, this gave a significant advantage in terms of evoked P300 and N400 amplitudes and resulted in high classification accuracies and information transfer rates.
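The P300 and N400 amplitude comparisons above are conventionally made on averaged epochs, with the amplitude taken as the peak (or mean) within a fixed post-stimulus latency window. A minimal sketch of that generic procedure on synthetic data (the sampling rate, window bounds and simulated component below are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def erp_average(epochs: np.ndarray) -> np.ndarray:
    """Average single-trial epochs (trials x samples) into one ERP waveform."""
    return epochs.mean(axis=0)

def peak_amplitude(erp: np.ndarray, fs: float, t_start: float, t_end: float) -> float:
    """Peak amplitude in a latency window (e.g. 250-500 ms for the P300).

    fs: sampling rate in Hz; t_start, t_end: window in seconds from onset.
    """
    lo, hi = int(t_start * fs), int(t_end * fs)
    return float(erp[lo:hi].max())

# Synthetic data: 20 trials of 1 s at 250 Hz with a 5 µV bump near 300 ms
fs = 250.0
t = np.arange(int(fs)) / fs
rng = np.random.default_rng(0)
epochs = 5.0 * np.exp(-(t - 0.3) ** 2 / 0.002) + rng.normal(0.0, 1.0, (20, t.size))
erp = erp_average(epochs)
print(peak_amplitude(erp, fs, 0.25, 0.50) > 3.0)  # → True: averaging reveals the peak
```

Averaging over trials shrinks the noise by roughly the square root of the trial count, which is what makes the component visible above the single-trial noise floor.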
Abstract:
The etiology of idiopathic peripheral facial palsy (IPFP) is still uncertain; however, some authors suggest the possibility of a viral infection. Aim: to analyze the ultrastructure of the facial nerve, seeking viral evidence that might provide etiological data. Material and Methods: We studied 20 patients with peripheral facial palsy (PFP), with moderate to severe FP, of both genders, between 18 and 60 years of age, from the Clinic of Facial Nerve Disorders. The patients were divided into two groups - Study: eleven patients with IPFP; Control: nine patients with trauma- or tumor-related PFP. The fragments were obtained from the facial nerve sheath or from fragments of its stumps, which would otherwise be discarded or sent for pathological examination during facial nerve repair surgery. The removed tissue was fixed in 2% glutaraldehyde and studied under transmission electron microscopy. Results: In the study group we observed intense cellular repair activity, with increased collagen fibers and fibroblasts containing well-developed organelles, free of viral particles. In the control group this repair activity was not evident, and no viral particles were observed either. Conclusion: No viral particles were found, but there was evidence of intense repair activity.
Abstract:
Cavernous sinus thrombosis is a severe encephalic complication of cervicofacial infections that can lead to death if not treated promptly. Among the several etiologies related to the development of this condition, myiasis has not previously been reported, reinforcing the importance of this report of a case of cavernous sinus thrombosis that developed from facial myiasis. (Quintessence Int 2010;41:e72-e74)
Abstract:
Here we report the clinical and genetic data for a large sample of Brazilian patients studied at the Hospital de Reabilitacao de Anomalias Craniofaciais-Universidade de Sao Paulo (HRAC-USP) who presented with either the classic holoprosencephaly (HPE) or the holoprosencephaly-like (HPE-L) phenotype. The sample included patients without detected mutations in some HPE-determinant genes such as SHH, GLI2, SIX3, TGIF, and PTCH, as well as the photographic documentation of the previously reported patients in our Center. The HPE-L phenotype has also been called HPE "minor forms" or "microforms." The variable phenotype, the challenge of genetic counseling, and the similarities to patients with isolated cleft lip/palate are discussed. (c) 2010 Wiley-Liss, Inc.
Abstract:
Genetic mutations responsible for oblique facial clefts (ObFC), a unique class of facial malformations, are largely unknown. We show that loss-of-function mutations in SPECC1L are pathogenic for this human developmental disorder and that SPECC1L is a critical organizer of vertebrate facial morphogenesis. During murine embryogenesis, Specc1l is expressed in cell populations of the developing facial primordia, which proliferate and fuse to form the face. In zebrafish, knockdown of a SPECC1L homolog produces a faceless phenotype with loss of jaw and facial structures, and knockdown in Drosophila phenocopies mutants in the integrin signaling pathway that exhibit cell-migration and -adhesion defects. Furthermore, in mammalian cells, SPECC1L colocalizes with both tubulin and actin, and its deficiency results in defective actin-cytoskeleton reorganization, as well as abnormal cell adhesion and migration. Collectively, these data demonstrate that SPECC1L functions in actin-cytoskeleton reorganization and is required for proper facial morphogenesis.
Abstract:
Conclusion. Hyperbaric oxygen treatment (HBOT) promoted an increase of the mean axonal diameter in the group evaluated 2 weeks after lesion induction, which suggests a more advanced regeneration process. However, the number of myelinated nerve fibers of the facial nerve of the rabbits was similar when comparing the control and treatment groups, in both evaluation periods. Objective. To evaluate the effect of HBOT on the histological pattern of the facial nerve in rabbits exposed to a nerve crush injury. Materials and methods. Twenty rabbits were exposed to facial nerve crush injury. Ten rabbits received HBOT; 10 rabbits comprised the control group. The rabbits were sacrificed 2 and 4 weeks after the trauma. Qualitative morphological analysis, measurement of the external axonal diameters and myelinated fiber counts were carried out in an area of 185,000 μm². Results. There was an increase in the area of the axons and thicker myelin in the 2-week treatment group in comparison with the control group. The mean diameter of the axons was 2.34 μm in the control group and 2.81 μm in the HBOT group, a statistically significant difference. The 2-week control group had a mean number of myelinated fibers of 186 ± 5.2664, and the HBOT group had a mean number of 2026.3 ± 302; this was not statistically significant. The 4-week control group presented a mean of 2495.1 ± 479 fibers and the HBOT group presented a mean of 2359.9 ± 473; this was not statistically significant.
Abstract:
Sign language animations can improve the accessibility of information and services for people who are deaf and have low literacy skills in spoken/written languages. Because sign languages differ from spoken/written languages in word order, syntax, and lexicon, many deaf people find it difficult to comprehend text on a computer screen or captions on a television. Animated characters performing sign language in a comprehensible way could make this information accessible. Facial expressions and other non-manual components play an important role in the naturalness and understandability of these animations, and their coordination with the manual signs is crucial for the interpretation of the signed message. Software to advance the support of facial expressions in the generation of sign language animation could make this technology more acceptable to deaf people. In this survey, we discuss the challenges in facial expression synthesis and we compare and critique the state-of-the-art projects on generating facial expressions in sign language animations. Beginning with an overview of facial expression linguistics, sign language animation technologies, and some background on animating facial expressions, we then discuss the search strategy and criteria used to select the five projects that are the primary focus of this survey. The survey continues by introducing the work of the five projects under consideration. Their contributions are compared in terms of support for a specific sign language, the categories of facial expressions investigated, the focus range in animation generation, the use of annotated corpora, the input data or hypothesis of their approach, and other factors. Strengths and drawbacks of the individual projects are identified from the perspectives above. The survey concludes with our current research focus in this area and future prospects.
Abstract:
The objective of this thesis is to propose an algorithm to detect faces in a digital image with a complex background. A lot of work has already been done in the area of face detection, but a drawback of some face detection algorithms is their inability to detect faces with closed eyes and an open mouth. Facial features thus form an important basis for detection. The present work focuses on the detection of faces based on facial objects. The procedure is composed of three phases: a segmentation phase, a filtering phase and a localization phase. In the segmentation phase, the algorithm uses color segmentation to isolate human skin color based on its chrominance properties. In the filtering phase, Minkowski-addition-based object removal (morphological operations) is used to remove the non-skin regions. In the last phase, image processing and computer vision methods are used to find facial components in the skin regions. This method is effective at detecting a face region with closed eyes, an open mouth or a half-profile face. The experimental results demonstrated a detection accuracy of around 85.4% and a faster detection speed when compared to the neural network method and other techniques.
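The first two phases described above (chrominance-based skin segmentation followed by morphological cleanup) can be sketched in a few lines. The YCbCr conversion below uses the standard ITU-R BT.601 coefficients, while the Cb/Cr skin thresholds and the 3×3 erosion are illustrative stand-ins for the thesis's actual parameters:

```python
import numpy as np

def rgb_to_cbcr(img: np.ndarray):
    """Chrominance planes (Cb, Cr) of an RGB image, BT.601 coefficients."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return cb, cr

def skin_mask(img: np.ndarray) -> np.ndarray:
    """Threshold the chrominance planes; these Cb/Cr bounds are illustrative."""
    cb, cr = rgb_to_cbcr(img)
    return (cb >= 77) & (cb <= 127) & (cr >= 133) & (cr <= 173)

def erode3x3(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion (Minkowski subtraction) with no external libraries."""
    h, w = mask.shape
    padded = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

# Toy image: an 8x8 black background with a 4x4 skin-toned square
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[2:6, 2:6] = (200, 140, 120)
mask = erode3x3(skin_mask(img))   # segmentation + morphological cleanup
print(mask.sum())                 # → 4 (only the square's interior survives)
```

In a full pipeline the cleaned mask would then be passed to the localization phase, which searches the surviving skin regions for facial components such as eyes and mouth.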
Abstract:
Needle fear is a common problem in children undergoing immunization. To ensure that the individual child's needs are met during a painful procedure, it would be beneficial to be able to predict whether there is a need for extra support. The self-report instrument the facial affective scale (FAS) could have potential for this purpose. The aim of this study was to evaluate whether the FAS can predict pain unpleasantness in girls undergoing immunization. Girls aged 11-12 years reported their expected pain unpleasantness on the FAS at least two weeks before and then their experienced pain unpleasantness immediately before each vaccination. The pain unpleasantness experienced during the vaccination was also reported immediately after each immunization. The level of anxiety was similarly assessed during each vaccination and supplemented with stress measures related to the procedure in order to assess concurrent validity. The results show that the FAS is valid for predicting pain unpleasantness in 11-12-year-old girls undergoing immunization and that it has the potential to be a feasible instrument for identifying children who need extra support to cope with immunization. In conclusion, the FAS measurement can facilitate caring interventions.