890 results for caranio-facial
Abstract:
2,3-Unsaturated 3-arylsulfinyl pyranosides undergo nucleophilic additions at C-2, with facial selectivities that depend on the nucleophile and on the substituent on the sulfinyl sulfur. The reactions of such sugar vinyl sulfoxides lead to addition of the nucleophile with a preferred axial orientation at C-2, with concomitant formation of an allylic bond between C-3 and C-4. This addition pattern is observed for primary amine, carbon, and sulfur nucleophiles, whereas secondary amines prefer equatorial addition at C-2. Comparing the p-tolylthio and (p-isopropylphenyl)thio vinyl sulfoxides, equatorial nucleophilic addition is favored even more strongly with the latter. (C) 2013 Published by Elsevier Ltd.
Abstract:
AIDS is no longer an acute disease with imminent death as its outcome. With the advent of potent antiretroviral therapy, the human immunodeficiency virus has been brought under control, making AIDS a chronic disease. However, potent antiretroviral therapy has adverse reactions, one of which is HIV lipodystrophy syndrome. One manifestation of this syndrome is facial lipoatrophy: loss of fat in the face. The Brazilian Ministry of Health has standardized the application of polymethylmethacrylate for facial rehabilitation. However, children and adolescents cannot undergo this procedure. For this population, the present study proposes myofunctional therapy. Objective: To verify the effects of myofunctional speech-language therapy in adolescents living with HIV/AIDS acquired through vertical transmission who present facial lipoatrophy. Methods: A speech-language assessment was performed before and after 12 speech-language therapy sessions, comprising structural evaluation, anthropometric facial measurements, photographic records, weight and height, the facial lipoatrophy index (ILA), and the facial disability index and social well-being index (IIF-IBES). The speech-language therapy used isotonic and isometric exercises for the face, cheeks, and tongue. The most recent data were also collected, such as CD4 count, viral load, and the history of the antiretroviral therapy used. Results: Of the 15 patients studied, 10 had facial lipoatrophy, as measured by the ILA. Four completed all the speech-language therapy sessions. In these patients, the anthropometric facial measurements became more harmonious, corroborating the findings of the photographic records and the structural evaluation. The ILA increased slightly in three patients. Conclusion: Speech-language therapy proved effective in the treatment of mild facial lipoatrophy. Readjustment of the stomatognathic functions, when necessary, is considered important. Other speech-language demands emerged in the studied population.
Abstract:
We have developed a novel human facial tracking system that operates in real time at video frame rate without needing any special hardware. The approach is based on the use of Lie algebra and uses three-dimensional feature points on the targeted human face. It is assumed that a roughly estimated facial model (the relative coordinates of the three-dimensional feature points) is known. First, the initial feature positions on the face are determined using a model-fitting technique. Then tracking proceeds in the following sequence: (1) capture the new video frame and render the feature points to the image plane; (2) search for the new positions of the feature points on the image plane; (3) obtain the Euclidean matrix from the motion vectors and the three-dimensional information for the points; and (4) rotate and translate the feature points using the Euclidean matrix, and render the new points on the image plane. The key algorithm of this tracker is the estimation of the Euclidean matrix by a least-squares technique based on Lie algebra. The resulting tracker performed very well on the task of tracking a human face.
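The abstract does not spell out the estimator, but the per-frame update it describes (solve for a rigid Euclidean motion from point correspondences by least squares using a Lie-algebra linearization, then rotate and translate the model points) can be illustrated with a short sketch. The code below is a minimal illustration under the assumption that matched three-dimensional positions of the feature points are available for consecutive frames; the function names and the synthetic data are illustrative, not taken from the paper.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ x == np.cross(v, x)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def so3_exp(omega):
    """Exponential map so(3) -> SO(3) (Rodrigues' formula)."""
    theta = np.linalg.norm(omega)
    if theta < 1e-12:
        return np.eye(3)
    k = omega / theta
    K = skew(k)
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

def estimate_rigid_motion(points_prev, points_curr):
    """
    Least-squares estimate of a small rigid motion (omega, t) such that
    points_curr ~ points_prev + cross(omega, points_prev) + t,
    i.e. the first-order (Lie-algebra) approximation of X' = R X + t.
    points_prev, points_curr: (N, 3) arrays of corresponding 3-D feature points.
    Returns the rotation matrix R (via the exponential map) and translation t.
    """
    N = points_prev.shape[0]
    A = np.zeros((3 * N, 6))
    b = (points_curr - points_prev).reshape(-1)
    for i, X in enumerate(points_prev):
        A[3 * i:3 * i + 3, :3] = -skew(X)   # derivative w.r.t. omega of omega x X
        A[3 * i:3 * i + 3, 3:] = np.eye(3)  # derivative w.r.t. t
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    omega, t = x[:3], x[3:]
    return so3_exp(omega), t

# Per-frame update: rotate and translate the model points; in a full tracker
# they would then be re-projected onto the image plane for the next search step.
prev = np.random.rand(10, 3)
true_R = so3_exp(np.array([0.01, -0.02, 0.005]))
true_t = np.array([0.1, 0.0, -0.05])
curr = prev @ true_R.T + true_t
R, t = estimate_rigid_motion(prev, curr)
updated = prev @ R.T + t
```

Because the inter-frame motion is small, the linearized solution is close to the true rigid transformation; a real tracker would iterate this step once per captured frame.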
Abstract:
In this paper, a novel algorithm for removing facial makeup disturbances, used as a preprocessing step for face detection and based on high-dimensional imaginal geometry, is proposed. The algorithm is analyzed theoretically after simulation and practical-application experiments. Its clear effect in removing facial makeup, and the advantages of face detection with this preprocessing over face detection without it, are discussed. Furthermore, in our experiments with color images, the proposed algorithm even yields some surprising results.
Abstract:
Video-based facial expression recognition is a challenging problem in computer vision and human-computer interaction. To target this problem, texture features have been extracted and widely used, because they can capture image intensity changes caused by skin deformation. However, existing texture features encounter problems with albedo and lighting variations. To solve both problems, we propose a new texture feature called image ratio features. Compared with previously proposed texture features, e.g., high gradient component features, image ratio features are more robust to albedo and lighting variations. In addition, to further improve facial expression recognition accuracy based on image ratio features, we combine image ratio features with facial animation parameters (FAPs), which describe the geometric motions of facial feature points. The performance evaluation is based on the Carnegie Mellon University Cohn-Kanade database, our own database, and the Japanese Female Facial Expression database. Experimental results show that the proposed image ratio feature is more robust to albedo and lighting variations, and the combination of image ratio features and FAPs outperforms each feature alone. In addition, we study asymmetric facial expressions based on our own facial expression database and demonstrate the superior performance of our combined expression recognition system.
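The abstract does not define image ratio features precisely, but the general ratio-image idea behind albedo robustness can be sketched: under a Lambertian model the observed intensity factors into albedo times shading, so the pixel-wise ratio of an expression frame to an aligned neutral frame of the same subject cancels the albedo term. The snippet below only illustrates that idea, with an assumed neutral reference frame and a hypothetical patch-extraction step; the paper's actual feature definition may differ.

```python
import numpy as np

def image_ratio(expression_img, neutral_img, eps=1e-3):
    """
    Pixel-wise ratio of an expression frame to an aligned neutral frame.
    Under a Lambertian model I(x) = albedo(x) * shading(x), the albedo term
    cancels in the ratio, which is why ratio-type features are less sensitive
    to skin albedo (and, to a degree, to slowly varying lighting).
    Both inputs: float arrays in [0, 1] with the same shape, already registered.
    """
    expression = expression_img.astype(np.float64)
    neutral = neutral_img.astype(np.float64)
    ratio = (expression + eps) / (neutral + eps)  # eps avoids division by zero
    return np.log(ratio)  # log-ratio gives a symmetric, additive feature

# Usage sketch: extract a feature vector for a patch around a facial feature
# point (e.g. a mouth corner), assuming both frames are geometrically aligned.
rng = np.random.default_rng(0)
neutral = rng.uniform(0.2, 0.8, size=(64, 64))
expression = np.clip(neutral * rng.uniform(0.9, 1.1, size=(64, 64)), 0.0, 1.0)
patch = image_ratio(expression, neutral)[20:36, 24:40]
feature_vector = patch.reshape(-1)
```

In the paper this kind of appearance feature is then combined with FAP-style geometric motion features before classification; the combination step itself is not shown here.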
Abstract:
Purpose - The aim of this study was to investigate whether the presence of a whole-face context during facial composite production facilitates construction of facial composite images. Design/Methodology - In Experiment 1, constructors viewed a celebrity face and then developed a facial composite using PRO-fit in one of two conditions: either the full face was visible while facial features were selected, or only the feature currently being selected was visible. The composites were named by different participants. We then replicated the study using a more forensically valid procedure: in Experiment 2, non-football fans viewed an image of a premiership footballer and 24 hours later constructed a composite of the face with a trained software operator. The resulting composites were named by football fans. Findings - In both studies we found that presence of the facial context promoted more identifiable facial composite images. Research limitations/implications - Though this study uses current software in an unconventional way, this was necessary to avoid error arising from between-system differences. Practical implications - Results confirm that composite software should have the whole-face context visible to witnesses throughout construction. Though some software systems do this, there remain others that present features in isolation, and these findings show that those systems are unlikely to be optimal. Originality/value - This is the first study to demonstrate the importance of a full-face context for the construction of facial composite images. Results are valuable to police forces and developers of composite software.
Abstract:
Facial features play an important role in expressing grammatical information in signed languages, including American Sign Language (ASL). Gestures such as raising or furrowing the eyebrows are key indicators of constructions such as yes-no questions. Periodic head movements (nods and shakes) are also an essential part of the expression of syntactic information, such as negation (associated with a side-to-side headshake). Therefore, identification of these facial gestures is essential to sign language recognition. One problem with detection of such grammatical indicators is occlusion recovery. If the signer's hand blocks his/her eyebrows during production of a sign, it becomes difficult to track the eyebrows. We have developed a system to detect such grammatical markers in ASL that recovers promptly from occlusion. Our system detects and tracks evolving templates of facial features, which are based on an anthropometric face model, and interprets the geometric relationships of these templates to identify grammatical markers. It was tested on a variety of ASL sentences signed by various Deaf native signers and detected facial gestures used to express grammatical information, such as raised and furrowed eyebrows as well as headshakes.
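The abstract states that the system interprets the geometric relationships of tracked feature templates, grounded in an anthropometric face model, to identify markers such as raised or furrowed eyebrows. The sketch below shows one plausible form of such a geometric test; the landmark names, the neutral-frame comparison, and the thresholds are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

# Hypothetical landmark container: 2-D positions (in pixels) of a few tracked
# facial feature templates for one video frame.
@dataclass
class FrameLandmarks:
    left_brow_y: float
    right_brow_y: float
    left_eye_y: float
    right_eye_y: float
    interocular_dist: float  # anthropometric normaliser (eye-centre distance)

def brow_raise_score(frame: FrameLandmarks, neutral: FrameLandmarks) -> float:
    """
    Normalised change in brow-to-eye distance relative to a neutral frame.
    Positive values ~ raised eyebrows, negative ~ furrowed/lowered brows.
    Distances are divided by the interocular distance so the score is roughly
    invariant to face scale (distance from the camera).
    """
    def brow_eye_gap(f: FrameLandmarks) -> float:
        left = f.left_eye_y - f.left_brow_y    # image y grows downwards
        right = f.right_eye_y - f.right_brow_y
        return 0.5 * (left + right) / f.interocular_dist

    return brow_eye_gap(frame) - brow_eye_gap(neutral)

# Hypothetical decision rule: a sustained run of frames above +0.05 would be
# labelled "raised brows" (e.g. a yes-no question marker); below -0.05,
# "furrowed brows". Headshake detection would instead look at periodic
# horizontal oscillation of the head, which is not sketched here.
neutral = FrameLandmarks(110, 112, 130, 131, 62.0)
current = FrameLandmarks(104, 105, 130, 131, 62.0)
print(brow_raise_score(current, neutral))
```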
Abstract:
Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, my dissertation