890 results for cranio-facial
Abstract:
The aims of this study were to examine preterm infants' reactions to pain in detail over prolonged time periods using multiple measures, and to assess the value of including specific body movements from the Neonatal Individualized Developmental Care and Assessment Program (NIDCAP) system when evaluating pain. Ten preterm infants (mean gestational age [GA] at birth 31 weeks; mean birth weight 1676 g) were studied during a routine blood collection in a Level III neonatal intensive care unit (NICU). At 32 weeks post-conceptional age, computerized physiologic and video recordings were obtained continuously for 60 min (before, during, and after the lance). Motor and facial behaviors were coded independently, using the NIDCAP and the Neonatal Facial Coding System (NFCS) respectively, and compared with heart rate (HR) and oxygen saturation responses. Of the movements hypothesized to be stress cues in the NIDCAP model, extension of the arms and legs (80%) and finger splay (70%) were the most common following the lance. Contrary to the model, most infants (70%) showed a lower incidence of twitches and startles post-lance compared to baseline. Whereas all infants showed some NFCS response to the lance, for three infants the magnitude was low. HR increased and oxygen saturation decreased post-lance. Infants with more prior pain exposure, lower Apgar scores, and lower GA at birth displayed more motor stress cues but less facial activity post-lance. Extension of the extremities and finger splay from the NIDCAP, but not twitches and startles, appear to be stress cues and show promise as clinical pain indicators to supplement facial and physiological pain measures in preterm infants.
Abstract:
Children's judgements about pain at age 8-10 years were examined by comparing two groups of children with different exposure to nociceptive procedures in the neonatal period: extremely low birthweight (ELBW, ≤1000 g; N = 47) and full birthweight (FBW, ≥2500 g; N = 37). The 24 pictures that comprise the Pediatric Pain Inventory, depicting events in four settings (medical, recreational, daily living, and psychosocial), were used as the pain stimuli. The subjects rated pain intensity using the Color Analog Scale and pain affect using the Facial Affective Scale. Child IQ and maternal education were statistically adjusted in group comparisons. Pain intensity and pain affect related to activities of daily living and recreation were significantly higher than psychosocial and medically related pain on both scales in both groups of children. Although the two groups did not differ overall in their perceptions of pain intensity or affect, the ELBW children rated medical pain intensity significantly higher than psychosocial pain, unlike the FBW group. Also, duration of neonatal intensive care unit stay for the ELBW children was related to increased pain affect ratings in recreational and daily living settings. Despite altered responses to pain in the early years reported by parents, at 8-10 years of age ELBW children on the whole judged pain in pictures similarly to their term peers. However, differences were evident, which suggests that studies are needed of biobehavioural reactivity to pain beyond infancy, as well as research into beliefs, attitudes, and perceptions about pain during childhood in formerly ELBW children.
Abstract:
Caretakers intuitively use various sources of evidence when judging infant pain, but the relative importance of salient cues has received little attention. This investigation examined the predictive significance for judgements of painful discomfort in preterm and full-term neonates of behavioural (facial activity and body movement), contextual (invasiveness of the procedure), and developmental (gestational age) information. Judges viewed videotapes showing infants varying in the foregoing characteristics undergoing heel incisions for routine blood sampling purposes. Findings indicated that all but the contextual information contributed uniquely to judgements of pain, with facial activity accounting for the most unique variance (35%), followed by bodily activity and gestational age, accounting for 3% and 1% of the judgmental variance, respectively. Overall, 71% of the variance in ratings of pain could be predicted using facial activity alone, compared to 30% of the variance using bodily activity alone, 19% relying on context alone, and 8% referring to gestational age alone. Noteworthy was the tendency to judge early preterm infants as experiencing less pain even though they were subjected to the same invasive procedure as the older infants. This finding runs counter to evidence from developmental neurobiology indicating that preterm newborns may be hypersensitive to invasive procedures.
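The abstract's variance-decomposition logic (a cue's unique variance is the drop in R² when that cue is removed from the full model) can be sketched with ordinary least squares on synthetic data. All variable names and effect sizes below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical cues driving simulated pain judgements
facial = rng.normal(size=n)
body = rng.normal(size=n)
gest_age = rng.normal(size=n)
pain_rating = 0.8 * facial + 0.2 * body + 0.1 * gest_age + rng.normal(scale=0.5, size=n)

def r_squared(predictors, y):
    """R^2 of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

full = r_squared([facial, body, gest_age], pain_rating)
reduced = r_squared([body, gest_age], pain_rating)
unique_facial = full - reduced  # variance explained only by facial activity
```

With the simulated weights above, facial activity carries most of the unique variance, mirroring the pattern the study reports.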
Abstract:
The impact of invasive procedures on preterm neonates has received little systematic attention. We examined facial activity, body movements, and physiological measures in 56 preterm and full-term newborns in response to heel lancing, along with comparison preparatory and recovery intervals. The measures were recorded in special care and full-term nurseries during routine blood sampling. Data analyses indicated that in all measurement categories reactions of greatest magnitude were to the lancing procedure. Neonates with gestational ages as short as 25-27 weeks displayed physiological responsivity to the heel lance, but only in the heart rate measure did this vary with gestational age. Bodily activity was diminished in preterm neonates in general, relative to full-term newborns. Facial activity increased with the gestational age of the infant. Specificity of the response to the heel lance was greatest on the facial activity measure. Identification of pain requires attention to gestational age in the preterm neonate.
Abstract:
The purpose of this study was to examine the behavioural responses of infants to painful stimuli across different developmental ages. Eighty infants were included in this cross-sectional design, in four subsamples of 20 infants each: (1) premature infants between 32 and 34 weeks gestational age undergoing a heel-stick procedure; (2) full-term infants receiving an intramuscular vitamin K injection; (3) 2-month-old infants receiving a subcutaneous injection for immunisation against DPT; and (4) 4-month-old infants receiving a subcutaneous injection for immunisation against DPT. Audio and video recordings were made for 15 s from the stimulus. Cry analysis was conducted on the first full expiratory cry by fast Fourier transform (FFT), with time and frequency measures. Facial action was coded using the Neonatal Facial Coding System (NFCS). Results from multivariate analysis showed that premature infants differed from older infants, that full-term newborns differed from the others, but that 2- and 4-month-olds were similar. The specific variables contributing to the significance were higher-pitched cries and more horizontal mouth stretch in the premature group, and more taut tongue in the full-term newborns. The results imply that the premature infant has the basis for communicating pain via facial actions, but that these are not yet well developed. The full-term newborn is better equipped to interact with caretakers and express distress through specific facial actions. The cries of the premature infant, however, have more of the characteristics that arouse the listener and serve to alert the caregiver to the state of distress from pain.
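The FFT-based pitch analysis described above can be illustrated on a synthetic signal; the 500 Hz fundamental, harmonic structure, and sampling rate below are assumptions for the demonstration, not values from the study:

```python
import numpy as np

fs = 8000                      # sampling rate (Hz), an assumption
t = np.arange(0, 0.5, 1 / fs)  # 0.5 s segment
rng = np.random.default_rng(1)

# Synthetic stand-in for an expiratory cry: 500 Hz fundamental plus one harmonic
cry = (np.sin(2 * np.pi * 500 * t)
       + 0.5 * np.sin(2 * np.pi * 1000 * t)
       + 0.05 * rng.normal(size=t.size))

# Windowed FFT; for this signal the strongest spectral peak is the fundamental
spectrum = np.abs(np.fft.rfft(cry * np.hanning(t.size)))
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f0 = freqs[np.argmax(spectrum)]  # estimated pitch, Hz
```

Real cry analysis would track the fundamental over time rather than take a single global peak, but the frequency-domain principle is the same.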
Abstract:
This paper presents a novel method of audio-visual feature-level fusion for person identification where both the speech and facial modalities may be corrupted, and there is a lack of prior knowledge about the corruption. Furthermore, we assume there is a limited amount of training data for each modality (e.g., a short training speech segment and a single training facial image for each person). A new multimodal feature representation and a modified cosine similarity are introduced to combine and compare bimodal features with limited training data, as well as vastly differing data rates and feature sizes. Optimal feature selection and multicondition training are used to reduce the mismatch between training and testing, thereby making the system robust to unknown bimodal corruption. Experiments have been carried out on a bimodal dataset created from the SPIDRE speaker recognition database and AR face recognition database with variable noise corruption of speech and occlusion in the face images. The system's speaker identification performance on the SPIDRE database, and facial identification performance on the AR database, is comparable with the literature. Combining both modalities using the new method of multimodal fusion leads to significantly improved accuracy over the unimodal systems, even when both modalities have been corrupted. The new method also shows improved identification accuracy compared with the bimodal systems based on multicondition model training or missing-feature decoding alone.
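A minimal sketch of feature-level fusion with cosine scoring, assuming per-modality L2 normalisation to balance the differing feature sizes; the paper's exact multimodal representation and modified similarity differ in detail, and the feature dimensions below are invented:

```python
import numpy as np

def fuse(speech_vec, face_vec):
    """L2-normalise each modality before concatenating, so neither
    dominates due to its scale or dimensionality."""
    s = speech_vec / np.linalg.norm(speech_vec)
    f = face_vec / np.linalg.norm(face_vec)
    return np.concatenate([s, f])

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
# Hypothetical sizes: 60-dim speech features, 300-dim face features
enroll_speech, enroll_face = rng.normal(size=60), rng.normal(size=300)
probe_speech = enroll_speech + 0.1 * rng.normal(size=60)  # same person, mild corruption
probe_face = enroll_face + 0.1 * rng.normal(size=300)

enrolled = fuse(enroll_speech, enroll_face)
probe = fuse(probe_speech, probe_face)
impostor = fuse(rng.normal(size=60), rng.normal(size=300))

genuine_score = cosine_similarity(enrolled, probe)      # high: same identity
impostor_score = cosine_similarity(enrolled, impostor)  # near zero: unrelated
```

Identification then amounts to picking the enrolled identity with the highest fused score for a probe.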
Abstract:
Emotion research has long been dominated by the “standard method” of displaying posed or acted static images of facial expressions of emotion. While this method has been useful, it cannot investigate the dynamic nature of emotion expression. Although continuous self-report traces have enabled the measurement of dynamic expressions of emotion, no consensus has been reached on the statistical techniques that permit inferences to be made with such measures. We propose Generalized Additive Models (GAMs) and Generalized Additive Mixed Models (GAMMs) as techniques that can account for the dynamic nature of such continuous measures. These models allow us to hold constant shared components of responses that are due to perceived emotion across time, while enabling inference concerning linear differences between groups. The mixed-model GAMM approach is preferred, as it can account for autocorrelation in time-series data and allows emotion-decoding participants to be modelled as random effects. To increase confidence in linear differences, we assess methods that address interactions between categorical variables and dynamic changes over time. In addition, we comment on the use of Generalized Additive Models to assess the effect size of shared perceived emotion, and discuss sample sizes. Finally, we address additional uses: the inference of feature detection, continuous-variable interactions, and the measurement of ambiguity.
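The autocorrelation problem that motivates the GAMM approach can be illustrated by simulating a continuous rating trace as an AR(1) process; the coefficient and the effective-sample-size formula below are standard textbook quantities, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a continuous emotion-rating trace as an AR(1) process
n, phi = 1000, 0.9
trace = np.zeros(n)
for i in range(1, n):
    trace[i] = phi * trace[i - 1] + rng.normal()

def lag1_autocorr(x):
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

rho = lag1_autocorr(trace)
# For autocorrelated samples the effective sample size is far below n;
# naive per-timepoint tests that ignore this overstate the evidence,
# which is why the GAMM models the autocorrelation explicitly
n_eff = n * (1 - rho) / (1 + rho)
```

With rho near 0.9, a 1000-point trace carries roughly the information of only a few dozen independent observations.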
Abstract:
The overall aim of this study was to assess the accuracy, reproducibility and stability of a high-resolution passive stereophotogrammetry system for imaging a female mannequin torso, to validate measurements made on the textured virtual surface against those obtained using manual techniques, and to develop an approach for making objective measurements of the female breast. 3D surface imaging was carried out on a textured female torso and measurements were made in accordance with the system of mammometrics. Linear measurement errors were less than 0.5 mm, system calibration produced errors of less than 1.0 mm over 94% of the surface, and intra-rater reliability was high (ICC = 0.999). The mean difference between manual and digital curved surface distances was 1.36 mm, with maximum and minimum differences of 3.15 mm and 0.02 mm, respectively. The stereophotogrammetry system has been demonstrated to perform accurately and reliably with specific reference to breast assessment. © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
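Intra-rater reliability of repeated surface measurements is commonly quantified with an intraclass correlation. A sketch of ICC(3,1) (two-way mixed model, single measurement, consistency) on hypothetical distance measurements; the choice of this ICC variant and all values below are illustrative assumptions:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, single measurement, consistency.
    ratings: array of shape (n_subjects, k_raters)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
    ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    ms_subj = ss_subj / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err)

# Hypothetical repeated surface-distance measurements (mm), two sessions
session1 = np.array([120.1, 95.3, 140.8, 110.2, 132.5])
session2 = np.array([120.3, 95.1, 141.0, 110.4, 132.2])
icc = icc_3_1(np.column_stack([session1, session2]))
```

Here the between-session differences are tiny relative to the spread across measured distances, so the ICC comes out close to 1, as in the study.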
Abstract:
Background: This study was designed to evaluate the structures, muscles, and fasciae of which the modiolus is composed. It can aid the understanding, and therefore the use, of plastic surgery for the aesthetic or reconstructive treatment of that region, especially the angle of the mouth. Methods: Dissections of the midface were done on five cadavers of different races (3 male, 2 female). The anatomy of the modiolus was studied in detail. New anatomical observations were classified as types I through VI. Results: The perifacial artery fascia contributed to the modiolus in four (80%) specimens and was not part of it in one (20%) specimen. The facial artery was anterior to the modiolus in one (20%) specimen, lateral to it in four (80%) specimens, and never medial to it. No significant relationship was observed between the perifacial artery fascia contribution to the modiolus and gender or race. Likewise, the location of the facial artery lateral or anterior to the modiolus was not significantly related to gender or race. In addition, the deep and superficial fasciae of the face converged not anterior to the masseter muscle but at the modiolus, which differs from observations made by others. Conclusion: The modiolus is of critical importance in aesthetic and reconstructive plastic surgery of the face. © 2008 Springer Science+Business Media, LLC and International Society of Aesthetic Plastic Surgery.
Abstract:
Introduction: The use of light as a stimulus in pharmaceutical systems has increased, owing to its ability to provide precise spatiotemporal control over location, wavelength, and intensity, allowing ease of external control independent of environmental conditions. Of particular note is the use of light with photosensitisers.
Areas covered: Photosensitisers are widely used in photodynamic therapy to cause a cidal effect towards cells on irradiation due to the generation of reactive oxygen species. These cidal effects have also been used to treat infectious diseases. The effects and benefits of photosensitisers in the treatment of such conditions are still being developed and further realised, with the design of novel delivery strategies. This review provides an overview of the realisation of the pharmaceutically relevant uses of photosensitisers, both in the context of current research and in terms of current clinical application, and looks to the future direction of research.
Expert opinion: Substantial advances have been, and are being, made in the use of photosensitisers. Of particular note are their antimicrobial applications, owing to the absence of the resistance so frequently associated with conventional treatments. Their potency of action and their ability to be immobilised on polymeric supports are opening up a wide range of possibilities, with great potential for use in healthcare infection-prevention strategies.
Abstract:
Previous investigators have not described some of the newly identified anatomic variations, nor provided quantitative and analytical data on the arterial anatomy of the lips in as much depth as this study. Dissections of 14 facial sides of cadavers were performed. In investigating the arterial supply of the upper and lower lips, measurements were taken and statistically analyzed. The main arterial supply of the upper lip was from the superior labial artery (SLA; mean external diameter, 1.8 mm [SD, 0.74 mm]); in addition, the subalar and septal branches contributed to its vascularization. The origin of the SLA was above the labial commissure in 78.6% of specimens. In one specimen the subalar branch was not found but was replaced by an alar artery arising from the infraorbital artery. The main arterial supply of the lower lip was derived from 3 branches of the facial artery: the inferior labial artery (mean external diameter, 1.4 mm [SD, 0.31 mm]) and the horizontal and vertical labiomental arteries. The inferior labial artery most often originated below the labial commissure (42.9%) and formed a common trunk with the SLA in 28.6%. The horizontal labiomental artery was present in all specimens, but the vertical labiomental artery was absent in 21.4%. The observed anatomic variations were classified into types I to VIII. Significant relations between demographic variables and measured parameters were reported, including correlation coefficients among the evaluated parameters. In conclusion, this study provides information that aids in creating new flaps and supports the vascular basis for clinical procedures in reconstructive surgery of the lip.
Abstract:
Despite the importance of laughter in social interactions, it remains little studied in affective computing. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received almost no attention. The aim of this study is twofold. The first aim is to investigate observers' perception of laughter states (hilarious, social, awkward, fake, and non-laughter) based on body movements alone, through their categorization of avatars animated with natural and acted motion-capture data. Significant differences in torso and limb movements were found between animations perceived as containing laughter and those perceived as non-laughter. Hilarious laughter also differed from social laughter in the amount of bending of the spine, the amount of shoulder rotation, and the amount of hand movement. The body-movement features indicative of laughter differed between sitting and standing avatar postures. Based on the positive findings of this perceptual study, the second aim is to investigate the possibility of automatically predicting the distributions of observers' ratings for the laughter states. The findings show that the automated laughter recognition rates approach human rating levels, with the Random Forest method yielding the best performance.
Abstract:
Despite its importance in social interactions, laughter remains little studied in affective computing. Intelligent virtual agents are often blind to users' laughter and unable to produce convincing laughter themselves. Respiratory, auditory, and facial laughter signals have been investigated, but laughter-related body movements have received less attention. The aim of this study is threefold. First, to probe human laughter perception by analyzing patterns of categorisations of natural laughter animated on a minimal avatar. Results reveal that a low-dimensional space can describe perception of laughter “types”. Second, to investigate observers' perception of laughter (hilarious, social, awkward, fake, and non-laughter) based on animated avatars generated from natural and acted motion-capture data. Significant differences in torso and limb movements are found between animations perceived as laughter and those perceived as non-laughter. Hilarious laughter also differs from social laughter. Different body-movement features were indicative of laughter in sitting and standing avatar postures. Third, to investigate whether laughter can be recognised automatically with the same level of certainty as observers' perceptions. Results show that recognition rates of the Random Forest model approach human rating levels. Classification comparisons and feature importance analyses indicate an improvement in recognition of social laughter when localized features and nonlinear models are used.
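A sketch of the Random Forest classification step using scikit-learn on synthetic body-movement features; the feature names, class means, and two-class setup are illustrative assumptions, not the study's data or full five-state task:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 200

# Hypothetical features per animation clip:
# [spine bend, shoulder rotation, hand movement]
hilarious = rng.normal(loc=[2.0, 1.5, 1.8], scale=0.5, size=(n, 3))
social = rng.normal(loc=[0.8, 0.6, 0.7], scale=0.5, size=(n, 3))
X = np.vstack([hilarious, social])
y = np.array([1] * n + [0] * n)  # 1 = hilarious, 0 = social

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_  # which movement cues drive the split
train_acc = clf.score(X, y)
```

The `feature_importances_` vector is what supports the paper's style of feature-importance analysis, indicating which body-movement cues the ensemble relies on.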
Abstract:
We study the sensitivity of a MAP configuration of a discrete probabilistic graphical model with respect to perturbations of its parameters. These perturbations are global, in the sense that simultaneous perturbations of all the parameters (or any chosen subset of them) are allowed. Our main contribution is an exact algorithm that can check whether the MAP configuration is robust with respect to given perturbations. Its complexity is essentially the same as that of obtaining the MAP configuration itself, so it can be promptly used with minimal effort. We use our algorithm to identify the largest global perturbation that does not induce a change in the MAP configuration, and we successfully apply this robustness measure in two practical scenarios: the prediction of facial action units with posed images and the classification of multiple real public data sets. A strong correlation between the proposed robustness measure and accuracy is verified in both scenarios.
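A simplified stand-in for the robustness check: given scores over candidate configurations, test whether the top (MAP) choice survives a worst-case multiplicative perturbation of size eps. The paper's exact algorithm perturbs the graphical model's parameters rather than posterior values directly; this sketch only conveys the worst-case comparison idea:

```python
import numpy as np

def map_is_robust(probs, eps):
    """True if the MAP outcome is unchanged under any multiplicative
    perturbation of each probability by a factor in [1 - eps, 1 + eps]."""
    probs = np.asarray(probs, dtype=float)
    best = int(np.argmax(probs))
    worst_map = probs[best] * (1 - eps)          # shrink the MAP's probability
    rivals = np.delete(probs, best) * (1 + eps)  # inflate every rival
    return bool(worst_map > rivals.max())

posterior = [0.6, 0.3, 0.1]
robust_small = map_is_robust(posterior, eps=0.2)  # 0.48 > 0.36 -> True
robust_large = map_is_robust(posterior, eps=0.4)  # 0.36 < 0.42 -> False
```

The largest eps for which the check still passes plays the role of the paper's robustness measure: a larger margin between the MAP and its rivals tolerates a larger perturbation.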
Abstract:
This paper explores the application of semi-qualitative probabilistic networks (SQPNs) that combine numeric and qualitative information to computer vision problems. Our version of SQPN allows qualitative influences and imprecise probability measures using intervals. We describe an Imprecise Dirichlet model for parameter learning and an iterative algorithm for evaluating posterior probabilities, maximum a posteriori and most probable explanations. Experiments on facial expression recognition and image segmentation problems are performed using real data.
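The imprecise Dirichlet model mentioned for parameter learning yields interval rather than point estimates of category probabilities. A minimal sketch with prior strength s = 2 (a common choice) and hypothetical expression-label counts:

```python
def idm_interval(count, total, s=2.0):
    """Imprecise Dirichlet model: lower and upper posterior expectations
    of a category's probability, given prior strength s."""
    return count / (total + s), (count + s) / (total + s)

# Hypothetical counts for an expression-recognition training set
counts = {"happy": 30, "sad": 10, "neutral": 20}
total = sum(counts.values())
intervals = {label: idm_interval(n, total) for label, n in counts.items()}
# e.g. "happy" -> (30/62, 32/62); the interval width s/(total + s)
# shrinks as more data accumulate
```

These interval-valued parameters are exactly the kind of imprecise probability measure the SQPN framework propagates when evaluating posteriors.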