126 results for facial fracture
Abstract:
Introduction: In an attempt to reduce stress shielding in the proximal femur, multiple new shorter stem designs have become available. We investigated the load to fracture of a new polished tapered cemented short stem in comparison to the conventional polished tapered Exeter stem. Method: A total of forty-two stems, twenty-one short stems and twenty-one conventional stems, each with three different offsets, were cemented in a composite sawbone model and loaded to fracture. Results: The study showed that femurs break at a significantly lower load to failure with a shorter stem than with a conventional-length Exeter stem. Conclusion: Both standard and short stem designs are safe to use, as the torque to failure is 7–10 times the torques seen in activities of daily living.
Abstract:
This project examined differences in the healing of metaphyseal bone when implants of variable stiffness are used for fracture fixation. This knowledge is important in the development of novel orthopaedic implants, used in orthopaedic surgery to stabilise fractures. Dr Koval used a mouse model to create a fracture, and then assessed its healing with a combination of mechanical testing, microcomputed tomography and histomorphometric examination.
Abstract:
Complex bone contours and anatomical variations between individual bones complicate the process of deriving an implant shape that fits the majority of the population. This thesis proposes an automatic fitting method for anatomically precontoured plates based on clinical requirements, and investigates whether a 100% anatomical fit for a group of bones is achievable through manual bending of one plate shape. It was found that, for the plate used, a 100% fit is impossible to achieve through manual bending alone. Rather, newly developed shapes are also required to obtain anatomical fit in areas with more complex bone contours.
Abstract:
Study Design Delphi panel and cohort study. Objective To develop and refine a condition-specific, patient-reported outcome measure, the Ankle Fracture Outcome of Rehabilitation Measure (A-FORM), and to examine its psychometric properties, including factor structure, reliability, and validity, by assessing item fit with the Rasch model. Background To our knowledge, there is no patient-reported outcome measure specific to ankle fracture with a robust content foundation. Methods A 2-stage research design was implemented. First, a Delphi panel that included patients and health professionals developed the items and refined the item wording. Second, a cohort study (n = 45) with 2 assessment points was conducted to permit preliminary maximum-likelihood exploratory factor analysis and Rasch analysis. Results The Delphi panel reached consensus on 53 potential items that were carried forward to the cohort phase. From the 2 time points, 81 questionnaires were completed and analyzed; 38 potential items were eliminated on account of greater than 10% missing data, factor loadings, and uniqueness. The 15 unidimensional items retained in the scale demonstrated appropriate person and item reliability after (and before) removal of 1 item (anxious about footwear) that had a higher-than-ideal outfit statistic (1.75). The “anxious about footwear” item was retained in the instrument, but only the 14 items with acceptable infit and outfit statistics (range, 0.5–1.5) were included in the summary score. Conclusion This investigation developed and refined the A-FORM (Version 1.0). The A-FORM items demonstrated favorable psychometric properties and are suitable for conversion to a single summary score. Further studies utilizing the A-FORM instrument are warranted. J Orthop Sports Phys Ther 2014;44(7):488–499. Epub 22 May 2014. doi:10.2519/jospt.2014.4980
Abstract:
We employed a novel cuing paradigm to assess whether dynamically versus statically presented facial expressions differentially engaged predictive visual mechanisms. Participants were presented with a cueing stimulus that was either the static depiction of a low intensity expressed emotion; or a dynamic sequence evolving from a neutral expression to the low intensity expressed emotion. Following this cue and a backwards mask, participants were presented with a probe face that displayed either the same emotion (congruent) or a different emotion (incongruent) with respect to that displayed by the cue although expressed at a high intensity. The probe face had either the same or different identity from the cued face. The participants' task was to indicate whether or not the probe face showed the same emotion as the cue. Dynamic cues and same identity cues both led to a greater tendency towards congruent responding, although these factors did not interact. Facial motion also led to faster responding when the probe face was emotionally congruent to the cue. We interpret these results as indicating that dynamic facial displays preferentially invoke predictive visual mechanisms, and suggest that motoric simulation may provide an important basis for the generation of predictions in the visual system.
Abstract:
Emotionally arousing events can distort our sense of time. We used a mixed block/event-related fMRI design to establish the neural basis for this effect. Nineteen participants were asked to judge whether angry, happy and neutral facial expressions that varied in duration (from 400 to 1,600 ms) were closer in duration to either a short or a long duration they had learned previously. Time was overestimated for both angry and happy expressions compared to neutral expressions. For faces presented for 700 ms, facial emotion modulated activity in regions of the timing network Wiener et al. (NeuroImage 49(2):1728–1740, 2010), namely the right supplementary motor area (SMA) and the junction of the right inferior frontal gyrus and anterior insula (IFG/AI). Reaction times were slowest when faces were displayed for 700 ms, indicating increased decision-making difficulty. Taken together with existing electrophysiological evidence Ng et al. (Neuroscience, doi: 10.3389/fnint.2011.00077, 2011), the effects are consistent with the idea that facial emotion moderates temporal decision making and that the right SMA and right IFG/AI are key neural structures responsible for this effect.
Abstract:
Because moving depictions of face emotion have greater ecological validity than their static counterparts, it has been suggested that still photographs may not engage ‘authentic’ mechanisms used to recognize facial expressions in everyday life. To date, however, no neuroimaging studies have adequately addressed the question of whether the processing of static and dynamic expressions relies upon different brain substrates. To address this, we performed a functional magnetic resonance imaging (fMRI) experiment wherein participants made emotional expression discrimination and sex discrimination judgements to static and moving face images. Compared to sex discrimination, emotion discrimination was associated with widespread increased activation in regions of occipito-temporal, parietal and frontal cortex. These regions were activated both by moving and by static emotional stimuli, indicating a general role in the interpretation of emotion. However, portions of the inferior frontal gyri and supplementary/pre-supplementary motor area showed a task-by-motion interaction. These regions were most active during emotion judgements to static faces. Our results demonstrate a common neural substrate for recognizing static and moving facial expressions, but suggest a role for the inferior frontal gyrus in supporting simulation processes that are invoked more strongly to disambiguate static emotional cues.
Abstract:
Representation of facial expressions using continuous dimensions has been shown to be inherently more expressive and psychologically meaningful than using categorized emotions, and has thus gained increasing attention over recent years. Many sub-problems have arisen in this new field that remain only partially understood. Two of these are a comparison of the regression performance of different texture and geometric features, and investigation of the correlations between continuous dimensional axes and basic categorized emotions. This paper presents empirical studies addressing these problems, and it reports results from an evaluation of different methods for detecting spontaneous facial expressions within the arousal-valence dimensional space (AV). The evaluation compares the performance of texture features (SIFT, Gabor, LBP) against geometric features (FAP-based distances), and the fusion of the two. It also compares the prediction of arousal and valence, obtained using the best fusion method, to the corresponding ground truths. Spatial distribution, shift, similarity, and correlation are considered for the six basic categorized emotions (i.e. anger, disgust, fear, happiness, sadness, surprise). Using the NVIE database, results show that the fusion of LBP and FAP features performs the best. The results from the NVIE and FEEDTUM databases reveal novel findings about the correlations of the arousal and valence dimensions to each of the six basic emotion categories.
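The feature-level fusion and regression pipeline described above can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the NVIE data, the exact LBP/FAP extractors, and the regressor used are not reproduced here, so synthetic stand-in feature vectors and a plain least-squares regressor are used instead.

```python
# Minimal sketch: fuse texture (LBP-style) and geometric (FAP-style)
# feature vectors by concatenation, then regress continuous
# arousal/valence (AV) values. All data here are synthetic.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)

n_samples = 200
texture = rng.normal(size=(n_samples, 59))    # stand-in for a uniform-LBP histogram per face
geometric = rng.normal(size=(n_samples, 20))  # stand-in for FAP-based inter-landmark distances

# Feature-level fusion: concatenate the two descriptors per sample.
fused = np.hstack([texture, geometric])

# Synthetic AV targets with a linear dependence on the fused features.
true_w = rng.normal(size=(fused.shape[1], 2))
av = fused @ true_w + 0.01 * rng.normal(size=(n_samples, 2))

# Least-squares regression from fused features to the AV plane.
w, *_ = lstsq(fused, av, rcond=None)
pred = fused @ w

# Correlation between predicted and ground-truth valence, mirroring
# the correlation-based comparison described in the abstract.
r = np.corrcoef(pred[:, 1], av[:, 1])[0, 1]
print(round(float(r), 3))
```

In the paper's setting the regressor and features would be replaced by the actual LBP/FAP extractors and the best-performing fusion method; the concatenate-then-regress structure is the part this sketch illustrates.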
Abstract:
Graphyne is an allotrope of graphene. The mechanical properties of graphynes (α-, β-, γ- and 6,6,12-graphynes) under uniaxial tension deformation at different temperatures and strain rates are studied using molecular dynamics simulations. It is found that graphynes are more sensitive to temperature changes than graphene in terms of fracture strength and Young's modulus. The temperature sensitivity of the different graphynes is proportionally related to the percentage of acetylenic linkages in their structures, with the α-graphyne (having 100% of acetylenic linkages) being most sensitive to temperature. For the same graphyne, temperature exerts a more pronounced effect on the Young's modulus than fracture strength, which is different from that of graphene. The mechanical properties of graphynes are also sensitive to strain rate, in particular at higher temperatures.
Abstract:
Most developmental studies of emotional face processing to date have focused on infants and very young children. Additionally, studies that examine emotional face processing in older children do not distinguish development in emotion and identity face processing from more generic age-related cognitive improvement. In this study, we developed a paradigm that measures processing of facial expression in comparison to facial identity and complex visual stimuli. Three matching tasks were developed (i.e., facial emotion matching, facial identity matching, and butterfly wing matching) to include stimuli of a similar level of discriminability and to be equated for task difficulty in earlier samples of young adults. Ninety-two children aged 5–15 years and a new group of 24 young adults completed these three matching tasks. Young children were highly adept at the butterfly wing task relative to their performance on both face-related tasks. More importantly, in older children, development of facial emotion discrimination ability lagged behind that of facial identity discrimination.
Abstract:
Schizophrenia patients have been shown to be compromised in their ability to recognize facial emotion. This deficit has been shown to be related to negative symptom severity. However, to date, most studies have used static rather than dynamic depictions of faces. Nineteen patients with schizophrenia were compared with seventeen controls on two tasks; the first involved the discrimination of facial identity, emotion, and butterfly wings; the second tested emotion recognition using both static and dynamic stimuli. In the first task, the patients performed more poorly than controls for emotion discrimination only, confirming a specific deficit in facial emotion recognition. In the second task, patients performed more poorly in both static and dynamic facial emotion processing. An interesting pattern of associations suggestive of a possible double dissociation emerged in relation to correlations with symptom ratings: high negative symptom ratings were associated with poorer recognition of static displays of emotion, whereas high positive symptom ratings were associated with poorer recognition of dynamic displays of emotion. However, while the strength of associations between negative symptom ratings and accuracy during static and dynamic facial emotion processing was significantly different, those between positive symptom ratings and task performance were not. The results confirm a facial emotion-processing deficit in schizophrenia using more ecologically valid dynamic expressions of emotion. The pattern of findings may reflect differential patterns of cortical dysfunction associated with negative and positive symptoms of schizophrenia, in the context of differential neural mechanisms for the processing of static and dynamic displays of facial emotion.
Abstract:
During fracture healing, many complex and cryptic interactions occur between cells and biochemical molecules to bring about repair of damaged bone. In this thesis, two mathematical models were developed, concerning the cellular differentiation of osteoblasts (bone-forming cells) and the mineralisation of new bone tissue, allowing new insights into these processes. These models were mathematically analysed and simulated numerically, yielding results consistent with experimental data and highlighting the underlying pattern-formation structure in these aspects of fracture healing.
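The abstract does not state the thesis's equations, so the following is only a generic illustration of the kind of model it describes: a pair of ODEs in which logistic osteoblast proliferation drives mineral deposition, integrated with a simple forward-Euler scheme. All parameter values and the functional forms are assumptions.

```python
# Illustrative sketch (not the thesis's actual model): coupled ODEs for
# osteoblast density b and mineral density m during fracture healing.
def simulate(t_end=50.0, dt=0.01, r=0.5, K=1.0, k_m=0.3):
    steps = int(t_end / dt)
    b = 0.01  # initial osteoblast density (fraction of carrying capacity K)
    m = 0.0   # initial mineral density (arbitrary units, saturating at 1)
    for _ in range(steps):
        db = r * b * (1.0 - b / K)   # logistic osteoblast proliferation
        dm = k_m * b * (1.0 - m)     # mineralisation driven by cell density
        b += dt * db
        m += dt * dm
    return b, m

b_final, m_final = simulate()
print(round(b_final, 3), round(m_final, 3))
```

Both variables approach their saturation values, reproducing the qualitative sigmoidal time course typical of healing-front models; the thesis's actual analysis would additionally involve spatial terms to capture the pattern formation the abstract mentions.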
Abstract:
Facial identity and facial expression matching tasks were completed by 5–12-year-old children and adults using stimuli extracted from the same set of normalized faces. Configural and feature processing were examined using speed and accuracy of responding and facial feature selection, respectively. Facial identity matching was slower than face expression matching for all age groups. Large age effects were found on both speed and accuracy of responding and feature use in both identity and expression matching tasks. Eye region preference was found on the facial identity task and mouth region preference on the facial expression task. Use of mouth region information for facial expression matching increased with age, whereas use of eye region information for facial identity matching peaked early. The feature use information suggests that the specific use of primary facial features to arrive at identity and emotion matching judgments matures across middle childhood.
Abstract:
Theoretical accounts suggest that mirror neurons play a crucial role in social cognition. The current study used transcranial magnetic stimulation (TMS) to investigate the association between mirror neuron activation and facial emotion processing, a fundamental aspect of social cognition, among healthy adults (n = 20). Facial emotion processing of static (but not dynamic) images correlated significantly with an enhanced motor response, proposed to reflect mirror neuron activation. These correlations did not appear to reflect general facial processing or pattern recognition, and provide support to current theoretical accounts linking the mirror neuron system to aspects of social cognition. We discuss the mechanism by which mirror neurons might facilitate facial emotion recognition.
Abstract:
People with schizophrenia perform poorly when recognising facial expressions of emotion, particularly negative emotions such as fear. This finding has been taken as evidence of a “negative emotion specific deficit”, putatively associated with a dysfunction in the limbic system, particularly the amygdala. An alternative explanation is that greater difficulty in recognising negative emotions may reflect a priori differences in task difficulty. The present study uses a differential deficit design to test the above argument. Facial emotion recognition accuracy for seven emotion categories was compared across three groups. Eighteen schizophrenia patients and one group of healthy age- and gender-matched controls viewed identical sets of stimuli. A second group of 18 age- and gender-matched controls viewed a degraded version of the same stimuli. The level of stimulus degradation was chosen so as to equate overall level of accuracy to the schizophrenia patients. Both the schizophrenia group and the degraded image control group showed reduced overall recognition accuracy and reduced recognition accuracy for fearful and sad facial stimuli compared with the intact-image control group. There were no differences in recognition accuracy for any emotion category between the schizophrenia group and the degraded image control group. These findings argue against a negative emotion specific deficit in schizophrenia.