151 results for score validity
Abstract:
Driver simulators provide safe, controlled, and repeatable environments in which to assess driver behaviour, making them a promising research tool. They range widely in sophistication, from laptops to advanced systems controlled by several computers in a real car mounted on a platform with six degrees of freedom of movement. The applicability of simulator-based research to a particular study needs to be considered before the study begins, to determine whether the use of a simulator is actually appropriate for the research. Given the wide range of driver simulators and their uses, it is important to know beforehand how closely the results from a driver simulator match results found in the real world. Comparison between drivers’ performance under real road conditions and in a particular simulator is a fundamental part of validation: the important question is whether the results obtained in a simulator mirror real-world results. In this paper, the results of the most recently conducted research into the validity of simulators are presented.
Abstract:
Uncooperative iris identification systems at a distance and on the move often suffer from poor resolution and poor focus of the captured iris images. The lack of pixel resolution and well-focused images significantly degrades iris recognition performance. This paper proposes a new approach to incorporate the focus score into a reconstruction-based super-resolution process to generate a high resolution iris image from a low resolution and focus inconsistent video sequence of an eye. A reconstruction-based technique, which can incorporate middle and high frequency components from multiple low resolution frames into one desired super-resolved frame without introducing false high frequency components, is used. A new focus assessment approach is proposed for uncooperative iris images captured at a distance and on the move, to improve performance under variations in lighting, size and occlusion. A novel fusion scheme is then proposed to incorporate the proposed focus score into the super-resolution process. The experiments conducted on the Multiple Biometric Grand Challenge portal database show that our proposed approach achieves an EER of 2.1%, outperforming the existing state-of-the-art averaging signal-level fusion approach by 19.2% and the robust mean super-resolution approach by 8.7%.
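At a high level, fusing multiple registered low-resolution frames weighted by a per-frame focus score can be sketched as below. This is a minimal illustration only, not the paper's algorithm: the frames are assumed pre-registered, and the variance-of-Laplacian focus measure and both function names are stand-ins for the proposed focus assessment and fusion scheme.

```python
import numpy as np

def focus_score(frame: np.ndarray) -> float:
    # Variance of the Laplacian: a common, simple focus measure
    # (illustrative stand-in for the paper's focus assessment).
    lap = (np.roll(frame, 1, 0) + np.roll(frame, -1, 0)
           + np.roll(frame, 1, 1) + np.roll(frame, -1, 1) - 4 * frame)
    return float(lap.var())

def fuse_frames(frames: list) -> np.ndarray:
    # Weight each (already registered) low-resolution frame by its
    # focus score, so sharper frames contribute more to the result.
    scores = np.array([focus_score(f) for f in frames])
    weights = scores / scores.sum()
    return np.tensordot(weights, np.stack(frames), axes=1)
```

A uniform (defocused) frame scores zero under this measure, while a frame with sharp detail scores higher and therefore receives a larger fusion weight.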
Abstract:
Purpose: To undertake rigorous psychometric testing of the newly developed contemporary work environment measure (the Brisbane Practice Environment Measure [B-PEM]) using exploratory factor analysis and confirmatory factor analysis. Methods: Content validity of the 33-item measure was established by a panel of experts. Initial testing involved 195 nursing staff using principal component factor analysis with varimax rotation (orthogonal) and Cronbach's alpha coefficients. Confirmatory factor analysis was conducted using data from a further 983 nursing staff. Results: Principal component factor analysis yielded a four-factor solution with eigenvalues greater than 1 that explained 52.53% of the variance. These factors were then verified using confirmatory factor analysis. Goodness-of-fit indices showed an acceptable fit overall with the full model, explaining 21% to 73% of the variance. Deletion of items took place throughout the evolution of the instrument, resulting in a 26-item, four-factor measure called the Brisbane Practice Environment Measure-Tested. Conclusions: The B-PEM has undergone rigorous psychometric testing, providing evidence of internal consistency and goodness-of-fit indices within acceptable ranges. The measure can be utilised as a subscale or total score reflective of a contemporary nursing work environment. Clinical Relevance: An up-to-date instrument to measure practice environment may be useful for nursing leaders to monitor the workplace and to assist in identifying areas for improvement, facilitating greater job satisfaction and retention.
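The internal consistency reported above is assessed with Cronbach's alpha, which is computed from the item variances and the variance of the total score. A minimal sketch on hypothetical response data (not the study's actual analysis):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix.

    alpha = k / (k - 1) * (1 - sum of item variances / variance of total score)
    """
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)   # sample variance per item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of scale totals
    return k / (k - 1) * (1 - item_variances.sum() / total_variance)
```

Alpha approaches 1 when items covary strongly (high internal consistency) and falls toward or below 0 when they do not.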
Abstract:
The study objective was to determine whether the ‘cardiac decompensation score’ could identify cardiac decompensation in a patient with existing cardiac compromise managed with intra-aortic balloon counterpulsation (IABP). A one-group, posttest-only design was utilised to collect observations in 2003 from IABP recipients treated in the intensive care unit of a 450 bed Australian, government funded, public, cardiothoracic, tertiary referral hospital. Twenty-three consecutive IABP recipients were enrolled, four of whom died in ICU (17.4%). All non-survivors exhibited primarily rising scores over the observation period (p < 0.001) and had final scores of 25 or higher. In contrast, the maximum score obtained by a survivor at any time was 15. Regardless of survival, scores for the 23 participants were generally decreasing immediately following therapy escalation (p = 0.016). Further reflecting these changes in patient support, there was also a trend for scores to move from rising to falling at such treatment escalations (p = 0.024). This pilot study indicates that the ‘cardiac decompensation score’ accurately represents changes in heart function specific to an individual patient. Use of the score in conjunction with IABP may lead to earlier identification of changes occurring in a patient's cardiac function and thus facilitate improved IABP outcomes.
Abstract:
Antipsychotic medications act as either antagonists or partial agonists of the dopamine D2 receptor (DRD2) and antipsychotic drugs vary widely in their binding affinity for the D2 receptor (Kapur and Seeman, 2000). The DRD2 957C>T (rs6277) polymorphism has previously been associated with schizophrenia (Lawford et al., 2005) and the T-allele of the 957C>T polymorphism is associated with reduced mRNA stability and synthesis of the dopamine D2 receptor (Duan et al., 2003). The aim of the study was to determine if the rs6277 polymorphism predicts some of the variability of positive and negative symptoms observed in schizophrenia patients being treated with antipsychotic medication.
Abstract:
Background: Alcohol craving is associated with greater alcohol-related problems and less favorable treatment prognosis. The Obsessive Compulsive Drinking Scale (OCDS) is the most widely used alcohol craving instrument. The OCDS has been validated in adults with alcohol use disorders (AUDs), which typically emerge in early adulthood. This study examines the validity of the OCDS in a nonclinical sample of young adults. Methods: Three hundred and nine college students (mean age of 21.8 years, SD = 4.6 years) completed the OCDS, Alcohol Use Disorders Identification Test (AUDIT), and measures of alcohol consumption. Subjects were randomly allocated to 2 samples. Construct validity was examined via exploratory factor analysis (n = 155) and confirmatory factor analysis (n = 154). Concurrent validity was assessed using the AUDIT and measures of alcohol consumption. A second, alcohol-dependent sample (mean age 42 years, SD 12 years) from a previously published study (n = 370) was used to assess discriminant validity. Results: A unique young adult OCDS factor structure was validated, consisting of Interference/Control, Frequency of Obsessions, Alcohol Consumption, and Resisting Obsessions/Compulsions. The young adult 4-factor structure was significantly associated with the AUDIT and alcohol consumption. The 4-factor OCDS successfully classified nonclinical subjects in 96.9% of cases and the older alcohol-dependent patients in 83.7% of cases. Although the OCDS was able to classify college nonproblem drinkers (AUDIT <13, n = 224) with 83.2% accuracy, it was no better than chance (49.4%) in classifying potential college problem drinkers (AUDIT score ≥13, n = 85). Conclusions: Using the 4-factor structure, the OCDS is a valid measure of alcohol craving in young adult populations. In this nonclinical set of students, the OCDS classified nonproblem drinkers well but not problem drinkers. Studies need to further examine the utility of the OCDS in young people with alcohol misuse.
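The random allocation into separate exploratory and confirmatory subsamples described above can be sketched as follows; the helper name and seed are assumptions for illustration, not the authors' code.

```python
import random

def split_for_validation(subject_ids, seed=42):
    # Randomly allocate subjects to two roughly equal subsamples:
    # one for exploratory factor analysis (EFA), one for
    # confirmatory factor analysis (CFA).
    ids = list(subject_ids)
    rng = random.Random(seed)   # fixed seed for reproducibility
    rng.shuffle(ids)
    half = len(ids) // 2
    return ids[:half], ids[half:]
```

Splitting 309 subjects this way yields two disjoint subsamples of 154 and 155, so the factor structure discovered in one can be tested on independent data in the other.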
Abstract:
We examined properties of culture-level personality traits in ratings of targets (N=5,109) ages 12 to 17 in 24 cultures. Aggregate scores were generalizable across gender, age, and relationship groups and showed convergence with culture-level scores from previous studies of self-reports and observer ratings of adults, but they were unrelated to national character stereotypes. Trait profiles also showed cross-study agreement within most cultures, 8 of which had not previously been studied. Multidimensional scaling showed that Western and non-Western cultures clustered along a dimension related to Extraversion. A culture-level factor analysis replicated earlier findings of a broad Extraversion factor but generally resembled the factor structure found in individuals. Continued analysis of aggregate personality scores is warranted.
Abstract:
This letter is in response to the recently published article “Evaluation of two self-referent foot health instruments” by Robert Trevethan (RT) and concerns the scale scores he derived when using the quality-of-life measure, the Foot Health Status Questionnaire [1]. Unfortunately, the journal reviewers and editor did not identify or address a fundamental flaw in the methodology of this paper. Consequently, the inference drawn from this paper could, in all reasonableness, mislead the reader.
Abstract:
Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, this evidence is lacking, and the characteristics associated with the risk of malnutrition are also not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in Asian elderly populations and none has been validated in Singapore. Because of the multi-ethnic, cultural, and language differences among Singapore older adults, the results from non-Asian validation studies may not be applicable. It is therefore important to identify validated, population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore.
The aims of this study are therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes. A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] that were administered in no particular order. The total scores were not computed during the screening process so that the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, who was blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward. The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression.
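Covariate-adjusted odds ratios from a multivariate logistic model are obtained as exp(coefficient). A minimal Newton-Raphson sketch of such a fit is given below; it is an illustration of the general technique, not the study's actual statistical analysis, and the function name is an assumption.

```python
import numpy as np

def logistic_fit(X: np.ndarray, y: np.ndarray, iters: int = 50) -> np.ndarray:
    """Maximum-likelihood logistic regression via Newton-Raphson.

    X: (n, p) design matrix whose first column is the intercept
       (remaining columns: exposure of interest plus covariates);
    y: (n,) 0/1 outcome (e.g. LOS > 11 days).
    Returns coefficients; exp(coefficient) is the adjusted odds ratio.
    """
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # fitted probabilities
        grad = X.T @ (y - p)                       # score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
        beta += np.linalg.solve(hess, grad)        # Newton update
    return beta
```

With a single binary exposure and no other covariates, exp(beta) for the exposure reproduces the empirical odds ratio from the 2x2 table exactly.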
The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, “Do you often feel sad or depressed?”), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (refer to Section 5.6). Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]) and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity and cognitive function. SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutritional screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to experiencing decline in nutritional status (defined by weight loss >1% per week).
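The sensitivity and specificity figures above summarise a screening tool's agreement with the SGA reference standard, counted from the four cells of the 2x2 classification table. A minimal sketch on hypothetical data (the function name is an assumption):

```python
def screening_performance(screen_positive, reference_positive):
    # Parallel boolean sequences: screening result vs reference standard
    # (here, malnourished by SGA). Returns (sensitivity, specificity).
    pairs = list(zip(screen_positive, reference_positive))
    tp = sum(1 for s, r in pairs if s and r)          # true positives
    fn = sum(1 for s, r in pairs if not s and r)      # false negatives
    fp = sum(1 for s, r in pairs if s and not r)      # false positives
    tn = sum(1 for s, r in pairs if not s and not r)  # true negatives
    return tp / (tp + fn), tn / (tn + fp)
```

Sensitivity is the proportion of truly malnourished patients the screen flags; specificity is the proportion of well-nourished patients it correctly passes.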
Those with reduced nutritional status were more likely to be discharged to higher level care (adjusted OR 2.46, 95% CI [1.27-4.70]). This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of the very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutritional screening and assessment tools amongst hospitalised older adults in an Asian population. Results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and are associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the current nutrition screening and assessment tools used among hospitalised older adults in Singapore. As older adults may have developed malnutrition prior to hospital admission, or experienced clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community, and continuous nutritional monitoring should extend beyond hospitalisation.