925 results for Clinical validation


Relevance: 30.00%

Abstract:

The aim of this study was to apply the principles of content, criterion, and construct validation to a new questionnaire specifically designed to measure foot-health status. One hundred eleven subjects completed two different questionnaires designed to measure foot health (the new Foot Health Status Questionnaire and the previously validated Foot Function Index) and underwent a clinical examination in order to provide data for a second-order confirmatory factor analysis. Presented herein is a psychometrically evaluated questionnaire that contains 13 items covering foot pain, foot function, footwear, and general foot health. The tool demonstrates a high degree of content, criterion, and construct validity and test-retest reliability.
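Criterion validity of the kind reported above is commonly examined by correlating scores from the new instrument against the established one (here, the Foot Function Index). The sketch below is purely illustrative: the score arrays are simulated stand-ins for FHSQ and FFI pain-domain scores, not data from this study.

    # Illustrative criterion-validity check: new questionnaire vs an established index.
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(7)
    ffi_pain = rng.uniform(0, 100, size=111)                  # stand-in FFI pain scores (higher = worse)
    fhsq_pain = 100 - ffi_pain + rng.normal(0, 8, size=111)   # stand-in FHSQ pain scores (higher = better)

    rho, p = spearmanr(fhsq_pain, ffi_pain)   # a strong negative correlation is expected here
    print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")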

Relevance: 30.00%

Abstract:

Older adults, especially those acutely ill, are vulnerable to developing malnutrition due to a range of risk factors. The high prevalence and extensive consequences of malnutrition in hospitalised older adults have been reported extensively. However, there are few well-designed longitudinal studies that report the independent relationship between malnutrition and clinical outcomes after adjustment for a wide range of covariates. Acutely ill older adults are exceptionally prone to nutritional decline during hospitalisation, but few reports have studied this change and its impact on clinical outcomes. In the rapidly ageing Singapore population, all this evidence is lacking, and the characteristics associated with the risk of malnutrition are not well documented. Despite the evidence on malnutrition prevalence, it is often under-recognised and under-treated. It is therefore crucial that validated nutrition screening and assessment tools are used for early identification of malnutrition. Although many nutrition screening and assessment tools are available, there is no universally accepted method for defining malnutrition risk and nutritional status. Most existing tools have been validated amongst Caucasians using various approaches, but they are rarely reported in Asian elderly populations and none has been validated in Singapore. Because of the multiethnic, cultural, and language differences among Singapore older adults, the results from non-Asian validation studies may not be applicable. It is therefore important to identify validated, population- and setting-specific nutrition screening and assessment methods to accurately detect and diagnose malnutrition in Singapore. The aims of this study were therefore to: i) characterise hospitalised elderly in a Singapore acute hospital; ii) describe the extent and impact of admission malnutrition; iii) identify and evaluate suitable methods for nutritional screening and assessment; and iv) examine changes in nutritional status during admission and their impact on clinical outcomes.

A total of 281 participants, with a mean (±SD) age of 81.3 (±7.6) years, were recruited from three geriatric wards in Tan Tock Seng Hospital over a period of eight months. They were predominantly Chinese (83%) and community-dwellers (97%). They were screened within 72 hours of admission by a single dietetic technician using four nutrition screening tools [Tan Tock Seng Hospital Nutrition Screening Tool (TTSH NST), Nutritional Risk Screening 2002 (NRS 2002), Mini Nutritional Assessment-Short Form (MNA-SF), and Short Nutritional Assessment Questionnaire (SNAQ©)] administered in no particular order. The total scores were not computed during the screening process, so the dietetic technician was blinded to the results of all the tools. Nutritional status was assessed by a single dietitian, blinded to the screening results, using four malnutrition assessment methods [Subjective Global Assessment (SGA), Mini Nutritional Assessment (MNA), body mass index (BMI), and corrected arm muscle area (CAMA)]. The SGA rating was completed prior to computation of the total MNA score to minimise bias. Participants were reassessed for weight, arm anthropometry (mid-arm circumference, triceps skinfold thickness), and SGA rating at discharge from the ward.

The nutritional assessment tools and indices were validated against clinical outcomes (length of stay (LOS) >11 days, discharge to higher-level care, 3-month readmission, 6-month mortality, and 6-month Modified Barthel Index) using multivariate logistic regression. The covariates included age, gender, race, dementia (defined using DSM-IV criteria), depression (defined using a single question, "Do you often feel sad or depressed?"), severity of illness (defined using a modified version of the Severity of Illness Index), comorbidities (defined using the Charlson Comorbidity Index), number of prescribed drugs, and admission functional status (measured using the Modified Barthel Index; MBI). The nutrition screening tools were validated against the SGA, which was found to be the most appropriate nutritional assessment tool in this study (refer to Section 5.6).

Prevalence of malnutrition on admission was 35% (defined by SGA), and it was significantly associated with characteristics such as swallowing impairment (malnourished vs well-nourished: 20% vs 5%), poor appetite (77% vs 24%), dementia (44% vs 28%), depression (34% vs 22%), and poor functional status (MBI 48.3±29.8 vs 65.1±25.4). The SGA had the highest completion rate (100%) and was predictive of the highest number of clinical outcomes: LOS >11 days (OR 2.11, 95% CI [1.17-3.83]), 3-month readmission (OR 1.90, 95% CI [1.05-3.42]), and 6-month mortality (OR 3.04, 95% CI [1.28-7.18]), independent of a comprehensive range of covariates including functional status, disease severity, and cognitive function. The SGA is therefore the most appropriate nutritional assessment tool for defining malnutrition. The TTSH NST was identified as the most suitable nutrition screening tool, with the best diagnostic performance against the SGA (AUC 0.865, sensitivity 84%, specificity 79%). Overall, 44% of participants experienced weight loss during hospitalisation, and 27% had weight loss >1% per week over a median LOS of 9 days (range 2-50). Well-nourished (45%) and malnourished (43%) participants were equally prone to decline in nutritional status (defined by weight loss >1% per week). Those with reduced nutritional status were more likely to be discharged to higher-level care (adjusted OR 2.46, 95% CI [1.27-4.70]).

This study is the first to characterise malnourished hospitalised older adults in Singapore. It is also one of very few studies to (a) evaluate the association of admission malnutrition with clinical outcomes in a multivariate model; (b) determine the change in nutritional status during admission; and (c) evaluate the validity of nutrition screening and assessment tools amongst hospitalised older adults in an Asian population. The results clearly highlight that admission malnutrition and deterioration in nutritional status are prevalent and associated with adverse clinical outcomes in hospitalised older adults. With older adults being vulnerable to the risks and consequences of malnutrition, it is important that they are systematically screened so that timely and appropriate intervention can be provided. The findings highlighted in this thesis provide an evidence base for, and confirm the validity of, the nutrition screening and assessment tools currently used among hospitalised older adults in Singapore. As older adults may have developed malnutrition prior to hospital admission, or experienced clinically significant weight loss of >1% per week of hospitalisation, screening of the elderly should be initiated in the community, and continuous nutritional monitoring should extend beyond hospitalisation.
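The covariate-adjusted validation described above can be illustrated with a short Python sketch using statsmodels: a binary outcome is regressed on malnutrition status plus covariates, and adjusted odds ratios with 95% confidence intervals are derived from the fitted coefficients. The DataFrame, column names, and simulated values are hypothetical placeholders, not the study's dataset; the full covariate list from the abstract would be added to the formula in the same way.

    # Hypothetical sketch of covariate-adjusted logistic regression (adjusted odds ratios).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 281
    df = pd.DataFrame({
        "readmit_3m": rng.integers(0, 2, n),        # placeholder outcome: 3-month readmission
        "malnourished": rng.integers(0, 2, n),      # placeholder exposure: SGA-defined malnutrition
        "age": rng.normal(81, 8, n),
        "mbi_admission": rng.uniform(0, 100, n),    # placeholder admission functional status
    })

    model = smf.logit("readmit_3m ~ malnourished + age + mbi_admission", data=df).fit()
    odds_ratios = np.exp(model.params)              # adjusted odds ratios
    conf_int = np.exp(model.conf_int())             # 95% confidence intervals
    print(pd.concat([odds_ratios, conf_int], axis=1))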

Relevance: 30.00%

Abstract:

Precise protein quantification is essential in clinical dietetics, particularly in the management of renal, burn, and malnourished patients. The EP-10 was developed to expedite the estimation of dietary protein for nutritional assessment and recommendation. The main objective of this study was to compare the validity and efficacy of the EP-10 with that of the American Dietetic Association's "Exchange List for Meal Planning" (ADA-7g) in quantifying dietary protein intake, against computerised nutrient analysis (CNA). Protein intake from 197 food records kept by healthy adult subjects in Singapore was determined three times using three different methods: (1) the EP-10, (2) the ADA-7g, and (3) CNA using the SERVE program (Version 4.0). Assessments using the EP-10 and ADA-7g were performed by two assessors in a blind crossover manner, while a third assessor performed the CNA. All assessors were blind to each other's results. The time taken to assess a subsample (n=165) using the EP-10 and ADA-7g was also recorded. The mean difference in protein intake quantification compared with the CNA was statistically non-significant for the EP-10 (1.4 ± 16.3 g, P = .239) and statistically significant for the ADA-7g (-2.2 ± 15.6 g, P = .046). Both the EP-10 and ADA-7g had clinically acceptable agreement with the CNA as determined via Bland-Altman plots, although the EP-10 tended to overestimate protein intakes above 150 g. The EP-10 required significantly less time for protein intake quantification than the ADA-7g (mean time of 65 ± 36 seconds vs. 111 ± 40 seconds, P < .001). The EP-10 and ADA-7g are valid clinical tools for protein intake quantification in an Asian context, with the EP-10 being more time-efficient. However, a dietitian's discretion is needed when the EP-10 is used for protein intakes above 150 g.
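Agreement between a quick estimation method and computerised nutrient analysis, as assessed above, is typically visualised with a Bland-Altman plot: per-record differences are plotted against per-record means, with the bias and 1.96 SD limits of agreement marked. The sketch below uses simulated values (loosely mirroring the reported bias and SD) rather than the study's food records.

    # Hypothetical Bland-Altman sketch: quick protein estimates vs reference CNA values.
    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(3)
    cna = rng.uniform(30, 160, size=197)              # simulated reference protein values (g)
    ep10 = cna + rng.normal(1.4, 16.3, size=197)      # simulated quick-method estimates (g)

    diff = ep10 - cna                                 # difference per food record
    mean = (ep10 + cna) / 2                           # mean of the two methods per record
    bias = diff.mean()                                # mean difference (bias)
    loa = 1.96 * diff.std(ddof=1)                     # limits-of-agreement half-width

    plt.scatter(mean, diff, s=12)
    plt.axhline(bias, color="k")
    plt.axhline(bias + loa, color="k", linestyle="--")
    plt.axhline(bias - loa, color="k", linestyle="--")
    plt.xlabel("Mean protein intake (g)")
    plt.ylabel("EP-10 minus CNA (g)")
    plt.show()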

Relevance: 30.00%

Abstract:

Background: Patients with chest pain contribute substantially to emergency department attendances, lengthy hospital stays, and inpatient admissions. A reliable, reproducible, and fast process to identify patients presenting with chest pain who have a low short-term risk of a major adverse cardiac event is needed to facilitate early discharge. We aimed to prospectively validate the safety of a predefined 2-h accelerated diagnostic protocol (ADP) to assess patients presenting to the emergency department with chest pain symptoms suggestive of acute coronary syndrome. Methods: This observational study was undertaken in 14 emergency departments in nine countries in the Asia-Pacific region, in patients aged 18 years and older with at least 5 min of chest pain. The ADP included use of a structured pre-test probability scoring method (Thrombolysis in Myocardial Infarction [TIMI] score), electrocardiography, and a point-of-care biomarker panel of troponin, creatine kinase MB, and myoglobin. The primary endpoint was major adverse cardiac events within 30 days after initial presentation (including initial hospital attendance). This trial is registered with the Australia-New Zealand Clinical Trials Registry, number ACTRN12609000283279. Findings: 3582 consecutive patients were recruited and completed 30-day follow-up. 421 (11.8%) patients had a major adverse cardiac event. The ADP classified 352 (9.8%) patients as low risk and potentially suitable for early discharge. A major adverse cardiac event occurred in three (0.9%) of these patients, giving the ADP a sensitivity of 99.3% (95% CI 97.9–99.8), a negative predictive value of 99.1% (97.3–99.8), and a specificity of 11.0% (10.0–12.2). Interpretation: This novel ADP identifies patients at very low risk of a short-term major adverse cardiac event who might be suitable for early discharge. Such an approach could be used to decrease the overall observation periods and admissions for chest pain. The components needed to implement this strategy are widely available. The ADP has the potential to affect health-service delivery worldwide.
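The diagnostic accuracy figures reported above follow directly from the 2×2 counts given in the abstract; the short calculation below reproduces the point estimates (confidence intervals omitted).

    # Reproduce the ADP accuracy point estimates from the counts reported above.
    total = 3582          # patients completing 30-day follow-up
    events = 421          # major adverse cardiac events (MACE)
    low_risk = 352        # patients the ADP classified as low risk
    low_risk_events = 3   # MACE among the low-risk group

    true_neg = low_risk - low_risk_events   # 349 low-risk patients without MACE
    true_pos = events - low_risk_events     # 418 MACE correctly kept under observation
    non_events = total - events             # 3161 patients without MACE

    sensitivity = true_pos / events         # ~0.993
    specificity = true_neg / non_events     # ~0.110
    npv = true_neg / low_risk               # ~0.991
    print(f"sensitivity {sensitivity:.3f}, specificity {specificity:.3f}, NPV {npv:.3f}")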

Relevance: 30.00%

Abstract:

Background: Evidence-based practice (EBP) is embraced internationally as an ideal approach to improving patient outcomes and providing cost-effective care. However, despite the support for and apparent benefits of evidence-based practice, it has proven complex and difficult to incorporate into the clinical setting. Research exploring implementation of evidence-based practice has highlighted many internal and external barriers, including clinicians' lack of knowledge and confidence to integrate EBP into their day-to-day work. Nurses in particular often feel ill-equipped, with little confidence to find, appraise, and implement evidence. Aims: This study undertook preliminary testing of the psychometric properties of tools that measure nurses' self-efficacy and outcome expectancy in regard to evidence-based practice. Methods: A survey design was utilised in which nurses who had either completed an EBP unit or were randomly selected from a major tertiary referral hospital in Brisbane, Australia, were sent two newly developed tools: 1) the Self-efficacy in Evidence-Based Practice (SE-EBP) scale and 2) the Outcome Expectancy for Evidence-Based Practice (OE-EBP) scale. Results: Principal axis factoring found three factors with eigenvalues above one for the SE-EBP, explaining 73% of the variance, and one factor for the OE-EBP scale, explaining 82% of the variance. Cronbach's alpha values for the SE-EBP, the three SE-EBP factors, and the OE-EBP were all >.91, suggesting some item redundancy. The SE-EBP was able to distinguish between those with no prior exposure to EBP and those who had completed an introductory EBP unit. Conclusions: While further investigation of the validity of these tools is needed, preliminary testing indicates that the SE-EBP and OE-EBP scales are valid and reliable instruments for measuring health professionals' confidence in the process and outcomes of basing their practice on evidence.
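Internal consistency of the kind reported above is usually summarised with Cronbach's alpha. The sketch below shows the standard calculation on a simulated respondents-by-items matrix; the data and item count are illustrative, not the SE-EBP or OE-EBP responses.

    # Hypothetical Cronbach's alpha calculation for a multi-item scale.
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: 2-D array with rows = respondents and columns = scale items."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()   # sum of the item variances
        total_var = items.sum(axis=1).var(ddof=1)     # variance of the summed scale
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(88, 1))                          # simulated underlying trait
    responses = latent + rng.normal(scale=0.5, size=(88, 10))  # ten correlated "items"
    print(round(cronbach_alpha(responses), 2))                 # high alpha for correlated items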

Relevance: 30.00%

Abstract:

Background: When large-scale trials are investigating the effects of interventions on appetite, it is paramount to efficiently monitor large amounts of human data. The original hand-held Electronic Appetite Ratings System (EARS) was designed to facilitate the administration and data management of visual analogue scales (VAS) of subjective appetite sensations. The purpose of this study was to validate a novel hand-held method (EARS II, running on an HP® iPAQ) against the standard pen-and-paper (P&P) method and the previously validated EARS. Methods: Twelve participants (5 male, 7 female, aged 18-40) took part in a fully repeated-measures design. Participants were randomly assigned, in a crossover design, to either high-fat (>48% fat) or low-fat (<28% fat) meal days one week apart, and completed ratings using the three data-capture methods ordered according to a Latin square. The first set of appetite ratings was completed in a fasted state, immediately before a fixed breakfast. Thereafter, appetite ratings were completed every thirty minutes for 4 h. An ad libitum lunch was provided immediately before a final set of appetite ratings. Results: Repeated-measures ANOVAs were conducted for ratings of hunger, fullness, and desire to eat. There were no significant differences between P&P and either EARS or EARS II (p > 0.05). Correlations between P&P and EARS II, controlling for age and gender, were computed on area-under-the-curve ratings. R² values for hunger (0.89), fullness (0.96), and desire to eat (0.95) were statistically significant (p < 0.05). Conclusions: EARS II was sensitive to the impact of a meal and the recovery of appetite during the postprandial period, and is therefore an effective device for monitoring appetite sensations. This study provides evidence and support for further validation of the novel EARS II method for monitoring appetite sensations during large-scale studies. Its added versatility means the system could potentially also be used to monitor a range of other behavioural and physiological measures often important in clinical and free-living trials.
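The area-under-the-curve ratings mentioned above are conventionally obtained by integrating each participant's appetite ratings over the postprandial sampling schedule (here, baseline plus every 30 minutes for 4 h) with the trapezoidal rule. The sketch below uses made-up ratings for one participant and a simple correlation as an agreement check; it is illustrative only, not the study's analysis.

    # Hypothetical time-course AUC for postprandial appetite ratings (0-100 mm VAS).
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.stats import pearsonr

    minutes = np.arange(0, 241, 30)                                               # baseline then every 30 min for 4 h
    pp_ratings = np.array([72, 30, 35, 42, 50, 58, 65, 70, 74], dtype=float)      # made-up pen & paper hunger ratings
    ears2_ratings = np.array([70, 28, 36, 44, 52, 57, 66, 69, 75], dtype=float)   # made-up EARS II hunger ratings

    auc_pp = trapezoid(pp_ratings, minutes)      # area under the hunger curve, P&P
    auc_ears2 = trapezoid(ears2_ratings, minutes)
    r, p = pearsonr(pp_ratings, ears2_ratings)   # simple within-participant agreement check

    print(f"AUC P&P: {auc_pp:.0f} mm*min, AUC EARS II: {auc_ears2:.0f} mm*min, r = {r:.2f} (p = {p:.3f})")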

Relevance: 30.00%

Abstract:

The major limitation of current typing methods for Streptococcus pyogenes, such as emm sequence typing and T typing, is that these are based on regions subject to considerable selective pressure. Multilocus sequence typing (MLST) is a better indicator of the genetic backbone of a strain but is not widely used due to high costs. The objective of this study was to develop a robust and cost-effective alternative to S. pyogenes MLST. A 10-member single nucleotide polymorphism (SNP) set that provides a Simpson’s Index of Diversity (D) of 0.99 with respect to the S. pyogenes MLST database was derived. A typing format involving high-resolution melting (HRM) analysis of small fragments nucleated by each of the resolution-optimized SNPs was developed. The fragments were 59–119 bp in size and, based on differences in G+C content, were predicted to generate three to six resolvable HRM curves. The combination of curves across each of the 10 fragments can be used to generate a melt type (MelT) for each sequence type (ST). The 525 STs currently in the S. pyogenes MLST database are predicted to resolve into 298 distinct MelTs and the method is calculated to provide a D of 0.996 against the MLST database. The MelTs are concordant with the S. pyogenes population structure. To validate the method we examined clinical isolates of S. pyogenes of 70 STs. Curves were generated as predicted by G+C content discriminating the 70 STs into 65 distinct MelTs.
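The discriminatory power quoted above (D) is Simpson's Index of Diversity. As a rough illustration of the calculation, the sketch below computes D for an arbitrary set of type assignments; the isolate counts are made up and unrelated to the MLST database.

    # Simpson's Index of Diversity (D) for a set of typing results.
    from collections import Counter

    def simpsons_diversity(assignments):
        """assignments: iterable of type labels (e.g. melt types or sequence types)."""
        counts = Counter(assignments)
        n = sum(counts.values())
        if n < 2:
            return 0.0
        return 1 - sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    # Illustrative only: 10 isolates resolved into 4 melt types.
    melt_types = ["A", "A", "B", "B", "B", "C", "C", "D", "D", "D"]
    print(round(simpsons_diversity(melt_types), 3))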

Relevance: 30.00%

Abstract:

Background: Nurse practitioner education and practice have been guided by generic competency standards in Australia since 2006. Development of specialist competencies has been less structured, and there are no formal standards to guide education and continuing professional development for specialty fields. There is limited international research, and no Australian research, into the development of specialist nurse practitioner competencies. This pilot study aimed to test data collection methods, tools, and processes in preparation for a larger national study to investigate specialist competency standards for emergency nurse practitioners. Research into specialist emergency nurse practitioner competencies has not been conducted in Australia. Methods: Mixed-methods research was conducted with a sample of experienced emergency nurse practitioners. Deductive analysis of data from a focus group workshop informed development of a draft specialty competency framework. The framework was subsequently subjected to systematic scrutiny for consensus validation through a two-round Delphi study. Results: The first Delphi round had a 100% response rate and the second round a 75% response rate. Scores for all items in both rounds were above the 80% cut-off, the lowest mean score being 4.1 (82%), from the first round. Conclusion: The authors collaborated with emergency nurse practitioners to produce preliminary data on the formation of specialty competencies as a first step in developing an Australian framework.
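As a rough illustration of the consensus scoring described above, the sketch below converts mean item ratings on a 5-point scale to percentages and compares them with the 80% cut-off; the items and ratings are invented for illustration, not the pilot study's data.

    # Hypothetical Delphi consensus check: mean item rating vs an 80% cut-off.
    ratings = {                      # panellist ratings per item on a 1-5 scale (illustrative)
        "Item 1": [5, 4, 4, 5, 4, 5],
        "Item 2": [4, 4, 4, 4, 5, 4],
    }
    CUTOFF = 80.0                    # percent agreement required to retain an item

    for item, scores in ratings.items():
        mean_score = sum(scores) / len(scores)
        percent = mean_score / 5 * 100            # e.g. a mean of 4.1 -> 82%
        verdict = "retain" if percent >= CUTOFF else "revise"
        print(f"{item}: mean {mean_score:.1f} ({percent:.0f}%) -> {verdict}")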

Relevance: 30.00%

Abstract:

Objective: To investigate the validity of the Trendelenburg test (TT) using an ultrasound-guided nerve block (UNB) of the superior gluteal nerve and to determine whether the reduction in hip abductor muscle (HABD) strength would result in the theorized mechanical compensatory strategies measured during the TT. Design: Quasi-experimental. Setting: Hospital. Participants: Convenience sample of 9 healthy men. Only participants with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries, were included. Interventions: Ultrasound-guided nerve block. Main Outcome Measures: Hip abductor muscle strength (percent body weight [%BW]), contralateral pelvic drop (cPD), change in contralateral pelvic drop (ΔcPD), ipsilateral hip adduction, and ipsilateral trunk sway (TRUNK), measured in degrees. Results: The median age and weight of the participants were 31 years (interquartile range [IQR], 22-32 years) and 73 kg (IQR, 67-81 kg), respectively. An average 52% reduction of HABD strength (z = 2.36, P = 0.02) resulted after the UNB. No differences were found in cPD or ΔcPD (z = 0.01, P = 0.99 and z = -0.67, P = 0.49, respectively). Individual changes in biomechanics showed no consistency between participants, and changes across the group were non-systematic. One participant demonstrated the mechanical compensations described by Trendelenburg. Conclusions: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30% BW but should be reserved for use with populations with marked HABD weakness. Clinical Relevance: This study presents data regarding a critical level of HABD strength required to support the pelvis during the TT.
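The z statistics and small sample above are consistent with a paired non-parametric comparison of pre- and post-block measures; a minimal sketch using SciPy's Wilcoxon signed-rank test on invented strength values is shown below (the numbers are not the study's data, and the exact test the authors used is not stated here).

    # Hypothetical paired comparison of hip abductor strength (%BW) before/after the block.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(1)
    pre_block = np.array([38.0, 41.5, 36.2, 44.0, 39.8, 42.1, 37.5, 40.3, 43.2])   # made-up values
    post_block = pre_block * 0.48 + rng.normal(0, 1.5, 9)                          # ~52% reduction plus noise

    stat, p_value = wilcoxon(pre_block, post_block)   # paired, non-parametric test
    print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")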

Relevance: 30.00%

Abstract:

Introduction: The Trendelenburg Test (TT) is used to assess the functional strength of the hip abductor muscles (HABD), their ability to control frontal-plane motion of the pelvis, and the ability of the lumbopelvic complex to transfer load into single-leg stance. Rationale: Although a standard method of performing the test has been described for use within clinical populations, no study has directly investigated Trendelenburg's hypotheses. Purpose: To investigate the validity of the TT using an ultrasound-guided nerve block (UNB) of the superior gluteal nerve and to determine whether the reduction in HABD strength would result in the theorized mechanical compensatory strategies measured during the TT. Methods: Quasi-experimental design using a convenience sample of nine healthy males. Only subjects with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries, were included. Force dynamometry was used to evaluate HABD strength (%BW). 2D mechanics were used to evaluate contralateral pelvic drop (cMPD), change in contralateral pelvic drop (∆cMPD), ipsilateral hip adduction (iHADD), and ipsilateral trunk sway (TRUNK), measured in degrees (°). All measures were collected prior to and following a UNB of the superior gluteal nerve performed by an interventional radiologist. Results: Subjects had a median age of 31 years (IQR: 22-32 years) and a median weight of 73 kg (IQR: 67-81 kg). An average 52% reduction of HABD strength (z = 2.36, p = 0.02) resulted following the UNB. No differences were found in cMPD or ∆cMPD (z = 0.01, p = 0.99 and z = -0.67, p = 0.49, respectively). Individual changes in biomechanics showed no consistency between subjects, and changes across the group were non-systematic. One subject demonstrated the mechanical compensations described by Trendelenburg. Discussion: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30% BW but should be reserved for use with populations with marked HABD weakness. Importance: This study presents data regarding a critical level of HABD strength required to support the pelvis during the TT.

Relevance: 30.00%

Abstract:

- The RAH put out over 2500 trauma calls in 2009. This figure is more than twice the number of calls put out by similar services.
- Many trauma calls (in particular L2 trauma calls) from the existing system do not warrant activation of the trauma team.
- Trauma calls are sometimes activated for non-trauma reasons (e.g. rapid access to radiology, departmental pressures).
- The excess of trauma calls has several deleterious effects, particularly on time management for trauma service staff: ward rounds/tertiary survey rounds, education, quality improvement, and research.

Relevance: 30.00%

Abstract:

Background: Selection of candidates for clinical psychology programmes is arguably the most important decision made in determining the clinical psychology workforce. However, there are few models to inform the development of selection tools to support selection procedures. This study used a factor analytic approach to operationalise a model of applicants' capabilities. Method: Eighty-eight applicants for entry into a postgraduate clinical psychology programme were assessed on a series of tasks measuring eight capabilities: guided reflection, communication skills, ethical decision making, writing, conceptual reasoning, empathy, and awareness of mind and self-observation. Results: Factor analysis revealed three factors, labelled "awareness" (accounting for 35.71% of the variance), "reflection" (20.56%), and "reasoning" (18.24%). Fourth-year grade point average (GPA) did not correlate with performance on any of the selection capabilities, apart from a weak correlation with the ethics capability. Conclusions: Eight selection capabilities are identified for the selection of candidates independent of GPA. While the model is tentative, it is hoped that the findings will stimulate the development and validation of assessment procedures with good predictive validity, which will benefit the training of clinical psychologists and, ultimately, effective service delivery.
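The eigenvalue and percentage-of-variance summary reported above can be reproduced, in outline, by extracting eigenvalues from the correlation matrix of the capability scores. The sketch below runs on a simulated 88 × 8 score matrix; with random data most eigenvalues hover near 1, so the retained-factor pattern will differ from the study's.

    # Hypothetical eigenvalue / variance-explained summary for capability scores.
    import numpy as np

    rng = np.random.default_rng(42)
    scores = rng.normal(size=(88, 8))            # placeholder: 88 applicants x 8 capability tasks
    corr = np.corrcoef(scores, rowvar=False)     # 8 x 8 correlation matrix

    eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]   # largest first
    explained = eigenvalues / eigenvalues.sum() * 100       # percent of total variance

    for i, (ev, pct) in enumerate(zip(eigenvalues, explained), start=1):
        flag = " (retained: eigenvalue > 1)" if ev > 1 else ""
        print(f"factor {i}: eigenvalue {ev:.2f}, {pct:.1f}% of variance{flag}")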

Relevance: 30.00%

Abstract:

Background: The incidence of malignant mesothelioma is increasing. There is a perception that survival is worse in the UK than in other countries. However, it is important to compare survival in different series based on accurate prognostic data. The European Organisation for Research and Treatment of Cancer (EORTC) and the Cancer and Leukaemia Group B (CALGB) have recently published prognostic scoring systems. We have assessed the prognostic variables, validated the EORTC and CALGB prognostic groups, and evaluated survival in a series of 142 patients. Methods: Case notes of 142 consecutive patients presenting in Leicester since 1988 were reviewed. Univariate analysis of prognostic variables was performed using a Cox proportional hazards regression model. Statistically significant variables were analysed further in a forward, stepwise multivariate model. EORTC and CALGB prognostic groups were derived, Kaplan-Meier survival curves were plotted, and survival rates were calculated from life tables. Results: Significant poor prognostic factors in univariate analysis included male sex, older age, weight loss, chest pain, poor performance status, low haemoglobin, leukocytosis, thrombocytosis, and non-epithelial cell type (p<0.05). The prognostic significance of cell type, haemoglobin, white cell count, performance status, and sex was retained in the multivariate model. Overall median survival was 5.9 (range 0-34.3) months. One- and two-year survival rates were 21.3% (95% CI 13.9 to 28.7) and 3.5% (0 to 8.5), respectively. Median, one-year, and two-year survival data within prognostic groups in Leicester were equivalent to those in the EORTC and CALGB series. Survival curves were successfully stratified by the prognostic groups. Conclusions: This study validates the EORTC and CALGB prognostic scoring systems, which should be used both in the assessment of survival data from series in different countries and in the stratification of patients into randomised clinical studies.
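The survival modelling described above (Cox regression over candidate prognostic factors, plus Kaplan-Meier curves stratified by prognostic group) can be sketched with the lifelines package. The DataFrame, column names, and simulated values below are placeholders, not the Leicester cohort, and only a subset of the listed prognostic variables is included.

    # Hypothetical Cox regression / Kaplan-Meier sketch using the lifelines package.
    import numpy as np
    import pandas as pd
    from lifelines import CoxPHFitter, KaplanMeierFitter

    rng = np.random.default_rng(1)
    n = 142
    df = pd.DataFrame({
        "survival_months": rng.exponential(6, n),   # placeholder survival times
        "died": rng.integers(0, 2, n),              # placeholder event indicator
        "male": rng.integers(0, 2, n),
        "haemoglobin": rng.normal(12.5, 1.5, n),
        "non_epithelial": rng.integers(0, 2, n),    # placeholder cell-type flag
    })

    # Multivariate Cox proportional hazards model over the candidate prognostic factors.
    cph = CoxPHFitter()
    cph.fit(df, duration_col="survival_months", event_col="died")
    cph.print_summary()                             # hazard ratios with 95% confidence intervals

    # Kaplan-Meier curves stratified by a prognostic grouping variable.
    kmf = KaplanMeierFitter()
    for group, subset in df.groupby("non_epithelial"):
        kmf.fit(subset["survival_months"], subset["died"], label=f"non_epithelial={group}")
        kmf.plot_survival_function()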