896 results for Outcome assessment
Abstract:
Background and aims Self-efficacy beliefs and outcome expectancies are central to Social Cognitive Theory (SCT). Alcohol studies demonstrate the theoretical and clinical utility of applying both SCT constructs. This study examined the relationship between refusal self-efficacy and outcome expectancies in a sample of cannabis users, and tested formal mediational models. Design Patients referred for cannabis treatment completed a comprehensive clinical assessment, including recently validated cannabis expectancy and refusal self-efficacy scales. Setting A hospital alcohol and drug out-patient clinic. Participants Patients referred for cannabis treatment [n = 1115, mean age 26.29, standard deviation (SD) 9.39]. Measurements The Cannabis Expectancy Questionnaire (CEQ) and Cannabis Refusal Self-Efficacy Questionnaire (CRSEQ) were completed, along with measures of cannabis severity [Severity of Dependence Scale (SDS)] and cannabis consumption. Findings Positive (β = −0.29, P < 0.001) and negative (β = −0.19, P < 0.001) cannabis outcome expectancies were associated significantly with refusal self-efficacy. Refusal self-efficacy, in turn, fully mediated the association between negative expectancy and weekly consumption [95% confidence interval (CI) = 0.03, 0.17] and partially mediated the effect of positive expectancy on weekly consumption (95% CI = 0.06, 0.17). Conclusions Consistent with Social Cognitive Theory, refusal self-efficacy (a person's belief that he or she can abstain from cannabis use) mediates part of the association between cannabis outcome expectancies (perceived consequences of cannabis use) and cannabis use.
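The mediation findings above (an indirect effect of expectancy on consumption through refusal self-efficacy, reported with bootstrap confidence intervals) follow the standard product-of-coefficients logic. The sketch below illustrates that logic on simulated data with hypothetical variable names; it is not the study's actual analysis or dataset.

```python
# Minimal product-of-coefficients mediation sketch with a percentile bootstrap CI.
# All data and variable names (expectancy, self_efficacy, weekly_use) are simulated
# and hypothetical; they only illustrate the kind of model the abstract describes.
import numpy as np

rng = np.random.default_rng(0)
n = 500
expectancy = rng.normal(size=n)                                  # X: outcome expectancy
self_efficacy = -0.3 * expectancy + rng.normal(size=n)           # M: refusal self-efficacy
weekly_use = -0.4 * self_efficacy + 0.2 * expectancy + rng.normal(size=n)  # Y: consumption

def indirect_effect(x, m, y):
    """a*b indirect effect: a is the X->M slope, b is the M->Y slope adjusting for X."""
    a = np.polyfit(x, m, 1)[0]
    design = np.column_stack([np.ones_like(x), m, x])
    b = np.linalg.lstsq(design, y, rcond=None)[0][1]
    return a * b

point = indirect_effect(expectancy, self_efficacy, weekly_use)
boot = []
for _ in range(2000):                                            # percentile bootstrap
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(expectancy[idx], self_efficacy[idx], weekly_use[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {point:.3f}, 95% bootstrap CI = ({lo:.3f}, {hi:.3f})")
```

A confidence interval that excludes zero, as in the abstract, is the usual criterion for declaring the indirect (mediated) path significant.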
Abstract:
Study Design Delphi panel and cohort study. Objective To develop and refine a condition-specific, patient-reported outcome measure, the Ankle Fracture Outcome of Rehabilitation Measure (A-FORM), and to examine its psychometric properties, including factor structure, reliability, and validity, by assessing item fit with the Rasch model. Background To our knowledge, there is no patient-reported outcome measure specific to ankle fracture with a robust content foundation. Methods A 2-stage research design was implemented. First, a Delphi panel that included patients and health professionals developed the items and refined the item wording. Second, a cohort study (n = 45) with 2 assessment points was conducted to permit preliminary maximum-likelihood exploratory factor analysis and Rasch analysis. Results The Delphi panel reached consensus on 53 potential items that were carried forward to the cohort phase. From the 2 time points, 81 questionnaires were completed and analyzed; 38 potential items were eliminated on account of greater than 10% missing data, factor loadings, and uniqueness. The 15 unidimensional items retained in the scale demonstrated appropriate person and item reliability both before and after removal of 1 item (anxious about footwear) that had a higher-than-ideal outfit statistic (1.75). The “anxious about footwear” item was retained in the instrument, but only the 14 items with acceptable infit and outfit statistics (range, 0.5–1.5) were included in the summary score. Conclusion This investigation developed and refined the A-FORM (Version 1.0). The A-FORM items demonstrated favorable psychometric properties and are suitable for conversion to a single summary score. Further studies utilizing the A-FORM instrument are warranted. J Orthop Sports Phys Ther 2014;44(7):488–499. Epub 22 May 2014. doi:10.2519/jospt.2014.4980
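The item-reduction step described above rests on checking that retained items load on a single factor. The sketch below shows a generic maximum-likelihood factor-analysis screen on simulated item responses; the 0.4 loading cut-off and the data are assumptions for illustration, not the A-FORM development procedure itself.

```python
# Generic unidimensionality screen via maximum-likelihood factor analysis.
# Simulated item responses; the 0.4 retention cut-off is an assumed convention.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
latent = rng.normal(size=(45, 1))                      # one underlying trait, n = 45
loadings_true = rng.uniform(0.4, 0.9, size=(1, 15))
items = latent @ loadings_true + rng.normal(scale=0.5, size=(45, 15))

fa = FactorAnalysis(n_components=1).fit(items)
est_loadings = fa.components_.ravel()
keep = np.abs(est_loadings) >= 0.4                     # drop weakly loading items
print("estimated loadings:", np.round(est_loadings, 2))
print("items retained:", np.where(keep)[0].tolist())
```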
Abstract:
Purpose Dermatologic adverse events (dAEs) in cancer treatment are frequent with the use of targeted therapies. These dAEs have been shown to have significant impact on health-related quality of life (HRQoL). While standardized assessment tools have been developed for physicians to assess severity of dAEs, there is discordance between objective and subjective measures. The identification of patient-reported outcome (PRO) instruments useful in the context of targeted cancer therapies is therefore important in both the clinical and research settings for the overall evaluation of dAEs and their impact on HRQoL. Methods A comprehensive, systematic literature search of published articles was conducted by two independent reviewers in order to identify PRO instruments previously utilized in patient populations with dAEs from targeted cancer therapies. The identified PRO instruments were studied to determine which HRQoL issues relevant to dAEs were addressed, as well as the process of development and validation of these instruments. Results Thirteen articles identifying six PRO instruments met the inclusion criteria. Four were general dermatology instruments (Skindex-16©, Skindex-29©, Dermatology Life Quality Index (DLQI), and DIELH-24) and two were symptom-specific (Functional Assessment of Cancer Therapy-Epidermal Growth Factor Receptor Inhibitor-18 (FACT-EGFRI-18) and Hand-Foot Syndrome-14 (HFS-14)). Conclusions While there are several PRO instruments that have been tested in the context of targeted cancer therapy, additional work is needed to develop new instruments and to further validate the instruments identified in this study in patients receiving targeted therapies.
Abstract:
There is a growing awareness of the high levels of psychological distress being experienced by law students and the practising profession in Australia. In this context, a Threshold Learning Outcome (TLO) on self-management has been included in the six TLOs recently articulated as minimum learning outcomes for all Australian graduates of the Bachelor of Laws degree (LLB). The TLOs were developed during 2010 as part of the Australian Learning and Teaching Council’s (ALTC’s) project funded by the Australian Government to articulate ‘Learning and Teaching Academic Standards’. The TLOs are the result of a comprehensive national consultation process led by the ALTC’s Discipline Scholars: Law, Professors Sally Kift and Mark Israel.1 The TLOs have been endorsed by the Council of Australian Law Deans (CALD) and have received broad support from members of the judiciary and practising profession, representative bodies of the legal profession, law students and recent graduates, Legal Services Commissioners and the Law Admissions Consultative Committee. At the time of writing, TLOs for the Juris Doctor (JD) are also being developed, utilising the TLOs articulated for the LLB as their starting point but restating the JD requirements as the higher order outcomes expected of graduates of a ‘Masters Degree (Extended)’, this being the award level designation for the JD now set out in the new Australian Qualifications Framework.2 As Australian law schools begin embedding the learning, teaching and assessment of the TLOs in their curricula, and seek to assure graduates’ achievement of them, guidance on the implementation of the self-management TLO is salient and timely.
Abstract:
The Australian Learning and Teaching Council (ALTC) Discipline Scholars for Law, Professors Sally Kift and Mark Israel, articulated six Threshold Learning Outcomes (TLOs) for the Bachelor of Laws degree as part of the ALTC’s 2010 project on Learning and Teaching Academic Standards. One of these TLOs promotes the learning, teaching and assessment of self-management skills in Australian law schools. This paper explores the concept of self-management and how it can be relevantly applied in the first year of legal education. Recent literature from the United States (US) and Australia provides insights into the types of issues facing law students, as well as potential antidotes to these problems. Based on these findings, I argue that designing a pedagogical framework for the first year law curriculum that promotes students’ connection with their intrinsic interests, values, motivations and purposes will facilitate student success in terms of their personal well-being, ethical dispositions and academic engagement.
Abstract:
Rapid urbanization, improved quality of life, and diversified lifestyle options have collectively led to an escalation in housing demand in our cities, where residential areas, as the largest portion of urban land use type, play a critical role in the formation of sustainable cities. To date there has been limited research to ascertain residential development layouts that provide a more sustainable urban outcome. This paper aims to evaluate and compare sustainability levels of residential types by focusing on their layouts. The paper scrutinizes three different development types in a developing country context—i.e., subdivision, piecemeal, and master-planned developments. This study develops a “Neighborhood Sustainability Assessment” tool and applies it to compare their sustainability levels in Ipoh, Malaysia. The analysis finds that the master-planned development, amongst the investigated case studies, possesses the potential to produce higher levels of sustainability outcomes. The results reveal insights and evidence for policymakers, planners, development agencies and researchers; advocate further studies on neighborhood-level sustainability analysis; and emphasize the need for collective efforts and an effective process in achieving neighborhood sustainability and sustainable city formation.
Abstract:
Purpose/Objectives: To examine and compare the reliability of four body composition methods commonly used in assessing breast cancer survivors. Design: Cross-sectional. Setting: A rehabilitation facility at a university-based comprehensive cancer center in the southeastern United States. Sample: 14 breast cancer survivors aged 40-71 years. Methods: Body fat (BF) percentage was estimated via bioelectric impedance analysis (BIA), air displacement plethysmography (ADP), and skinfold thickness (SKF) using both three- and seven-site algorithms, where reliability of the methods was evaluated by conducting two tests for each method (test 1 and test 2), one immediately after the other. An analysis of variance was used to compare the results of BF percentage among the four methods. Intraclass correlation coefficient (ICC) was used to test the reliability of each method. Main Research Variable: BF percentage. Findings: Significant differences in BF percentage were observed between BIA and all other methods (three-site SKF, p < 0.001; seven-site SKF, p < 0.001; ADP, p = 0.002). No significant differences (p > 0.05) in BF percentage between three-site SKF, seven-site SKF, and ADP were observed. ICCs between test 1 and test 2 for each method were BIA = 1, ADP = 0.98, three-site SKF = 0.99, and seven-site SKF = 0.94. Conclusions: ADP and both SKF methods produce similar estimates of BF percentage in all participants, whereas BIA overestimated BF percentage relative to the other measures. Caution is recommended when using BIA as the body composition method for breast cancer survivors who have completed treatment but are still undergoing adjuvant hormonal therapy. Implications for Nursing: Measurements of body composition can be implemented very easily as part of usual care and should serve as an objective outcome measure for interventions designed to promote healthy behaviors among breast cancer survivors.
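The test-retest reliability reported above is typically expressed as an intraclass correlation. The sketch below computes a two-way random-effects, single-measure ICC(2,1) on simulated body-fat readings; the data and the choice of ICC form are assumptions for illustration, not the study's records.

```python
# ICC(2,1): two-way random effects, absolute agreement, single measurement.
# Simulated test 1 / test 2 body-fat readings for 14 hypothetical participants.
import numpy as np

def icc_2_1(y):
    """y: (n_subjects, k_measurements) matrix of repeated measurements."""
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between-subject variation
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between-test variation
    ss_err = ((y - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(2)
true_bf = rng.normal(35, 5, size=14)                      # underlying body-fat percentages
tests = np.column_stack([true_bf + rng.normal(0, 0.5, 14) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(tests):.3f}")
```

With measurement noise that is small relative to the between-subject spread, the ICC approaches 1, matching the near-perfect values reported in the abstract.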
Abstract:
Skin temperature is an important physiological measure that can reflect the presence of illness and injury as well as provide insight into the localised interactions between the body and the environment. The aim of this systematic review was to analyse the agreement between conductive and infrared means of assessing skin temperature, which are commonly employed in clinical, occupational, sports medicine, public health and research settings. Full-text eligibility was determined independently by two reviewers. Studies meeting the following criteria were included in the review: 1) the literature was written in English, 2) participants were human (in vivo), 3) skin surface temperature was assessed at the same site, 4) with at least two commercially available devices employed—one conductive and one infrared—and 5) had skin temperature data reported in the study. A computerised search of four electronic databases, using a combination of 21 keywords, and citation tracking was performed in January 2015. A total of 8,602 records were returned. Methodological quality was assessed by 2 authors independently, using the Cochrane risk of bias tool. A total of 16 articles (n = 245) met the inclusion criteria. Devices were classified as being in agreement if they met the clinically meaningful recommendations of mean differences within ±0.5 °C and limits of agreement of ±1.0 °C. Twelve of the included studies found mean differences greater than ±0.5 °C between conductive and infrared devices. In the presence of external stimulus (e.g. exercise and/or heat) five studies found exacerbated measurement differences between conductive and infrared devices. This is the first review that has attempted to investigate the presence of any systematic bias between infrared and conductive measures by collectively evaluating the current evidence base. There was also a consistently high risk of bias across the studies, in terms of sample size, random sequence generation, allocation concealment, blinding and incomplete outcome data. This systematic review questions the suitability of using infrared cameras in stable, resting, laboratory conditions. Furthermore, both infrared cameras and thermometers in the presence of sweat and environmental heat demonstrate poor agreement when compared to conductive devices. These findings have implications for clinical, occupational, public health, sports science and research fields.
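The agreement criteria used in the review (mean difference within ±0.5 °C and limits of agreement within ±1.0 °C) correspond to a Bland-Altman style comparison. The sketch below applies those criteria to simulated paired readings; the temperatures and the assumed device offset are illustrative, not data from the included studies.

```python
# Bland-Altman style agreement check against the review's +/-0.5 degC and +/-1.0 degC criteria.
# Simulated paired skin-temperature readings; the 0.3 degC infrared offset is an assumption.
import numpy as np

rng = np.random.default_rng(3)
conductive = rng.normal(33.0, 0.8, size=30)               # conductive device, degC
infrared = conductive + rng.normal(0.3, 0.4, size=30)     # infrared device, degC

diff = infrared - conductive
mean_diff = diff.mean()
loa = mean_diff + np.array([-1.96, 1.96]) * diff.std(ddof=1)   # 95% limits of agreement

agree = abs(mean_diff) <= 0.5 and np.all(np.abs(loa) <= 1.0)
print(f"mean difference = {mean_diff:.2f} degC, LoA = ({loa[0]:.2f}, {loa[1]:.2f}) degC")
print("meets the review's agreement criteria:", agree)
```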
Abstract:
With the smartphone revolution, consumer-focused mobile medical applications (apps) have flooded the market without restriction. We searched the market for commercially available apps on all mobile platforms that could provide automated risk analysis of the most serious skin cancer, melanoma. We tested 5 relevant apps against 15 images of previously excised skin lesions and compared the apps' risk grades to the known histopathologic diagnosis of the lesions. Two of the apps did not identify any of the melanomas. The remaining 3 apps obtained 80% sensitivity for melanoma risk identification; specificities for the 5 apps ranged from 20%-100%. Each app provided its own grading and recommendation scale and included a disclaimer recommending regular dermatologist evaluation regardless of the analysis outcome. The results indicate that autonomous lesion analysis is not yet ready for use as a triage tool. More concerning is the lack of restrictions and regulations for these applications.
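The 80% sensitivity figure above follows directly from how the app grades are scored against histopathology. The sketch below shows that calculation on an illustrative split (5 melanomas among 15 lesions, which is an assumption rather than the study's reported composition): 4 of 5 correctly flagged melanomas gives 80% sensitivity.

```python
# Sensitivity / specificity of an app's risk grades against histopathology.
# Labels are illustrative only: 1 = melanoma (or high-risk grade), 0 = benign (or low-risk).
histopathology = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]   # 15 excised lesions
app_grade      = [1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # one app's output

tp = sum(t == 1 and p == 1 for t, p in zip(histopathology, app_grade))
fn = sum(t == 1 and p == 0 for t, p in zip(histopathology, app_grade))
tn = sum(t == 0 and p == 0 for t, p in zip(histopathology, app_grade))
fp = sum(t == 0 and p == 1 for t, p in zip(histopathology, app_grade))

print(f"sensitivity = {tp / (tp + fn):.0%}")   # 4/5 = 80%
print(f"specificity = {tn / (tn + fp):.0%}")   # 8/10 = 80%
```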
Abstract:
This paper proposes new metrics and a performance-assessment framework for vision-based weed and fruit detection and classification algorithms. In order to compare algorithms and decide which one to use for a particular application, it is necessary to take into account that the performance obtained in a series of tests is subject to uncertainty. Such characterisation of uncertainty seems not to be captured by the performance metrics currently reported in the literature. Therefore, we pose the problem as a general problem of scientific inference, which arises out of incomplete information, and propose as a metric of performance the (posterior) predictive probabilities that the algorithms will provide a correct outcome for target and background detection. We detail the framework through which these predicted probabilities can be obtained, which is Bayesian in nature. As an illustrative example, we apply the framework to the assessment of performance of four algorithms that could potentially be used in the detection of capsicums (peppers).
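Under a simple Beta-Binomial model, the posterior predictive probability advocated above has a closed form: with a Beta(a, b) prior and k correct outcomes in n trials, the probability that the next outcome is correct is (a + k)/(a + b + n). The sketch below uses a uniform Beta(1, 1) prior and illustrative counts; the paper's exact framework and priors may differ.

```python
# Posterior predictive probability of a correct detection under a Beta-Binomial model.
# The Beta(1, 1) prior and the 46-of-50 count are assumptions for illustration.
def posterior_predictive_correct(k_correct, n_trials, a=1.0, b=1.0):
    """Beta(a, b) prior + Binomial likelihood -> P(next outcome is correct | data)."""
    return (a + k_correct) / (a + b + n_trials)

# e.g. an algorithm that detected capsicums correctly in 46 of 50 test images
print(f"P(correct on next image) = {posterior_predictive_correct(46, 50):.3f}")
```

Reporting this predictive probability, rather than the raw test-set rate of 92%, builds the uncertainty from the finite number of trials into the performance figure.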
Abstract:
The majority of Australian weeds are exotic plant species that were intentionally introduced for a variety of horticultural and agricultural purposes. A border weed risk assessment system (WRA) was implemented in 1997 in order to reduce the high economic costs and massive environmental damage associated with introducing serious weeds. We review the behaviour of this system with regard to eight years of data collected from the assessment of species proposed for importation or held within genetic resource centres in Australia. From a taxonomic perspective, species from the Chenopodiaceae and Poaceae were most likely to be rejected and those from the Arecaceae and Flacourtiaceae were most likely to be accepted. Dendrogram analysis and classification and regression tree (TREE) models were also used to analyse the data. The latter revealed that a small subset of the 35 variables assessed was highly associated with the outcome of the original assessment. The TREE model examining all of the data contained just five variables: unintentional human dispersal, congeneric weed, weed elsewhere, tolerates or benefits from mutilation, cultivation or fire, and reproduction by vegetative propagation. It gave the same outcome as the full WRA model for 71% of species. Weed elsewhere was not the first splitting variable in this model, indicating that the WRA has a capacity for capturing species that have no history of weediness. A reduced TREE model (in which human-mediated variables had been removed) contained four variables: broad climate suitability, reproduction in less than or equal to 1 year, self-fertilisation, and tolerates or benefits from mutilation, cultivation or fire. It yielded the same outcome as the full WRA model for 65% of species. Data inconsistencies and the relative importance of questions are discussed, with some recommendations made for improving the use of the system.
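The TREE analysis above amounts to fitting a classification tree to the WRA question responses and comparing its predicted outcome with the full assessment. The sketch below fits such a tree to simulated yes/no responses using the five variable names mentioned in the abstract; the data, weights, and tree settings are assumptions, not WRA records.

```python
# Classification tree over simulated weed-risk-assessment style responses.
# Feature names follow the abstract; responses and outcome weights are simulated.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(4)
features = ["unintentional_human_dispersal", "congeneric_weed", "weed_elsewhere",
            "tolerates_mutilation_cultivation_fire", "vegetative_propagation"]
X = rng.integers(0, 2, size=(300, len(features)))                 # binary answers
logit = X @ np.array([0.8, 0.6, 1.5, 0.7, 0.5]) - 1.8             # assumed weights
y = (rng.random(300) < 1 / (1 + np.exp(-logit))).astype(int)      # 1 = reject, 0 = accept

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)
print(export_text(tree, feature_names=features))
print(f"agreement with the simulated full assessment: {tree.score(X, y):.0%}")
```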
Abstract:
RFLP markers are currently the most appropriate marker system for the identification of uncharacterised polymorphism at the interspecific and intergeneric level. Given the benefits of a PCR-based marker system and the availability of sequence information for many Solanaceous cDNA clones, it is now possible to target, for primer development, conserved fragments that flank sequences possessing interspecific polymorphism. The potential outcome is the development of a suite of markers that amplify widely in Solanaceae. Temperature gradient gel electrophoresis (TGGE) is a relatively inexpensive gel-based system that is suitable for the detection of most single-base changes. TGGE can be used to screen for both known and unknown polymorphisms, and has been assessed here for the development of PCR-based markers that are useful for the detection of interspecific variation within Solanaceae. Fifteen markers are presented where differences between Lycopersicon esculentum and L. pennellii have been detected by TGGE. The markers were assessed on a wider selection of plant species and found to be potentially useful for the identification of interspecific and intergeneric polymorphism in Solanaceous plants.
Abstract:
Approximately one-third of stroke patients experience depression. Stroke also has a profound effect on the lives of caregivers of stroke survivors. However, depression in this latter population has received little attention. In this study the objectives were to determine which factors are associated with and can be used to predict depression at different points in time after stroke; to compare different depression assessment methods among stroke patients; and to determine the prevalence, course and associated factors of depression among the caregivers of stroke patients. A total of 100 consecutive hospital-admitted patients no older than 70 years of age were followed for 18 months after having their first ischaemic stroke. Depression was assessed according to the Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R), Beck Depression Inventory (BDI), Hamilton Rating Scale for Depression (HRSD), Visual Analogue Mood Scale (VAMS), Clinical Global Impression (CGI) and caregiver ratings. Neurological assessments and a comprehensive neuropsychological test battery were performed. Depression in caregivers was assessed by BDI. Depressive symptoms had early onsets in most cases. Mild depressive symptoms were often persistent with little change during the 18-month follow-up, although there was an increase in major depression over the same time interval. Stroke severity was associated with depression especially from 6 to 12 months post-stroke. At the acute phase, older patients were at higher risk of depression, and a higher proportion of men were depressed at 18 months post-stroke. Of the various depression assessment methods, none stood clearly apart from the others. The feasibility of each did not differ greatly, but prevalence rates differed widely according to the different criteria. When compared against DSM-III-R criteria, sensitivity and specificity were acceptable for the CGI, BDI, and HRSD. The CGI and BDI had better sensitivity than the more specific HRSD. The VAMS seemed not to be a reliable method for assessing depression among stroke patients. The caregivers often rated patients' depression as more severe than did the patients themselves. Moreover, their ratings seemed to be influenced by their own depression. Of the caregivers, 30-33% were depressed. At the acute phase, caregiver depression was associated with the severity of the stroke and the older age of the patient. The best predictor of caregiver depression at later follow-up was caregiver depression at the acute phase. The results suggest that depression should be assessed during the early post-stroke period and that the follow-up of those at risk of poor emotional outcome should be extended beyond the first year post-stroke. Further, the assessment of well-being of the caregivers of stroke patients should be included as a part of a rehabilitation plan for stroke patients.
Abstract:
The current Ebola virus disease (EVD) epidemic in West Africa is unprecedented in scale, and Sierra Leone is the most severely affected country. The case fatality risk (CFR) and hospitalization fatality risk (HFR) were used to characterize the severity of infections in confirmed and probable EVD cases in Sierra Leone. Proportional hazards regression models were used to investigate factors associated with the risk of death in EVD cases. In total, there were 17 318 EVD cases reported in Sierra Leone from 23 May 2014 to 31 January 2015. Of the probable and confirmed EVD cases with a reported final outcome, a total of 2536 deaths and 886 recoveries were reported. CFR and HFR estimates were 74·2% [95% credibility interval (CrI) 72·6–75·5] and 68·9% (95% CrI 66·2–71·6), respectively. Risks of death were higher in the youngest (0–4 years) and oldest (≥60 years) age groups, and in the calendar month of October 2014. Sex and occupational status did not significantly affect the mortality of EVD. The CFR and HFR estimates of EVD were very high in Sierra Leone.
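The CFR above can be reproduced, to a close approximation, from the reported counts of deaths and recoveries among cases with a known final outcome. The sketch below uses a Beta posterior with a uniform prior to obtain a point estimate and a 95% credibility interval; the paper's estimation method may differ in detail (for example, in how cases without a reported final outcome are handled).

```python
# Case fatality risk with a Bayesian 95% credibility interval.
# Counts come from the abstract (2536 deaths, 886 recoveries among cases with a
# reported final outcome); the uniform Beta(1, 1) prior is an assumption.
from scipy.stats import beta

deaths, recoveries = 2536, 886
posterior = beta(1 + deaths, 1 + recoveries)

cfr_point = deaths / (deaths + recoveries)
cri_lo, cri_hi = posterior.ppf([0.025, 0.975])
print(f"CFR = {cfr_point:.1%}, 95% CrI = ({cri_lo:.1%}, {cri_hi:.1%})")
```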
Abstract:
Assessment of the outcome of critical illness is complex. Severity scoring systems and organ dysfunction scores are traditional tools in mortality and morbidity prediction in intensive care. Their ability to explain risk of death is impressive for large cohorts of patients, but insufficient for an individual patient. Although events before intensive care unit (ICU) admission are prognostically important, the prediction models utilize data collected at and just after ICU admission. In addition, several biomarkers have been evaluated to predict mortality, but none has proven entirely useful in clinical practice. Therefore, new prognostic markers of critical illness are vital when evaluating the intensive care outcome. The aim of this dissertation was to investigate new measures and biological markers of critical illness and to evaluate their predictive value and association with mortality and disease severity. The impact of delay in emergency department (ED) on intensive care outcome, measured as hospital mortality and health-related quality of life (HRQoL) at 6 months, was assessed in 1537 consecutive patients admitted to medical ICU. Two new biological markers were investigated in two separate patient populations: in 231 ICU patients and 255 patients with severe sepsis or septic shock. Cell-free plasma DNA is a surrogate marker of apoptosis. Its association with disease severity and mortality rate was evaluated in ICU patients. Next, the predictive value of plasma DNA regarding mortality and its association with the degree of organ dysfunction and disease severity was evaluated in severe sepsis or septic shock. Heme oxygenase-1 (HO-1) is a potential regulator of apoptosis. Finally, HO-1 plasma concentrations and HO-1 gene polymorphisms and their association with outcome were evaluated in ICU patients. The length of ED stay was not associated with outcome of intensive care. The hospital mortality rate was significantly lower in patients admitted to the medical ICU from the ED than from the non-ED, and the HRQoL in the critically ill at 6 months was significantly lower than in the age- and sex-matched general population. In the ICU patient population, the maximum plasma DNA concentration measured during the first 96 hours in intensive care correlated significantly with disease severity and degree of organ failure and was independently associated with hospital mortality. In patients with severe sepsis or septic shock, the cell-free plasma DNA concentrations were significantly higher in ICU and hospital nonsurvivors than in survivors and showed a moderate discriminative power regarding ICU mortality. Plasma DNA was an independent predictor for ICU mortality, but not for hospital mortality. The degree of organ dysfunction correlated independently with plasma DNA concentration in severe sepsis and plasma HO-1 concentration in ICU patients. The HO-1 -413T/GT(L)/+99C haplotype was associated with HO-1 plasma levels and frequency of multiple organ dysfunction. Plasma DNA and HO-1 concentrations may support the assessment of outcome or organ failure development in critically ill patients, although their value is limited and requires further evaluation.
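The "moderate discriminative power" of plasma DNA for ICU mortality mentioned above is conventionally summarised by the area under the ROC curve. The sketch below computes an AUC on simulated concentrations and outcomes; the values are not study data and only show the form of the calculation.

```python
# ROC AUC as a summary of a biomarker's discrimination for ICU mortality.
# Concentrations and outcomes are simulated, not data from the dissertation.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
died = rng.integers(0, 2, size=255)                               # 1 = ICU non-survivor
plasma_dna = rng.lognormal(mean=np.where(died == 1, 0.5, 0.0), sigma=0.6)

auc = roc_auc_score(died, plasma_dna)
print(f"AUC for ICU mortality = {auc:.2f}  (0.5 = no discrimination, 1.0 = perfect)")
```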