893 results for Score Normalization
Abstract:
Background Some patients visit a hospital's emergency department (ED) for reasons other than an urgent medical condition. There is evidence that this practice may differ among patients from different backgrounds. The objective of this study was to examine the reasons why patients from a non-English-speaking background (NESB) and patients from an English-speaking background but not born in Australia (ESB-NBA) visit the ED, as compared to patients from an English-speaking background born in Australia (ESB-BA). Methods A cross-sectional survey was conducted at the ED of a tertiary hospital in metropolitan Brisbane, Queensland, Australia. Over a four-month period, patients who were assigned an Australasian Triage Scale score of 3, 4 or 5 were surveyed. Pearson chi-square tests and multivariate logistic regression analyses were performed to examine the differences between the ESB and NESB patients' reported reasons for attending the ED. Results A total of 828 patients participated in this study. Compared to ESB-BA patients, NESB patients were less likely to consider contacting a general practitioner (GP) before attending the ED (odds ratio (OR) 0.6, 95% confidence interval (CI) 0.4–0.8, p < .05), while ESB-NBA patients were more likely to do so (OR 1.7, 95% CI 1.1–2.5, p < .05). Both the NESB patients and the ESB-NBA patients were far more likely than ESB-BA patients to report that they had visited the ED because they did not have a GP (OR 7.9, 95% CI 4.7–13.4, p < .001 and OR 2.2, 95% CI 1.1–4.4, p < .05, respectively) and less likely to think that the ED could deal with their problem better than a GP (OR 0.5, 95% CI 0.3–0.8, p < .05 and OR 0.7, 95% CI 0.3–0.9, p < .05, respectively). The NESB patients were also more likely to think it would take too long to make an appointment to consult a GP (OR 6.2, 95% CI 3.7–10.4, p < .001). Conclusions NESB patients were the least likely to consider contacting a GP before attending hospital EDs. Educational interventions may help direct NESB people to the appropriate health services and therefore reduce the burden on tertiary hospital EDs.
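As a methods note for readers unfamiliar with the analysis above, the following Python sketch shows how adjusted odds ratios of this kind are typically obtained with multivariate logistic regression. The variable names (considered_gp, background, age_group) and the simulated data are hypothetical; they are not the study's dataset.

```python
# Hedged sketch: how adjusted odds ratios such as those reported above are
# typically obtained with multivariate logistic regression. Column names
# ("considered_gp", "background", "age_group") are hypothetical; the study's
# actual variables are not published in the abstract.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 828  # sample size reported in the abstract
df = pd.DataFrame({
    "considered_gp": rng.integers(0, 2, n),                 # 1 = considered contacting a GP first
    "background": rng.choice(["ESB-BA", "ESB-NBA", "NESB"], n),
    "age_group": rng.choice(["18-39", "40-64", "65+"], n),
})

# Logistic regression with ESB-BA as the reference category, adjusted for age group.
model = smf.logit(
    "considered_gp ~ C(background, Treatment(reference='ESB-BA')) + C(age_group)",
    data=df,
).fit(disp=False)

# Exponentiated coefficients give odds ratios; the CI columns give the 95% bounds.
ors = np.exp(model.params)
ci = np.exp(model.conf_int())
print(pd.concat([ors.rename("OR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```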
Abstract:
Objective - Report long-term outcomes of the NOURISH randomized controlled trial (RCT) that evaluated a universal intervention commencing in infancy to provide anticipatory guidance to first-time mothers on ‘protective’ complementary feeding practices, which were hypothesized to reduce childhood obesity risk. Subjects and Methods - The NOURISH RCT enrolled 698 mothers (mean age 30.1 years, SD=5.3) with healthy term infants (51% female). Mothers were randomly allocated to usual care or to attend two 6-session, 12-week group education modules. Outcomes were assessed five times: baseline (infants 4.3 months); 6 months after module 1 (infants 14 months); 6 months after module 2 (infants 2 years); and at 3.5 and 5 years of age. Maternal feeding practices were self-reported using validated questionnaires. BMI Z-score was calculated from measured child height and weight. Linear mixed models evaluated the intervention (group) effect across time. Results - Retention at 5 years of age was 61%. Across ages 2-5 years, intervention mothers reported less frequent use of non-responsive feeding practices on 6/9 scales. At 5 years they also reported more appropriate responses to food refusal on 7/12 items (Ps ≤ .05). No statistically significant group effect was noted for anthropometric outcomes (BMI Z-score: P=.06) or the prevalence of overweight/obesity (control 13.3% vs. intervention 11.4%, P=.66). Conclusions - Anticipatory guidance on complementary feeding resulted in first-time mothers reporting increased use of protective feeding practices. These intervention effects were sustained up to five years of age and were paralleled by a non-significant trend for lower child BMI Z-scores at all post-intervention assessment points.
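A minimal sketch of the kind of linear mixed model described above, with repeated BMI Z-score measurements nested within children and a group-by-time interaction as the intervention effect; the variable names and simulated values are illustrative only, not the NOURISH data.

```python
# Hedged sketch: one way to fit a linear mixed model of the kind described,
# with repeated BMI Z-score measurements nested within children and a
# group-by-time interaction as the intervention effect. Variable names
# ("bmi_z", "group", "visit", "child_id") are illustrative, not the trial's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_children, visits = 200, ["2y", "3.5y", "5y"]
df = pd.DataFrame({
    "child_id": np.repeat(np.arange(n_children), len(visits)),
    "visit": visits * n_children,
    "group": np.repeat(rng.choice(["control", "intervention"], n_children), len(visits)),
})
df["bmi_z"] = rng.normal(0.6, 1.0, len(df))  # simulated outcome

# Random intercept for each child; fixed effects for group, time and their interaction.
mixed = smf.mixedlm("bmi_z ~ group * visit", data=df, groups=df["child_id"]).fit()
print(mixed.summary())
```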
Abstract:
Background: It is important for nutrition intervention in malnourished patients to be guided by accurate evaluation and detection of small changes in the patient's nutrition status over time. However, the current Subjective Global Assessment (SGA) is not able to detect changes over a short period of time. The aim of the study was to determine whether the 7-point SGA is more time-sensitive to nutrition changes than the conventional SGA. Methods: In this prospective study, 67 adult inpatients assessed as malnourished using both the 7-point SGA and the conventional SGA were recruited. Each patient received nutrition intervention and was followed up post-discharge. Patients were reassessed using both tools at 1, 3 and 5 months from the baseline assessment. Results: It took a significantly shorter time to see a one-point change using the 7-point SGA compared to the conventional SGA (median: 1 month vs. 3 months, p = 0.002). The likelihood of at least a one-point change was 6.74 times greater with the 7-point SGA than with the conventional SGA after controlling for age, gender and medical specialty (odds ratio = 6.74, 95% CI 2.88-15.80, p < 0.001). Fifty-six percent of patients who had no change in conventional SGA score had changes detected using the 7-point SGA. The level of agreement was 100% (k = 1, p < 0.001) between the 7-point SGA and the 3-point SGA, and 83% (k = 0.726, p < 0.001) between two blinded assessors for the 7-point SGA. Conclusion: The 7-point SGA is more time-sensitive in its response to nutrition changes than the conventional SGA. It can be used to guide nutrition intervention for patients.
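The inter-assessor agreement quoted above (83%, kappa = 0.726) is a Cohen's kappa; the short sketch below illustrates that calculation on invented 7-point SGA ratings.

```python
# Hedged sketch: computing inter-rater agreement with Cohen's kappa, the
# statistic quoted for the two blinded 7-point SGA assessors. The ratings
# below are invented purely to show the calculation.
from sklearn.metrics import cohen_kappa_score

assessor_1 = [3, 4, 5, 5, 6, 4, 3, 7, 6, 5, 4, 4]  # 7-point SGA ratings, rater 1
assessor_2 = [3, 4, 5, 6, 6, 4, 3, 7, 6, 5, 4, 5]  # 7-point SGA ratings, rater 2

kappa = cohen_kappa_score(assessor_1, assessor_2)
agreement = sum(a == b for a, b in zip(assessor_1, assessor_2)) / len(assessor_1)
print(f"percent agreement = {agreement:.0%}, Cohen's kappa = {kappa:.3f}")
```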
Abstract:
A score I have been working on that investigates issues of mobility, accessing mobility, and alternative mobility. This score continues a series, called The Excentric Fixations Project, that I have been working on since 2019. It is also the middle score in a series of three processual performance scores I have been asked to develop as part of QUT's contribution to a federally funded Higher Education Participation Program, in which universities in the region partner to develop, deliver and evaluate activities that build aspiration amongst disadvantaged students. The program encourages participants to use investigations of small moments of anxiety about meeting new people, taking new paths, or trying new things in order to imagine new ways of dealing with these issues at a larger level, in study, career, or life choices.
Abstract:
The Wechsler and Stanford-Binet scales are among the most commonly used tests of intelligence. In clinical practice, they often seem to be used interchangeably. This paper reports the results of two studies that compared the most recent editions of two Wechsler scales (WPPSI-III and WISC-IV) with the Stanford-Binet Fifth Edition (SB5). The participants in the first study were 36 typically developing 4-year-old children who completed the WPPSI-III and SB5 in counter-balanced order. Although correlations of composite scores ranged from r = .59 to r = .82 and were similar to those reported for earlier versions of the two instruments, more than half the sample had a score discrepancy greater than 10 points across the two instruments. In the second study, the WISC-IV and SB5 were administered to 30 children aged 12-14 years. There was a significant difference between Full Scale IQs on the two measures, with scores being higher on the WISC-IV. Differences between the two verbal scales were also significant and favoured the WISC-IV. There were moderate correlations of Full Scale IQs (r = .58) and Nonverbal IQs (r = .54), but the relationship between the two Verbal scales was not significant. For some children, notable score differences led to different categorisations of their level of intellectual ability. The findings suggest that the Wechsler and Stanford-Binet scales cannot be presumed to be interchangeable. The discussion focuses on how psychologists might reconcile large differences in test scores and the need for caution when interpreting and comparing test results.
Abstract:
The output of a differential scanning fluorimetry (DSF) assay is a series of melt curves, which need to be interpreted to get value from the assay. An application that translates raw thermal melt curve data into more easily assimilated knowledge is described. This program, called “Meltdown,” conducts four main activities—control checks, curve normalization, outlier rejection, and melt temperature (Tm) estimation—and performs optimally in the presence of triplicate (or higher) sample data. The final output is a report that summarizes the results of a DSF experiment. The goal of Meltdown is not to replace human analysis of the raw fluorescence data but to provide a meaningful and comprehensive interpretation of the data to make this useful experimental technique accessible to inexperienced users, as well as providing a starting point for detailed analyses by more experienced users.
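For readers who want to see what the normalization and Tm-estimation steps involve, here is a minimal sketch on a synthetic melt curve; it is a simple derivative-based approximation of the idea, not Meltdown's published algorithm.

```python
# Hedged sketch: the abstract's normalization and Tm-estimation steps,
# illustrated on a synthetic melt curve. This is not Meltdown's published
# algorithm, only a minimal derivative-based approximation of the idea.
import numpy as np

temps = np.arange(25.0, 95.0, 0.5)                       # degrees Celsius
true_tm = 62.0
fluorescence = 1.0 / (1.0 + np.exp(-(temps - true_tm) / 2.0))  # sigmoidal melt curve
fluorescence += np.random.default_rng(2).normal(0, 0.01, temps.size)  # measurement noise

# Curve normalization: rescale fluorescence to the [0, 1] range.
norm = (fluorescence - fluorescence.min()) / (fluorescence.max() - fluorescence.min())

# Tm estimation: temperature at the maximum of the first derivative dF/dT.
dF_dT = np.gradient(norm, temps)
tm_estimate = temps[np.argmax(dF_dT)]
print(f"estimated Tm = {tm_estimate:.1f} C (true value {true_tm} C)")
```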
Abstract:
Purpose This study tested the effectiveness of a pressure ulcer (PU) prevention bundle in reducing the incidence of PUs in critically ill patients in two Saudi intensive care units (ICUs). Design A two-arm cluster randomized experimental control trial. Methods Participants in the intervention group received the PU prevention bundle, while the control group received standard skin care as per the local ICU policies. Data collected included demographic variables (age, diagnosis, comorbidities, admission trajectory, length of stay) and clinical variables (Braden Scale score, severity of organ function score, mechanical ventilation, PU presence, and staging). All patients were followed every two days from admission through to discharge, death, or up to a maximum of 28 days. Data were analyzed with descriptive correlation statistics, Kaplan-Meier survival analysis, and Poisson regression. Findings The total number of participants recruited was 140: 70 control participants (with a total of 728 days of observation) and 70 intervention participants (784 days of observation). PU cumulative incidence was significantly lower in the intervention group (7.14%) compared to the control group (32.86%). Poisson regression revealed the likelihood of PU development was 70% lower in the intervention group. The intervention group had significantly less Stage I (p = .002) and Stage II PU development (p = .026). Conclusions Significant improvements were observed in PU-related outcomes with the implementation of the PU prevention bundle in the ICU; PU incidence, severity, and total number of PUs per patient were reduced. Clinical Relevance Utilizing a bundle approach and standardized nursing language through skin assessment and translation of the knowledge to practice has the potential to impact positively on the quality of care and patient outcomes.
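A hedged sketch of the two analyses named above, Kaplan-Meier survival for time to first pressure ulcer and Poisson regression with observation days as exposure, using the lifelines and statsmodels packages; the group sizes follow the abstract, but all values are simulated.

```python
# Hedged sketch of the two analyses named in the abstract: Kaplan-Meier
# curves for time to first pressure ulcer and a Poisson model for ulcer
# counts with observation days as exposure. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 140  # total participants reported in the abstract
df = pd.DataFrame({
    "group": np.repeat(["control", "intervention"], n // 2),
    "days_observed": rng.integers(4, 29, n),              # followed up to 28 days
})
df["pu_count"] = rng.poisson(np.where(df["group"] == "control", 0.4, 0.12))
df["developed_pu"] = (df["pu_count"] > 0).astype(int)

# Kaplan-Meier: time to first pressure ulcer, censored at end of observation.
km = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    km.fit(sub["days_observed"], event_observed=sub["developed_pu"], label=name)
    print(name, "median PU-free time:", km.median_survival_time_)

# Poisson regression with log(days observed) as offset -> incidence rate ratio.
X = sm.add_constant((df["group"] == "intervention").astype(int).rename("intervention"))
poisson = sm.GLM(df["pu_count"], X, family=sm.families.Poisson(),
                 offset=np.log(df["days_observed"])).fit()
print("incidence rate ratio:", np.exp(poisson.params["intervention"]))
```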
Abstract:
Purpose To compare small nerve fiber damage in the central cornea and whorl area in participants with diabetic peripheral neuropathy (DPN) and to examine the accuracy of evaluating these 2 anatomical sites for the diagnosis of DPN. Methods A cohort of 187 participants (107 with type 1 diabetes and 80 controls) was enrolled. The neuropathy disability score (NDS) was used for the identification of DPN. The corneal nerve fiber length at the central cornea (CNFLcenter) and whorl (CNFLwhorl) was quantified using corneal confocal microscopy and a fully automated morphometric technique and compared according to the DPN status. Receiver operating characteristic analyses were used to compare the accuracy of the 2 corneal locations for the diagnosis of DPN. Results CNFLcenter and CNFLwhorl were able to differentiate all 3 groups (diabetic participants with and without DPN and controls) (P < 0.001). There was a weak but significant linear relationship for CNFLcenter and CNFLwhorl versus NDS (P < 0.001); however, the corneal location x NDS interaction was not statistically significant (P = 0.17). The area under the receiver operating characteristic curve was similar for CNFLcenter and CNFLwhorl (0.76 and 0.77, respectively, P = 0.98). The sensitivity and specificity of the cutoff points were 0.9 and 0.5 for CNFLcenter and 0.8 and 0.6 for CNFLwhorl. Conclusions Small nerve fiber pathology is comparable at the central and whorl anatomical sites of the cornea. Quantification of CNFL from the corneal center is as accurate as CNFL quantification of the whorl area for the diagnosis of DPN.
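The receiver operating characteristic analysis above can be illustrated as follows; the CNFL values and DPN labels are simulated, and the Youden-index cutoff is an assumption, since the abstract does not state how its cutoff points were chosen.

```python
# Hedged sketch: the receiver operating characteristic analysis described,
# i.e. how an AUC and a sensitivity/specificity pair at a chosen cutoff are
# derived from CNFL values. The CNFL values and DPN labels are simulated.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(4)
has_dpn = rng.integers(0, 2, 187)                             # 1 = DPN present (NDS-defined)
cnfl_center = rng.normal(np.where(has_dpn, 12.0, 17.0), 3.0)  # lower CNFL with DPN

# CNFL decreases with neuropathy, so its negative is used as the classifier score.
auc = roc_auc_score(has_dpn, -cnfl_center)
fpr, tpr, thresholds = roc_curve(has_dpn, -cnfl_center)

# Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1 (an assumption).
best = np.argmax(tpr - fpr)
print(f"AUC = {auc:.2f}, sensitivity = {tpr[best]:.2f}, "
      f"specificity = {1 - fpr[best]:.2f}, CNFL cutoff = {-thresholds[best]:.1f}")
```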
Abstract:
PURPOSE Vocational recovery is a primary treatment goal of young people with first-episode psychosis (FEP), yet treatment in this domain is often delayed due to concerns that it might be too stressful. This study aimed to examine whether a relationship exists between vocational status and level of perceived stress and daily hassles in FEP. METHODS Forty-seven FEP participants were recruited upon admission to the Early Psychosis Prevention and Intervention Centre (EPPIC), Melbourne. Demographics, psychopathology, perceived stress (Perceived Stress Scale; PSS) and daily hassles (Hassles Scale; HS) were measured. RESULTS Regarding vocational status, 19 participants were unemployed, 13 were employed, 14 were students, and 1 reported 'home duties'. ANOVAs and post hoc tests comparing the first three groups on perceived stress and daily hassles revealed that the mean PSS Total and mean PSS Distress scores of the employed group were significantly lower than those of the unemployed and student groups. Regarding hassles scores, the employed group had a significantly lower mean Hassles Intensity score than the unemployed group. Results were largely unchanged when covariates were included. There were no significant differences between the three groups in levels of anxiety, negative or positive symptoms. The employed group reported lower depression than the student group, but this finding disappeared after controlling for gender. CONCLUSIONS These results provide preliminary evidence supporting the notion that working or studying is not associated with increased perceived stress or daily hassles in FEP. The findings require replication in larger samples and in different phases of psychosis.
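A minimal sketch of the one-way ANOVA with post hoc comparisons described above, using simulated PSS Total scores and the abstract's group sizes; Tukey's HSD is used here only for illustration, since the abstract does not name the post hoc procedure.

```python
# Hedged sketch: a one-way ANOVA with pairwise post hoc comparisons of
# perceived stress across the three vocational groups, as described above.
# The PSS Total scores below are simulated, not the study's data.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(5)
unemployed = rng.normal(22, 6, 19)   # group sizes taken from the abstract
employed = rng.normal(16, 6, 13)
students = rng.normal(21, 6, 14)

f_stat, p_value = stats.f_oneway(unemployed, employed, students)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")

# Tukey HSD post hoc test identifies which pairs of groups differ.
scores = np.concatenate([unemployed, employed, students])
groups = ["unemployed"] * 19 + ["employed"] * 13 + ["student"] * 14
print(pairwise_tukeyhsd(scores, groups))
```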
Abstract:
BACKGROUND PTSD is an anxiety disorder related to exposure to a severe psychological trauma. Symptoms include re-experiencing the event, avoidance and arousal, as well as distress and impairment resulting from these symptoms. Guidelines suggest a combination of both psychological therapy and pharmacotherapy may enhance treatment response, especially in those with more severe PTSD or in those who have not responded to either intervention alone. OBJECTIVES To assess whether the combination of psychological therapy and pharmacotherapy provides a more efficacious treatment for PTSD than either of these interventions delivered separately. SEARCH STRATEGY Searches were conducted on the trial registers kept by the CCDAN group (CCDANCTR-Studies and CCDANCTR-References) to June 2010. The reference sections of included studies and several conference abstracts were also scanned. SELECTION CRITERIA Patients of any age or gender, with chronic or recent-onset PTSD arising from any type of event relevant to the diagnostic criteria, were included. A combination of any psychological therapy and pharmacotherapy was included and compared to wait list, placebo, standard treatment or either intervention alone. The primary outcome was change in total PTSD symptom severity. Other outcomes included changes in functioning, depression and anxiety symptoms, suicide attempts, substance use, withdrawal and cost. DATA COLLECTION AND ANALYSIS Two or three review authors independently selected trials, assessed their 'risk of bias' and extracted trial and outcome data. We used a fixed-effect model for meta-analysis. The relative risk was used to summarise dichotomous outcomes, and the mean difference and standardised mean difference were used to summarise continuous measures. MAIN RESULTS Four trials were eligible for inclusion; one of these trials (n = 24) was on children and adolescents. All used an SSRI and prolonged exposure or a cognitive behavioural intervention. Two trials compared combination treatment with pharmacological treatment and two compared combination treatment with psychological treatment. Only two trials reported a total PTSD symptom score and these data could not be combined. There was no strong evidence to show if there were differences between the group receiving combined interventions compared to the group receiving psychological therapy (mean difference 2.44, 95% CI -2.87 to 7.35; one study, n = 65) or pharmacotherapy (mean difference -4.70, 95% CI -10.84 to 1.44; one study, n = 25). Trialists reported no significant differences between combination and single-intervention groups in the other two studies. Very few data were reported for other outcomes, and in no case were significant differences reported. AUTHORS' CONCLUSIONS There is not enough evidence available to support or refute the effectiveness of combined psychological therapy and pharmacotherapy compared to either of these interventions alone. Further large randomised controlled trials are urgently required.
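The fixed-effect model referred to in the analysis section is the standard inverse-variance approach sketched below; the two effect estimates in the example are invented, since the review states its two total-symptom comparisons could not be combined.

```python
# Hedged sketch: inverse-variance fixed-effect pooling of mean differences,
# the meta-analytic model the review describes. The (effect, CI) pairs below
# are illustrative numbers only, not the review's data, which it reports
# could not be pooled across comparisons.
import math

def fixed_effect(effects_and_cis):
    """Pool (mean difference, lower 95% CI, upper 95% CI) tuples with inverse-variance weights."""
    weights, weighted = [], []
    for effect, lo, hi in effects_and_cis:
        se = (hi - lo) / (2 * 1.96)        # back out the standard error from the 95% CI
        w = 1.0 / se ** 2
        weights.append(w)
        weighted.append(w * effect)
    pooled = sum(weighted) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

# Illustrative studies: pooled estimate and its 95% CI.
print(fixed_effect([(1.8, -0.5, 4.1), (2.6, 0.4, 4.8)]))
```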
Abstract:
Large complex projects often fail spectacularly in terms of cost overruns and delays; witness the London Olympics and the Airbus A380. In this project, we studied the emotional intelligence (EI) of leadership teams involved in such projects. We collected our data from 370 employees in 40 project teams working on large Australian defense contracts. We asked leadership team members to complete a scale measuring their EI, and project team members to rate the success of the projects. We found it was not the mean score, but the highest EI score in the leadership team that predicted members’ project success ratings.
Abstract:
BACKGROUND Law is increasingly involved in clinical practice, particularly at the end of life, but undergraduate and postgraduate education in this area remains unsystematic. We hypothesised that attitudes to and knowledge of the law governing withholding/withdrawing treatment from adults without capacity (the WWLST law) would vary and demonstrate deficiencies among medical specialists. AIMS We investigated the perspectives, knowledge and training of medical specialists concerning the WWLST law in the three Australian states with the largest populations and medical workforces. METHODS Following expert legal review, specialist focus groups, pre-testing and piloting in each state, seven specialties involved with end-of-life care were surveyed, with a variety of statistical analyses applied to the responses. RESULTS Respondents supported the need to know and follow the law. There were mixed views about its helpfulness in medical decision-making. Over half the respondents conceded poor knowledge of the law; this was mirrored by critical gaps in knowledge that varied by specialty. There were relatively low but increasing rates of education from the undergraduate to continuing professional development (CPD) stages. Mean knowledge score did not vary significantly according to undergraduate or immediate postgraduate training, but CPD training, particularly if recent, resulted in greater knowledge. Case-based workshops were the preferred CPD instruction method. CONCLUSIONS Teaching of current and evolving law should be strengthened across all stages of medical education. This should improve understanding of the role of law, ameliorate ambivalence towards the law, and contribute to more informed deliberation about end-of-life issues with patients and families.
Abstract:
Purpose To test an interventional patient skin integrity bundle, the InSPiRE protocol, on the impact of pressure injuries (PrIs) in critically ill patients in an Australian adult intensive care unit (ICU). Methods A before-and-after design was used in which the group of patients receiving the intervention (InSPiRE protocol) was compared with a similar control group who received standard care. Data collected included demographic and clinical variables, skin assessment, PrI presence and stage, and the Sequential Organ Failure Assessment (SOFA) score. Results Overall, 207 patients were enrolled, 105 in the intervention group and 102 in the control group. Most patients were men, with a mean age of 55 years. The groups were similar on major demographic variables (age, SOFA scores, ICU length of stay). Pressure injury cumulative incidence was significantly lower in the intervention group (18%) than in the control group for skin injuries (30.4%) (χ2 = 4.271, df = 1, p = 0.039) and for mucous membrane injuries (t test = 3.27, p < 0.001). Significantly fewer PrIs developed over time in the intervention group (log-rank = 11.842, df = 1, p < 0.001), and intervention patients developed fewer skin injuries (>3 PrIs/patient = 1/105) compared with the control group (>3 injuries/patient = 10/102) (p = 0.018). Conclusion The intervention group, receiving the InSPiRE protocol, had a lower PrI cumulative incidence and a reduced number and severity of PrIs developing over time. Systematic and ongoing assessment of the patient's skin and PrI risk, as well as implementation of tailored prevention measures, are central to preventing PrIs.
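The chi-square comparison of cumulative incidence above can be approximately reproduced from the reported percentages; the counts below are back-calculated (about 19/105 intervention vs 31/102 control) and, without continuity correction, give a statistic close to the quoted 4.271.

```python
# Hedged sketch: the chi-square comparison of skin pressure-injury cumulative
# incidence between groups. Counts are back-calculated from the reported
# percentages (about 19/105 intervention vs 31/102 control) and, without
# continuity correction, reproduce a chi-square of roughly 4.27.
from scipy.stats import chi2_contingency

table = [[19, 105 - 19],    # intervention: injured, not injured
         [31, 102 - 31]]    # control: injured, not injured

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, df = {dof}, p = {p:.3f}")
```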
Abstract:
Completed as part of a Joint PhD program between Queensland University of Technology and the Royal Institute of Technology in Stockholm, Sweden, this thesis examines the effects of different government incentive policies on the demand, usage and pricing of energy efficient vehicles. This study outlines recommendations for policy makers aiming to increase the uptake of energy efficient vehicles. The study finds that whilst many government incentives have been successful in encouraging the uptake of energy efficient vehicles, policy makers need to both recognise and attempt to minimise the potential unintended consequences of such initiatives.
Abstract:
Objectives The rapid uptake of nurse practitioner (NP) services in Australia has outpaced evaluation of this service model. A randomized controlled trial was conducted to compare the effectiveness of NP service versus standard medical care in the emergency department (ED) of a major referral hospital in Australia. Methods Patients presenting with pain were randomly assigned to receive either standard ED medical care or NP care. Primary investigators were blinded to treatment allocation for data analyses. The primary outcome measure was the proportion of patients receiving analgesia within 30 minutes from being seen by care group. Secondary outcome measures were time to analgesia from presentation and documentation of and changes in pain scores. Results There were 260 patients randomized; 128 received standard care (medical practitioner led), and 130 received NP care. Two patients needed to be excluded due to incomplete consent forms. The proportion of patients who received analgesia within 30 minutes from being seen was 49.2% (n = 64) in the NP group and 29.7% (n = 38) in the standard group, a difference of 19.5% (95% confidence interval [CI] = 7.9% to 31.2%; p = 0.001). Of 165 patients who received analgesia, 64 (84.2%) received analgesia within 30 minutes in the NP group compared to 38 (42.7%) in the standard care group, a difference in proportions of 41.5% (95% CI = 28.3% to 54.7%; p < 0.001). The mean (±SD) time from being seen to analgesia was 25.4 (±39.2) minutes for NP care and 43.0 (±35.5) minutes for standard care, a difference of 17.6 minutes (95% CI = 6.1 to 29.1 minutes; p = 0.003). There was a difference in the median change in pain score of 0.5 between care groups, but this was not statistically significant (p = 0.13). Conclusions Nurse practitioner service effectiveness was demonstrated through superior performance in achieving timely analgesia for ED patients.
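The primary-outcome comparison reported above (49.2% vs 29.7%, difference 19.5%, 95% CI 7.9% to 31.2%) is consistent with a simple Wald interval for a difference in proportions, sketched below using the abstract's counts.

```python
# Hedged sketch: a difference in proportions with a Wald 95% confidence
# interval, using the abstract's headline numbers (64/130 NP vs 38/128
# standard care). This reproduces the reported 19.5% (7.9% to 31.2%).
import math

def proportion_diff_ci(x1, n1, x2, n2, z=1.96):
    """Difference in two independent proportions with a Wald confidence interval."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

diff, lo, hi = proportion_diff_ci(64, 130, 38, 128)
print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```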