838 results for Coping Outcome
Abstract:
Objective: Cognitive-behavioural therapy (CBT) has been used effectively in the treatment of alcohol dependence. Clinical studies report that the anticraving drug naltrexone is a useful adjunct to treatment. Currently, few data are available on the impact of adding this medication to programmes in more typical outpatient rehabilitation settings. The objective of this study was to examine the impact on outcome of adding naltrexone to an established outpatient alcohol rehabilitation programme which employed CBT. Method: Fifty patients participated in an established 12-week, outpatient, 'contract'-based alcohol abstinence programme which employed CBT. They also received naltrexone 50 mg orally daily (CBT + naltrexone). Outcomes were compared with those of 50 historical, matched controls, all of whom had participated in the same programme without an anticraving medication (CBT alone). All patients met DSM-IV criteria for alcohol dependence. Results: Programme attendance across the eight treatment sessions was lower in the CBT alone group (p < 0.001). Relapse to alcohol use occurred sooner and more frequently in the CBT alone group (p < 0.001). Rehabilitation programme completion at 12 weeks was 88% (CBT + naltrexone) compared with 36% (CBT alone) (p < 0.001). Alcohol abstinence at 12 weeks was 76% (CBT + naltrexone) compared with 18% (CBT alone) (p < 0.001). Conclusion: When the same outpatient rehabilitation programme was employed and outcomes were compared using matched historical controls, the addition of naltrexone substantially improved programme attendance, programme completion and reported alcohol abstinence. In a typical outpatient programme, the addition of naltrexone was associated with significantly improved programme participation and better outcomes, and was well tolerated.
Abstract:
Mice transgenic for the E6/E7 oncogenes of Human Papillomavirus type 16 display life-long expression of E6 in lens and skin epithelium, and develop inflammatory skin disease late in life, which progresses to papillomata and squamous carcinoma in some mice. We asked whether endogenous expression of E6 induced a specific immunological outcome, i.e. immunity or tolerance, or whether the mice remained immunologically naive to E6. We show that prior to the onset of skin disease, E6-transgenic mice did not develop a spontaneous E6-directed antibody response, nor did they display T-cell proliferative responses to dominant T-helper epitope peptides within E6. In contrast, old mice in which skin disease had arisen developed antibodies to E6. We also show that following immunisation with E6, specific antibody responses did not differ significantly among groups of E6-transgenic mice of different ages (and therefore of different durations and amounts of exposure to endogenous E6) and non-transgenic controls. Additionally, E6 immunisation-induced T-cell proliferative responses were similar in E6-transgenic and non-transgenic mice. These data are consistent with the interpretation that unimmunised E6-transgenic mice that have not developed inflammatory skin disease remain immunologically naive to E6 at both the B-cell and T-helper levels. There are implications for E6-mediated tumorigenesis in humans, and for the development of putative E6 therapeutic vaccines. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
Background: Patients with spinal cord injury (SCI) have always posed difficulties for the diagnosis of an acute abdomen. The aim of the present study was to define this problem retrospectively at Princess Alexandra Hospital and to assess the results of treatment for these patients. Methods: A retrospective review was conducted of 133 SCI patients admitted with an acute abdomen in the 16 years prior to this analysis at the Spinal Injuries Unit (SIU) of Princess Alexandra Hospital. Twenty-one patients conformed to the study criteria. All had sustained traumatic SCI at or above the level of T11, more than 1 month prior to admission. Results: There were 13 male and 8 female patients, aged 26-79 years. The time lapse between SCI and the onset of an acute abdomen ranged from 1.5 months to 27 years. The most common level of injury was C6 (6 patients); 18 patients had injury levels above T6 and 3 had injuries below this level. The time taken to diagnose the cause of the acute abdomen ranged between 1 day and 3 months. Investigations were useful in making the diagnosis in 61.9% of cases. Fourteen patients had surgical interventions; 5 had surgical complications, and there were 2 deaths, giving a study mortality of 9.5%. The length of follow up was 1-132 months. Conclusion: An aggressive approach to the diagnosis and treatment of the acute abdomen in SCI patients with suspicious symptoms is recommended. A high index of suspicion should be maintained in patients with pre-existing SCI who present with abdominal trauma.
Abstract:
To examine whether nucleolar organizer regions detected by argyrophilia (Ag-NOR counts) can be used as a prognostic indicator in phyllodes tumors of the breast, and to compare their usefulness with that of DNA flow cytometric analysis, 28 breast phyllodes tumors (15 benign, 2 borderline and 11 malignant) were subjected to Ag-NOR staining and counting as well as DNA flow cytometric analysis. S-phase fraction and DNA ploidy analysis showed useful trends for improving outcome predictions in malignant phyllodes tumors. High Ag-NOR counts, in contrast, were significant in predicting survival status (P = 0.013) and approached statistical significance in predicting survival time (P = 0.07). In predicting survival status, Ag-NOR counts performed significantly better than ploidy analysis (P = 0.02) and S-phase fraction (P < 0.01). Only S-phase fraction was significantly predictive of survival time (P = 0.025). It is concluded that Ag-NOR counts and DNA flow cytometric analysis, both easily performed on paraffin sections, give information that can improve predictions made by histopathological classification. Ag-NOR counts are significant in predicting survival in the presence of histopathological features of malignancy.
Abstract:
Primary objective: To examine a theoretical model which suggests that both psychological and neuropsychological factors contribute to deficits in self-awareness and self-regulation. Research design: Multivariate design including correlations and analysis of variance (ANOVA). Methods: Sixty-one subjects with acquired brain injury (ABI) were administered standardized measures of self-awareness and self-regulation. Psychological factors included measures of coping-related denial, personality-related denial and personality change. Neuropsychological factors included an estimate of IQ and two measures of executive functioning that assess the capacity for volition and purposive behaviour. Main outcomes and results: The findings indicated that neuropsychological factors had a more direct effect on deficits in self-awareness and self-regulation than psychological factors. In general, measures of executive functioning had a direct relationship, while measures of coping-related and personality-related denial had an indirect relationship, with measures of self-awareness and self-regulation. Conclusion: The findings highlight the importance of measuring both neuropsychological and psychological factors and demonstrate that the relative contribution of these variables varies across different levels of self-awareness and self-regulation.
Abstract:
The efficacy of psychological treatments emphasising a self-management approach to chronic pain has been demonstrated by substantial empirical research. Nevertheless, high drop-out and relapse rates and low or unsuccessful engagement in self-management pain rehabilitation programs have prompted the suggestion that people vary in their readiness to adopt a self-management approach to their pain. The Pain Stages of Change Questionnaire (PSOCQ) was developed to assess a patient's readiness to adopt a self-management approach to their chronic pain, and preliminary evidence has supported its psychometric properties. The current study was designed to further examine the psychometric properties of the PSOCQ, including its reliability, factorial structure and predictive validity. A total of 107 patients with an average age of 36.2 years (SD = 10.63) attending a multi-disciplinary pain management program completed the PSOCQ, the Pain Self-Efficacy Questionnaire (PSEQ) and the West Haven-Yale Multidimensional Pain Inventory (WHYMPI) at pre-admission and at discharge from the program. Initial data analysis found inadequate internal consistencies for the precontemplation and action scales of the PSOCQ and a high correlation (r = 0.66, P < 0.01) between the action and maintenance scales. Principal component analysis supported a two-factor structure: 'Contemplation' and 'Engagement'. Subsequent analyses revealed that the PSEQ was a better predictor of treatment outcome than the PSOCQ scales. Discussion centres upon the utility of the PSOCQ in a clinical pain setting in light of the above findings, and on the need for further research. (C) 2002 International Association for the Study of Pain. Published by Elsevier Science B.V. All rights reserved.
Abstract:
The outcome effect occurs when an evaluator who has knowledge of the outcome of a decision assesses the quality of the judgment of that decision maker. If the evaluator has knowledge of a negative outcome, that knowledge negatively influences his or her assessment of the ex ante judgment. For instance, jurors in a lawsuit brought against an auditor for alleged negligence are informed of an undetected fraud, even though an unqualified opinion was issued. This paper reports the results of an experiment in an applied audit judgment setting that examined methods of mitigating the outcome effect by means of instructions. The results showed that simply instructing or warning the evaluator about the potential biasing effects of outcome information was only weakly effective. However, instructions that stressed either (1) the cognitive non-normativeness of the outcome effect or (2) the seriousness and gravity of the evaluation ameliorated the effect significantly. From a theoretical perspective, the results suggest that there may be both motivational and cognitive components to the outcome effect. In all, the findings suggest that awareness of the outcome effect and the use of relatively nonintrusive instructions to evaluators may effectively counteract the potential for outcome bias.
Abstract:
The objective of this study was to determine the mortality rate and the functional outcomes of stroke patients admitted to the intensive care unit (ICU) and to identify predictors of poor outcome in this population. The records of all patients admitted to the ICU with a diagnosis of stroke between January 1994 and December 1999 were reviewed. Patients with subarachnoid haemorrhage were excluded. Data were collected on clinical and biological variables, risk factors for stroke and the presence of comorbidities. Mortality (ICU, in-hospital and three-month) and functional outcome were used as end-points. In the six-year period, 61 patients were admitted to the ICU with either haemorrhagic or ischaemic stroke. Medical records were available for only 58 patients: 23 ischaemic and 35 haemorrhagic strokes. The ICU, in-hospital and three-month mortality rates were 36%, 47% and 52% respectively. There were no significant differences in the prevalence of premorbid risk factors between survivors and non-survivors. The mean Barthel score differed significantly between independent and dependent survivors (94 +/- 6 vs 45 +/- 26, P < 0.001). A substantially greater proportion of patients with good functional outcomes had low Rankin scores (92% vs 11%, P < 0.001). Only 46% of those who were alive at three months were functionally independent. Intensive care admission was associated with a high mortality rate and a high likelihood of a dependent lifestyle after hospital discharge. Haemorrhagic stroke, fixed dilated pupil(s) and GCS < 10 during assessment were associated with increased mortality and poor functional outcome.
Abstract:
Development of a self-report measure of coping specific to multiple sclerosis (MS) caregiving is needed to advance our understanding of the role of coping in adaptation to caring for a person with MS and to address the lack of empirical data on MS caregiving. A total of 213 MS caregivers and their care recipients completed the Coping with MS Caregiving Inventory (CMSCI) and measures of adjustment (psychological distress), appraisal and illness. A subsample (n = 64) also completed the Ways of Coping Checklist (WCC) and additional adjustment measures (depression, caregiving impact, dyadic adjustment, and relationship conflict and reciprocity). Factor analyses revealed five factors: Supportive Engagement, Criticism and Coercion, Practical Assistance, Avoidance, and Positive Reframing. Subscales had internal reliabilities comparable to those of similar scales and were empirically distinct. Preliminary construct validation data are consistent with recent MS caregiving research linking passive avoidant emotion-focused coping with poorer adjustment, and with relationship-focused caregiving research linking greater reliance on positive relationship-focused coping and less reliance on criticism with better adjustment. Results extend this research by revealing new relations between coping and adaptation to MS caregiving. Convergent validation data suggest that although the inventory differs from the WCC, it shares certain conceptual similarities with that scale.
Abstract:
The present study examined the utility of a stress and coping model of adaptation to a homeless shelter among homeless adolescents. Seventy-eight homeless adolescents were interviewed and completed self-administered scales at Time 1 (day of shelter entry) and Time 2 (day of discharge). The mean duration of stay at the shelter was 7.23 days (SD = 7.01). Predictors included appraisal (threat and self-efficacy), coping resources, and coping strategies (productive, nonproductive, and reference-to-others coping). Adjustment outcomes were Time 1 measures of global distress, physical health, clinician- and youthworker-rated social adjustment, and externalizing behavior, and Time 2 youthworker-rated social adjustment and goal achievement. Results of hierarchical regression analyses indicated that, after controlling for the effects of relevant background variables (number of other shelters visited; sexual, emotional, and physical abuse), measures of coping resources, appraisal, and coping strategies evidenced distinct relations with measures of adjustment in ways consistent with the model's predictions, with few exceptions. In cross-sectional analyses, better Time 1 adjustment was related to reports of higher levels of coping resources, self-efficacy beliefs, and productive coping strategies, and of lower levels of threat appraisal and nonproductive coping strategies. Prospective analyses showed a link between higher levels of reference-to-others coping strategies and greater goal achievement and, unexpectedly, an association between lower self-efficacy beliefs and better Time 2 youthworker-rated social adjustment. Hence, whereas the prospective analyses provide only limited support for the use of a stress and coping model in explaining the adjustment of homeless adolescents to a crisis shelter, the cross-sectional findings provide stronger support.
Abstract:
The present study examined the comparative efficacy of intervening at the caregiver/care-recipient dyadic level, versus the individual caregiver level, for caregivers and their care-recipients with HIV/AIDS. Participants were randomly assigned to a Dyad Intervention (DI), a Caregiver Intervention (CI) or Wait List Control group (WLC), and assessed by interview and self-administered scales immediately before treatment and eight weeks later. Participants in the intervention groups also completed a four-month follow-up assessment. Dependent variables included global distress, social adjustment, dyadic adjustment, subjective health status, HIV/AIDS knowledge and target problem ratings. Results showed that caregivers in the DI group showed greater improvement from pre- to post-treatment on global distress, dyadic adjustment and target problems than the CI and WLC caregivers. The CI and DI caregivers showed greater improvement than the WLC group on all dependent variables except social adjustment. Care-recipients in the DI group improved significantly from pre- to post-treatment on dyadic adjustment, social adjustment, knowledge, subjective health status and Target Problem 1, whereas the CI and WLC care-recipients failed to improve on any of these measures. The treatment gains made by the DI caregivers and care-recipients on most dependent variables were maintained at a four-month follow-up. Findings support a reciprocal determinism approach to the process of dyadic adjustment and suggest that intervening at the caregiver/care-recipient level may produce better outcomes for both the caregiver and care-recipient than intervening at the individual caregiver level.