922 results for patient-reported outcome


Relevance: 30.00%

Abstract:

OBJECTIVE: To determine the point at which differences in clinical assessment scores on physical ability, pain and overall condition are sufficiently large to correspond to a subjective perception of a meaningful difference from the perspective of the patient. METHODS: Forty patients with a diagnosis of rheumatoid arthritis participated in an evening of clinical assessment and one-on-one conversations with each other regarding their arthritic condition. The assessments included tender and swollen joint counts, clinician and patient global assessments, participant assessment of pain and the Health Assessment Questionnaire (HAQ) on physical ability. After each conversation, participants rated themselves relative to their conversational partner on physical ability, pain and overall condition. These subjective comparative ratings were compared with the differences in the individual clinical assessments. RESULTS: In total there were 120 conversations. Generally, participants judged themselves as less disabled than others. They rated themselves as "somewhat better" than their conversation partner when they had a (mean) 7% better score on the HAQ, 6% less pain, and 9% better global assessment. In contrast, they rated themselves as "somewhat worse" when they had a (mean) 16% worse score on the HAQ, 16% more pain, and 29% worse global assessment. CONCLUSIONS: Patients view clinically important differences in an asymmetric manner. These findings can provide guidance in interpreting results and planning clinical trials.

Relevance: 30.00%

Abstract:

Families of 52 first-admission patients diagnosed with a severe psychiatric disorder were videotaped interacting with the patient. Behavioral coding was used to derive several indices of interaction: base rates of positive and negative behavior by patients and relatives, cumulative affect of patients and relatives (the difference between the rates of positive and negative behaviors), and classification of families as affect-regulated or unregulated. Family-affect regulation reflects positive cumulative affect by both people in a given interaction. Six months after hospital discharge patients were assessed on occurrence of relapse, global functioning, severity of psychiatric symptoms, and quality of life. Relative to affect-unregulated family interaction, affect-regulated interaction predicted significantly fewer relapses, better global functioning, fewer positive and negative psychiatric symptoms, and higher patient quality of life. Most of the predictions by family-affect regulation were independent of

Relevance: 30.00%

Abstract:

Drink driving is a major public health issue, and this report examines the experiences of convicted offenders who participated in an established drink driving rehabilitation program, Under the Limit (UTL). Course completers were surveyed at least three months after they had finished the 11-week UTL course. The aim of this study was to examine whether the UTL program reduced the level of alcohol consumption, either directly as a result of participation in the UTL drink driving program or through participants' increased use of community alcohol programs. The research involved a self-report outcome evaluation to determine whether the self-reported levels of alcohol use after the course had changed from the initial alcohol use reported by offenders. The findings are based on the responses of 30 drink-driving offenders who had completed the UTL program (response rate: 20%). While a process evaluation was proposed in the initial application, the low response rate meant that this follow-up research was not feasible. The response rate was low for two reasons: it was difficult to recruit participants who consented to follow-up, and then to locate and survey those who had consented to involvement.

Relevance: 30.00%

Abstract:

Background: Critically ill patients are at high risk of pressure ulcer (PrU) development due to their high acuity and the invasive nature of the multiple interventions and therapies they receive. With reported incidence rates of PrU development in the adult critical care population as high as 56%, the identification of patients at high risk of PrU development is essential. This paper explores the association between PrU development and risk factors, as well as the use of risk assessment scales for critically ill patients in adult intensive care units. Method: A literature search from 2000 to 2012 using the CINAHL, Cochrane Library, EBSCOhost, Medline (via EBSCOhost), PubMed, ProQuest and Google Scholar databases was conducted. Key words used were: pressure ulcer/s; pressure sore/s; decubitus ulcer/s; bed sore/s; critical care; intensive care; critical illness; prevalence; incidence; prevention; management; risk factor; risk assessment scale. Results: Nineteen articles were included in this review: eight studies addressing PrU risk factors, eight addressing risk assessment scales, and three covering both. The studies reviewed identified 28 intrinsic and extrinsic risk factors which may lead to PrU development. Development of a risk factor prediction model in this patient population, although beneficial, appears problematic due to issues such as diverse diagnoses and the resulting range of patient needs. Additionally, several risk assessment instruments have been developed for early screening of patients at higher risk of developing PrU in the ICU. However, no existing risk assessment scale has been validated for identifying high-risk critically ill patients, and the majority of scales potentially over-predict patients' risk of PrU development. Conclusion: Research findings on the risk factors for pressure ulcer development are inconsistent. Additionally, there is no consistent or clear evidence demonstrating any scale to be better or more effective than another when used to identify patients at risk of PrU development. Furthermore, robust research is needed to identify the risk factors and develop valid scales for measuring the risk of PrU development in the ICU.

Relevance: 30.00%

Abstract:

Purpose: The precise shape of the three-dimensional dose distributions created by intensity-modulated radiotherapy means that the verification of patient position and setup is crucial to the outcome of the treatment. In this paper, we investigate and compare the use of two different image calibration procedures that allow extraction of patient anatomy from measured electronic portal images of intensity-modulated treatment beams. Methods and Materials: Electronic portal images of the intensity-modulated treatment beam delivered using the dynamic multileaf collimator technique were acquired. The images were formed by measuring a series of frames or segments throughout the delivery of the beams. The frames were then summed to produce an integrated portal image of the delivered beam. Two different methods for calibrating the integrated image were investigated with the aim of removing the intensity modulations of the beam. The first involved a simple point-by-point division of the integrated image by a single calibration image of the intensity-modulated beam delivered to a homogeneous polymethyl methacrylate (PMMA) phantom. The second calibration method is known as the quadratic calibration method and required a series of calibration images of the intensity-modulated beam delivered to different thicknesses of homogeneous PMMA blocks. Measurements were made using two different detector systems: a Varian amorphous silicon flat-panel imager and a Theraview camera-based system. The methods were tested first using a contrast phantom before images were acquired of intensity-modulated radiotherapy treatment delivered to the prostate and pelvic nodes of cancer patients at the Royal Marsden Hospital. Results: The results indicate that the calibration methods can be used to remove the intensity modulations of the beam, making it possible to see the outlines of bony anatomy that could be used for patient position verification. This was shown for both posterior and lateral delivered fields. Conclusions: Very little difference between the two calibration methods was observed, so the simpler division method, requiring only the single extra calibration measurement and much simpler computation, was the favored method. This new method could provide a complementary tool to existing position verification methods, and it has the advantage that it is completely passive, requiring no further dose to the patient and using only the treatment fields.
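As a concrete illustration of the simpler of the two approaches, the point-by-point division calibration described above can be sketched in a few lines of Python. This is a minimal sketch assuming the integrated portal image and the PMMA-phantom calibration image are available as registered numpy arrays; the function name, the epsilon guard and the rescaling step are illustrative choices, not the authors' implementation.

```python
import numpy as np

def division_calibrate(integrated_image: np.ndarray,
                       calibration_image: np.ndarray,
                       eps: float = 1e-6) -> np.ndarray:
    """Point-by-point division calibration of an integrated portal image.

    integrated_image  -- summed frames of the IMRT beam delivered to the patient
    calibration_image -- the same modulated beam delivered to a homogeneous PMMA phantom

    Dividing out the calibration image removes the intensity modulation of the
    beam, leaving contrast dominated by the patient's own anatomy.
    """
    if integrated_image.shape != calibration_image.shape:
        raise ValueError("images must be registered on the same pixel grid")
    ratio = integrated_image / np.maximum(calibration_image, eps)
    # Rescale to [0, 1] so bony outlines are easy to display and match.
    ratio -= ratio.min()
    if ratio.max() > 0:
        ratio /= ratio.max()
    return ratio
```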

Relevance: 30.00%

Abstract:

Yates et al (1996) provided a review of the literature on educational approaches to improving psychosocial care of terminally ill patients and their families and suggested that there was an urgent need for innovation in this area. A programme of professional development currently being offered to 181 palliative care nurses in Queensland, Australia, was also described. This paper presents research in progress evaluating this programme, which uses a quasi-experimental pre-/post-test design. It also includes process and outcome measures to assess effectiveness in improving participants' ability to provide psychosocial care to patients and families. Research examining the effectiveness of various educational programmes on care of the dying has offered equivocal results (Yates et al 1996). Degner and Gow (1988a) noted that the inconsistencies found in research into death education result from inadequate study designs, variations in the conceptualisation and measurement of programme outcomes, and flaws in data analysis. Such studies have often lacked a theoretical basis, few have employed well-controlled experimental designs, and the programme outcomes have generally been limited to participants' 'death anxiety' or other death attitudes, which have been variously defined and measured. Whilst Degner and Gow (1988b) have reported that undergraduate nursing students who participated in a care of the dying educational programme demonstrated more 'approach caring' behaviours than a control group, the impact of education programmes on patient care has rarely been examined. Failure to link education to nursing practice and subsequent clinical outcomes has, however, been seen as a major limitation of nursing knowledge in this area (Degner et al 1991). This paper describes an approach to researching the effectiveness of professional development programmes for palliative care nurses.

Relevance: 30.00%

Abstract:

We have previously reported a preliminary taxonomy of patient error. However, approaches to managing patients' contribution to error have received little attention in the literature. This paper aims to assess how patients and primary care professionals perceive the relative importance of different patient errors as a threat to patient safety. It also attempts to suggest what these groups believe may be done to reduce the errors, and how. It addresses these aims through original research that extends the nominal group analysis used to generate the error taxonomy. Interviews were conducted with 11 purposively selected groups of patients and primary care professionals in Auckland, New Zealand, during late 2007. The total number of participants was 83, including 64 patients. Each group ranked the importance of possible patient errors identified through the nominal group exercise. Approaches to managing the most important errors were then discussed. There was considerable variation among the groups in the importance rankings of the errors. Our general inductive analysis of participants' suggestions revealed the content of four inter-related actions to manage patient error: Grow relationships; Enable patients and professionals to recognise and manage patient error; be Responsive to their shared capacity for change; and Motivate them to act together for patient safety. Cultivation of this GERM of safe care was suggested to benefit from 'individualised community care'. In this approach, primary care professionals individualise, in community spaces, population health messages about patient safety events. This approach may help to reduce patient error and the tension between personal and population health-care.

Relevance: 30.00%

Abstract:

PURPOSE Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care. METHODS Eleven nominal group interviews of patients and primary health care professionals were held in Auckland, New Zealand, during late 2007. Group members reported and helped to classify types of potential error by patients. We synthesized the ideas that emerged from the nominal groups into a taxonomy of patient error. RESULTS Our taxonomy is a 3-level system encompassing 70 potential types of patient error. The first level classifies 8 categories of error into 2 main groups: action errors and mental errors. The action errors, which result in part or whole from patient behavior, are attendance errors, assertion errors, and adherence errors. The mental errors, which are errors in patient thought processes, comprise memory errors, mindfulness errors, misjudgments, and—more distally—knowledge deficits and attitudes not conducive to health. CONCLUSION The taxonomy is an early attempt to understand and recognize how patients may err and what clinicians should aim to influence so they can help patients act safely. This approach begins to balance perspectives on error but requires further research. There is a need to move beyond seeing patient, clinician, and system errors as separate categories of error. An important next step may be research that attempts to understand how patients, clinicians, and systems interact to cocreate and reduce errors.
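To make the shape of the taxonomy easier to see, the two main groups and eight categories named above can be written down as a simple data structure. The sketch below (in Python) encodes only the top two levels described in the abstract; the 70 specific error types at the third level are not reproduced, and the nesting format itself is just an illustrative choice.

```python
# Top two levels of the patient-error taxonomy as described in the abstract.
# The third level (the 70 specific error types) is not reproduced here.
PATIENT_ERROR_TAXONOMY = {
    "action errors": [        # arise in part or whole from patient behaviour
        "attendance errors",
        "assertion errors",
        "adherence errors",
    ],
    "mental errors": [        # errors in patient thought processes
        "memory errors",
        "mindfulness errors",
        "misjudgments",
        "knowledge deficits",                  # more distal contributors
        "attitudes not conducive to health",
    ],
}
```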

Relevance: 30.00%

Abstract:

Background and significance: Nurses' job dissatisfaction is associated with negative nursing and patient outcomes. One of the most powerful reasons for nurses to stay in an organisation is satisfaction with leadership. However, nurses are frequently promoted to leadership positions without appropriate preparation for the role. Although a number of leadership programs have been described, none has been tested for effectiveness using a randomised controlled trial methodology. Aims: The aims of this research were to develop an evidence-based leadership program and to test its effectiveness on nurse unit managers' (NUMs') and nursing staff's (NS's) job satisfaction, and on the leader behaviour scores of nurse unit managers. Methods: First, the study used a comprehensive literature review to examine the evidence on job satisfaction, leadership and front-line manager competencies. From this evidence a summary of leadership practices was developed to construct a two-component leadership model. The components of this model were then combined with the evidence distilled from previous leadership development programs to develop a Leadership Development Program (LDP). This evidence informed the program's design, its contents, teaching strategies and learning environment. Central to the LDP were the evidence-based leadership practices associated with increasing nurses' job satisfaction. A randomised controlled trial (RCT) design was employed to test the effectiveness of the LDP. An RCT is one of the most powerful research designs, and its use makes this study unique, as an RCT had never previously been used to evaluate a leadership program for front-line nurse managers. Thirty-nine consenting nurse unit managers from a large tertiary hospital were randomly allocated to receive either the leadership program or only the program's written information about leadership. Demographic baseline data were collected from participants in the NUM groups and the nursing staff who reported to them. Validated questionnaires measuring job satisfaction and leader behaviours were administered to the nurse unit managers and to the NS at baseline, at three months after the commencement of the intervention and at six months after the commencement of the intervention. Independent and paired t-tests were used to analyse continuous outcome variables and Chi-square tests were used for categorical data. Results: The study found that the nurse unit managers' overall job satisfaction score was higher in the intervention group than in the control group at 3 months (p = 0.016) and at 6 months (p = 0.027) post commencement of the intervention. Similarly, at 3-month testing, mean scores in the intervention group were higher in five of the six "positive" sub-categories of the leader behaviour scale when compared with the control group. The difference was significant in one sub-category, effectiveness (p = 0.015). No differences were observed in leadership behaviour scores between groups by 6 months post commencement of the intervention. Over time, at 3- and 6-month testing, there were significant increases in four transformational leader behaviour scores and in one positive transactional leader behaviour score in the intervention group. Over time, at 3-month testing there were significant increases in the three leader behaviour outcome scores; however, at 6-month testing only one of these leader behaviour outcome scores remained significantly increased.
Job satisfaction scores did not differ significantly between the NS groups at three months and at six months post commencement of the intervention. However, over time within the intervention group there was a significant increase in NS job satisfaction scores at 6-month testing. There were no significant increases in NUM leader behaviour scores in the intervention group as rated by the nursing staff who reported to them. Over time, at 3-month testing, NS rated nurse unit managers' leader behaviour scores significantly lower in two leader behaviours and two leader behaviour outcome scores. At 6-month testing, over time, one leader behaviour score was rated significantly lower and the non-transactional leader behaviour was rated significantly higher. Discussion: The study represents the first attempt to test the effectiveness of a leadership development program (LDP) for nurse unit managers using an RCT. The program's design, contents, teaching strategies and learning environment were based on a summary of the literature. The overall improvement in role satisfaction was sustained for at least 6 months post intervention. The study's results may reflect the program's evidence-based approach to developing the LDP, which increased the nurse unit managers' confidence in their role and thereby their job satisfaction. Two other factors possibly contributed to nurse unit managers' increased job satisfaction scores: the program's teaching strategies, which included the involvement of the executive nursing team of the hospital, and the fact that the LDP provided recognition of the importance of the NUM role within the hospital. Consequently, participating in the program may have led to nurse unit managers feeling valued and rewarded for their service, and hence more satisfied. The lack of change in leadership behaviours between groups at the 6-month data collection point may indicate that the LDP needs to be conducted over a longer period. This is suggested because within the intervention group, over time, there were significant increases in self-reported leader behaviours at 3 and 6 months. The lack of significant changes in leader behaviour scores between groups may equally signify that leader behaviours require different interventions to achieve change. Nursing staff results suggest that the LDP's design needs to consider involving NS in the program's aims and progress from the outset. It is also possible that including regular feedback from NS to the nurse unit managers during the LDP might alter NS's job satisfaction and their perception of nurse unit managers' leader behaviours. Conclusion/Implications: This study highlights the value of providing an evidence-based leadership program to nurse unit managers to increase their job satisfaction. The evidence-based leadership program increased job satisfaction, but its effect on leadership behaviour was only seen over time. Further research is required to test interventions which attempt to change leader behaviours. Further research on NS's job satisfaction is also required to test the indirect effects of LDPs on NS whose nurse unit managers participate in them.
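The analysis relied on standard independent and paired t-tests for the continuous outcomes. A minimal sketch of those two comparisons, using entirely hypothetical job-satisfaction scores rather than the study's data, might look like this in Python:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical job-satisfaction scores (higher = more satisfied); not study data.
intervention_3m = rng.normal(75, 8, size=20)   # NUMs who received the full LDP
control_3m = rng.normal(68, 8, size=19)        # NUMs who received written material only

# Between-group comparison at one time point: independent t-test.
t_between, p_between = stats.ttest_ind(intervention_3m, control_3m)

# Within-group change over time (baseline vs 6 months): paired t-test.
baseline = rng.normal(70, 8, size=20)
six_months = baseline + rng.normal(4, 5, size=20)
t_within, p_within = stats.ttest_rel(baseline, six_months)

print(f"between groups: p = {p_between:.3f}; within group over time: p = {p_within:.3f}")
```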

Relevance: 30.00%

Abstract:

Because of increased competition between healthcare providers, higher customer expectations, stringent checks on insurance payments and new government regulations, it has become vital for healthcare organisations to enhance the quality of the care they provide, to increase efficiency, and to improve the cost effectiveness of their services. Consequently, a number of quality management concepts and tools are employed in the healthcare domain to achieve the most efficient use of time, manpower, space and other resources. Emergency departments are designed to provide a high-quality medical service with immediate availability of resources to those in need of emergency care. The challenge of maintaining a smooth flow of patients in emergency departments is a global problem. This study attempts to improve patient flow in emergency departments by combining Lean techniques and Six Sigma methodology in a comprehensive conceptual framework. The proposed research will develop a systematic approach through integration of Lean techniques with Six Sigma methodology to improve patient flow in emergency departments. The results reported in this paper are based on a standard questionnaire survey of 350 patients in the Emergency Department of Aseer Central Hospital in Saudi Arabia. The results of the study identified the most significant variables affecting patient satisfaction with patient flow: waiting time during treatment in the emergency department; the effectiveness of the system when dealing with patients' complaints; and the layout of the emergency department. The proposed model will be built around a performance evaluation metric based on these critical variables, to be evaluated in future work using fuzzy logic for continuous quality improvement.
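Since the fuzzy-logic evaluation is left to future work, the sketch below only illustrates the general idea of turning the three critical variables into a single satisfaction degree using simple triangular membership functions. The membership ranges, weights and function names are hypothetical and are not taken from the paper.

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def flow_satisfaction(wait_minutes, complaint_score, layout_score):
    """Toy fuzzy-style aggregation of the three critical variables.

    wait_minutes    -- waiting time during treatment in the ED
    complaint_score -- 0-10 rating of how complaints are handled
    layout_score    -- 0-10 rating of the ED layout
    Returns a 0-1 'satisfactory flow' degree; weights are illustrative only.
    """
    short_wait = tri(wait_minutes, -1, 0, 60)          # degree to which the wait is 'short'
    good_complaints = tri(complaint_score, 4, 10, 16)  # degree of 'well handled'
    good_layout = tri(layout_score, 4, 10, 16)         # degree of 'well laid out'
    return 0.5 * short_wait + 0.3 * good_complaints + 0.2 * good_layout

print(flow_satisfaction(wait_minutes=30, complaint_score=7, layout_score=8))
```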

Relevance: 30.00%

Abstract:

Background: Few patients diagnosed with lung cancer are still alive 5 years after diagnosis. The aim of the current study was to conduct a 10-year review of a consecutive series of patients undergoing curative-intent surgical resection at the largest tertiary referral centre, in order to identify prognostic factors. Methods: Case records of all patients operated on for lung cancer between 1998 and 2008 were reviewed. The clinical features and outcomes of all patients with non-small cell lung cancer (NSCLC) stage I-IV were recorded. Results: A total of 654 patients underwent surgical resection with curative intent during the study period. Median overall survival for the entire cohort was 37 months. The median age at operation was 66 years, with males accounting for 62.7%. Squamous cell carcinoma was the most common histological subtype, and lobectomies were performed in 76.5% of surgical resections. Pneumonectomy rates decreased significantly in the latter half of the study (25 vs. 16.3%), while sub-anatomical resection more than doubled (2 vs. 5%) (p < 0.005). Clinico-pathological characteristics associated with improved survival on univariate analysis included younger age, female sex, smaller tumour size, smoking status, lobectomy, lower T and N status and less advanced pathological stage. Age, gender, smoking status and tumour size, as well as T and N descriptors, emerged as independent prognostic factors on multivariate analysis. Conclusion: We identified several factors that predicted outcome for NSCLC patients undergoing curative-intent surgical resection. Survival rates in our series are comparable to those reported from other thoracic surgery centres. © 2012 Royal Academy of Medicine in Ireland.

Relevance: 30.00%

Abstract:

Background: There is currently no early predictive marker of survival for patients receiving chemotherapy for malignant pleural mesothelioma (MPM). Tumour response may be predictive of overall survival (OS), though this has not been explored. We have therefore undertaken a combined analysis of OS, from a 42-day landmark, of 526 patients receiving systemic therapy for MPM. We also validate published progression-free survival rates (PFSRs) and a progression-free survival (PFS) prognostic-index model. Methods: Analyses included nine MPM clinical trials, incorporating six European Organisation for Research and Treatment of Cancer (EORTC) studies. OS from the landmark (day 42 post-treatment) was analysed according to tumour response. PFSR analysis data included six non-EORTC MPM clinical trials. Prognostic-index validation was performed on one non-EORTC data-set with available survival data. Results: Median OS from the landmark was 12.8 months for patients with a partial response (PR), 9.4 months for stable disease (SD) and 3.4 months for progressive disease (PD). Both PR and SD were associated with longer OS from the landmark compared with disease progression (both p < 0.0001). PFSRs for platinum-based combination therapies were consistent with published significant clinical activity ranges. Effective separation between PFS and OS curves provided a validation of the EORTC prognostic model based on histology, stage and performance status. Conclusion: Response to chemotherapy is associated with significantly longer OS from the landmark in patients with MPM. © 2012 Elsevier Ltd. All rights reserved.
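The landmark approach itself is straightforward to reproduce: patients are grouped by response status at day 42 and survival is measured from that point onward, so the comparison is not biased by early deaths. A minimal sketch with hypothetical patient records (assuming the lifelines library is available) is shown below; the column names and numbers are illustrative only, not the trial data.

```python
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import multivariate_logrank_test

LANDMARK_DAYS = 42  # response is assessed at the day-42 landmark

# Hypothetical per-patient data: overall survival from start of treatment (days),
# death indicator, and best response at the landmark (PR / SD / PD).
df = pd.DataFrame({
    "os_days":  [400, 120, 300, 80, 500, 60, 250, 150, 90, 700],
    "died":     [1,   1,   0,   1,  0,   1,  1,   1,   1,  0],
    "response": ["PR", "SD", "PR", "PD", "PR", "PD", "SD", "SD", "PD", "PR"],
})

# Landmark analysis: keep only patients still alive at the landmark and
# measure survival from the landmark onwards.
landmark = df[df["os_days"] > LANDMARK_DAYS].copy()
landmark["os_from_landmark"] = landmark["os_days"] - LANDMARK_DAYS

# Kaplan-Meier curve per response group.
for group, sub in landmark.groupby("response"):
    KaplanMeierFitter().fit(sub["os_from_landmark"], sub["died"], label=group)

# Log-rank test across the three response groups.
result = multivariate_logrank_test(
    landmark["os_from_landmark"], landmark["response"], landmark["died"])
print(result.p_value)  # tests whether OS from the landmark differs by response
```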

Relevance: 30.00%

Abstract:

Aims: To report cancer-specific and health-related quality-of-life outcomes in patients undergoing radical chemoradiation (CRT) alone for oesophageal cancer. Materials and methods: Between 1998 and 2005, 56 patients with oesophageal cancer received definitive radical CRT, due to local disease extent, poor general health, or patient choice. Data from the European Organization for Research and Treatment of Cancer quality-of-life questionnaires QLQ-C30 and QLQ-OES24 were collected prospectively. Questionnaires were completed at diagnosis and at 3, 6 and 12 months after CRT, where applicable. Results: The median follow-up was 18 months. The median overall survival was 14 months, with 1-, 3- and 5-year survival rates of 51%, 26% and 13%, respectively. At 12 months after treatment there was a significant improvement, compared with before treatment, in dysphagia and pain. Global health scores were not significantly affected. Conclusions: Given the relatively poor long-term survival of this cohort of patients, maximising the quality of their final months should be carefully borne in mind from the outset. The health-related quality-of-life data reported here help to establish benchmarks for larger evaluation within randomised clinical trials. © 2007 The Royal College of Radiologists.

Relevance: 30.00%

Abstract:

OBJECTIVE: This study explored gene expression differences in predicting response to chemoradiotherapy in esophageal cancer. BACKGROUND: A major pathological response to neoadjuvant chemoradiation is observed in about 40% of esophageal cancer patients and is associated with favorable outcomes. However, patients with tumors of similar histology, differentiation, and stage can have vastly different responses to the same neoadjuvant therapy. This dichotomy may be due to differences in the molecular genetic environment of the tumor cells. METHODS: Diagnostic biopsies were obtained from a training cohort of esophageal cancer patients (n = 13), and extracted RNA was hybridized to genome expression microarrays. The resulting gene expression data were verified by qRT-PCR. In a larger, independent validation cohort (n = 27), we examined differential gene expression by qRT-PCR. The ability of differentially regulated genes to predict response to therapy was assessed in a multivariate leave-one-out cross-validation model. RESULTS: Although 411 genes were differentially expressed between normal and tumor tissue, only 103 genes were altered between responder and non-responder tumors, and 67 genes were differentially expressed >2-fold. These included genes previously reported in esophageal cancer and a number of novel genes. In the validation cohort, 8 of 12 selected genes were significantly different between the response groups. In the predictive model, 5 of 8 genes could predict response to therapy with 95% accuracy in a subset (74%) of patients. CONCLUSIONS: This study has identified a gene microarray pattern and a set of genes associated with response to neoadjuvant chemoradiation in esophageal cancer. The potential of these genes as biomarkers of response to treatment warrants further investigation. Copyright © 2009 by Lippincott Williams & Wilkins.
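The leave-one-out cross-validation step can be illustrated generically: each tumour is predicted by a model trained on all of the remaining samples. The sketch below uses scikit-learn with randomly generated stand-in expression values and a logistic-regression classifier; it is not the authors' model, and the gene matrix, labels and classifier choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in data: expression of a handful of candidate genes
# (rows = pretreatment biopsies, columns = genes) and the observed response
# to neoadjuvant chemoradiation (1 = major pathological response).
X = rng.normal(size=(20, 5))      # qRT-PCR / microarray expression values
y = np.array([0, 1] * 10)         # responder vs non-responder labels

model = LogisticRegression(max_iter=1000)

# Leave-one-out cross-validation: each tumour is predicted by a model trained
# on all the others, mimicking the multivariate predictive model in the paper.
accuracy = cross_val_score(model, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy: {accuracy:.2f}")
```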