750 results for adverse events
Abstract:
In a randomized, double-blind study, 202 healthy adults were randomized to receive a live, attenuated Japanese encephalitis chimeric virus vaccine (JE-CV) and placebo 28 days apart in a cross-over design. A subgroup of 98 volunteers received a JE-CV booster at month 6. Safety, immunogenicity, and persistence of antibodies to month 60 were evaluated. There were no unexpected adverse events (AEs), and the incidence of AEs was similar between JE-CV and placebo. There were three serious adverse events (SAEs) and no deaths. A moderately severe case of acute viral illness commencing 39 days after placebo administration was the only SAE considered possibly related to immunization. Overall, 99% of vaccine recipients achieved a seroprotective antibody titer ≥ 10 to JE-CV 28 days following the single dose of JE-CV, and 97% were seroprotected at month 6. Kaplan-Meier analysis showed that, after a single dose of JE-CV, 87% of the participants who were seroprotected at month 6 were still protected at month 60. This rate was 96% among those who received a booster immunization at month 6. On day 28 after immunization, 95% of subjects developed a neutralizing titer ≥ 10 against at least three of the four strains in a panel of wild-type Japanese encephalitis virus (JEV) strains. At month 60, that proportion was 65% for participants who received a single dose of JE-CV and 75% for the booster group. These results suggest that JE-CV is safe and well tolerated and that a single dose provides long-lasting immunity to wild-type strains.
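As a concrete illustration of the persistence analysis mentioned in the abstract above, the following minimal Python sketch estimates a Kaplan-Meier curve of seroprotection over follow-up using the lifelines library. The durations and event indicators are hypothetical placeholders, not the trial's data.

```python
# Minimal Kaplan-Meier sketch with hypothetical data (not the trial's records).
from lifelines import KaplanMeierFitter

# Months of follow-up per participant, and whether seroprotection was lost
# (1 = titer fell below 10; 0 = still protected at last assessment).
durations = [60, 60, 60, 48, 60, 36, 60, 60, 24, 60]
lost_protection = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=lost_protection, label="single-dose JE-CV")

# Estimated probability of remaining seroprotected at month 60
print(kmf.predict(60))
```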
Abstract:
A randomized, double-blind study was conducted to evaluate the safety, tolerability and immunogenicity of a live attenuated Japanese encephalitis chimeric virus vaccine (JE-CV) co-administered with live attenuated yellow fever (YF) vaccine (YF-17D strain; Stamaril®, Sanofi Pasteur) or administered successively. Participants (n = 108) were randomized to receive: YF followed by JE-CV 30 days later; JE-CV followed by YF 30 days later; or co-administration of JE-CV and YF, with placebo given 30 days earlier or later. Placebo was used in a double-dummy fashion to ensure masking. Neutralizing antibody titers against JE-CV, YF-17D and selected wild-type JE virus strains were determined using a 50% serum-dilution plaque reduction neutralization test. Seroconversion was defined as the appearance of a neutralizing antibody titer above the assay cut-off post-immunization when not present pre-injection at day 0, or at least a four-fold rise in neutralizing antibody titer between the pre-injection day 0 sample and later post-vaccination samples. There were no serious adverse events. Most adverse events (AEs) after JE vaccination were mild to moderate in intensity, and similar to those reported following YF vaccination. Seroconversion to JE-CV was 100% and 91% in the JE/YF and YF/JE sequential vaccination groups, respectively, compared with 96% in the co-administration group. All participants seroconverted to the YF vaccine and retained neutralizing titers above the assay cut-off at month six. Neutralizing antibodies against the JE vaccine were detected in 82-100% of participants at month six. These results suggest that both vaccines may be successfully co-administered simultaneously or 30 days apart.
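The seroconversion definition quoted above is a directly computable rule. The sketch below encodes it in Python; the numeric cut-off is an illustrative assumption, since the abstract does not state the assay's cut-off value.

```python
ASSAY_CUTOFF = 10  # illustrative assumption; the abstract does not give the value

def seroconverted(day0_titer: float, post_titer: float) -> bool:
    """Seroconversion per the rule above: a titer appears at/above the cut-off
    when absent at day 0, or rises at least four-fold over the day-0 titer."""
    if day0_titer < ASSAY_CUTOFF:
        return post_titer >= ASSAY_CUTOFF
    return post_titer >= 4 * day0_titer

print(seroconverted(day0_titer=5, post_titer=20))   # True: appeared above cut-off
print(seroconverted(day0_titer=20, post_titer=40))  # False: only a two-fold rise
```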
Abstract:
Background: Chronic heart failure (CHF) is associated with high hospitalisation and mortality rates and debilitating symptoms. In an effort to reduce hospitalisations and improve symptoms, individuals must be supported in managing their condition. Patients who can effectively self-manage their symptoms through lifestyle modification and adherence to complex medication regimens will experience fewer hospitalisations and other adverse events. Aim: The purpose of this paper is to explain how providing evidence-based information, using patient education resources, can support self-care. Discussion: Self-care refers to the activities that individuals undertake in relation to health-seeking behaviours. Supporting self-care practices through tailored and relevant information can provide patients with resources and advice on strategies to manage their condition. Evidence-based approaches to improving adherence to self-care practices in patients with heart failure are not often reported. Low health literacy can result in poor understanding of information about CHF and is related to adverse health outcomes. A lack of knowledge can also lead to non-adherence to self-care practices such as following fluid restrictions, a low-sodium diet and daily weighing routines. These issues need to be addressed to improve self-management skills. Outcome: Recently, the Heart Foundation CHF consumer resource was updated based on evidence-based national clinical guidelines. The aim of this resource is to help consumers improve their understanding of the disease, reduce uncertainty and anxiety about what to do when symptoms appear, encourage discussions with local doctors, and build confidence in self-care management. Conclusion: Evidence-based CHF patient education resources promote self-care practices and early detection of symptom change, which may reduce hospitalisations and improve quality of life for people with CHF.
Abstract:
Purpose: Age-related macular degeneration (AMD) is the leading cause of irreversible visual impairment among older adults. This study explored the relationship between AMD, falls risk and other injuries, and identified visual risk factors for these adverse events. Methods: Participants were 76 community-dwelling individuals with a range of AMD severity (mean age, 77.0 ± 6.9 years). Baseline assessment included binocular visual acuity, contrast sensitivity and merged visual fields. Participants completed monthly falls and injury diaries for one year following the baseline assessment. Results: Overall, 74% of participants reported a fall, an injurious fall or another injury. Fifty-four percent of participants reported a fall and 30% reported more than one fall; of the 102 falls reported, 63% resulted in an injury. Most falls occurred outdoors (52%), between late morning and late afternoon (61%), and when navigating on level ground (62%). The most common non-fall injuries were lacerations (36%) and collisions with an object (35%). Reduced contrast sensitivity and visual acuity were associated with an increased fall rate, after controlling for age, gender, cognitive function, cataract severity and self-reported physical function. Reduced contrast sensitivity was the only significant predictor of falls and other injuries combined. Conclusion: Among older adults with AMD, greater visual impairment was significantly associated with an increased incidence of falls and other injuries. Reduced contrast sensitivity was significantly associated with increased rates of falls, injurious falls and injuries, while reduced visual acuity was associated only with increased falls risk. These findings have important implications for the assessment of visually impaired older adults.
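The adjusted falls-rate analysis described above can be illustrated with a count-data regression. The sketch below fits a Poisson model of 12-month fall counts on vision measures and age, using simulated data and the statsmodels library; the study's actual model specification and covariate coding are not given in the abstract.

```python
# Illustrative Poisson regression of fall counts on vision measures
# (simulated data; not the study's dataset or exact model).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 76
data = pd.DataFrame({
    "falls": rng.poisson(1.3, n),              # falls reported over 12 months
    "contrast_sens": rng.normal(1.3, 0.3, n),  # log contrast sensitivity units
    "visual_acuity": rng.normal(0.4, 0.2, n),  # logMAR (higher = worse)
    "age": rng.normal(77, 7, n),
})
X = sm.add_constant(data[["contrast_sens", "visual_acuity", "age"]])
fit = sm.GLM(data["falls"], X, family=sm.families.Poisson()).fit()
print(fit.summary())  # rate ratios are exp(coefficients)
```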
Abstract:
Exercise interventions during adjuvant cancer treatment have been shown to increase functional capacity, relieve fatigue and distress and, in one recent study, assist chemotherapy completion. These studies have been limited to breast, prostate or mixed cancer groups, and it was not yet known whether a similar intervention is even feasible among women diagnosed with ovarian cancer. Women undergoing treatment for ovarian cancer commonly have extensive pelvic surgery followed by high-intensity chemotherapy. It is hypothesised that women with ovarian cancer may benefit most from a customised exercise intervention during chemotherapy treatment. This could reduce the number and severity of chemotherapy-related side-effects and optimise treatment adherence. Hence, the aim of the research was to assess the feasibility and acceptability of a walking intervention in women with ovarian cancer undergoing chemotherapy, as well as pre-post intervention changes in a range of physical and psychological outcomes. Newly diagnosed women with ovarian cancer were recruited from the Royal Brisbane and Women’s Hospital (RBWH) to participate in a walking program throughout chemotherapy. The study used a one-group pre-post intervention design. Baseline assessments (conducted following surgery but prior to the first or second chemotherapy cycle) and follow-up assessments (conducted three weeks after the last chemotherapy dose) were performed. To accommodate changes in treatment-related side-effects, specific weekly walking targets for frequency, intensity and duration were individualised for each participant. To assess feasibility, adherence and compliance with prescribed walking sessions, withdrawals and adverse events were recorded. Physical and psychological outcomes assessed included functional capacity, body composition, anxiety and depression, symptoms experienced during treatment, and quality of life. Chemotherapy completion data were also documented, and self-reported program helpfulness was assessed using a post-intervention questionnaire. Forty-two women were invited to participate. Nine women were recruited, all of whom completed the program. There were no adverse events associated with participating in the intervention, and all women reported that the walking program was helpful during their neo-adjuvant or adjuvant chemotherapy treatment. Adherence and compliance with the walking prescription were high. On average, women achieved at least two of their three individual weekly prescription targets 83% of the time (range 42% to 94%). Positive changes were found in functional capacity and quality of life, in addition to reductions in the number and intensity of treatment-associated symptoms over the intervention period. Functional capacity increased for all nine women from baseline to follow-up, with improvements ranging from 10% to 51%. Quality of life improvements were also noted, especially on the physical well-being scale (baseline: median 18; follow-up: median 23). Treatment symptoms, specifically constipation, pain and fatigue, reduced in presence and severity post-intervention. These positive yet preliminary results suggest that a walking intervention for women receiving chemotherapy for ovarian cancer is safe, feasible and acceptable.
Importantly, women perceived the program to be helpful and rewarding, despite it being conducted during a time typically associated with elevated distress and treatment symptoms that are often severe enough to require chemotherapy to be altered or ceased.
Abstract:
Background: Exercise interventions during adjuvant cancer therapy have been shown to increase functional capacity and relieve fatigue and distress, and may assist rates of chemotherapy completion. These studies have been limited to breast, gastric and mixed cancer groups, and it is not yet known whether a similar intervention is even feasible among women with ovarian cancer. We aimed to assess the safety, feasibility and potential effect of a walking intervention in women undergoing chemotherapy for ovarian cancer. Methods: Women newly diagnosed with ovarian cancer were recruited to participate in an individualised walking intervention throughout chemotherapy and were assessed pre- and post-intervention. Feasibility measures included session adherence, compliance with exercise physiologist-prescribed walking targets, and self-reported program acceptability. Changes in objective physical functioning (6-minute walk test), self-reported distress (Hospital Anxiety and Depression Scale), symptoms (Memorial Symptom Assessment Scale - Physical) and quality of life (Functional Assessment of Cancer Therapy - Ovarian) were calculated, and chemotherapy completion and adverse intervention effects were recorded. Results: Seventeen women were enrolled (63% recruitment rate). Mean age was 60 years (SD = 8 years), 88% were diagnosed with FIGO stage III or IV disease, and 14 women underwent adjuvant and three neo-adjuvant chemotherapy. On average, women adhered to >80% of their intervention sessions and complied with 76% of their walking targets, with the majority walking four days a week at moderate intensity for 30 minutes per session. Meaningful improvements were found in physical functioning, physical symptoms, physical well-being and ovarian cancer-specific quality of life. Most women (76%) completed ≥85% of their planned chemotherapy dose. There were no withdrawals or serious adverse events, and all women reported the program as being helpful. Conclusions: These positive preliminary results suggest that this walking intervention for women receiving chemotherapy for ovarian cancer is safe, feasible and acceptable, and could inform the development of future work. Trial registration: ACTRN12609000252213
Abstract:
Purpose: Silicone hydrogel contact lenses (CLs) are becoming increasingly popular for daily wear (DW), extended wear (EW) and continuous wear (CW), due to their higher oxygen transmissibility compared to hydrogel CLs. The aim of this study was to investigate the clinical and subjective performance of asmofilcon A (Menicon Co., Ltd), a new surface treated silicone hydrogel CL, during 6-night EW over 6 months (M). Methods: A prospective, randomised, single-masked, monadic study was conducted. N=60 experienced DW soft CL wearers were randomly assigned to wear either asmofilcon A (test: Dk=129, water content (WC)=40%, Nanogloss surface treatment) or senofilcon A (control: Dk=103, WC=38%, PVP internal wetting agent, Vistakon, Johnson & Johnson Vision Care) CLs bilaterally for 6 M on an EW basis. A PHMB-preserved solution (Menicon Co., Ltd) was dispensed for CL care. Evaluations were conducted at CL delivery and after 1 week (W), 4 W, 3 M and 6 M of EW. At each visit, a range of objective and subjective clinical performance measures were assessed. Results: N=50 subjects (83%) successfully completed the study, with the majority of discontinuations due to loss to follow-up (n=3) or moving away/travel (n=5). N=2 subjects experienced adverse events; n=1 unilateral red eye with asmofilcon A and n=1 asymptomatic infiltrate with senofilcon A. There were no significant differences in high or low contrast distance visual acuity (HCDVA or LCDVA) between asmofilcon A and senofilcon A; however, LCDVA decreased significantly over time with both CL types (p<0.05). The two CL types did not vary significantly with respect to any of the objective and subjective measures assessed (p>0.05); CL fitting characteristics and CL surface measurements were very similar and mean bulbar and limbal redness measures were always less than grade 1.0. Superior palpebral conjunctival injection showed a statistically, but not clinically, significant increase over time with both CL types (p<0.05). Corneal staining did not vary significantly between asmofilcon A and senofilcon A (p>0.05), with low median gradings of less than 0.5 observed for all areas assessed. There were no solution-related staining reactions observed with either CL type. The asmofilcon A and senofilcon A CLs were both rated highly with respect to overall comfort, with medians of 14 or 15 hours of comfortable lens wearing time per day reported at each of the study visits (p>0.05). Conclusions: Over 6 months of EW, the asmofilcon A and senofilcon A CLs performed in a similar manner with respect to visual acuity, ocular health and CL performance measures. Some changes over time were observed with both CL types, including reduced LCDVA and increased superior palpebral injection, which warrant further investigation in longer-term EW studies. Asmofilcon A appeared to be equivalent in performance to senofilcon A.
Abstract:
Quality-oriented management systems and methods have become the dominant business and governance paradigm. From this perspective, satisfying customers’ expectations by supplying reliable, good-quality products and services is the key factor for an organization and even a government. During recent decades, Statistical Quality Control (SQC) methods have been developed as the technical core of quality management and the continuous improvement philosophy, and they are now widely applied to improve the quality of products and services in the industrial and business sectors. Recently, SQC tools, in particular quality control charts, have been used in healthcare surveillance. In some cases, these tools have been modified and developed to better suit the characteristics and needs of the health sector. It seems that some of the work in the healthcare area has evolved independently of the development of industrial statistical process control methods. Therefore, analysing and comparing paradigms and the characteristics of quality control charts and techniques across the different sectors presents opportunities for transferring knowledge and for future development in each sector. Meanwhile, the capabilities of the Bayesian approach, particularly Bayesian hierarchical models and computational techniques in which all uncertainty is expressed as a structure of probability, facilitate decision making and cost-effectiveness analyses. Therefore, this research investigates the use of the quality improvement cycle in a health setting using clinical data from a hospital. The need for clinical data for monitoring purposes is investigated in two respects. A framework and appropriate tools from the industrial context are proposed and applied to evaluate and improve data quality in available datasets and data flow; then a data-capturing algorithm using Bayesian decision-making methods is developed to determine an economical sample size for statistical analyses within the quality improvement cycle. Having ensured clinical data quality, some characteristics of control charts in the health context, including the necessity of monitoring attribute data and correlated quality characteristics, are considered. To this end, multivariate control charts from an industrial context are adapted to monitor radiation delivered to patients undergoing diagnostic coronary angiograms, and various risk-adjusted control charts are constructed and investigated for monitoring binary outcomes of clinical interventions as well as post-intervention survival time. Meanwhile, a Bayesian approach is proposed as a new framework for estimating the change point following a control chart's signal. This estimate aims to facilitate root-cause analysis within the quality improvement cycle, since it narrows the search for the potential causes of detected changes to a tighter time-frame prior to the signal. This approach yields highly informative estimates of change point parameters, since the results are based on full probability distributions. Using Bayesian hierarchical models and Markov chain Monte Carlo computational methods, Bayesian estimators of the time and magnitude of various change scenarios, including step changes, linear trends and multiple changes in a Poisson process, are developed and investigated.
The benefits of change point investigation are revisited and promoted in monitoring hospital outcomes, where the developed Bayesian estimator reports the true time of shifts detected by control charts, compared with a priori known causes, in monitoring rates of excess usage of blood products and major adverse events during and after cardiac surgery in a local hospital. The development of Bayesian change point estimators is then extended to healthcare surveillance of processes in which pre-intervention characteristics of patients affect the outcomes. In this setting, the Bayesian estimator is first extended to capture the patient mix (covariates) through the risk models underlying risk-adjusted control charts. Variations of the estimator are developed to estimate the true time of step changes and linear trends in the odds ratio of intensive care unit outcomes in a local hospital. Secondly, the Bayesian estimator is extended to identify the time of a shift in mean survival time after a clinical intervention that is being monitored by risk-adjusted survival time control charts. In this context, the survival time after a clinical intervention is also affected by patient mix, and the survival function is constructed using a survival prediction model. The simulation studies undertaken in each research component, and the results obtained, strongly support the developed Bayesian estimators as an alternative for change point estimation within the quality improvement cycle in healthcare surveillance as well as in industrial and business contexts. The superiority of the proposed Bayesian framework and estimators is enhanced when the probability quantification, flexibility and generalizability of the developed models are also considered. The advantages of the Bayesian approach seen in the general context of quality control may also extend to the industrial and business domains where quality monitoring was initially developed.
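To make the change point construction concrete, here is a minimal, self-contained Python sketch (a simplification, not the thesis' hierarchical models) for a single step change in a Poisson rate: with Gamma priors on the before and after rates integrated out analytically, the exact posterior over the change point follows under a uniform prior, without needing MCMC.

```python
# Bayesian change point sketch for a step change in a Poisson rate.
# Gamma(a, b) priors on the rates are integrated out in closed form.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
y = np.concatenate([rng.poisson(2.0, 40), rng.poisson(5.0, 20)])  # simulated counts
n, a, b = len(y), 1.0, 1.0  # Gamma(1, 1) priors, an illustrative choice

def log_marginal(total_count, n_obs):
    # log of the Poisson likelihood integrated over a Gamma(a, b) rate;
    # terms constant across candidate change points are dropped.
    return gammaln(a + total_count) - (a + total_count) * np.log(b + n_obs)

log_post = np.array([
    log_marginal(y[:tau].sum(), tau) + log_marginal(y[tau:].sum(), n - tau)
    for tau in range(1, n)  # change occurs after observation tau
])
post = np.exp(log_post - log_post.max())
post /= post.sum()
print("posterior mode of the change point:", np.argmax(post) + 1)  # near 40
```

In the thesis, this basic construction is extended, via Bayesian hierarchical models and MCMC, to linear trends, multiple changes, and risk-adjusted outcomes.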
Abstract:
Introduction and objectives: Early recognition of deteriorating patients results in better patient outcomes. Modified early warning scores (MEWS) attempt to identify deteriorating patients early so that timely interventions can occur, thus reducing serious adverse events. We compared the frequency of vital sign recording in the 24 h post-ICU discharge and the 24 h preceding unplanned ICU admission, before and after a new observation chart using MEWS and an associated educational programme were implemented in an Australian tertiary referral hospital in Brisbane. Design: Prospective before-and-after intervention study, using a convenience sample of ICU patients discharged to the hospital wards, and of patients with an unplanned ICU admission, during November 2009 (before implementation; n = 69) and February 2010 (after implementation; n = 70). Main outcome measures: Any change in the frequency of recording of full sets or individual vital signs before and after the new MEWS observation chart and associated education programme were implemented. A full set of vital signs included blood pressure (BP), heart rate (HR), temperature (T°), oxygen saturation (SaO2), respiratory rate (RR) and urine output (UO). Results: After the MEWS observation chart implementation, we identified a statistically significant increase (210%) in the overall frequency of full vital sign set documentation during the first 24 h post-ICU discharge (95% CI 148 to 288%, p < 0.001). The frequency of all individual vital sign recordings increased after the MEWS observation chart was implemented. In particular, T° recordings increased by 26% (95% CI 8 to 46%, p = 0.003). An increased frequency of full vital sign set recordings for unplanned ICU admissions was also found (44%, 95% CI 2 to 102%, p = 0.035). The only statistically significant improvement in individual vital sign recordings was urine output, which demonstrated a 27% increase (95% CI 3 to 57%, p = 0.029). Conclusions: The implementation of a new MEWS observation chart plus a supporting educational programme was associated with statistically significant increases in the frequency of full and individual vital sign recordings during the first 24 h post-ICU discharge. Apart from urine output, there were no significant changes in the frequency of individual vital sign recordings for unplanned ICU admissions, although overall increases in the frequency of full vital sign sets were seen.
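For context on how such a chart is scored, a MEWS-style score sums points assigned to bands of each vital sign. The Python sketch below uses one commonly published set of bands purely for illustration; the abstract does not specify the chart's actual thresholds, and the full score also includes conscious level (AVPU), omitted here.

```python
# Illustrative MEWS-style scoring; bands follow one published variant and are
# not necessarily those on the study's chart.
def band_score(value, bands, default=3):
    """Return points for the first band where lo <= value < hi, else default."""
    for lo, hi, points in bands:
        if lo <= value < hi:
            return points
    return default

def mews(resp_rate, heart_rate, systolic_bp, temperature):
    total = 0
    total += band_score(resp_rate, [(0, 9, 2), (9, 15, 0), (15, 21, 1), (21, 30, 2)])
    total += band_score(heart_rate, [(0, 41, 2), (41, 51, 1), (51, 101, 0),
                                     (101, 111, 1), (111, 130, 2)])
    total += band_score(systolic_bp, [(71, 81, 2), (81, 101, 1), (101, 200, 0),
                                      (200, 1000, 2)])
    total += band_score(temperature, [(35.0, 38.5, 0)], default=2)
    return total

print(mews(resp_rate=28, heart_rate=115, systolic_bp=85, temperature=38.9))  # 7
```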
Abstract:
AIM: To compare Total Laparoscopic Hysterectomy (TLH) and Total Abdominal Hysterectomy (TAH) with regard to surgical safety. METHODS: Between October 2005 and June 2010, 760 patients with apparent early stage endometrial cancer were enrolled in a multicentre, randomised clinical trial (LACE) comparing outcomes following TLH or TAH. The main study end points for this analysis were surgical adverse events (AEs), hospital length of stay, and conversion from laparoscopy to laparotomy, including 753 patients who completed at least 6 weeks of follow-up. Postoperative AEs were graded according to Common Toxicity Criteria (V3); those that were immediately life-threatening, required inpatient or prolonged hospitalisation, or resulted in persistent or significant disability/incapacity were regarded as serious AEs. RESULTS: The incidence of intra-operative AEs was comparable between the groups. The incidence of post-operative AEs of CTC grade 3+ (18.6% in TAH, 12.9% in TLH, p = 0.03) and of serious AEs (14.3% in TAH, 8.2% in TLH, p = 0.007) was significantly higher in the TAH group than in the TLH group. Mean operating time was 132 and 107 min, and median length of hospital stay was 2 and 5 days, in the TLH and TAH groups, respectively (p < 0.0001). The decline in haemoglobin from baseline to day 1 postoperatively was 2 g/L less in the TLH group (p = 0.006). CONCLUSIONS: Compared with TAH, TLH is associated with a significantly decreased risk of major surgical AEs. A laparoscopic surgical approach to early stage endometrial cancer is safe.
Abstract:
Objective: A literature review to examine the incorporation of respiratory assessment into everyday surgical nursing practice, possible barriers to this, and the relationship to patient outcomes. Primary argument: Escalating demands on intensive care beds have led to highly dependent patients being cared for in general surgical ward areas. This change in patient demographics has meant that the knowledge and skills required of registered nurses in these areas have expanded exponentially. The literature supported the notion that postoperative monitoring of vital signs should include the fundamental assessment of respiratory rate, depth and rhythm; work of breathing; use of accessory muscles and symmetrical chest movement; as well as auscultation of lung fields using a stethoscope. Early intervention in response to changes in a patient's respiratory health status impacts positively on patient health outcomes. Substantial support exists for the contention that technologically adept nurses who also possess competent respiratory assessment skills make a difference to respiratory care. Conclusions: Sub-clinical respiratory problems have been demonstrated to contribute to adverse events. There is a paucity of research knowledge as to whether respiratory education programs and associated in-service training make a difference to nursing clinical practice. Similarly, the implications for associated respiratory educational needs are not well documented, nor has a research base been sufficiently developed to guide nursing practice. Further research has the potential to influence the future role and function of the registered nurse by determining the importance of respiratory education programs for post-operative patient outcomes.
Abstract:
Context: The benefits of high serum levels of 25-hydroxyvitamin D [25(OH)D] are unclear. Trials are needed to establish an appropriate evidence base. Objective: We plan to conduct a large-scale trial of vitamin D supplementation for the reduction of cancer incidence and overall mortality, and report here the methods and results of a pilot trial established to inform its design. Design: Pilot D-Health was a randomized trial carried out in a general community setting with 12 months of intervention and follow-up. Participants: Participants were 60- to 84-yr-old residents of one of the four eastern Australian states who did not have any vitamin D-related disorders and who were not taking more than 400 IU of supplementary vitamin D per day. A total of 644 participants were randomized, and 615 completed the study (two persons withdrew because of nonserious adverse events). Interventions: The interventions were monthly doses of placebo or 30,000 or 60,000 IU vitamin D3. Main outcomes: The main outcomes were the recruitment rate and changes in serum 25(OH)D. Results: Ten percent of those approached were recruited. At baseline, the mean 25(OH)D was 42 nmol/liter in all three study arms. The mean change in 25(OH)D in the placebo group was 0.12 nmol/liter, compared with changes of 22 and 36 nmol/liter in the 30,000- and 60,000-IU groups, respectively. Conclusions: The D-Health pilot has shown that a large trial is feasible in Australia and that a dose of 2000 IU/d (the approximate daily equivalent of the 60,000 IU monthly dose) will be needed to ensure that a large proportion of the population reaches the target serum 25(OH)D level.
Abstract:
BACKGROUND: Studies have shown that nurse staffing levels, among many other factors in the hospital setting, contribute to adverse patient outcomes. Concerns about patient safety and quality of care have prompted numerous studies examining the relationship between nurse staffing levels and the incidence of adverse patient events in both general wards and intensive care units. AIM: The aim of this paper is to review literature published in the previous 10 years examining the relationship between nurse staffing levels and the incidence of mortality and morbidity in adult intensive care unit patients. METHODS: A literature search covering 2002 to 2011 was undertaken using the MEDLINE, Cumulative Index to Nursing and Allied Health Literature (CINAHL), PsycINFO, and Australian digital thesis databases. The keywords used were: intensive care; critical care; staffing; nurse staffing; understaffing; nurse-patient ratios; adverse outcomes; mortality; ventilator-associated pneumonia; ventilator-acquired pneumonia; infection; length of stay; pressure ulcer/injury; unplanned extubation; medication error; readmission; myocardial infarction; and renal failure. A total of 19 articles were included in the review. The outcomes of interest were patient mortality and morbidity, particularly infection and pressure ulcers. RESULTS: Most of the studies were observational in nature, with variables obtained retrospectively from large hospital databases. Nurse staffing measures and patient outcomes varied widely across the studies. CONCLUSION: While an overall statistical association between increased nurse staffing levels and decreased adverse patient outcomes was not found in this review, most studies demonstrated a trend between increased nurse staffing levels and decreased adverse patient outcomes in the intensive care unit, which is consistent with previous literature. More robust research methodologies need to be tested to demonstrate this association more confidently and to reduce the influence of the many other confounders of patient outcomes, although this would be difficult to achieve in this field of research.
Abstract:
Introduction: Thoracoscopic anterior instrumented fusion (TASF) is a safe and viable surgical option for corrective stabilisation of progressive adolescent idiopathic scoliosis (AIS) [1-2]. However, there is a paucity of literature examining optimal methods of analgesia following this type of surgery. The aims of this study were to identify whether local anaesthetic boluses via an intrapleural catheter provide effective analgesia following thoracoscopic scoliosis correction, what pain levels may be expected, and any adverse effects associated with the use of intermittent intrapleural analgesia at our centre. Methods: A subset of the most recent 80 patients from a large single-centre consecutive series of 201 patients (April 2000 to present) who had undergone TASF had their medical records reviewed. Thirty-two patients met the inclusion criteria for the analysis (i.e. pain scores must have been recorded within the hour prior to and within two hours following an intrapleural bolus). All patients received an intrapleural catheter inserted during surgery, in addition to patient-controlled opiate analgesia and oral analgesia as required. After surgery, patients received a bolus of 0.25% bupivacaine every four hours via the intrapleural catheter. Visual analogue pain scale scores were recorded before and after each bolus of local anaesthetic, and the quantity and time of day of any other analgesia taken were also recorded. Results and discussion: Twenty-eight female and four male patients (mean age 14.5 ± 1.5 years) had a total of 230 boluses of local anaesthetic administered intrapleurally, directly onto the spine, in the 96-hour period following surgery. Pain scores decreased significantly following the administration of a bolus (p < 0.0001), with the mean pain score decreasing from 3.66 to 1.83. The quantity of opiates delivered via patient-controlled analgesia after surgery decreased steadily between successive 24-hour intervals, after an initial increase in the second 24-hour period when patients were mobilised. One intrapleural catheter required early removal at 26 hours post-operatively due to leakage; there were no other complications associated with the intermittent intrapleural analgesia method. Post-operative pain following anterior scoliosis correction decreased significantly with the administration of regular local anaesthetic boluses and can be reduced to ‘mild’ levels by combined analgesia regimens. The intermittent intrapleural analgesia method was not associated with any adverse events or complications in the full cohort of 201 patients.
Abstract:
Background: Falls are one of the most frequently occurring adverse events that impact upon the recovery of older hospital inpatients. Falls can threaten both immediate and longer-term health and independence. There is a need to identify cost-effective means of preventing falls in hospitals. Hospital-based falls prevention interventions tested in randomized trials have not previously been subjected to economic evaluation. Methods: An incremental cost-effectiveness analysis was undertaken from the health service provider perspective, over the period of hospitalization (time horizon), using the Australian Dollar (A$) at 2008 values. Analyses were based on data from a randomized trial among n = 1,206 acute and rehabilitation inpatients. Decision tree modeling with three-way sensitivity analyses was conducted using burden-of-disease estimates developed from trial data and previous research. The intervention was a multimedia patient education program, provided with trained health professional follow-up, shown to reduce falls among cognitively intact hospital patients. Results: The short-term cost to a health service of one cognitively intact patient becoming a faller could be as high as A$14,591 (2008). The education program cost A$526 (2008) to prevent one cognitively intact patient becoming a faller and A$294 (2008) to prevent one fall, based on primary trial data. These estimates were unstable due to high variability in the hospital costs accrued by individual patients in the trial. There was a 52% probability that the complete program was both more effective and less costly (from the health service perspective) than usual care alone. Decision tree modeling sensitivity analyses indicated that, when provided in real-life contexts, the program would be both more effective in preventing falls among cognitively intact inpatients and cost-saving where the proportion of these patients who would otherwise fall under usual care conditions is at least 4.0%. Conclusions: This economic evaluation was designed to assist health care providers in deciding in what circumstances this intervention should be provided. If the proportion of cognitively intact patients falling on a ward under usual care conditions is 4% or greater, then providing the complete program in addition to usual care will likely both prevent falls and reduce costs for a health service.
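The per-faller and per-fall figures above are outputs of an incremental cost-effectiveness calculation. Below is a minimal Python sketch of that calculation with hypothetical ward-level inputs; only the A$526 and A$294 figures come from the trial itself.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra unit of
# effect. All numbers below are hypothetical illustrations.
def icer(cost_new, cost_usual, effect_new, effect_usual):
    return (cost_new - cost_usual) / (effect_new - effect_usual)

# Cost per additional patient kept fall-free on a hypothetical ward where the
# program raises the fall-free proportion from 0.92 to 0.96 at A$100 extra cost.
print(icer(cost_new=1200.0, cost_usual=1100.0,
           effect_new=0.96, effect_usual=0.92))  # 2500.0 (A$ per faller prevented)
```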