109 results for Unit Patients
Abstract:
Objective: The Patient-Rated Wrist Evaluation (PRWE) is a wrist-specific questionnaire [1]. It consists of 15 questions with a total score of 100 and was recently translated into French (PRWE-F) [2]; however, its validity has not been tested in this language. The Disabilities of the Arm, Shoulder and Hand (DASH) questionnaire, with well-established psychometric properties, is considered the reference instrument for the evaluation of the upper extremities. The objective of this study was to measure the construct validity of the PRWE-F against the DASH-F in patients with wrist pathology. Patients and methods: Fifty-one patients (40 men, 11 women, mean age 42 years), with 25 fractures of the radius and 26 lesions of the carpus, completed the PRWE-F and DASH-F (both scored 0 to 100) at entry and at discharge. The construct validity of the PRWE-F was assessed against the DASH-F with Pearson correlation coefficients (r) at entry and at discharge; the level of significance (alpha) was set at 5%. Results: Correlation DASH/PRWE at entry: r = 0.799 (95% CI 0.671 to 0.881), P < 0.0001. Correlation DASH/PRWE at discharge: r = 0.847 (95% CI 0.745 to 0.910), P < 0.0001. Discussion: The construct validity of the two instruments indicates that they measure the same concept. Our correlations between the DASH-F and the PRWE-F, ranging from 0.799 to 0.847, are comparable to those published in other languages (0.71 to 0.84) [3,4]. The PRWE-F can therefore be used in rehabilitation patients presenting with wrist pathologies; it performs comparably to the DASH but is described by MacDermid [1] as more specific. Compared with the DASH, it has the advantage of consisting of two dimensions. Its construct validity is excellent. This questionnaire should be evaluated in other populations and compared with hand questionnaires more specific than the DASH.
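As a rough illustration of the correlation analysis above, the following Python sketch computes a Pearson coefficient with a 95% confidence interval via the Fisher z-transformation; the paired PRWE-F/DASH-F scores shown are hypothetical placeholders, not the study data.

```python
import numpy as np
from scipy.stats import pearsonr, norm

def pearson_with_ci(x, y, alpha=0.05):
    """Pearson r, p-value, and a (1 - alpha) CI via the Fisher z-transform."""
    r, p = pearsonr(x, y)
    n = len(x)
    z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)
    z_crit = norm.ppf(1 - alpha / 2)
    lo, hi = np.tanh(z - z_crit * se), np.tanh(z + z_crit * se)
    return r, p, (lo, hi)

# Hypothetical paired scores at entry (one pair per patient):
prwe_entry = np.array([62.0, 45.5, 80.0, 30.0, 55.0, 71.5, 48.0, 66.0])
dash_entry = np.array([58.3, 40.0, 75.8, 35.0, 50.0, 68.3, 45.0, 60.0])
r, p, (lo, hi) = pearson_with_ci(prwe_entry, dash_entry)
print(f"r = {r:.3f}, 95% CI {lo:.3f} to {hi:.3f}, P = {p:.4f}")
```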
Abstract:
Introduction: Delirium is frequent in hospitalized older people, with incidence rates of up to 40% in acute care. Delirium is associated with several adverse consequences, including increased mortality and institutionalization. This study aims to investigate the prevalence, incidence, and consequences of delirium in patients hospitalized in an acute care unit for the elderly (ACE unit). Methods: Over a 3-month period, every patient (N = 93, mean age 84.1 ± 7.8 years, 66/93 (71%) women) admitted to a 28-bed ACE unit was systematically assessed for delirium. Trained nurses used the Confusion Assessment Method (CAM) instrument to determine the presence of delirium at admission and on each subsequent day of the patient's stay. The delirium prevalence rate was defined as the proportion of patients with a positive CAM within 24 hours of admission to the ACE unit. The delirium incidence rate was defined as the proportion of patients with a negative CAM at admission whose CAM became positive at least once during their stay. This evaluation was part of a functional assessment including Basic Activities of Daily Living (Katz BADL, from 0 to 6, higher scores indicating better function). Delirium prevention interventions and specific treatment were provided as needed. Results: Overall, 25/93 (27%) patients had delirium during their stay. The prevalence of delirium at admission was 10/93 (11%), and the incidence was 15/83 (18%). Compared with non-delirious patients, those with delirium were more frequently men (10/25 (40%) vs 17/68 (25%), p < .001) and had reduced functional status at admission (BADL 2.0 ± 1.9 vs 3.6 ± 2.1, p = .004). They tended to be older (86.0 ± 6.7 vs 83.3 ± 8.1 years, p = .110). At discharge, delirium was associated with reduced functional status (BADL 2.0 ± 2.1 vs 4.3 ± 1.9, p < .001), a lower rate of home discharge (6/20 (30%) vs 28/65 (43%), p = .009), and increased mortality (5/25 (20%) vs 3/68 (5%), p < .001). On average, patients with delirium stayed 5.7 days longer (17.0 ± 9.8 vs 11.31 ± 6.3 days, p = .011). Conclusion: Delirium occurred in almost a third of these older patients, even though its incidence was relatively low in this frail population. Despite specific management, delirium remained associated with a higher risk of adverse outcomes at discharge. These results suggest that early preventive interventions, implemented as soon as possible after hospital admission, might be needed in similar populations to achieve better outcomes. The effectiveness of such interventions will be evaluated in future studies.
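To make the prevalence and incidence definitions above concrete, here is a minimal Python sketch applying them to hypothetical per-patient CAM results; the Patient structure and the data are illustrative, not the study records.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Patient:
    admission_cam: bool      # positive CAM within 24 h of admission
    later_cams: List[bool]   # daily CAM results for the rest of the stay

patients = [
    Patient(True,  [True, False]),
    Patient(False, [False, True, False]),
    Patient(False, [False, False]),
]

# Prevalence: positive CAM within 24 h of admission, over all patients.
prevalent = sum(p.admission_cam for p in patients)
# Incidence: among patients negative at admission, those who became positive later.
at_risk = [p for p in patients if not p.admission_cam]
incident = sum(any(p.later_cams) for p in at_risk)

print(f"Prevalence at admission: {prevalent}/{len(patients)}")
print(f"Incidence during stay:  {incident}/{len(at_risk)}")
```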
Abstract:
INTRODUCTION: Cefepime has been associated with a greater risk of mortality than other beta-lactams in patients treated for severe sepsis. Hypotheses for this failure include possible hidden side-effects (for example, neurological) or inappropriate pharmacokinetic/pharmacodynamic (PK/PD) parameters for bacteria with cefepime minimal inhibitory concentrations (MICs) at the upper limit of susceptibility (8 mg/l) or intermediate resistance (16 mg/l) for pathogens such as Enterobacteriaceae, Pseudomonas aeruginosa and Staphylococcus aureus. We examined these issues in a prospective non-interventional study of 21 consecutive intensive care unit (ICU) adult patients treated with cefepime for nosocomial pneumonia. METHODS: Patients (median age 55.1 years, range 21.8 to 81.2) received intravenous cefepime at 2 g every 12 hours for creatinine clearance (CLCr) ≥ 50 ml/minute, and 2 g every 24 or 36 hours for CLCr < 50 ml/minute. Cefepime plasma concentrations were determined at several time-points before and after drug administration by high-pressure liquid chromatography. PK/PD parameters were computed by standard non-compartmental analysis. RESULTS: Seventeen first doses and 11 steady states (that is, four to six days after the first dose) were measured. Plasma levels varied greatly between individuals, from two- to three-fold at peak concentrations and up to 40-fold at trough concentrations. Nineteen of 21 (90%) patients had PK/PD parameters comparable to literature values. Twenty-one of 21 (100%) patients had an appropriate duration of cefepime concentrations above the MIC (T>MIC ≥ 50%) for the pathogens recovered in this study (MIC ≤ 4 mg/l), but only 45 to 65% of them had appropriate coverage for potential pathogens with cefepime MIC ≥ 8 mg/l. Moreover, 2/21 (10%) patients with renal impairment (CLCr < 30 ml/minute) demonstrated accumulation of cefepime in the plasma (trough concentrations of 20 to 30 mg/l) in spite of dosage adjustment. Both had symptoms compatible with non-convulsive epilepsy (confusion and muscle jerks) that were not attributed to cefepime toxicity until the plasma levels were disclosed to the caregivers; the symptoms resolved promptly after the drug was stopped. CONCLUSIONS: These empirical results confirm the suspected risks of hidden side-effects and of inappropriate PK/PD parameters (for pathogens with upper-limit MICs) in a population of adult ICU patients. They also identify a safety and efficacy window for cefepime doses of 2 g every 12 hours in patients with a CLCr ≥ 50 ml/minute infected by pathogens with cefepime MICs ≤ 4 mg/l. Conversely, prompt monitoring of cefepime plasma levels should be considered in cases of lower CLCr or higher MICs.
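The T>MIC ≥ 50% criterion discussed above can be illustrated with a small Python sketch that estimates the fraction of a dosing interval spent above a given MIC from a sampled concentration-time profile; the profile values and the linear interpolation are assumptions for illustration only, not the study's non-compartmental analysis.

```python
import numpy as np

def fraction_time_above_mic(times_h, conc_mg_l, mic_mg_l):
    """Fraction of the sampled dosing interval with concentration > MIC."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc_mg_l, dtype=float)
    fine_t = np.linspace(t[0], t[-1], 2001)   # dense time grid
    fine_c = np.interp(fine_t, t, c)          # piecewise-linear PK profile
    return float(np.mean(fine_c > mic_mg_l))

# Hypothetical profile over a 12 h dosing interval (2 g every 12 hours):
times = [0, 0.5, 1, 2, 4, 8, 12]              # hours
concs = [0, 120, 90, 60, 30, 10, 4]           # mg/l
for mic in (4, 8, 16):
    frac = fraction_time_above_mic(times, concs, mic)
    print(f"MIC {mic:>2} mg/l -> T>MIC = {frac:.0%} (target >= 50%)")
```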
Abstract:
BACKGROUND: Tracheal intubation may be more difficult in morbidly obese (MO) patients than in the non-obese. The aim of this study was to evaluate clinically whether the Video Intubation Unit (VIU), a video-optical intubation stylet, could improve the laryngoscopic view compared with the standard Macintosh laryngoscope in this specific population. METHODS: We studied 40 MO patients (body mass index > 35 kg/m²) scheduled for bariatric surgery. Each patient had a conventional laryngoscopy and a VIU inspection. The laryngoscopic grades (LG), using the Cormack and Lehane scoring system, were noted and compared. Thereafter, the patients were randomised to be intubated with one of the two techniques: in one group, the patients were intubated with the help of the VIU, and in the control group, tracheal intubation was performed conventionally. The duration of intubation, as well as the minimal SpO2 reached during the procedure, was recorded. RESULTS: Patient characteristics were similar in both groups. Seventeen patients had a direct LG of 2 or 3 (no patient had a grade of 4). In all 17 of these patients, the LG improved with the VIU, always reaching grade 1 (P < 0.0001). Intubation time was shorter in the VIU group, but the difference did not reach statistical significance. There was no difference in post-intubation SpO2. CONCLUSION: In MO patients, the use of the VIU significantly improves visualisation of the larynx, thereby improving intubation conditions.
Abstract:
INTRODUCTION: Urinary incontinence (UI) is a phenomenon with a high prevalence in hospitalized elderly patients, affecting up to 70% of patients requiring long-term care. However, despite the discomfort it causes and its association with functional decline, it seems to receive insufficient attention from nurses in geriatric care. OBJECTIVES: To assess the prevalence of urinary incontinence in geriatric patients at admission and the level of nursing involvement, as characterized by explicit documentation of a UI diagnosis in the patient's record, prescription of a nursing intervention, or nursing actions related to UI. METHODS: Cross-sectional retrospective chart review. One hundred cases were randomly selected from patients 65 years or older admitted to the geriatric ward of a university hospital. The variables examined included: total and continence scores on the Measure of Functional Independence (MIF), socio-demographic variables, presence of a nursing diagnosis in the medical record, and prescription or documentation of a nursing intervention related to UI. RESULTS: The prevalence of urinary incontinence was 72%, and UI was positively correlated with a low MIF score, age, and awaiting-placement status. Among the examined cases, a nursing diagnosis of UI was documented in only 1.4% of cases, nursing interventions were prescribed in 54% of cases, and at least one nursing intervention was performed in 72% of cases. The vast majority of the interventions were palliative. DISCUSSION: The results on the prevalence of UI are similar to those reported in several other studies; this is also the case for nursing interventions. In this study, people with UI were given the same care regardless of their MIF score, age, or gender. One limitation of this study is that it is retrospective and therefore dependent on the quality of the nursing documentation. CONCLUSIONS: This study is novel because it examines UI in relation to nursing interventions. It demonstrates that, despite a high prevalence of UI, the general level of concern among nurses remains relatively low. Individualized care is desirable, and clinical innovations must be developed for primary and secondary prevention of UI during hospitalization.
Abstract:
The aim of this study is to measure and characterize the impact that involvement with patients hospitalized in acute psychiatric care units has on the family caregivers themselves. The conceptual framework used is Schene's (1990) model of family burden, which distinguishes the objective and subjective components of the family burden. Data were collected using the Involvement Evaluation Questionnaire (IEQ). The results of this study show that family caregivers experience a high level of worry, 3.8 on a 5-point Likert scale, as well as a tension level of 2.44 on a 5-point Likert scale. Several associations were found. A longer duration of the disorder reduces worry (ρ = 0.048). Being accustomed to the situation also has an impact by reducing tension (ρ = 0.002). The more the caregiver is "accustomed to the situation" (ρ = -0.021), the less worried they are about the patient's situation. Likewise, the younger the patient, the more tension the family caregiver feels (ρ = 0.008). These results, which can hardly be generalized given the small sample (n = 24), could nevertheless prompt nurses to reflect more deeply on how family caregivers of patients with psychiatric disorders hospitalized in an acute care unit are welcomed, given a place, and supported.
Abstract:
A burn patient was infected with Acinetobacter baumannii on transfer to the hospital after a terrorist attack. Two patients experienced cross-infection. Environmental swab samples were negative for A. baumannii. Six months later, the bacteria reemerged in 6 patients. Environmental swab samples obtained at this time were inoculated into a minimal mineral broth, and culture results showed widespread contamination. No case of infection occurred after closure of the unit for disinfection.
Abstract:
Introduction: Drug prescription is difficult in ICUs, as prescribers are many, drugs are expensive and decisions are complex. In our ICU, specialist clinicians (SC) are entitled to prescribe a list of specific drugs negotiated with the intensive care physicians (ICP). The objective of this investigation was to assess the 5-year evolution of the quantity and costs of drug prescription in our adult ICU and to identify the relative costs generated by ICP and SC. Methods: Quantities and costs of drugs delivered on a quarterly basis to the adult ICU of our hospital between 2004 and 2008 were extracted from the pharmacy database by ATC code, an international five-level classification system. Within each ATC first level, drugs with either a high level of consumption, high costs or large variations in quantities and costs were singled out and split by type of prescriber, ICP or SC. Cost figures used were the drug purchase prices paid by the hospital pharmacy. Results: Over the 5-year period, both quantities and costs of drugs increased, following a non-steady, non-parallel pattern. Four ATC codes accounted for 80% of both quantities and costs, with ATC code B (blood and haematopoietic organs) amounting to 63% of quantities and 41% of costs, followed by ATC code J (systemic anti-infectives, 20% of costs), ATC code N (nervous system, 11% of costs) and ATC code C (cardiovascular system, 8% of costs). Prescription by SC amounted to 1% of drug quantities but 19% of drug costs. The rate of increase in quantities and costs was seven times larger for ICP than for SC (Figure 1). Some peak values in costs and quantities were related to a very limited number of patients. Conclusions: A 5-year increase in the quantities and costs of drug prescription in an ICU is a matter of concern. Rather unexpectedly, total costs and cost increases were generated mainly by ICP. Careful follow-up is necessary to try to influence this evolution through an institutional policy endorsed by all professional categories involved in the process.
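As an illustration of the kind of aggregation described above, the following Python/pandas sketch groups quarterly delivery records by first-level ATC code and by prescriber type (ICP vs SC); the records, column names and figures are invented placeholders, not the pharmacy database.

```python
import pandas as pd

records = pd.DataFrame([
    # year, quarter, ATC level 1, prescriber, quantity, cost (purchase price)
    (2004, 1, "B", "ICP", 1200, 15000.0),
    (2004, 1, "J", "ICP",  300, 22000.0),
    (2004, 1, "B", "SC",    10,  8000.0),
    (2005, 2, "N", "ICP",  450,  6000.0),
], columns=["year", "quarter", "atc", "prescriber", "quantity", "cost"])

# Share of quantities and costs per first-level ATC code.
by_atc = records.groupby("atc")[["quantity", "cost"]].sum()
share = by_atc / by_atc.sum() * 100
# Split of quantities and costs by type of prescriber.
by_prescriber = records.groupby("prescriber")[["quantity", "cost"]].sum()

print(share.round(1))
print(by_prescriber)
```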
Abstract:
BACKGROUND: Multiple interventions were made to optimize the medication process in our intensive care unit (ICU): (1) transcriptions from the medical order form to the administration plan were eliminated by merging both into a single document; (2) the new form was built in a logical sequence and was highly structured to promote completeness and standardization of information; (3) frequently used drug names, approved units, and fixed routes were pre-printed; and (4) physicians and nurses were trained in the correct use of the new form. This study aimed to evaluate the impact of these interventions on clinically significant types of medication errors. METHODS: Eight types of medication errors were measured by a prospective chart review before and after the interventions in the ICU of a public tertiary care hospital. We used an interrupted time-series design to control for secular trends. RESULTS: Over 85 days, 9,298 lines of drug prescription and/or administration to 294 patients, corresponding to 754 patient-days, were collected and analysed for the three series before and the three series following the intervention. The global error rate decreased from 4.95% to 2.14% (a relative reduction of 56.8%, P < 0.001). CONCLUSIONS: The safety of the medication process in our ICU was improved by simple and inexpensive interventions. In addition to optimizing the prescription-writing process, the documentation of intravenous preparation, and the scheduling of administration, the elimination of transcription combined with user training contributed to reducing errors and shows potential to further increase safety.
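A quick arithmetic check of the reported reduction in the global error rate, using only the percentages quoted above, can be written as:

```python
# Error rates taken from the abstract above (in %).
before, after = 4.95, 2.14
relative_change = (after - before) / before * 100
print(f"Relative change: {relative_change:.1f}%")   # about -56.8%
```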
Abstract:
INTRODUCTION: Therapeutic hypothermia (TH) is often used to treat out-of-hospital cardiac arrest (OHCA) patients, who also often simultaneously receive insulin for stress-induced hyperglycaemia. However, the impact of TH on systemic metabolism and insulin resistance in critical illness is unknown. This study analyses the impact of TH on metabolism, including the evolution of insulin sensitivity (SI) and its variability, in patients with coma after OHCA. METHODS: This study uses a clinically validated, model-based measure of SI. Insulin sensitivity was identified hourly using retrospective data from 200 post-cardiac arrest patients (8,522 hours) treated with TH, shortly after admission to the intensive care unit (ICU). Blood glucose and body temperature readings were taken every one to two hours. Data were divided into three periods: (1) cool (T < 35°C); (2) an idle period of two hours as normothermia was re-established; and (3) warm (T > 37°C). A maximum of 24 hours each was considered for the cool and warm periods. The impact of each condition on SI was analysed per cohort and per patient, for both level and hour-to-hour variability, between periods and in six-hour blocks. RESULTS: Cohort and per-patient median SI levels increased consistently by 35% to 70% and 26% to 59% (P < 0.001), respectively, from cool to warm. Conversely, cohort and per-patient SI variability decreased by 11.1% to 33.6% (P < 0.001) over the first 12 hours of treatment. However, SI variability increased between the 18th and 30th hours, over the cool-to-warm transition, before continuing to decrease thereafter. CONCLUSIONS: OHCA patients treated with TH have significantly lower and more variable SI during the cool period compared with the later warm period. As treatment continues, the SI level rises and its variability decreases consistently, except for a large, significant increase during the cool-to-warm transition. These results demonstrate increased insulin resistance during mild induced hypothermia. Our study might have important implications for glycaemic control during targeted temperature management.
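The period split described in the methods can be sketched as follows; the temperature thresholds follow the abstract (cool < 35°C, warm > 37°C, transition in between), while the hourly SI values and the simple hour-to-hour variability summary are illustrative assumptions, not the study's model-based analysis.

```python
import statistics

def label(temp_c):
    """Assign an hourly record to the cool, transition, or warm period."""
    if temp_c < 35.0:
        return "cool"
    if temp_c > 37.0:
        return "warm"
    return "transition"

# (hour, body temperature in °C, identified insulin sensitivity SI) - illustrative
hourly = [(0, 33.8, 2.1e-4), (1, 34.0, 2.0e-4), (2, 34.5, 2.3e-4),
          (3, 36.0, 2.6e-4), (4, 37.4, 3.0e-4), (5, 37.6, 3.4e-4)]

by_period = {}
for _, temp, si in hourly:
    by_period.setdefault(label(temp), []).append(si)

for period, values in by_period.items():
    # Hour-to-hour variability as absolute change between consecutive SI values.
    deltas = [abs(b - a) for a, b in zip(values, values[1:])]
    print(period, "median SI:", statistics.median(values),
          "median |dSI|:", statistics.median(deltas) if deltas else "n/a")
```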
Abstract:
OBJECTIVE: To calculate the variable costs involved in the process of delivering erythropoiesis-stimulating agents (ESA) in European dialysis practices. METHODS: A conceptual model was developed to classify the processes and sub-processes followed in the pharmacy (ordering from the supplier; receiving, storing and delivering ESA to the dialysis unit), the dialysis unit (dose determination, ordering, receipt, registration, storage, administration, registration) and the waste disposal unit. Time and material costs were recorded. Labour costs were derived from actual local wages, while material costs came from the facilities' accounting records. Activities associated with ESA administration were listed, and each activity was evaluated to determine whether dosing frequency affected the amount of resources required. RESULTS: A total of 21 centres in 8 European countries supplied data, with a mean of 142 patients per hospital (range 42-648). Patients received various ESA regimens (thrice-weekly, twice-weekly, once-weekly, once every 2 weeks and once-monthly). For ESA administered every 2 weeks, the mean costs per patient per year for each process, and the estimated percentage reduction in costs obtainable, were respectively: pharmacy labour (10.1 euro, 39%); dialysis unit labour (66.0 euro, 65%); dialysis unit materials (4.11 euro, 61%); and waste unit materials (0.43 euro, 49%). LIMITATION: The impact on financial costs was not measured. CONCLUSION: ESA administration has quantifiable labour and material costs which are affected by dosing frequency.
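As a hedged illustration of how dosing frequency drives the labour component of these costs, the following Python sketch multiplies an assumed nursing time per ESA administration by the number of administrations per year and an assumed hourly wage; all figures are placeholders, not the study's measurements.

```python
MINUTES_PER_ADMIN = 5.0    # assumed nursing time per ESA administration
HOURLY_WAGE_EUR = 30.0     # assumed local nursing wage

def annual_labour_cost(admins_per_year):
    """Labour cost per patient per year for a given administration frequency."""
    return MINUTES_PER_ADMIN / 60.0 * HOURLY_WAGE_EUR * admins_per_year

regimens = {"thrice-weekly": 156, "once-weekly": 52,
            "every 2 weeks": 26, "once-monthly": 12}
for name, n in regimens.items():
    print(f"{name:>14}: {annual_labour_cost(n):6.1f} EUR/patient/year")
```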