10 results for Unit-Level
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The Nursing Home Survey on Patient Safety Culture (NHSPSC) was developed specifically for nursing homes to assess a facility’s safety climate and consists of 12 dimensions. After its pilot testing, however, no further psychometric analyses were performed on the instrument. For this study of safety climate in Swiss nursing home units, the NHSPSC was linguistically adapted to the Swiss context and to address the unit as well as the facility level, with the aim of testing aspects of the validity and reliability of the Swiss version before its use in Swiss nursing home units. Psychometric analyses were performed on data from 367 nursing personnel from nine nursing homes in the German-speaking part of Switzerland (response rate = 66%), and content validity (CVI) was examined. The statistical influence of unit membership on respondents’ answers, and on their agreement concerning their units’ safety climate, was tested using intraclass correlation coefficients (ICCs) and the rWG(J) interrater agreement index. A multilevel exploratory factor analysis (MEFA) with oblimin rotation was applied to examine the questionnaire’s dimensionality. Cronbach’s alpha and Raykov’s rho were calculated to assess factor reliability. The relationship of safety climate dimensions with clinical outcomes was explored. Expert feedback confirmed the relevance of the instrument’s items (CVI = 0.93). Personnel showed strong agreement in their perceptions in three dimensions of the questionnaire. ICCs supported a multilevel analysis. MEFA produced nine factors at the within level (compared with 12 in the original version) and two factors at the between level, with satisfactory fit statistics. Raykov’s rho for the single-level factors ranged between 0.67 and 0.86. Some safety climate dimensions showed moderate but non-significant correlations with the use of bedrails, physical restraint use, and fall-related injuries.
The Swiss version of the NHSPSC requires further refinement and testing before its use can be recommended in Swiss nursing homes: in particular, its dimensionality needs clarification to distinguish items addressing the unit-level safety climate from those addressing the facility level.
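As a rough illustration of the internal-consistency measure named in this abstract, the following is a minimal sketch of Cronbach’s alpha; the items, respondents, and scores are invented, not study data:

```python
# Hypothetical data, not from the study. Computes Cronbach's alpha, the
# internal-consistency coefficient used for factor reliability above.

def cronbach_alpha(items):
    """items: one list per questionnaire item, each with one score per
    respondent (all lists the same length)."""
    k = len(items)          # number of items
    n = len(items[0])       # number of respondents

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(item) for item in items) / var(totals))

# Three hypothetical 5-point Likert items answered by five respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(items)  # ≈ 0.886 for these made-up scores
```

Values closer to 1 indicate that the items of a dimension measure the same underlying construct; the 0.67–0.86 range reported above is for Raykov’s rho, a related but model-based coefficient.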
Abstract:
In a prospective multicentre study of bloodstream infection (BSI) from November 1, 2007 to July 31, 2010, seven paediatric cancer centres (PCC) from Germany and one from Switzerland included 770 paediatric cancer patients (58% males; median age 8.3 years, interquartile range (IQR) 3.8-14.8 years), comprising 153,193 individual days of surveillance (in- and outpatient days during intensive treatment). Broviac catheters were used in 63% of all patients and Ports in 20%. One hundred forty-two patients (18%; 95% CI 16 to 21%) experienced at least one BSI (179 BSIs in total; bacteraemia 70%, bacterial sepsis 27%, candidaemia 2%). In 57% of cases, the BSI occurred in inpatients, and in 79% it occurred after conventional chemotherapy. Only 56% of the patients showed neutropenia at BSI onset. In the multivariate analysis, patients with acute lymphoblastic leukaemia (ALL) or acute myeloblastic leukaemia (AML), relapsed malignancy, or a Broviac catheter faced an increased risk of BSI. Relapsed malignancy (16%) was an independent risk factor for all BSI and for Gram-positive BSI. CONCLUSION This study confirms relapsed malignancy as an independent risk factor for BSIs in paediatric cancer patients. On a unit level, data on BSIs in this high-risk population derived from prospective surveillance are not only mandatory to decide on empiric antimicrobial treatment but also beneficial in planning and evaluating preventive bundles. WHAT IS KNOWN: • Paediatric cancer patients face an increased risk of nosocomial bloodstream infections (BSIs). • In most cases, these BSIs are associated with the use of a long-term central venous catheter (Broviac, Port), severe and prolonged immunosuppression (e.g. neutropenia) and other chemotherapy-induced alterations of host defence mechanisms (e.g. mucositis). WHAT IS NEW: • This study is the first multicentre study confirming relapsed malignancy as an independent risk factor for BSIs in paediatric cancer patients.
• It describes the epidemiology of nosocomial BSI in paediatric cancer patients mainly outside the stem cell transplantation setting during conventional intensive therapy and argues for prospective surveillance programmes to target and evaluate preventive bundle interventions.
Abstract:
AIM: To compare the 10-year peri-implant bone loss (BL) rate in periodontally compromised (PCP) and periodontally healthy patients (PHP) around two different implant systems supporting single-unit crowns. MATERIALS AND METHODS: In this retrospective, controlled study, the mean BL (mBL) rate around dental implants placed in four groups of 20 non-smokers was evaluated after a follow-up of 10 years. Two groups of patients treated for periodontitis (PCP) and two groups of PHP were created. For each category (PCP and PHP), two different types of implant had been selected. The mBL was calculated by subtracting the radiographic bone levels at the time of crown cementation from the bone levels at the 10-year follow-up. RESULTS: The mean age, mean full-mouth plaque and full-mouth bleeding scores and implant location were similar between the four groups. Implant survival rates ranged between 85% and 95%, without statistically significant differences (P > 0.05) between groups. For both implant systems, PCP showed statistically significantly higher mBL rates and number of sites with BL ≥ 3 mm compared with PHP (P < 0.0001). CONCLUSIONS: After 10 years, implants in PCP yielded lower survival rates and higher mean marginal BL rates compared with those of implants placed in PHP. These results were independent of the implant system used or the healing modality applied.
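The mBL computation described in the methods (radiographic bone level at follow-up minus level at crown cementation, averaged over a group) can be sketched as follows; all measurements below are hypothetical:

```python
# Sketch of the mean bone loss (mBL) computation described in the abstract.
# Bone levels are in millimetres; the values are invented for illustration.

def mean_bone_loss(baseline_mm, follow_up_mm):
    """Per-implant bone loss (follow-up minus baseline), averaged per group."""
    losses = [f - b for b, f in zip(baseline_mm, follow_up_mm)]
    return sum(losses) / len(losses)

baseline = [1.0, 0.8, 1.2, 0.9]    # bone level at crown cementation
follow_up = [2.5, 1.9, 4.3, 2.1]   # bone level at 10-year follow-up

mbl = mean_bone_loss(baseline, follow_up)  # mean loss in mm over 10 years
# secondary outcome named above: number of sites with BL >= 3 mm
sites_ge_3mm = sum((f - b) >= 3 for b, f in zip(baseline, follow_up))
```

Dividing `mbl` by the follow-up duration gives the annual BL rate compared between the PCP and PHP groups.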
Abstract:
To compare the 10-year marginal bone loss rates around implants supporting single-unit crowns in tobacco smokers with and without a history of treated periodontitis.
Abstract:
INTRODUCTION: Guidelines for the treatment of patients in severe hypothermia, and particularly in hypothermic cardiac arrest, recommend rewarming using extracorporeal circulation (ECC). However, guidelines for the further in-hospital diagnostic and therapeutic approach to these patients, who often suffer from additional injuries (especially in avalanche casualties), are lacking. The lack of such algorithms may considerably delay treatment and put patients at further risk. Together with a multidisciplinary team, the Emergency Department at the University Hospital in Bern, a level I trauma centre, created an algorithm for the in-hospital treatment of patients with hypothermic cardiac arrest. This algorithm primarily focuses on the decision-making process for the administration of ECC. THE BERNESE HYPOTHERMIA ALGORITHM: The major difference between the traditional approach, where all hypothermic patients are primarily admitted to the emergency centre, and our new algorithm is that hypothermic cardiac arrest patients without obvious signs of severe trauma are taken to the operating theatre without delay. Subsequently, the interdisciplinary team decides whether to rewarm the patient using ECC based on a standard clinical trauma assessment, serum potassium levels, core body temperature, sonographic examinations of the abdomen, pleural space, and pericardium, and, if needed, a pelvic X-ray. During ECC, sonography is repeated, and haemodynamic function as well as haemoglobin levels are monitored regularly. Standard radiological investigations according to the local multiple-trauma protocol are performed only after ECC. Transfer to the intensive care unit, where mild therapeutic hypothermia is maintained for another 12 h, should not be delayed by additional X-rays for minor injuries. DISCUSSION: The presented algorithm is intended to facilitate in-hospital decision-making and shorten the door-to-reperfusion time for patients with hypothermic cardiac arrest.
It was the result of intensive collaboration between different specialties and highlights the importance of high-quality teamwork for rare cases of severe accidental hypothermia. Information derived from the new International Hypothermia Registry will help to answer open questions and further optimize the algorithm.
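The triage flow described in this abstract can be sketched very roughly as a decision function. All thresholds and field names below are illustrative placeholders only; the abstract does not state numeric cut-offs, and nothing here is clinical guidance:

```python
# Rough sketch of the triage logic described for the Bernese hypothermia
# algorithm. Thresholds and field names are PLACEHOLDERS for illustration,
# not values from the article or from any clinical guideline.

K_CUTOFF_MMOL_L = 12   # placeholder serum potassium cut-off
TEMP_CUTOFF_C = 32     # placeholder core temperature cut-off

def admit_for_ecc(patient):
    """Decide whether to rewarm a hypothermic cardiac-arrest patient with
    extracorporeal circulation (ECC), per the assessments named in the text."""
    if patient["obvious_severe_trauma"]:
        return False  # such patients follow the traditional emergency-centre path
    # interdisciplinary assessment performed in the operating theatre:
    return (
        patient["serum_potassium_mmol_l"] < K_CUTOFF_MMOL_L
        and patient["core_temp_c"] < TEMP_CUTOFF_C
        and not patient["sonography_shows_major_bleeding"]
    )

candidate = {
    "obvious_severe_trauma": False,
    "serum_potassium_mmol_l": 6.2,
    "core_temp_c": 24.0,
    "sonography_shows_major_bleeding": False,
}
decision = admit_for_ecc(candidate)  # True for this hypothetical patient
```

The point of the sketch is the ordering: the trauma screen gates admission to the operating theatre, and the laboratory and sonographic checks gate ECC itself.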
Abstract:
PURPOSE: The aim of the present clinical trial was to evaluate the 12-month success rate of titanium dental implants placed in the posterior mandible and immediately loaded with 3-unit fixed partial dentures. MATERIALS AND METHODS: Patients with missing mandibular premolars and molars were enrolled in this study. To be included in the study, the implants had to show good primary stability. Implant stability was measured with resonance frequency analysis using the Osstell device (Integration Diagnostics). Implants were included in the study when the implant stability quotient (ISQ) exceeded 62. Clinical measurements, such as width of keratinized tissue, ISQ, and radiographic assessment of peri-implant bone crest levels, were performed at baseline and at the 12-month follow-up. The comparison between the baseline and the 12-month visits was performed with the Student t test for paired data (statistically significant at a level of alpha = 0.05). RESULTS: Forty implants with a sandblasted, large-grit, acid-etched (SLA) surface (Straumann) were placed in 20 patients. At 12 months, only 1 implant had been lost because of an acute infection. The remaining 39 implants were successful, resulting in a 1-year success rate of 97.5%. Neither peri-implant bone levels, measured radiographically, nor implant stability changed significantly from baseline to the 12-month follow-up (P > .05). DISCUSSION: The immediate functional loading of implants placed in this case series resulted in a satisfactory success rate. CONCLUSION: The findings from this clinical study showed that the placement of SLA transmucosal implants in the mandibular area and their immediate loading with 3-unit fixed partial dentures may be a safe and successful procedure.
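The paired Student t test used above for the baseline versus 12-month comparison can be sketched as follows; the ISQ values are invented for illustration:

```python
# Sketch of the paired Student t test named in the methods. The ISQ
# measurements are hypothetical, not study data.
import math

def paired_t(before, after):
    """t statistic for paired samples: mean difference over its standard error.
    Compare |t| against the two-sided critical value t_{n-1, 0.975}."""
    d = [a - b for b, a in zip(before, after)]
    n = len(d)
    mean_d = sum(d) / n
    sd = math.sqrt(sum((x - mean_d) ** 2 for x in d) / (n - 1))  # sample SD
    return mean_d / (sd / math.sqrt(n))

# Hypothetical ISQ at baseline and 12 months for six implants
isq_baseline = [64, 66, 63, 70, 68, 65]
isq_12_months = [65, 67, 62, 71, 69, 66]
t = paired_t(isq_baseline, isq_12_months)  # = 2.0 for these values
```

A non-significant result, as reported above for both bone levels and ISQ, means |t| stays below the critical value at alpha = 0.05.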
Abstract:
BACKGROUND: Sedation protocols, including the use of sedation scales and regular sedation stops, help to reduce the length of mechanical ventilation and intensive care unit stay. Because clinical assessment of depth of sedation is labor-intensive, performed only intermittently, and interferes with sedation and sleep, processed electrophysiological signals from the brain have gained interest as surrogates. We hypothesized that auditory event-related potentials (ERPs), Bispectral Index (BIS), and Entropy can discriminate among clinically relevant sedation levels. METHODS: We studied 10 patients after elective thoracic or abdominal surgery with general anesthesia. Electroencephalogram, BIS, state entropy (SE), response entropy (RE), and ERPs were recorded immediately after surgery in the intensive care unit at Richmond Agitation-Sedation Scale (RASS) scores of -5 (very deep sedation), -4 (deep sedation), -3 to -1 (moderate sedation), and 0 (awake) during decreasing target-controlled sedation with propofol and remifentanil. Reference measurements for baseline levels were performed before or several days after the operation. RESULTS: At baseline, RASS -5, RASS -4, RASS -3 to -1, and RASS 0, BIS was 94 [4] (median, IQR), 47 [15], 68 [9], 75 [10], and 88 [6]; SE was 87 [3], 46 [10], 60 [22], 74 [21], and 87 [5]; and RE was 97 [4], 48 [9], 71 [25], 81 [18], and 96 [3], respectively (all P < 0.05, Friedman Test). Both BIS and Entropy had high variabilities. When ERP N100 amplitudes were considered alone, ERPs did not differ significantly among sedation levels. Nevertheless, discriminant ERP analysis including two parameters of principal component analysis revealed a prediction probability PK value of 0.89 for differentiating deep sedation, moderate sedation, and awake state. The corresponding PK for RE, SE, and BIS was 0.88, 0.89, and 0.85, respectively. CONCLUSIONS: Neither ERPs nor BIS or Entropy can replace clinical sedation assessment with standard scoring systems. 
Discrimination among very deep, deep to moderate, and no sedation after general anesthesia can be provided by ERPs and processed electroencephalograms, with similar PK values. The high inter- and intraindividual variability of Entropy and BIS precludes defining a target range of values to predict the sedation level in critically ill patients using these parameters. The variability of ERPs is unknown.
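The prediction probability PK reported above is a rank-based measure: roughly, the probability that a randomly chosen pair of observations with different sedation levels is ordered correctly by the indicator, with ties in the indicator counting half. The following is a generic pairwise concordance sketch on invented data, not the authors' implementation:

```python
# Generic concordance sketch of the prediction probability (PK) idea.
# Data are hypothetical; deeper sedation is assumed to give lower index values.

def prediction_probability(indicator, sedation_level):
    """Fraction of correctly ordered pairs among pairs with distinct observed
    sedation levels; ties in the indicator count half."""
    concordant = discordant = tied = 0
    n = len(indicator)
    for i in range(n):
        for j in range(i + 1, n):
            if sedation_level[i] == sedation_level[j]:
                continue  # pairs tied in the observed state are excluded
            diff = (indicator[i] - indicator[j]) * (
                sedation_level[i] - sedation_level[j])
            if diff > 0:
                concordant += 1
            elif diff < 0:
                discordant += 1
            else:
                tied += 1
    return (concordant + 0.5 * tied) / (concordant + discordant + tied)

# Hypothetical RASS-like ordinal levels (higher = more awake) and index values
levels = [0, 0, -3, -3, -5, -5]
index = [90, 85, 70, 60, 45, 62]
pk = prediction_probability(index, levels)  # 11 of 12 pairs concordant
```

A PK of 0.5 corresponds to chance ordering and 1.0 to perfect ordering, which is why the reported values around 0.85-0.89 indicate useful but imperfect discrimination.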
Abstract:
Purpose: We hypothesized that reduced arousability (Richmond Agitation Sedation Scale, RASS, scores −2 to −3) for any reason during delirium assessment increases the apparent prevalence of delirium in intensive care patients. To test this hypothesis, we assessed delirium using the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU) and the Intensive Care Delirium Screening Checklist (ICDSC) in intensive care patients during sedation stops, and related the findings to the level of sedation, as assessed with the RASS score. Methods: We assessed delirium in 80 patients with an ICU stay longer than 48 h using the CAM-ICU and ICDSC during daily sedation stops. Sedation was assessed using the RASS. The effect of including patients with a RASS of −2 and −3 during sedation stop (“light to moderate sedation”, eye contact less than 10 s or not at all, respectively) on the prevalence of delirium was analyzed. Results: A total of 467 patient days were assessed. The proportion of CAM-ICU-positive evaluations decreased from 53% to 31% (p < 0.001) if assessments from patients at RASS −2/−3 (22% of all assessments) were excluded. Similarly, the number of positive ICDSC results decreased from 51% to 29% (p < 0.001). Conclusions: Sedation per se can result in positive items on both the CAM-ICU and ICDSC, and therefore in a diagnosis of delirium. Consequently, the apparent prevalence of delirium depends on how a depressed level of consciousness after a sedation stop is interpreted (delirium vs persisting sedation). We suggest that any reports on delirium using these assessment tools should be stratified by a sedation score obtained during the assessment.
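The stratification effect described in the results can be sketched as a simple recomputation of apparent prevalence with and without assessments made at RASS −2/−3. The counts below are hypothetical, chosen only to mirror the direction of the reported effect, not to reproduce the study's percentages:

```python
# Hypothetical counts illustrating how apparent delirium prevalence changes
# when assessments performed at RASS -2/-3 are excluded (i.e. interpreted as
# persisting sedation rather than delirium).

def prevalence(records, include_sedated):
    kept = [r for r in records
            if include_sedated or r["rass"] not in (-2, -3)]
    positives = sum(r["cam_icu_positive"] for r in kept)
    return positives / len(kept)

# Each record: RASS score at the sedation stop plus the CAM-ICU result
records = (
    [{"rass": 0, "cam_icu_positive": True}] * 20
    + [{"rass": 0, "cam_icu_positive": False}] * 50
    + [{"rass": -2, "cam_icu_positive": True}] * 25  # sedation may mimic delirium
    + [{"rass": -3, "cam_icu_positive": True}] * 5
)

p_all = prevalence(records, include_sedated=True)      # 50/100 = 0.50
p_strict = prevalence(records, include_sedated=False)  # 20/70  ≈ 0.29
```

The gap between `p_all` and `p_strict` is the abstract's point: the same assessments yield very different delirium prevalences depending on how depressed consciousness is classified.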
Abstract:
OBJECTIVES The aim of this case series was to introduce a complete digital workflow for the production of monolithic implant crowns. MATERIAL AND METHODS Six patients were treated with implant-supported crowns made of resin nano ceramic (RNC). Starting with an intraoral optical scan (IOS), and following a CAD/CAM process, the monolithic crowns were bonded either to a novel prefabricated titanium abutment base (group A) or to a CAD/CAM-generated individualized titanium abutment (group B) in premolar or molar sites on a soft tissue level dental implant. Economic analyses included clinical and laboratory steps. An esthetic evaluation was performed to compare the two abutment-crown combinations. RESULTS None of the digitally constructed RNC crowns required any clinical adaptation. Overall mean work time calculations revealed obvious differences for group A (65.3 min) compared with group B (86.5 min). Esthetic analysis demonstrated a more favorable outcome for the prefabricated bonding bases. CONCLUSIONS Prefabricated or individualized abutments on monolithic RNC crowns using CAD/CAM technology in a model-free workflow seem to provide a feasible and streamlined treatment approach for single-edentulous space rehabilitation in the posterior region. However, RNC as full-contour material has to be considered experimental, and further large-scale clinical investigations with long-term follow-up observation are necessary.
Abstract:
Muscular weakness and muscle wasting are often observed in critically ill patients in intensive care units (ICUs) and may present as failure to wean from mechanical ventilation. Importantly, mounting data demonstrate that mechanical ventilation itself may induce progressive dysfunction of the main respiratory muscle, i.e. the diaphragm. This condition has been termed 'ventilator-induced diaphragmatic dysfunction' (VIDD) and should be distinguished from peripheral muscular weakness as observed in 'ICU-acquired weakness' (ICU-AW). Interestingly, VIDD and ICU-AW are often observed together in critically ill patients with, e.g., severe sepsis or septic shock, and recent data demonstrate that the pathophysiology of these conditions may overlap. On a histopathological level, VIDD may be characterized mainly as disuse muscular atrophy, and data demonstrate increased proteolysis and decreased protein synthesis as important underlying pathomechanisms. However, atrophy alone does not explain the observed loss of muscular force: when, for example, isolated muscle strips are examined and force is normalized for cross-sectional fibre area, the loss is disproportionately larger than would be expected from atrophy alone. Although the exact molecular pathways for the induction of proteolytic systems remain incompletely understood, data now suggest that VIDD may also be triggered by mechanisms including decreased diaphragmatic blood flow or increased oxidative stress. Here we provide a concise review of the available literature on respiratory muscle weakness and VIDD in the critically ill. Potential underlying pathomechanisms are discussed against the background of current diagnostic options, and we elucidate and speculate on potential novel therapeutic avenues.