935 results for Increasing failure rate


Relevance: 80.00%

Abstract:

PURPOSE: This systematic review sought to determine the long-term clinical survival rates of single-tooth restorations fabricated with computer-aided design/computer-assisted manufacture (CAD/CAM) technology, as well as the frequency of failures depending on the CAD/CAM system, the type of restoration, the selected material, and the luting agent. MATERIALS AND METHODS: An electronic search covering 1985 to 2007 was performed using two databases: Medline/PubMed and Embase. Selected keywords and well-defined inclusion and exclusion criteria guided the search. All articles were reviewed first by title, then by abstract, and finally by full-text reading. Data were assessed and extracted by two independent examiners. The pooled results were statistically analyzed, and the overall failure rate was calculated by assuming a Poisson-distributed number of events. In addition, reported failures were analyzed by CAD/CAM system, type of restoration, restorative material, and luting agent. RESULTS: From a total of 1,957 single-tooth restorations with a mean exposure time of 7.9 years and 170 failures, the failure rate, estimated per 100 restoration years, was 1.75% per year (95% CI: 1.22% to 2.52%). Based on random-effects Poisson regression analysis, the estimated total survival rate after 5 years was 91.6% (95% CI: 88.2% to 94.1%). CONCLUSIONS: Long-term survival rates for CAD/CAM single-tooth Cerec 1, Cerec 2, and Celay restorations appear to be similar to those of conventional restorations. No clinical studies or randomized clinical trials reporting on other CAD/CAM systems currently used in clinical practice, with follow-up reports of 3 or more years, were found at the time of the search.
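Under the constant-hazard (Poisson) model used in the review, the reported annual failure rate and the 5-year survival estimate are mutually consistent; a quick check of the arithmetic:

```latex
S(t) = e^{-\lambda t}, \qquad \lambda = 0.0175\,\mathrm{yr}^{-1}
\;\Rightarrow\; S(5) = e^{-5 \times 0.0175} = e^{-0.0875} \approx 0.916 = 91.6\%
```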

Relevance: 80.00%

Abstract:

BACKGROUND: The development of arsenical and diamidine resistance in Trypanosoma brucei is associated with loss of drug uptake by the P2 purine transporter as a result of alterations in the corresponding T. brucei adenosine transporter 1 gene (TbAT1). Previously, specific TbAT1 mutant type alleles linked to melarsoprol treatment failure were significantly more prevalent in T. b. gambiense from relapse patients at Omugo health centre in Arua district. Relapse rates of up to 30% prompted a shift from melarsoprol to eflornithine (alpha-difluoromethylornithine, DFMO) as first-line treatment at this centre. The aim of this study was to determine the status of TbAT1 in recent isolates collected from T. b. gambiense sleeping sickness patients from Arua and Moyo districts in Northwestern Uganda after this shift in first-line drug choice. METHODOLOGY AND RESULTS: Blood and cerebrospinal fluids of consenting patients were collected for DNA preparation and subsequent amplification. All of the 105 isolates from Omugo that we successfully analysed by PCR-RFLP possessed the TbAT1 wild type allele. In addition, PCR/RFLP analysis was performed for 74 samples from Moyo, where melarsoprol is still the first line drug; 61 samples displayed the wild genotype while six were mutant and seven had a mixed pattern of both mutant and wild-type TbAT1. The melarsoprol treatment failure rate at Moyo over the same period was nine out of 101 stage II cases that were followed up at least once. Five of the relapse cases harboured mutant TbAT1, one had the wild type, while no amplification was achieved from the remaining three samples. CONCLUSIONS/SIGNIFICANCE: The apparent disappearance of mutant alleles at Omugo may correlate with melarsoprol withdrawal as first-line treatment. Our results suggest that melarsoprol could successfully be reintroduced following a time lag subsequent to its replacement. A field-applicable test to predict melarsoprol treatment outcome and identify patients for whom the drug can still be beneficial is clearly required. This will facilitate cost-effective management of HAT in rural resource-poor settings, given that eflornithine has a much higher logistical requirement for its application.
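For readers who want confidence bounds on the reported proportions, a minimal sketch follows; the paper itself reports only raw counts, so the interval method (exact Clopper-Pearson) and the decision to count mixed infections as mutant-carrying are our assumptions:

```python
# Exact (Clopper-Pearson) 95% CI for a binomial proportion.
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05):
    """Exact two-sided (1 - alpha) confidence interval for k events in n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Moyo: 6 mutant + 7 mixed = 13 of 74 isolates carrying mutant TbAT1 (our grouping)
print(clopper_pearson(13, 74))    # approx. (0.10, 0.28)
# Melarsoprol treatment failure at Moyo: 9 of 101 stage II cases
print(clopper_pearson(9, 101))    # approx. (0.04, 0.16)
```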

Relevance: 80.00%

Abstract:

INTRODUCTION The objective of this trial was to compare the survival rates of mandibular lingual retainers bonded with either chemically cured or light-cured adhesive after orthodontic treatment. METHODS Patients having undergone orthodontic treatment at a private orthodontic office were randomly allocated to fixed retainers placed with chemically cured composite or light-cured composite. Eligibility criteria included no active caries, restorations, or fractures on the mandibular anterior teeth, and adequate oral hygiene. The main outcome was any type of first-time lingual retainer breakage; pattern of failure (adapted adhesive remnant index scores) was a secondary outcome. Randomization was accomplished with random permuted blocks of 20 patients with allocation concealed in sequentially numbered, opaque, sealed envelopes. Blinding was applicable for outcome assessment only. Patients were reviewed at 1, 3, and 6 months and then every 6 months after placement of the retainer until completion of the study. Data were analyzed using survival analysis including Cox regression; sensitivity analysis was carried out after data imputation for subjects lost to follow-up. RESULTS Two hundred twenty patients (median age, 16 years; interquartile range, 2; range, 12-47 years) were randomized in a 1:1 ratio to either chemical or light curing. Baseline characteristics were similar between groups, the median follow-up period was 2.19 years (range, 0.003-3.64 years), and 16 patients were lost to follow-up. At a minimum follow-up of 2 years, 47 of 110 (42.7%) and 55 of 110 (50.0%) retainers had some type of failure with chemically cured and light-cured adhesive, respectively (log-rank test, P = 0.35). Data were analyzed on an intention-to-treat basis, and the hazard ratio (HR) was 1.15 (95% confidence interval [CI], 0.88-1.70; P = 0.47). There was weak evidence that age is a significant predictor for lingual retainer failures (HR, 0.96; 95% CI, 0.93-1.00; P = 0.08). Adhesive remnant index scoring was possible for only 66 of the 102 (64.7%) failures and did not differ between composites (Fisher exact test, P = 0.16). No serious harm was observed other than gingivitis associated with plaque accumulation. CONCLUSIONS The results of this study indicated no evidence that survival of mandibular lingual retainers differs between chemically and light-cured adhesives. The overall failure rate was 46.4%; however, this included any type of failure, which may have exaggerated the overall failure rate.
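A sketch of the survival comparison described above, assuming the Python lifelines library; the file and column names are hypothetical stand-ins for the trial's dataset, and the published analysis additionally ran multiple-imputation sensitivity checks:

```python
# Log-rank test and Cox regression comparing retainer survival between adhesives.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("retainers.csv")   # hypothetical file: one row per patient
# expected columns: years_to_failure, failed (1/0), light_cured (1/0), age

chem = df[df.light_cured == 0]
light = df[df.light_cured == 1]
lr = logrank_test(chem.years_to_failure, light.years_to_failure,
                  event_observed_A=chem.failed, event_observed_B=light.failed)
print(lr.p_value)                   # reported: P = 0.35

cph = CoxPHFitter()
cph.fit(df[["years_to_failure", "failed", "light_cured", "age"]],
        duration_col="years_to_failure", event_col="failed")
cph.print_summary()                 # reported HRs: 1.15 (curing mode), 0.96 (age)
```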

Relevance: 80.00%

Abstract:

OBJECTIVES Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference. DESIGN Mathematical modelling study based on data from ART programmes. METHODS We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and percent of observed mortality difference explained. RESULTS RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74-1.03) in scenario A, 0.94 (0.77-1.02) with delayed switching (scenario B) and 0.80 (0.44-1.07) when assuming a 3-times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a 3-times higher failure rate). Eleven percent was explained by non-HIV related mortality. CONCLUSIONS VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.
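A toy version of such a stochastic simulation is sketched below. The transition probabilities are illustrative assumptions, not the published model's parameters; only scenario C's 3-times higher failure rate under CD4-only monitoring is mirrored:

```python
# Toy Monte Carlo: 5-year mortality under VL vs CD4 monitoring (illustrative only).
import numpy as np

rng = np.random.default_rng(42)

def five_year_mortality(p_fail, p_detect, n=50_000, p_die_failed=0.05):
    """Vectorized toy cohort: yearly failure, detection (switch), death while failed."""
    alive = np.ones(n, dtype=bool)
    failed = np.zeros(n, dtype=bool)
    deaths = 0
    for _ in range(5):
        failed |= alive & (rng.random(n) < p_fail)   # new virologic failures
        failed &= ~(rng.random(n) < p_detect)        # detected -> switched to second line
        dying = alive & failed & (rng.random(n) < p_die_failed)
        deaths += int(dying.sum())
        alive &= ~dying
    return deaths / n

m_vl  = five_year_mortality(p_fail=0.03, p_detect=0.9)   # VL: timely, complete detection
m_cd4 = five_year_mortality(p_fail=0.09, p_detect=0.4)   # scenario C: 3x failure rate
print(f"RR = {m_vl / m_cd4:.2f}")
```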

Relevance: 80.00%

Abstract:

OBJECTIVES The aim of this study was to describe the process to obtain Food and Drug Administration (FDA) approval for the expanded indication for treatment with the Resolute zotarolimus-eluting stent (R-ZES) (Medtronic, Inc., Santa Rosa, California) in patients with coronary artery disease and diabetes. BACKGROUND The R-ZES is the first drug-eluting stent specifically indicated in the United States for percutaneous coronary intervention in patients with diabetes. METHODS We pooled patient-level data for 5,130 patients from the RESOLUTE Global Clinical Program. A performance goal prospectively determined in conjunction with the FDA was established as a rate of target vessel failure at 12 months of 14.5%. In addition to the FDA pre-specified cohort of less complex patients with diabetes (n = 878), we evaluated outcomes of the R-ZES in all 1,535 patients with diabetes compared with all 3,595 patients without diabetes at 2 years. RESULTS The 12-month rate of target vessel failure in the pre-specified diabetic cohort was 7.8% (upper 95% confidence interval: 9.51%), significantly lower than the performance goal of 14.5% (p < 0.001). After 2 years, the cumulative incidence of target lesion failure in patients with noninsulin-treated diabetes was comparable to that of patients without diabetes (8.0% vs. 7.1%). The higher risk insulin-treated population demonstrated a significantly higher target lesion failure rate (13.7%). In the whole population, including complex patients, rates of stent thrombosis were not significantly different between patients with and without diabetes (1.2% vs. 0.8%). CONCLUSIONS The R-ZES is safe and effective in patients with diabetes. Long-term clinical data of patients with noninsulin-treated diabetes are equivalent to patients without diabetes. Patients with insulin-treated diabetes remain a higher risk subset. (The Medtronic RESOLUTE Clinical Trial; NCT00248079; Randomized, Two-arm, Non-inferiority Study Comparing Endeavor-Resolute Stent With Abbot Xience-V Stent [RESOLUTE-AC]; NCT00617084; The Medtronic RESOLUTE US Clinical Trial (R-US); NCT00726453; RESOLUTE International Registry: Evaluation of the Resolute Zotarolimus-Eluting Stent System in a 'Real-World' Patient Population [R-Int]; NCT00752128; RESOLUTE Japan-The Clinical Evaluation of the MDT-4107 Drug-Eluting Coronary Stent [RJ]; NCT00927940).
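The performance-goal analysis amounts to a one-sided binomial test of the observed 12-month target vessel failure proportion against 14.5%. A sketch, with the event count back-calculated from the reported 7.8% of the 878-patient cohort (an assumption that may be off by one):

```python
# One-sided test of observed TVF against the 14.5% performance goal.
from scipy.stats import binomtest

n = 878
k = round(0.078 * n)    # ~68 events, inferred from the reported 7.8% (assumption)
res = binomtest(k, n, p=0.145, alternative="less")
print(res.pvalue)                                          # reported: p < 0.001
print(res.proportion_ci(confidence_level=0.95, method="exact"))  # reported upper bound: 9.51%
```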

Relevance: 80.00%

Abstract:

BACKGROUND Although well-established for suspected lower limb deep venous thrombosis, an algorithm combining a clinical decision score, d-dimer testing, and ultrasonography has not been evaluated for suspected upper extremity deep venous thrombosis (UEDVT). OBJECTIVE To assess the safety and feasibility of a new diagnostic algorithm in patients with clinically suspected UEDVT. DESIGN Diagnostic management study (ClinicalTrials.gov: NCT01324037). SETTING 16 hospitals in Europe and the United States. PATIENTS 406 inpatients and outpatients with suspected UEDVT. MEASUREMENTS The algorithm consisted of the sequential application of a clinical decision score, d-dimer testing, and ultrasonography. Patients were first categorized as likely or unlikely to have UEDVT; in those with an unlikely score and normal d-dimer levels, UEDVT was excluded. All other patients had (repeated) compression ultrasonography. The primary outcome was the 3-month incidence of symptomatic UEDVT and pulmonary embolism in patients with a normal diagnostic work-up. RESULTS The algorithm was feasible and completed in 390 of the 406 patients (96%). In 87 patients (21%), an unlikely score combined with normal d-dimer levels excluded UEDVT. Superficial venous thrombosis and UEDVT were diagnosed in 54 (13%) and 103 (25%) patients, respectively. All 249 patients with a normal diagnostic work-up, including those with protocol violations (n = 16), were followed for 3 months. One patient developed UEDVT during follow-up, for an overall failure rate of 0.4% (95% CI, 0.0% to 2.2%). LIMITATIONS This study was not powered to show the safety of the substrategies. d-Dimer testing was done locally. CONCLUSION The combination of a clinical decision score, d-dimer testing, and ultrasonography can safely and effectively exclude UEDVT. If confirmed by other studies, this algorithm has potential as a standard approach to suspected UEDVT. PRIMARY FUNDING SOURCE None.
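The sequential work-up translates naturally into a decision function. The sketch below mirrors the algorithm's control flow; the function and parameter names are illustrative stand-ins for the actual score and assays used in the study:

```python
# Sequential UEDVT work-up: decision score -> d-dimer -> (repeated) ultrasonography.
from enum import Enum

class Result(Enum):
    EXCLUDED = "UEDVT excluded"
    CONFIRMED = "UEDVT confirmed"

def uedvt_workup(likely_score: bool, d_dimer_normal: bool,
                 ultrasound_positive: bool, repeat_ultrasound_positive: bool) -> Result:
    # Steps 1+2: an "unlikely" clinical score plus a normal d-dimer rules UEDVT out.
    if not likely_score and d_dimer_normal:
        return Result.EXCLUDED
    # Step 3: all remaining patients undergo compression ultrasonography,
    # repeated once if the first scan is negative (per the study protocol).
    if ultrasound_positive or repeat_ultrasound_positive:
        return Result.CONFIRMED
    return Result.EXCLUDED

print(uedvt_workup(likely_score=False, d_dimer_normal=True,
                   ultrasound_positive=False, repeat_ultrasound_positive=False))
```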

Relevance: 80.00%

Abstract:

PURPOSE To assess the survival outcomes and reported complications of screw- and cement-retained fixed reconstructions supported on dental implants. MATERIALS AND METHODS A Medline (PubMed), Embase, and Cochrane electronic database search from 2000 to September 2012 using MeSH and free-text terms was conducted. Selected inclusion and exclusion criteria guided the search. All studies were first reviewed by abstract and subsequently by full-text reading by two examiners independently. Data were extracted by two examiners and statistically analyzed using a random effects Poisson regression. RESULTS From 4,324 abstracts, 321 full-text articles were reviewed. Seventy-three articles were found to qualify for inclusion. Five-year survival rates of 96.03% (95% confidence interval [CI]: 93.85% to 97.43%) and 95.55% (95% CI: 92.96% to 97.19%) were calculated for cemented and screw-retained reconstructions, respectively (P = .69). Comparison of cement and screw retention showed no difference when grouped as single crowns (I-SC) (P = .10) or fixed partial dentures (I-FDP) (P = .49). The 5-year survival rate for screw-retained full-arch reconstructions was 96.71% (95% CI: 93.66% to 98.31%). All-ceramic reconstruction material exhibited a significantly higher failure rate than porcelain-fused-to-metal (PFM) in cemented reconstructions (P = .01) but not when comparing screw-retained reconstructions (P = .66). Technical and biologic complications demonstrating a statistically significant difference included loss of retention (P ≤ .01), abutment loosening (P ≤ .01), porcelain fracture and/or chipping (P = .02), presence of fistula/suppuration (P ≤ .001), total technical events (P = .03), and total biologic events (P = .02). CONCLUSIONS Although no statistical difference was found between cement- and screw-retained reconstructions for survival or failure rates, screw-retained reconstructions exhibited fewer technical and biologic complications overall. There were no statistically significant differences between the failure rates of the different reconstruction types (I-SCs, I-FDPs, full-arch I-FDPs) or abutment materials (titanium, gold, ceramic). The failure rate of cemented reconstructions was not influenced by the choice of a specific cement, though cement type did influence loss of retention.
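Under the constant-rate assumption of the Poisson model used here, a 5-year survival estimate translates back into an annual failure rate; for the cemented reconstructions, for example:

```latex
\lambda = -\tfrac{1}{5}\ln S(5) = -\tfrac{1}{5}\ln(0.9603) \approx 0.0081 \approx 0.81\%\ \text{per year}
```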

Relevance: 80.00%

Abstract:

The choice and duration of antiplatelet therapy for secondary prevention of coronary artery disease (CAD) is determined by the clinical context and treatment strategy. Oral antiplatelet agents for secondary prevention include the cyclo-oxygenase-1 inhibitor aspirin, and the ADP dependent P2Y12 inhibitors clopidogrel, prasugrel and ticagrelor. Aspirin constitutes the cornerstone in secondary prevention of CAD and is complemented by clopidogrel in patients with stable CAD undergoing percutaneous coronary intervention. Among patients with acute coronary syndrome, prasugrel and ticagrelor improve net clinical outcome by reducing ischaemic adverse events at the expense of an increased risk of bleeding as compared with clopidogrel. Prasugrel appears particularly effective among patients with ST elevation myocardial infarction to reduce the risk of stent thrombosis compared with clopidogrel, and offered a greater net clinical benefit among patients with diabetes compared with patients without diabetes. Ticagrelor is associated with reduced mortality without increasing the rate of coronary artery bypass graft (CABG)-related bleeding as compared with clopidogrel. Dual antiplatelet therapy should be continued for a minimum of 1 year among patients with acute coronary syndrome irrespective of stent type; among patients with stable CAD treated with new generation drug-eluting stents, available data suggest no benefit to prolong antiplatelet treatment beyond 6 months.

Relevance: 80.00%

Abstract:

Reduced bone stock can result in fractures that mostly occur in the spine, distal radius, and proximal femur. In case of operative treatment, osteoporosis is associated with an increased failure rate. To estimate implant anchorage, mechanical methods of measuring bone strength intraoperatively seem promising. It has been shown that the mechanical peak torque correlates with the local bone mineral density and screw failure load in hip, hindfoot, humerus, and spine in vitro. One device to measure mechanical peak torque is the DensiProbe (AO Research Institute, Davos, Switzerland). The device has shown its effectiveness in peak torque measurement in mechanical testing setups for use in the hip, hindfoot, and spine. In all studies, mechanical torque measurements correlated with local bone mineral density and screw failure load. The device allows the surgeon to judge local bone strength intraoperatively, directly at the region of interest, and gives valuable information on whether additional augmentation is needed. We summarize the methods of this new technique, its advantages and limitations, and give an overview of current and possible future applications.

Relevance: 80.00%

Abstract:

OBJECTIVES Valve-sparing root replacement (VSRR) is thought to reduce the rate of thromboembolic and bleeding events compared with mechanical aortic root replacement (MRR) using a composite graft by avoiding oral anticoagulation. But as VSRR carries a certain risk of subsequent reinterventions, decision-making in the individual patient can be challenging. METHODS Of 100 Marfan syndrome (MFS) patients who underwent 169 aortic surgeries and were followed at our institution since 1995, 59 consecutive patients without a history of dissection or prior aortic surgery underwent elective VSRR or MRR and were retrospectively analysed. RESULTS VSRR was performed in 29 (David n = 24, Yacoub n = 5) and MRR in 30 patients. The mean age was 33 ± 15 years. The mean follow-up after VSRR was 6.5 ± 4 years (180 patient-years) compared with 8.8 ± 9 years (274 patient-years) after MRR. Reoperation rates after root remodelling (Yacoub) were significantly higher than after the reimplantation (David) procedure (60 vs 4.2%, P = 0.01). The need for reinterventions after the reimplantation procedure (0.8% per patient-year) was not significantly higher than after MRR (P = 0.44), but follow-up after VSRR was significantly shorter (P = 0.03). There was neither significant morbidity nor mortality associated with root reoperations. There were no neurological events after VSRR compared with four stroke/intracranial bleeding events in the MRR group (log-rank, P = 0.11), translating into an event rate of 1.46% per patient-year following MRR. CONCLUSION The calculated annual failure rate after VSRR using the reimplantation technique was lower than the annual risk of thromboembolic or bleeding events. Since the perioperative risk of reinterventions following VSRR is low, patients might benefit from VSRR even if redo surgery becomes necessary during follow-up.
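The quoted event rate is simply events divided by cumulative follow-up; for the neurological events after MRR:

```latex
\frac{4\ \text{events}}{274\ \text{patient-years}} \approx 0.0146 = 1.46\%\ \text{per patient-year}
```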

Relevance: 80.00%

Abstract:

OBJECT In ventriculoperitoneal (VP) shunt surgery, laparoscopic assistance can be used for placement of the peritoneal catheter. Until now, the efficacy of laparoscopic shunt placement has been investigated only in retrospective and nonrandomized prospective studies, which have reported decreased distal shunt dysfunction rates in patients undergoing laparoscopic placement compared with mini-laparotomy cohorts. In this randomized controlled trial the authors compared rates of shunt failure in patients who underwent laparoscopic surgery for peritoneal catheter placement with rates in patients who underwent traditional mini-laparotomy. METHODS One hundred twenty patients scheduled for VP shunt surgery were randomized to laparoscopic surgery or mini-laparotomy for insertion of the peritoneal catheter. The primary endpoint was the rate of overall shunt complication or failure within the first 12 months after surgery. Secondary endpoints were distal shunt failure, overall complication/failure, duration of surgery and hospitalization, and morbidity. RESULTS The overall shunt complication/failure rate was 15% (9 of 60 cases) in the laparoscopic group and 18.3% (11 of 60 cases) in the mini-laparotomy group (p = 0.404). Patients in the laparoscopic group had no distal shunt failures; in contrast, 5 (8%) of 60 patients in the mini-laparotomy group experienced distal shunt failure (p = 0.029). Intraoperative complications occurred in 2 patients (both in the laparoscopic group), and abdominal pain led to catheter removal in 1 patient per group. Infections occurred in 1 patient in the laparoscopic group and 3 in the mini-laparotomy group. The mean durations of surgery and hospitalization were similar in the 2 groups. CONCLUSIONS While overall shunt failure rates were similar in the 2 groups, the use of laparoscopic shunt placement significantly reduced the rate of distal shunt failure compared with mini-laparotomy.
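The distal-failure comparison (0 of 60 vs 5 of 60) is reproducible with Fisher's exact test; the reported p = 0.029 matches the one-sided version (an inference on our part, since the paper does not state sidedness):

```python
# Fisher's exact test on distal shunt failure by surgical approach.
from scipy.stats import fisher_exact

table = [[0, 60],   # laparoscopic: failures, non-failures
         [5, 55]]   # mini-laparotomy
_, p_one_sided = fisher_exact(table, alternative="less")
print(p_one_sided)  # ~0.029, matching the reported value
```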

Relevance: 80.00%

Abstract:

This study analysed the outcome of 563 Aplastic Anaemia (AA) children aged 0-12 years reported to the Severe Aplastic Anaemia Working Party database of the European Society for Blood and Marrow Transplantation, according to treatment received. Overall survival (OS) after upfront human leucocyte antigen-matched family donor (MFD) haematopoietic stem cell transplantation (HSCT) or immunosuppressive treatment (IST) was 91% vs. 87% (P = 0·18). Event-free survival (EFS) after upfront MFD HSCT or IST was 87% vs. 33% (P = 0·001). Ninety-one of 167 patients (55%) failed front-line IST and underwent rescue HSCT. The OS of this rescue group was 83% compared with 91% for upfront MFD HSCT patients and 97% for those who did not fail IST up-front (P = 0·017). Rejection was 2% for MFD HSCT and HSCT post-IST failure (P = 0·73). Acute graft-versus-host disease (GVHD) grade II-IV was 8% in MFD graft vs. 25% for HSCT post-IST failure (P < 0·0001). Chronic GVHD was 6% in MFD HSCT vs. 20% in HSCT post-IST failure (P < 0·0001). MFD HSCT is an excellent therapy for children with AA. IST has a high failure rate, but remains a reasonable first-line choice if MFD HSCT is not available because high OS enables access to HSCT, which is a very good rescue option.

Relevance: 80.00%

Abstract:

The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is among the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons, 2012).

                              Class I           Class II        Class III              Class IV
Blood loss (ml)               Up to 750         750–1500        1500–2000              >2000
Blood loss (% blood volume)   Up to 15%         15–30%          30–40%                 >40%
Pulse rate (bpm)              <100              100–120         120–140                >140
Systolic blood pressure       Normal            Normal          Decreased              Decreased
Pulse pressure                Normal or ↑       Decreased       Decreased              Decreased
Respiratory rate              14–20             20–30           30–40                  >35
Urine output (ml/h)           >30               20–30           5–15                   Negligible
CNS/mental status             Slightly anxious  Mildly anxious  Anxious, confused      Confused, lethargic
Initial fluid replacement     Crystalloid       Crystalloid     Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated from the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily run in parallel as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistical difference in heart rate and blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

Grade                     Vitals                           Response to fluid bolus (1000 ml)    Estimated blood loss (ml)
Normal (no haemorrhage)   Normal                           NA                                   None
Class I (mild)            Normal                           Yes, no further fluid required       Up to 750
Class II (moderate)       HR >100 with SBP >90 mmHg        Yes, no further fluid required       750–1500
Class III (severe)        SBP <90 mmHg                     Requires repeated fluid boluses      1500–2000
Class IV (moribund)       SBP <90 mmHg or imminent arrest  Declining SBP despite fluid boluses  >2000

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occurring first, with hypotension and bradycardia occurring only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern but will continue to mention the exceptions and pitfalls at a second stage. The shock classification of ATLS is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS refers to the pitfalls in the signs of acute haemorrhage as well: advanced age, athletes, pregnancy, medications and pacemakers, and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be literally applied in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss together with clinical examination and laboratory findings (e.g. base deficit and lactate) but does not use a shock classification related to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based on vital signs alone, but includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
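To make the editorial's point concrete, a naive classifier built directly from the Table 1 thresholds shows why a single class is hard to assign: each parameter can vote for a different class. A sketch (the thresholds are the manual's; the majority-vote tie-breaking is ours):

```python
# Naive ATLS class assignment from Table 1 vital signs.
# Illustrates the editorial's point: parameters may vote for different classes,
# so a single class label oversimplifies the clinical picture.
from collections import Counter

def atls_class(pulse: float, systolic_normal: bool, resp_rate: float) -> int:
    votes = []
    # pulse rate (bpm) thresholds from Table 1
    votes.append(1 if pulse < 100 else 2 if pulse <= 120 else 3 if pulse <= 140 else 4)
    # systolic blood pressure: normal in classes I-II, decreased in III-IV
    votes.append(2 if systolic_normal else 3)
    # respiratory rate thresholds from Table 1
    votes.append(1 if resp_rate <= 20 else 2 if resp_rate <= 30 else 3 if resp_rate <= 40 else 4)
    cls, count = Counter(votes).most_common(1)[0]
    if count == 1:   # all three parameters disagree
        print("warning: parameters vote for three different classes:", votes)
    return cls

print(atls_class(pulse=125, systolic_normal=True, resp_rate=18))  # a conflicting picture
```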

Relevance: 80.00%

Abstract:

PURPOSE The aim of this study was to analyze the patient pool referred to a specialty clinic for implant surgery over a 3-year period. MATERIALS AND METHODS All patients receiving dental implants between 2008 and 2010 at the Department of Oral Surgery and Stomatology were included in the study. As primary outcome parameters, the patients were analyzed according to the following criteria: age, sex, systemic diseases, and indication for therapy. For the inserted implants, the type of surgical procedure, the types of implants placed, postsurgical complications, and early failures were recorded. A logistic regression analysis was performed to identify possible local and systemic risk factors for complications. As a secondary outcome, data regarding demographics and surgical procedures were compared with the findings of a historic study group (2002 to 2004). RESULTS A total of 1,568 patients (792 women and 776 men; mean age, 52.6 years) received 2,279 implants. The most frequent indication was a single-tooth gap (52.8%). Augmentative procedures were performed in 60% of the cases. Tissue-level implants (72.1%) were more frequently used than bone-level implants (27.9%). Regarding dimensions of the implants, a diameter of 4.1 mm (59.7%) and a length of 10 mm (55.0%) were most often utilized. An early failure rate of 0.6% was recorded (13 implants). Patients were older and received more implants in the maxilla, and the complexity of surgical interventions had increased when compared to the patient pool of 2002 to 2004. CONCLUSION Implant therapy performed in a surgical specialty clinic utilizing strict patient selection and evidence-based surgical protocols showed a very low early failure rate of 0.6%.
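The quoted early failure rate follows directly from the counts:

```latex
\frac{13\ \text{failed implants}}{2{,}279\ \text{implants}} \approx 0.0057 = 0.57\% \approx 0.6\%
```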