901 results for retrospective studies
Abstract:
BACKGROUND: Adults use the Internet for weight loss information, sometimes by participating in discussion forums. Our purpose was to analyze the quality of advice exchanged on these forums. METHODS: This was a retrospective analysis of messages posted to 18 Internet weight loss forums during 1 month in 2006. Advice was evaluated for congruence with clinical guidelines; potential for causing harm; and subsequent correction when it was contradictory to guidelines (erroneous) or potentially harmful. Message- and forum-specific characteristics were evaluated as predictors of advice quality and self-correction. RESULTS: Of 3368 initial messages, 266 (7.9%) were requests for advice. Of 654 provisions of advice, 56 (8.6%) were erroneous and 19 of these 56 (34%) were subsequently corrected. Forty-three (6.6%) provisions of advice were harmful, and 12 of these 43 (28%) were subsequently corrected. Messages from low-activity forums (fewer messages) were more likely than those from high-activity forums to be erroneous (10.6% vs 2.4%, P < .001) or harmful (8.4% vs 1.2%, P < .001). In high-activity forums, 2 of 4 (50%) erroneous provisions of advice and 2 of 2 (100%) potentially harmful provisions of advice were corrected by subsequent postings. Compared with general weight loss advice, medication-related advice was more likely to be erroneous (P = .02) or harmful (P = .01). CONCLUSIONS: Most advice posted on highly active Internet weight loss forums is not erroneous or harmful. However, clinical and research strategies are needed to address the quality of medication-related advice.
Abstract:
BACKGROUND: Pediatric truncal vascular injuries occur infrequently and have a reported mortality rate of 30% to 50%. This report examines the demographics, mechanisms of injury, associated trauma, and outcomes of patients presenting over the past 10 years at a single institution with truncal vascular injuries. METHODS: A retrospective review (1997-2006) of a pediatric trauma registry at a single institution was undertaken. RESULTS: Seventy-five truncal vascular injuries occurred in 57 patients (age, 12 ± 3 years); the injury mechanism was penetrating in 37%. Concomitant injuries occurred in 76%, 62%, and 43% of abdominal, thoracic, and neck vascular injuries, respectively. Nonvascular complications occurred more frequently in patients with abdominal vascular injuries who were hemodynamically unstable on presentation. All patients with thoracic vascular injuries presenting with hemodynamic instability died. Of patients with neck vascular injuries, 1 of 2 who presented hemodynamically unstable died, compared with 1 of 12 who presented hemodynamically stable. Overall survival was 75%. CONCLUSIONS: Survival and complications of pediatric truncal vascular injury are related to hemodynamic status at the time of presentation. Associated injuries are more frequent with trauma involving the abdomen.
Abstract:
OBJECTIVE: We sought to determine maternal and neonatal outcomes by labor onset type and gestational age. STUDY DESIGN: We used electronic medical records data from 10 US institutions in the Consortium on Safe Labor on 115,528 deliveries from 2002 through 2008. Deliveries were divided by labor onset type (spontaneous, elective induction, indicated induction, unlabored cesarean). Neonatal and maternal outcomes were calculated by labor onset type and gestational age. RESULTS: Neonatal intensive care unit admissions and sepsis improved with each week of gestational age until 39 weeks (P < .001). After adjusting for complications, elective induction of labor was associated with a lower risk of ventilator use (odds ratio [OR], 0.38; 95% confidence interval [CI], 0.28-0.53), sepsis (OR, 0.36; 95% CI, 0.26-0.49), and neonatal intensive care unit admission (OR, 0.52; 95% CI, 0.48-0.57) compared with spontaneous labor. The relative risk of hysterectomy at term was 3.21 (95% CI, 1.08-9.54) with elective induction, 1.16 (95% CI, 0.24-5.58) with indicated induction, and 6.57 (95% CI, 1.78-24.30) with cesarean without labor, compared with spontaneous labor. CONCLUSION: Some neonatal outcomes improved until 39 weeks of gestation. Elective induction was associated with better neonatal outcomes than spontaneous labor. Elective induction may, however, be associated with an increased hysterectomy risk.
Abstract:
BACKGROUND: Follow-up of abnormal outpatient laboratory test results is a major patient safety concern. Electronic medical records can potentially address this concern through automated notification. We examined whether automated notifications of abnormal laboratory results (alerts) in an integrated electronic medical record resulted in timely follow-up actions. METHODS: We studied 4 alerts: hemoglobin A1c ≥15%, positive hepatitis C antibody, prostate-specific antigen ≥15 ng/mL, and thyroid-stimulating hormone ≥15 mIU/L. An alert tracking system determined whether the alert was acknowledged (ie, provider clicked on and opened the message) within 2 weeks of transmission; acknowledged alerts were considered read. Within 30 days of result transmission, record review and provider contact determined follow-up actions (eg, patient contact, treatment). Multivariable logistic regression models analyzed predictors for lack of timely follow-up. RESULTS: Between May and December 2008, 78,158 tests (hemoglobin A1c, hepatitis C antibody, thyroid-stimulating hormone, and prostate-specific antigen) were performed, of which 1163 (1.48%) were transmitted as alerts; 10.2% of these (119/1163) were unacknowledged. Timely follow-up was lacking in 79 (6.8%) and was not statistically different for acknowledged and unacknowledged alerts (6.4% vs 10.1%; P = .13). Of 1163 alerts, 202 (17.4%) arose from unnecessarily ordered (redundant) tests. Alerts for a new versus known diagnosis were more likely to lack timely follow-up (odds ratio 7.35; 95% confidence interval, 4.16-12.97), whereas alerts related to redundant tests were less likely to lack timely follow-up (odds ratio 0.24; 95% confidence interval, 0.07-0.84). CONCLUSIONS: Safety concerns related to timely patient follow-up remain despite automated notification of non-life-threatening abnormal laboratory results in the outpatient setting.
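As a rough illustration of the modelling approach named above (multivariable logistic regression reported as odds ratios with 95% confidence intervals), here is a minimal Python sketch using statsmodels. The predictor names, coefficients and data are simulated assumptions for demonstration only, not the study's actual code or data:

```python
# Minimal sketch, assuming simulated data: a logistic regression for
# "lack of timely follow-up", reported as odds ratios with 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1163  # number of alerts in the study
df = pd.DataFrame({
    "new_diagnosis": rng.integers(0, 2, n),   # alert concerns a new vs known diagnosis
    "redundant_test": rng.integers(0, 2, n),  # alert arose from a redundant test
})
# Simulated outcome: 1 = timely follow-up lacking within 30 days
logit_p = -3.0 + 2.0 * df["new_diagnosis"] - 1.4 * df["redundant_test"]
df["no_followup"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(df[["new_diagnosis", "redundant_test"]])
fit = sm.Logit(df["no_followup"], X).fit(disp=0)

# Exponentiating coefficients and CI bounds yields ORs with 95% CIs
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```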
Abstract:
OBJECTIVE: Because studies suggest that ultraviolet (UV) radiation modulates the myositis phenotype and Mi-2 autoantigen expression, we conducted a retrospective investigation to determine whether UV radiation may influence the relative prevalence of dermatomyositis and anti-Mi-2 autoantibodies in the US. METHODS: We assessed the relationship between surface UV radiation intensity in the state of residence at the time of onset with the relative prevalence of dermatomyositis and myositis autoantibodies in 380 patients with myositis from referral centers in the US. Myositis autoantibodies were detected by validated immunoprecipitation assays. Surface UV radiation intensity was estimated from UV Index data collected by the US National Weather Service. RESULTS: UV radiation intensity was associated with the relative proportion of patients with dermatomyositis (odds ratio [OR] 2.3, 95% confidence interval [95% CI] 0.9-5.8) and with the proportion of patients expressing anti-Mi-2 autoantibodies (OR 6.0, 95% CI 1.1-34.1). Modeling of these data showed that these associations were confined to women (OR 3.8, 95% CI 1.3-11.0 and OR 17.3, 95% CI 1.8-162.4, respectively) and suggests that sex influences the effects of UV radiation on autoimmune disorders. Significant associations were not observed in men, nor were UV radiation levels related to the presence of antisynthetase or anti-signal recognition particle autoantibodies. CONCLUSION: This first study of the distribution of myositis phenotypes and UV radiation exposure in the US showed that UV radiation may modulate the clinical and immunologic expression of autoimmune disease in women. Further investigation of the mechanisms by which these effects are produced may provide insights into pathogenesis and suggest therapeutic or preventative strategies.
Abstract:
BACKGROUND: Given the fragmentation of outpatient care, timely follow-up of abnormal diagnostic imaging results remains a challenge. We hypothesized that an electronic medical record (EMR) that facilitates the transmission and availability of critical imaging results through either automated notification (alerting) or direct access to the primary report would eliminate this problem. METHODS: We studied critical imaging alert notifications in the outpatient setting of a tertiary care Department of Veterans Affairs facility from November 2007 to June 2008. Tracking software determined whether the alert was acknowledged (ie, health care practitioner/provider [HCP] opened the message for viewing) within 2 weeks of transmission; acknowledged alerts were considered read. We reviewed medical records and contacted HCPs to determine timely follow-up actions (eg, ordering a follow-up test or consultation) within 4 weeks of transmission. Multivariable logistic regression models accounting for clustering effect by HCPs analyzed predictors for 2 outcomes: lack of acknowledgment and lack of timely follow-up. RESULTS: Of 123,638 studies (including radiographs, computed tomographic scans, ultrasonograms, magnetic resonance images, and mammograms), 1196 images (0.97%) generated alerts; 217 (18.1%) of these were unacknowledged. Alerts had a higher risk of being unacknowledged when the ordering HCPs were trainees (odds ratio [OR], 5.58; 95% confidence interval [CI], 2.86-10.89) and when dual-alert (>1 HCP alerted) as opposed to single-alert communication was used (OR, 2.02; 95% CI, 1.22-3.36). Timely follow-up was lacking in 92 (7.7% of all alerts) and was similar for acknowledged and unacknowledged alerts (7.3% vs 9.7%; P = .22). Risk for lack of timely follow-up was higher with dual-alert communication (OR, 1.99; 95% CI, 1.06-3.48) but lower when additional verbal communication was used by the radiologist (OR, 0.12; 95% CI, 0.04-0.38). Nearly all abnormal results lacking timely follow-up at 4 weeks were eventually found to have measurable clinical impact in terms of further diagnostic testing or treatment. CONCLUSIONS: Critical imaging results may not receive timely follow-up actions even when HCPs receive and read results in an advanced, integrated electronic medical record system. A multidisciplinary approach is needed to improve patient safety in this area.
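The methods mention logistic regression "accounting for clustering effect by HCPs". One standard way to do this (an assumption here; the abstract does not specify the exact estimator or software) is a generalized estimating equation with an exchangeable working correlation, sketched below on simulated data:

```python
# Minimal sketch, assuming simulated data: GEE logistic regression with
# alerts clustered within ordering providers (HCPs).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1196  # number of alerts
df = pd.DataFrame({
    "hcp_id": rng.integers(0, 200, n),    # ordering provider = cluster
    "trainee": rng.integers(0, 2, n),     # ordering HCP is a trainee
    "dual_alert": rng.integers(0, 2, n),  # >1 HCP alerted
})
logit_p = -2.0 + 1.7 * df["trainee"] + 0.7 * df["dual_alert"]
df["unacknowledged"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.gee(
    "unacknowledged ~ trainee + dual_alert",
    groups="hcp_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()
print(np.exp(fit.params))      # odds ratios
print(np.exp(fit.conf_int()))  # 95% CIs, robust to within-HCP correlation
```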
Abstract:
INTRODUCTION: Thyroid cancer is the most common endocrine malignancy. The outcomes of patients with relapsed thyroid cancer treated on early-phase clinical trials have not been systematically analyzed. PATIENTS AND METHODS: We reviewed the records of consecutive patients with metastatic thyroid cancer referred to the Phase I Clinical Trials Program from March 2006 to April 2008. Best response was assessed by Response Evaluation Criteria in Solid Tumors. RESULTS: Fifty-six patients were identified. The median age was 55 yr (range 35-79 yr). Of 49 patients evaluable for response, nine (18.4%) had a partial response, and 16 (32.7%) had stable disease for 6 months or longer. The median progression-free survival was 1.12 yr. With a median follow-up of 15.6 months, the 1-yr survival rate was 81%. In univariate analysis, factors predicting shorter survival were anaplastic histology (P = 0.0002) and albumin levels less than 3.5 g/dl (P = 0.05). Among 26 patients with tumor decreases, none died (median follow-up 1.3 yr), whereas 52% of patients with any tumor increase died by 1 yr (P = 0.0001). The median time to failure in our phase I clinical trials was 11.5 months vs. 4.1 months for the previous treatment (P = 0.04). CONCLUSION: Patients with advanced thyroid cancer treated on phase I clinical trials had high rates of partial response and prolonged stable disease. Time to failure was significantly longer on the first phase I trial compared with the prior conventional treatment. Patients with any tumor decrease had significantly longer survival than those with any tumor increase.
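Endpoints such as the median PFS and the 1-yr survival rate above are conventionally read off a Kaplan-Meier curve. A small sketch with the lifelines library on simulated follow-up times (the study's patient-level data are not reproduced here):

```python
# Minimal sketch, assuming simulated data: Kaplan-Meier estimates of
# median survival and the 1-year survival rate.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(2)
n = 56                                  # cohort size in the study
t = rng.exponential(scale=1.6, size=n)  # follow-up time in years (simulated)
observed = rng.random(n) < 0.7          # True = event observed, False = censored

kmf = KaplanMeierFitter()
kmf.fit(t, event_observed=observed)
print("median survival (yr):", kmf.median_survival_time_)
print("1-yr survival:", float(kmf.predict(1.0)))
```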
Abstract:
Currently, there are no molecular biomarkers that guide treatment decisions for patients with head and neck squamous cell carcinoma (HNSCC). Several retrospective studies have evaluated TP53 in HNSCC, and results have suggested that specific mutations are associated with poor outcome. However, there is heterogeneity among these studies in the site and stage of disease of the patients reviewed, the treatments rendered, and the methods of evaluating TP53 mutation. Thus, it remains unclear in which patients and in which clinical settings TP53 mutation is most useful in predicting treatment failure. In the current study, we reviewed the records of a cohort of patients with advanced, resectable HNSCC who received surgery and post-operative radiation (PORT) and had DNA isolated from fresh tumor tissue obtained at the time of surgery. TP53 mutations were identified using Sanger sequencing of exons 2-11 and the associated splice regions of the TP53 gene. We found that the group of patients with either non-disruptive or disruptive TP53 mutations had decreased overall survival and disease-free survival and an increased rate of distant metastasis. When examined as an independent factor, disruptive mutation was strongly associated with the development of distant metastasis. As a second aim of this project, we performed a pilot study examining the utility of the AmpliChip® p53 test as a practical method for TP53 sequencing in the clinical setting. AmpliChip® testing and Sanger sequencing were performed on a separate cohort of patients with HNSCC. Our study demonstrated the ability of the AmpliChip® to call TP53 mutations from a single formalin-fixed paraffin-embedded slide. The results from AmpliChip® testing were identical to those from the Sanger method in 11 of 19 cases, with a higher rate of mutation calls using the AmpliChip® test. TP53 mutation is a potential prognostic biomarker in patients with advanced, resectable HNSCC treated with surgery and PORT. Whether this subgroup of patients could benefit from the addition of concurrent or induction chemotherapy remains to be evaluated in prospective clinical trials. Our pilot study of the p53 AmpliChip® suggests that this could be a practical and reliable method of TP53 analysis in the clinical setting.
Abstract:
Purpose: The objective of this systematic review was to assess and compare the survival and complication rates of implant-supported prostheses reported in studies published in the year 2000 and before with those reported in studies published after the year 2000. Materials and Methods: Three electronic searches complemented by manual searching were conducted to identify 139 prospective and retrospective studies on implant-supported prostheses. The included studies were divided into two groups: a group of 31 older studies published in the year 2000 or before, and a group of 108 newer studies published after the year 2000. Survival and complication rates were calculated using Poisson regression models, and multivariable robust Poisson regression was used to formally compare the outcomes of older and newer studies. Results: The 5-year survival rate of implant-supported prostheses was significantly higher in newer studies than in older studies. The overall survival rate increased from 93.5% to 97.1%. The survival rate for cemented prostheses increased from 95.2% to 97.9%; for screw-retained reconstructions, from 77.6% to 96.8%; for implant-supported single crowns, from 92.6% to 97.2%; and for implant-supported fixed dental prostheses (FDPs), from 93.5% to 96.4%. The incidence of esthetic complications decreased in more recent studies compared with older ones, but the incidence of biologic complications was similar. The results for technical complications were inconsistent. There was a significant reduction in abutment or screw loosening for implant-supported FDPs. On the other hand, the total number of technical complications and the incidence of fracture of the veneering material were significantly increased in the newer studies. The increased complication rate probably reflects more detailed reporting of minor complications in the newer publications. Conclusions: The results of the present systematic review demonstrate a positive learning curve in implant dentistry, reflected in the higher survival rates and lower complication rates reported in more recent clinical studies. The incidence of esthetic, biologic, and technical complications, however, is still high. Hence, it is important to identify these complications and their etiology to make implant treatment even more predictable in the future.
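Reviews of this kind commonly model failures as Poisson counts with total prosthesis-years as exposure and then convert the estimated annual failure rate into a 5-year survival via S(5) = exp(-5 * rate). A sketch of that arithmetic with invented counts (the review's raw data are not shown in the abstract):

```python
# Minimal sketch, assuming invented counts: an intercept-only Poisson rate
# model with log(exposure) offset, converted to a 5-year survival estimate.
import numpy as np
import statsmodels.api as sm

failures = np.array([4, 1, 7, 2, 3])            # failures per study (invented)
exposure = np.array([520, 310, 880, 450, 600])  # prosthesis-years per study (invented)

X = np.ones((len(failures), 1))                 # intercept only
fit = sm.GLM(failures, X, family=sm.families.Poisson(),
             offset=np.log(exposure)).fit()

rate = np.exp(fit.params[0])                    # failures per prosthesis-year
print("annual failure rate:", rate)
print("estimated 5-year survival:", np.exp(-5 * rate))
```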
Abstract:
To test the hypothesis that survival is prolonged in glioblastoma cases with an increased subventricular zone (SVZ) radiation dose, we evaluated sixty glioblastoma cases previously treated with adjuvant radiotherapy and temozolomide. Ipsilateral, contralateral and bilateral SVZs were contoured and their doses were retrospectively evaluated. Median follow-up, progression-free survival (PFS) and overall survival (OS) were 24.5, 8.5 and 19.3 months, respectively. Log-rank tests showed a statistically significant association between a contralateral SVZ (cSVZ) dose > 59.2 Gy (75th percentile) and poorer median PFS (10.37 [95% CI 8.37-13.53] vs 7.1 [95% CI 3.5-8.97] months, p = 0.009). A cSVZ dose > 59.2 Gy was associated with poor OS in the subgroup with subtotal resection/biopsy (HR: 4.83 [95% CI 1.71-13.97], p = 0.004). A high ipsilateral SVZ dose of > 62.25 Gy (75th percentile) was associated with poor PFS in both the subgroup with high performance status (HR: 2.58 [95% CI 1.03-6.05], p = 0.044) and the subgroup with SVZ without tumoral contact (HR: 10.57 [95% CI 2.04-49], p = 0.008). The effect of high cSVZ dose on PFS lost its statistical significance in multivariate Cox regression analysis. Our results contradict previous publications. Changing clinical practice on the basis of retrospective studies that are not even consistent with one another would be dangerous. Carefully designed prospective randomized studies are needed to evaluate any impact of radiation to the SVZ in glioblastoma.
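For readers unfamiliar with the two analyses named above, a compact sketch with the lifelines library on simulated data shows a log-rank comparison of PFS by cSVZ dose group followed by a multivariate Cox model; the variable names and data are illustrative assumptions, not the study's:

```python
# Minimal sketch, assuming simulated data: log-rank test by dose group,
# then a multivariate Cox proportional-hazards model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(3)
n = 60  # cases in the study
df = pd.DataFrame({
    "pfs_months": rng.exponential(9.0, n),
    "event": rng.random(n) < 0.8,                # progression observed
    "high_csvz_dose": rng.integers(0, 2, n),     # cSVZ dose > 59.2 Gy
    "subtotal_resection": rng.integers(0, 2, n),
})

high = df[df["high_csvz_dose"] == 1]
low = df[df["high_csvz_dose"] == 0]
lr = logrank_test(high["pfs_months"], low["pfs_months"],
                  event_observed_A=high["event"],
                  event_observed_B=low["event"])
print("log-rank p:", lr.p_value)

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="event")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratios and p-values
```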
Abstract:
PURPOSE The objectives of this systematic review are (1) to quantitatively estimate the esthetic outcomes of implants placed in postextraction sites, and (2) to evaluate the influence of simultaneous bone augmentation procedures on these outcomes. MATERIALS AND METHODS Electronic and manual searches of the dental literature were performed to collect information on esthetic outcomes based on objective criteria with implants placed after extraction of maxillary anterior and premolar teeth. All levels of evidence were accepted (case series studies required a minimum of 5 cases). RESULTS From 1,686 titles, 114 full-text articles were evaluated and 50 records included for data extraction. The included studies reported on single-tooth implants adjacent to natural teeth, with no studies on multiple missing teeth identified (6 randomized controlled trials, 6 cohort studies, 5 cross-sectional studies, and 33 case series studies). Considerable heterogeneity in study design was found. A meta-analysis of controlled studies was not possible. The available evidence suggests that esthetic outcomes, determined by esthetic indices (predominantly the pink esthetic score) and positional changes of the peri-implant mucosa, may be achieved for single-tooth implants placed after tooth extraction. Immediate (type 1) implant placement, however, is associated with a greater variability in outcomes and a higher frequency of recession of > 1 mm of the midfacial mucosa (eight studies; range 9% to 41% and median 26% of sites, 1 to 3 years after placement) compared to early (type 2 and type 3) implant placement (2 studies; no sites with recession > 1 mm). In two retrospective studies of immediate (type 1) implant placement with bone graft, the facial bone wall was not detectable on cone beam CT in 36% and 57% of sites. These sites had more recession of the midfacial mucosa compared to sites with detectable facial bone. Two studies of early implant placement (types 2 and 3) combined with simultaneous bone augmentation with GBR (contour augmentation) demonstrated a high frequency (above 90%) of facial bone wall visible on CBCT. Recent studies of immediate (type 1) placement imposed specific selection criteria, including thick tissue biotype and an intact facial socket wall, to reduce esthetic risk. There were no specific selection criteria for early (type 2 and type 3) implant placement. CONCLUSIONS Acceptable esthetic outcomes may be achieved with implants placed after extraction of teeth in the maxillary anterior and premolar areas of the dentition. Recession of the midfacial mucosa is a risk with immediate (type 1) placement. Further research is needed to investigate the most suitable biomaterials to reconstruct the facial bone and the relationship between long-term mucosal stability and presence/absence of the facial bone, the thickness of the facial bone, and the position of the facial bone crest.
Abstract:
Renal cell carcinoma (RCC) extension into the renal vein or the inferior vena cava occurs in 4%-10% of all kidney cancer cases. This entity shows a wide range of different clinical and surgical scenarios, making natural history and oncological outcomes variable and poorly characterized. Infrequency and variability make it necessary to share the experience from different institutions to properly analyze surgical outcomes in this setting. The International Renal Cell Carcinoma-Venous Tumor Thrombus Consortium was created to answer the questions generated by competing results from different retrospective studies in RCC with venous extension on current controversial topics. The aim of this article is to summarize the experience gained from the analysis of the world's largest cohort of patients in this unique setting to date.
Abstract:
PURPOSE Prevention of psychosis requires both the presence of clinical high risk (CHR) criteria and early help-seeking. Previous retrospective studies of the duration of untreated illness (i.e. prodrome plus psychosis) did not distinguish between prodromal states with and without CHR symptoms. Therefore, we examined the occurrence of CHR symptoms and first help-seeking, considering effects of age at illness onset. METHODS Adult patients first admitted for psychosis (n = 126) were retrospectively assessed for early course of illness and characteristics of first help-seeking. RESULTS One hundred and nine patients reported a prodrome, 58 with CHR symptoms. In patients with an early illness onset before age 18 (n = 45), the durations of both illness and psychosis were prolonged, and CHR symptoms were more frequent (68.9% vs. 33.3%), compared with those with adult illness onset. Only 29 patients reported help-seeking in the prodrome; this was mainly self-initiated, especially in patients with an early illness onset. After the onset of first psychotic symptoms, help-seeking was mainly initiated by others. State- and age-independently, mental health professionals were the main first point of contact (54.0%). CONCLUSIONS Adult first-admission psychosis patients with an early, insidious onset of symptoms before age 18 were more likely to recall CHR symptoms as part of their prodrome. According to current psychosis-risk criteria, these CHR symptoms would, in principle, have allowed the early detection of psychosis. Furthermore, compared with patients with an adult illness onset, patients with an early illness onset were also more likely to seek help on their own account. Thus, future awareness strategies to improve CHR detection might best be targeted at young persons and self-perceived subtle symptoms.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

                              Class I           Class II         Class III              Class IV
Blood loss (ml)               Up to 750         750–1500         1500–2000              >2000
Blood loss (% blood volume)   Up to 15%         15–30%           30–40%                 >40%
Pulse rate (bpm)              <100              100–120          120–140                >140
Systolic blood pressure       Normal            Normal           Decreased              Decreased
Pulse pressure                Normal or ↑       Decreased        Decreased              Decreased
Respiratory rate              14–20             20–30            30–40                  >35
Urine output (ml/h)           >30               20–30            5–15                   Negligible
CNS/mental status             Slightly anxious  Mildly anxious   Anxious, confused      Confused, lethargic
Initial fluid replacement     Crystalloid       Crystalloid      Crystalloid and blood  Crystalloid and blood

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated according to the injuries of nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the value of 140 min⁻¹ postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily proceed in parallel, as suggested by the ATLS shock classification [4] and [5]. In all these studies, injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II, and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint used in the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss into the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or of >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement of fluid substitution. This suggests that allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation. Nevertheless, the classification was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

Normal (no haemorrhage): normal vitals; response to fluid bolus not applicable; no estimated blood loss.
Class I (mild): normal vitals; responds to a 1000 ml fluid bolus, no further fluid required; estimated blood loss up to 750 ml.
Class II (moderate): HR >100 with SBP >90 mmHg; responds to a fluid bolus, no further fluid required; estimated blood loss 750–1500 ml.
Class III (severe): SBP <90 mmHg; requires repeated fluid boluses; estimated blood loss 1500–2000 ml.
Class IV (moribund): SBP <90 mmHg or imminent arrest; declining SBP despite fluid boluses; estimated blood loss >2000 ml.

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first; hypotension and bradycardia occur only later. An increasing heart rate, an increasing diastolic blood pressure or a decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern, mentioning the exceptions and pitfalls at a second stage. The ATLS shock classification is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension) as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage (advanced age, athletes, pregnancy, medications and pacemakers) and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which has been used for decades and is the basis for a number of questions in the written test of the ATLS student course, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with the clinical examination and laboratory findings (e.g. base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement of fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but they must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
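As a purely illustrative footnote to Table 1 (not part of the ATLS curriculum or any clinical tool), the pulse-rate and systolic-pressure cut-offs quoted above can be written out as a tiny classifier; real assessment of course uses the full table plus the clinical context discussed here:

```python
# Illustrative only: map pulse rate and systolic blood pressure to the
# ATLS classes of Table 1 (9th-edition cut-offs). Not a clinical tool.
def atls_shock_class(pulse_bpm: float, sbp_decreased: bool) -> str:
    if not sbp_decreased:                   # classes I-II: normal systolic pressure
        return "I" if pulse_bpm < 100 else "II"
    return "III" if pulse_bpm <= 140 else "IV"  # classes III-IV: decreased SBP

# Example: 125 bpm with decreased systolic pressure -> class III
print(atls_shock_class(125, sbp_decreased=True))
```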
Abstract:
OBJECTIVE To evaluate the risk of failure of fixed orthodontic retention protocols. DATA Screening for inclusion eligibility, quality assessment of studies and data extraction were performed independently by two authors. SOURCES The electronic databases MEDLINE, EMBASE and CENTRAL were searched with no restrictions on publication date or language using detailed strategies. The main outcome assessed was bond failure. STUDY SELECTION Twenty-seven studies satisfied the inclusion criteria. Randomised controlled trials and prospective studies were evaluated according to the Cochrane risk of bias tool. Retrospective studies were graded employing the predetermined criteria of Bondemark. RESULTS Nine randomised controlled trials, four of which were of low quality, were identified. Six studies had a prospective design and all were of low quality. Twelve studies were retrospective. The quality of trial reporting was poor in general. Four studies assessing glass-fibre retainers (three RCTs and one prospective) reported bond failures from 11% to 71%, whereas twenty studies evaluating multistranded retainers (nine RCTs, two prospective and nine retrospective) reported failures ranging from 12% to 50%. Only one comparison could be performed: multistranded wires vs. polyethylene woven ribbon (RR: 1.74; 95% CI: 0.45-6.73; p = 0.42). CONCLUSION The quality of the available evidence is low. No conclusive evidence was found to guide orthodontists in the selection of the best protocol. CLINICAL SIGNIFICANCE Although fixed orthodontic retainers have been used for years in clinical practice, the selection of the best treatment protocol remains a subjective issue. The available studies, and their synthesis, cannot provide reliable evidence in this field.
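The single pooled comparison above reports a risk ratio with a 95% CI; the standard log-scale arithmetic behind such a figure is sketched below on invented counts (the review's underlying 2x2 table is not given in the abstract):

```python
# Minimal sketch, assuming invented counts: risk ratio with a Wald 95% CI
# computed on the log scale.
import math

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """RR for a/n1 failures vs b/n2 failures, with a 95% CI."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    lo = math.exp(math.log(rr) - z * se_log_rr)
    hi = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lo, hi

# e.g. 7/50 failures (multistranded wire) vs 4/50 (polyethylene ribbon)
print(risk_ratio_ci(7, 50, 4, 50))  # RR ~1.75 with a wide CI
```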