50 results for All-to-all comparison
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
OBJECTIVE To compare the vascular healing process between the sirolimus-eluting NEVO and the everolimus-eluting Xience stent by optical coherence tomography (OCT) at 1-year follow-up. BACKGROUND The presence of a durable polymer on a drug-eluting metallic stent may be the basis of an inflammatory reaction with an abnormal healing response. The NEVO stent, which uses a bioresorbable polymer in a reservoir technology, may overcome this problem. METHODS All consecutive patients who received NEVO or Xience stent implantation between September 2010 and October 2010 in our institution were included. Vascular healing was assessed at 1 year as the percentage of uncovered struts, neointimal thickness (NIT), in-stent/stent area obstruction, and pattern of neointima. RESULTS A total of 47 patients (2:1 randomization, n = 32 NEVO, n = 15 Xience) were included. Eighteen patients underwent angiographic follow-up (eight patients with nine lesions for NEVO vs. 10 patients with 11 lesions for Xience). Angiographic late loss was numerically higher but not statistically different in NEVO compared with Xience treated lesions (0.38 ± 0.47 mm vs. 0.18 ± 0.27 mm; P = 0.171). OCT analysis of 4,912 struts demonstrated similar rates of uncovered struts (0.5 vs. 0.7%, P = 0.462), and numerically higher mean NIT (177.76 ± 87.76 µm vs. 132.22 ± 30.91 µm; P = 0.170) and in-stent/stent area obstruction (23.02 ± 14.74% vs. 14.17 ± 5.94%, P = 0.120) in NEVO as compared with Xience. CONCLUSION The NEVO stent with reservoir technology seems to exhibit more neointimal proliferation than the Xience stent. The findings of our study, which currently represent the only existing data on this reservoir technology, need to be confirmed in a larger population.
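As a rough illustration of how the strut-level healing metrics reported above are typically aggregated, the sketch below computes the percentage of uncovered struts and the in-stent area obstruction from per-strut coverage flags and per-frame lumen/stent areas. The data layout and function names are hypothetical; the formulas (uncovered % = uncovered struts / total struts × 100; area obstruction = (stent area − lumen area) / stent area × 100) follow standard OCT definitions, not code from this paper.

```python
from statistics import mean

def pct_uncovered(strut_covered_flags):
    """Percentage of struts without neointimal coverage.

    strut_covered_flags: list of bools, one per analysed strut.
    """
    n = len(strut_covered_flags)
    return 100.0 * sum(1 for c in strut_covered_flags if not c) / n

def area_obstruction(stent_areas_mm2, lumen_areas_mm2):
    """Mean in-stent area obstruction (%) across OCT cross-sections."""
    per_frame = [100.0 * (s - l) / s
                 for s, l in zip(stent_areas_mm2, lumen_areas_mm2)]
    return mean(per_frame)

# Hypothetical example: 4 struts and 3 cross-sections of one lesion
print(pct_uncovered([True, True, False, True]))            # 25.0
print(area_obstruction([8.1, 7.9, 8.4], [6.2, 6.5, 6.9]))  # ~19.7
```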
Abstract:
This study sought to compare the neointimal response of metallic everolimus drug-eluting stents (DES) and polymeric everolimus bioresorbable vascular scaffolds (BVS) by optical coherence tomography at 1 year.
Abstract:
Small-bowel MRI based on contrast-enhanced T1-weighted sequences has been challenged by diffusion-weighted imaging (DWI) for detection of inflammatory bowel lesions and complications in patients with Crohn disease.
Abstract:
To analyse and compare the changes in the various optical coherence tomography (OCT), echogenicity, and intravascular ultrasound virtual histology (VH) degradation parameters of the everolimus-eluting bioresorbable scaffold (ABSORB) during the first 12 months after ABSORB implantation. In the ABSORB study, changes in the appearance of the ABSORB scaffold were monitored over time using various intracoronary imaging modalities. The scaffold struts exhibited a progressive change in their black core area on OCT, in their ultrasound-derived grey-level intensity quantified by echogenicity, and in their backscattered ultrasound signal, identified as "pseudo dense-calcium" (DC) by VH.
Abstract:
The ATLS program of the American College of Surgeons is probably the most important globally active training organization dedicated to improving trauma management. Detection of acute haemorrhagic shock is one of the key issues in clinical practice and thus also in medical teaching. In this issue of the journal, William Schulz and Ian McConachrie critically review the ATLS shock classification (Table 1), which has been criticized after several attempts at validation have failed [1]. The main problem is that distinct ranges of heart rate are related to ranges of uncompensated blood loss, and that the heart rate decrease observed in severe haemorrhagic shock is ignored [2].

Table 1. Estimated blood loss based on the patient's initial presentation (ATLS Student Course Manual, 9th Edition, American College of Surgeons 2012).

|  | Class I | Class II | Class III | Class IV |
| --- | --- | --- | --- | --- |
| Blood loss (ml) | Up to 750 | 750–1500 | 1500–2000 | >2000 |
| Blood loss (% blood volume) | Up to 15% | 15–30% | 30–40% | >40% |
| Pulse rate (bpm) | <100 | 100–120 | 120–140 | >140 |
| Systolic blood pressure | Normal | Normal | Decreased | Decreased |
| Pulse pressure | Normal or ↑ | Decreased | Decreased | Decreased |
| Respiratory rate | 14–20 | 20–30 | 30–40 | >35 |
| Urine output (ml/h) | >30 | 20–30 | 5–15 | Negligible |
| CNS/mental status | Slightly anxious | Mildly anxious | Anxious, confused | Confused, lethargic |
| Initial fluid replacement | Crystalloid | Crystalloid | Crystalloid and blood | Crystalloid and blood |

In a retrospective evaluation of the Trauma Audit and Research Network (TARN) database, blood loss was estimated from the injuries in nearly 165,000 adult trauma patients, and each patient was allocated to one of the four ATLS shock classes [3]. Although heart rate increased and systolic blood pressure decreased from class I to class IV, respiratory rate and GCS were similar. The median heart rate in class IV patients was substantially lower than the 140 min⁻¹ postulated by ATLS. Moreover, deterioration of the different parameters does not necessarily proceed in parallel, as the ATLS shock classification suggests [4] and [5]. In all these studies, the injury severity score (ISS) and mortality increased with increasing shock class [3] and with increasing heart rate and decreasing blood pressure [4] and [5]. This supports the general concept that the higher the heart rate and the lower the blood pressure, the sicker the patient. A prospective study attempted to validate a shock classification derived from the ATLS shock classes [6]. The authors used a combination of heart rate, blood pressure, clinically estimated blood loss and response to fluid resuscitation to classify trauma patients (Table 2) [6]. In their initial assessment of 715 predominantly blunt trauma patients, 78% were classified as normal (Class 0), 14% as Class I, 6% as Class II and only 1% each as Class III and Class IV. This corresponds to the results of the previous retrospective studies [4] and [5]. The main endpoint of the prospective study was therefore the presence or absence of significant haemorrhage, defined as chest tube drainage >500 ml, evidence of >500 ml of blood loss in the peritoneum, retroperitoneum or pelvic cavity on CT scan, or requirement of any blood transfusion or >2000 ml of crystalloid. Because of the low prevalence of class II or higher grades, statistical evaluation was limited to a comparison between Class 0 and Classes I–IV combined.
As in the retrospective studies, Lawton did not find a statistically significant difference in heart rate or blood pressure among the five groups either, although there was a tendency towards a higher heart rate in Class II patients. Apparently, classification during the primary survey did not rely on vital signs but considered the rather soft criterion of "clinical estimation of blood loss" and the requirement for fluid substitution. This suggests that the allocation of an individual patient to a shock class was probably more an intuitive decision than an objective calculation. Nevertheless, it was a significant predictor of ISS [6].

Table 2. Shock grade categories in the prospective validation study (Lawton, 2014) [6].

|  | Normal (no haemorrhage) | Class I (mild) | Class II (moderate) | Class III (severe) | Class IV (moribund) |
| --- | --- | --- | --- | --- | --- |
| Vitals | Normal | Normal | HR >100 with SBP >90 mmHg | SBP <90 mmHg | SBP <90 mmHg or imminent arrest |
| Response to fluid bolus (1000 ml) | NA | Yes, no further fluid required | Yes, no further fluid required | Requires repeated fluid boluses | Declining SBP despite fluid boluses |
| Estimated blood loss (ml) | None | Up to 750 | 750–1500 | 1500–2000 | >2000 |

What does this mean for clinical practice and medical teaching? All these studies illustrate the difficulty of validating a useful and accepted general physiologic concept of the organism's response to fluid loss: a decrease in cardiac output, an increase in heart rate and a decrease in pulse pressure occur first; hypotension and bradycardia occur only later. Increasing heart rate, increasing diastolic blood pressure or decreasing systolic blood pressure should make any clinician consider hypovolaemia first, because it is treatable and deterioration of the patient is preventable. This is true for the patient on the ward, the sedated patient in the intensive care unit and the anaesthetized patient in the OR. We will therefore continue to teach this typical pattern, but will also continue to mention the exceptions and pitfalls at a second stage. The ATLS shock classification is primarily used to illustrate the typical pattern of acute haemorrhagic shock (tachycardia and hypotension), as opposed to the Cushing reflex (bradycardia and hypertension) in severe head injury and intracranial hypertension, or to neurogenic shock in acute tetraplegia or high paraplegia (relative bradycardia and hypotension). Schulz and McConachrie nicely summarize the various confounders and exceptions to the general pattern and explain why, in clinical reality, patients often do not present with the "typical" pictures of our textbooks [1]. ATLS also refers to the pitfalls in the signs of acute haemorrhage: advanced age, athletes, pregnancy, medications and pacemakers, and explicitly states that individual subjects may not follow the general pattern. Obviously, the ATLS shock classification, which is the basis for a number of questions in the written test of the ATLS student course and which has been used for decades, probably needs modification and cannot be applied literally in clinical practice. The European Trauma Course, another important trauma training program, uses the same parameters to estimate blood loss, together with clinical examination and laboratory findings (e.g., base deficit and lactate), but does not use a shock classification tied to absolute values. In conclusion, the typical physiologic response to haemorrhage as illustrated by the ATLS shock classes remains an important issue in clinical practice and in teaching.
The estimation of the severity of haemorrhage in the initial assessment of trauma patients is not (and never was) based solely on vital signs; it also includes the pattern of injuries, the requirement for fluid substitution and potential confounders. Vital signs are not obsolete, especially in the course of treatment, but must be interpreted in view of the clinical context. Conflict of interest: none declared. The author is a member of the Swiss national ATLS core faculty.
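For illustration only, the sketch below encodes part of the Table 1 thresholds as a naive lookup that assigns a shock class from pulse rate and systolic blood pressure. It is a hypothetical teaching toy, not a clinical tool, and (as the editorial stresses) real patients often do not fit these ranges; a full assessment would weigh all rows of the table together.

```python
def atls_class(pulse_bpm: float, sbp_normal: bool) -> int:
    """Naive ATLS shock class from Table 1 vitals (teaching toy only).

    Uses only pulse rate and whether systolic blood pressure is normal;
    real ATLS assessment considers all parameters of the table together.
    """
    if pulse_bpm > 140 and not sbp_normal:
        return 4
    if pulse_bpm > 120 and not sbp_normal:
        return 3
    if pulse_bpm > 100:
        return 2
    return 1

# A patient with HR 130/min and decreased SBP maps to class III
print(atls_class(130, sbp_normal=False))  # 3
```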
Abstract:
In many cases it is not possible to hold motorists to account for considerably exceeding the speed limit, because they deny being the driver on the speed-check photograph. An anthropological comparison of facial features using a photo-to-photo comparison can be very difficult, depending on the quality of the photographs. One difficulty of that analysis method is that the comparison photographs of the presumed driver are taken with a different camera or camera lens and from a different angle than the speed-check photo. Taking a comparison photograph with exactly the same camera setup is almost impossible, so only an imprecise comparison of the individual facial features is possible. The geometry and position of each facial feature, for example the distance between the eyes or the positions of the ears, cannot be taken into consideration. We applied a new method using 3D laser scanning, optical surface digitalization and photogrammetric calculation of the speed-check photo, which enables a geometric comparison. Thus, the influence of the focal length and the distortion of the objective lens are eliminated, and the precise position and viewing direction of the speed-check camera are calculated. Even with low-quality images or when the face of the driver is partly hidden, this method delivers good results. This new method, Geometric Comparison, is evaluated and validated in a dedicated study, which is described in this article.
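The core geometric step, projecting 3D facial landmarks from the laser scan into the speed-check image once the camera's position, orientation and focal length have been recovered photogrammetrically, can be sketched as below. The landmark coordinates, pose values and camera intrinsics are hypothetical placeholders; this does not reproduce the paper's actual pipeline, only the standard pinhole projection it relies on.

```python
import numpy as np
import cv2  # OpenCV

# Hypothetical 3D facial landmarks from the laser scan (mm, scanner frame)
landmarks_3d = np.array([
    [-32.0,  35.0, 10.0],   # right eye corner
    [ 32.0,  35.0, 10.0],   # left eye corner
    [  0.0, -20.0, 25.0],   # nose tip
], dtype=np.float64)

# Hypothetical camera pose and intrinsics recovered photogrammetrically
rvec = np.array([0.05, -0.10, 0.0])   # rotation (Rodrigues vector)
tvec = np.array([0.0, 0.0, 3500.0])   # translation: ~3.5 m from camera
K = np.array([[2400.0,    0.0, 960.0],   # focal length / principal point (px)
              [   0.0, 2400.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)                    # assume lens distortion already corrected

# Project the scan landmarks into the speed-check image plane
pts_2d, _ = cv2.projectPoints(landmarks_3d, rvec, tvec, K, dist)
print(pts_2d.reshape(-1, 2))  # compare with landmarks measured in the photo
```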
Abstract:
A new scoring system, the Basic Erosive Wear Examination (BEWE), has been designed to provide a simple tool for use in general practice and to allow comparison with other, more discriminative indices. The most severely affected surface in each sextant is recorded with a four-level score, and the cumulative score is classified and matched to risk levels which guide the management of the condition. The BEWE allows re-analysis and integration of results from existing studies and, in time, should initiate a consensus within the scientific community and so avoid continued proliferation of indices. Finally, this process should lead to the development of an internationally accepted, standardised and validated index. The BEWE further aims to increase awareness of tooth erosion amongst clinicians and general dental practitioners and to provide a guide to its management.
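A minimal sketch of the BEWE bookkeeping described above: record the highest surface score (0-3) per sextant and sum the six values. The cumulative-score risk bands used here (≤2 none, 3-8 low, 9-13 medium, ≥14 high) follow commonly published BEWE guidance, but treat them as an assumption rather than a restatement of this paper.

```python
def bewe_cumulative(sextant_scores):
    """Sum of the highest erosive-wear score (0-3) per sextant."""
    assert len(sextant_scores) == 6
    assert all(0 <= s <= 3 for s in sextant_scores)
    return sum(sextant_scores)

def risk_level(cumulative):
    """Risk band from the cumulative score (assumed published cut-offs)."""
    if cumulative <= 2:
        return "none"
    if cumulative <= 8:
        return "low"
    if cumulative <= 13:
        return "medium"
    return "high"

scores = [1, 0, 2, 1, 1, 2]   # hypothetical examination
print(bewe_cumulative(scores), risk_level(bewe_cumulative(scores)))  # 7 low
```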
Abstract:
The rise of evidence-based medicine as well as important progress in statistical methods and computational power have led to a second birth of the >200-year-old Bayesian framework. The use of Bayesian techniques, in particular in the design and interpretation of clinical trials, offers several substantial advantages over the classical statistical approach. First, in contrast to classical statistics, Bayesian analysis allows a direct statement regarding the probability that a treatment was beneficial. Second, Bayesian statistics allow the researcher to incorporate any prior information in the analysis of the experimental results. Third, Bayesian methods can efficiently handle complex statistical models, which are suited for advanced clinical trial designs. Finally, Bayesian statistics encourage a thorough consideration and presentation of the assumptions underlying an analysis, which enables the reader to fully appraise the authors' conclusions. Both Bayesian and classical statistics have their respective strengths and limitations and should be viewed as being complementary to each other; we do not attempt to make a head-to-head comparison, as this is beyond the scope of the present review. Rather, the objective of the present article is to provide a nonmathematical, reader-friendly overview of the current practice of Bayesian statistics coupled with numerous intuitive examples from the field of oncology. It is hoped that this educational review will be a useful resource to the oncologist and result in a better understanding of the scope, strengths, and limitations of the Bayesian approach.
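To make the first two advantages concrete, here is a toy Beta-Binomial calculation (not from the review itself): a Beta prior encoding previous information is updated with hypothetical trial data, and the posterior yields a direct probability that the response rate exceeds a target value, a statement classical p-values cannot make.

```python
from scipy import stats

# Hypothetical prior: previous data suggest ~30% response (Beta(3, 7))
a_prior, b_prior = 3, 7

# Hypothetical trial result: 14 responders out of 30 patients
responders, n = 14, 30

# Conjugate update: posterior is Beta(a + responders, b + non-responders)
posterior = stats.beta(a_prior + responders, b_prior + (n - responders))

# Direct probability statement: P(response rate > 35% | data, prior)
print(posterior.sf(0.35))   # roughly 0.8 with these invented numbers
```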
Abstract:
BACKGROUND Trastuzumab has established efficacy against breast cancer with overexpression or amplification of the HER2 oncogene. The standard of care is 1 year of adjuvant trastuzumab, but the optimum duration of treatment is unknown. We compared 2 years of treatment with trastuzumab with 1 year of treatment, and updated the comparison of 1 year of trastuzumab versus observation at a median follow-up of 8 years, for patients enrolled in the HERceptin Adjuvant (HERA) trial. METHODS The HERA trial is an international, multicentre, randomised, open-label, phase 3 trial comparing treatment with trastuzumab for 1 and 2 years with observation after standard neoadjuvant chemotherapy, adjuvant chemotherapy, or both in 5102 patients with HER2-positive early breast cancer. The primary endpoint was disease-free survival. The comparison of 2 years versus 1 year of trastuzumab treatment involved a landmark analysis of 3105 patients who were disease-free 12 months after randomisation to one of the trastuzumab groups, and was planned after observing at least 725 disease-free survival events. The updated intention-to-treat comparison of 1 year trastuzumab treatment versus observation alone in 3399 patients at a median follow-up of 8 years (range 0-10) is also reported. This study is registered with ClinicalTrials.gov, number NCT00045032. FINDINGS We recorded 367 events of disease-free survival in 1552 patients in the 1 year group and 367 events in 1553 patients in the 2 year group (hazard ratio [HR] 0·99, 95% CI 0·85-1·14, p=0·86). Grade 3-4 adverse events and decreases in left ventricular ejection fraction during treatment were reported more frequently in the 2 year treatment group than in the 1 year group (342 [20·4%] vs 275 [16·3%] grade 3-4 adverse events, and 120 [7·2%] vs 69 [4·1%] decreases in left ventricular ejection fraction, respectively). HRs for a comparison of 1 year of trastuzumab treatment versus observation were 0·76 (95% CI 0·67-0·86, p<0·0001) for disease-free survival and 0·76 (0·65-0·88, p=0·0005) for overall survival, despite crossover of 884 (52%) patients from the observation group to trastuzumab therapy. INTERPRETATION 2 years of adjuvant trastuzumab is not more effective than is 1 year of treatment for patients with HER2-positive early breast cancer. 1 year of treatment provides a significant disease-free and overall survival benefit compared with observation and remains the standard of care. FUNDING F Hoffmann-La Roche (Roche).
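The landmark analysis mentioned above can be sketched as follows: only patients still disease-free at the 12-month landmark enter the comparison, and follow-up time is re-measured from the landmark. The file name, column names and the use of the lifelines package are illustrative assumptions, not the trial's actual analysis code.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical columns: months = disease-free survival time from
# randomisation, event = 1 if a DFS event occurred, two_year_arm = 1/0
df = pd.read_csv("hera_like_data.csv")  # placeholder file name

landmark = 12  # months
# Keep only patients still event-free and under follow-up at 12 months,
# then restart the clock at the landmark
at_risk = df[df["months"] >= landmark].copy()
at_risk["months_from_landmark"] = at_risk["months"] - landmark

cph = CoxPHFitter()
cph.fit(at_risk[["months_from_landmark", "event", "two_year_arm"]],
        duration_col="months_from_landmark", event_col="event")
cph.print_summary()  # exp(coef) for two_year_arm is the hazard ratio
```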
Abstract:
Bovine tuberculosis (bTB) is a (re-)emerging disease in European countries, including Switzerland. This study assesses the seroprevalence of infection with Mycobacterium bovis and closely related agents in wild boar (Sus scrofa) in Switzerland, because wild boar are potential maintenance hosts of these pathogens. The study employs harmonised laboratory methods to facilitate comparison with the situation in other countries. Eighteen out of 743 blood samples tested seropositive (2.4%, CI: 1.5-3.9%) by ELISA, and the results for 61 animals previously assessed using culture and PCR indicated that this serological test was not 100% specific for M. bovis, cross-reacting with M. microti. Nevertheless, serology appears to be an appropriate test methodology in the harmonisation of wild boar testing throughout Europe. In accordance with previous findings, the low seroprevalence found in wild boar suggests wildlife is an unlikely source of the M. bovis infections recently detected in cattle in Switzerland. This finding contrasts with the epidemiological situation pertaining in southern Spain.
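As a quick check of the seroprevalence figure quoted above (18 positives out of 743), the snippet below computes a 95% Wilson score interval with statsmodels; it reproduces roughly the reported 1.5-3.9% (the paper's exact interval method is not stated here, so the match is approximate).

```python
from statsmodels.stats.proportion import proportion_confint

positives, n = 18, 743
prevalence = positives / n
low, high = proportion_confint(positives, n, alpha=0.05, method="wilson")
print(f"{100 * prevalence:.1f}% (95% CI {100 * low:.1f}-{100 * high:.1f}%)")
# -> 2.4% (95% CI 1.5-3.8%), close to the reported 1.5-3.9%
```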
Abstract:
Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström’s sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St–Co, Co–St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St–Co than for Co–St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.
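For readers unfamiliar with the difference limen (DL) on which the SPE is defined, the sketch below fits a cumulative-Gaussian psychometric function to hypothetical "comparison judged longer" proportions and takes half the 25-75% span as the DL; the SPE would then be DL(St-Co) minus DL(Co-St). The data values are invented and the fitting convention is a common choice, not taken from this study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def psychometric(x, mu, sigma):
    """P('comparison longer') modelled as a cumulative Gaussian."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical comparison durations (ms) and proportion 'longer' responses
comp_ms = np.array([700, 800, 900, 1000, 1100, 1200, 1300])
p_longer = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])

(mu, sigma), _ = curve_fit(psychometric, comp_ms, p_longer, p0=[1000, 100])

pse = mu                                            # point of subjective equality
dl = (norm.ppf(0.75) - norm.ppf(0.25)) * sigma / 2  # half the 25-75% span
print(f"PSE = {pse:.0f} ms, DL = {dl:.0f} ms")
```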
Abstract:
The aim of this study was to investigate the impact of patient and lesion complexity on outcomes with newer-generation zotarolimus-eluting stents (ZES) and everolimus-eluting stents (EES).
Abstract:
To compare the MANKIN and OARSI cartilage histopathology assessment systems using human articular cartilage from a large number of donors across the adult age spectrum representing all levels of cartilage degradation.
Abstract:
AIMS: Second-generation everolimus-eluting stents (EES) are safer and more effective than first-generation paclitaxel-eluting stents (PES). Third-generation biolimus-eluting stents (BES) have been found to be non-inferior to PES. To date, no comparative study between EES and BES is available. We aimed to investigate the safety and efficacy of BES with a biodegradable polymer compared with EES with a durable polymer at two years of follow-up in an unselected population of consecutively enrolled patients. METHODS AND RESULTS: A group of 814 consecutive patients undergoing percutaneous coronary intervention (PCI) was enrolled between 2007 and 2010, of whom 527 were treated with EES and 287 with BES implantation. Clinical outcome was compared in 200 pairs using propensity score matching. The primary endpoint was a composite of death, myocardial infarction (MI) and target vessel revascularisation (TVR) at two-year follow-up. Median follow-up was 22 months. The primary outcome occurred in 11.5% of EES and 10.5% of BES patients (HR 1.11, 95% CI: 0.61-2.00, p=0.74). At two years, there was no significant difference with regard to death (HR 0.49, 95% CI: 0.18-1.34, p=0.17), cardiac death (HR 0.14, 95% CI: 0.02-1.14, p=0.66) or MI (HR 6.10, 95% CI: 0.73-50.9, p=0.10). Stent thrombosis (ST) incidence was evenly distributed between EES (n=2) and BES (n=2) (p=1.0). CONCLUSIONS: This first clinical study failed to demonstrate any significant difference in safety or efficacy between these two types and generations of drug-eluting stents (DES).
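A minimal sketch of the propensity-score matching step described above: fit a logistic model for the probability of receiving BES given baseline covariates, then form 1:1 nearest-neighbour pairs on the score. The file name, column names, covariates and caliper are hypothetical; the study's actual matching specification is not reproduced here.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical baseline data: treatment = 1 for BES, 0 for EES
df = pd.read_csv("pci_cohort.csv")  # placeholder file name
covariates = ["age", "diabetes", "lesion_length", "lvef"]  # assumed columns

# Propensity score: P(BES | covariates) from a logistic model
model = LogisticRegression(max_iter=1000).fit(df[covariates], df["treatment"])
df["ps"] = model.predict_proba(df[covariates])[:, 1]

# Greedy 1:1 nearest-neighbour matching without replacement
treated = df[df["treatment"] == 1].sort_values("ps")
controls = df[df["treatment"] == 0].copy()
pairs = []
for _, t in treated.iterrows():
    j = (controls["ps"] - t["ps"]).abs().idxmin()
    if abs(controls.loc[j, "ps"] - t["ps"]) < 0.05:  # assumed caliper
        pairs.append((t.name, j))
        controls = controls.drop(j)

print(f"{len(pairs)} matched pairs")  # outcomes then compared within pairs
```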