844 results for "Optimal time delay"
Abstract:
Background Cardiac arrests are handled by teams rather than by individual health-care workers. Recent investigations demonstrate that adherence to CPR guidelines can be less than optimal, that deviations from treatment algorithms are associated with lower survival rates, and that deficits in performance are associated with shortcomings in the process of team-building. The aim of this study was to explore and quantify the effects of ad-hoc team-building on adherence to the algorithms of CPR among two types of physicians who play an important role as first responders during CPR: general practitioners and hospital physicians. Methods To unmask team-building, this prospective randomised study compared the performance of preformed teams, i.e. teams that had undergone their process of team-building prior to the onset of a cardiac arrest, with that of teams that had to form ad-hoc during the cardiac arrest. 50 teams consisting of three general practitioners each and 50 teams consisting of three hospital physicians each were randomised to two different versions of a simulated witnessed cardiac arrest: the arrest occurred either in the presence of only one physician while the remaining two physicians were summoned to help ("ad-hoc"), or it occurred in the presence of all three physicians ("preformed"). All scenarios were videotaped and performance was analysed post-hoc by two independent observers. Results Compared to preformed teams, ad-hoc forming teams had less hands-on time during the first 180 seconds of the arrest (93 ± 37 vs. 124 ± 33 sec, P < 0.0001), delayed their first defibrillation (67 ± 42 vs. 107 ± 46 sec, P < 0.0001), and made fewer leadership statements (15 ± 5 vs. 21 ± 6, P < 0.0001). Conclusion Hands-on time and time to defibrillation, two performance markers of CPR with proven relevance for medical outcome, are negatively affected by shortcomings in the process of ad-hoc team-building, particularly deficits in leadership.
Team-building thus has to be regarded as an additional task imposed on teams forming ad-hoc during CPR. All physicians should be aware that early structuring of one's own team is a prerequisite for timely and effective execution of CPR.
Abstract:
In the field of mergers and acquisitions, German and international tax law allow for several opportunities to step up a firm's assets, i.e., to revalue the assets at fair market values. When a step-up is performed, the taxpayer recognizes a taxable gain but also obtains tax benefits in the form of higher future depreciation allowances associated with stepping up the tax base of the assets. This tax-planning problem is well known in the taxation literature and can also be applied to firm valuation in the presence of taxation. However, the known models usually assume a perfect loss offset. If this assumption is abandoned, the depreciation allowances may lose value, as they become tax effective at a later point in time, or even never if there are not enough cash flows to offset them against. This aspect is especially relevant if future cash flows are assumed to be uncertain. This paper shows that a step-up may be disadvantageous, or a firm overvalued, if these aspects are not integrated into the basic calculus. Compared to the standard approach, assets should be stepped up only in a few cases and, under specific conditions, at a later point in time. Firm values may be considerably lower under imperfect loss offset.
Abstract:
The execution of a project requires resources that are generally scarce. Classical approaches to resource allocation assume that the usage of these resources by an individual project activity is constant during the execution of that activity; in practice, however, the project manager may vary resource usage over time within prescribed bounds. This variation gives rise to the project scheduling problem which consists in allocating the scarce resources to the project activities over time such that the project duration is minimized, the total number of resource units allocated equals the prescribed work content of each activity, and various work-content-related constraints are met. We formulate this problem for the first time as a mixed-integer linear program. Our computational results for a standard test set from the literature indicate that this model outperforms the state-of-the-art solution methods for this problem.
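The kind of discrete-time formulation described above can be sketched as a small mixed-integer linear program. The toy data, variable names, and the omission of precedence and block-contiguity constraints below are simplifications for illustration, not the authors' actual model; the sketch uses the PuLP modeling library with its bundled CBC solver.

```python
# A minimal MILP sketch: allocate a variable per-period resource usage to each
# activity so that its prescribed work content is met, subject to per-period
# usage bounds and a shared resource capacity, minimizing project duration.
# Data are hypothetical; precedence/contiguity constraints are omitted.
import pulp

T = 6          # planning horizon (periods)
R = 4          # resource capacity per period
acts = {       # activity: (work content, min usage when active, max usage)
    "A": (6, 1, 3),
    "B": (4, 1, 2),
}

m = pulp.LpProblem("work_content_scheduling", pulp.LpMinimize)
y = {(a, t): pulp.LpVariable(f"y_{a}_{t}", cat="Binary")   # activity a active in t?
     for a in acts for t in range(T)}
u = {(a, t): pulp.LpVariable(f"u_{a}_{t}", lowBound=0)     # resource usage of a in t
     for a in acts for t in range(T)}
Cmax = pulp.LpVariable("Cmax", lowBound=0)
m += Cmax                                                   # minimize makespan

for a, (w, qmin, qmax) in acts.items():
    m += pulp.lpSum(u[a, t] for t in range(T)) == w         # meet the work content
    for t in range(T):
        m += u[a, t] <= qmax * y[a, t]                      # usage bounds while active
        m += u[a, t] >= qmin * y[a, t]
        m += Cmax >= (t + 1) * y[a, t]                      # makespan covers active periods
for t in range(T):
    m += pulp.lpSum(u[a, t] for a in acts) <= R             # capacity per period

m.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(Cmax))   # minimum project duration for this toy instance
```

With a total work content of 10 and capacity 4, at least 3 periods are needed; the model finds a feasible 3-period allocation, illustrating how varying usage over time (rather than a constant rate) is what makes the problem interesting.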
Abstract:
Objectives: To update the 2006 systematic review of the comparative benefits and harms of erythropoiesis-stimulating agent (ESA) strategies and non-ESA strategies to manage anemia in patients undergoing chemotherapy and/or radiation for malignancy (excluding myelodysplastic syndrome and acute leukemia), including the impact of alternative thresholds for initiating treatment and optimal duration of therapy. Data sources: Literature searches were updated in electronic databases (n=3), conference proceedings (n=3), and Food and Drug Administration transcripts. Multiple sources (n=13) were searched for potential gray literature. A primary source for current survival evidence was a recently published individual patient data meta-analysis. In that meta-analysis, patient data were obtained from investigators for studies enrolling more than 50 patients per arm. Because those data constitute the most currently available data for this update, as well as the source for on-study (active treatment) mortality data, we limited inclusion in the current report to studies enrolling more than 50 patients per arm to avoid potential differential endpoint ascertainment in smaller studies. Review methods: Title and abstract screening was performed by one or two (to resolve uncertainty) reviewers; potentially included publications were reviewed in full text. Two or three (to resolve disagreements) reviewers assessed trial quality. Results were independently verified and pooled for outcomes of interest. The balance of benefits and harms was examined in a decision model. Results: We evaluated evidence from 5 trials directly comparing darbepoetin with epoetin, 41 trials comparing epoetin with control, and 8 trials comparing darbepoetin with control; 5 trials evaluated early versus late (delay until Hb ≤9 to 11 g/dL) treatment. 
Trials varied according to duration, tumor types, cancer therapy, trial quality, iron supplementation, baseline hemoglobin, ESA dosing frequency (and therefore amount per dose), and dose escalation. ESAs decreased the risk of transfusion (pooled relative risk [RR], 0.58; 95% confidence interval [CI], 0.53 to 0.64; I2 = 51%; 38 trials) without evidence of a meaningful difference between epoetin and darbepoetin. Thromboembolic event rates were higher in ESA-treated patients (pooled RR, 1.51; 95% CI, 1.30 to 1.74; I2 = 0%; 37 trials), again without a difference between epoetin and darbepoetin. In 14 trials reporting the Functional Assessment of Cancer Therapy (FACT)-Fatigue subscale, the most common patient-reported outcome, scores changed by −0.6 in control arms (95% CI, −6.4 to 5.2; I2 = 0%) and by +2.1 in ESA arms (95% CI, −3.9 to 8.1; I2 = 0%). There were fewer thromboembolic and on-study mortality adverse events when ESA treatment was delayed until baseline Hb was less than 10 g/dL, in keeping with current treatment practice, but the difference in effect from early treatment was not significant, and the evidence was limited and insufficient for conclusions. No evidence informed the optimal duration of therapy. Mortality was increased during the on-study period (pooled hazard ratio [HR], 1.17; 95% CI, 1.04 to 1.31; I2 = 0%; 37 trials). There was one additional death for every 59 treated patients when the control-arm on-study mortality was 10 percent, and one additional death for every 588 treated patients when the control-arm on-study mortality was 1 percent. A cohort decision model yielded a consistent result: a greater loss of life-years when control-arm on-study mortality was higher. There was no discernible increase in mortality with ESA use over the longest available follow-up (pooled HR, 1.04; 95% CI, 0.99 to 1.10; I2 = 38%; 44 trials), but many trials did not include an overall survival endpoint and potential time-dependent confounding was not considered.
Conclusions: Results of this update were consistent with the 2006 review. ESAs reduced the need for transfusions and increased the risk of thromboembolism. FACT-Fatigue scores were better with ESA use but the magnitude was less than the minimal clinically important difference. An increase in mortality accompanied the use of ESAs. An important unanswered question is whether dosing practices and overall ESA exposure might influence harms.
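The number-needed-to-harm figures quoted in this abstract (59 and 588) can be reproduced by treating the pooled on-study hazard ratio of 1.17 as an approximate risk ratio, an approximation that is reasonable only when event rates are low:

```python
# Reproducing the abstract's number-needed-to-harm (NNH) figures, treating
# the pooled on-study hazard ratio (1.17) as an approximate risk ratio.
def nnh(control_mortality: float, hazard_ratio: float = 1.17) -> float:
    """One additional death per NNH treated patients at a given control-arm risk."""
    excess_risk = control_mortality * (hazard_ratio - 1)  # absolute risk increase
    return 1 / excess_risk

print(round(nnh(0.10)))  # 59 at 10% control-arm on-study mortality
print(round(nnh(0.01)))  # 588 at 1% control-arm on-study mortality
```

This also makes clear why the absolute harm scales directly with the baseline on-study mortality, the point the cohort decision model confirms.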
Abstract:
OBJECTIVE Standard stroke CT protocols start with non-enhanced CT (NECT) followed by perfusion-CT (PCT) and end with CTA. We aimed to evaluate the influence of the sequence of PCT and CTA on quantitative perfusion parameters, venous contrast enhancement and examination time, in order to save critical time in the therapeutic window in stroke patients. METHODS AND MATERIALS Stroke CT data sets of 85 patients, 47 with CTA before PCT (group A) and 38 with CTA after PCT (group B), were retrospectively analyzed by two experienced neuroradiologists. Parameter maps of cerebral blood flow, cerebral blood volume, time to peak and mean transit time, as well as arterial and venous contrast enhancement, were compared. RESULTS Both readers rated the contrast of brain-supplying arteries as equal in both groups (p=0.55 intracranial, p=0.73 extracranial). Quantitative perfusion parameters did not significantly differ between the groups (all p>0.18), while the extent of venous superimposition of the ICA was rated higher in group B (p=0.04). The time to complete the diagnostic CT examination was significantly shorter for group A (p<0.01). CONCLUSION Performing CTA directly after NECT has no significant effect on PCT parameters and avoids venous preloading in CTA, while examination times are significantly shorter.
Abstract:
Recent downward revisions in the climate response to rising CO2 levels, and opportunities for reducing non-CO2 climate warming, have both been cited as evidence that the case for reducing CO2 emissions is less urgent than previously thought. Evaluating the impact of delay is complicated by the fact that CO2 emissions accumulate over time, so what happens after they peak is as relevant for long-term warming as the size and timing of the peak itself. Previous discussions have focused on how the rate of reduction required to meet any given temperature target rises asymptotically the later the emissions peak. Here we focus on a complementary question: how fast is peak CO2-induced warming increasing while mitigation is delayed, assuming no increase in rates of reduction after the emissions peak? We show that this peak-committed warming is increasing at the same rate as cumulative CO2 emissions, about 2% per year, much faster than observed warming, independent of the climate response.
Abstract:
The paper analyzes how to comply with an emission constraint that restricts the use of an established energy technique, given the two options of saving energy and investing in two alternative energy techniques. These techniques differ in their deterioration rates and in the investment lags of the corresponding capital stocks. Thus, the paper takes a medium-term perspective on climate change mitigation, where the time horizon is too short for technological change to occur, but long enough for capital stocks to accumulate and deteriorate. It is shown that, in general, only one of the two alternative techniques prevails in the stationary state, although both techniques might be utilized during the transition phase. Hence, while in a static economy only one technique is efficient, this is not necessarily true in a dynamic economy.
Abstract:
OBJECTIVES The aim of this phantom study was to minimize the radiation dose by finding the combination of low tube current and low voltage that would still result in accurate volume measurements compared to standard CT imaging, without significantly decreasing the sensitivity of detecting lung nodules both with and without the assistance of CAD. METHODS An anthropomorphic chest phantom containing artificial solid and ground glass nodules (GGNs, 5-12 mm) was examined with a 64-row multi-detector CT scanner at three tube currents of 100, 50 and 25 mAs in combination with three tube voltages of 120, 100 and 80 kVp. This resulted in eight different protocols that were then compared to the standard CT protocol (100 mAs/120 kVp). For each protocol, at least 127 different nodules were scanned in 21-25 phantoms. The nodules were analyzed in two separate sessions by three independent, blinded radiologists and by computer-aided detection (CAD) software. RESULTS The mean sensitivity of the radiologists for identifying solid lung nodules on standard CT was 89.7% ± 4.9%. The sensitivity was not significantly impaired when the tube current and voltage were lowered at the same time, except at the lowest exposure level of 25 mAs/80 kVp [80.6% ± 4.3% (p = 0.031)]. Compared to standard CT, the sensitivity for detecting GGNs was significantly lower at all dose levels when the voltage was 80 kVp; this result was independent of the tube current. CAD significantly increased the radiologists' sensitivity for detecting solid nodules at all dose levels (by 5-11%). No significant volume measurement errors (VMEs) were documented for the radiologists or the CAD software at any dose level. CONCLUSIONS Our results suggest that a CT protocol with 25 mAs and 100 kVp is optimal for detecting solid and ground glass nodules in lung cancer screening. The use of CAD software is highly recommended at all dose levels.
Abstract:
BACKGROUND Surgical site infections are the most common hospital-acquired infections among surgical patients. The administration of surgical antimicrobial prophylaxis reduces the risk of surgical site infections. The optimal timing of this procedure is still a matter of debate. While most studies suggest that it should be given as close to the incision time as possible, others conclude that this may be too late for optimal prevention of surgical site infections. A large observational study suggests that surgical antimicrobial prophylaxis should be administered 74 to 30 minutes before surgery. The aim of this article is to report the design and protocol of a randomized controlled trial investigating the optimal timing of surgical antimicrobial prophylaxis. Methods/design: In this bi-center randomized controlled trial conducted at two tertiary referral centers in Switzerland, we plan to include 5,000 patients undergoing general, oncologic, vascular and orthopedic trauma procedures. Patients are randomized in a 1:1 ratio into two groups: one receiving surgical antimicrobial prophylaxis in the anesthesia room (75 to 30 minutes before incision) and the other receiving surgical antimicrobial prophylaxis in the operating room (less than 30 minutes before incision). We expect a significantly lower rate of surgical site infections with surgical antimicrobial prophylaxis administered more than 30 minutes before the scheduled incision. The primary outcome is the occurrence of surgical site infections during a 30-day follow-up period (one year with an implant in place). Assuming a 5% surgical site infection risk with administration of surgical antimicrobial prophylaxis in the operating room, the planned sample size has 80% power to detect a relative risk reduction for surgical site infections of 33% when administering surgical antimicrobial prophylaxis in the anesthesia room (with a two-sided type I error of 5%). We expect the study to be completed within three years.
DISCUSSION The results of this randomized controlled trial will have an important impact on current international guidelines for infection control strategies in the hospital. Moreover, the results of this randomized controlled trial are of significant interest for patient safety and healthcare economics. Trial registration: This trial is registered on ClinicalTrials.gov under the identifier NCT01790529.
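The stated sample size can be sanity-checked with the standard normal-approximation formula for comparing two independent proportions (two-sided alpha of 5%, 80% power, 5% baseline risk, 33% relative risk reduction). This is a back-of-the-envelope check, not the trial statisticians' exact method:

```python
# Sanity check of the planned sample size via the two-proportion
# normal-approximation formula: n = (z_a + z_b)^2 * (p1q1 + p2q2) / (p1 - p2)^2
p1 = 0.05              # assumed SSI risk, operating-room group
p2 = p1 * (1 - 0.33)   # SSI risk under a 33% relative risk reduction
z_alpha = 1.959964     # Phi^-1(0.975), two-sided 5% type I error
z_beta = 0.841621      # Phi^-1(0.80), 80% power

n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)
print(round(n_per_group))  # ~2300 per group, i.e. ~4600 in total
```

Roughly 4,600 patients before allowing for dropout and protocol deviations, which is consistent with the planned enrollment of 5,000.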
Abstract:
BACKGROUND Vitamin D deficiency is prevalent in HIV-infected individuals, and vitamin D supplementation is proposed according to standard care. This study aimed at characterizing the kinetics of 25(OH)D in a cohort of HIV-infected individuals of European ancestry to better define the influence of genetic and non-genetic factors on 25(OH)D levels. These data were used to optimize vitamin D supplementation in order to reach therapeutic targets. METHODS 1,397 25(OH)D plasma levels and relevant clinical information were collected from 664 participants during routine medical follow-up visits. Participants were genotyped for 7 SNPs in 4 genes known to be associated with 25(OH)D levels. 25(OH)D concentrations were analyzed using a population pharmacokinetic approach. The percentage of individuals with 25(OH)D concentrations within the recommended range of 20-40 ng/ml during 12 months of follow-up was evaluated by simulation for several dosage regimens. RESULTS A one-compartment model with linear absorption and elimination was used to describe 25(OH)D pharmacokinetics, while integrating endogenous baseline plasma concentrations. Covariate analyses confirmed the effect of seasonality, body mass index, smoking habits, the analytical method, darunavir/r and the genetic variant in GC (rs2282679) on 25(OH)D concentrations. 11% of the interindividual variability in 25(OH)D levels was explained by seasonality and other non-genetic covariates, and 1% by genetics. The optimal supplementation for severely vitamin D-deficient patients was 300,000 IU twice per year. CONCLUSIONS This analysis identified factors associated with 25(OH)D plasma levels in HIV-infected individuals. Improvements to the dosage regimen and timing of vitamin D supplementation are proposed based on these results.
Abstract:
PURPOSE Therapeutic drug monitoring of patients receiving once daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produced comparable results, their performance has never been checked against full PK profiles. We performed a PK study in order to compare both methods and to determine the best time-points to estimate AUC0-24 and peak concentrations (C max). METHODS We obtained full PK profiles in 14 patients receiving a once daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS For tobramycin and gentamicin, AUC0-24 and C max could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after start of the infusion. The two-point and the Bayesian method produced similar results. For amikacin, AUC0-24 could reliably be estimated by both methods. C max was underestimated by 10-20% by the two-point method and by up to 30% with a large variation by the Bayesian method. CONCLUSIONS The ideal time-points for therapeutic drug monitoring of once daily administered aminoglycosides are 1 h after start of a 30-min infusion for the first time-point and 8-10 h after start of the infusion for the second time-point. Duration of the infusion and accurate registration of the time-points of blood drawing are essential for obtaining precise predictions.
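A two-point method of the kind compared above can be sketched as a simple log-linear (mono-exponential) calculation: the elimination rate constant is taken from the slope between the two post-infusion samples, the peak is back-extrapolated to the end of the infusion, and AUC follows from the exponential decay. The function name and concentration values below are illustrative, not the algorithm used in the study, and the small infusion-phase contribution to AUC is ignored for brevity:

```python
# A minimal two-point (log-linear) sketch for once-daily aminoglycoside
# monitoring, assuming first-order elimination between the two samples.
# c1 at t1 (e.g. 1 h) and c2 at t2 (e.g. 8-10 h) after start of infusion.
import math

def two_point_estimates(c1, t1, c2, t2, t_inf_end=0.5):
    """Estimate ke (1/h), Cmax at end of a 30-min infusion, and AUC (mg*h/L)."""
    ke = math.log(c1 / c2) / (t2 - t1)            # elimination rate constant
    cmax = c1 * math.exp(ke * (t1 - t_inf_end))   # back-extrapolate to end of infusion
    c24 = cmax * math.exp(-ke * (24 - t_inf_end)) # trough at 24 h
    auc = (cmax - c24) / ke                       # mono-exponential AUC over the interval
    return ke, cmax, auc

# Illustrative gentamicin-like concentrations (mg/L), not patient data:
ke, cmax, auc = two_point_estimates(c1=18.0, t1=1.0, c2=2.0, t2=9.0)
```

The sketch also shows why the abstract stresses accurate registration of sampling times: ke, and hence both extrapolated values, depends directly on the recorded interval t2 - t1.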
Abstract:
OBJECTIVE The results of the Interventional Management of Stroke (IMS) III, Magnetic Resonance and REcanalization of Stroke Clots Using Embolectomy (MR RESCUE), and SYNTHESIS EXPANSION trials are expected to affect the practice of endovascular treatment for acute ischemic stroke. The purpose of this report is to review the components of the designs and methods of these trials and to describe the influence of those components on the interpretation of trial results. METHODS A critical review of the trial design and conduct of IMS III, MR RESCUE, and SYNTHESIS EXPANSION is performed, with emphasis on patient selection, shortcomings in procedural aspects, and the methodology of data ascertainment and analysis. The influence of each component is estimated based on published literature, including multicenter clinical trials reporting on endovascular treatment for acute ischemic stroke and myocardial infarction. RESULTS We critically examined the time interval between symptom onset and treatment and the rates of angiographic recanalization to differentiate between "endovascular treatment" and "parameter optimized endovascular treatment" as it relates to the IMS III, MR RESCUE, and SYNTHESIS EXPANSION trials. All three trials failed to effectively test "parameter optimized endovascular treatment" due to the delay between symptom onset and treatment and less than optimal rates of recanalization. In all three trials, the magnitude of benefit with endovascular treatment required to reject the null hypothesis was larger than could be expected based on previous studies. The IMS III and SYNTHESIS EXPANSION trials demonstrated that rates of symptomatic intracerebral hemorrhage subsequent to treatment are similar between IV thrombolytics and endovascular treatment in matched acute ischemic stroke patients.
The trials also indirectly validated the superiority/equivalence of IV thrombolytics (compared with endovascular treatment) in patients with minor neurological deficits and in those without large vessel occlusion on computed tomographic/magnetic resonance angiography. CONCLUSIONS The results do not support a large-magnitude benefit of endovascular treatment in the subjects randomized in all three trials. The possibility that benefits of a smaller magnitude exist in certain patient populations cannot be excluded. Large-magnitude benefits can be expected with implementation of "parameter optimized endovascular treatment" in patients with ischemic stroke who are candidates for IV thrombolytics.
Abstract:
In the current study, we consider that optimal sprint start performance requires the self-control of responses. Therefore, start performance should depend on athletes' self-control strength. We assumed that momentary depletion of self-control strength (ego depletion) would either speed up or slow down the initiation of a sprint start, where a sped-up initiation would carry an increased risk of a false start. Applying a mixed between- (depletion vs. nondepletion) and within- (before vs. after manipulation of depletion) subjects design, we tested the start reaction times of 37 sport students. We found that participants' start reaction times decelerated after finishing a depleting task, whereas they remained constant in the nondepletion condition. These results indicate that sprint start performance can be impaired by unrelated preceding actions that lower momentary self-control strength. We discuss practical implications in terms of optimizing sprint starts and related overall sprint performance.
Abstract:
Low-grade gliomas (LGGs) are a group of primary brain tumours usually encountered in young patient populations. These tumours represent a difficult challenge because many patients survive a decade or more and may be at a higher risk for treatment-related complications. Specifically, radiation therapy is known to have a relevant effect on survival, but in many cases it can be deferred to avoid side effects while maintaining its beneficial effect. However, a subset of LGGs manifests more aggressive clinical behaviour and requires earlier intervention. Moreover, the effectiveness of radiotherapy depends on the tumour characteristics. Recently, Pallud et al. (2012, Neuro-Oncology, 14:1-10) studied patients with LGGs treated with radiation therapy as a first-line therapy and obtained the counterintuitive result that tumours with a fast response to the therapy had a worse prognosis than those responding late. In this paper, we construct a mathematical model describing the basic facts of glioma progression and response to radiotherapy. The model also provides an explanation for the observations of Pallud et al. Using the model, we propose radiation fractionation schemes that might be therapeutically useful by helping to evaluate tumour malignancy while at the same time reducing the toxicity associated with the treatment.
Abstract:
BACKGROUND AND AIMS Limited data from large cohorts are available on switching between tumor necrosis factor (TNF) antagonists (infliximab, adalimumab, certolizumab pegol) over time. We aimed to evaluate the prevalence of switching from one TNF antagonist to another and to identify associated risk factors. METHODS Data from the Swiss Inflammatory Bowel Diseases Cohort Study (SIBDCS) were analyzed. RESULTS Of 1731 patients included in the SIBDCS (956 with Crohn's disease [CD] and 775 with ulcerative colitis [UC]), 347 CD patients (36.3%) and 129 UC patients (16.6%) were treated with at least one TNF antagonist. A total of 53/347 (15.3%) CD patients (median disease duration 9 years) and 20/129 (15.5%) UC patients (median disease duration 7 years) needed to switch to a second and/or third TNF antagonist. Median treatment duration was longest for the first TNF antagonist used (CD 25 months; UC 14 months), followed by the second (CD 13 months; UC 4 months) and the third (CD 11 months; UC 15 months). Primary nonresponse, loss of response and side effects were the major reasons to stop and/or switch TNF antagonist therapy. A low body mass index, a short diagnostic delay and extraintestinal manifestations at inclusion were identified as risk factors for a switch from the first TNF antagonist within 24 months of its use in CD patients. CONCLUSION Switching of TNF antagonists over time is a common issue. The median treatment duration with a specific TNF antagonist diminishes as the number of TNF antagonists used increases.