879 results for Computation time delay
Abstract:
Prospective systematic analyses of the clinical presentation of bullous pemphigoid (BP) are lacking. Little is known about the time required for its diagnosis. Knowledge of the disease spectrum is important for diagnosis, management and inclusion of patients in therapeutic trials.
Abstract:
Various inference procedures for linear regression models with censored failure times have been studied extensively. Recent developments in efficient algorithms to implement these procedures enhance the practical usage of such models in survival analysis. In this article, we present robust inferences for certain covariate effects on the failure time in the presence of "nuisance" confounders under a semiparametric, partial linear regression setting. Specifically, the estimation procedures for the regression coefficients of interest are derived from a working linear model and are valid even when the function of the confounders in the model is not correctly specified. The new proposals are illustrated with two examples, and their validity for cases with practical sample sizes is demonstrated via a simulation study.
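As a rough illustration of the robustness property described above, the following hypothetical sketch (our own variable names; censoring is ignored for simplicity, whereas the paper's procedures handle it) fits a working linear model when the true confounder effect is nonlinear:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    z = rng.normal(size=n)   # covariate of interest
    w = rng.normal(size=n)   # "nuisance" confounder, independent of z
    beta = 0.5
    # true model: log failure time depends nonlinearly on the confounder
    log_t = beta * z + np.sin(2 * w) + rng.normal(scale=0.3, size=n)

    # working linear model: misspecifies the confounder effect as linear
    X = np.column_stack([np.ones(n), z, w])
    coef, *_ = np.linalg.lstsq(X, log_t, rcond=None)
    print(coef[1])  # close to 0.5 despite the misspecified confounder term

Because z is independent of the confounder here, the coefficient of interest stays approximately valid even though the confounder function is wrong, which is the essence of the robustness claim.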
Abstract:
This paper introduces a novel approach to making inference about the regression parameters in the accelerated failure time (AFT) model for current status and interval-censored data. The estimator is constructed by inverting a Wald-type test for testing a null proportional hazards model. A numerically efficient Markov chain Monte Carlo (MCMC) based resampling method is proposed to simultaneously obtain the point estimator and a consistent estimator of its variance-covariance matrix. We illustrate our approach with interval-censored data sets from two clinical studies. Extensive numerical studies are conducted to evaluate the finite sample performance of the new estimators.
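For readers unfamiliar with the data structure, this hypothetical sketch (our own notation, not the authors' code) generates interval-censored observations from a log-linear AFT model, where the event time is only known to lie between two examination times:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    beta = 0.8
    # AFT model: log T = x * beta + error
    t = np.exp(x * beta + 0.5 * rng.gumbel(size=n))

    # each subject is examined on a fixed schedule; the event time is only
    # known to lie between the last negative and first positive examination
    exams = np.linspace(0.5, 10, 20)
    left = np.array([exams[exams < ti].max(initial=0.0) for ti in t])
    right = np.array([exams[exams >= ti].min(initial=np.inf) for ti in t])
    # the (left, right] intervals are the observed data; right = inf means
    # the event had not occurred by the last examination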
Abstract:
Visualization and exploratory analysis are an important part of any data analysis and are made more challenging when the data are voluminous and high-dimensional. One such example is environmental monitoring data, which are often collected over time and at multiple locations, resulting in a geographically indexed multivariate time series. Financial data, although not necessarily containing a geographic component, present another source of high-volume multivariate time series data. We present the mvtsplot function, which provides a method for visualizing multivariate time series data. We outline the basic design concepts and provide some examples of its usage by applying it to a database of ambient air pollution measurements in the United States and to a hypothetical portfolio of stocks.
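mvtsplot itself is an R function; as a rough Python analogue of its central idea (each series discretized into low/medium/high levels and rendered as one row of a colored image — all details here are our own simplification, not the package's code), one might write:

    import numpy as np
    import matplotlib.pyplot as plt

    rng = np.random.default_rng(2)
    data = rng.normal(size=(8, 200)).cumsum(axis=1)  # 8 series, 200 time points

    # discretize each series into terciles (low / medium / high)
    levels = np.vstack([np.digitize(row, np.quantile(row, [1/3, 2/3]))
                        for row in data])

    fig, ax = plt.subplots(figsize=(8, 3))
    ax.imshow(levels, aspect="auto", cmap="Purples", interpolation="none")
    ax.set_xlabel("time index")
    ax.set_ylabel("series")
    plt.show()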
Abstract:
BACKGROUND: Constipation is a significant side effect of opioid therapy. We have previously demonstrated that naloxone-3-glucuronide (NX3G) antagonizes the motility-lowering effect of morphine in the rat colon. AIM: To find out whether oral NX3G is able to reduce the morphine-induced delay in colonic transit time (CTT) without being absorbed and without influencing the analgesic effect. METHODS: Fifteen male volunteers were included. Pharmacokinetics: after oral administration of 0.16 mg/kg NX3G, blood samples were collected over a 6-h period. Pharmacodynamics: NX3G or placebo was then given at the start time and every 4 h thereafter. Morphine (0.05 mg/kg) or placebo was injected s.c. 2 h after starting and thereafter every 6 h for 24 h. CTT was measured over a 48-h period by scintigraphy. Pressure pain threshold tests were performed. RESULTS: Neither NX3G nor naloxone was detected in the venous blood. The slowest transit time was observed during the morphine phase, which was significantly different from morphine with NX3G and from placebo. Pain perception was not significantly influenced by NX3G. CONCLUSIONS: Orally administered NX3G is able to reverse the morphine-induced delay of CTT in humans without being detectable in peripheral blood samples. Therefore, NX3G may improve symptoms of constipation in patients using opioid medication without affecting opioid analgesic effects.
Abstract:
The performance of memory-guided saccades with two different delays (3 and 30 s of memorization) was studied in seven healthy subjects. Double-pulse transcranial magnetic stimulation (dTMS) with an interstimulus interval of 100 ms was applied over the right dorsolateral prefrontal cortex (DLPFC) early (1 s after target presentation) and late (28 s after target presentation). In both delay conditions, early stimulation significantly increased the percentage of error in amplitude (PEA) of contralateral memory-guided saccades compared to the control experiment without stimulation. dTMS applied late in the delay had no significant effect on PEA. Furthermore, we found a significantly smaller effect of early stimulation in the long-delay paradigm. These results suggest a time-dependent hierarchical organization of spatial working memory, with a functional dominance of the DLPFC during early memorization that is independent of the memorization delay. For a long memorization delay, however, working memory seems to have an additional, DLPFC-independent component.
Abstract:
For broadcasting purposes, mixed reality, the combination of real and virtual scene content, has become ubiquitous. Mixed reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for mixed reality applications which uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core requirement for realism and a convincing visual perception, alongside the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a way to support placement of virtual content in the scene. The core feature of our system is the incorporation of a time-of-flight (ToF) camera device, which delivers real-time depth images of the environment at a reasonable resolution and quality. This camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile and flexible, and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is performed efficiently on the graphics processing unit (GPU) using an environment model and the current ToF-camera image. Dynamic scene content is thereby automatically extracted and tracked, and this information is used for planning and alignment of virtual content. A further valuable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. This paper gives an overview of the whole system, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning.
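The depth-keying step itself reduces to a per-pixel comparison against the static environment model. The following numpy sketch is a CPU stand-in for what the paper performs on the GPU; the tolerance value and array names are illustrative assumptions, not the authors' implementation:

    import numpy as np

    def depth_key(current_depth, environment_depth, tolerance=0.05):
        """Boolean foreground mask: pixels where the current time-of-flight
        depth is measurably closer than the pre-built static environment."""
        return current_depth < (environment_depth - tolerance)

    # hypothetical 480x640 depth images, in meters
    env = np.full((480, 640), 3.0)
    cur = env.copy()
    cur[200:300, 250:350] = 1.2    # an actor standing in front of the set
    mask = depth_key(cur, env)     # True where dynamic content was detected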
Abstract:
Recent downward revisions in the climate response to rising CO2 levels, and opportunities for reducing non-CO2 climate warming, have both been cited as evidence that the case for reducing CO2 emissions is less urgent than previously thought. Evaluating the impact of delay is complicated by the fact that CO2 emissions accumulate over time, so what happens after they peak is as relevant for long-term warming as the size and timing of the peak itself. Previous discussions have focused on how the rate of reduction required to meet any given temperature target rises asymptotically the later the emissions peak. Here we focus on a complementary question: how fast is peak CO2-induced warming increasing while mitigation is delayed, assuming no increase in rates of reduction after the emissions peak? We show that this peak-committed warming is increasing at the same rate as cumulative CO2 emissions, about 2% per year, much faster than observed warming, independent of the climate response.
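The roughly 2%-per-year figure follows from simple proportionality: if peak-committed warming scales with cumulative CO2 emissions, its fractional growth rate equals current annual emissions divided by the cumulative total. A sketch with illustrative round numbers (our own, not the paper's exact inputs):

    # illustrative round numbers, not the paper's exact inputs
    cumulative_emissions = 500.0   # GtC emitted to date (approximate)
    annual_emissions = 10.0        # GtC per year (approximate)

    # if peak-committed warming is proportional to cumulative emissions,
    # its fractional growth rate equals the fractional growth of the total
    growth_rate = annual_emissions / cumulative_emissions
    print(f"{growth_rate:.1%} per year")   # ~2% per year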
Abstract:
The responses of carbon dioxide (CO2) and other climate variables to an emission pulse of CO2 into the atmosphere are often used to compute the Global Warming Potential (GWP) and Global Temperature change Potential (GTP), to characterize the response timescales of Earth System models, and to build reduced-form models. In this carbon cycle-climate model intercomparison project, which spans the full model hierarchy, we quantify responses to emission pulses of different magnitudes injected under different conditions. The CO2 response shows the known rapid decline in the first few decades followed by a millennium-scale tail. For a 100 Gt-C emission pulse added to a constant CO2 concentration of 389 ppm, 25 ± 9% is still found in the atmosphere after 1000 yr; the ocean has absorbed 59 ± 12% and the land the remainder (16 ± 14%). The response in global mean surface air temperature is an increase by 0.20 ± 0.12 °C within the first twenty years; thereafter and until year 1000, temperature decreases only slightly, whereas ocean heat content and sea level continue to rise. Our best estimate for the Absolute Global Warming Potential, given by the time-integrated response in CO2 at year 100 multiplied by its radiative efficiency, is 92.5 × 10⁻¹⁵ yr W m⁻² per kg-CO2. This value very likely (5 to 95% confidence) lies within the range of (68 to 117) × 10⁻¹⁵ yr W m⁻² per kg-CO2. Estimates for the time-integrated response in CO2 published in the IPCC First, Second, and Fourth Assessment and our multi-model best estimate all agree within 15% during the first 100 yr. The integrated CO2 response, normalized by the pulse size, is lower for pre-industrial conditions than for present day, and lower for smaller pulses than for larger pulses. In contrast, the response in temperature, sea level and ocean heat content is less sensitive to these choices. Although choices of pulse size, background concentration, and model lead to uncertainties, the most important and subjective choice in determining the AGWP of CO2 and the GWP is the time horizon.
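The AGWP computation described above amounts to integrating the CO2 impulse response function (IRF) over the time horizon and multiplying by the radiative efficiency. The sketch below uses the multi-exponential fit coefficients commonly quoted from Joos et al. (2013) and back-calculates the radiative efficiency from the abstract's best estimate; treat both as approximate assumptions rather than authoritative values:

    import numpy as np
    from scipy.integrate import quad

    # multi-model mean impulse response fit (approximate coefficients):
    # IRF(t) = a0 + sum_i a_i * exp(-t / tau_i)
    a = [0.2173, 0.2240, 0.2824, 0.2763]
    tau = [394.4, 36.54, 4.304]   # years

    def irf(t):
        return a[0] + sum(ai * np.exp(-t / ti) for ai, ti in zip(a[1:], tau))

    integrated, _ = quad(irf, 0.0, 100.0)   # time-integrated airborne fraction, ~52 yr
    radiative_efficiency = 1.77e-15         # W m^-2 per kg CO2, implied by the abstract
    agwp_100 = radiative_efficiency * integrated
    print(agwp_100)                          # ~92.5e-15 yr W m^-2 per kg CO2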
Abstract:
There is great demand for easily accessible, user-friendly dietary self-management applications. Yet accurate, fully automatic estimation of nutritional intake using computer vision methods remains an open research problem. One key element of this problem is volume estimation, which can be computed from 3D models obtained using multi-view geometry. This paper presents a computational system for volume estimation based on the processing of two meal images. A 3D model of the served meal is reconstructed from the acquired images, and the volume is computed from the shape. The algorithm was tested on food models (dummy foods) with known volume and on real served food. Volume accuracy was on the order of 90%, while the total execution time was below 15 seconds per image pair. The proposed system combines simple and computationally affordable methods for 3D reconstruction, remained stable throughout the experiments, operates in near real time, and places minimal constraints on users.
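Once a 3D model of the served food has been reconstructed, the volume step itself is straightforward. A minimal sketch (our own simplification, assuming the reconstruction already yields a point cloud of the food surface; a convex hull overestimates concave items):

    import numpy as np
    from scipy.spatial import ConvexHull

    # hypothetical reconstructed point cloud of a food item, in centimeters
    rng = np.random.default_rng(3)
    points = rng.uniform(low=[0, 0, 0], high=[10, 8, 4], size=(5000, 3))

    hull = ConvexHull(points)
    print(f"estimated volume: {hull.volume:.0f} cm^3")  # upper bound for concave shapes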
Abstract:
Low-grade gliomas (LGGs) are a group of primary brain tumours usually encountered in young patient populations. These tumours represent a difficult challenge because many patients survive a decade or more and may be at a higher risk for treatment-related complications. Specifically, radiation therapy is known to have a relevant effect on survival, but in many cases it can be deferred to avoid side effects while maintaining its beneficial effect. However, a subset of LGGs manifests more aggressive clinical behaviour and requires earlier intervention. Moreover, the effectiveness of radiotherapy depends on the tumour characteristics. Recently, Pallud et al. (2012, Neuro-Oncology, 14:1-10) studied patients with LGGs treated with radiation therapy as a first-line therapy and obtained the counterintuitive result that tumours with a fast response to the therapy had a worse prognosis than those responding late. In this paper, we construct a mathematical model describing the basic facts of glioma progression and response to radiotherapy. The model also provides an explanation for the observations of Pallud et al. Using the model, we propose radiation fractionation schemes that might be therapeutically useful by helping to evaluate tumour malignancy while at the same time reducing the toxicity associated with the treatment.
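A generic model of this type (not the authors' exact equations) couples slow logistic tumour growth with the standard linear-quadratic cell-kill model applied at each irradiation. All parameter values in this sketch are purely illustrative:

    import numpy as np

    # purely illustrative parameters
    rho, K = 0.003, 1.0           # growth rate (1/day) and carrying capacity
    alpha, beta_lq = 0.03, 0.003  # linear-quadratic radiosensitivity (Gy^-1, Gy^-2)
    d, n_fractions = 2.0, 30      # 30 daily fractions of 2 Gy

    v = 0.4                       # initial tumour burden (fraction of K)
    for day in range(3650):
        v += rho * v * (1 - v / K)                    # logistic growth step (dt = 1 day)
        if day < n_fractions:                         # daily fraction during treatment
            v *= np.exp(-(alpha * d + beta_lq * d**2))

In such a model, the speed of the post-treatment response reflects the tumour's proliferation rate, which is one way to rationalize why fast responders can have a worse prognosis.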
Abstract:
BACKGROUND AND AIMS Limited data from large cohorts are available on switching of tumor necrosis factor (TNF) antagonists (infliximab, adalimumab, certolizumab pegol) over time. We aimed to evaluate the prevalence of switching from one TNF antagonist to another and to identify associated risk factors. METHODS Data from the Swiss Inflammatory Bowel Diseases Cohort Study (SIBDCS) were analyzed. RESULTS Of 1731 patients included in the SIBDCS (956 with Crohn's disease [CD] and 775 with ulcerative colitis [UC]), 347 CD patients (36.3%) and 129 UC patients (16.6%) were treated with at least one TNF antagonist. A total of 53/347 (15.3%) CD patients (median disease duration 9 years) and 20/129 (15.5%) UC patients (median disease duration 7 years) needed to switch to a second and/or a third TNF antagonist. Median treatment duration was longest for the first TNF antagonist used (CD 25 months; UC 14 months), followed by the second (CD 13 months; UC 4 months) and third TNF antagonist (CD 11 months; UC 15 months). Primary nonresponse, loss of response and side effects were the major reasons to stop and/or switch TNF antagonist therapy. A low body mass index, a short diagnostic delay and extraintestinal manifestations at inclusion were identified as risk factors for a switch of the first used TNF antagonist within 24 months of its use in CD patients. CONCLUSION Switching of the TNF antagonist over time is a common issue. The median treatment duration with a specific TNF antagonist diminishes with an increasing number of TNF antagonists being used.
Abstract:
BACKGROUND Closed reduction and pinning is the accepted treatment choice for displaced supracondylar humeral fractures in children (SCHF). Rates of open reduction, complications and outcome are reported to be dependent on delay of surgery. We investigated whether delay of surgery influenced the incidence of open reduction, complications and outcome of surgical treatment of SCHFs in the authors' institution. METHODS Three hundred and forty-one children with 343 supracondylar humeral fractures (Gartland II: 144; Gartland III: 199) who underwent surgery between 2000 and 2009 were retrospectively analysed. The group consisted of 194 males and 149 females. The average age was 6.3 years. Mean follow-up was 6.2 months. The time interval between trauma and surgical intervention was determined using our institutional database. Clinical and radiographical data were collected for each group. The influence of delay of treatment on rates of open reduction, complications and outcome was calculated using logistic regression analysis. Furthermore, patients were grouped into 4 groups of delay (<6 h, n = 166; 6-12 h, n = 95; 12-24 h, n = 68; >24 h, n = 14) and the aforementioned variables were compared among these groups. RESULTS The incidence of open procedures in 343 supracondylar humeral fractures was 2.6 %. Complication rates were similar to the literature (10.8 %), primarily consisting of transient neurological impairments (9.0 %), all of which were fully reversible with conservative treatment. Poor outcome was seen in 1.7 % of the patients. Delay of surgical treatment had no influence on rates of open surgery (p = 0.662), complications (p = 0.365) or poor outcome (p = 0.942). CONCLUSIONS In this retrospective study, delay of treatment of SCHF did not have a significant influence on the incidence of open reduction, complications, and outcome. Therefore, in SCHF with sufficient blood perfusion and nerve function, elective treatment is reasonable to avoid surgical interventions in the middle of the night, which are stressful and wearing both for patients and for surgeons. LEVEL OF EVIDENCE III (retrospective comparative study).
Abstract:
OBJECTIVE Faster time from onset to recanalization (OTR) in acute ischemic stroke using endovascular therapy (ET) has been associated with better outcome. However, previous studies were based on less-effective first-generation devices and analyzed only dichotomized disability outcomes, which may underestimate the full effect of treatment. METHODS In the combined databases of the SWIFT and STAR trials, we identified patients treated with the Solitaire stent retriever with achievement of substantial reperfusion (Thrombolysis in Cerebral Infarction [TICI] 2b-3). Ordinal number-needed-to-treat values were derived by populating joint outcome tables. RESULTS Among 202 patients treated with ET with TICI 2b to 3 reperfusion, mean age was 68 (±13), 62% were female, and median National Institutes of Health Stroke Scale (NIHSS) score was 17 (interquartile range [IQR]: 14-20). Day 90 modified Rankin Scale (mRS) outcomes for OTR time intervals ranging from 180 to 480 minutes showed substantial time-related reductions in disability across the entire outcome range. Shorter OTR was associated with improved mean 90-day mRS (1.4 vs. 2.4 vs. 3.3, for OTR groups of 124-240 vs. 241-360 vs. 361-660 minutes; p < 0.001). The number of patients identified as benefiting from therapy with shorter OTR was 3-fold higher (range, 1.5-4.7) on ordinal, compared with dichotomized, analysis. For every 15-minute acceleration of OTR, 34 per 1,000 treated patients had an improved disability outcome. INTERPRETATION Analysis of disability over the entire outcome range demonstrates a marked effect of shorter time to reperfusion upon improved clinical outcome, substantially larger than binary metrics suggest. For every 5-minute delay in endovascular reperfusion, 1 of 100 patients has a worse disability outcome. Ann Neurol 2015;78:584-593.
Abstract:
Analysis of recurrent events has been widely discussed in medical, health services, insurance, and engineering areas in recent years. This research proposes to use a nonhomogeneous Yule process with the proportional intensity assumption to model the hazard function of recurrent events data and the associated risk factors. This method assumes that repeated events occur for each individual, with given covariates, according to a nonhomogeneous Yule process with intensity function λ_x(t) = λ_0(t) · exp(x′β). One of the advantages of using a nonhomogeneous Yule process for recurrent events is the assumption that the recurrence rate is proportional to the number of events that have occurred up to time t. Maximum likelihood estimation is used to provide estimates of the parameters in the model, and a generalized scoring iterative procedure is applied in the numerical computation. Model comparisons between the proposed method and other existing recurrent-event models are addressed by simulation. An example concerning recurrent myocardial infarction events, comparing two distinct populations (Mexican-Americans and non-Hispanic Whites) in the Corpus Christi Heart Project, is examined.
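As an illustration of this intensity specification (a hypothetical sketch, not the dissertation's estimation code), recurrent event times under a count-proportional, Yule-type intensity of the form λ_0(t)·exp(x′β)·(N(t−) + 1) can be simulated by Lewis-Shedler thinning; the baseline function and parameter values below are our own assumptions:

    import numpy as np

    rng = np.random.default_rng(4)

    def base_rate(t):
        return 0.1 + 0.05 * t   # illustrative increasing baseline intensity

    def simulate(x, beta, horizon=10.0):
        """Lewis-Shedler thinning for a count-proportional (Yule-type)
        intensity: base_rate(t) * exp(x * beta) * (N(t-) + 1)."""
        events, t = [], 0.0
        while t < horizon:
            n = len(events)
            # bound on the intensity until the next event (baseline is increasing)
            lam_max = base_rate(horizon) * np.exp(x * beta) * (n + 1)
            t += rng.exponential(1.0 / lam_max)
            if t >= horizon:
                break
            lam = base_rate(t) * np.exp(x * beta) * (n + 1)
            if rng.uniform() < lam / lam_max:
                events.append(t)
        return events

    print(simulate(x=1.0, beta=0.3))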