440 results for extrapolation
Abstract:
Health economic evaluations require estimates of expected survival from patients receiving different interventions, often over a lifetime. However, data on the patients of interest are typically only available for a much shorter follow-up time, from randomised trials or cohorts. Previous work showed how to use general population mortality to improve extrapolations of the short-term data, assuming a constant additive or multiplicative effect on the hazards for all-cause mortality for study patients relative to the general population. A more plausible assumption may be a constant effect on the hazard for the specific cause of death targeted by the treatments. To address this problem, we use independent parametric survival models for cause-specific mortality among the general population. Because causes of death are unobserved for the patients of interest, a polyhazard model is used to express their all-cause mortality as a sum of latent cause-specific hazards. Assuming proportional cause-specific hazards between the general and study populations then allows us to extrapolate mortality of the patients of interest to the long term. A Bayesian framework is used to jointly model all sources of data. By simulation, we show that ignoring cause-specific hazards leads to biased estimates of mean survival when the proportion of deaths due to the cause of interest changes through time. The methods are applied to an evaluation of implantable cardioverter defibrillators for the prevention of sudden cardiac death among patients with cardiac arrhythmia. After accounting for cause-specific mortality, substantial differences are seen in estimates of life years gained from implantable cardioverter defibrillators.
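A minimal sketch, in LaTeX, of the extrapolation assumption as we read it (the symbols beta_k, h_k^pop and the number of latent causes are illustrative, not taken from the paper): the all-cause hazard of the study patients is written as a sum of latent cause-specific hazards, each proportional to the corresponding general-population cause-specific hazard, and mean survival follows by integrating the implied survivor function.

h_{\mathrm{study}}(t) = \sum_{k} \beta_k \, h_{k}^{\mathrm{pop}}(t), \qquad
S_{\mathrm{study}}(t) = \exp\!\left(-\int_0^{t} h_{\mathrm{study}}(u)\,\mathrm{d}u\right), \qquad
E[T] = \int_0^{\infty} S_{\mathrm{study}}(t)\,\mathrm{d}t .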
Abstract:
Clinical applications of quantitative computed tomography (qCT) in patients with pulmonary opacifications are hindered by the radiation exposure and by the arduous manual image processing. We hypothesized that extrapolation from only ten thoracic CT sections would provide reliable information on the aeration of the entire lung. CTs of 72 patients with normal and 85 patients with opacified lungs were studied retrospectively. Volumes and masses of the lung and its differently aerated compartments were obtained from all CT sections. Then only the most cranial and caudal sections and a further eight evenly spaced sections between them were selected. The results from these ten sections were extrapolated to the entire lung. The agreement between both methods was assessed with Bland-Altman plots. Median (range) total lung volume and mass were 3,738 (1,311-6,768) ml and 957 (545-3,019) g; the corresponding biases (limits of agreement) were 26 (-42 to 95) ml and 8 (-21 to 38) g, respectively. The median volumes (range) of differently aerated compartments (percentage of total lung volume) were 1 (0-54)% for the nonaerated, 5 (1-44)% for the poorly aerated, 85 (28-98)% for the normally aerated, and 4 (0-48)% for the hyperaerated subvolume. The agreement between the extrapolated results and those from all CT sections was excellent: all bias values were below 1% of the total lung volume or mass, and the limits of agreement never exceeded ±2%. The extrapolation method can reduce radiation exposure and shorten the time required for qCT analysis of lung aeration.
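A minimal sketch of one plausible implementation (the abstract does not state the exact extrapolation formula, so the Cavalieri-style scaling rule and the names below are our assumption): each of the ten evenly spaced sections is taken to represent an equal share of the cranio-caudal extent, so the whole-lung quantity is estimated as the mean per-section value times the total number of sections.

import numpy as np

def extrapolate_whole_lung(per_section_values, n_samples=10):
    """Estimate a whole-lung quantity (volume or mass) from ten sampled sections.

    per_section_values: quantity measured on every CT section; only the sampled
    sections are used here, mimicking an acquisition limited to ten slices.
    """
    values = np.asarray(per_section_values, dtype=float)
    n = len(values)
    # most cranial, most caudal, and eight evenly spaced sections in between
    idx = np.round(np.linspace(0, n - 1, n_samples)).astype(int)
    # each sampled section is assumed to represent n / n_samples sections
    return values[idx].mean() * n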
Abstract:
Here, we report suboptimal efavirenz exposure in an obese patient treated with the standard 600 mg dose. Tripling the dose allowed attainment of therapeutic efavirenz concentrations. We developed an in vitro-in vivo extrapolation model to quantify dose requirements in obese individuals. Obesity represents a risk factor for antiretroviral therapy underdosing.
Abstract:
I use a multi-layer feedforward perceptron, with backpropagation learning implemented via stochastic gradient descent, to extrapolate the volatility smile of Euribor derivatives over low strikes by training the network on parametric prices.
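A minimal sketch of the same idea (not the author's code; the quadratic stand-in for the parametric smile, the strike grid, and the network size are placeholders): a small feedforward network is fitted with SGD-based backpropagation to smile values generated by a parametric model over the quoted strikes, then queried below that range to extrapolate.

import numpy as np
from sklearn.neural_network import MLPRegressor

quoted_strikes = np.linspace(0.99, 1.03, 40).reshape(-1, 1)          # quoted range (illustrative)
parametric_vols = 0.20 + 1.5 * np.log(quoted_strikes.ravel()) ** 2   # stand-in parametric smile

# multilayer feedforward perceptron trained by backpropagation / SGD
net = MLPRegressor(hidden_layer_sizes=(16, 16), solver="sgd",
                   learning_rate_init=0.01, max_iter=5000, random_state=0)
net.fit(quoted_strikes, parametric_vols)

low_strikes = np.linspace(0.95, 0.99, 10).reshape(-1, 1)             # extrapolation region
extrapolated_smile = net.predict(low_strikes)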
Abstract:
In this work we describe the use of bilinear statistical models as a means of factoring the shape variability into two components, attributed to inter-subject variation and to the intrinsic dynamics of the human heart. We show that it is feasible to reconstruct the shape of the heart at discrete points in the cardiac cycle: provided we are given a small number of shape instances representing the same heart at different points in the same cycle, the bilinear model can be used to establish this. Using a temporal and a spatial alignment step in the preprocessing of the shapes, around half of the reconstruction errors were on the order of the axial image resolution of 2 mm, and over 90% were within 3.5 mm. From this, we conclude that the dynamics were indeed separated from the inter-subject variability in our dataset.
Abstract:
The pharmacokinetics (PK) of efavirenz (EFV) is characterized by marked interpatient variability that correlates with its pharmacodynamics (PD). In vitro-in vivo extrapolation (IVIVE) is a "bottom-up" approach that combines drug data with system information to predict PK and PD. The aim of this study was to simulate EFV PK and PD after dose reductions. At the standard dose, the simulated probability was 80% for viral suppression and 28% for central nervous system (CNS) toxicity. After a dose reduction to 400 mg, the probabilities of viral suppression were reduced to 69, 75, and 82%, and those of CNS toxicity were 21, 24, and 29% for the 516 GG, 516 GT, and 516 TT genotypes, respectively. With reduction of the dose to 200 mg, the probabilities of viral suppression decreased to 54, 62, and 72% and those of CNS toxicity decreased to 13, 18, and 20% for the 516 GG, 516 GT, and 516 TT genotypes, respectively. These findings indicate how dose reductions might be applied in patients with favorable genetic characteristics.
Abstract:
We develop an abstract extrapolation theory for the real interpolation method that covers and improves the most recent versions of the celebrated theorems of Yano and Zygmund. As a consequence of our method, we give new endpoint estimates for the Sobolev embedding theorem on an arbitrary domain Omega.
Abstract:
Knowledge of natural water availability, which is characterized by low flows, is essential for the planning and management of water resources. One of the most widely used hydrological techniques to determine streamflow is regionalization, but the extrapolation of regionalization equations beyond the limits of the sample data is not recommended. This paper proposes a new method for reducing the overestimation errors associated with the extrapolation of regionalization equations for low flows. The method is based on the use of a threshold value for the maximum specific low-flow discharge estimated at the gauging sites that are used in the regionalization. When a specific low flow estimated with the regionalization equation exceeds the threshold value, the low flow is instead obtained by multiplying the drainage area by the threshold value. This restriction imposes a physical limit on the low flow, which reduces the error of overestimating flows in regions of extrapolation. A case study was carried out in the Urucuia river basin, in Brazil, and the results showed that the method performed well in reducing the risk associated with extrapolation.
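A minimal sketch of the threshold rule described above (function and variable names are illustrative): the specific low flow returned by the regionalization equation is capped at the maximum specific low flow q_max estimated from the gauged sites, and the capped value is multiplied by the drainage area.

def low_flow_estimate(drainage_area, regionalized_specific_flow, q_max):
    """Low-flow estimate with a physical upper limit on specific discharge.

    drainage_area: catchment area (e.g. km^2)
    regionalized_specific_flow, q_max: specific low flows (e.g. L s^-1 km^-2)
    """
    return drainage_area * min(regionalized_specific_flow, q_max)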
Abstract:
We argue that population modeling can add value to ecological risk assessment by reducing uncertainty when extrapolating from ecotoxicological observations to relevant ecological effects. We review other methods of extrapolation, ranging from application factors to species sensitivity distributions to suborganismal (biomarker and "-omics") responses to quantitative structure activity relationships and model ecosystems, drawing attention to the limitations of each. We suggest a simple classification of population models and critically examine each model in an extrapolation context. We conclude that population models have the potential for adding value to ecological risk assessment by incorporating better understanding of the links between individual responses and population size and structure and by incorporating greater levels of ecological complexity. A number of issues, however, need to be addressed before such models are likely to become more widely used. In a science context, these involve challenges in parameterization, questions about appropriate levels of complexity, issues concerning how specific or general the models need to be, and the extent to which interactions through competition and trophic relationships can be easily incorporated.
Abstract:
The molar single-ion activity coefficient (y_F) of fluoride ions was determined at 25 °C and ionic strengths between 0.100 and 3.00 mol L⁻¹ NaClO4 using an ion-selective electrode. The dependence of the activity coefficient on ionic strength was determined to be Φ_F = log y_F = 0.2315·I − 0.041·I². The function Φ_F(I), combined with functions obtained in previous work for copper (Φ_Cu) and hydrogen (Φ_H), allowed us to estimate the stoichiometric and thermodynamic protonation constants of some halides and pseudo-halides, as well as the formation constants of some 1:1 complexes of pseudo-halides and fluoride with bivalent cations. The calculation procedure proposed in this paper is consistent with critically selected experimental data. It was demonstrated that Φ_F(I) can be used to predict the thermodynamic equilibrium parameters independently of Pearson's hardness of acids and bases.
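A minimal sketch that evaluates the reported dependence (coefficients taken from the abstract; the function name is ours), returning y_F from the ionic strength via Φ_F(I) = log y_F:

def fluoride_activity_coefficient(ionic_strength):
    """Molar single-ion activity coefficient of fluoride at 25 °C in NaClO4.

    Valid for ionic strengths between 0.100 and 3.00 mol/L, per the abstract.
    """
    phi_f = 0.2315 * ionic_strength - 0.041 * ionic_strength ** 2  # log10(y_F)
    return 10.0 ** phi_f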
Abstract:
One of the purposes of this study is to give further constraints on the temperature range of the zircon partial annealing zone over a geological time scale using data from borehole zircon samples, which have experienced stable temperatures for ∼1 Ma. In this way, the extrapolation problem is explicitly addressed by fitting the zircon annealing models to geological-timescale data. Several empirical model formulations have been proposed to perform these calibrations and are compared in this work. The basic form proposed for annealing models is the Arrhenius-type model, and other annealing models are based on the same general formulation. These empirical model equations have been preferred because many of the phenomena involved, from track formation to chemical etching, are not well understood. However, there are two other models which try to establish a direct correlation between their parameters and the related phenomena. To compare the response of the different annealing models, thermal indices, such as closure temperature, total annealing temperature and the partial annealing zone, have been calculated and compared with field evidence. After comparing the different models, it was concluded that the fanning curvilinear models yield the best agreement between predicted index temperatures and field evidence.
Abstract:
The extrapolation chamber is a parallel-plate ionization chamber that allows variation of its air-cavity volume. In this work, an experimental study and MCNP-4C Monte Carlo code simulations of an ionization chamber designed and constructed at the Calibration Laboratory at IFEN, to be used as a secondary dosimetry standard for low-energy X-rays, are reported. The results obtained were within the international recommendations, and the simulations showed that the components of the extrapolation chamber may influence its response by up to 11.0%.
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1, while avoiding its near breakdowns, with the stability of SqRRE1, while avoiding its stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
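A minimal sketch (not the authors' code) of one squared first-order cycle in the spirit of the SQUAREM schemes above; `em_update` stands for a single ordinary EM step supplied by the user, and labelling the two steplengths as RRE1- and MPE1-type reflects our reading of the first-order schemes rather than a quotation from the paper.

import numpy as np

def squarem_cycle(theta, em_update, method="rre1"):
    """One SQUAREM-style cycle: two EM evaluations plus an extrapolated update."""
    theta1 = em_update(theta)               # first EM step
    theta2 = em_update(theta1)              # second EM step
    r = theta1 - theta                      # first difference
    v = theta2 - 2.0 * theta1 + theta       # second difference
    if np.allclose(v, 0.0):                 # near breakdown: fall back to plain EM
        return theta2
    if method == "rre1":                    # reduced rank extrapolation steplength
        alpha = np.dot(r, v) / np.dot(v, v)
    else:                                   # minimal polynomial extrapolation steplength
        alpha = np.dot(r, r) / np.dot(r, v)
    # "squaring": apply the one-step extrapolation twice within the cycle
    return theta - 2.0 * alpha * r + alpha ** 2 * v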