888 results for Two-stage stochastic model


Relevance:

100.00%

Publisher:

Abstract:

Spatial prediction of hourly rainfall via radar calibration is addressed. The change of support problem (COSP), which arises when the spatial supports of different data sources do not coincide, is faced in a non-Gaussian setting; in fact, hourly rainfall in the Emilia-Romagna region of Italy is characterized by an abundance of zero values and right-skewness of the distribution of positive amounts. Direct rain gauge measurements at sparsely distributed locations and hourly cumulated radar grids are provided by ARPA-SIMC Emilia-Romagna. We propose a three-stage Bayesian hierarchical model for radar calibration, exploiting rain gauges as the reference measure. Rain probability and amounts are modeled via linear relationships with radar on the log scale; spatially correlated Gaussian effects capture the residual information. We employ a probit link for rainfall probability and a Gamma distribution for positive rainfall amounts; the two steps are joined via a two-part semicontinuous model. Three model specifications that address COSP differently are presented; in particular, a stochastic weighting of all radar pixels, driven by a latent Gaussian process defined on the grid, is employed. Estimation is performed via MCMC procedures implemented in C, linked to the R software. Communication and evaluation of probabilistic, point and interval predictions are investigated. A non-randomized PIT histogram is proposed for correctly assessing calibration and coverage of two-part semicontinuous models. Predictions obtained with the different model specifications are evaluated via graphical tools (Reliability Plot, Sharpness Histogram, PIT Histogram, Brier Score Plot and Quantile Decomposition Plot), proper scoring rules (Brier Score, Continuous Ranked Probability Score) and consistent scoring functions (Root Mean Square Error and Mean Absolute Error, addressing the predictive mean and median, respectively). Calibration is reached, and the inclusion of neighbouring information slightly improves predictions. All specifications outperform a benchmark model with uncorrelated effects, confirming the relevance of spatial correlation for modeling rainfall probability and accumulation.
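
To fix ideas, here is a minimal sketch of the two-part semicontinuous structure described above: a probit model for rain occurrence and a Gamma model for positive amounts, both driven by the log-scale radar value. The coefficients, the Gamma shape and the synthetic radar field are hypothetical, and the paper's spatially correlated Gaussian effects and Bayesian estimation are omitted.

```python
# Minimal sketch: two-part semicontinuous rainfall given log-radar.
# All coefficients (b_occ, b_amt) and the Gamma shape are hypothetical;
# the spatial Gaussian effects of the paper are omitted.
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(0)

def simulate_two_part(log_radar, b_occ=(-0.5, 0.8), b_amt=(0.2, 0.9), shape=2.0):
    """Draw hourly rainfall: probit occurrence x Gamma positive amounts."""
    p_rain = norm.cdf(b_occ[0] + b_occ[1] * log_radar)       # rain probability
    rains = rng.random(log_radar.shape) < p_rain
    mu = np.exp(b_amt[0] + b_amt[1] * log_radar)             # Gamma mean (log link)
    amounts = gamma.rvs(a=shape, scale=mu / shape, random_state=rng)
    return np.where(rains, amounts, 0.0)

log_radar = np.log1p(rng.gamma(1.5, 2.0, size=1000))         # synthetic radar field
y = simulate_two_part(log_radar)
print(f"zero fraction {np.mean(y == 0):.2f}, mean positive amount {y[y > 0].mean():.2f}")
```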

Relevance:

100.00%

Publisher:

Abstract:

The purpose of this study was to determine the pharmacokinetics of [C-14]diclofenac, [C-14]salicylate and [H-3]clonidine using a single-pass rat head perfusion preparation. The head was perfused with 3-[N-morpholino]propanesulfonic acid (MOPS)-buffered Ringer's solution. Tc-99m-labelled red blood cells and a drug were injected as a bolus into the internal carotid artery and collected from the posterior facial vein over 28 min. A two-barrier stochastic organ model was used to estimate the statistical moments of the solutes. Plasma, interstitial and cellular distribution volumes for the solutes ranged from 1.0 mL (diclofenac) to 1.6 mL (salicylate), 2.0 mL (diclofenac) to 4.2 mL (water), and 3.9 mL (salicylate) to 20.9 mL (diclofenac), respectively. A comparison of these volumes with water indicated some exclusion of the drugs from the interstitial space and of salicylate from the cellular space. Permeability-surface area (PS) products, calculated from plasma to interstitial fluid permeation clearances (CLPI) (range 0.02-0.40 mL s⁻¹) and the fractions of solute unbound in the perfusate, were in the order diclofenac > salicylate > clonidine > sucrose (from 41.8 to 0.10 mL s⁻¹). The slow efflux of diclofenac, compared with clonidine and salicylate, may be related to its low average unbound fraction in the cells. This work accounts for the tail of disposition curves in describing pharmacokinetics in the head.
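
The statistical-moment analysis underlying such organ models reduces, in its simplest form, to moments of the outflow (indicator-dilution) curve. The sketch below computes the mean transit time and relative dispersion of a synthetic curve by trapezoidal integration; it is a generic illustration, not the two-barrier model of the paper.

```python
# Moments of a synthetic outflow curve over the 28 min collection window.
# Generic indicator-dilution moment analysis, not the paper's two-barrier model.
import numpy as np

def integrate(y, x):
    """Trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

t = np.linspace(0.0, 28.0 * 60.0, 2000)        # time in seconds
c = (t / 60.0) * np.exp(-t / 120.0)            # hypothetical outflow concentration

auc = integrate(c, t)                          # area under the curve
mtt = integrate(t * c, t) / auc                # mean transit time (first moment)
var = integrate((t - mtt) ** 2 * c, t) / auc   # second central moment
print(f"MTT = {mtt:.1f} s, relative dispersion CV^2 = {var / mtt**2:.2f}")
# A distribution volume would then follow as perfusate flow * MTT.
```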

Relevance:

100.00%

Publisher:

Abstract:

Prediction of random effects is an important problem with expanding applications. In the simplest context, the problem corresponds to prediction of the latent value (the mean) of a realized cluster selected via two-stage sampling. Recently, Stanek and Singer [Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 119-130] developed best linear unbiased predictors (BLUP) under a finite population mixed model that outperform BLUPs from mixed models and superpopulation models. Their setup, however, does not allow for unequally sized clusters. To overcome this drawback, we consider an expanded finite population mixed model based on a larger set of random variables that span a higher dimensional space than those typically applied to such problems. We show that BLUPs for linear combinations of the realized cluster means derived under such a model have considerably smaller mean squared error (MSE) than those obtained from mixed models, superpopulation models, and finite population mixed models. We motivate our general approach by an example developed for two-stage cluster sampling and show that it faithfully captures the stochastic aspects of sampling in the problem. We also consider simulation studies to illustrate the increased accuracy of the BLUP obtained under the expanded finite population mixed model. (C) 2007 Elsevier B.V. All rights reserved.
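
For orientation, the sketch below shows the textbook shrinkage (BLUP) predictor of a realized cluster mean under a simple mixed model with known variance components; the paper's expanded finite population predictor differs and is reported to have smaller MSE. All numbers are synthetic.

```python
# Textbook shrinkage (BLUP) of realized cluster means under a simple mixed model
# with known variance components; synthetic data, equal cluster sizes.
import numpy as np

rng = np.random.default_rng(1)
n_clusters, m = 20, 8                       # clusters, units sampled per cluster
sigma_b2, sigma_e2 = 4.0, 9.0               # between- and within-cluster variances

latent = rng.normal(50.0, np.sqrt(sigma_b2), n_clusters)             # true cluster means
y = latent[:, None] + rng.normal(0.0, np.sqrt(sigma_e2), (n_clusters, m))

ybar_i, ybar = y.mean(axis=1), y.mean()
k = sigma_b2 / (sigma_b2 + sigma_e2 / m)    # shrinkage factor
blup = ybar + k * (ybar_i - ybar)           # predicted latent cluster means

print(f"shrinkage k = {k:.2f}")
print(f"MSE sample means {np.mean((ybar_i - latent) ** 2):.2f} "
      f"vs MSE BLUP {np.mean((blup - latent) ** 2):.2f}")
```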

Relevance:

100.00%

Publisher:

Abstract:

The climate of Marine Isotope Stage (MIS) 11, the interglacial roughly 400,000 years ago, is investigated for four time slices: 416, 410, 400, and 394 ka. The overall picture is that MIS 11 was a relatively warm interglacial in comparison to the preindustrial, with Northern Hemisphere (NH) summer temperatures early in MIS 11 (416-410 ka) warmer than preindustrial, though winters were cooler. Later in MIS 11, especially around 400 ka, conditions were cooler in the NH summer, mainly in the high latitudes. Climate changes simulated by the models were mainly driven by insolation changes, with the exception of two local feedbacks that amplify them. Especially prominent are the NH high latitudes, where reductions in sea ice cover lead to a winter warming early in MIS 11, and the tropics, where monsoon changes lead to stronger climate variations than one would expect on the basis of the latitudinal mean insolation change alone. The results support a northward expansion of trees at the expense of grasses in the high northern latitudes early during MIS 11, especially in northern Asia and North America.

Relevance:

100.00%

Publisher:

Abstract:

We present a general model to find the best allocation of a limited amount of supplements (extra minutes added to a timetable in order to reduce delays) on a set of interfering railway lines. By the best allocation, we mean the solution under which the weighted sum of expected delays is minimal. Our aim is to finely adjust an already existing and well-functioning timetable. We model this inherently stochastic optimization problem by using two-stage recourse models from stochastic programming, building upon earlier research from the literature. We present an improved formulation, allowing for an efficient solution using a standard algorithm for recourse models. We show that our model may be solved using any of the following theoretical frameworks: linear programming, stochastic programming and convex non-linear programming, and present a comparison of these approaches based on a real-life case study. Finally, we introduce stochastic dependency into the model, and present a statistical technique to estimate the model parameters from empirical data.
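
A deterministic-equivalent formulation of this kind of two-stage recourse model can be sketched as a scenario-based linear program: first-stage supplements per segment, second-stage delays per scenario. The data, weights and the simplistic delay rule (delay = max(disturbance - supplement, 0)) below are placeholders, not the paper's model.

```python
# Scenario-based deterministic equivalent of a toy two-stage recourse model:
# first-stage supplements s_j (minutes) per segment, second-stage delays z_kj.
# Disturbance data, weights and the delay rule are simplified placeholders.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(2)
J, K, budget = 5, 30, 6.0                        # segments, scenarios, total supplement
w = np.ones(J)                                   # delay weights per segment
p = np.full(K, 1.0 / K)                          # scenario probabilities
d = rng.exponential(2.0, size=(K, J))            # primary disturbances (minutes)

# Decision vector x = [s_1..s_J, z_11..z_KJ]; minimize sum_k p_k sum_j w_j z_kj.
c = np.concatenate([np.zeros(J), (p[:, None] * w[None, :]).ravel()])

# Constraints: z_kj >= d_kj - s_j  (written as -s_j - z_kj <= -d_kj)  and  sum_j s_j <= budget.
A = np.zeros((K * J + 1, J + K * J))
b = np.zeros(K * J + 1)
for k in range(K):
    for j in range(J):
        r = k * J + j
        A[r, j], A[r, J + r], b[r] = -1.0, -1.0, -d[k, j]
A[-1, :J], b[-1] = 1.0, budget

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (J + K * J), method="highs")
print("supplements:", np.round(res.x[:J], 2), "| expected weighted delay:", round(res.fun, 2))
```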

Relevance:

100.00%

Publisher:

Abstract:

In this paper we discuss implicit methods based on stiffly accurate Runge-Kutta methods and splitting techniques for solving Stratonovich stochastic differential equations (SDEs). Two splitting techniques, the balanced splitting technique and the deterministic splitting technique, are used in this paper. We construct a two-stage implicit Runge-Kutta method with strong order 1.0 that is corrected twice and requires no update. The stability properties and numerical results show that this approach is suitable for solving stiff SDEs. (C) 2001 Elsevier Science B.V. All rights reserved.
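
As a rough illustration of drift- and diffusion-implicit stochastic integration (not the paper's two-stage corrected scheme or its splitting techniques), the sketch below applies a stochastic midpoint rule to a scalar Stratonovich SDE, solving each implicit step by fixed-point iteration and comparing against the known pathwise solution.

```python
# Stochastic midpoint step for the scalar Stratonovich SDE
# dX = -lam*X dt + sig*X o dW, solved by fixed-point iteration per step.
# Generic illustration only; not the paper's two-stage corrected scheme.
import numpy as np

rng = np.random.default_rng(3)
lam, sig = 2.0, 0.5
a = lambda x: -lam * x          # drift
b = lambda x: sig * x           # diffusion

T, N, x0 = 1.0, 1000, 1.0
h = T / N
x, w = x0, 0.0
for _ in range(N):
    dw = rng.normal(0.0, np.sqrt(h))
    w += dw
    y = x                        # fixed-point iteration for the implicit midpoint step
    for _ in range(50):
        mid = 0.5 * (x + y)
        y_new = x + a(mid) * h + b(mid) * dw
        if abs(y_new - y) < 1e-12:
            break
        y = y_new
    x = y

print(f"numerical {x:.5f} vs exact Stratonovich solution {x0 * np.exp(-lam * T + sig * w):.5f}")
```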

Relevance:

100.00%

Publisher:

Abstract:

Management are keen to maximize the life span of an information system because of the high cost, organizational disruption, and risk of failure associated with its re-development or replacement. This research investigates the effects that various factors have on an information system's life span by understanding how the factors affect an information system's stability. The research builds on a previously developed two-stage model of information system change, whereby an information system is either in a stable state of evolution, in which the information system's functionality is evolving, or in a state of revolution, in which the information system is being replaced because it is not providing the functionality expected by its users. A case study surveyed a number of systems within one organization. The aim was to test whether a relationship existed between the base value of the volatility index (a measure of the stability of an information system) and certain system characteristics. Data relating to some 3000 user change requests covering 40 systems over a 10-year period were obtained. The following factors were hypothesized to have significant associations with the base value of the volatility index: language level (generation of the language of construction), system size, system age, and the timing of changes applied to a system. Significant associations were found in the hypothesized directions, except that the timing of user changes was not associated with any change in the value of the volatility index. Copyright (C) 2002 John Wiley & Sons, Ltd.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we construct predictor-corrector (PC) methods based on the trivial predictor and stochastic implicit Runge-Kutta (RK) correctors for solving stochastic differential equations. Using colored rooted tree theory and stochastic B-series, the order condition theorem is derived for constructing stochastic RK methods based on PC implementations. We also present detailed order conditions of the PC methods using stochastic implicit RK correctors with strong global order 1.0 and 1.5. A two-stage implicit RK method with strong global order 1.0 and a four-stage implicit RK method with strong global order 1.5, used as the correctors, are constructed in this paper. The mean-square stability properties and numerical results of the PC methods based on these two implicit RK correctors are reported.
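
Mean-square stability, mentioned above, can be probed numerically on the linear test equation dX = lam*X dt + mu*X dW. The sketch below uses a trivial predictor followed by a fixed number of corrector sweeps toward the drift-implicit Euler step, a generic stand-in for the paper's stochastic RK correctors, and estimates E[X_N^2] by Monte Carlo for several step sizes.

```python
# Monte Carlo probe of mean-square stability on dX = lam*X dt + mu*X dW using a
# trivial predictor plus a fixed number of corrector sweeps toward the
# drift-implicit Euler step (a generic stand-in, not the paper's RK correctors).
import numpy as np

rng = np.random.default_rng(4)

def ms_final(lam, mu, h, n_steps=50, n_paths=10_000, n_corr=2):
    x = np.ones(n_paths)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(h), n_paths)
        y = x.copy()                            # trivial predictor
        for _ in range(n_corr):                 # corrector sweeps
            y = x + lam * y * h + mu * x * dw
        x = y
    return np.mean(x ** 2)

for h in (0.1, 0.5, 1.0):
    print(f"h = {h}: E[X_N^2] ~ {ms_final(-4.0, 1.0, h):.3e}")
```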

Relevance:

100.00%

Publisher:

Abstract:

The interplay of seasonality, the system's nonlinearities, and intrinsic stochasticity is studied for a seasonally forced susceptible-exposed-infective-recovered (SEIR) stochastic model. The model is explored in the parameter region that corresponds to childhood infectious diseases such as measles. The power spectrum of the stochastic fluctuations around the attractors of the deterministic system that describes the model in the thermodynamic limit is computed analytically and validated by stochastic simulations for large system sizes. Size effects are studied through additional simulations. Other effects, such as switching between coexisting attractors induced by stochasticity, often mentioned in the literature as playing an important role in the dynamics of childhood infectious diseases, are also investigated. The main conclusion is that stochastic amplification, rather than these effects, is the key ingredient for understanding the observed incidence patterns.
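
A rough illustration of stochastic amplification, under assumed measles-like parameters and a simplified binomial-chain SEIR without seasonal forcing, is sketched below: demographic noise around the endemic equilibrium produces a spectral peak near the deterministic damped period.

```python
# Discrete-time stochastic SEIR chain (binomial transitions, no seasonal forcing)
# and the periodogram of infective fluctuations.  Parameters are generic
# measles-like values, not those of the paper.
import numpy as np

rng = np.random.default_rng(5)
N = 5_000_000                                        # population size (hypothetical)
beta, sigma, gamma, mu = 3.4, 1 / 8, 1 / 5, 1 / (50 * 365)   # rates per day
dt, years = 0.1, 40
steps = int(years * 365 / dt)

S, E, I = int(N / 17), 2000, 1300                    # start near the endemic equilibrium
I_series = np.empty(steps)
death_p = 1 - np.exp(-mu * dt)
for t in range(steps):
    new_E = rng.binomial(S, 1 - np.exp(-beta * I / N * dt))
    new_I = rng.binomial(E, 1 - np.exp(-sigma * dt))
    new_R = rng.binomial(I, 1 - np.exp(-gamma * dt))
    S += rng.poisson(mu * N * dt) - new_E - rng.binomial(S, death_p)
    E += new_E - new_I - rng.binomial(E, death_p)
    I += new_I - new_R - rng.binomial(I, death_p)
    I_series[t] = I

x = I_series[steps // 4:] - I_series[steps // 4:].mean()    # discard transient
spec = np.abs(np.fft.rfft(x)) ** 2
spec = np.convolve(spec, np.ones(5) / 5, mode="same")       # light smoothing
freqs = np.fft.rfftfreq(x.size, d=dt)                       # cycles per day
peak = freqs[1 + np.argmax(spec[1:])]
print(f"dominant fluctuation period ~ {1 / peak / 365:.1f} years")
```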

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVE: To analyze factors associated with cervical cancer screening failure. METHODS: Population-based cross-sectional study with self-weighted two-stage cluster sampling conducted in the cities of Fortaleza (Northeastern Brazil) and Rio de Janeiro (Southeastern Brazil) in 2002. Subjects were women aged 25-59 years; screening failure was defined as not having undergone the Pap smear test in the three years prior to the study. Data were analyzed through Poisson regression using a hierarchical model. RESULTS: The proportion of women who did not undergo the Pap smear test in Fortaleza and Rio de Janeiro was 19.1% (95% CI: 16.1;22.1) and 16.5% (95% CI: 14.1;18.9), respectively. Higher prevalence ratios of cervical cancer screening failure in both cities were seen among women with low education and low per capita income, of older age, unmarried, and who had never undergone mammography, clinical breast examination, or blood glucose and cholesterol testing. Smokers also had lower screening rates than non-smokers, although this difference was statistically significant only in Rio de Janeiro. CONCLUSIONS: The study findings point to the need for interventions focusing particularly on women with worse socioeconomic conditions and poorer access to healthcare, older women, and unmarried women. Education activities must prioritize screening of asymptomatic women and early diagnosis of symptomatic women, and access to adequate diagnostic methods and treatment should be provided.
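
Prevalence ratios in cross-sectional data of this kind are commonly obtained from Poisson regression with robust standard errors. The sketch below shows that generic recipe on synthetic data with hypothetical covariates; it ignores the survey's two-stage cluster sampling design and the hierarchical modelling strategy of the study.

```python
# Generic recipe: Poisson regression with robust (HC1) errors for prevalence
# ratios of screening failure, on synthetic data with hypothetical covariates.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 3000
df = pd.DataFrame({"low_education": rng.binomial(1, 0.4, n),
                   "age": rng.integers(25, 60, n)})
p_fail = 0.10 * np.exp(0.5 * df["low_education"] + 0.01 * (df["age"] - 25))
df["no_pap_smear"] = rng.binomial(1, np.clip(p_fail.to_numpy(), 0.0, 1.0))

fit = smf.glm("no_pap_smear ~ low_education + age", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")
print(np.exp(fit.params))        # prevalence ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals
```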

Relevance:

100.00%

Publisher:

Abstract:

The evolution of a quantitative phenotype is often envisioned as a trait substitution sequence in which mutant alleles repeatedly replace resident ones. In infinite populations, the invasion fitness of a mutant in this two-allele representation of the evolutionary process is used to characterize features of long-term phenotypic evolution, such as singular points, convergence stability (established from first-order effects of selection), branching points, and evolutionary stability (established from second-order effects of selection). Here, we characterize long-term phenotypic evolution in finite populations from this two-allele representation of the evolutionary process. We construct a stochastic model describing evolutionary dynamics at non-rare mutant allele frequencies. We then derive stability conditions based on stationary average mutant frequencies in the presence of vanishing mutation rates. We find that the stability condition obtained from second-order effects of selection is identical to convergence stability. Thus, in two-allele systems in finite populations, convergence stability is sufficient to characterize long-term evolution under the trait substitution sequence assumption. We perform individual-based simulations to confirm our analytic results.
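
As a toy companion to the two-allele setting, the sketch below estimates the fixation probability of a single mutant in a Wright-Fisher population and compares it with Kimura's diffusion approximation; it is a generic illustration, not the paper's stationarity analysis of trait substitution sequences.

```python
# Wright-Fisher fixation probability of a single mutant with selection
# coefficient s, compared with Kimura's haploid diffusion approximation.
import numpy as np

rng = np.random.default_rng(7)

def fixation_prob(N=200, s=0.02, n_runs=5000):
    fixed = 0
    for _ in range(n_runs):
        i = 1                                          # one mutant copy
        while 0 < i < N:
            p = i * (1 + s) / (i * (1 + s) + (N - i))  # selection, then binomial sampling
            i = rng.binomial(N, p)
        fixed += (i == N)
    return fixed / n_runs

N, s = 200, 0.02
est = fixation_prob(N, s)
kimura = (1 - np.exp(-2 * s)) / (1 - np.exp(-2 * N * s))
print(f"simulated {est:.4f} vs Kimura approximation {kimura:.4f}")
```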

Relevance:

100.00%

Publisher:

Abstract:

In epidemiologic studies, measurement error in dietary variables often attenuates the association between dietary intake and disease occurrence. To adjust for the attenuation caused by error in dietary intake, regression calibration is commonly used. To apply regression calibration, unbiased reference measurements are required. Short-term reference measurements for foods that are not consumed daily contain excess zeroes that pose challenges in the calibration model. We adapted the two-part regression calibration model, initially developed for multiple replicates of reference measurements per individual, to a single-replicate setting. We showed how to handle excess zero reference measurements with a two-step modeling approach, how to explore heteroscedasticity in the consumed amount with a variance-mean graph, how to explore nonlinearity with generalized additive modeling (GAM) and empirical logit approaches, and how to select covariates in the calibration model. The performance of the two-part calibration model was compared with that of its one-part counterpart. We used vegetable intake and mortality data from the European Prospective Investigation into Cancer and Nutrition (EPIC) study, in which reference measurements were taken with 24-hour recalls. For each of the three vegetable subgroups assessed separately, correcting for error with an appropriately specified two-part calibration model resulted in about a threefold increase in the strength of association with all-cause mortality, as measured by the log hazard ratio. We further found that the standard way of including covariates in the calibration model can lead to overfitting the two-part calibration model. Moreover, the extent of adjustment for error is influenced by the number and forms of the covariates in the calibration model. For episodically consumed foods, we advise researchers to pay special attention to the response distribution, nonlinearity, and covariate inclusion when specifying the calibration model.
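
The two-part idea can be sketched compactly: a logistic model for whether the reference intake is non-zero and a linear model for the log positive amount, combined into a calibrated expected intake. The data below are synthetic stand-ins for questionnaire (Q) and 24-hour recall (R) measurements, and the covariate handling discussed in the abstract is omitted.

```python
# Two-part calibration sketch on synthetic intake data:
# part 1 models P(R > 0 | Q), part 2 models E[log R | R > 0, Q].
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 5000
q = rng.gamma(2.0, 50.0, n)                        # questionnaire intake (g/day)
true = 0.5 * q + rng.normal(0, 30, n)              # latent usual intake
consumed = rng.random(n) < 1 / (1 + np.exp(-(true - 40) / 20))
r = np.where(consumed, np.maximum(true + rng.normal(0, 40, n), 1.0), 0.0)  # 24h recall

X = sm.add_constant(q)
part1 = sm.Logit((r > 0).astype(int), X).fit(disp=0)          # probability of consumption
pos = r > 0
part2 = sm.OLS(np.log(r[pos]), X[pos]).fit()                   # positive amount, log scale

p_pos = part1.predict(X)
amount = np.exp(part2.predict(X) + 0.5 * part2.mse_resid)      # lognormal back-transform
calibrated = p_pos * amount                                    # two-part calibrated intake
print(f"mean recall {r.mean():.1f} vs mean calibrated {calibrated.mean():.1f} g/day")
```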

Relevance:

100.00%

Publisher:

Abstract:

This paper estimates a translog stochastic frontier production function for a panel of 150 mixed Catalan farms over the period 1989-1993, in order to measure and explain variation in technical inefficiency scores with a one-stage approach. The model uses gross value added as the aggregate output measure. Total employment, fixed capital, current assets, specific costs and overhead costs are introduced into the model as inputs. Stochastic frontier estimates are compared with those obtained using a linear programming method with a two-stage approach. The translog stochastic frontier specification appears to be an appropriate representation of the data; technical change was rejected and the technical inefficiency effects were statistically significant. The mean technical efficiency in the period analyzed was estimated at 64.0%. Farm inefficiency levels were found to be significantly (at the 5% level) and positively correlated with the number of economic size units.
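
For reference, a conventional textbook form of the panel-data translog stochastic frontier (not necessarily the paper's exact specification) is:

```latex
\ln y_{it} = \beta_0 + \sum_{k}\beta_k \ln x_{kit}
  + \tfrac{1}{2}\sum_{k}\sum_{l}\beta_{kl}\,\ln x_{kit}\,\ln x_{lit}
  + v_{it} - u_{it},
\qquad v_{it}\sim N(0,\sigma_v^{2}),\quad u_{it}\ge 0,
```

where v_it is noise and u_it is non-negative inefficiency, so that technical efficiency is TE_it = exp(-u_it). In a one-stage approach, the parameters of the inefficiency distribution are estimated jointly with the frontier, rather than the efficiency scores being regressed on explanatory variables in a separate second stage.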

Relevance:

100.00%

Publisher:

Abstract:

We provide analytical evidence of stochastic resonance in polarization-switching vertical-cavity surface-emitting lasers (VCSELs). We describe the VCSEL by a two-mode stochastic rate equation model and apply a multiple time-scale analysis. We were able to reduce the dynamical description to a single stochastic differential equation, which is the starting point of the analytical study of stochastic resonance. We compare our results with numerical simulations of the original rate equations, validating the use of multiple time-scale analysis on stochastic equations as an analytical tool.
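
Stochastic resonance in a reduced one-dimensional stochastic equation can be illustrated on the standard overdamped bistable model, dx = (x - x^3 + A cos(wt)) dt + sqrt(2D) dW, which serves here only as a generic stand-in for the VCSEL equations; the periodic response typically peaks at an intermediate noise strength.

```python
# Overdamped bistable SDE dx = (x - x^3 + A*cos(w*t)) dt + sqrt(2D) dW,
# integrated with Euler-Maruyama; the response at the driving frequency is
# estimated for several noise strengths D (all values are illustrative).
import numpy as np

rng = np.random.default_rng(9)

def response_amplitude(D, A=0.1, w=2 * np.pi / 100, dt=0.01, T=3000.0):
    n = int(T / dt)
    t = np.arange(n) * dt
    forcing = A * np.cos(w * t)
    noise = rng.normal(0.0, np.sqrt(2 * D * dt), n)
    x = np.empty(n)
    x[0] = 1.0
    for i in range(1, n):
        x[i] = x[i-1] + (x[i-1] - x[i-1]**3 + forcing[i-1]) * dt + noise[i]
    return 2.0 * abs(np.mean(x * np.exp(-1j * w * t)))   # Fourier component at w

for D in (0.05, 0.15, 0.4):
    print(f"D = {D}: response amplitude ~ {response_amplitude(D):.3f}")
```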

Relevance:

100.00%

Publisher:

Abstract:

This work reviews temporal and stochastic software reliability models and examines a few of the models in practice. The theoretical part of the work covers the key definitions and metrics used in describing and assessing software reliability, as well as the descriptions of the models themselves. Two groups of software reliability models are presented: the first group consists of models based on risk, and the second comprises models based on fault "seeding" and significance. The empirical part of the work contains the descriptions and results of the experiments. The experiments were carried out using three models from the first group: the Jelinski-Moranda model, the first geometric model, and the simple exponential model. The purpose of the experiments was to study how the distribution of the input data affects the performance of the models and how sensitive the models are to changes in the amount of input data. The Jelinski-Moranda model proved to be the most sensitive to the distribution, owing to convergence problems, and the first geometric model the most sensitive to changes in the amount of data.
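
A small sketch of fitting the Jelinski-Moranda model mentioned above: the i-th interfailure time is exponential with rate phi*(N - i + 1), and the likelihood is profiled over the integer initial fault count N. The interfailure times below are synthetic; the N estimate of this model is known to be unstable and can diverge, which relates to the convergence problems reported.

```python
# Profile-likelihood fit of the Jelinski-Moranda model to synthetic
# interfailure times: t_i ~ Exp(phi * (N - i + 1)) for i = 1..n.
import numpy as np

rng = np.random.default_rng(10)
true_N, true_phi, n_obs = 60, 0.003, 40
times = np.array([rng.exponential(1.0 / (true_phi * (true_N - i)))
                  for i in range(n_obs)])            # i faults already removed

def profile_loglik(N, t):
    i = np.arange(1, t.size + 1)
    remaining = N - i + 1
    phi = t.size / np.sum(remaining * t)             # MLE of phi for fixed N
    return np.sum(np.log(phi * remaining) - phi * remaining * t), phi

candidates = range(n_obs, n_obs + 200)
lls = [profile_loglik(N, times) for N in candidates]
best = int(np.argmax([ll for ll, _ in lls]))
N_hat, (ll, phi_hat) = list(candidates)[best], lls[best]
print(f"estimated N = {N_hat}, phi = {phi_hat:.4f} (true: {true_N}, {true_phi})")
```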