19 results for simulation models

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 30.00%

Abstract:

The Amazon Basin is crucial to global circulatory and carbon patterns due to its large areal extent and large flux magnitude. Biogeophysical models have had difficulty reproducing the annual cycle of net ecosystem exchange (NEE) of carbon in some regions of the Amazon, generally simulating uptake during the wet season and efflux during seasonal drought, when in reality the opposite occurs. Observational and modeling studies have identified several mechanisms that explain the observed annual cycle, including: (1) deep soil columns that can store large amounts of water, (2) the ability of deep roots to access moisture at depth when the near-surface soil dries during the annual drought, (3) movement of water in the soil via hydraulic redistribution, allowing for more efficient uptake of water during the wet season and moistening of the near-surface soil during the annual drought, and (4) the photosynthetic response to elevated light levels as cloudiness decreases during the dry season. We incorporate these mechanisms into the third version of the Simple Biosphere model (SiB3), both singly and collectively, and confront the results with observations. For the forest to maintain function through seasonal drought, there must be sufficient water storage in the soil to sustain transpiration through the dry season, in addition to the ability of the roots to access the stored water. We find that no single mechanism by itself produces a simulated annual cycle of NEE that matches the observations. When these mechanisms are combined in the model, NEE follows the general trend of the observations, showing efflux during the wet season and uptake during seasonal drought.

Relevance: 30.00%

Abstract:

Here we investigate the contribution of surface Alfvén wave damping to the heating of the solar wind under solar minimum conditions. These waves are present in regions of strong inhomogeneity in density or magnetic field (e.g., the border between open and closed magnetic field lines). Using a three-dimensional (3D) magnetohydrodynamics (MHD) model, we calculate the surface Alfvén wave damping contribution between 1 and 4 R☉ (solar radii), the region of interest for both acceleration and coronal heating. We consider waves with frequencies lower than those that are damped in the chromosphere and on the order of those dominating the heliosphere: 3 × 10⁻⁶ to 10⁻¹ Hz. In the region between open and closed field lines, within a few R☉ of the surface, no other major source of damping has been suggested for the low-frequency waves we consider here. This work is the first to study surface Alfvén waves in a 3D environment without assuming a priori a geometry of field lines or magnetic and density profiles. We demonstrate that projection effects from the plane of the sky to 3D are significant in the calculation of field line expansion. We determine that waves with frequencies above 2.8 × 10⁻⁴ Hz are damped between 1 and 4 R☉. In quiet-Sun regions, surface Alfvén waves are damped at larger distances than in active regions, thus carrying additional wave energy into the corona. We compare the surface Alfvén wave contribution to the heating by a variable polytropic index and find it to be an order of magnitude larger than needed for quiet-Sun regions. For active regions, the contribution to the heating is 20%. As it has been argued that a variable gamma acts as turbulence, our results indicate that surface Alfvén wave damping is comparable to turbulence in the lower corona. This damping mechanism should be included self-consistently as an energy driver for the wind in global MHD models.

Relevance: 30.00%

Abstract:

In this article, we generalize the Bayesian methodology introduced by Cepeda and Gamerman (2001) for modeling variance heterogeneity in normal regression models, in which the mean and variance parameters are orthogonal, to the general case of both linear and highly nonlinear regression models. Under the Bayesian paradigm, we use MCMC methods to simulate samples from the joint posterior distribution. We illustrate this algorithm on a simulated data set and on a real data set on school attendance rates for children in Colombia. Finally, we present some extensions of the proposed MCMC algorithm.
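
A concrete sketch of this kind of sampler may help. The following is a minimal illustration, not the authors' algorithm: a normal regression with log-linear variance heterogeneity, y_i ~ N(x_i'beta, sigma_i^2) with log sigma_i^2 = z_i'gamma, where beta is drawn from its weighted least-squares full conditional (Gibbs step) and gamma is updated by random-walk Metropolis. The priors, proposal scale and synthetic data are illustrative assumptions.

```python
# Minimal Metropolis-within-Gibbs sketch for a heteroscedastic normal regression.
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # mean design matrix
Z = np.column_stack([np.ones(n), rng.normal(size=n)])   # variance design matrix
beta_true, gamma_true = np.array([1.0, 2.0]), np.array([-1.0, 0.8])
y = X @ beta_true + rng.normal(size=n) * np.exp(0.5 * Z @ gamma_true)

def log_post_gamma(gamma, beta):
    s2 = np.exp(Z @ gamma)                               # heteroscedastic variances
    resid = y - X @ beta
    loglik = -0.5 * np.sum(np.log(s2) + resid**2 / s2)
    return loglik - 0.5 * np.sum(gamma**2) / 100.0       # vague N(0, 100) prior

beta, gamma, draws = np.zeros(2), np.zeros(2), []
for it in range(5000):
    # Gibbs step for beta: weighted least-squares full conditional (flat prior)
    w = 1.0 / np.exp(Z @ gamma)
    V = np.linalg.inv(X.T @ (X * w[:, None]))
    m = V @ (X.T @ (w * y))
    beta = rng.multivariate_normal(m, V)
    # Random-walk Metropolis step for gamma
    prop = gamma + 0.1 * rng.normal(size=2)
    if np.log(rng.uniform()) < log_post_gamma(prop, beta) - log_post_gamma(gamma, beta):
        gamma = prop
    draws.append(np.concatenate([beta, gamma]))

print(np.mean(draws[2500:], axis=0))   # posterior means approximate beta_true, gamma_true
```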

Relevance: 30.00%

Abstract:

This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an algebraic explicit model. This investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper and an algebraic model to predict extrudate swell is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. The mass and momentum conservation equations, together with the constitutive equation (UCM or algebraic model), were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growth, beyond that usually obtained with standard differential viscoelastic models. (C) 2010 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Increasing efforts exist to integrate different levels of detail in models of the cardiovascular system. For instance, one-dimensional representations are employed to model the systemic circulation. In this context, effective and black-box-type decomposition strategies for one-dimensional networks are needed, so as to: (i) employ domain decomposition strategies for large systemic models (1D-1D coupling) and (ii) provide the conceptual basis for dimensionally-heterogeneous representations (1D-3D coupling, among other possibilities). The strategy proposed in this article works in both scenarios, although the applications shown to illustrate its performance focus on the 1D-1D coupling case. A one-dimensional network is decomposed in such a way that each coupling point connects exactly two of the sub-networks. At each of the M connection points, two unknowns are defined: the flow rate and the pressure. These 2M unknowns are determined by 2M equations, since each sub-network provides one (non-linear) equation per coupling point. It is shown how to build the 2M × 2M non-linear system with an arbitrary and independent choice of boundary conditions for each of the sub-networks. The idea is then to solve this non-linear system to convergence, which guarantees strong coupling of the complete network. In other words, if the non-linear solver converges at each time step, the solution coincides with what would be obtained by monolithically modeling the whole network. The decomposition thus imposes no stability restriction on the choice of the time step size. Effective iterative strategies for the non-linear system that preserve the black-box character of the decomposition are then explored. Several variants of matrix-free Broyden's and Newton-GMRES algorithms are assessed as numerical solvers by comparing their performance on sub-critical wave propagation problems ranging from academic test cases to realistic cardiovascular applications. A specific variant of Broyden's algorithm is identified and recommended on the basis of its computational cost and reliability. (C) 2010 Elsevier B.V. All rights reserved.
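
For illustration, a minimal, generic sketch of a Broyden iteration for such a coupling residual F(u) = 0 follows, where u stacks the flow rate and pressure at the M connection points and each sub-network is queried only as a black box. The toy residual is a stand-in assumption: in the setting above it would be assembled from the sub-network responses, and the specific Broyden variant recommended in the paper is not reproduced here (a dense Jacobian approximation is stored for simplicity; limited-memory rank-one updates would make the scheme fully matrix-free).

```python
# Generic "good" Broyden iteration for a black-box coupling residual F(u) = 0.
import numpy as np

def broyden_good(F, u0, tol=1e-10, max_iter=50):
    u = u0.copy()
    f = F(u)
    B = np.eye(u.size)                  # initial Jacobian approximation
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)      # quasi-Newton step
        u_new = u + s
        f_new = F(u_new)
        df = f_new - f
        # Broyden's rank-one update: B <- B + (df - B s) s^T / (s^T s)
        B += np.outer(df - B @ s, s) / (s @ s)
        u, f = u_new, f_new
    return u

# Toy 2M-dimensional system standing in for the coupling equations (M = 3).
M = 3
def F(u):
    q, p = u[:M], u[M:]                 # flow rates and pressures at coupling points
    return np.concatenate([q + 0.1 * q**3 + 0.5 * p - 1.0,
                           p + 0.5 * q - 0.3])

u_star = broyden_good(F, np.zeros(2 * M))
print(u_star, np.linalg.norm(F(u_star)))   # converges to q = 1, p = -0.2
```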

Relevance: 30.00%

Abstract:

Estimating a data transformation is very useful for yielding response variables that closely satisfy a normal linear model. Generalized linear models enable the fitting of models to a wide range of data types and are based on exponential dispersion models. We propose a new class of transformed generalized linear models that extends both the Box-Cox models and the generalized linear models. We use the generalized linear model framework to fit these models and discuss maximum likelihood estimation and inference. We give a simple formula to estimate the parameter that indexes the transformation of the response variable for a subclass of models. We also give a simple formula to estimate the rth moment of the original dependent variable. We explore the possibility of using these models for time series data, extending the generalized autoregressive moving average models discussed by Benjamin et al. [Generalized autoregressive moving average models. J. Amer. Statist. Assoc. 98, 214-223]. The usefulness of these models is illustrated in a simulation study and in applications to three real data sets. (C) 2009 Elsevier B.V. All rights reserved.
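
For the simplest member of this family, a Box-Cox-transformed normal linear model, the transformation parameter can be estimated by maximizing the profile likelihood over a grid. The sketch below is illustrative only and is not the paper's closed-form formula for the general subclass; the synthetic data are an assumption.

```python
# Profile-likelihood estimation of the Box-Cox parameter in a normal linear model.
import numpy as np

def boxcox(y, lam):
    return np.log(y) if abs(lam) < 1e-8 else (y**lam - 1.0) / lam

def profile_loglik(lam, y, X):
    z = boxcox(y, lam)
    beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta_hat
    n = y.size
    sigma2 = resid @ resid / n
    # profile log-likelihood on the original scale (Jacobian term included)
    return -0.5 * n * np.log(sigma2) + (lam - 1.0) * np.sum(np.log(y))

rng = np.random.default_rng(1)
x = rng.uniform(1, 5, size=300)
X = np.column_stack([np.ones_like(x), x])
y = np.exp(0.3 + 0.5 * x + 0.2 * rng.normal(size=300))   # log-normal response

grid = np.linspace(-1.0, 1.5, 101)
lam_hat = grid[np.argmax([profile_loglik(l, y, X) for l in grid])]
print("estimated lambda:", lam_hat)                       # close to 0, i.e. a log transform
```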

Relevance: 30.00%

Abstract:

In survival analysis applications, the failure rate function may frequently present a unimodal shape. In such cases, the log-normal or log-logistic distributions are often used. In this paper, we shall be concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed to compare the performance of the log-logistic and log-Burr XII regression models. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models. (C) 2008 Published by Elsevier B.V.
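
A small sketch of why the Burr XII distribution suits unimodal failure rates: in the standard two-shape-parameter, unit-scale form assumed here, the hazard rises to an interior maximum and then decays whenever the first shape parameter exceeds one. The paper's regression model wraps a location-scale structure with covariates around this baseline.

```python
# Burr XII hazard: h(t) = f(t)/S(t) with S(t) = (1 + t^c)^(-k)
# and f(t) = c*k*t^(c-1)*(1 + t^c)^(-k-1), so h(t) = c*k*t^(c-1)/(1 + t^c).
import numpy as np

def burr_xii_hazard(t, c, k):
    return c * k * t**(c - 1) / (1.0 + t**c)

t = np.linspace(0.01, 5, 200)
h = burr_xii_hazard(t, c=2.0, k=1.5)
print("hazard peaks near t =", t[np.argmax(h)])   # interior mode -> unimodal failure rate
```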

Relevance: 30.00%

Abstract:

The absorption spectrum of the acid form of pterin in water was investigated theoretically. Different procedures using continuum, discrete, and explicit models were used to include the solvation effect on the absorption spectrum, which is characterized by two bands. The discrete and explicit models used Monte Carlo simulation to generate the liquid structure and time-dependent density functional theory (B3LYP/6-31+G(d)) to obtain the excitation energies. The discrete model failed to give the correct qualitative effect on the second absorption band. The continuum model, in turn, gave a correct qualitative picture and a semiquantitative description. The explicit use of 29 solvent molecules, forming a hydration shell of 6 Å, embedded in the electrostatic field of the remaining solvent molecules, gives absorption transitions at 3.67 and 4.59 eV, in excellent agreement with the S0-S1 and S0-S2 absorption bands at 3.66 and 4.59 eV, respectively, that characterize the experimental spectrum of pterin in water. (C) 2010 Wiley Periodicals, Inc. Int J Quantum Chem 110: 2371-2377, 2010.

Relevance: 30.00%

Abstract:

A time-efficient optical model is proposed for GATE simulation of a LYSO scintillation matrix coupled to a photomultiplier. The purpose is to avoid the excessively long computation time required when activating the optical processes in GATE. The usefulness of the model is demonstrated by comparing the simulated and experimental energy spectra obtained with the dual planar head equipment for dosimetry with a positron emission tomograph (DoPET). The procedure to apply the model is divided into two steps. First, a simplified simulation of a single crystal element of DoPET is used to fit an analytic function that models the optical attenuation inside the crystal. In the second step, the model is employed to calculate the influence of this attenuation on the energy registered by the tomograph. The proposed optical model is around three orders of magnitude faster than a GATE simulation with optical processes enabled. Good agreement was found between the experimental and simulated data using the optical model. The results indicate that optical interactions inside the crystal elements play an important role in the energy resolution and induce a considerable degradation of the spectral information acquired by DoPET. Finally, the same approach employed by the proposed optical model could be useful to simulate a scintillation matrix coupled to a photomultiplier using a single or dual readout scheme.
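
The first step above amounts to fitting an analytic light-collection (attenuation) curve to the output of a simplified single-crystal simulation. A minimal, hypothetical sketch of that kind of fit follows; the exponential functional form and the synthetic "simulated" data are assumptions for illustration and are not taken from the paper.

```python
# Fit an assumed exponential light-collection curve to synthetic single-crystal data.
import numpy as np
from scipy.optimize import curve_fit

def collection(depth_mm, a, mu):
    # fraction of scintillation light reaching the photomultiplier
    return a * np.exp(-mu * depth_mm)

# stand-in for the simplified single-crystal run: collected fraction vs interaction depth
rng = np.random.default_rng(3)
depth = np.linspace(0, 20, 15)
frac = 0.9 * np.exp(-0.05 * depth) * (1 + 0.02 * rng.normal(size=depth.size))

(a_hat, mu_hat), _ = curve_fit(collection, depth, frac, p0=(1.0, 0.1))
print(f"fitted a = {a_hat:.3f}, mu = {mu_hat:.3f} /mm")

# second step (sketch): scale each deposited energy by the fitted attenuation
energy_kev, depth_of_hit = 511.0, 8.0
registered = energy_kev * collection(depth_of_hit, a_hat, mu_hat)
print(f"registered energy: {registered:.1f} keV")
```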

Relevance: 30.00%

Abstract:

Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptually infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model, with the additional assumptions that all variance components are known and that within-cluster variances are equal, have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method-of-moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best, and when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
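
A minimal sketch of the kind of method-of-moments step mentioned above, assuming balanced clusters and illustrative data: one-way ANOVA estimators of the within- and between-cluster variance components, which would then be plugged into the empirical predictors of the cluster effects.

```python
# One-way ANOVA (method-of-moments) estimators of variance components.
import numpy as np

rng = np.random.default_rng(7)
n_clusters, m = 20, 8                               # clusters, units per cluster (balanced)
b = rng.normal(scale=1.5, size=n_clusters)          # true cluster effects
y = 10.0 + b[:, None] + rng.normal(scale=1.0, size=(n_clusters, m))

cluster_means = y.mean(axis=1)
msw = ((y - cluster_means[:, None])**2).sum() / (n_clusters * (m - 1))   # within mean square
msb = m * ((cluster_means - y.mean())**2).sum() / (n_clusters - 1)       # between mean square

sigma2_e_hat = msw                                   # within-cluster variance estimate
sigma2_b_hat = max((msb - msw) / m, 0.0)             # between-cluster variance estimate
print(sigma2_e_hat, sigma2_b_hat)                    # close to 1.0 and 1.5**2 = 2.25
```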

Relevance: 30.00%

Abstract:

In this article, we compare three residuals based on the deviance component in generalised log-gamma regression models with censored observations. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. For all cases studied, the empirical distributions of the proposed residuals are in general symmetric around zero, but only a martingale-type residual presented negligible kurtosis for the majority of the cases studied. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the martingale-type residual in generalised log-gamma regression models with censored data. A lifetime data set is analysed under log-gamma regression models and model checking based on the martingale-type residual is performed.
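
For reference, a generic sketch of the martingale-type residual for censored data: r_i = delta_i + log S_hat(t_i), where delta_i is the failure indicator and S_hat is the fitted survival function. The exponential "fit" used here is only a stand-in assumption; in the paper S_hat comes from the fitted generalised log-gamma regression model.

```python
# Generic martingale-type residuals for right-censored data.
import numpy as np

def martingale_residuals(t, delta, surv_hat):
    """t: observed times; delta: 1 = failure, 0 = censored;
    surv_hat: callable returning the fitted survival probability at t."""
    return delta + np.log(surv_hat(t))

t = np.array([1.2, 0.5, 3.4, 2.1])
delta = np.array([1, 0, 1, 0])
rate_hat = 0.6                                       # assumed fitted exponential rate
r = martingale_residuals(t, delta, lambda u: np.exp(-rate_hat * u))
print(r)   # roughly symmetric around zero when the model fits well
```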

Relevance: 30.00%

Abstract:

The class of symmetric linear regression models has the normal linear regression model as a special case and includes several models that assume that the errors follow a symmetric distribution with longer-than-normal tails. An important member of this class is the t linear regression model, which is commonly used as an alternative to the usual normal regression model when the data contain extreme or outlying observations. In this article, we develop second-order asymptotic theory for score tests in this class of models. We obtain Bartlett-corrected score statistics for testing hypotheses on the regression and dispersion parameters. The corrected statistics have chi-squared distributions with errors of order O(n^(-3/2)), n being the sample size. The corrections represent an improvement over the corresponding original Rao's score statistics, which are chi-squared distributed up to errors of order O(n^(-1)). Simulation results show that the corrected score tests perform much better than their uncorrected counterparts in samples of small or moderate size.
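
Bartlett-type corrections of score statistics commonly take the Cordeiro-Ferrari polynomial form sketched below; the coefficients depend on cumulants of log-likelihood derivatives and are model-specific (the paper derives them for symmetric linear models), so the values used here are purely illustrative assumptions.

```python
# Cordeiro-Ferrari-type adjusted score statistic: S* = S * (1 - (c + b*S + a*S^2)).
def corrected_score(S, a, b, c):
    return S * (1.0 - (c + b * S + a * S**2))

# with a = b = c = 0 the statistic is unchanged (the usual first-order test)
print(corrected_score(3.2, a=0.001, b=0.01, c=0.02))   # illustrative coefficients only
```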

Relevance: 30.00%

Abstract:

We present simple matrix formulae for corrected score statistics in symmetric nonlinear regression models. The corrected score statistics follow a chi-squared distribution more closely than the classical score statistic. Our simulation results indicate that the corrected score tests display smaller size distortions than the original score test. We also compare the sizes and powers of the corrected score tests with those of bootstrap-based score tests.

Relevance: 30.00%

Abstract:

In general, the normal distribution is assumed for the surrogate of the true covariates in the classical measurement error model. This paper considers a class of distributions, which includes the normal one, for the variables subject to error. An estimation approach yielding consistent estimators is developed, and simulation studies are reported.

Relevance: 30.00%

Abstract:

Mixed linear models are commonly used in repeated measures studies. They account for the dependence among observations obtained from the same experimental unit. Often, the number of observations is small, and it is thus important to use inference strategies that incorporate small-sample corrections. In this paper, we develop modified versions of the likelihood ratio test for fixed effects inference in mixed linear models. In particular, we derive a Bartlett correction to such a test, and also to a test obtained from a modified profile likelihood function. Our results generalize those in [Zucker, D.M., Lieberman, O., Manor, O., 2000. Improved small sample inference in the mixed linear model: Bartlett correction and adjusted likelihood. Journal of the Royal Statistical Society B, 62, 827-838] by allowing the parameter of interest to be vector-valued. Additionally, our Bartlett corrections allow for a nonlinear covariance matrix structure for the random effects. We report simulation results which show that the proposed tests display superior finite sample behavior relative to the standard likelihood ratio test. An application is also presented and discussed. (C) 2008 Elsevier B.V. All rights reserved.
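
A Bartlett correction rescales the likelihood ratio (LR) statistic so that its mean matches that of the reference chi-squared distribution with q degrees of freedom, LR* = LR * q / E[LR]. The sketch below shows only this generic rescaling; the model-specific expected value that the paper derives for fixed-effects tests in mixed linear models is replaced by an assumed illustrative number.

```python
# Generic Bartlett rescaling of a likelihood ratio statistic.
def bartlett_corrected(lr_obs, expected_lr, q):
    return lr_obs * q / expected_lr

# e.g. q = 2 fixed-effect constraints and an (assumed) small-sample mean E[LR] = 2.3
print(bartlett_corrected(lr_obs=5.1, expected_lr=2.3, q=2))
```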