625 results for Standard models
Abstract:
Travelling wave phenomena are observed in many biological applications. Mathematical theory of standard reaction-diffusion problems shows that simple partial differential equations exhibit travelling wave solutions with constant wavespeed and such models are used to describe, for example, waves of chemical concentrations, electrical signals, cell migration, waves of epidemics and population dynamics. However, as in the study of cell motion in complex spatial geometries, experimental data are often not consistent with constant wavespeed. Non-local spatial models have successfully been used to model anomalous diffusion and spatial heterogeneity in different physical contexts. In this paper, we develop a fractional model based on the Fisher-Kolmogoroff equation and analyse it for its wavespeed properties, attempting to relate the numerical results obtained from our simulations to experimental data describing enteric neural crest-derived cells migrating along the intact gut of mouse embryos. The model proposed essentially combines fractional and standard diffusion in different regions of the spatial domain and qualitatively reproduces the behaviour of neural crest-derived cells observed in the caecum and the hindgut of mouse embryos during in vivo experiments.
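For reference, the classical Fisher-Kolmogoroff equation underlying the model has, in one common form,

\[
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u\left(1-\frac{u}{K}\right),
\qquad c_{\min} = 2\sqrt{rD},
\]

where \(c_{\min}\) is the minimum constant wavespeed of travelling-wave solutions; the fractional model discussed above replaces the standard diffusion term with a fractional-order spatial operator in part of the domain.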
Abstract:
Invasion waves of cells play an important role in development, disease and repair. Standard discrete models of such processes typically involve simulating cell motility, cell proliferation and cell-to-cell crowding effects in a lattice-based framework. The continuum-limit description is often given by a reaction–diffusion equation that is related to the Fisher–Kolmogorov equation. One of the limitations of a standard lattice-based approach is that real cells move and proliferate in continuous space and are not restricted to a predefined lattice structure. We present a lattice-free model of cell motility and proliferation, with cell-to-cell crowding effects, and we use the model to replicate invasion wave-type behaviour. The continuum-limit description of the discrete model is a reaction–diffusion equation with a proliferation term that differs from that of lattice-based models. Comparing lattice-based and lattice-free simulations indicates that both models lead to invasion fronts that are similar at the leading edge, where the cell density is low. Conversely, the two models make different predictions in the high-density region of the domain, well behind the leading edge. We analyse the continuum-limit descriptions of the lattice-based and lattice-free models to show that both give rise to invasion wave-type solutions that move with the same speed but have very different shapes. We explore the significance of these differences by calibrating the parameters in the standard Fisher–Kolmogorov equation using data from the lattice-free model. We conclude that estimating parameters using this kind of standard procedure can produce misleading results.
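The lattice-based framework referred to above can be illustrated with a minimal sketch of a one-dimensional exclusion process with motility, proliferation and crowding; the update rules and parameter values below are illustrative and are not taken from the paper.

```python
# Minimal sketch of a 1D lattice-based exclusion process with motility,
# proliferation and cell-to-cell crowding (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(1)

L = 400                 # number of lattice sites
Pm, Pp = 1.0, 0.01      # motility and proliferation probabilities per step
steps = 500

occ = np.zeros(L, dtype=bool)
occ[:20] = True         # initially occupied region at the left boundary

for _ in range(steps):
    # motility: move to a randomly chosen neighbour, aborted if occupied
    for i in rng.permutation(np.flatnonzero(occ)):
        if rng.random() < Pm:
            j = i + rng.choice([-1, 1])
            if 0 <= j < L and not occ[j]:
                occ[i], occ[j] = False, True
    # proliferation: place a daughter on a randomly chosen empty neighbour
    for i in rng.permutation(np.flatnonzero(occ)):
        if rng.random() < Pp:
            j = i + rng.choice([-1, 1])
            if 0 <= j < L and not occ[j]:
                occ[j] = True

print("leading-edge position:", np.flatnonzero(occ).max())
```

Averaging many such realisations produces the density profiles whose continuum limit is the reaction–diffusion description discussed in the abstract.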
Abstract:
In biology, we frequently observe different species existing within the same environment. For example, there are many cell types in a tumour, or different animal species may occupy a given habitat. In modelling interactions between such species, we often make use of the mean field approximation, whereby spatial correlations between the locations of individuals are neglected. Whilst this approximation holds in certain situations, this is not always the case, and care must be taken to ensure the mean field approximation is only used in appropriate settings. In circumstances where the mean field approximation is unsuitable we need to include information on the spatial distributions of individuals, which is not a simple task. In this paper we provide a method that overcomes many of the failures of the mean field approximation for an on-lattice volume-excluding birth-death-movement process with multiple species. We explicitly take into account spatial information on the distribution of individuals by including partial differential equation descriptions of lattice site occupancy correlations. We demonstrate how to derive these equations for the multi-species case, and show results specific to a two-species problem. We compare averaged discrete results to both the mean field approximation and our improved method which incorporates spatial correlations. We note that the mean field approximation fails dramatically in some cases, predicting very different behaviour from that seen upon averaging multiple realisations of the discrete system. In contrast, our improved method provides excellent agreement with the averaged discrete behaviour in all cases, thus providing a more reliable modelling framework. Furthermore, our method is tractable as the resulting partial differential equations can be solved efficiently using standard numerical techniques.
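As a point of reference, the mean field approximation referred to above ignores occupancy correlations and, for a two-species volume-excluding birth-death process, leads to logistic-type equations of the schematic form (a generic illustration, not the paper's corrected equations):

\[
\frac{dC_1}{dt} = \lambda_1 C_1\,(1 - C_1 - C_2) - \mu_1 C_1,
\qquad
\frac{dC_2}{dt} = \lambda_2 C_2\,(1 - C_1 - C_2) - \mu_2 C_2,
\]

where \(C_i\) is the average lattice occupancy of species \(i\) and \(\lambda_i\), \(\mu_i\) are its birth and death rates. The improved method described above supplements such equations with evolution equations for the pairwise occupancy correlations.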
Abstract:
Standard Monte Carlo (sMC) simulation models have been widely used in AEC industry research to address system uncertainties. Although the benefits of probabilistic simulation analyses over deterministic methods are well documented, the sMC simulation technique is quite sensitive to the probability distributions of the input variables. This phenomenon becomes highly pronounced when the region of interest within the joint probability distribution (a function of the input variables) is small. In such cases, the standard Monte Carlo approach is often impractical from a computational standpoint. In this paper, a comparative analysis of standard Monte Carlo simulation to Markov Chain Monte Carlo with subset simulation (MCMC/ss) is presented. The MCMC/ss technique constitutes a more complex simulation method (relative to sMC), wherein a structured sampling algorithm is employed in place of completely randomized sampling. Consequently, gains in computational efficiency can be made. The two simulation methods are compared via theoretical case studies.
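The computational difficulty described above can be seen from a crude standard Monte Carlo estimate of a small failure probability; the limit-state function and sample size below are hypothetical and purely illustrative.

```python
# Crude (standard) Monte Carlo estimate of a small failure probability
# P(g(X) < 0); the limit-state function g is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # "failure" occurs when g(x) < 0
    return 4.5 - x.sum(axis=1)

N = 100_000
x = rng.standard_normal((N, 2))
p_hat = np.mean(g(x) < 0.0)

# coefficient of variation of the crude MC estimator, roughly sqrt((1-p)/(N p));
# for rare events this forces N to grow very large, which is the motivation
# for structured samplers such as MCMC with subset simulation
cov = np.sqrt((1.0 - p_hat) / (N * max(p_hat, 1e-12)))
print(f"p_hat = {p_hat:.2e}, coefficient of variation ~ {cov:.2f}")
```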
Abstract:
Time-domain models of marine structures based on frequency-domain data are usually built upon the Cummins equation. This type of model is a vector integro-differential equation which involves convolution terms. These convolution terms are not convenient for analysis and design of motion control systems. In addition, these models are neither efficient in terms of simulation time nor easy to implement in standard simulation packages. For these reasons, different methods have been proposed in the literature as approximate alternative representations of the convolutions. Because the convolution is a linear operation, different approaches can be followed to obtain an approximately equivalent linear system in the form of either transfer function or state-space models. This process involves the use of system identification, and several options are available depending on how the identification problem is posed. This raises the question of whether one method is better than the others. This paper therefore has three objectives. The first objective is to revisit some of the methods for replacing the convolutions, which have been reported in different areas of analysis of marine systems: hydrodynamics, wave energy conversion, and motion control systems. The second objective is to compare the different methods in terms of complexity and performance. For this purpose, a model for the response in the vertical plane of a modern containership is considered. The third objective is to describe the implementation of the resulting model in the standard simulation environment Matlab/Simulink.
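In the notation most commonly used for such models, the Cummins equation and the state-space replacement of its convolution term read (symbols follow common usage and may differ from the paper's):

\[
(M + A_\infty)\,\ddot{\xi}(t) + \int_0^{t} K(t-\tau)\,\dot{\xi}(\tau)\,d\tau + C\,\xi(t) = \tau_{\mathrm{exc}}(t),
\qquad
\int_0^{t} K(t-\tau)\,\dot{\xi}(\tau)\,d\tau \;\approx\; \hat{C}\,x(t), \quad \dot{x} = \hat{A}\,x + \hat{B}\,\dot{\xi},
\]

where \(M\) is the rigid-body inertia, \(A_\infty\) the infinite-frequency added mass, \(K(t)\) the retardation (memory) kernel, \(C\) the hydrostatic restoring matrix, and \((\hat{A}, \hat{B}, \hat{C})\) the state-space approximation obtained by system identification.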
Abstract:
This paper develops a semiparametric estimation approach for mixed count regression models based on series expansion for the unknown density of the unobserved heterogeneity. We use the generalized Laguerre series expansion around a gamma baseline density to model unobserved heterogeneity in a Poisson mixture model. We establish the consistency of the estimator and present a computational strategy to implement the proposed estimation techniques in the standard count model as well as in truncated, censored, and zero-inflated count regression models. Monte Carlo evidence shows that the finite sample behavior of the estimator is quite good. The paper applies the method to a model of individual shopping behavior.
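Schematically, the approach combines a Poisson kernel with a series-expanded heterogeneity density; the exact normalisation and parameterisation used in the paper may differ:

\[
\Pr(Y = y \mid x) = \int_0^\infty \frac{e^{-\lambda v}(\lambda v)^{y}}{y!}\, f(v)\, dv,
\qquad \lambda = \exp(x'\beta),
\qquad
f(v) \propto f_{\gamma}(v)\left(\sum_{k=0}^{K} a_k\, L_k(v)\right)^{2},
\]

where \(f_{\gamma}\) is a gamma baseline density and the \(L_k\) are generalized Laguerre polynomials, which are orthogonal with respect to the gamma weight.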
Abstract:
During the early design stages of construction projects, accurate and timely cost feedback is critical to design decision making. This is particularly challenging for cost estimators, as they must quickly and accurately estimate the cost of the building when the design is still incomplete and evolving. State-of-the-art software tools typically use a rule-based approach to generate detailed quantities from the design details present in a building model and relate them to the cost items in a cost estimating database. In this paper, we propose a generic approach for creating and maintaining a cost estimate using flexible mappings between a building model and a cost estimate. The approach uses queries on the building design to populate views, and each view is then associated with one or more cost items. The benefit of this approach is that the flexibility of modern query languages allows the estimator to encode a broad variety of relationships between the design and estimate. It also avoids the use of a common standard to which both designers and estimators must conform, giving the estimator added flexibility and functionality in their work.
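A toy sketch of the query-to-view-to-cost-item mapping described above is given below; the element properties, view names and unit rates are hypothetical and only illustrate the flexibility of query-based mappings.

```python
# Toy illustration of flexible mappings between a building model and a cost
# estimate: queries populate views, and views are priced by cost items.
building_model = [
    {"type": "wall", "material": "concrete", "area_m2": 120.0},
    {"type": "wall", "material": "brick",    "area_m2": 80.0},
    {"type": "slab", "material": "concrete", "area_m2": 300.0},
]

def query(model, **criteria):
    """A 'view' is simply the result of a query over the building model."""
    return [e for e in model
            if all(e.get(k) == v for k, v in criteria.items())]

views = {
    "concrete_walls": query(building_model, type="wall", material="concrete"),
    "slabs":          query(building_model, type="slab"),
}

# each view is associated with one or more cost items (here: a rate per m2)
cost_items = {"concrete_walls": 95.0, "slabs": 60.0}

estimate = {name: sum(e["area_m2"] for e in views[name]) * rate
            for name, rate in cost_items.items()}
print(estimate)  # {'concrete_walls': 11400.0, 'slabs': 18000.0}
```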
Abstract:
The type of contract model may have a significant influence on achieving project objectives, including environmental and climate change goals. This research investigates non-standard contract models that affect greenhouse gas (GHG) emissions in transport infrastructure construction in Australia. The research is based on the analysis of two case studies: an Early Contractor Involvement (ECI) contract and a Design and Construct (D&C) contract with GHG reduction requirements embedded in the contractor selection. The main findings support the use of ECIs for better integrating decisions made during the planning phase with construction activities, improving environmental outcomes while achieving financial and time savings. Key words: greenhouse gas reduction; road construction; contracting; ECI; D&C
Abstract:
This article describes a maximum likelihood method for estimating the parameters of the standard square-root stochastic volatility model and a variant of the model that includes jumps in equity prices. The model is fitted to data on the S&P 500 Index and the prices of vanilla options written on the index, for the period 1990 to 2011. The method is able to estimate both the parameters of the physical measure (associated with the index) and the parameters of the risk-neutral measure (associated with the options), including the volatility and jump risk premia. The estimation is implemented using a particle filter whose efficacy is demonstrated under simulation. The computational load of this estimation method, which previously has been prohibitive, is managed by the effective use of parallel computing using graphics processing units (GPUs). The empirical results indicate that the parameters of the models are reliably estimated and consistent with values reported in previous work. In particular, both the volatility risk premium and the jump risk premium are found to be significant.
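One common specification of the standard square-root stochastic volatility model with jumps in equity prices, written under the physical measure, is (notation may differ from the paper's):

\[
dS_t = \mu S_t\,dt + \sqrt{v_t}\,S_t\,dW_t^{(1)} + \left(e^{J} - 1\right) S_{t^-}\,dN_t,
\qquad
dv_t = \kappa(\theta - v_t)\,dt + \sigma\sqrt{v_t}\,dW_t^{(2)},
\]

where \(N_t\) is a Poisson process governing the jump times, \(J\) is the random jump size, and the Brownian motions are correlated with \(\mathrm{corr}(dW^{(1)}_t, dW^{(2)}_t) = \rho\,dt\). The risk-neutral counterpart carries adjusted drifts that embody the volatility and jump risk premia estimated in the paper.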
Abstract:
A 'pseudo-Bayesian' interpretation of standard errors yields a natural induced smoothing of statistical estimating functions. When applied to rank estimation, the lack of smoothness which prevents standard error estimation is remedied. Efficiency and robustness are preserved, while the smoothed estimation has excellent computational properties. In particular, convergence of the iterative equation for standard error is fast, and standard error calculation becomes asymptotically a one-step procedure. This property also extends to covariance matrix calculation for rank estimates in multi-parameter problems. Examples, and some simple explanations, are given.
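One common formulation of induced smoothing replaces a non-smooth estimating function \(U(\beta)\) by its expectation under a pseudo-Bayesian perturbation of the parameter (a schematic statement rather than the paper's exact construction):

\[
\tilde{U}(\beta) = E_Z\!\left[ U\!\left(\beta + n^{-1/2}\,\Gamma^{1/2} Z\right) \right],
\qquad Z \sim N(0, I),
\]

where \(\Gamma\) approximates the covariance of \(\sqrt{n}(\hat{\beta} - \beta)\). Because \(\tilde{U}\) is smooth, standard errors and covariance matrices can be computed by the usual sandwich-type calculations, iterating the choice of \(\Gamma\) to convergence.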
Abstract:
The method of generalized estimating equations (GEEs) has been criticized recently for a failure to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for working correlation parameter estimates are derived and shown to have good performance. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
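For reference, the first-order GEE that the supplementary equations are designed to complement has the standard form

\[
\sum_{i=1}^{n} D_i^{\top} V_i^{-1}\bigl(y_i - \mu_i(\beta)\bigr) = 0,
\qquad V_i = A_i^{1/2} R_i(\alpha)\, A_i^{1/2},
\]

where \(D_i = \partial \mu_i / \partial \beta\), \(A_i\) is the diagonal matrix of marginal variances, and \(R_i(\alpha)\) is the working correlation matrix whose parameters \(\alpha\) the supplementary estimating equations are used to estimate.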
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
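A minimal sketch of one of the simpler standardisation approaches mentioned above (a GLM on log-CPUE) is shown below; the synthetic data, column names and error structure are assumptions for illustration, not the paper's specification.

```python
# Hedged sketch of catch-effort standardisation with a GLM on log-CPUE;
# the synthetic data frame and covariates are purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "year":   rng.integers(1990, 2000, n),
    "month":  rng.integers(1, 13, n),
    "vessel": rng.integers(1, 21, n),
    "effort": rng.uniform(5.0, 20.0, n),
})
df["catch"] = df["effort"] * np.exp(0.05 * (df["year"] - 1990)
                                    + rng.normal(0.0, 0.3, n))
df["log_cpue"] = np.log(df["catch"] / df["effort"])

# year effects, adjusted for month and vessel, give a standardised index
fit = smf.glm("log_cpue ~ C(year) + C(month) + C(vessel)",
              data=df, family=sm.families.Gaussian()).fit()
print(fit.params.filter(like="year"))
```

Mixed models, GEEs and other GLM families differ mainly in how the variance function and the within-vessel correlation structure are specified, which is the comparison made in the abstract.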
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
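Schematically, the overfitting approach fits a mixture with deliberately more components than are expected, together with a sparsity-inducing symmetric Dirichlet prior on the weights (a generic statement of the idea; the paper's priors may differ in detail):

\[
p(y) = \sum_{k=1}^{K} w_k\, \mathcal{N}\!\left(y \mid \mu_k, \sigma_k^2\right),
\qquad (w_1, \dots, w_K) \sim \mathrm{Dirichlet}(\alpha, \dots, \alpha),
\]

with \(K\) chosen larger than the anticipated number of components and \(\alpha\) small, so that the posterior weights of superfluous components concentrate near zero and the number of occupied components can be read off the posterior.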
Abstract:
In a very recent study [1], the Renormalisation Group (RNG) turbulence model was used to obtain flow predictions in a strongly swirling quarl burner, and was found to perform well in predicting certain features that are not well captured using less sophisticated models of turbulence. The implication is that the RNG approach should provide an economical and reliable tool for the prediction of swirling flows in combustor and furnace geometries commonly encountered in technological applications. To test this hypothesis the present work considers flow in a model furnace for which experimental data are available [2]. The essential features of the flow which differentiate it from the previous study [1] are that the annular air jet entry is relatively narrow and the base wall of the cylindrical furnace is at 90 degrees to the inlet pipe. For swirl numbers of order 1 the resulting flow is highly complex with significant inner and outer recirculation regions. The RNG and standard k-epsilon models are used to model the flow for both swirling and non-swirling entry jets and the results are compared with experimental data [2]. Near-wall viscous effects are accounted for in both models via the standard wall function formulation [3]. For the RNG model, additional computations with grid placement extending well inside the near-wall viscous-affected sublayer are performed in order to assess the low Reynolds number capabilities of the model.
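Both closures compared above are eddy-viscosity models built on

\[
\nu_t = C_\mu\,\frac{k^{2}}{\varepsilon},
\qquad C_\mu = 0.09 \ \text{(standard } k\text{-}\varepsilon\text{)},
\]

with transport equations for the turbulent kinetic energy \(k\) and its dissipation rate \(\varepsilon\); the RNG variant retains the same eddy-viscosity form but uses constants derived from renormalisation-group theory and includes an additional strain-dependent term in the \(\varepsilon\) equation.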
Abstract:
In this work we numerically model isothermal turbulent swirling flow in a cylindrical burner. Three versions of the RNG k-epsilon model are assessed against the performance of the standard k-epsilon model. The sensitivity of the numerical predictions to grid refinement, differing convective differencing schemes and the choice of (unknown) inlet dissipation rate was closely scrutinised to ensure accuracy. Particular attention is paid to modelling the inlet conditions to within the range of uncertainty of the experimental data, as model predictions proved to be significantly sensitive to relatively small changes in upstream flow conditions. We also examine the characteristics of the swirl-induced recirculation zone (IRZ) predicted by the models over an extended range of inlet conditions. Our main findings are: (i) the standard k-epsilon model performed best compared with experiment; (ii) no one inlet specification can simultaneously optimize the performance of the models considered; (iii) the RNG models predict both single-cell and double-cell IRZ characteristics, the latter both with and without additional internal stagnation points. The first finding indicates that the examined RNG modifications to the standard k-epsilon model do not result in an improved eddy-viscosity-based model for the prediction of swirl flows. The second finding suggests that tuning established models for optimal performance in swirl flows a priori is not straightforward. The third finding indicates that the RNG-based models exhibit a greater variety of structural behaviour, despite being of the same level of complexity as the standard k-epsilon model. The plausibility of the predicted IRZ features is discussed in terms of known vortex breakdown phenomena.