399 results for Movable bed models (Hydraulic engineering)
Abstract:
Contemporary methodology for modelling the growth of organisms is based on continuous trajectories, which hinders the modelling of stepwise growth in crustacean populations. Growth models for fish are normally assumed to follow a continuous function, but a different type of model is needed for crustacean growth. Crustaceans must moult in order to grow: their growth is a discontinuous process due to the periodic shedding of the exoskeleton at moulting. This stepwise growth through the moulting process makes growth estimation more complex. Stochastic approaches can be used to model discontinuous growth, or what are commonly known as "jumps" (Figure 1). However, in a stochastic growth model we need to ensure that only positive jumps occur. In view of this, we introduce a subordinator, a special case of a Lévy process. A subordinator is a non-decreasing Lévy process, which will assist in modelling crustacean growth for a better understanding of the individual variability and stochasticity in moulting periods and increments. We develop methods for parameter estimation and illustrate them with a dataset from laboratory experiments. The motivating dataset is from the ornate rock lobster, Panulirus ornatus, which can be found between Australia and Papua New Guinea. Due to the presence of sex effects on growth (Munday et al., 2004), we estimate the growth parameters separately for each sex. Since all hard parts are shed at moulting, exact age determination of a lobster can be challenging. However, the growth parameters for the moult process can be estimated from tank data through: (i) inter-moult periods, and (ii) moult increments. We attempt to derive a joint density, which is made up of two functions: one for moult increments and the other for the time intervals between moults.
We claim these functions are conditionally independent given the pre-moult length and the inter-moult periods: by the Markov property, the moult increments and inter-moult periods are conditionally independent. Hence, the parameters in each function can be estimated separately. Subsequently, we integrate the two functions through a Monte Carlo method, and can therefore obtain a population mean for crustacean growth (e.g. the red curve in Figure 1).
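The stepwise growth described in this abstract can be sketched with a compound Poisson subordinator: exponentially distributed inter-moult periods and strictly positive moult increments, so the simulated length path is non-decreasing. A minimal illustration (all parameter values are hypothetical, not taken from the lobster data):

```python
import random

def simulate_growth(l0, moult_rate, mean_increment, t_max, rng):
    """Simulate a stepwise growth path as a compound Poisson subordinator:
    exponential inter-moult periods and positive exponential moult
    increments (all parameter values are illustrative)."""
    t, length = 0.0, l0
    path = [(t, length)]
    while True:
        t += rng.expovariate(moult_rate)                 # wait for next moult
        if t > t_max:
            break
        length += rng.expovariate(1.0 / mean_increment)  # positive jump only
        path.append((t, length))
    return path

rng = random.Random(1)
path = simulate_growth(l0=50.0, moult_rate=0.5, mean_increment=4.0,
                       t_max=100.0, rng=rng)
# lengths are non-decreasing, as required of a subordinator
assert all(b[1] >= a[1] for a, b in zip(path, path[1:]))
```

Because every jump is positive, the path satisfies the defining property of a subordinator; the estimation methods of the abstract would fit the waiting-time and increment distributions to observed inter-moult periods and moult increments.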
Abstract:
The Three Gorges Dam holds many records in the history of engineering. While the dam has produced benefits in terms of flood control, hydropower generation and increased navigation capacity of the Yangtze River, serious questions have been raised concerning its impact on both upstream and downstream ecosystems. It has been suggested that the dam operation intensifies the extremes of wet and dry conditions in the downstream Poyang Lake, and adversely affects important local wetlands. A floodgate has been proposed to maintain the lake water level by controlling the flow between the Poyang Lake and the Yangtze River. Using extensive hydrological data and generalized linear statistical models, we demonstrated that the dam operation induces major changes in the downstream river discharge near the dam, including an average "water loss". The analysis also revealed considerable effects on the Poyang Lake water level, particularly a reduced level over the dry period from late summer to autumn. However, the dam impact needs to be further assessed based on long-term monitoring of the lake ecosystem, covering a wide range of parameters related to hydrological and hydraulic characteristics of the lake, water quality, geomorphological characteristics, aquatic biota and their habitat, wetland vegetation and associated fauna.
Abstract:
Consider a general regression model with an arbitrary and unknown link function and a stochastic selection variable that determines whether the outcome variable is observable or missing. The paper proposes U-statistics that are based on kernel functions as estimators for the directions of the parameter vectors in the link function and the selection equation, and shows that these estimators are consistent and asymptotically normal.
Abstract:
The method of generalized estimating equations (GEEs) has been criticized recently for a failure to protect against misspecification of working correlation models, which in some cases leads to loss of efficiency or infeasibility of solutions. However, the feasibility and efficiency of GEE methods can be enhanced considerably by using flexible families of working correlation models. We propose two ways of constructing unbiased estimating equations from general correlation models for irregularly timed repeated measures to supplement and enhance GEE. The supplementary estimating equations are obtained by differentiation of the Cholesky decomposition of the working correlation, or as score equations for decoupled Gaussian pseudolikelihood. The estimating equations are solved with computational effort equivalent to that required for a first-order GEE. Full details and analytic expressions are developed for a generalized Markovian model that was evaluated through simulation. Large-sample "sandwich" standard errors for working correlation parameter estimates are derived and shown to have good performance. The proposed estimating functions are further illustrated in an analysis of repeated measures of pulmonary function in children.
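As a concrete piece of the machinery described, the Cholesky factor of a working correlation matrix can be computed directly; a pure-Python sketch for an AR(1)-type (Markovian) working correlation, with the dimension and correlation value chosen arbitrarily:

```python
def ar1_correlation(n, rho):
    """Working correlation R[i][j] = rho**|i-j| for an AR(1) structure."""
    return [[rho ** abs(i - j) for j in range(n)] for i in range(n)]

def cholesky(a):
    """Lower-triangular L with L @ L.T == a (standard algorithm,
    valid for symmetric positive-definite input)."""
    n = len(a)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = (a[i][i] - s) ** 0.5
            else:
                L[i][j] = (a[i][j] - s) / L[j][j]
    return L

R = ar1_correlation(4, 0.6)
L = cholesky(R)
# L L^T reproduces the working correlation matrix
for i in range(4):
    for j in range(4):
        assert abs(sum(L[i][k] * L[j][k] for k in range(4)) - R[i][j]) < 1e-12
```

The paper's supplementary estimating equations differentiate this factor with respect to the correlation parameters; the sketch only shows the decomposition itself.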
Abstract:
Statistical methods are often used to analyse commercial catch and effort data to provide standardised fishing effort and/or a relative index of fish abundance for input into stock assessment models. Achieving reliable results has proved difficult in Australia's Northern Prawn Fishery (NPF), due to a combination of such factors as the biological characteristics of the animals, some aspects of the fleet dynamics, and the changes in fishing technology. For this set of data, we compared four modelling approaches (linear models, mixed models, generalised estimating equations, and generalised linear models) with respect to the outcomes of the standardised fishing effort or the relative index of abundance. We also varied the number and form of vessel covariates in the models. Within a subset of data from this fishery, modelling correlation structures did not alter the conclusions from simpler statistical models. The random-effects models also yielded similar results. This is because the estimators are all consistent even if the correlation structure is mis-specified, and the data set is very large. However, the standard errors from different models differed, suggesting that different methods have different statistical efficiency. We suggest that there is value in modelling the variance function and the correlation structure, to make valid and efficient statistical inferences and gain insight into the data. We found that fishing power was separable from the indices of prawn abundance only when we offset the impact of vessel characteristics at assumed values from external sources. This may be due to the large degree of confounding within the data, and the extreme temporal changes in certain aspects of individual vessels, the fleet and the fleet dynamics.
Abstract:
James (1991, Biometrics 47, 1519-1530) constructed unbiased estimating functions for estimating the two parameters in the von Bertalanffy growth curve from tag-recapture data. This paper provides unbiased estimating functions for a class of growth models that incorporate stochastic components and explanatory variables. A simulation study using seasonal growth models indicates that the proposed method works well, while the least-squares methods that are commonly used in the literature may produce substantially biased estimates. The proposed model and method are also applied to real data from tagged rock lobsters to assess the possible seasonal effect on growth.
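For tag-recapture data, the deterministic von Bertalanffy curve implies an expected increment (L_inf - L1)(1 - exp(-k*dt)) between release length L1 and recapture after time dt. A sketch that simulates such data and recovers the two parameters by a crude grid search (true values and noise level are invented; this is plain least squares, not James's unbiased estimating functions):

```python
import math, random

def vb_increment(l1, dt, linf, k):
    """Expected von Bertalanffy length increment between tagging and recapture."""
    return (linf - l1) * (1.0 - math.exp(-k * dt))

rng = random.Random(7)
LINF_TRUE, K_TRUE = 120.0, 0.3          # hypothetical true parameters
data = []
for _ in range(200):
    l1 = rng.uniform(30.0, 100.0)       # length at tagging
    dt = rng.uniform(0.2, 3.0)          # time at liberty (years)
    dl = vb_increment(l1, dt, LINF_TRUE, K_TRUE) + rng.gauss(0.0, 1.0)
    data.append((l1, dt, dl))

def sse(linf, k):
    """Sum of squared errors of the increment model over the tag data."""
    return sum((dl - vb_increment(l1, dt, linf, k)) ** 2 for l1, dt, dl in data)

# crude grid search over (L_inf, k); best should land near (120, 0.3)
best = min(((linf, k) for linf in range(100, 141)
                      for k in [i / 100 for i in range(10, 51)]),
           key=lambda p: sse(*p))
```

With additive measurement error this least-squares fit behaves reasonably; the point of the paper is that for genuinely stochastic growth models least squares can be substantially biased, which the unbiased estimating functions avoid.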
Abstract:
Disease mapping involves the description and analysis of geographically indexed health data with respect to demographic, environmental, behavioural, socioeconomic, genetic, and infectious risk factors (Elliott and Wartenberg 2004). Disease maps can be useful for estimating relative risk; for ecological analyses, incorporating area- and/or individual-level covariates; or for cluster analyses (Lawson 2009). As aggregated data are often more readily available, one common method of mapping disease is to aggregate the counts of disease at some geographical areal level, and present them as choropleth maps (Devesa et al. 1999; Population Health Division 2006). Therefore, this chapter will focus exclusively on methods appropriate for areal data...
Abstract:
This paper proposes solutions to three issues pertaining to the estimation of finite mixture models with an unknown number of components: the non-identifiability induced by overfitting the number of components, the mixing limitations of standard Markov Chain Monte Carlo (MCMC) sampling techniques, and the related label switching problem. An overfitting approach is used to estimate the number of components in a finite mixture model via a Zmix algorithm. Zmix provides a bridge between multidimensional samplers and test based estimation methods, whereby priors are chosen to encourage extra groups to have weights approaching zero. MCMC sampling is made possible by the implementation of prior parallel tempering, an extension of parallel tempering. Zmix can accurately estimate the number of components, posterior parameter estimates and allocation probabilities given a sufficiently large sample size. The results will reflect uncertainty in the final model and will report the range of possible candidate models and their respective estimated probabilities from a single run. Label switching is resolved with a computationally light-weight method, Zswitch, developed for overfitted mixtures by exploiting the intuitiveness of allocation-based relabelling algorithms and the precision of label-invariant loss functions. Four simulation studies are included to illustrate Zmix and Zswitch, as well as three case studies from the literature. All methods are available as part of the R package Zmix, which can currently be applied to univariate Gaussian mixture models.
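The overfitting idea can be illustrated with the prior alone: a symmetric Dirichlet prior on the component weights with concentration well below one pushes superfluous components towards zero weight. A sketch (the concentration, component count and threshold are arbitrary; Zmix itself combines this with MCMC and prior parallel tempering):

```python
import random

def dirichlet(alpha, k, rng):
    """Sample a symmetric Dirichlet(alpha) weight vector via gamma draws."""
    g = [rng.gammavariate(alpha, 1.0) for _ in range(k)]
    s = sum(g)
    return [x / s for x in g]

rng = random.Random(42)
K = 10                                   # deliberately overfitted component count
draws = [dirichlet(0.05, K, rng) for _ in range(500)]

# average number of "active" components (weight above a small threshold)
active = sum(sum(w > 0.01 for w in d) for d in draws) / len(draws)
# with concentration << 1, most of the 10 components are effectively empty
assert active < K / 2
```

This is why priors "chosen to encourage extra groups to have weights approaching zero" let the posterior concentrate on the genuine number of components.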
Abstract:
Public-Private Partnerships (PPP) are established globally as an important mode of procurement, and the features of PPP, not least of which is the transfer of risk, appeal to governments, particularly in the current economic climate. Many other advantages of PPP are claimed to outweigh the costs of PPP and afford Value for Money (VfM) relative to traditionally financed, non-PPP projects. That said, we lack comparative whole-life empirical studies of VfM in PPP and non-PPP. While we await this kind of study, the pace and trajectory of PPP seem set to continue, and so in the meantime the virtues of seeking to improve PPP appear incontrovertible. The decision about which projects, or parts of projects, to offer to the market as a PPP, and the decision concerning the allocation or sharing of risks as part of the engagement of the PPP consortium, are among the most fundamental decisions that determine whether PPP delivers VfM. The focus of this paper is on the latter decision concerning governments' attitudes towards risk and, more specifically, on the effect of this decision on the nature of the emergent PPP consortium, or PPP model, including its economic behaviour and outcomes. This paper explores the extent to which the seemingly incompatible alternatives of risk allocation and risk sharing, represented by the orthodox/conventional PPP model and the heterodox/alliance PPP model respectively, can be reconciled, along with suggestions for new research directions to inform this reconciliation. In so doing, an important step is taken towards charting a path by which governments can harness the relative strengths of both kinds of PPP model.
Packed bed bioreactor for the isolation and expansion of placental-derived Mesenchymal Stromal Cells
Abstract:
Large numbers of mesenchymal stem/stromal cells (MSCs) are required for clinically relevant doses to treat a number of diseases. To manufacture these MSCs economically, an automated bioreactor system will be required. Herein we describe the development of a scalable, closed-system, packed bed bioreactor suitable for large-scale MSC expansion. The packed bed was formed from fused polystyrene pellets that were air plasma treated to endow them with a surface chemistry similar to traditional tissue culture plastic. The packed bed was encased within a gas permeable shell to decouple the medium nutrient supply and gas exchange. This enabled a significant reduction in medium flow rates, thus reducing shear and even facilitating single-pass medium exchange. The system was optimised in a small-scale bioreactor format (160 cm2) with murine-derived green fluorescent protein-expressing MSCs, and then scaled up to a 2800 cm2 format. We demonstrated that placental-derived MSCs could be isolated directly within the bioreactor and subsequently expanded. Our results demonstrate that the closed-system, large-scale packed bed bioreactor is an effective and scalable tool for the large-scale isolation and expansion of MSCs.
Abstract:
Changing the topology of a railway network can greatly affect its capacity. Railway networks, however, can be altered in a multitude of different ways. As each way has significant immediate and long-term financial ramifications, it is a difficult task to decide how and where to expand the network. In response, railway capacity expansion models (RCEM) have been developed to help capacity planning activities and to remove physical bottlenecks in the current railway system. The purpose of these models is to decide, given a fixed budget, where track duplications and track subdivisions should be made in order to increase theoretical capacity the most. These models are high level and strategic, which is why increases to theoretical capacity are concentrated upon. The optimization models have been applied to a case study to demonstrate their application and their worth. The case study clearly shows how automated approaches of this nature could be a formidable alternative to current manual planning techniques and simulation. If the exact effect of track duplications and subdivisions can be sufficiently approximated, this approach will be very applicable.
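At its simplest, the budget-constrained upgrade choice can be cast as a 0/1 knapsack: each candidate duplication or subdivision has a cost and an approximated capacity gain, and total gain is maximised within the budget. A toy sketch with invented costs and gains, far simpler than the RCEM formulations themselves:

```python
def best_upgrades(options, budget):
    """0/1 knapsack over candidate track upgrades.
    options: list of (name, cost, capacity_gain); integer costs assumed."""
    # dp[b] = (best total gain achievable within budget b, chosen names)
    dp = [(0, [])] * (budget + 1)
    for name, cost, gain in options:
        for b in range(budget, cost - 1, -1):   # descending: each option used once
            cand = dp[b - cost][0] + gain
            if cand > dp[b][0]:
                dp[b] = (cand, dp[b - cost][1] + [name])
    return dp[budget]

# hypothetical candidate upgrades: (section, cost in $M, extra trains/day)
options = [("duplicate A-B", 40, 12),
           ("duplicate B-C", 55, 15),
           ("subdivide C-D", 20, 6),
           ("subdivide D-E", 25, 5)]
gain, chosen = best_upgrades(options, budget=80)
# here the optimum is "duplicate B-C" plus "subdivide C-D" for a gain of 21
```

Real RCEMs must also capture interactions between upgrades (gains are not simply additive along a corridor), which is what makes them optimization models rather than a plain knapsack.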
Abstract:
Diffusive transport is a universal phenomenon throughout both the biological and physical sciences, and models of diffusion are routinely used to interrogate diffusion-driven processes. However, most models neglect the role of volume exclusion, which can significantly alter diffusive transport, particularly within biological systems where the diffusing particles might occupy a significant fraction of the available space. In this work we use a random walk approach to provide a means to reconcile models that incorporate crowding effects on different spatial scales. Our work demonstrates that coarse-grained models incorporating simplified descriptions of excluded volume can be used in many circumstances, but that care must be taken not to push the coarse-graining process too far.
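Volume exclusion in a random walk can be illustrated on a one-dimensional periodic lattice: an agent's attempted move is aborted if the target site is already occupied. The lattice size, occupancy and step count below are arbitrary:

```python
import random

def exclusion_walk(n_sites, n_agents, n_steps, rng):
    """1D random walk with volume exclusion: at most one agent per site.
    A move onto an occupied site is aborted (the exclusion rule)."""
    occupied = set(rng.sample(range(n_sites), n_agents))
    for _ in range(n_steps):
        agent = rng.choice(sorted(occupied))              # pick an agent
        target = (agent + rng.choice((-1, 1))) % n_sites  # periodic boundary
        if target not in occupied:                        # excluded-volume check
            occupied.discard(agent)
            occupied.add(target)
    return occupied

rng = random.Random(0)
final = exclusion_walk(n_sites=100, n_agents=40, n_steps=2000, rng=rng)
# agent number is conserved and no site is doubly occupied (it is a set)
assert len(final) == 40
```

At low occupancy this reduces to ordinary diffusion; at 40% occupancy, as here, many moves are aborted, which is precisely the crowding effect that coarse-grained models must approximate.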
Abstract:
The quality of species distribution models (SDMs) relies to a large degree on the quality of the input data, from bioclimatic indices to environmental and habitat descriptors (Austin, 2002). Recent reviews of SDM techniques have sought to optimize predictive performance (e.g. Elith et al., 2006). In general, SDMs employ one of three approaches to variable selection. The simplest approach relies on the expert to select the variables, as in environmental niche models (Nix, 1986) or a generalized linear model without variable selection (Miller and Franklin, 2002). A second approach explicitly incorporates variable selection into model fitting, which allows examination of particular combinations of variables. Examples include generalized linear or additive models with variable selection (Hastie et al., 2002), or classification trees with complexity-based or model-based pruning (Breiman et al., 1984; Zeileis, 2008). A third approach uses model averaging to summarize the overall contribution of a variable, without considering particular combinations. Examples include neural networks, boosted or bagged regression trees, and Maximum Entropy, as compared in Elith et al. (2006). Typically, users of SDMs will either consider a small number of variable sets, via the first approach, or else supply all of the candidate variables (often numbering more than a hundred) to the second or third approach. Bayesian SDMs exist, with several methods for eliciting and encoding priors on model parameters (see the review in Low Choy et al., 2010). However, few methods have been published for informative variable selection; one example is Bayesian trees (O'Leary, 2008). Here we report an elicitation protocol that helps make explicit a priori expert judgements on the quality of candidate variables. This protocol can be flexibly applied to any of the three approaches to variable selection described above, Bayesian or otherwise.
We demonstrate how this information can be obtained and then used to guide variable selection in classical or machine learning SDMs, or to define priors within Bayesian SDMs.
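One simple way such elicited judgements could feed into variable selection is to map each expert quality score to a prior inclusion probability. The scale, the variables and the linear mapping below are all invented for illustration and are not the protocol of the abstract:

```python
def prior_inclusion_probs(scores, floor=0.05, ceiling=0.95):
    """Map elicited quality scores (hypothetical 0-10 scale) to prior
    inclusion probabilities, bounded away from 0 and 1 so that no
    variable is excluded or forced into the model a priori."""
    return {var: floor + (ceiling - floor) * (s / 10.0)
            for var, s in scores.items()}

# invented expert scores for candidate bioclimatic predictors
scores = {"annual_mean_temp": 9, "precip_seasonality": 6, "soil_ph": 2}
priors = prior_inclusion_probs(scores)
# e.g. annual_mean_temp -> 0.86, soil_ph -> 0.23
```

In a Bayesian SDM these probabilities could parameterise spike-and-slab inclusion indicators; in classical workflows the same scores could simply rank candidate variables before fitting.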
Abstract:
This paper conceptualizes a framework for bridging the BIM (building information modelling)-specifications divide by augmenting objects within BIM with specification parameters derived from a product library. We demonstrate how model information, enriched with data at various LODs (levels of development), can evolve simultaneously with design and construction, using different representations of a window object embedded in a wall as lifecycle-phase exemplars at different levels of granularity. The conceptual standpoint is informed by the need to explore a methodological approach which extends beyond the limitations of current modelling platforms in enhancing the information content of BIM models. This work therefore demonstrates that BIM objects can be augmented with construction specification parameters by leveraging product libraries.
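The augmentation idea can be sketched as a plain data structure: a BIM object accumulates specification parameters per LOD as entries are merged in from a product library. The class, object name and parameters below are invented for illustration:

```python
class BimObject:
    """Minimal BIM object whose specification parameters grow with LOD
    (level of development) by merging entries from a product library."""
    def __init__(self, name, lod=100):
        self.name = name
        self.lod = lod      # LOD 100 = conceptual placeholder
        self.spec = {}

    def augment(self, lod, params):
        """Raise the LOD and merge specification parameters for that level."""
        self.lod = max(self.lod, lod)
        self.spec.update(params)

# hypothetical product-library entries for a window embedded in a wall
window = BimObject("window-W01")
window.augment(200, {"width_mm": 1200, "height_mm": 1500})
window.augment(350, {"frame_material": "aluminium", "u_value_w_m2k": 1.4})
# the object now carries both geometric and specification data at LOD 350
```

The same object identity persists across lifecycle phases while its information content deepens, which is the behaviour the framework aims to formalise.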
Abstract:
In a very recent study [1] the Renormalisation Group (RNG) turbulence model was used to obtain flow predictions in a strongly swirling quarl burner, and was found to perform well in predicting certain features that are not well captured using less sophisticated models of turbulence. The implication is that the RNG approach should provide an economical and reliable tool for the prediction of swirling flows in combustor and furnace geometries commonly encountered in technological applications. To test this hypothesis the present work considers flow in a model furnace for which experimental data are available [2]. The essential features of the flow which differentiate it from the previous study [1] are that the annular air jet entry is relatively narrow and the base wall of the cylindrical furnace is at 90 degrees to the inlet pipe. For swirl numbers of order 1 the resulting flow is highly complex, with significant inner and outer recirculation regions. The RNG and standard k-epsilon models are used to model the flow for both swirling and non-swirling entry jets, and the results are compared with experimental data [2]. Near-wall viscous effects are accounted for in both models via the standard wall function formulation [3]. For the RNG model, additional computations with grid placement extending well inside the near-wall viscous-affected sublayer are performed in order to assess the low Reynolds number capabilities of the model.
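The standard wall function formulation referenced as [3] is conventionally the log law u+ = ln(E y+)/kappa. A quick evaluation sketch using the usual smooth-wall constants (kappa ≈ 0.41, E ≈ 9.8; the sublayer crossover value is approximate):

```python
import math

KAPPA = 0.41    # von Karman constant (usual smooth-wall value)
E_WALL = 9.8    # log-law constant for smooth walls

def u_plus(y_plus):
    """Dimensionless near-wall velocity from the standard wall function.
    Below the approximate sublayer/log-law crossover (y+ ~ 11.6) the
    viscous sublayer relation u+ = y+ applies; above it, the log law."""
    if y_plus < 11.63:
        return y_plus
    return math.log(E_WALL * y_plus) / KAPPA

# in the log-law region u+ grows only logarithmically with y+
assert u_plus(100.0) > u_plus(30.0)
```

Wall functions let the first grid point sit in the log-law region; the low-Reynolds-number RNG computations in the abstract instead resolve the viscous sublayer directly, which is why the two treatments are contrasted.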