928 results for Stochastic Model
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. 
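As background to the elasticity analysis the abstract argues against, the sensitivity of the growth rate λ to a matrix entry a_ij is v_i w_j / ⟨v, w⟩, where w and v are the dominant right and left eigenvectors (stable stage structure and reproductive values), and the elasticities are the proportional sensitivities, which sum to one. A minimal sketch for a hypothetical two-stage matrix — the numbers are illustrative, not from the paper:

```python
import math

# Hypothetical 2-stage projection matrix (illustrative values, not from the paper):
# top row = fecundities, bottom row = survival/transition rates.
A = [[0.5, 2.0],
     [0.3, 0.8]]

def dominant_pair(M, iters=2000):
    """Power iteration: dominant eigenvalue and eigenvector of a positive 2x2 matrix."""
    v, lam = [1.0, 1.0], 0.0
    for _ in range(iters):
        w = [M[0][0] * v[0] + M[0][1] * v[1],
             M[1][0] * v[0] + M[1][1] * v[1]]
        lam = math.hypot(w[0], w[1])
        v = [w[0] / lam, w[1] / lam]
    return lam, v

lam, w = dominant_pair(A)                                              # stable stage structure
_, v = dominant_pair([[A[j][i] for j in range(2)] for i in range(2)])  # reproductive values

dot = v[0] * w[0] + v[1] * w[1]
sens = [[v[i] * w[j] / dot for j in range(2)] for i in range(2)]       # d(lambda)/d(a_ij)
elas = [[A[i][j] * sens[i][j] / lam for j in range(2)] for i in range(2)]
```

Cost-aware management in the abstract's sense would weight these quantities by the marginal cost of changing each vital rate, rather than ranking the raw elasticities directly.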
©2006 Society for Conservation Biology.
Abstract:
Stochastic modelling is critical in GNSS data processing. Currently, GNSS data processing commonly relies on empirical stochastic models that may not reflect the actual data quality or noise characteristics. This paper examines real-time GNSS observation noise estimation methods that determine the observation variance from a single receiver's data stream. The methods involve three steps: forming a linear combination, handling the ionosphere and ambiguity biases, and estimating the variance. Two distinct approaches are applied to overcome the ionosphere and ambiguity biases: the time-differencing method and the polynomial prediction method. The real-time variance estimation methods are compared with the zero-baseline and short-baseline methods. The proposed method requires only single-receiver observations and is therefore applicable to both differenced and undifferenced data processing modes. However, the methods may be limited to normal ionospheric conditions and to receivers with low-autocorrelation noise. Experimental results also indicate that the proposed method yields more realistic parameter precision.
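The time-differencing idea can be illustrated on synthetic data: differencing successive epochs cancels a slowly varying bias (ionosphere plus a constant ambiguity), leaving approximately white observation noise whose variance is half that of the differences. A minimal sketch with an assumed noise level, not real GNSS data:

```python
import random
import statistics

random.seed(42)
true_sigma = 0.02              # assumed observation noise (m), illustrative
n = 5000

obs, bias = [], 5.0
for k in range(n):
    bias += 1e-5               # slowly varying ionosphere + constant ambiguity bias
    obs.append(bias + random.gauss(0.0, true_sigma))

# Epoch-to-epoch differencing cancels the slow bias;
# for white noise, Var(diff) = 2 * sigma^2.
diffs = [obs[k + 1] - obs[k] for k in range(n - 1)]
sigma_hat = (statistics.pvariance(diffs) / 2.0) ** 0.5
```

The same logic underlies the single-receiver estimator: the bias terms only need to vary slowly between epochs relative to the noise for the differencing to isolate the observation variance.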
Abstract:
In this paper it is demonstrated how the Bayesian parametric bootstrap can be adapted to models with intractable likelihoods. The approach is most appealing when the semi-automatic approximate Bayesian computation (ABC) summary statistics are selected. After a pilot run of ABC, the likelihood-free parametric bootstrap approach requires very few model simulations to produce an approximate posterior, which can be a useful approximation in its own right. An alternative is to use this approximation as a proposal distribution in ABC algorithms to make them more efficient. In this paper, the parametric bootstrap approximation is used to form the initial importance distribution for the sequential Monte Carlo and the ABC importance and rejection sampling algorithms. The new approach is illustrated through a simulation study of the univariate g-and-k quantile distribution, and is used to infer parameter values of a stochastic model describing expanding melanoma cell colonies.
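For readers unfamiliar with ABC, the baseline rejection sampler that the paper's bootstrap proposal is meant to accelerate looks like the following. This is plain rejection ABC on a toy Gaussian model with the sample mean as summary statistic — not the paper's semi-automatic/bootstrap scheme, and all numbers are illustrative:

```python
import random
import statistics

random.seed(1)
theta_true = 3.0                                   # hypothetical true parameter
obs = [random.gauss(theta_true, 1.0) for _ in range(200)]
s_obs = statistics.fmean(obs)                      # summary statistic of the data

# Rejection ABC: draw theta from the prior, simulate a data set of the same
# size, and keep draws whose simulated summary lands near the observed one.
accepted = []
while len(accepted) < 150:
    theta = random.uniform(0.0, 6.0)               # uniform prior (assumed)
    sim = [random.gauss(theta, 1.0) for _ in range(200)]
    if abs(statistics.fmean(sim) - s_obs) < 0.15:
        accepted.append(theta)

post_mean = statistics.fmean(accepted)             # approximate posterior mean
```

The inefficiency is visible here: most prior draws are rejected, which is exactly the cost a good initial importance distribution (such as the bootstrap approximation) reduces.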
Abstract:
A simple stochastic model of a fish population subject to natural and fishing mortalities is described. The fishing effort is assumed to vary over different periods but to be constant within each period. A maximum-likelihood approach is developed for estimating natural mortality (M) and the catchability coefficient (q) simultaneously from catch-and-effort data. If there is not enough contrast in the data to provide reliable estimates of both M and q, as is often the case in practice, the method can be used to obtain the best possible values of q for a range of possible values of M. These techniques are illustrated with tiger prawn (Penaeus semisulcatus) data from the Northern Prawn Fishery of Australia.
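The profile-likelihood fallback described above — fixing M and finding the best q — can be sketched on simulated catch-and-effort data using the Baranov catch equation with lognormal observation error. All parameter values here are illustrative, not the tiger prawn estimates:

```python
import math
import random

random.seed(7)
M_true, q_true, N0 = 0.2, 0.001, 1.0e6            # illustrative values
effort = [random.uniform(100, 600) for _ in range(20)]

def predict_catches(M, q):
    """Baranov catch equation over sequential periods, effort constant within each."""
    N, out = N0, []
    for E in effort:
        Z = M + q * E                              # total mortality rate in the period
        out.append((q * E / Z) * N * (1 - math.exp(-Z)))
        N *= math.exp(-Z)                          # survivors carried to the next period
    return out

obs = [c * math.exp(random.gauss(0, 0.05)) for c in predict_catches(M_true, q_true)]

def ssq(M, q):
    """Lognormal-error negative log-likelihood, up to an additive constant."""
    return sum((math.log(o) - math.log(p)) ** 2
               for o, p in zip(obs, predict_catches(M, q)))

# Profile over q for an assumed M, as the abstract suggests when the data
# lack the contrast needed to estimate M and q jointly.
q_grid = [i * 1e-5 for i in range(50, 201)]
best_q = min(q_grid, key=lambda q: ssq(0.2, q))
```

Repeating the minimization for a range of fixed M values traces out the profile of q against M that the abstract recommends reporting.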
Abstract:
We consider the motion of a diffusive population on a growing domain, 0 < x < L(t ), which is motivated by various applications in developmental biology. Individuals in the diffusing population, which could represent molecules or cells in a developmental scenario, undergo two different kinds of motion: (i) undirected movement, characterized by a diffusion coefficient, D, and (ii) directed movement, associated with the underlying domain growth. For a general class of problems with a reflecting boundary at x = 0, and an absorbing boundary at x = L(t ), we provide an exact solution to the partial differential equation describing the evolution of the population density function, C(x,t ). Using this solution, we derive an exact expression for the survival probability, S(t ), and an accurate approximation for the long-time limit, S = limt→∞ S(t ). Unlike traditional analyses on a nongrowing domain, where S ≡ 0, we show that domain growth leads to a very different situation where S can be positive. The theoretical tools developed and validated in this study allow us to distinguish between situations where the diffusive population reaches the moving boundary at x = L(t ) from other situations where the diffusive population never reaches the moving boundary at x = L(t ). Making this distinction is relevant to certain applications in developmental biology, such as the development of the enteric nervous system (ENS). All theoretical predictions are verified by implementing a discrete stochastic model.
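The discrete stochastic model mentioned at the end can be sketched as a random walk on a uniformly growing domain: at each step the particle is advected with the growth (x scales with L), takes a diffusive step, reflects at x = 0, and is absorbed at x = L(t). The parameters below (exponential growth, mid-domain start) are illustrative; the point of the exercise is that the long-time survival fraction stays positive on a growing domain:

```python
import math
import random

random.seed(3)
D, L0, a = 1.0, 10.0, 0.1          # diffusivity, initial length, growth rate (assumed)
dt, T, walkers = 0.02, 30.0, 500
step_sd = math.sqrt(2.0 * D * dt)  # standard deviation of one diffusive step

survived = 0
for _ in range(walkers):
    L, x, alive = L0, L0 / 2.0, True          # start mid-domain (assumed)
    t = 0.0
    while t < T and alive:
        g = math.exp(a * dt)                  # uniform exponential growth over dt
        L *= g
        x *= g                                # directed movement from domain growth
        x += random.gauss(0.0, step_sd)       # undirected diffusive movement
        if x < 0.0:
            x = -x                            # reflecting boundary at x = 0
        if x >= L:
            alive = False                     # absorbing boundary at x = L(t)
        t += dt
    survived += alive

S = survived / walkers   # estimate of the long-time survival probability
```

On a fixed domain the same simulation would drive S to zero as T grows; here the boundary recedes fast enough that most walkers never reach it.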
Abstract:
Bacteria play an important role in many ecological systems. The molecular characterization of bacteria using either cultivation-dependent or cultivation-independent methods reveals the large scale of bacterial diversity in natural communities, and the vastness of subpopulations within a species or genus. Understanding how bacterial diversity varies across different environments and also within populations should provide insights into many important questions of bacterial evolution and population dynamics. This thesis presents novel statistical methods for analyzing bacterial diversity using widely employed molecular fingerprinting techniques. The first objective of this thesis was to develop Bayesian clustering models to identify bacterial population structures. Bacterial isolates were identified using multilocus sequence typing (MLST), and Bayesian clustering models were used to explore the evolutionary relationships among isolates. Our method involves the inference of genetic population structures via an unsupervised clustering framework where the dependence between loci is represented using graphical models. The population dynamics that generate such a population stratification were investigated using a stochastic model, in which homologous recombination between subpopulations can be quantified within a gene flow network. The second part of the thesis focuses on cluster analysis of community compositional data produced by two different cultivation-independent analyses: terminal restriction fragment length polymorphism (T-RFLP) analysis, and fatty acid methyl ester (FAME) analysis. The cluster analysis aims to group bacterial communities that are similar in composition, which is an important step for understanding the overall influences of environmental and ecological perturbations on bacterial diversity.
A common feature of T-RFLP and FAME data is zero-inflation, which indicates that the observation of a zero value is much more frequent than would be expected, for example, from a Poisson distribution in the discrete case, or a Gaussian distribution in the continuous case. We provided two strategies for modeling zero-inflation in the clustering framework, which were validated by both synthetic and empirical complex data sets. We show in the thesis that our model, which takes into account dependencies between loci in MLST data, can produce better clustering results than methods that assume independent loci. Furthermore, computer algorithms that are efficient in analyzing large-scale data were adopted to meet the increasing computational demand. Our method that detects homologous recombination in subpopulations may provide a theoretical criterion for defining bacterial species. The clustering of bacterial community data, including T-RFLP and FAME, provides an initial effort toward discovering the evolutionary dynamics that structure and maintain bacterial diversity in the natural environment.
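The zero-inflation idea can be sketched for the discrete (Poisson) case: a zero arises either structurally, with probability π, or from the Poisson distribution itself, so P(0) = π + (1 − π)e^{−λ}, and both parameters can be recovered by maximizing the mixture likelihood. The values and the grid search below are illustrative, not the thesis's clustering machinery:

```python
import math
import random

random.seed(5)
pi_true, lam_true = 0.4, 3.0       # assumed zero-inflation weight and Poisson mean

def poisson(lam):
    """Knuth's inversion sampler (the stdlib has no Poisson generator)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

data = [0 if random.random() < pi_true else poisson(lam_true) for _ in range(2000)]

def loglik(pi, lam):
    """Zero-inflated Poisson log-likelihood: P(0) = pi + (1-pi) * exp(-lam)."""
    lz = math.log(pi + (1.0 - pi) * math.exp(-lam))
    return sum(lz if x == 0 else
               math.log(1.0 - pi) - lam + x * math.log(lam) - math.lgamma(x + 1)
               for x in data)

# Coarse grid search for the maximum-likelihood pair (a sketch, not production code).
best = max(((loglik(p / 100.0, l / 10.0), p / 100.0, l / 10.0)
            for p in range(5, 95, 5) for l in range(10, 60, 2)))
_, pi_hat, lam_hat = best
```

A plain Poisson fit to the same data would be forced toward a too-small mean to accommodate the excess zeros, which is why the mixture term matters inside a clustering model.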
Abstract:
Nanomaterials with a hexagonally ordered atomic structure, e.g., graphene, carbon and boron nitride nanotubes, and white graphene (a monolayer of hexagonal boron nitride) possess many impressive properties. For example, the mechanical stiffness and strength of these materials are unprecedented. Also, the extraordinary electronic properties of graphene and carbon nanotubes suggest that these materials may serve as building blocks of next generation electronics. However, the properties of pristine materials are not always what is needed in applications, but careful manipulation of their atomic structure, e.g., via particle irradiation can be used to tailor the properties. On the other hand, inadvertently introduced defects can deteriorate the useful properties of these materials in radiation hostile environments, such as outer space. In this thesis, defect production via energetic particle bombardment in the aforementioned materials is investigated. The effects of ion irradiation on multi-walled carbon and boron nitride nanotubes are studied experimentally by first conducting controlled irradiation treatments of the samples using an ion accelerator and subsequently characterizing the induced changes by transmission electron microscopy and Raman spectroscopy. The usefulness of the characterization methods is critically evaluated and a damage grading scale is proposed, based on transmission electron microscopy images. Theoretical predictions are made on defect production in graphene and white graphene under particle bombardment. A stochastic model based on first-principles molecular dynamics simulations is used together with electron irradiation experiments for understanding the formation of peculiar triangular defect structures in white graphene. An extensive set of classical molecular dynamics simulations is conducted, in order to study defect production under ion irradiation in graphene and white graphene. 
In the experimental studies the response of carbon and boron nitride multi-walled nanotubes to irradiation with a wide range of ion types, energies and fluences is explored. The stabilities of these structures under ion irradiation are investigated, as well as the issue of how the mechanism of energy transfer affects the irradiation-induced damage. An irradiation fluence of 5.5x10^15 ions/cm^2 with 40 keV Ar+ ions is established to be sufficient to amorphize a multi-walled nanotube. In the case of 350 keV He+ ion irradiation, where most of the energy transfer happens through inelastic collisions between the ion and the target electrons, an irradiation fluence of 1.4x10^17 ions/cm^2 heavily damages carbon nanotubes, whereas a larger irradiation fluence of 1.2x10^18 ions/cm^2 leaves a boron nitride nanotube in much better condition, indicating that carbon nanotubes might be more susceptible to damage via electronic excitations than their boron nitride counterparts. An elevated temperature was discovered to considerably reduce the accumulated damage created by energetic ions in both carbon and boron nitride nanotubes, attributed to enhanced defect mobility and efficient recombination at high temperatures. Additionally, cobalt nanorods encapsulated inside multi-walled carbon nanotubes were observed to transform into spherical nanoparticles after ion irradiation at an elevated temperature, which can be explained by the inverse Ostwald ripening effect. The simulation studies on ion irradiation of the hexagonal monolayers yielded quantitative estimates on types and abundances of defects produced within a large range of irradiation parameters. He, Ne, Ar, Kr, Xe, and Ga ions were considered in the simulations with kinetic energies ranging from 35 eV to 10 MeV, and the role of the angle of incidence of the ions was studied in detail. A stochastic model was developed for utilizing the large amount of data produced by the molecular dynamics simulations. 
It was discovered that a high degree of selectivity over the types and abundances of defects can be achieved by carefully selecting the irradiation parameters, which can be of great use when precise patterning of graphene or white graphene using focused ion beams is planned.
Abstract:
Recently, Brownian networks have emerged as an effective stochastic model to approximate multiclass queueing networks with dynamic scheduling capability, under conditions of balanced heavy loading. This paper is a tutorial introduction to dynamic scheduling in manufacturing systems using Brownian networks. The article starts with motivational examples. It then provides a review of relevant weak convergence concepts, followed by a description of the limiting behaviour of queueing systems under heavy traffic. The Brownian approximation procedure is discussed in detail and generic case studies are provided to illustrate the procedure and demonstrate its effectiveness. This paper places emphasis only on the results and aspires to provide the reader with an up-to-date understanding of dynamic scheduling based on Brownian approximations.
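The heavy-traffic limits the tutorial reviews replace the queue-length process with a reflected Brownian motion (RBM). A one-dimensional RBM with negative drift is easy to simulate, and its stationary mean σ²/(2|μ|) gives a quick sanity check; the parameters below are illustrative, not tied to any example in the paper:

```python
import math
import random

random.seed(11)
mu, sigma = -0.5, 1.0          # drift and diffusion coefficient (illustrative)
dt, n = 0.01, 400_000

z, total = 0.0, 0.0
for _ in range(n):
    z += mu * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    if z < 0.0:
        z = 0.0                # reflection at the origin keeps the "queue" non-negative
    total += z

mean_z = total / n             # compare with the stationary mean sigma^2 / (2|mu|) = 1.0
```

In the Brownian approximation procedure the multiclass scheduling problem becomes a control problem for a multidimensional analogue of this process, which is what makes the approach tractable under balanced heavy loading.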
Abstract:
A detailed study of the monthly Cauvery river flows at the Krishna Raja Sagara (KRS) reservoir is carried out using the techniques of spectral analysis. The correlogram and power spectrum are plotted and used to identify the periodicities inherent in the monthly inflows. The statistical significance of these periodicities is tested. Apart from the periodicities at 12 months and 6 months, a significant periodicity of 4 months was also observed in the monthly inflows. The analysis prepares the ground for developing an appropriate stochastic model for the time series of the monthly flows.
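The power-spectrum step can be sketched with a plain discrete Fourier periodogram. On a synthetic monthly series containing 12-, 6-, and 4-month cycles (amplitudes assumed, no noise), the three largest ordinates recover exactly those periods:

```python
import math

N = 240        # 20 years of monthly values
# Synthetic inflow series with annual, semi-annual and 4-month cycles (illustrative).
x = [10.0 + 3.0 * math.cos(2 * math.pi * t / 12)
          + 2.0 * math.cos(2 * math.pi * t / 6)
          + 1.5 * math.cos(2 * math.pi * t / 4) for t in range(N)]

def periodogram(series):
    """Periodogram ordinates at the Fourier frequencies k/n, k = 1 .. n/2."""
    n = len(series)
    m = sum(series) / n
    xc = [v - m for v in series]           # remove the mean before transforming
    P = []
    for k in range(1, n // 2 + 1):
        c = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(xc))
        s = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(xc))
        P.append((c * c + s * s) / n)
    return P

P = periodogram(x)
top3 = sorted(range(len(P)), key=lambda i: P[i], reverse=True)[:3]
periods = sorted(N // (i + 1) for i in top3)   # index i corresponds to frequency k = i + 1
```

A significance test, as in the abstract, would compare these ordinates against the spectrum of a fitted null model (e.g., white or red noise); that step is omitted here.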
Abstract:
In order to study cell electroporation in situ, polymer devices have been fabricated from poly-dimethyl siloxane with transparent indium tin oxide parallel plate electrodes in horizontal geometry. This geometry with cells located on a single focal plane at the interface of the bottom electrode allows a longer observation time in both transmitted bright-field and reflected fluorescence microscopy modes. Using propidium iodide (PI) as a marker dye, the number of electroporated cells in a typical culture volume of 10-100 μl was quantified in situ as a function of applied voltage from 10 to 90 V in a series of 2-ms pulses across 0.5-mm electrode spacing. The electric field at the interface and device current was calculated using a model that takes into account bulk screening of the transient pulse. The voltage dependence of the number of electroporated cells could be explained using a stochastic model for the electroporation kinetics, and the free energy for pore formation was found to be kT at room temperature. With this device, the optimum electroporation conditions can be quickly determined by monitoring the uptake of PI marker dye in situ under the application of millisecond voltage pulses. The electroporation efficiency was also quantified using an ex situ fluorescence-assisted cell sorter, and the morphology of cultured cells was evaluated after the pulsing experiment. Importantly, the efficacy of the developed device was tested independently using two cell lines (C2C12 mouse myoblast cells and yeast cells) as well as in three different electroporation buffers (phosphate buffer saline, electroporation buffer and 10 % glycerol).
Abstract:
A stochastic computational model is proposed for rock masses with non-persistent (intermittent) joints. In this model, the joint surfaces are assumed to be square in shape, and the spatial random distribution of the joints is simulated through the statistical distribution functions of the rock mass discontinuities. An implementation method for the random joint model is given, and the reliability of the model's algorithm is verified. Uniaxial loading simulations are used to study the relationship between the failure strength of the jointed rock mass and factors such as the joint dip angle and the joint persistence (connectivity ratio); the results are compared with theoretical results derived from limit equilibrium conditions, and the similarities and differences between the numerical simulation results and the limit equilibrium results are analyzed, further verifying the reliability of the random joint model. In addition, the relationship between joint persistence and the equivalent elastic modulus of the rock mass is studied, and an expression for this relationship is given.
Abstract:
Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved and the effects of unresolved, small or subgrid-scale (SGS) eddies on the large-scale eddies are modeled. The SGS turbulent flow field is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), amongst others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation, to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this carries the essential nonlinearity in the statistical modeling of particle motion.
$\delta T_{Lp}$ and $C_0$ may depend on the filter width and particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ is modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we shall study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect. The one-way coupling assumption is only valid for low particle mass loading.
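The Langevin-type SGS model discussed above can be sketched in one dimension as an Ornstein–Uhlenbeck process for the seen fluid velocity with timescale T_Lp, driving a Stokes-drag particle; the diffusion amplitude sqrt(2 σ² / T_Lp) stands in for the C₀-controlled diffusion term of the paper's Eq. (7). All parameter values are illustrative, and the stationary particle velocity variance σ² T_Lp / (T_Lp + τ_p) provides a check:

```python
import math
import random

random.seed(9)
sigma2 = 0.5          # SGS velocity variance seen by the particle (assumed)
T_Lp   = 0.2          # particle-SGS eddy interaction timescale (assumed)
tau_p  = 0.1          # particle response (Stokes) time (assumed)
dt, nsteps = 0.001, 200_000

noise_amp = math.sqrt(2.0 * sigma2 / T_Lp * dt)   # diffusion amplitude per step
u, v, acc = 0.0, 0.0, 0.0
for _ in range(nsteps):
    # OU (Langevin) model for the SGS fluid velocity seen by the particle.
    u += -u / T_Lp * dt + noise_amp * random.gauss(0.0, 1.0)
    # Stokes drag relaxes the particle velocity toward the seen velocity.
    v += (u - v) / tau_p * dt
    acc += v * v

var_v = acc / nsteps   # compare with sigma2 * T_Lp / (T_Lp + tau_p) = 1/3
```

The check makes the modeling stakes concrete: the predicted particle kinetic energy depends directly on the prescribed T_Lp, so a poor closure for this timescale propagates straight into the single-particle statistics.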
Abstract:
With data centers being the supporting infrastructure for a wide range of IT services, their efficiency has become a big concern to operators, as well as to society, for both economic and environmental reasons. The goal of this thesis is to design energy-efficient algorithms that reduce energy cost while minimizing compromise to service. We focus on the algorithmic challenges at different levels of energy optimization across the data center stack. The algorithmic challenge at the device level is to improve the energy efficiency of a single computational device via techniques such as job scheduling and speed scaling. We analyze the common speed scaling algorithms in both the worst-case model and the stochastic model to answer some fundamental issues in the design of speed scaling algorithms. The algorithmic challenge at the local data center level is to dynamically allocate resources (e.g., servers) and to dispatch the workload in a data center. We develop an online algorithm to make a data center more power-proportional by dynamically adapting the number of active servers. The algorithmic challenge at the global data center level is to dispatch the workload across multiple data centers, considering the geographical diversity of electricity price, availability of renewable energy, and network propagation delay. We propose algorithms to jointly optimize routing and provisioning in an online manner. Motivated by the above online decision problems, we move on to study a general class of online problems named "smoothed online convex optimization", which seeks to minimize the sum of a sequence of convex functions when "smooth" solutions are preferred. This model allows us to bridge different research communities and helps us gain a more fundamental understanding of general online decision problems.
Abstract:
Three-protein circadian oscillations in cyanobacteria persist for weeks. To understand how cellular oscillations function robustly in stochastically fluctuating environments, we used a stochastic model to uncover two aspects of the circadian oscillation: the potential landscape, related to the steady-state probability distribution of protein concentrations, and the corresponding flux, related to the speed of concentration changes, which drives the oscillations. The barrier height for escaping from the oscillation attractor on the landscape provides a quantitative measure of the robustness and coherence of the oscillations against intrinsic and external fluctuations. The difference between the locations of the zero total driving force and the extrema of the potential provides a possible experimental probe and quantification of the force from the curl flux. These results, correlated with experiments, can help in the design of robust oscillatory networks.