391 results for Stochastic Processes
Abstract:
Dengue is the most prevalent arthropod-borne virus, with at least 40% of the world's population at risk of infection each year. In Australia, dengue is not endemic, but viremic travelers trigger outbreaks involving hundreds of cases. We compared the susceptibility of Aedes aegypti mosquitoes from two geographically isolated populations to two strains of dengue virus serotype 2. Interestingly, we found that mosquitoes from a city with no history of dengue were more susceptible to the virus than mosquitoes from an outbreak-prone region, particularly with respect to one dengue strain. These findings suggest either recent evolution of population-based differences in vector competence or different historical origins of the two populations. Future genomic comparisons of these populations could reveal the genetic basis of vector competence and the relative roles of selection and stochastic processes in shaping their differences. Lastly, we report the novel finding of a correlation between midgut dengue titer and titer in tissues colonized after dissemination.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) such emulators do not exploit our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there are a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, because it incorporates our knowledge of dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least when the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by applying it to a simple hydrological model.
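For intuition about the conditioning step, here is a minimal sketch of Kalman smoothing on a scalar linear-Gaussian surrogate, with design-set model outputs playing the role of observations. The toy decay signal and all settings are illustrative assumptions; the paper's emulator additionally models the innovation terms as Gaussian stochastic processes over parameters and inputs, which this sketch omits.

```python
import numpy as np

def rts_smoother(y, a, q, r, m0=0.0, p0=1.0):
    """Kalman filter plus Rauch-Tung-Striebel smoother for a scalar
    linear-Gaussian state-space model:
        x_t = a * x_{t-1} + w_t,  w_t ~ N(0, q)   (linear surrogate dynamics)
        y_t = x_t + v_t,          v_t ~ N(0, r)   (design-set model outputs)
    Returns smoothed means and variances, i.e. the surrogate conditioned
    on the design data."""
    n = len(y)
    mf, pf = np.empty(n), np.empty(n)        # filtered moments
    mp, pp = np.empty(n), np.empty(n)        # one-step predicted moments
    m, p = m0, p0
    for t in range(n):
        mp[t], pp[t] = a * m, a * a * p + q  # predict
        k = pp[t] / (pp[t] + r)              # Kalman gain
        m = mp[t] + k * (y[t] - mp[t])       # update with design output
        p = (1.0 - k) * pp[t]
        mf[t], pf[t] = m, p
    ms, ps = mf.copy(), pf.copy()            # backward smoothing pass
    for t in range(n - 2, -1, -1):
        g = a * pf[t] / pp[t + 1]
        ms[t] = mf[t] + g * (ms[t + 1] - mp[t + 1])
        ps[t] = pf[t] + g * g * (ps[t + 1] - pp[t + 1])
    return ms, ps

# Toy design data: noisy output of a slow exponential decay.
rng = np.random.default_rng(0)
y = np.exp(-0.05 * np.arange(100)) + 0.05 * rng.standard_normal(100)
mean, var = rts_smoother(y, a=0.95, q=1e-4, r=0.05**2)
```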
Abstract:
This study presents a general approach to identifying dominant oscillation modes in bulk power systems using wide-area measurement systems. To identify the dominant modes automatically, without manual intervention, the spectral characteristics of power system oscillation modes are used to distinguish the electromechanical oscillation modes computed by a stochastic subspace method; a proposed mode matching pursuit then discriminates the dominant modes from trivial ones, and a stepwise-refinement scheme removes outliers, yielding highly accurate identification of the dominant modes. The method is applied to the China Southern Power Grid, one of the largest AC/DC parallel grids in the world. Simulation data and field-measurement data are used to demonstrate the high accuracy and robustness of the dominant-mode identification approach.
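As a point of reference for the subspace step only, the sketch below implements bare-bones covariance-driven stochastic subspace identification on a synthetic ringdown signal. The toy signal, model order, and lag choices are assumptions, and the paper's spectral screening, mode matching pursuit, and stepwise-refinement stages are not reproduced here.

```python
import numpy as np

def ssi_cov_modes(y, dt, order, lags):
    """Covariance-driven stochastic subspace identification, minimal form:
    estimate frequencies (Hz) and damping ratios from a measured signal."""
    y = y - y.mean()
    n = len(y)
    # Output autocovariances R_1 .. R_{2*lags}.
    R = np.array([y[k:] @ y[:n - k] / (n - k) for k in range(1, 2 * lags + 1)])
    # Block-Hankel matrix of covariances: H[i, j] = R_{i+j+1}.
    H = np.array([R[i:i + lags] for i in range(lags)])
    U, s, Vt = np.linalg.svd(H)
    O = U[:, :order] * np.sqrt(s[:order])    # observability matrix
    A = np.linalg.pinv(O[:-1]) @ O[1:]       # shift invariance -> state matrix
    lam = np.log(np.linalg.eigvals(A)) / dt  # continuous-time poles
    lam = lam[lam.imag > 0]                  # keep one of each conjugate pair
    freq = np.abs(lam) / (2 * np.pi)
    zeta = -lam.real / np.abs(lam)
    return freq, zeta

# Toy "measurement": a 0.5 Hz mode with 5% damping plus noise.
rng = np.random.default_rng(1)
t = np.arange(0, 60, 0.02)
y = np.exp(-0.05 * 2 * np.pi * 0.5 * t) * np.cos(2 * np.pi * 0.5 * t)
y += 0.02 * rng.standard_normal(t.size)
f, z = ssi_cov_modes(y, dt=0.02, order=2, lags=40)
```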
Abstract:
Drug resistance continues to be a major barrier to the delivery of curative therapies in cancer. Historically, drug resistance has been associated with over-expression of drug transporters, changes in drug kinetics or amplification of drug targets. However, the emergence of resistance in patients treated with new targeted therapies has provided new insight into the complexities underlying cancer drug resistance. Recent data now implicate intratumoural heterogeneity as a major driver of drug resistance. Single cell sequencing studies that identified multiple genetically distinct variants within individual tumours clearly demonstrate the heterogeneous nature of human cancers. The major contributors to intratumoural heterogeneity are (i) genetic variation, (ii) stochastic processes, (iii) the microenvironment and (iv) cell and tissue plasticity. Each of these factors impacts drug sensitivity. To deliver curative therapies to patients, modification of current therapeutic strategies to include methods that estimate intratumoural heterogeneity and plasticity will be essential.
Abstract:
Many insect clades, especially within the Diptera (true flies), have been considered classically ‘Gondwanan’, with an inference that distributions derive from vicariance of the southern continents. Assessing the role that vicariance has played in the evolution of austral taxa requires testing the location and tempo of diversification and speciation against the well-established predictions of fragmentation of the ancient supercontinent. Several early (anecdotal) hypotheses that current austral distributions originate from the breakup of Gondwana derive from studies of taxa within the family Chironomidae (non-biting midges). With the advent of molecular phylogenetics and biogeographic analytical software, these studies have been revisited and expanded to better test such conclusions. Here we studied the midge genus Stictocladius Edwards, from the subfamily Orthocladiinae, which contains austral-distributed clades that match vicariance-based expectations. We resolve several issues of systematic relationships among morphological species and reveal cryptic diversity within many taxa. Time-calibrated phylogenetic relationships among taxa accorded partially with the tempo predicted from geology. For these apparently vagile insects, vicariance-dated patterns persist for South America and Australia. However, as often found, divergence time estimates for New Zealand at c. 50 mya post-date the separation of Zealandia from Antarctica and the remainder of Gondwana, but predate the proposed Oligocene ‘drowning’ of these islands. We detail other such ‘anomalous’ dates and suggest a single common explanation rather than stochastic processes. This could involve synchronous establishment following recovery from ‘drowning’ and/or deleterious warming associated with the mid-Eocene climatic optimum (hence ‘waving’, referring to cycles of drowning events), plus the new availability of topography providing cool running waters, or all of these factors in combination. Alternatively, a vicariance explanation remains available, given the uncertain duration of connectivity of Zealandia to Australia–Antarctica–South America via the Lord Howe and Norfolk ridges into the Eocene.
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
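As a flavour of the kind of comparison involved, the following sketch simulates a discrete-time stochastic Lotka-Volterra system with Poisson demographic noise and contrasts two of the five strategies. All rates, initial sizes, costs, and thresholds are invented for illustration; they are not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate(strategy, years=50, reps=500):
    """Stochastic predator-prey dynamics with Poisson demographic noise;
    `strategy(pred)` returns the number of predators targeted for removal.
    Returns expected minimum prey size and prey extinction probability."""
    min_prey = np.empty(reps)
    for rep in range(reps):
        prey, pred = 200.0, 20.0
        low = prey
        for _ in range(years):
            births = rng.poisson(0.8 * prey * max(1.0 - prey / 1000.0, 0.0))
            eaten  = rng.poisson(0.01 * prey * pred)    # predation losses
            p_gain = rng.poisson(0.002 * prey * pred)   # predator recruitment
            p_die  = rng.poisson(0.2 * pred)            # predator deaths
            prey = max(prey + births - eaten, 0.0)
            pred = max(pred + p_gain - p_die, 0.0)
            pred = max(pred - strategy(pred), 0.0)      # apply control
            low = min(low, prey)
            if prey == 0:
                break
        min_prey[rep] = low
    return min_prey.mean(), (min_prey == 0).mean()

fixed_rate    = lambda pred: 0.3 * pred             # remove 30% each year
upper_trigger = lambda pred: max(pred - 15.0, 0.0)  # cull down to a threshold

for name, s in [("fixed-rate", fixed_rate), ("upper-trigger", upper_trigger)]:
    emp, pext = simulate(s)
    print(f"{name}: expected minimum prey = {emp:.0f}, "
          f"P(prey extinction) = {pext:.2f}")
```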
Abstract:
Threatened species often exist in a small number of isolated subpopulations. Given limitations on conservation spending, managers must choose from strategies that range from managing just one subpopulation and risking all others to managing all subpopulations equally but poorly, thereby risking the loss of all of them. We took an economic approach to this problem in an effort to discover a simple rule of thumb for optimally allocating conservation effort among subpopulations. This rule was derived by maximizing the expected number of extant subpopulations remaining, given that n subpopulations are actually managed. We also derived a spatiotemporally optimized strategy through stochastic dynamic programming. The rule of thumb suggested that more subpopulations should be managed if the budget increases or if the cost of reducing local extinction probabilities decreases. The rule performed well against the exact optimal strategy obtained from the stochastic dynamic program and much better than other simple strategies (e.g., always manage one extant subpopulation or half of the remaining subpopulations). We applied our approach to the allocation of funds in 2 contrasting case studies: reduction of poaching of Sumatran tigers (Panthera tigris sumatrae) and habitat acquisition for San Joaquin kit foxes (Vulpes macrotis mutica). For our estimated annual budget for Sumatran tiger management, the mean time to extinction was about 32 years. For our estimated annual management budget for kit foxes in the San Joaquin Valley, the mean time to extinction was approximately 24 years. Our framework allows managers to deal with the important question of how to allocate scarce conservation resources among subpopulations of any threatened species. © 2008 Society for Conservation Biology.
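A toy version of the rule-of-thumb calculation, assuming the budget B is split evenly among the n managed subpopulations and an illustrative diminishing-returns curve linking per-subpopulation spending to local extinction probability. The functional form and all numbers are assumptions, not the paper's.

```python
import numpy as np

def expected_survivors(n, N, B, p0=0.5, half_cost=10.0):
    """Expected number of extant subpopulations if n of N are managed
    with an even budget split. The local extinction probability falls
    from p0 toward 0 as per-subpopulation spending grows (illustrative
    diminishing-returns curve)."""
    spend = B / n
    p_managed = p0 * half_cost / (half_cost + spend)
    return n * (1 - p_managed) + (N - n) * (1 - p0)

# Sweep n to find the allocation that maximizes expected survivors.
N, B = 10, 60.0
best_n = max(range(1, N + 1), key=lambda n: expected_survivors(n, N, B))
print(best_n, expected_survivors(best_n, N, B))
```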
Abstract:
Mathematical descriptions of birth-death-movement processes are often calibrated to measurements from cell biology experiments to quantify tissue growth rates. Here we describe and analyze a discrete model of a birth-death-movement process applied to a typical two-dimensional cell biology experiment. We present three different descriptions of the system: (i) a standard mean-field description which neglects correlation effects and clustering; (ii) a moment dynamics description which approximately incorporates correlation and clustering effects; and (iii) averaged data from repeated discrete simulations which directly incorporates correlation and clustering effects. Comparing these three descriptions indicates that the mean-field and moment dynamics approaches are valid only for certain parameter regimes, and that both these descriptions fail to make accurate predictions of the system for sufficiently fast birth and death rates where the effects of spatial correlations and clustering are sufficiently strong. Without any method to distinguish between the parameter regimes where these three descriptions are valid, it is possible that either the mean-field or moment dynamics model could be calibrated to experimental data under inappropriate conditions, leading to errors in parameter estimation. In this work we demonstrate that a simple measurement of agent clustering and correlation, based on coordination number data, provides an indirect measure of agent correlation and clustering effects, and can therefore be used to make a distinction between the validity of the different descriptions of the birth-death-movement process.
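The sketch below illustrates the class of discrete models described: a random sequential, exclusion-based birth-death-movement process on a periodic lattice, together with the coordination-number measurement. Rates, lattice size, and the simplified time-stepping are illustrative, not the paper's parameterisation, and clarity is favoured over speed.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate(L=50, t_end=10.0, m=1.0, p=0.1, d=0.05, init=0.05):
    """Birth-death-movement process on an L x L periodic lattice: an agent
    moves or proliferates into a random nearest-neighbour site (aborted if
    the site is occupied: exclusion), or dies. Returns the occupancy grid."""
    grid = (rng.random((L, L)) < init).astype(np.int8)
    steps = np.array([(0, 1), (0, -1), (1, 0), (-1, 0)])
    t, total = 0.0, m + p + d
    while t < t_end:
        agents = np.argwhere(grid == 1)
        n = len(agents)
        if n == 0:
            break
        t += 1.0 / (total * n)               # Gillespie-style mean increment
        i, j = agents[rng.integers(n)]       # pick a random agent
        u = rng.random() * total             # pick an event by its rate
        di, dj = steps[rng.integers(4)]
        ti, tj = (i + di) % L, (j + dj) % L
        if u < m:                            # movement attempt
            if grid[ti, tj] == 0:
                grid[i, j], grid[ti, tj] = 0, 1
        elif u < m + p:                      # proliferation attempt
            if grid[ti, tj] == 0:
                grid[ti, tj] = 1
        else:                                # death
            grid[i, j] = 0
    return grid

def mean_coordination(grid):
    """Average number of occupied nearest neighbours per agent: an
    indirect measure of clustering and correlation."""
    nbrs = sum(np.roll(grid, s, axis=a) for a in (0, 1) for s in (1, -1))
    return nbrs[grid == 1].mean()

g = simulate()
print("density:", g.mean(), "coordination number:", mean_coordination(g))
```

In the mean-field limit this process approaches density 1 - d/p; clustering makes the discrete simulation deviate from that prediction, which is exactly what the coordination number is meant to track.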
Abstract:
Outdoor robots such as planetary rovers must be able to navigate safely and reliably in order to successfully perform missions in remote or hostile environments. Mobility prediction is critical to achieving this goal due to the inherent control uncertainty faced by robots traversing natural terrain. We propose a novel algorithm for stochastic mobility prediction based on multi-output Gaussian process regression. Our algorithm considers the correlation between heading and distance uncertainty and provides a predictive model that can easily be exploited by motion planning algorithms. We evaluate our method experimentally and report results from over 30 trials in a Mars-analogue environment that demonstrate the effectiveness of our method and illustrate the importance of mobility prediction in navigating challenging terrain.
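One common way to realise correlated multi-output GP regression is the intrinsic coregionalization model, sketched below for two outputs (say, heading and distance error) as functions of a single terrain feature. The kernel, the coregionalization matrix B, and the toy data are assumptions; the paper's actual model may differ.

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Squared-exponential kernel between two sets of input points."""
    d2 = (X1[:, None, :] - X2[None, :, :]) ** 2
    return np.exp(-0.5 * d2.sum(-1) / ell**2)

def icm_predict(X, Y, Xs, B, ell=1.0, noise=1e-2):
    """Intrinsic coregionalization model: cov of the stacked outputs is
    kron(B, k(X, X)). Y has one column per output; returns the predictive
    means at Xs, one column per output."""
    n, m = Y.shape
    K  = np.kron(B, rbf(X, X, ell)) + noise * np.eye(n * m)
    Ks = np.kron(B, rbf(Xs, X, ell))
    alpha = np.linalg.solve(K, Y.T.reshape(-1))  # outputs stacked blockwise
    mu = Ks @ alpha
    return mu.reshape(m, -1).T                   # back to shape (n*, m)

# Toy data: two correlated error signals as functions of terrain slope.
rng = np.random.default_rng(4)
X  = rng.uniform(0, 5, (30, 1))
f1 = np.sin(X[:, 0])
f2 = 0.8 * np.sin(X[:, 0]) + 0.2 * np.cos(X[:, 0])
Y  = np.c_[f1, f2] + 0.05 * rng.standard_normal((30, 2))
B  = np.array([[1.0, 0.8], [0.8, 1.0]])          # assumed output correlation
Xs = np.linspace(0, 5, 50)[:, None]
pred = icm_predict(X, Y, Xs, B)
```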
Abstract:
Modelling fluvial processes is an effective way to reproduce basin evolution and to recreate riverbed morphology. However, due to the complexity of alluvial environments, deterministic modelling of fluvial processes is often impossible. To address the related uncertainties, we derive a stochastic fluvial process model on the basis of the convective Exner equation that uses the statistics (mean and variance) of river velocity as input parameters. These statistics allow for quantifying the uncertainty in riverbed topography, river discharge and position of the river channel. In order to couple the velocity statistics and the fluvial process model, the perturbation method is employed with a non-stationary spectral approach to decompose the Exner equation into two separate equations: the first is the mean equation, which yields the mean sediment thickness, and the second is the perturbation equation, which yields the variance of sediment thickness. The resulting solutions offer an effective tool to characterize alluvial aquifers resulting from fluvial processes, one that allows the stochasticity of the paleoflow velocity to be incorporated.
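Schematically, and under the assumption of a one-dimensional convective form (the paper's exact equations and spectral closure may differ), the decomposition reads:

```latex
% Convective Exner equation with random velocity u and bed elevation \eta:
\partial_t \eta + u\,\partial_x \eta = f(x,t), \qquad
u = \bar{u} + u', \quad \eta = \bar{\eta} + \eta' .
% Taking expectations gives the mean equation (the covariance term is
% where a closure such as the non-stationary spectral approach enters):
\partial_t \bar{\eta} + \bar{u}\,\partial_x \bar{\eta}
  = \bar{f} - \overline{u'\,\partial_x \eta'} .
% Subtracting it gives the first-order perturbation equation, whose
% second moments yield the variance of sediment thickness:
\partial_t \eta' + \bar{u}\,\partial_x \eta' + u'\,\partial_x \bar{\eta}
  \approx f' .
```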
Abstract:
Aït-Sahalia (2002) introduced a method to estimate transitional probability densities of diffusion processes by means of Hermite expansions with coefficients determined by means of Taylor series. This note describes a numerical procedure to find these coefficients based on the calculation of moments. One advantage of this procedure is that it can be used effectively when the mathematical operations required to find closed-form expressions for these coefficients are otherwise infeasible.
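Concretely, for an expansion of the form p(z) ≈ φ(z) Σ_j η_j He_j(z) with probabilists' Hermite polynomials He_j, orthogonality gives η_j = E[He_j(Z)]/j!, which is a linear combination of raw moments. Below is a minimal sketch of that computation; the expansion form is the standard one, while variable names are illustrative.

```python
import numpy as np
from numpy.polynomial.hermite_e import herme2poly
from math import factorial

def hermite_coeffs_from_moments(moments):
    """Coefficients eta_j = E[He_j(Z)] / j! of the Hermite expansion
    p(z) ~ phi(z) * sum_j eta_j * He_j(z), computed from raw moments
    moments[k] = E[Z**k] (so moments[0] = 1)."""
    J = len(moments) - 1
    eta = np.empty(J + 1)
    for j in range(J + 1):
        e = np.zeros(j + 1)
        e[j] = 1.0
        power_coeffs = herme2poly(e)   # He_j in the monomial basis
        eta[j] = power_coeffs @ moments[: j + 1] / factorial(j)
    return eta

# Sanity check: for Z ~ N(0, 1), eta_0 = 1 and all higher eta_j vanish.
m = np.array([1.0, 0.0, 1.0, 0.0, 3.0, 0.0, 15.0])
print(hermite_coeffs_from_moments(m))
```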
Abstract:
Financial processes may possess long memory and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects the jumps in the sample paths. On the other hand, the presence of long memory, which contradicts the efficient market hypothesis, is still an issue for further debate. These difficulties present challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges.

The first part aims to detect memory in a large number of financial time series on stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of a technique known as multifractal detrended fluctuation analysis (MF-DFA) that can systematically eliminate trends of different orders. This method is based on the identification of the scaling of the q-th-order moments and is a generalisation of the standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider the rescaled-range (R/S) analysis and the periodogram method to detect memory in financial time series and compare their results with those of MF-DFA. An interesting finding is that short memory is detected for stock prices of the American Stock Exchange (AMEX), while long memory is found in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of five states of Australia are also found to possess long memory, and heavy tails are pronounced in their probability densities.

The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I. These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. By imposing appropriate conditions on this measure, short memory or long memory in the dynamics of the solution will result. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for this type of model is performed via least squares, and the models are applied to the stock prices in the AMEX, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. This type of equation is used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely via a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA.

The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of data sets and then provide cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.

The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from the R/S analysis, the periodogram method and MF-DFA are provided. The results from the fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second moment, seem to underestimate the long-memory dynamics of the process. This highlights the need for and usefulness of fractal methods in modelling non-Gaussian financial processes with long memory.
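Since MF-DFA is the workhorse of Parts I and IV, the sketch below shows the core of the method for a single q: build the profile, detrend fixed-size segments polynomially, average the q-th-order fluctuations across scales, and read the generalised Hurst exponent h(q) off the log-log slope. The scale choices are illustrative, and the estimator shown assumes q ≠ 0.

```python
import numpy as np

def mfdfa_hurst(x, scales, q=2, order=1):
    """Generalised Hurst exponent h(q) by multifractal detrended
    fluctuation analysis (for q != 0)."""
    y = np.cumsum(x - np.mean(x))                  # profile of the series
    F = []
    for s in scales:
        n_seg = len(y) // s
        segs = y[: n_seg * s].reshape(n_seg, s)    # non-overlapping segments
        t = np.arange(s)
        rms = np.empty(n_seg)
        for i, seg in enumerate(segs):
            trend = np.polyval(np.polyfit(t, seg, order), t)
            rms[i] = np.sqrt(np.mean((seg - trend) ** 2))
        F.append(np.mean(rms ** q) ** (1.0 / q))   # q-th-order fluctuation
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

# Uncorrelated noise should give h(2) close to 0.5 (no memory).
rng = np.random.default_rng(5)
noise = rng.standard_normal(10000)
scales = np.array([16, 32, 64, 128, 256, 512])
print(mfdfa_hurst(noise, scales))
```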