969 results for Stochastic Processes
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This volume presents a collection of papers covering applications from a wide range of systems with infinitely many degrees of freedom studied using techniques from stochastic and infinite-dimensional analysis, e.g. Feynman path integrals, the statistical mechanics of polymer chains, complex networks, and quantum field theory. Systems with infinitely many degrees of freedom pose mathematical challenges of their own, which have been addressed by different mathematical theories, namely the theories of stochastic processes, Malliavin calculus, and especially white noise analysis. These proceedings are inspired by a conference held on the occasion of Prof. Ludwig Streit’s 75th birthday and celebrate his pioneering and ongoing work in these fields.
Abstract:
This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. The paper includes studies of representative vehicles drawn from the large volume of WIM data in Hong Kong. Load-affecting parameters such as gross vehicle weights, axle weights, axle spacings, and average daily numbers of trucks are first analyzed using various stochastic process models in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings in Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling collected WIM data using a statistical approach.
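A minimal sketch of the distribution-fitting step described above, applied to gross vehicle weights; the abstract does not specify the fitted families, so the lognormal choice, the placeholder data, and all variable names here are assumptions:

```python
# Sketch: fit a candidate distribution to gross vehicle weights (GVW)
# from WIM records. The lognormal family is assumed for illustration;
# the paper does not state which distributions were fitted.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
gvw_tonnes = rng.lognormal(mean=3.0, sigma=0.4, size=5000)  # placeholder data

# Maximum-likelihood fit with the location parameter fixed at zero.
shape, loc, scale = stats.lognorm.fit(gvw_tonnes, floc=0)

# Goodness of fit via a Kolmogorov-Smirnov test.
ks_stat, p_value = stats.kstest(gvw_tonnes, "lognorm", args=(shape, loc, scale))
print(f"lognormal fit: shape={shape:.3f}, scale={scale:.2f}, KS p={p_value:.3f}")

# A design-load study would repeat this for axle weights, spacings, daily
# truck counts, etc., and extrapolate tail quantiles to the design return period.
```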
Abstract:
This paper presents a novel framework for the modelling of passenger facilitation in a complex environment. The research is motivated by the challenges in the airport complex system, where there are multiple stakeholders, differing operational objectives, and complex interactions and interdependencies between different parts of the airport system. Traditional methods for airport terminal modelling do not explicitly address the need for understanding causal relationships in a dynamic environment. Additionally, existing Bayesian Network (BN) models, which provide a means for capturing causal relationships, only present a static snapshot of a system. A method to integrate a BN complex systems model with stochastic queuing theory is developed based on the properties of the Poisson and Exponential distributions. The resultant Hybrid Queue-based Bayesian Network (HQBN) framework enables the simulation of arbitrary factors, their relationships, and their effects on passenger flow, and vice versa. A case study implementation of the framework is demonstrated on the inbound passenger facilitation process at Brisbane International Airport. The predicted outputs of the model, in terms of cumulative passenger flow at intermediary and end points in the inbound process, are found to have an $R^2$ goodness of fit of 0.9994 and 0.9982, respectively, over a 10-hour test period. The utility of the framework is demonstrated on a number of usage scenarios, including real-time monitoring and 'what-if' analysis. This framework provides the ability to analyse and simulate a dynamic complex system, and can be applied to other socio-technical systems such as hospitals.
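A hedged sketch of the queueing building block such a hybrid framework rests on: Poisson arrivals whose rate is switched by a discrete upstream factor (standing in for a BN parent node), served at an exponential rate. The rates, the peak/off-peak rule, and all names are assumptions, not the paper's model:

```python
# Sketch of a Poisson-arrival / exponential-service queue whose arrival
# rate depends on a discrete upstream factor (a stand-in for a Bayesian
# network parent node). All rates and names are assumptions.
import random

random.seed(1)

def simulate_queue(hours=10.0, mu=60.0):
    """Event-driven M/M/1-style queue; returns cumulative throughput."""
    t, queue, served = 0.0, 0, 0
    while t < hours:
        # BN-style parent: peak vs off-peak switches the arrival rate.
        lam = 80.0 if int(t) % 3 == 0 else 30.0   # arrivals per hour
        rate = lam + (mu if queue > 0 else 0.0)
        t += random.expovariate(rate)             # time to next event
        if random.random() < lam / rate:
            queue += 1                            # arrival
        elif queue > 0:
            queue -= 1                            # service completion
            served += 1
    return served

print("passengers processed in 10 h:", simulate_queue())
```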
Abstract:
Dengue is the most prevalent arthropod-borne virus, with at least 40% of the world’s population at risk of infection each year. In Australia, dengue is not endemic, but viremic travelers trigger outbreaks involving hundreds of cases. We compared the susceptibility of Aedes aegypti mosquitoes from two geographically isolated populations to two strains of dengue virus serotype 2. Interestingly, we found that mosquitoes from a city with no history of dengue were more susceptible to the virus than mosquitoes from an outbreak-prone region, particularly with respect to one dengue strain. These findings suggest recent evolution of population-based differences in vector competence or different historical origins. Future genomic comparisons of these populations could reveal the genetic basis of vector competence and the relative roles of selection and stochastic processes in shaping their differences. Lastly, we report the novel finding of a correlation between midgut dengue titer and titer in tissues colonized after dissemination.
Abstract:
Many model-based investigation techniques, such as sensitivity analysis, optimization, and statistical inference, require a large number of model evaluations to be performed at different input and/or parameter values. This limits the application of these techniques to models that can be implemented in computationally efficient computer codes. Emulators, by providing efficient interpolation between outputs of deterministic simulation models, can considerably extend the field of applicability of such computationally demanding techniques. So far, the dominant technique for developing emulators has been to use priors in the form of Gaussian stochastic processes (GASP) conditioned on a design data set of inputs and corresponding model outputs. In the context of dynamic models, this approach has two essential disadvantages: (i) these emulators do not consider our knowledge of the structure of the model, and (ii) they run into numerical difficulties if there is a large number of closely spaced input points, as is often the case in the time dimension of dynamic models. To address both of these problems, a new concept for developing emulators for dynamic models is proposed. This concept is based on a prior that combines a simplified linear state space model of the temporal evolution of the dynamic model with Gaussian stochastic processes for the innovation terms as functions of model parameters and/or inputs. These innovation terms are intended to correct the error of the linear model at each output step. Conditioning this prior on the design data set is done by Kalman smoothing. This leads to an efficient emulator that, owing to the consideration of our knowledge about dominant mechanisms built into the simulation model, can be expected to outperform purely statistical emulators, at least in cases in which the design data set is small. The feasibility and potential difficulties of the proposed approach are demonstrated by application to a simple hydrological model.
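For contrast with the proposed state-space approach, a minimal sketch of the plain GASP emulator the abstract describes as the dominant technique: a Gaussian-process prior conditioned on a design data set and used to interpolate the simulator cheaply. The kernel choice, the toy "simulator", and all values are assumptions:

```python
# Sketch of a plain GASP emulator: a GP prior conditioned on a design
# set of (input, output) pairs, then used to interpolate the simulator.
import numpy as np

def kernel(a, b, ell=0.3, var=1.0):
    """Squared-exponential covariance between 1-D input arrays a and b."""
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ell) ** 2)

simulator = lambda x: np.sin(6 * x) + 0.5 * x       # stand-in for the model
x_design = np.linspace(0.0, 1.0, 8)                 # design data set
y_design = simulator(x_design)

x_new = np.linspace(0.0, 1.0, 101)                  # emulation points
K = kernel(x_design, x_design) + 1e-10 * np.eye(8)  # jitter for stability
K_star = kernel(x_new, x_design)

# GP posterior mean K_* K^{-1} y: the emulator's cheap prediction.
y_emul = K_star @ np.linalg.solve(K, y_design)
print("max interpolation error:", np.abs(y_emul - simulator(x_new)).max())
```

The closely-spaced-inputs problem the abstract mentions shows up here as an ill-conditioned K; the paper's Kalman-smoothing construction avoids forming that matrix.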
Abstract:
This study presents a general approach to identifying dominant oscillation modes in bulk power systems using a wide-area measurement system. To identify the dominant modes automatically, without manual intervention, the spectral characteristics of power system oscillation modes are used to distinguish the electromechanical oscillation modes computed by a stochastic subspace method; a proposed mode-matching pursuit then discriminates the dominant modes from the trivial ones, and a stepwise-refinement scheme removes outliers so that highly accurate estimates of the dominant modes are obtained. The method is applied to the dominant modes of the China Southern Power Grid, one of the largest AC/DC parallel grids in the world. Simulation data and field-measurement data are used to demonstrate the high accuracy and robustness of the dominant-mode identification approach.
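A hedged sketch of the covariance-driven stochastic subspace step on a synthetic ringdown, recovering one mode's frequency and damping; the signal parameters and model order are assumptions, and the paper's mode-matching pursuit and stepwise refinement are not reproduced here:

```python
# Sketch of covariance-driven stochastic subspace identification (SSI-COV)
# on a synthetic one-mode ringdown with measurement noise.
import numpy as np

dt, n = 0.02, 4000
t = np.arange(n) * dt
f0, zeta = 0.7, 0.05                        # assumed inter-area-like mode
wd = 2 * np.pi * f0 * np.sqrt(1 - zeta**2)
rng = np.random.default_rng(0)
y = np.exp(-zeta * 2 * np.pi * f0 * t) * np.cos(wd * t)
y += 0.02 * rng.standard_normal(n)          # measurement noise

# Block Hankel matrices of output covariances.
lags = 40
r = np.array([np.dot(y[: n - k], y[k:]) / (n - k) for k in range(2 * lags)])
H0 = np.array([[r[i + j] for j in range(lags)] for i in range(lags)])
H1 = np.array([[r[i + j + 1] for j in range(lags)] for i in range(lags)])

# SVD truncated to model order 2 (one mode), then shift-invariance for A.
U, s, Vt = np.linalg.svd(H0)
order = 2
O = U[:, :order] * np.sqrt(s[:order])         # observability factor
C = np.sqrt(s[:order])[:, None] * Vt[:order]  # controllability factor
A = np.linalg.pinv(O) @ H1 @ np.linalg.pinv(C)

lam = np.log(np.linalg.eigvals(A)) / dt       # continuous-time poles
f_est = abs(lam.imag[0]) / (2 * np.pi)
zeta_est = -lam.real[0] / abs(lam[0])
print(f"estimated mode: f = {f_est:.3f} Hz, damping = {zeta_est:.3f}")
```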
Abstract:
Drug resistance continues to be a major barrier to the delivery of curative therapies in cancer. Historically, drug resistance has been associated with over-expression of drug transporters, changes in drug kinetics, or amplification of drug targets. However, the emergence of resistance in patients treated with new targeted therapies has provided new insight into the complexities underlying cancer drug resistance. Recent data now implicate intratumoural heterogeneity as a major driver of drug resistance. Single-cell sequencing studies that identified multiple genetically distinct variants within human tumours clearly demonstrate their heterogeneous nature. The major contributors to intratumoural heterogeneity are (i) genetic variation, (ii) stochastic processes, (iii) the microenvironment, and (iv) cell and tissue plasticity. Each of these factors affects drug sensitivity. To deliver curative therapies to patients, modification of current therapeutic strategies to include methods that estimate intratumoural heterogeneity and plasticity will be essential.
Abstract:
Many insect clades, especially within the Diptera (true flies), have been considered classically ‘Gondwanan’, with the inference that their distributions derive from vicariance of the southern continents. Assessing the role that vicariance has played in the evolution of austral taxa requires testing the location and tempo of diversification and speciation against the well-established predictions of fragmentation of the ancient supercontinent. Several early (anecdotal) hypotheses that current austral distributions originate from the breakup of Gondwana derive from studies of taxa within the family Chironomidae (non-biting midges). With the advent of molecular phylogenetics and biogeographic analytical software, these studies have been revisited and expanded to better test such conclusions. Here we studied the midge genus Stictocladius Edwards, from the subfamily Orthocladiinae, which contains austral-distributed clades that match vicariance-based expectations. We resolve several issues of systematic relationships among morphological species and reveal cryptic diversity within many taxa. Time-calibrated phylogenetic relationships among taxa accorded partially with the tempo predicted by geology. For these apparently vagile insects, vicariance-dated patterns persist for South America and Australia. However, as often found, divergence time estimates for New Zealand at c. 50 mya post-date the separation of Zealandia from Antarctica and the remainder of Gondwana, but predate the proposed Oligocene ‘drowning’ of these islands. We detail other such ‘anomalous’ dates and suggest a single common explanation rather than stochastic processes. This could involve synchronous establishment following recovery from ‘drowning’ and/or deleterious warming associated with the mid-Eocene climatic optimum (hence ‘waving’, which refers to cycles of drowning events), plus the new availability of topography providing cool running waters, or all these factors in combination. Alternatively, a vicariance explanation remains available, given the uncertain duration of connectivity of Zealandia to Australia–Antarctica–South America via the Lord Howe and Norfolk ridges into the Eocene.
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We first examined the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption, immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in the performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be met most easily and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
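A hedged sketch of the kind of comparison run in this study: a stochastic Lotka-Volterra model under one of the strategies (fixed-rate predator removal), scored by expected minimum prey population. All parameter values are assumptions, not those of the paper:

```python
# Sketch: stochastic Lotka-Volterra predator-prey dynamics with a
# fixed-rate predator removal strategy; performance is measured by the
# expected minimum prey population over the horizon.
import random

random.seed(2)

def run(removal_rate, years=50, reps=500):
    total_min = 0.0
    for _ in range(reps):
        prey, pred, min_prey = 100.0, 20.0, 100.0
        for _ in range(years):
            # Euler-step Lotka-Volterra with lognormal environmental noise.
            noise = random.lognormvariate(0.0, 0.2)
            prey += (0.5 * prey - 0.02 * prey * pred) * noise
            pred += (0.002 * prey * pred - 0.3 * pred) * noise
            pred -= removal_rate * pred           # fixed-rate control
            prey, pred = max(prey, 0.0), max(pred, 0.0)
            min_prey = min(min_prey, prey)
        total_min += min_prey
    return total_min / reps

for rate in (0.0, 0.2, 0.5):
    print(f"removal rate {rate:.1f}: expected minimum prey = {run(rate):.1f}")
```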
Abstract:
Threatened species often exist in a small number of isolated subpopulations. Given limitations on conservation spending, managers must choose from strategies that range from managing just one subpopulation and risking all others to managing all subpopulations equally and poorly, thereby risking the loss of all of them. We took an economic approach to this problem in an effort to discover a simple rule of thumb for optimally allocating conservation effort among subpopulations. This rule was derived by maximizing the expected number of extant subpopulations given that n subpopulations are actually managed. We also derived a spatiotemporally optimized strategy through stochastic dynamic programming. The rule of thumb suggested that more subpopulations should be managed if the budget increases or if the cost of reducing local extinction probabilities decreases. The rule performed well against the exact optimal strategy obtained from the stochastic dynamic program and much better than other simple strategies (e.g., always manage one extant subpopulation, or manage half of the remaining subpopulations). We applied our approach to the allocation of funds in 2 contrasting case studies: reduction of poaching of Sumatran tigers (Panthera tigris sumatrae) and habitat acquisition for San Joaquin kit foxes (Vulpes macrotis mutica). For our estimated annual budget for Sumatran tiger management, the mean time to extinction was about 32 years. For our estimated annual management budget for kit foxes in the San Joaquin Valley, the mean time to extinction was approximately 24 years. Our framework allows managers to deal with the important question of how to allocate scarce conservation resources among subpopulations of any threatened species. © 2008 Society for Conservation Biology.
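A minimal sketch of the rule-of-thumb calculation: split a fixed budget over n managed subpopulations and pick the n that maximizes the expected number extant. The diminishing-returns response function and all values are assumed placeholders, not the paper's estimates:

```python
# Sketch: choose how many of N subpopulations to manage so as to
# maximize the expected number remaining extant under a fixed budget.
import math

N = 10                 # subpopulations
budget = 6.0           # total funds (arbitrary units)
p_unmanaged = 0.4      # extinction probability if unmanaged (assumed)

def p_managed(spend):
    """Assumed diminishing-returns response of extinction risk to spending."""
    return p_unmanaged * math.exp(-1.5 * spend)

def expected_extant(n):
    spend_each = budget / n                      # split the budget evenly
    return n * (1 - p_managed(spend_each)) + (N - n) * (1 - p_unmanaged)

best = max(range(1, N + 1), key=expected_extant)
for n in (1, best, N):
    print(f"manage {n:2d} subpopulations -> expected extant = {expected_extant(n):.2f}")
```

Raising `budget` or steepening the response function shifts the optimum toward managing more subpopulations, matching the qualitative rule stated in the abstract.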
Abstract:
Evolutionarily stable sex ratios are determined for social hymenoptera under local mate competition (LMC) and when the brood size is finite. LMC is modelled by the parameter d: of the reproductive progeny from a single-foundress nest, a fraction d disperses (outbreeding), while the remaining fraction (1-d) mates within the nest (sibmating). When the brood size is finite, d is taken to be the probability of an offspring dispersing, and similarly r, the proportion of male offspring, is taken to be the probability of a haploid egg being laid. Under the joint influence of these two stochastic processes, there is a nonzero probability that some females remain unmated in the nest. As a result, the optimal proportion of males (corresponding to the evolutionarily stable strategy, ESS) is higher than that obtained when the brood size is infinite. When the queen controls the sex ratio, the ESS becomes more female biased under increased inbreeding (lower d). However, the ESS under worker control shows an unexpected pattern, including an increase in the proportion of males with increased inbreeding. This effect is traced to the complex interaction between inbreeding and local mate competition.
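A worked sketch of the finite-brood effect described above: if each of m offspring is independently male with probability r and disperses with probability d, the probability that no male stays in the nest, leaving all resident females unmated, is (1 - r(1-d))^m. The parameter values are assumptions:

```python
# Sketch: probability that no non-dispersing male remains in the nest.
# Each offspring is a resident male with probability r * (1 - d), so
# P(no resident male) = (1 - r*(1-d))**m under independence.
m, r, d = 20, 0.2, 0.5   # brood size, male proportion, dispersal (assumed)

p_no_resident_male = (1 - r * (1 - d)) ** m
print(f"P(all resident females unmated) = {p_no_resident_male:.4f}")

# The probability vanishes as brood size grows, recovering the
# infinite-brood limit in which no female risks remaining unmated.
for m in (5, 20, 100):
    print(m, round((1 - r * (1 - d)) ** m, 6))
```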
Abstract:
Purpose: The authors aim at developing a pseudo-time, sub-optimal stochastic filtering approach based on a derivative-free variant of the ensemble Kalman filter (EnKF) for solving the inverse problem of diffuse optical tomography (DOT), while making use of a shape-based reconstruction strategy that enables representing a cross section of an inhomogeneous tumor boundary by a general closed curve. Methods: The optical parameter fields to be recovered are approximated via an expansion based on circular harmonics (CH) (Fourier basis functions), and the EnKF is used to recover the coefficients in the expansion with both simulated and experimentally obtained photon fluence data on phantoms with inhomogeneous inclusions. The process and measurement equations in the pseudo-dynamic EnKF (PD-EnKF) presently yield a parsimonious representation of the filter variables, which consist of only the Fourier coefficients and the constant scalar parameter value within the inclusion. Using fictitious, low-intensity Wiener noise processes in suitably constructed "measurement" equations, the filter variables are treated as pseudo-stochastic processes so that their recovery within a stochastic filtering framework is made possible. Results: In our numerical simulations, we have considered both elliptical inclusions (two inhomogeneities) and those with more complex shapes (such as an annular ring and a dumbbell) in 2-D objects which are cross sections of a cylinder, with background absorption and (reduced) scattering coefficients chosen as $\mu_a^b = 0.01\,\mathrm{mm}^{-1}$ and $\mu_s^{\prime b} = 1.0\,\mathrm{mm}^{-1}$, respectively. We also assume $\mu_a = 0.02\,\mathrm{mm}^{-1}$ within the inhomogeneity (for the single-inhomogeneity case) and $\mu_a = 0.02$ and $0.03\,\mathrm{mm}^{-1}$ (for the two-inhomogeneity case). The reconstruction results from the PD-EnKF are shown to be consistently superior to those from a deterministic and explicitly regularized Gauss-Newton algorithm. We have also estimated the unknown $\mu_a$ from experimentally gathered fluence data and verified the reconstruction by matching the experimental data with the computed one. Conclusions: The PD-EnKF, which exhibits little sensitivity to variations in the fictitiously introduced noise processes, is also proven to be accurate and robust in recovering a spatial map of the absorption coefficient from DOT data. With the help of shape-based representation of the inhomogeneities and an appropriate scaling of the CH expansion coefficients representing the boundary, we have been able to recover inhomogeneities representative of the shape of malignancies in medical diagnostic imaging. (C) 2012 American Association of Physicists in Medicine. [DOI: 10.1118/1.3679855]
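A minimal sketch of the derivative-free EnKF analysis step at the core of such a pseudo-dynamic filter, with a stand-in linear forward map in place of the DOT photon-transport model; the dimensions, iteration count, and noise levels are assumptions:

```python
# Sketch of a stochastic EnKF update iterated in pseudo-time: the
# parameter vector (e.g. CH coefficients plus one interior value) is
# nudged toward the observed data without computing derivatives.
import numpy as np

rng = np.random.default_rng(3)
n_param, n_meas, n_ens = 9, 16, 100          # assumed problem sizes

H = rng.standard_normal((n_meas, n_param))   # stand-in linear forward model
x_true = rng.standard_normal(n_param)
y_obs = H @ x_true + 0.01 * rng.standard_normal(n_meas)

X = rng.standard_normal((n_param, n_ens))    # prior ensemble
for _ in range(50):                          # pseudo-time iterations
    Y = H @ X                                # predicted measurements
    Xc = X - X.mean(axis=1, keepdims=True)   # parameter anomalies
    Yc = Y - Y.mean(axis=1, keepdims=True)   # measurement anomalies
    # Kalman gain from ensemble covariances (derivative-free).
    K = (Xc @ Yc.T) @ np.linalg.inv(Yc @ Yc.T + 1e-4 * np.eye(n_meas))
    perturbed = y_obs[:, None] + 0.01 * rng.standard_normal((n_meas, n_ens))
    X = X + K @ (perturbed - Y)              # analysis update

print("parameter error:", np.linalg.norm(X.mean(axis=1) - x_true))
```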
Abstract:
Background: There has been growing interest in integrative taxonomy, which uses data from multiple disciplines for species delimitation. Typically, in such studies, monophyly is taken as a proxy for taxonomic distinctiveness and monophyletic units are treated as potential species. However, monophyly can also arise through stochastic processes. Here, therefore, we have employed a recently developed tool based on the coalescent approach to ascertain the taxonomic distinctiveness of various monophyletic units. Subsequently, the species status of these taxonomic units was further tested using corroborative evidence from morphology and ecology. This interdisciplinary approach was applied to endemic centipedes of the genus Digitipes (Attems 1930) from the Western Ghats (WG) biodiversity hotspot of India. The species of the genus Digitipes are morphologically conserved, despite their ancient late Cretaceous origin. Principal Findings: Our coalescent analysis based on the mitochondrial dataset indicated the presence of nine putative species. The integrative approach, which drew on nuclear, morphological, and climate datasets, supported the distinctiveness of eight putative species, of which three represent described species and five are new species. Among the five new species, three were morphologically cryptic, emphasizing the effectiveness of this approach in discovering cryptic diversity in less explored areas of the tropics such as the WG. In addition, species pairs showed variable divergence along the molecular, morphological, and climate axes. Conclusions: The multidisciplinary approach illustrated here succeeded in discovering cryptic diversity, indicating that current estimates of invertebrate species richness for the WG may be underestimates. Additionally, the importance of measuring multiple secondary properties of species while defining species boundaries was highlighted, given the variable divergence of each species pair across disciplines.
Abstract:
We study coverage in sensor networks having two types of nodes, namely sensor nodes and backbone nodes. Each sensor is capable of transmitting information over relatively small distances. The backbone nodes collect information from the sensors; this information is processed and communicated over an ad hoc network formed by the backbone nodes, which are capable of transmitting over much larger distances. We consider two models of deployment for the sensor and backbone nodes: one is a Poisson-Poisson cluster model and the other a dependently thinned Poisson point process. We deduce limit laws for functionals of vacancy in both models using properties of association for random measures.
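A hedged sketch of vacancy in the simplest Poisson Boolean model of sensor coverage, checked against the exact vacancy probability $e^{-\lambda \pi r^2}$; the Poisson-Poisson cluster and thinned-Poisson variants analysed in the paper are not reproduced, and all parameters are assumptions:

```python
# Sketch: Monte Carlo estimate of vacancy (the uncovered area fraction)
# in a homogeneous Poisson Boolean model, vs. the exact exp(-lam*pi*r^2).
import numpy as np

rng = np.random.default_rng(4)
lam, r, side = 2.0, 0.4, 10.0               # sensor intensity, range, region
n = rng.poisson(lam * side * side)          # Poisson number of sensors
sensors = rng.uniform(0, side, size=(n, 2))

pts = rng.uniform(0, side, size=(5000, 2))  # random query points
d = np.abs(pts[:, None, :] - sensors[None, :, :])
d = np.minimum(d, side - d)                 # torus metric avoids edge effects
is_covered = ((d ** 2).sum(axis=2) <= r ** 2).any(axis=1)

print("Monte Carlo vacancy:", 1 - is_covered.mean())
print("exact exp(-lam*pi*r^2):", np.exp(-lam * np.pi * r ** 2))
```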