979 results for spatial Markov chains
Abstract:
We consider binary infinite order stochastic chains perturbed by a random noise. This means that at each time step, the value assumed by the chain can be randomly and independently flipped with a small fixed probability. We show that the transition probabilities of the perturbed chain are uniformly close to the corresponding transition probabilities of the original chain. As a consequence, in the case of stochastic chains with unbounded but otherwise finite variable length memory, we show that it is possible to recover the context tree of the original chain, using a suitable version of the algorithm Context, provided that the noise is small enough.
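The noise model in this abstract is easy to state concretely: at each time step the symbol emitted by the chain is flipped independently with a small fixed probability. A minimal illustrative sketch (function names, sequence lengths and the flip probability are ours, not the paper's):

```python
import random

def perturb(seq, eps, rng=None):
    """Flip each symbol of a binary sequence independently with probability eps.

    This is the noise model from the abstract: at every time step the chain's
    value may be flipped with a small fixed probability, independently of
    everything else.
    """
    rng = rng or random.Random(0)
    return [1 - x if rng.random() < eps else x for x in seq]

# Example: a binary sequence observed through a 5% symmetric noise channel.
rng = random.Random(42)
clean = [rng.randint(0, 1) for _ in range(10_000)]
noisy = perturb(clean, eps=0.05, rng=random.Random(7))
flips = sum(a != b for a, b in zip(clean, noisy))
```

Because the flips are rare and independent, empirical transition counts estimated from `noisy` stay close to those of `clean`, which is the intuition behind recovering the context tree from the perturbed chain.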
Abstract:
When building genetic maps, it is necessary to choose among several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of the algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated, containing one linkage group and 21 markers with a fixed distance of 3 cM between them. In all, 700 F2 populations with up to 400 individuals were randomly simulated, with different combinations of dominant and co-dominant markers, as well as 10 and 20% missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criterion may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria investigated (except SALOD) may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of linkage in repulsion between markers increases and, in this case, the algorithms TRY and SER associated with RIPPLE and the LHMC criterion would provide better results. Heredity (2009) 103, 494-502; doi:10.1038/hdy.2009.96; published online 29 July 2009
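Of the criteria above, SARF is the simplest to state: for a candidate marker order, sum the recombination fractions between adjacent markers and prefer orders that minimise this sum. A toy sketch with an invented 4-marker recombination-fraction matrix (not data from the study):

```python
from itertools import permutations

def sarf(order, rf):
    """Sum of Adjacent Recombination Fractions for a candidate marker order."""
    return sum(rf[a][b] for a, b in zip(order, order[1:]))

# Hypothetical symmetric matrix: rf[i][j] = pairwise recombination fraction.
rf = [
    [0.00, 0.10, 0.18, 0.26],
    [0.10, 0.00, 0.10, 0.18],
    [0.18, 0.10, 0.00, 0.10],
    [0.26, 0.18, 0.10, 0.00],
]

# Exhaustive search is only feasible for tiny maps; the algorithms in the
# abstract (TRY, SER, RCD, RECORD, UG) are heuristics for larger ones.
best = min(permutations(range(4)), key=lambda o: sarf(o, rf))
```

Note that an order and its reverse always have the same SARF, so the criterion identifies orders only up to orientation.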
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved but for which the sum or product with a scrambling random variable of known distribution, is known. The performance of two likelihood-based estimators is investigated, namely of a Bayesian estimator achieved through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
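The Eichhorn & Hayre multiplicative scrambling idea can be illustrated by simulation: the respondent reports the product y·s for a scrambling variable s of known distribution, so individual answers are protected, yet the mean of y is still recoverable. A hedged sketch (distributions, sample size and parameter values are illustrative, not from the paper):

```python
import random

def scrambled_mean(z, mean_s):
    """Unbiased estimate of E[y] from multiplicatively scrambled responses
    z = y * s, where the scrambling variable s has known mean mean_s
    (the Eichhorn & Hayre, 1983, design)."""
    return sum(z) / len(z) / mean_s

rng = random.Random(0)
# Hypothetical sensitive quantity y, and scrambler s ~ Uniform(0.5, 1.5)
# with E[s] = 1, so each reported value hides the true individual answer.
y = [rng.gauss(10.0, 2.0) for _ in range(50_000)]
z = [yi * rng.uniform(0.5, 1.5) for yi in y]
est = scrambled_mean(z, mean_s=1.0)
```

The MCMC and maximum-likelihood estimators compared in the paper go further, fitting a full regression model to such scrambled observations; the sketch only shows why the scrambled data remain informative at all.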
Abstract:
Expokit provides a set of routines aimed at computing matrix exponentials. More precisely, it computes either a small matrix exponential in full, the action of a large sparse matrix exponential on an operand vector, or the solution of a system of linear ODEs with constant inhomogeneity. The backbone of the sparse routines consists of matrix-free Krylov subspace projection methods (Arnoldi and Lanczos processes), which is why the toolkit is capable of coping with sparse matrices of large dimension. The software handles real and complex matrices and provides specific routines for symmetric and Hermitian matrices. The computation of matrix exponentials is a numerical issue of critical importance in the area of Markov chains, where, furthermore, the computed solution is subject to probabilistic constraints. In addition to addressing general matrix exponentials, distinct attention is given to the computation of transient states of Markov chains.
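The Krylov strategy described here reduces the action exp(A)·v of a large matrix exponential to a small dense exponential of the projected Hessenberg matrix, using only matrix-vector products with A. A compact NumPy sketch of the Arnoldi variant; Expokit itself uses Padé approximation with scaling and squaring for the small factor, for which the plain Taylor series below is a stand-in:

```python
import numpy as np

def expm_taylor(M, terms=30):
    """Dense matrix exponential by a plain Taylor series; fine for the small
    projected matrices arising below."""
    E, T = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        T = T @ M / k
        E = E + T
    return E

def expv_arnoldi(A, v, m=15):
    """Approximate exp(A) @ v via an m-step Arnoldi (Krylov) projection."""
    n = len(v)
    m = min(m, n)
    V, H = np.zeros((n, m)), np.zeros((m, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    k = m
    for j in range(m - 1):
        w = A @ V[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        h = np.linalg.norm(w)
        if h < 1e-12:                        # happy breakdown: exact subspace
            k = j + 1
            break
        H[j + 1, j] = h
        V[:, j + 1] = w / h
    else:                                    # last column of the projection
        w = A @ V[:, m - 1]
        for i in range(m):
            H[i, m - 1] = V[:, i] @ w
    return beta * V[:, :k] @ expm_taylor(H[:k, :k])[:, 0]

# Generator of a small continuous-time Markov chain (rows sum to zero).
Q = np.array([[-0.3, 0.2, 0.1],
              [0.1, -0.4, 0.3],
              [0.2, 0.2, -0.4]])
p0 = np.array([1.0, 0.0, 0.0])
pt = expv_arnoldi(Q.T, p0, m=3)              # distribution at time t = 1
```

The "probabilistic constraints" mentioned in the abstract are visible here: `pt` must remain a probability vector, which the exponential of a generator guarantees.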
Abstract:
In a recent paper [16], one of us identified all of the quasi-stationary distributions for a non-explosive, evanescent birth-death process for which absorption is certain, and established conditions for the existence of the corresponding limiting conditional distributions. Our purpose is to extend these results in a number of directions. We shall consider separately two cases depending on whether or not the process is evanescent. In the former case we shall relax the condition that absorption is certain. Furthermore, we shall allow for the possibility that the minimal process might be explosive, so that the transition rates alone will not necessarily determine the birth-death process uniquely. Although we shall be concerned mainly with the minimal process, our most general results hold for any birth-death process whose transition probabilities satisfy both the backward and the forward Kolmogorov differential equations.
Abstract:
A decision theory framework can be a powerful technique to derive optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following different management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations.
It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
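The backward-induction core of SDP fits in a few lines. The sketch below uses an invented two-patch toy metapopulation: states, actions and transition probabilities are ours, chosen only to show how a state-dependent optimal policy and its value are computed; none of the numbers come from the Emu-wren model.

```python
# States: 0 = extinct (absorbing), 1 = one patch occupied, 2 = both occupied.
# P[action][s][s'] are assumed illustrative yearly transition probabilities.
P = {
    "nothing":  [[1.0, 0.0, 0.0], [0.25, 0.55, 0.20], [0.05, 0.25, 0.70]],
    "enlarge":  [[1.0, 0.0, 0.0], [0.15, 0.55, 0.30], [0.03, 0.17, 0.80]],
    "corridor": [[1.0, 0.0, 0.0], [0.20, 0.40, 0.40], [0.04, 0.16, 0.80]],
}

def solve_sdp(P, horizon):
    """Backward induction: V[s] = max_a sum_s' P[a][s][s'] * V_next[s'],
    with terminal reward 1 for being extant at the end of the horizon."""
    V = [0.0, 1.0, 1.0]
    policy = []
    for _ in range(horizon):
        Q = {a: [sum(p * v for p, v in zip(row, V)) for row in P[a]]
             for a in P}
        best = {s: max(P, key=lambda a: Q[a][s]) for s in (1, 2)}
        V = [max(Q[a][s] for a in P) for s in range(3)]
        policy.append(best)
    return V, policy

V, policy = solve_sdp(P, horizon=30)   # V[s] = persistence probability
```

The policy is a table indexed by state and time, which is exactly the "exact optimal strategy for every landscape configuration and pattern of occupancy" the abstract contrasts with fixed, state-independent strategies.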
Abstract:
The portfolio generating the iTraxx EUR index is modeled by coupled Markov chains. Each of the industries of the portfolio evolves according to its own Markov transition matrix. Using a variant of the method of moments, the model parameters are estimated from a Standard and Poor's data set. Swap spreads are evaluated by Monte Carlo simulations. Along with an actuarially fair spread, a least-squares spread is considered.
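Per-industry transition matrices of this kind are typically estimated from observed rating paths. As a simple stand-in for the paper's method-of-moments fit, here is a row-normalised count estimate; the rating alphabet and paths are invented:

```python
from collections import Counter

def estimate_transition_matrix(paths, states):
    """Estimate a rating transition matrix from observed rating paths by
    row-normalised transition counts (the simplest moment-style estimator)."""
    counts = Counter()
    for path in paths:
        for a, b in zip(path, path[1:]):
            counts[a, b] += 1
    P = []
    for a in states:
        row_total = sum(counts[a, b] for b in states)
        P.append([counts[a, b] / row_total if row_total else float(a == b)
                  for b in states])
    return P

# Hypothetical yearly rating paths for firms in one industry:
# A = investment grade, B = speculative, D = default (absorbing).
paths = ["AAAB", "ABBD", "AAAA", "BBDD", "ABAA"]
P = estimate_transition_matrix(paths, "ABD")
```

In the coupled-chains model each industry gets its own such matrix, and the Monte Carlo step then simulates the joint portfolio evolution to price the swap spread.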
Abstract:
We introduce the notions of equilibrium distribution and time of convergence in discrete non-autonomous graphs. Under certain conditions we give an estimate of the convergence time to the equilibrium distribution using the second-largest eigenvalue of some matrices associated with the system.
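For the autonomous special case the idea behind such estimates is standard: the distance to equilibrium decays roughly like |λ₂|ᵗ, so about ln(1/ε)/ln(1/|λ₂|) steps suffice to get within ε. A numerical check on an illustrative stochastic matrix (our example; the paper treats the harder non-autonomous setting):

```python
import numpy as np

# An illustrative (autonomous) stochastic matrix.
P = np.array([[0.9, 0.1, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

eig = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
lam2 = eig[1]                        # second-largest eigenvalue modulus
eps = 1e-3
t_est = np.log(1 / eps) / np.log(1 / lam2)

# Check: after ceil(t_est) steps every row of P^t is near the stationary
# distribution, i.e. the chain has forgotten its starting state.
t = int(np.ceil(t_est))
Pt = np.linalg.matrix_power(P, t)
pi = Pt.mean(axis=0)                 # near-stationary reference vector
gap = np.abs(Pt - pi).max()
```

The eigenvalue 1 (with eigenvector the stationary distribution) plays no role in the decay; it is the spectral gap 1 − |λ₂| that sets the convergence time.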
Abstract:
The dynamics of catalytic networks have been widely studied over the last decades because of their implications in several fields such as prebiotic evolution, virology, neural networks, immunology and ecology. One of the most studied mathematical frameworks for catalytic networks was initially formulated in the context of prebiotic evolution, by means of the hypercycle theory. The hypercycle is a set of self-replicating species able to catalyze other replicator species within a cyclic architecture. Hypercyclic organization might arise from a quasispecies as a way to increase the information content, surpassing the so-called error threshold. The catalytic coupling between replicators makes all the species behave like a single, coherent evolutionary multimolecular unit. The inherent nonlinearities of catalytic interactions are responsible for the emergence of several types of dynamics, among them chaos. In this article we begin with a brief review of the hypercycle theory, focusing on its evolutionary implications as well as on the different dynamics associated with different types of small catalytic networks. Then we study the properties of chaotic hypercycles with error-prone replication using symbolic dynamics, characterizing, by means of the theory of topological Markov chains, the topological entropy and the periods of the orbits of unimodal-like iterated maps obtained from the strange attractor. We focus our study on some key parameters responsible for the structure of the catalytic network: mutation rates, and autocatalytic and cross-catalytic interactions.
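For a topological Markov chain (subshift of finite type), the topological entropy is simply the logarithm of the spectral radius of its 0/1 transition matrix. A stdlib-only sketch on the classic golden-mean shift; the example is ours, not one of the hypercycle-derived maps in the paper:

```python
import math

def spectral_radius(A, iters=500):
    """Power iteration for the Perron root of a nonnegative matrix."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)
        v = [x / lam for x in w]
    return lam

def topological_entropy(A):
    """Topological entropy of a topological Markov chain: the log of the
    spectral radius of its 0/1 transition matrix."""
    return math.log(spectral_radius(A))

# Golden-mean shift: a 1 may never follow a 1 (A[i][j] = 1 iff j may follow i);
# its entropy is the log of the golden ratio.
A = [[1, 1],
     [1, 0]]
h = topological_entropy(A)
```

The same matrix also encodes periodic-orbit counts (the trace of Aⁿ counts period-n points), which is how symbolic dynamics recovers the orbit periods mentioned in the abstract.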
Abstract:
Dissertation presented to the Faculdade de Ciências e Tecnologia of the Universidade Nova de Lisboa in fulfilment of the requirements for the degree of Master in Electrical and Computer Engineering
Abstract:
Simulation, modelling, proxels, PDEs, Markov chains, Petri nets, stochastic, performability, transient analysis
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare them with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups.
CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
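A "Markov-type" cost-effectiveness model works by propagating a cohort yearly through health states while accumulating discounted costs and QALYs, and comparing strategies by their incremental cost-effectiveness ratio. A toy sketch with invented transition probabilities, costs and utilities; none of the numbers are taken from the CHD Policy Model:

```python
def run_cohort(P, cost, utility, years, discount=0.03):
    """Propagate a cohort through a 3-state Markov model (Well / CHD / Dead),
    discounting per-cycle costs and QALYs."""
    dist = [1.0, 0.0, 0.0]                 # everyone starts in 'Well'
    total_cost = total_qaly = 0.0
    for t in range(years):
        df = 1.0 / (1.0 + discount) ** t
        total_cost += df * sum(d * c for d, c in zip(dist, cost))
        total_qaly += df * sum(d * u for d, u in zip(dist, utility))
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# Illustrative yearly transitions; statin therapy lowers the Well -> CHD rate.
P_no_statin = [[0.96, 0.03, 0.01], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]]
P_statin    = [[0.97, 0.02, 0.01], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]]
cost_base   = [0.0, 5000.0, 0.0]           # assumed annual cost by state
cost_statin = [770.0, 5770.0, 0.0]         # adds an assumed ~$2.11/day pill
utility     = [1.0, 0.8, 0.0]              # assumed QALY weights

c0, q0 = run_cohort(P_no_statin, cost_base, utility, years=30)
c1, q1 = run_cohort(P_statin, cost_statin, utility, years=30)
icer = (c1 - c0) / (q1 - q0)               # incremental $ per QALY gained
```

The sensitivity results in the abstract correspond to varying inputs like the pill cost or the daily disutility of treatment and watching how the ICER crosses the willingness-to-pay threshold.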
Abstract:
Various methodologies in the economic literature have been used to analyse the international hydrocarbon retail sector. Nevertheless, at the Spanish level these studies are much more recent, and most conclude that, regardless of the approach used, there is generally no effective competition in this market. In this paper, in order to analyse price levels in the Spanish petrol market, our starting hypothesis is that in uncompetitive markets prices are higher and the standard deviation is lower. We use weekly retail petrol price data from the ten biggest Spanish cities, apply Markov chains to fill in the missing values for petrol 95 and diesel, and also employ a variance filter. We conclude that this market demonstrates reduced price dispersion, regardless of brand or city.
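A Markov-chain filling step of the kind mentioned can be sketched as: discretise prices into states, estimate transition probabilities from the observed weeks, and fill each gap with the most probable successor of the last observed state. All states and series below are invented for illustration:

```python
from collections import Counter

def fit_transitions(series_list):
    """Estimate first-order transition probabilities between discretised
    price states from the complete portions of the series."""
    counts = Counter()
    for s in series_list:
        for a, b in zip(s, s[1:]):
            if a is not None and b is not None:
                counts[a, b] += 1
    states = sorted({k for pair in counts for k in pair})
    P = {a: {b: counts[a, b] for b in states} for a in states}
    for a in states:
        tot = sum(P[a].values())
        P[a] = {b: c / tot for b, c in P[a].items()} if tot else {}
    return P

def impute(series, P):
    """Fill each missing week with the most probable state given the
    previous observed (or already filled) one."""
    out = list(series)
    for t in range(1, len(out)):
        if out[t] is None and out[t - 1] is not None:
            out[t] = max(P[out[t - 1]], key=P[out[t - 1]].get)
    return out

# Hypothetical weekly price states 'L'ow / 'M'edium / 'H'igh; None = missing.
history = ["LLMMH", "LMMHH", "MMMHH"]
P = fit_transitions(history)
filled = impute(["L", "M", None, "H", None], P)
```

A sampling version (drawing the successor from the estimated distribution instead of taking the mode) would preserve the price-dispersion statistics better, which matters when the quantity of interest is the standard deviation itself.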
Abstract:
High throughput genome (HTG) and expressed sequence tag (EST) sequences are currently the most abundant nucleotide sequence classes in the public database. The large volume, high degree of fragmentation and lack of gene structure annotations prevent efficient and effective searches of HTG and EST data for protein sequence homologies by standard search methods. Here, we briefly describe three newly developed resources that should make discovery of interesting genes in these sequence classes easier in the future, especially to biologists not having access to a powerful local bioinformatics environment. trEST and trGEN are regularly regenerated databases of hypothetical protein sequences predicted from EST and HTG sequences, respectively. Hits is a web-based data retrieval and analysis system providing access to precomputed matches between protein sequences (including sequences from trEST and trGEN) and patterns and profiles from Prosite and Pfam. The three resources can be accessed via the Hits home page (http://hits.isb-sib.ch).