900 results for Markov Chains


Relevance:

60.00%

Publisher:

Abstract:

Dynamic Bayesian Networks (DBNs) provide a versatile platform for predicting and analysing the behaviour of complex systems. As such, they are well suited to predicting complex ecosystem population trajectories under anthropogenic disturbances such as the dredging of marine seagrass ecosystems. However, DBNs assume a homogeneous Markov chain, whereas a key characteristic of complex ecosystems is the presence of feedback loops, path dependencies and regime changes, whereby the behaviour of the system can vary based on past states. This paper develops a method based on the small-world structure of complex-system networks to modularise a non-homogeneous DBN and enable the computation of posterior marginal probabilities given evidence in forward inference. It also provides an approximate solution for backward inference, as convergence is not guaranteed for a path-dependent system. When applied to the seagrass dredging problem, the incorporation of path dependency can implement conditional absorption and allows release from the zero state, in line with environmental and ecological observations. As dredging has a marked global impact on seagrass and other marine ecosystems of high environmental and economic value, using such a complex-systems model to develop practical ways to meet the needs of conservation and industry, through enhancing resistance and/or recovery, is of paramount importance.
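
The kind of path dependency described in this abstract can be sketched with a toy example (all numbers hypothetical, and far simpler than the paper's modularised DBN): a two-state chain whose transition matrix switches once the system has ever visited the collapsed state. Forward inference is carried out by augmenting the state with a history flag.

```python
import numpy as np

# Minimal sketch (a hypothetical toy, not the paper's method): forward
# inference in a non-homogeneous two-state chain, where the transition
# matrix depends on whether the chain has ever been in state 0
# ("collapsed") -- a simple form of path dependency.

P_before = np.array([[0.6, 0.4],   # rows: from-state, cols: to-state
                     [0.2, 0.8]])
P_after  = np.array([[0.9, 0.1],   # once collapsed, recovery is slower
                     [0.3, 0.7]])

def forward(p0, steps):
    """Propagate the joint of (state, ever-collapsed flag) forward."""
    # p[flag][state]: probability of being in `state` with history `flag`
    p = np.array([p0 * np.array([0.0, 1.0]),   # never collapsed: state 1
                  p0 * np.array([1.0, 0.0])])  # collapsed at t = 0
    for _ in range(steps):
        new = np.zeros_like(p)
        # the flag stays 0 only while the chain remains in state 1
        new[0, 1] = p[0, 1] * P_before[1, 1]
        new[1, 0] = p[0, 1] * P_before[1, 0] + p[1] @ P_after[:, 0]
        new[1, 1] = p[1] @ P_after[:, 1]
        p = new
    return p.sum(axis=0)   # marginal over the history flag

print(forward(np.array([0.1, 0.9]), 10))
```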

Relevance:

60.00%

Publisher:

Abstract:

This thesis, which consists of an introduction and four peer-reviewed original publications, studies the problems of haplotype inference (haplotyping) and local alignment significance. Both problems belong to the broad area of bioinformatics and computational biology. The presented solutions are computationally fast and accurate, which makes them practical in high-throughput sequence data analysis. Haplotype inference is a computational problem in which the goal is to estimate haplotypes from a sample of genotypes as accurately as possible. The problem is important because direct measurement of haplotypes is difficult, whereas genotypes are easier to quantify. Haplotypes are the key players when studying, for example, the genetic causes of diseases. In this thesis, three methods are presented for the haplotype inference problem: HaploParser, HIT, and BACH. HaploParser is based on a combinatorial mosaic model and hierarchical parsing that together mimic recombinations and point mutations in a biologically plausible way. In this mosaic model, the current population is assumed to have evolved from a small founder population; thus, the haplotypes of the current population are recombinations of the (implicit) founder haplotypes with some point mutations. HIT (Haplotype Inference Technique) uses a hidden Markov model for haplotypes, and efficient algorithms are presented to learn this model from genotype data. The model structure of HIT is analogous to the mosaic model of HaploParser with founder haplotypes; therefore, it can be seen as a probabilistic model of recombinations and point mutations. BACH (Bayesian Context-based Haplotyping) utilizes a context tree weighting algorithm to efficiently sum over all variable-length Markov chains when evaluating the posterior probability of a haplotype configuration. Algorithms are presented that find haplotype configurations with high posterior probability.

BACH is the most accurate method presented in this thesis, with performance comparable to the best available software for haplotype inference. Local alignment significance is a computational problem in which one asks whether the local similarities between two sequences arise because the sequences are related or merely by chance. Similarity of sequences is measured by their best local alignment score, from which a p-value is computed. This p-value is the probability that two sequences drawn from the null model have an equally good or better best local alignment score. Local alignment significance is used routinely, for example, in homology searches. In this thesis, a general framework is sketched that allows one to compute a tight upper bound for the p-value of a local pairwise alignment score. Unlike previous methods, the presented framework is not affected by so-called edge effects and can handle gaps (deletions and insertions) without troublesome sampling and curve fitting.
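
The HMM machinery that HIT-style models rely on can be illustrated with the standard forward recursion on a tiny made-up model (the parameters below are invented, not from the thesis):

```python
import numpy as np

# Illustrative sketch only: the forward algorithm for a small HMM with
# two hidden "founder" states -- the kind of computation a hidden Markov
# model of haplotypes depends on.  All parameters are hypothetical.

A = np.array([[0.9, 0.1],    # founder-to-founder transition probabilities
              [0.1, 0.9]])
B = np.array([[0.8, 0.2],    # emission probs: founder -> observed allele
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])    # initial founder distribution

def forward_likelihood(obs):
    """P(observation sequence) via the forward recursion."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

print(forward_likelihood([0, 0, 1, 1]))
```

Here `forward_likelihood([0])` reduces to `pi @ B[:, 0] = 0.5`, which gives a quick sanity check on the recursion.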

Relevance:

60.00%

Publisher:

Abstract:

The increased accuracy of cosmological observations, especially measurements of the cosmic microwave background (CMB), allows us to study the primordial perturbations in greater detail. In this thesis, we allow for correlated isocurvature perturbations alongside the usual adiabatic perturbations. Thus far, the simplest six-parameter \Lambda CDM model has been able to accommodate all the observational data rather well. However, we find that the 3-year WMAP data and the 2006 Boomerang data favour a nonzero nonadiabatic contribution to the CMB angular power spectrum: a primordial isocurvature perturbation that is positively correlated with the primordial curvature perturbation. Compared with the adiabatic \Lambda CDM model, we have four additional parameters describing the increased complexity of the primordial perturbations. Our best-fit model has a 4% nonadiabatic contribution to the CMB temperature variance, and the fit is improved by \Delta\chi^2 = 9.7. We can attribute this preference for isocurvature to a feature in the peak structure of the angular power spectrum, namely the widths of the second and third acoustic peaks. Along the way, we have improved our analysis methods by identifying some issues with the parametrisation of the primordial perturbation spectra and suggesting ways to handle them. Owing to these improvements, the convergence of our Markov chains is improved. The change of parametrisation affects the MCMC analysis through the change in priors; we have checked our results against this and find only marginal differences between parametrisations.
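
As a reminder of the MCMC mechanics behind such parameter scans (purely illustrative: the thesis uses full cosmological likelihoods, not this toy target), a minimal Metropolis sampler for a 1-D standard normal posterior looks like:

```python
import math
import random

# Minimal Metropolis sketch with a toy 1-D standard-normal target
# density exp(-x^2 / 2); the proposal is a symmetric uniform step.

def metropolis(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, chain = 0.0, []
    for _ in range(n_samples):
        prop = x + rng.uniform(-step, step)
        # acceptance probability min(1, target(prop) / target(x))
        accept = math.exp(min(0.0, (x * x - prop * prop) / 2.0))
        if rng.random() < accept:
            x = prop
        chain.append(x)
    return chain

chain = metropolis(20000)
mean = sum(chain) / len(chain)
print(mean)   # should be near 0 for a well-mixed chain
```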

Relevance:

60.00%

Publisher:

Abstract:

Mathematical modelling plays a vital role in the design, planning and operation of flexible manufacturing systems (FMSs). In this paper, attention is focused on stochastic modelling of FMSs using Markov chains, queueing networks, and stochastic Petri nets. We bring out the role of these modelling tools in FMS performance evaluation through several illustrative examples and provide a critical comparative evaluation. We also include a discussion on the modelling of deadlocks, which constitute an important source of performance degradation in fully automated FMSs.
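
A minimal instance of the Markov-chain approach (with invented rates, not taken from the paper) is a single machine modelled as a three-state continuous-time Markov chain, whose long-run behaviour follows from the stationary equations pi Q = 0:

```python
import numpy as np

# Hypothetical example: one FMS machine as a 3-state CTMC
# (idle, busy, failed).  The stationary distribution pi solves
# pi Q = 0 with pi summing to 1; rates below are made up.

Q = np.array([[-2.0,  2.0,  0.0],   # idle -> busy (job arrivals)
              [ 1.5, -1.6,  0.1],   # busy -> idle (done) or failed
              [ 0.5,  0.0, -0.5]])  # failed -> idle (repair)

A = np.vstack([Q.T, np.ones(3)])          # pi Q = 0 plus normalisation
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)   # long-run fraction of time in each state
```

For these rates the solution is pi = (0.4, 0.5, 0.1), so the machine is busy half the time and down 10% of the time.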

Relevance:

60.00%

Publisher:

Abstract:

We consider discrete-time versions of two classical problems in the optimal control of admission to a queueing system: (i) optimal routing of arrivals to two parallel queues and (ii) optimal acceptance/rejection of arrivals to a single queue. We extend the formulation of these problems to permit a k-step delay in the controller's observation of the queue lengths. For geometric inter-arrival times and geometric service times, the problems are formulated as controlled Markov chains with expected total discounted cost as the minimization objective. For problem (i) we show that when k = 1, the optimal policy is to allocate an arrival to the queue with the smaller expected queue length (JSEQ: Join the Shortest Expected Queue). We also show that for this problem JSEQ is not optimal for k ≥ 2. For problem (ii) we show that when k = 1, the optimal policy is a threshold policy: there are two thresholds m(0) ≥ m(1) > 0, such that m(0) is used when the previous action was to reject, and m(1) when the previous action was to accept.
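
A quick simulation sketch (inspired by problem (i), not a substitute for the paper's analysis) shows JSEQ with a one-step-delayed observation outperforming random routing; all parameters are made up:

```python
import random

# Illustrative simulation: routing arrivals to two parallel queues when
# the controller sees queue lengths with a one-step delay (k = 1).
# JSEQ routes to the queue with the smaller *expected* current length:
# the delayed length plus last step's routing decision (expected
# services are identical across the two queues and cancel out here).

def simulate(policy, steps=50000, p_service=0.6, seed=1):
    rng = random.Random(seed)
    q = [0, 0]          # true queue lengths
    delayed = [0, 0]    # lengths as last seen by the controller
    last_choice = 0
    total = 0
    for _ in range(steps):
        exp_len = [delayed[i] + (1 if last_choice == i else 0)
                   for i in range(2)]
        choice = policy(exp_len, rng)
        delayed = q[:]                 # next step sees a 1-step-old state
        q[choice] += 1                 # the arrival joins the chosen queue
        for i in range(2):             # geometric (memoryless) services
            if q[i] > 0 and rng.random() < p_service:
                q[i] -= 1
        last_choice = choice
        total += sum(q)
    return total / steps               # time-average total backlog

jseq = simulate(lambda e, rng: 0 if e[0] <= e[1] else 1)
rand = simulate(lambda e, rng: rng.randrange(2))
print(jseq, rand)
```

With these settings JSEQ keeps the two queues balanced and yields a noticeably smaller time-average backlog than random splitting.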

Relevance:

60.00%

Publisher:

Abstract:

Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region. By choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and has been largely motivated by the technique of simulated annealing. The method proposed here has been found to perform well on clean as well as noisy real-world images, while being computationally far less expensive than stochastic optimisation techniques.
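
A minimal sketch of the simulated-annealing flavour described here (a hypothetical toy, not the authors' exact algorithm) segments a 1-D "image" by Metropolis label flips under a data-plus-smoothness energy:

```python
import math
import random

# Illustrative sketch: Metropolis-style stochastic relaxation for
# two-region segmentation of a 1-D signal, with geometric cooling in
# the spirit of simulated annealing.  Data and weights are invented.

data = [10, 11, 9, 10, 52, 50, 49, 51]   # two clearly separated regions
means = {0: 10.0, 1: 50.0}               # assumed region intensities

def energy(labels):
    fit = sum((data[i] - means[labels[i]]) ** 2 for i in range(len(data)))
    smooth = sum(labels[i] != labels[i + 1] for i in range(len(data) - 1))
    return fit + 5.0 * smooth

def segment(iters=2000, seed=0):
    rng = random.Random(seed)
    labels = [rng.randrange(2) for _ in data]
    T = 10.0
    for _ in range(iters):
        i = rng.randrange(len(data))
        cand = labels[:]
        cand[i] ^= 1                     # flip one pixel's label
        dE = energy(cand) - energy(labels)
        if dE < 0 or rng.random() < math.exp(-dE / T):
            labels = cand
        T = max(0.05, T * 0.995)         # geometric cooling schedule
    return labels

print(segment())   # converges to [0, 0, 0, 0, 1, 1, 1, 1] for this data
```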

Relevance:

60.00%

Publisher:

Abstract:

We consider the asymptotics of the invariant measure for the process of spatial distribution of N coupled Markov chains in the limit of a large number of chains. Each chain reflects the stochastic evolution of one particle. The chains are coupled through the dependence of transition rates on the spatial distribution of particles in the various states. Our model is a caricature for medium access interactions in wireless local area networks. Our model is also applicable in the study of spread of epidemics in a network. The limiting process satisfies a deterministic ordinary differential equation called the McKean-Vlasov equation. When this differential equation has a unique globally asymptotically stable equilibrium, the spatial distribution converges weakly to this equilibrium. Using a control-theoretic approach, we examine the question of a large deviation from this equilibrium.
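
The mean-field effect described in this abstract can be seen in a toy system (invented rates, not the paper's model): N two-state chains coupled only through the empirical fraction mu of chains in state 1.

```python
import random

# Toy mean-field sketch: N two-state chains whose flip probabilities
# depend on the empirical fraction mu of chains in state 1.  For large
# N, mu settles near the fixed point of the McKean-Vlasov-type drift
#   (1 - mu) * a(mu) - mu * b(mu) = 0.

def a(mu):            # transition rate 0 -> 1, increasing in mu
    return 0.2 + 0.3 * mu

def b(mu):            # transition rate 1 -> 0, decreasing in mu
    return 0.4 - 0.2 * mu

def simulate(N=1000, steps=2000, dt=0.1, seed=0):
    rng = random.Random(seed)
    state = [0] * N
    for _ in range(steps):
        mu = sum(state) / N
        pa, pb = dt * a(mu), dt * b(mu)   # per-step flip probabilities
        state = [(1 if rng.random() < pa else 0) if s == 0
                 else (0 if rng.random() < pb else 1)
                 for s in state]
    return sum(state) / N

mu_sim = simulate()
print(mu_sim)   # the fixed point is (-3 + 17 ** 0.5) / 2 ~= 0.5616
```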

Relevance:

60.00%

Publisher:

Abstract:

We provide new analytical results concerning the spread of information or influence under the linear threshold social network model introduced by Kempe et al., in the information dissemination context. The seeder starts by providing the message to a set of initial nodes and is interested in maximizing the number of nodes that ultimately receive the message. A node's decision to forward the message depends on the set of nodes from which it has received the message: under the linear threshold model, a node forwards the information when the total influence of the nodes from which it has received the packet exceeds its own influence threshold. We derive analytical expressions for the expected number of nodes that ultimately receive the message, as a function of the initial set, for a generic network, and show that the problem can be recast in the framework of Markov chains. We then use the analytical expressions to gain insight into information dissemination in some simple network topologies, such as the star, ring and mesh, and on acyclic graphs. We also derive the optimal initial set in these networks, and hint at general heuristics for picking a good initial set.
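
The model itself is easy to simulate. The sketch below (on a small hypothetical graph, not one from the paper) estimates the expected spread by Monte Carlo; for this graph the expectation can also be computed by hand as 1 + 0.6 + 0.5 + (0.4*0.6 + 0.4*0.5) = 2.54.

```python
import random

# Linear threshold model (Kempe et al.) on a tiny hypothetical graph:
# each node draws a uniform threshold and activates once the total
# incoming weight from active neighbours reaches it.

weights = {   # directed influence weights; in-weights per node <= 1
    ('a', 'b'): 0.6, ('a', 'c'): 0.5,
    ('b', 'd'): 0.4, ('c', 'd'): 0.4,
}
nodes = {'a', 'b', 'c', 'd'}

def spread_once(seeds, rng):
    theta = {v: rng.random() for v in nodes}   # random thresholds
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v in nodes - active:
            inflow = sum(w for (u, x), w in weights.items()
                         if x == v and u in active)
            if inflow >= theta[v]:
                active.add(v)
                changed = True
    return len(active)

def expected_spread(seeds, trials=20000, seed=0):
    rng = random.Random(seed)
    return sum(spread_once(seeds, rng) for _ in range(trials)) / trials

print(expected_spread({'a'}))   # analytically 2.54 for this graph
```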

Relevance:

60.00%

Publisher:

Abstract:

Peer-to-peer networks are used extensively nowadays for file sharing, video on demand and live streaming. For IPTV, delay deadlines are more stringent than for file sharing. Coolstreaming was the first P2P IPTV system. In this paper, we model New Coolstreaming (the newer version of Coolstreaming) via a queueing network. We use two-time-scale decomposition of Markov chains to compute the stationary distribution of the number of peers and the expected number of substreams in the overlay that are not being received at the required rate due to parent overloading. We also characterize the end-to-end delay encountered by a video packet received by a user and originated at the server. Three factors contribute to the delay. The first is the mean shortest path length between any two overlay peers, in overlay hops of the partnership graph, which is shown to be O(log n), where n is the number of peers in the overlay. The second is the mean number of routers between any two overlay neighbours, which is seen to be at most O(log N_I), where N_I is the number of routers in the Internet. The third is the mean delay at a router in the Internet, for which we provide an approximation E[W]. Thus, the mean end-to-end delay in New Coolstreaming is shown to be upper bounded by O(log E[N]) O(log N_I) E[W], where E[N] is the mean number of peers at a channel.

Relevance:

60.00%

Publisher:

Abstract:

Infinite horizon discounted-cost and ergodic-cost risk-sensitive zero-sum stochastic games for controlled Markov chains with countably many states are analyzed. Upper and lower values for these games are established. The existence of value and saddle-point equilibria in the class of Markov strategies is proved for the discounted-cost game. The existence of value and saddle-point equilibria in the class of stationary strategies is proved under the uniform ergodicity condition for the ergodic-cost game. The value of the ergodic-cost game happens to be the product of the inverse of the risk-sensitivity factor and the logarithm of the common Perron-Frobenius eigenvalue of the associated controlled nonlinear kernels. (C) 2013 Elsevier B.V. All rights reserved.
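
For fixed (non-optimised) strategies, the eigenvalue characterisation mentioned at the end can be computed directly; the chain, costs and risk factor below are invented for illustration:

```python
import numpy as np

# Illustrative computation with fixed strategies (no game-theoretic
# optimisation): for risk-sensitivity factor theta, one-step costs c
# and transition matrix P, the ergodic risk-sensitive value is
# (1/theta) * log(rho), where rho is the Perron-Frobenius eigenvalue
# of the nonnegative kernel K[i, j] = exp(theta * c[i]) * P[i, j].

theta = 0.5
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
c = np.array([1.0, 2.0])

K = np.exp(theta * c)[:, None] * P      # scale row i by exp(theta*c[i])
rho = max(abs(np.linalg.eigvals(K)))    # Perron-Frobenius eigenvalue
value = np.log(rho) / theta
print(value)
```

As expected for a risk-averse criterion, the value lies above the ordinary long-run average cost (10/7 here) but below the maximum one-step cost.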

Relevance:

60.00%

Publisher:

Abstract:

We consider refined versions of the Markov chains related to juggling introduced by Warrington. We further generalize the construction to juggling with arbitrary heights, as well as infinitely many balls, both of which are expressed more succinctly in terms of Markov chains on integer partitions. In all cases, we give explicit product formulas for the stationary probabilities. The normalization factor in one case can be written explicitly as a homogeneous symmetric polynomial. We also refine and generalize enriched Markov chains on set partitions. Lastly, we prove that in one case the stationary distribution is attained in bounded time.

Relevance:

60.00%

Publisher:

Abstract:

Modern robots are increasingly expected to function in uncertain and dynamically challenging environments, often in proximity to humans. In addition, wide-scale adoption of robots requires on-the-fly adaptability of software for diverse applications. These requirements strongly suggest the need to adopt formal representations of high-level goals and safety specifications, especially as temporal logic formulas. This approach allows the use of formal verification techniques for controller synthesis that can give guarantees for safety and performance. Robots operating in unstructured environments also face limited sensing capability, so correctly inferring a robot's progress toward a high-level goal can be challenging.

This thesis develops new algorithms for synthesizing discrete controllers in partially known environments under specifications represented as linear temporal logic (LTL) formulas. It is inspired by recent developments in finite abstraction techniques for hybrid systems and motion planning problems. The robot and its environment are assumed to admit a finite abstraction as a Partially Observable Markov Decision Process (POMDP), a powerful model class capable of representing a wide variety of problems. However, synthesizing controllers that satisfy LTL goals over POMDPs is a challenging problem that has received only limited attention.

This thesis proposes tractable, approximate algorithms for the control synthesis problem using Finite State Controllers (FSCs). The use of FSCs to control finite POMDPs allows the closed system to be analyzed as a finite global Markov chain. The thesis explicitly shows how the transient and steady-state behavior of the global Markov chain can be related to two different criteria for satisfaction of LTL formulas. First, maximization of the probability of LTL satisfaction is related to an optimization problem over a parametrization of the FSC. Analytic computations of the gradients are derived, which allows the use of first-order optimization techniques.

The second criterion encourages rapid and frequent visits to a restricted set of states over infinite executions. It is formulated as a constrained optimization problem with a discounted long-term reward objective through a novel use of a fundamental equation for Markov chains, the Poisson equation. A new constrained policy iteration technique is proposed to solve the resulting dynamic program, which also provides a way to escape local maxima.
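
The Poisson equation itself is easy to exhibit on a small ergodic chain (a generic illustration with made-up numbers, not the thesis's POMDP construction): given transition matrix P and reward r, the gain g and bias h satisfy h = r - g*1 + P h, normalised so that the stationary distribution pi satisfies pi @ h = 0.

```python
import numpy as np

# Poisson equation for a small ergodic Markov chain (illustrative
# numbers): g is the long-run average reward, h the bias vector.

P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.4, 0.6]])
r = np.array([1.0, 0.0, 2.0])

# stationary distribution pi: left eigenvector of P for eigenvalue 1
A = np.vstack([(P - np.eye(3)).T, np.ones(3)])
pi, *_ = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)
g = pi @ r                                   # long-run average reward

# solve (I - P) h = r - g, with the normalisation pi @ h = 0
M = np.vstack([np.eye(3) - P, pi])
h, *_ = np.linalg.lstsq(M, np.append(r - g, 0.0), rcond=None)
print(g, h)
```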

The algorithms proposed in the thesis are applied to the task planning and execution challenges faced during the DARPA Autonomous Robotic Manipulation - Software challenge.

Relevance:

60.00%

Publisher:

Abstract:

Multitemporal remote sensing studies are concerned with thematic land-use mapping at different points in time, with the aim of identifying the changes that have occurred in a region over a given period. Most work on supervised automatic classification of remote sensing images does not use a temporal transformation model in the classification process. Research carried out over the last decade set an important precedent by showing that the use of a model of knowledge about the dynamics of the region (a temporal transformation model), based on Fuzzy Markov Chains (FMC), yields results superior to those produced by monotemporal supervised classifiers. This work therefore focuses on one little-investigated aspect of that approach: combining FMCs estimated over short time intervals to classify images separated by long periods. The study area used in the experiments is a forest remnant located in the municipality of Londrina-PR, Brazil, encompassing the entire boundary of the Parque Estadual Mata dos Godoy. As input data, five Landsat 5 TM satellite images with a five-year temporal interval are used. Overall, the experimental results showed that the use of Fuzzy Markov Chains contributed significantly to improving the performance of automatic classification of multitemporal orbital images compared with a monotemporal classification. Moreover, classifications based on matrices estimated for short periods always outperformed classifications based on matrices estimated for long periods. Also, the superiority of direct estimation over extrapolation decreases as the temporal distance increases.

The results of this work may motivate the creation of automatic systems for multitemporal image classification. The potential of such an application lies in speeding up the monitoring of land use and land cover, given the improvement obtained over traditional supervised classifications.
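
The extrapolation issue reported here (short-period matrices outperforming long-period ones) can be stated concretely: under a homogeneity assumption, a 5-year transition matrix extrapolates to a 10-year matrix as its square. The land-cover classes and probabilities below are hypothetical.

```python
import numpy as np

# Hypothetical land-cover transition matrix for a 5-year interval;
# extrapolating to 10 years by matrix power assumes homogeneity, which
# is exactly what tends to degrade long-period classifications.

classes = ['forest', 'pasture', 'urban']
P5 = np.array([[0.90, 0.08, 0.02],
               [0.05, 0.85, 0.10],
               [0.00, 0.00, 1.00]])    # urban assumed absorbing

P10 = np.linalg.matrix_power(P5, 2)    # extrapolated 10-year matrix
print(P10.round(3))
```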

Relevance:

60.00%

Publisher:

Abstract:

In a multi-target complex network, the links (L_ij) represent the interactions between a drug (d_i) and a target (t_j), characterized by different experimental measures (K_i, K_m, IC50, etc.) obtained in pharmacological assays under diverse boundary conditions (c_j). In this work, we use Shannon entropy measures to develop a model encompassing a multi-target network of neuroprotective/neurotoxic compounds reported in the ChEMBL database. The model correctly predicts >8300 experimental outcomes, with accuracy, specificity, and sensitivity above 80%-90% on training and external validation series. Indeed, the model can calculate different outcomes for >30 experimental measures in >400 different experimental protocols, in relation to >150 molecular and cellular targets in 11 different organisms (including human). We also report, for the first time, the synthesis, characterization, and experimental assays of a new series of chiral 1,2-rasagiline carbamate derivatives not reported in previous works. The experimental tests included: (1) assays in the absence of neurotoxic agents; (2) in the presence of glutamate; and (3) in the presence of H2O2. Lastly, we used the new Assessing Links with Moving Averages (ALMA)-entropy model to predict possible outcomes for the new compounds in a large number of pharmacological tests not carried out experimentally.

Relevance:

60.00%

Publisher:

Abstract:

Worldwide, hepatitis caused by viral infections has been a major public health concern because of its chronic character, asymptomatic course, and capacity to cause loss of liver function. With the large-scale use of antiretroviral drugs, liver disease related to hepatitis C virus (HCV) infection has contributed to a radical change in the natural history of human immunodeficiency virus (HIV) infection. The burden of HCV/HIV coinfection in Brazil is not known precisely, but evidence indicates that, regardless of geographic region, coinfected individuals have greater difficulty clearing HCV after pharmacological treatment than monoinfected individuals. Within the SUS (the Brazilian Unified Health System), the standard antiviral treatment for carriers of HCV genotype 1 and HIV is pegylated interferon combined with ribavirin. The two most recent therapeutic protocols diverge on the treatment period and on which individuals should be included: the most recent guideline recommends treating early responders together with slow virological responders, whereas the immediately preceding guideline excludes, at week 12, individuals who do not respond completely. Based on this divergence, this study aimed to evaluate the cost-effectiveness of HCV treatment in genotype 1 carriers coinfected with HIV, naive to antiviral treatment, non-cirrhotic and immunologically stable, subject to the antiviral treatment rules established by the two most recent therapeutic guidelines for care under the SUS. To this end, a mathematical decision model based on Markov chains was built to simulate the progression of liver disease with and without treatment. A hypothetical cohort of one thousand men over 40 years of age was followed.

The perspective of the SUS was adopted, with a 30-year time horizon and a 5% discount rate for costs and clinical consequences. Extending treatment to slow responders provided a gain of 0.28 quality-adjusted life years (QALYs), a 7% gain in survival, and a 60% increase in the number of individuals who cleared HCV. Besides the expected gains in efficacy, including slow virological responders proved to be a cost-effective strategy, with an incremental cost-effectiveness ratio of R$ 44,171/QALY, below the acceptability threshold proposed by the World Health Organization (WHO) of R$ 63,756/QALY. Sensitivity analysis showed that the uncertainties in the model are unable to change the final result, demonstrating the robustness of the analysis. From a pharmacoeconomic point of view, including slow virological responder HCV/HIV-coinfected individuals in the treatment protocol is a strategy with a favourable cost-effectiveness ratio for the SUS. Its adoption is fully compatible with the system's perspective, returning better health outcomes at costs below an acceptable budget ceiling, and with society's perspective, avoiding to a greater degree complications and hospitalisations compared with non-inclusion.
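
The structure of such a Markov cohort analysis can be sketched in a few lines; all states, transition probabilities, utilities and costs below are hypothetical placeholders, not the study's estimates.

```python
import numpy as np

# Minimal Markov cohort sketch with hypothetical numbers: annual cycles
# over three health states, 5% discounting, and an incremental
# cost-effectiveness ratio (ICER) comparing an extended-treatment
# strategy against the comparator.

states = ['SVR', 'chronic', 'dead']        # SVR = virus cleared
P = {   # annual transition matrix per strategy (rows sum to 1)
    'standard': np.array([[0.99, 0.00, 0.01],
                          [0.00, 0.95, 0.05],
                          [0.00, 0.00, 1.00]]),
    'extended': np.array([[0.99, 0.00, 0.01],
                          [0.02, 0.93, 0.05],   # some chronic -> SVR
                          [0.00, 0.00, 1.00]]),
}
utility = np.array([0.85, 0.65, 0.0])           # QALY weight per state
cost = {'standard': np.array([500.0, 4000.0, 0.0]),
        'extended': np.array([500.0, 6000.0, 0.0])}

def run(strategy, horizon=30, disc=0.05):
    pop = np.array([0.4, 0.6, 0.0])             # initial cohort split
    qalys = costs = 0.0
    for t in range(horizon):
        d = 1.0 / (1.0 + disc) ** t             # discount factor
        qalys += d * pop @ utility
        costs += d * pop @ cost[strategy]
        pop = pop @ P[strategy]
    return qalys, costs

q_std, c_std = run('standard')
q_ext, c_ext = run('extended')
icer = (c_ext - c_std) / (q_ext - q_std)
print(round(q_ext - q_std, 3), round(icer))
```

The ICER is the incremental cost divided by the incremental QALYs, which is then compared against an acceptability threshold, as done in the study.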