943 results for Markov Model


Relevance:

60.00%

Publisher:

Abstract:

Reliability analysis for computing systems in aerospace applications must account for the actual computations the system performs in its use environment. This paper introduces a theoretical nonhomogeneous Markov model for such applications.
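The core computation in such a model can be sketched in a few lines: because the chain is nonhomogeneous, the transition matrix is a function of time rather than a constant. The sketch below is illustrative only — the states, horizon, and failure-rate schedule are invented, not the paper's:

```python
import numpy as np

def reliability(p0, transition_at, horizon):
    """Propagate state probabilities through a time-varying (nonhomogeneous)
    Markov chain. transition_at(k) returns the one-step matrix for step k."""
    p = np.asarray(p0, dtype=float)
    for k in range(horizon):
        p = p @ transition_at(k)
    return p

# Two states: 0 = operational, 1 = failed (absorbing).
# A failure probability growing with mission time makes the chain nonhomogeneous.
def P(k):
    f = 0.001 * (1 + 0.1 * k)   # hypothetical time-dependent failure probability
    return np.array([[1 - f, f],
                     [0.0,   1.0]])

p_final = reliability([1.0, 0.0], P, horizon=100)
print("P(operational after 100 steps) =", p_final[0])
```

Because each step uses a different matrix, the survival probability is the ordered product of the per-step matrices, not a matrix power as in the homogeneous case.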

The variation of the viscosity as a function of the sequence distribution in an A-B random copolymer melt is determined. The parameters that characterize the random copolymer are the fraction of A monomers f, the parameter lambda, which determines the correlation in the monomer identities along a chain, and the Flory chi parameter chi(F), which determines the strength of the enthalpic repulsion between monomers of type A and B. For lambda>0, there is a greater probability of finding like monomers at adjacent positions along the chain, and for lambda<0 unlike monomers are more likely to be adjacent to each other. The traditional Markov model for the random copolymer melt is altered to remove ultraviolet divergences in the equations for the renormalized viscosity; the phase diagram for the modified model has a binary-fluid-type transition for lambda>0 and no phase transition for lambda<0. A mode coupling analysis is used to determine the renormalization of the viscosity due to the dependence of the bare viscosity on the local concentration field. Due to the dissipative nature of the coupling, there are nonlinearities both in the transport equation and in the noise correlation. The concentration dependence of the transport coefficient presents additional difficulties in the formulation due to the Ito-Stratonovich dilemma, and there is some ambiguity about the choice of the concentration to be used while calculating the noise correlation. In the Appendix, it is shown using a diagrammatic perturbation analysis that the Ito prescription for the calculation of the transport coefficient, when coupled with a causal discretization scheme, provides a consistent formulation that satisfies stationarity and the fluctuation-dissipation theorem. This functional integral formalism is used in the present analysis, and consistency is verified for the present problem as well.
The upper critical dimension for this type of renormalization is 2, so there is no divergence in the viscosity in the vicinity of a critical point. The results indicate a systematic dependence of the viscosity on lambda and chi(F): the fluctuations tend to increase the viscosity for lambda<0 and decrease it for lambda>0, and an increase in chi(F) tends to decrease the viscosity. (C) 1996 American Institute of Physics.
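The sequence statistics described here — stationary A-fraction f and a like-neighbour bias controlled by lambda — can be illustrated with a two-state Markov chain. The transition probabilities below are one standard parameterization chosen for the sketch, not necessarily the exact form used in the paper:

```python
import random

def sample_copolymer(n, f, lam, seed=0):
    """Sample an A/B random-copolymer sequence from a two-state Markov chain.
    f   : stationary fraction of A monomers
    lam : correlation parameter; lam > 0 favours like neighbours, lam < 0 unlike.
    Assumed parameterization (one common choice):
        P(A|A) = f + lam*(1-f),   P(A|B) = f*(1-lam)
    whose stationary A-fraction is f for any admissible lam."""
    random.seed(seed)
    seq = ['A' if random.random() < f else 'B']
    for _ in range(n - 1):
        pA = f + lam * (1 - f) if seq[-1] == 'A' else f * (1 - lam)
        seq.append('A' if random.random() < pA else 'B')
    return ''.join(seq)

chain = sample_copolymer(100_000, f=0.5, lam=0.6)
frac_A = chain.count('A') / len(chain)
like_pairs = sum(a == b for a, b in zip(chain, chain[1:])) / (len(chain) - 1)
```

With f=0.5 and lam=0.6 the chain keeps its overall composition at one half A while roughly 80% of adjacent pairs are like monomers, matching the qualitative picture for lambda>0 above.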

A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and the detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
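The counter-based detector can be sketched as follows. The leaky-bucket rule (+1 on an SES, -1 otherwise) and the SES probabilities are assumptions for illustration; the paper designs the counter dynamics and threshold from measured errored-second data:

```python
import random

def detection_delay(ses_prob, threshold, max_steps=100_000, seed=1):
    """Steps until a leaky-bucket counter driven by severely errored seconds
    (SESs) reaches the failure threshold; returns None if it never does within
    max_steps. The counter rule (+1 on SES, -1 floored at 0) is an assumption."""
    random.seed(seed)
    count, steps = 0, 0
    while count < threshold:
        if steps >= max_steps:
            return None
        steps += 1
        count = count + 1 if random.random() < ses_prob else max(count - 1, 0)
    return steps

# Hypothetical SES rates: rare on a healthy trunk, dominant after a hard failure.
delay_failed = detection_delay(ses_prob=0.95, threshold=10)   # fast detection
delay_healthy = detection_delay(ses_prob=0.02, threshold=10)  # ideally no alarm
```

Raising the threshold lowers the false alarm probability but lengthens the detection delay — exactly the trade-off the paper quantifies with its Markov model.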

Throughput analysis of bulk TCP downloads in cases where all WLAN stations are associated at the same rate with the AP is available in the literature. In this paper, we extend the analysis to TCP uploads for the case of multirate associations. The approach is based on a two-dimensional semi-Markov model for the number of backlogged stations. Analytical results are in excellent agreement with simulations performed using QUALNET 4.5.

In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots, to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in the Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of its throughput performance.
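The per-slot transmission probability in Bianchi's model comes from a fixed point between the transmission probability tau and the conditional collision probability p. A self-contained solver for the standard saturated model is sketched below (illustrative parameters; the paper's extension to physical time slots and scan durations is not reproduced):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=2000):
    """Solve Bianchi's saturated DCF model for n contending stations:
        p   = 1 - (1 - tau)^(n-1)                      (collision probability)
        tau = 2(1-2p) / ((1-2p)(W+1) + p W (1-(2p)^m))
    by damped fixed-point iteration. W = minimum contention window,
    m = number of backoff stages."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * tau_new   # damping keeps the iteration stable
    return tau, 1.0 - (1.0 - tau) ** (n - 1)

tau10, p10 = bianchi_fixed_point(n=10)
```

As expected, tau decreases as more stations contend, which is what drives the throughput trade-offs between the primary and secondary networks analysed above.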

The sensor scheduling problem can be formulated as a controlled hidden Markov model and this paper solves the problem when the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. The aim is to minimise the variance of the estimation error of the hidden state w.r.t. the action sequence. We present a novel simulation-based method that uses a stochastic gradient algorithm to find optimal actions. © 2007 Elsevier Ltd. All rights reserved.
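The key idea — estimating the gradient of an expected cost purely from simulation — can be shown on a toy problem. This finite-difference (Kiefer-Wolfowitz style) sketch is a stand-in for the paper's method: the cost below is a placeholder, not the filtering-error variance of a controlled hidden Markov model:

```python
import random

def sg_optimise(cost, draw_noise, a0, steps=4000, seed=0):
    """Simulation-based stochastic gradient descent: the gradient of the
    expected cost is estimated by finite differences of noisy simulations,
    with common random numbers shared by both sides for variance reduction."""
    random.seed(seed)
    a = a0
    for k in range(1, steps + 1):
        gamma = 0.5 / k ** 0.602   # decreasing step size
        delta = 0.5 / k ** 0.101   # finite-difference width
        xi = draw_noise()          # one simulation draw used for both sides
        g = (cost(a + delta, xi) - cost(a - delta, xi)) / (2 * delta)
        a -= gamma * g
    return a

# Toy stand-in: expected cost E[(a - xi)^2], xi ~ N(2, 1), minimised at a = 2.
a_star = sg_optimise(cost=lambda a, xi: (a - xi) ** 2,
                     draw_noise=lambda: random.gauss(2.0, 1.0),
                     a0=0.0)
```

The action converges to the minimiser of the expected cost even though only noisy sample costs are ever evaluated — the same principle the paper applies to action sequences in continuous spaces.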

Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a hidden Markov model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown, empirically, to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
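The mechanics of an ABC filter can be sketched with a particle filter in which the (here pretend-intractable) observation density is replaced by a kernel comparing a simulated pseudo-observation with the real one. The toy linear-Gaussian model and the indicator kernel below are illustrative choices, not the article's:

```python
import random, math

def abc_filter(y_obs, n_part=2000, eps=0.5, seed=0):
    """ABC approximation of the HMM filter: each particle simulates a pseudo
    observation and is weighted by an indicator kernel on its distance to the
    observed value, in place of an intractable likelihood evaluation."""
    random.seed(seed)
    parts = [random.gauss(0.0, 1.0) for _ in range(n_part)]
    means = []
    for y in y_obs:
        parts = [0.9 * x + random.gauss(0.0, 0.5) for x in parts]   # propagate
        pseudo = [x + random.gauss(0.0, 0.5) for x in parts]        # simulate y
        w = [1.0 if abs(u - y) < eps else 1e-12 for u in pseudo]    # ABC kernel
        tot = sum(w)
        means.append(sum(wi * x for wi, x in zip(w, parts)) / tot)
        parts = random.choices(parts, weights=w, k=n_part)          # resample
    return means

# Synthetic data from the same model; the filter should track the true state.
random.seed(42)
x, xs, ys = 0.0, [], []
for _ in range(30):
    x = 0.9 * x + random.gauss(0.0, 0.5)
    xs.append(x)
    ys.append(x + random.gauss(0.0, 0.5))
est = abc_filter(ys)
rmse = math.sqrt(sum((a - b) ** 2 for a, b in zip(est, xs)) / len(xs))
```

Shrinking eps reduces the ABC bias toward zero, but more particles are then needed to keep enough of them inside the kernel — the bias/computation trade-off the article analyses.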

Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we have investigated problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. We summarise the main contributions of the thesis as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which lead to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to that of the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two Maximum Likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses nonparametric approximations for Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.

The general aim of this study is to assess the cost-utility ratio of treating hepatitis C virus (HCV) infection in dialysis patients who are candidates for kidney transplantation, considering as alternative therapeutic regimens interferon-α monotherapy, pegylated interferon monotherapy, interferon-α combined with ribavirin, and pegylated interferon combined with ribavirin, each compared with no treatment. The study takes the perspective of the Brazilian Unified Health System (SUS), which also served as the basis for estimating the budgetary impact of the most cost-effective treatment strategy. To meet these objectives, a Markov model was built to simulate the costs and outcomes of each strategy. To parameterize the model, a literature review was carried out to define the health states associated with HCV infection in transplant recipients and the transition probabilities between states. Utility measures were derived from consultations with specialists, and costs from the SUS procedures table. The results show that treating HCV infection before kidney transplantation is more cost-effective than no treatment, with interferon-α as the best option. The budgetary impact of adopting this strategy in the SUS corresponds to 0.3% of SUS expenditure on renal replacement therapy during 2007.
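The structure of such a Markov cost-utility model is simple to sketch: a cohort distribution over health states advances through a transition matrix each cycle while discounted costs and QALYs accumulate. All states, probabilities, and values below are invented placeholders, not the study's:

```python
import numpy as np

def cohort_model(P, costs, utilities, cycles=40, disc=0.035):
    """Markov cohort simulation: advance a probability vector over health
    states through transition matrix P, accumulating discounted per-cycle
    costs and QALYs."""
    dist = np.zeros(len(costs)); dist[0] = 1.0
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + disc) ** t
        total_cost += d * (dist @ costs)
        total_qaly += d * (dist @ utilities)
        dist = dist @ P
    return total_cost, total_qaly

# States: 0 chronic HCV, 1 cirrhosis, 2 dead (absorbing); hypothetical values.
P_untreated = np.array([[0.90, 0.08, 0.02],
                        [0.00, 0.85, 0.15],
                        [0.00, 0.00, 1.00]])
P_treated   = np.array([[0.95, 0.03, 0.02],   # therapy slows progression
                        [0.00, 0.85, 0.15],
                        [0.00, 0.00, 1.00]])
costs_u = np.array([1000.0, 8000.0, 0.0])
costs_t = np.array([6000.0, 8000.0, 0.0])     # adds drug cost in state 0
utils   = np.array([0.80, 0.55, 0.0])

c0, q0 = cohort_model(P_untreated, costs_u, utils)
c1, q1 = cohort_model(P_treated,  costs_t,  utils)
icer = (c1 - c0) / (q1 - q0)                  # incremental cost per QALY
```

Comparing each regimen's discounted cost and QALY totals against no treatment yields the incremental cost-effectiveness ratios on which the study's conclusion rests.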

Prevention, diagnosis, and treatment of chronic hepatitis C feature on health policy agendas in Brazil and worldwide: the disease affects a large number of people, is costly to treat, and leads to severe outcomes and disability, which raises its social cost. Clinical protocols and therapeutic guidelines reflect the efforts of many institutions to combat hepatitis C, informing health professionals, patients, families, and citizens in general of the best scientifically supported course of action against this infection. A cost-effectiveness analysis was carried out from the SUS perspective for three strategies: treatment and retreatment with dual therapy; treatment with dual therapy and retreatment with triple therapy; and treatment with triple therapy. Using a simulation model based on Markov chains, a hypothetical cohort was created of 1,000 adults over 40 years of age, of both sexes, without distinction of socioeconomic class, with a confirmed diagnosis of chronic hepatitis C, monoinfected with HCV genotype 1, and free of comorbidities. The simulation started with all individuals carrying the mildest form of the disease, histological grade F0 or F1 on the Metavir scale. The results show that both options, dual/triple therapy and triple therapy, fall below the technology-incorporation acceptability threshold proposed by the WHO (2012) of 72,195 R$/QALY (IBGE, 2013; WHO, 2012). Both are cost-effective: the ICER of dual/triple therapy relative to the baseline was 7,186.3 R$/QALY, and that of triple therapy was 59,053.8 R$/QALY. However, the incremental cost of triple therapy relative to dual/triple therapy was 31,029 and the incremental effectiveness was 0.52.
In general, when the interventions analysed fall below the threshold, adoption of the more effective regimen is suggested. Triple therapy, despite being slightly more effective than dual/triple therapy, costs far more. Since either could reasonably be adopted for use in the SUS, a system with limited resources, a budget-impact study is recommended to provide further evidence for the decision, so as either to support the existing Brazilian protocol or to suggest drafting a new document.

Iron is required by many microbes and pathogens for survival and proliferation, including Leishmania, the cause of leishmaniasis. Leishmaniasis is an increasingly serious infectious disease with a wide spectrum of clinical manifestations, ranging from localized cutaneous leishmaniasis (CL) lesions to a lethal visceral form. Certain strains, such as BALB/c mice, fail to control L. major infection and develop progressive lesions and systemic disease; these mice are considered a model of non-healing forms of the human disease such as kala-azar or diffuse cutaneous leishmaniasis. Disease progression in BALB/c mice has been associated with anemia, and in the last days of survival the progressive anemia is considered one of the causes of death. Ferroportin (Fpn), a key regulator of iron homeostasis, is a conserved membrane protein that exports iron across duodenal enterocytes, as well as macrophages and hepatocytes, into the blood circulation. Fpn also has a critical influence on the survival and proliferation of many microorganisms whose growth depends on iron, so a preparation of Fpn is needed to study the role of iron in immune responses and in the pathogenesis of microorganisms. To prepare and characterize recombinant ferroportin, total RNA was extracted from Indian zebrafish duodenum and used to synthesize cDNA by RT-PCR. The PCR product was first cloned into a TOPO TA vector and then subcloned into the GFP expression vector pEGFP-N1. The resulting plasmid (pEGFP-ZFpn) was used to express the Fpn-EGFP protein in HEK 293T cells; expression was confirmed by fluorescence microscopy and flow cytometry. The recombinant Fpn was further characterized by submitting its predicted amino acid sequence to the TMHMM V2.0 prediction server (a hidden Markov model), the NetOGlyc 3.1 server and the NetNGlyc 3.1 server.
The data showed that the Fpn obtained from Indian zebrafish contained eight transmembrane domains with N- and C-termini inside the cytoplasm and harboured 78 mucin-type glycosylated amino acids. The results indicate that the prepared and characterized recombinant Fpn protein shows no difference in membrane topology from Fpn described by other researchers. Our next aim was to deliver the recombinant plasmid (pEGFP-ZFpn) to enterocytes. However, naked therapeutic genes are rapidly degraded by nucleases and show poor cellular uptake, lack of specificity for target cells, and low transfection efficiency; the development of safe and efficient gene carriers is a prerequisite for successful gene therapy. Chitosan and alginate polymers were used as an oral gene carrier because of their biodegradability, biocompatibility, and their mucoadhesive and permeability-enhancing properties in the gut. Nanoparticles comprising alginate/chitosan polymers were prepared by the pregel method. The resulting nanoparticles had a loading efficiency of 95% and an average size of 188 nm, as confirmed by PCS measurements, and SEM images showed spherical particles. BALB/c mice were divided into three groups: the first and second groups were fed chitosan/alginate nanoparticles containing the pEGFP-ZFpn and pEGFP plasmids, respectively (30 μg/mouse), and the third (control) group received no nanoparticles. After infection with L. major, pEGFP-ZFpn-fed mice showed higher hematocrit and iron levels than the other groups. Cytokine concentrations determined by ELISA showed lower levels of IL-4 and IL-10 and higher IFN-γ/IL-4 and IFN-γ/IL-10 ratios in pEGFP-ZFpn-fed mice than in the other groups. Moreover, a more limited increase in footpad thickness and a significant reduction of viable parasites in the lymph node were seen in pEGFP-ZFpn-fed mice.
These data strongly suggest that in vivo administration of chitosan/alginate nanoparticles containing pEGFP-ZFpn suppresses the Th2 response and may be used to control leishmaniasis.

Hidden Markov model (HMM)-based speech synthesis systems possess several advantages over concatenative synthesis systems. One such advantage is the relative ease with which HMM-based systems are adapted to speakers not present in the training dataset. Speaker adaptation methods used in the field of HMM-based automatic speech recognition (ASR) are adopted for this task. In the case of unsupervised speaker adaptation, previous work has used a supplementary set of acoustic models to estimate the transcription of the adaptation data. This paper first presents an approach to the unsupervised speaker adaptation task for HMM-based speech synthesis models which avoids the need for such supplementary acoustic models. This is achieved by defining a mapping between HMM-based synthesis models and ASR-style models, via a two-pass decision tree construction process. Second, it is shown that this mapping also enables unsupervised adaptation of HMM-based speech synthesis models without the need to perform linguistic analysis of the estimated transcription of the adaptation data. Third, this paper demonstrates how this technique lends itself to the task of unsupervised cross-lingual adaptation of HMM-based speech synthesis models, and explains the advantages of such an approach. Finally, listener evaluations reveal that the proposed unsupervised adaptation methods deliver performance approaching that of supervised adaptation.

We present a new haplotype-based approach for inferring local genetic ancestry of individuals in an admixed population. Most existing approaches for local ancestry estimation ignore the latent genetic relatedness between ancestral populations and treat them as independent. In this article, we exploit such information by building an inheritance model that describes both the ancestral populations and the admixed population jointly in a unified framework. Based on an assumption that the common hypothetical founder haplotypes give rise to both the ancestral and the admixed population haplotypes, we employ an infinite hidden Markov model to characterize each ancestral population and further extend it to generate the admixed population. Through an effective utilization of the population structural information under a principled nonparametric Bayesian framework, the resulting model is significantly less sensitive to the choice and the amount of training data for ancestral populations than state-of-the-art algorithms. We also improve the robustness under deviation from common modeling assumptions by incorporating population-specific scale parameters that allow variable recombination rates in different populations. Our method is applicable to an admixed population from an arbitrary number of ancestral populations and also performs competitively in terms of spurious ancestry proportions under a general multiway admixture assumption. We validate the proposed method by simulation under various admixing scenarios and present empirical analysis results from a worldwide-distributed dataset from the Human Genome Diversity Project.

To address the poor throughput of automatic repeat request (ARQ) mechanisms in wireless broadcast systems, a broadcast retransmission scheme based on random network coding, RNC-ARQ, is proposed. The broadcast node combines all lost packets into coded retransmissions using a random linear code; once a receiving node has accumulated a sufficient number of coded packets, it can recover the original data by decoding. The scheme effectively reduces the number of retransmissions and improves wireless broadcast throughput. For a burst-error channel described by the Gilbert-Elliott model, a multi-state Markov model combining the channel state with the node's receive-processing flow is established, and from it a closed-form expression for the throughput of the RNC-ARQ scheme is derived. Finally, the NS-2 simulator is used to evaluate the performance of the RNC-ARQ scheme; the results show that, over a burst-error channel, the throughput of the random-network-coding retransmission scheme exceeds that of both the traditional selective-repeat ARQ scheme and a retransmission scheme based on XOR coding.
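The Gilbert-Elliott burst-error channel underlying the analysis is itself a small two-state Markov model and is easy to simulate. The transition and loss probabilities below are illustrative, not those of the paper:

```python
import random

def gilbert_elliott(n, p_gb=0.05, p_bg=0.3, loss_good=0.01, loss_bad=0.5, seed=0):
    """Packet-loss trace from a two-state (Good/Bad) Gilbert-Elliott channel:
    the state follows a Markov chain and the per-packet loss probability
    depends on the current state. All parameters are illustrative."""
    random.seed(seed)
    state, losses = 'G', []
    for _ in range(n):
        if state == 'G' and random.random() < p_gb:
            state = 'B'
        elif state == 'B' and random.random() < p_bg:
            state = 'G'
        losses.append(random.random() < (loss_good if state == 'G' else loss_bad))
    return losses

losses = gilbert_elliott(100_000)
rate = sum(losses) / len(losses)                    # overall loss rate
after_loss = [losses[i] for i in range(1, len(losses)) if losses[i - 1]]
burst_rate = sum(after_loss) / len(after_loss)      # P(loss | previous loss)
```

The conditional loss rate after a loss far exceeds the average rate, i.e. losses cluster in bursts — the behaviour that favours coded retransmission of many lost packets at once over per-packet ARQ.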

A novel approach for real-time skin segmentation in video sequences is described. The approach enables reliable skin segmentation despite wide variation in illumination during tracking. An explicit second order Markov model is used to predict evolution of the skin-color (HSV) histogram over time. Histograms are dynamically updated based on feedback from the current segmentation and predictions of the Markov model. The evolution of the skin-color distribution at each frame is parameterized by translation, scaling and rotation in color space. Consequent changes in geometric parameterization of the distribution are propagated by warping and resampling the histogram. The parameters of the discrete-time dynamic Markov model are estimated using Maximum Likelihood Estimation, and also evolve over time. The accuracy of the new dynamic skin color segmentation algorithm is compared to that obtained via a static color model. Segmentation accuracy is evaluated using labeled ground-truth video sequences taken from staged experiments and popular movies. An overall increase in segmentation accuracy of up to 24% is observed in 17 out of 21 test sequences. In all but one case the skin-color classification rates for our system were higher, with background classification rates comparable to those of the static segmentation.
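The feedback update at the heart of such adaptation can be sketched in one dimension: blend the previous histogram with a histogram of the pixels the current segmentation labels as skin. This simplification drops the paper's HSV parameterization and the second-order Markov prediction of translation, scaling and rotation; the blending weight, bin count and drift schedule below are invented:

```python
import random

def update_histogram(hist, samples, bins=16, alpha=0.3):
    """Blend the previous skin-colour histogram with one built from pixels the
    current segmentation labelled as skin (exponential forgetting). This is
    the feedback step only; the paper additionally warps the histogram using
    a Markov-model prediction of its motion in colour space."""
    obs = [0.0] * bins
    for s in samples:
        obs[min(int(s * bins), bins - 1)] += 1.0
    total = sum(obs) or 1.0
    return [(1 - alpha) * h + alpha * o / total for h, o in zip(hist, obs)]

random.seed(0)
bins = 16
hist = [1.0 / bins] * bins                        # uninformative initial model
centre = 0.2                                      # true skin hue, drifts slowly
for _ in range(50):
    centre += 0.004                               # illumination-induced drift
    pixels = [min(max(random.gauss(centre, 0.05), 0.0), 0.999)
              for _ in range(200)]
    hist = update_histogram(hist, pixels, bins)
peak_bin = hist.index(max(hist))                  # tracks the drifting hue
```

A static model would keep its peak at the initial hue; the adaptive histogram follows the drift, which is the mechanism behind the reported accuracy gain over the static colour model.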