862 results for Markov model
Abstract:
We study a fixed-point formalization of the well-known analysis of Bianchi. We provide a significant simplification and generalization of the analysis. In this more general framework, the fixed-point solution and the performance measures derived from it are studied. Uniqueness of the fixed point is established. Simple and general throughput formulas are provided. It is shown that the throughput of any flow is bounded by that of the flow with the smallest transmission rate. The aggregate throughput is bounded by the reciprocal of the harmonic mean of the transmission rates. In an asymptotic regime with a large number of nodes, explicit formulas for the collision probability, the aggregate attempt rate, and the aggregate throughput are provided. The results of the analysis are compared with ns2 simulations and also with an exact Markov model of the backoff process. It is shown how the saturated network analysis can be used to obtain TCP transfer throughputs in some cases.
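As a minimal illustration of this style of analysis, the sketch below iterates the classical single-cell fixed point to its solution. The backoff parameters (W, m), the node counts, and the damping factor are illustrative assumptions, not values from the paper.

```python
# A minimal sketch of the fixed-point computation behind Bianchi-style
# single-cell WLAN analyses.  The backoff parameters (W, m), the node
# counts and the damping factor are illustrative assumptions.

def attempt_prob(p, W=32, m=5):
    """Per-slot attempt probability tau(p) of a saturated node under
    binary exponential backoff, given collision probability p."""
    return (2 * (1 - 2 * p)) / ((1 - 2 * p) * (W + 1) + p * W * (1 - (2 * p) ** m))

def solve_fixed_point(n, tol=1e-12, damping=0.5):
    """Damped iteration of p <- 1 - (1 - tau(p))**(n - 1)."""
    p = 0.1
    for _ in range(10_000):
        p_new = 1 - (1 - attempt_prob(p)) ** (n - 1)
        p_next = damping * p + (1 - damping) * p_new
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    return p

if __name__ == "__main__":
    for n in (5, 10, 20, 50):
        print(f"n = {n:2d}: collision probability = {solve_fixed_point(n):.4f}")
```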
Abstract:
We consider an optimal power and rate scheduling problem for a multiaccess fading wireless channel, with the objective of minimising a weighted sum of mean packet transmission delays subject to a peak power constraint. The base station acts as a controller which, depending upon the buffer lengths and the channel state of each user, allocates transmission rate and power to individual users. We assume perfect channel state information at the transmitter and the receiver. We also assume Markov models for the fading and packet arrival processes. The policy obtained exhibits a form of indexability.
Abstract:
Joint decoding of multiple speech patterns so as to improve speech recognition performance is important, especially in the presence of noise. In this paper, we propose a Multi-Pattern Viterbi Algorithm (MPVA) to jointly decode and recognize multiple speech patterns for automatic speech recognition (ASR). The MPVA is a generalization of the Viterbi algorithm to jointly decode multiple patterns given a Hidden Markov Model (HMM). Unlike the previously proposed two-stage Constrained Multi-Pattern Viterbi Algorithm (CMPVA), the MPVA is a single-stage algorithm. The MPVA has the advantage that it can be extended to connected word recognition (CWR) and continuous speech recognition (CSR) problems. The MPVA is shown to provide better speech recognition performance than earlier techniques: using only two repetitions of noisy speech patterns (-5 dB SNR, 10% burst noise), the word error rate using the MPVA decreased by 28.5% compared to individual decoding.
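For context, the sketch below shows the standard single-pattern Viterbi recursion that the MPVA generalizes to several observation sequences decoded jointly; the toy HMM parameters are assumptions for illustration, not the paper's models.

```python
import numpy as np

# Standard Viterbi decoding for a discrete-observation HMM: the
# single-pattern recursion that the MPVA generalizes.  Toy parameters.

def viterbi(obs, pi, A, B):
    """Most likely state path for observation sequence obs.
    pi: initial probs (S,), A: transitions (S, S), B: emissions (S, V)."""
    T, S = len(obs), len(pi)
    delta = np.zeros((T, S))            # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)   # rows: from, cols: to
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):       # trace the backpointers
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2, 2], pi, A, B))
```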
Abstract:
Reliability analysis for computing systems in aerospace applications must account for the actual computations the system performs in its use environment. This paper introduces a theoretical nonhomogeneous Markov model for such applications.
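Since the abstract does not specify the model, the sketch below only illustrates the generic mechanism of a nonhomogeneous (time-varying) Markov reliability computation; the two-state chain and the failure-rate schedule are invented placeholders.

```python
import numpy as np

# Generic sketch: transient analysis of a nonhomogeneous discrete-time
# Markov chain.  The two states (0 = up, 1 = failed) and the growing
# failure probability are placeholders, not the paper's model.

def transition_matrix(k):
    lam = 1e-4 * (1 + 0.01 * k)        # time-dependent failure probability
    return np.array([[1 - lam, lam],
                     [0.0,     1.0]])  # failed state is absorbing

def reliability(n_steps):
    """P(system still up after n_steps), starting from the up state."""
    pi = np.array([1.0, 0.0])
    for k in range(n_steps):
        pi = pi @ transition_matrix(k)  # nonhomogeneous: P varies with k
    return pi[0]

print(f"reliability after 1000 steps: {reliability(1000):.4f}")
```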
Abstract:
The variation of the viscosity as a function of the sequence distribution in an A-B random copolymer melt is determined. The parameters that characterize the random copolymer are the fraction of A monomers f, the parameter lambda which determines the correlation in the monomer identities along a chain, and the Flory chi parameter chi(F) which determines the strength of the enthalpic repulsion between monomers of type A and B. For lambda>0, there is a greater probability of finding like monomers at adjacent positions along the chain, and for lambda<0 unlike monomers are more likely to be adjacent to each other. The traditional Markov model for the random copolymer melt is altered to remove ultraviolet divergences in the equations for the renormalized viscosity, and the phase diagram for the modified model has a binary-fluid-type transition for lambda>0 and does not exhibit a phase transition for lambda<0. A mode coupling analysis is used to determine the renormalization of the viscosity due to the dependence of the bare viscosity on the local concentration field. Due to the dissipative nature of the coupling, there are nonlinearities both in the transport equation and in the noise correlation. The concentration dependence of the transport coefficient presents additional difficulties in the formulation due to the Ito-Stratonovich dilemma, and there is some ambiguity about the choice of the concentration to be used while calculating the noise correlation. In the Appendix, it is shown using a diagrammatic perturbation analysis that the Ito prescription for the calculation of the transport coefficient, when coupled with a causal discretization scheme, provides a consistent formulation that satisfies stationarity and the fluctuation dissipation theorem. This functional integral formalism is used in the present analysis, and consistency is verified for the present problem as well. The upper critical dimension for this type of renormalization is 2, and so there is no divergence in the viscosity in the vicinity of a critical point. The results indicate that there is a systematic dependence of the viscosity on lambda and chi(F). The fluctuations tend to increase the viscosity for lambda<0 and decrease the viscosity for lambda>0, and an increase in chi(F) tends to decrease the viscosity.
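A common parametrization of such a sequence-correlated A-B Markov chain (assumed here for illustration; the paper's modified model is not reproduced) fixes the stationary A fraction f and uses lambda as the correlation of neighboring monomer identities:

```python
import random

# Sketch of a standard two-state Markov copolymer sequence: stationary
# fraction f of A monomers and correlation parameter lam (lam > 0 favors
# like neighbors, lam < 0 unlike neighbors).  This parametrization is a
# common convention assumed for illustration; lam must keep all the
# transition probabilities in [0, 1].

def markov_copolymer(n, f=0.5, lam=0.3, seed=0):
    rng = random.Random(seed)
    p_A_after_A = f + (1 - f) * lam    # P(next = A | current = A)
    p_A_after_B = f * (1 - lam)        # P(next = A | current = B)
    seq = ['A' if rng.random() < f else 'B']
    for _ in range(n - 1):
        p = p_A_after_A if seq[-1] == 'A' else p_A_after_B
        seq.append('A' if rng.random() < p else 'B')
    return ''.join(seq)

s = markov_copolymer(100_000, f=0.5, lam=0.3)
like = sum(a == b for a, b in zip(s, s[1:])) / (len(s) - 1)
print(f"fraction of like neighbor pairs: {like:.3f}")  # > 0.5 when lam > 0
```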
Abstract:
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points. A soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs). A failure is declared when a counter that is driven by the occurrence of SESs reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared, and of the connection reattempt parameters of the virtual circuit end-point session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
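The counting detector described above is easy to sketch; the increment, leak, and threshold values below are illustrative assumptions rather than the parameters designed in the paper.

```python
# Sketch of the SES-driven detector: a counter rises on each severely
# errored second (SES), leaks otherwise, and a failure is declared when
# it crosses a threshold.  Parameter values are illustrative only.

def detect_failure(ses_stream, up=1, down=1, threshold=10):
    """Return the 1-based second at which failure is declared, or None.
    ses_stream yields True for each severely errored second."""
    counter = 0
    for t, is_ses in enumerate(ses_stream, start=1):
        counter = counter + up if is_ses else max(0, counter - down)
        if counter >= threshold:
            return t
    return None

# A burst of SESs long enough to cross the threshold triggers detection.
stream = [False] * 20 + [True] * 12 + [False] * 20
print(detect_failure(stream))   # declares failure during the burst (second 30)
```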
Abstract:
Throughput analysis of bulk TCP downloads in cases where all WLAN stations are associated at the same rate with the AP is available in the literature. In this paper, we extend the analysis to TCP uploads for the case of multirate associations. The approach is based on a two-dimensional semi-Markov model for the number of backlogged stations. Analytical results are in excellent agreement with simulations performed using QUALNET 4.5.
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11-based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of its throughput performance.
Abstract:
The sensor scheduling problem can be formulated as a controlled hidden Markov model, and this paper solves the problem when the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. The aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel simulation-based method that uses a stochastic gradient algorithm to find optimal actions.
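As a generic illustration of simulation-based stochastic-gradient action search (the paper's particle-based estimator is not reproduced; the scalar cost model, finite-difference gradient, and step sizes below are assumptions):

```python
import numpy as np

# Generic sketch: minimize a simulated error-variance criterion over a
# scalar action by stochastic gradient descent with finite-difference
# gradient estimates.  The toy cost (noise minimized at a = 2) stands in
# for the filter error variance of the actual problem.

rng = np.random.default_rng(1)

def simulated_cost(a, n_sim=200):
    """Monte Carlo estimate of the error variance under action a."""
    noise_var = 0.5 + (a - 2.0) ** 2          # toy: best action is a = 2
    return np.var(rng.normal(0.0, np.sqrt(noise_var), n_sim))

def sgd_action(a0=0.0, steps=500, lr=0.05, h=0.1):
    a = a0
    for _ in range(steps):
        g = (simulated_cost(a + h) - simulated_cost(a - h)) / (2 * h)
        a -= lr * g                            # noisy gradient step
    return a

print(f"action found: {sgd_action():.2f}")     # drifts (noisily) toward 2.0
```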
Abstract:
Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown, empirically, to be more accurate with respect to the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
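A minimal sketch of the ABC idea inside a bootstrap particle filter (the toy linear-Gaussian model, the Gaussian ABC kernel, and the bandwidth eps are illustrative assumptions, not the paper's portfolio application):

```python
import numpy as np

# ABC filtering sketch: when p(y | x) is intractable but simulable, each
# particle is weighted by a kernel of the distance between a simulated
# pseudo-observation and the actual observation.  Toy model throughout.

rng = np.random.default_rng(0)

def propagate(x):                  # x_t = 0.9 x_{t-1} + N(0, 1)
    return 0.9 * x + rng.normal(0.0, 1.0, size=x.shape)

def pseudo_obs(x):                 # simulate y given x; never evaluated as a density
    return x + rng.normal(0.0, 0.5, size=x.shape)

def abc_filter(ys, n=2000, eps=0.5):
    x = rng.normal(0.0, 1.0, n)
    means = []
    for y in ys:
        x = propagate(x)
        u = pseudo_obs(x)                          # simulate, don't evaluate
        w = np.exp(-0.5 * ((u - y) / eps) ** 2)    # Gaussian ABC kernel
        w /= w.sum()
        means.append(float(np.sum(w * x)))         # filtered mean estimate
        x = x[rng.choice(n, n, p=w)]               # multinomial resampling
    return means

# Synthetic observations drawn from the same toy model.
x_true, ys = 0.0, []
for _ in range(50):
    x_true = 0.9 * x_true + rng.normal()
    ys.append(x_true + rng.normal(0.0, 0.5))
print(np.round(abc_filter(ys)[:5], 3))
```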
Abstract:
Many problems in control and signal processing can be formulated as sequential decision problems for general state space models. However, except for some simple models, one cannot obtain analytical solutions and has to resort to approximation. In this thesis, we investigate problems where Sequential Monte Carlo (SMC) methods can be combined with a gradient-based search to provide solutions to online optimisation problems. The main contributions of the thesis are as follows. Chapter 4 focuses on solving the sensor scheduling problem when cast as a controlled Hidden Markov Model. We consider the case in which the state, observation and action spaces are continuous. This general case is important as it is the natural framework for many applications. In sensor scheduling, our aim is to minimise the variance of the estimation error of the hidden state with respect to the action sequence. We present a novel SMC method that uses a stochastic gradient algorithm to find optimal actions. This is in contrast to existing works in the literature that only solve approximations to the original problem. In Chapter 5 we present how SMC can be used to solve a risk-sensitive control problem. We adopt the Feynman-Kac representation of a controlled Markov chain flow and exploit the properties of the logarithmic Lyapunov exponent, which lead to a policy gradient solution for the parameterised problem. The resulting SMC algorithm follows a structure similar to the Recursive Maximum Likelihood (RML) algorithm for online parameter estimation. In Chapters 6, 7 and 8, dynamic graphical models are combined with state space models for the purpose of online decentralised inference. We concentrate on the distributed parameter estimation problem using two Maximum Likelihood techniques, namely Recursive Maximum Likelihood (RML) and Expectation Maximization (EM). The resulting algorithms can be interpreted as an extension of the Belief Propagation (BP) algorithm to compute likelihood gradients. In order to design an SMC algorithm, Chapter 8 uses a nonparametric approximation of Belief Propagation. The algorithms were successfully applied to solve the sensor localisation problem for sensor networks of small and medium size.
Abstract:
The general objective of this study is to assess the cost-utility ratio of treating hepatitis C virus (HCV) infection in dialysis patients who are candidates for kidney transplantation, with the alternative therapeutic regimens being interferon-α monotherapy, pegylated interferon monotherapy, interferon-α combined with ribavirin, and pegylated interferon combined with ribavirin, each compared with no treatment. The study takes the perspective of the Brazilian Unified Health System (SUS), which also served as the basis for estimating the budget impact of the most cost-effective treatment strategy. To meet these objectives, a Markov model was built to simulate the costs and outcomes of each strategy evaluated. To populate the model, a literature review was carried out to define the health states related to HCV infection in transplant recipients and the transition probabilities between states. Utility measures were derived from consultation with specialists. Costs were derived from the SUS table of procedures. The results show that treating HCV infection before kidney transplantation is more cost-effective than no treatment, with interferon-α as the best option. The budget impact of adopting this strategy in the SUS corresponds to 0.3% of the amount spent by the SUS on renal replacement therapy during 2007.
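A minimal sketch of the Markov cohort machinery behind this kind of cost-utility analysis (the health states, transition probabilities, costs, utilities, horizon, and discount rate below are placeholders, not the study's values):

```python
import numpy as np

# Sketch of a Markov cohort cost-utility simulation: a cohort is pushed
# through yearly transition probabilities while discounted costs and
# QALYs are accumulated.  All numbers are placeholders.

states = ["mild", "severe", "dead"]
P = np.array([[0.90, 0.08, 0.02],      # yearly transition probabilities
              [0.00, 0.90, 0.10],
              [0.00, 0.00, 1.00]])     # death is absorbing
cost = np.array([1000.0, 5000.0, 0.0]) # yearly cost per state
qaly = np.array([0.85, 0.55, 0.0])     # yearly utility per state

def run_cohort(years=20, discount=0.05):
    dist = np.array([1.0, 0.0, 0.0])   # whole cohort starts in "mild"
    total_cost = total_qaly = 0.0
    for t in range(years):
        d = (1 + discount) ** -t       # discount factor for year t
        total_cost += d * (dist @ cost)
        total_qaly += d * (dist @ qaly)
        dist = dist @ P
    return total_cost, total_qaly

c, q = run_cohort()
print(f"expected cost {c:.0f}, expected QALYs {q:.2f}")
```

Each treatment strategy would be run through such a model with its own transition probabilities and costs, and strategies compared via the incremental cost-effectiveness ratio ICER = (C1 - C0) / (Q1 - Q0).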
Abstract:
Actions for the prevention, diagnosis, and treatment of chronic hepatitis C are on the health policy agendas of Brazil and the world, since it is a disease that affects a large number of people, has a high treatment cost, and leads to severe outcomes and disability, which raises its social cost. Clinical protocols and therapeutic guidelines reflect the efforts of numerous organizations to combat hepatitis C, informing health professionals, patients, families, and citizens in general of the scientifically proven best course of action in the face of an infection of this nature. A cost-effectiveness analysis was carried out from the SUS perspective for the following strategies: treatment and retreatment with dual therapy; treatment with dual therapy and retreatment with triple therapy; and treatment with triple therapy. Using a simulation model based on Markov chains, a hypothetical cohort of 1,000 adults over 40 years of age was created, of both sexes, without distinction of socioeconomic class, with a confirmed diagnosis of chronic hepatitis C, monoinfected with HCV genotype 1, and free of comorbidities. The simulation started with all individuals carrying the mildest form of the disease, histological classification F0 or F1 on the Metavir scale. The results show that both options, dual/triple therapy and triple therapy, are below the acceptability threshold for technology incorporation proposed by the WHO (2012), which is 72,195 R$/QALY (IBGE, 2013; WHO, 2012). Both are cost-effective: the ICER of dual/triple therapy relative to the baseline was 7,186.3 R$/QALY, and that of triple therapy was 59,053.8 R$/QALY. However, the incremental cost of triple therapy relative to dual/triple therapy was 31,029 and the incremental effectiveness was 0.52. In general, when the interventions analyzed are below the threshold, adoption of the more effective scheme is suggested. Triple therapy, despite being slightly more effective than dual/triple therapy, had a much higher cost. Thus, since either option could coherently be adopted for use in the SUS, a system with limited resources, a budget-impact study is recommended to provide further evidence for the decision, either supporting the existing Brazilian protocol or suggesting the preparation of a new document.
Abstract:
Iron is required by many microbes and pathogens for their survival and proliferation, including Leishmania, which causes leishmaniasis. Leishmaniasis is an increasingly serious infectious disease with a wide spectrum of clinical manifestations, ranging from localized cutaneous leishmaniasis (CL) lesions to a lethal visceral form. Certain strains such as BALB/c mice fail to control L. major infection and develop progressive lesions and systemic disease. These mice are thought to be a model of non-healing forms of the human disease such as kala-azar or diffuse cutaneous leishmaniasis. Progression of disease in BALB/c mice has been associated with anemia in the last days of their survival, and the progressive anemia is considered to be one of the causes of their death. Ferroportin (Fpn), a key regulator of iron homeostasis, is a conserved membrane protein that exports iron across the duodenal enterocytes, as well as macrophages and hepatocytes, into the blood circulation. Fpn also has a critical influence on the survival and proliferation of many microorganisms whose growth depends on iron; thus, preparation of Fpn is needed to study the role of iron in immune responses and the pathogenesis of microorganisms. To prepare and characterize a recombinant ferroportin, total RNA was extracted from Indian zebrafish duodenum and used to synthesize cDNA by RT-PCR. The PCR product was first cloned in the Topo TA vector and then subcloned into the GFP expression vector pEGFP-N1. The resulting plasmid (pEGFP-ZFpn) was used for expression of the FPN-EGFP protein in HEK 293T cells. The expression was confirmed by fluorescence microscopy and flow cytometry. Recombinant Fpn was further characterized by submitting its predicted amino acid sequence to the TMHMM V2.0 prediction server (hidden Markov model), the NetOGlyc 3.1 server and the NetNGlyc 3.1 server. The data showed that the Fpn obtained from Indian zebrafish contained eight transmembrane domains with N- and C-termini inside the cytoplasm and harboured 78 mucin-type glycosylated amino acids. The results indicate that the prepared and characterized recombinant Fpn protein shows no difference in membrane topology compared to the Fpn described by other researchers. Our next aim was to deliver the recombinant plasmid (pEGFP-ZFpn) to enterocyte cells. However, naked therapeutic genes are rapidly degraded by nucleases, show poor cellular uptake, lack specificity to the target cells, and have low transfection efficiency. The development of safe and efficient gene carriers is one of the prerequisites for the success of gene therapy. Chitosan and alginate polymers were used as oral gene carriers because of their biodegradability, biocompatibility, and their mucoadhesive and permeability-enhancing properties in the gut. Nanoparticles comprising alginate/chitosan polymers were prepared by the pregel preparation method. The resulting nanoparticles had a loading efficiency of 95% and an average size of 188 nm, as confirmed by the PCS method, and SEM images showed spherical particles. BALB/c mice were divided into three groups. The first and second groups were fed with chitosan/alginate nanoparticles containing the pEGFP-ZFpn and pEGFP plasmids, respectively (30 μg/mouse), and the third group (control) did not receive any nanoparticles. In BALB/c mice infected with L. major, hematocrit and iron levels were higher in the pEGFP-ZFpn-fed mice than in the other groups. Concentrations of cytokines determined by ELISA showed lower levels of IL-4 and IL-10 and higher IFN-γ/IL-4 and IFN-γ/IL-10 ratios in the pEGFP-ZFpn-fed mice than in the other groups. Moreover, a more limited increase in footpad thickness and a significant reduction of viable parasites in the lymph node were seen in the pEGFP-ZFpn-fed mice. These data strongly suggest that in vivo administration of chitosan/alginate nanoparticles containing pEGFP-ZFpn suppresses the Th2 response and may be used to control leishmaniasis.
Abstract:
Hidden Markov model (HMM)-based speech synthesis systems possess several advantages over concatenative synthesis systems. One such advantage is the relative ease with which HMM-based systems are adapted to speakers not present in the training dataset. Speaker adaptation methods used in the field of HMM-based automatic speech recognition (ASR) are adopted for this task. In the case of unsupervised speaker adaptation, previous work has used a supplementary set of acoustic models to estimate the transcription of the adaptation data. This paper first presents an approach to the unsupervised speaker adaptation task for HMM-based speech synthesis models which avoids the need for such supplementary acoustic models. This is achieved by defining a mapping between HMM-based synthesis models and ASR-style models, via a two-pass decision tree construction process. Second, it is shown that this mapping also enables unsupervised adaptation of HMM-based speech synthesis models without the need to perform linguistic analysis of the estimated transcription of the adaptation data. Third, this paper demonstrates how this technique lends itself to the task of unsupervised cross-lingual adaptation of HMM-based speech synthesis models, and explains the advantages of such an approach. Finally, listener evaluations reveal that the proposed unsupervised adaptation methods deliver performance approaching that of supervised adaptation.