874 results for Markov Model Estimation
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period-based methods in terms of throughput performance.
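As a rough illustration of the fixed-point computation underlying Bianchi's model, the sketch below solves the standard saturated-mode equations for the per-slot transmission probability. It is a minimal stand-in, not the paper's extended slot-integrated model, and all parameter values (`n`, `W`, `m`, the damping factor) are illustrative:

```python
def bianchi_tau(n, W=32, m=5, iters=500):
    """Damped fixed-point iteration for Bianchi's saturated DCF model.
    n stations, minimum contention window W, m backoff stages.
    Returns (tau, p): the per-slot transmission probability and the
    conditional collision probability seen by a tagged station."""
    tau = 0.1
    for _ in range(iters):
        # a transmission collides if any of the other n-1 stations transmits
        p = 1.0 - (1.0 - tau) ** (n - 1)
        tau_new = 2.0 * (1.0 - 2.0 * p) / (
            (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))
        tau = 0.5 * tau + 0.5 * tau_new   # damping stabilises the iteration
    return tau, p

tau, p = bianchi_tau(n=10)
```

With these classic parameters the iteration settles near tau ≈ 0.037, which is in the range usually reported for a 10-station saturated network.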
Abstract:
Approximate Bayesian computation (ABC) has become a popular technique to facilitate Bayesian inference from complex models. In this article we present an ABC approximation designed to perform biased filtering for a Hidden Markov Model when the likelihood function is intractable. We use a sequential Monte Carlo (SMC) algorithm to both fit and sample from our ABC approximation of the target probability density. This approach is shown to, empirically, be more accurate w.r.t. the original filter than competing methods. The theoretical bias of our method is investigated; it is shown that the bias goes to zero at the expense of increased computational effort. Our approach is illustrated on a constrained sequential lasso for portfolio allocation to 15 constituents of the FTSE 100 share index.
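A toy version of the ABC filtering idea can be sketched as a bootstrap particle filter in which the intractable likelihood is replaced by an ABC kernel on simulated pseudo-observations. The Gaussian random-walk model and all settings below are hypothetical, chosen only so the sketch is self-contained; the paper's actual SMC-ABC construction is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_particle_filter(y, n_particles=500, eps=0.5):
    """ABC approximation of the bootstrap filter for a toy HMM:
    x_t = x_{t-1} + N(0,1), y_t = x_t + N(0,1). Rather than evaluating
    the (here pretend-intractable) likelihood, each particle simulates
    a pseudo-observation and is weighted by a uniform ABC kernel on
    |y_sim - y_t| <= eps."""
    x = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y_t in y:
        x = x + rng.normal(0.0, 1.0, n_particles)        # propagate
        y_sim = x + rng.normal(0.0, 1.0, n_particles)    # pseudo-data
        w = (np.abs(y_sim - y_t) <= eps).astype(float)   # ABC kernel
        if w.sum() == 0:                                 # degenerate step
            w = np.ones(n_particles)
        w /= w.sum()
        means.append(np.sum(w * x))                      # filtered mean
        x = rng.choice(x, size=n_particles, p=w)         # resample
    return np.array(means)

# synthetic observations from the same toy model
x_true = np.cumsum(rng.normal(0, 1, 20))
y_obs = x_true + rng.normal(0, 1, 20)
est = abc_particle_filter(y_obs)
```

Shrinking `eps` reduces the ABC bias, mirroring the paper's result that the bias vanishes at the cost of more computation (more particles are rejected by the kernel).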
Abstract:
The general objective of this study is to evaluate the cost-utility ratio of treating hepatitis C virus (HCV) infection in dialysis patients who are candidates for kidney transplantation, considering as alternative therapeutic regimens interferon-α monotherapy, pegylated interferon monotherapy, interferon-α combined with ribavirin, and pegylated interferon combined with ribavirin, each compared against no treatment. The study adopted the perspective of the Brazilian Unified Health System (SUS), which also served as the basis for estimating the budget impact of the most cost-effective treatment strategy. To meet these objectives, a Markov model was built to simulate the costs and outcomes of each strategy evaluated. To inform the model, a literature review was conducted to define the health states associated with HCV infection in transplant recipients and the transition probabilities between states. Utility measures were derived from expert consultation. Costs were derived from the SUS procedure table. The results show that treating HCV infection before kidney transplantation is more cost-effective than no treatment, with interferon-α as the best option. The budget impact of adopting this strategy in the SUS corresponds to 0.3% of the amount spent by the SUS on renal replacement therapy over the year 2007.
Abstract:
Actions for the prevention, diagnosis and treatment of chronic hepatitis C are on the health policy agendas of Brazil and the world: the disease affects a large number of people, its treatment is costly, and it causes severe outcomes and disability, which increases its social cost. Clinical protocols and therapeutic guidelines reflect the efforts of numerous bodies to combat hepatitis C, informing health professionals, patients, families and citizens in general of the scientifically proven best course of action in the face of such an infection. A cost-effectiveness analysis was carried out, from the SUS perspective, of the following strategies: treatment and retreatment with dual therapy; treatment with dual therapy and retreatment with triple therapy; and treatment with triple therapy. Using a simulation model based on Markov chains, a hypothetical cohort was created of 1,000 adults over 40 years of age, of both sexes, without distinction of socioeconomic class, with a confirmed diagnosis of chronic hepatitis C, mono-infected with HCV genotype 1 and free of comorbidities. The simulation started with all individuals carrying the mildest form of the disease, histological classification F0 or F1 on the Metavir scale. The results show that both options, i.e. dual/triple therapy and triple therapy, fall below the technology-incorporation acceptability threshold proposed by the WHO (2012), which is 72,195 R$/QALY (IBGE, 2013; WHO, 2012). Both are cost-effective: the ICER of dual/triple therapy relative to the baseline was 7,186.3 R$/QALY and that of triple therapy was 59,053.8 R$/QALY. However, the incremental cost of triple therapy relative to dual/triple therapy was 31,029 and the incremental effectiveness was 0.52.
In general, when the interventions analysed fall below the threshold, adoption of the more effective regimen is recommended. Triple therapy, despite being slightly more effective than dual/triple therapy, was far more costly. Since either could coherently be adopted for use in the SUS, a system with limited resources, a budget-impact study is recommended to provide further evidence for the decision, and thus to support the existing Brazilian protocol or to suggest the drafting of a new document.
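The cohort Markov simulation and ICER computation described in this kind of analysis can be sketched as follows. All transition probabilities, costs, utilities, the 5% discount rate and the three-state structure here are hypothetical placeholders, not the study's values:

```python
import numpy as np

def cohort_model(P, costs, utilities, cycles=40, disc=0.05):
    """Discrete-time cohort Markov model: returns total discounted cost
    and QALYs per patient. P[i, j] is the per-cycle transition
    probability from health state i to state j."""
    dist = np.zeros(len(costs))
    dist[0] = 1.0                      # whole cohort starts in the mild state
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        df = (1.0 + disc) ** -t        # discount factor for cycle t
        total_cost += df * (dist @ costs)
        total_qaly += df * (dist @ utilities)
        dist = dist @ P                # advance the cohort one cycle
    return total_cost, total_qaly

# hypothetical 3-state chain: mild -> cirrhosis -> death (absorbing)
P_base = np.array([[0.90, 0.08, 0.02],
                   [0.00, 0.85, 0.15],
                   [0.00, 0.00, 1.00]])
c_base, e_base = cohort_model(P_base, np.array([1000.0, 8000.0, 0.0]),
                              np.array([0.85, 0.55, 0.0]))
# a hypothetical therapy that halves progression at a higher drug cost
P_tx = np.array([[0.95, 0.04, 0.01],
                 [0.00, 0.85, 0.15],
                 [0.00, 0.00, 1.00]])
c_tx, e_tx = cohort_model(P_tx, np.array([6000.0, 8000.0, 0.0]),
                          np.array([0.85, 0.55, 0.0]))
icer = (c_tx - c_base) / (e_tx - e_base)   # incremental cost per QALY gained
```

The resulting ICER would then be compared against a willingness-to-pay threshold, exactly as the abstract does against the WHO figure of 72,195 R$/QALY.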
Abstract:
Iron is required by many microbes and pathogens for their survival and proliferation, including Leishmania, the cause of leishmaniasis. Leishmaniasis is an increasingly serious infectious disease with a wide spectrum of clinical manifestations, ranging from localized cutaneous leishmaniasis (CL) lesions to a lethal visceral form. Certain strains such as BALB/c mice fail to control L. major infection and develop progressive lesions and systemic disease; these mice are considered a model of non-healing forms of the human disease such as kala-azar or diffuse cutaneous leishmaniasis. Disease progression in BALB/c mice has been associated with anemia in the last days of survival, and this progressive anemia is considered one of the causes of death. Ferroportin (Fpn), a key regulator of iron homeostasis, is a conserved membrane protein that exports iron from duodenal enterocytes, as well as from macrophages and hepatocytes, into the blood circulation. Fpn also critically influences the survival and proliferation of the many microorganisms whose growth depends on iron; a recombinant Fpn is therefore needed to study the role of iron in immune responses and microbial pathogenesis. To prepare and characterize a recombinant ferroportin, total RNA was extracted from Indian zebrafish duodenum and used to synthesize cDNA by RT-PCR. The PCR product was first cloned into the TOPO TA vector and then subcloned into the GFP expression vector pEGFP-N1. The resulting plasmid (pEGFP-ZFpn) was used to express the Fpn-EGFP protein in HEK 293T cells. Expression was confirmed by fluorescence microscopy and flow cytometry. Recombinant Fpn was further characterized by submitting its predicted amino acid sequence to the TMHMM v2.0 prediction server (a hidden Markov model), the NetOGlyc 3.1 server and the NetNGlyc 3.1 server.
The data showed that the Fpn obtained from Indian zebrafish contains eight transmembrane domains with N- and C-termini inside the cytoplasm and harbours 78 mucin-type glycosylated amino acids. The results indicate that the prepared and characterized recombinant Fpn protein shows no difference in membrane topology from the Fpn described by other researchers. Our next aim was to deliver the recombinant plasmid (pEGFP-ZFpn) to enterocyte cells. However, naked therapeutic genes are rapidly degraded by nucleases and show poor cellular uptake, no specificity for target cells, and low transfection efficiency; the development of safe and efficient gene carriers is one of the prerequisites for successful gene therapy. Chitosan and alginate polymers were used as an oral gene carrier because of their biodegradability, biocompatibility, and their mucoadhesive and permeability-enhancing properties in the gut. Nanoparticles comprising alginate/chitosan polymers were prepared by the pre-gel method. The resulting nanoparticles had a loading efficiency of 95% and an average size of 188 nm as determined by PCS, and SEM images showed spherical particles. BALB/c mice were divided into three groups: the first and second groups were fed chitosan/alginate nanoparticles containing the pEGFP-ZFpn and pEGFP plasmids, respectively (30 μg/mouse), and the third (control) group received no nanoparticles. After infection with L. major, hematocrit and iron levels were higher in the pEGFP-ZFpn-fed mice than in the other groups. Cytokine concentrations determined by ELISA showed lower levels of IL-4 and IL-10 and higher IFN-γ/IL-4 and IFN-γ/IL-10 ratios in the pEGFP-ZFpn-fed mice. Moreover, a more limited increase in footpad thickness and a significant reduction of viable parasites in the lymph nodes were seen in the pEGFP-ZFpn-fed mice.
These data strongly suggest that in vivo administration of chitosan/alginate nanoparticles containing pEGFP-ZFpn suppresses the Th2 response and may be used to control leishmaniasis.
Abstract:
Hidden Markov model (HMM)-based speech synthesis systems possess several advantages over concatenative synthesis systems. One such advantage is the relative ease with which HMM-based systems are adapted to speakers not present in the training dataset. Speaker adaptation methods used in the field of HMM-based automatic speech recognition (ASR) are adopted for this task. In the case of unsupervised speaker adaptation, previous work has used a supplementary set of acoustic models to estimate the transcription of the adaptation data. This paper first presents an approach to the unsupervised speaker adaptation task for HMM-based speech synthesis models which avoids the need for such supplementary acoustic models. This is achieved by defining a mapping between HMM-based synthesis models and ASR-style models, via a two-pass decision tree construction process. Second, it is shown that this mapping also enables unsupervised adaptation of HMM-based speech synthesis models without the need to perform linguistic analysis of the estimated transcription of the adaptation data. Third, this paper demonstrates how this technique lends itself to the task of unsupervised cross-lingual adaptation of HMM-based speech synthesis models, and explains the advantages of such an approach. Finally, listener evaluations reveal that the proposed unsupervised adaptation methods deliver performance approaching that of supervised adaptation.
Abstract:
To address the poor throughput performance of the Automatic Repeat reQuest (ARQ) mechanism in wireless broadcast systems, a broadcast retransmission scheme based on random network coding, RNC-ARQ, is proposed. The broadcast node uses random linear codes to combine all lost packets into coded retransmissions. A receiving node can recover the original data by decoding once it has accumulated a sufficient number of coded packets. This scheme effectively reduces the number of retransmissions and improves wireless broadcast throughput. For a burst-error channel described by the Gilbert-Elliott model, a multi-state Markov model combining the channel state and the node's reception processing is established, and from it a closed-form expression for the throughput of the RNC-ARQ scheme is derived. Finally, the NS-2 simulator is used to evaluate the performance of RNC-ARQ; the results show that, under burst-error channels, the throughput of the random-network-coding-based retransmission scheme is superior to both the traditional selective-repeat ARQ scheme and the XOR-coding-based retransmission scheme.
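A minimal simulation of the Gilbert-Elliott burst-error channel underlying this kind of analysis might look like the sketch below; the transition and per-state loss probabilities are hypothetical, chosen only to illustrate the two-state Markov structure:

```python
import random

def gilbert_elliott(n_packets, p_gb=0.05, p_bg=0.4,
                    e_good=0.01, e_bad=0.3, seed=1):
    """Simulate packet losses on a two-state Gilbert-Elliott channel.
    p_gb / p_bg are the Good->Bad / Bad->Good transition probabilities;
    e_good / e_bad are the per-state loss probabilities (all hypothetical)."""
    rnd = random.Random(seed)
    state = "G"
    losses = []
    for _ in range(n_packets):
        e = e_good if state == "G" else e_bad
        losses.append(rnd.random() < e)          # packet lost this slot?
        flip = p_gb if state == "G" else p_bg
        if rnd.random() < flip:                  # Markov state transition
            state = "B" if state == "G" else "G"
    return losses

loss = gilbert_elliott(10000)
rate = sum(loss) / len(loss)
# stationary check: pi_B = p_gb / (p_gb + p_bg), so the long-run loss
# rate should be near (1 - pi_B) * e_good + pi_B * e_bad ≈ 0.042
```

The same two-state chain can be composed with a model of the receiver's decoding state to build the multi-state Markov model the abstract describes.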
Abstract:
The goal of this thesis is to apply the computational approach to motor learning, i.e., describe the constraints that enable performance improvement with experience and also the constraints that must be satisfied by a motor learning system, describe what is being computed in order to achieve learning, and why it is being computed. The particular tasks used to assess motor learning are loaded and unloaded free arm movement, and the thesis includes work on rigid body load estimation, arm model estimation, optimal filtering for model parameter estimation, and trajectory learning from practice. Learning algorithms have been developed and implemented in the context of robot arm control. The thesis demonstrates some of the roles of knowledge in learning. Powerful generalizations can be made on the basis of knowledge of system structure, as is demonstrated in the load and arm model estimation algorithms. Improving the performance of parameter estimation algorithms used in learning involves knowledge of the measurement noise characteristics, as is shown in the derivation of optimal filters. Using trajectory errors to correct commands requires knowledge of how command errors are transformed into performance errors, i.e., an accurate model of the dynamics of the controlled system, as is demonstrated in the trajectory learning work. The performance demonstrated by the algorithms developed in this thesis should be compared with algorithms that use less knowledge, such as table based schemes to learn arm dynamics, previous single trajectory learning algorithms, and much of traditional adaptive control.
Abstract:
The number of hospital admissions in England due to heart failure is projected to increase by over 50% during the next 25 years. This will incur greater pressures on hospital managers to allocate resources in an effective manner. A reliable indicator for measuring the quantity of resources consumed by hospital patients is their length of stay (LOS) in care. This paper proposes modelling the length of time heart failure patients spend in hospital using a special type of Markov model, where the flow of patients through hospital can be thought of as consisting of three stages of care—short-, medium- and longer-term care. If it is assumed that new admissions into the ward are replacements for discharges, such a model may be used to investigate the case-mix of patients in hospital and the expected patient turnover during some specified period of time. An example is illustrated by considering hospital admissions to a Belfast hospital in Northern Ireland, between 2000 and 2004.
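The expected length of stay in a staged Markov model of this kind follows from the fundamental matrix of the absorbing chain. The three-stage structure matches the abstract, but the daily transition probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical daily transition probabilities between the three stages of
# care (short-, medium-, longer-term), with discharge as the absorbing
# state. Rows need not sum to 1: the remainder is the daily discharge
# probability from that stage.
Q = np.array([[0.70, 0.10, 0.00],    # short  -> short / medium
              [0.00, 0.90, 0.05],    # medium -> medium / long
              [0.00, 0.00, 0.97]])   # long   -> long

N = np.linalg.inv(np.eye(3) - Q)     # fundamental matrix of the chain
expected_los = N.sum(axis=1)         # expected days in hospital by entry stage
```

With these illustrative numbers a patient admitted to short-term care spends about 12.2 days in hospital on average, while a patient already in longer-term care expects a much longer stay; this is the sense in which the model captures case-mix and turnover.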
Abstract:
Coxian phase-type distributions are a special type of Markov model that describes duration until an event occurs in terms of a process consisting of a sequence of latent phases. This paper considers the use of Coxian phase-type distributions for modelling patient duration of stay for the elderly in hospital and investigates the potential for using the resulting distribution as a classifying variable to identify common characteristics between different groups of patients according to their (anticipated) length of stay in hospital. The identification of common characteristics for patient length of stay groups would offer hospital managers and clinicians possible insights into the overall management and bed allocation of the hospital wards.
Abstract:
Coxian phase-type distributions are a special type of Markov model that can be used to represent survival times in terms of phases through which an individual may progress until they eventually leave the system completely. Previous research has considered the Coxian phase-type distribution to be ideal in representing patient survival in hospital. However, problems exist in fitting the distributions. This paper investigates the problems that arise with the fitting process by simulating various Coxian phase-type models for the representation of patient survival and examining the estimated parameter values and eigenvalues obtained. The results indicate that numerical methods previously used for fitting the model parameters do not always converge. An alternative technique is therefore considered. All methods are influenced by the choice of initial parameter values. The investigation uses a data set of 1439 elderly patients and models their survival time, the length of time they spend in a UK hospital.
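Evaluating a Coxian phase-type model reduces to a matrix exponential of its sub-generator. The sketch below computes the survival function S(t) = p·exp(Tt)·1 for a small Coxian distribution; the three-phase structure and all rates are hypothetical, and the fitting difficulties the abstract discusses arise when such parameters are estimated from data rather than fixed as here:

```python
import numpy as np
from scipy.linalg import expm

def coxian_survival(lmbda, mu, t):
    """Survival function of a Coxian phase-type distribution.
    lmbda[i] is the rate of moving from phase i to phase i+1 (the last
    entry is unused); mu[i] is the absorption (exit) rate from phase i."""
    n = len(mu)
    T = np.zeros((n, n))             # sub-generator over the latent phases
    for i in range(n):
        out = mu[i] + (lmbda[i] if i < n - 1 else 0.0)
        T[i, i] = -out
        if i < n - 1:
            T[i, i + 1] = lmbda[i]   # sequential progression only
    p = np.zeros(n)
    p[0] = 1.0                       # every individual starts in phase 1
    return float(p @ expm(T * t) @ np.ones(n))

# hypothetical 3-phase model (rates per unit time)
S1 = coxian_survival([0.4, 0.2, 0.0], [0.10, 0.05, 0.30], 1.0)
S5 = coxian_survival([0.4, 0.2, 0.0], [0.10, 0.05, 0.30], 5.0)
```

Maximum-likelihood fitting optimises `lmbda` and `mu` against observed durations; the near-flat likelihood surface and eigenvalue structure of T are what make that optimisation sensitive to initial values, as the abstract reports.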
Abstract:
In this paper, we present a new approach to visual speech recognition which improves contextual modelling by combining Inter-Frame Dependent and Hidden Markov Models. This approach captures contextual information in visual speech that may be lost using a Hidden Markov Model alone. We apply contextual modelling to a large speaker-independent isolated digit recognition task, and compare our approach to two commonly adopted feature-based techniques for incorporating speech dynamics. Results are presented from baseline feature-based systems and the combined modelling technique. We illustrate that both of these techniques achieve similar levels of performance when used independently. However, significant improvements in performance can be achieved through a combination of the two. In particular, we report a relative Word Error Rate improvement in excess of 17% compared with our best baseline system.
Abstract:
This paper presents a new algorithm for learning the structure of a special type of Bayesian network. The conditional phase-type (C-Ph) distribution is a Bayesian network that models the probabilistic causal relationships between a skewed continuous variable, modelled by the Coxian phase-type distribution, a special type of Markov model, and a set of interacting discrete variables. The algorithm takes a dataset as input and produces the structure, parameters and graphical representations of the fit of the C-Ph distribution as output. The algorithm, which uses a greedy-search technique and has been implemented in MATLAB, is evaluated using a simulated data set consisting of 20,000 cases. The results show that the original C-Ph distribution is recaptured and the fit of the network to the data is discussed.
Abstract:
BACKGROUND: Age-related macular degeneration is the most common cause of sight impairment in the UK. In neovascular age-related macular degeneration (nAMD), vision worsens rapidly (over weeks) due to abnormal blood vessels developing that leak fluid and blood at the macula.
OBJECTIVES: To determine the optimal role of optical coherence tomography (OCT) in diagnosing people newly presenting with suspected nAMD and monitoring those previously diagnosed with the disease.
DATA SOURCES: Databases searched: MEDLINE (1946 to March 2013), MEDLINE In-Process & Other Non-Indexed Citations (March 2013), EMBASE (1988 to March 2013), Biosciences Information Service (1995 to March 2013), Science Citation Index (1995 to March 2013), The Cochrane Library (Issue 2 2013), Database of Abstracts of Reviews of Effects (inception to March 2013), Medion (inception to March 2013), Health Technology Assessment database (inception to March 2013).
REVIEW METHODS: Types of studies: direct/indirect studies reporting diagnostic outcomes.
INDEX TEST: time domain optical coherence tomography (TD-OCT) or spectral domain optical coherence tomography (SD-OCT).
COMPARATORS: clinical evaluation, visual acuity, Amsler grid, colour fundus photographs, infrared reflectance, red-free images/blue reflectance, fundus autofluorescence imaging, indocyanine green angiography, preferential hyperacuity perimetry, microperimetry. Reference standard: fundus fluorescein angiography (FFA). Risk of bias was assessed using quality assessment of diagnostic accuracy studies, version 2. Meta-analysis models were fitted using hierarchical summary receiver operating characteristic curves. A Markov model was developed (65-year-old cohort, nAMD prevalence 70%), with nine strategies for diagnosis and/or monitoring, and cost-utility analysis conducted. NHS and Personal Social Services perspective was adopted. Costs (2011/12 prices) and quality-adjusted life-years (QALYs) were discounted (3.5%). Deterministic and probabilistic sensitivity analyses were performed.
RESULTS: In pooled estimates of diagnostic studies (all TD-OCT), sensitivity and specificity [95% confidence interval (CI)] was 88% (46% to 98%) and 78% (64% to 88%) respectively. For monitoring, the pooled sensitivity and specificity (95% CI) was 85% (72% to 93%) and 48% (30% to 67%) respectively. The FFA for diagnosis and nurse-technician-led monitoring strategy had the lowest cost (£39,769; QALYs 10.473) and dominated all others except FFA for diagnosis and ophthalmologist-led monitoring (£44,649; QALYs 10.575; incremental cost-effectiveness ratio £47,768). The least costly strategy had a 46.4% probability of being cost-effective at £30,000 willingness-to-pay threshold.
LIMITATIONS: Very few studies provided sufficient information for inclusion in meta-analyses. Only a few studies reported other tests; for some tests no studies were identified. The modelling was hampered by a lack of data on the diagnostic accuracy of strategies involving several tests.
CONCLUSIONS: Based on a small body of evidence of variable quality, OCT had high sensitivity and moderate specificity for diagnosis, and relatively high sensitivity but low specificity for monitoring. Strategies involving OCT alone for diagnosis and/or monitoring were unlikely to be cost-effective. Further research is required on (i) the performance of SD-OCT compared with FFA, especially for monitoring but also for diagnosis; (ii) the performance of strategies involving combinations/sequences of tests, for diagnosis and monitoring; (iii) the likelihood of active and inactive nAMD becoming inactive or active respectively; and (iv) assessment of treatment-associated utility weights (e.g. decrements), through a preference-based study.
STUDY REGISTRATION: This study is registered as PROSPERO CRD42012001930.
FUNDING: The National Institute for Health Research Health Technology Assessment programme.
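The pooled diagnostic estimates above (sensitivity 88%, specificity 78%) combine with the model's 70% prevalence via Bayes' rule to give the predictive values that drive such a model's strategy comparisons:

```python
def predictive_values(sens, spec, prev):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sens * prev                 # true positives per unit population
    fp = (1 - spec) * (1 - prev)     # false positives
    fn = (1 - sens) * prev           # false negatives
    tn = spec * (1 - prev)           # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# pooled diagnostic estimates from the review, prevalence from the model
ppv, npv = predictive_values(sens=0.88, spec=0.78, prev=0.70)
# ppv ≈ 0.903, npv ≈ 0.736
```

The high prevalence makes a positive OCT result quite reliable here, but the modest negative predictive value illustrates why OCT-alone strategies fared poorly in the cost-effectiveness results.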
Abstract:
To cope with the rapid growth of multimedia applications that require dynamic levels of quality of service (QoS), cross-layer (CL) design, in which multiple protocol layers are jointly combined, has been considered as a way to provide diverse QoS provisions for mobile multimedia networks. However, there is no general mathematical framework for modelling such a CL scheme in wireless networks with different types of multimedia classes. In this paper, to overcome this shortcoming, we propose a novel CL design for integrated real-time/non-real-time traffic with strict preemptive priority, via a finite-state Markov chain. The main strategy of the CL scheme is to design a Markov model that explicitly includes adaptive modulation and coding at the physical layer, queuing at the data link layer, and the bursty nature of multimedia traffic classes at the application layer. Using this Markov model, several important performance metrics, namely packet loss rate, delay, and throughput, are examined. In addition, our proposed framework is exploited in various multimedia applications, for example end-to-end real-time video streaming and CL optimization, which require priority-based QoS adaptation for different applications. More importantly, the CL framework reveals important guidelines for optimizing network performance.
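Metrics such as packet loss rate fall out of the stationary distribution of the finite-state chain. The sketch below uses a toy birth-death buffer chain as a stand-in for the full joint AMC/queue/traffic model; the buffer size and arrival/service probabilities are hypothetical:

```python
import numpy as np

def stationary(P):
    """Stationary distribution pi of a finite Markov chain, solved from
    pi P = pi together with the normalisation sum(pi) = 1."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# toy chain: state = buffer occupancy 0..K; per-slot arrival and
# service probabilities stand in for the joint cross-layer dynamics
K, a, s = 4, 0.3, 0.4
P = np.zeros((K + 1, K + 1))
for i in range(K + 1):
    up = a * (1 - s) if i < K else 0.0    # arrival without a departure
    down = s * (1 - a) if i > 0 else 0.0  # departure without an arrival
    P[i, min(i + 1, K)] += up
    P[i, max(i - 1, 0)] += down
    P[i, i] += 1.0 - up - down
pi = stationary(P)
loss_rate = a * pi[K]   # packets arriving to a full buffer are dropped
```

Delay and throughput follow similarly from `pi` (e.g. mean queue length via Little's law), which is how a single stationary solve yields all three metrics the abstract examines.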