876 results for Markov Model Estimation
Abstract:
The general assumption under which the X-bar chart is designed is that the process mean has a constant in-control value. However, there are situations in which the process mean wanders. When it wanders according to a first-order autoregressive (AR(1)) model, a complex approach combining Markov chains and integral equation methods is used to evaluate the properties of the X-bar chart. In this paper, we propose the use of a pure Markov chain approach to study the performance of the X-bar chart. The performance of the X-bar chart with variable parameters and the X-bar chart with double sampling are compared. (C) 2011 Elsevier B.V. All rights reserved.
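As a rough illustration of the pure Markov chain idea, the sketch below discretizes the wandering AR(1) mean into states, builds a transition matrix, and computes the average run length (ARL) of the X-bar chart. The AR(1) coefficient, control-limit width, sample size, and state grid are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.stats import norm

# Minimal sketch: ARL of an X-bar chart when the process mean follows an
# AR(1) model, evaluated with a discretized Markov chain.
# All numerical settings below are illustrative assumptions.
phi, sigma_mu = 0.5, 0.5      # AR(1) coefficient and innovation std of the mean
n, L = 4, 3.0                 # sample size and control-limit width
m = 101                       # number of discrete states for the wandering mean
grid = np.linspace(-3, 3, m)  # discretized values of the process mean
step = grid[1] - grid[0]

# Transition matrix of the discretized AR(1) mean: mu_t = phi*mu_{t-1} + e_t
P = np.array([norm.pdf(grid, loc=phi * mu, scale=sigma_mu) * step for mu in grid])
P /= P.sum(axis=1, keepdims=True)

# Probability that X-bar stays inside the limits when the mean is at each state
# (process standard deviation assumed to be 1, so X-bar std = 1/sqrt(n))
q = norm.cdf(L - np.sqrt(n) * grid) - norm.cdf(-L - np.sqrt(n) * grid)

# Substochastic matrix of "no signal" transitions; ARL from each starting state
Q = P * q[np.newaxis, :]
arl = np.linalg.solve(np.eye(m) - Q, np.ones(m))
print("ARL starting from an in-control (zero) mean:", arl[m // 2])
```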
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
A procedure for calculating the refrigerant mass flow rate is implemented in the distributed numerical model to simulate the flow in finned-tube coil dry-expansion evaporators, usually found in refrigeration and air-conditioning systems. Two-phase refrigerant flow inside the tubes is assumed to be one-dimensional, unsteady, and homogeneous. The model accounts for the refrigerant pressure drop and the moisture condensation from the air flowing over the external surface of the tubes. The results obtained are the distributions of refrigerant velocity, temperature, and void fraction, tube-wall temperature, air temperature, and absolute humidity. The finite volume method is used to discretize the governing equations. Additionally, given the operating conditions and the geometric parameters, the model allows the calculation of the refrigerant mass flow rate. The value of the mass flow rate is computed by parameter estimation with the Levenberg-Marquardt minimization method. In order to validate the developed model, results obtained using HFC-134a as the refrigerant are compared with data available in the literature.
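To illustrate only the estimation step, the sketch below fits a single parameter (the mass flow rate) by Levenberg-Marquardt least squares with SciPy. The simulate() function is a hypothetical placeholder for the evaporator model described above, and all numerical values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares

# Generic sketch of the parameter-estimation step: find the mass flow rate
# that minimizes the residual between the model's output and reference data,
# using Levenberg-Marquardt. `simulate` is a hypothetical stand-in for the
# finned-tube evaporator model described in the abstract.
def simulate(m_dot, positions):
    # placeholder model: outlet temperature profile as a function of mass flow
    return 260.0 + 5.0 * np.exp(-positions / (10.0 * m_dot))

positions = np.linspace(0.0, 5.0, 20)            # tube positions [m] (illustrative)
reference = simulate(0.012, positions)           # "measured" data for the demo

def residuals(params):
    return simulate(params[0], positions) - reference

fit = least_squares(residuals, x0=[0.02], method="lm")
print("estimated mass flow rate [kg/s]:", fit.x[0])
```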
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
In this work we study Hidden Markov Models with finite as well as general state spaces. In the finite case, the forward and backward algorithms are considered and the probability of a given observed sequence is computed. Next, we use the EM algorithm to estimate the model parameters. In the general case, kernel estimators are used to build a sequence of estimators that converges in L1-norm to the density function of the observable process.
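A minimal sketch of the forward algorithm for a finite-state HMM, computing the probability of an observed sequence. The transition, emission, and initial-distribution matrices below are illustrative assumptions, not taken from the text.

```python
import numpy as np

# Forward algorithm for a finite-state HMM: probability of an observed sequence.
A = np.array([[0.7, 0.3],     # state transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],     # emission probabilities (rows: states, cols: symbols)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])     # initial state distribution
obs = [0, 1, 1, 0]            # observed symbol sequence

def forward_probability(A, B, pi, obs):
    alpha = pi * B[:, obs[0]]                 # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]         # alpha_t(j) = sum_i alpha_{t-1}(i) a_ij b_j(o_t)
    return alpha.sum()                        # P(o_1, ..., o_T)

print("P(observed sequence) =", forward_probability(A, B, pi, obs))
```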
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
A Bayesian binary regression model is developed to predict in-hospital death in patients with acute myocardial infarction. Markov chain Monte Carlo (MCMC) methods are used for inference and validation. A model-building strategy based on the Bayes factor is proposed, and validation aspects are discussed extensively in this article, including the posterior distribution of the concordance index and residual analysis. Determining risk factors based on variables available at the patient's arrival at the hospital is very important for decision making about the course of treatment. The identified model proves to be highly reliable and accurate, with a correct classification rate of 88% and a concordance index of 83%.
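The abstract does not include the data or the exact sampler, so the sketch below only illustrates the kind of MCMC machinery it refers to: a random-walk Metropolis sampler for logistic-regression coefficients under a vague normal prior, on simulated data.

```python
import numpy as np

# Random-walk Metropolis for Bayesian logistic regression (illustrative sketch).
rng = np.random.default_rng(0)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
true_beta = np.array([-1.0, 0.8, -0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))   # simulated outcomes

def log_posterior(beta):
    eta = X @ beta
    loglik = np.sum(y * eta - np.log1p(np.exp(eta)))
    logprior = -0.5 * np.sum(beta ** 2) / 100.0              # N(0, 100) prior
    return loglik + logprior

beta = np.zeros(p)
samples = []
for it in range(5000):
    proposal = beta + rng.normal(scale=0.1, size=p)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(beta):
        beta = proposal
    samples.append(beta)
print("posterior means:", np.mean(samples[1000:], axis=0))
```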
Abstract:
Based on experimental population profiles of strains of the phorid fly Megaselia scalaris, the minimum number of sample profiles that must be replicated, through a bootstrap simulation process, to obtain a reliable estimate of the mean population profile was determined, and standard-error estimates are presented as a measure of the precision of the simulations performed. The original data come from experimental populations founded with the SR and R4 strains, with three replicates each, maintained for 33 weeks by the serial transfer technique in a constant-temperature chamber (25 ± 1.0°C). The variable used was population size, and the model adopted for each profile was that of a stationary stochastic process. Through the simulations, the profiles of three experimental populations were amplified, thereby determining the minimum sample size. With the sample size fixed, bootstrap simulations were performed to construct confidence intervals and to compare the mean population profiles of the two strains. The results show that the mean values begin to stabilize at a sample size of 50.
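A minimal sketch of the bootstrap procedure described above, on simulated weekly population sizes: resampled means are used to track how the standard error behaves as the number of replicates grows and to build a percentile confidence interval. The data and replicate counts are illustrative assumptions.

```python
import numpy as np

# Bootstrap sketch: resample population sizes with replacement, use the
# resampled means to gauge precision (standard error) and build a percentile
# confidence interval. The profile below is a simulated placeholder.
rng = np.random.default_rng(1)
profile = rng.normal(loc=500.0, scale=60.0, size=33)   # 33 weekly census values

def bootstrap_means(data, n_boot):
    return np.array([rng.choice(data, size=len(data), replace=True).mean()
                     for _ in range(n_boot)])

for n_boot in (10, 25, 50, 100):
    means = bootstrap_means(profile, n_boot)
    print(f"n_boot={n_boot:4d}  mean={means.mean():7.2f}  SE={means.std(ddof=1):5.2f}")

# 95% percentile confidence interval with a chosen number of replicates
means = bootstrap_means(profile, 50)
print("95% CI:", np.percentile(means, [2.5, 97.5]))
```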
Abstract:
A segregation analysis with Bayesian inference was used to estimate variance components and to verify the presence of major genes influencing two carcass traits, intramuscular fat (IMF, %) and backfat thickness (BT, mm), and one growth trait, weight gain (g/day) from 25 to 90 kg of live weight (WG). Information from 1,257 animals from an F2 design, obtained by crossing Meishan boars with Large White and Landrace sows, was used. In animal breeding, finite polygenic models (FPM) can be an alternative to infinitesimal polygenic models (IPM) for the genetic evaluation of quantitative traits using complex pedigrees. IPM, FPM, and IPM combined with FPM were tested empirically to estimate variance components and the number of genes in the FPM. To estimate the marginal posterior means of the variance components and parameters, a Bayesian methodology was used, based on Markov chain Monte Carlo (MCMC) algorithms via the Gibbs sampler and the Reversible Jump sampler (Metropolis-Hastings). Based on the results obtained, four major genes could be evidenced, two for IMF and two for BT. For BT, the major gene explained most of the genetic variation, whereas for IMF the major gene significantly reduced the polygenic variation. For the variation of WG, it was not possible to determine the influence of a major gene. The heritabilities estimated by fitting the IPM for IMF, BT, and WG were 0.37, 0.24, and 0.37, respectively. Future studies based on this experiment, using molecular markers to map the major genes affecting mainly IMF and BT, may be successful.
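The full analysis combines polygenic and major-gene models with Gibbs and Reversible Jump sampling; the sketch below illustrates only the basic Gibbs-sampling step for variance components, using a much simpler one-way random-effects model with simulated data. Family structure, priors, and data are all illustrative assumptions.

```python
import numpy as np

# Heavily simplified Gibbs sampler for variance components in a one-way
# random-effects model y_ij = mu + u_i + e_ij (a stand-in for the far richer
# polygenic / major-gene models of the paper).
rng = np.random.default_rng(2)
q, n_per = 30, 10                                   # families and records per family
u_true = rng.normal(scale=2.0, size=q)
y = 50.0 + np.repeat(u_true, n_per) + rng.normal(scale=3.0, size=q * n_per)
fam = np.repeat(np.arange(q), n_per)

mu, u, s2u, s2e = y.mean(), np.zeros(q), 1.0, 1.0
keep = []
for it in range(3000):
    # family effects
    for i in range(q):
        yi = y[fam == i] - mu
        prec = len(yi) / s2e + 1.0 / s2u
        u[i] = rng.normal((yi.sum() / s2e) / prec, np.sqrt(1.0 / prec))
    # overall mean (flat prior)
    resid = y - u[fam]
    mu = rng.normal(resid.mean(), np.sqrt(s2e / len(y)))
    # variance components (weak inverse-gamma priors)
    s2u = 1.0 / rng.gamma(0.01 + q / 2.0, 1.0 / (0.01 + 0.5 * np.sum(u ** 2)))
    e = y - mu - u[fam]
    s2e = 1.0 / rng.gamma(0.01 + len(y) / 2.0, 1.0 / (0.01 + 0.5 * np.sum(e ** 2)))
    if it >= 1000:
        keep.append((s2u, s2e))
print("posterior mean of (sigma_u^2, sigma_e^2):", np.mean(keep, axis=0))
```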
Abstract:
The objective of this work was to evaluate Nelore beef cattle growth curve parameters using the Von Bertalanffy function in a nested Bayesian procedure that allowed estimation of the joint posterior distribution of the growth curve parameters, their (co)variance components, and the environmental and additive genetic components affecting them. A hierarchical model was applied; each individual had a growth trajectory described by the nonlinear function, and each parameter of this function was considered to be affected by genetic and environmental effects that were described by an animal model. Random samples of the posterior distributions were drawn using Gibbs sampling and Metropolis-Hastings algorithms. The data set consisted of a total of 145,961 BW records from 15,386 animals. Even though the curve parameters were estimated for animals with few records, given that the information from related animals and the structure of systematic effects were considered in the curve fitting, all predicted mature BW were suitable. A large additive genetic variance for mature BW was observed. The parameter a of the growth curve, which represents asymptotic adult BW, could be used as a selection criterion to control increases in adult BW when selecting for growth rate. The effect of maternal environment on growth was carried through to maturity and should be considered when evaluating adult BW. Other growth curve parameters showed small additive genetic and maternal effects. Mature BW and parameter k, related to the slope of the curve, presented a large, positive genetic correlation. The results indicated that selection for growth rate would increase adult BW without substantially changing the shape of the growth curve. Selection to change the slope of the growth curve without modifying adult BW would be inefficient because their genetic correlation is large. However, adult BW could be considered in a selection index with its corresponding economic weight to improve the overall efficiency of beef cattle production.
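As a sketch of the first stage of the hierarchy only, the code below fits the Von Bertalanffy function to one animal's simulated weight records by nonlinear least squares. In the paper each curve parameter is in turn modelled with genetic and environmental effects and sampled with MCMC; the ages, weights, and starting values here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# First stage of the hierarchy: Von Bertalanffy growth curve for one animal.
def von_bertalanffy(t, a, b, k):
    # a: asymptotic adult weight, b: integration constant, k: maturing rate
    return a * (1.0 - b * np.exp(-k * t)) ** 3

rng = np.random.default_rng(3)
age_days = np.linspace(0, 1500, 25)
weights = von_bertalanffy(age_days, 480.0, 0.6, 0.0025) + rng.normal(scale=10.0, size=25)

params, _ = curve_fit(von_bertalanffy, age_days, weights, p0=[450.0, 0.5, 0.002])
print("estimated (a, b, k):", params)
```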
Abstract:
The pathogenicity and immunogenicity of six recently isolated Paracoccidioides brasiliensis samples derived from patients presenting distinct and well defined clinical forms of paracoccidioidomycosis (PCM) were compared as to their virulence, tropism to different organs, and ability to induce specific cellular and humoral immune responses in susceptible (B10.A) inbred mice. Isolates Pb44 and Pb47 were obtained from acute cases, Pb50 from a chronic severe form, Pb45 from a chronic moderate case, and both Pb56 and Pb57 from chronic mild forms of PCM. Pathogenicity and tropism of each fungal sample were evaluated by LD50 estimation, examination of gross lesions on various organs at 2, 4, 12 and 16 weeks post-infection, and by colony-forming unit (CFU) counts in the lungs at week 16 post-infection of mice. Fungal tropism in human PCM and in B10.A mice was always dissociated. A well defined relationship between the virulence of the fungal sample and the clinical findings of the corresponding patient was not evident, although a tendency to higher LD50 and less intense paracoccidioidal lesions was observed in mice infected with Pb56 and Pb57. The specific DTH response patterns varied according to the infecting sample, but positive DTH reactions at the beginning of the infection and a tendency to anergy or low DTH responses at week 12 and/or week 16 post-infection were always observed. A correspondence between the DTH response in humans and in mice was noticeable only when the isolates from the most benign cases (Pb56 and Pb57) were considered. The specific antibody patterns in mice and in the corresponding patients were also not analogous. Collectively, these results indicate that an association between fungal pathogenicity and immunogenicity in the human disease and in susceptible mice was discernible only when isolates obtained from very mild cases (Pb56 and Pb57) were considered.
Abstract:
In the present work, a method for rotor support stiffness estimation via a model updating process using sensitivity analysis is presented. The method uses the sensitivity of the eigenvalues with respect to variations in the rotor support stiffnesses to adjust the model, minimizing the difference between reference eigenvalues and the eigenvalues obtained from the mathematical model with previously adopted support bearing stiffness values. The mathematical model is developed by the finite element method, and the adjustment converges through an iterative process. The performance and robustness of the method have been analyzed through a numerical example.
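A minimal sketch of this kind of sensitivity-based updating, assuming a two-degree-of-freedom mass-spring system as a stand-in for the finite-element rotor model: eigenvalue sensitivities are approximated by finite differences and the stiffnesses are updated iteratively by least squares until the model eigenvalues match the reference ones. All numerical values are illustrative assumptions.

```python
import numpy as np

# Adjust two support stiffnesses so that the model's eigenvalues match
# reference eigenvalues, using a finite-difference sensitivity (Jacobian)
# matrix and an iterative least-squares update.
M = np.diag([1.0, 1.5])                                   # masses [kg]

def eigenvalues(k):
    K = np.array([[k[0] + k[1], -k[1]],
                  [-k[1],        k[1]]])                   # stiffness matrix [N/m]
    return np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)

k_ref = np.array([2.0e4, 1.0e4])                           # "true" support stiffnesses
lam_ref = eigenvalues(k_ref)                               # reference eigenvalues

k = np.array([1.0e4, 5.0e3])                               # initial guess
for _ in range(20):
    lam = eigenvalues(k)
    # sensitivity matrix d(lambda)/d(k) by forward finite differences
    S = np.column_stack([(eigenvalues(k + dk) - lam) / 1.0
                         for dk in np.eye(2) * 1.0])
    k = k + np.linalg.lstsq(S, lam_ref - lam, rcond=None)[0]
print("updated stiffnesses:", k, " residual:", eigenvalues(k) - lam_ref)
```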
Abstract:
Studies have been carried out on heat transfer in a packed bed of glass beads percolated by air at moderate flow rates. A rigorous statistical analysis of the experimental data was carried out, and the traditional two-parameter model was used to represent them. The parameters estimated by the least squares method were the effective radial thermal conductivity, k, and the wall coefficient, h. The results were evaluated with respect to the bed inlet boundary temperature, T0, the number of terms of the solution series, and the number of experimental points used in the estimation. The results indicated that a small difference in T0 was sufficient to promote great modifications in the estimated parameters and in the statistical properties of the model. The use of replicates at points of high parametric information of the model improved the results, although analysis of the residuals led to the rejection of this alternative. In order to evaluate the non-linearity of the model, the Bates and Watts (1988) curvature measures and the Box (1971) biases of the coefficients were calculated. The intrinsic curvatures of the model (IN) tend to be concentrated at low bed heights, and those due to parameter effects (PE) are spread all over the bed. The Box biases indicated both parameters as responsible for the PE curvatures, h being somewhat more problematic. (C) 2000 Elsevier B.V. Ltd. All rights reserved.
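Purely to illustrate the least-squares estimation and the parameter standard errors involved, the sketch below fits a generic two-parameter radial temperature profile to simulated data. The placeholder function stands in for the Bessel-series solution of the actual two-parameter model in k and h, and all numerical values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic sketch of the estimation step: two parameters are fitted to radial
# temperature measurements by least squares and approximate standard errors
# are read off the covariance matrix. The model function is a simple
# placeholder, NOT the Bessel-series solution in k and h used in the paper.
def temperature_profile(r, t_wall, delta_t):
    # placeholder radial profile: wall temperature plus a parabolic core excess
    return t_wall + delta_t * (1.0 - (r / 0.05) ** 2)

rng = np.random.default_rng(4)
r = np.linspace(0.0, 0.05, 15)                      # radial positions [m]
data = temperature_profile(r, 55.0, 25.0) + rng.normal(scale=0.3, size=r.size)

params, cov = curve_fit(temperature_profile, r, data, p0=[50.0, 20.0])
print("estimated parameters:", params)
print("approximate standard errors:", np.sqrt(np.diag(cov)))
```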
Abstract:
When the X-bar chart is in use, samples are regularly taken from the process, and their means are plotted on the chart. In some cases, it is too expensive to obtain the X values, but not the values of a correlated variable Y. This paper presents a model for the economic design of a two-stage control chart, that is, a control chart based on both a performance (X) and a surrogate (Y) variable. The process is monitored by the surrogate variable until it signals an out-of-control behavior, and then a switch is made to the X-bar chart. The X-bar chart is built with central, warning, and action regions. If an X sample mean falls in the central region, the process surveillance returns to the Y-bar chart. Otherwise, the process remains under the X-bar chart's surveillance until an X-bar sample mean falls outside the control limits. The search for an assignable cause is undertaken when the performance variable signals an out-of-control behavior. In this way, the two variables are used in an alternating fashion. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A study is performed to examine the economic advantages of using performance and surrogate variables. (C) 2003 Elsevier B.V. All rights reserved.
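A toy sketch of the Markov-chain bookkeeping behind such a two-stage scheme: two transient states (surveillance with the Y chart, surveillance with the X-bar chart) and an absorbing alarm state. The region probabilities below are illustrative in-control assumptions, not values from the paper; the expected number of samples to an alarm follows from the fundamental matrix.

```python
import numpy as np

# Two-stage (Y then X-bar) surveillance as an absorbing Markov chain.
p_y_signal = 0.05     # P(Y chart signals and control switches to X-bar)
p_x_central = 0.60    # P(X-bar falls in central region -> back to Y chart)
p_x_action = 0.01     # P(X-bar falls outside control limits -> alarm)
p_x_warning = 1.0 - p_x_central - p_x_action   # stay under X-bar surveillance

# Transient states: 0 = Y surveillance, 1 = X-bar surveillance
Q = np.array([[1.0 - p_y_signal, p_y_signal],
              [p_x_central,      p_x_warning]])
# Expected number of samples until an alarm, starting from each state
expected_samples = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print("expected samples to alarm, starting on the Y chart:", expected_samples[0])
```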
Abstract:
This paper presents an economic design of X-bar control charts with variable sample sizes, variable sampling intervals, and variable control limits. The sample size n, the sampling interval h, and the control limit coefficient k vary between minimum and maximum values, tightening or relaxing the control. The control is relaxed when an X-bar value falls close to the target and is tightened when an X-bar value falls far from the target. A cost model is constructed that involves the cost of false alarms, the cost of finding and eliminating the assignable cause, the cost associated with production in an out-of-control state, and the cost of sampling and testing. The assumption of an exponential distribution to describe the length of time the process remains in control allows the application of the Markov chain approach for developing the cost function. A comprehensive study is performed to examine the economic advantages of varying the X-bar chart parameters.
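For orientation only, the sketch below evaluates a simplified, fixed-parameter (Duncan-style) hourly cost for an X-bar chart design. The cost and process values are illustrative assumptions; the paper's model additionally lets n, h, and k adapt between minimum and maximum values.

```python
import numpy as np
from scipy.stats import norm

# Simplified hourly-cost calculation for a fixed-parameter X-bar chart design.
lam = 0.05        # rate of the exponential time to the assignable cause [1/h]
delta = 1.0       # mean shift size, in process standard deviations
C_false, C_find, C_out, C_sample = 50.0, 25.0, 100.0, 0.5   # cost components

def hourly_cost(n, h, k):
    alpha = 2.0 * norm.sf(k)                          # false-alarm probability
    beta = norm.cdf(k - delta * np.sqrt(n)) - norm.cdf(-k - delta * np.sqrt(n))
    arl0, arl1 = 1.0 / alpha, 1.0 / (1.0 - beta)
    in_control = 1.0 / lam                            # expected in-control period
    out_of_control = arl1 * h                         # expected detection delay
    cycle_time = in_control + out_of_control
    false_alarms = in_control / (arl0 * h)            # expected false alarms per cycle
    cycle_cost = (C_false * false_alarms + C_find
                  + C_out * out_of_control
                  + C_sample * n * cycle_time / h)
    return cycle_cost / cycle_time

# Compare a loose and a tight design
for n, h, k in [(4, 1.0, 3.0), (6, 0.5, 2.5)]:
    print(f"n={n}, h={h}, k={k}: cost per hour = {hourly_cost(n, h, k):.2f}")
```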