962 results for Estimation par maximum de vraisemblance
Abstract:
Background A pandemic strain of influenza A spread rapidly around the world in 2009, now referred to as pandemic (H1N1) 2009. This study aimed to examine the spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 associated with changes in local socio-environmental conditions from May 7 to December 31, 2009, at a postal area level in Queensland, Australia. Method We used data on laboratory-confirmed H1N1 cases to examine the spatiotemporal dynamics of transmission using a flexible Bayesian, space–time, Susceptible-Infected-Recovered (SIR) modelling approach. The model incorporated parameters describing spatiotemporal variation in H1N1 infection and local socio-environmental factors. Results The weekly transmission rate of pandemic (H1N1) 2009 was negatively associated with the weekly area-mean maximum temperature at a lag of 1 week (LMXT) (posterior mean: −0.341; 95% credible interval (CI): −0.370 to −0.311) and with the socio-economic index for area (SEIFA) (posterior mean: −0.003; 95% CI: −0.004 to −0.001), and was positively associated with the product of LMXT and the weekly area-mean vapour pressure at a lag of 1 week (LVAP) (posterior mean: 0.008; 95% CI: 0.007 to 0.009). There was substantial spatiotemporal variation in the transmission rate of pandemic (H1N1) 2009 across Queensland over the epidemic period. Large random effects in the estimated transmission rates were apparent in remote areas and in some postal areas with a higher proportion of Indigenous residents and smaller overall populations. Conclusions The local SEIFA and local atmospheric conditions were associated with the transmission rate of pandemic (H1N1) 2009. The more populated regions displayed consistent and synchronized epidemics with low average transmission rates; the less populated regions had high average transmission rates with more variation during the H1N1 epidemic period.
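As a purely illustrative sketch (the abstract gives neither the intercept, the link function nor the covariate units, so all inputs below are hypothetical), the reported posterior means combine into a linear predictor for the weekly transmission rate as follows:

```python
# Sketch: combine the reported posterior means into the linear predictor
# for the weekly transmission rate. The intercept and covariate scaling
# are hypothetical; the abstract does not specify them.
def linear_predictor(lmxt, seifa, lvap, intercept=0.0):
    """lmxt: lagged weekly mean max temperature,
    seifa: socio-economic index for area,
    lvap: lagged weekly mean vapour pressure."""
    return (intercept
            - 0.341 * lmxt          # posterior mean for LMXT
            - 0.003 * seifa         # posterior mean for SEIFA
            + 0.008 * lmxt * lvap)  # posterior mean for LMXT x LVAP

print(linear_predictor(lmxt=28.0, seifa=1000.0, lvap=20.0))
```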
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual’s previous length measurement and time at liberty are then derived. We moment-match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag–recapture data and tag–recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
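A minimal sketch of the final step, filling an STM from skew-normal distributions; the mapping from a current length to the skew-normal shape, location and scale is a hypothetical stand-in for the conditional moments derived in the paper:

```python
import numpy as np
from scipy.stats import skewnorm

# Sketch: fill a size-transition matrix from moment-matched skew-normal
# distributions of future length. sn_params is a hypothetical stand-in
# for the paper's derived conditional moments.
def build_stm(bin_edges, sn_params):
    """bin_edges: length-class boundaries; sn_params(l) -> (a, loc, scale)."""
    mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    stm = np.zeros((len(mids), len(mids)))
    for i, l in enumerate(mids):
        a, loc, scale = sn_params(l)
        cdf = skewnorm.cdf(bin_edges, a, loc=loc, scale=scale)
        stm[i] = np.diff(cdf)                # mass falling in each class
        stm[i] /= stm[i].sum()               # renormalise over the size range
    return stm

# Toy growth: the mean increment shrinks as length nears an asymptote.
edges = np.linspace(10, 60, 11)
stm = build_stm(edges, lambda l: (2.0, l + 0.3 * (60 - l), 2.0))
print(stm.round(2))
```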
Abstract:
The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity problems: for discrete data, the definition of NML involves an exponential sum, and in the case of continuous data, a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
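As a concrete illustration of the NML definition (not the dissertation's own algorithms), the exact NML probability for the Bernoulli model class can be computed by collapsing the exponential sum over sequences into a sum over counts:

```python
from math import comb

# Sketch: exact NML for the Bernoulli model class. The normalizer is the
# sum of maximized likelihoods over all 2^n sequences, collapsed here
# over counts so it costs O(n) instead of O(2^n).
def bernoulli_ml(k, n):
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))   # 0**0 == 1 in Python

def nml_normalizer(n):
    return sum(comb(n, k) * bernoulli_ml(k, n) for k in range(n + 1))

def nml_probability(x):
    n, k = len(x), sum(x)
    return bernoulli_ml(k, n) / nml_normalizer(n)

print(nml_probability([1, 1, 0, 1, 0, 1, 1, 1]))
```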
Abstract:
A nonlinear control design approach is presented in this paper for the challenging application problem of ensuring robust performance of an air-breathing engine operating at supersonic speed. The primary objective of the control design is to ensure that the engine produces the required thrust, tracking the commanded thrust as closely as possible by appropriate regulation of the fuel flow rate. However, since the engine operates in the supersonic range, an important secondary objective is to ensure an optimal location of the shock in the intake for maximum pressure recovery with a sufficient margin; this is manipulated by varying the throat area of the nozzle. The nonlinear dynamic inversion technique has been successfully used to achieve both of the above objectives. Since the process is faster than the actuators, independent control designs have also been carried out for the actuators to assure satisfactory performance of the overall system. Moreover, an Extended Kalman Filter based state estimator has been designed both to filter out process and sensor noise and to allow the control design to operate on output feedback. Promising simulation results indicate that the proposed control design approach is quite successful in obtaining robust performance of the air-breathing system.
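A minimal sketch of the core idea of nonlinear dynamic inversion for a scalar control-affine plant; the drift and control-effectiveness functions are toy stand-ins, not the engine model used in the paper:

```python
# Sketch: nonlinear dynamic inversion (NDI) for a scalar plant
# xdot = f(x) + g(x) * u. The plant functions below are toy examples.
def ndi_control(x, x_ref, f, g, k=2.0):
    v = -k * (x - x_ref)          # desired first-order error dynamics
    return (v - f(x)) / g(x)      # invert the plant to realise v

f = lambda x: -0.5 * x            # toy drift term
g = lambda x: 1.0 + 0.1 * x**2    # toy control effectiveness

x, dt = 0.0, 0.01
for _ in range(500):              # simple Euler simulation toward x_ref = 1
    u = ndi_control(x, 1.0, f, g)
    x += dt * (f(x) + g(x) * u)
print(round(x, 3))                # approaches the commanded value
```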
Abstract:
The insulation in a dc cable is subjected to thermal and electric stress at the same time. While the electric stress is generic to the cable, the temperature rise in the insulation is, by and large, due to the Ohmic losses in the conductor. The consequence of this synergistic effect is to reduce the maximum operating voltage and to cause premature failure of the cable. The authors examine this subject in some detail and propose a comprehensive theoretical formulation relating the maximum thermal voltage (MTV) to the physical and geometrical parameters of the insulation. The heat-flow patterns and boundary conditions considered by the authors, and those found in the earlier literature, are presented. The MTV of a dc cable is shown to be a function of the load current in addition to the resistance of the insulation. The results obtained using the expressions developed by the authors are compared with relevant results published in the literature and found to be in close conformity.
Abstract:
The aim of this paper is to provide a Bayesian formulation of the so-called magnitude-based inference approach to quantifying and interpreting effects and, in a case study example, to provide accurate probabilistic statements that correspond to the intended magnitude-based inferences. The model is described in the context of a published small-scale athlete study which employed a magnitude-based inference approach to compare the effect of two altitude training regimens (live high-train low (LHTL) and intermittent hypoxic exposure (IHE)) on running performance and blood measurements of elite triathletes. The posterior distributions, and corresponding point and interval estimates, for the parameters and associated effects and comparisons of interest were estimated using Markov chain Monte Carlo simulations. The Bayesian analysis was shown to provide more direct probabilistic comparisons of treatments and to be able to identify small effects of interest. The approach avoided asymptotic assumptions and overcame issues such as multiple testing. Bayesian analysis of unscaled effects showed a probability of 0.96 that LHTL yields a substantially greater increase in hemoglobin mass than IHE, a probability of 0.93 of a substantially greater improvement in running economy, and a probability greater than 0.96 that both IHE and LHTL yield a substantially greater improvement in maximum blood lactate concentration compared to a placebo. The conclusions are consistent with those obtained using the ‘magnitude-based inference’ approach that has been promoted in the field. The paper demonstrates that a fully Bayesian analysis is a simple and effective way of analysing small effects, providing a rich set of results that are straightforward to interpret in terms of probabilistic statements.
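A minimal sketch of how such probabilistic statements fall out of MCMC output: given posterior draws of an effect, the probability of a substantial effect is just the fraction of draws beyond the smallest worthwhile change (the draws and threshold below are synthetic):

```python
import numpy as np

# Sketch: turning MCMC samples into the kind of probabilistic statements
# quoted above, e.g. P(effect > smallest worthwhile change).
rng = np.random.default_rng(1)
effect_lhtl = rng.normal(3.0, 1.2, 10_000)   # synthetic posterior draws, LHTL - IHE
threshold = 1.0                              # synthetic smallest worthwhile change

p_substantial = np.mean(effect_lhtl > threshold)
print(f"P(substantially greater increase) = {p_substantial:.2f}")
```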
Abstract:
Nonlinear finite element analysis is used for the estimation of damage due to low-velocity impact loading of laminated composite circular plates. The impact loading is treated as an equivalent static loading by assuming the impactor to be spherical and the contact to obey Hertzian law. The stresses in the laminate are calculated using a 48 d.o.f. laminated composite sector element. Subsequently, the Tsai-Wu criterion is used to detect the zones of failure and the maximum stress criterion is used to identify the mode of failure. The material properties of the laminate are then degraded in the failed regions, and the stress analysis is performed again using the degraded properties of the plies. The iterative process is repeated until no more failure is detected in the laminate. The problem of a typical T300/N5208 composite [45°/0°/−45°/90°]_s circular plate impacted by a spherical impactor is solved and the results are compared with experimental and analytical results available in the literature. The method proposed and the computer code developed can handle symmetric as well as unsymmetric laminates, and can be easily extended to cover the impact of composite rectangular plates, shell panels and shells.
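A sketch of the iterative failure-and-degradation loop described above, structure only: the stress analysis, Tsai-Wu check and property knock-down are placeholder callables standing in for the finite element machinery:

```python
# Sketch of the progressive failure loop (structure only; the stress
# analysis, Tsai-Wu criterion and stiffness degradation are placeholder
# callables, not the finite element implementation from the paper).
def progressive_failure(laminate, load, analyse, tsai_wu_failed, degrade):
    """Repeat stress analysis, failure detection and property degradation
    until no new ply failures are detected."""
    while True:
        stresses = analyse(laminate, load)
        newly_failed = [ply for ply in laminate
                        if not ply["failed"] and tsai_wu_failed(stresses, ply)]
        if not newly_failed:
            return laminate        # converged: no further failure
        for ply in newly_failed:
            ply["failed"] = True
            degrade(ply)           # knock down properties in the failed zone
```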
Abstract:
In this paper we study constrained maximum entropy and minimum divergence optimization problems, in cases where integer-valued sufficient statistics exist, using tools from computational commutative algebra. We show that the estimation of parametric statistical models in this case can be transformed into solving a system of polynomial equations. We give an implicit description of maximum entropy models by embedding them in algebraic varieties, and give a Gröbner basis method to compute them. In the case of minimum KL-divergence models, we show that implicitization preserves specialization of the prior distribution. This result leads us to a Gröbner basis method to embed minimum KL-divergence models in algebraic varieties.
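As a toy illustration of implicitization (not the paper's construction), a Gröbner basis with an elimination order turns the parametric form of a small maximum entropy model on {0, 1, 2} into polynomial constraints on the probabilities alone:

```python
from sympy import symbols, groebner

# Sketch: on {0, 1, 2} with one mean constraint, the maximum entropy
# distribution has the form p_k = p0 * t**k. Eliminating t via a lex
# Groebner basis yields an implicit description of the model variety.
p0, p1, p2, t = symbols('p0 p1 p2 t')
gb = groebner([p1 - p0*t, p2 - p0*t**2, p0 + p1 + p2 - 1],
              t, p0, p1, p2, order='lex')

# Basis elements free of t describe the model implicitly:
print([g for g in gb.exprs if t not in g.free_symbols])
```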
Abstract:
Most of the existing WCET estimation methods directly estimate execution time, ET, in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since these methods may implicitly be using both the worst-case IC and the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying the scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs using the cycle-accurate architectural simulator SimpleScalar on two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI versus IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, one can predict CPI for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, then the measured maximum cycles is observed to be a better estimate. For certain other benchmarks, the CPI versus IC relationship is either random or CPI remains constant with varying IC; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. It is observed that the proposed method results in tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
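A minimal sketch of the proposed estimate WCET = SWIC * f(SWIC) for a benchmark whose CPI trends linearly with IC; the measurements and the SWIC value are synthetic, and a simple linear fit stands in for f:

```python
import numpy as np

# Sketch: WCET ~= SWIC * f(SWIC) for a benchmark with an inverse
# CPI-versus-IC trend. All numbers below are synthetic illustrations.
ic = np.array([1.0e6, 1.2e6, 1.5e6, 1.8e6, 2.1e6])   # measured IC per input
cpi = np.array([1.40, 1.36, 1.31, 1.27, 1.24])       # measured CPI per input

slope, intercept = np.polyfit(ic, cpi, 1)            # fit CPI = f(IC), linear
f = lambda n: slope * n + intercept

swic = 2.5e6                                         # worst-case IC (e.g. from ILP)
wcet_cycles = swic * f(swic)
print(f"estimated WCET: {wcet_cycles:.3e} cycles")
```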
Abstract:
The objective of this paper is to estimate the Safe Shutdown Earthquake (SSE) and Operating/Design Basis Earthquake (OBE/DBE) for the Nuclear Power Plant (NPP) site located at Kalpakkam, Tamil Nadu, India. The NPP is located at 12.558° N, 80.175° E, and a 500 km circular area around the NPP site is considered as the 'seismic study area' based on past regional earthquake damage distribution. The geology, seismicity and seismotectonics of the study area are studied, and a seismotectonic map is prepared showing the seismic sources and the past earthquakes. Earthquake data gathered from the literature are homogenized and declustered to form a complete earthquake catalogue for the seismic study area. The conventional maximum magnitude of each source is estimated considering the maximum observed magnitude (M_max(obs)) and/or the addition of 0.3 to 0.5 to M_max(obs). In this study, the maximum earthquake magnitude has also been estimated by establishing the region's rupture character based on source length and the associated M_max(obs). A final source-specific M_max is selected from the three M_max values by following logical criteria. To estimate the hazard at the NPP site, ten Ground-Motion Prediction Equations (GMPEs) valid for the study area are considered. These GMPEs are ranked based on log-likelihood (LLH) values, and the top five GMPEs are used to estimate the peak ground acceleration (PGA) for the site. The maximum PGA is obtained from three faults, identified as the vulnerable sources, which are used to decide the magnitudes of the OBE and SSE. The average normalized site-specific response spectrum is prepared considering the three vulnerable sources and is further used to establish the site-specific design spectrum at the NPP site.
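A minimal sketch of LLH-based GMPE ranking under the common assumption that each GMPE predicts a lognormal ground-motion distribution; the observations and model predictions below are synthetic:

```python
import numpy as np
from scipy.stats import norm

# Sketch: rank GMPEs by the average negative log2-likelihood (LLH) of
# observed ground motions. Each GMPE is summarised here by a predicted
# median and sigma in natural-log units; all values are synthetic.
def llh(log_obs, mu, sigma):
    return -np.mean(norm.logpdf(log_obs, mu, sigma) / np.log(2))

log_obs = np.log(np.array([0.08, 0.12, 0.05, 0.20]))   # observed PGA (g)
gmpes = {"GMPE-A": (np.log(0.10), 0.6),
         "GMPE-B": (np.log(0.15), 0.8)}

ranked = sorted(gmpes, key=lambda g: llh(log_obs, *gmpes[g]))
print(ranked)   # smaller LLH = better fit; top models are retained
```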
Abstract:
This article presents the details of the estimation of fracture parameters for high strength concrete (HSC, HSC1) and ultra-high strength concrete (UHSC). Brief details about the characterization of the ingredients of HSC, HSC1 and UHSC are provided. Experiments have been carried out on beams made of HSC, HSC1 and UHSC for various sizes and notch depths. Fracture characteristics such as the size-independent fracture energy (G_f), size of the fracture process zone (C_f), fracture toughness (K_IC) and critical crack tip opening displacement (CTOD_c) have been estimated based on the experimental observations. From the studies, it is observed that (i) UHSC has high fracture energy and ductility in spite of having a very low value of C_f; (ii) UHSC is relatively much more homogeneous than the other concretes, because of the absence of coarse aggregates and the presence of well-graded smaller-size particles; (iii) the critical SIF (K_IC) values increase with increasing beam depth and decrease with increasing notch depth, and there is a significant increase in fracture toughness and CTOD_c, about 7 times in HSC1 and about 10 times in UHSC compared to HSC; and (iv) for a notch-to-depth ratio of 0.1, Bazant's size effect model slightly overestimates the maximum failure loads compared to experimental observations, while Karihaloo's model slightly underestimates them. For notch-to-depth ratios ranging from 0.2 to 0.4 in the case of UHSC, both size effect models predict more or less similar maximum failure loads compared to the corresponding experimental values.
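For reference, a minimal sketch of Bazant's size effect law in the form typically used to predict nominal strength (and hence maximum failure load) as a function of beam depth; the constants are illustrative, not fitted to the paper's data:

```python
import numpy as np

# Sketch: Bazant's size effect law, sigma_N = B*ft / sqrt(1 + D/D0).
# B*ft and D0 are fitted constants; the values here are illustrative.
def bazant_nominal_strength(D, Bft=8.0, D0=100.0):
    """Nominal strength (MPa) for a beam of depth D (mm)."""
    return Bft / np.sqrt(1.0 + D / D0)

for depth in (50, 100, 200, 400):
    print(depth, round(bazant_nominal_strength(depth), 2))
```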
Abstract:
A joint Maximum Likelihood (ML) estimation algorithm for synchronization impairments such as Carrier Frequency Offset (CFO), Sampling Frequency Offset (SFO) and Symbol Timing Error (STE) in a single-user MIMO-OFDM system is investigated in this work. A received signal model that takes into account the nonlinear effects of CFO, SFO, STE and the Channel Impulse Response (CIR) is formulated. Based on this signal model, a joint ML estimation algorithm is proposed. The Cramer-Rao Lower Bound (CRLB) for the continuous parameters CFO and SFO is derived for the cases with and without channel response effects, and is used to compare the effect of coupling between the different estimated parameters. The performance of the estimation method is studied through numerical simulations.
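A minimal sketch of the structure of an ML grid search, reduced to the one-dimensional CFO-only case (the paper searches jointly over CFO, SFO and STE); the pilot and signal model are synthetic:

```python
import numpy as np

# Sketch: one-dimensional ML grid search for CFO alone, illustrating the
# structure of the joint grid search. Pilot and channel are synthetic.
rng = np.random.default_rng(0)
N, eps_true = 64, 0.13                       # subcarrier-normalised CFO
pilot = rng.choice([-1.0, 1.0], N) + 0j
n = np.arange(N)
rx = pilot * np.exp(2j * np.pi * eps_true * n / N)
rx += 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

grid = np.linspace(-0.5, 0.5, 2001)
metric = [abs(np.sum(rx * np.conj(pilot) * np.exp(-2j * np.pi * e * n / N)))
          for e in grid]                     # correlation metric per candidate
print(f"CFO estimate: {grid[np.argmax(metric)]:.3f} (true {eps_true})")
```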
Abstract:
Low-complexity joint estimation of synchronization impairments and the channel in a single-user MIMO-OFDM system is presented in this paper. Based on a system model that takes into account the effects of synchronization impairments such as carrier frequency offset, sampling frequency offset and symbol timing error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint estimation is proposed. To reduce the complexity of the ML grid search, the number of received signal samples used for estimation needs to be reduced. Conventional channel estimation techniques using Least-Squares (LS) or Maximum a Posteriori (MAP) methods fail for the resulting under-determined system, which leads to poor performance of the joint estimator. The proposed ML algorithm instead uses a Compressed Sensing (CS) based channel estimation method in a sparse fading scenario, where the received samples used for estimation are fewer than those required for an LS or MAP based estimation. The performance of the estimation method is studied through numerical simulations, and it is observed that the CS based joint estimator performs better than the LS and MAP based joint estimators.
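A minimal sketch of CS-based sparse channel estimation using Orthogonal Matching Pursuit, a standard CS recovery routine (the abstract does not specify which recovery algorithm the paper uses); dimensions are illustrative:

```python
import numpy as np

# Sketch: Orthogonal Matching Pursuit (OMP) recovering a sparse channel
# from fewer samples than an LS solve would need. Sizes are illustrative.
def omp(A, y, sparsity):
    """Greedy recovery of a sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(24, 64)) / np.sqrt(24)   # 24 samples, 64 channel taps
h = np.zeros(64)
h[[3, 17, 40]] = [1.0, -0.6, 0.3]             # 3-sparse channel
print(np.nonzero(omp(A, A @ h, 3))[0])        # recovered tap positions
```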
Abstract:
This paper proposes an automatic acoustic-phonetic method for estimating the voice-onset time of stops. The method requires neither a transcription of the utterance nor the training of a classifier. It makes use of the plosion index for the automatic detection of the burst onsets of stops. Having detected the burst onset, the onset of the voicing following the burst is detected using epochal information and a temporal measure named the maximum weighted inner product. For validation, several experiments are carried out on the entire TIMIT database and two of the CMU Arctic corpora. The performance of the proposed method compares well with three state-of-the-art techniques.
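A minimal sketch of a plosion-index-style measure, assuming the common form of a peak amplitude divided by the average magnitude of a preceding window (window and gap lengths are illustrative):

```python
import numpy as np

# Sketch: a plosion-index-style burst detector, assuming the form
# |s(n0)| / mean(|s|) over a window preceding n0 with a small gap.
def plosion_index(s, n0, win=160, gap=16):
    past = np.abs(s[max(0, n0 - gap - win): n0 - gap])
    return np.abs(s[n0]) / (past.mean() + 1e-12)

fs = 16000
t = np.arange(fs) / fs
sig = 0.01 * np.sin(2 * np.pi * 120 * t)     # quiet voiced-like background
sig[8000] = 0.8                              # impulsive burst at n0 = 8000
print(round(plosion_index(sig, 8000), 1))    # a large value flags the burst
```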
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete nonlinear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR); hence a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Data arrive sequentially, so it is sensible to process them in this way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate the method by simulation and compare its performance to existing techniques.
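A minimal sketch of SIS for symbol detection over a known FIR channel, the discrete state-space setting described above; the fixed-lag smoothing and unknown-parameter handling of the paper are omitted:

```python
import numpy as np

# Sketch: sequential importance sampling for BPSK symbol detection over
# a known FIR channel (no resampling, no fixed-lag smoothing; a toy
# version of the setting described in the abstract).
rng = np.random.default_rng(2)
h = np.array([1.0, 0.5, 0.2])                     # known FIR channel taps
T, P, sigma = 12, 2000, 0.3
s_true = rng.choice([-1.0, 1.0], T)               # transmitted symbols
y = np.convolve(s_true, h)[:T] + sigma * rng.normal(size=T)

particles = np.zeros((P, T))
logw = np.zeros(P)
for t in range(T):
    particles[:, t] = rng.choice([-1.0, 1.0], P)  # propose from the prior
    lo = max(0, t - len(h) + 1)
    pred = particles[:, lo:t + 1][:, ::-1] @ h[:t - lo + 1]
    logw += -0.5 * ((y[t] - pred) / sigma) ** 2   # weight by the likelihood

w = np.exp(logw - logw.max())
w /= w.sum()
s_hat = np.sign(w @ particles)                    # weighted symbol estimate
print(f"symbol errors: {int(np.sum(s_hat != s_true))}")
```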