971 results for maximum pseudolikelihood (MPL) estimation


Relevance:

30.00%

Publisher:

Abstract:

Most existing WCET estimation methods estimate execution time, ET, directly in cycles. We propose to study ET as a product of two factors, ET = IC * CPI, where IC is the instruction count and CPI is the cycles per instruction. Estimating ET directly may lead to a highly pessimistic estimate, since these methods implicitly combine the worst-case IC with the worst-case CPI. We hypothesize that there exists a functional relationship between CPI and IC such that CPI = f(IC). This is ascertained by computing the covariance matrix and studying scatter plots of CPI versus IC. IC and CPI values are obtained by running benchmarks with a large number of inputs on the cycle-accurate architectural simulator SimpleScalar, for two different architectures. It is shown that the benchmarks can be grouped into different classes based on the CPI versus IC relationship. For some benchmarks, such as FFT and FIR, both IC and CPI are almost constant irrespective of the input. Other benchmarks exhibit a direct or an inverse relationship between CPI and IC; in such cases, CPI can be predicted for a given IC as CPI = f(IC). We derive the theoretical worst-case IC for a program, denoted SWIC, using integer linear programming (ILP) and estimate WCET as SWIC * f(SWIC). However, if CPI decreases sharply with IC, the measured maximum cycle count is observed to be a better estimate. For certain other benchmarks, the CPI versus IC relationship is either random or CPI remains constant as IC varies; in such cases, WCET is estimated as the product of SWIC and the measured maximum CPI. The proposed method is observed to yield tighter WCET estimates than Chronos, a static WCET analyzer, for most benchmarks on the two architectures considered in this paper.
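
A minimal sketch of the ET = IC * CPI decomposition described in this abstract: fit a simple model CPI = f(IC) from measured runs, then bound WCET as SWIC * f(SWIC). The measurement data, the linear form of f, and the SWIC value below are hypothetical placeholders; in the paper SWIC comes from an ILP formulation.

```python
import numpy as np

ic = np.array([1.00e6, 1.20e6, 1.45e6, 1.70e6, 2.00e6])   # measured instruction counts
cpi = np.array([1.42, 1.38, 1.33, 1.30, 1.26])            # measured cycles per instruction

slope, intercept = np.polyfit(ic, cpi, 1)                 # least-squares fit CPI = a*IC + b
f = lambda n: slope * n + intercept

swic = 2.4e6                                              # hypothetical ILP-derived worst-case IC
wcet_cycles = swic * f(swic)                              # WCET estimate in cycles
print(f"Estimated WCET: {wcet_cycles:.3e} cycles")
```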

Relevance:

30.00%

Publisher:

Abstract:

The objective of this paper is to estimate the Safe Shutdown Earthquake (SSE) and the Operating/Design Basis Earthquake (OBE/DBE) for the Nuclear Power Plant (NPP) site located at Kalpakkam, Tamil Nadu, India. The NPP is located at 12.558 degrees N, 80.175 degrees E, and a 500 km circular area around the NPP site is considered as the 'seismic study area' based on the distribution of past regional earthquake damage. The geology, seismicity and seismotectonics of the study area are studied, and a seismotectonic map is prepared showing the seismic sources and past earthquakes. Earthquake data gathered from numerous literature sources are homogenized and declustered to form a complete earthquake catalogue for the seismic study area. The conventional maximum magnitude of each source is estimated from the maximum observed magnitude (M-max(obs)) and/or by adding 0.3 to 0.5 to M-max(obs). In this study, the maximum earthquake magnitude has also been estimated by establishing the region's rupture character based on source length and the associated M-max(obs). A final source-specific M-max is selected from the three M-max values following logical criteria. To estimate the hazard at the NPP site, ten Ground-Motion Prediction Equations (GMPEs) valid for the study area are considered. These GMPEs are ranked based on Log-Likelihood (LLH) values, and the top five are used to estimate the peak ground acceleration (PGA) at the site. The maximum PGA is obtained from three faults, which are designated as vulnerable sources and used to decide the magnitudes of the OBE and SSE. The average, normalized site-specific response spectrum is prepared considering the three vulnerable sources and is further used to establish the site-specific design spectrum at the NPP site.
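
A hedged sketch of the LLH-based GMPE ranking step mentioned above, following the commonly used formulation LLH = -(1/N) * sum(log2 g(x_i)), where g is a GMPE's predictive density evaluated at observed ground motions. The GMPE medians, sigmas and observations below are hypothetical placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import norm

obs_ln_pga = np.array([-2.1, -1.8, -2.5, -2.0])   # hypothetical observed ln(PGA) values

gmpes = {
    # name: (predicted mean ln(PGA) at the observation points, ln-space standard deviation)
    "GMPE-A": (np.array([-2.0, -1.9, -2.4, -2.1]), 0.6),
    "GMPE-B": (np.array([-2.6, -2.3, -2.9, -2.5]), 0.7),
}

def llh(mean_ln, sigma_ln, obs):
    """Average negative log2 likelihood; a lower LLH indicates a better-fitting GMPE."""
    return -np.mean(np.log2(norm.pdf(obs, loc=mean_ln, scale=sigma_ln)))

ranking = sorted(gmpes, key=lambda name: llh(*gmpes[name], obs_ln_pga))
print(ranking)  # best-ranked GMPE first
```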

Relevance:

30.00%

Publisher:

Abstract:

This article presents the details of the estimation of fracture parameters for high strength concrete (HSC, HSC1) and ultra high strength concrete (UHSC). Brief details about the characterization of the ingredients of HSC, HSC1 and UHSC are provided. Experiments have been carried out on beams made of HSC, HSC1 and UHSC with various sizes and notch depths. Fracture characteristics such as the size-independent fracture energy (G(f)), the size of the fracture process zone (C-f), the fracture toughness (K-IC) and the crack tip opening displacement (CTODc) have been estimated from the experimental observations. From the studies, it is observed that (i) UHSC has high fracture energy and ductility in spite of having a very low value of C-f; (ii) UHSC is much more homogeneous than the other concretes because of the absence of coarse aggregates and its well-graded, smaller-sized particles; (iii) the critical SIF (K-IC) values increase with beam depth and decrease with notch depth, and there is a significant increase in fracture toughness and CTODc, which are about 7 times higher in HSC1 and about 10 times higher in UHSC than in HSC; (iv) for a notch-to-depth ratio of 0.1, Bazant's size effect model slightly overestimates the maximum failure loads compared to the experimental observations, while Karihaloo's model slightly underestimates them. For notch-to-depth ratios from 0.2 to 0.4 in the case of UHSC, both size effect models predict maximum failure loads that are more or less similar to the corresponding experimental values.
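
For illustration only, the sketch below evaluates Bazant's size effect law in its common form sigma_N = B * f_t / sqrt(1 + D/D0) to predict nominal strength, and hence a rough maximum failure load, for geometrically similar notched beams. B, f_t, D0 and the beam geometry are hypothetical placeholders, not parameters from the paper, and geometry correction factors are ignored.

```python
import math

B, f_t, D0 = 1.5, 10.0, 120.0        # empirical constant, tensile strength (MPa), transitional depth (mm)
width = 100.0                        # beam thickness (mm)

for depth in (100.0, 200.0, 400.0):  # beam depths D in mm
    sigma_n = B * f_t / math.sqrt(1.0 + depth / D0)    # nominal strength (MPa)
    p_max = sigma_n * width * depth / 1000.0           # rough failure load (kN), ignoring geometry factors
    print(f"D = {depth:.0f} mm: sigma_N = {sigma_n:.2f} MPa, P_max ~ {p_max:.1f} kN")
```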

Relevance:

30.00%

Publisher:

Abstract:

Low-complexity joint estimation of synchronization impairments and the channel in a single-user MIMO-OFDM system is presented in this paper. Based on a system model that accounts for synchronization impairments such as carrier frequency offset, sampling frequency offset, and symbol timing error, as well as the channel, a Maximum Likelihood (ML) algorithm for the joint estimation is proposed. To reduce the complexity of the ML grid search, the number of received signal samples used for estimation needs to be reduced. Conventional channel estimation techniques based on Least-Squares (LS) or Maximum a posteriori (MAP) methods fail for the resulting under-determined system, which degrades the performance of the joint estimator. The proposed ML algorithm instead uses a Compressed Sensing (CS) based channel estimation method in a sparse fading scenario, where the received samples used for estimation are fewer than those required for LS or MAP based estimation. The performance of the estimation method is studied through numerical simulations, and it is observed that the CS based joint estimator performs better than the LS and MAP based joint estimators. (C) 2013 Elsevier GmbH. All rights reserved.
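
A sketch of the core idea that a sparse channel can be recovered from fewer received samples than an LS estimator requires, here using Orthogonal Matching Pursuit as a stand-in CS recovery algorithm (the paper does not necessarily use OMP). The dimensions, sparsity level and measurement matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_taps, n_samples, sparsity = 64, 24, 3          # under-determined: 24 equations, 64 unknowns

h = np.zeros(n_taps)
h[rng.choice(n_taps, sparsity, replace=False)] = rng.standard_normal(sparsity)  # sparse channel
A = rng.standard_normal((n_samples, n_taps))     # known pilot/measurement matrix
y = A @ h + 0.01 * rng.standard_normal(n_samples)

def omp(A, y, k):
    """Greedy OMP: pick k atoms most correlated with the residual, then least-squares on the support."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    h_hat = np.zeros(A.shape[1])
    h_hat[support] = coeffs
    return h_hat

h_hat = omp(A, y, sparsity)
print("reconstruction error:", np.linalg.norm(h_hat - h))
```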

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes an automatic acoustic-phonetic method for estimating the voice-onset time of stops. The method requires neither a transcription of the utterance nor the training of a classifier. It makes use of the plosion index for the automatic detection of the burst onsets of stops. Once the burst onset is detected, the onset of voicing following the burst is located using epochal information and a temporal measure named the maximum weighted inner product. For validation, several experiments are carried out on the entire TIMIT database and two of the CMU Arctic corpora. The performance of the proposed method compares well with three state-of-the-art techniques. (C) 2014 Acoustical Society of America
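
A rough sketch of a plosion-index style burst detector: the ratio of a sample's magnitude to the average magnitude in a short preceding window, with peaks above a threshold taken as candidate burst onsets. The window lengths and threshold are hypothetical, and the paper's exact definition of the plosion index may differ.

```python
import numpy as np

def plosion_index(signal, i, m1, m2):
    """|s[i]| divided by the mean |s| over the window [i - m1 - m2, i - m1)."""
    window = np.abs(signal[max(0, i - m1 - m2): max(0, i - m1)])
    return np.abs(signal[i]) / (np.mean(window) + 1e-12)

def candidate_burst_onsets(signal, fs, threshold=10.0):
    m1, m2 = int(0.006 * fs), int(0.016 * fs)   # assumed 6 ms gap and 16 ms averaging window
    return [i for i in range(m1 + m2, len(signal))
            if plosion_index(signal, i, m1, m2) > threshold]

# toy usage: a weak noise floor with one impulsive "burst"
fs = 16000
signal = np.random.default_rng(0).standard_normal(fs) * 0.01
signal[8000] = 1.0
print(candidate_burst_onsets(signal, fs))   # expected to flag the sample near index 8000
```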

Relevance:

30.00%

Publisher:

Abstract:

We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR), so a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Since the data arrive sequentially, it is sensible to process them in the same way. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate the method by simulation and compare its performance to existing techniques.
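
A minimal sketch of sequential importance sampling in this setting: discrete symbols passed through a known FIR channel with additive Gaussian noise, with particles proposed from the symbol prior and weighted by the Gaussian likelihood. The alphabet, channel taps and noise level are hypothetical, and the paper additionally treats unknown parameters, fixed-lag smoothing and (in practice essential) resampling.

```python
import numpy as np

rng = np.random.default_rng(1)
symbols = np.array([-1.0, 1.0])                    # BPSK alphabet
h = np.array([0.8, 0.5, 0.2])                      # assumed FIR channel taps
sigma, n_particles, T = 0.3, 200, 20

true_x = rng.choice(symbols, T)
y = np.convolve(true_x, h)[:T] + sigma * rng.standard_normal(T)

particles = np.zeros((n_particles, T))
log_w = np.zeros(n_particles)
for t in range(T):
    particles[:, t] = rng.choice(symbols, n_particles)        # prior proposal for x_t
    start = max(0, t - len(h) + 1)
    preds = np.array([np.dot(particles[p, start:t + 1][::-1], h[:t - start + 1])
                      for p in range(n_particles)])           # channel output predicted by each particle
    log_w += -0.5 * ((y[t] - preds) / sigma) ** 2             # Gaussian log-likelihood update

w = np.exp(log_w - log_w.max())
w /= w.sum()
x_hat = np.sign(w @ particles)                                # weighted symbol estimate
print("symbol error rate:", np.mean(x_hat != true_x))
```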

Relevance:

30.00%

Publisher:

Abstract:

Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
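
A hedged sketch of maximizing an ABC-style approximate likelihood: for each candidate parameter, simulate from the model many times and estimate the likelihood by the fraction of simulations whose summary statistic falls within a tolerance of the observed one (a uniform-kernel ABC likelihood). The toy model, tolerance and grid search are illustrative only; the paper analyses the asymptotics of such estimators and uses sequential Monte Carlo for the optimization.

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(loc=1.5, scale=1.0, size=50)      # pretend data with unknown mean
s_obs = observed.mean()                                 # summary statistic

def abc_likelihood(theta, n_sims=2000, eps=0.05):
    sims = rng.normal(loc=theta, scale=1.0, size=(n_sims, observed.size))
    return np.mean(np.abs(sims.mean(axis=1) - s_obs) < eps)

grid = np.linspace(0.0, 3.0, 61)
theta_hat = grid[np.argmax([abc_likelihood(t) for t in grid])]
print("approximate MLE:", theta_hat)
```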

Relevance:

30.00%

Publisher:

Abstract:

We show that the sensor localization problem can be cast as a static parameter estimation problem for Hidden Markov Models and we develop fully decentralized versions of the Recursive Maximum Likelihood and the Expectation-Maximization algorithms to localize the network. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a message passing algorithm to propagate the derivatives of the likelihood. In the non-linear case, a solution based on local linearization in the spirit of the Extended Kalman Filter is proposed. In numerical examples we show that the developed algorithms are able to learn the localization parameters well.
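
A toy, single-sensor sketch of Recursive Maximum Likelihood for a static parameter of a linear Gaussian state-space model, using a Kalman filter together with a propagated filter derivative. The decentralization and message-passing aspects described above are omitted, and the model, parameter and step sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
a, Q, R, theta_true, T = 0.95, 0.1, 0.2, 2.0, 5000

x, ys = 0.0, []
for _ in range(T):                                   # simulate x_t = a x_{t-1} + w,  y_t = x_t + theta + v
    x = a * x + np.sqrt(Q) * rng.standard_normal()
    ys.append(x + theta_true + np.sqrt(R) * rng.standard_normal())

theta, m, P, dm = 0.0, 0.0, 1.0, 0.0                 # parameter guess, filter mean/variance, d(mean)/d(theta)
for t, y in enumerate(ys, start=1):
    m_pred, P_pred, dm_pred = a * m, a * a * P + Q, a * dm
    S = P_pred + R
    e = y - m_pred - theta                           # innovation
    de = -dm_pred - 1.0                              # derivative of the innovation w.r.t. theta
    grad = -e * de / S                               # d/dtheta of log N(y; m_pred + theta, S)
    theta += (1.0 / t) * grad                        # Robbins-Monro style RML step
    K = P_pred / S
    m, P, dm = m_pred + K * e, (1 - K) * P_pred, dm_pred + K * de

print("estimated theta:", theta, "true:", theta_true)
```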

Relevance:

30.00%

Publisher:

Abstract:

The age composition of the catch and the growth rate of yellowfin tuna have been estimated by Hennemuth (1961a) and Davidoff (1963). The relative abundance and instantaneous total mortality rate of yellowfin tuna during 1954-1959 have been estimated by Hennemuth (1961b). It is now possible to extend this work because more data are available; these include data for 1951-1954, which were previously not available, and data for 1960-1962, which were collected subsequent to Hennemuth's (1961b) publication. In that publication, Hennemuth estimated the total instantaneous mortality rate (Z) over the entire period a year class is present in the fishery following full recruitment. However, this method may lead to biased estimates of abundance, and hence of mortality rates, because of both seasonal migrations into or out of specific fishing areas and possible seasonal differences in the availability or vulnerability of the fish to the fishing gear. Schaefer, Chatwin and Broadhead (1961) and Joseph et al. (1964) have indicated that seasonal migrations of yellowfin occur. A method of estimating mortality rates that is not biased by seasonal movements would be of value in computations of population dynamics. The method of analysis outlined and used in the present paper may obviate this bias by comparing the abundance of an individual yellowfin year class, following its period of maximum abundance, in an individual area during a specific quarter of the year with its abundance in the same area one year later. The method was suggested by Gulland (1955) and used by Chapman, Holt and Allen (1963) in assessing Antarctic whale stocks. This method, and the results of its use with data for yellowfin caught in the eastern tropical Pacific from 1951-1962, are described in this paper.
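
A minimal sketch of the year-apart comparison described above: the instantaneous total mortality rate Z is estimated from the decline in catch-per-unit-effort of the same year class, in the same area and quarter, one year later, via Z = ln(CPUE_t / CPUE_{t+1}). The CPUE figures below are hypothetical.

```python
import math

cpue_year1 = 14.2   # hypothetical CPUE of a year class in one area and quarter, year t
cpue_year2 = 5.9    # CPUE of the same year class, same area and quarter, year t+1

Z = math.log(cpue_year1 / cpue_year2)          # annual instantaneous total mortality rate
survival = math.exp(-Z)                        # corresponding annual survival fraction
print(f"Z = {Z:.2f} per year, annual survival = {survival:.2f}")
```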

Relevance:

30.00%

Publisher:

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. High density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to what we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a highly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information that can be gleaned about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean--sea ice--ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice--ocean interactions over the Antarctic continental shelves, and show that a large contribution to the LGM salinity stratification can be explained by lower ocean temperatures. In order to extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but also through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, in comparison with traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
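
A hedged sketch of the Bayesian MCMC parameter estimation idea mentioned above: a random-walk Metropolis sampler over a bottom-water history parameter, scored against a pore fluid profile through a forward model. The forward model here is a placeholder (a smooth relaxation curve), not the diffusion-advection model used in the thesis, and all numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

def forward_model(lgm_value, depths):
    # placeholder: profile relaxes from a modern value at the sediment surface
    # toward the LGM bottom-water value at depth
    return lgm_value * (1.0 - np.exp(-depths / 20.0))

depths = np.linspace(1.0, 60.0, 15)                         # metres below seafloor
observed = forward_model(1.0, depths) + 0.05 * rng.standard_normal(depths.size)

def log_posterior(lgm_value, sigma=0.05):
    if not -2.0 < lgm_value < 3.0:                          # flat prior on a plausible range
        return -np.inf
    resid = observed - forward_model(lgm_value, depths)
    return -0.5 * np.sum((resid / sigma) ** 2)

samples, current, lp = [], 0.0, log_posterior(0.0)
for _ in range(20000):
    prop = current + 0.1 * rng.standard_normal()            # random-walk proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    samples.append(current)

print("posterior mean LGM value:", np.mean(samples[5000:]))
```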

Relevance:

30.00%

Publisher:

Abstract:

We show that the sensor self-localization problem can be cast as a static parameter estimation problem for Hidden Markov Models and we implement fully decentralized versions of the Recursive Maximum Likelihood and on-line Expectation-Maximization algorithms to localize the sensor network simultaneously with target tracking. For linear Gaussian models, our algorithms can be implemented exactly using a distributed version of the Kalman filter and a novel message passing algorithm. The latter allows each node to compute the local derivatives of the likelihood or the sufficient statistics needed for Expectation-Maximization. In the non-linear case, a solution based on local linearization in the spirit of the Extended Kalman Filter is proposed. In numerical examples we demonstrate that the developed algorithms are able to learn the localization parameters. © 2012 IEEE.

Relevance:

30.00%

Publisher:

Abstract:

Conventional hidden Markov models generally consist of a Markov chain observed through a linear map corrupted by additive noise. This general class of model has enjoyed a huge and diverse range of applications, for example in speech processing, biomedical signal processing and, more recently, quantitative finance. However, a lesser known extension of this general class of model is the so-called Factorial Hidden Markov Model (FHMM). FHMMs also have diverse applications, notably in machine learning, artificial intelligence and speech recognition [13, 17]. FHMMs extend the usual class of HMMs by supposing that the partially observed state process is a finite collection of distinct Markov chains, either statistically independent or dependent. There is also considerable current activity in applying collections of partially observed Markov chains to complex action recognition problems; see, for example, [6]. In this article we consider the Maximum Likelihood (ML) parameter estimation problem for FHMMs. Much of the extant literature on this problem presents parameter estimation schemes based on full-data log-likelihood EM algorithms. This approach can be slow to converge and often imposes heavy demands on computer memory, a point that is particularly relevant for FHMMs, where the state space dimensions are relatively large. The contribution of this article is to develop new recursive formulae for a filter-based EM algorithm that can be implemented online. The new formulae yield equivalent ML estimators, yet they are purely recursive and so significantly reduce numerical complexity and memory requirements. A computer simulation is included to demonstrate the performance of our results. © Taylor & Francis Group, LLC.
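
As a small illustration of the memory point above (not of the paper's algorithm itself): two independent Markov chains combine into a joint chain whose transition matrix is the Kronecker product of the per-chain matrices, so the joint state dimension is the product of the individual dimensions. The matrices below are arbitrary examples.

```python
import numpy as np

A1 = np.array([[0.9, 0.1],
               [0.2, 0.8]])                 # chain 1: 2 states
A2 = np.array([[0.7, 0.2, 0.1],
               [0.1, 0.8, 0.1],
               [0.3, 0.3, 0.4]])            # chain 2: 3 states

A_joint = np.kron(A1, A2)                   # joint chain: 2 * 3 = 6 states
print(A_joint.shape)                        # (6, 6); rows still sum to 1
print(A_joint.sum(axis=1))
```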

Relevance:

30.00%

Publisher:

Abstract:

The retention factors (k) of 104 hydrophobic organic chemicals (HOCs) were measured by soil column chromatography (SCC) on columns packed with three naturally occurring reference soils and eluted with Milli-Q water. A novel method for estimating the soil organic partition coefficient (K-oc) was developed based on correlations with k in soil/water systems. Strong log K-oc versus log k correlations (r > 0.96) were found. The estimated K-oc values were in accordance with literature values, with a maximum deviation of less than 0.4 log units, and the K-oc values estimated from the three soils were consistent with each other. The SCC approach is promising for the fast screening of a large number of chemicals in environmental applications. (C) 2002 Elsevier Science B.V. All rights reserved.
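
A sketch of the correlation-based estimation described above: regress log K-oc on log k for calibration chemicals with known K-oc, then predict K-oc for a new chemical from its measured retention factor. All numbers below are hypothetical.

```python
import numpy as np

log_k = np.array([-0.5, 0.1, 0.6, 1.1, 1.7])        # measured log retention factors (calibration set)
log_koc = np.array([1.8, 2.4, 2.9, 3.5, 4.1])       # literature log K-oc values for the same chemicals

slope, intercept = np.polyfit(log_k, log_koc, 1)     # least-squares fit: log K-oc = a * log k + b
r = np.corrcoef(log_k, log_koc)[0, 1]
print(f"fit: log K-oc = {slope:.2f} * log k + {intercept:.2f} (r = {r:.3f})")

log_k_new = 0.85                                     # retention factor of an uncharacterized HOC
print("estimated log K-oc:", slope * log_k_new + intercept)
```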

Relevance:

30.00%

Publisher:

Abstract:

Recovering a volumetric model of a person, car, or other object of interest from a single snapshot would be useful for many computer graphics applications. 3D model estimation in general is hard and currently requires active sensors, multiple views, or integration over time. For a known object class, however, 3D shape can be successfully inferred from a single snapshot. We present a method for generating a "virtual visual hull", an estimate of the 3D shape of an object from a known class, given a single silhouette observed from an unknown viewpoint. For a given class, a large database of multi-view silhouette examples from calibrated, though possibly varied, camera rigs is collected. To infer a novel single-view input silhouette's virtual visual hull, we search the database for 3D shapes that are most consistent with the observed contour. The input is matched to component single views of the multi-view training examples. A set of viewpoint-aligned virtual views is generated from the visual hulls corresponding to these examples. The 3D shape estimate for the input is then found by interpolating between the contours of these aligned views. When the underlying shape is ambiguous given a single-view silhouette, we produce multiple visual hull hypotheses; if a sequence of input images is available, a dynamic programming approach is applied to find the maximum likelihood path through the feasible hypotheses over time. We show results of our algorithm on real and synthetic images of people.
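
A toy sketch of the dynamic programming step described above: given several shape hypotheses per frame, each with a likelihood score, a Viterbi-style recursion finds the maximum-likelihood path through time when consecutive hypotheses are also scored for consistency. The per-frame likelihoods and the transition scores below are random stand-ins, not the paper's actual scoring functions.

```python
import numpy as np

rng = np.random.default_rng(5)
T, H = 6, 4                                            # frames, hypotheses per frame
log_lik = rng.normal(size=(T, H))                      # per-frame hypothesis log-likelihoods
log_trans = rng.normal(scale=0.5, size=(T - 1, H, H))  # consistency between consecutive hypotheses

# Viterbi recursion over hypothesis indices
score, back = log_lik[0].copy(), np.zeros((T, H), dtype=int)
for t in range(1, T):
    cand = score[:, None] + log_trans[t - 1] + log_lik[t][None, :]   # (previous, current)
    back[t] = np.argmax(cand, axis=0)
    score = np.max(cand, axis=0)

path = [int(np.argmax(score))]
for t in range(T - 1, 0, -1):
    path.append(int(back[t][path[-1]]))
print("most likely hypothesis sequence:", path[::-1])
```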