928 results for "Maximum likelihood channel estimation algorithms"


Relevance:

100.00%

Publisher:

Abstract:

The main objective of this work was to evaluate the importance of including maternal genetic, common litter, and permanent environmental effects in the model for estimating variance components for the farrowing interval trait in sows. The data consisted of 1,013 observations of Dalland (C-40) females recorded in two herds. Variance components were estimated by derivative-free restricted maximum likelihood. Eight models were tested, all containing fixed effects (contemporary groups and covariates) and direct additive genetic and residual effects, but differing in the inclusion of the random maternal genetic, common litter environmental, and permanent environmental effects. The likelihood-ratio test (LR) indicated that inclusion of these effects in the model was unnecessary. Nevertheless, the permanent environmental effect changed the heritability estimates, which ranged from 0.00 to 0.03. It is concluded that the heritability values obtained indicate that this trait would show no genetic gain in response to selection. The common litter environmental and maternal genetic effects had no influence on this trait. The permanent environmental effect, although not significant by the LR test, should be considered in genetic models for this trait, since its presence changed the estimates of the additive genetic variance.
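
For readers unfamiliar with the model-comparison step, the sketch below shows the likelihood-ratio test used to decide whether a random effect belongs in the model; the log-likelihood values are hypothetical stand-ins for the output of an external REML fit.

```python
# Sketch of the likelihood-ratio test for comparing nested variance-component
# models; the log-likelihoods are assumed to come from an external REML fit
# (the numbers below are hypothetical).
from scipy.stats import chi2

def lr_test(loglik_reduced, loglik_full, df=1):
    """LR = 2*(logL_full - logL_reduced), referred to a chi-square."""
    lr = 2.0 * (loglik_full - loglik_reduced)
    p_value = chi2.sf(lr, df)
    return lr, p_value

# Example: model with vs. without the permanent environmental effect.
lr, p = lr_test(loglik_reduced=-1502.3, loglik_full=-1501.9)
print(f"LR = {lr:.2f}, p = {p:.3f}")  # large p => effect not significant
```

Strictly speaking, when a variance component is tested on its boundary (sigma^2 = 0), the reference distribution is a 50:50 mixture of chi-squares, so the plain chi-square p-value is conservative.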

Relevance:

100.00%

Publisher:

Abstract:

We propose new classes of linear codes over integer rings of quadratic extensions of Q, the field of rational numbers. The codes are considered with respect to a Mannheim metric, which is a Manhattan metric modulo a two-dimensional (2-D) grid. In particular, codes over Gaussian integers and Eisenstein-Jacobi integers are extensively studied. Decoding algorithms are proposed for these codes when up to two coordinates of a transmitted code vector are affected by errors of arbitrary Mannheim weight. Moreover, we show that the proposed codes are maximum-distance separable (MDS), with respect to the Hamming distance. The practical interest in such Mannheim-metric codes is their use in coded modulation schemes based on quadrature amplitude modulation (QAM)-type constellations, for which neither the Hamming nor the Lee metric is appropriate.
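
As a concrete illustration of the metric (not code from the paper), the sketch below reduces a Gaussian integer modulo a Gaussian prime and computes its Mannheim weight; the prime 2 + i is an arbitrary choice.

```python
# Minimal sketch of reduction modulo a Gaussian integer and the Mannheim
# weight; pi is a Gaussian prime chosen only for illustration.
def gaussian_mod(x: complex, pi: complex) -> complex:
    """Reduce x modulo pi: round the quotient to the nearest Gaussian
    integer, so the remainder has smallest Manhattan norm."""
    q = x / pi
    q_rounded = complex(round(q.real), round(q.imag))
    return x - q_rounded * pi

def mannheim_weight(r: complex) -> int:
    """Mannheim weight = |Re(r)| + |Im(r)| of the reduced representative."""
    return int(abs(r.real) + abs(r.imag))

pi = 2 + 1j                       # Gaussian prime of norm 5
r = gaussian_mod(7 + 3j, pi)
print(r, mannheim_weight(r))      # -> (1+0j) 1
```

The Mannheim distance between two symbols is then the weight of their difference after this reduction, which is why it matches QAM-type constellations better than the Hamming or Lee metric.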

Relevance:

100.00%

Publisher:

Abstract:

A technique for measuring charm baryon lifetimes from hadro-production data is presented. The measurement verified the lifetime analysis procedure on a sample with higher statistical precision. Other effects studied include mass reflections, the presence of a second charm particle, and mismeasurement of charm decays. Monte Carlo simulations were used for the detailed study of systematic effects in the charm data.

Relevance:

100.00%

Publisher:

Abstract:

The present paper deals with estimation of variance components, prediction of breeding values, and selection in a population of rubber tree [Hevea brasiliensis (Willd. ex Adr. de Juss.) Müell.-Arg.] from Rio Branco, State of Acre, Brazil. Given the unbalanced condition of the progeny test, the REML/BLUP (restricted maximum likelihood/best linear unbiased prediction) procedure was applied. For this purpose, 37 rubber tree families were obtained and assessed in a randomized complete block design, with three unbalanced replications. The field trial was carried out at the Experimental Station of UNESP, located in Selvíria, State of Mato Grosso do Sul, Brazil. The quantitative traits evaluated were: girth (G), bark thickness (BT), number of latex vessel rings (NR), and plant height (PH). The narrow-sense individual heritability estimates were 0.43 for G, 0.18 for BT, 0.01 for NR, and 0.51 for PH. Two selection strategies were adopted: one short-term (ST; selection intensity of 8.85%) and the other long-term (LT; selection intensity of 26.56%). For G, the estimated genetic gains in relation to the population average were 26.80% and 17.94% according to the ST and LT strategies, respectively. The effective population sizes were 22.35 and 46.03, respectively. The LT and ST strategies maintained 45.80% and 28.24%, respectively, of the original genetic diversity represented in the progeny test. It can therefore be inferred that this population has potential both for breeding and for ex situ genetic conservation as a supplier of genetic material for advanced rubber tree breeding programs. Copyright by the Brazilian Society of Genetics.
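
A minimal sketch of Henderson's mixed-model equations, the linear system underlying BLUP, may help fix ideas; the design matrices, relationship matrix, and variance ratio below are illustrative stand-ins, not the paper's actual data.

```python
# Hedged numpy sketch of Henderson's mixed-model equations behind BLUP.
import numpy as np

def blup(X, Z, A, y, lam):
    """Solve [X'X  X'Z; Z'X  Z'Z + lam*inv(A)] [b; u] = [X'y; Z'y]."""
    Ainv = np.linalg.inv(A)
    top = np.hstack([X.T @ X, X.T @ Z])
    bottom = np.hstack([Z.T @ X, Z.T @ Z + lam * Ainv])
    lhs = np.vstack([top, bottom])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    return sol[:X.shape[1]], sol[X.shape[1]:]   # fixed effects, breeding values

# Toy example: 4 records, overall mean as the only fixed effect, 2 animals.
# lam = sigma_e^2 / sigma_a^2 = (1 - h^2) / h^2, using h^2 = 0.43 (girth).
X = np.ones((4, 1))
Z = np.array([[1, 0], [1, 0], [0, 1], [0, 1]], dtype=float)
A = np.eye(2)                       # relationship matrix (unrelated animals)
y = np.array([10.0, 12.0, 9.0, 11.0])
b, u = blup(X, Z, A, y, lam=(1 - 0.43) / 0.43)
print(b, u)
```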

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles, and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set. © 2010 Elsevier B.V. All rights reserved.
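
Since the paper's density is not reproduced in this abstract, the sketch below illustrates only the general recipe: numerically maximize a log-likelihood and read an approximate asymptotic covariance off the inverse Hessian. The two-parameter density used here is a stand-in, not necessarily the paper's.

```python
# Generic maximum-likelihood fitting sketch with asymptotic covariance from
# the (approximate) inverse Hessian; the density below is a stand-in.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x):
    beta, theta = np.exp(params)      # log-parametrization keeps both positive
    # Stand-in two-parameter lifetime density (illustrative only).
    f = (beta * theta * np.exp(-beta * x)) / (1.0 - (1.0 - theta) * np.exp(-beta * x)) ** 2
    return -np.sum(np.log(f))

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=200)          # toy data
res = minimize(neg_loglik, x0=np.log([0.5, 0.5]), args=(x,), method="BFGS")
print(np.exp(res.x))      # MLEs of (beta, theta)
print(res.hess_inv)       # approx. asymptotic covariance (on the log scale)
```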

Relevance:

100.00%

Publisher:

Abstract:

Maximum-likelihood decoding is often the optimal decoding rule one can use, but it is very costly to implement in a general setting. Much effort has therefore been dedicated to finding efficient decoding algorithms that either achieve or approximate the error-correcting performance of the maximum-likelihood decoder. This dissertation examines two approaches to this problem. In 2003 Feldman and his collaborators defined the linear programming decoder, which operates by solving a linear programming relaxation of the maximum-likelihood decoding problem. As with many modern decoding algorithms, it is possible for the linear programming decoder to output vectors that do not correspond to codewords; such vectors are known as pseudocodewords. In this work, we completely classify the set of linear programming pseudocodewords for the family of cycle codes. For the case of the binary symmetric channel, another approximation of maximum-likelihood decoding was introduced by Omura in 1972. This decoder employs an iterative algorithm whose behavior closely mimics that of the simplex algorithm. We generalize Omura's decoder to operate on any binary-input memoryless channel, thus obtaining a soft-decision decoding algorithm. Further, we prove that the probability of the generalized algorithm returning the maximum-likelihood codeword approaches 1 as the number of iterations goes to infinity.
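
A toy sketch may make the relaxation concrete: Feldman-style LP decoding minimizes the log-likelihood-ratio cost over the "forbidden set" inequalities describing the fundamental polytope. The (7,4) Hamming code and channel parameters below are illustrative, and fractional LP outputs are exactly the pseudocodewords discussed above.

```python
# Hedged sketch of LP decoding on the binary symmetric channel for a tiny
# parity-check code; one inequality per check and odd-size neighbor subset.
import itertools
import numpy as np
from scipy.optimize import linprog

def lp_decode(H, llr):
    n = H.shape[1]
    A_ub, b_ub = [], []
    for row in H:
        nbrs = np.flatnonzero(row)
        for size in range(1, len(nbrs) + 1, 2):        # odd-size subsets S
            for S in itertools.combinations(nbrs, size):
                a = np.zeros(n)
                a[list(S)] = 1.0
                a[[i for i in nbrs if i not in S]] = -1.0
                A_ub.append(a)                          # sum_S x - sum_rest x <= |S|-1
                b_ub.append(len(S) - 1.0)
    res = linprog(llr, A_ub=np.array(A_ub), b_ub=b_ub, bounds=[(0, 1)] * n)
    return res.x

# (7,4) Hamming code; all-zero codeword sent, one bit flipped by the BSC.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
p = 0.05
received = np.array([0, 0, 0, 0, 0, 1, 0])
llr = np.where(received == 0, 1, -1) * np.log((1 - p) / p)
print(np.round(lp_decode(H, llr), 3))   # integral output => ML codeword found
```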

Relevance:

100.00%

Publisher:

Abstract:

An extension of some standard likelihood-based procedures to heteroscedastic nonlinear regression models under scale mixtures of skew-normal (SMSN) distributions is developed. This novel class of models provides a useful generalization of the heteroscedastic symmetrical nonlinear regression models (Cysneiros et al., 2010), since the random-term distributions cover symmetric as well as asymmetric and heavy-tailed distributions, such as the skew-t, skew-slash, and skew-contaminated normal, among others. A simple EM-type algorithm for iteratively computing maximum likelihood estimates of the parameters is presented, and the observed information matrix is derived analytically. To examine the performance of the proposed methods, simulation studies are presented that show the robustness of this flexible class against outlying and influential observations, and that the maximum likelihood estimates based on the EM-type algorithm have good asymptotic properties. Furthermore, local influence measures and one-step approximations of the estimates in the case-deletion model are obtained. Finally, an illustration of the methodology is given for a data set previously analyzed under the homoscedastic skew-t nonlinear regression model. (C) 2012 Elsevier B.V. All rights reserved.
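
The EM weighting idea is easiest to see in the simplest symmetric special case: linear regression with Student-t errors, a scale mixture of normals. The sketch below is that special case only, not the paper's SMSN class; observations with large residuals get small weights, which is the source of the robustness mentioned above.

```python
# EM for linear regression with Student-t errors (fixed dof), a symmetric
# scale-mixture special case of the SMSN framework; data are synthetic.
import numpy as np

def t_regression_em(X, y, nu=4.0, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    sigma2 = np.var(y - X @ beta)
    for _ in range(n_iter):
        r2 = (y - X @ beta) ** 2 / sigma2
        w = (nu + 1.0) / (nu + r2)        # E-step: expected mixing weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # M-step: WLS
        sigma2 = np.sum(w * (y - X @ beta) ** 2) / len(y)
    return beta, sigma2

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 2.0]) + rng.standard_t(df=4, size=100)
print(t_regression_em(X, y))
```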

Relevance:

100.00%

Publisher:

Abstract:

Accurate estimates of the penetrance rate of autosomal dominant conditions are important, among other issues, for optimizing recurrence risks in genetic counseling. The present work on penetrance rate estimation from pedigree data considers the following situations: 1) estimation of the penetrance rate K (brief review of the method); 2) construction of exact credible intervals for K estimates; 3) specificity and heterogeneity issues; 4) penetrance rate estimates obtained through molecular testing of families; 5) lack of information about the phenotype of the pedigree generator; 6) genealogies containing grouped parent-offspring information; 7) ascertainment issues responsible for the inflation of K estimates.
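
For item 2, a minimal sketch under an assumed binomial model: with a affected individuals among n obligate carriers and a uniform prior, the posterior for K is Beta(a+1, n-a+1), from which an exact credible interval follows. The counts below are hypothetical, and this simplified model is not necessarily the paper's pedigree-based estimator.

```python
# Exact credible interval for a penetrance rate K, assuming a binomial
# model with a uniform Beta(1,1) prior; counts are hypothetical.
from scipy.stats import beta

a_affected, n_carriers = 14, 20
posterior = beta(a_affected + 1, n_carriers - a_affected + 1)
k_hat = a_affected / n_carriers
lo, hi = posterior.ppf([0.025, 0.975])
print(f"K = {k_hat:.2f}, 95% credible interval = ({lo:.2f}, {hi:.2f})")
```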

Relevance:

100.00%

Publisher:

Abstract:

The Standard Model (SM) of particle physics describes the fundamental constituents and their interactions very precisely. Despite this success, there remain open questions that the SM cannot answer. One still incomplete test is the measurement of the strength of the weak coupling between quarks. Within their lifetime, neutral B (and B̄) mesons can transform into their antiparticles through a weak-interaction process. By measuring Bs oscillations, the coupling Vtd between the top (t) and down (d) quarks can be determined. All experiments carried out up to the end of 2005 yielded only a lower limit on the oscillation frequency, Δm_s > 14.4 ps⁻¹. This thesis describes the measurement of the Bs oscillation frequency Δm_s in the semileptonic channel Bs → Ds⁻μ⁺X. The data come from proton-antiproton collisions recorded between April 2002 and March 2006 with the DØ detector at the Tevatron collider of the Fermi National Accelerator Laboratory, at a center-of-mass energy of √s = 1.96 TeV. The data sets correspond to an integrated luminosity of 1.3 fb⁻¹ (620 million events). For this oscillation measurement, the quark content of the Bs meson at production and at decay was determined, and the decay time was measured. After reconstruction and selection of the signal events, the charge of the muon fixes the quark content of the Bs meson at decay. In addition, the quark content of the Bs meson at production was tagged: b quarks are produced in pairs in pp̄ collisions, and the decay products of the second b hadron tag the quark content of the Bs meson at production. With a sensitivity of Δm_s(sens) = 14.5 ps⁻¹, a lower limit on the oscillation frequency of Δm_s > 15.5 ps⁻¹ was set. The maximum-likelihood method yielded an oscillation frequency Δm_s = (20 +2.5 −3.0 (stat+syst) ± 0.8 (syst, k)) ps⁻¹ at a confidence level of 90%. The undetected neutrino momentum gives rise to the systematic uncertainty (syst, k). Together with the corresponding oscillation of the Bd meson, this result yields a significant measurement of the coupling Vtd, in agreement with other experiments on the weak quark couplings.
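
For reference, the likelihood fit rests on the standard time-dependent mixing probabilities (neglecting the width difference and CP violation), with Δm_s the oscillation frequency being measured:

```latex
P_{\text{unmixed}}(t) = \frac{\Gamma_s}{2}\, e^{-\Gamma_s t}\,\bigl(1 + \cos \Delta m_s t\bigr),
\qquad
P_{\text{mixed}}(t) = \frac{\Gamma_s}{2}\, e^{-\Gamma_s t}\,\bigl(1 - \cos \Delta m_s t\bigr).
```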

Relevance:

100.00%

Publisher:

Abstract:

A large number of proposals for estimating the bivariate survival function under random censoring have been made. In this paper we discuss nonparametric maximum likelihood estimation and the bivariate Kaplan-Meier estimator of Dabrowska. We show how these estimators are computed, present their intuitive background, and compare their practical performance under different levels of dependence and censoring, based on extensive simulation results, leading to practical advice.
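
As background, both estimators reduce on the margins to the ordinary Kaplan-Meier product-limit estimator; a minimal sketch with toy data follows.

```python
# Univariate Kaplan-Meier product-limit estimator; times/events are toy data.
import numpy as np

def kaplan_meier(times, events):
    """Return (event times, survival estimates S(t)). Assumes events sort
    before censorings at tied times, as in this toy input."""
    order = np.argsort(times, kind="stable")
    times, events = np.asarray(times)[order], np.asarray(events)[order]
    n = len(times)
    s, out_t, out_s = 1.0, [], []
    for i, (t, d) in enumerate(zip(times, events)):
        at_risk = n - i
        if d:                         # event (d=1), not censored (d=0)
            s *= 1.0 - 1.0 / at_risk
            out_t.append(t)
            out_s.append(s)
    return np.array(out_t), np.array(out_s)

t, s = kaplan_meier([2, 3, 3, 5, 8], [1, 1, 0, 1, 0])
print(dict(zip(t, np.round(s, 3))))   # {2: 0.8, 3: 0.6, 5: 0.3}
```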

Relevance:

100.00%

Publisher:

Abstract:

We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence at a fixed point if the density has a negative derivative. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
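
A hedged sketch of the third estimator described above (a kernel estimate followed by isotonization), using scikit-learn's pool-adjacent-violators implementation and synthetic exponential data:

```python
# Isotonize a kernel density estimate so it is decreasing on (0, inf).
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
x = rng.exponential(size=500)            # a decreasing true density

grid = np.linspace(0.01, 5, 200)
kde = gaussian_kde(x)(grid)              # smooth, but not necessarily monotone
iso = IsotonicRegression(increasing=False).fit_transform(grid, kde)
# iso is the decreasing function closest (in L2 on the grid) to the kernel fit.
print(bool(np.max(np.diff(iso)) <= 0))   # check: non-increasing on the grid
```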

Relevance:

100.00%

Publisher:

Abstract:

This paper discusses estimation of the tumor incidence rate, the death rate given that tumor is present, and the death rate given that tumor is absent, using a discrete multistage model. The model was originally proposed by Dewanji and Kalbfleisch (1986), where the maximum likelihood estimate of the tumor incidence rate was obtained using the EM algorithm. In this paper, we use a reparametrization to simplify the estimation procedure. The resulting estimates are not always the same as the maximum likelihood estimates but are asymptotically equivalent. In addition, explicit expressions for the asymptotic variance and bias of the proposed estimators are derived. These results can be used to compare the efficiency of different sacrifice schemes in carcinogenicity experiments.
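
The asymptotic variance of an estimator obtained through a reparametrization typically follows from the delta method; for reference, the generic statement (the paper's specific expressions are not reproduced here):

```latex
\sqrt{n}\,(\hat\theta - \theta) \xrightarrow{d} N(0, \Sigma)
\quad\Longrightarrow\quad
\sqrt{n}\,\bigl(g(\hat\theta) - g(\theta)\bigr) \xrightarrow{d}
N\bigl(0,\; \nabla g(\theta)^{\top} \Sigma\, \nabla g(\theta)\bigr).
```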

Relevance:

100.00%

Publisher:

Abstract:

Latent class regression models are useful tools for assessing associations between covariates and latent variables. However, evaluation of key model assumptions cannot be performed using methods from standard regression models, due to the unobserved nature of latent outcome variables. This paper presents graphical diagnostic tools to evaluate whether latent class regression models adhere to standard assumptions of the model: conditional independence and non-differential measurement. An integral part of these methods is the use of a Markov chain Monte Carlo (MCMC) estimation procedure. Unlike standard maximum likelihood implementations for latent class regression model estimation, the MCMC approach allows us to calculate posterior distributions and point estimates of any functions of the parameters. It is this convenience that allows us to provide the diagnostic methods we introduce. As a motivating example, we present an analysis of the association between depression and socioeconomic status, using data from the Epidemiologic Catchment Area study: the latent variable depression is regressed on education and income indicators, in addition to age, gender, and marital status variables. While the fitted latent class regression model yields interesting results, the model parameters are found to be invalid due to violations of the model assumptions, violations that are clearly identified by the presented diagnostic plots. These methods can be applied to standard latent class and latent class regression models, and the general principle can be extended to evaluate model assumptions in other types of models.
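
The convenience highlighted above is simple to state in code: given MCMC draws, the posterior of any function of the parameters is obtained by applying the function draw by draw. The draws below are simulated stand-ins, not output of an actual latent class fit.

```python
# Posterior of an arbitrary function of parameters from MCMC draws.
import numpy as np

rng = np.random.default_rng(3)
draws = rng.normal(loc=[0.8, -0.3], scale=0.1, size=(4000, 2))  # fake MCMC draws

# E.g., the posterior of an odds ratio exp(b1 - b2):
func_draws = np.exp(draws[:, 0] - draws[:, 1])
point_estimate = func_draws.mean()
ci = np.percentile(func_draws, [2.5, 97.5])
print(point_estimate, ci)
```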

Relevance:

100.00%

Publisher:

Abstract:

All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate, and severe turbulence conditions; each set consists of 1000 simulated images featuring anisoplanatic turbulence-induced aberrations. The mean-square-error (MSE) performance of the maximum-likelihood, multi-frame blind deconvolution (MFBD) estimator is evaluated as a function of the number of input images and of the number of Zernike polynomial terms used to characterize the point spread function, and is compared against speckle imaging methods over the same imagery. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent, and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. The MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent, and 27 percent are available using speckle imaging methods with 25 input frames, and of 38 percent, 34 percent, and 33 percent for the MFBD method with 150 input frames. The MFBD estimator is also applied to three sets of field data and the results presented. Finally, a combined bispectrum-MFBD hybrid estimator is proposed and investigated; this technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
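
As a concrete anchor for the Poisson noise model, the sketch below implements Richardson-Lucy deconvolution, the classic maximum-likelihood iteration for Poisson-corrupted imagery, in a single-frame, known-PSF simplification of the MFBD setting above; the image and PSF are synthetic.

```python
# Richardson-Lucy: ML image restoration under Poisson noise, known PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, n_iter=30):
    """Iterate x <- x * corr(observed / conv(x, psf), psf); psf sums to 1."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1, ::-1]               # correlation = conv with flip
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a point source with a normalized Gaussian PSF, add
# Poisson noise, then restore.
rng = np.random.default_rng(4)
img = np.zeros((64, 64)); img[32, 32] = 1000.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / 8.0); psf /= psf.sum()
observed = rng.poisson(fftconvolve(img, psf, mode="same").clip(0)).astype(float)
restored = richardson_lucy(observed, psf)
```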

Relevance:

100.00%

Publisher:

Abstract:

Academic and industrial research in the late 1990s brought about an exponential explosion of DNA sequence data. Automated expert systems are being created to help biologists extract patterns, trends, and links from this ever-deepening ocean of information. Two such systems, aimed at retrieving and subsequently utilizing phylogenetically relevant information, were developed in this dissertation, the major objective of which was to automate the often difficult and confusing phylogenetic reconstruction process.

Popular phylogenetic reconstruction methods, such as distance-based methods, attempt to find an optimal tree topology (one that reflects the relationships among related sequences and their evolutionary history) by searching through the topology space. Various compromises between the fast (but incomplete) and exhaustive (but computationally prohibitive) search heuristics have been suggested. An intelligent compromise algorithm that relies on a flexible "beam" search principle from the Artificial Intelligence domain, and uses pre-computed local topology reliability information to adjust the beam search space continuously, is described in the second chapter of this dissertation.

However, sometimes even a (virtually) complete distance-based method is inferior to the significantly more elaborate (and computationally expensive) maximum likelihood (ML) method. In fact, depending on the nature of the sequence data in question, either method might prove superior. It is therefore difficult (even for an expert) to tell a priori which phylogenetic reconstruction method (distance-based, ML, or perhaps maximum parsimony, MP) should be chosen for any particular data set.

A number of factors, often hidden, influence the performance of a method. For example, it is generally understood that for a phylogenetically "difficult" data set, more sophisticated methods (e.g., ML) tend to be more effective and thus should be chosen. However, it is the interplay of many factors that one needs to consider in order to avoid choosing an inferior method (a potentially costly mistake, both in terms of computational expense and reconstruction accuracy).

Chapter III of this dissertation details a phylogenetic reconstruction expert system that automatically selects an appropriate method. It uses a classifier (a Decision Tree-inducing algorithm) to map a new data set to the proper phylogenetic reconstruction method.
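
A toy version of the Chapter III idea, with invented features, labels, and training rows, might look like the following sketch using scikit-learn's decision tree.

```python
# Toy decision-tree method selector; all features/labels are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [number of taxa, alignment length, mean pairwise divergence]
X = [[10, 1200, 0.05],
     [40,  800, 0.35],
     [25, 2000, 0.15],
     [60,  500, 0.45]]
y = ["distance", "ML", "MP", "ML"]        # method that performed best

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[30, 1000, 0.30]]))    # suggested method for a new data set
```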