971 results for maximum pseudolikelihood (MPL) estimation
Abstract:
Accurate estimates of the penetrance rate of autosomal dominant conditions are important, among other applications, for refining recurrence risks in genetic counseling. The present work on penetrance rate estimation from pedigree data considers the following situations: 1) estimation of the penetrance rate K (a brief review of the method); 2) construction of exact credible intervals for K estimates; 3) specificity and heterogeneity issues; 4) penetrance rate estimates obtained through molecular testing of families; 5) lack of information about the phenotype of the pedigree generator; 6) genealogies containing grouped parent-offspring information; and 7) ascertainment issues responsible for the inflation of K estimates.
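The abstract does not spell out the credible-interval construction, so the following is a minimal sketch under an assumed simple segregation model (not necessarily the authors' setup): each offspring of an affected heterozygous parent is affected with probability K/2, so with a uniform prior on p = K/2 the posterior given a affected among n offspring is Beta(1 + a, 1 + n - a), and an exact credible interval for K follows by rescaling the posterior quantiles.

```python
# Hedged sketch: exact Beta-posterior credible interval for the penetrance
# rate K under an ASSUMED segregation model (affected heterozygous parent,
# each child affected with probability K/2); not the authors' method.
from scipy.stats import beta

def penetrance_credible_interval(affected, total, level=0.95):
    a, b = 1 + affected, 1 + total - affected  # Beta(1,1) prior on p = K/2
    lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return 2 * lo, min(2 * hi, 1.0)            # map p = K/2 back to K

print(penetrance_credible_interval(14, 40))    # hypothetical pedigree counts
```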
Abstract:
The thermal limits of individual animals were originally proposed as a link between animal physiology and thermal ecology. Although this link is valid in theory, the evaluation of physiological tolerances involves some problems that are the focus of this study. One rationale was that heating rates should influence upper critical limits, so that ecologically meaningful thermal limits must take experimental heating rates into account. In addition, if thermal limits are not surpassed in experiments, subsequent tests of the same individual should yield similar results or produce evidence of hardening. Finally, several uncontrolled variables, such as time spent under experimental conditions and handling procedures, may affect results. To analyze these issues we conducted an integrative study of upper critical temperatures in a single species, the ant Atta sexdens rubropilosa, an animal model providing large numbers of individuals of diverse sizes but similar genetic makeup. Our specific aims were to 1) test the influence of heating rates on the experimental evaluation of upper critical temperatures, 2) test the assumptions of absence of physical damage and of reproducibility, 3) examine sources of variance often overlooked in the thermal-limits literature, and 4) introduce some experimental approaches that may help researchers separate physiological from methodological issues. The upper thermal limits were influenced by both heating rates and body mass. In the latter case, the effect was physiological rather than methodological. The critical temperature decreased during subsequent tests performed on the same individual ants, even one week after the initial test. Accordingly, upper thermal limits may have been overestimated by our (and typical) protocols. Heating rates, body mass, temperature-independent procedures, and other variables may affect the estimation of upper critical temperatures. Therefore, based on our data, we offer suggestions to enhance the quality of measurements, along with recommendations for authors aiming to compile and analyze databases from the literature.
Abstract:
Defining pharmacokinetic parameters and depletion intervals for antimicrobials used in fish provides important guidelines for future regulation by Brazilian agencies of the use of these substances in fish farming. This article presents a depletion study of oxytetracycline (OTC) in tilapia (Oreochromis niloticus) farmed under tropical conditions during the winter season. A high-performance liquid chromatography method with fluorescence detection for the quantitation of OTC in tilapia fillets and medicated feed was developed and validated. The depletion study was carried out under monitored environmental conditions. OTC was administered in the feed for five consecutive days at a daily dosage of 80 mg/kg body weight. Groups of ten fish were slaughtered at 1, 2, 3, 4, 5, 8, 10, 15, 20, and 25 days after medication. After the 8th day post-treatment, OTC concentrations in the tilapia fillets were below the limit of quantitation (13 ng/g) of the method. Linear regression of the mathematical model used in the data analysis presented a coefficient of 0.9962. The elimination half-life of OTC in tilapia fillet and the withdrawal period were 1.65 and 6 days, respectively, considering the 99th percentile with 95% confidence and a maximum residue limit of 100 ng/g. Even though the study was carried out in the winter under practical conditions in which the water temperature varied, the results are similar to those of studies conducted under controlled temperature.
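As a hedged illustration of how such numbers are typically derived (the paper's validated procedure may differ), the sketch below fits the standard log-linear depletion model ln C = a - b·t, reads the elimination half-life off the slope, and advances in time until the upper one-sided tolerance limit for the 99th percentile at 95% confidence falls below the 100 ng/g maximum residue limit. The tolerance factor used here is the simple i.i.d. noncentral-t version; regulatory tools additionally account for regression leverage.

```python
# Hedged sketch of a log-linear residue-depletion analysis; an illustration
# under simplifying assumptions, not the paper's validated procedure.
import numpy as np
from scipy import stats

MRL = 100.0  # maximum residue limit, ng/g

def withdrawal_period(t_days, conc_ng_g, p=0.99, conf=0.95):
    t = np.asarray(t_days, float)
    y = np.log(np.asarray(conc_ng_g, float))
    slope, intercept, *_ = stats.linregress(t, y)
    half_life = np.log(2) / -slope                 # days
    n = len(t)
    s = np.sqrt(np.sum((y - (intercept + slope * t)) ** 2) / (n - 2))
    # Simplified one-sided (p, conf) tolerance factor via the noncentral t;
    # regulatory software also accounts for regression leverage.
    k = stats.nct.ppf(conf, df=n - 2, nc=stats.norm.ppf(p) * np.sqrt(n)) / np.sqrt(n)
    for day in np.arange(0.0, 60.0, 1.0):
        if intercept + slope * day + k * s < np.log(MRL):
            return half_life, day                  # first compliant day
    return half_life, None
```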
Abstract:
In this study we analyzed the phylogeographic pattern and historical demography of an endemic Atlantic forest (AF) bird, Basileuterus leucoblepharus, and tested the influence of the last glacial maximum (LGM) on its effective population size using coalescent simulations. We address two main questions: (i) Does B. leucoblepharus present population genetic structure congruent with the patterns observed for other AF organisms? (ii) How did the LGM affect the effective population size of B. leucoblepharus? We sequenced 914 bp of the mitochondrial gene cytochrome b and 512 bp of the nuclear intron 5 of beta-fibrinogen from 62 individuals from 15 localities along the AF. Both molecular markers revealed no genetic structure in B. leucoblepharus. Neutrality tests based on both loci showed significant demographic expansion. The extended Bayesian skyline plot showed that the species seems to have experienced demographic expansion starting around 300,000 years ago, during the late Pleistocene. This date does not coincide with the LGM, and the population-size dynamics showed stability during the LGM. To further test the effect of the LGM on this species, we simulated seven demographic scenarios to explore whether populations suffered specific bottlenecks. The scenarios most congruent with our data were population stability during the LGM with bottlenecks older than this period. This is the first example of an AF organism that does not show phylogeographic breaks caused by vicariant events associated with climate change and geotectonic activities in the Quaternary. Differences in ecological and environmental tolerances and in habitat requirements possibly influence the different evolutionary histories of these organisms. Our results show that the history of organism diversification in this megadiverse Neotropical forest is complex.
Abstract:
The aim of this work is to propose a new method for estimating the backward flow directly from the optical flow. We assume that the optical flow has already been computed and that we need to estimate the inverse mapping. This mapping is not bijective due to the presence of occlusions and disocclusions, so it is not possible to estimate the inverse function over the whole domain. Values in these regions have to be inferred from the available information. We propose an accurate algorithm to calculate the backward flow solely from the optical flow, using a simple relation. Occlusions are filled by selecting the maximum motion, and disocclusions are filled with two different strategies: a min-fill strategy, which fills each disoccluded region with the minimum value around the region, and a restricted min-fill approach, which selects the minimum value in a close neighborhood. In the experimental results, we show the accuracy of the method and compare the results of these two strategies.
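A minimal sketch of the core idea, under assumptions of my own (nearest-pixel splatting, and a windowed stand-in for the paper's region-based min-fill): forward-project each flow vector, resolve collisions (occlusions) by keeping the maximum motion, and fill never-hit pixels (disocclusions) from the minimum defined value nearby, in the spirit of the restricted min-fill variant.

```python
# Hedged sketch of backward-flow estimation from a forward flow (u, v);
# nearest-neighbour splatting, not the paper's exact relation.
import numpy as np

def backward_flow(u, v):
    h, w = u.shape
    bu = np.full((h, w), np.nan)
    bv = np.full((h, w), np.nan)
    best = np.full((h, w), -1.0)            # largest motion wins collisions
    for y in range(h):
        for x in range(w):
            tx, ty = int(round(x + u[y, x])), int(round(y + v[y, x]))
            if 0 <= tx < w and 0 <= ty < h:
                mag = u[y, x] ** 2 + v[y, x] ** 2
                if mag > best[ty, tx]:      # occlusion: keep maximum motion
                    best[ty, tx] = mag
                    bu[ty, tx], bv[ty, tx] = -u[y, x], -v[y, x]
    # Disocclusions (never hit): crude stand-in for min-fill, taking the
    # minimum defined value in a small window around each empty pixel.
    for y, x in zip(*np.where(np.isnan(bu))):
        win_u = bu[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
        win_v = bv[max(y - 2, 0):y + 3, max(x - 2, 0):x + 3]
        if not np.all(np.isnan(win_u)):
            bu[y, x], bv[y, x] = np.nanmin(win_u), np.nanmin(win_v)
    return bu, bv
```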
Abstract:
Despite the scientific achievements of the last decades in astrophysics and cosmology, the majority of the energy content of the Universe is still unknown. A potential solution to the “missing mass problem” is the existence of dark matter in the form of WIMPs. Due to the very small cross section for WIMP-nucleon interactions, the number of expected events is very limited (about 1 event/tonne/year), thus requiring detectors with large target mass and low background level. The aim of the XENON1T experiment, the first tonne-scale LXe-based detector, is to be sensitive to WIMP-nucleon cross sections as low as 10^-47 cm^2. To investigate whether such a detector can reach this goal, Monte Carlo simulations are mandatory to estimate the background. To this aim, the GEANT4 toolkit has been used to implement the detector geometry and to simulate the decays from the various background sources, both electromagnetic and nuclear. From the analysis of the simulations, the level of background has been found fully acceptable for the experiment's purposes: about 1 background event in a 2 tonne-year exposure. Then, using the Maximum Gap method, the XENON1T sensitivity has been evaluated, and the minimum of the WIMP-nucleon cross-section curve has been found at 1.87 x 10^-47 cm^2, at 90% CL, for a WIMP mass of 45 GeV/c^2. The results have been independently cross-checked using the Likelihood Ratio method, which confirmed them with an agreement within less than a factor of two. Such a result is entirely acceptable considering the intrinsic differences between the two statistical methods. Thus, this thesis shows that the XENON1T detector will be able to reach its design sensitivity, lowering the limits on the WIMP-nucleon cross section by about 2 orders of magnitude with respect to current experiments.
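The thesis' statistical machinery is not reproduced in the abstract; below is a hedged, generic implementation of the Maximum Gap construction it names (S. Yellin, Phys. Rev. D 66, 102005): the probability C0 that every gap between events, measured in expected-signal units, stays below x, inverted to give an upper limit on the signal expectation.

```python
# Hedged sketch of Yellin's maximum gap method (the construction named in
# the abstract), not code from the thesis. Assumes at least one observed
# event inside the analysis interval.
import math
import numpy as np
from scipy.optimize import brentq

def c0(x, mu):
    """P(all gaps < x) for a Poisson process with total expectation mu."""
    return sum(
        (k * x - mu) ** k * math.exp(-k * x) / math.factorial(k)
        * (1.0 + k / (mu - k * x))
        for k in range(int(mu // x) + 1) if mu - k * x > 0
    )

def max_gap_upper_limit(events, signal_cdf, cl=0.90):
    """Upper limit on the expected signal count at confidence level cl."""
    u = np.sort(np.concatenate(([0.0], signal_cdf(np.asarray(events)), [1.0])))
    biggest = np.max(np.diff(u))            # largest gap, as a CDF fraction
    # Find mu at which the observed maximum gap (mu * biggest in
    # expected-event units) becomes improbably large at level cl.
    return brentq(lambda mu: c0(mu * biggest, mu) - cl, 1e-3, 1e4)
```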
Abstract:
A large number of proposals for estimating the bivariate survival function under random censoring have been made. In this paper we discuss nonparametric maximum likelihood estimation and the bivariate Kaplan-Meier estimator of Dabrowska. We show how these estimators are computed, present their intuitive background, and compare their practical performance under different levels of dependence and censoring, based on extensive simulation results, which leads to practical advice.
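For reference, the form in which Dabrowska's estimator is usually written (the paper's notation may differ):

```latex
\hat S(t_1,t_2) = \hat S(t_1,0)\,\hat S(0,t_2)
  \prod_{0 < s \le t_1,\; 0 < t \le t_2}
  \bigl[1 - \hat L(\Delta s,\Delta t)\bigr],
\qquad
\hat L(\Delta s,\Delta t) =
  \frac{\hat\Lambda_{10}(\Delta s, t^{-})\,\hat\Lambda_{01}(s^{-},\Delta t)
        - \hat\Lambda_{11}(\Delta s,\Delta t)}
       {\bigl[1 - \hat\Lambda_{10}(\Delta s, t^{-})\bigr]
        \bigl[1 - \hat\Lambda_{01}(s^{-},\Delta t)\bigr]},
```

where the marginals are univariate Kaplan-Meier estimators and the Λ's denote empirical cumulative-hazard components for single and double failures.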
Abstract:
We investigate the interplay of smoothness and monotonicity assumptions when estimating a density from a sample of observations. The nonparametric maximum likelihood estimator of a decreasing density on the positive half line attains a rate of convergence of n^{1/3} at a fixed point if the density has a negative derivative there. The same rate is obtained by a kernel estimator, but the limit distributions are different. If the density is both differentiable and known to be monotone, then a third estimator is obtained by isotonization of a kernel estimator. We show that this again attains the rate of convergence and compare the limit distributions of the three types of estimators. It is shown that both isotonization and smoothing lead to a more concentrated limit distribution, and we study the dependence on the proportionality constant in the bandwidth. We also show that isotonization does not change the limit behavior of a kernel estimator with a larger bandwidth, in the case that the density is known to have more than one derivative.
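A minimal sketch of the third estimator (isotonization of a kernel estimator); the bandwidth and grid choices here are illustrative assumptions, not the paper's.

```python
# Kernel density estimate on a grid, then projected onto decreasing
# functions by pool-adjacent-violators; a sketch of the isotonized
# kernel estimator, with default (assumed) bandwidth selection.
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.isotonic import IsotonicRegression

def isotonized_kernel_density(sample, grid):
    f_hat = gaussian_kde(sample)(grid)            # smooth pilot estimate
    iso = IsotonicRegression(increasing=False)    # enforce monotonicity
    return iso.fit_transform(grid, f_hat)

rng = np.random.default_rng(0)
x = rng.exponential(size=500)          # a decreasing density on (0, inf)
grid = np.linspace(0.01, 5, 200)
f_mono = isotonized_kernel_density(x, grid)
```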
Abstract:
This paper discusses estimation of the tumor incidence rate, the death rate given that tumor is present, and the death rate given that tumor is absent, using a discrete multistage model. The model was originally proposed by Dewanji and Kalbfleisch (1986), where the maximum likelihood estimate of the tumor incidence rate was obtained using the EM algorithm. In this paper, we use a reparametrization to simplify the estimation procedure. The resulting estimates are not always the same as the maximum likelihood estimates but are asymptotically equivalent. In addition, explicit expressions for the asymptotic variance and bias of the proposed estimators are derived. These results can be used to compare the efficiency of different sacrifice schemes in carcinogenicity experiments.
Abstract:
This paper considers a wide class of semiparametric problems with a parametric part for some covariate effects and repeated evaluations of a nonparametric function. Special cases in our approach include marginal models for longitudinal/clustered data, conditional logistic regression for matched case-control studies, multivariate measurement error models, generalized linear mixed models with a semiparametric component, and many others. We propose profile-kernel and backfitting estimation methods for these problems, derive their asymptotic distributions, and show that in likelihood problems the methods are semiparametric efficient. Although this is not true in general, with our methods profiling and backfitting are asymptotically equivalent. We also consider pseudolikelihood methods, in which some nuisance parameters are estimated by a different algorithm. The proposed methods are evaluated using simulation studies and applied to the Kenya hemoglobin data.
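As a toy illustration of the backfitting idea for one special case, a partially linear model y = xβ + f(z) + ε (the paper's class is far more general; the smoother and its bandwidth below are assumptions):

```python
# Toy backfitting sketch for y = x @ beta + f(z) + noise: alternate a
# parametric least-squares step with a Nadaraya-Watson smoothing step.
# Not the paper's general estimator.
import numpy as np

def nw_smooth(z, r, h=0.2):
    """Nadaraya-Watson estimate of E[r | z] at the sample points z (1-d)."""
    w = np.exp(-0.5 * ((z[:, None] - z[None, :]) / h) ** 2)
    return (w @ r) / w.sum(axis=1)

def backfit(x, z, y, iters=50):
    """x: (n, p) design for the parametric part; z: (n,) smooth covariate."""
    beta = np.zeros(x.shape[1])
    f = np.zeros_like(y)
    for _ in range(iters):
        beta, *_ = np.linalg.lstsq(x, y - f, rcond=None)  # parametric step
        f = nw_smooth(z, y - x @ beta)                    # nonparametric step
        f -= f.mean()                        # identifiability: center f
    return beta, f
```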
Abstract:
DNA sequence copy number has been shown to be associated with cancer development and progression. Array-based Comparative Genomic Hybridization (aCGH) is a recent development that seeks to identify the copy number ratio at large numbers of markers across the genome. Due to experimental and biological variations across chromosomes and across hybridizations, current methods are limited to analyses of single chromosomes. We propose a more powerful approach that borrows strength across chromosomes and across hybridizations. We assume a Gaussian mixture model, with a hidden Markov dependence structure, and with random effects to allow for intertumoral variation, as well as intratumoral clonal variation. For ease of computation, we base estimation on a pseudolikelihood function. The method produces quantitative assessments of the likelihood of genetic alterations at each clone, along with a graphical display for simple visual interpretation. We assess the characteristics of the method through simulation studies and through analysis of a brain tumor aCGH data set. We show that the pseudolikelihood approach is superior to existing methods both in detecting small regions of copy number alteration and in accurately classifying regions of change when intratumoral clonal variation is present.
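A hedged sketch of one building block of such a model: the forward-algorithm log-likelihood of a Gaussian hidden Markov model over latent copy-number states for a single chromosome. The paper's pseudolikelihood multiplies factors of roughly this form across chromosomes and hybridizations and adds random effects, which are omitted here.

```python
# Forward-algorithm log-likelihood of a Gaussian HMM over copy-number
# states (e.g. loss / neutral / gain) for one chromosome; a building-block
# sketch, not the paper's full random-effects pseudolikelihood.
import numpy as np
from scipy.stats import norm
from scipy.special import logsumexp

def hmm_loglik(log2ratios, state_means, sd, trans, init):
    """trans[i, j] = P(state j at clone t+1 | state i at clone t)."""
    log_b = norm.logpdf(np.asarray(log2ratios)[:, None], state_means, sd)
    log_a, alpha = np.log(trans), np.log(init) + log_b[0]
    for t in range(1, len(log2ratios)):
        alpha = log_b[t] + logsumexp(alpha[:, None] + log_a, axis=0)
    return logsumexp(alpha)
```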
Abstract:
Latent class regression models are useful tools for assessing associations between covariates and latent variables. However, evaluation of key model assumptions cannot be performed using methods from standard regression models, due to the unobserved nature of latent outcome variables. This paper presents graphical diagnostic tools to evaluate whether latent class regression models adhere to the standard assumptions of the model: conditional independence and non-differential measurement. An integral part of these methods is the use of a Markov chain Monte Carlo (MCMC) estimation procedure. Unlike standard maximum likelihood implementations of latent class regression estimation, the MCMC approach allows us to calculate posterior distributions and point estimates of any functions of the parameters, and it is this convenience that makes the diagnostic methods we introduce possible. As a motivating example, we present a latent class regression analysis of the association between depression and socioeconomic status, using data from the Epidemiologic Catchment Area study: the latent variable depression is regressed on education and income indicators, in addition to age, gender, and marital status variables. While the fitted latent class regression model yields interesting results, the model parameters are found to be invalid due to the violation of model assumptions, and this violation is clearly identified by the presented diagnostic plots. These methods can be applied to standard latent class and latent class regression models, and the general principle can be extended to evaluate model assumptions in other types of models.
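The paper's diagnostics are graphical and model-specific, but here is a hedged sketch of the kind of conditional-independence check that MCMC output enables: for each posterior draw of the class weights and item probabilities, compute the model-implied co-occurrence probability of two items and compare its posterior distribution with the observed rate.

```python
# Hedged sketch (not the paper's exact diagnostic): compare the observed
# co-occurrence rate of items j and k with its posterior distribution under
# the latent class model, one value per MCMC draw of (pi, p).
import numpy as np

def cooccurrence_check(y, pi_draws, p_draws, j, k):
    """y: (n, J) binary data; pi_draws: (S, C); p_draws: (S, C, J)."""
    observed = np.mean(y[:, j] * y[:, k])
    # Model-implied P(Y_j = 1, Y_k = 1) = sum_c pi_c * p_jc * p_kc per draw.
    implied = np.sum(pi_draws * p_draws[:, :, j] * p_draws[:, :, k], axis=1)
    # If `observed` sits far in the tail of `implied`, conditional
    # independence between items j and k is suspect.
    return observed, implied
```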
Abstract:
The degree of polarization of a reflected field under active laser illumination can be used for object identification and classification. The goal of this study is to investigate methods for estimating the degree of polarization of reflected fields under active laser illumination, which involves the measurement and processing of two orthogonal field components (complex amplitudes), two orthogonal intensity components, and the total field intensity. We propose to replace interferometric optical apparatuses with a computational approach that estimates the degree of polarization from two orthogonal intensity measurements or from total intensity measurements. Cramér-Rao bounds for each of the three sensing modalities under various noise models are computed. Algebraic estimators and maximum-likelihood (ML) estimators are proposed, with an active-set algorithm and an expectation-maximization (EM) algorithm used to compute the ML estimates. The performance of the estimators is compared with each other and with the corresponding Cramér-Rao bounds. Estimators for four-channel polarimeter (intensity interferometer) sensing perform better than orthogonal-intensity and total-intensity estimators. Processing the four intensity channels of a polarimeter, however, requires complicated optical devices, alignment, and four CCD detectors, whereas processing orthogonal-intensity or total-intensity data requires only one or two detectors and a computer. The bounds and estimator performances demonstrate that reasonable estimates can still be obtained from orthogonal-intensity or total-intensity data, so computational sensing is a promising way to estimate the degree of polarization.
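A hedged illustration of one algebraic (method-of-moments) estimator from total intensity alone, assuming fully developed polarized speckle, for which the intensity contrast satisfies var(I)/mean(I)^2 = (1 + P^2)/2 (Goodman's speckle statistics); this illustrates the idea and is not necessarily the paper's estimator.

```python
# Hedged sketch: moment estimator of the degree of polarization P from
# total-intensity speckle data, ASSUMING var(I)/mean(I)^2 = (1 + P^2)/2.
import numpy as np

def dop_from_total_intensity(intensity):
    c2 = np.var(intensity) / np.mean(intensity) ** 2   # speckle contrast^2
    return np.sqrt(np.clip(2.0 * c2 - 1.0, 0.0, 1.0))  # invert (1 + P^2)/2
```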
Abstract:
This thesis develops high-performance real-time signal processing modules for direction of arrival (DOA) estimation in localization systems. It proposes highly parallel algorithms for performing subspace decomposition and polynomial rooting, which are traditionally implemented using sequential algorithms. The proposed algorithms address the emerging need for real-time localization in a wide range of applications. As the antenna array size increases, the complexity of the signal processing algorithms increases, making it increasingly difficult to satisfy real-time constraints. This thesis addresses real-time implementation by proposing parallel algorithms that maintain considerable improvement over traditional algorithms, especially for systems with a larger number of antenna array elements. Singular value decomposition (SVD) and polynomial rooting are two computationally complex steps and act as the bottlenecks to achieving real-time performance. The proposed algorithms are suitable for implementation on field-programmable gate arrays (FPGAs), single instruction multiple data (SIMD) hardware, or application-specific integrated circuits (ASICs), which offer large numbers of processing elements that can be exploited for parallel processing. The designs proposed in this thesis are modular, easily expandable, and easy to implement. First, the thesis proposes a fast-converging SVD algorithm. The proposed method reduces the number of iterations needed to converge to the correct singular values, thus coming closer to real-time performance. A general algorithm and a modular system design are provided, making it easy for designers to replicate and extend the design to larger matrix sizes. Moreover, the method is highly parallel, which can be exploited on the hardware platforms mentioned earlier. A fixed-point implementation of the proposed SVD algorithm is presented. The FPGA design is pipelined to the maximum extent to increase the maximum achievable operating frequency, and the system was developed with the objective of achieving high throughput; the modern FPGA cores used to maximize performance are described in detail. Finally, a parallel polynomial rooting technique based on Newton’s method, applicable exclusively to root-MUSIC polynomials, is proposed. Unique characteristics of the complex dynamics of the root-MUSIC polynomial were exploited to derive this rooting method. The technique exhibits parallelism and converges to the desired roots within a fixed number of iterations, making it suitable for rooting polynomials of large degree. We believe this is the first time that the complex dynamics of the root-MUSIC polynomial have been analyzed to propose an algorithm. In all, the thesis addresses two major bottlenecks in a direction of arrival estimation system by providing simple, high-throughput, parallel algorithms.
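A hedged illustration of the kind of parallelism described (a generic vectorized Newton iteration, not the thesis' algorithm): every start point takes the identical update, so the iteration maps directly onto SIMD lanes or FPGA processing elements, and the starts are seeded on the unit circle, near which the root-MUSIC signal roots lie.

```python
# Generic sketch: Newton's method run in lockstep from many unit-circle
# starting points; each "lane" is independent, hence SIMD/FPGA-friendly.
# Not the thesis' root-MUSIC-specific rooting algorithm.
import numpy as np

def parallel_newton_roots(coeffs, n_starts=64, iters=30):
    """coeffs: polynomial coefficients, highest degree first (np.polyval)."""
    dcoeffs = np.polyder(coeffs)
    z = np.exp(2j * np.pi * np.arange(n_starts) / n_starts)  # unit circle
    for _ in range(iters):             # identical step for every lane
        z = z - np.polyval(coeffs, z) / np.polyval(dcoeffs, z)
    return z
```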