117 results for Penalized likelihood
in University of Queensland eSpace - Australia
Abstract:
Background: A knowledge of energy expenditure in infancy is required for the estimation of recommended daily amounts of food energy, for designing artificial infant feeds, and as a reference standard for studies of energy metabolism in disease states. Objectives: The objective of this study was to construct centile reference charts for total energy expenditure (TEE) in infants across the first year of life. Methods: Repeated measures of TEE using the doubly labeled water technique were made in 162 infants at 1.5, 3, 6, 9 and 12 months. In total, 322 TEE measurements were obtained. The LMS method with maximum penalized likelihood was used to construct the centile reference charts. Centiles were constructed for TEE expressed as MJ/day and also expressed relative to body weight (BW) and fat-free mass (FFM). Results: TEE increased with age and was 1.40, 1.86, 2.64, 3.07 and 3.65 MJ/day at 1.5, 3, 6, 9 and 12 months, respectively; the corresponding standard deviations were 0.43, 0.47, 0.52, 0.66 and 0.88 MJ/day. TEE expressed in MJ/day/kg BW increased from 0.29 to 0.36, and in MJ/day/kg FFM from 0.36 to 0.48. Conclusions: We have presented centile reference charts for TEE expressed as MJ/day and expressed relative to BW and FFM in infants across the first year of life. There was wide variation (biological scatter) in TEE values at all ages. We suggest that these centile charts may be used to assess, and possibly quantify, abnormal energy metabolism in disease states in infants.
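The LMS approach summarises each age by a Box-Cox power (L), a median (M) and a coefficient of variation (S), estimated in this study by maximum penalized likelihood. Once L, M and S are fitted, converting between measurements, z-scores and centiles is a closed-form transformation. The sketch below illustrates that step only; the LMS values in the usage lines are illustrative assumptions, not the fitted values from this study.

```python
import numpy as np
from scipy.stats import norm

def lms_centile(z, L, M, S):
    """Measurement at standard-normal score z, given LMS parameters (Cole-Green)."""
    return M * (1.0 + L * S * z) ** (1.0 / L) if L != 0 else M * np.exp(S * z)

def lms_zscore(x, L, M, S):
    """Standard-normal score of a measurement x, given LMS parameters."""
    return ((x / M) ** L - 1.0) / (L * S) if L != 0 else np.log(x / M) / S

# e.g. the 3rd, 50th and 97th centiles of TEE (MJ/day) at one age,
# using illustrative parameters L=0.6, M=2.6, S=0.20:
for p in (0.03, 0.50, 0.97):
    print(p, round(lms_centile(norm.ppf(p), 0.6, 2.6, 0.20), 2))
```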
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but its sum or product with a scrambling random variable of known distribution is observed. The performance of two likelihood-based estimators is investigated: a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and that its relative performance improves as the responses become more scrambled.
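For concreteness, the multiplicative version of the model can be written as z_i = y_i * s_i, where y_i = x_i'beta + eps_i is the unobserved sensitive response and s_i is drawn from a known scrambling distribution. The sketch below simulates that setup and fits the classical estimator by simulated maximum likelihood; the gamma scrambler with unit mean, sample sizes and optimiser choice are illustrative assumptions, and the paper's Bayesian MCMC estimator is not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n, beta_true, sigma_true = 500, np.array([1.0, 2.0]), 0.5
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ beta_true + sigma_true * rng.normal(size=n)    # sensitive responses (unobserved)
s = rng.gamma(shape=10.0, scale=0.1, size=n)           # scrambler with known distribution, mean 1
z = y * s                                              # only z and X are released

s_draws = rng.gamma(shape=10.0, scale=0.1, size=2000)  # draws for the Monte Carlo integral

def neg_loglik(theta):
    beta, sigma = theta[:-1], np.exp(theta[-1])
    mu = X @ beta
    # f(z_i | x_i) = E_s[ phi((z_i/s - mu_i)/sigma) / (s * sigma) ], approximated by averaging over s_draws
    resid = z[:, None] / s_draws[None, :] - mu[:, None]
    dens = norm.pdf(resid / sigma) / (sigma * s_draws[None, :])
    return -np.sum(np.log(dens.mean(axis=1) + 1e-300))

beta0 = np.linalg.lstsq(X, z, rcond=None)[0]           # E[s] = 1, so OLS on z gives a starting value
start = np.append(beta0, np.log(np.std(z - X @ beta0)))
fit = minimize(neg_loglik, x0=start, method="Nelder-Mead")
beta_hat, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])
```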
Abstract:
Binning and truncation of data are common in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (Biometrics 44(2): 571-578, 1988) for the univariate case is generalized to multivariate measurements. The multivariate solution requires the evaluation of multidimensional integrals over each bin at each iteration of the EM procedure, and a naive implementation can be computationally inefficient. To reduce the computational cost, a number of straightforward numerical techniques are proposed. Results on simulated data indicate that the proposed methods can achieve significant computational gains with no loss in the accuracy of the final parameter estimates. Furthermore, experimental results suggest that with a sufficient number of bins and data points it is possible to estimate the true underlying density almost as well as if the data were not binned. The paper concludes with a brief description of an application of this approach to the diagnosis of iron deficiency anemia, in the context of binned and truncated bivariate measurements of volume and hemoglobin concentration from an individual's red blood cells.
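A univariate sketch of the binned-data EM in the spirit of McLachlan and Jones (1988) is given below. In the multivariate case discussed in the paper, the one-dimensional bin probabilities and truncated-normal moments are replaced by multidimensional integrals over each bin, which is where the computational burden arises. The initialisation, iteration count and toy data are illustrative choices.

```python
import numpy as np
from scipy.stats import norm

def em_binned_gaussian_mixture(edges, counts, K=2, n_iter=200, seed=0):
    """Fit a K-component Gaussian mixture to binned counts.
    edges: finite bin edges, length B+1; counts: observed counts per bin, length B."""
    rng = np.random.default_rng(seed)
    lo, hi = edges[:-1], edges[1:]
    mids = 0.5 * (lo + hi)
    pi = np.full(K, 1.0 / K)
    mu = rng.choice(mids, size=K, replace=False)
    sd = np.full(K, (edges[-1] - edges[0]) / (4.0 * K))
    for _ in range(n_iter):
        # E-step: probability mass of each component in each bin, then responsibilities per bin
        P = np.stack([norm.cdf(hi, m, s) - norm.cdf(lo, m, s)
                      for m, s in zip(mu, sd)], axis=1)              # shape (B, K)
        R = pi * np.clip(P, 1e-300, None)
        R /= R.sum(axis=1, keepdims=True)
        Nk = counts @ R
        # M-step: truncated-normal moments supply the within-bin conditional expectations
        for k in range(K):
            a, b = (lo - mu[k]) / sd[k], (hi - mu[k]) / sd[k]
            Z = np.clip(norm.cdf(b) - norm.cdf(a), 1e-300, None)
            m1 = mu[k] + sd[k] * (norm.pdf(a) - norm.pdf(b)) / Z
            v1 = sd[k] ** 2 * (1 + (a * norm.pdf(a) - b * norm.pdf(b)) / Z
                               - ((norm.pdf(a) - norm.pdf(b)) / Z) ** 2)
            w = counts * R[:, k]
            mu_k = np.sum(w * m1) / Nk[k]
            sd[k] = np.sqrt(np.sum(w * (v1 + (m1 - mu_k) ** 2)) / Nk[k])
            mu[k] = mu_k
        pi = Nk / Nk.sum()
    return pi, mu, sd

# toy usage: bin draws from a two-component mixture into 30 bins
data = np.concatenate([np.random.default_rng(1).normal(0.0, 1.0, 3000),
                       np.random.default_rng(2).normal(4.0, 1.5, 2000)])
edges = np.linspace(data.min(), data.max(), 31)
counts, _ = np.histogram(data, edges)
print(em_binned_gaussian_mixture(edges, counts, K=2))
```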
Abstract:
There has been a resurgence of interest in the mean trace length estimator of Pahl for window sampling of traces. The estimator has been dealt with by Mauldon, and by Zhang and Einstein, in recent publications. The estimator is a very useful one in that it is non-parametric. However, despite some discussion regarding the statistical distribution of the estimator, neither the recent works nor the original work by Pahl provide a rigorous basis for the determination of a confidence interval for the estimator, or of a confidence region for the estimator and the corresponding estimator of trace spatial intensity in the sampling window. This paper shows, by consideration of a simplified version of the problem but without loss of generality, that the estimator is in fact the maximum likelihood estimator (MLE) and that it can be considered essentially unbiased. As the MLE, it possesses the least variance of all estimators, and confidence intervals or regions should therefore be available through application of classical ML theory. It is shown that valid confidence intervals can in fact be determined. The results of the work and the calculations of the confidence intervals are illustrated by example.
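As a generic illustration of the "classical ML theory" step, once an estimator is identified as the MLE an approximate confidence interval follows from the observed information, i.e. the curvature of the log-likelihood at the maximum. The sketch below uses a numerical second derivative and a placeholder binomial log-likelihood; it is not the paper's specific window-sampling derivation.

```python
import numpy as np

def wald_ci(loglik, mle, z=1.96, h=1e-4):
    """Approximate 95% Wald interval from the observed information at the MLE,
    using a central-difference second derivative of the log-likelihood."""
    d2 = (loglik(mle + h) - 2.0 * loglik(mle) + loglik(mle - h)) / h**2
    se = 1.0 / np.sqrt(-d2)
    return mle - z * se, mle + z * se

# toy usage with a binomial log-likelihood as a stand-in for the
# window-sampling likelihood: 37 successes in 120 trials, MLE = 37/120
ll = lambda p: 37 * np.log(p) + 83 * np.log(1 - p)
print(wald_ci(ll, 37 / 120))
```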
Abstract:
In simultaneous analyses of multiple data partitions, the trees relevant when measuring support for a clade are the optimal tree and the best tree lacking the clade (i.e., the most reasonable alternative). The parsimony-based method of partitioned branch support (PBS) forces each data set to arbitrate between the two relevant trees. This value is the amount each data set contributes to clade support in the combined analysis, and can be very different from the support apparent in separate analyses. The approach used in PBS can also be employed in a likelihood framework: a simultaneous analysis of all data retrieves the maximum likelihood tree, and the best tree without the clade of interest is also found. Each data set is then fitted to the two trees and the log-likelihood difference calculated, giving the partitioned likelihood support (PLS) for each data set. These calculations can be performed regardless of the complexity of the ML model adopted. The significance of PLS can be evaluated using a variety of resampling methods, such as the Kishino-Hasegawa test, the Shimodaira-Hasegawa test, or likelihood weights, although the appropriateness and assumptions of these tests remain debated.
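Computing PLS itself is a small bookkeeping step once each partition's log-likelihood has been evaluated on the two fixed topologies (the combined-analysis ML tree and the best tree lacking the clade) in a phylogenetics package. The sketch below assumes those per-partition log-likelihoods are already available; the partition names and numerical values are hypothetical.

```python
def partitioned_likelihood_support(lnL_ml_tree, lnL_best_without_clade):
    """Both arguments map partition name -> log-likelihood of that partition
    on the given fixed topology; PLS is the per-partition difference."""
    return {part: lnL_ml_tree[part] - lnL_best_without_clade[part]
            for part in lnL_ml_tree}

# hypothetical per-partition log-likelihoods on the two trees
pls = partitioned_likelihood_support(
    {"morphology": -1523.4, "COI": -8891.2},
    {"morphology": -1521.9, "COI": -8902.7})
# a negative value means that partition actually favours the alternative tree
print(pls)
```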
Abstract:
We present a novel method, called the transform likelihood ratio (TLR) method, for estimation of rare-event probabilities with heavy-tailed distributions. Via a simple transformation (change of variables) technique, the TLR method reduces the original rare-event probability estimation with heavy-tailed distributions to an equivalent one with light-tailed distributions. Once this transformation has been established, we estimate the rare-event probability via importance sampling, using either the classical exponential change of measure or the standard likelihood ratio change of measure. In the latter case the importance sampling distribution is chosen from the same parametric family as the transformed distribution. We estimate the optimal parameter vector of the importance sampling distribution using the cross-entropy method. We prove the polynomial complexity of the TLR method for certain heavy-tailed models and demonstrate numerically its high efficiency for various heavy-tailed models previously thought to be intractable. We also show that the TLR method can be viewed as a universal tool, in the sense that it not only provides a unified view of heavy-tailed simulation but can also be used efficiently in simulation with light-tailed distributions. We present extensive simulation results which support the efficiency of the TLR method.
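To make the idea concrete, the sketch below estimates a tail probability P(X_1 + ... + X_n > gamma) for Pareto-distributed terms: the change of variables Y = alpha*log(1 + X) maps each Pareto variable to a unit exponential, importance sampling is then performed within the exponential family, and the tilting parameter is tuned by a few cross-entropy iterations. The Pareto model, sample sizes and adaptive-level rule are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

def tlr_pareto_sum_tail(alpha=1.5, n=10, gamma=1000.0,
                        n_pilot=20000, n_final=200000, seed=0):
    rng = np.random.default_rng(seed)
    # If X has survival function (1+x)^(-alpha), then Y = alpha*log(1+X) ~ Exp(1)
    # and X = exp(Y/alpha) - 1, so the sum can be rewritten in Y-space:
    def s_of_y(Y):
        return np.expm1(Y / alpha).sum(axis=1)

    # cross-entropy pilot iterations: tune the mean v of the Exp(mean=v)
    # importance-sampling density, starting from the nominal Exp(1)
    v = 1.0
    for _ in range(5):
        Y = rng.exponential(v, size=(n_pilot, n))
        W = v**n * np.exp(-(1.0 - 1.0 / v) * Y.sum(axis=1))   # LR of Exp(1) w.r.t. Exp(mean=v)
        S = s_of_y(Y)
        level = min(gamma, np.quantile(S, 0.99))              # adaptive level, capped at gamma
        I = S >= level
        if not I.any():
            break
        v = np.sum(W[I] * Y[I].mean(axis=1)) / np.sum(W[I])   # CE update for the exponential mean

    # final importance-sampling run at the tuned parameter
    Y = rng.exponential(v, size=(n_final, n))
    W = v**n * np.exp(-(1.0 - 1.0 / v) * Y.sum(axis=1))
    H = (s_of_y(Y) > gamma) * W
    est = H.mean()
    rel_err = H.std() / (est * np.sqrt(n_final)) if est > 0 else np.inf
    return est, rel_err

print(tlr_pareto_sum_tail())
```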
Abstract:
In diagnosis and prognosis, we should avoid intuitive “guesstimates” and seek a validated numerical aid.
Abstract:
In cell lifespan studies, the exponential nature of cell survival curves is often interpreted as showing that the rate of death is independent of the age of the cells within the population. Here we present an alternative model in which cells that die are replaced, and the age and lifespan of the population pool are monitored until a steady state is reached. In our model, newly generated individual cells are given a determined lifespan drawn from one of a number of known distributions, including the lognormal, which is frequently found in nature. For lognormal lifespans, the analytic steady-state survival curve obtained can be well fit by a single or double exponential, depending on the mean and standard deviation. Thus, experimental evidence for exponential lifespans of one and/or two populations cannot be taken as definitive evidence for time and age independence of cell survival. A related model for a dividing population in steady state is also developed. We propose that the common adoption of age-independent, constant rates of change in biological modelling may be responsible for significant errors, both of interpretation and of mathematical deduction. We suggest that additional mathematical and experimental methods must be used to resolve the relationship between time and behavioural changes by cells that are predominantly unsynchronized.
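The steady-state survival curve in this setting can also be generated directly by simulation using a standard renewal-theory shortcut: at stationarity, a randomly observed cell has a length-biased total lifespan and a uniformly distributed age, so its residual lifespan is easy to sample. The sketch below uses that shortcut rather than the paper's analytic derivation, with illustrative lognormal parameters, and then fits single and double exponentials to the resulting survival curve.

```python
import numpy as np
from scipy.optimize import curve_fit

def steady_state_survival(mu=2.0, sigma=0.8, n=200000, seed=0):
    """Survival curve of the cells present at steady state, when every dying
    cell is immediately replaced and individual lifespans are lognormal(mu, sigma)."""
    rng = np.random.default_rng(seed)
    pool = rng.lognormal(mu, sigma, size=10 * n)
    # length-biased resampling: at stationarity, long-lived cells are
    # over-represented among the cells currently alive
    L = rng.choice(pool, size=n, p=pool / pool.sum())
    remaining = rng.uniform(0.0, L)            # residual lifespan is uniform given L
    t = np.linspace(0.0, np.quantile(remaining, 0.99), 100)
    S = np.array([(remaining > ti).mean() for ti in t])
    return t, S

t, S = steady_state_survival()
single = lambda t, k: np.exp(-k * t)
double = lambda t, a, k1, k2: a * np.exp(-k1 * t) + (1 - a) * np.exp(-k2 * t)
k_hat, _ = curve_fit(single, t, S, p0=[0.1])
p_hat, _ = curve_fit(double, t, S, p0=[0.5, 0.05, 0.2], maxfev=20000)
print("single-exponential rate:", k_hat, "; double-exponential fit:", p_hat)
```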
Abstract:
We present a novel maximum-likelihood (ML) lattice-decoding algorithm for noncoherent block detection of QAM signals. The computational complexity is polynomial in the block length, making the algorithm feasible for implementation, in contrast to the exhaustive-search ML detector. The algorithm works by enumerating the nearest-neighbor regions of a plane defined by the received vector, in a manner conceptually similar to sphere decoding. Simulations show that the new algorithm significantly outperforms existing approaches.
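The exhaustive-search baseline that the lattice algorithm is designed to avoid can be written down compactly: under the common noncoherent model y = h*x + noise with h an unknown complex gain, the ML (GLRT) metric is |y^H x|^2 / ||x||^2, maximised over all candidate blocks. The sketch below implements that brute-force search for a small 16-QAM block; it is a generic illustration, not the paper's polynomial-time enumeration, and its cost grows exponentially with the block length.

```python
import numpy as np
from itertools import product

def noncoherent_ml_exhaustive(y, constellation, T):
    """Brute-force noncoherent block detector.  With an unknown complex gain h,
    the GLRT/ML metric is |y^H x|^2 / ||x||^2, maximised over every length-T
    block of constellation symbols (|constellation|^T candidates)."""
    best, best_metric = None, -np.inf
    for cand in product(constellation, repeat=T):
        x = np.asarray(cand)
        m = abs(np.vdot(y, x)) ** 2 / np.real(np.vdot(x, x))
        if m > best_metric:
            best, best_metric = x, m
    # note: the estimate is unique only up to rotations that map the
    # constellation onto itself (handled in practice by differential encoding)
    return best

# toy usage: 16-QAM, block length 4 (already 16**4 = 65,536 candidates)
qam16 = np.array([a + 1j * b for a in (-3, -1, 1, 3) for b in (-3, -1, 1, 3)])
rng = np.random.default_rng(1)
x_true = rng.choice(qam16, size=4)
y = np.exp(1j * 0.7) * x_true + 0.1 * (rng.normal(size=4) + 1j * rng.normal(size=4))
x_hat = noncoherent_ml_exhaustive(y, qam16, 4)
```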