917 results for Gaussian prior variance
Abstract:
In this work we propose and analyze nonlinear elliptical models for longitudinal data, which represent an alternative to Gaussian models in cases of heavy tails, for instance. The elliptical distributions may help to control the influence of the observations on the parameter estimates by naturally attributing different weights to each case. We consider random effects to introduce the within-group correlation and work with the marginal model without requiring numerical integration. An iterative algorithm to obtain maximum likelihood estimates for the parameters is presented, as well as diagnostic results based on residual distances and local influence [Cook, D., 1986. Assessment of local influence. Journal of the Royal Statistical Society, Series B 48 (2), 133-169; Cook, D., 1987. Influence assessment. Journal of Applied Statistics 14 (2), 117-131; Escobar, L.A., Meeker, W.Q., 1992. Assessing influence in regression analysis with censored data. Biometrics 48, 507-528]. As a numerical illustration, we apply the obtained results to a kinetics longitudinal data set presented in [Vonesh, E.F., Carter, R.L., 1992. Mixed-effects nonlinear regression for unbalanced repeated measures. Biometrics 48, 1-17], which was analyzed under the assumption of normality. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Subtle quantum properties offer exciting new prospects in optical communications. For example, quantum entanglement enables the secure exchange of cryptographic keys(1) and the distribution of quantum information by teleportation(2,3). Entangled bright beams of light are increasingly appealing for such tasks, because they enable the use of well-established classical communications techniques(4). However, quantum resources are fragile and are subject to decoherence by interaction with the environment. The unavoidable losses in the communication channel can lead to a complete destruction of entanglement(5-8), limiting the application of these states to quantum-communication protocols. We investigate the conditions under which this phenomenon takes place for the simplest case of two light beams, and analyse characteristics of states which are robust against losses. Our study sheds new light on the intriguing properties of quantum entanglement and how they may be harnessed for future applications.
Abstract:
We study and compare the information loss of a large class of Gaussian bipartite systems. It includes the usual Caldeira-Leggett-type model as well as Anosov models (parametric oscillators, the inverted oscillator environment, etc.), which exhibit instability, one of the most important characteristics of chaotic systems. We establish a rigorous connection between the quantum Lyapunov exponents and coherence loss, and show that in the case of unstable environments coherence loss is completely determined by the upper quantum Lyapunov exponent, a behavior which is more universal than that of the Caldeira-Leggett-type model.
Abstract:
The main object of this paper is to discuss the Bayes estimation of the regression coefficients in the elliptically distributed simple regression model with measurement errors. The posterior distribution for the line parameters is obtained in closed form under the following assumptions: the ratio of the error variances is known, an informative prior distribution is placed on the error variance, and non-informative prior distributions are placed on the regression coefficients and on the incidental parameters. We prove that the posterior distribution of the regression coefficients has at most two real modes. Situations with a single mode are more likely than those with two modes, especially in large samples. The precision of the modal estimators is studied by deriving the Hessian matrix, which, although complicated, can be computed numerically. The posterior mean is estimated by using the Gibbs sampling algorithm and approximations by normal distributions. The results are applied to a real data set and connections with results in the literature are reported. (C) 2011 Elsevier B.V. All rights reserved.
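The posterior-mean computation via Gibbs sampling can be illustrated with a generic sketch. The example below is not the paper's measurement-error model; it draws from a standard bivariate normal (the correlation rho is an assumed illustrative value) by alternating the two full conditionals, then estimates the posterior mean by the sample average:

```python
# Illustrative Gibbs sampler: alternate draws from the full conditionals
# of a bivariate normal N(0, [[1, rho], [rho, 1]]), then average the
# retained draws to approximate the posterior mean (zero here).
import numpy as np

def gibbs_bivariate_normal(rho, n_iter=20000, burn_in=2000, seed=0):
    """Sample (x, y) from a standard bivariate normal via Gibbs updates."""
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    draws = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)  # conditional standard deviation
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.normal(rho * x, sd)  # y | x ~ N(rho*x, 1 - rho^2)
        draws[t] = x, y
    return draws[burn_in:]

samples = gibbs_bivariate_normal(rho=0.8)
post_mean = samples.mean(axis=0)  # Monte Carlo estimate of the posterior mean
```

The same alternating-conditional pattern carries over to any model whose full conditionals are available in closed form, as in the paper's setting.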
Abstract:
Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
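For a single null variance component, the asymptotic null distribution in question is the 50:50 mixture 0.5*chi2(0) + 0.5*chi2(1); the point mass at zero means the p-value is simply half the naive chi2(1) tail probability. A minimal sketch of this computation:

```python
# p-value of the LRT statistic under the mixture 0.5*chi2_0 + 0.5*chi2_1,
# the asymptotic null distribution for testing one variance component
# on the boundary of the parameter space.
from scipy.stats import chi2

def mixture_pvalue(lrt_stat):
    """p-value under 0.5*chi2_0 + 0.5*chi2_1 for H0: sigma^2 = 0."""
    if lrt_stat <= 0:
        return 1.0  # the chi2_0 point mass: a zero statistic is not evidence
    return 0.5 * chi2.sf(lrt_stat, df=1)

# Using the naive chi2_1 reference doubles the p-value, i.e. the naive
# test is conservative:
naive = chi2.sf(3.84, df=1)   # roughly 0.05
mixed = mixture_pvalue(3.84)  # roughly 0.025
```

For two random effects the mixture involves chi2(1) and chi2(2) components in the same way.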
Abstract:
Objectives: Human papillomavirus (HPV) infection is a major risk factor for cervical disease. Using baseline data from the HIV-infected cohort of the Evandro Chagas Clinical Research Institute at Fiocruz, Rio de Janeiro, Brazil, factors associated with an increased prevalence of HPV were assessed. Methods: Samples from 634 HIV-infected women were tested for the presence of HPV infection using Hybrid Capture II and polymerase chain reaction. Prevalence ratios (PR) were estimated using Poisson regression analysis with robust variance. Results: The overall prevalence of HPV infection was 48%, of which 94% were infected with a high-risk HPV. In multivariate analysis, factors independently associated with infection with a high-risk HPV type were: younger age (<30 years of age; PR 1.5, 95% confidence interval (CI) 1.1-2.1), current or prior drug use (PR 1.3, 95% CI 1.0-1.6), self-reported history of HPV infection (PR 1.2, 95% CI 0.96-1.6), condom use in the last sexual intercourse (PR 1.3, 95% CI 1.1-1.7), and nadir CD4+ T-cell count <100 cells/mm(3) (PR 1.6, 95% CI 1.2-2.1). Conclusions: The estimated prevalence of high-risk HPV infection among HIV-infected women from Rio de Janeiro, Brazil, was high. Close monitoring of HPV-related effects is warranted in all HIV-infected women, in particular those of younger age and advanced immunosuppression. (C) 2008 International Society for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
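The estimation strategy (Poisson regression with a robust sandwich variance, so that exp(beta) is a prevalence ratio with valid confidence intervals for a binary outcome) can be sketched on simulated data; the cohort data themselves are not reproduced here, and all data-generating numbers below are illustrative:

```python
# Poisson regression fitted by Newton-Raphson, with a robust (HC0
# sandwich) variance for the coefficients.  exp(beta) estimates the
# prevalence ratio even though the outcome is binary, not Poisson.
import numpy as np

def poisson_pr(X, y, n_iter=50):
    """Fit log E[y] = X @ beta; return beta and robust (sandwich) SEs."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        W = X.T * mu                        # X' diag(mu)
        beta += np.linalg.solve(W @ X, X.T @ (y - mu))  # Newton step
    mu = np.exp(X @ beta)
    bread = np.linalg.inv(X.T * mu @ X)     # inverse model-based information
    r = y - mu
    meat = (X.T * r**2) @ X                 # sum_i r_i^2 x_i x_i'
    robust_se = np.sqrt(np.diag(bread @ meat @ bread))
    return beta, robust_se

# Hypothetical data: binary exposure with a true prevalence ratio of 1.5.
rng = np.random.default_rng(1)
n = 5000
exposed = rng.integers(0, 2, n)
y = rng.binomial(1, np.where(exposed == 1, 0.6, 0.4))
X = np.column_stack([np.ones(n), exposed])
beta, se = poisson_pr(X, y)
pr = np.exp(beta[1])  # estimated prevalence ratio, near 1.5
```

The robust variance is what makes the Poisson working model valid here: the model-based variance alone would be wrong for a binary outcome.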
Abstract:
The modeling and analysis of lifetime data is an important aspect of statistical work in a wide variety of scientific and technological fields. Good (1953) introduced a probability distribution which is commonly used in the analysis of lifetime data. For the first time, based on this distribution, we propose the so-called exponentiated generalized inverse Gaussian distribution, which extends the exponentiated standard gamma distribution (Nadarajah and Kotz, 2006). Various structural properties of the new distribution are derived, including expansions for its moments, moment generating function, moments of the order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters. The usefulness of the new model is illustrated by means of a real data set. (c) 2010 Elsevier B.V. All rights reserved.
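The exponentiated-G construction behind the proposed distribution can be sketched numerically: if F is the CDF of a baseline generalized inverse Gaussian law with density f, the exponentiated density is alpha * f(x) * F(x)^(alpha - 1), which is a proper density for any alpha > 0. A sketch using scipy's `geninvgauss` as the baseline (the parameter values are illustrative, not from the paper):

```python
# Exponentiated generalized inverse Gaussian density built from the
# exponentiated-G construction: alpha * f(x) * F(x)**(alpha - 1),
# with f and F from scipy's baseline GIG distribution.
import numpy as np
from scipy.integrate import quad
from scipy.stats import geninvgauss

def egig_pdf(x, alpha, p, b):
    """Exponentiated-GIG density with extra shape parameter alpha > 0."""
    return alpha * geninvgauss.pdf(x, p, b) * geninvgauss.cdf(x, p, b) ** (alpha - 1.0)

# Sanity check: the construction integrates to one for any alpha > 0,
# since the integrand is the derivative of F(x)**alpha.
mass, _ = quad(egig_pdf, 0, np.inf, args=(2.5, 1.0, 1.0))
```

Moments and order-statistic moments follow from expansions of F(x)^(alpha - 1), which is how such structural properties are typically derived for exponentiated families.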
Abstract:
Purpose: The aim of this in situ double-blind randomised crossover study was to investigate the effect of a calcium (Ca) pre-rinse on the composition of plaque and on enamel prior to the use of a fluoride (F) dentifrice. Materials and Methods: During the four phases (14 days each) of this study, 10 volunteers wore dental appliances containing two healthy bovine enamel blocks. A fresh solution containing 20% weight/volume (w/v) sucrose was dripped on the enamel blocks ex vivo for 5 min three times a day. Subsequently, the appliances were replaced in the mouth, and the volunteers rinsed their mouth with 10 mL of a Ca (150 mmol/L) or a placebo rinse (1 min). In sequence, a slurry (1:3 w/v) of F (1030 ppm) or placebo dentifrice was dripped onto the blocks ex vivo for 1 min. During this time, the volunteers brushed their teeth with the respective dentifrice. The appliances were replaced in the mouth, and the volunteers rinsed their mouth with water. The plaque formed on the blocks was analysed for F and Ca. Enamel demineralisation and the incorporation of F into enamel were evaluated by cross-sectional microhardness and alkali-soluble F analysis, respectively. Data were tested using analysis of variance (P < 0.05). Results: The Ca pre-rinse prior to the use of the F dentifrice led to three- and sixfold increases in the plaque F and Ca concentrations, respectively. However, it had no additive effect on the F content of the enamel or on enamel demineralisation, in comparison with the use of the F dentifrice alone. Conclusions: A Ca lactate rinse used prior to the F dentifrice was able to change the mineral content of the plaque, but it was unable to prevent enamel demineralisation.
Abstract:
In this thesis, a new algorithm is proposed to segment the foreground of the fingerprint from the image under consideration. The algorithm uses three features: mean, variance and coherence. Based on these features, a rule system is built to help the algorithm segment the image efficiently. In addition, the proposed algorithm combines split-and-merge with a modified Otsu method. Enhancement techniques such as Gaussian filtering and histogram equalization are applied to improve the quality of the image. Finally, a post-processing technique is implemented to counter undesirable effects in the segmented image. Fingerprint recognition is one of the oldest recognition systems in biometrics. Everyone has a unique and unchangeable fingerprint. Based on this uniqueness and distinctness, fingerprint identification has been used in many applications for a long period. A fingerprint image is a pattern which consists of two regions, foreground and background. The foreground contains all the important information needed in automatic fingerprint recognition systems, whereas the background is a noisy region that contributes to the extraction of false minutiae. To avoid the extraction of false minutiae, several steps should be followed, such as preprocessing and enhancement. One of these steps is the transformation of the fingerprint image from a gray-scale image to a black-and-white image; this transformation is called segmentation or binarization. The aim of fingerprint segmentation is to separate the foreground from the background. Due to the nature of fingerprint images, segmentation is an important and challenging task. The proposed algorithm is applied to the FVC2000 database. Manual examination by human experts shows that the proposed algorithm provides efficient segmentation results, as demonstrated in diverse experiments.
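The variance feature can be illustrated with a simplified blockwise sketch (the thesis's coherence feature, rule system, split-and-merge and post-processing steps are omitted): ridge regions have high local gray-level variance, while the smooth background does not, so thresholding the per-block variance already separates the two.

```python
# Blockwise variance-based foreground segmentation: split the image into
# square blocks, compute the variance of each block, and flag blocks
# whose variance exceeds a threshold as foreground.
import numpy as np

def segment_by_variance(img, block=8, var_thresh=0.01):
    """Return a boolean foreground mask with one flag per block."""
    h, w = img.shape
    h, w = h - h % block, w - w % block   # crop to a multiple of the block size
    blocks = img[:h, :w].reshape(h // block, block, w // block, block)
    var = blocks.var(axis=(1, 3))         # per-block gray-level variance
    return var > var_thresh

# Synthetic example: high-variance stripes ("ridges") on a flat background.
img = np.zeros((64, 64))
img[:, 32:] = np.tile([0.0, 1.0], (64, 16))
mask = segment_by_variance(img)           # right-half blocks are foreground
```

In the thesis, the mean and coherence features are combined with the variance by the rule system before the final decision.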
Abstract:
The genetic improvement in litter size in pigs has been substantial during the last 10-15 years. The number of teats on the sow must increase as well to meet the needs of the piglets, because each piglet needs access to its own teat. We applied a genetic heterogeneity model to teat number in sows, and estimated a medium-high heritability for teat number (0.5), but a low heritability for residual variance (0.05), indicating that selection for reduced variance might have a very limited effect. A positive correlation (0.8) between additive genetic breeding values for the mean and for the variance was found, but because of the low heritability for residual variance, the variance will increase very slowly with the mean.
Abstract:
This thesis develops and evaluates statistical methods for different types of genetic analyses, including quantitative trait loci (QTL) analysis, genome-wide association study (GWAS), and genomic evaluation. The main contribution of the thesis is to provide novel insights into modeling genetic variance, especially via random effects models. In variance component QTL analysis, a full likelihood model accounting for uncertainty in the identity-by-descent (IBD) matrix was developed. It was found to be able to correctly adjust the bias in genetic variance component estimation and gain power in QTL mapping in terms of precision. Double hierarchical generalized linear models, and a non-iterative simplified version, were implemented and applied to fit data of an entire genome. These whole genome models were shown to have good performance in both QTL mapping and genomic prediction. A re-analysis of a publicly available GWAS data set identified significant loci in Arabidopsis that control phenotypic variance instead of mean, which validated the idea of variance-controlling genes. The work in the thesis is accompanied by R packages available online, including a general statistical tool for fitting random effects models (hglm), an efficient generalized ridge regression for high-dimensional data (bigRR), a double-layer mixed model for genomic data analysis (iQTL), a stochastic IBD matrix calculator (MCIBD), a computational interface for QTL mapping (qtl.outbred), and a GWAS analysis tool for mapping variance-controlling loci (vGWAS).
Abstract:
Background: Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental, or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of the resulting estimates of genetic parameters, and to develop and evaluate the use of Akaike's information criterion based on h-likelihood to select the best fitting model. Methods: We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and in environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity, using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as a model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of the estimated genetic parameters. Results: Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for the genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically no bias was observed for estimates of any of the parameters.
Using Akaike's information criterion, the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. Conclusion: The algorithm and model selection criterion presented here can contribute to a better understanding of the genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires, each with 100 offspring.
Abstract:
BACKGROUND: Canalization is defined as the stability of a genotype against minor variations in both environment and genetics. Genetic variation in degree of canalization causes heterogeneity of within-family variance. The aims of this study are twofold: (1) quantify genetic heterogeneity of (within-family) residual variance in Atlantic salmon and (2) test whether the observed heterogeneity of (within-family) residual variance can be explained by simple scaling effects. RESULTS: Analysis of body weight in Atlantic salmon using a double hierarchical generalized linear model (DHGLM) revealed substantial heterogeneity of within-family variance. The 95% prediction interval for within-family variance ranged from ~0.4 to 1.2 kg2, implying that the within-family variance of the most extreme high families is expected to be approximately three times larger than the extreme low families. For cross-sectional data, DHGLM with an animal mean sub-model resulted in severe bias, while a corresponding sire-dam model was appropriate. Heterogeneity of variance was not sensitive to Box-Cox transformations of phenotypes, which implies that heterogeneity of variance exists beyond what would be expected from simple scaling effects. CONCLUSIONS: Substantial heterogeneity of within-family variance was found for body weight in Atlantic salmon. A tendency towards higher variance with higher means (scaling effects) was observed, but heterogeneity of within-family variance existed beyond what could be explained by simple scaling effects. For cross-sectional data, using the animal mean sub-model in the DHGLM resulted in biased estimates of variance components, which differed substantially both from a standard linear mean animal model and a sire-dam DHGLM model. Although genetic differences in canalization were observed, selection for increased canalization is difficult, because there is limited individual information for the variance sub-model, especially when based on cross-sectional data. 
Furthermore, potential macro-environmental changes (diet, climatic region, etc.) may make genetic heterogeneity of variance a less stable trait over time and space.
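What a "simple scaling effect" implies can be checked numerically: if families differ only by a multiplicative factor, the within-family standard deviation is proportional to the family mean, so log variance regresses on log mean with slope about 2; heterogeneity beyond that slope is what the study detects. A hedged simulation sketch (all parameters are illustrative, not the salmon data):

```python
# Under pure scaling, phenotype_ij = scale_i * base_ij, so
# within-family variance is proportional to the squared family mean:
# log(var) vs log(mean) has slope ~2.
import numpy as np

rng = np.random.default_rng(0)
n_fam, n_off = 200, 400
scale = rng.uniform(0.5, 2.0, n_fam)   # family-specific scale factor
pheno = scale[:, None] * (1.0 + 0.2 * rng.standard_normal((n_fam, n_off)))

log_mean = np.log(pheno.mean(axis=1))
log_var = np.log(pheno.var(axis=1))
slope = np.polyfit(log_mean, log_var, 1)[0]  # close to 2 under pure scaling
```

A Box-Cox (e.g. log) transformation removes exactly this kind of mean-variance relationship, which is why heterogeneity that survives the transformation, as in the study, points to genuine variance-controlling genetic effects.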