893 results for likelihood to publication


Relevance: 30.00%

Abstract:

This paper considers an extension of the skew-normal model through the inclusion of an additional parameter, which can lead to both uni- and bimodal distributions. The paper presents various basic properties of this family of distributions and provides a stochastic representation that is useful both for obtaining theoretical properties and for simulating from the distribution. Moreover, the singularity of the Fisher information matrix is investigated, and maximum likelihood estimation for a random sample with no covariates is considered. The main motivation is to avoid using mixtures to fit bimodal data, as mixtures are well known to be difficult to work with, particularly because of identifiability problems. Data-based illustrations show that such a model can be useful. Copyright (C) 2009 John Wiley & Sons, Ltd.
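The stochastic representation mentioned above builds on the classical skew-normal one. As a minimal illustration, the sketch below simulates from the baseline one-parameter skew-normal via the standard representation (Azzalini); it does not reproduce the paper's two-parameter bimodal extension, and the function name is ours.

```python
import numpy as np

def rvs_skew_normal(alpha, size, seed=None):
    # Classical stochastic representation of the skew-normal:
    # Z = delta*|U0| + sqrt(1 - delta^2)*U1, with U0, U1 iid N(0, 1)
    # and delta = alpha / sqrt(1 + alpha^2).
    rng = np.random.default_rng(seed)
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = rng.standard_normal(size)
    u1 = rng.standard_normal(size)
    return delta * np.abs(u0) + np.sqrt(1.0 - delta**2) * u1

# The mean of SN(alpha) is delta*sqrt(2/pi); for alpha = 5 the sample
# mean of a large draw should sit close to that value (about 0.78).
z = rvs_skew_normal(alpha=5.0, size=200_000, seed=42)
```

This representation is also the usual route for deriving moments of the skew-normal, since |U0| is half-normal with known moments.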

Relevance: 30.00%

Abstract:

The Birnbaum-Saunders regression model is becoming increasingly popular in lifetime analyses and reliability studies. In this model, the signed likelihood ratio statistic provides the basis for hypothesis testing and for constructing confidence limits for a single parameter of interest. We focus on the small-sample case, where the standard normal distribution gives a poor approximation to the true distribution of the statistic. We derive three adjusted signed likelihood ratio statistics that lead to very accurate inference even for very small samples. Two empirical applications are presented. (C) 2010 Elsevier B.V. All rights reserved.
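The signed likelihood ratio statistic referred to here is the signed square root of the usual LR statistic, compared against N(0, 1). A minimal sketch for the textbook case of a normal mean with unknown variance; this is a generic illustration only, not the Birnbaum-Saunders-specific adjusted statistics the paper derives.

```python
import numpy as np
from scipy import stats

def signed_lr_normal_mean(x, mu0):
    # Signed likelihood-ratio statistic for H0: mu = mu0 in a
    # N(mu, sigma^2) model with sigma^2 unknown.
    n = len(x)
    xbar = x.mean()
    s2_hat = x.var()                 # MLE of sigma^2 under the full model
    s2_0 = ((x - mu0) ** 2).mean()   # MLE of sigma^2 under H0
    lr = n * (np.log(s2_0) - np.log(s2_hat))   # 2*(l_hat - l_0) >= 0
    return np.sign(xbar - mu0) * np.sqrt(lr)

rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=30)
r = signed_lr_normal_mean(x, mu0=0.0)
p = 2 * stats.norm.sf(abs(r))   # two-sided p-value from the N(0, 1) reference
```

The small-sample adjustments in the paper replace r by a modified statistic whose null distribution is closer to N(0, 1) than that of r itself.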

Relevance: 30.00%

Abstract:

We consider the issue of performing accurate small-sample likelihood-based inference in beta regression models, which are useful for modelling continuous proportions that depend on independent variables. We derive small-sample adjustments to the likelihood ratio statistic in this class of models. The adjusted statistics can be easily implemented in standard statistical software. We present Monte Carlo simulations showing that inference based on the adjusted statistics we propose is much more reliable than that based on the usual likelihood ratio statistic. A real data example is presented.
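Small-sample adjustments of this kind are often applied as a multiplicative correction to the LR statistic before it is referred to its chi-square limit. The sketch below shows the generic Bartlett-type mechanics under an assumed correction factor `c`; the paper's analytic adjustments for beta regression are model-specific and are not reproduced here.

```python
from scipy import stats

def bartlett_corrected_pvalue(lr_stat, df, c):
    # Generic Bartlett correction: divide the LR statistic by the
    # factor c (approximately E[LR]/df under H0) before comparing it
    # to the chi-square reference distribution. The value of c is
    # model-specific and assumed to be supplied by the analyst.
    lr_adj = lr_stat / c
    return lr_adj, stats.chi2.sf(lr_adj, df)

# Example: a raw LR of 4.2 on 1 df with a hypothetical factor c = 1.15;
# the correction shrinks the statistic and enlarges the p-value.
lr_adj, p = bartlett_corrected_pvalue(4.2, df=1, c=1.15)
```

Since c > 1 in typical small samples, the correction counteracts the liberal behaviour of the unadjusted test.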

Relevance: 30.00%

Abstract:

The Birnbaum-Saunders regression model is commonly used in reliability studies. We address the issue of performing inference in this class of models when the number of observations is small. Our simulation results suggest that the likelihood ratio test tends to be liberal when the sample size is small. We obtain a correction factor which reduces the size distortion of the test. Also, we consider a parametric bootstrap scheme to obtain improved critical values and improved p-values for the likelihood ratio test. The numerical results show that the modified tests are more reliable in finite samples than the usual likelihood ratio test. We also present an empirical application. (C) 2009 Elsevier B.V. All rights reserved.
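The parametric bootstrap scheme described above can be sketched generically: fit the null model, simulate replicates from it, recompute the LR statistic on each replicate, and compare with the observed value. The callables `simulate_null` and `fit_lr` below are placeholders of this sketch, not an API from the paper; the toy example tests a normal mean with known variance.

```python
import numpy as np

def bootstrap_lrt_pvalue(lr_obs, simulate_null, fit_lr, n_boot=999, rng=None):
    # Parametric-bootstrap p-value for a likelihood-ratio test:
    # simulate data under the fitted null model, recompute the LR
    # statistic on each replicate, and count how often it reaches
    # the observed value (with the standard +1 continuity adjustment).
    rng = np.random.default_rng(rng)
    lr_boot = np.array([fit_lr(simulate_null(rng)) for _ in range(n_boot)])
    return (1 + np.sum(lr_boot >= lr_obs)) / (n_boot + 1)

# Toy example: N(mu, 1) data, H0: mu = 0, for which LR = n * xbar^2.
n = 25
x = np.random.default_rng(1).normal(0.3, 1.0, n)
fit_lr = lambda d: len(d) * d.mean() ** 2
simulate_null = lambda rng: rng.normal(0.0, 1.0, n)
p = bootstrap_lrt_pvalue(fit_lr(x), simulate_null, fit_lr, rng=2)
```

In the paper's setting the same recipe applies with the Birnbaum-Saunders null fit in place of the toy simulator.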

Relevance: 30.00%

Abstract:

In this paper we address the issue of performing accurate testing inference on a scalar parameter of interest in structural errors-in-variables models. The error terms are allowed to follow a multivariate distribution in the class of elliptical distributions, which has the multivariate normal distribution as a special case. We derive a modified signed likelihood ratio statistic that follows a standard normal distribution with a high degree of accuracy. Our Monte Carlo results show that the modified test is much less size-distorted than its unmodified counterpart. An application is presented.

Relevance: 30.00%

Abstract:

Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
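For a single null variance component, the mixture in question is the 50:50 combination of chi-square(0) and chi-square(1); since chi-square(0) is a point mass at zero, the p-value is half the chi-square(1) tail. A minimal sketch of that boundary p-value (the two-random-effects case uses a chi-square(1)/chi-square(2) mixture and is not shown):

```python
from scipy import stats

def pvalue_one_variance_component(lr_stat):
    # P-value for testing a single null variance component using the
    # 50:50 mixture of chi-square(0) and chi-square(1) (Stram & Lee;
    # Self & Liang). The chi-square(0) component is a point mass at
    # zero, so it contributes nothing to the tail for lr_stat > 0.
    if lr_stat <= 0:
        return 1.0
    return 0.5 * stats.chi2.sf(lr_stat, 1)

# Halving the tail makes the test less conservative than the naive
# chi-square(1) reference.
p = pvalue_one_variance_component(3.84)
```

Using the plain chi-square(1) reference instead would double this p-value and make the test conservative, which is precisely the boundary issue the paper addresses.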

Relevance: 30.00%

Abstract:

We discuss the connection between information and copula theories by showing that a copula can be employed to decompose the information content of a multivariate distribution into marginal and dependence components, with the latter quantified by the mutual information. We define the information excess as a measure of deviation from a maximum-entropy distribution. The idea of marginal invariant dependence measures is also discussed and used to show that empirical linear correlation underestimates the amplitude of the actual correlation in the case of non-Gaussian marginals. The mutual information is shown to provide an upper bound for the asymptotic empirical log-likelihood of a copula. An analytical expression for the information excess of T-copulas is provided, allowing for simple model identification within this family. We illustrate the framework in a financial data set. Copyright (C) EPLA, 2009
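For the Gaussian copula the dependence component has a closed form: the mutual information equals -(1/2) ln(1 - rho^2) nats. The helper below computes this special case only; general copulas require integrating the copula density, and the T-copula expressions are given in the paper.

```python
import numpy as np

def gaussian_mutual_information(rho):
    # Mutual information (in nats) of a bivariate Gaussian copula
    # with correlation rho: I = -0.5 * ln(1 - rho^2). This is the
    # "dependence" term in the marginal/dependence decomposition of
    # the information content of a bivariate distribution.
    return -0.5 * np.log(1.0 - rho**2)

# I is zero at independence and grows without bound as |rho| -> 1.
```

Because the copula is invariant under monotone transformations of the marginals, this quantity is a marginal-invariant dependence measure in the sense discussed above.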

Relevance: 30.00%

Abstract:

We give a general matrix formula for computing the second-order skewness of maximum likelihood estimators. The formula was first presented in a tensorial version by Bowman and Shenton (1998). Our matrix formulation has numerical advantages, since it requires only simple operations on matrices and vectors. We apply the second-order skewness formula to a normal model with a generalized parametrization and to an ARMA model. (c) 2010 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

We analyse the finite-sample behaviour of two second-order bias-corrected alternatives to the maximum-likelihood estimator of the parameters in a multivariate normal regression model with general parametrization proposed by Patriota and Lemonte [A. G. Patriota and A. J. Lemonte, Bias correction in a multivariate normal regression model with general parameterization, Stat. Prob. Lett. 79 (2009), pp. 1655-1662]. The two finite-sample corrections we consider are the conventional second-order bias-corrected estimator and the bootstrap bias correction. We present numerical results comparing the performance of these estimators. Our results reveal that the analytical bias correction outperforms the numerical bias corrections obtained from bootstrapping schemes.
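The bootstrap bias correction compared here follows the standard recipe theta_bc = 2*theta_hat - mean(theta*). The sketch below applies it nonparametrically to the downward-biased MLE of a normal variance; the function names are ours, and the analytic second-order correction of Patriota and Lemonte is model-specific and not shown.

```python
import numpy as np

def bootstrap_bias_corrected(theta_hat_fn, data, n_boot=2000, rng=None):
    # Nonparametric bootstrap bias correction:
    # theta_bc = 2*theta_hat - mean(theta*_b), where theta*_b is the
    # estimator recomputed on a resample of the data with replacement.
    rng = np.random.default_rng(rng)
    theta_hat = theta_hat_fn(data)
    n = len(data)
    boot = [theta_hat_fn(data[rng.integers(0, n, n)]) for _ in range(n_boot)]
    return 2.0 * theta_hat - np.mean(boot)

# Classic example: the MLE of a normal variance, x.var(), is biased
# downward by the factor (n-1)/n, so the correction should push it up.
rng = np.random.default_rng(7)
x = rng.normal(0.0, 1.0, 40)
var_bc = bootstrap_bias_corrected(lambda d: d.var(), x, rng=8)
```

Each bootstrap replicate requires refitting the model, which is why analytic corrections, when available, are computationally cheaper.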

Relevance: 30.00%

Abstract:

Understanding the scientific method fosters the development of critical thinking and logical analysis of information. Additionally, proposing and testing a hypothesis is applicable not only to science, but also to ordinary facts of daily life. Knowing the way science is done and how its results are published is useful for all citizens and mandatory for science students. A 60-h course was created to offer undergraduate students a framework in which to learn the procedures of scientific production and publication. The course's main focus was biochemistry, and it was comprised of two modules. Module I dealt with scientific articles, and Module II with research project writing. Module I covered the topics: 1) the difference between scientific knowledge and common sense, 2) different conceptions of science, 3) scientific methodology, 4) scientific publishing categories, 5) logical principles, 6) deductive and inductive approaches, and 7) critical reading of scientific articles. Module II dealt with 1) selection of an experimental problem for investigation, 2) bibliographic revision, 3) materials and methods, 4) project writing and presentation, 5) funding agencies, and 6) critical analysis of experimental results. The course adopted a collaborative learning strategy, and each topic was studied through activities performed by the students. Qualitative and quantitative course evaluations with Likert questionnaires were carried out at each stage, and the results showed the students' high approval of the course. The staff responsible for course planning and development also evaluated it positively. The Biochemistry Department of the Chemistry Institute of the University of Sao Paulo has offered the course four times.

Relevance: 30.00%

Abstract:

We consider methods for estimating causal effects of treatment when the individuals in the treatment and control groups are self-selected, i.e., the selection mechanism is not randomized. In this case, simple comparison of treated and control outcomes will not generally yield valid estimates of causal effects. The propensity score method is frequently used for the evaluation of treatment effects. However, this method is based on some strong assumptions which are not directly testable. In this paper, we present an alternative modeling approach to drawing causal inference using a shared random-effects model, together with a computational algorithm for likelihood-based inference in such a model. With small numerical studies and a real data analysis, we show that our approach not only gives more efficient estimates but is also less sensitive than existing methods to the model misspecifications we consider.
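The propensity score method the paper contrasts its approach with can be sketched as follows: fit a logistic regression of treatment on covariates, then form inverse-probability-weighted (Hajek) means. A minimal sketch with synthetic data (a true effect of +2 with selection on x); all names are ours, and real analyses need overlap checks, trimming, and proper standard errors.

```python
import numpy as np
from scipy.optimize import minimize

def fit_propensity(x, treat):
    # Maximum-likelihood logistic regression of treatment on the
    # covariates (with intercept); returns estimated propensity scores.
    X1 = np.column_stack([np.ones(len(x)), x])
    def nll(b):
        z = X1 @ b
        return np.sum(np.logaddexp(0.0, z) - treat * z)
    b = minimize(nll, np.zeros(X1.shape[1])).x
    return 1.0 / (1.0 + np.exp(-(X1 @ b)))

def ipw_ate(x, treat, y):
    # Inverse-probability-weighted (Hajek) estimate of the average
    # treatment effect: weighted treated mean minus weighted control mean.
    ps = fit_propensity(x, treat)
    w1, w0 = treat / ps, (1 - treat) / (1 - ps)
    return np.sum(w1 * y) / np.sum(w1) - np.sum(w0 * y) / np.sum(w0)

# Synthetic data: treatment adds +2 to the outcome, with selection on x,
# so the naive difference of means is biased upward.
rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=(n, 1))
treat = (rng.random(n) < 1.0 / (1.0 + np.exp(-x[:, 0]))).astype(float)
y = 2.0 * treat + x[:, 0] + rng.normal(size=n)
ate = ipw_ate(x, treat, y)
naive = y[treat == 1].mean() - y[treat == 0].mean()
```

The weighting recovers the effect because the propensity model is correctly specified here; the paper's point is precisely that such assumptions are untestable in practice.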

Relevance: 30.00%

Abstract:

This paper presents techniques of likelihood prediction for generalized linear mixed models. Methods of likelihood prediction are explained through a series of examples, from a classical one to more complicated ones. The examples show, in simple cases, that likelihood prediction (LP) coincides with already-known best frequentist practice, such as the best linear unbiased predictor. The paper outlines a way to deal with covariate uncertainty while producing predictive inference. Using a Poisson error-in-variables generalized linear model, it is shown that in complicated cases LP produces better results than already-known methods.

Relevance: 30.00%

Abstract:

A crucial aspect of evidential reasoning in crime investigation involves comparing the support that evidence provides for alternative hypotheses. Recent work in forensic statistics has shown how Bayesian Networks (BNs) can be employed for this purpose. However, the specification of BNs requires conditional probability tables describing the uncertain processes under evaluation. When these processes are poorly understood, it is necessary to rely on subjective probabilities provided by experts. Accurate probabilities of this type are normally hard to acquire from experts. Recent work in qualitative reasoning has developed methods to perform probabilistic reasoning using coarser representations. However, the latter types of approaches are too imprecise to compare the likelihood of alternative hypotheses. This paper examines this shortcoming of the qualitative approaches when applied to the aforementioned problem, and identifies and integrates techniques to refine them.

Relevance: 30.00%

Abstract:

World Heritage sites provide a glimpse into the stories and civilizations of the past. There are currently 1007 unique World Heritage properties, with 779 classified as cultural sites, 197 as natural sites, and 31 falling into both the cultural and natural categories (UNESCO & World Heritage Centre, 1992-2015). However, of these 1007 World Heritage sites, at least 46 are categorized as in danger, and this number continues to grow. These unique and irreplaceable sites are exceptional because of their universality. Consequently, since World Heritage sites belong to all the people of the world and provide inspiration and admiration to all who visit them, it is our responsibility to help preserve these sites. The key form of preservation involves the individual monitoring of each site over time. While traditional methods are still extremely valuable, more recent advances in the field of geographic and spatial technologies, including geographic information systems (GIS), laser scanning, and remote sensing, are becoming more beneficial for the monitoring and overall safeguarding of World Heritage sites. Through the employment and analysis of more accurately detailed spatial data, World Heritage sites can be better managed. There is a strong urgency to protect these sites. The purpose of this thesis is to describe the importance of taking care of World Heritage sites and to depict a way in which spatial technologies can be used to monitor, and in effect preserve, World Heritage sites through the utilization of remote sensing imagery. The research conducted in this thesis centers on Everglades National Park, a World Heritage site that is continually affected by changes in vegetation. Data used include Landsat satellite imagery dating from 2001-2003, the Everglades' boundaries shapefile, and Google Earth imagery.
In order to conduct the in-depth analysis of vegetation change within the selected World Heritage site, three main techniques were used to study changes found within the imagery: conducting supervised classification for each image, incorporating a vegetation index known as the Normalized Difference Vegetation Index (NDVI), and utilizing the change detection tool available in the Environment for Visualizing Images (ENVI) software. The research and analysis conducted throughout this thesis show that within the three-year time span (2001-2003), there was an overall increase in both areas of barren soil (5.760%) and areas of vegetation (1.263%), with a decrease in the percentage of areas classified as sparsely vegetated (-6.987%). These results were gathered through the use of the maximum likelihood classification process available in the ENVI software. The results produced by the change detection tool, which further analyzed vegetation change, correlate with the results produced by the classification method. Likewise, by utilizing the NDVI method, one is able to locate changes by selecting a specific area and comparing the vegetation index generated for each date. It has been found that through the utilization of remote sensing technology, it is possible to monitor and observe changes featured within a World Heritage site. Remote sensing is an extraordinary tool that can and should be used by all site managers and organizations whose goal is to preserve and protect World Heritage sites. Remote sensing can be used not only to observe changes over time, but also to pinpoint threats within a World Heritage site. World Heritage sites are irreplaceable sources of beauty, culture, and inspiration. It is our responsibility, as citizens of this world, to guard these treasures.
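The NDVI used in the thesis is the pixelwise ratio (NIR - Red)/(NIR + Red), with values near +1 for dense vegetation and near 0 or below for bare soil and water. A minimal sketch, assuming co-registered reflectance arrays for the two bands (the band numbers carrying NIR and Red vary by Landsat sensor):

```python
import numpy as np

def ndvi(nir, red):
    # Normalized Difference Vegetation Index, computed per pixel as
    # (NIR - Red) / (NIR + Red). Pixels with a zero denominator are
    # assigned 0 rather than NaN.
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Two pixels: strongly vegetated (NIR >> Red) and flat spectrum (NDVI = 0).
pixels = ndvi(np.array([[0.5, 0.1]]), np.array([[0.1, 0.1]]))
```

Differencing NDVI rasters from two dates is the simplest form of the change detection the thesis performs in ENVI.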

Relevance: 30.00%

Abstract:

This futuristic article discusses the shift in academic and research libraries to electronic collections in the context of information access, costs, publication models, and preservation of content. Certain factors currently complicate the shift to electronic formats and challenge their widespread acceptance. Future scenarios spanning skill ecosystems, technologies and workflows, and societal implications are explored as logical outgrowths of present circumstances.