983 results for Log-gamma generalized distribution
Abstract:
The generalized secant hyperbolic distribution (GSHD) proposed in Vaughan (2002) includes a wide range of unimodal symmetric distributions, with the Cauchy and uniform distributions being the limiting cases, and the logistic and hyperbolic secant distributions being special cases. The current article derives an asymptotically efficient rank estimator of the location parameter of the GSHD and suggests the corresponding one- and two-sample optimal rank tests. The rank estimator derived is compared to the modified MLE of location proposed in Vaughan (2002). By combining these two estimators, a computationally attractive method for constructing an exact confidence interval of the location parameter is developed. The statistical procedures introduced in the current article are illustrated by examples.
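For reference, the two special cases named above have simple closed-form densities in their standard (location 0, unit scale) form; the sketch below is only a reminder of those members and omits the additional shape parameter that indexes the GSHD family in Vaughan (2002).

```latex
% Logistic density (a GSHD special case), standard form
f_{\mathrm{logistic}}(x) = \frac{e^{-x}}{\left(1 + e^{-x}\right)^{2}}, \qquad x \in \mathbb{R}
% Hyperbolic secant density (a GSHD special case)
f_{\mathrm{sech}}(x) = \frac{1}{2}\,\operatorname{sech}\!\left(\frac{\pi x}{2}\right), \qquad x \in \mathbb{R}
```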
Abstract:
The size frequency distributions of diffuse, primitive and cored senile plaques (SP) were studied in single sections of the temporal lobe from 10 patients with Alzheimer’s disease (AD). The size distribution curves were unimodal and positively skewed. The size distribution curve of the diffuse plaques was shifted towards larger plaques while those of the neuritic and cored plaques were shifted towards smaller plaques. The neuritic/diffuse plaque ratio was maximal in the 11–30 micron size class and the cored/diffuse plaque ratio in the 21–30 micron size class. The size distribution curves of the three types of plaque deviated significantly from a log-normal distribution. Distributions expressed on a logarithmic scale were ‘leptokurtic’, i.e. with an excess of observations near the mean. These results suggest that SP in AD grow to within a more restricted size range than predicted from a log-normal model. In addition, there appear to be differences in the patterns of growth of diffuse, primitive and cored plaques. If neuritic and cored plaques develop from earlier diffuse plaques, then smaller diffuse plaques are more likely to be converted to mature plaques.
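A minimal sketch of the kind of check described (deviation from log-normality and a leptokurtic shape on the log scale), using hypothetical plaque diameters rather than the study's measurements:

```python
import numpy as np
from scipy import stats

# Hypothetical plaque diameters in microns (placeholder for the measured sizes).
rng = np.random.default_rng(1)
diameters = rng.gamma(shape=4.0, scale=8.0, size=500)

log_d = np.log(diameters)
# Log-normality check: test normality of the log-transformed sizes (Shapiro-Wilk here;
# the study's goodness-of-fit test against a log-normal is equivalent in spirit).
stat, p_value = stats.shapiro(log_d)
# Positive excess kurtosis on the log scale indicates a 'leptokurtic' shape,
# i.e. more observations near the mean than a normal distribution predicts.
excess_kurtosis = stats.kurtosis(log_d, fisher=True)
print(p_value, excess_kurtosis)
```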
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as the value of n has grown enormously in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" has little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
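A minimal numerical sketch of the reduced rank, PARAFAC-type latent class factorization referred to above, with hypothetical dimensions and randomly generated parameters:

```python
import numpy as np

# Hypothetical setup: p = 3 categorical variables with (2, 3, 2) levels and k = 4 latent classes.
rng = np.random.default_rng(0)
levels, k = (2, 3, 2), 4
nu = rng.dirichlet(np.ones(k))                             # latent class weights
lam = [rng.dirichlet(np.ones(d), size=k) for d in levels]  # lam[j][h, c] = P(y_j = c | class h)

# PARAFAC-type factorization of the joint pmf:
# P(y_1, y_2, y_3) = sum_h nu_h * prod_j P(y_j | class h)
pmf = np.einsum('h,ha,hb,hc->abc', nu, lam[0], lam[1], lam[2])
assert np.isclose(pmf.sum(), 1.0)  # a valid probability tensor of shape (2, 3, 2)
```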
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
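The optimal Gaussian approximation in Chapter 4 is defined by minimizing Kullback-Leibler divergence and is not reproduced here; as a familiar point of reference only, the Laplace approximation below shows the standard way a Gaussian is matched to a posterior mode and curvature.

```latex
% Laplace (mode-and-curvature) Gaussian approximation to a posterior \pi(\theta \mid y):
\hat{\theta} = \arg\max_{\theta} \log \pi(\theta \mid y), \qquad
\hat{\Sigma} = \left[\, -\nabla^{2} \log \pi(\theta \mid y) \,\big|_{\theta = \hat{\theta}} \right]^{-1}, \qquad
q(\theta) = \mathcal{N}\!\left(\hat{\theta},\, \hat{\Sigma}\right)
```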
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
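A minimal sketch of the quantity this paradigm is built on, waiting times between exceedances of a high threshold, using an arbitrary simulated series and a hypothetical threshold choice:

```python
import numpy as np

# Hypothetical series; in practice x would be observations of the process at one location.
rng = np.random.default_rng(2)
x = rng.standard_t(df=3, size=10_000)      # heavy-tailed placeholder series
u = np.quantile(x, 0.99)                   # high threshold (99th percentile here)

exceedance_times = np.flatnonzero(x > u)   # time indices of threshold exceedances
waiting_times = np.diff(exceedance_times)  # gaps between consecutive exceedances
# The distribution of waiting_times is what encodes the strength of tail dependence
# and its temporal structure in the framework described above.
```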
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
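As an illustration of one of the approximation classes mentioned (random subsets of data), the sketch below shows a naive Metropolis-Hastings step in which the full-data log-likelihood is replaced by a rescaled minibatch estimate; this is a generic device for approximating the transition kernel, not the specific algorithms or error bounds developed in Chapter 6.

```python
import numpy as np

def approx_mh_step(theta, data, log_lik_i, log_prior, proposal_sd, subsample_frac, rng):
    """One random-walk Metropolis-Hastings step with the log-likelihood
    approximated on a random subset of the data (all arguments are hypothetical)."""
    n = len(data)
    m = max(1, int(subsample_frac * n))
    idx = rng.choice(n, size=m, replace=False)

    def approx_log_post(t):
        # Rescale the minibatch log-likelihood by n/m to approximate the full-data term.
        return (n / m) * sum(log_lik_i(t, data[i]) for i in idx) + log_prior(t)

    proposal = theta + proposal_sd * rng.standard_normal()
    log_accept = approx_log_post(proposal) - approx_log_post(theta)
    return proposal if np.log(rng.uniform()) < log_accept else theta
```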
Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size, up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
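For context, the Polya-Gamma augmentation referred to above (in the style of Polson, Scott and Windle) alternates between two conditional updates; the standard form for a logistic regression with prior beta ~ N(b_0, B_0) is sketched below purely as background for the mixing results described.

```latex
% Polya-Gamma data augmentation Gibbs updates for logistic regression with y_i \in \{0,1\}:
\omega_i \mid \beta \;\sim\; \mathrm{PG}\!\left(1,\, x_i^{\top}\beta\right), \qquad i = 1,\dots,n,
\\[4pt]
\beta \mid \omega, y \;\sim\; \mathcal{N}\!\left(m_{\omega},\, V_{\omega}\right), \qquad
V_{\omega} = \left(X^{\top}\Omega X + B_0^{-1}\right)^{-1}, \qquad
m_{\omega} = V_{\omega}\left(X^{\top}\kappa + B_0^{-1} b_0\right),
\\[4pt]
\text{where } \Omega = \mathrm{diag}(\omega_1,\dots,\omega_n) \text{ and } \kappa_i = y_i - \tfrac{1}{2}.
```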
Abstract:
Mixtures of Zellner's g-priors have been studied extensively in linear models and have been shown to have numerous desirable properties for Bayesian variable selection and model averaging. Several extensions of g-priors to Generalized Linear Models (GLMs) have been proposed in the literature; however, the choice of prior distribution of g and the resulting properties for inference have received considerably less attention. In this paper, we extend mixtures of g-priors to GLMs by assigning the truncated Compound Confluent Hypergeometric (tCCH) distribution to 1/(1+g) and illustrate how this prior distribution encompasses several special cases of mixtures of g-priors in the literature, such as the Hyper-g, truncated Gamma, Beta-prime, and the Robust prior. Under an integrated Laplace approximation to the likelihood, the posterior distribution of 1/(1+g) is in turn a tCCH distribution, and approximate marginal likelihoods are thus available analytically. We discuss the local geometric properties of the g-prior in GLMs and show that specific choices of the hyper-parameters satisfy the various desiderata for model selection proposed by Bayarri et al., such as asymptotic model selection consistency, information consistency, intrinsic consistency, and measurement invariance. We also illustrate inference using these priors and contrast them to others in the literature via simulation and real examples.
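As background for the construction above, the linear-model form of Zellner's g-prior is sketched below; the GLM extensions discussed in the paper replace X^T X with an appropriate information matrix, and the specific tCCH hyper-prior density used for 1/(1+g) is not reproduced here.

```latex
% Zellner's g-prior in the Gaussian linear model y = X\beta + \varepsilon, \ \varepsilon \sim \mathcal{N}(0, \sigma^{2} I):
\beta \mid g, \sigma^{2} \;\sim\; \mathcal{N}\!\left(0,\; g\,\sigma^{2}\left(X^{\top}X\right)^{-1}\right),
\qquad \text{with a mixing prior placed on } g \text{ (here, a tCCH distribution on } 1/(1+g)\text{)}.
```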
Abstract:
This is an ecological, analytical and retrospective study comprising the 645 municipalities of the State of São Paulo, the scope of which was to determine the relationship between socioeconomic and demographic variables, the model of care, and infant mortality rates in the period from 1998 to 2008. The average annual rate of change of each indicator was calculated per coverage stratum. Infant mortality was analyzed with a model for repeated measures over time, adjusted for the following correction variables: the municipality's population, the proportion of Family Health Programs (PSFs) deployed, the proportion of Growth Acceleration Programs (PACs) deployed, per capita GDP and the SPSRI (São Paulo social responsibility index). The analysis was performed with generalized linear models, assuming a gamma distribution. Multiple comparisons were performed with likelihood ratio tests using the approximate chi-square distribution, at a significance level of 5%. There was a decrease in infant mortality over the years (p < 0.05), with no significant difference from 2004 to 2008 (p > 0.05). The proportion of PSFs deployed (p < 0.0001) and per capita GDP (p < 0.0001) were significant in the model. The decline of infant mortality in this period was influenced by the growth of per capita GDP and of the PSFs.
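A minimal sketch of a gamma-family GLM of the kind described, with repeated measures per municipality handled through generalized estimating equations in statsmodels; the variable names and data are placeholders rather than the study's dataset, and the paper's exact specification (link, correlation structure, covariates) may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical panel: one infant mortality rate per municipality and year, with two covariates.
rng = np.random.default_rng(3)
n_mun, n_years = 50, 11
df = pd.DataFrame({
    "municipality": np.repeat(np.arange(n_mun), n_years),
    "psf_coverage": rng.uniform(0, 1, n_mun * n_years),
    "gdp_per_capita": rng.lognormal(9, 0.5, n_mun * n_years),
})
df["imr"] = rng.gamma(shape=2.0, scale=8.0, size=len(df))  # placeholder outcome

X = sm.add_constant(df[["psf_coverage", "gdp_per_capita"]])
# Gamma-distributed response with repeated measures over time; an exchangeable working
# correlation and a log link are common choices (older statsmodels uses links.log()).
model = sm.GEE(df["imr"], X, groups=df["municipality"],
               family=sm.families.Gamma(link=sm.families.links.Log()),
               cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
print(result.summary())
```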
Abstract:
In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits, depending on the quantification assay. A further complication arises when these continuous repeated measures exhibit heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization (EM) type algorithm is developed for computing the maximum likelihood estimates, yielding as by-products the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a Human Immunodeficiency Virus (HIV)-AIDS study and several simulation studies.
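For reference, a damped exponential correlation structure of the kind mentioned is commonly written as below; the exact parameterization used in the paper may differ.

```latex
% Damped exponential correlation between errors observed at times t_{ij} and t_{ik}:
\mathrm{Corr}\!\left(\varepsilon_{ij},\, \varepsilon_{ik}\right) \;=\; \phi_{1}^{\,|t_{ij} - t_{ik}|^{\phi_{2}}},
\qquad 0 < \phi_{1} < 1, \quad \phi_{2} \ge 0.
% \phi_{2} = 1 gives a continuous-time AR(1)-type decay and \phi_{2} = 0 gives compound symmetry.
```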
Abstract:
Gamma ray tomography experiments were carried out to detect spatial patterns in the porosity of a 0.27 m diameter column packed with steel Raschig rings of three sizes (12.6, 37.9 and 76 mm), using a first-generation CT system (Chen et al., 1998). A fast Fourier transform tomographic reconstruction algorithm was used to calculate the spatial variation over the column cross section. Cross-sectional gas porosity and solid holdup distributions were determined. The cross-sectional average gas porosity was epsilon = 0.849, 0.938 and 0.966 for the 12.6, 37.9 and 76 mm rings, respectively. The radial holdup variation within the packed bed was also determined. The variation of the circumferentially averaged gas holdup in the radial direction indicates that the porosity in the column wall region is somewhat higher than in the bulk region, due to the effect of the column wall.
Abstract:
The objective was to study the flow pattern in a plate heat exchanger (PHE) through residence time distribution (RTD) experiments. The tested PHE had flat plates and was part of a laboratory-scale pasteurization unit. Series-flow and parallel-flow configurations were tested with a variable number of passes and channels per pass. Owing to the small scale of the equipment and the short residence times, it was necessary to take into account the influence of the tracer detection unit on the RTD data. Four theoretical RTD models were fitted: combined, series combined, generalized convection and axial dispersion. The combined model provided the best fit and was useful for quantifying the active and dead space volumes of the PHE and their dependence on its configuration. Results suggest that the axial dispersion model would perform well for a larger number of passes because of the turbulence associated with the changes of pass. This type of study can be useful to compare the hydraulic performance of different plates or to provide data for the evaluation of heat-induced changes that occur in the processing of heat-sensitive products.
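As background for the axial dispersion model mentioned, one common closed form of its residence time distribution (the open-open vessel solution, in dimensionless time theta = t/tau and Peclet number Pe) is sketched below; the boundary conditions assumed in the fitted model may differ.

```latex
% Axial dispersion RTD, open-open boundary conditions, \theta = t/\tau:
E(\theta) \;=\; \frac{1}{2\sqrt{\pi\,\theta/\mathrm{Pe}}}\;
\exp\!\left[-\,\frac{\mathrm{Pe}\,(1-\theta)^{2}}{4\,\theta}\right]
```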
Abstract:
A deterministic mathematical model for steady-state unidirectional solidification is proposed to predict the columnar-to-equiaxed transition. In the model, which is an extension of the classic model proposed by Hunt [Hunt JD. Mater Sci Eng 1984;65:75], equiaxed grains nucleate according to either a normal or a log-normal distribution of nucleation undercoolings. Growth maps are constructed, indicating either columnar or equiaxed solidification as a function of the velocity of the isotherms and the temperature gradient. The fields of columnar and equiaxed growth change significantly with the spread of the nucleation undercooling distribution. Increasing the spread favors columnar solidification if the dimensionless velocity of the isotherms is larger than 1. For a velocity less than 1, however, equiaxed solidification is initially favored, but columnar solidification is enhanced for a larger increase in the spread. This behavior was confirmed by a stochastic model, which showed that an increase in the distribution spread could change the grain structure from completely columnar to 50% columnar grains.