933 results for Bayesian hierarchical linear model


Relevance: 100.00%

Publisher:

Abstract:

We proposed and tested a multilevel model, underpinned by empowerment theory, that examines the processes linking high-performance work systems (HPWS) and performance outcomes at the individual and organizational levels of analysis. Data were obtained from 37 branches of 2 banking institutions in Ghana. Results of hierarchical regression analysis revealed that branch-level HPWS relates to empowerment climate. Additionally, results of hierarchical linear modeling that examined the hypothesized cross-level relationships revealed 3 salient findings. First, experienced HPWS and empowerment climate partially mediate the influence of branch-level HPWS on psychological empowerment. Second, psychological empowerment partially mediates the influence of empowerment climate and experienced HPWS on service performance. Third, service orientation moderates the psychological empowerment-service performance relationship such that the relationship is stronger for those high rather than low in service orientation. Finally, ordinary least squares regression results revealed that branch-level HPWS influences branch-level market performance through cross-level and individual-level influences on service performance, which emerges at the branch level as aggregated service performance. © 2011 American Psychological Association.

Relevance: 100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even as n grows dramatically in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n=all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
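
The latent-structure (PARAFAC) view of a contingency table described above can be made concrete with a small sketch: the joint pmf of p categorical variables is a k-component mixture of independent categorical kernels, so the full probability tensor has nonnegative rank at most k. The dimensions and random parameters below are purely illustrative, and this is not the collapsed Tucker decomposition proposed in Chapter 2:

```python
import numpy as np

rng = np.random.default_rng(0)

p, k, d = 3, 4, 5                              # p variables, k latent classes, d levels each
nu = rng.dirichlet(np.ones(k))                 # latent class weights
lam = rng.dirichlet(np.ones(d), size=(p, k))   # lam[j, h] = P(y_j = . | class h)

# PARAFAC / latent class form: P(y_1, ..., y_p) = sum_h nu_h * prod_j lam[j, h, y_j]
def joint_prob(y):
    return float(np.sum(nu * np.prod([lam[j, :, y[j]] for j in range(p)], axis=0)))

# The resulting probability tensor sums to 1 and has nonnegative rank at most k.
tensor = np.zeros((d,) * p)
for idx in np.ndindex(*tensor.shape):
    tensor[idx] = joint_prob(idx)
print(tensor.sum())   # ~1.0
```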

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis-Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis-Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
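
For reference, if "optimal" is read as minimizing the Kullback-Leibler divergence from the exact posterior to the approximating Gaussian, the minimizer over all Gaussians is the moment-matched one. The display below records that standard fact, with theta the vector of log-linear parameters; the chapter's specific construction for Diaconis-Ylvisaker priors may differ in detail:

```latex
% Moment-matching characterization of the KL-optimal Gaussian (generic fact;
% hedged: the chapter's exact construction and divergence direction may differ).
\[
  q^{\star}
  \;=\; \operatorname*{arg\,min}_{q = \mathcal{N}(\mu,\Sigma)}
        \operatorname{KL}\!\bigl(\pi \,\|\, q\bigr)
  \;=\; \mathcal{N}\!\bigl(\mathbb{E}_{\pi}[\theta],\; \operatorname{Cov}_{\pi}[\theta]\bigr),
  \qquad
  \operatorname{KL}\!\bigl(\pi \,\|\, q\bigr)
  \;=\; \int \pi(\theta)\,\log\frac{\pi(\theta)}{q(\theta)}\,d\theta .
\]
```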

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
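
A concrete instance of an approximated transition kernel of the kind analyzed in Chapter 6 is a Metropolis-Hastings step whose log-likelihood is estimated from a random subset of the data. The sketch below is a generic illustration of that idea under an i.i.d. likelihood, not the chapter's specific algorithms:

```python
import numpy as np

def approx_mh_step(theta, data, loglik, logprior, rng, step=0.1, batch=100):
    """One random-walk MH step whose acceptance ratio uses the log-likelihood
    of a random minibatch, scaled up by n/batch (an approximate kernel)."""
    n = len(data)
    idx = rng.choice(n, size=min(batch, n), replace=False)
    scale = n / len(idx)

    def log_target(t):
        return logprior(t) + scale * sum(loglik(t, data[i]) for i in idx)

    prop = theta + step * rng.standard_normal(np.shape(theta))
    log_accept = log_target(prop) - log_target(theta)
    return prop if np.log(rng.uniform()) < log_accept else theta
```

Because the acceptance probability is only estimated, such a chain targets a perturbed posterior; the framework described above is concerned with how much perturbation of this kind can be tolerated for a given loss function and computational budget.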

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. In contrast, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
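
For concreteness, the truncated Normal (Albert-Chib) data augmentation sampler for probit regression mentioned above alternates between latent Gaussian utilities and the regression coefficients. A minimal sketch with a N(0, tau^2 I) prior, illustrative rather than the chapter's advertising application:

```python
import numpy as np
from scipy.stats import truncnorm

def probit_da_gibbs(X, y, n_iter=2000, tau=10.0, seed=0):
    """Albert-Chib truncated normal data augmentation Gibbs sampler for probit regression."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau**2)   # posterior covariance of beta given z
    L = np.linalg.cholesky(V)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        # 1. z_i | beta, y_i ~ N(x_i' beta, 1), truncated to (0, inf) if y_i = 1, (-inf, 0] if y_i = 0
        mu = X @ beta
        lo = np.where(y == 1, 0.0, -np.inf)
        hi = np.where(y == 1, np.inf, 0.0)
        z = truncnorm.rvs(lo - mu, hi - mu, loc=mu, scale=1.0, random_state=rng)
        # 2. beta | z ~ N(V X'z, V)
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        draws[it] = beta
    return draws
```

When y contains very few successes, the beta draws from this kind of sampler exhibit exactly the high autocorrelation and small effective sample sizes described above.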

Relevance: 100.00%

Publisher:

Abstract:

This dissertation contributes to the rapidly growing empirical research area in the field of operations management. It contains two essays, tackling two different sets of operations management questions motivated by and built on field data sets from two very different industries: air cargo logistics and retailing.

The first essay, based on the data set obtained from a world-leading third-party logistics company, develops a novel and general Bayesian hierarchical learning framework for estimating customers' spillover learning, that is, customers' learning about the quality of a service (or product) from their previous experiences with similar yet not identical services. We then apply our model to the data set to study how customers' experiences from shipping on a particular route affect their future decisions about shipping not only on that route, but also on other routes serviced by the same logistics company. We find that customers indeed borrow experiences from similar but different services to update the quality beliefs that determine their future purchase decisions, and that these service quality beliefs have a significant impact on those decisions. Moreover, customers are risk averse: they are averse not only to experience variability but also to belief uncertainty (i.e., customers' uncertainty about their own beliefs). Finally, belief uncertainty affects customers' utilities more than experience variability does.
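
The essay's full hierarchical model is not reproduced here, but the spillover-learning mechanism it estimates can be illustrated with a toy conjugate normal-normal update in which an experience on one route also updates the belief about a similar route, attenuated by a similarity weight. All names and numbers below are hypothetical:

```python
import numpy as np

def update_belief(prior_mean, prior_var, signal, noise_var):
    """Conjugate normal-normal update of a quality belief from one experience."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    post_mean = post_var * (prior_mean / prior_var + signal / noise_var)
    return post_mean, post_var

# Spillover: an experience on route A also updates the belief about route B,
# through an inflated noise variance reflecting how dissimilar B is to A.
belief_A = (0.0, 1.0)              # (mean, variance) of quality belief for route A
belief_B = (0.0, 1.0)
experience_on_A = 0.8              # observed service quality on route A
noise_var, similarity = 0.5, 0.6   # 0 < similarity <= 1, hypothetical

belief_A = update_belief(*belief_A, experience_on_A, noise_var)
belief_B = update_belief(*belief_B, experience_on_A, noise_var / similarity)
print(belief_A, belief_B)          # B moves toward the signal, but less than A
```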

The second essay is based on a data set obtained from a large Chinese supermarket chain, which contains sales as well as both wholesale and retail prices of unpackaged perishable vegetables. Recognizing the special characteristics of this particular product category, we develop a structural estimation model in a discrete-continuous choice framework. Building on this framework, we then study an optimization model for joint pricing and inventory management of multiple products, which aims at improving the company's profit from direct sales while reducing food waste and thus improving social welfare.

Collectively, the studies in this dissertation provide useful modeling ideas, decision tools, insights, and guidance for firms to utilize vast sales and operations data to devise more effective business strategies.

Relevance: 100.00%

Publisher:

Abstract:

In acquired immunodeficiency syndrome (AIDS) studies it is quite common to observe viral load measurements collected irregularly over time. Moreover, these measurements can be subject to upper and/or lower detection limits depending on the quantification assay. A further complication arises when these continuous repeated measures have heavy-tailed behavior. For such data structures, we propose a robust censored linear model based on the multivariate Student's t-distribution. To account for the autocorrelation among irregularly observed measures, a damped exponential correlation structure is employed. An efficient expectation-maximization type algorithm is developed for computing the maximum likelihood estimates, obtaining as a by-product the standard errors of the fixed effects and the log-likelihood function. The proposed algorithm uses closed-form expressions at the E-step that rely on formulas for the mean and variance of a truncated multivariate Student's t-distribution. The methodology is illustrated through an application to a human immunodeficiency virus (HIV)-AIDS study and several simulation studies.
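
The damped exponential correlation structure mentioned above has a simple closed form; in one common parameterization (the paper's exact notation may differ), the within-subject correlation between errors observed at times t_{ij} and t_{ik} is

```latex
% Damped exponential correlation (one common parameterization; hedged).
\[
  \operatorname{Corr}\!\bigl(\varepsilon_{ij}, \varepsilon_{ik}\bigr)
  \;=\; \phi^{\,|t_{ij}-t_{ik}|^{\gamma}},
  \qquad 0 \le \phi < 1,\quad \gamma \ge 0,
\]
% gamma = 0 gives compound symmetry (equal correlation between all pairs),
% while gamma = 1 gives the continuous-time AR(1) structure.
```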

Relevance: 100.00%

Publisher:

Abstract:

A common breeding strategy is to carry out basic studies to investigate the hypothesis of a single gene controlling the trait (major gene), with or without polygenes of minor effect. In this study we used Bayesian inference to fit genetic additive-dominance models of inheritance to plant breeding experiments with multiple generations. Normal densities with different means, according to the major-gene genotype, were considered in a linear model in which the design matrix of the genetic effects had unknown coefficients (which were estimated on an individual basis). An actual data set from an inheritance study of parthenocarpy in zucchini (Cucurbita pepo L.) was used for illustration. Model fitting included posterior probabilities for all individual genotypes. The analysis agrees with results in the literature, but this approach was far more efficient than previous alternatives that assume the design matrix is known for each generation. Parthenocarpy in zucchini is controlled by a major gene with an important additive effect and partial dominance.
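
The additive-dominance model referred to above assigns each individual one of three genotypic means at the major locus; in standard quantitative-genetics notation (a sketch, not the paper's exact specification),

```latex
% Genotypic means under the additive-dominance model for a single major gene
% (standard notation: m = midpoint, a = additive effect, d = dominance deviation).
\[
  \mu_{AA} = m + a, \qquad \mu_{Aa} = m + d, \qquad \mu_{aa} = m - a,
\]
\[
  y_{i} \;=\; \mu_{g_i} + e_{i}, \qquad e_{i} \sim \mathcal{N}(0, \sigma^2),
\]
% where g_i is the individual's (unobserved) genotype, which receives a posterior
% probability for each class in the Bayesian analysis described above; partial
% dominance corresponds to 0 < |d| < |a|.
```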

Relevance: 100.00%

Publisher:

Abstract:

OBJECTIVE: To estimate the prevalence of hypertension among young military personnel and its associated factors. METHODS: Cross-sectional study of a sample of 380 male military personnel aged 19 to 35 years at a Brazilian Air Force unit in São Paulo, SP, between 2000 and 2001. The cutoff points for hypertension were >140 mmHg for systolic blood pressure and >90 mmHg for diastolic blood pressure. The variables studied included risk and protective factors for hypertension, such as behavioral and nutritional characteristics. Associations were analyzed using multiple generalized linear regression with binomial family and log link, yielding prevalence ratios with 90% confidence intervals and hierarchical selection of variables. RESULTS: The prevalence of hypertension was 22% (90%CI: 21;29). In the final multiple regression model, the prevalence of hypertension was 68% higher among former smokers than among non-smokers (90%CI: 1.13;2.50). Among overweight individuals (body mass index, BMI, of 25 to 29 kg/m2) and obese individuals (BMI >29 kg/m2), prevalences were, respectively, 75% (90%CI: 1.23;2.50) and 178% (90%CI: 1.82;4.25) higher than among those of normal weight. Among those who practiced regular physical activity, the prevalence was 52% lower than among those who did not (90%CI: 0.30;0.90). CONCLUSIONS: Being a former smoker and being overweight or obese were risk conditions for hypertension, while regular physical activity was a protective factor among young military personnel.
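
The prevalence ratios above come from a generalized linear model with binomial family and log link, so each exponentiated coefficient is directly a prevalence ratio. A minimal sketch with statsmodels, using hypothetical data and column ordering (the link class name has varied slightly across statsmodels versions, and log-binomial fits can be numerically delicate; Poisson regression with robust errors is a common fallback):

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical data in the spirit of the study: 380 subjects, binary indicators for
# former smoker, overweight, obese, and regular physical activity; y = hypertensive.
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=(380, 4))
eta = -2.5 + x @ np.array([0.5, 0.6, 1.0, -0.7])   # log prevalence, kept below 0
y = rng.binomial(1, np.exp(eta))

# Log-binomial GLM: log P(y=1) = X beta, so exp(beta_j) is a prevalence ratio.
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(fit.params[1:]))              # prevalence ratios
print(np.exp(fit.conf_int(alpha=0.10)))    # 90% confidence intervals, as in the study
```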

Relevance: 100.00%

Publisher:

Abstract:

This work deals with the analysis of cracked structures using the boundary element method (BEM). Two formulations to analyse the crack growth process in quasi-brittle materials are discussed. They are based on the dual formulation of the BEM, in which two different integral equations are employed along the opposite sides of the crack surface. The first formulation uses the concept of a constant operator, in which the corrections of the nonlinear process are made only by applying appropriate tractions along the crack surfaces. The second BEM formulation for crack growth problems is an implicit technique based on the use of a consistent tangent operator. This formulation is accurate, stable and always requires far fewer iterations to reach equilibrium within a given load increment in comparison with the classical approach. Comparison examples of classical crack growth problems are shown to illustrate the performance of the two formulations. (C) 2009 Elsevier Ltd. All rights reserved.

Relevance: 100.00%

Publisher:

Abstract:

This article presents a tool for allocation analysis of complex water resource systems, called AcquaNetXL, developed as a spreadsheet into which a linear optimization model and a nonlinear one were incorporated. AcquaNetXL keeps the concepts and attributes of a decision support system: it streamlines communication between the user and the computer, facilitates understanding and formulation of the problem and interpretation of the results, and supports the decision-making process, turning it into a clear and organized one. The performance of the algorithms used for solving the water allocation problems was satisfactory, especially for the linear model.
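
The linear allocation model embedded in AcquaNetXL is not reproduced here; the sketch below only illustrates the flavor of such a water-allocation LP with a toy single-reservoir, two-user problem (hypothetical numbers and priority weights), using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy allocation: one reservoir with 100 units of water, two users with demands
# of 70 and 60 and priority weights 3 and 1.  Maximize weighted deliveries
# (linprog minimizes, so the objective is negated).
weights = np.array([3.0, 1.0])
demands = np.array([70.0, 60.0])
available = 100.0

res = linprog(
    c=-weights,                            # maximize 3*x1 + 1*x2
    A_ub=[[1.0, 1.0]], b_ub=[available],   # total delivery <= available water
    bounds=[(0, d) for d in demands],      # each delivery <= its demand
)
print(res.x)   # -> [70., 30.]: the high-priority user is served first
```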

Relevance: 100.00%

Publisher:

Abstract:

The relative importance of factors that may promote genetic differentiation in marine organisms is largely unknown. Here, contributions to population structure from biogeography, habitat distribution, and isolation by distance were investigated in Axoclinus nigricaudus, a small subtidal rock reef fish, throughout its range in the Gulf of California. A 408 basepair fragment of the mitochondrial control region was sequenced from 105 individuals. Variation was significantly partitioned between many pairs of populations. Phylogenetic analyses, hierarchical analyses of variance, and general linear models substantiated a major break between two putative biogeographic regions. This genetic discontinuity coincides with an abrupt change in ecological characteristics (including temperature and salinity) but does not coincide with known oceanographic circulation patterns. Geographic distance and the nature of habitat separating populations (continuous habitat along a shoreline, discontinuous habitat along a shoreline, and open water) also contributed to population structure in general linear model analyses. To verify that local populations are genetically stable over time, one population was resampled on four occasions over eighteen months; it showed no evidence of a temporal component to diversity. These results indicate that having a planktonic life stage does not preclude geographically partitioned genetic variation over relatively small geographic distances in marine environments. Moreover, levels of genetic differentiation among populations of Axoclinus nigricaudus cannot be explained by a single factor, but are due to the combined influences of a biogeographic boundary, habitat, and geographic distance.

Relevance: 100.00%

Publisher:

Abstract:

Item noise models of recognition assert that interference at retrieval is generated by the words from the study list. Context noise models of recognition assert that interference at retrieval is generated by the contexts in which the test word has appeared. The authors introduce the bind cue decide model of episodic memory, a Bayesian context noise model, and demonstrate how it can account for data from the item noise and dual-processing approaches to recognition memory. From the item noise perspective, list strength and list length effects, the mirror effect for word frequency and concreteness, and the effects of the similarity of other words in a list are considered. From the dual-processing perspective, process dissociation data on the effects of length, temporal separation of lists, strength, and diagnosticity of context are examined. The authors conclude that the context noise approach to recognition is a viable alternative to existing approaches.

Relevance: 100.00%

Publisher:

Abstract:

We analyze the influence of time-, firm-, industry- and country-level determinants of capital structure. First, we apply hierarchical linear modeling in order to assess the relative importance of those levels; we find that the time and firm levels explain 78% of firm leverage. Second, we include random intercepts and random coefficients in order to analyze the direct and indirect influences of firm-, industry- and country-level characteristics on firm leverage. We document several important indirect influences of industry- and country-level variables on the firm-level determinants of leverage, as well as several structural differences in financial behavior between firms in developed and emerging countries. (C) 2010 Elsevier B.V. All rights reserved.
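
A hierarchical linear model with a random intercept and a random coefficient, of the kind described above, can be sketched with statsmodels' mixed-effects API; the file and variable names below are hypothetical stand-ins for the paper's firm-level determinants:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level panel with columns leverage, tangibility, profitability,
# and a country identifier (illustrative names, not the paper's variables).
df = pd.read_csv("firm_panel.csv")

# Random intercept for country plus a random coefficient on tangibility, so the
# effect of a firm-level determinant is allowed to vary across countries.
model = smf.mixedlm("leverage ~ tangibility + profitability",
                    data=df,
                    groups="country",
                    re_formula="~tangibility")
result = model.fit()
print(result.summary())
```

Note that MixedLM handles one grouping level per model (plus variance components), so a full time/firm/industry/country decomposition like the paper's would require variance-component formulas or dedicated multilevel software.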

Relevance: 100.00%

Publisher:

Abstract:

With a 41-society sample of 9990 managers and professionals, we used hierarchical linear modeling to investigate the impact of both macro-level and micro-level predictors on subordinate influence ethics. While we found that both macro-level and micro-level predictors contributed to the model definition, we also found global agreement for a subordinate influence ethics hierarchy. Thus our findings provide evidence that developing a global model of subordinate ethics is possible, and should be based upon multiple criteria and multilevel variables. Journal of International Business Studies (2009) 40, 1022-1045. doi:10.1057/jibs.2008.109

Relevance: 100.00%

Publisher:

Abstract:

Ussing [1] considered the steady flux of a single chemical component diffusing through a membrane under the influence of chemical potentials and derived, from his linear model, an expression for the ratio of this flux to that of the complementary experiment in which the boundary conditions were interchanged. Here, an extension of Ussing's flux ratio theorem is obtained for n chemically interacting components governed by a linear system of diffusion-migration equations that may also incorporate linear temporary trapping reactions. The determinants of the output flux matrices for complementary experiments are shown to satisfy an Ussing flux ratio formula under steady-state conditions of the same form as in the well-known one-component case. (C) 2000 Elsevier Science Ltd. All rights reserved.
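
For orientation, the classical one-component relation that this result generalizes can be written, in one common form (sign and notation conventions vary across sources), as a ratio of the two unidirectional steady-state fluxes between sides 1 and 2 of the membrane:

```latex
% Classical Ussing flux ratio for a single, independently moving component
% (one common form; hedged, as conventions differ between sources).
\[
  \frac{J_{1\to 2}}{J_{2\to 1}}
  \;=\;
  \exp\!\left(\frac{\tilde{\mu}_{1}-\tilde{\mu}_{2}}{RT}\right),
\]
% where \tilde{\mu}_s is the electrochemical potential of the component on side s.
% The result above replaces the scalar fluxes by determinants of the output flux
% matrices of the n-component complementary experiments.
```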

Relevance: 100.00%

Publisher:

Abstract:

Integrated master's dissertation in Psychology