935 results for Latent variables
Abstract:
A fully dimensional view of psychiatric disorder conceptualises schizotypy as both a continuous personality trait and an underlying vulnerability to the development of psychotic illness. Such a model would predict that the structure of schizotypal traits would closely parallel the structure of schizophrenia or psychosis. This was investigated in injecting amphetamine users (N = 322), a clinical population who have high rates of acute psychotic episodes and subclinical schizotypal experiences. Schizotypy was assessed using the Oxford-Liverpool Inventory of Feelings and Experiences (O-LIFE), and psychotic symptoms were assessed using the Brief Psychiatric Rating Scale (BPRS). Using confirmatory factor analysis, O-LIFE subscale scores were mapped onto latent variables with their more clinical counterparts from the BPRS. A four-factor model comprising positive schizotypy, disorganisation, negative schizotypy, and disinhibition provided the best model fit, consistent with prior research into the structure of schizotypy. The model provided a good fit to the data, lending support to the theory that schizotypy and psychotic symptoms map onto common underlying dimensions.
Abstract:
This is a methodological paper describing when and how manifest items dropped from a latent construct measurement model (e.g., factor analysis) can be retained for additional analysis. Protocols are presented for assessing items for retention in the measurement model, for evaluating dropped items as potential variables separate from the latent construct, and for post hoc analyses that can be conducted using all retained (manifest or latent) variables. The protocols are then applied to data relating to the impact of the NAPLAN test. The variables examined are teachers’ achievement goal orientations and teachers’ perceptions of the impact of the test on curriculum and pedagogy. It is suggested that five attributes be considered before retaining dropped manifest items for additional analyses. (1) Items can be retained when employed in service of an established or hypothesized theoretical model. (2) Items should only be retained if sufficient variance is present in the data set. (3) Items can be retained when they provide a rational segregation of the data set into subsamples (e.g., a consensus measure). (4) The value of retaining items can be assessed using latent class analysis or latent mean analysis. (5) Items should be retained only when post hoc analyses with these items produce significant and substantive results. These suggested exploratory strategies are presented so that other researchers using survey instruments might explore their data in similar and more innovative ways. Finally, suggestions for future use are provided.
Abstract:
In recent years, business practitioners have been valuing patents on the basis of the market price a patent can attract. Researchers have also examined various patent latent variables and firm variables that influence the price of a patent. Forward citations of a patent have been shown to play a role in determining price. Using patent auction price data (from Ocean Tomo, now ICAP patent brokerage), we delve deeper into the role of forward citations. The 167 successfully sold singleton patents form the sample of our study. We found that it is mainly the right tail of the citation distribution that explains the high prices of the patents falling on the right tail of the price distribution. The literature is consistent on the positive correlation between patent prices and forward citations. In this paper, we go deeper to understand this linear relationship through case studies. Case studies of patents with high and low citations are described to understand why some patents attracted high prices. We also look into the role of additional patent latent variables, such as age, technology discipline, class, and breadth of the patent, in influencing the citations a patent receives.
Abstract:
We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
Abstract:
Variational methods are a key component of the approximate inference and learning toolbox. These methods fill an important middle ground, retaining distributional information about uncertainty in latent variables, unlike maximum a posteriori (MAP) methods, and yet generally requiring less computational time than Markov chain Monte Carlo methods. In particular the variational Expectation Maximisation (vEM) and variational Bayes algorithms, both involving variational optimisation of a free-energy, are widely used in time-series modelling. Here, we investigate the success of vEM in simple probabilistic time-series models. First we consider the inference step of vEM, and show that a consequence of the well-known compactness property of variational inference is a failure to propagate uncertainty in time, thus limiting the usefulness of the retained distributional information. In particular, the uncertainty may appear to be smallest precisely when the approximation is poorest. Second, we consider parameter learning and analytically reveal systematic biases in the parameters found by vEM. Surprisingly, simpler variational approximations (such as mean-field) can lead to less bias than more complicated structured approximations.
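The "compactness" bias described above can be seen in closed form on a toy example: for a multivariate Gaussian with precision matrix Λ, the optimal fully factorised (mean-field) approximation gives each variable variance 1/Λ_ii, which never exceeds the true marginal variance Σ_ii. A minimal numpy sketch of this general property (an illustration, not the paper's time-series models):

```python
import numpy as np

# True 2-D Gaussian with strong correlation.
rho = 0.9
Sigma = np.array([[1.0, rho],
                  [rho, 1.0]])
Lambda = np.linalg.inv(Sigma)  # precision matrix

# Closed-form mean-field (fully factorised) Gaussian VI:
# the optimal q(z_i) is N(mu_i, 1 / Lambda_ii).
q_var = 1.0 / np.diag(Lambda)
true_marginal_var = np.diag(Sigma)

print(q_var, true_marginal_var)
```

Here q_var equals 1 - rho² = 0.19 against a true marginal variance of 1.0, and the underestimation worsens as |rho| approaches 1: exactly the regime where the retained uncertainty is least trustworthy.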
Abstract:
We consider the inverse reinforcement learning problem, that is, the problem of learning from, and then predicting or mimicking a controller based on state/action data. We propose a statistical model for such data, derived from the structure of a Markov decision process. Adopting a Bayesian approach to inference, we show how latent variables of the model can be estimated, and how predictions about actions can be made, in a unified framework. A new Markov chain Monte Carlo (MCMC) sampler is devised for simulation from the posterior distribution. The sampler includes a parameter-expansion step, which is shown to be essential for its good convergence properties. As an illustration, the method is applied to learning a human controller.
Abstract:
Latent Dirichlet Allocation (LDA) is a document-level language model. In general, LDA employs a symmetric Dirichlet distribution as the prior over topic-word distributions to implement model smoothing. In this paper, we propose a data-driven smoothing strategy in which probability mass is allocated from smoothing data to latent variables by the intrinsic inference procedure of LDA. In this way, the arbitrariness of choosing the latent variables' priors for the multi-level graphical model is overcome. Following this data-driven strategy, two concrete methods, Laplacian smoothing and Jelinek-Mercer smoothing, are applied to the LDA model. Evaluations on different text categorization collections show that data-driven smoothing can significantly improve performance on both balanced and unbalanced corpora.
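The abstract does not give formulas, but the two classical estimators it names have standard forms. A hedged sketch of both applied to a word-count vector (the hyperparameters alpha and lam and the toy counts are illustrative choices, not the paper's settings):

```python
import numpy as np

def laplace_smoothing(counts, alpha=1.0):
    """Additive (Laplacian) smoothing of a word-count vector."""
    counts = np.asarray(counts, dtype=float)
    return (counts + alpha) / (counts.sum() + alpha * len(counts))

def jelinek_mercer(counts, background, lam=0.7):
    """Interpolate the maximum-likelihood estimate with a background distribution."""
    counts = np.asarray(counts, dtype=float)
    p_ml = counts / counts.sum()
    return lam * p_ml + (1.0 - lam) * np.asarray(background, dtype=float)

counts = [3, 1, 0, 0]
print(laplace_smoothing(counts))         # no zero probabilities remain
print(jelinek_mercer(counts, [0.25] * 4))
```

Both estimators remove the zero probabilities that a raw maximum-likelihood estimate would assign to unseen words; the paper's contribution is in letting LDA's inference decide how that mass is routed to the latent variables.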
Abstract:
The nature of individual differences among children is an important issue in the study of human intelligence, and intelligence is closely related to executive functions. Traditional theories, based mainly on factor analysis, approach the problem only from the perspective of psychometrics; they do not study the relation between cognition and neurobiology. Some researchers try to explore the essential differences in intelligence at the basic cognitive level by studying the relationship between executive function and intelligence. The aims of this study were: 1) to delineate and separate executive function in children into measurable constructs; 2) to establish the relationship between executive function and intelligence in children; 3) to find the differences, and their neural mechanism, between intellectually gifted and normal children's executive function. The participants were 188 children aged 7-12 years, who completed 6 executive function tasks. The results were as follows: 1) Latent variable analyses showed that there was no stable construct of executive function in 7-10 year old children. The executive function construct of 11-12 year old children could be separated into updating, inhibition, and shifting, and by this age children had become more or less adult-like in executive function. There were only moderate correlations between the three types of executive function, which were thus largely independent of each other. 2) The correlations of updating, inhibition, and shifting with intelligence differed across 7-12 year old children: the older the age, the more the indices were related to intelligence. Updating and shifting were related to intelligence in 7-10 year old children, while updating, inhibition, and shifting all correlated significantly with intelligence in 11-12 year old children.
The correlation between updating and intelligence was higher than that between shifting and intelligence. Furthermore, in structural equation models controlling for the correlations among the three executive functions, updating was highly related to intelligence, but the relations of inhibition and shifting to intelligence were not significant. 3) Intellectually gifted children performed better than normal children on executive function tasks. The neural mechanism differences between intellectually gifted and average children were indicated by the ERP component P3. The present study helps us understand the relationship between intelligence and executive function and throws light on individual differences in intelligence. The results can provide theoretical support for the development of a culture-free intelligence test and a method to promote the development of intelligence. Our study also lends support to the neural efficiency hypothesis.
Abstract:
Mechanisms underlying the cognitive psychology and cerebral physiology of mental arithmetic with increasing age were studied using behavioral methods and functional magnetic resonance imaging (fMRI). I. Studies on the mechanism underlying the cognitive psychology of mental arithmetic with increasing age. These studies were carried out on 172 normal subjects ranging from 20 to 79 years of age with more than 12 years of education (Mean = 1.51, SD = 1.5). Five mental arithmetic tasks, "1000-1", "1000-3", "1000-7", "1000-13", "1000-17", were designed as serial calculations in which subjects sequentially subtracted the same number (1, 3, 7, 13, 17) from 1000. The variables studied were mental arithmetic, age, working memory, and sensory-motor speed, and four studies were conducted: (1) the aging process of mental arithmetic at different difficulty levels, (2) the mechanism of aging of mental arithmetic processing, (3) the effects of working memory and sensory-motor speed on the aging process of mental arithmetic, and (4) a model of cognitive aging of mental arithmetic. Statistical methods included MANOVA, hierarchical multiple regression, stepwise regression analysis, and structural equation modelling (SEM).
The results were as follows. Study 1: There was a clear interaction between age and mental arithmetic, in which reaction time (RT) increased with advancing age and more difficult mental arithmetic, and mental arithmetic efficiency (the ratio of accuracy to RT) decreased with advancing age and more difficult mental arithmetic; mental arithmetic efficiency at different difficulty levels decreased as a power function. Study 2: There were two mediators (latent variables) in the aging process of mental arithmetic, and age affected mental arithmetic at different difficulty levels through these two mediators. Study 3: There were clear interactions between age and working memory, and between working memory and mental arithmetic; working memory and sensory-motor speed both affected the aging process of mental arithmetic, with working memory accounting for about 30-50% of the effect and sensory-motor speed for above 35%. Study 4: Age, working memory, and sensory-motor speed had effects on two latent variables (factor 1 and factor 2), and through them on mental arithmetic at different difficulty levels; factor 1 was related to a memory component, and factor 2 was related to a speed component and had a significant effect on factor 1. II. Functional magnetic resonance imaging study of mental arithmetic with increasing age. This study was carried out on 14 normal right-handed subjects, aged 20 to 29 (7 subjects) and 60 to 69 (7 subjects), using a superconductive Signa Horizon 1.5T MRI system.
Two mental arithmetic tasks, "1000-3" and "1000-17", were designed as serial calculations in which subjects silently and sequentially subtracted the same number (3 or 17) from 1000; a control task, "1000-0", in which subjects continually rehearsed the number 1000 silently, was regarded as the baseline, following the standard "baseline-task" OFF-ON subtraction design. The original data collected by the fMRI apparatus were analyzed off-line on a SUN SPARC workstation using the STIMULATE software. The analysis comprised within-subject analysis, in which brain activation images for mental arithmetic at the two difficulty levels were obtained using t-tests, and between-subject analysis, in which features of brain activation for mental arithmetic at the two difficulty levels, the relationship between the left and right hemispheres during mental arithmetic, and age differences in brain activation between young and elderly adults were examined using the non-parametric Wilcoxon test. The results were as follows:
Abstract:
Gaussian factor models have proven widely useful for parsimoniously characterizing dependence in multivariate data. There is a rich literature on their extension to mixed categorical and continuous variables, using latent Gaussian variables or through generalized latent trait models accommodating measurements in the exponential family. However, when generalizing to non-Gaussian measured variables, the latent variables typically influence both the dependence structure and the form of the marginal distributions, complicating interpretation and introducing artifacts. To address this problem we propose a novel class of Bayesian Gaussian copula factor models which decouple the latent factors from the marginal distributions. A semiparametric specification for the marginals based on the extended rank likelihood yields straightforward implementation and substantial computational gains. We provide new theoretical and empirical justifications for using this likelihood in Bayesian inference. We propose new default priors for the factor loadings and develop efficient parameter-expanded Gibbs sampling for posterior computation. The methods are evaluated through simulations and applied to a dataset in political science. The models in this paper are implemented in the R package bfa.
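A hedged generative sketch of the copula-factor idea (not the paper's bfa implementation): latent Gaussian scores come from an ordinary factor model, and each observed variable is produced by pushing its score through an arbitrary marginal quantile function, so the marginals can be changed without touching the latent dependence. The exponential and Poisson marginals below are illustrative choices:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p, k = 2000, 4, 2

# Ordinary Gaussian factor model on the latent scale: z = eta @ Lam.T + noise
Lam = rng.normal(size=(p, k))
eta = rng.normal(size=(n, k))
z = eta @ Lam.T + rng.normal(size=(n, p))
z = z / z.std(axis=0)          # standardize so Phi(z) is (roughly) uniform

u = stats.norm.cdf(z)          # move to the copula (uniform) scale

# Marginals are decoupled from the factors: any quantile function works
# without changing the dependence carried by z.
x_exp = stats.expon.ppf(u[:, 0])             # continuous marginal
x_pois = stats.poisson.ppf(u[:, 1], mu=3)    # discrete (count) marginal
```

Because each transform is monotone in its latent score, rank-based dependence among the observed variables is inherited entirely from the factor model, which is what makes the extended rank likelihood a natural fit for inference in the reverse direction.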
Abstract:
How can we correlate the neural activity in the human brain as it responds to typed words, with properties of these terms (like ‘edible’, ‘fits in hand’)? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem.
Can we accelerate any CMTF solver, so that it runs within a few minutes instead of tens of hours to a day, while maintaining good accuracy? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm by up to 200x, along with an up to 65-fold increase in sparsity, with comparable accuracy to the baseline.
We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy.
Abstract:
How can we correlate neural activity in the human brain as it responds to words, with behavioral data expressed as answers to questions about these same words? In short, we want to find latent variables that explain both the brain activity and the behavioral responses. We show that this is an instance of the Coupled Matrix-Tensor Factorization (CMTF) problem. We propose Scoup-SMT, a novel, fast, and parallel algorithm that solves the CMTF problem and produces a sparse latent low-rank subspace of the data. In our experiments, we find that Scoup-SMT is 50-100 times faster than a state-of-the-art algorithm for CMTF, along with a 5-fold increase in sparsity. Moreover, we extend Scoup-SMT to handle missing data without degradation of performance. We apply Scoup-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Scoup-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Scoup-SMT by applying it to a Facebook dataset (users, friends, wall-postings); there, Scoup-SMT spots spammer-like anomalies.
Abstract:
Doctoral thesis, Psychology (Social Psychology), Universidade de Lisboa, Faculdade de Psicologia, 2015
Abstract:
Thesis (Master's)--University of Washington, 2014