905 results for Latent Inhibition Model
Abstract:
The purpose of this research was to better understand the impact of the terrorist attacks of 2001 on public health, particularly in Texas. This study employed mixed methods to examine changes to public health culture within Texas local public health agencies, important attitudes of public health workers toward responding to a disaster, and the funding policies that might ensure that investment in public health emergency preparedness is protected. A qualitative analysis of interviews conducted with a large sample of public health officials in Texas found that all the constituent parts of a distinct culture of public health preparedness existed across the state's local health departments, regardless of size or funding level. The new preparedness culture in Texas had the hallmarks necessary for a robust public health preparedness and emergency response system. The willingness of public health workers, which is necessary to make these kinds of changes and mount a disaster response, was examined in one of Texas's most experienced disaster response teams: the public health workers of the City of Houston. A hypothesized latent variable model showed that willingness mediated all other factors in the model (self-efficacy, knowledge, barriers, and risk perception) for self-reported likelihood of reporting to work during a disaster. The RMSEA for the final model was 0.042 with a confidence interval of 0.036 to 0.049, and the chi-squared difference test gave P = 0.08, indicating a well-fitted model and suggesting that willingness is an important factor for consideration by preparedness planners and researchers alike. Finally, with disasters on the rise and federal funding for preparedness dwindling, states' policies for the distribution of these funds, and their advantages and disadvantages, were examined through a review of current literature and public documents and a survey of state-level public health officials, emergency management professionals, and researchers. Although the base-plus-per-capita method is the most common, it is not necessarily perceived to be the most effective. No clear "optimal" method emerged from the study, but recommendations were made for a strategic combination of three methods that has the potential to maximize the benefits of each method while minimizing their weaknesses.
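For readers unfamiliar with the fit index quoted above, RMSEA is computed from the model chi-square, its degrees of freedom, and the sample size. The Python sketch below shows only the arithmetic of the standard single-group formula; the chi-square, degrees of freedom, and sample size are hypothetical values for illustration, not the study's data.

```python
# Hedged illustration of the RMSEA fit index; the chi-square, degrees of
# freedom, and sample size below are hypothetical, not the study's values.
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation for a single-group model."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical values chosen only to show the arithmetic.
print(round(rmsea(chi2=180.0, df=100, n=450), 3))
```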
Abstract:
The sensory patches in the ear of a vertebrate can be compared with the mechanosensory bristles of a fly. This comparison has led to the discovery that lateral inhibition mediated by the Notch cell–cell signaling pathway, first characterized in Drosophila and crucial for bristle development, also has a key role in controlling the pattern of sensory hair cells and supporting cells in the ear. We review the arguments for considering the sensory patches of the vertebrate ear and bristles of the insect to be homologous structures, evolved from a common ancestral mechanosensory organ, and we examine more closely the role of Notch signaling in each system. Using viral vectors to misexpress components of the Notch pathway in the chick ear, we show that a simple lateral-inhibition model based on feedback regulation of the Notch ligand Delta is inadequate for the ear just as it is for the fly bristle. The Notch ligand Serrate1, expressed in supporting cells in the ear, is regulated by lateral induction, not lateral inhibition; commitment to become a hair cell is not simply controlled by levels of expression of the Notch ligands Delta1, Serrate1, and Serrate2 in the neighbors of the nascent hair cell; and at least one factor, Numb, capable of blocking reception of lateral inhibition is concentrated in hair cells. These findings reinforce the parallels between the vertebrate ear and the fly bristle and show how study of the insect system can help us understand the vertebrate.
Abstract:
The current study tested two competing models of Attention-Deficit/Hyperactivity Disorder (AD/HD), the inhibition and state regulation theories, by conducting fine-grained analyses of the Stop-Signal Task and another putative measure of behavioral inhibition, the Gordon Continuous Performance Test (G-CPT), in a large sample of children and adolescents. The inhibition theory posits that performance on these tasks reflects increased difficulty for AD/HD participants in inhibiting prepotent responses. The model predicts that putative stop-signal reaction time (SSRT) group differences on the Stop-Signal Task will be primarily related to AD/HD participants requiring more warning than control participants to inhibit responding to the stop-signal, and it emphasizes the relative importance of commission errors, particularly "impulsive" type commissions, over other error types on the G-CPT. The state regulation theory, on the other hand, proposes response variability due to difficulties maintaining an optimal state of arousal as the primary deficit in AD/HD. This model predicts that SSRT differences will be more attributable to slower and/or more variable reaction time (RT) in the AD/HD group, as opposed to reflecting inhibitory deficits. State regulation assumptions also emphasize the relative importance of omission errors and "slow processing" type commissions over other error types on the G-CPT. Overall, results of Stop-Signal Task analyses were more supportive of state regulation predictions, showing that greater response variability (i.e., SDRT) in the AD/HD group was not reducible to slow mean reaction time (MRT) and that response variability made a larger contribution to increased SSRT in the AD/HD group than inhibitory processes. Further, ex-Gaussian analyses of Stop-Signal Task go-trial RT distributions revealed that increased variability in the AD/HD group was not due solely to a few excessively long RTs in the tail of the AD/HD distribution (i.e., tau), but rather indicated the importance of response variability throughout AD/HD group performance on the Stop-Signal Task, as well as the notable sensitivity of ex-Gaussian analyses to variability in data screening procedures. Results of G-CPT analyses indicated some support for the inhibition model, although error-type analyses failed to further differentiate the theories. Finally, inclusion of the primary variables of interest in an exploratory factor analysis with other neurocognitive predictors of AD/HD indicated that response variability is a separable construct and further supported its role in Stop-Signal Task performance. Response variability did not, however, make a unique contribution to the prediction of AD/HD symptoms beyond measures of motor processing speed in multiple-deficit regression analyses. Results have implications for the interpretation of the processes reflected in widely used variables in the AD/HD literature, as well as for the theoretical understanding of AD/HD.
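The ex-Gaussian decomposition mentioned above separates a reaction-time distribution into a Gaussian component (mu, sigma) and an exponential tail (tau). As a hedged illustration of that kind of analysis, not the study's actual pipeline, the Python sketch below fits an ex-Gaussian to a vector of go-trial RTs using SciPy's exponentially modified normal distribution; the RT values are simulated for the example.

```python
# Hedged sketch: fitting an ex-Gaussian to go-trial reaction times.
# The RT values below are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulate RTs (ms) as Gaussian + exponential tail: mu=450, sigma=60, tau=120.
rts = rng.normal(450, 60, size=500) + rng.exponential(120, size=500)

# SciPy parameterizes the ex-Gaussian as exponnorm(K, loc, scale),
# where mu = loc, sigma = scale, and tau = K * scale.
K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale

print(f"mu = {mu:.1f} ms, sigma = {sigma:.1f} ms, tau = {tau:.1f} ms")
# A larger tau means more mass in the slow tail of the RT distribution, the
# quantity that the ex-Gaussian analyses contrast with overall variability (SDRT).
```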
Abstract:
Latent variable models represent the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. A familiar example is factor analysis, which is based on a linear transformation between the latent space and the data space. In this paper we introduce a form of non-linear latent variable model called the Generative Topographic Mapping (GTM), for which the parameters of the model can be determined using the EM algorithm. GTM provides a principled alternative to the widely used Self-Organizing Map (SOM) of Kohonen (1982), and overcomes most of the significant limitations of the SOM. We demonstrate the performance of the GTM algorithm on a toy problem and on simulated data from flow diagnostics for a multi-phase oil pipeline.
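The abstract names only the EM algorithm, so the following is a minimal, illustrative sketch of a GTM-style EM loop in Python (1-D latent space, RBF basis, toy 2-D data). The grid sizes, basis widths, regularization, and initialization are assumptions for the example, not the paper's settings.

```python
# Minimal GTM-style sketch (1-D latent space), illustrative only.
import numpy as np

rng = np.random.default_rng(1)

# Toy data: noisy 2-D curve.
t = rng.uniform(-1, 1, size=200)
X = np.column_stack([t, np.sin(3 * t)]) + 0.05 * rng.normal(size=(200, 2))
N, D = X.shape

K, M = 40, 10                      # latent grid points, RBF basis functions
u = np.linspace(-1, 1, K)          # latent grid
c = np.linspace(-1, 1, M)          # RBF centres
sigma_rbf = 2.0 * (c[1] - c[0])
Phi = np.exp(-0.5 * ((u[:, None] - c[None, :]) / sigma_rbf) ** 2)
Phi = np.column_stack([Phi, np.ones(K)])        # add bias column -> K x (M+1)

W = 0.1 * rng.normal(size=(M + 1, D))           # mapping weights
beta = 1.0                                      # inverse noise variance
alpha = 1e-3                                    # weight regularization

for _ in range(100):
    Y = Phi @ W                                 # K x D mixture centres
    sq = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)   # K x N squared distances
    logR = -0.5 * beta * sq
    logR -= logR.max(axis=0, keepdims=True)
    R = np.exp(logR)
    R /= R.sum(axis=0, keepdims=True)           # responsibilities, K x N

    G = np.diag(R.sum(axis=1))                  # K x K
    A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(M + 1)
    W = np.linalg.solve(A, Phi.T @ (R @ X))     # M-step for the weights
    Y = Phi @ W
    sq = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)
    beta = N * D / (R * sq).sum()               # M-step for the noise precision

# Posterior-mean latent coordinate of each point, usable for 1-D visualization.
latent_mean = (R * u[:, None]).sum(axis=0)
print(latent_mean[:5])
```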
Abstract:
The Self-Organizing Map (SOM) algorithm has been extensively studied and has been applied with considerable success to a wide variety of problems. However, the algorithm is derived from heuristic ideas, and this leads to a number of significant limitations. In this paper, we consider the problem of modelling the probability density of data in a space of several dimensions in terms of a smaller number of latent, or hidden, variables. We introduce a novel form of latent variable model, the Generative Topographic Mapping (GTM), which allows general non-linear transformations from latent space to data space and which is trained using the expectation-maximization (EM) algorithm. Our approach overcomes the limitations of the SOM while introducing no significant disadvantages. We demonstrate the performance of the GTM algorithm on simulated data from flow diagnostics for a multi-phase oil pipeline.
Abstract:
Principal component analysis (PCA) is one of the most popular techniques for processing, compressing and visualising data, although its effectiveness is limited by its global linearity. While nonlinear variants of PCA have been proposed, an alternative paradigm is to capture data complexity by a combination of local linear PCA projections. However, conventional PCA does not correspond to a probability density, and so there is no unique way to combine PCA models. Previous attempts to formulate mixture models for PCA have therefore to some extent been ad hoc. In this paper, PCA is formulated within a maximum-likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analysers, whose parameters can be determined using an EM algorithm. We discuss the advantages of this model in the context of clustering, density modelling and local dimensionality reduction, and we demonstrate its application to image compression and handwritten digit recognition.
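The abstract above describes a mixture whose components are probabilistic PCA models fitted by EM. The Python code below is a rough sketch only, not the authors' algorithm: it alternates responsibility computation with a per-component closed-form PPCA update based on responsibility-weighted covariances, and the data, number of components, and latent dimension are arbitrary choices for illustration.

```python
# Rough sketch of a mixture of probabilistic PCA models (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two elongated clusters in 3-D.
X = np.vstack([
    rng.normal(size=(150, 3)) * [3.0, 0.3, 0.3] + [0, 0, 0],
    rng.normal(size=(150, 3)) * [0.3, 3.0, 0.3] + [6, 6, 0],
])
N, d = X.shape
n_comp, q = 2, 1                      # mixture components, latent dimension

pi = np.full(n_comp, 1.0 / n_comp)
mu = X[rng.choice(N, n_comp, replace=False)]
W = [0.1 * rng.normal(size=(d, q)) for _ in range(n_comp)]
sigma2 = np.ones(n_comp)

def log_gauss(X, m, C):
    """Log-density of N(m, C) evaluated at the rows of X."""
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, (X - m).T)
    return -0.5 * (d * np.log(2 * np.pi)
                   + 2 * np.log(np.diag(L)).sum()
                   + (z ** 2).sum(axis=0))

for _ in range(50):
    # E-step: responsibilities under each PPCA covariance C_i = W W^T + sigma2 I.
    logp = np.column_stack([
        np.log(pi[i]) + log_gauss(X, mu[i], W[i] @ W[i].T + sigma2[i] * np.eye(d))
        for i in range(n_comp)
    ])
    logp -= logp.max(axis=1, keepdims=True)
    R = np.exp(logp)
    R /= R.sum(axis=1, keepdims=True)             # N x n_comp

    # M-step: refit each component from its responsibility-weighted covariance.
    for i in range(n_comp):
        r = R[:, i]
        Ni = r.sum()
        pi[i] = Ni / N
        mu[i] = (r[:, None] * X).sum(axis=0) / Ni
        Xc = X - mu[i]
        S = (r[:, None] * Xc).T @ Xc / Ni          # local weighted covariance
        evals, evecs = np.linalg.eigh(S)
        evals, evecs = evals[::-1], evecs[:, ::-1] # descending order
        sigma2[i] = evals[q:].mean()               # discarded variance
        W[i] = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2[i], 0.0))

print(pi, sigma2)
```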
Abstract:
Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.
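For concreteness, here is a small, hedged Python illustration of the maximum-likelihood PPCA solution described above, using the standard closed-form expressions (noise variance as the mean of the discarded eigenvalues, weights from the leading eigenvectors); the data are synthetic and the dimensions are arbitrary.

```python
# Closed-form maximum-likelihood PPCA (illustrative sketch).
import numpy as np

rng = np.random.default_rng(2)
N, d, q = 500, 5, 2                     # samples, data dimension, latent dimension

# Synthetic data generated from a 2-D latent subspace plus isotropic noise.
Z = rng.normal(size=(N, q))
A = rng.normal(size=(q, d))
X = Z @ A + 0.2 * rng.normal(size=(N, d))

mu = X.mean(axis=0)
S = np.cov(X - mu, rowvar=False)                    # sample covariance
evals, evecs = np.linalg.eigh(S)
evals, evecs = evals[::-1], evecs[:, ::-1]          # descending order

sigma2 = evals[q:].mean()                           # ML noise variance
W = evecs[:, :q] * np.sqrt(evals[:q] - sigma2)      # ML weight matrix (up to rotation)

# The model covariance W W^T + sigma2 I captures the leading subspace of S.
C = W @ W.T + sigma2 * np.eye(d)
print("sigma2 =", round(sigma2, 4))
print("max |C - S| =", np.abs(C - S).max())
```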
Abstract:
This thesis describes the Generative Topographic Mapping (GTM), a non-linear latent variable model intended for modelling continuous, intrinsically low-dimensional probability distributions embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map, a widely established neural network model for unsupervised learning, resolving many of its associated theoretical problems. An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points. There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model; however, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an 'interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model. The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.
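As a hedged illustration of the magnification-factor idea described above (not the thesis's code), the sketch below estimates, by finite differences, how much a non-linear mapping from a 1-D latent space stretches small latent distances when they are mapped into data space; the mapping itself is an arbitrary example.

```python
# Illustrative magnification-factor estimate for a 1-D latent space.
import numpy as np

def mapping(u):
    """Arbitrary example of a non-linear mapping from latent (1-D) to data space (2-D)."""
    return np.array([u, np.sin(3.0 * u)])

def magnification_factor(u, h=1e-5):
    """sqrt(det(J^T J)) estimated by central finite differences.

    For a 1-D latent space this reduces to the norm of dy/du, i.e. how much a
    small latent step is stretched when mapped into data space.
    """
    J = (mapping(u + h) - mapping(u - h)) / (2.0 * h)   # Jacobian column, shape (2,)
    return np.sqrt(J @ J)

for u in np.linspace(-1, 1, 5):
    print(f"u = {u:+.2f}  magnification = {magnification_factor(u):.3f}")
```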
Abstract:
Bove, Pervan, Beatty, and Shiu [Bove, LL, Pervan, SJ, Beatty, SE, Shiu, E. Service worker role in encouraging customer organizational citizenship behaviors. J Bus Res 2009;62(7):698–705.] develop and test a latent variable model of the role of service workers in encouraging customers' organizational citizenship behaviors. However, Bove et al. (2009) claim support for hypothesized relationships between constructs that, due to insufficient discriminant validity regarding certain constructs, may be inaccurate. This research comment discusses what discriminant validity represents and the procedures for establishing it, and presents an example of inaccurate discriminant validity assessment based upon the work of Bove et al. (2009). Solutions to discriminant validity problems and a five-step procedure for assessing discriminant validity conclude the paper. This comment aims to motivate a review of discriminant validity issues and to assist future researchers conducting latent variable analyses.
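One widely used check for the discriminant-validity question discussed above is the Fornell-Larcker criterion: each construct's average variance extracted (AVE) should exceed its squared correlation with every other construct. The Python sketch below illustrates the arithmetic with made-up loadings and correlations; it does not reproduce the data or the procedure of Bove et al. (2009).

```python
# Illustrative Fornell-Larcker check with made-up numbers (not the cited study's data).
import numpy as np

# Standardized indicator loadings per construct (hypothetical).
loadings = {
    "service_worker_role": np.array([0.78, 0.81, 0.74]),
    "customer_citizenship": np.array([0.70, 0.72, 0.69, 0.75]),
}

# Hypothetical latent correlation between the two constructs.
phi = 0.68

def ave(lam):
    """Average variance extracted: mean squared standardized loading."""
    return np.mean(lam ** 2)

for name, lam in loadings.items():
    print(f"AVE({name}) = {ave(lam):.3f}")

shared = phi ** 2
print(f"Squared inter-construct correlation = {shared:.3f}")

# Discriminant validity (Fornell-Larcker) holds if each AVE exceeds the
# squared correlation between the constructs.
ok = all(ave(lam) > shared for lam in loadings.values())
print("Discriminant validity supported:", ok)
```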
Abstract:
Visualization of high-dimensional data has always been a challenging task. Here we discuss and propose variants of non-linear data projection methods (Generative Topographic Mapping (GTM) and GTM with simultaneous feature saliency (GTM-FS)) that are adapted to be effective on very high-dimensional data. The adaptations use log-space values at certain steps of the Expectation Maximization (EM) algorithm and during the visualization process. We have tested the proposed algorithms by visualizing electrostatic potential data for Major Histocompatibility Complex (MHC) class-I proteins. The experiments show that the adapted versions of GTM and GTM-FS work successfully with data of more than 2000 dimensions, and we compare the results with other linear and non-linear projection methods: Principal Component Analysis (PCA), Neuroscale (NSC), and the Gaussian Process Latent Variable Model (GPLVM).
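The log-space adaptation mentioned above is, in essence, the standard log-sum-exp trick for computing mixture responsibilities without underflow when squared distances become very large, as happens in thousands of dimensions. A small hedged Python illustration, not the authors' implementation:

```python
# Log-space responsibility computation (log-sum-exp trick), illustrative only.
import numpy as np
from scipy.special import logsumexp

rng = np.random.default_rng(3)

# In very high dimensions, squared distances (and hence negative log-likelihoods)
# are so large that exp() underflows to zero in a naive implementation.
log_unnorm = -0.5 * rng.uniform(2000, 4000, size=(5, 4))   # 5 mixture components, 4 points

with np.errstate(invalid="ignore"):
    naive = np.exp(log_unnorm)
    naive_resp = naive / naive.sum(axis=0, keepdims=True)   # 0/0 -> NaN

log_resp = log_unnorm - logsumexp(log_unnorm, axis=0, keepdims=True)
stable_resp = np.exp(log_resp)                               # sums to 1 per column

print("naive has NaNs:", np.isnan(naive_resp).any())
print("stable column sums:", stable_resp.sum(axis=0))
```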
Abstract:
The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
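For context, the cost that motivates the sparse approach above comes from storing and factorizing the full n-by-n covariance matrix in the standard Gaussian-process (kriging) predictive equations. The sketch below shows those standard equations on synthetic 1-D data; it is not the paper's sequential sparse algorithm, only the O(n^3) baseline whose cost that algorithm is designed to avoid.

```python
# Standard GP (kriging) prediction on synthetic data; O(n^3) in the number of samples.
import numpy as np

rng = np.random.default_rng(4)

def rbf_kernel(a, b, lengthscale=0.5, variance=1.0):
    d2 = (a[:, None] - b[None, :]) ** 2
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

# Synthetic observations.
x = rng.uniform(-3, 3, size=50)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
noise = 0.1 ** 2

x_star = np.linspace(-3, 3, 200)

K = rbf_kernel(x, x) + noise * np.eye(x.size)   # n x n covariance: storing and
K_star = rbf_kernel(x_star, x)                  # factorizing K is the bottleneck
L = np.linalg.cholesky(K)

alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
mean = K_star @ alpha                                               # predictive mean
v = np.linalg.solve(L, K_star.T)
var = rbf_kernel(x_star, x_star).diagonal() - (v ** 2).sum(axis=0)  # predictive variance

print(mean[:3], var[:3])
```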
Abstract:
This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has increased considerably in recent years. Traditional methods, which look at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, have not been feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those of high dimensionality. So far, a few visualisation techniques for drug design have been developed, but most of them cope with only a few properties of compounds at one time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can picture the distribution of the data from magnification factor and curvature plots. Rather than obtaining useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive (top-down) fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail, and at each stage of hierarchical LTM construction the EM algorithm alternates between the E-step and the M-step. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
Abstract:
This paper presents a comparative study of three closely related Bayesian models for unsupervised document-level sentiment classification, namely the latent sentiment model (LSM), the joint sentiment-topic (JST) model, and the Reverse-JST model. Extensive experiments have been conducted on two corpora, the movie review dataset and the multi-domain sentiment dataset. It has been found that, while all three models achieve either better or comparable performance on these two corpora when compared to existing unsupervised sentiment classification approaches, both JST and Reverse-JST are able to extract sentiment-oriented topics. In addition, Reverse-JST always performs worse than JST, suggesting that the JST model is more appropriate for joint sentiment-topic detection.