28 results for Latent variable


Relevance:

60.00%

Publisher:

Abstract:

The principled statistical application of Gaussian random field models used in geostatistics has historically been limited to data sets of a small size. This limitation is imposed by the requirement to store and invert the covariance matrix of all the samples to obtain a predictive distribution at unsampled locations, or to use likelihood-based covariance estimation. Various ad hoc approaches to solve this problem have been adopted, such as selecting a neighborhood region and/or a small number of observations to use in the kriging process, but these have no sound theoretical basis and it is unclear what information is being lost. In this article, we present a Bayesian method for estimating the posterior mean and covariance structures of a Gaussian random field using a sequential estimation algorithm. By imposing sparsity in a well-defined framework, the algorithm retains a subset of “basis vectors” that best represent the “true” posterior Gaussian random field model in the relative entropy sense. This allows a principled treatment of Gaussian random field models on very large data sets. The method is particularly appropriate when the Gaussian random field model is regarded as a latent variable model, which may be nonlinearly related to the observations. We show the application of the sequential, sparse Bayesian estimation in Gaussian random field models and discuss its merits and drawbacks.
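
A minimal kriging sketch may help fix ideas (an assumed NumPy illustration, not the paper's sequential relative-entropy algorithm): the predictive distribution at unsampled locations requires inverting the covariance matrix of the retained observations, which is exactly the cost that motivates keeping only a small set of basis vectors.

import numpy as np

def rbf_kernel(A, B, ell=1.0, sf2=1.0):
    # Squared-exponential covariance between two sets of locations (n, d) and (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return sf2 * np.exp(-0.5 * d2 / ell ** 2)

def gp_predict(X_basis, y_basis, X_new, noise=1e-2):
    # Predictive mean and covariance at unsampled locations, using only the
    # retained subset of observations ("basis vectors").
    K = rbf_kernel(X_basis, X_basis) + noise * np.eye(len(X_basis))
    Ks = rbf_kernel(X_new, X_basis)
    mean = Ks @ np.linalg.solve(K, y_basis)
    cov = rbf_kernel(X_new, X_new) - Ks @ np.linalg.solve(K, Ks.T)
    return mean, cov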

Relevance:

60.00%

Publisher:

Abstract:

Personal selling and sales management play a critical role in the short- and long-term success of the firm, and have thus received substantial academic interest since the 1970s. Sales research has examined the role of the sales manager in some depth, defining a number of key technical and interpersonal roles which sales managers have in influencing sales force effectiveness. However, one aspect of sales management which appears to remain unexplored is that of sales managers' resolution of salesperson-related problems. This study represents the first attempt to address this gap by reporting on the conceptual and empirical development of an instrument designed to measure sales managers' problem resolution styles. A comprehensive literature review and qualitative research study identified three key constructs relating to sales managers' problem resolution styles. The three constructs identified were termed: sales manager willingness to respond, sales manager caring, and sales manager aggressiveness. Building on this, existing literature was used to develop a conceptual model of salesperson-specific consequences of the three problem resolution style constructs. The quantitative phase of the study consisted of a mail survey of UK salespeople, achieving a total sample of 140 fully usable responses. Rigorous statistical assessment of the sales manager problem resolution style measures was undertaken, and construct validity examined. Following this, the conceptual model was tested using latent variable path analysis. The results for the model were encouraging overall, and also with regard to the individual hypotheses. Sales manager problem resolution styles were found individually to have significant impacts on the salesperson-specific variables of role ambiguity, emotional exhaustion, job satisfaction, organisational commitment and organisational citizenship behaviours. The findings, theoretical and managerial implications, limitations and directions for future research are discussed.
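
For readers unfamiliar with latent variable path analysis, a hedged sketch of how such a model might be specified in Python with the semopy package follows; the construct names, indicator names and data file are hypothetical placeholders, not the study's actual measures or data.

import pandas as pd
import semopy

# Hypothetical latent variable path model in lavaan-style syntax: three
# problem-resolution-style constructs (each with three illustrative indicators)
# predicting one salesperson outcome.
model_desc = """
willingness =~ w1 + w2 + w3
caring =~ c1 + c2 + c3
aggressiveness =~ a1 + a2 + a3
job_satisfaction =~ j1 + j2 + j3
job_satisfaction ~ willingness + caring + aggressiveness
"""

data = pd.read_csv("survey_responses.csv")   # assumed file of item scores
model = semopy.Model(model_desc)
model.fit(data)
print(model.inspect())                       # loadings and structural path estimates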

Relevance:

60.00%

Publisher:

Abstract:

Strategic planning and, more specifically, the impact of strategic planning on organisational performance has been the subject of significant academic interest since the early 1970s. However, despite the significant amount of previous work examining the relationship between strategic planning and organisational performance, a comprehensive literature review identified a number of areas where contributions to the domain of study could be made. In overview, the main areas for further study identified from the literature review were a) a further examination of both the dimensionality and conceptualisation of strategic planning and organisational performance, and b) a further, multivariate examination of the relationship between strategic planning and performance, to capture the newly identified dimensionality. In addition to the previously identified strategic planning and organisational performance constructs, a comprehensive literature-based assessment was undertaken and five main areas were identified for further examination: a) organisational, b) comprehensive strategic choice, c) the quality of strategic options generated, d) political behaviour and e) implementation success. From this, a conceptual model incorporating a set of hypotheses to be tested was formulated. In order to test the conceptual model and the stated hypotheses, data gathering was undertaken. The quantitative phase of the research involved a mail survey of senior managers in medium to large UK-based organisations, from which a total of 366 fully usable responses were received. Following rigorous individual construct validity and reliability testing, the complete conceptual model was tested using latent variable path analysis. The results for the individual hypotheses and also the complete conceptual model were most encouraging. The findings, theoretical and managerial implications, limitations and directions for future research are discussed.

Relevance:

60.00%

Publisher:

Abstract:

It is evident that empowerment is in widespread use as a management tool in international organisations. A comprehensive literature review identified that empowerment exists as two distinct constructs: relational empowerment and psychological empowerment. Building on this delineation, existing literature was used to develop a conceptual model of the antecedents and consequences of the two empowerment constructs. Furthermore, the impact of national culture was considered, resulting in a set of testable hypotheses concerning the cross-cultural differences in the relationships between empowerment and its antecedents and consequences. A quantitative study was undertaken to test the hypothesised conceptual model. Data were collected in India and the UK via drop-off, self-administered surveys of front-line employees of both an indigenous and a multinational bank in each culture, achieving a total of 626 fully usable responses across the four samples. Rigorous scale development for all samples was undertaken and measurement invariance examined. Following this, the conceptual model was tested using latent variable path analysis. The results for the model were both encouraging and surprising. Similar results regarding the effects of relational empowerment and psychological empowerment were found across the two cultures. However, an examination of the antecedents to relational empowerment produced significantly different results across the cultures. Relational empowerment was found to have higher practical value, as it had a significant positive effect on employee job satisfaction levels across both cultures.

Relevance:

60.00%

Publisher:

Abstract:

This thesis applies a hierarchical latent trait model system to a large quantity of data. The motivation for it was the lack of viable approaches for analysing High Throughput Screening datasets, which may include thousands of data points of high dimensionality. High Throughput Screening (HTS) is an important tool in the pharmaceutical industry for discovering leads which can be optimised and further developed into candidate drugs. Since the development of new robotic technologies, the ability to test the activities of compounds has considerably increased in recent years. Traditional methods, looking at tables and graphical plots to analyse relationships between measured activities and the structure of compounds, are not feasible when facing a large HTS dataset. Instead, data visualisation provides a method for analysing such large datasets, especially those of high dimensionality. So far, a few visualisation techniques for drug design have been developed, but most of them can cope with only a few properties of compounds at a time. We believe that a latent variable model with a non-linear mapping from the latent space to the data space is a preferred choice for visualising a complex high-dimensional data set. As a type of latent variable model, the latent trait model (LTM) can deal with either continuous or discrete data, which makes it particularly useful in this domain. In addition, with the aid of differential geometry, we can examine the distribution of the data through magnification factor and curvature plots. Rather than obtaining the useful information from just a single plot, a hierarchical LTM arranges a set of LTMs and their corresponding plots in a tree structure. We model the whole data set with an LTM at the top level, which is broken down into clusters at deeper levels of the hierarchy. In this manner, refined visualisation plots can be displayed at deeper levels and sub-clusters may be found. The hierarchy of LTMs is trained using the expectation-maximisation (EM) algorithm to maximise its likelihood with respect to the data sample. Training proceeds interactively in a recursive (top-down) fashion: the user subjectively identifies interesting regions on the visualisation plot that they would like to model in greater detail, and at each stage of hierarchical LTM construction the EM algorithm alternates between the E- and M-steps. Another problem that can occur when visualising a large data set is that there may be significant overlaps of data clusters, making it very difficult for the user to judge where the centres of regions of interest should be placed. We address this problem by employing the minimum message length technique, which can help the user to decide the optimal structure of the model. In this thesis we also demonstrate the applicability of the hierarchy of latent trait models in the field of document data mining.
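
As a rough illustration of the EM alternation described above, the following is a minimal sketch of one EM pass for a GTM-style model with Gaussian outputs. It is a simplified, assumed analogue of the latent trait models used in the thesis (latent trait models use discrete-output likelihoods and lack a closed-form M-step), and the variable names are illustrative, not the thesis code.

import numpy as np

def gtm_em_step(X, Phi, W, beta, alpha=1e-3):
    # One EM pass. X: (N, D) data; Phi: (K, M) RBF basis evaluated at the K
    # latent grid points; W: (M, D) mapping weights; beta: noise precision.
    Y = Phi @ W                                              # (K, D) images of the latent grid
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)     # (N, K) squared distances
    log_r = -0.5 * beta * d2
    R = np.exp(log_r - log_r.max(axis=1, keepdims=True))
    R /= R.sum(axis=1, keepdims=True)                        # E-step: responsibilities
    G = np.diag(R.sum(axis=0))                               # (K, K) total responsibility per grid point
    A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
    W_new = np.linalg.solve(A, Phi.T @ (R.T @ X))            # M-step: mapping weights
    d2_new = ((X[:, None, :] - (Phi @ W_new)[None, :, :]) ** 2).sum(-1)
    beta_new = X.size / (R * d2_new).sum()                   # M-step: noise precision
    return W_new, beta_new, R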

Relevance:

60.00%

Publisher:

Abstract:

This thesis presents the results from an investigation into the merits of analysing Magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study of both the methods for the measurement of minute magnetic flux variations at the scalp, resulting from neuro-electric activity in the neocortex, and the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - by directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, being hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands that are commonly referred to in both academic and lay publications. Other efforts have centred upon generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which result in the observed time series are linear. This is despite a variety of reasons which suggest that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract brain cognitive state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators. One of the main objectives of this thesis is to show that much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in the MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratios that are obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are, necessarily, extremely sensitive. The unfortunate side-effect of this is that even commonplace phenomena such as the Earth's geomagnetic field can easily swamp signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and often these signals will be cancelled out by the averaging process. Other problems that have been encountered are the high cost and low portability of state-of-the-art multichannel machines. The result of this is that the use of MEG has, hitherto, been restricted to large institutions which are able to afford the high costs associated with the procurement and maintenance of these machines.
In this project, we seek to address these issues by working almost exclusively with single channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating from the fields of signal processing, dynamical systems, information theory and neural networks, to the analysis of MEG data. It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
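
A common first step in dynamical-systems analysis of a single-channel time series is state-space reconstruction by delay embedding; the sketch below is an assumed illustration of that generic step (the embedding dimension, lag and synthetic signal are placeholders, not the thesis data or its chosen parameters; in practice the lag and dimension would be selected via, for example, mutual information and false nearest neighbours).

import numpy as np

def delay_embed(x, dim=5, lag=3):
    # Stack lagged copies of the signal to reconstruct state vectors.
    n = len(x) - (dim - 1) * lag
    return np.stack([x[i * lag : i * lag + n] for i in range(dim)], axis=1)

t = np.arange(5000)
# Stand-in for one unaveraged MEG channel: a slow oscillation plus noise.
signal = np.sin(0.07 * t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
states = delay_embed(signal, dim=5, lag=3)   # (n, 5) reconstructed state vectors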

Relevance:

60.00%

Publisher:

Abstract:

Cadogan and Lee (this issue) discuss the problems inherent in modeling formative latent variables as endogenous. In response to the commentaries by Rigdon (this issue) and Finn and Wang (this issue), the present article extends the discussion on formative measures. First, the article shows that regardless of whether statistical identification is achieved, researchers are unable to illuminate the nature of a formative latent variable. Second, the study clarifies issues regarding formative indicator weighting, highlighting that the weightings of formative components should be specified as part of the construct definition. Finally, the study shows that higher-order reflective constructs are invalid, highlights the damage their use can inflict on theory development and knowledge accumulation, and provides recommendations on a number of alternative models which should be used in their place (including the formative model). © 2012 Elsevier Inc.

Relevance:

60.00%

Publisher:

Abstract:

Researchers often develop and test conceptual models containing formative variables. In many cases, these formative variables are specified as being endogenous. This article provides a clarification of formative variable theory, distinguishing between the formative latent variable and the formative composite variable. When an endogenous latent variable relies on formative indicators for measurement, empirical studies can say nothing about the relationship between exogenous variables and the endogenous formative latent variable: conclusions can only be drawn regarding the exogenous variables' relationships with a composite variable. The authors also show the dangers associated with developing theory about antecedents to endogenous formative variables at the (aggregate) formative latent variable level. Modeling relationships with endogenous formative variables at the (disaggregate) indicator level informs richer theory development, and encourages more precise empirical testing. When antecedents' relationships with endogenous formative variables are modeled at the formative latent variable level rather than the formative indicator level, theory construction can verge on the superficial, and empirical findings can be ambiguous in substantive meaning.
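
In conventional structural equation modelling notation, the distinction drawn here between reflective indicators, a formative latent variable, and a formative composite variable can be summarised as follows (a standard-notation sketch, not equations taken from the article):

\begin{aligned}
\text{reflective:} \quad & x_i = \lambda_i \eta + \varepsilon_i \\
\text{formative latent:} \quad & \eta = \textstyle\sum_i \gamma_i x_i + \zeta \quad (\zeta \text{ is a disturbance term}) \\
\text{formative composite:} \quad & C = \textstyle\sum_i w_i x_i \quad (\text{no disturbance; fully determined by its indicators})
\end{aligned}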

Relevance:

60.00%

Publisher:

Abstract:

Phonological tasks are highly predictive of reading development, but their complexity obscures the underlying mechanisms driving this association. There are three key components hypothesised to drive the relationship between phonological tasks and reading: (a) the linguistic nature of the stimuli, (b) the phonological complexity of the stimuli, and (c) the production of a verbal response. We isolated the contribution of the stimulus and response components separately through the creation of latent variables to represent specially designed tasks that were matched for procedure. These tasks were administered to 570 6- to 7-year-old children along with standardised tests of regular word and non-word reading. A structural equation model, where tasks were grouped according to stimulus, revealed that the linguistic nature and the phonological complexity of the stimulus predicted unique variance in decoding, over and above matched comparison tasks without these components. An alternative model, grouped according to response mode, showed that the production of a verbal response was a unique predictor of decoding beyond matched tasks without a verbal response. In summary, we found that multiple factors contributed to reading development, supporting multivariate models over those that prioritise single factors. More broadly, we demonstrate the value of combining matched task designs with latent variable modelling to deconstruct the components of complex tasks.

Relevance:

60.00%

Publisher:

Abstract:

Most machine-learning algorithms are designed for datasets with features of a single type, whereas very little attention has been given to datasets with mixed-type features. We recently proposed a model to handle mixed types with a probabilistic latent variable formalism. The proposed model describes the data by type-specific distributions that are conditionally independent given the latent space, and is called generalised generative topographic mapping (GGTM). It has often been observed that visualisations of high-dimensional datasets can be poor in the presence of noisy features. In this paper we therefore propose to extend the GGTM to estimate feature saliency values (GGTMFS) as an integrated part of the parameter learning process with an expectation-maximisation (EM) algorithm. The efficacy of the proposed GGTMFS model is demonstrated for both synthetic and real datasets.
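
As a rough illustration of the mixed-type structure described above (illustrative names only, not the GGTM implementation), the log-likelihood of one observation factorises over feature types given the parameters mapped from a latent point, for example Gaussian for continuous features and Bernoulli for binary ones:

import numpy as np

def mixed_loglik(x_cont, x_bin, mu, sigma2, p):
    # Continuous block: independent Gaussians with means mu and variances sigma2.
    ll_cont = -0.5 * np.sum(np.log(2 * np.pi * sigma2) + (x_cont - mu) ** 2 / sigma2)
    # Binary block: independent Bernoullis with success probabilities p.
    ll_bin = np.sum(x_bin * np.log(p) + (1 - x_bin) * np.log(1 - p))
    # Conditional independence given the latent space: the blocks simply add.
    return ll_cont + ll_bin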

Relevance:

60.00%

Publisher:

Abstract:

The focus of this thesis is the extension of topographic visualisation mappings to allow for the incorporation of uncertainty. Few visualisation algorithms in the literature are capable of mapping uncertain data, and fewer still are able to represent observation uncertainties in visualisations. As such, modifications are made to NeuroScale, Locally Linear Embedding, Isomap and Laplacian Eigenmaps to incorporate uncertainty in the observation and visualisation spaces. The proposed mappings are then called Normally-distributed NeuroScale (N-NS), T-distributed NeuroScale (T-NS), Probabilistic LLE (PLLE), Probabilistic Isomap (PIso) and Probabilistic Weighted Neighbourhood Mapping (PWNM). These algorithms generate a probabilistic visualisation space, with each latent visualised point transformed to a multivariate Gaussian or T-distribution using a feed-forward RBF network. Two types of uncertainty are then characterised, depending on the data and the mapping procedure. Data-dependent uncertainty is the inherent observation uncertainty, whereas mapping uncertainty is defined by the Fisher Information of a visualised distribution. This indicates how well the data has been interpolated, offering a level of ‘surprise’ for each observation. These new probabilistic mappings are tested on three datasets of vectorial observations and three datasets of real-world time series observations for anomaly detection. In order to visualise the time series data, a method for analysing observed signals and noise distributions, Residual Modelling, is introduced. The performance of the new algorithms on the tested datasets is compared qualitatively with the latent space generated by the Gaussian Process Latent Variable Model (GPLVM). A quantitative comparison using existing evaluation measures from the literature allows the performance of each mapping function to be compared. Finally, the mapping uncertainty measure is combined with NeuroScale to build a deep learning classifier, the Cascading RBF. This new structure is tested on the MNIST dataset, achieving world-record performance whilst avoiding the flaws seen in other deep learning machines.
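
A minimal sketch, under assumed names and shapes, of the kind of feed-forward RBF mapping described above, producing a mean and a per-dimension variance in the two-dimensional visualisation space for each observation (an illustration of the general idea, not the thesis implementation of N-NS or T-NS):

import numpy as np

def rbf_features(x, centres, width):
    # Radial basis activations of one observation against M fixed centres.
    d2 = ((x[None, :] - centres) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def visualise_point(x, centres, width, W_mean, W_logvar):
    phi = rbf_features(x, centres, width)   # (M,) basis activations
    mean = phi @ W_mean                     # (2,) mean of the visualised Gaussian
    var = np.exp(phi @ W_logvar)            # (2,) positive per-dimension variances
    return mean, var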

Relevance:

60.00%

Publisher:

Abstract:

The present research represents a coherent approach to understanding the root causes of ethnic group differences in ability test performance. Two studies were conducted, each of which was designed to address a key knowledge gap in the ethnic bias literature. In Study 1, both the LR Method of Differential Item Functioning (DIF) detection and Mixture Latent Variable Modelling were used to investigate the degree to which Differential Test Functioning (DTF) could explain ethnic group test performance differences in a large, previously unpublished dataset. Though mean test score differences were observed between a number of ethnic groups, neither technique was able to identify ethnic DTF. This calls into question the practical application of DTF to understanding these group differences. Study 2 investigated whether a number of non-cognitive factors might explain ethnic group test performance differences on a variety of ability tests. Two factors – test familiarity and trait optimism – were able to explain a large proportion of ethnic group test score differences. Furthermore, test familiarity was found to mediate the relationship between socio-economic factors – particularly participant educational level and familial social status – and test performance, suggesting that test familiarity develops over time through the mechanism of exposure to ability testing in other contexts. These findings represent a substantial contribution to the field’s understanding of two key issues surrounding ethnic test performance differences. The author calls for a new line of research into these performance facilitating and debilitating factors, before recommendations are offered for practitioners to ensure fairer deployment of ability testing in high-stakes selection processes.
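
For context, the logistic regression (LR) method of DIF detection compares nested models of a dichotomous item response on the matching total score, a group term, and their interaction, using likelihood-ratio tests for uniform and non-uniform DIF. A hedged sketch with simulated placeholder data (not the study's dataset) follows.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
group = rng.integers(0, 2, n)                                   # placeholder group indicator
total = rng.normal(0.0, 1.0, n)                                 # matching criterion (total score)
item = (rng.random(n) < 1.0 / (1.0 + np.exp(-0.8 * total))).astype(int)
df = pd.DataFrame({"item": item, "total": total, "group": group})

m1 = smf.logit("item ~ total", df).fit(disp=0)                  # baseline model
m2 = smf.logit("item ~ total + C(group)", df).fit(disp=0)       # adds uniform DIF term
m3 = smf.logit("item ~ total * C(group)", df).fit(disp=0)       # adds non-uniform DIF term

lr_uniform = 2 * (m2.llf - m1.llf)        # likelihood-ratio statistic, 1 df for two groups
lr_nonuniform = 2 * (m3.llf - m2.llf)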

Relevance:

60.00%

Publisher:

Abstract:

Popular dimension reduction and visualisation algorithms, for instance Metric Multidimensional Scaling, t-distributed Stochastic Neighbour Embedding and the Gaussian Process Latent Variable Model, rely on the assumption that input dissimilarities are Euclidean. It is well known that this assumption does not hold for most datasets, and high-dimensional data often sits upon a manifold of unknown global geometry. We present a method for improving the manifold charting process, coupled with Elastic MDS, such that we no longer assume that the manifold is Euclidean, or of any particular structure. We draw on the benefits of different dissimilarity measures, allowing the relative responsibilities, under a linear combination, to drive the visualisation process.
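
As a rough sketch of the underlying idea (using scikit-learn's metric MDS as a stand-in; the paper's Elastic MDS and its responsibility weighting are not reproduced here, and the data, measures and weight are placeholders), an embedding can be driven by a weighted combination of dissimilarity measures rather than a single Euclidean one:

import numpy as np
from sklearn.manifold import MDS
from sklearn.metrics import pairwise_distances

X = np.random.default_rng(1).normal(size=(200, 10))     # placeholder data
D_euc = pairwise_distances(X, metric="euclidean")
D_cos = pairwise_distances(X, metric="cosine")
w = 0.6                                                  # assumed relative weight of each measure
D = w * D_euc + (1 - w) * D_cos                          # combined dissimilarity matrix
embedding = MDS(n_components=2, dissimilarity="precomputed").fit_transform(D)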