Abstract:
It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system PhiVis [Bishop98a] in several directions: (1) We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping (GTM). (2) We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. (3) Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the ancestor visualization plots which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 18-dimensional data sets.
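As a rough illustration of the non-linear building block named above, the sketch below runs a generic single-GTM EM loop in NumPy: a 2-D latent grid is mapped through RBF basis functions into data space, responsibilities are computed in the E-step, and the mapping weights and noise precision are re-estimated in the M-step. Grid sizes, basis widths and the regulariser are illustrative assumptions; the paper's hierarchical training, tree organisation and curvature expressions are not reproduced here.

```python
# Minimal sketch of one GTM (not the paper's hierarchical system).
import numpy as np

def gtm_em(X, n_iter=30, grid=10, n_rbf=4, sigma=1.0, alpha=1e-3, seed=0):
    N, D = X.shape
    rng = np.random.default_rng(seed)
    g = np.linspace(-1, 1, grid)
    Z = np.array([[a, b] for a in g for b in g])              # K x 2 latent grid
    c = np.linspace(-1, 1, n_rbf)
    C = np.array([[a, b] for a in c for b in c])              # M x 2 RBF centres
    Phi = np.exp(-0.5 * ((Z[:, None, :] - C[None]) ** 2).sum(-1) / sigma**2)
    Phi = np.hstack([Phi, np.ones((len(Z), 1))])              # bias column
    W = rng.normal(scale=0.1, size=(Phi.shape[1], D))         # mapping weights
    beta = 1.0                                                # noise precision
    for _ in range(n_iter):
        Y = Phi @ W                                           # K x D projections
        d2 = ((X[None] - Y[:, None]) ** 2).sum(-1)            # K x N squared dists
        R = np.exp(-0.5 * beta * (d2 - d2.min(0, keepdims=True)))
        R /= R.sum(0, keepdims=True)                          # responsibilities (E-step)
        G = np.diag(R.sum(1))
        A = Phi.T @ G @ Phi + (alpha / beta) * np.eye(Phi.shape[1])
        W = np.linalg.solve(A, Phi.T @ R @ X)                 # M-step for W
        beta = X.size / (R * ((X[None] - (Phi @ W)[:, None]) ** 2).sum(-1)).sum()
    return W, Phi, Z, beta

X = np.random.default_rng(1).normal(size=(200, 3))            # toy data
W, Phi, Z, beta = gtm_em(X)
```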
Abstract:
We have recently developed a principled approach to interactive non-linear hierarchical visualization [8] based on the Generative Topographic Mapping (GTM). Hierarchical plots are needed when a single visualization plot is not sufficient (e.g. when dealing with large quantities of data). In this paper we extend our system by giving the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in [8], whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of GTMs is used. The latter is particularly useful when the plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a data set of 2300 18-dimensional points and mention an extension of our system to accommodate discrete data types.
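The "automatic mode" described above amounts to an unsupervised, complexity-penalised choice of how many child models to create. The sketch below shows only the shape of that selection loop: a Gaussian mixture scored by BIC stands in for the paper's mixture of GTMs scored by an MML message length, so both the model family and the criterion are stand-in assumptions.

```python
# Hedged sketch of automatic child-plot selection (BIC as a stand-in for MML).
import numpy as np
from sklearn.mixture import GaussianMixture

def choose_children(points_2d, max_children=6, seed=0):
    best_k, best_score, best_model = None, np.inf, None
    for k in range(1, max_children + 1):
        gm = GaussianMixture(n_components=k, random_state=seed).fit(points_2d)
        score = gm.bic(points_2d)            # stand-in for the MML message length
        if score < best_score:
            best_k, best_score, best_model = k, score, gm
    return best_k, best_model

proj = np.random.default_rng(2).normal(size=(300, 2))   # projections in a parent plot
k, model = choose_children(proj)
print("suggested number of child plots:", k)
```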
Abstract:
An interactive hierarchical Generative Topographic Mapping (HGTM) [HGTM] has been developed to visualise complex data sets. In this paper, we build a more general visualisation system by extending the HGTM visualisation system in three directions: (1) We generalize HGTM to noise models from the exponential family of distributions. The basic building block is the Latent Trait Model (LTM) developed in [Kabanpami]. (2) We give the user a choice of initializing the child plots of the current plot in either interactive or automatic mode. In the interactive mode the user interactively selects "regions of interest" as in [HGTM], whereas in the automatic mode an unsupervised minimum message length (MML)-driven construction of a mixture of LTMs is employed. (3) We derive general formulas for magnification factors in latent trait models. Magnification factors are a useful tool to improve our understanding of the visualisation plots, since they can highlight the boundaries between data clusters. The unsupervised construction is particularly useful when high-level plots are covered with dense clusters of highly overlapping data projections, making it difficult to use the interactive mode. Such a situation often arises when visualizing large data sets. We illustrate our approach on a toy example and apply our system to three more complex real data sets.
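A magnification factor measures how much the latent-to-data-space mapping locally stretches area: sqrt(det(J^T J)), with J the Jacobian of the mapping at a latent point. The paper derives closed forms for latent trait models; the sketch below only illustrates the quantity on an assumed toy RBF mapping, using finite differences for the Jacobian.

```python
# Illustrative magnification factor sqrt(det(J^T J)) for a toy 2-D -> 5-D map.
import numpy as np

def rbf_map(z, centres, W, sigma=0.5):
    phi = np.exp(-0.5 * ((z - centres) ** 2).sum(-1) / sigma**2)   # (M,)
    return phi @ W                                                  # (D,)

def magnification(z, centres, W, eps=1e-5):
    J = np.empty((W.shape[1], 2))                                   # D x 2 Jacobian
    for i in range(2):
        dz = np.zeros(2); dz[i] = eps
        J[:, i] = (rbf_map(z + dz, centres, W) - rbf_map(z - dz, centres, W)) / (2 * eps)
    return np.sqrt(np.linalg.det(J.T @ J))

rng = np.random.default_rng(3)
centres = rng.uniform(-1, 1, size=(9, 2))
W = rng.normal(size=(9, 5))                  # toy weights mapping latent to data space
print(magnification(np.zeros(2), centres, W))
```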
Abstract:
A new principled, domain-independent watermarking framework is presented. The new approach is based on embedding the message in statistically independent sources of the covertext to minimise covertext distortion, maximise the information embedding rate and improve the method's robustness against various attacks. Experiments comparing the performance of the new approach on several standard attacks show the proposed approach to be competitive with other state-of-the-art domain-specific methods.
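To make the core idea concrete, the sketch below separates a toy covertext into independent components with FastICA, shifts a few component means to encode message bits, mixes back, and decodes by recovering the component means. The component choice, embedding strength and decoding rule are illustrative assumptions, not the paper's scheme.

```python
# Very simplified sketch: embed bits in independent components of a covertext.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(4)
cover = rng.normal(size=(256, 8))            # toy covertext as 8 observed mixtures
bits = np.array([1, 0, 1, 1])                # message to embed

ica = FastICA(n_components=8, random_state=0)
S = ica.fit_transform(cover)                 # statistically independent sources
strength = 0.05 * S.std(0)
for i, b in enumerate(bits):                 # encode each bit in one source's mean
    S[:, i] += strength[i] * (1 if b else -1)
stego = ica.inverse_transform(S)             # watermarked covertext

S_rec = ica.transform(stego)                 # receiver sharing the same mixing model
decoded = (S_rec[:, :len(bits)].mean(0) > 0).astype(int)
print(decoded)
```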
Abstract:
It has been argued that a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex data sets, and therefore a hierarchical visualization system is desirable. In this paper we extend an existing locally linear hierarchical visualization system PhiVis (Bishop98a) in several directions: 1. We allow for non-linear projection manifolds. The basic building block is the Generative Topographic Mapping. 2. We introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree. General training equations are derived, regardless of the position of the model in the tree. 3. Using tools from differential geometry we derive expressions for local directional curvatures of the projection manifold. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. It enables the user to interactively highlight those data in the parent visualization plot which are captured by a child model. We also incorporate into our system a hierarchical, locally selective representation of magnification factors and directional curvatures of the projection manifolds. Such information is important for further refinement of the hierarchical visualization plot, as well as for controlling the amount of regularization imposed on the local models. We demonstrate the principle of the approach on a toy data set and apply our system to two more complex 12- and 19-dimensional data sets.
Abstract:
We contend that powerful group studies can be conducted using magnetoencephalography (MEG), which can provide useful insights into the approximate distribution of the neural activity detected with MEG without requiring magnetic resonance imaging (MRI) for each participant. Instead, a participant's MRI is approximated with one chosen as a best match on the basis of the scalp surface from a database of available MRIs. Because large inter-individual variability in sulcal and gyral patterns is an inherent source of blurring in studies using grouped functional activity, the additional error introduced by this approximation procedure has little effect on the group results, and offers a sufficiently close approximation to that of the participants to yield a good indication of the true distribution of the grouped neural activity. T1-weighted MRIs of 28 adults were acquired in a variety of MR systems. An artificial functional image was prepared for each person in which eight 5 × 5 × 5 mm regions of brain activation were simulated. Spatial normalisation was applied to each image using transformations calculated using SPM99 with (1) the participant's actual MRI, and (2) the best matched MRI substituted from those of the other 27 participants. The distribution of distances between the locations of points using real and substituted MRIs had a modal value of 6 mm with 90% of cases falling below 12.5 mm. The effects of this approach on real grouped SAM source imaging of MEG data in a verbal fluency task are also shown. The distribution of MEG activity in the estimated average response is very similar to that produced when using the real MRIs. © 2003 Wiley-Liss, Inc.
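The best-match step described above can be pictured as a nearest-neighbour search over scalp surfaces. The sketch below scores each candidate head shape against the participant's digitised scalp points with a symmetric mean nearest-neighbour distance; the score and the random point clouds are assumptions for illustration only, not the matching criterion used in the study.

```python
# Hedged sketch: pick the database MRI whose scalp surface best matches the subject.
import numpy as np
from scipy.spatial import cKDTree

def surface_distance(a, b):
    d_ab = cKDTree(b).query(a)[0].mean()      # a -> b nearest-neighbour distances
    d_ba = cKDTree(a).query(b)[0].mean()      # b -> a
    return 0.5 * (d_ab + d_ba)

def best_match(subject_scalp, database):
    scores = [surface_distance(subject_scalp, cand) for cand in database]
    return int(np.argmin(scores)), min(scores)

rng = np.random.default_rng(5)
subject = rng.normal(size=(500, 3))                        # digitised scalp points (toy)
db = [rng.normal(size=(500, 3)) + 0.1 * i for i in range(5)]
idx, score = best_match(subject, db)
print("best matching MRI index:", idx, "mean surface distance:", round(score, 2))
```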
Abstract:
In this paper we present a radial basis function based extension to a recently proposed variational algorithm for approximate inference in diffusion processes. Inference of the state and, in particular, the (hyper-)parameters of diffusion processes is a challenging and crucial task. We show that the new radial basis function approximation based algorithm converges to the original algorithm and has beneficial characteristics when estimating (hyper-)parameters. We validate our new approach on a nonlinear double-well potential dynamical system.
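The sketch below illustrates only the representational idea behind the extension: a time-varying quantity (here an assumed stand-in for the variational parameters of the approximate posterior) is written as a weighted sum of radial basis functions and fitted by least squares. The variational smoothing algorithm itself is not reproduced.

```python
# RBF expansion of a time-varying function, fitted by least squares (illustrative).
import numpy as np

t = np.linspace(0, 10, 400)
target = np.tanh(t - 5)                         # stand-in for a parameter trajectory

centres = np.linspace(0, 10, 15)
width = 0.8
Phi = np.exp(-0.5 * ((t[:, None] - centres[None]) / width) ** 2)   # 400 x 15 design
w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
approx = Phi @ w
print("max absolute error:", np.abs(approx - target).max())
```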
Abstract:
Purpose – The purpose of this paper is to understand the internal branding process from the employees' perspective; it will empirically assess the relationship between internal branding and employees' delivery of the brand promise as well as the relationships among their brand identification, brand commitment and brand loyalty. Design/methodology/approach – On a census basis, a quantitative survey is carried out with 699 customer-interface employees from five major hotels. Findings – Internal branding is found to have a positive impact on attitudinal and behavioural aspects of employees in their delivery of the brand promise. As employees' brand commitment does not have a statistically significant relationship with employees' brand performance, it is not regarded as a mediator in the link between internal branding and employees' brand performance. Furthermore, the study shows that brand identification is a driver of brand commitment, which precedes brand loyalty of employees. Practical implications – A number of significant managerial implications are drawn from this study, for example using both internal communication and training to influence employees' brand-supporting attitudes and behaviours. Still, it should be noted that the effect of internal branding on the behaviours could be dependent on the extent to which it could effectively influence employees' brand attitudes. Originality/value – The results provide valuable insights from the key internal audience's perspectives into an internal branding process to ensure the delivery of the brand promise. It empirically shows the relationship between internal branding and the behavioural outcome as well as the mediational effects of employees' brand identification, commitment and loyalty.
Abstract:
We report statistical time-series analysis tools providing improvements in the rapid, precision extraction of discrete state dynamics from time traces of experimental observations of molecular machines. By building physical knowledge and statistical innovations into analysis tools, we provide techniques for estimating discrete state transitions buried in highly correlated molecular noise. We demonstrate the effectiveness of our approach on simulated and real examples of steplike rotation of the bacterial flagellar motor and the F1-ATPase enzyme. We show that our method can clearly identify molecular steps, periodicities and cascaded processes that are too weak for existing algorithms to detect, and can do so much faster than existing algorithms. Our techniques represent a step toward automated analysis of high-sample-rate molecular-machine dynamics. Modular, open-source software that implements these techniques is provided.
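To make the task concrete, the toy step finder below greedily splits a noisy trace into piecewise-constant segments while the best split reduces the squared error by more than a threshold. It is a naive baseline written for illustration; the published method additionally handles correlated noise and is considerably more statistically careful and faster.

```python
# Toy greedy step (changepoint) detector for a piecewise-constant trace.
import numpy as np

def find_steps(y, min_gain=5.0, min_size=5):
    def sse(seg):
        return ((seg - seg.mean()) ** 2).sum()
    segments, steps = [(0, len(y))], []
    while True:
        best = None
        for (a, b) in segments:
            for s in range(a + min_size, b - min_size):
                gain = sse(y[a:b]) - sse(y[a:s]) - sse(y[s:b])
                if best is None or gain > best[0]:
                    best = (gain, a, b, s)
        if best is None or best[0] < min_gain:
            return sorted(steps)
        _, a, b, s = best
        segments.remove((a, b))
        segments += [(a, s), (s, b)]
        steps.append(s)

rng = np.random.default_rng(6)
trace = np.concatenate([np.full(100, lvl) for lvl in (0, 1, 3)]) + 0.3 * rng.normal(size=300)
print(find_steps(trace))          # expected: steps near indices 100 and 200
```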
Abstract:
It is now a stylized fact that the importance of foreign direct investment for developing countries and emerging markets arises from the impact of the presence of multinational corporations (MNCs) in the host country on the productivity of local firms, by way of technology diffusion and competition. There is also general agreement that the extent of technology transfer by an MNC to a developing country affiliate depends on the extent of its control on the local affiliate and that, in turn, the extent of this control depends on the mode of entry of the MNC into the host country. However, the existing literature is based on the experience of developed countries and as such does not contribute to the literature on development economics. This article addresses this lacuna using unique firm-level data from South Africa and Egypt. Our results indicate that the determinants of entry mode choice not only differ between developed and developing countries, but also among developing countries. They also bring into question the role of MNCs in fostering productivity growth in developing countries.
Abstract:
Context/Motivation - Different modeling techniques have been used to model requirements and decision-making of self-adaptive systems (SASs). Specifically, goal models have been prolific in supporting decision-making depending on partial and total fulfilment of functional (goals) and non-functional requirements (softgoals). Different goal realization strategies can have different effects on softgoals which are specified with weighted contribution-links. The final decision about what strategy to use is based, among other reasons, on a utility function that takes into account the weighted sum of the different effects on softgoals. Questions/Problems - One of the main challenges about decision-making in self-adaptive systems is to deal with uncertainty during runtime. New techniques are needed to systematically revise the current model when empirical evidence becomes available from the deployment. Principal ideas/results - In this paper we enrich the decision-making supported by goal models by using Dynamic Decision Networks (DDNs). Goal realization strategies and their impact on softgoals correspond to decision alternatives and to conditional probabilities and expected utilities in the DDNs, respectively. Our novel approach allows the specification of preferences over the softgoals and supports reasoning about partial satisfaction of softgoals using probabilities. We report results of the application of the approach on two different cases. Our early results suggest the decision-making process of SASs can be improved by using DDNs. © 2013 Springer-Verlag.
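The decision rule outlined above reduces, at a single time step, to picking the strategy with the highest expected utility given softgoal satisfaction probabilities and preference weights. The sketch below shows exactly that step with made-up softgoals, strategies and probabilities; the belief update over time that a full DDN performs is omitted.

```python
# Expected-utility selection over adaptation strategies (illustrative values).
preferences = {"performance": 0.5, "battery_life": 0.3, "accuracy": 0.2}

strategies = {                      # P(softgoal satisfied | strategy), assumed
    "high_sampling": {"performance": 0.9, "battery_life": 0.4, "accuracy": 0.95},
    "low_sampling":  {"performance": 0.6, "battery_life": 0.9, "accuracy": 0.70},
}

def expected_utility(probs, prefs):
    return sum(prefs[g] * probs[g] for g in prefs)

best = max(strategies, key=lambda s: expected_utility(strategies[s], preferences))
for s, p in strategies.items():
    print(s, round(expected_utility(p, preferences), 3))
print("selected strategy:", best)
```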
Abstract:
Projection of a high-dimensional dataset onto a two-dimensional space is a useful tool to visualise structures and relationships in the dataset. However, a single two-dimensional visualisation may not display all the intrinsic structure. Therefore, hierarchical/multi-level visualisation methods have been used to extract more detailed understanding of the data. Here we propose a multi-level Gaussian process latent variable model (MLGPLVM). MLGPLVM works by segmenting data (with e.g. K-means, Gaussian mixture model or interactive clustering) in the visualisation space and then fitting a visualisation model to each subset. To measure the quality of multi-level visualisation (with respect to parent and child models), metrics such as trustworthiness, continuity, mean relative rank errors, visualisation distance distortion and the negative log-likelihood per point are used. We evaluate the MLGPLVM approach on the ‘Oil Flow’ dataset and a dataset of protein electrostatic potentials for the ‘Major Histocompatibility Complex (MHC) class I’ of humans. In both cases, visual observation and the quantitative quality measures have shown better visualisation at lower levels.
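The pipeline described above (segment the visualisation space, fit a child model per subset, score the child plots) is sketched below with stand-ins: K-means clusters the top-level projection, PCA replaces the GP-LVM child models, and scikit-learn's trustworthiness function supplies one of the quality metrics mentioned. The data and cluster count are assumptions for illustration.

```python
# Rough multi-level visualisation sketch (PCA standing in for GP-LVM children).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.manifold import trustworthiness

rng = np.random.default_rng(7)
X = rng.normal(size=(300, 12))                        # high-dimensional data (toy)

top = PCA(n_components=2).fit_transform(X)            # parent (top-level) plot
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(top)

for c in range(3):
    subset = X[labels == c]
    child = PCA(n_components=2).fit_transform(subset)  # child plot for this segment
    t = trustworthiness(subset, child, n_neighbors=5)
    print(f"cluster {c}: {len(subset)} points, trustworthiness = {t:.3f}")
```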
Abstract:
Significance: Oxidized phospholipids are now well-recognized as markers of biological oxidative stress and bioactive molecules with both pro-inflammatory and anti-inflammatory effects. While analytical methods continue to be developed for studies of generic lipid oxidation, mass spectrometry (MS) has underpinned the advances in knowledge of specific oxidized phospholipids by allowing their identification and characterization, and is responsible for the expansion of oxidative lipidomics. Recent Advances: Studies of oxidized phospholipids in biological samples, both from animal models and clinical samples, have been facilitated by the recent improvements in MS, especially targeted routines that depend on the fragmentation pattern of the parent molecular ion and improved resolution and mass accuracy. MS can be used to identify selectively individual compounds or groups of compounds with common features, which greatly improves the sensitivity and specificity of detection. Application of these methods has enabled important advances in understanding the mechanisms of inflammatory diseases such as atherosclerosis, steatohepatitis, leprosy and cystic fibrosis, and offers potential for developing biomarkers of molecular aspects of the diseases. Critical Issues and Future Directions: The future in this field will depend on development of improved MS technologies, such as ion mobility, novel enrichment methods, and databases and software for data analysis, owing to the very large amount of data generated in these experiments. Imaging of oxidized phospholipids in tissues by MS is an additional exciting direction that is emerging and can be expected to advance understanding of physiology and disease.
Abstract:
Sentiment analysis on Twitter has attracted much attention recently due to its wide applications in both commercial and public sectors. In this paper we present SentiCircles, a lexicon-based approach for sentiment analysis on Twitter. Different from typical lexicon-based approaches, which offer fixed and static prior sentiment polarities of words regardless of their context, SentiCircles takes into account the co-occurrence patterns of words in different contexts in tweets to capture their semantics and update their pre-assigned strength and polarity in sentiment lexicons accordingly. Our approach allows for the detection of sentiment at both entity-level and tweet-level. We evaluate our proposed approach on three Twitter datasets using three different sentiment lexicons to derive word prior sentiments. Results show that our approach significantly outperforms the baselines in accuracy and F-measure for entity-level subjectivity (neutral vs. polar) and polarity (positive vs. negative) detections. For tweet-level sentiment detection, our approach performs better than the state-of-the-art SentiStrength by 4-5% in accuracy in two datasets, but falls marginally behind by 1% in F-measure in the third dataset.
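The contextual update described above can be caricatured as shifting a term's lexicon prior toward the priors of the terms it co-occurs with. The sketch below does exactly that with a toy lexicon, toy tweets and a simple count-weighted blend; it is not the actual SentiCircle geometry, and all values and the update rule are assumptions.

```python
# Toy context-aware polarity update (not the actual SentiCircle representation).
from collections import Counter
from itertools import combinations

prior = {"great": 0.8, "delay": -0.6, "flight": 0.0, "service": 0.1, "love": 0.9}

tweets = [
    ["love", "service", "flight"],
    ["flight", "delay", "delay"],
    ["great", "flight", "service"],
]

cooc = Counter()
for toks in tweets:
    for a, b in combinations(set(toks), 2):
        cooc[(a, b)] += 1
        cooc[(b, a)] += 1

def contextual_polarity(term, mix=0.5):
    pairs = [(w, c) for (t, w), c in cooc.items() if t == term]
    if not pairs:
        return prior[term]
    ctx = sum(prior[w] * c for w, c in pairs) / sum(c for _, c in pairs)
    return (1 - mix) * prior[term] + mix * ctx     # blend prior with context average

for term in ["flight", "delay", "great"]:
    print(term, round(contextual_polarity(term), 3))
```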