28 results for Visualization Of Interval Methods

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

Hierarchical visualization systems are desirable because a single two-dimensional visualization plot may not be sufficient to capture all of the interesting aspects of complex high-dimensional data sets. We extend an existing locally linear hierarchical visualization system, PhiVis [1], in several directions: (1) we allow for non-linear projection manifolds (the basic building block is the Generative Topographic Mapping, GTM), (2) we introduce a general formulation of hierarchical probabilistic models consisting of local probabilistic models organized in a hierarchical tree, and (3) we describe folding patterns of the low-dimensional projection manifold in high-dimensional data space by computing and visualizing the manifold's local directional curvatures. Quantities such as magnification factors [3] and directional curvatures are helpful for understanding the layout of the non-linear projection manifold in the data space and for further refinement of the hierarchical visualization plot. Like PhiVis, our system is statistically principled and is built interactively in a top-down fashion using the EM algorithm. We demonstrate the approach on a complex 12-dimensional data set and mention possible applications in the pharmaceutical industry.
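
As a rough illustration of the basic building block mentioned above, the sketch below implements a single, non-hierarchical GTM with NumPy: a 2-D latent grid, an RBF basis, EM updates of the mapping weights and noise precision, and posterior-mean projection of the data onto the latent space. The synthetic 12-dimensional data, grid sizes and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Minimal, flat GTM sketch (single model, no hierarchy) -- illustrative only.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 12))            # stand-in for a 12-dimensional data set
N, D = X.shape

# Latent grid (K points in 2D) and RBF basis functions (M centres).
k_side, m_side, sigma = 10, 4, 0.3
z = np.stack(np.meshgrid(np.linspace(-1, 1, k_side),
                         np.linspace(-1, 1, k_side)), -1).reshape(-1, 2)   # K x 2
mu = np.stack(np.meshgrid(np.linspace(-1, 1, m_side),
                          np.linspace(-1, 1, m_side)), -1).reshape(-1, 2)  # M x 2
Phi = np.exp(-((z[:, None, :] - mu[None, :, :]) ** 2).sum(-1) / (2 * sigma ** 2))
Phi = np.hstack([Phi, np.ones((Phi.shape[0], 1))])                         # bias term
K, M = Phi.shape

W = rng.normal(scale=0.1, size=(M, D))    # mapping weights
beta = 1.0                                # noise precision
alpha = 1e-3                              # weight regularisation

for _ in range(30):                       # EM iterations
    Y = Phi @ W                           # K x D images of the latent points
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(-1)      # K x N squared distances
    logR = -0.5 * beta * d2
    R = np.exp(logR - logR.max(0))        # E-step: responsibilities
    R /= R.sum(0)
    G = R.sum(1)                          # effective counts per latent point
    # M-step: regularised least squares for W, then update beta.
    A = Phi.T @ (G[:, None] * Phi) + (alpha / beta) * np.eye(M)
    W = np.linalg.solve(A, Phi.T @ (R @ X))
    beta = N * D / (R * ((X[None, :, :] - (Phi @ W)[:, None, :]) ** 2).sum(-1)).sum()

# Visualisation coordinates: posterior-mean projection onto the 2-D latent grid.
proj = R.T @ z                            # N x 2
print(proj[:5])
```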

Relevance:

100.00%

Publisher:

Abstract:

This retrospective study was designed to investigate the factors that influence performance in examinations composed of multiple-choice questions (MCQs), short-answer questions (SAQs), and essay questions in an undergraduate population. Final-year optometry degree examination marks were analyzed for two separate cohorts. Direct comparison found that students performed better in MCQs than in essays. However, forward stepwise regression analysis of module marks against the overall score showed that MCQs were the least influential component, and the essay or SAQ mark was a more reliable predictor of overall grade. This has implications for examination design.
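
A forward stepwise selection of exam components against the overall score can be sketched as follows; the module names (mcq, saq, essay), the synthetic marks and the R²-based selection criterion are illustrative assumptions rather than the study's actual procedure.

```python
# Forward stepwise selection of exam components predicting overall mark -- illustrative sketch.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 120                                        # hypothetical cohort size
marks = {
    "mcq":   rng.normal(65, 8, n),
    "saq":   rng.normal(58, 10, n),
    "essay": rng.normal(55, 12, n),
}
# Hypothetical overall mark: essays/SAQs weighted more heavily than MCQs.
overall = 0.2 * marks["mcq"] + 0.4 * marks["saq"] + 0.4 * marks["essay"] + rng.normal(0, 3, n)

selected, remaining = [], list(marks)
while remaining:
    # At each step, add the component giving the largest improvement in R^2.
    scores = {}
    for cand in remaining:
        Xc = np.column_stack([marks[m] for m in selected + [cand]])
        scores[cand] = LinearRegression().fit(Xc, overall).score(Xc, overall)
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
    print(f"step {len(selected)}: added {best!r}, R^2 = {scores[best]:.3f}")
```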

Relevance:

100.00%

Publisher:

Abstract:

On July 17, 1990, President George Bush issued “Proclamation #6158”, which boldly declared that the following ten years would be called the “Decade of the Brain” (Bush, 1990). Accordingly, the research mandates of US federal biomedical institutions were redirected towards the study of the brain in general and cognitive neuroscience specifically. In 2008, one of the greatest legacies of this “Decade of the Brain” is the impressive array of techniques that can be used to study cortical activity. We now stand at a juncture where cognitive function can be mapped in the time, space and frequency domains, as and when such activity occurs. These advanced techniques have led to discoveries in many fields of research and clinical science, including psychology and psychiatry. Unfortunately, neuroscientific techniques have yet to be enthusiastically adopted by the social sciences. Market researchers, as specialized social scientists, have an unparalleled opportunity to adopt cognitive neuroscientific techniques, significantly redefine the field and possibly even cause substantial dislocations in business models. Following from this is a significant opportunity for more commercially oriented researchers to employ such techniques in their own offerings. This report examines the feasibility of these techniques.

Relevance:

100.00%

Publisher:

Abstract:

A visualization plot of a molecular data set is a useful tool for gaining insight into a set of molecules. In chemoinformatics, most visualization plots are of molecular descriptors, and the statistical model most often used to produce a visualization is principal component analysis (PCA). This paper takes PCA, together with four other statistical models (NeuroScale, GTM, LTM, and LTM-LIN), and evaluates their ability to produce clustering in visualizations not of molecular descriptors but of molecular fingerprints. Two different tasks are addressed: understanding structural information (particularly combinatorial libraries) and relating structure to activity. The quality of the visualizations is compared both subjectively (by visual inspection) and objectively (with global distance comparisons and local k-nearest-neighbor predictors). On the data sets used to evaluate clustering by structure, LTM is found to perform significantly better than the other models. In particular, the clusters in LTM visualization space are consistent with the relationships between the core scaffolds that define the combinatorial sublibraries. On the data sets used to evaluate clustering by activity, LTM again gives the best performance, but by a smaller margin. The results of this paper demonstrate the value of using both a nonlinear projection map and a Bernoulli noise model for modeling binary data.
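
A rough sketch of the objective evaluation described, substituting PCA for the LTM model and random binary vectors for real fingerprints, might look like this: project the binary data to two dimensions and score cluster separation with a local k-nearest-neighbor predictor.

```python
# Project binary "fingerprints" with PCA and score clustering with a k-NN predictor -- sketch only.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_per_class, n_bits, n_classes = 100, 256, 3
# Synthetic stand-in for fingerprints: each "scaffold" class has its own bit-probability profile.
profiles = rng.uniform(0.05, 0.5, size=(n_classes, n_bits))
X = np.vstack([rng.random((n_per_class, n_bits)) < p for p in profiles]).astype(float)
y = np.repeat(np.arange(n_classes), n_per_class)

coords = PCA(n_components=2).fit_transform(X)        # 2-D visualisation coordinates

# Local k-NN predictor in visualisation space: high accuracy implies well-separated clusters.
acc = cross_val_score(KNeighborsClassifier(n_neighbors=5), coords, y, cv=5).mean()
print(f"mean 5-fold k-NN accuracy in PCA space: {acc:.3f}")
```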

Relevance:

100.00%

Publisher:

Abstract:

Two contrasting multivariate statistical methods, viz., principal components analysis (PCA) and cluster analysis, were applied to the study of neuropathological variations between cases of Alzheimer's disease (AD). To compare the two methods, 78 cases of AD were analyzed, each characterised by measurements of 47 neuropathological variables. Both methods of analysis revealed significant variations between AD cases. These variations were related primarily to differences in the distribution and abundance of senile plaques (SP) and neurofibrillary tangles (NFT) in the brain. Cluster analysis classified the majority of AD cases into five groups which could represent subtypes of AD. However, PCA suggested that variation between cases was more continuous, with no distinct subtypes. Hence, PCA may be a more appropriate method than cluster analysis in the study of neuropathological variations between AD cases.
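
A minimal sketch of applying both methods to a cases-by-variables matrix is shown below; the random 78 x 47 data, the Ward linkage and the cut into five clusters are illustrative choices that simply mirror the numbers reported above.

```python
# PCA and hierarchical cluster analysis of a cases-by-variables matrix -- illustrative sketch.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
X = rng.normal(size=(78, 47))                  # stand-in for 47 variables measured on 78 cases
Xs = StandardScaler().fit_transform(X)         # variables on different scales need standardising

# PCA: continuous variation summarised by the leading components.
pca = PCA(n_components=2)
scores = pca.fit_transform(Xs)
print("variance explained:", pca.explained_variance_ratio_)

# Cluster analysis: force the cases into five discrete groups.
groups = fcluster(linkage(Xs, method="ward"), t=5, criterion="maxclust")
print("cases per cluster:", np.bincount(groups)[1:])
```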

Relevance:

100.00%

Publisher:

Abstract:

This thesis is an exploration of the organisation and functioning of the human visual system using the non-invasive functional imaging modality magnetoencephalography (MEG). Chapters one and two provide an introduction to the human visual system and magnetoencephalographic methodologies. These chapters also describe the methods by which MEG can be used to measure neuronal activity from the visual cortex. Chapter three describes the development and implementation of novel analytical tools, including beamforming-based analyses, spectrographic movies and an optimisation of group imaging methods. Chapter four focuses on the use of established and contemporary analytical tools in the investigation of visual function, beginning with an investigation of visually evoked and induced responses, covering visual evoked potentials (VEPs) and event-related synchronisation/desynchronisation (ERS/ERD). Chapter five describes the use of novel methods in the investigation of cortical contrast response and demonstrates distinct contrast response functions in striate and extra-striate regions of visual cortex. Chapter six uses synthetic aperture magnetometry (SAM) to investigate the phenomenon of visual cortical gamma oscillations in response to various visual stimuli, concluding that pattern is central to their generation and that their amplitude increases linearly as a function of stimulus contrast, consistent with results from invasive electrode studies in the macaque monkey. Chapter seven describes the use of driven visual stimuli and tuned SAM methods in a pilot study of retinotopic mapping using MEG, finding that activity in the primary visual cortex can be distinguished in four quadrants and at two eccentricities of the visual field. Chapter eight presents a novel application of the SAM beamforming method in the investigation of a subject with migraine visual aura; the method reveals desynchronisation of the alpha and gamma frequency bands in occipital and temporal regions contralateral to the observed visual abnormalities. The final chapter is a summary of the main conclusions and suggested further work.

Relevance:

100.00%

Publisher:

Abstract:

Objective: Qualitative research is increasingly valued as part of the evidence for policy and practice, but how it should be appraised is contested. Various appraisal methods, including checklists and other structured approaches, have been proposed but rarely evaluated. We aimed to compare three methods for appraising qualitative research papers that were candidates for inclusion in a systematic review of evidence on support for breast-feeding. Method: A sample of 12 research papers on support for breast-feeding was appraised by six qualitative reviewers using three appraisal methods: unprompted judgement based on expert opinion; a UK Cabinet Office quality framework; and CASP, a Critical Appraisal Skills Programme tool. Following appraisal, papers were assigned to one of five categories, which were dichotomized to indicate whether or not papers should be included in a systematic review. Patterns of agreement in the categorization of papers were assessed quantitatively using κ statistics and qualitatively using cross-case analysis. Results: Agreement in categorizing papers across the three methods was slight (κ = 0.13; 95% CI 0.06-0.24). Structured approaches did not appear to yield higher agreement than unprompted judgement. Qualitative analysis revealed reviewers' dilemmas in deciding between the potential impact of findings and the quality of the research execution or reporting practice. Structured instruments appeared to make reviewers more explicit about the reasons for their judgements. Conclusions: Structured approaches may not produce greater consistency of judgements about whether to include qualitative papers in a systematic review. Future research should address how appraisals of qualitative research should be incorporated into systematic reviews.
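
Agreement on the dichotomised include/exclude decisions can be quantified with a kappa statistic. The sketch below computes a pairwise Cohen's kappa between two appraisal methods on made-up decisions; the study's comparison across six reviewers and three methods would require a multi-rater generalisation such as Fleiss' kappa.

```python
# Pairwise agreement between two appraisal methods on include/exclude decisions -- sketch.
from sklearn.metrics import cohen_kappa_score

# Hypothetical dichotomised decisions for 12 papers (1 = include, 0 = exclude).
unprompted = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 0]
casp       = [1, 0, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0]

kappa = cohen_kappa_score(unprompted, casp)
print(f"Cohen's kappa = {kappa:.2f}")   # values near 0 indicate little agreement beyond chance
```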

Relevance:

100.00%

Publisher:

Abstract:

There are several methods of providing series compensation for transmission lines using power electronic switches. Four methods of series compensation are examined in this thesis: the thyristor controlled series capacitor, a voltage sourced inverter series compensator using a capacitor as the series element, a current sourced inverter series compensator, and a voltage sourced inverter using an inductor as the series element. All the compensators examined provide a continuously variable series voltage which is controlled by the switching of the electronic switches. Two of the circuits, the thyristor controlled series capacitor and the current sourced inverter series compensator, offer both capacitive and inductive compensation; the other two produce either capacitive or inductive series compensation only. The thyristor controlled series capacitor offers the widest range of series compensation. However, there is a band of unavailable compensation between 0 and 1 pu capacitive compensation, and compared to the other compensators examined the harmonic content of the compensating voltage is quite high. An algebraic analysis showed that there is more than one state in which the thyristor controlled series capacitor can operate; one of these states has the undesirable effect of introducing large losses. The voltage sourced inverter series compensator using a capacitor as the series element provides only capacitive compensation. It uses two capacitors, which increases its cost significantly above the other three, but it has the advantage of very low harmonic distortion. The current sourced inverter series compensator provides both capacitive and inductive series compensation; the harmonic content of its compensating voltage is second only to that of the voltage sourced inverter series compensator using a capacitor as the series element. The voltage sourced inverter series compensator using an inductor as the series element provides only inductive compensation, and it is the least expensive compensator examined. Unfortunately, the harmonics introduced by this circuit are considerable.
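
The harmonic content discussed above is commonly summarised as total harmonic distortion (THD). The sketch below shows a generic FFT-based THD estimate for a sampled compensating-voltage waveform; the waveform, sample rate and harmonic amplitudes are synthetic and not taken from the thesis.

```python
# Estimate total harmonic distortion of a sampled waveform with the FFT -- generic sketch.
import numpy as np

f0, fs, cycles = 50.0, 10_000.0, 10                # fundamental, sample rate, whole cycles
t = np.arange(int(fs * cycles / f0)) / fs
# Synthetic compensating voltage: fundamental plus 5th and 7th harmonics.
v = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.15 * np.sin(2 * np.pi * 5 * f0 * t) \
    + 0.08 * np.sin(2 * np.pi * 7 * f0 * t)

spectrum = np.abs(np.fft.rfft(v)) / len(v)
k0 = int(round(f0 * len(v) / fs))                  # bin of the fundamental
fundamental = spectrum[k0]
harmonics = spectrum[2 * k0::k0]                   # bins of the 2nd, 3rd, ... harmonics
thd = np.sqrt((harmonics ** 2).sum()) / fundamental
print(f"THD = {100 * thd:.1f} %")
```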

Relevance:

100.00%

Publisher:

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance:

100.00%

Publisher:

Abstract:

Although the importance of dataset fitness-for-use evaluation and intercomparison is widely recognised within the GIS community, no practical tools have yet been developed to support such interrogation. GeoViQua aims to develop a GEO label which will visually summarise and allow interrogation of key informational aspects of geospatial datasets upon which users rely when selecting datasets for use. The proposed GEO label will be integrated in the Global Earth Observation System of Systems (GEOSS) and will be used as a value and trust indicator for datasets accessible through the GEO Portal. As envisioned, the GEO label will act as a decision support mechanism for dataset selection and thereby hopefully improve user recognition of the quality of datasets. To date we have conducted three user studies to (1) identify the informational aspects of geospatial datasets upon which users rely when assessing dataset quality and trustworthiness, (2) elicit initial user views on a GEO label and its potential role, and (3) evaluate prototype label visualisations. Our first study revealed that, when evaluating the quality of data, users consider eight facets: dataset producer information; producer comments on dataset quality; dataset compliance with international standards; community advice; dataset ratings; links to dataset citations; expert value judgements; and quantitative quality information. Our second study confirmed the relevance of these facets in terms of the community-perceived function that a GEO label should fulfil: users and producers of geospatial data supported the concept of a GEO label that provides a drill-down interrogation facility covering all eight informational aspects. Consequently, we developed three prototype label visualisations and evaluated their comparative effectiveness and user preference via a third user study to arrive at a final graphical GEO label representation. When integrated in the GEOSS, an individual GEO label will be provided for each dataset in the GEOSS clearinghouse (or other data portals and clearinghouses) based on its available quality information. Producer and feedback metadata documents are being used to dynamically assess information availability and generate the GEO labels. The producer metadata document can either be a standard ISO-compliant metadata record supplied with the dataset or an extended version of a GeoViQua-derived metadata record, and is used to assess the availability of a producer profile, producer comments, compliance with standards, citations and quantitative quality information. GeoViQua is also currently developing a feedback server to collect and encode (as metadata records) user and producer feedback on datasets; these metadata records will be used to assess the availability of user comments, ratings, expert reviews and user-supplied citations for a dataset. The GEO label will provide drill-down functionality which will allow a user to navigate to a GEO label page offering detailed quality information for its associated dataset. At this stage, we are developing the GEO label service that will be used to provide GEO labels on demand based on supplied metadata records. In this presentation, we will provide a comprehensive overview of the GEO label development process, with specific emphasis on the GEO label implementation and integration into the GEOSS.
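
In very simplified form, the availability assessment can be pictured as checking which of the eight facets a dataset's producer and feedback metadata actually contain. The dictionary-based records and facet names below are hypothetical stand-ins for the ISO and GeoViQua metadata documents, not the actual GEO label service.

```python
# Simplified GEO-label availability check over eight informational facets -- hypothetical sketch.
FACETS = [
    "producer_profile", "producer_comments", "standards_compliance", "community_advice",
    "ratings", "citations", "expert_reviews", "quantitative_quality",
]

def assess_label(producer_metadata: dict, feedback_metadata: dict) -> dict:
    """Return, per facet, whether any information is available for the dataset."""
    merged = {**producer_metadata, **feedback_metadata}
    return {facet: bool(merged.get(facet)) for facet in FACETS}

# Hypothetical metadata records for one dataset.
producer = {"producer_profile": {"name": "Agency X"}, "standards_compliance": ["ISO 19115"],
            "quantitative_quality": {"rmse": 0.4}}
feedback = {"ratings": [4, 5, 3], "citations": ["doi:10.xxxx/example"]}

label = assess_label(producer, feedback)
available = sum(label.values())
print(f"{available}/8 facets available:", [f for f, ok in label.items() if ok])
```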

Relevance:

100.00%

Publisher:

Abstract:

Fluoroscopic images exhibit severe signal-dependent quantum noise, due to the reduced X-ray dose involved in image formation, that is generally modelled as Poisson-distributed. However, image gray-level transformations, commonly applied by fluoroscopic devices to enhance contrast, modify the noise statistics and the relationship between image noise variance and expected pixel intensity. Image denoising is essential to improve the quality of fluoroscopic images and their clinical information content. Simple average filters are commonly employed in real-time processing, but they tend to blur edges and details. An extensive comparison of advanced denoising algorithms specifically designed for both signal-dependent noise (AAS, BM3Dc, HHM, TLS) and independent additive noise (AV, BM3D, K-SVD) is presented. Simulated test images degraded by various levels of Poisson quantum noise and real clinical fluoroscopic images were considered. Typical gray-level transformations (e.g. white compression) were also applied in order to evaluate their effect on the denoising algorithms. Performances of the algorithms were evaluated in terms of peak signal-to-noise ratio (PSNR), signal-to-noise ratio (SNR), mean square error (MSE), structural similarity index (SSIM) and computational time. On average, the filters designed for signal-dependent noise provided better image restorations than those assuming additive white Gaussian noise (AWGN). The collaborative denoising strategy was found to be the most effective in denoising both simulated and real data, also in the presence of image gray-level transformations. White compression, by inherently reducing the greater noise variance of brighter pixels, appeared to help the denoising algorithms perform more effectively.
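
A toy version of such an evaluation is sketched below: simulate Poisson quantum noise, denoise with a plain averaging filter and with an Anscombe variance-stabilising transform followed by Gaussian smoothing, and compare PSNR. The synthetic image, filter sizes and the simple algebraic inverse of the Anscombe transform are illustrative choices, not those of the paper.

```python
# Compare denoising of Poisson-degraded images with and without variance stabilisation -- sketch.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

rng = np.random.default_rng(4)

def psnr(ref, img):
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(ref.max() ** 2 / mse)

# Synthetic low-dose "fluoroscopic" frame: smooth background with a bright disc.
yy, xx = np.mgrid[0:256, 0:256]
clean = 20 + 30 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40 ** 2))
noisy = rng.poisson(clean).astype(float)        # signal-dependent quantum noise

# 1) Plain averaging filter (implicitly assumes signal-independent noise).
avg = uniform_filter(noisy, size=5)

# 2) Anscombe transform -> Gaussian smoothing -> simple algebraic inverse.
stabilised = 2 * np.sqrt(noisy + 3 / 8)
denoised = (gaussian_filter(stabilised, sigma=1.5) / 2) ** 2 - 3 / 8

print(f"PSNR noisy    : {psnr(clean, noisy):.1f} dB")
print(f"PSNR averaging: {psnr(clean, avg):.1f} dB")
print(f"PSNR Anscombe : {psnr(clean, denoised):.1f} dB")
```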

Relevance:

100.00%

Publisher:

Abstract:

Framing plays an important role in public policy. Interest groups strategically highlight some aspects of a policy proposal while downplaying others in order to steer the policy debate in a favorable direction. Despite the importance of framing, we still know relatively little about the framing strategies of interest groups due to methodological difficulties that have prevented scholars from systematically studying interest group framing across a large number of interest groups and multiple policy debates. This article therefore provides an overview of three novel research methods that allow researchers to systematically measure interest group frames. More specifically, this article introduces a word-based quantitative text analysis technique, a manual, computer-assisted content analysis approach and face-to-face interviews designed to systematically identify interest group frames. The results generated by all three techniques are compared on the basis of a case study of interest group framing in an environmental policy debate in the European Union.
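
The word-based quantitative approach can be illustrated with a toy keyword-frequency measure that scores each document by the relative frequency of frame-specific terms; the frames, keyword lists and text snippets below are invented for illustration and are not the article's coding scheme.

```python
# Toy word-frequency measure of interest-group framing -- illustrative sketch.
import re
from collections import Counter

FRAME_KEYWORDS = {
    "economic":      {"cost", "jobs", "competitiveness", "growth"},
    "environmental": {"pollution", "emissions", "habitat", "climate"},
}

documents = {
    "industry_association": "The cost burden threatens jobs and competitiveness across the sector.",
    "green_ngo":            "Emissions and pollution keep damaging habitat and the climate.",
}

for name, text in documents.items():
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    total = len(tokens)
    # Share of each document's words devoted to each frame's keywords.
    shares = {frame: sum(counts[w] for w in words) / total
              for frame, words in FRAME_KEYWORDS.items()}
    print(name, {f: round(s, 2) for f, s in shares.items()})
```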

Relevance:

100.00%

Publisher:

Abstract:

The accuracy of a map is dependent on the reference dataset used in its construction. Classification analyses used in thematic mapping can, for example, be sensitive to a range of sampling and data quality concerns. With particular focus on the latter, the effects of reference data quality on land cover classifications from airborne thematic mapper data are explored. Variations in sampling intensity and effort are highlighted in a dataset that is widely used in mapping and modelling studies; these may need to be accounted for in analyses. The quality of the labelling in the reference dataset was also a key variable influencing mapping accuracy. Accuracy varied with the amount and nature of mislabelled training cases, with the nature of the effects varying between classifiers. The largest impacts on accuracy occurred when mislabelling involved confusion between similar classes. Accuracy was also typically negatively related to the magnitude of mislabelling, and the support vector machine (SVM), which has been claimed to be relatively insensitive to training data error, was the most sensitive of the classifiers investigated, with overall classification accuracy declining by 8% (significant at the 95% level of confidence) with the use of a training set containing 20% mislabelled cases.
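
The sensitivity finding can be reproduced in spirit with a small synthetic experiment: flip a growing fraction of training labels, retrain each classifier and track accuracy on a clean test set. The data generator, the pair of classifiers and the noise levels below are illustrative assumptions, not the study's airborne data.

```python
# Effect of mislabelled training cases on different classifiers -- illustrative experiment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

classifiers = {"SVM": SVC(kernel="rbf"), "decision tree": DecisionTreeClassifier(random_state=0)}

for noise in (0.0, 0.1, 0.2):                       # fraction of mislabelled training cases
    y_noisy = y_tr.copy()
    flip = rng.choice(len(y_tr), size=int(noise * len(y_tr)), replace=False)
    y_noisy[flip] = (y_noisy[flip] + rng.integers(1, 4, size=len(flip))) % 4   # reassign to another class
    for name, clf in classifiers.items():
        acc = accuracy_score(y_te, clf.fit(X_tr, y_noisy).predict(X_te))
        print(f"{name:>13s} @ {int(noise*100):2d}% label noise: accuracy = {acc:.3f}")
```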