13 results for data types and operators

in Aston University Research Archive


Relevance:

100.00%

Publisher:

Abstract:

The data available during the drug discovery process is vast in amount and diverse in nature. To gain useful information from such data, an effective visualisation tool is required. To provide better visualisation facilities to domain experts (screening scientists, biologists, chemists, etc.), we developed a software tool based on recently developed principled visualisation algorithms such as Generative Topographic Mapping (GTM) and Hierarchical Generative Topographic Mapping (HGTM). The software also supports conventional visualisation techniques such as Principal Component Analysis (PCA), NeuroScale, PhiVis, and Locally Linear Embedding (LLE), and provides global and local regression facilities, supporting regression algorithms such as the Multilayer Perceptron (MLP), Radial Basis Function (RBF) networks, Generalised Linear Models (GLM), Mixture of Experts (MoE), and the newly developed Guided Mixture of Experts (GME). This user manual gives an overview of the purpose of the software tool, highlights some of the issues to consider when creating a new model, and explains how to install and use the tool. The manual does not require readers to be familiar with the algorithms the tool implements; basic computing skills are enough to operate the software.
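The tool itself is not reproduced here, but the workflow the manual describes (project high-dimensional screening data into two dimensions for plotting, then fit a regression model over the same data) can be sketched in a few lines. In the illustrative Python sketch below, scikit-learn's PCA and MLPRegressor stand in for the principled algorithms (GTM, GME) the software actually implements, and the screening data is synthetic.

```python
# Minimal sketch of the visualise-then-regress workflow described above.
# PCA and MLPRegressor are stand-ins for GTM/GME; the data is synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                       # 500 compounds, 20 descriptors
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)  # synthetic activity value

# Step 1: project the high-dimensional data to a 2-D latent space for plotting.
latent = PCA(n_components=2).fit_transform(X)

# Step 2: fit a global regression model on the original descriptors.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
print("Training R^2:", model.score(X, y))
```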

Relevance:

100.00%

Publisher:

Abstract:

Today, the data available to tackle many scientific challenges is vast in quantity and diverse in nature. The exploration of heterogeneous information spaces requires suitable mining algorithms as well as effective visual interfaces. miniDVMS v1.8 provides a flexible visual data mining framework which combines advanced projection algorithms developed in the machine learning domain with visual techniques developed in the information visualisation domain. The advantage of this interface is that the user is directly involved in the data mining process. Principled projection methods, such as generative topographic mapping (GTM) and hierarchical GTM (HGTM), are integrated with powerful visual techniques, such as magnification factors, directional curvatures, parallel coordinates, and user interaction facilities, to provide an integrated visual data mining framework. The software also supports conventional visualisation techniques such as principal component analysis (PCA), NeuroScale, and PhiVis. This user manual gives an overview of the purpose of the software tool, highlights some of the issues to consider when creating a new model, and explains how to install and use the tool. The manual does not require readers to be familiar with the algorithms the tool implements; basic computing skills are enough to operate the software.
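As a rough illustration of one visual technique named above, the sketch below draws a parallel-coordinates view using pandas' built-in plotting helper; this is a stand-in for miniDVMS's own interface, and the data and grouping column are synthetic.

```python
# Parallel-coordinates view of synthetic four-variable data, grouped by a
# hypothetical cluster label; each line is one data point across all axes.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.normal(size=(100, 4)), columns=["v1", "v2", "v3", "v4"])
df["cluster"] = np.where(df["v1"] > 0, "A", "B")  # illustrative grouping

parallel_coordinates(df, class_column="cluster", alpha=0.4)
plt.title("Parallel coordinates (synthetic data)")
plt.show()
```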

Relevance:

100.00%

Publisher:

Abstract:

We demonstrate simultaneous demultiplexing, data regeneration, and clock recovery at 10 Gbit/s using a single semiconductor optical amplifier-based nonlinear-optical loop mirror in a phase-locked loop configuration.

Relevance:

100.00%

Publisher:

Abstract:

Visualising data for exploratory analysis is a major challenge in many applications. Visualisation allows scientists to gain insight into the structure and distribution of the data, for example by finding common patterns and relationships between samples as well as variables. Typically, visualisation methods like principal component analysis and multi-dimensional scaling are employed. These methods are favoured because of their simplicity, but they cannot cope with missing data, and it is difficult to incorporate prior knowledge about properties of the variable space into the analysis; this is particularly important for the high-dimensional, sparse datasets typical in geochemistry. In this paper we show how to utilise a block-structured correlation matrix using a modification of a well-known non-linear probabilistic visualisation model, the Generative Topographic Mapping (GTM), which can cope with missing data. The block structure supports direct modelling of strongly correlated variables. We show that, by including prior structural information, it is possible to improve both the data visualisation and the model fit. These benefits are demonstrated on artificial data as well as a real geochemical dataset used for oil exploration, where the proposed modifications improved the missing-data imputation results by 3 to 13%.
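As an illustration of the kind of prior structure the paper exploits, the sketch below assembles a block-diagonal correlation matrix that ties together groups of strongly correlated variables; the block sizes and within-block correlations are assumptions chosen for illustration, not values from the paper.

```python
# Build a block-structured correlation matrix: variables within a block share a
# common correlation rho; variables in different blocks are uncorrelated.
import numpy as np
from scipy.linalg import block_diag

def correlated_block(size, rho):
    """Correlation block: 1 on the diagonal, rho everywhere else."""
    return np.full((size, size), rho) + (1.0 - rho) * np.eye(size)

# Three illustrative variable groups with strong within-block correlation.
C = block_diag(correlated_block(4, 0.8),
               correlated_block(3, 0.9),
               correlated_block(5, 0.7))

print(C.shape)                                  # (12, 12)
print(bool(np.all(np.linalg.eigvalsh(C) > 0)))  # True: positive definite prior
```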

Relevance:

100.00%

Publisher:

Abstract:

This research explores the role of internal customers in the delivery of external service quality. It considers the different internal customer types that may exist within the organisation and explores potential differences in the dimensions used to measure service quality internally and externally; where different internal customer types exist, it also considers whether different dimensions are used to measure service quality between these types. Given the depth and breadth of understanding required, an action research, case-based approach was adopted.

The research objectives were: (i) to determine the dimensions of internal service quality between internal customer-supplier cells; (ii) to determine what variation, if any, there is in the dimension sets between internal customer-supplier cells; (iii) to determine any ranking in the dimensions by internal customer-supplier cell type; (iv) to investigate the impact of internal service quality on external service quality over time.

The research findings were: (i) the majority of the dimensions used in measuring external service quality were also used internally, although some new dimensions were added and some externally used dimensions had to be redefined for internal use; (ii) variation in dimension sets was revealed during the research: four different dimension sets were identified and matched with four different types of internal service interaction; (iii) differences in the ranking of dimensions within each dimension set for each internal customer-supplier cell type were confirmed; (iv) internal service quality was seen to influence external service quality, but at a cellular level rather than at company level. At the company level, average internal service quality showed no improvement between the start and finish of the research, yet external service quality had improved; further investigation at the cellular level showed that improvements in internal service quality had occurred, and that those improvements were in the cells closest to the customer.

The research implications were: (i) some cells may not be necessary to the delivery of external service quality; (ii) the immediacy of the cell to the external customer, and the number of interactions into and out of that cell, have the greatest effect on external customer satisfaction; (iii) internal service quality may be driven by the customer, affecting the cells at the front end of the business first; this then cascades back to the less immediate cells until, ultimately, the whole organisation shows improvements in internal service quality.

Relevance:

100.00%

Publisher:

Abstract:

The use of quantitative methods has become increasingly important in the study of neuropathology, especially in neurodegenerative disease. Disorders such as Alzheimer's disease (AD) and the frontotemporal dementias (FTD) are characterized by the formation of discrete, microscopic, pathological lesions which play an important role in pathological diagnosis. This chapter reviews the advantages and limitations of the different methods of quantifying pathological lesions in histological sections, including estimates of density, frequency, coverage, and the use of semi-quantitative scores. The sampling strategies by which these quantitative measures can be obtained from histological sections, including plot or quadrat sampling, transect sampling, and point-quarter sampling, are described. In addition, data analysis methods commonly used to analyse quantitative data in neuropathology, including analysis of variance (ANOVA), polynomial curve fitting, multiple regression, classification trees, and principal components analysis (PCA), are discussed. These methods are illustrated with reference to quantitative studies of a variety of neurodegenerative disorders.
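As a minimal illustration of one analysis method listed above, the sketch below runs a one-way ANOVA comparing lesion densities across three hypothetical brain regions; the counts are synthetic and the region names are placeholders.

```python
# One-way ANOVA on synthetic lesion densities (counts per microscope field)
# sampled for three hypothetical brain regions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
frontal  = rng.poisson(lam=12, size=30)
temporal = rng.poisson(lam=15, size=30)
parietal = rng.poisson(lam=9,  size=30)

f_stat, p_value = stats.f_oneway(frontal, temporal, parietal)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```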

Relevance:

100.00%

Publisher:

Abstract:

There is a proliferation of categorization schemes in the scientific literature that have mostly been developed from psychologists' understanding of the nature of linguistic interactions. This has led to problems in defining the question types used by interviewers. Based on the principle that the overarching purpose of an interview is to elicit information, and that questions can function both as actions in their own right and as vehicles for other actions, a Conversational Analysis approach was used to analyse a small number of police interviews. The analysis produced a different categorization of question types; in particular, the conversational turns fell into two functional types: (i) Topic Initiation Questions and (ii) Topic Facilitation Questions. We argue that forensic interviewing requires a switch of focus from the 'words' used by interviewers in question types to the 'function' of conversational turns within interviews.

Relevance:

100.00%

Publisher:

Abstract:

In this work, we present an adaptive unequal loss protection (ULP) scheme for H.264/AVC video transmission over lossy networks. The scheme combines erasure coding, H.264/AVC error resilience techniques, and importance measures in video coding. The unequal importance of the video packets is identified at the group-of-pictures (GOP) and H.264/AVC data-partitioning levels. The presented method can adaptively assign unequal amounts of forward error correction (FEC) parity across the video packets according to network conditions such as the available bandwidth, packet loss rate, and average packet burst-loss length. A near-optimal algorithm is developed for the FEC assignment. The simulation results show that our scheme can effectively utilize network resources such as bandwidth while improving the quality of the video transmission. In addition, the proposed ULP strategy ensures graceful degradation of the received video quality as the packet loss rate increases. © 2010 IEEE.
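The paper's near-optimal FEC assignment is not reproduced here, but the core idea of unequal protection can be sketched simply: spend a fixed parity budget across packet classes in proportion to their importance. The class weights and budget below are illustrative assumptions, not values from the paper.

```python
# Split a fixed FEC parity budget across packet classes in proportion to an
# importance weight; more important classes receive more parity packets.

def assign_fec(importance, parity_budget):
    """Allocate parity_budget packets proportionally to importance weights."""
    total = sum(importance.values())
    alloc = {k: int(parity_budget * w / total) for k, w in importance.items()}
    # Hand any rounding remainder to the most important class.
    top = max(importance, key=importance.get)
    alloc[top] += parity_budget - sum(alloc.values())
    return alloc

# Hypothetical weights for the three H.264/AVC data partitions (A > B > C).
weights = {"partition_A": 5.0, "partition_B": 3.0, "partition_C": 1.0}
print(assign_fec(weights, parity_budget=18))
# {'partition_A': 10, 'partition_B': 6, 'partition_C': 2}
```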