44 results for Telephone, Automatic
in CentAUR: Central Archive University of Reading - UK
Abstract:
Accurately measured peptide masses can be used for large-scale protein identification from bacterial whole-cell digests as an alternative to tandem mass spectrometry (MS/MS), provided mass measurement errors of a few parts-per-million (ppm) are obtained. Fourier transform ion cyclotron resonance (FTICR) mass spectrometry (MS) routinely achieves such mass accuracy, either with internal calibration or by regulating the charge in the analyzer cell. We have developed a novel and automated method for internal calibration of liquid chromatography (LC)/FTICR data from whole-cell digests, using peptides in the sample identified by concurrent MS/MS together with ambient polydimethylcyclosiloxanes as internal calibrants in the mass spectra. The method reduced mass measurement error from 4.3 +/- 3.7 ppm to 0.3 +/- 2.3 ppm in an E. coli LC/FTICR dataset of 1000 MS and MS/MS spectra and is applicable to all analyses of complex protein digests by FTICR MS. Copyright (c) 2006 John Wiley & Sons, Ltd.
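The core idea of such an internal calibration step can be illustrated with a short sketch: fit a correction from the measured to the theoretical masses of the identified calibrant peaks, then apply it to every peak in the spectrum. This is a minimal sketch assuming a simple linear correction; the function name and all m/z values are hypothetical, and real FTICR calibration laws are more involved than a straight line.

```python
import numpy as np

def recalibrate(observed_mz, calibrant_observed, calibrant_theoretical):
    """Least-squares linear recalibration: fit m_theo ~ a*m_obs + b from
    internal calibrant peaks, then correct all observed m/z values.
    Illustrative only; actual FTICR calibration functions are more complex."""
    a, b = np.polyfit(calibrant_observed, calibrant_theoretical, 1)
    return a * np.asarray(observed_mz) + b

# Hypothetical example: three peptides identified by MS/MS serve as calibrants.
obs_cal = np.array([500.2531, 1024.5618, 1502.7789])    # measured m/z
theo_cal = np.array([500.2510, 1024.5575, 1502.7725])   # theoretical m/z
ppm_before = (obs_cal - theo_cal) / theo_cal * 1e6      # error before correction
corrected = recalibrate([750.3901, 1250.6412], obs_cal, theo_cal)
print(ppm_before, corrected)
```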
Abstract:
The identification of criminal networks is not a routine exploratory process within the current practice of the law enforcement authorities; rather, it is triggered by specific evidence of criminal activity being investigated. A network is identified when a criminal comes to notice, and any associates who could also be potentially implicated need to be identified, if only to be eliminated from the enquiries as suspects or witnesses, as well as to prevent and/or detect crime. However, an identified network may not be the one causing most harm in a given area. This paper presents a methodology for identifying all of the criminal networks present within a law enforcement area and prioritising those causing most harm to the community. Each crime is allocated a score based on its crime type and how recently it was committed; the network score, which can be used as decision support to help prioritise the network for law enforcement purposes, is the sum of its individual crime scores.
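The scoring scheme lends itself to a direct implementation. The sketch below is illustrative only: the crime-type weights and the exponential recency decay are hypothetical placeholders, since the abstract does not specify the paper's actual weighting or decay form.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical crime-type weights; the paper's actual values are not given here.
TYPE_WEIGHTS = {"burglary": 5.0, "assault": 8.0, "fraud": 3.0}

@dataclass
class Crime:
    crime_type: str
    committed: date

def crime_score(crime: Crime, today: date, half_life_days: float = 180.0) -> float:
    """Score a crime by its type weight, decayed exponentially with age."""
    age = (today - crime.committed).days
    return TYPE_WEIGHTS[crime.crime_type] * 0.5 ** (age / half_life_days)

def network_score(crimes: list[Crime], today: date) -> float:
    """Network harm score: the sum of its member crimes' scores."""
    return sum(crime_score(c, today) for c in crimes)

crimes = [Crime("burglary", date(2024, 1, 10)), Crime("assault", date(2024, 5, 2))]
print(network_score(crimes, date(2024, 6, 1)))
```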
Abstract:
There are still major challenges in the area of automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually provide no knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
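As a rough sketch of the kind of structure involved, a minimal Topic Map-like data model could be populated from labels emitted by an automatic labelling engine, as below. All class names, labels, and clip identifiers here are hypothetical and do not reflect DREAM's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Topic:
    name: str
    occurrences: list[str] = field(default_factory=list)  # e.g. media clip IDs

@dataclass
class TopicMap:
    topics: dict[str, Topic] = field(default_factory=dict)
    associations: list[tuple[str, str, str]] = field(default_factory=list)

    def add_occurrence(self, name: str, clip_id: str) -> None:
        """Link a clip to a topic, creating the topic on first use."""
        self.topics.setdefault(name, Topic(name)).occurrences.append(clip_id)

    def associate(self, a: str, b: str, kind: str) -> None:
        """Record a typed association between two topics."""
        self.associations.append((a, b, kind))

tm = TopicMap()
for label in ["explosion", "fire"]:        # labels from the labelling engine
    tm.add_occurrence(label, "clip_042")
tm.associate("explosion", "fire", "co-occurs-with")
```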
Abstract:
The 3D reconstruction of a Golgi-stained dendritic tree from a serial stack of images captured with a transmitted light bright-field microscope is investigated. Modifications to the bootstrap filter are discussed such that the tree structure may be estimated recursively as a series of connected segments. The tracking performance of the bootstrap particle filter is compared against Differential Evolution, an evolutionary global optimisation method, both in terms of robustness and accuracy. It is found that the particle filtering approach is significantly more robust and accurate for the data considered.
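For readers unfamiliar with the bootstrap filter, the following generic sketch shows its predict-weight-resample loop on a scalar random-walk state. It is illustrative only: the actual dendrite-tracing application uses a richer, segment-based state and measurement model, and all parameter values here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(observations, n_particles=500, proc_std=1.0, obs_std=2.0):
    """Generic bootstrap particle filter for a scalar random-walk state
    with Gaussian observation noise."""
    particles = rng.normal(0.0, 5.0, n_particles)   # sample the initial prior
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, proc_std, n_particles)        # predict
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)  # weight
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)      # resample
        particles = particles[idx]
        estimates.append(particles.mean())
    return np.array(estimates)

true_path = np.cumsum(rng.normal(0, 1, 50))
print(bootstrap_filter(true_path + rng.normal(0, 2, 50))[:5])
```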
Abstract:
The externally recorded electroencephalogram (EEG) is contaminated with signals that do not originate from the brain, collectively known as artefacts. Thus, EEG signals must be cleaned prior to any further analysis. In particular, if the EEG is to be used in online applications such as Brain-Computer Interfaces (BCIs), the removal of artefacts must be performed automatically. This paper investigates the robustness of Mutual Information-based features to inter-subject variability for use in an automatic artefact removal system. The system separates EEG recordings into independent components using a temporal ICA method, RADICAL, and uses a Support Vector Machine to classify the components into EEG and artefact signals. High accuracy and robustness to inter-subject variability are achieved.
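A minimal sketch of such a pipeline follows, with scikit-learn's FastICA standing in for RADICAL (which is not available in scikit-learn) and toy per-component statistics standing in for the paper's mutual-information-based features; the data and training labels are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 1000))             # placeholder (channels, samples)

ica = FastICA(n_components=8, random_state=0)    # stand-in for RADICAL
components = ica.fit_transform(eeg.T).T          # (components, samples)

def features(comp):
    """Toy per-component features; the paper uses mutual-information-based ones."""
    return [comp.std(), np.abs(comp).max(), ((comp[:-1] * comp[1:]) < 0).mean()]

X = np.array([features(c) for c in components])
y = rng.integers(0, 2, len(X))                   # placeholder artefact labels

clf = SVC(kernel="rbf").fit(X, y)                # classify EEG vs artefact
keep = clf.predict(X) == 0                       # retain brain components
clean = ica.inverse_transform((components * keep[:, None]).T).T
print(clean.shape)                               # (8, 1000)
```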
Abstract:
Automatic indexing and retrieval of digital data pose major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years, research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system, Dynamic REtrieval Analysis and semantic metadata Management (DREAM), designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase, supporting the storage, indexing and retrieval of large data sets of special-effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test-bed partners' creative processes. (C) 2009 Published by Elsevier B.V.
Abstract:
An automatic nonlinear predictive model-construction algorithm is introduced based on forward regression and the predicted-residual-sums-of-squares (PRESS) statistic. The proposed algorithm rests on the fundamental concept of evaluating a model's generalisation capability through cross-validation, achieved by using the PRESS statistic as a cost function to optimise model structure. In particular, the algorithm is developed with the aim of computational efficiency, so that the effort that would usually be extensive in computing the PRESS statistic is reduced or minimised. The computation of PRESS is simplified by avoiding a matrix inversion through the orthogonalisation procedure inherent in forward regression, and is further reduced significantly by the introduction of a forward-recursive formula. Owing to the properties of the PRESS statistic, the proposed algorithm achieves a fully automated procedure without resorting to a separate validation data set for iterative model evaluation. Numerical examples demonstrate the efficacy of the algorithm.
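The computational saving rests on the standard leave-one-out identity e_i^(-i) = e_i / (1 - h_ii), which yields PRESS without n separate model refits. The sketch below applies this identity directly via the hat matrix on synthetic data; the paper goes further, avoiding the matrix inversion altogether through forward-regression orthogonalisation and a forward-recursive formula, which this sketch does not reproduce.

```python
import numpy as np

def press(X, y):
    """PRESS via the leverage identity e_i^(-i) = e_i / (1 - h_ii),
    avoiding n separate leave-one-out refits."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T        # hat matrix
    h = np.diag(H)                               # leverages h_ii
    return np.sum((residuals / (1.0 - h)) ** 2)

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 100)
print(press(X, y))   # lower PRESS -> better expected generalisation
```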
Abstract:
This paper outlines a method for automatic artefact removal from multichannel recordings of event-related potentials (ERPs). The proposed method first separates the ERP recordings into independent components using temporal decorrelation source separation (TDSEP). The novel lagged auto-mutual information clustering (LAMIC) algorithm is then used to cluster the estimated components, together with ocular reference signals, into clusters corresponding to cerebral and non-cerebral activity. Finally, the components in the cluster containing the ocular reference signals are discarded, and the remaining components are recombined to reconstruct the clean ERPs.
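The discard-and-reconstruct step can be sketched as follows, with a simple correlation threshold against the ocular (EOG) reference standing in for LAMIC, which instead groups components by lagged auto-mutual information. The mixing matrix, components, and threshold here are hypothetical placeholders.

```python
import numpy as np

def remove_ocular_cluster(mixing, components, eog, threshold=0.4):
    """Discard components whose absolute correlation with an ocular (EOG)
    reference exceeds a threshold, then recombine the rest into clean ERPs.
    A simple stand-in for LAMIC clustering."""
    corr = np.array([abs(np.corrcoef(c, eog)[0, 1]) for c in components])
    keep = corr < threshold
    return mixing[:, keep] @ components[keep]    # reconstruct from kept sources

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 2000))               # estimated components
A = rng.standard_normal((8, 4))                  # estimated mixing matrix
eog = S[0] + 0.1 * rng.standard_normal(2000)     # EOG reference ~ component 0
clean = remove_ocular_cluster(A, S, eog)
print(clean.shape)                               # (8, 2000)
```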