837 results for Automated cataloguing


Relevance:

20.00%

Publisher:

Abstract:

INTAMAP is a Web Processing Service for the automatic spatial interpolation of measured point data. The requirements were (i) using open standards for spatial data such as those developed in the context of the Open Geospatial Consortium (OGC), (ii) using a suitable environment for statistical modelling and computation, and (iii) producing an integrated, open-source solution. The system couples an open-source Web Processing Service (developed by 52°North), which accepts data in the form of standardised XML documents (conforming to the OGC Observations and Measurements standard), with a computing back-end realised in the R statistical environment. The probability distribution of interpolation errors is encoded with UncertML, a markup language designed to encode uncertain data. Automatic interpolation needs to be useful for a wide range of applications, and the algorithms have been designed to cope with anisotropy, extreme values, and data with known error distributions. Besides a fully automatic mode, the system can be used with different levels of user control over the interpolation process.
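
The abstract contains no code; the Python sketch below only illustrates the kind of automatic interpolation with quantified uncertainty that INTAMAP exposes as a service (the actual back-end is implemented in R). The library choice, kernel, and all data here are assumptions for illustration, not the project's implementation.

```python
# Illustrative sketch only: a Gaussian process stands in for the automatic
# interpolation step, returning both predictions and an error distribution
# for each location (the role UncertML plays in INTAMAP's output).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern, WhiteKernel

# Hypothetical measured point data: (x, y) coordinates and observed values.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(50, 2))
values = np.sin(coords[:, 0] / 20.0) + 0.1 * rng.standard_normal(50)

# Kernel hyperparameters are fitted automatically, mimicking the
# "no user intervention" mode described in the abstract.
kernel = Matern(length_scale=10.0, nu=1.5) + WhiteKernel(noise_level=0.01)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(coords, values)

# Predict on a grid; the standard deviation plays the role of the
# interpolation-error distribution that the service would report.
grid = np.array([[x, y] for x in range(0, 101, 10) for y in range(0, 101, 10)])
mean, std = gp.predict(grid, return_std=True)
print(mean[:5], std[:5])
```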

Relevance:

20.00%

Publisher:

Abstract:

Epitopes mediated by T cells lie at the heart of the adaptive immune response and form the essential nucleus of anti-tumour peptide or epitope-based vaccines. Antigenic T cell epitopes are mediated by major histocompatibility complex (MHC) molecules, which present them to T cell receptors. Calculating the affinity between a given MHC molecule and an antigenic peptide using experimental approaches is both difficult and time-consuming; thus, various computational methods have been developed for this purpose. A server has been developed to allow a structural approach to the problem by generating specific MHC:peptide complex structures and providing configuration files to run molecular modelling simulations on them. The system allows the automated construction of MHC:peptide structure files and the corresponding configuration files required to execute a molecular dynamics simulation using NAMD, and has been made available through a web-based front end and stand-alone scripts. Previous attempts at structural prediction of MHC:peptide affinity have been limited by the paucity of structures and the computational expense of running large-scale molecular dynamics simulations. The MHCsim server (http://igrid-ext.cryst.bbk.ac.uk/MHCsim) allows the user to rapidly generate any desired MHC:peptide complex and will facilitate molecular modelling simulation of MHC complexes on an unprecedented scale.
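
As an illustration of the configuration-file side of this automation, the hedged Python sketch below writes a minimal NAMD configuration for an already-prepared MHC:peptide structure/topology pair. The file names, parameter file, and simulation settings are assumptions, not MHCsim's actual defaults, and a real simulation would need further options (PME, constraints, a thermostat, boundary conditions, and so on).

```python
# Illustrative sketch of generating a bare-bones NAMD configuration file.
from pathlib import Path

def write_namd_config(psf: str, pdb: str, out_prefix: str,
                      steps: int = 10000, temperature: float = 310.0) -> Path:
    """Write a minimal NAMD configuration file for a short MD run."""
    config = f"""\
structure          {psf}
coordinates        {pdb}
paraTypeCharmm     on
# parameter file name is an assumption (CHARMM protein parameters)
parameters         par_all36_prot.prm

temperature        {temperature}
timestep           2.0
cutoff             12.0
switching          on
switchdist         10.0
pairlistdist       14.0

outputName         {out_prefix}
outputEnergies     100
dcdfreq            500

run                {steps}
"""
    path = Path(f"{out_prefix}.conf")
    path.write_text(config)
    return path

# Hypothetical usage with files produced by the structure-building step:
# write_namd_config("mhc_peptide.psf", "mhc_peptide.pdb", "hla_a0201_run")
```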

Relevance:

20.00%

Publisher:

Abstract:

DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY AND INFORMATION SERVICES WITH PRIOR ARRANGEMENT

Relevance:

20.00%

Publisher:

Abstract:

Acute life-threatening events are mostly predictable in adults and children. Despite real-time monitoring, these events still occur at a rate of 4%. This paper describes an automated prediction system based on feature-space embedding and time-series forecasting of the SpO2 signal, a pulsatile signal synchronised with the heart beat. We develop an age-independent index of abnormality that distinguishes patient-specific transitions from normal to abnormal physiology. Two different methods were used to distinguish between normal and abnormal physiological trends based on SpO2 behaviour. The abnormality index derived by each method is compared against the current gold standard of clinical prediction of critical deterioration. Copyright © 2013 Inderscience Enterprises Ltd.
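
The paper itself provides no code; the sketch below only illustrates the general idea of a feature-space (time-delay) embedding of an SpO2 trace and a crude patient-specific abnormality index. The embedding dimension, delay, simulated signal, and index definition are assumptions, not the paper's parameters or method.

```python
# Minimal sketch of a time-delay embedding applied to an SpO2 series.
import numpy as np

def delay_embed(signal: np.ndarray, dim: int = 3, delay: int = 5) -> np.ndarray:
    """Return a matrix whose rows are delay vectors [x(t), x(t+d), x(t+2d), ...]."""
    n = len(signal) - (dim - 1) * delay
    return np.column_stack([signal[i * delay : i * delay + n] for i in range(dim)])

# Simulated SpO2 trace (percent saturation) with a late desaturation episode.
rng = np.random.default_rng(1)
spo2 = 97 + rng.normal(0, 0.5, 600)
spo2[450:] -= np.linspace(0, 8, 150)          # gradual abnormal drift

embedded = delay_embed(spo2)
baseline = embedded[:300].mean(axis=0)

# A crude patient-specific abnormality index: distance of each delay vector
# from the patient's own baseline region of the embedding space.
index = np.linalg.norm(embedded - baseline, axis=1)
print("max abnormality index:", index.max())
```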

Relevance:

20.00%

Publisher:

Abstract:

The G-protein coupled receptor (GPCR) superfamily fulfils various metabolic functions and interacts with a diverse range of ligands. There is a lack of sequence similarity between the six classes that comprise the GPCR superfamily. Moreover, most newly discovered GPCRs have low sequence similarity to other family members, which makes it difficult to infer properties from related receptors. Many different approaches have been taken towards developing efficient and accurate methods for GPCR classification, ranging from motif-based systems to machine learning, as well as a variety of alignment-free techniques based on the physicochemical properties of their amino acid sequences. This review describes the inherent difficulties in developing a GPCR classification algorithm and the techniques previously employed in this area.
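
To make one of the alignment-free approaches mentioned above concrete, here is a hedged Python sketch that represents each sequence by its amino acid composition and feeds the resulting vectors to a standard classifier. The toy sequences and class labels are placeholders, not real GPCR data, and the review does not prescribe this particular pipeline.

```python
# Sketch of an alignment-free, composition-based sequence classifier.
from collections import Counter
import numpy as np
from sklearn.svm import SVC

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq: str) -> np.ndarray:
    """Fraction of each of the 20 standard amino acids in the sequence."""
    counts = Counter(seq)
    total = max(len(seq), 1)
    return np.array([counts.get(aa, 0) / total for aa in AMINO_ACIDS])

# Hypothetical training data: sequences with their GPCR class labels.
train_seqs = ["MNGTEGPNFYVPFSNKTGVV", "MDVLSPGQGNNTTSPPAPFE", "MKTIIALSYIFCLVFADYKD"]
train_labels = ["classA", "classA", "classB"]

X = np.vstack([composition(s) for s in train_seqs])
clf = SVC(kernel="rbf").fit(X, train_labels)

query = "MASNGTASIAQARKLVEQLK"
print(clf.predict([composition(query)]))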

Relevance:

20.00%

Publisher:

Abstract:

The 5-HT7 receptor is linked with various CNS disorders. Using automated solution-phase synthesis, a combinatorial library of 384 N-substituted N-[1-methyl-3-(4-methylpiperidin-1-yl)propyl]-arylsulfonamides was prepared from 24 chemically diverse amines 1-24 and 16 sulfonyl chlorides A-P. The chemical library of alkylated sulfonamides was evaluated in a receptor binding assay with [3H]-5-CT as ligand. The key synthetic step was the alkylation of a sulfonamide with iodide E, which was prepared from butanediol in four synthetic steps. The target compounds 1A, 1B, ..., 24A, ..., 24P were purified by solvent extraction on a Tecan liquid handling system. Sulfonamides J20, B23, D23, G23, J23, I24 and O24 displayed binding affinities (IC50) between 100 nM and 10 nM. The crystalline J20 (IC50 = 39 nM) and O24 (IC50 = 83 nM) were evaluated further in the despair swimming test and the tail suspension assay. Significant antidepressant activity was found in mice, of a greater magnitude than imipramine and fluoxetine at low doses. © 2006 Bentham Science Publishers Ltd.
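
For illustration only, the short Python sketch below enumerates the 24 x 16 = 384-member virtual library implied by the abstract and filters the hits; only the two IC50 values quoted above (J20 and O24) are used, and the ID scheme is a bookkeeping assumption, not the authors' workflow.

```python
# Enumerate a 24 x 16 combinatorial library and filter hypothetical hits.
from itertools import product
import string

amines = range(1, 25)                              # amines 1-24
sulfonyl_chlorides = string.ascii_uppercase[:16]   # sulfonyl chlorides A-P

# IDs use a number-letter scheme (1A ... 24P); the abstract also writes hits
# letter-first, so "20J" here corresponds to J20 above.
library = [f"{a}{c}" for a, c in product(amines, sulfonyl_chlorides)]
assert len(library) == 384

# Only the IC50 values stated in the abstract (nM); the rest are unknown.
ic50_nM = {"20J": 39, "24O": 83}
hits = sorted(cid for cid, value in ic50_nM.items() if value <= 100)
print(f"{len(library)} compounds enumerated; sub-100 nM hits recorded: {hits}")
```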

Relevance:

20.00%

Publisher:

Abstract:

SMS (Short Message Service) is now a hugely popular and very powerful business communication technology for mobile phones. In order to respond correctly to a free-form factual question given a large collection of texts, one needs to understand the question at a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer and may even suggest using different strategies when looking for and verifying a candidate answer. In this paper we focus on various attempts to overcome the major contradiction between the technical limitations of the SMS standard and the huge amount of information found for a possible answer.
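
As a small illustration of the contradiction the paper addresses, the sketch below fits an arbitrarily long answer into SMS-sized messages. The 160- and 153-character limits are the standard GSM-7 single and concatenated segment sizes; the truncation strategy itself is an assumption, not the paper's method.

```python
# Fit a QA answer into at most a fixed number of SMS segments.
import textwrap

SINGLE_SMS = 160          # characters in one GSM-7 SMS
CONCAT_SMS = 153          # usable characters per segment when concatenating

def to_sms_segments(answer: str, max_segments: int = 2) -> list[str]:
    """Split an answer into SMS messages, truncating if it will not fit."""
    if len(answer) <= SINGLE_SMS:
        return [answer]
    segments = textwrap.wrap(answer, CONCAT_SMS)[:max_segments]
    if len(segments) == max_segments and sum(map(len, segments)) < len(answer):
        segments[-1] = segments[-1][:CONCAT_SMS - 3] + "..."
    return segments

print(to_sms_segments("The Eiffel Tower is 330 metres tall. " * 10))
```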

Relevance:

20.00%

Publisher:

Abstract:

In the paper, a procedure for the calculation, design and estimation of the ergonomics of the interface of document circulation systems is considered. The original computation procedure and the data obtained during the design of the interface of a documentary system are given.

Relevance:

20.00%

Publisher:

Abstract:

The purpose of the paper is to present an automated system for realization of effective internet marketing campaigns (ASEIMC). The constantly growing number of websites available online makes it ever harder for contemporary enterprises to reach their potential customers, so companies have to discover novel approaches to increase their online sales. The presented ASEIMC system offers such an approach and helps small and medium enterprises compete with big corporations for customers in the Internet space.

Relevance:

20.00%

Publisher:

Abstract:

The author analyzes some peculiarities of information perception and the problems of test evaluation. A fuzzy model of test evaluation is suggested as a means of increasing the effectiveness of knowledge control.
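
The abstract does not specify the model, so the following sketch is only a generic illustration of fuzzy test evaluation: a numeric score receives graded membership in linguistic categories rather than a single crisp mark. The category boundaries are invented for the example.

```python
# Generic fuzzy grading with triangular membership functions.
def triangular(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b on the interval [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

GRADES = {                       # illustrative category boundaries (0-100 scale)
    "poor":         (0, 0, 50),
    "satisfactory": (30, 55, 75),
    "good":         (60, 78, 92),
    "excellent":    (80, 100, 101),
}

score = 72.0
memberships = {g: round(triangular(score, *p), 2) for g, p in GRADES.items()}
print(memberships)   # {'poor': 0.0, 'satisfactory': 0.15, 'good': 0.67, 'excellent': 0.0}
```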

Relevance:

20.00%

Publisher:

Abstract:

The Resource Space Model is a data model that can effectively and flexibly manage the digital resources in a cyber-physical system from multidimensional and hierarchical perspectives. This paper focuses on constructing a resource space automatically. We propose a framework that organizes a set of digital resources along different semantic dimensions by combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical topic model (Latent Dirichlet Allocation) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct the corresponding semantic graphs. Semantic communities are then identified by the Girvan-Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are ranked and selected using three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively. © 2013 Published by Elsevier Ltd. All rights reserved.
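
A condensed, hedged Python sketch of the first three steps of this pipeline (candidate keyword extraction with LDA, semantic graph construction, community detection with Girvan-Newman) is given below. For self-containment, plain document co-occurrence stands in for the WordNet/Wikipedia relatedness used in the paper, and the documents are toy examples.

```python
# Steps 1-3 in miniature: LDA keywords -> graph -> Girvan-Newman communities.
from itertools import combinations
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
import networkx as nx
from networkx.algorithms.community import girvan_newman

docs = [
    "neural network training with gradient descent and backpropagation",
    "convolutional neural network for image classification tasks",
    "relational database index query optimization and transactions",
    "sql query planner and database storage engine design",
]

# Step 1: candidate keywords taken from the top terms of each LDA topic.
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
keywords = {terms[i] for topic in lda.components_ for i in topic.argsort()[-5:]}

# Step 2: semantic graph (co-occurrence here; WordNet relatedness in the paper).
G = nx.Graph()
for doc in docs:
    present = [t for t in keywords if t in doc.split()]
    G.add_edges_from(combinations(present, 2))

# Step 3: semantic communities via the Girvan-Newman algorithm.
communities = next(girvan_newman(G)) if G.number_of_edges() else [set(G.nodes)]
print([sorted(c) for c in communities])
```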

Relevance:

20.00%

Publisher:

Abstract:

Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the software development process, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology, with tool support, to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic querying of software models and other artefacts. © 2014.
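
To make the metadata-derivation idea more tangible, the sketch below describes a formal Z model and a related artefact as RDF triples with rdflib. The namespace, class and property names, and the sample schema text, are invented for illustration and are not the ontology proposed in the paper.

```python
# Describe a formal model and a related artefact as shareable RDF metadata.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, DCTERMS

FM = Namespace("http://example.org/formal-models#")   # assumed namespace
g = Graph()
g.bind("fm", FM)

model = FM["BankAccountSpec"]                          # hypothetical Z model
g.add((model, RDF.type, FM.ZModel))
g.add((model, RDFS.label, Literal("Bank account Z specification")))
g.add((model, DCTERMS.creator, Literal("Specification team")))
g.add((model, FM.hasSchema, Literal("Account == [balance : N | balance >= 0]")))
g.add((model, FM.relatedArtefact, FM["BankAccountDesignDoc"]))

# Serialise the metadata so other tools can share and query it (e.g. via SPARQL).
print(g.serialize(format="turtle"))
```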