830 results for AUTOMATED DOCKING
Abstract:
The G-protein coupled receptor (GPCR) superfamily fulfils various metabolic functions and interacts with a diverse range of ligands. There is a lack of sequence similarity between the six classes that comprise the GPCR superfamily. Moreover, most novel GPCRs found have low sequence similarity to other family members, which makes it difficult to infer properties from related receptors. Many different approaches have been taken towards developing efficient and accurate methods for GPCR classification, ranging from motif-based systems to machine learning, as well as a variety of alignment-free techniques based on the physicochemical properties of their amino acid sequences. This review describes the inherent difficulties in developing a GPCR classification algorithm and includes techniques previously employed in this area.
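The alignment-free idea mentioned in this abstract can be sketched in a few lines: represent each sequence by its amino acid composition and classify by the nearest class centroid. This is a toy illustration, not any of the reviewed methods; the sequences and class labels below are made up.

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def composition(seq):
    """Alignment-free feature vector: relative frequency of each amino acid."""
    counts = Counter(seq)
    total = len(seq)
    return [counts.get(aa, 0) / total for aa in AMINO_ACIDS]

def nearest_centroid(query, centroids):
    """Assign the query sequence to the class whose mean composition is closest."""
    def dist(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    q = composition(query)
    return min(centroids, key=lambda label: dist(q, centroids[label]))

# Toy "class centroids" built from invented sequences (illustration only).
centroids = {
    "classA": composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"),
    "classB": composition("GGGGPPPPGGGGPPPPGGGGPPPPGGGG"),
}
print(nearest_centroid("MKTAYIAKQR", centroids))  # classA
```

Because no alignment is computed, such features are cheap to extract even when sequence similarity across classes is low, which is the motivation given in the abstract.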
Abstract:
The 5-HT7 receptor is linked with various CNS disorders. Using automated solution-phase synthesis, a combinatorial library of 384 N-substituted N-[1-methyl-3-(4-methylpiperidin-1-yl)propyl]-arylsulfonamides was prepared from 24 chemically diverse amines 1-24 and 16 sulfonyl chlorides A-P. The chemical library of alkylated sulfonamides was evaluated in a receptor binding assay with [3H]-5-CT as the ligand. The key synthetic step was the alkylation of a sulfonamide with iodide E, which was prepared from butanediol in 4 synthetic steps. The target compounds 1A, 1B, ..., 24P were purified by solvent extraction on a Tecan liquid handling system. Sulfonamides J20, B23, D23, G23, J23, I24 and O24 displayed binding affinities (IC50) between 10 nM and 100 nM. The crystalline J20 (IC50 = 39 nM) and O24 (IC50 = 83 nM) were evaluated further in the behavioural despair swimming test and the tail suspension assay. Significant antidepressant activity was found in mice, of greater magnitude than imipramine and fluoxetine at low doses. © 2006 Bentham Science Publishers Ltd.
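The size of the library follows directly from the combinatorial design: every amine paired with every sulfonyl chloride. A short sketch enumerating the compound codes (using the abstract's "1A ... 24P" naming):

```python
from itertools import product
import string

amines = range(1, 25)                              # amines 1-24
sulfonyl_chlorides = string.ascii_uppercase[:16]   # sulfonyl chlorides A-P

# Each library member is identified by its amine number and sulfonyl
# chloride letter, e.g. "1A" through "24P".
library = [f"{a}{s}" for a, s in product(amines, sulfonyl_chlorides)]
print(len(library))  # 384 compounds, matching the abstract
```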
Abstract:
SMS (Short Message Service) is now a hugely popular and very powerful business communication technology for mobile phones. In order to respond correctly to a free-form factual question given a large collection of texts, one needs to understand the question at a level that allows determining some of the constraints the question imposes on a possible answer. These constraints may include a semantic classification of the sought-after answer and may even suggest using different strategies when looking for and verifying a candidate answer. In this paper we focus on various attempts to overcome a major contradiction: the technical limitations of the SMS standard versus the huge amount of information found for a possible answer.
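The "semantic classification of the sought-after answer" can be illustrated with a deliberately tiny cue-word mapping; real question-answering systems use much richer linguistic analysis, and the cue list below is invented for illustration only.

```python
# Hypothetical mapping from question cues to expected answer types.
ANSWER_TYPES = {
    "who": "PERSON",
    "where": "LOCATION",
    "when": "DATE",
    "how many": "NUMBER",
}

def answer_type(question):
    """Return the semantic type the answer is constrained to, or OTHER."""
    q = question.lower()
    for cue, atype in ANSWER_TYPES.items():
        if q.startswith(cue):
            return atype
    return "OTHER"

print(answer_type("Who wrote Hamlet?"))  # PERSON
```

Knowing the answer type lets the system discard candidate answers of the wrong kind before squeezing the result into an SMS-sized reply.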
Abstract:
In this paper, a procedure for the calculation, design and evaluation of the ergonomics of document circulation system interfaces is considered. The original computation procedure and the data obtained during the design of the interface of a documentary system are given.
Abstract:
The purpose of the paper is to present an automated system for the realization of effective internet marketing campaigns (ASEIMC). The constantly growing number of websites available online makes it increasingly difficult for contemporary enterprises to reach their potential customers. Companies therefore have to discover novel approaches to increase their online sales. The presented ASEIMC system offers such an approach and helps small and medium enterprises compete with big corporations for customers in the Internet space.
Abstract:
The author analyzes some peculiarities of information perception and the problems of test evaluation. A fuzzy model of test evaluation is suggested as a means of increasing the effectiveness of knowledge control.
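A fuzzy model of test evaluation can be sketched with triangular membership functions over a numeric score; the grade sets and boundaries below are invented for illustration and are not taken from the paper.

```python
def triangular(x, a, b, c):
    """Triangular membership function with feet at a and c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets over a 0-100 test score; boundaries are illustrative.
GRADES = {
    "poor":      lambda x: triangular(x, -1, 0, 50),
    "fair":      lambda x: triangular(x, 30, 55, 80),
    "excellent": lambda x: triangular(x, 60, 100, 101),
}

def fuzzy_grade(score):
    """Return each grade's membership degree for the given score."""
    return {g: round(f(score), 3) for g, f in GRADES.items()}

print(fuzzy_grade(70))
```

Unlike a crisp pass/fail threshold, a score of 70 here belongs partly to "fair" and partly to "excellent", which is the kind of graded judgement a fuzzy evaluation model is meant to capture.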
Abstract:
The Resource Space Model is a data model which can effectively and flexibly manage the digital resources in a cyber-physical system from multidimensional and hierarchical perspectives. This paper focuses on constructing resource spaces automatically. We propose a framework that organizes a set of digital resources along different semantic dimensions, combining human background knowledge from WordNet and Wikipedia. The construction process includes four steps: extracting candidate keywords, building semantic graphs, detecting semantic communities and generating the resource space. An unsupervised statistical language topic model (i.e., Latent Dirichlet Allocation) is applied to extract candidate keywords for the facets. To better interpret the meanings of the facets found by LDA, we map the keywords to Wikipedia concepts, calculate word relatedness using WordNet's noun synsets and construct corresponding semantic graphs. Moreover, semantic communities are identified by the Girvan-Newman (GN) algorithm. After extracting candidate axes based on the Wikipedia concept hierarchy, the final axes of the resource space are sorted and picked out through three different ranking strategies. The experimental results demonstrate that the proposed framework can organize resources automatically and effectively. © 2013 Published by Elsevier Ltd. All rights reserved.
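The middle two steps (semantic graphs, then communities) can be sketched on toy data. This simplified stand-in links keywords that co-occur in a resource (the paper uses WordNet-based relatedness) and uses connected components in place of the Girvan-Newman algorithm; the documents are invented.

```python
from itertools import combinations

# Toy keyword sets standing in for candidate keywords extracted per resource.
docs = [
    {"music", "guitar", "concert"},
    {"guitar", "band"},
    {"protein", "enzyme", "binding"},
    {"enzyme", "catalysis"},
]

# Step 2 (simplified): build a semantic graph by linking co-occurring keywords.
adj = {}
for doc in docs:
    for u, v in combinations(sorted(doc), 2):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

# Step 3 (simplified): detect communities; connected components replace GN here.
def communities(adj):
    seen, result = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(adj[n] - comp)
        seen |= comp
        result.append(comp)
    return result

print(communities(adj))
```

The two components recovered here (music-related vs. biochemistry-related terms) play the role of the semantic dimensions from which axes are then ranked and selected.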
Abstract:
Many software engineers have found it difficult to understand, incorporate and use different formal models consistently in the software development process, especially for large and complex software systems. This is mainly due to the complex mathematical nature of formal methods and the lack of tool support. It is highly desirable to have software models and their related software artefacts systematically connected and used collaboratively, rather than in isolation. The success of the Semantic Web, as the next generation of Web technology, can have a profound impact on the environment for formal software development. It allows both software engineers and machines to understand the content of formal models and supports more effective software design in terms of understanding, sharing and reusing in a distributed manner. To realise the full potential of the Semantic Web in formal software development, effectively creating proper semantic metadata for formal software models and their related software artefacts is crucial. This paper proposes a framework that allows users to interconnect knowledge about formal software models and other related documents using semantic technology. We first propose a methodology with tool support to automatically derive ontological metadata from formal software models and semantically describe them. We then develop a Semantic Web environment for representing and sharing formal Z/OZ models. A method with a prototype tool is presented to enhance semantic queries over software models and other artefacts. © 2014.
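Deriving ontological metadata from a formal model can be sketched as emitting subject-predicate-object triples from a schema's structure. The schema representation, vocabulary and URIs below are invented for illustration (the classic BirthdayBook Z example is used as the toy input); the paper's tooling works on real Z/OZ models.

```python
# Hypothetical, simplified dictionary representation of a Z-like schema.
schema = {
    "name": "BirthdayBook",
    "state": ["known", "birthday"],
    "operations": ["AddBirthday", "FindBirthday"],
}

def to_triples(schema, base="http://example.org/zmodel#"):
    """Emit (subject, predicate, object) triples describing the schema."""
    s = base + schema["name"]
    triples = [(s, "rdf:type", base + "Schema")]
    triples += [(s, base + "hasStateVariable", base + v) for v in schema["state"]]
    triples += [(s, base + "hasOperation", base + op) for op in schema["operations"]]
    return triples

for t in to_triples(schema):
    print(t)
```

Once the model's structure is expressed as triples, standard Semantic Web machinery (RDF stores, SPARQL queries) can search and interlink it with other artefacts, which is the point of the proposed framework.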
Abstract:
An automated cognitive approach for the design of Information Systems is presented. It is intended to be used at the very beginning of the design process, spanning the stages of requirements determination and analysis. Within the approach, either UML or ERD notation may be used for model representation. The approach provides the opportunity of using natural language text documents as a source of knowledge for automated problem domain model generation. It also simplifies the modelling process by assisting the human user throughout the whole period of working on the model (using UML or ERD notation).
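Mining a problem domain model from natural language text can be caricatured by a frequency heuristic over candidate nouns; real systems rely on full linguistic analysis, and the regex rule, threshold and sample text below are purely illustrative assumptions.

```python
import re
from collections import Counter

# Toy requirements text; recurring capitalised nouns become candidate entities
# for an ERD. This heuristic is a sketch, not the paper's method.
text = ("A Customer places an Order. Each Order contains one or more Products. "
        "A Customer has a name and an address.")

candidates = Counter(re.findall(r"\b[A-Z][a-z]+\b", text))
entities = [w for w, n in candidates.items() if n > 1]
print(entities)  # candidate ERD entities
```

Here "Customer" and "Order" recur and so become candidate entities, while single mentions are left for the human user to confirm or discard, in line with the assisted-modelling workflow the abstract describes.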
Abstract:
The method (algorithm BIDIMS) of arranging multivariate objects in a bidimensional structure, in which the sum of differences between the properties of objects and those of their nearest neighbors is minimal, is described. Under this ordering, the basic regularities on the set of objects become evident. Besides, such structures (tables) have high inductive capability: many latent properties of objects may be predicted from their coordinates in the table. The capabilities of the method are illustrated by the bidimensional ordering of the chemical elements. The resulting table practically coincides with Mendeleev's periodic table.
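The objective described here (minimize the summed property differences between grid neighbours) can be sketched with a greedy swap heuristic on a small grid of single-property objects. This is an assumption-laden toy, not the published BIDIMS algorithm, which handles multivariate objects and differs in detail.

```python
import random

def neighbor_cost(grid):
    """Sum of property differences between horizontally/vertically adjacent cells."""
    rows, cols = len(grid), len(grid[0])
    cost = 0
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                cost += abs(grid[r][c] - grid[r][c + 1])
            if r + 1 < rows:
                cost += abs(grid[r][c] - grid[r + 1][c])
    return cost

def improve(grid):
    """Greedy pairwise swaps until no swap lowers the neighbour cost."""
    cells = [(r, c) for r in range(len(grid)) for c in range(len(grid[0]))]
    improved = True
    while improved:
        improved = False
        for i, (r1, c1) in enumerate(cells):
            for r2, c2 in cells[i + 1:]:
                before = neighbor_cost(grid)
                grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
                if neighbor_cost(grid) >= before:
                    # Swap did not help: undo it.
                    grid[r1][c1], grid[r2][c2] = grid[r2][c2], grid[r1][c1]
                else:
                    improved = True
    return grid

random.seed(0)
values = list(range(9))
random.shuffle(values)
grid = [values[i:i + 3] for i in range(0, 9, 3)]
initial = neighbor_cost(grid)
grid = improve(grid)
print(grid, initial, "->", neighbor_cost(grid))
```

With atomic properties in place of the integers, the same objective is what drives the elements toward a periodic-table-like layout.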
Abstract:
In this paper RDPPlan, a model for planning with quantitative resources specified as numerical intervals, is presented. Nearly all existing models of planning with resources require specifying exact values for updating the resources modified by action execution. In other words, these models cannot deal with more realistic situations in which the resource quantities are not completely known but are bounded by intervals. The RDPPlan model allows the handling of domains more tailored to the real world, where preconditions and effects over quantitative resources can be specified by intervals of values; in addition, mixed logical/quantitative and purely numerical goals can be posed. RDPPlan is based on non-directional search over a planning graph, like DPPlan, from which it derives; it uses propagation rules which have been appropriately extended to the management of resource intervals. The propagation rules extended with resources must verify invariant properties over the planning graph, which have been proven by the authors and guarantee the correctness of the approach. An implementation of the RDPPlan model is described, with search strategies specifically developed for interval resources.
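The core idea of interval-valued resources can be sketched with basic interval arithmetic. The class and example names below are illustrative assumptions, not RDPPlan's actual representation or propagation rules.

```python
class Interval:
    """A resource quantity known only to lie within [lo, hi]."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Applying an effect bounded by an interval: endpoints add.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def satisfies(self, required):
        # A precondition holds only if the resource surely lies within it.
        return required.lo <= self.lo and self.hi <= required.hi

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

fuel = Interval(40, 60)            # fuel level known only within bounds
consumption = Interval(-20, -10)   # the action burns between 10 and 20 units
after = fuel + consumption
print(after)                       # [20, 50]
print(after.satisfies(Interval(0, 100)))
```

Planning over such intervals lets preconditions be checked without ever knowing exact resource values, which is precisely the situation the abstract says point-valued models cannot handle.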
Abstract:
We have previously described ProxiMAX, a technology that enables the fabrication of precise, combinatorial gene libraries via codon-by-codon saturation mutagenesis. ProxiMAX was originally performed using manual, enzymatic transfer of codons via blunt-end ligation. Here we present Colibra™: an automated, proprietary version of ProxiMAX used specifically for antibody library generation, in which double-codon hexamers are transferred during the saturation cycling process. The reduction in process complexity, the resulting library quality and an unprecedented saturation of up to 24 contiguous codons are described. Utility of the method is demonstrated via fabrication of complementarity-determining regions (CDRs) in antibody fragment libraries and next-generation sequencing (NGS) analysis of their quality and diversity.
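The arithmetic behind double-codon (hexamer) transfer can be sketched quickly: encoding every ordered pair of amino acids needs 400 building blocks, and saturating positions two at a time halves the cycle count. This is an illustrative calculation only, not Colibra's proprietary process.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Double-codon building blocks encode every ordered amino acid pair.
hexamer_blocks = ["".join(p) for p in product(AMINO_ACIDS, repeat=2)]
print(len(hexamer_blocks))  # 400 building blocks

positions = 24                    # contiguous codons saturated (per the abstract)
cycles_single = positions         # codon-by-codon transfer
cycles_double = positions // 2    # double-codon (hexamer) transfer
print(cycles_single, "->", cycles_double)  # 24 -> 12 cycles
```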
Abstract:
Cellular peptide vaccines contain T-cell epitopes. The main prerequisite for a peptide to act as a T-cell epitope is that it binds to a major histocompatibility complex (MHC) protein. Peptide-MHC binder identification is an extremely costly experimental challenge since human MHCs, known as human leukocyte antigens (HLA), are highly polymorphic and polygenic. Here we present EpiDOCK, the first structure-based server for MHC class II binding prediction. EpiDOCK predicts binding to the 23 most frequent human MHC class II proteins. It identifies 90% of true binders and 76% of true non-binders, with an overall accuracy of 83%. EpiDOCK is freely accessible at http://epidock.ddg-pharmfac.net. © The Author 2013. Published by Oxford University Press. All rights reserved.
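The three reported figures are mutually consistent if the evaluation set contains roughly equal numbers of binders and non-binders (a balance assumed here purely for illustration; the actual set sizes are not stated in the abstract):

```python
sensitivity = 0.90   # fraction of true binders identified
specificity = 0.76   # fraction of true non-binders identified

def accuracy(sens, spec, binder_fraction):
    """Overall accuracy as the class-weighted mean of sensitivity and specificity."""
    return sens * binder_fraction + spec * (1 - binder_fraction)

# With a balanced set: 0.5 * 0.90 + 0.5 * 0.76 = 0.83, matching the abstract.
print(round(accuracy(sensitivity, specificity, 0.5), 2))
```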
Abstract:
Hydrogen bonds play important roles in maintaining the structure of proteins and in the formation of most biomolecular protein-ligand complexes. All amino acids can act as hydrogen bond donors and acceptors. Among the amino acids, histidine is unique, as it can exist in neutral or positively charged forms within the physiological pH range of 5.0 to 7.0. Histidine can thus interact with other aromatic residues as well as form hydrogen bonds with polar and charged residues. The ability of His to exchange a proton lies at the heart of many important functional biomolecular interactions, including immunological ones. By using molecular docking and molecular dynamics simulation, we examine the influence of His protonation/deprotonation on peptide binding affinity to MHC class II proteins from locus HLA-DP. Peptide-MHC interaction underlies the adaptive cellular immune response, upon which the next generation of commercially important vaccines will depend. Consistent with experiment, we find that peptides containing protonated His residues bind better to HLA-DP proteins than those with unprotonated His. Enhanced binding at pH 5.0 is due, in part, to additional hydrogen bonds formed between peptide His+ and DP proteins. In acidic endosomes, protein His79β is predominantly protonated. As a result, the peptide binding cleft narrows in the vicinity of His79β, which stabilizes the peptide-HLA-DP protein complex. © 2014 Bentham Science Publishers.
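Why the pH 5.0-7.0 window matters for histidine follows from the Henderson-Hasselbalch equation, assuming a typical side-chain pKa of about 6.0 (the effective pKa of a given residue in a protein environment can differ):

```python
def protonated_fraction(ph, pka=6.0):
    """Fraction of His side chains protonated at the given pH (Henderson-Hasselbalch)."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# In acidic endosomes (pH ~5) His is mostly protonated; at pH ~7 it is mostly neutral.
for ph in (5.0, 7.0):
    print(ph, round(protonated_fraction(ph), 2))  # 5.0 -> 0.91, 7.0 -> 0.09
```

This ten-fold swing in protonation between endosomal and neutral pH is what makes His-containing peptides bind HLA-DP differently in the two environments.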
Abstract:
In this paper we show how event processing over semantically annotated streams of events can be exploited to implement tracing and tracking of products in supply chains through the automated generation of linked pedigrees. In our abstraction, events are encoded as spatially and temporally oriented named graphs, while linked pedigrees, as RDF datasets, are specific compositions of them. We propose an algorithm that operates over streams of RDF-annotated EPCIS events to generate linked pedigrees. We exemplify our approach using the pharmaceuticals supply chain and show how counterfeit detection is an implicit part of our pedigree generation. Our evaluation results show that for fast-moving supply chains, smaller window sizes on event streams provide significantly higher efficiency in the generation of pedigrees as well as enable early counterfeit detection.
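The windowed grouping of a product's events into a pedigree can be sketched on plain dictionaries; the event fields, window logic and data below are illustrative assumptions, whereas the paper operates on RDF-annotated EPCIS event streams.

```python
from collections import defaultdict

# Toy event stream; each event records when and where a product batch was seen.
events = [
    {"time": 1, "product": "batch-1", "site": "factory"},
    {"time": 2, "product": "batch-2", "site": "factory"},
    {"time": 3, "product": "batch-1", "site": "warehouse"},
    {"time": 7, "product": "batch-1", "site": "pharmacy"},
]

def pedigrees(events, window=5):
    """Group each product's sites by fixed time window; one group per pedigree."""
    out = defaultdict(list)
    for e in sorted(events, key=lambda e: e["time"]):
        out[(e["product"], e["time"] // window)].append(e["site"])
    return dict(out)

print(pedigrees(events))
```

A smaller `window` closes each pedigree sooner, which is one intuition for why the evaluation finds that smaller window sizes enable earlier counterfeit detection (a batch appearing at a site absent from its pedigree can be flagged as it arrives).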