958 results for "Domain structure"
Abstract:
A three-dimensional model of human ABCB1 nucleotide-binding domain (NBD) was developed by homology modelling using the high-resolution human TAP1 transporter structure as template. Interactions between NBD and flavonoids were investigated using in silico docking studies. Ring-A of unmodified flavonoid was located within the NBD P-loop with the 5-hydroxyl group involved in hydrogen bonding with Lys1076. Ring-B was stabilised by hydrophobic stacking interactions with Tyr1044. The 3-hydroxyl group and carbonyl oxygen were extensively involved in hydrogen bonding interactions with amino acids within the NBD. Addition of prenyl, benzyl or geranyl moieties to ring-A (position-6) and hydrocarbon substituents (O-n-butyl to O-n-decyl) to ring-B (position-4) resulted in a size-dependent decrease in predicted docking energy which reflected the increased binding affinities reported in vitro.
Abstract:
Inductive reasoning is fundamental to human cognition, yet it remains unclear how we develop this ability and what might influence our inductive choices. We created novel categories in which crucial factors such as domain and category structure were manipulated orthogonally. We trained 403 4-9-year-old children to categorise well-matched natural kind and artefact stimuli with either featural or relational category structure, followed by induction tasks. This wide age range allowed for the first full exploration of the developmental trajectory of inductive reasoning in both domains. We found a gradual transition from perceptual to categorical induction with age. This pattern was stable across domains, but interestingly, children showed a category bias one year later for relational categories. We hypothesise that the ability to use category information in inductive reasoning develops gradually, but is delayed when children need to process and apply more complex category structures. © 2014 Taylor & Francis.
Abstract:
Optimizing the design, creation, operation and maintenance of expert systems is an important problem in artificial-intelligence theory and decision-making methods. In this paper we offer an approach to solving it that uses a technology based on the methodology of systems analysis, the ontology of the subject domain, and the principles and methods of self-organisation. We expound the aspects of realising this approach, which rests on constructing a correspondence between the hierarchical structure of the ontology and the sequence of questions in automated examination systems.
Abstract:
This paper proposes an ontology-based approach to representing courseware knowledge in different domains. The focus is on a three-level semantic graph, modeling respectively the course as a whole, its structure, and the domain contents itself. The authors plan to use this representation for flexible e-learning and the generation of different study plans for learners.
Abstract:
Partially supported by INTAS grant 97-1644.
Abstract:
In recent years, East-Christian iconographical art works have been digitized, providing a large volume of data. The need for effective classification, indexing and retrieval of iconography repositories motivated the design and development of a systemized ontological structure for describing iconographical art objects. This paper presents the ontology of East-Christian iconographical art, developed to provide content annotation in the Virtual encyclopedia of Bulgarian iconography multimedia digital library. The ontology's main classes, relations, facts and rules are described, along with problems that appeared during its design and development. The paper also presents an application of the ontology for learning analysis in an iconography domain, implemented during the SINUS project "Semantic Technologies for Web Services and Technology Enhanced Learning".
Abstract:
Extracellular signal-regulated kinase 5 (ERK5), also termed big mitogen-activated protein kinase-1 (BMK1), is the most recently identified member of the mitogen-activated protein kinase (MAPK) family and consists of an amino-terminal kinase domain and a relatively large carboxy-terminal region of unique structure and function that makes it distinct from other MAPK members. It is ubiquitously expressed in numerous tissues and is activated by a variety of extracellular stimuli, such as cellular stresses and growth factors, to regulate processes such as cell proliferation and differentiation. Targeted deletion of Erk5 in mice has revealed that the ERK5 signalling cascade plays a critical role in cardiovascular development and vascular integrity. Recent data point to a potential role in pathological conditions such as cancer and tumour angiogenesis. This review focuses on the physiological and pathological role of ERK5, the regulation of this kinase and the recent development of small molecule inhibitors of the ERK5 signalling cascade. © 2012 Elsevier Inc.
Abstract:
Antenna design is an iterative process in which structures are analyzed and modified until they comply with certain required performance parameters. The classic approach starts by analyzing a "known" structure, obtaining the value of its performance parameter and changing the structure until the "target" value is achieved. This process relies on having an initial structure that follows patterns already known or "intuitive" to the designer. The purpose of this research was to develop a method for designing UWB antennas. What is new in this proposal is that the design process is reversed: the designer starts with the target performance parameter and obtains a structure as the result of the design process. This method provided a new way to replicate and optimize existing performance parameters. The basis of the method was a Genetic Algorithm (GA) adapted to the format of the chromosome evaluated by the electromagnetic (EM) solver. For the electromagnetic study we used the XFDTD™ program, based on the Finite-Difference Time-Domain technique. The programming portion of the method was created in the MatLab environment, which serves as the interface for converting chromosomes, handling file formats and transferring data between XFDTD™ and the GA. A high level of customization had to be written into the code to work with the specific files generated by the XFDTD™ program. Two types of cost functions were evaluated: the first seeking broadband performance within the UWB band, and the second searching for curve replication of a reference geometry. The performance of the method was evaluated considering the speed provided by the computer resources used. Balance between accuracy, data file size and speed of execution was achieved by tuning parameters in the GA code as well as the internal parameters of the XFDTD™ projects.
The results showed that the GA produced geometries that were analyzed by the XFDTD™ program and changed following the search criteria until reaching the target value of the cost function. Results also showed how the parameters can change the search criteria and influence the running of the code to provide a variety of geometries.
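The abstract above couples a GA to a commercial EM solver; that XFDTD™ coupling is not public, so the following is only a minimal sketch of the generate-evaluate-select loop it describes, with an invented stand-in cost function in place of the solver call (the target bit pattern, chromosome length and all GA parameters are illustrative assumptions, not the author's values):

```python
import random

def evaluate(chromosome):
    """Stand-in for an EM-solver fitness call (the XFDTD coupling is not shown).
    Here the cost is simply the Hamming distance to an arbitrary target pattern."""
    target = [1, 0, 1, 1, 0, 0, 1, 0]
    return sum(g != t for g, t in zip(chromosome, target))

def crossover(a, b):
    """Single-point crossover between two parent chromosomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(c, rate=0.05):
    """Flip each gene independently with a small probability."""
    return [1 - g if random.random() < rate else g for g in c]

def run_ga(pop_size=20, genes=8, generations=50):
    pop = [[random.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=evaluate)        # lower cost is better
        elite = pop[: pop_size // 2]  # truncation selection with elitism
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return min(pop, key=evaluate)

best = run_ga()
```

In the dissertation's setting, `evaluate` would write the chromosome out as an antenna geometry file, run the FDTD solver, and score the resulting frequency response against the cost function.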
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases, or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree-structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed down the large-scale adoption of XML into actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge to leverage semistructured data is to perform effective information discovery on such data. Previous works have addressed this problem in a generic (i.e. domain independent) way, but this process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals: The first goal was to devise novel techniques to efficiently store and process semistructured documents. This goal had two specific aims: We proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives. We developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing Information Discovery over domain-specific semistructured documents. This goal also had two aims: We presented a framework that exploits the domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies. We also proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
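The Double-Lazy Parser itself is not shown in the abstract, but the general idea it extends, parsing XML on demand instead of materializing the whole DOM tree up front, can be illustrated with the Python standard library's streaming parser (the toy document below is invented for the example):

```python
import io
import xml.etree.ElementTree as ET

doc = io.StringIO(
    "<library>"
    "<book id='1'><title>XML Storage</title></book>"
    "<book id='2'><title>Lazy Parsing</title></book>"
    "</library>"
)

titles = []
# iterparse emits events as the stream is consumed, so the full DOM
# tree never has to be materialized before processing begins.
for event, elem in ET.iterparse(doc, events=("end",)):
    if elem.tag == "book":
        titles.append(elem.find("title").text)
        elem.clear()  # free the subtree once we are done with it

print(titles)  # ['XML Storage', 'Lazy Parsing']
```

A truly lazy parser goes further than this streaming approach: it skips over unrequested subtrees entirely and parses them only if a later query touches them, which is the behavior the dissertation extends to both pre-parsing and progressive parsing.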
Abstract:
Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concerns over the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure-Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models, by applying 3 molecular descriptors, namely, Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), Number of Chlorine (NCl) and Number of Carbon (NC), in Multiple Linear Regression (MLR) analysis. The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The predicted values of Log P of DBPs by the QSAR models were found to be significant with a correlation coefficient R2 from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and determine the most influential parameters in connection with Log P prediction.
The developed QSAR models in this dissertation will have a broad applicability domain because the research data set covered six out of eight common DBP classes, including halogenated alkane, halogenated alkene, halogenated aromatic, halogenated aldehyde, halogenated ketone, and halogenated carboxylic acid, which have been brought to the attention of regulatory agencies in recent years. Furthermore, the QSAR models are suitable to be used for prediction of similar DBP compounds within the same applicability domain. The selection and integration of various methodologies developed in this research may also benefit future research in similar fields.
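As a rough sketch of the modelling step described above, multiple linear regression of Log P on ELUMO, NCl and NC, the following fits MLR coefficients by least squares and computes R2. The descriptor table is fabricated for illustration only and is not taken from the dissertation's data set:

```python
import numpy as np

# Illustrative (invented) descriptor table: ELUMO (eV), N_Cl, N_C per compound.
# Real values would come from quantum-chemical calculation and experiment.
X_raw = np.array([
    [ 0.50, 1, 1],
    [ 0.10, 2, 1],
    [-0.20, 3, 1],
    [ 0.30, 1, 2],
    [-0.10, 2, 2],
    [-0.40, 3, 2],
])
y = np.array([0.92, 1.48, 2.13, 1.39, 2.02, 2.57])  # invented Log P values

X = np.column_stack([np.ones(len(X_raw)), X_raw])  # add intercept column
beta, *_ = np.linalg.lstsq(X, y, rcond=None)       # MLR coefficients

y_hat = X @ beta
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot                           # goodness-of-fit
```

The LOO cross-validation mentioned in the abstract would repeat this fit six times, each time holding one compound out and predicting it from the other five.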
Abstract:
Finite-Difference Time-Domain (FDTD) algorithms are well-established tools of computational electromagnetism. Because of their practical implementation as computer codes, they are affected by many numerical artefacts and noise. In order to obtain better results we propose using Principal Component Analysis (PCA), based on multivariate statistical techniques. PCA has been successfully used for the analysis of noise and spatio-temporal structure in a sequence of images. It allows a straightforward discrimination between the numerical noise and the actual electromagnetic variables, and a quantitative estimation of their respective contributions. Moreover, the FDTD results can be filtered to remove the effect of the noise. In this contribution we show how the method can be applied to several FDTD simulations: the propagation of a pulse in vacuum and the analysis of two-dimensional photonic crystals. In the latter case, PCA revealed hidden electromagnetic structures related to actual modes of the photonic crystal.
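A minimal sketch of the idea: stacking field snapshots as rows of a matrix and taking principal components (here via SVD) separates a coherent mode from additive noise. The synthetic standing-wave data below is an invented stand-in for real FDTD output:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FDTD output: 200 snapshots of a 1D field sampled at
# 64 points -- a coherent standing-wave mode plus small "numerical" noise.
t = np.linspace(0, 4 * np.pi, 200)[:, None]
x = np.linspace(0, np.pi, 64)[None, :]
snapshots = np.sin(t) * np.sin(4 * x) + 0.05 * rng.standard_normal((200, 64))

mean = snapshots.mean(axis=0)
centered = snapshots - mean
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

# Fraction of variance carried by each principal component: the coherent
# mode dominates, while the remaining components are close to white noise.
variance = s**2 / np.sum(s**2)

# Keep only the leading component to filter the noise out of the fields.
k = 1
filtered = mean + (U[:, :k] * s[:k]) @ Vt[:k]
```

In an actual FDTD analysis, each row would be one stored time step of the field over the grid, and the number of retained components would be chosen from the variance spectrum.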
Abstract:
AEM was supported by a BBSRC-CASE studentship award. Research in the IJM laboratory is currently supported by the Chief Scientist's Office of the Scottish Government and the charity Friends of Anchor.
Abstract:
FtsZ, a bacterial tubulin homologue, is a cytoskeleton protein that plays key roles in cytokinesis of almost all prokaryotes. FtsZ assembles into protofilaments (pfs), one subunit thick, and these pfs assemble further to form a "Z ring" at the center of prokaryotic cells. The Z ring generates a constriction force on the inner membrane, and also serves as a scaffold to recruit cell-wall remodeling proteins for complete cell division in vivo. FtsZ can be subdivided into 3 main functional regions: globular domain, C-terminal (Ct) linker, and Ct peptide. The globular domain binds GTP to assemble the pfs. The extreme Ct peptide binds membrane proteins to allow cytoplasmic FtsZ to function at the inner membrane. The Ct linker connects the globular domain and Ct peptide. In the present studies, we used genetic and structural approaches to investigate the function of Escherichia coli (E. coli) FtsZ. We sought to examine three questions: (1) Are lateral bonds between pfs essential for the Z ring? (2) Can we improve direct visualization of FtsZ in vivo by engineering an FtsZ-FP fusion that can function as the sole source of FtsZ for cell division? (3) Is the divergent Ct linker of FtsZ an intrinsically disordered peptide (IDP)?
One model of the Z ring proposes that pfs associate via lateral bonds to form ribbons; however, lateral bonds are still only hypothetical. To explore potential lateral bonding sites, we probed the surface of E. coli FtsZ by inserting either small peptides or whole FPs. Of the four lateral surfaces on FtsZ pfs, we obtained inserts on the front and back surfaces that were functional for cell division. We concluded that these faces are not sites of essential interactions. Inserts at two sites, G124 and R174, located on the left and right surfaces, completely blocked function and were identified as possible sites for essential lateral interactions. Another goal was to find a location within FtsZ that supported fusion of FP reporter proteins, while allowing the FtsZ-FP to function as the sole source of FtsZ. We discovered one internal site, G55-Q56, where several different FPs could be inserted without impairing function. These FtsZ-FPs may provide advances for imaging Z-ring structure by super-resolution techniques.
The Ct linker is the most divergent region of FtsZ in both sequence and length. In E. coli FtsZ the Ct linker is 50 amino acids (aa), but for other FtsZ it can be as short as 37 aa or as long as 250 aa. The Ct linker has been hypothesized to be an IDP. In the present study, circular dichroism confirmed that isolated Ct linkers of E. coli (50 aa) and C. crescentus (175 aa) are IDPs. Limited trypsin proteolysis followed by mass spectrometry (LC-MS/MS) confirmed Ct linkers of E. coli (50 aa) and B. subtilis (47 aa) as IDPs even when still attached to the globular domain. In addition, we made chimeras, swapping the E. coli Ct linker for other peptides and proteins. Most chimeras allowed for normal cell division in E. coli, suggesting that IDPs with a length of 43 to 95 aa are tolerated, sequence has little importance, and electrostatic charge is unimportant. Several chimeras were purified to confirm the effect they had on pf assembly. We concluded that the Ct linker functions as a flexible tether allowing for force to be transferred from the FtsZ pf to the membrane to constrict the septum for division.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air as well as the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, with the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments is tested extensively. The method demonstrates numerous advantages over the traditional level set method, among these a heightened conservation of fluid volume and the representation of subgrid structures.
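The GALSM itself augments the level set with gradient information and is not reproduced here; the plain level-set idea it builds on, tracking an interface as the zero crossing of an advected function, can be sketched in one dimension with first-order upwinding (the grid size, speed and time step below are arbitrary choices for the example):

```python
import numpy as np

# 1D toy problem: advect a signed-distance level set phi(x) = x - x0 with
# constant speed u > 0 using first-order upwind differences; the zero
# crossing of phi tracks the moving interface.
n, L, u, dt, steps = 200, 1.0, 0.5, 0.002, 300
x = np.linspace(0, L, n)
dx = x[1] - x[0]
phi = x - 0.2          # interface initially at x = 0.2

for _ in range(steps):
    # Backward (upwind) difference, valid because u > 0.
    dphi = np.empty_like(phi)
    dphi[1:] = (phi[1:] - phi[:-1]) / dx
    dphi[0] = dphi[1]  # simple extrapolation at the inflow boundary
    phi = phi - u * dt * dphi

# Interface location = grid point nearest the zero crossing of phi.
# Exact answer: 0.2 + u * dt * steps = 0.2 + 0.5 * 0.002 * 300 = 0.5
interface = x[np.argmin(np.abs(phi))]
```

Full level-set solvers add reinitialization to keep phi a signed-distance function; the gradient-augmented variant additionally transports the gradient of phi, which is what allows the subgrid resolution and volume conservation noted above.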
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.