969 results for domain structure
Abstract:
This paper proposes an ontology-based approach to representing courseware knowledge in different domains. The focus is on a three-level semantic graph that models, respectively, the course as a whole, its structure, and the domain content itself. The authors plan to use this representation for flexible e-learning and for generating different study plans for learners.
Abstract:
∗ Partially supported by INTAS grant 97-1644
Abstract:
In recent years, East-Christian iconographical artworks have been digitized, producing a large volume of data. The need for effective classification, indexing and retrieval of iconography repositories motivated the design and development of a systematized ontological structure for describing iconographical art objects. This paper presents the ontology of East-Christian iconographical art, developed to provide content annotation in the Virtual encyclopedia of Bulgarian iconography multimedia digital library. The ontology's main classes, relations, facts and rules are described, together with problems that arose during design and development. The paper also presents an application of the ontology for learning analysis in the iconography domain, implemented during the SINUS project "Semantic Technologies for Web Services and Technology Enhanced Learning".
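As a rough illustration of how an ontology's classes, relations and facts of this kind can be expressed in code, the sketch below uses Python's rdflib; the namespace, class and property names (Icon, Character, depicts, etc.) are hypothetical stand-ins, not the actual vocabulary of the ontology described above.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

# Hypothetical namespace and vocabulary, for illustration only.
ICON = Namespace("http://example.org/iconography#")

g = Graph()
g.bind("icon", ICON)

# Classes
g.add((ICON.IconographicalObject, RDF.type, RDFS.Class))
g.add((ICON.Icon, RDF.type, RDFS.Class))
g.add((ICON.Icon, RDFS.subClassOf, ICON.IconographicalObject))
g.add((ICON.Character, RDF.type, RDFS.Class))

# Relations (properties) with domain and range
g.add((ICON.depicts, RDF.type, RDF.Property))
g.add((ICON.depicts, RDFS.domain, ICON.Icon))
g.add((ICON.depicts, RDFS.range, ICON.Character))

# A fact (instance-level assertion)
g.add((ICON.ExampleIcon, RDF.type, ICON.Icon))
g.add((ICON.ExampleIcon, ICON.depicts, ICON.ExampleSaint))
g.add((ICON.ExampleIcon, RDFS.label, Literal("Example icon annotation")))

print(g.serialize(format="turtle"))
```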
Abstract:
Extracellular signal-regulated kinase 5 (ERK5), also termed big mitogen-activated protein kinase-1 (BMK1), is the most recently identified member of the mitogen-activated protein kinase (MAPK) family. It consists of an amino-terminal kinase domain and a relatively large carboxy-terminal region of unique structure and function that makes it distinct from other MAPK members. It is ubiquitously expressed in numerous tissues and is activated by a variety of extracellular stimuli, such as cellular stresses and growth factors, to regulate processes such as cell proliferation and differentiation. Targeted deletion of Erk5 in mice has revealed that the ERK5 signalling cascade plays a critical role in cardiovascular development and vascular integrity. Recent data point to a potential role in pathological conditions such as cancer and tumour angiogenesis. This review focuses on the physiological and pathological roles of ERK5, the regulation of this kinase, and the recent development of small-molecule inhibitors of the ERK5 signalling cascade.
Abstract:
Antenna design is an iterative process in which structures are analyzed and modified to comply with required performance parameters. The classic approach starts by analyzing a "known" structure, obtaining the value of its performance parameter, and changing the structure until the "target" value is achieved. This process relies on an initial structure that follows patterns already known or "intuitive" to the designer. The purpose of this research was to develop a method for designing UWB antennas. What is new in this proposal is that the design process is reversed: the designer starts with the target performance parameter and obtains a structure as the result of the design process. The method provides a new way to replicate and optimize existing performance parameters. Its core is a Genetic Algorithm (GA) adapted to the format of the chromosome that is evaluated by the electromagnetic (EM) solver. For the electromagnetic study we used the XFDTD™ program, based on the Finite-Difference Time-Domain technique. The programming portion of the method was created in the MatLab environment, which serves as the interface for converting chromosomes, handling file formats, and transferring data between XFDTD™ and the GA. A high level of customization had to be written into the code to work with the specific files generated by the XFDTD™ program. Two types of cost functions were evaluated: the first seeking broadband performance within the UWB band, and the second searching for curve replication of a reference geometry. The performance of the method was evaluated considering the speed provided by the computing resources used. A balance between accuracy, data file size and speed of execution was achieved by tuning parameters in the GA code as well as the internal parameters of the XFDTD™ projects. The results showed that the GA produced geometries that were analyzed by the XFDTD™ program and changed according to the search criteria until the target value of the cost function was reached. The results also showed how the parameters can change the search criteria and influence the running of the code to produce a variety of geometries.
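The abstract describes a GA whose chromosomes encode antenna geometries evaluated by an external EM solver. A minimal sketch of that loop, assuming a binary pixel-grid chromosome and a toy cost function standing in for the XFDTD™ evaluation (the real work exports geometry files and reads solver output, which is not reproduced here):

```python
import random

GRID = 16 * 16            # chromosome length: 16x16 metal/no-metal pixel grid (illustrative)
POP, GENS, PMUT = 40, 100, 0.02

def evaluate_cost(chromosome):
    """Toy stand-in for the EM-solver evaluation (XFDTD in the original work).
    In practice this would export the geometry, run the solver, and return e.g.
    the worst-case |S11| over the UWB band. Here: distance of the metal fill
    fraction from an arbitrary target, just so the loop runs end to end."""
    fill = sum(chromosome) / GRID
    return abs(fill - 0.35)

def mutate(c):
    # Flip each bit with probability PMUT.
    return [bit ^ (random.random() < PMUT) for bit in c]

def crossover(a, b):
    cut = random.randrange(1, GRID)
    return a[:cut] + b[cut:]

def run_ga():
    pop = [[random.randint(0, 1) for _ in range(GRID)] for _ in range(POP)]
    for _ in range(GENS):
        scored = sorted(pop, key=evaluate_cost)          # lower cost is better
        parents = scored[: POP // 2]                     # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(POP - len(parents))]
        pop = parents + children
    return min(pop, key=evaluate_cost)

best = run_ga()
print("best cost:", evaluate_cost(best))
```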
Abstract:
The increasing amount of available semistructured data demands efficient mechanisms to store, process, and search an enormous corpus of data to encourage its global adoption. Current techniques to store semistructured documents either map them to relational databases or use a combination of flat files and indexes. These two approaches result in a mismatch between the tree structure of semistructured data and the access characteristics of the underlying storage devices. Furthermore, the inefficiency of XML parsing methods has slowed the large-scale adoption of XML in actual system implementations. The recent development of lazy parsing techniques is a major step towards improving this situation, but lazy parsers still have significant drawbacks that undermine the massive adoption of XML. Once the processing (storage and parsing) issues for semistructured data have been addressed, another key challenge in leveraging semistructured data is to perform effective information discovery on such data. Previous work has addressed this problem in a generic (i.e. domain-independent) way, but the process can be improved if knowledge about the specific domain is taken into consideration. This dissertation had two general goals. The first goal was to devise novel techniques to efficiently store and process semistructured documents, with two specific aims: we proposed a method for storing semistructured documents that maps the physical characteristics of the documents to the geometrical layout of hard drives, and we developed a Double-Lazy Parser for semistructured documents which introduces lazy behavior in both the pre-parsing and progressive parsing phases of the standard Document Object Model's parsing mechanism. The second goal was to construct a user-friendly and efficient engine for performing information discovery over domain-specific semistructured documents, also with two aims: we presented a framework that exploits domain-specific knowledge to improve the quality of the information discovery process by incorporating domain ontologies, and we proposed meaningful evaluation metrics to compare the results of search systems over semistructured documents.
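For contrast with the Double-Lazy Parser described above (whose design is specific to this dissertation), a standard way to avoid materialising a full DOM in Python is incremental parsing with xml.etree.ElementTree.iterparse; the element tag and file name below are illustrative only.

```python
import xml.etree.ElementTree as ET

def stream_titles(path):
    """Incrementally parse a (possibly huge) XML document, yielding the text of
    each <title> element and freeing processed subtrees as we go. This is
    ordinary streaming/lazy parsing, not the Double-Lazy Parser above, but it
    illustrates the memory problem that lazy approaches address."""
    for event, elem in ET.iterparse(path, events=("end",)):
        if elem.tag == "title":
            yield (elem.text or "").strip()
        elem.clear()   # drop the subtree so memory stays bounded

# Usage (hypothetical file):
# for t in stream_titles("articles.xml"):
#     print(t)
```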
Abstract:
Hydrophobicity as measured by Log P is an important molecular property related to toxicity and carcinogenicity. With increasing public health concern over the effects of Disinfection By-Products (DBPs), there are considerable benefits in developing Quantitative Structure–Activity Relationship (QSAR) models capable of accurately predicting Log P. In this research, Log P values of 173 DBP compounds in 6 functional classes were used to develop QSAR models by Multiple Linear Regression (MLR) analysis, using 3 molecular descriptors: Energy of the Lowest Unoccupied Molecular Orbital (ELUMO), Number of Chlorine atoms (NCl) and Number of Carbon atoms (NC). The QSAR models developed were validated based on the Organization for Economic Co-operation and Development (OECD) principles. The model Applicability Domain (AD) and mechanistic interpretation were explored. Considering the very complex nature of DBPs, the established QSAR models performed very well with respect to goodness-of-fit, robustness and predictability. The predicted Log P values of DBPs from the QSAR models were significant, with correlation coefficients R2 from 81% to 98%. The Leverage Approach by Williams Plot was applied to detect and remove outliers, consequently increasing R2 by approximately 2% to 13% for the different DBP classes. The developed QSAR models were statistically validated for their predictive power by the Leave-One-Out (LOO) and Leave-Many-Out (LMO) cross-validation methods. Finally, Monte Carlo simulation was used to assess the variations and inherent uncertainties in the QSAR models of Log P and to determine the most influential parameters in Log P prediction. The developed QSAR models have a broad applicability domain because the research data set covered six of the eight common DBP classes: halogenated alkanes, halogenated alkenes, halogenated aromatics, halogenated aldehydes, halogenated ketones, and halogenated carboxylic acids, which have been brought to the attention of regulatory agencies in recent years. The QSAR models are therefore suitable for predicting Log P of similar DBP compounds within the same applicability domain. The selection and integration of the various methodologies developed in this research may also benefit future research in similar fields.
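A hedged sketch of the kind of multiple linear regression described above: Log P regressed on the three descriptors (ELUMO, NCl, NC), with a leave-one-out check of predictive power. The tiny data arrays are fabricated for illustration; the actual 173-compound dataset and fitted coefficients are in the dissertation.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Columns: ELUMO (eV), number of Cl atoms, number of C atoms -- illustrative values only.
X = np.array([
    [-0.5, 1, 1],
    [-1.2, 2, 1],
    [-1.8, 3, 1],
    [-0.9, 1, 2],
    [-1.5, 2, 2],
    [-2.1, 3, 2],
])
logP = np.array([0.9, 1.3, 1.8, 1.4, 1.9, 2.4])   # fabricated responses

model = LinearRegression().fit(X, logP)
print("coefficients:", model.coef_, "intercept:", model.intercept_)
print("R^2 (fit):", model.score(X, logP))

# Leave-one-out cross-validation: predict each compound from the others,
# then compute a Q^2-style statistic from the pooled predictions.
pred = cross_val_predict(LinearRegression(), X, logP, cv=LeaveOneOut())
q2 = 1.0 - np.sum((logP - pred) ** 2) / np.sum((logP - logP.mean()) ** 2)
print("Q^2 (LOO):", q2)
```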
Abstract:
Finite-Difference Time-Domain (FDTD) algorithms are well-established tools of computational electromagnetics. Because of their practical implementation as computer codes, they are affected by numerical artefacts and noise. In order to obtain better results we propose using Principal Component Analysis (PCA), based on multivariate statistical techniques. PCA has been successfully used for the analysis of noise and spatio-temporal structure in sequences of images. It allows a straightforward discrimination between numerical noise and the actual electromagnetic variables, and a quantitative estimation of their respective contributions. Moreover, the FDTD results can be filtered to remove the effect of the noise. In this contribution we show how the method can be applied to several FDTD simulations: the propagation of a pulse in vacuum, and the analysis of two-dimensional photonic crystals. In the latter case, PCA has revealed hidden electromagnetic structures related to actual modes of the photonic crystal.
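A minimal sketch of the PCA idea described above, applied to a sequence of 2-D field snapshots: each snapshot is flattened into a row, the snapshot matrix is decomposed by SVD, and the field is reconstructed from the leading components to filter numerical noise. The synthetic data below stand in for real FDTD output.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for FDTD output: T snapshots of an N x N field
# (a travelling sinusoidal pattern plus additive "numerical noise").
T, N = 200, 64
x = np.linspace(0, 2 * np.pi, N)
frames = np.array([np.outer(np.sin(x + 0.1 * t), np.cos(x)) for t in range(T)])
frames += 0.05 * rng.standard_normal(frames.shape)

# PCA via SVD on the (snapshots x pixels) matrix, after removing the time mean.
X = frames.reshape(T, -1)
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

explained = S**2 / np.sum(S**2)
print("variance captured by first 3 components:", explained[:3].sum())

# Keep only the leading k components: signal subspace vs. noise floor.
k = 3
X_filtered = (U[:, :k] * S[:k]) @ Vt[:k] + mean
frames_filtered = X_filtered.reshape(T, N, N)
```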
Abstract:
AEM was supported by a BBSRC-CASE studentship award. Research in the IJM laboratory is currently supported by the Chief Scientist's Office of the Scottish Government and the charity Friends of Anchor.
Abstract:
FtsZ, a bacterial tubulin homologue, is a cytoskeletal protein that plays key roles in cytokinesis of almost all prokaryotes. FtsZ assembles into protofilaments (pfs), one subunit thick, and these pfs assemble further to form a “Z ring” at the center of prokaryotic cells. The Z ring generates a constriction force on the inner membrane, and also serves as a scaffold to recruit cell-wall remodeling proteins for complete cell division in vivo. FtsZ can be subdivided into three main functional regions: the globular domain, the C-terminal (Ct) linker, and the Ct peptide. The globular domain binds GTP to assemble the pfs. The extreme Ct peptide binds membrane proteins, allowing cytoplasmic FtsZ to function at the inner membrane. The Ct linker connects the globular domain and the Ct peptide. In the present studies, we used genetic and structural approaches to investigate the function of Escherichia coli (E. coli) FtsZ. We sought to examine three questions: (1) Are lateral bonds between pfs essential for the Z ring? (2) Can we improve direct visualization of FtsZ in vivo by engineering an FtsZ-FP fusion that can function as the sole source of FtsZ for cell division? (3) Is the divergent Ct linker of FtsZ an intrinsically disordered peptide (IDP)?
One model of the Z ring proposes that pfs associate via lateral bonds to form ribbons; however, lateral bonds are still only hypothetical. To explore potential lateral bonding sites, we probed the surface of E. coli FtsZ by inserting either small peptides or whole FPs. Of the four lateral surfaces on FtsZ pfs, we obtained inserts on the front and back surfaces that were functional for cell division. We concluded that these faces are not sites of essential interactions. Inserts at two sites, G124 and R174, located on the left and right surfaces, completely blocked function and were identified as possible sites for essential lateral interactions. Another goal was to find a location within FtsZ that supported fusion of FP reporter proteins while allowing the FtsZ-FP to function as the sole source of FtsZ. We discovered one internal site, G55-Q56, where several different FPs could be inserted without impairing function. These FtsZ-FPs may provide advances for imaging Z-ring structure by super-resolution techniques.
The Ct linker is the most divergent region of FtsZ in both sequence and length. In E. coli FtsZ the Ct linker is 50 amino acids (aa), but for other FtsZ it can be as short as 37 aa or as long as 250 aa. The Ct linker has been hypothesized to be an IDP. In the present study, circular dichroism confirmed that isolated Ct linkers of E. coli (50 aa) and C. crescentus (175 aa) are IDPs. Limited trypsin proteolysis followed by mass spectrometry (LC-MS/MS) confirmed Ct linkers of E. coli (50 aa) and B. subtilis (47 aa) as IDPs even when still attached to the globular domain. In addition, we made chimeras, swapping the E. coli Ct linker for other peptides and proteins. Most chimeras allowed for normal cell division in E. coli, suggesting that IDPs with a length of 43 to 95 aa are tolerated, sequence has little importance, and electrostatic charge is unimportant. Several chimeras were purified to confirm the effect they had on pf assembly. We concluded that the Ct linker functions as a flexible tether allowing for force to be transferred from the FtsZ pf to the membrane to constrict the septum for division.
A New Method for Modeling Free Surface Flows and Fluid-structure Interaction with Ocean Applications
Abstract:
The computational modeling of ocean waves and ocean-faring devices poses numerous challenges. Among these are the need to stably and accurately represent both the fluid-fluid interface between water and air and the fluid-structure interfaces arising between solid devices and one or more fluids. As techniques are developed to stably and accurately balance the interactions between fluid and structural solvers at these boundaries, a similarly pressing challenge is the development of algorithms that are massively scalable and capable of performing large-scale three-dimensional simulations on reasonable time scales. This dissertation introduces two separate methods for approaching this problem, the first focusing on the development of sophisticated fluid-fluid interface representations and the second focusing primarily on scalability and extensibility to higher-order methods.
We begin by introducing the narrow-band gradient-augmented level set method (GALSM) for incompressible multiphase Navier-Stokes flow. This is the first use of the high-order GALSM for a fluid flow application, and its reliability and accuracy in modeling ocean environments are tested extensively. The method demonstrates numerous advantages over the traditional level set method, among them improved conservation of fluid volume and the representation of subgrid structures.
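For reference, in a level set formulation the water-air interface is the zero contour of a scalar field φ advected by the flow velocity u; the gradient-augmented variant additionally evolves the gradient ψ = ∇φ, which is what allows subgrid structure to be represented. A standard statement of the advection step (the textbook form, not this dissertation's exact discretisation) is:

```latex
\frac{\partial \phi}{\partial t} + \mathbf{u}\cdot\nabla\phi = 0,
\qquad
\frac{\partial \boldsymbol{\psi}}{\partial t} + (\mathbf{u}\cdot\nabla)\boldsymbol{\psi}
  = -(\nabla\mathbf{u})^{\mathsf T}\boldsymbol{\psi},
\qquad \boldsymbol{\psi} := \nabla\phi .
```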
Next, we present a finite-volume algorithm for solving the incompressible Euler equations in two and three dimensions in the presence of a flow-driven free surface and a dynamic rigid body. In this development, the chief concerns are efficiency, scalability, and extensibility (to higher-order and truly conservative methods). These priorities informed a number of important choices: The air phase is substituted by a pressure boundary condition in order to greatly reduce the size of the computational domain, a cut-cell finite-volume approach is chosen in order to minimize fluid volume loss and open the door to higher-order methods, and adaptive mesh refinement (AMR) is employed to focus computational effort and make large-scale 3D simulations possible. This algorithm is shown to produce robust and accurate results that are well-suited for the study of ocean waves and the development of wave energy conversion (WEC) devices.
Abstract:
This thesis concerns the modelling of fluid-structure interactions and the associated numerical methods. Accordingly, the thesis is divided into two parts. The first part studies fluid-structure interactions using the fictitious domain method. In this contribution, the fluid is incompressible and laminar and the structure is considered rigid, whether stationary or in motion. The tools we developed include the implementation of a reliable solution algorithm that integrates both domains (fluid and solid) in a mixed formulation. The algorithm is based on adaptive local mesh-refinement techniques that better separate the elements of the fluid medium from those of the solid, in both 2D and 3D. The second part studies the mechanical interactions between a flexible structure and an incompressible fluid. In this contribution, we propose and analyse partitioned numerical methods for simulating fluid-structure interaction (FSI) phenomena. For this purpose we adopted the arbitrary Lagrangian-Eulerian (ALE) method. The fluid is solved iteratively using a projection-type scheme, and the structure is modelled by hyperelastic models in large deformations. We developed new mesh-motion methods to accommodate large deformations of the structure. Finally, a strategy for increasing the complexity of the FSI problem was defined: turbulence modelling and free-surface flows were introduced and coupled to the solution of the Navier-Stokes equations. Various numerical simulations are presented to illustrate the efficiency and robustness of the algorithm. The numerical results presented attest to the validity and efficiency of the numerical methods developed.
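The abstract above mentions a projection-type scheme for the incompressible fluid solve. For orientation only (the thesis may use an incremental or rotational variant), the classical Chorin splitting of one time step of the Navier-Stokes equations computes an intermediate velocity, solves a pressure Poisson problem, and projects back onto the divergence-free space:

```latex
\frac{\mathbf{u}^{*}-\mathbf{u}^{n}}{\Delta t}
  + (\mathbf{u}^{n}\cdot\nabla)\mathbf{u}^{n}
  = \nu\,\Delta\mathbf{u}^{*},
\qquad
\Delta p^{\,n+1} = \frac{\rho}{\Delta t}\,\nabla\cdot\mathbf{u}^{*},
\qquad
\mathbf{u}^{n+1} = \mathbf{u}^{*} - \frac{\Delta t}{\rho}\,\nabla p^{\,n+1},
```

so that the corrected velocity satisfies the divergence-free constraint at the new time level.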
Abstract:
A three-dimensional finite volume, unstructured mesh (FV-UM) method for dynamic fluid–structure interaction (DFSI) is described. Fluid–structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, wind response of buildings, and flows in elastic pipes and blood vessels. It involves the coupling of fluid flow and structural mechanics, two fields that are conventionally modelled using dissimilar methods, so a single comprehensive computational model of both phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. A key contribution has been made by Farhat et al. [Int. J. Numer. Meth. Fluids 21 (1995) 807], employing FV-UM methods for solving the Euler flow equations, a conventional finite element method for the elastic solid mechanics, and the spring-based mesh procedure of Batina [AIAA paper 0115, 1989] for mesh movement. In this paper, we describe an approach which broadly exploits the three-field strategy described by Farhat for fluid flow, structural dynamics and mesh movement but, in the context of DFSI, contains a number of novel features: a single mesh covering the entire domain, a Navier–Stokes flow, a single FV-UM discretisation approach for both the flow and solid mechanics procedures, an implicit predictor–corrector version of the Newmark algorithm, and a single code embedding the whole strategy.
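The abstract refers to an implicit predictor–corrector version of the Newmark algorithm for the structural dynamics. As a point of reference (a textbook sketch, not the paper's FV-UM implementation), one Newmark-β step for a linear system M a + C v + K u = f can be written as:

```python
import numpy as np

def newmark_step(M, C, K, f_next, u, v, a, dt, beta=0.25, gamma=0.5):
    """One step of the Newmark-beta method in predictor-corrector (a-form).
    beta=1/4, gamma=1/2 gives the unconditionally stable average-acceleration rule.
    Linear M, C, K only; the DFSI paper couples this to its own solid solver."""
    # Predictors: displacement and velocity extrapolated with the old acceleration.
    u_p = u + dt * v + 0.5 * dt**2 * (1 - 2 * beta) * a
    v_p = v + dt * (1 - gamma) * a
    # Implicit solve for the new acceleration.
    A = M + gamma * dt * C + beta * dt**2 * K
    a_new = np.linalg.solve(A, f_next - C @ v_p - K @ u_p)
    # Correctors.
    u_new = u_p + beta * dt**2 * a_new
    v_new = v_p + gamma * dt * a_new
    return u_new, v_new, a_new

# Usage on a single-DOF oscillator (illustrative):
M = np.array([[1.0]]); C = np.array([[0.1]]); K = np.array([[4.0]])
u = np.array([1.0]); v = np.array([0.0])
a = np.linalg.solve(M, -C @ v - K @ u)      # consistent initial acceleration
for _ in range(100):
    u, v, a = newmark_step(M, C, K, np.zeros(1), u, v, a, dt=0.05)
```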
Abstract:
The cytokine hormone leptin is a key signalling molecule in many pathways that control physiological functions. Although leptin demonstrates structural conservation in mammals, there is evidence of positive selection in primates, lagomorphs and chiropterans. We previously reported that the leptin genes of the grey and harbour seals (phocids) have significantly diverged from those of other mammals. We therefore further investigated the diversification of leptin in phocids, other marine mammals and terrestrial taxa by sequencing the leptin genes of representative species. Phylogenetic reconstruction revealed that leptin diversification was pronounced within the phocid seals, with a high dN/dS ratio of 2.8, indicating positive selection. We found significant evidence of positive selection along the branch leading to the phocids and within the phocid clade, but not over the dataset as a whole. Structural predictions indicate that the individual residues under selection lie away from the leptin receptor (LEPR) binding site. Predictions of the surface electrostatic potential indicate that phocid seal leptin is notably different from other mammalian leptins, including those of the otariids. Cloning the grey seal leptin-binding domain of LEPR confirmed that it is structurally conserved. These data, viewed in toto, support the hypothesis that phocid leptin divergence is unlikely to have arisen by random mutation. Based on these phylogenetic and structural assessments, and considering the comparative physiology and varying life histories among species, we postulate that the unique diving behaviour of phocids has produced this selection pressure. The Phocidae includes some of the deepest-diving species, yet has the least modified lung structure to cope with the pressure and volume changes experienced at depth. Greater surfactant production is therefore required to facilitate rapid lung re-inflation upon surfacing while maintaining patent airways. We suggest that this additional surfactant requirement is met by the leptin pulmonary surfactant production pathway, which normally appears to function only in the mammalian foetus.
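For readers outside molecular evolution, the dN/dS ratio quoted above compares substitution rates at sites where mutations do and do not change the encoded protein:

```latex
\omega \;=\; \frac{d_N}{d_S}
  \;=\; \frac{\text{nonsynonymous substitutions per nonsynonymous site}}
             {\text{synonymous substitutions per synonymous site}},
```

with ω near 1 indicating neutral evolution, ω < 1 purifying selection, and ω > 1 (as with the value of 2.8 reported here for phocid leptin) positive selection.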