985 results for synthetic methods


Relevance: 30.00%

Abstract:

Large quantities of pure synthetic oligodeoxynucleotides (ODNs) are important for preclinical research, drug development, and biological studies. These ODNs are synthesized on an automated synthesizer. The crude ODN product inevitably contains failure sequences, which are not easily removed because they have the same properties as the full-length ODNs. Current ODN purification methods such as polyacrylamide gel electrophoresis (PAGE), reversed-phase high performance liquid chromatography (RP HPLC), anion exchange HPLC, and affinity purification can remove these impurities. However, they are not suitable for large-scale purification because of the high costs of instrumentation, solvents, and labor. To solve these problems, two non-chromatographic ODN purification methods have been developed. In the first method, the full-length ODN was tagged with a phosphoramidite containing a methacrylamide group and a cleavable linker, while the failure sequences were not. The full-length ODN was incorporated into a polymer through radical acrylamide polymerization, and the failure sequences and other impurities were removed by washing. Pure full-length ODN was obtained by cleaving it from the polymer. In the second method, the failure sequences were capped with a methacrylated phosphoramidite in each synthetic cycle. During purification, the failure sequences were separated from the full-length ODN by radical acrylamide polymerization, and the full-length ODN was recovered by water extraction. Both methods achieved excellent purification yields and highly satisfactory ODN purity. This new technology is therefore expected to be beneficial for large-scale ODN purification.

Relevance: 30.00%

Abstract:

PURPOSE: The aims were (1) to evaluate the soft-tissue reaction to a synthetic polyethylene glycol (PEG) hydrogel used as a barrier membrane for guided bone regeneration (GBR) compared with a collagen membrane and (2) to test whether the application of this in situ formed membrane results in a similar amount of bone regeneration as the use of a collagen membrane. MATERIAL AND METHODS: Tooth extraction and preparation of osseous defects were performed in the mandibles of 11 beagle dogs. After 3 months, 44 cylindrical implants were placed within healed dehiscence-type bone defects, resulting in approximately 6 mm of exposed implant surface. The following four treatment modalities were randomly allocated: PEG + autogenous bone chips, PEG + hydroxyapatite (HA)/tricalcium phosphate (TCP) granules, bioresorbable collagen membrane + autogenous bone chips, and autogenous bone chips without a membrane. After 2 and 6 months, six and five dogs were sacrificed, respectively. A semi-quantitative evaluation of local tolerance and a histomorphometric analysis were performed. For statistical analysis, repeated-measures analysis of variance (ANOVA) and subsequent pairwise Student's t-tests were applied (P<0.05). RESULTS: No local adverse effects associated with the PEG membrane compared with the collagen membrane were observed clinically or histologically at any time point. Healing was uneventful and all implants were histologically integrated. Four of 22 PEG membrane sites revealed a soft-tissue dehiscence after 1-2 weeks that subsequently healed uneventfully. Histomorphometric measurement of vertical bone gain showed values between 31% and 45% after 2 months and between 31% and 38% after 6 months. Bone-to-implant contact (BIC) within the former defect area was similarly high in all groups, ranging from 71% to 82% after 2 months and from 49% to 91% after 6 months. However, for all evaluated parameters, the PEG and collagen membranes did not show any statistically significant difference compared with sites treated with autogenous bone without a membrane. CONCLUSION: The in situ forming synthetic PEG membrane was safely used in the present study, revealing no biologically significant abnormal soft-tissue reaction, and defects treated with the PEG membrane showed amounts of newly formed bone similar to those of defects treated with a standard collagen membrane.

Relevance: 30.00%

Abstract:

BACKGROUND Curcumin (CUR) is a dietary spice and food colorant (E100). Its potent anti-inflammatory activity, exerted by inhibiting the activation of Nuclear Factor-kappaB, is well established. METHODS The aim of this study was to compare natural purified CUR (nCUR) with synthetically manufactured CUR (sCUR) with respect to their capacity to inhibit detrimental effects in an in vitro model of oral mucositis. The hypothesis was that nCUR and sCUR are bioequivalent. RESULTS The purity of sCUR was confirmed by HPLC. Adherence and invasion assays of bacteria to human pharyngeal epithelial cells demonstrated equivalence of nCUR and sCUR. Standard assays also demonstrated an identical inhibitory effect on pro-inflammatory cytokine/chemokine secretion (e.g., interleukin-8, interleukin-6) by Detroit pharyngeal cells exposed to bacterial stimuli. sCUR and nCUR were bioequivalent with respect to their antibacterial effects against various pharyngeal species. CONCLUSION nCUR and sCUR are equipotent in in vitro assays mimicking aspects of oral mucositis. The advantages of sCUR include that it is odorless and tasteless, more easily soluble in DMSO, and a single, highly purified molecule lacking the batch-to-batch variation in CUR content seen in nCUR. sCUR is a promising candidate for the development of an oral anti-mucositis agent.

Relevance: 30.00%

Abstract:

OBJECTIVES Recent studies suggest that a combination of enamel matrix derivative (EMD) with grafting material may improve periodontal wound healing/regeneration. Newly developed calcium phosphate (CaP) ceramics have been demonstrated to be a viable synthetic replacement option for bone-grafting filler materials. AIMS This study aims to test the ability of EMD to adsorb to the surface of CaP particles and to determine the effect of EMD on downstream cellular pathways such as adhesion, proliferation, and differentiation of primary human osteoblasts and periodontal ligament (PDL) cells. MATERIALS AND METHODS EMD was adsorbed onto CaP particles and analyzed for protein adsorption patterns via scanning electron microscopy and high-resolution immunocytochemistry with an anti-EMD antibody. Cell attachment and cell proliferation were quantified using the CellTiter 96 One Solution Cell Assay (MTS). Cell differentiation was analyzed using real-time PCR for genes encoding Runx2, alkaline phosphatase, osteocalcin, and collagen1α1, and mineralization was assessed using alizarin red staining. RESULTS Analysis of cell attachment revealed a significantly higher number of cells attached to EMD-adsorbed CaP particles when compared to control and blood-adsorbed samples. EMD also significantly increased cell proliferation at 3 and 5 days post-seeding. Moreover, there were significantly higher mRNA levels of osteoblast differentiation markers, including collagen1α1, alkaline phosphatase, and osteocalcin, in osteoblasts and PDL cells cultured on EMD-adsorbed CaP particles at various time points. CONCLUSION The present study suggests that the addition of EMD to CaP grafting particles may influence periodontal regeneration by stimulating PDL cell and osteoblast attachment, proliferation, and differentiation. Future in vivo and clinical studies are required to confirm these findings. CLINICAL RELEVANCE The combination of EMD and CaP may represent an option for regenerative periodontal therapy in advanced intrabony defects.

Relevance: 30.00%

Abstract:

BACKGROUND Antifibrinolytics have been used for 2 decades to reduce bleeding in cardiac surgery. MDCO-2010 is a novel, synthetic serine protease inhibitor. We describe the first experience with this drug in patients. METHODS In this phase II, double-blind, placebo-controlled study, 32 patients undergoing isolated primary coronary artery bypass grafting with cardiopulmonary bypass were randomly assigned to 1 of 5 increasing dosage groups of MDCO-2010. The primary aim was to evaluate the pharmacokinetics (PK) of MDCO-2010, based on plasma concentrations of the drug, as well as its short-term safety and tolerance. Secondary end points were the influence on coagulation, chest tube drainage, and transfusion requirements. RESULTS PK analysis showed a linear, dosage-proportional correlation between MDCO-2010 infusion rate and PK parameters. Blood loss was significantly reduced in the 3 highest dosage groups compared with control (P = 0.002, 0.004, and 0.011, respectively). The incidence of allogeneic blood product transfusions was lower with MDCO-2010 (4/24, 17%) than in the control group (4/8, 50%). MDCO-2010 exhibited dosage-dependent antifibrinolytic effects through suppression of D-dimer generation and inhibition of tissue plasminogen activator-induced lysis in ROTEM analysis, as well as anticoagulant effects demonstrated by prolongation of activated clotting time and activated partial thromboplastin time. No systematic differences in markers of end-organ function were observed among treatment groups. Three patients in the MDCO-2010 groups experienced serious adverse events; one experienced intraoperative thrombosis of venous grafts considered possibly related to the study drug. No reexploration for mediastinal bleeding was required, and there were no deaths. CONCLUSIONS This first-in-patient study demonstrated dosage-proportional PK for MDCO-2010 and a reduction of chest tube drainage and transfusions in patients undergoing primary coronary artery bypass grafting. Antifibrinolytic and anticoagulant effects were demonstrated using various markers of coagulation. MDCO-2010 was well tolerated and showed an acceptable initial safety profile. Larger multi-institutional studies are warranted to further investigate the safety and efficacy of this compound.

Relevance: 30.00%

Abstract:

BACKGROUND: In clinical practice, the high-dose ACTH stimulation test (HDT) is frequently used in the assessment of adrenal insufficiency (AI). However, there is uncertainty regarding the optimal time points and number of blood samplings. The present study compared the utility of a single cortisol value taken either 30 or 60 minutes after ACTH stimulation with the traditional interpretation of the HDT. METHODS: Retrospective analysis of 73 HDTs performed at a single tertiary endocrine centre. Serum cortisol was measured at baseline and at 30 and 60 minutes after intravenous administration of 250 µg synthetic ACTH1-24. AI was defined as a stimulated cortisol level <550 nmol/l. RESULTS: Twenty patients (27.4%) showed an insufficient rise in serum cortisol using traditional HDT criteria and were diagnosed with AI. Ten individuals showed insufficient cortisol values after 30 minutes that rose to sufficient levels at 60 minutes. All patients with an insufficient cortisol response after 60 minutes also had an insufficient result after 30 minutes. The cortisol value taken after 30 minutes did not add incremental diagnostic value in any of the cases under investigation compared with the 60-minute sample. CONCLUSIONS: Based on the findings of the present analysis, the utility of a cortisol measurement 30 minutes after high-dose ACTH injection was low and did not add incremental diagnostic value to a single measurement after 60 minutes.

Relevance: 30.00%

Abstract:

Introduction Gene expression is an important process whereby the genotype controls an individual cell's phenotype. However, even genetically identical cells display a variety of phenotypes, which may be attributed to differences in their environment. Yet, even after controlling for these two factors, individual phenotypes still diverge due to noisy gene expression. Synthetic gene expression systems allow investigators to isolate, control, and measure the effects of noise on cell phenotypes. I used mathematical and computational methods to design, study, and predict the behavior of synthetic gene expression systems in S. cerevisiae that were affected by noise. Methods I created probabilistic biochemical reaction models from known behaviors of the tetR and rtTA genes, their gene products, and their gene architectures. I then simplified these models to capture the essential behaviors of gene expression systems. Finally, I used these models to predict the behaviors of modified gene expression systems, which were experimentally verified. Results Cell growth, which is often ignored when formulating chemical kinetics models, was essential for understanding gene expression behavior. Models incorporating growth effects were used to explain unexpected reductions in gene expression noise, design a set of gene expression systems with "linear" dose-responses, and quantify the speed with which cells explored their fitness landscapes due to noisy gene expression. Conclusions Models incorporating noisy gene expression and cell division were necessary to design, understand, and predict the behaviors of synthetic gene expression systems. The methods and models developed here will allow investigators to design new gene expression systems more efficiently and to infer gene expression properties of TetR-based systems.
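
To make the modeling approach concrete, the sketch below runs a minimal Gillespie-style stochastic simulation of a constitutive birth-death gene expression model, the simplest kind of probabilistic biochemical reaction model described above. It is illustrative only: the two-reaction network and the rate constants are assumptions for the example, not the TetR/rtTA models or parameters from the thesis.

```python
import random

# Minimal Gillespie simulation of constitutive protein expression:
# one production reaction (rate k_prod) and one degradation reaction
# (rate k_deg * n). Rates are hypothetical.
def gillespie_birth_death(k_prod=10.0, k_deg=0.1, t_end=200.0, seed=1):
    random.seed(seed)
    t, n = 0.0, 0                          # time and protein copy number
    trajectory = [(t, n)]
    while t < t_end:
        rates = [k_prod, k_deg * n]        # propensities: production, degradation
        total = sum(rates)
        t += random.expovariate(total)     # waiting time to the next reaction
        if random.random() < rates[0] / total:
            n += 1                         # production event
        else:
            n -= 1                         # degradation event
        trajectory.append((t, n))
    return trajectory

traj = gillespie_birth_death()
late = [n for t, n in traj if t > 100.0]   # crude steady-state sample
print(f"mean copy number ~ {sum(late) / len(late):.1f} "
      f"(analytic steady state = k_prod / k_deg = 100)")
```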

Relevance: 30.00%

Abstract:

Measures of agro-ecosystem genetic variability are essential to support science-based actions and policies aimed at protecting the ecosystem services these systems provide. Building the genetic variability datum requires dealing with a large number of variables of different types. Molecular marker data are high-dimensional by nature, and additional types of information, such as morphological and physiological traits, are frequently obtained. Thus, genetic variability studies are usually associated with the measurement of several traits on each entity. Multivariate methods aim to find proximities between entities characterized by multiple traits by summarizing the information in a few synthetic variables. In this work we discuss and illustrate several multivariate methods used for different purposes in building the genetic variability datum. We include methods applied in studies exploring the spatial structure of genetic variability and the association of genetic data with other sources of information. Multivariate techniques allow the pursuit of the genetic variability datum as a unifying notion that merges concepts of the type, abundance, and distribution of variability at the gene level.
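
As a concrete illustration of summarizing many correlated traits into a few synthetic variables, the following sketch runs a principal component analysis on synthetic data. The number of entities, traits, and the random loadings are assumptions made for the example; this is not the study's dataset or code.

```python
import numpy as np

# Principal component analysis: many correlated traits per entity are
# summarized into a few synthetic variables (the principal components).
rng = np.random.default_rng(0)
n_entities, n_traits = 40, 8
latent = rng.normal(size=(n_entities, 2))             # two underlying factors
loadings = rng.normal(size=(2, n_traits))
traits = latent @ loadings + 0.2 * rng.normal(size=(n_entities, n_traits))

centered = traits - traits.mean(axis=0)
# Singular value decomposition of the centered data gives the principal axes.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:2].T                           # entities in 2 synthetic variables
explained = (S ** 2) / (S ** 2).sum()

print("first entity in the reduced space:", scores[0])
print("variance explained by first two components: %.1f%%"
      % (100 * explained[:2].sum()))
```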

Relevance: 30.00%

Abstract:

Fish communities are a key element in fluvial ecosystems. Their position at the top of the food chain and their sensitivity to a whole range of impacts make them a clear target for ecosystem conservation and a sound indicator of biological integrity. The EU Water Framework Directive includes fish community composition, abundance, and structure as relevant elements for the evaluation of biological condition. Several approaches have been proposed for evaluating the condition of fish communities, from the bio-indicator concept to Index of Biotic Integrity (IBI) proposals. However, the complexity of fish communities and their ecological responses makes this evaluation difficult, and both oversimplified and extreme analytical procedures must be avoided. In this work we present a new proposal for defining reference conditions in fish communities and discuss it from an ecological viewpoint. The method is a synthetic approach, the Synthetic Open Methodological Framework (SOMF), which has been applied to the rivers of Navarra. As a result, we recommend integrating all the available information from spatial, modelling, historical, and expert sources, which provides the best approach to fish reference conditions, retains the highest level of information, and meets the legal requirements of the WFD.

Relevance: 30.00%

Abstract:

Foliage Penetration (FOPEN) radar systems were introduced in 1960 and have been constantly improved by several organizations since that time. The use of Synthetic Aperture Radar (SAR) approaches for this application has important advantages, due to the need for high resolution in two dimensions. The design of this type of system, however, includes some complications that are not present in standard SAR systems. FOPEN SAR systems need to operate at a low central frequency (VHF or UHF bands) in order to penetrate the foliage, and a high bandwidth is also required to obtain high resolution. Because of the low central frequency, large integration angles are required during SAR image formation, and therefore the Range Migration Algorithm (RMA) is used. This thesis identifies the three main complications that arise from these requirements. First, a high fractional bandwidth means that narrowband propagation models are no longer valid. Second, the VHF and UHF bands are used by many communications systems, so the transmitted signal spectrum needs to be notched to avoid interfering with them. Third, those communications systems cause Radio Frequency Interference (RFI) in the received signal. The thesis carries out a thorough analysis of the three problems, their degrading effects, and possible ways to compensate for them. The UWB model is applied to the SAR signal, and the degradation it induces is derived. The result is tested through simulation of both a single-pulse stretch processor and the complete RMA image formation. Both methods show that the degradation is negligible, and therefore the UWB propagation effect does not need compensation. A technique is derived to design a notched transmitted signal, and its effect on SAR image formation is evaluated analytically. It is shown that the stretch processor introduces a processing gain that reduces the degrading effects of the notches. The degradation remaining after the processing gain is assessed through simulation, and an experimental graph of degradation as a function of the percentage of nulled frequencies is obtained. The RFI is characterized and its effect on the SAR processor is derived. Once again, a processing gain is found to be introduced by the receiver. As the RFI power can be much higher than that of the desired signal, an algorithm is proposed to remove the RFI from the received signal before RMA processing. This algorithm is a modification of the Chirp Least Squares Algorithm (CLSA) explained in [4], adapting it to deramped signals. The algorithm is derived analytically and its performance is then evaluated through simulation, showing that it is effective in removing the RFI and reducing the degradation caused by both the RFI and the notching. Finally, conclusions are drawn as to the importance of each of these problems in SAR system design.
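
To illustrate the notching issue, the sketch below nulls part of an LFM chirp's spectrum and compares the pulse-compressed (matched-filter) response with and without the notch, which is the kind of degradation the thesis quantifies. All parameters (sampling rate, bandwidth, notch band) are arbitrary assumptions for the example; this is not the thesis's processor or data.

```python
import numpy as np

# Effect of spectral notching on a pulse-compressed LFM chirp (illustrative).
fs = 200e6                       # sampling rate (assumed)
T = 10e-6                        # pulse length (assumed)
B = 50e6                         # chirp bandwidth (assumed)
t = np.arange(0, T, 1 / fs)
chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)   # baseband LFM chirp

def compress(tx, ref):
    """Pulse compression by matched filtering in the frequency domain."""
    n = 2 * len(tx)
    return np.fft.ifft(np.fft.fft(tx, n) * np.conj(np.fft.fft(ref, n)))

# Notch a hypothetical interference band (~10% of the occupied bandwidth).
spec = np.fft.fft(chirp)
freqs = np.fft.fftfreq(len(chirp), 1 / fs)
notch = (np.abs(freqs) > 5e6) & (np.abs(freqs) < 7.5e6)
chirp_notched = np.fft.ifft(np.where(notch, 0.0, spec))

def peak_sidelobe_db(y):
    """Highest sidelobe level relative to the peak, in dB."""
    mag = np.abs(y) / np.abs(y).max()
    peak, guard = np.argmax(mag), 20              # exclude the mainlobe region
    side = np.delete(mag, np.arange(peak - guard, peak + guard + 1) % len(mag))
    return 20 * np.log10(side.max())

print("peak sidelobe, full spectrum : %.1f dB" % peak_sidelobe_db(compress(chirp, chirp)))
print("peak sidelobe, notched       : %.1f dB" % peak_sidelobe_db(compress(chirp_notched, chirp)))
```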

Relevance: 30.00%

Abstract:

This thesis presents an in-depth analysis of how two types of direct methods, Lucas-Kanade and Inverse Compositional, should be applied to RGB-D images, and analyzes their capability and accuracy in a series of synthetic experiments. These experiments simulate RGB images, depth (D) images, and RGB-D images in order to evaluate how the methods behave in each combination. Moreover, the methods are analyzed without any additional technique that modifies the original algorithm or aids it in its search for a global optimum, unlike most of the articles found in the literature. The goal is to understand when and why these methods converge or diverge so that, in the future, the knowledge extracted from the results presented here can effectively help a potential implementer. After reading this thesis, the implementer should be able to decide which algorithm fits a particular task best and should also know which problems have to be addressed in each algorithm so that an appropriate correction can be implemented using additional techniques. These additional techniques are outside the scope of this thesis; however, they are reviewed from the literature.
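
For readers unfamiliar with direct methods, the following minimal sketch estimates a 1-D translation by Gauss-Newton minimization of the photometric error, which is the core idea behind Lucas-Kanade alignment. The signal, template window, and initial guess are invented for the example; the thesis's experiments on RGB-D images are considerably more involved.

```python
import numpy as np

# 1-D forward-additive Lucas-Kanade: find the translation p that best aligns
# a template T(x) with an image I(x + p) by Gauss-Newton on the residuals.
def lucas_kanade_1d(image, template, p=0.0, iters=50):
    x = np.arange(len(template), dtype=float)
    for _ in range(iters):
        warped = np.interp(x + p, np.arange(len(image)), image)  # I(x + p)
        error = template - warped                                # photometric residual
        grad = np.gradient(warped)                               # image gradient (Jacobian)
        # Gauss-Newton update: dp = (J^T J)^-1 J^T r
        dp = np.dot(grad, error) / (np.dot(grad, grad) + 1e-12)
        p += dp
        if abs(dp) < 1e-6:
            break
    return p

# Synthetic test: a Gaussian bump, with the template cut out at offset 170.
signal = np.exp(-((np.arange(400) - 200.0) ** 2) / (2 * 30.0 ** 2))
template = signal[170:230]
print(lucas_kanade_1d(signal, template, p=160.0))   # should converge near 170
```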

Relevance: 30.00%

Abstract:

Computed tomography (CT) is the reference image modality for the study of lung diseases and the pulmonary vasculature. Lung vessel segmentation has been widely explored by the biomedical image processing community; however, differentiation of arterial from venous irrigations is still an open problem. Indeed, the automatic separation of arterial and venous trees has been considered in recent years one of the main future challenges in the field. Artery-vein (AV) segmentation would be useful in different medical scenarios and in multiple pulmonary diseases or pathological states, allowing the study of arterial and venous irrigations separately. Features such as the density, geometry, topology, and size of vessels could be analyzed in diseases that involve vasculature remodeling, even making possible the discovery of new specific biomarkers that remain hidden nowadays. Differentiation between arteries and veins could also improve methods for processing pulmonary structures. Nevertheless, despite its usefulness, AV segmentation has so far been unfeasible in clinical routine: the huge complexity of pulmonary vascular trees makes manual segmentation of both structures unfeasible in realistic time, encouraging the design of automatic or semi-automatic tools to perform the task. However, the lack of properly labeled cases seriously limits the development of AV segmentation systems, for which reference standards are necessary in both the training and validation stages. For that reason, the design of synthetic CT images of the lung could overcome these difficulties by providing a database of pseudo-realistic cases in a constrained and controlled scenario where each part of the image (including arteries and veins) is differentiated unequivocally. In this Ph.D. thesis we address both interrelated problems. First, the design of a complete framework to automatically generate computational CT phantoms of the human lung is described. Starting from biological and image-based knowledge about the topology of, and relationships between, pulmonary structures, the system is able to generate synthetic pulmonary arteries, veins, and airways using iterative growth methods, which are then merged into a simulated lung with realistic features. These synthetic cases, together with labeled real CT datasets, have been used as a reference for the development of a fully automatic pulmonary AV segmentation/separation method. The approach comprises a vessel extraction stage using scale-space particles and a subsequent artery-vein classification of those particles using Graph-Cuts (GC), based on arterial/venous similarity scores obtained with a machine learning (ML) pre-classification step and on particle connectivity information. 
Validation of the pulmonary phantoms, based on visual examination and quantitative measurements of intensity distributions, dispersion of structures, and relationships between the pulmonary air and blood flow systems, shows good correspondence between real and synthetic lungs. The evaluation of the AV segmentation algorithm, based on different strategies to assess the accuracy of vessel particle classification, reveals accurate differentiation between arteries and veins in both real and synthetic cases, opening a wide range of possibilities for the clinical study of cardiopulmonary diseases and for the development of methodological approaches for the analysis of pulmonary images.
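
As an illustration of the classification step, the sketch below labels a handful of vessel "particles" as artery or vein with a minimum s-t cut, using made-up similarity scores as unary costs and connectivity links as pairwise terms. Everything here (scores, links, weights, and the use of networkx) is an assumption for the example, not the thesis's implementation.

```python
import networkx as nx

# Toy graph-cut labeling: unary costs from a hypothetical artery-likeness
# score in [0, 1]; pairwise edges encourage connected particles to agree.
scores = {0: 0.9, 1: 0.8, 2: 0.55, 3: 0.2, 4: 0.1}   # per-particle artery score
links = [(0, 1), (1, 2), (2, 3), (3, 4)]              # particle connectivity
smoothness = 0.6                                      # pairwise weight (assumed)

G = nx.DiGraph()
for p, s in scores.items():
    G.add_edge("artery", p, capacity=s)               # cut if p is labeled vein
    G.add_edge(p, "vein", capacity=1.0 - s)           # cut if p is labeled artery
for a, b in links:
    G.add_edge(a, b, capacity=smoothness)             # cut if labels differ
    G.add_edge(b, a, capacity=smoothness)

cut_value, (artery_side, vein_side) = nx.minimum_cut(G, "artery", "vein")
labels = {p: ("artery" if p in artery_side else "vein") for p in scores}
print(labels)   # expected: particles 0-2 artery, 3-4 vein for these scores
```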

Relevance: 30.00%

Abstract:

RDB to RDF Mapping Language (R2RML) is a W3C recommendation that allows specifying rules for transforming relational databases into RDF. This RDF data can be materialized and stored in a triple store, so that SPARQL queries can be evaluated by the triple store. However, there are several cases where materialization is not adequate or possible, for example, if the underlying relational database is updated frequently. In those cases, the RDF data is better kept virtual, and SPARQL queries over it therefore have to be translated into SQL queries against the underlying relational database system, with the translation process taking the specified R2RML mappings into account. The first part of this thesis focuses on query translation. We discuss the formalization of the translation from SPARQL to SQL queries that takes R2RML mappings into account. Furthermore, we propose several optimization techniques so that the translation procedure generates SQL queries that can be evaluated more efficiently over the underlying databases. We evaluate our approach using a synthetic benchmark and several real cases, with positive results. Direct Mapping (DM) is another W3C recommendation for the generation of RDF data from relational databases. While R2RML allows users to specify their own transformation rules, DM establishes fixed transformation rules. Although both recommendations were published at the same time, in September 2012, there has not been any study of the relationship between them. The second part of this thesis therefore focuses on the study of the relationship between R2RML and DM. We divide this study into two directions: from R2RML to DM, and from DM to R2RML. From R2RML to DM, we study a fragment of R2RML having the same expressive power as DM. From DM to R2RML, we represent DM transformation rules as R2RML mappings and also add the implicit semantics encoded in databases, such as subclass, 1-N, and M-N relationships. This thesis shows that, by formalizing and optimizing R2RML-based SPARQL-to-SQL query translation, it is possible to use R2RML engines in real cases, as the resulting SQL is efficient enough to be evaluated by the underlying relational databases. In addition, this thesis facilitates the understanding of the bidirectional relationship between the two W3C recommendations, something that had not been studied before.
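
The following toy sketch conveys the flavor of mapping-driven query translation: a single constant-predicate triple pattern is rewritten into SQL using a simplified, dictionary-based stand-in for an R2RML triples map. The table, columns, IRIs, and template are hypothetical, and the sketch ignores joins, filters, and all of the optimizations the thesis actually addresses.

```python
# Illustrative only (not an R2RML engine): translate one SPARQL triple
# pattern ?s <p> ?o into SQL using a simplified triples-map description.
mapping = {
    "table": "EMPLOYEE",                                   # hypothetical table
    "subject_template": "http://example.com/emp/{EMP_ID}",
    "predicates": {
        "http://example.com/ns#name": "EMP_NAME",
        "http://example.com/ns#dept": "DEPT_ID",
    },
}

def triple_to_sql(predicate_iri, mapping):
    """Return SQL producing (subject IRI, object value) rows for ?s <p> ?o."""
    col = mapping["predicates"][predicate_iri]
    # Rebuild the subject IRI from its template with SQL string concatenation.
    prefix, suffix = mapping["subject_template"].split("{EMP_ID}")
    subject_expr = f"'{prefix}' || EMP_ID || '{suffix}'"
    # NULL columns generate no triple, hence the IS NOT NULL filter.
    return (f"SELECT {subject_expr} AS s, {col} AS o "
            f"FROM {mapping['table']} WHERE {col} IS NOT NULL")

print(triple_to_sql("http://example.com/ns#name", mapping))
```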

Relevance: 30.00%

Abstract:

BACKGROUND & AIMS The liver performs a panoply of complex activities coordinating metabolic, immunologic, and detoxification processes. Despite the liver's robustness and unique self-regeneration capacity, viral infection, autoimmune disorders, fatty liver disease, alcohol abuse, and drug-induced hepatotoxicity contribute to the increasing prevalence of liver failure. Liver injuries impair the clearance of bile acids from the hepatic portal vein, leading to their spillover into the peripheral circulation, where they activate the G-protein-coupled bile acid receptor TGR5 to initiate a variety of hepatoprotective processes. METHODS By functionally linking activation of ectopically expressed TGR5 to an artificial promoter controlling transcription of hepatocyte growth factor (HGF), we created a closed-loop synthetic signalling network that coupled liver injury-associated serum bile acid levels to expression of HGF in a self-sufficient, reversible, and dose-dependent manner. RESULTS After implantation of the genetically engineered human cells inside auto-vascularizing, immunoprotective, and clinically validated alginate-poly-(L-lysine)-alginate beads into mice, the liver-protection device detected pathologic serum bile acid levels and produced therapeutic HGF levels that protected the animals from acute drug-induced liver failure. CONCLUSIONS Genetically engineered cells containing theranostic gene circuits that dynamically interface with host metabolism may provide novel opportunities for preventive, acute, and chronic healthcare. LAY SUMMARY Liver diseases leading to organ failure may go unnoticed, as they do not trigger any symptoms or significant discomfort. We have designed a synthetic gene circuit that senses the excessive bile acid levels associated with liver injuries and automatically produces a therapeutic protein in response. When integrated into mammalian cells and implanted into mice, the circuit detects the onset of liver injury and coordinates the production of a protein pharmaceutical that prevents liver damage.

Relevance: 30.00%

Abstract:

Two different slug test field methods were conducted in wells completed in a Puget Lowland aquifer and examined for systematic error resulting from the water-column displacement techniques. Slug tests using the standard slug rod and the pneumatic method were repeated on the same wells, hydraulic conductivity estimates were calculated according to the Bouwer & Rice and Hvorslev methods, and a non-parametric statistical test was then used for the analysis. Practical considerations of performing the tests in real-life settings are also included in the method comparison. The statistical analysis indicates that the slug rod method results in up to 90% larger hydraulic conductivity values than the pneumatic method, with at least 95% certainty that the error is method related. This confirms the existence of a slug-rod bias in a real-world scenario, which had previously been demonstrated by others in synthetic aquifers. In addition to yielding more accurate values, the pneumatic method requires less field labor and less decontamination and allows the magnitude of the initial displacement to be controlled, making it the superior slug test procedure.
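
For context, the Hvorslev analysis mentioned above reduces to a simple closed-form estimate of hydraulic conductivity from the slug-test recovery curve. The sketch below computes it for a hypothetical well; the geometry and the 37%-recovery time are invented values, not data from the study, and the Bouwer & Rice analysis (which involves additional empirical coefficients) is not shown.

```python
import math

# Hvorslev slug-test estimate: K = r_c^2 * ln(Le / R) / (2 * Le * t37),
# valid for Le / R > 8, where t37 is the time for the head to recover to
# 37% of the initial displacement. All input values below are hypothetical.
def hvorslev_k(r_casing, r_well, screen_length, t37):
    """Hydraulic conductivity in (length unit of inputs) per second."""
    if screen_length / r_well <= 8:
        raise ValueError("this Hvorslev form assumes Le / R > 8")
    return (r_casing ** 2 * math.log(screen_length / r_well)
            / (2.0 * screen_length * t37))

# Hypothetical well: 5 cm casing radius, 10 cm borehole radius, 3 m screen,
# head recovering to 37% of the initial displacement after 120 s.
K = hvorslev_k(r_casing=0.05, r_well=0.10, screen_length=3.0, t37=120.0)
print(f"K ~ {K:.2e} m/s")
```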