951 results for Query errors


Relevance:

20.00%

Publisher:

Abstract:

R2RML is used to specify transformations of data available in relational databases into materialised or virtual RDF datasets. SPARQL queries evaluated against virtual datasets are translated into SQL queries according to the R2RML mappings, so that they can be evaluated over the underlying relational database engines. In this paper we describe an extension of a well-known algorithm for SPARQL-to-SQL translation, originally formalised for RDBMS-backed triple stores, that takes R2RML mappings into account. We present the results of our implementation using queries from a synthetic benchmark and from three real use cases, and show that SPARQL queries can in general be evaluated as fast as the SQL queries that SQL experts would have written had no R2RML mappings been used.
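
As an illustration of the kind of translation the paper formalises, the sketch below maps a single SPARQL triple pattern to SQL using a toy, R2RML-style mapping. This is not the paper's algorithm, and the table, column and predicate names are invented for the example.

```python
# Minimal sketch: translating one SPARQL triple pattern into SQL using a
# toy, R2RML-style mapping. All table, column and predicate names are
# hypothetical; real R2RML mappings are expressed in RDF, not Python dicts.

MAPPINGS = {
    "ex:name":  {"table": "person", "subject_col": "id", "object_col": "name"},
    "ex:email": {"table": "person", "subject_col": "id", "object_col": "email"},
}

def translate_triple_pattern(subject_var: str, predicate: str, object_var: str) -> str:
    """Translate the pattern '?s <predicate> ?o' into a SQL SELECT over the mapped table."""
    m = MAPPINGS[predicate]
    return (
        f"SELECT {m['subject_col']} AS {subject_var}, "
        f"{m['object_col']} AS {object_var} "
        f"FROM {m['table']}"
    )

# SPARQL: SELECT ?p ?n WHERE { ?p ex:name ?n }
print(translate_triple_pattern("p", "ex:name", "n"))
# -> SELECT id AS p, name AS n FROM person
```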

Relevance:

20.00%

Publisher:

Abstract:

Query rewriting is one of the fundamental steps in ontology-based data access (OBDA) approaches. It takes as input an ontology and a query written against that ontology, and produces as output a set of queries that should be evaluated to account for the inferences that must be considered for that query and ontology. Different query rewriting systems support different ontology languages of varying expressiveness, and the rewritten queries obtained as output also vary in expressiveness. This heterogeneity has traditionally made it difficult to compare different approaches, and the area in general lacks commonly agreed benchmarks that could be used not only for such comparisons but also for improving OBDA support. In this paper we compile the data, dimensions and measurements that have been used to evaluate some of the most recent systems, analyse and characterise these assets, and provide a unified set of them that could be used as a starting point towards a more systematic benchmarking process for such systems. Finally, we apply this initial benchmark to some of the most relevant OBDA approaches in the state of the art.
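
For readers unfamiliar with the step being benchmarked, the following minimal sketch (a generic illustration, not any of the surveyed systems) shows the core idea of query rewriting: a query over one class is expanded into queries over all classes the ontology declares as its subclasses. The ontology and class names are hypothetical.

```python
# Minimal sketch of ontology-based query rewriting: a query over one class
# is expanded into the union of queries over its (direct and indirect)
# subclasses. The toy TBox below is a hypothetical example.

SUBCLASS_OF = {
    "Student": "Person",      # "every Student is a Person"
    "PhDStudent": "Student",  # "every PhDStudent is a Student"
}

def rewrite(query_class: str) -> set[str]:
    """Return all classes whose instances answer a query for query_class."""
    rewritten = {query_class}
    changed = True
    while changed:
        changed = False
        for child, parent in SUBCLASS_OF.items():
            if parent in rewritten and child not in rewritten:
                rewritten.add(child)
                changed = True
    return rewritten

# A query for Person(x) must also be evaluated over Student and PhDStudent.
print(rewrite("Person"))  # {'Person', 'Student', 'PhDStudent'}
```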

Relevance:

20.00%

Publisher:

Abstract:

Ontology-based data access (OBDA) systems use ontologies to provide views over relational databases. Most of these systems work with ontologies implemented in description logic families of reduced expressiveness, which allows efficient query rewriting techniques to be applied for query answering. In this paper we describe a set of optimisations that are applicable with one of the most expressive families used in this context (ELHIO¬). Our resulting system exhibits behaviour comparable to that of systems that handle less expressive logics.

Relevance:

20.00%

Publisher:

Abstract:

Ontology-Based Data Access (OBDA) allows accessing different kinds of data sources (traditionally databases) using a more abstract model provided by an ontology. Query rewriting uses such an ontology to rewrite a query into a rewritten query that can be evaluated on the data source. The rewritten queries retrieve the answers that are entailed by the combination of the data explicitly stored in the data source, the original query and the ontology. Since it operates only on the queries, query rewriting enables OBDA over any data source that can be queried, regardless of whether it can be modified. However, producing and evaluating the rewritten queries are both costly processes that generally become more complex as the expressiveness and size of the ontology and the queries increase. In this thesis we explore several optimisations that can be performed both in the rewriting process and in the rewritten queries to improve the applicability of OBDA in realistic contexts. Our main technical contribution is a query rewriting system that implements the optimisations presented in this thesis. These optimisations are the core contributions of the thesis and can be grouped into three different groups: optimisations that can be applied when considering which predicates in the ontology are actually mapped to the data sources; engineering optimisations that can be applied by handling the query rewriting process in a way that reduces the computational load of query generation; and optimisations that can be applied when considering additional metainformation about the characteristics of the ABox. In this thesis we provide formal proofs of the correctness and completeness of the proposed optimisations, together with an empirical evaluation of their impact. As an additional contribution, as part of this empirical work, we propose a benchmark for the evaluation of query rewriting systems, and we provide some guidelines for the creation and expansion of this kind of benchmark.
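
As a hedged illustration of the first optimisation group (the names and data are invented, and this is not the thesis system itself), the sketch below prunes rewritten queries that mention ontology predicates without any mapping to the data source, since such queries can never retrieve answers.

```python
# Sketch of one optimisation family: discard rewritten queries that use
# predicates with no mapping to the data source. All names are hypothetical.

MAPPED_PREDICATES = {"ex:name", "ex:worksFor"}  # predicates with mappings

# Each rewritten query is represented here simply as the set of predicates it uses.
rewritten_queries = [
    {"ex:name"},                  # answerable: all predicates mapped
    {"ex:name", "ex:worksFor"},   # answerable
    {"ex:name", "ex:hasHobby"},   # ex:hasHobby is unmapped -> prune
]

pruned = [q for q in rewritten_queries if q <= MAPPED_PREDICATES]
print(f"{len(rewritten_queries)} rewritings -> {len(pruned)} after pruning")  # 3 -> 2
```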

Relevance:

20.00%

Publisher:

Abstract:

Acknowledgements The iHARP database was funded by unrestricted grants from Mundipharma International Ltd and Research in Real-Life Ltd; these analyses were funded by an unrestricted grant from Teva Pharmaceuticals. Mundipharma and Teva played no role in study conduct or analysis and did not modify or approve the manuscript. The authors wish to direct a special appreciation to all the participants of the iHARP group who contributed data to this study and to Mundipharma, sponsors of the iHARP group. In addition, we thank Julie von Ziegenweidt for assistance with data extraction and Anna Gilchrist and Valerie L. Ashton, PhD, for editorial assistance. Elizabeth V. Hillyer, DVM, provided editorial and writing support, funded by Research in Real-Life, Ltd.

Relevance:

20.00%

Publisher:

Abstract:

Nuclear-localized mtDNA pseudogenes might explain a recent report describing a heteroplasmic mtDNA molecule containing five linked missense mutations dispersed over the contiguous mtDNA CO1 and CO2 genes in Alzheimer’s disease (AD) patients. To test this hypothesis, we have used the PCR primers utilized in the original report to amplify CO1 and CO2 sequences from two independent ρ° (mtDNA-less) cell lines. CO1 and CO2 sequences amplified from both of the ρ° cells, demonstrating that these sequences are also present in the human nuclear DNA. The nuclear pseudogene CO1 and CO2 sequences were then tested for each of the five “AD” missense mutations by restriction endonuclease site variant assays. All five mutations were found in the nuclear CO1 and CO2 PCR products from ρ° cells, but none were found in the PCR products obtained from cells with normal mtDNA. Moreover, when the overlapping nuclear CO1 and CO2 PCR products were cloned and sequenced, all five missense mutations were found, as well as a linked synonymous mutation. Unlike the findings in the original report, an additional 32 base substitutions were found, including two in adjacent tRNAs and a two base pair deletion in the CO2 gene. Phylogenetic analysis of the nuclear CO1 and CO2 sequences revealed that they diverged from modern human mtDNAs early in hominid evolution about 770,000 years before present. These data would be consistent with the interpretation that the missense mutations proposed to cause AD may be the product of ancient mtDNA variants preserved as nuclear pseudogenes.

Relevance:

20.00%

Publisher:

Abstract:

Previously conducted sequence analysis of Arabidopsis thaliana (ecotype Columbia-0) reported an insertion of 270-kb mtDNA into the pericentric region on the short arm of chromosome 2. DNA fiber-based fluorescence in situ hybridization analyses reveal that the mtDNA insert is 618 ± 42 kb, ≈2.3 times greater than that determined by contig assembly and sequencing analysis. Portions of the mitochondrial genome previously believed to be absent were identified within the insert. Sections of the mtDNA are repeated throughout the insert. The cytological data illustrate that DNA contig assembly by using bacterial artificial chromosomes tends to produce a minimal clone path by skipping over duplicated regions, thereby resulting in sequencing errors. We demonstrate that fiber-fluorescence in situ hybridization is a powerful technique to analyze large repetitive regions in the higher eukaryotic genomes and is a valuable complement to ongoing large genome sequencing projects.

Relevance:

20.00%

Publisher:

Abstract:

Background: Refractive error is defined as the inability of the eye to bring parallel rays of light into focus on the retina, resulting in nearsightedness (myopia), farsightedness (hyperopia) or astigmatism. Uncorrected refractive error in children is associated with increased morbidity and reduced educational opportunities. Vision screening (VS) is a method for identifying children with visual impairment or eye conditions likely to lead to visual impairment. Objective: To analyze the utility of vision screening conducted by teachers and to contribute to a better estimation of the prevalence of childhood refractive errors in Apurimac, Peru. Design: A pilot vision screening program in preschool (Group I) and elementary school children (Group II) was conducted with the participation of 26 trained teachers. Children whose visual acuity was <6/9 [20/30] (Group I) and ≤6/9 (Group II) in one or both eyes, measured with the Snellen Tumbling E chart at 6 m, were referred for a comprehensive eye exam. Specificity and positive predictive value to detect refractive error were calculated against clinical examination. Program assessment with participants was conducted to evaluate outcomes and procedures. Results: A total sample of 364 children aged 3–11 were screened; 45 children were examined at Centro Oftalmológico Monseñor Enrique Pelach (COMEP) Eye Hospital. Prevalence of refractive error was 6.2% (Group I) and 6.9% (Group II); specificity of teacher vision screening was 95.8% and 93.0%, while positive predictive value was 59.1% and 47.8% for each group, respectively. Aspects highlighted to improve the program included extending training, increasing parental involvement, and helping referred children to attend the hospital. Conclusion: Prevalence of refractive error in children is significant in the region. Vision screening performed by trained teachers is a valid intervention for early detection of refractive error, including screening of preschool children. Program sustainability and improvements in education and quality of life resulting from childhood vision screening require further research.
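
For reference, specificity and positive predictive value in such a screening study are computed from the screening-versus-clinical-exam contingency table; the short sketch below shows the standard formulas with made-up counts (not the study's data).

```python
# Sketch: specificity and positive predictive value (PPV) from a 2x2
# screening-vs-clinical-exam table. The counts below are illustrative only.

def specificity(tn: int, fp: int) -> float:
    """True negatives among all children without refractive error."""
    return tn / (tn + fp)

def ppv(tp: int, fp: int) -> float:
    """True positives among all children referred by the screening."""
    return tp / (tp + fp)

tp, fp, tn, fn = 13, 9, 300, 2  # hypothetical counts
print(f"specificity = {specificity(tn, fp):.1%}, PPV = {ppv(tp, fp):.1%}")
```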

Relevance:

20.00%

Publisher:

Abstract:

Natural Language Interfaces to Query Databases (NLIDBs) have been an active research field since the 1960s. However, they have not been widely adopted. This article explores some of the biggest challenges and approaches for building NLIDBs and proposes techniques to reduce implementation and adoption costs. The article describes AskMe*, a new system that leverages some of these approaches and adds an innovative feature: query-authoring services, which lower the entry barrier for end users. The advantages of these approaches are demonstrated experimentally. Results confirm that, even though AskMe* is automatically reconfigurable against multiple domains, its accuracy is comparable to that of domain-specific NLIDBs.

Relevance:

20.00%

Publisher:

Abstract:

The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is a growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show an important increase in system reliability, of more than two orders of magnitude in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly affordable in low-cost systems.
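
As a conceptual illustration of the data-protection half of such a hybrid scheme (software redundancy that can correct data errors), the sketch below uses triple redundancy with majority voting. It is only an illustration of the principle in a high-level language, not the paper's embedded implementation, and it does not cover the control-flow checking performed through the on-chip debug interface.

```python
# Conceptual sketch of software data redundancy: compute a value three
# times and vote, so a single corrupted copy is corrected. Illustrative
# only; real implementations apply this at the embedded-code level.

def majority_vote(a: int, b: int, c: int) -> int:
    """Return the value held by at least two of the three copies."""
    if a == b or a == c:
        return a
    return b  # a disagrees with both; under a single-error assumption b == c

# Three redundant copies of the same computation; copy_2 is corrupted.
copy_0 = 2 + 3
copy_1 = 2 + 3
copy_2 = 99  # simulated single-event upset (SEU) in one copy

print(majority_vote(copy_0, copy_1, copy_2))  # 5
```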

Relevance:

20.00%

Publisher:

Abstract:

Purpose: To analyze and define the possible errors that may be introduced in keratoconus classification when the keratometric corneal power is used in such classification. Materials and methods: Retrospective study including a total of 44 keratoconus eyes. A comprehensive ophthalmologic examination was performed in all cases, which included a corneal analysis with the Pentacam system (Oculus). Classical keratometric corneal power (Pk), Gaussian corneal power (Pc Gauss), True Net Power (TNP) (Gaussian power neglecting the corneal thickness effect), and an adjusted keratometric corneal power (Pkadj) (keratometric power considering a variable keratometric index) were calculated. All cases included in the study were classified according to five different classification systems: Alió-Shabayek, Amsler-Krumeich, Rabinowitz-McDonnell, Collaborative Longitudinal Evaluation of Keratoconus (CLEK), and McMahon. Results: When Pk and Pkadj were compared, differences in keratoconus grading were found in 13.6% of eyes with the Alió-Shabayek or the Amsler-Krumeich systems. Likewise, grading differences were observed in 22.7% of eyes with the Rabinowitz-McDonnell and McMahon classification systems and in 31.8% of eyes with the CLEK classification system. All cases reclassified using Pkadj were assigned a less severe stage, indicating that the use of Pk may lead to classifying a normal cornea as keratoconus. In general, the results obtained using Pkadj, Pc Gauss or the TNP were equivalent. Differences between Pkadj and Pc Gauss were within ±0.7 D. Conclusion: The use of classical keratometric corneal power may lead to incorrect grading of the severity of keratoconus, with a trend towards more severe grading.
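
For context, the corneal power definitions compared in the study correspond, in their standard textbook form, to the expressions below (the constants are the usual optical conventions, not values taken from the paper; Pkadj replaces the fixed keratometric index with a variable one, as stated in the abstract).

```latex
% Standard corneal power formulas (textbook conventions): r_a and r_p are the
% anterior and posterior radii of curvature, d the corneal thickness.
\begin{align}
  P_k &= \frac{n_k - 1}{r_a}, \qquad n_k = 1.3375 \text{ (keratometric index)} \\
  P_a &= \frac{n_c - 1}{r_a}, \qquad P_p = \frac{n_a - n_c}{r_p},
        \qquad n_c = 1.376,\ n_a = 1.336 \\
  P_{\mathrm{TNP}} &= P_a + P_p \\
  P_{\mathrm{Gauss}} &= P_a + P_p - \frac{d}{n_c}\, P_a P_p
\end{align}
```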

Relevance:

20.00%

Publisher:

Abstract:

Contains summaries of cases heard by the Delaware Supreme Court and the Delaware Appeals Court in the counties of Sussex, Kent, and New Castle, covering a variety of legal topics. Supposedly based on Wilson's Red Book.

Relevance:

20.00%

Publisher:

Abstract:

Policy errors occur regularly in EU Member States. Learning from these errors can be beneficial. This paper explains how the European Union can facilitate this learning. At present, much attention is given to “best practices”. But learning from mistakes is also valuable. The paper develops the concept of “avoidable error” and examines evidence from infringement proceedings and special reports of the European Court of Auditors which indicate that Member States do indeed commit avoidable errors. The paper considers how Member States may take measures not to repeat avoidable or predictable errors and makes appropriate proposals.