959 results for Query errors
Abstract:
SOUZA, Anderson A. S. ; SANTANA, André M. ; BRITTO, Ricardo S. ; GONÇALVES, Luiz Marcos G. ; MEDEIROS, Adelardo A. D. Representation of Odometry Errors on Occupancy Grids. In: INTERNATIONAL CONFERENCE ON INFORMATICS IN CONTROL, AUTOMATION AND ROBOTICS, 5., 2008, Funchal, Portugal. Proceedings... Funchal, Portugal: ICINCO, 2008.
Abstract:
To determine the prevalence of refractive errors in the public and private school system in the city of Natal, Northeastern Brazil. Methods: Refractometry was performed on both eyes of 1,024 randomly selected students enrolled in the 2001 school year, and the data were evaluated with SPSS Data Editor 10.0. Ametropia was divided into: 1- from 0.1 to 0.99 diopter (D); 2- 1.0 to 2.99 D; 3- 3.00 to 5.99 D; and 4- 6 D or greater. Astigmatism was grouped as: I- with-the-rule (axis from 0 to 30 and 150 to 180 degrees), II- against-the-rule (axis between 60 and 120 degrees) and III- oblique (axis between >30 and <60 and >120 and <150 degrees). The age groups were categorized as: 1- 5 to 10 years, 2- 11 to 15 years, 3- 16 to 20 years, 4- over 21 years. Results: Among refractive errors, hyperopia was the most common at 71%, followed by astigmatism (34%) and myopia (13.3%). Of the students with myopia and hyperopia, 48.5% and 34.1% had astigmatism, respectively. With respect to diopters, 58.1% of myopic students were in group 1, and 39% were distributed between groups 2 and 3. Hyperopia was mostly found in group 1 (61.7%), as was astigmatism (70.6%). The association of the astigmatism axes of both eyes showed 92.5% with a with-the-rule axis in both eyes, while the percentage for those with an against-the-rule axis was 82.1% and even lower for the oblique axis (50%). Conclusion: The results differed from those of most international studies, mainly from East Asia, which point to myopia as the most common refractive error, and corroborate the national ones, in which hyperopia predominates.
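The diopter and axis groupings defined in this abstract are mechanical enough to transcribe into code. The sketch below is not from the study; it simply restates the stated intervals as classifier functions (boundary handling follows the abstract's wording, including the gap between 2.99 D and 3.00 D).

```python
def axis_group(axis_deg):
    """Classify an astigmatism axis (degrees) using the study's three groups."""
    if 0 <= axis_deg <= 30 or 150 <= axis_deg <= 180:
        return "with-the-rule"
    if 60 <= axis_deg <= 120:
        return "against-the-rule"
    if 30 < axis_deg < 60 or 120 < axis_deg < 150:
        return "oblique"
    raise ValueError("axis must lie between 0 and 180 degrees")

def diopter_group(d):
    """Map an ametropia magnitude (diopters) to the study's groups 1-4."""
    if 0.1 <= d <= 0.99:
        return 1
    if 1.0 <= d <= 2.99:
        return 2
    if 3.00 <= d <= 5.99:
        return 3
    if d >= 6.0:
        return 4
    raise ValueError("below the 0.1 D threshold")

print(axis_group(15), axis_group(90), axis_group(45))
# with-the-rule against-the-rule oblique
```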
Abstract:
This paper reports the use of proof planning to diagnose errors in program code. In particular it looks at the errors that arise in the base cases of recursive programs produced by undergraduates. It describes two classes of error that arise in this situation. The use of test cases would catch these errors but would fail to distinguish between them. The system adapts proof critics, commonly used to patch faulty proofs, to diagnose such errors and distinguish between the two classes. It has been implemented in Lambda-clam, a proof planning system, and applied successfully to a small set of examples.
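As a hypothetical illustration of the kind of base-case fault this abstract describes (the functions below are not from the paper), consider two faulty student implementations of factorial. An ordinary test case catches the first fault but passes for the second, so testing alone cannot distinguish the two error classes:

```python
def fact_wrong_value(n: int) -> int:
    """Base case present but returns the wrong value (0 instead of 1)."""
    if n == 0:
        return 0
    return n * fact_wrong_value(n - 1)

def fact_missing_base(n: int) -> int:
    """Base case anchored too high: fact_missing_base(0) never terminates."""
    if n == 1:
        return 1
    return n * fact_missing_base(n - 1)

# A test such as fact(3) == 6 exposes the first fault (every result is 0)
# yet passes for the second, whose bug only surfaces on the input 0.
assert fact_wrong_value(3) == 0
assert fact_missing_base(3) == 6
try:
    fact_missing_base(0)
except RecursionError:
    print("fact_missing_base(0) does not terminate")
```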
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges and each edge is labeled with a semantic annotation. Hence, a huge single graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with predicates. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web and graph queries of other graph DBMS can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models consist of a practically important subset of the SPARQL query language augmented with some additional useful features. The first model called Substitution Importance Query (SIQ) identifies the top-k answers whose scores are calculated from matched vertices' properties in each answer in accordance with a user-specified notion of importance. 
The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great deal of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
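A minimal sketch of the first idea (substitution matching over an edge-labeled triple graph, scored by vertex properties): the graph, the pattern syntax (`?x` marks a variable), and the `importance` property below are all hypothetical toy data, and the naive enumeration stands in for the thesis's pruned, indexed algorithms.

```python
import heapq

# Toy edge-labeled graph: (subject, predicate, object) triples.
triples = [
    ("alice", "knows", "bob"),
    ("alice", "knows", "carol"),
    ("bob", "knows", "carol"),
    ("carol", "knows", "dave"),
]
# Hypothetical vertex property used for scoring (e.g. activity level).
importance = {"alice": 5, "bob": 2, "carol": 7, "dave": 1}

def match(pattern, triples):
    """Enumerate substitutions for a single-triple pattern; '?x' are variables."""
    for triple in triples:
        sub, ok = {}, True
        for q, v in zip(pattern, triple):
            if q.startswith("?"):
                if sub.get(q, v) != v:   # same variable must bind consistently
                    ok = False
                    break
                sub[q] = v
            elif q != v:                 # constant must match exactly
                ok = False
                break
        if ok:
            yield sub

def top_k(pattern, k):
    """Score each substitution by summed vertex importance; keep the k best."""
    scored = ((sum(importance.get(v, 0) for v in sub.values()), sub)
              for sub in match(pattern, triples))
    return heapq.nlargest(k, scored, key=lambda t: t[0])

print(top_k(("?x", "knows", "?y"), 2))
# [(12, {'?x': 'alice', '?y': 'carol'}), (9, {'?x': 'bob', '?y': 'carol'})]
```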
Abstract:
Homomorphic encryption is a type of encryption that enables computing over encrypted data. This has a wide range of real-world ramifications, such as being able to blindly compute a search result sent to a remote server without revealing its content. In the first part of this thesis, we discuss how database search queries can be made secure using a homomorphic encryption scheme based on the ideas of Gahi et al. Gahi's method is based on the integer-based fully homomorphic encryption scheme proposed by van Dijk et al. We propose a new database search scheme called the Homomorphic Query Processing Scheme, which can be used with the ring-based fully homomorphic encryption scheme proposed by Brakerski. In the second part of this thesis, we discuss the cybersecurity of the smart electric grid. Specifically, we use the Homomorphic Query Processing scheme to construct a keyword search technique for the smart grid. Our work is based on the Public Key Encryption with Keyword Search (PEKS) method introduced by Boneh et al. and a Multi-Key Homomorphic Encryption scheme proposed by López-Alt et al. A summary of the results of this thesis (specifically the Homomorphic Query Processing Scheme) was published at the 14th Canadian Workshop on Information Theory (CWIT).
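To make "computing over encrypted data" concrete, here is textbook Paillier encryption with deliberately tiny, insecure parameters. It illustrates only the additive homomorphic property (multiplying ciphertexts adds plaintexts); it is not the ring-based or integer-based FHE schemes the thesis builds on, which support richer computation.

```python
import math
import random

p, q = 17, 19                      # toy primes: insecure, for illustration only
n, n2 = p * q, (p * q) ** 2
g = n + 1                          # standard simplified Paillier generator
lam = math.lcm(p - 1, q - 1)

def L(u):
    return (u - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)   # precomputed decryption constant

def enc(m):
    """Encrypt m < n with fresh randomness r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Homomorphic property: the server multiplies ciphertexts without ever
# seeing 20 or 22; the holder of the key decrypts their sum.
c = (enc(20) * enc(22)) % n2
print(dec(c))   # 42
```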
Abstract:
International audience
Abstract:
In recent years, technological improvements have enabled internet users to analyze and retrieve data regarding Internet searches, and this data has been used in several fields of study. Some authors have used search engine query data to forecast economic variables, to detect influenza outbreaks, or to demonstrate that it is possible to capture patterns in stock market indexes. In this paper, an investment strategy is presented using Google Trends' weekly query data for the constituents of major global stock market indexes. The results suggest that it is indeed possible to achieve higher Info Sharpe ratios, especially for the major European stock market indexes, than those provided by a buy-and-hold strategy for the period considered.
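The comparison metric in this abstract can be sketched in a few lines. The return series below are invented toy data, and the annualized Sharpe ratio shown is the generic formula, not necessarily the exact "Info Sharpe" variant the paper computes:

```python
import statistics

def sharpe_ratio(returns, rf=0.0, periods_per_year=52):
    """Annualized Sharpe ratio from a series of periodic (here weekly) returns."""
    excess = [r - rf for r in returns]
    return (statistics.mean(excess) / statistics.stdev(excess)) * (periods_per_year ** 0.5)

# Hypothetical weekly returns: a query-driven strategy vs. buy-and-hold.
strategy = [0.004, -0.002, 0.006, 0.001, 0.003, -0.001]
buy_hold = [0.002, -0.004, 0.003, 0.000, 0.001, -0.003]
print(sharpe_ratio(strategy) > sharpe_ratio(buy_hold))  # True for this toy data
```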
Abstract:
The thermal and air environment conditions inside animal housing facilities change during the day due to the influence of the external environment. For statistical and geostatistical analyses to be representative, a large number of spatially distributed points across the facility area must be monitored. This work proposes that the variation over time of the environmental variables of interest for animal production, monitored inside animal housing, can be modeled accurately from records that are discrete in time. The objective of this work was to develop a numerical method to correct the temporal variations of these environmental variables, transforming the data so that the observations become independent of the time spent during measurement. The proposed method approximated the values recorded with time delays to those expected at the exact moment of interest, as if the data had been measured simultaneously at that moment at all spatially distributed points. The numerical correction model for environmental variables was validated for the environmental parameter air temperature; the values corrected by the method did not differ, by Tukey's test at 5% probability, from the actual values recorded by dataloggers.
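One simple way to realize such a correction, sketched below under an assumption the paper may not make (that the temporal trend at every spatial point matches a reference sensor logging continuously), is to shift each delayed reading by the change the reference sensor saw between the reading's timestamp and the common target time. All names and data here are illustrative:

```python
def correct_to_reference_time(readings, reference, t_target):
    """Shift each delayed reading to the common target time t_target.

    readings:  list of (t, value) pairs taken sequentially at spatial points
    reference: dict mapping time -> value from a sensor logging continuously
    """
    return [v + (reference[t_target] - reference[t]) for t, v in readings]

# Toy example: air temperature rises while the surveyor walks the grid,
# so later readings are biased upward relative to the target time t = 0.
reference = {0: 20.0, 5: 20.5, 10: 21.0}          # reference log (min -> deg C)
readings = [(0, 22.0), (5, 23.1), (10, 24.0)]     # delayed spatial readings
corrected = correct_to_reference_time(readings, reference, 0)
print([round(v, 1) for v in corrected])           # [22.0, 22.6, 23.0]
```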
Abstract:
The preparation and administration of medications is one of the most common and relevant functions of nurses, demanding great responsibility. Incorrect administration of medication currently constitutes a serious problem in health services and is considered one of the main adverse events suffered by hospitalized patients. Objectives: To identify the major errors in the preparation and administration of medication by nurses in hospitals, and to determine which factors lead to errors in the preparation and administration of medication. Methods: A systematic review of the literature. The inclusion criteria were: original scientific papers, complete, published from 2011 to May 2016 in the SciELO and LILACS databases, performed in a hospital environment, addressing errors in the preparation and administration of medication by nurses, and written in Portuguese. After application of the inclusion criteria, a sample of 7 articles was obtained. Results: The main errors identified in the preparation and administration of medication were wrong dose (71.4%), wrong time (71.4%), inadequate dilution (57.2%), incorrect patient selection (42.8%) and inadequate route (42.8%). The factors most commonly reported by the nursing staff as causes of error were lack of human resources (57.2%), inappropriate locations for the preparation of medication (57.2%), the presence of noise and low lighting in the preparation location (57.2%), untrained professionals (42.8%), fatigue and stress (42.8%) and inattention (42.8%). Conclusions: The literature shows a high error rate in the preparation and administration of medication for various reasons, making it important that preventive measures are implemented.
Abstract:
We provide a comprehensive study of out-of-sample forecasts for the EUR/USD exchange rate based on multivariate macroeconomic models and forecast combinations. We use profit maximization measures based on directional accuracy and trading strategies, in addition to standard loss minimization measures. When comparing predictive accuracy and profit measures, tests free of data-snooping bias are used. The results indicate that forecast combinations, in particular those based on principal components of forecasts, help to improve over benchmark trading strategies, although the excess return per unit of deviation is limited.
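A toy illustration of the two ingredients this abstract names, forecast combination and directional accuracy: the sketch below uses a simple equal-weight average (the paper's principal-components combinations are more elaborate), and all model forecasts and actual returns are invented.

```python
import statistics

def combine(forecasts):
    """Equal-weight forecast combination: average the models period by period."""
    return [statistics.mean(step) for step in zip(*forecasts)]

def directional_accuracy(forecast, actual):
    """Share of periods where the forecast gets the sign of the move right."""
    hits = sum((f > 0) == (a > 0) for f, a in zip(forecast, actual))
    return hits / len(actual)

# Hypothetical EUR/USD return forecasts from three macro models.
model_a = [0.2, -0.1, 0.3, -0.2]
model_b = [0.1, 0.2, 0.1, -0.3]
model_c = [-0.1, -0.2, 0.2, -0.1]
actual  = [0.15, -0.05, 0.25, -0.2]

combo = combine([model_a, model_b, model_c])
print(directional_accuracy(combo, actual))  # 1.0
```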
Abstract:
[No abstract]