949 results for "Intelligent computing techniques"
Abstract:
In this paper we propose an innovative approach to behaviour recognition in a multi-camera environment, based on translating video activity into semantics. First, we fuse tracks from individual cameras through clustering, employing soft computing techniques. Then, we introduce a higher-level module able to translate the fused tracks into semantic information. With our proposed approach, we address the challenge set in PETS 2014 on recognising behaviours of interest around a parked vehicle, namely the abnormal behaviour of someone walking around the vehicle.
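The abstract does not give implementation details of the track fusion. As a minimal illustrative sketch, per-camera tracks can be fused by grouping detections that fall near each other on a common ground plane; the function name, the hard distance threshold, and the toy coordinates below are all invented stand-ins for the soft-computing clustering the authors actually use:

```python
# Hypothetical sketch: fuse per-camera tracks by grouping detections within a
# distance threshold on a shared ground plane. The paper uses soft computing
# (e.g. fuzzy clustering); a hard threshold is used here only for brevity.

def fuse_tracks(tracks, radius=1.0):
    """tracks: list of (camera_id, x, y); returns fused (x, y) centroids."""
    clusters = []  # each cluster is a list of (x, y) points
    for _, x, y in tracks:
        placed = False
        for cluster in clusters:
            cx = sum(p[0] for p in cluster) / len(cluster)
            cy = sum(p[1] for p in cluster) / len(cluster)
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                cluster.append((x, y))
                placed = True
                break
        if not placed:
            clusters.append([(x, y)])
    return [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            for c in clusters]

# Two cameras observing the same person, plus one distant detection:
fused = fuse_tracks([("cam1", 2.0, 3.0), ("cam2", 2.2, 3.1), ("cam1", 10.0, 10.0)])
```

The two nearby observations collapse into one fused track, while the distant detection stays separate.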
Abstract:
The area of hospital automation has been the subject of much research, addressing relevant issues that can be automated, such as: management and control (electronic medical records, scheduling appointments, hospitalization, among others); communication (tracking patients, staff and materials); development of medical, hospital and laboratory equipment; monitoring (of patients, staff and materials); and aid to medical diagnosis (according to each speciality). This thesis presents an architecture for patient monitoring and alert systems. The architecture is based on intelligent systems techniques and is applied to hospital automation, specifically to patient monitoring in the Intensive Care Unit (ICU). Its main goal is to transform multiparameter monitor data into useful information by encoding specialist knowledge and the normal ranges of vital signs in fuzzy logic, which allows information about the clinical condition of ICU patients to be extracted and a pre-diagnosis to be given. Finally, alerts are dispatched to medical professionals in case any abnormality is found during monitoring. After validation of the architecture, the fuzzy inferences were applied to the training and validation of an Artificial Neural Network for classification of the cases previously validated with the fuzzy system.
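The pre-diagnosis rests on fuzzy membership of vital-sign readings in linguistic categories. A rough illustration of the idea for a single sign (heart rate) follows; the triangular membership ranges and category names are invented for the example and are not the thesis's clinical rule base:

```python
# Hypothetical sketch of fuzzy classification of one vital sign (heart rate,
# in bpm). The membership ranges are illustrative, not clinically validated.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def classify_heart_rate(bpm):
    """Return (label, degree) for the fuzzy set with the highest membership."""
    memberships = {
        "bradycardia": triangular(bpm, 20, 40, 60),
        "normal":      triangular(bpm, 50, 75, 100),
        "tachycardia": triangular(bpm, 90, 140, 200),
    }
    label = max(memberships, key=memberships.get)
    return label, memberships[label]
```

A full system would combine several such signs through fuzzy rules before dispatching an alert; here a reading of 72 bpm is classified as "normal" with degree 0.88.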
Abstract:
This work presents a methodological proposal for the acquisition of biometric data through telemetry, basing its development on action research and a case study. Nowadays, qualified physical-evaluation professionals have to use specific devices to obtain biometric signals and data. These devices are, most of the time, costly and difficult to use and handle. Therefore, the methodological proposal was elaborated in order to conceptually develop a biotelemetric device able to acquire the desired biometric signals: oximetry, biometrics, body temperature and pedometry, which are essential to the area of physical evaluation. Existing biometric sensors, the possible means of remote signal transmission, and the available computer systems were surveyed so that data acquisition would be possible. This methodological proposal for the remote acquisition of biometric signals is structured in four modules: acquisition of biometric data; conversion and transmission of biometric signals; reception and processing of biometric signals; and generation of interpretative graphs. The modules aim at producing interpretative graphs of human biometric signals. In order to validate the proposal, a functional prototype was developed, and it is presented in the course of this work.
Abstract:
This paper presents an individually designed prosthesis for surgical use and proposes a methodology for such design through mathematical extrapolation of data from digital images obtained via tomography of the individual patient's bones. An individually tailored prosthesis, designed to fit the particular patient's requirements as accurately as possible, should result in more successful reconstruction, enable better planning before surgery and, consequently, fewer complications during surgery. Fast and accurate design and manufacture of personalized prostheses for surgical use in bone replacement or reconstruction is potentially feasible through the application and integration of several different existing technologies, each at a different stage of maturity. Initial case-study experiments have been undertaken to validate the research concepts by making dimensional comparisons between a bone and a virtual model produced using the proposed methodology, and future research directions are discussed.
Abstract:
The classical way of managing product development processes for mass production seems to be changing: high pressure for cost reduction, higher quality standards and markets reaching for innovation lead to the need for new development-control tools. In this context, and learning from the automotive and aerospace industries, factories in other segments are starting to understand and apply manufacturing- and assembly-oriented design to ease the task of generating goods and thereby obtain at least part of the expected results. This paper is intended to demonstrate the applicability of the concepts of Concurrent Engineering and DFM/DFA (Design for Manufacturing and Assembly) in the development of products and parts for the White Goods industry in Brazil (major appliances such as refrigerators, cookers and washing machines), showing one case concerning the development and release of a component. Finally, it is briefly shown how a solution was reached that provides cost savings and a reduction in time to delivery using those techniques.
Abstract:
Graduate Program in Geography - IGCE
Abstract:
In this paper, we seek to expand the use of direct methods in real-time applications by proposing a vision-based strategy for pose estimation of aerial vehicles. The vast majority of approaches make use of features to estimate motion. Conversely, the strategy we propose is based on a multi-resolution (MR) implementation of an image registration technique (Inverse Compositional Image Alignment, ICIA) using direct methods. An on-board camera in a downwards-looking configuration, and the assumption of planar scenes, are the bases of the algorithm. The motion between frames (rotation and translation) is recovered by decomposing the frame-to-frame homography obtained by the ICIA algorithm applied to a patch that covers around 80% of the image. When visual estimation is required (e.g. during a GPS drop-out), this motion is integrated with the previously known estimate of the vehicle's state, obtained from the on-board sensors (GPS/IMU), and the subsequent estimates are based only on the vision-based motion estimation. The proposed strategy is tested with real flight data in representative stages of a flight: cruise, landing and take-off, two of which (take-off and landing) are considered critical. The performance of the pose estimation strategy is analyzed by comparing it with the GPS/IMU estimates. Results show correlation between the visual estimates obtained with the MR-ICIA and the GPS/IMU data, demonstrating that visual estimation can provide a good approximation of the vehicle's state when required (e.g. during GPS drop-outs). In terms of performance, the proposed strategy is able to maintain an estimate of the vehicle's state for more than one minute, at real-time frame rates, based only on visual information.
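The core of ICIA is to precompute the template gradient and Gauss-Newton Hessian once, then iteratively solve for an update that is composed inversely with the current warp. A deliberately reduced 1D, translation-only sketch in pure Python (the paper's version estimates a full 2D homography over roughly 80% of the image) can be written as:

```python
# Illustrative 1D inverse-compositional alignment: estimate the shift s such
# that image(x) ~= template(x - s). Translation-only toy version; the paper
# recovers rotation and translation by decomposing a full homography.
import math

def interp(signal, x):
    """Linear interpolation with edge clamping."""
    i = max(0, min(len(signal) - 2, int(math.floor(x))))
    t = x - i
    return (1 - t) * signal[i] + t * signal[i + 1]

def icia_shift(template, image, iters=100, tol=1e-8):
    n = len(template)
    grad = [(template[min(i + 1, n - 1)] - template[max(i - 1, 0)]) / 2.0
            for i in range(n)]          # template gradient, precomputed once
    hessian = sum(g * g for g in grad)  # 1x1 Gauss-Newton Hessian
    p = 0.0
    for _ in range(iters):
        error = [interp(image, i + p) - template[i] for i in range(n)]
        dp = sum(grad[i] * error[i] for i in range(n)) / hessian
        p -= dp  # inverse composition: for pure translation, subtract the update
        if abs(dp) < tol:
            break
    return p

# A smooth bump shifted by 3 samples:
template = [math.exp(-((i - 50) ** 2) / 128.0) for i in range(100)]
image = [math.exp(-((i - 53) ** 2) / 128.0) for i in range(100)]
shift = icia_shift(template, image)
```

The estimate converges close to the true shift of 3 samples; the inverse-compositional trick is that `grad` and `hessian` never change across iterations, which is what makes the method fast enough for the real-time use the paper targets.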
Abstract:
This report presents the construction of a software architecture designed to facilitate the development of a mission planner for an unmanned aerial vehicle (UAV), so that the vehicle can reach the goals set in the International Aerial Robotics Competition, IARC (seventh edition). The report first reviews intelligent robotics techniques applied to the construction of unmanned aerial vehicles, covering the different programming paradigms of intelligent robotics and a classification of such aerial robots according to their autonomy. The review ends with a presentation of the problem posed by the IARC competition. Next, the design produced to support the development of a mission planner for UAVs is described, with simulation of the robotic vehicles' behaviour and a 3D display with motion. Finally, the tests that were conducted to validate the construction of the software architecture are presented.
Abstract:
The main objective of this work is to contribute to answering the question: to what extent can wind tunnel testing help determine the flow characteristics that affect the dynamic response of wind turbines operating in highly complex terrain? This question is not new; indeed, the debate in the scientific community opened in the first third of the past century and is still intensely alive. The accepted approach to the problem consists in analysing a given case study in which full-scale tests, computational modelling and wind tunnel testing are all applied to the same topography. This is neither easy nor cheap. This is the reason why, since the Askervein experiment in 1988, the atmospheric flow modelling community had to wait until 2007, when the Bolund experiment was set up with an equivalent deployment of technical means (considering the evolution of sensor and computing technologies). The problem is so manifold that both experiments were restricted to neutral conditions without Coriolis effects in order to reduce the complexity. This is the framework in which this PhD has been carried out.
The flow topology over the Bolund island has been studied by replicating the Bolund experiment in the IDR A9 and ACLA16 wind tunnels. Two mock-ups of the island were manufactured at scales of 1:230 and 1:115. The inflow in the empty wind tunnel, simulating the incoming atmospheric boundary layer, was in the transitionally rough regime and was used as the reference case. The 1:230 model was tested in the A9 wind tunnel to measure surface pressure. The mapping of the pressure coefficient across the island gave a visualisation and estimation of a detachment region on top of the escarpment at the front of the island. Time-resolved instantaneous pressure measurements illustrated the unsteadiness of the detachment region. The 1:115 model was tested using three-component hot-wire anemometry (3C HW) and two-component Particle Image Velocimetry (PIV). Measurements were taken at met masts M3, M6, M7 and M8 and along Line 270° to replicate the results of the Bolund experiment. The flow was characterised by the speed-up ratio, the normalised increment of turbulent kinetic energy, the inclination angle and the turning angle. Results along Line 270° at heights of 2 m and 5 m compared very well with the full-scale results of the Bolund experiment. Vertical profiles at the met masts showed significant agreement with the full-scale results. The analysis of the Reynolds stresses and the spectral analysis at the met mast locations gave varied levels of agreement at some locations and clear mismatches at others.
The horizontal mapping of the flow field for a 270° wind direction allowed the characterisation of the intermittent recirculation bubble on top of the front escarpment, followed by a relaxation region and a shear layer on the lee side of the island. Further detailed velocity measurements were taken on cross-flow planes over the island to study its flow structures. A longitudinal vortex-like structure with high mean velocity gradients and high turbulent kinetic energy was characterised on the escarpment, evolving downstream. This flow structure is a challenge to numerical models and a threat to wind farm designers when siting wind turbines. Spatial distributions of Reynolds stresses were obtained from the 3C HW and PIV measurements. Such quantities are not common outputs of wind tunnel tests over topographies and are very useful to modellers using large eddy simulation (LES). An interpretation of the wind tunnel results in terms of their usefulness to wind farm designers is given. The evolution and variation of the flow parameters along measurement lines, planes and surfaces indicated how the flow field could affect wind turbine siting and site classification. Different flow properties were compared with the full-scale results to assess the level of agreement and its effect on the characterisation of site wind classes. The results presented suggest, under certain conditions, the robustness of wind tunnel testing for studying flow topology over complex terrain and its comparability with other modelling techniques, especially given the level of agreement between the different data sets presented. Additionally, some of the flow parameters obtained from the wind tunnel measurements would have been quite difficult to measure at full scale or to obtain by computational means, considering the state of the art.
This work was carried out as part of the activities supported by the European Commission within the FP7-PEOPLE-ITN-2008 WAUDIT project (Wind Resource Assessment Audit and Standardization) of the FP7 Marie-Curie Initial Training Network, and by the Spanish Ministerio de Economía y Competitividad within the framework of the ENE2012-36473 TURCO project (Determination of the Spatial Distribution of Statistic Parameters of Flow Turbulence over Complex Topographies in Wind Tunnel) belonging to the Spanish National Research Programme (Subprograma de investigación fundamental no orientada 2012). The report is organised in seven chapters and a collection of annexes. In chapter one, the problem is introduced. In chapter two, the experimental setup is described. In chapter three, the inflow conditions of the main wind tunnel used in this research are analysed in detail. In chapter four, preliminary surface-pressure test results on a model of the island are presented. The main results of the Bolund experiment are replicated in chapter five. In chapter six, specific flow structures over the island are identified and, finally, chapter seven gathers the conclusions and proposed lines of future work.
Abstract:
The reverse time migration (RTM) algorithm has been widely used in the seismic industry to generate images of the subsurface and thus reduce the risk of oil and gas exploration. Its widespread use is due to the high quality of its subsurface imaging. RTM is also known for its high computational cost; therefore, parallel computing techniques have been used in its implementations. In general, parallel approaches to RTM use coarse granularity, distributing the processing of subsets of seismic shots among the nodes of a distributed system. Coarse-grained parallel approaches to RTM have been shown to be very efficient, since each seismic shot can be processed independently. For this reason, RTM performance can be further improved by using a parallel approach with finer granularity for the processing assigned to each node. This work presents an efficient parallel algorithm for 3D reverse time migration with fine granularity using OpenMP. The 3D acoustic wave propagation algorithm makes up much of the RTM. Different load-balancing schemes were analyzed in order to minimize possible parallel performance losses at this stage. The results served as a basis for the implementation of the other phases of RTM: backpropagation and the imaging condition. The proposed algorithm was tested with synthetic data representing some of the possible subsurface structures. Metrics such as speedup and efficiency were used to analyze its parallel performance. The migrated sections show that the algorithm performed satisfactorily in identifying subsurface structures. As for parallel performance, the analysis clearly demonstrates the scalability of the algorithm, which achieved a speedup of 22.46 for the wave propagation and 16.95 for the full RTM, both with 24 threads.
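The acoustic wave propagation that dominates RTM's cost is a finite-difference stencil swept over the wavefield at every time step. A serial 1D toy version of the second-order stencil follows; the thesis's kernel is 3D and splits the inner loop nest across OpenMP threads under different load-balancing schemes, so this pure-Python sketch only shows the computation those threads would share, with all grid parameters invented:

```python
# Toy 1D acoustic wave propagation: the FD stencil at the core of RTM.
# In the thesis this loop nest is 3D and parallelized with OpenMP; here it is
# serial and 1D for clarity. Parameters keep the Courant number at 0.5.

def propagate(n=101, steps=40, c=1.0, dt=0.5, dx=1.0, src=50):
    r2 = (c * dt / dx) ** 2          # squared Courant number (must be <= 1)
    u_prev = [0.0] * n
    u_curr = [0.0] * n
    u_curr[src] = 1.0                # impulsive source at the centre
    for _ in range(steps):
        u_next = [0.0] * n           # fixed (zero) boundaries
        for i in range(1, n - 1):    # the loop OpenMP would split across threads
            u_next[i] = (2 * u_curr[i] - u_prev[i]
                         + r2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
        u_prev, u_curr = u_next and u_curr, u_next
        u_prev, u_curr = u_curr if u_prev is u_next else u_prev, u_next
    return u_curr

field = propagate()
```

After 40 steps the initial impulse has split into two symmetric pulses travelling outward, and the field stays bounded because the Courant condition is satisfied.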
Abstract:
There are several areas in which digital images are used to solve day-to-day problems. In medicine, the use of computer systems has improved diagnosis and medical interpretation. Dentistry is no different: increasingly, computer-assisted procedures support dentists in their tasks. In this context, the area of dentistry known as public oral health is responsible for the diagnosis and oral health treatment of a population. To this end, visual oral inspections are held in order to obtain information on the oral health status of a given population. From this collection of information, also known as an epidemiological survey, the dentist can plan and evaluate actions taken for the different problems identified. This procedure has limiting factors, such as the limited number of qualified professionals to perform these tasks and differing interpretations of diagnoses, among others. In this context arose the idea of using intelligent systems techniques to support these tasks. Thus, this work proposes the development of an intelligent system able to segment, count and classify teeth in occlusal intraoral digital photographic images. The proposed system makes combined use of machine learning techniques and digital image processing. We first carried out a color-based segmentation of the regions of interest, teeth and non-teeth, in the images using a Support Vector Machine (SVM). After identifying these regions, techniques based on morphological operators, such as erosion and the watershed transform, were used for counting the teeth and detecting their boundaries, respectively. With the tooth borders detected, it was possible to calculate Fourier descriptors for their shape, along with position descriptors. The teeth were then classified according to type using the SVM with the one-against-all method for the multiclass problem. The multiclass classification problem was approached in two different ways. In the first approach, three class types were considered: molar, premolar and non-tooth; in the second approach, five class types were considered: molar, premolar, canine, incisor and non-tooth. The system presented satisfactory performance in the segmentation, counting and classification of the teeth present in the images.
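The one-against-all strategy described above trains one binary classifier per class and labels a sample by the classifier with the highest score. A toy sketch of the scheme follows, with a simple perceptron standing in for the SVM and invented 2D points standing in for the Fourier shape and position descriptors of real teeth:

```python
# Hypothetical one-against-all multiclass sketch. A perceptron replaces the
# SVM used in the thesis; features are toy 2D points, not tooth descriptors.

def train_perceptron(points, labels, epochs=50):
    """Binary perceptron; labels are +1/-1. Returns (weights, bias)."""
    w, b = [0.0] * len(points[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, y in zip(points, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:                      # misclassified: update
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
                mistakes += 1
        if mistakes == 0:                           # converged on training set
            break
    return w, b

def train_one_vs_all(points, labels, classes):
    """One binary classifier per class (that class = +1, all others = -1)."""
    return {c: train_perceptron(points, [1 if l == c else -1 for l in labels])
            for c in classes}

def predict(models, x):
    """Pick the class whose binary classifier scores highest."""
    def score(c):
        w, b = models[c]
        return sum(wi * xi for wi, xi in zip(w, x)) + b
    return max(models, key=score)

points = [(0, 0), (1, 0), (0, 1), (8, 0), (9, 0), (8, 1), (0, 8), (0, 9), (1, 8)]
labels = ["molar"] * 3 + ["premolar"] * 3 + ["canine"] * 3
models = train_one_vs_all(points, labels, ["molar", "premolar", "canine"])
```

On this well-separated toy data each binary classifier converges, so the argmax over scores recovers every training label.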
Abstract:
The constant evolution of technology has made available computational tools that were mere expectations ten years ago. The increase in computational power applied to numerical models that simulate the atmosphere has broadened the study of atmospheric phenomena through the use of high-performance computing tools. This work proposes the development of algorithms based on SIMT architectures and the application of parallelism techniques using the OpenACC tool to process numerical forecast data from the Weather Research and Forecast model. The proposal has a strong interdisciplinary character, seeking interaction between the areas of atmospheric modelling and scientific computing. The influence of the cloud microphysics computation on the model's runtime degradation was tested. Because the input data for GPU execution was not large enough, the time required to transfer data from the CPU to the GPU exceeded the computation time on the CPU. Another determining factor was the addition of CUDA code within an MPI context, causing resource contention among the processors and again degrading execution time. The proposal of using directives to apply high-performance computing within a CUDA structure seems very promising, but it must still be used with great caution in order to produce good results. A hybrid MPI + CUDA construction was tested, but the results were inconclusive.
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation; hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, as well as the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Although subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models, along with scalable algorithms to find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language, augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance.
The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. A probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a great amount of freedom in specifying: (i) what pattern and approximation he considers important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that they are far more efficient than popular triple stores.
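The common core of these models, finding subgraph matches and keeping only the k best under a score, can be illustrated with a small sketch. The graph, the vertex scores, and the SPARQL-like pattern below are all invented, and the real algorithms add the pruning and indexing discussed above; this only shows naive backtracking plus heap-based top-k selection:

```python
# Hypothetical sketch: homomorphic subgraph matching over an edge-labeled
# graph, ranking answers by the sum of matched vertices' scores (a toy
# stand-in for the thesis's importance-scoring models).
import heapq

edges = [("alice", "knows", "bob"), ("alice", "knows", "carol"),
         ("bob", "likes", "dave"), ("carol", "likes", "dave")]
vertex_score = {"alice": 1.0, "bob": 2.0, "carol": 3.0, "dave": 4.0}

# Pattern: ?x --knows--> ?y --likes--> ?z   (variables start with '?')
pattern = [("?x", "knows", "?y"), ("?y", "likes", "?z")]

def match(pattern, edges, binding=None):
    """Yield every variable binding that maps the pattern into the graph."""
    binding = binding or {}
    if not pattern:
        yield dict(binding)
        return
    (ps, label, po), rest = pattern[0], pattern[1:]
    for s, l, o in edges:
        if l != label:
            continue
        trial, ok = dict(binding), True
        for var, node in ((ps, s), (po, o)):
            if var.startswith("?"):
                if trial.setdefault(var, node) != node:
                    ok = False          # conflicts with an earlier binding
            elif var != node:
                ok = False              # constant vertex does not match
        if ok:
            yield from match(rest, edges, trial)

def top_k(pattern, edges, k):
    """Rank complete answers by total vertex score and keep the best k."""
    return heapq.nlargest(
        k, match(pattern, edges),
        key=lambda b: sum(vertex_score[v] for v in b.values()))

best = top_k(pattern, edges, 1)
```

Both matches route through dave, but the answer binding ?y to carol scores 8.0 against 7.0 for bob, so it alone survives the top-1 cut.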