979 results for process query language
Abstract:
Edge-labeled graphs have proliferated rapidly over the last decade due to the increased popularity of social networks and the Semantic Web. In social networks, relationships between people are represented by edges, and each edge is labeled with a semantic annotation. Hence, a single huge graph can express many different relationships between entities. The Semantic Web represents each single fragment of knowledge as a triple (subject, predicate, object), which is conceptually identical to an edge from subject to object labeled with the predicate. A set of triples constitutes an edge-labeled graph on which knowledge inference is performed. Subgraph matching has been extensively used as a query language for patterns in the context of edge-labeled graphs. For example, in social networks, users can specify a subgraph matching query to find all people that have certain neighborhood relationships. Heavily used fragments of the SPARQL query language for the Semantic Web, as well as the graph queries of other graph DBMSs, can also be viewed as subgraph matching over large graphs. Though subgraph matching has been extensively studied as a query paradigm in the Semantic Web and in social networks, a user can get a large number of answers in response to a query. These answers can be shown to the user in accordance with an importance ranking. In this thesis proposal, we present four different scoring models along with scalable algorithms that find the top-k answers via a suite of intelligent pruning techniques. The suggested models cover a practically important subset of the SPARQL query language, augmented with some additional useful features. The first model, called Substitution Importance Query (SIQ), identifies the top-k answers whose scores are calculated from the matched vertices' properties in each answer, in accordance with a user-specified notion of importance. The second model, called Vertex Importance Query (VIQ), identifies important vertices in accordance with a user-defined scoring method that builds on top of various subgraphs articulated by the user. Approximate Importance Query (AIQ), our third model, allows partial and inexact matchings and returns the top-k of them under user-specified approximation terms and scoring functions. In the fourth model, called Probabilistic Importance Query (PIQ), a query consists of several sub-blocks: one mandatory block that must be mapped and other blocks that can be opportunistically mapped. The probability is calculated from various aspects of the answers, such as the number of mapped blocks and the vertices' properties in each block, and the top-k most probable answers are returned. An important distinguishing feature of our work is that we allow the user a huge amount of freedom in specifying: (i) what patterns and approximations they consider important, (ii) how to score answers, irrespective of whether they are vertices or substitutions, and (iii) how to combine and aggregate scores generated by multiple patterns and/or multiple substitutions. Because so much power is given to the user, indexing is more challenging than in situations where additional restrictions are imposed on the queries the user can ask. The proposed algorithms for the first model can also be used for answering SPARQL queries with ORDER BY and LIMIT, and the method for the second model also works for SPARQL queries with GROUP BY, ORDER BY and LIMIT. We test our algorithms on multiple real-world graph databases, showing that our algorithms are far more efficient than popular triple stores.
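To make the connection to standard SPARQL concrete, here is a minimal sketch, in Python with the rdflib library, of the kind of top-k query (a subgraph pattern scored and cut off with ORDER BY and LIMIT) that the SIQ and VIQ algorithms are designed to answer efficiently. The data file, namespace, and property names are hypothetical, and this is plain SPARQL evaluation, not the thesis's pruning algorithms.

```python
# Minimal illustration of a top-k subgraph-matching query in plain SPARQL,
# run with rdflib. All names below (social.ttl, ex:knows, ex:follows,
# ex:alice) are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("social.ttl")  # a hypothetical edge-labeled graph in Turtle syntax

# Find the 10 people with the most followers among those who know :alice:
# a two-edge subgraph pattern, a score (follower count), and a cutoff.
query = """
PREFIX ex: <http://example.org/>
SELECT ?person (COUNT(?f) AS ?followers)
WHERE {
    ?person ex:knows ex:alice .
    ?f ex:follows ?person .
}
GROUP BY ?person
ORDER BY DESC(?followers)
LIMIT 10
"""
for row in g.query(query):
    print(row.person, row.followers)
```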
Abstract:
According to much evidence, observing objects activates two types of information: structural properties, i.e., the visual information about the structural features of objects, and function knowledge, i.e., the conceptual information about their skilful use. Many studies so far have focused on the role played by these two kinds of information during object recognition and on their neural underpinnings. However, to the best of our knowledge, no study so far has examined how the activation of this information (structural vs. function) during object manipulation and conceptualization varies with the age of participants and with the level of object familiarity (familiar vs. non-familiar). Therefore, the main aim of this dissertation was to investigate how actions and concepts related to familiar and non-familiar objects may vary across development. To pursue this aim, four studies were carried out. A first study led to the creation of the Familiar and Non-Familiar Stimuli Database, a set of everyday objects classified by Italian pre-schoolers, school-age children, and adults, useful for verifying how object knowledge is modulated by age and frequency of use. A parallel study demonstrated that factors such as sociocultural dynamics may affect the perception of objects. Specifically, data on the familiarity, naming, function, use, and frequency of use of the objects used to create the Familiar and Non-Familiar Stimuli Database were collected with Dutch and Croatian children and adults. The last two studies, on object interaction and language, provide further evidence in support of the literature on affordances and on the link between affordances and the cognitive processing of language from a developmental point of view, supporting the perspective of situated cognition and emphasizing the crucial role of human experience.
Abstract:
Despite the existence of commercial products and of research in the area, building information systems with several distributed, heterogeneous, and autonomous components - known as federated information systems - is still a challenge. These information systems offer a unified global view over the several (partial) data models. However, modeling these systems is a challenge, since data models such as the relational one do not include information about distribution and the handling of heterogeneity. It is also necessary to interact with these information systems through queries over the several components of the systems, without needing to know their details. This work proposes an approach to these challenges through the use of models for semantic description, e.g. OWL (Web Ontology Language), to build a unified description of their several partial models. The model created to support this description is partly based on existing ontologies, which were changed and extended to solve several modeling challenges. On top of this model, a software component is created that allows the execution of SQL (Structured Query Language) queries over the federated system, resolving the existing distribution and heterogeneity problems.
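As a rough illustration of the mediator idea, assuming the unified description has already been produced, the sketch below runs one logical SQL query over several heterogeneous sources: a plain Python dict stands in for the OWL-based semantic description, and sqlite3 stands in for the federated components. All file, table, and column names are made up.

```python
# A minimal federated-query sketch: one logical SQL query over several
# sources, with per-source schema differences resolved by a unified
# description. The real work uses OWL for that description; a dict
# stands in here. All names are hypothetical.
import sqlite3

# Hypothetical unified schema: person(name, email).
# Each source exposes the same concept under a different local schema.
SOURCES = {
    "hr.db":    {"table": "employees", "name": "full_name", "email": "mail"},
    "sales.db": {"table": "clients",   "name": "client",    "email": "email"},
}

def federated_query(where_sql=""):
    """Run one logical query over all sources and merge the results."""
    rows = []
    for path, m in SOURCES.items():
        con = sqlite3.connect(path)
        # The subquery maps the local schema onto the unified one, so the
        # caller's WHERE clause can use the unified column names.
        local = (f"SELECT name, email FROM "
                 f"(SELECT {m['name']} AS name, {m['email']} AS email "
                 f"FROM {m['table']}) {where_sql}")
        rows.extend(con.execute(local).fetchall())
        con.close()
    return rows

print(federated_query("WHERE name LIKE 'A%'"))
```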
Abstract:
Master's degree (PESII), Pre-School Education and Teaching in the 1st Cycle of Basic Education
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Project work submitted to the Escola Superior de Teatro e Cinema in fulfilment of the requirements for the degree of Master in Theatre, with a specialization in Movement Theatre.
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Project work presented as a partial requirement for obtaining the degree of Master in Statistics and Information Management
Abstract:
Master's internship report in the Teaching of Informatics
Abstract:
Master's Dissertation in Informatics Engineering
Abstract:
Volumes of data used in science and industry are growing rapidly. When researchers face the challenge of analyzing them, their format is often the first obstacle. The lack of standardized ways of exploring different data layouts requires an effort to solve the problem from scratch each time. The possibility of accessing data in a rich, uniform manner, e.g. using Structured Query Language (SQL), would offer expressiveness and user-friendliness. Comma-separated values (CSV) is one of the most common data storage formats. Despite its simplicity, handling it becomes non-trivial as file sizes grow. Importing CSVs into existing databases is time-consuming and troublesome, or even impossible if the horizontal dimension reaches thousands of columns. Most databases are optimized for handling large numbers of rows rather than columns; therefore, performance for datasets with non-typical layouts is often unacceptable. Other challenges include schema creation, updates, and repeated data imports. To address the above-mentioned problems, I present a system for accessing very large CSV-based datasets by means of SQL. It is characterized by: a "no copy" approach, where data stay mostly in the CSV files; "zero configuration", with no need to specify a database schema; a small, installation-free footprint, written in C++ with boost [1], SQLite [2] and Qt [3]; efficient plan execution through query rewriting, dynamic creation of indices for appropriate columns, and static data retrieval directly from the CSV files; effortless support for millions of columns; easy handling of mixed text/number data thanks to per-value typing; and a very simple network protocol that provides an efficient interface for MATLAB and reduces implementation time for other languages. The software is available as freeware, along with educational videos, on its website [4]. It does not need any prerequisites to run, as all of the libraries are included in the distribution package. I test it against existing database solutions using a battery of benchmarks and discuss the results.
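The following toy sketch, in Python rather than the C++ of the actual system, illustrates two of the ideas mentioned above: the "no copy" approach (the CSV stays on disk) and dynamic index creation (an index for a column is built only when a query first touches it). The file and column names are hypothetical, and real query rewriting is not modeled.

```python
# Toy sketch of the "no copy" idea: the CSV is never imported into a
# database; a per-column index is created lazily on first use.
# "measurements.csv" and "sensor_id" are made-up names.
import csv

class CsvTable:
    def __init__(self, path):
        self.path = path
        with open(path, newline="") as f:
            self.header = next(csv.reader(f))
        self.indices = {}  # column name -> {value: [row numbers]}

    def _ensure_index(self, column):
        """Dynamically create an index for a column on first use."""
        if column in self.indices:
            return
        col = self.header.index(column)
        index = {}
        with open(self.path, newline="") as f:
            reader = csv.reader(f)
            next(reader)  # skip header
            for rownum, row in enumerate(reader):
                index.setdefault(row[col], []).append(rownum)
        self.indices[column] = index

    def select_eq(self, column, value):
        """SELECT * FROM table WHERE column = value, answered via the index."""
        self._ensure_index(column)
        wanted = set(self.indices[column].get(value, []))
        with open(self.path, newline="") as f:
            reader = csv.reader(f)
            next(reader)
            return [row for i, row in enumerate(reader) if i in wanted]

t = CsvTable("measurements.csv")  # hypothetical file
print(t.select_eq("sensor_id", "42"))
```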
Abstract:
A study of the Web Service Modeling Ontology (WSMO) and Business Process Execution Language for Web Services (BPEL4WS) technologies, with an analysis of WSMO and BPEL4WS documenting the key points of each technology; a comparison between the two technologies is then carried out, paying special attention to WSMO.
Abstract:
Electronic auctions are virtual marketplaces located somewhere on the internet. Electronic auctions are conducted between businesses (B2B), between businesses and consumers (B2C), and among consumers (C2C). In this thesis, electronic auction refers to the first of these, trading between businesses. The purpose of the thesis is to study the suitability of a workflow engine as the engine of an electronic auction system. The work examines the open-source ActiveBPEL engine, and the study is carried out by designing, modeling, and testing a business process that registers the buyer's and the seller's information in the system. The implemented process is one part of an electronic auction, but following the same principle it would also be possible to implement a complete auction. This thesis examines an electronic auction that is based on web services and has a clear coordinator. The coordinator controls the other participating web services and their executable operations. The high-level models are described using BPMN notation; the process itself is implemented in the BPEL language. The ActiveBPEL Designer tool is used for modeling and simulating the process. The goal of the thesis is not only to implement part of the auction, but also to give the reader an understanding of the business environment to which the auction belongs, and to shed light on the technologies behind the auction. In particular, web services and related concepts will become familiar to the reader.
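As a very rough sketch of the coordinator pattern described above, the Python fragment below invokes two partner registration operations in sequence. In the thesis itself this orchestration is expressed in BPEL and executed by the ActiveBPEL engine, so the endpoints, payloads, and JSON-over-HTTP transport here are purely hypothetical.

```python
# Sketch of a coordinator that orchestrates partner web services:
# it invokes the registration operations one after another, as a BPEL
# <sequence> would. Endpoints and payloads are made up.
import json
from urllib.request import Request, urlopen

def invoke(endpoint, payload):
    """Invoke one partner web-service operation (here as JSON over HTTP)."""
    req = Request(endpoint, data=json.dumps(payload).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)

def register_parties(buyer, seller):
    """The registration process: register buyer, then seller, in sequence."""
    buyer_ack = invoke("http://auction.example/registerBuyer", buyer)
    seller_ack = invoke("http://auction.example/registerSeller", seller)
    return buyer_ack, seller_ack

print(register_parties({"name": "Acme Oy"}, {"name": "Widget AB"}))
```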