939 results for Semantic Web Services
Abstract:
This thesis proposes a methodology for georeferenced mobile video-mapping, based on the development of prototypes that use a Multimedia Geovisualization Interface to synchronize the video record of a location or event of interest with the route travelled by the inspection vehicle (over a map or image), through data collected by mobile sensors: a digital camera, a microphone, a GNSS receiver and a digital compass. The interface integrates these sensors with the digital map services currently available on the web. Systems of this kind significantly improve temporal analyses, management and decision-making. The interface proposed and developed in this work is useful in many applications as a monitoring and inventory tool. It can be understood as the visual component of a mobile mapping system, or as an alternative or complementary cartographic system for applications in which the geometric accuracy of a GNSS receiver in navigation mode is sufficient and its affordability is a competitive factor. Two applications were developed in this work: a monitoring and inventory system for road traffic signs, and a system for monitoring floods/droughts and inventorying properties along the edges of hydroelectric reservoirs, both fully operational.
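To make the synchronization idea concrete: a minimal sketch, in Python, of matching each video frame to the nearest GNSS fix by timestamp. The `GnssFix` type and `nearest_fix` helper are hypothetical illustrations, not taken from the thesis.

```python
# Hypothetical sketch: aligning video frames with GNSS fixes by timestamp.
from bisect import bisect_left
from dataclasses import dataclass

@dataclass
class GnssFix:
    t: float        # epoch seconds from the GNSS receiver
    lat: float
    lon: float
    heading: float  # degrees, from the digital compass

def nearest_fix(fixes: list[GnssFix], frame_t: float) -> GnssFix:
    """Return the GNSS fix closest in time to a video frame timestamp.
    Assumes `fixes` is non-empty and sorted by t."""
    times = [f.t for f in fixes]
    i = bisect_left(times, frame_t)
    candidates = fixes[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda f: abs(f.t - frame_t))

# Each frame can then be plotted on the web map at nearest_fix(...).lat/lon.
```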
Abstract:
This report presents the design and development of a cloud application for sharing objects and services. The application was developed within the R&D&I project SITAC: Social Internet of Things – Apps by and for the Crowd (ITEA 2 11020), which aims to create an integrating architecture and an "ecosystem" comprising platforms, tools and methodologies that ease the connection and cooperation of heterogeneous networked entities, whether systems, machines, devices or people carrying personal mobile devices such as tablets or mobile phones. The project innovates by adopting a model inspired by social networks to facilitate and unify interactions both between people and between people and devices. In this context arises the need for a big-data-oriented cloud application for sharing resources, which may be either logical or physical. That application, presented in this work, is the "Resource Sharing Center", which offers a web service for exchanging and sharing content together with a recommendation engine based on user preferences. To this end, cloud deployment technologies were used, such as Elastic Beanstalk (the Amazon Web Services PaaS), S3 (the Amazon Web Services storage system) and SimpleDB (a NoSQL database), with HTML5, JavaScript and Twitter Bootstrap for the front-end development and Python and Node.js in the back end; the work also contributed improvements to clustering tools for big data. Finally, the Apache JMeter tool was used to carry out the load-testing study of the application.
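A hedged sketch of the storage path this abstract describes, using the classic boto library (version 2.x, which covers both S3 and SimpleDB); the bucket and domain names are assumptions, not taken from the project.

```python
# Sketch: upload a shared file to S3 and index its metadata in SimpleDB,
# using boto 2.x. Names below are illustrative assumptions.
import boto

def share_resource(path, item_id, metadata):
    """Store the file in S3 and record its metadata in SimpleDB."""
    s3 = boto.connect_s3()
    bucket = s3.get_bucket('resource-sharing-center')   # assumed bucket
    key = bucket.new_key(item_id)
    key.set_contents_from_filename(path)

    sdb = boto.connect_sdb()
    domain = sdb.get_domain('shared-resources')         # assumed domain
    domain.put_attributes(item_id, metadata)            # e.g. owner, tags

share_resource('photo.jpg', 'item-42', {'owner': 'alice', 'type': 'image'})
```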
Abstract:
Robotics is a field that poses many problems because it depends on a large number of disciplines, devices, technologies and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges, such as household robots or professional robots. To facilitate the rapid, low-cost development of robotic systems, the reusability of code, and medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems and develop paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems, inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems.
Abstract:
This paper presents a model of a control system for robot systems inspired by the functionality and organisation of the human neuroregulatory system. Our model was specified using software agents within a formal framework and implemented through Web Services. This approach allows the control logic of a robot system to be implemented with relative ease and in an incremental way, by adding new control centres to the system as its behaviour is observed or needs to be specified with greater precision, without modifying existing functionality. The tests performed verify that the proposed model has the general characteristics of biological systems together with the desirable features of software, such as robustness, flexibility, reuse and decoupling.
Abstract:
Robotics is an emerging field with great activity. It poses several problems because it depends on a large number of disciplines, technologies, devices and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges; new uses include, for example, household robots and professional robots. To facilitate the rapid, low-cost development of robotic systems, the reusability of code, and medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems and develop paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems, inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems. Specifically, we model decentralized activity and hormonal variation.
Abstract:
This article presents the application and results of research into natural language processing techniques and semantic technology at Brand Rain and Anpro21. All projects related to these topics are described, and the application and advantages of transferring the research and newly developed technologies to the Brand Rain reputation monitoring and scoring tool are presented.
Abstract:
Cybercrime and related malicious activity in our increasingly digital world have become more prevalent and sophisticated, evading traditional security mechanisms. Digital forensics has been proposed to help investigate, understand and eventually mitigate such attacks. The practice of digital forensics, however, is still fraught with various challenges. Some of the most prominent of these are the increasing amounts of data and the diversity of digital evidence sources appearing in digital investigations. Mobile devices and cloud infrastructures are an interesting specimen, as they inherently exhibit these challenging circumstances and are becoming more prevalent in digital investigations today. Additionally, they embody further characteristics such as large volumes of data from multiple sources, dynamic sharing of resources, limited individual device capabilities and the presence of sensitive data. This combined set of circumstances makes digital investigations in mobile and cloud environments particularly challenging. This is not helped by the fact that digital forensics today still involves manual, time-consuming tasks within the processes of identifying evidence, performing evidence acquisition and correlating multiple diverse sources of evidence in the analysis phase. Furthermore, the industry-standard tools available are largely evidence-oriented, have limited support for evidence integration and only automate certain precursory tasks, such as indexing and text searching. In this study, efficiency, in the form of reduced time and human labour, is sought in digital investigations in highly networked environments through the automation of certain activities in the digital forensic process. To this end, requirements are outlined and an architecture is designed for an automated system that performs digital forensics in highly networked mobile and cloud environments. Part of the remote evidence acquisition activity of this architecture is built and tested on several mobile devices in terms of speed and reliability. A method for integrating multiple diverse evidence sources in an automated manner, supporting correlation and automated reasoning, is developed and tested. Finally, the proposed architecture is reviewed and enhancements are proposed to further automate it by introducing decentralization, particularly within the storage and processing functionality. This decentralization also improves machine-to-machine communication, supporting several digital investigation processes enabled by the architecture by harnessing the properties of various peer-to-peer overlays. Remote evidence acquisition helps to improve the efficiency (time and effort involved) of digital investigations by removing the need for proximity to the evidence. Experiments show that a single-TCP-connection client-server paradigm does not offer the required scalability and reliability for remote evidence acquisition, and that a multi-TCP-connection paradigm is required. The automated integration, correlation and reasoning over multiple diverse evidence sources demonstrated in the experiments improves speed and reduces the human effort needed in the analysis phase by removing the need for time-consuming manual correlation. Finally, informed by published scientific literature, the proposed enhancements for further decentralizing the Live Evidence Information Aggregator (LEIA) architecture offer a platform for increased machine-to-machine communication, enabling automation and reducing the need for manual human intervention.
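A minimal sketch of the multi-TCP-connection acquisition paradigm the experiments favour: the evidence image is split into byte ranges, each fetched over its own TCP connection. The endpoint, image size and wire format are illustrative assumptions, not the LEIA implementation.

```python
# Sketch: parallel remote acquisition over several TCP connections,
# one byte range per worker. All constants are assumed for illustration.
import socket
from concurrent.futures import ThreadPoolExecutor

HOST, PORT = "device.local", 9000     # assumed acquisition endpoint
IMAGE_SIZE = 64 * 2**20               # assumed size learned beforehand
WORKERS = 4

def fetch_range(start: int, end: int) -> bytes:
    """Fetch bytes [start, end) over a dedicated TCP connection."""
    with socket.create_connection((HOST, PORT)) as s:
        s.sendall(f"GET {start} {end}\n".encode())   # assumed wire format
        chunks, remaining = [], end - start
        while remaining:
            data = s.recv(min(65536, remaining))
            if not data:
                raise ConnectionError("peer closed early")
            chunks.append(data)
            remaining -= len(data)
        return b"".join(chunks)

step = IMAGE_SIZE // WORKERS
ranges = [(i * step, IMAGE_SIZE if i == WORKERS - 1 else (i + 1) * step)
          for i in range(WORKERS)]
with ThreadPoolExecutor(WORKERS) as pool:
    image = b"".join(pool.map(lambda r: fetch_range(*r), ranges))
```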
Abstract:
The ontological analysis of conceptual modelling techniques is increasingly popular. Related research has explored the ontological deficiencies not only of classical techniques such as ER or UML, but also of business process modelling techniques such as ARIS, and even of Web services standards such as BPEL4WS. While the selected ontologies are reasonably mature, it is the actual process of an ontological analysis that still lacks rigor. The current procedure leaves significant room for individual interpretation and is one reason the entire ontological analysis attracts criticism. This paper proposes a procedural model for ontological analysis based on the use of meta models, the involvement of more than one coder, and metrics. The model is explained with examples from various ontological analyses.
Abstract:
In today's financial markets, characterized by constantly changing tax laws and increasingly complex transactions, the demand for family financial planning (FFP) services is rising dramatically. However, the current trend of developing advisory systems that focus mainly on the financial or investment side fails to consider the whole picture of FFP. Separating financial or investment advice from legal and accounting advice may result in conflicting advice or important omissions that could lead to users suffering financial loss. In this paper, we propose a conceptual model of the FFP decision-making process, followed by a novel architecture to support an aggregated FFP decision process by utilizing intelligent agents and Web services technology. A prototype system for supporting FFP decisions is presented to demonstrate the advantages of the proposed Web-service multi-agent-based system architecture and its business value.
Abstract:
Arguably, the world has become one large pervasive computing environment. Our planet is growing a digital skin of a wide array of sensors, hand-held computers, mobile phones, laptops, web services and publicly accessible web-cams. Often, these devices and services are deployed in groups, forming small communities of interacting devices. Service discovery protocols allow processes executing on each device to discover services offered by other devices within the community. These communities can be linked together to form a wide-area pervasive environment, allowing processes in one group to interact with services in another. However, the costs of communication, and the protocols by which this communication is mediated, differ in the wide-area from those of intra-group, or local-area, communication. Communication is an expensive operation for small, battery-powered devices, but it is less expensive for servers and workstations, which have a constant power supply and are connected to high-bandwidth networks. This paper introduces Superstring, a peer-to-peer service discovery protocol optimised for use in the wide-area. Its goals are to minimise computation and memory overhead in the face of large numbers of resources. It achieves this memory and computation scalability by distributing the storage cost of service descriptions and the computation cost of queries over multiple resolvers.
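One common way to distribute the storage of descriptions and the work of queries over multiple resolvers is consistent hashing; the sketch below illustrates that general technique, not Superstring's actual protocol.

```python
# Sketch: hash service descriptions onto a ring of resolvers so that
# storage and query load are spread out (a DHT-style illustration).
import hashlib
from bisect import bisect

def h(value: str) -> int:
    return int(hashlib.sha1(value.encode()).hexdigest(), 16)

class ResolverRing:
    def __init__(self, resolvers):
        self.ring = sorted((h(r), r) for r in resolvers)

    def owner(self, service_key: str) -> str:
        """Resolver responsible for storing/answering this service key."""
        keys = [k for k, _ in self.ring]
        i = bisect(keys, h(service_key)) % len(self.ring)
        return self.ring[i][1]

ring = ResolverRing(["resolver-a", "resolver-b", "resolver-c"])
print(ring.owner("printer/floor2"))  # description lives at one resolver
```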
Abstract:
The development of models in the Earth Sciences, e.g. for earthquake prediction and for the simulation of mantle convection, is far from finalized. Therefore there is a need for a modelling environment that allows scientists to implement and test new models in an easy but flexible way. Once verified, the models should be easy to apply within their scope, typically by setting input parameters through a GUI or web services. It should be possible to link certain parameters to external data sources, such as databases and other simulation codes. Moreover, as typically large-scale meshes have to be used to achieve appropriate resolutions, the computational efficiency of the underlying numerical methods is important. Conceptually, this leads to a software system with three major layers: the application layer, the mathematical layer, and the numerical algorithm layer. The latter is implemented as a C/C++ library to solve a basic, computationally intensive linear problem, such as a linear partial differential equation. The mathematical layer allows the model developer to define his model and to implement high-level solution algorithms (e.g. a Newton-Raphson scheme or a Crank-Nicolson scheme) or to choose these algorithms from an algorithm library. The kernels of the model are generic, typically linear, solvers provided through the numerical algorithm layer. Finally, to provide an easy-to-use application environment, a web interface is (semi-automatically) built to edit the XML input file for the modelling code. In the talk, we will discuss the advantages and disadvantages of this concept in more detail. We will also present the modelling environment escript, a prototype implementation of such a software system in Python (see www.python.org). Key components of escript are the Data class and the PDE class. Objects of the Data class allow generating, holding, accessing and manipulating data in such a way that the representation best suited to the particular context is transparent to the user. They are also the key to establishing connections with external data sources. PDE class objects describe (linear) partial differential equations to be solved by a numerical library. The current implementation of escript has been linked to the finite element code Finley to solve general linear partial differential equations. We will give a few simple examples which illustrate the usage of escript. Moreover, we show the usage of escript together with Finley for the modelling of interacting fault systems and for the simulation of mantle convection.
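For flavour, a short sketch of what an escript session of this kind looks like, following the API described in the escript documentation (a Poisson-type problem solved through Finley); exact signatures may differ between versions.

```python
# Sketch of a typical escript session: -div(A grad u) = Y on a 40x40
# finite element mesh, with Finley as the numerical back end.
from esys.escript import kronecker, whereZero
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)  # unit square mesh
x = domain.getX()

pde = LinearPDE(domain)
# q marks the Dirichlet mask (u = r where x0 = 0), making the problem
# well-posed; A and Y are PDE coefficients in escript's notation.
pde.setValue(A=kronecker(domain), Y=1.0, q=whereZero(x[0]), r=0.0)
u = pde.getSolution()
```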
Abstract:
Ontology search and reuse is becoming increasingly important as the quest continues for methods to reduce the cost of constructing such knowledge structures. A number of ontology libraries and search engines are coming into existence to facilitate locating and retrieving potentially relevant ontologies. The number of ontologies available for reuse is steadily growing, and so is the need for methods to evaluate and rank existing ontologies in terms of their relevance to the needs of the knowledge engineer. This paper presents AKTiveRank, a prototype system for ranking ontologies based on a number of structural metrics.
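As a hedged illustration of metric-based ranking: AKTiveRank's own measures are richer, but the toy density score below, computed with rdflib, shows the general shape of the approach. The file names are assumptions.

```python
# Sketch: rank candidate ontologies by a crude structural metric,
# here the average number of statements per class, using rdflib.
from rdflib import Graph, RDF, RDFS, OWL

def density(path: str) -> float:
    """Average outgoing statements per class: a rough proxy for how
    richly an ontology describes its concepts."""
    g = Graph().parse(path)
    classes = set(g.subjects(RDF.type, OWL.Class)) | \
              set(g.subjects(RDF.type, RDFS.Class))
    if not classes:
        return 0.0
    return sum(len(list(g.predicate_objects(c))) for c in classes) / len(classes)

candidates = ["onto_a.owl", "onto_b.owl"]   # assumed local files
ranked = sorted(candidates, key=density, reverse=True)
```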
Abstract:
AKT is a major research project applying a variety of technologies to knowledge management. Knowledge is a dynamic, ubiquitous resource, which is to be found equally in an expert's head, under terabytes of data, or explicitly stated in manuals. AKT will extend knowledge management technologies to exploit the potential of the semantic web, covering the use of knowledge over its entire lifecycle, from acquisition to maintenance and deletion. In this paper we discuss how HLT will be used in AKT and how the use of HLT will affect different areas of KM, such as knowledge acquisition, retrieval and publishing.
Abstract:
Knowledge maintenance is a major challenge for both knowledge management and the Semantic Web. Operating over the Semantic Web, there will be a network of collaborating agents, each with their own ontologies or knowledge bases. Change in the knowledge state of one agent may need to be propagated across a number of agents and their associated ontologies. The challenge is to decide how to propagate a change of knowledge state. The effects of a change in knowledge state cannot be known in advance, and so an agent cannot know who should be informed unless it adopts a simple ‘tell everyone – everything’ strategy. This situation is highly reminiscent of the classic Frame Problem in AI. We argue that for agent-based technologies to succeed, far greater attention must be given to creating an appropriate model for knowledge update. In a closed system, simple strategies are possible (e.g. ‘sleeping dog’ or ‘cheap test’ or even complete checking). However, in an open system where cause and effect are unpredictable, a coherent cost-benefit based model of agent interaction is essential. Otherwise, the effectiveness of every act of knowledge update/maintenance is brought into question.
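A minimal sketch, under assumed utilities, of the cost-benefit policy argued for here: an agent informs a peer only when the expected value of the update outweighs its cost. All numbers and names are illustrative.

```python
# Sketch: expected-value test for propagating a knowledge update,
# instead of the naive 'tell everyone - everything' strategy.
from dataclasses import dataclass

@dataclass
class Peer:
    name: str
    relevance: float     # P(change affects this peer's ontology)
    benefit: float       # value of the peer being up to date
    update_cost: float   # cost of messaging plus local revision

def peers_to_inform(peers):
    return [p for p in peers if p.relevance * p.benefit > p.update_cost]

peers = [Peer("planner", 0.8, 10.0, 2.0),   # inform: 8.0 > 2.0
         Peer("archive", 0.1, 3.0, 2.0)]    # skip:   0.3 < 2.0
print([p.name for p in peers_to_inform(peers)])
```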
Abstract:
The evaluation of ontologies is vital for the growth of the Semantic Web. We consider a number of problems in evaluating a knowledge artifact like an ontology. We propose in this paper that one approach to ontology evaluation should be corpus or data driven. A corpus is the most accessible form of knowledge and its use allows a measure to be derived of the ‘fit’ between an ontology and a domain of knowledge. We consider a number of methods for measuring this ‘fit’ and propose a measure to evaluate structural fit, and a probabilistic approach to identifying the best ontology.
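A hedged sketch of the corpus-driven idea: score an ontology by how well its concept labels cover the salient terms of a domain corpus. The paper's structural-fit measure and probabilistic approach are more refined; `corpus_fit` is a hypothetical helper.

```python
# Sketch: lexical 'fit' between an ontology's labels and a corpus.
from collections import Counter
import re

def corpus_fit(ontology_labels, corpus_text, top_n=100):
    """Fraction of the corpus's top-N terms that appear as ontology labels."""
    words = re.findall(r"[a-z]+", corpus_text.lower())
    top_terms = {w for w, _ in Counter(words).most_common(top_n)}
    labels = {label.lower() for label in ontology_labels}
    return len(top_terms & labels) / max(len(top_terms), 1)

# Usage: rank candidate ontologies by corpus_fit(labels_i, corpus).
```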