927 results for CyberOPC. OPC UA. REST. SOAP. Web service. Distributed systems. Middleware.
Abstract:
The current INFRAWEBS European research project aims at developing an ICT framework enabling software and service providers to generate and establish open and extensible development platforms for Web Service applications. One of the concrete project objectives is the development of a full-life-cycle software toolset for creating and maintaining Semantic Web Services (SWSs) supporting specific applications based on the Web Service Modelling Ontology (WSMO) framework. According to WSMO, the functional and behavioural descriptions of a SWS may be represented by means of complex logical expressions (axioms). The paper describes a specialized user-friendly tool for constructing and editing such axioms – the INFRAWEBS Axiom Editor. After discussing the main design principles of the Editor, its functional architecture is briefly presented. The tool is implemented on top of the Eclipse Graphical Editing Framework and the Eclipse Rich Client Platform.
Abstract:
* The research has been partially supported by INFRAWEBS - IST FP6-2003/IST/2.3.2.3 Research Project No. 511723 and “Technologies of the Information Society for Knowledge Processing and Management” - IIT-BAS Research Project No. 010061.
Abstract:
The evaluation of geospatial data quality and trustworthiness presents a major challenge to geospatial data users when making a dataset selection decision. The research presented here therefore focused on defining and developing a GEO label – a decision support mechanism to assist data users in efficient and effective geospatial dataset selection on the basis of quality, trustworthiness and fitness for use. This thesis presents six phases of research and development conducted to: (1) identify the informational aspects upon which users rely when assessing geospatial dataset quality and trustworthiness; (2) elicit initial user views on the role of the GEO label in supporting dataset comparison and selection; (3) evaluate prototype label visualisations; (4) develop a Web service to support GEO label generation; (5) develop a prototype GEO label-based dataset discovery and intercomparison decision support tool; and (6) evaluate the prototype tool in a controlled human-subject study. The results of the studies revealed, and subsequently confirmed, eight informational aspects of geospatial data that users considered important when evaluating geospatial dataset quality and trustworthiness, namely: producer information, producer comments, lineage information, compliance with standards, quantitative quality information, user feedback, expert reviews, and citation information. Following an iterative user-centred design (UCD) approach, it was established that the GEO label should visually summarise the availability of these key informational aspects and allow their interrogation. A Web service was developed to support the generation of dynamic GEO label representations and was integrated into a number of real-world GIS applications. The service was also used in the development of the GEO LINC tool – a GEO label-based dataset discovery and intercomparison decision support tool. The results of the final evaluation study indicated that (a) the GEO label effectively communicates the availability of dataset quality and trustworthiness information and (b) GEO LINC successfully facilitates ‘at a glance’ dataset intercomparison and fitness-for-purpose-based dataset selection.
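As an illustration of how a client application might consume such a label-generation service, the sketch below requests an SVG label for a dataset given links to its metadata and user-feedback documents. The endpoint URL and parameter names are assumptions for illustration, not the published API of the service developed in this thesis.

```python
# Hypothetical client sketch for a GEO label Web service.
# The URL and parameter names below are illustrative assumptions.
import requests

GEO_LABEL_ENDPOINT = "https://example.org/geolabel/api/v1/label"  # assumed URL

def fetch_geo_label(metadata_url: str, feedback_url: str) -> bytes:
    """Request an SVG GEO label summarising the availability of the eight
    informational aspects for a dataset, given its metadata and feedback."""
    response = requests.get(
        GEO_LABEL_ENDPOINT,
        params={"metadata": metadata_url, "feedback": feedback_url},  # assumed names
        timeout=30,
    )
    response.raise_for_status()
    return response.content  # SVG representation of the label

if __name__ == "__main__":
    svg = fetch_geo_label(
        "https://example.org/dataset/123/metadata.xml",
        "https://example.org/dataset/123/feedback.xml",
    )
    with open("geo_label.svg", "wb") as f:
        f.write(svg)
```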
Abstract:
Report published in the Proceedings of the National Conference on "Education and Research in the Information Society", Plovdiv, May 2015.
Abstract:
Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced-rank representations of the covariance matrix, often referred to as projected or fixed-rank approaches. In such methods the covariance function of the posterior process is represented by a reduced-rank approximation chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high-dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected sequential estimation and adds several novel features. In particular, the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
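As a rough illustration of the sequential projected-process idea described above, the following minimal numpy sketch maintains a Gaussian posterior over the process values at a small set of projection (inducing) points and absorbs observations one at a time via rank-one Kalman-style updates, so no matrix over the full dataset is ever inverted. It is a simplified sketch under Gaussian-noise assumptions (the residual variance correction term is omitted for brevity), not the gptk implementation, which is C++.

```python
import numpy as np

def rbf(a, b, length_scale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, 200)                      # observation locations
y = np.sin(X) + 0.1 * rng.standard_normal(200)   # noisy observations
Z = np.linspace(0, 10, 15)                       # projection (inducing) points
noise_var = 0.1 ** 2

Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # prior covariance at projection points
mu = np.zeros(len(Z))                    # posterior mean of u = f(Z)
Sigma = Kzz.copy()                       # posterior covariance of u

# Process observations one at a time: each step is a rank-one Bayesian
# update of (mu, Sigma) for the linear-Gaussian model y_i = a^T u + noise.
for x_i, y_i in zip(X, y):
    a = np.linalg.solve(Kzz, rbf(Z, np.array([x_i]))).ravel()  # projection weights
    m_i = a @ mu                        # predictive mean at x_i
    v_i = a @ Sigma @ a + noise_var     # predictive variance (residual term omitted)
    gain = Sigma @ a / v_i
    mu = mu + gain * (y_i - m_i)
    Sigma = Sigma - np.outer(gain, a @ Sigma)

# Posterior mean at test inputs via the same projection.
X_test = np.linspace(0, 10, 5)
A_test = np.linalg.solve(Kzz, rbf(Z, X_test))
print(A_test.T @ mu)
```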
Abstract:
Ubiquitous computing is a paradigm in which devices with processing and communication capabilities are embedded in the everyday elements of our lives (homes, cars, cameras, phones, schools, museums, etc.), providing services with a high degree of mobility and transparency. Developing ubiquitous systems is a complex task, since it involves several areas of computing, such as Software Engineering, Artificial Intelligence, and Distributed Systems. The task becomes even more complex given the absence of a reference architecture to guide the development of such systems. Reference architectures have been used to provide a common basis and guidelines for building software architectures for different classes of systems. Architecture description languages (ADLs), in turn, provide a syntax for the structural representation of architectural elements, their constraints, and their interactions, making it possible to express the architectural model of a system. Currently there are no ADLs in the literature based on reference architectures for the ubiquitous computing domain. To enable the architectural modelling of ubiquitous applications, the main goal of this work is to specify UbiACME, an architecture description language for ubiquitous applications, and to provide the UbiACME Studio tool, which allows software architects to build models using UbiACME. To this end, we first carried out a systematic review to investigate, in the literature on ubiquitous systems, the elements common to these systems that should be considered in the design of UbiACME. In addition, based on the systematic review, we defined a reference architecture for ubiquitous systems, RA-Ubi, which is the basis for defining the elements required for architectural modelling and therefore provides input for the definition of the UbiACME elements. Finally, to validate the language and the tool, we present a controlled experiment in which architects model a ubiquitous application using UbiACME Studio and compare it with modelling the same application in SysML.
Abstract:
This paper presents the results of applying an integrative information and knowledge audit methodology at a research centre of the Ministry of Science, Technology and Environment in the province of Holguín, Cuba. The methodology comprises seven stages with a hybrid approach aimed at reviewing the information and knowledge management strategy and policy; identifying, inventorying, and mapping information and knowledge (I+K) resources and their flows; and assessing the processes associated with their management. The centre's senior management, specialists, and researchers attested to the effectiveness of the applied methodology, whose results led to readjusting the strategic projection of I+K management, redesigning the information flows of key processes, compiling a directory of experts by area, and planning future learning and professional development.
Abstract:
The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/), which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), as well as into the categories of a high-level serious-games categorization provisionally developed by RAGE. English and French are already covered by the provided web service, and the entire framework can be extended to support additional languages.
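A client of such a categorization web service might be written along the following lines; the endpoint path and JSON field names below are purely illustrative assumptions, not the documented ReaderBench API.

```python
# Hypothetical client sketch for a text-categorization web service.
# The URL path and request/response fields are illustrative assumptions.
import requests

def categorize(text: str, lang: str = "en") -> dict:
    """Send a document and receive candidate ACM CCS categories with scores."""
    resp = requests.post(
        "http://readerbench.com/api/semantic-annotation",  # assumed path
        json={"text": text, "language": lang},             # assumed fields
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"keywords": [...], "categories": [...]}

if __name__ == "__main__":
    result = categorize("Gaussian processes for spatial interpolation of sensor data.")
    print(result)
```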
Abstract:
Thesis (Ph.D.), University of Washington, August 2016.
Abstract:
Internet users consume online targeted advertising based on information collected about them and voluntarily share personal information in social networks. Sensor information and data from smartphones are collected and used by applications, sometimes in unclear ways. As happens today with smartphones, in the near future sensors will be shipped in all types of connected devices, enabling ubiquitous information gathering from the physical environment and enabling the vision of Ambient Intelligence. The value of the gathered data, if not obvious, can be harnessed through data mining techniques and put to use by enabling personalized and tailored services as well as business intelligence practices, fueling the digital economy. However, the ever-expanding gathering and use of information undermines the privacy conceptions of the past. Natural social practices of managing privacy in daily relations are overridden by socially awkward communication tools, service providers struggle with security issues resulting in harmful data leaks, governments use mass surveillance techniques, the incentives of the digital economy threaten consumer privacy, and the advancement of consumer-grade data-gathering technology enables new inter-personal abuses. A wide range of fields attempts to address technology-related privacy problems; however, they vary immensely in terms of assumptions, scope, and approach. Privacy in future use cases is typically handled vertically, instead of building upon previous work that can be re-contextualized, while current privacy problems are typically addressed per type in a more focused way. Because significant effort was required to make sense of the relations and structure of privacy-related work, this thesis attempts to transmit a structured view of it. It is multi-disciplinary - from cryptography to economics, including distributed systems and information theory - and addresses privacy issues of different natures. As existing work is framed and discussed, the contributions to the state of the art made in the scope of this thesis are presented. The contributions add to five distinct areas: 1) identity in distributed systems; 2) future context-aware services; 3) event-based context management; 4) low-latency information flow control; 5) high-dimensional dataset anonymity. Finally, having laid out such a landscape of privacy-preserving work, current and future privacy challenges are discussed, considering not only technical but also socio-economic perspectives.
Abstract:
This document proposes a framework based on Semantic Web technologies to detect potential collaboration networks through the semantic enrichment of scientific articles produced by researchers who publish with Ecuadorian affiliations. The framework is described through a linked data publication cycle. The scope covers publications that have at least one author with an Ecuadorian affiliation. The detected collaboration networks are an important input for strengthening the efforts of the Ecuadorian government and the country's university authorities, prioritising the effort and resources invested in research, and determining the relevance or coherence of research programmes.
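Once the enriched articles are published as linked data, co-authorship queries become straightforward. The following sketch shows the kind of SPARQL query such a framework would enable, retrieving pairs of authors ranked by number of joint publications; the endpoint URL and the choice of vocabulary are assumptions for illustration.

```python
# Sketch of a co-authorship query against a linked-data endpoint.
# Endpoint URL and vocabulary (dct:creator) are illustrative assumptions.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://example.org/sparql")  # hypothetical endpoint
sparql.setQuery("""
    PREFIX dct: <http://purl.org/dc/terms/>
    SELECT ?a ?b (COUNT(?paper) AS ?joint)
    WHERE {
      ?paper dct:creator ?a, ?b .
      FILTER (STR(?a) < STR(?b))   # each unordered pair only once
    }
    GROUP BY ?a ?b
    ORDER BY DESC(?joint)
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

# Print the strongest collaboration pairs.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["a"]["value"], row["b"]["value"], row["joint"]["value"])
```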
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Elétrica, 2016.
Abstract:
Open Data are a useful tool that is steadily gaining importance in society. This thesis demonstrates their usefulness through the development of a mobile application that uses such data to provide information on the environmental state of air and pollen in Emilia-Romagna, drawing on the datasets provided by a well-known public body (Arpa Emilia-Romagna). The mobile application relies on a Web service that manages the various data-transfer steps and stores the data in a MongoDB database. This Web service was in turn designed to be made available to developers, organisations, and ordinary citizens for future studies and development in this field.
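A minimal sketch of such a Web service is shown below: a small Flask application serving air-quality records from MongoDB to client apps. The collection and field names are illustrative assumptions, not Arpa Emilia-Romagna's actual schema.

```python
# Minimal sketch of a Web service backed by MongoDB.
# Database, collection, and field names are illustrative assumptions.
from flask import Flask, jsonify, request
from pymongo import MongoClient

app = Flask(__name__)
collection = MongoClient("mongodb://localhost:27017")["arpa"]["air_quality"]

@app.route("/api/air")
def air_quality():
    """Return the latest readings, optionally filtered by station, newest first."""
    station = request.args.get("station")
    query = {"station": station} if station else {}
    docs = collection.find(query, {"_id": 0}).sort("timestamp", -1).limit(50)
    return jsonify(list(docs))

if __name__ == "__main__":
    app.run(port=5000)
```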
Abstract:
This research reports a study of the Portuguese and Spanish university population on patterns of Internet use/abuse, as well as the specific resources used most. A random sample of 206 subjects was drawn from the Autonomous University of Madrid and the Lisbon University Campus, across different degree programmes, of both sexes, and aged between 18 and 40. The instrument used was a questionnaire adapted from Young's (1998) Internet Addiction Test. The main findings suggest that, regarding connection time, the largest share of the sample (46.1%) makes moderate use of the Internet (less than 2 hours a day), but 36.9% may already show some signs of addiction, being connected to the Internet 2 to 5 hours a day. In assessing the use of specific Internet resources, we found that the Web is the most sought-after service, followed by e-mail (62.1% and 57.3%, respectively). We also found that 16.5% of the sample shows values indicative of addiction.
Abstract:
The paper deals with the integration of ROS, in the proprietary environment of the Marchesini Group company, for the control of industrial robotic systems. The basic tools of this open-source software are deeply studied to model a full proprietary Pick and Place manipulator inside it, and to develop custom ROS nodes to calculate trajectories; speaking of which, the URDF format is the standard to represent robots in ROS and the motion planning framework MoveIt offers user-friendly high-level methods. The communication between ROS and the Marchesini control architecture is established using the OPC UA standard; the tasks computed are transmitted offline to the PLC, supervisor controller of the physical robot, because the performances of the protocol don’t allow any kind of active control by ROS. Once the data are completely stored at the Marchesini side, the industrial PC makes the real robot execute a trajectory computed by MoveIt, so that it replicates the behaviour of the simulated manipulator in Rviz. Multiple experiments are performed to evaluate in detail the potential of ROS in the planning of movements for the company proprietary robots. The project ends with a small study regarding the use of ROS as a simulation platform. First, it is necessary to understand how a robotic application of the company can be reproduced in the Gazebo real world simulator. Then, a ROS node extracts information and examines the simulated robot behaviour, through the subscription to specific topics.
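The offline hand-over described above might look roughly like the following sketch, in which a joint trajectory computed by MoveIt is written to PLC variables over OPC UA using the python-opcua client. The node identifiers and the flattening of the trajectory are assumptions for illustration, not the Marchesini setup.

```python
# Hedged sketch: write a precomputed joint trajectory to a PLC over OPC UA.
# Node identifiers and data layout are illustrative assumptions.
from opcua import Client  # python-opcua package

def upload_trajectory(joint_trajectory_points, endpoint="opc.tcp://plc:4840"):
    """Write joint positions to assumed PLC nodes for later offline execution."""
    client = Client(endpoint)
    client.connect()
    try:
        # Flatten the trajectory into a single float array (one row per point).
        positions = [p for point in joint_trajectory_points for p in point]
        client.get_node("ns=2;s=Robot.Trajectory.Positions").set_value(positions)
        # Signal the supervisor PLC that a new trajectory is ready.
        client.get_node("ns=2;s=Robot.Trajectory.Ready").set_value(True)
    finally:
        client.disconnect()

# Example: three points of a 2-joint trajectory (radians).
upload_trajectory([[0.0, 0.5], [0.2, 0.7], [0.4, 0.9]])
```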