835 results for CLARITY center for sensor Web technologies
Abstract:
The SOS (Sensor Observation Service) protocol is an OGC specification within the Sensor Web Enablement (SWE) initiative that provides standardized access to observations and data from heterogeneous sensors. The gvSIG project has opened a research line around SWE, and two SOS client prototypes currently exist, for gvSIG and gvSIG Mobile. The specification used to describe the measurements provided by sensors is Observations & Measurements (O&M), while the sensor metadata (location, ID, measured phenomena, data processing, etc.) is obtained from the SensorML schema. The following set of operations has been implemented: GetCapabilities for the service description, DescribeSensor to access the sensor metadata, and GetObservation to retrieve the observations. In the desktop gvSIG prototype, data coming from the different sensor groups ("offerings") can be accessed by adding them to the map as new layers. The procedures or sensors included in an offering are presented as layer elements that can be plotted on the map. The observations (GetObservation) of these sensors can be retrieved by filtering the data by time interval and by observed phenomenon property. The information can be rendered on the map as charts for easier understanding, with the possibility of comparing data from different sensors. The prototype for the gvSIG Mobile client follows the same philosophy as the desktop client, with each offering becoming a new layer. Sensor observations can be displayed on the mobile device screen, and thematic maps can be produced to ease the interpretation of the data.
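The three operations named above follow the OGC SOS interface; the sketch below is a minimal illustration of how a client might issue them over the KVP (HTTP GET) binding. The endpoint URL, offering and property names are placeholders, and real servers may require the XML/POST binding instead.

```python
# Minimal sketch of the three SOS operations used by the gvSIG prototypes,
# assuming a server that supports the KVP (GET) binding.
# The endpoint URL, offering id and observed property are placeholders.
import requests

SOS_ENDPOINT = "http://example.org/sos"  # hypothetical service

def get_capabilities():
    """Ask the service to describe itself (offerings, procedures, ...)."""
    params = {"service": "SOS", "request": "GetCapabilities"}
    return requests.get(SOS_ENDPOINT, params=params).text

def describe_sensor(procedure_id):
    """Fetch the SensorML metadata (location, ID, measured phenomena, ...)."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "DescribeSensor",
        "procedure": procedure_id,
        "outputFormat": 'text/xml;subtype="sensorML/1.0.1"',
    }
    return requests.get(SOS_ENDPOINT, params=params).text

def get_observation(offering, observed_property, time_interval):
    """Fetch O&M observations filtered by time interval and property."""
    params = {
        "service": "SOS",
        "version": "1.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
        "eventTime": time_interval,  # e.g. "2024-01-01T00:00:00Z/2024-01-02T00:00:00Z"
        "responseFormat": 'text/xml;subtype="om/1.0.0"',
    }
    return requests.get(SOS_ENDPOINT, params=params).text
```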
Abstract:
Space applications demand reliable systems. Autonomic computing defines such reliable systems as self-managing systems. The work reported in this paper combines agent-based and swarm-robotic approaches, leading to swarm-array computing, a novel technique for achieving autonomy in distributed parallel computing systems. Two swarm-array computing approaches, based on swarms of computational resources and on swarms of tasks, are explored. An FPGA is considered as the computing system. The feasibility of the two proposed approaches, which bind the computing system and the task together, is simulated on the SeSAm multi-agent simulator.
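The abstract gives no implementation details, so the following toy sketch (assumed, not from the paper) only illustrates the "swarm of tasks" idea: tasks act as agents that monitor the health of their hosting core and migrate away when it fails.

```python
# Toy illustration (not from the paper) of the "swarm of tasks" idea:
# tasks behave as agents that relocate when their hosting core fails.
import random

class Core:
    def __init__(self, core_id):
        self.core_id = core_id
        self.healthy = True

class Task:
    def __init__(self, name, core):
        self.name = name
        self.core = core

    def step(self, cores):
        # Agent rule: if my core has failed, move to a random healthy core.
        if not self.core.healthy:
            candidates = [c for c in cores if c.healthy]
            if candidates:
                self.core = random.choice(candidates)
                print(f"{self.name} migrated to core {self.core.core_id}")

cores = [Core(i) for i in range(4)]
tasks = [Task(f"task{i}", random.choice(cores)) for i in range(6)]

for tick in range(10):
    if tick == 3:
        cores[0].healthy = False  # inject a fault into one core
    for t in tasks:
        t.step(cores)
```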
Abstract:
Ubiquitous computing aims at providing services to users in everyday environments such as the home. One research theme in this area is that of building capture and access applications, which support information being recorded (captured) during a live experience toward automatically producing documents for review (accessed). The recording demands instrumented environments with devices such as microphones, cameras, sensors and electronic whiteboards. Since each experience is usually related to many others (e.g. several meetings of a project), there is a demand for mechanisms supporting the automatic linking of documents relating to different experiences. In this paper we present original results relating to the integration of our previous efforts in the Infrastructure for Capturing, Accessing, Linking, Storing and Presenting information (CALiSP).
Abstract:
Observational data encodes values of properties associated with a feature of interest, estimated by a specified procedure. For water, the properties are physical parameters such as level, volume, flow and pressure, and concentrations and counts of chemicals, substances and organisms. Water property vocabularies have been assembled at project, agency and jurisdictional level. Organizations such as EPA, USGS, CEH, GA and BoM maintain vocabularies for internal use, and may make them available externally as text files. BODC and MMI have harvested many water vocabularies, alongside others of interest in their domain, formalized the content using SKOS, and published them through web interfaces. Scope is highly variable both within and between vocabularies. Individual items may conflate multiple concerns (e.g. property, instrument, statistical procedure, units). There is significant duplication between vocabularies. Semantic web technologies provide the opportunity both to publish vocabularies more effectively and to achieve harmonization that supports greater interoperability between datasets:
- Models for vocabulary items (property, substance/taxon, process, unit-of-measure, etc.) may be formalized as OWL ontologies, supporting semantic relations between items in related vocabularies;
- By specializing the ontology elements from SKOS concepts and properties, diverse vocabularies may be published through a common interface;
- Properties from standard vocabularies (e.g. OWL, SKOS, PROV-O and VAEM) support mappings between vocabularies having a similar scope;
- Existing items from various sources may be assembled into new virtual vocabularies.
However, there are a number of challenges:
- use of standard properties such as sameAs/exactMatch/equivalentClass requires reasoning support;
- items have been conceptualised as both classes and individuals, complicating the mapping mechanics;
- re-use of items across vocabularies may conflict with expectations concerning URI patterns;
- versioning complicates cross-references and re-use.
This presentation will discuss ways to harness semantic web technologies to publish harmonized vocabularies, and will summarise how many of the challenges may be addressed.
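As a concrete illustration of the harmonization approach described above, the sketch below (all URIs are invented) publishes two water-property items from different vocabularies as SKOS concepts with rdflib and asserts a skos:exactMatch mapping between them.

```python
# Sketch (invented URIs) of harmonizing two water-property vocabulary items
# by publishing them as SKOS concepts and mapping them with skos:exactMatch.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import SKOS, RDF

AGENCY_A = Namespace("http://vocab.example.org/agency-a/property/")
AGENCY_B = Namespace("http://vocab.example.org/agency-b/parameter/")

g = Graph()
g.bind("skos", SKOS)

level_a = AGENCY_A["waterLevel"]
level_b = AGENCY_B["stage_height"]

for item, label in [(level_a, "Water level"), (level_b, "Stage height")]:
    g.add((item, RDF.type, SKOS.Concept))
    g.add((item, SKOS.prefLabel, Literal(label, lang="en")))

# Cross-vocabulary mapping: both items denote the same observed property.
g.add((level_a, SKOS.exactMatch, level_b))

print(g.serialize(format="turtle"))
```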
Abstract:
As the world evolves, organizations are becoming more and more complex, and the need to understand that complexity is increasing as well. From this demand arises organizational engineering, a discipline that emerged with the purpose of making organizations easier to understand by putting into practice the concept of organizational self-awareness, which means that the collaborators who are part of an organization need to understand it and know what their role in it is. The DEMO methodology (Design Engineering Methodology for Organizations) came up with the purpose of representing this organizational self-awareness through the definition and creation of consistent and coherent diagrams. Semantic wikis have features that can help in enterprise modelling. UEAOM (Universal Enterprise Adaptive Organization Model) is a model that allows the specification and dynamic evolution of languages, meta-models, models, and their representations as diagrams and tables. In this project, a system based on UEAOM and Semantic MediaWiki was implemented that allows the graphical creation and editing of diagrams. UEAOM can be divided into the meta-modelling level, where a language is defined, and the modelling level, where instances of classes of that language are created. The system we developed focuses on the modelling level, but takes as a basis the project that focuses on meta-modelling. The DEMO language was used as an example for the implementation and testing of a graphical editor, based on web technologies and SVG and integrated with Semantic MediaWiki, to allow intuitive, coherent and consistent navigation and editing of organization diagrams.
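The project's code is not reproduced here; purely as an illustration of the graphical-editing layer, the sketch below generates the kind of SVG primitive (a labelled rectangle for a model element) that a browser-based diagram editor integrated with Semantic MediaWiki might render. Element names are invented.

```python
# Illustrative sketch (not the project's code) of producing an SVG shape
# for a model element, as a browser-based diagram editor might do.
from xml.sax.saxutils import escape

def transaction_shape(x, y, label):
    """Return an SVG group acting as a placeholder for a DEMO model element."""
    return (
        f'<g transform="translate({x},{y})">'
        f'<rect width="160" height="60" fill="white" stroke="black"/>'
        f'<text x="80" y="35" text-anchor="middle">{escape(label)}</text>'
        f'</g>'
    )

svg = (
    '<svg xmlns="http://www.w3.org/2000/svg" width="400" height="200">'
    + transaction_shape(20, 20, "T01: membership registration")
    + "</svg>"
)
print(svg)
```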
Abstract:
Developing software is still a risky business. After 60 years of experience, this community is still not able to consistently build Information Systems (IS) for organizations with predictable quality, within previously agreed budget and time constraints. Although software is changeable, we are still unable to cope with the amount and complexity of change that organizations demand of their IS. To improve results, developers have followed two alternatives: frameworks that increase productivity but constrain the flexibility of possible solutions, and agile ways of developing software that keep flexibility with fewer upfront commitments. With strict frameworks, specific hacks have to be put in place to get around the framework's construction choices. In time this leads to inconsistent architectures that are harder to maintain due to incomplete documentation and human-resources turnover. The main goal of this work is to create a new way to develop flexible IS for organizations, using web technologies, in a faster, better and cheaper way that is better suited to handling organizational change. To do so we propose an adaptive object model that uses a new ontology for data and action with strict normalizing rules. These rules should bound the effects of changes, which can then be better tested and therefore corrected. Interfaces are built with templates of resources that can be reused and extended in a flexible way. The "state of the world" for each IS is determined by all production and coordination acts that agents have performed over time, even those performed by external systems. When bugs are found during maintenance, their past cascading effects can be checked through simulation, re-running the log of transaction acts over time and checking the results against previous records. This work implements a prototype with part of the proposed system in order to make a preliminary assessment of its feasibility and limitations.
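A minimal sketch of the replay idea follows (assumed, not the prototype's actual code): the "state of the world" is rebuilt by re-running the logged coordination and production acts, so the cascading effects of a corrected rule can be re-checked against recorded results.

```python
# Minimal sketch (assumed) of rebuilding system state by replaying the log
# of production/coordination acts, then inspecting the reconstructed state.
from dataclasses import dataclass, field

@dataclass
class Act:
    agent: str
    kind: str          # e.g. "request", "promise", "execute", "accept"
    entity: str
    payload: dict

@dataclass
class World:
    entities: dict = field(default_factory=dict)

    def apply(self, act: Act):
        # Each act updates the affected entity; the update rules live here,
        # so a fixed rule can be re-applied to history by replaying the log.
        state = self.entities.setdefault(act.entity, {})
        state.update(act.payload)
        state["last_act"] = act.kind

def replay(log):
    world = World()
    for act in log:
        world.apply(act)
    return world

log = [
    Act("customer", "request", "order-1", {"qty": 2}),
    Act("clerk", "promise", "order-1", {}),
    Act("clerk", "execute", "order-1", {"shipped": True}),
]
print(replay(log).entities)
```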
Abstract:
During a petroleum well drilling operation, many mechanical and hydraulic parameters are monitored by an instrumentation system installed in the rig, called a mud-logging system. These sensors, distributed across the rig, monitor different operating parameters such as weight on the hook and drillstring rotation. These measurements are known as mud-logging records and allow the entire drilling process to be followed online for well-monitoring purposes. However, in most cases these data are stored without taking advantage of their full potential. On the other hand, making use of the mud-logging data requires analysis and interpretation, which is not an easy task because of the large volume of information involved. This paper presents a Support Vector Machine (SVM) used to automatically classify the drilling operation stages through the analysis of some mud-logging parameters. In order to validate the results of the SVM technique, it was compared to a classification produced by a Petroleum Engineering expert. © 2006 IEEE.
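The paper's actual features and class labels are not listed in the abstract; the sketch below uses synthetic data and an assumed feature set only to show the general shape of such a classifier with scikit-learn: mud-logging parameters in, drilling-stage label out.

```python
# Sketch with synthetic data (the paper's real features and labels differ):
# classify drilling-operation stages from mud-logging parameters with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Assumed columns: hook load, drillstring rotation (rpm), standpipe pressure.
X = rng.normal(size=(600, 3))
y = rng.integers(0, 3, size=600)   # 0=drilling, 1=tripping, 2=circulating (illustrative)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```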
Abstract:
Motivated by rising drilling operation costs, the oil industry has shown a trend towards real-time measurements and control. In this scenario, drilling control becomes a challenging problem for the industry, especially because of the difficulty of modeling the parameters involved. One of the drill-bit performance evaluators, the Rate of Penetration (ROP), has been used in the literature as a drilling control parameter. However, the relationships between the operational variables affecting the ROP are complex and not easily modeled. This work presents a neuro-genetic adaptive controller to address this problem. It is based on the Auto-Regressive with Extra Input Signals (ARX) model to accomplish the system identification and on a Genetic Algorithm (GA) to provide robust control of the ROP. Results of simulations run over real offshore oil field data, consisting of seven wells drilled with equal-diameter bits, are provided. © 2006 IEEE.
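As a hedged illustration of the identification step only, the sketch below fits a first-order ARX model by least squares on synthetic data; the paper's actual model orders, input signals and GA-based controller are not reproduced.

```python
# Sketch (synthetic data, simplified orders): least-squares fit of an ARX
# model y[k] = a*y[k-1] + b*u[k-1], the identification step that a GA-based
# ROP controller would build on.
import numpy as np

rng = np.random.default_rng(1)
n = 500
u = rng.uniform(0, 1, size=n)      # e.g. a normalized operational input
y = np.zeros(n)                    # e.g. rate of penetration
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()

# Regressor matrix: one output lag and one input lag.
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print("estimated a, b:", theta)    # should be close to 0.8, 0.5
```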
Abstract:
Includes bibliography
Abstract:
This final-year project addresses the update and refactoring of the Hecaton application. This application allows industrial installations to be monitored and acted upon remotely through a web interface. To do so, it uses sensors and actuators which, connected through a data-acquisition device to a server computer system, make it possible to obtain, manipulate and store the data and events received. Hecaton has been developed entirely using free software. In addition, the system can be customized, which makes it usable in all kinds of scenarios, with the user defining the operating rules. This work constitutes the fourth development cycle, as the application has been created and extended in three previous projects. In this latest development, the technologies and tools that make up the application have been updated. Special emphasis has been placed on redesigning the web interface, adopting the latest web technologies that allow it to operate dynamically. In addition, some design errors have been corrected and new tools for managing the software project have been introduced. It is therefore a software refactoring exercise in which special attention has been paid to obtaining an up-to-date project that uses current development methodologies and can be updated in the future.
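Since the abstract notes that the user defines the operating rules, the snippet below sketches (with invented names, not Hecaton's code) how user-defined threshold rules might be evaluated against incoming sensor readings to trigger actuator commands.

```python
# Illustrative sketch (not Hecaton's code): evaluate user-defined rules
# against sensor readings and emit actuator commands.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    sensor: str
    condition: Callable[[float], bool]
    actuator: str
    command: str

rules = [
    Rule("temperature", lambda v: v > 70.0, "fan", "on"),
    Rule("temperature", lambda v: v <= 70.0, "fan", "off"),
]

def on_reading(sensor: str, value: float):
    """Called by the data-acquisition layer for every new reading."""
    for rule in rules:
        if rule.sensor == sensor and rule.condition(value):
            print(f"actuator {rule.actuator} -> {rule.command}")

on_reading("temperature", 73.2)   # prints: actuator fan -> on
```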
Abstract:
This paper uses folksonomies and fuzzy clustering algorithms to establish results related by term relevance. It proposes a meta search engine with the ability to search for vaguely associated terms and aggregate them into several meaningful cluster categories. The potential of fuzzy weblog extraction is illustrated using a simple example, and added value and possible future studies are discussed in the conclusion.
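As an illustration of the clustering step (not the paper's implementation), the sketch below runs a small fuzzy c-means loop over toy term vectors, producing the soft memberships that let a vaguely associated term fall into several cluster categories at once.

```python
# Sketch (not the paper's code): minimal fuzzy c-means over term vectors,
# producing soft cluster memberships for vaguely associated search terms.
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)          # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))         # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return U, centers

X = np.random.default_rng(1).normal(size=(30, 5))   # toy term-feature vectors
memberships, centers = fuzzy_c_means(X)
print(memberships[0])   # soft membership of the first term across clusters
```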
Abstract:
In parallel to the effort of creating Linked Open Data for the World Wide Web, there are a number of projects aimed at developing the same technologies in the context of closed environments such as private enterprises. In this paper, we present the results of research on interlinking structured data for use in Idea Management Systems - a still rare breed of knowledge management systems dedicated to innovation management. In our study, we show the process of extending an ontology that initially covers only the Idea Management System structure towards linking with distributed enterprise data and public data using Semantic Web technologies. Furthermore, we point out how the established links can help to solve key problems of contemporary Idea Management Systems.
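As a hedged sketch of the linking idea (the ontology terms and URIs below are invented, not the paper's), an idea record is typed with an idea-ontology class and connected both to an internal enterprise record and to a public Linked Data resource.

```python
# Sketch (invented ontology terms and URIs): link an Idea Management System
# record to internal enterprise data and to a public Linked Data resource.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, RDFS

IDEA = Namespace("http://example.org/idea-ontology#")   # hypothetical ontology
ERP = Namespace("http://intranet.example.org/erp/")     # hypothetical enterprise data
DBR = Namespace("http://dbpedia.org/resource/")

g = Graph()
idea = Namespace("http://example.org/ideas/")["42"]

g.add((idea, RDF.type, IDEA.Idea))
g.add((idea, RDFS.label, Literal("Reduce packaging waste", lang="en")))
g.add((idea, IDEA.concernsProduct, ERP["product/P-1001"]))  # enterprise link
g.add((idea, IDEA.relatedConcept, DBR["Packaging_waste"]))  # public data link

print(g.serialize(format="turtle"))
```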
Abstract:
This work describes the design and application of multimedia content for web-technologies-based training in minimally invasive surgery (MIS). The chosen strategy makes it possible to identify the deficiencies of current training methods so that new multimedia content can cover them. The study concludes with the definition of three types of multimedia content, according to their degree of development and didactic objectives: didactic resources are basic contents, such as videos or documents, that can be enhanced with contributions from users; case reports and didactic units, on the other hand, have a defined structure. Didactic resources and case reports provide informal training, while didactic units are included in more regulated training.