164 results for Reusing


Relevance:

10.00%

Publisher:

Abstract:

Abstract: Web 2.0 applications enabled users to classify information resources using their own vocabularies. The bottom-up nature of these user-generated classification systems has turned them into interesting knowledge sources, since they provide a rich terminology generated by potentially large user communities. Previous research has shown that it is possible to elicit some emergent semantics from the aggregation of individual classifications in these systems. However, the generation of ontologies from them is still an open research problem. In this thesis we address the problem of how to tap into user-generated classification systems for building domain ontologies. Our objective is to design a method to develop domain ontologies from user-generated classification systems. To do so, we rely on ontologies in the Web of Data to formalize the semantics of the knowledge collected from the classification system. Current ontology development methodologies have recognized the importance of reusing knowledge from existing resources; thus, our work is framed within the NeOn methodology scenario for building ontologies by reusing and reengineering non-ontological resources. The main contributions of this work are: (i) an integrated method to develop ontologies from user-generated classification systems, in which we extract a domain terminology from the classification system and then formalize the semantics of this terminology by reusing ontologies in the Web of Data; (ii) the identification and adaptation of existing techniques for implementing the activities in the method so that they fulfill the requirements of each activity; and (iii) a novel study of emergent semantics in user-generated lists.

Resumen: La web 2.0 permitió a los usuarios clasificar recursos de información usando su propio vocabulario. Estos sistemas de clasificación generados por usuarios son recursos interesantes para la extracción de conocimiento debido principalmente a que proveen una extensa terminología generada por grandes comunidades de usuarios. Se ha demostrado en investigaciones previas que es posible obtener una semántica emergente de estos sistemas. Sin embargo, la generación de ontologías a partir de ellos es todavía un problema de investigación abierto. Esta tesis trata el problema de cómo aprovechar los sistemas de clasificación generados por usuarios en la construcción de ontologías de dominio. Así, el objetivo de la tesis es diseñar un método para desarrollar ontologías de dominio a partir de sistemas de clasificación generados por usuarios. El método propuesto reutiliza conceptualizaciones existentes en ontologías publicadas en la Web de Datos para formalizar la semántica del conocimiento que se extrae del sistema de clasificación. Por tanto, este trabajo está enmarcado dentro del escenario para desarrollar ontologías mediante la reutilización y reingeniería de recursos no ontológicos que se ha definido en la Metodología NeOn. Las principales contribuciones de este trabajo son: (i) un método integrado para desarrollar una ontología de dominio a partir de sistemas de clasificación generados por usuarios, en el que se extrae una terminología de dominio del sistema de clasificación y posteriormente se formaliza su semántica reutilizando ontologías en la Web de Datos; (ii) la identificación y adaptación de un conjunto de técnicas para implementar las actividades propuestas en el método de tal manera que puedan cumplir automáticamente los requerimientos de cada actividad; y (iii) un novedoso estudio acerca de la semántica emergente en las listas generadas por usuarios en la Web.
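
The pipeline the abstract outlines, extracting a terminology from aggregated user tags and then anchoring its terms to ontologies published in the Web of Data, can be illustrated with a minimal Python sketch. This is not the thesis' method: the frequency threshold, the SPARQL endpoint (DBpedia is used here only as one example of the Web of Data) and the exact-label matching are all illustrative assumptions.

```python
# Hypothetical sketch: aggregate user tags into a candidate terminology and try to
# anchor each frequent term to classes published in the Web of Data.
# The endpoint, threshold and matching strategy are illustrative assumptions.
from collections import Counter
import requests  # pip install requests

SPARQL_ENDPOINT = "https://dbpedia.org/sparql"  # any public SPARQL endpoint would do

def extract_terminology(tagged_resources, min_freq=5):
    """Aggregate individual tag assignments into a frequency-ranked terminology."""
    counts = Counter(tag.strip().lower()
                     for tags in tagged_resources.values()
                     for tag in tags)
    return [term for term, freq in counts.most_common() if freq >= min_freq]

def find_candidate_classes(term, limit=5):
    """Look up ontology classes whose label matches the term exactly (naive matching)."""
    query = f"""
        PREFIX owl:  <http://www.w3.org/2002/07/owl#>
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?cls WHERE {{
            ?cls a owl:Class ; rdfs:label ?label .
            FILTER (lcase(str(?label)) = "{term}")
        }} LIMIT {limit}
    """
    resp = requests.get(SPARQL_ENDPOINT,
                        params={"query": query,
                                "format": "application/sparql-results+json"},
                        timeout=30)
    resp.raise_for_status()
    return [b["cls"]["value"] for b in resp.json()["results"]["bindings"]]

if __name__ == "__main__":
    tags = {"res1": ["Jazz", "music"], "res2": ["music", "vinyl"], "res3": ["music"]}
    for term in extract_terminology(tags, min_freq=2):
        print(term, find_candidate_classes(term))
```

In practice the formalization step would need more robust matching than the exact label comparison above (e.g., lexical normalization and disambiguation among candidate classes).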

Relevance:

10.00%

Publisher:

Abstract:

We describe the current status of and provide performance results for a prototype compiler of Prolog to C, ciaocc. ciaocc is novel in that it is designed to accept different kinds of high-level information, typically obtained via an automatic analysis of the initial Prolog program and expressed in a standardized language of assertions. This information is used to optimize the resulting C code, which is then processed by an off-the-shelf C compiler. The basic translation process essentially mimics the unfolding of a bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. This is facilitated by a flexible design of the instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: predicates already written in C, data definitions, memory management routines and areas, etc., as well as mixing emulated bytecode with native code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.
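
The translation scheme described above, unfolding a bytecode emulator with respect to the bytecode of one particular program, is in essence a partial-evaluation step. The toy Python sketch below (with an invented three-opcode stack bytecode, not Ciao's instruction set) only illustrates the idea: the specialized code reuses the same per-instruction building blocks as the emulator but drops the dispatch loop.

```python
# Toy illustration (not ciaocc): "unfolding" a bytecode emulator with respect to a
# fixed program. The bytecode and opcodes are invented for the example; the point
# is that the run-time dispatch disappears in the specialized code, while the
# per-instruction machinery is reused as-is.

PROGRAM = [("push", 2), ("push", 3), ("add", None), ("push", 4), ("mul", None)]

def emulate(program):
    """Generic emulator: dispatch on each instruction at run time."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

def unfold(program):
    """Specialize the emulator for one program: emit straight-line code with no
    dispatch loop, reusing the same per-instruction snippets."""
    lines = ["def specialized():", "    stack = []"]
    for op, arg in program:
        if op == "push":
            lines.append(f"    stack.append({arg})")
        elif op == "add":
            lines.append("    b, a = stack.pop(), stack.pop(); stack.append(a + b)")
        elif op == "mul":
            lines.append("    b, a = stack.pop(), stack.pop(); stack.append(a * b)")
    lines.append("    return stack[-1]")
    return "\n".join(lines)

if __name__ == "__main__":
    print(emulate(PROGRAM))           # 20, via the dispatch loop
    namespace = {}
    exec(unfold(PROGRAM), namespace)  # 20, via the unfolded straight-line code
    print(namespace["specialized"]())
```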

Relevance:

10.00%

Publisher:

Abstract:

We describe the current status of and provide preliminary performance results for a compiler of Prolog to C. The compiler is novel in that it is designed to accept different kinds of high-level information (typically obtained via an analysis of the initial Prolog program and expressed in a standardized language of assertions) and use this information to optimize the resulting C code, which is then further processed by an off-the-shelf C compiler. The basic translation process used essentially mimics an unfolding of a C-coded bytecode emulator with respect to the particular bytecode corresponding to the Prolog program. Optimizations are then applied to this unfolded program. This is facilitated by a more flexible design of the bytecode instructions and their lower-level components. This approach allows reusing a sizable amount of the machinery of the bytecode emulator: ancillary pieces of C code, data definitions, memory management routines and areas, etc., as well as mixing bytecode-emulated code with natively compiled code in a relatively straightforward way. We report on the performance of programs compiled by the current version of the system, both with and without analysis information.

Relevance:

10.00%

Publisher:

Abstract:

This PhD thesis contributes to the problem of resource and service discovery in the context of the composable web. In the current web, mashup technologies allow developers to reuse services and contents to build new web applications. However, developers face a problem of information flood when searching for appropriate services or resources to combine. To contribute to overcoming this problem, a framework is defined for the discovery of services and resources. In this framework, discovery is performed at three levels: content, service and agent.

The content level involves the information available in web resources. The web follows the Representational State Transfer (REST) architectural style, in which resources are returned as representations from servers to clients. These representations usually employ the HyperText Markup Language (HTML), which, along with Cascading Style Sheets (CSS), describes the markup employed to render representations in a web browser. Although the use of Semantic Web standards such as the Resource Description Framework (RDF) makes this architecture suitable for automatic processes to use the information present in web resources, these standards are too often not employed, so automation must rely on processing HTML. This process, often referred to as Screen Scraping in the literature, constitutes content discovery in the proposed framework. At this level, discovery rules indicate how the different pieces of data in resources' representations are mapped onto semantic entities. By processing discovery rules on web resources, semantically described contents can be obtained from them.

The service level involves the operations that can be performed on the web. The current web allows users to perform different tasks such as search, blogging, e-commerce, or social networking. To describe the possible services in RESTful architectures, a high-level feature-oriented service methodology is proposed at this level. This lightweight description framework allows defining service discovery rules to identify operations in interactions with REST resources. Discovery is thus performed by applying discovery rules to contents discovered in REST interactions, in a novel process called service probing. Service discovery can also be performed by modelling services as contents, i.e., by retrieving Application Programming Interface (API) documentation and API listings in service registries such as ProgrammableWeb. For this, a unified model for composable components in Mashup-Driven Development (MDD) has been defined after the analysis of service repositories from the web.

The agent level involves the orchestration of the discovery of services and contents. At this level, agent rules allow specifying behaviours for crawling and executing services, which results in the fulfilment of a high-level goal. Agent rules are plans that introspect the discovered data and services from the web, together with the knowledge present in service and content discovery rules, in order to anticipate the contents and services to be found on specific resources. By defining plans, an agent can be configured to target specific resources.

The discovery framework has been evaluated on different scenarios, each one covering different levels of the framework. The Contenidos a la Carta project deals with the mashing-up of news from electronic newspapers, and the framework was used for the discovery and extraction of pieces of news from the web.
Similarly, the Resulta and VulneraNET projects cover the discovery of ideas and of security knowledge on the web, respectively. The service level is covered in the OMELETTE project, where mashup components such as services and widgets are discovered from component repositories on the web. The agent level is applied to the crawling of services and news in these scenarios, highlighting how the semantic description of rules and extracted data can provide complex behaviours and orchestrations of tasks on the web.

The main contributions of the thesis are: a unified discovery framework, which allows configuring agents to perform automated tasks; a scraping ontology defined for the construction of mappings for scraping web resources; a novel first-order logic rule induction algorithm for the automated construction and maintenance of these mappings out of the visual information in web resources; and a common unified model for the discovery of services, which allows sharing service descriptions. Future work comprises the further extension of service probing, resource ranking, the extension of the Scraping Ontology, extensions of the agent model, and constructing a base of discovery rules.

Resumen: La presente tesis doctoral contribuye al problema de descubrimiento de servicios y recursos en el contexto de la web combinable. En la web actual, las tecnologías de combinación de aplicaciones permiten a los desarrolladores reutilizar servicios y contenidos para construir nuevas aplicaciones web. Pese a todo, los desarrolladores afrontan un problema de saturación de información a la hora de buscar servicios o recursos apropiados para su combinación. Para contribuir a la solución de este problema, se propone un marco de trabajo para el descubrimiento de servicios y recursos. En este marco, se definen tres capas sobre las que se realiza descubrimiento a nivel de contenido, servicio y agente.

El nivel de contenido involucra a la información disponible en recursos web. La web sigue el estilo arquitectónico Representational State Transfer (REST), en el que los recursos son devueltos como representaciones por parte de los servidores a los clientes. Estas representaciones normalmente emplean el lenguaje de marcado HyperText Markup Language (HTML), que, unido al estándar Cascading Style Sheets (CSS), describe el marcado empleado para mostrar representaciones en un navegador web. Aunque el uso de estándares de la web semántica como Resource Description Framework (RDF) hace apta esta arquitectura para su uso por procesos automatizados, estos estándares no son empleados en muchas ocasiones, por lo que cualquier automatización debe basarse en el procesado del marcado HTML. Este proceso, normalmente conocido como Screen Scraping en la literatura, es el descubrimiento de contenidos en el marco de trabajo propuesto. En este nivel, un conjunto de reglas de descubrimiento indica cómo los diferentes datos en las representaciones de recursos se corresponden con entidades semánticas. Al procesar estas reglas sobre recursos web, pueden obtenerse contenidos descritos semánticamente.

El nivel de servicio involucra las operaciones que pueden ser llevadas a cabo en la web. Actualmente, los usuarios de la web pueden realizar diversas tareas como búsqueda, blogging, comercio electrónico o redes sociales.
Para describir los posibles servicios en arquitecturas REST, se propone en este nivel una metodología de alto nivel para descubrimiento de servicios orientada a funcionalidades. Este marco de descubrimiento ligero permite definir reglas de descubrimiento de servicios para identificar operaciones en interacciones con recursos REST. Este descubrimiento es por tanto llevado a cabo al aplicar las reglas de descubrimiento sobre contenidos descubiertos en interacciones REST, en un nuevo procedimiento llamado sondeo de servicios. Además, el descubrimiento de servicios puede ser llevado a cabo mediante el modelado de servicios como contenidos, es decir, mediante la recuperación de documentación de Application Programming Interfaces (APIs) y listas de APIs en registros de servicios como ProgrammableWeb. Para ello, se ha definido un modelo unificado de componentes combinables para Mashup-Driven Development (MDD) tras el análisis de repositorios de servicios de la web.

El nivel de agente involucra la orquestación del descubrimiento de servicios y contenidos. En este nivel, las reglas de nivel de agente permiten especificar comportamientos para el rastreo y ejecución de servicios, lo que permite la consecución de metas de mayor nivel. Las reglas de los agentes son planes que permiten la introspección sobre los datos y servicios descubiertos, así como sobre el conocimiento presente en las reglas de descubrimiento de servicios y contenidos para anticipar contenidos y servicios por encontrar en recursos específicos de la web. Mediante la definición de planes, un agente puede ser configurado para descubrir recursos específicos.

El marco de descubrimiento ha sido evaluado sobre diferentes escenarios, cada uno cubriendo distintos niveles del marco. El proyecto Contenidos a la Carta trata de la combinación de noticias de periódicos digitales, y en él el framework se ha empleado para el descubrimiento y extracción de noticias de la web. De manera análoga, en los proyectos Resulta y VulneraNET se ha llevado a cabo un descubrimiento de ideas y de conocimientos de seguridad, respectivamente. El nivel de servicio se cubre en el proyecto OMELETTE, en el que componentes combinables como servicios y widgets se descubren en repositorios de componentes de la web. El nivel de agente se aplica al rastreo de servicios y noticias en estos escenarios, mostrando cómo la descripción semántica de reglas y datos extraídos permite proporcionar comportamientos complejos y orquestaciones de tareas en la web.

Las principales contribuciones de la tesis son el marco de trabajo unificado para descubrimiento, que permite configurar agentes para realizar tareas automatizadas. Además, una ontología de extracción ha sido definida para la construcción de correspondencias y extraer información de recursos web. Asimismo, un algoritmo para la inducción de reglas de lógica de primer orden se ha definido para la construcción y el mantenimiento de estas correspondencias a partir de la información visual de recursos web. Adicionalmente, se ha definido un modelo común y unificado para el descubrimiento de servicios que permite la compartición de descripciones de servicios. Como trabajos futuros se considera la extensión del sondeo de servicios, clasificación de recursos, extensión de la ontología de extracción y la construcción de una base de reglas de descubrimiento.
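
As a rough illustration of the content-level discovery rules described above (a mapping from pieces of an HTML representation onto semantic entities), the following Python sketch applies a rule made of CSS selectors to a page and emits typed records. The HTML structure, selectors and schema.org properties are invented for the example; in the thesis such rules are expressed against the scraping ontology rather than as Python dictionaries.

```python
# Minimal sketch of a content-level discovery rule: a scope selector plus a mapping
# from sub-selectors to vocabulary properties. Selectors, class names and property
# URIs below are illustrative assumptions.
from bs4 import BeautifulSoup  # pip install beautifulsoup4

NEWS_RULE = {
    "scope": "article.news-item",                 # one semantic entity per match
    "type": "http://schema.org/NewsArticle",
    "properties": {
        "http://schema.org/headline": "h2 a",
        "http://schema.org/datePublished": "time",
        "http://schema.org/author": ".byline",
    },
}

def apply_rule(html, rule):
    """Apply a discovery rule to an HTML representation and return typed records."""
    soup = BeautifulSoup(html, "html.parser")
    entities = []
    for node in soup.select(rule["scope"]):
        record = {"@type": rule["type"]}
        for prop, selector in rule["properties"].items():
            target = node.select_one(selector)
            if target is not None:
                record[prop] = target.get_text(strip=True)
        entities.append(record)
    return entities

if __name__ == "__main__":
    sample = """
    <article class="news-item">
      <h2><a href="/n/1">Budget approved</a></h2>
      <time>2012-05-04</time><span class="byline">Agencies</span>
    </article>"""
    print(apply_rule(sample, NEWS_RULE))
```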

Relevance:

10.00%

Publisher:

Abstract:

Provenance plays a major role when understanding and reusing the methods applied in a scientific experiment, as it provides a record of inputs, the processes carried out and the use and generation of intermediate and final results. In the specific case of in-silico scientific experiments, a large variety of scientific workflow systems (e.g., Wings, Taverna, Galaxy, Vistrails) have been created to support scientists. All of these systems produce some sort of provenance about the executions of the workflows that encode scientific experiments. However, provenance is normally recorded at a very low level of detail, which complicates the understanding of what happened during execution. In this paper we propose an approach to automatically obtain abstractions from low-level provenance data by finding common workflow fragments in workflow execution provenance and relating them to templates. We have tested our approach with a dataset of workflows published by the Wings workflow system. Our results show that by using these kinds of abstractions we can highlight the most common abstract methods used in the executions of a repository, relating different runs and workflow templates with each other.
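
As a rough, simplified illustration of surfacing common fragments from execution provenance, the sketch below linearizes each run into a sequence of step names and counts the step subsequences shared across runs. The real approach works on provenance graphs and relates fragments back to workflow templates; the n-gram counting here is only an assumption-laden stand-in to convey the flavour.

```python
# Rough illustration only: count step subsequences (n-grams) that recur across
# linearized execution traces as a crude proxy for common workflow fragments.
from collections import Counter
from itertools import islice

def ngrams(seq, n):
    return zip(*(islice(seq, i, None) for i in range(n)))

def common_fragments(runs, size=2, min_runs=2):
    """Return step subsequences of a given size that occur in at least `min_runs` runs."""
    support = Counter()
    for steps in runs.values():
        for frag in set(ngrams(steps, size)):
            support[frag] += 1
    return {frag: count for frag, count in support.items() if count >= min_runs}

if __name__ == "__main__":
    runs = {
        "run1": ["fetch", "clean", "align", "plot"],
        "run2": ["fetch", "clean", "align", "report"],
        "run3": ["fetch", "filter", "align", "plot"],
    }
    # ('fetch', 'clean'), ('clean', 'align') and ('align', 'plot') each appear in two runs
    print(common_fragments(runs, size=2, min_runs=2))
```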

Relevance:

10.00%

Publisher:

Abstract:

Biomedical ontologies are key elements for building up the Life Sciences Semantic Web. Reusing and building biomedical ontologies requires flexible and versatile tools to manipulate them efficiently, in particular for enriching their axiomatic content. The Ontology Pre-Processor Language (OPPL) is an OWL-based language for automating the changes to be performed in an ontology. OPPL augments the ontologists' toolbox by providing a more efficient, and less error-prone, mechanism for enriching a biomedical ontology than that obtained by a manual treatment.

Results: We present OPPL-Galaxy, a wrapper for using OPPL within Galaxy. The functionality delivered by OPPL (i.e. automated ontology manipulation) can be combined with the tools and workflows devised within the Galaxy framework, resulting in an enhancement of OPPL. Use cases are provided in order to demonstrate OPPL-Galaxy's capability for enriching, modifying and querying biomedical ontologies.

Conclusions: Coupling OPPL-Galaxy with other bioinformatics tools of the Galaxy framework results in a system that is more than the sum of its parts. OPPL-Galaxy opens a new dimension of analyses and exploitation of biomedical ontologies, including automated reasoning, paving the way towards advanced biological data analyses.

Relevance:

10.00%

Publisher:

Abstract:

Software testing is a key aspect of software reliability and quality assurance in a context where software development constantly has to overcome mammoth challenges in a continuously changing environment. One of the characteristics of software testing is that it has a large intellectual capital component and can thus benefit from the use of the experience gained from past projects. Software testing can, then, potentially benefit from solutions provided by the knowledge management discipline. There are in fact a number of proposals concerning effective knowledge management related to several software engineering processes.

Objective: We defend the use of a lessons learned system for software testing. The reason is that such a system is an effective knowledge management resource enabling testers and managers to take advantage of the experience locked away in the brains of the testers. To do this, the experience has to be gathered, disseminated and reused.

Method: After analyzing the proposals for managing software testing experience, significant weaknesses have been detected in the current systems of this type. The architectural model proposed here for lessons learned systems is designed to try to avoid these weaknesses. This model (i) defines the structure of the software testing lessons learned; (ii) sets up procedures for lessons learned management; and (iii) supports the design of software tools to manage the lessons learned.

Results: The result is a different approach, based on the management of the lessons learned that software testing engineers gather from everyday experience, with two basic goals: usefulness and applicability.

Conclusion: The architectural model proposed here lays the groundwork to overcome the obstacles to sharing and reusing experience gained in software testing and test management. As such, it provides guidance for developing software testing lessons learned systems.
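
A minimal sketch of the kind of artefacts such a lessons learned system manages is given below: a structured lesson record and a simple keyword-based retrieval procedure. The field names and the matching heuristic are assumptions for illustration; they are not the architectural model proposed in the paper.

```python
# Illustrative sketch only: one plausible shape for a software-testing lesson
# learned record and a keyword-based retrieval procedure.
from dataclasses import dataclass, field

@dataclass
class Lesson:
    title: str
    context: str            # e.g. project type, test level, technology
    problem: str
    recommendation: str
    tags: set = field(default_factory=set)

class LessonStore:
    def __init__(self):
        self._lessons = []

    def add(self, lesson):
        self._lessons.append(lesson)

    def search(self, *keywords):
        """Return lessons whose tags or context mention all the given keywords."""
        keywords = {k.lower() for k in keywords}
        return [l for l in self._lessons
                if keywords <= ({t.lower() for t in l.tags}
                                | set(l.context.lower().split()))]

if __name__ == "__main__":
    store = LessonStore()
    store.add(Lesson("Flaky UI tests", "web regression suite",
                     "Timing-dependent selectors broke nightly runs",
                     "Wait on application state, not on fixed delays",
                     tags={"regression", "ui"}))
    for lesson in store.search("regression"):
        print(lesson.title, "->", lesson.recommendation)
```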

Relevance:

10.00%

Publisher:

Abstract:

The problem of recurring concepts in data stream classification is a special case of concept drift where concepts may reappear. Although several existing methods are able to learn in the presence of concept drift, few consider contextual information when tracking recurring concepts. Nevertheless, in many real-world scenarios context information is available and can be exploited to improve existing approaches in the detection or even anticipation of recurring concepts. In this work, we propose the extension of existing approaches to deal with the problem of recurring concepts by reusing previously learned decision models in situations where concepts reappear. The different underlying concepts are identified using an existing drift detection method, based on the error rate of the learning process. A method to associate context information and learned decision models is proposed to improve the adaptation to recurring concepts. The method also addresses the challenge of retrieving the most appropriate concept for a particular context. Finally, to deal with situations of memory scarcity, an intelligent strategy to discard models is proposed. The experiments conducted so far, using synthetic and real datasets, show promising results and make it possible to analyze the trade-off between the accuracy gains and the storage cost of the learned models.
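
The following simplified Python sketch conveys the general idea of reusing previously learned models when concepts recur: models are stored in a pool keyed by the context in which they were learned, a windowed error rate acts as a crude drift signal, and a stored model is reused whenever its context reappears. The drift threshold, the context encoding and the use of scikit-learn's SGDClassifier as the incremental learner are assumptions, not the method evaluated in the paper.

```python
# Simplified sketch (not the paper's method): a pool of incremental models keyed by
# context, reused on recurrence, with a windowed error rate as a crude drift signal.
from collections import deque
from sklearn.linear_model import SGDClassifier  # pip install scikit-learn

class ContextAwarePool:
    def __init__(self, classes, window=200, error_threshold=0.35):
        self.classes = classes
        self.window = deque(maxlen=window)      # recent 0/1 prediction errors
        self.error_threshold = error_threshold
        self.models = {}                        # context -> incremental model
        self.context = None
        self.current = None

    def observe(self, x, y, context):
        """Process one labelled example together with its context information."""
        if context != self.context:             # a concept (re)appears with its context
            self.context = context
            self.current = self.models.setdefault(context, SGDClassifier())
            self.window.clear()
        if hasattr(self.current, "coef_"):      # model already initialised
            self.window.append(int(self.current.predict([x])[0] != y))
        self.current.partial_fit([x], [y], classes=self.classes)
        full = len(self.window) == self.window.maxlen
        if full and sum(self.window) / len(self.window) > self.error_threshold:
            # Drift within this context: discard the degraded model and relearn.
            self.current = self.models[context] = SGDClassifier()
            self.window.clear()

if __name__ == "__main__":
    import random
    pool = ContextAwarePool(classes=[0, 1])
    for t in range(1000):
        ctx = "day" if (t // 250) % 2 == 0 else "night"     # context alternates
        x = [random.random(), random.random()]
        y = int(x[0] > 0.5) if ctx == "day" else int(x[1] > 0.5)
        pool.observe(x, y, ctx)
    print(sorted(pool.models))   # one stored model per context, reused when it recurs
```

A real system would also bound the pool size and match contexts approximately rather than by exact key, which is where the model-discarding strategy mentioned in the abstract comes in.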

Relevance:

10.00%

Publisher:

Abstract:

While workflow technology has gained momentum in the last decade as a means for specifying and enacting computational experiments in modern science, reusing and repurposing existing workflows to build new scientific experiments is still a daunting task. This is partly due to the difficulty that scientists experience when attempting to understand existing workflows, which contain several data preparation and adaptation steps in addition to the scientifically significant analysis steps. One way to tackle the understandability problem is to provide abstractions that give a high-level view of the activities undertaken within workflows. As a first step towards such abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from the Taverna and Wings systems. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities that are observed in workflows (data-oriented motifs), and (ii) the different manners in which activities are implemented within workflows (workflow-oriented motifs). These motifs can be useful for informing workflow designers about good and bad practices for workflow development, for informing the design of automated tools for the generation of workflow abstractions, and so on.

Relevance:

10.00%

Publisher:

Abstract:

Information integration is a very important topic. Reusing knowledge and having common representations have been, and still are, an active research topic in the process systems community. However, only conventional (structural) models have been dealt with so far. In this paper, the issue of integration is addressed for two different types of knowledge: functional and structural. Functional representation and analysis have proved very useful, but they are still developed and presented in complete isolation from the classic structural description of the process. This paper presents an architecture to integrate both representations.

Relevance:

10.00%

Publisher:

Abstract:

Most data stream classification techniques assume that the underlying feature space is static. However, in real-world applications the set of features and their relevance to the target concept may change over time. In addition, when the underlying concepts reappear, reusing previously learnt models can enhance the learning process in terms of accuracy and processing time at the expense of manageable memory consumption. In this paper, we propose mining recurring concepts in a dynamic feature space (MReC-DFS), a data stream classification system to address the challenges of learning recurring concepts in a dynamic feature space while simultaneously reducing the memory cost associated with storing past models. MReC-DFS is able to detect and adapt to concept changes using the performance of the learning process and contextual information. To handle recurring concepts, stored models are combined in a dynamically weighted ensemble. Incremental feature selection is performed to reduce the combined feature space. This contribution allows MReC-DFS to store only the features most relevant to the learnt concepts, which in turn increases the memory efficiency of the technique. In addition, an incremental feature selection method is proposed that dynamically determines the threshold between relevant and irrelevant features. Experimental results demonstrating the high accuracy of MReC-DFS compared with state-of-the-art techniques on a variety of real datasets are presented. The results also show the superior memory efficiency of MReC-DFS.
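
One ingredient of the approach, dynamically determining the threshold between relevant and irrelevant features, can be illustrated with a small sketch: per-feature relevance scores are maintained incrementally and the cut-off is placed at the mean score plus a fraction of its standard deviation. Both the score used below and the threshold rule are illustrative assumptions rather than the MReC-DFS selection criterion.

```python
# Illustrative sketch only: incremental per-feature relevance scores with a
# dynamic threshold at mean + k * std of the score magnitudes.
import math

class IncrementalFeatureSelector:
    def __init__(self, n_features):
        self.n = 0
        self.scores = [0.0] * n_features   # running mean of (+x_i for class 1, -x_i for class 0)

    def update(self, x, y):
        """Update the running relevance scores with one labelled example (y in {0, 1})."""
        self.n += 1
        sign = 1.0 if y == 1 else -1.0
        for i, xi in enumerate(x):
            self.scores[i] += (sign * xi - self.scores[i]) / self.n

    def relevant_features(self, k=0.5):
        """Dynamic threshold: keep features whose score magnitude exceeds mean + k*std."""
        mags = [abs(s) for s in self.scores]
        mean = sum(mags) / len(mags)
        std = math.sqrt(sum((m - mean) ** 2 for m in mags) / len(mags))
        return [i for i, m in enumerate(mags) if m > mean + k * std]

if __name__ == "__main__":
    import random
    selector = IncrementalFeatureSelector(n_features=5)
    for _ in range(2000):
        x = [random.gauss(0, 1) for _ in range(5)]
        y = int(x[0] + 0.8 * x[1] > 0)     # only the first two features matter
        selector.update(x, y)
    print(selector.relevant_features())     # typically [0, 1]
```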

Relevance:

10.00%

Publisher:

Abstract:

Information integration is a very important topic. Reusing knowledge and having common and exchangeable representations have been an active research topic in process systems engineering. In this paper we deal with information integration in two different ways: the first one shares knowledge between different heterogeneous applications, and the second one integrates two different (but complementary) types of knowledge, functional and structural. A new architecture to integrate these representations and use them for several purposes is presented in this paper.

Relevance:

10.00%

Publisher:

Abstract:

Interoperability between semantic technologies is a must, because these technologies need to communicate in order to interchange ontologies and use them in the distributed and open environment of the Semantic Web. However, such interoperability is not straightforward due to the high heterogeneity of these technologies. This chapter describes the problem of semantic technology interoperability from two different perspectives: first, from a theoretical perspective, by presenting an overview of the different factors that affect interoperability; and second, from a practical perspective, by reusing evaluation methods and applying them to six current semantic technologies in order to assess their interoperability.

Relevance:

10.00%

Publisher:

Abstract:

Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allow scientists to do new experiments faster, since the workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and Vistrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities that are observed in workflows (Data-Operation motifs), and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful to identify the functionality of the steps in a given workflow, to develop best practices for workflow design, and to develop approaches for automated generation of workflow abstractions.

Relevance:

10.00%

Publisher:

Abstract:

Durante el transcurso de esta Tesis Doctoral se ha realizado un estudio de la problemática asociada al desarrollo de sistemas de interacción hombre-máquina sensibles al contexto. Este problema se enmarca dentro de dos áreas de investigación: los sistemas interactivos y las fuentes de información contextual. Tradicionalmente la integración entre ambos campos se desarrollaba a través de soluciones verticales específicas, que abstraen a los sistemas interactivos de conocer los procedimientos de bajo nivel de acceso a la información contextual, pero limitan su interoperabilidad con otras aplicaciones y fuentes de información. Para solventar esta limitación se hace imprescindible potenciar soluciones interoperables que permitan acceder a la información del mundo real a través de procedimientos homogéneos. Esta problemática coincide perfectamente con los escenarios de "Computación Ubicua" e "Internet de las Cosas", donde se apunta a un futuro en el que los objetos que nos rodean serán capaces de obtener información del entorno y comunicarla a otros objetos y personas. Los sistemas interactivos, al ser capaces de obtener información de su entorno a través de la interacción con el usuario, pueden tomar un papel especial en este escenario tanto como consumidores como productores de información.

En esta Tesis se ha abordado la integración de ambos campos teniendo en cuenta este escenario tecnológico. Para ello, en primer lugar se ha realizado un análisis de las iniciativas más importantes para la definición y diseño de sistemas interactivos, y de las principales infraestructuras de suministro de información. Mediante este estudio se ha propuesto utilizar el lenguaje SCXML del W3C para el diseño de los sistemas interactivos y el procesamiento de los datos proporcionados por fuentes de contexto. Así, se ha reflejado cómo las capacidades del lenguaje SCXML para combinar información de diferentes modalidades pueden también utilizarse para procesar e integrar información contextual de diferentes fuentes heterogéneas, y por consiguiente diseñar sistemas de interacción sensibles al contexto. Del mismo modo se presenta la iniciativa Sensor Web, y su extensión semántica Semantic Sensor Web, como una iniciativa idónea para permitir un acceso y suministro homogéneo de la información a los sistemas interactivos sensibles al contexto.

Posteriormente se han analizado los retos que plantea la integración de ambos tipos de iniciativas. Como resultado se ha conseguido establecer una serie de funcionalidades que es necesario implementar para llevar a cabo esta integración. Utilizando tecnologías que aportan una gran flexibilidad al proceso de implementación y que se apoyan en recomendaciones y estándares actuales, se implementaron una serie de desarrollos experimentales que integraban las funcionalidades identificadas anteriormente.

Finalmente, con el fin de validar nuestra propuesta, se realizó un conjunto de experimentos sobre un entorno de experimentación que simula el escenario de la conducción. En este escenario un sistema interactivo se comunica con una extensión semántica de una plataforma basada en los estándares de la Sensor Web para poder obtener información y publicar las observaciones que el usuario realizaba al sistema.
Los resultados obtenidos han demostrado la viabilidad de utilizar el lenguaje SCXML para el diseño de sistemas interactivos sensibles al contexto que requieren acceder a plataformas avanzadas de información para consumir y publicar información a la vez que interaccionan con el usuario. Del mismo modo, se ha demostrado cómo la utilización de tecnologías semánticas en los procesos de consulta y publicación de información puede facilitar la reutilización de la información publicada en infraestructuras Sensor Web por cualquier tipo de aplicación, y de este modo contribuir al futuro escenario de Internet de las Cosas.

Abstract: In this Thesis, we have addressed the difficulties related to the development of context-aware human-machine interaction systems. This issue is part of two research fields: interactive systems and contextual information sources. Traditionally both fields have been integrated through domain-specific vertical solutions that allow interactive systems to access contextual information without having to deal with low-level procedures, but restricting their interoperability with other applications and heterogeneous data sources. Thus, it is essential to boost the research on interoperable solutions that provide access to real-world information through homogeneous procedures. This issue perfectly matches the scenarios of "Ubiquitous Computing" and "Internet of Things", which point toward a future in which many objects around us will be able to acquire meaningful information about the environment and communicate it to other objects and to people. Since interactive systems are able to get information from their environment through interaction with the user, they can play an important role in this scenario, as they can both consume real-world data and produce enriched information.

This Thesis deals with the integration of both fields considering this technological scenario. In order to do this, we first carried out an analysis of the most important initiatives for the definition and design of interactive systems, and of the main infrastructures for providing information. Through this study the use of the W3C SCXML language is proposed for both the design of interactive systems and the processing of data provided by different context sources. Thus, this work has shown how the SCXML capabilities for combining information from different modalities can also be used to process and integrate contextual information from different heterogeneous sensor sources, and therefore to develop context-aware interaction systems. Similarly, we present the Sensor Web initiative, and its semantic extension Semantic Sensor Web, as an appropriate initiative to allow uniform access and delivery of information to context-aware interactive systems.

Subsequently we have analyzed the challenges of integrating both types of initiatives: SCXML and (Semantic) Sensor Web. As a result, we identified a number of functionalities that need to be implemented in order to perform this integration. By using technologies that provide flexibility to the implementation process and are based on current recommendations and standards, we implemented a series of experimental developments that integrate the identified functionalities. Finally, in order to validate our approach, we conducted different experiments with a testing environment simulating a driving scenario.
In this framework an interactive system can access a semantic extension of a Telco platform, based on the standards of the Sensor Web, to acquire contextual information and publish the observations that the user made to the system. The results showed the feasibility of using the SCXML language for designing context-aware interactive systems that require access to advanced sensor platforms for consuming and publishing information while interacting with the user. In the same way, it was shown how the use of semantic technologies in the processes of querying and publishing sensor data can assist in reusing and sharing the information published by any application in Sensor Web infrastructures, and thus contribute to realizing the future scenario of the "Internet of Things".
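
As a minimal sketch of the homogeneous access to sensor data that the Sensor Web provides to such interactive systems, the Python snippet below issues a standard KVP GetObservation request to a hypothetical OGC Sensor Observation Service (SOS) endpoint. The endpoint URL, offering identifier and observed property are placeholders; real deployments advertise theirs in their GetCapabilities document, and the thesis additionally relies on a semantic extension of such platforms.

```python
# Minimal sketch, assuming a hypothetical SOS endpoint: request observations through
# the standard KVP GetObservation operation. All identifiers below are placeholders.
import requests  # pip install requests

SOS_ENDPOINT = "https://example.org/sos"   # placeholder endpoint

def get_observations(offering, observed_property):
    params = {
        "service": "SOS",
        "version": "2.0.0",
        "request": "GetObservation",
        "offering": offering,
        "observedProperty": observed_property,
    }
    resp = requests.get(SOS_ENDPOINT, params=params, timeout=30)
    resp.raise_for_status()
    return resp.text        # XML document with the requested observations

if __name__ == "__main__":
    print(get_observations("urn:example:offering:vehicle01",
                           "urn:example:property:speed")[:500])
```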