82 results for ontology development methodology


Relevance: 90.00%

Abstract:

Ontology evaluation, which includes ontology diagnosis and repair, is a complex activity that should be carried out in every ontology development project, because it checks the technical quality of the ontology. However, there is an important gap between the methodological work on ontology evaluation and the tools that support such an activity. More precisely, few approaches provide clear guidance about how to diagnose ontologies and how to repair them accordingly. This thesis aims to advance the current state of the art of ontology evaluation, specifically in the ontology diagnosis activity. The main goals of this thesis are (a) to help ontology engineers diagnose their ontologies in order to find common pitfalls and (b) to lessen the effort required from them by providing suitable technological support. This thesis presents the following main contributions:

• A catalogue describing 41 pitfalls that ontology developers might introduce in their ontologies.
• A quality model for ontology diagnosis that aligns the pitfall catalogue with existing quality models for semantic technologies.
• The design and implementation of 48 methods for detecting 33 of the 41 pitfalls defined in the catalogue.
• A system called OOPS! (OntOlogy Pitfall Scanner!) that allows ontology engineers to (semi)automatically diagnose their ontologies.

According to the feedback gathered and the satisfaction tests carried out, the approach developed and presented in this thesis effectively helps users to increase the quality of their ontologies. At the time of writing this thesis, OOPS! has been broadly accepted by a large number of users worldwide and has been used around 3000 times from 60 different countries. OOPS! is integrated with third-party software and is locally installed in private enterprises, where it is used both for ontology development activities and for training courses.
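The (semi)automatic diagnosis described above can also be invoked programmatically. Below is a minimal sketch in Python of submitting an ontology to a pitfall-scanning web service; the endpoint URL and the XML request/response format are assumptions for illustration only, so the actual OOPS! web-service interface should be checked against its documentation.

```python
# Hedged sketch: submitting an ontology to a pitfall-scanning web service.
# The endpoint URL and request/response format below are assumptions for
# illustration only; the real OOPS! web-service interface may differ.
import requests

OOPS_ENDPOINT = "http://oops.linkeddata.es/rest"  # assumed endpoint

def scan_ontology(ontology_uri: str) -> str:
    """Send an ontology URI to the scanner and return the raw response."""
    request_body = (
        "<?xml version='1.0' encoding='UTF-8'?>"
        "<OOPSRequest>"
        f"<OntologyUrl>{ontology_uri}</OntologyUrl>"
        "<OntologyContent></OntologyContent>"
        "<Pitfalls></Pitfalls>"          # empty list = check all pitfalls (assumed)
        "<OutputFormat>XML</OutputFormat>"
        "</OOPSRequest>"
    )
    response = requests.post(OOPS_ENDPOINT, data=request_body, timeout=60)
    response.raise_for_status()
    return response.text  # XML listing the detected pitfalls

if __name__ == "__main__":
    print(scan_ontology("http://example.org/my-ontology.owl"))
```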

Relevance: 80.00%

Abstract:

Ontology quality can be affected by the difficulties involved in ontology modelling, which may imply the appearance of anomalies in ontologies. This situation leads to the need to validate ontologies, that is, to assess their quality and correctness. Ontology validation is a key activity in different ontology engineering scenarios such as development and selection. This paper contributes to the ontology validation activity by proposing a web-based tool, called OOPS!, independent of any ontology development environment, for detecting anomalies in ontologies. This tool will help developers to improve ontology quality by automatically detecting potential errors.

Relevance: 80.00%

Abstract:

The uptake of Linked Data (LD) has promoted the proliferation of datasets and their associated ontologies for describing different domains. Particular LD development characteristics, such as agility and web-based architecture, necessitate the revision, adaptation, and lightening of existing methodologies for ontology development. This thesis proposes a lightweight method for ontology development in an LD context, based on data-driven agile development, the reuse of existing resources, and the evaluation of the obtained products considering both classical ontological engineering principles and LD characteristics.

Relevance: 80.00%

Abstract:

A growing number of ontologies are already available thanks to development initiatives in many different fields. In such ontology developments, developers must tackle a wide range of difficulties and handicaps, which can result in the appearance of anomalies in the resulting ontologies. Therefore, ontology evaluation plays a key role in ontology development projects. OOPS! is an online tool that automatically detects pitfalls, considered as potential errors or problems, and thus may help ontology developers to improve their ontologies. To gain insight into the prevalence of pitfalls and to assess whether there are differences among ontologies developed by novices, a random set of already scanned ontologies, and existing well-known ones, 406 OWL ontologies were analysed against OOPS!'s 21 pitfalls, and 24 of these ontologies were also examined manually for the detected pitfalls. The various analyses performed show only minor differences between the three sets of ontologies, thereby providing a general landscape of pitfalls in ontologies.
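An analysis like the one summarised above essentially amounts to counting how often each pitfall appears in each group of ontologies. A minimal sketch, assuming scan results are already available as a mapping from group to detected pitfall codes (the group names, codes, and figures below are invented for illustration, not the study's data):

```python
# Illustrative sketch: tallying pitfall frequencies per ontology group.
# Group names and pitfall codes are made up; each inner list holds the
# (unique) pitfall codes detected in one ontology.
from collections import Counter

scan_results = {
    "novices":    [["P04", "P08"], ["P08", "P22"], ["P04"]],
    "random":     [["P08"], ["P10", "P22"], []],
    "well_known": [["P08", "P04"], ["P22"]],
}

for group, ontologies in scan_results.items():
    counts = Counter(code for pitfalls in ontologies for code in pitfalls)
    total = len(ontologies)
    print(f"{group}: {total} ontologies")
    for code, n in counts.most_common():
        print(f"  {code}: found in {n}/{total} ontologies")
```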

Relevance: 80.00%

Abstract:

This thesis is developed in the framework of distributed execution of mobile services and contributes to the definition and development of the concept of the prosumer user. The prosumer user is characterized by using his or her mobile phone to create, provide, and execute services. This new user model contributes to the advancement of the information society, as the prosumer is transformed from a producer of content into a producer of services (the latter consisting of content plus the logic to access, process, and represent it). The overall goal of this thesis is to provide a model for the creation, distribution, and execution of services in the mobile environment that enables non-programmers (prosumer users) who are experts in a given domain to create and execute their own applications and services. For this purpose, methodologies, processes, algorithms, and mechanisms adaptable to specific domains are defined, developed, and implemented to build environments for the distributed execution of mobile services for prosumer users. The provision of creation tools adapted to non-expert users is a current trend addressed by several research works. However, no service development methodology has yet been proposed that involves the prosumer user in the design, development, implementation, and validation of services. This thesis studies the most innovative methodologies and technologies related to co-creation and relies on this analysis to define and validate a methodology that enables the user to be responsible for creating final services. Since mobile prosumer environments are a specific case of environments for the distributed execution of mobile services, this thesis investigates techniques for service adaptation, distribution, coordination, and resource access, identifying as requirements the challenges of such environments and the characteristics of the participating users. Service adaptation is addressed by defining a variability model that supports the interdependence between user personalization decisions, incorporating guidance and error-detection mechanisms. Service distribution is implemented using decomposition techniques based on SPQR trees, quantifying the impact of splitting any service across different domains. At the communication level, the coordination of distributed service executions raises several problems, such as link losses, connections, disconnections, and participant discovery, which are solved using dissemination techniques based on publish-subscribe communication models and gossip algorithms. To achieve flexible distributed service execution in mobile environments, the approach supports adaptation to changes in resource availability while providing a communication infrastructure for uniform and efficient access to resources. Experimental validations have been conducted to assess the feasibility of the proposed solutions, defining relevant application scenarios (the new intelligent universe, service prosumerization in hospital environments, and emergencies in the Web of Things).
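The abstract mentions gossip-based dissemination for coordinating distributed service executions. Below is a minimal sketch of a push-style gossip round in Python, assuming an in-memory set of peers; the peer names, fan-out, and message format are illustrative assumptions, not the thesis's actual protocol.

```python
# Illustrative sketch of push-style gossip dissemination among peers.
# Peer names, fan-out, and message format are assumptions for illustration.
import random

class Peer:
    def __init__(self, name):
        self.name = name
        self.known_messages = set()

    def receive(self, message, network, fanout=2):
        """Store a new message and forward it to a few random peers."""
        if message in self.known_messages:
            return  # already seen: stop spreading
        self.known_messages.add(message)
        targets = random.sample([p for p in network if p is not self],
                                k=min(fanout, len(network) - 1))
        for target in targets:
            target.receive(message, network, fanout)

network = [Peer(f"node{i}") for i in range(10)]
network[0].receive("service-update:v2", network)
print(sum("service-update:v2" in p.known_messages for p in network), "peers reached")
```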

Relevance: 80.00%

Abstract:

To our knowledge, no current software development methodology explicitly describes how to transition from the analysis model to the software architecture of the application. This paper presents a method to derive the software architecture of a system from its analysis model using MDA. Both the analysis model and the architectural model are PIMs described with UML 2. The model-type mapping designed consists of several rules (expressed using OCL and natural language) that, when applied to the analysis artifacts, generate the software architecture of the application. Specifically, the rules act on elements of the UML 2 metamodel (metamodel mapping). We have developed a tool (using Smalltalk) that permits the automatic application of these rules to an analysis model defined in Rose, generating the application architecture expressed in the C2 architectural style.
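A mapping rule of this kind essentially walks the analysis model and emits architectural elements. Below is a minimal sketch in Python of one such rule; the metamodel classes and the rule itself are invented for illustration (the paper expresses its rules in OCL over the UML 2 metamodel).

```python
# Illustrative sketch of a model-to-model transformation rule:
# every analysis class stereotyped <<control>> becomes a C2-style component
# attached to a connector. Metamodel classes and the rule are hypothetical.
from dataclasses import dataclass, field

@dataclass
class AnalysisClass:
    name: str
    stereotype: str  # e.g. "control", "entity", "boundary"

@dataclass
class C2Component:
    name: str

@dataclass
class C2Architecture:
    components: list = field(default_factory=list)
    connectors: list = field(default_factory=list)

def control_to_component_rule(analysis_model, architecture):
    """Apply one mapping rule: <<control>> classes -> components on a connector."""
    bus = "main_connector"
    architecture.connectors.append(bus)
    for element in analysis_model:
        if element.stereotype == "control":
            component = C2Component(name=element.name + "Component")
            architecture.components.append((component, bus))
    return architecture

model = [AnalysisClass("OrderManager", "control"), AnalysisClass("Order", "entity")]
print(control_to_component_rule(model, C2Architecture()))
```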

Relevance: 80.00%

Abstract:

Nowadays there is great expectation regarding the introduction of new tools and methods for software product development that, in the near future, will allow an engineering approach to the software production process. The new methodologies that are starting to emerge imply an integral approach to the problem, covering all stages of the production scheme. However, the degree of automation achieved in the system construction process is very low and is focused on the last phases of the software life cycle, which means that the cost reduction obtained is insignificant and, more importantly, that the quality of the resulting software products is not guaranteed. This thesis defines a structured software development methodology that can be automated, that is, a CASE methodology. The methodology presented follows the CASE development cycle model, which consists of the analysis, design, and testing phases, and its field of application is information systems. First, the basic principles on which the CASE methodology rests are established. Then, since the methodology starts by fixing the objectives of the company requesting an information system, techniques are employed for gathering and validating information, which at the same time provide an easy communication language between end users and developers. Moreover, these same techniques describe all the system requirements completely, consistently, and unambiguously. Likewise, a set of techniques and algorithms is presented to automate, from the system requirements specification, the logical design of both the Process Model and the Data Model, both validated against the previous requirements specification. Finally, formal procedures are defined that specify the set of activities to be carried out in the construction process and how to perform them, thereby achieving integrity across the different stages of the development process.

Relevance: 80.00%

Abstract:

Multiple indicators are of interest in smart cities, at different scales and for different stakeholders. In open environments, such as the Web, or when indicator information has to be interchanged across systems, contextual information (e.g., unit of measurement, measurement method) should be transmitted together with the data; the lack of such information might cause undesirable effects. Describing the data by means of ontologies increases interoperability among datasets and applications. However, methodological guidance is crucial during ontology development in order to transform the art of modelling into an engineering activity. In this paper, we present a methodological approach for modelling data about Key Performance Indicators and their context, together with an application example of such guidelines.
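As an illustration of why contextual information matters when indicator data is exchanged, the hedged sketch below models a single indicator value together with its unit and measurement method using rdflib; the namespace and vocabulary terms are invented for the example and are not the ontology proposed in the paper.

```python
# Illustrative sketch: publishing a KPI value together with its context
# (unit of measurement, measurement method). The namespace and terms are
# invented for this example; they are not the paper's ontology.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/kpi#")

g = Graph()
g.bind("ex", EX)

obs = EX["observation-42"]
g.add((obs, RDF.type, EX.KPIObservation))
g.add((obs, EX.indicator, EX.EnergyConsumptionPerCapita))
g.add((obs, EX.value, Literal("123.4", datatype=XSD.decimal)))
g.add((obs, EX.unit, EX.KilowattHour))                    # context: unit of measurement
g.add((obs, EX.measurementMethod, EX.SmartMeterReading))  # context: measurement method
g.add((obs, EX.refersToArea, EX.DistrictNorth))

print(g.serialize(format="turtle"))
```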

Relevance: 80.00%

Abstract:

The concept of the IoT (Internet of Things) is accepted by academia and industry as the future trend of the Internet. It will enable people and things to be connected at any time and any place, with anything and anyone. The IoT has been proposed for application in many areas such as healthcare, transportation, logistics, and smart environments. This thesis, however, focuses on the home healthcare area, as it is a promising healthcare model for solving problems that the traditional model cannot cope with, such as limited medical resources and the increasing demand for healthcare from elderly and chronic patients. A remarkable change in the semantic-oriented vision of the IoT is that vast numbers of sensors and devices are involved, which can generate enormous amounts of data. Methods to manage these data, including acquiring, interpreting, processing, and storing them, need to be implemented. Beyond this, other capabilities that the IoT currently lacks are identified, namely interoperation, context awareness, and security and privacy. Context awareness is an emerging technology to manage and take advantage of context in order to enable any type of system to provide personalized services. The aim of this thesis is to explore ways to facilitate context awareness in the IoT. To realize this objective, preliminary research is carried out in this thesis. The most basic premise for realizing context awareness is to collect, model, understand, reason over, and make use of context. A comprehensive literature review of existing context modelling and context reasoning techniques is conducted, concluding that ontology-based context modelling and ontology-based context reasoning are the most promising and efficient techniques for managing context. In order to bring ontologies into the IoT, a specific ontology-based context awareness framework is proposed for IoT applications. In general, the framework is composed of eight components: hardware, UI (user interface), context modelling, context fusion, context reasoning, context repository, a security unit, and context dissemination. Moreover, on the basis of TOVE (Toronto Virtual Enterprise), a formal ontology development methodology is proposed and illustrated, consisting of four stages: specification and conceptualization, competency formulation, implementation, and validation and documentation. In addition, a home healthcare scenario is elaborated by listing its well-defined functionalities. To represent this specific scenario, the proposed ontology development methodology is applied and the ontology-based model is developed in Protégé, a free and open-source ontology editor. Finally, the accuracy and completeness of the proposed ontology are validated to show that it is able to accurately represent the scenario of interest.
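A hedged sketch of what ontology-based context modelling and reasoning can look like in practice: a few context facts for a home healthcare scenario are asserted and a simple rule-like SPARQL query derives a higher-level situation. All vocabulary terms and thresholds are illustrative, not the thesis's actual ontology.

```python
# Illustrative sketch: ontology-based context modelling and a rule-like query
# for a home healthcare scenario. The vocabulary is invented for illustration.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

HC = Namespace("http://example.org/homecare#")
g = Graph()
g.bind("hc", HC)

# Low-level context produced by sensors
g.add((HC.alice, RDF.type, HC.Patient))
g.add((HC.alice, HC.heartRate, Literal(118, datatype=XSD.integer)))
g.add((HC.alice, HC.locatedIn, HC.bedroom))

# Rule-like reasoning: a patient with heart rate above 110 needs attention
query = """
PREFIX hc: <http://example.org/homecare#>
SELECT ?patient ?room WHERE {
    ?patient a hc:Patient ;
             hc:heartRate ?hr ;
             hc:locatedIn ?room .
    FILTER(?hr > 110)
}
"""
for patient, room in g.query(query):
    print(f"Alert: {patient} in {room} needs attention")
```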

Relevance: 80.00%

Abstract:

Data acquisition systems used in the diagnostics of thermonuclear fusion devices face important challenges due to the change in the data acquisition paradigm needed for long-pulse operation. Even in short-pulse devices, where data is mainly analysed after the discharge has finished, there is still a large amount of data that has not been analysed, so a great deal of knowledge remains undiscovered in the databases holding the vast amount of data generated. There has been a strong effort in the fusion community in the last decade to improve offline analysis methods to overcome this problem, but it has proved insufficient unless some of these mechanisms can be run in real time. In long-pulse devices this new paradigm, in which data acquisition devices include local processing capabilities able to run advanced data analysis algorithms, will be a must. The research work in this thesis aims to determine whether it is possible to increase the local real-time processing capacity of such systems by using GPUs. To that end, during the experimentation period, various proposals were evaluated through use cases developed for several of the most representative fusion devices: ITER, JET, and TCV. The conclusions and experience obtained made it possible to propose a model, and a development methodology, for including this technology in acquisition systems for diagnostics of different natures. The model defines not only the optimal hardware architecture for achieving this integration, but also the incorporation of this new processing resource into the Supervisory Control and Data Acquisition (SCADA) system most relevant in the fusion community (EPICS), providing a complete solution. The proposal is complemented by the definition of a methodology that addresses the detected weaknesses and traces a path for integrating the solution into existing hardware and software standards. The final evaluation was performed through a use case, representative of diagnostics requiring image acquisition and processing, developed for the international ITER device, and it has been successfully tested on its premises. The solution proposed in this thesis has been included by the ITER IO in its catalogue of standard solutions for the development of future diagnostics. This was possible thanks to the technology transfer agreement signed with National Instruments, which has allowed one of their core software products, targeted at the acquisition systems used in these devices, to be modified and updated.
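To make the idea of local GPU processing concrete, the hedged sketch below offloads a simple image pre-processing step (background subtraction and thresholding) to the GPU with CuPy; the library choice and the processing step are illustrative assumptions, not the architecture or algorithms described in the thesis.

```python
# Illustrative sketch: offloading a simple frame pre-processing step to a GPU.
# CuPy and the specific processing chosen here are assumptions for illustration;
# they are not the thesis's actual architecture or algorithms.
import numpy as np
import cupy as cp

def process_frame_gpu(frame: np.ndarray, background: np.ndarray, threshold: float):
    """Subtract a background image and threshold the result on the GPU."""
    frame_gpu = cp.asarray(frame, dtype=cp.float32)
    background_gpu = cp.asarray(background, dtype=cp.float32)
    corrected = frame_gpu - background_gpu
    mask = corrected > threshold          # element-wise comparison on the GPU
    return cp.asnumpy(mask)               # copy result back to host memory

if __name__ == "__main__":
    frame = np.random.rand(512, 512).astype(np.float32)
    background = np.zeros((512, 512), dtype=np.float32)
    print(process_frame_gpu(frame, background, 0.9).sum(), "pixels above threshold")
```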

Relevance: 50.00%

Abstract:

In contrast to other approaches that provide methodological guidance for ontology engineering, the NeOn Methodology does not prescribe a rigid workflow, but instead it suggests a variety of pathways for developing ontologies. The nine scenarios proposed in the methodology cover commonly occurring situations, for example, when available ontologies need to be re-engineered, aligned, modularized, localized to support different languages and cultures, and integrated with ontology design patterns and non-ontological resources, such as folksonomies or thesauri. In addition, the NeOn Methodology framework provides (a) a glossary of processes and activities involved in the development of ontologies, (b) two ontology life cycle models, and (c) a set of methodological guidelines for different processes and activities, which are described (a) functionally, in terms of goals, inputs, outputs, and relevant constraints; (b) procedurally, by means of workflow specifications; and (c) empirically, through a set of illustrative examples.

Relevance: 40.00%

Abstract:

Investigating cell dynamics during early zebrafish embryogenesis requires specific image acquisition and analysis strategies. Multiharmonic microscopy, i.e., second- and third-harmonic generation, allows imaging of cell divisions and cell membranes in unstained zebrafish embryos from the 1-cell to the 1000-cell stage. This paper presents the design and implementation of a dedicated image processing pipeline (tracking and segmentation) for the reconstruction of cell dynamics during these developmental stages. This methodology allows the reconstruction of the cell lineage tree, including division timings, spatial coordinates, and cell shape, up to the 1000-cell stage with minute temporal accuracy and micrometre spatial resolution. Data analysis of the digital embryos provides an extensive quantitative description of early zebrafish embryogenesis.
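A hedged sketch of the kind of segmentation-and-tracking step such a pipeline performs: bright blobs are segmented per frame by thresholding and labelling, then linked across frames by nearest-centroid matching. Library choices (scikit-image, SciPy) and parameters are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch: per-frame segmentation and nearest-centroid linking,
# the basic ingredients of a cell tracking pipeline. Libraries and parameters
# are assumptions for illustration, not the paper's actual implementation.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.measure import label, regionprops
from scipy.spatial.distance import cdist

def segment_centroids(image: np.ndarray) -> np.ndarray:
    """Return an array of (row, col) centroids of bright blobs in one frame."""
    smoothed = gaussian(image, sigma=2)
    mask = smoothed > threshold_otsu(smoothed)
    labels = label(mask)
    return np.array([region.centroid for region in regionprops(labels)])

def link_frames(centroids_t, centroids_t1, max_distance=15.0):
    """Link each centroid at time t to its nearest neighbour at time t+1."""
    if len(centroids_t) == 0 or len(centroids_t1) == 0:
        return []
    distances = cdist(centroids_t, centroids_t1)
    links = []
    for i, row in enumerate(distances):
        j = int(np.argmin(row))
        if row[j] <= max_distance:
            links.append((i, j))  # cell i at time t matches cell j at time t+1
    return links
```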

Relevance: 40.00%

Abstract:

Advances in electronics nowadays facilitate the design of smart spaces based on physical mash-ups of sensor and actuator devices. At the same time, software paradigms such as the Internet of Things (IoT) and the Web of Things (WoT) are motivating the creation of technology to support the development and deployment of web-enabled embedded sensor and actuator devices with two major objectives: (i) to integrate sensing and actuating functionalities into everyday objects, and (ii) to easily allow a diversity of devices to plug into the Internet. Currently, developers applying this Internet-oriented approach need a solid understanding of specific platforms and web technologies. To ease this development process, this research proposes a Resource-Oriented and Ontology-Driven Development (ROOD) methodology based on Model Driven Architecture (MDA). This methodology aims to enable the development of smart spaces through a set of modelling tools and semantic technologies that support the definition of the smart space and the automatic generation of code at the hardware level. The feasibility of ROOD is demonstrated by building an adaptive health monitoring service for a Smart Gym.
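The hedged sketch below illustrates the general idea of generating device-level code from a declarative model of a smart space; the model format, template, and output are invented for the example and are not ROOD's actual tooling.

```python
# Illustrative sketch: generating device-level glue code from a declarative
# model of a smart space. Model format, template, and output are invented
# for illustration; they are not ROOD's actual artefacts.
from string import Template

smart_gym_model = {
    "devices": [
        {"name": "heart_rate_sensor", "kind": "sensor", "resource": "/gym/hr"},
        {"name": "treadmill_display", "kind": "actuator", "resource": "/gym/display"},
    ]
}

device_template = Template(
    "class ${class_name}:\n"
    "    RESOURCE = '${resource}'\n"
    "    KIND = '${kind}'\n"
)

def generate_code(model: dict) -> str:
    """Emit one class stub per device declared in the model."""
    stubs = []
    for device in model["devices"]:
        class_name = "".join(part.title() for part in device["name"].split("_"))
        stubs.append(device_template.substitute(
            class_name=class_name, resource=device["resource"], kind=device["kind"]))
    return "\n".join(stubs)

print(generate_code(smart_gym_model))
```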

Relevance: 40.00%

Abstract:

Autonomous systems are systems capable of operating in a real-world environment without any form of external control for extended periods of time. Autonomy is a desirable goal for every system, as it improves performance, safety, and profitability. Ontologies are a way to conceptualize the knowledge of a specific domain. In this paper an ontology for the description of autonomous systems, as well as for their development (engineering), is presented and applied to a process. This ontology is intended to be used to generate final applications following a model-driven methodology.

Relevance: 40.00%

Abstract:

The paper presents the main elements of a project entitled ICT-Emissions, which aims at developing a novel methodology to evaluate the impact of ICT-related measures on mobility, vehicle energy consumption, and CO2 emissions of vehicle fleets at the local scale, in order to promote the wider application of the most appropriate ICT measures. The proposed methodology combines traffic and emission modelling at the micro and macro scales, linked by interfaces and sub-modules which are specifically designed and developed. A number of sources are available to the consortium to obtain the necessary input data, and experimental campaigns are planned to fill gaps in the information on traffic and emission patterns. The application of the methodology will be demonstrated using commercially available software; however, the methodology is developed in such a way as to enable its implementation with a variety of emission and traffic models. Particular emphasis is given to (a) the correct estimation of driver behaviour as a result of traffic-related ICT measures, (b) the coverage of a large number of current vehicle technologies, including ICT systems, and (c) near-future technologies such as hybrid, plug-in hybrid, and electric vehicles. The innovative combination of traffic, driver, and emission models produces a versatile toolbox that can simulate the impact on energy and CO2 of infrastructure measures (traffic management, dynamic traffic signs, etc.), driver assistance systems and eco-solutions (speed/cruise control, start/stop systems, etc.), or a combination of measures (cooperative systems). The methodology is validated by application in the Turin area, and its capacity is further demonstrated by application under real-world conditions in Madrid and Rome.
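A hedged sketch of the core coupling idea: per-link output from a traffic model (average speed, flow, length) is fed into a speed-dependent emission factor function to obtain CO2 per link. The emission-factor curve and the numbers are invented for illustration and are not the project's actual models or results.

```python
# Illustrative sketch: coupling traffic model output to an emission model.
# The emission-factor curve and the traffic data are invented for illustration;
# they are not the project's actual models or results.

def co2_factor_g_per_km(avg_speed_kmh: float) -> float:
    """Toy speed-dependent CO2 emission factor (g/km) for a passenger car."""
    # U-shaped curve: high at very low and very high speeds; assumed shape only.
    return 90.0 + 2000.0 / max(avg_speed_kmh, 5.0) + 0.02 * avg_speed_kmh ** 2

traffic_output = [  # per-link: length (km), average speed (km/h), flow (veh/h)
    {"link": "A", "length_km": 1.2, "speed_kmh": 25.0, "flow_vph": 900},
    {"link": "B", "length_km": 0.8, "speed_kmh": 55.0, "flow_vph": 1500},
]

for link in traffic_output:
    grams_per_vehicle = co2_factor_g_per_km(link["speed_kmh"]) * link["length_km"]
    total_kg_per_hour = grams_per_vehicle * link["flow_vph"] / 1000.0
    print(f"link {link['link']}: {total_kg_per_hour:.1f} kg CO2 per hour")
```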