38 results for Safety critical applications

at Universidad Politécnica de Madrid


Relevance: 100.00%

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them and greater reliance will be placed on the delivery of their services. A relevant subset of them comprises high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, together with the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities and platform independence have made it an attractive language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques; however, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety-Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) in order to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware that is suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented keeping in mind the main requirements, namely predictability and reliability of the timing behavior and of the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In this design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees a suitable timing behavior. It also includes mechanisms to monitor the functional and the timing behavior. It provides independence from the network protocol by defining a network interface and specific modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring correct data transmission, current implementations are not suitable for hard real-time systems and there are no alternatives. This thesis proposes a predictable serialization that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time of each functional block) and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.

Distributed real-time embedded systems are increasingly important to society. Demand for them is growing, and we depend more and more on the services they provide. High-integrity systems constitute a subset of great importance, characterized by the fact that a failure in their operation can cause loss of human life, damage to the environment, or considerable economic losses. The need to satisfy strict timing requirements makes their development more complex. As embedded systems continue to spread through our society, it is necessary to keep development costs under control by using suitable techniques in their design, maintenance and certification; in particular, a flexible, hardware-independent technology is required. The evolution of communication networks and paradigms, as well as the need for greater computing power and fault tolerance, has motivated the interconnection of electronic devices, and the communication mechanisms allow data to be transferred at high transmission speeds. In this context, the concept of a distributed system has emerged: systems whose components execute in parallel on several nodes and interact with each other through communication networks. An interesting notion is that of real-time systems that are neutral with respect to the execution platform, characterized by the lack of knowledge of this platform during their design. This property is relevant because such systems should run on the widest possible variety of architectures, have an average lifetime of more than ten years, and may change the place where they execute. The Java programming language is a good basis for the development of this kind of system, which motivated the creation of the RTSJ (Real-Time Specification for Java), a language extension that enables the development of real-time systems. However, the RTSJ provides no facilities for developing distributed real-time applications. This is an important limitation, since most current and future systems will be distributed. The DRTSJ (Distributed RTSJ) group was created under the Java Community Process (JSR-50) to define abstractions that address this limitation, but at present no formal specification exists.
The aim of this thesis is to develop a communication middleware for the development of distributed real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model with the HRTJ profile. It has been designed and implemented taking into account the main requirements, such as the predictability and reliability of the timing behavior and the use of resources. The design starts from the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to execute synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees adequate timing behavior. Mechanisms to monitor the functional and timing behavior have also been included. Independence from the network protocol has been sought by defining a network interface and specific modules, and the JRMP protocol has been modified to include the different phases, non-functional parameters, and message-size optimizations. Although serialization is one of the fundamental operations for ensuring correct data transmission, current implementations are not suitable for critical systems and there are no alternatives. This work proposes a predictable serialization, which has involved the development of a new compiler for the generation of optimized code according to the computational model. The proposed solution has the advantage that, at compile time, it allows the communications to be scheduled and the memory usage to be adjusted. In order to validate the design and implementation, a demanding validation process has been carried out with emphasis on the functional behavior, the memory usage, the processor usage (end-to-end response time and the response time of each functional block) and the network usage (real consumption compared with the estimated one). The good results obtained in an industrial application developed by Thales Avionics (a flight management system) and in exhaustive tests have shown that the design and the prototype are reliable for industrial applications with strict timing requirements.
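The abstract above describes remote references that carry non-functional (timing and resource) parameters and a two-phase scheme that separates resource reservation from the invocation itself. The following plain-Java sketch only illustrates that idea; all names are hypothetical and it is not the API of the thesis middleware.

// Plain-Java sketch (hypothetical names, not the middleware's actual API) of a remote
// reference that carries real-time attributes and separates resource reservation
// (phase 1) from the remote invocation itself (phase 2).
public class RealtimeRemoteRefSketch {

    /** Non-functional parameters attached to a remote reference. */
    static final class RealtimeParams {
        final long deadlineMillis;   // end-to-end deadline for one invocation
        final int maxMessageBytes;   // worst-case serialized message size
        RealtimeParams(long deadlineMillis, int maxMessageBytes) {
            this.deadlineMillis = deadlineMillis;
            this.maxMessageBytes = maxMessageBytes;
        }
    }

    /** A remote reference whose resources must be reserved before any call is made. */
    static final class RealtimeRemoteRef {
        private final String endpoint;
        private final RealtimeParams params;
        private boolean reserved = false;

        RealtimeRemoteRef(String endpoint, RealtimeParams params) {
            this.endpoint = endpoint;
            this.params = params;
        }

        /** Phase 1: performed at system start-up, outside the time-critical path. */
        void reserve() {
            // A real middleware would pre-allocate serialization buffers, bind a
            // dedicated handler thread and configure the transport here.
            reserved = true;
        }

        /** Phase 2: synchronous invocation that enforces the reserved message size. */
        byte[] invoke(byte[] request) {
            if (!reserved) throw new IllegalStateException("resources not reserved");
            if (request.length > params.maxMessageBytes)
                throw new IllegalArgumentException("message exceeds reserved size");
            // The transport is not modelled; a real implementation would also watch
            // the deadline (params.deadlineMillis) and signal a timing failure.
            return new byte[0];
        }
    }

    public static void main(String[] args) {
        RealtimeRemoteRef ref =
            new RealtimeRemoteRef("rtnode-1/fms", new RealtimeParams(20, 1024));
        ref.reserve();                              // phase 1: resource allocation
        byte[] reply = ref.invoke(new byte[256]);   // phase 2: bounded invocation
        System.out.println("Reply length: " + reply.length);
    }
}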

Relevance: 90.00%

Abstract:

In order to satisfy safety-critical requirements, train control systems (TCS) often employ a layered safety communication protocol to provide reliable services. However, both the description and the verification of these safety protocols can be formidable because of the system complexity. In this paper, interface automata (IA) are used to describe the safety service interface behaviors of a safety communication protocol. A formal verification method is proposed in which the safety communication protocols are described with IA and the IA model is translated into a PROMELA model, so that the protocols can be verified with the model checker SPIN. A case study of using this method to describe and verify a safety communication protocol is included. The verification results illustrate that the proposed method is effective for describing safety protocols and for checking deadlocks, livelocks and several mandatory consistency properties. A prototype of the safety protocols has also been developed based on the presented formal verification method.
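The abstract relies on translating interface-automata models into PROMELA so that SPIN can check properties such as deadlock freedom. As a rough illustration of what such a check does, the following Java sketch explores the reachable joint states of a toy two-party handshake and reports any state with no outgoing transition; the protocol, states and transitions are invented for the example and are not the paper's model.

// Illustrative Java sketch (invented protocol, not the paper's IA model): exhaustive
// exploration of the reachable joint states of a tiny two-party handshake, reporting
// any reachable state with no outgoing transition, i.e. a deadlock, which is one of
// the properties SPIN checks on the generated PROMELA model.
import java.util.*;

public class DeadlockCheck {
    /** Joint state of the composed system: sender state x receiver state. */
    record State(String sender, String receiver) {}

    static List<State> successors(State s) {
        List<State> next = new ArrayList<>();
        if (s.sender().equals("IDLE") && s.receiver().equals("WAIT"))
            next.add(new State("SENT", "GOT"));          // send! / send? synchronisation
        if (s.sender().equals("SENT") && s.receiver().equals("GOT"))
            next.add(new State("IDLE", "WAIT"));         // ack! / ack? synchronisation
        if (s.sender().equals("SENT"))
            next.add(new State("FAIL", s.receiver()));   // deliberately faulty timeout
        return next;
    }

    public static void main(String[] args) {
        Deque<State> frontier = new ArrayDeque<>();
        Set<State> visited = new HashSet<>();
        State init = new State("IDLE", "WAIT");
        frontier.add(init);
        visited.add(init);
        while (!frontier.isEmpty()) {                    // breadth-first reachability
            State s = frontier.poll();
            List<State> succ = successors(s);
            if (succ.isEmpty())
                System.out.println("Deadlock found in state: " + s);
            for (State n : succ)
                if (visited.add(n)) frontier.add(n);
        }
        System.out.println("Explored " + visited.size() + " reachable states.");
    }
}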

Relevance: 90.00%

Abstract:

Embedded systems are becoming more common and more complex every day, so finding safe, effective and inexpensive software development processes aimed specifically at this class of systems is more necessary than ever. Unlike in the recent past, current technological advances in the field of microprocessors allow the development of equipment with more than enough performance to run several software systems on a single machine. In addition, there are embedded systems with safety requirements on whose correct operation many human lives and/or large economic investments depend. These software systems are designed and implemented in accordance with very strict and demanding software development standards, and in some cases certification of the software may also be necessary. For these cases, mixed-criticality systems can be a very valuable alternative. In this class of systems, applications with different criticality levels execute on the same computer. However, it is often necessary to certify the entire system at the criticality level of the most critical application, which makes costs soar. Virtualization has been put forward as a very interesting technology for containing those costs. This technology allows a set of virtual machines, or partitions, to execute the applications with very high levels of both temporal and spatial isolation, which in turn allows each partition to be certified independently. The development of partitioned mixed-criticality systems requires updating the traditional software development models, since these cover neither the new activities nor the new roles required in the development of such systems: for example, the system integrator must define the partitions, and the application developer must take into account the characteristics of the partition on which the application is going to run. Traditionally, the V-model has been especially relevant in the development of embedded systems; it has therefore been adapted to cover scenarios such as the parallel development of applications or the incorporation of a new partition into an existing system. The objective of this doctoral thesis is to improve the current technology for developing partitioned mixed-criticality systems. To this end, a framework aimed specifically at facilitating and improving the development processes of this class of systems has been designed and implemented, and in particular an algorithm has been created that generates the partitioning of the system automatically. The proposed development framework integrates all the activities needed for the development of a partitioned system, including the new roles and activities mentioned above. Moreover, the design of the framework is based on Model-Driven Engineering, which promotes the use of models as fundamental elements of the development process. It therefore provides the tools needed to model and partition the system, as well as to validate the results and generate the artifacts required for compiling, building and deploying it. In addition, the extensibility of the framework and its integration with validation tools have been key factors in its design.
In particular, new non-functional requirements can be incorporated into the framework, as can the generation of new artifacts such as documentation or different programming languages. A key part of the framework is the partitioning algorithm. This algorithm has been designed to be independent of the requirements of the applications and to allow the system integrator to implement new system requirements. To achieve this independence, partitioning constraints have been defined, and the algorithm guarantees that these constraints are satisfied in the partitioned system resulting from its execution. The partitioning constraints have been designed with enough expressive power so that, with a small set of them, most of the common non-functional requirements can be expressed. The constraints can be defined manually by the system integrator or generated automatically by a tool from the functional and non-functional requirements of an application. The partitioning algorithm takes the system models and the partitioning constraints as inputs. As a result of its execution, a deployment model is generated that defines the partitions needed to partition the system; each partition, in turn, defines which applications must execute in it and the resources the partition needs in order to execute correctly. The partitioning problem and the partitioning constraints are modeled mathematically by means of colored graphs, in which a proper vertex coloring represents a correct partitioning of the system. The algorithm has also been designed so that, if necessary, partitionings alternative to the one initially proposed can be obtained. The development framework, including the partitioning algorithm, has been successfully tested on two industrial use cases: the UPMSat-2 satellite and a demonstrator of the control system of a wind turbine. In addition, the algorithm has been validated by running numerous synthetic scenarios, including some very complex ones with more than 500 applications. ABSTRACT The importance of embedded software is growing, as it is required for a large number of systems. Devising cheap, efficient and reliable development processes for embedded systems is thus a notable challenge nowadays. Computer processing power is continuously increasing, and as a result it is currently possible to integrate complex systems in a single processor, which was not feasible a few years ago. Embedded systems may have safety-critical requirements, and their failure may result in loss of life or substantial economic loss. The development of these systems requires stringent development processes that are usually defined by suitable standards; in some cases their certification is also necessary. This scenario fosters the use of mixed-criticality systems, in which applications of different criticality levels must coexist in a single system. In these cases it is usually necessary to certify the whole system, including non-critical applications, which is costly. Virtualization emerges as an enabling technology for dealing with this problem. The system is structured as a set of partitions, or virtual machines, that can be executed with temporal and spatial isolation. In this way, applications can be developed and certified independently.
The development of MCPS (Mixed-Criticality Partitioned Systems) requires additional roles and activities that traditional systems do not require. The system integrator has to define the system partitions, and application development has to consider the characteristics of the partition to which each application is allocated. In addition, traditional software process models have to be adapted to this scenario. The V-model is commonly used in embedded systems development; it can be adapted to the development of MCPS by enabling the parallel development of applications or the addition of a partition to an existing system. The objective of this PhD is to improve the available technology for MCPS development by providing a framework tailored to the development of this type of system and by defining a flexible and efficient algorithm for automatically generating system partitionings. The goal of the framework is to integrate all the activities required for developing MCPS and to support the different roles involved in this process. The framework is based on MDE (Model-Driven Engineering), which emphasizes the use of models in the development process. It provides basic means for modeling the system, generating system partitionings, validating the system and generating the final artifacts. The framework has been designed to facilitate its extension and its integration with external validation tools. In particular, it can be extended by adding support for additional non-functional requirements and for further final artifacts, such as new programming languages or additional documentation. The framework includes a novel partitioning algorithm. It has been designed to be independent of the types of application requirements and to enable the system integrator to tailor the partitioning to the specific requirements of a system. This independence is achieved by defining partitioning constraints that must be met by the resulting partitioning. They have sufficient expressive capacity to state the most common constraints, and they can be defined manually by the system integrator or generated automatically from the functional and non-functional requirements of the applications. The partitioning algorithm uses the system models and the partitioning constraints as its inputs and generates a deployment model composed of a set of partitions. Each partition is in turn composed of a set of allocated applications and assigned resources. The partitioning problem, including applications and constraints, is modeled as a colored graph, and a valid partitioning corresponds to a proper vertex coloring. A specially designed algorithm generates this coloring and is able to provide alternative partitionings if required. The framework, including the partitioning algorithm, has been successfully used in the development of two industrial use cases: the UPMSat-2 satellite and the control system of a wind-power turbine. The partitioning algorithm has been successfully validated with a large number of synthetic loads, including complex scenarios with more than 500 applications.
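Both abstracts describe modeling the partitioning problem as a colored graph in which a proper vertex coloring is a valid partitioning. The sketch below shows the general idea with a simple greedy coloring in Java; the application names and separation constraints are hypothetical, and the thesis algorithm is considerably more elaborate (it also handles resources and can produce alternative partitionings).

// Minimal sketch, not the thesis algorithm: applications are vertices, a
// "must be in different partitions" constraint is an edge, and a proper greedy
// vertex coloring yields one feasible partitioning (color = partition id).
import java.util.*;

public class GreedyPartitioning {
    public static Map<String, Integer> partition(Map<String, Set<String>> separation) {
        Map<String, Integer> color = new HashMap<>();
        for (String app : separation.keySet()) {
            Set<Integer> used = new HashSet<>();
            for (String neighbour : separation.getOrDefault(app, Set.of()))
                if (color.containsKey(neighbour)) used.add(color.get(neighbour));
            int c = 0;
            while (used.contains(c)) c++;   // smallest partition not used by a neighbour
            color.put(app, c);
        }
        return color;
    }

    public static void main(String[] args) {
        // Hypothetical example: the critical application must not share a partition
        // with either of the non-critical applications.
        Map<String, Set<String>> constraints = Map.of(
            "flightControl", Set.of("logging", "telemetryUi"),
            "logging",       Set.of("flightControl"),
            "telemetryUi",   Set.of("flightControl"));
        System.out.println(partition(constraints));
    }
}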

Relevance: 80.00%

Abstract:

First, this paper describes a future layered Air Traffic Management (ATM) system centred on the execution phase of flights. The layered ATM model is based on the work currently performed by SESAR [1] and takes into account the availability of accurate and updated flight information "seen by all" across the European airspace. This shared information on each flight will be referred to as the Reference Business Trajectory (RBT). In the layered ATM system, exchanges of information will involve several actors (human or automatic) with varying time horizons, areas of responsibility and tasks. Second, the paper identifies the need to define the negotiation processes required to agree on revisions to the RBT in the layered ATM system. Third, the final objective of the paper is to bring to the attention of researchers and engineers the commonalities between multi-player games and Collaborative Decision Making (CDM) processes in a layered ATM system.

Relevance: 80.00%

Abstract:

Service-Oriented Computing (SOC) has established itself as a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform the more complex or higher-level, frequently inter-organizational, tasks using atomic services or other service compositions. In such systems, Quality of Service (QoS) properties, such as processing speed, cost, availability or security, are critical for the usability of the services or their compositions in any concrete application. The analysis of these properties can be carried out in a more precise and information-rich way if program analysis techniques are used, such as complexity analysis or sharing analysis, which are able to analyze simultaneously the control and data structures, dependencies and operations in a composition. Computational cost analysis for service compositions can support predictive monitoring as well as proactive adaptation through the automatic inference of computational cost, using upper and lower bounds expressed as functions of the value or size of the input messages. Such cost functions can be used for adaptation in the form of selecting, among candidate services, those that minimize the total cost of the composition, based on the actual data passed to each service. The cost functions can also be combined with parameters extracted empirically from the infrastructure to produce QoS bound functions over the input data, which can be used to predict, at invocation time, potential or imminent violations of Service Level Agreements (SLAs). In critical compositions, sufficiently effective and accurate continuous QoS prediction can be based on constraint modeling of the QoS from the structure of the composition, empirical run-time data and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, as well as to choreographies with multiple participants engaged in complex interactions that modify their state. Sharing analysis can support adaptation actions, such as parallelization, fragmentation and component selection, which are based on functional dependencies and on the information content of the messages, internal data and activities of the composition, when complex control constructs such as loops, branches and nested flows are used. Both the functional dependencies and the information content (described through some user-defined attributes) can be expressed using a representation based on first-order logic (Horn clauses), and the analysis results can be interpreted as lattice-based conceptual models. ABSTRACT Service-Oriented Computing (SOC) is a widely accepted paradigm for the development of flexible, distributed and adaptable software systems, in which service compositions perform more complex, higher-level, often cross-organizational tasks using atomic services or other service compositions.
In such systems, Quality of Service (QoS) properties, such as performance, cost, availability or security, are critical for the usability of services and their compositions in concrete applications. Analysis of these properties can become more precise and richer in information if it employs program analysis techniques, such as complexity and sharing analyses, which are able to simultaneously take into account both the control and the data structures, dependencies, and operations in a composition. Computation cost analysis for service composition can support predictive monitoring and proactive adaptation by automatically inferring computation cost using upper- and lower-bound functions of the value or size of input messages. These cost functions can be used for adaptation by selecting service candidates that minimize the total cost of the composition, based on the actual data that is passed to them. The cost functions can also be combined with empirically collected infrastructural parameters to produce QoS bound functions of the input data that can be used to predict potential or imminent Service Level Agreement (SLA) violations at the moment of invocation. In mission-critical applications, effective and accurate continuous QoS prediction, based on continuations, can be achieved by constraint modeling of composition QoS based on its structure, the data known at runtime, and (when available) the results of complexity analysis. This approach can be applied to service orchestrations with centralized flow control, and to choreographies with multiple participants engaged in complex stateful interactions. Sharing analysis can support adaptation actions, such as parallelization, fragmentation, and component selection, which are based on functional dependencies and on the information content of the composition messages, internal data, and activities, in the presence of complex control constructs such as loops, branches, and sub-workflows. Both the functional dependencies and the information content (described using user-defined attributes) can be expressed using a first-order logic (Horn clause) representation, and the analysis results can be interpreted as lattice-based conceptual models.
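The abstracts describe inferring upper-bound cost functions of the input-message size and combining them with empirically measured platform parameters to predict SLA violations at invocation time. The following Java sketch illustrates that flow with invented numbers and function shapes; it is not the analysis or prediction machinery described in the work.

// Minimal sketch under assumed names and figures: an upper-bound cost function of the
// input message size, combined with an empirically measured per-step execution time,
// used at invocation time to flag a potential SLA violation before calling a service.
public class SlaPrediction {
    /** Hypothetical inferred upper bound on computation steps: cost(n) = 3*n + 50. */
    static long upperBoundSteps(long inputSizeBytes) {
        return 3 * inputSizeBytes + 50;
    }

    /** Predicted worst-case response time from the cost bound and platform parameters. */
    static double predictedMillis(long inputSizeBytes, double millisPerStep, double networkMillis) {
        return upperBoundSteps(inputSizeBytes) * millisPerStep + networkMillis;
    }

    public static void main(String[] args) {
        double slaMillis = 200.0;          // agreed response-time limit
        long messageSize = 40_000;         // size of the actual input message
        double prediction = predictedMillis(messageSize, 0.001, 15.0);
        if (prediction > slaMillis)
            System.out.printf("Predicted %.1f ms exceeds SLA of %.1f ms: adapt or reroute%n",
                              prediction, slaMillis);
        else
            System.out.printf("Predicted %.1f ms within SLA of %.1f ms%n", prediction, slaMillis);
    }
}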

Relevance: 80.00%

Abstract:

Cognitive wireless sensor networks (CWSNs) are a new paradigm that integrates cognitive features into traditional wireless sensor networks (WSNs) to mitigate important problems such as spectrum occupancy. Security in cognitive wireless sensor networks is an important problem, since these networks manage critical applications and data; the specific constraints of WSNs make the problem even more critical, and effective solutions have not yet been implemented. The primary user emulation (PUE) attack is the most studied of the attacks that derive specifically from the new cognitive features. This work discusses a new approach, based on anomaly-behavior detection and collaboration, to detect the primary user emulation attack in CWSN scenarios. Two non-parametric algorithms, suitable for low-resource networks like CWSNs, have been used in this work: the cumulative sum (CUSUM) and data clustering algorithms. The comparison between them is based on characteristics such as detection delay, learning time, scalability, resource usage, and scenario dependency. The algorithms have been tested using a cognitive simulator that provides important results in this area. Both algorithms have been shown to be valid for detecting PUE attacks, reaching a detection rate of 99% with less than 1% false positives when collaboration is used.
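To make the cumulative-sum idea concrete, the sketch below shows a one-sided non-parametric CUSUM detector over received-signal-strength samples in Java. The expected level, slack and threshold values are illustrative assumptions, not parameters from the paper.

// Minimal sketch with illustrative thresholds: a non-parametric CUSUM detector over
// received-signal-strength samples; an alarm is raised when the cumulative deviation
// from the expected level exceeds a threshold, suggesting a possible PUE attack.
public class CusumPueDetector {
    private final double expectedLevel;  // nominal RSS of the legitimate primary user
    private final double slack;          // tolerated deviation before accumulation
    private final double threshold;      // alarm threshold
    private double cusum = 0.0;

    CusumPueDetector(double expectedLevel, double slack, double threshold) {
        this.expectedLevel = expectedLevel;
        this.slack = slack;
        this.threshold = threshold;
    }

    /** Feed one RSS sample; returns true if an anomaly (possible PUE attack) is flagged. */
    boolean update(double rssSample) {
        double deviation = Math.abs(rssSample - expectedLevel) - slack;
        cusum = Math.max(0.0, cusum + deviation);
        return cusum > threshold;
    }

    public static void main(String[] args) {
        CusumPueDetector detector = new CusumPueDetector(-60.0, 2.0, 12.0);
        double[] samples = {-60.5, -59.8, -61.0, -52.0, -51.5, -50.9, -51.2};
        for (double s : samples)
            if (detector.update(s))
                System.out.println("PUE alarm at sample " + s + " dBm");
    }
}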

Relevance: 80.00%

Abstract:

Cognitive Wireless Sensor Networks (CWSNs) are a new paradigm which integrates cognitive features into traditional Wireless Sensor Networks (WSNs) to mitigate important problems such as spectrum occupancy. Security in Cognitive Wireless Sensor Networks is an important problem because these networks manage critical applications and data. Moreover, the specific constraints of WSNs make the problem even more critical. However, effective solutions have not been implemented yet. Among the specific attacks derived from the new cognitive features, the most studied one is the Primary User Emulation (PUE) attack. This paper discusses a new approach, based on anomaly-behavior detection and collaboration, to detect the PUE attack in CWSN scenarios. A non-parametric CUSUM algorithm, suitable for low-resource networks like CWSNs, has been used in this work. The algorithm has been tested using a cognitive simulator that provides important results in this area. For example, the results show that the number of collaborating nodes is the most important parameter for improving the PUE attack detection rate: if 20% of the nodes collaborate, PUE detection reaches 98% with less than 1% false positives.
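The abstract highlights collaboration among nodes as the key parameter for detection performance. The following Java sketch shows one simple way local decisions could be fused at a collector node (a quorum rule); the fusion rule and figures are assumptions for illustration, not the paper's collaboration protocol.

// Minimal sketch of the collaboration idea (illustrative fusion rule, not the paper's
// protocol): each collaborating node reports its local CUSUM decision and a network-wide
// PUE alarm is raised when enough nodes agree.
import java.util.List;

public class CollaborativeFusion {
    /** Returns true when at least `quorum` of the reported local decisions are alarms. */
    static boolean fuse(List<Boolean> localAlarms, double quorum) {
        long alarms = localAlarms.stream().filter(Boolean::booleanValue).count();
        return alarms >= Math.ceil(quorum * localAlarms.size());
    }

    public static void main(String[] args) {
        // Example: 10 nodes collaborate, 3 raise a local alarm, quorum is 20%.
        List<Boolean> reports = List.of(true, false, false, true, false,
                                        false, true, false, false, false);
        System.out.println("Network-wide PUE alarm: " + fuse(reports, 0.20));
    }
}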

Relevance: 80.00%

Abstract:

Wireless sensor networks are one of the fastest-growing sectors within wireless networking. The rapid adoption of these networks as a solution for many new applications has led to growing traffic in the radio spectrum. Because wireless sensor networks operate in the licence-free Industrial, Scientific and Medical (ISM) bands, the spectrum has become saturated, and within a few years this saturation will no longer allow proper operation. The Cognitive Radio (CR) paradigm has appeared with the aim of solving this kind of problem. Introducing cognitive capabilities into wireless sensor networks makes it possible to use these networks for applications with stricter requirements in terms of reliability, coverage or quality of service. Networks that combine all of these characteristics are called cognitive wireless sensor networks (CWSNs). The improved performance of CWSNs allows them to be used in critical applications where they could not be used before, such as structural monitoring, medical services, military environments or surveillance. However, these applications also require other characteristics that cognitive radio does not provide directly, such as security. Security in CWSNs is a little-developed aspect, since it is not an essential feature for their operation, unlike spectrum sensing or collaboration; nevertheless, its study and improvement are essential for the growth of CWSNs. Therefore, this thesis aims to implement countermeasures that use the new cognitive capabilities, especially at the physical layer, taking into account the limitations of WSNs. Within the work cycle of this thesis, two security strategies have been developed against attacks of special importance in cognitive networks: the primary user emulation (PUE) attack and the eavesdropping attack against privacy. To mitigate the PUE attack, a countermeasure based on anomaly detection has been developed, and two different algorithms have been implemented to detect this attack: the cumulative sum algorithm and the data clustering algorithm. Once their validity was verified, they were compared with each other and the effects that can influence their performance were investigated. To combat the eavesdropping attack, a countermeasure based on the injection of artificial noise has been developed, so that the attacker cannot distinguish the signals carrying information from the noise, without affecting the communication of interest. The impact of this countermeasure on the resources of the network has also been studied. As a parallel result, a test framework for CWSNs has been developed, consisting of a simulator and a network of real cognitive nodes. These tools have been essential for the implementation and the extraction of the results of this thesis. ABSTRACT Wireless Sensor Networks (WSNs) are one of the fastest-growing sectors in wireless networks. The fast introduction of these networks as a solution in many new applications has increased the traffic in the radio spectrum. Due to the operation of WSNs in the licence-free industrial, scientific, and medical (ISM) bands, saturation has occurred in these frequencies, which will make the current operation methods impossible in the future. Cognitive radio (CR) has appeared as a solution to this problem.
The networks that join all the mentioned features together are called cognitive wireless sensor networks (CWSNs). The adoption of cognitive features in WSNs allows the use of these networks in applications with higher reliability, coverage, or quality-of-service requirements. The improved performance of CWSNs allows their use in critical applications where they could not be used before, such as structural monitoring, medical care, military scenarios, or security monitoring systems. Nevertheless, these applications also need other features that cognitive radio does not add directly, such as security. Security in CWSNs has not yet been fully explored because it is not an essential element of the basic operation of these networks; instead, other fields such as spectrum sensing or collaboration have been explored in depth. However, the study of security in CWSNs is essential for their growth. Therefore, the main objective of this thesis is to study the impact of some cognitive radio attacks on CWSNs and to implement countermeasures using the new cognitive capabilities, especially at the physical layer and considering the limitations of WSNs. Within the work cycle of this thesis, security strategies against two important kinds of attacks in cognitive networks have been developed. These attacks are the primary user emulation (PUE) attack and the eavesdropping attack. A countermeasure against the PUE attack based on anomaly detection has been developed, with two different algorithms implemented: the cumulative sum algorithm and the data clustering algorithm. After the verification of these solutions, they have been compared and the side effects that can disturb their performance have been analyzed. The approach developed against the eavesdropping attack is based on the generation of artificial noise to conceal information messages. The impact of this countermeasure on network resources has also been studied. As a parallel result, a new test framework for CWSNs has been developed, consisting of a simulator and a real network with cognitive nodes. This framework has been crucial for the implementation and extraction of the results presented in this thesis.
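The eavesdropping countermeasure described above injects artificial noise so that an attacker cannot separate information-bearing signals from noise while the intended communication is preserved. The Java sketch below illustrates the principle under a strong simplifying assumption, namely that the legitimate receiver shares the pseudo-random seed used to generate the noise and can therefore remove it; this is only a conceptual toy, not the physical-layer scheme developed in the thesis.

// Conceptual sketch under simplifying assumptions: the transmitter adds pseudo-random
// noise generated from a seed shared only with the legitimate receiver, which can
// subtract it; an eavesdropper cannot, so its residual error stays large.
import java.util.Random;

public class ArtificialNoiseDemo {
    public static void main(String[] args) {
        long sharedSeed = 12345L;                 // known to sender and legitimate receiver
        double[] signal = {1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0};

        Random txNoise = new Random(sharedSeed);
        double[] transmitted = new double[signal.length];
        for (int i = 0; i < signal.length; i++)
            transmitted[i] = signal[i] + 3.0 * txNoise.nextGaussian();  // inject noise

        // Legitimate receiver regenerates the same noise sequence and removes it.
        Random rxNoise = new Random(sharedSeed);
        double legitimateError = 0.0, eavesdropperError = 0.0;
        for (int i = 0; i < signal.length; i++) {
            double cleaned = transmitted[i] - 3.0 * rxNoise.nextGaussian();
            legitimateError += Math.pow(cleaned - signal[i], 2);
            eavesdropperError += Math.pow(transmitted[i] - signal[i], 2);
        }
        System.out.println("Legitimate receiver residual error: " + legitimateError);
        System.out.println("Eavesdropper residual error:        " + eavesdropperError);
    }
}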

Relevance: 40.00%

Abstract:

The need to refine models for best-estimate calculations, based on good-quality experimental data, has been expressed in many recent meetings in the field of nuclear applications. The modeling needs arising in this respect should not be limited to the currently available macroscopic methods but should be extended to next-generation analysis techniques that focus on more microscopic processes. One of the most valuable databases identified for thermal-hydraulics modeling was developed by the Nuclear Power Engineering Corporation (NUPEC), Japan. From 1987 to 1995, NUPEC performed steady-state and transient critical power and departure from nucleate boiling (DNB) test series based on equivalent full-size mock-ups. Considering the reliability not only of the measured data but also of other relevant parameters, such as the system pressure, inlet sub-cooling and rod surface temperature, these test series supplied the first substantial database for the development of truly mechanistic and consistent models for boiling transition and critical heat flux. Over the last few years, the Pennsylvania State University (PSU), under the sponsorship of the U.S. Nuclear Regulatory Commission (NRC), has prepared, organized, conducted and summarized the OECD/NRC Full-size Fine-mesh Bundle Tests (BFBT) Benchmark. The international benchmark activities have been conducted in cooperation with the Nuclear Energy Agency of the Organisation for Economic Co-operation and Development (OECD/NEA) and the Japan Nuclear Energy Safety (JNES) organization, Japan. Consequently, JNES has made the Boiling Water Reactor (BWR) NUPEC database available for the purposes of the benchmark. Based on the success of the OECD/NRC BFBT benchmark, JNES decided also to release the data from the NUPEC Pressurized Water Reactor (PWR) subchannel and bundle tests for a follow-up international benchmark, the OECD/NRC PWR Subchannel and Bundle Tests (PSBT) benchmark. This paper presents an application of the joint Penn State University/Technical University of Madrid (UPM) version of the well-known subchannel code COBRA-TF, namely CTF, to the critical power and departure from nucleate boiling (DNB) exercises of the OECD/NRC BFBT and PSBT benchmarks.

Relevance: 40.00%

Abstract:

This study addresses deflagration initiation of lean and stoichiometric hydrogen–air mixtures by the sudden discharge of a hot jet of their adiabatic combustion products. The objective is to compute the minimum jet radius required for ignition, a relevant quantity of interest for safety and technological applications. For sufficiently small discharge velocities, the numerical solution of the problem requires integration of the axisymmetric Navier–Stokes equations for chemically reacting ideal-gas mixtures, supplemented by standard descriptions of the molecular transport terms and a suitably reduced chemical-kinetic mechanism for the description of the chemistry. The computations provide the variation of the critical radius for hot-jet ignition with both the jet velocity and the equivalence ratio of the mixture, giving values that vary between a few tens of microns and a few hundred microns over the range of conditions explored. For a given equivalence ratio, the critical radius is found to increase with increasing injection velocity, although the increase is only moderately large. On the other hand, for a given injection velocity, the smallest critical radius is found at stoichiometric conditions.

Relevance: 40.00%

Abstract:

Pervasive computing offers new scenarios in which users are surrounded by invisible and proactive technology that turns everyday environments into smart spaces. Although the utility and power of solutions developed using this computing paradigm have been proven, there are unresolved problems that hinder their acceptance and their inclusion in our private lives. Users have trouble understanding how a pervasive computing solution operates, and they therefore need to be able to trust that the solution works properly and according to their expectations. Nevertheless, the concept of trust is already framed for a specific use within the ecosystem of applications that can populate a smart space. To take this concept to the whole space, we propose to study and define the concept of confidence. In contrast to trust, confidence has deeper psychological implications.

Relevance: 30.00%

Abstract:

Proof-Carrying Code (PCC) is a general approach to mobile code safety in which programs are augmented with a certificate (or proof). The intended benefit is that the program consumer can locally validate the certificate w.r.t. the "untrusted" program by means of a certificate checker, a process which should be much simpler, more efficient, and more automatic than generating the original proof. The practical uptake of PCC greatly depends on the existence of a variety of enabling technologies which allow both proving programs correct and replacing a costly verification process by an efficient checking procedure on the consumer side. In this work we propose Abstraction-Carrying Code (ACC), a novel approach which uses abstract interpretation as the enabling technology. We argue that the large body of applications of abstract interpretation to program verification is amenable to the overall PCC scheme. In particular, we rely on an expressive class of safety policies which can be defined over different abstract domains. We use an abstraction (or abstract model) of the program, computed by standard static analyzers, as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We believe that ACC brings the expressiveness, flexibility and automation which are inherent in abstract interpretation techniques to the area of mobile code safety.
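The checking step of ACC, validating in a single pass that a shipped abstraction is indeed a safe over-approximation of the program, can be illustrated on a toy example. The Java sketch below checks an interval-domain certificate for the loop `x = 0; while (x < 10) x = x + 1;`: the certificate must cover the initial state, be stable under the abstract effect of the loop body, and imply the safety policy. The program, domain and policy are invented for illustration and are far simpler than the setting of the paper.

// Toy sketch of the consumer-side check in Abstraction-Carrying Code: the producer
// would ship the loop-head interval as a certificate, and the consumer verifies in one
// pass that it is a post-fixpoint and that it entails the safety policy (x <= 10).
public class AccChecker {
    record Interval(int lo, int hi) {
        boolean contains(int v) { return lo <= v && v <= hi; }
        boolean covers(Interval other) { return lo <= other.lo && other.hi <= hi; }
    }

    /** Abstract transfer of one loop iteration: assume x < 10, then x = x + 1. */
    static Interval body(Interval in) {
        int lo = Math.min(in.lo(), 9);              // restrict to x < 10
        int hi = Math.min(in.hi(), 9);
        return new Interval(lo + 1, hi + 1);        // effect of x = x + 1
    }

    /** Single-pass certificate check: initial state covered, stable, policy implied. */
    static boolean check(Interval certificate) {
        boolean coversInit = certificate.contains(0);             // x = 0 before the loop
        boolean stable = certificate.covers(body(certificate));   // post-fixpoint condition
        boolean policy = certificate.hi() <= 10;                  // safety policy: x <= 10
        return coversInit && stable && policy;
    }

    public static void main(String[] args) {
        System.out.println("Certificate [0,10] accepted: " + check(new Interval(0, 10)));
        System.out.println("Certificate [0,5]  accepted: " + check(new Interval(0, 5)));
    }
}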

Relevance: 30.00%

Abstract:

Laser welding (LW) is increasingly used in manufacturing due to its advantages, such as accurate control, good repeatability, low heat input, the possibility of joining special materials, high speed, and the capability to join small-dimension parts. LW lends itself to robotized manufacturing, and fabrication cells use various levels of flexibility, from specialized robots to very flexible setups. This paper features several LW applications using two industrially-scaled manufacturing cells at the UPM Laser Centre (CLUPM) of the Polytechnical University of Madrid (Universidad Politécnica de Madrid). The first, dedicated to Remote Laser Welding (RLW) of thin sheets for the automotive and other sectors, uses a CO2 laser of 3500 W. The second offers high flexibility, is based on a 6-axis ABB robot and an Nd:YAG laser of 3300 W, and is meant for various laser processing methods, including welding. After a short description of each cell, several LW applications tested at CLUPM and recently implemented in industry are briefly presented: RLW of coated automotive sheets, LW of high-strength automotive sheets, LW versus laser hybrid welding (LHW) of dual-phase steel thin sheets, and LHW of thin sheets of stainless steel and carbon steel (dissimilar joints). The main technological issues overcome and the critical process parameters are pointed out, and conclusions about achievements and trends are provided.

Relevance: 30.00%

Abstract:

Since the Three Mile Island accident, an important focus of pressurized water reactor (PWR) transient analyses has been the small-break loss-of-coolant accident (SBLOCA). In 2002, the discovery of thinning of the vessel head wall at the Davis-Besse nuclear power plant reactor indicated the possibility of an SBLOCA in the upper head of the reactor vessel as a result of circumferential cracking of a control rod drive mechanism penetration nozzle, which has lent even greater importance to the study of SBLOCAs. Several experimental tests have been performed at the Large Scale Test Facility to simulate the behavior of a PWR during an upper-head SBLOCA. The last of these tests, Organisation for Economic Co-operation and Development Nuclear Energy Agency Rig of Safety Assessment (OECD/NEA ROSA) Test 6.1, was performed in 2005. This test was simulated with the TRACE 5.0 code, and good agreement with the experimental results was obtained. Additionally, a broad analysis of an upper-head SBLOCA with failed high-pressure safety injection in a Westinghouse PWR was performed, taking into account different accident management actions and conditions in order to check their suitability. This issue has also been analyzed in the framework of the OECD/NEA ROSA project and the Code Applications and Maintenance Program (CAMP). The main conclusion is that the current emergency operating procedures for the Westinghouse reactor design are adequate for these kinds of sequences and do not need to be modified.

Relevance: 30.00%

Abstract:

The use of cloud computing is extending to all kinds of systems, including those that are part of critical infrastructures, and measuring their reliability is becoming more difficult. Computing is becoming the fifth utility, in part thanks to the use of cloud services. Cloud computing is now used by all types of systems and organizations, including critical infrastructure, creating hidden inter-dependencies on both public and private cloud models. This paper investigates the use of cloud computing by critical infrastructure systems and the reliability and continuity-of-service risks associated with its use by critical systems. Examples of its use by different critical industries are presented; although the use of cloud computing by such systems is not yet widespread, this paper points out the risk it poses for the future. The concepts of macro and micro dependability and the model we introduce are useful for defining inter-dependencies and for analyzing the resilience of systems that depend on other systems, specifically in the cloud model.
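As a very rough illustration of why hidden cloud inter-dependencies matter for reliability, the Java sketch below estimates the end-to-end availability of a critical service as the series composition of the availabilities of its transitive dependencies, counting a shared cloud provider only once. The dependency graph and availability figures are invented, and this is not the macro/micro dependability model introduced in the paper.

// Illustrative sketch (invented figures, not the paper's model): end-to-end availability
// of a critical service estimated as the product of the availabilities of the services
// it transitively depends on, which makes hidden dependencies on shared clouds explicit.
import java.util.*;

public class DependabilitySketch {
    // Dependency graph: service -> services it directly depends on (illustrative).
    static final Map<String, List<String>> DEPENDS_ON = Map.of(
        "scada-frontend", List.of("auth-service", "historian"),
        "auth-service",   List.of("public-cloud-A"),
        "historian",      List.of("public-cloud-A", "private-cloud-B"));

    // Stand-alone availability of each component (illustrative figures).
    static final Map<String, Double> AVAILABILITY = Map.of(
        "scada-frontend", 0.999, "auth-service", 0.999, "historian", 0.995,
        "public-cloud-A", 0.9995, "private-cloud-B", 0.999);

    /** End-to-end availability: product over the set of transitive dependencies. */
    static double endToEnd(String service) {
        Set<String> involved = new LinkedHashSet<>();
        collect(service, involved);
        return involved.stream().mapToDouble(AVAILABILITY::get).reduce(1.0, (a, b) -> a * b);
    }

    static void collect(String service, Set<String> seen) {
        if (!seen.add(service)) return;   // shared dependencies are counted once
        for (String dep : DEPENDS_ON.getOrDefault(service, List.of())) collect(dep, seen);
    }

    public static void main(String[] args) {
        System.out.printf("End-to-end availability of scada-frontend: %.5f%n",
                          endToEnd("scada-frontend"));
    }
}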