15 results for "Expanded critical incident approach"

at Universidad Politécnica de Madrid


Relevance:

40.00%

Publisher:

Abstract:

Although security and surveillance of Critical Infrastructures (CIs) are a growing concern for many countries and companies, Multi-Robot Systems (MRSs) have not yet been broadly used in this type of facility. This dissertation presents a novel study of the challenges raised by the implementation of such systems and proposes solutions to specific problems. First, a comprehensive analysis of different types of CIs has been carried out, emphasizing the influence of the facilities' characteristics on the design of a security and surveillance MRS. One of the most important needs in the surveillance of a CI is the detection of intruders. From a technical point of view, this problem can be abstracted as equivalent to the Detection and Tracking of Mobile Objects (DATMO). This dissertation proposes algorithms to solve this specific problem in a CI environment. Using 3D range images of the environment as input data, two detection algorithms for ground robots have been developed. These detection algorithms provide a list of moving objects in the robot's detection area. Direct image differencing and computer vision techniques are used when the robot is static; alternatively, multi-layer ground reconstructions are compared to detect dynamic objects while the robot is moving. Since CIs usually spread over large areas, it is very useful to incorporate aerial vehicles into the surveillance MRS, so a moving-object detection algorithm for aerial vehicles has also been developed. This algorithm compares the real optical flow obtained from a downward-facing camera with an artificial optical flow computed using a RANSAC-based homography matrix. Two tracking algorithms have been developed to follow the trajectories of the moving objects. These algorithms can efficiently handle occlusions and crossings, as well as exchange information among robots. The multi-robot tracking can be applied to any type of communication structure: centralized, decentralized, or a combination of both. Moreover, the developed tracking algorithms are independent of the detection algorithms and could potentially be used with other detection procedures or even with static sensors, such as cameras. In addition, using the 3D point clouds available to the robots, a relative localization algorithm has been developed to improve the position estimate of a given robot with observations from other robots. All the developed algorithms have been extensively tested in different simulated CIs using the Webots robotics simulator. Furthermore, the algorithms have also been validated with real robots operating in real scenarios. In conclusion, this dissertation presents a multi-robot approach to critical infrastructure surveillance, focusing mainly on detecting and tracking dynamic objects.
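
As a loose illustration of the aerial detection idea summarized above (comparing the measured optical flow against the flow that the camera's own motion would induce, with that motion estimated as a RANSAC homography), the following OpenCV sketch flags independently moving pixels. The function, parameters and thresholds are illustrative assumptions, not the dissertation's implementation.

```python
import cv2
import numpy as np

def detect_moving_objects(prev_gray, curr_gray, motion_threshold=2.0):
    """Flag pixels whose measured optical flow disagrees with the flow
    induced by the camera's own motion (modelled as a homography)."""
    # Dense optical flow between consecutive downward-facing frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Sparse matches to estimate the camera-induced homography with RANSAC.
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good_prev = pts[status.ravel() == 1].reshape(-1, 2)
    good_next = nxt[status.ravel() == 1].reshape(-1, 2)
    H, _ = cv2.findHomography(good_prev, good_next, cv2.RANSAC, 3.0)

    # "Artificial" optical flow: the displacement every pixel would have
    # if the whole scene were static and only the camera moved.
    h, w = prev_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    grid = np.stack([xs, ys], axis=-1).astype(np.float32).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(grid, H).reshape(h, w, 2)
    artificial_flow = warped - np.stack([xs, ys], axis=-1)

    # Residual flow above the threshold marks independently moving objects.
    residual = np.linalg.norm(flow - artificial_flow, axis=2)
    return residual > motion_threshold
```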

Relevance:

30.00%

Publisher:

Abstract:

The railway overhead (or catenary) is the system of cables responsible for supplying electric current to the train. This system has been reported to be wind-sensitive (Scanlon et al., 2000), and particularly prone to galloping phenomena. Galloping of the railway overhead consists of undamped cable oscillations triggered by aerodynamic forces acting on the contact wire. As is well known, the aerodynamic loads on the contact wire depend on the mean velocity of the incident flow and on the angle of attack. The presence of embankments or hills modifies both the vertical velocity profiles and the angles of attack of the flow (Paiva et al., 2009). These cross-wind-related oscillations can interfere with the safe operation of the railway service (Johnson, 1996). Therefore, a correct modelling of the phenomenon is required to avoid these unwanted oscillations.
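
Since the abstract frames the loads in quasi-steady terms (mean speed and angle of attack), the classical onset condition for transverse galloping can be stated with the Den Hartog criterion. The block below is a textbook restatement under that quasi-steady assumption, not an equation taken from the paper.

```latex
% Quasi-steady aerodynamic force per unit length on a wire of diameter D
% in a flow of density \rho, mean speed U and angle of attack \alpha:
\[
  F(\alpha) \;=\; \tfrac{1}{2}\,\rho\,U^{2} D\, C_F(\alpha),
\]
% Den Hartog criterion: transverse galloping may start at the equilibrium
% angle \alpha_0 when the total aerodynamic damping becomes negative,
\[
  \left.\frac{dC_L}{d\alpha}\right|_{\alpha_0} + C_D(\alpha_0) \;<\; 0 .
\]
```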

Relevance:

30.00%

Publisher:

Abstract:

The aim of this contribution is to present a theoretical approach and two experimental campaigns (in the wind tunnel and on the track) concerning the research work on the ballast train-induced-wind erosion (BTIWE) phenomenon. When a high-speed train exceeds the critical speed, it induces a wind speed close to the track that is large enough to initiate the motion of the ballast elements, eventually leading to the rolling of the stones (Kwon and Park, 2006); if these stones gain enough energy, they can jump and then trigger a saltation-like chain reaction, as found in the saltation processes of eolian soil erosion (Bagnold, 1941). The expelled stones can reach heights above the lowest parts of the train, striking them (and the track surroundings) and producing considerable damage that should be avoided. Little has been published about this phenomenon, in spite of the great interest it holds for increasing the maximum operating train speed. In particular, the initiation of ballast flight due to the passage of a high-speed train has been studied by Kwon and Park (2006) through field and wind-tunnel experiments.
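
One simple way to frame the critical-speed threshold mentioned above is a quasi-static balance on a single stone at incipient rolling; the symbols below are generic illustrations, not quantities from the cited studies.

```latex
% Incipient rolling of a ballast stone of mass m about its contact point:
% the aerodynamic overturning moment equals the restoring moment of gravity,
\[
  \tfrac{1}{2}\,\rho\,U_c^{2}\,C_D\,A\,\ell_a \;=\; m\,g\,\ell_g
  \quad\Longrightarrow\quad
  U_c \;=\; \sqrt{\frac{2\,m\,g\,\ell_g}{\rho\,C_D\,A\,\ell_a}},
\]
% with U_c the near-track critical wind speed, A the exposed frontal area,
% C_D the drag coefficient, and \ell_a, \ell_g the lever arms of the
% aerodynamic force and the weight about the contact point.
```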

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on an innovative approach that aims to reduce information management costs in data-intensive and cognitively complex biomedical environments. Recognizing the importance of prominent high-performance computing paradigms and large-scale data processing technologies, as well as of collaboration support systems, in remedying data-intensive issues, it adopts a hybrid approach built on the synergy of these technologies. The proposed approach provides innovative Web-based workbenches that integrate and orchestrate a set of interoperable services, reducing the data intensiveness and complexity overload at critical decision points to a manageable level, thus allowing stakeholders to be more productive and to concentrate on creative activities.

Relevance:

30.00%

Publisher:

Abstract:

La aparición de la fatiga ha sido ampliamente investigada en el acero y en otros materiales metálicos; sin embargo, no se conoce con tanta profundidad en el hormigón estructural. Esto crea falta de uniformidad y de enfoque en el proceso de verificación de estructuras de hormigón para el estado límite último de fatiga. A medida que se llevan a cabo más investigaciones, la información sobre los parámetros que afectan a la fatiga en el hormigón, e incluso sobre los que le afectan de forma indirecta, comienza a difundirse. Esto conlleva que se estén incorporando en las guías de diseño de todo el mundo, a pesar de que la comprobación de este estado límite último no se trata por igual entre los distintos órganos de diseño. Este trabajo presentará un conocimiento básico del fenómeno de la fatiga, qué lo causa y qué condiciones de carga o propiedades de los materiales amplían o reducen la probabilidad de fallo por fatiga. Se exponen cuatro códigos de diseño distintos, cuyos procesos de verificación han sido examinados, comparados y valorados cualitativa y cuantitativamente. Como ejemplo, se analizó una torre eólica usando los procedimientos de verificación indicados en sus respectivos códigos de referencia.

The occurrence of fatigue has been extensively researched in steel and other metallic materials; it is, however, not as broadly understood in concrete. This produces a lack of uniformity in the approach and process of verifying concrete structures for the ultimate limit state of fatigue. As more research is conducted and more becomes known about the parameters which cause, propagate, and indirectly affect fatigue in concrete, these are incorporated into design guides around the world. Nevertheless, this ultimate-limit-state verification is not addressed equally by the various design governing bodies. This report presents a baseline understanding of what the phenomenon of fatigue is, what causes it, and what loading or material conditions amplify or reduce the likelihood of fatigue failure. Four different design codes are presented, and their verification processes have been examined, compared and evaluated both qualitatively and quantitatively. Using a wind turbine tower as a case study, this report presents results calculated following the verification processes as instructed in the respective reference codes.
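
Code-based fatigue verifications of the kind compared in the report typically rest on S-N curves and linear damage accumulation. The snippet below is a generic illustration of Palmgren-Miner summation with made-up numbers; it is not a calculation from the report or from any particular design code.

```python
# Generic Palmgren-Miner linear damage accumulation for a fatigue detail.
# The S-N curve and load spectrum below are illustrative placeholders.

def cycles_to_failure(stress_range, s1=200.0, m=9.0, n1=1e6):
    """Illustrative S-N curve: N = n1 * (s1 / stress_range)**m."""
    return n1 * (s1 / stress_range) ** m

# Hypothetical load spectrum: (stress range, applied cycles).
spectrum = [(180.0, 2e5), (140.0, 1e6), (90.0, 5e6)]

damage = sum(n / cycles_to_failure(s) for s, n in spectrum)
print(f"Accumulated damage D = {damage:.3f}")
print("Fatigue verification " + ("fails" if damage >= 1.0 else "passes")
      + " (criterion D < 1.0)")
```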

Relevance:

30.00%

Publisher:

Abstract:

Hydrogenated amorphous silicon thin films were deposited using a high-pressure sputtering (HPS) system. In this work, we have studied the composition and optical properties of the films (band gap, absorption coefficient) and their dependence on the deposition parameters. For films deposited at high pressure (1 mbar), composition measurements show a critical dependence of film purity on the RF power. Films manufactured with RF power above 80 W exhibit good properties for future applications, similar to those of hydrogenated amorphous silicon films deposited by Chemical Vapor Deposition (CVD).
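
For amorphous films such as these, the optical band gap is commonly extracted from the absorption coefficient with a Tauc plot. The sketch below illustrates that standard procedure on synthetic data; the parameter values are made up, not measurements from this work.

```python
import numpy as np

# Illustrative Tauc analysis: for a-Si:H one assumes
# (alpha * h*nu)^(1/2) = B * (h*nu - Eg) and extrapolates the linear
# region to zero to estimate the optical band gap Eg.

E = np.linspace(1.5, 2.5, 200)                # photon energy h*nu [eV]
Eg_true, B = 1.75, 600.0                      # assumed film parameters
alpha = np.where(E > Eg_true, (B * (E - Eg_true)) ** 2 / E, 0.0)  # [1/cm]

tauc = np.sqrt(alpha * E)                     # (alpha * h*nu)^(1/2)

# Fit the linear Tauc region well above the gap, extrapolate to zero.
mask = tauc > 0.2 * tauc.max()
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
Eg_est = -intercept / slope
print(f"Estimated optical band gap: {Eg_est:.2f} eV")
```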

Relevance:

30.00%

Publisher:

Abstract:

Esta tesis doctoral se centra principalmente en técnicas de ataque y contramedidas relacionadas con los ataques de canal lateral (SCA, por sus siglas en inglés), propuestos dentro del campo de investigación académica desde hace 17 años. Las investigaciones relacionadas han experimentado un notable crecimiento en las últimas décadas, mientras que los diseños enfocados en la protección sólida y eficaz contra dichos ataques aún se mantienen como un tema de investigación abierto, en el que se necesitan iniciativas más confiables para la protección de la información personal, de empresa y de datos nacionales. El primer uso documentado de codificación secreta se remonta a alrededor de 1700 a.C., cuando los jeroglíficos del antiguo Egipto se grababan en las inscripciones. La seguridad de la información siempre ha supuesto un factor clave en la transmisión de datos relacionados con inteligencia diplomática o militar. Debido a la rápida evolución de las técnicas modernas de comunicación, las soluciones de cifrado se incorporaron por primera vez para garantizar la seguridad, integridad y confidencialidad de los contenidos transmitidos a través de cables no seguros o medios inalámbricos. Debido a las restricciones de potencia de cálculo anteriores a la era del ordenador, las técnicas de cifrado simples eran un método más que suficiente para ocultar la información. Sin embargo, algunas vulnerabilidades algorítmicas pueden ser explotadas para restaurar la regla de codificación sin mucho esfuerzo. Esto ha motivado nuevas investigaciones en el área de la criptografía, con el fin de proteger los sistemas de información frente a algoritmos de análisis sofisticados. La invención de los ordenadores ha acelerado en gran medida el desarrollo de criptografías robustas, que ofrecen una resistencia eficiente apoyada en capacidades de computación altamente reforzadas. Igualmente, los criptoanálisis sofisticados han impulsado a su vez las tecnologías de computación. Hoy en día, el mundo de la información está estrechamente ligado a la criptografía, que protege cualquier ámbito a través de diversas soluciones de cifrado. Estos enfoques se han fortalecido gracias a la unificación optimizada de teorías matemáticas modernas y prácticas eficaces de hardware, siendo posible su implementación en varias plataformas (microprocesador, ASIC, FPGA, etc.). Las necesidades y requisitos de seguridad de la industria son las principales métricas de conducción en el diseño electrónico, con el objetivo de promover la fabricación de productos de gran alcance sin sacrificar la seguridad de los clientes. Sin embargo, una vulnerabilidad en la implementación práctica, encontrada por el Prof. Paul Kocher et al. en 1996, implica que un circuito digital es inherentemente vulnerable a un ataque no convencional, denominado posteriormente ataque de canal lateral debido a su fuente de análisis. Las primeras críticas sobre los algoritmos criptográficos teóricamente seguros surgieron casi inmediatamente después de este descubrimiento. En este sentido, los circuitos digitales consisten típicamente en un gran número de celdas lógicas fundamentales (como MOS, Metal Oxide Semiconductor), construidas sobre un sustrato de silicio durante la fabricación. La lógica de los circuitos se realiza en función de las innumerables conmutaciones de estas celdas. Este mecanismo provoca inevitablemente una emanación física característica que puede ser medida y correlacionada con el comportamiento interno del circuito.
Los SCA se pueden utilizar para revelar datos confidenciales (por ejemplo, claves criptográficas), analizar la arquitectura lógica y los tiempos, e incluso inyectar fallos malintencionados en los circuitos implementados en sistemas embebidos, como FPGAs, ASICs o tarjetas inteligentes. Mediante la comparación por correlación entre la cantidad de fuga estimada y las fugas medidas realmente, la información confidencial puede reconstruirse con mucho menos tiempo y cómputo. Para ser precisos, SCA cubre básicamente una amplia gama de tipos de ataques, como los análisis de consumo de energía y de radiación electromagnética (EM). Ambos se basan en análisis estadístico y, por lo tanto, requieren numerosas muestras. Los algoritmos de cifrado no están intrínsecamente preparados para ser resistentes ante SCA. Es por ello que se hace necesario integrar durante la implementación de los circuitos medidas que permitan camuflar las fugas a través de "canales laterales". Las medidas contra SCA están evolucionando junto con el desarrollo de nuevas técnicas de ataque, así como con la continua mejora de los dispositivos electrónicos. Las características físicas requieren contramedidas sobre la capa física, que generalmente se pueden clasificar en soluciones intrínsecas y extrínsecas. Las contramedidas extrínsecas se ejecutan para confundir la fuente de ataque mediante la integración de ruido o la desalineación de la actividad interna. Comparativamente, las contramedidas intrínsecas están integradas en el propio algoritmo, para modificar la implementación con el fin de minimizar las fugas medibles, o incluso hacer que dichas fugas no sean medibles. La ocultación y el enmascaramiento son dos técnicas típicas incluidas en esta categoría. Concretamente, el enmascaramiento se aplica a nivel algorítmico, para alterar los datos intermedios sensibles con una máscara de manera reversible. A diferencia de las operaciones lineales, las operaciones no lineales, muy habituales en la criptografía moderna, son difíciles de enmascarar. El método de ocultación, verificado como una solución efectiva, comprende principalmente la codificación en doble carril, ideada especialmente para aplanar o eliminar la fuga dependiente de los datos en potencia o en EM. En esta tesis doctoral, además de la descripción de las metodologías de ataque, se han dedicado grandes esfuerzos al prototipo de la lógica propuesta, con el fin de realizar investigaciones de seguridad enfocadas a las contramedidas de arquitectura a nivel lógico. Una característica de SCA reside en el formato de las fuentes de fuga. Un ataque de canal lateral típico se refiere al análisis basado en la potencia, donde la capacidad fundamental del transistor MOS y otras capacidades parásitas son las fuentes esenciales de fuga. Por lo tanto, una lógica robusta resistente a SCA debe eliminar o mitigar las fugas de estas microunidades, como las puertas lógicas básicas, los puertos de E/S y las rutas. Las herramientas EDA proporcionadas por los fabricantes manipulan la lógica desde un nivel más alto, en lugar de hacerlo desde el nivel de puerta, donde las fugas de canal lateral se manifiestan. Por lo tanto, las implementaciones clásicas apenas satisfacen estas necesidades e inevitablemente atrofian el prototipo. Por todo ello, ha de tomarse en cuenta la implementación de un esquema de diseño personalizado y flexible.
En esta tesis se presenta el diseño y la implementación de una lógica innovadora para contrarrestar SCA, en la que se abordan tres aspectos fundamentales: I. Se basa en la estrategia de ocultación sobre el circuito en doble carril a nivel de puerta para equilibrar dinámicamente las fugas de las capas inferiores; II. Esta lógica explota las características de la arquitectura de las FPGAs para reducir al mínimo el gasto de recursos en la implementación; III. Se apoya en un conjunto de herramientas asistentes personalizadas, incorporadas al flujo genérico de diseño sobre FPGAs, con el fin de manipular los circuitos de forma automática. El kit de herramientas de diseño automático es compatible con la lógica de doble carril propuesta, para facilitar la aplicación práctica sobre las familias de FPGA del fabricante Xilinx. En este sentido, la metodología y las herramientas son lo bastante flexibles para extenderse a una amplia gama de aplicaciones en las que se deseen restricciones mucho más rígidas y sofisticadas a nivel de puerta o de rutado. En esta tesis se realiza un gran esfuerzo para facilitar el proceso de implementación y reparación de la lógica de doble carril genérica. La viabilidad de las soluciones propuestas se valida mediante la selección de algoritmos criptográficos ampliamente utilizados y su evaluación exhaustiva en comparación con soluciones anteriores. Todas las propuestas están respaldadas eficazmente a través de ataques experimentales con el fin de validar las ventajas de seguridad del sistema. El presente trabajo de investigación tiene la intención de cerrar la brecha entre las barreras de implementación y la aplicación efectiva de la lógica de doble carril. En esencia, a lo largo de esta tesis se describirá un conjunto de herramientas de implementación para FPGAs desarrollado para trabajar junto con el flujo de diseño genérico de las mismas, con el fin de crear de forma innovadora la lógica de doble carril. Se propone un nuevo enfoque en el ámbito de la seguridad en el cifrado para obtener personalización, automatización y flexibilidad en el prototipo de circuito de bajo nivel con granularidad fina. Las principales contribuciones del presente trabajo de investigación se resumen brevemente a continuación:
- Lógica Precharge Absorbed-DPL (PA-DPL): el uso de la conversión de netlist para reservar entradas libres de las LUT y ejecutar las señales de precarga y Ex en una lógica DPL.
- Posicionamiento entrelazado row-crossed con pares idénticos de rutado en redes de doble carril, lo que ayuda a aumentar la resistencia frente a la medición EM selectiva y a mitigar los impactos de las variaciones de proceso.
- Herramientas personalizadas de ejecución y conversión automática para la generación de redes idénticas para la lógica de doble carril propuesta: (a) para detectar y reparar conflictos en las conexiones; (b) para detectar y reparar las rutas asimétricas; (c) para ser utilizadas en otras lógicas donde se requiera un control estricto de las interconexiones en aplicaciones basadas en Xilinx.
- Plataforma CPA de pruebas personalizada para el análisis de EM y potencia, incluyendo la construcción de dicha plataforma, el método de medición y el análisis de los ataques.
- Análisis de tiempos para cuantificar los niveles de seguridad.
- División de seguridad en la conversión parcial de un sistema de cifrado complejo para reducir los costes de la protección.
- Prueba de concepto de un sistema de calefacción auto-adaptativo para mitigar de manera dinámica los impactos eléctricos debidos a la variación del proceso del silicio.
La presente tesis doctoral se encuentra organizada tal y como se detalla a continuación: En el capítulo 1 se abordan los fundamentos de los ataques de canal lateral, que abarcan desde los conceptos básicos de la teoría y los modelos de análisis hasta la implementación de la plataforma y la ejecución de los ataques. En el capítulo 2 se incluyen las estrategias de resistencia a SCA contra los ataques de potencia diferencial y de EM. Además, en este capítulo se propone como contribución de gran relevancia una lógica en doble carril compacta y segura, y se presenta la transformación lógica basada en un diseño a nivel de puerta. Por otra parte, en el capítulo 3 se abordan los desafíos relacionados con la implementación de la lógica en doble carril genérica. Asimismo, se describe un flujo de diseño personalizado para resolver los problemas de aplicación, junto con una herramienta propuesta de desarrollo automático de aplicaciones, para mitigar las barreras de diseño y facilitar los procesos. En el capítulo 4 se describe de forma detallada la elaboración e implementación de las herramientas propuestas. Por otra parte, la verificación y las validaciones de seguridad de la lógica propuesta, así como un sofisticado experimento de verificación de la seguridad del rutado, se describen en el capítulo 5. Por último, un resumen de las conclusiones de la tesis y las perspectivas como líneas futuras se incluyen en el capítulo 6. Con el fin de profundizar en el contenido de la tesis doctoral, cada capítulo se describe de forma más detallada a continuación: En el capítulo 1 se introducen la plataforma de implementación hardware y las teorías básicas del ataque de canal lateral, y contiene principalmente: (a) la arquitectura genérica y las características de la FPGA a utilizar, en particular la Xilinx Virtex-5; (b) el algoritmo de cifrado seleccionado (un módulo comercial Advanced Encryption Standard (AES)); (c) los elementos esenciales de los métodos de canal lateral, que permiten revelar las fugas de disipación correlacionadas con los comportamientos internos, y el método para recuperar la relación entre las fluctuaciones físicas en las trazas de canal lateral y los datos internos procesados; (d) las configuraciones de las plataformas de pruebas de potencia/EM abarcadas en la presente tesis. El contenido de esta tesis se amplía y profundiza a partir del capítulo 2, en el cual se abordan varios aspectos clave. En primer lugar, el principio de protección de la compensación dinámica de la lógica genérica de precarga de doble carril (Dual-rail Precharge Logic, DPL) se explica mediante la descripción de los elementos compensados a nivel de puerta. En segundo lugar, se propone como aportación original la lógica PA-DPL, detallando el protocolo de la lógica y un caso de aplicación. En tercer lugar, se muestran dos flujos de diseño personalizados para realizar la conversión a doble carril. Junto con ello, se aclaran las definiciones técnicas relacionadas con la manipulación de la netlist a nivel de LUT. Finalmente, en la parte final del capítulo se aborda una breve discusión sobre el proceso global. El capítulo 3 estudia los principales retos durante la implementación de DPLs en FPGAs.
El nivel de seguridad de las soluciones de resistencia a SCA del estado del arte se ha degradado debido a las barreras de implementación impuestas por las herramientas EDA convencionales. En el escenario de la arquitectura FPGA estudiada, se discuten los problemas de los formatos de doble carril, los impactos parásitos, el sesgo tecnológico y la viabilidad de implementación. De acuerdo con estas elaboraciones, se plantean dos problemas: cómo implementar la lógica propuesta sin penalizar los niveles de seguridad, y cómo manipular un gran número de celdas y automatizar el proceso. El PA-DPL propuesto en el capítulo 2 se valida con una serie de iniciativas, desde características estructurales como el doble carril entrelazado o las redes de rutado clonadas, hasta métodos de aplicación tales como las herramientas de personalización y automatización de EDA. Por otra parte, se presenta un sistema de calefacción auto-adaptativo aplicado a una lógica de doble núcleo, con el fin de ajustar alternativamente la temperatura local para equilibrar los impactos negativos de la variación del proceso durante la operación en tiempo real. El capítulo 4 se centra en los detalles de la implementación del kit de herramientas. Desarrollado sobre una API de terceros, el kit de herramientas personalizado es capaz de manipular los elementos lógicos del circuito post-P&R en formato ncd (una versión binaria ilegible) una vez convertido al formato XDL de Xilinx. El mecanismo y la razón de ser del conjunto de herramientas propuesto se describen cuidadosamente, cubriendo la detección del enrutamiento y los enfoques para su reparación. El conjunto de herramientas desarrollado tiene como objetivo lograr redes de enrutamiento estrictamente idénticas para la lógica de doble carril, tanto para el posicionamiento separado como para el entrelazado. Este capítulo especifica en particular las bases técnicas para apoyar las implementaciones en los dispositivos de Xilinx y su flexibilidad para ser utilizado en otras aplicaciones. El capítulo 5 se enfoca en la aplicación de los casos de estudio para la validación de los grados de seguridad de la lógica propuesta. Se discuten los problemas técnicos detallados durante la ejecución y algunas nuevas técnicas de implementación: (a) se discute el impacto del proceso de posicionamiento de la lógica utilizando el kit de herramientas propuesto, y se verifican experimentalmente diferentes esquemas de implementación, tomando en cuenta la optimización global en seguridad y coste, con el fin de encontrar los planes de posicionamiento y reparación optimizados; (b) las validaciones de seguridad se realizan con los métodos de correlación y de análisis de tiempos; (c) se aplica una táctica asintótica a un núcleo AES estructurado sobre BCDL para validar de forma sofisticada el impacto del enrutamiento sobre las métricas de seguridad; (d) se muestran los resultados preliminares del sistema de calefacción auto-adaptativo frente a la variación del proceso; (e) se introduce una aplicación práctica de las herramientas a un diseño de cifrado completo. El capítulo 6 incluye el resumen general del trabajo presentado en esta tesis doctoral. Por último, se expone una breve perspectiva del trabajo futuro, que puede ampliar el potencial de utilización de las contribuciones de esta tesis a un alcance más allá de los dominios de la criptografía en FPGAs.
ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to Side-Channel Attacks (SCAs), which have been a subject of academic research for some 17 years. The related research has seen remarkable growth in the past decades, while the design of solid and efficient protections still remains an open research topic, in which more reliable initiatives are required for the protection of personal, enterprise and national data. The earliest documented usage of secret code can be traced back to around 1700 B.C., when hieroglyphs were scribed in inscriptions in ancient Egypt. Information security has always received serious attention in the transmission of diplomatic or military intelligence. Due to the rapid evolution of modern communication techniques, crypto solutions were first incorporated into electronic signals to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of contents transmitted over unsecured cable or wireless channels. Given the limited computing power before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities can be exploited to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, aiming at guarding information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptographic schemes, which offer efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has evolved into a crypto world, protecting every field with pervasive crypto solutions. These approaches are strong because of the optimized merging of modern mathematical theory and effective hardware practice, making it possible to implement crypto schemes on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs from industry are the major driving metrics in electronic design, aiming at promoting the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since then named the side-channel attack after its source of analysis. Critical suspicions about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal Oxide Semiconductor, cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs or smart cards. Using various solutions for comparing the predicted leakage quantity with the measured leakage, secrets can be reconstructed at a much lower cost in time and computation.
To be precise, SCA basically encompasses a wide range of attack types, typically the analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analyses and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of this, much attention has to be paid to the implementation so as to assemble countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA are evolving along with the development of attack techniques. The physical nature of the leakage requires countermeasures at the physical layer, which can be generally classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activities. Comparatively, intrinsic countermeasures are built into the algorithm itself, to modify the implementation so as to minimize the measurable leakage, or to render it unmeasurable. Hiding and masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, to alter the sensitive intermediate values with a mask in reversible ways. Unlike linear operations, the non-linear operations that widely exist in modern cryptography are difficult to mask. Proven to be an effective counter-solution, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, efforts have also been dedicated to the logic prototype, to mount extensive security investigations of logic-level countermeasures. A characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routings. The vendor-provided EDA tools manipulate the logic from a higher behavioral level, rather than the lower gate level where side-channel leakage is generated. So the classical implementations barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which mainly addresses three major aspects: I. The proposed logic is based on the hiding strategy over a gate-level dual-rail style, to dynamically balance out side-channel leakage from the lower circuit layers; II. This logic exploits architectural features of modern FPGAs to minimize the implementation expenses; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to carry out circuit manipulations in an automatic manner. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and the tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated with selected, widely used crypto algorithms, for a thorough and fair evaluation with respect to prior solutions.
All the proposals are effectively verified by security experiments. The presented research work attempts to remove the implementation obstacles. The essence formalized along this thesis is that a customized execution toolkit for modern FPGA systems is developed to work together with the generic FPGA design flow for creating innovative dual-rail logic. A method in the crypto security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping, with fine granularity over intractable routings. The main contributions of the presented work are summarized next:
- Precharge Absorbed-DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to execute the Precharge and Ex signals in a dual-rail logic style.
- A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impacts of process variations.
- Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in the Xilinx scenario.
- A customized correlation-analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis.
- A timing-analysis-based method for quantifying security grades.
- A methodology of security partitioning of complex crypto systems for reducing the protection cost.
- A proof-of-concept self-adaptive heating system to mitigate electrical impacts of process variations in a dynamic dual-rail compensation manner.
The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering everything from theoretical basics and analysis models to platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates on the implementation challenges of generic dual-rail styles; a customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit, for mitigating the design barriers and facilitating the processes. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides the introduction to the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods.
These reveal the dissipation leakage correlated with internal behavior, and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used throughout the thesis work are given. The content of this thesis expands and deepens from Chapter 2 onward, which is divided into several aspects. First, the protection principle of dynamic compensation of the generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed by detailing the logic protocol and an implementation case study. Third, a couple of custom workflows are shown for realizing the rail conversion. Meanwhile, the technical definitions involved in manipulating the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, problems are discussed concerning the dual-rail format, parasitic impacts, technological bias and implementation feasibility. According to these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to adaptively adjust the local temperature to balance the negative impacts of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit (an unreadable binary file) once converted to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, both for separate and interleaved placement. This chapter particularly specifies the technical essentials to support the implementations on Xilinx devices and the flexibility to be extended to other applications. Chapter 5 focuses on the case studies for validating the security grades of the proposed logic style with the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts using the proposed toolkit are discussed, and different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) preliminary results using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 includes the general summary of the complete work presented inside this thesis.
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
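
The correlation-based key recovery that both versions of the abstract describe (comparing predicted leakage with measured traces) is usually implemented as Correlation Power Analysis. The sketch below shows the generic CPA loop against the first-round AES S-box output under a Hamming-weight leakage model; the trace arrays and the `aes_sbox` module are hypothetical placeholders, not the thesis's testbed.

```python
import numpy as np
# SBOX: the standard 256-entry AES S-box as a uint8 array, assumed available
# from a local table; the module name below is a hypothetical placeholder.
from aes_sbox import SBOX

# Hamming-weight lookup table for all byte values.
HW = np.unpackbits(np.arange(256, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_key_byte(traces, plaintexts, byte_idx=0):
    """Return the key-byte guess whose Hamming-weight model best correlates
    with the traces (n_traces x n_samples), and its peak correlation."""
    tc = traces - traces.mean(axis=0)          # center each time sample
    best_key, best_corr = 0, -1.0
    for key_guess in range(256):
        # Predicted leakage: HW of the first-round S-box output.
        model = HW[SBOX[plaintexts[:, byte_idx] ^ key_guess]]
        mc = model - model.mean()
        # Pearson correlation of the model against every time sample.
        corr = (mc @ tc) / (np.linalg.norm(mc) * np.linalg.norm(tc, axis=0))
        peak = np.max(np.abs(corr))
        if peak > best_corr:
            best_key, best_corr = key_guess, peak
    return best_key, best_corr
```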

Relevance:

30.00%

Publisher:

Abstract:

We present an undergraduate course on concurrent programming where formal models are used in different stages of the learning process. The main practical difference from other approaches lies in the fact that the ability to develop correct concurrent software relies on a systematic transformation of formal models of inter-process interaction (so-called shared resources), rather than on the specific constructs of some programming language. Using a resource-centric rather than a language-centric approach has benefits for both teachers and students. Besides the obvious advantage of being independent of the programming language, the models help in the early validation of concurrent software designs, provide students and teachers with a lingua franca that greatly simplifies communication in the classroom and during supervision, and help in the automatic generation of tests for the practical assignments. This method has been in use, with slight variations, for some 15 years, surviving changes in the programming language and course length. In this article, we describe the components and structure of the current incarnation of the course, which uses Java as the target language, and some tools used to support our method. We provide a detailed description of the different outcomes that the model-driven approach delivers (validation of the initial design, automatic generation of tests, and mechanical generation of code) from a teaching perspective. A critical discussion of the perceived advantages and risks of our approach follows, including some proposals on how these risks can be minimized. We include a statistical analysis to show that our method has a positive impact on students' ability to understand concurrency and to generate correct code.
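
To make the idea of a shared resource concrete, here is a minimal sketch of a bounded buffer specified by conceptual preconditions and implemented in monitor style. The notation and the Python realization are illustrative assumptions; the course itself uses its own formal shared-resource notation and Java as the target language.

```python
import threading
from collections import deque

class BoundedBuffer:
    """Shared resource with invariant 0 <= len(buf) <= capacity.
    Conceptual precondition of put: buffer not full.
    Conceptual precondition of get: buffer not empty."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.buf = deque()
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item):
        with self.lock:
            while len(self.buf) == self.capacity:   # wait until CPRE holds
                self.not_full.wait()
            self.buf.append(item)
            self.not_empty.notify()

    def get(self):
        with self.lock:
            while not self.buf:                     # wait until CPRE holds
                self.not_empty.wait()
            item = self.buf.popleft()
            self.not_full.notify()
            return item
```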

Relevance:

30.00%

Publisher:

Abstract:

It is safe to say that no reference to domotics can be found in the first half of the 20th century. The best-known authors, and those who have documented this discipline, set its origin in the 1970s, when X-10 technology began to be used, but it was not until 1988 that the Larousse Encyclopedia decided to include a definition of "Smart Building". Furthermore, even nowadays there is no single widely accepted definition, and for that reason many other expressions, such as "Intelligent Buildings", "Domotics", "Digital Home" or "Home Automation", have appeared to describe automated buildings and homes. The lack of a clear definition of "Smart Buildings" not only hinders the development of a common international framework for research in this field, but also causes insecurity in the potential users of these buildings. That is to say, users do not know what this kind of building offers, hindering the dissemination of the culture of building automation in society. Thus, the main purpose of this paper is to propose a definition of the expression "Smart Buildings" that satisfactorily describes the meaning of this discipline. To achieve this aim, a thorough review of the origin of the term itself and of the historical background before the emergence of the phenomenon of domotics was conducted, followed by a critical discussion of existing definitions of the term "Smart Buildings" and other similar terms. The extent of each definition has been analyzed, inaccuracies have been discarded and commonalities have been compared. Throughout the discussion, definitions that bring the term "Smart Buildings" close to disciplines such as computer science, robotics and telecommunications have been found. However, there are also many other definitions that emphasize, in a more abstract way, the role of these new buildings in society and the future of mankind.

Relevance:

30.00%

Publisher:

Abstract:

One of the main limiting factors in the development of new magnesium (Mg) alloys with enhanced mechanical behavior is the need for vast experimental campaigns for microstructure and property screening. For example, the influence of new alloying additions on the critical resolved shear stresses (CRSSs) is currently evaluated by a combination of macroscopic single-crystal experiments and crystal plasticity finite-element simulations (CPFEM). This time-consuming process could be considerably simplified by the introduction of high-throughput techniques for efficient property testing. The aim of this paper is to propose a new, fast methodology for estimating the CRSSs of hexagonal close-packed metals which, moreover, requires only small amounts of material. The proposed method, which combines instrumented nanoindentation and CPFEM modeling, determines CRSS values by comparing the variation of hardness (H) across grain orientations with the outcome of CPFEM. This novel approach has been validated on a rolled and annealed pure Mg sheet, whose variation of H with grain orientation has been successfully predicted using a set of CRSSs taken from recent crystal plasticity simulations of single-crystal experiments. Moreover, the proposed methodology has been utilized to infer the effect of the alloying elements of an MN11 (Mg–1% Mn–1% Nd) alloy. The results support the hypothesis that selected rare-earth intermetallic precipitates help to bring the CRSS values of basal and non-basal slip systems closer together, thus contributing to the reduced plastic anisotropy observed in these alloys.
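
The CRSS values discussed above connect to measurable stresses through Schmid's law; the block below is the standard textbook relation, included for orientation rather than taken from the paper.

```latex
% Schmid's law: a slip system s activates when the resolved shear stress
% reaches the critical value (CRSS) for that system,
\[
  \tau^{(s)} \;=\; \sigma \,\cos\phi^{(s)} \cos\lambda^{(s)} \;\ge\; \tau_c^{(s)},
\]
% where \sigma is the applied uniaxial stress, \phi^{(s)} the angle between
% the loading axis and the slip-plane normal, and \lambda^{(s)} the angle
% between the loading axis and the slip direction.
```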

Relevance:

30.00%

Publisher:

Abstract:

Limit equilibrium is a common method used to analyze the stability of a slope, and minimization of the factor of safety (FS), or identification of critical slip surfaces, is a classical geotechnical problem in the context of limit equilibrium methods for slope stability analyses. A mutative-scale chaos optimization algorithm is employed in this study to locate the noncircular critical slip surface, with Spencer's method being employed to compute the factor of safety. Four examples from the literature (one homogeneous slope and three layered slopes) are employed to assess the efficiency and accuracy of this approach. Results indicate that the algorithm is flexible and that, although it does not generally find the global minimum FS, it provides results close to the minimum, improving on other solutions proposed in the literature, with small relative errors with respect to the minimum FS values reported there.
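
As a rough illustration of the mutative-scale chaos optimization idea (a logistic-map chaotic search whose interval is progressively shrunk around the incumbent best), here is a generic sketch on a toy objective. In the paper's setting, the objective would be Spencer's factor of safety evaluated over the parameters describing a candidate noncircular slip surface; all names and constants below are illustrative.

```python
import numpy as np

def chaos_optimize(f, lb, ub, n_global=2000, n_refine=5, n_local=500,
                   shrink=0.2):
    """Mutative-scale chaos optimization: logistic-map search over a box
    that is repeatedly shrunk around the best point found so far."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    z = np.linspace(0.11, 0.87, lb.size)    # chaotic carriers in (0, 1),
                                            # avoiding the map's fixed points
    best_x, best_f = None, np.inf
    for stage in range(n_refine + 1):
        for _ in range(n_global if stage == 0 else n_local):
            z = 4.0 * z * (1.0 - z)         # logistic map, chaotic at r = 4
            x = lb + z * (ub - lb)          # map carriers into the search box
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x.copy(), fx
        # Mutative scale: shrink the box around the incumbent best point.
        half = shrink * (ub - lb) / 2.0
        lb = np.maximum(lb, best_x - half)
        ub = np.minimum(ub, best_x + half)
    return best_x, best_f

# Toy stand-in for an FS evaluation over two slip-surface parameters.
f = lambda x: (x[0] - 0.5) ** 2 + 3.0 * (x[1] + 1.0) ** 2 + 1.2
x_opt, f_opt = chaos_optimize(f, [-2.0, -2.0], [2.0, 2.0])
print(x_opt, f_opt)   # expected near [0.5, -1.0], f close to 1.2
```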

Relevance:

30.00%

Publisher:

Abstract:

In order to reduce costs and time while improving quality, durability and sustainability in structural concrete construction, a widely used material nowadays, special care must be taken in some crucial phases of the project and its execution, including the design and calculation of the structure and the dosage, pouring and curing of the concrete; another important aspect is the proper design and execution of assembly plans and construction details. The framework, the name designating the whole cage of reinforcement bars already assembled as shown in the drawings, can be made up of several components and implies a higher or lower degree of industrialization. Framework costs constitute about one third of the price per cubic meter of concrete placed in the works. The best solutions, from all points of view, are clearly those involving easier processing to achieve the same goal, and consequently carrying a high degree of industrialization, which means quality and safety on site. This thesis aims to provide an in-depth analysis of a relatively new type of anchorage by plate, known as headed reinforcement bars, which can potentially replace standard or L-shaped hooks, improving the cleanness of construction details and enabling a faster, more flexible and therefore more economical assembly. A literature review on the topic and an overview of typical applications are provided, followed by some examples of specific applications in real projects. Since a strict theoretical formulation for the design plate dimensions has not yet been put forward, an equation is proposed for the side-face blowout strength of the anchorage, based on the capacity of concrete to carry concentrated loads in cases in which no transverse reinforcement is provided. The correlation of the calculated ultimate load with experimental results available in the literature is given. Moreover, the proposed formulation can be extended to cases in which a certain development length is available: using software for nonlinear finite-element analysis oriented to the study of reinforced concrete, numerical tests on the bond-bearing interaction are performed. The thesis ends with tests on eight corner joints subjected to a closing moment, carried out in the Structures Laboratory of the Polytechnic University of Madrid, aiming to check whether the design of such plates as stated is adequate for these elements and whether an element with plate-anchored reinforcement is equivalent to one with a traditional construction detail.
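
The proposed side-face blowout equation builds on the bearing capacity of concrete under concentrated loads; a standard form of that capacity (e.g. the Eurocode 2 partially loaded areas rule) is sketched below for orientation. This is the basis the abstract refers to, not the thesis's proposed equation itself.

```latex
% Bearing capacity of concrete under a concentrated load applied on an
% area A_{c0} that can spread to a larger area A_{c1}:
\[
  F_{Rdu} \;=\; A_{c0}\, f_{cd}\, \sqrt{A_{c1}/A_{c0}} \;\le\; 3\, f_{cd}\, A_{c0},
\]
% where f_{cd} is the design compressive strength of the concrete. For a
% headed bar, A_{c0} plays the role of the net head area, and the spread
% ratio is limited by the available side cover.
```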

Relevance:

30.00%

Publisher:

Abstract:

El momento crítico (MC) en baloncesto es aquel fenómeno relacionado con el juego que presenta unas características particulares determinadas por la idiosincrasia de un equipo y puede afectar a los protagonistas y, por ende, al devenir del juego. En la presente tesis se ha estudiado la incidencia del MC en la Liga A.C.B. de baloncesto y, para su desarrollo en profundidad, se han planteado dos investigaciones, una cuantitativa y otra cualitativa, cuya metodología se detalla a continuación. La investigación cuantitativa se ha basado en la técnica de estudio del "performance analysis"; para ello se han estudiado cuatro temporadas de la Liga A.C.B. (de la 2007/08 a la 2010/11) y, tal y como se refleja en la bibliografía consultada, se han tomado como momentos críticos del juego los últimos cinco minutos de los partidos en los que la diferencia fue de seis puntos, además de todos los tiempos extras disputados, de tal manera que se han estudiado 197 momentos críticos. La contextualización del estudio se ha hecho en función de las variables situacionales "game location" (local o visitante), "team quality" (mejores o peores clasificados) y "competition" (fases de Liga Regular y Playoff). Para la interpretación de los resultados se han realizado los siguientes análisis descriptivos: 1) análisis discriminante; 2) regresión lineal múltiple; y 3) análisis del modelo lineal general multivariante. La investigación cualitativa se ha basado en la técnica de la entrevista semiestructurada. Se entrevistó a 12 entrenadores que militaban en la Liga A.C.B. durante la temporada 2011/12, con el objetivo de conocer el punto de vista del entrenador sobre el concepto del MC y poder dar así un enfoque más práctico, basado en su conocimiento y experiencia, acerca de cómo actuar ante el MC en baloncesto. Los resultados de ambas investigaciones coinciden en señalar la importancia del MC sobre el resultado final del juego. De igual forma, el concepto en sí entraña una gran complejidad, por lo que se consideran fundamentales tanto la visión científica de la observación del juego como la percepción subjetiva que presenta el entrenador ante el fenómeno, para la cual los aspectos psicológicos de sus protagonistas (jugadores y entrenadores) son determinantes. ABSTRACT The critical moment (CM) in basketball is a game-related phenomenon that has particular features determined by the idiosyncrasies of a team and can affect the players and, therefore, the course of the game. In this thesis we have studied the impact of the CM in the A.C.B. League and, to develop it in depth, two investigations have been carried out, one quantitative and one qualitative, whose methodology is as follows. The quantitative research is based on the performance-analysis technique: four seasons of the A.C.B. League (2007/08 to 2010/11) were studied and, as reflected in the literature, the critical moments were taken to be the last five minutes of games in which the point spread was six points, plus all overtimes played, so that 197 critical moments were studied. The study was contextualized through the situational variables "game location" (home or away), "team quality" (better or lower classified) and "competition" (regular-season and playoff phases). For the interpretation of the results, the following analyses were performed: 1) discriminant analysis; 2) multiple linear regression; and 3) analysis with a multivariate general linear model.
The qualitative research is based on the semi-structured interview technique. Twelve coaches belonging to the A.C.B. League during the 2011/12 season were interviewed, with the aim of determining the coach's point of view on the CM concept, and thus providing a more practical approach, based on their knowledge and experience, about how to deal with the CM in basketball. The results of both studies agree on the importance of the CM for the final outcome of the game. Similarly, the concept itself is highly complex, so both the scientific view provided by the observation of the game and the coach's subjective perception of the phenomenon are considered essential, and for the latter the psychological aspects of the protagonists (players and coaches) are crucial.
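
As a loose illustration of the first statistical tool named above, the snippet below sets up a discriminant analysis on synthetic data; the features, labels and coefficients are hypothetical placeholders, not the thesis's A.C.B. dataset.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical setup: predict which team prevails in a critical moment
# from game-related statistics (synthetic stand-ins for, e.g., free-throw
# percentage, turnovers, defensive rebounds during those five minutes).

rng = np.random.default_rng(0)
n = 197                                    # number of critical moments studied
X = rng.normal(size=(n, 3))                # synthetic game-related features
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * X[:, 2]
     + rng.normal(scale=0.5, size=n)) > 0  # synthetic outcome label

lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
print("Discriminant coefficients:", lda.coef_)
print("Training accuracy:", lda.score(X, y))
```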

Relevance:

30.00%

Publisher:

Abstract:

El origen de esta tesis considera una lectura (quizás) pendiente: definir críticamente a la monumentalidad en el contexto de la arquitectura moderna. La idea de lo monumental durante la modernidad establece parte de la negación enmarcada en un planteamiento más amplio, basado en el rechazo a todo vínculo con la tradición y la historia. Desde el estatismo del monumento como objeto anacrónico, a la instrumentación de la arquitectura como herramienta simbólica, el proceso transformador más importante para la arquitectura durante el siglo XX contaba con algunas señales que nos daban la pauta para imaginar una realidad conformada por matices y desacuerdos fundamentales. La investigación no pretende contar una nueva historia sobre el periodo moderno, aunque irremediablemente se vale de su registro para presentar la discusión. Así, la idea crítica que sostenemos tiene que ver con las posibilidades estructurales y objetivas del discurso arquitectónico. Un discurso que se analiza en función de tres campos diferenciados, designados como: lo escrito, lo proyectado y lo construido en el periodo de estudio. De esta manera, pensamos que se favorecen las posibilidades dimensionales de la crítica y se amplía el sentido narrativo de la linealidad histórica. Para esta trabajo, la monumentalidad constituye una sustancia de estudio que evidencia las contradicciones, inadvertencias y matices necesarios en la articulación de una visión más compleja sobre los acontecimientos. Convencidos de la eficacia de un modelo dialéctico, que define la condición de lo monumental tanto en una valoración positiva (lo propicio, lo útil, lo verdadero, etc.) como negativa (lo falso, lo ostentoso, lo altisonante, etc.); observaremos que las diferencias alrededor del concepto derivan respectivamente en los significados de monumentalidad y monumentalismo. El contraste y la oposición de ideas expuestas a la luz favorece esa pretensión dimensional de la crítica. De los escritos de Sigfried Giedion -y la Nueva Monumentalidad- a Le Corbusier y la construcción de Chandigarh; o de la crítica anti-monumental de Karel Teige, pasando por el proyecto constructivista de Ivan Leonidov; los distintos episodios referidos en el trabajo encuentran sentido y rechazan las probabilidades arbitrarias y confusas de la selección temática. En ese orden, se busca asignar cierto rigor metodológico e incluso geométrico: la estructura propuesta toma el gran "periodo moderno" en dos bloques temporales, primera-modernidad (alrededor de 1910-1935) y tardo-modernidad (aprox. 1935-1960). En la primera parte se analizan una postura -en mayor medida- reactiva a las manifestaciones de esa hipotética condición monumental, mientras que en el segundo caso la postura se transforma y se perfila un nuevo escenario que anticipará ideológicamente parte de la evidente fractura posmoderna. A su vez, los tres registros anunciados previamente se componen de dos capítulos en función del marco temporal descrito; cada capítulo se desarrolla en tres partes que abundan en los aspectos preliminares de la discusión, luego exponen unos puntos centrales y finalmente orientan un posible recuento. El trabajo se complementa con una parte introductoria que fluye sobre definiciones concretas del monumento, el monumentalismo y la monumentalidad; además de que definirá la orientación de la crítica desarrollada. En una última intervención, a manera de conclusión, se reflexiona sobre el salto temporal, ideológico y estético que la posmodernidad representó para el tema de investigación.   
Abstract The purpose of this thesis is to consider a (perhaps) pending reading: to define monumentality through a critical approach within the modern context of architecture. The idea of the monumental during modernity forms part of the typically modern denial based on the rejection of any link to tradition and history. From the anachronism of the static monument as an object, to the orchestration of architecture as a symbolic tool, the most important transformative process for architecture during the 20th century carried signs that allow us to imagine a reality shaped by fundamental nuances and disagreements. The aim of this research is not to tell a new story about the modern period, although it inevitably draws on its record to present the discussion. Therefore, the critical stance we sustain has to do with the structural and objective possibilities of architectural discourse: a discourse analyzed through three differentiated domains, designated here as the written, the projected and the built during the period under study. In this way, we believe, the dimensional possibilities of criticism are favored and the narrative sense of historical linearity is expanded. For this investigation, monumentality constitutes a subject of study that brings out the contradictions, oversights and shades of gray necessary to articulate a more complex vision of the events depicted. Convinced of the efficiency of a dialectical model of analysis, which defines the monumental condition both as a positive value (propitious, useful, truthful, etc.) and as a negative one (untrue, ostentatious, pompous, etc.), we observe that the differences around the concept lead, respectively, to the meanings of monumentality and monumentalism. The contrast and opposition of ideas exposed in this light support that dimensional ambition of criticism. From Sigfried Giedion's writings (and the New Monumentality) to Le Corbusier and the construction of Chandigarh, or from Karel Teige's anti-monumental criticism through the revision of Ivan Leonidov's constructivist project, the various episodes referred to in this work find their sense and reject the risks of confusion and arbitrariness in the selection of themes. In order to provide some methodological and even geometrical rigor, the proposed structure divides the great "modern period" into two historical blocks: first modernity (circa 1910-1935) and late modernity (circa 1935-1960). The first part analyzes a mainly reactive stance towards the hypothetical expressions of the monumental condition, whereas in the second block that stance is transformed and a new scenario takes shape that will ideologically anticipate part of the evident postmodern fracture. At the same time, each of the three registers announced above is composed of two chapters corresponding to the described time frames; each chapter is organized in three parts that first expand on the preliminary aspects of the discussion, then present the central points, and finally orient a possible recount. The research is complemented by an introductory part that moves through specific definitions of the monument, monumentalism and monumentality, and that orients the critique developed. In a final intervention, by way of conclusion, we reflect on the temporal, ideological and aesthetic leap that postmodernity represented for this research topic.

Relevance:

30.00%

Publisher:

Abstract:

Rural communities in Cuenca (Spain) are characterized by great social dislocation, mostly due to the low population density of these areas. Consequently, the existence of groups of citizens able to act as active agents of their own development is a critical aspect of any community-based development process in this Spanish region. The Institute of Community Development of Cuenca (IDC) has been working with this type of group for the last 30 years, focusing on the organizational empowerment of rural communities. The main tools in this process have been the empowerment evaluation approach and the critical-friend role, helping the groups to achieve their objectives and reinforcing them. This chapter analyses the empowerment process and how the critical-friend role is nourished by the facilitator figure.