785 results for Puonti, Anne: Learning to work together


Relevance:

100.00%

Publisher:

Abstract:

This report describes work on the thesis “IT systems and organizational development”, carried out at Stocksbro Energi AB during spring 2009. The company wanted a better flow in its production, and also to examine whether the inventory-management function in Visma SPCS Administration 2000 could be used in the organization. In the first part of the work, the system was tested to determine whether it would work together with the organization. After testing, the desired features were evaluated together with two of the system's users at the company. The desired functions could not be obtained, since Admin 2000 is a financial system and is not made for manufacturing companies. Thereafter, the theoretical possibility of introducing a more advanced system, to get the desired functionality to work together with the organization, was examined. Introducing a more advanced system requires major changes in the organization before, during and after the introduction, so how management carries out the implementation of a new system is important. It is important both to involve the staff in the implementation and to get as much as possible out of the system. To conclude the thesis, a goal and problem analysis as well as a solution proposal were made with the help of the FA/SIMM method, and some suggestions are given for how the company could proceed in implementing the proposed solution.


Background: Bisphosphonate-related osteonecrosis of the jaw (BRONJ) is a clinical condition characterized by the presence of exposed bone in the maxillofacial region. Its pathogenesis is still undetermined, but it may be associated with risk factors such as rheumatoid arthritis (RA). The aim of this paper is to report two unpublished cases of BRONJ in patients with RA and to conduct a literature review of similar clinical cases, with a view to describing the main issues concerning these patients, including demographic characteristics and the therapeutic approaches applied. Methods: Two case reports of BRONJ involving RA patients are discussed. Results: Both patients were elderly women who had been taking alendronate for more than 3 years. Lesions were detected at stage II in the posterior mandible, with no clear trigger agent. The treatment applied consisted of antibiotics, oral rinses with chlorhexidine, drug discontinuation and surgical procedures. Complete healing of the lesions was achieved. Conclusions: This paper brings to light the need for rheumatologists to be aware of the potential risk of their patients developing BRONJ, and to work together with dentists for the prevention and early detection of the lesions. Although some features seem to link RA with oral BRONJ and to act synergistically, more studies are needed to support the scientific basis for this hypothesis.


Bisphosphonates are compounds used in the treatment of various metabolic and malignant bone diseases. The relation between the use of bisphosphonates and osteonecrosis of the jaws as an adverse effect of the drug has been intensely discussed during the last few years and, up to this moment, there is no consensus concerning an ideal treatment modality for this condition. Nevertheless, there is agreement among researchers that the standard goal in controlling jaw osteonecrosis is to prevent it. Indeed, the rationale for a randomized controlled trial is that current treatment has proven suboptimal, and no consensus has yet been reached on the best strategies to repair the exposed bone once necrosis has developed. This article reports a case of moderate bisphosphonate-induced osteonecrosis of the upper jaw and discusses a possible role for surgical debridement associated with platelet-rich plasma, hyperbaric oxygen therapy and the cessation of bisphosphonate use in managing this type of lesion. Moreover, the dentist, the oral surgeon and the oncologist need to work together to reach better outcomes.


Low-frequency multipath is still one of the major challenges for high-precision GPS relative positioning. In kinematic applications especially, low-frequency multipath is difficult to remove or model because the geometry changes. Spectral analysis offers a powerful technique for analyzing this kind of non-stationary signal: the wavelet transform. However, several processing steps must work together in order to detect and efficiently mitigate low-frequency multipath, and these steps are discussed in this paper. Experiments were carried out in kinematic mode with a controlled, known vehicle movement; the data were collected with a reflective surface placed close to the vehicle so as to cause mainly low-frequency multipath. The results, in terms of double-difference residuals and statistical tests, showed that the proposed methodology detects and mitigates low-frequency multipath effects very efficiently. © 2008 IEEE.
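As a rough illustration of the wavelet idea behind this abstract (a minimal sketch, not the authors' actual processing chain), the example below uses a hand-rolled Haar wavelet transform to isolate and discard the low-frequency component of a synthetic double-difference residual series; the function names, signal model, and parameter values are illustrative assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar wavelet transform: (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-frequency approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-frequency detail
    return a, d

def haar_idwt(a, d):
    """Inverse of one Haar level (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def remove_low_frequency(residuals, levels=5):
    """Decompose, zero the deepest approximation (the slow trend),
    and reconstruct: only the higher-frequency residual part remains."""
    details = []
    a = np.asarray(residuals, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        details.append(d)
    a = np.zeros_like(a)              # discard the low-frequency multipath trend
    for d in reversed(details):
        a = haar_idwt(a, d)
    return a

# Synthetic double-difference residuals: slow multipath bias + receiver noise.
t = np.arange(256)
multipath = 0.05 * np.sin(2 * np.pi * t / 200.0)                  # metres
noise = 0.005 * np.random.default_rng(0).standard_normal(t.size)  # metres
cleaned = remove_low_frequency(multipath + noise, levels=5)
```

In a real processing chain, thresholding of detail coefficients and boundary handling would also matter; the sketch only shows the core separation step.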


Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)


Graduate Program in International Relations (UNESP - UNICAMP - PUC-SP) - FFC


With business environments no longer confined to geographical borders, the new wave of digital technologies has given organizations an enormous opportunity to bring together their distributed workforce and to develop the ability to work together despite being apart (Prasad & Akhilesh, 2002). Presupposing creativity to be a social process, we question how this phenomenon occurs when the configuration of the team is substantially modified. Very little is known about the impact of interpersonal relationships on creativity (Kurtzberg & Amabile, 2001). In analysing the ways in which the creative process may develop, we must take into consideration the fact that participants are dealing with quite an atypical situation. Socialization takes place among individuals in a geographically dispersed workplace, where interpersonal relationships are mediated by the computer and where trust must be developed among people who have never met one another. Participants not only have multiple addresses and locations but, above all, different nationalities, cultures, attitudes, ways of thinking, working patterns and languages. The central research question of this thesis is therefore: “How does the creative process unfold in globally distributed teams?” Taking a qualitative approach, we used a case study of the Business Unit of Volvo 3P, an arm of the Volvo Group. Throughout this research, we interviewed seven teams engaged in the development of a new product in the chassis and cab areas for the Volvo and Renault Trucks brands, teams that were geographically distributed across Brazil, Sweden, France and India. Our research suggests that corporate values, together with intrinsic motivation and the task itself, lay down the necessary foundations for the development of the creative process in globally distributed teams (GDTs).


Rising healthcare costs, the increased prevalence of chronic diseases and current inequalities highlight the growing demand of a frail population that requires a global response to the needs of the person as a whole, through the construction of an integrated healthcare system that guarantees effective care. Managing chronic diseases appropriately is the challenge facing health and social care professionals; but what are the elements needed to succeed? The scientific evidence shows that integration among professionals and the development of diagnostic-therapeutic care pathways (Percorsi Diagnostici Terapeutici Assistenziali, PDTA) are fundamental. In this light, in Italy, and in particular in Emilia-Romagna and in the Azienda USL di Bologna, several organizational models have succeeded one another, and are still evolving, to improve the appropriate management of chronic diseases and adherence to guidelines and/or PDTAs. The role of the general practitioner (GP) is still fundamental, and the GP's contribution, integrated with that of the other professionals involved, is indispensable for good management and care of the chronic patient. For this reason, the Azienda USL di Bologna has developed and implemented a corporate strategic policy aimed at designing PDTAs, and has encouraged general practice to work increasingly in groups rather than under the single-doctor model. The study focuses on cardiovascular diseases, which remain the main cause of death and morbidity, examining in particular heart failure and the post-AMI (acute myocardial infarction) condition. The objective is to verify whether, and how much, the organizational model and the characteristics of the doctor and of the patient influence good management of the chronic diseases under examination, by assessing adherence to the drug therapy recommended by the guidelines and/or PDTAs for heart failure and post-AMI care.


Fibre-reinforced plastics (FRPs) are composite materials composed of thin fibres with high mechanical properties, made to work together with a cohesive plastic matrix. The huge advantage of fibre-reinforced plastics over traditional materials is their high specific mechanical properties, i.e. high stiffness-to-weight and strength-to-weight ratios. This kind of composite material is the most disruptive innovation in the structural-materials field seen in recent years, and the areas of potential application are still many. However, a few aspects limit their growth. On the one hand, the information available about their properties and long-term behaviour is still scarce, especially compared with traditional materials, for which an extended database has been developed through years of use and research. On the other hand, the production technologies are still not as developed as those available for forming plastics, metals and other traditional materials. A third aspect is that the new properties presented by these materials, e.g. their anisotropy, complicate the design of components. This thesis provides several case studies with advancements regarding the three limitations mentioned. In particular, the long-term mechanical properties have been studied through an experimental analysis of the impact of seawater on GFRP. Regarding production methods, the autoclave-cured pre-preg process was considered: a rapid tooling method to produce moulds is presented, along with a study of the production of thick components. Two liquid composite moulding methods are also presented: a case study of a large sandwich-structure component produced with the vacuum-assisted resin infusion (VARI) method, and a case study of a thick con-rod beam produced with the resin transfer moulding (RTM) process. The final case study analyses the loads acting during the use of a particular sports component made with FRP layers and a sandwich structure, and practical design rules are provided.
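The stiffness-to-weight advantage mentioned above can be made concrete with a small back-of-the-envelope comparison. The property values below are typical handbook figures chosen for illustration only (not data from the thesis), and unidirectional composite values are along the fibre direction.

```python
# Illustrative specific-stiffness (E/rho) comparison; the property values
# are typical handbook figures, NOT measurements from this work.
materials = {
    # name: (Young's modulus E in GPa, density rho in kg/m^3)
    "structural steel": (200.0, 7850.0),
    "aluminium alloy": (70.0, 2700.0),
    "GFRP (glass/epoxy, unidirectional)": (40.0, 1900.0),
    "CFRP (carbon/epoxy, unidirectional)": (135.0, 1600.0),
}

for name, (E, rho) in materials.items():
    specific_stiffness = E * 1e9 / rho   # Pa / (kg/m^3) = m^2/s^2
    print(f"{name:36s} E/rho = {specific_stiffness / 1e6:6.1f} MJ/kg")
```

Even with conservative fibre-direction values, the composites' stiffness per unit mass clearly exceeds that of the metals, which is the "specific mechanical properties" argument in numerical form.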


The benefits animals derive from living in social groups have produced the evolution of many forms of cooperative behavior. To cooperate, two or more individuals coordinate their actions to accomplish a common goal. One cognitive process that has the potential to influence cooperation is self-control: by inhibiting the impulsive choice of an immediate reward, individuals may receive a larger reward later by cooperating with others. In this study, I measured whether brown capuchin monkeys (Cebus apella) were capable of impulse control and whether impulse control was related to cooperation. Both were measured using a lazy-Susan-like apparatus, on which animals could turn a wheel to receive food rewards. The capuchins went through two training phases that taught them how to turn the wheel efficiently to obtain rewards and how to turn the wheel to obtain the larger of two rewards. After training, I tested impulse control by giving the capuchins a choice between a smaller and a larger reward placed at nearer or more distant locations on the wheel. The capuchins demonstrated impulse control in that they tended to inhibit the impulse to select the smaller reward when it was closer and easier to reach, instead selecting the larger reward when it was farther away. Cooperation was tested in all possible dyads of seven individuals, 21 dyads in total, by allowing each dyad 10 trials to work together, with effort, on the lazy Susan so that each would obtain a reward. Seventeen of the 21 dyads cooperated by simultaneously moving the wheel in the same direction. The correlation between how often a particular dyad cooperated and their average impulse-control score was not statistically significant, r(21) = -.125, p = .591. The capuchins thus demonstrated impulse control and cooperation using this novel apparatus, but the two abilities were not related. Other factors, such as the unique social relationship between two individuals, may play a more prominent role in the motivation to cooperate than the cognitive capacity to inhibit behavior.
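The dyad-level statistic reported above (r(21) = -.125) is a plain Pearson product-moment correlation. The sketch below computes it from scratch; the per-dyad numbers are hypothetical placeholders, not the study's actual data.

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-dyad data: cooperation successes (out of 10 trials)
# and the dyad's mean impulse-control score -- NOT the study's real numbers.
cooperation = [7, 9, 4, 10, 6, 8, 5, 9, 3, 10, 7]
impulse     = [0.6, 0.8, 0.7, 0.5, 0.9, 0.6, 0.8, 0.7, 0.9, 0.6, 0.7]

r = pearson_r(cooperation, impulse)
```

A significance test for r would additionally use the sample size (via a t-distribution), which is what the reported p = .591 reflects.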


The biological function of neurons can often be understood only in the context of large, highly interconnected networks. These networks typically form two-dimensional topographic maps, such as the retinotopic maps in mammalian visual cortex. Computational simulations of these areas have led to valuable insights about how cortical topography develops and functions, but further progress has been hindered by the lack of appropriate simulation tools. This paper introduces the freely available Topographica map-level simulator, originally developed at the University of Texas at Austin and now maintained at the University of Edinburgh, UK. Topographica is designed to make large-scale, detailed models practical. The goal is to allow neuroscientists and computational scientists to work together to understand how topographic maps and their connections organize and operate. This understanding will be crucial for integrating experimental observations into a comprehensive theory of brain function.
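Topographica itself is a full map-level simulator, but the core idea of self-organizing topography can be illustrated in a few lines with a one-dimensional Kohonen-style self-organizing map. This is a generic sketch, not Topographica's API or model; all names and parameter values are illustrative assumptions.

```python
import numpy as np

def train_som(inputs, n_units=20, epochs=200, lr=0.2, sigma=3.0, seed=0):
    """Train a 1-D self-organizing map: units develop a smooth,
    topographically ordered coverage of the input space."""
    rng = np.random.default_rng(seed)
    weights = rng.uniform(0.0, 1.0, n_units)   # initially unordered
    positions = np.arange(n_units)
    for _ in range(epochs):
        for x in inputs:
            winner = np.argmin(np.abs(weights - x))
            # Neighbourhood function: units near the winner on the map
            # move with it -- this is what produces topographic ordering.
            h = np.exp(-((positions - winner) ** 2) / (2.0 * sigma ** 2))
            weights += lr * h * (x - weights)
    return weights

rng = np.random.default_rng(1)
som = train_som(rng.uniform(0.0, 1.0, 100))
```

After training, neighbouring units respond to neighbouring stimuli, a toy analogue of the retinotopic organization discussed above; cortical models add lateral connectivity, spiking or firing-rate dynamics, and realistic input statistics.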


This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCAs), which have been a subject of academic research for some 17 years. Related research has grown remarkably over the past decades, while the design of solid and effective protection against such attacks remains an open research topic, in which more reliable initiatives are needed for the protection of personal, corporate and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were carved in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. With the rapid evolution of modern communication techniques, encryption solutions were incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Given the limited computing power available before the computer era, simple encryption was more than sufficient to conceal information; however, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated new research in cryptography, with the aim of protecting information systems against sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance built on greatly strengthened computing capabilities; in turn, sophisticated cryptanalysis has driven computing technologies forward.

Today the information world is deeply involved with cryptography, which protects every field through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theory and effective hardware practice, making implementation possible on various platforms (microprocessors, ASICs, FPGAs, etc.). Industrial security needs and requirements are the main driving metrics in electronic design, with the goal of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor, transistors) built on a silicon substrate during fabrication. The circuit's logic is realized through the countless switching events of these cells, a mechanism that inevitably causes a particular physical emanation which can be measured and correlated with the internal behaviour of the circuit. SCAs can be used to reveal confidential data (for example cryptographic keys), analyse the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards.

By comparing the correlation between the estimated leakage and the leakage actually measured, confidential information can be reconstructed in far less time and with far less computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and of electromagnetic (EM) radiation; both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to resist SCAs, which is why it is necessary, during circuit implementation, to integrate measures that camouflage the leakage through "side channels". Countermeasures against SCAs evolve along with the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage calls for countermeasures at the physical layer, which can generally be classified as intrinsic or extrinsic. Extrinsic countermeasures aim to confuse the attacker by adding noise or misaligning the internal activity. By comparison, intrinsic countermeasures are integrated into the implementation itself, modifying it so as to minimize the measurable leakage or even make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible way; unlike linear masking, the non-linear operations that are widespread in modern cryptography are difficult to mask. The hiding approach, which has been verified as an effective solution, mainly comprises dual-rail coding, devised specifically to flatten or remove the data-dependent leakage in power or EM.
In this doctoral thesis, besides describing the attack methodologies, great effort has been devoted to the structure of the proposed logic prototype, in order to carry out security research on countermeasure architectures at the logic level. One characteristic of SCAs lies in the format of the leakage sources. A typical side-channel attack is the power-based analysis, where the intrinsic capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore remove or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routes. The EDA tools provided by vendors manipulate the logic from a higher level rather than from the gate level, where side-channel leakage manifests itself. Classical implementation flows therefore barely satisfy these needs and inevitably cripple the prototype, so a customized and flexible design scheme has to be considered. This thesis presents the design and implementation of an innovative logic to counter SCAs, addressing three fundamental aspects: (i) it relies on a hiding strategy over a dual-rail circuit at the gate level, to dynamically balance the leakage in the lower layers; (ii) it exploits the architectural features of FPGAs to minimize the resource overhead of the implementation; (iii) it is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical application on Xilinx FPGA families.

In this sense, the methodology and tools are flexible enough to be extended to a wide range of applications requiring much stricter and more sophisticated gate- or routing-level constraints. This thesis makes a great effort to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions, and all the proposals are backed by experimental attacks that validate the security advantages of the system. The present research work intends to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools, developed to work together with the generic FPGA design flow, in order to create dual-rail logic in an innovative way. A new approach to security in encryption is proposed to obtain customization, automation and flexibility in fine-grained, low-level circuit prototyping. The main contributions of this research work are briefly summarized as follows. Precharge Absorbed-DPL (PA-DPL) logic: the use of netlist conversion to reserve free LUTs to execute the precharge and Ex signals in a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps increase resistance against selective EM measurement and mitigates the impact of process variations. Custom execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic:

(a) to detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logics where strict control of the interconnections is required in Xilinx-based applications. Further contributions are a custom CPA test platform for EM and power analysis, including the construction of the platform and the measurement and analysis methods of the attacks; timing analysis to quantify the security levels; security partitioning, i.e. partial conversion of a complex cipher to reduce the cost of protection; and a proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows. Chapter 1 covers the fundamentals of side-channel attacks, from the basic concepts of the analysis models to the implementation of the platform and the execution of the attacks. Chapter 2 presents the SCA resistance strategies against differential power and EM attacks; in addition, a compact and secure dual-rail logic is proposed as a major contribution, together with a logic transformation based on gate-level design. Chapter 3 addresses the challenges related to the implementation of generic dual-rail logic, describing a custom design flow that solves the application problems, together with a proposed automatic application development tool that mitigates the design barriers and eases the processes. Chapter 4 describes in detail the elaboration and implementation of the proposed tools.
Chapter 5 then describes the verification and security validation of the proposed logic, together with a sophisticated routing-security verification experiment. Finally, Chapter 6 summarizes the conclusions of the thesis and outlines future lines of work. In more detail, each chapter is organized as follows. Chapter 1 introduces the hardware implementation platform as well as the basic theory of side-channel attacks, mainly covering: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected cipher (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of the side-channel methods, which reveal the dissipation leakage correlated with internal behaviours, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internal data processed; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis deepens from Chapter 2 onwards, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the gate-level compensated elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the protocol of the logic and an application case. Third, two custom design flows are shown for performing the dual-rail conversion, and the technical definitions related to manipulation above the netlist at the LUT level are clarified.

Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs. The security level of the SCA-resistant solutions found in the state of the art has been degraded by the implementation barriers of conventional EDA tools. Within the FPGA architecture studied, the problems of dual-rail formats, parasitic impacts, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing the security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails or cloned routing networks, to application methods such as the custom EDA automation tools. Moreover, a self-adaptive heating system is presented and applied to a dual-core logic, in order to adjust the local temperature alternately so as to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the details of the toolkit implementation. Developed on top of a third-party API, the custom toolkit can manipulate the circuit logic elements of the post-P&R ncd file (an unreadable binary version of the xdl) converted into the Xilinx XDL format. The mechanism and rationale of the proposed instruments are carefully described, covering routing detection and the repair approaches.

The toolkit developed aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter specifies in particular the technical foundations for supporting implementations on Xilinx devices and the toolkit's flexibility for use in other applications. Chapter 5 focuses on the case studies used to validate the security level of the proposed logic. Detailed technical problems encountered during execution and some new implementation techniques are discussed: (a) the impact on the logic placement process when using the proposed toolkit; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) security validations performed with correlation and timing-analysis methods; (c) an asymptotic tactic applied to a BCDL-structured AES core to validate in a sophisticated way the impact of routing on security metrics; (d) preliminary results of the self-adaptive heating system under process variation; and (e) a practical application of the tools to a complete cipher design. Chapter 6 gives a general summary of the work presented in this thesis and, finally, a brief outlook on future work that may extend the potential of its contributions beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to side-channel attacks (SCAs), which have been the subject of academic research for some 17 years.
The related research has grown remarkably over the past decades, yet the design of solid and efficient protection curiously remains an open research topic, and more reliable countermeasures are still required for personal privacy and for enterprise and national data protection. The earliest documented use of secret codes can be traced back to around 1700 B.C., when unusual hieroglyphs were carved into inscriptions in ancient Egypt. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first applied to electronic signals to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure wired or wireless channels. Given the limited computing power of the pre-computer era, simple encryption tricks were sufficient in practice to conceal information. However, their algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which guards information systems with complex and advanced algorithms. The appearance of computers greatly accelerated the invention of robust ciphers, whose resistance relies on vastly strengthened computing capabilities; in turn, advanced cryptanalysis has driven computing technology forward. Nowadays the information world has become a crypto world, with pervasive cryptographic solutions protecting every field. These approaches are strong because they merge modern mathematical theory with effective hardware practice, allowing crypto algorithms to be implemented on a variety of platforms (microprocessors, ASICs, FPGAs, etc.).
Security requirements from industry are now a major driving metric in electronic design, which aims to build high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, showed that modern digital circuits are inherently vulnerable to an unconventional attack approach, since then named the side-channel attack after its source of analysis. Serious doubts about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of elementary logic elements (MOS, Metal-Oxide-Semiconductor, transistors) built on a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably produces characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various methods of comparing a predicted leakage quantity with the measured leakage, secrets can be reconstructed at far less expense of time and computation. More precisely, SCA encompasses a wide range of attack types, typically analyses of power consumption or of electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance; because of the severity of the threat, much attention must be paid at implementation time to assembling countermeasures that camouflage the leakage through these "side channels". Countermeasures against SCA evolve along with the development of attack techniques.
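The comparison between predicted and measured leakage described above can be made concrete with a toy Correlation Power Analysis (CPA) sketch. The Hamming-weight leakage model, the random substitution box and all parameters below are illustrative assumptions, not details taken from this thesis:

```python
import random

def hw(x):
    """Hamming weight of a byte: the classic power-leakage model."""
    return bin(x).count("1")

# Toy substitution box: a seeded random byte permutation stands in
# for a real cipher S-box.
rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)

def simulate_traces(key, n=500, noise=0.5):
    """One leakage sample per encryption: HW(SBOX[p ^ key]) plus noise."""
    plaintexts = [rng.randrange(256) for _ in range(n)]
    traces = [hw(SBOX[p ^ key]) + rng.gauss(0, noise) for p in plaintexts]
    return plaintexts, traces

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def cpa_recover(plaintexts, traces):
    """Rank all 256 key guesses by |correlation| of model vs. measurement."""
    return max(range(256),
               key=lambda k: abs(pearson([hw(SBOX[p ^ k]) for p in plaintexts],
                                         traces)))

plaintexts, traces = simulate_traces(key=0x3A)
print(hex(cpa_recover(plaintexts, traces)))
```

With a few hundred traces the correct key guess dominates all wrong guesses, which is precisely why unprotected implementations fall so cheaply.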
These physical characteristics call for countermeasures at the physical layer, which can be broadly classified into extrinsic and intrinsic categories. Extrinsic countermeasures confuse the attacker by adding noise or misalignment to the internal activity. Intrinsic countermeasures, by contrast, are built into the implementation itself, modifying it to minimize the measurable leakage or to make the leakage insensitive to the processed data. Hiding and masking are the two typical techniques in this category. Concretely, masking operates at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are ubiquitous in modern ciphers are difficult to mask. Hiding, a proven counter-solution, mainly refers to dual-rail logic, which is specifically devised to flatten or remove the data-dependent leakage in the power or EM signature. In this thesis, apart from describing the attack methodologies, effort is also dedicated to logic prototyping, mounting extensive security investigations of logic-level countermeasures. A characteristic of SCA lies in the nature of the leakage sources. The typical side-channel attack is power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore eliminate or mitigate the leakage of these micro units, such as basic logic gates, I/O ports and routing. Vendor-provided EDA tools manipulate the logic at a higher behavioral level, rather than at the lower gate level where the side-channel leakage is generated; classical implementations therefore barely satisfy these needs and inevitably stunt the prototype. A customized and flexible design scheme is thus appealing.
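The reversible masking idea mentioned above can be sketched with the textbook Boolean-masking construction for a table lookup, where the non-linear table is recomputed under fresh masks so the unmasked intermediate never appears. The toy affine S-box here is an assumption for illustration, not the cipher used in the thesis:

```python
import secrets

# Toy bijective S-box (gcd(7, 256) = 1, so x -> 7x + 3 mod 256 is invertible).
SBOX = [(x * 7 + 3) % 256 for x in range(256)]

def masked_sbox_lookup(x_masked, m_in, m_out):
    """Recompute the table so it maps masked input to masked output.
    The unmasked value SBOX[x] is never held for the processed data path."""
    masked_table = [0] * 256
    for x in range(256):
        masked_table[x ^ m_in] = SBOX[x] ^ m_out
    return masked_table[x_masked]

def masked_round(x):
    """Apply the S-box to x while keeping every intermediate masked."""
    m_in, m_out = secrets.randbelow(256), secrets.randbelow(256)
    x_masked = x ^ m_in                  # sensitive value hidden by a fresh mask
    y_masked = masked_sbox_lookup(x_masked, m_in, m_out)
    return y_masked ^ m_out              # unmask only at the very end

assert masked_round(0x5B) == SBOX[0x5B]  # correctness: masking is transparent
```

Because the masks are drawn fresh per execution, the values actually processed are statistically independent of the secret input, which is what defeats first-order correlation attacks.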
This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style, dynamically balancing the side-channel leakage of the lower circuit layers; II. It exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of assistant custom tools, incorporated into the generic FPGA design flow, to manipulate the circuit automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. A great effort is made in this thesis to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on a selected and widely used crypto algorithm, for thorough and fair evaluation with respect to prior solutions, and all the proposals are verified by security experiments. The presented research attempts to solve these implementation troubles. The essence formalized throughout this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow to create an innovative dual-rail logic: a method, in the crypto security area, for obtaining customization, automation and flexibility in low-level circuit prototyping with fine granularity over intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to carry the Precharge and Ex signals in a dual-rail logic style.
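The dual-rail precharge principle behind this hiding strategy can be illustrated with a tiny simulation. The WDDL-style AND cell below is the generic textbook gate, shown only to make the "constant switching activity" property concrete; it is not the proposed PA-DPL cell:

```python
def dual_rail(bit):
    """Encode a bit as (true_rail, false_rail); the precharge state is (0, 0)."""
    return (bit, 1 - bit)

def dr_and(a, b):
    """Dual-rail AND: the true rail fires for a AND b, the false rail for its
    complement (by De Morgan, NOT(a AND b) = NOT a OR NOT b)."""
    return (a[0] & b[0], a[1] | b[1])

def transitions_per_cycle(x, y):
    """Count output-rail transitions over one precharge -> evaluate cycle."""
    precharge = (0, 0)
    out = dr_and(dual_rail(x), dual_rail(y))
    return sum(p != o for p, o in zip(precharge, out))

# Exactly one of the two output rails switches, whatever the input data:
print([transitions_per_cycle(x, y) for x in (0, 1) for y in (0, 1)])
# prints [1, 1, 1, 1]
```

Since every rail pair makes exactly one 0-to-1 transition per cycle regardless of the data, the switching count (the first-order power model) is flattened; in real silicon the residual leakage comes from the capacitance mismatch between the two rails, which is why the thesis insists on identical placement and routing.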
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logics where strict network control is required in the Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying security grades. A methodology of security partitioning for complex crypto systems, reducing the protection cost. A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses side-channel attack fundamentals, from theoretical basics to analysis models, and further to platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles; a customized design flow that solves the implementation problems is described, along with a self-developed automatic implementation toolkit for mitigating the design barriers and facilitating the process. Chapter 4 originally elaborates the tool specifics and construction details.
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing-verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work is given in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, which is detailed; (c) the essentials of side-channel methods, revealing how the dissipation leakage correlates with internal behavior, and the method for recovering the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used throughout the thesis work. The content of the thesis expands and deepens from Chapter 2, which is divided into several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown, and the technical definitions to be manipulated above the LUT-level netlist are clarified; a brief discussion of the batched process closes the chapter. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools.
In the studied FPGA scenario, the problems of dual-rail format, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations two problems arise: how to implement the proposed logic without crippling its security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and implemented on a dual-core logic, intended to alternately adjust the local temperature so as to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from its unreadable binary form (the ncd file) to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical essentials that support the implementations on Xilinx devices and the flexibility to extend the toolkit to other applications. Chapter 5 focuses on the implementation of the case studies that validate the security grades of the proposed logic style using the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impact of using the proposed toolkit is discussed, and different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metric; (d) the preliminary results of using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 gives the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
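The "strictly identical routing networks" requirement can be pictured as a simple symmetry check over dual-rail net pairs. The XDL-like `wiretype@location` route tokens and the `_t`/`_f` pair-naming convention below are invented for illustration; the thesis toolkit works on real Xilinx XDL through a third-party API:

```python
def rail_pairs(nets):
    """Pair each true-rail net 'name_t' with its false-rail twin 'name_f'."""
    for name in nets:
        if name.endswith("_t") and name[:-2] + "_f" in nets:
            yield name, name[:-2] + "_f"

def wire_shape(route):
    """Abstract a route to its sequence of wire types, dropping coordinates,
    so two rails using the same resources in the same order compare equal."""
    return [wire.split("@")[0] for wire in route]

def asymmetric_nets(nets):
    """Report dual-rail pairs whose routing shapes differ (repair candidates:
    their unequal parasitics would leak data-dependent power/EM)."""
    return [(t, f) for t, f in rail_pairs(nets)
            if wire_shape(nets[t]) != wire_shape(nets[f])]

# Hypothetical post-P&R netlist: wiretype@location tokens per net.
nets = {
    "sbox_out_t": ["DOUBLE@X10Y4", "SINGLE@X11Y4", "PIN@X12Y4"],
    "sbox_out_f": ["DOUBLE@X10Y5", "SINGLE@X11Y5", "PIN@X12Y5"],  # same shape
    "key_xor_t":  ["DOUBLE@X8Y2", "PIN@X9Y2"],
    "key_xor_f":  ["DOUBLE@X8Y3", "SINGLE@X9Y3", "PIN@X10Y3"],    # extra hop
}
print(asymmetric_nets(nets))  # -> [('key_xor_t', 'key_xor_f')]
```

A repair pass would then rip up the flagged rail and re-route it along the same relative resources as its twin, which is the step the thesis automates.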

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The thesis Laminar Flow, the Igualada cemetery and the elastic processes in the architecture of Enric Miralles and Carme Pinós analyzes the methodology that the architects Miralles and Pinós used and developed in the project for the Igualada cemetery. Enric Miralles and Carme Pinós began to work together in 1983, parting ways seven years later. The architectural production they generated during this period has been valued and regarded by critics as among the finest of the last quarter of the twentieth century. On this period of collaboration there are scattered articles that analyze some aspects of their projects in isolation. However, no studies have analyzed their design methodology, their tools, their strategies, or their various connections and references; nor, to date, has anyone analyzed the nature of their design processes, their timescales and their aims. In this sense, the Igualada cemetery project would prove key in the team's trajectory. The long time spent on the process and the singularity of the work, together with other factors from Miralles and Pinós's own imaginarium, would mark this project as a turning point in their work. The research first describes the context in which Miralles and Pinós's initial projects began to develop, as well as the background that would generate the proposal for the Igualada cemetery. It then describes and analyzes the supports and references the architects used in this project, both those linked to local culture and those seized from the universal imaginarium. The central block of the research then contributes twelve tools that make up Miralles and Pinós's design methodology. This contribution has been made by analyzing and cross-referencing the original documents of the Igualada project with the architects' existing writings.
The tools contributed are: slide, shift, repeat, bury, constrain, oscillate, stretch, untangle, divert, redo, groove and flow. All of them are described, analyzed and interpreted through seventy-two essential documents extracted from the Igualada process. A final chapter of conclusions defines what the research presents as a methodology specific to Enric Miralles and Carme Pinós, denominated elastic processes, describing its qualities and its timescales, and also presents a tactic used in the Igualada project denominated laminar flow. ABSTRACT The thesis Laminar Flow, the Igualada cemetery and the elastic processes in the architecture of Enric Miralles and Carme Pinós analyses the methodology that the architects Miralles and Pinós used in the project for the Igualada cemetery park. Enric Miralles and Carme Pinós began to work together in 1983, and went their different ways seven years later. The architectural production they generated during that period has been valued and considered by critics to be among the greatest quality of production in the final quarter of the twentieth century. There is a scattering of articles about this period which give an independent analysis of some features of their projects. However, there are no studies which analyse their project design methodology, their tools, their strategies and their different connections and references. Neither has there been analysis to date of the nature of their project design processes, nor of their times and their aims. Within this context, the Igualada cemetery project was key in the path to be trodden by the team. The great length of time used on the process and the singularity of the work, alongside other factors from Miralles and Pinós's personal imaginarium, were to mark this project as a turning point in their work.
This research sets out, first of all, to describe the context in which the early projects by Miralles and Pinós began to be developed, as well as the background to the generation of their proposal for the Igualada cemetery. There follows a description and analysis of the resources and of the references that the architects used in this project, both those linked to local culture and those picked from the universal imaginarium. Furthermore, the central block of the research identifies twelve tools which make up Miralles and Pinós's project design methodology. This contribution has been performed on the basis of analysis and comparison of the original Igualada project documents and of writings extant by the architects. The tools identified are: slide, shift, repeat, bury, constrain, oscillate, stretch, untangle, divert, redo, groove and flow. They are all described, analysed and interpreted on the basis of seventy-two essential documents extracted from the Igualada process. A final chapter of conclusions defines what is presented in the research as a methodology which is specific to Enric Miralles and Carme Pinós, denominated elastic processes, describing their qualities and their times, along with the presentation of a tactic used in the Igualada project which is denominated laminar flow.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

Since its development in the 1970s, computed tomography (CT) has undergone major technological changes, becoming an important diagnostic tool for medicine. Consequently, the role of CT in diagnostic imaging has expanded rapidly, mainly owing to improvements in image quality and acquisition time. The radiation dose received by patients in such procedures has been gaining attention, leading the scientific community and manufacturers to work together toward dose determination and optimization. In recent decades many methodologies for patient dosimetry have been proposed, based especially on calculations using the Monte Carlo technique or on experimental measurements with phantoms and dosimeters. The possibility of in vivo measurements is also being investigated. Currently, the main dose-optimization techniques include reduction and/or modulation of the tube current. The present work proposes an experimental methodology for estimating the dose absorbed by the lungs in clinical CT protocols, using an adult anthropomorphic phantom and lithium fluoride (LiF) thermoluminescent dosimeters. Seven different clinical protocols were selected, based on their relevance to dose optimization and their frequency in the clinical routine of two large hospitals: the Instituto de Radiologia do Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo (InRad) and the Instituto do Câncer do Estado de São Paulo Octávio Frias de Oliveira (ICESP). Four dose-optimization protocols were analyzed: Auto mA, Auto + Smart mA, Low Dose (BD) and Ultra Low Dose (UBD). The first two seek dose reduction by modulating the tube current, while the BD and UBD protocols reduce the tube current and keep it constant.
The BD and UBD protocols provided dose reductions of 72.7(8)% and 91(1)%, respectively; dose reductions of 16.8(1.3)% and 35.0(1.2)% were obtained with the Auto mA and Auto + Smart mA protocols, respectively. The dose estimates for the protocols analyzed in this study are compatible with similar studies published in the literature, demonstrating the efficiency of the methodology for calculating absorbed lung doses. Its applicability can be extended to different organs, different CT protocols and different types of anthropomorphic phantoms (pediatric ones, for example). Finally, the comparison between the estimated lung doses and the Size-Specific Dose Estimates (SSDE) showed a linear dependence between the two quantities. Similar studies have reported analogous behavior for rectal doses, suggesting that organ absorbed doses may depend linearly on SSDE values, with organ-specific linear coefficients. Further investigation of organ doses is needed to evaluate this hypothesis.
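The reported linear dependence between organ dose and SSDE amounts to fitting a line dose = a·SSDE + b per organ. A minimal ordinary-least-squares sketch follows; the (SSDE, lung dose) pairs are made-up illustrative values, not data from this study:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Hypothetical (SSDE, lung dose) pairs in mGy, one per scan protocol.
ssde = [2.1, 5.0, 9.8, 14.6, 17.9]
lung = [2.9, 6.4, 12.2, 18.0, 22.0]
a, b = linear_fit(ssde, lung)
print(round(a, 2), round(b, 2))
```

With measured data, the fitted slope `a` would be the organ-specific linear coefficient the abstract hypothesizes, and the quality of the fit (e.g. the residuals) would test the linearity claim.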

Relevância:

100.00% 100.00%

Publicador:

Resumo:

This work deals with the development of a computational system, for data generation and presentation of results, specific to building structures. The developed routines must work in conjunction with a computational system for structural analysis based on the Finite Element Method, covering both floor structures, using bar, plate/shell and spring elements, and bracing structures, using three-dimensional bar elements and special resources such as master nodes and rigid links. The computational language adopted for these routines is DELPHI's Object Pascal, a visual programming environment structured on Object Pascal's object-oriented programming. This choice aims to produce a computational system in which functions can easily be changed or added without the whole set of programs having to be analyzed and modified. Finally, the program should serve as a true environment for the analysis of building structures, controlling through a user-friendly interface a series of other programs already developed in FORTRAN, such as those for the design of beams, columns, etc.