792 results for BIAS-ENHANCED NUCLEATION


Relevance:

30.00%

Publisher:

Abstract:

Voltage-controlled spin electronics is crucial for continued progress in information technology. It aims at reduced power consumption, increased integration density and enhanced functionality where non-volatile memory is combined with high-speed logical processing. Promising spintronic device concepts use the electric control of interface and surface magnetization. From the combination of magnetometry, spin-polarized photoemission spectroscopy, symmetry arguments and first-principles calculations, we show that the (0001) surface of magnetoelectric Cr2O3 has a roughness-insensitive, electrically switchable magnetization. Using a ferromagnetic Pd/Co multilayer deposited on the (0001) surface of a Cr2O3 single crystal, we achieve reversible, room-temperature isothermal switching of the exchange-bias field between positive and negative values by reversing the electric field while maintaining a permanent magnetic field. This effect reflects the switching of the bulk antiferromagnetic domain state and the interface magnetization coupled to it. The switchable exchange bias sets in exactly at the bulk Néel temperature.

Relevance:

30.00%

Publisher:

Abstract:

Aerosol particles are strongly related to climate, air quality, visibility and human health. They contribute the largest uncertainty in the assessment of the Earth's radiative budget, directly by scattering or absorbing solar radiation, or indirectly by nucleating cloud droplets. The influence of aerosol particles on cloud-related climatic effects depends essentially upon their number concentration, size and chemical composition. A major part of the submicron aerosol consists of secondary organic aerosol (SOA), which is formed in the atmosphere by the oxidation of volatile organic compounds. SOA can comprise a highly diverse spectrum of compounds that undergo continuous chemical transformation in the atmosphere.

The aim of this work was to obtain insights into the complexity of ambient SOA by the application of advanced mass spectrometric techniques. To this end, an atmospheric pressure chemical ionization ion trap mass spectrometer (APCI-IT-MS) was deployed in the field, facilitating the measurement of ions of the intact molecular organic species. Furthermore, the high measurement frequency provided insights into SOA composition and chemical transformation processes at high temporal resolution. Within several comprehensive field campaigns, online measurements of particular biogenic organic acids were achieved by combining an online aerosol concentrator with the APCI-IT-MS. A holistic picture of the ambient organic aerosol was obtained through the co-located application of complementary MS techniques, such as aerosol mass spectrometry (AMS) and filter sampling for analysis by liquid chromatography / ultrahigh-resolution mass spectrometry (LC/UHRMS).

In particular, during a summertime field study at the pristine boreal forest station in Hyytiälä, Finland, the partitioning of organic acids between the gas and particle phases was quantified, based on the online APCI-IT-MS and AMS measurements.
It was found that low-volatility compounds reside to a large extent in the gas phase. This observation can be interpreted as a consequence of large aerosol equilibration timescales, which build up due to the continuous production of low-volatility compounds in the gas phase and/or a semi-solid phase state of the ambient aerosol. Furthermore, in-situ structural information on particular compounds was obtained by using the MS/MS mode of the ion trap. Comparison with MS/MS spectra from laboratory-generated SOA of specific monoterpene precursors indicated that laboratory SOA barely captures the complexity of ambient SOA. Moreover, it was shown that the mass spectra of the laboratory SOA more closely resemble the ambient gas-phase composition, indicating that the oxidation state of the ambient organic compounds in the particle phase is underestimated by comparison with laboratory ozonolysis. These observations suggest that micro-scale processes, such as the chemistry of aerosol aging and gas-to-particle partitioning, need to be better understood in order to predict SOA concentrations more reliably.

During a field study at Mt. Kleiner Feldberg, Germany, a slightly different aerosol concentrator / APCI-IT-MS setup made the online analysis of new particle formation possible. During one nucleation event, the online mass spectra indicated that organic compounds of approximately 300 Da are main constituents of the bulk aerosol during ambient new particle formation. Co-located filter analysis by LC/UHRMS supported these findings and furthermore allowed the determination of the molecular formulas of the organic compounds involved.
The unambiguous identification of several oxidized C15 compounds indicated that oxidation products of sesquiterpenes can be important compounds for the initial formation and subsequent growth of atmospheric nanoparticles.

The LC/UHRMS analysis furthermore revealed that considerable amounts of organosulfates and nitrooxy organosulfates were present on the filter samples. Indeed, several nitrooxy-organosulfate-related APCI-IT-MS mass traces were found to be simultaneously enhanced. Concurrent particle-phase ion chromatography and AMS measurements indicated a strong discrepancy between inorganic sulfate and total sulfate concentrations, supporting the assumption that substantial amounts of sulfate were bound to organic molecules.

Finally, the comprehensive chemical analysis of the aerosol composition was compared with the hygroscopicity parameter kappa, which was derived from cloud condensation nuclei (CCN) measurements. Simultaneously, organic aerosol aging was tracked through the evolution of the ratio between a second-generation and a first-generation biogenic oxidation product. This aging proxy was found to correlate positively with increasing hygroscopicity. Moreover, it was observed that the bonding of sulfate to organic molecules leads to a significant reduction of kappa, compared to an internal mixture of the same mass fractions of purely inorganic sulfate and organic molecules. In conclusion, this thesis has shown that the application of modern mass spectrometric techniques allows detailed insights into the chemical and physico-chemical processes of atmospheric aerosols.
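The kappa comparison described above is usually made with the volume-weighted (ZSR-type) mixing rule of kappa-Köhler theory. The sketch below illustrates that rule only; the function name and the example kappa values (roughly typical literature numbers for ammonium sulfate and oxidized organics) are my assumptions, not values from this study, and the study itself quotes mass rather than volume fractions.

```python
def kappa_mixture(components):
    """Volume-weighted (ZSR-type) mixing rule for the hygroscopicity
    parameter kappa of an internally mixed particle.

    components: list of (volume_fraction, kappa) pairs; the volume
    fractions of all components must sum to 1.
    """
    total = sum(vf for vf, _ in components)
    if abs(total - 1.0) > 1e-6:
        raise ValueError("volume fractions must sum to 1")
    # kappa_mix = sum_i eps_i * kappa_i
    return sum(vf * k for vf, k in components)

# Illustrative only: a 50/50 mix of an ammonium-sulfate-like component
# (kappa ~ 0.6) and an oxidized-organic-like component (kappa ~ 0.1).
kappa_mix = kappa_mixture([(0.5, 0.6), (0.5, 0.1)])
```

Binding sulfate into organosulfates effectively moves volume from the high-kappa to the low-kappa component, which is consistent with the reduction of kappa reported above.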

Relevance:

30.00%

Publisher:

Abstract:

In recent years, several previously unknown phenomena have been observed experimentally, such as the existence of distinct pre-nucleation structures. These findings have contributed to a new understanding of the processes that occur at the molecular level during the nucleation and growth of crystals. The consequences of such pre-nucleation structures for the process of biomineralization are not yet sufficiently understood. The mechanisms by which biomolecular modifiers such as peptides may interact with pre-nucleation structures and thereby influence the nucleation process of minerals are manifold. Molecular simulations are well suited to analyzing the formation of pre-nucleation structures in the presence of modifiers. This work describes an approach for analyzing the interaction of peptides with the dissolved constituents of the emerging crystals by means of molecular dynamics simulations.

To enable informative simulations, the quality of existing force fields was first assessed with respect to their description of oligoglutamates interacting with calcium ions in aqueous solution. Large discrepancies between established force fields became apparent, and none of the force fields examined provided a realistic description of the ion pairing of these complex ions. A strategy for optimizing existing biomolecular force fields in this respect was therefore developed. Relatively small changes to the parameters governing the ion-peptide van der Waals interactions were sufficient to obtain a reliable model for the system under study.

Comprehensive sampling of the phase space of these systems poses a particular challenge because of the numerous degrees of freedom and the strong interactions between calcium ions and glutamate in solution. The biasing potential replica exchange molecular dynamics method was therefore tuned for the sampling of oligoglutamates, and peptides of different chain lengths were simulated in the presence of calcium ions. Using sketch-map analysis, numerous stable ion-peptide complexes that could influence the formation of pre-nucleation structures were identified in the simulations. Depending on the chain length of the peptide, these complexes exhibit characteristic distances between the calcium ions. These resemble certain calcium-calcium distances in those phases of calcium oxalate crystals grown in the presence of oligoglutamates. The analogy between the calcium-ion distances in dissolved ion-peptide complexes and in calcium oxalate crystals may point to the importance of ion-peptide complexes in the nucleation and growth of biominerals, and offers a possible explanation for the experimentally observed ability of oligoglutamates to influence the phase of the forming crystal.
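Biasing-potential replica exchange, as used above, runs replicas with different bias potentials at the same temperature and periodically attempts to swap their configurations with a Metropolis criterion. A minimal sketch of that acceptance rule, assuming Hamiltonian (bias-only) exchange; the function and variable names are mine, not from this work.

```python
import math

def exchange_probability(beta, u_i_xi, u_i_xj, u_j_xi, u_j_xj):
    """Metropolis acceptance probability for swapping configurations
    x_i and x_j between replicas i and j that share a temperature
    (beta = 1/kT) but carry different bias potentials U_i and U_j.

    u_a_xb is the bias energy U_a evaluated on configuration x_b.
    """
    # Energy change of the extended ensemble if the swap is accepted.
    delta = beta * ((u_i_xj + u_j_xi) - (u_i_xi + u_j_xj))
    return min(1.0, math.exp(-delta))
```

A swap that lowers (or leaves unchanged) the total biased energy is always accepted; otherwise it is accepted with Boltzmann probability, which preserves detailed balance in each replica.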

Relevance:

30.00%

Publisher:

Abstract:

A crystal nucleus in a finite volume may exhibit phase coexistence with a surrounding fluid. The thermodynamic properties of the coexisting fluid (pressure and chemical potential) are enhanced relative to their coexistence values, and this enhancement is uniquely related to the surface excess free energy.

A model for weakly attractive soft colloidal particles, the so-called Asakura-Oosawa model, is investigated. In simulations, this model allows the pressure in the liquid to be calculated directly using the virial formula. The phase coexistence pressure in the thermodynamic limit is obtained from the interface velocity method. We introduce a method by which the chemical potential in dense liquids can be measured. There is no need either to locate the interface or to compute the anisotropic interfacial tension to obtain nucleation barriers; our analysis is therefore appropriate for nuclei of arbitrary shape. Monte Carlo simulations over a wide range of nucleus volumes yield nucleation barriers that are independent of the total system volume. The interfacial tension is determined via the ensemble-switch method, so a detailed test of classical nucleation theory is possible. The anisotropy of the interfacial tension and the resulting non-spherical shape have only a minor effect on the barrier for the Asakura-Oosawa model.
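The "detailed test of classical nucleation theory" mentioned above compares measured barriers with the textbook CNT prediction for a spherical nucleus with isotropic interfacial tension. A minimal sketch of that prediction (names are mine; the abstract's point is precisely that anisotropy perturbs this estimate only weakly):

```python
import math

def cnt_barrier(gamma, delta_p):
    """Classical nucleation theory for a spherical nucleus.

    Free energy of a nucleus of radius R:
        dG(R) = 4*pi*R**2*gamma - (4/3)*pi*R**3*delta_p,
    where gamma is the interfacial tension and delta_p the pressure
    difference driving nucleation. Maximizing over R gives the
    critical radius and barrier height.
    """
    r_star = 2.0 * gamma / delta_p                       # Laplace radius
    dg_star = 16.0 * math.pi * gamma**3 / (3.0 * delta_p**2)
    return r_star, dg_star
```

With barriers measured independently of the interface location, and gamma from the ensemble-switch method, this formula is what the simulation results can be tested against.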

Relevance:

30.00%

Publisher:

Abstract:

The delayed gadolinium-enhanced MRI of cartilage (dGEMRIC) technique has shown promising results in pilot clinical studies of early osteoarthritis. Currently, its broader acceptance is limited by the long scan time and the need for postprocessing to calculate the T1 maps. A fast T1 mapping technique based on two spoiled gradient echo images was implemented. In phantom studies, an appropriate flip-angle combination optimized for a center T1 of 756 to 955 ms yielded excellent agreement with T1 measured using the inversion recovery technique in the range of 200 to 900 ms, the range of interest in normal and diseased cartilage. In vivo validation was performed by serially imaging 26 hips using both the inversion recovery and the fast two-angle T1 mapping techniques (center T1 of 756 ms). Excellent correlation was seen, with a Pearson correlation coefficient R2 of 0.74, and Bland-Altman plots demonstrated no systematic bias.
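A two-point variable-flip-angle T1 estimate of the kind described (often called DESPOT1) can be sketched as follows. This is a generic illustration under the standard linearized SPGR signal equation, with hypothetical names, and is not the authors' implementation; M0 cancels out of the slope.

```python
import math

def t1_two_angle(s1, s2, a1, a2, tr):
    """Estimate T1 from two spoiled gradient echo signals s1, s2
    acquired at flip angles a1, a2 (radians) with repetition time tr.

    SPGR: S(a) = M0*sin(a)*(1 - E1)/(1 - E1*cos(a)), E1 = exp(-tr/T1).
    Linearized: S/sin(a) = E1*(S/tan(a)) + M0*(1 - E1),
    so the slope through the two points gives E1 directly.
    """
    x1, y1 = s1 / math.tan(a1), s1 / math.sin(a1)
    x2, y2 = s2 / math.tan(a2), s2 / math.sin(a2)
    e1 = (y2 - y1) / (x2 - x1)   # slope = exp(-tr/T1)
    return -tr / math.log(e1)
```

Because the inversion is exact for noiseless SPGR signals, accuracy in practice hinges on choosing the two flip angles for the expected (center) T1, as the phantom optimization above describes.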

Relevance:

30.00%

Publisher:

Abstract:

PURPOSE In leukemic cutaneous T-cell lymphoma (L-CTCL), malignant T cells accumulate in the blood and give rise to widespread skin inflammation. Patients have intense pruritus, increased immunoglobulin E (IgE), and decreased T-helper (TH)-1 responses, and most die from infection. Depleting malignant T cells while preserving normal immunity is a clinical challenge. L-CTCL has been variably described as a malignancy of regulatory, TH2, and TH17 cells. EXPERIMENTAL DESIGN We analyzed phenotype and cytokine production in malignant and benign L-CTCL T cells, characterized the effects of malignant T cells on healthy T cells, and studied the immunomodulatory effects of treatment modalities in patients with L-CTCL. RESULTS Twelve of 12 patients with L-CTCL overproduced TH2 cytokines. The remaining benign T cells were also strongly TH2 biased, suggesting a global TH2 skewing of the T-cell repertoire. Culture of benign T cells away from the malignant clone reduced TH2 and enhanced TH1 responses, but separate culture had no effect on malignant T cells. Coculture of healthy T cells with L-CTCL T cells reduced IFNγ production, and neutralizing antibodies to interleukin (IL)-4 and IL-13 restored TH1 responses. In patients, enhanced TH1 responses were observed following a variety of treatment modalities that reduced malignant T-cell burden. CONCLUSIONS A global TH2 bias exists in both benign and malignant T cells in L-CTCL and may underlie the infectious susceptibility of patients. TH2 cytokines from malignant cells strongly inhibited TH1 responses. Our results suggest that therapies that inhibit TH2 cytokine activity, by virtue of their ability to improve TH1 responses, may have the potential to enhance both anticancer and antipathogen responses.

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis focuses primarily on attack techniques and countermeasures related to side-channel attacks (SCA), which have been studied in academic research for the past 17 years. Related research has grown remarkably in recent decades, while the design of solid and effective protection against such attacks remains an open research topic in which more reliable initiatives are needed to protect personal, corporate and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were scribed in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. With the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Owing to the limited computing power available before the computer era, simple encryption was more than sufficient to conceal information. However, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated further research in cryptography in order to protect information systems against sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance backed by greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies forward.
Today, the information world is thoroughly involved with cryptography, which protects every field through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making implementation possible on various platforms (microprocessors, ASICs, FPGAs, etc.). Industrial security needs and requirements are the main driving metrics in electronic design, with the aim of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor), built on a silicon substrate during fabrication. The circuit logic is realized through the countless switching events of these cells. This mechanism inevitably produces a particular physical emanation that can be measured and correlated with the internal behavior of the circuit. SCA can be used to reveal confidential data (for example, cryptographic keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards.
By correlating the estimated leakage with the actually measured leakage, confidential information can be reconstructed in far less time and with far less computation. To be precise, SCA covers a wide range of attack types, such as the analysis of power consumption and of electromagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to resist SCA. It is therefore necessary, during circuit implementation, to integrate measures that camouflage the leakage through "side channels". Countermeasures against SCA are evolving along with the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures confuse the attack source by injecting noise or misaligning the internal activity. By comparison, intrinsic countermeasures are integrated into the algorithm itself, modifying the implementation to minimize the measurable leakage or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level, altering sensitive intermediate data with a mask in a reversible manner. Unlike linear masking, the non-linear operations that are widespread in modern cryptography are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail encoding, devised especially to flatten or remove the data-dependent leakage in power or EM signatures.
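The correlation between estimated and measured leakage described above is commonly realized as correlation power analysis (CPA): for each key guess, a Hamming-weight hypothesis is correlated against the traces and the guess with the highest correlation wins. A minimal sketch, using a 4-bit S-box (PRESENT's, used here only as a compact stand-in) and noiseless simulated traces; the names and the toy cipher are illustrative, not this thesis's setup.

```python
# Toy 4-bit S-box (the PRESENT S-box) standing in for a real cipher's
# substitution layer.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hamming_weight(x):
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def cpa_recover_key(plaintexts, traces):
    """Rank 4-bit key guesses by correlating the leakage hypothesis
    HW(SBOX[pt ^ guess]) against the measured trace samples."""
    best_guess, best_corr = None, -1.0
    for guess in range(16):
        hypo = [hamming_weight(SBOX[pt ^ guess]) for pt in plaintexts]
        r = abs(pearson(hypo, traces))
        if r > best_corr:
            best_guess, best_corr = guess, r
    return best_guess
```

Real attacks correlate against many time samples per trace and need many noisy traces; the statistical core, however, is exactly this per-guess Pearson correlation.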
In this doctoral thesis, besides describing the attack methodologies, considerable effort has been devoted to the structure of the proposed logic prototype, in order to conduct security-focused investigations into countermeasures at the logic level. One characteristic of SCA lies in the format of the leakage sources. The typical side-channel attack is the power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. Therefore, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as the basic logic gates, I/O ports and routing. The EDA tools provided by vendors manipulate the logic from a higher level rather than from the gate level, where side-channel leakage manifests itself. Classical implementations therefore hardly meet these needs and inevitably stunt the prototype. For this reason, a customized and flexible design scheme must be considered. This thesis presents the design and implementation of an innovative logic to counter SCA, which addresses three fundamental aspects: I. It relies on a hiding strategy over a gate-level dual-rail circuit to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource cost of the implementation; III. It is supported by a set of customized assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating its practical application on Xilinx FPGA families.
In this respect, the methodology and tools are flexible enough to be extended to a wide range of applications requiring much stricter and more sophisticated gate-level or routing-level constraints. This thesis makes a substantial effort to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are effectively supported by experimental attacks in order to validate the security advantages of the system. This research work intends to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of implementation tools for FPGAs, developed to work together with their generic design flow, in order to create the dual-rail logic in an innovative way. A new approach to encryption security is proposed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity. The main contributions of this research work are briefly summarized as follows: Precharge Absorbed DPL (PA-DPL) logic: the use of netlist conversion to reserve free LUTs for executing the precharge and Ex signals in a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps to increase resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic.
(a) To detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logics where strict control of the interconnections is required in Xilinx-based applications. A customized CPA test platform for EM and power analysis, including the construction of the platform and the measurement and attack-analysis methods. Timing analysis to quantify the security levels. Security partitioning in the partial conversion of a complex cipher system in order to reduce the cost of protection. A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, ranging from the basic concepts of analysis-model theory to the implementation of the platform and the execution of the attacks. Chapter 2 presents SCA resistance strategies against differential power and EM attacks. In addition, this chapter proposes a compact and secure dual-rail logic as a major contribution, and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges related to the implementation of generic dual-rail logic. It also describes a customized design flow to solve the application problems, together with a proposed automatic application development tool to mitigate the design barriers and ease the processes. Chapter 4 describes in detail the elaboration and implementation of the proposed tools.
The verification and security validation of the proposed logic, together with a sophisticated routing-security verification experiment, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the outlook for future lines of work are given in Chapter 6. To elaborate on the content of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform as well as the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of side-channel methods, which reveal the dissipation leakage correlated with internal behavior, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis is broadened and deepened from Chapter 2 onwards, which addresses several key aspects. First, the dynamic-compensation protection principle of generic dual-rail precharge logic (DPL) is explained by describing the compensated elements at the gate level. Second, the PA-DPL logic is proposed as an original contribution, detailing its protocol and an application case. Third, two customized design flows for performing the dual-rail conversion are shown. Along with this, the technical definitions related to manipulation above the netlist at the LUT level are clarified.
Finally, a brief discussion of the overall process is given at the end of the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs. The security level of the SCA-resistant solutions found in the state of the art has degraded because of the implementation barriers imposed by conventional EDA tools. For the FPGA architecture under study, the problems of dual-rail formats, parasitic impacts, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing the security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structural features such as interleaved dual rail or cloned routing networks to application methods such as the EDA customization and automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic in order to alternately adjust the local temperature and balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the customized toolkit is able to manipulate the circuit logic elements of the post-P&R ncd file (an unreadable binary version of the xdl) converted to the Xilinx XDL format. The mechanism and rationale of the proposed instruments are carefully described, covering routing detection and the repair approaches.
The toolkit developed aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical basis for supporting implementations on Xilinx devices and the toolkit's flexibility to be used in other applications. Chapter 5 focuses on the case studies applied to validate the security levels of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed. (a) The impact on the placement process of the logic using the proposed toolkit is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) the security validations are performed with correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the impact of routing on security metrics; (d) preliminary results using the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete cipher design is introduced. Chapter 6 contains the overall summary of the work presented in this doctoral thesis. Finally, a brief outlook on future work is given, which may extend the potential use of the contributions of this thesis beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to side-channel attacks (SCA), which have been a topic of academic research for 17 years.
Related research has grown remarkably in the past decades, while the design of solid and efficient protection remains an open research topic in which more reliable initiatives are required for personal privacy and for enterprise and national data protection. The earliest documented use of secret code can be traced back to around 1700 B.C., when hieroglyphs in ancient Egypt were scribed in inscriptions. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signaling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure cable or wireless channels. Limited by the computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, aiming to guard information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers efficient resistance relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn greatly driven computing technologies. Nowadays, the information world has become a crypto world, protecting all fields with pervasive crypto solutions. These approaches are strong because of the optimized merger of modern mathematical theories and effective hardware practice, which makes it possible to implement crypto theories on various platforms (microprocessors, ASICs, FPGAs, etc.).
Security needs from industry are in fact a major driving metric in electronic design, promoting the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, which has since been named the side-channel attack after its source of analysis. Critical suspicion of theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of elementary logic elements (MOS - Metal Oxide Semiconductor - transistors) built on a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably produces characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various means of comparison between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at a much lower cost in time and computation. More precisely, SCA encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of the severity of the threat, much attention must be paid to the implementation in order to assemble countermeasures that camouflage the leakages escaping via "side channels". Countermeasures against SCA evolve along with the development of attack techniques. 
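The "comparison between the predicted leakage quantity and the measured leakage" at the heart of a correlation-based SCA can be sketched in a few lines. The example below simulates Hamming-weight leakage from a made-up S-box (the S-box, noise level and trace count are illustrative assumptions, not the setup used in the thesis) and recovers the key byte by Pearson correlation:

```python
import numpy as np

# Toy correlation power analysis (CPA) sketch -- all data here is
# simulated; a real attack would use measured power/EM traces.
rng = np.random.default_rng(0)

SBOX = rng.permutation(256)                    # stand-in for a cipher S-box
plaintexts = rng.integers(0, 256, size=1000)
true_key = 0x3A

def hamming_weight(x):
    return bin(int(x)).count("1")

# Simulated leakage: Hamming weight of the S-box output plus Gaussian noise.
leakage = np.array([hamming_weight(SBOX[p ^ true_key]) for p in plaintexts])
traces = leakage + rng.normal(0, 1.0, size=len(plaintexts))

# For every key guess, correlate predicted leakage with the traces;
# the correct guess yields the highest Pearson correlation.
correlations = []
for guess in range(256):
    predicted = np.array([hamming_weight(SBOX[p ^ guess]) for p in plaintexts])
    correlations.append(abs(np.corrcoef(predicted, traces)[0, 1]))

recovered = int(np.argmax(correlations))
print(hex(recovered))
```

With 1,000 noisy traces the correct guess stands out clearly; dual-rail countermeasures aim to flatten exactly the data dependence this statistic exploits.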
These physical characteristics require countermeasures at the physical layer, which can be broadly classified into intrinsic and extrinsic vectors. Extrinsic countermeasures confuse the attacker by adding noise or misalignment to the internal activities. By comparison, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and masking are the two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are ubiquitous in modern ciphers are difficult to mask. Hiding, proven an effective counter-solution, mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from the context describing the attack methodologies, effort has also been dedicated to logic prototyping, mounting extensive security investigations of logic-level countermeasures. A characteristic of SCA resides in the nature of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitances of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic at a higher, behavioral level rather than at the lower gate level where side-channel leakage is generated. Classical implementation flows therefore barely satisfy these needs and inevitably stunt the prototype. In this situation, a customized and flexible design scheme needs to be devised. 
This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on the hiding strategy, applied as a gate-level dual-rail style that dynamically counterbalances side-channel leakage from the lower circuit layers; II. The logic exploits architectural features of modern FPGAs to minimize implementation cost; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, that perform circuit manipulations automatically. The automated design toolkit supports the proposed dual-rail logic and facilitates practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on a selected and widely used crypto algorithm, allowing a thorough and fair evaluation with respect to prior solutions, and all the proposals are verified by security experiments. The presented research work attempts to solve these implementation troubles. The essence formalized throughout this thesis is a customized execution toolkit for modern FPGA systems that works together with the generic FPGA design flow to create innovative dual-rail logic, establishing a method in the crypto-security area for obtaining customization, automation and flexibility in low-level circuit prototyping, with fine granularity over intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL logic: netlist conversion is used to reserve free LUT inputs that carry the Precharge and Ex signals in a dual-rail logic style. 
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in a Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, the measurement method and the attack analysis. A timing-analysis-based method for quantifying the security grades. A methodology of security partitioning of complex crypto systems for reducing the protection cost. A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations through dynamic dual-rail compensation. The thesis chapters are organized as follows. Chapter 1 discusses the side-channel attack fundamentals, covering theoretical basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles; a customized design flow that solves the implementation problems is described along with a self-developed automatic implementation toolkit, mitigating the design barriers and facilitating the process. Chapter 4 originally elaborates the tool specifics and construction details. 
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing-verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next. Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, described in detail; (c) the essentials of side-channel methods, showing how dissipation leakage correlates with internal behavior and how to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used in the thesis work. The content of the thesis is expanded and deepened from Chapter 2 onwards, which is divided into several parts. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed by detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown, and the technical definitions manipulated above the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools. 
In the studied FPGA scenario, the problems are discussed in terms of dual-rail format, parasitic impact, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is legalized through a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and implemented on a dual-core logic, designed to adjust the local temperature in real time in order to balance the negative impact of silicon technological bias. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of a post-P&R circuit (an unreadable binary version of the xdl one) converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical essentials that support the implementations on Xilinx devices and the flexibility for extension to other applications. Chapter 5 focuses on the implementation of the case studies for validating the security grades of the proposed logic style using the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts of using the proposed toolkit are discussed. 
Different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) preliminary results of using the self-adaptive heating system against process variation are given; (e) a practical implementation of the proposed toolkit in a large design is introduced. Chapter 6 gives the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
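As a rough illustration of the asymmetric-net detection listed among the contributions, a dual-rail pair can be deemed symmetric when both rails traverse the same sequence of routing resources. The sketch below uses an invented netlist representation (the net names, prefixes and segment labels are hypothetical; the real toolkit operates on the far richer Xilinx xdl structures):

```python
# Hedged sketch of an asymmetric-net check: a dual-rail pair is
# "symmetric" when both rails use routing paths of the same shape.
# The netlist format below is illustrative only, not Xilinx xdl.

# Each net maps its name to an ordered list of routing segment types.
netlist = {
    "aes_t/s0": ["CLB_OUT", "DOUBLE", "SINGLE", "CLB_IN"],
    "aes_f/s0": ["CLB_OUT", "DOUBLE", "SINGLE", "CLB_IN"],   # symmetric pair
    "aes_t/s1": ["CLB_OUT", "HEX", "CLB_IN"],
    "aes_f/s1": ["CLB_OUT", "DOUBLE", "DOUBLE", "CLB_IN"],   # asymmetric pair
}

def asymmetric_pairs(netlist, true_prefix="aes_t/", false_prefix="aes_f/"):
    """Return rail-pair names whose routing segment sequences differ."""
    bad = []
    for name, route in netlist.items():
        if not name.startswith(true_prefix):
            continue
        partner = false_prefix + name[len(true_prefix):]
        if netlist.get(partner) != route:
            bad.append(name[len(true_prefix):])
    return bad

print(asymmetric_pairs(netlist))  # ['s1']
```

A repair step would then re-route the flagged pair (here `s1`) so that both rails present identical parasitic loads.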

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Research investigating anxiety-related attentional bias for emotional information in anxious and nonanxious children has been equivocal with regard to whether a bias for fear-related stimuli is unique to anxious children or is common to children in general. Moreover, recent cognitive theories have proposed that an attentional bias for objectively threatening stimuli may be common to all individuals, with this effect enhanced in anxious individuals. The current study investigated whether an attentional bias toward fear-related pictures could be found in nonselected children (n = 105) and adults (n = 47) and whether a sample of clinically anxious children (n = 23) displayed an attentional bias for fear-related pictures over and above that expected for nonselected children. Participants completed a dot-probe task that employed fear-related, neutral, and pleasant pictures. As expected, both adults and children showed a stronger attentional bias toward fear-related pictures than toward pleasant pictures. Consistent with some findings in the childhood domain, the extent of the attentional bias toward fear-related pictures did not differ significantly between anxious children and nonselected children. However, compared with nonselected children, anxious children showed a stronger attentional bias overall toward affective picture stimuli. (C) 2004 Elsevier Inc. All rights reserved.
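The attentional-bias index behind such dot-probe results is conventionally the mean reaction time when the probe replaces the neutral picture minus the mean reaction time when it replaces the emotional picture; positive values indicate attention drawn toward the emotional cue. A minimal sketch with fabricated reaction times (the numbers and variable names are ours, not the study's):

```python
from statistics import mean

# Reaction times in ms; all values are fabricated for illustration.
rt_probe_at_neutral = [512, 498, 530, 505, 521]    # probe replaces neutral picture
rt_probe_at_emotional = [480, 470, 495, 476, 488]  # probe replaces fear-related picture

def bias_index(rt_neutral_loc, rt_emotional_loc):
    """Dot-probe attentional bias: RT(neutral location) - RT(emotional location)."""
    return mean(rt_neutral_loc) - mean(rt_emotional_loc)

print(round(bias_index(rt_probe_at_neutral, rt_probe_at_emotional), 1))  # 31.4
```

Here the faster responses at the fear-related location yield a positive 31.4 ms bias, the pattern both children and adults showed for fear-related relative to pleasant pictures.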

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Attentional bias to fear-relevant animals was assessed in 69 participants not preselected on self-reported anxiety with the use of a dot probe task showing pictures of snakes, spiders, mushrooms, and flowers. Probes that replaced the fear-relevant stimuli (snakes and spiders) were found faster than probes that replaced the non-fear-relevant stimuli, indicating an attentional bias in the entire sample. The bias was not correlated with self-reported state or trait anxiety or with general fearfulness. Participants reporting higher levels of spider fear showed an enhanced bias to spiders, but the bias remained significant in low scorers. The bias to snake pictures was not related to snake fear and was significant in high and low scorers. These results indicate preferential processing of fear-relevant stimuli in an unselected sample.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Background. This study examined whether alcohol abuse patients are characterized either by enhanced schematic processing of alcohol-related cues or by an attentional bias towards the processing of alcohol cues. Method. Abstinent alcohol abusers (N = 25) and non-clinical control participants (N = 24) performed a dual-task paradigm in which they had to make an odd/even decision about a centrally presented number while performing a peripherally presented lexical decision task. Stimuli in the lexical decision task comprised alcohol words, neutral words and non-words. In addition, participants completed an incidental recall task for the words presented in the lexical decision task. Results. In the presence of alcohol-related words, the performance of patients on the odd/even decision task was poorer than in the presence of other stimuli. In addition, patients displayed slower lexical decision times for alcohol-related words. Both groups displayed better recall for alcohol words than for other stimuli. Conclusions. These results support neither model of drug craving. Rather, it is proposed that, in the presence of alcohol stimuli, alcohol abuse patients display a breakdown in the ability to focus attention.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

In three experiments we investigated the impact that exposure to counter-stereotypes has on emotional reactions to outgroups. In Experiment 1, thinking about gender counter-stereotypes attenuated stereotyped emotions toward females and males. In Experiment 2, an immigrant counter-stereotype attenuated stereotyped emotions toward this outgroup and reduced dehumanization tendencies. Experiment 3 replicated these results using an alternative measure of humanization. In both Experiments 2 and 3, sequential mediational analysis revealed that counter-stereotypes produced feelings of surprise which, in turn, elicited a cognitive process of expectancy violation that resulted in attenuated stereotyped emotions and an enhanced use of uniquely human characteristics to describe the outgroup. The findings extend research supporting the usefulness of counter-stereotype exposure for reducing prejudice and highlight its positive impact on intergroup emotions.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Secondary organic aerosol (SOA) accounts for a dominant fraction of the submicron atmospheric particle mass, but knowledge of the formation, composition and climate effects of SOA is incomplete and limits our understanding of overall aerosol effects in the atmosphere. Organic oligomers were discovered as dominant components in SOA over a decade ago in laboratory experiments and have since been proposed to play a dominant role in many aerosol processes. However, it remains unclear whether oligomers are relevant under ambient atmospheric conditions because they are often not clearly observed in field samples. Here we resolve this long-standing discrepancy by showing that elevated SOA mass is one of the key drivers of oligomer formation in the ambient atmosphere and laboratory experiments. We show for the first time that a specific organic compound class in aerosols, oligomers, is strongly correlated with cloud condensation nuclei (CCN) activities of SOA particles. These findings might have important implications for future climate scenarios where increased temperatures cause higher biogenic volatile organic compound (VOC) emissions, which in turn lead to higher SOA mass formation and significant changes in SOA composition. Such processes would need to be considered in climate models for a realistic representation of future aerosol-climate-biosphere feedbacks.

Relevância:

30.00% 30.00%

Publicador:

Resumo:

Recent evidence indicates that tRNA modifications and tRNA-modifying enzymes may play important roles in complex human diseases such as cancer, neurological disorders and mitochondria-linked diseases. We postulate that expression deregulation of tRNA-modifying enzymes affects the level of tRNA modifications and, consequently, their function and the translation efficiency of their corresponding tRNA codons. Owing to the degeneracy of the genetic code, most amino acids are encoded by two to six synonymous codons. This degeneracy and the biased usage of synonymous codons cause alterations that can span from protein folding to enhanced translation efficiency of a select group of genes. In this work, we focused on cancer and performed a meta-analysis comparing microarray gene expression profiles reported in previous studies, evaluating the codon usage in different types of cancer where tRNA-modifying enzymes were found to be deregulated. A total of 36 different tRNA-modifying enzymes were found deregulated in most of the cancer datasets analyzed. The codon usage analysis revealed a preference for codons ending in A/U in the up-regulated genes, while the down-regulated genes showed a preference for G/C-ending codons; a PCA biplot analysis showed the same tendency. We also analyzed the codon usage of the datasets in which the CTU2 tRNA-modifying enzyme was found deregulated, as this enzyme affects the wobble position (position 34) of specific tRNAs. Our data point to a distinct codon usage pattern between up- and down-regulated genes in cancer, which might be caused by the deregulation of specific tRNA-modifying enzymes. This codon usage bias may augment the transcription and translation efficiency of some genes that otherwise, in a normal situation, would be translated less efficiently.
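The third-base preference reported here can be tallied directly from a coding sequence by counting how many codons end in G/C versus A/U. A toy sketch (the sequence is invented; the study itself worked on genome-wide expression datasets):

```python
# Illustrative codon-usage tally: fraction of codons ending in G/C
# versus A/U for an RNA coding sequence. The sequence is made up.
def codon_third_base_bias(cds):
    """Return (GC-ending fraction, AU-ending fraction) over whole codons."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
    gc = sum(1 for c in codons if c[2] in "GC")
    return gc / len(codons), 1 - gc / len(codons)

cds = "AUGGCUAAAGAAGGC"  # Met-Ala-Lys-Glu-Gly: 5 codons
gc_frac, au_frac = codon_third_base_bias(cds)
print(gc_frac, au_frac)  # 0.4 0.6
```

In the study's terms, an up-regulated gene set would show a higher A/U-ending fraction and a down-regulated set a higher G/C-ending fraction.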

Relevância:

30.00% 30.00%

Publicador:

Resumo:

This dissertation aims to contribute to the ongoing discourse on the effect that enhanced financial literacy, achieved through financial education, has on financial behaviour. We posit that financial literacy is enhanced through financial education courses and that it significantly affects the financial behaviour of individuals. Moreover, we argue that improved financial literacy plays a significant role in mitigating behavioural biases and asset price bubbles. Chapter 1 analyzes the impact of a financial education course in enhancing financial literacy in a high-school context. Students at selected schools in Tirana, Albania, are given a financial education course lasting one academic year. To measure the impact of this course on financial literacy, the PISA (2012) financial literacy questionnaire is administered to the students before and after the course. Chapter 2 analyzes the impact of financial literacy in mitigating behavioural biases; we focus on the role that financial literacy, enhanced through the financial education course, plays in reducing the propensity toward the mental accounting bias. Chapter 3 investigates how financial literacy affects the probability of an asset price bubble occurring; we posit that financial literacy enhanced through financial education reduces this probability. We find that financial literacy enhanced through financial education has a significant impact on the financial behaviour of individuals.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Obesity is associated with development of the cardiorenal metabolic syndrome, a constellation of risk factors, such as insulin resistance, inflammatory response, dyslipidemia and high blood pressure, that predisposes affected individuals to well-characterized medical conditions such as diabetes and chronic cardiovascular and kidney disease. The study was designed to establish the relationship between metabolic and inflammatory disorder, renal sodium retention and enhanced blood pressure in a group of obese subjects compared with age-matched lean volunteers. The study was performed after a 14-h overnight fast, before and after an OGTT, in 13 lean (BMI 22.92 ± 2.03 kg/m(2)) and 27 obese (BMI 36.15 ± 3.84 kg/m(2)) volunteers. The HOMA-IR and QUICKI indices were calculated, and circulating concentrations of TNF-α, IL-6 and C-reactive protein were measured by immunoassay. The study shows that a hyperinsulinemic (HI: 10.85 ± 4.09 μg/ml) subgroup of obese subjects bearing a well-characterized metabolic syndrome has higher glycemic and blood pressure levels when compared to lean and normoinsulinemic (NI: 5.51 ± 1.18 μg/ml, P < 0.027) subjects. Here, the combination of hyperinsulinemia and higher HOMA-IR (HI: 2.19 ± 0.70 (n = 12) vs. LS: 0.83 ± 0.23 (n = 12) and NI: 0.98 ± 0.22 (n = 15), P < 0.0001), associated with a lower QUICKI in HI obese subjects when compared with LS and NI volunteers (P < 0.0001), suggests the occurrence of insulin resistance and a defect in insulin-stimulated peripheral action. Moreover, adiponectin measured in the basal period was significantly higher in NI subjects than in the HI group (P < 0.04). The report also showed a similar insulin-mediated reduction of post-proximal urinary sodium excretion after the oral glucose load in lean (LS: 9.41 ± 0.68% vs. 6.38 ± 0.92%, P = 0.086), normoinsulinemic (NI: 8.41 ± 0.72% vs. 5.66 ± 0.53%, P = 0.0025) and hyperinsulinemic obese subjects (HI: 8.82 ± 0.98% vs. 6.32 ± 0.67%, P = 0.0264), despite the elevated insulinemic levels in the hyperinsulinemic obese subjects. In conclusion, this study highlights the importance of adiponectin levels and of dysfunctional inflammatory modulation associated with hyperinsulinemia and peripheral insulin resistance, high blood pressure, and renal dysfunction in a particular subgroup of obese subjects.
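The two insulin-sensitivity indices used in the study follow standard formulas: HOMA-IR = fasting glucose (mmol/L) × fasting insulin (µU/mL) / 22.5, and QUICKI = 1 / (log10 insulin (µU/mL) + log10 glucose (mg/dL)). A sketch with illustrative values (not data from the study):

```python
import math

# Standard HOMA-IR and QUICKI insulin-sensitivity indices, shown with
# illustrative fasting values -- not measurements from the study.
def homa_ir(glucose_mmol_l, insulin_uU_ml):
    """HOMA-IR = glucose (mmol/L) * insulin (uU/mL) / 22.5."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl, insulin_uU_ml):
    """QUICKI = 1 / (log10(insulin, uU/mL) + log10(glucose, mg/dL))."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

# Example: fasting glucose 5.0 mmol/L (= 90 mg/dL), insulin 10 uU/mL.
print(round(homa_ir(5.0, 10.0), 2))   # 2.22
print(round(quicki(90.0, 10.0), 3))   # 0.338
```

Higher HOMA-IR with lower QUICKI, as reported for the hyperinsulinemic subgroup, is the expected signature of reduced insulin sensitivity.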