Abstract:
One novel treatment strategy for the diseased heart focuses on the use of pluripotent stem cell-derived cardiomyocytes (SC-CMs) to overcome the heart's innate deficiency in self-repair. However, targeted application of SC-CMs requires in-depth characterization of their true cardiogenic potential in terms of excitability and intercellular coupling, both at the cellular level and in multicellular preparations. In this study, we elucidated the electrical characteristics of single SC-CMs and the quality of intercellular coupling in cell pairs, and compared them with well-characterized murine native neonatal and immortalized HL-1 cardiomyocytes. First, we investigated the electrical properties and Ca2+ signaling mechanisms specific to cardiac contraction in single SC-CMs. Despite the heterogeneity of the new cardiac cell population, their electrophysiological activity and Ca2+ handling were similar to those of native cells. Second, we investigated the capability of paired SC-CMs to form an adequate subunit of a functional syncytium, and analyzed gap junctions and signal transmission by dye transfer in cell pairs. We discovered significantly diminished coupling in SC-CMs compared with native cells, which could not be enhanced by a coculture approach combining SC-CMs and primary CMs. Moreover, quantitative and structural analysis of gap junctions revealed significantly reduced connexin expression levels compared with native CMs. The strong dependence of intercellular coupling on gap junction density was further confirmed by computational simulations. These novel findings demonstrate that, despite their cardiogenic electrophysiological profile, SC-CMs present significant limitations in intercellular communication. Inadequate coupling may severely impair functional integration and signal transmission, which needs to be carefully considered for the prospective use of SC-CMs in cardiac repair.
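The simulated dependence of coupling on gap-junction density can be illustrated with a toy model. The sketch below is not the authors' simulation: it couples two passive membrane patches through a single gap-junction conductance (all parameter values are assumptions) and shows how reducing that conductance attenuates the voltage transferred to the unstimulated neighbour.

```python
# Minimal sketch (not the authors' model): two passive membrane patches
# coupled by a gap-junction conductance g_gj. Lowering g_gj attenuates
# the depolarization transferred from a stimulated cell to its neighbour,
# illustrating how reduced connexin density can impair coupling.
import numpy as np

C_m = 100e-12      # membrane capacitance [F] (assumed)
g_leak = 10e-9     # leak conductance [S] (assumed)
E_rest = -80e-3    # resting potential [V] (assumed)

def simulate(g_gj, t_end=0.2, dt=1e-5):
    n = int(t_end / dt)
    v = np.full(2, E_rest)
    trace = np.empty((n, 2))
    for i in range(n):
        t = i * dt
        i_stim = 300e-12 if 0.05 < t < 0.06 else 0.0   # pulse into cell 0
        i_gj = g_gj * (v[1] - v[0])                     # gap-junction current
        dv0 = (g_leak * (E_rest - v[0]) + i_gj + i_stim) / C_m
        dv1 = (g_leak * (E_rest - v[1]) - i_gj) / C_m   # current leaves cell 1
        v += dt * np.array([dv0, dv1])
        trace[i] = v
    return trace

for g in (50e-9, 5e-9, 0.5e-9):   # well coupled .. poorly coupled
    peak = simulate(g)[:, 1].max()
    print(f"g_gj = {g:.1e} S -> neighbour peak {1e3 * peak:.1f} mV")
```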
Abstract:
Hip dysplasia is characterized by insufficient femoral head coverage (FHC). Quantification of FHC is important because the underlying goal of surgery for hip dysplasia is to restore a normal acetabular morphology and thereby improve FHC. Unlike a purely 2D X-ray radiograph-based or a purely 3D CT-based measurement method, we previously presented a 2.5D method to quantify FHC from a single anteroposterior (AP) pelvic radiograph. In this study, we first quantified and compared 3D FHC between a normal control group and a patient group using a CT-based measurement method. Taking the CT-based 3D measurements of FHC as the gold standard, we then quantified the bias, precision and correlation between the 2.5D and the 3D measurements in both the control group and the patient group. Based on digitally reconstructed radiographs (DRRs), we investigated the influence of pelvic tilt on the 2.5D measurements of FHC. Intraclass correlation coefficients (ICCs) for absolute agreement were used to quantify the interobserver reliability and intraobserver reproducibility of the 2.5D measurement technique. The Pearson correlation coefficient, r, was used to determine the strength of the linear association between the 2.5D and the 3D measurements. Student's t-test was used to determine whether the differences between measurements were statistically significant. Our experimental results demonstrated that both the interobserver reliability and the intraobserver reproducibility of the 2.5D measurement technique were very good (ICCs > 0.8). Regression analysis indicated a very strong correlation between the 2.5D and the 3D measurements (r = 0.89, p < 0.001). Student's t-test showed no statistically significant differences between the 2.5D and the 3D measurements of FHC in the patient group (p > 0.05). The results of this study provide convincing evidence of the validity of the 2.5D measurement of FHC from a single AP pelvic radiograph and show that it can serve as a surrogate for 3D CT-based measurements. Thus, it may be possible to use this method to avoid a CT scan when estimating 3D FHC in the diagnosis and post-operative evaluation of patients with hip dysplasia.
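As a concrete illustration of the agreement statistics quoted above (ICC for absolute agreement, Pearson's r, Student's t-test), the following sketch computes all three for paired 2.5D and 3D measurements. It uses synthetic data, not the study's measurements, and a standard ICC(2,1) formula; it is a minimal sketch, not the authors' analysis code.

```python
# Hedged sketch (synthetic data): agreement statistics for paired 2.5D
# radiograph-based vs. 3D CT-based FHC measurements -- ICC(2,1) for
# absolute agreement, Pearson's r, and a paired Student's t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fhc_3d = rng.normal(40.0, 8.0, size=30)             # "gold standard" 3D FHC [%]
fhc_25d = fhc_3d + rng.normal(0.0, 3.0, size=30)    # 2.5D estimate with noise

def icc_2_1(x):
    """ICC(2,1): two-way random effects, absolute agreement, single measure."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # subjects
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # methods
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))              # residual
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

r, p_r = stats.pearsonr(fhc_25d, fhc_3d)
t, p_t = stats.ttest_rel(fhc_25d, fhc_3d)           # H0: no systematic bias
print(f"ICC(2,1) = {icc_2_1(np.column_stack([fhc_25d, fhc_3d])):.2f}")
print(f"Pearson r = {r:.2f} (p = {p_r:.2g}); paired t-test p = {p_t:.2f}")
```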
Abstract:
The hypothesis that large fluctuations in weight during young adulthood are associated with the degree of coronary artery disease was investigated by comparing the weight-change patterns of patients with angiographically defined diseased or normal arteries. Participants (n = 823) were selected from men and women aged 40-74 years who had undergone angiography at North Carolina Baptist Hospital during 1987-88. Weight history from age 20 to 40 was assessed with a mailed questionnaire. The prevalence of "yo-yo dieting", adjusted for age, race, and coronary disease risk factors, in patients who had 0, 1, 2, 3, or more than 3 diseased arteries was 8.6, 8.8, 3.7, 5.6 and 7.1 per cent, respectively (p = 0.313). These results do not support the research hypothesis. However, since the results may have been confounded by neuroticism, they should not be interpreted as strong evidence against this hypothesis.
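For readers unfamiliar with covariate-adjusted prevalence, the sketch below shows one standard way to produce such estimates: a logistic regression of weight-cycling status on disease extent and confounders, followed by marginal standardization. The data and variable names are synthetic assumptions, not the study's, and this is not necessarily the adjustment method the authors used.

```python
# Hedged sketch (synthetic data): covariate-adjusted prevalence via
# logistic regression plus marginal standardization.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 823
df = pd.DataFrame({
    "yoyo": rng.integers(0, 2, n),        # weight-cycling indicator (0/1)
    "vessels": rng.integers(0, 5, n),     # number of diseased arteries, 0..4
    "age": rng.uniform(40, 74, n),
    "race": rng.integers(0, 2, n),
})
model = smf.logit("yoyo ~ C(vessels) + age + C(race)", data=df).fit(disp=0)

# Marginal standardization: predict everyone as if in each disease group,
# then average the predicted probabilities to get adjusted prevalences.
for v in sorted(df["vessels"].unique()):
    adjusted = model.predict(df.assign(vessels=v)).mean()
    print(f"{v} diseased arteries: adjusted prevalence {100 * adjusted:.1f}%")
```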
Abstract:
Pliocene vegetation dynamics and climate variability in West Africa were investigated through pollen and XRF-scanning records obtained from sediment cores of ODP Site 659 (18°N, 21°W). Comparison between total pollen accumulation rates and Ti/Ca ratios, which are strongly correlated with dust input at the site, showed elevated aeolian transport of pollen during dusty periods. Comparison of the pollen records of ODP Site 659 and the nearby Site 658 resulted in a robust reconstruction of West African vegetation change since the Late Pliocene. Between 3.6 and 3.0 Ma the savannah in West Africa differed in composition from its modern counterpart and was richer in Asteraceae, in particular of the tribe Cichorieae. Between 3.24 and 3.20 Ma a stable wet period is inferred from the Fe/K ratios, which could represent a narrower and better specified mid-Pliocene (mid-Piacenzian) warm time slice. The northward extension of woodland and savannah, albeit fluctuating, was generally greater in the Pliocene. NE trade-wind vigour increased intermittently around 2.7 and 2.6 Ma, and more or less permanently after 2.5 Ma, as inferred from increased pollen concentrations of trade-wind indicators (Ephedra, Artemisia, Pinus). Our findings link the NE trade-wind development with the intensification of the Northern Hemisphere glaciations (iNHG). Prior to the iNHG, little or no systematic relation could be found between North Atlantic sea surface temperatures and aridity and dust in West Africa.
Abstract:
Phytoplankton biomass distribution (chlorophyll a, chl a) and species composition (cell numbers) were investigated during three expeditions to the Kara Sea with RV "Akademik Boris Petrov" in 1997, 1999, and 2000. The distribution of biomass in the estuaries of the Ob and Yenisei showed a similar range in 1997 (0.2 to 3.2 µg/l) and 2000 (0.4 to 3.5 µg/l); in these two years, higher chl a concentrations were found in the Yenisei than in the Ob. In 1999, phytoplankton biomass in the Ob and the Ob Estuary was much higher than in 1997 and 2000, with maximum values above 10.0 µg chl a/l. In 1999, biomass in the Yenisei was lower (1.5 to ~5 µg/l) than in the Ob but slightly higher than in 1997 and 2000. During the expedition in 2000, the research area extended farther to the north, where the lowest phytoplankton biomass of all three years was found. Typical summer values for integrated chl a biomass (surface to bottom) ranged between 6 and 20 mg/m². Strong differences existed in species composition between the two rivers, the estuaries, and the open Kara Sea. In general, three or four different populations could be distinguished in surface waters: (1) freshwater diatoms together with blue-green algae in both rivers; (2) centric and small pennate diatoms, mainly brackish species, in the estuaries; (3) north of 74°N, brackish/marine species dominated, e.g. in 1999 Thalassiosira cf. punctigera and Chaetoceros spp. prevailed in the phytoplankton bloom in the Ob; (4) at the northernmost, almost marine stations, a region with a more heterogeneous composition of unicellular plankton was encountered. We assume that we observed different seasonal signals of phytoplankton development in 1997/2000 and in 1999, respectively. However, the year-to-year fluctuation of the freshwater runoff of both rivers seems to have the strongest influence on the timing and duration of phytoplankton blooms, species composition and biomass standing stocks during summer.
Abstract:
This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been studied in academic research for the past 17 years. Related research has grown remarkably in recent decades, while the design of solid and effective protection against such attacks still remains an open research topic, in which more reliable initiatives are needed for the protection of personal, corporate and national data. The first documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were used in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. Owing to the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Given the limited computing power available before the computer era, simple encryption techniques were more than sufficient to conceal information. However, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated new research in the area of cryptography, with the aim of protecting information systems against sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance based on highly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies. Today, the information world is thoroughly interwoven with cryptography, which protects every field through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making it possible to implement them on various platforms (microprocessors, ASICs, FPGAs, etc.). Industry security needs and requirements are the main driving metrics in electronic design, with the goal of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor, transistors) built on a silicon substrate during fabrication. The circuit logic is realized through the countless switching events of these cells. This mechanism inevitably causes characteristic physical emanations that can be measured and correlated with the internal behavior of the circuit.
SCA can be used to reveal confidential data (for example, cryptographic keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards. By correlating an estimated leakage quantity with the actually measured leakage, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and electromagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically resistant to SCA, so measures that camouflage the leakage escaping through "side channels" must be integrated during circuit implementation. Countermeasures against SCA evolve together with the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures aim to confuse the attacker by adding noise or misaligning the internal activity. Intrinsic countermeasures, by comparison, are built into the algorithm itself, modifying the implementation so as to minimize the measurable leakage, or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible way. Unlike linear operations, the non-linear operations that are widespread in modern cryptography are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail encoding, devised specifically to flatten or remove the data-dependent leakage in power or EM signatures. In this doctoral thesis, in addition to describing attack methodologies, great effort has been devoted to the structure of the proposed logic prototype, in order to carry out security-focused research on architectural countermeasures at the logic level. One characteristic of SCA lies in the format of the leakage sources. A typical side-channel attack refers to power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. Therefore, robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. Vendor-provided EDA tools manipulate the logic from a higher level rather than from the gate level, where side-channel leakage actually manifests. Classical implementations therefore barely satisfy these needs and inevitably cripple the prototype. For all these reasons, a customized and flexible design scheme must be considered.
This thesis presents the design and implementation of an innovative logic style to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy over a gate-level dual-rail circuit to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource cost of the implementation; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic and facilitates practical implementation on Xilinx FPGA families. The methodology and tools are flexible enough to be extended to a wide range of applications where much more rigid and sophisticated gate-level or routing constraints are desired. Great effort is devoted in this thesis to easing the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All proposals are effectively backed by experimental attacks in order to validate the security advantages of the system. This research work intends to close the gap between implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of implementation tools for FPGAs, developed to work together with their generic design flow, in order to create dual-rail logic in an innovative way. A new approach in the field of encryption security is proposed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL (PA-DPL) logic: the use of netlist conversion to reserve free LUTs to execute the Precharge and Ex signals in a DPL style. Row-crossed interleaved placement with identical routed pairs in dual-rail networks, which helps to increase resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic: (a) to detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logic styles where strict control of the interconnections is required in Xilinx-based applications. A customized CPA testbed for EM and power analysis, including the construction of the platform, the measurement method and the attack analysis. Timing analysis for quantifying security levels. Security partitioning in the partial conversion of a complex encryption system to reduce the cost of protection.
A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows. Chapter 1 covers the fundamentals of side-channel attacks, from basic concepts and analysis models to platform implementation and attack execution. Chapter 2 presents SCA-resistance strategies against differential power and EM attacks. In addition, this chapter proposes a compact and secure dual-rail logic as a major contribution, and presents the logic transformation based on gate-level design. Chapter 3 addresses the challenges of implementing generic dual-rail logic and describes a customized design flow to solve the application problems, together with a proposed automatic development tool to mitigate the design barriers and facilitate the process. Chapter 4 describes in detail the development and implementation of the proposed tools. The security verification and validation of the proposed logic, as well as a sophisticated experiment verifying the security of the routing, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and perspectives on future lines of work are included in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below. Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essentials of side-channel methods, which reveal the dissipation leakage correlated with internal behaviors, and the method for recovering the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis is broadened and deepened from Chapter 2 onwards, which addresses several key aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic (DPL) is explained by describing the compensated gate-level elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulation above the LUT-level netlist are clarified. Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs.
The security level of the SCA-resistant solutions found in the state of the art has been degraded by the implementation barriers of conventional EDA tools. In the FPGA architecture scenario studied, the problems of dual-rail formats, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails and cloned routing networks, to application methods such as the customized and automated EDA tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from ncd (an unreadable binary version of the xdl) to the Xilinx XDL format. The mechanism and rationale of the proposed tools are carefully described, covering routing detection and the repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter specifies in particular the technical foundations supporting the implementations on Xilinx devices and their flexibility to be used in other applications. Chapter 5 focuses on the case studies applied to validate the security levels of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed: (a) the impact of the placement process for the logic using the proposed toolkit is discussed, and different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) security validations are carried out with correlation and timing analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the impact of routing on security metrics; (d) preliminary results using the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete encryption design is introduced. Chapter 6 contains the general summary of the work presented in this doctoral thesis. Finally, a brief perspective on future work is given, which may extend the potential use of the contributions of this thesis to a scope beyond the domain of cryptography on FPGAs.
ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to the Side-Channel Attack (SCA), which has been a topic of academic research for the past 17 years. The related research has grown remarkably in the past decades, while the design of solid and efficient protection still remains an open research topic in which more reliable initiatives are required for personal information privacy and for enterprise and national data protection. The earliest documented use of secret code can be traced back to around 1700 B.C., when hieroglyphs were inscribed in ancient Egypt. Information security has always been of serious concern in the transmission of diplomatic or military intelligence. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signaling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure cable or wireless channels. Given the limited computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims at guarding information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has become a crypto world, protecting every field with pervasive crypto solutions. These approaches are strong because of the optimized merging of modern mathematical theories and effective hardware practice, making it possible to implement crypto theory on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs from industry are in fact the major driving metrics in electronic design, aiming to build high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since named the side-channel attack after its analysis source. Serious doubts about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal Oxide Semiconductor, transistors), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably produces characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs, or smart cards. By comparing the predicted leakage quantity with the measured leakage, secrets can be reconstructed at far less expense of time and computation.
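As a concrete example of the comparison between predicted and measured leakage, the sketch below implements a basic correlation power analysis (CPA) against one AES key byte, using a Hamming-weight model of the first-round S-box output and simulated noisy traces. This is the generic textbook attack, not the thesis' testbed; the noise model and trace count are assumptions.

```python
# Illustrative CPA sketch: recover one AES key byte by correlating a
# Hamming-weight leakage model of the first-round S-box output against
# (here, simulated) power measurements.
import numpy as np

def aes_sbox():
    """Build the AES S-box from GF(2^8) inversion plus the affine map."""
    rotl = lambda x, s: ((x << s) | (x >> (8 - s))) & 0xFF
    sbox = [0] * 256
    sbox[0] = 0x63
    p = q = 1
    while True:
        p = (p ^ (p << 1) ^ (0x1B if p & 0x80 else 0)) & 0xFF  # p *= 3
        q = (q ^ (q << 1)) & 0xFF                              # q /= 3, so
        q = (q ^ (q << 2)) & 0xFF                              # q == p^-1
        q = (q ^ (q << 4)) & 0xFF
        if q & 0x80:
            q ^= 0x09
        sbox[p] = q ^ rotl(q, 1) ^ rotl(q, 2) ^ rotl(q, 3) ^ rotl(q, 4) ^ 0x63
        if p == 1:
            break
    return sbox

HW = np.array([bin(x).count("1") for x in range(256)])
SBOX = np.array(aes_sbox())

rng = np.random.default_rng(1)
true_key, n_traces = 0x2B, 2000
plaintexts = rng.integers(0, 256, n_traces)
# Simulated traces: leakage = HW(Sbox(pt ^ key)) + Gaussian noise (assumed)
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0.0, 1.0, n_traces)

# Correlate the measured leakage against the model for every key guess
guesses = np.arange(256)
model = HW[SBOX[plaintexts[None, :] ^ guesses[:, None]]]       # (256, N)
model_c = model - model.mean(axis=1, keepdims=True)
trace_c = traces - traces.mean()
corr = (model_c @ trace_c) / (
    np.sqrt((model_c ** 2).sum(axis=1)) * np.sqrt((trace_c ** 2).sum()))
print(f"best guess: 0x{int(np.argmax(np.abs(corr))):02X}")     # expect 0x2B
```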
To be precise, SCA encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance, so considerable attention must be paid during implementation to assembling countermeasures that camouflage the leakage escaping via "side channels". Countermeasures against SCA evolve along with the development of attack techniques. The physical character of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic categories. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activities. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage, or to render it no longer exploitable. Hiding and masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that widely exist in modern cryptography are difficult to mask. The hiding method, which has proven an effective counter-solution, mainly refers to dual-rail logic, devised specifically to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, effort has also been dedicated to the logic prototype, in order to mount extensive security investigations into countermeasures at the logic level. One characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. Vendor-provided EDA tools manipulate the logic from the higher behavioral level rather than the lower gate level, where side-channel leakage is generated. Classical implementations therefore barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which mainly addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style to dynamically balance the side-channel leakage of the lower circuit layers; II. The logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of assistant custom tools, incorporated into the generic FPGA design flow, to carry out circuit manipulation automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on selected, widely used crypto algorithms, allowing thorough and fair evaluation with respect to
prior solutions. All the proposals are effectively verified by security experiments. The presented research work attempts to solve these implementation obstacles. The essence formalized along this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow to create innovative dual-rail logic. An approach in the crypto security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping, with fine granularity over otherwise intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to execute the Precharge and Ex signals in a dual-rail logic style. A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in Xilinx scenarios. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying security grades. A methodology for security partitioning of complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system to mitigate the electrical impact of process variations through dynamic dual-rail compensation. The thesis chapters are organized as follows: Chapter 1 discusses side-channel attack fundamentals, covering theoretical basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit for mitigating the design barriers and facilitating the processes. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides an introduction to the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods.
These reveal the dissipation leakage correlated with internal behaviors, and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used throughout the thesis are given. The content of this thesis expands and deepens from Chapter 2 onwards, which covers several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown. Meanwhile, the technical definitions of the manipulations performed above the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, problems are discussed concerning the dual-rail format, parasitic impact, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and implemented on a dual-core logic, designed to alternately adjust the local temperature to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit (an unreadable binary version of the xdl) converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials supporting the implementations on Xilinx devices and the flexibility to be extended to other applications. Chapter 5 focuses on the implementation of the case studies for validating the security grades of the proposed logic style using the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts using the proposed toolkit are discussed, and different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metric; (d) preliminary results using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis.
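As an aside, the dynamic-compensation principle of dual-rail precharge logic that Chapter 2 explains can be illustrated with a toy model. The sketch below is not the thesis' PA-DPL protocol: it merely encodes each bit on a complementary rail pair with a precharge phase and counts rail transitions, showing that the switching activity, and hence the first-order Hamming-distance leakage, is data-independent, whereas a plain single-rail encoding leaks through its toggle count.

```python
# Toy dual-rail precharge model (not PA-DPL): every bit b travels on a
# complementary pair (true_rail, false_rail). Each cycle alternates a
# precharge phase (0, 0) with an evaluation phase (b, 1-b), so exactly
# one rail toggles at every phase boundary regardless of the data.

def dual_rail_cycles(bits):
    """Yield rail-pair states: a precharge phase, then an evaluation phase."""
    for b in bits:
        yield (0, 0)          # precharge: both rails low
        yield (b, 1 - b)      # evaluate: exactly one rail fires

def transitions(states):
    """Count rail toggles between consecutive states (both rails)."""
    return sum((a0 != b0) + (a1 != b1)
               for (a0, a1), (b0, b1) in zip(states, states[1:]))

def single_rail_transitions(bits):
    """Contrast: a single rail's toggle count varies with the data."""
    return sum(a != b for a, b in zip(bits, bits[1:]))

for word in ([0, 0, 0, 0], [1, 0, 1, 1], [1, 1, 1, 1]):
    dual = transitions(list(dual_rail_cycles(word)))
    print(word, "-> dual-rail:", dual,           # identical for all words
          "transitions; single-rail:", single_rail_transitions(word))
```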
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
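The routing-repair idea described for the toolkit, detecting dual-rail net pairs whose routing is not strictly identical, can be sketched as follows. The pip strings, the rail-tag naming rule and the normalization are invented for illustration; the actual toolkit operates on Xilinx XDL netlists through a third-party API.

```python
# Hypothetical sketch of a dual-rail routing symmetry check (the pip
# strings and the "_T"/"_F" rail-tag convention are invented; real XDL
# routing entries look different). Two rails are considered symmetric
# when their routing resources match pip-for-pip after normalization.

def normalize(pips, rail_tag):
    """Strip the rail-specific tag so mirrored routes compare equal."""
    return sorted(p.replace(rail_tag, "<RAIL>") for p in pips)

def asymmetric_pairs(net_pairs):
    """Return the names of rail pairs whose normalized routing differs."""
    return [name
            for name, (true_pips, false_pips) in net_pairs.items()
            if normalize(true_pips, "_T") != normalize(false_pips, "_F")]

nets = {
    "sbox_out0": (["INT_X1Y1/IMUX_T->LUT_A"], ["INT_X1Y1/IMUX_F->LUT_A"]),
    "sbox_out1": (["INT_X2Y1/IMUX_T->LUT_B"], ["INT_X2Y1/LONG_F->LUT_B"]),
}
print(asymmetric_pairs(nets))   # ['sbox_out1'] uses a different wire type
```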
Abstract:
The thesis "1950 En torno al Museo Louisiana 1970" analyzes several works related to domestic space, built between 1950 and 1970 in Denmark, a period of splendor for modern architecture. After the isolation and restrictions of the war that devastated Europe, young Danish architects were eager to experiment with new ideas of international origin and, favored by various circumstances, found their best testing ground in the domestic space. The best domestic architecture in Denmark in that period should be understood as a system composed of different authors who share many more similarities than differences and complement one another. Understanding it requires the study of several figures and buildings that complete this system, whose research is scarcely developed. The thesis proposes a journey to meet some of its protagonists, who showed through their work that tradition and avant-garde need not be at odds. The objective is to reveal the keys to Danish Modernity; to recognize, discover and recover the legacy of some of its protagonists in the domestic sphere, whose lesson is considered fully relevant today. It is an architecture that absorbs foreign contributions with moderation and a critical eye, and whose intimate relationship with its own architectural tradition and craftsmanship is one of its special traits. From the contrasted study of several projects and versions, common values among their authors are obtained, and their affinities or differences on the same issues are discovered; these allow their actions to be understood in terms of references and influences, and the variables that shape their architectural spaces to be defined. The connecting thread between the chosen buildings is their particular relationship with nature and with the place into which they are integrated. The façade, the place where the relationship between interior and landscape is negotiated, is an element understood differently in each of them, a relationship that extends, in all of them, beyond its perimeter. The research is structured in six chapters, preceded by an introduction. The first chapter studies and identifies the antecedents and the most relevant figures and buildings of the Danish tradition, in order to understand and clarify some of the keys to its Modernity in the field of architecture, which arises with the clear intention of finding its own identity and expression. This flourishing Modernity is characterized by the assimilation of other foreign cultures from a position of moderation and with a critical point of view, and finds its roots anchored in its own architectural tradition and craftsmanship, forging a common ideal of enormous personality that is today valued as a genuine contribution from a culture then considered peripheral. The debate and the path followed by the generations preceding the analyzed works are shown. The sensibilities for the vernacular and the classical, apparently contradictory, dominated the debate with equal veracity and respectability. The so-called third generation, in Sigfried Giedion's term, resumed the practice between the classical and the vernacular, supported by the spirit of craft work and tradition, with the aim of knowing the "truth" and the "original essence" of the architectural act.
The second chapter analyzes the Varming house of 1953, located in a residential area of Gentofte, by Eva and Nils Koppel, which reinterprets Asplund's vision of an interior landscape as a continuation of the exterior, breaking the conventional solid brick box of the 1930s. It is the most powerful example of the union of tradition and innovation in their residential work. Its sober forms, between Danish Functionalism and Modernity, are singled out by their abstraction and clean volumes that accentuate the effect of their prismatic and brutalist geometry. The displacement of the bodies that compose it, one over another, generates a rhythm that recurs at other scales; this, together with the variations of its forms and the choice of its materials, brick and wood, gives the house an organic character. The building is anchored to the earth, resolved on different levels after study of the place and its topography. The result is a built version of the landscape, in which the building gives form to the place and exalts the experience of the natural setting. The unity of primitive structures seems to be present. It constitutes an example of Asplund's "promenade idea": the project offers different routes, allowing one's own experience of the house and offering the vital possibility of deciding. The third chapter deals with the Niels Bohr guest pavilion of 1957, located in a wooded area at Tisvilde Hegn, the first building by the Danish architect Vilhelm Wohlert. Rooted in the Danish tradition, it represents a renewal based on the absorption of foreign influences: American architecture and the Japanese tradition. The wooden box, resting on horizontal ground, has the sensitive character of a living organism, ever changing with the variations of daylight and temperature. When it opens, it creates a prolongation of the interior space that extends into the surrounding nature and expands toward the exterior space, allowing its mobilization. An architecture of flows is established. There is an interest in matter, its texture, and the emotional effect that emanates from it. The proportions and dimensions of the building are regulated by a module adjusted to the measure of man, highlighting the great unity of the building. The key to its aesthetic effect lies in its harmony and balance, which convey serenity and beauty. The encounter with nature is the most basic lesson of the project, where a world of relationships is kind to the human being. The fourth chapter analyzes the Louisiana Museum project of 1958, in Humlebæk, the first project of the Danish architects Jørgen Bo and Vilhelm Wohlert. Wohlert's experience in California, where Bo visited him, was decisive for the development of Louisiana, where Danish identity fuses with the assimilation of other cultures, chiefly the architecture of Frank Lloyd Wright, that of the Bay Area, and the Japanese tradition. The idea of the project is that of an integral work of art: architecture, art and landscape coexisting in one place. Different resources heighten its residential character, such as the use of materials typical of a domestic environment, composition at human scale, and the way lighting is used. Flat roofs that display their artificiality seem to float above glazed galleries, accentuating the force of the horizontal plane and establishing a zigzag route of marked, measured rhythm.
A rhythm that has to do with the embodiment of nature's pulse, accompanied by plays of light and other material vibrations at different scales, an image that finds a close analogy in Japanese culture. Everything is coordinated with the structural grid, which entails disciplined construction and proportion. Louisiana follows the growth principle of nature, with which its connection is profound. There is a dynamism expressed in the unfolding of the building, which evokes some projects of the Japanese tradition. The white walls have their own identity as forms in themselves; they advance, extending beyond the line of the glass, moving freely following the structural order, accompanying the flowing space, in direct contact with a nature that is in a continual state of flux. A whole world of relationships arises, in which a dialogue exists between landscape, art and architecture. The fifth chapter analyzes the second house of the Danish architect Halldor Gunnløgsson, of 1959. It evokes Japanese and American architecture, but is above all the result of a strong will and personal artistic discipline. The flat roof, suspended over a large paved platform that continues the section of the terrain and constructs the place, has a great presence and casts a deep shadow beneath it. Inside, a single space, which can occasionally be subdivided, runs around a central core. The free space flows, extending through the transparency of its glazing toward two contrasting spaces: an intimate landscaped courtyard that inspires calm and serenity; and the wild nature of the sea, which reflects the color of the sky, both in a constant state of change. The project is elaborated in a rigorously formal way, while at the same time a perfect balance exists between the abstraction of its structure and its program. The timber structure, whose order extends beyond the limits of its perimeter and is formed by complete portal frames as free elements, remains exposed, in close relation to Mies's concept of modernity, equivalent to classical architecture. The concern for aesthetic effect is extreme; nothing is improvised. In addition, in the combination of materials and the play of textures there is a tactile quality, a certain eroticism, that floats around it. The constructive precision and its refinement approach Mies. The experience of the architectural space is a total experience. The influence of Japanese architecture is more conceptual than formal, revealed in a respect for nature, the search for refinement through moderation, the elimination of unnecessary objects that distract from the experience of the place, and the concern for light and shadow, where a certain parallel is established with the dark world of the Nordic winter. There is an understanding that space, rather than being an immaterial object defined by material surfaces, is understood as dynamic interactions. The sixth chapter proposes a journey through some of the most interesting single-family houses built in the period, which form part of the system investigated. From their comparative study, oriented around several themes, various conclusions specific to the studied system are obtained.
Mastery of substance and form is a distinctive characteristic in Denmark; the study demonstrates an approach to the culture of the East, both conceptual and formal, and interests shared with certain American architecture. Its lesson sensitizes us to a strengthened sense of proportion, scale, materiality, texture and weight, and of the density of space; the tactile and the visual are valued; there is a sensitivity toward nature, toward the human, toward the landscape, and toward the integrity of the work. ABSTRACT The thesis "1950 En torno al Museo Louisiana 1970" analyses several works related to domestic space, which were carried out between 1950 and 1970 in Denmark, a golden age of modern architecture. After the isolation and limitations brought about by the war that blighted Europe, young Danish architects were keen to experiment with ideas of an international origin, encouraged by different circumstances. They found their best testing ground in the domestic space. The best architecture of that period in Denmark should be understood as a system composed of different authors, who have many more similarities than differences in common, thus complementing each other. For it to be understood, the study of a range of figures and buildings is necessary so that this system, the research of which is still in its infancy, can be completed. The thesis proposes a journey of discovery through the names of some of those protagonists, who showed through their work that tradition and avant-garde could go hand in hand. The objective is to unveil the keys to Danish Modernity; to recognise, discover and revive the legacy of some of its protagonists in the domestic field, whose lessons are seen as entirely of the present. It is an architecture that takes on foreign contributions with moderation and a critical eye, and whose intimate relationship with architectural tradition and its own craft is one of its hallmarks. From the study, set against several projects and versions, one can derive common values among their authors; in the same way, their affinities and differences in respect of the same issues emerge. This allows an understanding of their actions in line with references and influences, and enables the defining of the variables that shape their architectural spaces. The common thread between the buildings selected is their particular relationship with nature and the place with which they integrate. The façade, the place where the relationship between the interior and the landscape is negotiated, is an element understood in a distinct way in each one of them; it is a relationship that extends, in all of them, far beyond the physical perimeter. The investigation has been structured into six chapters, preceded by an introduction. The first chapter outlines and analyses the antecedents and the figures and buildings most relevant to the Danish tradition, to facilitate the understanding and elucidation of some of the keys to its modernity in the field of architecture, which came about with the clear intention of discovering its own identity and expression. This thriving modernity is characterized by its moderate, critical assimilation of foreign cultures, and finds its roots anchored in architectural tradition and its own handcraft.
It is forged in the emergence of a common ideal of enormous personality, which today has come to be valued as an authentic contribution from a culture that was formerly seen as peripheral. What will be shown is the path taken by the generations that preceded these works and the debate that surrounded them. The sensibilities for both the vernacular and the classic, which at first glance may seem contradictory, dominated the debate with the same veracity and respectability. The so-called third generation of Sigfried Giedion revived the practice between the classic and the vernacular, supported by the spirit of handcraft work and of tradition, with the objective of discovering the "truth" and the "original essence" of the architectural act. The second chapter analyzes the Varming house, built by Eva and Nils Koppel in 1953, which is situated in a residential area of Gentofte. It reinterprets Asplund's vision of an interior landscape extending to the exterior, breaking with the conventional sturdy brick shell of the 1930s. It is the most powerful example of the union of tradition and innovation in their residential work. Its sober forms, caught between Danish Functionalism and modernity, are characterized by their abstraction and clean volumes, which accentuate their prismatic and brutalist geometry. The displacement of the parts of which it is composed, one over the other, generates a rhythm that recurs at varying scales; this, closely linked to the variation of its forms and the selection of materials, brick and wood, confers an organic character on the house. The building is anchored to the earth, resolved at different levels through the study of place and topography. The result is a built version of the landscape, in which the building gives form to the place and celebrates the experience of the natural setting. The unity of primitive structures appears to be present. It constitutes an example of "Asplund's promenade idea": different routes of exploration are available to the visitor, allowing for a personal experience of the house and, in turn, the vital chance to decide. The third chapter deals with Niels Bohr's guest pavilion. Built in 1957, it is situated in a wooded area of Tisvilde Hegn and was the architect Vilhelm Wohlert's first building. Rooted in the Danish tradition, it represents a renewal based on the absorption of foreign influences: American architecture and the Japanese tradition. The wooden box, perched on horizontal terrain, possesses the sensitive character of a living organism, ever-changing in accordance with the variations in daylight and temperature. When opened up, it creates an elongation of the interior space which extends into the surrounding nature and expands towards the exterior space, allowing for its mobilisation. It establishes an architecture of flux. There is an interest in the material, its texture and the emotional effect it inspires. The building's proportions and dimensions are regulated by a module adjusted to the measure of man, bringing out the great unity of the building. The key to its aesthetic effect is its harmony and equilibrium, which convey serenity and beauty. The meeting with nature is the most fundamental lesson of the project, in which a world of relationships is kind to the human being. The fourth chapter analyzes the Louisiana Museum project of 1958, in Humlebæk. It was the first project of the Danish architects Jørgen Bo and Vilhelm Wohlert.
Wohlert’s experience in California, where Bo visited him, would be essential to the development of Louisiana, where Danish identity is fused, through assimilation, with other cultures: the architecture of Frank Lloyd Wright, that of the Bay Area and, principally, the Japanese tradition. The idea of the project was an integrated work of art: architecture, art and landscape coexisting in the same space. A range of resources realizes the residential character, such as the use of materials taken from a domestic environment, the attainment of human scale and the manner in which light is used. Flat roof planes reveal their artificiality and appear to float over glazed galleries; they accentuate the strength of the horizontal plane and establish a zigzag route of marked and measured rhythm. It is a rhythm that has to do with the incarnation of nature’s pulse, accompanied by plays of light and by material vibrations at different scales, imagery that uncovers a parallel with Japanese culture. Everything is coordinated along a structural frame, which imposes a disciplined construction and proportion. Louisiana cherishes nature’s principle of growth, to which its connection is profound. Here is a dynamism expressed through the disposition of the building, which in places evokes the Japanese tradition. The white walls possess their own identity as forms in their own right: they advance, extending beyond the line of glass, moving freely along the structural line, accompanying a space that flows in direct contact with a nature that is itself in constant flux. A world of relationships is created, in which landscape, art and architecture are in dialogue. The fifth chapter is dedicated to the Danish architect Halldor Gunnløgsson’s second house, built in 1959. It evokes both Japanese and American architecture but is principally the result of a strong will and personal artistic discipline. The flat roof, suspended above a large paved platform that continues the constructed terrain of the place, has great presence and casts a heavy shadow beneath. The interior is a single space, divisible at will, that flows around a central core. The space extends freely through the transparency of its windows, which give onto two contrasting settings: an intimate garden patio, inspiring calm and tranquillity, and the wild nature of the sea, which reflects the colour of the sky, both in a constant state of change. The project is realized in a rigorously formal manner, with a perfect balance between the abstraction of its structure and its design. The wooden structure, whose order extends beyond the limits of the perimeter, is formed of complete porticos of free-standing elements. It remains exposed, maintaining a close relationship with Mies’ concept of modernity, analogous to classical architecture. The preoccupation with the aesthetic effect is paramount and nothing is improvised; beyond this, in the combination of materials and the play of textures, there is a tactile quality, a certain eroticism, which lingers all about. The constructive precision and refinement are close to Mies. The experience of the architectural space is universal. The influence of Japanese architecture, more conceptual than formal, is revealed in a respect for nature.
It can be seen in the search for refinement through moderation, in the elimination of superfluous objects that distract from the experience of place, and in the preoccupation with light and shade, where a certain parallel with the dark world of the Nordic winter is established. There is an understanding that space, rather than being an immaterial object defined by material surfaces, extends instead as dynamic interactions. The sixth chapter proposes a journey through some of the lesser-known residences of greatest interest built in the period, which form part of the system under investigation. Through a comparative study organized around various themes, diverse conclusions are drawn about that system. The mastery of substance and form is a distinctive characteristic in Denmark, demonstrating an approach to the culture of the Orient, both conceptual and formal, and interests shared with certain American architecture. Its teachings sensitize us to a strengthened sense of proportion, scale, materiality, texture and weight, and to the density of space; both the tactile and the visual are valued, and there is a sensitivity to nature, to the human, to the landscape and to the integrity of the work.
Resumo:
Of the rules used by the splicing machinery to precisely determine intron–exon boundaries, only a fraction is known. Recent evidence suggests that specific short sequences within exons help to define these boundaries; such sequences are known as exonic splicing enhancers (ESEs). A possible bioinformatic approach to studying ESE sequences is to compare genes that harbor introns with genes that do not. For this purpose, two non-redundant samples of 719 intron-containing and 63 intron-lacking human genes were created. We performed a statistical analysis on these datasets of intron-containing and intron-lacking human coding sequences and found a statistically significant difference (P = 0.01) between the samples in terms of their 5- and 6-mer oligonucleotide distributions. The difference is not created by a few strong signals present in the majority of exons, but rather by the accumulation of multiple weak signals through small variations in codon frequencies, codon biases and context-dependent codon biases between the samples. A list of putative novel human splicing regulation sequences has been derived from our analysis.
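As an illustration of the kind of k-mer comparison described above, here is a minimal sketch that pools 5-mer frequencies for two sets of coding sequences and computes a simple distance between the two spectra. The placeholder sequences, the L1 distance and the omission of a permutation-based significance test are simplifying assumptions for brevity, not the authors' actual pipeline.

```python
from collections import Counter
from itertools import product

def kmer_freqs(seqs, k):
    """Pooled k-mer frequency distribution over a set of sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    total = sum(counts.values())
    # Report frequencies over the full 4^k alphabet so both samples align.
    return [counts.get("".join(p), 0) / total
            for p in product("ACGT", repeat=k)]

def l1_distance(p, q):
    """L1 distance between two k-mer frequency vectors."""
    return sum(abs(a - b) for a, b in zip(p, q))

# Placeholder sequences; a real analysis would use the 719 intron-containing
# and 63 intron-lacking coding sequences and assess significance by permutation.
with_introns = ["ATGGCGGCTAGCTAGGATCCGATCGAAGT"]
without_introns = ["ATGAAACCCGGGTTTAAACGCGCGCTTGA"]
d = l1_distance(kmer_freqs(with_introns, 5), kmer_freqs(without_introns, 5))
print(f"L1 distance between 5-mer spectra: {d:.3f}")
```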
Resumo:
Vaccination of two chimpanzees against hepatitis B virus (HBV) by intramuscular injection of plasmid DNA encoding the major and middle HBV envelope proteins induced group-, subtype- and preS2-specific antibodies. These were initially of IgM isotype and subsequently of IgG (predominantly IgG1) isotype. The chimpanzee injected with 2 mg of DNA attained >100 milli-international units/ml of anti-HBs antibody after one injection and 14,000 milli-international units/ml after four injections. A smaller dose (400 microg) induced lower and transient titers, but a strong anamnestic response occurred 1 year later. Comparison with responses in 23 chimpanzees receiving various antigen-based HBV vaccines suggests that the DNA approach is promising for prophylactic immunization against HBV.
Resumo:
In this study we have investigated the role of the N-terminal region of thyroid hormone receptors (TRs) in thyroid hormone (TH)-dependent transactivation of a thymidine kinase promoter containing TH response elements composed either of a direct repeat or an inverted palindrome. Comparison of rat TR beta 1 with TR beta 2 provides an excellent model since they share identical sequences except for their N termini. Our results show that TR beta 2 is an inefficient TH-dependent transcriptional activator. The degree of transactivation corresponds to that observed for the mutant TR delta N beta 1/2, which contains only those sequences common to TR beta 1 and TR beta 2. Thus, TH-dependent activation appears to be associated with two separate domains. The more important region, however, is embedded in the N-terminal domain. Furthermore, the transactivating property of TR alpha 1 was also localized to the N-terminal domain between amino acids 19 and 30. Using a coimmunoprecipitation assay, we show that the differential interaction of the N terminus of TR beta 1 and TR beta 2 with transcription factor IIB correlates with the TR beta 1 activation function. Hence, our results underscore the importance of the N-terminal region of TRs in TH-dependent transactivation and suggest that a transactivating signal is transmitted to the general transcriptional machinery via a direct interaction of the receptor N-terminal region with transcription factor IIB.
Resumo:
Plants can defend themselves from potential pathogenic microorganisms by relying on a complex interplay of signaling pathways: activation of the MAPK cascade, transcription of defense-related genes, production of reactive oxygen species and nitric oxide, and synthesis of other defensive compounds such as phytoalexins. These events are triggered by the recognition of pathogen effectors (effector-triggered immunity) or PAMPs (PAMP-triggered immunity). The Cerato Platanin Family (CPF) members are secreted Cys-rich proteins localized on fungal cell walls, involved in several aspects of fungal development and pathogen-host interactions. Although more than a hundred genes of the CPF have been identified and analyzed, the structural and functional characterization of the expressed proteins has been restricted to only a few members of the family. Interestingly, those proteins have been shown to bind chitin with diverse affinity, and after foliar treatment they elicit defensive mechanisms in host and non-host plants. This property makes cerato-platanins interesting candidates for the development of new fungal elicitors with applications in sustainable agriculture. This study focuses on cerato-platanin (CP), the core member of the family, and on the orthologous cerato-populin (Pop1). The latter shows 62% identity and 73% overall homology with respect to CP. Both proteins are able to induce MAPK phosphorylation, production of reactive oxygen species and nitric oxide, overexpression of defense-related genes, programmed cell death and synthesis of phytoalexins. CP, however, when compared with Pop1, induces a faster response and, in some cases, a stronger activity on plane leaves. The aim of the present research is to verify whether the dissimilarities observed in the defense elicitation activity of these proteins can be associated with their structural and dynamic features. Taking advantage of the available CP NMR structure, Pop1's 3D structure was obtained by homology modeling. Experimental residual dipolar couplings and 1H, 15N, 13C resonance assignments were used to validate the model. Previous work on CPF members identified the highly conserved random-coil regions (loops b1-b2 and b2-b3) as necessary and sufficient to induce necrosis in plant leaves; that region was therefore investigated in both Pop1 and CP. In the two proteins the loops differ, in their primary sequence, by a few mutations and an insertion, with a consequent diversification of the proteins' electrostatic surfaces. A set of 2D and 3D NMR experiments was performed to characterize both the spatial arrangement and the dynamic features of the loops. NOE data revealed a more extended network of interactions between the loops in Pop1 than in CP. In addition, in Pop1 we identified a salt bridge Lys25/Asp52 and a strong hydrophobic interaction between Phe26 and Trp53. These structural features were expected not only to affect the loops' spatial arrangement, but also to reduce their conformational freedom. Relaxation data and the order parameter S2 indeed highlighted reduced flexibility, in particular for loop b1-b2 of Pop1.
In vitro NMR experiments, in which Pop1 and CP were titrated with oligosaccharides, supported the hypothesis that the loops' structural and dynamic differences may be responsible for the different chitin-binding properties of the two proteins: CP selectively binds chitin tetramers in a shallow groove on one side of the barrel defined by loops b1-b2, b2-b3 and b4-b5, whereas Pop1 interacts with oligosaccharides in a non-specific fashion. Because the region involved in chitin binding is also responsible for the defense elicitation activity, possibly being recognized by plant receptors, it is reasonable to expect that these structural and dynamic differences may also explain the different extent of defense elicitation. To test that hypothesis, the initial steps of an in silico protocol aimed at identifying a receptor for CP are presented.
Type 1 nitrergic (ND1) cells of the rabbit retina: Comparison with other axon-bearing amacrine cells
Resumo:
NADPH diaphorase (NADPHd) histochemistry labels two types of nitrergic amacrine cells in the rabbit retina. Both the large ND1 cells and the small ND2 cells stratify in the middle of the inner plexiform layer, and their overlapping processes produce a dense plexus, which makes it difficult to trace the morphology of single cells. The complete morphology of the ND1 amacrine cells has been revealed by injecting Neurobiotin into large round somata in the inner nuclear layer, which resulted in the labelling of amacrine cells whose proximal morphology and stratification matched those of the ND1 cells stained by NADPHd histochemistry. The Neurobiotin-injected ND1 cells showed strong homologous tracer coupling to surrounding ND1 cells, and double-labelling experiments confirmed that these coupled cells showed NADPHd reactivity. The ND1 amacrine cells branch in stratum 3 of the inner plexiform layer, where they produce a sparsely branched dendritic tree of 400-600 µm diameter in ventral peripheral retina. In addition, each cell gives rise to several fine beaded processes, which arise either from a side branch of the dendritic tree or from the tapering of a distal dendrite. These axon-like processes branch successively within the vicinity of the dendritic field before extending, with little or no further branching, for 3-5 mm from the soma in ventral peripheral retina. Consequently, these cells may span one-third of the visual field of each eye, and their spatial extent appears to be greater than that of most other types of axon-bearing amacrine cells injected with Neurobiotin in this study. The morphology and tracer-coupling pattern of the ND1 cells are compared with those of confirmed type 1 catecholaminergic cells, a presumptive type 2 catecholaminergic cell, the type 1 polyaxonal cells, the long-range amacrine cells, a novel type of axon-bearing cell that also branches in stratum 3, and a type of displaced amacrine cell that may correspond to the type 2 polyaxonal cell. (C) 2004 Wiley-Liss, Inc.
Resumo:
The measurement of alcohol craving began with single-item scales; multifactorial scales were developed with the intention of capturing the phenomenon of craving more fully. This study examines the construct validity of a multifactorial scale, the Yale-Brown Obsessive Compulsive Scale for heavy drinking (Y-BOCS-hd), and compares its clinical utility with a single-item visual-analogue craving scale. The study includes 212 alcohol-dependent subjects (127 males, 75 females) undertaking an outpatient treatment program between 1999 and 2001. Subjects completed the Y-BOCS-hd and a single-item visual-analogue scale, in addition to alcohol consumption and dependence severity measures. The Y-BOCS-hd had strong construct validity. Both the visual-analogue alcohol craving scale and the Y-BOCS-hd were weakly associated with pretreatment dependence severity. There was a significant association between pretreatment alcohol consumption and the visual-analogue craving scale. Neither craving measure was able to predict total program abstinence or days abstinent. The relationship between obsessive-compulsive behavior in alcohol dependence and craving remains unclear.
Resumo:
Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions of [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
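To make the kind of simulation experiment described above concrete, the following is a minimal sketch that simulates a discretised CKLS-type short rate and estimates its parameters by GMM, using the standard Euler moment conditions with instruments (1, r_t). The parameter values, sample size and solver are illustrative assumptions, not the paper's exact experimental design; in this exactly identified case the GMM objective reduces to solving the four sample moment conditions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Simulate a discretised CKLS short rate: dr = (alpha + beta*r)dt + sigma*r^gamma dW.
# The "true" parameter values below are illustrative only.
alpha, beta, sigma, gamma = 0.08, -0.5, 0.8, 1.5
dt, n = 1 / 12, 600                      # monthly observations, 50 years
r = np.empty(n)
r[0] = alpha / -beta                     # start at the long-run mean
for t in range(n - 1):
    shock = sigma * r[t] ** gamma * np.sqrt(dt) * rng.standard_normal()
    r[t + 1] = abs(r[t] + (alpha + beta * r[t]) * dt + shock)  # reflect at zero

def moment_conditions(theta):
    """CKLS Euler moment conditions with instruments (1, r_t): 4 equations, 4 unknowns."""
    a, b, s, g = theta
    eps = r[1:] - r[:-1] - (a + b * r[:-1]) * dt          # drift residual
    v = eps ** 2 - s ** 2 * r[:-1] ** (2 * g) * dt        # variance residual
    return np.array([eps.mean(), (eps * r[:-1]).mean(),
                     v.mean(), (v * r[:-1]).mean()])

fit = least_squares(moment_conditions, x0=[0.05, -0.3, 0.5, 1.0])
print("GMM estimates (alpha, beta, sigma, gamma):", fit.x.round(3))
```

Repeating such an experiment over many simulated paths, and comparing the distributions of the estimates of beta (the mean-reversion parameter) and gamma (the levels effect) with the true values, is the kind of design that exposes the bias discussed above.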
Resumo:
The Thames Estuary, UK, and the Brisbane River, Australia, are comparable in size and catchment area. Both are representative of the large and growing number of the world's estuaries associated with major cities. Principal differences between the two systems relate to climate and human population pressures. In order to assess the potential phytotoxic impact of herbicide residues in the estuaries, surface waters were analysed with a PAM fluorometry-based bioassay that employs the photosynthetic efficiency (photosystem II quantum yield) of laboratory-cultured microalgae as an endpoint measure of phytotoxicity. In addition, surface waters were chemically analysed for a limited number of herbicides. Diuron, atrazine and simazine were detected in both systems at comparable concentrations. In contrast, bioassay results revealed that whilst the detected herbicides accounted for the observed phytotoxicity of Brisbane River extracts with great accuracy, they consistently explained only around 50% of the phytotoxicity induced by Thames Estuary extracts. The unaccounted-for phytotoxicity in Thames surface waters is indicative of unidentified phytotoxins. The greatest phytotoxic response was measured at Charing Cross, Thames Estuary, and corresponded to a diuron-equivalent concentration of 180 ng L-1. The study employs relative potencies (REPs) of PSII-impacting herbicides and demonstrates that chemical analysis alone is prone to omission of valuable information. The results of the study support the incorporation of bioassays into routine monitoring programs, where bioassay data may be used to predict and verify chemical contamination data, alert to unidentified compounds and provide the user with information regarding the cumulative toxicity of complex mixtures. (c) 2005 Elsevier B.V. All rights reserved.
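The REP approach mentioned above aggregates measured concentrations into a single diuron-equivalent concentration by weighting each herbicide by its potency relative to diuron. A minimal sketch of that arithmetic, with placeholder REP values and sample concentrations rather than the study's data, is given below.

```python
# Illustrative relative potencies vs. diuron (REP = 1.0 by definition for diuron);
# these numbers are placeholders, not the values used in the study.
REP = {"diuron": 1.0, "atrazine": 0.2, "simazine": 0.1}

def diuron_equivalents(concentrations_ng_per_l):
    """REP-weighted sum of herbicide concentrations, in ng/L diuron equivalents."""
    return sum(REP[name] * c for name, c in concentrations_ng_per_l.items())

# Hypothetical surface-water sample (ng/L):
sample = {"diuron": 60.0, "atrazine": 100.0, "simazine": 100.0}
deq_chemical = diuron_equivalents(sample)   # phytotoxicity explained by chemistry
deq_bioassay = 180.0                        # response observed in the PSII bioassay
print(f"Chemically accounted DEQ: {deq_chemical:.0f} ng/L "
      f"({100 * deq_chemical / deq_bioassay:.0f}% of the bioassay response)")
```

A shortfall like the 50% in this hypothetical example is exactly the signature of unidentified phytotoxins described for the Thames Estuary extracts.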