989 results for "bottom layer"

Relevance: 60.00%

Publisher:

Abstract:

Properties of the dense ice shelf water plume emerging from the Filchner Depression in the southwestern Weddell Sea are described using available current meter records and CTD stations. A mean hydrography, based on more than 300 CTD stations gathered over 25 yr, points to a cold, relatively thin and vertically well-defined plume east of the two ridges cross-cutting the continental slope about 60 km from the Filchner sill, whereas the dense bottom layer is warmer, more stratified and much thicker west of these ridges. The data partly confirm the three major pathways suggested earlier and agree with recent theories on topographic steering by submarine ridges. A surprisingly high mesoscale variability in the overflow region is documented and discussed. The variability is largely due to three distinct oscillations (with periods of about 35 h, 3 d and 6 d) seen in both temperature and velocity records on the slope. The oscillations are episodic, barotropic and have a horizontal scale of ~20-40 km across the slope. They are partly geographically separated, with the longer periods stronger on the lower part of the slope and the shorter period stronger on the upper part. Energy levels are lower west of the ridges and in the Filchner Depression. The observations are discussed in relation to existing theories on eddies, commonly generated in plumes, and on continental shelf waves.
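The three oscillation periods are the kind of signal a simple spectral analysis of the moored records would isolate. The sketch below is purely illustrative: a synthetic hourly "velocity" series with assumed 35 h and 3 d components stands in for real current-meter data.

```python
# Hypothetical sketch: recovering dominant oscillation periods (~35 h, ~3 d)
# from an hourly record via a simple FFT periodogram. The synthetic series
# below is an assumption, not data from the study.
import numpy as np

rng = np.random.default_rng(0)
dt_hours = 1.0
n = 24 * 120                                  # 120 days of hourly samples
t = np.arange(n) * dt_hours
u = (np.sin(2 * np.pi * t / 35.0)             # ~35 h oscillation
     + 0.8 * np.sin(2 * np.pi * t / 72.0)     # ~3 d oscillation
     + 0.3 * rng.standard_normal(n))          # measurement noise

spec = np.abs(np.fft.rfft(u - u.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=dt_hours)        # cycles per hour
periods = 1.0 / freqs[1:]                     # hours; skip the zero frequency
spec = spec[1:]

# The two largest spectral peaks give the dominant periods (in hours).
top = periods[np.argsort(spec)[-2:]]
print(sorted(np.round(top, 1)))
```

With real records, a multitaper or Welch estimate and a confidence test against red noise would replace this bare periodogram.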


Oceanographic research in the Amvrakikos Gulf in western Greece, a semi-enclosed embayment isolated from the Ionian Sea by a narrow, shallow sill, has shown that it is characterised by a fjord-like oceanographic regime. The Gulf has a well-stratified two-layer structure in the water column, made up of a surface layer and a bottom layer separated by a strong pycnocline. At the entrance over the sill there is a brackish outflow in the surface water and a saline inflow in the near-bed region. This morphology and water circulation pattern make the Amvrakikos Gulf the only Mediterranean Sea fjord. The investigations have also shown that the surface layer is well oxygenated, whereas in the pycnocline the dissolved oxygen (DO) declines sharply and finally reaches zero, thus dividing the water column into oxic, dysoxic and anoxic environments. At the dysoxic/anoxic interface, at a depth of approximately 35 m, a sharp redoxcline develops, with Eh values between 0 and 120 mV above it and between 0 and -250 mV below it, where oxic and anoxic biochemical processes prevail, respectively. On the seafloor beneath the anoxic waters, a black silt layer and a white mat of Beggiatoa-like cells are formed. The dysoxic/anoxic conditions appeared during the last 20 to 30 years and have been caused by the excessive use of fertilisers, the increase in animal stocks, intensive fish farming and domestic effluents. The resulting dysoxia/anoxia has caused habitat loss on the seafloor over an area that makes up just over 50% of the total Gulf area and approximately 28% of the total water volume. Furthermore, anoxia is also considered to have been responsible for the sudden fish mortality which occurred in aquaculture rafts in the Gulf in February 2008. Anoxic conditions can therefore be considered a potential hazard to the ecosystem and to the thriving fishing and mariculture industry in the Gulf.


Serial observations of temperature, salinity, oxygen, alkalinity and pH are presented. They were carried out during an anchor station of R.V. "Meteor" west of Cape São Vicente (Portugal), in the area of maximum Mediterranean water outflow, which follows the continental slope off Portugal. Two observational results are pointed out. First, the Mediterranean water spreads into the Atlantic Ocean as two distinct layers, at depths of 700 m (T = 12.0 °C, S = 36.15 ‰) and 1250 m (T = 11.3 °C, S = 36.40 ‰); salinity proved to be the most significant indicator of the observed stratification. Second, the values of dissolved oxygen content, alkalinity and pH in the very near-bottom layer (1 m above the bottom at a depth of 3250 m) differ from the values 15 m to 100 m above the bottom. As this phenomenon is not observed for salinity, the changes may be interpreted in terms of chemical and biological processes at the sediment-water interface.


Species composition and abundance of phytoplankton and chlorophyll concentration were measured at three horizons at 9 stations in the Nha Trang Bay of the South China Sea in March 1998. Vertical distributions of fluorescence parameters, temperature and irradiance were measured in the 0-18 m layer of the water column at 21 stations. It was shown that, according to biomass (B) and chlorophyll concentration (Chl), the Bay is mesotrophic. B and Chl in the water column increased seaward. Mean values of Chl in the southern part of the Bay exceeded those in the northern part, while mean values of B were similar. B and Chl in the bottom layer exceeded those in the upper layer. Diatoms dominated in species diversity and abundance, with the diatom Guinardia striata making the main contribution to phytoplankton biomass. Similarity of the phytoplankton assemblages was high. In the upper layer phytoplankton was photoinhibited during most of the light period, whereas at the bottom photosynthetic activity was high. Water-column B varied by an order of magnitude during the daily cycle, mainly because of B variations in the bottom layer due to tidal flow.


Comprehensive biogeochemical studies, including determination of the isotopic composition of organic carbon in both suspended matter and the surface layer (0-1 cm) of bottom sediments (more than 260 determinations of δ13C-Corg), were carried out for five Arctic shelf seas: the White, Barents, Kara, East Siberian, and Chukchi Seas. The aim of this study is to elucidate the causes of changes in the isotopic composition of particulate organic carbon at the water-sediment boundary. It is shown that the isotopic composition of organic carbon in sediments from seas with high river run-off (the White, Kara, and East Siberian Seas) does not inherit the isotopic composition of organic carbon in particles settling from the water column, but is enriched in 13C. Seas with low river run-off (the Barents and Chukchi Seas) show an insignificant difference between δ13C-Corg values in the suspended load and the sediments, because of the low content of isotopically light allochthonous organic matter in the suspended matter. Biogeochemical studies with radioisotope tracers (14CO2, 35S, and 14CH4) revealed the existence of a specific microbial filter, formed by heterotrophic and autotrophic organisms, at the water-sediment boundary. This filter prevents a mass influx of the products of organic matter decomposition into the water column, and also reduces the influx of OM contained in suspended matter from the water into the sediments.
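The contrast between isotopically light allochthonous (riverine) and heavier marine organic carbon is commonly quantified with a two-endmember δ13C mixing model. The sketch below is a generic illustration; the endmember values are assumptions, not measurements from this study.

```python
# Hedged illustration of a standard two-endmember d13C mixing model used to
# estimate the allochthonous (terrestrial) fraction of organic carbon.
# Endmember values are assumed, typical-literature numbers.
DELTA_TERRESTRIAL = -27.0  # per mil; assumed riverine/terrestrial endmember
DELTA_MARINE = -21.0       # per mil; assumed marine plankton endmember

def terrestrial_fraction(delta_sample: float) -> float:
    """Linear mixing: fraction of terrestrial organic carbon in a sample."""
    f = (delta_sample - DELTA_MARINE) / (DELTA_TERRESTRIAL - DELTA_MARINE)
    return min(max(f, 0.0), 1.0)   # clamp to the physically meaningful range

# Sediment enriched in 13C relative to the settling particles implies the
# sediment retains a smaller terrestrial fraction than the suspended matter.
print(terrestrial_fraction(-25.5))  # hypothetical suspended-matter sample
print(terrestrial_fraction(-23.0))  # hypothetical surface-sediment sample
```

The drop in the computed fraction between the two hypothetical samples mirrors the 13C enrichment at the water-sediment boundary described in the abstract.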


In this paper the authors present and discuss data on the distribution and mineral composition of suspended particulate matter (SPM) in the Franz Victoria Trough, collected during Cruise 14 of the scientific icebreaker Akademik Fedorov in the northern Barents Sea in October 1998. Higher total SPM concentrations (0.4-1.8 mg/l) were measured in the near-bottom layer of the Franz Victoria Strait and in the central part of the trough. A likely source of the mineral particles in the SPM is the fine fraction of Barents Sea bottom sediments. These particles form a nepheloid layer, which spreads along the continental slope down the trough together with Barents Sea waters at 350-400 m depth.


Turbulence profile measurements made on the upper continental slope and shelf of the southeastern Weddell Sea reveal striking contrasts in dissipation and mixing rates between the two sites. The mean profiles of dissipation rates from the upper slope are 1-2 orders of magnitude greater than the profiles collected over the shelf, throughout the entire water column. The difference increases toward the bottom, where the dissipation rate of turbulent kinetic energy and the vertical eddy diffusivity on the slope exceed 10^-7 W kg^-1 and 10^-2 m^2 s^-1, respectively. Elevated levels of turbulence on the slope are concentrated within a 100 m thick bottom layer, which is absent on the shelf. The upper slope is characterized by near-critical slopes and lies in close proximity to the critical latitude for semidiurnal internal tides. Our observations suggest that the upper continental slope of the southern Weddell Sea is a generation site of the semidiurnal internal tide, which is trapped along the slope near the critical latitude and dissipates its energy in a ~100 m thick layer near the bottom, within a narrow band across the slope.
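Two quantities behind this abstract can be cross-checked with short calculations: the M2 critical latitude, where the inertial frequency f = 2Ω sin φ matches the semidiurnal tidal frequency, and the Osborn relation K = Γ ε / N², which converts a dissipation rate into a vertical eddy diffusivity. The sketch below is illustrative only; the mixing efficiency Γ = 0.2 and the stratification N² are assumed values, not data from the study.

```python
# Hedged sketch: (i) the M2 critical latitude, and (ii) the Osborn estimate
# K = Gamma * eps / N^2 (Gamma = 0.2 assumed) linking dissipation rate to a
# vertical eddy diffusivity.
import math

OMEGA_EARTH = 7.2921e-5           # Earth's rotation rate, rad/s
M2_PERIOD_S = 12.4206 * 3600.0    # M2 tidal period, s
omega_m2 = 2.0 * math.pi / M2_PERIOD_S

# Critical latitude: |f| = 2*Omega*sin(lat) equals the M2 frequency.
lat_crit = math.degrees(math.asin(omega_m2 / (2.0 * OMEGA_EARTH)))
print(f"M2 critical latitude: {lat_crit:.1f} deg")   # ~74.5 (deg S here)

def osborn_diffusivity(eps: float, n_squared: float, gamma: float = 0.2) -> float:
    """Vertical eddy diffusivity from dissipation rate and stratification."""
    return gamma * eps / n_squared

# With eps ~ 1e-7 W/kg as observed on the slope and weak stratification
# N^2 ~ 2e-6 s^-2 (an assumed value), K reaches the reported ~1e-2 m^2/s.
print(f"K = {osborn_diffusivity(1e-7, 2e-6):.0e} m^2/s")
```

The computed critical latitude (~74.5°) falls right on the southern Weddell Sea slope, which is why the trapped semidiurnal internal tide is plausible there.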


This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been studied in academic research for 17 years. Related research has grown remarkably over the past decades, while the design of solid and effective protection against such attacks remains an open research topic, in which more reliable initiatives are needed to protect personal, corporate and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were carved in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. With the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Owing to the computing-power constraints before the computer era, simple encryption techniques were more than sufficient to conceal information. However, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated new research in cryptography, aimed at protecting information systems against sophisticated attacks. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance built on greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technology forward.
Today, the information world is deeply involved with cryptography, which protects virtually every field through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making implementation possible on various platforms (microprocessors, ASICs, FPGAs, etc.). Industry security needs and requirements are the main driving metrics in electronic design, with the aim of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor, cells), built on a silicon substrate during fabrication. The circuit logic is realized through the countless switching events of these cells. This mechanism inevitably causes a distinctive physical emanation that can be measured and correlated with the internal behavior of the circuit. SCA can be used to reveal confidential data (for example, cryptographic keys), to analyze the logic architecture and timing, and even to inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards.
By comparing the correlation between the estimated leakage and the actually measured leakage, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and electromagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to resist SCA. It is therefore necessary, during circuit implementation, to integrate measures that camouflage the leakage through these "side channels". Countermeasures against SCA evolve alongside the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage calls for countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures confuse the attack source by injecting noise or misaligning the internal activity. By comparison, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible way. Unlike linear masking, the non-linear operations that abound in modern cryptography are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail coding, devised specifically to flatten or remove data-dependent leakage in power or EM signatures.
In this thesis, in addition to describing attack methodologies, considerable effort has been devoted to the structure of the proposed logic prototype, in order to carry out security-focused investigations into logic-level architectural countermeasures. One characteristic of SCA lies in the form of the leakage sources. The typical side-channel attack is power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Therefore, a robust SCA-resistant logic must eliminate or mitigate the leakage of these micro-units, such as basic logic gates, I/O ports and routes. The EDA tools provided by vendors manipulate the logic from a higher level rather than from the gate level, where side-channel leakage manifests itself. Classical implementations therefore barely satisfy these needs and inevitably cripple the prototype. For all these reasons, a customized and flexible design scheme must be considered. This thesis presents the design and implementation of an innovative logic to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy over a gate-level dual-rail circuit, to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource cost of the implementation; III. It is supported by a set of customized assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical application on the Xilinx FPGA family.
In this respect, the methodology and tools are flexible enough to be extended to a wide range of applications in which much more rigid and sophisticated gate- or routing-level constraints are desired. This thesis makes a great effort to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are effectively supported by experimental attacks that validate the security advantages of the system. This research work intends to close the gap between implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools developed to work together with the generic FPGA design flow in order to create dual-rail logic in an innovative way. A new approach in the field of encryption security is proposed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL logic: using netlist conversion to reserve free LUTs to execute the precharge and Ex signals in a DPL style. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps increase resistance to selective EM measurement and mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic.
(a) To detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logics where strict control of the interconnections is required in Xilinx-based applications. A customized CPA test platform for EM and power analysis, including the construction of the platform, the measurement method and the analysis of the attacks. Timing analysis to quantify security levels. Security partitioning, converting only part of a complex cryptographic system in order to reduce protection costs. A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, ranging from basic concepts and analysis models to platform implementation and attack execution. Chapter 2 covers SCA-resistance strategies against differential power and EM attacks. In addition, this chapter proposes a compact and secure dual-rail logic as a major contribution, and presents the logic transformation based on gate-level design. Chapter 3 addresses the challenges of implementing generic dual-rail logic. A customized design flow is described to solve the implementation problems, together with a proposed automatic development tool to mitigate design barriers and ease the processes. Chapter 4 describes in detail the construction and implementation of the proposed tools.
The security verification and validation of the proposed logic, together with a sophisticated routing-security verification experiment, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the perspectives for future work are given in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below. Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of side-channel methods, which reveal dissipation leakage correlated with internal behavior, and the method for recovering the relation between the physical fluctuations in side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis broadens and deepens from Chapter 2 onward, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the gate-level compensated elements. Second, PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulation above the netlist at LUT level are clarified.
Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs. The security level of the SCA-resistant solutions found in the state of the art has been degraded by the implementation barriers of conventional EDA tools. In the FPGA architecture under study, the problems of dual-rail formats, parasitic impacts, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rail and cloned routing networks, to application methods such as the EDA customization and automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, with the ncd file (an unreadable binary version of the XDL) converted to the Xilinx XDL format. The mechanism and rationale of the proposed toolset are carefully described, covering routing detection and the repair approaches.
The developed toolset aims to achieve strictly identical routing networks for the dual-rail logic, for both separate and interleaved placement. This chapter specifies in particular the technical basis for supporting implementations on Xilinx devices and the toolset's flexibility for use in other applications. Chapter 5 focuses on the case studies used to validate the security level of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed. (a) The impact of logic placement using the proposed toolkit is discussed; different implementation schemes, taking into account global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) security validation is performed with correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the impact of routing on security metrics; (d) preliminary results of the self-adaptive heating system under process variation are shown; (e) a practical application of the tools to a complete encryption design is introduced. Chapter 6 contains the overall summary of the work presented in this doctoral thesis. Finally, a brief outlook on future work is given, which may extend the potential of this thesis's contributions beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques related to the Side-Channel Attack (SCA), which has been an active topic of academic research for the past 17 years.
Related research has grown remarkably over the past decades, while the design of solid and efficient protection still remains an open research topic, in which more reliable initiatives are required for personal, enterprise and national data protection. The earliest documented use of secret code can be traced back to around 1700 B.C., when hieroglyphs in ancient Egypt were carved in inscriptions. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, crypto solutions were first incorporated into electronic signalling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure cable or wireless channels. Restricted by the computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technology. Nowadays, the information world has become a crypto world, protecting every field with pervasive crypto solutions. These approaches are strong because of the optimized merger of modern mathematical theories and effective hardware practice, making it possible to implement crypto theories on various platforms (microprocessors, ASICs, FPGAs, etc.).
Security needs from industry are in fact the major driving metrics in electronic design, aiming to promote the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since named the side-channel attack after its analysis source. Critical suspicion of theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. To be specific, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal Oxide Semiconductor, cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various means of comparison between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at much less expense of time and computation. To be precise, SCA basically encompasses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of the severity of the threat, much attention must be paid during implementation to assembling countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA are evolving along with the development of attack techniques.
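The comparison between predicted and measured leakage described above is the core of correlation power analysis (CPA): every key guess is ranked by the Pearson correlation between a leakage model (here a Hamming-weight model) and the traces. The sketch below is entirely hypothetical: a toy 4-bit S-box permutation and synthetic noisy "traces" stand in for a real cipher and a real measurement setup.

```python
# Toy CPA sketch (Hamming-weight model on a 4-bit S-box output).
# Everything here is synthetic; it is NOT the thesis's actual testbed.
import numpy as np

rng = np.random.default_rng(1)
# A made-up 4-bit S-box permutation used purely for illustration.
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])
HW = np.array([bin(x).count("1") for x in range(16)])  # Hamming weights 0..15

SECRET_KEY = 0xA
n_traces = 2000
plaintexts = rng.integers(0, 16, n_traces)
# Each "trace" is one leaky sample: HW of the S-box output plus noise.
traces = HW[SBOX[plaintexts ^ SECRET_KEY]] + 0.8 * rng.standard_normal(n_traces)

def pearson(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

# Rank every key guess by |correlation| between predicted and measured leakage.
scores = [abs(pearson(HW[SBOX[plaintexts ^ k]], traces)) for k in range(16)]
print(hex(int(np.argmax(scores))))  # the correct guess dominates
```

The correct key guess produces a correlation far above all wrong guesses, which is why CPA recovers secrets "at much less expense of time and computation" than brute force.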
The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activity. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or to make it insensitive altogether. Hiding and Masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering sensitive intermediate values with a mask in reversible ways. Unlike linear masking, the non-linear operations that are widespread in modern cryptography are difficult to mask. Proven to be an effective counter-solution, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from the description of attack methodologies, effort has also been dedicated to a logic prototype, in order to mount extensive security investigations of logic-level countermeasures. A characteristic of SCA resides in the form of the leakage sources. The typical side-channel attack is the power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic from a higher behavioral level rather than the lower gate level, where side-channel leakage is generated. Classical implementations therefore barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised.
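The reversible-masking idea mentioned above can be shown in a few lines. This is a minimal first-order Boolean masking sketch with assumed toy values, not the thesis's countermeasure: a sensitive byte only ever exists as two shares, and a linear (XOR) operation is computed share-wise. Non-linear operations, as the text notes, are exactly where masking becomes hard.

```python
# Minimal first-order Boolean masking illustration (toy example).
import secrets

def mask(value: int, bits: int = 8):
    """Split a sensitive value into two shares; each share alone is uniform."""
    m = secrets.randbits(bits)
    return value ^ m, m

def masked_xor(shares_a, shares_b):
    # XOR is linear, so it can be computed on each share independently,
    # without ever recombining (and thus leaking) the sensitive values.
    return shares_a[0] ^ shares_b[0], shares_a[1] ^ shares_b[1]

def unmask(shares):
    return shares[0] ^ shares[1]

a, b = 0x3C, 0x5A               # hypothetical sensitive bytes
ra, rb = mask(a), mask(b)
result = unmask(masked_xor(ra, rb))
print(hex(result))              # equals a ^ b, i.e. 0x66
```

An S-box lookup, by contrast, cannot be split share-wise like this, which is why non-linear layers need dedicated masked implementations or a hiding scheme such as dual-rail logic.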
This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. the proposed logic is based on the hiding strategy, using a gate-level dual-rail style to dynamically balance the side-channel leakage of the lower circuit layers; II. the logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. it is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to perform the circuit manipulations automatically. The automatic design toolkit supports the proposed dual-rail logic and facilitates its practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications in which rigid and sophisticated gate- or routing-level constraints are required. A great effort is made in this thesis to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on a selected, widely used cryptographic algorithm, allowing a thorough and fair evaluation with respect to prior solutions, and all proposals are verified by security experiments. The presented research attempts to solve these implementation troubles. The essence formalized throughout this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow in creating the innovative dual-rail logic. A method in the crypto-security area is thus constructed that achieves customization, automation and flexibility in low-level circuit prototyping, with fine-grained control of intractable routing. The main contributions of the presented work are summarized next. Precharge-Absorbed DPL (PA-DPL) logic: a netlist conversion that reserves free LUT inputs to drive the precharge and Ex signals in a dual-rail logic style. 
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be reused in other logic styles where strict network control is required in a Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying the security grade. A methodology of security partitioning of complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows. Chapter 1 discusses side-channel attack fundamentals, covering theoretical basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles; a customized design flow that solves the implementation problems is described, along with a self-developed automatic implementation toolkit that mitigates the design barriers and facilitates the process. Chapter 4 elaborates the tool specifics and construction details. 
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing-verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next. Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, in detail; (c) the essentials of side-channel methods, revealing how the dissipation leakage correlates with internal behaviors and how the relationship between the physical fluctuations in side-channel traces and the internally processed data can be recovered; (d) the setups of the power/EM testing platforms used throughout the thesis work. The content of the thesis is expanded and deepened from Chapter 2 onward, in several steps. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown, and the technical definitions manipulated on the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs: the security level of state-of-the-art SCA-resistant solutions is degraded by the implementation barriers of conventional EDA tools. 
In the studied FPGA scenario, the problems are discussed in terms of dual-rail format, parasitic impact, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is made practicable through a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is described and applied to a dual-core logic, alternately adjusting the local temperature to balance, in real time, the negative impact of silicon technological bias. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-place-and-route circuit (an unreadable binary version) once converted to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials that support the implementations on Xilinx devices and the flexibility to extend them to other applications. Chapter 5 focuses on the case studies that validate the security grade of the logic style produced by the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts of using the proposed toolkit are discussed. 
Different execution schemes, considering global optimization of security and cost, are verified experimentally so as to find the optimal placement and repair schemes; (b) security validations are carried out with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metric; (d) preliminary results of the self-adaptive heating system under process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 gives the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
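The routing-symmetry check at the heart of the toolkit can be caricatured in a few lines: two rails are "identical" if their routed wire sequences match after mapping each rail's resource names onto a common placeholder. The net representation and the `_T`/`_F` naming convention below are made-up simplifications for illustration, not the actual Xilinx XDL format:

```python
def normalize(net, rail_tag):
    # Replace the rail-specific part of each resource name with a placeholder
    # so the true and false rails can be compared structurally.
    return [wire.replace(rail_tag, "<rail>") for wire in net]

def asymmetric_positions(net_true, net_false):
    """Return the wire positions at which the two rails' routing differs;
    an empty list means the dual-rail pair is structurally identical."""
    a = normalize(net_true, "_T")
    b = normalize(net_false, "_F")
    mismatches = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    # Any trailing wires present in only one rail are also asymmetries.
    mismatches += list(range(min(len(a), len(b)), max(len(a), len(b))))
    return mismatches

# Toy nets: identical except for the rail tag -> a symmetric routing pair.
t = ["SLICE_X4Y7.A_T", "DOUBLE_N2_T", "SLICE_X4Y9.D_T"]
f = ["SLICE_X4Y7.A_F", "DOUBLE_N2_F", "SLICE_X4Y9.D_F"]
print(asymmetric_positions(t, f))  # prints [] : the routing pair is identical
```

A repair pass in the real toolkit would then re-route the wires reported at the mismatching positions.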

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This work discusses compatibility and integration between home automation systems and devices, and proposes ways to improve them. This integration tends to become a complex task due to the wide variety of integration standards and technologies adopted in home automation. The present work proposes an extension of the Universal Plug and Play (UPnP) standard and the use of a modular architecture with two layers, in order to adapt it to the integration of home automation subsystems. This extended standard is then used in the upper layer for control and integration between the subsystems. In the lower layer, each subsystem uses the communication technology best suited to controlling its devices, and exposes a UPnP interface to communicate with other subsystems and to allow control by the user. In this way, the subsystems become modules of the home automation system. This proposal allows the user to easily buy and replace subsystems from different manufacturers and integrate them, resulting in a flexible, manufacturer-independent home automation system. To test the proposed extension, a use case of a lighting subsystem was created, from which computer simulations were carried out. The simulation results were presented and analyzed, verifying compliance with the system requirements and whether the desired characteristics were achieved, such as plug-and-play behavior of subsystems, increased flexibility and system modularization, easing the purchase and maintenance of home automation systems and thus creating the potential to foster wider adoption of home automation. However, the proposed extension also increases the complexity of the UPnP client that uses it to interact with the system, which may hinder the adoption of home automation systems in the future. 
Finally, suggestions for continued work and future perspectives were presented.
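The two-layer architecture described above can be sketched as a minimal interface: each subsystem drives its devices over its native protocol in the lower layer, while exposing a uniform UPnP-style control surface in the upper layer. All class and method names here are hypothetical illustrations, not the UPnP specification:

```python
from abc import ABC, abstractmethod

class Subsystem(ABC):
    """Upper layer: a uniform, UPnP-like control surface seen by clients.
    Lower layer: each concrete subsystem talks to its devices with whatever
    native protocol suits them (hidden behind this interface)."""

    @abstractmethod
    def describe(self) -> dict: ...

    @abstractmethod
    def invoke(self, action: str, **args): ...

class LightingSubsystem(Subsystem):
    def __init__(self):
        self._levels = {}  # device id -> dim level (native lower-layer state)

    def describe(self):
        return {"type": "lighting", "actions": ["set_level", "get_level"]}

    def invoke(self, action, **args):
        if action == "set_level":
            # A native-protocol call to the physical dimmer would go here.
            self._levels[args["device"]] = args["level"]
        elif action == "get_level":
            return self._levels.get(args["device"], 0)

# A client discovers and controls subsystems through the upper layer only,
# so a replacement lighting subsystem from another vendor plugs in unchanged.
home = [LightingSubsystem()]
home[0].invoke("set_level", device="lamp1", level=75)
print(home[0].invoke("get_level", device="lamp1"))  # prints 75
```

Swapping a subsystem for another vendor's implementation only requires that the new module honor the same upper-layer interface, which is the manufacturer independence the abstract argues for.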

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Suspended matter concentration along a meridional section from the Lena River delta to 78°N (~500 km), at ten stations from the surface to the bottom, was studied with gravimetric and optical (light attenuation index) techniques. At seven stations the average residence time of suspended matter in surface waters was determined by the 234Th disequilibrium method. The average residence time of suspended matter in other depth intervals was calculated from the regression between the 234Th/238U ratio and suspended matter concentration. Differential and integral fluxes of suspended matter in the water column were also calculated. Nepheloid matter dominates the suspended matter composition in surface waters. Calculations indicate that, before being buried on the bottom, solid river run-off is resuspended 2.3 times on average. Redistribution of nepheloid suspended matter in the near-bottom layer results in the formation of a strongly pronounced depocentre, an area of maximal accumulation of solid river run-off within the Laptev Sea.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The rate of CO2 assimilation was determined above the Broken Spur and TAG active hydrothermal fields for three main ecosystems: (1) hydrothermal vents; (2) the 300 m near-bottom layer of plume water; and (3) bottom sediments. In water samples from warm (40-45°C) vents, assimilation rates were maximal, reaching 2.82-3.76 µg C/l/day. In plume waters, CO2 assimilation rates ranged from 0.38 to 0.65 µg C/l/day. In bottom sediments, CO2 assimilation rates varied from 0.8 to 28.0 µg C/l/day, rising to 56 mg C/kg/day near shrimp swarms. In the most active plume zone of the long-lived TAG field, bacterial production of organic matter (OM) from carbonic acid reaches 170 mg C/m**2/day, of which the autotrophic process of bacterial chemosynthesis accounts for about 90% (156 mg C/m**2/day). Thus, chemosynthetic production of OM in September-October is almost equal to the photosynthetic production of the oceanic region. Bacterial production of OM above the Broken Spur hydrothermal field is an order of magnitude lower, reaching only 20 mg C/m**2/day.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

This article demonstrates the use of embedded fibre Bragg gratings as a vector bending sensor to monitor the two-dimensional shape deformation of a shape memory polymer plate. The plate was made of thermally responsive epoxy-based shape memory polymer, and two fibre Bragg grating sensors were embedded orthogonally, one in the top and the other in the bottom layer of the plate, in order to measure the strain distribution in the longitudinal and transverse directions separately and also to provide a temperature reference. When the shape memory polymer plate was bent at different angles, the Bragg wavelengths of the embedded fibre Bragg gratings showed a red-shift of 50 pm/° caused by the bend-induced tensile strain on the plate surface. The finite element method was used to analyse the stress distribution over the whole shape-recovery process. The strain transfer rate between the shape memory polymer and the optical fibre, around 0.25, was also calculated with the finite element method and confirmed by experimental results. During the experiment, the embedded fibre Bragg gratings showed very high temperature sensitivity due to the high thermal expansion coefficient of the shape memory polymer: around 108.24 pm/°C below the glass transition temperature (Tg) and 47.29 pm/°C above Tg. Therefore, the orthogonal arrangement of the two fibre Bragg grating sensors can provide temperature compensation, as one of the gratings measures only temperature while the other is subjected to the directional deformation. © The Author(s) 2013.
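The temperature-compensation arrangement described above reduces to simple arithmetic: subtract the temperature-induced wavelength shift reported by the temperature-only grating from the shift of the strain-sensing grating, then convert the remainder to bend angle with the reported sensitivities (50 pm per degree of bend; 108.24 pm/°C below Tg). A minimal sketch; the measurement values in the usage example are invented for illustration:

```python
# Sensitivities reported in the study (below Tg):
K_TEMP = 108.24  # pm/degC, temperature sensitivity of each FBG
K_BEND = 50.0    # pm per degree of bend angle

def bend_angle(shift_sensing_pm, shift_reference_pm):
    """Remove the common temperature shift (given by the reference FBG,
    which sees no strain) and convert the remainder to a bend angle."""
    strain_shift = shift_sensing_pm - shift_reference_pm
    return strain_shift / K_BEND

# Hypothetical readings: 10 degC of warming plus a 20 degree bend.
dT = 10.0
shift_ref = K_TEMP * dT                  # 1082.4 pm, temperature only
shift_sense = shift_ref + K_BEND * 20.0  # temperature plus bend
print(bend_angle(shift_sense, shift_ref))  # prints 20.0
```

Above Tg the same arithmetic applies with the 47.29 pm/°C sensitivity substituted for K_TEMP.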

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Extremely low summer sea-ice coverage in the Arctic Ocean in 2007 allowed extensive sampling and a wide quasi-synoptic hydrographic and d18O dataset could be collected in the Eurasian Basin and the Makarov Basin up to the Alpha Ridge and the East Siberian continental margin. With the aim of determining the origin of freshwater in the halocline, fractions of river water and sea-ice meltwater in the upper 150 m were quantified by a combination of salinity and d18O in the Eurasian Basin. Two methods, applying the preformed phosphate concentration (PO*) and the nitrate-to-phosphate ratio (N/P), were compared to further differentiate the marine fraction into Atlantic and Pacific-derived contributions. While PO*-based assessments systematically underestimate the contribution of Pacific-derived waters, N/P-based calculations overestimate Pacific-derived waters within the Transpolar Drift due to denitrification in bottom sediments at the Laptev Sea continental margin. Within the Eurasian Basin a west to east oriented front between net melting and production of sea-ice is observed. Outside the Atlantic regime dominated by net sea-ice melting, a pronounced layer influenced by brines released during sea-ice formation is present at about 30 to 50 m water depth with a maximum over the Lomonosov Ridge. The geographically distinct definition of this maximum demonstrates the rapid release and transport of signals from the shelf regions in discrete pulses within the Transpolar Drift. The ratio of sea-ice derived brine influence and river water is roughly constant within each layer of the Arctic Ocean halocline. The correlation between brine influence and river water reveals two clusters that can be assigned to the two main mechanisms of sea-ice formation within the Arctic Ocean. 
Over the open ocean or in polynyas at the continental slope where relatively small amounts of river water are found, sea-ice formation results in a linear correlation between brine influence and river water at salinities of about 32 to 34. In coastal polynyas in the shallow regions of the Laptev Sea and southern Kara Sea, sea-ice formation transports river water into the shelf's bottom layer due to the close proximity to the river mouths. This process therefore results in waters that form a second linear correlation between brine influence and river water at salinities of about 30 to 32. Our study indicates which layers of the Arctic Ocean halocline are primarily influenced by sea-ice formation in coastal polynyas and which layers are primarily influenced by sea-ice formation over the open ocean. Accordingly we use the ratio of sea-ice derived brine influence and river water to link the maximum in brine influence within the Transpolar Drift with a pulse of shelf waters from the Laptev Sea that was likely released in summer 2005.
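The quantification of river-water and sea-ice meltwater fractions from salinity and d18O described above is a standard three-end-member mass balance: the fractions sum to one and must jointly reproduce the observed salinity and d18O. A minimal sketch solved by Cramer's rule; the end-member values are illustrative assumptions, not the values used in the study:

```python
# Illustrative end members (salinity, d18O in permil) -- assumptions only:
ATLANTIC = (34.92, 0.3)
RIVER    = (0.0, -20.0)
ICE_MELT = (4.0, 1.5)   # sea-ice meltwater

def fractions(s_obs, d_obs):
    """Solve f_atl + f_riv + f_ice = 1,
             sum(f_i * S_i) = s_obs,  sum(f_i * d18O_i) = d_obs
    as a 3x3 linear system via Cramer's rule."""
    (sa, da), (sr, dr), (si, di) = ATLANTIC, RIVER, ICE_MELT

    def det(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    A = [[1, 1, 1], [sa, sr, si], [da, dr, di]]
    b = [1, s_obs, d_obs]
    D = det(A)
    result = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        result.append(det(M) / D)
    return result  # [f_atlantic, f_river, f_ice_melt]

f = fractions(s_obs=33.0, d_obs=-1.5)
print([round(x, 3) for x in f])  # fractions sum to 1; f_ice < 0 here,
                                 # i.e. net sea-ice formation (brine release)
```

A negative meltwater fraction is how brine influence from net sea-ice formation, central to the abstract's argument, shows up in this framework.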

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Meltponds on Arctic sea ice have previously been reported to be devoid of marine metazoans due to their freshwater conditions. The predominantly dark, but frequently also green and brownish, meltponds observed in the central Arctic in summer 2007 hinted at brackish conditions and considerable amounts of algae, possibly making the habitat suitable for marine metazoans. Environmental conditions in meltponds, as well as the sympagic meiofauna in the new ice covering pond surfaces and in the rotten ice at the bottom of ponds, were studied using modified techniques from sea-ice and under-ice research. Owing to the very porous structure of the rotten ice, the meltponds were usually brackish to saline, providing living conditions very similar to those in the sub-ice water. The new ice cover on the surface had characteristics similar to those of the bottom layer of level ice. The ponds were thus accessible to and inhabitable by metazoans. The new ice cover and the rotten ice were inhabited by various sympagic meiofauna taxa, predominantly ciliates, rotifers, acoels, nematodes and foraminiferans. Sympagic amphipods were also found at the bottom of meltponds. We suggest that, as a consequence of global warming, brackish and saline meltponds are becoming more frequent in the Arctic, providing a new habitat for marine metazoans.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

The Ice Station POLarstern (ISPOL) cruise revisited the western Weddell Sea in late 2004 and obtained a comprehensive set of conductivity-temperature-depth (CTD) data. This study describes the thermohaline structure and diapycnal mixing environment observed in 2004 and compares them with conditions observed more than a decade earlier. Hydrographic conditions on the central western Weddell Sea continental slope, off Larsen C Ice Shelf, in late winter/early spring of 2004/2005 can be described as a well-stratified environment with upper layers evidencing relict structures from intense winter near-surface vertical fluxes, an intermediate depth temperature maximum, and a cold near-bottom layer marked by patchy property distributions. A well-developed surface mixed layer, isolated from the underlying Warm Deep Water (WDW) by a pronounced pycnocline and characterized by lack of warming and by minimal sea-ice basal melting, supports the assumption that upper ocean winter conditions persisted during most of the ISPOL experiment. Much of the western Weddell Sea water column has remained essentially unchanged since 1992; however, significant differences were observed in two of the regional water masses. The first, Modified Weddell Deep Water (MWDW), comprises the permanent pycnocline and was less saline than a decade earlier, whereas Weddell Sea Bottom Water (WSBW) was horizontally patchier and colder. Near-bottom temperatures observed in 2004 were the coldest on record for the western Weddell Sea over the continental slope. Minimum temperatures were ~0.4 and ~0.3 °C colder than during 1992-1993, respectively. The 2004 near-bottom temperature/salinity characteristics revealed the presence of two different WSBW types, whereby a warm, fresh layer overlays a colder, saltier layer (both formed in the western Weddell Sea). 
The deeper layer may have formed locally as high salinity shelf water (HSSW) that flowed intermittently down the continental slope, which is consistent with the observed horizontal patchiness. The latter can be associated with the near-bottom variability found in Powell Basin with consequences for the deep water outflow from the Weddell Sea.