997 results for require solutions


Relevance:

30.00%

Publisher:

Abstract:

As manufacturing enterprises become increasingly globalised, the supply chain is becoming more fragmented, with multiple players engaged in key aspects of the value chain. This has seen the emergence of suppliers offering specialised operations (research, design, production, service) which are capable of serving widely dispersed markets. It is generally assumed that managing these increasingly complex international supply chains requires sophisticated management techniques. Many companies have installed advanced planning systems for just this reason - systems that require skilled staff to implement the complex processes involved.

Relevance:

30.00%

Publisher:

Abstract:

Mercury is one of the most toxic heavy metals present in the environment; it is persistent and is characterized by bioamplification and bioaccumulation along the trophic chain. Mercury pollution is a global-scale problem arising from the combination of natural and anthropogenic emissions, which calls for more restrictive environmental policies on heavy-metal discharges. Consequently, the development of new, efficient materials and technologies for removing mercury from effluents is necessary and urgent. In this context, several microporous materials from two families, titanosilicates and zirconosilicates, were investigated to assess their capacity to remove Hg2+ ions from aqueous solutions. In general, almost all the materials studied showed high removal percentages, confirming that they are good ion exchangers and can be used as decontaminating agents. The titanosilicate ETS-4 was the most studied material because of its high removal efficiency (>98%) combined with the small mass required to reach it. With only 4 mg⋅dm-3 of ETS-4 it was possible to treat a solution with a concentration equal to the maximum value allowed for effluent discharges into watercourses (50 μg⋅dm-3) and obtain water of drinking quality (<1.0 μg⋅dm-3), according to Portuguese legislation (DL 236/98). As for other sorbents, the Hg2+ removal capacity of ETS-4 depends on several experimental conditions, such as contact time, mass, initial mercury concentration, pH and temperature. From an industrial point of view, the optimum conditions for applying ETS-4 are very attractive, since they do not require large amounts of material and the solution can be treated at room temperature.
The application of ETS-4 becomes even more interesting in the case of hospital effluents and effluents from nickel electroplating, metallurgy, ore mining (especially gold) and chlor-alkali industries, since these effluents have pH values close to the optimum pH for the application of ETS-4. The kinetics of the ion-exchange process is well described by the Nernst-Planck model, while the equilibrium data are well fitted by the Langmuir and Freundlich isotherms. The thermodynamic parameters ΔG° and ΔH° indicate that the removal of Hg2+ by ETS-4 is a spontaneous and exothermic process. The high efficiency of ETS-4 is confirmed by the Hg2+ removal capacities reported in the literature for other materials. The use of an ETS-4 column prepared in our laboratory for the continuous removal of Hg2+ confirms that this material has great potential for use in water treatment. ABSTRACT: Mercury is one of the most toxic heavy metals, exhibiting a persistent character in the environment and biota as well as bioamplification and bioaccumulation along the food chain. Natural inputs combined with global anthropogenic sources make mercury pollution a planetary-scale problem, and strict environmental policies on metal discharges have been enforced. The development of efficient new materials and clean-up technologies for removing mercury from effluents is, thus, timely. In this context, several microporous materials from two families, titanosilicates and zirconosilicates, were investigated in order to assess their Hg2+ sorption capacity and removal efficiency under different operating conditions. In general, almost all the microporous materials studied exhibited high removal efficiencies, confirming that they are good ion exchangers and have the potential to be used as Hg2+ decontaminating agents.
Titanosilicate ETS-4 was the material most studied here, owing to its high removal efficiency (>98%) and the low mass needed to attain it. Moreover, according to the Portuguese legislation (DL 236/98), it is possible to attain drinking-water quality (i.e. [Hg2+] < 1.0 μg⋅dm-3) by treating a solution with a Hg2+ concentration equal to the maximum value admissible for effluent discharges into water bodies (50 μg⋅dm-3), using only 4 mg⋅dm-3 of ETS-4. Even in the presence of the major freshwater cations, the removal efficiency of ETS-4 remains high. As for other adsorbents, the sorption capacity of ETS-4 for Hg2+ ions is strongly dependent on the operating conditions, such as contact time, mass, initial Hg2+ concentration and solution pH and, to a lesser extent, temperature. The optimum operating conditions found for ETS-4 are very attractive from an industrial point of view, because treating wastewater and/or industrial effluents with ETS-4 requires neither large amounts of adsorbent nor an energy supply for temperature adjustment, making the removal process economically competitive. These conditions become even more interesting in the case of hospital, nickel-electroplating, copper-smelter, gold-ore-tailing and chlor-alkali effluents, since no significant pH adjustments to the effluent are necessary. The ion-exchange kinetics of Hg2+ uptake is successfully described by a Nernst-Planck based model, while the ion-exchange equilibrium is well fitted by both the Langmuir and Freundlich isotherms. Moreover, the feasibility of the removal process was confirmed by the thermodynamic parameters (ΔG° and ΔH°), which indicate that Hg2+ sorption by ETS-4 is spontaneous and exothermic. The high efficiency of ETS-4 for Hg2+ ions is corroborated by the sorption capacities reported in the literature for other adsorbents.
The use of an ETS-4 fixed-bed ion exchange column, manufactured in our laboratory, in the continuous removal of Hg2+ ions from solutions confirms that this titanosilicate has potential to be used in industrial water treatment.
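The Langmuir isotherm mentioned above has a simple closed form, q_e = q_m K_L C_e / (1 + K_L C_e), where q_e is the equilibrium uptake and C_e the equilibrium concentration. A minimal sketch follows; the parameter values q_m and K_L are purely illustrative, not fitted values from this study:

```python
def langmuir(ce, qm, kl):
    """Langmuir isotherm: equilibrium uptake (mg/g) at equilibrium concentration ce (mg/dm3)."""
    return qm * kl * ce / (1.0 + kl * ce)

# Hypothetical parameters for illustration only.
qm, kl = 250.0, 8.0  # qm: monolayer capacity (mg/g); kl: Langmuir constant (dm3/mg)

# Uptake rises with concentration and saturates toward the monolayer capacity qm.
uptakes = [langmuir(ce, qm, kl) for ce in (0.01, 0.1, 1.0, 10.0)]
```

The saturation behaviour (q_e → q_m as C_e grows) is what distinguishes the Langmuir form from the power-law Freundlich isotherm also mentioned in the abstract.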

Relevance:

30.00%

Publisher:

Abstract:

Article

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the potential of Dichrostachys cinerea fruits as a protein supplement in semi-arid areas of Zimbabwe. The tanniniferous fruits were treated with aqueous solutions of polyethylene glycol (PEG) or sodium hydroxide (NaOH). Both treatments increased the soluble fraction, rate of degradation and effective degradability (ED) of nitrogen (N) in sacco. The PEG effects were higher than the NaOH effects (e.g. a 25% vs. 6% increase in effective N degradabilities, respectively). Five treatments were evaluated in a N-balance trial using Matebele goats: ground, PEG- or NaOH-treated D. cinerea fruits, a commercial protein supplement (CPS) and no supplement. Animals offered ground fruits or CPS retained most N (3.7 or 4.1 g N/day, respectively), while those offered NaOH- or PEG-treated fruits retained significantly less N (2.7 or 1.0 g/day, respectively). Unsupplemented animals were in negative N balance (-2.4 g/day). PEG treatment deactivated the tannins more than the NaOH treatment. PEG treatment resulted in excessive protein degradation in the rumen leading to high urine N loss. It is concluded that the D. cinerea fruits were beneficial for goat N-nutrition and that the tannins did not require inactivation. D. cinerea fruits can, therefore, replace the expensive commercial protein supplement. It is also suggested that the collection and grinding of fruits could be used as a management tool to control bush encroachment. (C) 2004 Elsevier B.V. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

This study explores the implications for the sales function of an organization moving toward service-dominant logic (S-D logic). Driven by customers' needs, a service orientation by its nature requires personal interaction, and sales personnel are in an ideal position to develop offerings with the customer. However, the development of S-D logic may require sales staff to develop additional skills. Employing a single case study, the study found that sales personnel are quick to appreciate the advantages of S-D logic for customer satisfaction, and six specific skills were highlighted and explored. Further, three propositions were identified: in an organization adopting S-D logic, the sales process needs to elicit needs at both embedded-value and value-in-use levels; in addition, it needs to coproduce not just goods and service attributes but also attributes of the customer's usage processes.

Relevance:

30.00%

Publisher:

Abstract:

When modeling real-world decision-theoretic planning problems in the Markov Decision Process (MDP) framework, it is often impossible to obtain a completely accurate estimate of transition probabilities. For example, natural uncertainty arises in the transition specification due to elicitation of MDP transition models from an expert or estimation from data, or from non-stationary transition distributions arising from insufficient state knowledge. In the interest of obtaining the most robust policy under transition uncertainty, the Markov Decision Process with Imprecise Transition Probabilities (MDP-IP) has been introduced to model such scenarios. Unfortunately, while various solution algorithms exist for MDP-IPs, they often require external calls to optimization routines and can thus be extremely time-consuming in practice. To address this deficiency, we introduce the factored MDP-IP and propose efficient dynamic programming methods to exploit its structure. Noting that the key computational bottleneck in the solution of factored MDP-IPs is the need to repeatedly solve nonlinear constrained optimization problems, we show how to target approximation techniques to drastically reduce the computational overhead of the nonlinear solver while producing bounded, approximately optimal solutions. Our results show up to two orders of magnitude speedup over traditional "flat" dynamic programming approaches and up to an order of magnitude speedup over the extension of factored MDP approximate value iteration techniques to MDP-IPs, while producing the lowest error of any approximation algorithm evaluated. (C) 2011 Elsevier B.V. All rights reserved.
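The robustness objective described above can be illustrated with the worst-case backup at the heart of interval-based MDP-IP value iteration: nature adversarially picks, within the given bounds, the transition distribution that minimizes expected value. This is a minimal sketch under made-up interval bounds, not the factored algorithm of the paper:

```python
def worst_case_expectation(values, lo, hi):
    """Minimize sum(p[i] * values[i]) over distributions p with
    lo[i] <= p[i] <= hi[i] and sum(p) == 1.

    Greedy solution: start every successor at its lower bound, then hand the
    remaining probability mass to the lowest-valued successors first."""
    p = list(lo)
    remaining = 1.0 - sum(lo)
    for i in sorted(range(len(values)), key=lambda i: values[i]):
        extra = min(hi[i] - lo[i], remaining)
        p[i] += extra
        remaining -= extra
    return sum(pi * v for pi, v in zip(p, values))

# Two successor states worth 0 and 10; each reachable with probability in [0.2, 0.8].
# The adversary chooses p = (0.8, 0.2), giving expected value 2.0.
worst = worst_case_expectation([0.0, 10.0], [0.2, 0.2], [0.8, 0.8])
```

In a full robust value-iteration loop this minimization replaces the ordinary expectation in the Bellman backup; the abstract's point is that in general (non-interval) MDP-IPs this inner problem is a nonlinear constrained optimization, which is why it dominates solution time.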

Relevance:

30.00%

Publisher:

Abstract:

The recently synthesized ionic liquid (IL) 2-butylthiolonium bis(trifluoromethanesulfonyl)amide, [mimSBu][NTf2], has been used for the extraction of copper(II) from aqueous solution. The pH of the aqueous phase decreases upon addition of [mimSBu]+, which is attributed to partial release of the hydrogen attached to the N(3) nitrogen atom of the imidazolium ring. The presence of sparingly soluble water in [mimSBu][NTf2] is also required in solvent extraction studies to promote the incorporation of Cu(II) into the [mimSBu][NTf2] ionic-liquid phase. The labile copper(II) system formed by interaction with both the water and the IL cation component has been characterized by cyclic voltammetry as well as UV−vis, Raman, and 1H, 13C, and 15N NMR spectroscopies. The extraction process does not require the addition of a complexing agent or pH control of the aqueous phase. [mimSBu][NTf2] can be recovered from the labile copper−water−IL interacting system by washing with a strong acid. High selectivity of copper(II) extraction is achieved relative to other divalent transition-metal cations, cobalt(II), iron(II), and nickel(II). The course of microextraction of Cu2+ from aqueous media into the [mimSBu][NTf2] IL phase was monitored in situ by cyclic voltammetry using a well-defined process in which the specific interaction with copper is believed to switch from the ionic-liquid cation component, [mimSBu], to the [NTf2] anion during electrochemical reduction from Cu(II) to Cu(I). The microextraction−voltammetry technique provides a fast and convenient method to determine whether an IL is able to extract electroactive metal ions from an aqueous solution.

Relevance:

30.00%

Publisher:

Abstract:

Cloud-based service computing has started to change the way research in science, particularly in biology, medicine, and engineering, is carried out. Researchers in mammalian genomics have taken advantage of cloud computing technology to process large amounts of data cost-effectively and speed up discovery. Mammalian genomics is limited by the cost and complexity of analysis, which requires large amounts of computational resources to analyse huge amounts of data, and biology specialists to interpret the results. On the other hand, applying this technology requires computing knowledge, in particular programming and operations-management skills, to develop high-performance computing (HPC) applications and deploy them on HPC clouds. We carried out a survey of cloud-based service computing solutions, as the most recent and promising instantiations of distributed computing systems, in the context of their use in mammalian genomic analysis research. We describe our most recent research and development effort, which focuses on building Software as a Service (SaaS) clouds to simplify the use of HPC clouds for carrying out mammalian genomic analysis.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we developed solutions to common issues with widefield microscopes, facing the problem of the intensity inhomogeneity of an image and dealing with two strong limitations: the impossibility of acquiring either highly detailed images representative of whole samples or deep 3D objects. First, we addressed the non-uniform distribution of the light signal inside a single image, known as vignetting. In particular, for both light and fluorescence microscopy, we proposed non-parametric multi-image methods in which the vignetting function is estimated directly from the sample without requiring any prior information. After obtaining flat-field-corrected images, we studied how to overcome the limited field of view of the camera, so as to acquire large areas at high magnification. To this purpose, we developed mosaicing techniques capable of working online: starting from a set of overlapping images acquired manually, we validated a fast registration approach to stitch the images together accurately. Finally, we worked to virtually extend the field of view of the camera in the third dimension, with the purpose of reconstructing a single, completely in-focus image of objects that have significant depth or lie in different focal planes. After studying the existing approaches for extending the depth of focus of the microscope, we proposed a general method that does not require any prior information. To compare the outcome of existing methods, different standard metrics are commonly used in the literature; however, no metric is available to compare different methods in real cases. First, we validated a metric able to rank the methods as the Universal Quality Index does, but without needing any reference ground truth. Second, we showed that the approach we developed performs better in both synthetic and real cases.
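Once the vignetting function has been estimated, the flat-field correction discussed above is commonly applied by dividing it out and rescaling so that the mean intensity is preserved. A minimal numpy sketch on synthetic data; the estimation step itself, which is the subject of the thesis, is omitted and the radial vignetting model below is invented for illustration:

```python
import numpy as np

def flat_field_correct(raw, flat):
    """Divide out the vignetting field and rescale to preserve mean intensity."""
    flat = flat.astype(float)
    return raw / flat * flat.mean()

# Synthetic example: a uniform scene dimmed by a radial vignetting field.
h = w = 64
y, x = np.mgrid[0:h, 0:w]
r2 = ((y - h / 2) ** 2 + (x - w / 2) ** 2) / (h / 2) ** 2
flat = 1.0 - 0.4 * r2           # brighter in the centre, darker toward the corners
scene = np.full((h, w), 100.0)  # ideal, uniform sample
raw = scene * flat              # what the camera records
corrected = flat_field_correct(raw, flat)
```

After correction the synthetic image is flat again (near-zero standard deviation), whereas the raw image carries the vignetting gradient; with real data a dark-frame subtraction is often added before the division.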

Relevance:

30.00%

Publisher:

Abstract:

Continent catheterizable ileal pouches require regular irrigations to reduce the risk of bacteriuria and urinary tract infections (UTIs).

Relevance:

30.00%

Publisher:

Abstract:

The development of new personal mobile and wireless devices for healthcare has become essential owing to our aging population and the constant rise in chronic diseases, which consequently require complex treatment and close monitoring. Personal telehealth devices allow patients to receive their appropriate treatment, follow up with their doctors, and report any emergency without a caregiver present, thus increasing their quality of life in a cost-effective fashion. This paper includes a brief overview of personal telehealth systems and a survey of 100 consecutive ED patients aged >65 years, and introduces "Limmex", a new GSM-based technology packaged in a wristwatch. At the push of a button, Limmex can initiate multiple emergency calls and establish mobile communication between the patient and a preselected person, institution, or search and rescue service. To the best of our knowledge, Limmex is the first of its kind worldwide.

Relevance:

30.00%

Publisher:

Abstract:

The accepted chemical reactions in the dissolution of gold by cyanide solutions require the presence of gold, cyanide, water, and oxygen. The importance of dissolved oxygen as a factor in cyanide solutions is recognized by those familiar with cyanidation. Manufacturers of cyanidation equipment recognize the necessity of oxygen, as shown by the appliances they have developed that attach to the agitators in order to saturate the cyanide solutions with air.
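The overall dissolution reaction is commonly written as Elsner's equation, which makes the roles of cyanide, water and dissolved oxygen explicit:

```latex
4\,\mathrm{Au} + 8\,\mathrm{NaCN} + \mathrm{O_2} + 2\,\mathrm{H_2O}
  \rightarrow 4\,\mathrm{Na[Au(CN)_2]} + 4\,\mathrm{NaOH}
```

Because oxygen appears as a reactant, gold dissolution stalls in oxygen-depleted pulp, which is why the aeration appliances mentioned above matter in practice.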

Relevance:

30.00%

Publisher:

Abstract:

This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been proposed within academic research for 17 years. Related research has grown remarkably in recent decades, while the design of solid and effective protection against such attacks remains an open research topic, in which more reliable initiatives are needed for the protection of personal, corporate and national data. The first documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were used in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. Owing to the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Because of the limited computing power available before the computer era, simple encryption was a more than sufficient method to conceal information. However, some algorithmic vulnerabilities could be exploited to recover the encoding rule without much effort. This motivated new research in the area of cryptography, in order to protect information systems with sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance based on greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies.
Nowadays the information world is deeply involved with the field of cryptography, which protects every domain through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making implementation possible on several platforms (microprocessors, ASICs, FPGAs, etc.). Industrial security needs and requirements are the main driving metrics in electronic design, with the goal of producing powerful products without sacrificing customer security. However, a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis. Criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor) built on a silicon substrate during fabrication. The logic of the circuit is realized through the countless switchings of these cells. This mechanism inevitably causes special physical emanations that can be measured and correlated with the internal behaviour of the circuit. SCA can be used to reveal confidential data (for example, cryptographic keys), analyse the logic architecture and timing, and even inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards.
By correlating the estimated leakage with the actually measured leakage, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as power-consumption and electromagnetic (EM) radiation analyses. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to resist SCA; it is therefore necessary, during circuit implementation, to integrate measures that camouflage the leakage through these "side channels". Countermeasures against SCA evolve alongside the development of new attack techniques and the continuous improvement of electronic devices. The physical characteristics require countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures confuse the attack source by integrating noise or misaligning the internal activity. In comparison, intrinsic countermeasures are integrated into the algorithm itself, modifying the implementation to minimize the measurable leakage, or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level, altering sensitive intermediate data with a mask in a reversible way. Unlike linear masking, the nonlinear operations that are widespread in modern cryptography are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail encoding, devised specifically to flatten or remove data-dependent leakage in power or EM.
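The correlation comparison between estimated and measured leakage described above is the core of correlation power analysis (CPA). The following is a purely illustrative sketch on simulated single-point traces, using a toy Hamming-weight leakage model; the key value, trace count, noise level and the absence of an S-box are all simplifying assumptions, not the thesis setup:

```python
import random

HW = [bin(x).count("1") for x in range(256)]  # Hamming-weight lookup table

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

random.seed(0)
true_key = 0x3C
plaintexts = [random.randrange(256) for _ in range(500)]
# Simulated traces: leakage = HW(pt XOR key) plus Gaussian measurement noise.
traces = [HW[pt ^ true_key] + random.gauss(0.0, 0.5) for pt in plaintexts]

def cpa_recover(plaintexts, traces):
    """Return the key guess whose predicted leakage best correlates with the traces."""
    scores = [pearson([HW[pt ^ g] for pt in plaintexts], traces) for g in range(256)]
    return max(range(256), key=lambda g: scores[g])

recovered = cpa_recover(plaintexts, traces)
```

The sketch shows why the text stresses "numerous samples": the correct guess is separated from wrong ones only statistically, so the correlation estimate must be averaged over many traces before the true key stands out.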
In this doctoral thesis, in addition to describing attack methodologies, great effort is devoted to the structure of the proposed logic prototype, in order to carry out security research on architectural countermeasures at the logic level. One characteristic of SCA lies in the format of the leakage sources. A typical side-channel attack is power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore remove or mitigate the leakage of these micro-units, such as the basic logic gates, the I/O ports and the routing. The EDA tools supplied by vendors manipulate the logic from a higher level, rather than from the gate level, where side-channel leakage manifests itself. Classical implementations thus barely meet these needs and inevitably cripple the prototype. For this reason, a customized and flexible design scheme has to be considered. This thesis presents the design and implementation of an innovative logic to counter SCA, which addresses three fundamental aspects: I. It relies on a hiding strategy over a gate-level dual-rail circuit, in order to dynamically balance the leakage in the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource overhead of the implementation; III. It is supported by a set of customized assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, easing its practical application on the Xilinx FPGA family.
In this sense, the methodology and tools are flexible enough to be extended to a wide range of applications that require much more rigid and sophisticated gate- or routing-level constraints. This thesis makes a great effort to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are effectively backed by experimental attacks in order to validate the security advantages of the system. This research work intends to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools developed to work alongside the generic FPGA design flow in order to create dual-rail logic in an innovative way. A new approach to encryption security is proposed to obtain customization, automation and flexibility in fine-grained low-level circuit prototyping. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL logic: the use of netlist conversion to reserve free LUTs to execute the precharge and Ex signals in a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps to increase resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic.
(a) To detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logics where strict control of the interconnections is required in Xilinx-based applications. A customized CPA test platform for power and EM analysis, including the construction of the platform and the measurement and analysis methods for the attacks. Timing analysis to quantify the security levels. Security partitioning in the partial conversion of a complex cipher to reduce the cost of protection. A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This doctoral thesis is organized as follows. Chapter 1 covers the fundamentals of side-channel attacks, ranging from the basic concepts of analysis-model theory to platform implementation and attack execution. Chapter 2 covers the SCA resistance strategies against differential power and EM attacks. In addition, this chapter proposes a compact and secure dual-rail logic as a contribution of great relevance, and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges related to implementing generic dual-rail logic. A customized design flow is described to solve the implementation problems, together with a proposed automatic application-development tool to mitigate the design barriers and ease the processes. Chapter 4 describes in detail the elaboration and implementation of the proposed tools.
The verification and security validation of the proposed logic, as well as a sophisticated experiment verifying the security of the routing, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the outlook on future lines of work are included in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below. Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, mainly covering: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected cipher (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of side-channel methods, which reveal the dissipation leakage correlated with the internal behaviour, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms covered within this thesis. The content of the thesis broadens and deepens from Chapter 2 onward, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the compensated elements at gate level. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulation above the netlist at LUT level are clarified.
Finally, a brief discussion of the overall process is given at the end of the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs. The security level of the SCA-resistant solutions found in the state of the art has degenerated owing to the implementation barriers in conventional EDA tools. In the studied FPGA architecture scenario, the problems of dual-rail formats, parasitic impacts, technological bias and implementation feasibility are discussed. From these elaborations, two problems are posed: how to implement the proposed logic without penalizing the security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structural features such as interleaved dual rail or cloned routing networks to application methods such as the EDA customization and automation tools. Furthermore, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature so as to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on a third-party API, the customized toolkit is able to manipulate the circuit elements of the post-P&R ncd (an unreadable binary version of the xdl) converted into the Xilinx XDL format. The mechanism and rationale of the proposed instruments are carefully described, covering routing detection and the repair approaches.
The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical basis for supporting implementations on Xilinx devices, and the flexibility of the toolkit to be used in other applications. Chapter 5 focuses on the case studies applied to validate the security grades of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed: (a) the impact of the proposed toolkit on the placement process is discussed, and different implementation schemes, considering the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) security validations are performed using correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the routing impact on the security metrics; (d) preliminary results of using the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete cipher design is introduced. Chapter 6 includes the general summary of the work presented in this thesis. Finally, a brief perspective on future work is outlined, which may extend the potential use of the contributions of this thesis to a scope beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis concentrates mainly on countermeasure techniques against the Side-Channel Attack (SCA), which has been a subject of academic research for some 17 years.
The related research has grown remarkably over the past decades, yet the design of solid and efficient protection remains an open research topic, where more reliable initiatives are required for personal information privacy and for enterprise and national data protection. The earliest documented use of secret codes can be traced back to around 1700 B.C., to hieroglyphic inscriptions in ancient Egypt. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were incorporated into electronic signaling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over insecure cable or wireless channels. Given the limited computation power available before the computer era, simple encryption tricks were sufficient in practice to conceal information. However, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptographic schemes, whose resistance relies on highly strengthened computing capabilities; in turn, advanced cryptanalysis has greatly driven computing technologies. Nowadays the information world has become a crypto world, with pervasive cryptographic solutions protecting every field. These approaches are strong because of the optimized merging of modern mathematical theory with effective hardware practice, making it possible to implement crypto theories on various platforms (microprocessors, ASICs, FPGAs, etc.).
Security needs from industry are in fact a major driving metric in electronic design, promoting the construction of systems with high performance without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since then named the side-channel attack after its analysis source. Serious doubts about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of elementary logic elements (MOS, Metal Oxide Semiconductor, transistors) built on a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably produces characteristic physical emanations that can be measured and correlated with internal circuit behaviors. SCAs can be used to reveal confidential data (e.g. a crypto key), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs and smart cards. Using various methods of comparing the predicted leakage quantity with the measured leakage, secrets can be reconstructed at a much lower cost in time and computation. More precisely, SCA encompasses a wide range of attack types, typically analyses of power consumption or of electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance; because of the severity of the threat, much attention must be paid to the implementation so as to assemble countermeasures that camouflage the leakage through "side channels". Countermeasures against SCA evolve along with the development of attack techniques.
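The statistical comparison between predicted and measured leakage mentioned above is the core of correlation power analysis (CPA). The following is a minimal sketch, not the thesis's actual testbed: it uses a simulated Hamming-weight leakage and a toy intermediate value (plaintext XOR key) instead of the AES S-box output targeted in real attacks.

```python
import numpy as np

def hamming_weight(x):
    """Number of set bits: the classic power-leakage model."""
    return bin(int(x)).count("1")

def cpa_recover_key_byte(plaintexts, traces):
    """Rank key guesses by the Pearson correlation between the predicted
    leakage (Hamming weight of plaintext XOR key guess) and the measured
    traces; the guess with the highest peak correlation wins."""
    best_guess, best_corr = None, -1.0
    for guess in range(256):
        predicted = np.array([hamming_weight(p ^ guess) for p in plaintexts])
        # correlate the prediction against every sample point, keep the peak
        corr = max(np.corrcoef(predicted, traces[:, t])[0, 1]
                   for t in range(traces.shape[1]))
        if corr > best_corr:
            best_guess, best_corr = guess, corr
    return best_guess

# Simulated experiment: traces leak HW(p ^ secret) plus Gaussian noise.
rng = np.random.default_rng(0)
secret = 0x3C
plaintexts = rng.integers(0, 256, size=500)
leak = np.array([hamming_weight(p ^ secret) for p in plaintexts], float)
traces = leak[:, None] + 0.5 * rng.normal(size=(500, 4))  # 4 sample points

assert cpa_recover_key_byte(plaintexts, traces) == secret
```

With 500 noisy traces the correct key byte stands out clearly; this is why SCA, as the text notes, "requires a number of samples".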
These physical characteristics require countermeasures at the physical layer, which can broadly be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures aim to confuse the attacker by adding noise and misalignment to the internal activities. Intrinsic countermeasures, by comparison, are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and masking are the two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are widespread in modern cryptography are difficult to mask. The hiding method, proven to be an effective counter-solution, chiefly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from the context describing attack methodologies, effort has also been dedicated to a logic prototype, mounting extensive security investigations of logic-level countermeasures. One characteristic of SCA lies in the nature of the leak sources. The typical side-channel attack is power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leak sources. Hence a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic from a higher, behavioral level, rather than the lower gate level where side-channel leakage is generated. Classical implementation flows therefore barely satisfy these needs and inevitably stunt the prototype. A customized and flexible design scheme is thus needed.
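The masking idea described above, and the reason non-linear operations resist it, can be illustrated with a toy first-order Boolean masking sketch. Everything here is illustrative: the stand-in "S-box" table is not the AES S-box, and real masked implementations use recomputed masked tables rather than this naive form.

```python
import secrets

def masked_xor_round(x, k):
    """First-order Boolean masking of a linear (XOR) operation.
    The device never handles the raw value x: only the masked share
    x ^ m and the mask m, each statistically independent of x."""
    m = secrets.randbits(8)          # fresh random mask per execution
    x_masked = x ^ m                 # share 1
    y_masked = x_masked ^ k          # the linear op acts on the masked share
    return y_masked ^ m              # unmasking recovers x ^ k exactly

assert masked_xor_round(0xA7, 0x3C) == 0xA7 ^ 0x3C

# For a non-linear table the mask does NOT pass through unchanged:
SBOX = [(i * 7 + 3) % 256 for i in range(256)]   # stand-in non-linear table
x, m = 0xA7, 0x55
assert SBOX[x ^ m] ^ m != SBOX[x]   # naive unmasking fails -> masked tables needed
```

The last assertion is exactly the difficulty the text names: masks commute with XOR-linear steps but not with the non-linear substitutions at the heart of modern ciphers.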
This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on a hiding strategy, using a gate-level dual-rail style to dynamically balance the side-channel leakage from the lower circuit layers; II. The logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of assistant custom tools, incorporated into the generic FPGA design flow, that manipulate the circuit automatically. The automated design toolkit supports the proposed dual-rail logic and facilitates practical implementation on Xilinx FPGA families, while the methodologies and tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on a selected and widely used crypto algorithm, allowing a thorough and fair evaluation with respect to prior solutions, and all the proposals are verified by security experiments. The presented research work attempts to solve these implementation troubles. The essence formalized along this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow to create the innovative dual-rail logic. A method is thereby constructed, in the crypto-security area, to obtain customization, automation and flexibility in low-level circuit prototyping, with fine granularity over intractable routing. The main contributions of the presented work are summarized next:
Precharge Absorbed-DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to carry the Precharge and Ex signals in a dual-rail logic style.
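The hiding principle behind the dual-rail precharge style named in aspect I can be sketched with a behavioral toy model (this is not PA-DPL itself, only the generic DPL idea): every signal travels on a complementary rail pair, and after each precharge phase exactly one rail rises, so the switching activity is the same for every input value.

```python
def dual_rail_and(a, b):
    """Behavioral model of a dual-rail AND gate. Each signal is a pair
    (true_rail, false_rail); the false rail carries the complement."""
    q = a[0] & b[0]          # true rail
    return (q, 1 - q)        # false rail

def toggles_per_evaluation(a_bit, b_bit):
    """Count output-rail transitions from the precharge state (0, 0)
    to the evaluation state for one input combination."""
    precharge = (0, 0)                        # both rails driven low
    a, b = (a_bit, 1 - a_bit), (b_bit, 1 - b_bit)
    out = dual_rail_and(a, b)
    return sum(p != e for p, e in zip(precharge, out))

# The same number of output transitions for every input combination:
counts = {(x, y): toggles_per_evaluation(x, y) for x in (0, 1) for y in (0, 1)}
assert set(counts.values()) == {1}   # constant activity -> flat power signature
```

Because exactly one rail toggles per evaluation regardless of the data, the data-dependent component of the power signature is (ideally) removed; the thesis's concern is preserving this property through real FPGA placement and routing.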
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations.
Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflict nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in the Xilinx scenario.
A customized correlation-analysis testbed for EM and power attacks, including the platform construction, the measurement method and the attack analysis.
A timing-analysis-based method for quantifying the security grades.
A methodology of security partitioning of complex crypto systems to reduce the protection cost.
A proof-of-concept self-adaptive heating system that mitigates the electrical impact of process variations in a dynamic dual-rail compensation manner.
The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering the theoretical basics, the analysis models, and further the platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks; in this chapter a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles; a customized design flow that solves the implementation problems is described along with a self-developed automatic implementation toolkit, mitigating the design barriers and facilitating the process. Chapter 4 originally elaborates the tool specifics and construction details.
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and the perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides the introduction of the hardware implementation testbed and the side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of side-channel methods: how the dissipation leakage correlates with the internal behaviors, and the method for recovering this relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used in the thesis work. The content of this thesis is expanded and deepened from Chapter 2 onward, which is divided into several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown, and the technical definitions concerning manipulation above the LUT-level netlist are clarified. A brief discussion of the batch process closes the chapter. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools.
In the studied FPGA scenario, the problems of dual-rail format, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to alternately adjust the local temperature in order to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from the ncd file (an unreadable binary version of the xdl) to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials that support the implementations on Xilinx devices, and the flexibility to extend the toolkit to other applications. Chapter 5 focuses on the case studies used to validate the security grades of the proposed logic style with the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impact of using the proposed toolkit is discussed.
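The "strictly identical routing networks" check that the toolkit performs can be sketched abstractly. The sketch below is hypothetical: it models a routed net as a list of (tile, wire) hops and assumes the false rail should be an exact copy of the true rail shifted by a fixed row offset; the tile-name convention and the offset rule are illustrative, not the thesis's actual XDL representation.

```python
def shift_tile(tile, row_offset):
    """Shift an 'INT_XxYy'-style tile name by row_offset rows.
    The naming scheme is an assumption for this sketch."""
    x_part, y_val = tile.rsplit("Y", 1)
    return f"{x_part}Y{int(y_val) + row_offset}"

def rails_identical(true_net, false_net, row_offset):
    """True iff the false-rail net is an exact, offset copy of the
    true-rail net: same wires, same order, row-shifted tiles."""
    if len(true_net) != len(false_net):
        return False
    return all(
        (shift_tile(t_tile, row_offset), t_wire) == (f_tile, f_wire)
        for (t_tile, t_wire), (f_tile, f_wire) in zip(true_net, false_net)
    )

# Invented example nets: the second 'false' net uses a different wire
# on its last hop, i.e. an asymmetric net that would need repair.
true_net  = [("INT_X10Y20", "NE2BEG1"), ("INT_X11Y21", "IMUX_B4")]
ok_false  = [("INT_X10Y22", "NE2BEG1"), ("INT_X11Y23", "IMUX_B4")]
bad_false = [("INT_X10Y22", "NE2BEG1"), ("INT_X11Y23", "IMUX_B7")]

assert rails_identical(true_net, ok_false, row_offset=2)
assert not rails_identical(true_net, bad_false, row_offset=2)
```

A detected mismatch like `bad_false` corresponds to the asymmetric nets the toolkit must repair, since any wire-level difference between the rails reintroduces a data-dependent capacitance imbalance.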
Different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) the preliminary results of using the self-adaptive heating system against process variation are given; (e) a practical application of the proposed toolkit to a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might extend the potential use of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.


Resumo:

Purpose: Soft contact lenses for continuous wear require cleaning regimes that use hydrogen peroxide systems or multipurpose cleaning solutions (MPS). The compositions of MPS are becoming increasingly complex and often include disinfectants, cleaning agents, preservatives, wetting agents, demulcents, and chelating and buffering agents. Recent research on solution–lens interactions has focused on specific ocular parameters such as corneal staining; however, the effect of a solution on the lens itself, particularly on silicone hydrogel lenses, has received less attention. The purpose of this work was to establish and understand the effects that care solutions have on selected bulk and surface material properties. Methods: Selected bulk and surface properties of each material (etafilcon A, vifilcon A, balafilcon A, senofilcon A, lotrafilcon A, lotrafilcon B and galyfilcon A) were measured after a 24 h soak in a variety of care solutions. Additionally, the lenses were soaked for 24 h in hyperosmolar (680 mOsm L-1) and hyposmolar (170 mOsm L-1) PBS. As a bulk property parameter, the total diameter (TD) was measured using an Optimec contact lens analyser. As a surface-related property, the coefficient of friction (CoF) of soaked lenses was measured on a nano-tribometer under a load of 30 mN, over a distance of 20 mm and at a speed of 30 mm/min. Results: In terms of bulk properties, the change is related to the equilibrium water content (EWC) of the lens: the higher the EWC, the greater the change in TD. Silicone hydrogel lenses have EWCs below 47% and showed little or no TD change; lotrafilcon A exhibited no change irrespective of the cleaning solution. Conventional contact lenses have higher EWCs (58% for etafilcon A and 55% for vifilcon A) and their TD changed to a greater extent; for example, the etafilcon A material in ReNu MPS increased to 14.45 ± 0.07 mm from the cited 14.2 mm. Other lenses increased or decreased in TD depending on the solution used.
Although important, the osmolarity of the solution is not the only factor governing the change in TD: soaking senofilcon A in hyperosmolar PBS (680 mOsm L-1) for 24 h increased the TD of the lens (+0.25 ± 0.07 mm), yet when the same lens type was soaked for 24 h in an MPS of lower osmolarity a similar effect was observed. Biotribology measurements demonstrated that some solution–lens combinations can reduce the CoF by 55% compared with the lens in its native packing solution, while an increase in CoF was observed for other combinations. Conclusions: There is a dramatic difference in the bulk and surface performance of specific lens materials with particular care solutions. Individual components of the care solutions affect the bulk and surface properties of contact lenses, and the effects are not as great with silicone hydrogels as with conventional hydrogels.
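The CoF comparison above is a simple ratio-and-percent-change calculation. The sketch below makes that arithmetic explicit; only the roughly 55% reduction comes from the abstract, while the raw friction readings and the specific solution are invented for illustration.

```python
def cof(friction_force_mN, normal_load_mN):
    """Coefficient of friction as measured on a nano-tribometer:
    tangential friction force divided by the applied normal load."""
    return friction_force_mN / normal_load_mN

def percent_change(reference, treated):
    """Signed percent change of a treated value against a reference."""
    return 100.0 * (treated - reference) / reference

# Hypothetical friction readings at the abstract's 30 mN normal load.
cof_native = cof(1.20, 30.0)   # lens in its native packing solution
cof_soaked = cof(0.54, 30.0)   # same lens after a 24 h soak in an MPS

assert round(percent_change(cof_native, cof_soaked)) == -55
```

Framing the result this way makes clear that the reported 55% figure is relative to each lens's own packing-solution baseline, not an absolute CoF value.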


Resumo:

Using the securitization framework to highlight the arguments that facilitated the "War on Drugs", this paper identifies a separate war against drug traffickers. Facilitated ideologically by the rhetoric of the "War on Drugs" and by the fear of communist expansion and democratic contraction, the "War on Drug Traffickers" was implemented with its own strategy, separate from that of the "War on Drugs". This is an important distinction because the play on words changes the perception of the issue from one of drug addiction to one of weak institutions and of insurgent/terrorist threats to those institutions. Furthermore, one cannot propose a strategy to win, lose, or retreat in a war that one has been unable to identify properly. And while the all-encompassing "War on Drugs" has motivated tremendous discourse on its failure and on possible remedies, the generalizations that result from failing to distinguish between the policies behind drug addiction and the militarized policies behind drug trafficking have discounted the violence perpetrated by the state, the state's rationale for perpetrating that violence, and the state's dependence on foreign actors to perpetrate it. This makes it impossible not only to propose an effective strategy but also to persuade the states that participate in the "War on Drug Traffickers" to adopt the proposed strategy.