860 results for Global optimization, unconstrained optimization, constrained optimization
Abstract:
To enhance the global search ability of population-based incremental learning (PBIL) methods, it is proposed to incorporate multiple probability vectors into existing PBIL algorithms. The strategy for updating these probability vectors and the negative learning and mutation operators are redefined accordingly. Moreover, to strike the best tradeoff between exploration and exploitation, an adaptive updating strategy for the learning rate is designed. Numerical examples are reported to demonstrate the pros and cons of the newly implemented algorithm.
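As a reference point, a minimal single-vector binary PBIL loop looks as follows; the paper's proposal would replace the fixed `LEARNING_RATE` and the single vector `p` with several probability vectors, redefined negative-learning/mutation operators and an adaptive learning rate. All names, constants and the toy objective here are illustrative assumptions, not the paper's algorithm.

```python
# Minimal binary PBIL sketch: maintain a probability vector p, sample a
# population from it, and pull p towards the best sample of each generation.
import numpy as np

N_BITS, POP_SIZE, LEARNING_RATE, GENERATIONS = 20, 30, 0.1, 200

def fitness(pop):
    return pop.sum(axis=1)         # toy objective: maximize the number of ones

rng = np.random.default_rng(0)
p = np.full(N_BITS, 0.5)           # one sampling probability per bit

for _ in range(GENERATIONS):
    pop = (rng.random((POP_SIZE, N_BITS)) < p).astype(int)
    best = pop[np.argmax(fitness(pop))]
    p = (1 - LEARNING_RATE) * p + LEARNING_RATE * best   # learn from the best
    p = np.clip(p, 0.02, 0.98)     # keep a little exploration alive

print(int((p > 0.5).sum()))        # count of bits biased towards 1
```

With a single vector, all samples are drawn from one distribution; the multi-vector idea in the abstract aims to let several such distributions explore different basins in parallel.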
Abstract:
This work considers nonsmooth optimal control problems and provides two new sufficient conditions of optimality. The first condition involves the Lagrange multipliers while the second does not. We show that under the first new condition all processes satisfying the Pontryagin Maximum Principle (called MP-processes) are optimal. Conversely, we prove that optimal control problems in which every MP-process is optimal necessarily obey our first optimality condition. The second condition is more natural, but it is only applicable to normal problems, and the converse holds only for smooth problems. Nevertheless, it is proved that for the class of normal smooth optimal control problems the two conditions are equivalent. Some examples illustrating the features of these sufficient conditions are presented. © 2012 Springer Science+Business Media New York.
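For orientation, the conditions defining an MP-process can be sketched in the standard smooth formulation of the Pontryagin Maximum Principle (the paper itself works in a more general nonsmooth setting, where the derivatives below are replaced by suitable subdifferentials):

```latex
% Standard (smooth) form of the Pontryagin Maximum Principle for a cost
% \int L\,dt and dynamics \dot{x}=f(x,u,t); nonsmooth versions replace the
% partial derivatives by subdifferentials.
\begin{aligned}
&H(x,u,p,t) = \langle p,\, f(x,u,t)\rangle - \lambda_0\, L(x,u,t),\\
&\dot{x}(t) = \frac{\partial H}{\partial p}, \qquad
 \dot{p}(t) = -\frac{\partial H}{\partial x},\\
&u^{*}(t) \in \arg\max_{u \in U} H\bigl(x^{*}(t), u, p(t), t\bigr),\\
&\bigl(\lambda_0,\, p(t)\bigr) \neq 0, \qquad \lambda_0 \ge 0 .
\end{aligned}
```

A problem is called normal when the abnormal multiplier case $\lambda_0 = 0$ can be excluded, which is why the paper's second condition applies only to normal problems.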
Abstract:
Graduate Program in Cartographic Sciences - FCT
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The CRS seismic stacking method simulates ZO (zero-offset) sections from multi-coverage data, independently of the macro-velocity model. For 2-D media, the stacking traveltime function depends on three parameters, namely: the emergence angle of the normal reflection ray (relative to the surface normal) and the curvatures of the wavefronts associated with the hypothetical waves known as NIP and Normal. CRS stacking consists of summing the amplitudes of the seismic traces in multi-coverage data along the surface defined by the CRS stacking traveltime function that best fits the data. The CRS stacking result is assigned to the points of a predefined grid in the ZO section, thereby simulating a ZO seismic section. This means that for each point of the ZO section one must estimate the triplet of optimal parameters that produces the maximum coherence among the seismic reflection events. This thesis presents formulas for the 2-D CRS method and for the NMO velocity that take the topography of the measurement surface into account. The algorithm is based on a three-step strategy for optimizing the parameters of the CRS formula: 1) search for two parameters, the emergence angle and the curvature of the NIP wave, applying a global optimization; 2) search for one parameter, the curvature of the N wave, applying a global optimization; and 3) search for all three parameters applying a local optimization to refine the parameters estimated in the previous steps. The Simulated Annealing (SA) algorithm is used in the first and second steps, and the Variable Metric (VM) algorithm in the third. For the case of a measurement surface with smooth topographic variations, the curvature of this surface was taken into account in the 2-D CRS stacking algorithm, with application to synthetic data. The result was a simulated ZO section of high quality when compared with the ZO section obtained by forward modeling, with a high signal-to-noise ratio, in addition to the estimate of the triplet of traveltime parameters.
A sensitivity analysis of the new CRS traveltime function with respect to the curvature of the measurement surface was carried out. The results showed that the CRS traveltime function is most sensitive at midpoints far from the central point and at large offsets. The NMO velocity expressions presented were applied to estimate the velocities and depths of the reflectors for a 2-D model with smooth topography. For the inversion of these reflector velocities and depths, a Dix-type inversion algorithm was considered. The NMO velocity for a curved measurement surface allows these reflector velocities and depths to be estimated much better than NMO velocities referred to planar surfaces. An approach to CRS stacking in the 3-D case is also presented; there, the traveltime function depends on eight parameters. Five strategies for searching these parameters are discussed. The combination of two of these strategies (the strategy of the three traveltime approximations and the strategy of arbitrary configurations and curvatures) was successfully applied in the 3-D CRS stacking of synthetic and real data.
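The global-then-local structure of the three-step search can be sketched with off-the-shelf optimizers: a simulated annealing stage followed by a quasi-Newton (variable metric) refinement. The objective below is a smooth stand-in for the negative CRS coherence (semblance), and the parameter names and bounds are illustrative assumptions; the real objective is computed from the multi-coverage seismic traces.

```python
# Global search (simulated annealing) followed by local variable-metric
# (BFGS) refinement, mirroring the thesis's optimization strategy.
import numpy as np
from scipy.optimize import dual_annealing, minimize

def negative_coherence(params):
    # Hypothetical smooth surrogate for -semblance over
    # (emergence angle, K_NIP, K_N).
    angle, k_nip, k_n = params
    return (angle - 0.1)**2 + 2*(k_nip - 0.05)**2 + 0.5*(k_n + 0.02)**2

bounds = [(-0.5, 0.5), (-0.2, 0.2), (-0.2, 0.2)]

# Steps 1-2: global search (here over all three parameters at once for brevity)
coarse = dual_annealing(negative_coherence, bounds, seed=0, maxiter=200)

# Step 3: local variable-metric refinement starting from the global estimate
fine = minimize(negative_coherence, coarse.x, method="BFGS")
print(np.round(fine.x, 4))  # close to (0.1, 0.05, -0.02)
```

Splitting the work this way lets the expensive global stage run on fewer parameters at a time, while the cheap local stage polishes all three together, as in the thesis.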
Abstract:
This work demonstrates the application of a Genetic Algorithm to design a MISO Fuzzy controller, by tuning its parameters, for an experimental liquid-level process in a tank whose dynamics exhibit nonlinear characteristics. The controller was designed and tuned with the support of Matlab and its Simulink and Global Optimization Toolbox packages. The performance of the resulting Fuzzy controller was evaluated through real-time tests on a Liquid Level System.
Abstract:
Recently, research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. In a different approach to achieving similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high possibility of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (originating the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced, and that the success rate improves, if SS is employed first. These results are also in consonance with results from the literature, confirming the importance of an adequate starting population. Moreover, SS shows better efficacy in finding initial populations of superior quality when compared to the other three algorithms that employ oppositional learning. Finally, and most importantly, the SS performance in finding promising regions is independent of the metaheuristic with which SS is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
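The overall shape of the idea, sample first, narrow the search space, then launch the metaheuristic inside the narrowed region, can be sketched as follows. The region-detection rule used here (a bounding box around the best decile of random samples) is an illustrative assumption, not the SS algorithm from the paper, and the test function is a plain sphere.

```python
# Sketch: discover a promising region by sampling, then initialize a
# metaheuristic (SciPy's differential evolution) inside that region only.
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(0)
dim, low, high = 5, -100.0, 100.0
samples = rng.uniform(low, high, size=(500, dim))
scores = np.array([sphere(s) for s in samples])

# Keep the best 10% of samples and bound a promising region around them
elite = samples[scores <= np.quantile(scores, 0.10)]
region = list(zip(elite.min(axis=0), elite.max(axis=0)))

result = differential_evolution(sphere, region, seed=0, tol=1e-8)
print(result.fun)
```

Because DE starts inside a much smaller box than the original [-100, 100]^5 domain, it typically needs far fewer function evaluations to reach the optimum, which is the effect the paper measures.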
Abstract:
One of the most important challenges in chemistry and materials science is the connection between the composition of a compound and its chemical and physical properties. In solids, these are greatly influenced by the crystal structure.

The prediction of hitherto unknown crystal structures under external conditions such as pressure and temperature is therefore one of the most important goals of theoretical chemistry. The stable structure of a compound is the global minimum of the potential energy surface, the high-dimensional representation of the enthalpy of the investigated system with respect to its structural parameters. Because the complexity of the problem grows exponentially with system size, it can only be solved via heuristic strategies.

Improvements to the artificial bee colony method, in which the local exploration of the potential energy surface is carried out by a large number of independent walkers, are developed and implemented. This results in an improved communication scheme between these walkers, directing the search towards the most promising areas of the potential energy surface.

The minima hopping method uses short molecular dynamics simulations at elevated temperatures to drive the structure search from one local minimum of the potential energy surface to the next. A modification is developed and implemented in which the local information around each minimum is extracted and used to optimize the search direction. Our method uses this local information to increase the probability of finding new, lower local minima. This leads to an enhanced performance of the global optimization algorithm.

Hydrogen is a highly relevant system, due to the possibility of finding a metallic phase and even a superconductor with a high critical temperature. An application of a structure prediction method to SiH12 finds stable crystal structures in this material. Additionally, it becomes metallic at relatively low pressures.
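The minima-hopping loop described above can be illustrated in one dimension: escape the current local minimum with a random high-"temperature" kick (standing in for the short MD trajectory), relax into the nearest local minimum, and adapt the kick size depending on whether a lower minimum was found. The potential and the feedback rules below are illustrative assumptions, not the thesis's implementation.

```python
# Toy 1-D minima hopping: kick, relax, and adapt the kick "temperature".
import numpy as np
from scipy.optimize import minimize

def potential(x):
    return float(np.sin(3.0 * x) + 0.1 * x * x)   # many local minima

def relax(x0):
    # Local relaxation into the nearest minimum of the potential
    return float(minimize(lambda v: potential(v[0]), x0=[x0]).x[0])

rng = np.random.default_rng(1)
x = relax(3.0)                                     # start in some local minimum
best_x, best_e, temp = x, potential(x), 1.0

for _ in range(50):
    trial = relax(x + rng.normal(scale=temp))      # "MD" escape + relaxation
    if potential(trial) < potential(x) - 1e-9:
        x, temp = trial, temp * 0.9                # success: hop and cool down
    else:
        temp *= 1.1                                # failure: kick harder
    if potential(x) < best_e:
        best_x, best_e = x, potential(x)

print(round(best_x, 2), round(best_e, 2))
```

The thesis's modification replaces the blind scalar kick with an escape direction chosen using the local information gathered around each visited minimum, raising the probability that the next hop lands in a new, lower basin.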
Abstract:
Let us consider a large set of candidate parameter fields, such as hydraulic conductivity maps, on which we can run an accurate forward flow and transport simulation. We address the issue of rapidly identifying a subset of candidates whose responses best match a reference response curve. In order to keep the number of calls to the accurate flow simulator computationally tractable, a recent distance-based approach relying on fast proxy simulations is revisited and turned into a non-stationary kriging method in which the covariance kernel is obtained by combining a classical kernel with the proxy. Once the accurate simulator has been run for an initial subset of parameter fields and a kriging metamodel has been inferred, the predictive distributions of misfits for the remaining parameter fields can be used as a guide to select candidate parameter fields sequentially. The proposed algorithm, Proxy-based Kriging for Sequential Inversion (ProKSI), relies on a variant of the Expected Improvement, a popular criterion for kriging-based global optimization. A statistical benchmark of ProKSI's performance illustrates the efficiency and robustness of the approach when using different kinds of proxies.
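For reference, the Expected Improvement criterion mentioned above has a standard closed form for a Gaussian predictive distribution (mean `mu`, standard deviation `sigma`) when minimizing with best observed value `f_min`. This is the textbook EI formula, not ProKSI's specific variant.

```python
# Closed-form Expected Improvement for a Gaussian kriging prediction.
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_min):
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_min - mu) / sigma
        ei = (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    # Zero uncertainty: improvement is deterministic
    return np.where(sigma > 0, ei, np.maximum(f_min - mu, 0.0))

# A point predicted well below f_min with little uncertainty scores high;
# a point predicted exactly at f_min scores sigma * phi(0) ~ 0.3989 * sigma.
print(expected_improvement(mu=[0.0, 1.0], sigma=[0.1, 0.5], f_min=1.0))
```

In a sequential-inversion loop like ProKSI's, such a criterion balances exploiting candidates predicted to have low misfit against exploring candidates whose misfit is still uncertain.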
Abstract:
Two new approaches to quantitatively analyze diffuse diffraction intensities from faulted layer stacking are reported. The parameters of a probability-based growth model are determined with two iterative global optimization methods: a genetic algorithm (GA) and particle swarm optimization (PSO). The results are compared with those from a third global optimization method, a differential evolution (DE) algorithm [Storn & Price (1997). J. Global Optim. 11, 341-359]. The algorithm efficiencies in the early and late stages of iteration are compared. The accuracy of the optimized parameters improves with increasing size of the simulated crystal volume. The wall-clock time for computing quite large crystal volumes can be kept within reasonable limits by the parallel calculation of many crystals (clones) generated for each model parameter set on a super- or grid computer. The faulted layer stacking in single crystals of trigonal three-pointed-star-shaped tris(bicyclo[2.1.1]hexeno)benzene molecules serves as an example for the numerical computations. Based on numerical values of seven model parameters (reference parameters), nearly noise-free reference intensities of 14 diffuse streaks were simulated from 1280 clones, each consisting of 96 000 layers (reference crystal). The parameters derived from the reference intensities with GA, PSO and DE were compared with the original reference parameters as a function of the simulated total crystal volume. The statistical distribution of structural motifs in the simulated crystals is in good agreement with that in the reference crystal. The results found with the growth model for layer stacking disorder are applicable to other disorder types and modeling techniques, Monte Carlo in particular.
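The fitting loop common to the GA, PSO and DE variants, simulate intensities from trial parameters and minimize the misfit to the reference intensities, can be sketched with a toy two-parameter forward model; the model below is an illustrative stand-in for the layer-stacking simulation, and only the DE branch is shown.

```python
# Recover model parameters by DE, minimizing the misfit between simulated
# and reference "intensities" along a diffuse streak.
import numpy as np
from scipy.optimize import differential_evolution

q = np.linspace(0.0, 1.0, 50)            # positions along a diffuse streak

def simulate(params):
    a, b = params                        # hypothetical stacking parameters
    return np.exp(-a * q) * (1.0 + b * np.cos(8.0 * q))

reference = simulate((1.5, 0.3))         # "reference crystal" intensities

def misfit(params):
    return float(np.sum((simulate(params) - reference) ** 2))

fit = differential_evolution(misfit, bounds=[(0.0, 5.0), (0.0, 1.0)], seed=0)
print(np.round(fit.x, 3))                # recovers approximately (1.5, 0.3)
```

In the paper the forward model is the stochastic growth simulation over many clones, which is why averaging larger simulated crystal volumes reduces the noise in the misfit and improves the accuracy of the optimized parameters.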
Abstract:
Safety assessment of historic masonry structures is an open problem. The material is heterogeneous and anisotropic, the previous state of stress is hard to know, and the boundary conditions are uncertain. In the early 1950s it was proven that limit analysis was applicable to this kind of structure, and it has been considered a suitable tool since then. In cases where no sliding occurs, the application of the standard limit analysis theorems constitutes an excellent tool due to its simplicity and robustness. It is not necessary to know the actual stress state: it is enough to find any equilibrium solution satisfying the limit constraints of the material, in the certainty that its load will be equal to or less than the actual load at the onset of collapse. Furthermore, this load at the onset of collapse is unique (uniqueness theorem), and it can be obtained as the optimum of either of a pair of dual convex mathematical programs. However, when the mechanisms of the onset of collapse may involve sliding, any solution must satisfy both the static and the kinematic constraints, as well as a special kind of disjunctive constraints linking the two, which can be formulated as complementarity constraints. In this latter case the existence of a unique solution is not guaranteed, so other methods are needed to treat the uncertainty associated with its multiplicity. In recent years, research has focused on finding an absolute minimum below which collapse is impossible. This method is easy to formulate from a mathematical point of view, but computationally intractable, due to the complementarity constraints 0 ≤ y ⊥ z ≥ 0, which are neither convex nor smooth. The resulting decision problem is NP-complete (Non-deterministic Polynomial complete), and the corresponding global optimization problem is NP-hard. Even so, obtaining a solution (without guarantee of success) is an affordable problem. This thesis proposes solving that problem through Sequential Linear Programming, taking advantage of the special characteristics of the complementarity constraints, which written in bilinear form read y·z = 0, y ≥ 0, z ≥ 0, and of the fact that the complementarity error (in bilinear form) is an exact penalty function. But when it comes to finding the worst solution, the equivalent global optimization problem is intractable (NP-hard). Furthermore, until a maximum or minimum principle is demonstrated, it is questionable whether the effort spent in approximating this minimum is justified. In Chapter 5, it is proposed to find the frequency distribution of the load factor over all possible onset-of-collapse solutions for a simple example. For this purpose, a Monte Carlo sampling of solutions is performed, using an exact polytope-computation method as a check. The ultimate goal is to determine to what extent the search for the absolute minimum is justified, and to propose an alternative, probability-based approach to safety assessment. The frequency distributions of the load factors corresponding to the onset-of-collapse solutions obtained for the case studied show that both the maximum and the minimum load factors are very infrequent, and all the more so the more perfect and continuous the contact is. The results confirm the interest of developing new probabilistic methods. In Chapter 6, such a method is proposed, based on obtaining multiple solutions from random starting points and qualifying the results by means of Order Statistics. The purpose is to determine, for each solution, the probability of onset of collapse. The method is applied (following the expectation reduction proposed by Ordinal Optimization) to obtain a solution that lies within a given percentage of the worst ones. Finally, in Chapter 7, hybrid methods incorporating metaheuristics are proposed for the cases in which the search for the global minimum is justified.
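The Order Statistics argument behind the Ordinal Optimization step admits a small numeric illustration: if solutions are sampled independently at random, the probability that at least one of n samples falls within the worst alpha-fraction of all solutions is 1 - (1 - alpha)^n, regardless of the distribution of load factors, and the sample size needed for a target confidence follows directly. The function names below are illustrative.

```python
# Distribution-free sample-size bound from order statistics.
import math

def prob_hit(alpha, n):
    """P(at least one of n i.i.d. samples lies in the worst alpha-fraction)."""
    return 1.0 - (1.0 - alpha) ** n

def samples_needed(alpha, confidence):
    """Smallest n with prob_hit(alpha, n) >= confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - alpha))

# e.g. to be 99% sure of seeing a solution among the worst 5% of all solutions:
n = samples_needed(0.05, 0.99)
print(n, round(prob_hit(0.05, n), 4))  # 90 0.9901
```

This is why qualifying a modest number of random restarts can certify, with high confidence, a solution within a chosen percentage of the worst ones, without ever solving the intractable global problem.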
Abstract:
The design of optical systems, considered an art by some and a science by others, has been practiced for centuries. Imaging optical systems have been evolving since Ancient Egyptian times, as have the associated design techniques. Nevertheless, the most important developments in design techniques have taken place over the past 50 years, in part due to advances in manufacturing techniques and the development of increasingly powerful computers, which have enabled the fast and efficient calculation and analysis of ray tracing through optical systems. This has led optical design to evolve from designs developed solely from paraxial optics to modern designs created using different multiparametric optimization techniques. The main problem the designer faces is that the different optimization techniques require an initial design, which can constrain the possible solutions. In other words, if the starting point is far from the global minimum, or optimal design for the set conditions, the final design may be a local minimum close to the starting point and far from the global minimum. This type of problem has led to the development of global optimization methods that are increasingly less sensitive to the starting point of the optimization. Even though it is possible to obtain good designs with such techniques, many attempts are needed to reach the desired solution, with uncertainty throughout the process, since there is no guarantee that the optimal solution will be reached. The Simultaneous Multiple Surfaces (SMS) method, which was born as a tool for calculating anidolic (nonimaging) concentrators, has also proved capable of designing image-forming optical systems, although until now it has only occasionally been used for the design of imaging systems. This thesis aims to present the SMS method as a technique that can be used in general for the design of any fixed-focal or afocal optical system with a defined magnification, and as a tool that can be industrialized to help designers tackle the design of complex optical systems in a straightforward way. The thesis is structured in five chapters. Chapter 1 establishes the fundamentals, presenting the concepts the reader needs, even without an extensive background in image-forming optics, to understand the approaches and results presented in the remaining chapters. Chapter 2 addresses the problem of optimizing optical systems; the SMS method is presented as an ideal tool for obtaining a starting point for the optimization process. Through a worked example, the importance of the starting point for the final solution found is demonstrated. This chapter also presents different techniques for the interpolation and optimization of the surfaces obtained from the application of SMS. Although this thesis works only with SMS2D, a method based on radial basis functions (RBF) is also presented for the interpolation and optimization of the point clouds obtained from SMS3D. Chapter 3 presents the design, manufacture and measurement of a catadioptric panoramic lens designed to work in the Long Wavelength Infrared (LWIR) band (8-12 microns) for perimeter surveillance applications. The lens is designed using the SMS method for three input wavefronts using four surfaces. The power of the design method is evident in the simplicity with which this complex system is designed, and the images presented show how the developed prototype perfectly fulfills its purpose. Chapter 4 addresses the design of ultra-compact optical systems and introduces the concept of multichannel systems: optical systems composed of a series of channels that work in parallel. Such systems are particularly suitable for the design of afocal systems. Design strategies for both monochromatic and polychromatic multichannel systems are presented, and the novel technique presented in this chapter is used to design a telescope with a magnification of six and a half. Chapter 5 presents a generalization of the SMS method for meridian rays, including the algorithm to be used for the design of any fixed-focal optical system. The so-called phase 1 optimization is introduced into the algorithm so that, by changing the initial conditions of the SMS design, the skew rays behave similarly even though the design is carried out for meridian rays. To test the power of the developed algorithm, a set of designs with different numbers of surfaces is presented. The stability and power of the algorithm become apparent in achieving, for the first time, the SMS design of a six-surface system.
Abstract:
Esta tesis doctoral se centra principalmente en tcnicas de ataque y contramedidas relacionadas con ataques de canal lateral (SCA por sus siglas en ingls), que han sido propuestas dentro del campo de investigacin acadmica desde hace 17 aos. Las investigaciones relacionadas han experimentado un notable crecimiento en las ltimas dcadas, mientras que los diseos enfocados en la proteccin slida y eficaz contra dichos ataques an se mantienen como un tema de investigacin abierto, en el que se necesitan iniciativas ms confiables para la proteccin de la informacin persona de empresa y de datos nacionales. El primer uso documentado de codificacin secreta se remonta a alrededor de 1700 B.C., cuando los jeroglficos del antiguo Egipto eran descritos en las inscripciones. La seguridad de la informacin siempre ha supuesto un factor clave en la transmisin de datos relacionados con inteligencia diplomtica o militar. Debido a la evolucin rpida de las tcnicas modernas de comunicacin, soluciones de cifrado se incorporaron por primera vez para garantizar la seguridad, integridad y confidencialidad de los contextos de transmisin a travs de cables sin seguridad o medios inalmbricos. Debido a las restricciones de potencia de clculo antes de la era del ordenador, la tcnica de cifrado simple era un mtodo ms que suficiente para ocultar la informacin. Sin embargo, algunas vulnerabilidades algortmicas pueden ser explotadas para restaurar la regla de codificacin sin mucho esfuerzo. Esto ha motivado nuevas investigaciones en el rea de la criptografa, con el fin de proteger el sistema de informacin ante sofisticados algoritmos. Con la invencin de los ordenadores se ha acelerado en gran medida la implementacin de criptografa segura, que ofrece resistencia eficiente encaminada a obtener mayores capacidades de computacin altamente reforzadas. Igualmente, sofisticados cripto-anlisis han impulsado las tecnologas de computacin. 
Hoy en día, el mundo de la información ha quedado involucrado con el campo de la criptografía, enfocada a proteger cualquier ámbito a través de diversas soluciones de cifrado. Estos enfoques se han fortalecido debido a la unificación optimizada de teorías matemáticas modernas y prácticas eficaces de hardware, siendo posible su implementación en varias plataformas (microprocesador, ASIC, FPGA, etc.). Las necesidades y requisitos de seguridad en la industria son las principales métricas de conducción en el diseño electrónico, con el objetivo de promover la fabricación de productos de gran alcance sin sacrificar la seguridad de los clientes. Sin embargo, una vulnerabilidad en la implementación práctica encontrada por el Prof. Paul Kocher et al. en 1996 implica que un circuito digital es inherentemente vulnerable a un ataque no convencional, el cual fue nombrado posteriormente como ataque de canal lateral, debido a su fuente de análisis. Algunas críticas sobre los algoritmos criptográficos teóricamente seguros surgieron casi inmediatamente después de este descubrimiento. En este sentido, los circuitos digitales consisten típicamente en un gran número de celdas lógicas fundamentales (como MOS - Metal Oxide Semiconductor), construidas sobre un sustrato de silicio durante la fabricación. La lógica de los circuitos se realiza en función de las innumerables conmutaciones de estas celdas. Este mecanismo provoca inevitablemente cierta emanación física especial que puede ser medida y correlacionada con el comportamiento interno del circuito. Los SCA se pueden utilizar para revelar datos confidenciales (por ejemplo, claves criptográficas), analizar la arquitectura lógica, el tiempo de ejecución e incluso inyectar fallos malintencionados en los circuitos implementados en sistemas embebidos, como FPGAs, ASICs o tarjetas inteligentes.
Mediante el uso de la comparación de correlación entre la cantidad de fuga estimada y las fugas medidas de forma real, la información confidencial puede ser reconstruida en mucho menos tiempo y computación. Para ser precisos, SCA cubre básicamente una amplia gama de tipos de ataques, como los análisis de consumo de energía y de radiación electromagnética (EM). Ambos se basan en análisis estadístico y, por lo tanto, requieren numerosas muestras. Los algoritmos de cifrado no están intrínsecamente preparados para ser resistentes ante SCA. Es por ello que se hace necesario integrar durante la implementación de los circuitos medidas que permitan camuflar las fugas a través de los "canales laterales". Las medidas contra SCA están evolucionando junto con el desarrollo de nuevas técnicas de ataque, así como con la continua mejora de los dispositivos electrónicos. Las características físicas requieren contramedidas sobre la capa física, que generalmente se pueden clasificar en soluciones intrínsecas y extrínsecas. Las contramedidas extrínsecas se ejecutan para confundir la fuente de ataque mediante la integración de ruido o la mala alineación de la actividad interna. Comparativamente, las contramedidas intrínsecas están integradas en el propio algoritmo, para modificar la implementación con el fin de minimizar las fugas medibles, o incluso hacer que dichas fugas no puedan ser medidas. Ocultación y enmascaramiento son dos técnicas típicas incluidas en esta categoría. Concretamente, el enmascaramiento se aplica a nivel algorítmico, para alterar los datos intermedios sensibles con una máscara de manera reversible. A diferencia del enmascaramiento lineal, las operaciones no lineales, que existen ampliamente en las criptografías modernas, son difíciles de enmascarar. El método de ocultación, que ha sido verificado como una solución efectiva, comprende principalmente la codificación en doble carril, ideada especialmente para aplanar o eliminar la fuga dependiente de los datos en potencia o en EM.
En esta tesis doctoral, además de la descripción de las metodologías de ataque, se han dedicado grandes esfuerzos a la estructura del prototipo de la lógica propuesta, con el fin de realizar investigaciones enfocadas a la seguridad sobre contramedidas de arquitectura a nivel lógico. Una característica de SCA reside en el formato de las fuentes de fugas. Un típico ataque de canal lateral se refiere al análisis basado en la potencia, donde la capacidad fundamental del transistor MOS y otras capacidades parásitas son las fuentes esenciales de fugas. Por lo tanto, una lógica robusta resistente a SCA debe eliminar o mitigar las fugas de estas micro-unidades, como las puertas lógicas básicas, los puertos I/O y las rutas. Las herramientas EDA proporcionadas por los vendedores manipulan la lógica desde un nivel más alto, en lugar de hacerlo desde el nivel de puerta, donde las fugas de canal lateral se manifiestan. Por lo tanto, las implementaciones clásicas apenas satisfacen estas necesidades e inevitablemente atrofian el prototipo. Por todo ello, ha de tomarse en cuenta la implementación de un esquema de diseño personalizado y flexible. En esta tesis se presenta el diseño y la implementación de una lógica innovadora para contrarrestar SCA, en la que se abordan tres aspectos fundamentales: I. Se basa en la estrategia de ocultación mediante un circuito de doble carril a nivel de puerta, para obtener dinámicamente el equilibrio de las fugas en las capas inferiores; II. Esta lógica explota las características de la arquitectura de las FPGAs, para reducir al mínimo el gasto de recursos en la implementación; III. Se apoya en un conjunto de herramientas asistentes personalizadas, incorporadas al flujo genérico de diseño sobre FPGAs, con el fin de manipular los circuitos de forma automática. El kit de herramientas de diseño automático es compatible con la lógica de doble carril propuesta, para facilitar la aplicación práctica sobre las familias de FPGAs del fabricante Xilinx.
En este sentido, la metodología y las herramientas son lo suficientemente flexibles como para extenderse a una amplia gama de aplicaciones en las que se deseen obtener restricciones mucho más rígidas y sofisticadas a nivel de puerta o de rutado. En esta tesis se realiza un gran esfuerzo para facilitar el proceso de implementación y reparación de la lógica de doble carril genérica. La viabilidad de las soluciones propuestas se valida mediante la selección de algoritmos criptográficos ampliamente utilizados, y su evaluación exhaustiva en comparación con soluciones anteriores. Todas las propuestas están respaldadas eficazmente a través de ataques experimentales con el fin de validar las ventajas de seguridad del sistema. El presente trabajo de investigación tiene la intención de cerrar la brecha entre las barreras de implementación y la aplicación efectiva de la lógica de doble carril. En esencia, a lo largo de esta tesis se describirá un conjunto de herramientas de implementación para FPGAs desarrolladas para trabajar junto con el flujo de diseño genérico de las mismas, con el fin de crear de forma innovadora la lógica de doble carril. Se propone un nuevo enfoque en el ámbito de la seguridad en el cifrado para obtener personalización, automatización y flexibilidad en el prototipado de circuitos de bajo nivel con granularidad fina. Las principales contribuciones del presente trabajo de investigación se resumen brevemente a continuación: Lógica Precharge Absorbed-DPL (PA-DPL): uso de la conversión de netlist para reservar entradas libres de las LUTs y ejecutar las señales de Precharge y Ex en una lógica DPL. Posicionamiento entrelazado row-crossed con pares idénticos de rutado en redes de doble carril, lo que ayuda a aumentar la resistencia frente a la medición EM selectiva y a mitigar los impactos de las variaciones de proceso. Ejecución personalizada y herramientas de conversión automática para la generación de redes idénticas para la lógica de doble carril propuesta.
(a) Para detectar y reparar conflictos en las conexiones; (b) para detectar y reparar las rutas asimétricas; (c) para ser utilizadas en otras lógicas donde se requiera un control estricto de las interconexiones en aplicaciones basadas en Xilinx. Plataforma CPA de pruebas personalizada para el análisis de EM y potencia, incluyendo la construcción de dicha plataforma, el método de medición y el análisis de los ataques. Análisis de tiempos para cuantificar los niveles de seguridad. División de seguridad en la conversión parcial de un sistema de cifrado complejo para reducir los costes de la protección. Prueba de concepto de un sistema de calefacción auto-adaptativo para mitigar de manera dinámica los impactos eléctricos debidos a la variación del proceso de silicio. La presente tesis doctoral se encuentra organizada tal y como se detalla a continuación: En el capítulo 1 se abordan los fundamentos de los ataques de canal lateral, que abarcan desde los conceptos básicos teóricos hasta los modelos de análisis, además de la implementación de la plataforma y la ejecución de los ataques. En el capítulo 2 se incluyen las estrategias de resistencia SCA contra los ataques de potencia diferencial y de EM. Además, en este capítulo se propone una lógica de doble carril compacta y segura como contribución de gran relevancia, y se presenta la transformación lógica basada en un diseño a nivel de puerta. Por otra parte, en el capítulo 3 se abordan los desafíos relacionados con la implementación de la lógica de doble carril genérica. Asimismo, se describe un flujo de diseño personalizado para resolver los problemas de aplicación, junto con una herramienta propuesta de desarrollo automático, para mitigar las barreras de diseño y facilitar los procesos. En el capítulo 4 se describe de forma detallada la elaboración e implementación de las herramientas propuestas.
Por otra parte, la verificación y las validaciones de seguridad de la lógica propuesta, así como un sofisticado experimento de verificación de la seguridad del rutado, se describen en el capítulo 5. Por último, un resumen de las conclusiones de la tesis y las perspectivas como líneas futuras se incluyen en el capítulo 6. Con el fin de profundizar en el contenido de la tesis doctoral, cada capítulo se describe de forma más detallada a continuación: En el capítulo 1 se introduce la plataforma de implementación hardware además de las teorías básicas del ataque de canal lateral, y contiene principalmente: (a) la arquitectura genérica y las características de la FPGA a utilizar, en particular la Xilinx Virtex-5; (b) el algoritmo de cifrado seleccionado (un módulo comercial Advanced Encryption Standard (AES)); (c) los elementos esenciales de los métodos de canal lateral, que permiten revelar las fugas de disipación correlacionadas con los comportamientos internos, y el método para recuperar la relación entre las fluctuaciones físicas en las trazas de canal lateral y los datos internos procesados; (d) las configuraciones de las plataformas de pruebas de potencia/EM abarcadas dentro de la presente tesis. El contenido de esta tesis se amplía y profundiza a partir del capítulo 2, en el cual se abordan varios aspectos clave. En primer lugar, el principio de protección de la compensación dinámica de la lógica genérica de precarga de doble carril (Dual-rail Precharge Logic, DPL) se explica mediante la descripción de los elementos compensados a nivel de puerta. En segundo lugar, la lógica PA-DPL se propone como aportación original, detallando el protocolo de la lógica y un caso de aplicación. En tercer lugar, se muestran dos flujos de diseño personalizados para realizar la conversión de doble carril. Junto con ello, se aclaran las definiciones técnicas relacionadas con la manipulación por encima de la netlist a nivel de LUT. Finalmente, una breve discusión sobre el proceso global se aborda en la parte final del capítulo.
El capítulo 3 estudia los principales retos durante la implementación de DPLs en FPGAs. El nivel de seguridad de las soluciones de resistencia a SCA encontradas en el estado del arte se ha degradado debido a las barreras de implantación a través de herramientas EDA convencionales. En el escenario de la arquitectura FPGA estudiada, se discuten los problemas de los formatos de doble carril, los impactos parásitos, el sesgo tecnológico y la viabilidad de implementación. De acuerdo con estas elaboraciones, se plantean dos problemas: cómo implementar la lógica propuesta sin penalizar los niveles de seguridad, y cómo manipular un gran número de celdas y automatizar el proceso. El PA-DPL propuesto en el capítulo 2 se valida con una serie de iniciativas, desde características estructurales como el doble carril entrelazado o las redes de rutado clonadas, hasta los métodos de aplicación tales como las herramientas de personalización y automatización de EDA. Por otra parte, un sistema de calefacción auto-adaptativo se presenta y aplica a una lógica de doble núcleo, con el fin de ajustar alternativamente la temperatura local para equilibrar los impactos negativos de la variación del proceso durante la operación en tiempo real. El capítulo 4 se centra en los detalles de la implementación del kit de herramientas. Desarrollado sobre una API de terceros, el kit de herramientas personalizado es capaz de manipular los elementos lógicos del circuito post-P&R en formato ncd (una versión binaria ilegible del xdl), convertido al formato XDL de Xilinx. El mecanismo y la razón de ser del conjunto de herramientas propuesto se describen cuidadosamente, cubriendo la detección de rutado y los enfoques para la reparación. El conjunto de herramientas desarrollado tiene como objetivo lograr redes de rutado estrictamente idénticas para la lógica de doble carril, tanto para el posicionamiento separado como para el entrelazado.
Este capítulo especifica en particular las bases técnicas para apoyar las implementaciones en los dispositivos de Xilinx y su flexibilidad para ser utilizado en otras aplicaciones. El capítulo 5 se enfoca en la aplicación de los casos de estudio para la validación de los grados de seguridad de la lógica propuesta. Se discuten los problemas técnicos detallados durante la ejecución y algunas nuevas técnicas de implementación. (a) Se discute el impacto del proceso de posicionamiento de la lógica utilizando el kit de herramientas propuesto. Diferentes esquemas de implementación, tomando en cuenta la optimización global en seguridad y coste, se verifican con experimentos con el fin de encontrar los planes de posicionamiento y reparación optimizados; (b) las validaciones de seguridad se realizan con métodos de correlación y de análisis de tiempos; (c) una táctica asintótica se aplica a un núcleo AES estructurado sobre BCDL para validar de forma sofisticada el impacto del rutado sobre las métricas de seguridad; (d) se muestran los resultados preliminares utilizando el sistema de calefacción auto-adaptativo frente a la variación del proceso; (e) se introduce una aplicación práctica de las herramientas para un diseño de cifrado completo. El capítulo 6 incluye el resumen general del trabajo presentado en esta tesis doctoral. Por último, se expone una breve perspectiva del trabajo futuro, que puede ampliar el potencial de utilización de las contribuciones de esta tesis a un alcance más allá de los dominios de la criptografía en FPGAs. ABSTRACT This PhD thesis mainly concentrates on attack techniques and countermeasures related to the Side-Channel Attack (SCA), which has been an active topic of academic research for the past 17 years.
The related research has seen remarkable growth in the past decades, while the design of solid and efficient protection still remains an open research topic, in which more reliable initiatives are required for the protection of personal, enterprise and national data. The earliest documented usage of secret code can be traced back to around 1700 B.C., when hieroglyphs were scribed in inscriptions in ancient Egypt. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signaling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of contexts transmitted over insecure cable or wireless channels. Restricted by the computational power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities can be exploited to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims at guarding information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has become a crypto world, protecting all fields through pervasive crypto solutions. These approaches are strong because of the optimized merger of modern mathematical theories and effective hardware practices, making it possible to implement crypto theories on various platforms (microprocessors, ASICs, FPGAs, etc.).
Security needs from industry are the major driving metrics in electronic design, aiming to promote the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since then named the side-channel attack after its source of analysis. Critical suspicions about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. To be specific, digital circuits typically consist of a great number of essential logic elements (such as MOS - Metal Oxide Semiconductor - cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs or smart cards. By comparing the predicted leakage quantity with the measured leakage, secrets can be reconstructed at a much lower expense of time and computation. To be precise, SCA basically encloses a wide range of attack types, typically analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of the severity of the threat, much attention has to be paid during implementation to assembling countermeasures that camouflage the leakage through "side channels". Countermeasures against SCA are evolving along with the development of attack techniques.
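The correlation-based recovery described above (comparing a predicted leakage quantity against measured traces) can be sketched as a toy correlation power analysis (CPA). The 4-bit S-box, key size, trace count and noise model below are illustrative assumptions, not the thesis setup, which targets a commercial AES core:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-bit S-box standing in for the AES S-box (illustrative only).
SBOX = np.array([0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
                 0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2])

def hamming_weight(values):
    """Leakage model: number of set bits in each intermediate value."""
    return np.array([bin(int(v)).count("1") for v in values])

SECRET_KEY = 0x9                         # the key nibble the "device" uses
plaintexts = rng.integers(0, 16, size=2000)

# Synthetic traces: leakage proportional to the Hamming weight of the
# S-box output, plus Gaussian measurement noise.
traces = hamming_weight(SBOX[plaintexts ^ SECRET_KEY]) + rng.normal(0.0, 1.0, 2000)

# CPA: correlate the measured traces against the predicted leakage for
# every key guess; the correct guess yields the largest |Pearson r|.
scores = [abs(np.corrcoef(hamming_weight(SBOX[plaintexts ^ g]), traces)[0, 1])
          for g in range(16)]
recovered = int(np.argmax(scores))
print(f"recovered key nibble: {recovered:#x}")
```

With enough traces the correct guess stands out clearly even under substantial noise, which is why both power and EM variants of SCA are statistical in nature.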
The physical characteristics require countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activities. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage, or even to make it unmeasurable. Hiding and masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that widely exist in modern cryptography are difficult to mask. Proven to be an effective counter-solution, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, great effort has also been dedicated to the logic prototype, in order to mount extensive security investigations on logic-level countermeasures. A characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro units, such as basic logic gates, I/O ports and routing. Vendor-provided EDA tools manipulate the logic from a higher behavioral level, rather than the lower gate level where side-channel leakage is generated. The classical implementations therefore barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised.
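The reversibility of masking, and the reason non-linear operations resist it, can be made concrete with a small Boolean-masking sketch. The S-box here is a placeholder bijection and the masks are freshly drawn random bytes; none of this reflects the thesis's own cipher:

```python
import secrets

def mask(value):
    """Split an 8-bit sensitive value into two random shares (Boolean masking)."""
    m = secrets.randbelow(256)
    return value ^ m, m

def unmask(masked, m):
    return masked ^ m

# Linear operations (here: XOR with a round key) commute with the mask:
# (v ^ m) ^ k recombines with m to give v ^ k, so the true intermediate
# value never appears during the computation.
v, k = 0x3A, 0x5C
mv, m = mask(v)
assert unmask(mv ^ k, m) == v ^ k

# Non-linear operations (e.g. an S-box lookup) do NOT commute with XOR;
# a common fix is a recomputed masked table S'[x ^ m_in] = S[x] ^ m_out.
SBOX = list(range(256))[::-1]            # placeholder bijection, illustrative only
m_in, m_out = secrets.randbelow(256), secrets.randbelow(256)
masked_sbox = [SBOX[x ^ m_in] ^ m_out for x in range(256)]
x = 0x7F
assert masked_sbox[x ^ m_in] ^ m_out == SBOX[x]
print("masking round-trips correctly")
```

The recomputed-table trick is one standard workaround; it illustrates why masking non-linear layers carries extra memory and refresh cost compared with hiding-style countermeasures.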
This thesis profiles an innovative logic style to counter SCA, which mainly addresses three major aspects: I. The proposed logic is based on a hiding strategy over a gate-level dual-rail style, to dynamically balance the side-channel leakage from the lower circuit layers; II. This logic exploits architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of assistant custom tools, incorporated into the generic FPGA design flow, to carry out circuit manipulations in an automatic manner. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and the tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on selected, widely used crypto algorithms, for a thorough and fair evaluation with respect to prior solutions. All the proposals are effectively verified by security experiments. The presented research work attempts to solve these implementation troubles. In essence, a customized execution toolkit for modern FPGA systems is developed along this thesis to work together with the generic FPGA design flow for creating the innovative dual-rail logic. A method in the crypto-security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity over intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to execute the Precharge and Ex signals in a dual-rail logic style.
A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in Xilinx scenarios. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, measurement method and attack analysis. A timing-analysis-based method for quantifying security grades. A methodology of security partitioning of complex crypto systems for reducing the protection cost. A proof-of-concept self-adaptive heating system to mitigate the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses side-channel attack fundamentals, covering theoretic basics, analysis models, platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit, for mitigating the design barriers and facilitating the processes. Chapter 4 originally elaborates the tool specifics and construction details.
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing-verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 introduces the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) The generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) The selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, is detailed; (c) The essentials of side-channel methods are profiled, revealing the dissipation leakage correlated with internal behavior, and the method for recovering the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) The setups of the power/EM testing platforms used throughout the thesis work are given. The content of this thesis is expanded and deepened from Chapter 2, which is divided into several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown. Meanwhile, the technical definitions involved in manipulating the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools.
In the studied FPGA scenario, problems are discussed concerning the dual-rail format, parasitic impact, technological bias and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to alternately adjust the local temperature to balance the negative impact of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from the ncd format (an unreadable binary version of the xdl file) to the Xilinx XDL format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing-detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials that support the implementations on Xilinx devices and the flexibility to extend them to other applications. Chapter 5 focuses on the case studies for validating the security grades of the proposed logic style using the proposed toolkit. Comprehensive implementation techniques are discussed. (a) The placement impact of using the proposed toolkit is discussed.
Different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) Security validations are realized with correlation and timing methods; (c) A systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) Preliminary results using the self-adaptive heating system against process variation are given; (e) A practical application of the proposed toolkit to a large design is introduced. Chapter 6 gives a general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
Resumo:
The conformational space annealing (CSA) method for global optimization has been applied to the 10-55 fragment of the B-domain of staphylococcal protein A (protein A) and to a 75-residue protein, apo calbindin D9K (PDB ID code 1CLB), by using the UNRES off-lattice united-residue force field. Although the potential was not calibrated with these two proteins, the native-like structures were found among the low-energy conformations, without the use of threading or secondary-structure predictions. This is because the CSA method can find many distinct families of low-energy conformations. Starting from random conformations, the CSA method found that there are two families of low-energy conformations for each of the two proteins, the native-like fold and its mirror image. The CSA method converged to the same low-energy folds in all cases studied, as opposed to other optimization methods. It appears that the CSA method with the UNRES force field, which is based on the thermodynamic hypothesis, can be used in prediction of protein structures in real time.
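CSA itself combines genetic-style exchanges between a bank of conformations with a gradually annealed distance cutoff, which is how it keeps many distinct low-energy families alive at once. As a rough illustration of the underlying idea of stochastic global search over a rugged energy landscape, here is a plain simulated-annealing sketch on a toy one-dimensional "energy" (a made-up test function, not the UNRES force field, and not the CSA algorithm itself):

```python
import math, random

random.seed(1)

# A one-dimensional "energy landscape" with several local minima; the
# global minimum sits near x = 1.58 (hypothetical test function).
def energy(x):
    return 0.1 * (x - 2) ** 2 + math.sin(3 * x)

def simulated_annealing(steps=20000, t0=2.0):
    x = random.uniform(-10, 10)                  # random starting conformation
    e = energy(x)
    best_x, best_e = x, e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-3          # linear cooling schedule
        cand = x + random.gauss(0, 0.5)
        ce = energy(cand)
        # Metropolis rule: always accept downhill moves, sometimes uphill,
        # which is what lets the search escape local minima early on.
        if ce < e or random.random() < math.exp((e - ce) / t):
            x, e = cand, ce
            if e < best_e:
                best_x, best_e = x, e
    return best_x, best_e

x_min, e_min = simulated_annealing()
print(round(x_min, 2), round(e_min, 2))
```

The early high-temperature phase wanders across basins (including mirror-image-like alternatives), while the cooling phase settles into one of them, which mirrors, very loosely, how CSA narrows its distance cutoff over time.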
Resumo:
The Remez penalty and smoothing algorithm (RPSALG) is a unified framework for penalty and smoothing methods for solving min-max convex semi-infinite programming problems, whose convergence was analyzed in a previous paper by three of the authors. In this paper we consider a partial implementation of RPSALG for solving ordinary convex semi-infinite programming problems. Each iteration of RPSALG involves two types of auxiliary optimization problems: the first consists of obtaining an approximate solution of some discretized convex problem, while the second requires solving a non-convex optimization problem involving the parametric constraints as objective function, with the parameter as variable. In this paper we tackle the latter problem with a variant of the cutting angle method called ECAM, a global optimization procedure for solving Lipschitz programming problems. We implement different variants of RPSALG, which are compared with the only publicly available SIP solver, NSIPS, on a battery of test problems.
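The second auxiliary problem, globally maximizing the parametric constraint over the index set, is what ECAM handles for Lipschitz functions. ECAM builds a more elaborate saw-tooth cover in several variables; the classical one-dimensional Piyavskii-Shubert scheme below conveys the same idea of bounding a Lipschitz function from above by cones and sampling where the bound peaks (the constraint function and Lipschitz constant here are made up for illustration):

```python
import math

def lipschitz_maximize(f, a, b, lip, iters=100):
    """Piyavskii-Shubert style global maximization of a Lipschitz function on [a, b]."""
    pts = [(a, f(a)), (b, f(b))]
    grid = [a + (b - a) * k / 800 for k in range(801)]
    for _ in range(iters):
        # Saw-tooth upper cover: f(t) <= min_i f(t_i) + lip * |t - t_i|.
        ub = lambda t: min(fv + lip * abs(t - tv) for tv, fv in pts)
        t_next = max(grid, key=ub)
        if any(abs(t_next - tv) < 1e-9 for tv, _ in pts):
            break                      # the cover is tight on the grid; stop
        pts.append((t_next, f(t_next)))
    return max(pts, key=lambda p: p[1])

# Hypothetical parametric constraint g(x, t) at a fixed x, multimodal in t;
# |g'(t)| <= 5 + 0.6 * 2 on [-2, 2], so lip = 7 is a valid Lipschitz constant.
g = lambda t: math.sin(5 * t) - 0.3 * t * t
t_star, g_star = lipschitz_maximize(g, -2.0, 2.0, lip=7.0)
print(round(t_star, 2), round(g_star, 3))
```

Because the cover is a guaranteed upper bound whenever the Lipschitz constant is valid, the method cannot get trapped in a local maximum, which is precisely the property needed when the constraint problem is non-convex.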