983 results for Personal Digital Assistant
Abstract:
The possibilities of image-based modeling (IBM) as a low-cost 3D scanning technique for modeling Roman inscriptions are analyzed, based on work carried out at the Museo Arqueológico Nacional de Madrid on a wide range of epigraphic supports (stone, bronze, clay), with excellent results for the cataloguing, study and dissemination of this type of historical documentation. The results make it possible to obtain Roman inscriptions in 3D that can be incorporated into ongoing digital epigraphy projects, making them accessible from computers and mobile devices at no added cost to researchers.
Abstract:
Personal photographs permeate our lives from the moment we are born as they define who we are within our familial group and local communities. Archived in family albums or framed on living room walls, they continue on after our death as mnemonic artifacts referencing our gendered, raced, and ethnic identities. This dissertation examines salient instances of what women “do” with personal photographs, not only as authors and subjects but also as collectors, archivists, and family and cultural historians. This project seeks to contribute to more productive, complex discourse about how women form relationships and engage with the conventions and practices of personal photography. In the first part of this dissertation I revisit developments in the history of personal photography, including the advertising campaigns of the Kodak and Agfa Girls and the development of albums such as the Stammbuch and its predecessor, the carte-de-visite, that demonstrate how personal photography has functioned as a gendered activity that references family unity, sentimentalism for the past, and self-representation within normative familial and dominant cultural groups, thus suggesting its importance as a cultural practice of identity formation. The second and primary section of the dissertation expands on the critical analyses of Gillian Rose, Patricia Holland, and Nancy Martha West, who propose that personal photography, marketed to and taken on by women, double-exposes their gendered identities. Drawing on work by critics such as Deborah Willis, bell hooks, and Abigail Solomon-Godeau, I examine how the reconfiguration, recontextualization, and relocation of personal photographs in the respective work of Christine Saari, Fern Logan, and Katie Knight interrogates and complicates gendered, raced, and ethnic identities and cultural attitudes about them. 
In the final section of the dissertation I briefly examine select examples of how emerging digital spaces on the Internet function as a site for personal photography, one that reinscribes traditional cultural formations while also offering women new opportunities for the display and audiencing of identities outside the family.
Abstract:
A body sensor network solution for personal healthcare in an indoor environment is developed. The system is capable of logging the physiological signals of human beings, tracking the orientation of the human body, and monitoring environmental attributes, which together cover all the information necessary for personal healthcare in an indoor environment. The three major chapters of this dissertation correspond to the three subsystems of this work: BioLogger, PAMS and CosNet. Each chapter covers the background and motivation of the subsystem, the related theory, the hardware/software design, and the evaluation of the prototype's performance.
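The three data streams such a system must reconcile (physiological signals, body orientation, and ambient conditions) can be sketched as a unified log record. The field names below are illustrative assumptions, not the dissertation's actual schema:

```python
from dataclasses import dataclass, asdict

# Illustrative record types for the three subsystems; names are
# assumptions, not the dissertation's actual data model.

@dataclass
class PhysioSample:          # BioLogger-style physiological reading
    t: float                 # timestamp, seconds
    heart_rate_bpm: float
    spo2_pct: float

@dataclass
class OrientationSample:     # PAMS-style body-orientation reading
    t: float
    roll_deg: float
    pitch_deg: float
    yaw_deg: float

@dataclass
class EnvSample:             # CosNet-style environmental reading
    t: float
    temp_c: float
    humidity_pct: float

def fuse(physio, orient, env):
    """Merge the most recent sample from each stream into one record,
    stamped with the newest of the three timestamps."""
    rec = {"t": max(physio.t, orient.t, env.t)}
    for s in (physio, orient, env):
        d = asdict(s)
        d.pop("t")
        rec.update(d)
    return rec
```

A fused record of this kind is what an indoor monitoring loop would hand to any downstream reasoning or alerting layer.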
Abstract:
This thesis examines digital technologies used by technical communicators in healthcare settings. I show that technical communicators, who function as users, advocates, and evaluators, need a usable framework for ethical engagement with digital technologies, which integrally affect the physician-patient relationship. I therefore apply a rhetorical methodology by producing usable knowledge and a phenomenological methodology by examining the lived experiences of technical communicators. Substantiation comes from theories spanning technical communication, philosophy, and composition studies. Evidence also emerges from qualitative interviews with communication professionals working in healthcare; my concerns arise from personal experiences with electronic recordkeeping in the exam room. This thesis challenges the presumed theory-practice divide while encouraging greater disciplinary reciprocity. Because technical communication infuses theory into productive capacity, this thesis presents the tripartite summons of the ethical technical communicator: to exercise critically reflective action that safeguards the physician-patient relationship by way of using digital technologies, advocating for audiences, and evaluating digital technologies.
Abstract:
During the lead-up to Montana's second progressive era, Lee Metcalf and Forrest Anderson, along with others, kept the progressive flame lit in Montana. Metcalf's political history is replete with close electoral wins because of his commitment to progressive ideals even when the times were not politically favorable to them. As State Legislator, MT Supreme Court Justice, Congressman and eventually US Senator, Lee won races by as little as 55 votes because he stuck to his guns as a progressive. In Forrest Anderson's career as a County Attorney, State Legislator, MT Supreme Court Justice and 12 years as MT Attorney General, he was respected as a pragmatic practitioner of politics. But throughout that entire career leading up to his election as Governor, Forrest Anderson was also a stalwart supporter of the progressive agenda exemplified by FDR and the New Deal, which brought folks out of the Great Depression that had been brought on by the bad policies of the GOP and big business. As MT's second progressive period began in 1965, the first important election was Senator Metcalf's successful re-election battle in 1966 against the sitting MT Governor, Tim Babcock. And the progressive express was truly ignited by the election of Forrest Anderson as Governor in 1968 after 16 years of Republican Governors in MT. Gordon Bennett played a rather unique role as a confidant of Metcalf and Anderson, both of whom respected his wide and varied experience, his intellect, and his roots in progressivism beginning with his formative years in the Red Corner of NE Montana. Working with Senator Metcalf and his team, including Brit Englund, Vic Reinemer, Peggy McLaughlin, Betty Davis and Jack Condon among others, Bennett helped shape the progressive message both in Washington DC and MT.
Progressive labor and farm organizations, part of the progressive coalition, benefitted from Bennett’s advice and counsel and aided the Senator in his career including the huge challenge of having a sitting popular governor run against him for the Senate in 1966. Metcalf’s noted intern program produced a cadre of progressive leaders in Montana over the years. Most notably, Ron Richards transitioned from Metcalf Intern to Executive Secretary of the Montana Democratic Party (MDP) and assisted, along with Bennett, in the 1966 Metcalf-Babcock race in a big way. As Executive Secretary Richards was critical to the success of the MDP as a platform for Forrest Anderson’s general election run and win in 1968. After Forrest’s gubernatorial election, Richards became Executive Assistant (now called Chief of Staff) for Governor Anderson and also for Governor Thomas Judge. The Metcalf progressive strain, exemplified by many including Richards and Bennett, permeated Democratic politics during the second progressive era. So, too, did the coalition that supported Metcalf and his policies. The progressivism of the period of “In the Crucible of Change” was fired up by Lee Metcalf, Forrest Anderson and their supporters and coalitions, and Gordon Bennett was in the center of all of that, helping fire up the crucible, setting the stage for many policy advancements in both Washington DC and Montana. Gordon Bennett’s important role in the 1966 re-election of Senator Lee Metcalf and the 1968 election of Governor Forrest Anderson, as well as his wide experience in government and politics of that time allows him to provide us with an insider’s personal perspective of those races and other events at the beginning of the period of progressive change being documented “In the Crucible of Change,” as well as his personal insights into the larger political/policy picture of Montana. 
Gordon Bennett, a major and formative player "In the Crucible of Change," was born in the far northeast town of Scobey, MT in 1922. He attended school in Scobey through the eighth grade and graduated from Helena High School. After attending Carroll College for two years, he received his BA in economics from Carleton College in Northfield, MN. During a brief stint on the east coast, his daily reading of the New York Times ("best newspaper in the world at that time … and now") inspired him to pursue a career in journalism. He received his MA in Journalism from the University of Missouri and entered the field. As a reporter for the Great Falls Tribune under the ownership and management of the Warden Family, he observed and competed with the rigid control of Montana's press by the Anaconda Company (the Great Falls Tribune was the only large newspaper in Montana NOT owned by ACM). Following his intellectual curiosity and his philosophical bent, he attended a number of Farm-Labor Institutes, which he credits with motivating him to pursue solutions to economic and social woes through the law. In 1956, at the age of 34, he received his Juris Doctor degree from the Georgetown University Law Center in Washington, DC. Bennett's varied career included eighteen years as a farmer, four years in the US Army during WWII (1942-46), two years as Assistant MT Attorney General (1957-59) with Forrest Anderson, three years in private practice in Glasgow (1959-61), two years as Associate Solicitor in the Department of Interior in Washington, DC (1961-62), and private law practice in Helena from 1962 to 1969. While in Helena he was an unsuccessful candidate for the Montana Supreme Court (1962) and cemented his previous relationships with Attorney General Forrest Anderson and US Senator Lee Metcalf.
Bennett modestly refuses to accept the title of Campaign Manager for either Lee Metcalf (1966 re-election over the challenger, MT Republican Governor Tim Babcock) or Forrest Anderson (his 1968 election as Governor), saying that "they ran their campaigns … we were only there to help." But he has been generally recognized as having filled that critical role in both of those critical elections. After Governor Anderson's election in 1968, Bennett was appointed Director of the MT Unemployment Compensation Commission, a position from which he could be a close advisor and confidant of the new Governor. In 1971, Governor Anderson appointed him Judge in the most important jurisdiction in Montana, the 1st Judicial District in Helena, a position he held for seventeen years (1971-88). Upon stepping down from his judgeship, he spent twenty years (1988-2008) as a law instructor, mediator and arbitrator. He currently resides in Helena with his wife, Norma Tirrell, a former newspaper reporter and researcher/writer. Bennett has two adult children and four grandchildren.
Abstract:
BACKGROUND: Little is known about the population's exposure to radio frequency electromagnetic fields (RF-EMF) in industrialized countries. OBJECTIVES: To examine levels of exposure and the importance of different RF-EMF sources and settings in a sample of volunteers living in a Swiss city. METHODS: RF-EMF exposure of 166 volunteers from Basel, Switzerland, was measured with personal exposure meters (exposimeters). Participants carried an exposimeter for 1 week (two separate weeks in 32 participants) and completed an activity diary. Mean values were calculated using the robust regression on order statistics (ROS) method. RESULTS: Mean weekly exposure to all RF-EMF sources was 0.13 mW/m² (0.22 V/m) (range of individual means 0.014-0.881 mW/m²). Exposure was mainly due to mobile phone base stations (32.0%), mobile phone handsets (29.1%) and digital enhanced cordless telecommunications (DECT) phones (22.7%). Persons owning a DECT phone (total mean 0.15 mW/m²) or mobile phone (0.14 mW/m²) were exposed more than those not owning a DECT or mobile phone (0.10 mW/m²). Mean values were highest in trains (1.16 mW/m²), airports (0.74 mW/m²) and tramways or buses (0.36 mW/m²), and higher during daytime (0.16 mW/m²) than nighttime (0.08 mW/m²). The Spearman correlation coefficient between mean exposure in the first and second week was 0.61. CONCLUSIONS: Exposure to RF-EMF varied considerably between persons and locations but was fairly consistent within persons. Mobile phone handsets, mobile phone base stations and cordless phones were important sources of exposure in urban Switzerland.
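The week-to-week consistency reported above is a Spearman rank correlation between each volunteer's two weekly means. The sketch below implements Spearman's rho from first principles; the exposure values are made-up illustrations, not the study's measurements:

```python
def ranks(xs):
    """Average (1-based) ranks, with midranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend over a tie group
        avg = (i + j) / 2 + 1           # midrank for the group
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Illustrative weekly mean exposures (mW/m^2) for five volunteers.
week1 = [0.014, 0.13, 0.22, 0.45, 0.881]
week2 = [0.020, 0.10, 0.30, 0.40, 0.700]
```

Here `spearman(week1, week2)` returns 1.0, since the made-up data preserves each person's rank order exactly; the study's observed 0.61 reflects partial, not perfect, within-person consistency.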
Abstract:
In light of the recent European Court of Justice ruling (ECJ C-131/12, Google Spain v Spanish Data Protection Agency), the "right to be forgotten" has once again gained worldwide media attention. Already in 2012, when the European Commission proposed a right to be forgotten, this proposal received broad public interest and was debated intensively. Under certain conditions, individuals should thereby be able to delete personal data concerning them. More recently – in light of the European Parliament's approval of the LIBE Committee's amendments on March 14, 2014 – the concept seems to be close to its final form. Although it remains, for the most part, unchanged from the previously circulated drafts, it has been re-labelled as a "right of erasure". This article argues that, despite its catchy terminology, the right to be forgotten can be understood as a generic term, bringing together existing legal provisions: the substantive right of oblivion and the rather procedural right to erasure derived from data protection. Hereinafter, the article presents an analysis of selected national legal frameworks and corresponding case law, accounting for data protection, privacy, and general tort law as well as defamation law. This comparative analysis grasps the practical challenges facing the attempt to strengthen individual control and informational self-determination. Consequently, it is argued that narrowing the focus to the data protection law amendments neglects the elaborate balancing of conflicting interests in the European legal tradition. It is shown that the attempt to implement oblivion, erasure and forgetting in the digital age is a complex undertaking.
Abstract:
by M. Weinberg
Abstract:
MPEG-M is a suite of ISO/IEC standards (ISO/IEC 23006) that has been developed under the auspices of Moving Picture Experts Group (MPEG). MPEG-M, also known as Multimedia Service Platform Technologies (MSPT), facilitates a collection of multimedia middleware APIs and elementary services as well as service aggregation so that service providers can offer users a plethora of innovative services by extending current IPTV technology toward the seamless integration of personal content creation and distribution, e-commerce, social networks and Internet distribution of digital media.
Abstract:
The technological world is shifting toward optimized resource management under the powerful influence of technologies such as virtualization and cloud computing. This report takes a close look at both, from the needs that motivated them to their latest trends, covering their main features, advantages and disadvantages along the way. The Digital Home, meanwhile, is already a reality for most people: it offers access to several types of telecommunication networks (3G, 4G, Wi-Fi, ADSL...) of varying capacity, allowing Internet connections from anywhere, at any time, and from practically any device (personal computers, smartphones, tablets, televisions...). Companies exploit this to offer all kinds of services. Some of these services are built on cloud computing, above all offering cloud storage to devices with limited capacity, such as smartphones and tablets. That storage space normally sits on servers controlled by large companies. Storing private documents, videos and photos without any certainty that nobody is viewing them without consent can make users uneasy. For users who want control over their privacy, there is the option of mounting their own servers and their own cloud service, sharing private information only with family and friends or with anyone they grant access.
During the project, several solutions were compared, most of them open source and freely distributed, each able to deploy at least a storage service accessible over the Internet. Some of them complement this with streaming of music and video, sharing and syncing of documents across multiple devices, calendars, backups, desktop virtualization, file versioning, chat, and so on. The project ends with a demonstration of how devices in a digital home interact with a cloud server on which one of the compared solutions has been installed and configured. This server is packaged as a virtual machine so that it is easily transportable and usable.
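At its core, a self-hosted storage service of the kind compared above is a machine of one's own exposing files over the network. The minimal sketch below (Python standard library only; it is not one of the solutions evaluated in the project) serves a directory over HTTP the way a personal cloud node would:

```python
import tempfile
import threading
import urllib.request
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path

def serve_directory(root):
    """Start serving `root` over HTTP on an ephemeral port.
    Returns (server, port); call server.shutdown() to stop."""
    handler = partial(SimpleHTTPRequestHandler, directory=root)
    server = ThreadingHTTPServer(("127.0.0.1", 0), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]

if __name__ == "__main__":
    # Create a throwaway "cloud" folder with one shared file.
    root = tempfile.mkdtemp()
    Path(root, "note.txt").write_text("hello from my personal cloud")
    server, port = serve_directory(root)
    body = urllib.request.urlopen(f"http://127.0.0.1:{port}/note.txt").read()
    print(body.decode())
    server.shutdown()
```

A real deployment would of course add authentication, TLS and synchronization, which is precisely what the compared open-source suites layer on top of this basic idea.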
Abstract:
This project describes the audio installation of a digital music recording studio. Its purpose is purely educational: to consolidate concepts covered during the degree. The installation is fictitious and has not been implemented in a real facility; even so, it has been developed to a professional standard. The work was divided into several phases. First, information was gathered on recording studios, focusing mainly on their configurations; in parallel, information on the main equipment found in a recording studio was collected and a brief market survey performed. Next, the configuration of the studio's equipment was chosen, weighing the advantages and disadvantages of each type of configuration. The third phase was the choice of equipment: following the audio chain, the need for each unit was analyzed, the candidates for each block were compared, and the most appropriate one was selected and justified. In the final phase, all the equipment was interconnected according to the configuration chosen in the second phase, documented in a series of tables specifying each type of connection. The project closes with a budget, divided into several sections, and with conclusions analyzing both the objectives set out at the start of the project and a personal assessment of the project as a whole.
Abstract:
The deployment of home-based smart health services requires effective and reliable systems for personal and environmental data management. Cooperation between Home Area Networks (HAN) and Body Area Networks (BAN) can provide smart systems with ad hoc reasoning information to support health care. This paper details the implementation of an architecture that integrates BAN, HAN and intelligent agents to manage physiological and environmental data and proactively detect risk situations in the digital home. The system monitors dynamic situations and adjusts its behavior in a timely manner to detect user risks concerning health. This work thus provides a reasoning framework to infer appropriate solutions in cases of health risk episodes. The proposed smart health monitoring approach integrates complex reasoning over the home environment, the user profile and physiological parameters, defined by a scalable ontology. As a result, health care demands can be detected, adequate internal mechanisms activated, and public health services notified to request action.
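The proactive detection described above amounts to reasoning jointly over the user profile, physiological readings and home-environment state. The toy rules below are illustrative assumptions standing in for the paper's ontology-driven framework (the thresholds and rule names are invented for the sketch):

```python
# Toy risk-detection rules combining BAN (physiological) and HAN
# (environmental) data; thresholds and rules are illustrative
# assumptions, not the paper's ontology.

def detect_risks(profile, physio, env):
    """Return a list of (risk, suggested_action) pairs."""
    risks = []
    hr_max = 220 - profile["age"]          # common rule-of-thumb ceiling
    if physio["heart_rate_bpm"] > 0.9 * hr_max:
        risks.append(("tachycardia", "alert caregiver"))
    if physio["spo2_pct"] < 90:
        risks.append(("low blood oxygen", "call health services"))
    if env["room_temp_c"] > 30 and profile.get("heat_sensitive"):
        risks.append(("heat stress", "lower thermostat"))
    if env["co_ppm"] > 50:
        risks.append(("carbon monoxide", "ventilate and evacuate"))
    return risks
```

For a 70-year-old heat-sensitive user with a heart rate of 150 bpm in a 32 °C room, the rules flag both tachycardia and heat stress, each paired with the internal mechanism or external notification to trigger.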
Abstract:
This doctoral thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been an academic research topic for some 17 years. Related research has grown notably over recent decades, while designs offering solid and efficient protection against such attacks remain an open research problem, one in which more reliable initiatives are needed to protect personal, corporate and national data. The first documented use of secret coding dates back to around 1700 B.C., when the hieroglyphs of ancient Egypt were carved in inscriptions. Information security has always been a key factor in the transmission of data of diplomatic or military significance. With the rapid evolution of modern communication techniques, encryption solutions were introduced to guarantee the security, integrity and confidentiality of content transmitted over insecure wires or wireless media. Given the limited computing power available before the computer era, simple ciphers were more than sufficient to conceal information; however, their algorithmic weaknesses could be exploited to recover the encoding rule without much effort. This motivated new research in cryptography aimed at protecting information systems against sophisticated algorithms. The invention of the computer greatly accelerated the implementation of secure cryptography, offering resistance matched to ever greater computing capabilities; sophisticated cryptanalysis has, in turn, driven computing technology forward.
Today the information world is thoroughly entangled with cryptography, which protects every domain through a variety of encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theory and efficient hardware practice, making them implementable on multiple platforms (microprocessor, ASIC, FPGA, etc.). Industrial security needs and requirements are the main driving metrics in electronic design, with the goal of shipping powerful products without sacrificing customers' security. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 showed that a digital circuit is inherently vulnerable to an unconventional attack, later named the side-channel attack after its source of analysis; criticism of theoretically secure cryptographic algorithms arose almost immediately after this discovery. Digital circuits typically consist of an enormous number of elementary logic cells (such as MOS, Metal Oxide Semiconductor, transistors) built on a silicon substrate during fabrication. The circuit's logic is realized through the countless switching events of these cells, a mechanism that inevitably produces physical emanations that can be measured and correlated with the circuit's internal behavior. SCA can be used to reveal confidential data (for example, cryptographic keys), to analyze the logic architecture and timing, and even to inject malicious faults into circuits deployed in embedded systems such as FPGAs, ASICs or smart cards.
By correlating an estimated amount of leakage against the leakage actually measured, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as power-consumption analysis and ElectroMagnetic (EM) radiation analysis; both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to resist SCA, so measures that camouflage the leakage through "side channels" must be integrated during circuit implementation. Countermeasures against SCA evolve alongside the development of new attack techniques and the continuous improvement of electronic devices. The physical nature of the leakage calls for countermeasures at the physical layer, which can generally be classified as extrinsic or intrinsic. Extrinsic countermeasures confuse the attack source by injecting noise or misaligning the internal activity. Intrinsic countermeasures, by contrast, are integrated into the implementation itself, modifying it to minimize the measurable leakage or even make it unmeasurable; Hiding and Masking are the two typical techniques in this category. Masking is applied at the algorithmic level, reversibly blinding sensitive intermediate data with a mask; unlike linear operations, the nonlinear operations that abound in modern ciphers are difficult to mask. Hiding, which has been verified as an effective solution, mainly comprises dual-rail coding, devised specifically to flatten or remove the data-dependent leakage in power or EM.
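The correlation comparison described above is the core of Correlation Power Analysis (CPA). The toy sketch below recovers a 4-bit key from simulated Hamming-weight power traces; the cipher step (the PRESENT cipher's S-box), the noise level and the trace count are illustrative choices, not the thesis's setup (which attacks a commercial AES core):

```python
import random

# PRESENT cipher's 4-bit S-box, used here only as a convenient
# small nonlinear target.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def hw(x):
    """Hamming weight: the standard leakage model for CMOS switching."""
    return bin(x).count("1")

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs)
    vy = sum((b - my) ** 2 for b in ys)
    return cov / (vx * vy) ** 0.5

def simulate_traces(key, n, sigma=0.5, seed=42):
    """Leak HW(SBOX[p ^ key]) plus Gaussian noise for n random plaintexts."""
    rng = random.Random(seed)
    pts = [rng.randrange(16) for _ in range(n)]
    traces = [hw(SBOX[p ^ key]) + rng.gauss(0, sigma) for p in pts]
    return pts, traces

def cpa(pts, traces):
    """Pick the key guess whose modeled leakage best correlates
    with the measured traces."""
    return max(range(16),
               key=lambda g: pearson([hw(SBOX[p ^ g]) for p in pts], traces))

pts, traces = simulate_traces(key=0xA, n=300)
print(hex(cpa(pts, traces)))   # 0xa
```

With a few hundred noisy traces the correct guess stands out clearly, which is exactly why unprotected implementations leak their keys with "far less time and computation" than cryptanalysis would need.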
In this thesis, besides describing the attack methodologies, a major effort is devoted to the structure of the proposed logic prototype, in order to pursue security research on logic-level architectural countermeasures. One characteristic of SCA lies in the format of the leakage sources. The typical side-channel attack is power-based analysis, where the intrinsic capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. A robust SCA-resistant logic must therefore remove or mitigate the leakage of these micro-units, such as the basic logic gates, the I/O ports and the routing. The EDA tools supplied by vendors manipulate the logic from a higher level, rather than from the gate level where side-channel leakage manifests itself; classical implementation flows can hardly satisfy these needs and inevitably cripple the prototype. A customized and flexible design scheme therefore has to be considered. This thesis presents the design and implementation of an innovative logic to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy applied over a dual-rail circuit at the gate level, to dynamically balance the leakage of the lower layers; II. The logic exploits the architectural features of FPGAs to minimize the resource overhead of the implementation; III. It relies on a set of customized assistant tools, incorporated into the generic FPGA design flow, to manipulate the circuits automatically. The automated design toolkit supports the proposed dual-rail logic, easing its practical application on the Xilinx FPGA family.
The methodology and tools are in this sense flexible enough to be extended to a wide range of applications requiring much stricter and more sophisticated gate-level or routing constraints. A substantial effort is made in this thesis to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively against previous solutions. All the proposals are backed by experimental attacks that validate the security advantages of the system. This research work aims to close the gap between the implementation barriers and the effective deployment of dual-rail logic. In essence, this thesis describes a set of FPGA implementation tools developed to work alongside the generic FPGA design flow in order to create dual-rail logic in an innovative way. A new approach to security in cryptographic circuits is proposed, obtaining customization, automation and flexibility in fine-grained low-level circuit prototyping. The main contributions of this research work are briefly summarized as follows: Precharge Absorbed-DPL (PA-DPL) logic: netlist conversion that reserves free LUTs to drive the precharge and Ex signals of a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps increase resistance against selective EM measurement and mitigates the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic.
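The principle behind dual-rail logic can be illustrated without hardware: encoding each bit b as the complementary pair (b, not b) makes every word's encoded Hamming weight constant, so a weight-based power model sees no data dependence. The sketch below is a behavioral toy, not the thesis's PA-DPL gate implementation:

```python
def dual_rail_encode(word, width=8):
    """Encode each bit b of `word` as the pair (b, 1-b):
    a true rail and a false rail."""
    bits = [(word >> i) & 1 for i in range(width)]
    return [(b, 1 - b) for b in bits]

def hamming_weight(rails):
    """Total number of 1s across both rails: the quantity a
    Hamming-weight power model would observe."""
    return sum(t + f for t, f in rails)

# Every 8-bit value encodes to the same weight, hiding the data
# from a weight-based leakage model.
weights = {hamming_weight(dual_rail_encode(w)) for w in range(256)}
print(weights)   # {8}
```

In real silicon the balance must also hold for switching activity and routing capacitance, which is why the thesis invests in identical routing pairs and gate-level conversion rather than relying on the encoding alone.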
(a) To detect and repair routing conflicts; (b) to detect and repair asymmetric routes; (c) to be used in other logic styles where strict control of the interconnections is required in Xilinx-based applications. A customized CPA testbed for EM and power analysis, including the construction of the platform, the measurement method, and the attack analysis. A timing analysis to quantify the security levels. A security partitioning for the partial conversion of a complex cipher system in order to reduce the protection cost. A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This PhD thesis is organized as follows: Chapter 1 covers the fundamentals of side-channel attacks, from basic concepts and analysis models to the implementation of the platform and the execution of the attacks. Chapter 2 covers SCA-resistance strategies against differential power and EM attacks. In addition, this chapter proposes, as a major contribution, a compact and secure dual-rail logic, and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges related to the implementation of generic dual-rail logic, describing a customized design flow that solves the implementation problems together with a proposed automatic development toolkit that mitigates the design barriers and eases the process. Chapter 4 describes in detail the elaboration and implementation of the proposed tools.
The security verification and validation of the proposed logic, together with a sophisticated routing-security verification experiment, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the perspectives for future lines of work are included in Chapter 6. To go deeper into the content of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform and the basic theory of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essentials of side-channel methods, which reveal the dissipation leakage correlated with the internal behavior, together with the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internally processed data; (d) the configurations of the power/EM test platforms covered in this thesis. The content of the thesis is expanded and deepened from Chapter 2 onwards, which addresses several key aspects. First, the protection principle of dynamic compensation in generic Dual-rail Precharge Logic (DPL) is explained by describing the compensated gate-level elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows are shown for performing the dual-rail conversion. Along with this, the technical definitions related to manipulations above the LUT-level netlist are clarified.
Finally, a brief discussion of the overall process closes the chapter. Chapter 3 studies the main challenges during the implementation of DPLs on FPGAs. The security level of the SCA-resistant solutions found in the state of the art has degraded because of the implementation barriers raised by conventional EDA tools. In the FPGA architecture scenario under study, the problems of dual-rail formats, parasitic impacts, technological bias, and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without penalizing the security level, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated through a series of initiatives, from structural features such as interleaved dual rails or cloned routing networks to implementation methods such as the customized EDA automation tools. Furthermore, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature and balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the customized toolkit is able to manipulate the logic elements of the post-P&R ncd circuit (an unreadable binary version of the xdl) once converted to the Xilinx XDL format. The mechanism and rationale of the proposed tools are carefully described, covering the routing detection and repair approaches.
The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter also specifies the technical foundations that support the implementations on Xilinx devices and their flexibility to be used in other applications. Chapter 5 focuses on the case studies used to validate the security level of the proposed logic. The detailed technical problems encountered during execution, as well as some new implementation techniques, are discussed. (a) The impact on the placement process of the logic when using the proposed toolkit is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find optimized placement and repair plans; (b) the security validations are performed with correlation and timing-analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate, in a sophisticated way, the impact of routing on the security metrics; (d) preliminary results of the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete cipher design is introduced. Chapter 6 includes the overall summary of the work presented in this PhD thesis. Finally, a brief perspective on future work is given, which may extend the potential use of the contributions of this thesis beyond the domain of cryptography on FPGAs. ABSTRACT This PhD thesis mainly concentrates on countermeasure techniques against the Side-Channel Attack (SCA), which has been the subject of academic exploration for nearly two decades.
The related research has seen remarkable growth in the past decades, yet the design of solid and efficient protection curiously remains an open research topic in which more reliable initiatives are required for personal information privacy and for enterprise and national data protection. The earliest documented use of secret code can be traced back to around 1700 B.C., when hieroglyphs were carved into inscriptions in ancient Egypt. Information security has always received serious attention in diplomatic and military intelligence transmission. With the rapid evolution of modern communication techniques, cryptographic solutions were first incorporated into electronic signals to ensure the confidentiality, integrity, availability, authenticity, and non-repudiation of the contents transmitted over insecure cable or wireless channels. Given the limited computation power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be exploited to recover the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers resistance efficiently by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn greatly driven computing technologies. Nowadays, the information world has become a crypto world, with pervasive cryptographic solutions protecting every field. These approaches are strong because of the optimized merger of modern mathematical theory and effective hardware practice, capable of implementing crypto theories on various platforms (microprocessors, ASICs, FPGAs, etc.).
Industrial security needs are in fact a major driving metric in electronic design, promoting the construction of systems with high performance that do not sacrifice security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, since then named the side-channel attack after its analysis source. Serious doubts about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. Specifically, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal-Oxide-Semiconductor, transistors) built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably produces characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. the crypto key), analyze the logic architecture and timing, and even inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs, and smart cards. Using various means of comparison between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at a much lower expense of time and computation. More precisely, SCA encloses a wide range of attack types, typically the analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analyses and hence require a number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of the severity of the threat, great care must be taken in the implementation to assemble countermeasures that camouflage the leakage through these "side channels". Countermeasures against SCA evolve along with the development of attack techniques.
These physical characteristics call for countermeasures at the physical layer, which can be generally classified into intrinsic and extrinsic vectors. Extrinsic countermeasures aim to confuse the attacker by injecting noise or misalignment into the internal activities. Comparatively, intrinsic countermeasures are built into the algorithm or implementation itself, modifying it to minimize the measurable leakage or to make that leakage insensitive to the processed data. Hiding and masking are the two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in a reversible way. Unlike linear operations, the non-linear operations that are widespread in modern cryptography are difficult to mask. Proven to be an effective counter-solution, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from the context describing the attack methodologies, effort has also been dedicated to the logic prototype, in order to mount extensive security investigations into logic-level countermeasures. One characteristic of SCA lies in the nature of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of the MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports, and routing. The vendor-provided EDA tools manipulate the logic from the higher behavioral level rather than the lower gate level, where side-channel leakage is generated. The classical implementations therefore barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised.
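The masking principle described above can be illustrated with a toy first-order Boolean masking scheme. This is a generic textbook construction, not a technique from this thesis, and all names are invented: a sensitive value is split into two shares so that neither share alone correlates with the secret, and linear (XOR) operations distribute over the shares.

```python
import secrets

def mask(value: int):
    # Split an 8-bit value into two shares (value ^ m, m) with a fresh mask.
    m = secrets.randbits(8)
    return value ^ m, m

def masked_xor(shares_a, shares_b):
    # XOR is linear, so it can be applied share-wise without unmasking.
    return shares_a[0] ^ shares_b[0], shares_a[1] ^ shares_b[1]

def unmask(shares) -> int:
    # Recombining the shares recovers the sensitive value.
    return shares[0] ^ shares[1]

a, b = 0x5A, 0xC3
result = unmask(masked_xor(mask(a), mask(b)))
assert result == (a ^ b)
```

Non-linear steps such as an S-box lookup cannot be handled this simply, which is exactly the difficulty the text attributes to masking modern ciphers and a motivation for hiding-style countermeasures instead.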
This thesis profiles an innovative logic style to counter SCA, which mainly addresses three aspects: I. The proposed logic is based on a hiding strategy in a gate-level dual-rail style, to dynamically balance the side-channel leakage from the lower circuit layers; II. The logic exploits the architectural features of modern FPGAs to minimize the implementation expense; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to perform circuit manipulations automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families. Moreover, the methodology and the tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate- or routing-level constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated on selected and widely used crypto algorithms, with thorough and fair evaluation with respect to prior solutions. All the proposals are effectively verified by security experiments. The presented research work attempts to solve these implementation troubles. The essence formalized along this thesis is a customized execution toolkit for modern FPGA systems, developed to work together with the generic FPGA design flow in order to create the innovative dual-rail logic. A method in the crypto-security area is constructed to obtain customization, automation, and flexibility in low-level circuit prototyping with fine granularity over intractable routings. The main contributions of the presented work are summarized next: Precharge Absorbed DPL (PA-DPL) logic: using netlist conversion to reserve free LUT inputs to drive the Precharge and Ex signals in a dual-rail logic style.
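The dual-rail hiding idea that PA-DPL builds on can be modeled abstractly: every signal is carried by a complementary pair, so that in each evaluation exactly one rail is active regardless of the data value, balancing the switching activity. The behavioral sketch below is purely illustrative; it ignores precharge timing, LUT mapping, routing, and everything else FPGA-specific, and the function names are invented.

```python
def encode(bit: int):
    # Single-rail bit -> dual-rail pair (true_rail, false_rail).
    return (bit, 1 - bit)

def decode(pair) -> int:
    t, f = pair
    # (0, 0) / (1, 1) would be the precharge/spacer or an error state.
    assert t != f, "invalid dual-rail codeword"
    return t

def dual_rail_and(a, b):
    a_t, a_f = a
    b_t, b_f = b
    # True rail computes AND; false rail computes the De Morgan complement,
    # so the pair of gates always produces one active output per evaluation.
    return (a_t & b_t, a_f | b_f)

# Exhaustive check of the 2-input truth table.
for x in (0, 1):
    for y in (0, 1):
        assert decode(dual_rail_and(encode(x), encode(y))) == (x & y)
```

The security of a real DPL depends on the two rails also having matched loads and routing, which is precisely why the thesis invests in identical routing networks rather than in the logic function alone.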
A row-crossed interleaved placement method with identical routing pairs in the dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair conflicting nets; (b) to detect and repair asymmetric nets; (c) to be used in other logic styles where strict network control is required in the Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, the measurement method, and the attack analysis. A timing-analysis-based method for quantifying the security grades. A security-partitioning methodology for complex crypto systems to reduce the protection cost. A proof-of-concept self-adaptive heating system to mitigate the electrical impact of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering theoretical basics, analysis models, platform setup, and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow that solves the implementation problems is described, along with a self-developed automatic implementation toolkit for mitigating the design barriers and facilitating the process. Chapter 4 elaborates the tool specifics and construction details.
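Detecting asymmetric nets, as in contribution (b) above, reduces at its core to comparing each dual-rail pair's routing under a rail-name translation. The sketch below is hypothetical: the arc strings, the `_T`/`_F` naming convention, and the net records are all invented for illustration, and real XDL routing data is far richer than this.

```python
def normalize(arcs, rail_tag):
    # Map rail-specific resource names to a rail-neutral form so the two
    # rails of a pair can be compared arc by arc.
    return [arc.replace(rail_tag, "RAIL") for arc in arcs]

def find_asymmetric_pairs(pairs):
    """Return the names of dual-rail pairs whose routes differ."""
    bad = []
    for name, (true_arcs, false_arcs) in pairs.items():
        if normalize(true_arcs, "_T") != normalize(false_arcs, "_F"):
            bad.append(name)
    return bad

nets = {
    # Symmetric pair: same arc sequence modulo the rail tag.
    "sbox_out0": (["A_T->N1_T", "N1_T->B_T"], ["A_F->N1_F", "N1_F->B_F"]),
    # Asymmetric pair: the false rail took a different intermediate node.
    "sbox_out1": (["A_T->N1_T", "N1_T->B_T"], ["A_F->N2_F", "N2_F->B_F"]),
}
assert find_asymmetric_pairs(nets) == ["sbox_out1"]
```

A mismatch flags a pair whose rails have different parasitic delays and capacitances, and hence a residual data-dependent leakage, which is what the repair step must then eliminate.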
The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and the perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides the introduction of the hardware implementation testbed and the side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module, is detailed; (c) the essentials of side-channel methods are profiled, revealing the dissipation leakage correlated with internal behaviors and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms enclosed in the thesis work are given. The content of this thesis is expanded and deepened from Chapter 2, which is divided into several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed by detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown. Meanwhile, the technical definitions concerning manipulations above the LUT-level netlist are clarified. A brief discussion about the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is degraded due to the implementation barriers raised by conventional EDA tools.
In the studied FPGA scenario, the problems are discussed in terms of dual-rail format, parasitic impact, technological bias, and implementation feasibility. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is validated with a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and implemented on a dual-core logic, intended to alternately adjust the local temperature in order to balance the negative impacts of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R ncd circuit (an unreadable binary counterpart of the xdl) once converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, for both separate and interleaved placement. This chapter particularly specifies the technical essentials that support the implementations in Xilinx devices and the flexibility to be expanded to other applications. Chapter 5 focuses on the implementation of the case studies for validating the security grades of the proposed logic style built with the proposed toolkit. Comprehensive implementation techniques are discussed. (a) The placement impacts of using the proposed toolkit are discussed.
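The repair approach mentioned here can be caricatured as cloning the healthy rail's route under a name translation, so that both rails traverse identical switch patterns. The sketch below is hypothetical: the resource names and the `_T`/`_F` convention are invented, and a real toolkit must of course allocate actual, legal device routing resources rather than string-rewrite labels.

```python
def clone_routing(true_arcs):
    # Rebuild the false rail's arc list from the true rail's, translating
    # true-rail resource names to their false-rail counterparts.
    return [arc.replace("_T", "_F") for arc in true_arcs]

true_rail = ["SLICE_X4Y7_T->PIP_A_T", "PIP_A_T->SLICE_X4Y8_T"]
false_rail = ["SLICE_X4Y7_F->PIP_B_F"]  # asymmetric: uses a different switch

# Repair: replace the asymmetric route with a translated clone.
false_rail = clone_routing(true_rail)

# Both rails now match once the rail tags are stripped.
assert [a.replace("_F", "") for a in false_rail] == \
       [a.replace("_T", "") for a in true_rail]
```

After such a repair, the two rails present matched wire lengths and switch usage, which is the precondition for the "strictly identical routing networks" the toolkit targets.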
Different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are realized with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) the preliminary results of using the self-adaptive heating system under process variation are given; (e) a practical implementation of the proposed toolkit on a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis. Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.