894 results for lack of catalytic mechanism
Abstract:
Side Channel Attacks (SCAs) typically gather unintentional physical (side channel) leakages from running crypto-devices to reveal confidential data. Dual-rail Precharge Logic (DPL) is one of the most efficient countermeasures against power or EM side channel threats. This logic relies on complementary rails that counterbalance the data-dependent variations in leakage caused by the dynamic behavior of the original circuit. However, the lack of flexibility of commercial FPGA design tools makes it quite difficult to obtain completely balanced routings between complementary networks. This paper presents a controllable repair mechanism that guarantees identical net pairs across the two rails: (i) it repairs nets that are identical yet conflicting after duplication (copy & paste) from the original rail to the complementary rail, and (ii) it repairs non-identical nets in off-the-shelf DPL circuits. These rerouting steps are carried out starting from a placed and routed netlist in Xilinx Description Language (XDL). The low-level XDL modifications have been completely automated using a set of APIs named RapidSmith. Experimental EM attacks show that the resistance level of an AES core after the automatic routing repair increases by a factor of at least 3.5. Timing analyses further show that net delay differences between complementary networks are reduced significantly.
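As a rough illustration of what such a netlist-level check involves, here is a minimal sketch that parses the net blocks of an XDL dump, counts the PIPs (programmable interconnect points) each net uses, and reports complementary net pairs whose PIP counts differ, a crude proxy for routing imbalance. The input file name and the "_b" suffix convention for the complementary rail are assumptions made for illustration; the paper's tool instead manipulates the real XDL structures through the RapidSmith APIs.

```python
# Hedged sketch: flag complementary net pairs with different routing "length".
import re
from collections import defaultdict

def parse_net_pips(xdl_text):
    """Map net name -> set of 'pip ...' lines appearing inside its net block."""
    nets = defaultdict(set)
    current = None
    for raw in xdl_text.splitlines():
        line = raw.strip()
        m = re.match(r'net\s+"([^"]+)"', line)
        if m:
            current = m.group(1)          # a new net block starts
        elif current and line.startswith("pip "):
            nets[current].add(line.rstrip(" ,;"))
    return nets

def unbalanced_pairs(nets, suffix="_b"):
    """Yield pairs whose PIP counts differ (a crude proxy for route length)."""
    for name, pips in nets.items():
        twin = nets.get(name + suffix)
        if twin is not None and len(pips) != len(twin):
            yield name, len(pips), len(twin)

with open("aes_dpl.xdl") as f:            # hypothetical netlist dump
    nets = parse_net_pips(f.read())
for name, n_true, n_comp in unbalanced_pairs(nets):
    print(f"net {name}: {n_true} vs {n_comp} PIPs on the complementary rail")
```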
Abstract:
The extraordinary rise of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage has brought great benefits to all areas of society. Alongside them come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal world and the business world in terms of access to information. In both worlds, Cryptography has played a key role, providing the tools needed to guarantee the confidentiality, integrity, and availability of personal data and information. Biometrics, in turn, has proposed and offered techniques to authenticate individuals through personal traits such as fingerprints, iris, hand geometry, voice, gait, and so on. Each of these two sciences, Cryptography and Biometrics, solves specific problems of data protection and user authentication, and both would be greatly strengthened if certain of their characteristics were combined toward common objectives. It is therefore imperative to intensify research that combines the mathematical algorithms and primitives of Cryptography with Biometrics, to meet the growing demand for new solutions that are more secure and easier to use and that simultaneously improve data protection and user identification. Within this combination, cancelable biometrics has become a cornerstone of user authentication and identification, since it gives biometric traits revocation and cancellation properties. The contribution of this thesis concerns the central task of Biometrics, the secure and efficient authentication of users through their biometric traits, approached in three different ways: 1. The design of a fuzzy crypto-biometric scheme that implements the principles of cancelable biometrics to identify users, exploiting the fuzziness of biometric templates while dealing with intra- and inter-user variability and without compromising the templates extracted from legitimate users. 2. The design of a new Similarity Preserving Hash Function (SPHF). Such functions are currently used in digital forensics to find similarities in the content of distinct but similar files and to quantify to what extent the files can be considered the same. The function defined in this work, besides improving the results of the main functions developed to date, aims to extend their use to iris template comparison. 3. The development of a new iris template comparison mechanism that treats the templates as signals and compares them using the Walsh-Hadamard transform (complemented with three other algorithms). The results obtained are excellent in light of the security and privacy requirements mentioned above.

Each of the three schemes has been implemented so that experiments could test its operational efficacy in scenarios that simulate real situations: the fuzzy crypto-biometric scheme and the SPHF were implemented in Java, and the Walsh-Hadamard-based process in Matlab. The experiments used a database of iris images (CASIA-IrisV2) to simulate a population of system users. The new SPHF is a special case: before being applied to biometrics, it was also tested for its usefulness in digital forensics by comparing files and images with similar and dissimilar content. For each scheme, the efficiency and effectiveness ratios for user authentication, i.e., the False Non-Match and False Match Rates, were computed with different parameters and cases to analyse their behaviour.
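As an illustration of the third approach, the sketch below treats two binary iris templates as ±1 signals, takes their fast Walsh-Hadamard transforms, and correlates the low-order spectral coefficients. This is a minimal sketch of the general idea only: the preprocessing, coefficient selection, and scoring actually used in the thesis are not specified here, and all parameter values are illustrative.

```python
import numpy as np

def fwht(a):
    """Orthonormal fast Walsh-Hadamard transform; len(a) must be a power of 2."""
    a = np.asarray(a, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            x, y = a[i:i + h].copy(), a[i + h:i + 2 * h].copy()
            a[i:i + h], a[i + h:i + 2 * h] = x + y, x - y
        h *= 2
    return a / np.sqrt(len(a))

def iris_similarity(code_a, code_b, k=64):
    """Correlate the first k WHT coefficients of two equal-length templates."""
    sa = fwht(2 * np.asarray(code_a) - 1)[:k]   # map bits {0,1} to {-1,+1}
    sb = fwht(2 * np.asarray(code_b) - 1)[:k]
    return float(sa @ sb / (np.linalg.norm(sa) * np.linalg.norm(sb)))

rng = np.random.default_rng(1)
code = rng.integers(0, 2, 256)
noisy = code ^ (rng.random(256) < 0.05)   # same user, 5% bit flips
other = rng.integers(0, 2, 256)           # a different user
print(iris_similarity(code, noisy))       # close to 1
print(iris_similarity(code, other))       # near 0
```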
Abstract:
The Internet is evolving into what is known as the Live Web (or Evented Web). In this new stage of its evolution, a multitude of social data streams is put at the service of users, who have gone from browsing static web pages to interacting with applications that offer contextual, personalized content based on their preferences. Each user interacts daily with multiple applications that issue notifications and alerts; in this sense every user is a source of events, and users often feel overwhelmed, unable to process all that information on demand. To deal with this overload, many tools have emerged that automate the most common tasks, from inbox managers and social network alert managers to complex CRMs and smart-home hubs. The downside is that, although they solve common problems, they cannot adapt to the needs of each user with a personalized solution. Task Automation Services (TASs) entered the scene around 2012 to overcome this limitation. They can be seen as a new, end-user-centred model of mash-up technology for combining social streams, services, and Internet-connected devices: end-users are empowered to interconnect them however they want, designing the automations that fit their needs. The approach has been widely accepted by users, and as a consequence the number of platforms offering TASs is growing fast. Being a novel field of research, this thesis presents and exemplifies the main characteristics of Task Automation Services, describes their components, and identifies the fundamental dimensions that define them and allow their classification. The thesis coins the term Task Automation Service (TAS), gives a formal definition of these services and their components (called channels), and provides a TAS reference architecture. There is also a lack of tools for describing automation services and automation rules. In this regard, the thesis proposes a common model and formalizes it as the EWE (Evented WEb) ontology. This model makes it possible to compare and match channels and automations from different TASs, a considerable contribution to the portability of user automations between platforms; being semantic, it also allows automations to include elements from external sources to reason over, such as Linked Open Data. Using this model, a dataset of channels and automations was built, harvesting data from some of the TASs on the market. As a final step towards a common model for describing TASs, an algorithm was developed to learn ontologies automatically from the dataset, which favours the discovery of new channels and reduces the maintenance cost of the model, now updated semi-automatically.

In conclusion, the main contributions of this thesis are: i) surveying the state of the art in task automation and coining the term Task Automation Service; ii) developing a semantic common model for describing TAS components and automations; iii) populating a dataset of channels and automations, used to develop an automatic ontology-learning algorithm; and iv) designing an agent architecture to assist users in creating automations, one that is aware of their context and acts accordingly.
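To make the modelling idea concrete, here is a hedged sketch of how an automation rule ("when a new photo is posted, email me") could be described as RDF triples with rdflib. The namespace URI and the term names (Rule, Event, Action, triggeredBy, firesAction) are placeholders chosen for illustration; the real vocabulary is the one defined by the EWE ontology itself.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EWE = Namespace("http://example.org/ewe#")    # placeholder, not the real URI
EX = Namespace("http://example.org/rules#")

g = Graph()
g.bind("ewe", EWE)
g.bind("ex", EX)

# One rule linking a trigger event to the action it fires.
g.add((EX.rule1, RDF.type, EWE.Rule))
g.add((EX.rule1, RDFS.label, Literal("Email me new photos")))
g.add((EX.newPhoto, RDF.type, EWE.Event))
g.add((EX.sendMail, RDF.type, EWE.Action))
g.add((EX.rule1, EWE.triggeredBy, EX.newPhoto))
g.add((EX.rule1, EWE.firesAction, EX.sendMail))

print(g.serialize(format="turtle"))
```

Because channels and rules from different platforms would be expressed against one vocabulary, rules from two TASs can be compared or ported by querying the same graph.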
Abstract:
This document is a summary of the Bachelor thesis titled “VHDL-Based System Design of a Cognitive Sensorimotor Loop (CSL) for Haptic Human-Machine Interaction (HMI)”, written by Pablo de Miguel Morales, an Electronics Engineering student at the Universidad Politécnica de Madrid (UPM, Madrid, Spain), during an Erasmus+ exchange program at the Beuth Hochschule für Technik (BHT, Berlin, Germany). The tutor of the project is Prof. Dr. Hild. The project was developed in the Neurorobotics Research Laboratory (NRL) in close collaboration with Benjamin Panreck, a member of the NRL, and with another exchange student from UPM, Pablo Gabriel Lezcano. For a deeper understanding of the work, a close reading of the full document is needed, as well as viewing of the videos and the VHDL design included on the attached CD. In the growing field of automation, a large amount of effort is devoted to improving, adapting, and designing motor controllers for a wide variety of applications. In the specific field of robotics and other machinery designed to interact with humans or their environment, new needs and technological solutions keep being discovered, given the relatively unexplored new scenario this represents. The project consists of three main parts: two VHDL-based systems and one short experiment on haptic perception. Both VHDL systems are based on the Cognitive Sensorimotor Loop (CSL), a control loop designed by the NRL and mainly developed by Prof. Dr. Hild. The defining characteristic of the CSL is that it uses no external sensor to measure the speed or position of the motor; the motor itself is the sensor, since it always generates a voltage proportional to its angular speed, so no calibration is needed. This method is energy efficient and simplifies control loops in complex systems. The first system, named CSL Stay In Touch (SIT), consists of a one-DC-motor system controlled by an FPGA board (Zynq ZYBO 7000) whose aim is to keep contact, in both directions, with any external object that touches its sensing platform. Beyond this main behavior, three features (Search Mode, Inertia Mode, and Return Mode) were designed to enhance the haptic interaction experience. Additionally, a VGA screen is also controlled by the FPGA board to monitor the whole system. This system was completely developed, tested, and improved, and its timing and power consumption properties were analyzed. The second system, named CSL Fingerlike Mechanism (FM), consists of a finger-like mechanical system controlled by two DC motors, each controlling one part of the finger. Its behavior is similar to that of the first system but within a more complex structure. This system was optional, not among the original objectives of the thesis, and could not be properly finished and tested for lack of time.

The haptic perception experiment was conducted to gain insight into the complexity of human haptic perception, with the aim of applying this knowledge to technological applications. It tested the ability of subjects to recognize different objects, shapes, and textures while blindfolded and with their ears covered. Two groups were formed: one had full haptic perception, while the other had to explore the environment with a plastic piece attached to a finger, creating a haptic handicap. The conclusion of the thesis was that a haptic system based only on CSLs is not enough to retrieve valuable information from the environment, and other sensors (temperature, pressure, etc.) are needed; a CSL-based system is, however, very useful for controlling the force the system applies when interacting with haptically sensitive surfaces such as skin or touch screens.
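The toy simulation below illustrates the CSL principle described above: no external sensor is used, the motor's back EMF (proportional to angular speed) serves as the speed reading, and the loop drives the motor against externally imposed motion, which is what lets the platform "stay in touch" with whatever pushes it. The first-order motor model and every constant are illustrative assumptions, not the thesis's VHDL design.

```python
# Toy CSL loop: sense speed via back EMF, then drive against the motion.
K_E = 0.05    # back-EMF constant, V per (rad/s)  (illustrative)
K_T = 0.05    # torque constant, N*m per A        (illustrative)
R = 2.0       # winding resistance, ohm
J = 1e-3      # rotor inertia, kg*m^2
DT = 1e-3     # time step, s
GAIN = -4.0   # negative feedback: oppose externally imposed motion

speed = 0.0   # rad/s
for step in range(3000):
    t = step * DT
    push = 0.02 if 0.5 < t < 1.5 else 0.0   # external torque pulse, N*m
    back_emf = K_E * speed                  # "sensing": the motor as tachometer
    drive = GAIN * back_emf                 # "acting": counter-voltage
    current = (drive - back_emf) / R
    speed += (K_T * current + push) / J * DT
    if step % 500 == 0:
        print(f"t={t:.2f}s  speed={speed:+.3f} rad/s")
```

While the push lasts, the speed settles where the motor's counter-torque balances the external torque; when it stops, the loop brings the platform back to rest, mirroring the contact-keeping behavior of the SIT system.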
Abstract:
Until a few years ago, most network communications used wire as the physical medium, but thanks to the advances and maturity of wireless communications this is changing. Wireless communications now offer fast, secure, efficient, and reliable connections. Mobile communications are in full expansion, clearly driven by the use of smartphones and other mobile devices, laptops, and so on. Moreover, the investment needed to install and maintain the physical medium is much lower than in wired communications, not only because the air costs nothing, but because installing and maintaining cable usually carries a high economic cost. Beyond the economic cost, wire is also a medium more vulnerable to external threats such as noise, eavesdropping, and sabotage. There are two types of wireless networks: those with an infrastructure that is more or less part of the network itself, and those lacking any structure or centralization, in which the participating devices can connect to each other dynamically and arbitrarily while also handling the routing of all control and data messages; the latter are known as ad-hoc networks. This final-year project studies one of the many wireless protocols that enable mobile communications: Optimized Link State Routing (hereafter OLSR), a standard proactive routing mechanism that works in a distributed fashion to establish connections among the nodes of wireless ad-hoc networks, which have no central node and no pre-existing infrastructure. Thanks to this protocol, every device keeps its routing tables correctly updated at all times through the periodic transmission of control messages, which allows complete connectivity among the devices in the network and, in turn, access to external networks such as virtual private networks or the Internet. The protocol could be used in environments such as airports, shopping malls, and the like. The study of OLSR relies on the network simulator Network Simulator 2 (NS2), a free, discrete-event network simulator written in C++. NS2 is used mainly in educational and research settings, supports both unicast and multicast protocols, and is used above all in research on mobile ad-hoc networks. It implements not only OLSR but a wide range of protocols for both wired and wireless networks, which makes it very useful for simulating different network and protocol configurations. The project therefore also presents several NS2 simulations in different scenarios with different configurations, covering wired networks and ad-hoc networks, in which OLSR is studied.

The project consists of four parts. First, a complete study of the OLSR protocol: its benefits and drawbacks, the different types of messages it defines, and small examples of how it operates. Next, a short introduction to Network Simulator 2, including its history, and to the companion tool NAM, which visualizes the packet exchange between the devices of a simulation in an intuitive, friendly way. The MASIMUM platform is also described; in an academic setting it provides students with software and documentation to ease the research and simulation of ad-hoc networks and sensors. Finally, two examples are presented: a simulation between two PCs in an Ethernet environment, and a wireless simulation among five mobile devices using the protocol under study, OLSR.
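A key technical ingredient of OLSR worth illustrating is its multipoint relay (MPR) mechanism: each node selects a small subset of its one-hop neighbors that together cover all of its two-hop neighbors, and only those relays re-broadcast flooded control traffic, which keeps the overhead of the periodic control messages low. The sketch below shows a greedy selection heuristic in the spirit of RFC 3626 on a made-up topology.

```python
def select_mprs(one_hop, two_hop_via):
    """one_hop: set of neighbor names; two_hop_via: neighbor -> set of
    two-hop nodes reachable through it. Returns a covering subset of one_hop."""
    uncovered = set().union(*two_hop_via.values()) if two_hop_via else set()
    mprs = set()
    while uncovered and one_hop - mprs:
        # Pick the neighbor that covers the most still-uncovered two-hop nodes.
        best = max(one_hop - mprs,
                   key=lambda n: len(two_hop_via.get(n, set()) & uncovered))
        gained = two_hop_via.get(best, set()) & uncovered
        if not gained:
            break               # the remaining two-hop nodes are unreachable
        mprs.add(best)
        uncovered -= gained
    return mprs

# Node A hears B, C and D; E, F and G are two hops away.
print(select_mprs({"B", "C", "D"},
                  {"B": {"E", "F"}, "C": {"F", "G"}, "D": {"G"}}))
# Two relays suffice to cover E, F and G, e.g. {'B', 'C'}.
```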
Abstract:
Heparin-like glycosaminoglycans, acidic complex polysaccharides present on cell surfaces and in the extracellular matrix, regulate important physiological processes such as anticoagulation and angiogenesis. Heparin-like glycosaminoglycan degrading enzymes or heparinases are powerful tools that have enabled the elucidation of important biological properties of heparin-like glycosaminoglycans in vitro and in vivo. With an overall goal of developing an approach to sequence heparin-like glycosaminoglycans using the heparinases, we recently have elaborated a mass spectrometry methodology to elucidate the mechanism of depolymerization of heparin-like glycosaminoglycans by heparinase I. In this study, we investigate the mechanism of depolymerization of heparin-like glycosaminoglycans by heparinase II, which possesses the broadest known substrate specificity of the heparinases. We show here that heparinase II cleaves heparin-like glycosaminoglycans endolytically in a nonrandom manner. In addition, we show that heparinase II has two distinct active sites and provide evidence that one of the active sites is heparinase I-like, cleaving at hexosamine–sulfated iduronate linkages, whereas the other is presumably heparinase III-like, cleaving at hexosamine–glucuronate linkages. Elucidation of the mechanism of depolymerization of heparin-like glycosaminoglycans by the heparinases and mutant heparinases could pave the way to the development of much needed methods to sequence heparin-like glycosaminoglycans.
Abstract:
The anti-atherogenic role of high density lipoprotein is well known even though the mechanism has not been established. In this study, we have used a novel model system to test whether removal of lipoprotein cholesterol from a localized depot will be affected by apolipoprotein A-I (apo A-I) deficiency. We compared the egress of cholesterol injected in the form of cationized low density lipoprotein into the rectus femoris muscle of apo A-I K-O and control mice. When the injected lipoprotein had been labeled with [3H]cholesterol, the t½ of labeled cholesterol loss from the muscle was about 4 days in controls and more than 7 days in apo A-I K-O mice. The loss of cholesterol mass had an initial slow (about 4 days) and a later more rapid component; after day 4, the disappearance curves for apo A-I K-O and controls began to diverge, and by day 7, the loss of injected cholesterol was significantly slower in apo A-I K-O than in controls. The injected lipoprotein cholesterol is about 70% in esterified form and undergoes hydrolysis, which by day 4 was similar in control and apo A-I K-O mice. The efflux potential of serum from control and apo A-I K-O mice was studied using media containing 2% native or delipidated serum. A significantly lower efflux of [3H]cholesterol from macrophages was found with native and delipidated serum from apo A-I K-O mice. In conclusion, these findings show that lack of apo A-I results in a delay in cholesterol loss from a localized depot in vivo and from macrophages in culture. These results provide support for the thesis that anti-atherogenicity of high density lipoprotein is related in part to its role in cholesterol removal.
Abstract:
Catalytic antibodies have shown great promise for catalyzing a tremendously diverse set of natural and unnatural chemical transformations. However, few catalytic antibodies have efficiencies that approach those of natural enzymes. In principle, random mutagenesis procedures such as phage display could be used to improve the catalytic activities of existing antibodies; however, these studies have been hampered by difficulties in the recombinant expression of antibodies. Here, we have grafted the antigen binding loops from a murine-derived catalytic antibody, 17E8, onto a human antibody framework in an effort to overcome difficulties associated with recombinant expression and phage display of this antibody. “Humanized” 17E8 retained similar catalytic and hapten binding properties as the murine antibody while levels of functional Fab displayed on phage were 200-fold higher than for a murine variable region/human constant region chimeric Fab. This construct was used to prepare combinatorial libraries. Affinity panning of these resulted in the selection of variants with 2- to 8-fold improvements in binding affinity for a phosphonate transition-state analog. Surprisingly, none of the affinity-matured variants was more catalytically active than the parent antibody and some were significantly less active. By contrast, a weaker binding variant was identified with 2-fold greater catalytic activity and incorporation of a single substitution (Tyr-100aH → Asn) from this variant into the parent antibody led to a 5-fold increase in catalytic efficiency. Thus, phage display methods can be readily used to optimize binding of catalytic antibodies to transition-state analogs, and when used in conjunction with limited screening for catalysis can identify variants with higher catalytic efficiencies.
Abstract:
The γ-aminobutyric acid type A (GABAA) receptor is a transmitter-gated ion channel mediating the majority of fast inhibitory synaptic transmission within the brain. The receptor is a pentameric assembly of subunits drawn from multiple classes (α1–6, β1–3, γ1–3, δ1, and ɛ1). Positive allosteric modulation of GABAA receptor activity by general anesthetics represents one logical mechanism for central nervous system depression. The ability of the intravenous general anesthetic etomidate to modulate and activate GABAA receptors is uniquely dependent upon the β subunit subtype present within the receptor. Receptors containing β2- or β3-, but not β1 subunits, are highly sensitive to the agent. Here, chimeric β1/β2 subunits coexpressed in Xenopus laevis oocytes with human α6 and γ2 subunits identified a region distal to the extracellular N-terminal domain as a determinant of the selectivity of etomidate. The mutation of an amino acid (Asn-289) present within the channel domain of the β3 subunit to Ser (the homologous residue in β1), strongly suppressed the GABA-modulatory and GABA-mimetic effects of etomidate. The replacement of the β1 subunit Ser-290 by Asn produced the converse effect. When applied intracellularly to mouse L(tk−) cells stably expressing the α6β3γ2 subunit combination, etomidate was inert. Hence, the effects of a clinically utilized general anesthetic upon a physiologically relevant target protein are dramatically influenced by a single amino acid. Together with the lack of effect of intracellular etomidate, the data argue against a unitary, lipid-based theory of anesthesia.
Abstract:
Oncoprotein 18/stathmin (Op18) has been identified recently as a protein that destabilizes microtubules, but the mechanism of destabilization is currently controversial. Based on in vitro microtubule assembly assays, evidence has been presented supporting conflicting destabilization models of either tubulin sequestration or promotion of microtubule catastrophes. We found that Op18 can destabilize microtubules by both of these mechanisms and that these activities can be dissociated by changing pH. At pH 6.8, Op18 slowed microtubule elongation and increased catastrophes at both plus and minus ends, consistent with a tubulin-sequestering activity. In contrast, at pH 7.5, Op18 promoted microtubule catastrophes, particularly at plus ends, with little effect on elongation rates at either microtubule end. Dissociation of tubulin-sequestering and catastrophe-promoting activities of Op18 was further demonstrated by analysis of truncated Op18 derivatives. Lack of a C-terminal region of Op18 (aa 100–147) resulted in a truncated protein that lost sequestering activity at pH 6.8 but retained catastrophe-promoting activity. In contrast, lack of an N-terminal region of Op18 (aa 5–25) resulted in a truncated protein that still sequestered tubulin at pH 6.8 but was unable to promote catastrophes at pH 7.5. At pH 6.8, both the full length and the N-terminal–truncated Op18 bound tubulin, whereas truncation at the C-terminus resulted in a pronounced decrease in tubulin binding. Based on these results, and a previous study documenting a pH-dependent change in binding affinity between Op18 and tubulin, it is likely that tubulin sequestering observed at lower pH resulted from the relatively tight interaction between Op18 and tubulin and that this tight binding requires the C-terminus of Op18; however, under conditions in which Op18 binds weakly to tubulin (pH 7.5), Op18 stimulated catastrophes without altering tubulin subunit association or dissociation rates, and Op18 did not depolymerize microtubules capped with guanylyl (α, β)-methylene diphosphonate–tubulin subunits. We hypothesize that weak binding between Op18 and tubulin results in free Op18, which is available to interact with microtubule ends and thereby promote catastrophes by a mechanism that likely involves GTP hydrolysis.
Abstract:
The G2 DNA damage and slowing of S-phase checkpoints over mitosis function through tyrosine phosphorylation of NIMX^cdc2 in Aspergillus nidulans. We demonstrate that breaking these checkpoints leads to a defective premature mitosis followed by dramatic rereplication of genomic DNA. Two additional checkpoint functions, uvsB and uvsD, also cause the rereplication phenotype after their mutation allows premature mitosis in the presence of low concentrations of hydroxyurea. uvsB is shown to encode a rad3/ATR homologue, whereas uvsD displays homology to rad26, which has only previously been identified in Schizosaccharomyces pombe. uvsB^rad3 and uvsD^rad26 have G2 checkpoint functions over mitosis and another function essential for surviving DNA damage. The rereplication phenotype is accompanied by lack of NIME^cyclinB, but ectopic expression of active nondegradable NIME^cyclinB does not arrest DNA rereplication. DNA rereplication can also be induced in cells that enter mitosis prematurely because of lack of tyrosine phosphorylation of NIMX^cdc2 and impaired anaphase-promoting complex function. The data demonstrate that lack of checkpoint control over mitosis can secondarily cause defects in the checkpoint system that prevents DNA rereplication in the absence of mitosis. This defines a new mechanism by which endoreplication of DNA can be triggered and maintained in eukaryotic cells.
Abstract:
Enzymatic transformations of macromolecular substrates such as DNA repair enzyme/DNA transformations are commonly interpreted primarily by active-site functional-group chemistry that ignores their extensive interfaces. Yet human uracil–DNA glycosylase (UDG), an archetypical enzyme that initiates DNA base-excision repair, efficiently excises the damaged base uracil resulting from cytosine deamination even when active-site functional groups are deleted by mutagenesis. The 1.8-Å resolution substrate analogue and 2.0-Å resolution cleaved product cocrystal structures of UDG bound to double-stranded DNA suggest enzyme–DNA substrate-binding energy from the macromolecular interface is funneled into catalytic power at the active site. The architecturally stabilized closing of UDG enforces distortions of the uracil and deoxyribose in the flipped-out nucleotide substrate that are relieved by glycosylic bond cleavage in the product complex. This experimentally defined substrate stereochemistry implies the enzyme alters the orientation of three orthogonal electron orbitals to favor electron transpositions for glycosylic bond cleavage. By revealing the coupling of this anomeric effect to a delocalization of the glycosylic bond electrons into the uracil aromatic system, this structurally implicated mechanism resolves apparent paradoxes concerning the transpositions of electrons among orthogonal orbitals and the retention of catalytic efficiency despite mutational removal of active-site functional groups. These UDG/DNA structures and their implied dissociative excision chemistry suggest biology favors a chemistry for base-excision repair initiation that optimizes pathway coordination by product binding to avoid the release of cytotoxic and mutagenic intermediates. Similar excision chemistry may apply to other biological reaction pathways requiring the coordination of complex multistep chemical transformations.
Abstract:
The reaction center (RC) from Rhodobacter sphaeroides converts light into chemical energy through the light-induced two-electron, two-proton reduction of a bound quinone molecule Q_B (the secondary quinone acceptor). A unique pathway for proton transfer to the Q_B site had so far not been determined. To study the molecular basis for proton transfer, we investigated the effects of exogenous metal ion binding on the kinetics of the proton-assisted electron transfer k_AB^(2) (Q_A^-• Q_B^-• + H^+ → Q_A(Q_BH)^-, where Q_A is the primary quinone acceptor). Zn^2+ and Cd^2+ bound stoichiometrically to the RC (K_D ≤ 0.5 μM) and reduced the observed value of k_AB^(2) 10-fold and 20-fold (pH 8.0), respectively. The bound metal changed the mechanism of the k_AB^(2) reaction. In native RCs, k_AB^(2) was previously shown to be rate-limited by electron transfer based on the dependence of k_AB^(2) on the driving force for electron transfer. Upon addition of Zn^2+ or Cd^2+, k_AB^(2) became approximately independent of the electron driving force, implying that the rate of proton transfer was reduced (≥10^2-fold) and has become the rate-limiting step. The lack of an effect of the metal binding on the charge recombination reaction D^+• Q_A Q_B^-• → D Q_A Q_B suggests that the binding site is located far (>10 Å) from Q_B. This hypothesis is confirmed by preliminary x-ray structure analysis. The large change in the rate of proton transfer caused by the stoichiometric binding of the metal ion shows that there is one dominant site of proton entry into the RC from which proton transfer to Q_B^-• occurs.
Abstract:
Natural ribozymes require metal ion cofactors that aid both in structural folding and in chemical catalysis. In contrast, many protein enzymes produce dramatic rate enhancements using only the chemical groups that are supplied by their constituent amino acids. This fact is widely viewed as the most important feature that makes protein a superior polymer for the construction of biological catalysts. Herein we report the in vitro selection of a catalytic DNA that uses histidine as an active component for an RNA cleavage reaction. An optimized deoxyribozyme from this selection requires l-histidine or a closely related analog to catalyze RNA phosphoester cleavage, producing a rate enhancement of ≈1-million-fold over the rate of substrate cleavage in the absence of enzyme. Kinetic analysis indicates that a DNA–histidine complex may perform a reaction that is analogous to the first step of the proposed catalytic mechanism of RNase A, in which the imidazole group of histidine serves as a general base catalyst. Similarly, ribozymes of the “RNA world” may have used amino acids and other small organic cofactors to expand their otherwise limited catalytic potential.
Abstract:
Many bacterial plasmids replicate by a rolling-circle mechanism that involves the generation of single-stranded DNA (ssDNA) intermediates. Replication of the lagging strand of such plasmids initiates from their single strand origin (sso). Many different types of ssos have been identified. One group of ssos, termed ssoA, which have conserved sequence and structural features, function efficiently only in their natural hosts in vivo. To study the host specificity of sso sequences, we have analyzed the functions of two closely related ssoAs belonging to the staphylococcal plasmid pE194 and the streptococcal plasmid pLS1 in Staphylococcus aureus. The pLS1 ssoA functioned poorly in vivo in S. aureus, as evidenced by accumulation of high levels of ssDNA, but supported efficient replication in vitro in staphylococcal extracts. These results suggest that one or more host factors that are present in sufficient quantities in S. aureus cell-free extracts may be limiting in vivo. Mapping of the initiation points of lagging strand synthesis in vivo and in vitro showed that DNA synthesis initiates from specific sites within the pLS1 ssoA. These results demonstrate that specific initiation of replication can occur from the pLS1 ssoA in S. aureus, although it plays a minimal role in lagging strand synthesis in vivo. Therefore, the poor functionality of the pLS1 ssoA in vivo in a nonnative host is caused by the low efficiency, rather than a lack of specificity, of the initiation process. We have also identified ssDNA promoters and mapped the primer RNAs synthesized by the S. aureus and Bacillus subtilis RNA polymerases from the pE194 and pLS1 ssoAs. The S. aureus RNA polymerase bound more efficiently to the native pE194 ssoA than to the pLS1 ssoA, suggesting that the strength of the RNA polymerase–ssoA interaction may play a major role in the functionality of ssoA sequences in Gram-positive bacteria.