706 results for E-voting
Abstract:
This paper presents a preliminary analysis of electronic voting schemes and of the requirements of Electronic Democracy, as part of the work carried out by the authors in the VOTESCRIPT project (TIC2000-1630-C02). A summary of the most relevant experiences in this field is discussed, and a basic classification of them is proposed according to the degree of process computerization. As shown, most of them take only a technological perspective, merely trying to imitate conventional voting schemes. A citizen-based, bottom-up perspective is proposed for analyzing the implementation of electronic voting systems in order to avoid citizen rejection. The paper also highlights the new technical possibilities that can be applied to the development of the realm of citizens' rights. Beyond conventional voting schemes, the paper proposes the use of advanced security services to extend the conceptualization of Electronic Democracy, in which citizens play a key role in decision-making processes.
Abstract:
Case-based reasoning (CBR) is a unique tool for the evaluation of possible failure of firms (EOPFOF) because of its ease of interpretation and implementation. Ensemble computing, a variation of group decision making in society, provides a potential means of improving the predictive performance of CBR-based EOPFOF. This research integrates bagging and proportion case-basing with CBR to generate a proportion-bagging CBR method for EOPFOF. Diverse multiple case bases are first produced by multiple case-basing, in which a volume parameter is introduced to control the size of each case base. Then, the classic case retrieval algorithm is implemented to generate diverse member CBR predictors. Majority voting, the most frequently used mechanism in ensemble computing, is finally used to aggregate the outputs of the member CBR predictors in order to produce the final prediction of the CBR ensemble. In an empirical experiment, we statistically validated the results of the CBR ensemble from multiple case bases by comparing them with those of multivariate discriminant analysis, logistic regression, classic CBR, the best member CBR predictor and a bagging CBR ensemble. The results on Chinese EOPFOF data from three years prior to failure indicate that the new CBR ensemble, which significantly improved CBR's predictive ability, outperformed all the comparative methods.
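As a rough illustration of the aggregation step described above, the sketch below implements majority voting over the class labels returned by ensemble members. It is a minimal, hypothetical example (the member CBR predictors are abstracted away as precomputed labels), not the paper's implementation.

```python
from collections import Counter

def majority_vote(member_predictions):
    """Aggregate the class labels of ensemble members by majority voting.

    member_predictions: list of labels, one per member predictor,
    e.g. 'fail' / 'healthy'. Ties are broken by insertion order.
    """
    return Counter(member_predictions).most_common(1)[0][0]

# Hypothetical votes of five member CBR predictors for a single firm.
votes = ['fail', 'healthy', 'fail', 'fail', 'healthy']
print(majority_vote(votes))  # -> 'fail'
```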
Abstract:
The primary hypothesis stated by this paper is that the use of social choice theory in Ambient Intelligence systems can significantly improve user satisfaction when accessing shared resources. A research methodology based on agent-based social simulation is employed to support this hypothesis and to evaluate these benefits. The result is a six-fold contribution, summarized as follows. Firstly, several considerable differences between this application case and the most prominent social choice application, political elections, have been found and described. Secondly, given these differences, a number of metrics to evaluate different voting systems in this scope have been proposed and formalized. Thirdly, given the presented application and the proposed metrics, the performance of a number of well-known electoral systems is compared. Fourthly, as a result of the performance study, a novel voting algorithm capable of obtaining the best balance between the reviewed metrics is introduced. Fifthly, to improve social welfare in the experiments, the voting methods are combined with cluster analysis techniques. Finally, the article is complemented by a free and open-source tool, VoteSim, which not only ensures the reproducibility of the experimental results presented, but also allows the interested reader to adapt the presented case study to different environments.
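As a minimal illustration of why the choice of voting system matters in this setting, the sketch below scores the same hypothetical preference profile under plurality and under the Borda count; a compromise option can lose the former and win the latter. This is not the VoteSim tool, and the profile and option names are invented.

```python
from collections import Counter

# Hypothetical ballots: each ranks options for a shared resource
# (say, the temperature of a shared room) from most to least preferred.
ballots = [
    ['cool', 'mild', 'warm'],
    ['cool', 'mild', 'warm'],
    ['warm', 'mild', 'cool'],
    ['warm', 'mild', 'cool'],
    ['mild', 'warm', 'cool'],
]

def plurality(ballots):
    """Winner by first preferences only."""
    return Counter(b[0] for b in ballots).most_common(1)[0][0]

def borda(ballots):
    """Winner by Borda count: a ballot over k options awards
    k-1, k-2, ..., 0 points from most to least preferred."""
    scores = Counter()
    for b in ballots:
        for points, option in enumerate(reversed(b)):
            scores[option] += points
    return scores.most_common(1)[0][0]

print(plurality(ballots))  # 'cool' (tied with 'warm' on first preferences;
                           # Counter breaks the tie by insertion order)
print(borda(ballots))      # 'mild': the compromise option wins
```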
Abstract:
Malware is a serious threat to the security of systems. With the widespread use of the World Wide Web, there has been a huge increase in virus attacks, making computer security essential for all computers and expanding the research areas that deal with newly generated incidents, one of these being malware classification. Malware developers use new techniques to generate polymorphic malware by reusing existing malware, so it is necessary to group variants into families in order to study their characteristics and to detect new variants. This work, in addition to presenting a detailed state of the art of the classification of PE executable malware, presents an approach that improves the classification rate of the MALICIA malware database using the static features Imphash and Pehash of executable files. Using these features, clustering is performed with an aggressive clustering algorithm, which is combined with the current classification through a majority voting algorithm and the icon_label feature, obtaining a Precision of 99.15% and a Recall of 99.32% and improving the classification of MALICIA with an F-measure of 99.23%.
Abstract:
This paper presents a robust approach for the recognition of thermal face images based on decision-level fusion of 34 different region classifiers. The region classifiers concentrate on local variations and use singular value decomposition (SVD) for feature extraction. The decisions of the region classifiers are fused using a majority voting technique. The algorithm is tolerant of the false exclusion of thermal information caused by inconsistent distributions of temperature statistics, which generally make the identification process difficult. The algorithm is extensively evaluated on the UGC-JU thermal face database and the Terravic facial infrared database, and the recognition performances are found to be 95.83% and 100%, respectively. A comparative study has also been made with existing works in the literature.
Abstract:
Cooperative systems are suitable for many types of applications, and nowadays these systems are widely used to improve a previously defined system or to coordinate multiple devices working together. This paper provides an alternative that improves the reliability of a previous intelligent identification system. The proposed approach implements a cooperative model based on a multi-agent architecture. This new system is composed of several radar-based systems which identify a detected object and transmit their partial results, implemented through several agents and a wireless network for data transfer. The proposed topology is a centralized architecture in which a coordinator device is in charge of producing the final identification result from the group behavior. To reach the final outcome, three different mechanisms are introduced. The simplest one is based on majority voting, whereas the other two use different weighted voting procedures, both providing the system with learning capabilities. Using an appropriate network configuration, the success rate can be improved from the initial 80% to more than 90%.
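A minimal sketch of how the coordinator's weighted voting with learning could work: each agent's vote is scaled by a weight, and weights are adjusted according to past identification accuracy. The additive update rule is an assumption made for illustration, not the paper's exact procedure.

```python
from collections import defaultdict

class WeightedVoter:
    """Coordinator that fuses agent identifications by weighted voting."""

    def __init__(self, agent_ids, learning_rate=0.1):
        self.weights = {a: 1.0 for a in agent_ids}  # start with equal trust
        self.lr = learning_rate

    def decide(self, votes):
        """votes: dict agent_id -> proposed label. Returns the fused label."""
        scores = defaultdict(float)
        for agent, label in votes.items():
            scores[label] += self.weights[agent]
        return max(scores, key=scores.get)

    def update(self, votes, true_label):
        """Reward agents that matched the confirmed identification."""
        for agent, label in votes.items():
            delta = self.lr if label == true_label else -self.lr
            self.weights[agent] = max(0.0, self.weights[agent] + delta)

# Hypothetical usage with three radar agents.
voter = WeightedVoter(['radar1', 'radar2', 'radar3'])
votes = {'radar1': 'car', 'radar2': 'truck', 'radar3': 'car'}
print(voter.decide(votes))  # 'car' (two unit-weight votes against one)
voter.update(votes, 'car')  # radar2 loses weight, the others gain
```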
Abstract:
Aircraft tracking plays a key role in the Sense-and-Avoid system of Unmanned Aerial Vehicles (UAVs). This paper presents a novel robust visual tracking algorithm for UAVs in midair that tracks an arbitrary aircraft at real-time frame rates, together with a unique evaluation system. The visual algorithm mainly consists of an adaptive discriminative visual tracking method, a Multiple-Instance (MI) learning approach, a Multiple-Classifier (MC) voting mechanism and a Multiple-Resolution (MR) representation strategy, and is called the Adaptive M3 (AM3) tracker. In this tracker, the importance of each test sample is integrated to improve tracking stability, accuracy and real-time performance. The experimental results show that this algorithm is more robust, efficient and accurate than existing state-of-the-art trackers, overcoming problems generated by challenging situations such as obvious appearance changes, varying surrounding illumination, partial aircraft occlusion, motion blur, rapid pose variation, onboard mechanical vibration, low computation capacity and delayed information communication between UAVs and the Ground Station (GS). To the best of our knowledge, this is the first work to present a tracker for online learning and tracking of an arbitrary aircraft/intruder from UAVs.
Abstract:
This PhD Thesis is framed within the field of dynamically reconfigurable embedded systems, advanced sensor networks and distributed computing. The document is centred on the study of processing solutions for High-Performance Autonomous Distributed Systems (HPADS), as well as their evolution towards high-performance computing systems. The study addresses both the platform level and the processing architectures within the platform, in order to optimise critical aspects such as computing performance, energy efficiency and fault tolerance. HPADS are feedback systems, normally networked and/or distributed, with real-time adaptive and predictive functionality. These systems, as part of more complex systems known as Cyber-Physical Systems (CPSs), can be applied in a wide range of fields such as medical, manufacturing and aerospace applications, among many others. For the design of HPADS, high levels of dependability, the definition of suitable models of computation, and the use of methodologies and tools that support scalability and complexity management are required.
The first part of the document studies the different platform-level design possibilities in the state of the art, together with the description, development and validation tests of the platform proposed in this work to cope with the previously mentioned requirements. The main objectives targeted by this platform design are the following: • Study the feasibility of using SRAM-based FPGAs as the main processor of the platform in terms of energy consumption and performance for highly demanding applications. • Analyse and propose energy management techniques to reduce energy consumption in every stage of the working profile of the platform. • Provide a solution with dynamic partial reconfiguration (DPR) and wireless remote HW reconfiguration, to be able to change certain parts of the FPGA design at run time and on demand without interrupting the rest of the system. • Demonstrate the applicability of the platform in different test-bench applications. In order to select the best approach for the platform design in terms of processing alternatives, a study of the evolution of state-of-the-art platforms is required to analyse how different architectures cope with new, more demanding applications and scenarios: security, mixed-criticality systems for aerospace, multimedia applications, or military environments, among others. In all these scenarios, important changes in the required processing bandwidth and in the complexity of the algorithms used are provoking the migration of platforms from single-microprocessor architectures to multiprocessing and heterogeneous solutions with higher instant power consumption but higher energy efficiency. Among these solutions, FPGAs and Systems on Chip including FPGA fabric and dedicated hard processors offer a good trade-off among flexibility, processing performance, energy consumption and price when used in demanding applications where working conditions are very likely to vary over time and highly complex algorithms are required. The platform architecture proposed in this PhD Thesis is called HiReCookie. It includes an SRAM-based FPGA as the main and only processing unit, together with a design compatible with the wireless sensor network platform developed at the Centro de Electrónica Industrial of the Universidad Politécnica de Madrid (CEI-UPM), known as Cookies. The FPGA selected, the Xilinx Spartan-6 LX150, was at the beginning of this work the best choice in terms of amount of resources and power while also supporting dynamic and partial reconfiguration. Although its power levels are the lowest of this kind of device, they can still be very high for distributed systems that normally run on batteries. For that reason, it is necessary to include different energy saving mechanisms to increase the usability of the platform. In order to reduce energy consumption, the platform architecture is divided into different power islands so that only those parts of the system that are strictly needed are powered on, while the rest of the islands can be completely switched off. This allows different low-power modes to be combined to decrease energy consumption. In addition, one of the most important handicaps of SRAM-based FPGAs is that they are not alive at power-up, so recovering the system from a switched-off state requires reloading the FPGA configuration from a non-volatile memory device. For that reason, this PhD Thesis also proposes a methodology to compress the FPGA configuration file in order to reduce the time and energy spent during the initial configuration process. Although some of the requirements for the design of HPADS are already covered by the design of the HiReCookie platform, it is necessary to continue improving energy efficiency, computing performance and fault tolerance.
This is only possible by exploiting all the opportunities provided by the processing architectures configured inside the FPGA. Therefore, the second part of the thesis details the design of the ARTICo3 FPGA architecture, which enhances the already intrinsic capabilities of the FPGA. ARTICo3 is a DPR-capable, bus-based virtual architecture for multiple HW acceleration in SRAM-based FPGAs that provides support for dynamic resource management in real time. In this way, by using DPR, it is possible to change the levels of computing performance, energy consumption and fault tolerance on demand by increasing or decreasing the amount of resources used by the different tasks. Apart from the detailed design of the architecture and its implementation in different FPGA devices, different validation tests and comparisons are also shown. The main objectives targeted by this FPGA architecture are listed as follows: • Provide a method based on a multithread approach, such as the kernel executions offered by CUDA (Compute Unified Device Architecture) or OpenCL, where kernels are executed on a variable number of HW accelerators without requiring application code changes. • Provide an architecture that dynamically adapts its working point according to either self-measured or external parameters in terms of energy consumption, fault tolerance and computing performance. Taking advantage of DPR capabilities, the architecture must support a dynamic use of resources in real time. • Exploit concurrent processing capabilities in a standard bus-based system by optimizing data transactions to and from the HW accelerators. • Measure the advantage of HW acceleration as a technique to boost performance, improving processing times and saving energy by reducing active times in distributed embedded systems. • Dynamically change the levels of HW redundancy to adapt fault tolerance in real time. • Provide HW abstraction from the SW application design. FPGAs offer the possibility of designing specific HW blocks for every required task to optimise performance, and some of them include DPR capabilities. Apart from the possibilities provided by manufacturers, the way these HW modules are organised, addressed and multiplexed in area and time can improve computing performance and energy consumption, while fault tolerance and security techniques can also be included dynamically using DPR. However, the inherent complexity of designing new HW modules for every application is not negligible: it consists not only of the HW description, but also of the design of drivers and interfaces with the rest of the system, while the design space becomes wider and more complex to define and program. Even though the tools provided by the majority of manufacturers already include predefined bus interfaces, commercial IPs, and templates to ease application prototyping, it is necessary to improve these capabilities by adding new architectures on top of them, making it possible to take advantage of parallelization and HW redundancy while providing a framework that eases dynamic resource management. ARTICo3 works within a solution space where working points change at run time in a 3D space defined by three different axes: Computation, Consumption, and Fault Tolerance. Every working point is therefore found as a trade-off solution among these three axes. By means of DPR, different accelerators can be multiplexed so that the amount of available resources for any application is virtually unlimited.
Taking advantage of DPR capabilities and a novel way of transmitting data to the reconfigurable HW accelerators, it is possible to dedicate a dynamically changing number of resources to a given task in order either to boost computing speed or to add HW redundancy and a voting process that increases fault-tolerance levels. At the same time, using an optimised amount of resources for a given task reduces energy consumption by reducing instant power or computing time. In order to keep complexity within certain limits, it is important that HW changes are transparent to the application code. Therefore, different levels of transparency are targeted by the system: • Scalability transparency: a task must be able to expand its resources without changing the system structure or application algorithms. • Performance transparency: the system must reconfigure itself as the load changes. • Replication transparency: multiple instances of the same task can be loaded to increase reliability and performance. • Location transparency: resources are accessed by the application code with no knowledge of their physical location. • Failure transparency: a task must be completed despite a failure in some components. • Concurrency transparency: different tasks work concurrently in a way that is transparent to the application code. As can be seen, the Thesis therefore contributes in two different ways: first, with the design of the HiReCookie platform and, second, with the design of the ARTICo3 architecture. The main contributions of this PhD Thesis are listed below: • Architecture of the HiReCookie platform, including: o Compatibility of the processing layer for high-performance applications with the Cookies wireless sensor network platform for fast prototyping and implementation. o A division of the architecture into power islands. o All the different low-power modes. o The creation of the partial-initial bitstream, together with the wake-up policies of the node. • The design of the reconfigurable architecture for SRAM FPGAs, ARTICo3, including: o A model of computation and execution modes inspired by CUDA but based on reconfigurable HW, with a dynamic number of thread blocks per kernel. o A structure to optimise burst data transactions, providing coalesced or parallel data to the HW accelerators, a parallel voting process and a reduction operation. o The abstraction provided to the host processor with respect to the operation of the kernels in terms of the number of replicas, modes of operation, location in the reconfigurable area and addressing. o The architecture of the modules representing the thread blocks, which makes the system scalable: adding functional units only requires adding an access to a BRAM port. o The online characterization of the kernels, providing information to a scheduler or resource manager in terms of energy consumption and processing time when changing among different fault-tolerance levels, as well as whether a kernel is expected to work in the memory-bounded or computing-bounded regions. The document of the Thesis is divided into two main parts with a total of five chapters. First, after motivating the need for new platforms to cover new, more demanding applications, the design of the HiReCookie platform, its parts and several partial tests are detailed. The design of the platform alone does not cover all the needs of these applications; therefore, the second part describes the architecture inside the FPGA, called ARTICo3, proposed in this PhD Thesis.
The architecture and its implementation are tested in terms of energy consumption and computing performance, showing different possibilities to improve fault tolerance and how these impact energy and processing time. Chapter 1 presents the main goals of this PhD Thesis and the technological background required to follow the rest of the document. Chapter 2 details the design of the FPGA-based platform HiReCookie. Chapter 3 describes the ARTICo3 architecture. Chapter 4 focuses on the validation tests of the ARTICo3 architecture; a proof-of-concept application is explained in which typical kernels related to image processing and encryption algorithms are used, and further experimental analyses are performed using these kernels. Chapter 5 concludes the document with the conclusions obtained, comments on the contributions of the work, and possible future lines of work.
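The redundancy mechanism described above relies on voting over the outputs of replicated hardware accelerators. As a purely illustrative software model of such a voter (in the thesis this is implemented in FPGA fabric, not software), a bitwise two-out-of-three majority over redundant output words can be written as follows:

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise 2-out-of-3 majority over three redundant output words.

    Each bit of the result takes the value held by at least two of the
    three inputs, masking a fault in any single replica.
    """
    return (a & b) | (a & c) | (b & c)

# One replica (b) returns a corrupted word; the vote masks the fault.
a, b, c = 0b1011_0101, 0b1011_0111, 0b1011_0101
assert tmr_vote(a, b, c) == 0b1011_0101
```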
Abstract:
The thesis examines one of the most important aspects of the management of the information society: knowing how a person values any given situation. This is important both for the individual performing the assessment and for the environment with which he interacts. An assessment is the result of comparison: identical values are allocated to similar alternatives, and higher values are assigned to those alternatives that are more favourably considered in the comparison process. The patterns that guide the individual in making the comparison are derived from his individual preferences (that is, his opinions). The thesis presents several procedures for establishing a person's preference relations between alternatives. The assessment progresses until a numerical representation of those preferences is obtained. When the representation of preferences is homogeneous, it also allows the personal preferences of each individual to be compared with those of other evaluators, favouring policy evaluation, the transfer of information between different individuals, and the design of the alternative that best suits the identified preferences. At the same time, with this information, communities of people with the same systems of preferences on a particular issue can be built. The thesis shows a case of application of this methodology: the optimization of labour policies in a real market. To support jobseekers (in their initiation or reinstatement to employment, or when changing their area of professional activity), it is necessary to know their preferences regarding the occupations they are willing to perform. In addition, for labour mediation to be effective, the occupations sought must be offered by the labour market, and the applicant must meet the conditions for access to those occupations. The further development of these models leads to the procedures used to transform multiple preferences into an aggregate decision, which consider both the opinion of each of the individuals involved in the decision and their social interactions, all aimed at generating a solution that best fits the point of view of the entire population. Decisions with multiple participants mainly concern: increasing the scope to include people who traditionally have not been considered in decision making; aggregating the preferences of multiple participants in collective decision making (by voting, using applications developed for the Web 2.0, and through interpersonal comparisons of utility); and, finally, self-organization, allowing participants in the assessment to interact with each other so that the final result is better than the mere aggregation of individual opinions.
The thesis analyzes the e-democracy systems, and the tools for implementing them, that are currently most widely used or most advanced. They are closely related to the Web 2.0, and their implementation is bringing about an evolution of the current way of understanding democracy. Collaborative Decision-Making (CDM) software applications, which help to give sense and meaning to data, have also been studied; they intend to coordinate the functions and features needed to reach adequate collective decisions in a timely manner, allowing all stakeholders to participate in the process. The thesis concludes with the presentation of a new model, or paradigm, for decision-making with multiple participants. The development is based on the calculation of empathic utility functions. It seeks collaboration between individuals to make decision-making more effective, and it also aims to increase the number of people involved. It studies the interactions and feedback among citizens, since the influence of some citizens on others is fundamental to processes of collective decision-making and e-democracy. It also includes methods for detecting when the process has stalled and should be discontinued. This model is applied to a consultation of the citizens of a municipality on the opportunity to introduce bike lanes and the characteristics they should have; voting and interaction among voters are simulated.
Abstract:
LINCOLN UNIVERSITY - On March 25, 1965, a bus loaded with Lincoln University students and staff arrived in Montgomery, Ala., to join the Selma march for racial and voting equality. Although the Civil Rights Act of 1964 was in force, African-Americans continued to feel the effects of segregation. The 1960s was a decade of social unrest and change. In the Deep South, specifically Alabama, racial segregation was a cultural norm resistant to change. Governor George Wallace never concealed his personal viewpoints or the political stance of the white majority, declaring “Segregation now, segregation tomorrow, segregation forever.” The march was aimed at securing for African-Americans their constitutionally protected right to vote. However, Alabama’s deep-rooted culture of racial bias began to be challenged by a shift in American attitudes towards equality. Both blacks and whites wanted to end discrimination through passive resistance, a movement utilized by Dr. Martin Luther King Jr. That passive resistance was often met with violence, sometimes at the hands of law enforcement and local citizens. The Selma to Montgomery march grew out of a protest for voting equality. The Student Nonviolent Coordinating Committee (SNCC) and the Southern Christian Leadership Conference (SCLC), among other groups, marched along the streets to bring awareness to the voter registration campaign, which was organized to end discrimination in voting based on race. Violent acts by police officers and others were among the everyday challenges protesters faced. Forty-one participants from Lincoln University arrived in Montgomery to take part in the 1965 march for equality. Students from Lincoln University’s Journalism 383 class spent part of their 2015 spring semester researching the historical event. Here are their stories: Peter Kellogg “We’ve been watching the television, reading about it in the newspapers,” said Peter Kellogg during a February 2015 telephone interview. “Everyone knew the civil rights movement was going on, and it was important that we give him (Robert Newton) some assistance … and Newton said we needed to get involved and do something,” Kellogg, a lecturer in the 1960s at Lincoln University, discussed how the bus trip originated. “That’s why the bus happened,” Kellogg said. “Because of what he (Newton) did - that’s why Lincoln students went and participated.” “People were excited and the people along the sidewalk were supportive,” Kellogg said. However, the mood flipped from excitement to fear and intimidation. “It seems though every office building there was a guy in a blue uniform with binoculars standing in the crowd with troops and police. And if looks could kill me, we could have all been dead.” He says the hatred and intimidation were intense. Kellogg, being white, was an immediate target among many white people. He didn’t realize how dangerous the event in Alabama was until he and the others on the bus heard about the death of Viola Liuzzo. The married mother of five from Detroit was shot and killed by members of the Ku Klux Klan while shuttling activists to the Montgomery airport. “We found out about her death on the ride back,” Kellogg recalled. “Because it was a loss of life, and it shows the violence … we could have been exposed to that danger!” After returning to LU, Kellogg’s outlook on life took a dramatic turn. Kellogg noted King’s belief that a person should be willing to die for important causes.
“The idea is that life is about something larger and more important than your own immediate gratification, and career success or personal achievements,” Kellogg said. “The civil rights movement … it made me, it made my life more significant because it was about something important.” The civil rights movement influenced Kellogg to change his career path and become a black history lecturer. To this day, he has no regrets and believes that his choices made him a better individual. The bus ride to Alabama, he says, began with the actions of just one student. Robert Newton Robert Newton was the initiator, recruiter and leader of the Lincoln University movement to join Dr. Martin Luther King’s march in Selma. “In the 60s many of the civil rights activists came out of college,” said Newton during a recent phone interview. Many of the events that involved segregation compelled college students to fight for equality. “We had selected boycotts of merchants, when blacks were not allowed to try on clothes,” Newton said. “You could buy clothes at department stores, but no blacks could work at the department stores as sales people. If you bought clothes there you couldn’t try them on, you had to buy them first and take them home and try them on.” Newton said the students risked their lives to be a part of history and influence change. He not only recognized the historic role of his fellow Lincolnites, but also that of other college students and historically black colleges and universities that played a vital part in history. “You had the S.N.C.C organization, in terms of voting rights and other things, including a lot of participation and working off the bureau,” Newton said. Other schools, such as UNT, Greenville and Howard University, and other historically black schools had groups that came out as leaders. Newton believes that much has changed from 50 years ago. “I think we’ve certainly come a long way from what I’ve seen from the standpoint of growing up outside of Birmingham, Alabama,” Newton said. He believes that college campuses today are more organized in their approach to social causes. “The campus appears to be some more integrated amongst students in terms of organizations and friendships.” Barbara Flint Dr. Barbara Flint grew up in the southern part of Arkansas and came to Lincoln University in 1961. She describes her experience at Lincoln as “being at Lincoln when the world was changing.” She was an active member of Lincoln’s History Club, which focused on current events and issues and influenced her decision to join the Selma march. “The first idea was to raise some money and then we started talking about ‘why can’t we go?’ I very much wanted to be a living witness in history.” Reflecting on the march and journey to Montgomery, Flint describes it as being filled with tension. “We were very conscious of the fact that once we got on the road past Tennessee we didn’t know what was going to happen,” said Flint during a February 2015 phone interview. “Many of the students had not been beyond Missouri, so they didn’t have that sense of what happens in the South. Having lived there you knew the balance as well as what is likely to happen and what is not likely to happen. As my father used to say, ‘you have to know how to stay on that line of balance.’” Upon arriving in Alabama she remembers the feeling of excitement and relief from everyone on the bus.
“We were tired and very happy to be there and we were trying to figure out where we were going to join and get into the march,” Flint said. “There were so many people coming in and then we were also trying to stay together; that was one of the things that really stuck out for me, not just for us but the people who were coming in. You didn’t want to lose sight of the people you came with.” Flint says she was keenly aware of her surroundings. For her, it was more than just marching forward. “I can still hear those helicopters now,” Flint recalled. “Every time the helicopters would come over the sound would make people jump and look up - I think that demonstrated the extent of the tenseness that was there at the time because the helicopters kept coming over every few minutes.” She said that the marchers sang “we are not afraid,” but that fear remained with every step. “Just having been there and being a witness and marching you realize that I’m one of those drops that’s going to make up this flood and with this flood things will move,” said Flint. As a student at Lincoln in 1965, Flint says the Selma experience undoubtedly changed her life. “You can’t expect to do exactly what you came to Lincoln to do,” Flint says. “That march - along with all the other marchers and the action that was taking place - directly changed the paths that I and many other people at Lincoln would take.” She says current students and new generations need to reflect on their personal role in society. “Decide what needs to be done and ask yourself ‘how can I best contribute to it?’” Flint said. She notes technology and social media can be used to reach audiences in ways unavailable to her generation in 1965. “So you don’t always have to wait for someone else to step out there and say ‘let’s march,’ you can express your vision and your views and you have the means to do so (so) others can follow you.” Jaci Newsom Jaci Newsom came to Lincoln in 1965 from Atlanta. She came to Lincoln to major in sociology, and being in Jefferson City was largely different from what she had grown up with. “To be able to come into a restaurant, sit down and be served a nice meal was eye-opening to me,” said Newsom during a recent interview. She eventually became accustomed to the relaxed attitude of Missouri and was shocked by the situation she encountered on an out-of-town trip. “I took a bus trip from Atlanta to Pensacola and I encountered the worst racism that I have ever seen. I was at a bus stop, I went in to be served and they would not serve me. There was a policeman sitting there at the table and he told me that privately owned places could select not to serve you.” Newsom describes her experience of marching in Montgomery as one with a purpose. “We felt as though we achieved something - we felt a sense of unity,” Newsom said. “We were very excited (because) we were going to hear from Martin Luther King. To actually be in the presence of him and the other civil rights workers there was just such enthusiasm and excitement, yet there was also some apprehension of what we might encounter.” Many of the marchers showed their inspiration and determination while pressing forward towards the grounds of the Alabama Capitol building. Newsom recalled that the marchers were singing the lyrics “ain’t gonna let nobody turn me around” and “we shall overcome.” “I started seeing people just like me,” Newsom said. “I don’t recall any of the scowling, the hitting, the things I would see on TV later.
I just saw a sea of humanity marching towards the Capitol. I don’t remember what Martin Luther King said but it was always the same message: keep the faith; we’re going to get where we’re going and let us remember what our purpose is.” Newsom offers advice on what individuals can do to make their society a more productive and peaceful place. “We have come a long way and we have ways to change things that we did not have before,” Newsom said. “You need to work in positive ways to change.” Referencing the recent unrest in Ferguson, Mo., she believes that people become destructive as a way to show and vent anger. Her generation, she says, was raised to react in lawful ways and to believe in hope. “We have faith to do things in a way that was lawful and it makes me sad what people do when they feel without hope, and there is hope,” Newsom says. “Non-violence does work - we need to include everyone to make this world a better place.” Newsom graduated from Lincoln in 1969 and describes her experience at Lincoln this way: “I grew up and did more growing at Lincoln than I think I did for the rest of my life.”
Abstract:
The Academy has elected 72 new members and 15 foreign associates from 10 countries in recognition of their distinguished and continuing achievements in original research. The election was held during the business session of the 138th annual meeting of the Academy. Election to membership in the Academy is considered one of the highest honors that can be accorded a U.S. scientist or engineer. Foreign associates are non-voting members of the Academy, with citizenship outside of the United States.
Abstract:
The Academy has elected 60 new members and 15 foreign associates from 9 countries in recognition of their distinguished and continuing achievements in original research. The election was held during the business session of the 137th annual meeting of the Academy. Election to membership in the Academy is considered one of the highest honors that can be accorded a U.S. scientist or engineer. Foreign associates are non-voting members of the Academy, with citizenship outside of the United States.
Abstract:
We present a method for predicting protein folding class based on a global protein chain description and a voting process. Selection of the best descriptors was achieved by a computer-simulated neural network trained on a database consisting of 83 folding classes. Protein-chain descriptors include the overall composition, transition, and distribution of amino acid attributes, such as relative hydrophobicity, predicted secondary structure, and predicted solvent exposure. Cross-validation testing was performed on 15 of the largest classes. The test shows that proteins were assigned to the correct class (correct positive prediction) with an average accuracy of 71.7%, whereas the inverse prediction of proteins as not belonging to a particular class (correct negative prediction) was 90-95% accurate. When tested on the 254 structures used in this study, the top two predictions contained the correct class in 91% of the cases.
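As a hedged sketch of the "composition" part of the global chain description mentioned above, the code below maps residues to three hydrophobicity groups and reports the fraction of the chain falling in each group. The three-way grouping shown is a common convention and may differ from the exact attribute definitions used in the paper.

```python
# Hypothetical three-way hydrophobicity grouping of the 20 amino acids;
# the original work may use different attribute boundaries.
GROUPS = {
    'hydrophobic': set('AVLIMFWC'),
    'neutral':     set('GSTYPH'),
    'polar':       set('RKEDQN'),
}

def composition(sequence):
    """Fraction of residues falling in each attribute group."""
    n = len(sequence)
    return {name: sum(aa in members for aa in sequence) / n
            for name, members in GROUPS.items()}

# First 20 residues of human hemoglobin alpha, as a small example.
print(composition('MVLSPADKTNVKAAWGKVGA'))
```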
Abstract:
This research project examines the role of electoral system rules in affecting the extent of conciliatory behavior and cross-ethnic coalition making in Northern Ireland. It focuses on the role of the Single Transferable Vote (STV) electoral system in shaping party and voter incentives in a post-conflict divided society. The research uses a structured, focused comparison of the four electoral cycles since the Belfast Agreement of 1998, which enables a systematic examination of each electoral cycle using a common set of criteria focused on conciliation and cross-ethnic coalition making. Whilst preference voting is assumed to benefit moderate candidates, in Northern Ireland centrist and multi-ethnic parties outside of the dominant ethnic communities have achieved little electoral success. The primary effect of STV in Northern Ireland has been not to encourage inter-communal voting but to facilitate intra-community and intra-party moderation: STV has encouraged the moderation of the historically extreme political parties in each of the ethnic blocs. Patterns across electoral cycles suggest that party elites from the Democratic Unionist Party (DUP) and Sinn Fein have moderated their policy positions because of the electoral system rules; they have pursued lower-preference votes from within their ethnic bloc, but in doing so have marginalized parties of a multi-ethnic or non-ethnic orientation.
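For readers unfamiliar with the mechanics behind lower-preference votes, the sketch below implements a simplified single-seat variant of STV (instant-runoff, i.e., STV without the surplus transfers used in real multi-seat Northern Ireland elections). The candidate names and ballots are invented; the point is only that a candidate trailing on first preferences can win through transfers.

```python
from collections import Counter

def instant_runoff(ballots):
    """Single-seat STV without surplus transfers: repeatedly eliminate the
    candidate with the fewest first preferences and transfer those ballots
    to each voter's next surviving choice."""
    ballots = [list(b) for b in ballots]  # work on copies
    while True:
        tallies = Counter(b[0] for b in ballots if b)
        total = sum(tallies.values())
        leader, count = tallies.most_common(1)[0]
        if count * 2 > total:            # majority of remaining ballots
            return leader
        loser = min(tallies, key=tallies.get)
        for b in ballots:                # transfer the loser's ballots
            if loser in b:
                b.remove(loser)

# Hypothetical profile: 'moderate' trails on first preferences (3 of 9)
# but collects transfers after 'hardline_B' is eliminated, and wins.
ballots = (
    4 * [['hardline_A', 'moderate', 'hardline_B']]
    + 3 * [['moderate', 'hardline_A', 'hardline_B']]
    + 2 * [['hardline_B', 'moderate', 'hardline_A']]
)
print(instant_runoff(ballots))  # -> 'moderate'
```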
Abstract:
This dissertation analyzes judicial conduct in the judicial reorganization proceedings of companies, governed by Law No. 11,101 of February 9, 2005 ("LRE"). The first chapter introduces the limitations of the work and the main questions to be answered throughout the text. The second chapter presents the historical and legal background of the LRE, in order to identify the true objectives protected by the law and the relationship between these objectives and the role of the Judiciary. The third chapter proposes three levels of judicial intervention in reorganization proceedings: (a) strict legality review, through which the judge verifies compliance with the requirements and prohibitions imposed by the LRE on the content of the reorganization plan and on its voting; (b) substantive legality review, through which the judge assesses whether the content of the plan and its voting comply with the general guiding principles of the Brazilian legal system; and (c) the feasibility assessment, through which the judge, using objective criteria suggested by legal scholarship, would evaluate the merits of the reorganization plan in order to determine whether, besides meeting the legality criteria, the provisions of the plan achieve the objectives set by the LRE, namely the protection of the viable company and the institutional protection of credit. The fourth chapter revisits the conclusions reached at the end of each subchapter.