907 results for Computer technical support
Abstract:
In this article, the authors examine the current status of the different elements that make up the landscape of the municipality of Olias del Rey in Toledo (Spain). A methodology is proposed for the study of rural roads, farming activity and local hunting management. We used Geographic Information Technologies (GIT) to optimize spatial information, including the design of a Geographic Information System (GIS). Field data were acquired with a "mobile mapping" vehicle equipped with GNSS, LiDAR, digital cameras and an odometer. The main objective is the integration and geovisualization of this geoinformation in order to provide a fundamental tool for rural planning and management.
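As an illustration of the kind of geoprocessing step such a GIS integration relies on, the following minimal sketch (Python with the Shapely library) builds a road corridor from a handful of GNSS fixes; the coordinates and the 4 m half-width are invented for the example and are not data from the study.

from shapely.geometry import LineString

# GNSS fixes along a rural road (projected coordinates in metres, e.g. UTM zone 30N)
gnss_track = [(412010.0, 4412500.0), (412065.0, 4412540.0),
              (412130.0, 4412570.0), (412210.0, 4412585.0)]

road_axis = LineString(gnss_track)    # centreline reconstructed from the survey
corridor = road_axis.buffer(4.0)      # assumed 4 m half-width -> corridor polygon

print(f"Road length: {road_axis.length:.1f} m")
print(f"Corridor area: {corridor.area:.1f} m2")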
Abstract:
Information and Communication Technologies can support Active Aging strategies in a scenario such as the Smart Home. This paper details a person-centered distributed framework, called TALISMAN+, whose aim is to promote personal autonomy by taking advantage of knowledge-based technologies, sensor networks, mobile devices and the Internet. The proposed solution can help an elderly person keep living alone at home without being obliged to move to a residential center. The framework is composed of five subsystems: a reasoning module able to take local decisions at home in order to support active aging, a biomedical-variables telemonitoring platform running on a mobile device, a hybrid reasoning middleware aimed at assessing cardiovascular risk remotely, a private vision-based sensor subsystem, and a secure telematics solution that guarantees the confidentiality of personal information. The deployment of the TALISMAN+ framework is being evaluated in a real environment, the Accessible Digital Home.
Abstract:
This work introduces a web-based learning environment to facilitate learning in Project Management. The proposed web-based support system integrates methodological procedures and information systems, making it possible to promote learning among geographically dispersed students. Thus, students who are enrolled in different universities at different locations and attend their own project management courses share a virtual experience in executing and managing projects. Specific support systems were used or developed to automatically collect information about student activities, making it possible to monitor the progress made on learning and to assess learning performance as established in the defined rubric.
Abstract:
In recent years, the continuous incorporation of new technologies into the learning process has been an important factor in education [1]. The Technical University of Madrid (UPM) promotes educational innovation processes and develops projects related to the improvement of educational quality. The experience that we present fits into the Educational Innovation Project (EIP) of the E.U. of Agricultural Engineering of Madrid. One of the main objectives of the EIP is to "Take advantage of the new opportunities offered by the Learning and Knowledge Technologies in order to enrich the educational processes and teaching management" [2].
Abstract:
This PhD thesis focuses mainly on attack techniques and countermeasures related to side-channel attacks (SCA), which have been proposed within academic research for some 17 years. Related research has grown remarkably over recent decades, while designs aimed at solid and effective protection against such attacks still remain an open research topic, in which more reliable initiatives are needed to protect personal, corporate and national data. The earliest documented use of secret coding dates back to around 1700 B.C., when ancient Egyptian hieroglyphs were carved in inscriptions. Information security has always been a key factor in the transmission of data related to diplomatic or military intelligence. Owing to the rapid evolution of modern communication techniques, encryption solutions were first incorporated to guarantee the security, integrity and confidentiality of content transmitted over insecure cables or wireless media. Because of the limited computing power available before the computer era, simple encryption techniques were more than sufficient to conceal information. However, certain algorithmic vulnerabilities can be exploited to recover the encoding rule without much effort. This motivated further research in the area of cryptography, with the aim of protecting information systems against sophisticated algorithms. The invention of computers greatly accelerated the implementation of secure cryptography, which offers efficient resistance built on greatly strengthened computing capabilities. Likewise, sophisticated cryptanalysis has in turn driven computing technologies forward. Nowadays, the information world is deeply involved with the field of cryptography, which aims to protect every domain through diverse encryption solutions. These approaches have been strengthened by the optimized unification of modern mathematical theories and effective hardware practice, making it possible to implement them on various platforms (microprocessors, ASICs, FPGAs, etc.). Security needs and requirements from industry are the main driving metrics in electronic design, with the goal of promoting the manufacture of powerful products without sacrificing customers' security. However, a vulnerability in practical implementations found by Prof. Paul Kocher et al. in 1996 implies that a digital circuit is inherently vulnerable to an unconventional attack, which was later named the side-channel attack after its source of analysis. Critical doubts about theoretically secure cryptographic algorithms arose almost immediately after this discovery. In this respect, digital circuits typically consist of a large number of fundamental logic cells (such as MOS, Metal Oxide Semiconductor), built on a silicon substrate during fabrication. The circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably causes a particular physical emanation that can be measured and correlated with the internal behavior of the circuit.
SCA can be used to reveal confidential data (for example, cryptographic keys), to analyze the logic architecture and timing, and even to inject malicious faults into circuits implemented in embedded systems such as FPGAs, ASICs or smart cards. By using correlation comparison between the estimated leakage and the actually measured leakage, confidential information can be reconstructed with far less time and computation. To be precise, SCA covers a wide range of attack types, such as analyses of power consumption and of electromagnetic (EM) radiation. Both rely on statistical analysis and therefore require numerous samples. Encryption algorithms are not intrinsically prepared to be resistant to SCA. For that reason it becomes necessary, during circuit implementation, to integrate measures that camouflage the leakage through "side channels". Countermeasures against SCA are evolving together with the development of new attack techniques and the continuous improvement of electronic devices. The physical characteristics require countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic solutions. Extrinsic countermeasures are applied to confuse the attack source by adding noise or misaligning the internal activity. Comparatively, intrinsic countermeasures are integrated into the algorithm itself, modifying the implementation so as to minimize the measurable leakage or even to make that leakage unmeasurable. Hiding and masking are two typical techniques in this category. Specifically, masking is applied at the algorithmic level to alter sensitive intermediate data with a mask in a reversible manner. Unlike linear masking, the non-linear operations that are widespread in modern cryptography are difficult to mask. The hiding method, which has been verified as an effective solution, mainly comprises dual-rail coding, which is specifically devised to flatten or remove the data-dependent leakage in power or EM. In this PhD thesis, in addition to describing the attack methodologies, a great deal of effort has been devoted to the structure of the proposed logic prototype, in order to carry out security-focused research on architectural countermeasures at the logic level. One characteristic of SCA lies in the format of the leakage sources. A typical side-channel attack refers to power-based analysis, where the fundamental capacitance of the MOS transistor and other parasitic capacitances are the essential leakage sources. Therefore, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. The EDA tools provided by vendors manipulate the logic from a higher level, rather than from the gate level, where side-channel leakage manifests itself. Consequently, classical implementations hardly satisfy these needs and inevitably stunt the prototype. For all these reasons, the implementation of a customized and flexible design scheme has to be taken into account.
This thesis presents the design and implementation of an innovative logic to counter SCA, addressing three fundamental aspects: I. It is based on a hiding strategy over a dual-rail circuit at the gate level in order to dynamically balance the leakage in the lower layers; II. This logic exploits the architectural features of FPGAs in order to minimize the resource overhead of the implementation; III. It is supported by a set of customized assistant tools, incorporated into the generic FPGA design flow, in order to manipulate the circuits automatically. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on the Xilinx FPGA family. In this sense, the methodology and the tools are flexible enough to be extended to a wide range of applications in which much more rigid and sophisticated constraints at the gate or routing level are desired. In this thesis a great effort is made to ease the process of implementing and repairing generic dual-rail logic. The feasibility of the proposed solutions is validated by selecting widely used cryptographic algorithms and evaluating them exhaustively in comparison with previous solutions. All the proposals are effectively backed by experimental attacks in order to validate the security advantages of the system. The present research work aims to close the gap between the implementation barriers and the effective application of dual-rail logic. In essence, this thesis describes a set of implementation tools for FPGAs, developed to work together with their generic design flow, in order to create dual-rail logic in an innovative way. A new approach in the field of encryption security is proposed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity. The main contributions of this research work are briefly summarized below: Precharge Absorbed-DPL logic: the use of netlist conversion to reserve free LUTs to execute the Precharge and Ex signals in a DPL logic. Row-crossed interleaved placement with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic conversion tools for generating identical networks for the proposed dual-rail logic: (a) to detect and repair conflicts in the connections; (b) to detect and repair asymmetric routes; (c) to be used in other logics where strict control of the interconnections is required in Xilinx-based applications. A customized CPA test platform for EM and power analysis, including the construction of the platform itself, the measurement method and the attack analysis. Timing analysis to quantify the security levels. Security partitioning in the partial conversion of a complex encryption system in order to reduce the cost of protection.
A proof of concept of a self-adaptive heating system to dynamically mitigate the electrical impact of silicon process variation. This PhD thesis is organized as follows: Chapter 1 addresses the fundamentals of side-channel attacks, ranging from basic concepts and the theory of analysis models to the implementation of the platform and the execution of the attacks. Chapter 2 covers SCA resistance strategies against differential power and EM attacks. In addition, this chapter proposes a compact and secure dual-rail logic as a contribution of major relevance, and presents the logic transformation based on a gate-level design. Chapter 3 addresses the challenges related to the implementation of generic dual-rail logic. It also describes a customized design flow to solve the application problems, together with a proposed automatic development tool, in order to mitigate the design barriers and facilitate the processes. Chapter 4 describes in detail the elaboration and implementation of the proposed tools. The verification and security validation of the proposed logic, as well as a sophisticated experiment on the security of the routing, are described in Chapter 5. Finally, a summary of the conclusions of the thesis and the outlook on future lines of work are included in Chapter 6. In order to go deeper into the content of the thesis, each chapter is described in more detail below: Chapter 1 introduces the hardware implementation platform as well as the basic theories of side-channel attacks, and mainly contains: (a) the generic architecture and features of the FPGA used, in particular the Xilinx Virtex-5; (b) the selected encryption algorithm (a commercial Advanced Encryption Standard (AES) module); (c) the essential elements of side-channel methods, which make it possible to reveal the dissipation leakage correlated with the internal behavior, and the method for recovering the relationship between the physical fluctuations in the side-channel traces and the internal data being processed; (d) the configurations of the power/EM test platforms covered in this thesis. The content of this thesis is broadened and deepened from Chapter 2 onwards, in which several key aspects are addressed. First, the protection principle of dynamic compensation in generic dual-rail precharge logic (DPL) is explained by describing the compensated gate-level elements. Second, the PA-DPL logic is proposed as an original contribution, detailing the logic protocol and an application case. Third, two customized design flows for performing the dual-rail conversion are presented. Along with this, the technical definitions related to manipulation above the LUT-level netlist are clarified. Finally, a brief discussion of the overall process is given in the final part of the chapter. Chapter 3 studies the main challenges in implementing DPLs on FPGAs.
The security level of the SCA-resistance solutions found in the state of the art has degraded owing to the implementation barriers imposed by conventional EDA tools. In the FPGA architecture scenario studied, the problems of dual-rail formats, parasitic impact, technological bias and implementation feasibility are discussed. From this analysis, two problems arise: how to implement the proposed logic without penalizing the security levels, and how to manipulate a large number of cells and automate the process. The PA-DPL proposed in Chapter 2 is validated by a series of initiatives, ranging from structural features such as interleaved dual rails or cloned routing networks, to implementation methods such as the EDA customization and automation tools. In addition, a self-adaptive heating system is presented and applied to a dual-core logic, in order to alternately adjust the local temperature so as to balance the negative impact of process variation during real-time operation. Chapter 4 focuses on the implementation details of the toolkit. Developed on top of a third-party API, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit, converted from the ncd format (an unreadable binary version of the xdl) to the Xilinx XDL format. The mechanism and rationale of the proposed set of tools are carefully described, covering the routing detection and repair approaches. The developed toolkit aims to achieve strictly identical routing networks for the dual-rail logic, both for separate and for interleaved placement. This chapter in particular specifies the technical basis that supports the implementations on Xilinx devices and the flexibility to be used in other applications. Chapter 5 focuses on the case studies used to validate the security levels of the proposed logic. The detailed technical problems encountered during execution and some new implementation techniques are discussed. (a) The impact on the placement process of the logic using the proposed toolkit is discussed; different implementation schemes, taking into account the global optimization of security and cost, are verified experimentally in order to find the optimized placement and repair plans; (b) the security validations are carried out with correlation and timing analysis methods; (c) an asymptotic tactic is applied to a BCDL-structured AES core to validate in a sophisticated way the impact of the routing on the security metrics; (d) preliminary results of the self-adaptive heating system against process variation are shown; (e) a practical application of the tools to a complete encryption design is introduced. Chapter 6 contains the general summary of the work presented in this PhD thesis. Finally, a brief outlook on future work is given, which may extend the potential use of the contributions of this thesis to a scope beyond the domain of cryptography on FPGAs.
ABSTRACT This PhD thesis concentrates mainly on countermeasure techniques related to side-channel attacks (SCA), which have been the subject of academic research for some 17 years. Related research has grown remarkably in the past decades, while the design of solid and efficient protection still remains an open research topic, in which more reliable initiatives are required for the protection of personal, enterprise and national data. The earliest documented usage of secret code can be traced back to around 1700 B.C., when the hieroglyphs of ancient Egypt were carved in inscriptions. Information security has always received serious attention in diplomatic and military intelligence transmission. Owing to the rapid evolution of modern communication techniques, crypto solutions were first incorporated into electronic signalling to ensure the confidentiality, integrity, availability, authenticity and non-repudiation of content transmitted over unsecured cable or wireless channels. Restricted by the computing power available before the computer era, simple encryption tricks were practically sufficient to conceal information. However, algorithmic vulnerabilities could be excavated to restore the encoding rules with affordable effort. This fact motivated the development of modern cryptography, which aims to guard information systems with complex and advanced algorithms. The appearance of computers greatly pushed forward the invention of robust cryptography, which offers efficient resistance by relying on highly strengthened computing capabilities. Likewise, advanced cryptanalysis has in turn driven computing technologies. Nowadays, the information world has turned into a crypto world, protecting every field with pervasive crypto solutions. These approaches are strong because of the optimized merging of modern mathematical theories and effective hardware practice, and crypto theories can be implemented on various platforms (microprocessor, ASIC, FPGA, etc.). Security needs from industry are actually the major driving metrics in electronic design, aiming at the construction of high-performance systems without sacrificing security. Yet a vulnerability in practical implementations, found by Prof. Paul Kocher et al. in 1996, implies that modern digital circuits are inherently vulnerable to an unconventional attack approach, which has since been named the side-channel attack after its source of analysis. Critical suspicions about theoretically sound modern crypto algorithms surfaced almost immediately after this discovery. More specifically, digital circuits typically consist of a great number of essential logic elements (such as MOS, Metal Oxide Semiconductor, cells), built upon a silicon substrate during fabrication. Circuit logic is realized through the countless switching actions of these cells. This mechanism inevitably results in characteristic physical emanations that can be measured and correlated with internal circuit behavior. SCAs can be used to reveal confidential data (e.g. crypto keys), to analyze the logic architecture and timing, and even to inject malicious faults into circuits implemented in hardware systems such as FPGAs, ASICs or smart cards. Using various comparison methods between the predicted leakage quantity and the measured leakage, secrets can be reconstructed at a much lower expense of time and computation.
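The comparison between predicted and measured leakage mentioned above is typically a correlation test. The following minimal sketch illustrates the idea on synthetic data, using a Hamming-weight model of p XOR k as the leakage target; the key byte, number of traces and noise level are invented, and a real attack would usually target a non-linear intermediate such as the AES S-box output and rank hypotheses by absolute correlation.

import numpy as np

rng = np.random.default_rng(0)
SECRET_KEY_BYTE = 0x3C        # value the simulated attacker tries to recover

def hamming_weight(x):
    return bin(int(x)).count("1")

# Simulated measurements: one leakage sample per encryption plus Gaussian noise
plaintexts = rng.integers(0, 256, size=2000)
traces = np.array([hamming_weight(p ^ SECRET_KEY_BYTE) for p in plaintexts], dtype=float)
traces += rng.normal(0.0, 1.0, size=traces.shape)

# Correlate the predicted leakage of every key hypothesis against the measurements;
# for this linear toy target the signed correlation is enough to rank the hypotheses
correlations = []
for guess in range(256):
    predicted = [hamming_weight(p ^ guess) for p in plaintexts]
    correlations.append(np.corrcoef(predicted, traces)[0, 1])

best_guess = int(np.argmax(correlations))
print(f"Recovered key byte: {best_guess:#04x} (true value: {SECRET_KEY_BYTE:#04x})")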
To be precise, SCA encloses a wide range of attack types, typically the analyses of power consumption or electromagnetic (EM) radiation. Both rely on statistical analysis and hence require a large number of samples. Crypto algorithms are not intrinsically fortified with SCA resistance. Because of the severity of the threat, much attention has to be paid to the implementation so as to assemble countermeasures that camouflage the leakage via "side channels". Countermeasures against SCA evolve along with the development of attack techniques. The physical nature of the leakage requires countermeasures at the physical layer, which can generally be classified into intrinsic and extrinsic vectors. Extrinsic countermeasures are executed to confuse the attacker by adding noise or misalignment to the internal activities. Comparatively, intrinsic countermeasures are built into the algorithm itself, modifying the implementation to minimize the measurable leakage or to make it insensitive to the processed data. Hiding and Masking are two typical techniques in this category. Concretely, masking applies at the algorithmic level, altering the sensitive intermediate values with a mask in reversible ways. Unlike linear masking, the non-linear operations that are widespread in modern cryptography are difficult to mask. Proven to be an effective counter-solution, the hiding method mainly refers to dual-rail logic, which is specially devised to flatten or remove the data-dependent leakage in power or EM signatures. In this thesis, apart from describing the attack methodologies, effort has also been dedicated to the logic prototype, in order to mount extensive security investigations of countermeasures at the logic level. A characteristic of SCA resides in the format of the leakage sources. The typical side-channel attack concerns power-based analysis, where the fundamental capacitance of MOS transistors and other parasitic capacitances are the essential leakage sources. Hence, a robust SCA-resistant logic must eliminate or mitigate the leakage from these micro-units, such as basic logic gates, I/O ports and routing. The vendor-provided EDA tools manipulate the logic from a higher behavioral level, rather than the lower gate level where side-channel leakage is generated. Thus, classical implementations barely satisfy these needs and inevitably stunt the prototype. In this case, a customized and flexible design scheme needs to be devised. This thesis profiles an innovative logic style to counter SCA, which addresses three major aspects: I. The proposed logic is based on a hiding strategy using a gate-level dual-rail style to dynamically balance the side-channel leakage of the lower circuit layers; II. This logic exploits the architectural features of modern FPGAs to minimize the implementation expenses; III. It is supported by a set of custom assistant tools, incorporated into the generic FPGA design flow, to carry out circuit manipulations in an automatic manner. The automatic design toolkit supports the proposed dual-rail logic, facilitating practical implementation on Xilinx FPGA families, while the methodologies and the tools are flexible enough to be extended to a wide range of applications where rigid and sophisticated gate or routing constraints are desired. In this thesis a great effort is made to streamline the implementation workflow of generic dual-rail logic. The feasibility of the proposed solutions is validated with selected and widely used crypto algorithms, for thorough and fair evaluation with respect to
prior solutions. All the proposals are effectively verified by security experiments. The presented research work attempts to solve these implementation troubles. The essence formalized throughout this thesis is that a customized execution toolkit for modern FPGA systems is developed to work together with the generic FPGA design flow for creating innovative dual-rail logic. A method in the crypto-security area is constructed to obtain customization, automation and flexibility in low-level circuit prototyping with fine granularity in intractable routing. The main contributions of the presented work are summarized next: Precharge Absorbed-DPL logic: using netlist conversion to reserve free LUT inputs to execute the Precharge and Ex signals in a dual-rail logic style. A row-crossed interleaved placement method with identical routing pairs in dual-rail networks, which helps to increase the resistance against selective EM measurement and to mitigate the impact of process variations. Customized execution and automatic transformation tools for producing identical networks for the proposed dual-rail logic: (a) to detect and repair the conflicting nets; (b) to detect and repair the asymmetric nets; (c) to be used in other logic styles where strict network control is required in the Xilinx scenario. A customized correlation-analysis testbed for EM and power attacks, including the platform construction, the measurement method and the attack analysis. A timing-analysis-based method for quantifying the security grades. A methodology of security partitioning of complex crypto systems for reducing the protection cost. A proof-of-concept self-adaptive heating system to mitigate electrical impacts of process variations in a dynamic dual-rail compensation manner. The thesis chapters are organized as follows: Chapter 1 discusses the side-channel attack fundamentals, covering theoretical basics and analysis models, and further the platform setup and attack execution. Chapter 2 centers on SCA-resistant strategies against generic power and EM attacks. In this chapter, a major contribution, a compact and secure dual-rail logic style, is originally proposed, and the logic transformation based on bottom-layer design is presented. Chapter 3 elaborates the implementation challenges of generic dual-rail styles. A customized design flow to solve the implementation problems is described, along with a self-developed automatic implementation toolkit, for mitigating the design barriers and facilitating the processes. Chapter 4 elaborates the tool specifics and construction details. The implementation case studies and security validations for the proposed logic style, as well as a sophisticated routing verification experiment, are described in Chapter 5. Finally, a summary of the thesis conclusions and perspectives for future work are included in Chapter 6. To better exhibit the thesis contents, each chapter is further described next: Chapter 1 provides the introduction of the hardware implementation testbed and side-channel attack fundamentals, and mainly contains: (a) the generic FPGA architecture and device features, particularly of the Virtex-5 FPGA; (b) the selected crypto algorithm, a commercially and extensively used Advanced Encryption Standard (AES) module; (c) the essentials of the side-channel methods are profiled.
These reveal the dissipation leakage correlated with the internal behaviors, and the method to recover the relationship between the physical fluctuations in side-channel traces and the internally processed data; (d) the setups of the power/EM testing platforms used in the thesis work are given. The content of this thesis is expanded and deepened from Chapter 2 onwards, which covers several aspects. First, the protection principle of dynamic compensation in generic dual-rail precharge logic is explained by describing the compensated gate-level elements. Second, the novel DPL is originally proposed, detailing the logic protocol and an implementation case study. Third, a couple of custom workflows for realizing the rail conversion are shown. Meanwhile, the technical definitions of the elements manipulated above the LUT-level netlist are clarified. A brief discussion of the batched process is given in the final part. Chapter 3 studies the implementation challenges of DPLs in FPGAs. The security level of state-of-the-art SCA-resistant solutions is decreased by the implementation barriers of conventional EDA tools. In the studied FPGA scenario, the problems of dual-rail format, parasitic impact, technological bias and implementation feasibility are discussed. From these elaborations, two problems arise: how to implement the proposed logic without crippling the security level, and how to manipulate a large number of cells and automate the transformation. The PA-DPL proposed in Chapter 2 is legalized with a series of initiatives, from structures to implementation methods. Furthermore, a self-adaptive heating system is depicted and applied to a dual-core logic, intended to alternately adjust the local temperature to balance the negative impacts of silicon technological bias in real time. Chapter 4 centers on the toolkit system. Built upon a third-party Application Program Interface (API) library, the customized toolkit is able to manipulate the logic elements of the post-P&R circuit (an unreadable binary version of the xdl one) converted to the Xilinx xdl format. The mechanism and rationale of the proposed toolkit are carefully conveyed, covering the routing detection and repairing approaches. The developed toolkit aims to achieve strictly identical routing networks for dual-rail logic, both for separate and for interleaved placement. This chapter particularly specifies the technical essentials that support the implementations in Xilinx devices and the flexibility to be extended to other applications. Chapter 5 focuses on the case studies used to validate the security grades of the proposed logic style built with the proposed toolkit. Comprehensive implementation techniques are discussed: (a) the placement impacts of using the proposed toolkit are discussed, and different execution schemes, considering the global optimization of security and cost, are verified with experiments so as to find the optimized placement and repair schemes; (b) security validations are carried out with correlation and timing methods; (c) a systematic method is applied to a BCDL-structured module to validate the routing impact on the security metrics; (d) preliminary results of using the self-adaptive heating system against process variation are given; (e) a practical implementation of the proposed toolkit on a large design is introduced. Chapter 6 includes the general summary of the complete work presented in this thesis.
Finally, a brief perspective on future work is drawn, which might expand the potential utilization of the thesis contributions to a wider range of implementation domains beyond cryptography on FPGAs.
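As background to the dual-rail precharge contributions above, the following toy model (not the PA-DPL logic itself) shows why a dual-rail precharge encoding flattens data-dependent switching: whatever value is processed, exactly one rail per bit rises in every evaluation phase, so the transition count carries no information about the data.

def dual_rail(bit):
    """Encode one logical bit as a (true_rail, false_rail) pair."""
    return (1, 0) if bit else (0, 1)

def rising_transitions(previous, current):
    """Count 0 -> 1 wire transitions, a crude proxy for dynamic power."""
    return sum(1 for a, b in zip(previous, current) if a == 0 and b == 1)

def evaluate(word_bits):
    precharge = [0] * (2 * len(word_bits))                    # all rails discharged
    evaluation = [rail for bit in word_bits for rail in dual_rail(bit)]
    return rising_transitions(precharge, evaluation)

# Whatever the processed value, the number of rising rails stays constant
for value in (0b0000, 0b1010, 0b1111):
    bits = [(value >> i) & 1 for i in range(4)]
    print(f"value {value:04b}: {evaluate(bits)} rising transitions")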
Abstract:
Cloud-based infrastructure has been increasingly adopted by the industry in distributed software development (DSD) environments. Its proponents claim that its several benefits include reduced cost, increased speed and greater productivity in software development. Empirical evaluations, however, are at a nascent stage in examining both the benefits and the risks of cloud-based infrastructure. The objective of this paper is to identify potential benefits and risks of using the cloud in a DSD project conducted by teams based in Helsinki and Madrid. A cross-case qualitative analysis is performed based on focus groups conducted at the Helsinki and Madrid sites. Participant observations are used to supplement the analysis. The results of the analysis indicate that the main benefits of using the cloud are rapid development, continuous integration, cost savings, code sharing, and faster ramp-up. The key risks identified by the project are dependencies, unavailability of access to the cloud, code commitment and integration, technical debt, and additional support costs. The results reveal that if such environments are not planned and set up carefully, the benefits of using the cloud in DSD projects might be overshadowed by the risks associated with it.
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers to provide a homogeneous way to assess this competence. This guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated with several pilot experiences. In this paper we will explain the problem-solving assessment guide for teachers and will show the pilot experiences that have been carried out. We will finally justify the validity of the method to assess the problem-solving competence.
Abstract:
This article describes a knowledge-based application in the domain of road traffic management that we have developed following a knowledge modeling approach and the notion of problem-solving method. The article first presents a domain-independent model for real-time decision support as a structured collection of problem-solving methods. Then it describes how this general model is used to develop an operational version for the domain of traffic management. For this purpose, a particular knowledge modeling tool, called KSM (Knowledge Structure Manager), was applied. Finally, the article shows an application developed for a traffic network of the city of Madrid and compares it with a second application developed for a traffic area of the city of Barcelona.
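As a rough illustration of how a structured collection of problem-solving methods can be chained for real-time traffic decision support, the sketch below strings together a detect/diagnose/propose pipeline; the method names, occupancy threshold, sections and actions are hypothetical and do not come from KSM or from the Madrid and Barcelona applications.

def detect_problems(readings):
    """Abstract raw occupancy readings into symbolic problems (congested sections)."""
    return [section for section, occupancy in readings.items() if occupancy > 0.8]

def diagnose(problem_sections):
    """Classify the situation from the pattern of congested sections."""
    if not problem_sections:
        return None
    return "incident" if len(problem_sections) == 1 else "recurrent congestion"

def propose_actions(diagnosis):
    """Map each diagnosis to candidate control actions (VMS messages, ramp metering)."""
    catalogue = {
        "incident": ["post warning on upstream VMS", "alert the operator"],
        "recurrent congestion": ["activate ramp metering", "suggest an alternative route"],
    }
    return catalogue.get(diagnosis, [])

readings = {"section km 12": 0.92, "section km 14": 0.55}
print(propose_actions(diagnose(detect_problems(readings))))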
Abstract:
In recent years, the continuous incorporation of new technologies into the learning process has been an important factor in education (1). The Technical University of Madrid (UPM) promotes educational innovation processes and develops projects related to the improvement of educational quality. The experience that we present fits into the Educational Innovation Project (EIP) of the E.U. of Agricultural Engineering of Madrid. One of the main objectives of the EIP is to take advantage of the new opportunities offered by the Learning and Knowledge Technologies in order to enrich the educational processes and teaching management (2).
Abstract:
The risks associated with gestational diabetes (GD) can be reduced with an active treatment able to improve glycemic control. Advances in mobile health can provide new patient-centric models for GD to create personalized health care services, increase patient independence and improve patients’ self-management capabilities, and potentially improve their treatment compliance. In these models, decision-support functions play an essential role. The telemedicine system MobiGuide provides personalized medical decision support for GD patients that is based on computerized clinical guidelines and adapted to a mobile environment. The patient’s access to the system is supported by a smartphone-based application that enhances the efficiency and ease of use of the system. We formalized the GD guideline into a computer-interpretable guideline (CIG). We identified several workflows that provide decision-support functionalities to patients and 4 types of personalized advice to be delivered through a mobile application at home, which is a preliminary step to providing decision-support tools in a telemedicine system: (1) therapy, to help patients to comply with medical prescriptions; (2) monitoring, to help patients to comply with monitoring instructions; (3) clinical assessment, to inform patients about their health conditions; and (4) upcoming events, to deal with patients’ personal context or special events. The whole process to specify patient-oriented decision support functionalities ensures that it is based on the knowledge contained in the GD clinical guideline and thus follows evidence-based recommendations but at the same time is patient-oriented, which could enhance clinical outcomes and patients’ acceptance of the whole system.
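As an illustration of the "monitoring" type of advice described above, the following toy rule checks whether the prescribed number of daily glucose measurements has been received and produces a reminder otherwise; the threshold, message text and data layout are invented and are not taken from the MobiGuide computer-interpretable guideline.

from datetime import date, timedelta

def monitoring_advice(measurements_per_day, required_per_day=4):
    """Return a reminder for each day with fewer glucose values than prescribed."""
    advice = []
    for day, count in sorted(measurements_per_day.items()):
        if count < required_per_day:
            advice.append(f"{day}: only {count}/{required_per_day} glucose measurements "
                          "received - please complete today's monitoring plan.")
    return advice

today = date(2015, 6, 1)
log = {today - timedelta(days=2): 4, today - timedelta(days=1): 2, today: 1}
for message in monitoring_advice(log):
    print(message)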
Abstract:
The competence evaluation promoted by the European Higher Education Area entails a very important methodological change that requires guiding support to help lecturers carry out this new and complex task. In this regard, the Technical University of Madrid (UPM, by its Spanish acronym) has financed a series of coordinated projects with the objective of developing a model for teaching and evaluating core competences and providing support to lecturers. This paper deals with the problem-solving competence. The first step has been to elaborate a guide for teachers to provide a homogeneous way to assess this competence. This guide considers several levels of acquisition of the competence and provides the rubrics to be applied for each one. The guide has subsequently been validated with several pilot experiences. In this paper we will explain the problem-solving assessment guide for teachers and will show the pilot experiences that have been carried out. We will finally justify the validity of the method to assess the problem-solving competence.
Abstract:
Empirical Software Engineering (ESE) replication researchers need to store and manipulate experimental data for several purposes, in particular analysis and reporting. Current research needs call for sharing and preservation of experimental data as well. In a previous work, we analyzed Replication Data Management (RDM) needs. A novel concept, called Empirical Ecosystem, was proposed to solve current deficiencies in RDM approaches. The empirical ecosystem provides replication researchers with a common framework that transparently integrates heterogeneous local data sources. A typical situation where the Empirical Ecosystem is applicable is when several members of a research group, or several research groups collaborating together, need to share and access each other's experimental results. However, to be able to apply the Empirical Ecosystem concept and deliver all the promised benefits, it is necessary to analyze the software architectures and tools that can properly support it.
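One way to picture the kind of support such an architecture needs is a thin adapter layer that exposes heterogeneous local data sources through a single interface, as in the hypothetical sketch below; the class names, formats and fields are illustrative only and are not part of the Empirical Ecosystem proposal.

import csv
import io
import json
from abc import ABC, abstractmethod

class ExperimentSource(ABC):
    """Common interface every local data source is wrapped behind."""
    @abstractmethod
    def observations(self):
        """Return the experimental observations as a list of uniform records."""

class CsvSource(ExperimentSource):
    def __init__(self, text):
        self.text = text
    def observations(self):
        return list(csv.DictReader(io.StringIO(self.text)))

class JsonSource(ExperimentSource):
    def __init__(self, text):
        self.text = text
    def observations(self):
        return json.loads(self.text)

# Two groups keep their results in different formats; the ecosystem sees one list
sources = [CsvSource("subject,effort\nS1,12\nS2,9\n"),
           JsonSource('[{"subject": "S3", "effort": 15}]')]
merged = [record for source in sources for record in source.observations()]
print(merged)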
Abstract:
Microcomputer systems consist mainly of hardware and software. Over time the hardware degrades, deteriorates and occasionally breaks down; the software evolves, requires maintenance and updates, and sometimes fails and has to be repaired or reinstalled. At the hardware level, this work analyzes the main components that make up most of these systems, both desktop and laptop, independently of the operating system, as well as the main peripherals; it also analyzes and recommends some of the tools needed to assemble, maintain and repair this equipment. The main internal hardware components are the motherboard, RAM, processor, hard disk, case, power supply and graphics card. The most notable peripherals are the monitor, keyboard, mouse, printer and scanner. A section has been included detailing the different types of BIOS and the main configuration parameters. For all these components, both internal and peripheral, an analysis has been made of the features they offer and of the details that deserve special attention when choosing one over another. Where different technologies exist, a comparison has been made between them, highlighting the advantages and disadvantages of each so that the end user can decide which one best fits his or her needs in terms of performance and cost. Examples are inkjet versus laser printers, or mechanical hard disks compared with solid-state drives (SSD). All these components are related, interconnected and dependent on one another; a chapter has been devoted exclusively to studying how these components are assembled, highlighting the main mistakes and faults that tend to occur, and a series of preventive maintenance tasks is indicated that can be carried out to extend the useful life of the equipment and avoid breakdowns due to misuse. Maintenance can be classified as predictive, perfective, adaptive, preventive and corrective. The focus has been placed mainly on two types of maintenance: the preventive maintenance described above and corrective maintenance, both software and hardware. Corrective maintenance is focused on the analysis, localization, diagnosis and repair of hardware and software faults and breakdowns. The main faults occurring in each component are described, along with how they manifest themselves or what symptoms they present, so that specific tests can be run to diagnose and isolate the fault. Where repair is possible, the instructions to follow are detailed; otherwise, replacement of the part or component is recommended. A section on virtualization has been included, a booming technology that is very useful for software testing, reducing testing time and cost. Another interesting aspect of virtualization is that it is used to set up several virtual servers on a single physical server, which represents a significant saving in hardware and maintenance costs, for example electricity consumption.
At the software level, a detailed study is made of the main security problems and vulnerabilities to which a microcomputer system is exposed, listing and describing the behavior of the different types of malicious elements that can infect a computer, the precautions to be taken to minimize the risks and the utilities that can be run to prevent infection or to clean an infected computer. Maintenance and technical assistance, especially of the software kind, do not always require the on-site attention of a qualified technician, so a chapter has been devoted to the remote-assistance tools that can be used in this field. Some of the most popular tools on the market are described, together with how they work, their features and their requirements. In this way the user can be attended to quickly, minimizing response times and reducing costs. ABSTRACT Microcomputer systems are basically made up of hardware and software. As time passes, the hardware degrades and sometimes fails; the software evolves, new versions appear, maintenance and upgrades are required, and it sometimes fails and has to be repaired or reinstalled. The most important hardware components in a microcomputer system, whether laptop or desktop and independently of the operating system it runs, are analyzed in this document. In addition, the main peripherals and devices are also analyzed, and a recommendation of the most appropriate tools for maintaining and repairing this kind of equipment is given as well. The main internal hardware components are: motherboard, RAM, microprocessor, hard drive, case, power supply and graphics card. The most important peripherals are: monitor, keyboard, mouse, printer and scanner. A section has also been included where the different types of BIOS and their main settings are listed, with the basic setup parameters in each case. For all these internal components and peripherals, an analysis of their features has been carried out, together with an indication of the details that deserve special attention when choosing between them. In those cases where different technologies are available, a comparison has been made, highlighting the advantages and disadvantages of selecting one or another, to guide the end user in deciding which one best fits his or her needs in terms of performance and cost. As an example, inkjet versus laser printer technologies are compared, as are mechanical hard disks versus the new solid-state drives (SSD). All these components are interconnected and depend on one another; a special chapter has been included to study how they must be assembled, emphasizing the most frequent mistakes and faults that can appear during that process and indicating different preventive-maintenance tasks that can be done to extend the life of the equipment and to prevent damage caused by improper use. The different types of maintenance can be classified as predictive, perfective, adaptive, preventive and corrective. The main focus is on preventive maintenance, described above, and on corrective maintenance, both software and hardware. Corrective maintenance is focused on the analysis, localization, diagnosis and repair of hardware and software failures and breakdowns.
The most typical failures that can occur are described, along with how they can be detected and the specific symptoms of each one, in order to apply the techniques or specific tests needed to diagnose and delimit the failure. In those cases where repair is possible, the instructions to do so are given; otherwise, replacement of the component is recommended. A complete section about virtualization has also been included. Virtualization is a state-of-the-art technology that is very useful, especially for software testing, reducing the time and cost of tests. Another interesting aspect of virtualization is the possibility of running different virtual servers on a single physical server, which represents significant savings in hardware investment and maintenance costs, such as electricity consumption. In the software area, a detailed study has been made of the security problems and vulnerabilities to which a microcomputer system is exposed, listing and describing the behavior of the different types of malicious elements that can infect a computer, the precautions to be taken to minimize the risks and the tools that can be used to prevent infection or to clean a computer system that has been infected. Software maintenance and technical assistance do not always require the physical presence of a qualified technician to solve the possible problems, which is why a complete chapter about the remote support tools that can be used in this field has also been included. Some of the most popular ones on the market are described, with their operation, characteristics and requirements. Using this kind of technology, end users can be served quickly, minimizing response times and reducing costs.
Abstract:
Emotion is generally argued to influence the behavior of living systems, largely with regard to flexibility and adaptivity. The way in which living systems act in response to particular situations in their environment has revealed the decisive and crucial importance of this feature for successful behavior, and this source of inspiration has influenced the way we think about artificial systems. During the last decades, artificial systems have undergone such an evolution that every day more of them are integrated into our daily life. They have grown in complexity, and the consequence is an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others. All of these are questions that raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous Systems project, a.k.a. the ASys-Project. Its short-term objectives of immediate application focus on the design of improved systems and on bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning constitutes the general objective. Both the starting question and the general objective have underlain the study of emotion: its influence on system behavior, the key foundations that justify this feature in living systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. Assuming essential differences in structure, purpose and operation between living and artificial systems, the essential motivation has been to explore what emotion solves in nature in order to then analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts all interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation, but related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to attend to: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis beyond the state of the art of the most relevant theories of emotion and technical systems, in order to obtain the support required for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in a federated way, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as an ordinary system within the external environment, ESys is designed to operate within a conceptualized inner environment, and this inner environment is built on the basis of the relevances that might occur inside AGSys in its interaction with the external environment. This allows a clean separation of reasoning concerning mission goals, defined in AGSys, and emotional goals, defined in ESys. In this way, a possible path is provided for high-level reasoning under the influence of goal congruence. The high-level reasoning model uses knowledge about the stability of the emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception conceived as an analogue of a well-known theory in emotion science. The operation of this model is described as a recursive process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge that concerns the relation between a perceived object and the effect which this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that provides the support for high-level reasoning. The extent to which this reasoning might be taken is only delineated and is assumed as future work. This thesis has drawn on a wide range of fields of knowledge, which can be structured into two main areas: (a) the fields of psychology, cognitive science, neurology and the biological sciences, in order to obtain an understanding of the problem of the emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, Self-X systems, Model Integrated Computing (MIC) or the models@runtime paradigm, among others, in order to obtain knowledge about tools for designing each part of the solution. The final approach has been built mainly on the basis of this acquired knowledge, and is described within the fields of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalization to provide precise understanding where required. This approach describes a reference model for feeding systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations concerning the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be taken as artificial emotion.
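A toy rendering of the mutual feedback idea between the outer agent and its inner emotional module is sketched below; the class names, the appraisal rule and the thresholds are invented for illustration and do not reproduce the AGSys/ESys design.

class InnerAppraisal:
    """Keeps a scalar valence that summarises how recent events affected the goals."""
    def __init__(self):
        self.valence = 0.0

    def appraise(self, goal_progress):
        # Slow-decaying mood: new progress is blended into the running valence
        self.valence = 0.8 * self.valence + 0.2 * goal_progress
        return self.valence

class OuterAgent:
    def __init__(self):
        self.inner = InnerAppraisal()

    def step(self, goal_progress):
        relevance = self.inner.appraise(goal_progress)
        # The emotion-like signal does not pick the action, it only biases the policy
        return "explore alternatives" if relevance < 0.0 else "keep current plan"

agent = OuterAgent()
for progress in (0.5, -0.6, -0.4, 0.7):
    print(agent.step(progress))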
Resumo:
The current energy situation is unsustainable, and as a consequence a near-term scenario is envisaged aimed at a sustainable energy future that allows economic development and social welfare. The current environmental situation is directly affected by the combustion of fossil fuels, which in 2013 accounted for 81% of the primary energy used by humankind and are the main anthropogenic source of greenhouse gases. The IPCC reports show that climate change has become firmly established in recent years, and at the UN climate change conference in Paris at the end of 2015 governments are expected to sign a universal agreement to limit greenhouse gas emissions and prevent the increase in global mean temperature from exceeding 2°C. Moreover, within cities the environmental impact of the NOx emissions generated by the transport of people and goods is particularly worrying because of its direct effect on human health. In 2012 the transport sector was responsible for 27.9% of final energy consumption. Having outlined the current energy and environmental scenario, this thesis analyses the efficiency of a stand-alone photovoltaic system for charging electric vehicle batteries, and its use with other loads, with the aim of making the most of the electricity generated and contributing to the use of clean energy with no environmental impact. As a first step in the development of the thesis, a review of previous work was carried out, starting with the earliest applications of photovoltaics in solar vehicles and moving on to more recent work focused on supplying energy to electric vehicles. This review also covered simulation methodologies for photovoltaic systems and the modelling of their different components. Subsequently, from the wide range available on the market, the components with the technical characteristics best suited to this type of installation and to the needs to be covered were selected. Based on the technical parameters of the components chosen for the stand-alone installation, and using validated models of the different components, a computer simulation model of the complete system was developed. Simulations were run with different electricity demand profiles, according to the charging modes available in the electric vehicle for 230 V single-phase alternating current. Different sizes of the photovoltaic generator and of the electrical energy storage system were also simulated in order to determine the influence of these parameters on the energy balances of the system. Using his own resources, the doctoral candidate carried out the real installation of a photovoltaic system, including storage and inverter, in a building he owns. For the development of the thesis, the Fundación de Fomento e Innovación Industrial (F2I2) provided the doctoral candidate with a device that allows the electric vehicle to be charged in mode 2 (this mode uses an adapter incorporating safety devices that communicates with the vehicle and allows the charging rate to be adjusted), which was necessary for the work carried out.
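To illustrate the kind of energy balance such a simulation evaluates, the Python sketch below computes a simplified hourly balance for a stand-alone photovoltaic system with battery storage, an electric-vehicle charging load and the grid as backup. The power profiles, battery size, efficiency and the assumed mode 2 power level are arbitrary placeholders, not the models or data used in the thesis.

```python
# Simplified illustration only: placeholder figures, not the thesis model.

def simulate_day(pv_kw_profile, load_kw_profile, battery_kwh, batt_soc=0.5,
                 batt_eff=0.9, step_h=1.0):
    """Hourly energy balance: PV covers the load, the battery buffers the
    surplus or deficit, and the grid supplies whatever remains."""
    soc_kwh = batt_soc * battery_kwh
    grid_kwh = pv_used_kwh = 0.0
    for pv_kw, load_kw in zip(pv_kw_profile, load_kw_profile):
        balance = (pv_kw - load_kw) * step_h          # surplus (+) or deficit (-)
        if balance >= 0:
            # Store the surplus in the battery, limited by its capacity.
            soc_kwh = min(battery_kwh, soc_kwh + balance * batt_eff)
            pv_used_kwh += load_kw * step_h
        else:
            # Discharge the battery first, then draw the rest from the grid.
            from_batt = min(soc_kwh, -balance)
            soc_kwh -= from_batt
            grid_kwh += (-balance - from_batt)
            pv_used_kwh += pv_kw * step_h
    return {"grid_kwh": round(grid_kwh, 2),
            "pv_used_kwh": round(pv_used_kwh, 2),
            "final_soc_kwh": round(soc_kwh, 2)}


if __name__ == "__main__":
    pv = [0, 0, 0.5, 1.5, 2.5, 3.0, 2.5, 1.5, 0.5, 0, 0, 0]   # kW, placeholder profile
    ev_mode2 = [0, 0, 0, 0, 2.3, 2.3, 2.3, 2.3, 0, 0, 0, 0]   # kW, placeholder 230 V load
    print(simulate_day(pv, ev_mode2, battery_kwh=5.0))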
The electricity grid was used as a backup for the photovoltaic installation to allow charging in mode 2, which requires more power than the installed photovoltaic system can provide. Different charging regimes were analysed by simulation and studied experimentally in the installation; at the same time, tests were carried out and then reproduced by simulation with the same solar radiation and temperature values in order to validate the model. The experimental results were compared with those obtained by simulation in order to characterise the behaviour of the storage system (electrical energy supplied and battery output voltage) and of the photovoltaic generator (photovoltaic energy supplied). Finally, an economic study of the stand-alone photovoltaic installation, as built and as simulated, was carried out. It considers both the use of own funds (as was actually the case) and the use of financing, in order to define two possible scenarios that could be useful to an electric vehicle owner. The results obtained in the two scenarios of the economic study were compared in terms of payback time, net present value and internal rate of return of the investment. The technical conclusions obtained support using the system with the charging modes tested and with other loads that exploit the electricity generated by the system. The batteries behave better when the photovoltaic contribution is present, but connecting high loads to a gel (lead-acid) storage system such as the one used is not considered appropriate, because of the behaviour of this type of battery under high current demands. On the other hand, the behaviour of this type of battery at currents below 10 A in the absence of photovoltaic energy, with the aim of using the daily electricity generation accumulated in the system, is indeed interesting and shows good performance of the storage system. Current market conditions, in which lithium storage systems are not available at attractive purchase prices, did not allow this storage technology to be tested in the stand-alone photovoltaic installation built, nor could the support of any manufacturer be obtained for this purpose. Lithium storage systems that are not yet marketed in Spain are now available and would be suitable for the energy storage system proposed in this study, which leaves the door open for future research. The economic conclusions obtained show that a stand-alone photovoltaic installation with instantaneous consumption, without electrical energy storage, is profitable. The future grid connection of these installations, once it is regulated, will provide an economic incentive that shortens the payback time of stand-alone photovoltaic installations; this also leaves the door open for future research. The energy storage system accounts for the largest share of the investment in this type of installation.
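As an illustration of the indicators compared in those two scenarios, the sketch below computes payback time, net present value and internal rate of return for a generic series of cash flows. The investment and annual-saving figures are arbitrary placeholders, not results of the study.

```python
# Generic investment-indicator sketch: arbitrary placeholder figures.

def npv(rate, cashflows):
    """Net present value; cashflows[0] is the (negative) initial investment."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cashflows))


def irr(cashflows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return found by bisection on the NPV sign change."""
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0


def payback_years(cashflows):
    """First year in which the cumulative cash flow becomes non-negative."""
    total = 0.0
    for year, cf in enumerate(cashflows):
        total += cf
        if total >= 0:
            return year
    return None  # not recovered within the horizon


if __name__ == "__main__":
    # Placeholder scenario: 6000 EUR investment, 500 EUR/year savings for 20 years.
    flows = [-6000.0] + [500.0] * 20
    print("payback:", payback_years(flows), "years")
    print("NPV @3%:", round(npv(0.03, flows), 2), "EUR")
    print("IRR:", round(100 * irr(flows), 2), "%")
```

A financed scenario would simply be represented by a different cash-flow series (lower initial outlay, loan repayments subtracted from the annual savings), after which the same three indicators can be compared.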
The installation studied yields economic indicators that make it profitable, but energy storage prices for efficient systems would need to fall within the range of 100-200 €/kWh for the system proposed in this work to be attractive to a potential electric vehicle owner.