800 results for cryptographic computing
Abstract:
Modern Field Programmable Gate Arrays (FPGAs) are packed with features that assist designers. The availability of features such as large block memories (BRAM), Digital Signal Processing (DSP) cores, and embedded CPUs makes the design strategy for FPGAs quite different from that for ASICs. FPGAs are also widely used in security-critical applications where protection against known attacks is of prime importance. We focus on physical attacks, which target physical implementations. To design countermeasures against such attacks, the strategy for FPGA designers should also differ from that for ASICs: the available features should be exploited to build compact and strong countermeasures. In this paper, we propose methods to exploit the BRAMs in FPGAs for designing compact countermeasures. BRAM can be used to optimize intrinsic countermeasures such as masking and dual-rail logic, which otherwise incur significant overhead (at least 2x). The optimizations are applied to a real AES-128 co-processor and tested for area overhead and resistance on Xilinx Virtex-5 chips. The presented masking countermeasure has an overhead of only 16% when applied to AES. Moreover, the Dual-rail Precharge Logic (DPL) countermeasure has been optimized to pack the whole sequential part into the BRAM, thereby enhancing security. Thorough robustness evaluations are conducted to analyze the optimizations for area and security.
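Masking of the kind optimized here typically recomputes a masked substitution table, exactly the sort of structure that fits in a BRAM. A minimal illustrative sketch, assuming a stand-in S-box and hypothetical mask handling (not the paper's implementation):

```python
# First-order Boolean masking of an S-box lookup; the precomputed table is
# the kind of structure that would be stored in a BRAM. Illustrative only.
import secrets

SBOX = list(range(256))  # stand-in for the real AES S-box (values omitted here)

def masked_sbox_table(m_in: int, m_out: int) -> list:
    """Precompute S'(i) = S(i ^ m_in) ^ m_out: looking up the masked input
    x ^ m_in then yields the masked output S(x) ^ m_out."""
    return [SBOX[i ^ m_in] ^ m_out for i in range(256)]

# Fresh masks per encryption; the table itself would live in a BRAM.
m_in, m_out = secrets.randbits(8), secrets.randbits(8)
table = masked_sbox_table(m_in, m_out)

x = 0x3A                          # a secret intermediate byte
y_masked = table[x ^ m_in]        # the unmasked value S(x) never appears
assert y_masked ^ m_out == SBOX[x]
```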
Abstract:
The assessment of learning outcomes is a key concept in the European Credit Transfer and Accumulation System (ECTS), since credits are awarded when the assessment shows that the competences aimed at have been developed to an appropriate level. This paper describes a study which was first part of the Bologna Experts Team-Spain project and was then developed as an independent study. It was carried out with the overall goal of gaining experience in the assessment of learning outcomes. More specifically, it aimed at 1) designing procedures for the assessment of learning outcomes related to compulsory generic competences; 2) testing some basic psychometric features that an assessment device with consequences for the subjects being evaluated needs to demonstrate; 3) testing different standard-setting procedures; and 4) using assessment results as orienting feedback to students and their tutors. The process of developing the tests used to assess learning outcomes is described, together with some basic features regarding their reliability and validity. First conclusions on the comparison of the results achieved at two academic levels are also presented.
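One of the basic psychometric features mentioned, test reliability, is commonly summarized with Cronbach's alpha; the following is a generic, hedged sketch of that computation (invented toy data, not the study's instruments or procedure):

```python
# Generic Cronbach's alpha computation for a students x items score matrix.
# The data below are invented for illustration only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: (n_students, n_items) matrix of item scores."""
    n_items = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return n_items / (n_items - 1) * (1 - item_vars / total_var)

# Toy data: 5 students x 4 items, correlated through a per-student base level.
rng = np.random.default_rng(0)
base = rng.integers(0, 5, size=(5, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(5, 4)), 0, 4).astype(float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```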
Abstract:
The assessment of learning outcomes is a key concept in the European Credit Transfer and Accumulation System (ECTS), since credits are awarded when the assessment shows that the competences aimed at have been developed to an appropriate level. This paper describes a study which was first part of the Bologna Experts Team-Spain project and was then developed as an independent study. It was carried out with the overall goal of gaining experience in the assessment of learning outcomes. More specifically, it aimed at 1) designing procedures for the assessment of learning outcomes related to compulsory generic competences; 2) testing some basic psychometric features that an assessment device with consequences for the subjects being evaluated needs to demonstrate; 3) testing different standard-setting procedures; and 4) using assessment results as orienting feedback to students and their tutors. The process of developing the tests used to assess learning outcomes related to these competences is described, together with some basic features regarding their reliability and validity; first results on the comparison of results achieved at two academic levels will be described at a later stage.
Abstract:
This paper shows the advantages of parallel computing by solving the two-dimensional heat conduction equation numerically with the forward-in-time, centered-in-space (FTCS) explicit finite difference method. Two different levels of parallelization are considered and compared with a traditional serial procedure. The results highlight the importance of parallel computing when dealing with large problems that require a very high number of computations and for which serial computing is impractical because of its execution time. The first section briefly summarizes the basic concepts of parallel computing. Next, the FTCS finite difference method for the parabolic heat equation is outlined, describing how the heat flow equation is derived in two dimensions and the particularities of the finite difference technique considered. A specific initial boundary value problem is then solved with the FTCS method, and pseudocodes for one serial and two parallel implementations are provided. Finally, after the results are discussed, some conclusions are presented.
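The FTCS update described above can be sketched in a few lines; the grid resolution, diffusivity, time step, and boundary values below are illustrative assumptions, not the paper's pseudocodes:

```python
# Minimal FTCS sketch for the 2-D heat equation u_t = alpha * (u_xx + u_yy).
import numpy as np

alpha, h, n_steps = 1.0, 1.0 / 64, 200
dt = 0.2 * h * h / alpha            # within the stability bound dt <= h^2 / (4 * alpha)
u = np.zeros((65, 65))
u[0, :] = 100.0                     # fixed-temperature (Dirichlet) boundary

for _ in range(n_steps):
    # Discrete Laplacian on interior points, centered in space.
    lap = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
           - 4.0 * u[1:-1, 1:-1]) / (h * h)
    u[1:-1, 1:-1] += alpha * dt * lap   # explicit forward-in-time update

print(u[32, 32])  # temperature at the grid center after n_steps steps
```

The interior update is what both parallel variants would distribute, since every grid point depends only on its four neighbours from the previous time step.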
Abstract:
Currently, student dropout rates are a matter of concern among universities. Many research studies aimed at discovering the causes have been carried out, but few solutions that could serve all students and related problems have been proposed so far. One such problem is the break in the "knowledge chain of educational links" that occurs when students move on to higher studies without mastering their basic studies. Most regulated degree programs at universities are designed so that some basic subjects serve as support for other, more complicated subjects, thus forming a complex knowledge network. When a link in this chain fails, the student becomes frustrated, as it prevents him or her from fully understanding the following educational links. In this proposal we try to mitigate these failures, which for the most part extend the student's frustration beyond the college stay. On the one hand, we discuss the student's learning process, which we divide into a series of phases that amount to what we call the "learning lifecycle", and we analyze at which stage the action of the stakeholders involved in this scenario, teachers and students, is most important. On the other hand, we consider that Information and Communication Technologies (ICT), such as Cloud Computing, can help develop new ways of teaching in higher education while easing and facilitating the student's learning process. However, methods and processes need to be defined to direct the use of such technologies, in the teaching process in general and within higher education in particular, in order to achieve optimal results. Our methodology integrates ICT as another actor in the "learning lifecycle". We encourage students to stop being mere spectators of their own education and to take an active part in their training process. To do this, we offer a set of useful tools to determine not only the causes of academic failure (for self-assessment) but also remedies for those failures (corrective actions): once the causes are discovered, it is easier to determine solutions. We believe this study will be useful for both students and teachers. Students learn from their own experience and improve their learning process, while acquiring all of the "knowledge chain educational links" required in their studies; we stand by the motto "studying to learn instead of studying to pass". Teachers will also benefit by detecting where and how to strengthen their teaching proposals. All of this will also result in decreasing dropout rates.
Abstract:
The construction of very long railway tunnels has attracted great interest in recent years. In Spain, projects of this nature have had to be addressed without a complete, proven methodological framework supporting their execution. The geometric, observational, and working conditions of tunnels mean that methodologies applied in other engineering projects are not directly applicable, for the following reasons: the exterior and interior geodetic networks of a tunnel are separate and different in nature; the interior geometry is always unfavourable for classical observation; visibility inside the tunnel is poor; errors increase as the drilling of the tunnel progresses, since it becomes problematic to perform continuous verifications along the itinerary itself; and the tunnel itself moves during construction owing to active geodynamics. The patterns used for geodetic and topographic observations must therefore be reviewed when very long tunnels are constructed. This work establishes a methodology for the design of exterior networks.
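The claim that errors grow as drilling progresses can be illustrated with a simple Monte Carlo model of an unchecked interior traverse; this is a generic surveying illustration under assumed leg lengths and angular errors, not the methodology established in the work:

```python
# Monte Carlo model of cross-track error accumulation in an interior traverse.
# Each station adds an independent bearing error; errors at early stations
# deflect every subsequent leg, so uncertainty grows faster than sqrt(n).
import numpy as np

rng = np.random.default_rng(42)
n_legs, leg_len = 100, 50.0          # assumed: 100 legs of 50 m (a 5 km drive)
sigma_theta = np.deg2rad(5 / 3600)   # assumed 5 arc-second bearing error per station
n_trials = 20000

# The accumulated bearing after each station is the running sum of errors.
errs = rng.normal(0.0, sigma_theta, size=(n_trials, n_legs))
bearing = np.cumsum(errs, axis=1)
# Cross-track offset at the face: sum of each leg's lateral deflection.
offset = (leg_len * np.sin(bearing)).sum(axis=1)

print(f"cross-track sigma after {n_legs * leg_len:.0f} m: {offset.std():.3f} m")
```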
Abstract:
A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiological analyses, and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and about the need to address the energy footprint of the highly demanding computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication, and computing services provided by the Cloud; (iv) tackles energy optimization as a first-class requirement, taking it into account during the whole development cycle. The novel computing concept and the multi-layer, top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.
Abstract:
As advanced Cloud services become mainstream, the contribution of data centers to the overall power consumption of modern cities is growing dramatically. The average consumption of a single data center is equivalent to the energy consumption of 25,000 households. Modeling the power consumption of these infrastructures is crucial to anticipate the effects of aggressive optimization policies, but accurate and fast power modeling of high-end servers is a complex challenge not yet satisfied by analytical approaches. This work proposes an automatic method, based on Multi-Objective Particle Swarm Optimization, for the identification of power models of enterprise servers in Cloud data centers. Our approach, as opposed to previous procedures, not only considers workload consolidation when deriving the power model, but also incorporates other non-traditional factors such as static power consumption and its dependence on temperature. Our experimental results show that we obtain slightly better models than classical approaches while simultaneously simplifying the power model structure, and thus the number of sensors needed, which is very promising for short-term energy prediction. This work, validated with real Cloud applications, broadens the possibilities for deriving efficient energy-saving techniques for Cloud facilities.
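A hedged sketch of the identification idea: a basic single-objective particle swarm fitting a power model that includes static and temperature-dependent terms. The paper uses a multi-objective PSO and real server measurements; the model form and synthetic data here are assumptions for illustration:

```python
# Fit P = c0 + c1*util + c2*T + c3*util*T to synthetic measurements with a
# plain particle swarm. Model form and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
util = rng.uniform(0, 1, 200)                    # CPU utilization samples
temp = rng.uniform(30, 70, 200)                  # temperature samples (deg C)
power = 90 + 110 * util + 0.6 * temp + 0.4 * util * temp \
        + rng.normal(0, 2, 200)                  # synthetic measured power (W)

X = np.stack([np.ones_like(util), util, temp, util * temp], axis=1)
mse = lambda c: ((X @ c - power) ** 2).mean()    # fitting objective

n_p, dim, w, c1, c2 = 40, 4, 0.7, 1.5, 1.5       # standard PSO parameters
pos = rng.uniform(-50, 200, (n_p, dim))
vel = np.zeros((n_p, dim))
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(300):
    r1, r2 = rng.random((n_p, dim)), rng.random((n_p, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([mse(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("fitted coefficients:", np.round(gbest, 2))  # should land near [90, 110, 0.6, 0.4]
```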
Abstract:
This is the final report on reproducibility@xsede, a one-day workshop held in conjunction with XSEDE14, the annual conference of the Extreme Science and Engineering Discovery Environment (XSEDE). The workshop's discussion-oriented agenda focused on reproducibility in large-scale computational research. Two important themes capture the spirit of the workshop submissions and discussions: (1) organizational stakeholders, especially supercomputer centers, are in a unique position to promote, enable, and support reproducible research; and (2) individual researchers should conduct each experiment as though someone will replicate it. Participants documented numerous issues, questions, technologies, practices, and potentially promising initiatives emerging from the discussion, but also highlighted four areas of particular interest to XSEDE: (1) documentation and training that promote reproducible research; (2) system-level tools that provide build- and run-time information at the level of the individual job; (3) the need to model best practices in research collaborations involving XSEDE staff; and (4) continued work on gateways and related technologies. In addition, an intriguing question emerged from the day's interactions: would there be value in establishing an annual award for excellence in reproducible research?
Abstract:
Receptive fields of retinal and other sensory neurons show a large variety of spatiotemporal, linear and nonlinear types of responses to local stimuli. In visual neurons, these responses present either asymmetric sensitive zones or a center-surround organization. In most cases, the nature of the responses suggests the existence of a kind of distributed computation prior to the integration performed by the final cell, which is evidently supported by the anatomy. We describe a new kind of discrete and continuous filters to model the computations taking place in the receptive fields of retinal cells. To show their performance in the analysis of different non-trivial neuron-like structures, we use a computer tool specifically programmed by the authors for that purpose. This tool is also extended to study the effect of lesions on the overall performance of our model networks.
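A center-surround receptive field of the kind mentioned above is classically modeled as a difference of Gaussians; the following minimal sketch (not the authors' filters or tool) shows such a filter responding to a luminance edge:

```python
# Difference-of-Gaussians (DoG) model of a center-surround receptive field,
# applied to a 1-D stimulus. Kernel sizes and sigmas are invented.
import numpy as np

def dog_kernel(radius: int, sigma_c: float, sigma_s: float) -> np.ndarray:
    """Excitatory center minus inhibitory surround, each Gaussian normalized
    to unit mass, so the kernel sums to zero (no response to uniform input)."""
    x = np.arange(-radius, radius + 1, dtype=float)
    center = np.exp(-x**2 / (2 * sigma_c**2))
    surround = np.exp(-x**2 / (2 * sigma_s**2))
    return center / center.sum() - surround / surround.sum()

kernel = dog_kernel(radius=10, sigma_c=1.5, sigma_s=4.0)
stimulus = np.zeros(100)
stimulus[50:] = 1.0                              # a luminance step (an "edge")

response = np.convolve(stimulus, kernel, mode="same")
print(response.argmax(), response.argmin())      # excitation/inhibition flank index 50
```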
Abstract:
This thesis tackles the problem of the secure outsourcing of data and computation. The scenario of interest is one in which a user owns some data and wants to outsource it to a Cloud server; furthermore, the user may also want to delegate the computation over a subset of that data to the server. We present the security issues related to this scenario, namely integrity and privacy, and we analyse possible solutions to these two issues, exploiting advanced cryptographic tools such as Homomorphic Message Authenticators and Fully Homomorphic Encryption. Our contribution is both theoretical and practical.
On the theoretical side, using the articles [3] and [12] as starting points, we introduce a new cryptographic primitive, called Outsourcing, with the aim of providing a very generic and flexible model that can be employed to represent several secure outsourcing schemes. Such a model can represent secure outsourcing schemes that provide only integrity, only privacy or, interestingly, integrity with privacy. Using our new model we also re-define a highly efficient scheme constructed in [12], which we call Outsourcinglin: a scheme for computing multivariate polynomials of degree 1 over the ring Z_2^k. On the practical side, we build a framework with which to implement the Outsourcing scheme. We then use this framework to realize several implementations, specifically an implementation of the Joye-Libert cryptosystem ([18]) and an implementation of our Outsourcinglin scheme. In the context of this practical work, the thesis also led to some novel contributions: the design and implementation, in collaboration with Dario Fiore, of a new decryption algorithm for the Joye-Libert encryption scheme, which performs better than the algorithms proposed by the authors in [18]; and the implementation of the amortized closed-form efficient pseudorandom function of [12]. There was no prior implementation of this function, and it represented non-trivial work which may prove useful in other contexts. Finally, we use the implementations to run several experiments measuring the timing performance of the main algorithms.
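The linearity that makes degree-1 polynomials outsourceable can be sketched with a toy homomorphic MAC for linear functions, in the spirit of (but far simpler than) the schemes the thesis builds on; it works modulo a prime rather than over Z_2^k, and all parameters are illustrative:

```python
# Toy homomorphic MAC for linear functions f(x) = sum(c_i * x_i) mod P.
# Tags are linear in the data, so the server can derive the tag of f(x)
# from the tags of the inputs without knowing any secrets. Insecure toy.
import hashlib, hmac, secrets

P = (1 << 127) - 1                     # public prime modulus (illustrative)
KEY = secrets.token_bytes(16)          # secret PRF key
ALPHA = secrets.randbelow(P - 1) + 1   # secret MAC scalar

def prf(i: int) -> int:
    """Pseudorandom pad for position i, derived from the secret key."""
    return int.from_bytes(hmac.new(KEY, str(i).encode(), hashlib.sha256).digest(), "big") % P

def tag(i: int, x: int) -> int:
    return (ALPHA * x + prf(i)) % P

def eval_and_tag(coeffs, xs, tags):
    """Untrusted server: evaluate f and combine the tags homomorphically."""
    y = sum(c * x for c, x in zip(coeffs, xs)) % P
    t = sum(c * t_ for c, t_ in zip(coeffs, tags)) % P
    return y, t

def verify(coeffs, y, t) -> bool:
    """Client: recompute the expected tag of y from the secrets."""
    expected = (ALPHA * y + sum(c * prf(i) for i, c in enumerate(coeffs))) % P
    return t == expected

xs = [7, 11, 13]
tags = [tag(i, x) for i, x in enumerate(xs)]
y, t = eval_and_tag([2, 3, 5], xs, tags)
assert verify([2, 3, 5], y, t) and y == (2*7 + 3*11 + 5*13) % P
```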
Abstract:
New cloud-oriented technologies, the Internet of Things, and "as a service" trends are based on the storage and processing of data on remote servers. To guarantee secure communication and handling of those data, cryptographic schemes are used. Traditionally, these schemes focus on guaranteeing the security of data while it is stored and transferred, not while it is operated on. Therefore, once the server has to operate on the encrypted data, it first decrypts it, exposing unencrypted data to any intruder in the server. Moreover, the whole traditional scheme relies on the assumption that the server is trustworthy, since it is given enough credentials to decipher the data in order to process it. As a possible solution to these problems, fully homomorphic encryption (FHE) schemes have been introduced. A fully homomorphic scheme does not require data to be decrypted in order to operate on it; rather, it operates directly over the ciphertext ring, maintaining a homomorphism between the ciphertext ring and the plaintext ring. As a result, an intruder could only obtain encrypted data, making it impossible to retrieve the actual sensitive data without the associated cipher keys. However, homomorphic encryption (HE) schemes are currently drastically slower than classical encryption schemes: one operation in the plaintext ring can entail numerous operations in the ciphertext ring. For this reason, different approaches address the problem of speeding up these schemes to make them practical. One such approach is the use of High-Performance Computing (HPC) with FPGAs (Field Programmable Gate Arrays). An FPGA is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence "field-programmable". Compiling for an FPGA generates a hardware circuit specific to the given algorithm, instead of a set of instructions for a universal machine, which is a major advantage with respect to CPUs. FPGAs thus have clear differences compared to CPUs: a pipelined architecture, which allows successive outputs to be obtained in constant time, and the possibility of having multiple pipes for concurrent/parallel computation. Thus, in this project: we present different implementations of FHE schemes on FPGA-based systems; we analyse and study the advantages and drawbacks of the implemented FHE schemes; and we compare our implementations with related work.
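The homomorphism property described above can be demonstrated with textbook RSA, which is multiplicatively homomorphic; this toy, insecure sketch (tiny primes, a single supported operation) only illustrates the idea that a server can compute on ciphertexts, whereas FHE supports both addition and multiplication and hence arbitrary computation:

```python
# Textbook RSA is multiplicatively homomorphic: Enc(m1) * Enc(m2) = Enc(m1 * m2).
# Tiny parameters, no padding -- a toy demonstration only, never secure.
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))      # private exponent (Python 3.8+)

enc = lambda m: pow(m, e, n)           # Enc(m) = m^e mod n
dec = lambda c: pow(c, d, n)           # Dec(c) = c^d mod n

m1, m2 = 42, 17
c_prod = (enc(m1) * enc(m2)) % n       # server side: operate on ciphertexts only
assert dec(c_prod) == (m1 * m2) % n    # client side: decryption yields m1 * m2
```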
Abstract:
During the last three decades, FPGA technology has evolved quickly to become a major subject of research in computer and electrical engineering, having been identified as a powerful alternative for creating highly efficient computing systems. FPGA devices offer substantial performance improvements over traditional processing architectures thanks to their custom design and reconfiguration capabilities.
Abstract:
It has been shown that cloud computing brings cost benefits and promotes efficiency in the operations of organizations, regardless of their type or size. However, few public organizations are benefiting from this paradigm shift in the way organizations consume and manage computational resources. The objective of this thesis is to analyze both internal and external factors that may influence the adoption of cloud computing by public organizations, and to propose strategies that can assist these organizations on their path to cloud usage. To achieve this objective, a SWOT analysis was conducted, identifying internal factors (strengths and weaknesses) and external factors (opportunities and threats) that can influence the adoption of a governmental cloud. By applying a TOWS matrix, which combines the internal and external factors, a list of possible strategies has been formulated to guide decision-making related to the transition to a cloud environment.
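As a minimal illustration of the TOWS pairing step described above (the factor names are invented placeholders, not the thesis's findings):

```python
# TOWS matrix mechanics: each strategy quadrant pairs one class of internal
# factors with one class of external factors. Factor names are placeholders.
from itertools import product

internal = {
    "S": ["existing IT staff"],           # strengths
    "W": ["legacy procurement rules"],    # weaknesses
}
external = {
    "O": ["national cloud programme"],    # opportunities
    "T": ["data-sovereignty regulation"], # threats
}

QUADRANTS = {"SO": "maxi-maxi", "ST": "maxi-mini", "WO": "mini-maxi", "WT": "mini-mini"}

for (ik, factors_i), (ek, factors_e) in product(internal.items(), external.items()):
    for fi, fe in product(factors_i, factors_e):
        print(f"{ik}{ek} ({QUADRANTS[ik + ek]}): pair '{fi}' with '{fe}'")
```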