937 results for parallel computer systems
Abstract:
Infrastructure as a Service clouds are a flexible and fast way to obtain (virtual) resources as demand varies. Grids, on the other hand, are middleware platforms able to combine resources from different administrative domains for task execution. Clouds can be used by grids as providers of resources such as virtual machines, so that grids use only the resources they need. But this requires grids to be able to decide when to allocate and release those resources. Here we introduce, and analyze by means of simulations, an economic mechanism (a) to set resource prices and (b) to decide when to scale resources depending on the users’ demand. This system has a strong emphasis on fairness, so that no user hinders the execution of other users’ tasks by getting too many resources. Our simulator is based on the well-known GridSim software for grid simulation, which we expand to simulate infrastructure clouds. The results show how the proposed system can successfully adapt the amount of allocated resources to the demand, while at the same time ensuring that resources are fairly shared among users.
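To make the pricing-and-scaling idea concrete, here is a minimal sketch (our own illustration, not the mechanism evaluated in the paper; all names, formulas and numbers are assumptions): a utilization-driven price signal decides when to lease or release virtual machines, and a per-user fair-share cap prevents any single user from monopolizing the pool.

```python
# Hypothetical sketch (not the paper's actual algorithm): a utilization-driven
# price per VM, plus a per-user cap so no single user monopolizes the pool.

def price(used, capacity, base=1.0, alpha=2.0):
    """Price per VM-hour grows with current pool utilization."""
    utilization = used / capacity if capacity else 1.0
    return base * (1.0 + alpha * utilization)

def scaling_decision(pending_tasks, used, capacity, budget_per_task, user_share, fair_cap):
    """Lease one more VM if demand justifies the current price and the
    requesting user is still below its fair share; release when idle."""
    p = price(used, capacity)
    if pending_tasks > 0 and budget_per_task >= p and user_share < fair_cap:
        return "lease"
    if pending_tasks == 0 and used > 0:
        return "release"
    return "hold"

# Example: 8 of 10 VMs busy, tasks queued, user already holds 40% of the pool.
print(scaling_decision(pending_tasks=5, used=8, capacity=10,
                       budget_per_task=3.0, user_share=0.4, fair_cap=0.25))
# -> "hold": the price is affordable, but the fairness cap blocks further leasing.
```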
Abstract:
Distributed parallel execution systems speed up applications by splitting tasks into processes whose execution is assigned to different receiving nodes in a high-bandwidth network. On the distributing side, a fundamental problem is grouping and scheduling such tasks so that each one involves sufficient computational cost when compared to the task creation and communication costs and other such practical overheads. On the receiving side, an important issue is to have some assurance of the correctness and characteristics of the code received, and also of the kind of load the particular task is going to pose, which can be specified by means of certificates. In this paper we present, in a tutorial way, a number of general solutions to these problems, and illustrate them through their implementation in the Ciao multi-paradigm language and program development environment. This system includes facilities for parallel and distributed execution, an assertion language for specifying complex program properties (including safety and resource-related properties), and compile-time and run-time tools for performing automated parallelization and resource control, as well as certification of programs with resource consumption assurances and efficient checking of such certificates.
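The granularity-control problem on the distributing side can be illustrated with a small sketch (a generic illustration in Python, not Ciao code and not the system's actual heuristics): a task is only shipped to a remote worker when its estimated cost exceeds the combined task-creation and communication overheads.

```python
# Hedged sketch of the general granularity-control idea (not Ciao code): only
# offload a task when its estimated cost exceeds the spawn and communication costs.

def should_offload(estimated_task_cost, spawn_overhead, bytes_to_move, cost_per_byte):
    """Return True when remote execution is expected to pay off."""
    comm_cost = bytes_to_move * cost_per_byte
    return estimated_task_cost > spawn_overhead + comm_cost

# A cheap task stays local; an expensive one is worth distributing.
print(should_offload(estimated_task_cost=5.0,   spawn_overhead=2.0,
                     bytes_to_move=1_000, cost_per_byte=0.01))   # False
print(should_offload(estimated_task_cost=500.0, spawn_overhead=2.0,
                     bytes_to_move=1_000, cost_per_byte=0.01))   # True
```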
Abstract:
The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on since global user-service interaction is still an open issue. This paper states one vision with regard to next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. In this paper, we illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution. We provide statistical data on how the proposed architecture improves these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.
Abstract:
Future high-quality consumer electronics will contain a number of applications running in a highly dynamic environment, and their execution will need to be efficiently arbitrated by the underlying platform software. The multimedia applications that currently execute in such contexts face frequent run-time variations in their resource demands, caused by the greedy nature of multimedia processing itself. Changes in resource demands are triggered by numerous reasons (e.g. a switch in the input media compression format). Such situations require real-time adaptation mechanisms to adjust the system operation to the new requirements, and this must be done seamlessly to preserve the user experience. One solution for efficiently managing application execution is to apply quality-of-service resource management techniques, based on assigning and enforcing resource contracts for applications. Most resource management solutions provide temporal isolation by enforcing resource assignments and avoiding any resource overruns. However, this clearly limits cost-effective resource usage. This paper presents a simple priority assignment scheme based on uniform priority bands that allows greedy multimedia tasks to incur safe overruns, which increase resource usage without threatening the timely execution of non-overrunning tasks. Experimental results show that the proposed priority assignment scheme, in combination with a resource accounting mechanism, preserves timely multimedia execution and delivery, achieves more cost-effective processor usage, and guarantees the execution isolation of non-overrunning tasks.
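As an illustration of the kind of scheme the abstract describes (an assumed minimal structure, not the paper's exact priority bands or accounting implementation): a task keeps its granted band while within its budget and is demoted to a lower overrun band once its accounted time exceeds the budget, so its overrun cannot delay tasks that stay within their contracts.

```python
# Minimal sketch (assumed structure, not the paper's exact scheme): demote only
# the overrunning task to a lower priority band; others keep their granted band.

GRANTED_BAND = 10      # band for tasks within budget (hypothetical values)
OVERRUN_BAND = 1       # lower band for tasks that exceeded their budget

class Task:
    def __init__(self, name, budget_ms):
        self.name = name
        self.budget_ms = budget_ms
        self.used_ms = 0

    def account(self, consumed_ms):
        """Resource accounting: accumulate CPU time used in the current period."""
        self.used_ms += consumed_ms

    def priority(self):
        """Uniform band assignment: overrun is 'safe' because it runs below
        all non-overrunning tasks."""
        return GRANTED_BAND if self.used_ms <= self.budget_ms else OVERRUN_BAND

decoder = Task("video_decoder", budget_ms=20)
decoder.account(27)                 # greedy multimedia task overruns its budget
print(decoder.priority())           # -> 1: demoted, other tasks keep band 10
```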
Abstract:
Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make the management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and specific features that usually require specialized administrator skills. Autonomic computing can help manage this complexity. The present paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently alleviating the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system is not usually required immediately. Therefore, performance improvements are related not only to the current I/O access but also to future ones, as the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Thus, our approach proposes a long-term prediction designed to forecast the future workload of grid components. This enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
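A minimal sketch of the long-term-prediction idea (illustrative only; the smoothing model, parameter names and numbers are assumptions, not the paper's predictor): forecast each node's future I/O load from its observed history and place new data on the node with the lowest predicted load.

```python
# Illustrative sketch only (not the paper's code): exponentially smoothed
# per-node workload forecast used to pick the placement with the lowest
# *predicted* future I/O load.

def forecast(history, alpha=0.3):
    """Exponentially smoothed prediction of a node's future I/O load."""
    level = history[0]
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

def choose_placement(load_history_per_node):
    """Place the next data set on the node with the lowest predicted load."""
    return min(load_history_per_node,
               key=lambda node: forecast(load_history_per_node[node]))

history = {
    "node-a": [120, 150, 170, 200],   # MB/s observed per monitoring interval
    "node-b": [180, 160, 140, 110],
}
print(choose_placement(history))      # -> "node-b": its load is trending down
```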
Abstract:
This work proposes an encapsulation scheme aimed at simplifying the reuse process of hardware cores. This hardware encapsulation approach has been conceived with a twofold objective. First, we seek to improve the reuse interface associated with the hardware core description. This is carried out in a first encapsulation level by improving the limited types and configuration options available in the conventional HDL interface, and also by providing information related to the implementation itself. Second, we have devised a more generic interface focused on describing the function while avoiding details of a particular implementation, which corresponds to a second encapsulation level. This encapsulation allows the designer to define how to configure and use the design to implement a given functionality. The proposed encapsulation schemes help increase the amount of information that can be supplied with the design, and also allow the process of searching, configuring and implementing diverse alternatives to be automated.
Abstract:
Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allow scientists to do new experiments faster, since the workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and Vistrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities that are observed in workflows (Data-Operation motifs), and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful to identify the functionality of the steps in a given workflow, to develop best practices for workflow design, and to develop approaches for automated generation of workflow abstractions.
Abstract:
The emergence of the Internet and computer systems marked a turning point in the way people access information systems. The exponential growth of the following years has carried this fact through to the current situation, where virtually every area of everyday life is reflected on the Net. On the other hand, as society moved into cyberspace, so did those seeking to profit criminally from the new media and tools at their disposal. Advancing rapidly in the development of techniques and methods to breach still very immature security systems, so-called cybercriminals gained an advantage over the authorities and their limited preparation to deal with this new problem. Gradually, over the years, this gap has narrowed, and although there is still much work to do, and the growth of cybercrime rates, together with the evolution and emergence of new techniques, continues at a furious pace, governments and companies have become aware of the seriousness of the problem and have begun to devote considerable effort and investment to improving their tools and prevention methods to combat it. This final-year project is devoted to the research and understanding of all these points, developing a specific view of each of them, with the ultimate aim of establishing a sufficient basis for tackling, with the required effectiveness, the work needed to pursue and eliminate the problem.
Abstract:
Verified security is an emerging methodology for building high-assurance proofs about security properties of computer systems. Computer systems are modeled as probabilistic programs, and rigorous program-semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class of security notions is that they are stated as a “similarity” condition between the output distributions of two probabilistic programs, and this similarity is quantified using some notion of distance between probability distributions. This class comprises prominent security notions from multiple areas such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years. The lack of rigorous techniques for verifying these properties is thus an important problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is, in particular, a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is also available through an extension of CertiCrypt, a framework to verify cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of the reach of the verified security methodology. These comprise, among others, a construction to build “safe” hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
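As one standard example of the "approximate" notions this class contains (the textbook definition, reproduced here only for illustration, not quoted from the thesis), (ε, δ)-differential privacy bounds how far apart the output distributions of a randomized mechanism M may be on any two adjacent databases d and d':

```latex
% Standard (epsilon, delta)-differential privacy: a "similarity" condition
% between the output distributions of M on adjacent inputs d and d'.
\[
  \Pr[M(d) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(d') \in S] + \delta
  \qquad \text{for every measurable set } S \text{ of outputs.}
\]
```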
Abstract:
The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as the state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not been presented yet. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, based on which it defines a layered cloud monitoring architecture. To illustrate it, we have implemented GMonE, a general-purpose cloud monitoring tool which covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid’5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.
Abstract:
Today, society depends on technology more than ever, but investment in security is still scarce and the risks of using computer systems are constantly increasing. Cryptography is one of the cornerstones of security, so a considerable amount of effort has recently been devoted to the development of tools for the evaluation and improvement of cryptographic algorithms. One of these tools is EasyCrypt, developed recently at the IMDEA Software Institute in response to the growing need for reliable cryptography verification tools. Throughout this document we design and implement two additional EasyCrypt features. In the first part of the document we discuss the importance of having a way to specify the cost of algorithms in order to develop proofs that depend on it, and we modify the EasyCrypt language so that the user can tackle a wider range of problems. In the second part we address EasyCrypt's usability and try to improve it as far as possible by developing a web interface that enables the system to be used easily, without having to install the whole EasyCrypt toolchain.
Abstract:
This document aims to provide an overview of the set of tools available for the analysis and exploitation of vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, we analytically describe the set of free software tools currently available to analyze and detect vulnerabilities in computer systems. We describe the operation, options, and motivation for using these tools, comparing them with others in some cases, describing their differences in others, and justifying their selection in all of them. On the other hand, we use the analyzed tools to develop concrete examples of use with different selected parameters, observing their behavior and trying to discern which data are useful for obtaining information about the vulnerabilities existing in the system. In addition, we have developed a practical case that puts the presented theoretical knowledge into practice, so that the reader is able to consolidate what has been learned by verifying, through a real case, the usefulness of the tools previously described. The results obtained have shown that vulnerability analysis and detection carried out by a competent system administrator provides an organization with a set of techniques to improve its computer security and thus avoid problems with potential attackers.
Abstract:
The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies such as cooperative learning and project-based learning. This study was motivated by the comparison of the results obtained after applying Cooperative Learning (CL) and Project Based Learning (PBL) to a subject of Computer Engineering. The fundamental hypothesis tested was whether the academic success achieved by students in the first years was higher when CL was applied than in those cases to which PBL was applied. A practical case, by means of which the effectiveness of CL and PBL are compared, is presented in this work. This study has been carried out at the Universidad Politécnica de Madrid, where these mechanisms have been applied to the Operating Systems I subject from the Technical Engineering in Computer Systems degree (OSIS) and to the same subject from the Technical Engineering in Computer Management degree (OSIM). Both subjects have the same syllabus, are taught in the same year and semester, and also share formative objectives. From this study we can conclude that students' academic performance (regarding the grades given) is greater with PBL than with CL. To be more specific, the difference is between 0.5 and 1 point for the individual tests and between 2.5 and 3 points for the group tests. Therefore, this study refutes the fundamental hypothesis formulated at the beginning. Some possible interpretations of these results are discussed in this study.
Abstract:
Parallel processing systems require complex interconnection networks. In order to obtain fast and flexible communications at a reasonable cost, different types of networks have been studied in the past. None of them can be considered the best. The cost-effectiveness of a particular network design depends on several factors that will not be treated here. Nevertheless, the basic device that configures an interconnection network can be the same for most of them. Accordingly, an Optical Interconnection Network made with Holographic Optical Elements (HOEs) is presented. The HOE recording method used presents special characteristics that are described. Perfect Shuffle and Banyan networks have been implemented.
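For reference, the Perfect Shuffle connection pattern that such an optical stage realizes can be written down in a few lines (a generic illustration, unrelated to the holographic recording process itself): with N = 2**n ports, input i is connected to the output whose index is the one-bit left rotation of i.

```python
# Small illustration (not from the paper): the perfect-shuffle permutation.
# For N = 2**n ports, the output port is the left rotation by one bit of the
# input port's n-bit index.

def perfect_shuffle(i, n_bits):
    """Cyclically rotate the n_bits-wide index i left by one position."""
    msb = (i >> (n_bits - 1)) & 1
    return ((i << 1) & ((1 << n_bits) - 1)) | msb

N_BITS = 3  # an 8-port stage
print([perfect_shuffle(i, N_BITS) for i in range(1 << N_BITS)])
# -> [0, 2, 4, 6, 1, 3, 5, 7]: input i goes to output (2*i) mod (N-1) for 0 < i < N-1
```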
Abstract:
A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analysis and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and the need to solve issues about the energy footprint of the highly demanding computing facilities. This paper proposes a novel and evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the Cloud; (iv) tackles the energy-optimization issue as a first-class requirement, taking it into account during the whole development cycle. The novel computing concept and the multi-layer top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.