36 results for Time-sharing computer systems.


Abstract:

The Andorra family of languages (which includes the Andorra Kernel Language, AKL) is aimed, in principle, at simultaneously supporting the programming styles of Prolog and the committed-choice languages. On the other hand, AKL requires a somewhat detailed specification of control by the user. This could be avoided by writing programs in Prolog and running them on AKL. However, Prolog programs cannot be executed directly on AKL, owing to a number of factors, from more or less trivial syntactic differences to more involved issues such as the treatment of cut and making the exploitation of certain types of parallelism possible. This paper provides basic guidelines for constructing an automatic compiler of Prolog programs into AKL which can bridge those differences. In addition to supporting Prolog, our style of translation achieves independent and-parallel execution where possible, which is relevant because this type of parallel execution preserves, through the translation, the user-perceived "complexity" of the original Prolog program.
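
As a hedged illustration of the key idea (not the paper's translation scheme), the Python sketch below mimics independent and-parallelism: two subgoals that share no unbound variables are solved concurrently, and the combined answer set matches what a sequential left-to-right execution would produce. All names in it are hypothetical stand-ins.

    # Minimal illustration of independent and-parallelism: two subgoals that
    # share no unbound variables can be solved concurrently without changing
    # the set of answers. The goal/solver names below are hypothetical.
    from concurrent.futures import ProcessPoolExecutor

    def solve_left(x):
        # stand-in for solving p(X): enumerate candidate values
        return [v for v in range(x) if v % 2 == 0]

    def solve_right(y):
        # stand-in for solving q(Y), independent of p(X)'s bindings
        return [v for v in range(y) if v % 3 == 0]

    def conj_parallel(x, y):
        # run both independent subgoals at once; the cross product of their
        # answer sets equals sequential Prolog's answers for "p(X), q(Y)"
        with ProcessPoolExecutor(max_workers=2) as ex:
            left = ex.submit(solve_left, x)
            right = ex.submit(solve_right, y)
            return [(a, b) for a in left.result() for b in right.result()]

    if __name__ == "__main__":
        print(conj_parallel(6, 10))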

Abstract:

In an increasing number of applications (e.g., in embedded, real-time, or mobile systems) it is important or even essential to ensure conformance with respect to a specification expressing resource usages, such as execution time, memory, energy, or user-defined resources. In previous work we presented a novel framework for data size-aware, static resource usage verification. Specifications can include both lower- and upper-bound resource usage functions. In order to statically check such specifications, both upper- and lower-bound resource usage functions (on input data sizes) that approximate the actual resource usage of the program are automatically inferred and compared against the specification. The outcome of the static checking of assertions can express intervals for the input data sizes such that a given specification can be proved for some intervals but disproved for others. After an overview of the approach, this paper provides a number of novel contributions: we present a full formalization, and we report on and provide results from an implementation within the Ciao/CiaoPP framework (which provides a general, unified platform for static and run-time verification, as well as unit testing). We also generalize the checking of assertions to allow preconditions expressing intervals within which the input data size of a program is supposed to lie (i.e., intervals for which each assertion is applicable), and we extend the class of resource usage functions that can be checked.
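
For intuition only, the following sketch (with made-up bound functions, not CiaoPP output) shows how comparing inferred lower/upper bounds against a specified bound yields data-size intervals where an assertion is proved, disproved, or left unknown.

    # A toy sketch (not CiaoPP) of interval-based assertion checking: compare
    # inferred lower/upper resource bounds against a specified upper bound
    # and classify each input data size. All bound functions here are made up.
    def inferred_lb(n): return n * n            # inferred lower bound on cost
    def inferred_ub(n): return n * n + 2 * n    # inferred upper bound on cost
    def spec_ub(n):     return 50 * n           # specification: cost =< 50*n

    def classify(sizes):
        # proved: even the worst case fits the spec
        # disproved: even the best case exceeds the spec
        # unknown: the inferred bounds are too loose to decide
        out = {}
        for n in sizes:
            if inferred_ub(n) <= spec_ub(n):
                out[n] = "proved"
            elif inferred_lb(n) > spec_ub(n):
                out[n] = "disproved"
            else:
                out[n] = "unknown"
        return out

    # proved up to n = 48, unknown around the crossover, disproved beyond
    print(classify(range(40, 61, 2)))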

Abstract:

The aim of this work is to assess the benefits of using energy storage systems to increase the value of a wind farm. Owing to the random nature inherent in wind power generation, its grid integration raises significant problems that pose an ever greater technological challenge as the penetration of wind energy in the generation mix increases. In turn, the applications of energy storage systems represent an opportunity to mitigate the undesirable effects of wind power generation. The knowledge gained throughout this project is applied to determine which energy storage system is best suited for implementation in the wind farm under study, as well as to quantify the benefits derived from the hybridization.

Abstract:

Opportunities offered by high performance computing provide a significant degree of promise in the enhancement of the performance of real-time flood forecasting systems. In this paper, a real-time framework for probabilistic flood forecasting through data assimilation is presented. The distributed rainfall-runoff Real-time Interactive Basin Simulator (RIBS) model is selected to simulate the hydrological process in the basin. Although the RIBS model is deterministic, it is run in a probabilistic way using the results of a calibration carried out in previous work by the authors, which identified the probability distribution functions that best characterise the most relevant model parameters. Adaptive techniques improve the flood forecasts because the model can be adapted to observations in real time as new information becomes available. The new adaptive forecast model, based on genetic programming as a data assimilation technique, is compared with the previously developed flood forecast model based on the calibration results. Both models are probabilistic, as they generate an ensemble of hydrographs that takes into account the different uncertainties inherent in any forecast process. The Manzanares River basin was selected as a case study; the process is computationally intensive, as it requires the simulation of many ensemble replicas in real time.
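
The sketch below (a toy stand-in, not the RIBS code) illustrates the probabilistic mode of operation described above: each ensemble replica draws its parameters from calibrated distributions and runs the deterministic model once. The distribution choices and the miniature rainfall-runoff model are assumptions for illustration.

    # Schematic ensemble run: sample each calibrated parameter from its
    # fitted distribution and run the deterministic model once per replica,
    # yielding an ensemble of hydrographs.
    import random

    def rainfall_runoff(infiltration, routing_speed, rain):
        # stand-in for the deterministic model: returns a tiny "hydrograph"
        flow, out = 0.0, []
        for r in rain:
            flow = flow * routing_speed + max(r - infiltration, 0.0)
            out.append(flow)
        return out

    def run_ensemble(rain, replicas=100, seed=42):
        rng = random.Random(seed)
        ensemble = []
        for _ in range(replicas):
            infiltration = rng.lognormvariate(0.0, 0.3)      # fitted PDF (assumed)
            routing_speed = min(rng.gauss(0.6, 0.1), 0.95)   # fitted PDF (assumed)
            ensemble.append(rainfall_runoff(infiltration, routing_speed, rain))
        return ensemble

    hydrographs = run_ensemble([0.0, 5.0, 12.0, 7.0, 1.0])
    peaks = sorted(max(h) for h in hydrographs)
    print("~90% band for peak flow:", peaks[5], "-", peaks[94])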

Abstract:

The future Internet is expected to be composed of a mesh of interoperable web services accessible from all over the web. This approach has not yet caught on, since global user-service interaction is still an open issue. This paper presents a vision of the next-generation front-end Web 2.0 technology that will enable integrated access to services, contents and things in the future Internet. We illustrate how front-ends that wrap traditional services and resources can be tailored to the needs of end users, converting end users into prosumers (creators and consumers of service-based applications). To do this, we propose an architecture that end users without programming skills can use to create front-ends, consult catalogues of resources tailored to their needs, easily integrate and coordinate front-ends, and create composite applications to orchestrate services in their back-end. The paper includes a case study illustrating that current user-centred web development tools are at a very early stage of evolution, and we provide statistical data on how the proposed architecture improves on these tools. This paper is based on research conducted by the Service Front End (SFE) Open Alliance initiative.

Abstract:

This work proposes an encapsulation scheme aimed at simplifying the reuse process of hardware cores. This hardware encapsulation approach has been conceived with a twofold objective. First, we seek to improve the reuse interface associated with the hardware core description. This is carried out in a first encapsulation level by improving the limited types and configuration options available in the conventional HDL interface, and also by providing information related to the implementation itself. Second, we have devised a more generic interface, focused on describing the function while avoiding the details of any particular implementation, which corresponds to a second encapsulation level. This encapsulation allows the designer to define how to configure and use the design to implement a given functionality. The proposed encapsulation schemes help to increase the amount of information that can be supplied with the design, and also make it possible to automate the process of searching for, configuring and implementing diverse alternatives.
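
Purely as an illustrative sketch of the two encapsulation levels (the metadata fields and the FIFO example are our assumptions, not the paper's format), a reusable core might be described like this:

    # Toy model of the two encapsulation levels as reusable-core metadata.
    from dataclasses import dataclass, field

    @dataclass
    class Parameter:                  # richer than a bare HDL generic
        name: str
        type: str                     # e.g. "int", "enum"
        allowed: tuple                # legal values, checkable by tools
        default: object

    @dataclass
    class Level1Interface:            # implementation-level encapsulation
        hdl_entity: str
        parameters: list
        implementation_notes: str     # implementation info shipped with the core

    @dataclass
    class Level2Interface:            # function-level encapsulation
        function: str                 # what the core does, not how
        provided_by: list = field(default_factory=list)  # candidate implementations

    fifo_impl = Level1Interface(
        hdl_entity="sync_fifo",
        parameters=[Parameter("DEPTH", "int", (16, 32, 64), 32),
                    Parameter("WIDTH", "int", (8, 16, 32), 8)],
        implementation_notes="BRAM-based; 1-cycle read latency")

    fifo_func = Level2Interface(function="bounded FIFO queue",
                                provided_by=[fifo_impl])

    # a tool can now search by function and auto-configure an implementation
    match = next(i for i in fifo_func.provided_by if i.hdl_entity == "sync_fifo")
    print(match.parameters[0].allowed)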

Abstract:

This paper proposes a contactless transformer model based on Finite Element Analysis (FEA). The model can be used to simulate Inductive Coupling Power Transfer (ICPT) systems while representing the transformer with good accuracy, and it reduces the fabrication time of these systems. The model takes into account not only the geometry of the windings but also the frequency effects in them. As the transformer does not have a magnetic core, it is complicated to model because the flux spreads into the area around the windings. In order to obtain a very accurate model, it is necessary to use a 2D/3D field solver.
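
For example, once a 2D/3D field solver has extracted the self- and mutual inductances of the two windings, standard coupled-inductor identities yield the coupling coefficient and a simple equivalent circuit. The sketch below uses placeholder values, not results from the paper.

    # Post-processing sketch: derive the coupling coefficient and a
    # cantilever equivalent circuit from solver-extracted inductances.
    import math

    def coupling(L1, L2, M):
        # k = M / sqrt(L1*L2); coreless pads typically give k well below 1
        return M / math.sqrt(L1 * L2)

    def cantilever_model(L1, L2, M):
        k = coupling(L1, L2, M)
        return {
            "k": k,
            "L_leak": L1 * (1.0 - k ** 2),  # total leakage referred to primary
            "L_mag": L1 * k ** 2,           # magnetizing inductance
            "n": M / L2,                    # ideal-transformer ratio of this model
        }

    # example: inductances in henries, as a field solver might report them
    print(cantilever_model(L1=24e-6, L2=24e-6, M=7.2e-6))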

Abstract:

Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allow scientists to do new experiments faster, since the workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, we report in this paper on the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and Vistrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities that are observed in workflows (Data-Operation motifs), and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful for identifying the functionality of the steps in a given workflow, for developing best practices for workflow design, and for developing approaches for the automated generation of workflow abstractions.
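
A keyword-based labelling pass gives the flavour of such motif annotation; note that the keyword table, and the simplified motif list, are our assumptions rather than the paper's catalogue.

    # Toy annotation pass in the spirit of the motif analysis: label each
    # step of a workflow with a data-operation motif by keyword matching.
    MOTIF_KEYWORDS = {
        "Data Retrieval":     ("fetch", "download", "query"),
        "Data Preparation":   ("clean", "filter", "normalize", "trim"),
        "Data Analysis":      ("align", "classify", "predict", "blast"),
        "Data Visualization": ("plot", "render", "chart"),
    }

    def annotate(steps):
        labelled = []
        for step in steps:
            motif = next((m for m, kws in MOTIF_KEYWORDS.items()
                          if any(k in step.lower() for k in kws)), "Unclassified")
            labelled.append((step, motif))
        return labelled

    workflow = ["Query GenBank", "Trim low-quality reads",
                "BLAST alignment", "Plot hit distribution"]
    for step, motif in annotate(workflow):
        print(f"{step:28s} -> {motif}")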

Abstract:

The emergence of the Internet and computer systems marked a before and after in the way people access information systems. The exponential growth that followed has led to the current situation, in which virtually every area of everyday life is reflected on the Net. At the same time, as society moved into cyberspace, so did those seeking to profit criminally from the new media and tools placed at their disposal. Advancing by leaps and bounds in the development of techniques and methods to breach security systems that were still very immature, the so-called cybercriminals gained an advantage over the authorities, who were poorly prepared to tackle this new problem. Little by little, over the years, this gap has narrowed. Although much work remains to be done, and cybercrime rates keep growing at a frantic pace alongside the evolution and emergence of new techniques, governments and companies have become aware of the seriousness of the problem and have begun to devote substantial effort and investment to improving their defensive tools and prevention methods. This final-year project is devoted to researching and understanding all of these points, developing a specific view of each of them, with the ultimate aim of laying sufficient groundwork to tackle, with the required effectiveness, the work needed to pursue and eliminate the problem.

Abstract:

Verified security is an emerging methodology for building high-assurance proofs of security properties of computer systems. Computer systems are modelled as probabilistic programs, and rigorous program-semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs. Verified security has proved successful for modelling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class is that its notions are stated as a "similarity" condition between the output distributions of two probabilistic programs, where the similarity is quantified using some notion of distance between probability distributions. The class comprises prominent security notions from several areas, such as private data analysis, information flow analysis and cryptography. These include, for instance, indifferentiability, which enables an idealized component of a system to be securely replaced by a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years. The lack of rigorous techniques for verifying these properties is an important open problem that needs to be addressed. In this dissertation we introduce several quantitative program logics for reasoning about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is available through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the applicability of our approach by building fully machine-checked proofs for several systems that were out of reach of the verified security methodology, including, among others, a construction for building "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
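
Two standard examples of such distance-based similarity conditions (textbook definitions, quoted here for context rather than taken from the dissertation) are the statistical distance between two output distributions and the differential-privacy condition:

    \Delta(\mu_1, \mu_2) \;=\; \max_{E} \bigl| \Pr_{x \sim \mu_1}[x \in E] \;-\; \Pr_{x \sim \mu_2}[x \in E] \bigr|

    \Pr[A(D) \in S] \;\le\; e^{\varepsilon} \cdot \Pr[A(D') \in S] + \delta
    \qquad \text{for all adjacent } D, D' \text{ and all events } S.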

Abstract:

The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all of its aspects has not yet been presented. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, on the basis of which it defines a layered cloud monitoring architecture. To illustrate this architecture, we have implemented GMonE, a general-purpose cloud monitoring tool that covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid’5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, which surpasses the monitoring performance and capabilities of the alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.
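
As a rough sketch of what layer-aware monitoring involves (GMonE's actual plugin interface is not reproduced here; the layer names and probe callbacks below are assumptions), an agent might collect per-layer metrics like this:

    # Minimal layer-aware monitoring agent in the spirit of the paper's
    # layered architecture; all names and probes are illustrative stubs.
    import time

    class MonitoringAgent:
        def __init__(self):
            self.probes = {}          # layer name -> list of metric callbacks

        def register(self, layer, name, fn):
            self.probes.setdefault(layer, []).append((name, fn))

        def sample(self):
            ts = time.time()
            return [
                {"ts": ts, "layer": layer, "metric": name, "value": fn()}
                for layer, probes in self.probes.items()
                for name, fn in probes
            ]

    agent = MonitoringAgent()
    agent.register("infrastructure", "host_load", lambda: 0.42)  # stub probe
    agent.register("vm",             "vm_mem_mb", lambda: 512)   # stub probe
    agent.register("application",    "req_per_s", lambda: 130)   # stub probe

    for record in agent.sample():
        print(record)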

Abstract:

This work was carried out within the framework of the EURECA (Enabling information re-Use by linking clinical REsearch and Care) and INTEGRATE (Integrative Cancer Research Through Innovative Biomedical Infrastructures) projects, in which the Biomedical Informatics Group (GIB) of the UPM collaborates with other European universities and healthcare institutions. Both projects develop services and infrastructures with the main goal of storing clinical information from diverse sources (e.g., hospital electronic health records, clinical trials or biomedical research articles) in a common, easily accessible and queryable form, in order to support collaborative medical research across institutions. This is the core idea of semantic interoperability, on which both projects focus and which is key to the correct operation of their software: data are exchanged under a shared, common and unambiguous representation model in which every clinical concept, term or data item has a single form of representation. This enables knowledge inference and fits naturally into the context of medical research. The tool developed in this work is likewise oriented towards maximizing semantic interoperability, as it loads clinical information in a standardized format into a common data model implemented in relational databases. The work was carried out between 3 February and 6 June 2014, following a waterfall life cycle to organize the project's tasks, so that no phase began until the previous one had been completed, reviewed and accepted, with the exception of the documentation task (producing this report), which proceeded in parallel with all the others.
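
A minimal sketch of the loading idea, assuming a toy schema rather than the project's actual common data model: standardized records, already mapped to a shared terminology, are inserted into one relational table so heterogeneous sources can be queried uniformly.

    # Toy loader for a common data model (schema and codes are assumptions).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE observation (
            patient_id   INTEGER,
            source       TEXT,    -- EHR, clinical trial, article, ...
            concept_code TEXT,    -- one canonical code per clinical concept
            value        REAL,
            unit         TEXT
        )
    """)

    def load(records):
        # each record has already been mapped to the shared terminology upstream
        conn.executemany(
            "INSERT INTO observation VALUES (:patient, :source, :code, :value, :unit)",
            records)

    load([{"patient": 1, "source": "EHR",   "code": "LOINC:718-7",
           "value": 13.2, "unit": "g/dL"},
          {"patient": 1, "source": "trial", "code": "LOINC:718-7",
           "value": 12.8, "unit": "g/dL"}])

    # one query now spans data that originated in different systems
    for row in conn.execute("SELECT source, value FROM observation "
                            "WHERE concept_code = 'LOINC:718-7'"):
        print(row)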

Abstract:

Society depends on technology today more than ever, yet investment in security remains scarce and the risks of using computer systems grow by the day. Cryptography is one of the cornerstones of security, so a considerable amount of resources has recently been devoted to developing tools that help evaluate and improve cryptographic algorithms. EasyCrypt is one of these systems, developed recently at the IMDEA Software Institute in response to the growing need for reliable cryptography verification tools. This work addresses the design and implementation of additional functionality for EasyCrypt. The first part of the document discusses the importance of having a way to specify the cost of algorithms when developing proofs that depend on it, and modifies the EasyCrypt language so that the user can tackle a wider range of problems. The second part addresses EasyCrypt's usability and improves it as far as possible by developing a web interface that makes the system easy to use without having to install the whole EasyCrypt toolchain.

Abstract:

This document aims to provide an overview of the set of tools currently available for the analysis and exploitation of vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, the set of free software tools currently offered for analysing and detecting vulnerabilities in computer systems is described analytically: their operation, their options and the motivation for using them, comparing them with other tools in some cases, describing their differences in others, and justifying their choice in all of them. On the other hand, these tools are put to use in order to develop concrete usage examples with different parameter selections, observing their behaviour and trying to discern which data are useful for obtaining information about the vulnerabilities present in the system. In addition, a practical case is developed that puts the theoretical knowledge into practice, so that readers can consolidate what they have learned by verifying, through a real case, the usefulness of the tools described. The results show that vulnerability analysis and detection by a competent system administrator provides the organization with a set of techniques for improving its computer security and thereby avoiding problems with potential attackers.
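
As a minimal, hedged illustration of the kind of probe the surveyed tools automate and enrich (standard-library Python only; scan only systems you are authorized to test):

    # Deliberately minimal TCP connect scan.
    import socket

    def scan(host, ports, timeout=0.5):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                # connect_ex returns 0 when the TCP handshake succeeds
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    if __name__ == "__main__":
        print(scan("127.0.0.1", [22, 80, 443, 8080]))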

Abstract:

The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies such as cooperative learning and project-based learning. This study was motivated by a comparison of the results obtained after applying Cooperative Learning (CL) and Project-Based Learning (PBL) to a Computer Engineering subject. The fundamental hypothesis tested was whether the academic success achieved by first-year students was higher when CL was applied than when PBL was applied. A practical case, by means of which the effectiveness of CL and PBL is compared, is presented in this work. The study was carried out at the Universidad Politécnica de Madrid, where these mechanisms were applied to the Operating Systems I subject of the Technical Engineering in Computer Systems degree (OSIS) and to the same subject of the Technical Engineering in Computer Management degree (OSIM). Both subjects have the same syllabus, are taught in the same year and semester, and also share the same formative objectives. From this study we can conclude that students' academic performance (as measured by the grades awarded) is greater with PBL than with CL. To be more specific, the difference is between 0.5 and 1 point for the individual tests and between 2.5 and 3 points for the group tests. This study therefore refutes the fundamental hypothesis formulated at the outset. Some possible interpretations of these results are discussed in the study.