931 results for User interfaces (Computer systems)


Relevance:

100.00%

Publisher:

Abstract:

BioMet®Tools is a set of software applications developed for the biometrical characterization of voice in different fields, such as voice quality evaluation in laryngology, speech therapy and rehabilitation, education of the singing voice, forensic voice analysis in court, emotion detection from voice, and secure access to facilities and services. It was initially conceived as plain research code to estimate the glottal source from voice and to obtain the biomechanical parameters of the vocal folds from the spectral density of that estimate. This code grew into what is now the Glottex®Engine package (G®E). Further demands from users in the medical and forensic fields prompted the development of different graphical user interfaces (GUIs) to encapsulate user interaction with the G®E. This required the custom design of different GUIs handling the same G®E, which saved development costs and time. The development model is described in detail, leading to commercial production and distribution, and case studies from its application to laryngology and speech therapy are given and discussed.
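The core signal-processing step mentioned above, estimating the glottal source and examining its spectral density, can be sketched as follows. This is a generic inverse-filtering illustration and an assumption on my part, not the proprietary Glottex®Engine pipeline; parameters are illustrative.

```python
# Sketch: LPC inverse filtering to approximate the glottal source of a
# sustained vowel segment, then Welch PSD of the estimate.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter, welch

def glottal_source_psd(voice, fs, order=18):
    voice = np.asarray(voice, dtype=float)
    # Autocorrelation-method LPC coefficients of the voiced frame.
    r = np.correlate(voice, voice, mode="full")[len(voice) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    # Inverse filter A(z) = 1 - sum(a_k z^-k) removes the vocal-tract
    # resonances, leaving a rough glottal-source residual.
    residual = lfilter(np.concatenate(([1.0], -a)), [1.0], voice)
    return welch(residual, fs=fs, nperseg=512)  # (freqs, PSD estimate)
```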

Relevance:

100.00%

Publisher:

Abstract:

Data grid services have been used to deal with the increasing needs of applications in terms of data volume and throughput. The large scale, heterogeneity and dynamism of grid environments often make management and tuning of these data services very complex. Furthermore, current high-performance I/O approaches are characterized by their high complexity and by specific features that usually require specialized administrator skills. Autonomic computing can help manage this complexity. This paper describes an autonomic subsystem intended to provide self-management features aimed at efficiently reducing the I/O problem in a grid environment, thereby enhancing the quality of service (QoS) of data access and storage services in the grid. Our proposal takes into account that data produced in an I/O system are not usually required immediately; performance improvements therefore concern not only current but also future I/O accesses, since the actual data access usually occurs later on. Nevertheless, the exact time of the next I/O operations is unknown. Our approach thus proposes a long-term prediction designed to forecast the future workload of grid components, which enables the autonomic subsystem to determine the optimal data placement to improve both current and future I/O operations.
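The abstract does not spell out the forecasting model, so the following is only a plausible sketch of a long-term workload predictor guiding data placement, using double exponential smoothing; names and parameters are illustrative assumptions.

```python
# Sketch: forecast each grid component's I/O load several steps ahead
# and place new data on the component with the lowest predicted load.
def holt_forecast(series, horizon, alpha=0.5, beta=0.3):
    # Double exponential smoothing (level + trend), series length >= 2.
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev) + (1 - beta) * trend
    return level + horizon * trend

def pick_placement(load_history, horizon=60):
    # load_history: {component_name: [observed I/O load samples]}
    return min(load_history,
               key=lambda c: holt_forecast(load_history[c], horizon))
```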

Relevance:

100.00%

Publisher:

Abstract:

This work proposes an encapsulation scheme aimed at simplifying the reuse of hardware cores. The approach has been conceived with a twofold objective. First, we seek to improve the reuse interface associated with the hardware core description: a first encapsulation level improves on the limited types and configuration options available in conventional HDL interfaces and also provides information related to the implementation itself. Second, we have devised a more generic interface focused on describing the function while avoiding details of any particular implementation, which corresponds to a second encapsulation level. This encapsulation allows the designer to define how to configure and use the design to implement a given functionality. The proposed encapsulation schemes increase the amount of information that can be supplied with the design and also make it possible to automate the process of searching for, configuring and implementing diverse alternatives.
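A hypothetical illustration of the two encapsulation levels follows; the field names and structure are assumptions for exposition, not the paper's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FunctionalSpec:
    """Level 2: what the core does, independent of any implementation."""
    function: str              # e.g. "FIR filter"
    parameters: dict           # e.g. {"taps": 16, "data_width": 12}

@dataclass
class CoreEncapsulation:
    """Level 1: how one concrete implementation is configured and used."""
    hdl_entity: str            # entity/module name in the HDL sources
    generics: dict             # richer typed configuration options
    implementation_info: dict  # e.g. area/frequency figures, target device
    spec: Optional[FunctionalSpec] = None  # link to the generic description
```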

Relevance:

100.00%

Publisher:

Abstract:

This work presents a study of the operation and applications of PEM (proton exchange membrane) fuel cells fed with pure hydrogen and with oxygen obtained from compressed air. After evaluating the process of these cells and the variables involved in it, such as pressure, humidity and temperature, a range of methods is presented for instrumenting those variables, together with methods and systems to stabilize and control them around the optimal values for best process efficiency. Taking the process temperature as the main controlled variable, which must be kept close to 80 degrees Celsius, a model of the heating process and of the temperature evolution as a function of the resistive heater power is derived in the complex frequency domain, and a measurement system is implemented using type-K thermocouple sensors, which have an almost linear response. The signal measured by the sensors is differentially amplified with INA2126 instrumentation amplifiers, and a cold-junction error correction algorithm is developed (this error is produced by the additional connector metals taking part in the thermocouple effect). Test data for the temperature measurement system are included, together with the deviations from the ideal measurement values. For data acquisition and for implementing the control algorithms, a PC running National Instruments LabVIEW is used, which allows intuitive, versatile and visual programming and the creation of simple graphical user interfaces. The instrumentation and control hardware of the cell is connected to the PC through an NI 6800 USB data acquisition interface, which provides a large number of analog inputs and outputs. Once the samples of the measured signal are digitized and the cold-junction error corrected, a PID (proportional-integral-derivative) controller is implemented in software; it is one of the most suitable methods for controlling this kind of variable thanks to its programming simplicity and effectiveness. To evaluate the behavior of the system, simulations in Matlab and Simulink are presented, determining the best strategies for the PID control as well as the expected outcomes of the process. For the fluid heating system, a resistive heater element is used, whose power is controlled by an electronic circuit comprising a zero-crossing detector for the AC supply wave and a TRIAC with its drive circuit. The instrumentation of the gas pressure in the circuit, a variable that stays close to 3 atmospheres, is handled analogously: a pressure sensor with a 4-20 mA current-loop output is used, followed by a simple current-to-voltage converter at the input of the data acquisition system. The schematic and the components required to route, heat and humidify the gases used in the process are presented, together with the placement of the sensors and actuators. Finally, the work lists the algorithms developed and includes an appendix with information about LabVIEW.
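A minimal sketch of the control chain described above, assuming illustrative gains and sensitivities (the actual system is implemented in LabVIEW): type-K cold-junction correction, a discrete PID law producing the heater power fraction for the TRIAC firing circuit, and 4-20 mA loop conversion for the pressure sensor.

```python
K_UV_PER_C = 41.0  # approximate type-K sensitivity (uV/C), linear region

def corrected_temp_c(v_thermocouple_uv, t_cold_junction_c):
    # Cold-junction correction: add back the voltage contribution of the
    # connector-block junction before converting to temperature.
    v_total = v_thermocouple_uv + K_UV_PER_C * t_cold_junction_c
    return v_total / K_UV_PER_C

def pressure_atm(i_loop_ma, full_scale_atm=5.0):
    # 4-20 mA loop: 4 mA -> 0, 20 mA -> full scale (range is an assumption).
    return (i_loop_ma - 4.0) / 16.0 * full_scale_atm

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        u = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to [0, 1]: fraction of heater power the TRIAC circuit
        # should deliver over the next control period.
        return min(max(u, 0.0), 1.0)

pid = PID(kp=0.8, ki=0.05, kd=0.2, dt=1.0)  # placeholder gains
power = pid.update(80.0, corrected_temp_c(2255.0, 25.0))
```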

Relevance:

100.00%

Publisher:

Abstract:

Workflow technology continues to play an important role as a means for specifying and enacting computational experiments in modern science. Reusing and re-purposing workflows allows scientists to run new experiments faster, since workflows capture useful expertise from others. As workflow libraries grow, scientists face the challenge of finding workflows appropriate for their task, understanding what each workflow does, and reusing relevant portions of a given workflow. We believe that workflows would be easier to understand and reuse if high-level views (abstractions) of their activities were available in workflow libraries. As a first step towards obtaining these abstractions, this paper reports the results of a manual analysis performed over a set of real-world scientific workflows from Taverna, Wings, Galaxy and Vistrails. Our analysis has resulted in a set of scientific workflow motifs that outline (i) the kinds of data-intensive activities observed in workflows (Data-Operation motifs), and (ii) the different manners in which activities are implemented within workflows (Workflow-Oriented motifs). These motifs are helpful for identifying the functionality of the steps in a given workflow, for developing best practices for workflow design, and for developing approaches for the automated generation of workflow abstractions.

Relevance:

100.00%

Publisher:

Abstract:

SSR stands for SoundScape Renderer, a tool for real-time spatial audio reproduction providing a variety of rendering algorithms, written mostly in C++. The program lets the user listen to previously recorded or live sounds that are perceived, from the listener's point of view, as if they came from a chosen point in space; the source can also change position and move, all in real time. This is achieved without modifying the sound at recording time: on playback, the program computes the variations needed so that the sound reaches the listener as if it were actually generated at a given point in space, or as close to that as possible; the sensation of movement is simply that effect with a changing position. SSR can be used through a graphical interface written in Qt, but it also exposes a network interface, which exchanges XML messages, for external applications; a good example is the already working Android client. To use the application, it is run with an audio source and the desired environment loaded so that the renderer knows what to do; at that moment the server binds and anyone can use the network interface. Since the network interface is documented, anyone can build an application that interacts with it, so SSR can have as many user interfaces as desired. This project deals neither with audio rendering nor with the reproduction of spatial audio itself; as its title, "Distributed Web Interface for Real-Time Spatial Audio Reproduction System", indicates, it aims only to offer a web interface for SSR, allowing more types of interfaces and communication rather than replacing the existing graphical interface. The idea was to create a web application based on the HTML5 Canvas that communicates with SSR as a remote user interface. This solves the compatibility problems, since any device able to display web pages, for example a Windows system or a mobile phone with a browser, can run an application based on web standards. WebSocket was chosen as the protocol because it is an HTML5 protocol and offers the latency "guarantees" that an application needing real-time information requires: asynchronous full-duplex communication with little payload, which is exactly what plain HTML polling fails to provide. The problem that arose is that the program's existing network interface is not compatible with WebSocket, because of the mandatory initial handshake the protocol performs, so another network interface was needed, and JSON was chosen as the format for message interchange. In the end the project comprises not only the Canvas-based web application but also a working server and the definition of a new network interface, with its protocol, so that new user interfaces can communicate with the server and new servers can communicate with the user interface.
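A minimal sketch of the kind of JSON-over-WebSocket exchange the new interface enables; the message fields are invented for illustration and do not reproduce the protocol actually defined in the project (handler signature assumes the `websockets` package, version 11 or later).

```python
# Sketch: a WebSocket endpoint receiving JSON "source position" commands,
# e.g. from an HTML5 Canvas client dragging a sound source.
import asyncio
import json
import websockets  # pip install websockets

async def handle(ws):
    async for raw in ws:
        msg = json.loads(raw)
        if msg.get("type") == "source-position":
            # Here the command would be forwarded to the renderer's
            # network interface.
            print(f"source {msg['id']} -> ({msg['x']}, {msg['y']})")

async def main():
    async with websockets.serve(handle, "localhost", 8765):
        await asyncio.Future()  # serve until cancelled

if __name__ == "__main__":
    asyncio.run(main())
```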

Relevance:

100.00%

Publisher:

Abstract:

The emergence of the Internet and computer systems marked a before and after in the way people access information systems. The exponential growth of the following years has led to the current situation, in which virtually every area of everyday life is reflected on the Net. At the same time, as society moved into cyberspace, so did those seeking criminal profit from the new media and tools at their disposal. Making great strides in the development of techniques and methods to breach still very immature security systems, the so-called cybercriminals gained an advantage over the authorities and their limited preparation for this new problem. Little by little, over the years, this distance has narrowed, and although much work remains to be done, and cybercrime rates keep growing at a frantic pace alongside the evolution and appearance of new techniques, governments and companies have become aware of the seriousness of the problem and have begun to devote substantial effort and investment to improving their means of response and their prevention methods. This final-year project is devoted to researching and understanding all of these points, developing a specific view of each of them, with the ultimate aim of laying the groundwork needed to tackle, with the required effectiveness, the work of pursuing and eliminating the problem.

Relevance:

100.00%

Publisher:

Abstract:

Verified security is an emerging methodology for building high-assurance proofs of security properties of computer systems, notable for the strong correctness guarantees it provides. Computer systems are modeled as probabilistic programs, and rigorous techniques based on program semantics are used to prove that they satisfy a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal security proofs whose correctness is mechanically checked by computer. Verified security has proved very effective for modeling and reasoning about several standard security notions in cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. The distinguishing characteristic of this class is that its notions are stated as a "similarity" condition between the output distributions of two probabilistic programs, with the similarity quantified by some notion of distance between probability distributions. The class comprises prominent security notions from several areas, such as private data analysis, information-flow analysis and cryptography. Representative examples are indifferentiability, which enables an idealized component of a system to be securely replaced by a concrete implementation (without significantly altering its security properties), and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in recent years and aims to prevent the disclosure of confidential data. The lack of rigorous techniques for formally verifying these properties is an important open problem that needs to be addressed. In this dissertation we introduce several quantitative program logics to reason about this class of security notions. Our main theoretical contribution is a quantitative variant of a full-fledged relational Hoare logic for probabilistic programs. The soundness of these logics is fully formalized in the Coq proof assistant, and tool support is provided through an extension of CertiCrypt, a framework for verifying cryptographic proofs in Coq. We validate the effectiveness and applicability of our approach by building fully machine-checked proofs for several systems that were out of reach of verified security, including, among others, a construction for building "safe" hash functions into elliptic curves and differentially private algorithms for several combinatorial optimization problems from the recent literature.
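For orientation, the standard definition of ε-differential privacy (quoted here as background, not from the dissertation itself) shows the "similarity between output distributions" shape these notions share: a randomized algorithm M is ε-differentially private if, for all adjacent databases D, D',

\[
\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S]
\qquad \text{for all } S \subseteq \mathrm{Range}(M).
\]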

Relevance:

100.00%

Publisher:

Abstract:

The inherent complexity of modern cloud infrastructures has created the need for innovative monitoring approaches, as state-of-the-art solutions used for other large-scale environments do not address specific cloud features. Although cloud monitoring is nowadays an active research field, a comprehensive study covering all its aspects has not been presented yet. This paper provides a deep insight into cloud monitoring. It proposes a unified cloud monitoring taxonomy, based on which it defines a layered cloud monitoring architecture. To illustrate it, we have implemented GMonE, a general-purpose cloud monitoring tool that covers all aspects of cloud monitoring by specifically addressing the needs of modern cloud infrastructures. Furthermore, we have evaluated the performance, scalability and overhead of GMonE with the Yahoo Cloud Serving Benchmark (YCSB), using the OpenNebula cloud middleware on the Grid'5000 experimental testbed. The results of this evaluation demonstrate the benefits of our approach, surpassing the monitoring performance and capabilities of the alternatives present in state-of-the-art systems such as Amazon EC2 and OpenNebula.

Relevance:

100.00%

Publisher:

Abstract:

This work was carried out within the framework of the EURECA (Enabling information re-Use by linking clinical REsearch and Care) and INTEGRATE (Integrative Cancer Research Through Innovative Biomedical Infrastructures) projects, in which the Biomedical Informatics Group (GIB) of the UPM collaborates with other European universities and healthcare institutions. Both projects develop services and infrastructures whose main goal is to store clinical information from diverse sources (such as hospital electronic health records, clinical trials or biomedical research articles) in a common way that is easy to access and query, in order to support medical research carried out collaboratively across institutions. This is the core idea of semantic interoperability, on which both projects focus and which is key to the correct operation of their software: data are exchanged under a shared, common and unambiguous representation model in which every clinical concept, term or data item has a single form of representation. This enables knowledge inference and fits perfectly in the context of medical research. The tool developed in this work is likewise oriented to maximizing semantic interoperability: it loads clinical information, in a standardized format, into a common data model implemented in relational databases. The work was carried out between February 3 and June 6, 2014, during the second semester of the 2013/2014 academic year, following a waterfall life cycle to organize the tasks of the project, so that no phase starts until the previous one has been finished, reviewed and accepted, with the exception of the documentation task (the writing of this report), which proceeded in parallel with all the others.
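A toy sketch of the common-data-model idea, with an invented schema (the projects' actual data model is not reproduced here): storing each clinical fact under a single concept code in a relational database makes cross-source queries unambiguous.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE observation (
    patient_id   TEXT,
    concept_code TEXT,   -- one code per clinical concept, SNOMED CT-style
    value        REAL,
    unit         TEXT)""")

# The same concept gets the same code regardless of how the source
# system (EHR, trial database, article) originally worded it.
db.execute("INSERT INTO observation VALUES (?, ?, ?, ?)",
           ("p001", "386725007", 38.2, "Cel"))  # body temperature
rows = db.execute("SELECT * FROM observation WHERE concept_code = ?",
                  ("386725007",)).fetchall()
print(rows)
```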

Relevance:

100.00%

Publisher:

Abstract:

This document offers an overview of the current set of tools available for analyzing and exploiting vulnerabilities in computer systems, and more specifically in computer networks. On the one hand, it analytically describes the free software tools currently offered for analyzing and detecting vulnerabilities in computer systems, covering how each tool works, its options and the motivation for using it, comparing it with others in some cases, describing its differences in others, and justifying its choice in all of them. On the other hand, the analyzed tools are used to develop concrete usage examples with different parameter selections, observing their behavior and trying to discern which data are useful for obtaining information about the vulnerabilities present in the system. In addition, a practical case is developed that puts the presented theoretical knowledge into practice, so that the reader can consolidate what has been learned by verifying, through a real case, the usefulness of the described tools. The results show that vulnerability analysis and detection performed by a competent system administrator provides the organization with a set of techniques to improve its computer security and thus avoid problems with potential attackers.
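As a purely illustrative example (the abstract does not name specific tools), the kind of usage example developed in the document could resemble a service/version scan driven from Python with nmap, one of the standard free analysis tools:

```python
import subprocess

# -sV probes service versions; -oN writes a human-readable report.
# 192.0.2.10 is a documentation address; substitute an authorized target.
result = subprocess.run(
    ["nmap", "-sV", "-oN", "scan.txt", "192.0.2.10"],
    capture_output=True, text=True, check=False)
print(result.stdout)
```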

Relevance:

100.00%

Publisher:

Abstract:

The Bologna Declaration and the implementation of the European Higher Education Area are promoting the use of active learning methodologies such as cooperative learning and project-based learning. This study was motivated by the comparison of the results obtained after applying Cooperative Learning (CL) and Project Based Learning (PBL) to a subject of Computer Engineering. The fundamental hypothesis tested was whether the academic success achieved by first-year students was higher when CL was applied than when PBL was applied. A practical case comparing the effectiveness of CL and PBL is presented in this work. The study was carried out at the Universidad Politécnica de Madrid, where these mechanisms were applied to the Operating Systems I subject of the Technical Engineering in Computer Systems degree (OSIS) and to the same subject of the Technical Engineering in Computer Management degree (OSIM). Both subjects have the same syllabus, are taught in the same year and semester, and share formative objectives. From this study we can conclude that students' academic performance (in terms of the grades obtained) is greater with PBL than with CL; specifically, the difference is between 0.5 and 1 point in the individual tests and between 2.5 and 3 points in the group tests. This study therefore refutes the fundamental hypothesis formulated at the beginning. Some possible interpretations of these results are discussed in the study.

Relevance:

100.00%

Publisher:

Abstract:

The goal of this project is to develop a tool that lets students of the Digital Signal Processing course self-correct their lab exercises. The tool is designed with the Matlab GUI facilities, which allow graphical user interfaces to be created for interacting with the student, so students can check whether the results they obtained for the given exercise statement are correct. Students are evaluated by being asked for several answers about the exercises, which are then compared with the correct results. The part of the code hidden from the user is in charge of indicating whether each result is correct or not, and it reveals the outcome of this comparison to the lecturer.
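The hidden checking logic could look like the following sketch; the real tool is a Matlab GUI, and the tolerance and structure here are assumptions, shown in Python for brevity.

```python
import numpy as np

def check(student_answer, reference, tol=1e-6):
    # Compare the student's numeric result against the reference value
    # within a tolerance; shape mismatches count as wrong.
    student_answer = np.asarray(student_answer, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return (student_answer.shape == reference.shape
            and np.allclose(student_answer, reference, atol=tol))

print(check([1.0, 2.0], [1.0, 2.0000001]))  # True: within tolerance
```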

Relevance:

100.00%

Publisher:

Abstract:

A first-rate e-Health system saves lives, provides better patient care, allows complex but useful epidemiologic analyses and saves money. However, there may also be concerns about the costs and complexities associated with e-Health implementation, and about the energy footprint of the high-demand computing facilities involved. This paper proposes a novel, evolved computing paradigm that: (i) provides the required computing and sensing resources; (ii) allows population-wide diffusion; (iii) exploits the storage, communication and computing services provided by the Cloud; and (iv) tackles energy optimization as a first-class requirement, taking it into account during the whole development cycle. The novel computing concept and the multi-layer, top-down energy-optimization methodology obtain promising results in a realistic scenario for cardiovascular tracking and analysis, making Home Assisted Living a reality.

Relevance:

100.00%

Publisher:

Abstract:

Society depends on technology today more than ever, yet investment in security remains scarce and computer systems are still far from safe. Cryptography is one of the cornerstones of security in this field, so a considerable amount of resources has recently been devoted to developing tools that help evaluate and improve cryptographic algorithms. EasyCrypt is one of these systems, developed recently at the IMDEA Software Institute in response to the growing need for reliable formal verification tools for cryptography. This work addresses an improvement to EasyCrypt's term rewriting system, replacing it with a symbolic abstract machine. To that end, two well-known abstract machines, the Krivine machine and the ZAM, are first studied and implemented, introducing variations on them and studying their differences from a practical point of view.
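As a flavor of what is implemented, here is a minimal call-by-name Krivine machine over lambda terms in de Bruijn notation; this is the textbook reduction to weak head normal form, far simpler than EasyCrypt's reducer or the variations studied in the work.

```python
# Terms: ("var", n) | ("lam", body) | ("app", fun, arg), de Bruijn indices.
# State: (term, environment, stack), where environments and stacks hold
# closures, i.e. (term, environment) pairs.
def krivine(term, env=(), stack=()):
    while True:
        tag = term[0]
        if tag == "app":                     # push the argument closure
            stack = ((term[2], env),) + stack
            term = term[1]
        elif tag == "lam" and stack:         # pop one closure into the env
            env = (stack[0],) + env
            term, stack = term[1], stack[1:]
        elif tag == "var":                   # enter the n-th closure
            term, env = env[term[1]]
        else:                                # weak head normal form reached
            return term, env, stack

# (\x. x) (\y. y)  reduces to  \y. y
identity = ("lam", ("var", 0))
print(krivine(("app", identity, identity))[0])
```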