906 results for User-interface Design
Abstract:
This PhD thesis was developed entirely at the Telescopio Nazionale Galileo (TNG, Roque de los Muchachos, La Palma, Canary Islands) with the aim of designing, developing and implementing a new Graphical User Interface (GUI) for the Near Infrared Camera Spectrometer (NICS) installed at the Nasmyth A focus of the telescope. The new GUI for NICS was conceived to streamline the astronomer's work through a set of tools not present in the existing GUI, such as the ability to move an object onto the slit automatically and to perform a preliminary image analysis and spectrum extraction. The new GUI also provides a wide and versatile image display, an automatic procedure for detecting astronomical objects, and a facility for automatic image crosstalk correction. To test the overall functioning of the new GUI, and to provide some information on the atmospheric extinction at the TNG site, two telluric standard stars, Hip031303 and Hip031567, were observed spectroscopically during engineering time. The NICS set-up used was as follows: Large Field (0.25''/pixel) mode, 0.5'' slit, and spectral dispersion through the AMICI prism (R~100) as well as the higher-resolution (R~1000) JH and HK grisms.
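As a rough illustration of the slit-centring tool mentioned above, the following Python sketch (not part of the thesis code) converts the pixel offset between a detected object and the slit centre into an arcsecond telescope offset, using the 0.25''/pixel Large Field plate scale quoted in the abstract; the pixel coordinates are invented example values.

# Hedged illustration: compute the telescope offset, in arcseconds, needed to
# place a detected object on the slit, given both positions in detector pixels.
# The 0.25''/pixel plate scale is the Large Field value quoted above; the
# coordinates used in the example are made up.

PLATE_SCALE = 0.25  # arcsec per pixel, NICS Large Field mode

def offset_to_slit(obj_xy, slit_xy, scale=PLATE_SCALE):
    """Return the (dx, dy) offset in arcsec that moves the object onto the slit."""
    dx_pix = slit_xy[0] - obj_xy[0]
    dy_pix = slit_xy[1] - obj_xy[1]
    return dx_pix * scale, dy_pix * scale

if __name__ == "__main__":
    # Example: object detected at pixel (412.3, 508.7), slit centre at (512.0, 512.0)
    dx, dy = offset_to_slit((412.3, 508.7), (512.0, 512.0))
    print(f"Offset the telescope by {dx:+.2f}'', {dy:+.2f}''")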
Abstract:
This thesis aimed to address some of the issues that currently prevent P300-based brain-computer interface (BCI) systems from moving from research laboratories to end users' homes. An innovative asynchronous classifier was defined and validated. It relies on a set of thresholds introduced into the classifier; these thresholds were derived from the distributions of the classification scores for target stimuli, non-target stimuli, and epochs of voluntary no-control. With the asynchronous classifier, a P300-based BCI system can adapt its speed to the current state of the user and can automatically suspend control when the user diverts attention from the stimulation interface. Since EEG signals are non-stationary and show inherent variability, making long-term use of BCI possible requires tracking changes in the ongoing EEG activity and adapting the BCI model parameters accordingly. To this end, the asynchronous classifier was subsequently improved by introducing a self-calibration algorithm for the continuous, unsupervised recalibration of the subject-specific control parameters. Finally, an index for the online monitoring of EEG quality was defined and validated in order to detect potential problems and system failures. The thesis ends with the description of a translational work involving end users (people with amyotrophic lateral sclerosis, ALS). Following a user-centered design approach, the design, development and validation phases of an innovative assistive device are described. The proposed assistive technology (AT) was specifically designed to meet the needs of people with ALS during the different phases of the disease (i.e., the degree of motor impairment). Indeed, the AT can be accessed with several input devices, either conventional (mouse, touchscreen) or alternative (switches, head tracker), up to a P300-based BCI.
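The following Python sketch (not the thesis classifier) illustrates the kind of threshold-based asynchronous decision described above: a selection is made only when the best score exceeds a control threshold and is clearly separated from the runner-up; otherwise the epoch is treated as voluntary no-control. The threshold values and score layout are assumptions.

# Hedged sketch of an asynchronous P300 decision rule based on score thresholds.
# We assume each stimulation round yields one classifier score per selectable
# item; t_control and t_margin are hypothetical values that, in the thesis,
# would be estimated from the target / non-target / no-control score distributions.
import numpy as np

def asynchronous_decision(scores, t_control=2.0, t_margin=1.0):
    """Return the selected item index, or None to suspend control (no-control)."""
    scores = np.asarray(scores, dtype=float)
    best = int(np.argmax(scores))
    sorted_scores = np.sort(scores)[::-1]
    margin = sorted_scores[0] - sorted_scores[1]  # separation from the runner-up
    # Select only when the best score is high enough AND clearly separated;
    # otherwise assume the user is not attending to the stimulation interface.
    if sorted_scores[0] >= t_control and margin >= t_margin:
        return best
    return None

if __name__ == "__main__":
    print(asynchronous_decision([0.2, 3.1, 0.4, 0.1]))  # -> 1 (control detected)
    print(asynchronous_decision([0.9, 1.0, 0.8, 1.1]))  # -> None (no-control)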
Abstract:
EUMETSAT (www.eumetsat.int) is the European agency for the operation of satellites that monitor climate, weather and the terrestrial environment. From the operations centre located in Darmstadt (Germany), meteorological satellites in geostationary and polar orbits are controlled, collecting data for the observation of the atmosphere, the oceans and the Earth's surface in a continuous 24/7 service. GEMS (Generic Event Monitoring System) is a centralized monitoring system for different programmes within the EUMETSAT operational environment. The software supports the monitoring of different platforms and cross-monitoring between different operational sections, and is designed so that it can be extended to future missions. The current version of the GEMS MMI (Multi Media Interface), v. 3.6, is based on standard Java Server Pages (JSP) and makes heavy use of Java code; it also uses ASCII files for data filtering and display. A direct consequence is, for example, that the information is not updated automatically but requires a page reload. Further input for a new version of the GEMS MMI comes from various anomalous behaviours reported during everyday use of the software. The thesis focuses on the definition of new requirements for a new version of the GEMS MMI (v. 4.4) on behalf of EUMETSAT's operations engineering and maintenance division. As part of the supporting activities, the tests were conducted at Solenix. The new software will provide a better web application, with faster response times, automatic updating of the information, full use of the GEMS database and its filtering capabilities, together with mobile phone applications to support on-call activities. The new version of GEMS will have a new Graphical User Interface (GUI) based on modern technologies. For an operations environment such as EUMETSAT's, where the reliability of the technologies and the longevity of the chosen approach are of vital importance, not all of the tools currently available are suitable, and they need to be improved. At the same time, a modern interface, in terms of visual design, interactivity and functionality, is important for the new GEMS MMI.
Abstract:
Analog filters and direct digital filters are implemented using digital signal processing techniques. Specifically, Butterworth, Elliptic, and Chebyshev filters are implemented on the Motorola 56001 Digital Signal Processor by integrating three software packages: MATLAB, C++, and Motorola's Application Development System. The integrated environment allows the novice user to design a filter automatically by specifying the filter order and critical frequencies, while permitting more experienced designers to take advantage of MATLAB's advanced design capabilities. This project bridges the gap between the theoretical results produced by MATLAB and the practicalities of implementing digital filters on the Motorola 56001 Digital Signal Processor. While these results are specific to the Motorola 56001, they may be extended to other digital signal processors. MATLAB handles the filter calculations, a C++ routine handles the conversion to assembly code, and the Motorola software compiles and transmits the code to the processor.
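As a rough analogue of the MATLAB design step described above, the following Python/SciPy sketch derives filter coefficients from a chosen type, order and critical frequency. The sampling rate, ripple and attenuation values are illustrative assumptions, and the project's own MATLAB-to-DSP toolchain is not reproduced here.

# Hedged sketch: a SciPy analogue of the MATLAB design step described above.
# A filter is specified only by its type, order and critical frequency; the
# resulting coefficients would then be quantised and handed to the DSP-side
# tooling (here we just print them). Parameter values are illustrative.
from scipy import signal

def design_filter(kind, order, critical, fs=48000.0, ripple_db=1.0, atten_db=60.0):
    """Return (b, a) coefficients for a Butterworth, Chebyshev-I or elliptic low-pass."""
    if kind == "butterworth":
        return signal.butter(order, critical, btype="low", fs=fs)
    if kind == "chebyshev":
        return signal.cheby1(order, ripple_db, critical, btype="low", fs=fs)
    if kind == "elliptic":
        return signal.ellip(order, ripple_db, atten_db, critical, btype="low", fs=fs)
    raise ValueError(f"unknown filter type: {kind}")

if __name__ == "__main__":
    b, a = design_filter("elliptic", order=4, critical=4000.0)
    print("numerator  :", b)
    print("denominator:", a)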
Abstract:
The article is concerned with the design and use of e-learning technology to develop education qualitatively. The purpose is to develop a framework for the pedagogical evaluation of e-learning technology. The approach taken is that evaluation and design must be grounded in a learning-theoretical perspective, and it is argued that technology must be reflected upon in relation to activities, learning principles, and a learning theory in order to develop education qualitatively. The article presents three frameworks developed on the basis of cognitivism, radical constructivism and activity theory. Finally, on the basis of these frameworks, the article discusses e-learning technology and, more specifically, the design of virtual learning environments and learning objects. It is argued that e-learning technology is not pedagogically neutral, and that it is therefore necessary to focus on the design of technology that explicitly supports a certain pedagogical approach. Further, it is argued that design should shift its focus away from the organisation of content and towards the design of activities.
Abstract:
After 20 years of silence, two recent references from the Czech Republic (Bezpečnostní softwarová asociace, Case C-393/09) and from the English High Court (SAS Institute, Case C-406/10) touch upon several questions that are fundamental for the extent of copyright protection for software under the Computer Program Directive 91/250 (now 2009/24) and the Information Society Directive 2001/29. In Case C-393/09, the European Court of Justice held that “the object of the protection conferred by that directive is the expression in any form of a computer program which permits reproduction in different computer languages, such as the source code and the object code.” As “any form of expression of a computer program must be protected from the moment when its reproduction would engender the reproduction of the computer program itself, thus enabling the computer to perform its task,” a graphical user interface (GUI) is not protected under the Computer Program Directive, as it does “not enable the reproduction of that computer program, but merely constitutes one element of that program by means of which users make use of the features of that program.” While the definition of the computer program and the exclusion of GUIs mirror earlier jurisprudence in the Member States and therefore do not come as a surprise, the main significance of Case C-393/09 lies in its interpretation of the Information Society Directive. In confirming that a GUI “can, as a work, be protected by copyright if it is its author’s own intellectual creation,” the ECJ continues the Europeanization of the definition of “work” which began in Infopaq (Case C-5/08). Moreover, the Court elaborated this concept further by excluding expressions from copyright protection which are dictated by their technical function. Even more importantly, the ECJ held that television broadcasting of a GUI does not constitute a communication to the public, as the individuals cannot have access to the “essential element characterising the interface,” i.e., the interaction with the user. The exclusion of elements dictated by technical function from copyright protection and the interpretation of the right of communication to the public with reference to the “essential element characterising” the work may be seen as welcome limitations of copyright protection in the interest of a free public domain, limitations which were not yet apparent in Infopaq. While Case C-393/09 has given a first definition of the computer program, the pending reference in Case C-406/10 is likely to clarify the scope of protection against non-literal copying, namely how far the protection extends beyond the text of the source code to the design of a computer program, and where the limits of protection lie as regards the functionality of a program and mere “principles and ideas.” In light of the travaux préparatoires, it is submitted that the ECJ is likely to grant protection for the design of a computer program as well, while excluding both the functionality and the underlying principles and ideas from protection under the European copyright directives.
Abstract:
Mobile learning, in the past defined as learning with mobile devices, now refers to any type of learning on the go, or learning that takes advantage of mobile technologies. This new definition shifts the focus from the mobility of the technology to the mobility of the learner (O'Malley and Stanton 2002; Sharples, Arnedillo-Sanchez et al. 2009). Placing emphasis on the mobile learner’s perspective requires studying “how the mobility of learners augmented by personal and public technology can contribute to the process of gaining new knowledge, skills, and experience” (Sharples, Arnedillo-Sanchez et al. 2009). The demands of an increasingly knowledge-based society and the advances in mobile phone technology are combining to spur the growth of mobile learning. Around the world, mobile learning is predicted to be the future of online learning and is slowly entering mainstream education. However, for mobile learning to attain its full potential, it is essential to develop more advanced technologies that are tailored to the needs of this new learning environment. A research field that puts the development of such technologies on a solid footing is user experience design, which addresses how to improve the usability, and therefore the user acceptance, of a system. Although there is no consensus definition of user experience, simply stated it concerns how a person feels about using a product, system or service. It is generally agreed that user experience adds subjective attributes and social aspects to a space that had previously concerned itself mainly with ease of use, and it can also include users’ perceptions of usability and system efficiency. Recent advances in mobile and ubiquitous computing further underline the importance of human-computer interaction and of the user experience (feelings, motivations, and values) with a system. Today there are plenty of reports on the limitations of mobile technologies for learning (e.g., small screen size, slow connections), but there is a lack of research on the user experience with mobile technologies. This dissertation fills this gap with a new approach to building a user experience-based mobile learning environment. The optimized user experience we propose integrates three priorities, namely a) the content, by improving the quality of the delivered learning materials, b) the teaching and learning process, by enabling live and synchronous learning, and c) the learners themselves, by enabling timely detection of their emotional state during mobile learning. In detail, the contributions of this thesis are as follows: • A video codec optimized for screencast videos, which achieves an unprecedented compression rate while maintaining very high video quality, and a novel UI layout for video lectures, which together enable truly mobile access to live lectures. • A new approach to HTTP-based multimedia delivery that exploits the characteristics of live lectures in a mobile context and enables a significantly improved user experience for mobile live lectures. • A non-invasive affective learning model based on multi-modal emotion detection with very high recognition rates, which enables real-time emotion detection and subsequent adaptation of the learning environment on mobile devices. The technology resulting from the research presented in this thesis is in daily use at the School of Continuing Education of Shanghai Jiaotong University (SOCE), a blended-learning institution with 35,000 students.
Abstract:
Various applications for event detection, localization, and monitoring can benefit from the use of wireless sensor networks (WSNs). Wireless sensor networks are generally easy to deploy, have a flexible topology, and can support a diversity of tasks thanks to the large variety of sensors that can be attached to the wireless sensor nodes. To guarantee the efficient operation of such heterogeneous wireless sensor networks over their lifetime, appropriate management is necessary. Typically, there are three management tasks, namely monitoring, (re)configuration, and code updating. On the one hand, status information, such as battery state and node connectivity, of both the wireless sensor network and the sensor nodes has to be monitored. On the other hand, sensor nodes have to be (re)configured, e.g., by setting the sensing interval. Most importantly, new applications have to be deployed and bug fixes applied during the network lifetime. All management tasks have to be performed in a reliable, time- and energy-efficient manner. The ability to disseminate data from one sender to multiple receivers in a reliable, time- and energy-efficient manner is critical for the execution of the management tasks, especially for code updating. Multicast communication is an efficient way to handle such traffic patterns in wireless sensor networks. Due to the nature of code updates, a multicast protocol has to support bulky traffic and end-to-end reliability. Further, the limited resources of wireless sensor nodes demand an energy-efficient operation of the multicast protocol. Current data dissemination schemes do not fulfil all of the above requirements. To close this gap, we designed the Sensor Node Overlay Multicast (SNOMC) protocol to support reliable, time-efficient and energy-efficient dissemination of data from one sender node to multiple receiver nodes. In contrast to other multicast transport protocols, which do not support reliability mechanisms, SNOMC provides end-to-end reliability using a NACK-based reliability mechanism. The mechanism is simple, easy to implement, and can significantly reduce the number of transmissions. It is complemented by a data acknowledgement sent after the successful reception of all data fragments by the receiver nodes. Three different caching strategies are integrated into SNOMC for the efficient handling of the necessary retransmissions: caching on each intermediate node, caching on branching nodes, or caching only on the sender node. Moreover, an option was included to pro-actively request missing fragments. SNOMC was evaluated both in the OMNeT++ simulator and in our in-house real-world testbed, and compared to a number of common data dissemination protocols, such as Flooding, MPR, TinyCubus, PSFQ, and both UDP and TCP. The results showed that SNOMC outperforms the selected protocols in terms of transmission time, number of transmitted packets, and energy consumption. Moreover, we showed that SNOMC performs well with different underlying MAC protocols, which support different levels of reliability and energy efficiency. Thus, SNOMC offers a robust, high-performing solution for the efficient distribution of code updates and management information in a wireless sensor network. To address the three management tasks, in this thesis we developed the Management Architecture for Wireless Sensor Networks (MARWIS). MARWIS is specifically designed for the management of heterogeneous wireless sensor networks.
A distinguishing feature of its design is the use of wireless mesh nodes as a backbone, which enables diverse communication platforms and the offloading of functionality from the sensor nodes to the mesh nodes. This hierarchical architecture allows for the efficient operation of the management tasks, since the sensor nodes are organised into small sub-networks, each managed by a mesh node. Furthermore, we developed an intuitive graphical user interface, which allows non-expert users to easily perform management tasks in the network. In contrast to other management frameworks, such as Mate, MANNA, and TinyCubus, or code dissemination protocols, such as Impala, Trickle, and Deluge, MARWIS offers an integrated solution for the monitoring, configuration and code updating of sensor nodes. The integration of SNOMC into MARWIS further increases the efficiency of the management tasks. To our knowledge, our approach is the first to combine a management architecture with an efficient overlay multicast transport protocol. This combination of SNOMC and MARWIS supports the reliable, time- and energy-efficient operation of a heterogeneous wireless sensor network.
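The following Python sketch (not the SNOMC implementation) illustrates the NACK-based end-to-end reliability idea described above: the receiver reports only the missing fragment numbers, and sends a single data acknowledgement once the whole code image has arrived. Fragment counts and message formats are invented for illustration.

# Hedged sketch of NACK-based end-to-end reliability for bulky code updates.
# The receiver tracks which fragments of an image it has seen; instead of
# acknowledging every packet, it reports only the missing fragment numbers
# (a NACK) and sends one data acknowledgement once the image is complete.

def missing_fragments(received_ids, total_fragments):
    """Return the sorted list of fragment numbers still missing at the receiver."""
    return sorted(set(range(total_fragments)) - set(received_ids))

def receiver_report(received_ids, total_fragments):
    """Build either a NACK listing the gaps or a final data acknowledgement."""
    gaps = missing_fragments(received_ids, total_fragments)
    if gaps:
        return {"type": "NACK", "fragments": gaps}   # sender (or a caching node) retransmits these
    return {"type": "DATA_ACK"}                      # whole code image received

if __name__ == "__main__":
    # 8-fragment update; fragments 3 and 6 were lost on the way.
    print(receiver_report([0, 1, 2, 4, 5, 7], 8))  # {'type': 'NACK', 'fragments': [3, 6]}
    print(receiver_report(range(8), 8))            # {'type': 'DATA_ACK'}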
Abstract:
High Angular Resolution Diffusion Imaging (HARDI) techniques, including Diffusion Spectrum Imaging (DSI), have been proposed to resolve crossing and other complex fiber architectures in human brain white matter. In these methods, the directional information of diffusion is inferred from the peaks in the orientation distribution function (ODF). Extensive studies using histology on macaque brain, cat cerebellum, rat hippocampus and optic tracts, and bovine tongue are qualitatively in agreement with the DSI-derived ODFs and tractography. However, only two studies in the literature have validated DSI results using physical phantoms, and neither was performed on a clinical MRI scanner. Likewise, the few studies that optimized DSI in a clinical setting did not involve a comparison against physical phantoms. Finally, there is a lack of consensus on the necessary pre- and post-processing steps in DSI, and ground-truth diffusion fiber phantoms are not yet standardized. Therefore, the aims of this dissertation were to design and construct novel diffusion phantoms, to employ post-processing techniques in order to systematically validate and optimize DSI-derived fiber ODFs in crossing regions on a clinical 3T MR scanner, and to develop user-friendly software for DSI data reconstruction and analysis. Phantoms with fixed crossing-fiber configurations, two fibers crossing at 90° and at 45° respectively, along with a phantom with three fibers crossing at 60°, were constructed using novel hollow plastic capillaries and novel placeholders. T2-weighted MRI results on these phantoms demonstrated high SNR, a homogeneous signal, and the absence of air bubbles. In addition, a technique to deconvolve the response function of an individual peak from the overall ODF was implemented, alongside the other DSI post-processing steps. This technique greatly improved the angular resolution of otherwise unresolvable peaks in a crossing-fiber ODF. The effects of the DSI acquisition parameters and of SNR on the resulting angular accuracy of DSI on the clinical scanner were studied and quantified using the developed phantoms. With high angular direction sampling and reasonable levels of SNR, quantification of the crossing region in the 90°, 45° and 60° phantoms resulted in successful detection of the angular information, with mean ± SD of 86.93° ± 2.65°, 44.61° ± 1.6° and 60.03° ± 2.21° respectively, while simultaneously enhancing the ODFs in regions containing single fibers. To demonstrate the applicability of these validated methodologies, improvements in ODFs and fiber tracking in known crossing-fiber regions of normal human subjects were shown, and an in-house MATLAB software package that streamlines the data reconstruction and post-processing for DSI, with an easy-to-use graphical user interface, was developed. In conclusion, the phantoms developed in this dissertation offer a means of providing ground truth for the validation of reconstruction and tractography algorithms for various diffusion models (including DSI). Moreover, the deconvolution methodology (when applied as an additional DSI post-processing step) significantly improved the angular accuracy of the ODFs obtained from DSI, and should be applicable to ODFs obtained from other high angular resolution diffusion imaging techniques.
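As an illustration of how the angular accuracy reported above can be quantified, the following Python sketch (not the dissertation's MATLAB package) computes the crossing angle between two detected peak directions per voxel and reports the mean ± SD, to be compared against the known ground-truth angle of the phantom. The example direction vectors are invented.

# Hedged sketch: quantify a crossing-fiber phantom from detected ODF peak pairs.
import numpy as np

def crossing_angle(u, v):
    """Angle in degrees between two fiber directions (sign of the vectors is irrelevant)."""
    u = np.asarray(u, float) / np.linalg.norm(u)
    v = np.asarray(v, float) / np.linalg.norm(v)
    return np.degrees(np.arccos(np.clip(abs(np.dot(u, v)), -1.0, 1.0)))

def summarize(peak_pairs):
    """Return (mean, SD) of the crossing angles over all voxels."""
    angles = [crossing_angle(u, v) for u, v in peak_pairs]
    return float(np.mean(angles)), float(np.std(angles))

if __name__ == "__main__":
    # Toy peak pairs from three voxels of a nominal 90° crossing phantom.
    voxels = [([1, 0, 0], [0.02, 1, 0]),
              ([1, 0.03, 0], [0, 1, 0]),
              ([1, -0.01, 0], [0.01, 1, 0])]
    mean, sd = summarize(voxels)
    print(f"crossing angle: {mean:.2f} deg +/- {sd:.2f} deg")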
End-User Development Success Factors and their Application to Composite Web Development Environments
Abstract:
The Future Internet is expected to be composed of a mesh of interoperable Web services accessed from all over the Web. This approach has not yet caught on since global user-service interaction is still an open issue. Successful composite applications rely on heavyweight service orchestration technologies that raise the bar far above end-user skills. The weakness lies in the abstraction of the underlying service front-end architecture rather than the infrastructure technologies themselves. In our opinion, the best approach is to offer end-to-end composition from user interface to service invocation, as well as an understandable abstraction of both building blocks and a visual composition technique. In this paper we formalize our vision with regard to the next-generation front-end Web technology that will enable integrated access to services, contents and things in the Future Internet. We present a novel reference architecture designed to empower non-technical end users to create and share their own self-service composite applications. A tool implementing this architecture has been developed as part of the European FP7 FAST Project and EzWeb Project, allowing us to validate the rationale behind our approach.
Abstract:
Synthetic voices can be generated with Vox, a vowel simulator programmed in MATLAB, originally developed at the Technical University of Aachen by Malte Kob [1] and later improved in the ICS Department of the EUITT [2]. The main limitation of the simulator is that it can only generate synthetic vowels; moreover, the simulation is performed from fixed anatomical and physiological parameters. The current structure of the program makes it difficult to quickly modify any of its basic parameters, a situation that could be improved with a graphical interface. The project therefore consists, on the one hand, of completing the simulator so that synthesis is also possible from the anatomical parameters of men, women and children, and, on the other hand, of designing and implementing a graphical user interface that allows the different physical parameters of the simulation to be selected and its results to be gathered more easily.
Abstract:
The aim of this final-year project is the design and implementation of a multichannel temperature measurement system with thermocouples and digital processing. A four-channel prototype with thermocouple connections has been built; the thermocouple is the type of sensor used for these measurements. The voltage generated by the thermocouple is processed by a thermocouple-to-digital converter with a Serial Peripheral Interface (SPI) output. This communication is controlled by a Field Programmable Gate Array (FPGA), specifically a Xilinx Virtex-5 development board. The FPGA has also been programmed for the software processing of the temperature and for the subsequent serial communication with a PC, which runs a user interface that displays the measurement results in real time. The project has been developed in collaboration with a private company dedicated mainly to electronic design. The purpose of the prototype is to study an update of the measurement block that controls the temperature curves of a piece of aeronautical repair equipment. This report describes the process followed to develop the prototype, including the studies carried out and the information needed to design, manufacture and program the different blocks that make up the system.
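A hedged sketch of the PC side of such a system is given below in Python: it reads one sample per line from the serial link and converts the raw converter counts to degrees Celsius for display. The port name, baud rate, line format and 0.25 °C/count resolution are assumptions for illustration, not taken from the prototype.

# Hedged sketch of the PC-side reader. Assumed frame: "channel,counts" per line.
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200
LSB_CELSIUS = 0.25  # assumed resolution of the thermocouple-to-digital converter

def counts_to_celsius(counts: int) -> float:
    return counts * LSB_CELSIUS

def read_samples(n=10):
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        for _ in range(n):
            line = link.readline().decode(errors="ignore").strip()
            if "," not in line:
                continue  # skip empty or malformed frames
            channel, counts = line.split(",", 1)
            print(f"channel {channel}: {counts_to_celsius(int(counts)):6.2f} C")

if __name__ == "__main__":
    read_samples()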
Abstract:
Diabetes mellitus is defined as a disorder of carbohydrate metabolism resulting from insufficient or absent insulin production by the beta cells of the pancreas, or from a reduced insulin sensitivity of the metabolic system. Type 1 diabetes is characterised by the absence of insulin production due to the destruction of the pancreatic beta cells. Without insulin in the bloodstream, glucose cannot be absorbed by the cells, producing a state of hyperglycaemia in the patient which, if left untreated, can in the medium and long term cause severe complications known as diabetes syndromes. Type 1 diabetes is an incurable but controllable disease. The therapy consists of exogenous insulin administration with the aim of keeping the blood glucose level within normal limits. Among the many ways of administering insulin, this project uses an infusion pump which, together with a subcutaneous glucose sensor, makes it possible to create an autonomous control loop that regulates the optimal amount of insulin delivered at each moment. When the control algorithm runs on a digital system together with the subcutaneous sensor and the subcutaneous infusion pump, the result is known as an ambulatory artificial endocrine pancreas (PAE), which is still at the research stage. These metabolic control algorithms must be evaluated in simulation to guarantee patient safety, so it is necessary to design a simulation system that ensures the reliability of the PAE. Such a simulation system connects the algorithms to mathematical metabolic models to obtain a preview of their behaviour. In this scenario DIABSIM was designed, a tool developed in LabVIEW and later ported to MATLAB, based on the compartmental mathematical model proposed by Hovorka, with which different types of therapies and closed-loop controllers can be simulated and evaluated. Once simulated and evaluated, these therapies and controllers must be tested experimentally through a real clinical trial protocol, as a step prior to the ambulatory PAE. To manage this clinical trial protocol for the verification of the control algorithms, a user interface (GUI: Graphical User Interface) was created on top of a set of MATLAB functions for simulating and evaluating insulin therapies, known as the Artificial Pancreas Environment with Clinical Interface (EPIC, "Entorno de Páncreas artificial con Interfaz Clínica"). EPIC has already been used in 10 clinical trials, from which possible improvements, extensions and changes have been proposed. This project proposes an improved version of the EPIC user interface developed in a previous project, to manage a real clinical trial protocol for the verification of control algorithms in a tightly controlled hospital environment, and it also studies the feasibility of connecting the GUI to Simulink (MATLAB's graphical system-simulation environment) in order to link it to a new patient simulator approved by the JDRF (Juvenile Diabetes Research Foundation).
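The following Python sketch shows only the structure of such a closed-loop simulation, a control algorithm and a metabolic model stepped together; it uses a deliberately toy one-compartment model and a simple proportional rule, not the Hovorka model or the DIABSIM/EPIC code, and all numerical values are invented for illustration.

# Minimal sketch of a closed-loop simulation: at each step a (toy) metabolic
# model produces a glucose reading and a (toy) controller decides the insulin
# infusion for the next step. NOT a clinically meaningful model.

def toy_glucose_model(glucose, insulin, meal=0.0, dt=5.0):
    """One-compartment toy model: glucose rises with meals, falls with insulin."""
    dg = (-0.002 * insulin * glucose + 0.5 * meal + 0.01 * (120.0 - glucose)) * dt
    return max(glucose + dg, 40.0)

def toy_controller(glucose, target=120.0, basal=1.0, gain=0.02):
    """Proportional rule: more insulin when glucose is above target, never negative."""
    return max(basal + gain * (glucose - target), 0.0)

def simulate(steps=12, glucose=180.0):
    for k in range(steps):
        insulin = toy_controller(glucose)
        glucose = toy_glucose_model(glucose, insulin)
        print(f"t={5 * k:3d} min  glucose={glucose:6.1f} mg/dl  insulin={insulin:4.2f} U/h")

if __name__ == "__main__":
    simulate()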
Abstract:
This final-year project, carried out by the telecommunications engineer Pedro M. Matamala Lucas, is the final development phase of a larger project: the SAVID forensic video software. The purpose of the project as a whole is the creation of a software tool capable of analysing video files coded and compressed with the DV (Digital Video) system. The goal of the analysis is to report whether the magnetic tape shows signs of having been manipulated by editing after the original recording, and also to show the user other data of interest, such as the technical specifications of the video and audio signals. The tool therefore provides the user, a forensic video analyst, with information that helps assess the originality of the content of the medium under analysis. The specific objective of this final phase is the creation of the software's user interface, which reports both the binary code of the significant sectors and its interpretation after analysis. It also allows the user to report the results, and offers further features for navigating through the sectors of the code that were modified as a side effect of editing the original magnetic tape. Another important objective of the project has been the investigation of software development methodologies and techniques for later application, seeking greater efficiency in time management and higher software quality in order to guarantee the software's evolution and maintainability in the future. Emphasis has been placed on the agile methodologies that have gained relevance in the information technology sector in recent decades, replacing classical methodologies such as waterfall development. Their flexibility during the software life cycle produces better results when the specifications are not fully defined, adapting in this way to the conditions of the project. Summarising the technical specifications: the software has been developed in C++, an object-oriented programming language, using MFC (Microsoft Foundation Classes) technology for the implementation. It is an MFC dialog-box project, created, compiled and published with the Microsoft Visual Studio 2010 integrated development environment. The architecture is the archetypal three-layer one, composed of the user interface, the business layer and the data access layer. It was necessary to configure the project with CLR (Common Language Runtime) support in order to implement the report-generation functionality. The application is accompanied by the project report and its annexes: the Detailed Functional Requirements Specification (EDRF), the User Interface Specification (EIU), the Technical Design (DT) and the User Guide.
Abstract:
This project consists of the creation of a graphical user interface (GUI) in the MATLAB environment that graphically represents an HRTF (Head-Related Transfer Function) database. The head-related transfer function is a very useful tool in the study of the human ability to perceive the surrounding sound field and to localise sound sources in the space around the listener. The binaural HRTF (the term used for the pair formed by the left-ear and right-ear HRTFs) is itself of special interest, since the differences between the HRTFs of the two ears carry the information that our auditory system uses to perceive the sound field; a graphical interface of this kind is therefore of great value in the study of this field. The interaural differences are characterised in amplitude and in time, and vary with frequency. The inverse Fourier transform of the HRTF yields the head-related impulse response, or HRIR, which, besides being very useful for creating surround-sound software and devices, is used to obtain the ITD (Interaural Time Difference) and the ILD (Interaural Level Difference), commonly called the "spatial localisation parameters". The HRTF database contains the binaural information for different sound-source positions, forming a grid of spherical coordinates that surrounds the subject's head. According to the measurements made in the anechoic chamber of the EUITT (Escuela Universitaria de Ingeniería Técnica de Telecomunicación), this grid has a resolution of 10° in elevation and 5° in azimuth. The receivers are two microphones housed in the HATS (Head and Torso Simulator) acoustic manikin, Brüel & Kjaer model 4100D, which reproduces the physical features that influence the perception of the environment: the shape of the pinna, the head, the neck and the human torso. Interpolation must be computed for all points not contained in the HRTF database; this process is extremely important not only to extend the capability of the database itself but also for comparison with other databases existing in this field of study. The graphical user interface is conceived for simple, clear, predictable and interactive use. From the first sketch of the program its philosophy was clear, dictated by the needs of a user looking for a practical, intuitive tool. Its single-window design brings together both the data-acquisition components and those that make possible the graphical representation of the HRTFs, the HRIRs and the spatial localisation parameters ITD and ILD. The user can switch between the graphical representations while entering the coordinates of the points to be displayed, defined by phi (elevation) and theta (azimuth); this aspect of the interface makes the displayed information very easy to access and read. In addition, the user can enter values included in the database or values lying between them, in which case the interface performs the corresponding interpolation. The chosen interpolation method is inverse distance weighting between points: depending on the values entered by the user, an interpolation over two or four points is performed, these points being the neighbours of the entered phi or theta value. To add versatility, the GUI also offers the option of exporting the displayed plots as image files, so that the user can extract the data of interest for any value of phi and theta. The project is completed with a research and comparative study of the role and application of HRTF databases within the scientific and research context. This made it possible to gather related information from research journals such as the JAES (Journal of the Audio Engineering Society) and from the ASA (Acoustical Society of America), the IEEE (Institute of Electrical and Electronics Engineers) and the "Web of Knowledge", among others, as well as from more common sources such as Google Scholar and the "Ingenio" portal, which gives access to all the electronic resources held in the university's databases. The study broadens the knowledge of the practical uses of HRTFs. Most studies focus their efforts on improving the perception of the sound event through its simulation in stereo or multichannel listening. Starting from the HRTFs, this is possible through the analysis and calculation of data such as regressions, which are very useful for predicting a measurement on the basis of existing ones. Another field of special interest is the generation of 3D sound. Using an HRTF database it is possible to simulate a binaural signal. Algorithms have been designed and implemented on DSP devices so that, by means of interaural delays and spectral differences, an optimal surround-sound result can be achieved, without forgetting the importance of reverberation effects in producing a convincing surround impression. Given the computational complexity this requires, a large part of the studies agree on the need for more efficient systems, pursuing goals such as the generation of 3D sound in real time.
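A minimal Python sketch of the inverse-distance-weighted interpolation described above is given below (the project itself is implemented in MATLAB); the grid spacing and the toy HRIR data are illustrative assumptions.

# Hedged sketch: interpolate an HRIR at a requested (phi, theta) direction by
# weighting the HRIRs of the surrounding grid points (two or four neighbours,
# as in the GUI described above) by the inverse of their distance to the target.
import numpy as np

def idw_hrir(target, neighbours, eps=1e-9):
    """
    target     : (phi, theta) in degrees for the requested direction.
    neighbours : list of ((phi, theta), hrir) pairs bordering the target.
    Returns the interpolated HRIR as a numpy array.
    """
    weights, hrirs = [], []
    for (phi, theta), hrir in neighbours:
        d = np.hypot(target[0] - phi, target[1] - theta)
        if d < eps:                       # exact grid point: return it directly
            return np.asarray(hrir, float)
        weights.append(1.0 / d)
        hrirs.append(np.asarray(hrir, float))
    return np.average(hrirs, axis=0, weights=np.asarray(weights))

if __name__ == "__main__":
    # Interpolate at phi=5°, theta=12.5° from four surrounding grid points
    # (10° elevation step, 5° azimuth step, toy 4-sample "HRIRs").
    grid = [((0, 10),  [1.0, 0.5, 0.2, 0.1]),
            ((0, 15),  [0.9, 0.6, 0.2, 0.1]),
            ((10, 10), [0.8, 0.4, 0.3, 0.1]),
            ((10, 15), [0.7, 0.5, 0.3, 0.2])]
    print(idw_hrir((5.0, 12.5), grid))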