878 results for system integration


Relevance: 30.00%

Abstract:

The assessment of social benefits in transport policy implementation has been studied by many researchers using theoretical or empirical measures. However, few of them measure social benefit using different discount rates, such as the inter-temporal preference rate of users, the private investment discount rate and the inter-temporal preference rate of the government. In general, the same social discount rate is used for all social actors. Therefore, this paper assesses a new method that integrates the discount rates of different social actors in order to measure the real benefits of each actor in the short, medium and long term. A dynamic simulation is provided by a strategic Land-Use and Transport Interaction (LUTI) model. The method is tested by optimizing a cordon toll scheme in Madrid considering socio-economic efficiency and environmental criteria. Based on the modified social welfare function (WF), the effects on the measure of social benefits are estimated and compared with the classical WF results. The results of this research could be key to understanding the relationship between transport system policies and the distribution of social actors' benefits in a metropolitan context. They show that the use of more suitable discount rates for each social actor affects the selection and definition of the optimal congestion pricing strategy, and that the usefulness of the congestion toll declines more quickly over time.
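
The actor-specific discounting idea can be written compactly; the formulation below is a minimal sketch consistent with the abstract, using illustrative symbols rather than the paper's own notation:

    W = \sum_{a \in A} \sum_{t=0}^{T} \frac{B_{a,t}}{(1 + r_a)^{t}}

Here B_{a,t} is the net benefit accruing to actor a in year t and r_a is that actor's own discount rate (the users' inter-temporal preference rate, the private investment rate or the government's rate), whereas the classical welfare function applies a single social discount rate r to every actor.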

Relevance: 30.00%

Abstract:

This paper presents an ontology-based, multi-technology platform designed to overcome common shortcomings of Building Automation Systems. The platform allows the integration of several building automation protocols, eases the development and implementation of different kinds of services, and allows information related to the infrastructure and facilities within a building to be shared. The system has been implemented and tested in the Energy Efficiency Research Facility at CeDInt-UPM.
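
A minimal sketch of the kind of protocol-agnostic device layer such a platform implies, assuming hypothetical class names (ProtocolAdapter, BuildingModel) and example protocols; it illustrates the idea, not the platform's actual API:

    # Each protocol adapter normalizes its devices into a shared vocabulary,
    # so services can be written once against the registry instead of per protocol.
    from abc import ABC, abstractmethod

    class ProtocolAdapter(ABC):
        """Wraps one building-automation protocol (e.g. KNX, BACnet, ZigBee)."""
        @abstractmethod
        def read(self, device_id: str) -> dict: ...
        @abstractmethod
        def write(self, device_id: str, command: dict) -> None: ...

    class BuildingModel:
        """Ontology-like registry: device id -> (adapter, semantic type, location)."""
        def __init__(self):
            self._devices = {}

        def register(self, device_id, adapter, semantic_type, location):
            self._devices[device_id] = (adapter, semantic_type, location)

        def query(self, semantic_type=None, location=None):
            return [d for d, (_, t, loc) in self._devices.items()
                    if semantic_type in (None, t) and location in (None, loc)]

        def read(self, device_id):
            adapter, _, _ = self._devices[device_id]
            return adapter.read(device_id)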

Relevance: 30.00%

Abstract:

Many researchers have used theoretical or empirical measures to assess social benefits in transport policy implementation. However, few have measured social benefits by using discount rates, including the intertemporal preference rate of users, the private investment discount rate, and the intertemporal preference rate of the government. In general, the social discount rate used is the same for all social actors. This paper aims to assess a new method by integrating different types of discount rates belonging to different social actors to measure the real benefits of each actor in the short term, medium term, and long term. A dynamic simulation is provided by a strategic land use and transport interaction model. The method was tested by optimizing a cordon toll scheme in Madrid, Spain. Socioeconomic efficiency and environmental criteria were considered. On the basis of the modified social welfare function, the effects on the measure of social benefits were estimated and compared with the classical welfare function measures. The results show that the use of more suitable discount rates for each social actor had an effect on the selection and definition of optimal strategy of congestion pricing. The usefulness of the measure of congestion toll declines more quickly over time. This result could be the key to understanding the relationship between transport system policies and the distribution of social actors' benefits in a metropolitan context.

Relevance: 30.00%

Abstract:

Cloud computing and, more particularly, private IaaS, is seen as a mature technology with a myriad of solutions to choose from. However, this disparity of solutions and products has instilled in potential adopters the fear of vendor and data lock-in. Several competing and incompatible interfaces and management styles have increased these fears even further. On top of this, cloud users might want to work with several solutions at the same time, an integration that is difficult to achieve in practice. In this Master Thesis I propose a management architecture that tries to solve these problems; it provides a generalized control mechanism for several cloud infrastructures, and an interface that can meet the requirements of the users. This management architecture is designed in a modular way, using a generic information model. I have validated the approach through the implementation of the components needed for this architecture to support a sample private IaaS solution: OpenStack.
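
A minimal sketch of the modular architecture described above, under assumed names (CloudDriver, Instance and CloudManager are illustrative, and the OpenStack driver is left as a placeholder that a real implementation would back with the OpenStack APIs):

    from abc import ABC, abstractmethod
    from dataclasses import dataclass

    @dataclass
    class Instance:              # generic information model, independent of any vendor
        id: str
        name: str
        state: str

    class CloudDriver(ABC):      # one driver per IaaS backend
        @abstractmethod
        def list_instances(self) -> list[Instance]: ...
        @abstractmethod
        def start_instance(self, name: str, image: str, flavor: str) -> Instance: ...

    class OpenStackDriver(CloudDriver):
        """Placeholder: a real driver would call the OpenStack compute API here."""
        def list_instances(self):
            raise NotImplementedError
        def start_instance(self, name, image, flavor):
            raise NotImplementedError

    class CloudManager:
        """Front end exposing the same operations regardless of the backend in use."""
        def __init__(self, drivers: dict[str, CloudDriver]):
            self.drivers = drivers
        def list_all(self):
            return {name: drv.list_instances() for name, drv in self.drivers.items()}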

Relevance: 30.00%

Abstract:

Applications based on Wireless Sensor Networks for Internet of Things scenarios are on the rise. The multiple possibilities they offer have spread into previously hard-to-imagine fields, such as e-health and human physiological monitoring. An application has been developed for scenarios where data collection is applied to smart spaces, aimed at firefighting and sports. This application has been tested in a gymnasium with real, non-simulated nodes and devices. A Graphical User Interface has been implemented to suggest a series of exercises to improve a sportsman's or sportswoman's condition, depending on the context and their profile. This system can be adapted to a wide variety of e-health applications with minimal changes, and the user can interact with it using different devices, such as smartphones, smart watches and/or tablets.
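
As a rough illustration of the context- and profile-driven suggestions such a GUI could produce (the rules, thresholds and field names below are hypothetical, not taken from the paper):

    def suggest_exercise(profile: dict, context: dict) -> str:
        # context carries readings gathered by the WSN nodes; profile describes the user
        if context.get("heart_rate", 0) > 160:
            return "rest and hydrate"
        if context.get("room_temperature", 20) > 30:
            return "low-intensity stretching"
        level = profile.get("fitness_level", "low")
        return {"low": "15 min walk", "medium": "20 min rowing", "high": "interval training"}[level]

    print(suggest_exercise({"fitness_level": "medium"},
                           {"heart_rate": 110, "room_temperature": 22}))   # -> "20 min rowing"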

Relevance: 30.00%

Abstract:

Durante los últimos años la tendencia en el sector de las telecomunicaciones ha sido un aumento y diversificación en la transmisión de voz, video y fundamentalmente de datos. Para conseguir alcanzar las tasas de transmisión requeridas, los nuevos estándares de comunicaciones requieren un mayor ancho de banda y tienen un mayor factor de pico, lo cual influye en el bajo rendimiento del amplificador de radiofrecuencia (RFPA). Otro factor que ha influido en el bajo rendimiento es el diseño del amplificador de radiofrecuencia. Tradicionalmente se han utilizado amplificadores lineales por su buen funcionamiento. Sin embargo, debido al elevado factor de pico de las señales transmitidas, el rendimiento de este tipo de amplificadores es bajo. El bajo rendimiento del sistema conlleva desventajas adicionales como el aumento del coste y del tamaño del sistema de refrigeración, como en el caso de una estación base, o como la reducción del tiempo de uso y un mayor calentamiento del equipo para sistemas portátiles alimentados con baterías. Debido a estos factores, se han desarrollado durante las últimas décadas varias soluciones para aumentar el rendimiento del RFPA como la técnica de Outphasing, combinadores de potencia o la técnica de Doherty. Estas soluciones mejoran las prestaciones del RFPA y en algún caso han sido ampliamente utilizados comercialmente como la técnica de Doherty, que alcanza rendimientos hasta del 50% para el sistema completo para anchos de banda de hasta 20MHz. Pese a las mejoras obtenidas con estas soluciones, los mayores rendimientos del sistema se obtienen para soluciones basadas en la modulación de la tensión de alimentación del amplificador de potencia como “Envelope Tracking” o “EER”. La técnica de seguimiento de envolvente o “Envelope Tracking” está basada en la modulación de la tensión de alimentación de un amplificador lineal de potencia para obtener una mejora en el rendimiento en el sistema comparado a una solución con una tensión de alimentación constante. Para la implementación de esta técnica se necesita una etapa adicional, el amplificador de envolvente, que añade complejidad al amplificador de radiofrecuencia. En un amplificador diseñado con esta técnica, se aumentan las pérdidas debido a la etapa adicional que supone el amplificador de envolvente pero a su vez disminuyen las pérdidas en el amplificador de potencia. Si el diseño se optimiza adecuadamente, puede conseguirse un aumento global en el rendimiento del sistema superior al conseguido con las técnicas mencionadas anteriormente. Esta técnica presenta ventajas en el diseño del amplificador de envolvente, ya que el ancho de banda requerido puede ser menor que el ancho de banda de la señal de envolvente si se optimiza adecuadamente el diseño. Adicionalmente, debido a que la sincronización entre la señal de envolvente y de fase no tiene que ser perfecta, el proceso de integración conlleva ciertas ventajas respecto a otras técnicas como EER. La técnica de eliminación y restauración de envolvente, llamada EER o técnica de Kahn está basada en modulación simultánea de la envolvente y la fase de la señal usando un amplificador de potencia conmutado, no lineal y que permite obtener un elevado rendimiento. 
Esta solución fue propuesta en el año 1952, pero no ha sido implementada con éxito durante muchos años debido a los exigentes requerimientos en cuanto a la sincronización entre fase y envolvente, a las técnicas de control y de corrección de los errores y no linealidades de cada una de las etapas así como de los equipos para poder implementar estas técnicas, que tienen unos requerimientos exigentes en capacidad de cálculo y procesamiento. Dentro del diseño de un RFPA, el amplificador de envolvente tiene una gran importancia debido a su influencia en el rendimiento y ancho de banda del sistema completo. Adicionalmente, la linealidad y la calidad de la señal de transmitida deben ser elevados para poder cumplir con los diferentes estándares de telecomunicaciones. Esta tesis se centra en el amplificador de envolvente y el objetivo principal es el desarrollo de soluciones que permitan el aumento del rendimiento total del sistema a la vez que satisfagan los requerimientos de ancho de banda, calidad de la señal transmitida y de linealidad. Debido al elevado rendimiento que potencialmente puede alcanzarse con la técnica de EER, esta técnica ha sido objeto de análisis y en el estado del arte pueden encontrarse numerosas referencias que analizan el diseño y proponen diversas implementaciones. En una clasificación de alto nivel, podemos agrupar las soluciones propuestas del amplificador de envolvente según estén compuestas de una o múltiples etapas. Las soluciones para el amplificador de envolvente en una configuración multietapa se basan en la combinación de un convertidor conmutado, de elevado rendimiento con un regulador lineal, de alto ancho de banda, en una combinación serie o paralelo. Estas soluciones, debido a la combinación de las características de ambas etapas, proporcionan un buen compromiso entre rendimiento y buen funcionamiento del amplificador de RF. Por otro lado, la complejidad del sistema aumenta debido al mayor número de componentes y de señales de control necesarias y el aumento de rendimiento que se consigue con estas soluciones es limitado. Una configuración en una etapa tiene las ventajas de una mayor simplicidad, pero debido al elevado ancho de banda necesario, la frecuencia de conmutación debe aumentarse en gran medida. Esto implicará un bajo rendimiento y un peor funcionamiento del amplificador de envolvente. En el estado del arte pueden encontrarse diversas soluciones para un amplificador de envolvente en una etapa, como aumentar la frecuencia de conmutación y realizar la implementación en un circuito integrado, que tendrá mejor funcionamiento a altas frecuencias o utilizar técnicas topológicas y/o filtros de orden elevado, que permiten una reducción de la frecuencia de conmutación. En esta tesis se propone de manera original el uso de la técnica de cancelación de rizado, aplicado al convertidor reductor síncrono, para reducir la frecuencia de conmutación comparado con diseño equivalente del convertidor reductor convencional. Adicionalmente se han desarrollado dos variantes topológicas basadas en esta solución para aumentar la robustez y las prestaciones de la misma. Otro punto de interés en el diseño de un RFPA es la dificultad de poder estimar la influencia de los parámetros de diseño del amplificador de envolvente en el amplificador final integrado. 
En esta tesis se ha abordado este problema y se ha desarrollado una herramienta de diseño que permite obtener las principales figuras de mérito del amplificador integrado para la técnica de EER a partir del diseño del amplificador de envolvente. Mediante el uso de esta herramienta pueden validarse el efecto del ancho de banda, el rizado de tensión de salida o las no linealidades del diseño del amplificador de envolvente para varias modulaciones digitales. Las principales contribuciones originales de esta tesis son las siguientes: La aplicación de la técnica de cancelación de rizado a un convertidor reductor síncrono para un amplificador de envolvente de alto rendimiento para un RFPA linealizado mediante la técnica de EER. Una reducción del 66% en la frecuencia de conmutación, comparado con el reductor convencional equivalente. Esta reducción se ha validado experimentalmente obteniéndose una mejora en el rendimiento de entre el 12.4% y el 16% para las especificaciones de este trabajo. La topología y el diseño del convertidor reductor con dos redes de cancelación de rizado en cascada para mejorar el funcionamiento y robustez de la solución con una red de cancelación. La combinación de un convertidor redactor multifase con la técnica de cancelación de rizado para obtener una topología que proporciona una reducción del cociente entre frecuencia de conmutación y ancho de banda de la señal. El proceso de optimización del control del amplificador de envolvente en lazo cerrado para mejorar el funcionamiento respecto a la solución en lazo abierto del convertidor reductor con red de cancelación de rizado. Una herramienta de simulación para optimizar el proceso de diseño del amplificador de envolvente mediante la estimación de las figuras de mérito del RFPA, implementado mediante EER, basada en el diseño del amplificador de envolvente. La integración y caracterización del amplificador de envolvente basado en un convertidor reductor con red de cancelación de rizado en el transmisor de radiofrecuencia completo consiguiendo un elevado rendimiento, entre 57% y 70.6% para potencias de salida de 14.4W y 40.7W respectivamente. Esta tesis se divide en seis capítulos. El primer capítulo aborda la introducción enfocada en la aplicación, los amplificadores de potencia de radiofrecuencia, así como los principales problemas, retos y soluciones existentes. En el capítulo dos se desarrolla el estado del arte de amplificadores de potencia de RF, describiéndose las principales técnicas de diseño, las causas de no linealidad y las técnicas de optimización. El capítulo tres está centrado en las soluciones propuestas para el amplificador de envolvente. El modo de control se ha abordado en este capítulo y se ha presentado una optimización del diseño en lazo cerrado para el convertidor reductor convencional y para el convertidor reductor con red de cancelación de rizado. El capítulo cuatro se centra en el proceso de diseño del amplificador de envolvente. Se ha desarrollado una herramienta de diseño para evaluar la influencia del amplificador de envolvente en las figuras de mérito del RFPA. En el capítulo cinco se presenta el proceso de integración realizado y las pruebas realizadas para las diversas modulaciones, así como la completa caracterización y análisis del amplificador de RF. El capítulo seis describe las principales conclusiones de la tesis y las líneas futuras. 
ABSTRACT The trend in the telecommunications sector during the last years follow a high increase in the transmission rate of voice, video and mainly in data. To achieve the required levels of data rates, the new modulation standards demand higher bandwidths and have a higher peak to average power ratio (PAPR). These specifications have a direct impact in the low efficiency of the RFPA. An additional factor for the low efficiency of the RFPA is in the power amplifier design. Traditionally, linear classes have been used for the implementation of the power amplifier as they comply with the technical requirements. However, they have a low efficiency, especially in the operating range of signals with a high PAPR. The low efficiency of the transmitter has additional disadvantages as an increase in the cost and size as the cooling system needs to be increased for a base station and a temperature increase and a lower use time for portable devices. Several solutions have been proposed in the state of the art to improve the efficiency of the transmitter as Outphasing, power combiners or Doherty technique. However, the highest potential of efficiency improvement can be obtained using a modulated power supply for the power amplifier, as in the Envelope Tracking and EER techniques. The Envelope Tracking technique is based on the modulation of the power supply of a linear power amplifier to improve the overall efficiency compared to a fixed voltage supply. In the implementation of this technique an additional stage is needed, the envelope amplifier, that will increase the complexity of the RFPA. However, the efficiency of the linear power amplifier will increase and, if designed properly, the RFPA efficiency will be improved. The advantages of this technique are that the envelope amplifier design does not require such a high bandwidth as the envelope signal and that in the integration process a perfect synchronization between envelope and phase is not required. The Envelope Elimination and Restoration (EER) technique, known also as Kahn’s technique, is based on the simultaneous modulation of envelope and phase using a high efficiency switched power amplifier. This solution has the highest potential in terms of the efficiency improvement but also has the most challenging specifications. This solution, proposed in 1952, has not been successfully implemented until the last two decades due to the high demanding requirements for each of the stages as well as for the highly demanding processing and computation capabilities needed. At the system level, a very precise synchronization is required between the envelope and phase paths to avoid a linearity decrease of the system. Several techniques are used to compensate the non-linear effects in amplitude and phase and to improve the rejection of the out of band noise as predistortion, feedback and feed-forward. In order to obtain a high bandwidth and efficient RFPA using either ET or EER, the envelope amplifier stage will have a critical importance. The requirements for this stage are very demanding in terms of bandwidth, linearity and quality of the transmitted signal. Additionally the efficiency should be as high as possible, as the envelope amplifier has a direct impact in the efficiency of the overall system. This thesis is focused on the envelope amplifier stage and the main objective will be the development of high efficiency envelope amplifier solutions that comply with the requirements of the RFPA application. 
The design and optimization of an envelope amplifier for a RFPA application is a highly referenced research topic, and many solutions that address the envelope amplifier and the RFPA design and optimization can be found in the state of the art. From a high level classification, multiple and single stage envelope amplifiers can be identified. Envelope amplifiers for EER based on multiple stage architecture combine a linear assisted stage and a switched-mode stage, either in a series or parallel configuration, to achieve a very high performance RFPA. However, the complexity of the system increases and the efficiency improvement is limited. A single-stage envelope amplifier has the advantage of a lower complexity but in order to achieve the required bandwidth the switching frequency has to be highly increased, and therefore the performance and the efficiency are degraded. Several techniques are used to overcome this limitation, as the design of integrated circuits that are capable of switching at very high rates or the use of topological solutions, high order filters or a combination of both to reduce the switching frequency requirements. In this thesis it is originally proposed the use of the ripple cancellation technique, applied to a synchronous buck converter, to reduce the switching frequency requirements compared to a conventional buck converter for an envelope amplifier application. Three original proposals for the envelope amplifier stage, based on the ripple cancellation technique, are presented and one of the solutions has been experimentally validated and integrated in the complete amplifier, showing a high total efficiency increase compared to other solutions of the state of the art. Additionally, the proposed envelope amplifier has been integrated in the complete RFPA achieving a high total efficiency. The design process optimization has also been analyzed in this thesis. Due to the different figures of merit between the envelope amplifier and the complete RFPA it is very difficult to obtain an optimized design for the envelope amplifier. To reduce the design uncertainties, a design tool has been developed to provide an estimation of the RFPA figures of merit based on the design of the envelope amplifier. The main contributions of this thesis are: The application of the ripple cancellation technique to a synchronous buck converter for an envelope amplifier application to achieve a high efficiency and high bandwidth EER RFPA. A 66% reduction of the switching frequency, validated experimentally, compared to the equivalent conventional buck converter. This reduction has been reflected in an improvement in the efficiency between 12.4% and 16%, validated for the specifications of this work. The synchronous buck converter with two cascaded ripple cancellation networks (RCNs) topology and design to improve the robustness and the performance of the envelope amplifier. The combination of a phase-shifted multi-phase buck converter with the ripple cancellation technique to improve the envelope amplifier switching frequency to signal bandwidth ratio. The optimization of the control loop of an envelope amplifier to improve the performance of the open loop design for the conventional and ripple cancellation buck converter. A simulation tool to optimize the envelope amplifier design process. Using the envelope amplifier design as the input data, the main figures of merit of the complete RFPA for an EER application are obtained for several digital modulations. 
The successful integration of the envelope amplifier based on an RCN buck converter in the complete RFPA, obtaining a high-efficiency integrated amplifier. The efficiency obtained is between 57% and 70.6% for output powers of 14.4 W and 40.7 W, respectively. The main figures of merit for the different modulations have been characterized and analyzed. This thesis is organized in six chapters. Chapter 1 provides an introduction to the RFPA application, where the main problems, challenges and solutions are described. Chapter 2 presents the technical background for radiofrequency (RF) power amplifiers. The main techniques to implement an RFPA are described and analyzed. The state-of-the-art techniques to improve the performance of the RFPA are identified, as well as the main sources of non-linearities for the RFPA. Chapter 3 is focused on the envelope amplifier stage. The three solutions proposed originally in this thesis for the envelope amplifier are presented and analyzed. The control stage design is analyzed and an optimization is proposed both for the conventional and the RCN buck converter. Chapter 4 is focused on the design and optimization process of the envelope amplifier, and a design tool to evaluate the impact of the envelope amplifier design on the RFPA is presented. Chapter 5 shows the integration process of the complete amplifier. Chapter 6 addresses the main conclusions of the thesis and future work.
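
The envelope/phase split on which both envelope tracking and EER rely can be sketched in a few lines; this is a generic, illustrative decomposition (the test signal and sample rate are made up), not the thesis' implementation:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1e6                                     # sample rate in Hz, illustrative
    t = np.arange(0, 1e-3, 1 / fs)
    rf = (1 + 0.5 * np.cos(2 * np.pi * 5e3 * t)) * np.cos(2 * np.pi * 50e3 * t)

    analytic = hilbert(rf)                       # analytic signal
    envelope = np.abs(analytic)                  # A(t): what the envelope amplifier must reproduce
    phase_path = np.cos(np.angle(analytic))      # constant-amplitude drive for the switched PA

    # Multiplying the two paths restores the original signal; EER is far more
    # sensitive than envelope tracking to any misalignment between them.
    assert np.allclose(envelope * phase_path, rf)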

Relevance: 30.00%

Abstract:

The uncertainty associated with the forecast of photovoltaic generation is a major drawback for the widespread introduction of this technology into electricity grids. This uncertainty is a challenge in the design and operation of electrical systems that include photovoltaic generation. Demand-Side Management (DSM) techniques are widely used to modify energy consumption. If local photovoltaic generation is available, DSM techniques can use the generation forecast to schedule local consumption. On the other hand, local storage systems can be used to decouple electricity availability from instantaneous generation; therefore, the effects of forecast error on the electrical system are reduced. The effects of the uncertainty associated with the forecast of photovoltaic generation in a residential electrical system equipped with DSM techniques and a local storage system are analyzed in this paper. The study has been performed in a solar house that is able to displace a residential user's load pattern, manage local storage and estimate forecasts of electricity generation. A series of real experiments and simulations have been carried out on the house. The results of these experiments show that the use of DSM and local storage reduces the uncertainty in the energy exchanged with the grid to 2%. If the photovoltaic system operated as a pure electricity generator, feeding all generated electricity into the grid, the uncertainty would rise to around 40%.
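
A toy calculation (illustrative numbers only, not the paper's data) of why local storage plus load management damps the effect of a PV forecast error on the energy actually exchanged with the grid:

    def grid_import(load_kwh, pv_kwh, battery_kwh):
        """Energy drawn from the grid over one interval, using the battery first."""
        net = load_kwh - pv_kwh                 # positive = deficit, negative = surplus
        if net <= 0:
            return 0.0                          # surplus charges the battery or is curtailed
        return net - min(net, battery_kwh)      # deficit covered by the battery first

    forecast_pv, actual_pv, load, battery = 3.0, 2.0, 4.0, 1.5   # kWh
    planned = grid_import(load, forecast_pv, battery)            # 0.0 kWh
    actual = grid_import(load, actual_pv, battery)               # 0.5 kWh
    print(planned, actual)   # a 1 kWh forecast miss appears as only 0.5 kWh at the grid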

Relevance: 30.00%

Abstract:

This document is the result of a web development process to create a tool that will allow Cracow University of Technology to consult, create and manage timetables. The technologies chosen for this purpose are Apache Tomcat Server, MySQL Community Server, the JDBC driver, Java Servlets and JSPs for the server side. The client side relies on JavaScript, jQuery, AJAX and CSS technologies to provide the dynamic behaviour. The document justifies the choice of these technologies and explains some development tools that help in the integration and development of all these elements: specifically, NetBeans IDE and MySQL Workbench have been used as helpful tools. After explaining all the elements involved in the development of the web application, the architecture and the code developed are explained through UML diagrams. Some implementation details related to security are also explained in more depth through sequence diagrams. As the source code of the application is provided, an installation manual has been developed to run the project. In addition, as the platform is intended to be a beta that will grow, some unimplemented ideas for future development are also exposed. Finally, some annexes with important files and scripts related to the initiation of the platform are attached. This project started from an existing tool that needed to be expanded. The main purpose of the project throughout its development has been to set the roots for a whole new platform that will replace the existing one. For this goal, a deep inspection of the existing web technologies was needed: a web server and an SQL database had to be chosen. Although there were many alternatives, Java technology was finally selected for the server because of the big community behind it, the ease of modelling the language through UML diagrams and the fact that it is free-license software. Apache Tomcat is the open-source server that can use Java Servlet and JSP technology. As for the SQL database, MySQL Community Server is the most popular open-source SQL server, with a big community behind it and quite a lot of tools to manage the server. JDBC is the driver needed to connect Java and MySQL. Once we chose the technologies that would be part of the platform, the development process started. After a detailed explanation of the development environment installation, we used UML use case diagrams to set the main tasks of the platform; UML class diagrams served to establish the existing relations between the classes generated; the architecture of the platform was represented through UML deployment diagrams; and Enhanced Entity-Relationship (EER) models were used to define the tables of the database and their relationships. Apart from the previous diagrams, some implementation issues were explained for a better understanding of the developed code - UML sequence diagrams helped to explain this. Once the whole platform was properly defined and developed, the performance of the application has been shown: it has been proved that, with the current state of the code, the platform covers the use cases that were set as the main target. Nevertheless, some requisites needed for the proper working of the platform have been specified. As the project is aimed to grow, some ideas that could not be added to this beta have been explained in order not to be missed for future development.
Finally, some annexes containing important configuration issues for the platform have been added after proper explanation, as well as an installation guide that will let a new developer get the project ready. In addition to this document some other files related to the project are provided: - Javadoc. The Javadoc containing the information of every Java class created is necessary for a better understanding of the source code. - database_model.mwb. This file contains the model of the database for MySQL Workbench. This model allows, among other things, generate the MySQL script for the creation of the tables. - ScheduleManager.war. The WAR file that will allow loading the developed application into Tomcat Server without using NetBeans. - ScheduleManager.zip. The source code exported from NetBeans project containing all Java packages, JSPs, Javascript files and CSS files that are part of the platform. - config.properties. The configuration file to properly get the names and credentials to use the database, also explained in Annex II. Example of config.properties file. - db_init_script.sql. The SQL query to initiate the database explained in Annex III. SQL statements for MySQL initialization. RESUMEN. Este proyecto tiene como punto de partida la necesidad de evolución de una herramienta web existente. El propósito principal del proyecto durante su desarrollo se ha centrado en establecer las bases de una completamente nueva plataforma que reemplazará a la existente. Para lograr esto, ha sido necesario realizar una profunda inspección en las tecnologías web existentes: un servidor web y una base de datos SQL debían ser elegidos. Aunque existen muchas alternativas, la tecnología Java ha resultado ser elegida debido a la gran comunidad de desarrolladores que tiene detrás, además de la facilidad que proporciona este lenguaje a la hora de modelarlo usando diagramas UML. Tampoco hay que olvidar que es una tecnología de uso libre de licencia. Apache Tomcat es el servidor de código libre que permite emplear Java Servlets y JSPs para hacer uso de la tecnología de Java. Respecto a la base de datos SQL, el servidor más popular de código libre es MySQL, y cuenta también con una gran comunidad detrás y buenas herramientas de modelado, creación y gestión de la bases de datos. JDBC es el driver que va a permitir comunicar las aplicaciones Java con MySQL. Tras elegir las tecnologías que formarían parte de esta nueva plataforma, el proceso de desarrollo tiene comienzo. Tras una extensa explicación de la instalación del entorno de desarrollo, se han usado diagramas de caso de UML para establecer cuáles son los objetivos principales de la plataforma; los diagramas de clases nos permiten realizar una organización del código java desarrollado de modo que sean fácilmente entendibles las relaciones entre las diferentes clases. La arquitectura de la plataforma queda definida a través de diagramas de despliegue. Por último, diagramas EER van a definir las relaciones entre las tablas creadas en la base de datos. Aparte de estos diagramas, algunos detalles de implementación se van a justificar para tener una mejor comprensión del código desarrollado. Diagramas de secuencia ayudarán en estas explicaciones. Una vez que toda la plataforma haya quedad debidamente definida y desarrollada, se va a realizar una demostración de la misma: se demostrará cómo los objetivos generales han sido alcanzados con el desarrollo actual del proyecto. 
No obstante, algunos requisitos han sido aclarados para que la plataforma trabaje adecuadamente. Como la intención del proyecto es crecer (no es una versión final), algunas ideas que no se han podido llevar a cabo han quedado descritas de manera que no se pierdan. Por último, algunos anexos que contienen información importante acerca de la plataforma se han añadido tras la correspondiente explicación de su utilidad, así como una guía de instalación que va a permitir a un nuevo desarrollador tener el proyecto preparado. Junto a este documento se adjuntan ficheros que contienen el proyecto desarrollado. Estos ficheros son: - Documentación Javadoc. Contiene la información de las clases Java que han sido creadas. - database_model.mwb. Este fichero contiene el modelo de la base de datos para MySQL Workbench. Esto permite, entre otras cosas, generar el script de iniciación de la base de datos para la creación de las tablas. - ScheduleManager.war. El fichero WAR que permite desplegar la plataforma en un servidor Apache Tomcat. - ScheduleManager.zip. El código fuente exportado directamente del proyecto de NetBeans. Contiene todos los paquetes de Java generados, ficheros JSPs, Javascript y CSS que forman parte de la plataforma. - config.properties. Ejemplo del fichero de configuración que permite obtener los nombres y credenciales de acceso a la base de datos. - db_init_script.sql. Las consultas SQL necesarias para la creación de la base de datos.
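
As an illustration of the configuration file listed above, a typical config.properties for a Tomcat + MySQL + JDBC setup could look like the following (the key names and values here are hypothetical; the actual ones are the ones defined in Annex II of the project):

    db.url=jdbc:mysql://localhost:3306/schedulemanager
    db.user=schedule_app
    db.password=changeme
    db.driver=com.mysql.cj.jdbc.Driver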

Relevance: 30.00%

Abstract:

La cuestión principal abordada en esta tesis doctoral es la mejora de los sistemas biométricos de reconocimiento de personas a partir de la voz, proponiendo el uso de una nueva parametrización, que hemos denominado parametrización biométrica extendida dependiente de género (GDEBP en sus siglas en inglés). No se propone una ruptura completa respecto a los parámetros clásicos sino una nueva forma de utilizarlos y complementarlos. En concreto, proponemos el uso de parámetros diferentes dependiendo del género del locutor, ya que como es bien sabido, la voz masculina y femenina presentan características diferentes que deberán modelarse, por tanto, de diferente manera. Además complementamos los parámetros clásicos utilizados (MFFC extraídos de la señal de voz), con un nuevo conjunto de parámetros extraídos a partir de la deconstrucción de la señal de voz en sus componentes de fuente glótica (más relacionada con el proceso y órganos de fonación y por tanto con características físicas del locutor) y de tracto vocal (más relacionada con la articulación acústica y por tanto con el mensaje emitido). Para verificar la validez de esta propuesta se plantean diversos escenarios, utilizando diferentes bases de datos, para validar que la GDEBP permite generar una descripción más precisa de los locutores que los parámetros MFCC clásicos independientes del género. En concreto se plantean diferentes escenarios de identificación sobre texto restringido y texto independiente utilizando las bases de datos de HESPERIA y ALBAYZIN. El trabajo también se completa con la participación en dos competiciones internacionales de reconocimiento de locutor, NIST SRE (2010 y 2012) y MOBIO 2013. En el primer caso debido a la naturaleza de las bases de datos utilizadas se obtuvieron resultados cercanos al estado del arte, mientras que en el segundo de los casos el sistema presentado obtuvo la mejor tasa de reconocimiento para locutores femeninos. A pesar de que el objetivo principal de esta tesis no es el estudio de sistemas de clasificación, sí ha sido necesario analizar el rendimiento de diferentes sistemas de clasificación, para ver el rendimiento de la parametrización propuesta. En concreto, se ha abordado el uso de sistemas de reconocimiento basados en el paradigma GMM-UBM, supervectores e i-vectors. Los resultados que se presentan confirman que la utilización de características que permitan describir los locutores de manera más precisa es en cierto modo más importante que la elección del sistema de clasificación utilizado por el sistema. En este sentido la parametrización propuesta supone un paso adelante en la mejora de los sistemas de reconocimiento biométrico de personas por la voz, ya que incluso con sistemas de clasificación relativamente simples se consiguen tasas de reconocimiento realmente competitivas. ABSTRACT The main question addressed in this thesis is the improvement of automatic speaker recognition systems, by the introduction of a new front-end module that we have called Gender Dependent Extended Biometric Parameterisation (GDEBP). This front-end do not constitute a complete break with respect to classical parameterisation techniques used in speaker recognition but a new way to obtain these parameters while introducing some complementary ones. 
Specifically, we propose a gender-dependent parameterisation since, as is well known, male and female voices have different characteristics, and therefore the use of different parameters to model these distinguishing characteristics should provide a better characterisation of speakers. Additionally, we propose the introduction of a new set of biometric parameters extracted from the components which result from the deconstruction of the voice into its glottal source estimate (closely related to the phonation process and the organs involved, and therefore to the physical characteristics of the speaker) and its vocal tract estimate (closely related to acoustic articulation and therefore to the spoken message). These biometric parameters complement the classical MFCC extracted from the power spectral density of speech as a whole. In order to check the validity of this proposal we establish different practical scenarios, using different databases, so we can conclude that GDEBP generates a more accurate description of speakers than classical approaches based on gender-independent MFCC. Specifically, we propose scenarios based on text-constrained and text-independent tests using the HESPERIA and ALBAYZIN databases. This work is also completed with the participation in two international speaker recognition evaluations, NIST SRE (2010 and 2012) and MOBIO 2013, with diverse results. In the first case, due to the nature of the NIST databases, we obtain results close to the state of the art while confirming our hypothesis, whereas in the MOBIO SRE we obtain the best simple-system performance for female speakers. Although the study of classification systems is beyond the scope of this thesis, we found it necessary to analyse the performance of different classification systems in order to verify their effect on the proposed parameterisation. In particular, we have addressed the use of speaker recognition systems based on the GMM-UBM paradigm, supervectors and i-vectors. The presented results confirm that the selection of a set of parameters that allows for a more accurate description of the speakers is as important as the selection of the classification method used by the biometric system. In this sense, the proposed parameterisation constitutes a step forward in improving speaker recognition systems, since even with relatively simple classification systems really competitive recognition rates are achieved.
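
A minimal sketch of the gender-dependent, extended front end described above; the extractors are placeholders (a real system would compute actual MFCC plus glottal-source and vocal-tract parameters), and the scoring interface is hypothetical:

    import numpy as np

    def mfcc(frame):                 # placeholder for a classical MFCC front end
        return np.zeros(13)

    def glottal_features(frame):     # placeholder for glottal-source parameters
        return np.zeros(4)

    def vocal_tract_features(frame): # placeholder for vocal-tract parameters
        return np.zeros(4)

    def gdebp_vector(frame):
        """Classical MFCC extended with source/tract features (the GDEBP idea)."""
        return np.concatenate([mfcc(frame), glottal_features(frame), vocal_tract_features(frame)])

    def score(frame, claimed_gender, models):
        """Score against the model trained for the claimed speaker's gender
        (models maps 'male'/'female' to e.g. a trained GMM-UBM scorer)."""
        return models[claimed_gender].score(gdebp_vector(frame))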

Relevance: 30.00%

Abstract:

La investigación para el conocimiento del cerebro es una ciencia joven, su inicio se remonta a Santiago Ramón y Cajal en 1888. Desde esta fecha a nuestro tiempo la neurociencia ha avanzado mucho en el desarrollo de técnicas que permiten su estudio. Desde la neurociencia cognitiva hoy se explican muchos modelos que nos permiten acercar a nuestro entendimiento a capacidades cognitivas complejas. Aun así hablamos de una ciencia casi en pañales que tiene un lago recorrido por delante. Una de las claves del éxito en los estudios de la función cerebral ha sido convertirse en una disciplina que combina conocimientos de diversas áreas: de la física, de las matemáticas, de la estadística y de la psicología. Esta es la razón por la que a lo largo de este trabajo se entremezclan conceptos de diferentes campos con el objetivo de avanzar en el conocimiento de un tema tan complejo como el que nos ocupa: el entendimiento de la mente humana. Concretamente, esta tesis ha estado dirigida a la integración multimodal de la magnetoencefalografía (MEG) y la resonancia magnética ponderada en difusión (dMRI). Estas técnicas son sensibles, respectivamente, a los campos magnéticos emitidos por las corrientes neuronales, y a la microestructura de la materia blanca cerebral. A lo largo de este trabajo hemos visto que la combinación de estas técnicas permiten descubrir sinergias estructurofuncionales en el procesamiento de la información en el cerebro sano y en el curso de patologías neurológicas. Más específicamente en este trabajo se ha estudiado la relación entre la conectividad funcional y estructural y en cómo fusionarlas. Para ello, se ha cuantificado la conectividad funcional mediante el estudio de la sincronización de fase o la correlación de amplitudes entre series temporales, de esta forma se ha conseguido un índice que mide la similitud entre grupos neuronales o regiones cerebrales. Adicionalmente, la cuantificación de la conectividad estructural a partir de imágenes de resonancia magnética ponderadas en difusión, ha permitido hallar índices de la integridad de materia blanca o de la fuerza de las conexiones estructurales entre regiones. Estas medidas fueron combinadas en los capítulos 3, 4 y 5 de este trabajo siguiendo tres aproximaciones que iban desde el nivel más bajo al más alto de integración. Finalmente se utilizó la información fusionada de MEG y dMRI para la caracterización de grupos de sujetos con deterioro cognitivo leve, la detección de esta patología resulta relevante en la identificación precoz de la enfermedad de Alzheimer. Esta tesis está dividida en seis capítulos. En el capítulos 1 se establece un contexto para la introducción de la connectómica dentro de los campos de la neuroimagen y la neurociencia. Posteriormente en este capítulo se describen los objetivos de la tesis, y los objetivos específicos de cada una de las publicaciones científicas que resultaron de este trabajo. En el capítulo 2 se describen los métodos para cada técnica que fue empleada: conectividad estructural, conectividad funcional en resting state, redes cerebrales complejas y teoría de grafos y finalmente se describe la condición de deterioro cognitivo leve y el estado actual en la búsqueda de nuevos biomarcadores diagnósticos. En los capítulos 3, 4 y 5 se han incluido los artículos científicos que fueron producidos a lo largo de esta tesis. Estos han sido incluidos en el formato de la revista en que fueron publicados, estando divididos en introducción, materiales y métodos, resultados y discusión. 
Todos los métodos que fueron empleados en los artículos están descritos en el capítulo 2 de la tesis. Finalmente, en el capítulo 6 se concluyen los resultados generales de la tesis y se discuten de forma específica los resultados de cada artículo. ABSTRACT In this thesis I apply concepts from mathematics, physics and statistics to the neurosciences. This field benefits from the collaborative work of multidisciplinary teams where physicians, psychologists, engineers and other specialists work towards a common goal: the understanding of the brain. Research in this field is still in its early years, its birth being attributed to the neuronal theory of Santiago Ramón y Cajal in 1888. In more than one hundred years only a very small fraction of the brain's functioning has been uncovered, and still much more needs to be explored. Isolated techniques aim at unraveling the system that supports our cognition; nevertheless, in order to provide solid evidence in such a field, multimodal techniques have arisen, and with them we will be able to improve current knowledge about human cognition. Here we focus on the multimodal integration of magnetoencephalography (MEG) and diffusion weighted magnetic resonance imaging. These techniques are sensitive to the magnetic fields emitted by the neuronal currents and to the white matter microstructure, respectively. The combination of such techniques could bring up evidence about structural-functional synergies in brain information processing and about which part of this synergy fails in specific neurological pathologies. In particular, we are interested in the relationship between functional and structural connectivity, and how to integrate this information. We quantify the functional connectivity by studying the phase synchronization or the amplitude correlation between time series obtained by MEG, and so we get an index indicating similarity between neuronal entities, i.e. brain regions. In addition, we quantify structural connectivity by performing diffusion tensor estimation from the diffusion weighted images, thus obtaining an indicator of the integrity of the white matter or, if preferred, the strength of the structural connections between regions. These quantifications are then combined following three different approaches, from the lowest to the highest level of integration, in chapters 3, 4 and 5. We finally apply the fused information to the characterization or prediction of mild cognitive impairment, a clinical entity which is considered an early step in the pathological continuum of dementia. The dissertation is divided into six chapters. In chapter 1 I introduce connectomics within the fields of neuroimaging and neuroscience. Later in this chapter we describe the objectives of this thesis, and the specific objectives of each of the scientific publications that were produced as a result of this work. In chapter 2 I describe the methods for each of the techniques that were employed, namely structural connectivity, resting state functional connectivity, complex brain networks and graph theory, and finally, I describe the clinical condition of mild cognitive impairment and the current state of the art in the search for early biomarkers. In chapters 3, 4 and 5 I have included the scientific publications that were generated along this work. They have been included in their original format and they contain introduction, materials and methods, results and discussion. All methods that were employed in these papers have been described in chapter 2.
Finally, in chapter 6 I summarize all the results from this thesis, both locally for each of the scientific publications and globally for the whole work.
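
One of the phase-synchronization indices mentioned above, the phase-locking value (PLV), can be computed directly from two band-limited time series; this is a generic textbook implementation, not the thesis code:

    import numpy as np
    from scipy.signal import hilbert

    def plv(x, y):
        """Phase-locking value between two equally sampled, band-limited signals."""
        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * dphi)))

    t = np.linspace(0, 1, 1000)
    a = np.sin(2 * np.pi * 10 * t)
    b = np.sin(2 * np.pi * 10 * t + 0.3) + 0.1 * np.random.randn(t.size)
    print(plv(a, b))   # close to 1 for strongly phase-locked signals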

Relevance: 30.00%

Abstract:

Upper limb function impairment is one of the most common sequelae of central nervous system injury, especially in stroke patients and when spinal cord injury produces tetraplegia. Conventional assessment methods cannot provide objective evaluation of patient performance and the effectiveness of therapies. The most common assessment tools are based on rating scales, which are inefficient when measuring small changes and can yield subjective bias. In this study, we designed an inertial sensor-based monitoring system composed of five sensors to measure and analyze the complex movements of the upper limbs, which are common in activities of daily living. We developed a kinematic model with nine degrees of freedom to analyze upper limb and head movements in three dimensions. This system was then validated using a commercial optoelectronic system. These findings suggest that an inertial sensor-based motion tracking system can be used in patients who have upper limb impairment through data integration with a virtual reality-based neurorehabilitation system.
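
As a simple illustration of the kind of quantity such a kinematic model yields, elbow flexion can be estimated from the orientations of the upper-arm and forearm segments; the geometry below is illustrative and not the paper's nine-degree-of-freedom model:

    import numpy as np

    def elbow_flexion_deg(upper_arm_axis, forearm_axis):
        """Angle between the two segment axes, each expressed in the global frame."""
        u = upper_arm_axis / np.linalg.norm(upper_arm_axis)
        f = forearm_axis / np.linalg.norm(forearm_axis)
        return np.degrees(np.arccos(np.clip(np.dot(u, f), -1.0, 1.0)))

    # Upper arm hanging down, forearm raised: roughly 45 degrees of flexion.
    print(elbow_flexion_deg(np.array([0.0, 0.0, -1.0]), np.array([0.0, 0.7, -0.7])))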

Relevance: 30.00%

Abstract:

This report analyzes the basis of hydrogen and power integration strategies, by using water electrolysis processes as a means of flexible energy storage at large scales.
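
A back-of-the-envelope round trip (typical textbook figures, not values from the report) shows the scale of storing surplus power as hydrogen via electrolysis:

    ELECTROLYSER_EFF = 0.65        # fraction of electrical energy stored as H2 (LHV basis), assumed
    H2_LHV_KWH_PER_KG = 33.3       # lower heating value of hydrogen
    RE_ELECTRIFICATION_EFF = 0.50  # fuel cell / turbine efficiency, assumed

    surplus_kwh = 1000.0
    h2_kg = surplus_kwh * ELECTROLYSER_EFF / H2_LHV_KWH_PER_KG
    recovered_kwh = h2_kg * H2_LHV_KWH_PER_KG * RE_ELECTRIFICATION_EFF
    print(round(h2_kg, 1), round(recovered_kwh, 1))   # about 19.5 kg of H2 and 325 kWh back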

Relevance: 30.00%

Abstract:

In this paper, a methodology for the integral energy performance characterization (thermal, daylighting and electrical behavior) of semi-transparent photovoltaic (STPV) modules under real operating conditions is presented. An outdoor testing facility to simultaneously analyze the thermal, luminous and electrical performance of the devices has been designed, constructed and validated. The system, composed of three independent measurement subsystems, has been operated in Madrid with four prototypes of a-Si STPV modules, each corresponding to a specific degree of transparency. The extensive experimental campaign, carried out over a whole year while rotating the modules under test, has validated the reliability of the testing facility under varying environmental conditions. The thermal analyses show that both the solar protection and the insulating properties of the laminated prototypes are lower than those achieved by a reference glazing whose characteristics are in accordance with the Spanish Technical Building Code. The daylighting analysis shows that STPV elements have an important lighting energy saving potential that could be exploited through their integration with strategies focused on reducing illuminance values under sunny conditions. Finally, the electrical tests show that the degree of transparency is not the most decisive factor affecting the conversion efficiency.
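
The electrical figure referred to above is the usual conversion efficiency, i.e. electrical output divided by the solar power incident on the module; the numbers below are illustrative, not measurements from the paper:

    def conversion_efficiency(p_out_w, irradiance_w_m2, area_m2):
        """Electrical output power divided by the incident solar power."""
        return p_out_w / (irradiance_w_m2 * area_m2)

    print(conversion_efficiency(p_out_w=45.0, irradiance_w_m2=800.0, area_m2=1.2))  # ~0.047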

Relevance: 30.00%

Abstract:

The deployment of the Ambient Intelligence (AmI) paradigm requires designing and integrating user-centered smart environments to assist people in their daily life activities. This research paper details the integration and validation of multiple heterogeneous sensors with hybrid reasoners that support decision making in order to monitor personal and environmental data in a smart home in a privacy-preserving way. The results innovate on knowledge-based platforms, distributed sensors, connected objects, accessibility and authentication methods to promote independent living for elderly people. TALISMAN+, the AmI framework deployed, integrates four subsystems in the smart home: (i) a mobile biomedical telemonitoring platform to provide elderly patients with continuous disease management; (ii) an integration middleware that allows context capture from heterogeneous sensors to program the environment's reaction; (iii) a vision system for intelligent monitoring of daily activities in the home; and (iv) an ontology-based integrated reasoning platform to trigger local actions and manage private information in the smart home. The framework was integrated into two real running environments, the UPM Accessible Digital Home and the MetalTIC house, and successfully validated by five experts in home care, elderly people and personal autonomy.
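
A minimal sketch of the sort of local rule a hybrid reasoner can evaluate over heterogeneous sensor readings to trigger actions without exporting raw data; the rules, thresholds and field names are hypothetical, not TALISMAN+'s actual knowledge base:

    def local_rules(readings: dict) -> list[str]:
        actions = []
        if readings.get("presence") and readings.get("lux", 1000) < 100:
            actions.append("turn_on_lights")
        if readings.get("heart_rate", 70) > 120 and not readings.get("activity_detected", True):
            actions.append("notify_caregiver")   # only the alert leaves the home, not the raw data
        return actions

    print(local_rules({"presence": True, "lux": 40,
                       "heart_rate": 130, "activity_detected": False}))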

Relevance: 30.00%

Abstract:

This paper presents a communication interface between supervisory low-cost mobile robots and domestic Wireless Sensor Networks (WSNs) based on the ZigBee protocol, with devices from different manufacturers. The communication interface allows control of, and communication with, other network devices using the same protocol. The robot can receive information from sensor devices (temperature, humidity, luminosity) and send commands to actuator devices (lights, shutters, thermostats) from different manufacturers. The architecture of the system and the interfaces and devices needed to establish the communication are described in the paper.
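
One way to picture such an interface is a pair of normalized message types that vendor-specific parsers map onto; the sketch below is illustrative (the type names and frame handling are assumptions, not the paper's implementation):

    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        node_id: str
        kind: str        # "temperature" | "humidity" | "luminosity"
        value: float

    @dataclass
    class ActuatorCommand:
        node_id: str
        kind: str        # "light" | "shutter" | "thermostat"
        value: float     # e.g. 1.0 = on, 0.0 = off, or a setpoint

    def parse_vendor_frame(vendor: str, payload: bytes) -> SensorReading:
        """One parser per manufacturer profile maps its ZigBee frame to the common type."""
        raise NotImplementedError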