60 results for Computational Intelligence System
at Universidad Politécnica de Madrid
Abstract:
In this work we present an optimized fuzzy visual servoing system for obstacle avoidance using an unmanned aerial vehicle. Cross-entropy theory is used to optimize the gains of our controllers. The optimization process was carried out using the ROS-Gazebo 3D simulation with purpose-built extensions developed for our experiments. Visual servoing is achieved through an image processing front-end that uses the CamShift algorithm to detect and track objects in the scene. Experimental flight trials using a small quadrotor were performed to validate the parameters estimated from simulation. The integration of cross-entropy methods is a straightforward way to estimate optimal gains, achieving excellent results when tested in real flights.
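As an illustration of the gain-tuning idea described above, the sketch below shows a generic cross-entropy optimization loop in Python; the two-gain controller, the cost function flight_cost and all numeric settings are hypothetical placeholders, not values taken from the paper.

    import numpy as np

    def flight_cost(gains):
        """Hypothetical cost: distance to an arbitrary 'good' gain pair."""
        target = np.array([0.8, 0.15])              # placeholder optimum
        return float(np.sum((gains - target) ** 2))

    def cross_entropy_optimize(n_iter=50, pop=100, elite_frac=0.1):
        """Sample gains, keep the elite fraction, refit the sampling distribution."""
        mu, sigma = np.array([0.5, 0.5]), np.array([0.5, 0.5])
        n_elite = int(pop * elite_frac)
        for _ in range(n_iter):
            samples = np.random.normal(mu, sigma, size=(pop, 2))
            costs = np.array([flight_cost(s) for s in samples])
            elite = samples[np.argsort(costs)[:n_elite]]   # keep best samples
            mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
        return mu

    print(cross_entropy_optimize())                 # should approach [0.8, 0.15]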
Abstract:
A Network of Evolutionary Processors, or NEP, is a computational model inspired by the evolutionary model of cells, specifically by their multiplication rules. This inspiration makes the model a syntactic abstraction of the way cells manipulate information. In particular, a NEP defines a theoretical computing machine capable of solving NP-complete problems efficiently in terms of time. In practice, NEPs simulated on conventional computing machines are expected to solve complex real-world problems (which require high scalability) at the cost of high spatial complexity. In the NEP model, cells are represented by words that encode their DNA sequences. Informally, at any moment of the system's computation, its evolutionary state is described as a collection of words, where each word represents a cell. These fixed moments of evolution are called configurations. As in the biological model, the words (cells) mutate and divide according to simple bio-operations, but only the fit words (much as happens in natural selection) are kept for the next configuration. As a computing tool, a NEP defines a parallel and distributed architecture for symbolic processing, in other words, a network of language processors. Since the model was proposed to the scientific community in 2001, multiple variants have been developed, and their properties regarding computational completeness, efficiency and universality have been widely studied and proved. We can therefore consider that the theoretical NEP model has reached maturity. The main motivation of this End of Grade project is to propose a practical approach that makes the leap from the theoretical NEP model to a real implementation that can run on high-performance computing platforms, in order to solve the complex problems demanded by today's society. Until now, the tools developed to simulate the NEP model, while correct and yielding satisfactory results, are usually tied to their execution environment, whether through the use of specific hardware or through problem-specific implementations. In this context, the fundamental purpose of this work is the development of Nepfix, a generic and extensible tool for executing any algorithm of a NEP model (or any of its variants), either locally, as a traditional application, or distributed using cloud services. Nepfix is a software application developed over 7 months that is currently in its second iteration, having left the prototype phase behind. Nepfix has been designed as a modular, self-contained application written in Java 8; that is, it does not require a specific execution environment (any Java virtual machine is a valid container). Nepfix contains two components or modules. The first module corresponds to the execution of a NEP and is therefore the simulator. Its development took into account the current state of the model, that is, the definitions of the most common processors and filters that make up the NEP model family.
Additionally, this component offers flexibility in execution, making it possible to extend the simulator's capabilities without modifying Nepfix, by means of a scripting language. As part of the development of this component, a standard representation of the NEP model based on the JSON format has also been defined, and a way of representing and encoding words, necessary for communication between servers, is proposed. An important characteristic of this component is that it can be considered a standalone application, so the distribution and execution strategies are completely independent. The second module corresponds to the distribution of Nepfix in the cloud. This development is the result of an R&D process with a considerable scientific component. The development of this module is worth highlighting not only for the expected practical results, but also for the research process that must be undertaken with this new perspective on executing natural computing systems. The main characteristic of applications running in the cloud is that they are managed by the platform and are normally encapsulated in a container. In the case of Nepfix, this container is a Spring application that uses the HTTP or AMQP protocols to communicate with the other instances. As added value, Nepfix addresses two different implementation perspectives of the distribution and execution model (developed in two different iterations), which have a very significant impact on the capabilities and restrictions of the simulator. Specifically, the first iteration uses an asynchronous execution model. In this asynchronous perspective, the components of the NEP network (processors and filters) are considered reactive elements that respond to the need to process a word. This implementation is an optimization of a common topology in the NEP model that makes it possible to use cloud tools to achieve transparent scaling (with respect to load balancing between processors), but it produces undesired effects, such as non-determinism in the order of results or the impossibility of efficiently distributing strongly interconnected networks. The second iteration, on the other hand, corresponds to the synchronous execution model. The elements of a NEP network follow a start-compute-synchronize cycle until the problem is solved. This synchronous perspective faithfully represents the theoretical NEP model, but the synchronization process is costly and requires additional infrastructure; specifically, a RabbitMQ message queue server is required. Nevertheless, in this perspective the benefits outweigh the drawbacks for sufficiently large problems, since distribution is immediate (there are no restrictions), although the scaling process is not trivial. In short, the concept of Nepfix as a computational framework can be considered satisfactory: the technology is viable and the first results confirm that the characteristics originally sought have been achieved. Many fronts remain open for future research. This document proposes some approaches to the problems identified, such as error recovery and the dynamic splitting of a NEP into different subdomains.
Other problems beyond the scope of this project remain open for future development, such as the standardization of word representation and optimizations in the execution of the synchronous model. Finally, some preliminary results of this End of Grade project have recently been presented as a scientific paper at the International Work-Conference on Artificial Neural Networks (IWANN) 2015 and published in "Advances in Computational Intelligence", volume 9094 of Springer International Publishing's "Lecture Notes in Computer Science". This confirms that this work, more than an End of Grade project, is only the beginning of work that may have a greater impact on the scientific community. Abstract: A Network of Evolutionary Processors (NEP) is a computational model inspired by the evolution of cell populations, which might model some properties of evolving cell communities at the syntactical level. A NEP defines theoretical computing devices able to solve NP-complete problems in an efficient manner. In this model, cells are represented by words which encode their DNA sequences. Informally, at any moment of time, the evolutionary system is described by a collection of words, where each word represents one cell. Cells belong to species, and their community evolves according to mutations and division, which are defined by operations on words. Only those cells represented by a word in a given set of words, called the genotype space of the species, are accepted as surviving (correct) ones. This feature is analogous to the natural process of evolution. Formally, a NEP is based on an architecture for parallel and distributed processing, in other words, a network of language processors. Since the date when the NEP was proposed, several extensions and variants have appeared, engendering a new set of models named Networks of Bio-inspired Processors (NBP). During this time, several works have proved the computational power of NBP; specifically, their efficiency, universality, and computational completeness have been thoroughly investigated. Therefore, we can say that the NEP model has reached its maturity. The main motivation for this End of Grade project (EOG project in short) is to propose a practical approach that closes the gap between the theoretical NEP model and a practical implementation on high-performance computing platforms, in order to solve some of the high-complexity problems society requires today. Up until now, the tools developed to simulate NEPs, while correct and successful, are usually tightly coupled to the execution environment, using specific software frameworks (Hadoop) or direct hardware usage (GPUs). Within this context, the main purpose of this work is the development of Nepfix, a generic and extensible tool that aims to execute algorithms based on the NEP model and compatible variants, either locally, like a traditional application, or in a distributed cloud environment. Nepfix was developed during a 7-month cycle and is undergoing its second iteration, after the prototype phase was abandoned. Nepfix is designed as a modular, self-contained application written in Java 8; that is, no additional external dependencies are required and it does not rely on a specific execution environment: any JVM is a valid container. Nepfix is made up of two components or modules. The first module corresponds to NEP execution and therefore to simulation.
During development, the current state of the theoretical model was used as a reference, including the most common filters and processors. Additionally, extensibility is provided by the use of Python as a scripting language to run custom logic. Along with the simulation, a definition language for NEPs has been defined based on JSON, as well as a mechanism to represent words and their possible manipulations. The NEP simulator is isolated from distribution and, as mentioned before, different applications can include it as a dependency; the distribution of NEPs is an example of this. The second module corresponds to executing Nepfix in the cloud. The development involved a heavy R&D process, since this front had not been explored by other research groups until now. It is important to point out that the development of this module is not focused on results at this point in time; instead, we focus on the feasibility and discovery of this new perspective for executing natural computing systems, and NEPs specifically. The main property of cloud applications is that they are managed by the platform and encapsulated in a container. For Nepfix, a Spring application becomes the container, and the HTTP or AMQP protocols are used for communication with the rest of the instances. Different execution perspectives were studied; namely, asynchronous and synchronous models were developed for solving different kinds of problems using NEPs. Different limitations and restrictions manifest in both models and are explored in detail in the respective chapters. In conclusion, we can consider that Nepfix as a computational framework is successful: cloud technology is ready for the challenge, and the first results confirm that the properties the Nepfix project pursued were met. Many research lines are left open for future work. In this EOG, implementation guidelines are proposed for some of them, such as error recovery or dynamic NEP splitting. On the other hand, other interesting problems that were not in the scope of this project were identified during development, such as word representation standardization or NEP model optimizations. As confirmation that the results of this work can be useful to the scientific community, a preliminary version of this project was published at the International Work-Conference on Artificial Neural Networks (IWANN) in May 2015. Development has not stopped since that point, and while Nepfix in its current state cannot be considered a final product, the most relevant ideas, possible problems and solutions produced during the seven-month development cycle are worth gathering and presenting, giving meaning to this EOG work.
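The abstracts above mention a JSON-based definition language for NEPs and a synchronous start-compute-synchronize cycle. As a purely illustrative Python sketch (the field names, rule format and filter semantics are assumptions, not the actual Nepfix schema), a minimal NEP description and a few synchronous evolution-communication steps could look like this:

    # Hypothetical NEP description; Nepfix's real JSON schema may differ.
    nep = {
        "alphabet": ["a", "b"],
        "processors": [
            {"id": 0, "words": ["ab"], "rules": [("a", "b")], "out_filter": "b"},
            {"id": 1, "words": [],     "rules": [("b", "a")], "out_filter": "a"},
        ],
        "edges": [(0, 1), (1, 0)],
    }

    def evolve(words, rules):
        """Apply each substitution rule once to every word (one evolutionary step)."""
        return {w.replace(src, dst, 1) for w in words for src, dst in rules}

    def communicate(nep, state):
        """Copy words that pass the sender's output filter to connected processors."""
        new_state = {pid: set(words) for pid, words in state.items()}
        for a, b in nep["edges"]:
            flt = nep["processors"][a]["out_filter"]
            new_state[b] |= {w for w in state[a] if flt in w}
        return new_state

    state = {p["id"]: set(p["words"]) for p in nep["processors"]}
    for _ in range(3):                      # start-compute-synchronize cycles
        state = {p["id"]: evolve(state[p["id"]], p["rules"]) for p in nep["processors"]}
        state = communicate(nep, state)
    print(state)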
Abstract:
In non-strenuous exercise, heart rate (HR) has a linear relationship with maximal oxygen uptake (VO2max) and is commonly used as one of the reference parameters to quantify the capacity of the cardiovascular system. Heart rate can normally replace the percentage of VO2max in basic exercise prescriptions for improving aerobic endurance. To obtain the best results in improving aerobic endurance, individuals must train at a heart rate high enough for the work to be predominantly dynamic, with oxidative phosphorylation as the primary energy source, but not so high that it poses a risk of myocardial infarction for the person training. Minimal-base and optimal-base training programs, with stretching exercises to prevent injuries, are among the most suitable programs for aerobic endurance training, because they maximize the benefits and minimize the risks to the cardiovascular system during training sessions. In this thesis, a functional model for ambient intelligence systems capable of monitoring, evaluating and training physical qualities has been defined, and it has been validated for the case where the physical quality is aerobic endurance. The model has been implemented in an Android application using the "GOW running" smart shirt from the company Weartech. The system has been compared against the Stress Physiology Laboratory (LFE) of the Universidad Politécnica de Madrid (UPM) during stress tests. In addition, a voice guidance system for minimal-base and optimal-base training sessions has been evaluated, and the software development has also been validated. Questionnaires about the users' experiences with the application were used to evaluate its attractiveness. Furthermore, a new methodology and new types of questionnaires were defined to evaluate the usefulness that users attribute to a voice guidance system. The results obtained confirm the validity of the model. A high agreement was obtained between the HR measurements made by the Android application and by the LFE, and the VO2max estimation methods of the two systems proved to be interchangeable. All users who used the voice guidance system for minimal-base and optimal-base aerobic endurance training managed to carry out the training sessions with a 95% success rate, considering error margins of 10% of the theoretical maximum heart rate. The application was attractive to the users, and the voice guidance system was also well accepted. A positive psychological evaluation of the satisfaction of the users who interacted with the system was obtained. In conclusion, it has been demonstrated that it is possible to develop Ambient Intelligence systems on mobile devices for health improvement. The model defined in this thesis is the first theoretical functional reference model for the development of this kind of application. Further studies will be carried out with the aim of extending this model to the other physical qualities, which involve more complex physiological models, such as flexibility.
Abstract: In non-strenuous exercise, the heart rate (HR) shows a linear relationship with the maximum volume of oxygen consumption (VO2max) and serves as an indicator of the performance of the cardiovascular system. The heart rate replaces the %VO2max in exercise program prescription to improve aerobic endurance. In order to achieve an optimal effect during endurance training, the athlete needs to work out at a heart rate high enough to trigger the aerobic metabolism, while avoiding the high heart rates that bring significant risks of myocardial infarction. The minimal and optimal base training programs, followed by stretching exercises to prevent injuries, are adequate programs to maximize benefits and minimize health risks for the cardiovascular system during single-session training. In this thesis, we have defined an ambient intelligence system functional model that monitors, evaluates and trains physical qualities, and it has been validated for aerobic endurance. It is based on the Android system and the "GOW Running" smart shirt. The system has been evaluated during functional assessment stress testing of aerobic endurance in the Stress Physiology Laboratory (SPL) of the Technical University of Madrid (UPM). Furthermore, a voice system, designed to guide the user through minimal and optimal base training programs, has been evaluated, and the software development has also been validated. By means of user experience questionnaires, we have rated the attractiveness of the Android application. Moreover, we have defined a methodology and a new kind of questionnaire in order to assess the user experience with the audio exercise guide system. The results obtained confirm the model. We obtained a high similarity between the HR measurements made by our system and those made by the SPL, and a high correlation between the VO2max estimations of our system and the SPL system. All users who tried the voice guidance system for minimal and optimal base training programs were able to perform 95% of the training session with an error lower than 10% of the theoretical maximum heart rate. The application appeared attractive to the users, and it has also been proven that the voice guidance system was useful. As a result, we obtained a positive evaluation of the users' satisfaction while they interacted with the system. In conclusion, it has been demonstrated that it is possible to develop mobile Ambient Intelligence applications for the improvement of a healthy lifestyle. The AmIRTEM model is the first theoretical reference functional model for the design of this kind of application. Further studies will be carried out in order to extend the AmIRTEM model to other physical qualities whose physiological models are more complex than aerobic endurance.
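To make the heart-rate margins mentioned above concrete, the hedged Python sketch below computes a target training band from the theoretical maximum heart rate; the age-based estimate (220 - age), the 70% target intensity and the 10% margin are common textbook values used only for illustration, not necessarily the exact method of the thesis.

    def training_band(age, target_fraction=0.70, margin=0.10):
        """Return (lower, upper) HR bounds around a target intensity.

        Uses the common 220 - age estimate of the theoretical maximum heart rate;
        'margin' is expressed as a fraction of that maximum (10% in the thesis).
        """
        hr_max = 220 - age
        target = target_fraction * hr_max
        return target - margin * hr_max, target + margin * hr_max

    low, high = training_band(age=30)
    print(f"Stay between {low:.0f} and {high:.0f} beats per minute")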
Abstract:
Systems biology techniques are a topic of recent interest within the neurological field. Computational intelligence (CI) addresses this holistic perspective by means of consensus or ensemble techniques ultimately capable of uncovering new and relevant findings. In this paper, we propose the application of a CI approach based on ensemble Bayesian network classifiers and multivariate feature subset selection to induce probabilistic dependences that could match or unveil biological relationships. The research focuses on the analysis of high-throughput Alzheimer's disease (AD) transcript profiling. The analysis is conducted from two perspectives. First, we compare the expression profiles of hippocampus subregion entorhinal cortex (EC) samples of AD patients and controls. Second, we use the ensemble approach to study four types of samples: EC and dentate gyrus (DG) samples from both patients and controls. Results disclose transcript interaction networks with remarkable structures and genes not directly related to AD by previous studies. The ensemble is able to identify a variety of transcripts that play key roles in other neurological pathologies. Classical statistical assessment by means of non-parametric tests confirms the relevance of the majority of the transcripts. The ensemble approach pinpoints key metabolic mechanisms that could lead to new findings in the pathogenesis and development of AD.
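As a loose illustration of the consensus idea (not the paper's actual method or data), the Python sketch below builds an ensemble of naive Bayes classifiers, the simplest form of Bayesian network classifier, each trained on a random feature subset, and counts how often each feature appears in the better-scoring ensemble members; the dataset and thresholds are synthetic assumptions.

    import numpy as np
    from sklearn.naive_bayes import GaussianNB
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(120, 50))            # synthetic "expression" matrix
    y = (X[:, 3] + X[:, 17] > 0).astype(int)  # labels driven by two features

    votes = np.zeros(X.shape[1])
    for _ in range(200):
        subset = rng.choice(X.shape[1], size=8, replace=False)
        score = cross_val_score(GaussianNB(), X[:, subset], y, cv=3).mean()
        if score > 0.7:                        # keep only better-scoring members
            votes[subset] += 1

    print("features most often selected:", np.argsort(votes)[-5:])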
Abstract:
The simulation of complex LoC (Lab-on-a-Chip) devices is a process that requires solving computationally expensive partial differential equations. An interesting alternative uses artificial neural networks to create computationally feasible models based on model order reduction (MOR) techniques. This paper proposes an approach that uses artificial neural networks for designing LoC components, considering the artificial neural network topology as an isomorphism of the LoC device topology. The parameters of the trained neural networks are based on equations for modeling microfluidic circuits, analogous to electronic circuits. The neural networks have been trained to behave like AND, OR, and Inverter gates. The parameters of the trained neural networks represent the features of LoC devices that behave as the aforementioned gates. This would mean that LoC devices are capable of universal computation.
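As a minimal illustration of training a network to reproduce a logic gate (a generic perceptron on the AND truth table, not the microfluidics-derived parameters used in the paper), consider the following Python sketch:

    import numpy as np

    # Truth table for the AND gate.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    y = np.array([0, 0, 0, 1])

    w, b, lr = np.zeros(2), 0.0, 0.1
    for _ in range(20):                       # perceptron training loop
        for xi, yi in zip(X, y):
            pred = int(w @ xi + b > 0)
            w += lr * (yi - pred) * xi
            b += lr * (yi - pred)

    print([int(w @ xi + b > 0) for xi in X])  # expected: [0, 0, 0, 1]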
Abstract:
For many years now, one of the telecommunications services most in demand in Spain has been pay television, complementing and extending the offer of audiovisual content that is usually provided free of charge by analog television and, more recently, by digital terrestrial television (DTT). These video services have traditionally been offered by satellite operators, cable operators or other telecommunications operators which, through a data connection (ADSL, VDSL or optical fiber), delivered their content over IP. The evolution and improvement of the technology used to broadcast content over IP means that today television can be conceived as an Over The Top (OTT) service, independent of the transmission medium, allowing any player to distribute its audiovisual content easily to all of its customers anywhere in the world; all that is needed is an internet connection. The project therefore revolves around the StormTest tool from S3Group, acquired by CENTUM Solutions (a company specialized in providing engineering services for communication, control and signal intelligence systems) with the aim of meeting the needs of its customers, and which was ultimately used for this project. The main objective of this project is the definition and implementation of a test bench that optimizes the technical validation processes, improving execution times and focusing the engineers' work on higher-value tasks. To carry out this project, several objectives necessary for this kind of activity were set. The main ones are the following: analysis of the current situation, in which technical acceptance processes devote many working hours to repetitive, low-value tests that could be automated with tools available on the market; search for and selection of a tool that satisfies the testing needs; installation in the laboratories; and configuration and adaptation of the tool to the specific needs and projects. With all this, the project will deliver the following achievements: reduced execution times for the test campaigns, thanks to the automation of a large part of them; complex subjective and objective quality measurements, impossible to perform manually; and improved, automated result-reporting systems. Abstract: For many years, one of the telecommunications services most in demand in Spain has been pay television, complementing and extending the offer of audiovisual content usually provided for free by analog TV and, recently, by digital terrestrial television (DTT). These video services have traditionally been offered by satellite operators, cable operators or other telecommunications operators that, through a data connection (ADSL, VDSL or fiber optic), offered their content over IP. The evolution and improvement of the technology used for broadcasting over IP has allowed television to be conceived today as an Over The Top (OTT) service, not dependent on the transmission medium, allowing any agent to distribute audiovisual content in a very simple way to all its customers in all parts of the world; the only requirement is a decent internet connection.
In this way, the project is built around S3Group's StormTest tool, bought by CENTUM Solutions (a company specialized in engineering services for communications, control and signal intelligence systems) with the aim of satisfying the needs of its customers, and which was ultimately used for this project. The main objective of this project is the definition and implementation of a test bench that allows the processes of technical validation to be optimized, improving execution times and concentrating the activities of engineers on higher-value tasks. For the realization of this project, several objectives necessary for the development of this type of activity have been defined. The most important ones are listed below: analysis of the current situation, where technical acceptance processes devote many hours of work to repetitive, low-value testing that can be automated by tools available on the market; search for and selection of a tool that meets the testing needs; installation in the laboratories; and configuration and customization of the tool for specific projects. With all this, the project will cover the following achievements: reduce the execution time of the testing campaigns, thanks to the automation of many of them; carry out subjective and objective quality measurements that are impossible to run manually (due to subjective perception); and improve and automate the result-reporting systems.
Abstract:
Providing QoS in the context of ad hoc networks covers a very wide field of application from the perspective of every level of the network architecture. In order for simulation studies to be useful, it is very important that the simulation results match the test bed results as closely as possible. In this paper, we study the throughput performance (a QoS parameter) in Mobile Ad Hoc Networks (MANETs) and compare emulated test bed results with simulation results from NS2 (Network Simulator). The performance of Mobile Ad Hoc Networks is very sensitive to the number of users and the offered load. When the number of users or the offered load is high, collisions increase, resulting in greater wastage of the medium and lower overall throughput. The aim of this research is to compare the throughput of Mobile Ad Hoc Networks using three different scenarios, with 97, 100 and 120 users (nodes), using the NS2 simulator. By analyzing the resulting graphs, it is concluded that when the number of users or nodes increases beyond a certain limit, the throughput decreases.
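Throughput in this kind of study is computed from the simulation trace as the amount of data delivered per unit of time. The hedged Python sketch below assumes a simplified trace of (event, time_in_seconds, bytes) tuples rather than the actual NS2 trace format:

    def throughput_kbps(trace):
        """Sum received bytes and divide by the observation window (simplified)."""
        received = [(t, size) for event, t, size in trace if event == "recv"]
        total_bits = 8 * sum(size for _, size in received)
        duration = max(t for t, _ in received) - min(t for t, _ in received)
        return total_bits / duration / 1000.0

    trace = [("send", 0.0, 512), ("recv", 0.1, 512),
             ("send", 0.2, 512), ("recv", 0.4, 512)]
    print(f"{throughput_kbps(trace):.1f} kbps")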
Abstract:
This paper presents a mechanism to generate virtual buildings considering designer constraints and guidelines. This mechanism is implemented as a pipeline of different Variable Neighborhood Search (VNS) optimization processes in which several subproblems are tackled: (1) room locations, (2) connectivity graph, and (3) element placement. The core VNS algorithm includes some variants to improve its performance, such as constraint handling and biased operator selection. The optimization process uses a toolkit of construction primitives implemented as "smart objects" providing basic elements such as rooms, doors, staircases and other connectors. The paper also shows experimental results of the application of different designer constraints to a wide range of buildings, from small houses to a large castle with several underground levels.
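For readers unfamiliar with the metaheuristic, a generic Variable Neighborhood Search loop (shake, local search, then move or enlarge the neighborhood) looks roughly like the Python sketch below; the toy cost function and neighborhood moves are placeholders, not the building-layout operators of the paper.

    import random

    def local_search(solution, cost, neighbor, steps=50):
        """Greedy first-improvement descent in the smallest neighborhood."""
        for _ in range(steps):
            candidate = neighbor(solution)
            if cost(candidate) < cost(solution):
                solution = candidate
        return solution

    def vns(initial, cost, neighborhoods, max_iter=200):
        """Generic VNS: shake in neighborhood k, improve locally, restart k on success."""
        best = initial
        for _ in range(max_iter):
            k = 0
            while k < len(neighborhoods):
                candidate = neighborhoods[k](best)          # shaking
                candidate = local_search(candidate, cost, neighborhoods[0])
                if cost(candidate) < cost(best):
                    best, k = candidate, 0                  # improvement: restart
                else:
                    k += 1                                  # try a larger neighborhood
        return best

    # Toy problem: place three "rooms" on a line as close to target positions as possible.
    targets = [2.0, 5.0, 9.0]
    cost = lambda xs: sum((x - t) ** 2 for x, t in zip(xs, targets))
    small_move = lambda xs: [x + random.uniform(-0.5, 0.5) for x in xs]
    big_move = lambda xs: [x + random.uniform(-2.0, 2.0) for x in xs]

    print(vns([0.0, 0.0, 0.0], cost, [small_move, big_move]))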
Abstract:
This article presents the model and implementation of a multiagent fuzzy system (MAFS) to automate the search for solutions to incidents in telecommunications, expressed by the users in an imprecise way and later registered in a knowledge base that keeps their original vagueness and the relationships between the incidents considered as ancestors and descendants. The processing of the fuzzy incidents, whatever their causes, is based on the application of a formula that transforms the intervals of the fuzzy incidents into a computational language, and on the interaction between the different kinds of software agents and the humans. To search for and suggest solutions to the incident originally stated, a search algorithm is used and illustrated with an example. The preliminary results show user satisfaction in a large percentage of the presented cases. The system is adaptive and allows new solutions to be recorded for future users.
Abstract:
An approach based on the shared circuits model is presented in this work. The shared circuits model of socio-cognitive capacities, recently proposed by Hurley in "The shared circuits model (SCM): how control, mirroring, and simulation can enable imitation, deliberation, and mindreading", Behavioral and Brain Sciences 31(1) (2008) 1–22, is enriched and improved in this work. A five-layer computational architecture for designing artificial cognitive control systems is proposed on the basis of a modified shared circuits model for emulating socio-cognitive experiences such as imitation, deliberation, and mindreading. In order to show the potential of this approach, a simplified implementation is applied to a case study. An artificial cognitive control system is applied to force control in a manufacturing process, demonstrating the suitability of the suggested approach.
Abstract:
The design of a modern aircraft is based on three pillars: theoretical results, experimental tests and computational simulations. As a result, Computational Fluid Dynamics (CFD) solvers are widely used in the aeronautical field. These solvers require the correct selection of many parameters in order to obtain successful results, and the computational time spent in the simulation depends on the proper choice of these parameters. In this paper we create an expert system capable of making an accurate prediction of the number of iterations and the time required for the convergence of a computational fluid dynamics (CFD) solver. An artificial neural network (ANN) has been used to design the expert system. It is shown that the developed expert system is capable of accurately predicting the number of iterations and the time required for the convergence of a CFD solver.
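As a hedged sketch of such an expert system (synthetic data and an off-the-shelf regressor, not the paper's actual network, features or training set), one could map solver settings to an expected iteration count in Python as follows:

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    # Hypothetical solver parameters: CFL number, mesh size, turbulence-model id.
    X = rng.uniform([0.5, 1e4, 0], [5.0, 1e6, 3], size=(500, 3))
    iterations = 2000 / X[:, 0] + X[:, 1] / 1e3 + rng.normal(0, 50, 500)  # toy rule

    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X, iterations)
    print(model.predict([[2.0, 2e5, 1]]))     # predicted iterations to convergence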
Abstract:
This thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, this efficiency is improved by smoothing the aggregated consumption curve. This consumption-smoothing objective entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this thesis faces a new energy paradigm in which distributed generation is widespread across electrical grids, in particular photovoltaic (PV) generation. This kind of energy source affects the operation of the grid by increasing its variability, which implies that high penetration rates of photovoltaic electricity are detrimental to the stability of the electrical grid. This thesis aims to smooth the aggregated consumption curve while taking this energy source into account. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity with the aim of producing energy savings or an increase in efficiency are called Demand-Side Management (DSM). This thesis proposes two different DSM algorithms to meet the objective of smoothing the aggregated consumption curve. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on this DSM framework, the energy objective and the way it is achieved are different. In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption curve of the electrical grid. Although this statement may differ from the general definition of DSM, it makes sense again in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM focuses on maximizing the use of local energy, reducing dependence on the grid. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption is an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, these do not contribute to smoothing the aggregated consumption. The effects of local facilities on the electrical grid have been studied for the case where the DSM algorithm focuses on increasing self-consumption. This approach can have undesired effects, increasing the variability of the aggregated consumption instead of reducing it. This effect occurs because the DSM algorithm only considers local variables in the local framework. The results suggest that coordination between facilities is required. Through this coordination, consumption must be modified taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information.
In this thesis, a self-organized algorithm has been developed to control the consumption of the electrical grid in a distributed way. The goal of this algorithm is the smoothing of the aggregated consumption, as in classical DSM implementations. The distributed approach means that DSM is performed from the consumers' side without following direct commands issued by a central entity. Therefore, this thesis proposes a parallel management structure instead of a hierarchical one, as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to coordinate a system with the characteristics of the electrical grid is in itself a novel approach. Therefore, this coordination objective is not only a contribution in the field of energy management, but also in the field of collective systems. The results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid in proportion to the amount of energy controlled by the algorithm. Therefore, the greater the amount of energy controlled by the algorithm, the greater the efficiency improvement in the electrical grid. In addition to the advantages resulting from smoothing the aggregated consumption, other advantages arise from the distributed solution followed in this thesis. These advantages are summarized in the following characteristics of the proposed DSM algorithm: • Robustness: in a centralized system, a failure or breakdown of the central node causes the whole system to malfunction. Managing a grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid. • Data privacy: the use of a distributed topology means that there is no central node holding sensitive information about all consumers. This thesis goes further, and the proposed DSM algorithm does not use specific information about consumer behavior, so the coordination between facilities is completely anonymous. • Scalability: the proposed DSM algorithm operates with any number of facilities, which means that new facilities can be incorporated without affecting its operation. • Low cost: the proposed DSM algorithm adapts to current grids without topological requirements. In addition, each facility computes its own management with low computational requirements, so a central node with high computing power is not required. • Quick deployment: the scalability and low-cost characteristics of the proposed DSM algorithms allow quick deployment. No complex planning is required to deploy this system. Abstract: This thesis addresses the efficiency problems of electrical grids from the consumption point of view. In particular, such efficiency is improved by means of aggregated consumption smoothing.
This objective of consumption smoothing entails two major improvements in the use of electrical grids: i) in the short term, a better use of the existing infrastructure, and ii) in the long term, a reduction of the infrastructure required to supply the same energy needs. In addition, this Thesis faces a new energy paradigm, where the presence of distributed generation is widespread over the electrical grids, in particular Photovoltaic (PV) generation. This kind of energy source affects the operation of the grid by increasing its variability. This implies that a high penetration rate of photovoltaic electricity is detrimental to the stability of the electrical grid. This Thesis seeks to smooth the aggregated consumption while considering this energy source. Therefore, not only is the efficiency of the electrical grid improved, but the penetration of photovoltaic electricity into the grid can also be increased. This proposal brings great benefits in the economic, social and environmental fields. The actions that influence the way consumers use electricity in order to achieve energy savings or higher efficiency in energy use are called Demand-Side Management (DSM). This Thesis proposes two different DSM algorithms to meet the aggregated consumption smoothing objective. The difference between the two DSM algorithms lies in the framework in which they take place: the local framework and the grid framework. Depending on the DSM framework, the energy goal and the procedure to reach this goal are different. In the local framework, the DSM algorithm only uses local information. It does not take into account other consumers or the aggregated consumption of the electrical grid. Although this statement may differ from the general definition of DSM, it makes sense in local facilities equipped with Distributed Energy Resources (DERs). In this case, DSM is focused on the maximization of local energy use, reducing grid dependence. The proposed DSM algorithm significantly improves the self-consumption of the local PV generator. Simulated and real experiments show that self-consumption serves as an important energy management strategy, reducing electricity transport and encouraging users to control their energy behavior. However, despite all the advantages of increased self-consumption, these do not contribute to the smoothing of the aggregated consumption. The effects of the local facilities on the electrical grid are studied when the DSM algorithm is focused on self-consumption maximization. This approach may have undesirable effects, increasing the variability in the aggregated consumption instead of reducing it. This effect occurs because the algorithm only considers local variables in the local framework. The results suggest that coordination between these facilities is required. Through this coordination, the consumption should be modified by taking into account other elements of the grid and seeking to smooth the aggregated consumption. In the grid framework, the DSM algorithm takes into account both local and grid information. This Thesis develops a self-organized algorithm to manage the consumption of an electrical grid in a distributed way. The goal of this algorithm is the aggregated consumption smoothing, as in classical DSM implementations. The distributed approach means that the DSM is performed from the consumers' side without following direct commands issued by a central entity.
Therefore, this Thesis proposes a parallel management structure rather than a hierarchical one, as in classical electrical grids. This implies that a coordination mechanism between facilities is required. This Thesis seeks to minimize the amount of information necessary for this coordination. To achieve this objective, two collective coordination techniques have been used: coupled oscillators and swarm intelligence. The combination of these techniques to perform the coordination of a system with the characteristics of the electric grid is in itself a novel approach. Therefore, this coordination objective is not only a contribution in the energy management field, but also in the field of collective systems. Results show that the proposed DSM algorithm reduces the difference between the maxima and minima of the electrical grid in proportion to the amount of energy controlled by the system. Thus, the greater the amount of energy controlled by the algorithm, the greater the improvement in the efficiency of the electrical grid. In addition to the advantages resulting from the smoothing of the aggregated consumption, other advantages arise from the distributed approach followed in this Thesis. These advantages are summarized in the following features of the proposed DSM algorithm: • Robustness: in a centralized system, a failure or breakage of the central node causes a malfunction of the whole system. The management of a grid from a distributed point of view implies that there is no central control node, so a failure in any facility does not affect the overall operation of the grid. • Data privacy: the use of a distributed topology means that there is no central node with sensitive information about all consumers. This Thesis goes a step further, and the proposed DSM algorithm does not use specific information about consumer behavior, making the coordination between facilities completely anonymous. • Scalability: the proposed DSM algorithm operates with any number of facilities, which allows the incorporation of new facilities without affecting its operation. • Low cost: the proposed DSM algorithm adapts to current grids without any topological requirements. In addition, every facility calculates its own management with low computational requirements, so a central node with high computational power is not required. • Quick deployment: the scalability and low-cost features of the proposed DSM algorithms allow a quick deployment. No complex schedule is required for the deployment of this system.
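As a rough illustration of the coupled-oscillator coordination mentioned above (a textbook Kuramoto-style update, not the control law actually developed in the Thesis), each facility could adjust the phase that schedules its deferrable consumption using only the phases it hears from its neighbors; with a positive coupling the phases lock together, while a negative coupling would spread them apart:

    import numpy as np

    def kuramoto_step(phases, omega, coupling, adjacency, dt=0.1):
        """One update of Kuramoto coupled oscillators over a neighbor graph."""
        n = len(phases)
        new = np.empty(n)
        for i in range(n):
            interaction = sum(np.sin(phases[j] - phases[i])
                              for j in range(n) if adjacency[i, j])
            new[i] = phases[i] + dt * (omega[i] + coupling * interaction)
        return new % (2 * np.pi)

    rng = np.random.default_rng(2)
    n = 6                                      # six facilities on a ring
    phases = rng.uniform(0, 2 * np.pi, n)
    omega = np.full(n, 1.0)                    # identical natural frequencies
    eye = np.eye(n, dtype=bool)
    adjacency = np.roll(eye, 1, axis=1) | np.roll(eye, -1, axis=1)

    for _ in range(500):
        phases = kuramoto_step(phases, omega, coupling=0.5, adjacency=adjacency)
    print("order parameter (1.0 = fully synchronized):",
          abs(np.exp(1j * phases).mean()))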
Abstract:
Experimental diffusion data were critically assessed to develop the atomic mobility for the bcc phase of the Ti–Al–Fe system using the DICTRA software. Good agreement was obtained from comprehensive comparisons between the calculated and the experimental diffusion coefficients. The developed atomic mobility was then validated by successfully predicting the interdiffusion behavior observed in the diffusion-couple experiments available in the literature.
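For context, and as a standard CALPHAD-type relation rather than an expression reproduced from the paper, atomic mobility assessments of this kind usually write the mobility of a species B in an Arrhenius-like form whose composition dependence is expanded with Redlich–Kister polynomials:

    M_B = \frac{1}{RT}\exp\!\left(\frac{\Phi_B}{RT}\right), \qquad
    \Phi_B = \sum_i x_i\,\Phi_B^{i} + \sum_i\sum_{j>i} x_i x_j \sum_{r} {}^{r}\Phi_B^{i,j}\,(x_i - x_j)^r

where the x_i are mole fractions, the \Phi terms are the fitted mobility parameters, and the tracer diffusivity follows as D_B^{*} = RT\,M_B.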
Abstract:
Emotion is generally argued to be an influence on the behavior of life systems, largely concerning flexibility and adaptivity. The way in which life systems act in response to particular situations in the environment has revealed the decisive and crucial importance of this feature in the success of behaviors, and this source of inspiration has influenced the way artificial systems are conceived. During the last decades, artificial systems have undergone such an evolution that more of them are integrated into our daily life every day. They have grown in complexity, and the subsequent effects are related to an increased demand for systems that ensure resilience, robustness, availability, security or safety, among others; all of these are questions that raise fundamental challenges in control design. This thesis has been developed within the framework of the Autonomous System project, a.k.a. the ASys-Project. Short-term objectives of immediate application focus on designing improved systems and bringing intelligence into control strategies. Beyond this, the long-term objectives underlying the ASys-Project concentrate on higher-order capabilities such as cognition, awareness and autonomy. This thesis is placed within the general fields of engineering and emotion science, and provides a theoretical foundation for engineering and designing computational emotion for artificial systems. The starting question that grounds this thesis addresses the problem of emotion-based autonomy, and how to feed systems back with valuable meaning has shaped the general objective. Both the starting question and the general objective have underpinned the study of emotion, its influence on system behavior, the key foundations that justify this feature in life systems, how emotion is integrated within normal operation, and how this entire problem of emotion can be explained in artificial systems. Assuming essential differences between life and artificial systems concerning structure, purpose and operation, the essential motivation has been to explore what emotion solves in nature in order to then analyze analogies for man-made systems. This work provides a reference model in which a collection of entities, relationships, models, functions and informational artifacts interact to provide the system with non-explicit knowledge in the form of emotion-like relevances. This solution aims to provide a reference model under which to design solutions for emotional operation related to the real needs of artificial systems. The proposal consists of a multi-purpose architecture that implements two broad modules in order to address: (a) the range of processes related to how the environment affects the system, and (b) the range of processes related to emotion-like perception and the higher levels of reasoning. This has required an intense and critical analysis beyond the state of the art of the most relevant theories of emotion and technical systems, in order to obtain the required support for the foundations that sustain each model. The problem has been interpreted and is described on the basis of AGSys, an agent assumed to have the minimum rationality needed to perform emotional assessment. AGSys is a conceptualization of a model-based cognitive agent that embodies an inner agent, ESys, which is responsible for performing the emotional operation inside AGSys.
The solution consists of multiple computational modules working in a federated way, aimed at forming a mutual feedback loop between AGSys and ESys. Throughout this solution, the environment and the effects that might influence the system are described as different problems. While AGSys operates as a common system within the external environment, ESys is designed to operate within a conceptualized inner environment, which is built on the basis of those relevances that might occur inside AGSys in its interaction with the external environment. This allows for separate, high-quality reasoning concerning the mission goals defined in AGSys and the emotional goals defined in ESys. In this way, a possible path for high-level reasoning under the influence of goal congruence is provided. The high-level reasoning model uses knowledge about the stability of emotional goals, opening new directions in which mission goals might be assessed under the situational state of this stability. This high-level reasoning is grounded in the work of MEP, a model of emotion perception that is conceived as an analogy of a well-known theory in emotion science. The operation of this model is described as a recursive-like process labeled the R-Loop, together with a system of emotional goals that are treated as individual agents. In this way, AGSys integrates knowledge concerning the relation between a perceived object and the effect that this perception induces on the situational state of the emotional goals. This knowledge enables a higher-order system of information that provides the support for high-level reasoning. The extent to which this reasoning might be taken is only delineated and assumed as future work. This thesis has drawn on a wide range of fields of knowledge, which can be structured around two main objectives: (a) the fields of psychology, cognitive science, neurology and the biological sciences, in order to obtain an understanding of the problem of emotional phenomena, and (b) a large number of computer science branches, such as Autonomic Computing (AC), self-adaptive software, Self-X systems, Model Integrated Computing (MIC) or the models@runtime paradigm, among others, in order to obtain knowledge about tools for designing each part of the solution. The final approach has been performed mainly on the basis of all the acquired knowledge, and is described within the fields of Artificial Intelligence and Model-Based Systems (MBS), with additional mathematical formalizations to provide specific understanding where required. This approach describes a reference model to feed systems back with valuable meaning, allowing reasoning with regard to (a) the relationship between the environment and the relevance of its effects on the system, and (b) dynamic evaluations concerning the inner situational state of the system as a result of those effects. This reasoning provides a framework of distinguishable states of AGSys, derived from its own circumstances, that can be regarded as artificial emotion.
Abstract:
We present an evaluation of a spoken language dialogue system with a module for the management of user-related information, stored as user preferences and privileges. The flexibility of our dialogue management approach, based on Bayesian Networks (BN), together with a contextual information module, which performs different strategies for handling such information, allows us to include user information as a new level in the Context Manager hierarchy. We propose a set of objective and subjective metrics to measure the relevance of the different contextual information sources. The analysis of our evaluation scenarios shows that the relevance of the short-term information (i.e. the system status) remains fairly stable throughout the dialogue, whereas the dialogue history and the user profile (i.e. the middle-term and the long-term information, respectively) play a complementary role, their usefulness evolving as the dialogue progresses.