926 results for Optimal Control Problems
Abstract:
Most flows of engineering relevance remain unexplored in the context of global instability theory, for two reasons: first, the difficulties associated with the analysis of turbulent flows and, second, the formidable computational resources required for the solution of the eigenvalue problem associated with the instability analysis of three-dimensional base flows, also known as the TriGlobal problem. This thesis addresses the problem associated with three-dimensionality by developing a general approach to large-scale global linear instability analysis in which time-stepping methods, developed in this work, are coupled with the second-order computational fluid dynamics codes commonly employed in industry. The methodology solves the instability eigenvalue problem by projection onto Krylov subspaces, with the particular feature that these subspaces are generated by time integration of an initial vector using any computational fluid dynamics code. Three flows, challenging in terms of the required computational resources and their physical complexity, have been chosen to demonstrate the methodology: (i) the flow inside a wall-bounded three-dimensional lid-driven cavity, (ii) the flow past a cylinder fitted with helical strakes along its span, and (iii) the flow over an inhomogeneous three-dimensional open cavity. Results in excellent agreement with the literature have been obtained for the three-dimensional lid-driven cavity by coupling the time-stepping method with the incompressible solver of the open-source toolbox OpenFOAM®, which has served as validation. Moreover, significant physical insight into the instability of three-dimensional open flows has been gained through the application of the time-stepping methodology to the other two cases, providing for the first time information on the three-dimensional transition of these flows. In addition, modifications to the present approach have been proposed in order to perform adjoint instability analysis of three-dimensional base flows and flow control based on modifications of the global instabilities; validation and TriGlobal examples are presented. Finally, it has been demonstrated that the moderate amount of computational resources required for the solution of the TriGlobal eigenvalue problem with this method, together with its versatility in coupling with any aerodynamic code, enables instability analysis and control of complex flows of industrial relevance.
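The time-stepping idea summarised above (eigenvalues of the linearised propagator extracted by Krylov projection, with the propagator applied only through time integration) can be illustrated with a minimal matrix-free sketch. The snippet below is not the thesis' solver: a small random stable matrix stands in for the CFD time stepper, and SciPy's Arnoldi-based eigs plays the role of the Krylov projection.

```python
# Matrix-free sketch: leading eigenvalues of exp(A*T) obtained by repeatedly
# "time stepping" a perturbation vector, without ever forming A explicitly.
import numpy as np
from scipy.linalg import expm
from scipy.sparse.linalg import LinearOperator, eigs

n, T = 200, 0.5                      # degrees of freedom, sampling period
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)  # toy stable Jacobian
propagator = expm(A * T)             # in practice: one call to the linearised CFD stepper

def step(q):
    """Advance a perturbation vector over one period T (placeholder for the CFD run)."""
    return propagator @ q

M = LinearOperator((n, n), matvec=step, dtype=float)
mu, _ = eigs(M, k=5, which='LM')     # Ritz values of exp(A*T) via Arnoldi iteration
sigma = np.log(mu) / T               # recover the eigenvalues of A (growth rates)
print("leading growth rates:", np.sort(sigma.real)[::-1])
```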
Abstract:
Cuban agriculture, a strategic sector of the country's economy, is incorporating renewable energies into its development and management as a basic criterion for its future viability. However, a number of problems limit the development of these energy sources in Cuba, among them incomplete knowledge of their potential use. For this reason, this research aims at maximising the irrigated area of a given crop and determining the minimum regulation (storage) volume, using a reference windpump under given environmental conditions. A methodology is developed to predict the maximum potential of windpumps for a localised irrigation system, based on the calculation of the daily balance between crop water needs and water availability. An example illustrating the use of this methodology for a greenhouse tomato crop (Solanum lycopersicum L. var. FL-5) in Ciego de Ávila, Cuba, describes the elements of the installation proposed for water supply by the windpump. Several factors were studied: the tri-hourly wind speed series (V3h, m s-1) for an average wind year and for an average low-wind year; the flow supplied by the windpump as a function of the pumping head (H, m); and the daily evapotranspiration of the greenhouse crop as a function of planting date. From these factors, the monthly water volumes needed for irrigation (Dr, m3 ha-1), the storage tank capacity (Vdep, m3) and the maximum irrigable areas (Ar, ha) were determined for each variant. The results show that the optimal period of wind pumping for irrigation of the greenhouse tomato crop under the environmental conditions studied is from November to February, and that the factors with the greatest influence on the area that can be irrigated by wind pumping are the planting date and the storage tank volume.
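As a rough illustration of the daily-balance calculation described above (not the thesis' model), the sketch below simulates a storage tank fed by a windpump and drained by crop demand, and bisects on the irrigated area; all daily volumes and the tank size are invented numbers.

```python
# Daily water balance: find the largest irrigable area for which a storage tank
# of a given capacity never runs dry over the season.
import numpy as np

def tank_never_empties(area_ha, pumped, demand_per_ha, tank_capacity, tank0=0.0):
    """Simulate the daily balance; return True if storage never goes negative."""
    storage = tank0
    for q_in, d in zip(pumped, demand_per_ha):
        storage = min(storage + q_in, tank_capacity)   # pumping, limited by the tank
        storage -= d * area_ha                          # irrigation withdrawal
        if storage < 0.0:
            return False
    return True

def max_irrigable_area(pumped, demand_per_ha, tank_capacity, tol=1e-3):
    """Bisection on the area (ha); feasibility is monotone in the area."""
    lo, hi = 0.0, 100.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tank_never_empties(mid, pumped, demand_per_ha, tank_capacity):
            lo = mid
        else:
            hi = mid
    return lo

rng = np.random.default_rng(1)
pumped = rng.uniform(5.0, 40.0, size=120)    # hypothetical daily pumped volumes, m3
demand = rng.uniform(20.0, 45.0, size=120)   # hypothetical crop needs, m3/ha/day
print("max area (ha) for a 60 m3 tank:", round(max_irrigable_area(pumped, demand, 60.0), 3))
```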
Abstract:
This paper contributes a unified formulation that merges previous analyses of the prediction of the performance (value function) of a given sequence of actions (policy) when an agent operates a Markov decision process with a large state space. When the states are represented by features and the value function is linearly approximated, our analysis reveals a new relationship between two common cost functions used to obtain the optimal approximation. In addition, this analysis allows us to propose an efficient adaptive algorithm that provides an unbiased linear estimate. The performance of the proposed algorithm is illustrated by simulation, showing competitive results when compared with state-of-the-art solutions.
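For orientation, the following generic sketch (plain TD(0) on a synthetic Markov chain, not the adaptive algorithm proposed in the paper) shows the kind of policy-evaluation problem with linear value-function approximation being addressed; all features and transition probabilities are random placeholders.

```python
# Policy evaluation with linear function approximation via semi-gradient TD(0).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_feat, gamma = 20, 5, 0.9
P = rng.dirichlet(np.ones(n_states), size=n_states)   # random transition matrix (rows sum to 1)
r = rng.standard_normal(n_states)                      # reward received in each state
Phi = rng.standard_normal((n_states, n_feat))          # feature vector of each state

theta = np.zeros(n_feat)                               # linear value-function weights
s, alpha = 0, 0.05
for _ in range(100_000):
    s_next = rng.choice(n_states, p=P[s])
    td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * td_error * Phi[s]                 # semi-gradient TD(0) update
    s = s_next

v_exact = np.linalg.solve(np.eye(n_states) - gamma * P, r)   # exact value function
print("mean abs error of the linear approximation:",
      round(float(np.mean(np.abs(Phi @ theta - v_exact))), 3))
```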
Abstract:
A membrane system is a massively parallel system inspired by the way living cells process information. As a branch of unconventional computing, membrane systems have proven effective in solving complex problems. A new factor, the P-factor, is introduced that can decide whether a technique is worth using: it provides the best chance of selecting the strategy for the rules-application phase, where "best" refers to the strategy that reduces execution time within the membrane system. A pre-analysis of the membrane system determines the P-factor, which in turn indicates the optimal strategy to use. In particular, this paper compares the use of two strategies on the basis of the P-factor and reports results from their application. The paper concludes that the P-factor is an effective indicator for choosing the right strategy with which to implement the rules-application phase in membrane systems.
Abstract:
Flash floods are of major relevance to natural disaster management in the Mediterranean region. In many cases, their damaging effects can be mitigated by adequate management of flood control reservoirs, which requires suitable models for optimal reservoir operation. A probabilistic methodology is presented for calibrating the parameters of a reservoir flood control model (RFCM) that takes into account the stochastic variability of flood events. The study addresses the crucial problem of operating reservoirs during flood events, considering downstream river damage and dam failure risk as conflicting operation criteria. These two criteria are aggregated into a single objective of total expected damage from both the maximum released flows and the stored volumes (the overall risk index). For each selected parameter set, the RFCM is run under a wide range of hydrologic loads determined through Monte Carlo simulation. The optimal parameter set is obtained through the overall risk index (the balanced solution) and then compared with other solutions on the Pareto front. The proposed methodology is applied to three reservoirs in the southeast of Spain. The results show that the balanced solution offers a good compromise between the two main objectives of reservoir flood control management.
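The calibration loop described above can be caricatured in a few lines: a one-parameter release rule (a stand-in for the RFCM), synthetic Monte Carlo hydrographs, and an aggregate expected-damage index whose minimiser plays the role of the balanced solution. Every number and functional form below is invented for illustration only.

```python
# Monte Carlo evaluation of candidate operating parameters for a toy reservoir rule.
import numpy as np

rng = np.random.default_rng(0)

def synthetic_flood(n_hours=72):
    """Triangular hydrograph with random peak and timing (purely illustrative)."""
    peak = rng.lognormal(mean=6.0, sigma=0.5)            # m3/s
    t_peak = rng.integers(10, 40)
    t = np.arange(n_hours)
    q = np.where(t <= t_peak, peak * t / t_peak,
                 np.maximum(peak * (1 - (t - t_peak) / (n_hours - t_peak)), 0.0))
    return q * 3600.0                                     # m3 per hour

def overall_risk(release_fraction, inflow, v_max=5e7):
    """Route the flood (release a fraction of storage each hour) and score it."""
    storage, peak_out, peak_store = 0.0, 0.0, 0.0
    for q in inflow:
        storage += q
        out = release_fraction * storage
        storage -= out
        peak_out, peak_store = max(peak_out, out), max(peak_store, storage)
    # Surrogate index: downstream damage grows with peak outflow,
    # dam-failure risk grows once peak storage exceeds capacity.
    return (peak_out / 1e6) ** 2 + 10.0 * max(0.0, peak_store / v_max - 1.0) ** 2

candidates = np.linspace(0.05, 0.9, 18)
floods = [synthetic_flood() for _ in range(500)]          # Monte Carlo hydrologic loads
risk = [np.mean([overall_risk(c, f) for f in floods]) for c in candidates]
print("parameter minimising the overall risk index:", round(candidates[int(np.argmin(risk))], 3))
```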
Abstract:
Reducing the energy consumed by computation and cooling in servers is a major challenge given today's data center energy costs. To ensure energy-efficient operation of servers in data centers, the relationship among computational power, temperature, leakage, and cooling power needs to be analyzed. By means of an innovative setup that enables monitoring and controlling the computing and cooling power consumption separately on a commercial enterprise server, this paper studies temperature-leakage-energy tradeoffs and obtains an empirical model for the leakage component. Using this model, we design a controller that continuously seeks and settles at the optimal fan speed to minimize the energy consumption for a given workload, and we run a customized dynamic load-synthesis tool to stress the system. The proposed cooling controller achieves up to 9% energy savings and a 30 W reduction in peak power compared with the default cooling control scheme.
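The temperature-leakage-fan trade-off that such a controller exploits can be sketched with a toy model; the coefficients below are invented, not the empirical leakage model of the paper, but they reproduce the qualitative interior minimum that a fan-speed-seeking controller settles on.

```python
# Toy model: leakage power falls as the fan cools the CPU, fan power rises
# roughly with the cube of fan speed, so total power has an interior minimum.
import numpy as np

def cpu_temperature(fan_rpm, dynamic_power=120.0, t_ambient=25.0):
    # Higher fan speed -> lower thermal resistance (hypothetical relation).
    r_th = 0.5 * (3000.0 / fan_rpm) ** 0.8
    return t_ambient + r_th * dynamic_power

def total_power(fan_rpm, dynamic_power=120.0):
    t = cpu_temperature(fan_rpm, dynamic_power)
    leakage = 10.0 * np.exp(0.02 * (t - 50.0))      # leakage grows with temperature
    fan = 15.0 * (fan_rpm / 6000.0) ** 3            # cooling power ~ rpm^3
    return dynamic_power + leakage + fan

rpms = np.arange(2000, 8001, 100)
powers = np.array([total_power(r) for r in rpms])
best = rpms[int(np.argmin(powers))]
print(f"optimal fan speed ~{best} rpm, total power {powers.min():.1f} W "
      f"vs {total_power(8000):.1f} W at full speed")
```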
Abstract:
- Context: Pinus pinea L. presents serious problems of natural regeneration in the managed forests of Central Spain. The species exhibits specific traits linked to frugivore activity, so information on plant-animal interactions may be crucial to understanding regeneration failure.
- Aims: To determine the spatio-temporal pattern of P. pinea seed predation by Apodemus sylvaticus L. and the factors involved, to explore the importance of A. sylvaticus as a disperser of P. pinea, and to identify other frugivores and their seasonal patterns.
- Methods: An intensive 24-month seed predation trial was carried out. The probability of seeds escaping predation was modelled with a zero-inflated binomial mixed model. Experiments on seed dispersal by A. sylvaticus were conducted, and cameras were set up to identify other potential frugivores.
- Results: The decrease of the rodent population in summer, together with masting, enhances seed survival. Seeds were exploited more rapidly near parent trees and shelters. The dispersal activity of A. sylvaticus was found to be scarce, and corvids only marginally preyed upon P. pinea seeds.
- Conclusions: Survival of P. pinea seeds is climate-controlled through the timing of the dry period together with the occurrence of masting. If germination does not take place during the survival period, establishment may be limited. Dispersal mediated by A. sylvaticus does not modify the seed shadow, while the seasonality of corvid activity points to a role of corvids in dispersal.
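As a pointer to the kind of model mentioned in the Methods item, the sketch below writes down and fits a plain zero-inflated binomial likelihood (without the random effects of the full mixed model, and not the authors' code); the simulated data and parameter values are arbitrary.

```python
# Zero-inflated binomial: with probability pi a seed point is never visited
# (zero seeds predated); otherwise the number predated is Binomial(n, p).
import numpy as np
from scipy import stats, optimize

def neg_log_lik(params, k, n):
    """k = number of seeds predated out of n exposed, per sampling unit."""
    logit_pi, logit_p = params
    pi = 1.0 / (1.0 + np.exp(-logit_pi))
    p = 1.0 / (1.0 + np.exp(-logit_p))
    pmf = stats.binom.pmf(k, n, p)
    lik = np.where(k == 0, pi + (1.0 - pi) * pmf, (1.0 - pi) * pmf)
    return -np.sum(np.log(lik + 1e-12))

rng = np.random.default_rng(0)
n_seeds, m = 20, 300
visited = rng.random(m) > 0.3                         # 30% of points never visited
k = np.where(visited, rng.binomial(n_seeds, 0.6, size=m), 0)
res = optimize.minimize(neg_log_lik, x0=np.zeros(2), args=(k, n_seeds))
pi_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
p_hat = 1.0 / (1.0 + np.exp(-res.x[1]))
print(f"estimated zero-inflation {pi_hat:.2f} (true 0.30), predation prob. {p_hat:.2f} (true 0.60)")
```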
Abstract:
Electricity price forecasting is an interesting problem for all the agents involved in electricity market operation. For instance, every profit maximisation strategy is based on the computation of accurate one-day-ahead forecasts, which is why electricity price forecasting has been a growing field of research in recent years. In addition, the increasing concern about environmental issues has led to a high penetration of renewable energies, particularly wind. In some European countries such as Spain, Germany and Denmark, renewable energy is having a deep impact on the local power markets. In this paper, we propose a model that is optimal from the perspective of forecasting accuracy; it consists of a combination of several univariate and multivariate time series methods that account for the amount of energy produced with clean energies, particularly wind and hydro, which are the most relevant renewable energy sources in the Iberian Market. This market is used to illustrate the proposed methodology, as it is one of the markets in which wind power production is most relevant in terms of its percentage of total demand, but our method can of course be applied to any other liberalised power market. Regarding our contribution, the methodology proposed by García-Martos et al. (2007, 2012) is first generalised in two ways: we allow the incorporation of wind power production and hydro reservoirs, and we do not impose the restriction of using the same model for all 24 hours. A computational experiment and a Design of Experiments (DOE) are performed for this purpose. Then, for those hours in which two or more models show no statistically significant differences in forecasting accuracy, a combination of forecasts is proposed by weighting the best models (according to the DOE) and minimising the Mean Absolute Percentage Error (MAPE), the most popular accuracy metric for comparing electricity price forecasting models. The combination of forecasts is constructed by solving several nonlinear optimisation problems that yield the optimal weights for building the combination. The results are obtained from a large computational experiment that entails calculating out-of-sample forecasts for every hour of every day in the period from January 2007 to December 2009. In addition, to reinforce the value of our methodology, we compare our results with those reported in recent published work in the field; this comparison shows the superiority of our methodology in terms of forecasting accuracy.
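The forecast-combination step can be illustrated directly: given a few competing hourly forecasts, choose non-negative weights summing to one that minimise the MAPE on a validation window. The data below are synthetic and the optimiser is a generic SciPy routine, not the paper's exact formulation.

```python
# Optimal combination of price forecasts by minimising the MAPE.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
hours = 24 * 60
prices = 50 + 10 * np.sin(np.arange(hours) * 2 * np.pi / 24) + rng.normal(0, 3, hours)
# Three hypothetical competing forecasts with different bias/noise levels.
forecasts = np.stack([prices + rng.normal(b, s, hours)
                      for b, s in [(2, 4), (-1, 5), (0, 6)]])

def mape(w):
    combo = w @ forecasts
    return np.mean(np.abs((prices - combo) / prices)) * 100

n_models = forecasts.shape[0]
cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1.0},)
res = minimize(mape, x0=np.full(n_models, 1.0 / n_models),
               bounds=[(0.0, 1.0)] * n_models, constraints=cons)
print("optimal weights:", np.round(res.x, 3), " MAPE:", round(res.fun, 2), "%")
```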
Abstract:
Air traffic management (ATM) is undergoing a paradigm shift towards trajectory-based operations, in which the role of the air traffic controller evolves from continuous tactical intervention towards longer-term supervision, supported by increased confidence in the solutions provided by advanced decision-support automation. To support this concept, significant investment is required for the development and acquisition of new equipment, on the ground as well as in the air, to facilitate the high degree of trajectory synchronisation and air-ground information exchange required. Over the past 30 to 40 years, however, the airline industry has generated one of the lowest returns on invested capital among all industries. Without tangible benefits, the airline industry finds it difficult to attract the investment capital required for its modernisation, which delays the acquisition of the equipment needed to realise trajectory-based operations. This thesis aims to answer the question of whether the capabilities already installed on commercial aircraft can be applied to achieve sufficient trajectory synchronisation and, together with improvements to ground-based trajectory prediction in support of the arrival management process, can deliver some of the benefits envisioned under trajectory-based operations, thereby providing an incentive for further avionics upgrades. The proposed operational concept permits aircraft to be flown in a manner consistent with current optimised flight techniques: aircraft are allowed to descend in the fuel-efficient path-managed mode preferred by the majority of airlines, in which the arrival time at the point of interest is not actively controlled by the airborne automation. The temporal uncertainty is managed by metering at strategically chosen points along the aircraft's trajectory, with ground-issued speed advisories as the primary means of control. While the concept relies on speed advisories so that it can support the levels of equipage typical today, it also constitutes a framework into which more advanced avionics, such as airborne (FMS) time-of-arrival control, can be integrated naturally once that technology is widely available. In addition to managing temporal uncertainty by metering at multiple points, this uncertainty is minimised by improving the supporting ground-based trajectory prediction capability. A novel two-stage trajectory prediction process is presented that adequately integrates the reference trajectory computed by the aircraft's Flight Management System (FMS), available through the Future Air Navigation System (FANS), into the ground-based trajectory predictor. FANS is standard equipment on any wide-body aircraft in production today, and some single-aisle aircraft can readily be fitted with it. In addition to automatic position reporting, FANS can provide (part of) the reference trajectory held by the FMS, but the exploitation of this capability to improve trajectory prediction has so far been largely overlooked. The two-stage process provides a "best of both worlds" solution to the air-ground trajectory synchronisation problem: it synchronises, using the FANS-provided FMS reference trajectory, the dimensions controlled by the guidance mode, and improves the prediction of the remaining open dimensions using a guidance model that exploits the high-resolution meteorological forecasts available to a ground-based system. The two-stage trajectory prediction process was applied to a sample of 438 real FANS-equipped Boeing 737-800 flights into Melbourne that conducted a continuous descent free from ATC intervention; the methodology can be extrapolated to other aircraft types. Trajectories predicted through the two-stage approach provided estimated times of arrival at the point of interest with a 30% reduction in the standard deviation of the error compared with the estimates calculated by the FMS. The improved predicted trajectory can subsequently be used to set the arrival sequence and to allocate landing slots. Based on the allocated landing slot, the proposed system calculates a speed schedule that allows the aircraft to meet the slot with minimal impact on flight efficiency. A novel algorithm is presented that determines this speed schedule without an iterative search over the trajectory predictor: the trajectory prediction process is parameterised so that the estimated time of arrival can be represented by a polynomial function of the speed schedule, providing an analytical solution for the speed schedule required to meet a given arrival time without compromising efficiency. The arrival management solution proposed in this thesis leverages existing avionics and communication systems far more efficiently, creating new value for industry from current investment. The solution therefore supports a transition from mixed equipage towards the advanced avionics currently under development, and the benefits realised during this transition provide an incentive for ongoing investment in avionics and ground-based air traffic control systems.
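A minimal sketch of the polynomial speed-schedule idea (with a toy constant-ground-speed "predictor" standing in for an FMS-grade trajectory predictor, and a single speed-scaling parameter instead of a full schedule) could look as follows.

```python
# Fit ETA as a polynomial of a speed-scaling parameter from a few predictor
# runs, then solve the polynomial analytically for the allocated landing slot.
import numpy as np

def predict_eta(speed_scale, distance_nm=120.0, base_tas_kt=280.0, wind_kt=-20.0):
    """Toy predictor: time (s) to fly a fixed distance at a scaled ground speed."""
    ground_speed = base_tas_kt * speed_scale + wind_kt
    return distance_nm / ground_speed * 3600.0

scales = np.linspace(0.85, 1.10, 6)                  # a handful of predictor runs
etas = np.array([predict_eta(s) for s in scales])
coeffs = np.polyfit(scales, etas, deg=3)             # ETA as a cubic in the scale

target_eta = 1700.0                                  # seconds, allocated slot
roots = np.roots(coeffs - np.array([0, 0, 0, target_eta]))
feasible = [r.real for r in roots if abs(r.imag) < 1e-9 and 0.8 < r.real < 1.15]
best = min(feasible, key=lambda r: abs(r - 1.0))     # smallest deviation from nominal speed
print("required speed scale:", round(best, 4),
      "-> predicted ETA:", round(predict_eta(best), 1), "s (target", target_eta, "s)")
```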
Abstract:
In various problems of elasticity, the definition of the optimal geometry of the boundary, according to a given objective function, is an issue of great interest. Finding the shape of a hole in the middle of a plate subjected to arbitrary loading such that the stresses along the hole minimize some functional, or finding the optimal middle curve of a concrete tunnel vault along which a uniform minimum compression is achieved, are two typical examples. In these examples the objective functional depends on the geometry of the boundary, which can be either a curve (in 2D problems) or a boundary surface (in 3D problems). Typically, optimization is achieved by means of an iterative process that requires the computation of gradients of the objective function with respect to the design variables. Gradients can be computed in a variety of ways, although adjoint methods, whether continuous or discrete, are the most efficient ones when applied in different technical branches. In this paper the continuous adjoint method is introduced in a systematic way for this type of problem, and an illustrative simple example, namely finding the optimal shape of a tunnel vault immersed in a linearly elastic terrain, is presented.
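The adjoint gradient computation that makes such shape optimization affordable can be shown on a generic discrete example: for a state equation K(p)u = f and objective J = c^T u, one extra solve with K^T yields the gradient with respect to all design variables at once. This is the standard discrete adjoint recipe, not the paper's continuous formulation, and the matrices below are random placeholders.

```python
# Discrete adjoint: dJ/dp_i = -lambda^T (dK/dp_i) u, with K^T lambda = dJ/du = c.
import numpy as np

rng = np.random.default_rng(0)
n, m = 30, 4                                    # state size, number of design variables
K0 = np.eye(n) * 5.0 + 0.1 * rng.standard_normal((n, n))
K_sens = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]   # dK/dp_i
f, c = rng.standard_normal(n), rng.standard_normal(n)

def solve_state(p):
    K = K0 + sum(pi * Ki for pi, Ki in zip(p, K_sens))
    return K, np.linalg.solve(K, f)

def adjoint_gradient(p):
    K, u = solve_state(p)
    lam = np.linalg.solve(K.T, c)               # single adjoint solve
    return np.array([-lam @ (Ki @ u) for Ki in K_sens])

p0 = np.zeros(m)
eps = 1e-6
g_fd = np.array([(c @ solve_state(p0 + eps * e)[1] - c @ solve_state(p0)[1]) / eps
                 for e in np.eye(m)])           # finite-difference check
print("adjoint gradient:", np.round(adjoint_gradient(p0), 6))
print("finite-diff     :", np.round(g_fd, 6))
```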
Abstract:
Diabetes mellitus is a disease characterized by absent or insufficient insulin production, or by the body's resistance to insulin. Insulin is a hormone that helps glucose reach the peripheral tissues and the nervous system to supply energy. Two types of therapy are currently applied in subcutaneous tissue: multiple daily injections performed with insulin pens, and continuous subcutaneous insulin infusion (CSII) with a pump. The main problems of this therapy are the delays caused by the absorption of both carbohydrates and insulin, and the delays introduced by the subcutaneous glucose sensor, which measures glucose in the interstitial fluid while the variable one actually wishes to control is blood glucose. To make patients less dependent on their disease, work is being done on the development of an artificial endocrine pancreas (PEA), which would provide the patient with an insulin pump, a glucose sensor and a controller in charge of deciding the insulin infusions. This project pursues the design of a controller for closed-loop (CL) operation, with the objective of achieving optimal regulation of the blood glucose level. The controller is designed using internal model control (IMC) theory, which is based on the idea of feeding back the response of an approximate model of the process to be controlled. The model output, compared with that of the real process, gives the uncertainty of the plant model with respect to the real plant. Since, according to internal model theory, these differences occur at high frequencies, IMC proposes a low-pass filter in series with the inverse of the plant model as the controller in order to obtain the desired behaviour. In addition, a Smith Predictor is implemented to minimize the effects of the sensor measurement delay. To make the PEA viable, the classic IMC controller has been adapted using the static gains of a glucose model based on the subcutaneous infusion route and the subcutaneous measurement route. In semi-closed-loop (SCL) operation the controller improves the time spent in the normoglycemia range, requiring the patient to announce meal times to the controller in advance. Using SCL control with the Smith Predictor further improves the results, since a variable describing the meals is added to the controller through the patient's participation.
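A didactic discrete-time sketch of an IMC loop whose internal model includes the measurement delay (the Smith-predictor effect referred to above) is given below. The "plant" is a generic first-order-plus-dead-time system with invented parameters, not a physiological glucose model.

```python
# IMC with an internal model that contains the dead time: only the plant/model
# mismatch is fed back, and the controller Q is a filtered model inverse.
import numpy as np

Ts = 5.0                                   # sampling period, min
tau, Kp, delay = 60.0, -2.0, 6             # time constant (min), gain, dead time (samples)
a = np.exp(-Ts / tau)                      # discrete first-order coefficients
b = Kp * (1.0 - a)
alpha = np.exp(-Ts / 30.0)                 # IMC low-pass filter pole (the tuning knob)

n = 300
r = np.full(n, -40.0)                      # desired deviation from basal glucose (mg/dl)
y_plant, y_free, y_model, u, e = (np.zeros(n) for _ in range(5))

for k in range(1, n):
    u_delayed = u[k - 1 - delay] if k - 1 - delay >= 0 else 0.0
    meal = 30.0 if 150 <= k < 170 else 0.0                 # unannounced disturbance
    y_plant[k] = a * y_plant[k - 1] + b * u_delayed + (1 - a) * meal
    y_free[k] = a * y_free[k - 1] + b * u[k - 1]           # delay-free internal model
    y_model[k] = y_free[k - delay] if k >= delay else 0.0  # model including the delay
    e[k] = r[k] - (y_plant[k] - y_model[k])                # feed back only the mismatch
    u[k] = alpha * u[k - 1] + (1 - alpha) / b * (e[k] - a * e[k - 1])  # Q = F * Gm^-1

print("deviation before the meal:", round(y_plant[145], 1), "target:", r[0])
print("worst excursion during the unannounced meal:", round(y_plant[150:200].max(), 1))
```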
Abstract:
The relationship between structural controllability and observability of complex systems is studied. Algebraic and graph-theoretic tools are combined to prove the extent of some controller/observer duality results. Two types of control design problem are addressed and some fundamental theoretical results are provided. In addition, new algorithms are presented to compute optimal solutions for monitoring large-scale real networks.
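One classical graph-theoretic computation in this area, shown below for orientation (it is the standard maximum-matching result for structural controllability, not necessarily the algorithms proposed in the paper), determines a minimum set of driver nodes of a directed network.

```python
# Minimum driver nodes = unmatched nodes of a maximum matching in the bipartite
# representation of the directed network (one driver is still needed if the
# matching is perfect).
import networkx as nx
from networkx.algorithms.bipartite import hopcroft_karp_matching

edges = [(0, 1), (1, 2), (2, 3), (3, 1), (0, 4), (4, 5)]   # toy directed network
nodes = sorted({u for edge in edges for u in edge})

B = nx.Graph()
B.add_nodes_from([("out", v) for v in nodes], bipartite=0)
B.add_nodes_from([("in", v) for v in nodes], bipartite=1)
B.add_edges_from([(("out", u), ("in", v)) for u, v in edges])

matching = hopcroft_karp_matching(B, top_nodes=[("out", v) for v in nodes])
matched_in = {matching[node] for node in matching if node[0] == "out"}
drivers = [v for v in nodes if ("in", v) not in matched_in]
print("minimum number of driver nodes:", max(len(drivers), 1), "e.g.", drivers or [nodes[0]])
```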
Abstract:
The main objective of this PhD thesis is to deepen the analysis and design of an intelligent system for surface roughness prediction and control in high-speed end-milling processes, based fundamentally on Bayesian network classifiers, with the aim of developing a methodology that eases the design of this type of system. The system, whose purpose is to make surface roughness prediction and control possible, consists of a model learnt from experimental data with the aid of Bayesian networks, which helps to understand the dynamic processes involved in the machining and the interactions among the relevant variables. Since artificial neural networks are models widely used in material cutting processes, an end-milling model using them is also included, in which the geometry and hardness of the workpiece are introduced as novel variables not studied so far in this context. Thus, an important contribution of this thesis is these two models for surface roughness prediction, which are compared with respect to different aspects: the influence of the new variables, performance evaluation metrics, and interpretability. One of the main problems with Bayesian classifier-based modelling is the understanding of the enormous posterior probability tables produced. An explanation method is introduced that generates a set of rules obtained from decision trees; such trees are induced from a simulated data set generated from the posterior probabilities of the class variable, calculated with the Bayesian network learned from a training data set. Finally, a contribution is made in the multi-objective field for the case in which some of the objectives cannot be quantified as real numbers but only as interval-valued functions. This often occurs in machine learning applications, especially those based on supervised classification. Specifically, the dominance and Pareto front ideas are extended to this setting. Their application to the surface roughness prediction studies considers maximizing simultaneously the sensitivity and specificity of the induced Bayesian network classifier, rather than only maximizing the correct classification rate. The intervals for these two objectives come from an honest estimation method of both objectives, e.g. k-fold cross-validation or bootstrap.
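One simple way to extend Pareto dominance to interval-valued objectives is the conservative "certain dominance" sketched below; the thesis proposes its own extension, which may differ, and the classifier scores used here are hypothetical.

```python
# Certain dominance for maximisation with interval objectives: A dominates B if,
# in every objective, even the worst case of A is no worse than the best case of
# B, and strictly better in at least one objective.
from typing import List, Tuple

Interval = Tuple[float, float]          # (lower, upper), e.g. from k-fold CV

def dominates(a: List[Interval], b: List[Interval]) -> bool:
    no_worse = all(al >= bu for (al, _), (_, bu) in zip(a, b))
    strictly = any(al > bu for (al, _), (_, bu) in zip(a, b))
    return no_worse and strictly

def pareto_front(solutions: dict) -> list:
    return [name for name, objs in solutions.items()
            if not any(dominates(other, objs)
                       for oname, other in solutions.items() if oname != name)]

# Hypothetical classifiers scored by (sensitivity, specificity) intervals.
classifiers = {
    "A": [(0.80, 0.86), (0.70, 0.78)],
    "B": [(0.60, 0.65), (0.62, 0.66)],
    "C": [(0.88, 0.92), (0.80, 0.84)],
}
print("non-dominated classifiers:", pareto_front(classifiers))
```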
Abstract:
The main objective of this thesis is to study the coupling between the attitude control and thermal control subsystems of a small satellite, and to address the solution of existing issues concerning the determination of their design parameters. Throughout the thesis, the attitude and temperature evolution of the satellite is studied under the influence of two independent attitude stabilization and control strategies: (1) passive magnetic attitude stabilization (PMAS) and (2) active magnetic attitude control (AMAC). First, the mathematical model of the problem is presented; it includes both the rotational dynamics and the thermal model. The thermal model is derived for a cubic satellite by solving the heat balance equation for six external nodes and one internal node. Once the mathematical model was established, the two attitude strategies were applied to the system and the temperature evolution of the seven nodes of the satellite was studied. The PMAS technique was selected for study because of its prevalent use, simplicity, reliability and low cost, as this strategy significantly saves power, mass and cost and reduces the complexity of the system compared with other attitude control strategies. In addition, another control law is considered that provides the satellite with a required spin rate about a desired axis whose direction can be controlled with respect to the inertial reference frame, since the thermal subsystem of a satellite frequently demands rotation about a satellite axis oriented perpendicular to the incident solar radiation. Concerning the thermal problem, a linearized thermal model, obtained from the nonlinear formulation by means of a perturbation method, is used to study the influence of the spin rate on the temperature evolution at several points of the satellite. The results show that the temperature stabilization time and the influence of the periodic external thermal loads decrease as the spin rate increases, with the temperature changes becoming very small for high rotation rates. Concerning the PMAS strategy, it was observed that, in spite of its widespread application to micro and nano satellites, some issues remain to be solved regarding the sizing of its system parameters and the prediction of the in-orbit performance. These problems stem from the difficulty of determining the magnetic characteristics of the ferromagnetic bodies (hysteresis rods) used as oscillation dampers on board satellites. To address these issues, an analytic model for estimating their damping efficiency is proposed and applied to the in-flight behaviour of several existing satellites, comparing the model results with the corresponding in-flight data; the model satisfactorily explains the recorded behaviour.
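The qualitative effect of spin rate on the temperature oscillations can be reproduced with a single-node toy version of the heat balance (generic constants, not the seven-node model of the thesis): a face alternately sees the Sun as the satellite rotates, and the oscillation amplitude shrinks as the spin rate increases.

```python
# One-node heat balance: C dT/dt = alpha_s*S*A*illum(t) + Q_int - eps*sigma*A*T^4.
import numpy as np
from scipy.integrate import solve_ivp

sigma = 5.67e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
C, area = 800.0, 0.01      # heat capacity (J/K) and face area (m^2) of the node
alpha_s, eps = 0.7, 0.8    # absorptivity / emissivity
S, Q_int = 1366.0, 1.0     # solar constant (W/m^2) and internal dissipation (W)

def rhs(t, T, spin_rate):
    illum = max(np.cos(spin_rate * t), 0.0)          # the face sees the Sun half a turn
    q_in = alpha_s * S * area * illum + Q_int
    q_out = eps * sigma * area * T[0] ** 4
    return [(q_in - q_out) / C]

t_end = 6 * 3600.0
for spin in (2 * np.pi / 3600.0, 2 * np.pi / 60.0):   # 1 rev/hour vs 1 rev/minute
    sol = solve_ivp(rhs, (0.0, t_end), [290.0], args=(spin,), max_step=5.0)
    tail = sol.y[0][sol.t > t_end / 2]                # discard the initial transient
    print(f"spin period {2 * np.pi / spin:6.0f} s -> oscillation amplitude "
          f"{tail.max() - tail.min():5.2f} K")
```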
Abstract:
Many computer vision and human-computer interaction applications developed in recent years need to evaluate complex and continuous mathematical functions as an essential step toward proper operation. However, rigorous evaluation of this kind of function often implies a very high computational cost, unacceptable in real-time applications. To alleviate this problem, functions are commonly approximated by simpler piecewise-polynomial representations. Following this idea, we propose a novel, efficient, and practical technique to evaluate complex and continuous functions using a nearly optimal design of two types of piecewise linear approximations in the case of a large budget of evaluation subintervals. To this end, we develop a thorough error analysis that yields asymptotically tight bounds to accurately quantify the approximation performance of both representations. It improves upon previous error estimates and allows the user to control the trade-off between the approximation error and the number of evaluation subintervals. To guarantee real-time operation, the method is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), where it outperforms previous alternative approaches by exploiting the fixed-function interpolation routines present in their texture units. The proposed technique is a perfect match for any application requiring the evaluation of continuous functions. We have measured its quality and efficiency in detail on several functions, in particular the Gaussian function, because it is extensively used in many areas of computer vision and cybernetics and is expensive to evaluate.
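The trade-off between the number of subintervals and the approximation error can be checked in a few lines. The sketch below uses a plain interpolant through uniformly spaced samples of the Gaussian (only a stand-in for the nearly optimal designs analysed in the paper); np.interp mimics the linear filtering performed by GPU texture units.

```python
# Piecewise-linear approximation error of the Gaussian vs number of subintervals.
import numpy as np

def gaussian(x, sigma=1.0):
    return np.exp(-0.5 * (x / sigma) ** 2)

x_dense = np.linspace(-4.0, 4.0, 200_001)
exact = gaussian(x_dense)
for n_sub in (16, 64, 256, 1024):
    knots = np.linspace(-4.0, 4.0, n_sub + 1)
    approx = np.interp(x_dense, knots, gaussian(knots))   # interpolant through samples
    err = np.max(np.abs(approx - exact))
    print(f"{n_sub:5d} subintervals -> max error {err:.2e}")   # roughly O(h^2) decay
```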