982 results for Cross-layer optimization


Relevance: 30.00%

Abstract:

This work aims to develop a novel Cross-Entropy (CE) optimization-based fuzzy controller for an Unmanned Aerial Monocular Vision-IMU System (UAMVIS) to solve the see-and-avoid problem using its accurate autonomous localization information. The function of this fuzzy controller is to regulate the heading of the system to avoid obstacles, e.g. a wall. In the Matlab Simulink-based training stages, the Scaling Factor (SF) is adjusted according to the specified task first, and then the Membership Function (MF) is tuned based on the optimized Scaling Factor to further improve the collision avoidance performance. After obtaining the optimal SF and MF, the rule base was reduced by 64% (from 125 rules to 45 rules), and a large number of real flight tests with a quadcopter were carried out. The experimental results show that this approach precisely navigates the system to avoid the obstacle. To the best of our knowledge, this is the first work to present an optimized fuzzy controller for a UAMVIS using the Cross-Entropy method for Scaling Factor and Membership Function optimization.
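
To make the optimization stage concrete, the following is a minimal sketch of a generic Cross-Entropy loop of the kind the abstract describes: sample candidate parameters, keep the lowest-cost elite, and refit the sampling distribution. The cost function, dimension and hyperparameters are illustrative assumptions, not values from the paper; in the actual work the cost would wrap a Simulink or flight-simulation run.

```python
import numpy as np

def cross_entropy_optimize(cost, dim, n_samples=50, n_elite=10, n_iters=30):
    """Generic Cross-Entropy (CE) optimization loop: sample candidate
    parameter vectors from a Gaussian, keep the lowest-cost elite,
    refit the sampling distribution, and iterate until it concentrates."""
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(n_iters):
        samples = np.random.randn(n_samples, dim) * std + mean
        costs = np.array([cost(s) for s in samples])
        elite = samples[np.argsort(costs)[:n_elite]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean

# Hypothetical usage: tune three fuzzy-controller scaling factors so that a
# simulated heading-error cost is minimized (here a stand-in quadratic).
best_sf = cross_entropy_optimize(lambda sf: float(np.sum((sf - 1.0) ** 2)), dim=3)
```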

Relevance: 30.00%

Abstract:

Negative co-occurrence is a common phenomenon in many signal processing applications. In some cases the signals involved are sparse, and this information can be exploited to recover them. In this paper, we present a sparse learning approach that explicitly takes into account negative co-occurrence. This is achieved by adding a novel penalty term to the LASSO cost function based on the cross-products between the reconstruction coefficients. Although the resulting optimization problem is non-convex, we develop a new and efficient method for solving it based on successive convex approximations. Results on synthetic data, for both complete and overcomplete dictionaries, are provided to validate the proposed approach.
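
As a rough illustration of the approach (the exact penalty and solver details are in the paper; the form below is an assumed reading), one can augment the LASSO cost with a term lam2 * sum over negatively co-occurring pairs (i, j) of |x_i|*|x_j| and, since that term is non-convex, linearize it around the current iterate at each outer step, yielding a weighted LASSO surrogate solvable by ISTA:

```python
import numpy as np

def soft(v, t):
    """Element-wise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sca_negative_cooccurrence(D, y, pairs, lam1=0.1, lam2=0.5,
                              n_outer=20, n_inner=100):
    """Successive convex approximation for a LASSO cost augmented with an
    assumed cross-product penalty lam2 * sum_{(i,j) in pairs} |x_i|*|x_j|.
    Each outer step linearizes the bilinear term around the current iterate,
    giving a weighted LASSO that ISTA solves in the inner loop."""
    n = D.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(n_outer):
        w = np.full(n, lam1)               # per-coefficient l1 weights
        for i, j in pairs:                 # linearized cross-product terms
            w[i] += lam2 * abs(x[j])
            w[j] += lam2 * abs(x[i])
        for _ in range(n_inner):           # ISTA on the convex surrogate
            grad = D.T @ (D @ x - y)
            x = soft(x - grad / L, w / L)
    return x

# Hypothetical usage with one negatively co-occurring pair of atoms:
D = np.random.randn(16, 32); y = np.random.randn(16)
x_hat = sca_negative_cooccurrence(D, y, pairs=[(0, 1)])
```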

Relevance: 30.00%

Abstract:

The aim of this work is to develop an automated tool for the optimization of turbomachinery blades based on an evolutionary strategy. The optimization scheme is intended for supersonic blade cascades for application to Organic Rankine Cycle (ORC) turbines. The blade geometry is defined using parameterization techniques based on B-spline curves, which allow local control of the shape. The locations in space of the control points of the B-spline curve define the design variables of the optimization problem. In the present work, the performance of the blade shape is assessed by means of fully turbulent flow simulations performed with a CFD package, in which a look-up table method is applied to ensure an accurate thermodynamic treatment. The solver is coupled with the optimization tool to determine the optimal shape of the blade. As only blade-to-blade effects are of interest in this study, quasi-3D calculations are performed, and a single-objective evolutionary strategy is applied to the optimization. The result is a non-intrusive tool with no need for gradient definitions. The computational cost is reduced by the use of surrogate models: a Gaussian interpolation scheme (Kriging model) is applied to estimate the n-dimensional objective function, and a surrogate-based local optimization strategy is shown to yield accurate optimization results. In particular, the present optimization scheme has been applied to the re-design of a supersonic stator cascade of an axial-flow turbine, a design exercise in which very strong shock waves are generated on the rear suction side of the blade and shock-boundary layer interaction mechanisms occur. A significant efficiency improvement is achieved as a consequence of a more uniform flow at the outlet section of the stator, which is also expected to have beneficial effects on the design of a subsequent downstream rotor. The method provides an improvement over gradient-based methods, and an optimized blade geometry is easily achieved using the genetic algorithm.
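
The surrogate-assisted loop can be sketched as follows, with the expensive CFD evaluation stubbed out and scikit-learn's Gaussian process standing in for the Kriging model; dimensions, bounds and hyperparameters are illustrative assumptions, not values from the paper:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def surrogate_based_optimize(evaluate, dim, n_init=10, n_iters=20, pool=2000):
    """Surrogate-assisted optimization in the spirit described above: fit a
    Kriging (Gaussian-process) model to expensive evaluations, let a cheap
    global search on the surrogate propose the next candidate, and spend
    only one expensive evaluation per iteration. `evaluate` stands in for
    the quasi-3D CFD run scoring a blade shape parameterized by B-spline
    control-point offsets."""
    X = np.random.uniform(-1, 1, (n_init, dim))      # initial designs
    y = np.array([evaluate(x) for x in X])
    for _ in range(n_iters):
        gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
        cand = np.random.uniform(-1, 1, (pool, dim)) # cheap surrogate search
        x_new = cand[np.argmin(gp.predict(cand))]
        X = np.vstack([X, x_new])
        y = np.append(y, evaluate(x_new))            # one expensive call
    return X[np.argmin(y)]

# Hypothetical usage with a stand-in objective instead of a CFD run:
best = surrogate_based_optimize(lambda x: float(np.sum(x ** 2)), dim=6)
```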

Relevance: 30.00%

Abstract:

Composite laminates at the nanoscale have shown superior hardness and toughness, but little is known about their high-temperature behavior. The mechanical properties (elastic modulus and hardness) were measured as a function of temperature by means of nanoindentation in Al/SiC nanolaminates, a model metal-ceramic nanolaminate fabricated by physical vapor deposition. The influence of the Al and SiC volume fractions and layer thicknesses was determined between room temperature and 150 °C, and the deformation modes were analyzed by transmission electron microscopy, using a focused ion beam to prepare cross-sections through selected indents. It was found that ambient-temperature deformation was controlled by the plastic flow of the Al layers, constrained by the SiC, and the elastic bending of the SiC layers. The reduction in hardness with temperature showed evidence of the development of interface-mediated deformation mechanisms, which led to a clear influence of layer thickness on the hardness.

Relevance: 30.00%

Abstract:

Based on our previous knowledge of Cu/Nb nanoscale metallic multilayers (NMMs), Cu/W NMMs show good potential for applications as heat sinks in plasma experiments and armors, and it could be expected that the substitution of Nb by W would increase the strength, particularly at high temperatures. To check this hypothesis, Cu/W NMMs with individual layer thicknesses ranging between 5 and 30 nm were deposited by physical vapour deposition, and their mechanical properties were measured by nanoindentation. The results showed that, contrary to Cu/Nb NMMs, the hardness was independent of the layer thickness and decreased rapidly with temperature, especially above 200 °C. This behavior was attributed to the growth morphology of the W layers as well as the jagged Cu/W interface, both a consequence of the low W adatom mobility during deposition. Therefore, future efforts on the development of Cu/W multilayers should concentrate on the optimization of the W deposition parameters, via substrate heating and/or ion-assisted deposition, to increase the W adatom mobility during deposition.

Relevance: 30.00%

Abstract:

The objective of this thesis is the design and optimization of optical fiber-based phase shift keying (PSK) demodulators for high-bit-rate optical networks. PSK modulation formats have attracted significant attention in recent years because of their better performance with respect to conventional modulation formats. Principally, PSK signals improve spectral efficiency and tolerate more signal degradation caused by chromatic dispersion, polarization mode dispersion and nonlinearities in the fiber. In this work, the main PSK formats were analyzed in detail, including differential phase modulation (Differential Phase Shift Keying, DPSK), its quadrature variant (Differential Quadrature Phase Shift Keying, DQPSK) and polarization multiplexing (Polarization Multiplexing Differential Quadrature Phase Shift Keying, PM-DQPSK), in order to design and optimize receivers enabling their demodulation. To this end, novel structures, which offer good receiver performance and a cost reduction compared to current structures, have been analyzed and developed. Two novel receivers based on an all-fiber in-line Mach-Zehnder interferometer (MZI) are proposed for DPSK signal demodulation. The operating principle of the all-fiber MZI is based on the modal interference that occurs in a multimode fiber (MMF) when it is located between two single-mode fibers (SMFs). This type of configuration (single-mode-multimode-single-mode, SMS) can provide a good extinction ratio if the incoming power from the SMF is coupled equally into the two dominant modes excited in the MMF. In order to improve the interference extinction ratio, two novel SMS structures have been studied and demonstrated, both theoretically and experimentally. The first is based on a graded-index MMF with a central dip in the index profile; the second is based on a conventional graded-index MMF spliced with a lateral offset between the two SMFs. Theoretical analysis has shown that, in these two schemes, 80-90% of the incoming power is coupled into the two dominant modes excited in the MMF, with a power difference between them of less than 10%. Experimental results show that an interference extinction ratio of at least 12 dB can be obtained. In order to demonstrate the capacity of these two structures to act as DPSK demodulators, numerical simulations of a complete optical transmission system have been carried out, and the receiver quality has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, and tolerance to chromatic and polarization mode dispersion. In all cases, the simulation results show that the two proposed receivers provide performance comparable to conventional ones. An alternative design for a DQPSK receiver, based on a polarization-maintaining fiber (PMF), is also presented; theoretical analysis and numerical simulations demonstrate that it performs similarly to conventional receivers. To complement the work on the PMF-based DQPSK receiver, the study of its demodulation principle has been extended to PM-DQPSK signals, resulting in the proposal of a novel demodulation structure. The proposed PM-DQPSK receiver is based on a single delay line together with a polarization rotator. The quality of the proposed DQPSK and PM-DQPSK receivers has been analyzed from different perspectives, such as sensitivity, tolerance to severe optical filtering, tolerance to chromatic and polarization mode dispersion, and behavior under non-ideal conditions. Compared with conventional receivers, our proposals exhibit similar performance but allow a simpler design which can potentially reduce cost. The wavelength division multiplexing (WDM) technology used in current optical communication networks requires optical filters with passbands as narrow as possible, together with a series of devices that incorporate filters in their architecture, such as multiplexers, demultiplexers, switches, reconfigurable add-drop multiplexers (ROADMs) and optical cross-connects (OXCs). All these devices connected together are equivalent to a chain of filters whose bandwidth becomes increasingly narrow, distorting the waveform of the signals. Therefore, in addition to analyzing the impact of optical filtering on 40 Gbps DQPSK and 100 Gbps PM-DQPSK signals, this thesis studies which kind of optical filter minimizes the signal degradation and analyzes the maximum number of concatenated filters that maintains the required system quality. Four types of optical filters have been studied and simulated: Butterworth, Bessel, FBG and Fabry-Perot.
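
A back-of-the-envelope check of why a near-equal power split between the two dominant MMF modes matters, using the standard two-mode interference model (an assumed textbook relation, not a formula quoted from the thesis):

```python
import numpy as np

def extinction_ratio_db(p1, p2):
    """Two-mode interference extinction ratio: with mode powers p1 and p2,
    I_max = (sqrt(p1)+sqrt(p2))^2 and I_min = (sqrt(p1)-sqrt(p2))^2,
    so ER = I_max / I_min, reported here in dB."""
    a, b = np.sqrt(p1), np.sqrt(p2)
    return 10 * np.log10(((a + b) / (a - b)) ** 2)

# A ~10% power imbalance between the dominant modes still leaves a deep notch:
print(extinction_ratio_db(0.50, 0.40))   # ~25 dB
print(extinction_ratio_db(0.70, 0.20))   # more unequal split -> ~10 dB
```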


Relevance: 30.00%

Abstract:

Future missions for air-to-air endo-atmospheric missiles require the interception of faster and more maneuverable targets, including forthcoming unmanned supersonic combat vehicles. Interception will need to be achieved from any angle and off-boresight launch conditions. One of the most significant discussions in missile technology today is how to satisfy these new operational requirements by increasing missile maneuvering capabilities and, in parallel, through the development of more advanced guidance and control methods. This thesis addresses these two objectives by proposing a novel optimal integrated guidance and autopilot design scheme, applicable to highly maneuverable missiles with forward and rearward aerodynamic controls. A first insight into these results was recently published in the Journal of Aerospace Engineering, in April 2015 [Ibarrondo and Sanz-Aránguez, 2015]. The value of this integrated solution is that it allows the missile to comply with the aforementioned requirements using aerodynamic control only. The proposed design is compared against more traditional guidance and control approaches with positive results, achieving reduced control efforts and lower miss distances with the integrated logic, even in the presence of noise. This thesis demonstrates how a dual-control missile, where canard and tail fins are both movable, can enhance the capabilities of an existing missile airframe. Compared to a tail-controlled missile, dual control only requires two additional servos to actuate the canards in pitch and yaw. The tail section remains responsible for stabilizing the missile in roll, as in a classic tail-controlled missile, through differential deflections of the controls. The additional complexity is that the vortices shed from the canards propagate downstream, where they interact with the tail surfaces and alter the expected tail control characteristics. These aerodynamic phenomena must be properly described, as a preliminary step, with precision high enough for advanced guidance and control studies. As a first contribution, a full analytical model of the nonlinear aerodynamics of a missile with dual control has been developed, including the characterization of this cross-control coupling effect. The model was derived from a theoretical formulation validated with reliable wind tunnel data available in the scientific literature, complemented with computational fluid dynamics and semi-empirical methods. There are two modes of operating a missile with forward and rear controls in pitch and yaw: "divert" and "opposite". In divert mode, the controls are deflected in the same direction, generating an immediate increment in lift and a translation of the missile. The response is fast, but in this mode dual-control missiles may have difficulty achieving large angles of attack and high lateral accelerations. When the controls are deflected in opposite directions (opposite mode), the airframe rotates and the body angle of attack increases to generate greater steady-state accelerations, although the response time is longer. With the aerodynamic model, a state-dependent parameterization of the short-term dynamics of the dual-control missile can be obtained. Due to the cross-coupling effect, the open-loop dynamics are not linearly dependent on the fin positions. The short-term missile dynamics are blended with the servo system to obtain an extended autopilot model, in which the response is linear in the fin turning rates, which become the control variables. The flight control loop is optimized to achieve the maneuver required by the guidance law without exceeding any of the aerodynamic or mechanical limitations of the missile; the specific aero-limitations and relevant performance indicators for dual control are set as part of the analysis. A second contribution of this thesis is the development of a step-tracking, multi-input autopilot that integrates the nonlinear aerodynamics. The designed autopilot is fully three-dimensional: roll, pitch and yaw are integrated, and command inputs are calculated simultaneously. The autopilot gains are state dependent and computed at each integration step by solving a matrix Riccati equation of order 21x21. The resulting gains are sub-optimal, as a full solution of the Hamilton-Jacobi-Bellman equation cannot be obtained in practical terms and some simplifications are made. An acceleration mechanism with a λ-shift is incorporated in the design. As part of the autopilot, a strategy is defined for the proper allocation of control effort between the canard and tail channels. This is achieved with an augmented feedforward controller that minimizes the total control effort needed to maneuver. The feedforward law also keeps the missile near trim conditions, ensuring a well-behaved transient response, and the nonlinear controller eliminates the non-minimum-phase effect characteristic of the tail. Two guidance and control designs have been considered in this thesis: the Two-Loop and the Integrated approaches. In the Two-Loop approach, the autopilot is placed in an inner loop and designed separately from an outer guidance loop. This structure assumes that spectral separation holds, i.e., that the autopilot response is much faster than the guidance command updates. The developed nonlinear autopilot is linked in the study to an optimal guidance law. Simulations are carried out for launches close to the collision course against supersonic and highly maneuvering targets. The results demonstrate a large boost in performance provided by dual control over more traditional canard- and tail-controlled missiles, with interception close to the collision course achieved from any angle around the target. For the dual-control missile, the optimal flight strategy is to use opposite control during the approach to the target and quick divert corrections just before impact. However, the Two-Loop logic fails to achieve interception when there are large initial deviations from the collision course. One reason is that part of the guidance command is not followed, because the missile cannot control its axial acceleration unless it incorporates a throttleable engine. In addition, the separation hypothesis underlying the Two-Loop design may not hold for a highly dynamic vehicle like a dual-control missile approaching a maneuvering target. If the guidance and autopilot are combined into a single loop, the guidance law has access to the missile states and can calculate the optimal approach to the target considering the actual capabilities and attitude of the missile. A third contribution of this thesis is the resolution of this second design, the nonlinear integrated guidance and autopilot (IGA) problem for the dual-control missile. Previous approaches in the literature posed the problem in body axes, resulting in highly unstable behavior due to the low damping of the missile in pitch and yaw, and causing the missile to slide around the target rather than hit it. Here the IGA system is posed in inertial axes with quaternion dynamics, eliminating these inconveniences. It is not restricted to the short-term missile dynamics, since the missile speed is explicitly included as a state variable within the optimization loop. The IGA formulation is also independent of the target maneuver model, which must be included explicitly in the Two-Loop optimal guidance law. A typical problem of integrated systems with a proportional control law is one of scales: the guidance errors dominate the missile state errors during most of the flight, producing high gains, control saturation and loss of control. This is addressed here with an integrated feedforward controller that defines a local equilibrium state at each flight point, so that the controller acts as a regulator minimizing the excursions of the IGA states with respect to the feedforward state. The performance criteria for the IGA are the same as in the Two-Loop case, but the resulting optimization problem is mathematically much more complex. The optimal problem over a finite time horizon results in a state-dependent differential Riccati equation with terminal conditions that cannot be solved directly. With a change of variable and the introduction of a transition matrix, the equation is transformed into a time-differential Lyapunov equation that can be solved in real time with known numerical methods. This solution is range limited, applicable when the missile is in a close neighborhood of the target. For larger ranges, an approximate solution is used, obtained by solving an algebraic matrix Riccati equation at each integration step. The results, obtained through several comparative numerical tests in diverse homing scenarios, show that the integrated approach is a better solution than the Two-Loop scheme. The trajectories obtained are very different in the two cases. The IGA fully preserves the guidance command and maximizes the utilization of the missile propulsion system, achieving interception with lower miss distances and shorter flight times. The IGA achieves interception against off-boresight targets where the Two-Loop approach fails and, as an additional advantage, requires one order of magnitude fewer calculations than the Two-Loop solution. The effects of radar noise, discrete radar data and radome errors are investigated. The IGA solution is robust and less affected by these perturbations than the Two-Loop, especially because the target maneuver is not part of the IGA core optimization loop: the estimation of target acceleration is always imprecise and noisy, and it degrades the performance of the Two-Loop solution, while the IGA trajectories minimize the impact of radome errors in the guidance loop. Finally, as a fourth contribution, it is demonstrated that a missile with IGA guidance is capable of defending against attacks from its rear hemisphere, such as a tail attack, with aerodynamic control only. The studied trajectories include a pre-programmed high-rate turn maneuver that keeps the missile within its controllable envelope. This solution does not resort to technically more complex features in service today, such as thrust vector control or side thrusters. In all the mathematical treatments and demonstrations, the Kronecker product is used as a practical tool to handle the state-dependent parameterizations, which result in matrix equations of very high order.
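
The "algebraic Riccati equation at each integration step" idea reads, in miniature, like a state-dependent Riccati equation (SDRE) scheme. The toy sketch below shows the mechanics on an illustrative 2-state system, not the thesis' 21x21 formulation; all matrices are made up for illustration:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gains(A, B, Q, R):
    """One step of a state-dependent Riccati equation (SDRE) scheme: freeze
    the state-dependent matrices A(x), B(x) at the current state, solve the
    algebraic Riccati equation A'P + PA - P B R^-1 B' P + Q = 0, and return
    the feedback gain K = R^-1 B' P used until the next integration step."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Hypothetical 2-state illustration: a lightly damped short-period-like
# model regulated toward the origin.
A = np.array([[0.0, 1.0], [-4.0, -0.1]])
B = np.array([[0.0], [1.0]])
K = sdre_gains(A, B, Q=np.eye(2), R=np.array([[1.0]]))
x = np.array([1.0, 0.0])
u = -K @ x   # control held over the current step; A, B would be re-evaluated
             # and the Riccati equation re-solved at the next state
```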

Relevance: 30.00%

Abstract:

Data centers are easily found in every sector of the worldwide economy. They consist of tens of thousands of servers, serving millions of users globally, 24 hours a day and 365 days a year. In recent years, e-Science applications such as e-Health or Smart Cities have experienced significant development. The need to deal efficiently with the computational demands of next-generation applications, together with the increasing demand for resources in traditional applications, has driven the rapid proliferation and growth of data centers. A drawback of this capacity growth has been the rapid and dramatic increase in the energy consumption of these facilities. In 2010, data center electricity represented 1.3% of all electricity use in the world. In 2012 alone, global data center power demand grew 63% to 38 GW, and a further rise of 17% to 43 GW was estimated for 2013. Moreover, data centers are responsible for more than 2% of total carbon dioxide emissions. This PhD thesis addresses the energy challenge by proposing proactive and reactive thermal- and energy-aware optimization techniques that contribute to placing data centers on a more scalable energy curve. This work develops energy models and uses knowledge about the energy demand of the workload to be executed, and about the computational and cooling resources available in the data center, to optimize energy consumption. Moreover, data centers are considered as a crucial element within their application framework, optimizing not only the energy consumption of the facility but also the global energy consumption of the application. The main contributors to energy consumption in a data center are the computing power drawn by IT equipment and the cooling power needed to keep the servers within a temperature range that ensures safe operation. Because of the cubic relation between fan power and fan speed, solutions based on over-provisioning cold air to the servers usually lead to inefficiencies. On the other hand, higher chip temperatures lead to higher leakage power, because of the exponential dependence of leakage on temperature. Moreover, workload characteristics as well as allocation policies have an important impact on these leakage-cooling tradeoffs. The first key contribution of this work is the development of power and temperature models that accurately describe the leakage-cooling tradeoffs at the server level, and the proposal of strategies to minimize server energy via joint cooling and workload management from a multivariate perspective. When scaling to the data center level, a similar behavior in terms of leakage-temperature tradeoffs can be observed: as room temperature rises, the efficiency of the data room cooling units improves, but CPU temperature rises as well, and so does leakage power. Moreover, the thermal dynamics of a data room exhibit unbalanced patterns, due both to workload allocation and to the heterogeneity of the computing equipment. The second main contribution is the proposal of thermal- and heterogeneity-aware workload management techniques that jointly optimize the allocation of computation and cooling to servers. These strategies need to be backed by flexible room-level models, able to work at runtime, that describe the system from a high-level perspective. Within the framework of next-generation applications, decisions taken at the application level can have a dramatic impact on the energy consumption of lower abstraction levels, i.e., the data center facility. It is important to consider the relationships between all the computational agents involved in the problem, so that they can cooperate to achieve the common goal of reducing energy consumption in the overall system. The third main contribution is the energy optimization of the overall application, by evaluating the energy costs of performing part of the processing in any of the different abstraction layers, from the node to the data center, via workload management and off-loading techniques. In summary, the work presented in this PhD thesis makes contributions to leakage- and cooling-aware server modeling and optimization, data center thermal modeling and heterogeneity-aware data center resource allocation, and develops mechanisms for the energy optimization of next-generation applications from a multi-layer perspective.
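
The server-level tradeoff can be illustrated with a toy model built from the two dependencies quoted above, cubic fan power and exponential leakage. All coefficients below are made up for illustration, whereas the thesis fits its models to measurements; the point is that neither maximum nor minimum cooling minimizes total power:

```python
import numpy as np

# Toy leakage-cooling tradeoff: fan power grows cubically with fan speed,
# while leakage power grows exponentially with chip temperature, which is
# assumed here to fall linearly as the fan spins faster.
fan_speed = np.linspace(0.2, 1.0, 200)        # normalized fan speed
p_fan = 25.0 * fan_speed ** 3                 # W, cubic fan law
t_chip = 90.0 - 35.0 * fan_speed              # degC, assumed cooling effect
p_leak = 5.0 * np.exp(0.04 * (t_chip - 50.0)) # W, exponential leakage model

total = p_fan + p_leak
best = np.argmin(total)
print(f"optimal fan speed ~{fan_speed[best]:.2f}, "
      f"chip at {t_chip[best]:.1f} degC, total {total[best]:.1f} W")
```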

Relevance: 30.00%

Abstract:

A numerical method to analyse the stability of transverse galloping based on experimental measurements, as an alternative method to polynomial fitting of the transverse force coefficient Cz, is proposed in this paper. The Glauert–Den Hartog criterion is used to determine the region of angles of attack (pitch angles) prone to present galloping. An analytic solution (based on a polynomial curve of Cz) is used to validate the method and to evaluate the discretization errors. Several bodies (of biconvex, D-shape and rhomboidal cross sections) have been tested in a wind tunnel and the stability of the galloping region has been analysed with the new method. An algorithm to determine the pitch angle of the body that allows the maximum value of the kinetic energy of the flow to be extracted is presented.
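
For reference, a minimal numerical version of such a stability scan could look as follows, using the common form of the Glauert-Den Hartog criterion, H(alpha) = dC_L/dalpha + C_D < 0, with the derivative taken by finite differences over the measured polar. Sign conventions depend on how the transverse force coefficient is defined, so this form, and all the data below, are illustrative assumptions rather than the paper's method:

```python
import numpy as np

def galloping_prone(alpha_deg, c_l, c_d):
    """Flag pitch angles prone to transverse galloping via the
    Glauert-Den Hartog criterion H(alpha) = dC_L/dalpha + C_D < 0,
    evaluated from discrete wind tunnel data with central differences."""
    alpha = np.radians(alpha_deg)
    h = np.gradient(c_l, alpha) + c_d     # per-angle stability function
    return alpha_deg[h < 0.0]             # candidate galloping region

# Hypothetical measured polar on a coarse grid: a negative lift slope
# steeper than the drag level makes the whole range galloping-prone.
a = np.linspace(-10, 10, 21)
cl = -0.15 * np.radians(a)
cd = np.full_like(a, 0.05)
print(galloping_prone(a, cl, cd))
```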

Relevance: 30.00%

Abstract:

Recently, the target function for crystallographic refinement has been improved through a maximum likelihood analysis, which makes proper allowance for the effects of data quality, model errors, and incompleteness. The maximum likelihood target reduces the significance of false local minima during the refinement process, but it does not completely eliminate them, necessitating the use of stochastic optimization methods such as simulated annealing for poor initial models. It is shown that the combination of maximum likelihood with cross-validation, which reduces overfitting, and simulated annealing by torsion angle molecular dynamics, which simplifies the conformational search problem, results in a major improvement of the radius of convergence of refinement and the accuracy of the refined structure. Torsion angle molecular dynamics and the maximum likelihood target function interact synergistically, the combination of both methods being significantly more powerful than each method individually. This is demonstrated in realistic test cases at two typical minimum Bragg spacings (dmin = 2.0 and 2.8 Å, respectively), illustrating the broad applicability of the combined method. In an application to the refinement of a new crystal structure, the combined method automatically corrected a mistraced loop in a poor initial model, moving the backbone by 4 Å.
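
For readers unfamiliar with the stochastic search involved, here is a generic simulated-annealing skeleton. It is not the crystallographic implementation, which moves torsion angles under a maximum-likelihood target; the function names and the toy objective are illustrative assumptions:

```python
import math
import random

def simulated_annealing(energy, perturb, x0, t0=1.0, cooling=0.995, n_steps=5000):
    """Generic simulated annealing: propose a random move and accept uphill
    steps with probability exp(-dE/T), so the search can escape the false
    local minima mentioned above; the temperature T decays geometrically."""
    x, e, t = x0, energy(x0), t0
    for _ in range(n_steps):
        x_new = perturb(x)
        e_new = energy(x_new)
        if e_new < e or random.random() < math.exp(-(e_new - e) / t):
            x, e = x_new, e_new
        t *= cooling
    return x, e

# Toy usage on a 1-D function with several local minima:
best, _ = simulated_annealing(
    energy=lambda x: math.cos(3 * x) + 0.1 * x * x,
    perturb=lambda x: x + random.uniform(-0.5, 0.5),
    x0=4.0)
```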

Relevance: 30.00%

Abstract:

In addition to the contractile proteins actin and myosin, contractile filaments of striated muscle contain other proteins that are important for regulating the structure and the interaction of the two force-generating proteins. In the thin filaments, troponin and tropomyosin form a Ca-sensitive trigger that activates normal contraction when intracellular Ca is elevated. In the thick filament, there are several myosin-binding proteins whose functions are unclear. Among these is the myosin-binding protein C (MBP-C). The cardiac isoform contains four phosphorylation sites under the control of cAMP and calmodulin-regulated kinases, whereas the skeletal isoform contains only one such site, suggesting that phosphorylation in cardiac muscle has a specific regulatory function. We isolated natural thick filaments from cardiac muscle and, using electron microscopy and optical diffraction, determined the effect of phosphorylation of MBP-C on cross bridges. The thickness of the filaments that had been treated with protein kinase A was increased where cross bridges were present. No change occurred in the central bare zone that is devoid of cross bridges. The intensity of the reflections along the 43-nm layer line, which is primarily due to the helical array of cross bridges, was increased, and the distance of the first peak reflection from the meridian along the 43-nm layer line was decreased. The results indicate that phosphorylation of MBP-C (i) extends the cross bridges from the backbone of the filament and (ii) increases their degree of order and/or alters their orientation. These changes could alter rate constants for attachment to and detachment from the thin filament and thereby modify force production in activated cardiac muscle.

Relevance: 30.00%

Abstract:

Absorption induced by electrochemically injected holes is studied in poly-9,9-dioctylfluorene (PFO) films. Injected charges form positive polarons which are delocalised over four fluorene units in the glassy phase and about seven fluorene units in its β-phase. Polaron absorption cross-sections at the 640 nm peak are similar to the published values for chemically reduced oligofluorenes in solution. The absorption cross-section of the polaron in the β-phase at 470 nm is about eight times smaller than the stimulated emission cross-section derived from published data. This indicates that β-phase-rich PFO is an attractive candidate for a light-emitting layer in double-heterostructure organic laser diodes.

Relevance: 30.00%

Abstract:

Purpose: To evaluate the possible associations between corneal biomechanical parameters, optic disc morphology, and retinal nerve fiber layer (RNFL) thickness in healthy white Spanish children. Methods: This cross-sectional study included 100 myopic children and 99 emmetropic children as a control group, ranging in age from 6 to 17 years. The Ocular Response Analyzer was used to measure corneal hysteresis (CH) and the corneal resistance factor. The optic disc morphology and RNFL thickness were assessed using posterior segment optical coherence tomography (Cirrus HD-OCT). The axial length was measured using an IOLMaster, whereas the central corneal thickness was measured by anterior segment optical coherence tomography (Visante OCT). Results: The mean (±SD) age and spherical equivalent were 12.11 (±2.76) years and −3.32 (±2.32) diopters for the myopic group and 11.88 (±2.97) years and +0.34 (±0.41) diopters for the emmetropic group. In a multivariable mixed-model analysis in myopic children, the average RNFL thickness and rim area correlated positively with CH (p = 0.007 and p = 0.001, respectively), whereas the average cup-to-disc area ratio correlated negatively with CH (p = 0.01). We did not observe a correlation between RNFL thickness and axial length (p = 0.05). The corneal resistance factor correlated positively only with the rim area (p = 0.001). The central corneal thickness did not correlate with the optic nerve parameters or with RNFL thickness. These associations were not found in the emmetropic group (p > 0.05 for all). Conclusions: The corneal biomechanics characterized with the Ocular Response Analyzer system are correlated with the optic disc profile and RNFL thickness in myopic children. Low CH values may indicate a reduction in the viscous damping properties of the cornea and the sclera, especially in myopic children.

Relevance: 30.00%

Abstract:

Increased temperature and precipitation in Arctic regions have led to deeper thawing and structural instability in permafrost soil. The resulting localized disturbances, referred to as active layer detachments (ALDs), may transport organic matter (OM) to more biogeochemically active zones. To examine this further, solid state cross polarization magic angle spinning 13C nuclear magnetic resonance (CPMAS NMR) and biomarker analysis were used to evaluate potential shifts in riverine sediment OM composition due to nearby ALDs within the Cape Bounty Arctic Watershed Observatory, Nunavut, Canada. In sedimentary OM near ALDs, NMR analysis revealed signals indicative of unaltered plant-derived material, likely derived from permafrost. Long chain acyclic aliphatic lipids, steroids, cutin, suberin and lignin occurred in the sediments, consistent with a dominance of plant-derived compounds, some of which may have originated from permafrost-derived OM released by ALDs. OM degradation proxies for sediments near ALDs revealed less alteration in acyclic aliphatic lipids, while constituents such as steroids, cutin, suberin and lignin were found at a relatively advanced stage of degradation. Phospholipid fatty acid analysis indicated that microbial activity was higher near ALDs than downstream but microbial substrate limitation was prevalent within disturbed regions. Our study suggests that, as these systems recover from disturbance, ALDs likely provide permafrost-derived OM to sedimentary environments. This source of OM, which is enriched in labile OM, may alter biogeochemical patterns and enhance microbial respiration within these ecosystems.