15 results for OPTIMIZATION TECHNIQUE
at Universidad Politécnica de Madrid
Abstract:
The use of data mining techniques for discovering the gene profiles of diseases such as cancer is becoming common in research. These techniques, however, rarely analyze in depth the relationships between genes across the different manifestations of the disease in individual patients. Such an analysis takes considerable time and is not always the focus of the research, yet it is crucial for generating personalized treatments to fight the disease. This research therefore focuses on providing a gene profile analysis mechanism for medical and biological experts. Results: The MedVir framework is proposed, an intuitive mechanism based on the visualization of medical data such as gene profiles, patients and clinical data. MedVir is a Dimensionality Reduction (DR) approach, based on an evolutionary optimization technique, that presents the data in a three-dimensional space. Furthermore, thanks to Virtual Reality technology, MedVir allows experts to interact with the data and tailor the view to their experience and knowledge.
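The abstract does not detail MedVir's internals; as a hypothetical illustration of the general idea it names (an evolutionary optimization technique driving dimensionality reduction to three dimensions), the following sketch evolves a linear projection that preserves pairwise distances. All names and parameters are illustrative, not MedVir's actual algorithm.

```python
# Hypothetical sketch of evolutionary dimensionality reduction in the spirit of
# MedVir: a (mu + lambda) evolution strategy searches for a linear projection
# to 3-D that preserves pairwise distances as well as possible.
import numpy as np

def pairwise_dists(X):
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

def stress(W, X, D_orig):
    # Mismatch between original distances and distances after projecting to 3-D.
    return ((pairwise_dists(X @ W) - D_orig) ** 2).sum()

def evolve_projection(X, mu=10, lam=40, sigma=0.1, generations=200, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    D_orig = pairwise_dists(X)
    pop = [rng.normal(size=(d, 3)) for _ in range(mu)]
    for _ in range(generations):
        children = [p + rng.normal(scale=sigma, size=p.shape)
                    for p in pop for _ in range(lam // mu)]
        pop = sorted(pop + children, key=lambda W: stress(W, X, D_orig))[:mu]
    return pop[0]  # best 3-D projection found

X = np.random.default_rng(1).normal(size=(30, 50))   # 30 "patients", 50 "genes"
W = evolve_projection(X)
print(X @ W)  # 3-D coordinates suitable for an interactive 3-D/VR viewer
```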
Abstract:
Photogrammetry, the science and technique of obtaining three-dimensional information about an object space from two-dimensional images, requires precise measurements, and in that context geometric camera calibration plays an important role. Knowledge of the internal geometry of the camera is fundamental to achieving higher precision in the measurements taken. Aerial Photogrammetry uses metric cameras (manufactured exclusively for cartographic applications), which include photographic objectives with complex, high-quality lens systems. In Close Range Photogrammetry, however, non-metric cameras with lower-quality optics are used with increasing frequency, and these demand a geometric calibration before or after each job. The calibration process involves three fundamental concepts: the camera model, the distortion model, and the calibration method. The camera model is a mathematical model that approximates the original projective transformation to the physical reality of the lenses. That mathematical model includes a series of parameters, among them those of the distortion model, which corrects the systematic errors of the image. Finally, the calibration method specifies how the parameters of the mathematical model are estimated and which optimization technique is employed. This Thesis proposes the use of a two-dimensional calibration pattern that is displaced along the direction of the optical axis of the camera, thus adding three-dimensionality to the photographed scene. The pattern includes a large number of marks, which makes it possible to run tests with different geometric configurations. Taking the perspective (or pinhole) projection model as the camera model, tests are carried out with three different distortion models: the classical radial and tangential distortion model proposed by D.C. Brown, an approximation by Legendre polynomials, and a bicubic interpolation. From the combination of different geometric configurations and the most suitable distortion model, an optimal calibration methodology is established. To support this choice, a study of the precisions obtained in the different tests and a stereoscopic control of a purpose-built test panel are carried out.
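For reference, the classical radial-tangential distortion model of D.C. Brown cited above can be written down directly; the following sketch applies it to normalized image coordinates. The coefficient values are placeholders, not calibration results from the thesis.

```python
# Sketch of the classical Brown radial-tangential distortion model, applied to
# normalized (pinhole) image coordinates.
def brown_distort(x, y, k1, k2, k3, p1, p2):
    """Map ideal pinhole coordinates to distorted image coordinates."""
    r2 = x**2 + y**2
    radial = 1 + k1 * r2 + k2 * r2**2 + k3 * r2**3          # radial term
    x_d = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x**2)  # tangential terms
    y_d = y * radial + p1 * (r2 + 2 * y**2) + 2 * p2 * x * y
    return x_d, y_d

# Calibration then amounts to estimating (k1, k2, k3, p1, p2), together with the
# pinhole parameters, by minimizing the reprojection error over the pattern
# marks, e.g. with a non-linear least-squares optimizer.
x_d, y_d = brown_distort(0.3, -0.2, k1=-0.25, k2=0.08, k3=0.0, p1=1e-3, p2=-5e-4)
print(x_d, y_d)
```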
Abstract:
This article evaluates a biometric technique based on the performance of an identifying gesture made while holding a telephone with an embedded accelerometer. The acceleration signals obtained when users perform gestures are analyzed with a mathematical method based on global sequence alignment. Eight different scores are proposed and evaluated to quantify the differences between gestures, obtaining an optimal EER of 3.42% when analyzing a random set of 40 users from a database of 80 users with real forgery attempts. Moreover, a temporal study of the technique is presented, leading to the need to update the template so as to adapt to the way users modify their identifying gesture over time. Six updating schemes have been assessed on a database of 22 users repeating their identifying gesture in 20 sessions over 4 months, concluding that the more often the template is updated, the better and more stable the performance of the technique.
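The exact eight scores are defined in the article; as a hedged illustration of the score family it builds on, the following sketch computes a global-alignment cost between two one-axis acceleration sequences with a simple gap penalty. The local cost, gap penalty, threshold, and signals are illustrative choices, not the paper's.

```python
# Needleman-Wunsch-style global alignment cost between two 1-D acceleration
# sequences (dynamic programming over insertions, deletions and matches).
import numpy as np

def alignment_cost(a, b, gap=1.0):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, :] = np.arange(m + 1) * gap       # aligning against an empty prefix
    D[:, 0] = np.arange(n + 1) * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = D[i - 1, j - 1] + abs(a[i - 1] - b[j - 1])
            D[i, j] = min(match, D[i - 1, j] + gap, D[i, j - 1] + gap)
    return D[n, m] / (n + m)               # normalize by total length

template = np.sin(np.linspace(0, 6, 120))          # enrolled gesture (one axis)
attempt  = np.sin(np.linspace(0, 6, 110)) + 0.05   # new, slightly drifted attempt
score = alignment_cost(template, attempt)
print("accept" if score < 0.2 else "reject", score)
# Template updating, as studied in the article, could then e.g. replace the
# stored template with the latest accepted attempt or average aligned attempts.
```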
Abstract:
This paper presents the security evaluation, energy consumption optimization, and spectrum scarcity analysis of artificial noise techniques for increasing physical-layer security in Cognitive Wireless Sensor Networks (CWSNs). These techniques introduce noise into the spectrum in order to hide the real information, but they directly affect two important parameters of CWSNs, energy consumption and spectrum utilization, because both the number of packets transmitted by the network and the active period of the nodes increase. The security evaluation demonstrates that these techniques are effective against eavesdropping attacks, and the proposed optimization allows these approaches to be implemented in low-resource networks such as CWSNs. In this work, the scenario is formally modeled, the optimization is carried out according to the simulation results, and the impact on the frequency spectrum is analyzed.
Abstract:
The technique of Abstract Interpretation has allowed the development of very sophisticated global program analyses which are at the same time provably correct and practical. We present in a tutorial fashion a novel program development framework which uses abstract interpretation as a fundamental tool. The framework uses modular, incremental abstract interpretation to obtain information about the program. This information is used to validate programs, to detect bugs with respect to partial specifications written using assertions (in the program itself and/or in system libraries), to generate and simplify run-time tests, and to perform high-level program transformations such as multiple abstract specialization, parallelization, and resource usage control, all in a provably correct way. In the case of validation and debugging, the assertions can refer to a variety of program points such as procedure entry, procedure exit, points within procedures, or global computations. The system can reason with much richer information than, for example, traditional types. This includes data structure shape (including pointer sharing), bounds on data structure sizes, and other operational variable instantiation properties, as well as procedure-level properties such as determinacy, termination, nonfailure, and bounds on resource consumption (time or space cost). CiaoPP, the preprocessor of the Ciao multi-paradigm programming system, which implements the described functionality, will be used to illustrate the fundamental ideas.
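As a self-contained illustration of the core idea, not CiaoPP's actual domains or machinery, the following sketch evaluates arithmetic expressions over an abstract sign domain, computing safe approximations of run-time values without running the program:

```python
# Minimal abstract interpreter over a sign domain: expressions are evaluated on
# abstract values ("neg", "zero", "pos", "top") instead of concrete numbers.
NEG, ZERO, POS, TOP = "neg", "zero", "pos", "top"   # "top" = unknown sign

def abs_add(a, b):
    if ZERO in (a, b):
        return b if a == ZERO else a
    return a if a == b else TOP           # pos + neg (or anything + top): unknown

def abs_mul(a, b):
    if ZERO in (a, b):
        return ZERO
    if TOP in (a, b):
        return TOP
    return POS if a == b else NEG

def abs_eval(expr, env):
    """expr is a nested tuple AST: ('add', e1, e2), ('mul', e1, e2), or a name."""
    if isinstance(expr, str):
        return env[expr]
    op, e1, e2 = expr
    f = abs_add if op == "add" else abs_mul
    return f(abs_eval(e1, env), abs_eval(e2, env))

# Knowing only x > 0 and y < 0, the analysis still proves x*y + x*y < 0.
env = {"x": POS, "y": NEG}
print(abs_eval(("add", ("mul", "x", "y"), ("mul", "x", "y")), env))  # -> neg
```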
Abstract:
This paper presents a technique for achieving a class of optimizations related to the reduction of checks within cycles. The technique uses both Program Transformation and Abstract Interpretation. After a first pass of an abstract interpreter which detects simple invariants, program transformation is used to build a hypothetical situation that simplifies some predicates that should be executed within the cycle. This transformation implements the heuristic hypothesis that once conditional tests hold they may continue doing so recursively. Specialized versions of predicates are generated to detect and exploit those cases in which the invariance may hold. Abstract interpretation is then used again to verify the truth of such hypotheses and confirm the proposed simplification. This allows optimizations that go beyond those possible with only one pass of the abstract interpreter over the original program, as is normally the case. It also allows selective program specialization using a standard abstract interpreter not specifically designed for this purpose, thus simplifying the design of this already complex module of the compiler. In the paper, a class of programs amenable to such optimization is presented, along with some examples and an evaluation of the proposed techniques in some application areas such as floundering detection and reducing run-time tests in automatic logic program parallelization. The analysis of the examples presented has been performed automatically by an implementation of the technique using existing abstract interpretation and program transformation tools.
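The paper works on logic programs; as a hedged sketch of the underlying idea in Python, the following code shows a specialized version of a recursive traversal in which a per-iteration check is dropped once it has held, with a fallback in case the hypothesis is later violated:

```python
# Sketch of check reduction within a cycle via specialization (not the actual
# logic-program transformation): once the run-time test succeeds, control
# transfers to a specialized version that no longer re-checks it each iteration.
def total(xs, acc=0.0):
    if not xs:
        return acc
    x, rest = xs[0], xs[1:]
    if isinstance(x, float):              # run-time check inside the cycle
        return total_spec(rest, acc + x)  # hypothesis: it keeps holding
    return total(rest, acc + float(x))

def total_spec(xs, acc):
    # Specialized version: under the hypothesis "all remaining elements are
    # floats" the per-iteration conversion logic is dropped; one cheap guard
    # falls back to the general version if the hypothesis fails.
    if not xs:
        return acc
    x, rest = xs[0], xs[1:]
    if not isinstance(x, float):          # hypothesis violated: fall back once
        return total(xs, acc)
    return total_spec(rest, acc + x)

print(total([1, 2.0, 3.0, 4.0]))  # conversion logic stops after the first float
```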
Abstract:
Knowing the size of the terms to which program variables are bound at run-time in logic programs is required in a class of applications related to program optimization such as, for example, granularity analysis and selection among different algorithms or control rules whose performance may be dependent on such size. Such size is difficult to even approximate at compile time and is thus generally computed at run-time by using (possibly predefined) predicates which traverse the terms involved. We propose a technique based on program transformation which has the potential of performing this computation much more efficiently. The technique is based on finding program procedures which are called before those in which knowledge regarding term sizes is needed and which traverse the terms whose size is to be determined, and transforming such procedures so that they compute term sizes "on the fly". We present a systematic way of determining whether a given program can be transformed in order to compute a given term size at a given program point without additional term traversal. Also, if several such transformations are possible our approach allows finding minimal transformations under certain criteria. We also discuss the advantages and applications of our technique and present some performance results.
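As a hedged functional-style sketch of this transformation (the paper targets logic programs), the following code turns a traversal into one that computes the term size "on the fly", so a later size-dependent decision incurs no extra traversal:

```python
# Original traversal versus the transformed version that also returns the size.
def flatten(term):
    """Original procedure: flattens a nested structure."""
    if not isinstance(term, list):
        return [term]
    out = []
    for t in term:
        out.extend(flatten(t))
    return out

def flatten_with_size(term):
    """Transformed procedure: computes the term size during the traversal."""
    if not isinstance(term, list):
        return [term], 1
    out, size = [], 0
    for t in term:
        flat, s = flatten_with_size(t)
        out.extend(flat)
        size += s
    return out, size

flat, size = flatten_with_size([1, [2, [3, 4]], 5])
# 'size' is now available for, e.g., granularity control, at no extra cost.
strategy = "parallel" if size > 3 else "sequential"
print(flat, size, strategy)
```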
Abstract:
Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a variant of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented, and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA scales logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs a more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets of small to medium dimensionality, using two different Bayesian classifiers, shows that Pareto sets of feature subsets comparable to or better than those of standard methods are approximated.
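As a hedged sketch of the first idea above, the following numpy-only EDA estimates a regularized Gaussian model at each generation. Instead of a full ℓ1-regularized (graphical-lasso-style) estimator, it soft-thresholds the off-diagonal covariance entries, a simpler regularizer with the same sparsifying intent; all parameters are illustrative.

```python
# Continuous EDA with regularized Gaussian model estimation (sketch).
import numpy as np

def regularized_cov(X, alpha=0.05):
    S = np.cov(X, rowvar=False)
    R = np.sign(S) * np.maximum(np.abs(S) - alpha, 0.0)  # soft-threshold
    np.fill_diagonal(R, np.diag(S) + 1e-6)               # keep the variances
    return R

def eda(f, dim=10, pop=60, elite=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(pop, dim))
    for _ in range(iters):
        best = X[np.argsort([f(x) for x in X])[:elite]]   # truncation selection
        mu, S = best.mean(axis=0), regularized_cov(best)  # regularized model
        X = rng.multivariate_normal(mu, S, size=pop)      # sample new population
    return min(X, key=f)

sphere = lambda x: float(np.dot(x, x))
print(sphere(eda(sphere)))   # should approach 0
```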
Transformation-based implementation and optimization of programs exploiting the basic Andorra model.
Abstract:
The characteristics of CC and CLP systems are, in principle, very different. However, a recent trend towards convergence in the implementation techniques for these systems can be observed: while CLP and Prolog systems have been incorporating capabilities to deal with user-defined suspension and coroutining, CC compilers have been trying to coalesce fine-grained tasks into coarser-grained sequential threads. This convergence of techniques opens up the possibility of having a general-purpose kernel language and abstract machine to serve as a compilation target for a variety of user-level languages. We propose a transformation technique directed towards such an objective. In particular, we report on techniques to support the Andorra computational model, essentially emulating the Andorra-I system via program transformation into a sequential language with delay primitives. The system is automatic, comprising an optional program analyzer and a basic transformer to the kernel language. It turns out that a simple parallel CLP or Prolog system with dynamic scheduling is sufficient as a kernel language for this purpose. The preliminary results are quite encouraging: the performance of the resulting system is comparable to the current Andorra-I implementation.
Abstract:
The design and development of vehicle suspension systems relies increasingly on computer-aided design and computer-aided engineering tools, which allow problems to be anticipated and solved ahead of time. Dynamic behavior and characteristics are thus simulated accurately and inexpensively, with moderate computational times and resources. There is, however, an iterative component in the process, which involves the manual definition of designs in a trial-and-error manner. This Thesis takes a step towards the development of an efficient simulation framework capable of simulating, analyzing and evaluating vehicle suspension designs, and of automatically improving them by varying the design parameters towards the optimal solution. The multibody systems approach is used here to model a three-dimensional, 18-degree-of-freedom coach in a comprehensive yet efficient way. The suspension geometry and characteristics match those of the real vehicle, as do the remaining vehicle parameters. To simulate the vehicle dynamics, an efficient, state-of-the-art multibody formulation based on Maggi's equations is employed, and a three-dimensional graphics viewer is developed. As a result, vehicle maneuvers can be simulated faster than real time. Once the dynamics are available, sensitivity analyses are crucial for a robust and efficient optimization. To that end, a mathematical technique is introduced that allows the dynamic variables to be differentiated within the multibody formulation in a general, algorithmic way, accurate to machine precision and reasonably efficient: automatic differentiation. This method propagates the derivatives with respect to the design parameters through the computer code with little user intervention. In contrast with other approaches in the literature, which are mostly not general-purpose, a benchmarking of libraries is carried out, a hybrid direct-automatic differentiation approach for the computation of sensitivities is developed, and several real-life examples are analyzed. Finally, the dynamic response of the aforementioned vehicle is optimized. Four different types of optimization are presented: parameter identification, handling optimization, ride comfort optimization, and multi-objective optimization, all of them applied to the design of the coach. Together with analytical and graphical results, some efficiency considerations are included. In summary, the dynamic behavior of vehicles is improved by means of multibody models and advanced automatic differentiation and optimization techniques, enabling an automatic, accurate and efficient tuning of the design parameters.
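As a minimal sketch of the automatic differentiation mechanism described above (forward mode via operator overloading, far simpler than the AD libraries benchmarked in the thesis), the following code propagates a derivative through a toy suspension force; the force model and values are illustrative:

```python
# Forward-mode automatic differentiation with dual numbers: each value carries
# its derivative, and overloaded operators apply the chain rule automatically.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot        # value and derivative component
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.dot * o.val + self.val * o.dot)   # product rule
    __rmul__ = __mul__

def spring_damper_force(k, c, x, v):
    # Toy suspension force; any code built from overloaded ops is differentiated.
    return k * x + c * v

# Sensitivity of the force with respect to the stiffness k: seed dk/dk = 1.
k = Dual(25000.0, 1.0)
f = spring_damper_force(k, 1500.0, Dual(0.02), Dual(-0.1))
print(f.val, f.dot)   # force, and d(force)/dk = x = 0.02
```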
Abstract:
The objective of this study was to propose a multi-criteria optimization and decision-making technique to solve food engineering problems. The technique was demonstrated using experimental data obtained on the osmotic dehydration of carrot cubes in a sodium chloride solution. The Aggregating Functions Approach, the Adaptive Random Search Algorithm, and the Penalty Functions Approach were used to compute the initial set of non-dominated or Pareto-optimal solutions. Multiple non-linear regression analysis was performed on the experimental data in order to obtain particular multi-objective functions (responses), namely water loss, solute gain, rehydration ratio, three different colour criteria of the rehydrated product, and sensory evaluation (organoleptic quality). Two multi-criteria decision-making approaches, the Analytic Hierarchy Process (AHP) and the Tabular Method (TM), were used simultaneously to choose the best alternative among the set of non-dominated solutions. The multi-criteria optimization and decision-making technique proposed in this study can facilitate the assessment of criteria weights, giving rise to a fairer, more consistent, and more adequate final compromise solution or food process. This technique can be useful to food scientists in research and education, as well as to engineers involved in the improvement of a variety of food engineering processes.
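As a hedged sketch of the two stages described above, the following code filters a set of candidate process settings down to the non-dominated (Pareto-optimal) ones and then picks a compromise with a weighted aggregating function; the criteria, values, and weights are illustrative, not the study's data:

```python
# Pareto filtering followed by selection with an aggregating function.
def dominates(a, b):
    """a dominates b when a is no worse in every criterion and strictly better
    in at least one (all criteria formulated here for minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Illustrative criteria: (water loss, solute gain, colour deviation), all to
# be minimized; each tuple is one candidate process setting.
candidates = [(0.30, 0.12, 0.05), (0.25, 0.20, 0.04),
              (0.28, 0.15, 0.09), (0.40, 0.10, 0.03)]
front = pareto_front(candidates)

weights = (0.5, 0.3, 0.2)     # illustrative AHP-style criteria weights
best = min(front, key=lambda s: sum(w * x for w, x in zip(weights, s)))
print(front, best)
```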
Abstract:
The trend in the telecommunications sector during recent years has been a steep increase in, and diversification of, the transmission of voice, video and, above all, data. To achieve the required data rates, the new modulation standards demand higher bandwidths and have a higher peak-to-average power ratio (PAPR). These specifications have a direct impact on the low efficiency of the radio-frequency power amplifier (RFPA). An additional factor in the low efficiency of the RFPA is the power amplifier design itself. Traditionally, linear classes have been used to implement the power amplifier, as they comply with the technical requirements; however, they have a low efficiency, especially over the operating range of signals with a high PAPR. The low efficiency of the transmitter carries additional disadvantages: the cost and size of the cooling system increase in a base station, and battery-powered portable devices heat up more and run for less time. Several solutions have been developed over the last decades to improve the efficiency of the transmitter, such as outphasing, power combiners, or the Doherty technique. These solutions improve the performance of the RFPA, and some, such as the Doherty technique, have been widely used commercially, reaching overall system efficiencies of up to 50% for bandwidths of up to 20 MHz. However, the highest system efficiencies are obtained with solutions based on modulating the supply voltage of the power amplifier, as in the Envelope Tracking (ET) and Envelope Elimination and Restoration (EER) techniques.
The Envelope Tracking technique is based on modulating the supply voltage of a linear power amplifier to improve the overall efficiency compared with a fixed supply voltage. Implementing this technique requires an additional stage, the envelope amplifier, which adds complexity to the RFPA: the extra stage introduces losses, but the losses of the power amplifier decrease, and if the design is properly optimized the overall efficiency gain can exceed that of the techniques mentioned above. This technique offers advantages in the design of the envelope amplifier, since its required bandwidth can be lower than the bandwidth of the envelope signal if the design is properly optimized, and since the synchronization between the envelope and phase signals does not need to be perfect, the integration process has certain advantages over other techniques such as EER. The Envelope Elimination and Restoration technique, also known as Kahn's technique, is based on the simultaneous modulation of the envelope and the phase of the signal using a switched-mode, non-linear, high-efficiency power amplifier. This solution offers the highest potential in terms of efficiency improvement but also has the most demanding specifications. Proposed in 1952, it was not successfully implemented for decades because of the stringent requirements on the synchronization between phase and envelope, on the techniques for controlling and correcting the errors and non-linearities of each stage, and on the processing and computation capabilities of the equipment needed to implement those techniques. At the system level, a very precise synchronization between the envelope and phase paths is required to avoid degrading the linearity of the system, and techniques such as predistortion, feedback, and feed-forward are used to compensate the non-linear effects in amplitude and phase and to improve the rejection of out-of-band noise.
Within the design of an RFPA using either ET or EER, the envelope amplifier stage is of critical importance because of its influence on the efficiency and bandwidth of the complete system. In addition, the linearity and the quality of the transmitted signal must be high in order to comply with the different telecommunications standards. This thesis focuses on the envelope amplifier stage, and its main objective is the development of solutions that increase the total efficiency of the system while satisfying the requirements on bandwidth, transmitted signal quality, and linearity.
Because of the high efficiency potentially achievable with the EER technique, it has been widely analyzed, and numerous references in the state of the art study the design of the envelope amplifier and propose implementations. At a high level, the proposed envelope amplifier solutions can be classified into single-stage and multiple-stage architectures. Multiple-stage envelope amplifiers combine a high-efficiency switched-mode converter with a high-bandwidth linear regulator, in either a series or a parallel configuration. Thanks to the combination of the characteristics of both stages, these solutions provide a good trade-off between efficiency and RF amplifier performance; on the other hand, the complexity of the system increases because of the larger number of components and control signals required, and the achievable efficiency improvement is limited. A single-stage envelope amplifier has the advantage of lower complexity, but in order to achieve the required bandwidth the switching frequency has to be greatly increased, which degrades the performance and the efficiency of the envelope amplifier. Several techniques can be found in the state of the art to overcome this limitation, such as integrated-circuit implementations capable of switching at very high rates, topological solutions, high-order filters, or combinations of these to reduce the switching frequency requirements.
In this thesis, the use of the ripple cancellation technique, applied to a synchronous buck converter, is originally proposed to reduce the switching frequency compared with an equivalent conventional buck converter design. Two additional topological variants based on this solution are developed to increase its robustness and performance. Another point of interest in the design of an RFPA is the difficulty of estimating the influence of the design parameters of the envelope amplifier on the final integrated amplifier. This thesis addresses this problem with a design tool that estimates the main figures of merit of the integrated EER amplifier from the design of the envelope amplifier; using this tool, the effects of the bandwidth, the output voltage ripple, or the non-linearities of the envelope amplifier design can be evaluated for several digital modulations.
The main original contributions of this thesis are the following: the application of the ripple cancellation technique to a synchronous buck converter for a high-efficiency envelope amplifier in an EER-linearized RFPA; a 66% reduction in switching frequency compared with the equivalent conventional buck converter, validated experimentally and yielding an efficiency improvement of between 12.4% and 16% for the specifications of this work; the topology and design of a buck converter with two cascaded ripple cancellation networks (RCNs) to improve the performance and robustness of the single-network solution; the combination of a phase-shifted multi-phase buck converter with the ripple cancellation technique to reduce the ratio of switching frequency to signal bandwidth; the optimization of the closed-loop control of the envelope amplifier to improve on the open-loop design for both the conventional and the ripple cancellation buck converter; a simulation tool that optimizes the envelope amplifier design process by estimating the figures of merit of the EER RFPA from the envelope amplifier design; and the integration and characterization of the RCN-buck-based envelope amplifier in the complete radio-frequency transmitter, achieving a high efficiency of between 57% and 70.6% for output powers of 14.4 W and 40.7 W, respectively.
This thesis is organized in six chapters. Chapter 1 introduces the RFPA application and describes the main problems, challenges, and existing solutions. Chapter 2 presents the state of the art of RF power amplifiers, describing the main design techniques, the sources of non-linearity, and the optimization techniques. Chapter 3 focuses on the solutions proposed for the envelope amplifier; the control strategy is addressed, and a closed-loop design optimization is presented for both the conventional buck converter and the ripple cancellation buck converter. Chapter 4 focuses on the design process of the envelope amplifier and presents the design tool developed to evaluate the influence of the envelope amplifier on the figures of merit of the RFPA. Chapter 5 presents the integration process, the tests performed for the different modulations, and the complete characterization and analysis of the RF amplifier. Chapter 6 describes the main conclusions of the thesis and future lines of work.
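A back-of-the-envelope sketch of why ripple cancellation permits a lower switching frequency: the peak-to-peak inductor current ripple of a buck converter grows as the switching frequency drops, so a network that attenuates the output ripple lets the same output specification be met at a lower frequency. The component values below are illustrative, not the thesis design:

```python
# Ideal continuous-conduction-mode buck converter: peak-to-peak inductor
# current ripple is Vout * (1 - D) / (L * fsw), with duty cycle D = Vout / Vin.
def buck_ripple(vout, vin, L, fsw):
    d = vout / vin
    return vout * (1 - d) / (L * fsw)   # peak-to-peak current ripple [A]

vin, vout, L = 24.0, 12.0, 4.7e-6       # illustrative operating point
for fsw in (4e6, 1.36e6):               # conventional vs. ~66% lower frequency
    print(f"fsw = {fsw/1e6:.2f} MHz -> ripple = "
          f"{buck_ripple(vout, vin, L, fsw):.2f} A")
# If a ripple cancellation network attenuates the output ripple, the lower
# switching frequency can still meet the output ripple specification, cutting
# switching losses and raising the envelope amplifier efficiency.
```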
Abstract:
Fiber reinforced polymer (FRP) composites have found widespread use in the repair and strengthening of concrete structures. FRP composites exhibit a high strength-to-weight ratio and corrosion resistance, and are convenient to use in repair applications. Externally bonded FRP flexural strengthening of concrete beams is the most widespread application of this technique. A common cause of failure in such members is intermediate crack-induced debonding (IC debonding), in which the FRP substrate separates abruptly from the concrete. Continuous monitoring of the concrete-FRP interface is essential to prevent IC debonding. Objective condition assessment and performance evaluation are challenging activities, since they require some type of monitoring to track the response over a period of time. In this paper, a multi-objective model updating method integrated in a structural health monitoring context is demonstrated as a promising technology for the safety and reliability of this kind of strengthening technique. The proposed method, solved by a multi-objective extension of the particle swarm optimization method, is based on strain measurements under controlled loading. The use of permanently installed fiber Bragg grating (FBG) sensors, embedded into the FRP-concrete interface or bonded onto the FRP strip, together with the proposed methodology, results in an automated method able to operate in an unsupervised mode.
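As a hedged sketch of the model-updating loop described above, the following code uses a plain particle swarm optimizer to fit a toy two-parameter interface model to synthetic "FBG" strain data. The paper uses a multi-objective PSO extension; this minimal version replaces it with a fixed-weight scalarization of the two residuals, and the model and numbers are illustrative:

```python
# Model updating with particle swarm optimization (simplified sketch).
import numpy as np

def predicted_strains(theta, loads):
    k1, k2 = theta                        # toy two-parameter interface model
    return loads / k1, loads / (k1 + k2)  # strains at two sensor locations

def objectives(theta, loads, meas1, meas2):
    p1, p2 = predicted_strains(theta, loads)
    return np.mean((p1 - meas1) ** 2), np.mean((p2 - meas2) ** 2)

def pso(f, bounds, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n, len(lo)))
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

loads = np.linspace(1e3, 5e3, 8)
true = (2.1e6, 3.4e6)                     # "unknown" interface parameters
m1, m2 = predicted_strains(true, loads)   # stand-in for FBG strain measurements
f = lambda t: 0.5 * sum(objectives(t, loads, m1, m2))  # fixed-weight scalarization
print(pso(f, bounds=[(1e5, 1e7), (1e5, 1e7)]))         # recovered parameters
```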
Abstract:
Fixed-point arithmetic is one of the most common design choices for systems where area, power, or throughput are heavily constrained. To produce implementations where cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which designers devote between 25% and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and the less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. Thesis we explore different aspects of the quantization problem and present new methodologies for each of them.
Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art methodology based on statistical Modified Affine Arithmetic (MAA) in order to model systems that contain control-flow structures. Our methodology generates the different execution paths automatically, determines the regions of the input domain that exercise each of them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% from the simulation-based reference values.
A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group separately, and finally combines the results of all the groups. In this way, the number of noise sources is kept under control at all times and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.
This Ph.D. Thesis also addresses the development of word-length optimization methodologies based on Monte Carlo simulations that run in reasonable times. We present two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although a given confidence level must be guaranteed for the final results of the search, more relaxed confidence levels, and therefore considerably fewer samples per simulation, can be used in the initial stages of the search, when we are still far from the optimized solutions. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small and medium-sized problems.
Finally, this work introduces HOPLITE, an automated, flexible, and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground to easily prototype and verify new methodologies for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions can be connected to the existing interfaces of the tool in order to expand and improve the capabilities of HOPLITE.
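As a hedged sketch of the classical greedy word-length search that the thesis accelerates, the following code trims fractional bits of each signal of a toy datapath while a Monte Carlo estimate of the output error stays within budget; the datapath and error bound are illustrative:

```python
# Greedy word-length optimization with a Monte Carlo error estimate (sketch).
import numpy as np

def quantize(x, frac_bits):
    step = 2.0 ** -frac_bits
    return np.round(x / step) * step      # round-to-nearest fixed point

def output_error(wl, samples):
    """Monte Carlo estimate of the error of a toy datapath y = a*b + c."""
    a, b, c = (quantize(samples[:, i], wl[i]) for i in range(3))
    y_ref = samples[:, 0] * samples[:, 1] + samples[:, 2]
    return np.max(np.abs((a * b + c) - y_ref))

rng = np.random.default_rng(0)
samples = rng.uniform(-1, 1, size=(10_000, 3))
wl, max_err = [16, 16, 16], 1e-3          # initial word-lengths, error budget

improved = True
while improved:                           # greedily trim one bit at a time
    improved = False
    for i in range(3):
        trial = wl.copy()
        trial[i] -= 1
        if output_error(trial, samples) <= max_err:
            wl = trial
            improved = True
print(wl)  # fewer fractional bits -> cheaper hardware at the same accuracy
```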
Abstract:
The variability of the solar spectrum in the field may reduce the annual energy yield of multijunction solar cells. It would therefore be desirable to implement a cell design procedure based on the maximization of the annual energy yield. In this study, we present a measurement technique that generates maps of the real performance of a solar cell over a range of light spectra, using a solar simulator with computer-controllable spectral content. These performance maps are shown to be a powerful tool for analyzing the characteristics of any given set of annual spectra representative of a site and their influence on the energy yield of any solar cell. Using this method, we demonstrate the effect of luminescence coupling in buffering against variations of the spectrum and improving the annual energy yield.
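As a hedged sketch of how such performance maps can feed a yield-based design procedure, the following code folds a tabulated efficiency-versus-spectral-index map with an illustrative set of annual spectral conditions to estimate the energy yield; all numbers are made up for illustration:

```python
# Estimate annual energy yield from a measured performance map (sketch).
import numpy as np

# Performance map: cell efficiency tabulated against a spectral index
# (e.g. a blue/red content ratio); values are illustrative.
spectral_index = np.array([0.7, 0.9, 1.0, 1.1, 1.3])
efficiency     = np.array([0.30, 0.36, 0.38, 0.36, 0.31])

# Annual spectral conditions for a hypothetical site:
# (spectral index, irradiance [W/m^2], hours per year at that condition).
site = [(0.75, 650, 300), (0.95, 900, 1200), (1.05, 950, 1100), (1.25, 700, 400)]

annual_yield_wh = sum(
    hours * g * np.interp(idx, spectral_index, efficiency)  # map lookup
    for idx, g, hours in site
)
print(f"estimated annual energy yield: {annual_yield_wh / 1e3:.1f} kWh/m^2")
# Repeating this for candidate cell designs turns the performance map into the
# objective of the yield-maximization design procedure described above.
```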