955 results for automatic test and measurement
Abstract:
This paper describes a novel method to enhance current airport surveillance systems used in Advanced Surface Movement Guidance and Control Systems (A-SMGCS). The proposed method allows for the automatic calibration of measurement models and enhanced detection of nonideal situations, increasing the integrity of the surveillance products. It is based on the definition of a set of observables from the surveillance processing chain and on a rule-based expert system aimed at changing the data processing methods.
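A minimal sketch of the rule-based idea: observables from the processing chain are checked against thresholds, and matching rules recommend a change of processing method. The observable names, thresholds, and actions below are hypothetical illustrations, not taken from the paper.

```python
# Hypothetical observables extracted from the surveillance processing chain.
observables = {
    "plot_to_track_ratio": 0.78,   # fraction of plots associated to tracks
    "sensor_bias_m": 14.0,         # estimated residual bias of one sensor, metres
    "track_drop_rate": 0.03,       # tracks lost per scan
}

# Hypothetical rule base: (condition, recommended processing change).
rules = [
    (lambda o: o["plot_to_track_ratio"] < 0.85, "re-run automatic sensor calibration"),
    (lambda o: o["sensor_bias_m"] > 10.0,       "down-weight the biased sensor in the fusion"),
    (lambda o: o["track_drop_rate"] > 0.05,     "switch tracker to a higher process-noise model"),
]

actions = [action for condition, action in rules if condition(observables)]
print(actions)  # processing-chain changes recommended for this scan
```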
Abstract:
This report presents an overview of our current work on the efficient parallel implementation of traditional logic programming systems. The work is based on the &-Prolog System, a system for the automatic parallelization and execution of logic programming languages within the Independent And-parallelism model, and on the global analysis and parallelization tools that have been developed for this system. In order to make the report self-contained, we first describe the "classical" tools of the &-Prolog system. We then explain in detail the work performed in improving and generalizing the global analysis and parallelization tools. Finally, we describe the objectives that will drive our future work in this area.
Abstract:
The design of optical systems, considered an art by some and a science by others, has been practiced for centuries. Imaging optical systems have been evolving since Ancient Egyptian times, as have the associated design techniques. Nevertheless, the most important developments in design techniques have taken place over the past 50 years, in part due to advances in manufacturing techniques and the development of increasingly powerful computers, which have enabled fast and efficient calculation and analysis of ray tracing through optical systems. As a result, optical system design has evolved from designs developed solely from paraxial optics to modern designs created using different multiparametric optimization techniques. The main problem the designer faces is that the different optimization techniques require an initial design as a starting point, and that choice can constrain the possible solutions. In other words, if the starting point is far from the global minimum, i.e., the optimal design for the set conditions, the final design may be a local minimum close to the starting point and far from the global minimum. This type of problem has led to the development of global optimization methods that are increasingly less sensitive to the starting point of the optimization process. Even though it is possible to obtain good designs with these techniques, many attempts are needed to reach the desired solution, and the whole process remains uncertain, since there is no guarantee that the optimal solution will be obtained.
The Simultaneous Multiple Surfaces (SMS) method, originally conceived as a tool for calculating anidolic (nonimaging) concentrators, has also proved useful for the design of image-forming optical systems, although until now it has been used only occasionally for imaging designs. This thesis aims to present the SMS method as a technique that can be used in general for the design of any optical system, whether with a fixed focal length or afocal with a defined magnification, and also as a tool that can be industrialized to help designers tackle the design of complex optical systems. The thesis is divided into five chapters. Chapter 1 establishes the fundamentals, presenting the basic concepts the reader needs, even without an extensive background in image-forming optics, to understand the approach and the results presented in the following chapters. Chapter 2 addresses the problem of optimizing optical systems. Here the SMS method is presented as an ideal tool to obtain a starting point for the optimization process, and the importance of the starting point for the final solution is demonstrated through a worked example. Additionally, this chapter introduces various techniques for the interpolation and optimization of the surfaces obtained through the application of the SMS method. Even though only the SMS2D method is used in this thesis, a method based on radial basis functions (RBF) is also presented for the interpolation and optimization of the point clouds obtained from the SMS3D method, as sketched below. Chapter 3 presents the design, manufacturing and measurement of a catadioptric panoramic lens designed to work in the long-wavelength infrared (LWIR) band (8-12 microns) for perimeter surveillance applications. The lens is designed using the SMS method for three input wavefronts and four surfaces. The power of the design method is evident in the ease with which this complex system is designed, and the images presented show how the prototype perfectly fulfills its purpose. Chapter 4 addresses the problem of designing ultra-compact optical systems and introduces the concept of multichannel systems, i.e., optical systems composed of a series of channels working in parallel. Such systems are especially suitable for the design of afocal systems. Design strategies are presented for both monochromatic and polychromatic multichannel systems, and a telescope with a magnification of six and a half, designed with the novel technique introduced in this chapter, is presented. Chapter 5 presents a generalization of the SMS method for meridional rays, together with the algorithm to be used for the design of any fixed-focal-length optical system. The so-called phase-1 optimization is inserted into the algorithm so that, by changing the initial conditions of the SMS design, the skew rays behave similarly even though the design is carried out for meridional rays. To test the power of the developed algorithm, a set of designs with different numbers of surfaces is presented. The stability and strength of the algorithm become apparent in the first six-surface system ever designed by the SMS method.
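A minimal sketch of the RBF-based interpolation of an SMS3D point cloud mentioned in Chapter 2, using SciPy's thin-plate-spline interpolator. The point cloud, kernel choice, and smoothing value below are placeholders rather than the thesis's actual surfaces.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical SMS3D output: a scattered cloud of (x, y) locations with sag values z.
rng = np.random.default_rng(0)
xy = rng.uniform(-1.0, 1.0, size=(200, 2))      # sample locations on the aperture
z = 0.1 * (xy[:, 0] ** 2 + xy[:, 1] ** 2)       # stand-in sag values

# Fit a smooth thin-plate-spline surface through the cloud.
surface = RBFInterpolator(xy, z, kernel='thin_plate_spline', smoothing=1e-6)

# Evaluate the interpolated surface on a regular grid for ray tracing / optimization.
grid = np.mgrid[-1:1:50j, -1:1:50j].reshape(2, -1).T
z_grid = surface(grid)
print(z_grid.shape)  # (2500,)
```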
Abstract:
In recent years, the Wireless Visual Sensor Network (WVSN) has drawn great interest in the wireless communication research community. Such networks enable a wealth of new applications, such as building security control, image sensing, and target localization. However, current wireless communication protocols (ZigBee, Wi-Fi, and Bluetooth, for example) cannot fully satisfy the combined demands of high data rate, low power consumption, short range, and high robustness, so a new communication protocol is highly desirable for this kind of application. The Ultra-Wideband (UWB) wireless communication protocol, which has grown in importance in the high-data-rate wireless field, is emerging as an important topic for WVSN research. UWB offers great promise to satisfy the growing demand for low-cost, high-speed digital wireless indoor and home networks. The large available bandwidth, the potential for high-data-rate transmission, the potential for low complexity and low power consumption, and the low implementation cost all present a unique opportunity for UWB to become a widely adopted radio solution for future Wireless Personal Area Network (WPAN) applications. UWB is defined as any transmission that occupies a bandwidth of more than 20% of its center frequency, or more than 500 MHz. In 2002, the Federal Communications Commission (FCC) mandated that UWB radio transmission can legally operate in the range from 3.1 to 10.6 GHz at a transmit power density of -41.3 dBm/MHz. Under the FCC guidelines, the use of UWB technology can provide enormous capacity over short communication ranges. Considering Shannon's capacity equation, channel capacity grows linearly with bandwidth, whereas a comparable capacity increase would require an exponential increase in transmission power. In recent years, several UWB developments have been widely studied, among which the MB-OFDM UWB wireless communication protocol is considered the leading choice and has been adopted in the ISO/IEC standard for WPANs. By combining OFDM modulation with data transmission based on frequency-hopping techniques, the MB-OFDM UWB system is able to support data rates ranging from 55 to 480 Mbps over distances of up to 10 meters. MB-OFDM technology is expected to consume very little power and silicon area and to provide low-cost solutions that satisfy consumer market demands. To fulfill these expectations, MB-OFDM UWB research and development have to cope with several challenges: high-sensitivity synchronization, low-complexity constraints, strict power limitations, scalability, and flexibility. Such challenges require state-of-the-art digital signal processing expertise to develop systems that can take full advantage of the UWB spectrum and support future indoor wireless applications. This thesis focuses on the full optimization of an MB-OFDM UWB digital baseband transceiver system, aiming to research and design a wireless communication subsystem for the Wireless Visual Sensor Network application. The inherently high complexity of the FFT/IFFT processor and the synchronization system, together with the high operating frequency of all processing elements, becomes the bottleneck for the hardware design and implementation of a low-power MB-OFDM-based UWB digital baseband system. The proposed transceiver system targets low power and low complexity under the premise of high performance. Optimizations are made at both the algorithm and the architecture level for each element of the transceiver system. Low-power, hardware-efficient structures are first proposed for the core computation modules: a mixed-radix, pipelined architecture for the Fast Fourier Transform (FFT/IFFT) processor, and a cost-speed-balanced Viterbi Decoder (VD) module, with the aim of lowering power consumption and increasing processing speed.
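The Shannon capacity relation cited in this abstract makes the bandwidth-versus-power trade-off explicit (B is the bandwidth and SNR the signal-to-noise ratio):

```latex
C = B \log_2\left(1 + \mathrm{SNR}\right)
```

Capacity grows linearly with B but only logarithmically with signal power, so a large capacity gain at fixed bandwidth would require an exponential power increase, whereas UWB obtains it by widening B.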
In addition, a low-complexity, sign-bit-correlation-based symbol timing synchronization scheme is presented to detect and synchronize the OFDM packets robustly and accurately. Moreover, several state-of-the-art technologies are used to develop the other processing subsystems, and an entire MB-OFDM digital baseband transceiver system is integrated. The target device for the proposed transceiver system is a Xilinx Virtex-5 XC5VLX110T FPGA board. In order to validate the proposed transceiver system on the FPGA board, a unified algorithm-architecture-circuit hardware/software co-design environment for complex FPGA system development is presented in this work. The main objective of the proposed strategy is to find an efficient methodology for designing a configurable, optimized FPGA system with as little effort as possible spent on the verification procedure, so as to shorten the system development period. The presented co-design methodology has the advantages of being easy to use, covering all steps from algorithm proposal to hardware verification, and being applicable to almost all kinds of FPGA development. Because only the digital baseband transceiver system is developed in this thesis, validating the transmission of signals through a wireless channel in real communication environments still requires the analog front end and RF components. However, by using the aforementioned hardware/software co-simulation methodology, the transmitter and receiver digital baseband systems can communicate with each other through the channel models proposed by the IEEE 802.15.3a research group and implemented in MATLAB. Thus, by simply adjusting the characteristics of each channel model (e.g., mean excess delay and center frequency), we can estimate the transmission performance of the proposed transceiver system in different communication scenarios. The main contributions of this thesis are:
• A novel mixed-radix 128-point FFT algorithm using a multipath pipelined architecture is proposed. The complex multipliers for each processing stage are designed using modified shift-add architectures. The system word length and twiddle-factor word length are compared and selected based on Signal-to-Quantization-Noise Ratio (SQNR) and power analysis.
• The IFFT processor performance is analyzed under different Block Floating Point (BFP) arithmetic configurations for overflow control, so as to find the most suitable IFFT architecture based on the proposed FFT processor.
• An innovative low-complexity timing synchronization and compensation scheme, consisting of Packet Detector (PD) and Timing Offset Estimation (TOE) functions, is employed for the MB-OFDM UWB receiver. By simplifying the cross-correlation and maximum-likelihood functions to sign-bit-only operations, the computational complexity is significantly reduced (see the sketch following this abstract).
• A 64-state soft-decision Viterbi Decoder using a high-speed radix-4 Add-Compare-Select architecture is proposed. The Two-Pointer Even algorithm is also introduced into the traceback unit for hardware efficiency.
• Several state-of-the-art technologies are integrated into the complete baseband transceiver system, with the aim of implementing a highly optimized UWB communication system.
• An improved design flow is proposed for complex system implementation, which can be used for general Field-Programmable Gate Array (FPGA) designs.
The design flow not only dramatically reduces the time needed for functional verification, but also provides automatic analysis of errors and output delays for the implemented hardware systems.
• A virtual communication environment is established for validating the proposed MB-OFDM transceiver system. This methodology proves easy to use and convenient for analyzing the digital baseband system, without an analog front end, under different communication environments.
This PhD thesis is organized into six chapters. Chapter 1 gives a brief introduction to the UWB field and related work, along with the motivation for the MB-OFDM system development. Chapter 2 presents the general information and requirements of the MB-OFDM UWB wireless communication protocol. Chapter 3 presents the architecture of the MB-OFDM digital baseband transceiver system and details the proposed algorithm and architecture for each processing element; the design challenges of such a system involve trade-offs among design complexity, power consumption, hardware cost, system performance, and other aspects, all of which are analyzed and discussed. Chapter 4 proposes the hardware/software co-design methodology; each step of the design flow is detailed with examples encountered during system development, and, taking advantage of this design strategy, the virtual communication procedure is carried out to test and analyze the proposed transceiver architecture. Experimental results from the co-simulation and the synthesis report of the implemented FPGA system are given in Chapter 5. Chapter 6 presents the conclusions and future work, as well as the results derived from this PhD work.
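A minimal sketch of the sign-bit correlation idea used for packet detection and timing-offset estimation. The preamble, threshold, and received signal below are placeholders, not the thesis's fixed-point implementation.

```python
import numpy as np

def sign_bit_correlate(rx, preamble):
    """Cross-correlate using only the sign bits of the samples."""
    rx_s = np.sign(rx)            # +/-1 (sign-bit quantization)
    pre_s = np.sign(preamble)
    n = len(preamble)
    # Each lag reduces to additions/subtractions only: no multipliers needed.
    return np.array([np.sum(rx_s[k:k + n] * pre_s) for k in range(len(rx) - n + 1)])

# Placeholder preamble and received stream (noise plus a delayed preamble).
rng = np.random.default_rng(1)
preamble = np.sign(rng.standard_normal(32))
rx = 0.3 * rng.standard_normal(200)
rx[80:112] += preamble

corr = sign_bit_correlate(rx, preamble)
start = int(np.argmax(corr))                    # estimated packet start (timing offset)
detected = corr[start] > 0.6 * len(preamble)    # placeholder detection threshold
print(start, detected)
```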
Abstract:
Each day, the design and development of vehicle suspension systems relies more on computer-aided design and computer-aided engineering tools, which allow anticipating problems and solving them ahead of time. Dynamic behavior and characteristics are thus simulated accurately and inexpensively, with moderate computation times and resources. There is, however, an iterative component in the process, which involves the manual definition of designs in a trial-and-error manner. This Thesis takes a step towards the development of an efficient simulation framework capable of simulating, analyzing and evaluating vehicle suspension designs, and of automatically improving them by varying the design parameters towards the optimal solution.
The multibody systems approach is hereby used to model a three-dimensional, 18-degree-of-freedom coach in a comprehensive yet efficient way. The suspension geometry and characteristics resemble those of the real vehicle, as do the rest of the vehicle parameters. In order to simulate the vehicle dynamics, an efficient, state-of-the-art multibody formulation based on Maggi's equations is employed, and a three-dimensional graphics viewer is developed. As a result, vehicle maneuvers can be simulated faster than real time. Once the dynamics are available, a sensitivity analysis is crucial for a robust and efficient optimization. To that end, a mathematical technique is introduced which allows differentiating the dynamic variables within the multibody formulation in a general, algorithmic, accurate-to-machine-precision, and reasonably efficient way: automatic differentiation. This method propagates the derivatives with respect to the design parameters throughout the computer code, with little user interaction. In contrast with other attempts in the literature, which are mostly not general-purpose, a benchmarking of libraries is carried out, a hybrid direct-automatic differentiation approach for the computation of sensitivities is developed, and several real-life examples are analyzed. Finally, a design optimization of the aforementioned vehicle is carried out. Four different types of dynamic response optimization are presented: parameter identification, handling optimization, ride comfort optimization and multi-objective optimization, all of which are applied to the design of the coach example. Together with analytical and graphical evidence of the results, efficiency considerations are discussed. In summary, the dynamic behavior of vehicles is improved by using the multibody systems approach along with advanced differentiation and optimization techniques, enabling an automatic, accurate and efficient tuning of the design parameters.
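A minimal illustration of the forward-mode automatic differentiation principle mentioned above, using dual numbers in Python. This is a generic sketch under simplified assumptions, not the thesis's hybrid direct-automatic formulation or its multibody code.

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Dual number: carries a value and its derivative together."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule propagates the derivative through the code automatically.
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)

def spring_force(k, x):
    # Toy "dynamic variable": F = k * x * x (placeholder, not the coach model).
    return k * x * x

k = Dual(5.0, der=1.0)      # seed the derivative w.r.t. the design parameter k
x = Dual(0.2)               # state, independent of k in this toy example
F = spring_force(k, x)
print(F.val, F.der)         # ~0.2 and dF/dk = x**2 ~ 0.04
```

Overloading the arithmetic operators is what lets the derivative ride along through ordinary simulation code with little user intervention.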
Wireless measurement system for structural health monitoring with high time synchronization accuracy
Abstract:
Structural health monitoring (SHM) systems have excellent potential to improve the regular operation and maintenance of structures. Wireless networks (WNs) have been used to avoid the high cost of traditional generic wired systems. The most important limitations of SHM wireless systems are time-synchronization accuracy, scalability, and reliability. A complete wireless system for structural identification under environmental load is designed, implemented, deployed, and tested on three different real bridges. Our contribution ranges from the hardware to the graphical front end. The system goal is to overcome the main limitations of WNs for SHM, particularly with regard to reliability, scalability, and synchronization. We reduce spatial jitter to 125 ns, far below the 120 μs required for high-precision acquisition systems and much better than the 10 μs of current solutions, without adding complexity. The system is scalable to a large number of nodes to allow dense sensor coverage of real-world structures, limited only by a compromise between measurement length and the mandatory time to obtain the final result. The system addresses a myriad of problems encountered in a real deployment under difficult conditions, rather than in a simulation or laboratory test bed.
Abstract:
Individuals who use wheelchairs as their main means of mobility have a high incidence (73%) of shoulder pain owing to overuse and the repetitive propulsion movement. There are numerous diagnostic methods for the detection of shoulder pathologies; however, the literature points to the need for a reliable, noninvasive test to properly assess shoulder pain, and suggests thermography as a suitable technique for joint pain evaluation. Infrared thermography (IRT) provides information about physiological processes by studying skin temperature (Tsk) distributions. Due to the high correlation of skin temperature between both sides of the body, thermal asymmetries between contralateral flanks are an indicator of underlying pathologies or physical dysfunctions. The reliability of infrared thermography has been studied in healthy subjects, but no studies have analyzed the reliability of IRT in wheelchair users (WCUs). The special characteristics of people with disabilities (sweating and thermoregulation problems, or blood distribution) make it necessary to study the factors affecting the application of IRT in WCUs. Discrepant reports exist on the benefits of, or damage resulting from, physical exercise and its relationship to shoulder overuse injuries in WCUs. Recent findings indicate that overhead sports increase the risk of rotator cuff tears in wheelchair patients with paraplegia. Since there is no agreement in the literature, the thermographic profile of wheelchair athletes and nonathletes and its relation to shoulder pain should also be analyzed. At present, infrared thermographic studies during exercise have been carried out only with able-bodied populations. Understanding the thermographic response to wheelchair exercise in relation to shoulder pain will offer insight into the development of shoulder pain, which is necessary for appropriate interventions. The first study presented in this thesis demonstrates that the reliability of IRT in WCUs varies depending on the areas of the body that are analyzed. Moreover, it corroborates that IRT is a noninvasive, noncontact technique that allows the measurement of Tsk and with which research on WCUs can advance. The second study provides a thermal profile of WCUs. Nonathletic subjects presented higher side-to-side skin temperature differences (ΔTsk) than athletes, and both had greater ΔTsk than the able-bodied results published in the literature. Nonathletes also showed a higher Wheelchair Users Shoulder Pain Index (WUSPI) score than athletes.
The shoulder region of interest (ROI) was the area with the highest ΔTsk of the regions measured. The analysis of the athletes' Tsk showed that some ROIs are related to shoulder pain. These findings help to understand the thermal map of WCUs. Finally, the third study evaluated the thermal response of WCUs to exercise. There were significant differences in Tsk between the pre-test and 10 minutes post-test in 12 ROIs, and between the post-test and 10 minutes post-test in most of the ROIs. These differences were attenuated when the ΔTsk was compared before and after exercise. Skin temperature tended to decrease immediately after the test and then to increase significantly 10 minutes after completing the exercise. The ΔTsk versus shoulder pain analysis yielded significant inverse relationships in 5 of the 26 ROIs. No significant correlations were found between the propulsion variables and the results of the WUSPI questionnaire. All kinematic variables were significantly correlated with the temperature asymmetries in multiple ROIs. These results indicate that high-performance wheelchair athletes exhibit a heat production capacity similar to that of the able-bodied population; however, they present a thermal pattern more characteristic of prolonged exercise than of brief exertion. This work contributes to the understanding of temperature changes in wheelchair athletes during exercise and has implications for sports and rehabilitation programs.
Abstract:
Sprinkler water distribution tests are conventionally performed manually, requiring time and trained labor. Automating these tests reduces the demand for such resources and has the potential to minimize failures and/or procedural deviations. Currently, testing and calibration laboratories accredited by legal bodies must report the measurement uncertainty of their instruments and measurement systems. In addition, testing and calibration standards specify acceptable uncertainty levels; the sprinkler water distribution test standard ISO 15886-3 (2012), for instance, requires an expanded uncertainty of at most 3% in 80% of the collectors. The objectives of this work were to develop an automated system for laboratory sprinkler tests and to carry out a measurement uncertainty analysis, both to quantify the uncertainty of the test results and to support the sizing of the collection tubes. The automated system consisted of a management subsystem, implemented as a supervisory application, plus pressurization and collection subsystems built from purpose-developed microprocessor-based electronic modules. Following instructions from the management subsystem, the pressurization subsystem adjusted the pressure at the sprinkler by controlling the pump speed, while the collection subsystem measured the water application intensity along the sprinkler's wetted radius. The water caught by each collector drained into a collection tube connected to one solenoid valve of a manifold equipped with a pressure transmitter. Each valve was actuated individually, in sequence, so that the water level in each collection tube could be measured with the transmitter. The analyses showed that the smallest uncertainties were obtained for the smallest collection tube diameters, so the smallest feasible diameter should be used. As for collection time, measurement uncertainty decreased as the duration increased, and a minimum time is required to reach the target uncertainty. Although each intensity requires a minimum time to guarantee the uncertainty, the minimum level difference to be measured was the same. Therefore, for tests aiming to meet the uncertainty requirement, the level difference in the tubes was monitored, which simplified running the test. A second test condition considered a collection time of 30 sprinkler revolutions, also required by ISO 15886-3 (2012). A third condition considered 1 h of collection, as traditionally performed. The water distribution curves obtained with the developed system were similar to those obtained in conventional tests for the three evaluated conditions. For collection times of 1 h or 30 sprinkler revolutions, the automated system required less total test time than the conventional test. However, the developed system required more time to reach the target uncertainty, which is a limitation even though it is automated. In any case, the system only required a technician to enter the test parameters and start it, freeing the technician to allocate time to other activities.
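As an illustration of the level measurement described above, a hydrostatic relation can convert the pressure reading into a water level, and the collector geometry converts the accumulated volume into applied depth. The conversion and all diameters and readings below are assumptions for illustration, not the actual rig dimensions or the thesis's calibration.

```python
import math

RHO_G = 1000.0 * 9.81          # water density * gravity, Pa per metre of head

def water_level_m(pressure_pa: float) -> float:
    """Hydrostatic head: h = P / (rho * g)."""
    return pressure_pa / RHO_G

def applied_depth_mm(level_m: float, tube_d_m: float, collector_d_m: float) -> float:
    """Depth caught by the collector, from the level accumulated in the tube."""
    tube_area = math.pi * tube_d_m ** 2 / 4
    collector_area = math.pi * collector_d_m ** 2 / 4
    volume_m3 = level_m * tube_area
    return 1000.0 * volume_m3 / collector_area   # metres of water -> mm

# Placeholder example: 500 Pa read on a 25 mm tube fed by an 80 mm collector.
h = water_level_m(500.0)
print(round(h, 4), round(applied_depth_mm(h, 0.025, 0.080), 3))
```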
Abstract:
Increasing economic competition drives industry to implement tools that improve process efficiency. Process automation is one of these tools, and Real-Time Optimization (RTO) is an automation methodology that considers economic aspects to update the process control system in accordance with market prices and disturbances. Basically, RTO uses a steady-state phenomenological model to predict the process behavior and then optimizes an economic objective function subject to this model. Although widely implemented in industry, there is no general agreement about the benefits of RTO, owing to some limitations discussed in the present work: structural plant/model mismatch, identifiability issues, and the low frequency of set-point updates. Some alternative RTO approaches have been proposed in the literature to handle the problem of structural plant/model mismatch. However, no thorough comparison has evaluated the scope and limitations of these RTO approaches under different aspects. For this reason, the classical two-step method is compared with more recent derivative-based methods (Modifier Adaptation; Integrated System Optimization and Parameter Estimation; and Sufficient Conditions of Feasibility and Optimality) using a Monte Carlo methodology. The results of this comparison show that the classical RTO method is consistent, provided that the model is flexible enough to represent the process topology, that the parameter estimation method is appropriate to handle the measurement noise characteristics, and that a method is in place to improve the quality of the sample information. At each iteration, the RTO methodology updates some key parameters of the model; here, identifiability issues caused by a lack of measurements and by measurement noise can be observed, resulting in poor prediction ability. Therefore, four different parameter estimation approaches (Rotational Discrimination; Automatic Selection and Parameter Estimation; Reparametrization via Differential Geometry; and classical nonlinear Least Squares) are evaluated with respect to prediction accuracy, robustness, and speed. The results show that the Rotational Discrimination method is the most suitable for implementation in an RTO framework, since it requires less a priori information, is simple to implement, and avoids the overfitting caused by the Least Squares method. The third RTO drawback discussed in the present thesis is the low frequency of set-point updates, which increases the period during which the process operates at suboptimal conditions. An alternative to handle this problem is proposed in this thesis, integrating classical RTO and Self-Optimizing Control (SOC) through a new Model Predictive Control strategy. The new approach demonstrates that the low-frequency set-point update problem can be mitigated, improving the economic performance. Finally, the practical aspects of the RTO implementation are addressed in an industrial case study, a Vapor Recompression Distillation (VRD) process located at the Petrobras refinery in Paulínea. The conclusions of this study suggest that the model parameters are successfully estimated by the Rotational Discrimination method; that the RTO is able to improve the process profit by about 3%, equivalent to 2 million dollars per year; and that the integration of SOC and RTO may be an interesting control alternative for the VRD process.
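A minimal sketch of the classical two-step RTO loop described above, under toy assumptions: the plant, model, and economics are placeholders, and SciPy's scalar optimizer stands in for the industrial solvers.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)

def plant(u):
    # "True" plant (unknown to the optimizer): noisy profit with structural mismatch.
    return -(u - 2.0) ** 2 + 5.0 + 0.1 * rng.standard_normal()

def model_profit(u, theta):
    # Steady-state model used by RTO; theta is the adapted parameter.
    return -theta * (u - 2.2) ** 2 + 5.0

u, theta = 1.0, 1.0
for _ in range(10):
    # Step 1: parameter estimation -- adapt theta to match the measured profit.
    measured = plant(u)
    theta = minimize_scalar(lambda th: (model_profit(u, th) - measured) ** 2,
                            bounds=(0.1, 10.0), method='bounded').x
    # Step 2: economic optimization -- compute the next set point from the updated model.
    u = minimize_scalar(lambda v: -model_profit(v, theta),
                        bounds=(0.0, 4.0), method='bounded').x
print(round(u, 3), round(theta, 3))
```

Because of the structural mismatch, the loop settles near the model optimum (u ≈ 2.2) rather than the plant optimum (u = 2.0), which is exactly the limitation the derivative-based methods discussed above aim to overcome.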
Abstract:
This paper addresses the problem of the automatic recognition and classification of temporal expressions and events in human language. Efficacy in these tasks is crucial if the broader task of temporal information processing is to be performed successfully. We analyze whether the application of semantic knowledge to these tasks improves the performance of current approaches. We therefore present and evaluate a data-driven approach as part of a system: TIPSem. Our approach uses lexical semantics and semantic roles as additional information to extend classical approaches, which are principally based on morphosyntax. The results obtained for English show that semantic knowledge aids in temporal expression and event recognition, achieving error reductions of 59% and 21%, respectively, while in classification the contribution is limited. From the analysis of the results it may be concluded that the application of semantic knowledge leads to more general models and aids in the recognition of temporal entities that are ambiguous at shallower levels of language analysis. We also found that lexical semantics and semantic roles have complementary advantages, and that it is useful to combine them. Finally, we carried out the same analysis for Spanish, and the results show comparable advantages. This supports the hypothesis that applying the proposed semantic knowledge may be useful for different languages.
Abstract:
From the Introduction. With the results of its asset quality review (AQR), to be published on 26 October 2014, the European Central Bank intends to provide clarity on the shape of the 120 banks it will supervise in the eurozone, and it may request a series of follow-up actions before assuming its new set of tasks under the Single Supervisory Mechanism (SSM) Regulation in November. On the same day, the European Banking Authority (EBA) will also publish the results of its stress test, covering 123 banks across 22 European Economic Area (EEA) countries. For the ECB, it will be a matter of setting the standard for its future task, whereas the EBA seeks to restore the confidence it lost in the 2011 stress test and the 2012 capital exercise. Both institutions will need to indicate how they will cooperate in these tasks in the future and, through enhanced disclosure, strengthen confidence in the European banking system.
Abstract:
This study was carried out to detect differences in locomotion and feeding behavior between lame (group L; n = 41; gait score ≥ 2.5) and non-lame (group C; n = 12; gait score ≤ 2) multiparous Holstein cows in a cross-sectional study design. A model for automatic lameness detection was created using data from accelerometers attached to the hind limbs and noseband sensors attached to the head. Each cow's gait was videotaped and scored on a 5-point scale before and after a period of 3 consecutive days of behavioral data recording. The mean value from 3 independent, experienced observers was taken as the definitive gait score and considered the gold standard. For the statistical analysis, data from the noseband sensor and from one of the two accelerometers per cow (randomly selected), on 2 of 3 randomly selected days, were used. For the comparison between group L and group C, the t-test, the Aspin-Welch test and the Wilcoxon test were used. The sensitivity and specificity of lameness detection were determined with logistic regression and ROC analysis. Group L, compared with group C, had significantly lower eating and ruminating times, fewer eating chews, ruminating chews and ruminating boluses, longer lying time and lying bout duration, lower standing time, fewer standing and walking bouts, fewer, slower and shorter strides, and a lower walking speed. The model considering the number of standing bouts and walking speed was the best predictor of cows being lame, with a sensitivity of 90.2% and a specificity of 91.7%. The sensitivity and specificity of the lameness detection model were considered very high, even without the use of halter data. It was concluded that, under the conditions of the study farm, accelerometer data were suitable for accurately distinguishing between lame and non-lame dairy cows, even in cases of slight lameness with a gait score of 2.5.
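A minimal sketch of the kind of two-predictor lameness classifier described above, on synthetic data; the simulated effect sizes, coefficients, and threshold rule are illustrative assumptions, not the study's fitted model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
n = 200
lame = rng.integers(0, 2, n)                        # 0 = non-lame, 1 = lame (synthetic labels)
standing_bouts = rng.normal(14 - 3 * lame, 2)       # lame cows: fewer standing bouts
walking_speed = rng.normal(1.4 - 0.3 * lame, 0.1)   # lame cows: slower walking speed
X = np.column_stack([standing_bouts, walking_speed])

clf = LogisticRegression().fit(X, lame)
prob = clf.predict_proba(X)[:, 1]

# Pick the ROC point that balances sensitivity and specificity (Youden's J).
fpr, tpr, thresholds = roc_curve(lame, prob)
best = np.argmax(tpr - fpr)
print(f"sensitivity={tpr[best]:.2f}, specificity={1 - fpr[best]:.2f}")
```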
Abstract:
BACKGROUND Screening for aphasia in acute stroke is crucial for directing patients to early language therapy. The Language Screening Test (LAST), originally developed in French, is a validated language screening test that allows the detection of a language deficit within a few minutes. The aim of the present study was to develop and validate two parallel German versions of the LAST. METHODS The LAST includes subtests for naming, repetition, automatic speech, and comprehension. For the translation into German, the task constructs and the psycholinguistic criteria for item selection were identical to those of the French LAST. A cohort of 101 stroke patients, all native German speakers, was tested. Validation of the LAST was based on (1) analysis of the equivalence of the two German versions, established by administering both versions successively in a subset of patients, (2) internal validity by means of an internal consistency analysis, and (3) external validity by comparison with the short version of the Token Test in another subset of patients. RESULTS The two German versions were equivalent, as demonstrated by a high intraclass correlation coefficient of 0.91. Furthermore, an acceptable internal structure of the LAST was found (Cronbach's α = 0.74). A highly significant correlation (r = 0.74, p < 0.0001) between the LAST and the short version of the Token Test indicated good external validity of the scale. CONCLUSION The German version of the LAST, available in two parallel versions, is a new and valid language screening test for stroke.
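For reference, the internal consistency statistic reported above (Cronbach's α) follows the standard definition for k items with item variances σ_i² and total-score variance σ_t²:

```latex
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma_i^{2}}{\sigma_t^{2}}\right)
```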
Abstract:
Control and governance theories recognize that exchange partners are subject to two general forms of control, the unilateral authority of one firm and bilateral expectations extending from their social bond. In this way, a supplier both exerts unilateral, authority-based controls and is subject to socially-based, bilateral controls as it attempts to manage its brand successfully through reseller channels. Such control is being challenged by suppliers’ growing relative dependence on increasingly dominant resellers in many industries. Yet the impact of supplier relative dependence on the efficacy of control-based governance in the supplier’s channel is not well understood. To address this gap, we specify and test a control model moderated by relative dependence involving the conceptualization and measurement of governance at the level of specific control processes: incenting, monitoring, and enforcing. Our empirical findings show relative dependence undercuts the effectiveness of certain unilateral and bilateral control processes while enhancing the effectiveness of others, largely supporting our dual suppositions that each control process operates through a specialized behavioral mechanism and that these underlying mechanisms are differentially impacted by relative dependence. We offer implications of these findings for managers and identify our contributions to channel theory and research.
Abstract:
We propose a dual-parameter optical sensor achieved by UV inscription of a hybrid long-period grating (LPG)-fiber Bragg grating (FBG) structure in D fiber. The hybrid configuration permits the temperature to be detected from the FBG response and the external refractive index to be measured from the LPG response. In addition, the host D fiber permits effective modification of the device's sensitivity by cladding etching. The grating sensor has been used to measure the concentrations of aqueous sugar solutions, demonstrating its potential capability to detect concentration changes as small as 0.01%.
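Dual-parameter discrimination with such hybrid gratings is commonly formulated as a linear system relating the two wavelength shifts to the two measurands, which is then inverted; the sensitivity coefficients K would be obtained by calibration and are not given in the abstract, so the matrix below is a generic formulation rather than this device's characterization:

```latex
\begin{pmatrix} \Delta\lambda_{\mathrm{LPG}} \\ \Delta\lambda_{\mathrm{FBG}} \end{pmatrix}
=
\begin{pmatrix} K^{\mathrm{LPG}}_{n} & K^{\mathrm{LPG}}_{T} \\ K^{\mathrm{FBG}}_{n} & K^{\mathrm{FBG}}_{T} \end{pmatrix}
\begin{pmatrix} \Delta n \\ \Delta T \end{pmatrix}
\quad\Longrightarrow\quad
\begin{pmatrix} \Delta n \\ \Delta T \end{pmatrix}
= K^{-1}
\begin{pmatrix} \Delta\lambda_{\mathrm{LPG}} \\ \Delta\lambda_{\mathrm{FBG}} \end{pmatrix}
```

With the FBG essentially insensitive to the external refractive index, its row reduces to a temperature-only term, consistent with the abstract's assignment of temperature to the FBG and refractive index to the LPG.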