15 results for Planar vector field

at Universidad Politécnica de Madrid


Relevance:

100.00%

Publisher:

Abstract:

A method to reduce the noise power in the far-field pattern without modifying the desired signal is proposed, so that a significant signal-to-noise ratio improvement can be achieved. The method is applied when the antenna measurement is performed in a planar near-field range, where the recorded data are assumed to be corrupted by white Gaussian, space-stationary noise caused by the additive noise of the receiver. When the measured field is back-propagated from the scan plane to the antenna under test (AUT) plane, the noise remains white Gaussian and space-stationary, whereas the desired field is theoretically concentrated on the antenna aperture. Thanks to this fact, a spatial filter can be applied that cancels the field located outside the AUT dimensions, which is composed only of noise. Next, a planar near-field to far-field transformation is carried out, achieving a great improvement compared with the pattern obtained directly from the measurement. To verify the effectiveness of the method, two examples are presented using both simulated and measured near-field data.
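
The following is a minimal numpy sketch of the kind of processing chain the abstract describes (plane-wave-spectrum back-propagation, aperture masking, forward transform). It assumes an ideal scalar field, uniform sampling and an exp(-j*kz*z) propagation convention; the function and argument names are illustrative and not taken from the paper.

```python
import numpy as np

def filter_planar_nearfield(E_scan, dx, dy, wavelength, distance, aperture_mask):
    """Sketch of the noise-filtering idea: back-propagate the scanned field to
    the AUT plane, keep only the aperture region, and transform back."""
    k0 = 2 * np.pi / wavelength
    ny, nx = E_scan.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
    KX, KY = np.meshgrid(kx, ky)
    # Complex kz: evanescent components are attenuated (not amplified) here,
    # which keeps the sketch numerically stable.
    kz = np.sqrt((k0**2 - KX**2 - KY**2).astype(complex))

    pws_scan = np.fft.fft2(E_scan)                   # plane wave spectrum at the scan plane
    pws_aut = pws_scan * np.exp(1j * kz * distance)  # back-propagation to the AUT plane
    E_aut = np.fft.ifft2(pws_aut) * aperture_mask    # spatial filter: keep only the aperture
    return np.fft.fft2(E_aut)                        # filtered spectrum -> far-field pattern
```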

Relevance:

100.00%

Publisher:

Abstract:

This paper describes two methods to cancel the effect of two kinds of leakage signals which may be present when an antenna is measured in a planar near-field range. The first method reduces leakage bias errors from the receiver's quadrature detector and is based on estimating the bias constant added to every near-field data sample. That constant is then subtracted from the data, removing its undesired effect on the far-field pattern. The estimation is performed by back-propagating the field from the scan plane to the antenna under test (AUT) plane and averaging all the data located outside the AUT aperture. The second method is able to cancel the effect of the leakage from faulty transmission lines, connectors or rotary joints. The basis of this method is also a reconstruction process to determine the field distribution on the AUT plane. Once this distribution is known, spatial filtering is applied to cancel the contribution of those faulty elements. After that, a near-field-to-far-field transformation is applied, obtaining a new radiation pattern in which the leakage effects have disappeared. To verify the effectiveness of both methods, several examples are presented.
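
A minimal sketch of the first (bias-leakage) correction, reusing a back-propagated AUT-plane field such as the one produced in the previous sketch. It follows the abstract's description (average outside the aperture, subtract the constant) and is not the authors' implementation; the treatment of the propagation phase factor is an assumption of this example.

```python
import numpy as np

def remove_quadrature_bias(E_scan, E_aut, aperture_mask, wavelength, distance):
    """Estimate the constant leakage added by the receiver's quadrature
    detector from the AUT-plane field outside the aperture (only noise and
    leakage should remain there) and subtract it from the raw samples."""
    outside = ~aperture_mask.astype(bool)
    bias_aut = E_aut[outside].mean()
    # A constant on the scan plane ideally back-propagates with a phase factor
    # exp(+j*k0*d); undo it to refer the estimate back to the scan plane.
    k0 = 2 * np.pi / wavelength
    bias_scan = bias_aut * np.exp(-1j * k0 * distance)
    return E_scan - bias_scan
```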

Relevance:

90.00%

Publisher:

Abstract:

El estudio del comportamiento de la atmósfera ha resultado de especial importancia tanto en el programa SESAR como en NextGen, en los que la gestión actual del tránsito aéreo (ATM) está experimentando una profunda transformación hacia nuevos paradigmas tanto en Europa como en los EE.UU., respectivamente, para el guiado y seguimiento de las aeronaves en la realización de rutas más eficientes y con mayor precisión. La incertidumbre es una característica fundamental de los fenómenos meteorológicos que se transfiere a la separación de las aeronaves, las trayectorias de vuelo libres de conflictos y a la planificación de vuelos. En este sentido, el viento es un factor clave en cuanto a la predicción de la futura posición de la aeronave, por lo que tener un conocimiento más profundo y preciso de campo de viento reducirá las incertidumbres del ATC. El objetivo de esta tesis es el desarrollo de una nueva técnica operativa y útil destinada a proporcionar de forma adecuada y directa el campo de viento atmosférico en tiempo real, basada en datos de a bordo de la aeronave, con el fin de mejorar la predicción de las trayectorias de las aeronaves. Para lograr este objetivo se ha realizado el siguiente trabajo. Se han descrito y analizado los diferentes sistemas de la aeronave que proporcionan las variables necesarias para obtener la velocidad del viento, así como de las capacidades que permiten la presentación de esta información para sus aplicaciones en la gestión del tráfico aéreo. Se ha explorado el uso de aeronaves como los sensores de viento en un área terminal para la estimación del viento en tiempo real con el fin de mejorar la predicción de las trayectorias de aeronaves. Se han desarrollado métodos computacionalmente eficientes para estimar las componentes horizontales de la velocidad del viento a partir de las velocidades de las aeronaves (VGS, VCAS/VTAS), la presión y datos de temperatura. Estos datos de viento se han utilizado para estimar el campo de viento en tiempo real utilizando un sistema de procesamiento de datos a través de un método de mínima varianza. Por último, se ha evaluado la exactitud de este procedimiento para que esta información sea útil para el control del tráfico aéreo. La información inicial proviene de una muestra de datos de Registradores de Datos de Vuelo (FDR) de aviones que aterrizaron en el aeropuerto Madrid-Barajas. Se dispuso de datos de ciertas aeronaves durante un periodo de más de tres meses que se emplearon para calcular el vector viento en cada punto del espacio aéreo. Se utilizó un modelo matemático basado en diferentes métodos de interpolación para obtener los vectores de viento en áreas sin datos disponibles. Se han utilizado tres escenarios concretos para validar dos métodos de interpolación: uno de dos dimensiones que trabaja con ambas componentes horizontales de forma independiente, y otro basado en el uso de una variable compleja que relaciona ambas componentes. Esos métodos se han probado en diferentes escenarios con resultados dispares. Esta metodología se ha aplicado en un prototipo de herramienta en MATLAB © para analizar automáticamente los datos de FDR y determinar el campo vectorial del viento que encuentra la aeronave al volar en el espacio aéreo en estudio. Finalmente se han obtenido las condiciones requeridas y la precisión de los resultados para este modelo. 
El método desarrollado podría utilizar los datos de los aviones comerciales como inputs utilizando los datos actualmente disponibles y la capacidad computacional, para proporcionárselos a los sistemas ATM donde se podría ejecutar el método propuesto. Estas velocidades del viento calculadas, o bien la velocidad respecto al suelo y la velocidad verdadera, se podrían difundir, por ejemplo, a través del sistema de direccionamiento e informe para comunicaciones de aeronaves (ACARS), mensajes de ADS-B o Modo S. Esta nueva fuente ayudaría a actualizar la información del viento suministrada en los productos aeronáuticos meteorológicos (PAM), informes meteorológicos de aeródromos (AIRMET), e información meteorológica significativa (SIGMET). ABSTRACT The study of the behaviour of the atmosphere has been of particular importance in both the SESAR and NextGen programmes, in which the current air traffic management (ATM) system is undergoing a profound transformation towards new paradigms, in Europe and the USA respectively, to guide and track aircraft more precisely on more efficient routes. Uncertainty is a fundamental characteristic of weather phenomena which is transferred to separation assurance, flight path de-confliction and flight planning applications. In this respect, the wind is a key factor in the prediction of the future position of the aircraft, so having a deeper and more accurate knowledge of the wind field will reduce ATC uncertainties. The purpose of this thesis is to develop a new and operationally useful technique intended to provide adequate and direct real-time atmospheric wind fields based on on-board aircraft data, in order to improve aircraft trajectory prediction. To achieve this objective, the following work has been accomplished. The different sources in the aircraft systems that provide the variables needed to derive the wind velocity have been described and analysed, as well as the capabilities which allow this information to be presented for air traffic management applications. The use of aircraft as wind sensors in a terminal area for real-time wind estimation, in order to improve aircraft trajectory prediction, has been explored. Computationally efficient methods have been developed to estimate the horizontal wind components from aircraft velocities (VGS, VCAS/VTAS), pressure, and temperature data. These wind data were used to estimate a real-time wind field using a data processing approach based on a minimum variance method. Finally, the accuracy of this procedure has been evaluated for this information to be useful to air traffic control. The initial information comes from a Flight Data Recorder (FDR) sample of aircraft landing at Madrid-Barajas Airport. Data available for more than three months were exploited in order to derive the wind vector field at each point of the airspace. Mathematical models based on different interpolation methods were used in order to obtain wind vectors in areas without data. Three particular scenarios were employed to test two interpolation methods: a two-dimensional one that works with both horizontal components independently, and a complex-variable formulation that links both components. Those methods were tested using various scenarios with dissimilar results. This methodology has been implemented in a prototype tool in MATLAB © in order to automatically analyse FDR data and determine the wind vector field that aircraft encounter when flying in the studied airspace. The required conditions and the accuracy of the results were derived for this model.
The method developed could be fed by commercial aircraft using their currently available data sources and computational capabilities, providing the data to the ATM systems where the proposed method could be run. The computed wind velocities, or alternatively the ground speed and true airspeed, could then be broadcast, for example, via the Aircraft Communications Addressing and Reporting System (ACARS), ADS-B out messages, or Mode S. This new source would help update the wind information furnished in meteorological aeronautical products (PAM), meteorological aerodrome reports (AIRMET), and significant meteorological information (SIGMET).
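
The per-sample wind estimate behind this approach is the classical wind-triangle relation: the wind vector is the difference between the ground-velocity vector (ground speed along the track) and the air-velocity vector (true airspeed along the heading), with VTAS obtainable from VCAS, pressure and temperature. A short Python sketch with illustrative names:

```python
import numpy as np

def wind_components(v_gs, track_deg, v_tas, heading_deg):
    """Horizontal wind as the vector difference between the ground-speed
    vector (speed over ground along the track) and the true-airspeed vector
    (TAS along the heading). Angles in degrees from north, speeds in any
    consistent unit."""
    trk, hdg = np.radians(track_deg), np.radians(heading_deg)
    w_east = v_gs * np.sin(trk) - v_tas * np.sin(hdg)
    w_north = v_gs * np.cos(trk) - v_tas * np.cos(hdg)
    speed = np.hypot(w_east, w_north)
    # Meteorological convention: the direction the wind blows FROM
    direction_deg = np.degrees(np.arctan2(-w_east, -w_north)) % 360.0
    return w_east, w_north, speed, direction_deg

# Example: 250 kt ground speed on a 090 track, 240 kt TAS on a 085 heading
print(wind_components(250.0, 90.0, 240.0, 85.0))
```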

Relevance:

80.00%

Publisher:

Abstract:

This Doctoral Thesis, entitled Contribution to the analysis, design and assessment of compact antenna test ranges at millimeter wavelengths, aims to deepen the knowledge of a particular antenna measurement system: the compact range operating in the millimeter-wavelength frequency bands. The thesis has been developed at the Radiation Group (GR), an antenna laboratory belonging to the Signals, Systems and Radiocommunications department (SSR) of the Technical University of Madrid (UPM). The Radiation Group has extensive experience in antenna measurements and currently runs four facilities operating in different configurations: a Gregorian compact antenna test range, a spherical near-field range, a planar near-field range and a semianechoic arch system. The research work carried out for this thesis contributes to the knowledge of the first of these measurement configurations at higher frequencies, beyond the microwave region where the Radiation Group already offers customer-level performance. To reach this high-level goal, a set of scientific tasks was carried out sequentially; they are succinctly described in the following paragraphs. The first step was a review of the state of the art. The study of the scientific literature covered measurement practices in compact antenna test ranges together with the particularities of millimeter-wavelength technologies. The joint study of both fields of knowledge converged, where such measurement facilities are concerned, on a series of technological challenges which become serious bottlenecks at different stages: analysis, design and assessment. Secondly, after the overview study, the focus was set on electromagnetic analysis algorithms. These formulations make it possible to study certain electromagnetic features of interest, such as the phase of the field distribution or the stray signals of particular structures, when they interact with electromagnetic wave sources. A properly operated CATR facility features electromagnetic-wave collimation optics which are large in terms of wavelengths. Accordingly, the electromagnetic analysis introduces a large number of mathematical unknowns which grows with frequency, following polynomial laws of different order depending on the algorithm used. In particular, the optics configuration of interest here is the reflection-type serrated-edge collimator. The analysis of these devices requires flexible handling of almost arbitrary scattering geometries, and this flexibility is the core of an algorithm's ability to support the subsequent design tasks. The contribution of this thesis to this field is a formulation that is powerful both in handling varied analysis geometries and in computational terms. Two algorithms were developed. While based on the same hybridization principle, they reach different orders of physical accuracy at different computational cost. Their CATR design capabilities were compared, reaching both qualitative and quantitative conclusions on their scope. In the third place, interest shifted from analysis and design tasks towards range assessment. Millimeter wavelengths imply strict mechanical tolerances and fine setup adjustment. In addition, the large number of unknowns already faced in the analysis stage appears again in the in-chamber field probing stage.
The natural decrease of the dynamic range available from semiconductor millimeter-wave sources additionally requires longer integration times at each probing point. These peculiarities increase exponentially the difficulty of performing assessment processes in CATR facilities beyond microwaves. The bottleneck becomes so tight that it compromises range characterization beyond a certain limit frequency, which typically lies in the lowest segment of the millimeter-wavelength band, whereas the value of range assessment lies, on the contrary, in the highest segment. This thesis contributes to this technological scenario by developing quiet-zone probing techniques which achieve substantial data reduction ratios. As a side benefit, they increase the robustness of the results to noise, which amounts to a virtual increase of the setup's available dynamic range. In the fourth place, the environmental sensitivity of millimeter wavelengths was addressed. The drift of electromagnetic experiments caused by the dependence of the results on the surrounding environment is well known. This relegates many practices that are industrial at microwave frequencies to the experimental stage at millimeter wavelengths. In particular, the evolution of the atmosphere, even within acceptable conditioning bounds, results in drift phenomena which can completely mask the experimental results. The contribution of this thesis in this respect consists in electrically modeling the indoor atmosphere of a CATR as a function of the environmental variables which affect the range's performance. A simple model was developed which is able to express high-level phenomena, such as feed-probe phase drift, as a function of low-level magnitudes that are easy to sample: relative humidity and temperature. With this model, environmental compensation can be performed and chamber conditioning is automatically extended towards higher frequencies. In summary, the purpose of this thesis is to advance the knowledge of compact antenna test ranges at millimeter wavelengths. This knowledge is distributed across the sequential stages of a CATR's conception, from early low-level electromagnetic analysis to the assessment of an operating facility, stages at each of which bottlenecks currently exist that seriously compromise antenna measurement practice at millimeter wavelengths.
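
As an illustration of the kind of low-level atmospheric model referred to in the last paragraph, the sketch below uses the standard Smith-Weintraub radio-refractivity expression and a Magnus-type saturation formula to convert temperature and relative humidity into an electrical phase drift over a fixed feed-probe path. This is generic textbook physics under the assumption of a homogeneous indoor atmosphere; it is not the model developed in the thesis.

```python
import numpy as np

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def refractivity(temp_c, rel_hum, pressure_hpa=1013.25):
    """Radio refractivity N (N-units) from temperature, relative humidity and
    pressure (Smith-Weintraub expression, Magnus-type saturation pressure)."""
    t_k = temp_c + 273.15
    e_sat = 6.112 * np.exp(17.62 * temp_c / (243.12 + temp_c))  # hPa
    e = rel_hum / 100.0 * e_sat
    return 77.6 * pressure_hpa / t_k + 3.73e5 * e / t_k**2

def phase_drift_deg(freq_hz, path_m, temp_c0, rh0, temp_c1, rh1):
    """Electrical phase change over a fixed feed-probe path when the indoor
    atmosphere drifts from state 0 to state 1."""
    delta_n = refractivity(temp_c1, rh1) - refractivity(temp_c0, rh0)
    return np.degrees(2 * np.pi * freq_hz * path_m * delta_n * 1e-6 / C0)

# Example: a 10 m path at 100 GHz, drifting from 20 degC / 40 % RH to 22 degC / 45 % RH
print(phase_drift_deg(100e9, 10.0, 20.0, 40.0, 22.0, 45.0))
```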

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a simple theory to analyze some of the more common visual illusions. The reported mathematical model is able to give numerical results concerning well-known phenomena such as the Zöllner, Müller-Lyer or Wundt illusions. We present two different approaches, the first related to the symmetry operations present in each image and the second concerning a proposed vector field related to the lines present in the image.

Relevance:

80.00%

Publisher:

Abstract:

In this paper we develop new techniques for revealing geometrical structures in phase space that are valid for aperiodically time-dependent dynamical systems, which we refer to as Lagrangian descriptors. These quantities are based on the integration, for a finite time, along trajectories of an intrinsic bounded, positive geometrical and/or physical property of the trajectory itself. We discuss a general methodology for constructing Lagrangian descriptors, and we discuss a “heuristic argument” that explains why this method is successful for revealing geometrical structures in the phase space of a dynamical system. We support this argument by explicit calculations on a benchmark problem having a hyperbolic fixed point with stable and unstable manifolds that are known analytically. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors in revealing invariant tori and regions of shear. Throughout the paper, “side-by-side” comparisons of the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field (“time averages”) are carried out and discussed. In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods. We also perform computations for an explicitly three-dimensional, aperiodically time-dependent vector field and an aperiodically time-dependent vector field defined as a data set. Comparisons with FTLEs and time averages for these examples are also carried out, with similar conclusions as for the benchmark examples.
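
A common concrete choice of Lagrangian descriptor is the arc length of trajectories integrated forward and backward in time. The sketch below computes it on a grid for a forced Duffing field, assumed here only as a convenient benchmark (the paper's own benchmarks differ); sharp ridges of M trace out stable and unstable manifolds.

```python
import numpy as np
from scipy.integrate import solve_ivp

def vfield(t, xy, eps=0.1):
    """Forced Duffing field, used here only as a convenient benchmark."""
    x, y = xy
    return [y, x - x**3 + eps * np.sin(t)]

def lagrangian_descriptor(x0, y0, t0=0.0, tau=10.0):
    """Arc-length Lagrangian descriptor M: the speed ||v|| integrated along the
    trajectory through (x0, y0) forward and backward over [t0 - tau, t0 + tau]."""
    def augmented(t, state):
        dx, dy = vfield(t, state[:2])
        return [dx, dy, np.hypot(dx, dy)]        # third component accumulates arc length
    m = 0.0
    for t_end in (t0 + tau, t0 - tau):           # forward and backward pieces
        sol = solve_ivp(augmented, (t0, t_end), [x0, y0, 0.0], rtol=1e-6, atol=1e-9)
        m += abs(sol.y[2, -1])
    return m

# Evaluate M on a coarse grid; singular features of M reveal the phase-space structure
xs, ys = np.linspace(-1.5, 1.5, 31), np.linspace(-1.0, 1.0, 21)
M = np.array([[lagrangian_descriptor(x, y) for x in xs] for y in ys])
```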

Relevance:

80.00%

Publisher:

Abstract:

This paper describes the new anechoic chamber available at The University of Kent, UK. This facility includes a spherical near/far-field range, a planar near-field range, a cylindrical near-field range and a compact range. The facility allows measurements from 600 MHz up to 110 GHz. The spherical, planar and cylindrical ranges cover up to 40 GHz, and the compact range is available from 50 GHz up to 110 GHz. Immediate plans are to use the new facility to measure body-centric antennas and sensing nodes together with near-field sampling of finite-sized Frequency Selective Surfaces.

Relevance:

80.00%

Publisher:

Abstract:

Lagrangian descriptors are a recent technique that reveals geometrical structures in phase space and is valid for aperiodically time-dependent dynamical systems. We discuss a general methodology for constructing them and a "heuristic argument" that explains why this method is successful. We support this argument by explicit calculations on a benchmark problem. Several other benchmark examples are considered that allow us to compare the performance of Lagrangian descriptors with both finite time Lyapunov exponents (FTLEs) and finite time averages of certain components of the vector field ("time averages"). In all cases Lagrangian descriptors are shown to be both more accurate and computationally efficient than these methods.

Relevance:

80.00%

Publisher:

Abstract:

El daño cerebral adquirido (DCA) es un problema social y sanitario grave, de magnitud creciente y de una gran complejidad diagnóstica y terapéutica. Su elevada incidencia, junto con el aumento de la supervivencia de los pacientes, una vez superada la fase aguda, lo convierten también en un problema de alta prevalencia. En concreto, según la Organización Mundial de la Salud (OMS) el DCA estará entre las 10 causas más comunes de discapacidad en el año 2020. La neurorrehabilitación permite mejorar el déficit tanto cognitivo como funcional y aumentar la autonomía de las personas con DCA. Con la incorporación de nuevas soluciones tecnológicas al proceso de neurorrehabilitación se pretende alcanzar un nuevo paradigma donde se puedan diseñar tratamientos que sean intensivos, personalizados, monitorizados y basados en la evidencia. Ya que son estas cuatro características las que aseguran que los tratamientos son eficaces. A diferencia de la mayor parte de las disciplinas médicas, no existen asociaciones de síntomas y signos de la alteración cognitiva que faciliten la orientación terapéutica. Actualmente, los tratamientos de neurorrehabilitación se diseñan en base a los resultados obtenidos en una batería de evaluación neuropsicológica que evalúa el nivel de afectación de cada una de las funciones cognitivas (memoria, atención, funciones ejecutivas, etc.). La línea de investigación en la que se enmarca este trabajo de investigación pretende diseñar y desarrollar un perfil cognitivo basado no sólo en el resultado obtenido en esa batería de test, sino también en información teórica que engloba tanto estructuras anatómicas como relaciones funcionales e información anatómica obtenida de los estudios de imagen. De esta forma, el perfil cognitivo utilizado para diseñar los tratamientos integra información personalizada y basada en la evidencia. Las técnicas de neuroimagen representan una herramienta fundamental en la identificación de lesiones para la generación de estos perfiles cognitivos. La aproximación clásica utilizada en la identificación de lesiones consiste en delinear manualmente regiones anatómicas cerebrales. Esta aproximación presenta diversos problemas relacionados con inconsistencias de criterio entre distintos clínicos, reproducibilidad y tiempo. Por tanto, la automatización de este procedimiento es fundamental para asegurar una extracción objetiva de información. La delineación automática de regiones anatómicas se realiza mediante el registro tanto contra atlas como contra otros estudios de imagen de distintos sujetos. Sin embargo, los cambios patológicos asociados al DCA están siempre asociados a anormalidades de intensidad y/o cambios en la localización de las estructuras. Este hecho provoca que los algoritmos de registro tradicionales basados en intensidad no funcionen correctamente y requieran la intervención del clínico para seleccionar ciertos puntos (que en esta tesis hemos denominado puntos singulares). Además estos algoritmos tampoco permiten que se produzcan deformaciones grandes deslocalizadas. Hecho que también puede ocurrir ante la presencia de lesiones provocadas por un accidente cerebrovascular (ACV) o un traumatismo craneoencefálico (TCE). Esta tesis se centra en el diseño, desarrollo e implementación de una metodología para la detección automática de estructuras lesionadas que integra algoritmos cuyo objetivo principal es generar resultados que puedan ser reproducibles y objetivos. 
Esta metodología se divide en cuatro etapas: pre-procesado, identificación de puntos singulares, registro y detección de lesiones. Los trabajos y resultados alcanzados en esta tesis son los siguientes: Pre-procesado. En esta primera etapa el objetivo es homogeneizar todos los datos de entrada con el objetivo de poder extraer conclusiones válidas de los resultados obtenidos. Esta etapa, por tanto, tiene un gran impacto en los resultados finales. Se compone de tres operaciones: eliminación del cráneo, normalización en intensidad y normalización espacial. Identificación de puntos singulares. El objetivo de esta etapa es automatizar la identificación de puntos anatómicos (puntos singulares). Esta etapa equivale a la identificación manual de puntos anatómicos por parte del clínico, permitiendo: identificar un mayor número de puntos lo que se traduce en mayor información; eliminar el factor asociado a la variabilidad inter-sujeto, por tanto, los resultados son reproducibles y objetivos; y elimina el tiempo invertido en el marcado manual de puntos. Este trabajo de investigación propone un algoritmo de identificación de puntos singulares (descriptor) basado en una solución multi-detector y que contiene información multi-paramétrica: espacial y asociada a la intensidad. Este algoritmo ha sido contrastado con otros algoritmos similares encontrados en el estado del arte. Registro. En esta etapa se pretenden poner en concordancia espacial dos estudios de imagen de sujetos/pacientes distintos. El algoritmo propuesto en este trabajo de investigación está basado en descriptores y su principal objetivo es el cálculo de un campo vectorial que permita introducir deformaciones deslocalizadas en la imagen (en distintas regiones de la imagen) y tan grandes como indique el vector de deformación asociado. El algoritmo propuesto ha sido comparado con otros algoritmos de registro utilizados en aplicaciones de neuroimagen que se utilizan con estudios de sujetos control. Los resultados obtenidos son prometedores y representan un nuevo contexto para la identificación automática de estructuras. Identificación de lesiones. En esta última etapa se identifican aquellas estructuras cuyas características asociadas a la localización espacial y al área o volumen han sido modificadas con respecto a una situación de normalidad. Para ello se realiza un estudio estadístico del atlas que se vaya a utilizar y se establecen los parámetros estadísticos de normalidad asociados a la localización y al área. En función de las estructuras delineadas en el atlas, se podrán identificar más o menos estructuras anatómicas, siendo nuestra metodología independiente del atlas seleccionado. En general, esta tesis doctoral corrobora las hipótesis de investigación postuladas relativas a la identificación automática de lesiones utilizando estudios de imagen médica estructural, concretamente estudios de resonancia magnética. Basándose en estos cimientos, se han abrir nuevos campos de investigación que contribuyan a la mejora en la detección de lesiones. ABSTRACT Brain injury constitutes a serious social and health problem of increasing magnitude and of great diagnostic and therapeutic complexity. Its high incidence and survival rate, after the initial critical phases, makes it a prevalent problem that needs to be addressed. In particular, according to the World Health Organization (WHO), brain injury will be among the 10 most common causes of disability by 2020. 
Neurorehabilitation reduces both cognitive and functional deficits and increases the autonomy of brain injury patients. The incorporation of new technologies into neurorehabilitation aims to reach a new paradigm in which treatments can be designed to be intensive, personalized, monitored and evidence-based, since these four characteristics are what ensure that treatments are effective. Contrary to most medical disciplines, there are no established associations between symptoms and signs of cognitive disorders that could guide the therapist. Currently, neurorehabilitation treatments are planned considering the results obtained from a neuropsychological assessment battery, which evaluates the level of impairment of each cognitive function (memory, attention, executive functions, etc.). The research line this PhD falls under aims to design and develop a cognitive profile based not only on the results obtained in the assessment battery, but also on theoretical information that includes both anatomical structures and functional relationships, and on anatomical information obtained from medical imaging studies, such as magnetic resonance. Therefore, the cognitive profile used to design these treatments integrates personalized, evidence-based information. Neuroimaging techniques represent an essential tool to identify lesions and generate this type of cognitive dysfunctional profile. Manual delineation is the classical approach to identifying brain anatomical regions. Manual approaches present several problems related to inconsistencies across different clinicians, time and repeatability. Automated delineation is done by registering brains to one another or to a template. However, when imaging studies contain lesions, there are intensity abnormalities and location alterations that reduce the performance of most registration algorithms based on intensity parameters. Thus, specialists may have to interact manually with the imaging studies to select landmarks (called singular points in this PhD) or to identify regions of interest. These two solutions share the drawbacks of the manual approaches mentioned before. Moreover, these registration algorithms do not allow large, distributed deformations. This type of deformation may also appear when a stroke or a traumatic brain injury (TBI) occurs. This PhD is focused on the design, development and implementation of a new methodology to automatically identify lesions in anatomical structures. This methodology integrates algorithms whose main objective is to generate objective and reproducible results. It is divided into four stages: pre-processing, singular point identification, registration and lesion detection. Pre-processing stage. In this first stage, the aim is to standardize all input data in order to be able to draw valid conclusions from the results. Therefore, this stage has a direct impact on the final results. It consists of three steps: skull-stripping, spatial normalization and intensity normalization. Singular point identification. This stage aims to automate the identification of anatomical points (singular points), replacing their manual identification by the clinician. This automatic identification makes it possible to identify a greater number of points, which results in more information; to remove the factor associated with inter-subject variability, so that the results are reproducible and objective; and to eliminate the time spent on manual marking.
This PhD proposes an algorithm to automatically identify singular points (a descriptor) based on a multi-detector approach. This algorithm contains multi-parametric (spatial and intensity) information and has been compared with other similar algorithms found in the state of the art. Registration. The goal of this stage is to bring two imaging studies of different subjects/patients into spatial correspondence. The algorithm proposed in this PhD is based on descriptors. Its main objective is to compute a vector field that introduces distributed deformations (changes in different image regions) as large as the associated deformation vector indicates. The proposed algorithm has been compared with other registration algorithms used in neuroimaging applications with control subjects. The results obtained are promising and represent a new context for the automatic identification of anatomical structures. Lesion identification. This final stage aims to identify those anatomical structures whose characteristics associated with spatial location and area or volume have been modified with respect to a normal state. To this end, a statistical study of the atlas to be used is performed to establish the statistical parameters of normality associated with location and area. The anatomical structures that can be identified depend on the structures delineated in the selected atlas, and the proposed methodology is independent of that atlas. Overall, this PhD corroborates the postulated research hypotheses regarding the automatic identification of lesions based on structural medical imaging studies, specifically magnetic resonance studies. Based on these foundations, new research fields can be opened to improve the detection of lesions in brain injury.
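
As a rough illustration of the final lesion-identification stage, the sketch below flags structures whose centroid or volume departs from atlas-derived normality statistics. A simple z-score rule is assumed here because the abstract does not detail the statistical test; names and values are hypothetical.

```python
import numpy as np

def flag_lesioned_structures(measured, atlas_stats, z_thresh=3.0):
    """Flag structures whose centroid location or volume deviates from the
    atlas-derived normality statistics by more than z_thresh standard
    deviations (illustrative rule, not the thesis' exact test)."""
    flagged = []
    for name, m in measured.items():
        ref = atlas_stats[name]
        z_loc = np.linalg.norm((np.asarray(m["centroid"]) - np.asarray(ref["centroid_mean"]))
                               / np.asarray(ref["centroid_std"]))
        z_vol = abs(m["volume"] - ref["volume_mean"]) / ref["volume_std"]
        if z_loc > z_thresh or z_vol > z_thresh:
            flagged.append(name)
    return flagged

# Hypothetical atlas statistics and one measured case
atlas = {"hippocampus_l": {"centroid_mean": [30.0, -20.0, -15.0],
                           "centroid_std": [2.0, 2.0, 2.0],
                           "volume_mean": 3500.0, "volume_std": 300.0}}
case = {"hippocampus_l": {"centroid": [30.5, -19.0, -14.5], "volume": 2100.0}}
print(flag_lesioned_structures(case, atlas))   # volume deviates -> structure is flagged
```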

Relevance:

40.00%

Publisher:

Abstract:

A method to reduce truncation errors in near-field antenna measurements is presented. The method is based on the Gerchberg-Papoulis iterative algorithm used to extrapolate band-limited functions and it is able to extend the valid region of the calculated far-field pattern up to the whole forward hemisphere. The extension of the valid region is achieved by the iterative application of a transformation between two different domains. After each transformation, a filtering process that is based on known information at each domain is applied. The first domain is the spectral domain in which the plane wave spectrum (PWS) is reliable only within a known region. The second domain is the field distribution over the antenna under test (AUT) plane in which the desired field is assumed to be concentrated on the antenna aperture. The method can be applied to any scanning geometry, but in this paper, only the planar, cylindrical, and partial spherical near-field measurements are considered. Several simulation and measurement examples are presented to verify the effectiveness of the method.
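
The core of the Gerchberg-Papoulis iteration can be illustrated with the classical 1-D band-limited extrapolation problem, alternating a filter in each of two domains exactly as described above (here: band limit in the spectrum, known samples in the signal domain). This is a generic sketch, not the paper's near-field implementation, where the two domains are the reliable region of the PWS and the AUT aperture.

```python
import numpy as np

def gp_extrapolate(samples, known_mask, band_mask, n_iter=200):
    """Classical 1-D Gerchberg-Papoulis extrapolation: alternately enforce the
    band limit in the spectral domain and restore the known samples in the
    signal domain, gradually recovering the signal outside the measured region."""
    x = samples * known_mask                          # start from the truncated data
    for _ in range(n_iter):
        spectrum = np.fft.fft(x) * band_mask          # filtering in the spectral domain
        x = np.fft.ifft(spectrum)
        x = samples * known_mask + x * (~known_mask)  # keep the measured samples
    return x.real

# Toy example: a band-limited signal observed only on its central half
n = 256
t = np.arange(n)
sig = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 9 * t / n)
known = (t >= 64) & (t < 192)
band = np.zeros(n, dtype=bool)
band[:16] = True
band[-15:] = True
rec = gp_extrapolate(sig, known, band)                # rec approximates sig outside 'known'
```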

Relevance:

30.00%

Publisher:

Abstract:

This paper outlines an automatic computer vision system for the identification of Avena sterilis, a weed growing in cereal crops. The final goal is to reduce the quantity of herbicide to be sprayed, an important and necessary step for precision agriculture: only areas where the presence of weeds is significant should be sprayed. The main problems in the identification of this kind of weed are its spectral signature, which is similar to that of the crop, and its irregular distribution in the field. A new strategy involving two processes has been designed: image segmentation and decision making. The image segmentation combines basic image processing techniques in order to extract cells from the image as the low-level units. Each cell is described by two area-based attributes measuring the relations between crop and weeds. The decision making is based on Support Vector Machines and determines whether a cell must be sprayed. The main findings of this paper lie in the combination of the segmentation and Support Vector Machine decision processes. Another important contribution of this approach is the minimum requirements of the system in terms of memory and computing power compared with previous works. The performance of the method is illustrated by a comparative analysis against some existing strategies.
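
A minimal example of the decision-making stage as described (two area-based attributes per cell, an SVM deciding whether to spray), using scikit-learn and made-up feature values purely for illustration:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training set: each row describes one image cell by two
# area-based attributes (e.g. crop-cover and weed-cover ratios); the label
# says whether the cell should be sprayed. Values are invented for this sketch.
X = np.array([[0.70, 0.02], [0.55, 0.01], [0.60, 0.15],
              [0.40, 0.30], [0.65, 0.05], [0.35, 0.25]])
y = np.array([0, 0, 1, 1, 0, 1])                       # 1 = spray the cell

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)

new_cells = np.array([[0.62, 0.12], [0.68, 0.03]])
print(clf.predict(new_cells))                          # spray decision per cell
```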

Relevance:

30.00%

Publisher:

Abstract:

La tecnología de múltiples antenas ha evolucionado para dar soporte a los actuales y futuros sistemas de comunicaciones inalámbricas en su afán por proporcionar la calidad de señal y las altas tasas de transmisión que demandan los nuevos servicios de voz, datos y multimedia. Sin embargo, es fundamental comprender las características espaciales del canal radio, ya que son las características del propio canal lo que limita en gran medida las prestaciones de los sistemas de comunicación actuales. Por ello surge la necesidad de estudiar la estructura espacial del canal de propagación para poder diseñar, evaluar e implementar de forma más eficiente tecnologías multiantena en los actuales y futuros sistemas de comunicación inalámbrica. Las tecnologías multiantena denominadas antenas inteligentes y MIMO han generado un gran interés en el área de comunicaciones inalámbricas, por ejemplo los sistemas de telefonía celular o más recientemente en las redes WLAN (Wireless Local Area Network), principalmente por la mejora que proporcionan en la calidad de las señales y en la tasa de transmisión de datos, respectivamente. Las ventajas de estas tecnologías se fundamentan en el uso de la dimensión espacial para obtener ganancia por diversidad espacial, como ya sucediera con las tecnologías FDMA (Frequency Division Multiplexing Access), TDMA (Time Division Multiplexing Access) y CDMA (Code Division Multiplexing Access) para obtener diversidad en las dimensiones de frecuencia, tiempo y código, respectivamente. Esta Tesis se centra en estudiar las características espaciales del canal con sistemas de múltiples antenas mediante la estimación de los perfiles de ángulos de llegada (DoA, Direction-of- Arrival) considerando esquemas de diversidad en espacio, polarización y frecuencia. Como primer paso se realiza una revisión de los sistemas con antenas inteligentes y los sistemas MIMO, describiendo con detalle la base matemática que sustenta las prestaciones ofrecidas por estos sistemas. Posteriormente se aportan distintos estudios sobre la estimación de los perfiles de DoA de canales radio con sistemas multiantena evaluando distintos aspectos de antenas, algoritmos de estimación, esquemas de polarización, campo lejano y campo cercano de las fuentes. Así mismo, se presenta un prototipo de medida MIMO-OFDM-SPAA3D en la banda ISM (Industrial, Scientific and Medical) de 2,45 Ghz, el cual está preparado para caracterizar experimentalmente el rendimiento de los sistemas MIMO, y para caracterizar espacialmente canales de propagación, considerando los esquemas de diversidad espacial, por polarización y frecuencia. Los estudios aportados se describen a continuación. Los sistemas de antenas inteligentes dependen en gran medida de la posición de los usuarios. Estos sistemas están equipados con arrays de antenas, los cuales aportan la diversidad espacial necesaria para obtener una representación espacial fidedigna del canal radio a través de los perfiles de DoA (DoA, Direction-of-Arrival) y por tanto, la posición de las fuentes de señal. Sin embargo, los errores de fabricación de arrays así como ciertos parámetros de señal conlleva un efecto negativo en las prestaciones de estos sistemas. Por ello se plantea un modelo de señal parametrizado que permite estudiar la influencia que tienen estos factores sobre los errores de estimación de DoA, tanto en acimut como en elevación, utilizando los algoritmos de estimación de DOA más conocidos en la literatura. 
A partir de las curvas de error, se pueden obtener parámetros de diseño para sistemas de localización basados en arrays. En un segundo estudio se evalúan esquemas de diversidad por polarización con los sistemas multiantena para mejorar la estimación de los perfiles de DoA en canales que presentan pérdidas por despolarización. Para ello se desarrolla un modelo de señal en array con sensibilidad de polarización que toma en cuenta el campo electromagnético de ondas planas. Se realizan simulaciones MC del modelo para estudiar el efecto de la orientación de la polarización como el número de polarizaciones usadas en el transmisor como en el receptor sobre la precisión en la estimación de los perfiles de DoA observados en el receptor. Además, se presentan los perfiles DoA obtenidos en escenarios quasiestáticos de interior con un prototipo de medida MIMO 4x4 de banda estrecha en la banda de 2,45 GHz, los cuales muestran gran fidelidad con el escenario real. Para la obtención de los perfiles DoA se propone un método basado en arrays virtuales, validado con los datos de simulación y los datos experimentales. Con relación a la localización 3D de fuentes en campo cercano (zona de Fresnel), se presenta un tercer estudio para obtener con gran exactitud la estructura espacial del canal de propagación en entornos de interior controlados (en cámara anecóica) utilizando arrays virtuales. El estudio analiza la influencia del tamaño del array y el diagrama de radiación en la estimación de los parámetros de localización proponiendo, para ello, un modelo de señal basado en un vector de enfoque de onda esférico (SWSV). Al aumentar el número de antenas del array se consigue reducir el error RMS de estimación y mejorar sustancialmente la representación espacial del canal. La estimación de los parámetros de localización se lleva a cabo con un nuevo método de búsqueda multinivel adaptativo, propuesto con el fin de reducir drásticamente el tiempo de procesado que demandan otros algoritmos multivariable basados en subespacios, como el MUSIC, a costa de incrementar los requisitos de memoria. Las simulaciones del modelo arrojan resultados que son validados con resultados experimentales y comparados con el límite de Cramer Rao en términos del error cuadrático medio. La compensación del diagrama de radiación acerca sustancialmente la exactitud de estimación de la distancia al límite de Cramer Rao. Finalmente, es igual de importante la evaluación teórica como experimental de las prestaciones de los sistemas MIMO-OFDM. Por ello, se presenta el diseño e implementación de un prototipo de medida MIMO-OFDM-SPAA3D autocalibrado con sistema de posicionamiento de antena automático en la banda de 2,45 Ghz con capacidad para evaluar la capacidad de los sistemas MIMO. Además, tiene la capacidad de caracterizar espacialmente canales MIMO, incorporando para ello una etapa de autocalibración para medir la respuesta en frecuencia de los transmisores y receptores de RF, y así poder caracterizar la respuesta de fase del canal con mayor precisión. Este sistema incorpora un posicionador de antena automático 3D (SPAA3D) basado en un scanner con 3 brazos mecánicos sobre los que se desplaza un posicionador de antena de forma independiente, controlado desde un PC. Este posicionador permite obtener una gran cantidad de mediciones del canal en regiones locales, lo cual favorece la caracterización estadística de los parámetros del sistema MIMO. 
Con este prototipo se realizan varias campañas de medida para evaluar el canal MIMO en términos de capacidad comparando 2 esquemas de polarización y tomando en cuenta la diversidad en frecuencia aportada por la modulación OFDM en distintos escenarios. ABSTRACT Multiple-antenna technologies have evolved to support current and future wireless communication systems in their drive to provide the signal quality and the high data rates demanded by new voice, data and multimedia services. However, it is important to understand the spatial characteristics of the radio channel, since the channel itself largely limits the performance of current wireless communication systems. This raises the need to understand the spatial structure of the propagation channel in order to design, assess and develop more efficient multiantenna technologies for current and future wireless communication systems. Multiantenna technologies such as smart antennas and MIMO systems have generated great interest in the field of wireless communications, e.g. cellular communication systems and, more recently, WLANs (Wireless Local Area Networks), mainly because of the higher signal quality and the higher data rates they are able to provide. Their benefits are based on the exploitation of the spatial diversity provided by the use of multiple antennas, as happened in the past with multiple-access technologies such as FDMA (Frequency Division Multiple Access), TDMA (Time Division Multiple Access) and CDMA (Code Division Multiple Access), which provide diversity in the frequency, time and code domains, respectively. This Thesis is mainly focused on studying the spatial channel characteristics using schemes of multiple antennas and considering several diversity schemes, such as space, polarization and frequency. The spatial characteristics are studied in terms of the direction-of-arrival (DoA) profiles viewed at the receiver side of the radio link. The first step is a review of the smart antenna and MIMO system technologies, highlighting their advantages and drawbacks from a mathematical point of view. In the second step, a set of studies concerning the spatial characterization of the radio channel through the DoA profiles is addressed. The performance of several DoA estimation methods is assessed considering several aspects regarding the antenna array structure, polarization diversity, and far-field and near-field conditions. Most of the results of these studies come from simulations of data models and from measurements with real multiantenna prototypes. Likewise, having understood the importance of validating the theoretical data models with experimental results, a 2.45 GHz MIMO-OFDM-SPAA3D prototype is presented. This prototype is intended to evaluate MIMO-OFDM capacity in indoor and outdoor scenarios, to characterize the spatial structure of radio channels, and to assess several diversity schemes such as polarization, space and frequency diversity, among other aspects. The studies reported are briefly described below. As stated in Chapter two, the determination of the user position is a fundamental task to be resolved for smart antenna systems. As these systems are equipped with antenna arrays, they can provide enough spatial diversity to accurately draw the spatial characterization of the radio channel through the DoA profiles, and therefore the source location.
However, certain real implementation factors related to antenna errors, signals and receivers will certainly reduce the performance of such direction-finding systems. In that sense, a parameterized narrowband signal model is proposed to evaluate the influence of these factors on the location parameter estimation through extensive MC simulations. The results obtained with several DoA algorithms may be useful to extract design parameters for array-based direction-finding systems. The second study addresses the importance that polarization schemes can have for estimating far-field DoA profiles in radio channels, particularly in scenarios that may introduce polarization losses. For this purpose, a narrowband signal model with polarization sensitivity is developed to analyse several polarization schemes at the transmitter (TX) and receiver (RX) through extensive MC simulations. In addition, the spatial characterization of quasi-static indoor scenarios is carried out using a 2.45 GHz MIMO prototype equipped with single- and dual-polarized antennas. Good agreement between the measured DoA profiles and the propagation scenario is achieved. The theoretical and experimental evaluation of polarization schemes is performed using virtual arrays. In that case, a DoA estimation method based on adding a phase reference to properly track the DoA is proposed, which shows good results. The third study addresses the special case of near-field source localization with virtual arrays. Most DoA estimation algorithms are focused on far-field source localization, where the radiated wavefronts are assumed to be plane waves at the receive array. However, when sources are located close to the array, the assumption of plane waves is no longer valid, as the wavefronts exhibit a spherical behaviour along the array. Thus, a fast and effective method for estimating the azimuth and elevation angles-of-arrival and the range of near-field sources is proposed. The efficacy of the proposed method is evaluated with simulations and validated with measurements collected in a campaign carried out in a controlled propagation environment, i.e. an anechoic chamber. Moreover, the performance of the method is assessed in terms of the RMSE for several array sizes and source positions, taking into account the effect of the radiation pattern. In general, better results are obtained with larger arrays and larger source distances. The effect of the antennas is included in the data model, leading to more accurate results, particularly for range rather than for angle estimation. Moreover, a new multivariable searching method based on the MUSIC algorithm, called MUSA (multilevel MUSIC-based algorithm), is presented. This method is proposed to estimate the 3D location parameters faster than other multivariable algorithms, such as MUSIC, at the cost of increased memory requirements. Finally, in the last chapter, a MIMO-OFDM-SPAA3D prototype is presented to experimentally evaluate different MIMO schemes regarding antennas, polarization and frequency in different indoor and outdoor scenarios. The prototype has been developed on a Software-Defined Radio (SDR) platform. It allows taking measurements in the environments where future wireless systems will be deployed. The novelty of this prototype concerns the following two subsystems.
The first one is the three-dimensional (3D) antenna positioning system (SPAA3D), based on three linear scanners, which was developed to make automatic testing possible while reducing antenna array positioning errors. A set of software tools has been developed for research tasks such as MIMO channel characterization, MIMO capacity evaluation, OFDM synchronization, and so on. The second subsystem is the RF autocalibration module at the TX and RX. This subsystem allows the spatial structure of indoor and outdoor channels to be tracked properly in terms of DoA profiles. Some results are drawn regarding the performance of MIMO-OFDM systems with different polarization schemes and different propagation environments.
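
As a generic example of the subspace DoA estimators discussed in this thesis (MUSIC and its multilevel variant MUSA), the sketch below computes the narrowband MUSIC pseudo-spectrum for a uniform linear array. The array geometry, spacing and simulated data are assumptions of the example, not the prototype's configuration.

```python
import numpy as np

def music_spectrum(snapshots, n_sources, d_over_lambda=0.5, angles=None):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array; 'snapshots'
    has shape (n_antennas, n_snapshots)."""
    if angles is None:
        angles = np.linspace(-90.0, 90.0, 721)
    m = snapshots.shape[0]
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]   # sample covariance
    _, eigvec = np.linalg.eigh(R)                             # eigenvalues in ascending order
    En = eigvec[:, : m - n_sources]                           # noise subspace
    k = np.arange(m)
    p = []
    for th in np.radians(angles):
        a = np.exp(-2j * np.pi * d_over_lambda * k * np.sin(th))   # steering vector
        p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
    return angles, np.array(p)

# Quick check: two simulated plane waves at -20 and 35 degrees plus noise
m, n = 8, 500
k = np.arange(m)[:, None]
A = np.exp(-2j * np.pi * 0.5 * k * np.sin(np.radians([-20.0, 35.0])))
S = (np.random.randn(2, n) + 1j * np.random.randn(2, n)) / np.sqrt(2)
X = A @ S + 0.1 * (np.random.randn(m, n) + 1j * np.random.randn(m, n))
ang, p = music_spectrum(X, n_sources=2)   # peaks of p appear near -20 and 35 degrees
```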

Relevance:

30.00%

Publisher:

Abstract:

El análisis de imágenes hiperespectrales permite obtener información con una gran resolución espectral: cientos de bandas repartidas desde el espectro infrarrojo hasta el ultravioleta. El uso de dichas imágenes está teniendo un gran impacto en el campo de la medicina y, en concreto, destaca su utilización en la detección de distintos tipos de cáncer. Dentro de este campo, uno de los principales problemas que existen actualmente es el análisis de dichas imágenes en tiempo real ya que, debido al gran volumen de datos que componen estas imágenes, la capacidad de cómputo requerida es muy elevada. Una de las principales líneas de investigación acerca de la reducción de dicho tiempo de procesado se basa en la idea de repartir su análisis en diversos núcleos trabajando en paralelo. En relación a esta línea de investigación, en el presente trabajo se desarrolla una librería para el lenguaje RVC – CAL – lenguaje que está especialmente pensado para aplicaciones multimedia y que permite realizar la paralelización de una manera intuitiva – donde se recogen las funciones necesarias para implementar el clasificador conocido como Support Vector Machine – SVM. Cabe mencionar que este trabajo complementa el realizado en [1] y [2] donde se desarrollaron las funciones necesarias para implementar una cadena de procesado que utiliza el método unmixing para procesar la imagen hiperespectral. En concreto, este trabajo se encuentra dividido en varias partes. La primera de ellas expone razonadamente los motivos que han llevado a comenzar este Trabajo de Investigación y los objetivos que se pretenden conseguir con él. Tras esto, se hace un amplio estudio del estado del arte actual y, en él, se explican tanto las imágenes hiperespectrales como sus métodos de procesado y, en concreto, se detallará el método que utiliza el clasificador SVM. Una vez expuesta la base teórica, nos centraremos en la explicación del método seguido para convertir una versión en Matlab del clasificador SVM optimizado para analizar imágenes hiperespectrales; un punto importante en este apartado es que se desarrolla la versión secuencial del algoritmo y se asientan las bases para una futura paralelización del clasificador. Tras explicar el método utilizado, se exponen los resultados obtenidos primero comparando ambas versiones y, posteriormente, analizando por etapas la versión adaptada al lenguaje RVC – CAL. Por último, se aportan una serie de conclusiones obtenidas tras analizar las dos versiones del clasificador SVM en cuanto a bondad de resultados y tiempos de procesado y se proponen una serie de posibles líneas de actuación futuras relacionadas con dichos resultados. ABSTRACT. Hyperspectral imaging allows us to collect high resolution spectral information: hundred of bands covering from infrared to ultraviolet spectrum. These images have had strong repercussions in the medical field; in particular, we must highlight its use in cancer detection. In this field, the main problem we have to deal with is the real time analysis, because these images have a great data volume and they require a high computational power. One of the main research lines that deals with this problem is related with the analysis of these images using several cores working at the same time. 
In line with this research, this document describes the development of an RVC-CAL library – a language especially intended for multimedia applications that allows parallelization in an intuitive way – gathering all the functions needed to implement the Support Vector Machine (SVM) classifier. This work complements the research conducted in [1] and [2], where the functions needed to implement the unmixing method to analyze hyperspectral images were developed. The document is divided into several chapters. The first of them introduces the motivation of the Master Thesis and the main objectives to achieve. After that, we study the state of the art of the technologies related to this work, such as hyperspectral images, their processing methods and, in particular, the SVM classifier. Once the theoretical bases have been presented, we explain the methodology followed to translate a Matlab version of the SVM classifier, optimized to analyze hyperspectral images, into the RVC-CAL language; one of the most important points of this chapter is that a sequential implementation is developed and the bases for a future parallelization of the SVM classifier are set. At this point, we present the results of the comparison between both versions and then the results of the different stages that compose the SVM in its RVC-CAL version. Finally, we draw some conclusions regarding algorithm behaviour and processing time. In the same way, we propose some future research lines based on the results obtained in this document.
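
The classification stage being ported is, at its core, the evaluation of the SVM decision function for every pixel. Below is a sequential numpy sketch of that per-pixel evaluation, with a hypothetical toy model; in practice the support vectors and coefficients would come from the trained classifier (e.g. the Matlab version mentioned above).

```python
import numpy as np

def svm_decide(pixel, support_vectors, dual_coefs, intercept, gamma):
    """Sequential evaluation of a trained two-class RBF-kernel SVM on a single
    hyperspectral pixel: f(x) = sum_i alpha_i*y_i*K(sv_i, x) + b."""
    k = np.exp(-gamma * np.sum((support_vectors - pixel) ** 2, axis=1))  # RBF kernel values
    return np.sign(dual_coefs @ k + intercept)

# Hypothetical toy model: 3 support vectors with 5 spectral bands each
sv = np.array([[0.1, 0.4, 0.3, 0.2, 0.5],
               [0.6, 0.1, 0.7, 0.3, 0.2],
               [0.2, 0.9, 0.4, 0.8, 0.1]])
alpha_y = np.array([0.8, -0.5, -0.3])
print(svm_decide(np.array([0.2, 0.5, 0.3, 0.4, 0.4]), sv, alpha_y, 0.1, gamma=0.5))
```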

Relevance:

30.00%

Publisher:

Abstract:

Pest management practices that rely on pesticides are becoming increasingly less effective and environmentally inappropriate in many cases, and the search for alternatives is now in focus. Exclusion of pests from the crop by means of pesticide-treated screens can be an eco-friendly method to protect crops, especially if the pests are vectors of important diseases. The mesh size of the nets is crucial in determining whether insects can eventually cross the barrier or are excluded, because insect size varies greatly depending on the species. Long-lasting insecticide-treated nets (LLITNs), factory pre-treated, have been used for years to fight the mosquito vectors of malaria and are able to retain their biological efficacy in the field for 3 years. In agriculture, nets treated with different insecticides have shown efficacy in controlling some insects and mites, so they seem to be a good tool to help solve some pest problems. However, treated nets must be carefully evaluated because they can diminish air flow, increase temperature and humidity and decrease light transmission, which may affect plant growth, pests and natural enemies. As biological control is nowadays considered a key factor in IPM, the potential negative effects of treated nets on natural enemies need to be studied carefully. In this work, the effects of a bifenthrin-treated net (3 g/kg) (supplied by the company Intelligent Insect Control, IIC) on natural enemies of aphids were tested on a cucumber crop in Central Spain in autumn 2011. The crop was sown in 8 x 6.5 m tunnels divided into 2 sealed compartments with control or treated nets, which were simple yellow netting with 25 mesh (10 x 10 threads/cm2; 1 x 1 mm hole size). Pieces of the treated net, 2 m high, were placed along the lateral sides of one of the two tunnel compartments in each of the 3 available tunnels (replicates); the rest was covered by a commercial untreated net of a similar mesh. The pest, Aphis gossypii Glover (Aphidae), the parasitoid Aphidius colemani (Haliday) (Braconidae) and the predator Adalia bipunctata L. (Coccinellidae) were artificially introduced into the crop. Weekly sampling was carried out, recording the presence or absence of the pest and the natural enemies (NE) in the 42 plants per compartment as well as the number of insects in 11 marked plants. Environmental conditions (temperature, relative humidity, UV and PAR radiation) were recorded. Results show that when aphids were artificially released inside the tunnels, neither their number per plant nor their distribution was affected by the treated net. A lack of negative effect of the insecticide-treated net on natural enemies was also observed. Adalia bipunctata did not establish in the crop and only a short-term control of aphids was observed one week after release. On the other hand, A. colemani did establish in the crop and a longer-term effect on the number of aphids per plant was detected, irrespective of the type of net. KEY WORDS: bifenthrin-treated net, Adalia bipunctata, Aphidius colemani, Aphis gossypii, semi-field

Relevance:

30.00%

Publisher:

Abstract:

A method to reconstruct the excitation coefficients of wide-slot arrays from near-field data is presented. The plane wave spectrum (PWS) is used for the reconstruction, and the shape of the field distribution on a wide slot is taken into account in the calculation of the PWS. The proposed algorithm is validated through the reconstruction of the excitation coefficients of a wide-slot array with element failures from simulated near-field data. The element failures are clearly located by the proposed algorithm.