907 results for computationally efficient algorithm
Abstract:
Use of microarray technology often leads to high-dimensional, low-sample-size data settings. Over the past several years, a variety of novel approaches have been proposed for variable selection in this context. However, only a small number of these have been adapted for time-to-event data where censoring is present. The elastic net penalization approach is among the standard variable selection methods shown both to have good predictive accuracy and to be computationally efficient. In this paper, adaptations of the elastic net approach are presented for variable selection both under the Cox proportional hazards model and under an accelerated failure time (AFT) model. The two methods are assessed through simulation studies and through analysis of microarray data obtained from a set of patients with diffuse large B-cell lymphoma where time to survival is of interest. The approaches are shown to match or exceed the predictive performance of a Cox-based and an AFT-based variable selection method. They are moreover shown to be much more computationally efficient than their respective Cox- and AFT-based counterparts.
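The abstract does not spell out the fitting routine; as a hedged illustration, the sketch below implements plain coordinate descent for an elastic-net-penalized least-squares objective, the building block that Cox- and AFT-based variants extend. The squared-error loss, the function names, and the `lam`/`alpha` parameters are illustrative assumptions, not the paper's code.

```python
import numpy as np

def soft_threshold(z, gamma):
    """Soft-thresholding operator associated with the L1 part of the penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def elastic_net_cd(X, y, lam=0.1, alpha=0.5, n_iter=200):
    """Coordinate descent for (1/(2n))*||y - X b||^2
    + lam * (alpha*||b||_1 + 0.5*(1 - alpha)*||b||_2^2)."""
    n, p = X.shape
    beta = np.zeros(p)
    col_scale = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, lam * alpha) / (col_scale[j] + lam * (1 - alpha))
    return beta
```

For censored survival data, the squared-error term would be replaced by the negative Cox partial log-likelihood or a weighted AFT loss, while the penalty and the soft-thresholding update stay the same.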
Abstract:
This book serves as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
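As a small, hedged illustration of the kind of computationally efficient graph-matching test mentioned for Part II (not taken from the book), the sketch below runs one-dimensional Weisfeiler-Leman colour refinement and compares colour histograms, a cheap necessary condition for graph isomorphism; the adjacency-list format and function names are assumptions.

```python
from collections import Counter

def wl_colors(adj, n_rounds=3):
    """1-D Weisfeiler-Leman colour refinement on an adjacency dict {node: set(neighbours)}."""
    colors = {v: 0 for v in adj}
    for _ in range(n_rounds):
        # signature of a vertex: its colour plus the sorted colours of its neighbours
        sigs = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v]))) for v in adj}
        # relabel signatures to compact integers (consistent across graphs with equal signature sets)
        relabel = {sig: i for i, sig in enumerate(sorted(set(sigs.values())))}
        colors = {v: relabel[sigs[v]] for v in adj}
    return Counter(colors.values())

def maybe_isomorphic(adj1, adj2):
    """Necessary (not sufficient) condition for isomorphism: identical colour histograms."""
    return wl_colors(adj1) == wl_colors(adj2)
```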
Abstract:
Engine manufacturers need computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for the assessment of combustion system hardware designs and the early development of engine calibrations. This thesis discusses the process of developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injected spark-ignition engine with variable valve timing, lift, and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was used to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate and correlate the 3-D combustion system, port, and piston geometry to the turbulent flow development within the cylinder, in order to properly predict the experimentally measured turbulent flow parameters through the intake, compression, and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool with experimental data to determine its accuracy. The results show that the combustion modeling tool captures accurate trends in the combustion sensitivities to turbulent flow, thermodynamic, and internal residual effects with changes in intake and exhaust valve timing, lift, and duration.
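The thesis regresses its own burn-rate model from engine data; as a hedged, generic stand-in, the sketch below evaluates the classical Wiebe-function mass-fraction-burned profile and differentiates it to obtain a burn rate. The parameter values (`a`, `m`, start of combustion, burn duration) are illustrative assumptions, not calibrated figures from the thesis.

```python
import numpy as np

def wiebe_mass_fraction_burned(theta, theta_soc=-10.0, duration=40.0, a=5.0, m=2.0):
    """Classical Wiebe burn profile: x_b = 1 - exp(-a * ((theta - theta_soc)/duration)**(m+1)),
    with theta in degrees crank angle."""
    x = np.clip((theta - theta_soc) / duration, 0.0, None)
    return 1.0 - np.exp(-a * x ** (m + 1))

# burn rate (per degree crank angle) via finite differences
theta = np.linspace(-20.0, 60.0, 401)
xb = wiebe_mass_fraction_burned(theta)
burn_rate = np.gradient(xb, theta)
```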
Abstract:
Detector uniformity is a fundamental performance characteristic of all modern gamma camera systems, and ensuring a stable, uniform detector response is critical for maintaining clinical images that are free of artifact. For these reasons, the assessment of detector uniformity is one of the most common activities associated with a successful clinical quality assurance program in gamma camera imaging. The evaluation of this parameter, however, is often unclear because it is highly dependent upon acquisition conditions, reviewer expertise, and the application of somewhat arbitrary limits that do not characterize the spatial location of the non-uniformities. Furthermore, as the goal of any robust quality control program is the determination of significant deviations from standard or baseline conditions, clinicians and vendors often neglect the temporal nature of detector degradation (1). This thesis describes the development and testing of new methods for monitoring detector uniformity. These techniques provide more quantitative, sensitive, and specific feedback to the reviewer so that he or she may be better equipped to identify performance degradation prior to its manifestation in clinical images. The methods exploit the temporal nature of detector degradation and spatially segment distinct regions-of-non-uniformity using multi-resolution decomposition. These techniques were tested on synthetic phantom data using different degradation functions, as well as on experimentally acquired time series floods with induced, progressively worsening defects present within the field-of-view. The sensitivity of conventional, global figures-of-merit for detecting changes in uniformity was evaluated and compared to these new image-space techniques. The image-space algorithms provide a reproducible means of detecting regions-of-non-uniformity prior to any single flood image’s having a NEMA uniformity value in excess of 5%. The sensitivity of these image-space algorithms was found to depend on the size and magnitude of the non-uniformities, as well as on the nature of the cause of the non-uniform region. A trend analysis of the conventional figures-of-merit demonstrated their sensitivity to shifts in detector uniformity. The image-space algorithms are computationally efficient. Therefore, the image-space algorithms should be used concomitantly with the trending of the global figures-of-merit in order to provide the reviewer with a richer assessment of gamma camera detector uniformity characteristics.
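For context on the conventional global figures-of-merit that the image-space algorithms are compared against, the sketch below computes a simplified NEMA-style integral uniformity on a flood image. The 3x3 weighted smoothing kernel is standard, but the field-of-view masking and pixel binning steps are omitted, and none of the thesis's multi-resolution segmentation is implemented here.

```python
import numpy as np

def nema_integral_uniformity(flood):
    """Simplified integral uniformity (%): 100*(max - min)/(max + min) of the flood image,
    after the standard 3x3 weighted smoothing (UFOV masking omitted)."""
    kernel = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], dtype=float)
    kernel /= kernel.sum()
    padded = np.pad(flood.astype(float), 1, mode="edge")
    h, w = flood.shape
    # 'same'-size smoothing via shifted, weighted sums
    smoothed = sum(kernel[i, j] * padded[i:i + h, j:j + w]
                   for i in range(3) for j in range(3))
    return 100.0 * (smoothed.max() - smoothed.min()) / (smoothed.max() + smoothed.min())
```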
Abstract:
In this paper, we present a novel technique for the removal of astigmatism in submillimeter-wave optical systems through the employment of a specific combination of so-called astigmatic off-axis reflectors. This technique treats an orthogonally astigmatic beam using skew Gaussian beam analysis, from which an anastigmatic imaging network is derived. The resultant beam is considered truly stigmatic, with all Gaussian beam parameters in the orthogonal directions being matched. This is thus considered an improvement over previous techniques wherein a beam corrected for astigmatism has only the orthogonal beam amplitude radii matched, with phase shift and phase radius of curvature not considered. The technique is computationally efficient, negating the requirement for computationally intensive numerical analysis of shaped reflector surfaces. The required optical surfaces are also relatively simple to implement compared to such numerically optimized shaped surfaces. The technique is implemented in this work as part of the complete optics train for the STEAMR antenna. The STEAMR instrument is envisaged as a multi-beam limb-sounding instrument operating at submillimeter wavelengths. The antenna optics arrangement for this instrument uses multiple off-axis reflectors to control the incident radiation and couple it to the corresponding receiver feeds. An anastigmatic imaging network is successfully implemented into an optical model of this antenna, and the resultant design ensures optimal imaging of the beams onto the corresponding feed horns. This example also addresses the challenges of imaging in multi-beam antenna systems.
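The skew Gaussian beam treatment tracks the complex beam parameter independently in the two orthogonal principal planes; the minimal sketch below propagates two such q-parameters through a thin focusing element using the standard ABCD law. The waist sizes, offsets, wavelength, and focal length are made-up illustrative values, not STEAMR design figures.

```python
import numpy as np

def q_parameter(waist, z, wavelength):
    """Complex beam parameter q = z + i*z_R, with Rayleigh range z_R = pi*w0^2/lambda."""
    z_rayleigh = np.pi * waist ** 2 / wavelength
    return z + 1j * z_rayleigh

def propagate_q(q, abcd):
    """Transform q through an ABCD element: q' = (A q + B) / (C q + D)."""
    A, B, C, D = abcd.ravel()
    return (A * q + B) / (C * q + D)

# orthogonally astigmatic beam: track q separately in the two principal planes
wavelength = 0.9e-3                                   # roughly submillimeter-wave, in metres
qx = q_parameter(5e-3, 0.0, wavelength)               # waist in the x-plane at z = 0
qy = q_parameter(7e-3, 0.02, wavelength)              # different waist and offset in the y-plane
mirror = np.array([[1.0, 0.0], [-1.0 / 0.25, 1.0]])   # thin focusing element, f = 0.25 m
qx_out = propagate_q(qx, mirror)
qy_out = propagate_q(qy, mirror)
```

Matching both output q-parameters (amplitude radius, phase radius of curvature, and accumulated phase) is what distinguishes a truly stigmatic correction from one that equalizes amplitude radii only.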
Abstract:
In this work we devise two novel algorithms for blind deconvolution based on a family of logarithmic image priors. In contrast to recent approaches, we consider a minimalistic formulation of the blind deconvolution problem where there are only two energy terms: a least-squares term for the data fidelity and an image prior based on a lower-bounded logarithm of the norm of the image gradients. We show that this energy formulation is sufficient to achieve the state of the art in blind deconvolution by a good margin over previous methods. Much of the performance is due to the chosen prior. On the one hand, this prior is very effective in favoring sparsity of the image gradients. On the other hand, it is non-convex, so solution strategies that can deal effectively with local minima of the energy become necessary. We devise two iterative minimization algorithms that at each iteration solve convex problems: one obtained via a primal-dual approach and one via majorization-minimization. While the former is computationally efficient, the latter achieves state-of-the-art performance on a public dataset.
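A minimal sketch of the majorization-minimization idea, assuming the prior takes the form sum_i log(eps + |grad u|_i): the concave logarithm is majorized by its tangent, so each outer iteration reduces to a convex, reweighted TV-like problem with the weights computed below. The exact prior, the value of eps, and the forward-difference gradients are assumptions, not the paper's formulation.

```python
import numpy as np

def log_gradient_prior(u, eps=1e-3):
    """Lower-bounded logarithmic prior on image gradients: sum_i log(eps + |grad u|_i)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return float(np.sum(np.log(eps + mag)))

def mm_weights(u, eps=1e-3):
    """Majorization: log(eps + t) <= log(eps + t0) + (t - t0)/(eps + t0).
    Each outer iteration therefore minimizes a weighted (convex) gradient penalty
    with per-pixel weights 1/(eps + |grad u0|)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    mag = np.sqrt(gx ** 2 + gy ** 2)
    return 1.0 / (eps + mag)
```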
Abstract:
A new method is presented to generate reduced-order models (ROMs) for fluid dynamics problems of industrial interest. The method is based on the expansion of the flow variables in a proper orthogonal decomposition (POD) basis, calculated from a limited number of snapshots obtained via computational fluid dynamics (CFD). The POD-mode amplitudes are then calculated as minimizers of a properly defined overall residual of the equations and boundary conditions. The method includes various ingredients that are new in this field. The residual can be calculated using only a limited number of points in the flow field, which can be scattered either over the whole computational domain or over a smaller projection window. The resulting ROM is both computationally efficient (reconstructed flow fields require, in cases without shock waves, less than 1% of the time needed to compute a full CFD solution) and flexible (the projection window can avoid regions of large localized CFD errors). Also, for aerodynamic problems, the POD modes are obtained from a set of snapshots calculated by a CFD method based on the compressible Navier-Stokes equations and a turbulence model (which furthermore includes some unphysical stabilizing terms added for purely numerical reasons), but projection onto the POD manifold is made using the inviscid Euler equations, which makes the method independent of the CFD scheme. In addition, shock waves are treated specifically in the POD description, to avoid the need for an excessively large number of snapshots. Various definitions of the residual are also discussed, along with the number and distribution of snapshots, the number of retained modes, and the effect of CFD errors. The method is checked and discussed on several test problems that describe (i) heat transfer in the recirculation region downstream of a backwards-facing step, (ii) the flow past a two-dimensional airfoil in both the subsonic and transonic regimes, and (iii) the flow past a three-dimensional horizontal tail plane. The method is both efficient and numerically robust in the sense that the computational effort is quite small compared to CFD and the results are reasonably accurate and largely insensitive to the definition of the residual, to CFD errors, and to the CFD method itself, which may contain artificial stabilizing terms. Thus, the method is amenable to practical engineering applications.
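A minimal sketch of the snapshot-based POD machinery described above: the basis comes from an SVD of the snapshot matrix, and the mode amplitudes are then obtained by least squares using only a small set of points. In the thesis the minimized residual comes from the governing equations and boundary conditions; here, as a simplified stand-in, the amplitudes are fitted directly to field values at a few probe points inside a projection window. All names and the probe-fitting shortcut are assumptions.

```python
import numpy as np

def pod_basis(snapshots, n_modes):
    """POD modes from a snapshot matrix (each column is one flow-field snapshot)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]

def fit_amplitudes(phi, observed_values, observed_idx):
    """Least-squares mode amplitudes using only a small set of scattered points."""
    a, *_ = np.linalg.lstsq(phi[observed_idx, :], observed_values, rcond=None)
    return a

# usage sketch: reconstruct a new field from a handful of probe points
# snapshots: (n_dof, n_snap) array; probes: indices of the projection window
# phi = pod_basis(snapshots, n_modes=10)
# a = fit_amplitudes(phi, new_field_at_probes, probes)
# reconstruction = phi @ a
```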
Abstract:
When we try to analyze and control a system whose model was obtained only from input/output data, accuracy of the model is essential. On the other hand, to make the procedure practical, the modeling stage must be computationally efficient. In this regard, this paper presents the application of an extended Kalman filter to the parametric adaptation of a fuzzy model.
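A hedged sketch of one extended-Kalman-filter step for parametric adaptation, treating the model parameters as a quasi-constant random-walk state and the (fuzzy) model output as the measurement; the interfaces `h(theta, x)` and `jac_h(theta, x)` and the scalar-output assumption are illustrative, not the paper's formulation.

```python
import numpy as np

def ekf_update(theta, P, x, y, h, jac_h, Q, R):
    """One EKF step for parameter estimation:
    state = model parameters theta, measurement = scalar model output h(theta, x)."""
    # prediction: parameters assumed quasi-constant (random-walk model)
    theta_pred = theta
    P_pred = P + Q
    # linearize the model output around the predicted parameters
    H = jac_h(theta_pred, x)                 # shape (1, n_params)
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain, shape (n_params, 1)
    innovation = y - h(theta_pred, x)
    theta_new = theta_pred + (K * innovation).ravel()
    P_new = (np.eye(len(theta)) - K @ H) @ P_pred
    return theta_new, P_new
```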
Abstract:
Some basic ideas are presented for the construction of robust, computationally efficient reduced-order models suitable for use in industrial environments, combined with somewhat rough computational fluid dynamics solvers. These ideas result from a critical review of the basic principles of proper orthogonal decomposition-based reduced-order modeling of both steady and unsteady fluid flows. In particular, the extent to which some artifacts of the computational fluid dynamics solvers can be ignored is addressed, which opens up the possibility of obtaining quite flexible reduced-order models. The methods are illustrated with the steady aerodynamic flow around the horizontal tail plane of a commercial aircraft in transonic conditions, and with the unsteady lid-driven cavity problem. In both cases, the approximations are fairly good, reducing the computational cost by a significant factor.
Abstract:
A method is presented to construct computationally efficient reduced-order models (ROMs) of three-dimensional aerodynamic flows around commercial aircraft components. The method is based on the proper orthogonal decomposition (POD) of a set of steady snapshots, which are calculated using an industrial solver based on the Reynolds-averaged Navier-Stokes (RANS) equations. The POD-mode amplitudes are calculated by minimizing a residual defined from the Euler equations, even though the snapshots themselves are calculated from viscous equations. This makes the ROM independent of the peculiarities of the solver used to calculate the snapshots. Also, both the POD modes and the residual are calculated using points in the computational mesh that are concentrated in a close vicinity of the aircraft, which are far fewer than the total number of mesh points. Despite these simplifications, the method provides quite good approximations of the flow variable distributions in the whole computational domain, including the boundary layer attached to the aircraft surface and the wake. Thus, the method is both robust and computationally efficient, which is checked by considering the aerodynamic flow around a horizontal tail plane in the transonic range 0.4 ≤ Mach number ≤ 0.8, −3° ≤ angle of attack ≤ 3°.
Abstract:
Start of the development of an efficient algorithm targeted at devices with low processing power, intended to help people without specific training to record a biological signal such as an electrocardiogram. The application must therefore guide the user through the acquisition of the signal, evaluate the quality of the recording obtained and, in pseudo-real time, check whether the signal quality is good enough for later diagnosis, so that if the medical test needs to be repeated, it can be repeated immediately. In addition, the algorithm must extract the most relevant features of the electrocardiographic signal, process them, and obtain a set of significant patterns that support a diagnostic orientation towards some of the most common pathologies that can be inferred from the cardiac signal information. For the extraction, evaluation, and decision making of this stage prior to generating the diagnosis, the classical architecture of a pattern recognition system is followed, defining as many classes as pathologies are to be identified. This diagnostic information, obtained through the pattern recognition system, could serve as an aid or guide for the subsequent remote review of the test by a qualified medical professional, thus avoiding travel to areas where, with the means available today, the presence of healthcare personnel is very unlikely.
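As a hedged illustration of the kind of pseudo-real-time quality check described above (not the thesis algorithm), the sketch below computes a crude spectral signal-quality index for an ECG segment; the 5-15 Hz band, the default sampling rate, and the function name are assumptions.

```python
import numpy as np

def ecg_quality_index(signal, fs=250.0):
    """Crude signal-quality index: fraction of spectral power in the 5-15 Hz band,
    where most QRS energy lies; very low values suggest the recording should be repeated."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= 5.0) & (freqs <= 15.0)
    total = spectrum.sum()
    return float(spectrum[band].sum() / total) if total > 0 else 0.0
```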
Abstract:
In this paper we develop new techniques for revealing geometrical structures in phase space that are valid for aperiodically time-dependent dynamical systems, which we refer to as Lagrangian descriptors. These quantities are based on the integration, for a finite time, along trajectories of an intrinsic bounded, positive geometrical and/or physical property of the trajectory itself. We discuss a general methodology for constructing Lagrangian descriptors, and we discuss a “heuristic argument” that explains why this method is successful for revealing geometrical structures in the phase space of a dynamical system. We support this argument by explicit calculations on a benchmark problem having a hyperbolic fixed point with stable and unstable manifolds that are known analytically. Several other benchmark examples are considered that allow us to assess the performance of Lagrangian descriptors in revealing invariant tori and regions of shear. Throughout the paper, “side-by-side” comparisons of the performance of Lagrangian descriptors with both finite-time Lyapunov exponents (FTLEs) and finite-time averages of certain components of the vector field (“time averages”) are carried out and discussed. In all cases Lagrangian descriptors are shown to be both more accurate and more computationally efficient than these methods. We also perform computations for an explicitly three-dimensional, aperiodically time-dependent vector field and an aperiodically time-dependent vector field defined as a data set. Comparisons with FTLEs and time averages for these examples are also carried out, with similar conclusions as for the benchmark examples.
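A minimal sketch of the arc-length Lagrangian descriptor, assuming the commonly used choice of integrating the trajectory speed forward and backward over a window of length tau around t0; the fixed-step RK4 integrator and parameter names are illustrative assumptions.

```python
import numpy as np

def lagrangian_descriptor(v, x0, t0, tau, dt=0.01):
    """Arc-length Lagrangian descriptor M: trajectory arc length accumulated over
    [t0 - tau, t0 + tau], integrated here with a fixed-step RK4 scheme."""
    def integrate(x, t_start, t_end, sign):
        m, t = 0.0, t_start
        n_steps = int(round(abs(t_end - t_start) / dt))
        for _ in range(n_steps):
            h = sign * dt
            k1 = v(t, x)
            k2 = v(t + h / 2.0, x + (h / 2.0) * k1)
            k3 = v(t + h / 2.0, x + (h / 2.0) * k2)
            k4 = v(t + h, x + h * k3)
            step = (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
            m += np.linalg.norm(step)   # arc length of this small displacement
            x, t = x + step, t + h
        return m
    x0 = np.asarray(x0, dtype=float)
    # forward plus backward contributions
    return integrate(x0, t0, t0 + tau, +1.0) + integrate(x0, t0, t0 - tau, -1.0)

# example vector field (Duffing-like): v = lambda t, x: np.array([x[1], x[0] - x[0]**3])
```

Sharp ridges of M evaluated over a grid of initial conditions mark the stable and unstable manifolds that the descriptor is designed to reveal.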
Abstract:
Soil tomography and morphological functions built over Minkowski functionals were used to describe the impact of two soil management practices on pore structure in a Mediterranean vineyard. Soil structure controls important physical and biological processes in soil–plant–microbial systems. Those processes are dominated by the geometry of soil pore structure, and a correct model of this geometry is critical for understanding them. Soil tomography has been shown to provide rich three-dimensional digital information on soil pore geometry. Recently, mathematical morphological techniques have been proposed as powerful tools to analyze and quantify the geometrical features of porous media. Minkowski functionals and morphological functions built over them provide computationally efficient means to measure four fundamental geometrical features of three-dimensional objects: volume, boundary surface, mean boundary surface curvature, and connectivity. We used thresholding and the dilation and erosion of three-dimensional images to generate morphological functions and to explore the evolution of the Minkowski functionals as the threshold and the degree of dilation and erosion change. We analyzed the three-dimensional geometry of soil pore space with X-ray computed tomography (CT) of intact soil columns from a Spanish Mediterranean vineyard under two different management practices (conventional tillage versus a permanent cover crop of resident vegetation). Our results suggest that morphological functions built over Minkowski functionals are promising tools for characterizing soil macropore structure, and that the evolution of morphological features with dilation and erosion is more informative as an indicator of structure than a moving threshold for both soil management practices studied.
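As a hedged illustration of how the first two Minkowski functionals can be read off a segmented tomogram, the sketch below counts pore voxels (volume) and exposed pore-solid voxel faces (boundary surface) in a binary 3-D image. Curvature and connectivity, and the morphological functions built by dilation and erosion, are not implemented here, and the voxel-face definition of surface area is an assumption.

```python
import numpy as np

def minkowski_volume_surface(pore):
    """First two Minkowski functionals of a binary 3-D pore image, in voxel units:
    volume = number of pore voxels, surface = number of exposed pore voxel faces."""
    pore = np.asarray(pore).astype(bool)
    volume = int(pore.sum())
    surface = 0
    for axis in range(3):
        a = np.swapaxes(pore, 0, axis)
        # faces between neighbouring voxels of different phase along this axis
        surface += int(np.sum(a[1:, ...] != a[:-1, ...]))
        # faces of pore voxels lying on the image boundary
        surface += int(a[0, ...].sum() + a[-1, ...].sum())
    return volume, surface
```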
Abstract:
The advection-diffusion partial differential equation with chemical reaction is the basis of atmospheric pollutant dispersion models, and the numerical methods used to solve it have been studied extensively. This thesis presents the implementation of a new conservative method for solving the advective part of the partial differential equation that models pollutant dispersion within the mesoscale chemistry transport model CHIMERE. The method is based on a finite-volume technique combined with rational interpolation. Its advantage is exact conservation of the transported mass, owing to the use of the mass conservation law: a flux formulation is employed, based on the weighted integral over each cell of the finite-volume discretization of space. The numerical results of simulations implementing the conservative advection scheme in CHIMERE were compared with pollutant concentrations observed at the network of monitoring stations distributed across the Iberian Peninsula. The error statistics, the mean normalized bias and the mean normalized absolute error, lie within the ranges proposed by the EPA for considering a model accurate. In addition, a new method is introduced for solving the advective-diffusive part of the partial differential equation that models atmospheric pollutant dispersion. A high-order finite-difference method is used to solve the diffusive part of the pollutant transport equation, together with the conservative rational method for the advective part, in one and two dimensions. Results for several test cases, both academic and real atmospheric problems, were compared with the analytical solution of the advection-diffusion equation, showing that the new method approximates the solution accurately. Finally, a complete model covering advection and diffusion with chemical reaction was developed, combining the previous methods with a second-order backward differentiation formula (BDF2). This implicit, second-order multistep scheme handles the stiff systems of ordinary differential equations (ODEs) typical of atmospheric chemical kinetics. The implicit part of the BDF2 formula is solved with Gauss-Seidel iteration; using Gauss-Seidel instead of other commonly used techniques, such as the modified Newton iteration, gives fast computation and a low memory demand, which is ideal for operational models of atmospheric chemical kinetics.
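A minimal sketch of the flux-form, exactly mass-conserving finite-volume update described above, using first-order upwind fluxes on a periodic 1-D grid as a stand-in for the rational-interpolation fluxes of the thesis; the function name and the constant-velocity assumption are illustrative.

```python
import numpy as np

def advect_conservative(u, velocity, dx, dt, n_steps):
    """Conservative finite-volume advection in flux form:
    u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
    so the total mass sum(u)*dx is preserved exactly (periodic boundaries)."""
    u = np.asarray(u, dtype=float).copy()
    for _ in range(n_steps):
        if velocity >= 0:
            flux = velocity * u                  # F_{i+1/2} carried by cell i (upwind)
        else:
            flux = velocity * np.roll(u, -1)     # F_{i+1/2} carried by cell i+1
        u -= dt / dx * (flux - np.roll(flux, 1))
    return u

# mass check after any number of steps: u.sum() * dx is constant up to round-off
```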
Abstract:
The study of atmospheric behaviour has been of particular importance in both the SESAR and NextGen programmes, in which the current air traffic management (ATM) system is undergoing a profound transformation towards new paradigms, in Europe and the USA respectively, to guide and track aircraft more precisely along more efficient routes. Uncertainty is a fundamental characteristic of weather phenomena that is transferred to separation assurance, flight-path de-confliction, and flight-planning applications. In this respect, wind is a key factor in predicting the future position of an aircraft, so a deeper and more accurate knowledge of the wind field will reduce ATC uncertainties. The purpose of this thesis is to develop a new, operationally useful technique intended to provide adequate real-time atmospheric wind fields directly from on-board aircraft data, in order to improve aircraft trajectory prediction. To achieve this objective, the following work has been accomplished. The aircraft systems that provide the variables needed to derive the wind velocity have been described and analysed, together with the capabilities that allow this information to be presented for air traffic management applications. The use of aircraft as wind sensors in a terminal area for real-time wind estimation, aimed at improving aircraft trajectory prediction, has been explored. Computationally efficient methods have been developed to estimate the horizontal wind components from aircraft velocities (VGS, VCAS/VTAS), pressure, and temperature data. These wind data were used to estimate a real-time wind field using a data-processing approach based on a minimum-variance method. Finally, the accuracy of this procedure has been evaluated to establish whether the information is useful for air traffic control. The initial information comes from a Flight Data Recorder (FDR) sample of aircraft landing at Madrid-Barajas Airport. Data available for certain aircraft over a period of more than three months were exploited to derive the wind vector at each point of the airspace. A mathematical model based on different interpolation methods was used to obtain wind vectors in areas without available data. Three particular scenarios were employed to test two interpolation methods: a two-dimensional one that treats the two horizontal components independently, and a complex-variable formulation that links both components. The methods were tested in various scenarios with dissimilar results. This methodology has been implemented in a prototype tool in MATLAB© to analyse FDR data automatically and determine the wind vector field that aircraft encounter when flying in the studied airspace. Finally, the required conditions and the accuracy of the results were derived for this model. The method could be fed by commercial aircraft using their currently available data sources and computational capabilities, providing the data to ATM systems where the proposed method could be run. The computed wind velocities, or the ground and true airspeeds, could then be broadcast, for example, via the Aircraft Communications Addressing and Reporting System (ACARS), ADS-B Out messages, or Mode S. This new source would help update the wind information furnished in meteorological aeronautical products (PAM), meteorological aerodrome reports (AIRMET), and significant meteorological information (SIGMET).
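The horizontal wind estimate described above follows from the vector relation W = V_ground - V_air. A minimal sketch, assuming ground speed/track and true airspeed/heading are available from the FDR or broadcast data and that angles are measured in degrees clockwise from north:

```python
import numpy as np

def wind_from_aircraft(ground_speed, track_deg, true_airspeed, heading_deg):
    """Horizontal wind as the vector difference between ground velocity and air velocity:
    W = V_ground - V_air. Returns (east, north) wind components in the same units
    as the input speeds."""
    trk = np.radians(track_deg)
    hdg = np.radians(heading_deg)
    # east/north components of the ground and air velocity vectors
    v_ground = np.array([ground_speed * np.sin(trk), ground_speed * np.cos(trk)])
    v_air = np.array([true_airspeed * np.sin(hdg), true_airspeed * np.cos(hdg)])
    wind_east, wind_north = v_ground - v_air
    return wind_east, wind_north
```

Single-aircraft estimates like this would then feed the minimum-variance field estimation and interpolation stages described in the abstract.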