952 results for orthonormal basis functions (OBF)
Abstract:
There is a continuous search for theoretical methods that are able to describe the effects of the liquid environment on molecular systems. Different methods emphasize different aspects, and the treatment of both local and bulk properties is still a great challenge. In this work, the electronic properties of a water molecule in a liquid environment are studied by performing a relaxation of the geometry and electronic distribution using the free energy gradient method. This is done in a series of steps, in each of which we run a purely molecular mechanical (MM) Metropolis Monte Carlo simulation of liquid water and subsequently perform a quantum mechanical/molecular mechanical (QM/MM) calculation of the ensemble averages of the charge distribution, atomic forces, and second derivatives. The MP2/aug-cc-pV5Z level is used to describe the electronic properties of the QM water. B3LYP with specially designed basis functions is used for the magnetic properties. Very good agreement is found for the local properties of water, such as the geometry, vibrational frequencies, dipole moment, dipole polarizability, chemical shift, and spin-spin coupling constants. The very good performance of the free energy method combined with a QM/MM approach, along with its possible limitations, is briefly discussed.
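As a rough illustration of the relaxation step described above (standard free energy gradient relations written in our own notation, not taken from the paper), the free energy gradient is the ensemble average of the solute force over the MM configurations, and the geometry is updated with a Newton-type step built from the averaged second derivatives:

```latex
\[
  \frac{\partial A}{\partial \mathbf{q}}
    \;=\; \left\langle \frac{\partial V}{\partial \mathbf{q}} \right\rangle,
  \qquad
  \mathbf{H} \;\approx\; \left\langle \frac{\partial^{2} V}{\partial \mathbf{q}\,\partial \mathbf{q}} \right\rangle,
  \qquad
  \mathbf{q}_{k+1} \;=\; \mathbf{q}_{k} \;-\; \mathbf{H}^{-1}
    \left\langle \frac{\partial V}{\partial \mathbf{q}} \right\rangle .
\]
```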
Abstract:
Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at the global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. The global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, defining the model parameterization. A number of different parameterizations are commonly seen in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines. This sum is often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution to overcome the shortcomings of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets, in fact, are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes that contain multi-scale features, discontinuities, and sharp spikes. Wavelets are essentially used in two ways when applied to geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. Both types of application are studied in this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface wave phase velocity maps and evaluating its abilities by comparison with another type of parameterization (i.e., block parameterization). For the second type of wavelet application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capability, and then apply the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
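As a rough, self-contained illustration of the two uses of wavelets described above (our own toy example with the PyWavelets package; the signal, wavelet choices and levels are illustrative assumptions, not those of the thesis):

```python
import numpy as np
import pywt

# A non-stationary test signal with a frequency change and a sharp spike, to show
# the time-frequency localisation that a plain Fourier expansion lacks.
t = np.linspace(0.0, 1.0, 1024)
signal = np.sin(2 * np.pi * 8 * t)
signal[512:] += np.sin(2 * np.pi * 64 * t[512:])     # frequency content changes mid-way
signal[700] += 2.0                                   # sharp spike

# (1) wavelets as a basis: the signal as a sum of scaled/shifted basis functions
coeffs = pywt.wavedec(signal, 'db4', level=5)        # multi-level discrete decomposition
reconstructed = pywt.waverec(coeffs, 'db4')
n = signal.size
print("reconstruction error:", np.max(np.abs(reconstructed[:n] - signal)))

# (2) wavelets as an analysis kernel: continuous wavelet transform, scale by scale
scales = np.arange(1, 128)
cwt_coeffs, freqs = pywt.cwt(signal, scales, 'morl')
print("CWT coefficient matrix:", cwt_coeffs.shape)   # (n_scales, n_samples)
```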
Abstract:
This thesis addresses the problem of scaling reinforcement learning to high-dimensional and complex tasks. Reinforcement learning here denotes a class of learning methods based on approximate dynamic programming that is used in particular in artificial intelligence and can be applied to the autonomous control of simulated agents or real hardware robots in dynamic and unpredictable environments. To this end, regression on samples is used to determine a function that solves an "optimality equation" (Bellman) and from which approximately optimal decisions can be derived. A major hurdle is the dimensionality of the state space, which is often high and therefore poorly suited to traditional grid-based approximation schemes. The goal of this thesis is to make reinforcement learning applicable to, in principle arbitrarily, high-dimensional problems through non-parametric function approximation (more precisely, regularization networks). Regularization networks are a generalization of ordinary basis-function networks that parameterize the sought solution by the data themselves, so that an explicit choice of nodes/basis functions is not needed and the "curse of dimensionality" can be avoided for high-dimensional inputs. At the same time, regularization networks are also linear approximators, which are technically easy to handle and for which the existing convergence results of reinforcement learning remain valid (unlike, say, feed-forward neural networks). All these theoretical advantages are offset, however, by a very practical problem: the computational cost of using regularization networks naturally scales as O(n^3), where n is the number of data points. This is particularly problematic because in reinforcement learning the learning process takes place online: the samples are generated by an agent/robot while it interacts with the environment. Updates to the solution must therefore be made immediately and with little computational effort. The contribution of this thesis is accordingly divided into two parts. In the first part we formulate, for regularization networks, an efficient learning algorithm for solving general regression tasks that is tailored specifically to the requirements of online learning. Our approach is based on recursive least-squares, but can insert not only new data but also new basis functions into the existing model in constant time. This is made possible by the "subset of regressors" approximation, in which the kernel is approximated by a strongly reduced selection of training data, and by a greedy selection procedure that picks these basis elements directly from the data stream at run time. In the second part we transfer this algorithm to approximate policy evaluation via least-squares-based temporal-difference learning and integrate this building block into a complete system for the autonomous learning of optimal behavior. Overall, we develop a highly data-efficient method that is particularly suited to learning problems from robotics with continuous, high-dimensional state spaces and stochastic state transitions.
We do not rely on a model of the environment, operate largely independently of the dimension of the state space, achieve convergence with relatively few agent-environment interactions, and, thanks to the efficient online algorithm, can also operate in time-critical real-time applications. We demonstrate the capability of our approach on two realistic and complex application examples: the RoboCup keepaway problem and the control of a (simulated) octopus tentacle.
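A minimal sketch of the online regression idea summarized above (our own simplification: a Gaussian-kernel regularization network over a fixed dictionary of centres, updated by recursive least-squares; the greedy online selection of basis elements and the temporal-difference component are omitted):

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    # Gaussian kernel evaluations of the input against the current dictionary
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

class OnlineRLS:
    """Linear model over kernel basis functions, updated per sample in O(m^2)
    for m basis functions (Sherman-Morrison update of the inverse Gram matrix)."""
    def __init__(self, centers, reg=1.0):
        m = len(centers)
        self.centers = np.asarray(centers)
        self.w = np.zeros(m)                 # network weights
        self.P = np.eye(m) / reg             # inverse of the regularized Gram matrix

    def update(self, x, y):
        phi = rbf_features(x, self.centers)
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)        # gain vector
        self.w += k * (y - self.w @ phi)     # constant-time weight update
        self.P -= np.outer(k, Pphi)          # rank-one update of the inverse

    def predict(self, x):
        return self.w @ rbf_features(x, self.centers)

# toy usage: learn a 1-D function online from a stream of noisy samples
rng = np.random.default_rng(0)
centers = rng.uniform(-3, 3, size=(25, 1))   # chosen greedily in the thesis; fixed here
model = OnlineRLS(centers)
for _ in range(2000):
    x = rng.uniform(-3, 3, size=1)
    model.update(x, np.sin(2 * x[0]) + 0.1 * rng.normal())
print(model.predict(np.array([1.0])), np.sin(2.0))
```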
Abstract:
Coupled-cluster theory provides one of the most successful concepts in electronic-structure theory. This work covers the parallelization of coupled-cluster energies, gradients, and second derivatives and its application to selected large-scale chemical problems, besides more practical aspects such as the publication and support of the quantum-chemistry package ACES II MAB and the design and development of a computational environment optimized for coupled-cluster calculations. The main objective of this thesis was to extend the range of applicability of coupled-cluster models to larger molecular systems and their properties and thereby to bring large-scale coupled-cluster calculations into the day-to-day routine of computational chemistry. A straightforward strategy for the parallelization of CCSD and CCSD(T) energies, gradients, and second derivatives has been outlined and implemented for closed-shell and open-shell references. Starting from the highly efficient serial implementation of the ACES II MAB computer code, an adaptation for affordable workstation clusters has been obtained by parallelizing the most time-consuming steps of the algorithms. Benchmark calculations for systems with up to 1300 basis functions and the presented applications show that the resulting algorithm for energies, gradients, and second derivatives at the CCSD and CCSD(T) levels of theory exhibits good scaling with the number of processors and substantially extends the range of applicability. Within the framework of the 'High-accuracy Extrapolated Ab initio Thermochemistry' (HEAT) protocols, the effects of increased basis-set size and higher excitations in the coupled-cluster expansion were investigated. The HEAT scheme was generalized to molecules containing second-row atoms in the case of vinyl chloride. This allowed the different experimentally reported values to be discriminated. In the case of the benzene molecule it was shown that even for molecules of this size chemical accuracy can be achieved. Near-quantitative agreement with experiment (about 2 ppm deviation) for the prediction of fluorine-19 nuclear magnetic shielding constants can be achieved by employing the CCSD(T) model together with large basis sets at accurate equilibrium geometries, provided that vibrational averaging and temperature corrections via second-order vibrational perturbation theory are considered. Applying a very similar level of theory to the calculation of the carbon-13 NMR chemical shifts of benzene resulted in quantitative agreement with experimental gas-phase data. The NMR chemical shift study of the bridgehead 1-adamantyl cation at the CCSD(T) level resolved earlier discrepancies of lower-level theoretical treatments. The equilibrium structure of diacetylene has been determined from the combination of experimental rotational constants of thirteen isotopic species and zero-point vibrational corrections calculated at various quantum-chemical levels. These empirical equilibrium structures agree to within 0.1 pm irrespective of the theoretical level employed. High-level quantum-chemical calculations of the hyperfine structure parameters of the cyanopolyynes were found to be in excellent agreement with experiment. Finally, the theoretically most accurate determination of the molecular equilibrium structure of ferrocene to date is presented.
Abstract:
Climate monitoring requires an operational, spatio-temporal analysis of climate variability. With the aim of producing ready-to-use maps on a regular basis, it is helpful to display at a glance the spatial variability of the climate elements and their changes over time. For current and recent years, the Deutscher Wetterdienst has developed a standard procedure for producing such maps. The method for producing these maps varies across the different climate elements, depending on the underlying data, the natural variability, and the availability of in-situ data. As part of the analysis of spatio-temporal variability in this dissertation, various interpolation methods are applied to the mean temperature of the five decades of the years 1951-2000 for a relatively large area, Region VI of the World Meteorological Organization (Europe and the Middle East). Climatologically, the region covers a rather heterogeneous study area ranging from Greenland in the northwest to Syria in the southeast. The central goal of the dissertation is to develop a method for the spatial interpolation of mean decadal temperature values for Region VI. In the future, this method is to be suitable for operational monthly climate map production; it should also be transferable to other climate elements and applicable anywhere with the corresponding software. Two central databases are used in this dissertation: so-called CLIMAT data over land and ship data over the sea. In essence, the transfer of the point temperature values to the area by spatial interpolation is carried out in three steps. The first step involves a multiple regression that reduces the station values to a common level using the four predictors latitude, elevation above sea level, annual temperature amplitude, and thermal continentality. In the second step, the reduced temperature values, so-called residuals, are interpolated with the radial basis function interpolation method from the family of neural network models (NNM). In the last step, the interpolated temperature grids are scaled back to their original level by inverting the multiple regression of step one, again using the four predictors. For all station values, the difference between the value estimated by the interpolation and the true measured value is computed and reported via the geostatistical measure of the root mean square error (RMSE). The central advantages are the faithful reproduction of the values, the absence of over-generalization, and the avoidance of interpolation islands. The developed procedure is transferable to other climate elements such as precipitation, snow depth, or sunshine duration.
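A minimal sketch of the three-step interpolation scheme described above, using synthetic station data rather than the CLIMAT and ship observations (predictor values, kernel and smoothing are illustrative assumptions):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(1)
n = 200
# station predictors: latitude [deg], elevation [m], annual amplitude [K], continentality index
predictors = np.column_stack([
    rng.uniform(30, 80, n),
    rng.uniform(0, 2500, n),
    rng.uniform(5, 30, n),
    rng.uniform(0, 100, n),
])
lonlat = np.column_stack([rng.uniform(-30, 60, n), predictors[:, 0]])   # station positions
temperature = (25.0 - 0.5 * (predictors[:, 0] - 30.0) - 0.006 * predictors[:, 1]
               + 0.2 * rng.normal(size=n))                              # synthetic decadal means

# step 1: multiple regression on the four predictors, reduction to residuals
X = np.column_stack([np.ones(n), predictors])
beta, *_ = np.linalg.lstsq(X, temperature, rcond=None)
residuals = temperature - X @ beta

# step 2: radial basis function interpolation of the residuals in space
rbf = RBFInterpolator(lonlat, residuals, kernel='thin_plate_spline', smoothing=1e-3)

# step 3: at a target grid point, add the regression prediction back using the
# grid point's own predictor values (inverse of the reduction in step 1)
grid_point = np.array([[10.0, 50.0]])                        # lon, lat of one grid cell
grid_predictors = np.array([1.0, 50.0, 300.0, 18.0, 40.0])   # intercept + four predictors
estimate = grid_predictors @ beta + rbf(grid_point)[0]

# fit quality at the stations, summarized by the RMSE measure used in the thesis
rmse = np.sqrt(np.mean((X @ beta + rbf(lonlat) - temperature) ** 2))
print(round(float(estimate), 2), round(float(rmse), 3))
```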
Abstract:
Ab initio Hartree-Fock (HF), density functional theory (DFT), and hybrid potentials were employed to compute the optimized lattice parameters and elastic properties of perovskite 3d transition metal oxides. The optimized lattice parameters and elastic properties are interdependent in these materials. An interaction is observed between the electronic charge, spin, and lattice degrees of freedom in 3d transition metal oxides. The coupling between the electronic charge, spin, and lattice structures originates from the localization of the d atomic orbitals, and it also contributes to the ferroelectric and ferromagnetic properties of perovskites. The cubic and tetragonal crystalline structures of perovskite transition metal oxides of the ABO3 type are studied. The electronic structure and physics of 3d perovskite materials are complex and less well understood. Moreover, the novelty of the electronic structure and properties of these perovskite transition metal oxides exceeds the challenge offered by their complex crystalline structures. To achieve the objective of understanding the structure-property relationship of these materials, first-principles computational methods are employed. The CRYSTAL09 code is used for computing the crystalline structure, elastic, ferromagnetic, and other electronic properties. Second-order elastic constants (SOEC) and bulk moduli (B) are computed in an automated process by employing the ELASTCON (elastic constants) and EOS (equation of state) programs in the CRYSTAL09 code. ELASTCON, EOS, and other computational algorithms are utilized to determine the elastic properties of tetragonal BaTiO3, rutile TiO2, cubic and tetragonal BaFeO3, and the ferromagnetic properties of 3d transition metal oxides. Multiple methods are employed to cross-check the consistency of our computational results. These results have motivated us to explore the ferromagnetic properties of 3d transition metal oxides. Billyscript and the CRYSTAL09 code are employed to compute the optimized geometry of the cubic and tetragonal crystalline structures of the transition metal oxides of Sc to Cu. The cubic crystalline structure is initially chosen to determine the effect of lattice strains on the ferromagnetism arising from the spin angular momentum of the electron. The 3d transition metals and their oxides are challenging, as the basis functions and potentials are not fully developed to address the complex physics of the transition metals. Moreover, perovskite crystalline structures are extremely challenging with respect to the quality of the computations, as the latter require well-established methods. Ferroelectric and ferromagnetic properties of bulk, surfaces, and interfaces are explored by employing the CRYSTAL09 code. In our computations on cubic TMOs of Sc-Fe, it is observed that there is a coupling between the crystalline structure and the FM/AFM spin polarization. Strained crystalline structures of 3d transition metal oxides exhibit changes in their electromagnetic and electronic properties. The electronic structure and properties of bulk, composites, and surfaces of 3d transition metal oxides are computed successfully.
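As an illustration of the equation-of-state step mentioned above (a generic third-order Birch-Murnaghan fit to synthetic energy-volume points; this is not CRYSTAL09/EOS output):

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    # third-order Birch-Murnaghan equation of state E(V)
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * ((eta - 1.0) ** 3 * B0p
                                        + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# synthetic energy-volume curve (arbitrary units) with a little noise
V = np.linspace(55.0, 75.0, 11)
E = birch_murnaghan(V, -100.0, 64.0, 1.1, 4.5)
E += np.random.default_rng(0).normal(0.0, 1e-3, V.size)

p0 = [E.min(), V[np.argmin(E)], 1.0, 4.0]            # rough initial guess
(E0, V0, B0, B0p), _ = curve_fit(birch_murnaghan, V, E, p0=p0)
print(f"V0 = {V0:.2f}, B0 = {B0:.3f} (times unit conversion to GPa), B0' = {B0p:.2f}")
```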
Abstract:
An integrated approach for the multi-spectral segmentation of MR images is presented. The method is based on fuzzy c-means (FCM) and includes bias field correction and contextual constraints over the spatial intensity distribution, and it accounts for the non-spherical shape of clusters in the feature space. The bias field is modeled as a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of intensity are added to the FCM cost function. To reduce the computational complexity, the contextual regularizations are separated from the clustering iterations. Since the feature space is not isotropic, the distance measure adopted in the Gustafson-Kessel (G-K) algorithm is used instead of the Euclidean distance to account for the non-spherical shape of the clusters in the feature space. The algorithms are quantitatively evaluated on MR brain images using similarity measures.
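A minimal sketch of the underlying fuzzy c-means iteration (the bias-field estimation and the contextual regularization terms of the paper are omitted; data and parameters are illustrative):

```python
import numpy as np

def fcm(X, c=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: alternate membership and centroid updates."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                   # memberships, columns sum to 1
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2)  # (c, n)
        d = np.fmax(d, 1e-12)
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0)
        if np.max(np.abs(U_new - U)) < tol:
            return centers, U_new
        U = U_new
    return centers, U

# toy usage on multi-spectral feature vectors (e.g. two intensities per voxel)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.3, (200, 2)) for mu in (0.0, 2.0, 4.0)])
centers, U = fcm(X, c=3)
labels = U.argmax(axis=0)                                # hard segmentation from memberships
print(centers)
```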
Abstract:
Intensity non-uniformity (bias field) correction, contextual constraints over the spatial intensity distribution, and the non-spherical shape of clusters in the feature space are incorporated into fuzzy c-means (FCM) for the segmentation of three-dimensional multi-spectral MR images. The bias field is modeled by a linear combination of smooth polynomial basis functions for fast computation in the clustering iterations. Regularization terms for the neighborhood continuity of either intensity or membership are added to the FCM cost functions. Since the feature space is not isotropic, distance measures other than the Euclidean distance are used to account for the shape and volumetric effects of clusters in the feature space. The performance of segmentation is improved by combining the adaptive FCM scheme with the criteria used in the Gustafson-Kessel (G-K) and Gath-Geva (G-G) algorithms through the inclusion of a cluster scatter measure. The performance of this integrated approach is quantitatively evaluated on normal MR brain images using similarity measures. The improvement in segmentation quality obtained with our method is also demonstrated by comparing our results with those produced by FSL (FMRIB Software Library), a software package that is commonly used for tissue classification.
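A short sketch of the Gustafson-Kessel distance referred to above, in which each cluster receives a norm-inducing matrix computed from its fuzzy covariance (standard G-K formulation; the variable names and the toy data are ours):

```python
import numpy as np

def gk_distances(X, centers, U, m=2.0, rho=None):
    """Cluster-adaptive (Mahalanobis-like) distances d_ik^2 = (x_k - v_i)^T A_i (x_k - v_i),
    with A_i = (rho_i det F_i)^(1/d) F_i^(-1) and F_i the fuzzy covariance of cluster i."""
    c, n = U.shape
    d = X.shape[1]
    rho = np.ones(c) if rho is None else rho             # fixed cluster volumes
    dist2 = np.empty((c, n))
    Um = U ** m
    for i in range(c):
        diff = X - centers[i]                             # (n, d)
        F = (Um[i][:, None] * diff).T @ diff / Um[i].sum()   # fuzzy covariance matrix
        A = (rho[i] * np.linalg.det(F)) ** (1.0 / d) * np.linalg.inv(F)
        dist2[i] = np.einsum('nd,de,ne->n', diff, A, diff)
    return np.sqrt(dist2)

# toy usage: two elongated Gaussian clusters and uniform starting memberships
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], [1.0, 0.1], (100, 2)),
               rng.normal([3, 3], [0.1, 1.0], (100, 2))])
U = np.full((2, 200), 0.5)
centers = np.array([[0.0, 0.0], [3.0, 3.0]])
print(gk_distances(X, centers, U).shape)                  # (2, 200)
```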
Abstract:
The design of optical systems, considered an art by some and a science by others, has been practiced for centuries. Imaging optical systems have been evolving since Ancient Egyptian times, as have the associated design techniques. Nevertheless, the most important developments in design techniques have taken place over the past 50 years, in part due to advances in manufacturing techniques and the development of increasingly powerful computers, which have enabled the fast and efficient calculation and analysis of ray tracing through optical systems. This has led the design of optical systems to evolve from designs developed solely from paraxial optics to modern designs created using different multiparametric optimization techniques. The main problem the designer faces is that the different optimization techniques require an initial design, which can constrain the possible solutions. In other words, if the starting point is far from the global minimum, i.e., the optimal design for the set conditions, the final design may be a local minimum close to the starting point and far from the global minimum. This type of problem has led to the development of global optimization methods that are increasingly less sensitive to the starting point of the optimization process. Even though it is possible to obtain good designs with these techniques, many attempts are needed to reach the desired solution, and the whole process is surrounded by uncertainty, since there is no guarantee that the optimal solution will be reached. The Simultaneous Multiple Surfaces (SMS) method, which originated as a tool for calculating anidolic (nonimaging) concentrators, has also proved useful for the design of image-forming optical systems, although to date it has only been used for specific imaging designs. This thesis aims to present the SMS method as a technique that can be used in general for the design of any optical system, whether with a fixed focal length or afocal with a defined magnification, and as a tool that can be industrialized to help designers tackle the design of complex optical systems in a simple way. The thesis is structured in five chapters. Chapter 1 establishes the fundamentals, presenting the basic concepts the reader needs, even without an extensive background in image-forming optics, to understand the approaches and results presented in the remaining chapters. Chapter 2 addresses the problem of optimizing optical systems; here the SMS method is presented as an ideal tool for obtaining a starting point for the optimization process, and the importance of the starting point for the final solution found is demonstrated through a worked example. This chapter also introduces various techniques for the interpolation and optimization of the surfaces obtained through the application of the SMS method. Although only the SMS2D method is used in this thesis, a method is also presented for the interpolation and optimization of the point clouds obtained from SMS3D, based on radial basis functions (RBF). Chapter 3 presents the design, manufacturing, and measurement of a catadioptric panoramic lens designed to work in the long-wavelength infrared (LWIR) band (8-12 microns) for perimeter surveillance applications. The lens is designed using the SMS method for three input wavefronts and four surfaces. The power of the design method is revealed by the ease with which this complex system is designed, and the images presented show how the prototype perfectly fulfills its purpose. Chapter 4 addresses the problem of designing ultra-compact optical systems. The concept of multichannel systems, i.e., optical systems composed of a series of channels that work in parallel, is introduced; such systems are particularly suitable for the design of afocal systems. Design strategies for both monochromatic and polychromatic multichannel systems are presented, and a telescope with a magnification of six and a half, designed with the novel technique presented in this chapter, is shown. Chapter 5 presents a generalization of the SMS method for meridian rays, together with the algorithm to be used for the design of any fixed-focal-length optical system. The so-called phase 1 optimization is inserted into the algorithm so that, by changing the initial conditions of the SMS design, the skew rays behave similarly even though the design is carried out for meridian rays. To test the power of the developed algorithm, a set of designs with different numbers of surfaces is presented. The stability and strength of the algorithm become apparent with the first six-surface system designed by the SMS method.
Abstract:
The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters that is uniparametric in the classical case, when the length of the filter is 5. We pay attention to the Gaussian and fractal behaviour of these basis functions (or filters), and we determine the Gaussian and fractal ranges as a function of the single parameter. These fractal filters lose less energy at every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the threshold values of the Otsu algorithm, and conclude that our algorithm produces reliable results.
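A minimal sketch of a Laplacian pyramid built from the classical one-parameter 5-tap kernel family (our own toy example; the parameter value, image and number of levels are illustrative, and the thresholding/porosity step is not shown):

```python
import numpy as np
from scipy.ndimage import convolve

def generating_kernel(a=0.4):
    # classical one-parameter 5-tap generating kernel, made separable as a 5x5 filter
    w = np.array([0.25 - a / 2.0, 0.25, a, 0.25, 0.25 - a / 2.0])
    return np.outer(w, w)

def laplacian_pyramid(img, levels=4, a=0.4):
    k = generating_kernel(a)
    pyramid = []
    current = img.astype(float)
    for _ in range(levels):
        blurred = convolve(current, k, mode='nearest')
        reduced = blurred[::2, ::2]                 # REDUCE: blur, then subsample
        expanded = np.zeros_like(current)           # EXPAND: upsample, re-blur (x4)
        expanded[::2, ::2] = reduced
        expanded = 4.0 * convolve(expanded, k, mode='nearest')
        pyramid.append(current - expanded)          # band-pass residual at this scale
        current = reduced
    pyramid.append(current)                         # low-pass top of the pyramid
    return pyramid

# toy usage: pyramid of a random "soil image"; a threshold could then be chosen per band
img = np.random.default_rng(0).random((64, 64))
for level, band in enumerate(laplacian_pyramid(img)):
    print(level, band.shape)
```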
Abstract:
Optical communication receivers using wavelet signal processing are proposed in this paper for dense wavelength-division multiplexed (DWDM) systems and modal-division multiplexed (MDM) transmissions. The optical signal-to-noise ratio (OSNR) required to demodulate the polarization-division multiplexed quadrature phase shift keying (PDM-QPSK) modulation format is relaxed by the wavelet denoising process. This procedure improves the bit error rate (BER) performance and increases the transmission distance in DWDM systems. Additionally, the wavelet-based design relies on signal decomposition using time-limited basis functions, which reduces the computational cost of the digital signal processing (DSP) module. Regarding MDM systems, a new scheme for encoding data bits based on wavelets is presented to minimize mode coupling in few-mode (FMF) and multimode fibers (MMF). The Shifted Prolate Wave Spheroidal (SPWS) functions are proposed to reduce the modal interference.
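A rough, generic sketch of wavelet denoising by soft thresholding (our own toy baseband example; the wavelet, threshold rule and signal model are assumptions and do not reproduce the receiver DSP of the paper):

```python
import numpy as np
import pywt

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=128)          # stand-in for one binary tributary
waveform = np.repeat(symbols, 8)                     # 8 samples per symbol
received = waveform + 0.5 * rng.normal(size=waveform.size)   # additive noise channel

# decompose onto time-limited wavelet basis functions, soft-threshold the details
coeffs = pywt.wavedec(received, 'sym8', level=4)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # noise estimate from finest details
thr = sigma * np.sqrt(2.0 * np.log(received.size))   # universal threshold
den_coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode='soft') for c in coeffs[1:]]
denoised = pywt.waverec(den_coeffs, 'sym8')[: received.size]

# symbol decisions: average the 8 samples of each symbol, then take the sign
decide = lambda v: np.sign(v.reshape(-1, 8).mean(axis=1))
print("BER before:", np.mean(decide(received) != symbols))
print("BER after :", np.mean(decide(denoised) != symbols))
```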
Abstract:
This Doctoral Thesis deals with the introduction of the Bernstein Partition of Unity into the Galerkin weak form to solve boundary value problems in the field of structural analysis. The family of Bernstein basis functions constitutes a spanning set of the space of polynomial functions that allows the construction of numerical approximations that do not require the presence of a mesh: the shape functions, which are globally supported, are determined only by the selected approximation order and the parametrization or mapping of the domain, the nodal positions being implicitly defined. The exposition of the formulation is preceded by a review of the literature which begins with the Finite Element Method and covers the main techniques for solving partial differential equations without the use of a mesh, including the so-called meshless methods and the spectral methods. In this context, the Bernstein-Galerkin approximation is subjected to validation on classical one- and two-dimensional benchmarks of structural mechanics. Implementation aspects such as consistency, reproduction capability, the non-interpolating nature at boundaries, h-p refinement strategies, and coupling with other numerical approximations are studied. An important part of the investigation focuses on strategies for computational optimization, mainly regarding the reduction of the CPU cost associated with the generation and handling of full matrices.
Finally, application to two reference cases of aeronautic structures is performed: the stress analysis in an anisotropic angle part and the evaluation of stress intensity factors of Fracture Mechanics by means of a coupled Bernstein Partition of Unity - finite element mesh model.
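A minimal one-dimensional sketch of a Bernstein-Galerkin approximation (a model Poisson problem of our own choosing, not one of the structural benchmarks of the thesis), exploiting the fact that the interior Bernstein functions vanish at both ends of the interval:

```python
import numpy as np
from math import comb
from scipy.integrate import quad

# Solve -u'' = f on (0,1) with u(0) = u(1) = 0 using the globally supported
# Bernstein basis of order n. Since B_{i,n}(0) = 0 and B_{i,n}(1) = 0 for 0 < i < n,
# keeping only the interior basis functions enforces the Dirichlet conditions exactly.
n = 8

def B(i, x):                                           # Bernstein basis polynomial B_{i,n}
    return comb(n, i) * x ** i * (1.0 - x) ** (n - i)

def dB(i, x):                                          # derivative via the degree-(n-1) basis
    def Bm(j, x):
        if j < 0 or j > n - 1:
            return 0.0
        return comb(n - 1, j) * x ** j * (1.0 - x) ** (n - 1 - j)
    return n * (Bm(i - 1, x) - Bm(i, x))

f = lambda x: np.pi ** 2 * np.sin(np.pi * x)           # manufactured load, exact u = sin(pi x)

idx = range(1, n)                                      # interior Bernstein functions only
K = np.array([[quad(lambda x: dB(i, x) * dB(j, x), 0, 1)[0] for j in idx] for i in idx])
F = np.array([quad(lambda x: B(i, x) * f(x), 0, 1)[0] for i in idx])
coeffs = np.linalg.solve(K, F)                         # Galerkin system K c = F

u_h = lambda x: sum(c * B(i, x) for c, i in zip(coeffs, idx))
print(u_h(0.5), np.sin(np.pi * 0.5))                   # should agree to several digits
```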
Abstract:
Mixtures of polynomials (MoPs) are a non-parametric density estimation technique especially designed for hybrid Bayesian networks with continuous and discrete variables. Algorithms to learn one- and multi-dimensional (marginal) MoPs from data have recently been proposed. In this paper we introduce two methods for learning MoP approximations of conditional densities from data. Both approaches are based on learning MoP approximations of the joint density and the marginal density of the conditioning variables, but they differ as to how the MoP approximation of the quotient of the two densities is found. We illustrate and study the methods using data sampled from known parametric distributions, and we demonstrate their applicability by learning models based on real neuroscience data. Finally, we compare the performance of the proposed methods with an approach for learning mixtures of truncated basis functions (MoTBFs). The empirical results show that the proposed methods generally yield models that are comparable to or significantly better than those found using the MoTBF-based method.
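A rough sketch of the "conditional as a quotient" idea described above (plain least-squares Legendre fits to histogram estimates on our own toy data; this is not either of the learning algorithms proposed in the paper):

```python
import numpy as np
from numpy.polynomial import legendre as leg

rng = np.random.default_rng(0)
y = rng.uniform(-1, 1, 50000)
x = np.clip(0.5 * y + 0.3 * rng.normal(size=y.size), -1, 1)   # toy dependent pair on [-1,1]^2

# histogram estimates of the joint density f(x, y) and the marginal f(y)
H, xe, ye = np.histogram2d(x, y, bins=25, range=[(-1, 1), (-1, 1)], density=True)
hx = 0.5 * (xe[:-1] + xe[1:])
hy = 0.5 * (ye[:-1] + ye[1:])
Hm, me = np.histogram(y, bins=25, range=(-1, 1), density=True)
my = 0.5 * (me[:-1] + me[1:])

# polynomial (MoP-like) approximations: tensor-product Legendre fit for the joint,
# one-dimensional Legendre fit for the marginal of the conditioning variable
XX, YY = np.meshgrid(hx, hy, indexing='ij')
V = leg.legvander2d(XX.ravel(), YY.ravel(), [6, 6])
c_joint, *_ = np.linalg.lstsq(V, H.ravel(), rcond=None)
c_joint = c_joint.reshape(7, 7)
c_marg = leg.legfit(my, Hm, deg=6)

def conditional(x0, y0):
    # conditional density as the quotient f(x | y) = f(x, y) / f(y)
    return leg.legval2d(x0, y0, c_joint) / leg.legval(y0, c_marg)

# evaluate f(x | y = 0.5) on a grid; it should integrate to roughly one
grid = np.linspace(-1, 1, 201)
fx = np.maximum(conditional(grid, 0.5), 0.0)
print(np.trapz(fx, grid))
```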
Abstract:
The present paper deals with the calculation of the grounding resistance of an electrode composed of thin wires, which we consider here to be perfect electric conductors (PEC), i.e., with zero internal resistance, when buried in a soil of uniform resistivity. The potential profile at the ground surface is also calculated when the electrode is energized with a low-frequency current. The classic treatment using leakage currents, called the Charge Simulation Method (CSM), is compared with one using a set of steady currents along the axes of the wires, here called the Longitudinal Currents Method (LCM), to solve the Maxwell equations. The method of moments is applied to obtain a numerical approximation of the solution using rectangular basis functions. Both methods are applied to two types of electrodes, and the results are also compared with those obtained using a third approach, the Average Potential Method (APM), described later in the text. From the analysis performed, we can estimate the error in the determination of the grounding resistance as a function of the number of segments into which the electrodes are divided.
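A minimal sketch of a method-of-moments calculation with rectangular (pulse) basis functions for a single vertical rod in uniform soil (our own simplified configuration with point matching and image theory; it is not the CSM/LCM/APM implementations compared in the paper):

```python
import numpy as np

rho = 100.0   # soil resistivity [ohm*m]
L = 3.0       # rod length [m], driven vertically from the surface
a = 0.01      # rod radius [m]
N = 40        # number of segments (pulse basis functions)

z = np.linspace(0.0, L, N + 1)          # segment boundaries (depth below the surface)
zc = 0.5 * (z[:-1] + z[1:])             # matching points at the segment centres

def seg_potential(z_obs, z1, z2, r):
    """Potential at radial distance r and depth z_obs due to 1 A of leakage current
    spread uniformly over a segment between depths z1 and z2 in soil of resistivity rho."""
    A = (z2 - z_obs) + np.sqrt((z2 - z_obs) ** 2 + r ** 2)
    B = (z1 - z_obs) + np.sqrt((z1 - z_obs) ** 2 + r ** 2)
    return rho / (4.0 * np.pi * (z2 - z1)) * np.log(A / B)

# moment matrix: potential on the conductor surface due to each segment plus its
# image above the soil surface (the air-soil interface is a perfect reflector)
Z = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        direct = seg_potential(zc[i], z[j], z[j + 1], a)
        image = seg_potential(zc[i], -z[j + 1], -z[j], a)
        Z[i, j] = direct + image

V0 = 1.0                                  # energise the electrode to 1 V (low frequency)
I = np.linalg.solve(Z, np.full(N, V0))    # leakage current carried by each segment
R = V0 / I.sum()
print(f"grounding resistance ~ {R:.1f} ohm")
# rough cross-check with the classical rod formula R = rho/(2*pi*L)*(ln(4L/a) - 1)
print(rho / (2 * np.pi * L) * (np.log(4 * L / a) - 1.0))
```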