974 results for High-Order Accurate Scheme
Abstract:
This paper is concerned with the numerical solution of time-dependent two-dimensional incompressible flows. By using the primitive variables of velocity and pressure, the Navier-Stokes and mass conservation equations are solved by a semi-implicit finite difference projection method. A new bounded higher-order upwind convection scheme is employed to deal with the non-linear (advective) terms. The procedure is an adaptation of the GENSMAC (J. Comput. Phys. 1994; 110:171-186) methodology for calculating confined and free surface fluid flows at both low and high Reynolds numbers. The calculations were performed using the 2D version of the Freeflow simulation system (J. Comp. Visual. Science 2000; 2:199-210). In order to demonstrate the capabilities of the numerical method, various test cases are presented: the fully developed flow in a channel, the flow over a backward-facing step, the die-swell problem, the broken-dam flow, and a jet impinging onto a flat plate. The numerical results compare favourably with experimental data and analytical solutions. Copyright (c) 2006 John Wiley & Sons, Ltd.
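The projection idea sketched in this abstract — predict a provisional velocity from advection and diffusion, then project it onto the divergence-free space via a pressure Poisson solve — can be illustrated in a few lines. The toy below uses a periodic grid and an FFT Poisson solver with the modified wavenumbers of the central-difference operator; it is an illustrative simplification with made-up parameters, not the paper's semi-implicit GENSMAC free-surface scheme.

```python
# One explicit Chorin-type projection step for 2D incompressible flow
# on a periodic grid (illustrative toy, not the paper's method).
import numpy as np

n, nu, dt = 64, 0.01, 1e-3
h = 2 * np.pi / n
x = np.arange(n) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(X) * np.cos(Y)           # Taylor-Green initial velocity field
v = -np.cos(X) * np.sin(Y)

def ddx(f): return (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2 * h)
def ddy(f): return (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2 * h)
def lap(f):
    return (np.roll(f, -1, 0) + np.roll(f, 1, 0) +
            np.roll(f, -1, 1) + np.roll(f, 1, 1) - 4 * f) / h**2

# 1) predictor: advance advection + diffusion, ignoring pressure
us = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
vs = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))

# 2) pressure Poisson equation lap(p) = div(u*)/dt, solved by FFT using the
#    modified wavenumbers of the central-difference operator
k = 2 * np.pi * np.fft.fftfreq(n, d=h)
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = (np.sin(KX * h) / h) ** 2 + (np.sin(KY * h) / h) ** 2
k2 = np.where(k2 < 1e-10, 1.0, k2)  # mean/Nyquist modes carry no divergence
rhs = (ddx(us) + ddy(vs)) / dt
p = np.real(np.fft.ifft2(np.fft.fft2(rhs) / (-k2)))

# 3) correction: subtract the pressure gradient -> divergence-free velocity
u, v = us - dt * ddx(p), vs - dt * ddy(p)
div = ddx(u) + ddy(v)
print(np.abs(div).max())            # ~ machine precision
```

Because the Poisson solve uses the exact eigenvalues of the difference operators, the projected field is discretely divergence-free up to roundoff.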
Abstract:
The photonic modes of Thue-Morse and Fibonacci lattices with generating layers A and B, of positive and negative indices of refraction, are calculated by the transfer-matrix technique. For Thue-Morse lattices, as well as for periodic lattices with an AB unit cell, the constructive interference of reflected waves corresponding to the zeroth-order gap takes place when the optical paths in the single layers A and B are commensurate. In contrast, for Fibonacci lattices of high order, the same phenomenon occurs when the ratio of those optical paths is close to the golden ratio. In the long-wavelength limit, analytical expressions defining the edge frequencies of the zeroth-order gap are obtained for both quasi-periodic lattices. Furthermore, analytical expressions that define the gap edges around the zeroth-order gap are shown to correspond to the
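The transfer-matrix technique named above multiplies one 2x2 characteristic matrix per layer. A minimal sketch for a Fibonacci stack at normal incidence follows; the layer indices and thicknesses are illustrative assumptions with positive indices only (the paper also treats negative-index layers, which enter the same formalism with n < 0).

```python
# Transmittance of a Fibonacci multilayer via 2x2 transfer matrices
# (normal incidence, non-magnetic layers; parameters are illustrative).
import numpy as np

def layer_matrix(n, d, lam):
    """Characteristic 2x2 matrix of one homogeneous layer at wavelength lam."""
    delta = 2 * np.pi * n * d / lam      # phase thickness
    eta = n                              # optical admittance at normal incidence
    return np.array([[np.cos(delta), 1j * np.sin(delta) / eta],
                     [1j * eta * np.sin(delta), np.cos(delta)]])

def fibonacci_word(order):
    """S_1 = 'A', S_2 = 'AB', S_k = S_{k-1} + S_{k-2}."""
    a, b = "A", "AB"
    for _ in range(order - 2):
        a, b = b, b + a
    return b if order >= 2 else a

def transmittance(word, lam, nA=1.5, dA=100.0, nB=2.3, dB=65.0):
    M = np.eye(2, dtype=complex)
    for c in word:
        M = M @ (layer_matrix(nA, dA, lam) if c == "A" else layer_matrix(nB, dB, lam))
    n0 = ns = 1.0                        # vacuum on both sides
    (m11, m12), (m21, m22) = M
    t = 2 * n0 / (n0 * m11 + n0 * ns * m12 + m21 + ns * m22)
    return abs(t) ** 2 * ns / n0

word = fibonacci_word(8)                 # 34 layers
print(len(word), transmittance(word, lam=550.0))
```

Sweeping `lam` and looking for transmittance dips is how the gaps (including the zeroth-order gap discussed above) are located numerically.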
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Herbs and spices have long been used to improve the flavour of food without being considered nutritionally significant ingredients. However, the bioactive phenolic content of these plant-based products is currently attracting interest. In the present work, liquid chromatography coupled to high-resolution/accurate-mass LTQ-Orbitrap mass spectrometry was applied for the comprehensive identification of phenolic constituents of six of the most widely used culinary herbs (rosemary, thyme, oregano and bay) and spices (cinnamon and cumin). In this way, up to 52 compounds were identified in these culinary ingredients, some of them, as far as we know, for the first time. In order to establish the phenolic profiles of the different herbs and spices, accurate quantification of the major phenolics was performed by multiple reaction monitoring in a triple quadrupole mass spectrometer. Multivariate statistical treatment of the results allowed the assessment of distinctive features among the studied herbs and spices. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
During the last 30 years, atomic force microscopy has become the most powerful tool for surface probing at the atomic scale. The tapping-mode atomic force microscope is used to generate high-quality, accurate images of a sample's surface. However, in this mode of operation the microcantilever frequently exhibits chaotic motion due to the nonlinear character of the tip-sample force interactions, degrading image quality. This kind of irregular motion must be avoided by the control system. In this work, the tip-sample interaction is modelled using the Lennard-Jones potential and a two-term Galerkin approximation. Additionally, the State-Dependent Riccati Equation and time-delayed feedback control techniques are used to force the tapping-mode atomic force microscope system onto a periodic orbit, preventing chaotic motion of the microcantilever.
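As a rough illustration of the time-delayed feedback idea (and not the paper's cantilever model or its SDRE controller), the sketch below applies Pyragas-type delayed feedback to a generic forced Duffing oscillator, a common stand-in for a tapping-mode cantilever. Model and parameters are assumptions for demonstration only.

```python
# Time-delayed (Pyragas) feedback on a forced Duffing-type oscillator:
# u(t) = K * (x(t - tau) - x(t)) vanishes on a tau-periodic orbit, so the
# control is non-invasive once the target orbit is stabilized.
import numpy as np

dt, T = 1e-3, 60.0
tau, K = 2 * np.pi, 0.4            # delay = forcing period, feedback gain
steps = int(T / dt)
lag = int(tau / dt)

x, v = 1.0, 0.0
hist = np.zeros(steps + 1)          # stored positions for the delayed term
hist[0] = x
for i in range(steps):
    t = i * dt
    xd = hist[i - lag] if i >= lag else x    # x(t - tau); plain x until history fills
    u = K * (xd - x)                         # delayed-feedback control force
    a = -0.05 * v + x - x**3 + 0.3 * np.cos(t) + u   # soft double-well Duffing
    x, v = x + dt * v, v + dt * a            # explicit Euler step
    hist[i + 1] = x

print(abs(x))   # trajectory stays bounded in the double well
```

With `K = 0` the same oscillator can wander chaotically between the wells; the delayed term penalizes deviation from the periodic motion without prescribing the orbit itself.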
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
ABSTRACT: One way to produce high order in a block copolymer thin film is by solution casting a thin film and slowly evaporating the solvent in a sealed vessel. Such a solvent-annealing process is a versatile method to produce a highly ordered thin film of a block copolymer. However, the ordered structure of the film degrades over time when stored under ambient conditions. Remarkably, this aging process occurs in mesoscale thin films of a polystyrene-polyisoprene triblock copolymer, where the monolayer of vitrified 15 nm diameter polystyrene cylinders sinks in a 20 nm thick film at 22 °C. The transformation is studied by atomic force microscopy (AFM). We describe the phenomena, characterize the aging process, and propose a semiquantitative model to explain the observations. Residual solvent effects are important but are not the primary driving force for the aging process. The study may point to an effective avenue for improving order, making the morphology robust, and possibly making the solvent-annealing process more effective.
Abstract:
Plasma-based X-ray lasers are attractive diagnostic instruments for a variety of potential applications, for example in spectroscopy, microscopy and EUV lithography, owing to their short wavelength and narrow spectral bandwidth. Nevertheless, X-ray lasers are not yet widely used, mainly because of their low pulse energy and, for some applications, insufficient beam quality. Significant progress has been made in this respect in recent years. The simultaneous development of pump laser systems and pumping mechanisms has made it possible to operate compact X-ray laser sources at repetition rates of up to 100 Hz. Intensive theoretical and experimental studies have been carried out to achieve higher pulse energies, higher beam quality and full spatial coherence at the same time. In this context, the present work developed an experimental setup for combining two X-ray laser targets, the so-called butterfly configuration. The first X-ray laser serves as a seed for the second X-ray laser medium, which acts as an amplifier (injection seeding). In this way, detrimental effects that arise during the generation of the X-ray laser through the amplification of spontaneous emission are avoided. Using the double-pulse grazing-incidence pumping scheme, also developed at GSI, the concept presented here makes it possible for the first time to pump both X-ray laser targets efficiently and with travelling-wave excitation. In a first experimental implementation, amplified silver X-ray laser pulses of 1 µJ at a wavelength of 13.9 nm were generated. From the acquired data, in addition to demonstrating amplification, the lifetime of the population inversion was determined to be 3 ps. In a follow-up experiment, the properties of a molybdenum X-ray laser plasma were investigated in more detail.
In addition to the pumping scheme previously used at GSI, a further technique based on an additional pump pulse was employed during this beam time. In both schemes, besides demonstrating amplification, the amplifier medium was characterized temporally and spatially. X-ray laser pulses of up to 240 nJ at a wavelength of 18.9 nm were detected. The brilliance of the amplified pulses was about two orders of magnitude higher than that of the original seed and more than one order of magnitude higher than the brilliance of an X-ray laser generated from a single target. The concept developed and experimentally verified in this work thus has the potential to produce extremely brilliant plasma-based X-ray lasers with full spatial and temporal coherence. The results discussed in this work are an essential contribution to the development of an X-ray laser intended for spectroscopic studies of highly charged heavy ions. These experiments are planned at the experimental storage ring of GSI and, in the future, also at the High-Energy Storage Ring of the FAIR facility.
Abstract:
In this work, a new dynamical core is developed and integrated into the existing numerical weather prediction system COSMO. Discontinuous Galerkin (DG) methods are used for the spatial discretization and Runge-Kutta methods for the time discretization. This makes a high-order method easy to realize and provides local conservation properties for the prognostic variables. The dynamical core developed here uses terrain-following coordinates in conservation form for modelling orography and couples the DG method with a Kessler scheme for warm rain. The fall velocity of the rain is not discretized implicitly within the Kessler scheme, as is usual, but explicitly in the dynamical core. As a result, the time steps of the parameterization for the phase changes of water and of the dynamics are fully decoupled, so that very large time steps can be used for the parameterization. The coupling is realized both for operator splitting and for process splitting.
The convergence and global conservation properties of the newly developed dynamical core are validated with idealized test cases. Mass is conserved globally up to machine precision. The orography modelling is validated by means of flow over mountains. The combination of DG methods and terrain-following coordinates used here allows steeper mountains to be treated than is possible with the finite-difference-based dynamical core of COSMO. It is shown when the full tensor-product basis and when the minimal basis is advantageous. The influence of the order of the method, of the parameterization time step and of the splitting strategy on the simulation result is investigated. Finally, it is shown that for equal time steps the DG methods are competitive in runtime with finite-difference methods owing to their better scalability.
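The split coupling of dynamics and parameterization discussed above can be illustrated on a linear toy problem: Strang (operator) splitting advances the "dynamics" and the "parameterization" with separate sub-steps yet retains second-order accuracy. The matrices below are illustrative assumptions, not COSMO operators.

```python
# Strang splitting for du/dt = (A + B) u versus the unsplit exact solution.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])   # "dynamics": a rotation
B = np.array([[-0.1, 0.0], [0.0, -0.2]])  # "parameterization": damping
u0 = np.array([1.0, 0.0])
T, n = 1.0, 100
dt = T / n

exact = expm((A + B) * T) @ u0            # unsplit reference solution

u = u0.copy()
eA, eB = expm(A * dt / 2), expm(B * dt)   # Strang: half A, full B, half A
for _ in range(n):
    u = eA @ (eB @ (eA @ u))

err = np.linalg.norm(u - exact)
print(err)    # second-order accurate: error shrinks ~4x when dt is halved
```

The same pattern lets the two sub-problems use different solvers (and, with sub-cycling, different time steps), which is the point of decoupling the parameterization step from the dynamics step.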
Abstract:
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represent, the gridding algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding algorithms using linear and cubic interpolation functions.
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of X-ray CT images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
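The core idea — interpolate the *integrated* data with a shape-preserving Hermitian curve, then difference on the new bin edges — can be sketched as follows. The sketch uses SciPy's PCHIP monotone Hermite interpolant in place of the paper's single-parameter curve, so it approximates the approach rather than reproducing the authors' exact algorithm.

```python
# Conservative rebinning: interpolate the cumulative integral with a
# monotonicity-preserving Hermite curve, then difference on new edges.
import numpy as np
from scipy.interpolate import PchipInterpolator

# source histogram: per-bin values on coarse edges (illustrative data)
src_edges = np.linspace(0.0, 10.0, 11)
src_vals = np.array([0., 1., 5., 9., 10., 9., 5., 1., 0., 0.])

# cumulative integral at the edges (starts at 0, ends at the total mass)
cum = np.concatenate(([0.0], np.cumsum(src_vals * np.diff(src_edges))))

# monotone Hermite interpolation of the cumulative curve: since the data are
# non-negative, cum is nondecreasing and PCHIP keeps it so -> no undershoot
F = PchipInterpolator(src_edges, cum)

# resample onto a finer grid and difference back to per-bin content
new_edges = np.linspace(0.0, 10.0, 41)
new_vals = np.diff(F(new_edges)) / np.diff(new_edges)

print(np.sum(new_vals * np.diff(new_edges)), cum[-1])  # total mass preserved
```

Mass conservation is exact by construction (the differences telescope to F(b) - F(a)), and monotone interpolation of the cumulative curve guarantees non-negative rebinned values, which is exactly the overshoot/undershoot control the abstract describes.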
Abstract:
Patients suffering from cystic fibrosis (CF) show thick secretions, mucus plugging and bronchiectasis in bronchial and alveolar ducts. This results in substantial structural changes of the airway morphology and heterogeneous ventilation. Disease progression and treatment effects are monitored by so-called gas washout tests, where the change in concentration of an inert gas is measured over a single breath or multiple breaths. The test result, based on the profile of the measured concentration, is a marker for the severity of the ventilation inhomogeneity, which is strongly affected by the airway morphology. However, it is hard to localize underlying obstructions to specific parts of the airways, especially if they occur in the lung periphery. In order to support the analysis of lung function tests (e.g. multi-breath washout), we developed a numerical model of the entire airway tree, coupling a lumped-parameter model for the lung ventilation with a 4th-order accurate finite difference model of a 1D advection-diffusion equation for the transport of an inert gas. The boundary conditions for the flow problem comprise the pressure and flow profile at the mouth, which is typically known from clinical washout tests. The natural asymmetry of the lung morphology is approximated by a generic, fractal, asymmetric branching scheme which we applied to the conducting airways. A conducting airway ends when its dimension falls below a predefined limit. A model acinus is then connected to each terminal airway. The morphology of an acinus unit comprises a network of expandable cells. A regional, linear constitutive law describes the pressure-volume relation between the pleural gap and the acinus. The cyclic expansion (breathing) of each acinus unit depends on the resistance of the feeding airway and on the flow resistance and stiffness of the cells themselves.
Special care was taken in the development of a conservative numerical scheme for the gas transport across bifurcations, handling spatially and temporally varying advective and diffusive fluxes over a wide range of scales. Implicit time integration was applied to account for the numerical stiffness resulting from the discretized transport equation. Local or regional modifications of the airway dimension, resistance or tissue stiffness are introduced to mimic the pathological airway restrictions typical for CF. This leads to a more heterogeneous ventilation of the model lung. As a result, the concentration in some distal parts of the lung model remains increased for a longer duration. The inert gas concentration at the mouth towards the end of expiration is composed of gas from regions with very different washout efficiency. This results in a steeper slope of the corresponding part of the washout profile.
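One building block named above — implicit time integration of a 1D advection-diffusion equation with 4th-order central differences — can be sketched on a periodic toy grid. The airway-tree geometry, bifurcation fluxes and clinical boundary conditions of the actual model are omitted, and all parameters are assumptions.

```python
# Backward-Euler time stepping of c_t + u c_x = D c_xx with 4th-order
# central differences on a periodic grid (toy version of the transport model).
import numpy as np
from scipy.linalg import circulant, lu_factor, lu_solve

n, Ld, u, D, dt, steps = 200, 1.0, 0.5, 1e-3, 1e-3, 200
h = Ld / n
x = np.arange(n) * h

# 4th-order central-difference weights, wrapped periodically into circulants
c1 = np.zeros(n)
c1[[1, 2, n - 1, n - 2]] = np.array([-8, 1, 8, -1]) / (12 * h)       # d/dx
c2 = np.zeros(n)
c2[[0, 1, 2, n - 1, n - 2]] = np.array([-30, 16, -1, 16, -1]) / (12 * h**2)  # d2/dx2
D1, D2 = circulant(c1), circulant(c2)

# backward Euler: (I + dt*(u*D1 - D*D2)) c_new = c_old; factor once, reuse
A = np.eye(n) + dt * (u * D1 - D * D2)
lu = lu_factor(A)

conc = np.exp(-((x - 0.3) ** 2) / 0.005)     # initial inert-gas bolus
m0 = conc.sum() * h
for _ in range(steps):
    conc = lu_solve(lu, conc)
print(conc.sum() * h, m0)   # total mass is conserved
```

Because the difference operators have zero row and column sums, the implicit update conserves the discrete mass exactly, mirroring the conservative-scheme requirement stated in the abstract.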
Abstract:
The present contribution discusses the development of a PSE-3D instability analysis algorithm, in which a matrix forming and storing approach is followed. As an alternative to the spectral methods typically used in stability calculations, new stable high-order finite-difference-based numerical schemes for spatial discretization [1] are employed. Attention is paid to the issue of efficiency, which is critical for the success of the overall algorithm. To this end, use is made of a parallelizable sparse-matrix linear algebra package which takes advantage of the sparsity offered by the finite-difference scheme and, as expected, is shown to perform substantially more efficiently than spectral collocation methods. The building blocks of the algorithm have been implemented and extensively validated, focusing on classic PSE analysis of instability in the flat-plate boundary layer, temporal and spatial BiGlobal EVP solutions (the latter necessary for the initialization of the PSE-3D), as well as standard PSE in cylindrical coordinates using the nonparallel Batchelor vortex basic flow model, so that comparisons between PSE and PSE-3D are possible; excellent agreement is shown in all aforementioned comparisons. Finally, the linear PSE-3D instability analysis is applied to a fully three-dimensional flow composed of a counter-rotating pair of nonparallel Batchelor vortices.
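The sparsity argument made above is easy to see: a high-order finite-difference derivative operator is banded, whereas a spectral collocation differentiation matrix is dense. A small sketch follows; the stencil and grid are illustrative and boundary closures are omitted for brevity.

```python
# Banded 4th-order first-derivative operator as a sparse matrix.
import numpy as np
import scipy.sparse as sp

n, h = 100, 0.01
# interior stencil (f_{i-2} - 8 f_{i-1} + 8 f_{i+1} - f_{i+2}) / (12 h)
D = sp.diags([1, -8, 8, -1], [-2, -1, 1, 2], shape=(n, n), format="csr") / (12 * h)

x = np.arange(n) * h
f = np.sin(x)
df = D @ f                       # accurate away from the (untreated) boundaries

print(D.nnz, n * n)              # ~4n nonzeros versus n^2 for a dense operator
print(abs(df[50] - np.cos(x[50])))
```

Storing and factorizing O(n) instead of O(n^2) entries is what makes the sparse linear-algebra approach pay off as the problem size grows.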
Abstract:
We propose the use of a highly accurate three-dimensional (3D) fully automatic hp-adaptive finite element method (FEM) for the characterization of rectangular waveguide discontinuities. These discontinuities are either the unavoidable result of mechanical/electrical transitions or are deliberately introduced in order to perform certain electrical functions in modern communication systems. The proposed numerical method combines the geometrical flexibility of finite elements with an accuracy that is often superior to that provided by semi-analytical methods. It supports anisotropic refinements on irregular meshes with hanging nodes, and isoparametric elements. It makes use of hexahedral elements compatible with high-order H(curl) discretizations. The 3D hp-adaptive FEM is applied for the first time to solve a wide range of 3D waveguide discontinuity problems of microwave communication systems, in which exponential convergence of the error is observed.
Abstract:
ABSTRACT: Image segmentation is an important field in computer vision and one of its most active research areas, with applications in image understanding, object detection, face recognition, video surveillance and medical image processing. Image segmentation is a challenging problem in general, but especially in the biological and medical image fields, where the imaging techniques usually produce cluttered and noisy images and near-perfect accuracy is required in many cases. In this thesis we first review and compare some standard techniques widely used for medical image segmentation. These techniques use pixel-wise classifiers and introduce weak pairwise regularization which is insufficient in many cases. We study their difficulties in capturing high-level structural information about the objects to segment. This deficiency leads to many erroneous detections, ragged boundaries, incorrect topological configurations and wrong shapes. To deal with these problems, we propose a new regularization method that learns shape and topological information from training data in a nonparametric way using high-order potentials. High-order potentials are becoming increasingly popular in computer vision. However, the exact representation of a general higher-order potential defined over many variables is computationally infeasible. We use a compact representation of the potentials based on a finite set of patterns learned from training data that, in turn, depends on the observations. Thanks to this representation, high-order potentials can be converted into pairwise potentials with some added auxiliary variables and minimized with tree-reweighted message passing (TRW) and belief propagation (BP) techniques.
Both synthetic and real experiments confirm that our model fixes the errors of weaker approaches. Even with high-level regularization, perfect accuracy is still unattainable, and human editing of the segmentation results is necessary. Manual editing is tedious and cumbersome, and tools that assist the user are greatly appreciated. These tools need to be precise, but also fast enough to be used interactively. Active contours are a good solution: they are good for precise boundary detection and, instead of finding a global solution, they provide a fine tuning of previously existing results. However, they require an implicit representation to deal with topological changes of the contour, and this leads to PDEs that are computationally costly to solve and may present numerical stability issues. We present a morphological approach to contour evolution based on a new curvature morphological operator valid for surfaces of any dimension. We approximate the numerical solution of the contour evolution PDE by the successive application of a set of morphological operators defined on a binary level set. These operators are very fast, do not suffer from numerical stability issues, and do not degrade the level-set function, so there is no need to reinitialize it. Moreover, their implementation is much easier than their PDE counterparts, since they do not require the use of sophisticated numerical algorithms. From a theoretical point of view, we delve into the connections between differential and morphological operators, and introduce novel results in this area. We validate the approach by providing a morphological implementation of the geodesic active contours, the active contours without edges, and the turbopixels. In the experiments conducted, the morphological implementations converge to solutions equivalent to those achieved by traditional numerical solutions, but with significant gains in simplicity, speed, and stability.
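A loose sketch of the morphological principle: curvature-driven smoothing of a binary level set can be approximated by iterating simple morphological operations instead of integrating a PDE. The median filter below is a classical discrete analogue of mean-curvature motion; it stands in for, rather than reproduces, the thesis' curvature operator, and the test image is synthetic.

```python
# Curvature-like smoothing of a binary level set by iterated median filtering.
import numpy as np
from scipy.ndimage import median_filter

# noisy binary mask: a disc with salt-and-pepper speckle
rng = np.random.default_rng(0)
yy, xx = np.mgrid[:64, :64]
mask = ((xx - 32) ** 2 + (yy - 32) ** 2 < 20 ** 2).astype(np.uint8)
noise = (rng.random(mask.shape) < 0.05).astype(np.uint8)
mask_noisy = mask ^ noise

smoothed = mask_noisy
for _ in range(5):
    smoothed = median_filter(smoothed, size=3)   # one "curvature" sweep

# the result stays binary and the speckle is removed
print(np.unique(smoothed), np.abs(smoothed.astype(int) - mask.astype(int)).sum())
```

Each sweep is a pure rank-order operation on a binary image: it is fast, unconditionally stable, and cannot degrade the level-set representation, which is the practical advantage the abstract claims over PDE-based contour evolution.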