967 results for Differential equations, Nonlinear -- Numerical solutions -- Computer programs
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, which are quasi-particles of positive charge) and their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, and by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary, the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. From the user's perspective, the continuous approximation of the normal fluxes is the most important property of this discretization. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. In connection with this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators; at this stage, different estimators are compared. Additionally, a method to efficiently estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
Abstract:
This thesis is concerned with the calculation of virtual Compton scattering (VCS) in manifestly Lorentz-invariant baryon chiral perturbation theory to fourth order in the momentum and quark-mass expansion. In the one-photon-exchange approximation, the VCS process is experimentally accessible in photon electroproduction and has been measured at the MAMI facility in Mainz, at MIT-Bates, and at Jefferson Lab. Through VCS one gains new information on the nucleon structure beyond its static properties, such as charge, magnetic moments, or form factors. The nucleon response to an incident electromagnetic field is parameterized in terms of two spin-independent (scalar) and four spin-dependent (vector) generalized polarizabilities (GPs). In analogy to classical electrodynamics, the two scalar GPs represent the induced electric and magnetic dipole polarizability of a medium. For the vector GPs, a classical interpretation is less straightforward. They are derived from a multipole expansion of the VCS amplitude. This thesis describes the first calculation of all GPs within the framework of manifestly Lorentz-invariant baryon chiral perturbation theory. Because of the comparatively large number of diagrams (100 one-loop diagrams need to be calculated), several computer programs were developed to deal with different aspects of Feynman diagram calculations. One can distinguish between two areas of development: the first concerns the algebraic manipulation of large expressions, and the second deals with numerical instabilities in the calculation of one-loop integrals. In this thesis we describe our approach using Mathematica and FORM for algebraic tasks, and C for the numerical evaluations. We use our results for real Compton scattering to fix the two unknown low-energy constants emerging at fourth order. Furthermore, we present results for the differential cross sections and the generalized polarizabilities of VCS off the proton.
Abstract:
In various imaging problems the task is to use the Cauchy data of solutions to an elliptic boundary value problem to reconstruct the coefficients of the corresponding partial differential equation. Often the examined object has known background properties but is contaminated by inhomogeneities that cause perturbations of the coefficient functions. The factorization method of Kirsch provides a tool for locating such inclusions. In this paper, the factorization technique is studied in the framework of coercive elliptic partial differential equations of divergence type. It has been demonstrated earlier that the factorization algorithm can reconstruct the support of a strictly positive (or negative) definite perturbation of the leading order coefficient or, if that remains unperturbed, the support of a strictly positive (or negative) perturbation of the zeroth order coefficient. In this work we show that these two types of inhomogeneities can, in fact, be located simultaneously. Unlike in earlier articles on the factorization method, our inclusions may have disconnected complements, and we also weaken some other a priori assumptions of the method. Our theoretical findings are complemented by two-dimensional numerical experiments presented in the framework of the diffusion approximation of optical tomography.
Abstract:
Over the years the Differential Quadrature (DQ) method has distinguished itself by its high accuracy, straightforward implementation and general applicability to a variety of problems. Interest in the topic has grown, and several researchers have driven significant developments in recent years. DQ is essentially a generalization of the popular Gaussian Quadrature (GQ) used for the numerical integration of functions. GQ approximates a finite integral as a weighted sum of integrand values at selected points in a problem domain, whereas DQ approximates the derivatives of a smooth function at a point as a weighted sum of function values at selected nodes. A direct application of this elegant methodology is the solution of ordinary and partial differential equations. Furthermore, in recent years the DQ formulation has been generalized in the computation of the weighting coefficients to make the approach more flexible and accurate. The result has been named the Generalized Differential Quadrature (GDQ) method. However, the applicability of GDQ in its original form is still limited: it has been proven to fail for problems with strong material discontinuities as well as problems involving singularities and irregularities. On the other hand, the well-known Finite Element (FE) method overcomes these issues because it subdivides the computational domain into a certain number of elements in which the solution is calculated. Recently, some researchers have been studying a numerical technique that combines the advantages of the GDQ method with those of the FE method. This methodology goes by different names among research groups; it will be referred to here as the Generalized Differential Quadrature Finite Element Method (GDQFEM).
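As an aside for the reader, the DQ/GDQ weighting coefficients for the first derivative can be computed explicitly from Lagrange interpolating polynomials (Shu's formulation). The following is a minimal sketch of that computation; the node distribution and test function are illustrative choices, not taken from the work above.

```python
import numpy as np

def gdq_weights(x):
    """First-derivative weighting matrix of the (G)DQ method on arbitrary
    nodes x, from the explicit Lagrange-interpolation formulae:
    a_ij = M(x_i) / ((x_i - x_j) M(x_j)) for i != j, with
    M(x_i) = prod_{k != i} (x_i - x_k), and a_ii = -sum_{j != i} a_ij."""
    x = np.asarray(x, dtype=float)
    n = x.size
    diff = x[:, None] - x[None, :]
    np.fill_diagonal(diff, 1.0)             # so row products skip k = i
    M = np.prod(diff, axis=1)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        A[i, i] = -A[i].sum()               # rows annihilate constants
    return A

# Example: differentiate sin(x) on Chebyshev-Gauss-Lobatto nodes.
n = 16
x = np.cos(np.pi * np.arange(n) / (n - 1))
A = gdq_weights(x)
print(np.max(np.abs(A @ np.sin(x) - np.cos(x))))   # near machine precision
```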
Abstract:
The field of computational neuroscience develops mathematical models to describe neuronal systems, with the aim of better understanding the nervous system. Historically, the integrate-and-fire model, developed by Lapicque in 1907, was the first model describing a neuron. In 1952 Hodgkin and Huxley [8] described the so-called Hodgkin-Huxley model in the article "A Quantitative Description of Membrane Current and Its Application to Conduction and Excitation in Nerve". The Hodgkin-Huxley model is one of the most successful and widely used biological neuron models. Based on experimental data from the squid giant axon, Hodgkin and Huxley developed their mathematical model as a four-dimensional system of first-order ordinary differential equations. One of these equations characterizes the membrane potential as a process in time, whereas the other three equations describe the opening and closing states of the sodium and potassium ion channels. The rate of change of the membrane potential is proportional to the sum of the ionic currents flowing across the membrane and an externally applied current. For various types of external input the membrane potential behaves differently. This thesis considers the following three types of input: (i) Rinzel and Miller [15] calculated an interval of amplitudes of a constant applied current for which the membrane potential is repetitively spiking; (ii) Aihara, Matsumoto and Ikegaya [1] showed that, depending on the amplitude and the frequency of a periodic applied current, the membrane potential responds periodically; (iii) Izhikevich [12] stated that brief pulses of positive and negative current with different amplitudes and frequencies can lead to a periodic response of the membrane potential. In chapter 1 the Hodgkin-Huxley model is introduced following Izhikevich [12]. Besides the definition of the model, several biological and physiological notes are made, and further concepts are described through examples. Moreover, the numerical methods used to solve the equations of the Hodgkin-Huxley model for the computer simulations in chapters 2 and 3 are presented. In chapter 2 the statements for the three different inputs (i), (ii) and (iii) are verified, and periodic behavior for the inputs (ii) and (iii) is investigated. In chapter 3 the inputs are embedded in an Ornstein-Uhlenbeck process to study the influence of noise on the results of chapter 2.
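For concreteness, the four-dimensional system can be written out and integrated directly. The following minimal sketch uses the standard squid-axon parameters in the shifted convention (resting potential at 0 mV) and a plain forward-Euler integrator with a constant applied current, i.e. input type (i); it is an illustration, not the thesis's actual simulation code.

```python
import numpy as np

# Standard Hodgkin-Huxley gating rate functions (squid giant axon,
# shifted convention: resting potential at V = 0 mV). The removable
# singularities at V = 10 and V = 25 are ignored in this sketch.
def alpha_n(V): return 0.01 * (10 - V) / (np.exp((10 - V) / 10) - 1)
def beta_n(V):  return 0.125 * np.exp(-V / 80)
def alpha_m(V): return 0.1 * (25 - V) / (np.exp((25 - V) / 10) - 1)
def beta_m(V):  return 4.0 * np.exp(-V / 18)
def alpha_h(V): return 0.07 * np.exp(-V / 20)
def beta_h(V):  return 1.0 / (np.exp((30 - V) / 10) + 1)

C_m = 1.0                                   # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3           # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 115.0, -12.0, 10.6         # reversal potentials, mV

def hh_rhs(y, I_ext):
    """Right-hand side of the four-dimensional Hodgkin-Huxley system."""
    V, n, m, h = y
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    return np.array([(I_ext - I_ion) / C_m,
                     alpha_n(V) * (1 - n) - beta_n(V) * n,
                     alpha_m(V) * (1 - m) - beta_m(V) * m,
                     alpha_h(V) * (1 - h) - beta_h(V) * h])

# Forward-Euler integration with a constant applied current (input (i));
# I_ext = 10 uA/cm^2 lies in the repetitive-spiking regime.
dt, T, I_ext = 0.01, 100.0, 10.0            # ms, ms, uA/cm^2
y = np.array([0.0, 0.317, 0.053, 0.596])    # approximate resting state
V_trace = []
for _ in range(int(T / dt)):
    y = y + dt * hh_rhs(y, I_ext)
    V_trace.append(y[0])
```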
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an alternative to the experimental analysis of fluid-structure interaction (FSI). Considerable time, effort and cost could be saved if such systems could be modeled accurately by these numerical solutions. These advantages are even more obvious for huge structures like bridges, high-rise buildings or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) to solve the incompressible Navier-Stokes equations for such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time-integration schemes, allowing us to tackle each problem as a separate module. The current algorithm for the KLE uses an unstructured quadrilateral mesh, formed by dividing each triangle of an unstructured triangular mesh into three quadrilaterals for spatial discretization. This research deals with determining a suitable measure of mesh quality based on the physics of the problems being tackled, followed by exploring methods to improve the quality of the quadrilateral elements obtained from the triangles and thereby the overall mesh quality. A series of numerical experiments was designed and conducted for this purpose, and the results were tested on different geometries with varying degrees of mesh density.
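One widely used quality measure for quadrilateral elements, plausibly relevant to a study like the one above though not necessarily the physics-based measure the thesis adopts, is the minimum scaled Jacobian evaluated at the element corners. A minimal sketch for planar quads:

```python
import numpy as np

def scaled_jacobian(quad):
    """Minimum scaled Jacobian of a planar quadrilateral, given as four
    corner coordinates in counter-clockwise order. Equals 1 when every
    corner angle is 90 degrees, approaches 0 for sliver-like corners,
    and goes negative for non-convex (inverted) elements."""
    q = np.asarray(quad, dtype=float)
    worst = np.inf
    for k in range(4):
        e1 = q[(k + 1) % 4] - q[k]             # edge leaving corner k
        e2 = q[(k - 1) % 4] - q[k]             # edge arriving at corner k
        cross = e1[0] * e2[1] - e1[1] * e2[0]  # signed corner area
        worst = min(worst, cross / (np.linalg.norm(e1) * np.linalg.norm(e2)))
    return worst

print(scaled_jacobian([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0, unit square
print(scaled_jacobian([(0, 0), (1, 0), (2, 1), (0, 1)]))  # ~0.71, skewed
```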
Abstract:
Growth codes are a subclass of Rateless codes that have found interesting applications in data dissemination problems. Compared to other Rateless and conventional channel codes, Growth codes show improved intermediate performance, which is particularly useful in applications where partial data has some utility. In this paper, we investigate the asymptotic performance of Growth codes using the Wormald method, which was originally proposed for studying the Peeling Decoder of LDPC and LDGM codes. In contrast to previous works, the Wormald differential equations are set from the nodes' perspective, which enables a numerical solution for the expected asymptotic decoding performance of Growth codes. Our framework is appropriate for any class of Rateless codes that does not include a precoding step. We further study the performance of Growth codes with moderate and large codeblocks through simulations, and we use the generalized logistic function to model the decoding probability. We then exploit the decoding probability model in an illustrative application of Growth codes to error-resilient video transmission. The video transmission problem is cast as a joint source and channel rate allocation problem that is shown to be convex with respect to the channel rate. This illustrative application highlights the main advantage of Growth codes, namely improved performance in the intermediate loss region.
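The generalized logistic (Richards) curve mentioned above has the closed form f(x) = A + (K - A) / (1 + Q e^{-B(x-M)})^{1/nu}. The sketch below evaluates it as a model of decoding probability versus received overhead; all parameter values are illustrative, not the fitted values from the paper.

```python
import numpy as np

def generalized_logistic(x, A=0.0, K=1.0, B=12.0, Q=1.0, M=1.0, nu=1.0):
    """Richards (generalized logistic) curve
    f(x) = A + (K - A) / (1 + Q exp(-B (x - M)))^(1/nu),
    here used as a parametric model of decoding probability versus the
    number of received symbols per source symbol. All parameter values
    are illustrative defaults, not the fits reported in the paper."""
    x = np.asarray(x, dtype=float)
    return A + (K - A) / (1.0 + Q * np.exp(-B * (x - M))) ** (1.0 / nu)

overhead = np.linspace(0.0, 2.0, 9)
print(generalized_logistic(overhead))   # rises from ~0 toward 1 around x = M
```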
Abstract:
The traditional Newton method for solving nonlinear operator equations in Banach spaces is discussed within the context of the continuous Newton method. This setting makes it possible to interpret the Newton method as a discrete dynamical system and thereby to cast it in the framework of an adaptive step size control procedure. In so doing, our goal is to reduce the chaotic behavior of the original method without losing its quadratic convergence property close to the roots. The performance of the modified scheme is illustrated with various examples from algebraic and differential equations.
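To illustrate the idea, the continuous Newton flow x'(t) = -J(x)^{-1} F(x) can be discretized with an adaptively damped explicit Euler step, recovering classical Newton when the step size equals one. This is a minimal sketch of one common step-size control (simple residual-decrease backtracking); the paper's actual control procedure may differ.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=200):
    """Newton iteration viewed as an adaptive explicit Euler discretization
    of the continuous Newton flow x'(t) = -J(x)^{-1} F(x). The pseudo-time
    step h in (0, 1] is halved until the residual norm decreases enough;
    h = 1 recovers the classical Newton step and its quadratic convergence
    near a root."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        d = np.linalg.solve(J(x), -r)          # Newton direction
        h = 1.0
        while h > 1e-8:
            x_new = x + h * d
            if np.linalg.norm(F(x_new)) < (1.0 - 0.5 * h) * np.linalg.norm(r):
                break                          # sufficient residual decrease
            h *= 0.5                           # damp the step
        x = x_new
    return x

# Intersection of the unit circle with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]], [1.0, -1.0]])
print(damped_newton(F, J, np.array([2.0, 0.5])))   # ~ [0.7071, 0.7071]
```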
Abstract:
During time-resolved optical stimulation (TR-OSL) experiments, one uses short light pulses to separate the stimulation and emission of luminescence in time. Experimental TR-OSL results show that the luminescence lifetime in quartz of sedimentary origin is independent of annealing temperature below 500 °C, but decreases monotonically thereafter. These results have previously been interpreted empirically on the basis of the existence of two separate luminescence centers LH and LL in quartz, each with its own distinct luminescence lifetime. Additional experimental evidence also supports the presence of a non-luminescent hole reservoir R, which plays a critical role in the predose effect in this material. This paper extends a recently published analytical model for thermal quenching in quartz to include the two luminescence centers LH and LL, as well as the hole reservoir R. The new extended model involves localized electronic transitions between energy states within the two luminescence centers, and is described by a system of differential equations based on the Mott–Seitz mechanism of thermal quenching. It is shown that, by using simplifying physical assumptions, one can obtain analytical solutions for the intensity of the light during a TR-OSL experiment carried out with previously annealed samples. These analytical expressions are found to be in good agreement with the numerical solutions of the equations. The results of the model are shown to be in quantitative agreement with published experimental data for commercially available quartz samples. Specifically, the model describes the variation of the luminescence lifetimes with (a) annealing temperatures between room temperature and 900 °C, and (b) stimulation temperatures between 20 and 200 °C. This paper also reports new radioluminescence (RL) measurements carried out on the same commercially available quartz samples. Gaussian deconvolution of the RL emission spectra was carried out using a total of seven emission bands between 1.5 and 4.5 eV, and the behavior of these bands was examined as a function of the annealing temperature. An emission band at ∼3.44 eV (360 nm) was found to be strongly enhanced when the annealing temperature was increased to 500 °C, and this band underwent a significant reduction in intensity with further increase in temperature. Furthermore, a new emission band at ∼3.73 eV (330 nm) became apparent for annealing temperatures in the range 600–700 °C. These new experimental results are discussed within the context of the model presented in this paper.
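For reference, in the Mott–Seitz mechanism the luminescence lifetime decreases with stimulation temperature as tau(T) = tau_0 / (1 + C exp(-W / (k_B T))). The sketch below evaluates this expression; the parameter values are merely of the order commonly quoted for quartz and are not the fits obtained in the paper.

```python
import numpy as np

k_B = 8.617e-5                         # Boltzmann constant, eV/K

def ms_lifetime(T_celsius, tau0=42e-6, C=2.8e7, W=0.64):
    """Luminescence lifetime tau(T) = tau0 / (1 + C exp(-W / (k_B T)))
    from the Mott-Seitz model of thermal quenching. tau0, C and W are
    illustrative values of the order quoted for quartz, not fits."""
    T = np.asarray(T_celsius, dtype=float) + 273.15
    return tau0 / (1.0 + C * np.exp(-W / (k_B * T)))

for Tc in (20, 100, 200):
    print(Tc, ms_lifetime(Tc))         # lifetime shortens as T rises
```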
Abstract:
A theoretical and numerical framework to model the foundation of marine offshore structures is presented. The theoretical model is composed of a system of partial differential equations describing the coupling between the seabed solid skeleton and the pore fluids (water, air, oil, …), combined with a system of ordinary differential equations describing the specific constitutive relation of the seabed soil skeleton. Once the theoretical model is described, the finite element procedure used to obtain an approximate solution of the governing equations is outlined. In order to validate the proposed theoretical and numerical framework, the seaward tilt mechanism induced by the action of breaking waves over a vertical breakwater is numerically reproduced. The numerical results are in agreement with the main conclusions drawn from the literature on this failure mechanism.
Abstract:
A linear method is developed for solving the nonlinear differential equations of a lumped-parameter thermal model of a spacecraft moving in a closed orbit. This method, based on perturbation theory, is compared with heuristic linearizations of the same equations. The essential feature of the linear approach is that it provides a decomposition into thermal modes, analogous to the decomposition of mechanical vibrations into normal modes. The stationary periodic solution of the linear equations can be expressed either as an explicit integral or as a Fourier series. The method is applied to a minimal thermal model of a satellite with ten isothermal parts (nodes) and compared with direct numerical integration of the nonlinear equations. The computational complexity of this method is briefly studied for general thermal models of orbiting spacecraft; it is concluded that the method is certainly useful for reduced models and conceptual design, but it can also be more efficient than direct integration of the equations for large models. The results of the Fourier series computations for the ten-node satellite model show that the periodic solution at second perturbative order is sufficiently accurate.
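The Fourier-series route can be illustrated on a generic linearized lumped-parameter model C dT/dt + K T = Q(t) with periodic heat input: each harmonic satisfies (K + i n ω C) T_n = Q_n, and the periodic solution is assembled by an inverse transform. The sketch below uses an invented 3-node toy network as a stand-in for the ten-node satellite model; all matrices and loads are illustrative assumptions.

```python
import numpy as np

# Steady periodic solution of a linearized lumped-parameter thermal model
#   C dT/dt + K T = Q(t),   Q(t + P) = Q(t),
# computed mode by mode: each Fourier harmonic satisfies
#   (K + i n w C) T_n = Q_n.
# The 3-node network is a toy stand-in for the ten-node satellite model;
# K includes coupling to deep space, so it is nonsingular.
P = 6000.0                                   # orbital period, s
w = 2.0 * np.pi / P
C = np.diag([5e3, 8e3, 4e3])                 # heat capacities, J/K
K = np.array([[ 2.0, -1.0, -0.5],            # linearized couplings, W/K
              [-1.0,  2.5, -1.0],
              [-0.5, -1.0,  2.0]])

t = np.linspace(0.0, P, 512, endpoint=False)
Q = np.zeros((3, t.size))
Q[0] = 100.0 * (1.0 + np.cos(w * t))         # eclipse-like input on node 0

Q_hat = np.fft.rfft(Q, axis=1)               # harmonics of the heat input
T_hat = np.zeros(Q_hat.shape, dtype=complex)
for n in range(Q_hat.shape[1]):
    T_hat[:, n] = np.linalg.solve(K + 1j * n * w * C, Q_hat[:, n])
T = np.fft.irfft(T_hat, n=t.size, axis=1)    # periodic nodal temperatures
```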
Abstract:
The objective of this Dissertation has been the achievement of real-time simulations of industrial vehicles modeled as complex multibody systems made up of rigid bodies.
For the development of a simulation program, four main aspects must be considered: the modeling of the multibody system (types of coordinates, joints that are ideal or imposed by means of forces), the formulation used to set up the differential equations of motion (dependent or independent coordinates, global or topological methods, ways to impose the constraint equations), the numerical integration method used to solve these equations in time (explicit or implicit integrators), and the details of the implementation (programming language, mathematical libraries, parallelization techniques). These four stages are interrelated, and all of them are part of this work. They involve the generation of models for a van and a semitrailer truck, the use of three different dynamic formulations, the integration of the differential equations of motion through explicit and implicit methods, the use of BLAS functions and sparse matrix techniques, and the introduction of parallelization to use the different processor cores. The work presented in this Dissertation has been structured in eight chapters, the first of them being the Introduction. In Chapter 2, two different semi-recursive formulations are shown, the first of which is based on a double velocity transformation, yielding the differential equations of motion as a function of the independent relative accelerations. The numerical integration of these equations has been carried out with the explicit fourth-order Runge-Kutta method. The second formulation is based on dependent relative coordinates, imposing the constraints by means of position penalty coefficients and correcting the velocities and accelerations by projection methods. In this second case, the integration of the motion equations has been carried out by means of the implicit HHT integrator (Hilber, Hughes and Taylor), which belongs to the Newmark family of structural integrators. In Chapter 3, the third formulation used in this Dissertation is presented. In this case, the joints between the bodies of the system have been considered as flexible joints, with forces used to impose the joint conditions. This kind of joint precludes working with relative coordinates, so the positions of the system bodies and the equations of motion have been formulated using Cartesian coordinates and Euler parameters. In this global formulation, constraints are introduced through forces (with an approach similar to that of the penalty coefficients), and the numerical integration is stabilized by velocity and acceleration projections. In Chapter 4, a review of the main computer tools and strategies used to increase the efficiency of the implementations of the algorithms is presented. First, some basic considerations to increase the numerical efficiency of the implementations are included. Then the main characteristics of the code analyzers used, as well as the mathematical libraries used to solve linear algebra problems (with both dense and sparse matrices), are described. Finally, the topic of parallelization in current multicore processors is developed in detail, describing the pattern employed and the most important characteristics of the two tools proposed, OpenMP and Intel TBB.
It should be highlighted that the characteristics of multibody systems (small problem size, frequent use of recursion, and intensive repetition of calculations over time with strong dependence on previous results) make the use of parallelization techniques extraordinarily difficult compared with other areas of computational mechanics, such as finite element computation. Based on the concepts mentioned in Chapter 4, Chapter 5 is divided into three sections, one for each formulation proposed in this Dissertation. Each of these sections describes the details of how the different implementations have been carried out for each algorithm and which tools have been used. The first section shows the use of numerical libraries for dense and sparse matrices in the semi-recursive topological formulation based on the double velocity transformation. The second describes the use of parallelization by means of OpenMP and TBB in the semi-recursive formulation with penalty coefficients and projections. Lastly, the use of sparse matrix and parallelization techniques in the global formulation with flexible joints and Euler parameters is described. Chapter 6 presents the results achieved with the formulations and implementations previously described. This chapter starts with a description of the modeling and topology of the two vehicles studied. The first model is a two-axle chassis-cabin or van-like vehicle, belonging to the range of medium-weight transport vehicles. The second is a five-axle vehicle of the semitrailer truck type, belonging to the heavy industrial vehicle category. In this chapter, a comparative study of the simulations of these vehicles with each of the formulations used is carried out, and the effects of the improvements achieved with the different strategies proposed in this Dissertation are presented quantitatively. In order to draw conclusions more easily and to evaluate the introduced improvements more objectively, all the results of this chapter have been obtained on the same computer, which was the top of the Intel Xeon range in 2007 but is rather obsolete today. Finally, Chapters 7 and 8 are dedicated to the final conclusions and the future research lines that may derive from the work carried out in this Dissertation. The objective of performing real-time simulations of highly complex industrial vehicles has been achieved with several of the formulations and implementations developed.
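As a small illustration of the explicit integrator named above, one step of the classical fourth-order Runge-Kutta method can be written generically; the acceleration function below is a toy placeholder, not the semi-recursive multibody dynamics of the Dissertation.

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One step of the classical explicit fourth-order Runge-Kutta method
    for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Toy state z = [q, qdot] in independent relative coordinates; the
# acceleration below is a pendulum-like placeholder, standing in for the
# actual multibody dynamics (mass matrix solve, applied forces, etc.).
def f(t, z):
    q, qd = np.split(z, 2)
    qdd = -9.81 * np.sin(q)
    return np.concatenate([qd, qdd])

z, h = np.array([0.5, 0.0]), 1e-3
for i in range(4000):                  # 4 s of simulated time
    z = rk4_step(f, i * h, z, h)
```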
Abstract:
Image segmentation is an important field in computer vision and one of its most active research areas, with applications in image understanding, object detection, face recognition, video surveillance and medical image processing. Image segmentation is a challenging problem in general, but especially so in the biological and medical imaging fields, where the acquisition techniques usually produce cluttered and noisy images and near-perfect accuracy is required in many cases. In this thesis we first review and compare some standard techniques widely used for medical image segmentation. These techniques use pixel-wise classifiers and introduce weak pairwise regularization, which is insufficient in many cases. We study their difficulties in capturing high-level structural information about the objects to segment. This deficiency leads to erroneous detections, ragged boundaries, incorrect topological configurations and wrong shapes. To deal with these problems, we propose a new regularization method that learns shape and topological information from training data in a nonparametric way using high-order potentials. High-order potentials are becoming increasingly popular in computer vision. However, the exact representation of a general high-order potential defined over many variables is computationally infeasible. We use a compact representation of the potentials based on a finite set of patterns learned from training data that, in turn, depends on the observations. Thanks to this representation, high-order potentials can be converted into pairwise potentials with some added auxiliary variables and minimized with tree-reweighted message passing (TRW) and belief propagation (BP) techniques. Both synthetic and real experiments confirm that our model fixes the errors of weaker approaches. Even with high-level regularization, perfect accuracy is still unattainable, and manual editing of the segmentation results is necessary. Manual editing is tedious and cumbersome, and tools that assist the user are greatly appreciated. These tools need to be precise, but also fast enough to be used interactively. Active contours are a good solution: they are good for precise boundary detection and, instead of finding a global solution, they provide a fine tuning of previously existing results. However, they require an implicit representation to deal with topological changes of the contour, and this leads to partial differential equations (PDEs) that are computationally costly to solve and may present numerical stability issues. We present a morphological approach to contour evolution based on a new curvature morphological operator valid for surfaces of any dimension. We approximate the numerical solution of the contour evolution PDE by the successive application of a set of morphological operators defined on a binary level set. These operators are very fast, do not suffer from numerical stability issues, and do not degrade the level set function, so there is no need to reinitialize it.
Moreover, their implementation is much easier than that of their PDE counterparts, since they do not require the use of sophisticated numerical algorithms. From a theoretical point of view, we delve into the connections between differential and morphological operators, and introduce novel results in this area. We validate the approach by providing a morphological implementation of the geodesic active contours, the active contours without edges, and turbopixels. In the experiments conducted, the morphological implementations converge to solutions equivalent to those achieved by traditional numerical solutions, but with significant gains in simplicity, speed, and stability.
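A heavily simplified way to see the morphological idea: for a binary level set, iterated median filtering is a classical morphological approximation to mean-curvature motion, with no CFL time-step restriction and no reinitialization. The thesis's curvature operator is a more elaborate, dimension-general construction; the sketch below is only this simple median-filter variant.

```python
import numpy as np
from scipy import ndimage

def morphological_curvature_flow(region, steps=50, size=3):
    """Approximate mean-curvature motion of a contour by iterating a
    median filter on a binary level set. This is a classical morphological
    stand-in for the curvature PDE: each pass is fast, has no CFL time-step
    restriction, and never requires reinitializing the level set."""
    u = (np.asarray(region) > 0).astype(np.uint8)
    for _ in range(steps):
        u = ndimage.median_filter(u, size=size)
    return u

# A noisy disk is denoised and rounded by the discrete curvature flow.
y, x = np.mgrid[0:128, 0:128]
disk = (x - 64) ** 2 + (y - 64) ** 2 < 40 ** 2
disk ^= np.random.default_rng(0).random(disk.shape) < 0.02   # salt noise
smooth = morphological_curvature_flow(disk)
```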
Abstract:
A mathematical model for the group combustion of pulverized coal particles was developed in a previous work. It includes the Lagrangian description of the dehumidification, devolatilization and char gasification reactions of the coal particles in the homogenized gaseous environment resulting from the three fuels, CO, H2 and volatiles, supplied by the gasification of the particles, and their simultaneous group combustion by the gas-phase oxidation reactions, which are considered to be very fast. This model is complemented here with an analysis of the particle dynamics, determined principally by the effects of aerodynamic drag and gravity, and of the particle dispersion, based on a stochastic model. It is also extended to include two other, simpler models for the gasification of the particles: the first for particles small enough to extinguish the surrounding diffusion flames, and the second for particles with small ash content, when the porous shell of ashes remaining after gasification of the char, which is not structurally stable, is disrupted. As an example of the applicability of the models, they are used in the numerical simulation of an experiment with a non-swirling pulverized coal jet in nearly stagnant air at ambient temperature, with an initial region of interaction with a small annular methane flame. Computational algorithms for solving the different stages undergone by a coal particle during its combustion are proposed. For the partial differential equations modeling the gas phase, a second-order finite element method combined with a semi-Lagrangian characteristics method is used. The results obtained with the three versions of the model are compared with one another and show that the first of the simpler models fits the experimental results best.
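To illustrate the semi-Lagrangian characteristics idea used for the gas phase, the sketch below performs one step for the 1-D linear advection equation on a periodic grid: each node is traced back along its characteristic and the solution is interpolated at the departure point. This is a generic textbook version, not the second-order finite element scheme of the paper.

```python
import numpy as np

def semi_lagrangian_step(u, c, dx, dt):
    """One semi-Lagrangian step for 1-D linear advection u_t + c u_x = 0 on
    a periodic grid: trace each node back along its characteristic and
    interpolate linearly at the departure point (stable for any dt)."""
    n = u.size
    x = np.arange(n) * dx
    x_dep = (x - c * dt) % (n * dx)          # feet of the characteristics
    j = np.floor(x_dep / dx).astype(int)
    theta = x_dep / dx - j
    return (1.0 - theta) * u[j % n] + theta * u[(j + 1) % n]

# A Gaussian pulse advected around the periodic domain.
n, dx = 200, 1.0 / 200
u = np.exp(-200.0 * (np.arange(n) * dx - 0.3) ** 2)
for _ in range(100):
    u = semi_lagrangian_step(u, c=1.0, dx=dx, dt=0.02)   # CFL number 4
```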
Abstract:
We propose in this work a very simple torsion-free beam element capable of capturing geometrical nonlinearities. The simple formulation is objective and unconditionally convergent for geometrically nonlinear models with large displacements, in the traditional sense of guaranteeing more precise numerical solutions for finer discretizations. The formulation does not employ rotational degrees of freedom, can be applied to two- and three-dimensional problems, and is computationally very efficient.