32 results for Partial Differential Equation

at Universidad Politécnica de Madrid


Relevância:

100.00%

Publicador:

Resumo:

Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed over the last decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and on the spatial dimension of the problem considered. While simple models based on perfect flows, such as panel methods or potential flow models, can be very fast to solve, they usually suffer from poor accuracy when simulating real (transonic, viscous) flows. On the other hand, more complex models such as the full Navier-Stokes equations provide high-fidelity predictions, but at a much higher computational cost. A good compromise between accuracy and computational time therefore has to be found for engineering applications.

A discretisation technique widely used within industry is the so-called Finite Volume approach on unstructured meshes. This technique spatially discretises the flow motion equations onto a set of elements which form a mesh, a discrete representation of the continuous domain. Using this approach, for a given flow model equation, the accuracy and computational time mainly depend on the distribution of the nodes forming the mesh. A good compromise between accuracy and computational time can therefore be obtained by defining the mesh carefully. However, defining an optimal mesh for complex flows and geometries requires a very high level of expertise in fluid mechanics and numerical analysis, and in most cases it is impossible to simply guess which regions of the computational domain affect the accuracy the most. It is thus desirable to have an automated remeshing tool; such adaptation is more flexible on unstructured meshes than on their structured counterpart. However, adaptive methods currently in use still leave an open question: how to drive the adaptation efficiently? Pioneering sensors based on flow features generally suffer from a lack of reliability, so in the last decade more effort has been devoted to developing numerical-error-based sensors, such as adjoint-based adaptation sensors. While very efficient at adapting meshes for a given functional output, the latter method is very expensive, as it requires solving a dual set of equations and computing the sensor on an embedded mesh. It would therefore be desirable to develop a more affordable numerical error estimation method.

The current work aims at estimating the truncation error, which arises when discretising a partial differential equation: it consists of the higher-order terms neglected in the construction of the numerical scheme. The truncation error provides very useful information, as it is strongly related to the flow model equation and its discretisation. On the one hand, it is a very reliable measure of the quality of the mesh, and therefore very useful for driving a mesh adaptation procedure. On the other hand, it is strongly linked to the flow model equation, so that a careful estimation actually gives information on how well a given equation is solved, which may be useful in the context of τ-extrapolation or zonal modelling.

The work is organized as follows. Chap. 1 contains a short review of mesh adaptation techniques as well as of numerical error prediction. In the first section, Sec. 1.1, the basic refinement strategies are reviewed and the main contributions to structured and unstructured mesh adaptation are presented. Sec. 1.2 introduces the definitions of the errors encountered when solving Computational Fluid Dynamics problems and reviews the most common approaches to predict them. Chap. 2 is devoted to the mathematical formulation of truncation error estimation in the context of the finite volume methodology, as well as to a complete verification procedure. Several features are studied, such as the influence of grid non-uniformities, non-linearity, boundary conditions and non-converged numerical solutions. This verification part has been submitted and accepted for publication in the Journal of Computational Physics. Chap. 3 presents a mesh adaptation algorithm based on truncation error estimates and compares the results to a feature-based and an adjoint-based sensor (in collaboration with Jorge Ponsín, INTA). Two- and three-dimensional cases relevant for validation in the aeronautical industry are considered. This part has been submitted and accepted in the AIAA Journal. An extension to the Reynolds-Averaged Navier-Stokes equations is also included, where τ-estimation-based mesh adaptation and τ-extrapolation are applied to viscous wing profiles. The latter has been submitted to the Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering.

Keywords: mesh adaptation, numerical error prediction, finite volume
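The abstract describes driving mesh adaptation with the estimated truncation error. The following is a minimal, hypothetical sketch of that driving step only; the thesis couples such a sensor to an unstructured finite-volume solver, and the fixed-fraction flagging rule below is an assumption for illustration.

```python
# Hypothetical sketch: flag cells for refinement from a truncation-error sensor.
# The sensor values are simply an array with one entry per cell.
import numpy as np

def flag_cells_for_refinement(tau, fraction=0.1):
    """Return indices of the cells with the largest |tau| (estimated truncation error).

    tau      : array of estimated truncation error per cell
    fraction : fraction of cells to refine in each adaptation cycle (assumed rule)
    """
    n_refine = max(1, int(fraction * tau.size))
    order = np.argsort(np.abs(tau))[::-1]        # largest error first
    return order[:n_refine]

# Toy usage: random sensor values standing in for a real tau-estimate.
rng = np.random.default_rng(0)
tau = rng.normal(scale=1e-3, size=1000)
cells = flag_cells_for_refinement(tau, fraction=0.05)
print(f"{cells.size} cells flagged, max |tau| = {np.abs(tau).max():.2e}")
```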

Relevância:

100.00%

Publicador:

Resumo:

An evolution of the finite difference method has been the development of the generalized finite difference (GFD) method, which can be applied to irregular grids or clouds of points. In this method a Taylor series expansion is used together with a moving least squares (MLS) approximation; the explicit difference formulae for irregular clouds of points can then be easily obtained using a simple Cholesky method. The MLS-GFD method is a mesh-free method that uses only points.

A contribution of this Thesis is the application of the MLS-GFDM to the modelling of elliptic anisotropic electrical conductivity problems, including the case of real tissues in which the fibre direction is not fixed but varies throughout the tissue. The Thesis also extends the generalized finite difference method to the explicit solution of parabolic anisotropic equations. The explicit method includes a stability limit, formulated for the case of irregular clouds of nodes, that can be easily calculated. A new analytical solution for a homogeneous parabolic anisotropic equation is also presented, and the explicit MLS-GFDM is applied to parabolic anisotropic electrical conductivity problems.

The obvious difficulty of performing direct measurements in electrocardiology has motivated wide interest in the numerical simulation of cardiac models. The main contribution of this Thesis is the application of an explicit scheme based on the MLS-GFDM, using operator splitting, to the modelling of monodomain electrical conductivity problems, including the case of anisotropic real tissues. We present a highly efficient, accurate and conditionally stable algorithm to solve the monodomain model, which describes the electrical activity of the heart. The model consists of a parabolic anisotropic partial differential equation (PDE) coupled to a system of ordinary differential equations (ODEs) describing the electrochemical reactions in the cardiac cells. The resulting system is challenging to solve numerically because of its complexity. We propose a method based on operator splitting, with a meshless method for solving the PDE and a Runge-Kutta method for solving the system of ODEs for the membrane and ionic currents.
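A hedged sketch of the operator-splitting idea for a monodomain-style model follows. A 1-D finite-difference grid stands in for the meshless GFDM discretisation and FitzHugh-Nagumo-like kinetics stand in for the real ionic model; both are assumptions, and the thesis's scheme and membrane model differ.

```python
# Godunov (first-order) operator splitting for a 1-D monodomain-style model:
#   u_t = D u_xx + I_ion(u, w),   w_t = g(u, w)
import numpy as np

D, dx, dt, nx, nt = 1e-3, 0.01, 0.01, 200, 2000
u = np.zeros(nx); u[:20] = 1.0          # initial stimulus
w = np.zeros(nx)

def reaction(u, w):
    a, eps, beta = 0.1, 0.01, 0.5
    du = u * (1.0 - u) * (u - a) - w    # FitzHugh-Nagumo-like kinetics (assumed)
    dw = eps * (beta * u - w)
    return du, dw

for _ in range(nt):
    # (1) reaction sub-step: one RK4 step of the local ODE system
    k1u, k1w = reaction(u, w)
    k2u, k2w = reaction(u + 0.5*dt*k1u, w + 0.5*dt*k1w)
    k3u, k3w = reaction(u + 0.5*dt*k2u, w + 0.5*dt*k2w)
    k4u, k4w = reaction(u + dt*k3u, w + dt*k3w)
    u += dt/6 * (k1u + 2*k2u + 2*k3u + k4u)
    w += dt/6 * (k1w + 2*k2w + 2*k3w + k4w)
    # (2) diffusion sub-step: explicit step, stable while D*dt/dx**2 <= 0.5
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2*u[1:-1] + u[:-2]) / dx**2
    lap[0] = 2.0 * (u[1] - u[0]) / dx**2        # zero-flux ends (ghost node)
    lap[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    u += dt * D * lap

print("u range after simulation:", u.min(), u.max())
```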

Relevância:

100.00%

Publicador:

Resumo:

This project deals with the numerical simulation of a dynamic phenomenon: the behaviour of a wave transmitted along the elastic string of a musical instrument whose ends are anchored. The physical phenomenon is modelled by a hyperbolic partial differential equation in one spatial variable and time, together with Dirichlet boundary conditions at the ends and the initial conditions that start the process. Algorithms were then developed for the numerical method employed (central and forward finite differences), and the approximate problem was programmed and analysed for consistency, stability and convergence, yielding results in agreement with the analytical solution of the mathematical problem. The programming and output of results were carried out with Visual Studio 8.0, and the object programming with Visual Basic .NET.
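A minimal sketch of the explicit central-difference scheme described above, written in Python rather than the project's Visual Basic .NET; the string length, wave speed and triangular initial "pluck" are illustrative assumptions.

```python
# Explicit central-difference (leapfrog) scheme for the 1-D wave equation
# u_tt = c^2 u_xx on a string of length L with fixed (Dirichlet) ends.
import numpy as np

L, c, nx, nt = 1.0, 1.0, 101, 400
dx = L / (nx - 1)
dt = 0.9 * dx / c                      # CFL number 0.9 <= 1 for stability
r2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, nx)
u_prev = np.where(x < 0.5, 2*x, 2*(1 - x)) * 0.05   # triangular "pluck"
# First step uses the zero initial-velocity condition:
u = u_prev.copy()
u[1:-1] = u_prev[1:-1] + 0.5 * r2 * (u_prev[2:] - 2*u_prev[1:-1] + u_prev[:-2])

for _ in range(nt):
    u_next = np.empty_like(u)
    u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2*u[1:-1] + u[:-2]))
    u_next[0] = u_next[-1] = 0.0       # anchored ends (Dirichlet)
    u_prev, u = u, u_next

print("max displacement after", nt, "steps:", np.abs(u).max())
```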

Relevância:

100.00%

Publicador:

Resumo:

This paper presents a Finite Element Model which has been used for forecasting the diffusion of innovations in time and space. Unlike the conventional models used in the diffusion literature, this model accounts for spatial heterogeneity. The implementation steps of the model are explained by applying it to the diffusion of photovoltaic systems in a local region in southern Germany. The applied model is based on a parabolic partial differential equation that describes the diffusion ratio of photovoltaic systems in a given region over time. The results of the application show that the Finite Element Model constitutes a powerful tool for better understanding the diffusion of an innovation as a simultaneous space-time process. Model limitations and possible extensions for future research are also discussed.
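A minimal sketch of the finite-element building blocks the abstract refers to, reduced to a 1-D toy: linear elements, mass/stiffness assembly and implicit Euler time stepping for a parabolic diffusion equation. The domain, coefficients, seed condition and the interpretation of u as an adoption ratio are assumptions for illustration, not the paper's calibrated 2-D model.

```python
# 1-D linear FEM for u_t = d * u_xx, advanced with implicit Euler.
import numpy as np

n_el, d, dt, steps = 50, 0.01, 0.05, 200
n = n_el + 1
h = 1.0 / n_el

# Element mass and stiffness matrices for linear (hat) basis functions.
Me = h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
Ke = 1.0 / h * np.array([[1.0, -1.0], [-1.0, 1.0]])

M = np.zeros((n, n)); K = np.zeros((n, n))
for e in range(n_el):                      # standard FEM assembly loop
    idx = np.ix_([e, e + 1], [e, e + 1])
    M[idx] += Me
    K[idx] += Ke

u = np.zeros(n); u[0] = 1.0                # adopters initially near x = 0 (assumed)
A = M + dt * d * K                         # implicit Euler system matrix
A[0, :] = 0.0; A[0, 0] = 1.0               # Dirichlet u(0) = 1: saturated "seed" region

for _ in range(steps):
    b = M @ u
    b[0] = 1.0
    u = np.linalg.solve(A, b)

print("adoption ratio at x = 0.5:", u[n // 2])
```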

Relevância:

90.00%

Publicador:

Resumo:

We consider optical design from two points of view: imaging optics and nonimaging optics. Imaging optics focuses on forming images of the points of the object. Nonimaging optics arose from the development of concentrators and illuminators; it focuses on the efficient transfer of light energy and has wide applications in illumination and concentration photovoltaics.

In general, compact optical systems are desirable for both imaging and nonimaging designs. For nonimaging optical systems, compact optics is important for reducing cost, for two reasons: (1) compact optics has a small volume, which means less material is needed for mass production; (2) compact optics is small in size and light in weight, which saves cost in transportation. For imaging optical systems, in addition to the above advantages, compact optics also increases the portability of devices, which is a major asset for wearable display technologies such as Head Mounted Displays (HMD). This thesis presents novel design approaches for compact optical systems for both imaging and nonimaging applications.

The collimator is a typical nonimaging application in illumination and, due to the reciprocity of light, it can be used in concentration photovoltaics as well. There are several approaches to collimator design; in general, all of them give an aperture-diameter-to-collimator-height ratio no greater than 2. In order to reduce the height of the collimator while maintaining the illumination area, a multichannel design is presented in this thesis.

In imaging optics, aspheric and freeform surfaces are useful for controlling image aberrations and for reducing the number and size of optical elements. Due to the rapid development of digital computing, ray tracing can easily be performed to evaluate the performance of an optical system. This has led to modern optical designs being created using different multi-parametric optimization techniques. These techniques require a good initial design as a starting point, so that the final design obtained after the optimization procedure can reach the optimum solution. This calls for a direct design method for aspheric and freeform surfaces that yields designs close to the optimum. A design method based on differential equations is presented in this thesis to obtain designs with a single freeform surface and with two aspheric surfaces.

The thesis comprises five chapters. In Chapter 1, the basic concepts of imaging and nonimaging optics are presented and the typical design techniques are introduced. Chapter 2 describes the design of an ultra-compact collimator. The ultra-low aspect ratio of this collimator is achieved by using a multichannel structure. Its design procedure is presented, together with a fabricated prototype and its characterization, which confirms the ultra-compactness of the device. Chapter 3 describes the main concepts in the optimization of optical systems: the merit function and the Damped Least-Squares method. The importance of a good starting point is demonstrated by presenting the same example approached through different design strategies. The differential equation method is introduced as an ideal tool to obtain a good starting point for the final solution. In addition, different interpolation and representation techniques for aspheric and freeform surfaces are presented for the optimization procedure. Chapter 4 describes the application of the differential equation method to the design of an optical system with a single freeform surface. Basic concepts of differential geometry are presented to support the derivation of the partial differential equations, and a numerical solution procedure is also given. The initial condition is chosen as an additional degree of freedom to control the surface on which the image is formed. Based on this approach, an anastigmatic design can be readily obtained and is used as the starting point for a design example of an HMD with a single reflective surface; after optimization, the evaluation shows improved performance (MTF). Chapter 5 describes the differential equation method extended to designs with two aspheric surfaces. For single-surface optical designs, neither the image surface nor the mapping between object and image points can be prescribed. With one more surface added, the image surface can be prescribed. This leads to a set of three implicit ordinary differential equations, whose numerical solution can be obtained with MATLAB; the procedure is also explained. This design method yields an anastigmatic lens, which is compared with an aplanatic lens: the anastigmatic design converges much faster in the optimization and the final solution shows better performance.
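A hedged sketch of a Damped Least-Squares (Levenberg-Marquardt style) iteration, the optimization workhorse mentioned for Chapter 3. The merit function here is a toy exponential-fitting residual, not an optical merit function, and the damping update rule is one common choice among several.

```python
# Damped Least-Squares step: solve (J^T J + lam*I) dp = -J^T r and adapt lam.
import numpy as np

def residuals(p, x, y):
    return y - p[0] * np.exp(-p[1] * x)

def jacobian(p, x):
    e = np.exp(-p[1] * x)
    return np.column_stack([-e, p[0] * x * e])   # d r / d p

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * x) + 0.01 * rng.normal(size=x.size)

p, lam = np.array([1.0, 1.0]), 1e-3              # starting point and damping
for _ in range(50):
    r = residuals(p, x, y)
    J = jacobian(p, x)
    dp = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(residuals(p + dp, x, y)**2) < np.sum(r**2):
        p, lam = p + dp, lam * 0.5               # accept step, relax damping
    else:
        lam *= 10.0                              # reject step, increase damping

print("fitted parameters:", p)
```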

Relevância:

80.00%

Publicador:

Resumo:

We present a remote sensing observational method for the measurement of the spatio-temporal dynamics of ocean waves. Variational techniques are used to recover a coherent space-time reconstruction of oceanic sea states from stereo video imagery. The stereoscopic reconstruction problem is expressed in a variational optimization framework: we design an energy functional whose minimizer is the desired temporal sequence of wave heights. The functional combines photometric observations with spatial and temporal regularizers. A nested iterative scheme is devised to numerically solve, via 3-D multigrid methods, the system of partial differential equations resulting from the optimality condition of the energy functional. The output of our method is the coherent, simultaneous estimation of the wave surface height and radiance at multiple snapshots. We demonstrate our algorithm on real data collected offshore. Statistical and spectral analyses are performed, and a comparison with an existing sequential method is carried out.
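A drastically simplified analogue of the variational formulation, assuming a single quadratic data term in place of the photometric stereo term and plain gradient descent in place of the nested multigrid solver; it only illustrates the "data term plus smoothness regularizer" structure of such energy functionals.

```python
# Minimise E(h) = ||h - z||^2 + lam * ||grad h||^2 over a 2-D grid by gradient descent.
import numpy as np

rng = np.random.default_rng(2)
ny, nx = 64, 64
yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
truth = np.sin(2 * np.pi * xx / nx) * np.cos(2 * np.pi * yy / ny)
z = truth + 0.3 * rng.normal(size=truth.shape)   # "observed" noisy heights

def laplacian(h):
    lap = np.zeros_like(h)
    lap[1:-1, 1:-1] = (h[2:, 1:-1] + h[:-2, 1:-1] + h[1:-1, 2:]
                       + h[1:-1, :-2] - 4 * h[1:-1, 1:-1])
    return lap

h, lam, step = z.copy(), 2.0, 0.05
for _ in range(500):
    grad_E = 2 * (h - z) - 2 * lam * laplacian(h)   # dE/dh (up to boundary terms)
    h -= step * grad_E

print("RMS error before/after smoothing:",
      np.sqrt(np.mean((z - truth)**2)), np.sqrt(np.mean((h - truth)**2)))
```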

Relevância:

80.00%

Publicador:

Resumo:

Let π: FM → M be the bundle of linear frames of a manifold M. A basis L^i_{jk}, j < k, of diffeomorphism-invariant Lagrangians on J^1(FM) was determined in [J. Muñoz Masqué, M. E. Rosado, Invariant variational problems on linear frame bundles, J. Phys. A 35 (2002) 2013-2036]. The notion of a characteristic hypersurface for an arbitrary first-order PDE system on an arbitrary fibred manifold π: P → M is introduced, and for the systems defined by the Euler-Lagrange equations of the L^i_{jk} every hypersurface is shown to be characteristic. The Euler-Lagrange equations of the natural basis of diffeomorphism-invariant Lagrangian densities L^i_{jk} on the bundle of linear frames of a manifold M are shown to form an underdetermined PDE system such that every hypersurface of M is characteristic for these equations. This explains why these systems cannot be written in Cauchy-Kowalevski form, although they are known to be formally integrable by using the tools of the geometric theory of partial differential equations; see [J. Muñoz Masqué, M. E. Rosado, Integrability of the field equations of invariant variational problems on linear frame bundles, J. Geom. Phys. 49 (2004), 119-155].
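For orientation only, here is the classical coordinate-level notion of a characteristic hypersurface for a first-order system; the paper's definition is intrinsic (on a fibred manifold) and may differ in detail, and the notation below is ours.

```latex
% First-order system F^a(x^i, u^\alpha, u^\alpha_i) = 0 in local coordinates;
% S = \{\varphi = 0\} a hypersurface with d\varphi \neq 0. Principal symbol
% in the conormal direction:
(\sigma_\varphi)^{a}_{\ \alpha}
   \;=\; \sum_i \frac{\partial F^{a}}{\partial u^{\alpha}_{i}}\,
           \frac{\partial \varphi}{\partial x^{i}} .
```

S is characteristic when this symbol matrix fails to have maximal rank, i.e. the system does not determine the derivatives of the unknowns transversal to S. If, as shown here for the Euler-Lagrange equations of the L^i_{jk}, this happens for every hypersurface, then no hypersurface can serve as a Cauchy data surface in Cauchy-Kowalevski form.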

Relevância:

80.00%

Publicador:

Resumo:

The aim of this paper was to accurately estimate the local truncation error of partial differential equations that are numerically solved using a finite difference or finite volume approach on structured and unstructured meshes. In this work, we approximated the local truncation error using the τ-estimation procedure, which compares the residuals on a sequence of grids with different spacing. First, we focused the analysis on one-dimensional scalar linear and non-linear test cases to examine the accuracy of the estimation of the truncation error for both finite difference and finite volume approaches on different grid topologies. Then, we extended the analysis to two-dimensional problems: first to linear and non-linear scalar equations and finally to the Euler equations. We demonstrated that this approach yields a highly accurate estimation of the truncation error if certain conditions are fulfilled. These conditions are related to the accuracy of the restriction operators, the choice of the boundary conditions, the distortion of the grids and the magnitude of the iteration error.
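A hedged sketch of τ-estimation on a 1-D Poisson problem with nested grids: the coarse-grid residual of the restricted fine-grid solution estimates the (relative) truncation error. The 4/3 scaling at the end is the usual correction for a second-order scheme and a grid ratio of 2, and is an assumption of this toy setup rather than the paper's exact procedure.

```python
# tau-estimation for -u'' = f on (0,1), u(0) = u(1) = 0, central differences.
import numpy as np

def discretise(n):
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    f = np.pi**2 * np.sin(np.pi * x)          # manufactured: u = sin(pi x)
    return x, A, f

x_f, A_f, f_f = discretise(63)                # fine grid, spacing h
x_c, A_c, f_c = discretise(31)                # nested coarse grid, spacing 2h
u_f = np.linalg.solve(A_f, f_f)               # converged fine-grid solution

u_restricted = u_f[1::2]                      # injection onto the coarse grid
tau_est = A_c @ u_restricted - f_c            # relative truncation error estimate
tau_exact = A_c @ np.sin(np.pi * x_c) - f_c   # exact coarse-grid truncation error

print("max |tau_exact|            :", np.abs(tau_exact).max())
print("max |(4/3)*tau_est - exact|:", np.abs(4.0/3.0 * tau_est - tau_exact).max())
```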

Relevância:

80.00%

Publicador:

Resumo:

We develop a novel remote sensing technique for the observation of waves on the ocean surface. Our method infers the 3-D waveform and radiance of oceanic sea states via a variational stereo imagery formulation. In this setting, the shape and radiance of the wave surface are given by minimizers of a composite energy functional that combines a photometric matching term with regularization terms involving the smoothness of the unknowns. The desired ocean surface shape and radiance are the solution of a system of coupled partial differential equations derived from the optimality conditions of the energy functional. The proposed method is naturally extended to study the spatiotemporal dynamics of ocean waves and applied to three sets of stereo video data. Statistical and spectral analyses are carried out. Our results provide evidence that the observed omnidirectional wavenumber spectrum S(k) decays as k^-2.5, in agreement with Zakharov's theory (1999). Furthermore, the 3-D spectrum of the reconstructed wave surface is exploited to estimate wave dispersion and currents.
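A hedged sketch of how an omnidirectional wavenumber spectrum S(k) can be estimated from a gridded elevation snapshot by radially binning the 2-D power spectrum. The synthetic field, the normalization, and the absence of windowing and ensemble averaging are simplifications, not the paper's processing chain.

```python
import numpy as np

n, dx = 256, 0.5                                  # grid size and spacing [m]
rng = np.random.default_rng(3)
eta = rng.normal(size=(n, n))                     # stand-in elevation field [m]

spec2d = np.abs(np.fft.fftshift(np.fft.fft2(eta)))**2 * dx**2 / n**2
k1d = np.fft.fftshift(np.fft.fftfreq(n, d=dx)) * 2.0 * np.pi
kx, ky = np.meshgrid(k1d, k1d)
kmag = np.hypot(kx, ky)

nbins = 60                                        # radial binning over |k|
edges = np.linspace(0.0, kmag.max(), nbins + 1)
which = np.digitize(kmag.ravel(), edges) - 1
S = np.array([spec2d.ravel()[which == b].mean() if np.any(which == b) else 0.0
              for b in range(nbins)])
k_centres = 0.5 * (edges[:-1] + edges[1:])

# For a real reconstructed sea surface one would now fit the decay of S(k),
# e.g. checking a k^-2.5 power law over the gravity-wave range.
print(k_centres[:5], S[:5])
```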

Relevância:

80.00%

Publicador:

Resumo:

The Boundary Element Method (BEM) is a discretisation technique for solving partial differential equations which offers, for certain problems, important advantages over domain techniques. Despite the high CPU-time reduction that can be achieved, some 3D problems remain untreatable today because of the extremely large number of degrees of freedom (DOF) involved in the boundary description. Model reduction seems an appealing choice for accurate and efficient numerical simulations. However, in the BEM a reduction in the number of degrees of freedom does not imply a significant reduction in CPU time, because in this technique most of the computing time is spent on the construction of the discrete system of equations. Reducing the number of weighting functions as well therefore seems to be a key point for efficient boundary element simulations.
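A hedged sketch of the reduction idea in linear-algebra terms: project a dense BEM-style system onto a reduced solution basis and a reduced weighting (test) basis, i.e. a Petrov-Galerkin reduction. The dense matrix below is a random stand-in for a boundary-element influence matrix, and the bases are synthetic; in practice they would come from problem-adapted snapshots.

```python
import numpy as np

rng = np.random.default_rng(4)
N, k = 400, 20
H = rng.normal(size=(N, N)) + N * np.eye(N)      # dense stand-in for a BEM matrix
b = rng.normal(size=N)
x_full = np.linalg.solve(H, b)                   # reference full-order solution

# Reduced bases: Phi spans the solution space (seeded with x_full so the demo
# recovers it), Psi spans the reduced weighting (test) functions.
Phi = np.linalg.qr(np.column_stack([x_full, rng.normal(size=(N, k - 1))]))[0]
Psi = np.linalg.qr(rng.normal(size=(N, k)))[0]

Hr = Psi.T @ H @ Phi                             # k x k Petrov-Galerkin system
br = Psi.T @ b
x_red = Phi @ np.linalg.solve(Hr, br)

print("relative error of reduced solution:",
      np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full))
```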

Relevância:

80.00%

Publicador:

Resumo:

In recent decades, meshless methods (MMs), like the element-free Galerkin method (EFGM), have been widely studied and interesting results have been reached when solving partial differential equations. However, such solutions show a problem near the boundary conditions, where the required accuracy is not achieved. This is caused by the use of moving least squares or reproducing kernel particle methods to obtain the shape functions needed in MMs, since such methods are accurate enough in the interior of the integration domain but less so at its boundary. Bernstein curves, which themselves form a partition of unity, can solve this problem, providing the same accuracy in the interior of the domain and at its boundary.
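A small sketch illustrating the property the abstract relies on: Bernstein basis functions form a partition of unity and interpolate at the ends of the parameter interval, so boundary values are reproduced exactly, in contrast with generic MLS shape functions. The degree and sample points are arbitrary.

```python
import numpy as np
from math import comb

def bernstein_basis(n, t):
    """Return an array B[i, j] = B_{i,n}(t_j) for i = 0..n."""
    t = np.asarray(t, dtype=float)
    return np.array([comb(n, i) * t**i * (1.0 - t)**(n - i) for i in range(n + 1)])

n = 5
t = np.linspace(0.0, 1.0, 11)
B = bernstein_basis(n, t)

print("partition of unity:", np.allclose(B.sum(axis=0), 1.0))
print("values at t = 0   :", B[:, 0])     # (1, 0, ..., 0) -> boundary interpolation
print("values at t = 1   :", B[:, -1])    # (0, ..., 0, 1)
```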

Relevância:

80.00%

Publicador:

Resumo:

Combining the kinematical definitions of the two dimensionless parameters, the deceleration q(x) and the Hubble parameter t_0 H(x), we get a differential equation (where x = t/t_0 is the age of the universe relative to its present value t_0). A first integration gives the function H(x). The present values of the Hubble parameter [approximately t_0 H(1) ≈ 1] and of the deceleration parameter [approximately q(1) ≈ −0.5] determine the function H(x). A second integration gives the cosmological scale factor a(x). Differentiation of a(x) gives the speed of expansion of the universe. The evolution of the universe that results from our approach is an initial extremely fast exponential expansion (inflation), followed by an almost linear expansion (first decelerated, and later accelerated). For the future, at approximately t ≈ 3t_0, there is a final exponential expansion, a second inflation that produces a disaggregation of the universe to infinity. We find the necessary and sufficient conditions for this disaggregation to occur. The precise value of the final age is given with only one parameter: the present value of the deceleration parameter [q(1) ≈ −0.5]. This emerging picture of the history of the universe represents an important challenge, an opportunity for immediate research on the Universe. These conclusions have been reached without the use of any particular cosmological model of the universe.
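The two kinematical definitions already imply the first-order equation the abstract alludes to; a short derivation in standard notation follows (the paper's particular ansatz for q(x) is not reproduced here).

```latex
H \equiv \frac{\dot a}{a}, \qquad
q \equiv -\frac{\ddot a\, a}{\dot a^{2}}
\quad\Longrightarrow\quad
\dot H \;=\; \frac{\ddot a}{a} - H^{2} \;=\; -(1+q)\,H^{2}.
% In dimensionless form, with x = t/t_0 and h(x) = t_0 H(x):
\frac{d}{dx}\!\left(\frac{1}{h}\right) = 1 + q(x),
\qquad
\frac{d\,\ln a}{dx} = h(x).
```

A first integration, using t_0 H(1) ≈ 1, then yields h(x) = t_0 H(x), and a second integration, with a(1) = 1, yields the scale factor a(x), as stated in the abstract.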

Relevância:

80.00%

Publicador:

Resumo:

We study a system of three partial differential equations modelling the spatio-temporal behaviour of two competing populations of biological species, both of which are attracted chemotactically by the same signal substance. For a range of the parameters, the system possesses a uniquely determined spatially homogeneous positive equilibrium which is globally asymptotically stable within a certain nonempty range of the logistic growth coefficients.
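The abstract does not state the equations. For orientation only, a typical form of such a two-species chemotaxis-competition system with a shared signal (an assumption, not necessarily the exact system studied) is:

```latex
\begin{aligned}
u_t &= \Delta u - \chi_1 \nabla\!\cdot\!\left(u \nabla w\right)
       + \mu_1\, u\,(1 - u - a_1 v),\\
v_t &= \Delta v - \chi_2 \nabla\!\cdot\!\left(v \nabla w\right)
       + \mu_2\, v\,(1 - v - a_2 u),\\
w_t &= \Delta w - w + u + v,
\end{aligned}
```

where u and v are the competing population densities, w the shared chemical signal, and μ_1, μ_2 the logistic growth coefficients referred to above.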

Relevância:

80.00%

Publicador:

Resumo:

We consider non-negative solutions of a chemotaxis system with a non-constant chemotaxis sensitivity function X. This system appears as a limit case of a model for morphogenesis proposed by Bollenbach et al. (Phys. Rev. E 75, 2007). Under suitable boundary conditions, modelling the presence of a morphogen source at x = 0, we prove the existence of a global and bounded weak solution using an approximation by problems in which diffusion is introduced in the ordinary differential equation. Moreover, we prove the convergence of the solution to the unique steady state provided that ? is small and ? is large enough. Numerical simulations both illustrate these results and give rise to further conjectures on the solution behaviour that go beyond the rigorously proved statements.
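The approximation step is described only in words; schematically, and in our own assumed notation for a chemotaxis PDE coupled to an ODE, it reads:

```latex
% Limit system (PDE for the density u, ODE for v), sensitivity X = X(v):
u_t = \nabla\cdot\bigl(\nabla u - u\,X(v)\,\nabla v\bigr), \qquad v_t = g(u,v),
% approximated by problems with a small diffusion added to the ODE,
v^{\varepsilon}_t = \varepsilon\,\Delta v^{\varepsilon}
                    + g(u^{\varepsilon}, v^{\varepsilon}),
% and the weak solution of the limit system is obtained as \varepsilon \to 0.
```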

Relevância:

80.00%

Publicador:

Resumo:

The simulation of complex LoC (Lab-on-a-Chip) devices is a process that requires solving computationally expensive partial differential equations. An interesting alternative uses artificial neural networks to create computationally feasible models based on model-order-reduction (MOR) techniques. This paper proposes an approach that uses artificial neural networks for designing LoC components, treating the artificial neural network topology as an isomorphism of the LoC device topology. The parameters of the trained neural networks are based on the equations for modelling microfluidic circuits, analogous to electronic circuits. The neural networks have been trained to behave like AND, OR and Inverter gates. The parameters of the trained neural networks represent the features of LoC devices that behave as the aforementioned gates, which suggests that such LoC devices can compute universally.
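A toy sketch of the gate-emulation ingredient: a single logistic neuron trained on the AND truth table. The real networks are parametrised by microfluidic circuit equations and mirror the LoC topology; none of that is modelled here.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)          # AND truth table

rng = np.random.default_rng(5)
w, b, lr = rng.normal(size=2), 0.0, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):                            # plain gradient descent
    p = sigmoid(X @ w + b)
    grad = p - y                                 # dLoss/dz for cross-entropy loss
    w -= lr * X.T @ grad / len(y)
    b -= lr * grad.mean()

print("AND gate outputs:", np.round(sigmoid(X @ w + b), 3))
```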