991 results for Exponential Euler Method


Relevance:

30.00%

Publisher:

Abstract:

Industrial rotating machines may be exposed to severe dynamic excitations due to resonant working regimes. In the bending vibration problem of a machine rotor, the shaft, together with its attached discs, can be simply modelled using Bernoulli-Euler beam theory as a continuous beam subjected to a specific set of boundary conditions. In this study, the authors recall Rayleigh's method to propose an iterative strategy that allows the determination of natural frequencies and mode shapes of continuous beams, taking into account the effect of attached concentrated masses and rotational inertias, as well as different stiffness coefficients at the left and right ends. The algorithm starts with the exact solutions from Bernoulli-Euler beam theory, which are then updated through Rayleigh's quotient parameters. Several loading cases are examined in comparison with experimental data, and examples are presented to illustrate the validity of the model and the accuracy of the obtained values.
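The core idea can be sketched numerically: on a discretized beam, the fundamental frequency follows from Rayleigh's quotient, omega^2 = (phi^T K phi) / (phi^T M phi), with the mode shape phi refined by inverse iteration. This is a generic illustration, not the authors' continuous-beam algorithm; the parameters n, EI, rhoA, L and the attached mass m_att are made-up values, and the boundary handling (ends effectively pinned by stencil truncation) is deliberately crude.

```python
import numpy as np

def fundamental_frequency(n=50, EI=1.0, rhoA=1.0, L=1.0, m_att=0.5):
    h = L / n
    # Pentadiagonal finite-difference operator approximating EI * w''''
    K = np.zeros((n, n))
    for i in range(n):
        for j, c in zip((i - 2, i - 1, i, i + 1, i + 2), (1, -4, 6, -4, 1)):
            if 0 <= j < n:
                K[i, j] += c
    K *= EI / h**4
    M = rhoA * h * np.eye(n)
    M[n // 2, n // 2] += m_att        # concentrated mass attached at midspan
    # Inverse iteration drives phi towards the fundamental mode shape.
    phi = np.ones(n)
    for _ in range(200):
        phi = np.linalg.solve(K, M @ phi)
        phi /= np.linalg.norm(phi)
    omega2 = (phi @ K @ phi) / (phi @ M @ phi)   # Rayleigh's quotient
    return np.sqrt(omega2)
```

As expected physically, attaching a concentrated mass lowers the fundamental frequency, which the iteration reproduces.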

Relevance:

30.00%

Publisher:

Abstract:

We propose a mixed finite element method for a class of nonlinear diffusion equations, based on their interpretation as gradient flows in optimal transportation metrics. We introduce an appropriate linearization of the optimal transport problem, which leads to a mixed symmetric formulation. This formulation preserves the maximum principle for the semi-discrete scheme, as well as for the fully discrete scheme for a certain class of problems. In addition, solutions of the mixed formulation maintain exponential convergence in relative entropy towards the steady state in the case of a nonlinear Fokker-Planck equation with uniformly convex potential. We demonstrate the behavior of the proposed scheme with 2D simulations of the porous medium equations and of blow-up questions in the Patlak-Keller-Segel model.

Relevance:

30.00%

Publisher:

Abstract:

In this article, we consider solutions starting close to some linearly stable invariant tori in an analytic Hamiltonian system and we prove results of stability for a super-exponentially long interval of time, under generic conditions. The proof combines classical Birkhoff normal forms and a new method to obtain generic Nekhoroshev estimates developed by the author and L. Niederman in another paper. We will mainly focus on the neighbourhood of elliptic fixed points, the other cases being completely similar.

Relevance:

30.00%

Publisher:

Abstract:

Models incorporating more realistic descriptions of customer behavior, such as customers choosing from an offer set, have recently become popular in assortment optimization and revenue management. The dynamic program for these models is intractable and is approximated by a deterministic linear program, called the CDLP, which has an exponential number of columns. However, when the segment consideration sets overlap, the CDLP is difficult to solve. Column generation has been proposed, but finding an entering column has been shown to be NP-hard. In this paper we propose a new approach, called SDCP, to solving the CDLP based on segments and their consideration sets. SDCP is a relaxation of CDLP and hence forms a looser upper bound on the dynamic program, but it coincides with CDLP in the case of non-overlapping segments. If the number of elements in a segment's consideration set is not very large, SDCP can be applied to any discrete-choice model of consumer behavior. We tighten the SDCP bound by (i) simulations, called the randomized concave programming (RCP) method, and (ii) adding cuts to a recent compact formulation of the problem for a latent multinomial-choice model of demand (SBLP+). This latter approach turns out to be very effective, essentially obtaining the CDLP value and excellent revenue performance in simulations, even for overlapping segments. By formulating the problem as a separation problem, we give insight into why CDLP is easy for the MNL with non-overlapping consideration sets and why generalizations of MNL pose difficulties. We perform numerical simulations to determine the revenue performance of all the methods on reference data sets in the literature.

Relevance:

30.00%

Publisher:

Abstract:

This paper studies the rate of convergence of an appropriate discretization scheme for the solution of the McKean-Vlasov equation introduced by Bossy and Talay. More specifically, we consider approximations of the distribution and of the density of the solution of the stochastic differential equation associated to the McKean-Vlasov equation. The scheme adopted here is a mixed one: Euler/weakly interacting particle system. If $n$ is the number of weakly interacting particles and $h$ is the uniform step in the time discretization, we prove that the rate of convergence of the distribution functions of the approximating sequence in the $L^1(\Omega\times \Bbb R)$ norm and in the sup norm is of the order of $\frac 1{\sqrt n} + h$, while for the densities it is of the order $h + \frac 1{\sqrt{nh}}$. This result is obtained by carefully employing techniques of Malliavin calculus.
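A mixed Euler/weakly-interacting-particle scheme of the kind analyzed above can be sketched in a few lines. The specific drift b(x, mu) = -(x - mean(mu)) and unit diffusion below are illustrative choices, not taken from the paper; the empirical measure of the particles stands in for the law of the solution.

```python
import numpy as np

def euler_particle(n=2000, h=0.01, T=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)           # n weakly interacting particles
    for _ in range(int(T / h)):
        drift = -(x - x.mean())          # interaction through the empirical mean
        x = x + h * drift + np.sqrt(h) * rng.standard_normal(n)
    return x
```

Per the result quoted above, the empirical distribution of the particles approximates the distribution of the solution with an error of order $\frac 1{\sqrt n} + h$ in the $L^1$ and sup norms.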

Relevance:

30.00%

Publisher:

Abstract:

Over the last 10 years, diffusion-weighted imaging (DWI) has become an important tool to investigate white matter (WM) anomalies in schizophrenia. Despite technological improvement and the exponential use of this technique, discrepancies remain and little is known about the optimal parameters to apply for diffusion weighting during image acquisition. Specifically, high b-value diffusion-weighted imaging, known to be more sensitive to slow diffusion, is not widely used, even though subtle myelin alterations such as those thought to occur in schizophrenia are likely to affect slow-diffusing protons. Schizophrenia patients and healthy controls were scanned with a high b-value (4000 s/mm²) protocol. Apparent diffusion coefficient (ADC) measures turned out to be very sensitive in detecting differences between schizophrenia patients and healthy volunteers, even in a relatively small sample. We speculate that this is related to the sensitivity of high b-value imaging to the slow-diffusing compartment, believed to reflect mainly the intra-axonal and myelin-bound water pool. We also compared these results to a low b-value imaging experiment performed on the same population in the same scanning session. Even though the acquisition protocols are not strictly comparable, we noticed important differences in sensitivity in favor of high b-value imaging, warranting further exploration.
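For context, the ADC mentioned above is conventionally estimated from two diffusion weightings via the monoexponential signal model S(b) = S0 * exp(-b * ADC). The numbers below are illustrative, not data from the study, and the study's point is precisely that high-b signal departs from this single-compartment model, so this is only the standard estimate.

```python
import numpy as np

def adc(signal_b, signal_0, b):
    """ADC in mm^2/s from the signal at b-value `b` (s/mm^2) and at b=0,
    assuming the monoexponential model S(b) = S0 * exp(-b * ADC)."""
    return -np.log(signal_b / signal_0) / b

# Example: at the high b-value used in the study (4000 s/mm^2), a signal
# ratio of 0.05 corresponds to ADC of roughly 7.5e-4 mm^2/s.
```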

Relevance:

30.00%

Publisher:

Abstract:

The ability to determine the location and relative strength of all transcription-factor binding sites in a genome is important both for a comprehensive understanding of gene regulation and for effective promoter engineering in biotechnological applications. Here we present a bioinformatically driven experimental method to accurately define the DNA-binding sequence specificity of transcription factors. A generalized profile was used as a predictive quantitative model for binding sites, and its parameters were estimated from in vitro-selected ligands using standard hidden Markov model training algorithms. Computer simulations showed that several thousand low- to medium-affinity sequences are required to generate a profile of desired accuracy. To produce data on this scale, we applied high-throughput genomics methods to the biochemical problem addressed here. A method combining systematic evolution of ligands by exponential enrichment (SELEX) and serial analysis of gene expression (SAGE) protocols was coupled to an automated quality-controlled sequence extraction procedure based on Phred quality scores. This allowed the sequencing of a database of more than 10,000 potential DNA ligands for the CTF/NFI transcription factor. The resulting binding-site model defines the sequence specificity of this protein with a high degree of accuracy not achieved earlier and thereby makes it possible to identify previously unknown regulatory sequences in genomic DNA. A covariance analysis of the selected sites revealed non-independent base preferences at different nucleotide positions, providing insight into the binding mechanism.

Relevance:

30.00%

Publisher:

Abstract:

The most suitable method for the estimation of size diversity is investigated. Size diversity is computed on the basis of the Shannon diversity expression adapted for continuous variables such as size. It takes the form of an integral involving the probability density function (pdf) of the size of the individuals. Different approaches for the estimation of the pdf are compared: parametric methods, which assume that the data come from a particular family of pdfs, and nonparametric methods, where the pdf is estimated using some kind of local evaluation. Exponential, generalized Pareto, normal, and log-normal distributions have been used to generate simulated samples using parameters estimated from real samples. Nonparametric methods include discrete computation of data histograms based on size intervals and continuous kernel estimation of the pdf. The kernel approach gives an accurate estimation of size diversity, whilst parametric methods are only useful when the reference distribution has a shape similar to the real one. Special attention is given to data standardization. Division of the data by the sample geometric mean is proposed as the most suitable standardization method, which shows additional advantages: the same size diversity value is obtained when using original sizes or log-transformed data, and size measurements with different dimensionality (lengths, areas, volumes, or biomasses) may be immediately compared by simply adding ln k, where k is the dimensionality (1, 2, or 3, respectively). Thus, kernel estimation, after data standardization by division by the sample geometric mean, emerges as the most reliable and generalizable method of size diversity evaluation.
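The recommended procedure can be sketched directly: standardize sizes by the sample geometric mean, estimate the pdf with a Gaussian kernel, and integrate -p(x) ln p(x) numerically. The Silverman bandwidth and the grid limits below are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def size_diversity(sizes):
    sizes = np.asarray(sizes, dtype=float)
    gm = np.exp(np.mean(np.log(sizes)))        # sample geometric mean
    z = sizes / gm                             # standardized sizes
    n = z.size
    h = 1.06 * z.std(ddof=1) * n ** (-0.2)     # Silverman's rule of thumb
    grid = np.linspace(z.min() - 4 * h, z.max() + 4 * h, 1000)
    # Gaussian kernel density estimate evaluated on the grid
    p = np.exp(-0.5 * ((grid[:, None] - z[None, :]) / h) ** 2).sum(axis=1)
    p /= n * h * np.sqrt(2 * np.pi)
    p = np.clip(p, 1e-300, None)               # guard against log(0)
    f = -p * np.log(p)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(grid))  # trapezoidal rule
```

Because the geometric-mean standardization removes scale, multiplying all sizes by a constant leaves the result unchanged, one of the advantages noted above.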

Relevance:

30.00%

Publisher:

Abstract:

In this paper several additional GMM specification tests are studied. A first test is a Chow-type test for structural parameter stability of GMM estimates. The test is inspired by the fact that "taste and technology" parameters are uncovered. The second set of specification tests are VAR encompassing tests. It is assumed that the DGP has a finite VAR representation. The moment restrictions which are suggested by economic theory and exploited in the GMM procedure represent one possible characterization of the DGP; the VAR is a different but compatible characterization of the same DGP. The idea of the VAR encompassing tests is to compare parameter estimates of the Euler conditions and VAR representations of the DGP obtained separately with parameter estimates of the Euler conditions and VAR representations obtained jointly. There are several ways to construct joint systems, which are discussed in the paper. Several applications are also discussed.

Relevance:

30.00%

Publisher:

Abstract:

In this paper a cell by cell anisotropic adaptive mesh technique is added to an existing staggered mesh Lagrange plus remap finite element ALE code for the solution of the Euler equations. The quadrilateral finite elements may be subdivided isotropically or anisotropically and a hierarchical data structure is employed. An efficient computational method is proposed, which only solves on the finest level of resolution that exists for each part of the domain with disjoint or hanging nodes being used at resolution transitions. The Lagrangian, equipotential mesh relaxation and advection (solution remapping) steps are generalised so that they may be applied on the dynamic mesh. It is shown that for a radial Sod problem and a two-dimensional Riemann problem the anisotropic adaptive mesh method runs over eight times faster.

Relevance:

30.00%

Publisher:

Abstract:

A cell by cell anisotropic adaptive mesh Arbitrary Lagrangian Eulerian (ALE) method for the solution of the Euler equations is described. An efficient approach to equipotential mesh relaxation on anisotropically refined meshes is developed. Results for two test problems are presented.

Relevance:

30.00%

Publisher:

Abstract:

An efficient numerical method is presented for the solution of the Euler equations governing the compressible flow of a real gas. The scheme is based on the approximate solution of a specially constructed set of linearised Riemann problems. An average of the flow variables across the interface between cells is required, and this is chosen to be the arithmetic mean for computational efficiency, which is in contrast to the usual square root averaging. The scheme is applied to a test problem for five different equations of state.

Relevance:

30.00%

Publisher:

Abstract:

A finite difference scheme based on flux difference splitting is presented for the solution of the Euler equations for the compressible flow of an ideal gas. A linearised Riemann problem is defined, and a scheme based on numerical characteristic decomposition is presented for obtaining approximate solutions to the linearised problem. An average of the flow variables across the interface between cells is required, and this average is chosen to be the arithmetic mean for computational efficiency, leading to arithmetic averaging. This is in contrast to the usual ‘square root’ averages found in this type of Riemann solver, where the computational expense can be prohibitive. The method of upwind differencing is used for the resulting scalar problems, together with a flux limiter for obtaining a second order scheme which avoids nonphysical, spurious oscillations. The scheme is applied to a shock tube problem and a blast wave problem. Each approximate solution compares well with those given by other schemes, and for the shock tube problem is in agreement with the exact solution.
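The averaging choice described above can be made concrete in a small sketch contrasting the usual Roe 'square root' (density-weighted) average with the cheaper arithmetic mean the scheme adopts. States are given as (rho, u, H): density, velocity, total specific enthalpy; the numbers in the test values are illustrative, not from the paper's test problems.

```python
import numpy as np

def roe_average(rhoL, uL, HL, rhoR, uR, HR):
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)     # square-root density weights
    u = (wL * uL + wR * uR) / (wL + wR)
    H = (wL * HL + wR * HR) / (wL + wR)
    return u, H

def arithmetic_average(rhoL, uL, HL, rhoR, uR, HR):
    # No square roots: one reason the arithmetic mean is cheaper per interface.
    return 0.5 * (uL + uR), 0.5 * (HL + HR)
```

The two averages coincide whenever the densities on both sides match, and differ across a density jump, which is where the computational saving of the arithmetic mean matters.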

Relevance:

30.00%

Publisher:

Abstract:

In recent years nonpolynomial finite element methods have received increasing attention for the efficient solution of wave problems. As with their close cousin, the method of particular solutions, high efficiency comes from using solutions to the Helmholtz equation as basis functions. We present and analyze such a method for the scattering of two-dimensional scalar waves from a polygonal domain that achieves exponential convergence purely by increasing the number of basis functions in each element. Key ingredients are the use of basis functions that capture the singularities at corners and the representation of the scattered field towards infinity by a combination of fundamental solutions. The solution is obtained by minimizing a least-squares functional, which we discretize in such a way that a matrix least-squares problem is obtained. We give computable exponential bounds on the rate of convergence of the least-squares functional that are in very good agreement with the observed numerical convergence. Challenging numerical examples, including a nonconvex polygon with several corner singularities and a cavity domain, are solved to around 10 digits of accuracy in a few seconds of CPU time. The examples are implemented concisely with MPSpack, a MATLAB toolbox for wave computations with nonpolynomial basis functions, developed by the authors. A code example is included.

Relevance:

30.00%

Publisher:

Abstract:

The energy-Casimir stability method, also known as the Arnold stability method, has been widely used in fluid dynamical applications to derive sufficient conditions for nonlinear stability. The most commonly studied system is two-dimensional Euler flow. It is shown that the set of two-dimensional Euler flows satisfying the energy-Casimir stability criteria is empty for two important cases: (i) domains having the topology of the sphere, and (ii) simply-connected bounded domains with zero net vorticity. The results apply to both the first and the second of Arnold’s stability theorems. In the spirit of Andrews’ theorem, this puts a further limitation on the applicability of the method. © 2000 American Institute of Physics.