975 results for Numerical experiments


Relevance:

70.00%

Publisher:

Abstract:

In order to assist in comparing the computational techniques used in different models, the authors propose a standardized set of one-dimensional numerical experiments that could be completed for each model. The results of these experiments, along with a simplified form of the computational representation of the advection, diffusion, pressure gradient, Coriolis, and filter terms used in the models, should be reported in the peer-reviewed literature. Specific recommendations are described in this paper.
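The kind of standardized one-dimensional experiment proposed above can be illustrated with a minimal sketch; the grid size, Courant number, initial profile, and first-order upwind discretization below are arbitrary choices for the illustration, not the paper's recommended settings.

```python
import numpy as np

def upwind_advection(q0, c, dx, dt, nsteps):
    """Advance q_t + c q_x = 0 with first-order upwind (c > 0), periodic BCs."""
    q = q0.copy()
    nu = c * dt / dx                      # Courant number; stable for nu <= 1
    for _ in range(nsteps):
        q = q - nu * (q - np.roll(q, 1))  # convex combination: monotone scheme
    return q

nx, c = 200, 1.0
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx / c                         # Courant number 0.4
nsteps = int(round(0.25 / (c * dt)))      # advect a quarter of the domain

q0 = np.exp(-200.0 * (x - 0.5) ** 2)
q = upwind_advection(q0, c, dx, dt, nsteps)

# Exact solution: the initial profile shifted by c * t on the periodic domain.
shift = (x - 0.5 - c * nsteps * dt) % 1.0
exact = np.exp(-200.0 * np.minimum(shift, 1.0 - shift) ** 2)
l2_err = np.sqrt(dx * np.sum((q - exact) ** 2))
```

Comparing the computed profile against the exact shifted one quantifies the scheme's numerical diffusion, which is the sort of standardized comparison the paper advocates.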

Relevance:

70.00%

Publisher:

Abstract:

4-Dimensional Variational Data Assimilation (4DVAR) assimilates observations through the minimisation of a least-squares objective function, which is constrained by the model flow. We refer to 4DVAR as strong-constraint 4DVAR (sc4DVAR) in this thesis as it assumes the model is perfect. Relaxing this assumption gives rise to weak-constraint 4DVAR (wc4DVAR), leading to a different minimisation problem with more degrees of freedom. We consider two wc4DVAR formulations in this thesis: the model error formulation and the state estimation formulation. The 4DVAR objective function is traditionally solved using gradient-based iterative methods. The principal method used in Numerical Weather Prediction today is the Gauss-Newton approach. This method introduces a linearised `inner-loop' objective function which, upon convergence, updates the solution of the non-linear `outer-loop' objective function. This requires many evaluations of the objective function and its gradient, which emphasises the importance of the Hessian. The eigenvalues and eigenvectors of the Hessian provide insight into the degree of convexity of the objective function, while also indicating the difficulty one may encounter while iteratively solving 4DVAR. The condition number of the Hessian is an appropriate measure of the sensitivity of the problem to input data. The condition number can also indicate the rate of convergence and solution accuracy of the minimisation algorithm. This thesis investigates the sensitivity of the solution process minimising both wc4DVAR objective functions to the internal assimilation parameters composing the problem. We gain insight into these sensitivities by bounding the condition number of the Hessians of both objective functions. We also precondition the model error objective function and show improved convergence. Using the bounds, we show that both formulations' sensitivities are related to the error variance balance, the assimilation window length, and the correlation length-scales. We further demonstrate this through numerical experiments on the condition number and data assimilation experiments using linear and non-linear chaotic toy models.
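As a toy illustration of how the Hessian's condition number reflects the balance of error variances, the sketch below builds the Hessian of a linear strong-constraint 4DVAR cost function and compares its conditioning for a tight versus a loose background-error variance. The diagonal B and R, the randomly perturbed linear model, and the choice of observing every other component are all assumptions of the sketch, not the thesis' systems.

```python
import numpy as np

def sc4dvar_hessian(M, H_obs, n_times, sigma_b, sigma_o):
    """Hessian B^{-1} + sum_k (H M^k)^T R^{-1} (H M^k), with B, R diagonal."""
    n = M.shape[0]
    A = np.eye(n) / sigma_b**2            # background term B^{-1}
    Mk = np.eye(n)                        # model propagator M^0
    for _ in range(n_times):
        G = H_obs @ Mk                    # observation operator at time k
        A += G.T @ G / sigma_o**2
        Mk = M @ Mk                       # propagate to the next obs time
    return A

n = 20
rng = np.random.default_rng(0)
M = np.eye(n) + 0.05 * rng.standard_normal((n, n))   # mildly perturbed model
H_obs = np.eye(n)[::2]                    # observe every other component

kappa_tight = np.linalg.cond(sc4dvar_hessian(M, H_obs, 5, 0.1, 1.0))
kappa_loose = np.linalg.cond(sc4dvar_hessian(M, H_obs, 5, 10.0, 1.0))
```

With a tight background variance the B^{-1} term dominates every direction and the Hessian is well conditioned; with a loose one, the unobserved directions are barely constrained and the conditioning deteriorates, which is the variance-balance effect the thesis bounds.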

Relevance:

70.00%

Publisher:

Abstract:

A simple and easily implemented method is developed to keep the vertical velocity equal to zero at the bottom and top of hydrostatic incompressible numerical models. The pressure at the top is computed by correcting its value from the previous time step so that the vertical integral of the horizontal divergence is zero in each column. In numerical experiments that exhibit small time variations of the pressure at the top, the algorithm can be simplified, saving computer time. Numerical simulations illustrate the method's effectiveness for frontogenesis induced by horizontal deformation.

Relevance:

70.00%

Publisher:

Abstract:

Liquids and gases form a vital part of nature. Many of these are complex fluids with non-Newtonian behaviour. We introduce a mathematical model describing the unsteady motion of an incompressible polymeric fluid. Each polymer molecule is treated as two beads connected by a spring. For a nonlinear spring force it is not possible to obtain a closed system of equations unless we approximate the force law. The Peterlin approximation replaces the length of the spring by the length of the average spring. Consequently, the macroscopic dumbbell-based model for dilute polymer solutions is obtained. The model consists of the conservation of mass and momentum and the time evolution of the symmetric positive definite conformation tensor, where diffusive effects are taken into account. In two space dimensions we prove global-in-time existence of weak solutions. Assuming more regular data, we show higher regularity and consequently uniqueness of the weak solution. For the Oseen-type Peterlin model we propose a linear pressure-stabilized characteristics finite element scheme. We derive the corresponding error estimates and prove, for linear finite elements, optimal first-order accuracy. The theoretical error estimates for the pressure-stabilized characteristics finite element scheme are confirmed by a series of numerical experiments.

Relevance:

70.00%

Publisher:

Abstract:

We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity, and (2) computational efficiency. In particular, we study two first-order extrapolation methods, reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but they have a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1, while avoiding near breakdowns, with the stability of SqRRE1, while avoiding stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm. They only require the basic EM step for their implementation and do not require any other auxiliary quantities such as the complete-data log-likelihood and its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
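A hedged sketch of one squared extrapolation cycle in the spirit of SQUAREM, applied to a toy EM fixed-point map: estimating the mixing weight of a two-component Gaussian mixture with known component densities. The steplength alpha = -|r|/|v| used here is one common first-order choice; the paper's SqRRE1, SqMPE1, and SqHyb1 variants differ in exactly this choice.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 700)])
f0 = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)          # known component densities
f1 = np.exp(-0.5 * (x - 4) ** 2) / np.sqrt(2 * np.pi)

def em_step(p):
    """One EM update for the mixing weight of component 1."""
    w = p * f1 / (p * f1 + (1 - p) * f0)               # E-step responsibilities
    return w.mean()                                    # M-step

def squarem_step(p):
    """One squared cycle: two EM steps, extrapolation, one stabilizing EM step."""
    p1 = em_step(p)
    p2 = em_step(p1)
    r = p1 - p
    v = (p2 - p1) - r
    if abs(v) < 1e-14:                                 # already at the fixed point
        return p2
    alpha = -abs(r) / abs(v)
    p_new = p - 2 * alpha * r + alpha**2 * v
    return em_step(float(np.clip(p_new, 1e-8, 1 - 1e-8)))

p = 0.5
for _ in range(20):
    p = squarem_step(p)
```

Note that the cycle only calls the basic EM map, as the abstract emphasizes: no gradients or Hessians of the log-likelihood are needed.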

Relevance:

70.00%

Publisher:

Abstract:

To estimate a parameter in an elliptic boundary value problem, the method of equation error chooses the value that minimizes the error in the PDE and boundary condition (the solution of the BVP having been replaced by a measurement). The estimated parameter converges to the exact value as the measured data converge to the exact value, provided Tikhonov regularization is used to control the instability inherent in the problem. The error in the estimated solution can be bounded in an appropriate quotient norm; estimates can be derived for both the underlying (infinite-dimensional) problem and a finite-element discretization that can be implemented in a practical algorithm. Numerical experiments demonstrate the efficacy and limitations of the method.
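A minimal sketch of the equation-error idea in one dimension, assuming the model problem -(a(x)u')' = f on [0,1]: the residual of the discretized PDE is linear in the unknown coefficient a, so a Tikhonov-regularized linear least-squares problem recovers it from a noisy measurement of u. The extra equation pinning a at the left endpoint is a hypothetical flux measurement added for the sketch; without it, this equation error determines a only up to a one-dimensional family.

```python
import numpy as np

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
xm = 0.5 * (x[:-1] + x[1:])               # midpoints: the unknown a(x) lives here

a_true = 1.0 + xm                         # diffusivity to be recovered
u_exact = x + 0.2 * np.sin(np.pi * x)
f = -(1.0 + 0.2 * np.pi * np.cos(np.pi * x)) \
    + 0.2 * np.pi**2 * (1.0 + x) * np.sin(np.pi * x)   # f = -(a u')' analytically

rng = np.random.default_rng(3)
u_meas = u_exact + 1e-5 * rng.standard_normal(n)       # noisy measurement of u

# The residual of -(a u')' = f at interior node i is linear in a:
#   -(a_{i+1/2} du_{i+1/2} - a_{i-1/2} du_{i-1/2}) / h = f_i
du = np.diff(u_meas) / h                  # u' at the midpoints, from the data
G = np.zeros((n - 2, n - 1))
for i in range(n - 2):                    # equation error at node i + 1
    G[i, i + 1] = -du[i + 1] / h
    G[i, i] = du[i] / h
rhs = f[1:-1]

# Hypothetical extra flux measurement pinning a at the left midpoint.
G = np.vstack([G, 10.0 * np.eye(1, n - 1)])
rhs = np.append(rhs, 10.0 * a_true[0])

lam = 1e-8                                # Tikhonov parameter, hand-picked here
a_est = np.linalg.solve(G.T @ G + lam * np.eye(n - 1), G.T @ rhs)
```

For clean data the regularization barely matters; as the noise in u grows, lam must grow with it to keep the estimate stable, which is the convergence trade-off the abstract describes.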

Relevance:

70.00%

Publisher:

Abstract:

Numerical simulation experiments give insight into the evolving energy partitioning during high-strain torsion experiments on calcite. Our numerical experiments are designed to derive a generic macroscopic grain-size-sensitive flow law capable of describing the full evolution from the transient regime to steady state. The transient regime is crucial for understanding the importance of microstructural processes that may lead to strain localization phenomena in deforming materials. This is particularly important in geological and geodynamic applications, where the phenomenon of strain localization happens outside the time frame that can be observed under controlled laboratory conditions. Our method is based on an extension of the paleowattmeter approach to the transient regime. We add an empirical hardening law using the Ramberg-Osgood approximation and assess the experiments by an evolution test function of stored over dissipated energy (the lambda factor). Parameter studies of strain hardening, the dislocation creep parameter, strain rates, temperature, and the lambda factor, as well as mesh sensitivity, are presented to explore the sensitivity of the newly derived transient/steady-state flow law. Our analysis can be seen as one of the first steps in a hybrid computational-laboratory-field modeling workflow. The analysis could be improved through independent verification by thermographic analysis in physical laboratory experiments to assess the lambda factor evolution under laboratory conditions.

Relevance:

70.00%

Publisher:

Abstract:

García et al. present a class of column generation (CG) algorithms for nonlinear programs. Its main theoretical motivation is that, under some circumstances, finite convergence can be achieved, in much the same way as for the classic simplicial decomposition method; the main practical motivation is that the class contains certain nonlinear column generation problems that can accelerate the convergence of a solution approach which generates a sequence of feasible points. This algorithm can, for example, accelerate simplicial decomposition schemes by making the subproblems nonlinear. This paper complements the theoretical study of the asymptotic and finite convergence of these methods given in [1] with an experimental study focused on their computational efficiency. Three types of numerical experiments are conducted. The first group of test problems is designed to study the parameters involved in these methods. The second group is designed to investigate the role and the computation of the prolongation of the generated columns to the relative boundary. The last group is designed to carry out a more complete investigation of the difference in computational efficiency between linear and nonlinear column generation approaches. To carry out this investigation, we consider two types of test problems: the first is the nonlinear, capacitated single-commodity network flow problem, of which several large-scale instances with varied degrees of nonlinearity and total capacity are constructed and investigated, and the second is a combined traffic assignment model.

Relevance:

70.00%

Publisher:

Abstract:

Stochastic arithmetic has been developed as a model for exact computing with imprecise data. Stochastic arithmetic provides confidence intervals for numerical results and can be implemented in any existing numerical software by redefining the types of the variables and overloading the operators on them. Here some properties of stochastic arithmetic are further investigated and applied to the computation of inner products and the solution of linear systems. Several numerical experiments are performed, showing the efficiency of the proposed approach.
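An illustrative toy version of the idea (not the actual stochastic-arithmetic implementation the paper builds on): every operation result is perturbed by a random relative error of the order of the rounding error, and repeating the computation yields samples from which a confidence interval for the exact inner product can be formed.

```python
import math
import random

EPS = 2.0 ** -52      # double-precision unit roundoff scale

def jiggle(x):
    """Apply a random relative perturbation of the order of the rounding error."""
    return x * (1.0 + EPS * random.uniform(-1.0, 1.0))

def stochastic_dot(u, v):
    """Inner product with randomly perturbed roundings."""
    s = 0.0
    for a, b in zip(u, v):
        s = jiggle(s + jiggle(a * b))
    return s

random.seed(0)
u = [1.0 / (i + 1) for i in range(1000)]
v = [(-1.0) ** i for i in range(1000)]     # alternating harmonic series

samples = [stochastic_dot(u, v) for _ in range(30)]
n = len(samples)
mean = sum(samples) / n
sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / (n - 1))
ci = (mean - 2.0 * sd / math.sqrt(n), mean + 2.0 * sd / math.sqrt(n))
```

The spread of the samples estimates how many digits of the result are reliable; a full implementation would do this by operator overloading on a dedicated stochastic type, as the abstract describes.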

Relevance:

70.00%

Publisher:

Abstract:

2010 Mathematics Subject Classification: Primary 35J70; Secondary 35J15, 35D05.

Relevance:

60.00%

Publisher:

Abstract:

Networks of Kuramoto oscillators with a positive correlation between the oscillators' frequencies and the degrees of their corresponding vertices exhibit so-called explosive synchronization behavior, which is now under intensive investigation. Here we study and discuss explosive synchronization in a situation that has not yet been considered, namely when only a part, typically a small part, of the vertices is subjected to a degree-frequency correlation. Our results show that, in order to have explosive synchronization, it suffices to have degree-frequency correlations only for the hubs, the vertices with the highest degrees. Moreover, we show that a partial degree-frequency correlation not only promotes but also allows explosive synchronization to happen in networks for which a full degree-frequency correlation would not allow it. We perform a mean-field analysis, and our conclusions are corroborated by exhaustive numerical experiments on synthetic networks and on the undirected and unweighted version of a typical benchmark biological network, namely the neural network of the worm Caenorhabditis elegans. The latter is an explicit example where a partial degree-frequency correlation leads to explosive synchronization with hysteresis, in contrast with the fully correlated case, for which no explosive synchronization is observed.
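A minimal sketch of the setup, not a reproduction of the paper's experiments: a Kuramoto star network in which only the hub's frequency is tied (here, proportionally) to its degree while the leaves draw random frequencies, integrated with an explicit Euler step. The network size, coupling values, and proportionality constant are arbitrary choices for the sketch.

```python
import numpy as np

def order_param(K, omega, A, dt=1e-3, steps=20000, seed=4):
    """Euler-integrate the Kuramoto model; return the time-averaged order
    parameter r over the second half of the run."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, len(omega))
    r_sum, n_avg = 0.0, 0
    for step in range(steps):
        # pull[i] = sum_j A_ij sin(theta_j - theta_i)
        pull = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
        theta = theta + dt * (omega + K * pull)
        if step >= steps // 2:
            r_sum += np.abs(np.exp(1j * theta).mean())
            n_avg += 1
    return r_sum / n_avg

N = 11
A = np.zeros((N, N))
A[0, 1:] = A[1:, 0] = 1.0                 # star graph: node 0 is the hub

rng = np.random.default_rng(5)
omega = rng.normal(0.0, 0.5, N)           # random frequencies for the leaves...
omega[0] = 0.3 * A[0].sum()               # ...hub frequency grows with its degree

r_low = order_param(0.05, omega, A)
r_high = order_param(5.0, omega, A)
```

Comparing r at weak and strong coupling gives only a coarse picture; demonstrating the explosive (hysteretic) transition itself would require sweeping K up and down adiabatically, as done in the paper's experiments.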

Relevance:

60.00%

Publisher:

Abstract:

In this paper, space adaptivity is introduced to control the error in the numerical solution of hyperbolic systems of conservation laws. The reference numerical scheme is a new version of the discontinuous Galerkin method, which uses an implicit diffusive term in the direction of the streamlines, for stability purposes. The decision whether to refine or to unrefine the grid in a certain location is taken according to the magnitude of wavelet coefficients, which are indicators of local smoothness of the numerical solution. Numerical solutions of the nonlinear Euler equations illustrate the efficiency of the method. © Springer 2005.
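The role of the smoothness indicator can be illustrated with a crude stand-in: second differences of the cell averages behave like detail coefficients, O(h^2) where the solution is smooth and O(1) across a discontinuity, so thresholding them marks the cells to refine. The paper's actual wavelet machinery is more elaborate; the profile and threshold below are arbitrary.

```python
import numpy as np

n = 256
x = (np.arange(n) + 0.5) / n                     # cell centres on [0, 1]
u = np.sin(2.0 * np.pi * x) + (x > 0.5)          # smooth profile plus a jump

# Second differences as a smoothness indicator: O(h^2) in smooth regions,
# O(1) across the discontinuity at x = 0.5.
d2 = u[:-2] - 2.0 * u[1:-1] + u[2:]
flag = np.abs(d2) > 0.1                          # refinement threshold
marked = np.nonzero(flag)[0] + 1                 # cell indices to refine
```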

Relevance:

60.00%

Publisher:

Abstract:

This work presents an analysis of the synoptic and dynamic conditions associated with the development of the cyclone that occurred between 12 and 19 September 2008, with the aim of highlighting differences and similarities with the environment in which the Catarina event of March 2004 was embedded. The main similarities were found in the general synoptic pattern: the occurrence of a typical dipole-type blocking pattern associated with an upper-level potential vorticity anomaly; a mid-level trough tilted westward; the presence of a column of cyclonic vorticity extending from the surface to the lower stratosphere; and, at the surface, a high-pressure system to the south of a low-pressure system. Despite the similarities in the general pattern, differences between the two events influenced the intensity of the systems: Catarina occurred at lower latitudes than the September 2008 case; the typical blocking pattern associated with the September 2008 case lasted a day and a half, whereas in the Catarina event it lasted three days; the configuration of the temperature advection in the 1000-500 hPa layer favoured the displacement of the September 2008 event toward the east/southeast, whereas for Catarina the warm-air advection east of the cyclone was practically suppressed and the geopotential height tendency became positive, patterns that prevent the eastward displacement of the system; in the September 2008 case, the reversal of the meridional potential temperature gradient on the -2.0 potential vorticity unit (PVU) surface was characterized by the incursion of an elongated region of warm air moving from the equator toward the south and of cold air moving from the south toward the equator, while in the Catarina case the reversal occurred through the isolation of a bubble of cold air to the north and a bubble of warm air to the south, which may have contributed to the longer duration of the blocking pattern, since dissipation is hindered in that configuration.
Systems such as Catarina may be rare in the South Atlantic, but the same is not true of the synoptic environment in which Catarina formed. To better understand the atmospheric process that led to the formation of Catarina, it is necessary to carry out numerical sensitivity experiments for the September 2008 case in order to verify whether the extratropical cyclone could have become a tropical cyclone.

Relevance:

60.00%

Publisher:

Abstract:

Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, where the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether in fact it should be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach which requires neither model complexity penalization nor positive probabilities on sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
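A crude stand-in for the sequential procedure (the full Bayesian significance test itself is not reproduced here): fit one-dimensional Gaussian mixtures of increasing order by EM and keep increasing m while a BIC comparison still favours the larger model. The data, the initialization, and the use of BIC in place of the FBST are all assumptions of this sketch.

```python
import numpy as np

def fit_gmm_1d(x, m, iters=300):
    """EM for a 1-D Gaussian mixture with m components; returns the log-likelihood."""
    mu = np.quantile(x, (np.arange(m) + 0.5) / m)     # deterministic quantile init
    var = np.full(m, x.var())
    pi = np.full(m, 1.0 / m)
    for _ in range(iters):
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2.0 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)  # E-step
        nk = resp.sum(axis=0)
        pi = nk / len(x)                               # M-step
        mu = (resp * x[:, None]).sum(axis=0) / nk
        # Variance floor guards against singular components.
        var = np.maximum((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk, 0.05)
    dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
           / np.sqrt(2.0 * np.pi * var)
    return np.log(dens.sum(axis=1)).sum()

def pick_m(x, m_max=5):
    """Increase m while BIC improves (a crude stand-in for the sequential test)."""
    best_m, best_bic = 0, np.inf
    for m in range(1, m_max + 1):
        k = 3 * m - 1                                  # free parameters
        bic = k * np.log(len(x)) - 2.0 * fit_gmm_1d(x, m)
        if bic >= best_bic:
            break
        best_m, best_bic = m, bic
    return best_m

rng = np.random.default_rng(6)
x = np.concatenate([rng.normal(-4.0, 1.0, 150), rng.normal(4.0, 1.0, 150)])
m_hat = pick_m(x)
```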

Relevance:

60.00%

Publisher:

Abstract:

Compartmental epidemiological models have been developed since the 1920s and successfully applied to study the propagation of infectious diseases. Moreover, owing to their structure, in the 1960s an interesting version of these models was developed to clarify some aspects of rumor propagation, on the grounds that spreading an infectious disease and disseminating information are analogous phenomena. Here, in analogy with the SIR (Susceptible-Infected-Removed) epidemiological model, the ISS (Ignorant-Spreader-Stifler) rumor spreading model is studied. Using concepts from dynamical systems theory, the stability of the equilibrium points is established according to the propagation parameters and initial conditions. Some numerical experiments are conducted in order to validate the model.
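A minimal sketch of the ISS mean-field equations under one common parameterization (spreaders become stiflers on meeting a spreader or a stifler; the rates lam and alpha below are arbitrary choices for the sketch), integrated with a fourth-order Runge-Kutta step.

```python
import numpy as np

def iss_rhs(y, lam, alpha):
    """Mean-field ISS rates for fractions (i, s, r); the components sum to zero."""
    i, s, r = y
    return np.array([
        -lam * i * s,                        # ignorants converted by spreaders
        lam * i * s - alpha * s * (s + r),   # spreaders stifle on meeting s or r
        alpha * s * (s + r),                 # stiflers accumulate
    ])

def rk4(y, h, lam, alpha):
    """One classical fourth-order Runge-Kutta step."""
    k1 = iss_rhs(y, lam, alpha)
    k2 = iss_rhs(y + 0.5 * h * k1, lam, alpha)
    k3 = iss_rhs(y + 0.5 * h * k2, lam, alpha)
    k4 = iss_rhs(y + h * k3, lam, alpha)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

lam, alpha, h = 1.0, 1.0, 0.01
y = np.array([0.99, 0.01, 0.0])              # almost everyone starts ignorant
for _ in range(5000):                        # integrate to t = 50
    y = rk4(y, h, lam, alpha)
i_inf, s_inf, r_inf = y
```

The rumor dies out (s tends to zero) and, for equal rates, the final fraction of ignorants settles near the classical value of roughly 0.2, i.e. the rumor never reaches about a fifth of the population; the equilibria with s = 0 are exactly those whose stability the paper analyzes.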