11 results for Self-Validating Numerical Methods

in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)


Relevance: 100.00%

Abstract:

This work presents a numerical method suitable for the study of the development of internal boundary layers (IBL) and their characteristics for flows over various types of coastal cliffs. The IBL is an important meteorological phenomenon in flows over step changes in surface roughness and topography. A two-dimensional flow program was used for this study. The governing equations were written in the vorticity-velocity formulation. The spatial derivatives were discretized by high-order compact finite difference schemes. The time integration was performed with a low-storage fourth-order Runge-Kutta scheme. The coastal cliff (step) was specified through an immersed boundary method. The code was validated by comparing the results with experimental and observational data. The numerical simulations were carried out for different coastal cliff heights and inclinations. The results show that the predominant factors for the height of the IBL and its characteristics are the upstream velocity and the height and form (inclination) of the coastal cliff. Copyright (C) 2010 John Wiley & Sons, Ltd.
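
As an illustration of the kind of spatial discretization mentioned above, the sketch below implements a fourth-order Padé-type compact finite-difference first derivative on a uniform grid with third-order one-sided closures at the ends. It is a generic, minimal Python example, not the authors' two-dimensional vorticity-velocity solver; the grid, the test function, and the dense solve are illustrative choices.

```python
import numpy as np

def compact_first_derivative(f, h):
    """Fourth-order Pade-type compact first derivative on a uniform grid.

    Interior:  (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4 h)
    Ends:      third-order one-sided (Lele-style) closures.
    """
    n = f.size
    A = np.zeros((n, n))
    b = np.zeros(n)

    # interior rows
    for i in range(1, n - 1):
        A[i, i - 1:i + 2] = [0.25, 1.0, 0.25]
        b[i] = 3.0 * (f[i + 1] - f[i - 1]) / (4.0 * h)

    # one-sided boundary closures
    A[0, 0:2] = [1.0, 2.0]
    b[0] = (-2.5 * f[0] + 2.0 * f[1] + 0.5 * f[2]) / h
    A[-1, -2:] = [2.0, 1.0]
    b[-1] = (2.5 * f[-1] - 2.0 * f[-2] - 0.5 * f[-3]) / h

    # dense solve for clarity; a tridiagonal solver would be used in practice
    return np.linalg.solve(A, b)

# quick check against an analytic derivative
x = np.linspace(0.0, 1.0, 41)
df = compact_first_derivative(np.sin(2 * np.pi * x), x[1] - x[0])
print(np.max(np.abs(df - 2 * np.pi * np.cos(2 * np.pi * x))))
```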

Relevance: 100.00%

Abstract:

Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may influence the follow-up of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within 1 per cent. Even for a non-precessing jet, our optimization method successfully identified the lack of precession.
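
The cross-entropy machinery itself is generic; a minimal sketch of the continuous cross-entropy loop is shown below, applied to a toy damped-sinusoid fit that stands in for the precession model (which is not reproduced here). The sample size, elite fraction, smoothing factor, and the toy model are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_entropy_minimize(objective, mu, sigma, n_samples=200, n_elite=20,
                           n_iter=60, smoothing=0.7):
    """Generic cross-entropy method for continuous minimization.

    mu, sigma: initial mean and spread of the Gaussian sampling distribution
    (one entry per free parameter).  Returns the final mean as the estimate.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        scores = np.array([objective(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]            # best candidates
        mu = smoothing * elite.mean(axis=0) + (1 - smoothing) * mu
        sigma = smoothing * elite.std(axis=0) + (1 - smoothing) * sigma
    return mu

# toy example: recover the parameters of a damped sinusoid from noisy data,
# standing in for the precession-model fit described in the abstract
t = np.linspace(0.0, 10.0, 200)
true = np.array([1.5, 2.0, 0.3])                     # amplitude, frequency, damping
data = true[0] * np.sin(true[1] * t) * np.exp(-true[2] * t) + rng.normal(0, 0.02, t.size)

def misfit(p):
    model = p[0] * np.sin(p[1] * t) * np.exp(-p[2] * t)
    return np.sum((model - data) ** 2)

print(cross_entropy_minimize(misfit, mu=[1.0, 1.0, 1.0], sigma=[2.0, 2.0, 2.0]))
```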

Relevance: 100.00%

Abstract:

We present a new technique for obtaining model fittings to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing N_s elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two main benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. Those images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique is capable of recovering the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology is also able to determine quantitatively the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized). As in the case of any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
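
A minimal sketch of the forward model and merit function described above is given below: a model image built as a sum of elliptical Gaussian components and a squared-difference performance function. The parameter conventions (width, eccentricity, angle) are assumptions made for illustration, and the cross-entropy optimizer itself is omitted.

```python
import numpy as np

def elliptical_gaussian(ny, nx, x0, y0, peak, a, ecc, theta):
    """One elliptical Gaussian component on an (ny, nx) pixel grid.

    x0, y0 : peak position (pixels)      peak : peak intensity
    a      : major-axis width (pixels)   ecc  : eccentricity (0 <= ecc < 1)
    theta  : major-axis orientation angle (radians)
    These conventions are illustrative, not those of any specific package.
    """
    b = a * np.sqrt(1.0 - ecc**2)                    # minor-axis width
    y, x = np.mgrid[0:ny, 0:nx]
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return peak * np.exp(-0.5 * ((xr / a) ** 2 + (yr / b) ** 2))

def model_image(params, shape):
    """Sum of components; params is a list of 6-tuples as above."""
    return sum(elliptical_gaussian(*shape, *p) for p in params)

def performance(params, observed):
    """Squared-difference merit function to be minimized by the optimizer."""
    return np.sum((model_image(params, observed.shape) - observed) ** 2)

# three-component benchmark jet, loosely following the simpler test case above
truth = [(30, 32, 1.0, 4.0, 0.3, 0.2), (48, 40, 0.6, 5.0, 0.6, 0.8), (70, 50, 0.3, 6.0, 0.5, 1.2)]
obs = model_image(truth, (96, 96)) + np.random.default_rng(1).normal(0, 0.01, (96, 96))
print(performance(truth, obs))
```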

Relevance: 100.00%

Abstract:

In this paper we describe and evaluate a geometric mass-preserving redistancing procedure for the level set function on general structured grids. The proposed algorithm is adapted from a recent finite element-based method and preserves the mass by means of a localized mass correction. A salient feature of the scheme is the absence of adjustable parameters. The algorithm is tested in two and three spatial dimensions and compared with the widely used partial differential equation (PDE)-based redistancing method on structured Cartesian grids. Through the use of quantitative error measures of interest in level set methods, we show that the overall performance of the proposed geometric procedure is better than that of PDE-based reinitialization schemes, since it is more robust with comparable accuracy. We also show that the algorithm is well suited to the highly stretched curvilinear grids used in CFD simulations. Copyright (C) 2010 John Wiley & Sons, Ltd.
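
For orientation, the sketch below redistances a two-dimensional level set function geometrically (via a Euclidean distance transform) and then restores the enclosed area with a constant shift found by bisection. This global correction is a cruder stand-in for the localized mass correction described in the abstract; the grid size and the test interface are illustrative.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def redistance(phi, dx):
    """Rebuild phi as an approximate signed distance to its zero contour."""
    inside = phi > 0.0
    d_out = distance_transform_edt(~inside) * dx
    d_in = distance_transform_edt(inside) * dx
    return np.where(inside, d_in, -d_out)

def redistance_mass_preserving(phi, dx, tol=1e-10):
    """Redistance, then shift by a constant so the enclosed area is preserved.

    NOTE: a *global* correction for illustration only; the method in the
    abstract applies a localized correction instead.
    """
    target = np.sum(phi > 0.0) * dx * dx             # area before redistancing
    d = redistance(phi, dx)
    lo, hi = -5 * dx, 5 * dx                         # bisection interval for the shift c
    for _ in range(100):
        c = 0.5 * (lo + hi)
        area = np.sum(d + c > 0.0) * dx * dx
        if abs(area - target) < tol:
            break
        lo, hi = (c, hi) if area < target else (lo, c)
    return d + c

# circle of radius 0.3 represented by a distorted (non-distance) level set function
n = 128; dx = 1.0 / n
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
phi = (0.3 - np.sqrt((x - 0.5)**2 + (y - 0.5)**2)) ** 3
phi_new = redistance_mass_preserving(phi, dx)
print(np.sum(phi > 0) * dx * dx, np.sum(phi_new > 0) * dx * dx)  # areas should match
```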

Relevance: 100.00%

Abstract:

In this article we address decomposition strategies especially tailored to perform strong coupling of dimensionally heterogeneous models, under the hypothesis that one wants to solve each submodel separately and implement the interaction between subdomains by boundary conditions alone. The novel methodology takes full advantage of the small number of interface unknowns in this kind of problem. Existing algorithms can be viewed as variants of the "natural" staggered algorithm in which each domain transfers function values to the other and receives fluxes (or forces), and vice versa. This natural algorithm is known as Dirichlet-to-Neumann in the domain decomposition literature. Essentially, we propose a framework in which this algorithm is equivalent to applying Gauss-Seidel iterations to a suitably defined (linear or nonlinear) system of equations. It is then immediate to switch to other iterative solvers such as GMRES or other Krylov-based methods, which we assess through numerical experiments showing the significant gain that can be achieved. Indeed, the benefit is that an extremely flexible, automatic coupling strategy can be developed, which in addition leads to iterative procedures that are parameter-free and rapidly converging. Further, in linear problems they have the finite termination property. Copyright (C) 2009 John Wiley & Sons, Ltd.
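
A toy one-dimensional version of the Dirichlet-to-Neumann staggered iteration is sketched below for a Poisson problem split into two subdomains, with a simple relaxation standing in for the Gauss-Seidel/Krylov acceleration discussed above. The discretization, relaxation factor, and tolerance are illustrative choices, not the authors' setup.

```python
import numpy as np

# -u'' = f on (0,1), u(0) = u(1) = 0, split at the interface x = 0.5.
# The left subdomain receives the interface value lam (Dirichlet) and returns a
# flux; the right subdomain receives the flux (Neumann) and returns an updated
# interface value.  Exact solution: u = sin(pi x), so u(0.5) = 1.
f = lambda x: np.pi**2 * np.sin(np.pi * x)
n = 50                          # cells per subdomain
h = 0.5 / n

def solve_left(lam):
    """Dirichlet problem on (0, 0.5); return u'(0.5-) by a one-sided difference."""
    x = np.linspace(0.0, 0.5, n + 1)
    A = 2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)
    b = h**2 * f(x[1:-1]); b[-1] += lam
    u = np.linalg.solve(A, b)
    return (3.0 * lam - 4.0 * u[-1] + u[-2]) / (2.0 * h)

def solve_right(du):
    """Neumann (at x=0.5) / Dirichlet (at x=1) problem; return u(0.5+)."""
    y = np.linspace(0.5, 1.0, n + 1)
    A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A[0, 0], A[0, 1] = 2.0, -2.0              # ghost-node Neumann row
    b = h**2 * f(y[:-1]); b[0] -= 2.0 * h * du
    return np.linalg.solve(A, b)[0]

lam, omega = 0.0, 0.5           # the plain staggered sweep (omega = 1) oscillates here
for k in range(100):
    lam_new = solve_right(solve_left(lam))
    if abs(lam_new - lam) < 1e-12:
        break
    lam = (1.0 - omega) * lam + omega * lam_new

print(k, lam, "(exact interface value: 1.0)")
```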

Relevance: 100.00%

Abstract:

A numerical method to approximate partial differential equations on meshes that do not conform to the domain boundaries is introduced. The proposed method is conceptually simple and free of user-defined parameters. Starting with a conforming finite element mesh, the key ingredient is to switch those elements intersected by the Dirichlet boundary to a discontinuous Galerkin approximation and impose the Dirichlet boundary conditions strongly. By virtue of relaxing the continuity constraint at those elements, boundary locking is avoided and optimal-order convergence is achieved. This is shown through numerical experiments on reaction-diffusion problems. Copyright (c) 2008 John Wiley & Sons, Ltd.
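
The cut-element discontinuous Galerkin construction itself is not reproduced here. Purely as a much simpler illustration of strongly imposing a Dirichlet value at a boundary that does not coincide with mesh nodes, the sketch below uses a one-dimensional Shortley-Weller-type finite-difference stencil, which is a different (and far more limited) technique; the problem data are illustrative.

```python
import numpy as np

# -u'' = f on (0, b), u(0) = 0, u(b) = g, where b falls inside the last grid
# cell.  The cut cell gets an unequal-spacing (Shortley-Weller-type) stencil so
# that the Dirichlet value g enters the system strongly.
f = lambda x: np.pi**2 * np.sin(np.pi * x)       # exact solution u = sin(pi x)
b_phys = 0.773                                   # physical boundary, off the grid
g = np.sin(np.pi * b_phys)

h = 0.02
m = int(b_phys / h)                              # last grid node strictly inside (0, b)
x = h * np.arange(1, m + 1)                      # unknowns u(x_1) .. u(x_m)
theta = (b_phys - x[-1]) / h                     # fractional cut-cell size, 0 < theta <= 1

A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
rhs = f(x)

# modified stencil in the cut cell: spacing h to the left, theta*h to the boundary
A[-1, -1] = 2.0 / (h * theta * h)
A[-1, -2] = -2.0 / (h * (h + theta * h))
rhs[-1] += 2.0 * g / (theta * h * (h + theta * h))

u = np.linalg.solve(A, rhs)
print(np.max(np.abs(u - np.sin(np.pi * x))))     # small discretization error
```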

Relevance: 100.00%

Abstract:

Purpose - The purpose of this paper is to develop a novel unstructured simulation approach for injection molding processes described by the Hele-Shaw model. Design/methodology/approach - The scheme involves dual dynamic meshes with active and inactive cells determined from an initial background pointset. The quasi-static pressure solution in each timestep for this evolving unstructured mesh system is approximated using a control volume finite element method formulation coupled to a corresponding modified volume-of-fluid method. The flow is considered to be isothermal and non-Newtonian. Findings - Supporting numerical tests and performance studies for polystyrene described by the Carreau, Cross, Ellis and power-law fluid models are conducted. Results for the present method are shown to be comparable to those from other methods for both Newtonian and polystyrene fluids injected into different mold geometries. Research limitations/implications - With respect to the methodology, the background pointset induces a mesh that is dynamically reconstructed here, and there are a number of efficiency issues and improvements that would be relevant to industrial applications. For instance, one can use the pointset to construct special bases and invoke a so-called "meshless" scheme using the basis. This would require some interesting strategies to deal with the dynamic point enrichment of the moving front that could benefit from the present front treatment strategy. There are also issues related to mass conservation and fill-time errors that might be addressed by introducing suitable projections. The general question of "rate of convergence" of these schemes requires analysis. Numerical results here suggest first-order accuracy and are consistent with the approximations made, but theoretical results are not available yet for these methods. Originality/value - This novel unstructured simulation approach involves dual meshes with active and inactive cells determined from an initial background pointset: local active dual patches are constructed "on-the-fly" for each "active point" to form a dynamic virtual mesh of active elements that evolves with the moving interface.
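
The generalized Newtonian viscosity laws named in the findings are standard; a sketch of their usual functional forms is given below. The parameter values are placeholders for illustration, not fitted polystyrene data, and the Hele-Shaw pressure solver itself is not reproduced.

```python
import numpy as np

# Shear-rate-dependent viscosity models of the kind listed in the abstract.
# All parameter values below are illustrative placeholders (Pa.s, s, Pa).

def power_law(gdot, K=1.0e4, n=0.3):
    """eta = K * gdot**(n - 1)"""
    return K * gdot ** (n - 1.0)

def carreau(gdot, eta0=1.0e4, eta_inf=0.0, lam=1.0, n=0.3):
    """eta = eta_inf + (eta0 - eta_inf) * (1 + (lam*gdot)**2) ** ((n - 1)/2)"""
    return eta_inf + (eta0 - eta_inf) * (1.0 + (lam * gdot) ** 2) ** ((n - 1.0) / 2.0)

def cross(gdot, eta0=1.0e4, eta_inf=0.0, lam=1.0, m=0.7):
    """eta = eta_inf + (eta0 - eta_inf) / (1 + (lam*gdot)**m)"""
    return eta_inf + (eta0 - eta_inf) / (1.0 + (lam * gdot) ** m)

def ellis(tau, eta0=1.0e4, tau_half=1.0e3, alpha=2.5):
    """eta = eta0 / (1 + (tau/tau_half)**(alpha - 1)); a function of shear stress."""
    return eta0 / (1.0 + (tau / tau_half) ** (alpha - 1.0))

gdot = np.logspace(-2, 3, 6)          # shear rates, 1/s
print(carreau(gdot))
print(cross(gdot))
```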

Relevance: 100.00%

Abstract:

We consider bipartitions of one-dimensional extended systems whose probability distribution functions describe stationary states of stochastic models. We define estimators of the information shared between the two subsystems. If the correlation length is finite, the estimators stay finite for large system sizes. If the correlation length diverges, so do the estimators. The definition of the estimators is inspired by information theory. We look at several models and compare the behaviors of the estimators in the finite-size scaling limit. Analytical and numerical methods as well as Monte Carlo simulations are used. We show how the finite-size scaling functions change for various phase transitions, including the case where one has conformal invariance.
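
As one concrete estimator of the shared information, the sketch below computes the mutual information between the two halves of a small chain directly from a joint stationary distribution. The toy distribution (independent sites with a single coupling across the cut) is an illustrative assumption, not one of the stochastic models studied in the paper.

```python
import numpy as np
from itertools import product

def mutual_information(p_joint, L_left):
    """Mutual information between the left L_left sites and the rest.

    p_joint: stationary probability of each configuration of an L-site chain
    with two states per site, stored as an array of shape (2,) * L.
    """
    axes_left = tuple(range(L_left))
    axes_right = tuple(range(L_left, p_joint.ndim))
    p_a = p_joint.sum(axis=axes_right)        # marginal of the left block
    p_b = p_joint.sum(axis=axes_left)         # marginal of the right block

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    return entropy(p_a) + entropy(p_b) - entropy(p_joint)

# toy stationary state: independent sites except for a nearest-neighbour
# coupling across the cut, so the shared information is finite and small
L = 8
w = np.ones((2,) * L)
for config in product((0, 1), repeat=L):
    if config[L // 2 - 1] == config[L // 2]:  # reward agreement across the cut
        w[config] *= 2.0
p = w / w.sum()
print(mutual_information(p, L // 2))
```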

Relevance: 100.00%

Abstract:

The interest in attractive Bose-Einstein condensates arises due to the chemical instabilities generated when the number of trapped atoms is above a critical number. In this case, the recombination process promotes the collapse of the cloud. This behavior is normally geometry dependent. Within the context of the mean-field approximation, the system is described by the Gross-Pitaevskii equation. We consider an attractive Bose-Einstein condensate confined in a nonspherical trap and investigate its solutions numerically and analytically, using controlled perturbation and self-similar approximation methods. This approximation is valid over the whole interval of the negative coupling parameter, allowing interpolation between the weak-coupling and strong-coupling limits. Using the self-similar approximation methods, we derived accurate analytical formulas. The resulting expressions are discussed for several different traps and may contribute to the understanding of experimental observations.
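
The analysis above relies on controlled perturbation theory and self-similar approximants; purely as a numerical illustration of working with the Gross-Pitaevskii equation, the sketch below finds a ground state of a one-dimensional reduction with an attractive coupling by imaginary-time split-step propagation. The 1D reduction, grid, coupling value, and step size are assumptions, and this setup does not capture the three-dimensional collapse discussed above.

```python
import numpy as np

# Imaginary-time split-step solver for a 1D Gross-Pitaevskii equation
#   i psi_t = [-(1/2) d^2/dx^2 + (1/2) x^2 + g |psi|^2] psi   (trap units).
n, L = 512, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
g = -1.0                                   # attractive coupling (illustrative value)
dt = 1.0e-3                                # imaginary-time step

psi = np.exp(-x**2 / 2.0)                  # Gaussian initial guess
psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)

for _ in range(20000):
    V = 0.5 * x**2 + g * np.abs(psi)**2
    psi = psi * np.exp(-dt * V)                                     # potential step
    psi = np.fft.ifft(np.exp(-dt * 0.5 * k**2) * np.fft.fft(psi))   # kinetic step
    psi = psi / np.sqrt(np.sum(np.abs(psi)**2) * dx)                # restore unit norm

print("ground-state peak density:", np.max(np.abs(psi)**2))
```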

Relevance: 100.00%

Abstract:

Using digitized images of the three-dimensional, branching structures of root systems of bean seedlings, together with analytical and numerical methods that map a common susceptible-infected-recovered (SIR) epidemiological model onto the bond percolation problem, we show how the spatially correlated branching structures of plant roots affect transmission efficiencies, and hence the invasion criterion, for a soil-borne pathogen as it spreads through ensembles of morphologically complex hosts. We conclude that the inherent heterogeneities in transmissibility, arising from correlations in the degrees of overlap between neighbouring plants, render a population of root systems less susceptible to epidemic invasion than a corresponding homogeneous system. Several components of morphological complexity are analysed that contribute to disorder and heterogeneity in the transmissibility of infection. Anisotropy in root shape is shown to increase resilience to epidemic invasion, while increasing the degree of branching enhances the spread of epidemics in the population of roots. Some extensions of the methods to other epidemiological systems are discussed.
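
The SIR-to-bond-percolation mapping can be sketched as follows: for Markovian transmission and recovery rates beta and gamma, the per-contact transmissibility is T = beta / (beta + gamma), and epidemic invasion corresponds to percolation at bond probability T. The example below uses a uniform square lattice as a stand-in for the heterogeneous root-overlap network analysed in the paper; the rates and lattice size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def find(parent, i):                       # union-find with path halving
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def largest_cluster_fraction(n, T):
    """Bond percolation on an n x n square lattice with bond probability T."""
    parent = list(range(n * n))
    for r in range(n):
        for c in range(n):
            i = r * n + c
            if c + 1 < n and rng.random() < T:          # horizontal bond
                parent[find(parent, i)] = find(parent, i + 1)
            if r + 1 < n and rng.random() < T:          # vertical bond
                parent[find(parent, i)] = find(parent, i + n)
    sizes = np.bincount([find(parent, i) for i in range(n * n)])
    return sizes.max() / (n * n)

beta, gamma = 1.0, 1.0                     # illustrative SIR rates
T = beta / (beta + gamma)                  # per-contact transmissibility
print(T, largest_cluster_fraction(200, T)) # T = 0.5 is the square-lattice bond threshold
```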

Relevance: 40.00%

Abstract:

By means of self-consistent three-dimensional magnetohydrodynamic (MHD) numerical simulations, we analyze magnetized solar-like stellar winds and their dependence on the plasma-beta parameter (the ratio between thermal and magnetic energy densities). This is the first study to perform such an analysis by solving the fully ideal three-dimensional MHD equations. We adopt in our simulations a heating parameter described by gamma, which is responsible for the thermal acceleration of the wind. We analyze winds with polar magnetic field intensities ranging from 1 to 20 G. We show that the wind structure presents characteristics that are similar to the solar coronal wind. The steady-state magnetic field topology for all cases is similar, presenting a helmet-streamer-type configuration, with zones of closed and open field lines coexisting. Higher magnetic field intensities lead to faster and hotter winds. For the maximum simulated magnetic intensity of 20 G and solar coronal base density, the wind velocity reaches values of ~1000 km s^-1 at r ~ 20 r_0 and a maximum temperature of ~6 x 10^6 K at r ~ 6 r_0. The increase of the field intensity generates a larger "dead zone" in the wind, i.e., the closed loops that inhibit matter from escaping from latitudes lower than ~45 degrees extend farther away from the star. The Lorentz force leads naturally to a latitude-dependent wind. We show that by increasing the density while maintaining B_0 = 20 G, the system returns to slower and cooler winds. For a fixed gamma, we show that the key parameter in determining the wind velocity profile is the beta parameter at the coronal base. Therefore, there is a group of magnetized flows that would present the same terminal velocity despite different thermal and magnetic energy densities, as long as the plasma-beta parameter is the same. This degeneracy, however, can be removed if we compare other physical parameters of the wind, such as the mass-loss rate. We analyze the influence of gamma on our results and show that it is also important in determining the wind structure.
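
The plasma-beta parameter that organizes these results can be evaluated directly; the sketch below uses the standard pressure-ratio form beta = n k_B T / (B^2 / (2 mu_0)) in SI units. The density, temperature, and field values are round, solar-like numbers used for illustration, not the simulation inputs.

```python
# Plasma beta at the coronal base: ratio of thermal to magnetic pressure.
K_B = 1.380649e-23                  # Boltzmann constant, J/K
MU_0 = 4.0e-7 * 3.141592653589793   # vacuum permeability, T m / A

def plasma_beta(n_m3, T_K, B_T):
    """n_m3: particle number density [m^-3], T_K: temperature [K], B_T: field [T]."""
    p_thermal = n_m3 * K_B * T_K
    p_magnetic = B_T**2 / (2.0 * MU_0)
    return p_thermal / p_magnetic

# illustrative values: base density 1e15 m^-3, T = 1e6 K, polar fields of 1 G and 20 G
for B_gauss in (1.0, 20.0):
    print(B_gauss, "G ->", plasma_beta(1.0e15, 1.0e6, B_gauss * 1.0e-4))
```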