945 results for methods : numerical
Abstract:
In this article we describe recent progress on the design, analysis and implementation of hybrid numerical-asymptotic boundary integral methods for boundary value problems for the Helmholtz equation that model time-harmonic acoustic wave scattering in domains exterior to impenetrable obstacles. These hybrid methods combine conventional piecewise polynomial approximations with high-frequency asymptotics to build basis functions suitable for representing the oscillatory solutions. They have the potential to solve scattering problems accurately in a computation time that is (almost) independent of frequency, and this has been realized for many model problems. The design and analysis of this class of methods requires new results on the analysis and numerical analysis of highly oscillatory boundary integral operators and on the high-frequency asymptotics of scattering problems. The implementation requires the development of appropriate quadrature rules for highly oscillatory integrals. This article contains a historical account of the development of this currently very active field, a detailed account of recent progress and, in addition, a number of original research results on the design, analysis and implementation of these methods.
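To make the idea concrete, in the hybrid numerical-asymptotic literature the unknown boundary data (for example the normal derivative of the total field on the boundary \Gamma) is typically sought in an ansatz of the form

    \frac{\partial u}{\partial n}(x) \approx \sum_{m=1}^{M} V_m(x)\, e^{\mathrm{i} k \psi_m(x)}, \qquad x \in \Gamma,

where the phases \psi_m are supplied by high-frequency asymptotics (geometrical optics and its diffraction corrections) and only the slowly varying amplitudes V_m are approximated by piecewise polynomials, which is why the number of degrees of freedom can remain (almost) independent of the wavenumber k. The notation here is generic: the number of terms M and the phases \psi_m depend on the scattering geometry.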
Abstract:
The high computational cost of calculating the radiative heating rates in numerical weather prediction (NWP) and climate models requires that calculations are made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the feedback that would occur. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly dependent on changes in cloud (the optically thin terms) and those that are not (optically thick). The small number of optically thin terms can then be calculated more often to capture changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme uses a simple radiative transfer calculation using only one or two monochromatic calculations representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the current scheme employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme to correct the surface fluxes for the change in solar zenith angle between radiation calculations.
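As a hedged illustration of the ‘split time-stepping’ idea (not the Met Office implementation; all names and numbers below are hypothetical): the correlated-k heating rate is a weighted sum of independent quasi-monochromatic terms, so the few cloud-sensitive optically thin terms can be refreshed every model step while the optically thick remainder is reused from the last full radiation call.

    # Toy sketch of split time-stepping for a correlated-k sum.
    def monochromatic_hr(k_abs, cloud_tau):
        """Hypothetical quasi-monochromatic heating-rate term (arbitrary units)."""
        return -(k_abs + cloud_tau) * 0.01

    def split_timestep_hr(thin_terms, thick_terms, cloud_tau, cache, full_call):
        if full_call:
            # Infrequent full radiation call: evaluate and cache the thick terms.
            cache["thick"] = sum(w * monochromatic_hr(k, cloud_tau)
                                 for k, w in thick_terms)
        # Cheap per-step update: only the thin terms see the new cloud field.
        thin = sum(w * monochromatic_hr(k, cloud_tau) for k, w in thin_terms)
        return cache["thick"] + thin

    thin_terms = [(0.01, 0.3), (0.05, 0.2)]    # (absorption, quadrature weight)
    thick_terms = [(5.0, 0.3), (50.0, 0.2)]
    cache = {}
    hr0 = split_timestep_hr(thin_terms, thick_terms, 0.4, cache, full_call=True)
    hr1 = split_timestep_hr(thin_terms, thick_terms, 0.6, cache, full_call=False)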
Abstract:
The development of projects related to the performance of various crops has been continuously refined through the incorporation of mathematical models, making it indispensable to use increasingly consistent equations that allow prediction and a closer approximation of real behavior, reducing the error of the resulting estimates. Among the unit operations that demand further study are those related to crop growth, characterized by the ideal temperature for dry matter accumulation. Given the wide use of mathematical methods for representing, analyzing, and estimating degree-days, together with the great importance of the sugarcane crop to the Brazilian economy, an evaluation was carried out of the mathematical models commonly used and of numerical integration methods for estimating degree-day availability for this crop in the region of Botucatu, State of São Paulo. The integration models, with a 6-hour discretization, produced satisfactory results in the estimation of degree-days. The traditional methodologies performed satisfactorily in estimating degree-days based on the hourly temperature curve for each day and for groupings of three, seven, 15, and 30 days. By the numerical integration method, the region of Botucatu, State of São Paulo, showed a mean annual thermal availability of 1,070.6 degree-days (GD) for the sugarcane crop.
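A minimal sketch of the degree-day integration discussed above, assuming the trapezoidal rule, the 6-hour discretization mentioned in the abstract, and an illustrative base temperature of 18 °C for sugarcane (the actual models and threshold used in the study may differ):

    import math

    def degree_days(temps_hourly, t_base=18.0, step_h=6):
        """Trapezoidal integration of max(T - T_base, 0) sampled every step_h hours."""
        excess = [max(t - t_base, 0.0) for t in temps_hourly[::step_h]]
        degree_hours = sum((a + b) / 2.0 * step_h
                           for a, b in zip(excess, excess[1:]))
        return degree_hours / 24.0  # degree-hours -> degree-days

    # e.g. one day of a sinusoidal diurnal cycle sampled hourly (25 points)
    temps = [22.0 + 6.0 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(25)]
    gd_one_day = degree_days(temps)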
Abstract:
This is an introductory course on the Lanczos Method and Density Matrix Renormalization Group (DMRG) algorithms, two of the leading numerical techniques applied in studies of low-dimensional quantum models. The idea of studying the models on finite-size clusters in order to extract their physical properties is briefly discussed. The important role played by the model symmetries is also examined. Special emphasis is given to the DMRG.
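As a concrete illustration of the Lanczos Method (a generic NumPy sketch, not code from the course; real implementations exploit sparse Hamiltonians and symmetry sectors): the algorithm projects a Hermitian matrix H onto a small tridiagonal matrix whose extreme eigenvalues converge quickly to, e.g., the ground-state energy of a finite cluster.

    import numpy as np

    def lanczos_ground_state(H, m=40, seed=0):
        """Estimate the lowest eigenvalue of a real symmetric matrix H."""
        rng = np.random.default_rng(seed)
        n = H.shape[0]
        v = rng.standard_normal(n)
        v /= np.linalg.norm(v)
        v_prev, beta = np.zeros(n), 0.0
        alphas, betas = [], []
        for _ in range(m):                      # three-term recurrence
            w = H @ v - beta * v_prev
            alpha = v @ w
            w -= alpha * v
            beta = np.linalg.norm(w)
            alphas.append(alpha)
            betas.append(beta)
            if beta < 1e-12:                    # invariant subspace found
                break
            v_prev, v = v, w / beta
        T = np.diag(alphas) + np.diag(betas[:-1], 1) + np.diag(betas[:-1], -1)
        return np.linalg.eigvalsh(T)[0]

    # toy usage on a random symmetric matrix standing in for a cluster Hamiltonian
    H = np.random.default_rng(1).standard_normal((200, 200))
    e0 = lanczos_ground_state((H + H.T) / 2)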
Abstract:
This paper presents an analysis of some numerical integration methods that can be used in electromagnetic transient simulations. Among the existing methods, we analyzed the trapezoidal integration method (or Heun's formula), Simpson's rule, and the Runge-Kutta method. These methods were used in simulations of electromagnetic transients in power systems resulting from switching operations and maneuvers that occur in transmission lines. Characteristics of the integration methods such as accuracy, computation time, and robustness were analyzed.
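A hedged sketch comparing two of the rules discussed (component values and step size are illustrative, not from the paper): energizing a series RLC branch, integrated with the implicit trapezoidal rule, the standard choice in EMTP-type transient programs, and with classical fourth-order Runge-Kutta.

    import numpy as np

    R, L, C, V = 1.0, 1e-3, 1e-6, 100.0          # ohm, henry, farad, volt (assumed)
    A = np.array([[-R / L, -1.0 / L],
                  [1.0 / C, 0.0]])               # state x = [branch current, v_C]
    b = np.array([V / L, 0.0])
    f = lambda x: A @ x + b

    def trap_step(x, h):
        # implicit trapezoidal: (I - h/2 A) x_new = (I + h/2 A) x + h b
        I = np.eye(2)
        return np.linalg.solve(I - h / 2 * A, (I + h / 2 * A) @ x + h * b)

    def rk4_step(x, h):
        k1 = f(x); k2 = f(x + h / 2 * k1)
        k3 = f(x + h / 2 * k2); k4 = f(x + h * k3)
        return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    h = 1e-6                                     # 1 microsecond time step
    x_tr = x_rk = np.zeros(2)
    for _ in range(1000):                        # simulate 1 ms after switching
        x_tr, x_rk = trap_step(x_tr, h), rk4_step(x_rk, h)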
Abstract:
Includes bibliography
Abstract:
In this work, different methods to estimate the value of thin film residual stresses using instrumented indentation data were analyzed. This study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the value of hardness calculated via the Oliver and Pharr method. The analysis of these methods was centered on an axisymmetric two-dimensional finite element model, which was developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, film strain hardening exponent, film yield strength, and film Poisson's ratio. Different ratios of maximum penetration depth h_max over film thickness t were also considered, including h_max/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input in the numerical simulations. In general, results indicate that the difference each method shows with respect to the input values depends on the conditions studied. The method by Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive. The method provided by Wang et al. has shown less dependence on h_max/t than the others.
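For reference, in the Oliver and Pharr method cited above the hardness is extracted from the load-displacement curve as

    H = \frac{P_{\max}}{A_c(h_c)}, \qquad h_c = h_{\max} - \varepsilon\, \frac{P_{\max}}{S},

where P_max is the peak load, S = dP/dh the contact stiffness at the onset of unloading, \varepsilon a tip-geometry constant (about 0.75 for a Berkovich indenter), and A_c the projected contact-area function; the new approach mentioned in the abstract exploits the sensitivity of this hardness estimate to the film residual stress.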
Abstract:
This thesis is based on five papers addressing variance reduction in different ways. The papers have in common that they all present new numerical methods. Paper I investigates quantitative structure-retention relationships from an image processing perspective, using an artificial neural network to preprocess three-dimensional structural descriptions of the studied steroid molecules. Paper II presents a new method for computing free energies. Free energy is the quantity that determines chemical equilibria and partition coefficients. The proposed method may be used for estimating, e.g., chromatographic retention without performing experiments. Two papers (III and IV) deal with correcting deviations from bilinearity by so-called peak alignment. Bilinearity is a theoretical assumption about the distribution of instrumental data that is often violated by measured data. Deviations from bilinearity lead to increased variance, both in the data and in inferences from the data, unless invariance to the deviations is built into the model, e.g., by the use of the method proposed in paper III and extended in paper IV. Paper V addresses a generic problem in classification, namely how to measure the goodness of different data representations so that the best classifier may be constructed. Variance reduction is one of the pillars on which analytical chemistry rests. This thesis considers two aspects of variance reduction: before and after experiments are performed. Before experimenting, theoretical predictions of experimental outcomes may be used to direct which experiments to perform, and how to perform them (papers I and II). After experiments are performed, the variance of inferences from the measured data is affected by the method of data analysis (papers III-V).
Abstract:
This document describes most of the work carried out during a six-month study period at the International Centre for Geohazards (ICG) in Oslo. Following the guideline set by the title, several aspects of the numerical modeling of slopes were addressed, such as the influence of boundary conditions and model proportions, the back-analysis of sliding events, and the application of one-dimensional stability analyses. Building simple models with the finite element program PLAXIS (Brinkgreve et al., 2008) made it possible to analyze the performance of the numerical models with respect to the influence of boundary conditions, comparing them with a theoretical calculation of the amplification factor. This series of tests allowed some guidelines to be established for building models with a good level of reliability. Some case histories, in particular that of Las Colinas (El Salvador), were modeled in order to apply and verify the results obtained with the simple models mentioned above. In addition, sensitivity analyses were performed with respect to mesh size and to the damping and elasticity parameters. The results showed a strong dependence on the damping parameters, highlighting the importance of a correct evaluation of this quantity. Finally, the accuracy and applicability of one-dimensional models were examined. The results of some one-dimensional models built with the software Quiver (Kaynia, 2009) were compared with those obtained from two-dimensional models. The comparison showed a good degree of approximation, accompanied by a constant safety margin. The one-dimensional analyses were then used for the sensitivity checks. The results of this work are presented here, together with qualitative and quantitative suggestions for building reliable two-dimensional models. The possibility of using one-dimensional models in case of parameter uncertainty is also described. The observed results suggest that substantial time savings can be achieved when carrying out extensive sensitivity investigations.
Abstract:
This research has focused on the study of the behavior and collapse of masonry arch bridges. The latest decades have seen an increasing interest in this structural type, which is still present and in use despite the passage of time and changes in the means of transport. Several strategies have been developed over time to simulate the response of this type of structure, although even today there is no generally accepted standard for the assessment of masonry arch bridges. The aim of this thesis is to compare the principal analytical and numerical methods existing in the literature on case studies, seeking to highlight their strengths and weaknesses. Three methods are mainly examined: i) the Thrust Line Analysis Method; ii) the Mechanism Method; iii) the Finite Element Method. The Thrust Line Analysis Method and the Mechanism Method are analytical methods derived from two of the fundamental theorems of plastic analysis, while the Finite Element Method is a numerical method that uses different discretization strategies to analyze the structure. Each method is applied to the case studies through computer-based implementations that allow a user-friendly application of the principles explained. A particular closed-form approach, based on an elasto-plastic material model and developed by some Belgian researchers, is also studied. To compare the three methods, two different case studies have been analyzed: i) a generic single-span masonry arch bridge; ii) a real masonry arch bridge, the Clemente Bridge, built over the Savio River in Cesena. In the analyses performed, all the models are two-dimensional in order to obtain results comparable across the different methods examined. The different methods have been compared with each other in terms of collapse load and hinge positions.
Abstract:
The use of guided ultrasonic waves (GUW) has increased considerably in the fields of non-destructive evaluation (NDE) and structural health monitoring (SHM) due to their ability to perform long-range inspections, to probe hidden areas, and to provide complete monitoring of the entire waveguide. Guided waves can be fully exploited only once their dispersive properties are known for the given waveguide. In this context, well-established analytical and numerical methods are represented by the Matrix family methods and the Semi-Analytical Finite Element (SAFE) methods. However, while the former are limited to simple geometries of finite or infinite extent, the latter can model arbitrary cross-section waveguides of finite domain only. This thesis is aimed at developing three different numerical methods for modelling wave propagation in complex translationally invariant systems. First, a classical SAFE formulation for viscoelastic waveguides is extended to account for a three-dimensional translationally invariant static prestress state. The effect of prestress, residual stress, and applied loads on the dispersion properties of the guided waves is shown. Next, a two-and-a-half dimensional Boundary Element Method (2.5D BEM) for the dispersion analysis of damped guided waves in waveguides and cavities of arbitrary cross-section is proposed. The attenuation dispersive spectrum due to material damping and geometrical spreading of cavities with arbitrary shape is shown for the first time. Finally, a coupled SAFE-2.5D BEM framework is developed to study the dispersion characteristics of waves in viscoelastic waveguides of arbitrary geometry embedded in infinite solid or liquid media. Dispersion of leaky and non-leaky guided waves in terms of speed and attenuation, as well as the radiated wavefields, can be computed. The results obtained in this thesis can be helpful for the design of both actuation and sensing systems in practical applications, as well as for tuning experimental setups.
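In the notation commonly used in the SAFE literature (the matrix names below are generic, not taken from the thesis), discretizing only the waveguide cross-section with finite elements reduces the 3D problem to a polynomial eigenvalue problem in the axial wavenumber k:

    \left[\mathbf{K}_1 + \mathrm{i}k\,\mathbf{K}_2 + k^2\,\mathbf{K}_3 - \omega^2\,\mathbf{M}\right]\mathbf{U} = \mathbf{0},

solved for complex k at each angular frequency \omega; the real part of k yields the phase velocity \omega/\mathrm{Re}(k) and the imaginary part the attenuation, which is how dispersion curves such as those referred to above are obtained.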
Abstract:
Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent Particle-in-Cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. For a range of parameters of interest for laser-plasma acceleration, the dependence of the threshold for self-injection in the non-evolving wake on laser intensity and wake velocity is characterized. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on clusters of Graphics Processing Units (GPUs), is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine for modeling the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
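The abstract does not detail jasmine's kernels; as generic background (all names and values below are illustrative), the core loop that GPU PIC codes must restructure is the particle push, most commonly the relativistic Boris rotation, sketched here in normalized units with c = 1 and u = gamma*v:

    import numpy as np

    def boris_push(x, u, E, B, q_over_m, dt):
        """One Boris step: half electric kick, magnetic rotation, half kick."""
        u_minus = u + 0.5 * q_over_m * dt * E
        gamma = np.sqrt(1.0 + u_minus @ u_minus)
        t = 0.5 * q_over_m * dt * B / gamma      # rotation vector
        s = 2.0 * t / (1.0 + t @ t)
        u_prime = u_minus + np.cross(u_minus, t)
        u_plus = u_minus + np.cross(u_prime, s)
        u_new = u_plus + 0.5 * q_over_m * dt * E
        gamma_new = np.sqrt(1.0 + u_new @ u_new)
        return x + dt * u_new / gamma_new, u_new  # new position, new momentum

    # one step for an electron (q/m = -1 in these units) in crossed fields
    x, u = np.zeros(3), np.array([0.5, 0.0, 0.0])
    E, B = np.array([0.0, 0.1, 0.0]), np.array([0.0, 0.0, 1.0])
    x, u = boris_push(x, u, E, B, q_over_m=-1.0, dt=0.05)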