920 results for Self-Validating Numerical Methods


Relevance: 50.00%

Abstract:

Theories and numerical modeling are fundamental tools for understanding, optimizing and designing present and future laser-plasma accelerators (LPAs). Laser evolution and plasma wave excitation in an LPA driven by a weakly relativistically intense, short-pulse laser propagating in a preformed parabolic plasma channel are studied analytically in 3D, including the effects of pulse steepening and energy depletion. At higher laser intensities, the process of electron self-injection in the nonlinear bubble wake regime is studied by means of fully self-consistent particle-in-cell (PIC) simulations. Considering a non-evolving laser driver propagating with a prescribed velocity, the geometrical properties of the non-evolving bubble wake are studied. The dependence of the threshold for self-injection in the non-evolving wake on laser intensity and wake velocity is characterized for a range of parameters of interest for laser-plasma acceleration. Due to the nonlinear and complex nature of the physics involved, computationally challenging numerical simulations are required to model laser-plasma accelerators operating at relativistic laser intensities. The numerical and computational optimizations that, combined in the codes INF&RNO and INF&RNO/quasi-static, make it possible to accurately model multi-GeV laser wakefield acceleration stages on present supercomputing architectures are discussed. The PIC code jasmine, capable of efficiently running laser-plasma simulations on Graphics Processing Unit (GPU) clusters, is presented. GPUs deliver exceptional performance to PIC codes, but the core algorithms had to be redesigned to satisfy the constraints imposed by the intrinsic parallelism of the architecture. The simulation campaigns run with the code jasmine to model the recent LPA experiments with the INFN-FLAME and CNR-ILIL laser systems are also presented.
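To illustrate the kind of core kernel that such PIC codes execute per particle per timestep (and that has to be redesigned for GPU parallelism), here is a minimal sketch of the standard relativistic Boris push. It is not taken from jasmine or INF&RNO; the normalized units (c = 1) and parameter values are assumptions for illustration only.

```python
import numpy as np

def boris_push(p, E, B, q=-1.0, m=1.0, dt=0.01):
    """One relativistic Boris step; returns the updated momentum.
    p, E, B are 3-vectors in normalized units (c = 1)."""
    # first half of the electric-field impulse
    p_minus = p + 0.5 * q * dt * E
    gamma = np.sqrt(1.0 + np.dot(p_minus, p_minus) / m**2)
    # magnetic rotation (norm-preserving by construction)
    t = 0.5 * q * dt * B / (gamma * m)
    s = 2.0 * t / (1.0 + np.dot(t, t))
    p_prime = p_minus + np.cross(p_minus, t)
    p_plus = p_minus + np.cross(p_prime, s)
    # second half of the electric-field impulse
    return p_plus + 0.5 * q * dt * E

# with E = 0 a uniform magnetic field only rotates the momentum,
# so its magnitude is conserved to machine precision
p = np.array([1.0, 0.0, 0.0])
for _ in range(1000):
    p = boris_push(p, E=np.zeros(3), B=np.array([0.0, 0.0, 1.0]))
```

On a GPU, this kernel is applied to millions of particles in parallel; the redesign effort mentioned above concerns not this arithmetic but the memory-access patterns around it (particle sorting, current deposition without write conflicts).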

Relevance: 50.00%

Abstract:

Finite element hp-adaptivity is a technology that allows for very accurate numerical solutions. When applied to open-region problems such as radar cross section prediction or antenna analysis, a mesh truncation method needs to be used. This paper compares the following mesh truncation methods in the context of hp-adaptive methods: Infinite Elements, Perfectly Matched Layers and an iterative boundary-element-based methodology. These methods have been selected because they are exact at the continuous level (a desirable feature given the extreme accuracy delivered by the hp-adaptive strategy) and they are easy to integrate with the logic of hp-adaptivity. The comparison is mainly based on the number of degrees of freedom needed by each method to achieve a given level of accuracy. Computational times are also included. Two-dimensional examples are used, but the conclusions extrapolate directly to the three-dimensional case.

Relevance: 40.00%

Abstract:

The results of a numerical investigation into the errors for least squares estimates of function gradients are presented. The underlying algorithm is obtained by constructing a least squares problem using a truncated Taylor expansion. An error bound associated with this method contains in its numerator terms related to the Taylor series remainder, while its denominator contains the smallest singular value of the least squares matrix. Perhaps for this reason the error bounds are often found to be pessimistic by several orders of magnitude. The circumstance under which these poor estimates arise is elucidated and an empirical correction of the theoretical error bounds is conjectured and investigated numerically. This is followed by an indication of how the conjecture is supported by a rigorous argument.
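The least-squares construction described above can be sketched directly: a truncated first-order Taylor expansion around the evaluation point turns the neighboring samples into an overdetermined linear system for the gradient, and the smallest singular value of the system matrix is the quantity appearing in the error bound's denominator. The sampling geometry and test function below are illustrative assumptions, not data from the study.

```python
import numpy as np

def ls_gradient(x0, pts, f):
    """Estimate grad f at x0 from samples at pts via the truncated
    Taylor expansion f(p) - f(x0) ≈ (p - x0) · g."""
    A = pts - x0                                  # least-squares matrix of displacements
    b = np.array([f(p) for p in pts]) - f(x0)
    g, *_ = np.linalg.lstsq(A, b, rcond=None)
    sigma_min = np.linalg.svd(A, compute_uv=False)[-1]
    return g, sigma_min                           # sigma_min enters the error-bound denominator

rng = np.random.default_rng(0)
f = lambda p: 3.0 * p[0] - 2.0 * p[1]             # linear: the Taylor remainder vanishes
x0 = np.array([0.5, 0.5])
pts = x0 + 0.1 * rng.standard_normal((8, 2))
g, sigma_min = ls_gradient(x0, pts, f)
# g ≈ [3, -2]: exact for linear f, up to roundoff
```

For nonlinear f the remainder term no longer vanishes, and it is precisely the interplay between that remainder (numerator) and sigma_min (denominator) that the abstract's error bounds quantify.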

Relevance: 40.00%

Abstract:

Precise control of composition and internal structure is essential for a variety of novel technological applications which require highly tailored binary quantum dots (QDs) with predictable optoelectronic and mechanical properties. The delicate balancing act between incoming flux and substrate temperature required for the growth of compositionally graded (Si1-xCx; x varies throughout the internal structure), core-multishell (discrete shells of Si and C or combinations thereof) and selected composition (x set) QDs on low-temperature plasma/ion-flux-exposed Si(100) surfaces is investigated via a hybrid numerical simulation. Incident Si and C ions lead to localized substrate heating and a reduction in surface diffusion activation energy. It is shown that by incorporating ions in the influx, a steady-state composition is reached more quickly (for selected composition QDs) and the composition gradient of a Si1-xCx QD may be fine-tuned; additionally (with other deposition conditions remaining the same), larger QDs are obtained on average. It is suggested that ionizing a portion of the influx is another way to control the average size of the QDs, and ultimately, their internal structure. Advantages that can be gained by utilizing plasma/ion-related controls to facilitate the growth of highly tailored, compositionally controlled quantum dots are discussed as well.
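The mechanism invoked above, ion bombardment lowering the surface-diffusion activation energy, can be made concrete with a textbook Arrhenius hop-rate estimate. The prefactor, activation energies, energy reduction and substrate temperature below are hypothetical illustrative numbers, not values from the study.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def hop_rate(E_a, T, nu0=1e13):
    """Arrhenius surface-diffusion hop rate (Hz): nu0 * exp(-E_a / kB T)."""
    return nu0 * math.exp(-E_a / (K_B * T))

# hypothetical numbers: a 0.2 eV ion-induced reduction of a 1.0 eV
# activation energy at a 700 K substrate
neutral = hop_rate(E_a=1.0, T=700.0)
with_ions = hop_rate(E_a=0.8, T=700.0)
print(with_ions / neutral)  # ≈ 27.5: adatoms hop roughly 28x more often
```

Even a modest reduction of the barrier thus changes adatom mobility by more than an order of magnitude, which is consistent with the abstract's claim that ionizing part of the influx gives an extra control knob over QD growth.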

Relevance: 40.00%

Abstract:

These lecture notes describe the use and implementation of a framework in which mathematical as well as engineering optimisation problems can be analysed. The foundations of the framework and algorithms described, Hierarchical Asynchronous Parallel Evolutionary Algorithms (HAPEAs), lie upon traditional evolution strategies and incorporate the concepts of multi-objective optimisation, hierarchical topology, asynchronous evaluation of candidate solutions, parallel computing and game strategies. In a step-by-step approach, the numerical implementation of EAs and HAPEAs for solving multi-criteria optimisation problems is conducted, providing the reader with the knowledge to reproduce this hands-on training in his or her academic or industrial environment.
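The "traditional evolution strategies" that HAPEAs build upon can be sketched with a minimal (mu + lambda) evolution strategy. This deliberately omits the hierarchical topology, asynchronous evaluation, parallelism and game strategies that distinguish HAPEAs; all parameter values are illustrative assumptions.

```python
import random

def evolve(fitness, dim, mu=5, lam=20, sigma=0.3, decay=0.97, gens=200, seed=1):
    """Minimal (mu + lam) evolution strategy minimizing `fitness`."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(mu)]
    for _ in range(gens):
        # each offspring is a Gaussian mutation of a random parent
        offspring = [[x + rng.gauss(0, sigma) for x in rng.choice(pop)]
                     for _ in range(lam)]
        # (mu + lam) selection: parents compete with their offspring (elitist)
        pop = sorted(pop + offspring, key=fitness)[:mu]
        sigma *= decay  # simple annealing of the mutation step size
    return pop[0]

best = evolve(lambda v: sum(x * x for x in v), dim=3)
# converges close to the sphere function's minimum at the origin
```

In a HAPEA, several such loops would run concurrently at different fidelity levels of the objective, exchanging candidates asynchronously; the single loop above is only the basic building block.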

Relevance: 40.00%

Abstract:

Emotion researchers have grappled with challenging methodological issues in capturing emotions of participants in naturalistic settings such as school or university classrooms. Self-reporting methods have been used frequently, yet these methods are inadequate when used alone. We argue that the self-reporting methods of emotion diaries and cogenerative dialogues can be helpful in identifying in-the-moment emotions when used in conjunction with the microanalysis of video recordings of classroom events. We trace the evolution of our use of innovative self-reporting methods through three cases from our research projects, and propose new directions for our ongoing development and application of these methods in both school and university classrooms.

Relevance: 40.00%

Abstract:

Rarely is it possible to obtain absolute numbers in free-ranging populations and although various direct and indirect methods are used to estimate abundance, few are validated against populations of known size. In this paper, we apply grounding, calibration and verification methods, used to validate mathematical models, to methods of estimating relative abundance. To illustrate how this might be done, we consider and evaluate the widely applied passive tracking index (PTI) methodology. Using published data, we examine the rationality of PTI methodology, how conceptually animal activity and abundance are related and how alternative methods are subject to similar biases or produce similar abundance estimates and trends. We then calibrate the method against populations representing a range of densities likely to be encountered in the field. Finally, we compare PTI trends against a prediction that adjacent populations of the same species will have similar abundance values and trends in activity. We show that while PTI abundance estimates are subject to environmental and behavioural stochasticity peculiar to each species, the PTI method and associated variance estimate show high probability of detection, high precision of abundance values and, generally, low variability between surveys, and suggest that the PTI method applied using this procedure and for these species provides a sensitive and credible index of abundance. This same or similar validation approach can and should be applied to alternative relative abundance methods in order to demonstrate their credibility and justify their use.
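A tracking-index computation of the general kind evaluated above can be sketched as a mean of per-plot activity with a between-plot variance estimate. The plots-by-nights layout and the counts below are hypothetical, and the published PTI methodology involves more than this simple summary; the sketch only shows where an "index" and its "associated variance estimate" come from.

```python
import statistics

def tracking_index(counts):
    """Relative-abundance index from a plots x nights grid of track counts.
    Returns (index, variance): the mean intrusions per plot-night and the
    between-plot variance of the plot means."""
    plot_means = [sum(nights) / len(nights) for nights in counts]
    index = statistics.mean(plot_means)
    var = statistics.variance(plot_means)  # sample variance across plots
    return index, var

# 4 hypothetical sand plots monitored for 3 nights
counts = [[2, 3, 1], [0, 1, 1], [4, 2, 3], [1, 1, 0]]
index, var = tracking_index(counts)
print(round(index, 3))  # 1.583
```

Validation in the paper's sense then means checking how this index and its variance behave when computed over populations of known, experimentally controlled density.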

Relevance: 40.00%

Abstract:

Modern-day weather forecasting is highly dependent on Numerical Weather Prediction (NWP) models as the main data source. The evolving state of the atmosphere with time can be numerically predicted by solving a set of hydrodynamic equations, if the initial state is known. However, such a modelling approach always contains approximations that by and large depend on the purpose of use and resolution of the models. Present-day NWP systems operate with horizontal model resolutions in the range from about 40 km to 10 km. Recently, the aim has been to reach scales of 1–4 km operationally. This requires fewer approximations in the model equations, more complex treatment of physical processes and, furthermore, more computing power. This thesis concentrates on the physical parameterization methods used in high-resolution NWP models. The main emphasis is on the validation of the grid-size-dependent convection parameterization in the High Resolution Limited Area Model (HIRLAM) and on a comprehensive intercomparison of radiative-flux parameterizations. In addition, the problems related to wind prediction near the coastline are addressed with high-resolution meso-scale models. The grid-size-dependent convection parameterization is clearly beneficial for NWP models operating with a dense grid. Results show that the current convection scheme in HIRLAM is still applicable down to a 5.6 km grid size. However, with further improved model resolution, the tendency of the model to overestimate strong precipitation intensities increases in all the experiment runs. For the clear-sky longwave radiation parameterization, schemes used in NWP models provide much better results in comparison with simple empirical schemes. On the other hand, for the shortwave part of the spectrum, the empirical schemes are more competitive in producing fairly accurate surface fluxes. 
Overall, even the complex radiation parameterization schemes used in NWP models seem to be slightly too transparent for both long- and shortwave radiation in clear-sky conditions. For cloudy conditions, simple cloud correction functions are tested. In the case of longwave radiation, the empirical cloud correction methods provide rather accurate results, whereas for shortwave radiation the benefit is only marginal. Idealised high-resolution two-dimensional meso-scale model experiments suggest that the reason for the observed formation of the afternoon low level jet (LLJ) over the Gulf of Finland is an inertial oscillation mechanism, when the large-scale flow is from the south-east or west directions. The LLJ is further enhanced by the sea-breeze circulation. A three-dimensional HIRLAM experiment, with a 7.7 km grid size, is able to generate a similar LLJ flow structure as suggested by the 2D experiments and observations. It is also pointed out that improved model resolution does not necessarily lead to better wind forecasts in the statistical sense. In nested systems, the quality of the large-scale host model is very important, especially if the inner meso-scale model domain is small.

Relevância:

40.00% 40.00%

Publicador:

Resumo:

Many physical problems can be modeled by scalar, first-order, nonlinear, hyperbolic, partial differential equations (PDEs). The solutions to these PDEs often contain shock and rarefaction waves, where the solution becomes discontinuous or has a discontinuous derivative. One can encounter difficulties using traditional finite difference methods to solve these equations. In this paper, we introduce a numerical method for solving first-order scalar wave equations. The method involves solving ordinary differential equations (ODEs) to advance the solution along the characteristics and to propagate the characteristics in time. Shocks are created when characteristics cross, and the shocks are then propagated by applying analytical jump conditions. New characteristics are inserted in spreading rarefaction fans. New characteristics are also inserted when values on adjacent characteristics lie on opposite sides of an inflection point of a nonconvex flux function. Solutions along characteristics are propagated using a standard fourth-order Runge-Kutta ODE solver. Shock waves are kept perfectly sharp. In addition, shock locations and velocities are determined without analyzing smeared profiles or taking numerical derivatives. In order to test the numerical method, we study analytically a particular class of nonlinear hyperbolic PDEs, deriving closed form solutions for certain special initial data. We also find bounded, smooth, self-similar solutions using group theoretic methods. The numerical method is validated against these analytical results. In addition, we compare the errors in our method with those using the Lax-Wendroff method for both convex and nonconvex flux functions. Finally, we apply the method to solve a PDE with a convex flux function describing the development of a thin liquid film on a horizontally rotating disk and a PDE with a nonconvex flux function, arising in a problem concerning flow in an underground reservoir.
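The core idea, shocks forming where characteristics cross, can be sketched for the prototype convex-flux case, inviscid Burgers' equation u_t + u u_x = 0. There the characteristics are exactly straight lines (so no Runge-Kutta integration is needed, unlike the general method above), and the shock-formation time is found by detecting the first crossing of adjacent characteristics. The initial profile is an illustrative choice.

```python
import math

def first_crossing(u0, xs):
    """Shock-formation time for u_t + u u_x = 0 with initial data u0.
    Characteristics are straight lines x(t) = x0 + u0(x0) * t carrying
    u = u0(x0); a shock forms when two of them first cross."""
    t_star = math.inf
    for a, b in zip(xs, xs[1:]):
        du = u0(b) - u0(a)
        if du < 0:  # compressive pair: the left characteristic is faster
            t_star = min(t_star, -(b - a) / du)
    return t_star

xs = [i * 0.01 for i in range(-300, 301)]
t_star = first_crossing(lambda x: -math.tanh(x), xs)
# the exact answer for this profile is t* = -1 / min u0'(x) = 1
print(round(t_star, 3))  # 1.0
```

Past t*, a single-valued solution requires the analytical jump conditions mentioned above (for Burgers, the shock speed is the average of the states on either side) to propagate the discontinuity.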

Relevância:

40.00% 40.00%

Publicador:

Resumo:

We develop a method for performing one-loop calculations in finite systems that is based on using the WKB approximation for the high energy states. This approximation allows us to absorb all the counterterms analytically and thereby avoids the need for extreme numerical precision that was required by previous methods. In addition, the local approximation makes this method well suited for self-consistent calculations. We then discuss the application of relativistic mean field methods to the atomic nucleus. Self-consistent, one-loop calculations in the Walecka model are performed and the role of the vacuum in this model is analyzed. This model predicts that vacuum polarization effects are responsible for up to five percent of the local nucleon density. Within this framework the possible role of strangeness degrees of freedom is studied. We find that strangeness polarization can increase the kaon-nucleus scattering cross section by ten percent. By introducing a cutoff into the model, we calculate its dependence on short-distance physics, where its validity is doubtful. The model is very sensitive to cutoffs around one GeV.