6 results for Numerical renormalization-group
in CentAUR: Central Archive at the University of Reading - UK
Abstract:
We discuss several methods of calculating the DIS structure functions F2(x,Q2) based on BFKL-type small-x resummations. Taking into account new HERA data ranging down to small x and low Q2, the pure leading-order BFKL-based approach is excluded. Other methods based on high-energy factorization are closer to conventional renormalization group equations. Despite several difficulties and ambiguities in combining the renormalization group equations with small-x resummed terms, we find that a fit to the current data is hardly feasible, since the data in the low-Q2 region are not as steep as the BFKL formalism predicts. Thus we conclude that deviations from the (successful) renormalization group approach towards summing up logarithms in 1/x are disfavoured by experiment.
Abstract:
It has been observed recently that a consistent LO BFKL gluon evolution leads to a steep growth of F2(x, Q2) for x → 0 almost independently of Q2. We show that current data from the DESY HERA collider are precise enough to finally rule out a pure BFKL behaviour in the accessible small-x region. Several attempts have been made by other groups to treat the BFKL-type small-x resummations instead as additions to the conventional anomalous dimensions of the successful renormalization group "Altarelli-Parisi" equations. We demonstrate that all presently available F2 data, in particular at lower values of Q2, cannot be described using the presently known NLO (two-loop consistent) small-x resummations. Finally we comment on the common reason for the failure of these BFKL-inspired methods, which result, in general, in too steep x-dependencies as x → 0.
Abstract:
The BFKL equation and the kT-factorization theorem are used to obtain predictions for F2 in the small Bjorken-x region over a wide range of Q2. The dependence on the parameters, especially on those concerning the infrared region, is discussed. After a background fit to recent experimental data obtained at DESY HERA and at Fermilab (E665 experiment) we find that the predicted, almost Q2-independent BFKL slope λ ≳ 0.5 appears to be too steep at lower Q2 values. Thus there seems to be a chance that future HERA data can distinguish between pure BFKL and conventional field-theoretic renormalization group approaches. © 1995 The American Physical Society.
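The "BFKL slope" quoted in this abstract refers to the power-law rise of the structure function at small x, F2(x, Q2) ~ C(Q2) · x^(-λ). As a hedged illustration only (the normalisation and data are synthetic, not from the paper), the sketch below generates F2 values with an assumed λ = 0.5 and recovers the effective slope λ = -d ln F2 / d ln x from a log-log fit, the same quantity a fit to HERA data would estimate:

```python
import numpy as np

# Illustrative only: synthetic small-x data following the BFKL-type
# power law F2 ~ C * x**(-lambda), with an assumed slope of 0.5.
lam_true = 0.5                  # hypothetical BFKL slope (lambda >~ 0.5 in the text)
C = 0.4                         # arbitrary normalisation, not from the paper
x = np.logspace(-4, -2, 20)     # small Bjorken-x range
F2 = C * x ** (-lam_true)

# Effective slope from a straight-line fit in (ln x, ln F2):
# lambda_eff = -d ln F2 / d ln x
lam_fit = -np.polyfit(np.log(x), np.log(F2), 1)[0]
print(f"fitted slope lambda = {lam_fit:.3f}")
```

A steeper fitted slope than the data allow at low Q2 is exactly the tension the abstract describes.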
Abstract:
Our group considered the desirability of including representations of uncertainty in the development of parameterizations. (By 'uncertainty' here we mean the deviation of sub-grid-scale fluxes or tendencies in any given model grid box from truth.) We unanimously agreed that the ECMWF should attempt to provide a more physical basis for uncertainty estimates than the very effective but ad hoc methods being used at present. Our discussions identified several issues that will arise.
Abstract:
Data assimilation (DA) systems are evolving to meet the demands of convection-permitting models in the field of weather forecasting. On 19 April 2013 a special interest group meeting of the Royal Meteorological Society brought together UK researchers looking at different aspects of the data assimilation problem at high resolution, from theory to applications, together with researchers creating our future high-resolution observational networks. The meeting was chaired by Dr Sarah Dance of the University of Reading and Dr Cristina Charlton-Perez from the MetOffice@Reading. The purpose of the meeting was to help define the current state of high-resolution data assimilation in the UK. The workshop assembled three main types of scientists: observational network specialists, operational numerical weather prediction researchers, and those developing the fundamental mathematical theory behind data assimilation and the underlying models. These three working areas are intrinsically linked; therefore, a holistic view must be taken when discussing the potential to make advances in high-resolution data assimilation.
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
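The von Neumann analysis described above can be sketched on the simplest possible case. The example below is not one of the paper's schemes ("Trap2(2,3,2)", "UJ3(1,3,2)"); it applies first-order IMEX Euler to a single-mode oscillation equation dy/dt = i·ω_h·y + i·ω_v·y, treating the "horizontal" frequency ω_h explicitly and the "vertical" frequency ω_v implicitly, in the spirit of HEVI time-stepping. The frequency names and splitting are assumptions for illustration:

```python
# Minimal von Neumann-style amplification-factor sketch for an IMEX
# (implicit-explicit) Euler step on dy/dt = i*omega_h*y + i*omega_v*y.
# The update y_{n+1} = y_n + dt*(i*omega_h)*y_n + dt*(i*omega_v)*y_{n+1}
# solves to y_{n+1} = A * y_n with A = (1 + i*kh) / (1 - i*kv),
# where kh = omega_h*dt (explicit part) and kv = omega_v*dt (implicit part).

def amplification(kh: float, kv: float) -> float:
    """Modulus |A| of the amplification factor; |A| <= 1 means stable."""
    return abs((1 + 1j * kh) / (1 - 1j * kv))

# The implicit treatment strongly damps fast "vertical" modes ...
print(amplification(0.0, 10.0))   # |A| = 1/sqrt(101), well below 1
# ... while the explicit part alone amplifies an undamped oscillation:
print(amplification(0.5, 0.0))    # |A| = sqrt(1.25), above 1
```

Scanning |A| (and the phase arg(A) relative to the exact phase kh + kv) over a grid of frequencies is the basic diagnostic behind the stability and phase-error comparisons in the abstract; the paper's Runge-Kutta IMEX schemes simply replace this one-stage update with multi-stage ones.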