8 results for Numerical analyses
in CentAUR: Central Archive, University of Reading - UK
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
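The central object in a von Neumann analysis of this kind is the amplification factor of a scheme applied to a single Fourier mode. As a minimal illustration (not the paper's actual HEVI schemes, whose Butcher tableaux are not given here), the sketch below computes the amplification factor of a simple IMEX step, forward Euler for the "explicit" frequency and trapezoidal for the "implicit" one, applied to the linear oscillation equation dy/dt = i(ω_E + ω_I)y:

```python
def amplification_factor(w_ex, w_im, dt):
    """Amplification factor A of one IMEX step for dy/dt = i*(w_ex + w_im)*y:
    forward Euler on the explicit frequency w_ex, trapezoidal on w_im.
    Step: y1 = y0 + i*dt*w_ex*y0 + i*dt*w_im*(y0 + y1)/2, solved for y1/y0."""
    numerator = 1.0 + 1j * dt * w_ex + 0.5j * dt * w_im
    denominator = 1.0 - 0.5j * dt * w_im
    return numerator / denominator

# |A| <= 1 indicates stability for that mode; comparing arg(A) with the
# exact phase dt*(w_ex + w_im) diagnoses the numerical phase error.
A_implicit_only = amplification_factor(0.0, 1.0, 0.1)  # trapezoidal: |A| = 1
A_explicit_only = amplification_factor(1.0, 0.0, 0.1)  # forward Euler: |A| > 1
```

Sweeping such a factor over frequency space, and tracking the eigenvalues of the corresponding matrix problem for coupled systems, is the kind of numerically generated diagnosis the abstract describes.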
Abstract:
In this paper, numerical analyses of the thermal performance of an indirect evaporative air cooler incorporating an M-cycle cross-flow heat exchanger have been carried out. The numerical model was established by solving the coupled governing equations for heat and mass transfer between the product and working air, using the finite-element method. The model was developed in the EES (Engineering Equation Solver) environment and validated against published experimental data. Correlations between the cooling (wet-bulb) effectiveness, the system COP and a number of air-flow/exchanger parameters were developed. It is found that lower channel air velocity, lower inlet air relative humidity, and a higher working-to-product air ratio yield higher cooling effectiveness. The recommended average air velocities in the dry and wet channels should not exceed 1.77 m/s and 0.7 m/s, respectively. The optimum working-to-product air flow ratio for this cooler is 50%. The channel geometry, i.e. channel length and height, also has a significant impact on system performance. A longer channel and a smaller channel height increase the system's cooling effectiveness but reduce the system COP. The recommended channel height is 4 mm, and the dimensionless channel length, i.e. the ratio of channel length to height, should be in the range 100 to 300. The numerical results indicate that this new type of M-cycle heat and mass exchanger can achieve 16.7% higher cooling effectiveness than the conventional cross-flow heat and mass exchanger used in indirect evaporative coolers. A model of this kind is new and has not yet been reported in the literature. The results of the study help with the design and performance analysis of this new type of indirect evaporative air cooler and, further, help to increase the market share of the technology within the building air-conditioning sector, which is currently dominated by conventional compression refrigeration technology.
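For reference, the cooling (wet-bulb) effectiveness quoted above is conventionally defined as the achieved product-air temperature drop relative to the maximum drop allowed by the inlet wet-bulb temperature. A minimal sketch of that definition and of the recommended geometry (the function name and sample temperatures are illustrative, not from the paper):

```python
def wet_bulb_effectiveness(t_in, t_out, t_wb_in):
    """Wet-bulb effectiveness of an indirect evaporative cooler:
    fraction of the maximum possible (dry-bulb to wet-bulb)
    temperature drop that the product air actually achieves."""
    return (t_in - t_out) / (t_in - t_wb_in)

# Recommended geometry from the abstract: channel height 4 mm and a
# dimensionless channel length (length/height) between 100 and 300,
# i.e. channel lengths between 0.4 m and 1.2 m.
h = 0.004  # channel height in metres
lengths = [h * ratio for ratio in (100, 300)]
```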
Abstract:
The impact of humidity observations on forecast skill is explored by producing a series of global forecasts using initial data derived from the ERA-40 reanalysis system, in which all humidity data have been removed during the data assimilation. The new forecasts have been compared with the original ERA-40 analyses and the forecasts made from them. Both sets of forecasts show virtually identical prediction skill in the extratropics and the tropics. Differences between the forecasts are small and grow at a characteristic amplification rate. There are larger differences in temperature and geopotential in the tropics, but these differences are small-scale and unstructured and have no noticeable effect on the skill of the wind forecasts. The results highlight the currently very limited impact on the forecasts of the humidity observations used to produce the initial state.
Abstract:
We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes over time in the performance of the instrument to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m⁻² in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modelled OLR and albedo are well within these error estimates. The close agreement between the observed and modelled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data prior to their release.
Abstract:
This paper aims to summarise the current performance of ozone data assimilation (DA) systems, to show where they can be improved, and to quantify their errors. It examines 11 sets of ozone analyses from 7 different DA systems. Two are numerical weather prediction (NWP) systems based on general circulation models (GCMs); the other five use chemistry transport models (CTMs). The systems examined contain either linearised or detailed ozone chemistry, or no chemistry at all. In most analyses, MIPAS (Michelson Interferometer for Passive Atmospheric Sounding) ozone data are assimilated; two assimilate SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric Chartography) observations instead. Analyses are compared to independent ozone observations covering the troposphere, stratosphere and lower mesosphere during the period July to November 2003. Biases and standard deviations are largest, and show the largest divergence between systems, in the troposphere, in the upper troposphere/lower stratosphere, in the upper stratosphere and mesosphere, and in the Antarctic ozone-hole region. However, in any particular area, apart from the troposphere, at least one system can be found that agrees well with independent data. In general, none of the differences can be linked to the assimilation technique (Kalman filter, three- or four-dimensional variational methods, direct inversion) or to the type of system (CTM or NWP). Where results diverge, the main explanation is the way ozone is modelled. It is important to model transport at the tropical tropopause correctly, to avoid positive biases and excessive structure in the ozone field. In the southern hemisphere ozone hole, only the analyses which correctly model heterogeneous ozone depletion are able to reproduce the near-complete ozone destruction over the pole. In the upper stratosphere and mesosphere (above 5 hPa), some ozone photochemistry schemes caused large but easily remedied biases. The diurnal cycle of ozone in the mesosphere is not captured, except by the one system that includes a detailed treatment of mesospheric chemistry. These results indicate that when good observations are available for assimilation, the first priority for improving ozone DA systems is to improve the models. The analyses benefit strongly from the good quality of the MIPAS ozone observations. Using the analyses as a transfer standard, it is seen that MIPAS is about 5% higher than HALOE (Halogen Occultation Experiment) in the mid and upper stratosphere and mesosphere (above 30 hPa), and of order 10% higher than ozonesonde and HALOE data in the lower stratosphere (100 hPa to 30 hPa). Analyses based on SCIAMACHY total columns are almost as good as the MIPAS analyses; analyses based on SCIAMACHY limb profiles are worse in some areas, due to problems in the SCIAMACHY retrievals.
Abstract:
In this study, we systematically compare a wide range of observational and numerical precipitation datasets for Central Asia. The data considered include two re-analyses, three datasets based on direct observations, and the output of a regional climate model simulation driven by a global re-analysis. These are validated and intercompared with respect to their ability to represent the Central Asian precipitation climate. In each of the datasets, we consider the mean spatial distribution and the seasonal cycle of precipitation, the amplitude of interannual variability, the representation of individual yearly anomalies, the precipitation sensitivity (i.e. the response to wet and dry conditions), and the temporal homogeneity of precipitation. Additionally, we carried out part of these analyses for the datasets available in real time. The mutual agreement between the observations is used as an indication of the extent to which these data can be used to validate precipitation data from other sources. In particular, we show that the observations usually agree qualitatively on anomalies in individual years, while it is not always possible to use them for a quantitative validation of the amplitude of interannual variability. The regional climate model is capable of improving the spatial distribution of precipitation. At the same time, it strongly underestimates summer precipitation and its variability, whereas interannual variations are well represented during the other seasons, in particular in the Central Asian mountains during winter and spring.
Abstract:
The sea ice export from the Arctic is of global importance because its fresh water influences the oceanic stratification and, thus, the global thermohaline circulation. This study deals with the effect of cyclones on sea ice, and on sea ice transport in particular, on the basis of observations from the two field experiments FRAMZY 1999 and FRAMZY 2002, carried out in April 1999 and March 2002, as well as simulations with a numerical sea ice model. The simulations, performed with a dynamic-thermodynamic sea ice model, are forced with 6-hourly atmospheric ECMWF (European Centre for Medium-Range Weather Forecasts) analyses and 6-hourly oceanic data from an MPI-OM (Max Planck Institute Ocean Model) simulation. A comparison of the observed and simulated variability of the sea ice drift and of the position of the ice edge shows that the chosen model configuration is appropriate for these studies. The seven observed cyclones change the position of the ice edge by up to 100 km and cause an extensive decrease of sea ice coverage, ranging from 2% to more than 10%. The decrease is only simulated by the model if the ocean current is strongly divergent in the centre of the cyclone. The impact of the ocean current on the divergence and shear deformation of the ice drift is remarkable. As shown by sensitivity studies, the ocean current at a depth of 6 m, with which the sea ice model is forced, is mainly responsible for the identified differences between simulation and observation. The simulated sea ice transport shows strong variability on time scales from hours to days. Local minima occur in the time series of the ice transport during periods with Fram Strait cyclones. These minima are not caused by the local effect of the cyclone's wind field, but mainly by the large-scale pattern of surface pressure. A displacement of the areas of strongest cyclone activity in the Nordic Seas would considerably influence the ice transport.
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms which perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values which produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and accept when the distance between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms. These include recent dimension-reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
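The draw-simulate-compare-accept loop described above can be sketched in a few lines. This is a generic rejection-ABC illustration in Python rather than the abctools package itself (abctools is an R package); all names, the toy model, and the threshold value are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def abc_rejection(observed, simulate, summary, prior_sample, eps, n_draws=10000):
    """Basic rejection ABC: draw theta from the prior, simulate a dataset,
    and keep theta when the distance between the simulated and observed
    summary statistics falls below the threshold eps."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        s_sim = summary(simulate(theta))
        if abs(s_sim - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

# Toy example: infer the mean of a normal distribution with known sd = 1,
# using the sample mean as the (sufficient) summary statistic.
observed = rng.normal(2.0, 1.0, size=50)
posterior_sample = abc_rejection(
    observed,
    simulate=lambda th: rng.normal(th, 1.0, size=50),
    summary=np.mean,
    prior_sample=lambda: rng.uniform(-5.0, 5.0),
    eps=0.1,
)
```

Tuning the summary statistics and the threshold eps is exactly the problem the package addresses: a poor summary or a loose threshold degrades the approximate posterior, which is why dimension-reduction and coverage diagnostics matter.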