98 results for numerical models
Abstract:
Cool materials are characterized by a high solar reflectance r, which reduces heat gains during the daytime, and a high thermal emissivity ε, which enables them to dissipate at night the heat absorbed during the day. Although the concept of cool roofs, i.e. the application of cool materials to roof surfaces, has been well known in the US since the 1990s, most studies have focused on their performance in the residential and commercial sectors under the various climatic conditions of the US, and only a few case studies have been analyzed in EU countries. The present work analyzes the thermal benefits of applying cool materials to existing office buildings located in EU countries. Indeed, given their share of the existing building stock, as well as the very low rate of new building construction, the retrofit of office buildings is a topic of great concern worldwide. After an in-depth characterization of the existing building stock in the EU, the book gives an insight into the roof energy balance under different technological solutions, showing in which cases, and to what extent, cool roofs are preferable. A detailed description of the physical properties of cool materials and their availability on the market provides a solid background for the parametric analysis, carried out by means of detailed numerical models, that evaluates cool roof performance for various climates and office building configurations. With the help of dynamic simulations, the thermal behavior of office buildings representative of the existing EU building stock is assessed in terms of thermal comfort and energy needs for air conditioning. The results, which consider several variations of building features that may affect the resulting energy balance, show that cool roofs are an effective strategy for reducing overheating occurrences, and thus improving thermal comfort, in any climate. Potential heating penalties due to the reduction in incoming heat fluxes through the roof are also taken into account, as is the aging process of cool materials. Finally, an economic analysis of the best performing models establishes the boundaries within which they are economically advantageous.
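To make the roles of r and ε concrete, the daytime energy balance at a roof surface can be sketched in a standard textbook form (an illustration, not a formulation taken from the book itself):

\[
(1 - r)\, I_{\mathrm{sol}} \;=\; \varepsilon\,\sigma\,\bigl(T_s^4 - T_{\mathrm{sky}}^4\bigr) \;+\; h_c\,\bigl(T_s - T_{\mathrm{air}}\bigr) \;+\; Q_{\mathrm{cond}},
\]

where \(I_{\mathrm{sol}}\) is the incident solar irradiance, \(T_s\) the roof surface temperature, \(\sigma\) the Stefan-Boltzmann constant, \(h_c\) a convective heat-transfer coefficient, and \(Q_{\mathrm{cond}}\) the heat conducted into the building. Increasing r shrinks the absorbed solar term, while increasing ε strengthens radiative cooling; both lower \(T_s\) and hence the heat load \(Q_{\mathrm{cond}}\), which is also why a heating penalty can appear in winter.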
Abstract:
Observations obtained during an 8-month deployment of the second ARM Mobile Facility (AMF2) in a boreal environment in Hyytiälä, Finland, together with 20 years of comprehensive in-situ data from the SMEAR-II station, enable the characterization of biogenic aerosols, clouds and precipitation, and their interactions. During “Biogenic Aerosols - Effects on Clouds and Climate (BAECC)”, the U.S. Department of Energy’s Atmospheric Radiation Measurement (ARM) Program deployed AMF2 to Hyytiälä for an 8-month intensive measurement campaign from February to September 2014. The primary research goal was to understand the role of biogenic aerosols in cloud formation. Hyytiälä is host to SMEAR-II (Station for Measuring Forest Ecosystem-Atmosphere Relations), one of the world’s most comprehensive surface in-situ observation sites in a boreal forest environment; the station has been measuring atmospheric aerosols, biogenic emissions and an extensive suite of parameters relevant to atmosphere-biosphere interactions continuously since 1996. Combining vertical profiles from AMF2 with surface-based in-situ SMEAR-II observations allows processes at the surface to be related directly to processes occurring throughout the entire tropospheric column. Together with extensive surface precipitation measurements, and intensive observation periods involving aircraft flights and novel radiosonde launches, the complementary observations provide a unique opportunity for investigating aerosol-cloud interactions and cloud-to-precipitation processes in a boreal environment. The BAECC dataset provides opportunities for evaluating and improving models of aerosol sources and transport, cloud microphysical processes, and boundary-layer structures. In addition, numerical models are being used to bridge the gap between surface-based and tropospheric observations.
Abstract:
The Southern Ocean is a critical region for global climate, yet large cloud and solar radiation biases over the Southern Ocean are a long-standing problem in climate models and are poorly understood, leading to biases in simulated sea surface temperatures. This study shows that supercooled liquid clouds are central to understanding and simulating the Southern Ocean environment. A combination of satellite observational data and detailed radiative transfer calculations is used to quantify the impact of cloud phase and cloud vertical structure on the reflected solar radiation in the Southern Hemisphere summer. It is found that clouds with supercooled liquid tops dominate the population of liquid clouds. The observations show that clouds with supercooled liquid tops contribute between 27% and 38% to the total reflected solar radiation between 40° and 70°S, and climate models are found to poorly simulate these clouds. The results quantify the importance of supercooled liquid clouds in the Southern Ocean environment and highlight the need to improve understanding of the physical processes that control these clouds in order to improve their simulation in numerical models. This is not only important for improving the simulation of present-day climate and climate variability, but also relevant for increasing confidence in climate feedback processes and future climate projections.
Abstract:
A method is presented for solving a quasi-geostrophic two-layer model that includes the variation of static stability. The divergent part of the wind is incorporated by means of an iterative procedure. The procedure is rather fast, and the computation time is only 60–70% longer than for the usual two-layer model. The method of solution is justified by the conservation of the difference between the gross static stability and the kinetic energy. To eliminate side-boundary conditions, the experiments have been performed on a zonal channel model. The investigation falls mainly into three parts. The first part (section 5) contains a discussion of the significance of some physically inconsistent approximations. It is shown that these physical inconsistencies are rather serious: for the inconsistent models studied, the total kinetic energy increased faster than the gross static stability. In the next part (section 6) we study the effect of a Jacobian difference operator that conserves the total kinetic energy. The use of this operator in two-layer models gives a slight improvement, but probably has no practical value in short-period forecasts. It is also shown that the energy-conservative operator changes the wave speed in an erroneous way if the wavenumber or the grid length is large in the meridional direction. In the final part (section 7) we investigate the behaviour of baroclinic waves for several different initial states and for two energy-consistent models, one with constant and one with variable static stability. In accordance with linear theory, the waves adjust rather rapidly so that the temperature wave lags behind the pressure wave, independently of the initial configuration. Thus, both models give rise to a baroclinic development even if the initial state is quasi-barotropic. The effect of the variation of static stability is very small; qualitative differences in the development are observed only during the first 12 hours. For an amplifying wave, there is a stabilization over the troughs and a destabilization over the ridges.
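For orientation, the constant-stability two-layer quasi-geostrophic system that such a model extends can be written in its standard textbook form (the paper's variable-stability formulation generalizes this, so this is a reference point rather than the exact equation set):

\[
\frac{\partial q_i}{\partial t} + J(\psi_i, q_i) = 0, \qquad
q_i = \nabla^2 \psi_i + f + (-1)^{i}\,\lambda^2\,(\psi_1 - \psi_2), \quad i = 1, 2,
\]

where \(\psi_i\) is the streamfunction of layer \(i\), \(J\) is the Jacobian operator whose energy-conserving discretization is examined in section 6, and \(\lambda^{-1}\) is the Rossby radius of deformation set by the static stability.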
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, given the typical domain decomposition of atmospheric models, horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase in order to characterize the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered; for more complicated cases, amplification factors are generated numerically and the associated amplitudes and phases diagnosed. Analysis of a system describing acoustic waves necessitates attributing the three resultant eigenvalues to the three physical modes of the system; to do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode, and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping of the pair of acoustic modes. Frequency ranges spanning next-generation high-resolution weather models to coarse-resolution climate models are considered, and the errors accumulated over multiple stability-constrained shorter time-steps of a HEVI scheme are compared with those from a single integration of a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
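As a minimal illustration of the amplification-factor diagnosis described above (a sketch using first-order IMEX Euler as a stand-in, not one of the paper's Runge-Kutta schemes), consider the split oscillation equation dy/dt = i(ω_E + ω_I)y, with ω_E handled explicitly and ω_I implicitly, as a HEVI scheme handles horizontal and vertical terms:

```python
import numpy as np

def imex_euler_amplification(dt, omega_e, omega_i):
    """Amplification factor A of first-order IMEX Euler for
    dy/dt = i*(omega_e + omega_i)*y: explicit Euler on omega_e,
    implicit Euler on omega_i. |A| gives the damping per step and
    arg(A) the numerical phase."""
    return (1.0 + 1j * dt * omega_e) / (1.0 - 1j * dt * omega_i)

dt = 0.1
for omega_e, omega_i in [(1.0, 10.0), (0.5, 5.0)]:
    A = imex_euler_amplification(dt, omega_e, omega_i)
    exact_phase = dt * (omega_e + omega_i)  # phase of the exact solution
    print(f"omega_e={omega_e}, omega_i={omega_i}: "
          f"|A|={abs(A):.4f}, phase error={np.angle(A) - exact_phase:+.4f}")
```

The multi-stage schemes analysed in the paper replace this single-stage update, but the diagnosed quantities, amplitude and phase per step, are the same; the eigenvalue-tracking algorithms extend the idea from this scalar equation to the three coupled modes of the acoustic system.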
Abstract:
The high computational cost of calculating radiative heating rates in numerical weather prediction (NWP) and climate models requires that these calculations be made infrequently, leading to poor sampling of the fast-changing cloud field and a poor representation of the resulting cloud-radiation interaction. This paper presents two related schemes for improving the temporal sampling of the cloud field. Firstly, the ‘split time-stepping’ scheme takes advantage of the independent nature of the monochromatic calculations of the ‘correlated-k’ method to split the calculation into gaseous absorption terms that are highly sensitive to changes in cloud (the optically thin terms) and those that are not (the optically thick terms). The small number of optically thin terms can then be calculated more often, capturing changes in the grey absorption and scattering associated with cloud droplets and ice crystals. Secondly, the ‘incremental time-stepping’ scheme performs a simple radiative transfer calculation using only one or two monochromatic terms representing the optically thin part of the atmospheric spectrum. These are found to be sufficient to represent the heating-rate increments caused by changes in the cloud field, which can then be added to the last full calculation of the radiation code. We test these schemes in an operational forecast model configuration and find that a significant improvement is achieved, for a small computational cost, over the scheme currently employed at the Met Office. The ‘incremental time-stepping’ scheme is recommended for operational use, along with a new scheme that corrects the surface fluxes for the change in solar zenith angle between radiation calculations.
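A minimal sketch of the ‘incremental time-stepping’ idea, with toy stand-ins for the radiation code (the function names and the linear cloud response are illustrative assumptions, not the Met Office implementation):

```python
def full_radiation(cloud):
    """Toy stand-in for the full correlated-k calculation (K/day)."""
    return 100.0 - 30.0 * cloud

def thin_terms(cloud):
    """Toy stand-in for the one or two optically thin monochromatic
    terms that respond strongly to the fast-changing cloud field."""
    return -30.0 * cloud

def incremental_heating(step, full_interval, cloud, cache):
    """Run the full calculation every `full_interval` steps; in between,
    re-evaluate only the thin terms and add their increment to the
    cached full result."""
    if step % full_interval == 0:
        cache["full"] = full_radiation(cloud)
        cache["thin"] = thin_terms(cloud)
    return cache["full"] + (thin_terms(cloud) - cache["thin"])

cache = {}
for step, cloud_fraction in enumerate([0.20, 0.25, 0.50, 0.30]):
    q = incremental_heating(step, full_interval=3, cloud=cloud_fraction,
                            cache=cache)
    print(f"step {step}: heating rate ~ {q:.1f} K/day")
```

The ‘split time-stepping’ scheme follows the same pattern, except that the cheap path re-evaluates the full set of optically thin correlated-k terms rather than one or two representative monochromatic ones.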
Abstract:
Different optimization methods can be employed to optimize a numerical estimate for the match between an instantiated object model and an image. In order to take advantage of gradient-based optimization methods, perspective inversion must be used in this context. We show that convergence can be very fast by extrapolating to maximum goodness-of-fit with Newton's method. This approach is related to methods which either maximize a similar goodness-of-fit measure without use of gradient information, or else minimize distances between projected model lines and image features. Newton's method combines the accuracy of the former approach with the speed of convergence of the latter.
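A minimal sketch of Newton extrapolation to a goodness-of-fit maximum, with a one-dimensional toy fit measure standing in for the model-image match (the actual method operates on pose parameters obtained via perspective inversion):

```python
def newton_maximize(dg, d2g, x0, tol=1e-10, max_iter=50):
    """Newton iteration on the stationarity condition g'(x) = 0,
    extrapolating toward the maximum of a goodness-of-fit g."""
    x = x0
    for _ in range(max_iter):
        step = dg(x) / d2g(x)  # Newton step: g'(x) / g''(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Toy quadratic fit measure g(x) = 1 - (x - 2)^2, peaking at x = 2;
# Newton reaches the maximum in a single step, which is the speed
# advantage the abstract refers to near the optimum.
dg  = lambda x: -2.0 * (x - 2.0)  # g'(x)
d2g = lambda x: -2.0              # g''(x)
print(newton_maximize(dg, d2g, x0=0.0))  # -> 2.0
```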
Abstract:
The transport of stratospheric air deep into the troposphere via convection is investigated numerically using the UK Met Office Unified Model. A convective system that formed on 27 June 2004 near southeast England, in the vicinity of an upper-level potential vorticity anomaly and a lowered tropopause, provides the basis for analysis. Transport is diagnosed using a stratospheric tracer that can either be passed through or withheld from the model’s convective parameterization scheme. Three simulations are performed at increasingly fine resolutions, with horizontal grid lengths of 12, 4, and 1 km. In the 12 and 4 km simulations, tracer is transported deep into the troposphere by the parameterized convection. In the 1 km simulation, for which the convective parameterization is disengaged, deep transport is still accomplished, but with a much smaller magnitude. However, the 1 km simulation resolves stirring along the tropopause that does not exist in the coarser simulations. In all three simulations, the concentration of the deeply transported tracer is small, three orders of magnitude less than that of the shallow transport near the tropopause, most likely because of the efficient dilution of parcels in the lower troposphere.
Abstract:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1.0 km × 0.4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography, constructed from airborne laser altimetry (LiDAR) fused with digital map data, is used to run a benchmark simulation with each model. Two numerical experiments were then conducted to test the response of each model to topographic error and to uncertainty in friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and the science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and well resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information they contain. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and by numerical shocks; however, their effects are localised and do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, although the specific characteristics of the test site may mean that this does not hold more generally.
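For context on the friction parameterisation and inertia terms discussed above, the 2D shallow-water momentum balance that such codes solve, in full or reduced form, can be written with Manning friction as

\[
\underbrace{\frac{\partial \mathbf{q}}{\partial t} + \nabla\!\cdot\!\left(\frac{\mathbf{q}\otimes\mathbf{q}}{h}\right)}_{\text{inertia}} \;+\; g\,h\,\nabla(h + z) \;=\; -\,\frac{g\,n^2\,\lvert\mathbf{q}\rvert\,\mathbf{q}}{h^{7/3}},
\]

where \(h\) is the flow depth, \(\mathbf{q}\) the unit-width discharge, \(z\) the bed elevation, and \(n\) the Manning roughness coefficient whose uncertainty the second numerical experiment probes. This is the standard form rather than the exact equation set of each benchmarked code; some of the codes solve reduced forms that omit one or both inertia terms.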
Abstract:
The goal of this study is to evaluate the effect of mass lumping on the dispersion properties of four finite-element velocity/surface-elevation pairs used to approximate the linear shallow-water equations. For each pair, the dispersion relation obtained with the mass-lumping technique is computed and analysed for both gravity and Rossby waves. The dispersion relations are compared with those obtained for the consistent schemes (without lumping) and with the continuous case. The P0-P1, RT0 and P-P1 pairs are shown to preserve good dispersive properties when the mass matrix is lumped. Results from test problems simulating fast gravity and slow Rossby waves are in good agreement with the analytical results.
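For reference, the continuous dispersion relations against which the lumped and consistent discrete relations are compared take the standard forms

\[
\omega^2 = f^2 + gH\,(k^2 + l^2) \quad \text{(inertia-gravity waves)}, \qquad
\omega = \frac{-\,\beta k}{k^2 + l^2 + f^2/(gH)} \quad \text{(Rossby waves)},
\]

where \(f\) is the Coriolis parameter, \(\beta\) its meridional gradient, \(H\) the mean depth and \((k, l)\) the horizontal wavenumbers. A well-behaved pair is one whose discrete relation tracks these curves without spurious branches or flattening after the mass matrix is lumped.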
Abstract:
Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to span such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, one which takes advantage of computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.