772 results for PBL parameterization
Abstract:
Northern Hemisphere tropical cyclone (TC) activity is investigated in multiyear global climate simulations with the ECMWF Integrated Forecast System (IFS) at 10-km resolution forced by the observed records of sea surface temperature and sea ice. The results are compared to analogous simulations with the 16-, 39-, and 125-km versions of the model as well as observations. In the North Atlantic, mean TC frequency in the 10-km model is comparable to the observed frequency, whereas it is too low in the other versions. While spatial distributions of the genesis and track densities improve systematically with increasing resolution, the 10-km model displays a qualitatively more realistic simulation of the track density in the western subtropical North Atlantic. In the North Pacific, the TC count tends to be too high in the west and too low in the east for all resolutions. These model errors appear to be associated with errors in the large-scale environmental conditions that are fairly similar in this region for all model versions. The largest benefits of the 10-km simulation are the dramatically more accurate representation of the TC intensity distribution and the structure of the most intense storms. The model can generate a supertyphoon with a maximum surface wind speed of 68.4 m s−1. The life cycle of an intense TC comprises intensity fluctuations that occur in apparent connection with variations of the eyewall/rainband structure. These findings suggest that a hydrostatic model with cumulus parameterization and of high enough resolution could be efficiently used to simulate the TC intensity response (and the associated structural changes) to future climate change.
Abstract:
This paper presents a simple theory for the transformation of non-precipitating, shallow convection into precipitating, deep convective clouds. To make the pertinent point, a highly idealized system is considered, consisting only of shallow and deep convection without large-scale forcing. The transformation is described by an explicit coupling between these two types of convection. Shallow convection moistens and cools the atmosphere, whereas deep convection dries and warms it, destabilizing and stabilizing the system, respectively. Consequently, in their stand-alone modes, shallow convection grows perpetually, whereas deep convection simply damps: the former never reaches equilibrium, and the latter is never spontaneously generated. Coupling the modes together is the only way to reconcile these undesirable separate tendencies so that the convective system as a whole can remain in a stable periodic state under this idealized setting. Such coupling is a key missing element in current global atmospheric models. The energy-cycle description, as originally formulated by Arakawa and Schubert and presented herein, is suitable for direct implementation into models using a mass-flux parameterization, and would alleviate the current problems with the representation of these two types of convection in numerical models. The present theory also provides a pertinent framework for analyzing large-eddy simulations and cloud-resolving modelling.
Abstract:
Turbulent mixing in the thin ocean surface boundary layer (OSBL), which occupies the upper 100 m or so of the ocean, controls the exchange of heat and trace gases between the atmosphere and ocean. Here we show that current parameterizations of this turbulent mixing lead to systematic and substantial errors in the depth of the OSBL in global climate models, which then lead to biases in sea surface temperature. One reason, we argue, is that current parameterizations are missing key surface-wave processes that force Langmuir turbulence, which deepens the OSBL more rapidly than steady wind forcing. Scaling arguments are presented to identify two dimensionless parameters that measure the importance of wave forcing against wind forcing, and against buoyancy forcing. A global perspective on the occurrence of wave-forced turbulence is developed using reanalysis data to compute these parameters globally. The diagnostic study developed here suggests that the turbulent energy available for mixing the OSBL is underestimated without forcing by surface waves. Wave forcing, and hence Langmuir turbulence, could be important over wide areas of the ocean and in all seasons in the Southern Ocean. We conclude that surface-wave-forced Langmuir turbulence is an important process in the OSBL that requires parameterization.
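The abstract does not spell out the two dimensionless parameters; a common choice for the wind-versus-wave comparison (assumed here, following the usual Langmuir-turbulence scaling, not necessarily the paper's exact definition) is the turbulent Langmuir number:

```python
import math

def turbulent_langmuir_number(u_star_water, u_stokes):
    """La_t = sqrt(u*/u_s): ratio of the water-side wind friction velocity
    to the surface Stokes drift. Values well below ~1 indicate that wave
    forcing (Langmuir turbulence) dominates over direct wind shear."""
    return math.sqrt(u_star_water / u_stokes)

# Illustrative open-ocean values (assumptions, not from the paper):
la_t = turbulent_langmuir_number(u_star_water=0.01, u_stokes=0.10)
# la_t is about 0.32, well inside the wave-dominated regime
```

An analogous ratio of buoyancy forcing to Stokes forcing would play the role of the second parameter.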
Abstract:
The extent and thickness of the Arctic sea ice cover has decreased dramatically in the past few decades, with minima in sea ice extent in September 2007 and 2011; climate models did not predict this decline. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds form on Arctic sea ice during the melting season and their presence affects the heat and mass balances of the ice cover, mainly by decreasing the value of the surface albedo by up to 20%. We have developed a melt pond model suitable for forecasting the presence of melt ponds based on sea ice conditions. This model has been incorporated into the Los Alamos CICE sea ice model, the sea ice component of several IPCC climate models. Simulations for the period 1990 to 2007 are in good agreement with observed ice concentration. In comparison to simulations without ponds, the September ice volume is nearly 40% lower. Sensitivity studies within the range of uncertainty reveal that, of the parameters pertinent to the present melt pond parameterization and for our prescribed atmospheric and oceanic forcing, variations of optical properties and the amount of snowfall have the strongest impact on sea ice extent and volume. We conclude that melt ponds will play an increasingly important role in the melting of the Arctic ice cover and their incorporation in the sea ice component of Global Circulation Models is essential for accurate future sea ice forecasts.
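The albedo effect of ponds can be illustrated with a simple area-weighted mixing sketch (the CICE scheme itself is more elaborate; all numbers below are illustrative assumptions, not values from the paper):

```python
def effective_albedo(alpha_ice, alpha_pond, pond_fraction):
    """Area-weighted albedo of a partly ponded sea-ice surface,
    assuming simple linear mixing of the two surface types."""
    return (1.0 - pond_fraction) * alpha_ice + pond_fraction * alpha_pond

# Typical illustrative values: bare melting ice ~0.65, mature pond ~0.25
alpha = effective_albedo(alpha_ice=0.65, alpha_pond=0.25, pond_fraction=0.30)
# 30% pond cover lowers the albedo from 0.65 to 0.53, an ~18% relative
# reduction, of the same order as the up-to-20% figure cited above
```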
Abstract:
The evaluation of the quality and usefulness of climate modeling systems is dependent upon an assessment of both the limited predictability of the climate system and the uncertainties stemming from model formulation. In this study a methodology is presented that is suited to assess the performance of a regional climate model (RCM), based on its ability to represent the natural interannual variability on monthly and seasonal timescales. The methodology involves carrying out multiyear ensemble simulations (to assess the predictability bounds within which the model can be evaluated against observations) and multiyear sensitivity experiments using different model formulations (to assess the model uncertainty). As an example application, experiments driven by assimilated lateral boundary conditions and sea surface temperatures from the ECMWF Reanalysis Project (ERA-15, 1979–1993) were conducted. While the ensemble experiment demonstrates that the predictability of the regional climate varies strongly between different seasons and regions, being weakest during the summer and over continental regions, important sensitivities of the modeling system to parameterization choices are uncovered. In particular, compensating mechanisms related to the long-term representation of the water cycle are revealed, in which summer dry and hot conditions at the surface, resulting from insufficient evaporation, can persist despite insufficient net solar radiation (a result of unrealistic cloud-radiative feedbacks).
Abstract:
Ecosystem fluxes of energy, water, and CO2 result in spatial and temporal variations in atmospheric properties. In principle, these variations can be used to quantify the fluxes through inverse modelling of atmospheric transport, and can improve the understanding of processes and falsifiability of models. We investigated the influence of ecosystem fluxes on atmospheric CO2 in the vicinity of the WLEF-TV tower in Wisconsin using an ecophysiological model (Simple Biosphere, SiB2) coupled to an atmospheric model (Regional Atmospheric Modelling System). Model parameters were specified from satellite imagery and soil texture data. In a companion paper, simulated fluxes in the immediate tower vicinity have been compared to eddy covariance fluxes measured at the tower, with meteorology specified from tower sensors. Results were encouraging with respect to the ability of the model to capture observed diurnal cycles of fluxes. Here, the effects of fluxes in the tower footprint were also investigated by coupling SiB2 to a high-resolution atmospheric simulation, so that the model physiology could affect the meteorological environment. These experiments were successful in reproducing observed fluxes and concentration gradients during the day and at night, but revealed problems during transitions at sunrise and sunset that appear to be related to the canopy radiation parameterization in SiB2.
Abstract:
This Atlas presents statistical analyses of the simulations submitted to the Aqua-Planet Experiment (APE) data archive. The simulations are from global Atmospheric General Circulation Models (AGCM) applied to a water-covered earth. The AGCMs include ones actively used or being developed for numerical weather prediction or climate research. Some are mature application models; others are more novel and thus less well tested in Earth-like applications. The experiment applies AGCMs with their complete parameterization package to an idealization of the planet Earth with a greatly simplified lower boundary that consists of an ocean only. It has no land and its associated orography, and no sea ice. The ocean is represented by Sea Surface Temperatures (SST) which are specified everywhere with simple, idealized distributions. Thus in the hierarchy of tests available for AGCMs, APE falls between tests with simplified forcings such as those proposed by Held and Suarez (1994) and Boer and Denis (1997) and Earth-like simulations of the Atmospheric Modeling Intercomparison Project (AMIP, Gates et al., 1999). Blackburn and Hoskins (2013) summarize the APE and its aims. They discuss where the APE fits within a modeling hierarchy which has evolved to evaluate complete models and which provides a link between realistic simulation and conceptual models of atmospheric phenomena. The APE bridges a gap in the existing hierarchy. The goals of APE are to provide a benchmark of current model behaviors and to stimulate research to understand the cause of inter-model differences. APE is sponsored by the World Meteorological Organization (WMO) joint Commission on Atmospheric Science (CAS), World Climate Research Program (WCRP) Working Group on Numerical Experimentation (WGNE). Chapter 2 of this Atlas provides an overview of the specification of the eight APE experiments and of the data collected. Chapter 3 lists the participating models and includes brief descriptions of each.
Chapters 4 through 7 present a wide variety of statistics from the 14 participating models for the eight different experiments. Additional intercomparison figures created by Dr. Yukiko Yamada of the AGU group are available at http://www.gfd-dennou.org/library/ape/comparison/. This Atlas is intended to present and compare the statistics of the APE simulations but does not contain a discussion of interpretive analyses. Such analyses are left for journal papers such as those included in the Special Issue of the Journal of the Meteorological Society of Japan (2013, Vol. 91A) devoted to the APE. Two papers in that collection provide an overview of the simulations. One (Blackburn et al., 2013) concentrates on the CONTROL simulation and the other (Williamson et al., 2013) on the response to changes in the meridional SST profile. Additional papers provide more detailed analysis of the basic simulations, while others describe various sensitivities and applications. The APE database holds a wealth of data that is now publicly available from the APE web site: http://climate.ncas.ac.uk/ape/. We hope that this Atlas will stimulate future analyses and investigations to understand the large variation seen in the model behaviors.
Abstract:
Wave-activity conservation laws are key to understanding wave propagation in inhomogeneous environments. Their most general formulation follows from the Hamiltonian structure of geophysical fluid dynamics. For large-scale atmospheric dynamics, the Eliassen–Palm wave activity is a well-known example and is central to theoretical analysis. On the mesoscale, while such conservation laws have been worked out in two dimensions, their application to a horizontally homogeneous background flow in three dimensions fails because of a degeneracy created by the absence of a background potential vorticity gradient. Earlier three-dimensional results based on linear WKB theory considered only Doppler-shifted gravity waves, not waves in a stratified shear flow. Consideration of a background flow depending only on altitude is motivated by the parameterization of subgrid-scales in climate models where there is an imposed separation of horizontal length and time scales, but vertical coupling within each column. Here we show how this degeneracy can be overcome and wave-activity conservation laws derived for three-dimensional disturbances to a horizontally homogeneous background flow. Explicit expressions for pseudoenergy and pseudomomentum in the anelastic and Boussinesq models are derived, and it is shown how the previously derived relations for the two-dimensional problem can be treated as a limiting case of the three-dimensional problem. The results also generalize earlier three-dimensional results in that there is no slowly varying WKB-type requirement on the background flow, and the results are extendable to finite amplitude. The relationship A_E = cA_P between pseudoenergy A_E and pseudomomentum A_P, where c is the horizontal phase speed in the direction of symmetry associated with A_P, has important applications to gravity-wave parameterization and provides a generalized statement of the first Eliassen–Palm theorem.
Abstract:
The very first numerical models, developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short predictions. It was evident that both the energy-generating processes and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made during the last decade in particular to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extended over more than a few days. The mechanisms of these processes are mainly related to small-scale disturbances in space and time, or even molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. The reason for this is the discretization of the model's atmosphere by a finite difference grid or the use of a Galerkin or spectral function representation.
The second reason why we cannot explicitly introduce these processes into a numerical model is that some physical mechanisms necessary to describe them (such as local buoyancy) are eliminated a priori by the constraint of hydrostatic adjustment. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.
Abstract:
It is shown that under reasonable assumptions, conservation of angular momentum provides a strong constraint on gravity wave drag feedbacks to radiative perturbations in the middle atmosphere. In the time mean, radiatively induced temperature perturbations above a given altitude z cannot induce changes in zonal mean wind and temperature below z through feedbacks in gravity wave drag alone (assuming an unchanged gravity wave source spectrum). Thus, despite the many uncertainties in the parameterization of gravity wave drag, the role of gravity wave drag in middle-atmosphere climate perturbations may be much more limited than its role in climate itself. This constraint limits the possibilities for downward influence from the mesosphere. In order for a gravity wave drag parameterization to respect the momentum constraint and avoid spurious downward influence, any nonzero parameterized momentum flux at a model lid must be deposited within the model domain, and there must be no zonal mean sponge layer. Examples are provided of how violation of these conditions leads to spurious downward influence. For planetary waves, the momentum constraint does not prohibit downward influence, but it limits the mechanisms by which it can occur: in the time mean, downward influence from a radiative perturbation can only arise through changes in reflection and meridional propagation properties of planetary waves.
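The requirement that any momentum flux remaining at the model lid be deposited within the domain can be sketched numerically (a minimal illustration of the constraint, not any operational scheme; the column values are hypothetical):

```python
import numpy as np

def gw_drag_with_lid_deposition(flux, rho, dz):
    """Zonal-mean drag (m s^-2) implied by a parameterized gravity-wave
    momentum-flux profile. flux[k] is the flux (Pa) at layer interfaces
    (k = 0 at the surface, k = -1 at the model lid); rho and dz describe
    the layers. Any flux still nonzero at the lid is deposited in the top
    layer, so the column absorbs exactly the momentum launched at the
    source and no momentum leaks out of the model domain."""
    flux = np.asarray(flux, dtype=float).copy()
    lid_flux = flux[-1]
    flux[-1] = 0.0                       # deposit residual flux in the column
    drag = -np.diff(flux) / (rho * dz)   # one drag value per layer
    return drag, lid_flux

# Hypothetical four-layer column (all values illustrative only):
rho = np.array([1.0, 0.7, 0.4, 0.2])                   # kg m^-3
dz = 2000.0                                            # m
flux = np.array([0.003, 0.003, 0.002, 0.001, 0.0005])  # Pa at 5 interfaces
drag, lid = gw_drag_with_lid_deposition(flux, rho, dz)
# Column-integrated tendency, sum(drag * rho * dz), equals the 0.003 Pa
# launched at the surface: momentum is conserved within the model domain.
```

Skipping the lid deposition (or relaxing winds in a sponge layer) breaks this budget, which is the mechanism of spurious downward influence described above.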
Abstract:
Rigorous upper bounds are derived that limit the finite-amplitude growth of arbitrary nonzonal disturbances to an unstable baroclinic zonal flow in a continuously stratified, quasi-geostrophic, semi-infinite fluid. Bounds are obtained both on the depth-integrated eddy potential enstrophy and on the eddy available potential energy (APE) at the ground. The method used to derive the bounds is essentially analogous to that used in Part I of this study for the two-layer model: it relies on the existence of a nonlinear Liapunov (normed) stability theorem, which is a finite-amplitude generalization of the Charney-Stern theorem. As in Part I, the bounds are valid both for conservative (unforced, inviscid) flow, as well as for forced-dissipative flow when the dissipation is proportional to the potential vorticity in the interior, and to the potential temperature at the ground. The character of the results depends on the dimensionless external parameter γ = f₀²ξ/(β₀N²H), where ξ is the maximum vertical shear of the zonal wind, H is the density scale height, and the other symbols have their usual meaning. When γ ≫ 1, corresponding to "deep" unstable modes (vertical scale ≈ H), the bound on the eddy potential enstrophy is just the total potential enstrophy in the system; but when γ ≪ 1, corresponding to "shallow" unstable modes (vertical scale ≈ γH), the eddy potential enstrophy can be bounded well below the total amount available in the system. In neither case can the bound on the eddy APE prevent a complete neutralization of the surface temperature gradient, which is in accord with numerical experience. For the special case of the Charney model of baroclinic instability, and in the limit of infinitesimal initial eddy disturbance amplitude, the bound states that the dimensionless eddy potential enstrophy cannot exceed (γ + 1)²/(24γ²h) when γ ≥ 1, or 1/(6γh) when γ ≤ 1; here h = HN/(f₀L) is the dimensionless scale height and L is the width of the channel.
These bounds are very similar to (though of course generally larger than) ad hoc estimates based on baroclinic-adjustment arguments. The possibility of using these kinds of bounds for eddy-amplitude closure in a transient-eddy parameterization scheme is also discussed.
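The closed-form bound for the Charney case is easy to evaluate; the sketch below (parameter values are illustrative only) also checks that the two branches agree at γ = 1:

```python
def enstrophy_bound(gamma, h):
    """Upper bound on the dimensionless eddy potential enstrophy for the
    Charney problem with infinitesimal initial eddy amplitude, as quoted
    in the abstract: (gamma + 1)^2 / (24 gamma^2 h) for gamma >= 1, and
    1 / (6 gamma h) for gamma <= 1, with h = H N / (f0 L)."""
    if gamma >= 1.0:
        return (gamma + 1.0) ** 2 / (24.0 * gamma ** 2 * h)
    return 1.0 / (6.0 * gamma * h)

# The two branches match at gamma = 1: both reduce to 1 / (6 h).
deep = enstrophy_bound(5.0, 2.0)     # "deep" regime, gamma >> 1
shallow = enstrophy_bound(0.2, 2.0)  # "shallow" regime, gamma << 1
```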
Abstract:
Interactions between different convection modes can be investigated using an energy-cycle description under a framework of mass-flux parameterization. The present paper systematically investigates this system by taking a limit of two modes: shallow and deep convection. Shallow convection destabilizes itself as well as the other convective modes by moistening and cooling the environment, whereas deep convection stabilizes itself as well as the other modes by drying and warming the environment. As a result, shallow convection leads to a runaway growth process in its stand-alone mode, whereas deep convection simply damps out. Interaction between these two convective modes becomes a rich problem, even when it is limited to the case with no large-scale forcing, because of these opposing tendencies. Only if the two modes are coupled at a proper level can a self-sustaining system arise, exhibiting a periodic cycle. The present study establishes the conditions for self-sustaining periodic solutions. It carefully documents the behaviour of the two-mode system in order to facilitate the interpretation of global model behaviours when this energy cycle is implemented as a closure in a convection parameterization in the future.
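The interaction described here has the structure of a predator-prey system: shallow convection grows on its own and feeds deep convection, while deep convection decays on its own and suppresses shallow convection. A minimal Lotka-Volterra-type analogue (an illustration of the qualitative behaviour, not the paper's actual energy-cycle equations) reproduces the self-sustaining periodic cycle:

```python
import numpy as np

def two_mode_cycle(S0, D0, a=1.0, b=1.0, c=1.0, d=1.0, dt=1e-3, steps=20000):
    """Lotka-Volterra-type analogue of the shallow/deep energy cycle:
        dS/dt = S (a - b D)   # shallow: grows alone, suppressed by deep
        dD/dt = D (-c + d S)  # deep: decays alone, fed by shallow
    Integrated with a midpoint (RK2) scheme; returns the trajectory."""
    S, D = S0, D0
    traj = []
    for _ in range(steps):
        Sm = S + 0.5 * dt * S * (a - b * D)      # half-step estimates
        Dm = D + 0.5 * dt * D * (-c + d * S)
        S += dt * Sm * (a - b * Dm)              # full step with midpoint slopes
        D += dt * Dm * (-c + d * Sm)
        traj.append((S, D))
    return np.array(traj)

# Coupled, the system cycles around the fixed point (S, D) = (c/d, a/b);
# uncoupled (b = d = 0), S would grow without bound and D would decay to zero.
traj = two_mode_cycle(0.5, 0.5)
```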
Abstract:
It has been argued that extended exposure to naturalistic input provides L2 learners with more of an opportunity to converge on target morphosyntactic competence as compared to classroom-only environments, given that the former provide more positive evidence of less salient linguistic properties than the latter (e.g., Isabelli 2004). Implicitly, the claim is that such exposure is needed to fully reset parameters. However, such a position conflicts with the notion of parameterization (cf. Rothman and Iverson 2007). In light of two types of competing generative theories of adult L2 acquisition, the No Impairment Hypothesis (e.g., Duffield and White 1999) and so-called Failed Features approaches (e.g., Beck 1998; Franceschina 2001; Hawkins and Chan 1997), we investigate the verifiability of such a claim. Thirty intermediate L2 Spanish learners were tested with regard to properties of the Null-Subject Parameter before and after study abroad. The data suggest that (i) parameter resetting is possible and (ii) exposure to naturalistic input is not privileged.
Abstract:
We evaluate the effects of spatial resolution on the ability of a regional climate model to reproduce observed extreme precipitation for a region in the Southwestern United States. A total of 73 National Climate Data Center observational sites spread throughout Arizona and New Mexico are compared with regional climate simulations at the spatial resolutions of 50 km and 10 km for a 31 year period from 1980 to 2010. We analyze mean, 3-hourly and 24-hourly extreme precipitation events using WRF regional model simulations driven by NCEP-2 reanalysis. The mean climatological spatial structure of precipitation in the Southwest is well represented by the 10 km resolution but missing in the coarse (50 km resolution) simulation. However, the fine grid has a larger positive bias in mean summer precipitation than the coarse-resolution grid. The large overestimation in the simulation is in part due to scale-dependent deficiencies in the Kain-Fritsch convective parameterization scheme that generate excessive precipitation and induce a slow eastward propagation of the moist convective summer systems in the high-resolution simulation. Despite this overestimation in the mean, the 10 km simulation captures individual extreme summer precipitation events better than the 50 km simulation. In winter, however, the two simulations appear to perform equally in simulating extremes.
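The 3-hourly versus 24-hourly extreme statistics compared in the study can be illustrated on synthetic data (random gamma-distributed totals standing in for the station records; the distribution parameters and wet-day cutoff are assumptions, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 3-hourly precipitation (mm) for 31 years, 8 values per day,
# standing in for the NCDC station records used in the study
precip_3h = rng.gamma(shape=0.2, scale=2.0, size=31 * 365 * 8)

# 24-hourly totals: sum non-overlapping blocks of eight 3-h values
precip_24h = precip_3h.reshape(-1, 8).sum(axis=1)

# Extreme-event thresholds, e.g. the 99th percentile of wet intervals
WET = 0.1  # mm, assumed wet/dry cutoff
p99_3h = np.percentile(precip_3h[precip_3h > WET], 99)
p99_24h = np.percentile(precip_24h[precip_24h > WET], 99)
```

Applying the same thresholding to the 50-km and 10-km model output interpolated to the station locations would yield the bias comparison described above.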
Abstract:
This paper describes the techniques used to obtain sea surface temperature (SST) retrievals from the Geostationary Operational Environmental Satellite 12 (GOES-12) at the National Oceanic and Atmospheric Administration’s Office of Satellite Data Processing and Distribution. Previous SST retrieval techniques relying on channels at 11 and 12 μm are not applicable because GOES-12 lacks the latter channel. Cloud detection is performed using a Bayesian method exploiting fast-forward modeling of prior clear-sky radiances using numerical weather predictions. The basic retrieval algorithm used at nighttime is based on a linear combination of brightness temperatures at 3.9 and 11 μm. In comparison with traditional split-window SSTs (using 11- and 12-μm channels), simulations show that this combination has maximum scatter when observing drier, colder scenes, with a comparable overall performance. For daytime retrieval, the same algorithm is applied after estimating and removing the contribution to brightness temperature in the 3.9-μm channel from solar irradiance. The correction is based on radiative transfer simulations and comprises a parameterization for atmospheric scattering and a calculation of ocean surface reflected radiance. Potential use of the 13-μm channel for SST is shown in a simulation study: in conjunction with the 3.9-μm channel, it can reduce the retrieval error by 30%. Some validation results are shown, while a companion paper by Maturi et al. presents a detailed analysis of the validation results for the operational algorithms described in the present article.
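The nighttime algorithm form described above is a linear combination of the two brightness temperatures; a minimal sketch follows, where the coefficients are illustrative placeholders only (the operational values are regressed against in-situ buoy matchups and are not given in the abstract):

```python
def sst_nighttime(bt39, bt11, a0=-0.5, a1=0.40, a2=0.62):
    """Nighttime SST (K) from a linear combination of the 3.9-um and
    11-um brightness temperatures (K), the algorithm form described in
    the abstract. Coefficients a0, a1, a2 are placeholder assumptions,
    NOT the operational GOES-12 values."""
    return a0 + a1 * bt39 + a2 * bt11

# Illustrative clear-sky tropical scene (hypothetical values):
sst = sst_nighttime(bt39=291.2, bt11=290.1)
```

A daytime version would first subtract the modeled solar contribution from the 3.9-μm brightness temperature before applying the same combination.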