226 results for subgrid-scale model
in CentAUR: Central Archive, University of Reading - UK
Abstract:
The subgrid-scale spatial variability in cloud water content can be described by a parameter f called the fractional standard deviation, equal to the standard deviation of the cloud water content divided by the mean. This parameter is an input to schemes that calculate the impact of subgrid-scale cloud inhomogeneity on gridbox-mean radiative fluxes and microphysical process rates. A new regime-dependent parametrization of the spatial variability of cloud water content is derived from CloudSat observations of ice clouds. In addition to the dependencies on horizontal and vertical resolution and cloud fraction included in previous parametrizations, the new parametrization includes an explicit dependence on cloud type. The new parametrization is then implemented in the Global Atmosphere 6 (GA6) configuration of the Met Office Unified Model and used to model the effects of subgrid variability of both ice and liquid water content on radiative fluxes and on autoconversion and accretion rates in three 20-year atmosphere-only climate simulations. These simulations show the impact of the new regime-dependent parametrization in diagnostic radiation calculations, in interactive radiation calculations, and in both interactive radiation calculations and a new warm microphysics scheme. The control simulation uses a globally constant f value of 0.75 to model the effect of cloud water content variability on radiative fluxes. The use of the new regime-dependent parametrization results in a global mean value of f which is higher than the control's fixed value and a global distribution of f which is closer to CloudSat observations. When the new parametrization is used in radiative transfer calculations only, the magnitudes of the short-wave and long-wave top-of-atmosphere cloud radiative forcing are reduced, increasing the existing global mean biases in the control. When it is also applied in a new warm microphysics scheme, the short-wave global mean bias is reduced.
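As a minimal illustration (not the paper's CloudSat-derived, regime-dependent parametrization), the fractional standard deviation f can be computed from a sample of in-cloud water contents as the standard deviation divided by the mean:

```python
import statistics

def fractional_std(cloud_water_contents):
    """Fractional standard deviation f: standard deviation of the in-cloud
    water content divided by its mean. Illustrative sketch only."""
    mean = statistics.fmean(cloud_water_contents)
    if mean == 0:
        raise ValueError("mean cloud water content must be non-zero")
    return statistics.pstdev(cloud_water_contents) / mean

# Hypothetical in-cloud ice water contents (arbitrary units) in one gridbox
iwc = [0.2, 0.5, 0.9, 1.4, 2.0]
f = fractional_std(iwc)
```

A homogeneous gridbox gives f = 0; more inhomogeneous cloud gives larger f, which is why a single global constant (such as the control's 0.75) cannot capture regime-to-regime differences.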
Abstract:
A theoretical framework for the joint conservation of energy and momentum in the parameterization of subgrid-scale processes in climate models is presented. The framework couples a hydrostatic resolved (planetary-scale) flow to a nonhydrostatic subgrid-scale (mesoscale) flow. The temporal and horizontal spatial scale separation between the planetary scale and mesoscale is imposed using multiple-scale asymptotics. Energy and momentum are exchanged through subgrid-scale flux convergences of heat, pressure, and momentum. The generation and dissipation of subgrid-scale energy and momentum are understood using wave-activity conservation laws, derived by exploiting the (mesoscale) temporal and horizontal spatial homogeneities of the planetary-scale flow. The relations between these conservation laws and the planetary-scale dynamics represent generalized nonacceleration theorems. A derived relationship between the wave-activity fluxes, which represents a generalization of the second Eliassen-Palm theorem, is key to ensuring consistency between energy and momentum conservation. The framework includes a consistent formulation of heating and entropy production due to kinetic energy dissipation.
Abstract:
We report on a numerical study of the impact of short, fast inertia-gravity waves on the large-scale, slowly evolving flow with which they coexist. A nonlinear quasi-geostrophic numerical model of a stratified shear flow is used to simulate, at reasonably high resolution, the evolution of a large-scale mode which grows due to baroclinic instability and equilibrates at finite amplitude. Ageostrophic inertia-gravity modes are filtered out of the model by construction, but their effects on the balanced flow are incorporated using a simple stochastic parameterization of the potential vorticity anomalies which they induce. The model simulates a rotating, two-layer annulus laboratory experiment in which we recently observed systematic inertia-gravity wave generation by an evolving large-scale flow. We find that the impact of the small-amplitude stochastic contribution to the potential vorticity tendency on the model's balanced flow is generally small, as expected. In certain circumstances, however, the parameterized fast waves can exert a dominant influence. In a flow which is baroclinically unstable to a range of zonal wavenumbers, and in which there is a close match between the growth rates of the multiple modes, the stochastic waves can strongly affect wavenumber selection. This is illustrated by a flow in which the parameterized fast modes dramatically re-partition the probability density function for the equilibrated large-scale zonal wavenumber. In a second case study, the stochastic perturbations are shown to force spontaneous wavenumber transitions in the large-scale flow which do not occur in their absence. These phenomena are due to a stochastic resonance effect. They add to the evidence that deterministic parameterizations of subgrid-scale processes such as gravity wave drag in general circulation models cannot always adequately capture the full details of the nonlinear interaction.
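The stochastic-resonance mechanism invoked here can be sketched with a standard toy model (a weakly forced, noisy double-well system, not the annulus model itself): the weak periodic forcing alone cannot push the state over the potential barrier, but small stochastic perturbations trigger transitions between the two states, analogous to the noise-induced wavenumber transitions described above.

```python
import math
import random

def count_transitions(noise_amp, steps=50000, dt=1e-3, seed=1):
    """Overdamped double-well with weak periodic forcing:
        dx = (x - x**3 + A*cos(omega*t)) dt + noise_amp*sqrt(dt)*N(0, 1).
    Counts crossings of the barrier at x = 0. With A = 0.2 the forcing is
    subthreshold, so the noise-free system never leaves its starting well."""
    rng = random.Random(seed)
    a_forcing, omega = 0.2, 2.0 * math.pi * 0.05
    x, side, transitions = -1.0, -1, 0
    for n in range(steps):
        t = n * dt
        drift = x - x**3 + a_forcing * math.cos(omega * t)
        x += drift * dt + noise_amp * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x * side < 0:  # crossed into the other well
            side = -side
            transitions += 1
    return transitions
```

With noise_amp = 0 the state stays trapped in one well; with modest noise (e.g. noise_amp = 1.2) transitions occur, and near an optimal noise level they synchronize with the forcing, which is the signature of stochastic resonance.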
Abstract:
The unsaturated zone exerts a major control on the delivery of nutrients to Chalk streams, yet flow and transport processes in this complex, dual-porosity medium have remained controversial. A major challenge arises in characterising these processes, both at the detailed mechanistic level and at an appropriate level for inclusion within catchment-scale models for nutrient management. The Lowland Catchment Research (LOCAR) programme in the UK has provided a unique set of comprehensively instrumented groundwater-dominated catchments. Of these, the Pang and Lambourn, tributaries of the Thames near Reading, have been a particular focus for research into subsurface processes and surface water-groundwater interactions. Data from LOCAR and other sources, along with a new dual-permeability numerical model of the Chalk, have been used to explore the relative roles of matrix and fracture flow within the unsaturated zone and to resolve conflicting hypotheses of response. From the improved understanding gained through these explorations, a parsimonious conceptualisation of the general response of flow and transport within the Chalk unsaturated zone was formulated. This paper summarises the modelling and data findings of these explorations, and describes the integration of the new simplified unsaturated-zone representation with a catchment-scale model of nutrients (INCA), resulting in a new model for catchment-scale flow and transport within Chalk systems: INCA-Chalk. This model is applied to the Lambourn, and results, including hindcast and forecast simulations, are presented. These clearly illustrate the decadal time-scales that need to be considered in the context of nutrient management and the EU Water Framework Directive.
Abstract:
Measurements of anthropogenic tracers such as chlorofluorocarbons and tritium must be quantitatively combined with ocean general circulation models as a component of systematic model development. The authors have developed and tested an inverse method, using a Green's function, to constrain general circulation models with transient tracer data. Using this method, chlorofluorocarbon-11 and -12 (CFC-11 and CFC-12) observations are combined with a North Atlantic configuration of the Miami Isopycnic Coordinate Ocean Model with 4/3° resolution. Systematic differences can be seen between the observed CFC concentrations and the prior CFC fields simulated by the model. These differences are reduced by the inversion, which determines the optimal gas transfer across the air-sea interface, accounting for uncertainties in the tracer observations. After including the effects of unresolved variability in the CFC fields, the model is found to be inconsistent with the observations because the model/data misfit slightly exceeds the error estimates. By excluding observations in waters ventilated north of the Greenland-Scotland ridge (σ0 < 27.82 kg m⁻³; shallower than about 2000 m), the fit is improved, indicating that the Nordic overflows are poorly represented in the model. Some systematic differences in the model/data residuals remain and are related, in part, to excessively deep model ventilation near Rockall and deficient ventilation in the main thermocline of the eastern subtropical gyre. Nevertheless, there do not appear to be gross errors in the basin-scale model circulation. Analysis of the CFC inventory using the constrained model suggests that the North Atlantic Ocean shallower than about 2000 m was nearly 20% saturated in the mid-1990s. Overall, this basin is a sink for 22% of the total atmosphere-to-ocean CFC-11 flux, twice the global average value. The average water mass formation rates over the CFC transient are 7.0 and 6.0 Sv (1 Sv = 10⁶ m³ s⁻¹) for subtropical mode water and subpolar mode water, respectively.
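The inverse step can be sketched as a weighted least-squares problem: the observations (minus the prior model field) are matched by a linear combination of Green's-function responses, weighted by the observational error estimates. The numbers and the two-parameter restriction below are purely illustrative, not taken from the paper.

```python
def weighted_least_squares(G, d, sigma):
    """Solve min_m sum_i ((d_i - (G m)_i) / sigma_i)**2 via normal equations.
    Schematic Green's-function inversion: each column of G is a model
    response to a perturbed control (e.g. air-sea gas transfer); d is the
    observation-minus-prior misfit; sigma holds observational errors."""
    n, p = len(G), len(G[0])
    # A = G^T W G and b = G^T W d, with W = diag(1 / sigma**2)
    A = [[sum(G[i][j] * G[i][k] / sigma[i] ** 2 for i in range(n))
          for k in range(p)] for j in range(p)]
    b = [sum(G[i][j] * d[i] / sigma[i] ** 2 for i in range(n))
         for j in range(p)]
    # 2x2 solve by Cramer's rule, keeping the sketch dependency-free
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

# Exactly consistent synthetic data (true m = [2, 3]) is recovered
G = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
m_opt = weighted_least_squares(G, [2.0, 3.0, 5.0], [1.0, 1.0, 1.0])
```

Down-weighting (or, as in the paper, excluding) observations with large residual errors changes the optimum, which is how the poorly represented Nordic overflow waters were handled.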
Abstract:
Process-based integrated modelling of weather and crop yield over large areas is becoming an important research topic. The production of the DEMETER ensemble hindcasts of weather allows this work to be carried out in a probabilistic framework. In this study, ensembles of crop yield (groundnut, Arachis hypogaea L.) were produced for ten 2.5° × 2.5° grid cells in western India using the DEMETER ensembles and the general large-area model (GLAM) for annual crops. Four key issues are addressed by this study. First, crop model calibration methods for use with weather ensemble data are assessed. Calibration using yield ensembles was more successful than calibration using reanalysis data (the European Centre for Medium-Range Weather Forecasts 40-year reanalysis, ERA40). Secondly, the potential for probabilistic forecasting of crop failure is examined. The hindcasts show skill in the prediction of crop failure, with more severe failures being more predictable. Thirdly, the use of yield ensemble means to predict interannual variability in crop yield is examined and their skill assessed relative to baseline simulations using ERA40. The accuracy of multi-model yield ensemble means is equal to or greater than the accuracy using ERA40. Fourthly, the impact of two key uncertainties, sowing window and spatial scale, is briefly examined. The impact of uncertainty in the sowing window is greater with ERA40 than with the multi-model yield ensemble mean. Subgrid heterogeneity affects model accuracy: where correlations are low on the grid scale, they may be significantly positive on the subgrid scale. The implications of the results of this study for yield forecasting on seasonal time-scales are as follows. (i) There is potential for probabilistic forecasting of crop failure (defined by a threshold yield value); forecasting of yield terciles shows less potential. (ii) Any improvement in the skill of climate models has the potential to translate into improved deterministic yield prediction. (iii) Whilst model input uncertainties are important, uncertainty in the sowing window may not require specific modelling. The implications for yield forecasting on multidecadal (climate change) time-scales are as follows. (i) The skill in the ensemble mean suggests that the perturbation, within uncertainty bounds, of crop and climate parameters could potentially average out some of the errors associated with mean yield prediction. (ii) For a given technology trend, decadal fluctuations in the yield-gap parameter used by GLAM may be relatively small, implying some predictability on those time-scales.
Abstract:
We investigate the spatial characteristics of urban-like canopy flow by applying particle image velocimetry (PIV) to atmospheric turbulence. The study site was the Comprehensive Outdoor Scale MOdel (COSMO) experiment for urban climate in Japan. The PIV system captured the two-dimensional flow field within the canopy layer continuously for an hour with a sampling frequency of 30 Hz, thereby providing reliable outdoor turbulence statistics. PIV measurements in a wind-tunnel facility using similar roughness geometry, but with a lower sampling frequency of 4 Hz, were also performed for comparison. The turbulent momentum flux from COSMO and the wind tunnel showed similar values and distributions when scaled using friction velocity. Some differences between the outdoor and indoor flow fields were mainly caused by the larger fluctuations in wind direction in the atmospheric turbulence. The focus of the analysis is on a variety of instantaneous turbulent flow structures. One remarkable flow structure is termed 'flushing': a large-scale upward motion prevailing across the whole vertical cross-section of a building gap. This is observed intermittently, whereby tracer particles are flushed vertically out of the canopy layer. Flushing phenomena are also observed in the wind tunnel, where there is neither thermal stratification nor outer-layer turbulence. It is suggested that flushing phenomena are correlated with the passage of large-scale low-momentum regions above the canopy.
Abstract:
The activation of aerosols to form cloud droplets is dependent upon vertical velocities whose local variability is not typically resolved at the GCM grid scale. Consequently, it is necessary to represent the subgrid-scale variability of vertical velocity in the calculation of cloud droplet number concentration. This study uses the UK Chemistry and Aerosols community model (UKCA) within the Hadley Centre Global Environmental Model (HadGEM3), coupled for the first time to an explicit aerosol activation parameterisation and hence known as UKCA-Activate. We explore the range of uncertainty in estimates of the indirect aerosol effects attributable to the choice of parameterisation of the subgrid-scale variability of vertical velocity in HadGEM-UKCA. The simulations demonstrate that the use of a single characteristic vertical velocity cannot replicate results derived with a distribution of vertical velocities, and is to be discouraged in GCMs. This study focuses on the effect of the variance (σw²) of a Gaussian pdf (probability density function) of vertical velocity. Fixed values of σw (spanning the range measured in situ by nine flight campaigns found in the literature) and a configuration in which σw depends on turbulent kinetic energy (TKE) are tested. Results from the mid-range fixed-σw and TKE-based configurations both compare well with observed vertical velocity distributions and cloud droplet number concentrations. The radiative flux perturbation due to the total effects of anthropogenic aerosol is estimated at −1.9 W m−2 with σw = 0.1 m s−1, −2.1 W m−2 with σw derived from TKE, −2.25 W m−2 with σw = 0.4 m s−1, and −2.3 W m−2 with σw = 0.7 m s−1. The breadth of this range, 0.4 W m−2, is comparable to a substantial fraction of the total diversity of current aerosol forcing estimates. Reducing the uncertainty in the parameterisation of σw would therefore be an important step towards reducing the uncertainty in estimates of the indirect aerosol effects. Detailed examination of regional radiative flux perturbations reveals that aerosol microphysics can be responsible for some climate-relevant radiative effects, highlighting the importance of including microphysical aerosol processes in GCMs.
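Why a single characteristic velocity cannot reproduce a distribution is easy to see: activation is a nonlinear, one-sided function of updraught speed, so averaging that function over a Gaussian pdf of w differs from evaluating it at the mean. The activation curve below is a hypothetical stand-in, not the UKCA-Activate scheme.

```python
import math

def activated_fraction(w):
    """Toy nonlinear activation curve: fraction of aerosol activated at
    updraught speed w (m/s). Only positive updraughts activate droplets."""
    return w / (w + 0.5) if w > 0 else 0.0

def mean_over_gaussian(w_bar, sigma_w, nbins=2001):
    """Average the activated fraction over a Gaussian pdf of w
    (simple rectangle-rule quadrature over +/- 5 sigma)."""
    lo, hi = w_bar - 5 * sigma_w, w_bar + 5 * sigma_w
    dw = (hi - lo) / (nbins - 1)
    total = 0.0
    for i in range(nbins):
        w = lo + i * dw
        pdf = (math.exp(-0.5 * ((w - w_bar) / sigma_w) ** 2)
               / (sigma_w * math.sqrt(2 * math.pi)))
        total += activated_fraction(w) * pdf * dw
    return total
```

With a mean vertical velocity of zero, the characteristic-velocity shortcut predicts no activation at all, while the pdf average is positive because the updraught half of the distribution still activates droplets.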
Abstract:
Stochastic methods are a crucial area in contemporary climate research and are increasingly being used in comprehensive weather and climate prediction models as well as reduced order climate models. Stochastic methods are used as subgrid-scale parameterizations (SSPs) as well as for model error representation, uncertainty quantification, data assimilation, and ensemble prediction. The need to use stochastic approaches in weather and climate models arises because we still cannot resolve all necessary processes and scales in comprehensive numerical weather and climate prediction models. In many practical applications one is mainly interested in the largest and potentially predictable scales and not necessarily in the small and fast scales. For instance, reduced order models can simulate and predict large-scale modes. Statistical mechanics and dynamical systems theory suggest that in reduced order models the impact of unresolved degrees of freedom can be represented by suitable combinations of deterministic and stochastic components and non-Markovian (memory) terms. Stochastic approaches in numerical weather and climate prediction models also lead to the reduction of model biases. Hence, there is a clear need for systematic stochastic approaches in weather and climate modeling. In this review, we present evidence for stochastic effects in laboratory experiments. Then we provide an overview of stochastic climate theory from an applied mathematics perspective. We also survey the current use of stochastic methods in comprehensive weather and climate prediction models and show that stochastic parameterizations have the potential to remedy many of the current biases in these comprehensive models.
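A minimal example of the "deterministic plus stochastic" closure mentioned above is a first-order autoregressive (red-noise) process, in which a damped deterministic term and white-noise forcing stand in for the unresolved tendency. This is a sketch only; operational stochastic parameterizations are far more elaborate.

```python
import random

def red_noise(n, phi, sigma, seed=0):
    """AR(1) 'red noise': x_t = phi * x_{t-1} + sigma * eps_t.
    The deterministic damping (phi) sets the memory of the parameterized
    tendency; the Gaussian shocks represent unresolved fast variability."""
    rng = random.Random(seed)
    xs = [0.0]
    for _ in range(n - 1):
        xs.append(phi * xs[-1] + sigma * rng.gauss(0.0, 1.0))
    return xs

# The lag-1 autocorrelation of the output is approximately phi
series = red_noise(50000, 0.9, 1.0)
```

Choosing phi close to 1 gives long-memory (non-Markovian-looking) behaviour at coarse sampling, which is the simplest way reduced-order models emulate memory terms.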
Abstract:
The Monte Carlo Independent Column Approximation (McICA) is a flexible method for representing subgrid-scale cloud inhomogeneity in radiative transfer schemes. It does, however, introduce conditional random errors, but these have been shown to have little effect on climate simulations, where the spatial and temporal scales of interest are large enough for the effects of noise to be averaged out. This article considers the effect of McICA noise on a numerical weather prediction (NWP) model, where the time and spatial scales of interest are much closer to those at which the errors manifest themselves; this, as we show, means that the noise is more significant. We suggest methods for efficiently reducing the magnitude of McICA noise and test these methods in a global NWP version of the UK Met Office Unified Model (MetUM). The resultant errors are put into context by comparison with errors due to the widely used assumption of maximum-random overlap of plane-parallel homogeneous cloud. For a simple implementation of the McICA scheme, forecasts of near-surface temperature are found to be worse than those obtained using the plane-parallel, maximum-random-overlap representation of clouds. However, by applying the methods suggested in this article, we can reduce the noise enough to give forecasts of near-surface temperature that are an improvement on the plane-parallel, maximum-random-overlap forecasts. We conclude that the McICA scheme can be used to improve the representation of clouds in NWP models, provided that the associated noise is sufficiently small.
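The character of McICA's conditional random error can be seen in a toy Monte Carlo estimate (illustrative only, not the MetUM implementation): the error in a sampled gridbox-mean of a nonlinear function of cloud water shrinks roughly as the square root of the number of samples, which is why additional sampling reduces the noise.

```python
import random
import statistics

def sampled_estimate(n_samples, rng):
    """Estimate the gridbox-mean 'radiative' response by sampling
    n_samples cloudy subcolumns (a schematic stand-in for McICA,
    which samples cloud states per spectral quadrature point)."""
    subcolumns = [max(0.0, rng.gauss(1.0, 0.5)) for _ in range(n_samples)]
    # Nonlinear response of each subcolumn, then average over the samples
    return statistics.fmean(w ** 0.5 for w in subcolumns)

def rms_error(n_samples, trials=200):
    """Root-mean-square error of the sampled estimate over many trials,
    relative to a large-sample reference value."""
    rng = random.Random(0)
    ref = sampled_estimate(200000, random.Random(42))
    errs = [sampled_estimate(n_samples, rng) - ref for _ in range(trials)]
    return statistics.fmean(e * e for e in errs) ** 0.5

# Conditional random error shrinks roughly as 1 / sqrt(n_samples)
```

Averaging over more samples is one noise-reduction route; the article's methods aim to achieve a similar reduction efficiently, without simply multiplying the radiation cost.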
Abstract:
We present ocean model sensitivity experiments aimed at separating the influence of the projected changes in the “thermal” (near-surface air temperature) and “wind” (near-surface winds) forcing on the patterns of sea level and ocean heat content. In the North Atlantic, the distribution of sea level change is more due to the “thermal” forcing, whereas it is more due to the “wind” forcing in the North Pacific; in the Southern Ocean, the “thermal” and “wind” forcing have a comparable influence. In the ocean adjacent to Antarctica the “thermal” forcing leads to an inflow of warmer waters on the continental shelves, which is somewhat attenuated by the “wind” forcing. The structure of the vertically integrated heat uptake is set by different processes at low and high latitudes: at low latitudes it is dominated by the heat transport convergence, whereas at high latitudes it represents a small residual of changes in the surface flux and advection of heat. The structure of the horizontally integrated heat content tendency is set by the increase of downward heat flux by the mean circulation and comparable decrease of upward heat flux by the subgrid-scale processes; the upward eddy heat flux decreases and increases by almost the same magnitude in response to, respectively, the “thermal” and “wind” forcing. Regionally, the surface heat loss and deep convection weaken in the Labrador Sea, but intensify in the Greenland Sea in the region of sea ice retreat. The enhanced heat flux anomaly in the subpolar Atlantic is mainly caused by the “thermal” forcing.
Abstract:
Mixture model techniques are applied to a daily index of monsoon convection from ERA-40 reanalysis to show regime behavior. The result is the existence of two significant regimes showing preferred locations of convection within the Asia/Western-North Pacific domain, with some resemblance to active-break events over India. Simple trend analysis over 1958-2001 shows that the first regime has become less frequent while the second has become much more dominant. Both undergo a change in structure contributing to the total OLR trend over the ERA-40 period. Stratifying the data according to a large-scale dynamical index of monsoon interannual variability, we show the regime occurrence to be strongly perturbed by the seasonal condition, in agreement with conceptual ideas. This technique could be used to further examine predictability issues relating the seasonal mean to intraseasonal monsoon variability, or to explore changes in monsoon behavior in centennial-scale model integrations.
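The regime decomposition can be illustrated with a minimal two-component Gaussian mixture fitted by expectation-maximization (a one-dimensional sketch on synthetic data; the paper applies mixture modelling to a multivariate ERA-40 convection index):

```python
import math
import random

def em_two_gaussians(xs, iters=100):
    """Fit a two-component 1-D Gaussian mixture by EM.
    Returns (means, variances, weights) of the two 'regimes'."""
    n = len(xs)
    mean_all = sum(xs) / n
    var_all = sum((x - mean_all) ** 2 for x in xs) / n
    mu = [min(xs), max(xs)]          # crude initialisation at the extremes
    var = [var_all, var_all]
    wt = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each data point
        resp = []
        for x in xs:
            p = [wt[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-0.5 * (x - mu[k]) ** 2 / var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: update weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            wt[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return mu, var, wt

# Synthetic two-regime index: EM should recover means near -2 and +2
rng = random.Random(3)
data = ([rng.gauss(-2.0, 0.5) for _ in range(300)]
        + [rng.gauss(2.0, 0.5) for _ in range(300)])
mu, var, wt = em_two_gaussians(data)
```

The fitted weights play the role of regime frequencies, so a trend in regime occurrence corresponds to the weights drifting when the fit is repeated over successive sub-periods.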
Abstract:
A series of scale model measurements of transverse electromagnetic mode tapered slot antennas are presented. They show that the beam launched by this type of antenna is astigmatic. It is shown how an off-axis spherical mirror can be used to correct this astigmatism to allow efficient coupling to quasi-optical systems. A millimetre wave antenna and mirror combination is described and, with the aid of solid state noise diodes, the coupling of the launched beam to a quasi-optical spectrometer is shown to be in good agreement with that predicted by the scale model measurements.
A wind-tunnel study of flow distortion at a meteorological sensor on top of the BT Tower, London, UK
Abstract:
High-quality wind measurements in cities are needed for numerous applications, including wind engineering. Such data-sets are rare, and measurement platforms may not be optimal for meteorological observations. Two years' wind data were collected on the BT Tower, London, UK, showing an upward deflection on average for all wind directions. Wind-tunnel simulations were performed to investigate flow distortion around two scale models of the Tower. Using a 1:160 scale model it was shown that the Tower causes a small deflection (ca. 0.5°) compared to the lattice on top, on which the instruments were placed (ca. 0-4°). These deflections may have been underestimated due to wind-tunnel blockage. Using a 1:40 model, the observed flow pattern was consistent with streamwise vortex pairs shed from the upstream lattice edge. Correction factors were derived for different wind directions and reduced the deflection in the full-scale data-set by less than 3°. Instrumental tilt caused a sinusoidal variation in deflection of ca. 2°. The residual deflection (ca. 3°) was attributed to the Tower itself. Correction of the wind speeds was small (1% on average); it was therefore deduced that flow distortion does not significantly affect the measured wind speeds and that the wind climate statistics are reliable.