235 results for convective parameterization scheme
Abstract:
A new frontier in weather forecasting is emerging as operational forecast models are now run at convection-permitting resolutions at many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluating and improving such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life-cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms and of the updrafts within them is much too large at 1.5-km resolution, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.
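As a rough illustration (a standard scaling relation, not taken from the paper) of why overly wide simulated updrafts translate into an order-of-magnitude error in per-updraft mass flux:

```latex
% Per-updraft convective mass flux (standard relation, not from the paper):
\[
  M_{\mathrm{up}} = \rho\,\overline{w}\,A,
\]
% so an updraft whose area A is ~10 times too wide, at a similar mean vertical
% velocity \overline{w}, carries ~10 times too much mass flux, consistent with
% the order-of-magnitude error quoted above.
```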
Abstract:
Changes in the depth of Lake Viljandi between 1940 and 1990 were simulated using a lake water and energy-balance model driven by standard monthly weather data. Catchment runoff was simulated using a one-dimensional hydrological model, with a two-layer soil, a single-layer snowpack, a simple representation of vegetation cover and similarly modest input requirements. Outflow was modelled as a function of lake level. The simulated record of lake level and outflow matched observations of lake-level variations (r = 0.78) and streamflow (r = 0.87) well. The ability of the model to capture both intra- and inter-annual variations in the behaviour of a specific lake, despite the relatively simple input requirements, makes it extremely suitable for investigations of the impacts of climate change on lake water balance.
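A minimal sketch of the kind of monthly lake water-balance update described above; the variable names and the linear outflow law are illustrative assumptions, not the study's actual formulation.

```python
# Minimal sketch of a monthly lake water-balance update (illustrative only).

def step_lake_level(h, precip, evap, runoff_in, area, k_outflow, h_sill):
    """Advance lake level h (m) by one month.

    precip, evap : monthly precipitation / lake evaporation (m)
    runoff_in    : simulated catchment runoff entering the lake (m3)
    area         : lake surface area (m2), assumed constant here
    k_outflow    : outflow coefficient (m2 per month), illustrative
    h_sill       : outlet sill level (m); no outflow below this level
    """
    outflow = k_outflow * max(h - h_sill, 0.0)          # outflow as a function of lake level
    dV = (precip - evap) * area + runoff_in - outflow   # monthly volume change (m3)
    return h + dV / area, outflow

# Example month: 60 mm precipitation, 40 mm lake evaporation (all values illustrative)
h, q = step_lake_level(h=2.0, precip=0.06, evap=0.04,
                       runoff_in=5.0e5, area=1.5e6,
                       k_outflow=2.0e5, h_sill=1.5)
```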
Abstract:
This paper investigates the effect on balance of a number of Schur product-type localization schemes which have been designed with the primary function of reducing spurious far-field correlations in forecast error statistics. The localization schemes studied comprise a non-adaptive scheme (where the moderation matrix is decomposed in a spectral basis), and two adaptive schemes, namely a simplified version of SENCORP (Smoothed ENsemble COrrelations Raised to a Power) and ECO-RAP (Ensemble COrrelations Raised to A Power). The paper shows, we believe for the first time, how the degree of balance (geostrophic and hydrostatic) implied by the error covariance matrices localized by these schemes can be diagnosed. Here it is considered that an effective localization scheme is one that reduces spurious correlations adequately but also minimizes disruption of balance (where the 'correct' degree of balance or imbalance is assumed to be possessed by the unlocalized ensemble). By varying free parameters that describe each scheme (e.g. the degree of truncation in the schemes that use the spectral basis, the 'order' of each scheme, and the degree of ensemble smoothing), it is found that a particular configuration of the ECO-RAP scheme is best suited to the convective-scale system studied. According to our diagnostics this ECO-RAP configuration still weakens geostrophic and hydrostatic balance, but overall this is less so than for other schemes.
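A minimal sketch of Schur-product (element-wise) localization of an ensemble covariance, the operation that the schemes above construct moderation matrices for; the Gaussian moderation function here is an illustrative stand-in, not the SENCORP or ECO-RAP construction.

```python
# Schur-product localization of a noisy ensemble covariance (illustrative).
import numpy as np

def sample_covariance(ensemble):
    """ensemble: array of shape (n_members, n_state)."""
    anomalies = ensemble - ensemble.mean(axis=0)
    return anomalies.T @ anomalies / (ensemble.shape[0] - 1)

def gaussian_moderation(n, length_scale):
    """Moderation matrix C with C_ij = exp(-d_ij^2 / (2 L^2)) on a 1D grid."""
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    return np.exp(-0.5 * (d / length_scale) ** 2)

rng = np.random.default_rng(0)
ens = rng.standard_normal((20, 100))          # 20 members, 100 grid points
P_e = sample_covariance(ens)                  # raw (noisy) ensemble covariance
C = gaussian_moderation(100, length_scale=10)
P_loc = C * P_e                               # Schur product damps far-field correlations
```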
Abstract:
This paper investigates the challenge of representing structural differences in river channel cross-section geometry for regional- to global-scale river hydraulic models and the effect this can have on simulations of wave dynamics. Classically, channel geometry is defined using data, yet at larger scales the necessary information and model structures do not exist to take this approach. We therefore propose a fundamentally different approach in which the structural uncertainty in channel geometry is represented using a simple parameterization, which could then be estimated through calibration or data assimilation. This paper first outlines the development of a computationally efficient numerical scheme to represent generalised channel shapes using a single parameter, which is then validated using a simple straight-channel test case and shown to predict wetted perimeter to within 2% for the channels tested. An application to the River Severn, UK, is also presented, along with an analysis of model sensitivity to channel shape, depth and friction. The channel shape parameter was shown to improve model simulations of river level, particularly for more physically plausible channel roughness and depth parameter ranges. Calibrating the channel Manning's coefficient in a rectangular channel provided similar water-level simulation accuracy, in terms of Nash-Sutcliffe efficiency, to a model where friction and shape or depth were calibrated. However, the calibrated Manning coefficient in the rectangular channel model was ~2/3 greater than the likely physically realistic value for this reach, and this erroneously delayed wave propagation through the reach by several hours. Therefore, for large-scale models applied in data-sparse areas, calibrating channel depth and/or shape may be preferable to assuming a rectangular geometry and calibrating friction alone.
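A hedged illustration of the single-parameter channel-shape idea: a power-law cross-section is a common one-parameter choice, but it is not necessarily the specific scheme developed in the paper.

```python
# Illustrative single-parameter channel shape: a power-law cross-section
# y(x) = h_bf * (2|x| / w_bf)**s, where the exponent s controls the shape
# (s = 1 triangular, large s approaches rectangular). Generic example only,
# not the paper's scheme.
import numpy as np

def wetted_perimeter(depth, bankfull_width, bankfull_depth, s, n=2000):
    """Numerically integrate the wetted perimeter of a power-law channel."""
    half_width = 0.5 * bankfull_width * min(depth / bankfull_depth, 1.0) ** (1.0 / s)
    x = np.linspace(0.0, half_width, n)
    y = bankfull_depth * (2.0 * x / bankfull_width) ** s
    dy_dx = np.gradient(y, x)
    return 2.0 * np.trapz(np.sqrt(1.0 + dy_dx ** 2), x)

# Example: 50 m wide, 3 m deep bankfull channel, flow depth 2 m
for s in (1.0, 2.0, 10.0):
    print(s, round(wetted_perimeter(2.0, 50.0, 3.0, s), 1))
```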
Abstract:
Model intercomparisons have identified important deficits in the representation of the stable boundary layer by turbulence parametrizations used in current weather and climate models. However, detrimental impacts of more realistic schemes on the large-scale flow have hindered progress in this area. Here we implement a total turbulent energy (TTE) scheme into the climate model ECHAM6. The TTE scheme considers the effects of Earth's rotation and static stability on the turbulence length scale. In contrast to the previously used turbulence scheme, it also implicitly represents the entrainment flux in a dry convective boundary layer. Reducing the previously exaggerated surface drag in stable boundary layers indeed causes an increase in southern-hemisphere zonal winds and large-scale pressure gradients beyond observed values. These biases can be largely removed by increasing the parametrized orographic drag. Reducing the neutral-limit turbulent Prandtl number warms and moistens low-latitude boundary layers and acts to reduce longstanding radiation biases in the stratocumulus regions, the Southern Ocean and the equatorial cold tongue that are common to many climate models.
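One schematic way rotation and static stability can limit the turbulence length scale, shown purely for illustration; the exact ECHAM6/TTE formulation may differ.

```latex
% Illustrative only: turbulence length scale l limited by distance to the surface,
% rotation and static stability via an inverse sum of the individual limits,
\[
  \frac{1}{l} \;=\; \frac{1}{\kappa z} \;+\; \frac{|f|}{c_f\sqrt{E}}
                \;+\; \frac{N}{c_N\sqrt{E}},
\]
% where z is height, \kappa the von Karman constant, f the Coriolis parameter,
% N the Brunt-Vaisala frequency, E the total turbulent energy and c_f, c_N tuning
% constants; stronger stability or rotation shortens l.
```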
Abstract:
Weeds tend to aggregate in patches within fields and there is evidence that this is partly owing to variation in soil properties. Because the processes driving soil heterogeneity operate at different scales, the strength of the relationships between soil properties and weed density would also be expected to be scale-dependent. Quantifying these effects of scale on weed patch dynamics is essential to guide the design of discrete sampling protocols for mapping weed distribution. We have developed a general method that uses novel within-field nested sampling and residual maximum likelihood (REML) estimation to explore scale-dependent relationships between weeds and soil properties. We have validated the method using a case study of Alopecurus myosuroides in winter wheat. Using REML, we partitioned the variance and covariance into scale-specific components and estimated the correlations between the weed counts and soil properties at each scale. We used variograms to quantify the spatial structure in the data and to map variables by kriging. Our methodology successfully captured the effect of scale on a number of edaphic drivers of weed patchiness. The overall Pearson correlations between A. myosuroides and soil organic matter and clay content were weak and masked the stronger correlations at scales >50 m. Knowing how the variance was partitioned across the spatial scales, we optimized the sampling design to focus sampling effort at those scales that contributed most to the total variance. The methods have the potential to guide patch spraying of weeds by identifying areas of the field that are vulnerable to weed establishment.
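A schematic of the kind of nested variance-component model that REML estimates in such a design; the notation and number of nested stages are illustrative.

```latex
% Illustrative two-stage nested model for weed counts y and a soil property s:
\[
  y_{ijk} = \mu_y + a^{y}_{i} + b^{y}_{ij} + e^{y}_{ijk}, \qquad
  s_{ijk} = \mu_s + a^{s}_{i} + b^{s}_{ij} + e^{s}_{ijk},
\]
% REML partitions the variance and covariance into scale-specific components,
\[
  \operatorname{var}(y) = \sigma^{2}_{a,y} + \sigma^{2}_{b,y} + \sigma^{2}_{e,y},
  \qquad
  \rho_{a} = \frac{\operatorname{cov}(a^{y},a^{s})}{\sigma_{a,y}\,\sigma_{a,s}},
\]
% giving a separate weed-soil correlation at each spatial scale.
```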
Abstract:
This thesis is an empirical study of the European Union's Emissions Trading Scheme (EU ETS) and its implications for corporate environmental and financial performance. The novelty of this study includes the extended scope of the data coverage, as most previous studies have examined only the power sector. The use of verified emissions data of ETS-regulated firms as the environmental compliance measure, and as a potential differentiating criterion in the stock-market valuation of EU ETS-exposed firms, is also an original aspect of this study. The study begins in Chapter 2 by introducing background information on the emissions trading system (ETS), focusing on (i) the adoption of ETS as an environmental management instrument and (ii) the adoption of ETS by the European Union as one of its central climate policies. Chapter 3 surveys four databases that provide carbon emissions data in order to determine the most suitable source of data for the later empirical chapters. The first empirical chapter, Chapter 4 of this thesis, investigates the determinants of the emissions compliance performance of EU ETS-exposed firms by constructing the best possible performance ratio from verified emissions data and self-configuring models for a panel regression analysis. Chapter 5 examines the impacts on EU ETS-exposed firms in terms of their equity valuation, using customised portfolios and multi-factor market models. The research design includes the emissions allowance (EUA) price as an additional factor to control for the exposure, as it has the most direct association with the EU ETS. The final empirical chapter, Chapter 6, takes the investigation one step further by specifically testing the degree of ETS exposure facing different sectors, using sector-based portfolios and an extended multi-factor market model. The findings from the emissions performance ratio analysis show that the business model of firms significantly influences emissions compliance, as capital intensity has a positive association with an increasing emissions-to-emissions-cap ratio. Furthermore, different sectors show different degrees of sensitivity towards the determining factors. The production factor influences the performance ratio of the Utilities sector, but not the Energy or Materials sectors. The results show that capital intensity has a more profound influence on the Utilities sector than on the Materials sector. With regard to the financial performance impact, ETS-exposed firms as aggregate portfolios experienced a substantial underperformance during the 2001–2004 period, but not in the operating period of 2005–2011. The results of the sector-based portfolios again show the differentiating effect of the EU ETS on sectors, as one sector is priced no differently from its benchmark, three sectors show persistent underperformance, and three sectors have altered outcomes.
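An illustrative form of an extended multi-factor market model with the EUA return as an additional factor; the factor set and notation are assumptions for illustration, not the thesis's exact specification.

```latex
% Illustrative extended multi-factor market model with an EUA factor:
\[
  R_{p,t} - R_{f,t} = \alpha_p + \beta_p\,(R_{m,t} - R_{f,t})
                      + s_p\,\mathit{SMB}_t + h_p\,\mathit{HML}_t
                      + \gamma_p\,\mathit{EUA}_t + \varepsilon_{p,t},
\]
% where R_p is the return on an ETS-exposed (sector) portfolio, R_m the market
% return, R_f the risk-free rate, and \gamma_p measures the portfolio's exposure
% to changes in the EUA price.
```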
Abstract:
This paper discusses an important issue related to the implementation and interpretation of the analysis scheme in the ensemble Kalman filter. It is shown that the observations must be treated as random variables at the analysis steps. That is, one should add random perturbations with the correct statistics to the observations and generate an ensemble of observations that is then used in updating the ensemble of model states. Traditionally, this has not been done in previous applications of the ensemble Kalman filter and, as will be shown, this has resulted in an updated ensemble with a variance that is too low. This simple modification of the analysis scheme results in a completely consistent approach if the covariance of the ensemble of model states is interpreted as the prediction error covariance, and there are no further requirements on the ensemble Kalman filter method, except for the use of an ensemble of sufficient size. Thus, there is a unique correspondence between the error statistics from the ensemble Kalman filter and the standard Kalman filter approach.
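A minimal sketch of the perturbed-observation analysis step described above, with a linear observation operator and toy dimensions; all settings are illustrative.

```python
# Perturbed-observation EnKF analysis step (illustrative toy dimensions).
import numpy as np

rng = np.random.default_rng(42)
n_state, n_obs, n_ens = 10, 4, 50

X = rng.standard_normal((n_state, n_ens))      # forecast ensemble, columns are members
H = np.eye(n_state)[:n_obs, :]                 # simple linear observation operator
R = 0.5 * np.eye(n_obs)                        # observation-error covariance
y = rng.standard_normal(n_obs)                 # the actual observation vector

# Prediction error covariance estimated from the ensemble
Xp = X - X.mean(axis=1, keepdims=True)
Pf = Xp @ Xp.T / (n_ens - 1)

# Key point of the paper: perturb the observations so that each member is updated
# against its own observation drawn from N(y, R); otherwise the analysis ensemble
# variance is systematically too low.
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T

K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain
Xa = X + K @ (Y - H @ X)                        # analysis ensemble
```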
Abstract:
This study examines convection-permitting numerical simulations of four cases of terrain-locked quasi-stationary convective bands over the UK. For each case, a 2.2-km grid-length 12-member ensemble and 1.5-km grid-length deterministic forecast are analyzed, each with two different initialization times. Object-based verification is applied to determine whether the simulations capture the structure, location, timing, intensity and duration of the observed precipitation. These verification diagnostics reveal that the forecast skill varies greatly between the four cases. Although the deterministic and ensemble simulations captured some aspects of the precipitation correctly in each case, they never simultaneously captured all of them satisfactorily. In general, the models predicted banded precipitation accumulations at approximately the correct time and location, but the precipitating structures were more cellular and less persistent than the coherent quasi-stationary bands that were observed. Ensemble simulations from the two different initialization times were not significantly different, which suggests a potential benefit of time-lagging subsequent ensembles to increase ensemble size. The predictive skill of the upstream larger-scale flow conditions and the simulated precipitation on the convection-permitting grids were strongly correlated, which suggests that more accurate forecasts from the parent ensemble should improve the performance of the convection-permitting ensemble nested within it.
Abstract:
We present a novel algorithm for concurrent model state and parameter estimation in nonlinear dynamical systems. The new scheme uses ideas from three dimensional variational data assimilation (3D-Var) and the extended Kalman filter (EKF) together with the technique of state augmentation to estimate uncertain model parameters alongside the model state variables in a sequential filtering system. The method is relatively simple to implement and computationally inexpensive to run for large systems with relatively few parameters. We demonstrate the efficacy of the method via a series of identical twin experiments with three simple dynamical system models. The scheme is able to recover the parameter values to a good level of accuracy, even when observational data are noisy. We expect this new technique to be easily transferable to much larger models.
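A minimal sketch of state augmentation for joint state-parameter estimation in a sequential filter, using a toy scalar model and an EKF-style update; it illustrates the technique, not the paper's exact hybrid 3D-Var/EKF formulation.

```python
# State augmentation: append the uncertain parameter a to the state of the toy
# model x_{k+1} = a * x_k and update both with a sequential (EKF-style) filter.
import numpy as np

rng = np.random.default_rng(1)
a_true, n_steps = 0.95, 200

z = np.array([1.0, 0.7])                 # augmented state z = [x, a], initial guesses
P = np.diag([0.1, 0.1])                  # initial uncertainty
Q = np.diag([2.5e-3, 1e-6])              # model error; parameter treated as nearly constant
H = np.array([[1.0, 0.0]])               # only the state x is observed
R = np.array([[0.02 ** 2]])              # observation-error variance

x_truth = 1.0
for _ in range(n_steps):
    # Truth evolution and a noisy observation of it (identical-twin setup)
    x_truth = a_true * x_truth + 0.05 * rng.standard_normal()
    y = x_truth + 0.02 * rng.standard_normal()

    # Forecast: propagate the augmented state and covariance (EKF linearization)
    F = np.array([[z[1], z[0]],          # d(a*x)/dx = a, d(a*x)/da = x
                  [0.0, 1.0]])           # the parameter persists
    z = np.array([z[0] * z[1], z[1]])
    P = F @ P @ F.T + Q

    # Analysis: cross-covariances in P carry observation information to the parameter
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    z = z + K @ (np.array([y]) - H @ z)
    P = (np.eye(2) - K @ H) @ P

print("estimated parameter:", round(z[1], 3), "truth:", a_true)
```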
Abstract:
Idealized explicit convection simulations of the Met Office Unified Model exhibit spontaneous self-aggregation in radiative-convective equilibrium, as seen in other models in previous studies. This self-aggregation is linked to feedbacks between radiation, surface fluxes, and convection, and the organization is intimately related to the evolution of the column water vapor field. Analysis of the budget of the spatial variance of column-integrated frozen moist static energy (MSE), following Wing and Emanuel [2014], reveals that the direct radiative feedback (including significant cloud longwave effects) is dominant in both the initial development of self-aggregation and the maintenance of an aggregated state. A low-level circulation at intermediate stages of aggregation does appear to transport MSE from drier to moister regions, but this circulation is mostly balanced by other advective effects of opposite sign and is forced by horizontal anomalies of convective heating (not radiation). Sensitivity studies with either fixed prescribed radiative cooling, fixed prescribed surface fluxes, or both do not show full self-aggregation from homogeneous initial conditions, though fixed surface fluxes do not disaggregate an initialized aggregated state. A sensitivity study in which rain evaporation is turned off shows more rapid self-aggregation, while a run with this change plus fixed radiative cooling still shows strong self-aggregation, supporting a “moisture memory” effect found in Muller and Bony [2015]. Interestingly, self-aggregation occurs even in simulations with sea surface temperatures (SSTs) of 295 K and 290 K, with direct radiative feedbacks dominating the budget of MSE variance, in contrast to results in some previous studies.
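The variance budget analysed follows Wing and Emanuel [2014]; schematically it can be written as below.

```latex
% Budget of the spatial variance of column-integrated frozen moist static energy h
% (schematic form following Wing and Emanuel [2014]; hats denote mass-weighted
% column integrals, primes anomalies from the horizontal mean):
\[
  \frac{1}{2}\,\frac{\partial\,\widehat{h}'^{\,2}}{\partial t}
    \;=\; \widehat{h}'F_K' \;+\; \widehat{h}'N_R'
          \;-\; \widehat{h}'\,\nabla_h\!\cdot\widehat{\mathbf{u}h}',
\]
% where F_K is the surface enthalpy flux, N_R the column radiative flux convergence,
% and the last term the convergence of the horizontal advective flux of h; a positive
% covariance with h' means that term amplifies the variance, i.e. favours aggregation.
```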
Abstract:
Convection-permitting modelling has led to a step change in forecasting convective events. However, convection occurs within different regimes which exhibit different forecast behaviour. A convective adjustment timescale can be used to distinguish between these regimes and examine their associated predictability. The convective adjustment timescale is calculated from radiosonde ascents and found to be consistent with that derived from convection-permitting model forecasts. The model-derived convective adjustment timescale is then examined for three summers in the British Isles to determine characteristics of the convective regimes for this maritime region. Convection in the British Isles is predominantly in convective quasi-equilibrium, with 85% of convection having a timescale less than or equal to three hours. This percentage varies spatially, with more non-equilibrium events occurring in the south and southwest. The convective adjustment timescale exhibits a diurnal cycle over land. The non-equilibrium regime occurs more frequently at mid-range wind speeds and with winds from southerly to westerly sectors. Most non-equilibrium convective events in the British Isles are initiated near large coastal orographic gradients or on the European continent. Thus, the convective adjustment timescale is greatest when the location being examined is immediately downstream of large orographic gradients and decreases with distance from the convective initiation region. The dominance of convective quasi-equilibrium conditions over the British Isles argues for the use of large-member ensembles in probabilistic forecasts for this region.
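Schematically, the convective adjustment timescale compares the available CAPE with its rate of removal by convection; the study's exact estimator is not reproduced here.

```latex
% Schematic definition: the time convection would need to remove the available CAPE
% at its current consumption rate,
\[
  \tau_c \;=\; \frac{\mathrm{CAPE}}{\bigl|\,\mathrm{d}\,\mathrm{CAPE}/\mathrm{d}t\,\bigr|},
\]
% with the consumption rate commonly estimated from the latent heating implied by
% the precipitation rate. Small \tau_c (up to a few hours here) indicates convective
% quasi-equilibrium; large \tau_c indicates a locally triggered, non-equilibrium regime.
```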
Abstract:
Introducing a parameterization of the interactions between wind-driven snow depth changes and melt pond evolution allows us to improve large-scale models. In this paper we have implemented an explicit melt pond scheme and, for the first time, a wind-dependent snow redistribution model and new snow thermophysics into a coupled ocean–sea ice model. The comparison of long-term mean statistics of melt pond fractions against observations demonstrates realistic melt pond cover on average over Arctic sea ice, but a clear underestimation of the pond coverage on the multi-year ice (MYI) of the western Arctic Ocean. The latter shortcoming originates from the concealing effect of persistent snow on forming ponds, impeding their growth. Analyzing a second simulation with intensified snow drift enables the identification of two distinct modes of sensitivity in the melt pond formation process. First, the larger proportion of wind-transported snow that is lost in leads directly curtails the late-spring snow volume on sea ice and facilitates the early development of melt ponds on MYI. In contrast, a combination of higher air temperatures and thinner snow prior to the onset of melting sometimes makes the snow cover switch to a regime where it melts entirely and rapidly. In the latter situation, seemingly more frequent on first-year ice (FYI), a smaller snow volume directly relates to a reduced melt pond cover. Nevertheless, changes in snow and water accumulation on seasonal sea ice are naturally limited, which lessens the impacts of wind-blown snow redistribution on FYI, as compared to those on MYI. At the basin scale, the overall increased melt pond cover results in decreased ice volume via the ice-albedo feedback in summer, which is experienced almost exclusively by MYI.
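A hedged sketch of the wind-driven snow-loss mechanism described above; the threshold wind speed, transport fraction, and partitioning rule are illustrative assumptions only, not the scheme implemented in the model.

```python
# Illustrative partitioning of snowfall over a partially ice-covered grid cell:
# during blowing-snow events part of the mobilised snow is lost into leads
# (open water) and therefore no longer accumulates on, or conceals, the sea ice.

def snowfall_partition(snowfall, wind_speed, open_water_frac,
                       u_threshold=6.0, transport_frac=0.5):
    """Return (snow retained on ice, snow lost to leads), both in m w.e.

    Above the wind threshold a fraction of the snowfall is mobilised; the mobilised
    part is lost to the ocean in proportion to the open-water (lead) fraction.
    """
    mobilised = transport_frac * snowfall if wind_speed > u_threshold else 0.0
    lost_to_leads = mobilised * open_water_frac
    return snowfall - lost_to_leads, lost_to_leads

# Example: 5 mm w.e. snowfall, 10 m/s wind, 15% open water
print(snowfall_partition(0.005, 10.0, 0.15))
```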