197 results for convective upwinding scheme
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy Filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. We therefore present a suitable integration scheme that handles the stiffening of the differential equations involved without incurring additional computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation with deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble separates into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely used Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
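As a rough illustration of the time-stepping change discussed in the third part, the sketch below applies the RAW filter within a leapfrog integration of a generic ODE. The filter coefficient nu and the Williams parameter alpha are illustrative defaults (alpha = 1 recovers the classical Robert-Asselin filter), and the function names are ours, not the dissertation's.

```python
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    """Leapfrog time stepping with the Robert-Asselin-Williams (RAW) filter.

    f     : right-hand side, dx/dt = f(x)
    nu    : Robert-Asselin filter strength
    alpha : Williams parameter; alpha = 1 recovers the classical RA filter
    """
    x_prev = x0                      # filtered value at time level n-1
    x_curr = x0 + dt * f(x0)         # first step by forward Euler
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # filter displacement
        x_curr = x_curr + alpha * d                      # filter the middle level
        x_next = x_next + (alpha - 1.0) * d              # small opposite correction
        x_prev, x_curr = x_curr, x_next
        out.append(x_curr)
    return np.array(out)

# Example: a neutral oscillation dx/dt = i*x, integrated for 500 steps.
trajectory = leapfrog_raw(lambda x: 1j * x, 1.0 + 0.0j, dt=0.1, nsteps=500)
```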
Abstract:
The Commission has proposed that a revised version of the present regime of direct payments should be rolled forward into the post-2013 CAP. There would be a limited redistribution of funds between Member States. Thirty per cent of the budget would be allocated to a new greening component, which would be problematic in the WTO. Non-active farmers would not qualify for aid; and payments would be capped. Special schemes would be introduced for small farmers, for young new entrants, and for disadvantaged regions.
Abstract:
The present paper presents a simple theory for the transformation of non-precipitating, shallow convection into precipitating, deep convective clouds. To make the pertinent point, a highly idealized system is considered, consisting only of shallow and deep convection without large-scale forcing. The transformation is described by an explicit coupling between these two types of convection. Shallow convection moistens and cools the atmosphere, whereas deep convection dries and warms, leading to destabilization and stabilization, respectively. Consequently, in their own stand-alone modes, shallow convection perpetually grows, whereas deep convection simply damps: the former never reaches equilibrium, and the latter is never spontaneously generated. Coupling the modes together is the only way to reconcile these undesirable separate tendencies so that the convective system as a whole can remain in a stable periodic state under this idealized setting. Such coupling is a key missing element in current global atmospheric models. The energy-cycle description, as originally formulated by Arakawa and Schubert and presented herein, is suitable for direct implementation into models using a mass-flux parameterization, and would alleviate the current problems with the representation of these two types of convection in numerical models. The present theory also provides a pertinent framework for analyzing large-eddy simulations and cloud-resolving modelling.
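To make the growth/damping and coupling argument concrete, here is a deliberately schematic two-mode sketch: on its own the shallow mode S grows and the deep mode D decays, while the coupling terms (shallow feeding deep, deep suppressing shallow) yield a sustained periodic cycle. The linear form and all coefficients are purely illustrative; the paper's actual formulation follows the Arakawa-Schubert energy cycle.

```python
import numpy as np

# Illustrative coupling between shallow (S) and deep (D) convective activity:
# stand-alone shallow convection grows (+a*S), stand-alone deep convection
# damps (-e*D); shallow feeds deep (+c*S) and deep suppresses shallow (-b*D).
a, b, c, e = 0.5, 1.0, 1.0, 0.5   # chosen so the coupled system is purely oscillatory

def rhs(state):
    S, D = state
    return np.array([a * S - b * D,
                     c * S - e * D])

state, dt, history = np.array([1.0, 0.0]), 0.01, []
for _ in range(5000):             # midpoint (RK2) integration of the toy cycle
    mid = state + 0.5 * dt * rhs(state)
    state = state + dt * rhs(mid)
    history.append(state.copy())
history = np.array(history)       # columns: shallow and deep activity over time
```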
Abstract:
The concept of convective quasi-equilibrium (CQE) is a key ingredient for understanding the role of deep moist convection in the atmosphere. It has been used as a guiding principle to develop almost all convective parameterizations and provides a basic theoretical framework for large-scale tropical dynamics. The CQE concept as originally proposed by Arakawa and Schubert [1974] is systematically reviewed from wider perspectives. Various interpretations and extensions of Arakawa and Schubert’s CQE are considered in terms of both a thermodynamic analogy and a dynamical balance. The thermodynamic interpretations can be more emphatically embraced as a homeostasis. The dynamic balance interpretations can be best understood by analogy with the slow manifold. Various criticisms of CQE can be avoided by taking the dynamic balance interpretation. Possible limits of CQE are also discussed, including the importance of triggering in many convective situations, as well as the possible self-organized criticality of tropical convection. However, the most intriguing aspect of the CQE concept is that, in spite of many observational tests supporting and interpreting it in many different senses, it has never been established in a robust manner through a systematic observational analysis of the cloud-work function budget, as it was originally defined.
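For reference, the budget underlying the original definition of CQE can be written as follows (notation as in Arakawa and Schubert [1974]: A_i is the cloud work function of cloud type i, F_i the large-scale forcing, K_ij the interaction kernel, and M_Bj the cloud-base mass flux); this is the standard statement of the closure, not a new result of the review.

```latex
% Cloud-work-function budget and the Arakawa-Schubert quasi-equilibrium closure
\frac{\mathrm{d}A_i}{\mathrm{d}t}
  = F_i - \sum_j K_{ij}\, M_{Bj} \approx 0
\quad\Longrightarrow\quad
\sum_j K_{ij}\, M_{Bj} \approx F_i .
```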
Abstract:
This paper reviews the ways that quality can be assessed in standing waters, a subject that has hitherto attracted little attention but which is now a legal requirement in Europe. It describes a scheme for the assessment and monitoring of water and ecological quality in standing waters greater than about 1 ha in area in England & Wales, although it is generally relevant to North-west Europe. Thirteen hydrological, chemical and biological variables are used to characterise the standing water body in any current sampling. These are lake volume, maximum depth, conductivity, Secchi disc transparency, pH, total alkalinity, calcium ion concentration, total N concentration, winter total oxidised inorganic nitrogen (effectively nitrate) concentration, total P concentration, potential maximum chlorophyll a concentration, a score based on the nature of the submerged and emergent plant community, and the presence or absence of a fish community. Inter alia, these variables are key indicators of the state of eutrophication, acidification, salinisation and infilling of a water body.
Abstract:
High-resolution simulations over a large tropical domain (∼20°S–20°N and 42°E–180°E) using both explicit and parameterized convection are analyzed and compared to observations during a 10-day case study of an active Madden-Julian Oscillation (MJO) event. The parameterized convection model simulations at both 40 km and 12 km grid spacing have a very weak MJO signal and little eastward propagation. A 4 km explicit convection simulation using Smagorinsky subgrid mixing in the vertical and horizontal dimensions exhibits the best MJO strength and propagation speed. The 12 km explicit convection simulations also perform much better than the 12 km parameterized convection run, suggesting that the convection scheme, rather than horizontal resolution, is key for these MJO simulations. Interestingly, a 4 km explicit convection simulation using the conventional boundary layer scheme for vertical subgrid mixing (but still using Smagorinsky horizontal mixing) completely loses the large-scale MJO organization, showing that relatively high resolution with explicit convection does not guarantee a good MJO simulation. Models with a good MJO representation have a more realistic relationship between lower-free-tropospheric moisture and precipitation, supporting the idea that moisture-convection feedback is a key process for MJO propagation. There is also increased generation of available potential energy and conversion of that energy into kinetic energy in models with a more realistic MJO, which is related to larger zonal variance in convective heating and vertical velocity, larger zonal temperature variance around 200 hPa, and larger correlations between temperature and ascent (and between temperature and diabatic heating) between 500 and 400 hPa.
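A minimal sketch of the kind of moisture-precipitation diagnostic referred to above, assuming gridded lower-free-tropospheric humidity and precipitation fields from the simulations or observations; the variable names and binning choices are illustrative rather than the paper's exact procedure.

```python
import numpy as np

def moisture_precip_relationship(q_lft, precip, nbins=20):
    """Bin-average precipitation as a function of lower-free-tropospheric
    moisture (e.g. 850-500 hPa mean specific humidity); both inputs are
    arrays over the same grid points and times."""
    q = np.asarray(q_lft).ravel()
    p = np.asarray(precip).ravel()
    edges = np.linspace(q.min(), q.max(), nbins + 1)
    idx = np.clip(np.digitize(q, edges) - 1, 0, nbins - 1)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mean_p = np.array([p[idx == k].mean() if np.any(idx == k) else np.nan
                       for k in range(nbins)])
    # Models with a good MJO show a sharper pickup of mean precipitation
    # at high moisture values.
    return centres, mean_p
```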
Abstract:
The extent and thickness of the Arctic sea ice cover have decreased dramatically in the past few decades, with minima in sea ice extent in September 2005 and 2007. These minima were not predicted in the IPCC AR4 report, suggesting that the sea ice component of climate models should more realistically represent the processes controlling the sea ice mass balance. One of the processes poorly represented in sea ice models is the formation and evolution of melt ponds. Melt ponds accumulate on the surface of sea ice from snow and sea ice melt, and their presence reduces the albedo of the ice cover, leading to further melt. Toward the end of the melt season, melt ponds cover up to 50% of the sea ice surface. We have developed a melt pond evolution theory. Here, we have incorporated this melt pond theory into the Los Alamos CICE sea ice model, which has required us to include the refreezing of melt ponds. We present results showing that the presence, or otherwise, of a representation of melt ponds has a significant effect on the predicted sea ice thickness and extent. We also present a sensitivity study to uncertainty in the sea ice permeability, the number of thickness categories in the model representation, the meltwater redistribution scheme, and the pond albedo. We conclude with a recommendation that our melt pond scheme be included in sea ice models, and that the number of thickness categories be increased and concentrated at lower thicknesses.
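The pond-albedo feedback mentioned above can be illustrated with a toy calculation: absorbed shortwave melts ice, melt water grows the pond fraction, and a larger pond fraction lowers the surface albedo, further increasing absorption. All coefficients below are illustrative and are not the CICE or melt-pond-theory parameter values.

```python
RHO_ICE, L_FUSION = 917.0, 3.34e5        # kg m-3, J kg-1

def pond_albedo_feedback(days=30, flux=200.0, alpha_ice=0.65, alpha_pond=0.30,
                         pond_growth=2.0, thickness=2.0):
    """Toy melt-pond albedo feedback over a melt season segment.
    pond_growth (pond area fraction per metre of melt) is illustrative."""
    pond_frac = 0.0
    for _ in range(days):
        albedo = (1.0 - pond_frac) * alpha_ice + pond_frac * alpha_pond
        melt = (1.0 - albedo) * flux / (RHO_ICE * L_FUSION) * 86400.0  # m day-1
        thickness = max(thickness - melt, 0.0)
        pond_frac = min(pond_frac + pond_growth * melt, 0.5)  # ponds cover up to ~50 %
    return thickness, pond_frac
```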
Abstract:
Convective equilibrium is a long-standing and useful concept for understanding many aspects of the behaviour of deep moist convection. For example, it is often invoked in developing parameterizations for large-scale models. However, the equilibrium assumption may begin to break down as models are increasingly used with shorter timesteps and finer resolutions. Here we perform idealized cloud-system resolving model simulations of deep convection with imposed time variations in the surface forcing. A range of rapid forcing timescales from 1 to 36 h is used, in order to induce systematic departures from equilibrium. For the longer forcing timescales, the equilibrium assumption remains valid, at least in the limited sense that cycle-integrated measures of convective activity are very similar from cycle to cycle. For shorter forcing timescales, cycle-integrated convection becomes more variable, with enhanced activity on one cycle being correlated with reduced activity on the next, suggesting a role for convective memory. Further investigation shows that the memory does not appear to be carried by the domain-mean thermodynamic fields but rather by structures on horizontal scales of 5–20 km. Such structures are produced by the convective clouds and can persist beyond the lifetime of the cloud, even through to the next forcing cycle.
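One simple way to quantify the cycle-to-cycle compensation described above, assuming a time series of some domain-integrated measure of convective activity (e.g. rain rate) and a known forcing period; this is a sketch of the diagnostic idea, not the paper's analysis code.

```python
import numpy as np

def cycle_memory(activity, dt, forcing_period):
    """Integrate a convective-activity time series over each forcing cycle and
    return the lag-1 correlation between consecutive cycle totals; negative
    values indicate enhanced activity on one cycle followed by reduced
    activity on the next."""
    activity = np.asarray(activity)
    steps = int(round(forcing_period / dt))
    n_cycles = len(activity) // steps
    totals = np.array([activity[k * steps:(k + 1) * steps].sum() * dt
                       for k in range(n_cycles)])
    return np.corrcoef(totals[:-1], totals[1:])[0, 1]
```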
Abstract:
An investigation is presented of a quasi-stationary convective system (QSCS) which occurred over the UK Southwest Peninsula on 21 July 2010. This system was remarkably similar in its location and structure to one which caused devastating flash flooding in the coastal village of Boscastle, Cornwall on 16 August 2004. However, in the 2010 case rainfall accumulations were around four times smaller and no flooding was recorded. The more extreme nature of the Boscastle case is shown to be related to three factors: (1) higher rain rates, associated with a warmer and moister tropospheric column and deeper convective clouds; (2) a more stationary system, due to slower evolution of the large-scale flow; and (3) distribution of the heaviest precipitation over fewer river catchments. Overall, however, the synoptic setting of the two events was broadly similar, suggesting that such conditions favour the development of QSCSs over the Southwest Peninsula. A numerical simulation of the July 2010 event was performed using a 1.5-km grid length configuration of the Met Office Unified Model. This reveals that convection was repeatedly initiated through lifting of low-level air parcels along a quasi-stationary coastal convergence line. Sensitivity tests are used to show that this convergence line was a sea breeze front which temporarily stalled along the coastline due to the retarding influence of an offshore-directed background wind component. Several deficiencies are noted in the 1.5-km model’s representation of the storm system, including delayed convective initiation; however, significant improvements are observed when the grid length is reduced to 500 m. These result in part from an improved representation of the convergence line, which enhances the associated low-level ascent allowing air parcels to more readily reach their level of free convection. The implications of this finding for forecasting convective precipitation are discussed.
Abstract:
In this paper microlevel politics and conflict associated with social and economic change in the countryside and linked changes in rural governance are explored with a focus upon research carried out on a recent rural policy initiative aimed at local 'empowerment'. This acts as a touchstone for a wider theoretical discussion. The paper is theorised within a conceptual framework derived and extended from the work of Pierre Bourdieu and others in order to explore case studies of the English Countryside Commission's Parish Paths Partnership scheme. The micropolitics involved with this scheme are examined and used to highlight more general issues raised by increased 'parish empowerment' in the 'postrural'.
Abstract:
The development of NWP models with grid spacing down to 1 km should produce more realistic forecasts of convective storms. However, greater realism does not necessarily mean more accurate precipitation forecasts. The rapid growth of errors on small scales in conjunction with preexisting errors on larger scales may limit the usefulness of such models. The purpose of this paper is to examine whether improved model resolution alone is able to produce more skillful precipitation forecasts on useful scales, and how the skill varies with spatial scale. A verification method will be described in which skill is determined from a comparison of rainfall forecasts with radar using fractional coverage over different sized areas. The Met Office Unified Model was run with grid spacings of 12, 4, and 1 km for 10 days in which convection occurred during the summers of 2003 and 2004. All forecasts were run from 12-km initial states for a clean comparison. The results show that the 1-km model was the most skillful over all but the smallest scales (approximately <10–15 km). A measure of acceptable skill was defined; this was attained by the 1-km model at scales around 40–70 km, some 10–20 km less than that of the 12-km model. The biggest improvement occurred for heavier, more localized rain, despite it being more difficult to predict. The 4-km model did not improve much on the 12-km model because of the difficulties of representing convection at that resolution, which was accentuated by the spinup from 12-km fields.
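The fractional-coverage comparison can be sketched as a Fractions Skill Score-style calculation, assuming forecast and radar fields on a common grid; the thresholding and neighbourhood handling in the paper may differ from this minimal version.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fractions_skill_score(forecast, observed, threshold, window):
    """Neighbourhood (fractions) verification of a rain forecast against radar.

    forecast, observed : 2-D rain fields on the same grid
    threshold          : rain threshold defining an 'event' (e.g. 1 mm/h)
    window             : neighbourhood size in grid points
    """
    f_frac = uniform_filter((forecast >= threshold).astype(float), size=window)
    o_frac = uniform_filter((observed >= threshold).astype(float), size=window)
    mse = np.mean((f_frac - o_frac) ** 2)
    mse_ref = np.mean(f_frac ** 2) + np.mean(o_frac ** 2)
    return 1.0 - mse / mse_ref if mse_ref > 0 else np.nan
```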
Abstract:
We present the results of simulations carried out with the Met Office Unified Model at 12 km, 4 km and 1.5 km resolution for a large region centred on West Africa, using several different representations of the convection processes. These span the range of resolutions from much coarser than the scale of the convective processes to cloud-system resolving, and thus encompass the intermediate "grey-zone". The diurnal cycle in the extent of convective regions in the models is tested against observations from the Geostationary Earth Radiation Budget instrument on Meteosat-8. By this measure, the two best-performing simulations are a 12 km model without convective parametrization, using Smagorinsky-style sub-grid-scale mixing in all three dimensions, and a 1.5 km simulation with two-dimensional Smagorinsky mixing. Of these, the 12 km model produces a better match to the magnitude of the total cloud fraction, but the 1.5 km simulation better captures the timing of its peak value. The results suggest that the previously reported improvement in the representation of the diurnal cycle of convective organisation in the 4 km model compared to the standard 12 km configuration is principally a result of the convection scheme employed rather than the improved resolution per se. The details of and implications for high-resolution model simulations are discussed.
Abstract:
In Part I of this study it was shown that moving from a moisture-convergence-dependent to a relative-humidity-dependent organized entrainment rate in the formulation for deep convection was responsible for significant advances in the simulation of the Madden-Julian Oscillation (MJO) in the ECMWF model. However, the application of traditional MJO diagnostics was not adequate to understand why changing the control on convection had such a pronounced impact on the representation of the MJO. In this study a set of process-based diagnostics is applied to the hindcast experiments described in Part I to identify the physical mechanisms responsible for the advances in MJO simulation. Increasing the sensitivity of the deep convection scheme to environmental moisture is shown to modify the relationship between precipitation and moisture in the model. Through dry-air entrainment, convective plumes ascending in low-humidity environments terminate lower in the atmosphere. As a result, there is an increase in the occurrence of cumulus congestus, which acts to moisten the mid troposphere. Due to the modified precipitation-moisture relationship, more moisture is able to build up, which effectively preconditions the tropical atmosphere for the transition to deep convection. Results from this study suggest that a tropospheric moisture control on convection is key to simulating the interaction between the convective heating and the large-scale wave forcing associated with the MJO.
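A rough sketch of the idea of a relative-humidity-dependent organized entrainment rate: drier environments entrain more, so plumes rising through low-humidity air dilute faster and terminate lower. The functional form and constants below are illustrative and are not the ECMWF IFS formulation.

```python
def organized_entrainment(rh, eps0=1.8e-3):
    """Illustrative entrainment rate (per metre) that increases as the
    environmental relative humidity (rh, given as a fraction) decreases."""
    return eps0 * max(1.3 - rh, 0.0)

# A plume in a moist column (rh ~ 0.9) entrains far less than one in a dry
# column (rh ~ 0.5), which dilutes, terminates lower and favours congestus.
print(organized_entrainment(0.9), organized_entrainment(0.5))
```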
Abstract:
The emergence of high-density wireless local area network (WLAN) deployments in recent years is a testament to the insatiable demands for wireless broadband services. The increased density of WLAN deployments brings with it the potential of increased capacity, extended coverage, and exciting new applications. However, the corresponding increase in contention and interference can significantly degrade throughputs, unless new challenges in channel assignment are effectively addressed. In this paper, a client-assisted channel assignment scheme that can provide enhanced throughput is proposed. A study on the impact of interference on throughput with multiple access points (APs) is first undertaken using a novel approach that determines the possibility of parallel transmissions. A metric with a good correlation to the throughput, i.e., the number of conflict pairs, is used in the client-assisted minimum conflict pairs (MICPA) scheme. In this scheme, measurements from clients are used to assist the AP in determining the channel with the minimum number of conflict pairs to maximize its expected throughput. Simulation results show that the client-assisted MICPA scheme can provide meaningful throughput improvements over other schemes that only utilize the AP’s measurements.
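In the spirit of the client-assisted MICPA scheme summarized above, a minimal channel-selection sketch: each associated client reports the neighbouring transmissions it can hear, and the AP picks the channel with the fewest resulting conflict pairs. The data format and the notion of a conflict pair here are simplified assumptions, not the paper's exact protocol.

```python
from collections import Counter

def choose_channel(client_reports, candidate_channels):
    """Pick the channel with the fewest conflict pairs.

    client_reports     : iterable of (neighbour_ap, channel) tuples reported
                         by associated clients, i.e. transmissions they can hear
    candidate_channels : channels the AP may select
    """
    conflicts = Counter(ch for _ap, ch in client_reports)
    # Each reported neighbour on a channel counts as one potential conflict
    # pair with this AP; select the channel minimising that count.
    return min(candidate_channels, key=lambda ch: conflicts.get(ch, 0))

# Example: clients hear neighbours on channels 1 and 6; channel 11 is free.
best = choose_channel([("ap2", 1), ("ap3", 1), ("ap4", 6)], [1, 6, 11])  # -> 11
```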