967 results for Turbulent Flocculation


Relevance: 10.00%

Abstract:

The time-dependent climate response to changing concentrations of greenhouse gases and sulfate aerosols is studied using a coupled general circulation model of the atmosphere and the ocean (ECHAM4/OPYC3). The concentrations of the well-mixed greenhouse gases like CO2, CH4, N2O, and CFCs are prescribed for the past (1860–1990) and projected into the future according to Intergovernmental Panel on Climate Change (IPCC) scenario IS92a. In addition, the space–time distribution of tropospheric ozone is prescribed, and the tropospheric sulfur cycle is calculated within the coupled model using sulfur emissions of the past and projected into the future (IS92a). The radiative impact of the aerosols is considered via both the direct and the indirect (i.e., through cloud albedo) effect. It is shown that the simulated trend in sulfate deposition since the end of the last century is broadly consistent with ice core measurements, and the calculated radiative forcings from preindustrial to present time are within the uncertainty range estimated by IPCC. Three climate perturbation experiments are performed, applying different forcing mechanisms, and the results are compared with those obtained from a 300-yr unforced control experiment. As in previous experiments, the climate response is similar, but weaker, if aerosol effects are included in addition to greenhouse gases. One notable difference from previous experiments is that the strength of the Indian summer monsoon is not fundamentally affected by the inclusion of aerosol effects. Although the monsoon is damped compared to a greenhouse-gas-only experiment, it is still more vigorous than in the control experiment. This different behavior, compared to previous studies, is the result of the different land–sea distribution of aerosol forcing. Somewhat unexpectedly, the intensity of the global hydrological cycle becomes weaker in a warmer climate if both direct and indirect aerosol effects are included in addition to the greenhouse gases. This can be related to anomalous net radiative cooling of the earth’s surface through aerosols, which is balanced by reduced turbulent transfer of both sensible and latent heat from the surface to the atmosphere.
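
The closing argument rests on the long-term surface energy balance: in equilibrium, a change in the net radiation absorbed by the surface must be matched by a change in the turbulent heat fluxes. Written schematically (this relation is implied by, not quoted from, the abstract):

    \Delta R_{\mathrm{net}} \;\approx\; \Delta Q_H + \Delta Q_E

Since the latent heat flux Q_E is proportional to surface evaporation, an aerosol-induced net radiative cooling of the surface (ΔR_net < 0) forces reduced evaporation, and hence the weaker global hydrological cycle described above.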

Relevance: 10.00%

Abstract:

A simple four-dimensional assimilation technique, called Newtonian relaxation, has been applied to the Hamburg climate model (ECHAM) to enable comparison of model output with observations for short periods of time. The prognostic model variables vorticity, divergence, temperature, and surface pressure have been relaxed toward European Centre for Medium-Range Weather Forecasts (ECMWF) global meteorological analyses. Several experiments have been carried out in which the values of the relaxation coefficients have been varied, to find out which values are best suited to our purpose. To be able to use the method for validation of model physics or chemistry, good agreement between the simulated and observed mass and wind fields is required. In addition, the model physics should not be disturbed too strongly by the relaxation forcing itself. Both aspects have been investigated. Good agreement with basic observed quantities, like wind, temperature, and pressure, is obtained for most simulations in the extratropics. Derived variables, like precipitation and evaporation, have been compared with ECMWF forecasts and observations. Agreement for these variables is poorer than for the basic observed quantities. Nevertheless, considerable improvement is obtained relative to a control run without assimilation. Differences between tropics and extratropics are smaller than for the basic observed quantities. Results also show that precipitation and evaporation are affected by a sort of continuous spin-up introduced by the relaxation: the bias (ECMWF − ECHAM) increases with increasing relaxation forcing. Consistent with this result, we found that with increasing relaxation forcing the vertical exchange of tracers by turbulent boundary-layer mixing and, to a lesser extent, by convection is reduced.
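
Newtonian relaxation amounts to adding a linear restoring term to each relaxed prognostic variable; a generic form (not the exact ECHAM implementation) is

    \frac{\partial X}{\partial t} = F_{\mathrm{model}}(X) + G_X \left( X_{\mathrm{ECMWF}} - X \right)

where X stands for vorticity, divergence, temperature or surface pressure, F_model(X) is the full model tendency, X_ECMWF the analysis value interpolated to the model grid and time step, and G_X the relaxation coefficient (units s⁻¹) whose magnitude was varied between the experiments.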

Relevance: 10.00%

Abstract:

The very first numerical models, developed more than 20 years ago, were drastic simplifications of the real atmosphere and were mostly restricted to describing adiabatic processes. For predictions of a day or two of the mid-tropospheric flow these models often gave reasonable results, but the results deteriorated quickly when the prediction was extended further in time. The prediction of the surface flow was unsatisfactory even for short forecasts. It was evident that both the energy-generating and the dissipative processes have to be included in numerical models in order to predict the weather patterns in the lower part of the atmosphere and to predict the atmosphere in general beyond a day or two. Present-day computers make it possible to attack the weather forecasting problem in a more comprehensive and complete way, and substantial efforts have been made during the last decade in particular to incorporate the non-adiabatic processes in numerical prediction models. The physics of radiative transfer, condensation of moisture, turbulent transfer of heat, momentum and moisture, and the dissipation of kinetic energy are the most important processes associated with the formation of energy sources and sinks in the atmosphere, and these have to be incorporated in numerical prediction models extending over more than a few days. The mechanisms of these processes are mainly related to disturbances of small scale in space and time, or even to molecular processes. It is therefore one of the basic characteristics of numerical models that these small-scale disturbances cannot be included in an explicit way. The first reason for this is the discretization of the model's atmosphere by a finite-difference grid or the use of a Galerkin or spectral-function representation. The second reason why these processes cannot be introduced explicitly into a numerical model is that some of the physics necessary to describe them (such as local buoyancy) is eliminated a priori by the constraints of the hydrostatic approximation. Even if this physical constraint can be relaxed by making the models non-hydrostatic, the scale problem is virtually impossible to solve, and for the foreseeable future we have to try to incorporate the ensemble or gross effect of these physical processes on the large-scale synoptic flow. The formulation of this ensemble effect in terms of grid-scale variables (the parameters of the large-scale flow) is called 'parameterization'. For short-range prediction of the synoptic flow at middle and high latitudes, very simple parameterization has proven to be rather successful.
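
As a concrete illustration of parameterization in the sense defined above (a textbook example, not a formula taken from this text), the grid-scale surface flux of sensible heat is commonly written as a bulk-aerodynamic relation:

    H = \rho\, c_p\, C_H\, |\mathbf{U}|\, (\theta_s - \theta_a)

where ρ is air density, c_p the specific heat of air at constant pressure, |U| the resolved wind speed at the lowest model level, θ_s and θ_a the surface and lowest-level potential temperatures, and C_H a dimensionless exchange coefficient that carries the ensemble effect of the unresolved turbulent eddies.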

Relevance: 10.00%

Abstract:

Purpose – The focus of the extant strategy literature is on for-profit organisations and, within this group, on public organisations. Other forms of organisation exist, and following the deep recession of 2008 there is greater interest in them. The aim of this case study and interview is to examine strategy, strategic decisions and strategic management in a not-for-profit provident. Design/methodology/approach – The paper draws on documentary evidence and a semi-structured interview with Ray King, chief executive of Bupa. The perspective of the CEO is key in strategy, and such perspectives are relatively rare. Findings – Bupa invests its surplus to provide better healthcare. Free from the pressures of quarterly reporting and shareholders, it can pursue long-term value creation for members rather than short-term surpluses. Research limitations/implications – The case study and interview offer a unique insight into strategy-making within a successful mutual provident that has grown organically and externally to become an international leader in health insurance. Originality/value – This case study sheds light on strategy-making within a not-for-profit provident that has diversified and grown significantly over the past six decades. Furthermore, very few case studies offer insight into the thinking of a chief executive who has successfully managed a business in a turbulent environment.

Relevance: 10.00%

Abstract:

Purpose – The purpose of this paper is to demonstrate how strategy is developed and implemented in an organisation with an unusual ownership model. Partnerships are not a prevalent form of ownership, but as this case demonstrates they can be extremely effective. Furthermore, this case demonstrates how logical incrementalism can be used to implement major strategic decisions. Design/methodology/approach – The paper draws on company documentary evidence and a semi-structured interview with Mr Charlie Mayfield, Chairman of the John Lewis Partnership. A chairman has a helicopter view of the business, a perspective that is rarely captured by strategy researchers. This case study offers an insight into the strategic thinking of the chairman and chief executive of a successful company. Research limitations/implications – The case study and interview offer a unique insight into the rationale behind strategic decisions within a successful partnership that has grown organically in a highly competitive retail market without high gearing. Originality/value – This case study sheds light on strategic moves within a partnership. Furthermore, very few case studies offer insight into the thinking of a chief executive who has successfully managed a business in a turbulent environment.

Relevance: 10.00%

Abstract:

This article focuses on the characteristics of persistent thin single-layer mixed-phase clouds. We seek to answer two important questions: (i) how does ice continually nucleate and precipitate from these clouds, without the available ice nuclei becoming depleted? (ii) how do the supercooled liquid droplets persist in spite of the net flux of water vapour to the growing ice crystals? These questions are answered quantitatively using in situ and radar observations of a long-lived mixed-phase cloud layer over the Chilbolton Observatory. Doppler radar measurements show that the top 500 m of cloud (the top 250 m of which is mixed-phase, with ice virga beneath) is turbulent and well-mixed, and the liquid water content is adiabatic. This well-mixed layer is bounded above and below by stable layers. This inhibits entrainment of fresh ice nuclei into the cloud layer, yet our in situ and radar observations show that a steady flux of ≈100 m⁻² s⁻¹ ice crystals fell from the cloud over the course of ∼1 day. Comparing this flux to the concentration of conventional ice nuclei expected to be present within the well-mixed layer, we find that these nuclei would be depleted within less than 1 h. We therefore argue that nucleation in these persistent supercooled clouds is strongly time-dependent in nature, with droplets freezing slowly over many hours, significantly longer than the few seconds of residence time in an ice nucleus counter. Once nucleated, the ice crystals are observed to grow primarily by vapour deposition, because of the low liquid water path (21 g m⁻²) yet vapour-rich environment. Evidence for this comes from high differential reflectivity in the radar observations, and in situ imaging of the crystals. The flux of vapour from liquid to ice is quantified from in situ measurements, and we show that this modest flux (3.3 g m⁻² h⁻¹) can be readily offset by slow radiative cooling of the layer to space.
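
The depletion argument can be checked with back-of-envelope arithmetic; in the sketch below the ice-nucleus concentration is an assumed, illustrative value (roughly one per litre), not a number quoted from the observations:

    # Observed quantities from the abstract
    crystal_flux = 100.0     # ice crystals per m^2 per s falling from the layer
    layer_depth  = 250.0     # m, depth of the mixed-phase part of the layer

    # Assumed, illustrative concentration of conventional ice nuclei (~1 per litre)
    n_in = 1.0e3             # nuclei per m^3

    column_in = n_in * layer_depth               # nuclei available per m^2 of column
    depletion_time = column_in / crystal_flux    # seconds, if each crystal consumes one nucleus
    print(f"ice nuclei exhausted after ~{depletion_time / 60:.0f} minutes")   # ~42 min, i.e. < 1 h

With these numbers the conventional nuclei would indeed be exhausted in well under an hour, which is the basis for invoking slow, time-dependent freezing of the droplets.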

Relevance: 10.00%

Abstract:

A Lagrangian model of photochemistry and mixing is described (CiTTyCAT, stemming from the Cambridge Tropospheric Trajectory model of Chemistry And Transport), which is suitable for transport and chemistry studies throughout the troposphere. Over the last five years the model has been developed in parallel at several different institutions, and here those developments have been incorporated into one "community" model and documented for the first time. The key photochemical developments include a new scheme for biogenic volatile organic compounds and updated emissions schemes. The key physical development is to evolve composition following an ensemble of trajectories within neighbouring air-masses, including a simple scheme for mixing between them via an evolving "background profile", both within the boundary layer and in the free troposphere. The model runs along trajectories pre-calculated using winds and temperature from meteorological analyses. In addition, boundary layer height and precipitation rates, output from the analysis model, are interpolated to trajectory points and used as inputs to the mixing and wet deposition schemes. The model is most suitable in regimes where the effects of small-scale turbulent mixing are slow relative to advection by the resolved winds, so that coherent air-masses form with distinct composition and strong gradients between them. Such air-masses can persist for many days while stretching, folding and thinning. Lagrangian models offer a useful framework for picking apart the processes of air-mass evolution over inter-continental distances, without being hindered by the numerical diffusion inherent to global Eulerian models. The model, including its different box and trajectory modes, is described and some output for each of the modes is presented for evaluation. The model is available for download from a Subversion-controlled repository by contacting the corresponding authors.
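
The evolution of composition along a trajectory with relaxation toward an evolving background profile can be sketched as below; the function and variable names are invented for illustration and do not correspond to the CiTTyCAT source code:

    import numpy as np

    def step_air_mass(conc, background, chem_tendency, mix_rate, dt):
        """Advance one Lagrangian air-mass box by one time step (schematic).

        conc          : species concentrations inside the air mass
        background    : background-profile concentrations at the same location
        chem_tendency : function giving photochemical production minus loss
        mix_rate      : relaxation rate (s^-1) toward the background
        dt            : time step (s)
        """
        dcdt = chem_tendency(conc) + mix_rate * (background - conc)
        return conc + dt * dcdt

    # Inert tracer relaxing toward its background value over one hour of 1-s steps
    conc, background = np.array([50.0]), np.array([40.0])
    for _ in range(3600):
        conc = step_air_mass(conc, background, lambda c: np.zeros_like(c),
                             mix_rate=1.0e-4, dt=1.0)
    print(conc)   # decays from 50 toward 40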

Relevance: 10.00%

Abstract:

High-resolution simulations with a mesoscale model are performed to estimate the heat and moisture budgets of a well-mixed boundary layer. The model budgets are validated against energy budgets obtained from airborne measurements over heterogeneous terrain in Western Germany. The time rate of change, vertical divergence, and horizontal advection for an atmospheric column of air are estimated. Results show that the time trend of specific humidity exhibits some deficiencies, while the potential temperature trend is matched accurately. Furthermore, the simulated turbulent surface fluxes of sensible and latent heat are comparable to the measured fluxes, leading to similar values of the vertical divergence. Analysis of different horizontal model resolutions shows improved surface fluxes with increasing resolution, a fact attributed to a reduced aggregation effect. Scale-interaction effects could be identified: while time trends and advection are strongly influenced by mesoscale forcing, the turbulent surface fluxes are mainly controlled by microscale processes.
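
The column budget referred to above follows the standard decomposition into local tendency, horizontal advection, and vertical turbulent-flux divergence; for the potential temperature of a well-mixed layer it reads, schematically,

    \frac{\partial \overline{\theta}}{\partial t} = -\,\overline{\mathbf{u}} \cdot \nabla_h \overline{\theta} \;-\; \frac{\partial \overline{w'\theta'}}{\partial z}

with an analogous equation for specific humidity q; the surface values of the turbulent fluxes w'θ' and w'q' are proportional to the simulated sensible and latent heat fluxes that are compared against the airborne measurements.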

Relevance: 10.00%

Abstract:

Three wind gust estimation (WGE) methods implemented in the numerical weather prediction (NWP) model COSMO-CLM are evaluated with respect to their forecast quality using skill scores. Two methods estimate gusts locally from the mean wind speed and the turbulence state of the atmosphere, while the third considers the mixing-down of high momentum within the planetary boundary layer (WGE Brasseur). One hundred and fifty-eight windstorms from the last four decades are simulated and the results are compared with gust observations at 37 stations in Germany. Skill scores reveal that the local WGE methods show an overall better behaviour, whilst WGE Brasseur performs less well except for mountain regions. The WGE turbulent kinetic energy (TKE) method introduced here permits a probabilistic interpretation, using statistical characteristics of gusts at observational sites, for an assessment of uncertainty. The WGE TKE formulation has the advantage of a ‘native’ interpretation of wind gusts as the result of the local occurrence of TKE. The inclusion of a probabilistic WGE TKE approach in NWP models thus has several advantages over other methods, as it has the potential to estimate the uncertainty of gusts at observational sites.
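
A local, TKE-based gust estimate of the kind discussed above can be written in a generic form (the exact coefficients used in COSMO-CLM are not given in the abstract):

    v_{\mathrm{gust}} = \overline{v}_{10\,\mathrm{m}} + \alpha \sqrt{2\, e}

where v̄_10m is the mean 10 m wind speed, e the boundary-layer turbulent kinetic energy, and α an empirical factor; in the probabilistic variant this additive term is drawn from a gust distribution fitted to station statistics, which is what allows the uncertainty of gusts at observational sites to be quantified.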

Relevance: 10.00%

Abstract:

A physically based gust parameterisation is added to the atmospheric mesoscale model FOOT3DK to estimate wind gusts associated with storms over West Germany. The gust parameterisation follows the Wind Gust Estimate (WGE) method, and its functionality is verified in this study. The method assumes that gusts occurring at the surface are induced by turbulent eddies in the planetary boundary layer, deflecting air parcels from higher levels down to the surface under suitable conditions. Model simulations are performed with horizontal resolutions of 20 km and 5 km. Ten historical storm events of different characteristics and intensities are chosen in order to include a wide range of typical storms affecting Central Europe. All simulated storms occurred between 1990 and 1998. The accuracy of the method is assessed objectively by validating the simulated wind gusts against data from 16 synoptic stations by means of “quality parameters”. In terms of these parameters, the temporal and spatial evolution of the simulated gusts is well reproduced. Simulated values for low-altitude stations agree particularly well with the measured gusts. For orographically exposed locations, the gust speeds are partly underestimated. The absolute maximum gusts lie in most cases within the bounding interval given by the WGE method. For individual storms, the performance of the method is better for intense and large storms than for weaker ones; particularly for weaker storms, the gusts are typically overestimated. The results for the sample of ten storms show that the method is generally applicable with the mesoscale model FOOT3DK for mid-latitude winter storms, even in areas with complex orography.
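
The station-by-station validation can be summarised with simple verification measures; the metrics below are generic choices given for orientation, since the paper's own "quality parameters" are not specified in the abstract:

    import numpy as np

    def gust_verification(simulated, observed):
        """Generic comparison of simulated and observed gust speeds (illustrative)."""
        sim = np.asarray(simulated, dtype=float)
        obs = np.asarray(observed, dtype=float)
        bias = float(np.mean(sim - obs))                  # systematic over/underestimation
        rmse = float(np.sqrt(np.mean((sim - obs) ** 2)))  # typical error magnitude
        corr = float(np.corrcoef(sim, obs)[0, 1])         # agreement of temporal evolution
        return {"bias": bias, "rmse": rmse, "corr": corr}

    print(gust_verification([28.0, 35.0, 22.0], [30.0, 33.0, 25.0]))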

Relevance: 10.00%

Abstract:

The theory of homogeneous barotropic beta-plane turbulence is here extended to include effects arising from spatial inhomogeneity in the form of a zonal shear flow. Attention is restricted to the geophysically important case of zonal flows that are barotropically stable and are of larger scale than the resulting transient eddy field. Because of the presumed scale separation, the disturbance enstrophy is approximately conserved in a fully nonlinear sense, and the (nonlinear) wave–mean-flow interaction may be characterized as a shear-induced spectral transfer of disturbance enstrophy along lines of constant zonal wavenumber k. In this transfer the disturbance energy is generally not conserved. The nonlinear interactions between different disturbance components are turbulent for scales smaller than the inverse of Rhines's cascade-arrest scale κβ ≡ (β₀/2u_rms)½, and in this regime their leading-order effect may be characterized as a tendency to spread the enstrophy (and energy) along contours of constant total wavenumber κ ≡ (k² + l²)½. Insofar as this process of turbulent isotropization involves spectral transfer of disturbance enstrophy across lines of constant zonal wavenumber k, it can be readily distinguished from the shear-induced transfer, which proceeds along them. However, an analysis in terms of the total wavenumber κ alone, which would be justified if the flow were homogeneous, would tend to mask the differences. The foregoing theoretical ideas are tested by performing direct numerical simulation experiments. It is found that the picture of classical beta-plane turbulence is altered, through the effect of the large-scale zonal flow, in the following ways: (i) while the turbulence is still confined to κ ≳ κβ, the disturbance field penetrates to the largest scales of motion; (ii) the larger disturbance scales κ < κβ exhibit a tendency to meridional rather than zonal anisotropy, namely towards v² > u² rather than vice versa; (iii) the initial spectral transfer rate away from an isotropic intermediate-scale source is significantly enhanced by the shear-induced transfer associated with straining by the zonal flow. This last effect occurs even when the large-scale shear appears weak to the energy-containing eddies, in the sense that dU/dy ≪ uκ for typical eddy length and velocity scales.
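
The cascade-arrest wavenumber quoted above follows from equating the turbulent straining rate with the Rossby-wave frequency; schematically,

    u_{\mathrm{rms}}\,\kappa \;\sim\; \frac{\beta_0}{\kappa} \quad\Longrightarrow\quad \kappa_\beta = \left(\frac{\beta_0}{2\,u_{\mathrm{rms}}}\right)^{1/2}

so that for κ ≫ κβ the nonlinear (turbulent) interactions dominate, while for κ ≪ κβ Rossby-wave propagation inhibits the cascade; the factor 2 is conventional.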

Relevance: 10.00%

Abstract:

Global FGGE data are used to investigate several aspects of large-scale turbulence in the atmosphere. The approach follows that for two-dimensional, nondivergent turbulent flows which are homogeneous and isotropic on the sphere. Spectra of kinetic energy, enstrophy and available potential energy are obtained for both the stationary and transient parts of the flow. Nonlinear interaction terms and fluxes of energy and enstrophy through wavenumber space are calculated and compared with the theory. A possible method of parameterizing the interactions with unresolved scales is considered. Two rather different flow regimes are found in wavenumber space. The high-wavenumber regime is dominated by the transient components of the flow and exhibits, at least approximately, several of the conditions characterizing homogeneous and isotropic turbulence. This region of wavenumber space also displays some of the features of an enstrophy-cascading inertial subrange. The low-wavenumber region, on the other hand, is dominated by the stationary component of the flow, exhibits marked anisotropy and, in contrast to the high-wavenumber regime, displays a marked change between January and July.
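
For reference, the enstrophy-cascading inertial subrange mentioned above is characterised, in classical two-dimensional turbulence theory, by the Kraichnan–Batchelor spectrum

    E(k) \;\propto\; \eta^{2/3}\, k^{-3}

where η is the rate of enstrophy transfer toward smaller scales; an approximately k⁻³ slope of the transient kinetic-energy spectrum at high wavenumbers, together with a downscale enstrophy flux, is the signature looked for in the FGGE analysis.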

Relevance: 10.00%

Abstract:

Anthropogenic emissions of heat and exhaust gases play an important role in the atmospheric boundary layer, altering air quality, greenhouse gas concentrations and the transport of heat and moisture at various scales. This is particularly evident in urban areas, where emission sources are integrated in the highly heterogeneous urban canopy layer and directly linked to human activities which exhibit significant temporal variability. It is common practice to use eddy covariance observations to estimate turbulent surface fluxes of latent heat, sensible heat and carbon dioxide, which can be attributed to a local-scale source area. This study provides a method to assess the influence of micro-scale anthropogenic emissions on heat, moisture and carbon dioxide exchange in a highly urbanized environment for two sites in central London, UK. A new algorithm for the Identification of Micro-scale Anthropogenic Sources (IMAS) is presented, with two aims. Firstly, IMAS filters out the influence of micro-scale emissions and allows for the analysis of the turbulent fluxes representative of the local-scale source area. Secondly, it is used to give a first-order estimate of the anthropogenic heat flux and carbon dioxide flux representative of the building scale. The algorithm is evaluated using directional and temporal analysis, and is then applied at a second site which was not used in its development. The spatial and temporal local-scale patterns, as well as the micro-scale fluxes, appear physically reasonable and can be incorporated in the analysis of long-term eddy covariance measurements at the sites in central London. In addition to the new IMAS technique, further quality-control and quality-assurance steps used in the flux processing are presented. The methods and results have implications for urban flux measurements in dense urbanised settings with significant sources of heat and greenhouse gases.

Relevance: 10.00%

Abstract:

For an increasing number of applications, mesoscale modelling systems now aim to better represent urban areas. The complexity of processes resolved by urban parametrization schemes varies with the application. The concept of fitness-for-purpose is therefore critical for both the choice of parametrizations and the way in which the scheme should be evaluated. A systematic and objective model response analysis procedure (Multiobjective Shuffled Complex Evolution Metropolis (MOSCEM) algorithm) is used to assess the fitness of the single-layer urban canopy parametrization implemented in the Weather Research and Forecasting (WRF) model. The scheme is evaluated regarding its ability to simulate observed surface energy fluxes and the sensitivity to input parameters. Recent amendments are described, focussing on features which improve its applicability to numerical weather prediction, such as a reduced and physically more meaningful list of input parameters. The study shows a high sensitivity of the scheme to parameters characterizing roof properties in contrast to a low response to road-related ones. Problems in partitioning of energy between turbulent sensible and latent heat fluxes are also emphasized. Some initial guidelines to prioritize efforts to obtain urban land-cover class characteristics in WRF are provided. Copyright © 2010 Royal Meteorological Society and Crown Copyright.

Relevance: 10.00%

Abstract:

An urban energy and water balance model is presented which uses a small number of commonly measured meteorological variables and information about the surface cover. Rates of evaporation-interception for a single layer with multiple surface types (paved, buildings, coniferous trees and/or shrubs, deciduous trees and/or shrubs, irrigated grass, non-irrigated grass and water) are calculated. Below each surface type, except water, there is a single soil layer. At each time step the moisture state of each surface is calculated. Horizontal water movements at the surface and in the soil are incorporated. Particular attention is given to the surface conductance used to model evaporation and its parameters. The model is tested against direct flux measurements carried out over a number of years in Vancouver, Canada and Los Angeles, USA. At all measurement sites the model is able to simulate the net all-wave radiation and the turbulent sensible and latent heat fluxes well (RMSE = 25–47, 30–64 and 20–56 W m⁻², respectively). The model reproduces the diurnal cycle of the turbulent fluxes but typically underestimates the latent heat flux and overestimates the sensible heat flux in the daytime. The model tracks measured surface wetness and simulates the variations in soil moisture content. It is able to respond correctly to short-term events as well as annual changes. The largest uncertainty relates to the determination of surface conductance. The model has the potential to be used for multiple applications; for example, to predict the effects of regulation on urban water use, landscaping and planning scenarios, or to assess climate mitigation strategies.
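
The fluxes discussed above are linked through the urban surface energy balance, and evaporation models of this type commonly use a Penman–Monteith-style expression with a bulk surface conductance; the form below is given for orientation and may differ in detail from the model's own formulation:

    Q^* + Q_F = Q_H + Q_E + \Delta Q_S, \qquad
    Q_E = \frac{s\,(Q^* + Q_F - \Delta Q_S) + \rho\, c_p\, D / r_a}{s + \gamma\,(1 + r_s / r_a)}

where Q* is the net all-wave radiation, Q_F the anthropogenic heat flux (where represented), ΔQ_S the net storage heat flux, s the slope of the saturation vapour-pressure curve, D the vapour-pressure deficit, γ the psychrometric constant, r_a the aerodynamic resistance and r_s the surface resistance, i.e. the reciprocal of the surface conductance identified above as the largest source of uncertainty.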