971 results for 2-compartment Dispersion Model
Abstract:
A new method of clear-air turbulence (CAT) forecasting based on the Lighthill–Ford theory of spontaneous imbalance and emission of inertia–gravity waves has been derived and applied on episodic and seasonal time scales. A scale analysis of this shallow-water theory for midlatitude synoptic-scale flows identifies advection of relative vorticity as the leading-order source term. Examination of leading- and second-order terms elucidates previous, more empirically inspired CAT forecast diagnostics. Application of the Lighthill–Ford theory to the Upper Mississippi and Ohio Valleys CAT outbreak of 9 March 2006 results in good agreement with pilot reports of turbulence. Application of Lighthill–Ford theory to CAT forecasting for the 3 November 2005–26 March 2006 period, using 1-h forecasts from the 1500 UTC run of the Rapid Update Cycle 2 (RUC-2) model, leads to superior forecasts compared to the current operational version of the Graphical Turbulence Guidance (GTG1) algorithm, the most skillful operational CAT forecasting method in existence. The results suggest that major improvements in CAT forecasting could result if the methods presented herein become operational.
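The scale analysis above identifies advection of relative vorticity as the leading-order source term. As a minimal illustration of how such a diagnostic is evaluated on gridded winds (this is not the operational algorithm; the grid spacing and idealised wind field are invented), the term can be computed with centred finite differences:

```python
import math

# Sketch: leading-order Lighthill-Ford source term -- advection of relative
# vorticity -- via centred finite differences on an idealised wind field.
# Grid spacing and the analytic flow below are illustrative assumptions.

dx = dy = 100e3  # grid spacing in metres (assumed)
n = 7

# Idealised non-divergent flow: u = -A*sin(k*y), v = A*sin(k*x)
A, k = 10.0, 2 * math.pi / (n * dx)
u = [[-A * math.sin(k * j * dy) for i in range(n)] for j in range(n)]
v = [[A * math.sin(k * i * dx) for i in range(n)] for j in range(n)]

def ddx(f, j, i):  # centred difference in x
    return (f[j][i + 1] - f[j][i - 1]) / (2 * dx)

def ddy(f, j, i):  # centred difference in y
    return (f[j + 1][i] - f[j - 1][i]) / (2 * dy)

# Relative vorticity zeta = dv/dx - du/dy on interior points
zeta = [[ddx(v, j, i) - ddy(u, j, i) if 0 < i < n - 1 and 0 < j < n - 1 else 0.0
         for i in range(n)] for j in range(n)]

# Advection of relative vorticity: -(u * dzeta/dx + v * dzeta/dy)
adv = {}
for j in range(2, n - 2):
    for i in range(2, n - 2):
        adv[(j, i)] = -(u[j][i] * ddx(zeta, j, i) + v[j][i] * ddy(zeta, j, i))

max_adv = max(abs(a) for a in adv.values())
```

In an operational setting this quantity would be evaluated on model forecast winds (e.g. the RUC grids mentioned above) rather than an analytic field.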
Abstract:
This paper reports an uncertainty analysis of critical loads for acid deposition for a site in southern England, using the Steady State Mass Balance Model. The uncertainty bounds, distribution type and correlation structure for each of the 18 input parameters were considered explicitly, and overall uncertainty was estimated by Monte Carlo methods. Estimates of deposition uncertainty were made from measured data and an atmospheric dispersion model, and hence the uncertainty in exceedance could also be calculated. The uncertainties of the calculated critical loads were generally much lower than those of the input parameters due to a "compensation of errors" mechanism: coefficients of variation ranged from 13% for CLmaxN to 37% for CL(A). With 1990 deposition, the probability that the critical load was exceeded was > 0.99; to reduce this probability to 0.50, a 63% reduction in deposition would be required; to reduce it to 0.05, an 82% reduction. With 1997 deposition, which was lower than that in 1990, exceedance probabilities declined and uncertainties in exceedance narrowed as deposition uncertainty had less effect. The parameters contributing most to the uncertainty in critical loads were weathering rates, base cation uptake rates, and the choice of critical chemical value, indicating possible research priorities. However, the different critical load parameters were to some extent sensitive to different input parameters. The application of such probabilistic results to environmental regulation is discussed.
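The Monte Carlo procedure described above can be sketched in a few lines. This toy version is NOT the 18-parameter Steady State Mass Balance Model: the critical load here is an invented combination of a weathering rate, a base-cation uptake rate and a critical leaching criterion, each with hypothetical distributions, and the deposition value is likewise assumed:

```python
import math
import random
import statistics

# Toy Monte Carlo uncertainty sketch: draw parameter samples, compute a
# critical load per draw, then summarise CV and exceedance probability.
# All distributions and the deposition value are hypothetical.

random.seed(0)
N = 20_000
deposition = 2.0  # assumed deposition, keq/ha/yr

samples = []
exceeded = 0
for _ in range(N):
    weathering = random.lognormvariate(math.log(0.5), 0.4)  # hypothetical
    bc_uptake = random.lognormvariate(math.log(0.2), 0.3)   # hypothetical
    crit_leach = random.lognormvariate(math.log(0.3), 0.5)  # hypothetical
    cl = weathering - bc_uptake + crit_leach                # toy mass balance
    samples.append(cl)
    if deposition > cl:
        exceeded += 1

mean_cl = statistics.fmean(samples)
cv = statistics.stdev(samples) / mean_cl  # coefficient of variation
p_exceed = exceeded / N                   # probability of exceedance
```

Note how the CV of the output can differ markedly from the CVs of the inputs, which is the "compensation of errors" effect discussed above.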
Abstract:
A surface forcing response framework is developed that enables an understanding of time-dependent climate change from a surface energy perspective. The framework allows the separation of fast responses that are unassociated with global-mean surface air temperature change (ΔT), which are included in the forcing, and slow feedbacks that scale with ΔT. The framework is illustrated primarily using 2 × CO2 climate model experiments and is robust across the models. For CO2 increases, the positive downward radiative component of forcing is smaller at the surface than at the tropopause, and so a rapid reduction in the upward surface latent heat (LH) flux is induced to conserve the tropospheric heat budget; this reduces the precipitation rate. Analysis of the time-dependent surface energy balance over sea and land separately reveals that land areas rapidly regain energy balance, and significant land surface warming occurs before global sea temperatures respond. The 2 × CO2 results are compared to a solar increase experiment and show that some fast responses are forcing dependent. In particular, a significant forcing from the fast hydrological response found in the CO2 experiments is much smaller in the solar experiment. These different fast responses explain why previous equilibrium studies found differences in the hydrological sensitivity between these two forcings. On longer time scales, as ΔT increases, the net surface longwave and LH fluxes provide positive and negative surface feedbacks, respectively, while the net surface shortwave and sensible heat fluxes change little. It is found that, in contrast to their fast responses, the longer-term responses of both the surface energy fluxes and the global hydrological cycle are similar for the different forcing agents.
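The separation of a fast (forcing) component from feedbacks that scale with ΔT can be sketched as a linear regression of a flux anomaly against ΔT: the intercept estimates the fast response and the slope the feedback. The data below are synthetic and noise-free (assumed values, not model output):

```python
# Sketch: if a flux anomaly behaves as N = F + lam * dT (fast component F
# plus a feedback scaling with warming dT), regressing N on dT recovers F
# as the intercept and lam as the slope. Synthetic noise-free data.

dT = [0.2, 0.5, 0.9, 1.3, 1.8, 2.2, 2.6, 3.0]  # global-mean warming, K (assumed)
F_true, lam_true = 3.7, -1.2                    # W m-2 and W m-2 K-1 (assumed)
N = [F_true + lam_true * t for t in dT]         # flux anomaly, W m-2

n = len(dT)
mx, my = sum(dT) / n, sum(N) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(dT, N))
         / sum((x - mx) ** 2 for x in dT))      # feedback estimate
intercept = my - slope * mx                     # fast-response (forcing) estimate
```

With real model output the points are noisy, but the same least-squares fit applies; performing it flux-by-flux is one way to build the kind of decomposition described above.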
Abstract:
1. The habitat components determining the structure of bee communities are well known when considering foraging resources; however, there is little data with respect to the role of nesting resources. 2. As a model system this study uses 21 diverse bee communities in a Mediterranean landscape comprising a variety of habitats regenerating after fire. The findings clearly demonstrate that a variety of nesting substrates and nest building materials have key roles in organising the composition of bee communities. 3. The availability of bare ground and potential nesting cavities were the two primary factors influencing the structure of the entire bee community, the composition of guilds, and also the relative abundance of the dominant species. Other nesting resources shown to be important include availability of steep and sloping ground, abundance of plant species providing pithy stems, and the occurrence of pre-existing burrows. 4. Nesting resource availability and guild structure varied markedly across habitats in different stages of post-fire regeneration; however, in all cases, nest sites and nesting resources were important determinants of bee community structure.
Abstract:
The budgets of seven halogenated gases (CFC-11, CFC-12, CFC-113, CFC-114, CFC-115, CCl4 and SF6) are studied by comparing measurements in polar firn air from two Arctic and three Antarctic sites with the simulation results of two numerical models: a 2-D atmospheric chemistry model and a 1-D firn diffusion model. The first is used to calculate atmospheric concentrations from emission trends based on industrial inventories; the calculated concentration trends are then used by the second to produce depth-concentration profiles in the firn. The 2-D atmospheric model is validated in the boundary layer by comparison with atmospheric station measurements, and vertically for CFC-12 by comparison with balloon and FTIR measurements. Firn air measurements provide constraints on historical atmospheric concentrations over the last century. Age distributions in the firn are discussed using a Green function approach. Finally, our results are used as input to a radiative model in order to evaluate the radiative forcing of our target gases. Multi-species and multi-site firn air studies make it possible to better constrain atmospheric trends. The low concentrations of all studied gases at the bottom of the firn, and their consistency with our model results, confirm that their natural sources are small. Our results indicate that the emissions, sinks and trends of CFC-11, CFC-12, CFC-113, CFC-115 and SF6 are well constrained, whereas this is not the case for CFC-114 and CCl4. Significant emission-dependent changes in the lifetimes of halocarbons destroyed in the stratosphere were obtained; these result from the time needed for their transport from the surface, where they are emitted, to the stratosphere, where they are destroyed. Efforts should be made to update and reduce the large uncertainties in CFC lifetimes.
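The Green function view of firn air can be sketched numerically: the concentration at a given depth is the atmospheric history convolved with an age distribution G(t) for that depth. The log-normal age distribution (mean age ~20 yr) and the linear trace-gas rise below are illustrative assumptions, not fitted firn data:

```python
import math

# Toy Green-function convolution: firn concentration = atmospheric history
# weighted by a discrete age distribution. All numbers are illustrative.

years = list(range(1900, 2001))
atmos = [max(0.0, 5.0 * (y - 1950) / 50.0) for y in years]  # toy CFC-like rise

mu, sigma = math.log(20.0), 0.5  # hypothetical log-normal age distribution

def age_pdf(a):
    # log-normal density in age a (years)
    return (math.exp(-(math.log(a) - mu) ** 2 / (2 * sigma ** 2))
            / (a * sigma * math.sqrt(2 * math.pi)))

ages = list(range(1, 101))
w = [age_pdf(a) for a in ages]
total = sum(w)
w = [x / total for x in w]  # normalised discrete Green-function weights

# Concentration sampled at this depth in 2000: convolve history with G
conc_2000 = sum(wi * atmos[years.index(2000 - a)] for wi, a in zip(w, ages))
```

The broader the age distribution, the more the deep-firn concentration lags and smooths the atmospheric trend, which is why multi-site, multi-species comparisons help deconvolve the history.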
Abstract:
This paper addresses the statistical mechanics of ideal polymer chains next to a hard wall. The principal quantity of interest, from which all monomer densities can be calculated, is the partition function, G_N(z), for a chain of N discrete monomers with one end fixed a distance z from the wall. It is well accepted that in the limit of infinite N, G_N(z) satisfies the diffusion equation with the Dirichlet boundary condition, G_N(0) = 0, unless the wall possesses a sufficient attraction, in which case the Robin boundary condition, G_N(0) = −x G_N′(0), applies with a positive coefficient, x. Here we investigate the leading N^(−1/2) correction, ΔG_N(z). Prior to the adsorption threshold, ΔG_N(z) is found to involve two distinct parts: a Gaussian correction (for z ≲ aN^(1/2)) with a model-dependent amplitude, A, and a proximal-layer correction (for z ≲ a) described by a model-dependent function, B(z).
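The diffusion-equation picture with a Dirichlet wall can be illustrated with a minimal lattice sketch (the lattice model and step rule here are illustrative, not the specific discrete model of the paper): adding a monomer averages the partition function over neighbouring heights, while the wall site is held at zero.

```python
# Toy lattice sketch: G[z] = partition function of an N-monomer chain ending
# at height z next to a hard wall, with the Dirichlet condition G[0] = 0.
# Each monomer addition is a discrete diffusion step in N.

Z = 200  # lattice sites (assumed)
N = 50   # number of monomers (assumed)

G = [1.0] * Z
G[0] = 0.0  # hard wall: Dirichlet boundary condition

for _ in range(N):
    newG = [0.0] * Z
    for z in range(1, Z - 1):
        newG[z] = 0.5 * (G[z - 1] + G[z + 1])  # simple-random-walk step
    newG[0] = 0.0         # wall kills configurations touching it
    newG[Z - 1] = G[Z - 1]  # crude far-field boundary
    G = newG
```

For large N the profile grows roughly linearly in z near the wall, the depletion-layer behaviour implied by the Dirichlet condition, while far from the wall (beyond the diffusive range ~ N steps) G is unperturbed.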
Abstract:
There have been relatively few tracer experiments that have looked at vertical plume spread in urban areas. In this paper we present results from two tracer (cyclic perfluorocarbon) experiments carried out in 2006 and 2007 in central London, centred on the BT Tower, as part of the REPARTEE (Regent’s Park and Tower Environmental Experiment) campaign. The height of the tower gives a unique opportunity to study vertical dispersion profiles and transport times in central London. Vertical gradients are contrasted with the relevant Pasquill stability classes. Estimates of lateral advection and vertical mixing times are made and compared with previous measurements. The data are then compared with a simple operational dispersion model and contrasted with data taken in central London as part of the DAPPLE campaign, which correlates dosage with non-dimensionalised distance from source. Such analyses illustrate the feasibility of using these empirical correlations over the prescribed distances in central London.
Abstract:
We make a qualitative and quantitative comparison of numerical simulations of the ash cloud generated by the eruption of Eyjafjallajökull in April 2010 with ground-based lidar measurements at Exeter and Cardington in southern England. The numerical simulations are performed using the Met Office’s dispersion model, NAME (Numerical Atmospheric-dispersion Modelling Environment). The results show that NAME captures many of the features of the observed ash cloud. The comparison enables us to estimate the fraction of material which survives the near-source fallout processes and enters the distal plume. A number of simulations are performed which show that both the structure of the ash cloud over southern England and the concentration of ash within it are particularly sensitive to the height of the eruption column (and the consequent estimated mass emission rate), to the shape of the vertical source profile, and to the level of prescribed ‘turbulent diffusion’ (representing the mixing by unresolved eddies) in the free troposphere, with less sensitivity to the timing of the start of the eruption and to the sedimentation of particulates in the distal plume.
Abstract:
We apply a novel computational approach to assess, for the first time, volcanic ash dispersal during the Campanian Ignimbrite (Italy) super-eruption, providing insights into eruption dynamics and the impact of this gigantic event. The method uses a 3D time-dependent computational ash dispersion model, a set of wind fields, and more than 100 thickness measurements of the CI tephra deposit. Results reveal that the CI eruption dispersed 250–300 km3 of ash over ∼3.7 million km2. The injection of such a large quantity of ash (and volatiles) into the atmosphere would have caused a volcanic winter during Heinrich Event 4, the coldest and driest climatic episode of the Last Glacial period. Fluorine-bearing leachate from the volcanic ash and acid rain would have further affected food sources and severely impacted Late Middle-Early Upper Paleolithic groups in Southern and Eastern Europe.
Abstract:
An urban energy and water balance model is presented which uses a small number of commonly measured meteorological variables and information about the surface cover. Rates of evaporation-interception for a single layer with multiple surface types (paved, buildings, coniferous trees and/or shrubs, deciduous trees and/or shrubs, irrigated grass, non-irrigated grass and water) are calculated. Below each surface type, except water, there is a single soil layer. At each time step the moisture state of each surface is calculated. Horizontal water movements at the surface and in the soil are incorporated. Particular attention is given to the surface conductance used to model evaporation and to its parameters. The model is tested against direct flux measurements carried out over a number of years in Vancouver, Canada, and Los Angeles, USA. At all measurement sites the model is able to simulate the net all-wave radiation and the turbulent sensible and latent heat fluxes well (RMSE = 25–47, 30–64 and 20–56 W m−2, respectively). The model reproduces the diurnal cycle of the turbulent fluxes but typically underestimates the latent heat flux and overestimates the sensible heat flux in the daytime. The model tracks measured surface wetness and simulates the variations in soil moisture content. It is able to respond correctly to short-term events as well as annual changes. The largest uncertainty relates to the determination of surface conductance. The model has the potential to be used for multiple applications; for example, to predict the effects of regulation on urban water use, to evaluate landscaping and planning scenarios, or to assess climate mitigation strategies.
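The per-surface moisture accounting described above can be sketched as a single-store bucket model (one of many surface types): rain fills the store, excess becomes runoff, and evaporation is limited by the water available. The capacity, rain series and potential evaporation rate are invented for illustration, not parameter values from the model above:

```python
# Toy single-surface running water balance: store + runoff + evaporation.
# All numbers are illustrative assumptions.

capacity = 5.0  # mm, maximum surface storage (assumed)
pet = 0.3       # mm per step, potential evaporation (assumed)
rain = [0.0, 2.0, 6.0, 0.0, 0.0, 1.0, 0.0, 0.0]  # mm per step (assumed)

state = 0.0
runoff_total = 0.0
evap_total = 0.0
trace = []
for p in rain:
    state += p
    if state > capacity:             # excess becomes runoff
        runoff_total += state - capacity
        state = capacity
    e = min(pet, state)              # evaporation limited by stored water
    state -= e
    evap_total += e
    trace.append(state)              # moisture state at each time step
```

The full model replaces the fixed `pet` with an evaporation rate controlled by surface conductance and couples several such stores horizontally and to a soil layer, but the mass-balance bookkeeping per time step has this shape.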
Abstract:
During the cold period of the Last Glacial Maximum (LGM, about 21 000 years ago) atmospheric CO2 was around 190 ppm, much lower than the pre-industrial concentration of 280 ppm. The causes of this substantial drop remain partially unresolved, despite intense research. Understanding the origin of reduced atmospheric CO2 during glacial times is crucial to comprehend the evolution of the different carbon reservoirs within the Earth system (atmosphere, terrestrial biosphere and ocean). In this context, the ocean is believed to play a major role as it can store large amounts of carbon, especially in the abyss, a carbon reservoir that is thought to have expanded during glacial times. To create this larger reservoir, one possible mechanism is to produce very dense glacial waters, thereby stratifying the deep ocean and reducing the carbon exchange between the deep and upper ocean. The existence of such very dense waters has been inferred in the LGM deep Atlantic from sediment pore water salinity and δ18O-inferred temperature. Based on these observations, we study the impact of a brine mechanism on the glacial carbon cycle. This mechanism relies on the formation and rapid sinking of brines, very salty water released during sea ice formation, which brings salty dense water down to the bottom of the ocean. It provides two major features: a direct link from the surface to the deep ocean, along with an efficient way of setting up a strong stratification. We show with the CLIMBER-2 carbon-climate model that such a brine mechanism can account for a significant decrease in atmospheric CO2 and contribute to the glacial-interglacial change. This mechanism can be amplified by the low vertical diffusion resulting from the brine-induced stratification.
The modeled glacial distribution of oceanic δ13C as well as the deep ocean salinity are substantially improved and better agree with reconstructions from sediment cores, suggesting that such a mechanism could have played an important role during glacial times.
Abstract:
During April and May 2010 the ash cloud from the eruption of the Icelandic volcano Eyjafjallajökull caused widespread disruption to aviation over northern Europe. The location and impact of the eruption meant that a wealth of observations of the ash cloud was obtained, which can be used to assess the modelling of the long-range transport of ash in the troposphere. The UK FAAM (Facility for Airborne Atmospheric Measurements) BAe-146-301 research aircraft overflew the ash cloud on a number of days during May. The aircraft carries a downward-looking lidar which detected the ash layer through the backscatter of the laser light. In this study ash concentrations derived from the lidar are compared with simulations of the ash cloud made with NAME (Numerical Atmospheric-dispersion Modelling Environment), a general-purpose atmospheric transport and dispersion model. The simulated ash clouds are compared to the lidar data to determine how well NAME simulates the horizontal and vertical structure of the ash clouds. Comparison between the ash concentrations derived from the lidar and those from NAME is used to estimate the fraction of the total erupted tephra that is transported over long distances. In making these comparisons, possible position errors in the simulated ash clouds are identified and accounted for. The ash layers seen by the lidar considered in this study were thin, with typical depths of 550–750 m. The vertical structure of the ash cloud simulated by NAME was generally consistent with the observed ash layers, although the layers in the simulated ash clouds that are identified with observed ash layers are about twice the depth of the observed layers. The structure of the simulated ash clouds was sensitive to the profile of ash emissions that was assumed.
In terms of horizontal and vertical structure, the best results were obtained by assuming that the emission occurred at the top of the eruption plume, consistent with the observed structure of eruption plumes. However, early in the period, when the intensity of the eruption was low, assuming that the emission of ash was uniform with height gives better guidance on the horizontal and vertical structure of the ash cloud. Comparison of the lidar concentrations with those from NAME shows that 2–5% of the total mass erupted by the volcano remained in the ash cloud over the United Kingdom.
Abstract:
Artificial diagenesis of the intra-crystalline proteins isolated from Patella vulgata was induced by isothermal heating at 140 °C, 110 °C and 80 °C. Protein breakdown was quantified for multiple amino acids, measuring the extent of peptide bond hydrolysis, amino acid racemisation and decomposition. The patterns of diagenesis are complex; therefore the kinetic parameters of the main reactions were estimated by two different methods: 1) a well-established approach based on fitting mathematical expressions to the experimental data, e.g. first-order rate equations for hydrolysis and power-transformed first-order rate equations for racemisation; and 2) an alternative model-free approach, which was developed by estimating a “scaling” factor for the independent variable (time) which produces the best alignment of the experimental data. This method allows the calculation of the relative reaction rates for the different temperatures of isothermal heating. High-temperature data were compared with the extent of degradation detected in sub-fossil Patella specimens of known age, and we evaluated the ability of kinetic experiments to mimic diagenesis at burial temperature. The results highlighted a difference between patterns of degradation at low and high temperature and therefore we recommend caution for the extrapolation of protein breakdown rates to low burial temperatures for geochronological purposes when relying solely on kinetic data.
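The first fitting approach described above, first-order rate equations for hydrolysis, can be sketched as follows. The "data" are synthetic points generated with a known rate constant (the times and rate are invented, not measurements from the paper), and the rate is recovered by linearising ln(1 − F) = −kt and fitting through the origin:

```python
import math

# Sketch of a first-order kinetic fit for peptide bond hydrolysis:
# F(t) = 1 - exp(-k t). Synthetic data with an assumed rate constant.

k_true = 0.05                       # 1/h, assumed rate constant
t = [0.0, 10.0, 20.0, 40.0, 80.0]   # heating times, h (assumed)
F = [1.0 - math.exp(-k_true * ti) for ti in t]  # extent of hydrolysis

# Least squares through the origin on y = ln(1 - F) = -k t
pairs = [(ti, math.log(1.0 - Fi)) for ti, Fi in zip(t, F) if Fi < 1.0]
k_est = -sum(ti * yi for ti, yi in pairs) / sum(ti * ti for ti, _ in pairs)
```

Repeating such fits at each heating temperature gives rate constants whose temperature dependence (e.g. via an Arrhenius plot) underlies the extrapolation to burial temperatures that the abstract cautions against.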
Abstract:
The aerosol direct radiative effect (DRE) of African smoke was analyzed in cloud scenes over the southeast Atlantic Ocean, using Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) satellite observations and Hadley Centre Global Environmental Model version 2 (HadGEM2) climate model simulations. The observed mean DRE was about 30–35 W m−2 in August and September 2006–2009. In some years, short episodes of high aerosol DRE can be observed, due to high aerosol loadings, while in other years the loadings are lower but more prolonged. Climate models that use evenly distributed monthly averaged emission fields will not reproduce these high aerosol loadings. Furthermore, the simulated monthly mean aerosol DRE in HadGEM2 is only about 6 W m−2 in August. The difference from the SCIAMACHY mean observations can be partly explained by an underestimation of the aerosol absorption Ångström exponent in the ultraviolet. However, the resulting increase of about 20% in the simulated aerosol DRE is not enough to explain the observed discrepancy between simulations and observations.
Abstract:
Research evaluating perceptual responses to music has identified many structural features as correlates that might be incorporated in computer music systems for affectively charged algorithmic composition and/or expressive music performance. In order to investigate the possible integration of isolated musical features into such a system, a discrete feature known to correlate with emotional responses – rhythmic density – was selected from a literature review and incorporated into a prototype system. This system produces variation in rhythmic density via a transformative process. A stimulus set created using this system was then subjected to a perceptual evaluation. Pairwise comparisons were used to scale differences between 48 stimuli. Listener responses were analysed with multidimensional scaling (MDS). The two-dimensional solution was then rotated to place the stimuli with the largest range of variation across the horizontal plane. Stimuli with variation in rhythmic density were placed further from the source material than stimuli that were generated by random permutation. This, combined with the striking similarity between the MDS solution and the two-dimensional emotional model used by some affective algorithmic composition systems, suggests that isolated musical feature manipulation can now be used to parametrically control affectively charged automated composition in a larger system.
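The step from pairwise comparisons to an MDS configuration can be illustrated with a toy one-dimensional stress minimisation; the dissimilarity values, learning rate and iteration count below are invented, and this is not the MDS routine used in the study:

```python
# Toy MDS sketch: given a small dissimilarity matrix, find 1-D coordinates
# by gradient descent on the raw stress sum((|x_i - x_j| - d_ij)^2).
# The dissimilarities are hypothetical, not the 48-stimulus data.

d = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 1.0}  # hypothetical dissimilarities

x = [0.0, 0.4, 1.1]  # rough initial coordinates (assumed)
lr = 0.05            # learning rate (assumed)
for _ in range(2000):
    grad = [0.0] * len(x)
    for (i, j), dij in d.items():
        diff = x[i] - x[j]
        dist = abs(diff) or 1e-9            # avoid division by zero
        g = 2.0 * (dist - dij) * (diff / dist)
        grad[i] += g
        grad[j] -= g
    x = [xi - lr * gi for xi, gi in zip(x, grad)]

stress = sum((abs(x[i] - x[j]) - dij) ** 2 for (i, j), dij in d.items())
```

Real MDS on 48 stimuli works in two or more dimensions and typically uses a dedicated routine, but the objective, placing points so that inter-point distances match judged dissimilarities, is the same.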