103 results for "Atmospheric aerosol background"
Abstract:
A solution of the lidar equation is discussed that permits combining backscatter and depolarization measurements to quantitatively distinguish two aerosol types with different depolarization properties. The method has been successfully applied to simultaneous observations of volcanic ash and boundary layer aerosol obtained in Exeter, United Kingdom, on 16 and 18 April 2010, permitting the contributions of the two aerosols to be quantified separately. First, a subset of the atmospheric profiles in which the two aerosol types belong to clearly distinguished layers is used to characterize the ash in terms of lidar ratio and depolarization. These quantities are then used in a three-component atmosphere solution scheme of the lidar equation applied to the full data set, in order to compute the optical properties of both aerosol types separately. On 16 April a thin ash layer, 100–400 m deep, is observed (average and maximum estimated ash optical depth: 0.11 and 0.2); it descends from ∼2800 to ∼1400 m altitude over a 6-hour period. On 18 April a double ash layer, ∼400 m deep, is observed just above the morning boundary layer (average and maximum estimated ash optical depth: 0.19 and 0.27). In the afternoon the ash is entrained into the boundary layer, and the latter reaches a depth of ∼1800 m (average and maximum estimated ash optical depth: 0.1 and 0.15). An additional ash layer, with a very small optical depth, was observed on 18 April at an altitude of 3500–4000 m. Converting the lidar optical measurements using estimates of volcanic ash specific extinction, derived from other works, suggests approximate peak ash concentrations of ∼1500 and ∼1000 μg/m3, respectively, on the two observation dates.
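The final step described above, converting an optical measurement into a mass concentration via a specific extinction, can be sketched as follows. The specific extinction value of 0.6 m²/g is an illustrative assumption only; the abstract states that the actual estimates were derived from other works.

```python
# Sketch: convert a lidar-derived ash extinction coefficient to a mass
# concentration. The specific extinction (mass extinction efficiency) k
# is an assumed illustrative value, not the paper's derived figure.

def ash_mass_concentration(extinction_per_m, specific_extinction_m2_per_g=0.6):
    """extinction_per_m: aerosol extinction coefficient in m^-1.
    Returns mass concentration in micrograms per cubic metre."""
    grams_per_m3 = extinction_per_m / specific_extinction_m2_per_g
    return grams_per_m3 * 1e6  # g/m^3 -> ug/m^3

# Example: a 400 m deep layer with optical depth 0.2 has a mean
# extinction of 0.2 / 400 = 5e-4 m^-1.
print(ash_mass_concentration(5e-4))  # ~833 ug/m^3 with k = 0.6 m^2/g
```

With these assumed inputs the layer-mean concentration comes out in the high hundreds of μg/m3, consistent in order of magnitude with the peak values quoted in the abstract.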
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved and does not incur further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble separates into an outlier and a cluster of M−1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed, and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model.
The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests at both the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time-stepping scheme; hence, no retuning of the parameterizations is required. It is found that the accuracy of medium-term forecasts is increased by using the RAW filter.
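The mechanics described above can be sketched in a few lines: within a leapfrog integration, the RAW filter splits the Robert-Asselin displacement between time levels n and n+1, which damps the spurious computational mode while preserving the mean that the classical filter distorts. This is a minimal sketch on a toy decay equation; the coefficients ν and α are illustrative choices, not values from the dissertation.

```python
# Sketch of the Robert-Asselin-Williams (RAW) filter inside a leapfrog
# integration of dx/dt = f(x). alpha = 1 recovers the classical
# Robert-Asselin filter; alpha ~ 0.53 is a commonly quoted choice.
import math

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    x_prev = x0
    x_curr = x0 + dt * f(x0)                     # first step: forward Euler
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)   # leapfrog step
        # RAW filter: share the displacement d between levels n and n+1,
        # suppressing the computational mode without shifting the mean.
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)
        x_prev = x_curr + alpha * d
        x_curr = x_next + (alpha - 1.0) * d
    return x_curr

# Toy test on dx/dt = -x, whose exact solution at t = 1 is exp(-1).
approx = leapfrog_raw(lambda x: -x, 1.0, 0.01, 100)
print(approx, math.exp(-1.0))
```

For this decay equation the unfiltered leapfrog computational mode would grow; the filtered integration stays close to the exact exponential.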
Abstract:
Observational evidence indicates significant regional trends in solar radiation at the surface in both all-sky and cloud-free conditions. Negative trends in the downwelling solar surface irradiance (SSI) have become known as ‘dimming’, while positive trends have become known as ‘brightening’. We use the Met Office Hadley Centre HadGEM2 climate model to model trends in cloud-free and total SSI from the pre-industrial period to the present day and compare these against observations. Simulations driven by CMIP5 emissions are used to model the future trends in dimming/brightening up to the year 2100. The modeled trends are reasonably consistent with observed regional trends in dimming and brightening, which are due to changes in concentrations of anthropogenic aerosols and, potentially, changes in cloud cover owing to aerosol indirect effects and/or cloud feedback mechanisms. The future dimming/brightening in cloud-free SSI is not only caused by changes in anthropogenic aerosols: aerosol impacts are overwhelmed by a large dimming caused by increases in water vapor. There is little trend in the total SSI, as cloud cover decreases in the climate model used here and compensates for the effect of the change in water vapor. In terms of the surface energy balance, these trends in SSI are more than compensated by the increase in the downwelling terrestrial irradiance from increased water vapor concentrations. However, the study shows that while water vapor is widely appreciated as a greenhouse gas, its impact on the atmospheric transmission of solar radiation, and hence on future global dimming/brightening, should not be overlooked.
Abstract:
BACKGROUND: We examined the role of aerosol transmission of influenza in an acute ward setting. METHODS: We investigated a seasonal influenza A outbreak that occurred in our general medical ward (with an open bay ward layout) in 2008. Clinical and epidemiological information was collected in real time during the outbreak. Spatiotemporal analysis was performed to estimate the infection risk among patients. Airflow measurements were conducted, and concentrations of hypothetical virus-laden aerosols at different ward locations were estimated using computational fluid dynamics modeling. RESULTS: Nine inpatients were infected with an identical strain of influenza A/H3N2 virus. With reference to the index patient's location, the attack rate was 20.0% and 22.2% in the "same" and "adjacent" bays, respectively, but 0% in the "distant" bay (P = .04). Temporally, the risk of being infected was highest on the day when noninvasive ventilation was used in the index patient; multivariate logistic regression revealed an odds ratio of 14.9 (95% confidence interval, 1.7–131.3; P = .015). A simultaneous, directional indoor airflow blowing from the "same" bay toward the "adjacent" bay was found; it was inadvertently created by an unopposed air jet from a separate air purifier placed next to the index patient's bed. Computational fluid dynamics modeling revealed that the dispersal pattern of aerosols originating from the index patient coincided with the bed locations of the affected patients. CONCLUSIONS: Our findings suggest a possible role of aerosol transmission of influenza in an acute ward setting. Source and engineering controls, such as avoiding aerosol generation and improving ventilation design, may warrant consideration to prevent nosocomial outbreaks.
Abstract:
The global temperature response to increasing atmospheric CO2 is often quantified by metrics such as equilibrium climate sensitivity and transient climate response [1]. These approaches, however, do not account for carbon cycle feedbacks and therefore do not fully represent the net response of the Earth system to anthropogenic CO2 emissions. Climate–carbon modelling experiments have shown that: (1) the warming per unit CO2 emitted does not depend on the background CO2 concentration [2]; (2) the total allowable emissions for climate stabilization do not depend on the timing of those emissions [3–5]; and (3) the temperature response to a pulse of CO2 is approximately constant on timescales of decades to centuries [3, 6–8]. Here we generalize these results and show that the carbon–climate response (CCR), defined as the ratio of temperature change to cumulative carbon emissions, is approximately independent of both the atmospheric CO2 concentration and its rate of change on these timescales. From observational constraints, we estimate CCR to be in the range 1.0–2.1 °C per trillion tonnes of carbon (Tt C) emitted (5th to 95th percentiles), consistent with twenty-first-century CCR values simulated by climate–carbon models. Uncertainty in land-use CO2 emissions and aerosol forcing, however, means that higher observationally constrained values cannot be excluded. The CCR, when evaluated from climate–carbon models under idealized conditions, represents a simple yet robust metric for comparing models, which aggregates both climate feedbacks and carbon cycle feedbacks. CCR is also likely to be a useful concept for climate change mitigation and policy; by combining the uncertainties associated with climate sensitivity, carbon sinks and climate–carbon feedbacks into a single quantity, the CCR allows CO2-induced global mean temperature change to be inferred directly from cumulative carbon emissions.
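The definition above implies a simple linear relation, ΔT = CCR × E_cum, where E_cum is cumulative carbon emitted. The sketch below applies the observationally constrained range quoted in the abstract to a hypothetical 1 Tt C of cumulative emissions; the emissions figure is purely illustrative.

```python
# The carbon-climate response (CCR) implies dT = CCR * cumulative emissions.
# The CCR range below is the 5th-95th percentile quoted in the abstract;
# the emissions total is an illustrative assumption.

def warming_from_emissions(cumulative_ttc, ccr_per_ttc):
    """Temperature change (deg C) for cumulative emissions in Tt C."""
    return ccr_per_ttc * cumulative_ttc

ccr_low, ccr_high = 1.0, 2.1   # deg C per trillion tonnes of carbon
emissions = 1.0                # hypothetical cumulative emissions, Tt C
print(warming_from_emissions(emissions, ccr_low),
      warming_from_emissions(emissions, ccr_high))  # 1.0 to 2.1 deg C
```

The usefulness of the metric is precisely this linearity: the warming estimate does not depend on when the carbon is emitted or on the background concentration.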
Abstract:
Three prominent quasi-global patterns of variability and change are observed using the Met Office's sea surface temperature (SST) analysis and almost independent night marine air temperature analysis. The first is a global warming signal that is very highly correlated with global mean SST. The second is a decadal to multidecadal fluctuation with some geographical similarity to the El Niño–Southern Oscillation (ENSO). It is associated with the Pacific Decadal Oscillation (PDO), and its Pacific-wide manifestation has been termed the Interdecadal Pacific Oscillation (IPO). We present model investigations of the relationship between the IPO and ENSO. The third mode is an interhemispheric variation on multidecadal timescales which, in view of climate model experiments, is likely to be at least partly due to natural variations in the thermohaline circulation. Observed climatic impacts of this mode also appear in model simulations. Smaller-scale, regional atmospheric phenomena also affect climate on decadal to interdecadal timescales. We concentrate on one such mode, the winter North Atlantic Oscillation (NAO). This shows strong decadal to interdecadal variability and a correspondingly strong influence on surface climate variability which is largely additional to the effects of recent regional anthropogenic climate change. The winter NAO is likely influenced by both SST forcing and stratospheric variability. A full understanding of decadal changes in the NAO and European winter climate may require a detailed representation of the stratosphere that is hitherto missing in the major climate models used to study climate change.
Abstract:
We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing.
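The S/N construction described above can be illustrated with synthetic data: project observed fields onto a fingerprint pattern, take the trend of the projection time series as the signal, and compare it with the spread of trends obtained from unforced control segments. Everything below is synthetic and schematic, not the paper's data or exact algorithm.

```python
# Schematic fingerprint S/N test on synthetic data.
import random
random.seed(1)

def project(field, fingerprint):
    # Uncentred projection (dot product) of a field onto a pattern
    return sum(f * g for f, g in zip(field, fingerprint))

def trend(series):
    # Least-squares slope of a series against its time index
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series))
    den = sum((t - t_mean) ** 2 for t in range(n))
    return num / den

fingerprint = [1.0, 0.8, 0.5, 0.2]        # synthetic spatial warming pattern
years = 30
obs = [[0.02 * y * f + random.gauss(0, 0.1) for f in fingerprint]
       for y in range(years)]             # forced signal plus noise
signal = trend([project(field, fingerprint) for field in obs])

# Noise: trends of projections in unforced (signal-free) control segments
noise_trends = []
for _ in range(200):
    ctrl = [[random.gauss(0, 0.1) for _ in fingerprint] for _ in range(years)]
    noise_trends.append(trend([project(f, fingerprint) for f in ctrl]))
noise_std = (sum(x * x for x in noise_trends) / len(noise_trends)) ** 0.5

sn = signal / noise_std
print(sn)  # well above the noise level for this synthetic forced signal
```

A large S/N, as here, means the forced pattern's trend stands far outside what internal variability alone produces, which is the logic behind the stratospheric and tropospheric ratios quoted in the abstract.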
Abstract:
Wave-activity conservation laws are key to understanding wave propagation in inhomogeneous environments. Their most general formulation follows from the Hamiltonian structure of geophysical fluid dynamics. For large-scale atmospheric dynamics, the Eliassen–Palm wave activity is a well-known example and is central to theoretical analysis. On the mesoscale, while such conservation laws have been worked out in two dimensions, their application to a horizontally homogeneous background flow in three dimensions fails because of a degeneracy created by the absence of a background potential vorticity gradient. Earlier three-dimensional results based on linear WKB theory considered only Doppler-shifted gravity waves, not waves in a stratified shear flow. Consideration of a background flow depending only on altitude is motivated by the parameterization of subgrid scales in climate models, where there is an imposed separation of horizontal length and time scales, but vertical coupling within each column. Here we show how this degeneracy can be overcome and wave-activity conservation laws derived for three-dimensional disturbances to a horizontally homogeneous background flow. Explicit expressions for pseudoenergy and pseudomomentum in the anelastic and Boussinesq models are derived, and it is shown how the previously derived relations for the two-dimensional problem can be treated as a limiting case of the three-dimensional problem. The results also generalize earlier three-dimensional results in that there is no slowly varying WKB-type requirement on the background flow, and the results are extendable to finite amplitude. The relationship A_E = c A_P between pseudoenergy A_E and pseudomomentum A_P, where c is the horizontal phase speed in the direction of symmetry associated with A_P, has important applications to gravity-wave parameterization and provides a generalized statement of the first Eliassen–Palm theorem.
Abstract:
Charged aerosol particles and water droplets are abundant throughout the lower atmosphere, and may influence interactions between small cloud droplets. This note describes a small, disposable sensor for the measurement of charge in non-thunderstorm cloud, which is an improvement of an earlier sensor [K. A. Nicoll and R. G. Harrison, Rev. Sci. Instrum. 80, 014501 (2009)]. The sensor utilizes a self-calibrating current measurement method. It is designed for use on a free balloon platform alongside a standard meteorological radiosonde, measuring currents from 2 fA to 15 pA and is stable to within 5 fA over a temperature range of 5 °C to −60 °C. During a balloon flight with the charge sensor through a stratocumulus cloud, charge layers up to 40 pC m−3 were detected on the cloud edges.
Abstract:
Atmospheric aerosols are now actively studied, in particular because of their radiative and climate impacts. Estimations of the direct aerosol radiative perturbation, caused by extinction of incident solar radiation, usually rely on radiative transfer codes and involve simplifying hypotheses. This paper addresses two approximations which are widely used for the sake of simplicity and to limit the computational cost of the calculations. Firstly, it is shown that using a Lambertian albedo instead of the more rigorous bidirectional reflectance distribution function (BRDF) to model the ocean surface radiative properties leads to large relative errors in the instantaneous aerosol radiative perturbation. When averaging over the day, these errors cancel out to acceptable levels of less than 3% (except in the northern hemisphere winter). The second aim of this study is to address aerosol non-sphericity effects. Comparing an experimental phase function with an equivalent Mie-calculated phase function, we found acceptable relative errors if the aerosol radiative perturbation calculated for a given optical thickness is daily averaged. However, retrieval of the optical thickness of non-spherical aerosols assuming spherical particles can lead to significant errors. This is due to significant differences between the spherical and non-spherical phase functions. Discrepancies in aerosol radiative perturbation between the spherical and non-spherical cases are sometimes reduced and sometimes enhanced if the aerosol optical thickness for the spherical case is adjusted to fit the simulated radiance of the non-spherical case.
Abstract:
Atmospheric aerosols cause scattering and absorption of incoming solar radiation. Additional anthropogenic aerosols released into the atmosphere thus exert a direct radiative forcing on the climate system [1]. The degree of present-day aerosol forcing is estimated from global models that incorporate a representation of the aerosol cycles [1–3]. Although the models are compared and validated against observations, these estimates remain uncertain. Previous satellite measurements of the direct effect of aerosols contained limited information about aerosol type, and were confined to oceans only [4,5]. Here we use state-of-the-art satellite-based measurements of aerosols [6–8] and surface wind speed [9] to estimate the clear-sky direct radiative forcing for 2002, incorporating measurements over land and ocean. We use a Monte Carlo approach to account for uncertainties in aerosol measurements and in the algorithm used. Probability density functions obtained for the direct radiative forcing at the top of the atmosphere give a clear-sky, global, annual average of −1.9 W m−2 with a standard deviation of ±0.3 W m−2. These results suggest that present-day direct radiative forcing is stronger than present model estimates, implying future atmospheric warming greater than is presently predicted as aerosol emissions continue to decline [10].
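The Monte Carlo idea mentioned above can be sketched as follows: perturb the inputs of a forcing calculation within their uncertainties and build a probability density function for the result. The toy forcing model and uncertainty magnitudes below are illustrative assumptions, not the paper's actual retrieval algorithm or values.

```python
# Sketch of Monte Carlo uncertainty propagation for a radiative forcing
# estimate. The linear forcing model and the Gaussian uncertainties are
# illustrative assumptions only.
import random

def forcing_sample(aod, efficiency):
    # Toy model: forcing proportional to aerosol optical depth (AOD)
    return efficiency * aod

random.seed(0)
samples = []
for _ in range(10000):
    aod = random.gauss(0.12, 0.02)   # AOD with assumed measurement uncertainty
    eff = random.gauss(-16.0, 2.0)   # assumed forcing efficiency, W m^-2 per AOD
    samples.append(forcing_sample(aod, eff))

mean = sum(samples) / len(samples)
std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(mean, std)  # roughly -1.9 +/- 0.4 for these assumed inputs
```

The output of such a procedure is not a single number but a distribution, from which the central estimate and standard deviation quoted in the abstract can be read off.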
Abstract:
The European Centre for Medium-Range Weather Forecasts (ECMWF) provides an aerosol reanalysis starting from year 2003 for the Monitoring Atmospheric Composition and Climate (MACC) project. The reanalysis assimilates total aerosol optical depth retrieved by the Moderate Resolution Imaging Spectroradiometer (MODIS) to correct for model departures from observed aerosols. The reanalysis therefore combines satellite retrievals with the full spatial coverage of a numerical model. Reanalysed products are used here to estimate the shortwave direct and first indirect radiative forcing of anthropogenic aerosols over the period 2003–2010, using methods previously applied to satellite retrievals of aerosols and clouds. The best estimate of globally-averaged, all-sky direct radiative forcing is −0.7 ± 0.3 W m−2. The standard deviation is obtained by a Monte Carlo analysis of uncertainties, which accounts for uncertainties in the aerosol anthropogenic fraction, aerosol absorption, and cloudy-sky effects. Further accounting for differences between the present-day natural and pre-industrial aerosols provides a direct radiative forcing estimate of −0.4 ± 0.3 W m−2. The best estimate of globally-averaged, all-sky first indirect radiative forcing is −0.6 ± 0.4 W m−2. Its standard deviation accounts for uncertainties in the aerosol anthropogenic fraction, and in cloud albedo and cloud droplet number concentration susceptibilities to aerosol changes. The distribution of first indirect radiative forcing is asymmetric and is bounded by −0.1 and −2.0 W m−2. In order to decrease uncertainty ranges, better observational constraints on aerosol absorption and on the sensitivity of cloud droplet number concentrations to aerosol changes are required.
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally-mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol reanalysis and aerosol ground-based observations. Because of differences in wet deposition rates, GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosols, whereas black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC.
This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. Those findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
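The relative figures quoted above can be turned back into the implied CLASSIC values with simple arithmetic. Note the assumption here: "72% stronger" and "20% weaker" are read as ratios of forcing magnitudes, which may not be exactly how the paper defines them.

```python
# Back out the implied CLASSIC forcings from the GLOMAP-mode values and
# the quoted percentage differences. Interpreting the percentages as
# magnitude ratios is an assumption made for illustration.
glomap_direct = -0.49                            # W m^-2
classic_direct = glomap_direct / 1.72            # "72% stronger" -> ~ -0.28
glomap_indirect = -1.17                          # W m^-2
classic_indirect = glomap_indirect / (1 - 0.20)  # "20% weaker" -> ~ -1.46
print(round(classic_direct, 2), round(classic_indirect, 2))
```

Under this reading, the weaker direct forcing of GLOMAP-mode's counterpart scheme is partly offset by its stronger indirect forcing, which is the compensation the abstract describes.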
Abstract:
Black carbon aerosol plays a unique and important role in Earth's climate system. Black carbon is a type of carbonaceous material with a unique combination of physical properties. This assessment provides an evaluation of black-carbon climate forcing that is comprehensive in its inclusion of all known and relevant processes and that is quantitative in providing best estimates and uncertainties of the main forcing terms: direct solar absorption; influence on liquid, mixed phase, and ice clouds; and deposition on snow and ice. These effects are calculated with climate models, but when possible, they are evaluated with both microphysical measurements and field observations. Predominant sources are combustion related, namely, fossil fuels for transportation, solid fuels for industrial and residential uses, and open burning of biomass. Total global emissions of black carbon using bottom-up inventory methods are 7500 Gg yr−1 in the year 2000 with an uncertainty range of 2000 to 29000. However, global atmospheric absorption attributable to black carbon is too low in many models and should be increased by a factor of almost 3. After this scaling, the best estimate for the industrial-era (1750 to 2005) direct radiative forcing of atmospheric black carbon is +0.71 W m−2 with 90% uncertainty bounds of (+0.08, +1.27) W m−2. Total direct forcing by all black carbon sources, without subtracting the preindustrial background, is estimated as +0.88 (+0.17, +1.48) W m−2. Direct radiative forcing alone does not capture important rapid adjustment mechanisms. A framework is described and used for quantifying climate forcings, including rapid adjustments. The best estimate of industrial-era climate forcing of black carbon through all forcing mechanisms, including clouds and cryosphere forcing, is +1.1 W m−2 with 90% uncertainty bounds of +0.17 to +2.1 W m−2.
Thus, there is a very high probability that black carbon emissions, independent of co-emitted species, have a positive forcing and warm the climate. We estimate that black carbon, with a total climate forcing of +1.1 W m−2, is the second most important human emission in terms of its climate forcing in the present-day atmosphere; only carbon dioxide is estimated to have a greater forcing. Sources that emit black carbon also emit other short-lived species that may either cool or warm climate. Climate forcings from co-emitted species are estimated and used in the framework described herein. When the principal effects of short-lived co-emissions, including cooling agents such as sulfur dioxide, are included in net forcing, energy-related sources (fossil fuel and biofuel) have an industrial-era climate forcing of +0.22 (−0.50 to +1.08) W m−2 during the first year after emission. For a few of these sources, such as diesel engines and possibly residential biofuels, warming is strong enough that eliminating all short-lived emissions from these sources would reduce net climate forcing (i.e., produce cooling). When open burning emissions, which emit high levels of organic matter, are included in the total, the best estimate of net industrial-era climate forcing by all short-lived species from black-carbon-rich sources becomes slightly negative (−0.06 W m−2 with 90% uncertainty bounds of −1.45 to +1.29 W m−2). The uncertainties in net climate forcing from black-carbon-rich sources are substantial, largely due to lack of knowledge about cloud interactions with both black carbon and co-emitted organic carbon. In prioritizing potential black-carbon mitigation actions, non-science factors, such as technical feasibility, costs, policy design, and implementation feasibility, play important roles. The major sources of black carbon are presently in different stages with regard to the feasibility for near-term mitigation.
This assessment, by evaluating the large number and complexity of the associated physical and radiative processes in black-carbon climate forcing, sets a baseline from which to improve future climate forcing estimates.