968 results for ATMOSPHERIC NUCLEATION
Abstract:
An underestimate of atmospheric blocking occurrence is a well-known limitation of many climate models. This article presents an analysis of Northern Hemisphere winter blocking in an atmospheric model with increased horizontal resolution. European blocking frequency increases with model resolution, and this results from an improvement in the atmospheric patterns of variability as well as from a simple improvement in the mean state. There is some evidence that the transient eddy momentum forcing of European blocks is increased at high resolution, which could account for this. However, it is also shown that the increase in resolution of the orography is needed to realise the improvement in blocking, consistent with the increase in height of the Rocky Mountains acting to increase the tilt of the Atlantic jet stream and giving higher mean geopotential heights over northern Europe. Blocking frequencies in the Pacific sector are also increased with atmospheric resolution, but in this case the improvement in orography actually leads to a decrease in blocking.
Abstract:
Evidence is presented, based on an ensemble of climate change scenarios performed with a global general circulation model of the atmosphere with high horizontal resolution over Europe, to suggest that the end-of-century anthropogenic climate change over the North Atlantic--European region strongly projects onto the positive phase of the North Atlantic Oscillation during wintertime. It is reflected in a doubling of the residence frequency of the climate system in the associated circulation regime, in agreement with the nonlinear climate perspective. The strong increase in the amplitude of the response, compared to coarse-resolution coupled model studies, suggests that improved model representation of regional climate is needed to achieve more reliable projections of anthropogenic climate change on European climate.
Abstract:
Variations in the atmospheric carbon-14 to carbon-12 ratio (Δ14Catm) provide a powerful diagnostic for elucidating the timing and nature of geophysical and anthropological change. The (Atlantic) marine archive suggests a rapid Δ14Catm increase of 50‰ at the onset of the Younger Dryas (YD) cold reversal (12.9–11.7 kyr BP), which has not yet been satisfactorily explained, in terms of either magnitude or causal mechanism, as a change in ocean ventilation or production rate. Using Earth-system model simulations and a comparison of marine-based radiocarbon records from different ocean basins, we demonstrate that the YD Δ14Catm increase is smaller than suggested by the marine archive. This is due to changes in reservoir age, predominantly caused by reduced ocean ventilation.
Abstract:
There are significant discrepancies between observational datasets of Arctic sea ice concentrations covering the last three decades, which result in differences of over 20% in Arctic summer sea ice extent/area and 5%–10% in winter. Previous modeling studies have shown that idealized sea ice anomalies have the potential for making a substantial impact on climate. In this paper, this theory is further developed by performing a set of simulations using the atmospheric component of the third Hadley Centre coupled model (HadAM3). The model was driven with monthly climatologies of sea ice fractions derived from three of these records to investigate potential implications of sea ice inaccuracies for climate simulations. The standard sea ice climatology from the Met Office provided a control. This study focuses on the effects of actual inaccuracies of concentration retrievals, which vary spatially and are larger in summer than winter. The smaller sea ice discrepancies in winter have a much larger influence on climate than the much greater summer sea ice differences. High sensitivity to sea ice prescription was observed, even though no SST feedbacks were included. Significant effects on surface fields were observed in the Arctic, North Atlantic, and North Pacific. Arctic average surface air temperature anomalies in winter vary by 2.5°C, and locally exceed 12°C. Arctic mean sea level pressure varies by up to 5 mb locally. Anomalies extend to 45°N over North America and Eurasia but not to lower latitudes, with limited changes in circulation above the boundary layer. No statistically significant impact on climate variability was simulated, in terms of the North Atlantic Oscillation. Results suggest that the uncertainty in summer sea ice prescription is not critical but that winter values require greater accuracy, with the caveat that ocean–sea ice feedbacks were not included in this study.
Abstract:
The Antarctic continental shelf seas feature a bimodal distribution of water mass temperature, with the Amundsen and Bellingshausen Seas flooded by Circumpolar Deep Water that is several degrees Celsius warmer than the cold shelf waters prevalent in the Weddell and Ross Seas. This bimodal distribution could be caused by differences in atmospheric forcing, ocean dynamics, ocean and ice feedbacks, or some combination of these factors. In this study, a highly simplified coupled sea ice–mixed layer model is developed to investigate the physical processes controlling this situation. Under regional atmospheric forcings and parameter choices the 10-yr simulations demonstrate a complete destratification of the Weddell Sea water column in winter, forming cold, relatively saline shelf waters, while the Amundsen Sea winter mixed layer remains shallower, allowing a layer of deep warm water to persist. Applying the Weddell atmospheric forcing to the Amundsen Sea model destratifies the water column after two years, and applying the Amundsen forcing to the Weddell Sea model results in a shallower steady-state winter mixed layer that no longer destratifies the water column. This suggests that the regional difference in atmospheric forcings alone is sufficient to account for the bimodal distribution in Antarctic shelf-sea temperatures. The model prediction of mixed layer depth is most sensitive to the air temperature forcing, but a switch in all forcings is required to prevent destratification of the Weddell Sea water column.
Abstract:
The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally-mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosol, whereas black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m−2 on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m−2 obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
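For orientation, a rough reading of the quoted percentages (an inference from the abstract, not figures stated in it): a direct forcing 72% stronger than CLASSIC implies a CLASSIC direct forcing of about −0.49/1.72 ≈ −0.28 W m−2, while a first indirect forcing 20% weaker than CLASSIC implies a CLASSIC value of about −1.17/0.80 ≈ −1.46 W m−2.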
Abstract:
Aerosol indirect effects continue to constitute one of the most important uncertainties for anthropogenic climate perturbations. Within the international AEROCOM initiative, the representation of aerosol-cloud-radiation interactions in ten different general circulation models (GCMs) is evaluated using three satellite datasets. The focus is on stratiform liquid water clouds since most GCMs do not include ice nucleation effects, and none of the models explicitly parameterises aerosol effects on convective clouds. We compute statistical relationships between aerosol optical depth (τa) and various cloud and radiation quantities in a manner that is consistent between the models and the satellite data. It is found that the model-simulated influence of aerosols on cloud droplet number concentration (Nd) compares relatively well to the satellite data, at least over the ocean. The relationship between τa and liquid water path is simulated much too strongly by the models. This suggests that the implementation of the second aerosol indirect effect, mainly in terms of an autoconversion parameterisation, has to be revisited in the GCMs. A positive relationship between total cloud fraction (fcld) and τa, as found in the satellite data, is simulated by the majority of the models, albeit less strongly than in the satellite data in most of them. In a discussion of the hypotheses proposed in the literature to explain the satellite-derived strong fcld–τa relationship, our results indicate that none can be identified as a unique explanation. Relationships similar to the ones found in satellite data between τa and cloud top temperature or outgoing long-wave radiation (OLR) are simulated by only a few GCMs. The GCMs that simulate a negative OLR–τa relationship show a strong positive correlation between τa and fcld. The short-wave total aerosol radiative forcing as simulated by the GCMs is strongly influenced by the simulated anthropogenic fraction of τa, and by parameterisation assumptions such as a lower bound on Nd. Nevertheless, the strengths of the statistical relationships are good predictors for the aerosol forcings in the models. An estimate of the total short-wave aerosol forcing, inferred from the combination of these predictors for the modelled forcings with the satellite-derived statistical relationships, yields a global annual mean value of −1.5 ± 0.5 W m−2. In an alternative approach, the radiative flux perturbation due to anthropogenic aerosols can be broken down into a component over the cloud-free portion of the globe (approximately the aerosol direct effect) and a component over the cloudy portion of the globe (approximately the aerosol indirect effect). An estimate obtained by scaling these simulated clear- and cloudy-sky forcings with estimates of anthropogenic τa and satellite-retrieved Nd–τa regression slopes, respectively, yields a global, annual-mean aerosol direct effect estimate of −0.4 ± 0.2 W m−2 and a cloudy-sky (aerosol indirect effect) estimate of −0.7 ± 0.5 W m−2, with a total estimate of −1.2 ± 0.4 W m−2.
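A minimal sketch of the kind of statistic described above, assuming collocated samples of aerosol optical depth and cloud droplet number concentration; the function name, the use of numpy.polyfit, and the synthetic data are illustrative assumptions, not the AEROCOM diagnostic code.

import numpy as np

def nd_tau_slope(tau_a, n_d):
    """Return the regression slope d ln(Nd) / d ln(tau_a) from collocated samples."""
    tau_a, n_d = np.asarray(tau_a, float), np.asarray(n_d, float)
    valid = (tau_a > 0) & (n_d > 0)              # logarithms need positive values
    slope, _intercept = np.polyfit(np.log(tau_a[valid]), np.log(n_d[valid]), 1)
    return slope

# Synthetic usage: Nd varying roughly as tau_a**0.5 with lognormal noise.
rng = np.random.default_rng(0)
tau = rng.uniform(0.05, 0.8, 1000)
nd = 80.0 * tau**0.5 * rng.lognormal(0.0, 0.2, 1000)
print(nd_tau_slope(tau, nd))                     # close to 0.5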
Abstract:
As a part of the Atmospheric Model Intercomparison Project (AMIP), the behaviour of 15 general circulation models has been analysed in order to diagnose and compare the ability of the different models in simulating Northern Hemisphere midlatitude atmospheric blocking. In accordance with the established AMIP procedure, the 10-year model integrations were performed using prescribed, time-evolving monthly mean observed SSTs spanning the period January 1979–December 1988. Atmospheric observational data (ECMWF analyses) over the same period have also been used to verify the models' results. The models involved in this comparison represent a wide spectrum of model complexity, with different horizontal and vertical resolution, numerical techniques and physical parametrizations, and exhibit large differences in blocking behaviour. Nevertheless, a few common features can be found, such as the general tendency to underestimate both blocking frequency and the average duration of blocks. The possible relationship between model blocking and model systematic errors has also been assessed, although without resorting to ad-hoc numerical experimentation it is impossible to relate with certainty particular model deficiencies in representing blocking to precise parts of the model formulation.
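The abstract does not state which blocking diagnostic was applied, so the following is only an illustrative sketch of a Tibaldi–Molteni (1990) style index commonly used for Northern Hemisphere blocking: a longitude is flagged as instantaneously blocked when the meridional 500 hPa geopotential height gradients reverse around 60°N. The latitudes, thresholds and offsets below are the conventional choices, not values taken from this study.

import numpy as np

def blocked_longitudes(z500, lats, deltas=(-4.0, 0.0, 4.0)):
    """z500: array (nlat, nlon) of daily 500 hPa geopotential height [m].
    Returns a boolean array over longitude flagging instantaneous blocking."""
    def z_at(lat):
        return z500[np.argmin(np.abs(lats - lat)), :]          # nearest grid latitude
    blocked = np.zeros(z500.shape[1], dtype=bool)
    for d in deltas:                                            # let the pattern shift in latitude
        phi_n, phi_0, phi_s = 80.0 + d, 60.0 + d, 40.0 + d
        ghgs = (z_at(phi_0) - z_at(phi_s)) / (phi_0 - phi_s)    # southern gradient [m per deg lat]
        ghgn = (z_at(phi_n) - z_at(phi_0)) / (phi_n - phi_0)    # northern gradient [m per deg lat]
        blocked |= (ghgs > 0.0) & (ghgn < -10.0)
    return blocked

Blocking frequency at each longitude is then the fraction of days flagged, usually with an additional persistence requirement before an episode counts as a block.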
Abstract:
Some climatological information from 14 atmospheric general circulation models is presented and compared in order to assess the ability of a broad group of models to simulate current climate. The quantities considered are cross sections of temperature, zonal wind, and meridional stream function together with latitudinal distributions of mean sea level pressure and precipitation rate. The nature of the deficiencies in the simulated climates that are common to all models and those which differ among models is investigated; the general improvement in the ability of models to simulate certain aspects of the climate is shown; consideration is given to the effect of increasing resolution on simulated climate; and approaches to understanding and reducing model deficiencies are discussed. The information presented here is a subset of a more voluminous compilation which is available in report form (Boer et al., 1991). This report contains essentially the same text, but results from all 14 models are presented together with additional results in the form of geographical distributions of surface variables and certain difference statistics.
Abstract:
Climatological information from fourteen atmospheric general circulation models is presented and compared in order to assess the ability of a broad group of models to simulate current climate. The quantities considered are cross sections of temperature, zonal wind and meridional stream function, together with latitudinal distributions of mean sea-level pressure and precipitation rate. The nature of the deficiencies in the simulated climates that are common to all models and those which differ among models is investigated; the general improvement in the ability of models to simulate certain aspects of the climate is shown; consideration is given to the effect of increasing resolution on simulated climate; and approaches to the understanding and reduction of model deficiencies are discussed.
Abstract:
Urban land surface models (LSMs) are commonly evaluated for short periods (a few weeks to months) because of limited observational data. This makes it difficult to distinguish the impact of initial conditions on model performance or to consider the response of a model to a range of possible atmospheric conditions. Drawing on results from the first urban LSM comparison, these two issues are considered. Assessment shows that the initial soil moisture has a substantial impact on performance. Models initialised with soils that are too dry are not able to adjust their surface sensible and latent heat fluxes to realistic values until there is sufficient rainfall. Models initialised with soils that are too wet are not able to restrict their evaporation appropriately for periods in excess of a year. This has implications for short-term evaluation studies and implies the need for soil moisture measurements to improve data assimilation and model initialisation. In contrast, initial conditions influencing the thermal storage have a much shorter adjustment timescale than soil moisture. Most models partition too much of the radiative energy at the surface into the sensible heat flux, at the probable expense of the net storage heat flux.
Abstract:
A plasma source, sustained by the application of a floating high voltage (±15 kV) to parallel-plate electrodes at 50 Hz, has been achieved in a helium/air mixture at atmospheric pressure (P = 10⁵ Pa) contained in a zip-locked plastic package placed in the electrode gap. Some of the physical and antimicrobial properties of this apparatus were established with a view to ascertaining its performance as a prototype for the disinfection of fresh produce. The current–voltage (I–V) and charge–voltage (Q–V) characteristics of the system were measured as a function of gap distance d, in the range 3 × 10³ Pa m ≤ Pd ≤ 1.0 × 10⁴ Pa m. The electrical measurements showed this plasma source to exhibit the characteristic behaviour of a dielectric barrier discharge in the filamentary mode, and its properties could be accurately interpreted by the two-capacitance-in-series model. The power consumed by the discharge and the reduced field strength were found to decrease quadratically from 12.0 W to 4.5 W and linearly from 140 Td to 50 Td, respectively, over the range studied. Emission spectra of the discharge were recorded on a relative intensity scale, and the dominant spectral features could be assigned to strong vibrational bands in the 2+ and 1− systems of N2 and N2+, respectively, with other weak signatures from the NO and OH radicals and the N+, He and O atomic species. Absolute spectral intensities were also recorded and interpreted by comparison with the non-equilibrium synthetic spectra generated by the computer code SPECAIR. At an inter-electrode gap of 0.04 m, this comparison yielded typical values for the electron, vibrational and translational (gas) temperatures of (4980 ± 100) K, (2700 ± 200) K and (300 ± 100) K, respectively, and an electron density of 1.0 × 10¹⁷ m⁻³. A Boltzmann plot also provided a value of (3200 ± 200) K for the vibrational temperature. The antimicrobial efficacy was assessed by studying the resistance of both Escherichia coli K12 and its isogenic mutants in soxR, soxS, oxyR, rpoS and dnaK (selected to identify possible cellular responses and targets) to a 5 min exposure to the active gas in proximity of, but not directly in, the path of the discharge filaments. Both the parent strain and mutant populations were significantly reduced by more than 1.5 log cycles under these conditions, showing the potential of the system. Post-treatment storage studies showed that some transcription regulators and specific genes related to oxidative stress play an important role in the E. coli repair mechanism and that plasma exposure affects specific cell regulator systems.
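Two of the quantities the electrical analysis rests on can be sketched generically: the equivalent capacitance of the two-capacitance-in-series model, and the mean discharge power obtained as the area of the Q–V (Lissajous) loop times the driving frequency. This is an illustration only; no symbols, signal shapes or values are taken from the paper.

import numpy as np

def series_capacitance(c_dielectric, c_gap):
    """Equivalent capacitance of the dielectric barrier and the gas gap in series."""
    return c_dielectric * c_gap / (c_dielectric + c_gap)

def discharge_power(voltage, charge, frequency_hz=50.0):
    """Mean discharge power from one Q-V cycle: loop area (shoelace formula) times frequency."""
    v, q = np.asarray(voltage, float), np.asarray(charge, float)
    area = 0.5 * np.abs(np.dot(v, np.roll(q, -1)) - np.dot(q, np.roll(v, -1)))
    return frequency_hz * area   # watts, for V in volts and Q in coulombs sampled over one period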
Abstract:
In this paper we explore the possibility of deriving low-dimensional models of the dynamics of the Martian atmosphere. The analysis consists of a Proper Orthogonal Decomposition (POD) of the atmospheric streamfunction after first decomposing the vertical structure with a set of eigenmodes. The vertical modes were obtained from the quasi-geostrophic vertical structure equation. The empirical orthogonal functions (EOFs) were optimized to represent the atmospheric total energy. The total energy was used as the criterion to retain those modes with large energy content and discard the rest. The principal components (PCs) were analysed by means of Fourier analysis, so that the dominant frequencies could be identified. It was possible to observe the strong influence of the diurnal cycle and to identify the motion and vacillation of baroclinic waves.
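A minimal sketch of the analysis chain summarised above, under stated assumptions: the POD modes (EOFs) are taken as the singular vectors of the (time × space) anomaly matrix, and the dominant frequency of a principal component is read off a simple periodogram. The total-energy weighting and the vertical-mode decomposition described in the abstract are omitted here.

import numpy as np

def pod_modes(field, n_modes=10):
    """field: array (ntime, npoints) of streamfunction samples.
    Returns spatial modes (EOFs), principal components and variance fractions."""
    anomalies = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    pcs = u[:, :n_modes] * s[:n_modes]           # time series of each retained mode
    eofs = vt[:n_modes]                          # corresponding spatial patterns
    explained = s[:n_modes]**2 / np.sum(s**2)    # fraction of variance captured per mode
    return eofs, pcs, explained

def dominant_period(pc, dt_hours=1.0):
    """Period (in hours) of the strongest spectral peak of one principal component."""
    spectrum = np.abs(np.fft.rfft(pc - pc.mean()))**2
    freqs = np.fft.rfftfreq(pc.size, d=dt_hours)
    k = 1 + np.argmax(spectrum[1:])              # skip the zero-frequency bin
    return 1.0 / freqs[k]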