910 results for Greenhouse gas fluxes
Abstract:
Methane is a strong greenhouse gas and large uncertainties exist concerning the future evolution of its atmospheric abundance. Analyzing methane atmospheric mixing ratios and stable isotope ratios in air trapped in polar ice sheets helps in reconstructing the evolution of its sources and sinks in the past. This is important to improve predictions of atmospheric CH4 mixing ratios in the future under the influence of a changing climate. The aim of this study is to assess whether past atmospheric δ13C(CH4) variations can be reliably reconstructed from firn air measurements. Isotope reconstructions obtained with a state-of-the-art firn model from different individual sites show unexpectedly large discrepancies and are mutually inconsistent. We show that small changes in the diffusivity profiles at individual sites lead to strong differences in the firn fractionation, which can explain a large part of these discrepancies. Using slightly modified diffusivities for some sites, and neglecting samples for which the firn fractionation signals are strongest, a combined multi-site inversion can be performed, which returns an isotope reconstruction that is consistent with the firn data. However, the isotope trends are lower than what has been concluded from Southern Hemisphere (SH) archived air samples and high-accumulation ice core data. We conclude that with the current datasets and understanding of firn air transport, a high-precision reconstruction of δ13C of CH4 from firn air samples is not possible, because reconstructed atmospheric trends over the last 50 yr of 0.3–1.5 ‰ are of the same magnitude as the inherent uncertainties in the method: the firn fractionation correction (up to ~2 ‰ at individual sites), the Kr isobaric interference (up to ~0.8 ‰, system dependent), inter-laboratory calibration offsets (~0.2 ‰) and uncertainties in past CH4 levels (~0.5 ‰).
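As a rough illustration (not the authors' error analysis, which is not specified here), the quoted uncertainty terms can be compared with the reconstructed trend; the sketch below simply adds them in quadrature, assuming independence, to show that the combined uncertainty exceeds the 0.3–1.5 ‰ trend.

```python
import math

# Illustrative only: the abstract quotes individual uncertainty terms but does not
# state how they combine; here they are added in quadrature, assuming independence.
terms_permil = {
    "firn fractionation correction": 2.0,  # up to ~2 permil at individual sites
    "Kr isobaric interference": 0.8,       # up to ~0.8 permil, system dependent
    "inter-laboratory calibration": 0.2,   # ~0.2 permil
    "past CH4 levels": 0.5,                # ~0.5 permil
}

combined = math.sqrt(sum(v ** 2 for v in terms_permil.values()))
print(f"combined uncertainty ~{combined:.1f} permil "
      f"vs. reconstructed 50-yr trends of 0.3-1.5 permil")
```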
Abstract:
The destruction of tropical forests continues to accelerate at an alarming rate, contributing an important fraction of overall greenhouse gas emissions. In recent years, much hope has been vested in the emerging REDD+ framework under the UN Framework Convention on Climate Change (UNFCCC), which aims at creating an international incentive system to reduce emissions from deforestation and forest degradation. This paper argues that in the absence of an international consensus on the design of results-based payments, “bottom-up” initiatives should take the lead and explore new avenues. It suggests that a call for tender for REDD+ credits might help both to leverage private investment and to spend scarce public funds in a cost-efficient manner. The paper discusses the pros and cons of results-based approaches, provides an overview of the goals and principles that govern public procurement, and examines their relevance for the purchase of REDD+ credits, in particular within the ambit of the European Union.
Abstract:
The shortcomings of conventional discounting, especially in the context of long-run environmental problems, have been extensively discussed in the literature. Recently, hyperbolic discounting, i.e. discounting at declining instead of constant discount rates, has attracted a lot of interest among both scientists and politicians. Although there are compelling arguments for employing hyperbolic discounting, there are also pitfalls, which have to be pointed out. In this paper I show that the problem of time-inconsistency, an inherent characteristic of hyperbolic discounting, leads to a potential clash between economic efficiency and intergenerational equity. As an example, I refer to the weak progress in controlling greenhouse gas emissions under the Kyoto Protocol. As the problem of time-inconsistency cannot be solved on economic grounds alone, there is a need for an intergenerational moral commitment.
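A minimal numerical sketch of the time-inconsistency problem discussed above; the functional forms (hyperbolic 1/(1 + αt) versus exponential e^(−ρt)) and the parameter values are standard textbook illustrations, not taken from the paper.

```python
import math

def exp_factor(t, rho=0.03):
    # Constant-rate (exponential) discount factor for a payoff t years ahead.
    return math.exp(-rho * t)

def hyp_factor(t, alpha=0.05):
    # Hyperbolic discount factor: the implied discount rate declines with the horizon.
    return 1.0 / (1.0 + alpha * t)

t1, t2 = 100, 101   # two fixed calendar dates (years from today)
for s in (0, 90):   # evaluate the same trade-off today and again in year 90
    for name, f in (("exponential", exp_factor), ("hyperbolic", hyp_factor)):
        ratio = f(t2 - s) / f(t1 - s)
        print(f"evaluated in year {s}: {name} weight of year {t2} vs. year {t1} = {ratio:.3f}")
```

With constant-rate discounting the trade-off between two fixed calendar dates is the same whenever it is evaluated; with the hyperbolic form it changes as the evaluation date approaches, which is the source of the potential clash between efficiency and intergenerational equity noted above.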
Abstract:
Firn and polar ice cores offer the only direct palaeoatmospheric archive. Analyses of past greenhouse gas concentrations and their isotopic compositions in air bubbles in the ice can help to constrain changes in global biogeochemical cycles in the past. For the analysis of the hydrogen isotopic composition of methane (δD(CH4) or δ2H(CH4)), 0.5 to 1.5 kg of ice has hitherto been required. Here we present a method to improve precision and reduce the sample amount for δD(CH4) measurements in (ice core) air. Pre-concentrated methane is focused in front of a high-temperature oven (pre-pyrolysis trapping), and the molecular hydrogen formed by pyrolysis is trapped afterwards (post-pyrolysis trapping), both on a carbon-PLOT capillary at −196 °C. Argon, oxygen, nitrogen, carbon monoxide, unpyrolysed methane and krypton are trapped together with H2 and must be separated using a second short, cooled chromatographic column to ensure accurate results. Pre- and post-pyrolysis trapping largely removes the isotopic fractionation induced during chromatographic separation and results in a narrow peak in the mass spectrometer. Air standards can be measured with a precision better than 1‰. For polar ice samples from glacial periods, we estimate a precision of 2.3‰ for 350 g of ice (or roughly 30 mL – at standard temperature and pressure (STP) – of air) with 350 ppb of methane. This corresponds to recent tropospheric air samples (about 1900 ppb CH4) of about 6 mL (STP) or about 500 pmol of pure CH4.
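The sample-size equivalence stated in the last two sentences can be checked with a short calculation; the ideal-gas molar volume at STP is the only assumed constant.

```python
# Rough check of the sample sizes quoted above (mixing ratios from the abstract;
# the ideal-gas molar volume at STP is the only assumed value).
V_M = 22.414  # L/mol at STP

def pmol_ch4(volume_ml_stp, ch4_ppb):
    """Picomoles of CH4 in an air sample of given volume (mL at STP) and mixing ratio (ppb)."""
    mol_air = (volume_ml_stp / 1000.0) / V_M
    return mol_air * ch4_ppb * 1e-9 * 1e12   # ppb -> mole fraction; mol -> pmol

print(pmol_ch4(30, 350))    # ~470 pmol: glacial ice sample (~350 g of ice, 350 ppb CH4)
print(pmol_ch4(6, 1900))    # ~510 pmol: recent tropospheric air (~1900 ppb CH4)
```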
Abstract:
Nitrous oxide (N2O) is an important greenhouse gas and ozone-depleting substance that has anthropogenic as well as natural marine and terrestrial sources. The tropospheric N2O concentrations have varied substantially in the past in concert with changing climate on glacial–interglacial and millennial timescales. It is not well understood, however, how N2O emissions from marine and terrestrial sources change in response to varying environmental conditions. The distinct isotopic compositions of marine and terrestrial N2O sources can help disentangle the relative changes in marine and terrestrial N2O emissions during past climate variations. Here we present N2O concentration and isotopic data for the last deglaciation, from 16,000 to 10,000 years before present, retrieved from air bubbles trapped in polar ice at Taylor Glacier, Antarctica. With the help of our data and a box model of the N2O cycle, we find a 30 per cent increase in total N2O emissions from the late glacial to the interglacial, with terrestrial and marine emissions contributing equally to the overall increase and generally evolving in parallel over the last deglaciation, even though there is no a priori connection between the drivers of the two sources. However, we find that terrestrial emissions dominated on centennial timescales, consistent with a state-of-the-art dynamic global vegetation and land surface process model that suggests that during the last deglaciation emission changes were strongly influenced by temperature and precipitation patterns over land surfaces. The results improve our understanding of the drivers of natural N2O emissions and are consistent with the idea that natural N2O emissions will probably increase in response to anthropogenic warming.
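The box model of the N2O cycle mentioned above is not described here; the one-box budget below is a generic illustration of the concept, with an assumed atmospheric lifetime and ppb-to-Tg N conversion taken from commonly used literature-scale values rather than from the authors' setup.

```python
import numpy as np

# Generic one-box N2O budget, for illustration only (not the authors' model).
# Assumed values: atmospheric lifetime ~120 yr and ~4.8 Tg N per ppb of N2O.
TAU = 120.0         # yr
TG_N_PER_PPB = 4.8  # Tg N per ppb N2O

def integrate(conc0_ppb, emissions_tg_n_per_yr, dt=1.0):
    """Step the one-box model dC/dt = E/k - C/tau forward with explicit Euler steps."""
    conc = [conc0_ppb]
    for e in emissions_tg_n_per_yr:
        c = conc[-1]
        conc.append(c + (e / TG_N_PER_PPB - c / TAU) * dt)
    return np.array(conc)

# Illustration: a steady state near 200 ppb requires ~8 Tg N/yr; holding emissions
# 30% higher lets the concentration relax towards ~260 ppb over a few lifetimes.
emissions = np.full(2000, 8.0 * 1.3)
print(integrate(200.0, emissions)[-1])
```

At steady state the mixing ratio is E·τ/k, so a sustained 30 per cent rise in total emissions eventually raises the atmospheric concentration by the same proportion.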
Abstract:
– Swiss forests experience strong impacts under the CH2011 scenarios, partly even for the low greenhouse gas scenario RCP3PD. Negative impacts prevail in low-elevation forests, whereas mostly positive impacts are expected in high-elevation forests.
– Major changes in the distribution of the two most important tree species, Norway spruce and European beech, are expected. Growth conditions for spruce improve in a broad range of scenarios at presently cool high-elevation sites with plentiful precipitation, but in the case of strong warming (A1B and A2) spruce and beech are at risk in large parts of the Swiss Plateau.
– High-elevation forests that are temperature-limited will show little change in species composition but an increase in biomass. In contrast, forests at low elevations in warm-dry inner-Alpine valleys are sensitive to even moderate warming and may no longer sustain current biomass and species.
– Timber production potential, carbon storage, and protection from avalanches and rockfall react differently to climate change, with an overall tendency to deteriorate at low elevations and improve at high elevations.
– Climate change will also affect forests indirectly, e.g., by increasing the risk of infestation by spruce bark beetles, which will profit from an extended flight period and will produce more generations per year.
Abstract:
Organic soils in peatlands store a great proportion of the global soil carbon pool and can lose carbon to the atmosphere due to degradation. In Germany, most of the greenhouse gas (GHG) emissions from organic soils are attributed to sites managed as grassland. Here, we investigated a land use gradient from a near-natural wetland (NW) through an extensively managed (GE) to an intensively managed grassland site (GI), all formed in the same bog complex in northern Germany. Vertical depth profiles of δ13C, δ15N, ash content, C/N ratio and bulk density, as well as radiocarbon ages, were studied to identify peat degradation and to calculate carbon loss. At all sites, including the near-natural site, δ13C depth profiles indicate aerobic decomposition in the upper horizons. Depth profiles of δ15N differed significantly between sites, with increasing δ15N values in the top soil layers paralleling the increase in land use intensity, owing to differences in peat decomposition and fertilizer application. At both grassland sites, the ash content peaked within the first centimetres. At the near-natural site, ash contents were highest at 10–60 cm depth. The ash profiles, not only at the managed grassland sites but also at the near-natural site, indicate that all sites were influenced by anthropogenic activities, either currently or in the past, most likely due to drainage. Based on the enrichment of ash content and changes in bulk density, we calculated the total carbon loss from the sites since the peatland was first influenced by anthropogenic activities. Carbon loss at the sites increased in the following order: NW < GE < GI. Radiocarbon ages of peat in the topsoil of GE and GI were several hundred years, indicating the loss of younger peat material. In contrast, peat in the first centimetres of the NW site was only a few decades old, indicating recent peat growth. It is likely that the NW site accumulates carbon today but was perturbed by anthropogenic activities in the past. Together, all biogeochemical parameters indicate degradation of the peat due to (i) conversion to grassland with historical drainage and (ii) land use intensification.
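The carbon-loss calculation is described above only qualitatively (enrichment of ash content plus changes in bulk density); the sketch below illustrates an ash mass-balance of the kind commonly applied to degraded peat, with hypothetical parameter values rather than the study's data.

```python
# Illustrative ash mass-balance for cumulative carbon loss from a degraded peat layer.
# Assumes the mineral ash behaves conservatively during decomposition; all numbers
# below are hypothetical, not values from the study.

def carbon_loss_kg_m2(thickness_m, bulk_density_kg_m3,
                      ash_frac_degraded, ash_frac_reference,
                      c_frac_of_om=0.5):
    """Carbon lost per m2 from a degraded layer, relative to undisturbed reference peat."""
    dry_mass = thickness_m * bulk_density_kg_m3        # current dry mass per m2
    ash_mass = dry_mass * ash_frac_degraded            # conserved mineral mass
    original_mass = ash_mass / ash_frac_reference      # dry mass before degradation
    om_lost = (original_mass * (1 - ash_frac_reference)
               - dry_mass * (1 - ash_frac_degraded))
    return om_lost * c_frac_of_om

# Hypothetical example: 0.3 m degraded topsoil, bulk density 250 kg/m3,
# ash content 15% versus 5% in the reference peat.
print(carbon_loss_kg_m2(0.3, 250.0, 0.15, 0.05))
```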
Abstract:
A time-lapse pressure tomography inversion approach is applied to characterize CO2 plume development in a virtual deep saline aquifer. Deep CO2 injection alters the flow properties of the mixed phase, which vary with the CO2 saturation. Analogous to the crossed ray paths of a seismic tomographic experiment, pressure tomography creates streamline patterns by injecting brine prior to CO2 injection, or by injecting small amounts of CO2 into the two-phase (brine and CO2) system, at different depths. In a first step, the recorded pressure responses at observation locations are used for a computationally rapid and efficient eikonal-equation-based inversion to reconstruct the heterogeneity of the subsurface as diffusivity (D) tomograms. Information about the plume shape can be derived by comparing D-tomograms of the aquifer at different times. In a second step, the aquifer is subdivided into two zones of constant hydraulic conductivity (K) and specific storage (Ss) through a clustering approach. For the CO2 plume, mixed-phase K and Ss values are estimated by minimizing the difference between calculated and “true” pressure responses using a single-phase flow simulator to reduce the computational complexity. Finally, the estimated flow properties are converted to gas saturation by a single-phase proxy, which represents an integrated value for the plume. This novel approach is first tested with a doublet well configuration, and it reveals the great potential of pressure-tomography-based concepts for characterizing and monitoring deep aquifers, as well as the evolution of a CO2 plume. Still, field testing will be required to better assess the applicability of this approach.
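The two-zone subdivision described in the second step is a clustering problem; the sketch below uses k-means on a synthetic diffusivity tomogram as one plausible way to separate plume from background, not necessarily the clustering method used by the authors.

```python
import numpy as np
from sklearn.cluster import KMeans

# Minimal sketch of the zonation step: cluster a hydraulic-diffusivity tomogram
# into two zones (background aquifer vs. CO2 plume). Synthetic values only;
# k-means is one plausible clustering choice, not necessarily the authors'.
rng = np.random.default_rng(0)
tomogram = np.concatenate([
    rng.normal(1.0, 0.1, 800),   # background diffusivity (arbitrary units)
    rng.normal(0.3, 0.05, 200),  # lower mixed-phase diffusivity inside the plume
])

labels = KMeans(n_clusters=2, n_init=10).fit_predict(np.log10(tomogram).reshape(-1, 1))
for zone in (0, 1):
    print(f"zone {zone}: {np.sum(labels == zone)} cells, "
          f"mean D = {tomogram[labels == zone].mean():.2f}")
```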
Abstract:
Eco-driving has well-known positive effects on fuel economy and greenhouse-gas emissions. Moreover, eco-driving reduces road-traffic noise, which is a serious threat to the health and well-being of many people. We investigated the psychological predictors of the adoption of eco-driving from the perspective of road-traffic noise abatement. The data came from 890 car drivers who participated in a longitudinal survey over four months. Specifically, we tested the effects of the intention to prevent road-traffic noise, variables derived from the theory of planned behavior (social norm, perceived behavioral control, and attitude), and variables derived from the health action process approach (implementation intention, maintenance self-efficacy, and action control) on the intention to practice eco-driving and on eco-driving behavior. The intention to prevent road-traffic noise was not linked to the intention to practice eco-driving. The strongest predictors of the intention to practice eco-driving were attitude and perceived behavioral control. The strongest predictor of eco-driving behavior was action control. The link between behavioral intention and behavior was weak, indicating that drivers have difficulties putting their intention to practice eco-driving into action. Therefore, intervention efforts should directly address and support the transition from intention to behavior. This could be accomplished by providing reminders, which help to maintain behavioral intention, and by providing behavior feedback, which helps car drivers to monitor their behavior.
Abstract:
Methane (CH4) and carbon dioxide emissions from lakes are relevant for assessing the greenhouse gas output of wetlands. However, only a few standardized datasets describe the concentrations of these gases in lakes across different geographical regions. We studied concentrations and the stable carbon isotopic composition (δ13C) of CH4 and dissolved inorganic carbon (DIC) in 32 small lakes from Finland, Sweden, Germany, the Netherlands, and Switzerland in late summer. Higher concentrations and δ13C values of DIC were observed in calcareous lakes than in lakes in non-calcareous areas. In stratified lakes, δ13C values of DIC were generally lower in the hypolimnion due to the degradation of organic matter (OM). Unexpectedly, increased δ13C values of DIC were registered above the sediment in several lakes. This may reflect carbonate dissolution in calcareous lakes or methanogenesis in deep water layers or in the sediments. Surface water CH4 concentrations were generally higher in western and central European lakes than in Fennoscandian lakes, possibly due to higher CH4 production in the littoral sediments and lateral transport, whereas CH4 concentrations in the hypolimnion did not differ significantly between the regions. The δ13C values of CH4 in the sediment suggest that δ13C values of biogenic CH4 are not necessarily linked to δ13C values of sedimentary OM but may be strongly influenced by OM quality and the methanogenic pathway. Our study suggests that CH4 and DIC cycling in small lakes differs between geographical regions and that this should be taken into account when regional studies on greenhouse gas emissions are upscaled to inter-regional scales.
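One standard way to diagnose the methanogenic pathway from paired δ13C values of DIC (or CO2) and CH4, as discussed above, is the apparent carbon fractionation factor; the sketch below uses hypothetical values and commonly cited interpretive ranges, neither taken from this study.

```python
# Apparent carbon isotope fractionation factor between DIC (or CO2) and CH4, often
# used to diagnose the dominant methanogenic pathway. All values here are hypothetical.
def alpha_c(d13c_dic, d13c_ch4):
    """alpha_C = (d13C_DIC + 1000) / (d13C_CH4 + 1000), with delta values in permil."""
    return (d13c_dic + 1000.0) / (d13c_ch4 + 1000.0)

# Hypothetical sediment pore-water values: DIC at -5 permil, CH4 at -70 permil.
print(f"alpha_C = {alpha_c(-5.0, -70.0):.3f}")
# Commonly cited (not sharp) interpretive ranges: roughly 1.055-1.090 for CO2
# reduction (hydrogenotrophic) and roughly 1.040-1.055 for acetate fermentation.
```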
Abstract:
This research investigated how an individual’s endorsements of mitigation and adaptation relate to each other, and how well each of these can be accounted for by relevant social psychological factors. Based on survey data from two European convenience samples (N = 616 / 309) we found that public endorsements of mitigation and adaptation are strongly associated: Someone who is willing to reduce greenhouse gas emissions (mitigation) is also willing to prepare for climate change impacts (adaptation). Moreover, people endorsed the two response strategies for similar reasons: People who believe that climate change is real and dangerous, who have positive attitudes about protecting the environment and the climate, and who perceive climate change as a risk, are willing to respond to climate change. Furthermore, distinguishing between (spatially) proximal and distant risk perceptions suggested that the idea of portraying climate change as a proximal (i.e., local) threat might indeed be effective in promoting personal actions. However, to gain endorsement of broader societal initiatives such as policy support, it seems advisable to turn to the distant risks of climate change. The notion that “localising” climate change might not be the panacea for engaging people in this domain is discussed in regard to previous theory and research.
Abstract:
Temperature changes in Antarctica over the last millennium are investigated using proxy records, a set of simulations driven by natural and anthropogenic forcings, and one simulation with data assimilation. Over Antarctica, a long-term cooling trend in the annual mean is simulated during the period 1000–1850. The main contributor to this cooling trend is volcanic forcing, with astronomical forcing playing a dominant role at the seasonal timescale. Since 1850, all the models produce an Antarctic warming in response to the increase in greenhouse gas concentrations. We present a composite of Antarctic temperature, calculated by averaging seven temperature records derived from isotope measurements in ice cores. This simple approach is supported by the coherency displayed between model results at these data grid points and Antarctic mean temperature. The composite shows a weak multi-centennial cooling trend during the pre-industrial period and a warming after 1850 that is broadly consistent with model results. In both data and simulations, large regional variations are superimposed on this common signal at decadal to centennial timescales. The model results appear spatially more consistent than the ice core records. We conclude that more records are needed to resolve the complex spatial distribution of Antarctic temperature variations during the last millennium.
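The composite described above is a simple average of seven isotope-derived temperature records; the sketch below shows the basic operation on synthetic series, assuming the records have already been placed on a common time axis and expressed as anomalies (details the abstract does not give).

```python
import numpy as np

# Minimal sketch of a multi-record composite: average several isotope-derived
# temperature records after expressing each as an anomaly. Synthetic series only;
# the real records must first share a common chronology.
rng = np.random.default_rng(1)
years = np.arange(1000, 2001)
records = [np.cumsum(rng.normal(0.0, 0.05, years.size)) for _ in range(7)]

anomalies = [r - r.mean() for r in records]            # anomaly relative to each record's mean
composite = np.nanmean(np.vstack(anomalies), axis=0)   # unweighted average; nanmean tolerates gaps
print(years[composite.argmax()], composite.max())
```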
Abstract:
67P/Churyumov-Gerasimenko (67P) is a Jupiter-family comet and the target of the European Space Agency's Rosetta mission. This report presents the first full 3D simulation results for 67P's neutral gas coma. In this study we include results from a direct simulation Monte Carlo method, a hydrodynamic code, and a purely geometric calculation which computes the total illuminated surface area of the nucleus. All models include the triangulated 3D shape model of 67P as well as realistic illumination and shadowing conditions. The basic concept is the assumption that the illumination conditions on the nucleus are the main driver of the comet's gas activity. As a consequence, the total production rate of 67P varies as a function of solar insolation. The best agreement between the models and the data is achieved when gas fluxes on the night side are in the range of 7% to 10% of the maximum flux, accounting for contributions from the most volatile components. To validate the output of our numerical simulations we compare the results of all three models to in situ gas number density measurements from the ROSINA COPS instrument. We are able to reproduce the overall features of these local neutral number density measurements of ROSINA COPS for the time period between early August 2014 and 1 January 2015 with all three models. Some details in the measurements are not reproduced and warrant further investigation and refinement of the models. However, the overall assumption that illumination conditions on the nucleus are at least an important driver of the gas activity is validated by the models. According to our simulation results, the total production rate of 67P was constant between August and November 2014, with a value of about 1 × 10²⁶ molecules s⁻¹.
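The purely geometric model mentioned above sums the sunlit facet area of the triangulated shape model; the sketch below shows that calculation for an arbitrary mesh, but ignores facet-to-facet shadowing, which the actual model includes.

```python
import numpy as np

def illuminated_area(vertices, faces, sun_dir):
    """Total facet area with the Sun above the local horizon (no facet-facet shadowing).

    vertices: (N, 3) array; faces: (M, 3) integer index array; sun_dir: unit vector
    pointing from the nucleus towards the Sun. Toy geometry only; the real
    calculation also accounts for facets shadowed by other parts of the nucleus.
    """
    v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
    cross = np.cross(v1 - v0, v2 - v0)
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    cos_inc = normals @ sun_dir
    return areas[cos_inc > 0].sum()

# Toy example: a single triangle facing +z, Sun along +z.
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])
print(illuminated_area(verts, faces, np.array([0.0, 0.0, 1.0])))  # 0.5
```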
Abstract:
Diminishing crude oil and natural gas supplies, along with concerns about greenhouse gas emissions, are major driving forces in the search for efficient renewable energy sources. The conversion of lignocellulosic biomass to energy and useful chemicals is one component of the solution. Ethanol is most commonly produced by enzymatic hydrolysis of complex carbohydrates to simple sugars, followed by fermentation using yeast: C6H10O5 + H2O →(enzymes) C6H12O6 →(yeast) 2 CH3CH2OH + 2 CO2. In the U.S., corn is the primary raw material for commercial ethanol production. However, there is insufficient corn available to meet the future demand for ethanol as a gasoline additive. Consequently, a variety of processes are being developed for producing ethanol from biomass, among which is the NREL process for the production of ethanol from white hardwood. The objective of the thesis reported here was to perform a technical-economic analysis of the hardwood-to-ethanol process. In this analysis a greenfield plant was compared to co-locating the ethanol plant adjacent to a Kraft pulp mill. The advantage of the latter case is that facilities can be shared jointly for ethanol production and for the production of pulp. Preliminary process designs were performed for three cases: a base-case size of 2205 dry tons/day of hardwood (52 million gallons of ethanol per year), as well as half and double this size. The thermal efficiency of the NREL process was estimated to be approximately 36%; that is, about 36% of the thermal energy in the wood is retained in the product ethanol and by-product electrical energy. The discounted cash flow rate of return on investment and the net present value methods of evaluating process alternatives were used to evaluate the economic feasibility of the NREL process. The minimum acceptable discounted cash flow rate of return after taxes was assumed to be 10%. In all of the process alternatives investigated, the dominant cost factors are the capital recovery charges and the cost of wood. The greenfield NREL process is not economically viable, with the cost of producing ethanol varying from $2.58/gallon for the half-capacity case to $2.08/gallon for the double-capacity case. The co-location cases appear more promising due to reductions in capital costs. The most profitable co-location configuration showed a discounted cash flow rate of return improving from 8.5% for the half-capacity case to 20.3% for the double-capacity case. Due to economies of scale, the investment becomes more profitable as the size of the plant increases. This is limited by the amount of wood that can be delivered to the plant on a sustainable basis as well as by the demand for ethanol within a reasonable distance of the plant.
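The two evaluation methods named above, net present value and the discounted cash flow rate of return, can be illustrated with a toy cash-flow series; the figures below are hypothetical and are not taken from the thesis.

```python
# Toy illustration of the evaluation methods named above: net present value and the
# discounted cash flow rate of return (internal rate of return). Cash flows are
# hypothetical, not figures from the thesis.

def npv(rate, cash_flows):
    """Net present value of year-end cash flows, with cash_flows[0] at time zero."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def dcf_ror(cash_flows, lo=-0.9, hi=1.0, tol=1e-6):
    """Discount rate at which NPV = 0, found by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical plant: 300 (million $) investment, then 45/yr of net cash flow for 20 years.
flows = [-300.0] + [45.0] * 20
print(npv(0.10, flows))   # NPV at the 10% minimum acceptable return (positive here)
print(dcf_ror(flows))     # ~0.139, i.e. a DCF rate of return of about 13.9%
```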
Abstract:
We obtained sediment physical properties and geochemical data from 47 piston and gravity cores located in the Bay of Bengal to study the complex history of the Late Pleistocene run-off from the Ganges and Brahmaputra rivers and its imprint on the Bengal Fan. Grain-size parameters were predicted from core logs of density and velocity to infer sediment transport energy and to distinguish different environments along the 3000-km-long transport path from the delta platform to the lower fan. On the shelf, 27 cores indicate rapidly prograding delta foresets today that contain primarily mud, whereas outer shelf sediment has 25% higher silt contents, indicative of a stronger and more stable transport regime, which prevents deposition and exposes a Late Pleistocene relict surface. Deposition is currently directed towards the shelf canyon 'Swatch of No Ground', where turbidites are released to the only channel-levee system that has been active on the fan during the Holocene. Active growth of the channel-levee system occurred throughout sea-level rise and highstand, with a distinct growth phase at the end of the Younger Dryas. Coarse-grained material bypasses the upper fan and upper parts of the middle fan, where particle flow is enhanced as a result of flow restriction in well-defined channels. Sandier material is deposited mainly as sheet-flow deposits on turbidite-dominated plains of the lower fan. The currently most active part of the fan, with 10–40 cm thick turbidites, is documented for the central channel including inner levees (e.g., site 40). Site 47 from the lower fan, far to the east of the active channel-levee system, indicates the end of turbidite sedimentation at 300 ka for that location. That time corresponds to the sea-level lowering during late isotopic stage 9, when sediment supply to the fan increased and led to channel avulsion farther upstream, probably indicating a close relation between climate variability and fan activity. Pelagic deep-sea sites 22 and 28 contain a 630-kyr record of climate response to orbital forcing with dominant 21- and 41-kyr cycles for carbonate and magnetic susceptibility, respectively, pointing to teleconnections between low-latitude monsoonal forcing in the precession band and high-latitude obliquity forcing. Upper slope sites 115, 124, and 126 contain a record of the response to high-frequency climate change in the Dansgaard-Oeschger bands during the last glacial cycle, with shared periodicities between 0.75 and 2.5 kyr. Correlation of highs in Bengal Fan physical properties to lows in the δ18O record of the GISP2 ice core suggests that times of greater sediment transport energy in the Bay of Bengal are associated with cooler air temperatures over Greenland. Teleconnections were probably established through moisture and other greenhouse-gas forcing that could have been initiated by instabilities in the methane hydrate reservoir in the oceans.
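The 21- and 41-kyr cycles reported above come from spectral analysis of down-core records; the sketch below recovers such periodicities from a synthetic, unevenly sampled series (a Lomb-Scargle periodogram is one common choice for the uneven sampling produced by age models; the study's actual method is not specified here).

```python
import numpy as np
from scipy.signal import lombscargle

# Minimal sketch of recovering orbital periodicities from an unevenly sampled
# down-core record (synthetic data standing in for the carbonate or magnetic
# susceptibility series; not the study's data or method).
rng = np.random.default_rng(2)
age_kyr = np.sort(rng.uniform(0, 630, 500))      # uneven age model over 0-630 ka
signal = (np.sin(2 * np.pi * age_kyr / 21.0)
          + 0.8 * np.sin(2 * np.pi * age_kyr / 41.0)
          + rng.normal(0.0, 0.3, age_kyr.size))

periods = np.linspace(10.0, 120.0, 2000)         # candidate periods in kyr
power = lombscargle(age_kyr, signal - signal.mean(), 2 * np.pi / periods)
print(f"strongest periodicity ~{periods[power.argmax()]:.1f} kyr")  # ~21 kyr; a secondary peak sits near 41 kyr
```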