79 results for Global and Nonglobal Solutions
Abstract:
Temperature results from multi-decadal simulations of coupled chemistry climate models for the recent past are analyzed using multi-linear regression including a trend, solar cycle, lower stratospheric tropical wind, and volcanic aerosol terms. The climatology of the models for recent years is in good agreement with observations for the troposphere but the model results diverge from each other and from observations in the stratosphere. Overall, the models agree better with observations than in previous assessments, primarily because of corrections in the observed temperatures. The annually averaged global and polar temperature trends simulated by the models are generally in agreement with revised satellite observations and radiosonde data over much of their altitude range. In the global average, the model trends underpredict the radiosonde data slightly at the top of the observed range. Over the Antarctic some models underpredict the temperature trend in the lower stratosphere, while others overpredict the trends.
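The regression described in this abstract can be sketched as an ordinary least-squares fit of temperature anomalies against the listed explanatory terms. The sketch below uses synthetic regressors and invented coefficients purely as stand-ins for the real indices (trend, ~11-yr solar cycle, tropical lower-stratospheric wind, volcanic aerosol); it illustrates the multi-linear regression technique, not the authors' actual data or code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_months = 360  # 30 years of monthly data

# Synthetic stand-ins for the four explanatory terms.
trend = np.arange(n_months) / 120.0                    # decades elapsed
solar = np.sin(2 * np.pi * np.arange(n_months) / 132)  # ~11-yr solar cycle
qbo = np.sin(2 * np.pi * np.arange(n_months) / 28)     # ~28-month wind oscillation
aerosol = np.exp(-np.arange(n_months) / 24)            # decaying volcanic pulse

# Synthetic "observed" anomalies built from known (invented) coefficients.
temps = (-0.5 * trend + 0.1 * solar + 0.05 * qbo + 0.3 * aerosol
         + 0.02 * rng.standard_normal(n_months))

# Design matrix: intercept plus the four terms; solve by least squares.
X = np.column_stack([np.ones(n_months), trend, solar, qbo, aerosol])
coeffs, *_ = np.linalg.lstsq(X, temps, rcond=None)

print("fitted trend (K/decade):", round(float(coeffs[1]), 3))
```

With low noise, the fitted trend coefficient recovers the value used to generate the data, which is the sense in which such a regression separates the trend from the cyclic and episodic terms.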
Abstract:
Changes in climate variability and, in particular, changes in extreme climate events are likely to be of far more significance for environmentally vulnerable regions than changes in the mean state. It is generally accepted that sea-surface temperatures (SSTs) play an important role in modulating rainfall variability. Consequently, SSTs can be prescribed in global and regional climate modelling in order to study the physical mechanisms behind rainfall and its extremes. Using a satellite-based daily rainfall historical data set, this paper describes the main patterns of rainfall variability over southern Africa, identifies the dates when extreme rainfall occurs within these patterns, and shows the effect of resolution in trying to identify the location and intensity of SST anomalies associated with these extremes in the Atlantic and southwest Indian Ocean. Derived from a Principal Component Analysis (PCA), the results also suggest that, for the spatial pattern accounting for the highest amount of variability, extremes extracted at a higher spatial resolution do give a clearer indication regarding the location and intensity of anomalous SST regions. As the amount of variability explained by each spatial pattern defined by the PCA decreases, it would appear that extremes extracted at a lower resolution give a clearer indication of anomalous SST regions.
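The Principal Component Analysis step in this abstract can be illustrated with a minimal SVD-based sketch: decompose a (time × gridpoint) rainfall anomaly matrix, measure the variance explained by the leading spatial pattern, and pick the dates with the most extreme scores on it. The data here are synthetic with one dominant planted mode; this is a generic PCA sketch, not the paper's satellite data set or method details.

```python
import numpy as np

rng = np.random.default_rng(1)
n_days, n_points = 500, 40

# Synthetic daily rainfall anomalies with one dominant spatial mode.
pattern = rng.standard_normal(n_points)
scores = rng.standard_normal(n_days)
rain = np.outer(scores, pattern) + 0.3 * rng.standard_normal((n_days, n_points))

# PCA via SVD of the time-mean-removed data matrix.
anom = rain - rain.mean(axis=0)
U, S, Vt = np.linalg.svd(anom, full_matrices=False)

explained = S**2 / np.sum(S**2)               # fraction of variance per mode
pc1 = U[:, 0] * S[0]                          # leading principal component
extreme_days = np.argsort(np.abs(pc1))[-10:]  # 10 most extreme dates on mode 1

print("variance explained by mode 1:", round(float(explained[0]), 2))
```

The rows of `Vt` are the spatial patterns and `extreme_days` indexes the dates whose rainfall projects most strongly onto the leading pattern, analogous to extracting extreme-rainfall dates within a PCA mode.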
Abstract:
Scenarios are used to explore the consequences of different adaptation and mitigation strategies under uncertainty. In this paper, two scenarios are used to explore developments with (1) no mitigation leading to an increase of global mean temperature of 4 °C by 2100 and (2) an ambitious mitigation strategy leading to 2 °C increase by 2100. For the second scenario, uncertainties in the climate system imply that a global mean temperature increase of 3 °C or more cannot be ruled out. Our analysis shows that, in many cases, adaptation and mitigation are not trade-offs but supplements. For example, the number of people exposed to increased water resource stress due to climate change can be substantially reduced in the mitigation scenario, but adaptation will still be required for the remaining large numbers of people exposed to increased stress. Another example is sea level rise, for which, from a global and purely monetary perspective, adaptation (up to 2100) seems more effective than mitigation. From the perspective of poorer and small island countries, however, stringent mitigation is necessary to keep risks at manageable levels. For agriculture, only a scenario based on a combination of adaptation and mitigation is able to avoid serious climate change impacts.
Abstract:
This paper assesses the impact of monetary integration on different types of stock returns in Europe. In order to isolate European factors, the impacts of global equity integration and small cap factors are investigated. European countries are sub-divided according to the process of monetary convergence. The analysis shows that national equity indices are strongly influenced by global market movements, with a European stock factor providing additional explanatory power. The global and European factors explain small cap and real estate stocks much less well, suggesting an increased importance of 'local' drivers. For real estate, there are notable differences between core and non-core countries. Core European countries exhibit convergence, a convergence to a European rather than a global factor. The non-core countries do not seem to exhibit common trends or movements. For the non-core countries, monetary integration has been associated with increased dispersion of returns, lower correlation, and lower explanatory power of a European factor. It is concluded that this may be explained by divergence in underlying macro-economic drivers between core and non-core countries in the post-Euro period.
Abstract:
Uranium series dating has been carried out on secondary uranyl silicate minerals formed during sub-glacial and post-glacial weathering of Proterozoic uraninite ores in south west Finland. The samples were obtained from two sites adjacent to the Salpauselkä III ice marginal formation and cover a range of depths, from the surface to more than 60 m. Measured ages fall into three distinct groups, 70–100 ka, 28–36 ka and < 2500 yr. The youngest set is associated with surface exposures and the crystals display clear evidence of re-working. The most likely trigger for uranium release at depths below the surface weathering zone is intrusion of oxidising glacial melt water. The latter is often characterised by very high discharge rates along channels, which close once the overpressure generated at the ice margin is released. There is excellent correspondence between the two Finnish sites and published data for similar deposits over a large area of southern and central Sweden. None of the seventy samples analysed gave a U–Th age between 40 and 70 ka; a second hiatus is apparent at 20 ka, coinciding with the Last Glacial Maximum. Thus, the process responsible for uranyl silicate formation was halted for significant periods, owing to a change in geochemical conditions or the hydrogeological regime. These data support the presence of interstadial conditions during the Early and Middle Weichselian since in the absence of major climatic perturbations the uranium phases at depth are stable. When viewed in conjunction with proxy data from mammoth remains it would appear that the region was ice-free prior to the Last Glacial Maximum.
Abstract:
B. subtilis, under certain types of media and fermentation conditions, can produce surfactin, a biosurfactant of the lipopeptide class. Surfactin has exceptional surfactant activity and exhibits several interesting biological characteristics, such as antibacterial activity, antitumoral activity against ascites carcinoma cells, and a hypocholesterolemic activity that inhibits cAMP phosphodiesterase, as well as anti-HIV properties. A cost-effective recovery and purification of surfactin from fermentation broth using a two-step ultrafiltration (UF) process has been developed in order to reduce the cost of surfactin production. In this study, competitive adsorption of surfactin and proteins at the air-water interface was studied using surface pressure measurements. Small volumes of bovine serum albumin (BSA) and β-casein solutions were added to the air-water interface on a Langmuir trough and allowed to stabilise before the addition of surfactin to the subphase. Contrasting interfacial behaviour of the proteins was observed, with β-casein showing faster initial adsorption than BSA. On introduction of surfactin, both proteins were displaced, but a longer time was needed to displace β-casein. Overall, the results showed that surfactin was highly surface-active, forming a β-sheet structure at the air-water interface after reaching its critical micelle concentration (CMC), and was effective in removing both protein films, which can be explained by the orogenic mechanism. The results also showed that the two-step UF process was effective in achieving high-purity, fully functional surfactin.
Abstract:
SANS from deuterated ferritin and apoferritin solutions over the temperature range 5 to 300 K is presented. Above the freezing point the SANS is well described by Percus-Yevick hard-sphere packing. On freezing, highly correlated, partially crystallised clusters of the proteins form and grow with decreasing temperature. The resulting scattering, characterised by a squared Lorentzian structure factor, indicates a spatial extent of 1000 Å for the protein clusters.
Abstract:
A detailed analysis is undertaken of the Atlantic-European climate using data from 500-year-long proxy-based climate reconstructions, a long climate simulation with perpetual 1990 forcing, as well as two global and one regional climate change scenarios. The observed and simulated interannual variability and teleconnectivity are compared and interpreted in order to improve the understanding of natural climate variability on interannual to decadal time scales for the late Holocene. The focus is set on the Atlantic-European and Alpine regions during the winter and summer seasons, using temperature, precipitation, and 500 hPa geopotential height fields. The climate reconstruction shows pronounced interdecadal variations that appear to “lock” the atmospheric circulation in quasi-steady long-term patterns over multi-decadal periods controlling at least part of the temperature and precipitation variability. Different circulation patterns are persistent over several decades for the period 1500 to 1900. The 500-year-long simulation with perpetual 1990 forcing shows some substantial differences, with a more unsteady teleconnectivity behaviour. Two global scenario simulations indicate a transition towards more stable teleconnectivity for the next 100 years. Time series of reconstructed and simulated temperature and precipitation over the Alpine region show comparatively small changes in interannual variability within the time frame considered, with the exception of the summer season, where a substantial increase in interannual variability is simulated by regional climate models.
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution is not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure.
Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limitations in computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and the scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.
Abstract:
Summer rainfall over China has experienced substantial variability on longer time scales during the last century, and the question remains whether this is due to natural, internal variability or is part of the emerging signal of anthropogenic climate change. Using the best available observations over China, the decadal variability and recent trends in summer rainfall are investigated with the emphasis on changes in the seasonal evolution and on the temporal characteristics of daily rainfall. The possible relationships with global warming are reassessed. Substantial decadal variability in summer rainfall has been confirmed during the period 1958–2008; this is not unique to this period but is also seen in the earlier decades of the twentieth century. Two dominant patterns of decadal variability have been identified that contribute substantially to the recent trend of southern flooding and northern drought. Natural decadal variability appears to dominate in general, but in the cases of rainfall intensity and the frequency of rainfall days, particularly light rain days, the dominant EOFs have a rather different character, being of one sign over most of China and having principal components (PCs) that appear more trendlike. The increasing intensity of rainfall throughout China and the decrease in light rainfall days, particularly in the north, could at least partially be of anthropogenic origin, both global and regional, linked to increased greenhouse gases and increased aerosols.
Abstract:
Global flood hazard maps can be used in the assessment of flood risk in a number of different applications, including (re)insurance and large scale flood preparedness. Such global hazard maps can be generated using large scale physically based models of rainfall-runoff and river routing, when used in conjunction with a number of post-processing methods. In this study, the European Centre for Medium Range Weather Forecasts (ECMWF) land surface model is coupled to ERA-Interim reanalysis meteorological forcing data, and resultant runoff is passed to a river routing algorithm which simulates floodplains and flood flow across the global land area. The global hazard map is based on a 30 yr (1979–2010) simulation period. A Gumbel distribution is fitted to the annual maxima flows to derive a number of flood return periods. The return periods are calculated initially for a 25×25 km grid, which is then reprojected onto a 1×1 km grid to derive maps of higher resolution and estimate flooded fractional area for the individual 25×25 km cells. Several global and regional maps of flood return periods ranging from 2 to 500 yr are presented. The results compare reasonably to a benchmark data set of global flood hazard. The developed methodology can be applied to other datasets on a global or regional scale.
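The post-processing step this abstract describes, fitting a Gumbel distribution to annual maximum flows and inverting it for return periods, can be sketched as follows. The flow values are synthetic and the method-of-moments parameter estimates are a common textbook choice, not necessarily the estimator the study used.

```python
import math
import random

# Synthetic annual maximum river flows (m^3/s) for a 30-yr period.
random.seed(42)
annual_maxima = [random.gauss(1200.0, 250.0) for _ in range(30)]

n = len(annual_maxima)
mean = sum(annual_maxima) / n
std = math.sqrt(sum((x - mean) ** 2 for x in annual_maxima) / (n - 1))

# Method-of-moments Gumbel parameters.
beta = std * math.sqrt(6) / math.pi   # scale
mu = mean - 0.5772 * beta             # location (Euler-Mascheroni constant)

def return_level(T):
    """Flow exceeded on average once every T years (inverse Gumbel CDF)."""
    return mu - beta * math.log(-math.log(1.0 - 1.0 / T))

for T in (2, 100, 500):
    print(f"{T:>3}-yr return level: {return_level(T):.0f} m^3/s")
```

Evaluating `return_level` on each grid cell's fitted parameters, for return periods from 2 to 500 yr, yields return-period maps of the kind the study presents.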
Abstract:
The aim of this work was to investigate the aggregation behavior of lipopeptides in single and mixed solutions over a wide range of concentrations, in order to optimize their separation and purification by the two-step ultrafiltration process using large pore size membranes (up to MWCO = 300 kDa). Micelle size was determined by dynamic light scattering. In single solutions, both surfactin and mycosubtilin formed micelles whose size depended on concentration: average diameter 5–105 nm for surfactin and 8–18 nm for mycosubtilin. However, when the two lipopeptides were in the same solution they formed mixed micelles differing in size (d = 8 nm), and probably in conformation, from those formed by the individual lipopeptides, which prevented their separation according to size. These lipopeptides were purified from fermentation culture by the two-step ultrafiltration process using membranes of different MWCO ranging from 10 to 300 kDa. This led to their effective rejection in the first ultrafiltration step by membranes with MWCO = 10–100 kDa, but poor rejection by the 300 kDa membrane. The lipopeptides were recovered at 90% purity (relative to protein) and with 2.34-fold enrichment in the permeate of the second ultrafiltration step with the 100 kDa membrane upon addition of 75% ethanol.
Abstract:
A statistical–dynamical downscaling (SDD) approach is applied to determine present day and future high-resolution rainfall distributions in the catchment of the river Aksu at the southern slopes of the Tienshan Mountains, Central Asia. First, a circulation weather type (CWT) classification is employed to define typical lower atmospheric flow regimes from ERA-40 reanalysis data. Selected representatives of each CWT are dynamically downscaled with the regional climate model COSMO-CLM 4.8 at a horizontal grid resolution of 0.0625°, using the ERA-40 reanalysis data as boundary conditions. Finally, the simulated representatives are recombined to obtain a high-resolution rainfall climatology for present day climate. The methodology is also applied to ensemble simulations of three different scenarios of the global climate model ECHAM5/MPI-OM1 to derive projections of rainfall changes until 2100. Comparisons of downscaled seasonal and annual rainfall with observational data suggest that the statistical–dynamical approach is appropriate to capture the observed present-day precipitation climatology over the low lands and the first elevations of the Tienshan Mountains. On the other hand, a strong bias is found at higher altitudes, where precipitation is clearly underestimated by SDD. The application of SDD to the ECHAM5/MPI-OM1 ensemble reveals that precipitation changes by the end of the 21st century depend on the season. While for autumn an increase of seasonal precipitation is found for all simulations, a decrease in precipitation is obtained during winter for most parts of the Aksu catchment. The spread between different ECHAM5/MPI-OM1 ensemble members is strongest in spring, where trends of opposite sign are found. The largest changes in rainfall are simulated for the summer season, which also shows the most pronounced spatial heterogeneity. 
Most ECHAM5/MPI-OM1 realizations indicate a decrease of annual precipitation over large parts of the Tienshan, and an increase restricted to the southeast of the study area. These results provide a good basis for downscaling present-day and future rainfall distributions for hydrological purposes.
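The recombination step of the statistical-dynamical downscaling described above can be illustrated schematically: each circulation weather type (CWT) has one dynamically downscaled representative rainfall field, and the climatology is their frequency-weighted sum. The CWT names, frequencies, and fields below are invented placeholders; this sketches the recombination idea only, not the COSMO-CLM setup or the actual CWT catalogue.

```python
import numpy as np

rng = np.random.default_rng(7)

# One downscaled representative rainfall field (mm/day) per invented CWT,
# on a toy 5x5 grid standing in for the high-resolution domain.
cwt_fields = {
    "west": rng.uniform(0, 8, size=(5, 5)),
    "north": rng.uniform(0, 4, size=(5, 5)),
    "anticyclonic": rng.uniform(0, 1, size=(5, 5)),
}

# Occurrence frequency of each CWT, as would be counted from reanalysis.
cwt_freq = {"west": 0.45, "north": 0.30, "anticyclonic": 0.25}

# Frequency-weighted recombination into a mean rainfall climatology.
climatology = sum(cwt_freq[k] * cwt_fields[k] for k in cwt_fields)

print("domain-mean rainfall (mm/day):", round(float(climatology.mean()), 2))
```

Applying the same recombination with CWT frequencies taken from a scenario simulation instead of reanalysis is what turns the method into a projection tool.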
Abstract:
Quantitative simulations of the global-scale benefits of climate change mitigation are presented, using a harmonised, self-consistent approach based on a single set of climate change scenarios. The approach draws on a synthesis of output from both physically-based and economics-based models, and incorporates uncertainty analyses. Previous studies have projected global and regional climate change and its impacts over the 21st century but have generally focused on analysis of business-as-usual scenarios, with no explicit mitigation policy included. This study finds that both the economics-based and physically-based models indicate that early, stringent mitigation would avoid a large proportion of the impacts of climate change projected for the 2080s. However, it also shows that not all the impacts can now be avoided, so that adaptation would also therefore be needed to avoid some of the potential damage. Delay in mitigation substantially reduces the percentage of impacts that can be avoided, providing strong new quantitative evidence for the need for stringent and prompt global mitigation action on greenhouse gas emissions, combined with effective adaptation, if large, widespread climate change impacts are to be avoided. Energy technology models suggest that such stringent and prompt mitigation action is technologically feasible, although the estimated costs vary depending on the specific modelling approach and assumptions.