978 results for IRRADIANCE PREDICTIONS
Abstract:
Enhanced release of CO2 to the atmosphere from soil organic carbon as a result of increased temperatures may lead to a positive feedback between climate change and the carbon cycle, resulting in much higher CO2 levels and accelerated global warming. However, the magnitude of this effect is uncertain and critically dependent on how the decomposition of soil organic C (heterotrophic respiration) responds to changes in climate. Previous studies with the Hadley Centre’s coupled climate–carbon cycle general circulation model (GCM) (HadCM3LC) used a simple, single-pool soil carbon model to simulate the response. Here we present results from numerical simulations that use the more sophisticated ‘RothC’ multipool soil carbon model, driven with the same climate data. The results show strong similarities in the behaviour of the two models, although RothC tends to simulate slightly smaller changes in global soil carbon stocks for the same forcing: in a climate change simulation, RothC simulates global soil carbon stocks decreasing by 54 GtC by 2100, compared with an 80 GtC decrease in HadCM3LC. The multipool carbon dynamics of RothC cause it to exhibit a slower and smaller transient response to both increased organic carbon inputs and changes in climate. We conclude that the projection of a positive feedback between climate and carbon cycle is robust, but the magnitude of the feedback is dependent on the structure of the soil carbon model.
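To make the single-pool versus multipool distinction concrete, the sketch below integrates first-order decay equations dC_i/dt = input_i − r(T)·k_i·C_i for one aggregate pool and for a fast/slow pair. The pool sizes, turnover rates, inputs and Q10 temperature response are illustrative assumptions, not the published RothC or HadCM3LC parameter values.

```python
import numpy as np

def run_pools(k, c0, inputs, dT, years=100, q10=2.0):
    """Integrate dC_i/dt = input_i - r(T) * k_i * C_i with 1-year Euler steps.

    The Q10 temperature response and all parameters are illustrative
    placeholders, not the published RothC parameterization.
    """
    c = np.asarray(c0, dtype=float)
    rate_mod = q10 ** (dT / 10.0)            # warming speeds decomposition
    for _ in range(years):
        c = c + inputs - rate_mod * k * c
    return c

warming = 3.0                                 # assumed warming by 2100 (K)

# Single-pool model: one aggregate stock, one turnover rate.
single = run_pools(np.array([0.02]), [1500.0], np.array([30.0]), warming)

# Multipool model: fast and slow pools sharing the same total input.
multi = run_pools(np.array([0.3, 0.005]), [100.0, 1400.0],
                  np.array([24.0, 6.0]), warming)

print(f"single-pool stock after warming: {single.sum():.0f}")
print(f"multipool stock after warming:   {multi.sum():.0f}")
```

Because the slow pool turns over on centennial timescales, the multipool stock responds more gradually to the same forcing, consistent with the damped transient response described above.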
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, although the most successful method depends on the region considered, the GCM data used and the prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far North Atlantic, suggesting that the more northern latitudes are the optimal locations for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
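As a flavour of the most skilful of these methods at long lead times, the constructed analogue approach expresses the current anomaly field as a least-squares combination of past fields and applies the same weights to their observed evolution. The sketch below is a minimal, generic version on synthetic data; the library size, grid and lead time are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "library" of past SST anomaly fields: n_years x n_gridpoints.
library = rng.standard_normal((60, 40))

def constructed_analogue(library, current_state, lead):
    """Predict the state `lead` steps ahead of `current_state`.

    The weights w solve min ||A.T w - current_state|| over library states
    that have a known successor; the same weights are then applied to
    those successor states.
    """
    A = library[:-lead]                  # candidate analogue states
    successors = library[lead:]          # their evolution `lead` steps later
    w, *_ = np.linalg.lstsq(A.T, current_state, rcond=None)
    return successors.T @ w

current = rng.standard_normal(40)
forecast = constructed_analogue(library, current, lead=5)
print(forecast[:5])
```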
Abstract:
A detailed analysis is presented of solar UV spectral irradiance for the period between May 2003 and August 2005, when data are available from both the Solar Ultraviolet Spectral Irradiance Monitor (SUSIM) instrument (on board the Upper Atmosphere Research Satellite (UARS) spacecraft) and the Solar Stellar Irradiance Comparison Experiment (SOLSTICE) instrument (on board the Solar Radiation and Climate Experiment (SORCE) satellite). The ultimate aim is to develop a data composite that can be used to accurately determine any differences between the “exceptional” solar minimum at the end of solar cycle 23 and the previous minimum at the end of solar cycle 22 without having to rely on proxy data to set the long-term change. SUSIM data are studied because they are the only data available in the “SOLSTICE gap” between the end of available UARS SOLSTICE data and the start of the SORCE data. At any one wavelength the two data sets are considered too dissimilar to be combined into a meaningful composite if any one of three correlations does not exceed a threshold of 0.8. This criterion removes all wavelengths except those in a small range between 156 nm and 208 nm, the longer wavelengths of which influence ozone production and heating in the lower stratosphere. Eight different methods are employed to intercalibrate the two data sequences. All methods give smaller changes between the minima than are seen when the data are not adjusted; however, correcting the SUSIM data to allow for an exponentially decaying offset drift gives a composite that is largely consistent with the unadjusted data from the SOLSTICE instruments on both UARS and SORCE and in which the recent minimum is consistently lower in the wave band studied.
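One of the intercalibration strategies described, correcting for an exponentially decaying offset drift, can be sketched as a simple curve fit between overlapping instrument records. The series, drift amplitude and decay time below are synthetic assumptions used only to illustrate the screening (correlation > 0.8) and correction steps.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)
t = np.arange(800.0)                                 # days of overlap

# Synthetic stand-ins for the two records at one wavelength (not real data).
solstice = 1.0 + 0.01 * np.sin(2 * np.pi * t / 27)   # 27-day rotational signal
susim = solstice + 0.01 * np.exp(-t / 300) + 0.001 * rng.standard_normal(t.size)

# Screening step: only combine records that are sufficiently similar.
if np.corrcoef(susim, solstice)[0, 1] > 0.8:
    def drift(t, a, tau):
        return a * np.exp(-t / tau)                  # decaying offset model

    (a, tau), _ = curve_fit(drift, t, susim - solstice, p0=(0.02, 200.0))
    susim_corrected = susim - drift(t, a, tau)       # adjusted composite input
    print(f"fitted offset a = {a:.4f}, decay time tau = {tau:.0f} days")
```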
Abstract:
The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall’s τ, Spearman’s ρ and Pearson’s r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD, when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality, in the absence of experimental data.
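For readers unfamiliar with the assessment metric, the MCC rewards agreement between predicted and observed binding-site residue sets while penalizing both false positives and false negatives. Below is a minimal sketch of the MCC computed from two boolean residue masks; the protein length and residue indices are purely illustrative.

```python
import numpy as np

def mcc(predicted, observed):
    """Matthews Correlation Coefficient for binary residue predictions."""
    tp = np.sum(predicted & observed)        # correctly predicted site residues
    tn = np.sum(~predicted & ~observed)      # correctly predicted non-site
    fp = np.sum(predicted & ~observed)       # false positives
    fn = np.sum(~predicted & observed)       # false negatives
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical 100-residue protein: True marks binding-site residues.
observed = np.zeros(100, dtype=bool); observed[[10, 11, 12, 45, 46]] = True
predicted = np.zeros(100, dtype=bool); predicted[[10, 11, 44, 45, 46]] = True
print(f"MCC = {mcc(predicted, observed):.2f}")
```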
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, nor is there any agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
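The first question in the framework is typically addressed with deterministic scores such as a mean squared error skill score (MSSS) of the initialized hindcasts against the uninitialized projections. The sketch below shows the form of such a comparison on synthetic anomaly series; it omits the drift adjustment and significance testing a real CMIP5 verification would require.

```python
import numpy as np

rng = np.random.default_rng(2)
obs = rng.standard_normal(46)                    # observed anomalies (synthetic)
init = obs + 0.5 * rng.standard_normal(46)       # initialized hindcasts
uninit = 0.3 * obs + rng.standard_normal(46)     # uninitialized projections

def msss(forecast, reference, obs):
    """Mean squared error skill score of `forecast` relative to `reference`.

    Positive values mean the forecast beats the reference; zero means no
    added skill.
    """
    mse_f = np.mean((forecast - obs) ** 2)
    mse_r = np.mean((reference - obs) ** 2)
    return 1.0 - mse_f / mse_r

print(f"MSSS (initialized vs uninitialized): {msss(init, uninit, obs):.2f}")
```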
Abstract:
Accurate decadal climate predictions could be used to inform adaptation actions to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparing with predictions from statistical models which are based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale based on the gridded observation data set HadISST. For both statistical models, the trend related to radiative forcing is modelled using a linear regression of the SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and western Pacific. This is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skillful predictions of the internal variability of SSTs in the subpolar gyre region over the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts by DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and methods to combine different predictions.
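A minimal sketch of the trend-plus-AR1 benchmark for a single grid box is given below: regress SST on equivalent CO2 to estimate the forced trend, fit an AR1 model to the residuals, and damp the last residual towards zero over the forecast leads. The SST and CO2 series here are synthetic, and the assumed future CO2 pathway is an illustrative placeholder.

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1960, 2006)
co2 = 315.0 + 1.5 * (years - 1960)               # stand-in equivalent CO2 (ppm)
sst = 0.01 * co2 + 0.1 * rng.standard_normal(years.size)  # synthetic grid-box SST

# 1. Forced trend: linear regression of SST on equivalent CO2.
slope, intercept = np.polyfit(co2, sst, 1)
residual = sst - (slope * co2 + intercept)

# 2. Internal variability: first-order autoregressive (AR1) model,
#    with phi estimated as the lag-1 autocorrelation of the residuals.
phi = np.corrcoef(residual[:-1], residual[1:])[0, 1]

# Forecast lead times 1..10 from the last observed year.
leads = np.arange(1, 11)
co2_future = co2[-1] + 1.5 * leads               # assumed future CO2 pathway
forecast = slope * co2_future + intercept + residual[-1] * phi ** leads
print(np.round(forecast, 2))
```

The CA variant described in the abstract would replace the AR1 damping with a constructed analogue forecast of the residual field.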
Abstract:
In the mid-1990s the North Atlantic subpolar gyre (SPG) warmed rapidly, with sea surface temperatures (SSTs) increasing by 1°C in just a few years. By examining initialized hindcasts made with the UK Met Office Decadal Prediction System (DePreSys), it is shown that the warming could have been predicted. Conversely, hindcasts that only consider changes in radiative forcings are not able to capture the rapid warming. Heat budget analysis shows that the success of the DePreSys hindcasts is due to the initialization of anomalously strong northward ocean heat transport. Furthermore, it is found that initializing a strong Atlantic circulation, and in particular a strong Atlantic Meridional Overturning Circulation, is key for successful predictions. Finally, we show that DePreSys is able to predict significant changes in SST and other surface climate variables related to the North Atlantic warming.
Abstract:
Early and effective flood warning is essential to initiate timely measures to reduce loss of life and economic damage. The availability of several global ensemble weather prediction systems through the “THORPEX Interactive Grand Global Ensemble” (TIGGE) archive provides an opportunity to explore new dimensions in early flood forecasting and warning. TIGGE data have been used as meteorological input to the European Flood Alert System (EFAS) for a case study of a flood event in Romania in October 2007. Results illustrate that awareness of this flood could have been raised as early as 8 days before the event, and show how the subsequent forecasts provided increasing insight into the range of possible flood conditions. This first assessment of one flood event illustrates the potential value of the TIGGE archive and the grand-ensemble approach to raise preparedness and thus reduce the socio-economic impact of floods.
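The operational value of a grand ensemble for early warning comes from exceedance probabilities: the fraction of members that push river discharge above an alert threshold at each lead time. The sketch below illustrates this on a synthetic 51-member discharge ensemble; the threshold, member count and alert rule are assumptions, not EFAS settings.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic ensemble of forecast river discharge (m^3/s): 51 members x 10 days,
# modelled here as a drifting random walk purely for illustration.
members = 400 + 80 * rng.standard_normal((51, 10)).cumsum(axis=1)

alert_threshold = 600.0                  # hypothetical flood-alert discharge
p_exceed = (members > alert_threshold).mean(axis=0)

for day, p in enumerate(p_exceed, start=1):
    flag = "ALERT" if p > 0.3 else ""    # hypothetical warning criterion
    print(f"day {day:2d}: P(exceed) = {p:.2f} {flag}")
```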
Abstract:
In order to achieve sustainability it is necessary to balance the interactions between the built and natural environment. Biodiversity plays an important part in sustainability within the built environment, especially as the construction industry comes under increasing pressure to take ecological concerns into account. Bats constitute an important component of urban biodiversity, and several species are now highly dependent on buildings, making them particularly vulnerable to anthropogenic and environmental changes. As many buildings suitable for use as bat roosts age, they often require re-roofing, and traditional bituminous roofing felts are frequently being replaced with breathable roofing membranes (BRMs), which are designed to reduce condensation. Whilst the current position of bats is better in many respects than 30 years ago, new building regulations and modern materials may substantially reduce the viability of existing roosts. At the same time, building regulations require that materials be fit for purpose, and with anecdotal evidence that both bats and BRMs may experience problems when the two interact, it is important to know what roost characteristics are essential for house-dwelling bats and how these and BRMs may be affected. This paper reviews current literature and knowledge and considers the possible ways in which bats and BRMs may interact, how this could affect existing bat roosts within buildings, and the implications for BRM service life predictions and warranties. It concludes that in order for the construction and conservation sectors to work together in solving this issue, a set of clear guidelines should be developed for use at a national level.
Abstract:
This paper evaluates the relationship between the cloud modification factor (CMF) in the ultraviolet erythemal range and the cloud optical depth (COD) retrieved from the Aerosol Robotic Network (AERONET) "cloud mode" algorithm under overcast conditions (confirmed with sky images) at Granada, Spain, mainly for non-precipitating, overcast and relatively homogeneous water clouds. The empirical CMF showed a clear exponential dependence on experimental COD values, decreasing from approximately 0.7 at COD = 10 to 0.25 at COD = 50. In addition, these COD measurements were used as input to the LibRadtran radiative transfer code, allowing the simulation of CMF values for the selected overcast cases. The modeled CMF exhibited a dependence on COD similar to the empirical CMF, but the modeled values substantially underestimated the empirical factors (mean bias of 22 %). To explain this high bias, an exhaustive comparison between modeled and experimental UV erythemal irradiance (UVER) data was performed. The comparison revealed that the radiative transfer simulations were 8 % higher than the observations for clear-sky conditions. The rest of the bias (~14 %) may be attributed to the substantial underestimation of modeled UVER with respect to experimental UVER under overcast conditions, although the correlation between the two datasets was high (R² ≈ 0.93). A sensitivity test showed that the main factor responsible for this underestimation is the experimental AERONET COD used as input in the simulations, which is retrieved from zenith radiances in the visible range. Accordingly, effective COD values in the erythemal interval were derived from an iterative procedure that searches for the best match between modeled and experimental UVER values for each selected overcast case. These effective COD values were smaller than the AERONET COD data in about 80 % of the overcast cases, with a mean relative difference of 22 %.
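The reported exponential dependence can be illustrated by fitting CMF = a·exp(−b·COD) to points spanning the quoted range (about 0.7 at COD = 10 down to 0.25 at COD = 50). The data points below are illustrative values consistent with those endpoints, not the Granada measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (COD, CMF) pairs consistent with the reported range;
# not the Granada observations themselves.
cod = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
cmf = np.array([0.70, 0.54, 0.42, 0.32, 0.25])

def model(cod, a, b):
    return a * np.exp(-b * cod)          # exponential cloud modification

(a, b), _ = curve_fit(model, cod, cmf, p0=(0.9, 0.02))
print(f"CMF ≈ {a:.2f} · exp(-{b:.3f} · COD)")
print(f"predicted CMF at COD = 25: {model(25.0, a, b):.2f}")
```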
Abstract:
We present the first climate prediction of the coming decade made with multiple models, initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time, in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and individually have been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation, and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific, and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012 but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly by about 0.5 °C over the coming decade. However, uncertainties are large for individual years and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50 % chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding and knowledge of the drivers of climate change.
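The skill quoted for 2011 (pattern correlation 0.62 versus 0.31) compares forecast and observed anomaly maps. A common implementation is the centred, area-weighted pattern correlation sketched below on synthetic fields; the grid, weighting and data are assumptions rather than the study's exact verification setup.

```python
import numpy as np

def pattern_correlation(forecast, observed, lat):
    """Centred pattern correlation with cos(latitude) area weights."""
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(forecast)
    f = forecast - np.average(forecast, weights=w)   # remove weighted means
    o = observed - np.average(observed, weights=w)
    cov = np.average(f * o, weights=w)
    return cov / np.sqrt(np.average(f**2, weights=w) *
                         np.average(o**2, weights=w))

rng = np.random.default_rng(5)
lat = np.linspace(-87.5, 87.5, 36)                   # 5-degree grid (assumed)
obs = rng.standard_normal((36, 72))                  # synthetic observed map
fcst = obs + rng.standard_normal((36, 72))           # imperfect forecast map
print(f"pattern correlation: {pattern_correlation(fcst, obs, lat):.2f}")
```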
Abstract:
The paper reports a study that investigated the relationship between students’ self-predicted and actual General Certificate of Secondary Education results in order to establish the extent of over- and under-prediction and whether this varies by subject and across genders and socio-economic groupings. It also considered the relationship between actual and predicted attainment and attitudes towards going to university. The sample consisted of 109 young people in two schools being followed up from an earlier study. Just over 50% of predictions were accurate and students were much more likely to over-predict than to under-predict. Most errors of prediction were only one grade out and may reflect examination unreliability as well as student misperceptions. Girls were slightly less likely than boys to over-predict but there were no differences associated with social background. Higher levels of attainment, both actual and predicted, were strongly associated with positive attitudes to university. Differences between predictions and results are likely to reflect examination errors as well as pupil errors. There is no evidence that students from more advantaged social backgrounds over-estimate themselves compared with other students, although boys over-estimate themselves compared with girls.