Abstract:
The estimation of prediction quality is important because, without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue prediction are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, using the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, assessing ligand binding site predictions with such metrics requires solved structures with bound ligands. We have therefore developed a ligand binding site quality assessment tool, FunFOLDQA, which uses protein feature analysis to predict ligand binding site quality before the protein structures and their ligand interactions are solved experimentally. The FunFOLDQA feature scores were combined using simple linear combinations, multiple linear regression, and a neural network. The neural network produced significantly better correlations with both the MCC and BDT scores, according to Kendall's τ, Spearman's ρ and Pearson's r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. It also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can assess ligand binding site prediction quality in the absence of experimental data.
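As a rough illustration of the score-combination step, the sketch below uses hypothetical feature scores and a toy MCC target, not the published FunFOLDQA pipeline: five binding-site feature scores are combined by a simple linear combination and by a small neural network, and each combined score is correlated with the target using Kendall's τ, Spearman's ρ and Pearson's r.

```python
# Hypothetical sketch of combining feature scores into one quality estimate
# and correlating it with MCC; features and data are invented.
import numpy as np
from scipy.stats import kendalltau, spearmanr, pearsonr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 5))  # 5 hypothetical binding-site feature scores
mcc = X @ [0.3, 0.2, 0.2, 0.2, 0.1] + 0.05 * rng.standard_normal(200)  # toy target

# Simple linear combination (equal weights) vs a small neural network.
linear_pred = X.mean(axis=1)
nn = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
nn_pred = nn.fit(X[:150], mcc[:150]).predict(X[150:])

for name, pred, obs in [("linear", linear_pred[150:], mcc[150:]),
                        ("neural net", nn_pred, mcc[150:])]:
    print(name, kendalltau(pred, obs)[0], spearmanr(pred, obs)[0],
          pearsonr(pred, obs)[0])
```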
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill, and there is no agreed protocol for estimating it. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to uninitialized climate change projections? and (2) Is the prediction model's ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts, comparing different ways of ascribing forecast uncertainty. Verification is advocated at smoothed regional scales, which can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on the skill of CMIP5 decadal hindcasts is not the aim of this paper; the results presented only illustrate the framework, which would enable such studies. However, broad conclusions beginning to emerge from the CMIP5 results include: (1) most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than uninitialized climate change projections provide; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted as a starting point for comparing prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. It also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how best to approach forecast uncertainty.
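To make question (1) concrete, the following minimal sketch (synthetic data, not the paper's code) computes one common deterministic comparison: a mean squared skill score of initialized hindcasts measured against uninitialized projections at a single grid point.

```python
# Illustrative deterministic skill comparison at one grid point;
# all series are synthetic stand-ins for hindcasts and observations.
import numpy as np

rng = np.random.default_rng(1)
obs  = rng.standard_normal(46)                    # observed anomalies, 46 start years
init = obs + 0.5 * rng.standard_normal(46)        # initialized hindcast ensemble means
unin = 0.3 * obs + 0.9 * rng.standard_normal(46)  # uninitialized projections

def msss(forecast, reference, observed):
    """Mean squared skill score: 1 - MSE(forecast) / MSE(reference).
    Positive values mean the initialized hindcasts beat the baseline."""
    mse_f = np.mean((forecast - observed) ** 2)
    mse_r = np.mean((reference - observed) ** 2)
    return 1.0 - mse_f / mse_r

print(f"MSSS (initialized vs uninitialized): {msss(init, unin, obs):.2f}")
```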
Abstract:
Accurate decadal climate predictions could be used to inform adaptation to a changing climate. The skill of such predictions from initialised dynamical global climate models (GCMs) may be assessed by comparison with predictions from statistical models based solely on historical observations. This paper presents two benchmark statistical models for predicting both the radiatively forced trend and the internal variability of annual mean sea surface temperatures (SSTs) on a decadal timescale, based on the gridded observational data set HadISST. For both statistical models, the trend related to radiative forcing is modelled by a linear regression of the SST time series at each grid box on the time series of equivalent global mean atmospheric CO2 concentration. The residual internal variability is then modelled by (1) a first-order autoregressive model (AR1) and (2) a constructed analogue model (CA). From the verification of 46 retrospective forecasts with start years from 1960 to 2005, the correlation coefficient for anomaly forecasts using trend with AR1 is greater than 0.7 over parts of the extra-tropical North Atlantic, the Indian Ocean and the western Pacific; this is primarily related to the prediction of the forced trend. More importantly, both CA and AR1 give skilful predictions of the internal variability of SSTs in the subpolar gyre region of the far North Atlantic for lead times of 2 to 5 years, with correlation coefficients greater than 0.5. For the subpolar gyre and parts of the South Atlantic, CA is superior to AR1 for lead times of 6 to 9 years. These statistical forecasts are also compared with ensemble mean retrospective forecasts from DePreSys, an initialised GCM. DePreSys is found to outperform the statistical models over large parts of the North Atlantic for lead times of 2 to 5 years and 6 to 9 years; however, trend with AR1 is generally superior to DePreSys in the North Atlantic Current region, while trend with CA is superior to DePreSys in parts of the South Atlantic for lead times of 6 to 9 years. These findings encourage further development of benchmark statistical decadal prediction models, and of methods to combine different predictions.
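The two-step construction lends itself to a compact sketch. Assuming a single grid box and synthetic data in place of HadISST, the forced trend is obtained by regressing SST on an equivalent CO2 series, and the residual internal variability is forecast with an AR1 term that damps toward zero with lead time.

```python
# Minimal sketch of the trend + AR1 benchmark at one grid box;
# the CO2 and SST series are synthetic, not HadISST.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1960, 2006)
co2 = 315.0 * np.exp(0.004 * (years - 1960))   # hypothetical equivalent CO2 (ppm)
sst = 0.003 * co2 + 0.2 * rng.standard_normal(years.size)

# Step 1: forced trend via linear regression of SST on CO2.
slope, intercept = np.polyfit(co2, sst, 1)
resid = sst - (slope * co2 + intercept)

# Step 2: AR1 model of the residual internal variability.
phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]  # lag-1 autocorrelation

def forecast(co2_future, last_resid, lead):
    """Regression trend plus AR1-damped residual at `lead` years ahead."""
    return slope * co2_future + intercept + (phi ** lead) * last_resid

for lead in (2, 5, 9):
    print(lead, forecast(co2[-1] * np.exp(0.004 * lead), resid[-1], lead))
```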
Abstract:
In the mid-1990s the North Atlantic subpolar gyre (SPG) warmed rapidly, with sea surface temperatures (SSTs) increasing by 1°C in just a few years. By examining initialized hindcasts made with the UK Met Office Decadal Prediction System (DePreSys), it is shown that the warming could have been predicted. Conversely, hindcasts that consider only changes in radiative forcing are not able to capture the rapid warming. Heat budget analysis shows that the success of the DePreSys hindcasts is due to the initialization of anomalously strong northward ocean heat transport. Furthermore, initializing a strong Atlantic circulation, and in particular a strong Atlantic Meridional Overturning Circulation, is found to be key for successful predictions. Finally, we show that DePreSys is able to predict significant changes in SST and other surface climate variables related to the North Atlantic warming.
Abstract:
Early and effective flood warning is essential to initiate timely measures to reduce loss of life and economic damage. The availability of several global ensemble weather prediction systems through the "THORPEX Interactive Grand Global Ensemble" (TIGGE) archive provides an opportunity to explore new dimensions in early flood forecasting and warning. TIGGE data were used as meteorological input to the European Flood Alert System (EFAS) for a case study of a flood event in Romania in October 2007. Results illustrate that awareness of this flood could have been raised as early as 8 days before the event, and that the subsequent forecasts provided increasing insight into the range of possible flood conditions. This first assessment of a single flood event illustrates the potential value of the TIGGE archive and of the grand-ensemble approach for raising preparedness and thus reducing the socio-economic impact of floods.
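As an illustration of the grand-ensemble idea (the numbers below are invented, not EFAS output), the probability of exceeding a flood threshold can be tracked across successive forecasts as the event approaches.

```python
# Hedged illustration: fraction of ensemble members whose simulated peak
# discharge exceeds a flood threshold, per forecast issue day.
import numpy as np

rng = np.random.default_rng(3)
threshold = 500.0  # hypothetical alert discharge, m3/s
# rows: forecasts issued 8..1 days before the event; columns: 50 members
peak_discharge = rng.normal(loc=np.linspace(420, 560, 8)[:, None],
                            scale=60.0, size=(8, 50))

for days_before, members in zip(range(8, 0, -1), peak_discharge):
    p_exceed = np.mean(members > threshold)
    print(f"{days_before} days before event: P(flood) = {p_exceed:.0%}")
```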
Abstract:
In order to achieve sustainability, it is necessary to balance the interactions between the built and natural environments. Biodiversity plays an important part in sustainability within the built environment, especially as the construction industry comes under increasing pressure to take ecological concerns into account. Bats constitute an important component of urban biodiversity, and several species are now highly dependent on buildings, making them particularly vulnerable to anthropogenic and environmental changes. As many buildings suitable for use as bat roosts age, they often require re-roofing, and traditional bituminous roofing felts are frequently being replaced with breathable roofing membranes (BRMs), which are designed to reduce condensation. Whilst the current position of bats is better in many respects than it was 30 years ago, new building regulations and modern materials may substantially reduce the viability of existing roosts. At the same time, building regulations require that materials be fit for purpose, and with anecdotal evidence that both bats and BRMs may experience problems when the two interact, it is important to know which roost characteristics are essential for house-dwelling bats and how these and BRMs may be affected. This paper reviews the current literature and considers the possible ways in which bats and BRMs may interact, how this could affect existing bat roosts within buildings, and the implications for BRM service life predictions and warranties. It concludes that, in order for the construction and conservation sectors to work together in solving this issue, a set of clear guidelines should be developed for use at a national level.
Abstract:
We present the first climate prediction of the coming decade made with multiple models initialized with prior observations. This prediction accrues from an international activity to exchange decadal predictions in near real-time in order to assess differences and similarities, provide a consensus view to prevent over-confidence in forecasts from any single model, and establish current collective capability. We stress that the forecast is experimental, since the skill of the multi-model system is as yet unknown. Nevertheless, the forecast systems used here are based on models that have undergone rigorous evaluation and have individually been evaluated for forecast skill. Moreover, it is important to publish forecasts to enable open evaluation and to provide a focus on climate change in the coming decade. Initialized forecasts of the year 2011 agree well with observations, with a pattern correlation of 0.62 compared to 0.31 for uninitialized projections. In particular, the forecast correctly predicted La Niña in the Pacific and warm conditions in the north Atlantic and USA. A similar pattern is predicted for 2012, but with a weaker La Niña. Indices of Atlantic multi-decadal variability and Pacific decadal variability show no signal beyond climatology after 2015, while temperature in the Niño3 region is predicted to warm slightly, by about 0.5 °C, over the coming decade. However, uncertainties are large for individual years, and initialization has little impact beyond the first 4 years in most regions. Relative to uninitialized forecasts, initialized forecasts are significantly warmer in the north Atlantic sub-polar gyre and cooler in the north Pacific throughout the decade. They are also significantly cooler in the global average and over most land and ocean regions out to several years ahead. However, in the absence of volcanic eruptions, global temperature is predicted to continue to rise, with each year from 2013 onwards having a 50% chance of exceeding the current observed record. Verification of these forecasts will provide an important opportunity to test the performance of models and our understanding of the drivers of climate change.
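The pattern correlation quoted above (0.62 versus 0.31) is, in essence, a centred, area-weighted correlation between forecast and observed anomaly maps. A minimal sketch on synthetic fields:

```python
# Area-weighted centred pattern correlation on a latitude-longitude grid;
# the anomaly fields here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(4)
lats = np.linspace(-87.5, 87.5, 72)
w = np.cos(np.deg2rad(lats))[:, None] * np.ones((72, 144))  # area weights

obs = rng.standard_normal((72, 144))
fc  = 0.6 * obs + 0.8 * rng.standard_normal((72, 144))

def pattern_corr(a, b, w):
    """Area-weighted centred correlation between two anomaly maps."""
    am = a - np.average(a, weights=w)
    bm = b - np.average(b, weights=w)
    cov = np.average(am * bm, weights=w)
    return cov / np.sqrt(np.average(am**2, weights=w) *
                         np.average(bm**2, weights=w))

print(f"pattern correlation: {pattern_corr(fc, obs, w):.2f}")
```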
Abstract:
The paper reports a study that investigated the relationship between students’ self-predicted and actual General Certificate of Secondary Education results in order to establish the extent of over- and under-prediction and whether this varies by subject and across genders and socio-economic groupings. It also considered the relationship between actual and predicted attainment and attitudes towards going to university. The sample consisted of 109 young people in two schools being followed up from an earlier study. Just over 50% of predictions were accurate and students were much more likely to over-predict than to under-predict. Most errors of prediction were only one grade out and may reflect examination unreliability as well as student misperceptions. Girls were slightly less likely than boys to over-predict but there were no differences associated with social background. Higher levels of attainment, both actual and predicted, were strongly associated with positive attitudes to university. Differences between predictions and results are likely to reflect examination errors as well as pupil errors. There is no evidence that students from more advantaged social backgrounds over-estimate themselves compared with other students, although boys over-estimate themselves compared with girls.
Abstract:
Although ensemble prediction systems (EPS) are increasingly promoted as the scientific state of the art for operational flood forecasting, the communication, perception, and use of the resulting alerts have received much less attention. Using a variety of qualitative research methods, including direct user feedback at training workshops, participant observation during site visits to 25 forecasting centres across Europe, and in-depth interviews with 69 forecasters, civil protection officials, and policy makers involved in operational flood risk management in 17 European countries, this article discusses the perception, communication, and use of European Flood Alert System (EFAS) alerts in operational flood management. In particular, the article describes how the design of EFAS alerts has evolved in response to user feedback and the desire for a hydrograph-like way of visualizing EFAS outputs. It also documents a variety of forecaster perceptions about the value and skill of EFAS forecasts and the best way of using them to inform operational decision making. EFAS flood alerts were generally welcomed by flood forecasters as a kind of 'pre-alert' spurring greater internal vigilance. In most cases, however, they did not lead, by themselves, to further preparatory action or to earlier warnings to the public or emergency services. Forecasters' hesitancy to act in response to medium-term, probabilistic alerts highlights wider institutional obstacles to the research community's hope that EPS will be readily embraced by operational forecasters and lead to immediate improvements in flood incident management. The EFAS experience offers lessons for other hydrological services seeking to implement EPS operationally for flood forecasting and warning.
Abstract:
Brain activity can be measured non-invasively with functional imaging techniques. Each pixel in such an image represents a neural mass of about 10^5 to 10^7 neurons. Mean field models (MFMs) approximate their activity by averaging out neural variability while retaining salient underlying features, like neurotransmitter kinetics. However, MFMs incorporating the regional variability, realistic geometry and connectivity of cortex have so far appeared intractable. This lack of biological realism has led to a focus on gross temporal features of the EEG. We address these impediments and showcase a "proof of principle" forward prediction of co-registered EEG/fMRI for a full-size human cortex in a realistic head model with anatomical connectivity (see figure 1). MFMs usually assume homogeneous neural masses, isotropic long-range connectivity and simplistic signal expression to allow rapid computation with partial differential equations. These approximations are insufficient, in particular, for the high spatial resolution obtained with fMRI, since different cortical areas vary in their architectonic and dynamical properties, have complex connectivity, and can contribute non-trivially to the measured signal. Our code instead supports the local variation of model parameters and freely chosen connectivity for many thousands of triangulation nodes spanning a cortical surface extracted from structural MRI. This allows the introduction of realistic anatomical and physiological parameters for cortical areas and their connectivity, including both intra- and inter-area connections. Proper cortical folding and conduction through a realistic head model are then added to obtain accurate signal expression for comparison with experimental data. To showcase the synergy of these computational developments, we simultaneously predict EEG and fMRI BOLD responses by adding an established model for neurovascular coupling and convolving "Balloon-Windkessel" hemodynamics. We also incorporate regional connectivity extracted from the CoCoMac database [1]. Importantly, these extensions can be easily adapted in light of future insights and data. Furthermore, while our own simulation is based on one specific MFM [2], the computational framework is general and can be applied to models favoured by the user. Finally, we provide a brief outlook on improving the integration of multi-modal imaging data through iterative fits of a single underlying MFM within this realistic simulation framework.
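For readers unfamiliar with the hemodynamic step, the following is a minimal sketch of Balloon-Windkessel dynamics of the kind convolved with neural activity to produce BOLD predictions; the parameter values follow common literature choices (e.g., Friston et al.), not necessarily those used in this work.

```python
# Forward-Euler integration of the Balloon-Windkessel model: a neural
# drive z(t) produces flow, volume, and deoxyhemoglobin changes that map
# to a BOLD signal. Parameters are standard literature values.
import numpy as np

kappa, gamma, tau, alpha, rho = 0.65, 0.41, 0.98, 0.32, 0.34
V0, k1, k2, k3 = 0.02, 7.0 * rho, 2.0, 2.0 * rho - 0.2

dt, T = 0.01, 30.0
t = np.arange(0.0, T, dt)
z = ((t > 1.0) & (t < 2.0)).astype(float)  # 1 s burst of neural activity

s, f, v, q = 0.0, 1.0, 1.0, 1.0            # signal, flow, volume, deoxyHb
bold = np.empty_like(t)
for i, zi in enumerate(z):
    ds = zi - kappa * s - gamma * (f - 1.0)
    df = s
    dv = (f - v ** (1.0 / alpha)) / tau
    dq = (f * (1.0 - (1.0 - rho) ** (1.0 / f)) / rho
          - q * v ** (1.0 / alpha - 1.0)) / tau
    s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
    bold[i] = V0 * (k1 * (1.0 - q) + k2 * (1.0 - q / v) + k3 * (1.0 - v))

print(f"peak BOLD response: {bold.max():.4f} at t = {t[bold.argmax()]:.1f} s")
```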
Abstract:
A statistical model is derived relating the diurnal variation of sea surface temperature (SST) to the net surface heat flux and surface wind speed from a numerical weather prediction (NWP) model. The model is derived using fluxes and winds from the European Centre for Medium-Range Weather Forecasts (ECMWF) NWP model and SSTs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI). In the model, diurnal warming has a linear dependence on the net surface heat flux integrated since (approximately) dawn and an inverse quadratic dependence on the maximum of the surface wind speed in the same period. The model coefficients are found by matching, for a given integrated heat flux, the frequency distributions of the maximum wind speed and the observed warming. Diurnal cooling, where it occurs, is modelled as proportional to the integrated heat flux divided by the heat capacity of the seasonal mixed layer. The model reproduces the statistics (mean, standard deviation, and 95th percentile) of the diurnal variation of SST seen by SEVIRI and reproduces the geographical pattern of mean warming seen by the Advanced Microwave Scanning Radiometer (AMSR-E). We use the functional dependencies in the statistical model to test the behaviour of two physical models of diurnal warming that display contrasting systematic errors.
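The functional form described above can be sketched directly; the coefficients below are illustrative placeholders, not the fitted values from the paper.

```python
# Hedged sketch: warming linear in the heat flux integrated since dawn,
# inverse quadratic in the maximum wind speed; cooling scaled by the heat
# capacity of the mixed layer. Coefficients a, b are placeholders.
import numpy as np

def diurnal_warming(q_int, u_max, a=1.0e-7, b=0.5):
    """Estimated SST warming (K) for integrated net heat flux q_int (J m^-2)
    and maximum surface wind speed u_max (m s^-1) since ~dawn."""
    return a * np.maximum(q_int, 0.0) / (b + u_max) ** 2

def diurnal_cooling(q_int, mixed_layer_depth, rho=1025.0, cp=3990.0):
    """Cooling (K): (negative) integrated heat flux divided by the heat
    capacity of the seasonal mixed layer."""
    return np.minimum(q_int, 0.0) / (rho * cp * mixed_layer_depth)

print(diurnal_warming(q_int=1.5e7, u_max=3.0))   # calm, sunny: strong warming
print(diurnal_warming(q_int=1.5e7, u_max=12.0))  # windy: warming suppressed
print(diurnal_cooling(q_int=-5.0e6, mixed_layer_depth=30.0))
```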
Abstract:
Decadal hindcast simulations of Arctic Ocean sea ice thickness made by a modern dynamic-thermodynamic sea ice model, forced independently by both the ERA-40 and NCEP/NCAR reanalysis data sets, are compared for the first time. Using comprehensive data sets of observations made between 1979 and 2001 of sea ice thickness, draft, extent, and speed, we find that it is possible to tune model parameters to give satisfactory agreement with observed data, thereby highlighting the skill of modern sea ice models, though the parameter values chosen differ according to the model forcing used. We find a consistent decreasing trend in Arctic Ocean sea ice thickness since 1979, and a steady decline in the Eastern Arctic Ocean over the full 40-year period of comparison that accelerated after 1980, but the predictions of Western Arctic Ocean sea ice thickness between 1962 and 1980 differ substantially. The origins of the differing thickness trends and variability were traced not to parameter differences but to differences in the forcing fields and in how they are applied. It is argued that uncertainty, differences and errors in sea ice model forcing sets complicate the use of models to determine the exact causes of the recently reported decline in Arctic sea ice thickness, but help in the determination of robust features if the models are tuned appropriately against observations.
Abstract:
The incorporation of numerical weather predictions (NWP) into a flood forecasting system can increase forecast lead times from a few hours to a few days. A single NWP forecast from a single forecast centre, however, is insufficient, as it involves considerable non-predictable uncertainty and can lead to a high number of false alarms. The availability of global ensemble numerical weather prediction systems through the "THORPEX Interactive Grand Global Ensemble" (TIGGE) offers a new opportunity for flood forecasting. The Grid-Xinanjiang distributed hydrological model, which is based on the Xinanjiang model theory and the topographical information of each grid cell extracted from the Digital Elevation Model (DEM), is coupled with ensemble weather predictions from the TIGGE database (CMC, CMA, ECMWF, UKMO, NCEP) for flood forecasting. This paper presents a case study applying the coupled flood forecasting model to the Xixian catchment (a drainage area of 8826 km²) located in Henan province, China. A probabilistic discharge forecast is provided as the end product. Results show that the combination of the Grid-Xinanjiang model and the TIGGE database provides a promising tool for early warning of flood events several days ahead.
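A minimal sketch of how such a probabilistic discharge product can be derived from member-wise hydrological runs (the discharge series below are invented, not Grid-Xinanjiang output):

```python
# Percentile bands across all ensemble-driven hydrological runs yield a
# probabilistic discharge forecast; member series here are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_members, n_hours = 5 * 20, 72  # e.g., 5 centres x 20 members each
base = 200 + 300 * np.exp(-((np.arange(n_hours) - 40) / 10.0) ** 2)  # flood wave
discharge = base * rng.lognormal(mean=0.0, sigma=0.25,
                                 size=(n_members, n_hours))

p10, p50, p90 = np.percentile(discharge, [10, 50, 90], axis=0)
peak_hour = int(p50.argmax())
print(f"median peak: {p50[peak_hour]:.0f} m3/s at hour {peak_hour}, "
      f"80% band: [{p10[peak_hour]:.0f}, {p90[peak_hour]:.0f}] m3/s")
```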