Abstract:
A new tropopause definition involving a flow-dependent blending of the traditional thermal tropopause with one based on potential vorticity has been developed and applied to the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalyses (ERA), ERA-40 and ERA-Interim. Global and regional trends in tropopause characteristics for annual and solsticial seasonal means are presented here, with emphasis on significant results for the newer ERA-Interim data for 1989-2007. The global-mean tropopause is rising at a rate of 47 m decade⁻¹, with pressure falling at 1.0 hPa decade⁻¹, and temperature falling at 0.18 K decade⁻¹. The Antarctic tropopause shows decreasing heights, warming, and increasing westerly winds. The Arctic tropopause also shows a warming, but with decreasing westerly winds. In the tropics the trends are small, but at the latitudes of the sub-tropical jets they are almost double the global values. It is found that these changes are mainly concentrated in the eastern hemisphere. Previous and new metrics for the rate of broadening of the tropics, based on both height and wind, give trends in the range 0.9° decade⁻¹ to 2.2° decade⁻¹. For ERA-40 the global height and pressure trends for the period 1979-2001 are similar: 39 m decade⁻¹ and −0.8 hPa decade⁻¹. These values are smaller than those found from the thermal tropopause definition with this data set, as was used in most previous studies.
Abstract:
This paper identifies some significant gaps in our knowledge of the configuration and performance of the property asset management sector. It is argued that, as many leading academic property researchers have focussed on financial vehicles and modelling, in-depth analysis of property assets and their management has been neglected. In terms of potential for future in-depth research, three key broad preliminary research themes or questions are identified. First, how do the active management opportunities presented, costs of management and the key management tasks vary with market conditions, asset type and life-cycle stage? Second, how is property asset management delivered and what are the main costs and benefits of different models of procurement? Finally, what are the appropriate metrics for measuring the performance of different property managers and approaches to property management? It is concluded that the lack of published materials addressing these issues has implications for educating property students.
Abstract:
This spreadsheet contains key data about that part of the endgame of Western Chess for which Endgame Tables (EGTs) have been generated by computer. It is derived from the EGT work since 1969 of Thomas Ströhlein, Ken Thompson, Christopher Wirth, Eugene Nalimov, Marc Bourzutschky, John Tamplin and Yakov Konoval. The data includes percentages of wins, draws and losses (wtm and btm), the maximum and average depths of win under various metrics (DTC = Depth to Conversion, DTM = Depth to Mate, DTZ = Depth to Conversion or Pawn-push), and examples of positions of maximum depth. It is essentially about sub-7-man Chess but is updated as news comes in of 7-man EGT computations.
Abstract:
1. Species-based indices are frequently employed as surrogates for wider biodiversity health and measures of environmental condition. Species selection is crucial in determining an indicator's metric value and hence the validity of the interpretation of ecosystem condition and function it provides, yet an objective process to identify appropriate indicator species is frequently lacking. 2. An effective indicator needs to (i) be representative, reflecting the status of wider biodiversity; (ii) be reactive, acting as an early-warning system for detrimental changes in environmental conditions; (iii) respond to change in a predictable way. We present an objective, niche-based approach for species' selection, founded on a coarse categorisation of species' niche space and key resource requirements, which ensures the resultant indicator has these key attributes. 3. We use UK farmland birds as a case study to demonstrate this approach, identifying an optimal indicator set containing 12 species. In contrast to the 19 species included in the farmland bird index (FBI), a key UK biodiversity indicator that contributes to one of the UK Government's headline indicators of sustainability, the niche space occupied by these species fully encompasses that occupied by the wider community of 62 species. 4. We demonstrate that the response of these 12 species to land-use change is a strong correlate to that of the wider farmland bird community. Furthermore, the temporal dynamics of the index based on their population trends closely matches the population dynamics of the wider community. However, in both analyses, the magnitude of the change in our indicator was significantly greater, allowing this indicator to act as an early-warning system. 5. 
Ecological indicators are embedded in environmental management, sustainable development and biodiversity conservation policy and practice where they act as metrics against which progress towards national, regional and global targets can be measured. Adopting this niche-based approach for objective selection of indicator species will facilitate the development of sensitive and representative indices for a range of taxonomic groups, habitats and spatial scales.
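Composite bird indicators of the kind discussed above are commonly computed as a geometric mean of per-species population indices relative to a base year. A minimal sketch of that computation, assuming hypothetical index values (the function name and data are illustrative, not taken from the paper):

```python
import math

def multispecies_index(species_indices):
    """Geometric mean of per-species population indices (each expressed
    relative to a base-year value of 1.0), the form commonly used for
    composite multispecies bird indicators."""
    logs = [math.log(v) for v in species_indices]
    return math.exp(sum(logs) / len(logs))

# Hypothetical per-species indices for one year (base year = 1.0)
indices = [0.8, 1.1, 0.9, 1.2]
print(round(multispecies_index(indices), 3))  # → 0.987
```

The geometric mean is preferred over the arithmetic mean here because proportional increases and decreases in different species offset each other symmetrically.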
Abstract:
The estimation of prediction quality is important because without quality measures, it is difficult to determine the usefulness of a prediction. Currently, methods for ligand binding site residue predictions are assessed in the function prediction category of the biennial Critical Assessment of Techniques for Protein Structure Prediction (CASP) experiment, utilizing the Matthews Correlation Coefficient (MCC) and Binding-site Distance Test (BDT) metrics. However, the assessment of ligand binding site predictions using such metrics requires the availability of solved structures with bound ligands. Thus, we have developed a ligand binding site quality assessment tool, FunFOLDQA, which utilizes protein feature analysis to predict ligand binding site quality prior to the experimental solution of the protein structures and their ligand interactions. The FunFOLDQA feature scores were combined using: simple linear combinations, multiple linear regression and a neural network. The neural network produced significantly better results for correlations to both the MCC and BDT scores, according to Kendall’s τ, Spearman’s ρ and Pearson’s r correlation coefficients, when tested on both the CASP8 and CASP9 datasets. The neural network also produced the largest Area Under the Curve (AUC) score when Receiver Operating Characteristic (ROC) analysis was undertaken for the CASP8 dataset. Furthermore, the FunFOLDQA algorithm incorporating the neural network is shown to add value to FunFOLD when both methods are employed in combination. This results in a statistically significant improvement over all of the best server methods, the FunFOLD method (6.43%), and one of the top manual groups (FN293) tested on the CASP8 dataset. The FunFOLDQA method was also found to be competitive with the top server methods when tested on the CASP9 dataset. 
To the best of our knowledge, FunFOLDQA is the first attempt to develop a method that can be used to assess ligand binding site prediction quality, in the absence of experimental data.
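The MCC used in the CASP assessment above is the standard correlation-style score on a binary confusion matrix. A minimal sketch, with hypothetical residue-level counts for illustration (the counts are not from the paper):

```python
import math

def mcc(tp, fp, tn, fn):
    """Matthews Correlation Coefficient from binary confusion counts:
    (TP*TN - FP*FN) / sqrt((TP+FP)(TP+FN)(TN+FP)(TN+FN)).
    Returns 0.0 when any marginal total is zero."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts: predicted vs. observed binding-site residues
print(round(mcc(tp=8, fp=2, tn=180, fn=4), 3))  # → 0.714
```

Because binding-site residues are a small minority of a protein's residues, MCC is a more informative score here than plain accuracy, which would be dominated by the true negatives.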
Abstract:
Vegetation distribution and state have been measured since 1981 by the AVHRR (Advanced Very High Resolution Radiometer) instrument through satellite remote sensing. In this study a correction method is applied to the Pathfinder NDVI (Normalized Difference Vegetation Index) data to create a continuous European vegetation phenology dataset of a 10-day temporal and 0.1° spatial resolution; additionally, land surface parameters for use in biosphere–atmosphere modelling are derived. The analysis of time-series from this dataset reveals, for the years 1982–2001, strong seasonal and interannual variability in European land surface vegetation state. Phenological metrics indicate a late and short growing season for the years 1985–1987, in addition to early and prolonged activity in the years 1989, 1990, 1994 and 1995. These variations are in close agreement with findings from phenological measurements at the surface; spring phenology is also shown to correlate particularly well with anomalies in winter temperature and winter North Atlantic Oscillation (NAO) index. Nevertheless, phenological metrics, which display considerable regional differences, could only be determined for vegetation with a seasonal behaviour. Trends in the phenological phases reveal a general shift to earlier (−0.54 days year−1) and prolonged (0.96 days year−1) growing periods which are statistically significant, especially for central Europe.
Abstract:
The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented where three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), have been evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated by using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of all models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
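The variance ratio method mentioned as a reference model can be sketched as a linear predictor whose slope is the ratio of the site and reference standard deviations, with the intercept chosen to preserve the means of the concurrent records. A minimal sketch with hypothetical wind-speed data (not from the paper):

```python
import statistics as st

def variance_ratio_mcp(site, ref):
    """Fit the variance-ratio MCP model on concurrent site/reference wind
    speed records. Unlike ordinary least squares, the slope sd(site)/sd(ref)
    preserves the variance of the predicted series."""
    slope = st.stdev(site) / st.stdev(ref)
    intercept = st.mean(site) - slope * st.mean(ref)
    return lambda x: intercept + slope * x

# Hypothetical concurrent records (m/s)
site = [5.0, 6.5, 4.0, 7.5, 6.0]
ref = [4.0, 6.0, 3.5, 7.5, 5.0]
predict = variance_ratio_mcp(site, ref)
print(round(predict(5.0), 2))  # long-term reference speed -> site estimate, → 5.63
```

In a full MCP workflow, `predict` would then be applied to the multi-year historic reference series to estimate the long-term resource at the prospective site.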
Abstract:
Decadal predictions have a high profile in the climate science community and beyond, yet very little is known about their skill. Nor is there any agreed protocol for estimating their skill. This paper proposes a sound and coordinated framework for verification of decadal hindcast experiments. The framework is illustrated for decadal hindcasts tailored to meet the requirements and specifications of CMIP5 (Coupled Model Intercomparison Project phase 5). The chosen metrics address key questions about the information content in initialized decadal hindcasts. These questions are: (1) Do the initial conditions in the hindcasts lead to more accurate predictions of the climate, compared to un-initialized climate change projections? and (2) Is the prediction model’s ensemble spread an appropriate representation of forecast uncertainty on average? The first question is addressed through deterministic metrics that compare the initialized and uninitialized hindcasts. The second question is addressed through a probabilistic metric applied to the initialized hindcasts and comparing different ways to ascribe forecast uncertainty. Verification is advocated at smoothed regional scales that can illuminate broad areas of predictability, as well as at the grid scale, since many users of the decadal prediction experiments who feed the climate data into applications or decision models will use the data at grid scale, or downscale it to even higher resolution. An overall statement on skill of CMIP5 decadal hindcasts is not the aim of this paper. The results presented are only illustrative of the framework, which would enable such studies. 
However, broad conclusions that are beginning to emerge from the CMIP5 results include (1) Most predictability at the interannual-to-decadal scale, relative to climatological averages, comes from external forcing, particularly for temperature; (2) though moderate, additional skill is added by the initial conditions over what is imparted by external forcing alone; however, the impact of initialization may result in overall worse predictions in some regions than provided by uninitialized climate change projections; (3) limited hindcast records and the dearth of climate-quality observational data impede our ability to quantify expected skill as well as model biases; and (4) as is common to seasonal-to-interannual model predictions, the spread of the ensemble members is not necessarily a good representation of forecast uncertainty. The authors recommend that this framework be adopted to serve as a starting point to compare prediction quality across prediction systems. The framework can provide a baseline against which future improvements can be quantified. The framework also provides guidance on the use of these model predictions, which differ in fundamental ways from the climate change projections that much of the community has become familiar with, including adjustment of mean and conditional biases, and consideration of how to best approach forecast uncertainty.
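One widely used deterministic metric of the kind the framework describes, comparing initialized hindcasts against uninitialized projections, is a mean-squared-error skill score with the uninitialized runs as the reference forecast. A sketch under hypothetical anomaly data (illustrative only; the paper's exact metric suite may differ):

```python
def msss(pred_init, pred_uninit, obs):
    """Mean-squared-error skill score of initialized hindcasts, with the
    uninitialized projections as the reference: 1 - MSE_init / MSE_uninit.
    Positive values mean initialization improved on the reference."""
    mse = lambda p: sum((pi - oi) ** 2 for pi, oi in zip(p, obs)) / len(obs)
    return 1.0 - mse(pred_init) / mse(pred_uninit)

# Hypothetical decadal-mean temperature anomalies (K) at one grid point
obs = [0.10, 0.25, 0.05, 0.30]
init = [0.12, 0.20, 0.10, 0.28]
uninit = [0.20, 0.10, 0.15, 0.20]
print(round(msss(init, uninit, obs), 3))
```

A score of 0 means the initial conditions added nothing over the forced response; in practice this would be computed per grid point or over smoothed regions, as the framework advocates.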
Abstract:
The global temperature response to increasing atmospheric CO2 is often quantified by metrics such as equilibrium climate sensitivity and transient climate response [1]. These approaches, however, do not account for carbon cycle feedbacks and therefore do not fully represent the net response of the Earth system to anthropogenic CO2 emissions. Climate–carbon modelling experiments have shown that: (1) the warming per unit CO2 emitted does not depend on the background CO2 concentration [2]; (2) the total allowable emissions for climate stabilization do not depend on the timing of those emissions [3,4,5]; and (3) the temperature response to a pulse of CO2 is approximately constant on timescales of decades to centuries [3,6,7,8]. Here we generalize these results and show that the carbon–climate response (CCR), defined as the ratio of temperature change to cumulative carbon emissions, is approximately independent of both the atmospheric CO2 concentration and its rate of change on these timescales. From observational constraints, we estimate CCR to be in the range 1.0–2.1 °C per trillion tonnes of carbon (Tt C) emitted (5th to 95th percentiles), consistent with twenty-first-century CCR values simulated by climate–carbon models. Uncertainty in land-use CO2 emissions and aerosol forcing, however, means that higher observationally constrained values cannot be excluded. The CCR, when evaluated from climate–carbon models under idealized conditions, represents a simple yet robust metric for comparing models, which aggregates both climate feedbacks and carbon cycle feedbacks. CCR is also likely to be a useful concept for climate change mitigation and policy; by combining the uncertainties associated with climate sensitivity, carbon sinks and climate–carbon feedbacks into a single quantity, the CCR allows CO2-induced global mean temperature change to be inferred directly from cumulative carbon emissions.
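Because the CCR is approximately constant, CO2-induced warming follows from cumulative emissions by simple proportionality. A minimal sketch using the abstract's 5th-95th percentile range (the function name is illustrative):

```python
def warming_from_emissions(cumulative_emissions_TtC, ccr_K_per_TtC):
    """CO2-induced global-mean temperature change inferred directly from
    cumulative carbon emissions via the carbon-climate response (CCR)."""
    return ccr_K_per_TtC * cumulative_emissions_TtC

# CCR range from the abstract: 1.0-2.1 K per Tt C (5th to 95th percentiles)
for ccr in (1.0, 2.1):
    print(warming_from_emissions(0.5, ccr))  # warming after 0.5 Tt C emitted
```

This linearity is what makes cumulative carbon budgets possible: a warming target maps directly onto a total allowable quantity of emitted carbon, independent of the emission pathway.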
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable to decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets and this has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist, however for Prism algorithms only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms and develops a new pruning method, Jmax-pruning, which is discussed in theoretical terms and evaluated empirically.
Abstract:
The Prism family of algorithms induces modular classification rules in contrast to the Top Down Induction of Decision Trees (TDIDT) approach which induces classification rules in the intermediate form of a tree structure. Both approaches achieve a comparable classification accuracy. However, in some cases Prism outperforms TDIDT. For both approaches pre-pruning facilities have been developed in order to prevent the induced classifiers from overfitting on noisy datasets, by cutting rule terms or whole rules or by truncating decision trees according to certain metrics. There have been many pre-pruning mechanisms developed for the TDIDT approach, but for the Prism family the only existing pre-pruning facility is J-pruning. J-pruning not only works on Prism algorithms but also on TDIDT. Although it has been shown that J-pruning produces good results, this work points out that J-pruning does not use its full potential. The original J-pruning facility is examined and the use of a new pre-pruning facility, called Jmax-pruning, is proposed and evaluated empirically. A possible pre-pruning facility for TDIDT based on Jmax-pruning is also discussed.
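J-pruning is based on the information-theoretic J-measure of a rule (Smyth and Goodman's rule quality measure): the rule's coverage weighted by the cross-entropy between the posterior and prior class probabilities. A minimal sketch with hypothetical probabilities (illustrative values, not from the paper):

```python
import math

def j_measure(p_x, p_y, p_y_given_x):
    """J-measure of a rule IF x THEN y:
    p(x) * [ p(y|x) log2(p(y|x)/p(y)) + (1-p(y|x)) log2((1-p(y|x))/(1-p(y))) ].
    Higher values indicate more informative rules."""
    def term(post, prior):
        return post * math.log2(post / prior) if post > 0 else 0.0
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# Hypothetical rule covering 30% of instances; prior p(y)=0.5, posterior 0.9
print(round(j_measure(0.3, 0.5, 0.9), 4))  # → 0.1593
```

During rule induction, a pruning facility of this kind monitors the measure as terms are appended and stops specializing once adding a term no longer increases it.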
Abstract:
We examine the climate effects of the emissions of near-term climate forcers (NTCFs) from 4 continental regions (East Asia, Europe, North America and South Asia) using radiative forcing from the task force on hemispheric transport of air pollution source-receptor global chemical transport model simulations. These simulations model the transport of 3 aerosol species (sulphate, particulate organic matter and black carbon) and 4 ozone precursors (methane, nitrogen oxides (NOx), volatile organic compounds and carbon monoxide). From the equilibrium radiative forcing results we calculate global climate metrics, global warming potentials (GWPs) and global temperature change potentials (GTPs) and show how these depend on emission region, and can vary as functions of time. For the aerosol species, the GWP(100) values are −37±12, −46±20, and 350±200 for SO2, POM and BC respectively for the direct effects only. The corresponding GTP(100) values are −5.2±2.4, −6.5±3.5, and 50±33. This analysis is further extended by examining the temperature-change impacts in 4 latitude bands. This shows that the latitudinal pattern of the temperature response to emissions of the NTCFs does not directly follow the pattern of the diagnosed radiative forcing. For instance, temperatures in the Arctic latitudes are particularly sensitive to NTCF emissions in the northern mid-latitudes. At the 100-yr time horizon the absolute regional temperature potentials (ARTPs) show NOx emissions can have a warming effect in the northern mid and high latitudes, but cooling in the tropics and Southern Hemisphere. The northern mid-latitude temperature response to northern mid-latitude emissions of most NTCFs is approximately twice as large as would be implied by the global average.
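A GWP of the kind computed above is the species' time-integrated radiative forcing (its absolute GWP) normalized by that of CO2 at the same horizon. A sketch assuming a single exponential lifetime decay and hypothetical unit radiative efficiency (the paper instead derives its forcings from the chemical transport model simulations):

```python
import math

def agwp(rf_efficiency, lifetime_yr, horizon_yr):
    """Absolute GWP of a pulse emission with exponential decay:
    integral of A*exp(-t/tau) dt from 0 to H = A * tau * (1 - exp(-H/tau))."""
    return rf_efficiency * lifetime_yr * (1 - math.exp(-horizon_yr / lifetime_yr))

def gwp(rf_efficiency, lifetime_yr, horizon_yr, agwp_co2):
    """GWP(H): the species' AGWP divided by the AGWP of CO2 at horizon H."""
    return agwp(rf_efficiency, lifetime_yr, horizon_yr) / agwp_co2

# Hypothetical species: unit forcing efficiency, 12-yr lifetime, H = 100 yr
print(round(agwp(1.0, 12.0, 100.0), 3))  # → 11.997
```

For species much shorter-lived than the horizon, the integral saturates at A*tau, which is why aerosol GWPs are dominated by the forcing efficiency and lifetime rather than the choice of horizon.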
The Asian summer monsoon: an intercomparison of CMIP5 vs. CMIP3 simulations of the late 20th century
Abstract:
The boreal summer Asian monsoon has been evaluated in 25 Coupled Model Intercomparison Project-5 (CMIP5) and 22 CMIP3 GCM simulations of the late 20th Century. Diagnostics and skill metrics have been calculated to assess the time-mean, climatological annual cycle, interannual variability, and intraseasonal variability. Progress has been made in modeling these aspects of the monsoon, though there is no single model that best represents all of these aspects of the monsoon. The CMIP5 multi-model mean (MMM) is more skillful than the CMIP3 MMM for all diagnostics in terms of the skill of simulating pattern correlations with respect to observations. Additionally, for rainfall/convection the MMM outperforms the individual models for the time mean, the interannual variability of the East Asian monsoon, and intraseasonal variability. The pattern correlation of the time (pentad) of monsoon peak and withdrawal is better simulated than that of monsoon onset. The onset of the monsoon over India is typically too late in the models. The extension of the monsoon over eastern China, Korea, and Japan is underestimated, while it is overestimated over the subtropical western/central Pacific Ocean. The anti-correlation between anomalies of all-India rainfall and Niño-3.4 sea surface temperature is overly strong in CMIP3 and typically too weak in CMIP5. For both the ENSO-monsoon teleconnection and the East Asian zonal wind-rainfall teleconnection, the MMM interannual rainfall anomalies are weak compared to observations. Though simulation of intraseasonal variability remains problematic, several models show improved skill at representing the northward propagation of convection and the development of the tilted band of convection that extends from India to the equatorial west Pacific. The MMM also well represents the space-time evolution of intraseasonal outgoing longwave radiation anomalies. 
Caution is necessary when using GPCP and CMAP rainfall to validate (1) the time-mean rainfall, as there are systematic differences over ocean and land between these two data sets, and (2) the timing of monsoon withdrawal over India, where the smooth southward progression seen in India Meteorological Department data is better realized in CMAP data compared to GPCP data.
Abstract:
Consistent with a negativity bias account, neuroscientific and behavioral evidence demonstrates modulation of even early sensory processes by unpleasant, potentially threat-relevant information. The aim of this research is to assess the extent to which pleasant and unpleasant visual stimuli presented extrafoveally capture attention and impact eye movement control. We report an experiment examining deviations in saccade metrics in the presence of emotional image distractors that are close to a nonemotional target. We additionally manipulate the saccade latency to test when the emotional distractor has its biggest impact on oculomotor control. The results demonstrate that saccade landing position was pulled toward unpleasant distractors, and that this pull was due to the quick saccade responses. Overall, these findings support a negativity bias account of early attentional control and call for the need to consider the time course of motivated attention when affect is implicit.
Abstract:
Long time series of ground-based plant phenology, as well as more than two decades of satellite-derived phenological metrics, are currently available to assess the impacts of climate variability and trends on terrestrial vegetation. Traditional plant phenology provides very accurate information on individual plant species, but with limited spatial coverage. Satellite phenology allows monitoring of terrestrial vegetation on a global scale and provides an integrative view at the landscape level. Linking the strengths of both methodologies has high potential value for climate impact studies. We compared a multispecies index from ground-observed spring phases with two types (maximum slope and threshold approach) of satellite-derived start-of-season (SOS) metrics. We focus on Switzerland from 1982 to 2001 and show that temporal and spatial variability of the multispecies index correspond well with the satellite-derived metrics. All phenological metrics correlate with temperature anomalies as expected. The slope approach proved to deviate strongly from the temporal development of the ground observations as well as from the threshold-defined SOS satellite measure. The slope spring indicator is considered to indicate a different stage in vegetation development and is therefore less suited as a SOS parameter for comparative studies in relation to ground-observed phenology. Satellite-derived metrics are, however, very susceptible to snow cover, and it is suggested that this snow cover should be better accounted for by the use of newer satellite sensors.
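The threshold approach to a start-of-season (SOS) metric described above can be sketched as finding the first composite period in which NDVI crosses a fixed fraction of its seasonal amplitude above the annual minimum. A minimal sketch with hypothetical 10-day composite values (the data and 50% threshold are illustrative, not from the paper):

```python
def start_of_season(ndvi, threshold_frac=0.5):
    """Return the index of the first time step at which NDVI reaches
    threshold_frac of the seasonal amplitude above the minimum
    (threshold approach to SOS); None if never reached."""
    lo, hi = min(ndvi), max(ndvi)
    threshold = lo + threshold_frac * (hi - lo)
    for t, value in enumerate(ndvi):
        if value >= threshold:
            return t
    return None

# Hypothetical 10-day composite NDVI values over one growing season
ndvi = [0.20, 0.22, 0.25, 0.35, 0.50, 0.65, 0.70, 0.68, 0.55, 0.30]
print(start_of_season(ndvi))  # → 4 (fifth composite period)
```

The maximum-slope alternative would instead return the time step of the steepest NDVI increase, which, as the abstract notes, marks a different stage of green-up and can diverge from ground-observed spring phases.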