928 results for BENCHMARK
Abstract:
In the lead-up to the creation of a Eurasian Economic Union in 2015, the Customs Union and the Common Economic Space between Russia, Belarus and Kazakhstan represent two elements of the most ambitious regional integration project launched in the post-Soviet space since 1991. This CEPS Special Report examines both the potential and the limits of Eurasian economic integration. For the purpose of assessing the Eurasian integration process, CEPS applied a modified version of a framework first developed by Ernst B. Haas and Philippe C. Schmitter in 1964 to project whether economic integration of a group of countries automatically engenders political unity. Taking the data available for the early stages of the European integration process as a benchmark, the results for the Customs Union and the Common Economic Space point to a rather unfavourable outlook for Eurasian economic integration.
Abstract:
Convectively coupled equatorial waves are fundamental components of the interaction between the physics and dynamics of the tropical atmosphere. A new methodology, which isolates individual equatorial wave modes, has been developed and applied to observational data. The methodology assumes that the horizontal structures given by equatorial wave theory can be used to project upper- and lower-tropospheric data onto equatorial wave modes. The dynamical fields are first separated into eastward- and westward-moving components within a specified frequency–zonal wavenumber domain. Each of the components of each field is then projected onto the different equatorial modes using the meridional (y) structures of these modes given by the theory. The latitudinal trapping scale y0 of the modes is predetermined from the data to fit the equatorial trapping within a suitable latitude belt y = ±Y. The extent to which the different dynamical fields are consistent with one another in their depiction of each equatorial wave structure determines the confidence in the reality of that structure. Comparison of the analyzed modes with the eastward- and westward-moving components of the convection field enables the identification of the dynamical structure and nature of convectively coupled equatorial waves. In a case study, the methodology is applied to two independent data sources, ECMWF Reanalysis and satellite-observed window brightness temperature (Tb) data, for the summer of 1992. Various convectively coupled equatorial Kelvin, mixed Rossby–gravity, and Rossby waves are detected. The results indicate a robust consistency between the two independent data sources. Different vertical structures for different wave modes and a significant Doppler-shifting effect of the background zonal winds on wave structures are found and discussed. It is found that, in addition to low-level convergence, anomalous fluxes induced by strong equatorial zonal winds associated with equatorial waves are important for inducing equatorial convection. There is evidence that equatorial convection associated with Rossby waves leads to a change in their structure, with a horizontal structure similar to that of a Kelvin wave moving westward with them. The vertical structure may also be radically changed. The analysis method should provide a very powerful diagnostic tool for investigating convectively coupled equatorial waves and the interaction of equatorial dynamics and physics in the real atmosphere. The results from applying the analysis method to a reanalysis dataset should provide a benchmark against which model studies can be compared.
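For readers wanting a concrete picture of the projection step, the following minimal Python sketch projects a field, already filtered into an eastward- or westward-moving component, onto a single meridional mode, assuming the standard Hermite-function structures of equatorial wave theory; the function names and the example trapping scale are illustrative only, not the authors' implementation:

    import math
    import numpy as np
    from numpy.polynomial.hermite import hermval

    def meridional_structure(n, y, y0):
        # Hermite-function structure of equatorial wave theory:
        # H_n(y/y0) * exp(-(y/y0)^2 / 2), normalised to unit norm in y/y0.
        coeffs = np.zeros(n + 1)
        coeffs[n] = 1.0
        norm = 1.0 / math.sqrt(math.sqrt(math.pi) * (2.0 ** n) * math.factorial(n))
        return norm * hermval(y / y0, coeffs) * np.exp(-0.5 * (y / y0) ** 2)

    def project_onto_mode(field, lat, n, y0):
        # field: (nlat, nlon) array holding one eastward- or westward-moving component;
        # lat: latitudes in the same units as the trapping scale y0.
        # Returns the mode-n amplitude as a function of longitude.
        phi = meridional_structure(n, lat, y0)
        return np.trapz(field * phi[:, None], lat, axis=0) / y0

    # Illustrative use: project a random field within a belt y = +/-20 degrees, with y0 = 6 degrees.
    lat = np.linspace(-20.0, 20.0, 41)
    field = np.random.randn(lat.size, 144)
    amplitude = project_onto_mode(field, lat, n=1, y0=6.0)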
Abstract:
This paper describes benchmark testing of six two-dimensional (2D) hydraulic models (DIVAST, DIVASTTVD, TUFLOW, JFLOW, TRENT and LISFLOOD-FP) in terms of their ability to simulate surface flows in a densely urbanised area. The models are applied to a 1.0 km × 0.4 km urban catchment within the city of Glasgow, Scotland, UK, and are used to simulate a flood event that occurred at this site on 30 July 2002. An identical numerical grid describing the underlying topography is constructed for each model, using a combination of airborne laser altimetry (LiDAR) fused with digital map data, and used to run a benchmark simulation. Two numerical experiments were then conducted to test the response of each model to topographic error and uncertainty over friction parameterisation. While all the models tested produce plausible results, subtle differences between particular groups of codes give considerable insight into both the practice and science of urban hydraulic modelling. In particular, the results show that the terrain data available from modern LiDAR systems are sufficiently accurate and resolved for simulating urban flows, but such data need to be fused with digital map data of building topology and land use to gain maximum benefit from the information contained therein. When such terrain data are available, uncertainty in friction parameters becomes a more dominant factor than topographic error for typical problems. The simulations also show that flows in urban environments are characterised by numerous transitions to supercritical flow and numerical shocks. However, the effects of these are localised and they do not appear to affect overall wave propagation. In contrast, inertia terms are shown to be important in this particular case, but the specific characteristics of the test site may mean that this does not hold more generally.
Abstract:
Phytoextraction, the use of plants to extract heavy metals from contaminated soils, could be an interesting alternative to conventional remediation technologies. However, calcareous soils with relatively high total metal contents are difficult to phytoremediate due to low soluble metal concentrations. Soil amendments such as ethylene diaminetetraacetate (EDTA) have been suggested to increase heavy metal bioavailability and uptake in aboveground plant parts. The strong persistence of EDTA and the risks of leaching of potentially toxic metals and essential nutrients have led to research on easily biodegradable soil amendments such as citric acid. In our research, EDTA is regarded as a scientific benchmark with which degradable alternatives are compared for enhanced phytoextraction purposes. The effects of increasing doses of EDTA (0.1, 1, 10 mmol kg(-1) dry soil) and citric acid (0.01, 0.05, 0.25, 0.442, 0.5 mol kg(-1) dry soil) on the bioavailable fractions of Cu, Zn, Cd, and Pb were assessed in one part of our study, and the results are presented in this article. The evolution of labile soil fractions of heavy metals over time was evaluated using water paste saturation extraction (similar to the soluble fraction), extraction with 1 M NH4OAc at pH 7 (similar to the exchangeable fraction), and extraction with 0.5 M NH4OAc + 0.5 M HOAc + 0.02 M EDTA at pH 4.65 (similar to the potentially bioavailable fraction). Both citric acid and EDTA produced a rapid initial increase in the labile heavy metal fractions. Metal mobilization remained constant over time for soils treated with EDTA, but a gradual decrease in the labile metal fractions was noted for soils treated with citric acid. The half-life of heavy metal mobilization by citric acid varied between 1.5 and 5.7 d. In the following article, the effect of heavy metal mobilization on uptake by Helianthus annuus will be presented.
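To illustrate how a mobilization half-life in the range quoted above would be obtained, a first-order decay can be fitted to the labile fraction measured at successive times; the numbers below are invented placeholders, not data from this study (a minimal Python sketch):

    import numpy as np

    # Hypothetical labile metal fraction (mg/kg) measured after a citric acid dose.
    t = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # days after treatment
    c = np.array([42.0, 36.0, 27.0, 15.0, 5.0])  # labile fraction at each time

    # First-order decay c(t) = c0 * exp(-k*t), so ln(c) is linear in t.
    slope, intercept = np.polyfit(t, np.log(c), 1)
    k = -slope
    half_life = np.log(2.0) / k
    print(f"decay constant k = {k:.2f} per day, half-life = {half_life:.1f} d")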
Abstract:
Evaluating agents in decision-making applications requires assessing their skill and predicting their behaviour. Both are well developed in Poker-like situations, but less so in more complex game and model domains. This paper addresses both tasks by using Bayesian inference in a benchmark space of reference agents. The concepts are explained and demonstrated using the game of chess but the model applies generically to any domain with quantifiable options and fallible choice. Demonstration applications address questions frequently asked by the chess community regarding the stability of the rating scale, the comparison of players of different eras and/or leagues, and controversial incidents possibly involving fraud. The last include alleged under-performance, fabrication of tournament results, and clandestine use of computer advice during competition. Beyond the model world of games, the aim is to improve fallible human performance in complex, high-value tasks.
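As a minimal illustration of inference in a benchmark space of reference agents, the sketch below updates a posterior over a handful of hypothetical reference players from the probability each assigns to the moves actually chosen; the agents, priors and probabilities are invented, not taken from the paper:

    import numpy as np

    # Benchmark space: reference agents of increasing strength. In practice the
    # probability each agent assigns to every available move would come from
    # engine analysis of the positions; here the values are invented.
    agents = ["weak", "club", "master", "engine-like"]
    prior = np.array([0.25, 0.25, 0.25, 0.25])

    # Likelihood of the move actually played at each of three decisions (rows: agents).
    likelihoods = np.array([
        [0.10, 0.30, 0.20],   # weak
        [0.20, 0.35, 0.30],   # club
        [0.40, 0.40, 0.45],   # master
        [0.70, 0.50, 0.60],   # engine-like
    ])

    posterior = prior.copy()
    for obs in likelihoods.T:          # Bayes update after each observed decision
        posterior *= obs
        posterior /= posterior.sum()

    for name, p in zip(agents, posterior):
        print(f"P({name} | observed choices) = {p:.3f}")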
Abstract:
Despite the success of studies integrating remotely sensed data with flood modelling, and despite the need to provide near-real-time data routinely on a global scale and to set up online data archives, there is to date a lack of spatially and temporally distributed hydraulic parameters to support ongoing modelling efforts. Therefore, the objective of this project is to provide a global evaluation and benchmark data set of floodplain water stages with uncertainties, and their assimilation in a large-scale flood model, using space-borne radar imagery. An algorithm is developed for automated retrieval of water stages with uncertainties from a sequence of radar imagery, and the data are assimilated in a flood model using the Tewkesbury 2007 flood event as a feasibility study. The retrieval method that we employ is based on possibility theory, an extension of fuzzy set theory that encompasses probability theory. In our case we first attempt to identify the main sources of uncertainty in the retrieval of water stages from radar imagery, for which we define physically meaningful ranges of parameter values. Possibilities of values are then computed for each parameter using a triangular ‘membership’ function. This procedure allows the computation of possible values of water stages at maximum flood extent along a river at many different locations. At a later stage in the project these data are then used in the assimilation, calibration or validation of a flood model. The application is subsequently extended to a global scale using wide-swath radar imagery and a simple global flood forecasting model, thereby providing improved river discharge estimates to update the latter.
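The triangular ‘membership’ function mentioned above can be illustrated with a few lines of Python; the parameter, its range and its most plausible value are hypothetical examples, not values from the project:

    def triangular_possibility(x, low, mode, high):
        # Possibility of value x for a parameter with physically meaningful
        # support [low, high] and most plausible value `mode`.
        if x <= low or x >= high:
            return 0.0
        if x <= mode:
            return (x - low) / (mode - low)
        return (high - x) / (high - mode)

    # Hypothetical parameter: water-surface elevation (m) retrieved at a flooded
    # boundary pixel, with a plausible range around the estimate.
    for z in (8.0, 8.4, 8.8, 9.3):
        print(z, triangular_possibility(z, low=7.9, mode=8.5, high=9.2))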
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art will be the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data into a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation less than, say, 1 m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It's not clear at present if the method is useful, but it's worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful to allow a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: e.g. for a 5 m-wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment. But how could a 5 m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
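One way to realise the spatially varying friction idea sketched above is a simple lookup from LiDAR-derived vegetation height to a Manning's n value for each grid cell; the height thresholds and friction values below are illustrative assumptions, not figures from this work (a minimal Python sketch):

    import numpy as np

    def manning_n_from_vegetation(height_m):
        # Map a LiDAR-derived vegetation height grid (metres) to a grid of
        # Manning's n friction coefficients. Thresholds and values are placeholders.
        n = np.full_like(height_m, 0.030, dtype=float)   # bare ground / short grass
        n[height_m > 0.3] = 0.050                        # longer grass, crops
        n[height_m > 1.0] = 0.080                        # shrubs, hedges
        n[height_m > 5.0] = 0.120                        # trees
        return n

    veg_height = np.array([[0.1, 0.6], [2.0, 9.0]])
    print(manning_n_from_vegetation(veg_height))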
Abstract:
We present a method to enhance fault localization for software systems based on a frequent pattern mining algorithm. Our method relies on a large set of test cases for a given set of programs in which faults can be detected. The test executions are recorded as function call trees. Based on test oracles, the tests can be classified into successful and failing tests. A frequent pattern mining algorithm is used to identify frequent subtrees in successful and failing test executions. This information is used to rank functions according to their likelihood of containing a fault. The ranking suggests an order in which to examine the functions during fault analysis. We validate our approach experimentally using a subset of the Siemens benchmark programs.
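As a much simplified surrogate for the ranking step (the paper mines frequent subtrees, which is not reproduced here), the sketch below scores each function by how much more often it appears in failing than in passing executions; the functions and executions are invented:

    from collections import Counter

    # Each execution is the set of functions appearing in its recorded call tree.
    passing = [{"parse", "eval", "print"}, {"parse", "eval"}, {"parse", "print"}]
    failing = [{"parse", "eval", "round"}, {"parse", "round", "print"}]

    fail_counts = Counter(f for run in failing for f in run)
    pass_counts = Counter(f for run in passing for f in run)

    def suspiciousness(fn):
        # Fraction of failing runs containing fn minus fraction of passing runs containing fn.
        return fail_counts[fn] / len(failing) - pass_counts[fn] / len(passing)

    # Rank the functions observed in failing executions, most suspicious first.
    ranking = sorted(fail_counts, key=suspiciousness, reverse=True)
    for fn in ranking:
        print(fn, round(suspiciousness(fn), 2))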
Abstract:
Routine milk recording data, often covering many years, are available for approximately half the dairy herds of England and Wales. In addition to milk yield and quality, these data include production events that can be used to derive objective Key Performance Indicators (KPIs) describing a herd's fertility and production. Recent developments in information systems give veterinarians and other technical advisers access to these KPIs on-line. In addition to reviewing individual herd performance, advisers can establish local benchmark groups to demonstrate the relative performance of similar herds in the vicinity. The use of existing milk recording data places no additional demands on farmers' time or resources. These developments could also readily be exploited by universities to introduce veterinary undergraduates to the realities of commercial dairy production.
Abstract:
BACKGROUND: In order to maintain the most comprehensive structural annotation databases, we must carry out regular updates for each proteome using the latest profile-profile fold recognition methods. The ability to carry out these updates on demand is necessary to keep pace with the regular updates of sequence and structure databases. Providing the highest quality structural models requires the most intensive profile-profile fold recognition methods running with the very latest available sequence databases and fold libraries. However, running these methods on such a regular basis for every sequenced proteome requires large amounts of processing power. In this paper we describe and benchmark the JYDE (Job Yield Distribution Environment) system, which is a meta-scheduler designed to work above cluster schedulers, such as Sun Grid Engine (SGE) or Condor. We demonstrate the ability of JYDE to distribute the load of genomic-scale fold recognition across multiple independent Grid domains. We use the most recent profile-profile version of our mGenTHREADER software in order to annotate the latest version of the Human proteome against the latest sequence and structure databases in as short a time as possible. RESULTS: We show that our JYDE system is able to scale to large numbers of intensive fold recognition jobs running across several independent computer clusters. Using our JYDE system we have been able to annotate 99.9% of the protein sequences within the Human proteome in less than 24 hours, by harnessing over 500 CPUs from 3 independent Grid domains. CONCLUSION: This study clearly demonstrates the feasibility of carrying out on-demand, high-quality structural annotations for the proteomes of major eukaryotic organisms. Specifically, we have shown that it is now possible to provide complete regular updates of profile-profile based fold recognition models for entire eukaryotic proteomes, through the use of Grid middleware such as JYDE.
Abstract:
The introduction of sodium persulfate into ordered MCM-48 silicas is described. The resulting materials are compared with existing activated carbon-based systems and with MCM-48 containing transition metals such as Cu(II) and Cr(VI) for the decomposition of hydrogen cyanide and cyanogen. MCM-48 materials containing sodium persulfate alone improve on the protection offered by benchmark activated carbon systems and by MCM-48 materials containing Cu(II) and Cr(VI), without the health risks associated with these metal ions.
Abstract:
We report variational calculations of rovibrational energies of CH4 using the code MULTIMODE and an ab initio force field of Schwenke and Partridge. The systematic convergence of the energies with respect to the level of mode coupling is presented. Converged vibrational energies calculated using the five-mode representation of the potential for zero total angular momentum are compared with previous benchmark calculations based on Radau coordinates using this force field for zero total angular momentum and for J = 1. Very good agreement with the previous benchmark calculations is found.
Abstract:
Background: Adaptations and assistive technology (AT) have an important role in enabling older people to remain in their own homes. Objective: To measure the feasibility and cost of adaptations and AT, and the scope for these to substitute and supplement formal care. Design: Detailed design studies to benchmark the adaptability of 82 properties against the needs of seven notional users. Setting: Social rented housing sector. Main outcome measures: Measures of the adaptability of properties, costs of care, adaptations and AT, and relationships between these costs. Results: The adaptability of properties varies according to many design factors and the needs of occupiers. The most adaptable properties were ground floor flats and bungalows; the least were houses, maisonettes and flats in converted houses. Purpose-built sheltered properties were generally more adaptable than corresponding mainstream properties but the opposite was the case for bungalows. Adaptations and AT can substitute for and supplement formal care, and in most cases the initial investment in adaptations and AT is recouped through subsequently lower care costs within the average life expectancy of a user. Conclusion: Appropriately selected adaptations and AT can make a significant contribution to the provision of living environments which facilitate independence. They can both substitute for traditional formal care services and supplement these services in a cost-effective way.
Abstract:
This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to notify of unusual traffic and in that way provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector which is unsupervised but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with state-of-the-art solutions, which demonstrates the strong potential of the proposed method.
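A heavily simplified sketch of the general idea, in which an evolutionary search (mutation only, no crossover, for brevity) places cluster centroids over unlabeled records and records far from every centroid are flagged as anomalous; the representation, fitness and threshold are illustrative choices, not the authors' design:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(centroids, data):
        # Lower total distance from each record to its nearest centroid = better clustering.
        d = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2)
        return -d.min(axis=1).sum()

    def evolve(data, n_clusters=3, pop_size=20, generations=50):
        dim = data.shape[1]
        # Each individual is a set of candidate centroids, seeded from random records.
        pop = [data[rng.choice(len(data), n_clusters)] for _ in range(pop_size)]
        for _ in range(generations):
            scored = sorted(pop, key=lambda c: fitness(c, data), reverse=True)
            parents = scored[: pop_size // 2]
            children = [p + rng.normal(0.0, 0.05, size=(n_clusters, dim)) for p in parents]
            pop = parents + children
        return max(pop, key=lambda c: fitness(c, data))

    # Invented "traffic" features; records far from all learned centroids are flagged.
    normal = rng.normal(0.0, 1.0, size=(200, 4))
    attack = rng.normal(6.0, 1.0, size=(5, 4))
    data = np.vstack([normal, attack])

    centroids = evolve(data)
    dist = np.linalg.norm(data[:, None, :] - centroids[None, :, :], axis=2).min(axis=1)
    threshold = np.percentile(dist, 97)
    print("flagged as anomalous:", np.where(dist > threshold)[0])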
Abstract:
A fast Knowledge-based Evolution Strategy, KES, for the multi-objective minimum spanning tree problem is presented. The proposed algorithm is validated, for the bi-objective case, against an exhaustive search for small problems (4-10 nodes), and compared with a deterministic algorithm, EPDA, and with NSGA-II for larger problems (up to 100 nodes) using hard benchmark instances. Experimental results show that KES finds the true Pareto fronts for small instances of the problem and calculates good approximation Pareto sets for the larger instances tested. It is shown that the fronts calculated by KES are superior to the NSGA-II fronts and almost as good as those established by EPDA. KES is designed to be scalable to multi-objective problems and fast due to its low complexity.
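For reference, the Pareto-front comparison mentioned above rests on the standard non-dominance test, sketched below for a set of candidate spanning trees evaluated on two cost objectives; the candidate costs are invented:

    def dominates(a, b):
        # a dominates b (minimisation) if it is no worse in both objectives and differs.
        return a[0] <= b[0] and a[1] <= b[1] and a != b

    def pareto_front(points):
        # Keep the points not dominated by any other candidate.
        return [p for p in points if not any(dominates(q, p) for q in points)]

    # Hypothetical (cost1, cost2) pairs for candidate spanning trees of one instance.
    candidates = [(10, 7), (8, 9), (12, 5), (9, 8), (11, 6), (10, 10)]
    print(sorted(pareto_front(candidates)))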