Abstract:
Declining grassland breeding bird populations have led to increased efforts to assess habitat quality, typically by estimating density or relative abundance. Because some grassland habitats may function as ecological traps, a more appropriate metric for determining quality may be breeding success. Between 1994 and 2003 we gathered data on the nest fates of Eastern Meadowlarks (Sturnella magna), Bobolinks (Dolichonyx oryzivorus), and Savannah Sparrows (Passerculus sandwichensis) in a series of fallow fields and pastures/hayfields in western New York State. We calculated daily survival probabilities using the Mayfield method, and used the logistic-exposure method to model effects of predictor variables on nest success. Nest survival probabilities were 0.464 for Eastern Meadowlarks (n = 26), 0.483 for Bobolinks (n = 91), and 0.585 for Savannah Sparrows (n = 152). Fledge dates for first clutches ranged between 14 June and 23 July. Only one obligate grassland bird nest was parasitized by Brown-headed Cowbirds (Molothrus ater), for an overall brood parasitism rate of 0.004. Logistic-exposure models indicated that daily nest survival probabilities were higher in pastures/hayfields than in fallow fields. Our results, and those from other studies in the Northeast, suggest that properly managed cool-season grassland habitats in the region may not act as ecological traps, and that obligate grassland birds in the region may have greater nest survival probabilities, and lower rates of Brown-headed Cowbird parasitism, than in many parts of the Midwest.
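As a rough illustration of the Mayfield calculation referred to above, here is a minimal Python sketch assuming the standard formulation (daily survival rate equals one minus nest losses per exposure-day, raised to the length of the nesting period); the numbers in the example are placeholders, not the study's data.

```python
def mayfield_nest_survival(failed_nests, exposure_days, nest_period_days):
    """Mayfield estimator: daily survival rate (DSR) and overall
    nest survival across the full nesting period."""
    dsr = 1.0 - failed_nests / exposure_days   # daily survival rate
    return dsr, dsr ** nest_period_days        # survival over whole period

# Placeholder numbers only: 20 failures over 500 exposure-days,
# with a 23-day nesting period.
dsr, survival = mayfield_nest_survival(20, 500.0, 23)
print(f"DSR = {dsr:.3f}, nest survival = {survival:.3f}")
```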
Abstract:
While Nalimov’s endgame tables for Western Chess are the most used today, their Depth-to-Mate metric is not the most efficient or effective in use. The authors have developed and used new programs to create tables to alternative metrics and recommend better strategies for endgame play.
Abstract:
Chess endgame tables should efficiently provide the value and depth of any required position during play. The indexing of an endgame’s positions is crucial to meeting this objective. This paper updates Heinz’ previous review of approaches to indexing and describes the latest approach by the first and third authors. Heinz’ and Nalimov’s endgame tables (EGTs) encompass the en passant rule and have the most compact index schemes to date. Nalimov’s EGTs, to the Distance-to-Mate (DTM) metric, require only 30.6 × 10^9 elements in total for all the 3-to-5-man endgames and are individually more compact than previous tables. His new index scheme has proved itself while generating the tables and in the 1999 World Computer Chess Championship, where many of the top programs used the new suite of EGTs.
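To make concrete what an index scheme does, here is a deliberately naive Python sketch mapping an n-man position to a dense integer. Real schemes such as Heinz’ and Nalimov’s are far more compact because they exploit board symmetries and legality constraints, which this sketch omits entirely.

```python
# Naive (non-compact) index for an n-man endgame table: each piece
# contributes a 0-63 square number in a fixed piece order, giving
# 2 * 64^n slots including the side to move. Symmetry reduction and
# broken-position elimination, the heart of compact schemes, are omitted.
def naive_index(squares, white_to_move):
    idx = 0
    for sq in squares:               # squares: list of 0-63 integers
        idx = idx * 64 + sq
    return idx * 2 + (0 if white_to_move else 1)

# Example: a 3-man position (White Ke1=4, Qd1=3; Black Ke8=60), White to move.
print(naive_index([4, 3, 60], True))
```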
Abstract:
The latest 6-man chess endgame results confirm that there are many deep forced mates beyond the 50-move rule. Players with potential wins near this limit naturally want to avoid a claim for a draw: optimal play according to current metrics does not guarantee feasible wins or maximise the chances of winning against fallible opposition. A new metric and further strategies are defined which support players’ aspirations and improve their prospects of securing wins in the context of a k-move rule.
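As a toy illustration of the point, assuming hypothetical endgame-table annotations per move (dtm, a depth to mate, and dtz, a depth to the next counter-resetting capture or pawn move), a move-choice rule under a k-move rule might prefer feasibility over the shortest mate:

```python
# Prefer winning moves whose distance to the next counter-resetting event
# fits the remaining move budget, rather than minimising depth-to-mate
# alone. The annotations below are invented for illustration.
def choose_move(moves, budget):
    feasible = [m for m in moves if m["dtz"] <= budget]
    pool = feasible or moves                 # fall back if nothing fits
    return min(pool, key=lambda m: (m["dtz"], m["dtm"]))

moves = [{"name": "Ra7", "dtm": 12, "dtz": 60},
         {"name": "Kb6", "dtm": 15, "dtz": 40}]
print(choose_move(moves, budget=50)["name"])  # Kb6: slower mate, feasible win
```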
Abstract:
This note corrects a previous treatment of algorithms for the metric DTR, Depth by the Rule.
Abstract:
The recent G8 Gleneagles climate statement signed on 8 July 2005 specifically mentions a determination to lessen the impact of aviation on climate [Gleneagles, 2005. The Gleneagles communique: climate change, energy and sustainable development. http://www.fco.gov.uk/Files/kfile/PostG8_Gleneagles_Communique.pdf]. In January 2005 the European Union Emission Trading Scheme (ETS) commenced operation as the largest multi-country, multi-sector ETS in the world, albeit currently limited only to CO2 emissions. At present the scheme makes no provision for aircraft emissions. However, the UK Government would like to see aircraft included in the ETS and plans to use its Presidencies of both the EU and G8 in 2005 to implement these schemes within the EU and perhaps internationally. Non-CO2 effects have been included in some policy-orientated studies of the impact of aviation but we argue that the inclusion of such effects in any such ETS scheme is premature; we specifically argue that use of the Radiative Forcing Index for comparing emissions from different sources is inappropriate and that there is currently no metric for such a purpose that is likely to enable their inclusion in the near future.
Abstract:
New conceptual ideas on network architectures have been proposed in the recent past. Current store-and-forward routers are replaced by active intermediate systems, which are able to perform computations on transient packets in a way that proves very helpful for developing and deploying new protocols quickly. This paper introduces a new routing algorithm, based on a congestion metric and inspired by the behavior of ants in nature. The use of the Active Networks paradigm combined with a cooperative learning environment produces a robust, decentralized algorithm capable of adapting quickly to changing conditions.
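A minimal sketch of the kind of ant-inspired reinforcement the abstract describes, assuming an AntNet-style probabilistic routing table in which backward ants reward the next hop they used in proportion to the delay they observed; all names and parameter values here are illustrative, not taken from the paper.

```python
# Per-destination routing table: probability of choosing each next hop.
pheromone = {"via_A": 0.5, "via_B": 0.5}

def reinforce(next_hop, trip_delay, best_delay=1.0):
    """Reward the used next hop; renormalise the others.
    Cheaper (less congested) trips earn a larger reinforcement r."""
    r = min(best_delay / trip_delay, 1.0)
    for hop in pheromone:
        if hop == next_hop:
            pheromone[hop] += r * (1 - pheromone[hop])
        else:
            pheromone[hop] -= r * pheromone[hop]

reinforce("via_B", trip_delay=1.25)
print(pheromone)   # probability mass shifts toward the faster next hop
```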
Abstract:
The Bureau International des Poids et Mesures, the BIPM, was established by Article 1 of the Convention du Mètre, on 20 May 1875, and is charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The decimal metric system, dating from the time of the French Revolution, was based on the metre and the kilogram. Under the terms of the 1875 Convention, new international prototypes of the metre and kilogram were made and formally adopted by the first Conférence Générale des Poids et Mesures (CGPM) in 1889. Over time this system developed, so that it now includes seven base units. In 1960 it was decided at the 11th CGPM that it should be called the Système International d’Unités, the SI (in English: the International System of Units). The SI is not static but evolves to match the world’s increasingly demanding requirements for measurements at all levels of precision and in all areas of science, technology, and human endeavour. This document is a summary of the SI Brochure, a publication of the BIPM which is a statement of the current status of the SI. The seven base units of the SI, listed in Table 1, provide the reference used to define all the measurement units of the International System. As science advances, and methods of measurement are refined, their definitions have to be revised. The more accurate the measurements, the greater the care required in the realization of the units of measurement.
Abstract:
An equilibrium study of complex formation of Co(II), Ni(II), Cu(II) and Zn(II), hereafter M(II), with the quadridentate (O⁻, N, O⁻, N) donor ligand N-(2-hydroxybenzyl)-L-histidine (H₂hb-L-his, hereafter H₂L), in the absence and in the presence of the typical (N, N) donor bidentate ligands 1,10-phenanthroline (phen), 2,2'-bipyridine (bipy) and ethylenediamine (en), hereafter B, was carried out in aqueous solution at 25 ± 1 °C and a fixed ionic strength, I = 0.1 mol dm⁻³ (NaNO₃). Combined pH-metric, UV-Vis and EPR measurements provide evidence for the formation of mononuclear and dinuclear binary and mixed-ligand complexes of the types M(L), M(L)₂²⁻, M₂(L)²⁺, M₂(H₋₁L)⁺, M(L)(B) and (B)M(H₋₁L)M(B)⁺. The imidazole moiety of the ligand is found to act as a bridging bidentate ligand in the dinuclear M₂(L)²⁺, M₂(H₋₁L)⁺ and (B)M(H₋₁L)M(B)⁺ complexes, using its N3 atom and N1-H deprotonated moiety. Stability constants of the complexes provide evidence of discrimination of Cu(II) from the other M(II) ions by this ligand. Solid complexes [Ni(L)(H₂O)₂] (1), [Cu(L)(H₂O)] (2) and [Ni(L)(bipy)]·H₂O (3) have been isolated and characterized by various physicochemical studies. Single-crystal X-ray diffraction of the ternary complex 3 shows an octahedral [(O⁻,N,N,O⁻)(N,N)] geometry with extensive π-π stacking of the aromatic rings and H-bonding involving the imidazole N1-H, the secondary amino N-atom, the lattice H₂O molecule, and the carboxylate and phenolate O-atoms.
Abstract:
This paper develops cycle-level FPGA circuits of an organization for a fast path-based neural branch predictor. Our results suggest that practical prediction-table sizes are limited to around 32 KB to 64 KB in current FPGA technology, due mainly to the FPGA logic resources required to maintain the tables. However, the predictor scales well in terms of prediction speed. Table size alone should therefore not be used as the only metric for hardware budget when comparing neural-based predictors to predictors of totally different organizations. This paper also gives early evidence that attention should shift to misprediction-recovery latency, rather than prediction latency, as the most critical factor impacting prediction accuracy for this class of branch predictors.
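For context, here is a minimal perceptron-style neural predictor in Python showing the core predict/update loop that such predictors implement in hardware. The paper's path-based organization is more elaborate; the table size and history length below are arbitrary choices for the sketch.

```python
HIST = 16                         # global history length (arbitrary)
TABLE = 256                       # number of weight vectors (arbitrary)
THETA = int(1.93 * HIST + 14)     # common training threshold for perceptrons

weights = [[0] * (HIST + 1) for _ in range(TABLE)]
history = [1] * HIST              # +1 = taken, -1 = not taken

def predict_and_update(pc, taken):
    w = weights[pc % TABLE]
    y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
    pred = y >= 0
    t = 1 if taken else -1
    if pred != taken or abs(y) <= THETA:   # train on mispredict or low margin
        w[0] += t
        for i in range(HIST):
            w[i + 1] += t * history[i]
    history.pop()
    history.insert(0, t)                   # shift in the newest outcome
    return pred
```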
Abstract:
Biological crossover occurs during the early stages of meiosis. During this process the chromosomes undergoing crossover are synapsed together at a number of homologous sequence sections, and it is within such synapsed sections that crossover occurs. The SVLC (Synapsing Variable-Length Crossover) algorithm recurrently synapses homologous genetic sequences together in order of length. The genomes are considered flexible, with crossover permitted only within the synapsed sections. Consequently, common sequences are automatically preserved and only the genetic differences are exchanged, independent of the length of those differences. In addition to providing a rationale for variable-length crossover, the algorithm provides a genotypic similarity metric for variable-length genomes, enabling standard niche-formation techniques to be utilised. In a simple variable-length test problem the SVLC algorithm outperforms current variable-length crossover techniques.
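A much-simplified sketch of the synapsing idea, assuming string genomes and using Python's difflib to find a single common block: the shared section is preserved and only the differing flanks are exchanged. The real SVLC algorithm recurrently synapses all common sections in order of length, which this sketch does not attempt.

```python
from difflib import SequenceMatcher

def synapse_crossover(p1, p2):
    """One-synapse crossover: align parents on their longest common
    block, keep it, and swap the material to its right."""
    m = SequenceMatcher(None, p1, p2).find_longest_match(0, len(p1), 0, len(p2))
    if m.size == 0:                        # nothing in common: no crossover
        return p1, p2
    common = p1[m.a:m.a + m.size]
    child1 = p1[:m.a] + common + p2[m.b + m.size:]
    child2 = p2[:m.b] + common + p1[m.a + m.size:]
    return child1, child2

print(synapse_crossover("AACCGGTT", "TTCCGGAA"))  # ('AACCGGAA', 'TTCCGGTT')
```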
Classification of lactose and mandelic acid THz spectra using subspace and wavelet-packet algorithms
Abstract:
This work compares classification results for lactose, mandelic acid and dl-mandelic acid, obtained on the basis of their respective THz transients. The performance of three different pre-processing algorithms applied to the time-domain signatures obtained using a THz-transient spectrometer is contrasted by evaluating classifier performance. A range of amplitudes of zero-mean white Gaussian noise is used to artificially degrade the signal-to-noise ratio of the time-domain signatures, generating the data sets presented to the classifier for both learning and validation purposes. This gradual degradation of the interferograms by increasing the noise level is equivalent to performing measurements with a reduced integration time. Three signal-processing algorithms were adopted for evaluating the complex insertion loss function of the samples under study: (a) standard evaluation by ratioing the sample spectra with the background spectra; (b) a subspace identification algorithm; and (c) a novel wavelet-packet identification procedure. Within-class and between-class dispersion metrics are adopted for the three data sets, and a discrimination metric evaluates how well the three classes can be distinguished within the frequency range 0.1-1.0 THz using the above algorithms.
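The abstract does not spell out the exact form of its dispersion metrics; a common Fisher-style choice, the ratio of between-class to within-class scatter, serves as a sketch of what such a discrimination metric computes (NumPy, with illustrative random data):

```python
import numpy as np

def discrimination_ratio(classes):
    """classes: list of (n_i, d) arrays, one per class. Returns the
    between-class / within-class scatter ratio; larger means the
    classes are easier to distinguish."""
    mu = np.mean(np.vstack(classes), axis=0)            # overall mean
    within = sum(((c - c.mean(axis=0)) ** 2).sum() for c in classes)
    between = sum(len(c) * ((c.mean(axis=0) - mu) ** 2).sum() for c in classes)
    return between / within

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (50, 4))      # class 1 (illustrative)
b = rng.normal(3.0, 1.0, (50, 4))      # class 2, well separated
print(discrimination_ratio([a, b]))
```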
Abstract:
The THz water content index of a sample is defined, and the advantages of using such a metric to estimate a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the error in the estimated water content index.
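Since the abstract does not give the definition of the index, here is a generic first-order error-propagation sketch for any index computed from reflectances at two frequencies; the ratio-based form of the index below is purely hypothetical.

```python
import math

def propagate(f, r1, r2, dr1, dr2, h=1e-6):
    """First-order error propagation through I = f(R1, R2),
    with partial derivatives estimated by central differences."""
    df_dr1 = (f(r1 + h, r2) - f(r1 - h, r2)) / (2 * h)
    df_dr2 = (f(r1, r2 + h) - f(r1, r2 - h)) / (2 * h)
    return math.sqrt((df_dr1 * dr1) ** 2 + (df_dr2 * dr2) ** 2)

index = lambda r1, r2: (r1 - r2) / (r1 + r2)     # hypothetical index form
print(propagate(index, 0.42, 0.35, 0.01, 0.01))  # error in the index
```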