928 results for BENCHMARK
Abstract:
A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
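The indirect decoding step at the heart of fuzzy coding can be sketched in a few lines. This is a hedged illustration only, not the authors' FCDR algorithm: it shows how a chromosome's membership degrees and the fuzzy sets' centres (hypothetical values here) determine a variable's value; the dynamic-range extension would additionally rescale those centres as the search proceeds.

```python
import numpy as np

def fuzzy_decode(memberships, centers):
    """Decode a variable's value from fuzzy-set membership degrees.

    The value is taken as the membership-weighted average of the
    fuzzy sets' centers (a common defuzzification rule).
    """
    m = np.asarray(memberships, dtype=float)
    c = np.asarray(centers, dtype=float)
    return float(np.dot(m, c) / m.sum())

# A chromosome encodes membership degrees for three fuzzy sets whose
# centers span the current range of the variable (here 0..10).
value = fuzzy_decode([0.2, 0.7, 0.1], centers=[0.0, 5.0, 10.0])
```

Under FCDR the centre positions themselves would be reallocated between generations, so the same membership degrees decode to different values as the search region evolves.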
Abstract:
Ion acceleration resulting from the interaction of ultra-high intensity (2 × 10^20 W/cm^2) and ultra-high contrast (~10^10) laser pulses with 0.05-10 μm thick Al foils at normal (0°) and 35° laser incidence is investigated. When decreasing the target thickness from 10 μm down to 0.05 μm, the accelerated ions become less divergent and the ion flux increases, particularly at normal (0°) laser incidence on the target. A laser energy conversion into protons of ~6.5% is estimated at 35° laser incidence. Experimental results are in reasonable agreement with theoretical estimates and can be a benchmark for further theoretical and computational work. (C) 2011 American Institute of Physics. [doi:10.1063/1.3643133]
Abstract:
Automated examination timetabling has been addressed by a wide variety of methodologies and techniques over the last ten years or so. Many of the methods in this broad range of approaches have been evaluated on a collection of benchmark instances provided at the University of Toronto in 1996. Whilst the existence of these datasets has provided an invaluable resource for research into examination timetabling, the instances have significant limitations in terms of their relevance to real-world examination timetabling in modern universities. This paper presents a detailed model which draws upon experiences of implementing examination timetabling systems in universities in Europe, Australasia and America. This model represents the problem that was presented in the 2nd International Timetabling Competition (ITC2007). In presenting this detailed new model, this paper describes the examination timetabling track introduced as part of the competition. In addition to the model, the datasets used in the competition are also based on current real-world instances introduced by EventMAP Limited. It is hoped that the interest generated as part of the competition will lead to the development, investigation and application of a host of novel and exciting techniques to address this important real-world search domain. Moreover, the motivating goal of this paper is to close the currently existing gap between theory and practice in examination timetabling by presenting the research community with a rigorous model which represents the complexity of the real-world situation. In this paper we describe the model and its motivations, followed by a full formal definition.
Abstract:
The choice of radix is crucial for multi-valued logic synthesis. Practical examples, however, reveal that it is not always possible to find the optimal radix when taking into consideration actual physical parameters of multi-valued operations. In other words, each radix has its advantages and disadvantages. Our proposal is to synthesise logic in different radices, so that it may benefit from their combination. The theory presented in this paper is based on Reed-Muller expansions over Galois field arithmetic. The work aims, firstly, to estimate the potential of the new approach and, secondly, to analyse its impact on circuit parameters down to the level of physical gates. The presented theory has been applied to real-life examples focusing on cryptographic circuits, where Galois fields find frequent application. The benchmark results show that the approach creates a new dimension for the trade-off between circuit parameters and provides information on how the implemented functions are related to different radices.
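The Galois-field arithmetic underlying such Reed-Muller expansions can be illustrated in miniature. This is a generic sketch of multiplication in GF(4), the smallest non-prime field (radix 4), not the paper's synthesis procedure; addition in this field is plain XOR.

```python
def gf4_mul(a, b):
    """Multiply two elements of GF(4) = GF(2)[x]/(x^2 + x + 1).

    Elements 0..3 encode polynomials b1*x + b0, one bit per
    GF(2) coefficient; addition in GF(4) is bitwise XOR.
    """
    # carry-less (polynomial) multiplication over GF(2)
    p = 0
    for i in range(2):
        if (b >> i) & 1:
            p ^= a << i
    # reduce modulo the irreducible polynomial x^2 + x + 1 (0b111)
    if p & 0b1000:
        p ^= 0b111 << 1
    if p & 0b100:
        p ^= 0b111
    return p
```

Multi-radix synthesis would combine expansions over fields like this one with ordinary binary (GF(2)) logic, trading off gate count against other circuit parameters.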
Abstract:
Pilkington Glass Activ(TM) represents a possible successor to P25 TiO2, especially as a benchmark photocatalyst film against which other photocatalytic or photo-induced superhydrophilic (PSH) self-cleaning films can be compared. Activ(TM) is a glass product with a clear, colourless, effectively invisible, photocatalytic coating of titania that also exhibits PSH. Although not as active as a film of P25 TiO2, Activ(TM)'s vastly superior mechanical stability, very reproducible activity and widespread commercial availability make it highly attractive as a reference photocatalytic film. The photocatalytic and PSH properties of Activ(TM) are studied in some detail and the results reported. Thus, the kinetics of stearic acid destruction (a 104-electron process) are zero order over the stearic acid range 4-129 monolayers and exhibit formal quantum efficiencies (FQEs) of 0.7 × 10^-5 and 10.2 × 10^-5 molecules per photon when irradiated with light of 365 ± 20 and 254 nm, respectively; the latter also appears to be the quantum yield for Activ(TM) at 254 nm. The kinetics of stearic acid destruction exhibit Langmuir-Hinshelwood-like saturation-type kinetics as a function of oxygen partial pressure, with no destruction occurring in the absence of oxygen and the rate of destruction appearing the same in air and oxygen atmospheres. Further kinetic work revealed a Langmuir adsorption-type constant for oxygen of 0.45 ± 0.16 kPa^-1 and an activation energy of 19 ± 1 kJ mol^-1. A study of the PSH properties of Activ(TM) reveals a high water contact angle (67°) before ultra-bandgap irradiation, reduced to 0° after prolonged irradiation. The kinetics of PSH are similar to those reported by others for sol-gel films under low levels of UV light. The kinetics of contact angle recovery in the dark appear monophasic, in contrast to the biphasic kinetics reported recently by others for sol-gel films [J. Phys. Chem. B 107 (2003) 1028]. Overall, Activ(TM) appears a very suitable reference material for semiconductor film photocatalysis. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
Speeding up sequential programs on multicores is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful. This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to auto-parallelize several SPECint benchmarks. For the benchmark with most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor. The annotations constitute a parallel programming model that relies extensively on a sequential program representation. As a result, the complexity of debugging is not increased, and the annotations do not obscure the source code. These properties could prove valuable in increasing the efficiency of parallel programming.
Abstract:
As a promising method for pattern recognition and function estimation, least squares support vector machines (LS-SVM) express the training in terms of solving a linear system instead of the quadratic programming problem of conventional support vector machines (SVM). In this paper, by using the information provided by the equality constraint, we transform the minimization problem with a single equality constraint in LS-SVM into an unconstrained minimization problem, and then propose reduced formulations for LS-SVM. With this transformation, the number of runs of the conjugate gradient (CG) method, the most time-consuming step in obtaining the numerical solution, is reduced from two (as proposed by Suykens et al., 1999) to one. A comparison of the computational speed of our method with the CG method of Suykens et al. and with the first-order and second-order SMO methods on several benchmark data sets shows a reduction in training time of up to 44%. (C) 2011 Elsevier B.V. All rights reserved.
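The linear system that both the CG approach and the reduced formulations start from can be sketched directly. This is a minimal illustration of standard LS-SVM classifier training via a dense direct solve, not the paper's reduced formulation or a CG iteration; the RBF kernel, the toy data and the hyper-parameter values are illustrative assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """Train an LS-SVM classifier by solving its KKT linear system
    [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
    (Suykens et al., 1999) instead of a QP."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))        # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = np.outer(y, y) * K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.r_[0.0, np.ones(n)])
    return sol[0], sol[1:]                      # bias b, multipliers alpha

def lssvm_predict(X_train, y, alpha, b, X_new, sigma=1.0):
    d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return np.sign(K @ (alpha * y) + b)

# Toy two-point problem: every training point acts as a support vector.
X = np.array([[0.0], [2.0]])
y = np.array([-1.0, 1.0])
b, alpha = lssvm_train(X, y)
preds = lssvm_predict(X, y, alpha, b, X)
```

For n training points the dense solve costs O(n^3); CG (and the reduced formulation discussed in the abstract) matters precisely because it avoids forming and factorising this system at scale.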
Abstract:
Support vector machines (SVMs), though accurate, are not preferred in applications requiring high classification speed or when deployed in systems of limited computational resources, due to the large number of support vectors involved in the model. To overcome this problem we have devised a primal SVM method with the following properties: (1) it solves for the SVM representation without the need to invoke the representer theorem, (2) forward and backward selections are combined to approach the final globally optimal solution, and (3) a criterion is introduced for identification of support vectors leading to a much reduced support vector set. In addition to introducing this method the paper analyzes the complexity of the algorithm and presents test results on three public benchmark problems and a human activity recognition application. These applications demonstrate the effectiveness and efficiency of the proposed algorithm.
Abstract:
Shape corrections to the standard approximate Kohn-Sham exchange-correlation (xc) potentials are considered with the aim to improve the excitation energies (especially for higher excitations) calculated with time-dependent density functional perturbation theory. A scheme of gradient-regulated connection (GRAC) of inner to outer parts of a model potential is developed. Asymptotic corrections based either on the potential of Fermi and Amaldi or of van Leeuwen and Baerends (LB) are seamlessly connected to the (shifted) xc potential of Becke and Perdew (BP) with the GRAC procedure, and are employed to calculate the vertical excitation energies of the prototype molecules N2, CO, CH2O, C2H4, C5NH5, C6H6, Li2, Na2, K2. The results are compared with those of the alternative interpolation scheme of Tozer and Handy as well as with the results of the potential obtained with the statistical averaging of (model) orbital potentials. Various asymptotically corrected potentials produce high-quality excitation energies, which in quite a few cases approach the benchmark accuracy of 0.1 eV for the electronic spectra. Based on these results, the potential BP-GRAC-LB is proposed for molecular response calculations, which is a smooth potential and a genuine "local" density functional with an analytical representation. (C) 2001 American Institute of Physics.
Abstract:
There are major concerns about the level of personal borrowing, particularly that sourced from credit cards. This paper charts the progress of an initiative to create a Responsible Lending Index (RLI) for the credit industry. The RLI proposed to voluntarily benchmark lending standards and promote best practice within the credit industry by involving suppliers of credit, customer representatives and regulators. However, despite initial support from some banks, consumer bodies and the Chair of the Treasury Select Committee, it failed to gain sufficient support from financial institutions in its original format. The primary reasons for this were the complexity of building such a robust index and the banks' trade body's fear of exposing its members to public scrutiny. A revised alternative, the Responsible Lending Initiative, was proposed which took these concerns into account. However, the Association for Payment Clearing Services (APACS), the trade body of the credit industry, then effectively destroyed the proposal. This article describes an attempt to address the challenges in the credit card industry through the initiation of the RLI, as reflected in stakeholder discourse and in the context of a wider concern, expressed by the stakeholders involved, about the need for greater responsibility in the banking industry's lending practices.
Abstract:
In this paper, we report a fully ab initio variational Monte Carlo study of the linear and periodic chain of hydrogen atoms, a prototype system providing the simplest example of strong electronic correlation in low dimensions. In particular, we prove that numerical accuracy comparable to that of benchmark density-matrix renormalization-group calculations can be achieved by using a highly correlated Jastrow-antisymmetrized geminal power variational wave function. Furthermore, by using the so-called "modern theory of polarization" and by studying the spin-spin and dimer-dimer correlation functions, we have characterized in detail the crossover between the weakly and strongly correlated regimes of this atomic chain. Our results show that variational Monte Carlo provides an accurate and flexible alternative to the highly correlated methods of quantum chemistry which, at variance with those methods, can also be applied to a strongly correlated solid in low dimensions close to a crossover or a phase transition.
Abstract:
The association between poor metabolic control and the microvascular complications of diabetes is now well established, but the relationship between long-term metabolic control and the accelerated atherosclerosis of diabetes is as yet poorly defined. Hyperglycemia is the standard benchmark by which metabolic control is assessed. One mechanism by which elevated glucose levels may mediate vascular injury is through early and advanced glycation reactions affecting a wide variety of target molecules. The "glycation hypothesis" has developed over the past 30 years, evolving gradually into a "carbonyl stress hypothesis" and taking into account not only the modification of proteins by glucose, but also the roles of oxidative stress, a wide range of reactive carbonyl-containing intermediates (derived not only from glucose but also from lipids), and a variety of extra- and intracellular target molecules. The final products of these reactions may now be termed "Either Advanced Glycation or Lipoxidation End-Products" or "EAGLEs." The ubiquity of carbonyl stress within the body, the complexity of the reactions involved, the variety of potential carbonyl intermediates and target molecules and their differing half-lives, and the slow development of the complications of diabetes all pose major challenges in dissecting the significance of these processes. The extent of the reactions tends to correlate with overall metabolic control, creating pitfalls in the interpretation of associative data. Many animal and cell culture studies, while supporting the hypothesis, must be viewed with caution in terms of relevance to human diabetes. In this article, the development of the carbonyl stress hypothesis is reviewed, and implications for present and future treatments to prevent complications are discussed.
Abstract:
Studying the flows of parent country nationals in multinational enterprises (MNEs) to subsidiary operations has a relatively long tradition. Studying flows of subsidiary employees to other subsidiaries, as third country nationals, and to the corporate headquarters, as inpatriates, however, has much less empirical pedigree. Drawing on a large-scale empirical study of MNEs in Ireland, this paper provides a benchmark of outward flows of international assignees from the Irish subsidiaries of foreign-owned MNEs to both corporate headquarters and other worldwide operations. Building on insights from the resource-based view and neo-institutional theory, we develop and test a theoretical model to explain outward staffing flows. The results show that almost half of all MNEs use some form of outward staffing flows from their Irish operations. Although the impact of specific variables in explaining inter-organization variation differs between the utilization of inpatriate and third country national assignments, overall we find that a number of headquarters, subsidiary, structural, and human resource systems factors emerge as strong predictors of outward staffing flows. © 2010 Wiley Periodicals, Inc.
Abstract:
In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., documents in which a word is present), or from syntactic/semantic types of words (e.g., dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build VSMs for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
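The composition step itself can be illustrated with the two standard baselines for distributional semantics, vector addition and element-wise multiplication. This is a generic sketch under stated assumptions: the four-dimensional vectors below are hypothetical stand-ins for the combined topic- plus type-based representations, not the paper's corpus-derived VSMs.

```python
import numpy as np

def compose(adj, noun, op="mult"):
    """Compose an adjective-noun phrase vector from its word vectors
    using the additive or multiplicative composition baseline."""
    return adj + noun if op == "add" else adj * noun

def cosine(u, v):
    """Cosine similarity, the usual metric for comparing a composed
    vector against an observed (corpus-derived) phrase vector."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Hypothetical combined vectors: topic-based dimensions followed by
# type-based (dependency) dimensions for each word.
adj   = np.array([0.9, 0.1, 0.4, 0.0])
noun  = np.array([0.2, 0.8, 0.3, 0.5])
phrase = compose(adj, noun)          # element-wise product
```

Evaluation would then score how well `phrase` matches a held-out target vector (a corpus-derived phrase vector, or in the paper's new task, a representation predicted from brain activity).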
Abstract:
This paper proposes a calculation method to determine power system response during small load perturbations or minor disturbances. The method establishes the initial value of the active power transient using a traditional reduction technique on the admittance matrix, which incorporates voltage variations in the determination. The method examines active power distribution among generators when several loads change simultaneously, and verifies that the superposition principle is applicable in this scenario. The theoretical derivation provided in the paper is validated by numerical simulations using a 3-generator, 9-bus benchmark model. The results indicate that the inclusion of voltage variation yields an independent and precise measure of active power response during transient conditions.
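The network-reduction step this kind of analysis relies on can be sketched with a standard Kron reduction of the bus admittance matrix to a retained (e.g. generator) bus set. This is a generic illustration with a made-up 3-bus network, not the paper's 9-bus benchmark or its full calculation method; once the reduced model is linear, responses to simultaneous load changes superpose, which is the property the paper verifies.

```python
import numpy as np

def kron_reduce(Y, keep):
    """Eliminate all buses not in `keep` from the admittance matrix Y,
    preserving the network as seen from the retained buses."""
    n = Y.shape[0]
    keep = list(keep)
    elim = [i for i in range(n) if i not in keep]
    Ykk = Y[np.ix_(keep, keep)]
    Yke = Y[np.ix_(keep, elim)]
    Yek = Y[np.ix_(elim, keep)]
    Yee = Y[np.ix_(elim, elim)]
    # Schur complement: Ykk - Yke * Yee^-1 * Yek
    return Ykk - Yke @ np.linalg.solve(Yee, Yek)

# Made-up 3-bus network: line 0-1 with admittance 1, line 1-2 with 2.
Y = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  3.0, -2.0],
              [ 0.0, -2.0,  2.0]])
Yred = kron_reduce(Y, keep=[0, 2])   # eliminate the middle bus
```

Eliminating bus 1 leaves the series combination of the two lines, 1·2/(1+2) = 2/3, between buses 0 and 2, as expected for a purely series path.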