Abstract:
Nurse rostering is a difficult search problem with many constraints. In the literature, a number of approaches have been investigated, including penalty function methods, to tackle these constraints within genetic algorithm frameworks. In this paper, we investigate an extension of a previously proposed stochastic ranking method, which has demonstrated superior performance to other constraint handling techniques when tested on a set of constrained optimisation benchmark problems. An initial experiment on nurse rostering problems demonstrates that the stochastic ranking method is better at finding feasible solutions but fails to obtain good results with regard to the objective function. To improve the performance of the algorithm, we hybridise it with a recently proposed simulated annealing hyper-heuristic within a local search and genetic algorithm framework. The hybrid algorithm shows significant improvement over both the genetic algorithm with stochastic ranking and the simulated annealing hyper-heuristic alone. It also considerably outperforms the methods that previously held the best known results in the literature.
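Purely as an illustration of the base technique named above (not the paper's extension or its hybrid), Runarsson and Yao's stochastic ranking can be sketched in Python as follows; the function names, the comparison probability pf = 0.45 and the population representation are assumptions of the sketch:

    import random

    def stochastic_rank(pop, f, phi, pf=0.45):
        """Stochastic ranking (Runarsson & Yao, 2000): bubble-sort-like passes
        that compare adjacent individuals by objective f when both are feasible,
        or with probability pf; otherwise by constraint violation phi
        (phi(x) == 0 means x is feasible). Minimisation is assumed."""
        idx = list(range(len(pop)))
        for _ in range(len(pop)):
            swapped = False
            for i in range(len(pop) - 1):
                a, b = pop[idx[i]], pop[idx[i + 1]]
                if (phi(a) == 0 and phi(b) == 0) or random.random() < pf:
                    worse = f(a) > f(b)          # compare by objective
                else:
                    worse = phi(a) > phi(b)      # compare by violation
                if worse:
                    idx[i], idx[i + 1] = idx[i + 1], idx[i]
                    swapped = True
            if not swapped:                      # no swaps: ranking settled
                break
        return [pop[j] for j in idx]
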
Abstract:
This article examines the national and regional pressures in Northern Ireland in the post-war period for parity in public sector pay with the rest of the UK. Northern Ireland had a devolved legislature and government within the UK from 1921 and was constitutionally in an essentially federal relationship with the rest of the UK. However, the Stormont Government chose to use legislative devolution to minimize policy differences with the rest of the UK. The article highlights the national industrial relations environment as the backdrop for provincial developments in pay setting. It establishes the important role played by the Social Services Agreement negotiated with the Labour Government at Westminster in triggering the principle of parity in public sector pay in the early post-war years. The principle of pay parity subsequently became a benchmark for regional trade union coercive comparisons in collective bargaining across the devolved public sector. The article highlights the policy relevance of these developments both to the UK Treasury and to devolved governments in the UK as they address the issue of regional public sector pay.
Abstract:
Purpose: Environmental turbulence, including rapid changes in technology and markets, has resulted in the need for new approaches to performance measurement and benchmarking. There is a need for studies that attempt to measure and benchmark upstream, leading or developmental aspects of organizations. Therefore, the aim of this paper is twofold: first, to conduct an in-depth case analysis of lead performance measurement and benchmarking, leading to the further development of a conceptual model derived from the extant literature and initial survey data; and second, to outline future research agendas that could further develop the framework and the subject area.
Design/methodology/approach: A multiple case analysis involving repeated in-depth interviews with managers in organisational areas of upstream influence in the case organisations.
Findings: It was found that the effect of external drivers for lead performance measurement and benchmarking was mediated by organisational context factors, such as the level of progression in business improvement methods. Moreover, the business improvement methods legitimated for this purpose, although typical, had been extended beyond their original purpose through the development of bespoke sets of lead measures.
Practical implications: Examples of methods and lead measures are given that can be used by organizations in developing a programme of lead performance measurement and benchmarking.
Originality/value: There is a paucity of in-depth studies relating to the theory and practice of lead performance measurement and benchmarking in organisations.
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lesort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of "synchronized flow", Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow '99, Springer, Berlin, 2000, pp. 383–388] highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938], whereas the microscopic part is a Follow-the-Leader type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model we define precisely the translation of boundary conditions at the interfaces, and for the second we explain the synchronization processes. Furthermore, through numerical simulations we show that wave propagation is not disturbed and that mass is accurately conserved when passing from one traffic representation to the other.
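The microscopic half referred to above is a Follow-the-Leader model; as a rough illustration only (the paper's contribution lies in the interface and synchronization handling, which is not shown), a Gazis-Herman-Rothery-style Euler update might look like this in Python, with the sensitivity lam and gap exponent l chosen arbitrarily:

    def follow_the_leader_step(x, v, dt, lam=0.5, l=1.0):
        """One explicit Euler step of a Gazis-Herman-Rothery-style model:
        dv_i/dt = lam * (v_{i-1} - v_i) / (x_{i-1} - x_i)**l for each follower;
        the lead vehicle (index 0) keeps its current speed. Positions x are
        ordered front to back, so x[i-1] - x[i] is the gap ahead of car i."""
        new_v = list(v)
        for i in range(1, len(x)):
            gap = x[i - 1] - x[i]
            new_v[i] = v[i] + dt * lam * (v[i - 1] - v[i]) / gap ** l
        new_x = [xi + dt * vi for xi, vi in zip(x, new_v)]
        return new_x, new_v
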
Abstract:
The university course timetabling problem involves assigning a given number of events into a limited number of timeslots and rooms under a given set of constraints; the objective is to satisfy the hard constraints (essential requirements) and minimize the violation of soft constraints (desirable requirements). In this study we employed a Dual-sequence Simulated Annealing (DSA) algorithm as an improvement algorithm. The Round Robin (RR) algorithm is used to control the selection of neighbourhood structures within DSA. The performance of our approach is tested over eleven benchmark datasets. Experimental results show that our approach is able to generate competitive results when compared with other state-of-the-art techniques.
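For illustration only, the skeleton below shows how a round-robin choice of neighbourhood structures can be wired into a simulated annealing loop; it uses plain Metropolis acceptance rather than the paper's dual-sequence rule, and all names and parameter values are assumptions of the sketch:

    import math, random
    from itertools import cycle

    def sa_round_robin(init, cost, neighbourhoods, t0=10.0, alpha=0.995, iters=20000):
        """Simulated annealing in which the neighbourhood structure applied at
        each iteration is chosen round-robin from `neighbourhoods` (each a
        function mapping a solution to a perturbed copy). Acceptance here is
        plain Metropolis; DSA's dual-sequence rule is not reproduced."""
        current, best = init, init
        t = t0
        picker = cycle(neighbourhoods)           # round-robin move selection
        for _ in range(iters):
            candidate = next(picker)(current)
            delta = cost(candidate) - cost(current)
            if delta <= 0 or random.random() < math.exp(-delta / t):
                current = candidate
                if cost(current) < cost(best):
                    best = current
            t *= alpha                           # geometric cooling
        return best
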
Abstract:
The linkage between the impact of assessment and compliance with children's rights is a connection which, although seemingly obvious, is rarely made, particularly by governments which, as signatories to the relevant human rights treaties, have the primary responsibility for ensuring that educational practice is compatible with international children's rights standards. While some jurisdictions are explicit about adherence to children's rights frameworks in general policy documentation, such a commitment rarely features when the focus is on assessment and testing. Thus, despite significant public and academic attention to the consequences of assessment for children, and despite governments' commitments to working within children's rights standards, the two are rarely considered together. This paper examines the implications for the policy, process and practice of assessment in light of international human rights standards. Three key children's rights principles and standards are used as a critical lens to examine assessment policy and practice: (1) best interests; (2) non-discrimination; and (3) participation. The paper seeks new insights into the complexities of assessment practice from the critical perspective of children's rights and argues that such standards not only provide a convenient benchmark for developing, implementing and evaluating assessment practices, but also acknowledge the significance of assessment in the delivery of children's rights to, in and through education more generally.
Abstract:
As a potential alternative to CMOS technology, QCA provides an interesting paradigm for both communication and computation. However, QCA's unique four-phase clocking scheme and timing constraints present serious timing issues for interconnection and feedback. In this work, a cut-set retiming design procedure is proposed to resolve these QCA timing issues. The proposed design procedure accommodates QCA's unique characteristics by performing delay-transfer and time-scaling to reallocate the existing delays so as to achieve efficient clocking zone assignment. Cut-set retiming makes it possible to effectively design relatively complex QCA circuits that include feedback, exploiting the synchronization, deep pipelining and local interconnection characteristics common to both QCA and systolic architectures. As a case study, a systolic Montgomery modular multiplier is designed to illustrate the procedure. Furthermore, a nonsystolic architecture, the S27 benchmark circuit, is designed and compared with previous designs. The comparison shows that the cut-set retiming method achieves a more efficient design, with reductions of 22%, 44%, and 46% in cell count, area, and latency, respectively.
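The delay-transfer step at the heart of cut-set retiming can be illustrated in a few lines of Python; this sketch omits the time-scaling step and the QCA clocking-zone mapping, and assumes a circuit given as an edge-to-delay-count map with the cut-set split into its two crossing directions:

    def retime_cut_set(delays, forward_edges, backward_edges, k=1):
        """Delay-transfer across a cut-set: remove k delays from every edge
        crossing the cut in the forward direction and add k to every edge
        crossing backward. Input-to-output path delays are preserved, which
        is what makes the transformation a legal retiming. In the QCA
        setting each delay corresponds to a clocking-zone stage."""
        retimed = dict(delays)
        for e in forward_edges:
            if retimed[e] < k:
                raise ValueError("retiming would make edge delay negative: %r" % (e,))
            retimed[e] -= k
        for e in backward_edges:
            retimed[e] += k
        return retimed
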
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a given number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm based on the physics of attraction and repulsion, moving sample points toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary, or 'level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the 'level' within the search process. The proposed method has been applied to a range of benchmark university course timetabling test problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to produce improved solutions over those currently published. We believe this is due to the combination of the two approaches and the ability of the resultant algorithm to converge solutions throughout the search process.
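A minimal sketch of the great deluge skeleton with a dynamically lowered level, as described above; the force_rate callable stands in for the rate the paper derives from the EM attraction-repulsion force, and all names and defaults here are illustrative rather than the paper's algorithm:

    def great_deluge(init, cost, neighbour, level0, force_rate, iters=10000):
        """Great deluge skeleton: accept a candidate if it improves on the
        current solution or its cost lies at or below the `level` boundary,
        then lower the level. `force_rate(step)` is a placeholder for the
        EM-derived decreasing rate described in the abstract."""
        current = best = init
        level = level0
        for step in range(iters):
            candidate = neighbour(current)
            c = cost(candidate)
            if c <= cost(current) or c <= level:
                current = candidate
                if c < cost(best):
                    best = current
            level -= force_rate(step)            # dynamic lowering of the boundary
        return best
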
Abstract:
Low energy antiprotons have been used previously to provide benchmark data for theories of atomic collisions. Here we present measurements of the cross section for single, nondissociative ionization of molecular hydrogen by impact of antiprotons with kinetic energies in the range 2–11 keV, i.e., in the velocity interval 0.3–0.65 a.u. We find a cross section that is proportional to the projectile velocity, which is quite unlike the behavior of corresponding atomic cross sections and which has never previously been observed experimentally.
Abstract:
To improve classification performance with Support Vector Machines (SVMs) while reducing model selection time, this paper introduces Differential Evolution as a heuristic method for model selection in two-class SVMs with an RBF kernel. The model selection method and the related tuning algorithm are both presented. Experimental results from application to a selection of benchmark datasets for SVMs show that this method can produce an optimized classification in less time and with higher accuracy than a classical grid search. A comparison with a Particle Swarm Optimization (PSO) based alternative is also included.
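As a rough, self-contained illustration of the idea (not the paper's own tuning algorithm), one can tune the (C, gamma) pair of an RBF-kernel SVM with SciPy's built-in Differential Evolution and scikit-learn, scoring candidates by cross-validated accuracy; the dataset, search bounds and iteration budget are arbitrary choices for the sketch:

    import numpy as np
    from scipy.optimize import differential_evolution
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.datasets import load_breast_cancer

    # Tune (C, gamma) in log10 space by Differential Evolution,
    # minimising the negated 5-fold cross-validated accuracy.
    X, y = load_breast_cancer(return_X_y=True)

    def neg_cv_accuracy(params):
        C, gamma = 10.0 ** params[0], 10.0 ** params[1]
        clf = SVC(kernel="rbf", C=C, gamma=gamma)
        return -cross_val_score(clf, X, y, cv=5).mean()

    result = differential_evolution(neg_cv_accuracy,
                                    bounds=[(-2, 4), (-6, 1)],  # log10(C), log10(gamma)
                                    maxiter=20, seed=0)
    print("best log10(C), log10(gamma):", result.x, "accuracy:", -result.fun)
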
Abstract:
We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a bidimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal for a bipartite entangled state composed of a single photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty of observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.
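For reference (a standard fact, not a result of the paper), the Clauser-Horne-Shimony-Holt combination behind the inequality mentioned above is, for dichotomic measurement settings a, a' and b, b' on the two parts,

    \[
      S = E(a,b) + E(a,b') + E(a',b) - E(a',b'),
      \qquad |S| \le 2 \ \text{(local realism)},
      \qquad |S| \le 2\sqrt{2} \ \text{(quantum bound)},
    \]

so "maximal violation" in the abstract means reaching the quantum bound |S| = 2\sqrt{2}.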
Abstract:
A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
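The abstract does not give the FCDR update rule; purely as an illustration of the two ingredients, fuzzy decoding of a variable from membership degrees and a dynamic narrowing of its range might look like this in Python (the function names and shrink factor are invented for the sketch):

    def fuzzy_decode(degrees, centres):
        """Fuzzy-coding decode step: a chromosome carries a degree of
        membership for each fuzzy set of a variable; the variable's value is
        the membership-weighted mean of the fuzzy set centres."""
        total = sum(degrees)
        return sum(d * c for d, c in zip(degrees, centres)) / total

    def shrink_range(lo, hi, best, factor=0.5):
        """Recentre a variable's range on the current best value and shrink
        it, so that the fuzzy sets of later generations cover a tighter
        region of the search space."""
        half = (hi - lo) * factor / 2.0
        return max(lo, best - half), min(hi, best + half)
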
Abstract:
Ion acceleration resulting from the interaction of ultra-high intensity (2 × 10^20 W/cm^2) and ultra-high contrast (~10^10) laser pulses with 0.05–10 μm thick Al foils at normal (0°) and 35° laser incidence is investigated. When decreasing the target thickness from 10 μm down to 0.05 μm, the accelerated ions become less divergent and the ion flux increases, particularly at normal (0°) laser incidence on the target. A laser energy conversion into protons of ~6.5% is estimated at 35° laser incidence. Experimental results are in reasonable agreement with theoretical estimates and can be a benchmark for further theoretical and computational work. [doi:10.1063/1.3643133]
Abstract:
Automated examination timetabling has been addressed by a wide variety of methodologies and techniques over the last ten years or so. Many of the methods in this broad range of approaches have been evaluated on a collection of benchmark instances provided at the University of Toronto in 1996. Whilst the existence of these datasets has provided an invaluable resource for research into examination timetabling, the instances have significant limitations in terms of their relevance to real-world examination timetabling in modern universities. This paper presents a detailed model which draws upon experiences of implementing examination timetabling systems in universities in Europe, Australasia and America. This model represents the problem that was presented in the 2nd International Timetabling Competition (ITC2007). In presenting this detailed new model, this paper describes the examination timetabling track introduced as part of the competition. In addition to the model, the datasets used in the competition are also based on current real-world instances introduced by EventMAP Limited. It is hoped that the interest generated as part of the competition will lead to the development, investigation and application of a host of novel and exciting techniques to address this important real-world search domain. Moreover, the motivating goal of this paper is to close the currently existing gap between theory and practice in examination timetabling by presenting the research community with a rigorous model which represents the complexity of the real-world situation. In this paper we describe the model and its motivations, followed by a full formal definition.
Abstract:
The choice of radix is crucial for multi-valued logic synthesis. Practical examples, however, reveal that it is not always possible to find the optimal radix when taking actual physical parameters of multi-valued operations into consideration. In other words, each radix has its advantages and disadvantages. Our proposal is to synthesise logic in different radices, so that it may benefit from their combination. The theory presented in this paper is based on Reed-Muller expansions over Galois field arithmetic. The work aims, firstly, to estimate the potential of the new approach and, secondly, to analyse its impact on circuit parameters down to the level of physical gates. The presented theory has been applied to real-life examples focusing on cryptographic circuits, where Galois fields find frequent application. The benchmark results show that the approach creates a new dimension for the trade-off between circuit parameters and provides information on how the implemented functions are related to different radices.
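As a concrete anchor for the radix-2 special case of the expansions discussed above, the positive-polarity Reed-Muller coefficients of a Boolean function over GF(2) can be computed with the standard butterfly transform; this sketch is illustrative background, not the paper's multi-radix method:

    def reed_muller_spectrum(truth_table):
        """Positive-polarity Reed-Muller coefficients of a Boolean function
        over GF(2), via the in-place butterfly transform (XOR is addition in
        GF(2)). The truth table must have length 2**n."""
        coeffs = list(truth_table)
        step = 1
        while step < len(coeffs):
            for i in range(0, len(coeffs), 2 * step):
                for j in range(i, i + step):
                    coeffs[j + step] ^= coeffs[j]
            step *= 2
        return coeffs

    # f(x1, x0) = x1 XOR x0 has spectrum [0, 1, 1, 0]: the terms x0 and x1.
    print(reed_muller_spectrum([0, 1, 1, 0]))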