111 results for attori, concorrenza, COOP, Akka, benchmark
Abstract:
Purpose: Environmental turbulence including rapid changes in technology and markets has resulted in the need for new approaches to performance measurement and benchmarking. There is a need for studies that attempt to measure and benchmark upstream, leading or developmental aspects of organizations. Therefore, the aim of this paper is twofold. The first is to conduct an in-depth case analysis of lead performance measurement and benchmarking leading to the further development of a conceptual model derived from the extant literature and initial survey data. The second is to outline future research agendas that could further develop the framework and the subject area.
Design/methodology/approach: A multiple case analysis involving repeated in-depth interviews with managers in organisational areas of upstream influence in the case organisations.
Findings: It was found that the effect of external drivers for lead performance measurement and benchmarking was mediated by organisational context factors such as level of progression in business improvement methods. Moreover, the legitimation of the business improvement methods used for this purpose, although typical, had been extended beyond their original purpose with the development of bespoke sets of lead measures.
Practical implications: Examples of methods and lead measures are given that can be used by organizations in developing a programme of lead performance measurement and benchmarking.
Originality/value: There is a paucity of in-depth studies relating to the theory and practice of lead performance measurement and benchmarking in organisations.
Abstract:
Despite the simultaneous progress of traffic modelling on both the macroscopic and microscopic fronts, recent works [E. Bourrel, J.B. Lessort, Mixing micro and macro representation of traffic flow: a hybrid model based on the LWR theory, Transport. Res. Rec. 1852 (2003) 193–200; D. Helbing, M. Treiber, Critical discussion of "synchronized flow", Coop. Transport. Dyn. 1 (2002) 2.1–2.24; A. Hennecke, M. Treiber, D. Helbing, Macroscopic simulations of open systems and micro–macro link, in: D. Helbing, H.J. Herrmann, M. Schreckenberg, D.E. Wolf (Eds.), Traffic and Granular Flow '99, Springer, Berlin, 2000, pp. 383–388] highlighted that one of the most promising ways to simulate traffic flow efficiently on large road networks is a clever combination of both traffic representations: hybrid modelling. Our focus in this paper is to propose two hybrid models in which the macroscopic (resp. mesoscopic) part is based on a class of second order models [A. Aw, M. Rascle, Resurrection of second order models of traffic flow?, SIAM J. Appl. Math. 60 (2000) 916–938], whereas the microscopic part is a Follow-the-Leader type model [D.C. Gazis, R. Herman, R.W. Rothery, Nonlinear follow-the-leader models of traffic flow, Oper. Res. 9 (1961) 545–567; R. Herman, I. Prigogine, Kinetic Theory of Vehicular Traffic, American Elsevier, New York, 1971]. For the first hybrid model we define precisely the translation of boundary conditions at the interfaces, and for the second one we explain the synchronization processes. Furthermore, through numerical simulations we show that wave propagation is not disturbed and that mass is accurately conserved when passing from one traffic representation to the other.
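For orientation, the two model classes named above have the following textbook forms; these are the standard formulations from the cited literature and not necessarily the exact variants used in the paper. The Aw-Rascle second-order model couples conservation of mass with an equation for a velocity-related quantity:

\[ \partial_t \rho + \partial_x(\rho v) = 0, \qquad \partial_t\big(v + p(\rho)\big) + v\,\partial_x\big(v + p(\rho)\big) = 0, \]

where p(\rho) is a pressure-like function of the density. The Gazis-Herman-Rothery follow-the-leader law gives the acceleration of vehicle n in response to its leader n+1:

\[ \ddot{x}_n(t+\tau) = \alpha\,\frac{\big[\dot{x}_n(t+\tau)\big]^{m}}{\big[x_{n+1}(t) - x_n(t)\big]^{l}}\,\big(\dot{x}_{n+1}(t) - \dot{x}_n(t)\big), \]

with reaction time \tau and model parameters \alpha, m, l.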
Abstract:
The university course timetabling problem involves assigning a given number of events into a limited number of timeslots and rooms under a given set of constraints; the objective is to satisfy the hard constraints (essential requirements) and minimize the violation of soft constraints (desirable requirements). In this study we employed a Dual-sequence Simulated Annealing (DSA) algorithm as an improvement algorithm. The Round Robin (RR) algorithm is used to control the selection of neighbourhood structures within DSA. The performance of our approach is tested over eleven benchmark datasets. Experimental results show that our approach is able to generate competitive results when compared with other state-of-the-art techniques.
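As an illustration of the control scheme described above, here is a minimal Python sketch of a Round-Robin-controlled annealing loop: neighbourhood structures (passed in as move functions) are applied in fixed rotation, and worsening moves are accepted with a temperature-dependent probability. This is a simplified single-sequence loop for illustration only; the dual-sequence bookkeeping of DSA and the actual timetabling move operators are not modelled here.

import math, random
from itertools import cycle

def round_robin_annealing(solution, cost, neighbourhoods, temp=1.0,
                          cooling=0.999, iterations=10000):
    """Improvement loop: neighbourhood structures are applied in a fixed
    round-robin order; worsening moves are accepted with a
    temperature-dependent probability (simplified, single-sequence)."""
    best, best_cost = solution, cost(solution)
    current, current_cost = best, best_cost
    order = cycle(neighbourhoods)          # Round Robin control of moves
    for _ in range(iterations):
        move = next(order)                 # next neighbourhood structure
        candidate = move(current)
        cand_cost = cost(candidate)
        delta = cand_cost - current_cost
        if delta <= 0 or random.random() < math.exp(-delta / max(temp, 1e-12)):
            current, current_cost = candidate, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        temp *= cooling                    # geometric cooling
    return best, best_cost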
Abstract:
The linkage between the impact of assessment and compliance with children's rights is a connection which, although seemingly obvious, is nonetheless rarely made, particularly by governments, which, as signatories to the relevant human rights treaties, have the primary responsibility for ensuring that educational practice is compatible with international children's rights standards. While some jurisdictions are explicit about an adherence to children's rights frameworks in general policy documentation, such a commitment rarely features when the focus is on assessment and testing. Thus, in spite of significant public and academic attention given to the consequences of assessment for children, and governments committed to working within children's rights standards, the two are rarely considered together. This paper examines the implications for the policy, process and practice of assessment in light of international human rights standards. Three key children's rights principles and standards are used as a critical lens to examine assessment policy and practice: (1) best interests; (2) non-discrimination; and (3) participation. The paper seeks new insights into the complexities of assessment practice from the critical perspective of children's rights and argues that such standards not only provide a convenient benchmark for developing, implementing and evaluating assessment practices, but also acknowledge the significance of assessment in the delivery of children's rights to, in and through education more generally.
Abstract:
As a potential alternative to CMOS technology, QCA provides an interesting paradigm for both communication and computation. However, QCA's unique four-phase clocking scheme and timing constraints present serious timing issues for interconnection and feedback. In this work, a cut-set retiming design procedure is proposed to resolve these QCA timing issues. The proposed design procedure accommodates QCA's unique characteristics by performing delay-transfer and time-scaling to reallocate the existing delays so as to achieve efficient clocking-zone assignment. Cut-set retiming makes it possible to effectively design relatively complex QCA circuits that include feedback. It exploits the characteristics of synchronization, deep pipelining and local interconnection common to both QCA and systolic architectures. As a case study, a systolic Montgomery modular multiplier is designed to illustrate the procedure. Furthermore, a nonsystolic architecture, the S27 benchmark circuit, is designed and compared with previous designs. The comparison shows that the cut-set retiming method achieves a more efficient design, with reductions of 22%, 44%, and 46% in cell count, area, and latency, respectively.
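For context, classical retiming theory (of which cut-set retiming is a special case) assigns an integer lag r(v) to every node and changes the delay of an edge u → v from w(e) to

\[ w_r(e) = w(e) + r(v) - r(u), \qquad w_r(e) \ge 0, \]

without altering the circuit's function. Cut-set retiming moves a constant number of delays across a cut, adding them to all edges crossing in one direction and removing them from those crossing in the other, while time-scaling multiplies every edge delay by a common factor. These are the standard identities from retiming and systolic design; the mapping of the resulting delays onto QCA's four clocking zones is specific to the paper and is not reproduced here.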
Abstract:
This paper describes the development of a novel metaheuristic that combines an electromagnetic-like mechanism (EM) and the great deluge algorithm (GD) for the university course timetabling problem. This well-known timetabling problem assigns lectures to a given number of timeslots and rooms, maximizing the overall quality of the timetable while taking various constraints into account. EM is a population-based stochastic global optimization algorithm, based on an analogy with physics, that simulates the attraction and repulsion of sample points as they move toward optimality. GD is a local search procedure that allows worse solutions to be accepted based on a given upper boundary, or 'level'. In this paper, the dynamic force calculated from the attraction-repulsion mechanism is used as a decreasing rate to update the 'level' during the search. The proposed method has been applied to a range of benchmark university course timetabling problems from the literature. Moreover, the viability of the method has been tested by comparing its results with other results reported in the literature, demonstrating that the method is able to produce improved solutions over those currently published. We believe this is due to the combination of the two approaches and the ability of the resulting algorithm to keep its solutions converging throughout the search.
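To make the acceptance mechanism concrete, below is a minimal Python sketch of a Great Deluge loop: a candidate is accepted if it improves on the current solution or falls below a slowly lowered 'level'. In the paper the decreasing rate is derived from the EM attraction-repulsion force; here it is simply a fixed decay_rate parameter, and the neighbour and cost functions are placeholders.

def great_deluge(solution, cost, neighbour, decay_rate, iterations=10000):
    """Great Deluge acceptance: a candidate is accepted if it is no worse
    than the current solution or lies below the 'level'; the level is
    lowered every iteration.  In the hybrid method summarised above the
    decay rate would come from the EM force; here it is a plain constant."""
    best = current = solution
    best_cost = current_cost = cost(solution)
    level = best_cost                      # initial water level
    for _ in range(iterations):
        candidate = neighbour(current)
        cand_cost = cost(candidate)
        if cand_cost <= current_cost or cand_cost <= level:
            current, current_cost = candidate, cand_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
        level -= decay_rate                # lower the acceptance boundary
    return best, best_cost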
Abstract:
Low energy antiprotons have been used previously to give benchmark data for theories of atomic collisions. Here we present measurements of the cross section for single, nondissociative ionization of molecular hydrogen for impact of antiprotons with kinetic energies in the range 2-11 keV, i.e., in the velocity interval of 0.3-0.65 a.u. We find a cross section which is proportional to the projectile velocity, which is quite unlike the behavior of corresponding atomic cross sections, and which has never previously been observed experimentally.
Abstract:
To improve classification performance with Support Vector Machines (SVMs) while reducing model selection time, this paper introduces Differential Evolution, a heuristic method, for model selection in two-class SVMs with an RBF kernel. The model selection method and the related tuning algorithm are both presented. Experimental results on a selection of benchmark datasets for SVMs show that this method can produce an optimized classification in less time and with higher accuracy than a classical grid search. A comparison with a Particle Swarm Optimization (PSO) based alternative is also included.
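A minimal sketch of DE-based model selection for an RBF-kernel SVM, assuming off-the-shelf components (scipy's differential_evolution and scikit-learn's SVC with cross-validation) rather than the paper's own tuned algorithm; the log-space search bounds and the synthetic dataset are illustrative choices only.

import numpy as np
from scipy.optimize import differential_evolution
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Illustrative two-class dataset standing in for a benchmark set.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)

def objective(params):
    # Search in log-space for the RBF kernel hyperparameters C and gamma.
    C, gamma = 10.0 ** params[0], 10.0 ** params[1]
    score = cross_val_score(SVC(kernel="rbf", C=C, gamma=gamma), X, y, cv=5).mean()
    return -score                          # DE minimises, so negate accuracy

result = differential_evolution(objective, bounds=[(-2, 3), (-4, 1)],
                                maxiter=30, seed=0)
print("best log10(C), log10(gamma):", result.x, "CV accuracy:", -result.fun)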
Abstract:
We propose a hybrid approach to the experimental assessment of the genuine quantum features of a general system consisting of microscopic and macroscopic parts. We infer entanglement by combining dichotomic measurements on a bidimensional system and phase-space inference through the Wigner distribution associated with the macroscopic component of the state. As a benchmark, we investigate the feasibility of our proposal in a bipartite-entangled state composed of a single-photon and a multiphoton field. Our analysis shows that, under ideal conditions, maximal violation of a Clauser-Horne-Shimony-Holt-based inequality is achievable regardless of the number of photons in the macroscopic part of the state. The difficulty in observing entanglement when losses and detection inefficiency are included can be overcome by using a hybrid entanglement witness that allows efficient correction for losses in the few-photon regime.
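For reference, the standard Clauser-Horne-Shimony-Holt combination of correlators underlying such tests is

\[ S = E(a,b) + E(a,b') + E(a',b) - E(a',b'), \qquad |S| \le 2 \ \text{(local realism)}, \qquad |S| \le 2\sqrt{2} \ \text{(quantum)}, \]

where E(·,·) are correlation functions for pairs of dichotomic measurement settings. The specific witness used in the paper replaces direct dichotomic measurements on the macroscopic component with inference from the Wigner distribution, which is not captured by this generic form.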
Abstract:
A technique for automatic exploration of the genetic search region through fuzzy coding (Sharma and Irwin, 2003) has been proposed. Fuzzy coding (FC) provides the value of a variable on the basis of the optimum number of selected fuzzy sets and their effectiveness in terms of degree-of-membership. It is an indirect encoding method and has been shown to perform better than other conventional binary, Gray and floating-point encoding methods. However, the static range of the membership functions is a major problem in fuzzy coding, resulting in longer times to arrive at an optimum solution in large or complicated search spaces. This paper proposes a new algorithm, called fuzzy coding with a dynamic range (FCDR), which dynamically allocates the range of the variables to evolve an effective search region, thereby achieving faster convergence. Results are presented for two benchmark optimisation problems, and also for a case study involving neural identification of a highly non-linear pH neutralisation process from experimental data. It is shown that dynamic exploration of the genetic search region is effective for parameter optimisation in problems where the search space is complicated.
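As a rough illustration of the encoding idea (not the exact formulation of Sharma and Irwin, 2003), the Python sketch below decodes one fuzzy-coded parameter as the membership-weighted average of fuzzy-set centres spread over the current range; in FCDR that range would be updated dynamically as the search progresses.

import numpy as np

def decode_fuzzy_gene(memberships, lo, hi):
    """Decode one fuzzy-coded parameter: the gene stores a degree of
    membership for each fuzzy set; the parameter value is the
    membership-weighted average of the set centres, which are spread
    evenly over the current range [lo, hi].  Illustrative reading only,
    not the exact encoding of the cited work."""
    memberships = np.asarray(memberships, dtype=float)
    centres = np.linspace(lo, hi, len(memberships))   # fuzzy-set centres
    return float(np.dot(memberships, centres) / memberships.sum())

# Example: five fuzzy sets over a range that a dynamic-range scheme would
# re-centre around promising solutions as the search progresses.
print(decode_fuzzy_gene([0.0, 0.2, 0.9, 0.4, 0.0], lo=2.0, hi=10.0))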
Abstract:
Ion acceleration resulting from the interaction of ultra-high intensity (2 × 10^20 W/cm^2) and ultra-high contrast (~10^10) laser pulses with 0.05-10 μm thick Al foils at normal (0°) and 35° laser incidence is investigated. When decreasing the target thickness from 10 μm down to 0.05 μm, the accelerated ions become less divergent and the ion flux increases, particularly at normal (0°) laser incidence on the target. A laser energy conversion into protons of ~6.5% is estimated at 35° laser incidence. Experimental results are in reasonable agreement with theoretical estimates and can be a benchmark for further theoretical and computational work. © 2011 American Institute of Physics. [doi:10.1063/1.3643133]
Abstract:
Automated examination timetabling has been addressed by a wide variety of methodologies and techniques over the last ten years or so. Many of the methods in this broad range of approaches have been evaluated on a collection of benchmark instances provided at the University of Toronto in 1996. Whilst the existence of these datasets has provided an invaluable resource for research into examination timetabling, the instances have significant limitations in terms of their relevance to real-world examination timetabling in modern universities. This paper presents a detailed model which draws upon experiences of implementing examination timetabling systems in universities in Europe, Australasia and America. This model represents the problem that was presented in the 2nd International Timetabling Competition (ITC2007). In presenting this detailed new model, this paper describes the examination timetabling track introduced as part of the competition. In addition to the model, the datasets used in the competition are also based on current real-world instances introduced by EventMAP Limited. It is hoped that the interest generated as part of the competition will lead to the development, investigation and application of a host of novel and exciting techniques to address this important real-world search domain. Moreover, the motivating goal of this paper is to close the currently existing gap between theory and practice in examination timetabling by presenting the research community with a rigorous model which represents the complexity of the real-world situation. In this paper we describe the model and its motivations, followed by a full formal definition.
Abstract:
The choice of radix is crucial for multi-valued logic synthesis. Practical examples, however, reveal that it is not always possible to find the optimal radix when actual physical parameters of multi-valued operations are taken into consideration. In other words, each radix has its advantages and disadvantages. Our proposal is to synthesise logic in different radices, so that it may benefit from their combination. The theory presented in this paper is based on Reed-Muller expansions over Galois field arithmetic. The work aims, firstly, to estimate the potential of the new approach and, secondly, to analyse its impact on circuit parameters down to the level of physical gates. The presented theory has been applied to real-life examples focusing on cryptographic circuits, where Galois fields find frequent application. The benchmark results show that the approach creates a new dimension for the trade-off between circuit parameters and provides information on how the implemented functions are related to different radices.
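As background, over a Galois field GF(q) (radix q) every n-variable function f : GF(q)^n → GF(q) has a unique polynomial (Reed-Muller-type) expansion

\[ f(x_1,\dots,x_n) = \sum_{0 \le i_1,\dots,i_n \le q-1} c_{i_1 \dots i_n}\, x_1^{i_1} \cdots x_n^{i_n}, \]

with coefficients and arithmetic taken in GF(q); choosing the radix fixes q and hence the available gate primitives. This is the standard single-expansion form; the mixed-radix decompositions explored in the paper go beyond it.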
Abstract:
Pilkington Glass Activ™ represents a possible successor to P25 TiO2, especially as a benchmark photocatalyst film for comparing other photocatalytic or PSH self-cleaning films. Activ™ is a glass product with a clear, colourless, effectively invisible photocatalytic coating of titania that also exhibits PSH. Although not as active as a film of P25 TiO2, Activ™'s vastly superior mechanical stability, very reproducible activity and widespread commercial availability make it highly attractive as a reference photocatalytic film. The photocatalytic and photo-induced superhydrophilic (PSH) properties of Activ™ are studied in some detail and the results reported. Thus, the kinetics of stearic acid destruction (a 104-electron process) are zero order over the range of 4-129 monolayers of stearic acid and exhibit formal quantum efficiencies (FQE) of 0.7 × 10^-5 and 10.2 × 10^-5 molecules per photon when irradiated with light of 365 ± 20 nm and 254 nm, respectively; the latter appears also to be the quantum yield for Activ™ at 254 nm. The kinetics of stearic acid destruction exhibit Langmuir-Hinshelwood-like saturation-type kinetics as a function of oxygen partial pressure, with no destruction occurring in the absence of oxygen and the rate of destruction appearing the same in air and oxygen atmospheres. Further kinetic work revealed a Langmuir adsorption-type constant for oxygen of 0.45 ± 0.16 kPa^-1 and an activation energy of 19 ± 1 kJ mol^-1. A study of the PSH properties of Activ™ reveals a high water contact angle (67°) before ultra-bandgap irradiation, reduced to 0° after prolonged irradiation. The kinetics of PSH are similar to those reported by others for sol-gel films using a low level of UV light. The kinetics of contact angle recovery in the dark appear monophasic, unlike the biphasic kinetics reported recently by others for sol-gel films [J. Phys. Chem. B 107 (2003) 1028]. Overall, Activ™ appears a very suitable reference material for semiconductor film photocatalysis. © 2003 Elsevier Science B.V. All rights reserved.
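The saturation behaviour described above corresponds to the usual Langmuir-Hinshelwood form, in which the rate depends on the oxygen partial pressure p_O2 through

\[ r = \frac{k\,K\,p_{\mathrm{O_2}}}{1 + K\,p_{\mathrm{O_2}}}, \]

where K is the Langmuir adsorption constant (the 0.45 ± 0.16 kPa^-1 quoted above) and k the rate at saturation coverage; this is the generic textbook expression rather than a fitted equation reproduced from the paper.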
Abstract:
Speeding up sequential programs on multicores is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful. This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to auto-parallelize several SPECint benchmarks. For the benchmark with most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor. The annotations constitute a parallel programming model that relies extensively on a sequential program representation. Hereby, the complexity of debugging is not increased and it does not obscure the source code. These properties could prove valuable to increase the efficiency of parallel programming.
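As a minimal illustration of the execution pattern (not the paper's annotation syntax or compiler analyses), the Python sketch below splits an outer-loop body into two stages connected by a queue so that successive iterations overlap, pipeline fashion; in CPython this mainly illustrates the structure rather than delivering the reported speedups.

from concurrent.futures import ThreadPoolExecutor
from queue import Queue

SENTINEL = object()

def pipeline(items, stage1, stage2):
    """Run two halves of a loop body as a software pipeline: stage1 of
    iteration i+1 overlaps with stage2 of iteration i.  Only the execution
    pattern extracted by the compiler framework is illustrated here."""
    q = Queue(maxsize=4)
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))            # first half of the loop body
        q.put(SENTINEL)

    def consumer():
        while (value := q.get()) is not SENTINEL:
            results.append(stage2(value))  # second half of the loop body

    with ThreadPoolExecutor(max_workers=2) as pool:
        f1, f2 = pool.submit(producer), pool.submit(consumer)
        f1.result(); f2.result()
    return results

print(pipeline(range(5), lambda x: x * x, lambda x: x + 1))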