833 results for Learning from one Example
Abstract:
Machine learning techniques have been recognized as powerful tools for learning from data. One of the most popular learning techniques, the Back-Propagation (BP) Artificial Neural Network, can be used as a computer model to predict peptides binding to the Human Leukocyte Antigens (HLA). The major advantage of computational screening is that it reduces the number of wet-lab experiments that need to be performed, significantly reducing cost and time. A recently developed method, the Extreme Learning Machine (ELM), which has superior properties over BP, has been investigated to accomplish such tasks. In our work, we found that the ELM is as good as, if not better than, the BP in terms of time complexity, accuracy deviations across experiments, and, most importantly, resistance to over-fitting in the prediction of peptide binding to HLA.
Abstract:
On-line learning is one of the most powerful and commonly used techniques for training large layered networks and has been used successfully in many real-world applications. Traditional analytical methods have been recently complemented by ones from statistical physics and Bayesian statistics. This powerful combination of analytical methods provides more insight and deeper understanding of existing algorithms and leads to novel and principled proposals for their improvement. This book presents a coherent picture of the state-of-the-art in the theoretical analysis of on-line learning. An introduction relates the subject to other developments in neural networks and explains the overall picture. Surveys by leading experts in the field combine new and established material and enable non-experts to learn more about the techniques and methods used. This book, the first in the area, provides a comprehensive view of the subject and will be welcomed by mathematicians, scientists and engineers, whether in industry or academia.
Abstract:
This article examines the current risk regulation regime, within the English National Health Service (NHS), by investigating the two, sometimes conflicting, approaches to risk embodied within the field of policies towards patient safety. The first approach focuses on promoting accountability and is built on legal principles surrounding negligence and competence. The second approach focuses on promoting learning from previous mistakes and near-misses, and is built on the development of a ‘safety culture’. Previous work has drawn attention to problems associated with risk-based regulation when faced with the dual imperatives of accountability and organisational learning. The article develops this by considering whether the NHS patient safety regime demonstrates the coexistence of two different risk regulation regimes, or merely one regime with contradictory elements. It uses the heuristic device of ‘institutional logics’ to examine the coexistence of and interrelationship between ‘organisational learning’ and ‘accountability’ logics driving risk regulation in health care.
Abstract:
Purpose – The purpose of this paper is to evaluate how a UK business school is addressing the Government's skills strategy through its Graduate Certificate in Management, to identify good practice and development needs, and to clarify how the Graduate Certificate is adapting to the needs of Generation X and Millennial students. The paper also aims to test Kolb and Kolb's experiential learning theory (ELT) in a business school setting. Design/methodology/approach – A case study methodology was adopted. In order to get a cross-section of views and triangulate the data, three focus groups were held, supported by reading documentation about the programme of study. Findings – The skills strategy is not just an ambition for some business schools, but is already part of the curriculum. Generation X and the Millennials have more in common with the positive attitudes associated with older generations than stereotyped views might allow. ELT provides a useful theoretical framework for evaluating a programme of study and student attitudes. Research limitations/implications – The research findings from one case study are reported, limiting the generalisability of the study. Practical implications – Good practice and development needs are identified which support the implementation of the Government's skills strategy and address employer concerns about student skills. Originality/value – New empirical data are reported which support the use of ELT in evaluating programmes of study and student attitudes to work.
Learning and change in interorganizational networks: the case for network learning and network change
Abstract:
The ALBA 2002 Call for Papers asks the question ‘How do organizational learning and knowledge management contribute to organizational innovation and change?’. Intuitively, we would argue, the answer should be relatively straightforward as links between learning and change, and knowledge management and innovation, have long been commonly assumed to exist. On the basis of this assumption, theories of learning tend to focus ‘within organizations’, and assume a transfer of learning from individual to organization which in turn leads to change. However, empirically, we find these links are more difficult to articulate. Organizations exist in complex embedded economic, political, social and institutional systems, hence organizational change (or innovation) may be influenced by learning in this wider context. Based on our research in this wider interorganizational setting, we first make the case for the notion of network learning that we then explore to develop our appreciation of change in interorganizational networks, and how it may be facilitated. The paper begins with a brief review of literature on learning in the organizational and interorganizational context which locates our stance on organizational learning versus the learning organization, and social, distributed versus technical, centred views of organizational learning and knowledge. Developing from the view that organizational learning is “a normal, if problematic, process in every organization” (Easterby-Smith, 1997: 1109), we introduce the notion of network learning: learning by a group of organizations as a group. We argue this is also a normal, if problematic, process in organizational relationships (as distinct from interorganizational learning), which has particular implications for network change. Part two of the paper develops our analysis, drawing on empirical data from two studies of learning.
The first study addresses the issue of learning to collaborate between industrial customers and suppliers, leading to the case for network learning. The second, larger scale study goes on to develop this theme, examining learning around several major change issues in a healthcare service provider network. The learning processes and outcomes around the introduction of a particularly controversial and expensive technology are described, providing a rich and contrasting case with the first study. In part three, we then discuss the implications of this work for change, and for facilitating change. Conclusions from the first study identify potential interventions designed to facilitate individual and organizational learning within the customer organization to develop individual and organizational ‘capacity to collaborate’. Translated to the network example, we observe that network change entails learning at all levels – network, organization, group and individual. However, presenting findings in terms of interventions is less meaningful in an interorganizational network setting given: the differences in authority structures; the less formalised nature of the network setting; and the importance of evaluating performance at the network rather than organizational level. Academics challenge both the idea of managing change and of managing networks. Nevertheless practitioners are faced with the issue of understanding and influencing change in the network setting. Thus we conclude that a network learning perspective is an important development in our understanding of organizational learning, capability and change, locating this in the wider context in which organizations are embedded. This in turn helps to develop our appreciation of facilitating change in interorganizational networks, both in terms of change issues (such as introducing a new technology), and change orientation and capability.
Abstract:
Bayesian algorithms pose a limit to the performance learning algorithms can achieve. Natural selection should guide the evolution of information processing systems towards those limits. What can we learn from this evolution and what properties do the intermediate stages have? While this question is too general to permit any answer, progress can be made by restricting the class of information processing systems under study. We present analytical and numerical results for the evolution of on-line algorithms for learning from examples for neural network classifiers, which may or may not include a hidden layer. The analytical results are obtained by solving a variational problem to determine the learning algorithm that leads to maximum generalization ability. Simulations using evolutionary programming, for programs that implement learning algorithms, confirm and expand the results. The principal result is not just that evolution proceeds towards the Bayesian limit; indeed, that limit is essentially reached. In addition we find that evolution is driven by the discovery of useful structures or combinations of variables and operators. Across different runs, the temporal order of the discovery of such combinations is the same. The main result is that combinations that signal the surprise brought by an example always arise before combinations that serve to gauge the performance of the learning algorithm. These latter structures can be used to implement annealing schedules. The temporal ordering can be understood analytically as well by doing the functional optimization in restricted functional spaces. We also show that there are data suggesting that the appearance of these traits also follows the same temporal ordering in biological systems. © 2006 American Institute of Physics.
Abstract:
In his discussion - Challenge To Managers: Changing Hotel Work from a Secondary Choice to Career Development - by Leonidas Chitiris, Lecturer in Management, Piraeus Graduate School of Industrial Studies, Athens, Greece, Chitiris marginally alludes at the outset: “Surveys and interviews with hotel employees in Greece with regard to why individuals work for hotels and to what extent their rationale to join the hotel industry affects hotel productivity revealed that the choice to work in hotels is a secondary preference and reflects the opportunity structure in the economy at any given time and the greater the number of those who work in hotels when there are no other employment opportunities, the less likely the chances for overall improved performance. Given the increase in the proportion of unskilled, unmotivated workers, the level of hotel productivity consequently decreases!” The author interprets the findings in terms of the economic and employment conditions in the Greek hotel industry. To enhance the rationale of his thesis statement, Chitiris offers with citation: “Research on initial entry into the labor force has shown that new employees reflect idealized expectations and are frequently not very satisfied with their jobs and roles in the work settings.” Chitiris advances the thought even further by saying: “Research on job satisfaction, motivation, and production purports that management can initiate policies that develop job satisfaction and may improve productivity.” The author outlines components within the general category of the hotel industry to label and quantify exactly why there may be a lag between employee expectations and the delivery of a superior level of service. Please keep in mind that the information for this essay is underpinned by the hotel industry in Greece, exclusively. Demographic information is provided. One example of the many factors parsed in this hotel service discussion is the employee/guest relationship.
“The quality of service in hotels is affected to a great extent by the number of guests a hotel employee has to serve,” Chitiris offers. Additionally, Chitiris’ characterization of the typical hotel employee in Greece is not flattering, but it is an informed and representative view of that lodging labor pool. The description in and of itself begs to explain at least some of why the hotel industry in Greece suffers a consequently diminished capacity for superior service. Ill-equipped, under-educated, over-worked, and under-paid is how Chitiris describes most employees in the Hellenic hospitality field. Survey-based studies and formulaic indices are used to measure variables related to productivity; the results may be inconclusive industry wide, but are interesting nonetheless. Also, an appealing table gauges the reasons why hotel workers actually seek employment in the lodging industry. Chitiris finds that salary expectations do not rate all that high on the motivational chart and are only marginal when related to productivity. In closing, Chitiris presents a 5-phase development plan hotels should look to in improving performance and productivity at their respective properties.
Abstract:
Technological advancements and the ever-evolving demands of a global marketplace may have changed the way in which training is designed, implemented, and even managed, but the ultimate goal of organizational training programs remains the same: to facilitate the learning of a knowledge, skill, or other outcome that will yield improvement in employee performance on the job and within the organization (Colquitt, LePine, & Noe, 2000; Tannenbaum & Yukl, 1992). Studies of organizational training have suggested medium to large effect sizes for the impact of training on employee learning (e.g., Arthur, Bennett, Edens, & Bell, 2003; Burke & Day, 1986). However, learning may be differentially affected by such factors as the (1) level and type of preparation provided prior to training, (2) targeted learning outcome, (3) training methods employed, and (4) content and goals of training (e.g., Baldwin & Ford, 1988). A variety of pre-training interventions have been identified as having the potential to enhance learning from training and practice (Cannon-Bowers, Rhodenizer, Salas, & Bowers, 1998). Numerous individual studies have been conducted examining the impact of one or more of these pre-training interventions on learning. I conducted a meta-analytic examination of the effect of these pre-training interventions on cognitive, skill, and affective learning. Results compiled from 359 independent studies (total N = 37,038) reveal consistent positive effects for the role of pre-training interventions in enhancing learning. In most cases, the provision of a pre-training intervention explained approximately 5–10% of the variance in learning, and in some cases, explained up to 40–50% of variance in learning. Overall, attentional advice and meta-cognitive strategies (as compared with advance organizers, goal orientation, and preparatory information) seem to result in the most consistent learning gains.
Discussion focuses on the most beneficial match between an intervention and the learning outcome of interest, the most effective format of these interventions, and the most appropriate circumstances under which these interventions should be utilized. Also highlighted are the implications of these results for practice, as well as propositions for important avenues for future research.
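As a toy illustration of the pooling step behind such a meta-analysis, a fixed-effect estimate weights each study's effect size by its sample size. The effect sizes and sample sizes below are invented for illustration and are not drawn from the 359 studies in the dissertation:

```python
# Hypothetical per-study results: (effect size d, sample size N).
# A fixed-effect pooled estimate weights each d by its precision,
# approximated here simply by N.
studies = [(0.45, 120), (0.30, 80), (0.60, 200)]

pooled = sum(d * n for d, n in studies) / sum(n for _, n in studies)
print(round(pooled, 3))  # → 0.495
```

In practice, meta-analyses weight by inverse variance rather than raw N and often add a between-study variance term (random-effects models); this sketch only shows the basic weighted-average idea.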
Abstract:
The Antarctic Peninsula (AP) has been identified as one of the most rapidly warming regions on Earth. Satellite monitoring currently allows for a detailed understanding of the relationship between sea ice extent and duration and atmospheric and oceanic circulations in this region. However, our knowledge of ocean-ice-atmosphere interactions is still relatively poor for the period extending beyond the last 30 years. Here, we describe environmental conditions in Northwestern and Northeastern Antarctic Peninsula areas over the last century using diatom census counts and diatom-specific biomarkers (HBIs) in two marine sediment multicores (MTC-38C and -18A, respectively). Diatom census counts and HBIs show abrupt changes between 1935 and 1950, marked by ocean warming and sea ice retreat on both sides of the AP. Since 1950, inferred environmental conditions do not provide evidence for any trend related to the recent warming but demonstrate a pronounced variability on pluri-annual to decadal time scales. We propose that multi-decadal sea ice variations over the last century are forced by the recent warming, while the annual-to-decadal variability is mainly governed by synoptic and regional wind fields in relation with the position and intensity of the atmospheric low-pressure trough around the AP. However, the positive shift of the SAM over the last two decades cannot explain the regional trend observed in this study, probably due to the effect of local processes on the response of our biological proxies.
Abstract:
People’s ability to change their social and economic circumstances may be constrained by various forms of social, cultural and political domination. Thus, considering the particular lifeworld of a social actor in which the research is embedded assists in understanding how and why different trajectories of change occur or are hindered, and how those changes fundamentally affect livelihood opportunities and constraints. In seeking to fulfill this condition, this thesis adopted an actor-oriented approach to the study of rural livelihoods. A comprehensive livelihoods study requires grasping how social reality is being historically constituted. That means to understand how the interaction of modes of production and symbolical reproduction produces the socio-space. Research is here integrated with action through the facilitation of farmer groups. The overall aim of the groups was to prompt agency, as an essential condition for building more resilient livelihoods. The smallholder farmers in the Mabalane District of Mozambique are located in a remote semi-arid area. Their livelihoods customarily depend at least as much on livestock as on (mostly) rain-fed food crops. Increased climate variability exerts pressure on the already vulnerable production system. An extensive 10 months of participant observation, divided into three periods of fieldwork, structured the situated multi-method approach that drew on a set of interview categories. The actor-oriented appraisal of livelihoods worked in building a mutually shared definition of the situation. This reflection process was taken up by the facilitation of the farmer groups, one in Mabomo and one in Mungazi, which used an inquiry iteratively combining individual interviews and facilitated group meetings. Integration of action and reflection was fundamental for group constitution as spaces for communicative action.
They needed to be self-organized and to achieve understanding intersubjectively, as well as to base action on cooperation and coordination. Results from this approach focus on how collaboratively generated learning was enabled, or at times hindered, in (a) selecting meaningful options to test; (b) developing mechanisms for group functioning; and (c) learning from steering the testing of options. The study of livelihoods looked at how the different assets composing livelihoods are intertwined and how the increased severity of dry spells is contributing to escalated food insecurity. The reorganization of the social space, as households moved from scattered homesteads to form settlements, further exerts pressure on the already scarce natural resource-based livelihoods. Moreover, this process disrupted a normative base substantiating the way that the use of resources is governed. Hence, actual livelihood strategies and response mechanisms turn to diversification through income-generating activities that further increase pressure on the resource base in a rather unsustainable way. These response mechanisms are, for example, the increase in small-livestock keeping, which has easier conversion to cash, and charcoal production. The latter results in ever more precarious living and working conditions. In the majority of the cases such responses are short-term and reduce future opportunities in a downward spiral of continuously decreasing assets. Thus, by indicating the failure of institutions in the mediation of smallholders’ adaptive capabilities, the livelihood assessment in Mabomo and Mungazi sheds light on the complex underlying structure of present-day social vulnerability, linking the macro-context to the actual situation.
To assist in breaking this state of “subordination”, shaped by historical processes, weak institutions and food insecurity, the chosen approach to facilitation of farmer groups puts farmer knowledge at the center of an evolving process of intersubjective co-construction of knowledge towards emancipation.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The development of ICT infrastructures has facilitated the emergence of new paradigms for looking at society and the environment over the last few years. Participatory environmental sensing, i.e. directly involving citizens in environmental monitoring, is one example, which is hoped to encourage learning and enhance awareness of environmental issues. In this paper, an analysis of the behaviour of individuals involved in noise sensing is presented. Citizens have been involved in noise-measuring activities through the WideNoise smartphone application. This application has been designed to record both objective (noise samples) and subjective (opinions, feelings) data. The application has been freely available to anyone and has been widely used worldwide. In addition, several test cases have been organised in European countries. Based on the information submitted by users, an analysis of emerging awareness and learning is performed. The data show that changes in the way the environment is perceived after repeated usage of the application do appear. Specifically, users learn how to recognise the different noise levels they are exposed to. Additionally, the subjective data collected indicate increased user involvement over time and a categorisation effect between pleasant and less pleasant environments.
Abstract:
Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber to the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp 1-18.
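The three LCS steps the abstract describes (constant initial strengths, roulette-wheel selection, reinforcement of rules that appeared in good solutions) can be sketched in a few lines. The rule names, the reward size, and the stand-in signal for "this rule built a good partial schedule" below are illustrative assumptions of ours, not details from the study:

```python
import random

def roulette(strengths, rng):
    """Roulette-wheel selection: pick a rule with probability
    proportional to its current strength."""
    total = sum(strengths.values())
    pick = rng.uniform(0, total)
    acc = 0.0
    rule = None
    for rule, s in strengths.items():
        acc += s
        if pick <= acc:
            return rule
    return rule  # guard against floating-point rounding at the top end

rng = random.Random(42)

# Initialization step: every rule starts with the same constant strength.
strengths = {"rule_a": 1.0, "rule_b": 1.0, "rule_c": 1.0}

for _ in range(100):  # one hundred construction "moves"
    chosen = roulette(strengths, rng)
    # Stand-in for "this rule was used in a good previous solution":
    # here we simply pretend rule_b tends to build better schedules.
    if chosen == "rule_b":
        strengths[chosen] += 0.1  # reinforcement step: reward rules that worked
    # strengths of unused (or unhelpful) rules are left unchanged

print(max(strengths, key=strengths.get))  # rule_b ends up strongest
```

Because reinforced rules become more likely to be selected on later moves, the loop gradually concentrates selection on rules that have proved useful, which is the flexible, experience-driven behaviour the abstract contrasts with fixed-rule construction.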
Abstract:
There are many ways in which research messages and findings can be extended to the expansive cotton community. As everyone learns differently, it is crucial that information is delivered in a variety of ways to meet the various learning needs of the CottonInfo team’s broad audience. In addition, different cotton production areas often require targeted information to address specific challenges. Successful implementation of innovative research outcomes typically relies on a history of cultivated communication between the researcher and the end-user, the grower. The CottonInfo team, supported by a joint venture between Cotton Seed Distributors, the Cotton Research and Development Corporation, Cotton Australia and other collaborative partners, represents a unique model of extension in Australian agriculture. Industry research is extended via regionally based Regional Development Officers backed by support from Technical Specialists. The 2015 Cotton Irrigation Technology Tour is one example of a successful CottonInfo capacity-building activity. This tour took seven CRDC-funded irrigation-specific researchers to Emerald, Moree and Nevertire to showcase their research and technologies. These events provided irrigators and consultants with the opportunity to hear first-hand from researchers about their technologies and how they could be applied on-farm. This tour was an example of how the CottonInfo team can connect growers and researchers, not only to provide an avenue for growers to learn about the latest irrigation research, but also for researchers to receive feedback about their current and future irrigation research.