919 results for wheel
Abstract:
This paper introduces the LiDAR compass, a bounded and extremely lightweight heading estimation technique that combines a two-dimensional laser scanner with axis maps, which represent the orientations of flat surfaces in the environment. Although suitable for a variety of indoor and outdoor environments, the LiDAR compass is especially useful for embedded and real-time applications requiring low computational overhead. For example, when combined with a sensor that can measure translation (e.g., wheel encoders), the LiDAR compass can be used to yield accurate, lightweight, and easily implemented localization that requires no prior mapping phase. The utility of the LiDAR compass as part of a localization algorithm was tested on a widely available open-source data set, an indoor environment, and a larger-scale outdoor environment. In all cases, it was shown that the growth in heading error was bounded, which significantly reduced the position error to less than 1% of the distance travelled.
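A bounded heading source matters for dead reckoning because the position estimated by integrating encoder translations drifts with heading error. A minimal sketch of this kind of fusion, assuming hypothetical per-step inputs (a translation `d`, e.g. from wheel encoders, and an absolute heading `theta`, e.g. from a bounded heading estimator) rather than the paper's actual interface:

```python
import math

def integrate_pose(x, y, steps):
    """Integrate (distance, heading) increments into a 2-D position.

    Each step pairs a translation `d` (e.g. from wheel encoders) with an
    absolute heading `theta` (e.g. from a bounded heading estimator).
    """
    for d, theta in steps:
        x += d * math.cos(theta)
        y += d * math.sin(theta)
    return x, y

# Drive 1 m east, then 1 m north.
x, y = integrate_pose(0.0, 0.0, [(1.0, 0.0), (1.0, math.pi / 2)])
```

Because heading enters through cos/sin at every step, keeping the heading error bounded keeps the position drift roughly proportional to distance travelled, consistent with the sub-1% figure reported above.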
Abstract:
This thesis explores the effects of rehabilitation on the structural performance of corrugated steel culverts. A full-scale laboratory experiment investigated the effects of grouted slip-liners on the performance of two buried circular corrugated steel culverts. One culvert was slip-lined and grouted using low-strength grout, while the other was slip-lined and grouted using high-strength grout. The performance of the culverts was measured before and after rehabilitation under service loads using single wheel pair loading at 0.45 m of cover. The rehabilitated culverts were then loaded to their ultimate limit states. Results showed that the low- and high-strength grouted slip-liners provided strength well beyond requirements: the low-strength specimen failed at a load 2.4 times the fully factored service load, while the high-strength specimen did not reach an ultimate limit state before bearing failure of the soil stopped testing. Results also showed that the low-strength specimen behaved rigidly under service loads and flexibly under higher loads, while the high-strength specimen behaved rigidly under all loads. A second full-scale experiment investigated the effect of a paved-invert rehabilitation procedure on the performance of a deteriorated horizontal ellipse culvert. The performance of the culvert before and after rehabilitation was examined under service loads using tandem axle loading at 0.45 m of cover. The rehabilitated culvert was then loaded to its ultimate limit state. The culvert failed due to the formation of a plastic hinge at the west shoulder, while the paved invert itself cracked. Results showed that the rehabilitation improved the structural performance of the culvert, increasing the system stiffness and reducing average strains and local bending at critical locations under service loads.
A sustainability rating tool specifically for the evaluation of deteriorated culvert replacement or rehabilitation projects was also developed. A module for an existing tool, called GoldSET, was created and tested using two case studies, each comparing the replacement of a culvert using a traditional open-cut method with two trenchless rehabilitation techniques. In each case, the analyses showed that the trenchless techniques were the better alternatives in terms of sustainability.
Abstract:
The pottery found in the burials of El Cano is uniform in style with that made in the Coclesano valleys between AD 700 and 1000. The coefficients of variability of the different pottery forms show differing degrees of standardization for polychrome and non-polychrome ceramics. Moreover, data from the recently excavated funerary contexts at El Cano suggest that the elite controlled ceramic production. This control over the production of certain goods reveals that they were important to the support and proper operation of the chiefdoms in Panama, and marks the phase of splendour of this culture.
Abstract:
The spacing of adjacent wheel lines of dual-lane loads induces different lateral live load distributions on bridges, which cannot be determined using the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) or Load Factor Design (LFD) equations for vehicles with standard axle configurations. Current Iowa law requires dual-lane loads to meet a five-foot requirement, the adequacy of which needs to be verified. To improve the state policy and AASHTO code specifications, it is necessary to understand the actual effects of wheel-line spacing on lateral load distribution. The main objective of this research was to investigate the impact of the wheel-line spacing of dual-lane loads on the lateral load distribution on bridges. To achieve this objective, a numerical evaluation using two-dimensional linear elastic finite element (FE) models was performed. For simulation purposes, 20 prestressed-concrete bridges, 20 steel bridges, and 20 slab bridges were randomly sampled from the Iowa bridge database. Based on the FE results, the load distribution factors (LDFs) of the concrete and steel bridges and the equivalent lengths of the slab bridges were derived. To investigate the variations of LDFs, a total of 22 types of single-axle four-wheel-line dual-lane loads were taken into account with configurations consisting of combinations of various interior and exterior wheel-line spacing. The corresponding moment and shear LDFs and equivalent widths were also derived using the AASHTO equations and the adequacy of the Iowa DOT five-foot requirement was evaluated. Finally, the axle weight limits per lane for different dual-lane load types were further calculated and recommended to complement the current Iowa Department of Transportation (DOT) policy and AASHTO code specifications.
Abstract:
The highly dynamic nature of some sandy shores, with continuous morphological changes, requires the development of efficient and accurate methodological strategies for coastal hazard assessment and morphodynamic characterisation. During the past decades, the general methodological approach to establishing coastal monitoring programmes was based on photogrammetry or classical geodetic techniques. With the advent of new space-based and airborne geodetic techniques, new methodologies were introduced into coastal monitoring programmes. This paper describes the development of a monitoring prototype based on the global positioning system (GPS). The prototype has a GPS multi-antenna mounted on a fast surveying platform, a land vehicle appropriate for driving on sand (a four-wheel quad). The system was conceived to perform a network of shore profiles along sandy shore stretches (subaerial beach) extending for several kilometres, from which high-precision digital elevation models can be generated. An analysis of the accuracy and precision of some differential GPS kinematic methodologies is presented. The development of an adequate survey methodology is the first step in morphodynamic shore characterisation or coastal hazard assessment. The sampling method and the computational interpolation procedures are important steps in producing reliable three-dimensional surface maps that are as close to reality as possible. The quality of several interpolation methods used to generate grids was tested in areas with data gaps. The results allow us to conclude that, with the developed survey methodology, it is possible to survey sandy shore stretches at spatial scales of kilometres with a vertical accuracy better than 0.10 m in the final digital elevation models.
Abstract:
Software architecture is a high-level description of a software-intensive system that enables architects to maintain intellectual control over the complete system. It is also used as a communication vehicle among the various system stakeholders. Variability in software-intensive systems is the ability of a software artefact (e.g., a system, subsystem, or component) to be extended, customised, or configured for deployment in a specific context. Although variability in software architecture is recognised as a challenge in multiple domains, there has been no formal consensus on how variability should be captured or represented. In this research, we addressed the problem of representing variability in software architecture through a three-phase approach. First, we examined the existing literature using the Systematic Literature Review (SLR) methodology, which helped us identify the gaps and challenges within the current body of knowledge. Equipped with the findings from the SLR, a set of design principles was formulated and used to introduce variability management capabilities into an existing Architecture Description Language (ADL). The chosen ADL (ALI) was developed within our research group, to which we have had complete access. Finally, we evaluated the new version of the ADL using two distinct case studies: one from the information systems domain, an Asset Management System (AMS), and another from the embedded systems domain, a Wheel Brake System (WBS). This thesis presents the main findings from the three phases of the research, including a comprehensive study of the state of the art, the complete specification of an ADL focused on managing variability, and the lessons learnt from evaluating two distinct real-life case studies.
Abstract:
Background Correctly diagnosing basal cell carcinoma (BCC) clinical type is crucial for therapeutic management. A systematic description of the variability of all reported BCC dermoscopic features according to clinical type and anatomic location is lacking. Objectives To describe the dermoscopic variability of BCC according to clinical type and anatomic location and to test the hypothesis of a clinical/dermoscopic continuum across superficial BCCs (sBCCs) with increasing palpability. Methods Clinical/dermoscopic images of nodular BCCs (nBCCs) and sBCCs with different degrees of palpability were retrospectively evaluated for the presence of dermoscopic criteria including degree of pigmentation, BCC-associated patterns, diverse vascular patterns, melanocytic patterns and polarized light patterns. Results We examined 501 histopathologically proven BCCs (66.9% sBCCs; 33.1% nBCCs), mainly located on the trunk (46.7%; mostly sBCCs) and face (30.5%; mostly nBCCs). Short fine telangiectasias, leaf-like areas, spoke-wheel areas, small erosions and concentric structures were significantly associated with sBCC, whereas arborizing telangiectasias, blue-white veil-like structures, white shiny areas and rainbow pattern were associated with nBCC. Short fine telangiectasias, spoke-wheel areas and small erosions were independently associated with trunk location, whereas arborizing telangiectasias were associated with facial location. Scalp BCCs had significantly more pigmentation and melanocytic criteria than BCCs located elsewhere. Multiple clinical/dermoscopic parameters displayed a significant linear trend across increasingly palpable sBCCs. Conclusions Particular dermoscopic criteria are independently associated with clinical type and anatomic location of BCC. Heavily pigmented scalp BCCs are the most challenging to diagnose. A clinical/dermoscopic continuum across increasingly palpable sBCCs was detected and could be potentially important for the non-surgical management of the disease.
Abstract:
A multistate molecular dyad containing flavylium and viologen units was synthesized and the pH-dependent thermodynamics of the network completely characterized by a variety of spectroscopic techniques such as NMR, UV-vis and stopped-flow. The flavylium cation is only stable at acidic pH values. Above pH ≈ 5 the hydration of the flavylium leads to the formation of the hemiketal, followed by ring-opening tautomerization to give the cis-chalcone. Finally, this last species isomerizes to give the trans-chalcone. For the present system only the flavylium cation and the trans-chalcone species could be detected as being thermodynamically stable. The hemiketal and the cis-chalcone are kinetic intermediates with negligible concentrations at equilibrium. All stable species of the network were found to form 1:1 and 2:1 host:guest complexes with cucurbit[7]uril (CB7), with association constants in the ranges 10^5-10^8 M^-1 and 10^3-10^4 M^-1, respectively. The 1:1 complexes were particularly interesting for devising pH-responsive bistable pseudorotaxanes: at basic pH values (≈12) the flavylium cation interconverts into the deprotonated trans-chalcone in a few minutes, and under these conditions the CB7 wheel was found to be located around the viologen unit. A decrease in pH to values around 1 regenerates the flavylium cation in seconds and the macrocycle is translocated to the middle of the axle. On the other hand, if the pH is decreased to 6, the deprotonated trans-chalcone is neutralized to give a metastable species that evolves to the thermodynamically stable flavylium cation in ca. 20 hours. By taking advantage of the pH-dependent kinetics of the trans-chalcone/flavylium interconversion, spatiotemporal control of the molecular organization in pseudorotaxane systems can be achieved.
Abstract:
This master's thesis presents an experimental study of the inlet flow of the draft tube of a bulb turbine exhibiting a sharp drop in performance. Laser Doppler velocimetry (LDV) measurements were taken along two axes: downstream of the runner blades and downstream of the runner hub. A particular feature of this study is the design of a set-up allowing the axial velocity to be measured close to the cone wall. In addition, a method for estimating the mean radial velocity was developed. These measurements made it possible to characterise the primary and secondary flows and to analyse their evolution between the two axes. The evolution of these flows is also analysed as a function of the turbine's performance drop. The main features of the flow are a recirculation zone under the hub, a counter-rotating zone, the guide-vane wakes, and blade-tip vortices.
Abstract:
Nitrous oxide (N2O) is a potent greenhouse gas with a global warming potential 298 times higher than that of carbon dioxide. Soils are a natural source of N2O, contributing 65% of global emissions. This paper is the first in Australia to measure and compare N2O emissions from pre-plant controlled-release (CR) and conventional granular (CV) fertilisers in pineapple production, using static PVC chambers to capture N2O emissions. At farm 1, cumulative emissions from the CR fertiliser were 3.22 kg ha^-1 compared with 6.09 kg ha^-1 produced by the CV. At farm 2, the CV blend emitted 2.36 kg ha^-1 compared with 2.92 kg ha^-1 from the CR blend. Daily N2O flux rates responded directly to rainfall and soil moisture availability. High emissions were observed in wheel tracks, where increased N2O emissions may be linked to soil compaction and waterlogging that create anaerobic conditions after rain events. Emission measurements over three months highlighted the inconsistencies found in other studies on reducing emissions through controlled-release nitrogen. More investigation is required to verify the benefits associated with controlled-release fertiliser use, placement, and seasonal timing in addressing N2O emissions in pineapples.
Abstract:
Abstract Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. 
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step assigns each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. 
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning in scheduling algorithms and may therefore be of interest to researchers and practitioners in the areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003) 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1): 1-18.
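The Roulette Wheel strategy mentioned above selects each rule with probability proportional to its current strength. A minimal sketch of this selection step, with illustrative function names and strength values not taken from the cited papers:

```python
import random

def roulette_wheel(strengths, rng=random):
    """Return an index chosen with probability proportional to strengths[i]."""
    total = sum(strengths)
    r = rng.uniform(0.0, total)  # spin the wheel
    acc = 0.0
    for i, s in enumerate(strengths):
        acc += s
        if r <= acc:
            return i
    return len(strengths) - 1  # guard against floating-point round-off

# Stronger rules are picked more often: with strengths 1:2:7,
# index 2 should win roughly 70% of the time.
random.seed(42)
counts = [0, 0, 0]
for _ in range(10_000):
    counts[roulette_wheel([1.0, 2.0, 7.0])] += 1
```

Reinforcement then only shifts the strengths; the same selection routine automatically favours rules that contributed to good previous solutions.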
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm, by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). 
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 on the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after 2-3 uses of rule 1', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus, learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) will be generated in this way; 5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good construction sequence for individual data instances, given a fixed nurse scheduling order. 
If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we can assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
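Since the network structure is known and all variables are observed, the 'learning amounts to counting' step above reduces to tallying, for each nurse position, how often each rule appears among the promising strings, then sampling new strings from those frequencies. A simplified sketch under the assumption that positions are modelled independently (a full BOA would also capture dependencies between nodes); the function names and data are illustrative:

```python
import random
from collections import Counter

def learn_probs(promising, n_rules):
    """Estimate P(rule r at position i) by counting over promising rule strings."""
    probs = []
    for position_values in zip(*promising):  # one column per nurse position
        counts = Counter(position_values)
        probs.append([counts.get(r, 0) / len(position_values)
                      for r in range(n_rules)])
    return probs

def sample_string(probs, rng=random):
    """Build a new rule string position by position (roulette wheel on the probabilities)."""
    return [rng.choices(range(len(p)), weights=p)[0] for p in probs]

# Three promising strings for three nurses; every one starts with rule 0,
# so any sampled string must also start with rule 0.
promising = [[0, 1, 1], [0, 1, 2], [0, 2, 1]]
probs = learn_probs(promising, 3)
new_string = sample_string(probs)
```

Iterating learn-then-sample, with fitness-based replacement between rounds, is the generational loop described in steps 2-5 above.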
Abstract:
In an increasingly complex and evolving society, we face many challenges. Despite its dependencies and comforts, the population tends to seek new forms of leisure that involve nature and sport. Cycle tourism meets these demands and has become a growing market, attracting an increasing number of people; it is a type of tourism with positive outcomes, such as a healthy lifestyle and the use of a low-cost means of transport with a small ecological footprint. Along the same lines, this document covers the entire process of building a quadricycle. Its general objective is to satisfy the needs of cycle tourists and to encourage people to use the bicycle as a means of transport. To that end, the participants involved were regular bicycle users who ride not only for leisure but for cycle tourism. The users of the designed vehicle thus have the heart of nature as the backdrop of their journeys, enjoying the comfort and safety it provides and, as a result, a healthier lifestyle. Following the Ulrich and Eppinger methodology in a sequential and organized way, the steps taken to obtain the final product are laid out. Beyond the core concepts of comfort, stability, engineering, and ergonomics, the importance of design must be emphasized, since it influenced the layout of the four-wheeled vehicle from start to finish.
Abstract:
Food is a present-day concern. The emergence of diseases associated with dietary excess, differing cultures, and the information transmitted are constant, worrying problems in society. People must be educated about this from an early age, valuing the information they gather from their surroundings and putting what they have learned into practice. This work assessed a group of individuals' knowledge and practices regarding healthy eating. The group comprised students from the 5th to the 9th year of schooling in the municipality of Viseu. The objective was to assess the knowledge acquired from family and friends, at school, and from the marketing that surrounds them, verifying whether it was put into practice or whether the knowledge was insufficient. Questionnaire surveys were used to collect the necessary information, and SPSS (Statistical Package for the Social Sciences) was used as the analysis software. The selected schools, guardians, and students were very receptive to the questionnaire, yielding a sample of 852 respondents, of whom 50.12% were female and 49.88% male, aged between 10 and 18. In general, the respondents have some information about healthy eating. Most (93.8%) can identify the current food wheel, and it is through school (60.2%) and parents/relatives (75.1%) that they obtain their knowledge. However, a global assessment based on a cluster analysis shows that the individuals who do possess some knowledge represent about a third of the students (38.7%), demonstrating that barriers remain to be overcome in alerting the student population to this issue.