773 results for Collaborative Network Model


Relevance:

80.00%

Publisher:

Abstract:

Developing analytical models that can accurately describe the behavior of Internet-scale networks is difficult. This is due, in part, to the heterogeneous structure, immense size and rapidly changing properties of today's networks. The lack of analytical models makes large-scale network simulation an indispensable tool for studying immense networks. However, large-scale network simulation has not been commonly used to study networks of Internet scale. This can be attributed to three factors: 1) current large-scale network simulators are geared towards simulation research and not network research, 2) the memory required to execute an Internet-scale model is exorbitant, and 3) large-scale network models are difficult to validate. This dissertation tackles each of these problems.

First, this work presents a method for automatically enabling real-time interaction, monitoring, and control of large-scale network models. Network researchers need tools that allow them to focus on creating realistic models and conducting experiments. However, this should not increase the complexity of developing a large-scale network simulator. This work presents a systematic approach to separating the concerns of running large-scale network models on parallel computers from the user-facing concerns of configuring and interacting with large-scale network models.

Second, this work deals with reducing the memory consumption of network models. As network models become larger, so does the amount of memory needed to simulate them. This work presents a comprehensive approach to exploiting structural duplications in network models to dramatically reduce the memory required to execute large-scale network experiments.

Lastly, this work addresses the issue of validating large-scale simulations by integrating real protocols and applications into the simulation. With an emulation extension, a network simulator operating in real time can run together with real-world distributed applications and services. As such, real-time network simulation not only alleviates the burden of developing separate models for applications in simulation, but, as real systems are included in the network model, it also increases the confidence level of network simulation. This work presents a scalable and flexible framework to integrate real-world applications with real-time simulation.
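The memory-reduction idea, exploiting structural duplication so that identical sub-topologies are stored only once, can be pictured with a minimal flyweight-style sketch. The class names and sizes below are hypothetical illustrations, not taken from the dissertation.

```python
# Illustrative sketch only: one immutable subnet "template" is shared across many
# instances, so per-instance state is the only memory that grows with model size.
# SubnetTemplate/SubnetInstance are hypothetical names, not the dissertation's API.

class SubnetTemplate:
    """Structural description shared (flyweight-style) by all duplicate subnets."""
    def __init__(self, router_count, links):
        self.router_count = router_count      # topology is stored once
        self.links = tuple(links)             # immutable (src, dst, bandwidth) triples

class SubnetInstance:
    """Per-instance state only: addresses and queue occupancies."""
    __slots__ = ("template", "base_address", "queue_lengths")
    def __init__(self, template, base_address):
        self.template = template              # reference to the shared structure, not a copy
        self.base_address = base_address
        self.queue_lengths = [0] * template.router_count

# One shared template, many lightweight instances.
template = SubnetTemplate(router_count=256,
                          links=[(i, i + 1, 1e9) for i in range(255)])
instances = [SubnetInstance(template, base_address=i << 16) for i in range(10_000)]
```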

Relevance:

80.00%

Publisher:

Abstract:

The lack of analytical models that can accurately describe large-scale networked systems makes empirical experimentation indispensable for understanding complex behaviors. Research on network testbeds for testing network protocols and distributed services, including physical, emulated, and federated testbeds, has made steady progress. Although the success of these testbeds is undeniable, they fail to provide: 1) scalability, for handling large-scale networks with hundreds or thousands of hosts and routers organized in different scenarios, 2) flexibility, for testing new protocols or applications in diverse settings, and 3) inter-operability, for combining simulated and real network entities in experiments. This dissertation tackles these issues in three different dimensions. First, we present SVEET, a system that enables inter-operability between real and simulated hosts. In order to increase the scalability of networks under study, SVEET enables time-dilated synchronization between real hosts and the discrete-event simulator. Realistic TCP congestion control algorithms are implemented in the simulator to allow seamless interactions between real and simulated hosts. SVEET is validated via extensive experiments and its capabilities are assessed through case studies involving real applications. Second, we present PrimoGENI, a system that allows a distributed discrete-event simulator, running in real-time, to interact with real network entities in a federated environment. PrimoGENI greatly enhances the flexibility of network experiments, through which a great variety of network conditions can be reproduced to examine what-if questions. Furthermore, PrimoGENI performs resource management functions, on behalf of the user, for instantiating network experiments on shared infrastructures. Finally, to further increase the scalability of network testbeds to handle large-scale high-capacity networks, we present a novel symbiotic simulation approach. We present SymbioSim, a testbed for large-scale network experimentation where a high-performance simulation system closely cooperates with an emulation system in a mutually beneficial way. On the one hand, the simulation system benefits from incorporating the traffic metadata from real applications in the emulation system to reproduce the realistic traffic conditions. On the other hand, the emulation system benefits from receiving the continuous updates from the simulation system to calibrate the traffic between real applications. Specific techniques that support the symbiotic approach include: 1) a model downscaling scheme that can significantly reduce the complexity of the large-scale simulation model, resulting in an efficient emulation system for modulating the high-capacity network traffic between real applications; 2) a queuing network model for the downscaled emulation system to accurately represent the network effects of the simulated traffic; and 3) techniques for reducing the synchronization overhead between the simulation and emulation systems.
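SVEET's time-dilated synchronization can be pictured with a small sketch: wall-clock time on the real hosts is scaled by a dilation factor, so a simulator that cannot keep up with real time still appears synchronized to them. The class and factor below are illustrative assumptions, not SVEET's actual interface.

```python
import time

# Minimal sketch of time dilation for real/simulated co-execution: real hosts are
# slowed by a dilation factor so a slower-than-real-time simulator stays in step.
class DilatedClock:
    def __init__(self, dilation_factor):
        self.tdf = dilation_factor            # e.g. 10 => 10 wall-clock s per virtual s
        self.start = time.monotonic()

    def virtual_now(self):
        """Virtual time as seen by the dilated host."""
        return (time.monotonic() - self.start) / self.tdf

clock = DilatedClock(dilation_factor=10.0)
# Under a factor of 10, a 1 Gbps virtual link consumes wall-clock time as if it were
# a 100 Mbps link, giving the simulator ten times longer to process each virtual second.
print(clock.virtual_now())
```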

Relevance:

80.00%

Publisher:

Abstract:

The purpose of this research was to investigate the influence of elevation and other terrain characteristics on the spatial and temporal distribution of rainfall. A comparative analysis of several spatial interpolation methods was conducted using mean monthly precipitation values in order to select the best one. Based on those results, an Artificial Neural Network model was fitted for the interpolation of monthly precipitation values over a period of 20 years, with inputs such as longitude, latitude, elevation and four geomorphologic characteristics, anchored by seven weather stations; it reached a high correlation coefficient (r = 0.85). This research demonstrated a strong influence of elevation and other geomorphologic variables on the spatial distribution of precipitation and confirmed that the relationships involved are nonlinear. The model will be used to fill gaps in time series of monthly precipitation and to generate maps of the spatial distribution of monthly precipitation at a resolution of 1 km².
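As a rough illustration of the kind of feed-forward model described (seven inputs, one precipitation output), the sketch below uses scikit-learn on synthetic data; the architecture, feature ordering and data are assumptions, not the study's actual configuration.

```python
# Illustrative feed-forward interpolation model; synthetic data stands in for the
# real station records, so the fitted correlation is not the study's r = 0.85.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# columns: longitude, latitude, elevation, plus four geomorphologic descriptors
X = rng.uniform(size=(500, 7))
y = 100 + 300 * X[:, 2] + 50 * rng.normal(size=500)   # synthetic monthly rainfall (mm)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                                   random_state=0))
model.fit(X, y)
r = np.corrcoef(model.predict(X), y)[0, 1]            # correlation, analogous to the reported r
print(f"correlation on training data: {r:.2f}")
```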

Relevance:

80.00%

Publisher:

Abstract:

This study was a sub-study of an ongoing investigation by Porter and Kazcaraba (1994) at the Veterans Administration Medical Center in Miami. While the Porter and Kazcaraba study uses multiple measures to determine the impact of nurse-patient collaborative care on the quality of life of cardiovascular patients receiving anticoagulant therapy, this study sought to determine whether health education could empower similar clients to improve their quality of life. A health education program was used, based on Freire's belief that shared collective knowledge empowers individuals to improve their lives and their community, and on Porter's nurse-patient collaborative care model. Findings from a sample of thirty-eight subjects revealed strong correlations between self-esteem and life satisfaction, as well as a trend towards increased power post-treatment. No group comparisons were made at posttest because the sample size was too small for meaningful statistical analysis.

Relevance:

80.00%

Publisher:

Abstract:

In the current context of policies for inclusiveness in Brazilian higher education, accessibility centers are responsible for organizing actions toward the fulfillment of legal requirements regarding accessibility and the elimination of barriers that interfere with the participation and learning of persons with special learning needs (SLN), providing conditions for the full inclusion of these students in learning, research and community outreach activities at this level of education. This research aimed to analyze the work developed by the accessibility centers of the federal universities in the northeastern region of Brazil in support of students with SLN. It is a descriptive study with quantitative and qualitative approaches. Twelve federal universities participated through the coordinators of their accessibility centers. The data, gathered by means of an electronic survey completed in 2014, were organized and analyzed through descriptive statistics and content analysis; the discussion was organized around four topics: the organization of accessibility centers for the care of students with SLN, the kinds and numbers of students with SLN served, the actions developed, and suggestions for the improvement of the accessibility centers. It was found that, within the universities studied, the accessibility centers have been taking actions involving many parts of the academic community to improve the conditions and permanence of students with SLN. However, in some institutional settings, these actions need to be expanded and/or consolidated. The coordinators suggest further actions towards the improvement of these centers, regarding the expansion of financial and human resources, professional qualification, awareness of the academic community, institutionalization, and the formation of a collaborative network among the accessibility centers. Thus, although there are still challenges to overcome, the presence of the accessibility centers represents an advance in the realization of policies for the inclusion of students with SLN in post-secondary education, towards the democratization of learning based on the universal right to education.

Relevance:

80.00%

Publisher:

Abstract:

Advances in three related areas (state-space modeling, sequential Bayesian learning, and decision analysis) are addressed, together with the statistical challenges of scalability and associated dynamic sparsity. The key theme that ties the three areas together is Bayesian model emulation: solving challenging analysis and computational problems using creative model emulators. This idea defines theoretical and applied advances in non-linear, non-Gaussian state-space modeling, dynamic sparsity, decision analysis and statistical computation, across the linked contexts of multivariate time series and dynamic network studies. Examples and applications in financial time series and portfolio analysis, macroeconomics, and internet studies from computational advertising demonstrate the utility of the core methodological innovations.

Chapter 1 summarizes the three areas/problems and the key idea of emulation in those areas. Chapter 2 discusses the sequential analysis of latent threshold models with the use of emulating models that allow analytical filtering to enhance the efficiency of posterior sampling. Chapter 3 examines the emulator model in decision analysis, or synthetic model, which is equivalent to the loss function in the original minimization problem, and shows its performance in the context of sequential portfolio optimization. Chapter 4 describes a method for modeling streaming count data observed on a large network that relies on emulating the whole, dependent network model with independent, conjugate sub-models customized to each set of flows. Chapter 5 reviews these advances and offers concluding remarks.
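A minimal sketch of the Chapter 4 emulation idea, assuming a simple Gamma-Poisson conjugate sub-model per flow (the thesis's exact dynamic model, e.g. its handling of discounting, is not reproduced here):

```python
# Replace one large dependent network model by independent conjugate sub-models,
# one per flow; each update is closed-form, so streaming counts are cheap to absorb.

class GammaPoissonFlow:
    """Conjugate sub-model for the count rate of a single flow (assumed form)."""
    def __init__(self, a=1.0, b=1.0):
        self.a, self.b = a, b                 # Gamma(a, b) prior on the Poisson rate

    def update(self, count):
        """Closed-form posterior update after observing one count in a unit interval."""
        self.a += count
        self.b += 1.0

    def mean_rate(self):
        return self.a / self.b

# One independent sub-model per (source, destination) flow in the network.
flows = {("A", "B"): GammaPoissonFlow(), ("A", "C"): GammaPoissonFlow()}
for flow, count in [(("A", "B"), 7), (("A", "C"), 2), (("A", "B"), 5)]:
    flows[flow].update(count)
print({f: round(m.mean_rate(), 2) for f, m in flows.items()})
```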

Relevance:

80.00%

Publisher:

Abstract:

I explore and analyze the problem of finding socially optimal capital requirements for financial institutions, considering two distinct channels of contagion: direct exposures among the institutions, as represented by a network, and fire sales externalities, which reflect the negative price impact of massive liquidation of assets. These two channels amplify shocks from individual financial institutions to the financial system as a whole and thus increase the risk of joint defaults amongst the interconnected financial institutions; this is often referred to as systemic risk. In the model, there is a trade-off between reducing systemic risk and raising the capital requirements of the financial institutions. The policymaker considers this trade-off and determines the optimal capital requirements for individual financial institutions. I provide a method for finding and analyzing the optimal capital requirements that can be applied to arbitrary network structures and arbitrary distributions of investment returns.

In particular, I first consider a network model consisting only of direct exposures and show that the optimal capital requirements can be found by solving a stochastic linear programming problem. I then extend the analysis to financial networks with default costs and show the optimal capital requirements can be found by solving a stochastic mixed integer programming problem. The computational complexity of this problem poses a challenge, and I develop an iterative algorithm that can be efficiently executed. I show that the iterative algorithm leads to solutions that are nearly optimal by comparing it with lower bounds based on a dual approach. I also show that the iterative algorithm converges to the optimal solution.
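As a rough illustration of the scenario-based optimization, the following toy linear program trades off total capital against expected shortfalls under a simple linear loss pass-through; the exposure weights, penalty and sampled losses are assumptions, and this is not the dissertation's formulation.

```python
# Toy scenario-based LP for the capital/systemic-risk trade-off described above.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, K, lam = 3, 50, 5.0                       # banks, sampled scenarios, shortfall penalty
W = np.array([[0.0, 0.2, 0.1],               # fraction of j's shortfall hitting bank i
              [0.3, 0.0, 0.2],
              [0.1, 0.1, 0.0]])
loss = rng.exponential(1.0, size=(K, n))     # sampled direct losses per scenario

# Decision vector x = [c_1..c_n, s_11..s_nK]; minimize sum(c) + lam/K * sum(s)
cost = np.concatenate([np.ones(n), np.full(n * K, lam / K)])
A, b = [], []
for k in range(K):
    for i in range(n):
        row = np.zeros(n + n * K)
        row[i] = -1.0                                   # -c_i
        row[n + k * n: n + (k + 1) * n] = W[i]          # + sum_j W_ij * s_jk
        row[n + k * n + i] -= 1.0                       # - s_ik
        A.append(row)
        b.append(-loss[k, i])                           # enforces s_ik >= loss_ik + (W s)_ik - c_i
res = linprog(cost, A_ub=np.array(A), b_ub=np.array(b), bounds=(0, None))
print("optimal capital per bank:", np.round(res.x[:n], 2))
```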

Finally, I incorporate fire sales externalities into the model. In particular, I am able to extend the analysis of systemic risk and the optimal capital requirements with a single illiquid asset to a model with multiple illiquid assets. The model with multiple illiquid assets incorporates liquidation rules used by the banks. I provide an optimization formulation whose solution provides the equilibrium payments for a given liquidation rule.

I further show that the socially optimal capital problem using the "socially optimal liquidation" and prioritized liquidation rules can be formulated as a convex problem and a convex mixed-integer problem, respectively. Finally, I illustrate the methodology on numerical examples and discuss some implications for capital regulation policy and stress testing.

Relevance:

80.00%

Publisher:

Abstract:

Moving through a stable, three-dimensional world is a hallmark of our motor and perceptual experience. This stability is constantly being challenged by movements of the eyes and head, inducing retinal blur and retino-spatial misalignments for which the brain must compensate. To do so, the brain must account for eye and head kinematics to transform two-dimensional retinal input into the reference frame necessary for movement or perception. The four studies in this thesis used both computational and psychophysical approaches to investigate several aspects of this reference frame transformation. In the first study, we examined the neural mechanism underlying the visuomotor transformation for smooth pursuit using a feedforward neural network model. After training, the model performed the general, three-dimensional transformation using gain modulation. This gave mechanistic significance to gain modulation observed in cortical pursuit areas while also providing several testable hypotheses for future electrophysiological work. In the second study, we asked how anticipatory pursuit, which is driven by memorized signals, accounts for eye and head geometry using a novel head-roll updating paradigm. We showed that the velocity memory driving anticipatory smooth pursuit relies on retinal signals, but is updated for the current head orientation. In the third study, we asked how forcing retinal motion to undergo a reference frame transformation influences perceptual decision making. We found that simply rolling one's head impairs perceptual decision making in a way captured by stochastic reference frame transformations. In the final study, we asked how torsional shifts of the retinal projection occurring with almost every eye movement influence orientation perception across saccades. We found a pre-saccadic, predictive remapping consistent with maintaining a purely retinal (but spatially inaccurate) orientation perception throughout the movement. Together these studies suggest that, despite their spatial inaccuracy, retinal signals play a surprisingly large role in our seamless visual experience. This work therefore represents a significant advance in our understanding of how the brain performs one of its most fundamental functions.
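Gain modulation, the mechanism the trained network relied on, can be sketched in a few lines: hidden-unit responses to retinal input are scaled multiplicatively by an extraretinal (head-roll) signal before read-out. All weights and sizes below are illustrative assumptions, not the study's trained parameters.

```python
# Minimal numpy sketch of multiplicative gain modulation in a feed-forward network.
import numpy as np

rng = np.random.default_rng(0)
W_retinal = rng.normal(size=(20, 2))         # hidden weights for 2-D retinal velocity
W_out = rng.normal(size=(2, 20))             # read-out to a 2-D motor command

def pursuit_command(retinal_velocity, head_roll_rad):
    drive = W_retinal @ retinal_velocity               # retinal drive to hidden units
    gain = 1.0 + 0.5 * np.cos(head_roll_rad + np.linspace(0, 2 * np.pi, 20))
    hidden = np.tanh(gain * drive)                     # gain scales, rather than adds to, the drive
    return W_out @ hidden                              # command expressed in the required frame

print(pursuit_command(np.array([1.0, 0.0]), head_roll_rad=np.deg2rad(30)))
```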

Relevance:

80.00%

Publisher:

Abstract:

Development of the Internet of Services will be hampered by heterogeneous Internet of Things infrastructures, such as inconsistencies in communicating with participating objects, connectivity between them, topology definition and data transfer, and access via cloud computing for data storage. Our proposed solutions apply to a random-topology scenario and allow multi-operational sensor networks to be established out of single networks, and/or single-service networks to be established with the participation of multiple networks, thus allowing virtual links to be created and resources to be shared. The designed layers are context-aware, application-oriented, and capable of representing physical objects to a management system, along with discovery of services. The reliability issue is addressed by deploying the IETF-supported IEEE 802.15.4 network model for low-rate wireless personal area networks. The flow-sensor achieved better results than the typical sensor in terms of reachability, throughput, energy consumption and diversity gain, and performance can be further improved by allowing the maximum number of multicast groups.

Relevance:

80.00%

Publisher:

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance:

80.00%

Publisher:

Abstract:

Nervous system disorders are associated with cognitive and motor deficits, and are responsible for the highest disability rates and global burden of disease. Their recovery paths are vulnerable and dependent on the effective combination of plastic brain tissue properties, with complex, lengthy and expensive neurorehabilitation programs. This work explores two lines of research, envisioning sustainable solutions to improve treatment of cognitive and motor deficits. Both projects were developed in parallel and shared a new sensible approach, where low-cost technologies were integrated with common clinical operative procedures. The aim was to achieve more intensive treatments under specialized monitoring, improve clinical decision-making and increase access to healthcare. The first project (articles I – III) concerned the development and evaluation of a web-based cognitive training platform (COGWEB), suitable for intensive use, either at home or at institutions, and across a wide spectrum of ages and diseases that impair cognitive functioning. It was tested for usability in a memory clinic setting and implemented in a collaborative network, comprising 41 centers and 60 professionals. An adherence and intensity study revealed a compliance of 82.8% at six months and an average of six hours/week of continued online cognitive training activities. The second project (articles IV – VI) was designed to create and validate an intelligent rehabilitation device to administer proprioceptive stimuli on the hemiparetic side of stroke patients while performing ambulatory movement characterization (SWORD). Targeted vibratory stimulation was found to be well tolerated and an automatic motor characterization system retrieved results comparable to the first items of the Wolf Motor Function Test. The global system was tested in a randomized placebo controlled trial to assess its impact on a common motor rehabilitation task in a relevant clinical environment (early post-stroke). The number of correct movements on a hand-to-mouth task was increased by an average of 7.2/minute while the probability to perform an error decreased from 1:3 to 1:9. Neurorehabilitation and neuroplasticity are shifting to more neuroscience driven approaches. Simultaneously, their final utility for patients and society is largely dependent on the development of more effective technologies that facilitate the dissemination of knowledge produced during the process. The results attained through this work represent a step forward in that direction. Their impact on the quality of rehabilitation services and public health is discussed according to clinical, technological and organizational perspectives. Such a process of thinking and oriented speculation has led to the debate of subsequent hypotheses, already being explored in novel research paths.

Relevance:

80.00%

Publisher:

Abstract:

Single-walled carbon nanotubes (SWNTs) have been studied as a prominent class of high-performance electronic materials for next-generation electronics. Their geometry-dependent electronic structure, ballistic transport and low power dissipation due to quasi-one-dimensional transport, and their capability of carrying high current densities are some of the main reasons for the optimistic expectations on SWNTs. However, device applications of individual SWNTs have been hindered by uncontrolled variations in characteristics and the lack of scalable methods to integrate SWNTs into electronic devices. One relatively new direction in SWNT electronics, which avoids these issues, is using arrays of SWNTs, where the ensemble average may provide uniformity from device to device, and this new breed of electronic material can be integrated into electronic devices in a scalable fashion. This dissertation describes (1) methods for characterization of SWNT arrays, (2) how the electrical transport in these two-dimensional arrays depends on length scales and spatial anisotropy, (3) the interaction of aligned SWNTs with the underlying substrate, and (4) methods for scalable integration of SWNT arrays into electronic devices. The electrical characterization of SWNT arrays has been realized by polymer electrolyte-gated SWNT thin film transistors (TFTs). Polymer electrolyte-gating addresses many technical difficulties inherent to electrical characterization by gating through oxide dielectrics. Having shown that polymer electrolyte-gating can be successfully applied to SWNT arrays, we have studied the length-scaling dependence of electrical transport in SWNT arrays. Ultrathin films formed by sub-monolayer surface coverage of SWNT arrays are very interesting systems in terms of the physics of two-dimensional electronic transport. We have observed that they behave qualitatively differently from classical conducting films, which obey Ohm's law. The resistance of an ultrathin film of SWNT arrays is in fact non-linear in the length of the film across which transport occurs. More interestingly, a transition between conducting and insulating states is observed at a critical surface coverage, which is called the percolation limit. The surface coverage of conducting SWNTs can be manipulated by turning the semiconductors in the SWNT array on and off, leading to the operating principle of SWNT TFTs. The percolation limit also depends on the length and the spatial orientation of the SWNTs. We have also observed that the percolation limit increases abruptly for aligned arrays of SWNTs, which are grown on single crystal quartz substrates. In this dissertation, we also compare our experimental results with a two-dimensional stick network model, which gives a good qualitative picture of the electrical transport in SWNT arrays in terms of surface coverage, length scaling, and spatial orientation, and briefly discuss the validity of this model. However, the electronic properties of SWNT arrays are not determined by geometrical arguments alone. The contact resistances at the nanotube-nanotube and nanotube-electrode (bulk metal) interfaces, and interactions with local chemical groups and the underlying substrates, are among the other issues related to electronic transport in SWNT arrays. Different aspects of these factors have been studied in detail by many groups. In fact, I have also included a brief discussion of electron injection onto semiconducting SWNTs by polymer dopants.
On the other hand, we have compared the substrate-SWNT interactions for isotropic (in two dimensions) arrays of SWNTs grown on Si/SiO2 substrates and horizontally (on-substrate) aligned arrays of SWNTs grown on single crystal quartz substrates. The anisotropic interactions between quartz and SWNTs associated with the quartz lattice, which allow near-perfect horizontal alignment on the substrate along a particular crystallographic direction, are examined by Raman spectroscopy and shown to lead to uniaxial compressive strain in as-grown SWNTs on single crystal quartz. This is the first experimental demonstration of the hard-to-achieve uniaxial compression of SWNTs. Temperature dependence of the Raman G-band spectra along the length of individual nanotubes reveals that the compressive strain is non-uniform and can locally exceed 1% at room temperature. The effects of device fabrication steps on the non-uniform strain are also examined, and implications for electrical performance are discussed. Based on our findings, discussions of device performance and design are included in this dissertation. The channel-length dependences of device mobilities and on/off ratios are included for SWNT TFTs. The time response of polymer electrolyte-gated SWNT TFTs has been measured to be ~300 Hz, and a proof-of-concept logic inverter has been fabricated using polymer electrolyte-gated SWNT TFTs for macroelectronic applications. Finally, I dedicated a chapter to scalable device designs based on aligned arrays of SWNTs, including a design for SWNT memory devices.
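The two-dimensional stick network model referred to above can be illustrated with a small Monte Carlo sketch: random sticks are thrown into a unit square and the film conducts only when intersecting sticks bridge the two electrodes. The stick length, counts and electrode convention are assumptions chosen for readability, not the dissertation's parameters.

```python
# Monte Carlo sketch of a 2-D stick percolation network between two electrodes.
import numpy as np
from itertools import combinations

def segments_intersect(p1, p2, p3, p4):
    """True if segments p1-p2 and p3-p4 cross (standard orientation test)."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, p3) != orient(p1, p2, p4) and
            orient(p3, p4, p1) != orient(p3, p4, p2))

def percolates(n_sticks, length=0.2, rng=np.random.default_rng(0)):
    mid = rng.uniform(size=(n_sticks, 2))
    ang = rng.uniform(0, np.pi, n_sticks)            # isotropic; a narrow range models alignment
    d = 0.5 * length * np.column_stack([np.cos(ang), np.sin(ang)])
    a, b = mid - d, mid + d
    # union-find over sticks plus two virtual electrodes (left = n, right = n + 1)
    parent = list(range(n_sticks + 2))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)
    for i in range(n_sticks):
        if min(a[i, 0], b[i, 0]) <= 0.0: union(i, n_sticks)      # touches left edge
        if max(a[i, 0], b[i, 0]) >= 1.0: union(i, n_sticks + 1)  # touches right edge
    for i, j in combinations(range(n_sticks), 2):
        if segments_intersect(a[i], b[i], a[j], b[j]):
            union(i, j)
    return find(n_sticks) == find(n_sticks + 1)

# Sweeping the surface coverage: conduction appears only above a percolation threshold.
for n in (20, 60, 120, 200):
    print(n, "sticks ->", "conducting" if percolates(n) else "insulating")
```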

Relevance:

80.00%

Publisher:

Abstract:

Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and making small changes to a solution is difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, a long sequence of moves is normally needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and here each variable corresponds to an individual rule by which a schedule is constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions.
In the LCS approach, each rule has its strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy. The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber to the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computer & Operational Research (in print). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp 1-18.
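A minimal sketch of the LCS-style loop described above (constant initial strengths, roulette-wheel selection, reinforcement of the rules used in a solution) follows; the toy fitness function stands in for a real schedule evaluator and the rule names are illustrative.

```python
# LCS-style strength learning over per-stage construction rules (illustrative only).
import random

random.seed(0)
N_STAGES, RULES, REWARD = 10, ["random", "cheapest", "best_cover", "balance"], 1.0
strength = [{r: 1.0 for r in RULES} for _ in range(N_STAGES)]   # constant initial strengths

def roulette(weights):
    """Roulette-wheel selection proportional to current strength."""
    return random.choices(list(weights), weights=list(weights.values()))[0]

def toy_fitness(rule_string):
    # stand-in for evaluating the schedule built by this rule string
    return rule_string.count("balance") + 0.5 * rule_string.count("cheapest")

best = None
for generation in range(50):
    rule_string = [roulette(strength[stage]) for stage in range(N_STAGES)]
    fit = toy_fitness(rule_string)
    # reinforce the strengths of the rules used in this solution (scaled by its quality);
    # unused rules keep their current strength
    for stage, rule in enumerate(rule_string):
        strength[stage][rule] += REWARD * fit
    if best is None or fit > best[0]:
        best = (fit, rule_string)

print("best rule string:", best[1])
```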

Relevance:

80.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm, by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). At the beginning of the search, the probability of choosing rule 1 or 2 for each nurse is equal, i.e. 50%. After a few iterations, due to the selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after 2-3 uses of rule 1', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus, learning can amount to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) will be generated in this way; 5. Create a new population P(t+1) by replacing some rule strings from P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2.
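A compact sketch of steps 1-6, assuming a chain-structured Bayesian network in which the rule for each nurse is conditioned on the rule chosen for the previous nurse and the probabilities are obtained by counting over promising rule strings; the toy fitness function is a stand-in for the real schedule cost, and the population sizes are illustrative.

```python
# Illustrative BOA loop for rule-string construction (not the paper's exact settings).
import random
random.seed(1)

RULES = ["Random", "Cheapest Cost", "Best Cover", "Balance of Cost and Cover"]
N_NURSES, POP, GENS = 10, 30, 200

def fitness(s):                         # toy stand-in for evaluating the built schedule
    return 0.1 + sum(1.0 for a, b in zip(s, s[1:]) if a != b) + s.count("Balance of Cost and Cover")

def roulette(pop):
    return random.choices(pop, weights=[fitness(s) for s in pop])[0]

population = [[random.choice(RULES) for _ in range(N_NURSES)] for _ in range(POP)]  # step 1
for t in range(GENS):
    promising = [roulette(population) for _ in range(POP // 2)]                      # step 2
    # step 3: count conditional probabilities P(rule_i | rule_{i-1}) with smoothing
    first = {r: 1 + sum(s[0] == r for s in promising) for r in RULES}
    trans = {a: {b: 1 for b in RULES} for a in RULES}
    for s in promising:
        for a, b in zip(s, s[1:]):
            trans[a][b] += 1
    # step 4: sample new rule strings from the learned probabilities
    offspring = []
    for _ in range(POP // 2):
        s = [random.choices(RULES, weights=[first[r] for r in RULES])[0]]
        for _ in range(N_NURSES - 1):
            s.append(random.choices(RULES, weights=[trans[s[-1]][r] for r in RULES])[0])
        offspring.append(s)
    # step 5: replace the weakest half of the population with the offspring
    population = sorted(population, key=fitness, reverse=True)[:POP // 2] + offspring
# step 6: stop after a fixed number of generations and report the best rule string
print(max(population, key=fitness))
```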
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see if there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,