962 results for VERSATILE BUILDING-BLOCKS
An Estimation of Distribution Algorithm with Intelligent Local Search for Rule-based Nurse Rostering
Abstract:
This paper proposes a new memetic evolutionary algorithm to achieve explicit learning in rule-based nurse rostering, which involves applying a set of heuristic rules for each nurse's assignment. The main framework of the algorithm is an estimation of distribution algorithm, in which an ant-miner methodology improves the individual solutions produced in each generation. Unlike our previous work (where learning is implicit), the learning in the memetic estimation of distribution algorithm is explicit, i.e. we are able to identify building blocks directly. The overall approach learns by building a probabilistic model, i.e. an estimation of the probability distribution of the individual nurse-rule pairs that are used to construct schedules. The local search processor (i.e. the ant-miner) reinforces nurse-rule pairs that receive higher rewards. A challenging real-world nurse rostering problem is used as the test problem. Computational results show that the proposed approach outperforms most existing approaches. It is suggested that the learning methodologies presented in this paper may be applied to other scheduling problems where schedules are built systematically according to specific rules.
Abstract:
Schedules can be built in a similar way to a human scheduler by using a set of rules that involve domain knowledge. This paper presents an Estimation of Distribution Algorithm (EDA) for the nurse scheduling problem, which involves choosing a suitable scheduling rule from a set for the assignment of each nurse. Unlike previous work that used Genetic Algorithms (GAs) to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The EDA is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
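The core learning step described above — computing probabilities by counting over a set of promising solutions, then generating each new rule string variable by variable — can be sketched as follows. This is a minimal sketch under simplifying assumptions: it uses a univariate model (each nurse's rule distribution estimated independently) rather than a full Bayesian network, and the rule indices and promising set are invented for illustration.

```python
import random
from collections import Counter

def estimate_probs(promising, n_rules):
    """Per-nurse rule probabilities, learned by counting over promising
    rule strings (Laplace-smoothed so unseen rules keep some probability)."""
    n_pos = len(promising[0])
    probs = []
    for i in range(n_pos):
        counts = Counter(s[i] for s in promising)
        probs.append([(counts[r] + 1) / (len(promising) + n_rules)
                      for r in range(n_rules)])
    return probs

def sample_rule_string(probs):
    """Generate one new rule string, one variable at a time."""
    return [random.choices(range(len(p)), weights=p)[0] for p in probs]

promising = [[0, 1, 1], [0, 1, 0], [1, 1, 1]]   # toy promising rule strings
probs = estimate_probs(promising, n_rules=2)
new_string = sample_rule_string(probs)
```

Since every promising string uses rule 1 for the second nurse, the learned probability for that position leans strongly towards rule 1, so sampled strings tend to inherit that building block.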
Abstract:
Previous research has shown that artificial immune systems can be used to produce robust schedules in a manufacturing environment. The main goal is to develop building blocks (antibodies) of partial schedules that can be used to construct backup solutions (antigens) when disturbances occur during production. The building blocks are created based upon underpinning ideas from artificial immune systems and evolved using a genetic algorithm (Phase I). Each partial schedule (antibody) is assigned a fitness value and the best partial schedules are selected to be converted into complete schedules (antigens). We further investigate whether simulated annealing and the great deluge algorithm can improve the results when hybridised with our artificial immune system (Phase II). We use ten fixed solutions as our target and measure how well we cover these specific scenarios.
Abstract:
Hebb proposed that synapses between neurons that fire synchronously are strengthened, forming cell assemblies and phase sequences. The former, on a shorter scale, are ensembles of synchronized cells that function transiently as a closed processing system; the latter, on a larger scale, correspond to the sequential activation of cell assemblies able to represent percepts and behaviors. Nowadays, the recording of large neuronal populations allows for the detection of multiple cell assemblies. Within Hebb's theory, the next logical step is the analysis of phase sequences. Here we detected phase sequences as consecutive assembly activation patterns, and then analyzed their graph attributes in relation to behavior. We investigated action potentials recorded from the adult rat hippocampus and neocortex before, during and after novel object exploration (experimental periods). Within assembly graphs, each assembly corresponded to a node, and each edge corresponded to the temporal sequence of consecutive node activations. The sum of all assembly activations was proportional to firing rates, but the activity of individual assemblies was not. Assembly repertoire was stable across experimental periods, suggesting that novel experience does not create new assemblies in the adult rat. Assembly graph attributes, on the other hand, varied significantly across behavioral states and experimental periods, and were separable enough to correctly classify experimental periods (Naïve Bayes classifier; maximum AUROCs ranging from 0.55 to 0.99) and behavioral states (waking, slow wave sleep, and rapid eye movement sleep; maximum AUROCs ranging from 0.64 to 0.98). Our findings agree with Hebb's view that assemblies correspond to primitive building blocks of representation, nearly unchanged in the adult, while phase sequences are labile across behavioral states and change after novel experience. The results are compatible with a role for phase sequences in behavior and cognition.
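The graph construction described — one node per assembly, one directed edge per temporal sequence of consecutive node activations — might be sketched as below. The assembly labels, the toy sequence, and the choice to ignore self-transitions are illustrative assumptions, not the authors' exact pipeline.

```python
from collections import defaultdict

def assembly_graph(activations):
    """Directed edge counts between consecutively activated assemblies."""
    edges = defaultdict(int)
    for a, b in zip(activations, activations[1:]):
        if a != b:                      # assumption: drop self-transitions
            edges[(a, b)] += 1
    return dict(edges)

def out_degree(edges, node):
    """Number of distinct assemblies reachable in one step from `node`."""
    return sum(1 for (src, _dst) in edges if src == node)

seq = ["A1", "A2", "A1", "A3", "A2", "A1"]   # toy activation sequence
g = assembly_graph(seq)
```

Graph attributes such as degree can then be computed per experimental period and fed to a classifier, in the spirit of the analysis described.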
Abstract:
Two ideas taken from Bayesian optimization and classifier systems are presented for personnel scheduling based on choosing a suitable scheduling rule from a set for each person's assignment. Unlike our previous work using genetic algorithms, where learning is implicit, the learning in both approaches is explicit, i.e. we are able to identify building blocks directly. To achieve this target, the Bayesian optimization algorithm builds a Bayesian network of the joint probability distribution of the rules used to construct solutions, while the adapted classifier system assigns each rule a strength value that is constantly updated according to its usefulness in the current situation. Computational results from 52 real data instances of nurse scheduling demonstrate the success of both approaches. It is also suggested that the learning mechanism in the proposed approaches might be suitable for other scheduling problems.
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, thus having the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm, by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which will replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts). 
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first two to three nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two to three times', or vice versa. It should be noted that for our and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data; learning thus amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse a rule using roulette-wheel selection based on the rules' conditional probabilities, generating a set of new rule strings O(t); 5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order.
If so, the good patterns could be recognized and then extracted as new domain knowledge. Thus, by using this extracted knowledge, we can assign specific rules to the corresponding nurses beforehand, and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126,
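The six numbered steps above might be condensed into the following sketch. It assumes a univariate probability model (per-nurse rule counts rather than a full Bayesian network) and uses a made-up toy fitness function; in the real algorithm a rule string is scored by building and costing the corresponding schedule, and 2000 generations are used rather than the small budget here.

```python
import random

RULES = ["Random", "Cheapest Cost", "Best Cover", "Balance of Cost and Cover"]

def roulette(weights):
    """Roulette-wheel selection: an index drawn proportionally to its weight."""
    return random.choices(range(len(weights)), weights=weights)[0]

def boa_nurse_scheduling(n_nurses, fitness, pop_size=20, generations=200):
    # Step 1: random initial population of rule strings.
    pop = [[random.randrange(len(RULES)) for _ in range(n_nurses)]
           for _ in range(pop_size)]
    for _ in range(generations):                    # Step 6: iterate
        fits = [fitness(s) for s in pop]
        # Step 2: roulette-wheel selection of promising rule strings.
        promising = [pop[roulette(fits)] for _ in range(pop_size // 2)]
        # Step 3: learn by counting rule usage per nurse (multinomial case).
        counts = [[1 + sum(s[i] == r for s in promising)
                   for r in range(len(RULES))] for i in range(n_nurses)]
        # Step 4: assign each nurse a rule via the learned probabilities.
        offspring = [[roulette(c) for c in counts]
                     for _ in range(pop_size // 2)]
        # Step 5: replace the weakest strings with the offspring.
        survivors = sorted(pop, key=fitness,
                           reverse=True)[:pop_size - len(offspring)]
        pop = survivors + offspring
    return max(pop, key=fitness)

best = boa_nurse_scheduling(5, fitness=lambda s: 1 + sum(s))  # toy fitness
```

With the toy fitness rewarding higher rule indices, the learned per-nurse distributions quickly concentrate on the highest-scoring rules, illustrating how counting over promising strings drives the search.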
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually, we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm is applied to implement such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed according to an initial set of promising solutions. Subsequently, each new instance for each variable is generated by using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
Abstract:
Scheduling problems are generally NP-hard combinatorial problems, and a lot of research has been done to solve these problems heuristically. However, most of the previous approaches are problem-specific, and research into the development of a general scheduling algorithm is still in its infancy. Mimicking the natural evolutionary process of the survival of the fittest, Genetic Algorithms (GAs) have attracted much attention in solving difficult scheduling problems in recent years. Some obstacles exist when using GAs: there is no canonical mechanism to deal with constraints, which are commonly met in most real-world scheduling problems, and small changes to a solution are difficult. To overcome both difficulties, indirect approaches have been presented (in [1] and [2]) for nurse scheduling and driver scheduling, where GAs are used by mapping the solution space, and separate decoding routines then build solutions to the original problem. In our previous indirect GAs, learning is implicit and is restricted to the efficient adjustment of weights for a set of rules that are used to construct schedules. The major limitation of those approaches is that they learn in a non-human way: like most existing construction algorithms, once the best weight combination is found, the rules used in the construction process are fixed at each iteration. However, normally a long sequence of moves is needed to construct a schedule, and using fixed rules at each move is thus unreasonable and not coherent with human learning processes. When a human scheduler is working, he normally builds a schedule step by step following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not completed yet, thus having the ability to finish a schedule by using flexible, rather than fixed, rules.
In this research we intend to design more human-like scheduling algorithms, by using ideas derived from Bayesian Optimization Algorithms (BOA) and Learning Classifier Systems (LCS) to implement explicit learning from past solutions. BOA can be applied to learn to identify good partial solutions and to complete them by building a Bayesian network of the joint distribution of solutions [3]. A Bayesian network is a directed acyclic graph with each node corresponding to one variable, and each variable corresponding to an individual rule by which a schedule will be constructed step by step. The conditional probabilities are computed according to an initial set of promising solutions. Subsequently, each new instance for each node is generated by using the corresponding conditional probabilities, until values for all nodes have been generated. Another set of rule strings will be generated in this way, some of which will replace previous strings based on fitness selection. If stopping conditions are not met, the Bayesian network is updated again using the current set of good rule strings. The algorithm thereby tries to explicitly identify and mix promising building blocks. It should be noted that for most scheduling problems the structure of the network model is known and all the variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data. Thus learning can amount to 'counting' in the case of multinomial distributions. In the LCS approach, each rule has a strength showing its current usefulness in the system, and this strength is constantly assessed [4]. To implement sophisticated learning based on previous solutions, an improved LCS-based algorithm is designed, which consists of the following three steps. The initialization step is to assign each rule at each stage a constant initial strength. Then rules are selected by using the Roulette Wheel strategy.
The next step is to reinforce the strengths of the rules used in the previous solution, keeping the strength of unused rules unchanged. The selection step is to select fitter rules for the next generation. It is envisaged that the LCS part of the algorithm will be used as a hill climber for the BOA algorithm. This is exciting and ambitious research, which might provide the stepping-stone for a new class of scheduling algorithms. Data sets from nurse scheduling and mall problems will be used as test-beds. It is envisaged that once the concept has been proven successful, it will be implemented into general scheduling algorithms. It is also hoped that this research will give some preliminary answers about how to include human-like learning into scheduling algorithms and may therefore be of interest to researchers and practitioners in areas of scheduling and evolutionary computation. References 1. Aickelin, U. and Dowsland, K. (2003) 'Indirect Genetic Algorithm for a Nurse Scheduling Problem', Computers & Operations Research (in print). 2. Li, J. and Kwan, R.S.K. (2003), 'Fuzzy Genetic Algorithm for Driver Scheduling', European Journal of Operational Research 147(2): 334-344. 3. Pelikan, M., Goldberg, D. and Cantu-Paz, E. (1999) 'BOA: The Bayesian Optimization Algorithm', IlliGAL Report No 99003, University of Illinois. 4. Wilson, S. (1994) 'ZCS: A Zeroth-level Classifier System', Evolutionary Computation 2(1), pp 1-18.
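The three LCS steps just described — constant initialisation, roulette-wheel rule selection, and reinforcement of the rules used in the previous solution while unused rules stay unchanged — could be sketched as follows. The reward value and learning rate are illustrative assumptions, not parameters from the source.

```python
import random

def init_strengths(n_stages, n_rules, initial=1.0):
    """Step 1: every rule at every stage starts with the same constant strength."""
    return [[initial] * n_rules for _ in range(n_stages)]

def select_rules(strengths):
    """Step 2: roulette-wheel selection of one rule per scheduling stage."""
    return [random.choices(range(len(row)), weights=row)[0]
            for row in strengths]

def reinforce(strengths, used, reward, rate=0.1):
    """Step 3: strengthen the rules used in the previous solution;
    unused rules keep their strength unchanged."""
    for stage, rule in enumerate(used):
        strengths[stage][rule] += rate * reward
    return strengths

s = init_strengths(n_stages=3, n_rules=2)
used = select_rules(s)
s = reinforce(s, used, reward=5.0)
```

Repeating select-then-reinforce cycles biases subsequent roulette-wheel draws towards rules that contributed to good solutions, which is the hill-climbing role envisaged for the LCS component.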
Abstract:
The introduction of electronically-active heteroanions into polyoxometalates (POMs) is one of the emerging topics in this field. The novel clusters have shown unprecedented intramolecular electron-transfer features that can be directly mediated by the incorporated heteroanions. In this thesis, we focus on the study of phosphite (HPO3²⁻) as a new non-traditional heteroanion, discover HPO3²⁻-templated nanostructures, investigate their electronic behaviours and seek to understand the self-assembly process of HPO3²⁻-templated species. The thesis starts with incorporating HPO3²⁻ into POM cages. The feasibility of this work was illustrated by the successful trapping of HPO3²⁻ into a “Trojan Horse” type {W18O56} nanocage. The reactivity of the embedded {HPO3} was fully studied, showing that the cluster undergoes a structural rearrangement in solution whereby the {HPO3} moieties dimerise to form a weakly interacting (O3PH···HPO3) moiety. In the crystalline state a temperature-dependent intramolecular redox reaction and structural rearrangement occurs. This rearrangement appears to proceed via an intermediate containing two different templates, a pyramidal {HPO3} and a tetrahedral {PO4} moiety. {HPO3}-templated POM cages were then vigorously expanded, leading to the isolation of five either fully oxidised or mixed-valence clusters trapped with mono-, di-, or tri-{HPO3}. Interestingly, an intriguing 3D honeycomb-like host-guest structure was also synthesised. The porous framework was self-aggregated from a tri-phosphite-anion-templated {W21} cluster, with a {VO4}-templated Wells-Dawson type {W18} acting as a guest species within the hexagonal channels. Based on this work, we further extended the templating anions to two different redox-active heteroanions, and discovered a unique mixed-heteroatom templated system built by pairing redox-active {HPIIIO3} with {TeO3}, {SeO3} or {AsO3}. Two molecular systems were developed, i.e.
“Trojan Horse” type [W18O56(HPO3)0.8(SeO3)1.2(H2O)2]8- and cross-shaped [H4P4X4W64O224]32-/36-, where X = TeIV, SeIV, AsIII. In the case of {W18(HPO3)0.8(SeO3)1.2}, the compound is found to be a mixture of heteroleptic {W18(HPO3)(SeO3)} and homoleptic {W18(SeO3)2} and {W18(HPO3)2}, identified by single-crystal X-ray diffraction, NMR and high-resolution mass spectrometry. The cluster exhibited temperature-dependent electronic features similar to those of the “Trojan Horse” type {W18(HPO3)2O56}. However, due to the intrinsic reactivity difference between {HPO3} and {SeO3}, thermal treatment leads to the formation of an unusual species [W18O55(PO4)(SeO3)]5-, in which {HPO3} was fully oxidised to {PO4} within the cage, whereas the lone-pair-containing {SeO3} heteroanions were kept intact inside the shell. This finding is extremely interesting, as it demonstrates that multiple and independent intramolecular electronic behaviours can be achieved through the coexistence of distinct heteroatoms within a single molecule. On the other hand, the cross-shaped [H4P4X4W64O224]32-/36- clusters were constructed from four {W15(HPO3)(XO3)} building units linked by four {WO6} octahedra. Each building unit traps two different heteroatoms. It is interesting to note that the mixed-heteroatom species show self-sorting, with a highly selective positional preference. The smaller {HPO3} anions self-organise into the uncapped side of the {W15} cavity, whereas the closed side is occupied by the larger heteroatoms, which is surprisingly the opposite of what steric hindrance would predict. Density functional theory (DFT) calculations are currently underway to fully understand the preference of heteroatom substitution. This series of clusters is of great interest in terms of achieving single-molecule-based, heteroatom-dependent multiple levels of electron transfer. It has opened a new way to design and synthesise POMs with a higher diversity of electronic states, which may lead to a new type of qubit for quantum computing.
The third chapter focuses on developing polyoxotungstate building blocks templated by {HPO3}. A series of building blocks, {W15O48(HPO3)2}, {W9O30(HPO3)}, {W12O40(HPO3)2} and hexagonal {W6O18(HPO3)}, have been obtained. The first three building blocks have been reported with {SeO3} and/or {TeO3} heteroanions. This result demonstrates that {HPO3} has a similar reactivity to {SeO3} and {TeO3}; studying the self-assembly of {HPO3}-based building blocks should therefore help towards a general understanding of pyramidal-heteroatom-based molecular systems. The hexagonal {W6O18(HPO3)} is observed for the first time in polyoxotungstates, showing some of the reactivity differences between {HPO3} and {SeO3}/{TeO3}. Furthermore, inorganic salts and pH values have some directing influence on the formation and transformation of the various building blocks, resulting in the discovery of a family of {HPO3}-based clusters with nuclearities ranging from {W29} to {W106}. High-resolution mass spectrometry was also carried out to investigate the solution behaviour of the clusters and to gain information on building-block speciation. It is found that some clusters undergo decomposition, which gives rise to potential building blocks responsible for the self-assembly.
Abstract:
Re-creating and understanding the origin of life represents one of the major challenges facing the scientific community. We will never know exactly how life started on planet Earth; however, we can reconstruct the most likely chemical pathways that could have contributed to the formation of the first living systems. Traditionally, prebiotic chemistry has investigated the formation of modern life’s precursors and their self-organisation under very specific conditions thought to be ‘plausible’. So far, this approach has failed to produce a living system from the bottom-up. In the work presented herein, two different approaches are employed to explore the transition from inanimate to living matter. The development of microfluidic technology during the last decades has changed the way traditional chemical and biological experiments are performed. Microfluidics allows the handling of low volumes of reagents with very precise control. The use of micro-droplets generated within microfluidic devices is of particular interest to the field of Origins of Life and Artificial Life. Whilst many efforts have been made aiming to construct cell-like compartments from modern biological constituents, these are usually very difficult to handle. However, microdroplets can be easily generated and manipulated at kHz rates, making them suitable for high-throughput experimentation and analysis of compartmentalised chemical reactions. Therefore, we decided to develop a microfluidic device capable of manipulating microdroplets in such a way that they could be efficiently mixed, split and sorted within iterative cycles. Since no microfluidic technology had been developed before in the Cronin Group, the first chapter of this thesis describes the soft lithographic methods and techniques developed to fabricate microfluidic devices.
Also, special attention is placed on the generation of water-in-oil microdroplets, and the subsequent modules required for the manipulation of the droplets, such as droplet fusers, splitters, sorters and single/multi-layer micromechanical valves. Whilst the first part of this thesis describes the development of a microfluidic platform to assist chemical evolution, finding a compatible set of chemical building blocks capable of reacting to form complex molecules with endowed replicating or catalytic activity was challenging. Hence, the second part of this thesis focuses on potential chemistry that will ultimately possess the properties mentioned above. A special focus is placed on the formation of peptide bonds from unactivated amino acids, one of the greatest challenges in prebiotic chemistry. As opposed to classic prebiotic experiments, in which a specific set of conditions is studied to fit a particular hypothesis, we took a different approach: we explored the effects of several parameters at once on a model polymerisation reaction, without constraining hypotheses about the nature of the optimum conditions or their plausibility. This was facilitated by the development of a new high-throughput automated platform, allowing the exploration of a much larger number of parameters. This led us to discover that peptide bond formation is less challenging than previously imagined. Having established the right set of conditions under which peptide bond formation was enhanced, we then explored the co-oligomerisation of different amino acids, aiming for the formation of heteropeptides with different structures or functions. Finally, we studied the effect of various environmental conditions (rate of evaporation, presence of salts or minerals) on the final product distribution of our oligomeric products.
Abstract:
Developments in theory and experiment have raised the prospect of an electronic technology based on the discrete nature of electron tunnelling through a potential barrier. This thesis deals with novel design and analysis tools developed to study such systems. Possible devices include those constructed from ultrasmall normal tunnelling junctions. These exhibit charging effects, including the Coulomb blockade and correlated electron tunnelling. They allow transistor-like control of the transfer of single carriers, and present the prospect of digital systems operating at the information-theoretic limit. As such, they are often referred to as single electronic devices. Single electronic devices exhibit self-quantising logic and good structural tolerance. Their speed, immunity to thermal noise, and operating voltage all scale beneficially with junction capacitance. For ultrasmall junctions the possibility of room-temperature operation at sub-picosecond timescales seems feasible. However, they are sensitive to external charge, whether from trapping-detrapping events, externally gated potentials, or system cross-talk. Quantum effects such as charge macroscopic quantum tunnelling may degrade performance. Finally, any practical system will be complex and spatially extended (amplifying the above problems), and prone to fabrication imperfection. This summarises why new design and analysis tools are required. Simulation tools are developed, concentrating on the basic building blocks of single electronic systems: the tunnelling junction array and the gated turnstile device. Three main points are considered: the best method of estimating capacitance values from the physical system geometry; the mathematical model that should represent electron tunnelling based on these data; and the application of this model to the investigation of single electronic systems. (DXN004909)
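To give a feel for why the scaling claims hinge on junction capacitance: charging effects dominate only while the single-electron charging energy e²/2C well exceeds the thermal energy k_BT. A back-of-envelope calculation follows; the example capacitance of 1 aF and the safety margin of 10 are arbitrary assumptions for illustration.

```python
E = 1.602176634e-19      # elementary charge, C
KB = 1.380649e-23        # Boltzmann constant, J/K

def charging_energy(capacitance):
    """Energy e^2 / 2C needed to add one electron to a junction of capacitance C."""
    return E ** 2 / (2.0 * capacitance)

def max_operating_temperature(capacitance, margin=10.0):
    """Temperature below which the charging energy exceeds k_B*T by `margin`."""
    return charging_energy(capacitance) / (margin * KB)

# For a hypothetical 1 aF (1e-18 F) junction:
ec = charging_energy(1e-18)                  # roughly 1.3e-20 J (~80 meV)
t_max = max_operating_temperature(1e-18)     # on the order of 90 K at a 10x margin
```

Shrinking the capacitance raises the charging energy linearly in 1/C, which is why ultrasmall junctions bring room-temperature operation into view.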
Abstract:
Internship Report for obtaining the degree of Master in Music Teaching
Abstract:
The work presented herein focused on the automation of coordination-driven self-assembly, exploring methods that allow syntheses to be followed more closely while forming new ligands, as part of the fundamental study of the digitization of chemical synthesis and discovery. Whilst the control and understanding of the principle of pre-organization and self-sorting under non-equilibrium conditions remains a key goal, a clear gap has been identified in the absence of approaches that can permit fast screening and real-time observation of the reaction process under different conditions. A firm emphasis was thus placed on the realization of an autonomous chemical robot, which can not only monitor and manipulate coordination chemistry in real-time, but can also allow the exploration of a large chemical parameter space defined by the ligand building blocks and the metal to coordinate. The self-assembly of imine ligands with copper and nickel cations has been studied in a multi-step approach using a self-built flow system capable of automatically controlling the liquid-handling and collecting data in real-time using a benchtop MS and NMR spectrometer. This study led to the identification of a transient Cu(I) species in situ which allows for the formation of dimeric and trimeric carbonato-bridged Cu(II) assemblies. Furthermore, new Ni(II) complexes and, more remarkably, also a new binuclear Cu(I) complex, which usually requires long and laborious inert conditions, could be isolated. The study was then expanded to the autonomous optimization of the ligand synthesis by enabling feedback control on the chemical system via benchtop NMR. The synthesis of new polydentate ligands has emerged as a result of the study, aiming to enhance the complexity of the chemical system to accelerate the discovery of new complexes. This type of ligand consists of 1-pyridinyl-4-imino-1,2,3-triazole units, which can coordinate with different metal salts.
The studies testing the CuAAC synthesis under microwave conditions led to the discovery of four new Cu complexes, one of them being a coordination polymer obtained from a solvent-dependent crystallization technique. With the goal of easier integration into an automated system, copper tubing was exploited as the chemical reactor for the synthesis of this ligand, as it efficiently enhances the rate of the triazole formation and consequently promotes the formation of the full ligand in high yields within two hours. Lastly, the digitization of coordination-driven self-assembly has been realized for the first time using an in-house autonomous chemical robot, herein named the ‘Finder’. The chemical parameter space to explore was defined by the selection of six variables, which consist of the ligand precursors necessary to form complex ligands (aldehydes, alkynes, amines and azides), of the metal salt solutions, and of other reaction parameters – duration, temperature and reagent volumes. The platform was assembled using round-bottom flasks, flow syringe pumps, copper tubing as an active reactor, and in-line analytics – a pH meter probe, a UV-vis flow cell and a benchtop MS. Control over the system was then obtained with an algorithm capable of autonomously focusing the experiments on the most reactive region of the chemical parameter space (by avoiding areas of low interest). This study led to interesting observations, such as metal-exchange phenomena, and also to the autonomous discovery of self-assembled structures in solution and the solid state – such as 1-pyridinyl-4-imino-1,2,3-triazole based Fe complexes and two helicates based on the same ligand coordination motif.