872 results for "optimisation combinatoire"
Abstract:
Single-photon avalanche diodes (SPADs) are of interest for applications requiring single-photon detection with high timing resolution, such as high-energy physics and medical imaging. Indeed, SPAD arrays, often called silicon photomultipliers (SiPMs), are gradually replacing photomultiplier tubes (PMTs) and avalanche photodiodes (APDs). Moreover, there is a trend towards implementing SPAD arrays in CMOS technology in order to obtain smart pixels optimized for timing resolution. Fabricating SPADs in a commercial CMOS technology brings several advantages over optoelectronic processes, such as low cost, production capacity, integration of electronics and system miniaturization. However, the main drawback of CMOS is the lack of design flexibility at the SPAD architecture level, caused by the fixed, standardized fabrication steps of CMOS technology. Another drawback of CMOS SPAD arrays is the loss of photosensitive area caused by the presence of CMOS circuits. This document presents the design, characterization and optimization of SPADs fabricated in a commercial CMOS technology (Teledyne DALSA 0.8 µm HV CMOS - TDSI CMOSP8G). Custom process modifications were introduced in collaboration with the CMOS company to optimize the SPADs while maintaining CMOS compatibility. The SPAD arrays produced are intended for 3D integration with low-cost CMOS electronics (TDSI) or with advanced submicron CMOS electronics, yielding a digital 3D SiPM. This innovative 3D SiPM aims to replace PMTs, APDs and commercial SiPMs in applications requiring high timing resolution. The research group's main objective is to develop a 3D SiPM with a timing resolution of 10 ps for use in high-energy physics and medical imaging. These applications demand reliable processes with certified production capacity, which justifies the choice of producing the 3D SiPM in commercial CMOS technologies. This thesis studies the design, characterization and optimization of SPADs fabricated in the TDSI-CMOSP8G technology.
Abstract:
The aim of this study was the optimisation of Spirulina platensis drying by convective hot air using response surface methodology. The responses were the thiobarbituric acid (TBA) value and the phycocyanin loss percentage in the final product. Experiments were carried out in a perforated tray drier with parallel air flow; the wet sample thicknesses and drying air temperatures ranged over 3–7 mm and 50–70 °C, respectively. The statistical analysis showed a significant effect (P < 0.05) of air temperature and sample thickness. The best drying condition, 55 °C and 3.7 mm, gave a phycocyanin loss percentage of approximately 37% and a TBA value of approximately 1.5 mg MDA kg⁻¹. Under this drying condition, the fatty acid composition of the microalga Spirulina showed no significant difference (P > 0.05) relative to fresh biomass. The lipid profile of the dried product presented a high percentage of polyunsaturated fatty acids (34.4%), especially gamma-linolenic acid (20.6%).
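A minimal sketch of the kind of two-factor response-surface fit used in such a study, assuming a full quadratic model in air temperature and sample thickness; the design points and response values below are illustrative placeholders, not the study's data:

```python
import numpy as np

# Hypothetical design points: (temperature degC, thickness mm) -> phycocyanin loss %.
# Values are illustrative placeholders, not the study's measurements.
X_raw = np.array([[50, 3], [50, 7], [70, 3], [70, 7], [60, 5], [60, 5]])
y = np.array([40.0, 48.0, 55.0, 63.0, 45.0, 46.0])

def quadratic_design(X):
    """Full second-order model: 1, x1, x2, x1*x2, x1^2, x2^2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of the response surface.
beta, *_ = np.linalg.lstsq(quadratic_design(X_raw), y, rcond=None)

# Evaluate the fitted surface on a grid and pick the minimising condition.
t = np.linspace(50, 70, 41)
h = np.linspace(3, 7, 41)
grid = np.array([[ti, hi] for ti in t for hi in h])
pred = quadratic_design(grid) @ beta
best = grid[np.argmin(pred)]
print(f"Predicted best condition: {best[0]:.1f} degC, {best[1]:.1f} mm")
```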
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing.

Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa.

It should be noted that for our problem, as for most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are as follows (a minimal sketch follows the list):

1. Set t = 0, and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute the conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse a rule using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way;
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.
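A minimal sketch of the loop above, assuming a simple chain-structured network in which each nurse's rule choice is conditioned on the previous nurse's rule; the fitness function is a hypothetical placeholder, and truncation selection stands in for the paper's roulette-wheel selection of promising strings:

```python
import random

N_NURSES, N_RULES = 5, 4          # toy sizes; the paper uses four rules
POP, KEEP, GENERATIONS = 100, 50, 2000

def fitness(rule_string):
    """Hypothetical placeholder: a real implementation would build the
    schedule with these rules and score its cost and cover violations."""
    return -sum((r - 1) ** 2 for r in rule_string) + random.random()

def learn(promising):
    """Count rule choices per nurse position, conditioned on the previous
    nurse's rule (a chain-structured Bayesian network), with +1 smoothing."""
    counts = [[[1] * N_RULES for _ in range(N_RULES)] for _ in range(N_NURSES)]
    for s in promising:
        prev = 0
        for i, r in enumerate(s):
            counts[i][prev][r] += 1
            prev = r
    return counts

def sample(counts):
    """Roulette-wheel sampling of a new rule string from the network."""
    s, prev = [], 0
    for i in range(N_NURSES):
        r = random.choices(range(N_RULES), weights=counts[i][prev])[0]
        s.append(r)
        prev = r
    return s

pop = [[random.randrange(N_RULES) for _ in range(N_NURSES)] for _ in range(POP)]
for t in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)           # steps 2-3: select and learn
    counts = learn(pop[:KEEP])
    offspring = [sample(counts) for _ in range(POP - KEEP)]
    pop = pop[:KEEP] + offspring                  # steps 4-5: sample and replace
print(max(pop, key=fitness))
```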
Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognized and then extracted as new domain knowledge. By using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126.
Abstract:
The second-generation biofuel industry uses, among other feedstocks, lignocellulosic biomass from forestry and agricultural residues and from energy crops. Sweet sorghum [Sorghum bicolor (L.) Moench] is one of these energy crops. The growing interest of the food and biofuel industries in this plant stems from its high sugar content (up to 60% by dry mass). Besides developing quickly (in 5-6 months), sweet sorghum has the advantage of growing on nutrient-poor soils and under low-water conditions, which makes it an attractive feedstock for industry, notably for bioethanol production. The biorefinery concept, combining biofuel production with that of bioenergy or bioproducts, is increasingly studied as a way of adding value to biofuel production. In the context of a biorefinery exploiting lignocellulosic biomass, it is necessary to consider the various extractable metabolites in addition to the macromolecules used to produce biofuels and biocommodities, since these metabolites can have high added value and be of interest to the pharmaceutical or cosmetics industries, for example. The classical techniques for extracting these metabolites, notably Soxhlet extraction and maceration or percolation, are long and energy-intensive. This project therefore investigates a shorter, less expensive method for extracting the primary and secondary metabolites of sweet sorghum, making the industrial exploitation of this energy crop more economically viable. This work, carried out within the CRIEC-B, focused specifically on the use of an ultrasonic water/dimethyl carbonate emulsion, which reduces operating times (to less than one hour instead of several hours) and the quantities of solvent involved in the extraction process. This extractive emulsion solubilizes both hydrophilic and hydrophobic metabolites. Moreover, the environmental impact is limited by the use of environmentally friendly solvents (80% water and 20% dimethyl carbonate). Two extraction systems were studied: one continuously recirculates the emulsion through the biomass bed; the second brings the biomass and solvents into contact with the ultrasonic probe, creating the emulsion and promoting sonolysis of the biomass. In a batch reactor with recirculation of the water/DMC emulsion through the biomass bed at 370 mL·min⁻¹, extraction reached 37.91% in 5 minutes, higher than the ASTM D1105-96 method (34.01% in 11 h). Furthermore, in a "batch-piston" reactor, where the biomass is in direct contact with the ultrasound and the water/DMC emulsion, the best yields were 35.39% in 17.5 minutes, at 15 psig of pressure and 70% ultrasound amplitude. Tests on coarse sorghum particles gave similar results, with 30.23% extracted in the batch reactor with emulsion recirculation (5 min, 370 mL·min⁻¹) and 34.66% with the batch-piston reactor (30 psig, 30 minutes, 95% amplitude).
Abstract:
The electricity market and the climate are both undergoing change. These changes affect hydropower and create interest in hydropower capacity increases. In this thesis, a new methodology was developed that uses short-term hydropower optimisation and planning software to improve the accuracy of capacity-increase profitability analysis. In the methodology, income increases are calculated over month-long periods while varying average discharge and electricity price volatility. The monthly incomes are used to construct year scenarios, and from different types of year scenarios a long-term profitability analysis can be made. Average price development is included via a multiplier. The method was applied to the Oulujoki hydropower plants. The capacity additions analysed for Oulujoki were found not to be profitable. However, the methodology proved versatile and useful. The results showed that short periods of price peaks play a major role in the profitability of capacity increases. Adding more discharge capacity to hydropower plants that initially bypassed water more often showed the best improvements in both income and power-generation profile flexibility.
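A minimal sketch of the scenario construction described above, assuming monthly income increases keyed by discharge and price-volatility classes and a yearly average-price multiplier; all names and figures are illustrative, not values from the thesis:

```python
# Hypothetical monthly income increases (EUR) for one capacity-increase option,
# keyed by (discharge_class, volatility_class); values are placeholders.
monthly_income = {
    ("wet", "high"): 120_000, ("wet", "low"): 80_000,
    ("dry", "high"): 60_000,  ("dry", "low"): 35_000,
}

def year_income(months, price_multiplier=1.0):
    """Sum twelve monthly income increases, scaled by an average
    price-development multiplier for that year."""
    return price_multiplier * sum(monthly_income[m] for m in months)

# A sample year scenario: six wet/high-volatility months, six dry/low months,
# with a 3% average price development applied.
scenario = [("wet", "high")] * 6 + [("dry", "low")] * 6
print(f"Year scenario income increase: {year_income(scenario, 1.03):,.0f} EUR")
```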
Abstract:
This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's (SWO) effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until the stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically using the fitness measure, and the Mutation step follows to discard a further small number of components at random. After these steps, the input solution has become partial and needs to be repaired. The repair is carried out by the Prioritization step, which produces priorities that determine the order in which the following Construction step schedules the remaining components. The optimisation in the ISWO is therefore achieved by solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.
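A minimal skeleton of the Analysis-Selection-Mutation-Prioritization-Construction cycle described above; the component and cost model are hidden behind a user-supplied assign_cost function, and the demo assignment rule at the end is a hypothetical stand-in for a real driver-scheduling constructor:

```python
import random

def iswo(components, assign_cost, iters=1000, mutation_rate=0.02):
    """ISWO skeleton. assign_cost(c, partial) -> (choice, cost) hides
    all problem-specific scheduling detail."""
    solution = construct(list(components), {}, assign_cost)
    best = dict(solution)
    for _ in range(iters):
        # Analysis: each component's assignment cost serves as its blame.
        blame = {c: solution[c][1] for c in components}
        max_b = max(blame.values()) or 1.0
        # Selection: probabilistically discard troublesome components.
        partial = {c: a for c, a in solution.items()
                   if random.random() > blame[c] / max_b}
        # Mutation: discard a small random fraction of the survivors.
        for c in list(partial):
            if random.random() < mutation_rate:
                del partial[c]
        # Prioritization: rebuild the most-blamed components first.
        todo = sorted((c for c in components if c not in partial),
                      key=lambda c: -blame[c])
        # Construction: greedily repair the partial solution.
        solution = construct(todo, partial, assign_cost)
        if total(solution) < total(best):
            best = dict(solution)
    return best

def construct(todo, partial, assign_cost):
    sol = dict(partial)
    for c in todo:
        sol[c] = assign_cost(c, sol)
    return sol

def total(sol):
    return sum(cost for _, cost in sol.values())

# Toy usage: schedule ten components into three slots, cost = slot crowding.
def demo_assign(c, partial):
    loads = {s: sum(1 for ch, _ in partial.values() if ch == s) for s in range(3)}
    slot = min(loads, key=loads.get)
    return slot, float(loads[slot])

print(total(iswo(range(10), demo_assign, iters=50)))
```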
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network from the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
Abstract:
This paper reports on an attempt to apply Genetic Algorithms to the problem of optimising a complex system through discrete event simulation (Simulation Optimisation), with a view to reducing the noise associated with such a procedure. We apply the proposed solution approach to our application test bed, a crossdocking distribution centre, because it is a good representative of the random and unpredictable behaviour of complex systems, e.g. random failure of automated machines and variability in manual order-picker skill. It is known that there is noise in the output of discrete event simulation modelling. Our interest, however, focuses on the effect of noise on the evaluation of the fitness of candidate solutions within the search space, and on the development of techniques to handle this noise. The unique quality of our proposed approach is that we embed a noise reduction technique in our Genetic Algorithm based optimisation procedure, so that it is robust enough to handle noise, efficiently estimates a suitable fitness function, and produces good quality solutions with minimal computational effort.
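A minimal sketch of one standard way to embed noise reduction in GA fitness evaluation, averaging several independent simulation replications per candidate; the simulate function is a hypothetical stand-in for the discrete event simulation, and the paper does not necessarily use this exact technique:

```python
import random
import statistics

def simulate(candidate):
    """Hypothetical stand-in for one discrete event simulation run:
    a true cost plus random noise from machine failures, picker skill, etc."""
    true_cost = sum((x - 0.5) ** 2 for x in candidate)
    return true_cost + random.gauss(0, 0.3)

def noisy_fitness(candidate, replications=5):
    """Noise reduction by averaging independent simulation replications;
    the standard error shrinks by a factor of sqrt(replications)."""
    return statistics.mean(simulate(candidate) for _ in range(replications))

def tournament(pop, k=3):
    """Tournament selection on the averaged (noise-reduced) fitness."""
    return min(random.sample(pop, k), key=noisy_fitness)

# Usage: pick a parent from a random population of 4-dimensional candidates.
pop = [[random.random() for _ in range(4)] for _ in range(20)]
parent = tournament(pop)
```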
Abstract:
CD73 is an ecto-enzyme that has been associated with the suppression of anti-tumour immunity. Its prognostic and therapeutic value has been highlighted in several types of cancer. The first hypothesis of this project is that CD73 expression in the tumour predicts the prognosis of prostate cancer patients. CD73 expression was studied by immunofluorescence in tumour samples. Univariate and multivariate analyses were then conducted to determine whether CD73 expression predicts patients' biochemical recurrence. We found that CD73 independently predicts the prognosis of prostate cancer patients. Moreover, we found that its expression in adjacent normal tissue or in the tumour predicts the occurrence of biochemical recurrence differently. The second hypothesis is that inhibiting CD73 improves the efficacy of a therapeutic vaccine against prostate cancer. The effect of a GVAX-type vaccine was studied in CD73KO mice or in combination with an antibody targeting CD73. We observed that the vaccine's efficacy was increased in mice lacking CD73. However, the combination with anti-CD73 did not improve efficacy.