Abstract:
Percids, including walleye (Sander vitreus), represent a considerable opportunity to diversify the offering of the aquaculture industry in Canada and elsewhere in the world. Despite the species' strong potential, walleye producers remain marginal because larval rearing is difficult and complex, resulting in variable survival rates. For the sustainable development of aquaculture, but also to ensure better control of environmental parameters and consequently better profitability, new rearing methods are moving toward recirculating systems in which most of the water is filtered and recycled. A first phase of our work, carried out in 2014, confirmed that (i) intensive larval rearing methods can be adapted to recirculating systems, (ii) commercial feed enriched with krill meal yields better growth than commercial feed enriched with microalgae, (iii) live Artemia do not promote feed intake when added to rations of a commercial feed enriched with krill meal, and (iv) swim bladder development is the main challenge for producing walleye profitably in recirculating systems. A study conducted in 2015 aimed to promote swim bladder development of walleye in recirculating systems. Four treatments were compared: a weak surface water jet, a strong surface water jet, a microbubbler, and a commercial oil-absorbing boom. We showed that (i) a weak surface water jet is not sufficient to promote swim bladder development of walleye in systems where water is strongly recirculated, and (ii) an oil-absorbing boom is the most effective device for promoting swim bladder development of walleye.
Future work should focus on (i) designing tanks adapted to the realities of intensive percid rearing, (ii) domestication of walleye through genetic selection to increase survival in recirculating systems, and (iii) bioeconomic studies to reduce the risks associated with starting new fish farms.
Abstract:
Single-photon avalanche diodes (SPADs) are of interest for applications requiring single-photon detection with high timing resolution, such as high-energy physics and medical imaging. SPAD arrays, often called silicon photomultipliers (SiPMs), are gradually replacing photomultiplier tubes (PMTs) and avalanche photodiodes (APDs). Moreover, there is a trend toward implementing SPAD arrays in CMOS technology to obtain smart pixels optimized for timing resolution. Fabricating SPADs in a commercial CMOS technology brings several advantages over dedicated optoelectronic processes, such as low cost, production capacity, electronics integration, and system miniaturization. However, the main drawback of CMOS is the limited design flexibility of the SPAD architecture, caused by the fixed, standardized fabrication steps of CMOS technology. Another drawback of CMOS SPAD arrays is the loss of photosensitive area caused by the presence of CMOS circuits. This document presents the design, characterization, and optimization of SPADs fabricated in a commercial CMOS technology (Teledyne DALSA 0.8 µm HV CMOS - TDSI CMOSP8G). Custom process modifications were introduced in collaboration with the CMOS foundry to optimize the SPADs while maintaining CMOS compatibility. The resulting SPAD arrays are intended for 3D integration with low-cost CMOS electronics (TDSI) or with advanced submicron CMOS electronics, yielding a digital 3D SiPM. This innovative 3D SiPM aims to replace PMTs, APDs, and commercial SiPMs in applications requiring high timing resolution.
The research group's main objective is to develop a 3D SiPM with 10 ps timing resolution for use in high-energy physics and medical imaging. These applications demand reliable processes with certified production capacity, which justifies producing the 3D SiPM in commercial CMOS technologies. This thesis studies the design, characterization, and optimization of SPADs fabricated in the TDSI-CMOSP8G technology.
Abstract:
Wastepaper sludge ash (WSA) is generated by a cogeneration station burning wastepaper sludge. It consists mainly of an amorphous aluminosilicate phase, anhydrite, gehlenite, calcite, lime, C2S, C3A, quartz, anorthite and traces of mayenite. Because of its free lime content (~10%), a WSA suspension has a high pH (13). Previous researchers have found that the WSA composition has poor robustness, and its variations lead to unsoundness in Portland cement (PC) blended WSA concrete. This thesis focused on the use of WSA in different types of concrete mixes to avoid the deleterious effect of the expansion caused by WSA hydration. WSA was used in making alkali-activated materials (AAMs), both as a precursor source and as a potential activator, given its amorphous content and highly alkaline nature. Moreover, the autogenous shrinkage of PC concrete at low w/b ratio was exploited to compensate for the expansion caused by WSA. The concrete properties as well as the volume change were investigated for the modified WSA-blended concrete. The reaction mechanism and microstructure of the newly formed binders were evaluated by X-ray diffraction (XRD), calorimetry, thermogravimetric analysis (TGA), scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). When WSA was used as a precursor, the results showed an incompatible reaction between WSA and the alkaline solution: the mixtures were not workable and provided very low compressive strength, no matter which chemical activators were used. This was due to the metallic aluminum in WSA, which releases abundant hydrogen gas when WSA reacts with a strong alkaline solution. The results also showed that WSA can activate the glassy phase contained in slag, glass powder (GP) and class F fly ash (FFA), with an optimum blend ratio of 50:50. The WSA/slag (mass ratio 50:50) mortar (w/b of 0.47) attained 46 MPa at 28 days without heat curing.
Rapid setting was observed for the WSA-activated binders, due to the C3A phase, free lime and metallic aluminum contained in the WSA. Adding 5% gypsum can delay this fast setting, but it greatly increases the risk of internal sulfate attack. The XRD, TGA and calorimetry analyses demonstrated the formation of ettringite, C-S-H, portlandite, hydrogarnet and calcium carboaluminate in the hydrated binders. The mechanical performance of each binder was closely related to its microstructure, as confirmed by SEM observation. The hydrated WSA/slag and WSA/FFA binders formed a C-A-S-H type of gel with a low Ca/Si ratio (0.47-1.6). A hybrid gel (i.e. C-N-A-S-H) was observed for the WSA/GP binder, with a very low Ca/Si ratio (0.26) and Na/Si ratio (0.03). The SEM/EDX analyses showed the formation of expansive gels (ettringite and thaumasite) in the gypsum-added WSA/slag concrete. The gradual emission of hydrogen gas from the reaction of WSA with the alkaline environment significantly increased the porosity and degraded the microstructure of the hydrated matrix after setting. In the last phase of this research, a WSA-PC blended binder was tailored to form a concrete with high autogenous shrinkage in order to compensate for the initial expansion. Different binders were proportioned with PC, WSA, silica fume or slag. The microstructure and mechanical properties of the concrete can be improved by decreasing the w/b ratio and by incorporating silica fume or slag. The 28-day compressive strength of WSA-blended concrete was above 22 MPa and reached 45 MPa when silica fume was added. PC concrete incorporating silica fume or slag tended to develop higher autogenous shrinkage at low w/b ratios; thus the ternary binder with added WSA inhibited long-term shrinkage thanks to the initial expansion provided by WSA.
In the restrained shrinkage test, the concrete ring incorporating the ternary binder (PC/WSA/slag) revealed negligible cracking potential up to 96 days, as a result of the offsetting effect of WSA expansion. WSA-blended regular concrete could thus be produced for potential applications, with reduced expansion, good mechanical properties and lower permeability.
Abstract:
The aim of this study was the optimisation of convective hot-air drying of Spirulina platensis through response surface methodology. The responses were the thiobarbituric acid (TBA) value and the phycocyanin loss percentage in the final product. Experiments were carried out in a perforated-tray drier with parallel air flow; the wet sample thickness and drying air temperature ranged over 3–7 mm and 50–70 °C, respectively. The statistical analysis showed a significant effect (P < 0.05) of air temperature and sample thickness. The best drying condition, 55 °C and 3.7 mm, gave a phycocyanin loss of approximately 37% and a TBA value of approximately 1.5 mg MDA kg−1. Under this drying condition, the fatty acid composition of the Spirulina microalga did not differ significantly (P > 0.05) from that of the fresh biomass. The lipid profile of the dried product presented a high percentage of polyunsaturated fatty acids (34.4%), especially gamma-linolenic acid (20.6%).
Abstract:
A Bayesian optimization algorithm for the nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. Unlike our previous work that used GAs to implement implicit learning, the learning in the proposed algorithm is explicit, i.e. eventually we will be able to identify and mix building blocks directly. The Bayesian optimization algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, a new value for each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case a new rule string has been obtained. Another set of rule strings is generated in this way, some of which replace previous strings based on fitness selection. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed approach might be suitable for other scheduling problems.
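The core generation step described above — estimating per-nurse rule probabilities from a set of promising solutions by counting, then sampling new rule strings from those probabilities — can be sketched as follows. This is a minimal Python sketch that, for simplicity, treats the network nodes as independent (the paper's Bayesian network also conditions on earlier assignments); the function names are illustrative, not from the paper.

```python
import random

def learn_probabilities(promising, n_rules):
    """Multinomial maximum-likelihood estimate: count how often each
    rule appears at each nurse position in the promising solutions."""
    n_nurses = len(promising[0])
    probs = []
    for nurse in range(n_nurses):
        counts = [0] * n_rules
        for rule_string in promising:
            counts[rule_string[nurse]] += 1
        total = sum(counts)
        probs.append([c / total for c in counts])
    return probs

def sample_rule_string(probs, rng=random):
    """Generate a new rule string nurse by nurse, drawing each rule by
    roulette-wheel selection over the learned probabilities."""
    return [rng.choices(range(len(p)), weights=p)[0] for p in probs]
```

Learning here amounts to counting, matching the multinomial-distribution observation in the longer abstract below; sampling then reproduces the rule patterns found in good solutions.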
Abstract:
A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet complete, and thus has the ability to finish a schedule by using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, a new value for each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e. in our case, new rule strings have been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If the stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings. For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation, 2: allocate nurse to low-cost shifts).
At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, two solution pathways emerge: because pure low-cost or pure random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for our problem, and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximize the likelihood of the training data, so learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are: 1. Set t = 0, and generate an initial population P(0) at random; 2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t); 3. Compute the conditional probabilities of each node according to this set of promising solutions; 4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities; a set of new rule strings O(t) is generated in this way; 5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1; 6. If the termination conditions are not met (we use 2000 generations), go to step 2. Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach might be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order.
If so, the good patterns could be recognized and then extracted as new domain knowledge. By using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and only schedule the remaining nurses with all available rules, making it possible to reduce the solution space. Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01. References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126.
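The six numbered steps can be sketched as a single loop. This is a hedged Python sketch, not the authors' implementation: it assumes a higher-is-better fitness function, uses Laplace-smoothed counting for step 3, and treats the network nodes as independent for simplicity.

```python
import random

def bayesian_optimisation(n_nurses, n_rules, fitness, pop_size=100,
                          n_promising=50, generations=2000, rng=random):
    """BOA sketch: random initial population, roulette-wheel selection
    of promising rule strings, counting-based probability update,
    sampling of offspring, and partial replacement."""
    # 1. Generate an initial population P(0) at random.
    pop = [[rng.randrange(n_rules) for _ in range(n_nurses)]
           for _ in range(pop_size)]
    for t in range(generations):
        # 2. Roulette-wheel selection of promising rule strings S(t).
        weights = [fitness(s) for s in pop]
        promising = rng.choices(pop, weights=weights, k=n_promising)
        # 3. Conditional probabilities by counting (Laplace-smoothed).
        probs = []
        for nurse in range(n_nurses):
            counts = [1] * n_rules
            for s in promising:
                counts[s[nurse]] += 1
            total = sum(counts)
            probs.append([c / total for c in counts])
        # 4. Sample a set of new rule strings O(t).
        offspring = [[rng.choices(range(n_rules), weights=p)[0]
                      for p in probs] for _ in range(pop_size // 2)]
        # 5. Replace the worst strings of P(t) with O(t).
        pop.sort(key=fitness, reverse=True)
        pop = pop[:pop_size - len(offspring)] + offspring
    # 6. Stop after the fixed generation budget and return the best string.
    return max(pop, key=fitness)
```

With a toy fitness that rewards one particular rule, the loop converges toward strings dominated by that rule, mirroring the selection-pressure behaviour in the five-nurse example above.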
Abstract:
The second-generation biofuel industry uses, among other feedstocks, lignocellulosic biomass from forestry and agricultural residues and from energy crops. Sweet sorghum [Sorghum bicolor (L.) Moench] is one of these energy crops. The growing interest of the food and biofuel industries in this plant is due to its high sugar content (up to 60% of dry mass). In addition to developing quickly (in 5-6 months), sweet sorghum has the advantage of growing on nutrient-poor soils and under low water supply, making it an attractive feedstock for industry, notably for bioethanol production. The biorefinery concept, combining biofuel production with bioenergy or bioproducts, is increasingly studied as a way to add value to biofuel production. In a biorefinery exploiting lignocellulosic biomass, it is necessary to consider the various extractable metabolites in addition to the macromolecules used to produce biofuels and biocommodities, since these metabolites can have high added value and interest the pharmaceutical or cosmetic industries, for example. Classical techniques for extracting these metabolites, notably Soxhlet extraction and maceration or percolation, are long and energy-intensive. This project therefore investigates a faster, cheaper method for extracting primary and secondary metabolites from sweet sorghum, in order to improve the economics of the industrial exploitation of this energy crop. This work, carried out within the CRIEC-B, focused specifically on the use of an ultrasonic water/dimethyl carbonate emulsion, which reduces operating times (to under an hour instead of several hours) and the quantities of solvents involved in the extraction process.
This extractive emulsion can solubilize both hydrophilic and hydrophobic metabolites. Moreover, the environmental impact is limited by the use of environmentally friendly solvents (80% water and 20% dimethyl carbonate). Two extraction systems were studied: one continuously recirculates the emulsion through the biomass bed; the other brings the biomass and solvents into direct contact with the ultrasonic probe, creating the emulsion and promoting sonolysis of the biomass. In a batch reactor with recirculation of the water/DMC emulsion through the biomass bed at 370 mL·min⁻¹, the extraction yield was 37.91% in 5 minutes, higher than the ASTM D1105-96 method (34.01% in 11 h). In a batch-piston reactor, where the biomass is in direct contact with the ultrasound and the water/DMC emulsion, the best yields were 35.39% in 17.5 minutes, at 15 psig of pressure and 70% ultrasonic amplitude. Tests on coarse sorghum particles gave similar results: 30.23% extract in the batch reactor with emulsion recirculation (5 min, 370 mL·min⁻¹) and 34.66% in the batch-piston reactor (30 psig, 30 minutes, 95% amplitude).
Abstract:
The electricity market and the climate are both undergoing change. These changes impact hydropower and provoke interest in hydropower capacity increases. In this thesis, a new methodology was developed that utilises short-term hydropower optimisation and planning software for more accurate capacity-increase profitability analysis. In this methodology, income increases are calculated over month-long periods while varying the average discharge and electricity price volatility. The monthly incomes are used to construct year scenarios, and from different types of year scenarios a long-term profitability analysis can be made. Average price development is included via a multiplier. The method was applied to the Oulujoki hydropower plants. The capacity additions analysed for Oulujoki were found not to be profitable; however, the methodology proved versatile and useful. The results showed that short periods of price peaks play a major role in the profitability of capacity increases. Adding more discharge capacity to hydropower plants that initially bypassed water more often showed the best improvements in both income and power-generation profile flexibility.
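The long-term profitability calculation described above can be illustrated with a small sketch: probability-weighted year scenarios built from monthly income increases, scaled by an average-price-development multiplier and discounted to a net present value. The function name, the scenario weighting, and the discounting scheme are assumptions made for illustration, not details taken from the thesis.

```python
def npv_of_capacity_increase(investment, year_scenarios, scenario_probs,
                             price_multipliers, discount_rate=0.06):
    """Net present value of a capacity increase.

    year_scenarios    -- list of year types, each a list of 12 monthly
                         income increases from the short-term optimisation
    scenario_probs    -- probability of each year type occurring
    price_multipliers -- one average-price-development multiplier per
                         year of the analysis horizon
    """
    npv = -investment
    for year, multiplier in enumerate(price_multipliers, start=1):
        # Expected annual income increase: probability-weighted mix of
        # year scenarios, each the sum of its 12 monthly incomes.
        expected = sum(p * sum(months)
                       for p, months in zip(scenario_probs, year_scenarios))
        # Scale by the price-development multiplier and discount.
        npv += multiplier * expected / (1 + discount_rate) ** year
    return npv
```

A capacity addition would then be judged profitable when the returned NPV is positive over the chosen horizon.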
Abstract:
This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the effectiveness and execution speed of the original Squeaky Wheel Optimisation (SWO) by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically using the fitness measure, and the Mutation step follows to discard a further small number of components at random. After these steps, the input solution has become partial and needs to be repaired. The repair is carried out by the Prioritization step, which produces priorities that determine the order in which the following Construction step schedules the remaining components. The optimisation in the ISWO is therefore achieved by solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.
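The Analysis-Selection-Mutation-Prioritization-Construction cycle can be sketched generically as below. This is an illustrative Python sketch, not the paper's implementation: it assumes `evaluate` returns a per-component fitness in [0, 1] (higher is better) and `construct` repairs a partial solution by scheduling the missing components in the given order.

```python
import random

def iswo(components, evaluate, construct, max_iters=100,
         mutation_rate=0.05, rng=random):
    """One Analysis-Selection-Mutation-Prioritization-Construction
    cycle per iteration, keeping the best solution seen so far."""
    solution = construct({}, components)  # initial full construction
    best = solution
    for _ in range(max_iters):
        # Analysis: per-component fitness of the current solution.
        fit = evaluate(solution)
        # Selection: probabilistically discard troublesome components
        # (low-fitness components are more likely to be dropped).
        kept = {c: v for c, v in solution.items()
                if rng.random() < fit[c]}
        # Mutation: discard a further small number at random.
        kept = {c: v for c, v in kept.items()
                if rng.random() > mutation_rate}
        # Prioritization: order the missing components, worst first.
        missing = sorted((c for c in components if c not in kept),
                         key=lambda c: fit[c])
        # Construction: repair the now-partial solution.
        solution = construct(kept, missing)
        if sum(evaluate(solution).values()) > sum(evaluate(best).values()):
            best = solution
    return best
```

The disruption (Selection and Mutation) and constructive repair (Prioritization and Construction) stages correspond directly to the cycle named in the abstract.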
Abstract:
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network from the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.