985 results for Process optimisation


Relevance:

30.00%

Publisher:

Abstract:

Water-alternating-gas (WAG) is an enhanced oil recovery method combining the improved macroscopic sweep of water flooding with the improved microscopic displacement of gas injection. The optimal design of the WAG parameters is usually based on numerical reservoir simulation via trial and error, limited by the reservoir engineer's availability. Employing optimisation techniques can guide the simulation runs and reduce the number of function evaluations. In this study, robust evolutionary algorithms are utilised to optimise hydrocarbon WAG performance in the E-segment of the Norne field. The first objective function is the net present value (NPV), and two global semi-random search strategies, a genetic algorithm (GA) and particle swarm optimisation (PSO), are tested on case studies with different numbers of controlling variables, sampled from the set of water and gas injection rates, bottom-hole pressures of the oil production wells, cycle ratio, cycle time, the composition of the injected hydrocarbon gas (miscible/immiscible WAG) and the total WAG period. In progressive experiments, the number of decision variables is increased, increasing the problem complexity while potentially improving the efficacy of the WAG process. The second objective function is the incremental recovery factor (IRF) within a fixed total WAG simulation time, optimised using the same algorithms. The results from the two optimisation techniques are analysed, and their performance, convergence speed and the quality of the optimal solutions found in multiple trials are compared for each experiment. The distinctions between the optimal WAG parameters resulting from NPV and oil recovery optimisation are also examined. This is the first known work optimising over this complete set of WAG variables, and the first use of PSO to optimise a WAG project at the field scale. Compared to the reference cases, the best overall values of the objective functions found by GA and PSO were 13.8% and 14.2% higher, respectively, when NPV was optimised over all the above variables, and 14.2% and 16.2% higher, respectively, when IRF was optimised.
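Although the study couples PSO to a full reservoir simulator, the mechanics of the search can be illustrated with a minimal, self-contained sketch. The simulator is stubbed out by a smooth dummy objective, and the decision variables, bounds and swarm parameters below are illustrative assumptions, not values from the paper:

```python
# Minimal particle swarm optimisation (PSO) sketch for a WAG-style problem.
# The reservoir simulator is stubbed out; in the study it would be a full
# numerical simulation of the Norne E-segment returning NPV. All variable
# names and bounds here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical decision variables: [water rate, gas rate, cycle ratio, cycle time]
lower = np.array([100.0, 5e4, 0.5, 30.0])
upper = np.array([5000.0, 5e5, 2.0, 365.0])

def npv(x):
    # Stand-in for the reservoir simulation; returns a smooth dummy "NPV".
    z = (x - lower) / (upper - lower)
    return -np.sum((z - 0.6) ** 2)  # maximised where every scaled variable is 0.6

n_particles, n_iters, dim = 20, 100, len(lower)
w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients

pos = rng.uniform(lower, upper, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.array([npv(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)]

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lower, upper)
    vals = np.array([npv(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)]

print("best decision vector:", gbest, "best NPV proxy:", pbest_val.max())
```

The same outer loop applies to the GA case by swapping the velocity/position update for selection, crossover and mutation operators.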

Relevance:

30.00%

Publisher:

Abstract:

Single-photon avalanche diodes (SPADs) are of interest for applications requiring the detection of single photons with high timing resolution, such as high-energy physics and medical imaging. In fact, SPAD arrays, often called silicon photomultipliers (SiPMs), are gradually replacing photomultiplier tubes (PMTs) and avalanche photodiodes (APDs). Moreover, there is a trend towards implementing SPAD arrays in CMOS technology in order to obtain smart pixels optimised for timing resolution. Fabricating SPADs in a commercial CMOS technology brings several advantages over optoelectronic processes, such as low cost, production capacity, integration of electronics and system miniaturisation. However, the main drawback of CMOS is the lack of design flexibility at the level of the SPAD architecture, caused by the fixed and standardised nature of the fabrication steps in CMOS technology. Another disadvantage of CMOS SPAD arrays is the loss of photosensitive area caused by the presence of the CMOS circuits. This document presents the design, characterisation and optimisation of SPADs fabricated in a commercial CMOS technology (Teledyne DALSA 0.8 µm HV CMOS - TDSI CMOSP8G). Custom process modifications were introduced in collaboration with the CMOS company to optimise the SPADs while maintaining CMOS compatibility. The SPAD arrays produced are intended to be 3D-integrated with low-cost CMOS electronics (TDSI) or with advanced submicron CMOS electronics, thereby producing a digital 3D SiPM. This innovative 3D SiPM aims to replace PMTs, APDs and commercial SiPMs in applications requiring high timing resolution. The research group's main objective is to develop a 3D SiPM with a timing resolution of 10 ps for use in high-energy physics and medical imaging. These applications demand reliable processes with certified production capacity, which justifies the intent to produce the 3D SiPM with commercial CMOS technologies. This thesis studies the design, characterisation and optimisation of SPADs fabricated in the TDSI-CMOSP8G technology.

Relevance:

30.00%

Publisher:

Abstract:

A Bayesian optimisation algorithm for a nurse scheduling problem is presented, which involves choosing a suitable scheduling rule from a set for each nurse's assignment. When a human scheduler works, he normally builds a schedule systematically, following a set of rules. After much practice, the scheduler gradually masters the knowledge of which solution parts go well with others. He can identify good parts and is aware of the solution quality even if the scheduling process is not yet completed, and thus has the ability to finish a schedule using flexible, rather than fixed, rules. In this paper, we design a more human-like scheduling algorithm by using a Bayesian optimisation algorithm to implement explicit learning from past solutions. A nurse scheduling problem from a UK hospital is used for testing. Unlike our previous work that used Genetic Algorithms to implement implicit learning [1], the learning in the proposed algorithm is explicit, i.e. we identify and mix building blocks directly. The Bayesian optimisation algorithm implements such explicit learning by building a Bayesian network of the joint distribution of solutions. The conditional probability of each variable in the network is computed from an initial set of promising solutions. Subsequently, each new instance of each variable is generated using the corresponding conditional probabilities, until all variables have been generated, i.e., in our case, a new rule string has been obtained. Sets of rule strings are generated in this way, some of which replace previous strings based on fitness. If stopping conditions are not met, the conditional probabilities for all nodes in the Bayesian network are updated again using the current set of promising rule strings.

For clarity, consider the following toy example of scheduling five nurses with two rules (1: random allocation; 2: allocate nurse to low-cost shifts). At the beginning of the search, the probabilities of choosing rule 1 or 2 for each nurse are equal, i.e. 50%. After a few iterations, due to selection pressure and reinforcement learning, we observe two solution pathways: because purely low-cost or purely random allocation produces low-quality solutions, either rule 1 is used for the first 2-3 nurses and rule 2 for the remainder, or vice versa. In essence, the Bayesian network learns 'use rule 2 after using rule 1 two or three times', or vice versa. It should be noted that for ours and most other scheduling problems, the structure of the network model is known and all variables are fully observed. In this case, the goal of learning is to find the rule values that maximise the likelihood of the training data, and learning amounts to 'counting' in the case of multinomial distributions. For our problem, we use four rules: Random, Cheapest Cost, Best Cover, and Balance of Cost and Cover. In more detail, the steps of our Bayesian optimisation algorithm for nurse scheduling are:

1. Set t = 0, and generate an initial population P(0) at random;
2. Use roulette-wheel selection to choose a set of promising rule strings S(t) from P(t);
3. Compute the conditional probabilities of each node according to this set of promising solutions;
4. Assign each nurse using roulette-wheel selection based on the rules' conditional probabilities, generating a set of new rule strings O(t);
5. Create a new population P(t+1) by replacing some rule strings in P(t) with O(t), and set t = t+1;
6. If the termination conditions are not met (we use 2000 generations), go to step 2.

Computational results from 52 real data instances demonstrate the success of this approach. They also suggest that the learning mechanism in the proposed approach may be suitable for other scheduling problems. Another direction for further research is to see whether there is a good constructing sequence for individual data instances, given a fixed nurse scheduling order. If so, the good patterns could be recognised and extracted as new domain knowledge. Using this extracted knowledge, we could assign specific rules to the corresponding nurses beforehand and schedule only the remaining nurses with all available rules, making it possible to reduce the solution space.

Acknowledgements: The work was funded by the UK Government's major funding agency, the Engineering and Physical Sciences Research Council (EPSRC), under grant GR/R92899/01.

References: [1] Aickelin U, "An Indirect Genetic Algorithm for Set Covering Problems", Journal of the Operational Research Society, 53(10): 1118-1126, 2002.
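Because the network structure is known and all variables are fully observed, the learning step really does reduce to counting, as noted above. The sketch below follows steps 1-6 on the five-nurse, two-rule toy example; the fitness function, population sizes and smoothing are invented stand-ins for the real nurse-scheduling objective:

```python
# Sketch of the counting-based Bayesian optimisation loop described above.
# The nurse-scheduling fitness is replaced by a toy stand-in; rule semantics,
# population sizes and the smoothing are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_NURSES, N_RULES = 5, 2          # toy example from the text: 5 nurses, 2 rules
POP, PROMISING, GENS = 50, 15, 200

def fitness(rule_string):
    # Stand-in score: rewards rule 1 (index 0) early and rule 2 (index 1) late,
    # mimicking the 'use rule 2 after 2-3x using rule 1' pattern in the text.
    switch = 2
    return sum(1 for i, r in enumerate(rule_string)
               if (i < switch and r == 0) or (i >= switch and r == 1))

def roulette(pop, scores, k):
    p = np.asarray(scores, float) + 1e-9
    idx = rng.choice(len(pop), size=k, replace=True, p=p / p.sum())
    return [pop[i] for i in idx]

pop = [rng.integers(0, N_RULES, N_NURSES) for _ in range(POP)]
for _ in range(GENS):
    scores = [fitness(s) for s in pop]
    promising = roulette(pop, scores, PROMISING)           # step 2
    # Step 3: with full observability, 'learning' is counting rule
    # frequencies per nurse among the promising strings.
    counts = np.ones((N_NURSES, N_RULES))                  # Laplace smoothing
    for s in promising:
        for nurse, rule in enumerate(s):
            counts[nurse, rule] += 1
    probs = counts / counts.sum(axis=1, keepdims=True)
    # Step 4: sample new rule strings nurse by nurse.
    offspring = [np.array([rng.choice(N_RULES, p=probs[n]) for n in range(N_NURSES)])
                 for _ in range(POP // 2)]
    # Step 5: replace the worst half of the population with the offspring.
    order = np.argsort(scores)
    pop = [pop[i] for i in order[POP // 2:]] + offspring

best = max(pop, key=fitness)
print("best rule string:", best, "fitness:", fitness(best))
```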

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's (SWO) effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically, using the fitness measure, and the Mutation step follows to discard a further small number of components at random. After these steps, the input solution has become partial and needs to be repaired. The repair is carried out by the Prioritization step, which produces priorities that determine the order in which the following Construction step schedules the remaining components. The optimisation in the ISWO is therefore achieved through solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.
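The cycle can be made concrete with a toy stand-in for driver scheduling: assign each of N components a value, score the components individually, and repeatedly disrupt and repair the solution. The component fitness, discard probabilities and repair rule below are illustrative placeholders, not the paper's operators:

```python
# Sketch of the ISWO cycle (Analysis-Selection-Mutation-Prioritization-
# Construction) on a toy assignment problem standing in for driver scheduling.
import random

random.seed(1)
N, K, ITERS = 30, 4, 500
target = [random.randrange(K) for _ in range(N)]   # hidden optimum (toy "cost" model)

def component_fitness(sol):
    # Analysis: score each component, 1 if it matches the hidden optimum.
    return [1 if sol[i] == target[i] else 0 for i in range(N)]

sol = [random.randrange(K) for _ in range(N)]
best_score = sum(component_fitness(sol))

for _ in range(ITERS):
    fit = component_fitness(sol)
    partial = list(sol)
    for i in range(N):
        if fit[i] == 0 and random.random() < 0.8:
            partial[i] = None          # Selection: drop troublesome components
        elif random.random() < 0.05:
            partial[i] = None          # Mutation: drop a few extra at random
    holes = [i for i in range(N) if partial[i] is None]
    random.shuffle(holes)              # Prioritization: here, a random priority
    for i in holes:                    # Construction: repair each hole greedily
        candidates = random.sample(range(K), 2)
        partial[i] = max(candidates, key=lambda v, i=i: 1 if v == target[i] else 0)
    sol = partial
    best_score = max(best_score, sum(component_fitness(sol)))

print("best component score:", best_score, "out of", N)
```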

Relevance:

30.00%

Publisher:

Abstract:

This paper reports on continuing research into the modelling of an order picking process within a Crossdocking distribution centre using Simulation Optimisation. The aim of this project is to optimise a discrete event simulation model and to understand the factors that affect finding its optimal performance. Our initial investigation revealed that the precision of the selected simulation output performance measure and the number of replications required to evaluate the optimisation objective function through simulation influence the effectiveness of the optimisation technique. We experimented with Common Random Numbers in order to improve the precision of our simulation output performance measure, and intended to use the number of replications required for this purpose as the initial number of replications for the optimisation of our Crossdocking distribution centre simulation model. Our results demonstrate that we can improve the precision of the selected simulation output performance measure using Common Random Numbers at various numbers of replications. Furthermore, after optimising the simulation model, we were able to achieve optimal performance using fewer simulation runs for the model that uses Common Random Numbers than for the model that does not.
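The variance-reduction effect of Common Random Numbers can be demonstrated on a toy single-server queue standing in for the Crossdocking model: two configurations driven by the same random stream yield a much less noisy estimate of their performance difference than configurations driven by independent streams. Everything below is an illustrative sketch, not the authors' model:

```python
# Sketch of Common Random Numbers (CRN): when comparing two system
# configurations by simulation, driving both with the same random stream
# reduces the variance of the estimated difference between them.
import random, statistics

def avg_wait(service_rate, seed, n_jobs=500):
    # Toy single-server queue: exponential inter-arrivals and service times.
    rng = random.Random(seed)
    t_free, total_wait, arrival = 0.0, 0.0, 0.0
    for _ in range(n_jobs):
        arrival += rng.expovariate(1.0)           # inter-arrival time
        service = rng.expovariate(service_rate)   # service time
        start = max(arrival, t_free)
        total_wait += start - arrival
        t_free = start + service
    return total_wait / n_jobs

REPS = 200
# CRN: both configurations see the same seed in each replication.
crn_diffs = [avg_wait(1.4, s) - avg_wait(1.2, s) for s in range(REPS)]
# Independent streams: a different seed for each configuration.
ind_diffs = [avg_wait(1.4, s) - avg_wait(1.2, 10_000 + s) for s in range(REPS)]

print("CRN   diff: mean %.3f  stdev %.3f" %
      (statistics.mean(crn_diffs), statistics.stdev(crn_diffs)))
print("indep diff: mean %.3f  stdev %.3f" %
      (statistics.mean(ind_diffs), statistics.stdev(ind_diffs)))
```

Running this shows a visibly smaller standard deviation for the CRN difference estimate, which is exactly why fewer replications (and hence fewer simulation runs) suffice when CRN is used.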

Relevance:

30.00%

Publisher:

Abstract:

Cassava contributes significantly to biobased material development. Conventional approaches to producing and applying its bio-derivatives generate significant waste and pose tailored material development challenges, with negative environmental impact and application limitations. Transforming cassava into sustainable value-added resources requires redesigned approaches; harnessing unexplored material sources and downstream process innovations can mitigate these challenges. The ultimate goal was an integrated sustainable process system for cassava biomaterial development and potential application. An improved simultaneous release recovery cyanogenesis (SRRC) methodology, incorporating intact bitter cassava, was developed and standardised. Films were formulated and characterised, their mass transport behaviour under simulated real-distribution-chain conditions was quantified, and their properties were optimised. An integrated process design system for sustainable waste elimination and biomaterial development was developed. Films and bio-derivatives for desired modified-atmosphere packaging (MAP), fast-delivery nutraceutical excipient and antifungal active coating applications were demonstrated. SRRC-processed intact bitter cassava produced a significantly higher yield of safe bio-derivatives than peeled cassava, guaranteeing 16% waste elimination. Process standardisation transformed the entire root into higher-yield, clarified-colour bio-derivatives with an efficient material balance at optimal global desirability. Solvent mass transport through temperature- and humidity-stressed films induced structural changes and influenced water vapour and oxygen permeability. The seven-unit integrated process design led to cost-effective, energy-efficient and green cassava processing and biomaterials with zero environmental footprint. The optimised bio-derivatives and films demonstrated application in desirable in-package O2/CO2 levels, mould-growth inhibition, faster nutraceutical tablet excipient dissolution and release, and smooth thymol-encapsulated antifungal coatings. Novel material resources, non-root-peeling, zero-waste processing and the standardised methodology present promising process integration tools for sustainable cassava biobased system development. The design outcomes have potential applications in mitigating cyanide challenges and providing bio-derivative development pathways. The process system leads to zero waste, with the potential to reshape current one-way processes into circular designs modelled on nature's effective approaches. Indigenous cassava components as natural material reinforcements and the SRRC processing approach initiate a process with potential for wider deployment in broad product research and development. This research contributes to scientific knowledge in material science and engineering process design.
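The phrase "optimal global desirability" suggests a Derringer-Suich style desirability approach, in which each response is mapped onto a [0, 1] desirability scale and the factors are chosen to maximise the geometric mean of those scales. The sketch below illustrates that idea with invented response models and limits; none of the factor names or numbers come from the study:

```python
# Generic sketch of global desirability optimisation (Derringer-Suich style),
# one plausible reading of "optimal global desirability" above. The response
# models, factor names and limits are invented placeholders.
import numpy as np
from scipy.optimize import minimize

def d_larger_is_better(y, lo, hi):
    # Desirability rises linearly from 0 at lo to 1 at hi.
    return float(np.clip((y - lo) / (hi - lo), 0.0, 1.0))

def global_desirability(x):
    starch, plasticiser = x
    # Placeholder response models for two hypothetical film properties.
    strength = 40 - (starch - 3.0) ** 2 - (plasticiser - 1.5) ** 2
    clarity = 80 - 5 * abs(starch - 2.5) - 8 * abs(plasticiser - 1.0)
    d1 = d_larger_is_better(strength, 20, 40)
    d2 = d_larger_is_better(clarity, 50, 80)
    return (d1 * d2) ** 0.5        # geometric mean of the desirabilities

res = minimize(lambda x: -global_desirability(x), x0=[2.0, 1.0],
               bounds=[(1.0, 5.0), (0.5, 3.0)])
print("optimal factors:", res.x,
      "global desirability:", global_desirability(res.x))
```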

Relevance:

30.00%

Publisher:

Abstract:

The conservation and valorisation of cultural heritage is of fundamental importance for our society, since it bears witness to the legacies of human societies. In the case of metallic artefacts, because corrosion is a never-ending problem, the correct strategies for their cleaning and preservation must be chosen. Thus, the aim of this project was the development of protocols for cleaning archaeological copper artefacts by laser and plasma cleaning, since these allow artefacts to be treated in a controlled and selective manner. Additionally, electrochemical characterisation of the artificial patinas was performed in order to obtain information on the protective properties of the corrosion layers. Reference copper samples with different artificial corrosion layers were used to evaluate the tested parameters. Laser cleaning tests resulted in partial removal of the corrosion products, but the laser-material interactions caused melting of the corrosion layers that were meant to be preserved. The main obstacle for this process is that the materials that must be preserved show lower ablation thresholds than the undesired layers, which makes the proper elimination of dangerous corrosion products very difficult without damaging the artefacts. Different protocols should be developed for different patinas, and real artefacts should be characterised prior to any treatment to determine the best course of action. Low-pressure hydrogen plasma cleaning treatments were performed on two kinds of patinas. In both cases the corrosion layers were partially removed. Total removal of the undesired corrosion products can probably be achieved by increasing the treatment time, the applied power, or the hydrogen pressure. Since the process is non-invasive and does not modify the bulk material, modifying the cleaning parameters is easy. EIS measurements show that, for the artificial patinas, the impedance increases while the patina is growing on the surface and then drops, probably due to diffusion reactions and slow dissolution of copper. These results suggest that the dissolution of copper is heavily influenced by diffusion phenomena and by the porosity of the corrosion product film. Both techniques show good results for cleaning, as long as the proper parameters are used; these depend on the nature of the artefact and the corrosion layers found on its surface.

Relevance:

20.00%

Publisher:

Abstract:

Originally from Asia, Dovyalis hebecarpa is a dark purple/red exotic berry now also produced in Brazil. However, no reports were found in the literature on phenolic extraction from, or characterisation of, this berry. In this study we evaluated the optimisation of anthocyanin and total phenolic extraction from D. hebecarpa berries, aiming at the development of a simple and mild analytical technique. Multivariate analysis was used to optimise the extraction variables (ethanol:water:acetone solvent proportions, time, and acid concentration) at different levels. Acetone/water (20/80 v/v) gave the highest anthocyanin extraction yield, but pure water and different proportions of acetone/water or acetone/ethanol/water (with >50% water) were also effective. Neither acid concentration nor time had a significant effect on extraction efficiency, allowing the recommended parameters to be fixed at the lowest values tested (0.35% formic acid v/v and 17.6 min). Under optimised conditions, extraction efficiencies increased by 31.5% and 11% for anthocyanins and total phenolics, respectively, compared with traditional methods that use more solvent and time. Thus, the optimised methodology increased yields while being less hazardous and time-consuming than traditional methods. Finally, freeze-dried D. hebecarpa showed high contents of the target phytochemicals (319 mg/100 g total anthocyanins and 1,421 mg/100 g total phenolics).
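Multivariate extraction optimisation of this kind is typically carried out by fitting a quadratic response-surface model to a designed set of experiments and then locating the model's optimum. The sketch below shows that workflow with invented design points and yields; the factor names and all numbers are placeholders, not the study's data:

```python
# Sketch of response-surface style multivariate optimisation for an
# extraction study: fit a quadratic model to (invented) design points,
# then locate the settings maximising predicted anthocyanin yield.
import numpy as np

# Design points: (acetone fraction, time in min) and measured yields (mg/100 g).
X = np.array([[0.0, 10], [0.2, 10], [0.4, 10],
              [0.0, 20], [0.2, 20], [0.4, 20],
              [0.0, 30], [0.2, 30], [0.4, 30]], float)
y = np.array([250, 310, 290, 255, 315, 295, 252, 312, 294], float)

# Quadratic model: y ~ b0 + b1*a + b2*t + b3*a^2 + b4*t^2 + b5*a*t
a, t = X[:, 0], X[:, 1]
A = np.column_stack([np.ones_like(a), a, t, a**2, t**2, a * t])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(a, t):
    return coef @ np.array([1, a, t, a**2, t**2, a * t])

# Grid search over the experimental region for the predicted optimum.
grid_a, grid_t = np.meshgrid(np.linspace(0, 0.4, 41), np.linspace(10, 30, 41))
preds = np.vectorize(predict)(grid_a, grid_t)
i = np.unravel_index(np.argmax(preds), preds.shape)
print("predicted best: acetone %.2f, time %.1f min, yield %.0f"
      % (grid_a[i], grid_t[i], preds[i]))
```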

Relevance:

20.00%

Publisher:

Abstract:

Systemic lupus erythematosus is an autoimmune disease with many psychological repercussions that have been studied through qualitative research. Such studies are considered relevant because they reveal the breadth of what patients experience. Given this importance, this study aims to map the qualitative literature on this theme, drawn from studies of the experiences of adult patients of both genders that used semi-structured interviews and/or field observations as tools and a saturation criterion to determine the number of participants. The survey was conducted in the Pubmed, Lilacs, Psycinfo and Cochrane databases, searching for publications in English and Portuguese published between January 2005 and June 2012. The 19 reviewed papers, which dealt with patients in the acute phase of the disease, showed themes that were categorised into eight topics covering the process experienced at various stages, from the onset of the disease, through knowledge of the diagnosis and understanding of the manifestations of the disease, drug treatment and general care, to evolution and prognosis. The collected papers also point to patients' difficulty in understanding what the remission phase consists of, revealing that this clinical stage remains underexplored in psychological studies.

Relevance:

20.00%

Publisher:

Abstract:

PURPOSE: To investigate, by means of cephalometric measurements, the facial symmetry of rats with protein undernutrition (8% protein diet) submitted to experimental mandibular condyle fracture. METHODS: Forty-five adult Wistar rats were distributed into three groups: a fracture group, submitted to condylar fracture with no change in diet; an undernourished fracture group, submitted to a hypoproteic diet and condylar fracture; and an undernourished group, kept until the end of the experiment without condylar fracture. Displaced fractures of the right condyle were induced under general anesthesia. The specimens were submitted to axial radiographic incidence, and cephalometric measurements were made using a computer system. The values obtained were subjected to statistical analyses among the groups and between the sides within each group. RESULTS: There was a significant decrease in serum protein and albumin values in the undernourished fracture group. There was deviation of the midline of the mandible relative to the midline of the maxilla, significant in the undernourished fracture group, as well as asymmetry of the maxilla and mandible, especially in the final period of the experiment. CONCLUSION: Mandibular condyle fracture in rats with protein undernutrition induced asymmetry of the mandible, with consequences also for the maxilla.

Relevance:

20.00%

Publisher:

Abstract:

This study evaluated the effect of specimen design and manufacturing process on microtensile bond strength, internal stress distributions (Finite Element Analysis - FEA) and specimen integrity by means of Scanning Electron Microscopy (SEM) and Laser Scanning Confocal Microscopy (LCM). Excite was applied to a flat enamel surface, and resin composite build-ups were made incrementally with 1-mm increments of Tetric Ceram. Teeth were cut using a diamond disc or a diamond wire, obtaining 0.8 mm² stick-shaped specimens, or were shaped with a Micro Specimen Former, obtaining dumbbell-shaped specimens (n = 10). Samples were randomly selected for SEM and LCM analysis. The remaining samples underwent microtensile testing, and the results were analysed with ANOVA and the Tukey test. The FEA dumbbell-shaped model resulted in a more homogeneous stress distribution. Nonetheless, dumbbell-shaped specimens failed at lower bond strengths (21.83 ± 5.44 MPa; Tukey group c) than stick-shaped specimens (sectioned with wire: 42.93 ± 4.77 MPa, group a; sectioned with disc: 36.62 ± 3.63 MPa, group b), due to geometric irregularities related to the manufacturing process, as noted in the microscopic analyses. It could be concluded that stick-shaped, non-trimmed specimens sectioned with diamond wire are preferred for enamel specimens, as they can be prepared in a less destructive, easier and more precise way.
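A minimal sketch of the named ANOVA + Tukey workflow is shown below, using synthetic samples drawn to match the reported group means and standard deviations; it illustrates the analysis pipeline, not the study's raw data:

```python
# Minimal sketch of the ANOVA + Tukey analysis named above, on synthetic
# bond-strength samples shaped like the reported group means/SDs (MPa).
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(7)
wire = rng.normal(42.93, 4.77, 10)      # stick-shaped, diamond wire
disc = rng.normal(36.62, 3.63, 10)      # stick-shaped, diamond disc
dumbbell = rng.normal(21.83, 5.44, 10)  # dumbbell-shaped, Micro Specimen Former

print(f_oneway(wire, disc, dumbbell))   # overall group-difference test

values = np.concatenate([wire, disc, dumbbell])
groups = ["wire"] * 10 + ["disc"] * 10 + ["dumbbell"] * 10
print(pairwise_tukeyhsd(values, groups))  # pairwise tests behind the a/b/c groups
```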