992 results for Batch process
Abstract:
Batch-mode reverse osmosis (batch-RO) operation is considered a promising desalination method due to its low energy requirement compared to other RO system arrangements. To improve and predict batch-RO performance, studies on concentration polarization (CP) are carried out. The Kimura-Sourirajan mass-transfer model is applied and validated by experimentation with two different spiral-wound RO elements. Explicit analytical Sherwood correlations are derived based on experimental results. For batch-RO operation, a new genetic algorithm method is developed to estimate the Sherwood correlation parameters, taking into account the effects of variation in operating parameters. Analytical procedures are presented, then mass transfer coefficient models are developed for the different operation modes, i.e., batch RO and continuous RO. The CP-related energy loss in batch-RO operation is quantified based on the resulting relationship between feed flow rates and mass transfer coefficients. It is found that CP increases energy consumption in batch-RO by about 25% compared to the ideal case in which CP is absent. For the continuous RO process, the derived Sherwood correlation predicted CP accurately. In addition, we determined the optimum feed flow rate of our batch-RO system.
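The link between the mass transfer coefficient and CP can be illustrated with the classical film model; a minimal sketch, in which the flux and coefficient values are assumed for illustration rather than taken from the study:

```python
from math import exp

def cp_factor(jw, k):
    """Film-model concentration polarization factor: C_wall / C_bulk = exp(Jw / k)."""
    return exp(jw / k)

# Illustrative values (assumed, not from the study):
jw = 1.5e-5   # permeate flux, m/s
k = 4.0e-5    # mass transfer coefficient from a Sherwood correlation, m/s
print(f"CP factor: {cp_factor(jw, k):.3f}")
```

A larger mass transfer coefficient (i.e., a higher feed flow rate) lowers the CP factor, which is the mechanism behind the CP-related energy loss quantified in the abstract.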
Abstract:
Fermentation processes, as objects of modelling and high-quality control, are characterized by interdependent and time-varying process variables, which lead to non-linear models with a very complex structure. This is why conventional optimization methods cannot deliver a satisfactory solution. As an alternative, genetic algorithms, as a stochastic global optimization method, can be applied to overcome these limitations. The robustness of genetic algorithms and their ability to reach a global minimum make them suitable and practical for parameter identification of fermentation models. Different types of genetic algorithms, namely simple, modified and multi-population ones, have been applied and compared for estimating the parameters of a nonlinear dynamic model of fed-batch cultivation of S. cerevisiae.
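As an illustration of the approach, a minimal real-coded genetic algorithm for parameter identification might look like the sketch below; the model (simple exponential growth) and the synthetic data are toy assumptions, not the S. cerevisiae model:

```python
import math
import random

random.seed(0)

# Synthetic "measurements" from a known growth model y = a * exp(b * t)
TRUE_A, TRUE_B = 2.0, 0.5
data = [(t, TRUE_A * math.exp(TRUE_B * t)) for t in [0, 1, 2, 3, 4]]

def sse(params):
    """Sum of squared errors between model prediction and data."""
    a, b = params
    return sum((a * math.exp(b * t) - y) ** 2 for t, y in data)

def ga(pop_size=50, gens=100, bounds=((0.1, 5.0), (0.01, 1.0))):
    """Elitist real-coded GA: arithmetic crossover + Gaussian mutation."""
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=sse)
        elite = pop[: pop_size // 5]          # keep the best fifth
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = random.sample(elite, 2)
            child = [(x + y) / 2 for x, y in zip(p1, p2)]      # crossover
            child = [min(max(g + random.gauss(0, 0.05), lo), hi)
                     for g, (lo, hi) in zip(child, bounds)]    # mutation, clipped
            children.append(child)
        pop = elite + children
    return min(pop, key=sse)

best = ga()
```

The same skeleton carries over to a fed-batch model: only `sse` changes, typically to an integration of the model ODEs compared against measured trajectories.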
Abstract:
Background aims: The cost-effective production of human mesenchymal stromal cells (hMSCs) for off-the-shelf and patient-specific therapies will require an increasing focus on improving product yield and driving manufacturing consistency. Methods: Bone marrow-derived hMSCs (BM-hMSCs) from two donors were expanded for 36 days in monolayer with medium supplemented with either fetal bovine serum (FBS) or PRIME-XV serum-free medium (SFM). Cells were assessed throughout culture for proliferation, mean cell diameter, colony-forming potential, osteogenic potential, gene expression and metabolites. Results: Expansion of BM-hMSCs in PRIME-XV SFM resulted in a significantly higher growth rate (P < 0.001) and increased consistency between donors compared with FBS-based culture. FBS-based culture showed an inter-batch production range of 0.9 and 5 days per dose, compared with 0.5 and 0.6 days in SFM, for the respective BM-hMSC donor lines. Consistency between donors was also improved by the use of PRIME-XV SFM, with a production range of 0.9 days compared with 19.4 days in FBS-based culture. Mean cell diameter was also demonstrated as a process metric for BM-hMSC growth rate and senescence through a correlation (R2 = 0.8705) across all conditions. PRIME-XV SFM also showed increased consistency in BM-hMSC characteristics such as per-cell metabolite utilization, in vitro colony-forming potential and osteogenic potential, despite the higher number of population doublings. Conclusions: We have increased the yield and consistency of BM-hMSC expansion between donors, demonstrating a level of control over the product that has the potential to increase the cost-effectiveness and reduce the risk of these manufacturing processes.
Abstract:
A semi-batch pyrolysis process was used to recover samples of carbon fibre and glass fibre from their respective wastes. The mechanical properties of the recovered fibres were tested and compared to those of virgin fibres, showing good retention of the fibre properties. The recovered fibres were then used to prepare new LDPE composite materials with commercial and laboratory-synthesized compatibilizers. Mild oxidation of the post-pyrolysis recovered fibres and the use of different compatibilizers gave significant improvements in the mechanical properties of the LDPE composites; however, some of the manufactured composites made from recovered fibres had properties similar to those made from virgin fibres.
Abstract:
The importance of the changeover process in the manufacturing industry is becoming widely recognised. Changeover is the complete process of changing from the manufacture of one product to that of an alternative product, until specified production and quality rates are reached. Initiatives to improve changeover exist in industry, as a better changeover process typically contributes to improved quality performance. A high-quality and reliable changeover process can be achieved through the implementation of continuous or radical improvements. This research examines the changeover process of Saudi Arabian manufacturing firms because Saudi Arabia's government is focused on expanding GDP and increasing the number of exporting manufacturing firms; furthermore, it is encouraging foreign manufacturing firms to invest within Saudi Arabia. These initiatives therefore require that Saudi manufacturing businesses develop their changeover practice in order to compete in the market and achieve the government's objectives. The aim of this research is thus to discover the current status of changeover process implementation in Saudi Arabian manufacturing businesses. To achieve this aim, the main objective is to develop a conceptual model to understand and examine the effectiveness of the changeover process within Saudi Arabian manufacturing firms, facilitating identification of those activities that affect the reliability and quality of the process. In order to provide a comprehensive understanding of this area, this research first explores the concept of quality management and its relationship to firm performance and the performance of manufacturing changeover. An extensive body of literature was reviewed on the subject of lean manufacturing and changeover practice.
A research conceptual model was identified based on this review, with a focus on providing high-quality and reliable manufacturing changeover processes during set-up in a dynamic environment. Exploratory research was conducted in sample Saudi manufacturing firms to understand the features of the changeover process within the manufacturing sector, and as a basis for modifying the proposed conceptual model. Qualitative methods were employed, using semi-structured interviews, direct observations and documentation, in order to understand the real situation, such as actual daily practice and the current status of the changeover process in the field. The research instrument, the Changeover Effectiveness Assessment Tool (CEAT), was developed to evaluate changeover practices. A pilot study was conducted to examine CEAT before the main research; consequently, the conceptual model was modified and CEAT was improved in response to the pilot study findings. Case studies were then conducted within eight Saudi manufacturing businesses, assessing the implementation of manufacturing changeover practice in the lighting and medical products sectors. These two sectors were selected because their operation strategy is batch production and because they fulfilled the research sampling strategy. The outcomes of the research improved the conceptual model, ultimately to facilitate firms' adoption and rapid implementation of high-quality, reliable changeover during the set-up process. The main finding of this research is that the Quality factors scored the lowest levels compared with the other factors, namely People, Process and Infrastructure. This research contributes to enabling Saudi businesses to implement the changeover process by adopting the conceptual model. In addition, guidelines for facilitating implementation are provided in this thesis.
Therefore, this research provides insight to enable the Saudi manufacturing industry to be more responsive to rapidly changing customer demands.
Abstract:
A job shop with one batch processing machine and several discrete machines is analyzed. Given a set of jobs, their process routes, processing requirements, and sizes, the objective is to schedule the jobs such that the makespan is minimized. The batch processing machine can process a batch of jobs as long as the machine capacity is not violated, and the batch processing time is equal to the longest processing time among the jobs in the batch. The problem under study can be represented as Jm:batch:Cmax. If no batches were formed, it reduces to the classical job shop scheduling problem (i.e., Jm::Cmax), which is known to be NP-hard. This research extends the scheduling literature by combining Jm::Cmax with batch processing. The primary contributions are the mathematical formulation, a new network representation and several solution approaches. The problem under study is observed widely in metal working and other industries, but has received limited attention due to its complexity. A novel network representation of the problem using disjunctive and conjunctive arcs, and a mathematical formulation, are proposed to minimize the makespan. In addition, several algorithms, such as batch forming heuristics, dispatching rules, a Modified Shifting Bottleneck procedure, Tabu Search (TS) and Simulated Annealing (SA), were developed and implemented. An experimental study was conducted to evaluate the proposed heuristics, and the results were compared to those from a commercial solver (i.e., CPLEX). TS and SA, combined with MWKR-FF as the initial solution, gave the best solutions among all the heuristics proposed. Their results were close to CPLEX, and for some larger instances, with more than 225 total operations, they were competitive in terms of solution quality and runtime. For some larger problem instances, CPLEX was unable to report a feasible solution even after running for several hours.
Between TS and SA, the experimental study indicated that SA produced a better average Cmax across all instances. The solution approaches proposed will help practitioners schedule a job shop (with both discrete and batch processing machines) more efficiently. They are easy to implement and require short run times to solve large problem instances.
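The specific batch forming heuristics are not detailed in the abstract; a minimal first-fit sketch under the stated rules (batch capacity not violated, batch processing time equal to the longest job in the batch), for a single batch machine with all jobs available, might look like:

```python
def form_batches_first_fit(jobs, capacity):
    """First-fit batch forming. Each job is a (size, proc_time) tuple;
    a job joins the first batch whose used capacity still has room."""
    batches = []
    for size, p in jobs:
        for b in batches:
            if b["used"] + size <= capacity:
                b["jobs"].append((size, p))
                b["used"] += size
                break
        else:
            batches.append({"jobs": [(size, p)], "used": size})
    return batches

def makespan(batches):
    # On a single batch machine the makespan is the sum of batch times,
    # where each batch runs as long as its longest job.
    return sum(max(p for _, p in b["jobs"]) for b in batches)

jobs = [(3, 10), (2, 4), (4, 7), (1, 6), (2, 9)]  # (size, processing time)
batches = form_batches_first_fit(jobs, capacity=6)
print(makespan(batches))
```

In the job shop setting of the thesis, such batch decisions interact with the routing of jobs through the discrete machines, which is what the disjunctive/conjunctive network representation and the TS/SA searches address.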
Abstract:
This research studies a hybrid flow shop problem with parallel batch-processing machines in one stage and discrete-processing machines in the other stages, processing jobs of arbitrary sizes. The objective is to minimize the makespan for a set of jobs. The problem is denoted as FF:batch1,sj:Cmax and is formulated as a mixed-integer linear program. The commercial solver AMPL/CPLEX is used to solve problem instances to optimality. Experimental results show that AMPL/CPLEX requires considerable time to find the optimal solution even for a small problem, e.g., a 6-job instance requires 2 hours on average. A bottleneck-first-decomposition (BFD) heuristic is proposed in this study to overcome the computational time problem encountered with the commercial solver. The proposed BFD heuristic is inspired by the shifting bottleneck heuristic: it decomposes the entire problem into three sub-problems and schedules them one by one. It consists of four major steps: formulating sub-problems, prioritizing sub-problems, solving sub-problems and re-scheduling. For solving the sub-problems, two heuristic algorithms are proposed: one for scheduling a hybrid flow shop with discrete processing machines, and the other for scheduling parallel batching machines (single stage). Both consider job arrival and delivery times. A designed experiment is conducted to evaluate the effectiveness of the proposed BFD heuristic, which is further compared against a set of common heuristics including a randomized greedy heuristic and five dispatching rules. The results show that the proposed BFD heuristic outperforms all of these algorithms. To evaluate the quality of the heuristic solution, a procedure is developed to calculate a lower bound on the makespan for the problem under study. The lower bound obtained is tighter than other bounds developed for related problems in the literature.
A meta-search approach based on the Genetic Algorithm concept is developed to evaluate how much the solution obtained from the proposed BFD heuristic can be further improved. The experiment indicates that it reduces the makespan by 1.93% on average, within negligible time, when the problem size is fewer than 50 jobs.
Abstract:
Continuous-flow generation of α-diazosulfoxides results in a two- to three-fold increase in yield and decreased reaction times compared to standard batch synthesis methods. These high-yielding reactions are enabled by flowing through a bed of polystyrene-supported base (PS-DBU or PS-NMe2) with tightly controlled residence times. This engineered solution allows the α-diazosulfoxides to be synthesized rapidly while limiting exposure of the products to the basic reaction conditions, which have been found to cause rapid decomposition. In addition to improved yields, this approach offers ease of processing, an improved safety profile, and scale-up potential.
Abstract:
Food drying is frequently used to preserve a product for longer periods. Owing to their high water content, fruit and vegetables spoil easily through biochemical processes within the product, improper storage and inadequate transport facilities. To avoid such losses, direct drying is used, the oldest method of long-term preservation. However, this method is outdated and cannot meet today's challenges. In the present work, a new batch dryer was developed with a diagonal airflow channel along the length of the drying chamber and without baffles. Despite the undeniable benefit of using baffles, they increase construction costs and also increase the pressure drop, so that more energy is consumed in the drying process. To achieve spatially uniform drying without baffles, the food trays were placed diagonally along the length of the dryer. The primary aim of the diagonal channel was to direct the incoming warm air uniformly over the entire product. The airflow simulation was carried out with ANSYS Fluent on the ANSYS Workbench platform. Two different drying-chamber geometries, diagonal and non-diagonal, were modelled, and the results showed a uniform air distribution for the diagonal airflow design. A series of experiments was conducted to evaluate the design, with potato slices serving as the drying material. The statistical results show a good correlation coefficient for the airflow distribution (87.09%) between the average predicted and the average measured flow velocities.
To evaluate the effect of uniform air distribution on quality changes, the colour of the product was determined contact-free and on-line along the entire length of the drying chamber. For this purpose, an imaging box consisting of a camera and lighting was developed. Spatial differences in this quality parameter were chosen as the criterion for assessing drying uniformity in the drying chamber. A decisive factor for a food batch dryer is its energy consumption, so thermodynamic analyses of the dryer were carried out. The energy efficiency of the system under the selected drying conditions was calculated as 50.16%. The average energy used, in the form of electricity, to produce 1 kg of dried potatoes was calculated as less than 16.24 MJ/kg, and less than 4.78 MJ per kg of water evaporated, at a very high temperature of 65 °C and a slice thickness of 5 mm. The energy and exergy analyses of the diagonal batch dryer were also compared with those of other batch dryers. The choice of drying temperature, mass flow rate of the drying air, dryer capacity and heater type are the important parameters for evaluating the energy use of batch dryers. The development of the diagonal batch dryer is a useful and effective way to increase drying homogeneity. The design allows the entire product in the drying chamber to be exposed to uniform air conditions, instead of passing the air from one tray to the next.
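The specific-energy bookkeeping described above can be sketched as follows; the moisture contents and energy input are assumed for illustration and are not the thesis data:

```python
def water_removed(m_initial, w_initial, w_final):
    """Mass of water evaporated when drying from wet-basis moisture
    fraction w_initial to w_final, starting from m_initial kg of product."""
    return m_initial * (w_initial - w_final) / (1.0 - w_final)

def specific_energy(e_input_mj, mass_kg):
    """Specific energy consumption, MJ per kg."""
    return e_input_mj / mass_kg

# Illustrative numbers (assumed):
m0 = 10.0                              # kg of fresh potato
mw = water_removed(m0, 0.80, 0.10)     # kg of water evaporated
e_in = 40.0                            # MJ of electricity for the batch
print(specific_energy(e_in, mw))       # MJ per kg of water evaporated
```

Dividing the same energy input by the mass of dried product instead of the mass of evaporated water gives the other figure of merit reported in the abstract (MJ per kg of dried potatoes).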
Abstract:
The focus of this paper is to assess and evaluate a new method of utilising coal combustion residues in the glass manufacturing process. A mathematical model of the glass manufacturing material balance was used to find a favourable proportion of normally used batch materials and coal ash. It was found possible to substitute up to 20% of the batch with coal ash. On the scale of world glass production, there is a potential to save 8.4 million tons of silica sand, 6 million tons of dolomite, 3 million tons of clay and 0.2 million tons of lime borate. Furthermore, there is the potential to utilize 2% of coal combustion products with the suggested method.
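The substitution arithmetic behind such savings can be illustrated with a toy material balance; the batch composition and world batch tonnage below are assumptions, not the paper's figures:

```python
# Hypothetical glass batch composition (mass fractions) and world batch scale.
composition = {"silica sand": 0.55, "soda ash": 0.18,
               "dolomite": 0.15, "limestone": 0.12}
world_batch_mt = 100.0   # million tonnes of batch per year (assumed)
substitution = 0.20      # fraction of the batch replaced by coal ash

coal_ash_mt = world_batch_mt * substitution
savings = {mat: world_batch_mt * substitution * frac
           for mat, frac in composition.items()}
print(coal_ash_mt, savings["silica sand"])
```

The real model additionally has to keep the oxide composition of the melt within specification, which is what limits the substitution fraction.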
Abstract:
This thesis addresses Batch Reinforcement Learning methods in Robotics. This sub-class of Reinforcement Learning has shown promising results and has been the focus of recent research. Three contributions are proposed that aim to extend the state-of-the-art methods, allowing for the faster and more stable learning process required for learning in Robotics. The Q-learning update rule is widely applied, since it allows learning without a model of the environment. However, this update rule is transition-based and does not take advantage of the underlying episodic structure of the collected batch of interactions. The Q-Batch update rule is proposed in this thesis to process experiences along the trajectories collected in the interaction phase. This allows a faster propagation of obtained rewards and penalties, resulting in faster and more robust learning. Non-parametric function approximators, such as Gaussian Processes, are also explored. This type of approximator encodes prior knowledge about the latent function in the form of kernels, providing greater flexibility and accuracy. The application of Gaussian Processes in Batch Reinforcement Learning showed higher performance in learning tasks than other function approximators used in the literature. Lastly, in order to extract more information from the experiences collected by the agent, model-learning techniques are incorporated to learn the system dynamics. In this way, it is possible to augment the set of collected experiences with experiences generated through planning with the learned models. Experiments were carried out mainly in simulation, with some tests carried out on a physical robotic platform. The results obtained show that the proposed approaches are able to outperform classical Fitted Q Iteration.
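The idea of sweeping backwards along a stored trajectory, so that a reward at the end of an episode reaches early states in a single pass, can be sketched in a tabular setting; the episode, action set and learning parameters below are illustrative assumptions, not the thesis' exact formulation:

```python
def q_batch_update(Q, episode, actions, alpha=0.5, gamma=0.9):
    """One backward sweep over an episode of (state, action, reward, next_state).
    Processing transitions in reverse order lets a reward at the end of the
    trajectory propagate to early states in a single pass, unlike a purely
    transition-based Q-learning update."""
    for s, a, r, s_next in reversed(episode):
        best_next = max(Q.get((s_next, b), 0.0) for b in actions)
        td_target = r + gamma * best_next
        Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (td_target - Q.get((s, a), 0.0))
    return Q

# Toy 3-step episode that only pays off at the final transition:
episode = [("s0", 1, 0.0, "s1"), ("s1", 1, 0.0, "s2"), ("s2", 1, 1.0, "end")]
Q = q_batch_update({}, episode, actions=[0, 1])
# After a single sweep, even the first state has a non-zero value estimate.
```

A forward, transition-by-transition sweep over the same episode would leave the estimates for "s0" and "s1" at zero, illustrating the faster reward propagation claimed above.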
Abstract:
In this paper, the temperature of a pilot-scale batch reaction system is modeled towards the design of a controller based on the explicit model predictive control (EMPC) strategy. Several mathematical models are developed from experimental data to describe the system behavior. The simplest yet reliable model obtained is a (1,1,1)-order ARX polynomial model, for which the EMPC controller has been designed. The resulting controller has reduced mathematical complexity and, given the successful simulation results, will be applied directly to the real control system in the next stage of the experimental framework.
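A first-order ARX structure of the kind identified here can be simulated in a few lines; the coefficients below are assumed for illustration, not the values identified in the paper:

```python
def arx_111_predict(y_prev, u_prev, a1, b1):
    """One-step-ahead prediction of a (1,1,1)-order ARX model:
    y[k] = -a1*y[k-1] + b1*u[k-1]  (noise term omitted)."""
    return -a1 * y_prev + b1 * u_prev

# Step response of an assumed stable model:
a1, b1 = -0.9, 0.1          # pole at 0.9, steady-state gain b1/(1+a1) = 1.0
y, u = 0.0, 1.0
trajectory = []
for _ in range(5):
    y = arx_111_predict(y, u, a1, b1)
    trajectory.append(round(y, 4))
print(trajectory)
```

Such a one-step predictor is exactly what an MPC or EMPC formulation rolls forward over its prediction horizon when optimizing the control input.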
Abstract:
The current energy market requires urgent revision for the introduction of renewable, less-polluting and inexpensive energy sources. Biohydrogen (bioH2) is considered one of the most appropriate options for this model shift, being easily produced through the anaerobic fermentation of carbohydrate-containing biomass. Ideally, the feedstock should be low-cost, widely available and convertible into a product of interest. Microalgae are considered to possess these properties, and are also highly valued for their capability to assimilate CO2 [1]. The microalga Spirogyra sp. is able to accumulate high concentrations of intracellular starch, a preferred carbon source for some bioH2-producing bacteria such as Clostridium butyricum [2]. In the present work, Spirogyra biomass was submitted to acid hydrolysis to degrade polymeric components and increase the biomass fermentability. Initial tests of bioH2 production in 120 mL reactors with C. butyricum yielded a maximum volumetric productivity of 141 mL H2/L.h and a H2 production yield of 3.78 mol H2/mol of consumed sugars. Subsequently, a sequential batch reactor (SBR) was used for continuous H2 production from Spirogyra hydrolysate. After 3 consecutive batches, the fermentation achieved a maximum volumetric productivity of 324 mL H2/L.h, higher than most results obtained in similar production systems [3], and a potential H2 production yield of 10.4 L H2/L hydrolysate per day. The H2 yield achieved in the SBR was 2.59 mol H2/mol, a value comparable to those attained with several thermophilic microorganisms [3], [4]. In the present work, a detailed analysis of the energy consumption of the microalgae value chain is presented and compared with previous results from the literature. The specific energy requirements were determined, and the functional units considered were g H2 and MJ H2.
It was possible to identify the process stages responsible for the highest energy consumption during bioH2 production from Spirogyra biomass for further optimisation.
Abstract:
Crystallization is employed in many industrial processes, and the method and operation can differ depending on the nature of the substances involved. The aim of this study is to examine the effect of various operating conditions on crystal properties within a chemical engineering design window, with a focus on ultrasound-assisted cooling crystallization. Batch-to-batch variation, minimal manufacturing steps and faster production times are factors that continuous crystallization seeks to address. Scale-up of continuous processes is considered straightforward compared with batch processes, owing to the increase of processing time in a given reactor. In cooling crystallization, ultrasound can be used to control crystal properties. Different model compounds were used to define suitable process parameters for the modular crystallizer, using equal operating conditions in each module. A final temperature of 20 °C was employed in all experiments, while the other operating conditions differed. The studied process parameters and the configuration of the crystallizer were manipulated to achieve continuous operation without crystal clogging along the crystallization path. The results from the continuous experiments were compared with the batch crystallization results and analysed using a Malvern Morphologi G3 instrument to determine crystal morphology and crystal size distribution (CSD). The modular crystallizer was operated successfully at three different residence times. At optimal process conditions, a longer residence time gives smaller crystals and a narrower CSD. At a constant initial solution concentration, the residence time had a clear influence on crystal properties, and the equal-supersaturation criterion in each module offered better results than other cooling profiles.
Compared with batch processes, the combination of continuous crystallization and ultrasound has large potential to overcome clogging, to obtain reproducible and narrow CSDs, specific crystal morphologies and uniform particle sizes, and to eliminate milling stages.
Abstract:
An integrated analysis of naproxen adsorption on bone char under batch and packed-bed column conditions has been performed. Kinetic, thermodynamic and breakthrough parameters have been calculated using adsorption models and artificial neural networks. Results show that naproxen removal using bone char under batch conditions is a feasible and effective process, which may involve electrostatic and non-electrostatic interactions depending mainly on pH. However, a packed-bed column of bone char is not effective for the treatment of diluted naproxen solutions, owing to the low degree of adsorbent utilization (below 4%) at the tested operating conditions. The proposed mechanism for naproxen removal by bone char could include complexation between phosphate and naproxen, hydrogen bonding and possibly hydrophobic π–π electron interactions. This study highlights the relevance of an integrated analysis of adsorbent effectiveness under batch and dynamic conditions to establish the best process configuration for the removal of emerging water pollutants such as pharmaceuticals.