82 results for POLYMERIZATION REACTOR OPTIMIZATION
Abstract:
Decreasing fossil fuel resources combined with increasing world energy demand have raised interest in renewable energy sources. Alternatives include solar, wind and geothermal energy, but only biomass can substitute for the carbon-based feedstock suitable for the production of transportation fuels and chemicals. However, the high oxygen content of biomass creates challenges for the future chemical industry, forcing the development of new processes that allow complete or selective oxygen removal without significant carbon loss. Therefore, understanding and optimization of biomass deoxygenation processes are crucial for the future bio-based chemical industry. In this work, deoxygenation of fatty acids and their derivatives was studied over Pd/C and TiO2-supported noble metal catalysts (Pt, Pt-Re, Re and Ru) to obtain future fuel components. The 5 % Pd/C catalyst was investigated in semibatch and fixed bed reactors at 300 °C and 1.7-2 MPa under inert and hydrogen-containing atmospheres. Based on extensive kinetic studies, plausible reaction mechanisms and pathways were proposed. The influence of unsaturation on the deoxygenation of model compounds and of an industrial feedstock, tall oil fatty acids, over a Pd/C catalyst was demonstrated. Optimization of the reaction conditions suppressed the formation of by-products; hence high yields and selectivities towards linear hydrocarbons, as well as catalyst stability, were achieved. Experiments in a fixed bed reactor filled with a 2 % Pd/C catalyst were performed with stearic acid as a model compound under different hydrogen-containing gas atmospheres to understand catalyst stability under various conditions. Moreover, prolonged experiments were carried out with concentrated model compounds to reveal catalyst deactivation. New materials were proposed for a selective deoxygenation process at lower temperatures (~200 °C), with selectivity tunable towards hydrodeoxygenation over a 4 % Pt/TiO2 catalyst or towards decarboxylation/decarbonylation over a 4 % Ru/TiO2 catalyst. A new method for selective hydrogenation of fatty acids to fatty alcohols was demonstrated with a 4 % Re/TiO2 catalyst. A reaction pathway and mechanism for TiO2-supported metal catalysts were proposed, and optimization of the process conditions increased the formation of the desired products.
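For orientation, the principal deoxygenation routes discussed above can be summarized by their standard overall stoichiometry for stearic acid, the model compound used; these are textbook equations, not taken verbatim from the thesis:

\[
\begin{aligned}
\text{Hydrodeoxygenation:}\quad & \mathrm{C_{17}H_{35}COOH} + 3\,\mathrm{H_2} \rightarrow \mathrm{C_{18}H_{38}} + 2\,\mathrm{H_2O} \\
\text{Decarboxylation:}\quad & \mathrm{C_{17}H_{35}COOH} \rightarrow \mathrm{C_{17}H_{36}} + \mathrm{CO_2} \\
\text{Decarbonylation:}\quad & \mathrm{C_{17}H_{35}COOH} \rightarrow \mathrm{C_{17}H_{34}} + \mathrm{CO} + \mathrm{H_2O}
\end{aligned}
\]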
Abstract:
Direct synthesis from hydrogen and oxygen is a green alternative for the production of hydrogen peroxide. However, this process faces two challenges. Firstly, mixtures of hydrogen and oxygen are explosive over a wide range of concentrations (4-94 % H2 in O2). Secondly, the catalytic reaction of hydrogen and oxygen involves several reaction pathways, many of them resulting in water production and therefore decreasing selectivity. The present work addresses both challenges. The safety problem was handled by employing a novel microstructured reactor. The selectivity of the reaction was greatly improved by developing a set of new catalysts. The final goal was to develop an effective and safe continuous process for the direct synthesis of hydrogen peroxide from H2 and O2. Activated carbon cloth and Sibunit were examined as catalyst supports. Palladium and gold monometallic and palladium-gold bimetallic catalysts were thoroughly investigated through numerous kinetic experiments performed in a tailored batch reactor and several catalyst characterization methods. A complete set of data for the direct synthesis of H2O2 and its catalytic decomposition and hydrogenation was obtained. These data were used to assess the factors influencing the selectivity and activity of the catalysts in the direct synthesis of H2O2 as well as in its decomposition and hydrogenation. A novel microstructured reactor was developed based on hydrodynamic and mass transfer studies in prototype microstructured plates. The shape and size of the structural elements in the microreactor plate were optimized to obtain a high gas-liquid interfacial area and high gas-liquid mass transfer. Finally, empirical correlations for the volumetric mass transfer coefficient were derived. A bench-scale continuous process was developed using the novel microstructured plate reactor. A series of kinetic experiments was performed to investigate the effects of the gas and liquid feed rates and their ratio, the amount of catalyst, the gas feed composition and the pressure on the final rate of H2O2 production and on selectivity.
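The competing pathways mentioned above are conventionally written as the following reaction network (standard stoichiometry, shown here only to illustrate why selectivity is the central difficulty):

\[
\begin{aligned}
&\mathrm{H_2} + \mathrm{O_2} \rightarrow \mathrm{H_2O_2} && \text{(direct synthesis, desired)} \\
&\mathrm{H_2} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{H_2O} && \text{(water formation)} \\
&\mathrm{H_2O_2} + \mathrm{H_2} \rightarrow 2\,\mathrm{H_2O} && \text{(hydrogenation)} \\
&\mathrm{H_2O_2} \rightarrow \mathrm{H_2O} + \tfrac{1}{2}\,\mathrm{O_2} && \text{(decomposition)}
\end{aligned}
\]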
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work presented here and earlier research indicate a potential for savings of around 1-5 %. All the supporting data are already available from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements in the industry and the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It became clear that no such system exists on the marketplace today and that there is room to improve overall process efficiency in the target market through such a planning tool. This thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and to the reconstruction of periodic patterns from noisy time series data is also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
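As a rough illustration of the ridge-projection idea (not the trust region Newton method developed in the thesis), the sketch below implements the well-known subspace-constrained mean-shift iteration on a Gaussian kernel density estimate. All names, the fixed bandwidth h and the fixed iteration limits are illustrative assumptions.

```python
import numpy as np

def kde_grad_hess(x, data, h):
    """Unnormalized gradient and Hessian of a Gaussian KDE at point x."""
    diffs = data - x                                    # vectors from x to the samples
    w = np.exp(-0.5 * np.sum(diffs**2, axis=1) / h**2)  # Gaussian kernel weights
    grad = (w[:, None] * diffs).sum(axis=0) / h**2
    hess = (np.einsum('i,ij,ik->jk', w, diffs, diffs) / h**4
            - w.sum() * np.eye(x.size) / h**2)
    return w, grad, hess

def project_to_ridge(x, data, h, ridge_dim=1, max_iter=500, tol=1e-8):
    """Move x onto a ridge_dim-dimensional density ridge by shifting it only
    within the local normal space spanned by the strongest (most negative)
    curvature directions of the KDE Hessian."""
    x = np.asarray(x, dtype=float).copy()
    for _ in range(max_iter):
        w, grad, hess = kde_grad_hess(x, data, h)
        eigvals, eigvecs = np.linalg.eigh(hess)         # ascending eigenvalues
        V = eigvecs[:, :x.size - ridge_dim]             # normal space of the ridge
        mean_shift = (w[:, None] * data).sum(axis=0) / w.sum() - x
        step = V @ (V.T @ mean_shift)                   # mean shift restricted to the normal space
        x += step
        if np.linalg.norm(step) < tol:
            break
    return x
```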
Abstract:
The objective of this thesis is to examine distribution network designs and modeling practices and to create a framework for identifying the best possible distribution network structure for the case company. The main research question therefore is: how can the case company's distribution network be optimized in terms of customer needs and costs? The theory chapters introduce the basic building blocks of distribution network design and the required calculation methods and models. A framework for distribution network projects was created based on the theory, and the case study was carried out following the defined framework. The distribution network calculations were based on the company's sales plan for the years 2014-2020. The main conclusions and recommendations were that the new Asian business strategy requires high investments in logistics; the first step is to open a new satellite DC in China as soon as possible to support sales, and a possible second step is to open a regional DC in Asia within 2-4 years.
Abstract:
Gasification of biomass is an efficient process for producing liquid fuels, heat and electricity. It is interesting especially for the Nordic countries, where raw material for the process is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications. At elevated temperatures, light hydrocarbons react spontaneously to form higher molecular weight compounds. In this thesis, this phenomenon was studied by a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements, combined with an industrially relevant temperature interval. The aspects covered in the modeling include the screening of possible numerical approaches, the testing of optimization methods and kinetic modeling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference against which the two other models were compared. A compact model which included all the observed species was developed. Parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good. Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in the thermal reactions.
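To make the comparison of estimation strategies concrete, the following sketch fits a deliberately simplified one-reaction Arrhenius model (a toy stand-in for the LLNL and compact models) to synthetic data with both an evolutionary algorithm and a Levenberg-Marquardt least-squares solver from SciPy; the species, parameter values and bounds are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import differential_evolution, least_squares

R = 8.314  # J/(mol K)

def rate(t, c, k0, Ea, T=1073.15):
    """Toy first-order decomposition A -> B with an Arrhenius rate constant."""
    k = k0 * np.exp(-Ea / (R * T))
    return [-k * c[0], k * c[0]]

def simulate(params, t_data, c0=(1.0, 0.0)):
    k0, Ea = params
    sol = solve_ivp(rate, (0.0, t_data[-1]), c0, t_eval=t_data, args=(k0, Ea))
    return sol.y[0]                       # concentration of A at the sampling times

def residuals(params, t_data, c_meas):
    return simulate(params, t_data) - c_meas

# Synthetic "measurements" standing in for the experimental data
t_data = np.linspace(0.0, 10.0, 20)
c_meas = simulate([1.0e5, 1.0e5], t_data) + np.random.normal(0.0, 0.01, t_data.size)

# Evolutionary algorithm: global search over parameter bounds
res_de = differential_evolution(
    lambda p: np.sum(residuals(p, t_data, c_meas) ** 2),
    bounds=[(1e3, 1e7), (1e4, 3e5)])

# Conventional local Levenberg-Marquardt fit from an initial guess
res_lm = least_squares(residuals, x0=[1e4, 5e4], args=(t_data, c_meas), method='lm')

print("DE estimate:", res_de.x, "LM estimate:", res_lm.x)
```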
Abstract:
Effective control and limiting of carbon dioxide (CO₂) emissions in energy production are major challenges of science today. Current research activities include the development of new low-cost carbon capture technologies, and among the proposed concepts, chemical looping combustion (CLC) and chemical looping with oxygen uncoupling (CLOU) have attracted significant attention, as they allow intrinsic separation of pure CO₂ from a hydrocarbon fuel combustion process with a comparatively small energy penalty. Both CLC and CLOU utilize the well-established fluidized bed technology, but several technical challenges need to be overcome in order to commercialize the processes. Therefore, the development of proper modelling and simulation tools is essential for the design, optimization and scale-up of chemical looping-based combustion systems. The main objective of this work was to analyze the technological feasibility of CLC and CLOU processes at different scales using a computational modelling approach. A one-dimensional fluidized bed model frame was constructed and applied to simulations of CLC and CLOU systems consisting of interconnected fluidized bed reactors. The model is based on the conservation of mass and energy, and semi-empirical correlations are used to describe the hydrodynamics, chemical reactions and transfer of heat in the reactors. Another objective was to evaluate the viability of chemical looping-based energy production, and a flow sheet model representing a CLC-integrated steam power plant was developed. The 1D model frame was successfully validated against the operation of a 150 kWth laboratory-sized CLC unit fed by methane. By following certain scale-up criteria, a conceptual design for a CLC reactor system at a pre-commercial scale of 100 MWth was created, after which the validated model was used to predict the performance of the system. As a result, further understanding of the parameters affecting the operation of a large-scale CLC process was acquired, which will be useful for practical design work in the future. The integration of the reactor system and the steam turbine cycle for power production was studied, resulting in a suggested plant layout including a CLC boiler system, a simple heat recovery setup and an integrated steam cycle with a three-pressure-level steam turbine. Possible operational regions of a CLOU reactor system fed by bituminous coal were determined via mass, energy and exergy balance analysis. Finally, the 1D fluidized bed model was modified to suit CLOU, and the performance of a hypothetical 500 MWth CLOU fuel reactor was evaluated by extensive case simulations.
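For orientation, a one-dimensional model frame of this kind is typically built on axial balance equations of the generic form shown below; this is a textbook form given for illustration, not the thesis' actual model equations:

\[
\frac{\partial c_i}{\partial t} + \frac{\partial (u\, c_i)}{\partial z} = \sum_j \nu_{ij}\, r_j ,
\]

where \(c_i\) is the concentration of species \(i\), \(u\) the gas velocity, \(z\) the axial coordinate and \(\nu_{ij} r_j\) the reaction source terms; analogous balances are written for solids and energy, with the hydrodynamics and heat transfer closed by semi-empirical correlations.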
Abstract:
The ability to use the exact coordinates of pebbles and fuel particles in Monte Carlo reactor physics calculations is an important development step for pebble bed reactor modelling. It allows exact modelling of pebble bed reactors with realistic pebble beds, without placing the pebbles in regular lattices. In this study, the multiplication coefficient of the HTR-10 pebble bed reactor is calculated with the Serpent reactor physics code and, using this multiplication coefficient, the number of pebbles required for the critical load of the reactor is determined. The multiplication coefficient is calculated using pebble beds produced with the discrete element method and three different material libraries in order to compare the results. The results obtained are lower than those measured at the experimental reactor and somewhat lower than those obtained with other codes in earlier studies.
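As a sketch of how a critical pebble count can be located once the multiplication coefficient is available as a function of the loading, the snippet below bisects on a hypothetical keff(n) function, which in practice would wrap a full reactor physics calculation; the function name and bounds are illustrative assumptions, not the procedure used in the study.

```python
def critical_pebble_count(keff, n_low, n_high, tol=1):
    """Find the smallest pebble count at which keff reaches 1.0 by bisection.

    keff   : function mapping an integer pebble count to a multiplication coefficient
             (e.g. a wrapper around a Monte Carlo reactor physics run).
    n_low  : loading known to be subcritical (keff < 1).
    n_high : loading known to be supercritical (keff >= 1).
    """
    while n_high - n_low > tol:
        mid = (n_low + n_high) // 2
        if keff(mid) < 1.0:
            n_low = mid
        else:
            n_high = mid
    return n_high
```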
Abstract:
Almost every design, planning and management problem in technical and organizational systems involves several conflicting goals or interests. Nowadays, multicriteria decision models represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, inadequacy of mathematical models for real processes, calculation errors, etc. In practice, this uncertainty often leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters. An example is investment management. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out using two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data. The stability of solutions is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood remains stable with respect to the set of efficient solutions of the initial problem. The other approach to stability analysis studies quantitative measures such as the stability radius. This approach gives information about the limits of perturbations of the input parameters which do not lead to changes in the set of efficient solutions. In the present thesis several results were obtained, including attainable bounds for the stability radii of Pareto optimal and lexicographically optimal portfolios of the investment problem with Savage's and Wald's criteria and the criterion of extreme optimism. In addition, special classes of the problem for which the stability radii can be expressed by explicit formulae were identified. The investigations were carried out using different combinations of the Chebyshev, Manhattan and Hölder metrics, which allowed perturbations of the input parameters to be measured in different ways.
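For orientation, one commonly used form of the stability radius with respect to the Pareto set is given below; this is a generic definition for illustration, while the exact definitions in the thesis depend on the criteria and metric chosen:

\[
\rho(A) = \sup\{\varepsilon \ge 0 : P(A + B) \subseteq P(A) \ \text{for every perturbing matrix } B \ \text{with } \lVert B \rVert \le \varepsilon\},
\]

where \(A\) collects the problem parameters, \(\lVert \cdot \rVert\) is the chosen metric (e.g. Chebyshev, Manhattan or Hölder) and \(P(\cdot)\) denotes the set of efficient (Pareto optimal) portfolios.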
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the focus is on collect-and-place type gantry machines because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or addressed with newly developed heuristic algorithms. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests confirm that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the methods proposed are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
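To illustrate the flavour of one such subproblem, the sketch below orders component placement locations with a generic nearest-neighbour greedy heuristic so that successive head movements stay short; this is an illustrative example, not one of the algorithms developed in the thesis.

```python
import numpy as np

def greedy_placement_order(points, start=0):
    """Order placement locations with a nearest-neighbour greedy heuristic.

    points : (n, 2) array of component placement coordinates on the board.
    Returns a visiting order that keeps successive head movements short.
    """
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    order = [start]
    unvisited.remove(start)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

def tour_length(points, order):
    """Total head travel distance for a given visiting order (a simple quality function)."""
    points = np.asarray(points, dtype=float)
    return sum(np.linalg.norm(points[order[i + 1]] - points[order[i]])
               for i in range(len(order) - 1))
```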
Abstract:
The purpose of this thesis is to find the optimal heat recovery solution for Wärtsilä's dynamic district heating power plant in the German energy markets, where the government pays subsidies for CHP plants in order to increase their share of domestic power production to 25 % by 2020. Dozens of different heat recovery connections have been simulated in order to determine the most efficient ones. The purpose is also to study the feasibility of the different heat recovery connections in a dynamic district heating power plant in the German markets, taking into consideration the day-ahead electricity prices, the district heating network temperatures and the CHP subsidies. The auxiliary cooling, dynamic operation and cost efficiency of the power plant are also investigated.