862 results for Sequential optimization
Abstract:
The iron and steelmaking industry is among the major contributors to anthropogenic emissions of carbon dioxide in the world. The rising levels of CO2 in the atmosphere and global concern about the greenhouse effect and climate change have prompted considerable investigation into how to reduce the energy intensity and CO2 emissions of this industrial sector. In this thesis the problem is tackled by mathematical modeling and optimization using three different approaches. The possibility of using biomass in the integrated steel plant, particularly as an auxiliary reductant in the blast furnace, is investigated. By pre-processing the biomass, its heating value and carbon content can be increased while its oxygen content is decreased. As the compression strength of the pre-processed biomass is lower than that of coke, it is not suitable for replacing a major part of the coke in the blast furnace burden. Therefore, the biomass is assumed to be injected at the tuyere level of the blast furnace. Carbon capture and storage (CCS) is nowadays mostly associated with power plants, but it can also be used to reduce the CO2 emissions of an integrated steel plant. In the case of a blast furnace, the effect of CCS can be further increased by recycling the CO2-stripped top gas back into the process. However, this affects the economy of the integrated steel plant, as the amount of top gas available, e.g., for power and heat production is decreased. High-quality raw materials are a prerequisite for smooth blast furnace operation. High-quality coal is especially needed to produce coke with properties sufficient to ensure proper gas permeability and smooth burden descent. Lower-quality coals, as well as natural gas, which some countries have in great volumes, can be utilized in various direct and smelting reduction processes. The direct reduced iron (DRI) produced with a direct reduction process can be utilized as a feed material for the blast furnace, basic oxygen furnace or electric arc furnace. The liquid hot metal from a smelting reduction process can in turn be used in the basic oxygen furnace or electric arc furnace. The unit sizes and investment costs of an alternative ironmaking process are also lower than those of a blast furnace. In this study, the economy of an integrated steel plant is investigated by simulation and optimization. The studied system consists of linearly described unit processes from the coke plant to the steelmaking units, with a more detailed thermodynamic model of the blast furnace. The results from blast furnace operation with biomass injection revealed the importance of proper pre-processing of the raw biomass, as the composition, heating value and yield of the biomass are all affected by the pyrolysis temperature. As for recycling of CO2-stripped blast furnace top gas, substantial reductions in the emission rates are achieved if the stripped CO2 can be stored. However, the optimal recycling degree, together with the other operating conditions, depends heavily on the cost structure of CO2 emissions and stripping/storage. The economic feasibility of using DRI in the blast furnace depends on the price ratio between DRI pellets and BF pellets. The high amount of energy needed in the rotary hearth furnace to reduce the iron ore leads to increased CO2 emissions.
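As a rough sketch of the kind of linearized plant-economy optimization described above, the following hypothetical example (all names and coefficients are invented for illustration, not taken from the thesis) minimizes reductant cost subject to a carbon requirement and a tuyere-injection limit:

```python
from scipy.optimize import linprog

# Hypothetical linearized reductant-choice problem: pick coke and injected
# biomass rates (kg per ton of hot metal, tHM) to minimize cost, subject to
# a carbon requirement and a tuyere-injection limit. All numbers invented.
cost = [0.25, 0.10]             # EUR/kg: coke, pre-processed biomass
A_ub = [[-0.85, -0.50],         # -(carbon supplied) <= -(400 kg C / tHM)
        [0.0, 1.0]]             # biomass injection capped at the tuyeres
b_ub = [-400.0, 150.0]
res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)])
print(res.x)                    # optimal coke and biomass rates, kg/tHM
```

With these made-up coefficients the biomass injection limit binds, illustrating how a cheaper auxiliary reductant is used up to its operational cap before coke covers the rest of the carbon requirement.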
Abstract:
Many quantitative problems from widely different fields can be described as optimization problems. A measure of the quality of the solution is to be optimized while certain constraints on the solution are satisfied. The quality measure is usually called the objective function and can describe costs (e.g., production, logistics), potential energy (molecular modeling, protein folding), risk (finance, insurance) or some other relevant measure. My doctoral thesis discusses in particular nonlinear programming (NLP) in finite dimensions. Problems with simple structure, for example some form of convexity, can be solved efficiently. Unfortunately, not all quantitative relationships can be modeled in a convex way. Non-convex problems can be attacked with heuristic methods, algorithms that search for solutions using deterministic or stochastic rules of thumb. Sometimes this works well, but heuristics can rarely guarantee the quality of the solution, or even that a solution will be found at all. For some applications this is unacceptable. Instead, so-called global optimization can be applied. By successively dividing the variable domain into smaller parts and computing stronger bounds on the optimal value, a solution within the error tolerance is found. This method is called branch and bound. To provide lower bounds (in minimization), the problem is approximated by simpler problems, for example convex ones, which can be solved efficiently. The thesis studies approaches to approximating differentiable functions with convex underestimators, in particular the so-called alphaBB method. This method adds perturbations of a certain form and guarantees convexity by imposing conditions on the perturbed Hessian matrix. My research has highlighted a natural extension of the perturbations used in alphaBB. New methods for determining the underestimation parameters have been described and compared. The summary part discusses global optimization from broader perspectives on optimization and computational algorithms.
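For reference, the textbook form of the alphaBB underestimator reads as follows (this is the standard formulation; the extended perturbations studied in the thesis are not reproduced here):

```latex
% Standard alphaBB convex underestimator of f on the box [x^L, x^U]:
L(x) = f(x) + \sum_{i=1}^{n} \alpha_i \,(x_i^L - x_i)(x_i^U - x_i),
\qquad \alpha_i \ge 0 .
% L \le f on the box, since each product term is nonpositive there.
% L is convex iff the perturbed Hessian is positive semidefinite:
\nabla^2 L(x) = \nabla^2 f(x) + 2\,\operatorname{diag}(\alpha) \succeq 0
\quad \text{for all } x \in [x^L, x^U].
```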
Abstract:
Tank mixtures of herbicides with different mechanisms of action can broaden the weed control spectrum and may be an important strategy for preventing the development of resistance in RR soybean. However, little is known about the effects of these herbicide combinations on soybean plants. Hence, two experiments were carried out to evaluate the selectivity of glyphosate mixtures with other active ingredients applied postemergence to RR soybean. The first application was carried out at the V1 to V2 soybean stage and the second at V3 to V4 (15 days after the first). In experiment I, the treatments evaluated (rates in g ha-1) consisted of two sequential applications: the first with glyphosate (720) in tank mixtures with cloransulam (30.24), fomesafen (125), lactofen (72), chlorimuron (12.5), flumiclorac (30), bentazon (480) or imazethapyr (80); the second application consisted of glyphosate alone (480). In experiment II, the treatments also consisted of two sequential applications, but the tank mixtures described above were applied as the second application; the first application in this experiment consisted of glyphosate alone (720). In both experiments, sequential applications of glyphosate alone at 720/480, 960/480, 1200/480 and 960/720 (Expt. I) or 720/480, 720/720, 720/960 and 720/1200 (Expt. II) were used as control treatments. Applications of glyphosate tank mixtures with other herbicides are more selective to RR soybean when applied at younger stages, whereas applications at later stages might cause yield losses, especially when glyphosate is mixed with lactofen or bentazon.
Abstract:
The objective of this study was to optimize and validate the solid-liquid extraction (ESL) technique for the determination of picloram residues in soil samples. At the optimization stage, the optimal conditions for extraction of the soil samples were determined using univariate analysis. The soil/extraction solution ratio, the type and time of agitation, and the ionic strength and pH of the extraction solution were evaluated. Based on the optimized parameters, the following method of extraction and analysis of picloram was developed: weigh 2.00 g of soil, dried and sieved through a 2.0 mm mesh sieve, add 20.0 mL of 0.5 mol L-1 KCl, shake the bottle in a vortex for 10 seconds to form a suspension, and adjust to pH 7.00 with 0.1 mol L-1 KOH. Homogenize the system in a shaker for 60 minutes and then let it stand for 10 minutes. The bottles are centrifuged for 10 minutes at 3,500 rpm. After the soil particles have settled and the supernatant extract has been cleaned, an aliquot is withdrawn and analyzed by high performance liquid chromatography. The optimized method was validated by determining the selectivity, linearity, limits of detection and quantification, precision and accuracy. The ESL methodology was efficient for the analysis of residues of the pesticide studied, with recovery percentages above 90%. The limits of detection and quantification were 20.0 and 66.0 mg kg-1 soil for the PVA soil, and 40.0 and 132.0 mg kg-1 soil for the VLA soil. The coefficients of variation (CV) were 2.32 and 2.69 for the PVA and TH soils, respectively. The methodology resulted in low organic solvent consumption and cleaner extracts, and no purification steps were required prior to chromatographic analysis. The parameters evaluated in the validation process indicated that the ESL methodology is efficient for the extraction of picloram residues in soils, with low limits of detection and quantification.
Abstract:
This study examines the excess returns provided by G10 currency carry trading during the Euro era. The currency carry trade has been a popular trade throughout the past decades, offering excess returns to investors. The thesis aims to contribute to existing research on the topic by utilizing a new set of data for the Euro era and by using the Euro as the basis for the study. The focus of the thesis is specifically on the performance, risk and diversification benefits of different carry trade strategies. The study finds evidence of the failure of the uncovered interest rate parity theory through multiple regression analyses. Furthermore, the research finds evidence of significant diversification benefits in terms of the Sharpe ratio and improved return distributions. The results suggest that currency carry trades offered excess returns during 1999-2014 and that volatility plays an important role in carry trade returns. The risk, however, is diversifiable, and therefore our results support previous quantitative research findings on the topic.
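For reference, the failure of uncovered interest rate parity is conventionally tested with a Fama-style regression of the following form (a standard specification, not necessarily the exact regressions run in the thesis):

```latex
% Regress the k-period change in the log spot rate on the interest
% differential. Uncovered interest rate parity implies alpha = 0, beta = 1.
\Delta s_{t+k} = \alpha + \beta\,(i_t - i_t^{*}) + \varepsilon_{t+k}
% Estimates of beta below one (often negative) mean high-yield currencies
% do not depreciate enough, leaving excess returns to the carry trade.
```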
Abstract:
Environmental issues, including global warming, are serious challenges recognized worldwide, and they have become particularly important for iron and steel manufacturers during the last decades. Many sites have been shut down in developed countries due to environmental regulation and pollution prevention, while a large number of production plants have been established in developing countries, which has changed the economy of this business. Sustainable development is a concept that today affects economic growth, environmental protection and social progress in setting up the basis for the future ecosystem. A sustainable approach may attempt to preserve natural resources, recycle and reuse materials, prevent pollution, enhance yield and increase profitability. To achieve these objectives, numerous alternatives should be examined in sustainable process design. Conventional engineering work cannot address all of these alternatives effectively and efficiently to find an optimal processing route. A systematic framework is needed as a tool to guide designers in making decisions based on an overall concept of the system, identifying the key bottlenecks and opportunities that lead to an optimal design and operation of the system. Since the 1980s, researchers have made great efforts to develop tools for what is today referred to as Process Integration. Advanced mathematics has been used in simulation models to evaluate the various available alternatives considering physical, economic and environmental constraints. Improvements in feed materials and operation, a competitive energy market, environmental restrictions and the role of Nordic steelworks as energy suppliers (electricity and district heat) provide strong motivation for integration among industries toward more sustainable operation, which could increase overall energy efficiency and decrease environmental impacts. In this study, a model is developed in several steps for primary steelmaking, with the Finnish steel sector as a reference, to evaluate future operation concepts of a steelmaking site with regard to sustainability. The research started with a study of the potential for increasing energy efficiency and reducing carbon dioxide emissions through the integration of steelworks with chemical plants, utilizing the off-gases available in the system as chemical products. These off-gases from the blast furnace, basic oxygen furnace and coke oven consist mainly of carbon monoxide, carbon dioxide, hydrogen, nitrogen and, in the case of coke oven gas, methane; they have a proportionally low heating value but are currently used as fuel within these industries. A nonlinear optimization technique is used to assess integration with a methanol plant under novel blast furnace technologies and (partial) substitution of coal with other reducing agents and fuels, such as heavy oil, natural gas and biomass, in the system. The technical aspects of integration and its effect on blast furnace operation, disregarding the capital expenditure of new operational units, are studied to evaluate the feasibility of the idea behind the research. Later, the concept of a polygeneration system was added and a superstructure was generated with alternative routes for off-gas pretreatment and further utilization in a polygeneration system producing electricity, district heat and methanol.
(Vacuum) pressure swing adsorption, membrane technology and chemical absorption for gas separation; partial oxidation, carbon dioxide reforming and steam methane reforming for methane conversion; and gas- and liquid-phase methanol synthesis are the main alternative process units considered in the superstructure. Owing to the high degree of integration in process synthesis and the optimization techniques involved, equation-oriented modeling is chosen over the previously used sequential modeling for process analysis to investigate the suggested superstructure. A mixed-integer nonlinear programming (MINLP) model is developed to study the behavior of the integrated system under different economic and environmental scenarios. Net present value and specific carbon dioxide emissions are used to compare the economic and environmental aspects, respectively, of the integrated system for different fuel systems, alternative blast furnace reductants, implementation of new blast furnace technologies, and carbon dioxide emission penalties. Sensitivity analysis, carbon distribution and the effect of external seasonal energy demand are investigated with different optimization techniques. This tool can provide useful information on techno-environmental and economic aspects for decision making and can estimate the optimal operating conditions of current and future primary steelmaking under alternative scenarios. The results of the work demonstrate that it is possible to develop steelmaking towards more sustainable operation in the future.
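Net present value is used here in its standard sense; for reference (the discount rate r, horizon T and cash flows CF_t are generic symbols, not values from the thesis):

```latex
% Net present value of cash flows CF_t over horizon T at discount rate r:
\mathrm{NPV} = \sum_{t=0}^{T} \frac{CF_t}{(1+r)^t}
```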
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both this work and earlier research indicate a savings potential of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions, such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put into understanding both the minimum features needed to satisfy the scheduling requirements in the industry and the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus gaps in the market. It becomes clear that there is no such system on the market today and that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides a better overall understanding of the different processes in this particular industry for the case company.
Abstract:
Identification of low-dimensional structures and the main sources of variation in multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model in which ridges of the density estimated from the data are considered the relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust-region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge-finding methods are adapted to two different applications. The first is the extraction of curvilinear structures from noisy data mixed with background clutter. The second is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most earlier approaches are inadequate. Examples include the identification of faults from seismic data and of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated as the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
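The trust-region Newton projector of the thesis is not reproduced here; as a rough illustration of projecting a point onto a ridge of a Gaussian kernel density, the sketch below implements the related, simpler subspace-constrained mean shift (SCMS) of Ozertem and Erdogmus. The bandwidth, data and tolerances are illustrative assumptions.

```python
import numpy as np

def scms_project(x, data, h=0.5, ridge_dim=1, tol=1e-8, max_iter=500):
    """Project a point onto a ridge of a Gaussian kernel density estimate
    via subspace-constrained mean shift: move the point along the
    eigendirections of the log-density Hessian that cross the ridge."""
    x = np.asarray(x, dtype=float).copy()
    n, d = data.shape
    for _ in range(max_iter):
        diff = data - x                                        # (n, d)
        w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)      # kernel weights
        wsum = w.sum()
        g = (w[:, None] * diff).sum(axis=0) / (wsum * h**2)    # grad of log p
        # Hessian of log p: E_w[dd^T]/h^4 - I/h^2 - g g^T
        H = np.einsum('i,ij,ik->jk', w, diff, diff) / (wsum * h**4)
        H -= np.eye(d) / h**2 + np.outer(g, g)
        vals, vecs = np.linalg.eigh(H)             # ascending eigenvalues
        V = vecs[:, : d - ridge_dim]               # directions across the ridge
        step = (w[:, None] * data).sum(axis=0) / wsum - x      # mean-shift step
        move = V @ (V.T @ step)                    # constrain to normal space
        x += move
        if np.linalg.norm(move) < tol:
            break
    return x

# Example: noisy points around a unit circle; project one point onto the ridge.
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
pts = np.c_[np.cos(t), np.sin(t)] + 0.1 * rng.normal(size=(500, 2))
print(scms_project([0.7, 0.0], pts, h=0.3))        # lands near radius 1
```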
Abstract:
The objective of this thesis is to examine distribution network designs and modeling practices and to create a framework for identifying the best possible distribution network structure for the case company. The main research question therefore is: how can the case company's distribution network be optimized in terms of customer needs and costs? The theory chapters introduce the basic building blocks of distribution network design and the necessary calculation methods and models. A framework for distribution network projects was created based on the theory, and the case study was carried out following the defined framework. The distribution network calculations were based on the company's sales plan for the years 2014-2020. The main conclusions and recommendations were that the new Asian business strategy requires high investments in logistics; the first step is to open a new satellite DC in China as soon as possible to support sales, and a second possible step is to open a regional DC in Asia within 2-4 years.
Abstract:
Microbial pathogens such as bacillus Calmette-Guérin (BCG) induce the activation of macrophages. Activated macrophages can be characterized by the increased production of reactive oxygen and nitrogen metabolites, generated via NADPH oxidase and inducible nitric oxide synthase, respectively, and by the increased expression of major histocompatibility complex class II (MHC II) molecules. Multiple microassays have been developed to measure these parameters. Usually each assay requires 2-5 x 10^5 cells per well. In some experimental conditions the number of cells is the limiting factor for the phenotypic characterization of macrophages. Here we describe a method whereby this limitation can be circumvented. Using a single 96-well microassay and a very small number of peritoneal cells obtained from C3H/HePas mice, containing as few as ≤2 x 10^5 macrophages per well, we determined sequentially the oxidative burst (H2O2), nitric oxide production and MHC II (IAk) expression of BCG-activated macrophages. More specifically, with 100 µl of cell suspension it was possible to quantify H2O2 release and nitric oxide production after 1 and 48 h, respectively, and IAk expression after 48 h of cell culture. In addition, this microassay is easy to perform, highly reproducible and more economical.
Abstract:
Almost every problem of design, planning and management in technical and organizational systems has several conflicting goals or interests. Nowadays, multicriteria decision models represent a rapidly developing area of operations research. When solving practical optimization problems, it is necessary to take into account various kinds of uncertainty due to lack of data, the inadequacy of mathematical models for real processes, calculation errors, etc. In practice, this uncertainty usually leads to undesirable outcomes in which the solutions are very sensitive to any changes in the input parameters. Investment management is one example. Stability analysis of multicriteria discrete optimization problems investigates how the solutions found behave in response to changes in the initial data (input parameters). This thesis is devoted to stability analysis in the problem of selecting investment project portfolios, which are optimized by considering different types of risk and the efficiency of the investment projects. The stability analysis is carried out with two approaches: qualitative and quantitative. The qualitative approach describes the behavior of solutions under small perturbations of the initial data; the stability of solutions is defined in terms of the existence of a neighborhood in the initial data space such that any perturbed problem from this neighborhood remains stable with respect to the set of efficient solutions of the initial problem. The other approach to stability analysis studies quantitative measures such as the stability radius, which gives information about the limits of perturbations in the input parameters that do not lead to changes in the set of efficient solutions. In the present thesis, several results were obtained, including attainable bounds for the stability radii of Pareto optimal and lexicographically optimal portfolios of the investment problem with Savage's criterion, Wald's criterion and the criterion of extreme optimism. In addition, special classes of the problem for which the stability radii are expressed by closed-form formulae were identified. The investigations were carried out using different combinations of the Chebyshev, Manhattan and Hölder metrics, which allowed perturbations of the input parameters to be measured in different ways.
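For reference, one common formalization of the stability radius reads as follows (the notation is generic and the exact definition varies across the literature; the thesis's own definition and norms may differ):

```latex
% Stability radius of a problem with parameter matrix A and Pareto set P(A):
% the largest perturbation bound under which no new efficient solutions
% can appear (one common variant among several in the literature).
\rho(A) = \sup\{\varepsilon > 0 :
  P(A+B) \subseteq P(A) \ \text{for all } B \text{ with } \|B\| < \varepsilon\}
```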
Abstract:
This thesis considers optimization problems arising in printed circuit board (PCB) assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one PCB, one machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. Comprehensive experimental tests ensure that the heuristics are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed and used solution methods are described in full detail. For all the problems stated in this thesis, the methods proposed are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
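The algorithms of the thesis are described in its publications and are not reproduced here; as a hypothetical illustration of the greedy flavor of placement sequencing, the following sketch orders placement points by a nearest-neighbor rule (all names and data invented):

```python
import numpy as np

def greedy_placement_order(points, start=(0.0, 0.0)):
    """Order placement positions by repeatedly visiting the nearest
    unplaced point. A toy stand-in for placement-sequencing heuristics,
    not the thesis's actual algorithms."""
    remaining = list(range(len(points)))
    pos = np.asarray(start, dtype=float)
    order = []
    while remaining:
        dists = [np.linalg.norm(points[i] - pos) for i in remaining]
        nxt = remaining.pop(int(np.argmin(dists)))
        order.append(nxt)
        pos = points[nxt]
    return order

# Example: 200 random component positions on a 100 mm x 80 mm board.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0], [100, 80], size=(200, 2))
print(greedy_placement_order(pts)[:10])            # first ten placements
```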
Abstract:
The purpose of this thesis is to find the optimal heat recovery solution for Wärtsilä's dynamic district heating power plant in the German energy markets, where the government pays subsidies for CHP plants in order to increase their share of domestic power production to 25% by 2020. Dozens of different heat recovery connections have been simulated to determine the most efficient ones. The purpose is also to study the feasibility of the different heat recovery connections in the dynamic district heating power plant in the German markets, taking into consideration the day-ahead electricity prices, district heating network temperatures and CHP subsidies. The auxiliary cooling, dynamic operation and cost efficiency of the power plant are also investigated.
Abstract:
The purpose of the present study was to explore the usefulness of the Mexican sequential organ failure assessment (MEXSOFA) score for assessing the risk of mortality for critically ill patients in the ICU. A total of 232 consecutive patients admitted to an ICU were included in the study. The MEXSOFA was calculated using the original SOFA scoring system with two modifications: the PaO2/FiO2 ratio was replaced with the SpO2/FiO2 ratio, and the evaluation of neurologic dysfunction was excluded. The ICU mortality rate was 20.2%. Patients with an initial MEXSOFA score of 9 points or less calculated during the first 24 h after admission to the ICU had a mortality rate of 14.8%, while those with an initial MEXSOFA score of 10 points or more had a mortality rate of 40%. The MEXSOFA score at 48 h was also associated with mortality: patients with a score of 9 points or less had a mortality rate of 14.1%, while those with a score of 10 points or more had a mortality rate of 50%. In a multivariate analysis, only the MEXSOFA score at 48 h was an independent predictor for in-ICU death with an OR = 1.35 (95%CI = 1.14-1.59, P < 0.001). The SOFA and MEXSOFA scores calculated 24 h after admission to the ICU demonstrated a good level of discrimination for predicting the in-ICU mortality risk in critically ill patients. The MEXSOFA score at 48 h was an independent predictor of death; with each 1-point increase, the odds of death increased by 35%.
Abstract:
Although radical nephrectomy alone is widely accepted as the standard of care in localized treatment for renal cell carcinoma (RCC), it is not sufficient for the treatment of metastatic RCC (mRCC), which invariably leads to an unfavorable outcome despite the use of multiple therapies. Currently, sequential targeted agents are recommended for the management of mRCC, but the optimal drug sequence is still debated. This case concerned a 57-year-old man with clear-cell mRCC who received multiple therapies following his first operation in 2003 and has survived for over 10 years with a satisfactory quality of life. The treatments given included several surgeries, immunotherapy, and sequentially administered sorafenib, sunitinib and everolimus regimens. In the course of mRCC treatment, well-planned surgeries, effective sequential targeted therapies and close follow-up are all of great importance for optimal management and a satisfactory outcome.