28 results for Ant colony optimisation algorithm
Abstract:
Coherent anti-Stokes Raman scattering (CARS) is a powerful laser spectroscopy method with which significant successes have been achieved. However, the non-linear nature of CARS complicates the analysis of the measured spectra. The objective of this Thesis is to develop a new phase retrieval algorithm for CARS. It utilizes the maximum entropy method and a new wavelet approach for spectroscopic background correction of the phase function. The method was developed to be easily automated and used on a large number of spectra of different substances. The algorithm was successfully tested on experimental data.
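The abstract does not spell out the background-correction step; as a rough, hedged illustration of the kind of wavelet-based background removal described above, the Python sketch below estimates a slowly varying background of a phase function by keeping only the coarse wavelet approximation. The wavelet family, decomposition level and toy signal are illustrative assumptions, not the thesis' actual choices, and the maximum entropy phase retrieval itself is not shown.

```python
# Hedged sketch: wavelet estimate of a slowly varying background under a phase
# function, in the spirit of the background-correction step described above.
# The wavelet family, decomposition level and test signal are illustrative
# assumptions, not the thesis' actual choices.
import numpy as np
import pywt

def wavelet_background(phase, wavelet="sym8", level=6):
    """Estimate the smooth background of a 1-D phase function."""
    coeffs = pywt.wavedec(phase, wavelet, level=level)
    # Keep only the coarse approximation; zero all detail coefficients.
    coeffs = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
    background = pywt.waverec(coeffs, wavelet)
    return background[: len(phase)]

# Toy example: a narrow Raman-like line on top of a slow error phase.
x = np.linspace(0, 1, 2048)
line = 0.3 * np.exp(-((x - 0.4) / 0.005) ** 2)
slow_background = 0.5 * np.sin(2 * np.pi * 0.7 * x)
phase = line + slow_background
corrected = phase - wavelet_background(phase)
```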
Abstract:
The data traffic that mobile advertising client software generates when communicating with the network server can be a pain point for application developers considering advertising-funded distribution, since the cost of the data transfer might scare users away from the applications. For the thesis project, a simulation environment was built to mimic the real client-server solution and to measure the data transfer over various types of connections under different usage scenarios. To optimise the data transfer, a few general-purpose and XML-specific compressors were tried for compressing the XML data, and a few protocol optimisations were implemented. To optimise the cost, cache usage was improved and pre-loading was enhanced to use free connections for loading the data. The data traffic structure and the various optimisations were analysed, and it was found that the cache usage and pre-loading should be enhanced and that the protocol should be changed, with report aggregation and compression using WBXML or gzip.
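As a small, hedged illustration of the payload compression discussed above, the sketch below compares the size of an invented XML report before and after gzip compression using Python's standard library; WBXML encoding, which needs a dedicated codec, is not shown.

```python
# Hedged sketch: size of an XML report before and after gzip compression,
# illustrating the kind of payload optimisation discussed above.
# The sample document is invented; real client-server payloads would differ.
import gzip

xml_report = (
    "<report>"
    + "".join(f"<ad id='{i}' impressions='12' clicks='1'/>" for i in range(200))
    + "</report>"
).encode("utf-8")

compressed = gzip.compress(xml_report)
print(f"raw: {len(xml_report)} bytes, gzip: {len(compressed)} bytes "
      f"({100 * len(compressed) / len(xml_report):.0f}% of original)")
```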
Abstract:
Fine powders of minerals are commonly used in the paper and paint industries, and for ceramics. Research on utilizing different waste materials in these applications is environmentally important. In this work, the ultrafine grinding of two waste gypsum materials, namely FGD (Flue Gas Desulphurisation) gypsum and phosphogypsum from a phosphoric acid plant, with an attrition bead mill and with a jet mill has been studied. The objective of this research was to test the suitability of the attrition bead mill and of the jet mill for producing gypsum powders with a particle size of a few microns. The grinding conditions were optimised by studying the influence of different operational grinding parameters on the grinding rate and on the energy consumption of the process, in order to achieve a product fineness such as that required in the paper industry with as low an energy consumption as possible. Based on the experimental results, the most influential parameters in attrition grinding were found to be the bead size, the stirrer type, and the stirring speed. The best conditions for the attrition grinding process, based on the product fineness and the specific energy consumption of grinding, are to grind the material with small grinding beads and a high rotational speed of the stirrer. Also, by using a suitable grinding additive, a finer product is achieved with a lower energy consumption. In jet mill grinding the most influential parameters were the feed rate, the volumetric flow rate of the grinding air, and the height of the internal classification tube. The optimised condition for the jet mill is to grind with a small feed rate and a large volumetric flow rate of grinding air when the internal classification tube is set low. A finer product at a higher production rate was achieved with the attrition bead mill than with the jet mill; thus attrition grinding is better suited to the ultrafine grinding of gypsum than jet grinding. Finally, the suitability of the population balance model for the simulation of grinding processes has been studied with different S, B, and C functions. A new S function for the modelling of an attrition mill and a new C function for the modelling of a jet mill were developed. The suitability of the selected models with the developed grinding functions was tested by curve fitting the particle size distributions of the grinding products and then comparing the fitted size distributions to the measured particle sizes. According to the simulation results, the models are suitable for the estimation and simulation of the studied grinding processes.
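The abstract names the S, B, and C functions without reproducing the model; for orientation only, a standard textbook form of the batch grinding population balance is written out below. It fixes the notation for the selection and breakage functions but is not the thesis' own modified model.

```latex
% A standard batch-grinding population balance, shown only to fix notation for
% the S (selection) and b (breakage distribution) functions named in the
% abstract; the thesis' own modified functions are not reproduced here.
\begin{equation}
  \frac{\mathrm{d}w_i(t)}{\mathrm{d}t}
  = -S_i\, w_i(t) \;+\; \sum_{j=1}^{i-1} b_{ij}\, S_j\, w_j(t),
  \qquad i = 1,\dots,n,
\end{equation}
where $w_i(t)$ is the mass fraction in size class $i$, $S_i$ is the selection
(breakage rate) function and $b_{ij}$ is the breakage distribution function
giving the fraction of material broken from class $j$ that reports to class $i$.
```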
Abstract:
The identifiability of the parameters of a heat exchanger model without phase change was studied in this Master's thesis using synthetically generated data. A fast, two-step Markov chain Monte Carlo (MCMC) method was tested with a couple of case studies and a heat exchanger model. The two-step MCMC method worked well and decreased the computation time compared to the traditional MCMC method. The effect of the measurement accuracy of certain control variables on the identifiability of the parameters was also studied. The accuracy used did not seem to have a notable effect on the identifiability of the parameters. The use of the posterior distribution of the parameters in different heat exchanger geometries was also studied. It would be computationally most efficient to use the same posterior distribution among different geometries in the optimisation of heat exchanger networks. According to the results, this was possible when the frontal surface areas were the same among the different geometries. In the other cases the same posterior distribution can still be used for optimisation, but it will give a wider predictive distribution as a result. For condensing surface heat exchangers the numerical stability of the simulation model was studied, and as a result a stable algorithm was developed.
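As a hedged illustration of MCMC-based parameter identification from synthetic data, the sketch below runs a plain random-walk Metropolis sampler on an invented two-parameter toy model. The thesis' faster two-step scheme and the actual heat exchanger model are not reproduced; all model and tuning choices here are assumptions.

```python
# Hedged sketch: a plain random-walk Metropolis sampler for parameter
# estimation from synthetic data. The thesis describes a faster two-step MCMC
# scheme and a heat exchanger model; the toy model and tuning constants below
# are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

def model(theta, u):
    # Hypothetical stand-in for the simulation model (e.g. an outlet temperature).
    k, tau = theta
    return k * (1.0 - np.exp(-u / tau))

# Synthetic data generated with "true" parameters plus measurement noise.
u = np.linspace(0.1, 5.0, 40)
theta_true = np.array([2.0, 1.5])
sigma = 0.05
y = model(theta_true, u) + rng.normal(0.0, sigma, size=u.size)

def log_post(theta):
    if np.any(theta <= 0.0):
        return -np.inf                      # flat prior on positive parameters
    r = y - model(theta, u)
    return -0.5 * np.sum((r / sigma) ** 2)

theta = np.array([1.0, 1.0])
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.05, size=2)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
        theta = prop                               # accept the proposal
    chain.append(theta)
posterior = np.array(chain[5000:])                 # discard burn-in
```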
Abstract:
In the Russian Wholesale Market, electricity and capacity are traded separately. Capacity is a special good, the sale of which obliges suppliers to keep their generating equipment ready to produce the quantity of electricity indicated by the System Operator. Capacity trading was introduced to maintain reliable and uninterrupted delivery of electricity in the wholesale market. The price of capacity reflects the constant investment in the construction, modernization and maintenance of power plants. Thus, capacity sales create favourable conditions for attracting investment in the energy sector, because they guarantee investors a return on their investment.
Abstract:
In this work a fuzzy linear system is used to solve the Leontief input-output model with fuzzy entries. For solving this model, we assume that the consumption matrix for the different sectors of the economy and the demand are known. These assumptions depend heavily on the information obtained from the industries, and hence uncertainties are involved in this information. The aim of this work is to model these uncertainties and to address them by fuzzy entries such as fuzzy numbers and LR-type fuzzy numbers (triangular and trapezoidal). A fuzzy linear system has been developed using fuzzy data, and it is solved using the Gauss-Seidel algorithm. Numerical examples show the efficiency of this algorithm. The famous example from Prof. Leontief, in which he solved the production levels for the U.S. economy in 1958, is also further analysed.
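For the crisp (non-fuzzy) case, the Leontief model amounts to solving (I − C)x = d for the production levels x; the sketch below applies the Gauss-Seidel iteration mentioned above to an invented three-sector example. The fuzzy-number arithmetic used in the thesis is not shown.

```python
# Hedged sketch: Gauss-Seidel iteration for the crisp Leontief system
# (I - C) x = d. The fuzzy-number arithmetic used in the thesis is not shown;
# the consumption matrix C and demand d below are invented for illustration.
import numpy as np

C = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.2, 0.1]])   # consumption (input-output) coefficients
d = np.array([50.0, 30.0, 20.0])  # final demand per sector

A = np.eye(3) - C                 # Leontief matrix
x = np.zeros(3)                   # production levels, initial guess

for _ in range(100):
    x_old = x.copy()
    for i in range(3):
        # Use already-updated components for j < i, old ones for j > i.
        s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
        x[i] = (d[i] - s) / A[i, i]
    if np.max(np.abs(x - x_old)) < 1e-10:
        break

print("production levels x =", x)
```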
Abstract:
In this doctoral thesis, the solving capability of a number of solvers for optimisation problems is investigated, and a number of difficulties in making a fair solver comparison are revealed. In addition, some improvements made to one of the solvers, called GAMS/AlphaECP, are presented. Optimisation means, in this context, finding the best possible solution to a problem. The investigated class of problems can be characterised as hard to solve and occurs in several industrial fields. The goal has been to investigate whether there is a solver that is universally faster and finds solutions of higher quality than any of the other solvers. The commercial optimisation system GAMS (General Algebraic Modeling System) and extensive problem libraries have been used to compare the solvers. The improvements presented were made to the GAMS/AlphaECP solver, which is based on the Extended Cutting Plane (ECP) method. The ECP method has been developed mainly by Professor Tapio Westerlund at Anläggnings- och systemteknik, Åbo Akademi University.
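For readers unfamiliar with the cutting-plane idea behind ECP, a hedged one-line summary: at an iterate x_k, a violated smooth nonlinear constraint g(x) ≤ 0 is replaced by its linearisation, which is added as a cut to a MILP master problem that is then re-solved. The generic cut is shown below; it is the textbook form, not a description of AlphaECP's specific implementation.

```latex
% Generic cutting plane added at iterate x_k for a violated smooth constraint
% g(x) <= 0; the ECP method solves a sequence of MILP master problems to which
% such cuts are appended. Textbook form only, not AlphaECP's exact rule.
\begin{equation}
  g(x_k) + \nabla g(x_k)^{\mathsf{T}} (x - x_k) \;\leq\; 0
\end{equation}
```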
Abstract:
In this work, mathematical programming models for the structural and operational optimisation of energy systems are developed and applied to a selection of energy technology problems. The studied cases are taken from industrial processes and from large regional energy distribution systems. The models are based on Mixed Integer Linear Programming (MILP), Mixed Integer Non-Linear Programming (MINLP) and on a hybrid approach combining Non-Linear Programming (NLP) and Genetic Algorithms (GA). The optimisation of the structure and operation of energy systems in urban regions is treated in the work. Firstly, distributed energy systems (DES) with different energy conversion units and annual variations of consumer heating and electricity demands are considered. Secondly, district cooling systems (DCS) with cooling demands for a large number of consumers are studied from a long-term planning perspective, with respect to given predictions of the development of consumer cooling demand in a region. The work also comprises the development of applications for heat recovery systems (HRS), where the paper machine dryer section HRS is taken as an illustrative example. The heat sources in these systems are moist air streams. Models are developed for different types of equipment price functions. The approach is based on partitioning the overall temperature range of the system into a number of temperature intervals in order to take into account the strong nonlinearities due to condensation in the heat recovery exchangers. The influence of parameter variations on the solutions of heat recovery systems is analysed, firstly by varying cost factors and secondly by varying process parameters. Point-optimal solutions obtained with a fixed-parameter approach are compared to robust solutions with given parameter variation ranges. Enhanced utilisation of excess heat in heat recovery systems with impingement drying, electricity generation with low grade excess heat, and the use of absorption heat transformers to elevate a stream temperature above the excess heat temperature are also studied in the work.
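As a generic, hedged illustration of the structural/operational MILP models referred to above, the formulation below shows the typical variable structure: binary unit-selection variables and continuous operation variables coupled by capacity and demand constraints. It is a textbook skeleton, not the thesis' actual model.

```latex
% Generic structural/operational MILP skeleton: binary unit-selection variables
% y_u and continuous operation variables x_{u,t}. Shown only to illustrate the
% model structure described above; not the thesis' actual formulation.
\begin{align}
  \min_{x,\,y} \quad & \sum_{u} \Bigl( C^{\mathrm{inv}}_{u}\, y_{u}
      + \sum_{t} c^{\mathrm{op}}_{u}\, x_{u,t} \Bigr) \\
  \text{s.t.} \quad  & \sum_{u} x_{u,t} \geq D_{t} && \forall t
      \quad \text{(demand in period } t\text{)}\\
                     & 0 \leq x_{u,t} \leq X^{\max}_{u}\, y_{u} && \forall u, t
      \quad \text{(capacity only if unit } u \text{ is built)}\\
                     & y_{u} \in \{0,1\} && \forall u.
\end{align}
```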
Abstract:
The objective of this project was to introduce a new software product to the pulp industry, a new market for the case company. An optimization-based scheduling tool has been developed to allow pulp operations to better control their production processes and improve both production efficiency and stability. Both the work here and earlier research indicate a potential for savings of around 1-5%. All the supporting data is available today, coming from distributed control systems, data historians and other existing sources. The pulp mill model, together with the scheduler, allows what-if analyses of the impacts and timely feasibility of various external actions such as planned maintenance of any particular mill operation. The visibility gained from the model also proves to be a real benefit. The aim is to satisfy demand and gain extra profit while achieving the required customer service level. Research effort has been put both into understanding the minimum features needed to satisfy the scheduling requirements in the industry and into the overall existence of the market. A qualitative study was constructed to identify both the competitive situation and the requirements versus the gaps on the market. It became clear that there is no such system on the marketplace today and that there is room to improve the target market's overall process efficiency through such a planning tool. This thesis also provides the case company with a better overall understanding of the different processes in this particular industry.
Abstract:
The map belongs to the A. E. Nordenskiöld collection.
Abstract:
Global warming is one of the most alarming problems of this century. Initial scepticism concerning its validity is currently dwarfed by the intensification of extreme weather events, whilst the gradually rising level of anthropogenic CO2 is pointed out as its main driver. Most of the greenhouse gas (GHG) emissions come from large point sources (heat and power production and industrial processes), and the continued use of fossil fuels requires quick and effective measures to meet the world's energy demand whilst (at least) stabilizing atmospheric CO2 levels. The framework known as Carbon Capture and Storage (CCS) – or Carbon Capture, Utilisation and Storage (CCUS) – comprises a portfolio of technologies applicable to large-scale GHG sources for preventing CO2 from entering the atmosphere. Amongst them, CO2 capture and mineralisation (CCM) presents the highest potential for CO2 sequestration, as the predicted carbon storage capacity (as mineral carbonates) far exceeds the estimated levels of the worldwide identified fossil fuel reserves. The work presented in this thesis aims at taking a step forward towards the deployment of an energy- and cost-effective process for the simultaneous capture and storage of CO2 in the form of thermodynamically stable and environmentally friendly solid carbonates. R&D work on the process considered here began in 2007 at Åbo Akademi University in Finland. It involves the processing of magnesium silicate minerals with recyclable ammonium salts for the extraction of magnesium at ambient pressure and 400–440°C, followed by aqueous precipitation of magnesium in the form of hydroxide, Mg(OH)2, and finally Mg(OH)2 carbonation in a pressurised fluidised bed reactor at ~510°C and ~20 bar PCO2 to produce high-purity MgCO3. Rock material taken from the Hitura nickel mine, Finland, and serpentinite collected from Bragança, Portugal, were tested for magnesium extraction with both ammonium sulphate and bisulphate (AS and ABS) to determine the optimal operating parameters, primarily reaction time, reactor type and presence of moisture. Typical efficiencies range from 50 to 80% magnesium extraction at 350–450°C. In general ABS performs better than AS, showing comparable efficiencies at lower temperatures and shorter reaction times. The best experimental results so far obtained include 80% magnesium extraction with ABS at 450°C in a laboratory-scale rotary kiln and 70% Mg(OH)2 carbonation in the PFB at 500°C and 20 bar CO2 pressure for 15 minutes. The extraction reaction with ammonium salts is not at all selective towards magnesium: other elements such as iron, nickel, chromium and copper are also co-extracted. Their separation, recovery and valorisation are addressed as well and found to be of great importance. The assessment of the exergetic performance of the process was carried out using Aspen Plus® software and pinch analysis technology. The choice of fluxing agent and its recovery method have a decisive influence on the performance of the process: AS is recovered by crystallisation, and in general the whole process then requires more exergy (2.48–5.09 GJ/t CO2 sequestered) than with ABS (2.48–4.47 GJ/t CO2 sequestered) when ABS is recovered by thermal decomposition. However, the corrosive nature of molten ABS and operational problems inherent to thermal regeneration of ABS prohibit this route. Regeneration of ABS through addition of H2SO4 to AS (followed by crystallisation) results in an overall negative exergy balance (mainly at the expense of low-grade heat), but will flood the system with sulphates.
Although the ÅA route is still energy intensive, its performance is comparable to that of conventional CO2 capture methods using alkanolamine solvents. An energy-neutral process depends on the availability and quality of nearby waste heat, and economic viability might be achieved with magnesium extraction and carbonation levels ≥ 90%, the processing of CO2-containing flue gases (eliminating the expensive capture step), and the production of marketable products.
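For orientation, the route described above can be summarised by the simplified reaction steps below, written for a serpentine feedstock and the ammonium sulphate (AS) salt; the stoichiometry is idealised and the co-extraction of iron, nickel, chromium and copper is omitted.

```latex
% Simplified overall reaction steps for the route described above, written for
% a serpentine feedstock and the ammonium sulphate (AS) salt; stoichiometry is
% idealised and side reactions (Fe, Ni, Cr, Cu extraction) are omitted.
\begin{align}
  \mathrm{Mg_3Si_2O_5(OH)_4 + 3\,(NH_4)_2SO_4}
    &\;\longrightarrow\; \mathrm{3\,MgSO_4 + 2\,SiO_2 + 6\,NH_3 + 5\,H_2O}
    && (400\text{--}440\,^{\circ}\mathrm{C})\\
  \mathrm{MgSO_4 + 2\,NH_3 + 2\,H_2O}
    &\;\longrightarrow\; \mathrm{Mg(OH)_2 + (NH_4)_2SO_4}
    && \text{(aqueous precipitation)}\\
  \mathrm{Mg(OH)_2 + CO_2}
    &\;\longrightarrow\; \mathrm{MgCO_3 + H_2O}
    && (\sim\!500\,^{\circ}\mathrm{C},\ \sim\!20\ \mathrm{bar}\ P_{\mathrm{CO_2}})
\end{align}
```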
Abstract:
This work presents a synopsis of efficient strategies used in power management for achieving the most economical power and energy consumption in multicore systems, FPGAs and NoC platforms. A practical approach was taken in an effort to validate the significance of the Adaptive Power Management Algorithm (APMA) proposed for the system developed for this thesis project. This system comprises an arithmetic and logic unit, up and down counters, an adder, a state machine and a multiplexer. The purpose of carrying out this project was, firstly, to develop a system to be used for this power management project; secondly, to perform an area and power synopsis of the system on various scalable technology platforms, UMC 90 nm technology at 1.2 V, UMC 90 nm technology at 1.32 V and UMC 0.18 µm technology at 1.80 V, in order to examine the differences in the area and power consumption of the system on these platforms; and thirdly, to explore various strategies that can be used to reduce the system's power consumption and to propose an adaptive power management algorithm for reducing the power consumption of the system. The strategies introduced in this work comprise Dynamic Voltage and Frequency Scaling (DVFS) and task parallelism. After the system development, it was run on an FPGA board, basically NoC platforms, and on the technology platforms listed above. The system synthesis was successfully accomplished, the simulated result analysis shows that the system meets all functional requirements, and the power consumption and area utilisation were recorded and analysed in Chapter 7 of this work. This work extensively reviews various strategies for managing power consumption, drawing on quantitative research by many researchers and companies; it is a mixture of literature analysis and experimental laboratory work, and it condenses and presents the basic concepts of power management strategy from quality technical papers.
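As a hedged, purely conceptual illustration of the kind of adaptive voltage/frequency selection discussed above, the Python sketch below steps between a few invented operating points based on a utilisation threshold and estimates dynamic power with the textbook relation P ≈ C·V²·f. It is not the APMA proposed in the thesis.

```python
# Hedged sketch: a simple threshold-based DVFS policy of the kind the adaptive
# power management discussion above refers to. The operating points, thresholds
# and the dynamic-power model P ~ C * V^2 * f are generic textbook assumptions,
# not the APMA proposed in the thesis.
OPERATING_POINTS = [            # (frequency in MHz, supply voltage in V)
    (100, 0.90),
    (200, 1.08),
    (400, 1.20),
]

def select_operating_point(utilisation, current_idx):
    """Step the operating point up or down based on measured utilisation."""
    if utilisation > 0.85 and current_idx < len(OPERATING_POINTS) - 1:
        return current_idx + 1          # workload is heavy: raise f and V
    if utilisation < 0.40 and current_idx > 0:
        return current_idx - 1          # workload is light: lower f and V
    return current_idx

def dynamic_power(freq_mhz, vdd, switched_cap_nf=1.0):
    """Approximate dynamic power P = C * V^2 * f (arbitrary units)."""
    return switched_cap_nf * vdd ** 2 * freq_mhz

idx = 2
for util in [0.95, 0.30, 0.20, 0.90]:   # example utilisation trace
    idx = select_operating_point(util, idx)
    f, v = OPERATING_POINTS[idx]
    print(f"util={util:.2f} -> {f} MHz @ {v} V, P~{dynamic_power(f, v):.0f}")
```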