Abstract:
This master's thesis was commissioned by Wello Oy, a company focused on water and wind power solutions that is developing a wave energy converter concept. The thesis examined the phenomena related to wave power and the mechanical concept of the wave energy device, and assessed their efficiency. Commercial simulation tools were used, including the multibody dynamics simulation program MSC.ADAMS R3 and the general mathematics program Matlab Simulink. The simulation model was used to evaluate the general behaviour of the device. In addition, analytical models of the device were used to clarify its operating principle. Simulation was used to study the mechanism of the floating device. Based on the results, theoretical maximum power limits were defined for the device, along with the constraints that affect its efficiency.
Abstract:
The dissertation is based on four articles dealing with the purification of recalcitrant lignin-containing waters. Lignin, a complex substance recalcitrant to most treatment technologies, seriously hinders waste management in the pulp and paper industry. Therefore, lignin degradation is studied using wet oxidation (WO) as the process method. Special attention is paid to the improvement in biodegradability and the reduction of lignin content, since these have special importance for any subsequent biological treatment. In most cases wet oxidation is used not as a complete mineralization method but as a pre-treatment, in order to eliminate toxic components and to reduce the high level of organics. The combination of wet oxidation with a biological treatment can be a good option due to its effectiveness and its relatively low technology cost. The literature part gives an overview of Advanced Oxidation Processes (AOPs). Wet oxidation, a hot oxidation process, is investigated in detail and is the AOP used in the research. The background and main principles of wet oxidation, its industrial applications, the combination of wet oxidation with other water treatment technologies, the principal reactions in WO, and key aspects of modelling and reaction kinetics are presented. An overview is also given of wood composition and lignin characterization (chemical composition, structure and origin), lignin-containing waters, lignin degradation and reuse possibilities, and purification practices for lignin-containing waters. The aim of the research was to investigate the effect of the operating conditions of WO, such as temperature, partial pressure of oxygen, pH and initial concentration of the wastewater, on the efficiency of the process, and to enhance the process and estimate optimal conditions for the WO of recalcitrant lignin-containing waters. Two different waters were studied (a lignin water model solution and debarking water from the paper industry) in order to find as appropriate conditions as possible.
Due to the great importance of reusing and minimizing industrial residues, further research was carried out using residual ash from an Estonian power plant as a catalyst in the wet oxidation of lignin-containing water. Developing a kinetic model whose predictions include parameters such as TOC gives the opportunity to estimate the amount of emerging inorganic substances (the degradation rate of the waste), and not only the decrease of COD and BOD. The target compound of the degradation, lignin, is included in the model through its COD value (CODlignin). Such a kinetic model can be valuable in developing WO treatment processes for lignin-containing waters, or for other wastewaters containing one or more target compounds. In the first article, wet oxidation of "pure" lignin water was investigated as a model case with the aim of degrading lignin and enhancing the biodegradability of the water. The experiments were performed at various temperatures (110-190°C), partial oxygen pressures (0.5-1.5 MPa) and pH values (5, 9 and 12). The experiments showed that increasing the temperature notably improved the efficiency of the process: a 75% lignin reduction was detected at the lowest temperature tested, and lignin removal improved to 100% at 190°C. The effect of temperature on the COD removal rate was lower, but clearly detectable; 53% of the organics were oxidized at 190°C. The effect of pH was seen mostly in lignin removal: increasing the pH enhanced the lignin removal efficiency from 60% to nearly 100%. A good biodegradability ratio (over 0.5) was generally achieved. The aim of the second article was to develop a mathematical model for "pure" lignin wet oxidation using lumped characteristics of the water (COD, BOD, TOC) and the lignin concentration. The model agreed well with the experimental data (R² = 0.93 at pH 5 and 12), and the concentration changes during wet oxidation followed the experimental results adequately. The model also showed correctly the trend of the biodegradability (BOD/COD) changes.
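The lumped modelling idea can be illustrated with a toy first-order reaction scheme. This is a sketch with assumed rate constants and initial values, not the kinetic model fitted in the articles:

```python
# Toy lumped kinetic scheme for wet oxidation (assumed values for
# illustration only). Lignin, tracked via its COD contribution, first
# degrades to other chemically oxidizable compounds, which are then
# oxidized further to biodegradable matter:
#   COD_lignin --k1--> COD_other --k2--> BOD
k1, k2 = 0.05, 0.02                 # assumed first-order rate constants, 1/min
cod_lignin, cod_other, bod = 300.0, 200.0, 50.0   # assumed initial values, mg/L
dt, steps = 0.01, 6000              # explicit Euler over 60 min of reaction time
for _ in range(steps):
    r1 = k1 * cod_lignin            # lignin -> other oxidizable organics
    r2 = k2 * cod_other             # other organics -> biodegradable matter
    cod_lignin -= r1 * dt
    cod_other += (r1 - r2) * dt
    bod += r2 * dt
```

In this simplified scheme total mass is conserved, so the decrease in CODlignin is fully accounted for by the growth of the other two lumped pools.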
In the third article, the purpose of the research was to estimate optimal conditions for the wet oxidation (WO) of debarking water from the paper industry. The WO experiments were performed at various temperatures, partial oxygen pressures and pH values. The experiments showed that lignin degradation and organics removal are affected remarkably by temperature and pH. A 78-97% lignin reduction was detected under the different WO conditions. An initial pH of 12 caused faster removal of the tannin/lignin content, but an initial pH of 5 was more effective for the removal of total organics, represented by COD and TOC. Most of the decrease in the concentrations of organic substances occurred in the first 60 minutes. The aim of the fourth article was to compare the behaviour of two reaction kinetic models, based on experiments of wet oxidation of industrial debarking water under different conditions. The simpler model took into account only the changes in COD, BOD and TOC; the advanced model was similar to the model used in the second article. Comparing the results of the models, the second model was found to be more suitable for describing the kinetics of the wet oxidation of debarking water. The significance of the reactions involved was compared on the basis of the model: for instance, lignin degraded first to other chemically oxidizable compounds rather than directly to biodegradable products. Catalytic wet oxidation (CWO) of lignin-containing waters is briefly presented at the end of the dissertation. Two completely different catalysts were used: a commercial Pt catalyst and waste power plant ash. CWO showed good performance: using 1 g/L of residual ash gave a lignin removal of 86% and a COD removal of 39% at 150°C (a lower temperature and pressure than with WO). It was noted that the ash catalyst caused a remarkable removal rate for lignin already during pre-heating: at `zero' time, 58% of the lignin was degraded.
In general, wet oxidation is not recommended for use as a complete mineralization method, but as a pre-treatment phase to eliminate toxic or poorly biodegradable components and to reduce the high level of organics. Biological treatment is an appropriate post-treatment method, since easily biodegradable organic matter remains after the WO process. The combination of wet oxidation with subsequent biological treatment can be an effective option for the treatment of lignin-containing waters.
Abstract:
The aim of this work is to compare two families of mathematical models with respect to their capability to capture the statistical properties of real electricity spot market time series. The first family consists of ARMA-GARCH models and the second of mean-reverting Ornstein-Uhlenbeck models. The two model families were applied to two price series of the Nordic Nord Pool spot market for electricity, namely the System prices and the DenmarkW prices. The parameters of both models were calibrated from the real time series. After carrying out simulations with the optimal models from both families, we conclude that neither ARMA-GARCH models nor conventional mean-reverting Ornstein-Uhlenbeck models, even when calibrated optimally with real electricity spot market price or return series, capture the statistical characteristics of the real series. In the case of less spiky behaviour (the System prices), however, the mean-reverting Ornstein-Uhlenbeck model could be seen to partially succeed in this task.
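As an illustration of the second model family, the following sketch simulates a mean-reverting Ornstein-Uhlenbeck process and recovers its mean-reversion rate from the simulated series via the equivalent AR(1) regression. All parameter values are assumed for illustration and are not calibrated Nord Pool figures:

```python
import math
import random

# Simulate a mean-reverting Ornstein-Uhlenbeck process,
#   dX = theta * (mu - X) dt + sigma dW,
# with Euler-Maruyama, then recover theta from the equivalent discrete AR(1)
# form X[t+1] = a + b * X[t] + eps, where b = exp(-theta * dt).
# All numeric values below are assumed for illustration.
random.seed(1)
theta, mu, sigma = 2.0, 30.0, 5.0     # mean-reversion rate, level, volatility
dt, n = 1.0 / 365.0, 20000            # daily step, number of steps
x = [mu]
for _ in range(n):
    dw = random.gauss(0.0, math.sqrt(dt))
    x.append(x[-1] + theta * (mu - x[-1]) * dt + sigma * dw)

# Ordinary least squares slope of X[t+1] on X[t], then theta_hat = -ln(b)/dt
xs, ys = x[:-1], x[1:]
mx, my = sum(xs) / n, sum(ys) / n
b = sum((u - mx) * (v - my) for u, v in zip(xs, ys)) / sum((u - mx) ** 2 for u in xs)
theta_hat = -math.log(b) / dt
```

The AR(1) slope is the standard route for calibrating an Ornstein-Uhlenbeck model from a discretely sampled series; with finite data the estimate `theta_hat` scatters around the true value.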
Abstract:
This study presents an automatic, computer-aided analytical method called Comparison Structure Analysis (CSA), which can be applied to different dimensions of music. The aim of CSA is first and foremost practical: to produce dynamic and understandable representations of musical properties by evaluating the prevalence of a chosen musical data structure through a musical piece. Such a comparison structure may refer to a mathematical vector, a set, a matrix or another type of data structure, or even a combination of data structures. CSA depends on an abstract systematic segmentation that allows for a statistical or mathematical survey of the data. To choose a comparison structure is to tune the apparatus to be sensitive to an exclusive set of musical properties. CSA settles somewhere between traditional music analysis and computer-aided music information retrieval (MIR). Theoretically defined musical entities, such as pitch-class sets, set-classes and particular rhythm patterns, are detected in compositions using pattern extraction and pattern comparison algorithms that are typical within the field of MIR. In principle, the idea of comparison structure analysis can be applied to any time-series type of data and, in the music analytical context, to polyphonic as well as homophonic music. Tonal trends, set-class similarities, invertible counterpoints, voice-leading similarities, short-term modulations, rhythmic similarities and multiparametric changes in musical texture were studied. Since CSA allows for a highly accurate classification of compositions, its methods may be applicable to symbolic music information retrieval as well. The strength of CSA lies especially in the possibility of making comparisons between observations concerning different musical parameters, and of combining it with statistical and perhaps other music-analytical methods. The results of CSA depend on the adequacy of the similarity measure.
New similarity measures for tonal stability, rhythmic similarity and set-class similarity measurements were proposed. The most advanced results were attained by employing automated function generation (comparable with so-called genetic programming) to search for an optimal model for set-class similarity measurements. However, the results of CSA seem to agree strongly with one another, independent of the type of similarity function employed in the analysis.
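To make the notion of a set-class similarity measure concrete, the following sketch computes a simple similarity between pitch-class sets from their interval-class vectors. This is a textbook-style baseline for illustration only, not one of the measures proposed in the study:

```python
# A simple baseline similarity between pitch-class sets, computed from their
# interval-class vectors (ICVs); illustrative only, not a measure from the study.
def interval_class_vector(pcs):
    """Count interval classes 1..6 over all unordered pairs of pitch classes."""
    icv = [0] * 6
    pcs = sorted(set(p % 12 for p in pcs))
    for i in range(len(pcs)):
        for j in range(i + 1, len(pcs)):
            ic = (pcs[j] - pcs[i]) % 12
            icv[min(ic, 12 - ic) - 1] += 1
    return icv

def icv_similarity(a, b):
    """1 minus the normalized city-block distance between the two ICVs."""
    va, vb = interval_class_vector(a), interval_class_vector(b)
    total = sum(va) + sum(vb)
    if total == 0:
        return 1.0
    return 1.0 - sum(abs(p - q) for p, q in zip(va, vb)) / total

sim_triads = icv_similarity([0, 4, 7], [0, 3, 7])   # major vs minor triad
sim_cluster = icv_similarity([0, 1, 2], [0, 4, 7])  # chromatic cluster vs triad
```

Major and minor triads share an interval-class vector, so this baseline judges them maximally similar; a chromatic cluster and a triad share no interval classes at all.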
Abstract:
Experimental modal analysis is an experimental method for determining the characteristic vibration of structures. The work had three objectives. The first was to create instructions for the visual examination of the natural modes of structures with the equipment and software available at Lappeenranta University of Technology. The instructions are based on the idea that experimental modal analysis could be put to more effective use in machine design education, and they were prepared mainly with figures and supporting explanations. The second objective was to compare the natural vibration of a structure when it was freely supported and when it was rigidly mounted in its natural environment. When the frequencies were compared, it was observed that differently supported structures behave differently, which must be taken into account when critical vibrations are investigated. Natural vibrations can also be determined mathematically, for example with the finite element method. The third objective was therefore to compare the natural vibration values obtained with the finite element method and experimentally. The element model was refined by varying different parameters so that the natural frequency values corresponded to each other as well as possible. The objective was achieved.
Abstract:
Direct leaching is an alternative to the conventional roast-leach-electrowin (RLE) zinc production method. The basic reaction of the direct leaching method is the oxidation of sphalerite concentrate in acidic solution by ferric iron. The reaction mechanism and kinetics, mass transfer, and current modifications of the zinc concentrate direct leaching process are considered. Particular attention is paid to the oxidation-reduction cycle of iron and its role in the direct leaching of zinc concentrate, since under certain conditions it can be one of the limiting factors of the leaching process. The oxidation-reduction cycle of iron was studied experimentally with the goal of gaining new knowledge for developing the direct leaching of zinc concentrate. To this end, ferrous iron oxidation experiments were carried out, and the effect of parameters such as temperature, pressure, sulfuric acid concentration, and ferrous iron and copper concentrations was studied. Based on the experimental results, a mathematical model of the ferrous iron oxidation rate was developed. According to the results obtained during the study, the reaction rate orders for the ferrous iron, oxygen and copper concentrations are 0.777, 0.652 and 0.0951, respectively. The values predicted by the model were in good agreement with the experimental results. The reliability of the estimated parameters was evaluated by MCMC analysis, which showed good parameter reliability.
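The reported reaction orders can be combined into a power-law rate expression as follows. The rate constant and the concentrations are assumed placeholder values; only the exponents come from the study:

```python
# Power-law rate expression using the reaction orders reported above,
#   r = k * [Fe2+]**0.777 * [O2]**0.652 * [Cu2+]**0.0951
# The rate constant k and the concentrations are assumed placeholder values.
def oxidation_rate(c_fe2, c_o2, c_cu, k=1.0e-3):
    """Ferrous iron oxidation rate for the given concentrations."""
    return k * c_fe2 ** 0.777 * c_o2 ** 0.652 * c_cu ** 0.0951

r_base = oxidation_rate(0.5, 0.01, 0.02)
r_double_fe = oxidation_rate(1.0, 0.01, 0.02)   # doubling [Fe2+]
ratio = r_double_fe / r_base                    # = 2**0.777, about 1.71
```

The near-zero order for copper (0.0951) means the copper concentration has only a weak direct effect on the rate in this expression: doubling it multiplies the rate by 2**0.0951, about 1.07.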
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to define estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scannings are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjustable to the local conditions of the study area at hand. All the data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there are generally no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters that are estimated, the mathematical estimation models are of the "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters that are estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could be partly replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory.
The mathematical model parameter definition steps are automated, and the cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
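The variable-selection step mentioned above can be sketched, for instance, as greedy forward selection over a linear model. This is a generic illustration with synthetic data, not the procedure developed in the thesis:

```python
import numpy as np

# Greedy forward variable selection for a linear "black-box" estimation model
# (a generic illustration with synthetic data). Among 8 candidate auxiliary
# variables, only columns 0 and 3 actually influence the response.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                # candidate auxiliary variables
y = 3.0 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=200)

def rss(cols):
    """Residual sum of squares of an OLS fit on the given feature columns."""
    A = np.column_stack([np.ones(len(y))] + [X[:, c] for c in cols])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return float(resid @ resid)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(2):                           # greedily add the best variable
    best = min(remaining, key=lambda c: rss(selected + [c]))
    selected.append(best)
    remaining.remove(best)
```

Greedy selection keeps the model small and avoids fitting the collinear, uninformative columns; in practice the number of added variables would be chosen by a stopping criterion (e.g. cross-validation) rather than fixed in advance.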
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, which are simply relations that can also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision-making methods and in operational research. This thesis aims at showing some recent advances in their methodology; there are a number of open issues in this field, and the contributions presented in the thesis can be grouped accordingly. The first issue regards the estimation of a weight vector from a given preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains the proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization models. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
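For background, a priority vector can be estimated from a multiplicative pairwise comparison matrix with, for example, the classical geometric-mean method. The sketch below uses a perfectly consistent 3x3 matrix; it illustrates the estimation problem, not the new algorithm proposed in the thesis:

```python
import math

# Estimating a priority (weight) vector from a consistent multiplicative
# pairwise comparison matrix with the classical geometric-mean method.
# The matrix encodes a_ij = w_i / w_j for true weights proportional to
# (4, 2, 1); this is background illustration only.
A = [
    [1.0,  2.0, 4.0],
    [0.5,  1.0, 2.0],
    [0.25, 0.5, 1.0],
]

n = len(A)
g = [math.prod(row) ** (1.0 / n) for row in A]  # row geometric means
s = sum(g)
w = [v / s for v in g]                          # normalized priority vector
```

For a perfectly consistent matrix every reasonable estimator recovers the same weights; the methods only disagree, and the open issues discussed in the thesis arise, when the relation is inconsistent or incomplete.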
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
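A minimal random-walk Metropolis sampler, the basic building block behind the MCMC methods discussed, can be sketched on a toy problem: inferring a Gaussian mean with known variance. The data and the tuning values are assumed for illustration:

```python
import math
import random

# Toy Bayesian parameter estimation with a random-walk Metropolis sampler:
# infer the mean of Gaussian data with known unit variance and a flat prior.
# The data, proposal width and chain length are assumed illustrative values.
random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(200)]
sample_mean = sum(data) / len(data)

def log_post(theta):
    """Log-posterior up to a constant: Gaussian likelihood, flat prior."""
    return -0.5 * sum((x - theta) ** 2 for x in data)

chain = []
theta, lp = 0.0, log_post(0.0)
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.2)         # random-walk proposal
    lp_prop = log_post(prop)
    if math.log(random.random()) < lp_prop - lp:  # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

post_mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

The chain explores the whole posterior distribution rather than producing a single point estimate; its retained samples can be used directly for the downstream tasks mentioned above, such as optimal experimental design or model-based optimization under parameter uncertainty.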
Abstract:
The main focus of this thesis is to define the field weakening point of a permanent magnet synchronous machine with embedded magnets in traction applications. As part of the thesis, a modelling program was made to help the designer define the field weakening point in practical applications. The thesis utilizes equations based on the current angle, which can be derived from the vector diagram of the permanent magnet synchronous machine. The design parameters of the machine are the maximum rotational speed, the saliency ratio, the maximum induced voltage and the characteristic current. The main result of the thesis is the determination of the rated rotational speed at which field weakening starts. The behaviour of the machine is evaluated over a wide speed range, and the changes in the machine parameters are examined.
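The current-angle formulation can be sketched as follows: neglecting stator resistance, the field weakening point is the speed at which the stator voltage reaches the inverter limit. All per-unit values below are assumed for illustration and are not the machine data of the thesis:

```python
import math

# Per-unit sketch: find the speed at which the stator voltage limit is
# reached for a given stator current and current angle (resistance
# neglected). All numeric values are assumed for illustration.
psi_pm = 0.9                      # permanent magnet flux linkage, p.u.
L_d, L_q = 0.4, 0.8               # d- and q-axis inductances (saliency ratio 2)
i_s, u_max = 1.0, 1.0             # stator current magnitude and voltage limit, p.u.
gamma = math.radians(110.0)       # assumed current angle measured from the d-axis

i_d = i_s * math.cos(gamma)       # demagnetizing d-axis current component
i_q = i_s * math.sin(gamma)
psi_d = psi_pm + L_d * i_d        # d-axis flux linkage
psi_q = L_q * i_q                 # q-axis flux linkage
psi_s = math.hypot(psi_d, psi_q)  # stator flux linkage magnitude

# Below omega_fw the voltage limit is not binding; above it, field weakening
# (a more negative i_d) is needed because u = omega * psi_s would exceed u_max.
omega_fw = u_max / psi_s
```

In a design program of the kind described, this calculation would be repeated over the current angle and speed range to map out where the voltage limit forces the transition into field weakening.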
Abstract:
Percarboxylic acids are commonly used as disinfection and bleaching agents in the textile, paper, and fine chemical industries. All of these applications are based on the oxidative potential of these compounds. Despite the high interest in these chemicals, they are unstable and explosive, which increases the risks of the synthesis process and of transportation. Therefore, safety criteria in the production process should be considered. Microreactors represent a technology that efficiently exploits the safety advantages resulting from small scale. Therefore, microreactor technology was used in the synthesis of peracetic acid and performic acid. These percarboxylic acids were produced at different temperatures, residence times and catalyst (i.e. sulfuric acid) concentrations. Both synthesis reactions proved to be rather fast: with performic acid, equilibrium was reached in 4 min at 313 K, and with peracetic acid in 10 min at 343 K. In addition, the experimental results were used to study the kinetics of the formation of performic acid and peracetic acid. The advantages of the microreactors in this study were the efficient temperature control, even in a very exothermic reaction, and the good mixing due to the short diffusion distances; therefore, the reaction rates could be determined with high accuracy. Three different models were considered in order to estimate kinetic parameters such as reaction rate constants and activation energies. Of these three models, the laminar flow model with a radial velocity distribution gave the most precise parameters. However, sulfuric acid creates many drawbacks in this synthesis process. Therefore, a "greener" way of using a heterogeneous catalyst in the synthesis of performic acid in a microreactor was studied. The cation exchange resin Dowex 50 Wx8 showed very high activity and a long lifetime in this reaction. In the presence of this catalyst, equilibrium was reached in 120 seconds at 313 K, which indicates a rather fast reaction.
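The estimation of an activation energy from rate constants measured at two temperatures can be sketched with the Arrhenius equation. The rate constants below are assumed illustrative values, not results from the study; only the two temperatures match those mentioned above:

```python
import math

# Two-point Arrhenius estimate, k = A * exp(-Ea / (R * T)). The rate
# constants k1 and k2 are assumed illustrative values; the temperatures
# match those mentioned for the syntheses.
R = 8.314                       # gas constant, J/(mol K)
T1, k1 = 313.0, 0.012           # assumed rate constant at 313 K, 1/min
T2, k2 = 343.0, 0.065           # assumed rate constant at 343 K, 1/min

# ln(k2/k1) = -(Ea/R) * (1/T2 - 1/T1)  =>  solve for Ea
Ea = -R * math.log(k2 / k1) / (1.0 / T2 - 1.0 / T1)
A = k1 * math.exp(Ea / (R * T1))   # pre-exponential factor from either point
```

With two temperatures the fit is exact by construction; the regression approach over all measured temperatures, as used with the flow models in the study, additionally yields confidence estimates for the parameters.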
In addition, the safety advantages of microreactors were investigated in this study. Four different conventional methods were used. Production of peracetic acid was used as a test case, and the safety of one conventional batch process was compared with an on-site continuous microprocess. It was found that the conventional methods for the analysis of process safety might not be reliable and adequate for radically novel technology, such as microreactors. This is understandable because the conventional methods are partly based on experience, which is very limited in connection with totally novel technology. Therefore, one checklist-based method was developed to study the safety of intensified and novel processes at the early stage of process development. The checklist was formulated using the concept of layers of protection for a chemical process. The traditional and three intensified processes of hydrogen peroxide synthesis were selected as test cases. With these real cases, it was shown that several positive and negative effects on safety can be detected in process intensification. The general claim that safety is always improved by process intensification was questioned.