Abstract:
The warp of corrugated board is the most prevalent quality problem in the corrugated board industry. Modern corrugators produce high-quality board, but warp problems still often occur in the production of some board grades. One of the main reasons for this is the humidity and temperature levels of the raw materials. The goal of the research is to find out how repeatable the adjusted corrugator recipe parameters required for proper running of the corrugated board are for the considered board grades, and how temperature and humidity imbalances of the raw material papers influence the warp formation of the finished board. Furthermore, solutions for preventing warp of corrugated board are presented in the thesis.
Abstract:
The objective of this work was to evaluate the feasibility of using physiological parameters for water deficit tolerance as an auxiliary method for the selection of upland rice genotypes. Two experiments - with and without water deficit - were carried out in Porangatu, in the state of Goiás, Brazil; the water deficit experiment received about half of the irrigation applied to the well-watered experiment. Four genotypes with different tolerance levels to water stress were evaluated. The UPLRI 7, B6144F-MR-6-0-0, and IR80312-6-B-3-2-B genotypes, under water stress conditions, showed lower stomatal diffusive resistance, higher leaf water potential, and lower leaf temperature during the day than the control. These genotypes showed the highest grain yields under water stress conditions, which were 534, 601, and 636 kg ha-1, respectively, and did not differ significantly from one another. They also showed a lower drought susceptibility index than the other genotypes. 'BRS Soberana' (susceptible control) was totally unproductive under drought conditions. Leaf temperature is an easily measured parameter correlated with plant water status and is viable for selecting rice genotypes for water deficit tolerance.
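The drought susceptibility index is not defined in the abstract; a commonly used form, which the study may or may not have followed exactly, is the Fischer-Maurer index
\[ \mathrm{DSI}_i = \frac{1 - Y_{d,i}/Y_{p,i}}{1 - \bar{Y}_d/\bar{Y}_p}, \]
where \(Y_{d,i}\) and \(Y_{p,i}\) are the grain yields of genotype i under water deficit and well-watered conditions, and \(\bar{Y}_d\), \(\bar{Y}_p\) are the corresponding means over all genotypes; values below 1 indicate above-average drought tolerance.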
Abstract:
One of the central tasks in the statistical analysis of mathematical models is the estimation of the models' unknown parameters. This master's thesis is concerned with the distributions of the unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently gained popularity, mainly because of the increase in available computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify the validity of the methods. Of the recently developed methods, adaptive MCMC methods in particular are examined. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasised. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are formulated as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and at Åbo Akademi University in Turku.
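As an illustration of the samplers discussed above, the following sketch implements a plain random-walk Metropolis algorithm for the posterior of the parameters of a toy nonlinear model; the model, data, priors and proposal width are hypothetical placeholders and are not taken from the thesis (adaptive MCMC variants would additionally tune the proposal width or covariance from the accumulating chain).

    # Minimal random-walk Metropolis sketch; all model details are illustrative only.
    import numpy as np

    def log_posterior(theta, t, y_obs, sigma=0.1):
        # Toy exponential-decay model y = theta[0]*exp(-theta[1]*t) with a flat
        # prior restricted to positive parameters and Gaussian measurement noise.
        if np.any(theta <= 0):
            return -np.inf
        y_model = theta[0] * np.exp(-theta[1] * t)
        return -0.5 * np.sum(((y_obs - y_model) / sigma) ** 2)

    def metropolis(log_post, theta0, n_steps=20000, step=0.05, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        chain = np.empty((n_steps, theta.size))
        for i in range(n_steps):
            proposal = theta + step * rng.standard_normal(theta.size)  # symmetric proposal
            lp_prop = log_post(proposal)
            if np.log(rng.random()) < lp_prop - lp:                    # accept/reject step
                theta, lp = proposal, lp_prop
            chain[i] = theta
        return chain

    # Toy usage with synthetic data generated from the same model.
    t = np.linspace(0.0, 5.0, 30)
    rng = np.random.default_rng(1)
    y_obs = 2.0 * np.exp(-0.7 * t) + 0.1 * rng.standard_normal(t.size)
    chain = metropolis(lambda th: log_posterior(th, t, y_obs), theta0=[1.0, 1.0])
    print(chain[10000:].mean(axis=0))  # approximate posterior means after burn-in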
Abstract:
Optimisation of different industrial production processes is a highly topical subject. Many control systems date from a time when the computing power of computers was very modest compared with today. The thesis presents a production process that includes the problem of forming a cutting plan for steel. Casting is one of the intermediate stages of steel manufacturing: molten steel brought to a suitable quality is cast into a line, where it solidifies and is cut into billets. In later stages the steel billets are processed into smaller units, the end products of the plant. Depending on the order book, continuously cast billets can be cut in many different ways. A cutting plan is needed for this, and forming it requires solving a mixed-integer optimisation problem. Mixed-integer optimisation problems are the most challenging form of optimisation, and they have been studied relatively little compared with simpler optimisation problems. The computing power of present-day computers has, however, made it possible to use and develop heavier and more complex optimisation algorithms. A stochastic optimisation method, the differential evolution algorithm, is used and presented in the thesis. A cutting optimisation algorithm for steel is presented. The developed optimisation method operates dynamically in a plant environment according to user-defined parameters. The work is part of a control system delivered by Syncron Tech Oy to Ovako Bar Oy Ab.
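A minimal sketch of the differential evolution algorithm mentioned above, shown here for a generic continuous objective; the objective function, bounds and control parameters are illustrative only, and the actual cutting-plan problem of the thesis would additionally require mixed-integer handling (for example, rounding the integer variables before evaluation).

    # Minimal DE/rand/1/bin differential evolution sketch; settings are illustrative.
    import numpy as np

    def differential_evolution(obj, bounds, pop_size=30, F=0.8, CR=0.9,
                               generations=200, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.asarray(bounds, dtype=float).T
        dim = len(bounds)
        pop = lo + rng.random((pop_size, dim)) * (hi - lo)   # random initial population
        fit = np.array([obj(x) for x in pop])
        for _ in range(generations):
            for i in range(pop_size):
                idx = [j for j in range(pop_size) if j != i]
                a, b, c = pop[rng.choice(idx, size=3, replace=False)]
                mutant = np.clip(a + F * (b - c), lo, hi)     # differential mutation
                cross = rng.random(dim) < CR
                cross[rng.integers(dim)] = True               # guarantee at least one component
                trial = np.where(cross, mutant, pop[i])       # binomial crossover
                f_trial = obj(trial)
                if f_trial <= fit[i]:                         # greedy one-to-one selection
                    pop[i], fit[i] = trial, f_trial
        best = int(np.argmin(fit))
        return pop[best], fit[best]

    # Toy usage: minimise a simple quadratic objective in four variables.
    x_best, f_best = differential_evolution(lambda x: float(np.sum((x - 3.0) ** 2)),
                                            bounds=[(-10, 10)] * 4)
    print(x_best, f_best)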
Abstract:
This paper presents research concerning the conversion of non-accessible web pages containing mathematical formulae into accessible versions through an OCR (Optical Character Recognition) tool. The objective of this research is twofold: first, to establish criteria for evaluating the potential accessibility of mathematical web sites, i.e. the feasibility of converting non-accessible (non-MathML) math sites into accessible (MathML) ones; second, to propose a data model and a mechanism to publish evaluation results, making them available to the educational community, who may use them as a quality measurement for selecting learning material. Results show that conversion using OCR tools is not viable for math web pages, mainly for two reasons: many of these pages are designed to be interactive, which makes a correct conversion difficult, if not almost impossible; and formulae (whether images or text) have been written without taking into account the standards of mathematical writing, so OCR tools do not properly recognize math symbols and expressions. In spite of these results, we think the proposed methodology for creating and publishing evaluation reports may be rather useful in other accessibility assessment scenarios.
Abstract:
The literature part of the thesis reviews conventional treatment methods used for purifying fatty wastewaters as well as ultrafiltration. Conventional treatment methods for fatty wastewaters include sedimentation, flotation, hydrocyclones, droplet size enlargement, filtration and biological treatment. In addition, acid hydrolysis can be applied as a pretreatment for the above methods. The use of conventional treatment methods is limited by their ineffectiveness in removing emulsified and dissolved oil. This, together with tightened discharge requirements and the rapid development of membrane technology, has increased interest in membrane techniques. The applied part of the thesis examines the problems that fats may cause in the chemical and biological treatment at the Porvoo refinery. Fatty wastewaters are generated in biodiesel production, where fats are used as a feedstock. A comparison of the current conditions at the refinery's water treatment plant with the conditions required for treating fats shows that the optimum conditions are fairly close to each other and that the phosphorus, nitrogen and COD loads carried by the fatty wastewaters are fairly small. The greatest potential problems caused by the fats arise at the activated sludge plant, where light fat rising to the surface carries sludge with it. Fats and fatty acids also promote the growth of filamentous bacteria, whose abundant occurrence leads to poorly settling sludge, i.e. bulking sludge. The load imposed by fatty waters on the activated sludge process was examined with an Excel spreadsheet model based on Activated Sludge Model No. 3 and a bio-P phosphorus removal module. The spreadsheet model developed by Tuomo Hilli in his 2002 master's thesis was used as the basis of the work. All equations and parameters essential to the model are presented in the thesis. Based on this study, the usability of the model is limited by the fact that it has not been calibrated for the Porvoo refinery; an uncalibrated model can give only indicative results.
Abstract:
Normally either the Güntelberg or Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations and, additionally, of the parameter-free equations used in the Bates-Guggenheim convention and in the Pitzer formalism for activity coefficients were tested with experimentally determined activity coefficients of HCl, HBr, HI, LiCl, NaCl, KCl, RbCl, CsCl, NH4Cl, LiBr, NaBr and KBr in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
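For reference, the commonly quoted forms of the equations named above are, for the mean activity coefficient of an electrolyte at 298.15 K with molal ionic strength I and Debye-Hückel constant A of about 0.51 (mol kg^-1)^{-1/2},
\[ \text{Güntelberg:}\quad \lg \gamma_\pm = -\,A\,|z_+z_-|\,\frac{\sqrt{I}}{1+\sqrt{I}}, \qquad \text{Davies:}\quad \lg \gamma_\pm = -\,A\,|z_+z_-|\left(\frac{\sqrt{I}}{1+\sqrt{I}} - 0.3\,I\right), \]
while a two-parameter equation of the Hückel type has the general form
\[ \ln \gamma_\pm = -\,\frac{\alpha\,|z_+z_-|\,\sqrt{I}}{1 + B\sqrt{I}} + b\,\frac{m}{m^\circ}, \]
with B and b the electrolyte-specific parameters. These are the textbook forms; the exact expressions and parameter values fitted in the study are not reproduced here.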
Abstract:
Normally either the Güntelberg or Davies equation is used to predict activity coefficients of electrolytes in dilute solutions when no better equation is available. The validity of these equations and, additionally, of the parameter-free equation used in the Bates-Guggenheim convention for activity coefficients were tested with experimentally determined activity coefficients of LaCl3, CaCl2, SrCl2 and BaCl2 in aqueous solutions at 298.15 K. The experimental activity coefficients of these electrolytes can usually be reproduced within experimental error by means of a two-parameter equation of the Hückel type. The best Hückel equations were also determined for all electrolytes considered. The data used in the calculations of this study cover almost all reliable galvanic cell results available in the literature for the electrolytes considered. The results of the calculations reveal that the parameter-free activity coefficient equations can only be used for very dilute electrolyte solutions in thermodynamic studies.
Abstract:
This doctoral study investigated the effect of physico-chemical conditions and operating parameters on the fractionation of cheese whey. The literature part discusses the environmental impact of whey, the utilisation of whey, and the treatment of whey with membrane techniques. The experimental part is divided into two parts, the first dealing with ultrafiltration and the second with nanofiltration in the fractionation of cheese whey. The ultrafiltration membrane was selected on the basis of its cut-off value, which was determined with polyethylene glycol solutions under conditions where concentration polarisation does not disturb the measurement. The critical flux concept was used to find a suitable protein concentration for the ultrafiltration experiments, because whey proteins are known membrane foulants. In the ultrafiltration experiments, the permeation of the different whey components through the membrane and the properties affecting it were studied. The peptide fractions of the whey permeates were analysed by size-exclusion chromatography and MALDI-TOF mass spectrometry. The mean pore size of the nanofiltration membranes used in the experiments was determined with neutral solutes and the zeta potentials with streaming potential measurements. Amino acids were used as model substances when studying the significance of pore size and charge in the separation. The retention of the amino acids was affected by the pH and ionic strength of the solution as well as by intermolecular interactions. The permeate produced in whey ultrafiltration, which contained small peptides, lactose and salts, was nanofiltered at acidic and alkaline pH. In nanofiltration under alkaline conditions, less fouling occurred and the permeate flux was better. Under alkaline conditions, the selectivity of the separation of lactose from peptides was also better than under acidic conditions.
Abstract:
The building industry has a particular interest in using clinching as a joining method for frame constructions of light-frame housing. Normally many clinch joints are required in the joining of frames. In order to maximise the strength of the complete assembly, each clinch joint must be as sound as possible. Experimental testing is the main means of optimising a particular clinch joint. This includes shear strength testing and visual observation of joint cross-sections. The manufacturers of clinching equipment normally perform such experimental trials. Finite element analysis can also be used to optimise the tool geometry and the process parameter X, which represents the thickness of the base of the joint. However, such procedures require dedicated software, a skilled operator, and test specimens in order to verify the finite element model. In addition, with current technology several hours of computing time may be necessary. The objective of the study was to develop a simple calculation procedure for rapidly establishing an optimum value of the parameter X for a given tool combination. It should be possible to use the procedure on a daily basis, without stringent demands on the skill of the operator or on the equipment. It is also desirable that the procedure significantly decrease the number of shear strength tests required for verification. The experimental work involved tests aimed at understanding the behaviour of the sheets during clinching. The most notable observation concerned the stage of the process in which the upper sheet was initially bent, after which the deformation mechanism changed to shearing and elongation. The amount of deformation was measured relative to the original location of the upper sheet and characterised as the C-measure. By understanding in detail the behaviour of the upper sheet, it was possible to estimate a bending line function for the surface of the upper sheet. A procedure was developed which makes it possible to estimate the process parameter X for each tool combination with a fixed die. The procedure is based on equating the volume of material on the punch side with the volume of the die. Detailed information on the behaviour of the material on the punch side is required, under the assumption that the volume of the die does not change during the process. The procedure was applied to shear strength testing of a sample material. The sample material was continuously hot-dip zinc-coated high-strength constructional steel with a nominal thickness of 1.0 mm. The minimum Rp0.2 proof stress was 637 N/mm2. Such material has not yet been used extensively in light-frame housing, and little has been published on clinching of the material. The performance of the material is therefore of particular interest. Companies that use clinching on a daily basis stand to gain the greatest benefit from the procedure. By understanding the behaviour of the sheets in different cases, it is possible to use data at an early stage for adjusting and optimising the process. In particular, the functionality of common tools can be increased, since it is possible to characterise the complete range of existing tools. The study increases and broadens the amount of basic information concerning the clinching process. New approaches and points of view are presented and used for generating new knowledge.
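Expressed as an equation (the notation here is ours, not the thesis's), the volume balance underlying the procedure reads
\[ V_{\mathrm{punch}}(X) = V_{\mathrm{die}}, \]
where \(V_{\mathrm{die}}\) is the fixed volume of the die cavity and \(V_{\mathrm{punch}}(X)\) is the volume of sheet material displaced on the punch side, evaluated with the help of the estimated bending line function of the upper sheet; solving this single equation for X gives the process parameter for each tool combination with a fixed die.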
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions to the problem are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, the mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations and at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). Adopting the developed solution method in solving the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated into mechanical system simulation techniques. The developed kinematic design method is based on the combination of the two-precision-point formulation and the optimisation (with mathematical programming techniques or with optimisation methods based on probability and statistics) of substructures, using criteria calculated from the system-level response of multidegree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) have been eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
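For orientation, the dyadic (precision-point) equations referred to above are, in their standard planar complex-number form,
\[ \mathbf{W}\,(e^{\mathrm{i}\beta_j} - 1) + \mathbf{Z}\,(e^{\mathrm{i}\alpha_j} - 1) = \boldsymbol{\delta}_j, \qquad j = 1,\dots,n-1, \]
where W and Z are the unknown dyad vectors, \(\beta_j\) and \(\alpha_j\) the link rotations, and \(\boldsymbol{\delta}_j\) the prescribed displacements of the precision point. This is the textbook form of the equations; the direct polynomial form actually solved in the thesis is not reproduced here.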
Abstract:
The present dissertation is devoted to a systematic approach to the development of the abatement of organic toxic and refractory pollutants by chemical decomposition methods in aqueous and gaseous phases. The systematic approach outlines the basic scenario of chemical decomposition process applications, with a step-by-step approximation to the most effective result with a predictable outcome for the full-scale application, confirmed by successful experience. The strategy includes the following steps: chemistry studies; reaction kinetic studies in interaction with the mass transfer processes under conditions of different control parameters; contact equipment design and studies; mathematical description of the process for its modelling and simulation; integration of the processes into treatment technology and its optimisation; and the treatment plant design. The main idea of the systematic approach to introducing an oxidation process consists of a search for the most effective combination between the chemical reaction and the treatment device in which the reaction is supposed to take place. Under this strategy, knowledge of the reaction pathways, products, stoichiometry and kinetics is fundamental and, unfortunately, often unavailable in advance. Therefore, research into the chemistry of novel treatment methods nowadays comprises a substantial part of the effort. Chemical decomposition methods in the aqueous phase include oxidation by ozonation, ozone-associated methods (O3/H2O2, O3/UV, O3/TiO2), the Fenton reagent (H2O2/Fe2+/3+) and photocatalytic oxidation (PCO). In the gaseous phase, PCO and catalytic hydrolysis over zero-valent iron are developed. The experimental studies within the described methodology involve aqueous-phase oxidation of natural organic matter (NOM) of potable water, phenolic and aromatic amino compounds, ethylene glycol and its derivatives as de-icing agents, and oxygenated motor fuel additives, namely methyl tert-butyl ether (MTBE), in leachates and polluted groundwater. Gas-phase chemical decomposition includes PCO of volatile organic compounds and dechlorination of chlorinated methane derivatives. The results of the research summarised here are presented in fifteen attachments (publications and papers submitted for publication and under preparation).
Abstract:
This research deals with the dynamic modeling of gas-lubricated tilting pad journal bearings provided with spring-supported pads, including experimental verification of the computation. On the basis of a mathematical model of a film bearing, a computer program has been developed which can be used for the time-dependent simulation of a special type of tilting pad gas journal bearing supported by a rotary spring under different loading conditions (transient running conditions due to externally imposed geometry variations in time). On the basis of the literature, different transformations have been used in the model to simplify the calculation. Numerical simulation is used to solve the non-stationary case of a gas film. The simulation results were compared with literature results for a stationary case (steady running conditions) and were found to be equal. In addition, comparisons were made with a number of stationary and non-stationary bearing tests, which were performed at Lappeenranta University of Technology using bearings designed with the simulation program. A study was also made, using numerical simulation and the literature, to establish the influence of the different bearing parameters on the stability of the bearing. Comparison work was done with the literature on tilting pad gas bearings. This bearing type is rarely used; one literature reference has studied the same bearing type as that used at LUT. A new design of tilting pad gas bearing is introduced. It is based on a stainless steel body and electron beam welding of the bearing parts. It has good operating characteristics and is easier to tune and faster to manufacture than traditional constructions. It is also suitable for large serial production.
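The abstract does not state the governing equation, but gas film bearings of this type are conventionally modelled with the compressible (isothermal) Reynolds equation, which a time-dependent simulation would typically solve numerically:
\[ \frac{\partial}{\partial x}\!\left(p\,h^{3}\,\frac{\partial p}{\partial x}\right) + \frac{\partial}{\partial z}\!\left(p\,h^{3}\,\frac{\partial p}{\partial z}\right) = 6\,\mu\,U\,\frac{\partial (p h)}{\partial x} + 12\,\mu\,\frac{\partial (p h)}{\partial t}, \]
where p is the film pressure, h the local film thickness, \(\mu\) the gas viscosity and U the journal surface speed. Whether the thesis uses exactly this formulation is an assumption.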
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects into mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, the Lindenbaum algebra) leads to a structure called an ET-algebra, introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila, and many others, can be proved in a simple way, which is demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially the fact that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation RoX=T, where R, X and T are fuzzy relations, X is the unknown one, and o is the minimum-induced composition, is extended to compositions induced by more general products in a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is shown that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with values of truth in an injective MV-algebra is introduced and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of the values of truth is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with values of truth in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. This proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
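As background for the second paper, Sanchez's classical characterization on the unit interval (the textbook form, which the paper then extends to more general compositions and value lattices) can be stated as follows. For the sup-min composition, define
\[ \widehat{X}(v,w) = \inf_{u}\bigl(R(u,v) \rightarrow T(u,w)\bigr), \qquad a \rightarrow b = \begin{cases} 1, & a \le b,\\ b, & a > b; \end{cases} \]
then RoX=T has a solution if and only if \(\widehat{X}\) itself is a solution, in which case \(\widehat{X}\) is the greatest solution.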