24 results for algebraic decoding
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Fuzzy set theory and fuzzy logic are studied from a mathematical point of view. The main goal is to investigate common mathematical structures in various fuzzy logical inference systems and to establish a general mathematical basis for fuzzy logic when considered as a multi-valued logic. The study is composed of six distinct publications. The first paper deals with Mattila's LPC+Ch Calculus. This fuzzy inference system is an attempt to introduce linguistic objects to mathematical logic without defining these objects mathematically. LPC+Ch Calculus is analyzed from an algebraic point of view, and it is demonstrated that a suitable factorization of the set of well-formed formulae (in fact, of the Lindenbaum algebra) leads to a structure called an ET-algebra, which is introduced at the beginning of the paper. On its basis, all the theorems presented by Mattila and many others can be proved in a simple way, as demonstrated in Lemmas 1 and 2 and Propositions 1-3. The conclusion critically discusses some other issues of LPC+Ch Calculus, especially that no formal semantics is given for it. In the second paper, Sanchez's characterization of the solvability of the relational equation R∘X = T, where R, X and T are fuzzy relations, X is the unknown, and ∘ is the composition induced by the minimum, is extended to compositions induced by more general products on a general value lattice. Moreover, the procedure also applies to systems of equations. In the third publication, common features in various fuzzy logical systems are investigated. It turns out that adjoint couples and residuated lattices are very often present, though not always explicitly expressed. Some minor new results are also proved. The fourth study concerns Novak's paper, in which Novak introduced first-order fuzzy logic and proved, among other things, the semantico-syntactical completeness of this logic. He also demonstrated that the algebra of his logic is a generalized residuated lattice. It is shown that the examination of Novak's logic can be reduced to the examination of locally finite MV-algebras. In the fifth paper, a multi-valued sentential logic with truth values in an injective MV-algebra is introduced, and the axiomatizability of this logic is proved. The paper develops some ideas of Goguen and generalizes the results of Pavelka on the unit interval. Our proof of completeness is purely algebraic. A corollary of the Completeness Theorem is that fuzzy logic on the unit interval is semantically complete if, and only if, the algebra of truth values is a complete MV-algebra. The Compactness Theorem holds in our well-defined fuzzy sentential logic, while the Deduction Theorem and the Finiteness Theorem do not. Because of its generality and good behaviour, MV-valued logic can be regarded as a mathematical basis of fuzzy reasoning. The last paper is a continuation of the fifth study. The semantics and syntax of fuzzy predicate logic with truth values in an injective MV-algebra are introduced, and a list of universally valid sentences is established. The system is proved to be semantically complete. The proof is based on an idea utilizing some elementary properties of injective MV-algebras and MV-homomorphisms, and is purely algebraic.
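For readers unfamiliar with the algebraic setting, the prototypical MV-algebra referred to above is the real unit interval equipped with the Łukasiewicz operations. This standard example (given here for reference, not reproduced from the thesis) reads:

```latex
% The standard MV-algebra on [0,1]: Lukasiewicz operations
\[
  x \oplus y = \min(1,\, x + y), \qquad
  \neg x = 1 - x, \qquad
  x \odot y = \neg(\neg x \oplus \neg y) = \max(0,\, x + y - 1),
\]
\[
  x \rightarrow y = \min(1,\, 1 - x + y), \qquad x, y \in [0,1].
\]
```

The completeness result quoted in the abstract states that fuzzy logic over the unit interval is semantically complete exactly when the algebra of truth values is a complete MV-algebra, of which this interval algebra is the basic example.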
Abstract:
This thesis studies the properties and usability of operators called t-norms, t-conorms and uninorms, as well as many-valued implications and equivalences. Weights and a generalized mean are embedded into these operators for aggregation, and the operators are used for comparison tasks; for this reason they are referred to as comparison measures. The thesis illustrates how these operators can be weighted with differential evolution and aggregated with a generalized mean, and what kinds of comparison measures can be achieved from this procedure. New operators suitable for comparison measures are suggested. These operators are combination measures based on the use of t-norms and t-conorms, the generalized 3_-uninorm, and pseudo-equivalence measures based on S-type implications. The empirical part of this thesis demonstrates how these new comparison measures work in the field of classification, for example in the classification of medical data. The second application area is from the field of sports medicine, and it represents an expert system for defining an athlete's aerobic and anaerobic thresholds. The core of this thesis offers definitions for comparison measures and illustrates that there is no actual difference in the results achieved in comparison tasks between comparison measures based on distance and comparison measures based on many-valued logical structures. The approach in this thesis has been highly practical, and all use of the measures has been validated mainly by practical testing. In general, many different types of operators suitable for comparison tasks have been presented in the fuzzy logic literature, but there has been little or no experimental work with these operators.
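As an illustration of the kind of weighted, generalized-mean-aggregated comparison measure described above, the sketch below combines a Łukasiewicz-style pseudo-equivalence with a weighted power mean. The choice of equivalence, the weight vector and the mean exponent are illustrative assumptions, not operators or values taken from the thesis.

```python
import numpy as np

def luk_equivalence(x, y):
    """Lukasiewicz-style many-valued equivalence of membership values in [0, 1]."""
    return 1.0 - np.abs(x - y)

def comparison_measure(a, b, weights, p=1.0):
    """Weighted generalized (power) mean of feature-wise equivalences.

    a, b    : feature vectors with components scaled to [0, 1]
    weights : non-negative weights summing to 1 (e.g. tuned by differential evolution)
    p       : exponent of the generalized mean; p = 1 gives the weighted arithmetic mean
    """
    e = luk_equivalence(np.asarray(a, float), np.asarray(b, float))
    w = np.asarray(weights, float)
    if np.isclose(p, 0.0):  # limiting case: weighted geometric mean
        return float(np.exp(np.sum(w * np.log(np.clip(e, 1e-12, 1.0)))))
    return float(np.sum(w * e ** p) ** (1.0 / p))

# Example: compare a sample against a class "ideal vector" (values are made up)
sample = [0.7, 0.2, 0.9]
ideal = [0.8, 0.1, 0.6]
print(comparison_measure(sample, ideal, weights=[0.5, 0.3, 0.2], p=2.0))
```

In a classification setting, a sample would be assigned to the class whose ideal vector yields the largest comparison value.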
Abstract:
Fuzzy subsets and fuzzy subgroups are basic concepts in fuzzy mathematics. We concentrate on fuzzy subgroups and deal with some of their algebraic, topological and complex analytical properties. The explorations are theoretical and belong to pure mathematics. One of our aims is to show how widely fuzzy subgroups can be used in mathematics, which brings out the wealth of this concept. In complex analysis we focus on Möbius transformations, combining them with fuzzy subgroups in both the algebraic and the topological sense. We also survey MV-spaces with or without a link to fuzzy subgroups. The spectral space is known in MV-algebra theory; we are interested in its topological properties in an MV-semilinear space. Later on, we study MV-algebras in connection with Riemann surfaces. The Riemann surface as a concept belongs to complex analysis, while Möbius transformations form a part of the theory of Riemann surfaces. In general, this work gives a good understanding of how different fields of mathematics can be fitted together.
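For reference, the Möbius transformations combined with fuzzy subgroups in this work are the maps of the extended complex plane of the familiar form

```latex
\[
  f(z) \;=\; \frac{a z + b}{c z + d}, \qquad a, b, c, d \in \mathbb{C}, \quad ad - bc \neq 0 .
\]
```

These maps form a group under composition (isomorphic to PSL(2, C)), which is what makes the group-theoretic, and hence fuzzy-subgroup, viewpoint natural.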
Abstract:
In wireless communications the transmitted signals may be affected by noise. The receiver must decode the received message, which can be mathematically modelled as a search for the closest lattice point to a given vector. This problem is known to be NP-hard in general, but for communications applications there exist algorithms that, for a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M.O. Damen, H. El Gamal and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm. The aim is to find ways to improve the algorithm from the complexity point of view. The main contribution of the thesis is the construction of two new modifications to the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
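To make the closest-lattice-point problem concrete, the sketch below implements a compact, textbook-style sphere decoder (QR preprocessing followed by a depth-first search with radius pruning, initialised from the Babai round-off point). It is only an illustration of the search the abstract refers to, not the specific algorithm of Damen, El Gamal and Caire or the modifications developed in the thesis.

```python
import numpy as np

def sphere_decode(H, y):
    """Find an integer vector x minimising ||y - H x|| (H square, full rank)."""
    n = H.shape[1]
    Q, R = np.linalg.qr(H)                      # H = Q R, R upper triangular
    z = Q.T @ y

    # Babai "round-off" estimate gives a finite initial search radius.
    x0 = np.rint(np.linalg.solve(R, z)).astype(int)
    best = {"x": x0, "d2": float(np.sum((z - R @ x0) ** 2))}
    x = x0.copy()

    def search(k, partial_d2):
        budget = best["d2"] - partial_d2        # remaining squared radius
        if budget <= 0:
            return
        interf = R[k, k + 1:] @ x[k + 1:]       # contribution of already-fixed coordinates
        center = (z[k] - interf) / R[k, k]
        half_width = np.sqrt(budget) / abs(R[k, k])
        for cand in range(int(np.ceil(center - half_width)),
                          int(np.floor(center + half_width)) + 1):
            x[k] = cand
            d2 = partial_d2 + (z[k] - interf - R[k, k] * cand) ** 2
            if k == 0:
                if d2 < best["d2"]:
                    best["x"], best["d2"] = x.copy(), d2
            else:
                search(k - 1, d2)

    search(n - 1, 0.0)
    return best["x"]

# Tiny example with made-up numbers
H = np.array([[2.0, 0.3], [0.1, 1.5]])
y = np.array([3.7, -2.2])
x_hat = sphere_decode(H, y)
print(x_hat, np.linalg.norm(y - H @ x_hat))
```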
Abstract:
Abstract
Abstract:
Machines can often be divided into subsystems: control systems, force-producing actuators, and force-transmitting mechanisms. The individual subsystems have been simulated with computer assistance for several decades, but combining the subsystems into one model is a more recent development. In mechanism modelling, for example, the force produced by an actuator is often described as a constant or as a force varying as a function of time. Correspondingly, in actuator analysis the load transmitted by the mechanism to the actuator is described as a constant force or as a time-dependent load representing a duty cycle. When the subsystems are separated from each other, examining the interactions between them becomes very inaccurate, and accounting for the effect of one subsystem on the behaviour of the whole system is difficult. Numerical modelling methods particularly suited to computers have been developed for mechanism dynamics. Most of the methods are based on the Lagrangian approach, which allows modelling with freely chosen coordinate variables. To enable a numerical solution, the system of differential-algebraic equations produced by the method must be manipulated, for example by differentiating the constraint equations twice. In the original numerical solutions of the method, all generalized coordinates describing the mechanism are integrated at every time step. In methods derived from this basic approach, either the independent generalized coordinates are integrated and the dependent coordinates are solved from the constraint equations, or the size of the equation system is reduced, for example by using different rotational coordinates in the velocity and acceleration analyses than in the position analysis. Most integration methods were originally intended for solving ordinary differential equations (ODEs), so the algebraic constraint equations describing the joints that are attached to the system may cause problems. Correcting the errors in the joint constraints, i.e. constraint stabilization, is essential for the success of the dynamic simulation of mechanisms and for the validity of its results. The principle of virtual work used in deriving the modelling methods assumes that the constraint forces do no work, in other words that no displacement against the constraints takes place. Particularly in longer analyses of complex systems, the joint constraints are not satisfied exactly. The energy balance of the system is then violated and virtual energy is created in the system, which breaks the principle of virtual work and makes the results invalid. This report examines different types of modelling and solution methods and compares their performance in the numerical solution of simple mechanisms. The methods are assessed in terms of solution efficiency, satisfaction of the joint constraints, and conservation of the energy balance.
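For reference, the Lagrangian formulation discussed above leads to an index-3 system of differential-algebraic equations. A standard textbook way to write it, together with the twice-differentiated (acceleration-level) constraints used in the numerical solution, is the following (given here as background, not reproduced from the report):

```latex
% Constrained equations of motion and acceleration-level constraints
\[
  \mathbf{M}(\mathbf{q})\,\ddot{\mathbf{q}} + \mathbf{\Phi}_{\mathbf{q}}^{\mathsf T}\,\boldsymbol{\lambda}
    = \mathbf{Q}(\mathbf{q},\dot{\mathbf{q}},t),
  \qquad
  \mathbf{\Phi}(\mathbf{q},t) = \mathbf{0},
\]
\[
  \mathbf{\Phi}_{\mathbf{q}}\,\ddot{\mathbf{q}}
    = -\bigl(\mathbf{\Phi}_{\mathbf{q}}\dot{\mathbf{q}}\bigr)_{\mathbf{q}}\,\dot{\mathbf{q}}
      - 2\,\mathbf{\Phi}_{\mathbf{q}t}\,\dot{\mathbf{q}}
      - \mathbf{\Phi}_{tt}.
\]
```

Because only the acceleration-level constraints are enforced, position- and velocity-level violations can drift during integration, which is why the constraint stabilization discussed in the report (for example Baumgarte-type correction terms) is needed.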
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through development and refinement of numerical (computational) technology in order to facilitate the design analysis and optimisation of complex mechanisms, mechanical components and systems. As a part of the systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions of the problem are obtained by adopting closed-form classical or modern algebraic solution methods or by using numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure in handling design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be solved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on combinations of the two-precision-point formulation and on optimisation of substructures (with mathematical programming techniques or with optimisation methods based on probability and statistics) using criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when it is integrated with the mechanical system simulation techniques.
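For orientation, the dyadic equations mentioned above are commonly written in complex-number form as below; this is the textbook standard-form planar dyad equation, given here only as background to the direct polynomial form used in the thesis:

```latex
% Standard-form planar dyad equation for precision positions j = 1, ..., n-1
\[
  \mathbf{W}\bigl(e^{\,i\beta_j} - 1\bigr) + \mathbf{Z}\bigl(e^{\,i\alpha_j} - 1\bigr) = \boldsymbol{\delta}_j ,
\]
```

where W and Z are the unknown link vectors of the dyad, β_j and α_j are the link rotations, and δ_j is the displacement of the precision point from its first position. Exact synthesis solves such equations simultaneously for a small number of positions, while approximate synthesis minimises their residuals over many positions.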
Abstract:
The need for image data compression has become increasingly evident during the last ten years with the growth of applications based on image data. Particular attention is nowadays paid to spectral images, whose storage and transfer require large amounts of disk space and bandwidth. The wavelet transform has proven to be a good solution for lossy data compression. Its implementation in subband coding is based on wavelet filters, and the problem is the selection of a suitable wavelet filter for the different images to be compressed. This work presents an overview of compression methods based on the wavelet transform. The focus of the work is the determination of orthogonal filters by parameterization. The work also establishes, by means of algebraic equations, the similarity of two different approaches. The experimental part contains a set of tests that justify the need for parameterization: different images require different filters, and different compression ratios are achieved with different filters. Finally, compression of spectral images is carried out with the wavelet transform.
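A minimal sketch of the subband-coding idea behind wavelet-based lossy compression, using the simplest orthogonal wavelet (Haar) and hard thresholding of the detail coefficients; the parameterized orthogonal filters studied in the thesis generalise this fixed filter choice, and the threshold below is an arbitrary illustrative value.

```python
import numpy as np

def haar_analysis(x):
    """One level of orthogonal subband analysis: Haar low/high-pass filtering and
    downsampling by two (input length must be even)."""
    x = np.asarray(x, float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass subband
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass subband
    return approx, detail

def haar_synthesis(approx, detail):
    """Perfect-reconstruction inverse of haar_analysis."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

def compress(x, threshold):
    """Lossy compression step: discard small detail coefficients, then reconstruct."""
    a, d = haar_analysis(x)
    d = np.where(np.abs(d) < threshold, 0.0, d)   # most of the coding gain comes from here
    return haar_synthesis(a, d)

signal = np.array([4.0, 4.1, 3.9, 4.0, 9.0, 9.2, 1.0, 1.1])
print(compress(signal, threshold=0.2))
```

Choosing a different (for example longer, parameterized) orthogonal filter changes how well the detail subband concentrates the image energy, which is exactly the filter selection problem the thesis addresses.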
Abstract:
In this master's thesis the flow in the volute of a radial compressor is modelled numerically. The purpose of the volute in a radial compressor is to collect the flow evenly from the circumference of the diffuser. The volute has recently received more attention because it has been observed that the efficiency of the compressor can be improved by optimizing the volute. The operation of the volute is examined at three different mass flows. The operating principles of the volute are discussed at the beginning of the work. The FINFLO code developed at Helsinki University of Technology is used as the numerical solver. FINFLO solves the Navier-Stokes equations in a three-dimensional computational domain, and the discretization is based on the finite volume method. The operating principles of the solver are discussed in the work. Turbulence is modelled with the algebraic Baldwin-Lomax model and with the two-equation Chien k-epsilon model. The computed results are compared with measurements of the compressor volute in question carried out at Lappeenranta University of Technology. The results obtained with the different turbulence models and grid levels are also compared with each other. Four computer programs were written for post-processing the computed results. The aim of the computations is to gain further insight into the behaviour of the flow in the volute, especially in the region of the so-called tongue. The performance of the two turbulence models in the numerical modelling of the compressor is also investigated.
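For reference, the two-equation model mentioned above solves transport equations for the turbulent kinetic energy k and its dissipation rate ε. The standard high-Reynolds-number form is given below; Chien's low-Reynolds-number variant used in the thesis adds near-wall damping functions and modified source terms, which are omitted here:

```latex
% Standard high-Re k-epsilon model (Chien's low-Re modifications omitted)
\[
  \frac{\partial(\rho k)}{\partial t} + \nabla\!\cdot\!\left(\rho k\,\mathbf{U}\right)
  = \nabla\!\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)\nabla k\right] + P_k - \rho\varepsilon ,
\]
\[
  \frac{\partial(\rho\varepsilon)}{\partial t} + \nabla\!\cdot\!\left(\rho\varepsilon\,\mathbf{U}\right)
  = \nabla\!\cdot\!\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)\nabla\varepsilon\right]
    + \frac{\varepsilon}{k}\left(C_{\varepsilon 1} P_k - C_{\varepsilon 2}\,\rho\varepsilon\right),
  \qquad
  \mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon} .
\]
```

The algebraic Baldwin-Lomax model, by contrast, computes the eddy viscosity directly from local mean-flow quantities without solving any transport equations.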
Abstract:
As the development of integrated circuit technology continues to follow Moore's law, the complexity of circuits increases exponentially. Traditional hardware description languages such as VHDL and Verilog are no longer powerful enough to cope with this level of complexity and do not provide facilities for hardware/software codesign. Languages such as SystemC are intended to solve these problems by combining the powerful expression of high-level programming languages with the hardware-oriented facilities of hardware description languages. To fully replace older languages in the design flow of digital systems, SystemC should also be synthesizable. The devices required by modern high-speed networks often share the tight constraints of embedded systems, e.g. on size, power consumption and price, but also have very demanding real-time and quality-of-service requirements that are difficult to satisfy with general-purpose processors. Dedicated hardware blocks of an application-specific instruction set processor are one way to combine fast processing speed, energy efficiency, flexibility and relatively low time-to-market. Common features can be identified in the network processing domain, making it possible to develop specialized but configurable processor architectures. One such architecture is TACO, which is based on the transport triggered architecture. The architecture offers a high degree of parallelism and modularity and greatly simplified instruction decoding. For this M.Sc. (Tech.) thesis, a simulation environment for the TACO architecture was developed with SystemC 2.2, using an old version written with SystemC 1.0 as a starting point. The environment enables rapid design space exploration by providing facilities for hw/sw codesign and simulation and an extendable library of automatically configured reusable hardware blocks. Other topics that are covered are the differences between SystemC 1.0 and 2.2 from the viewpoint of hardware modeling, and the compilation of a SystemC model into synthesizable VHDL with the Celoxica Agility SystemC Compiler. A simulation model of a processor for TCP/IP packet validation was designed and tested as a test case for the environment.
Abstract:
In this master's thesis a four-stage 1 MWe steam turbine prototype was optimized with evolutionary algorithms, and the cost benefits obtained from the optimization were studied. The DE algorithm was used for the optimization. The optimization was made to work, but because of the nature of the calculation application used in it (models based on semi-empirical equations), its accuracy compared with a verification model computed with CFD was somewhat lower than hoped. This inaccuracy in the results could hardly have been avoided, since the problem stemmed from the initial assumptions behind the semi-empirical calculation models and from uncertainty about the absolute ranges of validity of the fitted correlations. For the optimization to succeed, such algebraic modelling was nevertheless necessary, because CFD computations, for example, could not possibly have been carried out at every optimization step. Problems still occurred during the optimization with the sufficiency of computing power and with finding a suitable penalty model that would keep the algorithm within the mathematically allowed region without restricting the progress of the optimization too much. The remaining problems were due to the novelty of the application and to difficulties in handling the ranges of validity of the correlations precisely. Although the accuracy of the optimization results did not quite meet the target, they nevertheless steered the machine design in a favourable direction. The optimization performed with the DE algorithm yielded about 2.2% more power from the turbine, which corresponds to a cost benefit of roughly EUR 15,000 per machine. This is a very significant per-machine cost benefit for the company. In the end it can probably be said that evolutionary algorithms were not at their best in the optimization of a prototype product. Evolutionary algorithms hold enormous potential for the optimization of technical devices, but they require a mature application that is either already known extremely well or is simple and can be computed without gaps.
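Assuming that the DE algorithm mentioned above refers to differential evolution, as is usual in the context of evolutionary algorithms, the sketch below shows the basic DE/rand/1/bin scheme with a greedy selection step. The toy objective, bounds and control parameters are placeholders and have nothing to do with the turbine models of the thesis; in practice the objective would return a penalised cost for designs outside the validity range of the semi-empirical correlations.

```python
import numpy as np

def differential_evolution(objective, bounds, pop_size=30, F=0.7, CR=0.9,
                           generations=200, seed=0):
    """Basic DE/rand/1/bin minimisation with box constraints."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    dim = len(bounds)
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, dim))
    cost = np.array([objective(x) for x in pop])

    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct members other than i
            idx = rng.choice([j for j in range(pop_size) if j != i], size=3, replace=False)
            a, b, c = pop[idx]
            mutant = np.clip(a + F * (b - c), bounds[:, 0], bounds[:, 1])
            # binomial crossover with at least one gene taken from the mutant
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            # greedy selection
            trial_cost = objective(trial)
            if trial_cost <= cost[i]:
                pop[i], cost[i] = trial, trial_cost

    best = int(np.argmin(cost))
    return pop[best], cost[best]

# Toy usage: a stand-in objective (sum of squared deviations from 1.5)
x_best, f_best = differential_evolution(lambda x: float(np.sum((x - 1.5) ** 2)),
                                        bounds=[(-5.0, 5.0)] * 4)
print(x_best, f_best)
```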
Abstract:
The topological solitons of two classical field theories, the Faddeev-Skyrme model and the Ginzburg-Landau model, are studied numerically and analytically in this work. The aim is to gain information on the existence and properties of these topological solitons, their structure and their behaviour under relaxation. First, the conditions and mechanisms leading to the possibility of topological solitons are explored from the field-theoretical point of view. This leads one to consider continuous deformations of the solutions of the equations of motion. The results of algebraic topology necessary for the systematic treatment of such deformations are reviewed, and methods of determining the homotopy classes of topological solitons are presented. The Faddeev-Skyrme and Ginzburg-Landau models are presented, some earlier results are reviewed, and the numerical methods used in this work are described. The topological solitons of the Faddeev-Skyrme model, Hopfions, are found to follow the same mechanisms of relaxation in three different domains with three different topological classifications. For two of the domains, the necessary but unusual topological classification is presented. Finite-size topological solitons are not found in the Ginzburg-Landau model, and a scaling argument is used to suggest that there are indeed none unless a certain modification to the model, due to R. S. Ward, is made. In that case, the Hopfions of the Faddeev-Skyrme model are seen to be present for some parameter values. A boundary in the parameter space separating the region where the Hopfions exist from the region where they do not is found, and the behaviour of the Hopfion energy on this boundary is studied.
Abstract:
Broadcasting systems are networks where the transmission is received by several terminals. Generally, broadcast receivers are passive devices in the network, meaning that they do not interact with the transmitter. Providing a certain Quality of Service (QoS) for the receivers in a heterogeneous reception environment with no feedback is not an easy task. Forward error control coding can be used for protection against transmission errors to enhance the QoS of broadcast services. For good performance in terrestrial wireless networks, diversity should be utilized; it is exploited by applying interleaving together with the forward error correction codes. In this dissertation the design and analysis of forward error control and control signalling for providing QoS in wireless broadcasting systems are studied. Control signalling is used in broadcasting networks to give the receiver the necessary information on how to connect to the network itself and how to receive the services that are being transmitted. Usually control signalling is considered to be transmitted through a dedicated path in the system. Therefore, the relationship between the signalling and service data paths should be considered early in the design phase. Modelling and simulations are used in the case studies of this dissertation to study this relationship. This dissertation begins with a survey of the broadcasting environment and the mechanisms for providing QoS therein. Case studies then present the analysis and design of such mechanisms in real systems. The mechanisms for providing QoS, considering the signalling and service data paths and their relationship at the DVB-H link layer, are analyzed as the first case study. In particular, the performance of different service data decoding mechanisms and the optimal selection of signalling transmission parameters are presented. The second case study investigates the design of the signalling and service data paths for the more modern DVB-T2 physical layer. Furthermore, by comparing the performance of the signalling and service data paths in simulations, configuration guidelines for the DVB-T2 physical layer signalling are given. The presented guidelines can prove useful when configuring DVB-T2 transmission networks. Finally, recommendations for the design of data and signalling paths are given based on findings from the case studies. The requirements for the signalling design should be derived from the requirements of the main services. Generally, the requirements for signalling should be more demanding, as the signalling is the enabler for service reception.
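As a small illustration of the interleaving-plus-FEC idea mentioned above, the sketch below shows a plain row-column block interleaver; the block dimensions are arbitrary and do not correspond to actual DVB-H or DVB-T2 parameters.

```python
import numpy as np

def block_interleave(symbols, rows, cols):
    """Write coded symbols row by row into a rows x cols block, read column by column.
    A burst of up to `rows` consecutive channel errors then hits at most one symbol
    per row, so a row-wise FEC codeword sees the errors spread out."""
    block = np.reshape(symbols[:rows * cols], (rows, cols))
    return block.T.reshape(-1)

def block_deinterleave(symbols, rows, cols):
    """Inverse operation performed at the receiver."""
    block = np.reshape(symbols[:rows * cols], (cols, rows))
    return block.T.reshape(-1)

data = np.arange(12)                               # 12 coded symbols
tx = block_interleave(data, rows=3, cols=4)
rx = block_deinterleave(tx, rows=3, cols=4)
assert np.array_equal(rx, data)                    # perfect de-interleaving
print(tx)
```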
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits there exist numerical problems due to the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time. Unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems cause noise in the results, which in many cases leads the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for solving stiff systems. The second is to decrease the stiffness of the model itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up. These are the critical areas for which alternative methods of modelling and numerical simulation are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent region. The flow becomes laminar as the pressure drop over the orifice approaches zero only in rare situations, e.g. when a valve is closed, when an actuator is driven against an end stopper, or when an external force makes the actuator switch its direction during operation. This means that in terms of accuracy the description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur when the pressure drop comes close to zero, since the first derivative of the flow with respect to the pressure drop approaches infinity as the pressure drop approaches zero. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed that uses a cubic spline function to describe the flow in the laminar and transition areas. Parameters for the cubic spline function are selected such that its first derivative is equal to the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits a trade-off exists between accuracy and calculation speed; this trade-off is investigated for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there exist very small volumes.
The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation. Particularly in real-time simulation, these numerical problems are a great weakness. The system stiffness approaches infinity as the fluid volume approaches zero. If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure created in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine applied. An advantage of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled. In this modelling method, most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and presents several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
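The sketch below illustrates the two-regime orifice model described above: the turbulent square-root law is used beyond a transition pressure difference, and an odd cubic through the origin is used below it, with the value and the first derivative matched at the boundary so that the derivative stays finite at zero pressure drop. The coefficient values and the transition pressure are illustrative, and the exact spline construction in the thesis may differ.

```python
import numpy as np

def make_orifice_flow(Cq, A, rho, dp_tr):
    """Two-regime orifice flow Q(dp): turbulent square-root law for |dp| >= dp_tr,
    an odd cubic through the origin below it, with value and first derivative
    matched at dp_tr so that dQ/d(dp) stays finite as dp approaches zero."""
    K = Cq * A * np.sqrt(2.0 / rho)           # turbulent flow coefficient
    a = 5.0 * K / (4.0 * np.sqrt(dp_tr))      # linear coefficient of the cubic
    b = -K / (4.0 * dp_tr ** 2.5)             # cubic coefficient

    def flow(dp):
        dp = np.asarray(dp, float)
        turbulent = np.sign(dp) * K * np.sqrt(np.abs(dp))
        smooth = a * dp + b * dp ** 3
        return np.where(np.abs(dp) >= dp_tr, turbulent, smooth)

    return flow

# Illustrative parameters (not taken from the thesis)
flow = make_orifice_flow(Cq=0.6, A=1.0e-5, rho=860.0, dp_tr=2.0e5)
for dp in (0.0, 5.0e4, 2.0e5, 1.0e6):
    print(dp, float(flow(dp)))
```

A quick check is that flow(dp_tr) equals Cq*A*sqrt(2*dp_tr/rho) from both branches, so the model is continuous where the two regimes meet.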
Abstract:
Book review