27 results for Iron Metallurgy Mathematical models
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, i.e. relations which also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision making methods and in operational research. This thesis aims at presenting some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector given a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains the proof that twenty methods already proposed in the literature lead to unsatisfactory results because they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. The thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth, and last, topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
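As an illustration of the weight-estimation problem described above, the sketch below derives a priority vector from a multiplicative pairwise comparison matrix using the common geometric-mean (logarithmic least squares) method; the thesis proposes its own algorithm, which is not reproduced here, so the function and variable names are illustrative only.

```python
import numpy as np

def geometric_mean_priorities(A):
    """Estimate a priority vector from a multiplicative pairwise comparison
    matrix A (A[i][j] ~ w_i / w_j) via row geometric means.
    This is one standard method, not the algorithm proposed in the thesis."""
    A = np.asarray(A, dtype=float)
    g = np.prod(A, axis=1) ** (1.0 / A.shape[1])  # row geometric means
    return g / g.sum()                            # normalize to sum to 1

# Example: a 3x3 reciprocal comparison matrix
A = [[1.0, 2.0, 4.0],
     [0.5, 1.0, 2.0],
     [0.25, 0.5, 1.0]]
print(geometric_mean_priorities(A))  # ~[0.571, 0.286, 0.143]
```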
Abstract:
Malaria continues to infect millions and kill hundreds of thousands of people worldwide each year, despite over a century of research and attempts to control and eliminate this infectious disease. Challenges such as the development and spread of drug-resistant malaria parasites, insecticide resistance in mosquitoes, climate change, the presence of individuals with subpatent malaria infections, which are normally asymptomatic, and behavioral plasticity in the mosquito hinder the prospects of malaria control and elimination. In this thesis, mathematical models of malaria transmission and control are developed that address the role of drug resistance, immunity, iron supplementation and anemia, immigration and visitation, and the presence of asymptomatic carriers in malaria transmission. A within-host mathematical model of severe Plasmodium falciparum malaria is also developed. First, a deterministic mathematical model for the transmission of antimalarial drug-resistant parasites with superinfection is developed and analyzed. The possibility of an increased risk of superinfection due to iron supplementation and fortification in malaria-endemic areas is discussed. The model results call upon stakeholders to weigh the pros and cons of iron supplementation for individuals living in malaria-endemic regions. Second, a deterministic model of the transmission of drug-resistant malaria parasites, including the inflow of infective immigrants, is presented and analyzed. Optimal control theory is applied to this model to study the impact of various malaria and vector control strategies, such as screening of immigrants, treatment of drug-sensitive infections, treatment of drug-resistant infections, and the use of insecticide-treated bed nets and indoor spraying of mosquitoes. The results of the model emphasize the importance of using a combination of all four control tools for effective malaria intervention. Next, a two-age-class mathematical model for malaria transmission with asymptomatic carriers is developed and analyzed. In the development of this model, four possible control measures are analyzed: the use of long-lasting treated mosquito nets, indoor residual spraying, screening and treatment of symptomatic individuals, and screening and treatment of asymptomatic individuals. The numerical results show that a disease-free equilibrium can be attained if all four control measures are used. A common pitfall for most epidemiological models is the absence of real data; model-based conclusions have to be drawn from uncertain parameter values. In this thesis, an approach to studying the robustness of optimal control solutions under such parameter uncertainty is presented. Numerical analysis of the optimal control problem in the presence of parameter uncertainty demonstrates the robustness of the optimal control approach: when a comprehensive control strategy is used, the main conclusions of the optimal control remain unchanged, even if inevitable variability remains in the control profiles. The results provide a promising framework for the design of cost-effective strategies for disease control with multiple interventions, even under considerable uncertainty of model parameters. Finally, a separate work modeling the within-host Plasmodium falciparum infection in humans is presented. The developed model allows re-infection of already-infected red blood cells. The model hypothesizes that in severe malaria, due to the parasite's quest for survival and rapid multiplication, Plasmodium falciparum can be absorbed into already-infected red blood cells, which accelerates the rupture rate and consequently causes anemia. Analysis of the model and parameter identifiability using Markov chain Monte Carlo methods is presented.
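For orientation, a minimal Ross-Macdonald-type transmission model of the kind such deterministic formulations extend is sketched below; the thesis's own models (with drug resistance, superinfection, age classes and controls) are considerably richer, and the symbols here are generic illustrations.

```latex
\begin{align}
\frac{dI_h}{dt} &= a\,b\,m\, I_m\,(1 - I_h) - r\, I_h, \\
\frac{dI_m}{dt} &= a\,c\, I_h\,(1 - I_m) - \mu\, I_m,
\end{align}
```

where I_h and I_m are the infected fractions of humans and mosquitoes, a the mosquito biting rate, b and c the transmission probabilities per bite, m the mosquito-to-human ratio, r the human recovery rate and mu the mosquito death rate.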
Abstract:
One of the most central tasks in the statistical analysis of mathematical models is the estimation of the unknown parameters of the models. This master's thesis is concerned with the distributions of the unknown parameters and with numerical methods suitable for constructing them, especially in cases where the model is nonlinear with respect to the parameters. Among the various numerical methods, the main emphasis is on Markov chain Monte Carlo (MCMC) methods. These computationally intensive methods have recently gained popularity, mainly because of increased computing power. The theory of both Markov chains and Monte Carlo simulation is presented to the extent needed to justify why the methods work. Of the recently developed methods, adaptive MCMC methods are examined in particular. The approach of the thesis is practical, and various issues related to the implementation of MCMC methods are emphasized. In the empirical part of the thesis, the distributions of the unknown parameters of five example models are examined using the methods presented in the theoretical part. The models describe chemical reactions and are expressed as systems of ordinary differential equations. The models were collected from chemists at Lappeenranta University of Technology and Åbo Akademi University, Turku.
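A minimal random-walk Metropolis-Hastings sampler of the kind discussed above is sketched below; the target density, proposal width and parameter names are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: sample from a posterior given only
    its unnormalized log-density log_post. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_samples, theta.size))
    logp = log_post(theta)
    for i in range(n_samples):
        proposal = theta + step * rng.standard_normal(theta.size)
        logp_prop = log_post(proposal)
        if np.log(rng.random()) < logp_prop - logp:  # accept/reject step
            theta, logp = proposal, logp_prop
        chain[i] = theta
    return chain

# Example: sample a 2-parameter Gaussian posterior (illustrative target)
chain = metropolis_hastings(lambda t: -0.5 * np.sum((t - [1.0, -2.0]) ** 2),
                            theta0=[0.0, 0.0])
print(chain.mean(axis=0))  # close to [1, -2]
```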
Abstract:
The aim of this work is to compare two families of mathematical models with respect to their capability to capture the statistical properties of real electricity spot market time series. The first model family consists of ARMA-GARCH models and the second of mean-reverting Ornstein-Uhlenbeck models. Both models have been applied to two price series of the Nordic Nord Pool spot market for electricity, namely the System prices and the DenmarkW prices. The parameters of both models were calibrated from the real time series. After carrying out simulations with optimal models from both families, we conclude that neither ARMA-GARCH models nor conventional mean-reverting Ornstein-Uhlenbeck models, even when calibrated optimally with real electricity spot market price or return series, capture the statistical characteristics of the real series. In the case of less spiky behavior (System prices), however, the mean-reverting Ornstein-Uhlenbeck model can be considered to have partially succeeded in this task.
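For reference, the two model families compared above are typically written in the following generic textbook forms; the exact specifications calibrated in the work may differ.

```latex
% Mean-reverting Ornstein-Uhlenbeck process for the (log) price X_t
dX_t = \kappa\,(\mu - X_t)\,dt + \sigma\,dW_t

% ARMA(1,1) return equation with GARCH(1,1) conditional variance
r_t = \phi\, r_{t-1} + \varepsilon_t + \theta\, \varepsilon_{t-1},
\qquad \varepsilon_t = \sigma_t z_t,
\qquad \sigma_t^2 = \omega + \alpha\, \varepsilon_{t-1}^2 + \beta\, \sigma_{t-1}^2
```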
Abstract:
Linguistic modelling is a rather new branch of mathematics that is still undergoing rapid development. It is closely related to fuzzy set theory and fuzzy logic, but knowledge and experience from other fields of mathematics, as well as other fields of science including linguistics and the behavioral sciences, is also necessary to build appropriate mathematical models. This topic has received considerable attention as it provides tools for the mathematical representation of the most common means of human communication - natural language. Adding a natural language level to mathematical models can provide an interface between the mathematical representation of the modelled system and the user of the model - one that is sufficiently easy to use and understand, yet conveys all the information necessary to avoid misinterpretations. It is, however, not a trivial task, and the link between the linguistic and computational level of such models has to be established and maintained properly during the whole modelling process. In this thesis, we focus on the relationship between the linguistic and the mathematical level of decision support models. We discuss several important issues concerning the mathematical representation of the meaning of linguistic expressions, their transformation into the language of mathematics and the retranslation of mathematical outputs back into natural language. In the first part of the thesis, our view of linguistic modelling for decision support is presented and the main guidelines for building linguistic models for real-life decision support, which form the basis of our modelling methodology, are outlined. From the theoretical point of view, the issues of representation of the meaning of linguistic terms, computations with these representations and the retranslation process back into the linguistic level (linguistic approximation) are studied in this part of the thesis. We focus on the reasonability of operations with the meanings of linguistic terms, the correspondence of the linguistic and mathematical levels of the models and the proper presentation of appropriate outputs. We also discuss several issues concerning the ethical aspects of decision support - particularly the loss of meaning due to the transformation of mathematical outputs into natural language and the issue of responsibility for the final decisions. In the second part, several case studies of real-life problems are presented. These provide background and the necessary context and motivation for the mathematical results and models presented in this part. A linguistic decision support model for disaster management is presented here, formulated as a fuzzy linear programming problem, and a heuristic solution to it is proposed. Uncertainty of outputs, expert knowledge concerning disaster response practice and the necessity of obtaining outputs that are easy to interpret (and available in very short time) are reflected in the design of the model. Saaty's analytic hierarchy process (AHP) is considered in two case studies - first in the context of the evaluation of works of art, where a weak consistency condition is introduced and an adaptation of AHP for large matrices of preference intensities is presented. The second AHP case study deals with the fuzzified version of AHP and its use for evaluation purposes - particularly the integration of peer review into the evaluation of R&D outputs is considered. In the context of HR management, we present a fuzzy rule-based evaluation model (academic faculty evaluation is considered) constructed to provide outputs that do not require linguistic approximation and are easily transformed into graphical information. This is achieved by designing a specific form of fuzzy inference. Finally, the last case study is from the area of the humanities: psychological diagnostics is considered, and a linguistic fuzzy model for the interpretation of outputs of multidimensional questionnaires is suggested. The issue of the quality of data in mathematical classification models is also studied here. A modification of the receiver operating characteristic (ROC) method is presented to reflect the variable quality of data instances in the validation set during classifier performance assessment. Twelve publications on which the author participated are appended as the third part of this thesis. These summarize the mathematical results and provide a closer insight into the issues of the practical applications considered in the second part of the thesis.
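To make the linguistic-approximation step mentioned above concrete, the sketch below maps a numerical output back to the closest of a few linguistic terms described by triangular fuzzy sets; the term set, membership functions and matching rule are illustrative assumptions, not those used in the thesis.

```python
def triangular(x, a, b, c):
    """Membership degree of x in a triangular fuzzy set with peak at b."""
    if x == b:
        return 1.0
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative linguistic term set on a 0..1 evaluation scale
TERMS = {"poor": (0.0, 0.0, 0.5), "average": (0.0, 0.5, 1.0), "good": (0.5, 1.0, 1.0)}

def linguistic_approximation(x):
    """Return the linguistic term whose fuzzy set fits the crisp output best."""
    return max(TERMS, key=lambda term: triangular(x, *TERMS[term]))

print(linguistic_approximation(0.85))  # -> 'good'
```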
Abstract:
The blast furnace is the world's main ironmaking production unit: it converts iron ore, together with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect the state of the furnace. However, due to the high temperatures and pressure, the hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created for simulation of the distribution of the burden material with a bell-less top charging system. The model developed is fast and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified by findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which utilized the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace. This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and the voidage of mixed layers was estimated. The mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used.
The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
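The abstract notes that a genetic algorithm was used to search for charging programs meeting a target gas temperature distribution; a generic GA skeleton of that kind is sketched below, with a placeholder objective and parameter encoding that are purely illustrative (the actual charging-parameter encoding and the burden/gas models are in the thesis).

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=40, generations=100,
                      mutation_rate=0.1, seed=1):
    """Minimal real-coded genetic algorithm (truncation selection,
    uniform crossover, Gaussian mutation). Illustrative skeleton only."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)            # lower fitness = better
        parents = scored[:pop_size // 2]             # keep the better half
        children = []
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = [ai if rng.random() < 0.5 else bi for ai, bi in zip(a, b)]
            child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1)))
                     if rng.random() < mutation_rate else g for g in child]
            children.append(child)
        pop = children
    return min(pop, key=fitness)

# Placeholder objective: squared deviation of a hypothetical profile
# (here simply the gene values) from a target profile.
target = [0.2, 0.5, 0.8, 0.5]
best = genetic_algorithm(lambda x: sum((xi - ti) ** 2 for xi, ti in zip(x, target)),
                         n_genes=4)
print([round(g, 2) for g in best])
```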
Abstract:
The topical classification of web portals can be used to identify a user's interests by collecting statistical data on his or her browsing habits in different categories. This master's thesis deals with the areas of web applications in which the collected statistical data can be exploited for personalization. The general principles of content personalization, Internet advertising and information retrieval are explained using mathematical models. In addition, the thesis describes the general characteristics of web portals and the issues related to the collection of statistical data.
Abstract:
The aim of this work was to study distillation column dynamics and dynamic modeling by means of simulations. A dynamic simulation model was used to investigate the behavior of a pentane separation column in abnormal and disturbance situations. In addition, the aim was to assess the suitability of the simulation software used in the work for the dynamic simulation of distillation. The literature part of the work dealt with the modeling of distillation column dynamics using mathematical models and with building a distillation column model in simulation software. The literature part also presented distillation column disturbance situations and the relief discharge cases they cause, and reviewed the basics of sizing the safety valves of a distillation column. In the applied part of the work, a dynamic simulation model of the distillation column was built with the Aspen HYSYS Dynamics and PROSimulator simulation software. The models were used to examine the effect of various disturbances and abnormal situations on the behavior of the column and on relief cases. The work also assessed the suitability of the software for the dynamic simulation of distillation. Based on the results obtained, it can be stated that dynamic simulation provides useful information on the operation of a distillation column in disturbance and abnormal situations. However, successful dynamic simulation and reliable results require knowledge of the process under study and command of the software. Shortcomings were found in the usability of the Aspen HYSYS Dynamics simulation software used in the work, and the software still requires development. Despite minor shortcomings, the PROSimulator simulation software used in the work was well suited to studying distillation column disturbance situations.
Abstract:
This thesis presents a topological approach to studying fuzzy sets by means of modifier operators. Modifier operators are mathematical models, e.g., for hedges, and we briefly present different approaches to studying modifier operators. We are interested in compositional modifier operators, modifiers for short, and these modifiers depend on binary relations. We show that if a modifier depends on a reflexive and transitive binary relation on U, then there exists a unique topology on U such that this modifier is the closure operator in that topology. Also, if U is finite, then there exists a lattice isomorphism between the class of all reflexive and transitive relations and the class of all topologies on U. We define a topological similarity relation "≈" between L-fuzzy sets in a universe U, and show that the class L^U/≈ is isomorphic with the class of all topologies on U, if U is finite and L is suitable. We consider finite bitopological spaces as approximation spaces, and we show that lower and upper approximations can be computed by means of α-level sets also in the case of equivalence relations. This means that approximations in the sense of Rough Set Theory can be computed by means of α-level sets. Finally, we present an application to data analysis: we study an approach to detecting dependencies of attributes in database-like systems, called information systems.
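As a concrete illustration of the rough-set approximations mentioned above, the sketch below computes lower and upper approximations of a set from an equivalence relation given as a partition; it shows only the standard crisp construction, not the α-level-set computation developed in the thesis.

```python
def rough_approximations(partition, target):
    """Lower and upper approximations of `target` w.r.t. an equivalence
    relation given as a partition of the universe (standard Rough Set Theory)."""
    target = set(target)
    lower, upper = set(), set()
    for block in partition:
        block = set(block)
        if block <= target:        # block entirely inside the target set
            lower |= block
        if block & target:         # block intersects the target set
            upper |= block
    return lower, upper

# Universe {1..6} partitioned by an equivalence relation; target set {1, 2, 3}
partition = [{1, 2}, {3, 4}, {5, 6}]
print(rough_approximations(partition, {1, 2, 3}))
# -> ({1, 2}, {1, 2, 3, 4})
```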
Abstract:
The dynamic behavior of both isothermal and non-isothermal single-column chromatographic reactors with an ion-exchange resin as the stationary phase was investigated. The reactor performance was interpreted by using results obtained when studying the effect of the resin properties on the equilibrium and kinetic phenomena occurring simultaneously in the reactor. Mathematical models were derived for each phenomenon and combined to simulate the chromatographic reactor. The phenomena studied include phase equilibria in multicomponent liquid mixture–ion-exchange resin systems, chemical equilibrium in the presence of a resin catalyst, diffusion of liquids in gel-type and macroporous resins, and chemical reaction kinetics. Above all, attention was paid to the swelling behavior of the resins and how it affects the kinetic phenomena. Several poly(styrene-co-divinylbenzene) resins with different cross-link densities and internal porosities were used. Esterification of acetic acid with ethanol to produce ethyl acetate and water was used as a model reaction system. Choosing an ion-exchange resin with a low cross-link density is beneficial in the case of the present reaction system: the amount of ethyl acetate as well as the ethyl acetate to water mole ratio in the effluent stream increase with decreasing cross-link density. The enhanced performance of the reactor is mainly attributed to the increasing reaction rate, which in turn originates from the phase equilibrium behavior of the system. Mass transfer considerations also favor the use of resins with low cross-link density. The diffusion coefficients of liquids in the gel-type ion-exchange resins were found to fall rapidly when the extent of swelling became low. Glass transition of the polymer was not found to significantly retard the diffusion in sulfonated PS-DVB ion-exchange resins. It was also shown that non-isothermal operation of a chromatographic reactor could be used to significantly enhance the reactor performance. In the case of the exothermic model reaction system and a near-adiabatic column, a positive thermal wave (higher temperature than in the initial state) was found to travel together with the reactive front. This further increased the conversion of the reactants. Diffusion-induced volume changes of the ion-exchange resins were studied in a flow-through cell. It was shown that describing the swelling and shrinking kinetics of the particles calls for a mass transfer model that explicitly includes the limited expansibility of the polymer network. A good description of the process was obtained by combining the generalized Maxwell-Stefan approach and an activity model derived from the thermodynamics of polymer solutions and gels. The swelling pressure in the resin phase was evaluated by using a non-Gaussian expression for the polymer chain length distribution. Dimensional changes of the resin particles necessitate the use of non-standard mathematical tools for dynamic simulations. A transformed coordinate system, in which the mass of the polymer was used as the spatial variable, was applied when simulating the chromatographic reactor columns as well as the swelling and shrinking kinetics of the resin particles. Shrinking of the particles in a column leads to the formation of dead volume on top of the resin bed. In ordinary Eulerian coordinates, this results in a moving discontinuity that in turn causes numerical difficulties in the solution of the PDE system. The motion of the discontinuity was eliminated by spanning two calculation grids in the column that overlapped at the top of the resin bed. The reactive and non-reactive phase equilibrium data were correlated with a model derived from the thermodynamics of polymer solutions and gels. The thermodynamic approach used in this work is best suited to high degrees of swelling, because the polymer matrix may be in the glassy state when the extent of swelling is low.
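For reference, the model reaction system mentioned above is the acid-catalyzed esterification, which can be written together with a generic reversible, activity-based rate expression (the exact kinetic formulation used in the thesis is not reproduced here):

```latex
\mathrm{CH_3COOH + C_2H_5OH \;\rightleftharpoons\; CH_3COOC_2H_5 + H_2O}

r = k_f \left( a_{\mathrm{HAc}}\, a_{\mathrm{EtOH}}
      - \frac{a_{\mathrm{EtAc}}\, a_{\mathrm{H_2O}}}{K_{\mathrm{eq}}} \right)
```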
Abstract:
This master's thesis examines the structures of the IGBT and the power diode, heat generation in these components, and the methods by which the temperatures of the components can be determined. A measurement system is designed and built with which the temperatures of the IGBT and the power diode can be determined both by direct measurement and with the help of mathematical models. The measurement system consists of a DC chopper circuit and measurements of the load current, the DC-link voltage and the temperature. Thermocouples attached to the surfaces of the components were used for the temperature measurements. For the mathematical models, measurements of the DC-link voltage and the load current were added to the measurement system. Control of the equipment and storage of the measurement results were implemented with dSPACE hardware. The functionality of the measurement system was tested with measurements carried out in the Control Engineering laboratory of Lappeenranta University of Technology.
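As background for the model-based temperature determination described above, a commonly used steady-state relation estimates the junction temperature from the power loss (computed from the measured voltage and current) and the junction-to-case thermal resistance; the thesis's actual models may be more detailed, and the symbols here are generic:

```latex
P_{\mathrm{loss}} \approx \frac{1}{T}\int_0^T v(t)\, i(t)\, dt,
\qquad
T_j \approx T_{\mathrm{case}} + R_{\mathrm{th(j\text{-}c)}}\, P_{\mathrm{loss}}
```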
Abstract:
This work examines how the activated sludge process used for the biological treatment of wastewater can be described by means of mathematical modeling. Wastewater treatment is an old invention, and the activated sludge process itself was first taken into pilot use in 1914. The mathematical modeling of wastewater treatment plants has also long been a known technique, and the first dynamic models were developed in the 1950s. The first part of the work reviews the mathematical modeling of wastewater treatment plants on the basis of literature sources. The focus of the review is on different mathematical models and on the development of modeling. Alongside modeling, attention is paid to the activated sludge process and the factors affecting it. Of the factors affecting modeling, particular attention is paid to aeration, bacterial growth and clarification, and to their effects on the process. After the review of mathematical modeling, the work considers the possibilities of exploiting CFD modeling in describing activated sludge processes. The modeling section examines the structure and content of the Activated Sludge Model No. 3 (ASM 3) and the effects of its various factors on the model. This part of the work also examines oxygen transfer from air bubbles to water during aeration, and clarification as part of the activated sludge process. This section also goes through all the equations essential to the process, for example the reaction rate and mass balance equations.
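As an example of the kind of rate and mass balance equations referred to above, a Monod-type growth rate and the standard oxygen transfer term used in activated sludge models can be written as follows (generic textbook forms, not reproduced from the thesis):

```latex
\mu = \mu_{\max}\,\frac{S}{K_S + S},
\qquad
\frac{dS_O}{dt} = k_L a\,\bigl(S_{O,\mathrm{sat}} - S_O\bigr) - \mathrm{OUR}
```

Here mu is the specific growth rate, S the substrate concentration, S_O the dissolved oxygen concentration, k_L a the volumetric oxygen transfer coefficient and OUR the oxygen uptake rate.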
Abstract:
In recent years, imaging techniques have developed considerably. These new techniques make it possible to examine various processes in more detail than before. The theories of pressure screening are mainly based on mathematical models derived from macro-level measurements, and the screening event itself has not previously been observed in much detail. The aim of this work was therefore to investigate the screening event with the help of a new imaging technique. The literature part of the work briefly introduced the pressure screen and its operation, and the basic theories of pressure screening were also briefly reviewed. The main emphasis of the theoretical part was on the screening event and the factors affecting it. In addition, the theoretical part presented the current state of industrial pressure screening, based on a preliminary survey carried out for the work. In the screen simulator trial runs, the behavior of fibers and debris particles in the pressure screen was investigated by means of imaging. The imaging focused on the events in the slot of the slotted screen and on the events at the screen surface. From the image material, the velocities of fibers and debris were calculated in the slot of the wire screen profile and on the screen surface. In addition, the effects of peripheral speed, production rate and consistency on the velocities were studied. The image material was also used to observe possible backflow of fibers. The imaging provided new information on the movement of fibers and debris particles in the pressure screen. Among other things, the results showed that a considerable amount of pulp passes to the accept at the point of the smallest clearance of the rotor element. This is explained by the fact that the point of the smallest clearance produces a considerable positive pressure pulse. The velocities of the fibers and debris particles in the slot otherwise also followed the pressure pulse produced by the element very clearly. In addition, it was observed from the image material that a fiber floc the height of the profile accumulates at the mouth of the screen slot. A fiber mat covering the entire screen was not observed.
Abstract:
In mathematical modeling, the estimation of the model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics, all the unknown quantities are presented as probability distributions. If there is prior knowledge about the parameters, it can be formulated as a prior distribution. Bayes' rule combines the prior and the measurements into the posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis, the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms as well as Gibbs sampling are introduced. Different ways to specify prior distributions are also presented. The main issue is the estimation of the measurement error and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. As examples, the hyperprior models are applied to the estimation of model parameters and error in an outlier case.
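A minimal sketch of the hierarchical setting described above, assuming a Gaussian measurement error model with an unknown variance given a conjugate inverse-gamma hyperprior (the specific models used in the thesis are not reproduced here):

```latex
p(\theta, \sigma^2 \mid y) \;\propto\; p(y \mid \theta, \sigma^2)\, p(\theta)\, p(\sigma^2),
\qquad
y_i \sim \mathcal{N}\!\bigl(f(x_i;\theta), \sigma^2\bigr),
\qquad
\sigma^2 \sim \mathrm{Inv\text{-}Gamma}(\alpha_0, \beta_0)
```

With this choice, the conditional posterior of the variance given the parameters is again inverse-gamma, which is why variance sampling combines naturally with Gibbs steps inside an MH or AM sampler.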