27 results for Bayesian shared component model
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The purpose of this research is to draw up a clear construction of an anticipatory communicative decision-making process and a successful implementation of a Bayesian application that can be used as an anticipatory communicative decision-making support system. This study is a decision-oriented and constructive research project, and it includes examples of simulated situations. As a basis for further methodological discussion about different approaches to management research, a decision-oriented approach is used here; it is based on mathematics and logic, and it is intended to develop problem-solving methods. The approach is theoretical and characteristic of normative management science research. The approach of this study is also constructive. An essential part of the constructive approach is to tie the problem to its solution with theoretical knowledge. Firstly, the basic definitions and behaviours of anticipatory management and managerial communication are provided. These descriptions include discussions of the research environment and the formed management processes. These issues define and explain the background to further research. Secondly, the discussion proceeds to managerial communication and anticipatory decision-making based on preparation, problem solution, and solution search, which are also related to risk management analysis. After that, a solution to the decision-making support application is formed using four different Bayesian methods: the Bayesian network, the influence diagram, the qualitative probabilistic network, and the time-critical dynamic network. The purpose of the discussion is not to compare different theories but to explain the theories that are being implemented. Finally, an application of Bayesian networks to the research problem is presented. The usefulness of the prepared model in examining a problem is shown, and the results of the research are presented.
The theoretical contribution includes definitions and a model of anticipatory decision-making. The main theoretical contribution of this study has been to develop a process for anticipatory decision-making that includes management with communication, problem-solving, and the improvement of knowledge. The practical contribution includes a Bayesian Decision Support Model, which is based on Bayesian influence diagrams. The main contributions of this research are two developed processes: one for anticipatory decision-making, and the other to produce a model of a Bayesian network for anticipatory decision-making. In summary, this research contributes to decision-making support by being one of the few publicly available academic descriptions of an anticipatory decision support system, by presenting a Bayesian model that is grounded in firm theoretical discussion, by publishing algorithms suitable for decision-making support, and by defining the idea of anticipatory decision-making for a parallel version. Finally, based on the results of the research, an analysis of anticipatory management for planned decision-making is presented, which is based on observation of the environment, analysis of weak signals, and alternatives for creative problem solving and communication.
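The decision logic such a Bayesian support system automates can be illustrated with a toy model (this is not the thesis's application; all states, probabilities, and utilities below are invented for illustration): a belief about an uncertain state is updated with Bayes' rule from an observed signal, and the action with the highest expected utility is chosen, as an influence diagram would prescribe.

```python
# Minimal sketch of one anticipatory decision step: a binary uncertain state,
# a noisy warning signal, and a choice between acting early and waiting.
# All numbers are illustrative assumptions, not values from the thesis.

def posterior(prior, likelihood_given_state, signal):
    """Bayes' rule for a discrete state given one observed signal."""
    num = {s: prior[s] * likelihood_given_state[s][signal] for s in prior}
    z = sum(num.values())
    return {s: p / z for s, p in num.items()}

prior = {"threat": 0.1, "no_threat": 0.9}
likelihood = {
    "threat":    {"warning": 0.8, "quiet": 0.2},
    "no_threat": {"warning": 0.3, "quiet": 0.7},
}
utility = {  # utility of each (decision, state) pair
    ("act",  "threat"): -10, ("act",  "no_threat"): -10,
    ("wait", "threat"): -100, ("wait", "no_threat"): 0,
}

post = posterior(prior, likelihood, "warning")
best = max(["act", "wait"],
           key=lambda d: sum(post[s] * utility[(d, s)] for s in post))
print(round(post["threat"], 3), best)
```

After a warning, the posterior probability of a threat rises enough that the modest fixed cost of acting beats the expected cost of waiting, which is the essence of anticipatory decision-making.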
Abstract:
The purpose of this thesis is to explain the attitudinal and psychological factors that influence the customer behavior of customer-owners in the presence of economic incentives. Customer behavior is divided in the thesis into three forms: word-of-mouth behavior, relative concentration of purchases, and switching intentions. The different aspects of customer behavior are explained through the concepts of organizational identification, the three components of commitment, organizational image, and psychological ownership. The formation mechanisms of these concepts in the customer context are also examined. The thesis is a quantitative study in which the collected survey data are analyzed using path analysis to uncover the relationships and effects between the concepts. The results show that, in addition to economic incentives, several psychological states experienced by customer-owners influence the elements of customer behavior. The study also found psychological ownership and organizational identification to be relevant concepts when studying the customer behavior of members of a cooperative enterprise, even though they have previously received little attention in such contexts. The formation mechanisms of the concepts were found to largely follow the views presented in the literature.
Abstract:
Remote production of business-support services represents a new form of internationalization. The rise of emerging markets, combined with the internationalization of companies' value chain activities, has created growing pressure on firms to seek the best location for their operations. Multinational companies have increasingly replaced local HR services by moving to a global model of shared service production. This Master's thesis was carried out to support UPM's HR function in establishing a global service center in Poland. The aim of the study is to broaden the understanding of the factors and motives that led to the renewal of the HR service delivery model. The main objective of the empirical study is to support the transfer of administrative recruitment tasks to the global service center while keeping service quality at least at its previous level. The results of the study emphasize the strategic perspective on the change. The strategic reasons for establishing UPM's global HR service center include reducing overcapacity and overlapping functions across countries. The change increases service flexibility and promotes operational transparency, predictability, and cost control. Successfully implemented shared services can serve as a good starting point for producing efficient HR services.
Abstract:
The aim of this study is to examine the process of establishing a global financial shared service center and the benefits associated with the shared services model. The purpose is to identify the stages involved in establishing a service center and the critical factors that must be taken into account before and during the establishment process. The empirical part was carried out as a qualitative study by conducting four semi-structured thematic interviews in organizations that have established a global financial shared service center. The results show that the most significant benefits associated with the shared services model are improvements in productivity, efficiency, and quality, as well as cost savings. The most critical success factors relate to planning and managing the change. The challenges associated with the model mainly concern the diversity of the global operating environment, personnel, technology, and processes. The results are highly consistent with previous research. Based on the results of the empirical study and earlier research, a five-stage model of the process of establishing a global financial shared service center has been created.
Abstract:
The amount of biological data has grown exponentially in recent decades. Modern biotechnologies, such as microarrays and next-generation sequencing, are capable of producing massive amounts of biomedical data in a single experiment. As the amount of data is rapidly growing, there is an urgent need for reliable computational methods for analyzing and visualizing it. This thesis addresses this need by studying how to efficiently and reliably analyze and visualize high-dimensional data, especially data obtained from gene expression microarray experiments. First, we study ways to improve the quality of microarray data by replacing (imputing) the missing data entries with estimated values. Missing value imputation is a method commonly used to make the original incomplete data complete, thus making it easier to analyze with statistical and computational methods. Our novel approach was to use curated external biological information as a guide for the missing value imputation. Secondly, we studied the effect of missing value imputation on downstream data analysis methods such as clustering. We compared multiple recent imputation algorithms on 8 publicly available microarray data sets. It was observed that missing value imputation is indeed a rational way to improve the quality of biological data. The research revealed differences between the clustering results obtained with different imputation methods. On most data sets, the simple and fast k-NN imputation was good enough, but there was also a need for more advanced imputation methods, such as Bayesian Principal Component Analysis (BPCA). Finally, we studied the visualization of biological network data. Biological interaction networks are examples of the outcome of multiple biological experiments, such as those using gene microarray techniques.
Such networks are typically very large and highly connected, so there is a need for fast algorithms for producing visually pleasing layouts. A computationally efficient way to produce layouts of large biological interaction networks was developed. The algorithm uses multilevel optimization within a regular force-directed graph layout algorithm.
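The k-NN imputation mentioned above can be sketched as follows (a generic illustration, not the thesis's implementation): each missing entry is filled with the mean of that column over the k rows most similar to the incomplete row, with similarity measured only on the jointly observed columns.

```python
# Minimal k-NN missing value imputation sketch on a small toy matrix.
# Rows are samples, columns are features; np.nan marks missing entries.
import numpy as np

def knn_impute(X, k=2):
    X = X.astype(float)
    filled = X.copy()
    for i, row in enumerate(X):
        miss = np.isnan(row)
        if not miss.any():
            continue
        dists = []
        for j, other in enumerate(X):
            if j == i:
                continue
            common = ~np.isnan(row) & ~np.isnan(other)
            if common.any():
                # RMS distance over the columns both rows actually observe
                d = np.sqrt(np.mean((row[common] - other[common]) ** 2))
                dists.append((d, j))
        dists.sort()
        neighbors = [j for _, j in dists[:k]]
        for c in np.where(miss)[0]:
            vals = [X[j, c] for j in neighbors if not np.isnan(X[j, c])]
            if vals:
                filled[i, c] = np.mean(vals)
    return filled

X = np.array([[1.0, 2.0, 3.0],
              [1.1, np.nan, 3.1],
              [1.0, 2.2, 2.9],
              [9.0, 9.0, 9.0]])
print(knn_impute(X, k=2))
```

Here the incomplete second row is closest to the first and third rows, so its missing entry is imputed as the mean of their values in that column; the distant fourth row is ignored.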
Abstract:
ABSTRACT Towards a contextual understanding of B2B salespeople’s selling competencies – an exploratory study among purchasing decision-makers of internationally-oriented technology firms. The characteristics of modern selling can be classified as follows: customer retention and loyalty targets, database and knowledge management, customer relationship management, marketing activities, problem solving and system selling, and satisfying needs and creating value. For salespeople to be successful in this environment, they need a wide range of competencies. Salespeople’s selling skills are well documented in seller-side literature through quantitative methods, but the knowledge, skills and competencies from the buyer’s perspective are under-researched. The existing research on selling competencies should be broadened and updated through a qualitative research perspective due to the dynamic nature and the contextual dependence of selling competencies. The purpose of the study is to increase understanding of the professional salesperson’s selling competencies from the industrial purchasing decision-makers’ viewpoint within the relationship selling context. In this study, competencies are defined as sales-related knowledge and skills. The scope of the study includes goods, materials and services managed by a company’s purchasing function and used by an organization on a daily basis. The abductive approach and ‘systematic combining’ have been applied as a research strategy. In this research, data were generated through semi-structured, person-to-person interviews and open-ended questions. The study was conducted among purchasing decision-makers in the technology industry in Finland. The branches consisted of the electronics and electro-technical industries and the mechanical engineering and metals industries. A total of 30 companies and one purchasing decision-maker from each company were purposively chosen for the sample.
The sample covers different company sizes based on their revenues and their differing structures, varying from public to family companies, representing both domestic and international ownership. Before analyzing the data, they were organized by the purchasing orientations of the buyers: the buying, procurement or supply management orientation. Thematic analysis was chosen as the analysis method. After analyzing the data, the results were contrasted with the theory; there was continuous interaction between the empirical data and the theory. Based on the findings, a total of 19 major knowledge areas and skills were identified from the buyers’ perspective. The specific knowledge and skills from the viewpoint of customers’ prevalent purchasing orientations were divided into two categories, generic and contextual. The generic knowledge and skills apply to all purchasing orientations, while the contextual knowledge and skills depend on customers’ prevalent purchasing orientations. Generic knowledge and skills relate to price setting, negotiation, communication and interaction skills, while contextual ones relate to knowledge brokering, the ability to present solutions and relationship skills. Buying-oriented buyers value salespeople who are ‘action oriented experts, however at a bit of an arm’s length’, procurement buyers value salespeople who are ‘experts deeply dedicated to the customer and fostering the relationship’, and supply management buyers value salespeople who are ‘corporate-oriented experts’. In addition, the buyer’s perceptions of knowledge and selling skills differ from the seller’s. The buyer side emphasizes managing the subject matter, consisting of expertise, understanding the customers’ business and needs, creating a customized solution and creating value, reliability and an ability to build long-term relationships, while the seller side emphasizes communication, interaction and salesmanship skills.
The study integrates the selling skills of the current three-component model (technical knowledge, salesmanship skills and interpersonal skills) with relationship skills and purchasing orientations into a selling competency model. The findings deepen and update the content of this knowledge and these skills in the B2B setting and create new insights into them from the buyer’s perspective; thus the study increases contextual understanding of selling competencies. It generates new knowledge of the salesperson’s competencies for the relationship selling, personal selling and sales management literature. It also adds knowledge of buying orientations to the buying behavior literature. The findings challenge sales management to perceive salespeople’s selling skills from both a contingency and a competence perspective. The study has several managerial implications: it increases understanding of the critical selling knowledge and skills from the buyer’s point of view, of how salespeople effectively implement the relationship marketing concept, of how sales management can manage the sales process more effectively and efficiently, and of how sales management should develop a salesperson’s selling competencies when managing and developing the sales force. Keywords: selling competencies, knowledge, selling skills, relationship skills, purchasing orientations, B2B selling, abductive approach, technology firms
Abstract:
This study examines the structure of the Russian Reflexive Marker (-ся/-сь) and offers a usage-based model building on Construction Grammar and a probabilistic view of linguistic structure. Traditionally, reflexive verbs are accounted for relative to non-reflexive verbs. These accounts assume that linguistic structures emerge as pairs. Furthermore, these accounts assume a directionality whereby the semantics and structure of a reflexive verb can be derived from the non-reflexive verb. However, this directionality does not necessarily hold diachronically. Additionally, the semantics and the patterns associated with a particular reflexive verb are not always shared with the non-reflexive verb. Thus, a model is proposed that can accommodate the traditional pairs as well as the possible deviations without postulating different systems. A random sample of 2000 instances marked with the Reflexive Marker was extracted from the Russian National Corpus, and the sample used in this study contains 819 unique reflexive verbs. This study moves away from the traditional pair account and introduces the concept of the Neighbor Verb. A neighbor verb exists for a reflexive verb if they share the same phonological form excluding the Reflexive Marker. It is claimed here that the Reflexive Marker constitutes a system in Russian and that the relation between the reflexive and neighbor verbs constitutes a cross-paradigmatic relation. Furthermore, the relation between the reflexive and the neighbor verb is argued to be one of symbolic connectivity rather than directionality. Effectively, the relation holding between particular instantiations can vary. The theoretical basis of the present study builds on this assumption. Several new variables are examined in order to systematically model the variability of this symbolic connectivity, specifically the degree and strength of connectivity between items. In usage-based models, the lexicon does not constitute an unstructured list of items.
Instead, items are assumed to be interconnected in a network. This interconnectedness is defined as Neighborhood in this study. Additionally, each verb carves its own niche within the Neighborhood and this interconnectedness is modeled through rhyme verbs constituting the degree of connectivity of a particular verb in the lexicon. The second component of the degree of connectivity concerns the status of a particular verb relative to its rhyme verbs. The connectivity within the neighborhood of a particular verb varies and this variability is quantified by using the Levenshtein distance. The second property of the lexical network is the strength of connectivity between items. Frequency of use has been one of the primary variables in functional linguistics used to probe this. In addition, a new variable called Constructional Entropy is introduced in this study building on information theory. It is a quantification of the amount of information carried by a particular reflexive verb in one or more argument constructions. The results of the lexical connectivity indicate that the reflexive verbs have statistically greater neighborhood distances than the neighbor verbs. This distributional property can be used to motivate the traditional observation that the reflexive verbs tend to have idiosyncratic properties. A set of argument constructions, generalizations over usage patterns, are proposed for the reflexive verbs in this study. In addition to the variables associated with the lexical connectivity, a number of variables proposed in the literature are explored and used as predictors in the model. The second part of this study introduces the use of a machine learning algorithm called Random Forests. The performance of the model indicates that it is capable, up to a degree, of disambiguating the proposed argument construction types of the Russian Reflexive Marker. Additionally, a global ranking of the predictors used in the model is offered. 
Finally, most construction grammars assume that argument constructions form a network structure. A new method is proposed that establishes a generalization over the argument constructions, referred to as the Linking Construction. In sum, this study explores the structural properties of the Russian Reflexive Marker, and a new model is set forth that can accommodate both the traditional pairs and potential deviations from them in a principled manner.
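The two connectivity measures described above can be sketched in a few lines (with toy data, not forms or counts from the thesis sample): Levenshtein distance quantifies how far a verb sits from its phonological neighbors, and constructional entropy is the Shannon entropy of a verb's distribution over argument constructions.

```python
# Sketch of the degree and strength of lexical connectivity:
# edit distance between word forms, and entropy over construction counts.
import math

def levenshtein(a, b):
    """Standard dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def constructional_entropy(counts):
    """Shannon entropy (bits) of a verb's argument-construction counts."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c)

# Toy transliterated forms and invented construction counts:
print(levenshtein("bojatsja", "bojat"))
print(constructional_entropy({"intransitive": 6, "oblique": 2}))
```

A verb concentrated in one argument construction has entropy near zero, while a verb spread evenly over several constructions carries more information per occurrence, which is the intuition behind the variable.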
Abstract:
The goal of this Master's thesis is to construct an electronic business model for the needs of an international software company. A new feature of the software allows third parties to define for themselves the components needed in building information modeling, which creates an opportunity for new business. The theory of business models and interviews with experts show that the best solution in this case is a portal consisting of a component marketplace, an e-shop, and a virtual community. The component marketplace is divided into free exchange and trading by certified developers. This gives developers the opportunity to choose their level of commitment and also motivates them to participate. The e-shop is designed for applications and more complex components. Through the virtual community, users can discuss their opinions and get support for using the software and developing components.
Abstract:
So that the radius, and thus the non-uniform structure of the teeth and the other electrical and magnetic parts of the machine, may be taken into consideration, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computing resources. This study proves that analytical methods can also be applied to perform the calculation successfully. The procedure of the analytical calculation can be summarized in the following steps: first the magnet is divided into slices, then the calculation is carried out for each section individually, and finally the partial results are combined into the final result. It is obvious that using this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including the vibrations. The motor geometry, the materials and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of different sections of the magnet are selectable, but the calculation time depends strongly on these choices. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air-gap depend on the geometry and material of the magnetic circuit, on the length of the air-gap, and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot opening area.
The calculation is simple and fast if the shape of the magnet is a square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; as the number of sections increases, the result becomes more accurate. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air-gap, magnet and the slot, is modelled with a reluctance net, which considers the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed in order to get the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux, created by the permanent magnets and the stator currents, that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though in some cases semi-magnetic slot wedges are used. At the slot opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces in the rotor. This phenomenon is called cogging. The flux in the slot opening area on the different sides of the opening and in the different slot openings is not equal, so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot.
This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque, the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch, and thus the high derivative component will also be spread to occur evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways. All the magnets may have a different deflection of the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and the relative width compared to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor. If the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance net is made for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs, in the calculation model from one edge to the other, is not correct.
If this fact were to be taken into account in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled in the reluctance net. The error resulting from this incorrect assumption is, nevertheless, negligible.
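The sectioning idea at the heart of the analytical method, that each radial slice of an axial flux magnet produces force at its own radius so that torque is the sum of per-slice contributions, can be sketched as follows. The force model and all numbers are illustrative assumptions, not values from the thesis.

```python
# Torque of an axial flux machine as a sum over radial magnet sections.
# force_per_length(r) is an assumed tangential force density (N/m) at radius r.
def axial_flux_torque(r_inner, r_outer, n_sections, force_per_length):
    """Sum per-slice torque contributions, each evaluated at mid-slice radius."""
    dr = (r_outer - r_inner) / n_sections
    torque = 0.0
    for k in range(n_sections):
        r_mid = r_inner + (k + 0.5) * dr          # radius of this slice
        torque += force_per_length(r_mid) * dr * r_mid
    return torque

# Assume a uniform tangential force density of 1000 N/m over a 50-100 mm span.
print(axial_flux_torque(0.05, 0.10, n_sections=100,
                        force_per_length=lambda r: 1000.0))
```

With a uniform force density the sum reduces to the integral of F·r dr, and more sections are needed only when the force density itself varies with radius, mirroring the accuracy-vs-time trade-off described above.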
Abstract:
In mathematical modeling, the estimation of the model parameters is one of the most common problems. The goal is to seek parameters that fit the measurements as well as possible. There is always error in the measurements, which implies uncertainty in the model estimates. In Bayesian statistics all the unknown quantities are presented as probability distributions. If there is knowledge about the parameters beforehand, it can be formulated as a prior distribution. Bayes’ rule combines the prior and the measurements into the posterior distribution. Mathematical models are typically nonlinear, so producing statistics for them requires efficient sampling algorithms. In this thesis the Metropolis-Hastings (MH) and Adaptive Metropolis (AM) algorithms, as well as Gibbs sampling, are introduced. Different ways to present prior distributions are also introduced. The main issue is the measurement error estimation and how to obtain prior knowledge of the variance or covariance. Variance and covariance sampling is combined with the algorithms above. The hyperprior models are applied, as examples, to the estimation of model parameters and error in an outlier case.
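A random-walk Metropolis-Hastings sampler of the kind introduced above can be sketched in a few lines. This is a generic illustration, not the thesis's code: the target here is a standard normal known only up to a constant, and all tuning values are assumptions.

```python
# Minimal random-walk Metropolis-Hastings: sample from a 1-D distribution
# given only its unnormalized log density.
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)       # symmetric random walk
        # accept with probability min(1, target(proposal)/target(x))
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Target: standard normal, log density -x^2/2 up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))
```

Because the proposal is symmetric, the Hastings correction cancels and only the ratio of target densities enters the acceptance test; the sample mean and variance approach those of the target as the chain grows.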
Abstract:
This thesis focuses on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations expressing the behavior and dynamics of Ebola. Two data sets (onset and death data) were both used to estimate the parameters, which was not done in previous work (Chowell, 2004). To be able to use both data sets, a new version of the model was built. The model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The parameter estimates were useful for determining how well the model fits the data and how good the estimates were, in terms of the information they provided about the possible relationships between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. These samples were used to check the accuracy of the model and other characteristics of the target posteriors.
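The kind of compartmental model used in Chowell-style Ebola studies can be sketched as a standard SEIR system (the exact equations of the thesis's modified model are not given here, and all parameter values below are illustrative): susceptibles become exposed at rate proportional to contacts with infectives, progress to the infective class, and then recover or die, with R0 = beta/gamma under this standard parameterization.

```python
# Forward-Euler sketch of a standard SEIR epidemic model.
def seir_step(s, e, i, r, beta, k, gamma, n, dt):
    new_inf = beta * s * i / n          # new exposures per unit time
    return (s - new_inf * dt,
            e + (new_inf - k * e) * dt,  # exposed progress at rate k
            i + (k * e - gamma * i) * dt,
            r + gamma * i * dt)          # removal (recovery/death) at gamma

beta, k, gamma, n = 0.33, 1 / 5.3, 1 / 5.61, 10000.0   # illustrative values
state = (n - 1, 0.0, 1.0, 0.0)          # one initial infective
for _ in range(1000):                    # 100 days with dt = 0.1
    state = seir_step(*state, beta, k, gamma, n, dt=0.1)

r0 = beta / gamma
print(round(r0, 2), round(sum(state), 1))
```

In an MCMC fit, beta, k and gamma would be the sampled parameters and the model's onset and death trajectories would be compared to the observed data inside the likelihood; the conserved total population is a quick sanity check on the integration.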
Abstract:
The provision of Internet access to large numbers of people has traditionally been under the control of operators, who have built closed access networks for connecting customers. As the access network (i.e. the last mile to the customer) is generally the most expensive part of the network because of the vast amount of cable required, many operators have been reluctant to build access networks in rural areas. There are problems also in urban areas, as incumbent operators may use various tactics to make it difficult for competitors to enter the market. Open access networking, where the goal is to connect multiple operators and other types of service providers to a shared network, changes the way in which networks are used. This change in network structure dismantles vertical integration in service provision and enables true competition, as no service provider can prevent others from competing in the open access network. This thesis describes the development from traditional closed access networks towards open access networking and analyses different types of open access solutions. The thesis introduces a new open access network approach (The Lappeenranta Model) in greater detail. The Lappeenranta Model is compared to other types of open access networks. The thesis shows that end users and service providers see local open access and services as beneficial. In addition, the thesis discusses open access networking in a multidisciplinary fashion, focusing on the real-world challenges of open access networks.
Abstract:
Raw measurement data does not always immediately convey useful information, but applying statistical analysis tools to the data can improve the situation. Data analysis can offer benefits such as acquiring meaningful insight from the dataset, basing critical decisions on the findings, and ruling out human bias through proper statistical treatment. In this thesis we analyze data from an industrial mineral processing plant with the aim of studying the possibility of forecasting the quality of the final product, given by one variable, with a model based on the other variables. For the study, tools such as Qlucore Omics Explorer (QOE) and Sparse Bayesian regression (SB) are used. Later on, linear regression is used to build a model based on a subset of the variables that have the most significant weights in the SB model. The results obtained from QOE show that the variable representing the desired final product does not correlate with the other variables. For SB and linear regression, the results show that both models built on 1-day averaged data seriously underestimate the variance of the true data, whereas the two models built on 1-month averaged data are reliable and able to explain a larger proportion of the variability in the available data, making them suitable for prediction purposes. However, it is concluded that no single model can fit the whole available dataset well, and it is therefore proposed as future work to build piecewise nonlinear regression models if the same dataset is used, or for the plant to provide another dataset collected in a more systematic fashion than the present data.
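The final modeling step described above, ordinary least squares on a subset of selected predictors, can be sketched as follows. The data here are synthetic, and comparing the variance of the fitted values to the variance of the response illustrates the variance-underestimation issue the thesis reports.

```python
# Least-squares fit on a few selected predictors, with a variance check.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                 # three selected predictors
y = X @ np.array([1.5, -0.7, 0.2]) + rng.normal(scale=0.5, size=200)

A = np.column_stack([X, np.ones(len(X))])     # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
y_hat = A @ coef

# The fitted values can never be more variable than the data: the ratio
# below is the R^2 of the fit, and a low value signals underestimated variance.
ratio = y_hat.var() / y.var()
print(round(ratio, 2))
```

With noisy or heavily averaged data this ratio drops, which is one simple way to see why a model can look smooth while "seriously underestimating the variance of the true data".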
Abstract:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
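The dynamical state estimation task described above, repeatedly updating a state estimate as new measurements arrive, can be illustrated with the simplest standard tool, a scalar Kalman filter for a random-walk state. This is a baseline sketch, not the thesis's hybrid method, and all noise values are assumptions.

```python
# One-dimensional Kalman filter: sequentially fuse noisy measurements
# into a state estimate x with uncertainty (variance) p.
def kalman_step(x, p, z, q, r):
    """One predict-update cycle for a random-walk state model."""
    p = p + q                       # predict: variance grows by process noise q
    k = p / (p + r)                 # Kalman gain (weight on the measurement)
    x = x + k * (z - x)             # update toward the measurement z
    p = (1 - k) * p                 # updated (reduced) uncertainty
    return x, p

x, p = 0.0, 1.0                     # initial estimate and its variance
q, r = 0.01, 0.5                    # assumed process and measurement noise
measurements = [4.8, 5.3, 4.9, 5.1, 5.0, 5.2, 4.95]   # noisy readings near 5
for z in measurements:
    x, p = kalman_step(x, p, z, q, r)
print(round(x, 2), round(p, 3))
```

Each cycle mirrors the weather-prediction setting in the text: a forecast step that inflates uncertainty, then a measurement update that pulls the estimate toward the new observation in proportion to the Kalman gain; hybrid methods replace these closed-form steps with combined deterministic and random sampling.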