27 results for Gaussian complexities
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
The parameter setting of a differential evolution algorithm must meet several requirements: efficiency, effectiveness, and reliability. Problems vary, and the solution of a particular problem can be represented in different ways. An algorithm that is most efficient with one representation may be less efficient with others. The development of differential evolution-based methods contributes substantially to research on evolutionary computing and global optimization in general. The objective of this study is to investigate the differential evolution algorithm, the intelligent adjustment of its control parameters, and its application. In the thesis, the differential evolution algorithm is first examined using different parameter settings and test functions. Fuzzy control is then employed to make the control parameters adaptive, based on the optimization process and expert knowledge. The developed algorithms are applied to training radial basis function networks for function approximation, with the possible variables including the centers, widths, and weights of the basis functions, and with the control parameters either kept fixed or adjusted by a fuzzy controller. After the influence of the control variables on the performance of the differential evolution algorithm was explored, an adaptive version of the algorithm was developed and differential evolution-based radial basis function network training approaches were proposed. Experimental results showed that the performance of the differential evolution algorithm is sensitive to the parameter setting, and that the best setting is problem dependent. The fuzzy adaptive differential evolution algorithm relieves the user of the burden of parameter setting and performs better than versions using fixed parameters. Differential evolution-based approaches are effective for training Gaussian radial basis function networks.
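For readers unfamiliar with the baseline method, the classic DE/rand/1/bin scheme that the thesis starts from can be sketched as follows. This is a minimal illustration with commonly used default values for the control parameters F and CR, not the thesis's fuzzy-adaptive variant; all function and variable names are ours:

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=30, F=0.8, CR=0.9, gens=200, seed=0):
    """Minimize f over box bounds with classic DE/rand/1/bin.

    F (differential weight) and CR (crossover rate) are the control
    parameters whose setting the thesis studies; the values here are
    common defaults, not the thesis's recommendations.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            # pick three distinct individuals, all different from i
            r1, r2, r3 = rng.choice([j for j in range(pop_size) if j != i],
                                    size=3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # binomial crossover with one guaranteed mutant coordinate
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.clip(np.where(cross, mutant, pop[i]), lo, hi)
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a simple test function such as the 3-dimensional sphere, this sketch converges to the global minimum with the default settings, which is exactly the kind of sensitivity experiment the thesis performs across many parameter settings and test functions.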
Abstract:
This study focuses on corporate social responsibility (or CSR) as the latest dimension to emerge in the corporate responsibility and sustainability agenda, which in the recent past has rapidly risen to the top of the list of concerns for civil societies worldwide. Despite the continuing debates and discussions about the scope, benefits, and impacts of CSR to business and community in various sectors, levels, and types of society, many companies have moved forward to confront the opportunities and challenges of CSR. Thus, this study is about those proactive companies with a focus on the importance of CSR and its management inside and outside the company. It is an exploration and learning from the experience of Finnish companies, as well as other actors interested or involved in shaping the course of CSR, locally and globally. It also looks closely at how national culture affects the views, thinking, and management of CSR in a welfare state. This dissertation primarily draws on the analyses of information collected from a series of qualitative interviews and the existing literature in the area. This is complemented by an analysis of written and published documents on CSR from various sources. The results of the study give insightful information and detailed descriptions of a roadmap useful in learning and understanding CSR in Finnish companies. Despite the varying conceptual connotations, essential roadmap indicators point to the importance of framing CSR within the corporate responsibility concept, Finnish development and the welfare state system, globalization, stakeholders, and the pursuit of sustainable development as the main drivers of CSR, the remarkable progress of CSR in companies, and identification of key management areas and practices relevant to CSR. Similarly, the study reveals the importance of culture as essential in understanding and learning CSR. Finnish culture has a positive influence on the views, thinking, and management practices of CSR issues.
Such a positive influence of culture, therefore, makes it easy for business people to discuss and understand CSR, because those CSR issues are already considered common and taken-for-granted by Finns and are implicit in the welfare state provisions. The experience of Finnish companies in implementing CSR policies in the supply chain is a concrete proactive step in advancing the message of CSR, that is, to bring companies and suppliers together to work on improving and strengthening relationships towards socially responsible practices worldwide. Such a forward step to deal with CSR issues in the supply chain reflects the companies' commitments and belief that CSR can be managed with the suppliers and gain positive benefits. Despite the problems and complexities, particularly in the global supply chain, managing CSR for Finnish companies presents new opportunities and challenges that are expected to intensify in the near future. The focus on CSR policy implementation in the supply chain points to the importance of companies taking initiatives and forging cooperation with suppliers with the aim of addressing and improving CSR questions in the supply chains. The proactive stance of Finnish companies toward CSR is complemented by the active supporting role of important societal actors such as the government and NGOs. These actors carry out various promotional efforts and campaigns, thus bringing CSR into the mainstream of Finnish companies and strengthening the synergistic learning about CSR within the Finnish business and civil circles. The efforts of the government and NGOs to promote CSR are indicative of the importance of multipartite involvement and the emergence of better civil regulations. Likewise, their drive to learn from each other, exchange experiences, and contribute in CSR debates facilitated the evolution of CSR networks in the country.
The results of this study add to the mounting evidence that CSR, in general, has created a new dimension in managing corporate sustainability. This study provides compelling empirical evidence and some direct quotations about CSR in the Finnish context. This information can be used to learn and gain new useful insights, approaches, and concepts for managing CSR.
Abstract:
This thesis is about the detection of local image features. The research topic belongs to the wider area of object detection, which is a machine vision and pattern recognition problem where an object must be detected (located) in an image. State-of-the-art object detection methods often divide the problem into separate interest point detection and local image description steps, but in this thesis a different technique is used, leading to higher-quality image features which enable more precise localization. Instead of using interest point detection, the landmark positions are marked manually. Therefore, the quality of the image features is not limited by the interest point detection phase, and the learning of image features is simplified. The approach combines both interest point detection and local description into one detection phase. Computational efficiency of the descriptor is therefore important, ruling out many of the commonly used descriptors as too heavy. Multiresolution Gabor features have been the main descriptor in this thesis, and improving their efficiency is a significant part of the work. Actual image features are formed from descriptors by using a classifier which can then recognize similar-looking patches in new images. The main classifier is based on Gaussian mixture models. Classifiers are used in a one-class classifier configuration where there are only positive training samples, without an explicit background class. The local image feature detection method has been tested with two freely available face detection databases and a proprietary license plate database, and the localization performance was very good in these experiments. Other applications applying the same underlying techniques are also presented, including object categorization and fault detection.
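The one-class configuration described above can be illustrated with a small sketch: fit a Gaussian mixture to positive samples only, then flag anything whose likelihood falls below a threshold taken from the training data itself. This is a generic diagonal-covariance EM sketch, not the thesis's multiresolution-Gabor pipeline; the names, the farthest-point initialization, and the percentile threshold are our illustrative choices:

```python
import numpy as np

def fit_diag_gmm(X, k=2, iters=50, seed=0):
    """EM for a k-component diagonal-covariance GMM (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # farthest-point initialization of the means
    mu = [X[rng.integers(n)]]
    for _ in range(k - 1):
        dist = np.min([((X - m) ** 2).sum(axis=1) for m in mu], axis=0)
        mu.append(X[np.argmax(dist)])
    mu = np.array(mu)
    var = np.tile(X.var(axis=0) + 1e-6, (k, 1))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibilities from per-component log densities
        logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                        + np.log(2 * np.pi * var)).sum(axis=2) + np.log(w))
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: reweight, recenter, re-estimate variances
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r.T @ X) / nk[:, None]
        var = (r.T @ (X ** 2)) / nk[:, None] - mu ** 2 + 1e-6
    return w, mu, var

def log_density(X, w, mu, var):
    """Mixture log-density via log-sum-exp over components."""
    logp = (-0.5 * (((X[:, None, :] - mu) ** 2) / var
                    + np.log(2 * np.pi * var)).sum(axis=2) + np.log(w))
    m = logp.max(axis=1)
    return m + np.log(np.exp(logp - m[:, None]).sum(axis=1))
```

A one-class detector then accepts a new patch when its `log_density` exceeds, say, the 5th percentile of the training samples' densities; no background class is ever modeled.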
Abstract:
The dynamic behavior of both isothermal and non-isothermal single-column chromatographic reactors with an ion-exchange resin as the stationary phase was investigated. The reactor performance was interpreted by using results obtained when studying the effect of the resin properties on the equilibrium and kinetic phenomena occurring simultaneously in the reactor. Mathematical models were derived for each phenomenon and combined to simulate the chromatographic reactor. The phenomena studied include phase equilibria in multicomponent liquid mixture–ion-exchange resin systems, chemical equilibrium in the presence of a resin catalyst, diffusion of liquids in gel-type and macroporous resins, and chemical reaction kinetics. Above all, attention was paid to the swelling behavior of the resins and how it affects the kinetic phenomena. Several poly(styrene-co-divinylbenzene) resins with different cross-link densities and internal porosities were used. Esterification of acetic acid with ethanol to produce ethyl acetate and water was used as a model reaction system. Choosing an ion-exchange resin with a low cross-link density is beneficial in the case of the present reaction system: the amount of ethyl acetate as well as the ethyl acetate to water mole ratio in the effluent stream increase with decreasing cross-link density. The enhanced performance of the reactor is mainly attributed to the increasing reaction rate, which in turn originates from the phase equilibrium behavior of the system. Mass transfer considerations also favor the use of resins with low cross-link density. The diffusion coefficients of liquids in the gel-type ion-exchange resins were found to fall rapidly when the extent of swelling became low. Glass transition of the polymer was not found to significantly retard the diffusion in sulfonated PS–DVB ion-exchange resins. It was also shown that non-isothermal operation of a chromatographic reactor could be used to significantly enhance the reactor performance.
In the case of the exothermic model reaction system and a near-adiabatic column, a positive thermal wave (higher temperature than in the initial state) was found to travel together with the reactive front. This further increased the conversion of the reactants. Diffusion-induced volume changes of the ion-exchange resins were studied in a flow-through cell. It was shown that describing the swelling and shrinking kinetics of the particles calls for a mass transfer model that explicitly includes the limited expansibility of the polymer network. A good description of the process was obtained by combining the generalized Maxwell-Stefan approach and an activity model that was derived from the thermodynamics of polymer solutions and gels. The swelling pressure in the resin phase was evaluated by using a non-Gaussian expression for the polymer chain length distribution. Dimensional changes of the resin particles necessitate the use of non-standard mathematical tools for dynamic simulations. A transformed coordinate system, where the mass of the polymer was used as a spatial variable, was applied when simulating the chromatographic reactor columns as well as the swelling and shrinking kinetics of the resin particles. Shrinking of the particles in a column leads to the formation of dead volume on top of the resin bed. In ordinary Eulerian coordinates, this results in a moving discontinuity that in turn causes numerical difficulties in the solution of the PDE system. The motion of the discontinuity was eliminated by spanning two calculation grids in the column that overlapped at the top of the resin bed. The reactive and non-reactive phase equilibrium data were correlated with a model derived from the thermodynamics of polymer solutions and gels. The thermodynamic approach used in this work is best suited to high degrees of swelling, because the polymer matrix may be in the glassy state when the extent of swelling is low.
Abstract:
In recent years, evolutionary algorithms have proven to be effective methods for solving global optimization problems. Their particular strengths are general applicability and the ability to find a global solution without getting stuck in local optima of the objective function. The goal of this work is to develop a new mutation operation, based on the normal distribution, for the differential evolution algorithm, one of the newest evolution-based optimization algorithms. The method is expected to further reduce both the risk of premature convergence of the population and the risk of the algorithm's states becoming stuck, and it can be theoretically shown to converge. This does not hold for the original differential evolution, since it has been shown that its state transitions can, with a small probability, become stuck. In this work, the behavior of the new method is examined experimentally using multi-constraint problems as test problems. The constraint functions are handled with a method developed by Jouni Lampinen, based on the principle of Pareto optimality; this also yields additional experimental evidence on the behavior of that method. All the test problems used could be solved both with the original differential evolution and with the version using the new mutation operation. The new method, however, proved more reliable in cases where the original algorithm had difficulties. In addition, most problems could be solved reliably with a smaller population size than when using the original differential evolution. The new method also better supports the use of control parameter values that make the search rotation invariant. Computationally, the new method is slightly heavier than the original differential evolution, and it requires one additional control parameter. However, values as generally applicable as possible were determined for the new control parameters, with which a large set of different problems can be solved.
Abstract:
In this thesis, X-ray tomography is discussed from the Bayesian statistical viewpoint. The unknown parameters are assumed to be random variables and, in contrast to traditional methods, the solution is obtained as a large sample from the distribution of all possible solutions. As an introduction to tomography, an inversion formula for the Radon transform on a plane is presented, and the widely used filtered backprojection algorithm is derived. The traditional regularization methods are presented sufficiently to ground the Bayesian approach. The measurements are photon counts at the detector pixels, so the assumption of a Poisson-distributed measurement error is justified. Often the error is assumed Gaussian, although the electronic noise caused by the measurement device can change the error structure; the assumption of Gaussian measurement error is discussed. The thesis also discusses the use of different prior distributions in X-ray tomography. Especially in severely ill-posed problems, the choice of a suitable prior is the main part of the whole solution process. In the empirical part, the presented prior distributions are tested using simulated measurements, and the effects that different prior distributions produce are shown. The use of a prior is shown to be obligatory in the case of a severely ill-posed problem.
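A toy sketch may help fix the ideas of a Poisson measurement model combined with a prior. The random matrix below merely stands in for the Radon-transform geometry, and the gradient-ascent MAP estimate is only one simple way to summarize the posterior (the thesis samples the full distribution); all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 20, 30                       # fewer measurements than unknowns: ill-posed
A = rng.uniform(0.0, 1.0, (m, n))   # toy system matrix (stand-in for Radon rays)
x_true = np.abs(np.sin(np.linspace(0, 3, n)))
y = rng.poisson(A @ x_true)         # photon counts -> Poisson-distributed data

L = np.eye(n) - np.eye(n, k=1)      # first-difference (smoothness) prior operator
alpha = 5.0                         # prior strength (illustrative)

def log_posterior(x):
    """Poisson log-likelihood plus Gaussian smoothness log-prior (up to const)."""
    Ax = A @ x + 1e-9
    return float(np.sum(y * np.log(Ax) - Ax) - 0.5 * alpha * np.sum((L @ x) ** 2))

def map_estimate(steps=2000, lr=1e-3):
    """Projected gradient ascent to a (local) MAP point, kept nonnegative."""
    x = np.full(n, 0.5)
    for _ in range(steps):
        Ax = A @ x + 1e-9
        grad = A.T @ (y / Ax - 1.0) - alpha * (L.T @ (L @ x))
        x = np.maximum(x + lr * grad, 1e-6)
    return x
```

With m < n the likelihood alone does not pin down `x`; the smoothness prior is what makes the reconstruction well behaved, mirroring the thesis's point that a suitable prior carries much of the solution in severely ill-posed problems.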
Abstract:
The ongoing development of the digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from the spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinctive areas – image quality itself and image fidelity, both dealing with similar questions, image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, which include kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of a traditional gray-scale SSIM measure developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence the overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable difference). Both image fidelity measures and the image quality model have proven to be effective in the respective experiments.
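As a reference point, the standard gray-scale SSIM that the 3D-SSIM measure extends can be computed in a simplified single-window form as below. The thesis's version extends SSIM to spectral data; the usual local windowed averaging is omitted here for brevity, so this is a sketch of the formula rather than a full implementation:

```python
import numpy as np

def ssim_global(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM over two equal-size gray-scale images.

    Uses the standard luminance/contrast/structure formula with the
    conventional stabilizing constants c1, c2; L is the dynamic range.
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

An identical image pair scores exactly 1, and any distortion pushes the score below 1; a spectral extension aggregates such comparisons over the wavelength dimension as well.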
Abstract:
Current e-business standards have been developed and used by large organizations to reduce clerical costs in business transactions through increased automation and a higher level of business-to-business integration. Small and medium enterprises (SMEs), however, cannot easily adopt these standards, because SMEs lack the technical expertise and resources for implementing them. Still, large organizations increasingly require their business partners, most of which are SMEs, to be able to interoperate using their chosen e-business standards. The research question for the study was, first, which of the existing e-business technologies are most SME-adoptable, and, second, how those e-business technologies could be made easier for SMEs to implement. The study was conducted as a literature study that evaluated the available e-business frameworks and SME-oriented e-business architectures based on the implementation complexity and costs incurred for the SME adopter. The study found that only a few e-business solutions are SME-adoptable. The technological approaches used in the solutions need to be improved in a number of areas, the most important of which is implementation complexity. The study revealed that this also applies to the special, SME-oriented e-business architectures, which are still too difficult for SMEs to implement. Based on these findings, a high-level e-business interoperability framework concept was proposed as a basis for future research to overcome the identified implementation complexities for SMEs.
Abstract:
Diabetes is a rapidly increasing worldwide problem characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm's performance while conducting experiments with the colour space selection, both illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is a benchmarking framework for eye fundus image analysis algorithms, needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristic analysis, and it follows medical practice in decision making, providing protocols for image- and pixel-based evaluations.
During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, the DR databases and the final algorithm are made public on the web to set the baseline results for the automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is also presented. The optic disc localisation is discussed because normal eye fundus structures are fundamental in the characterisation of DR.
Abstract:
Speaker diarization is the process of sorting speech according to the speaker. Diarization helps to search and retrieve what a certain speaker uttered in a meeting. Applications of diarization systems extend to domains other than meetings, for example lectures, telephone conversations, television, and radio. Diarization also enhances the performance of several speech technologies, such as speaker recognition, automatic transcription, and speaker tracking. Methodologies previously used in developing diarization systems are discussed, and prior results and techniques are studied and compared. Methods such as hidden Markov models and Gaussian mixture models that are used in speaker recognition and other speech technologies are also used in speaker diarization. The objective of this thesis is to develop a speaker diarization system for the meeting domain. The experimental part of this work indicates that the zero-crossing rate can be used effectively to break the audio stream into segments, and that adaptive Gaussian models fit short audio segments adequately. Results show that 35 Gaussian models and an average segment length of one second are optimum values for building a diarization system for the tested data. Segments uttered by the same speaker are united in a bottom-up clustering by a new approach of categorizing the mixture weights.
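The zero-crossing-rate idea mentioned above can be sketched in a few lines: frames with a high rate of sign changes (noise-like audio) separate cleanly from low-rate frames (voiced speech), giving candidate segment boundaries. This is an illustrative framing scheme, not the thesis's exact segmentation rules; the frame length and threshold are hypothetical:

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    s = np.sign(frame)
    s[s == 0] = 1            # treat exact zeros as positive
    return float(np.mean(s[:-1] != s[1:]))

def segment_by_zcr(signal, frame_len, threshold):
    """Per-frame ZCR plus a boolean mask of frames above the threshold."""
    n = len(signal) // frame_len
    zcr = np.array([zero_crossing_rate(signal[i * frame_len:(i + 1) * frame_len])
                    for i in range(n)])
    return zcr, zcr > threshold
```

A low-frequency tone (a stand-in for voiced speech) yields a much lower per-frame ZCR than white noise, which is why thresholding this single cheap feature already breaks an audio stream into usable segments.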
Abstract:
This PhD thesis in Mathematics belongs to the field of Geometric Function Theory. The thesis consists of four original papers. The topic studied deals with quasiconformal mappings and their distortion theory in Euclidean n-dimensional spaces. This theory has its roots in the pioneering papers of F. W. Gehring and J. Väisälä published in the early 1960s, and it has been studied by many mathematicians thereafter. In the first paper we refine the known bounds for the so-called Mori constant and also estimate the distortion in the hyperbolic metric. The second paper deals with radial functions, which are simple examples of quasiconformal mappings. These radial functions lead us to the study of the so-called p-angular distance, which has been studied recently e.g. by L. Maligranda and S. Dragomir. In the third paper we study a class of functions of a real variable studied by P. Lindqvist in an influential paper. This leads one to study parametrized analogues of the classical trigonometric and hyperbolic functions which for the parameter value p = 2 coincide with the classical functions. Gaussian hypergeometric functions have an important role in the study of these special functions. Several new inequalities and identities involving p-analogues of these functions are also given. In the fourth paper we study the generalized complete elliptic integrals, modular functions and some related functions. We find upper and lower bounds for these functions, and those bounds are given in a simple form. This theory has a long history going back two centuries and includes names such as A. M. Legendre, C. Jacobi and C. F. Gauss. Modular functions also occur in the study of quasiconformal mappings. Conformal invariants, such as the modulus of a curve family, are often applied in quasiconformal mapping theory. The invariants can sometimes be expressed in terms of special conformal mappings. This fact explains why special functions often occur in this theory.
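For orientation, the generalized (p-)trigonometric functions arising in Lindqvist's setting are commonly defined as follows; this is the standard notation in the literature, and the thesis's conventions may differ slightly:

```latex
\arcsin_p(x) \;=\; \int_0^x \frac{dt}{\left(1 - t^p\right)^{1/p}},
\qquad 0 \le x \le 1,
\qquad \pi_p \;=\; 2\arcsin_p(1),
```

where $\sin_p$ is defined as the inverse of $\arcsin_p$ on $[0, \pi_p/2]$. For the parameter value $p = 2$ these reduce to the classical $\arcsin$, $\pi$ and $\sin$, which is the sense in which the parametrized functions coincide with the classical ones.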
Abstract:
The focus of this study is to examine the role of police and immigrant relations, as little is known about this process in the country. The studies were approached in two different ways. Firstly, an attempt was made to examine how immigrants view their encounters with the police. Secondly, the studies explored how aware the police are of immigrants' experiences in their various encounters and interactions at the street level. An ancillary aim of the studies is to clarify, analyse and discuss how prejudice and stereotypes can be tackled, thereby contributing to the general debate about racism and discrimination for better ethnic relations in the country. The data on which this analysis was based come from a group of adults (n=88) out of the total of 120 Africans questioned for the entire study, (n=45) police cadets, and (n=6) serving police officers from Turku. The present thesis is a compilation of five articles. A summary of each article's findings follows, as the same data were used in all five studies. In the first study, a theoretical model was developed to examine the perceived knowledge of bias by immigrants resulting from race, culture and belief. This was also an attempt to explore whether this knowledge was predetermined, in my attempt to classify, discuss and analyse the factors that may be influencing immigrants' allegations of unfair treatment by the police in Turku. The main finding of the first paper is that there was ignorance and naivety on the part of the police in their attitudes towards the African immigrants' prior experiences with the police, probably resulting from stereotypes or from their lack of experience and prior training with immigrants, among whom these kinds of experience are rampant in the country (Egharevba, 2003 and 2004a).
In exploring what leads to stereotypes, a working definition is the assumption, prevalent among some segments of the population including the police, that Finland is a homogeneous country, which shapes certain conducts and behaviour towards ethnic and immigrant groups in the country. This, to my understanding, is a stereotype. Historically this was true, but today the social topography of the country is changing and becoming even more complex. It is true that, on linguistic grounds, the country is multilingual, as there are a few recognised national minority languages (Swedish, Sami and Russian) as well as a number of immigrant languages, including English. It is therefore vital for the police to keep a line of communication open when addressing problems associated with immigrants in the country. The second paper moved a step further by examining African immigrants' understanding of human rights, as well as what human rights violations mean or entail in their view, as a result of their experiences with the police both in Finland and in their countries of origin. This approach became essential during the course of the study, especially when the participants were completing the questionnaire (N=88), where volunteers were solicited for a later in-depth interview with the author. Many of the respondents came from countries where human rights are not well protected and seldom discussed publicly; therefore, understanding their views on the subject can help to explain why some of the immigrants are sceptical about coming forward to report cases of battery and assault to the police, or even their experiences of being monitored in shopping malls in their new home, and the reason behind their low level of trust in public authorities in Finland. The study showed that knowledge of human rights is notably low among some of the participants. The study also found that female respondents were less aware of human rights when compared with their male counterparts.
This has resulted in some of the male participants focussing more on their traditional ways of thinking, not realising that they are in a new country where the sexes are equal and lack of respect on gender terms is not condoned. The third paper focussed on the respondents' experiences with the police in Turku and tried to explore police attitudes towards African immigrant clients, in addition to the role stereotypes play in police views of different cultures and how these views have impacted immigrants' views of discriminatory policing in Turku. The data are the same throughout the studies (n=88), except that a subset of thirty-five participants was interviewed for the third paper. The results showed that there is some bias in mass-media reports on immigrant issues, due to the selective portrayal of biases without much investigation being carried out before jumping to conclusions, especially when the issues at stake involve an immigrant (Egharevba, 2005a; Egharevba, 2004a and 2004b). In this vein, there was an allegation that the police are even biased when investigating cases of theft, especially if the stolen property is owned by an immigrant (Egharevba, 2006a; Egharevba, 2006b). One vital observation from the respondents' various comments was that race has meaning in their encounters and interactions with the police in the country. This result led the author to conclude that the relation between the police and immigrants is still a challenge, as there is rampant fear and distrust towards the police among some segments of the participating respondents in the study. In the fourth paper the focus was on examining the respondents' view of the police, with special emphasis on race and culture, as well as the respondents' perspective on police behaviour in Turku. This is because race, as it was relayed to me in the study, is a significant predictor of police perception (Egharevba, 2005a; Egharevba and Hannikainen, 2005).
It is a known scientific fact that inter-group racial attitudes are representations of group competition and perceived threat to power and status (group-position theory). According to Blumer (1958), a sense of group threat is an essential element for the emergence of racial prejudice. Consequently, it was essential to explore the existing relationship between the respondents and the police in order to understand this concept. The result indicates some local and international contextual issues and assumptions that were of importance in tackling prejudice and discrimination as they exist within the police in the country. Moreover, we have to remember that, for years, many of these African immigrants were on the receiving end of unjust law enforcement in their various countries of origin, which has resulted in many of them feeling inferior and distrustful of the police even in their own country of origin. While discussing the issues of cultural difference and how they affect policing, we must also keep in mind the socio-cultural background of the participants, their level of language proficiency and their educational background. The research data analysed in this study also confirmed the difficulties associated with cultural misunderstandings in interpreting issues, and how these misunderstandings have affected police and immigrant relations in Finland. Finally, the fifth paper focussed on cadets' attitudes towards African immigrants as well as serving police officers' interactions with African clients. Secondly, the police's level of awareness of African immigrants' distrust of their profession was unclear. For this reason, the questions in this fifth study examined the experiences and attitudes of police cadets and serving police officers, as well as those of African immigrants, in order to understand how to improve this relationship in the country.
The data comprised immigrant participants (n=88), police cadets (n=45), and six serving police officers from the Turku police department. The results suggest that there is distrust of the police in the respondents’ interactions; this appears to have heightened tensions arising from the lack of language proficiency (Egharevba and White, 2007; Egharevba and Hannikainen, 2005; Egharevba, 2006b). The results also show that the allegation that immigrants are belittled by the police stems from misconceptions on both sides as well as from police stop-and-search practices in Turku. All these factors were observed to have contributed to the alleged police evasiveness and the lack of regular contact between the respondents and the police. In other words, the police have had only job-related contact with many of the participants in the present study. The results also demonstrated the complexities caused by the low level of education among some of the African immigrants in their understanding of Finnish culture, norms, and values. Thus, the framework constructed in these studies embodies diversity in national culture, as well as the need for further research with a greater number of respondents (both from the police and from immigrant/majority groups) in order to explore the different roles culture plays in immigrant and majority citizens’ understanding of police work.
Resumo:
Mathematical models often contain parameters that need to be calibrated from measured data. The emergence of efficient Markov Chain Monte Carlo (MCMC) methods has made the Bayesian approach a standard tool in quantifying the uncertainty in the parameters. With MCMC, the parameter estimation problem can be solved in a fully statistical manner, and the whole distribution of the parameters can be explored, instead of obtaining point estimates and using, e.g., Gaussian approximations. In this thesis, MCMC methods are applied to parameter estimation problems in chemical reaction engineering, population ecology, and climate modeling. Motivated by the climate model experiments, the methods are developed further to make them more suitable for problems where the model is computationally intensive. After the parameters are estimated, one can start to use the model for various tasks. Two such tasks are studied in this thesis: optimal design of experiments, where the task is to design the next measurements so that the parameter uncertainty is minimized, and model-based optimization, where a model-based quantity, such as the product yield in a chemical reaction model, is optimized. In this thesis, novel ways to perform these tasks are developed, based on the output of MCMC parameter estimation. A separate topic is dynamical state estimation, where the task is to estimate the dynamically changing model state, instead of static parameters. For example, in numerical weather prediction, an estimate of the state of the atmosphere must constantly be updated based on the recently obtained measurements. In this thesis, a novel hybrid state estimation method is developed, which combines elements from deterministic and random sampling methods.
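The abstract above describes MCMC-based parameter estimation: instead of a single point estimate, the whole posterior distribution of a model parameter is explored by sampling. The following minimal sketch illustrates the idea with a random-walk Metropolis sampler for a toy linear model; the model, data, and noise level are illustrative assumptions, not material from the thesis.

```python
import math
import random

random.seed(0)

# Hypothetical toy model y = a * x observed with Gaussian noise;
# we explore the posterior of `a` with random-walk Metropolis.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]   # roughly consistent with a = 2
sigma = 0.2                  # assumed known measurement noise

def log_post(a):
    # Flat prior, so the log-posterior is the Gaussian log-likelihood
    # up to an additive constant.
    return -sum((y - a * x) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

def metropolis(n_steps=20000, step=0.05, a0=1.0):
    chain, a, lp = [], a0, log_post(a0)
    for _ in range(n_steps):
        prop = a + random.gauss(0.0, step)   # symmetric proposal
        lp_prop = log_post(prop)
        # Metropolis accept/reject in log space.
        if math.log(random.random()) < lp_prop - lp:
            a, lp = prop, lp_prop
        chain.append(a)
    return chain

chain = metropolis()
burned = chain[5000:]                 # discard burn-in
mean_a = sum(burned) / len(burned)    # posterior mean estimate
```

After burn-in, the chain's histogram approximates the full posterior of `a`, so quantities such as credible intervals come out of the same run — this is the "whole distribution" point made in the abstract, as opposed to a single Gaussian approximation around a point estimate.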
Resumo:
In any decision making under uncertainty, the goal is mostly to minimize the expected cost. Minimizing cost under uncertainty is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many practical models contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is important to look for other methods that can give insight into such models. The MCMC method is one of the practical methods that can be used for the optimization of stochastic models under uncertainty. It is based on simulation and provides a general methodology that can be applied to nonlinear and non-Gaussian state models. The MCMC method is valuable in practical applications because it is a unified estimation procedure that simultaneously estimates both parameters and state variables: it computes the distribution of the state variables and parameters given the data measurements. The MCMC method can also be faster in computing time than other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainty. The thesis begins with a short discussion of Bayesian inference, MCMC, and stochastic optimization methods. Then an example is given of how MCMC can be applied to maximize production at minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function, with very high certainty.
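The idea of combining MCMC output with expected-cost minimization can be sketched as follows: once posterior samples of an uncertain parameter are available, the expected cost of each candidate design is estimated by averaging the cost over those samples, and the design with the lowest average is chosen. This is a generic illustration under assumed names and numbers, not the chemical-reaction example from the thesis; the cost function and the parameter distribution are hypothetical.

```python
import math
import random

random.seed(1)

# Pretend these are posterior samples of an uncertain rate constant k,
# as produced by an MCMC run (here simply drawn from a Gaussian).
k_samples = [random.gauss(0.5, 0.05) for _ in range(2000)]

def cost(u, k):
    # Hypothetical cost of design variable u: a running cost 0.1*u
    # minus a saturating yield term 1 - exp(-k*u).
    return 0.1 * u - (1.0 - math.exp(-k * u))

def expected_cost(u):
    # Monte Carlo estimate of the expected cost over the posterior of k.
    return sum(cost(u, k) for k in k_samples) / len(k_samples)

# Grid search over the design variable u in [0.1, 10.0].
best_u = min((i * 0.1 for i in range(1, 101)), key=expected_cost)
```

Because the average is taken over the posterior samples rather than a single point estimate of `k`, the chosen design hedges against the parameter uncertainty, which is the sense in which the abstract speaks of optimizing stochastic models under uncertainty.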