932 results for methods of analysis
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
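As a minimal illustration of the contrast drawn above between single-value regression predictions and simulation-based risk mapping, the sketch below shows how multiple realizations yield an exceedance probability rather than a single number. It is not taken from the paper: the map values, uncertainties, threshold and the independence assumption between cells are all hypothetical.

    import numpy as np

    # Illustrative sketch only (not the paper's model): given a predicted mean and
    # standard deviation at each map cell, stochastic realizations let us estimate
    # the probability of exceeding an action level, which a single-value regression
    # prediction cannot provide. All numbers are hypothetical.
    rng = np.random.default_rng(0)

    mean_map = np.array([0.8, 1.4, 2.6, 0.5])   # predicted contamination levels (toy units)
    std_map = np.array([0.3, 0.5, 0.9, 0.2])    # prediction uncertainty (toy units)
    threshold = 1.5                              # hypothetical action level

    # Draw many realizations of the whole map (independent Gaussian draws here;
    # a real geostatistical simulation would preserve spatial correlation).
    realizations = rng.normal(mean_map, std_map, size=(10_000, mean_map.size))

    exceedance_prob = (realizations > threshold).mean(axis=0)
    print("P(level > threshold) per cell:", np.round(exceedance_prob, 3))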
Abstract:
The present research deals with an important public health threat: the pollution created by radon gas accumulation inside dwellings. The spatial modeling of indoor radon in Switzerland is particularly complex and challenging because of the many influencing factors that should be taken into account. Indoor radon data analysis must be addressed from both a statistical and a spatial point of view. As a multivariate process, it was important at first to define the influence of each factor. In particular, it was important to define the influence of geology, which is closely associated with indoor radon. This association was indeed observed for the Swiss data but not proved to be the sole determinant for the spatial modeling. The statistical analysis of the data, at both the univariate and multivariate level, was followed by an exploratory spatial analysis. Many tools proposed in the literature were tested and adapted, including fractality, declustering and moving-window methods. The use of the Quantité Morisita Index (QMI) as a procedure to evaluate data clustering as a function of the radon level was proposed. The existing methods of declustering were revised and applied in an attempt to approach the global histogram parameters. The exploratory phase came along with the definition of multiple scales of interest for indoor radon mapping in Switzerland. The analysis was done with a top-down resolution approach, from regional to local levels, in order to find the appropriate scales for modeling. In this sense, data partition was optimized in order to cope with the stationarity conditions of geostatistical models. Common methods of spatial modeling such as K Nearest Neighbors (KNN), variography and General Regression Neural Networks (GRNN) were proposed as exploratory tools. In the following section, different spatial interpolation methods were applied to a particular dataset. A bottom-to-top method-complexity approach was adopted, and the results were analyzed together in order to find common definitions of continuity and neighborhood parameters. Additionally, a data filter based on cross-validation (the CVMF) was tested with the purpose of reducing noise at the local scale. At the end of the chapter, a series of tests for data consistency and method robustness was performed. This led to conclusions about the importance of data splitting and the limitations of generalization methods for reproducing statistical distributions. The last section was dedicated to modeling methods with probabilistic interpretations. Data transformation and simulations thus allowed the use of multigaussian models and helped take the uncertainty of the indoor radon pollution data into consideration. The categorization transform was presented as a solution for extreme-value modeling through classification. Simulation scenarios were proposed, including an alternative proposal for the reproduction of the global histogram based on the sampling domain. Sequential Gaussian simulation (SGS) was presented as the method giving the most complete information, while classification performed in a more robust way. An error measure was defined in relation to the decision function for hardening the data classification. Within the classification methods, probabilistic neural networks (PNN) proved better adapted for modeling high-threshold categorization and for automation, whereas support vector machines (SVM) performed well under balanced category conditions.
In general, it was concluded that no single prediction or estimation method is better under all conditions of scale and neighborhood definition. Simulations should be the basis, while other methods can provide complementary information to support efficient decision making on indoor radon.
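For reference, one of the exploratory tools named above, the General Regression Neural Network, is essentially Nadaraya-Watson kernel regression over measurement locations. The sketch below is a generic illustration under assumed coordinates, radon values and kernel bandwidth, not the thesis's implementation or data.

    import numpy as np

    # Minimal GRNN-style (Nadaraya-Watson) spatial predictor. Coordinates, radon
    # values and the kernel bandwidth sigma are hypothetical, not the study's data
    # or tuned parameters.
    def grnn_predict(train_xy, train_z, query_xy, sigma=5.0):
        # squared distances between every query point and every training point
        d2 = ((query_xy[:, None, :] - train_xy[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * sigma**2))        # Gaussian kernel weights
        return (w @ train_z) / w.sum(axis=1)      # weighted average of observations

    train_xy = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    train_z = np.array([80.0, 120.0, 300.0, 150.0])   # indoor radon, Bq/m^3 (toy)
    query_xy = np.array([[5.0, 5.0], [1.0, 9.0]])
    print(grnn_predict(train_xy, train_z, query_xy))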
Abstract:
In the administration, planning, design, and maintenance of road systems, transportation professionals often need to choose between alternatives, justify decisions, evaluate tradeoffs, determine how much to spend, set priorities, assess how well the network meets traveler needs, and communicate the basis for their actions to others. A variety of technical guidelines, tools, and methods have been developed to help with these activities. Such work aids include design criteria guidelines, design exception analysis methods, needs studies, revenue allocation schemes, regional planning guides, designation of minimum standards, sufficiency ratings, management systems, point-based systems to determine eligibility for paving, functional classification, and bridge ratings. While such tools play valuable roles, they also manifest a number of deficiencies and are poorly integrated. Design guides tell what solutions MAY be used, but they aren't oriented towards helping find which one SHOULD be used. Design exception methods help justify deviation from design guide requirements but omit consideration of important factors. Resource distribution is too often based on dividing up what's available rather than helping determine how much should be spent. Point systems serve well as procedural tools but are employed primarily to justify decisions that have already been made. In addition, the tools aren't very scalable: a system-level method of analysis seldom works at the project level and vice versa. In conjunction with the issues cited above, the operation and financing of the road and highway system is often the subject of criticisms that raise fundamental questions: What is the best way to determine how much money should be spent on a city's or a county's road network? Is the size and quality of the rural road system appropriate? Is too much or too little money spent on road work? What parts of the system should be upgraded and in what sequence? Do truckers receive a hidden subsidy from other motorists? Do transportation professionals evaluate road situations from too narrow a perspective? In considering these issues and questions, the author concluded that it would be of value if one could identify and develop a new method that would overcome the shortcomings of existing methods, be scalable, be capable of being understood by the general public, and utilize a broad viewpoint. After trying out a number of concepts, it appeared that a good approach would be to view the road network as a sub-component of a much larger system that also includes vehicles, people, goods-in-transit, and all the ancillary items needed to make the system function. Highway investment decisions could then be made on the basis of how they affect the total cost of operating the total system. A concept, named the "Total Cost of Transportation" method, was then developed and tested. The concept rests on four key principles: 1) that roads are but one sub-system of a much larger 'Road Based Transportation System', 2) that the size and activity level of the overall system are determined by market forces, 3) that the sum of everything expended, consumed, given up, or permanently reserved in building the system and generating the activity that results from the market forces represents the total cost of transportation, and 4) that the economic purpose of making road improvements is to minimize that total cost. To test the practical value of the theory, a special database and spreadsheet model of Iowa's county road network was developed.
This involved creating a physical model to represent the size, characteristics, activity levels, and the rates at which the activities take place, developing a companion economic cost model, then using the two in tandem to explore a variety of issues. Ultimately, the theory and model proved capable of being used at full-system, partial-system, single-segment, project, and general design-guide levels of analysis. The method appeared to be capable of remedying many of the defects of existing work methods and of answering society's transportation questions from a new perspective.
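A minimal sketch of the accounting logic implied by the four principles above, with entirely hypothetical figures; the actual study relied on a detailed database and spreadsheet model of Iowa's county road network rather than anything this simple.

    # Total-cost-of-transportation idea: the road network is one subsystem, and an
    # improvement is justified only if it lowers the cost of the whole system.
    # All figures below are hypothetical annual costs for one county segment.
    def total_system_cost(road_cost, vehicle_cost, user_time_cost, goods_cost, ancillary_cost):
        return road_cost + vehicle_cost + user_time_cost + goods_cost + ancillary_cost

    base = total_system_cost(road_cost=1.2e6, vehicle_cost=3.5e6,
                             user_time_cost=2.0e6, goods_cost=1.1e6, ancillary_cost=0.4e6)
    improved = total_system_cost(road_cost=1.5e6,     # higher agency spending
                                 vehicle_cost=3.1e6,  # lower vehicle operating cost
                                 user_time_cost=1.8e6, goods_cost=1.0e6, ancillary_cost=0.4e6)

    print(f"Improvement justified: {improved < base} (saves {base - improved:,.0f} per year)")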
Abstract:
Road dust is caused by wind entraining fine material from the roadway surface, and the main source of Iowa road dust is attrition of the carbonate rock used as aggregate. The mechanisms of dust suppression can be considered as two processes: increasing the particle size of the surface fines by agglomeration and inhibiting degradation of the coarse material. Agglomeration may occur by capillary tension in the pore water, surfactants that increase bonding between clay particles, and cements that bind the mineral matter together. Hygroscopic dust suppressants such as calcium chloride have short durations of effectiveness because capillary tension is the primary agglomeration mechanism. Somewhat more permanent methods of agglomeration result from chemicals that cement smaller particles into a mat or into larger particles. The cements include lignosulfonates, resins, and asphalt products. The duration of the cements depends on their solubility and the climate. The only dust palliative that decreases aggregate degradation is shredded shingles, which act as cushions between aggregate particles. It is likely that synthetic polymers also provide some protection against coarse aggregate attrition. Calcium chloride and lignosulfonates are widely used in Iowa. Both palliatives have a useful duration of about 6 months. Calcium chloride is effective with surface soils of moderate fine content and plasticity, whereas lignin works best with materials that have high fine content and high plasticity indices. Bentonite appears to be effective for up to two years, works well with surface materials having low fines and plasticity, and works well with limestone aggregate. Selection of appropriate dust suppressants should be based on characterization of the road surface material. Estimation of dosage rates for potential palliatives can be based on data from this report, technical reports, information from reliable vendors, or laboratory screening tests. The selection should include economic analysis of construction and maintenance costs. The effectiveness of the treatment should be evaluated by any of the field performance measuring techniques discussed in this report. Novel dust control agents that need research for potential application in Iowa include acidulated soybean oil (soapstock), soybean oil, ground-up asphalt shingles, and foamed asphalt. New laboratory evaluation protocols to screen additives for potential effectiveness and determine dosage are needed. A modification of ASTM D 560 to estimate the freeze-thaw and wet-dry durability of Portland cement stabilized soils would be a starting point for improved laboratory testing of dust palliatives.
Abstract:
A collaborative study on Raman spectroscopy and microspectrophotometry (MSP) was carried out by members of the ENFSI (European Network of Forensic Science Institutes) European Fibres Group (EFG) on different dyed cotton fabrics. The detection limits of the two methods were tested on two cotton sets with a dye concentration ranging from 0.5 to 0.005% (w/w). This survey shows that it is possible to detect the presence of dye in fibres at concentrations below those detectable by the traditional methods of light microscopy and MSP. The MSP detection limit for the dyes used in this study was found to be a concentration of 0.5% (w/w). At this concentration, the fibres appear colourless with light microscopy. Raman spectroscopy clearly shows a higher potential, detecting concentrations of dyes as low as 0.05% for the yellow dye RY145 and 0.005% for the blue dye RB221. This detection limit was found to depend both on the chemical composition of the dye itself and on the analytical conditions, particularly the laser wavelength. Furthermore, analysis of binary mixtures of dyes showed that while the minor dye was detected at 1.5% (w/w) (30% of the total dye concentration) using microspectrophotometry, it was detected at a level as low as 0.05% (w/w) (10% of the total dye concentration) using Raman spectroscopy. This work also highlights the importance of a flexible Raman instrument equipped with several lasers at different wavelengths for the analysis of dyed fibres. The operator and the setup of the analytical conditions are also of prime importance in order to obtain high-quality spectra. Changing the laser wavelength is important to detect different dyes in a mixture.
Abstract:
Due to intense international competition, demanding and sophisticated customers, and diverse transforming technological change, organizations need to renew their products and services by allocating resources to research and development (R&D). Managing R&D is complex but vital for many organizations to survive in a dynamic, turbulent environment. Thus, the increased interest among decision-makers in finding the right performance measures for R&D is understandable. The measures or evaluation methods of R&D performance can be utilized for multiple purposes: for strategic control, for justifying the existence of R&D, for providing information and improving activities, as well as for the purposes of motivating and benchmarking. The earlier research in the field of R&D performance analysis has generally focused either on the activities and considerable factors and dimensions - e.g. strategic perspectives, purposes of measurement, levels of analysis, types of R&D or phases of the R&D process - prior to the selection of R&D performance measures, or on proposed principles or the actual implementation of the selection or design processes of R&D performance measures or measurement systems. This study aims at integrating the consideration of essential factors and dimensions of R&D performance analysis into developed selection processes of R&D measures, which have been applied in real-world organizations. The earlier models for corporate performance measurement that can be found in the literature are to some extent adaptable also to the development of measurement systems and the selection of measures in R&D activities. However, it is necessary to emphasize the special aspects related to the measurement of R&D performance in a way that makes the development of new approaches, especially for R&D performance measure selection, necessary: First, the special characteristics of R&D - such as the long time lag between the inputs and outcomes, as well as the overall complexity and difficult coordination of activities - influence the R&D performance analysis problems, such as the need for more systematic, objective, balanced and multi-dimensional approaches to R&D measure selection, as well as the incompatibility of R&D measurement systems with other corporate measurement systems and vice versa. Secondly, the above-mentioned characteristics and challenges bring forth the significance of the influencing factors and dimensions that need to be recognized in order to derive the selection criteria for measures and choose the right R&D metrics, which is the most crucial step in the measurement system development process. The main purpose of this study is to support the management and control of the research and development activities of organizations by increasing the understanding of R&D performance analysis, clarifying the main factors related to the selection of R&D measures, and providing novel types of approaches and methods for systematizing the whole strategy- and business-based selection and development process of R&D indicators. The final aim of the research is to support the management in their decision making on R&D with suitable, systematically chosen measures or evaluation methods of R&D performance. Thus, the emphasis in most sub-areas of the present research has been on the promotion of the selection and development process of R&D indicators with the help of different tools and decision support systems, i.e. the research has normative features through providing guidelines by novel types of approaches.
The gathering of data and conducting case studies in metal and electronic industry companies, in the information and communications technology (ICT) sector, and in non-profit organizations helped us to formulate a comprehensive picture of the main challenges of R&D performance analysis in different organizations, which is essential, as recognition of the most important problem areas is a very crucial element in the constructive research approach utilized in this study. Multiple practical benefits regarding the defined problem areas could be found in the various constructed approaches presented in this dissertation: 1) the selection of R&D measures became more systematic when compared to the empirical analysis, as it was common that there were no systematic approaches utilized in the studied organizations earlier; 2) the evaluation methods or measures of R&D chosen with the help of the developed approaches can be more directly utilized in the decision-making, because of the thorough consideration of the purpose of measurement, as well as other dimensions of measurement; 3) more balance to the set of R&D measures was desired and gained through the holistic approaches to the selection processes; and 4) more objectivity was gained through organizing the selection processes, as the earlier systems were considered subjective in many organizations. Scientifically, this dissertation aims to make a contribution to the present body of knowledge of R&D performance analysis by facilitating dealing with the versatility and challenges of R&D performance analysis, as well as the factors and dimensions influencing the selection of R&D performance measures, and by integrating these aspects to the developed novel types of approaches, methods and tools in the selection processes of R&D measures, applied in real-world organizations. In the whole research, facilitation of dealing with the versatility and challenges in R&D performance analysis, as well as the factors and dimensions influencing the R&D performance measure selection are strongly integrated with the constructed approaches. Thus, the research meets the above-mentioned purposes and objectives of the dissertation from the scientific as well as from the practical point of view.
Abstract:
The development of a modified matrix-geometric technique for a more general queue is presented in this work. The queueing system consists of several queues with finite capacities. The splitting of PH-type distributions is also studied in this work. The structure corresponding to the resulting Markov chain contains independent matrices with a QBD structure. Certain finite-state cases are also treated. Presenting the solution in matrix-geometric form, by modifying the matrix-geometric solution, is the result of this thesis.
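For context, the classical fixed-point iteration for the rate matrix R of a QBD process is the standard matrix-geometric technique that such work typically modifies; the sketch below is that textbook iteration on a toy M/Erlang-2/1 queue with assumed rates, not the modified solution developed in this thesis.

    import numpy as np

    # Fixed-point iteration R_{k+1} = -(A0 + R_k^2 A2) A1^{-1}; its minimal
    # nonnegative solution gives the matrix-geometric stationary form pi_{n+1} = pi_n R.
    # Blocks describe a toy M/Erlang-2/1 queue (arrival rate lam, two service phases
    # at rate 2*mu each); values are assumed for illustration.
    lam, mu = 0.6, 1.0
    A0 = lam * np.eye(2)                                   # level up (arrival)
    A1 = np.array([[-(lam + 2*mu), 2*mu],
                   [0.0, -(lam + 2*mu)]])                  # within-level transitions
    A2 = np.array([[0.0, 0.0],
                   [2*mu, 0.0]])                           # level down (service completion)

    R = np.zeros((2, 2))
    for _ in range(200):
        R = -(A0 + R @ R @ A2) @ np.linalg.inv(A1)

    print("R =", np.round(R, 4))
    print("residual:", np.linalg.norm(A0 + R @ A1 + R @ R @ A2))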
Abstract:
This diploma work considers the advantages of coherent anti-Stokes Raman scattering (CARS) spectrometry and various methods of quantitative analysis of substance composition that it enables. The basic methods and concepts of adaptive analysis are presented. On the basis of these methods, an algorithm for automatic measurement of the size of a target component's scattering band in a CARS spectrum is developed. The algorithm uses the known full spectrum of the target substance and compares it with a CARS spectrum. The form of the differential spectrum is used as feedback to control the accuracy of the matching. To exclude the influence of background in CARS spectra, the differential spectrum is analysed by means of its second derivative. The algorithm is tested on simulated simple spectra and on experimentally obtained spectra of organic compounds.
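A minimal sketch of the matching idea described above, on synthetic spectra and with an assumed brute-force search over scale factors (the thesis's actual algorithm and experimental data are not reproduced here): the known reference spectrum is scaled and subtracted, and the second derivative of the differential spectrum is used as the feedback, so that a smooth non-resonant background does not penalize the fit.

    import numpy as np

    # Synthetic example: recover the size of the target band from a CARS-like
    # spectrum containing a slowly varying background. All spectra are made up.
    x = np.linspace(0, 100, 1000)
    reference = np.exp(-((x - 40) / 3.0) ** 2)      # known band of the target component
    background = 0.2 + 0.001 * x                    # slowly varying background
    measured = 0.35 * reference + background        # synthetic spectrum, true band size 0.35

    def roughness(scale):
        diff = measured - scale * reference         # differential spectrum
        return np.sum(np.diff(diff, n=2) ** 2)      # energy of its second derivative

    scales = np.linspace(0.0, 1.0, 1001)
    best = scales[np.argmin([roughness(s) for s in scales])]
    print(f"estimated band size: {best:.3f}")       # close to 0.35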
Abstract:
During the past century, an increasingly diverse world provided us with opportunities for intercultural communication; especially the growth of commerce at all levels, from domestic to international, has made the combination of the theories of intercultural communication and international business necessary. As one of the main beneficiaries of international business in recent years, companies in the airline industry have developed their international markets. For instance, Finnair has developed its Asian strategy, which responds to the increasing market demand for flights from Europe to Asia in the new millennium. Therefore, the company manages marketing communication in a global environment and becomes a suitable case for studying the theories of intercultural communication in the context of international marketing. Finnair implemented a large number of international advertisements to promote its Asian routes, in which Asia has been constructed as a number of exotic destinations. Meanwhile, the company itself, as a provider of these destinations, has also been constructed contrastively. Thus, this thesis aims at researching how Finnair constructs Asia and the company itself in the new millennium, and how these constructions compare with the theories of intercultural communication. This research applied the theories of international marketing, intercultural communication and culture. In order to analyze the collected corpora, consisting of Finnair’s international advertisements and its annual reports in the new millennium, the methods of content analysis and discourse analysis were used in this research. As a result, Finnair has purposefully applied the essentialist approach to intercultural communication and constructed Asia as an exotic “Other” due to the company’s market orientation. Meanwhile, Finnair has also constructed two identities for the company itself based on the same approach: as an international airline provider between Europe and Asia, as well as a part of Finnish society. The combination of intercultural communication and international marketing theories, together with the combination of the methods of content analysis and discourse analysis, ensures the originality of this paper.
Abstract:
The purpose of this academic economic geographical dissertation is to study and describe how competitiveness in the Finnish paper industry has developed during 2001–2008. During these years, the Finnish paper industry has faced economically challenging times. This dissertation attempts to fill the existing gap between theoretical and empirical discussions concerning economic geographical issues in the paper industry. The main research questions are: How have the supply chain costs and margins developed during 2001–2008? How do sales prices, transportation, and fixed and variable costs correlate with gross margins in a spatial context? The research object for this case study is a typical large Finnish paper mill that exports over 90% of its production. The longitudinal economic research data were obtained from the case mill’s controlled economic system, and correlation (R²) analysis was used as the main research method. The time series data cover monthly economic and manufacturing observations from the mill from 2001 to 2008. The study reveals the development of prices, costs and transportation at the case mill, and it shows how economic variables correlate with the paper mill’s gross margins in various markets in Europe. The research methods of economic geography offer perspectives that pay attention to spatial (market) heterogeneity. This type of research has been quite scarce in the research tradition of Finnish economic geography and supply chain management. This case study gives new insight into that research tradition and its applications. As a concrete empirical result, this dissertation states that the competitive advantages of the Finnish paper industry were significantly weakened during 2001–2008 by low paper prices, costly manufacturing and expensive transportation. Statistical analysis exposes that, in several important markets, transport costs lower gross margins as much as decreasing paper prices do, which was a new finding. Paper companies should continuously pay attention to lowering manufacturing and transport costs to achieve more profitable economic performance. A mill location far from the markets clearly has an economic impact on paper manufacturing, as paper demand is decreasing and oversupply is pressuring paper prices down. Therefore, market and economic forecasting in the paper industry is advantageous at the country and product levels, while simultaneously taking into account the specific spatial dimensions of economic geography.
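A minimal sketch of the correlation (R²) analysis referred to above, run on invented monthly series rather than the case mill's 2001–2008 accounting data; the variable names and magnitudes are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)
    months = 96                                             # 2001-2008, monthly
    transport_cost = 50 + rng.normal(0, 5, months)          # EUR/tonne (toy)
    paper_price = 520 - 0.4 * np.arange(months) + rng.normal(0, 10, months)
    gross_margin = paper_price - transport_cost - 380 + rng.normal(0, 8, months)

    def r_squared(x, y):
        # squared Pearson correlation between two monthly series
        return np.corrcoef(x, y)[0, 1] ** 2

    print("R^2(transport cost, gross margin):", round(r_squared(transport_cost, gross_margin), 3))
    print("R^2(paper price,    gross margin):", round(r_squared(paper_price, gross_margin), 3))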
Abstract:
In any decision making under uncertainties, the goal is mostly to minimize the expected cost. The minimization of cost under uncertainties is usually done by optimization. For simple models, the optimization can easily be done using deterministic methods. However, many models in practice contain complex and varying parameters that cannot easily be taken into account using the usual deterministic methods of optimization. Thus, it is very important to look for other methods that can be used to gain insight into such models. The MCMC method is one of the practical methods that can be used for optimization of stochastic models under uncertainty. This method is based on simulation and provides a general methodology which can be applied to nonlinear and non-Gaussian state models. The MCMC method is very important for practical applications because it is a unified estimation procedure which simultaneously estimates both parameters and state variables. MCMC computes the distribution of the state variables and parameters given the data measurements. The MCMC method is also faster in terms of computing time when compared to other optimization methods. This thesis discusses the use of Markov chain Monte Carlo (MCMC) methods for the optimization of stochastic models under uncertainties. The thesis begins with a short discussion of Bayesian inference, MCMC and stochastic optimization methods. Then an example is given of how MCMC can be applied for maximizing production at minimum cost in a chemical reaction process. It is observed that this method performs well in optimizing the given cost function with very high certainty.
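A minimal random-walk Metropolis sketch of the MCMC approach discussed above, applied to a toy exponential-decay model with synthetic data; the chemical reaction example used in the thesis is not reproduced, and the prior, proposal width and burn-in length are assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.linspace(0, 5, 20)
    k_true, sigma = 0.8, 0.05
    y = np.exp(-k_true * t) + rng.normal(0, sigma, t.size)   # synthetic measurements

    def log_post(k):
        if k <= 0:                                    # flat prior on k > 0
            return -np.inf
        resid = y - np.exp(-k * t)
        return -0.5 * np.sum(resid**2) / sigma**2     # Gaussian log-likelihood

    chain, k = [], 0.3                                # deliberately poor starting value
    logp = log_post(k)
    for _ in range(20_000):
        prop = k + rng.normal(0, 0.05)                # random-walk proposal
        logp_prop = log_post(prop)
        if np.log(rng.uniform()) < logp_prop - logp:  # Metropolis accept/reject
            k, logp = prop, logp_prop
        chain.append(k)

    samples = np.array(chain[5_000:])                 # discard burn-in
    print(f"posterior mean k = {samples.mean():.3f} +/- {samples.std():.3f}")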
Abstract:
Rural electrification is characterized by geographical dispersion of the population, low consumption, high investment by consumers and high cost. Moreover, solar radiation constitutes an inexhaustible source of energy, and photovoltaic panels are used to convert it into electricity. In this study, the equations presented by the manufacturer for the current and power of small photovoltaic systems were adjusted to field conditions. The mathematical analysis was performed on the rural photovoltaic system I-100 from ISOFOTON, with a power of 300 Wp, located at the Experimental Farm Lageado of FCA/UNESP. For the development of such equations, the circuitry of photovoltaic cells was studied so that iterative numerical methods could be applied to determine the electrical parameters and to identify possible errors in adapting the equations from the literature to reality. A simulation of a photovoltaic panel was then proposed through mathematical equations adjusted according to the local radiation data. The results present equations that provide realistic answers to the user and may assist in the design of these systems, since the calculated maximum power limit ensures the supply of the energy generated. This realistic sizing helps establish the possible applications of solar energy for the rural producer and informs about the real possibilities of generating electricity from the sun.
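A hedged sketch of the implicit single-diode equation that such field adjustments typically start from, solved for the current at each voltage with Newton's method; the parameter values below are illustrative assumptions, not the values fitted for the ISOFOTON I-100 panel in this study.

    import numpy as np

    # Single-diode model: I = Iph - I0*(exp((V + I*Rs)/a) - 1) - (V + I*Rs)/Rsh,
    # implicit in I, solved by Newton iteration. Parameters are assumed.
    Iph, I0 = 6.5, 2e-7          # photo-current and diode saturation current (A)
    Rs, Rsh = 0.3, 200.0         # series and shunt resistance (ohm)
    a = 1.2                      # modified ideality factor n*Ns*Vt (V), assumed

    def current(V, iters=50):
        I = Iph                                                 # start at short-circuit level
        for _ in range(iters):
            e = np.exp((V + I * Rs) / a)
            f = Iph - I0 * (e - 1.0) - (V + I * Rs) / Rsh - I   # residual of the implicit equation
            df = -I0 * Rs / a * e - Rs / Rsh - 1.0              # derivative with respect to I
            I -= f / df                                         # Newton update
        return I

    V = np.linspace(0.0, 20.0, 6)
    I = np.array([current(v) for v in V])
    print(np.round(np.column_stack([V, I, V * I]), 2))          # voltage, current, power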
Abstract:
Objective: to compare the frequency and severity of diagnosed injuries between pedestrians struck by motor vehicles and victims of other blunt trauma mechanisms. Methods: retrospective analysis of data from the Trauma Registry, including adult blunt trauma patients admitted from 2008 to 2010. We reviewed the mechanism of trauma, vital signs on admission and the injuries identified. Severity stratification was carried out using the RTS, AIS-90, ISS and TRISS. Patients were assigned to group A (pedestrians struck by motor vehicles) or B (victims of other mechanisms of blunt trauma). Variables were compared between groups. We considered p<0.05 as significant. Results: a total of 5785 cases were included, 1217 (21.0%) of which were in group A. Pedestrians struck by vehicles presented (p<0.05) higher mean age, mean heart rate upon admission, mean ISS and mean AIS in the head, thorax, abdomen and extremities, as well as lower mean Glasgow coma scale, arterial blood pressure upon admission, RTS and TRISS. They also had a higher frequency of epidural hematomas, subdural hematomas, subarachnoid hemorrhage, brain swelling, cerebral contusions, rib fractures, pneumothorax, flail chest and pulmonary contusions, as well as pelvic, upper limb and lower limb fractures. Conclusion: pedestrians struck by vehicles sustained intracranial, thoracic, abdominal and extremity injuries more frequently than victims of other blunt trauma mechanisms as a group. They also presented worse physiological and anatomical severity of trauma.
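For reference, one of the severity scores named above, the Revised Trauma Score (RTS), combines coded Glasgow coma scale, systolic blood pressure and respiratory rate values; the sketch below uses the commonly published coefficients and coding ranges from the general trauma literature, which are assumptions here rather than values reported by this study.

    # Revised Trauma Score with the standard published weights and coded ranges.
    def code_gcs(gcs):
        return 4 if gcs >= 13 else 3 if gcs >= 9 else 2 if gcs >= 6 else 1 if gcs >= 4 else 0

    def code_sbp(sbp):
        return 4 if sbp > 89 else 3 if sbp >= 76 else 2 if sbp >= 50 else 1 if sbp >= 1 else 0

    def code_rr(rr):
        return 0 if rr == 0 else 1 if rr <= 5 else 2 if rr <= 9 else 4 if rr <= 29 else 3

    def revised_trauma_score(gcs, sbp, rr):
        return 0.9368 * code_gcs(gcs) + 0.7326 * code_sbp(sbp) + 0.2908 * code_rr(rr)

    # An alert patient with normal vital signs scores the maximum of about 7.84
    print(round(revised_trauma_score(gcs=15, sbp=120, rr=16), 4))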
Abstract:
The objectives of this study were to evaluate and compare the use of linear and nonlinear methods for the analysis of heart rate variability (HRV) in healthy subjects and in patients after acute myocardial infarction (AMI). Heart rate (HR) was recorded for 15 min in the supine position in 10 patients with AMI taking β-blockers (aged 57 ± 9 years) and in 11 healthy subjects (aged 53 ± 4 years). HRV was analyzed in the time domain (RMSSD and RMSM) and in the frequency domain using low- and high-frequency bands in normalized units (LFnu and HFnu) and the LF/HF ratio; approximate entropy (ApEn) was also determined. There was a correlation (P < 0.05) of the RMSSD, RMSM, LFnu, HFnu, and LF/HF ratio indexes with the ApEn of the AMI group on the 2nd day (r = 0.87, 0.65, 0.72, 0.72, and 0.64) and 7th day (r = 0.88, 0.70, 0.69, 0.69, and 0.87) and of the healthy group (r = 0.63, 0.71, 0.63, 0.63, and 0.74), respectively. The median HRV indexes of the AMI group on the 2nd and 7th day differed from those of the healthy group (P < 0.05): RMSSD = 10.37, 19.95, 24.81; RMSM = 23.47, 31.96, 43.79; LFnu = 0.79, 0.79, 0.62; HFnu = 0.20, 0.20, 0.37; LF/HF ratio = 3.87, 3.94, 1.65; ApEn = 1.01, 1.24, 1.31, respectively. There was agreement between the methods, suggesting that they have the same power to evaluate autonomic modulation of HR in both AMI patients and healthy subjects. AMI contributed to a reduction in cardiac signal irregularity, higher sympathetic modulation and lower vagal modulation.
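A minimal sketch of two of the HRV indices used above, RMSSD (time domain) and approximate entropy ApEn (nonlinear), computed on a synthetic RR-interval series; the settings m = 2 and r = 0.2·SD follow common convention and are assumptions rather than the authors' exact choices.

    import numpy as np

    def rmssd(rr_ms):
        # root mean square of successive RR-interval differences
        d = np.diff(rr_ms)
        return np.sqrt(np.mean(d ** 2))

    def apen(x, m=2, r_factor=0.2):
        # approximate entropy with Chebyshev distance and tolerance r = r_factor * SD
        x = np.asarray(x, dtype=float)
        r = r_factor * np.std(x)
        def phi(m):
            n = len(x) - m + 1
            templates = np.array([x[i:i + m] for i in range(n)])
            counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r) for t in templates]
            return np.mean(np.log(counts))
        return phi(m) - phi(m + 1)

    rng = np.random.default_rng(3)
    rr = 800 + np.cumsum(rng.normal(0, 5, 300)) + rng.normal(0, 20, 300)  # toy RR series (ms)
    print(f"RMSSD = {rmssd(rr):.1f} ms, ApEn = {apen(rr):.2f}")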
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. This is done by considering an Ebola outbreak in the Democratic Republic of Congo (former Zaïre) in 1995 as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation behind choosing MCMC over other existing methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors. Moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to attain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small stochasticity is added into the model; the results are then similar to the ones obtained from the deterministic Ebola model, even if the methods of computing the likelihood function are different; (2) the model error covariance matrix is different from zero, i.e. a considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we would know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions then show larger uncertainties, in accordance with the assumption of an incomplete model.
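A minimal sketch of a deterministic SEIR model of the kind fitted above; the parameter values, population size and initial conditions are illustrative assumptions, not the estimates obtained in the thesis for the 1995 outbreak.

    import numpy as np
    from scipy.integrate import odeint

    beta, kappa, gamma = 0.35, 1/6.3, 1/7.0      # transmission, 1/incubation, 1/infectious period (assumed)
    N = 5_000_000                                # assumed population size

    def seir(y, t):
        S, E, I, R = y
        dS = -beta * S * I / N
        dE = beta * S * I / N - kappa * E
        dI = kappa * E - gamma * I
        dR = gamma * I
        return [dS, dE, dI, dR]

    t = np.linspace(0, 200, 201)                 # days
    y0 = [N - 1, 0, 1, 0]                        # one initial infectious case
    S, E, I, R = odeint(seir, y0, t).T

    R0 = beta / gamma                            # basic reproduction number for this SEIR
    print(f"R0 = {R0:.2f}, peak infectious = {I.max():.0f} on day {t[np.argmax(I)]:.0f}")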