972 results for MONTE-CARLO METHODS
Abstract:
BACKGROUND: Anal condylomata acuminata (ACA) are caused by human papillomavirus (HPV) infection, which is transmitted by close physical and sexual contact. Surgical treatment of ACA has an overall success rate of 71% to 93%, with a recurrence rate between 4% and 29%. The aim of this study was to assess a possible association between HPV type and ACA recurrence after surgical treatment. METHODS: We performed a retrospective analysis of 140 consecutive patients who underwent surgery for ACA from January 1990 to December 2005 at our tertiary University Hospital. We confirmed ACA by histopathological analysis and determined the HPV type using the polymerase chain reaction. Patients gave consent for HPV testing and completed a questionnaire. We looked at the association of ACA, HPV type, and HIV disease. We used the chi-square test, Monte Carlo simulation, and the Wilcoxon test for statistical analysis. RESULTS: Among the 140 patients (123 M/17 F), HPV 6 and 11 were the most frequently encountered types (51% and 28%, respectively). Recurrence occurred in 35 (25%) patients. HPV 11 was present in 19 (41%) of these recurrences, which is statistically significant when compared with other HPV types. There was no significant difference between recurrence rates in the 33 (24%) HIV-positive and the HIV-negative patients. CONCLUSIONS: HPV 11 is associated with a higher recurrence rate of ACA. This makes routine clinical HPV typing questionable. Follow-up is required to identify recurrence and treat it early, especially if HPV 11 has been identified.
Abstract:
A number of geophysical methods, such as ground-penetrating radar (GPR), have the potential to provide valuable information on hydrological properties in the unsaturated zone. In particular, the stochastic inversion of such data within a coupled geophysical-hydrological framework may allow for the effective estimation of vadose zone hydraulic parameters and their corresponding uncertainties. A critical issue in stochastic inversion is choosing prior parameter probability distributions from which potential model configurations are drawn and tested against observed data. A well chosen prior should reflect as honestly as possible the initial state of knowledge regarding the parameters and be neither overly specific nor too conservative. In a Bayesian context, combining the prior with available data yields a posterior state of knowledge about the parameters, which can then be used statistically for predictions and risk assessment. Here we investigate the influence of prior information regarding the van Genuchten-Mualem (VGM) parameters, which describe vadose zone hydraulic properties, on the stochastic inversion of crosshole GPR data collected under steady state, natural-loading conditions. We do this using a Bayesian Markov chain Monte Carlo (MCMC) inversion approach, considering first noninformative uniform prior distributions and then more informative priors derived from soil property databases. For the informative priors, we further explore the effect of including information regarding parameter correlation. Analysis of both synthetic and field data indicates that the geophysical data alone contain valuable information regarding the VGM parameters. However, significantly better results are obtained when we combine these data with a realistic, informative prior.
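As a minimal illustration of the Bayesian MCMC inversion idea described above (a prior combined with a data likelihood, sampled with a Metropolis rule), the sketch below fits a single parameter of a toy forward model. The forward model, noise level, and all numbers are hypothetical, not taken from the study.

```python
# Metropolis MCMC sketch: uniform prior + Gaussian likelihood for a
# toy 1-parameter "forward model". All names/numbers are illustrative.
import math
import random

random.seed(0)

def forward(theta):
    # Hypothetical forward model mapping a parameter to a predicted datum.
    return 2.0 * theta + 1.0

def log_posterior(theta, datum, sigma, lo, hi):
    # Noninformative uniform prior on [lo, hi] plus Gaussian likelihood.
    if not (lo <= theta <= hi):
        return float("-inf")
    r = datum - forward(theta)
    return -0.5 * (r / sigma) ** 2

def metropolis(datum, sigma=0.1, lo=0.0, hi=5.0, n=20000, step=0.3):
    theta = 2.5
    lp = log_posterior(theta, datum, sigma, lo, hi)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_posterior(prop, datum, sigma, lo, hi)
        # Metropolis acceptance rule.
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

# "Observed" datum generated with true theta = 1.5.
samples = metropolis(datum=forward(1.5))
post_mean = sum(samples[5000:]) / len(samples[5000:])
```

After burn-in, the posterior mean should recover the true parameter; a more informative prior would simply add its log-density to `log_posterior`.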
Abstract:
The present work focuses attention on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
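The Monte Carlo construction of a sampling distribution for a group-level symmetry statistic can be sketched as follows. The statistic and the null model of dyadic interactions used here are simplified stand-ins for the paper's Φ statistic, and all parameters are illustrative.

```python
# Monte Carlo p-value for a (simplified) skew-symmetry statistic:
# simulate many interaction matrices under a symmetric null and compare
# the observed statistic against the resulting sampling distribution.
import random

random.seed(1)

def skew_stat(mat):
    # Share of the total interaction count carried by the skew-symmetric
    # part of the matrix (simplified illustration, not the exact Phi).
    n = len(mat)
    total = sum(mat[i][j] for i in range(n) for j in range(n) if i != j)
    skew = sum(abs(mat[i][j] - mat[j][i])
               for i in range(n) for j in range(i + 1, n))
    return skew / total if total else 0.0

def null_sample(n_ind, n_inter):
    # Under the null, each interaction within a dyad goes either
    # direction with equal probability.
    mat = [[0] * n_ind for _ in range(n_ind)]
    for _ in range(n_inter):
        i, j = random.sample(range(n_ind), 2)  # random direction
        mat[i][j] += 1
    return mat

def mc_p_value(observed, n_ind, n_inter, reps=2000):
    null = [skew_stat(null_sample(n_ind, n_inter)) for _ in range(reps)]
    return sum(1 for s in null if s >= observed) / reps

# A highly asymmetric group (statistic 0.9) should be rejected.
p = mc_p_value(observed=0.9, n_ind=5, n_inter=100)
```

Varying `n_ind` and `n_inter` in such a simulation is also how the power of the test across experimental conditions can be mapped out.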
Abstract:
Geophysical methods have the potential to provide valuable information on hydrological properties in the unsaturated zone. In particular, time-lapse geophysical data, when coupled with a hydrological model and inverted stochastically, may allow for the effective estimation of subsurface hydraulic parameters and their corresponding uncertainties. In this study, we use a Bayesian Markov chain Monte Carlo (MCMC) inversion approach to investigate how much information regarding vadose zone hydraulic properties can be retrieved from time-lapse crosshole GPR data collected at the Arrenaes field site in Denmark during a forced infiltration experiment.
Abstract:
This paper develops an approach to rank testing that nests all existing rank tests and simplifies their asymptotics. The approach is based on the fact that implicit in every rank test there are estimators of the null spaces of the matrix in question. The approach yields many new insights about the behavior of rank testing statistics under the null as well as local and global alternatives in both the standard and the cointegration setting. The approach also suggests many new rank tests based on alternative estimates of the null spaces as well as the new fixed-b theory. A brief Monte Carlo study illustrates the results.
Abstract:
Tumors in non-Hodgkin lymphoma (NHL) patients are often proximal to the major blood vessels in the abdomen or neck. In external-beam radiotherapy, these tumors present a challenge because imaging resolution prevents the beam from being targeted to the tumor lesion without also irradiating the artery wall. This problem has led to potentially life-threatening delayed toxicity. Because radioimmunotherapy has resulted in long-term survival of NHL patients, we investigated whether the absorbed dose (AD) to the artery wall in radioimmunotherapy of NHL is of potential concern for delayed toxicity. SPECT resolution is not sufficient to enable dosimetric analysis of anatomic features as thin as the aortic wall. Therefore, we present a model of aortic wall toxicity based on data from 4 patients treated with (131)I-tositumomab. METHODS: Four NHL patients with periaortic tumors were administered pretherapeutic (131)I-tositumomab. Abdominal SPECT and whole-body planar images were obtained at 48, 72, and 144 h after tracer administration. Blood-pool activity concentrations were obtained from regions of interest drawn on the heart on the planar images. Tumor and blood activity concentrations, scaled to therapeutic administered activities (both standard and myeloablative), were input into a geometry and tracking model (GEANT, version 4) of the aorta. The simulated energy deposited in the arterial walls was collected and fitted, and the AD and biologic effective dose values to the aortic wall and tumors were obtained for standard therapeutic and hypothetical myeloablative administered activities. RESULTS: Arterial wall ADs from standard therapy were lower (0.6-3.7 Gy) than those typical of external-beam therapy, as were the tumor ADs (1.4-10.5 Gy). The ratios of tumor AD to arterial wall AD were greater for radioimmunotherapy by a factor of 1.9-4.0.
For myeloablative therapy, artery wall ADs were in general less than those typical for external-beam therapy (9.4-11.4 Gy for 3 of 4 patients) but comparable for 1 patient (32.6 Gy). CONCLUSION: Blood vessel radiation dose can be estimated using the software package 3D-RD combined with GEANT modeling. The dosimetry analysis suggested that arterial wall toxicity is highly unlikely in standard dose radioimmunotherapy but should be considered a potential concern and limiting factor in myeloablative therapy.
Abstract:
PURPOSE: To implement and characterize an isotropic three-dimensional cardiac T2 mapping technique. METHODS: A self-navigated three-dimensional radial segmented balanced steady-state free precession pulse sequence with an isotropic 1.7-mm spatial resolution was implemented at 3T with a variable T2 preparation module. Bloch equation and Monte Carlo simulations were performed to determine the influence of the heart rate, B1 inhomogeneity and noise on the T2 fitting accuracy. In a phantom study, the accuracy of the pulse sequence was studied through comparison with a gold-standard spin-echo T2 mapping method. The robustness and homogeneity of the technique were ascertained in a study of 10 healthy adult human volunteers, while first results obtained in patients are reported. RESULTS: The numerical simulations demonstrated that the heart rate and B1 inhomogeneity cause only minor deviations in the T2 fitting, whereas the phantom study showed good agreement of the technique with the gold standard. The volunteer study demonstrated an average myocardial T2 of 40.5 ± 3.3 ms and a <15% T2 gradient in the base-apex and anterior-inferior direction. In three patients, elevated T2 values were measured in regions with expected edema. CONCLUSION: This respiratory self-navigated isotropic three-dimensional technique allows for accurate and robust in vitro and in vivo T2 quantification. Magn Reson Med 73:1549-1554, 2015. © 2014 Wiley Periodicals, Inc.
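A Monte Carlo simulation of the kind used to assess the influence of noise on T2 fitting accuracy might look like the following sketch, which fits a mono-exponential S = S0·exp(-TE/T2) by log-linear regression over repeated noisy trials. The T2-preparation times and noise level are assumed for illustration, not taken from the paper.

```python
# Monte Carlo check of mono-exponential T2 fitting accuracy under noise.
# Signals S = exp(-TE/T2) are simulated at a few T2-prep times, Gaussian
# noise is added, and T2 is recovered by log-linear least squares.
import math
import random

random.seed(2)

TE_PREP = [0.0, 30.0, 60.0]   # assumed T2-preparation times in ms
T2_TRUE = 40.5                # ms; the average myocardial value reported

def fit_t2(signals):
    # Log-linear least squares: log S = log S0 - TE/T2.
    xs, ys = TE_PREP, [math.log(s) for s in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -1.0 / slope

def one_trial(noise_sd=0.01):
    signals = [max(1e-6, math.exp(-te / T2_TRUE) + random.gauss(0, noise_sd))
               for te in TE_PREP]
    return fit_t2(signals)

# Repeat many noisy trials and look at the mean recovered T2.
estimates = [one_trial() for _ in range(2000)]
mean_t2 = sum(estimates) / len(estimates)
```

Sweeping `noise_sd` (or perturbing the simulated flip angles for B1 effects) in such a loop is the usual way to quantify fitting bias as a function of acquisition conditions.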
Abstract:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model structure constraints using different norms of the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data, and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data is augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we face many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of performing complex flow simulations for each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods use approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model that corrects the ensemble of approximate responses and predicts the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to more accurate and robust uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and exact flow responses. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functional responses. As this problem is ill-posed, its dimensionality must be reduced. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each approximate response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model provides the preliminary evaluation for the two-stage MCMC. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer.
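The two-stage MCMC scheme described above (a cheap proxy screens each proposal, and the exact model runs only for proposals that survive the first stage) can be sketched with toy scalar models; the proxy, the exact model, and all numbers here are purely illustrative.

```python
# Two-stage (delayed acceptance) MCMC sketch: stage 1 accepts/rejects
# with a cheap proxy likelihood; stage 2 corrects with the exact model,
# which therefore runs only for stage-1 survivors.
import math
import random

random.seed(3)

EXACT_CALLS = 0

def exact_model(theta):
    global EXACT_CALLS
    EXACT_CALLS += 1          # stands in for an expensive flow simulation
    return theta ** 2

def proxy_model(theta):
    return theta ** 2 + 0.05  # cheap approximation with a known bias

def log_like(pred, datum, sigma=0.2):
    return -0.5 * ((datum - pred) / sigma) ** 2

def two_stage_mcmc(datum, n=5000, step=0.5):
    theta = 0.5
    ll = log_like(exact_model(theta), datum)
    ll_proxy = log_like(proxy_model(theta), datum)
    samples = []
    for _ in range(n):
        prop = theta + random.gauss(0, step)
        lp1 = log_like(proxy_model(prop), datum)
        # Stage 1: Metropolis test with the proxy only.
        if random.random() < math.exp(min(0.0, lp1 - ll_proxy)):
            # Stage 2: exact-model correction of the proxy decision.
            lp2 = log_like(exact_model(prop), datum)
            ratio = (lp2 - ll) - (lp1 - ll_proxy)
            if random.random() < math.exp(min(0.0, ratio)):
                theta, ll, ll_proxy = prop, lp2, lp1
        samples.append(theta)
    return samples

samples = two_stage_mcmc(datum=1.0)
mean_sq = sum(t * t for t in samples) / len(samples)
```

The stage-2 ratio cancels the proxy bias exactly, so the chain still targets the exact posterior while the expensive model is evaluated far fewer times than the number of proposals.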
Abstract:
This study examined the independent effect of skewness and kurtosis on the robustness of the linear mixed model (LMM), with the Kenward-Roger (KR) procedure, when group distributions are different, sample sizes are small, and sphericity cannot be assumed. Methods: A Monte Carlo simulation study considering a split-plot design involving three groups and four repeated measures was performed. Results: The results showed that when group distributions are different, the effect of skewness on KR robustness is greater than that of kurtosis for the corresponding values. Furthermore, the pairings of skewness and kurtosis with group size were found to be relevant variables when applying this procedure. Conclusions: With sample sizes of 45 and 60, KR is a suitable option for analyzing data when the distributions are: (a) mesokurtic and not highly or extremely skewed, and (b) symmetric with different degrees of kurtosis. With total sample sizes of 30, it is adequate when group sizes are equal and the distributions are: (a) mesokurtic and slightly or moderately skewed, and sphericity is assumed; and (b) symmetric with a moderate or high/extreme violation of kurtosis. Alternative analyses should be considered when the distributions are highly or extremely skewed and sample sizes are small.
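A miniature version of such a Monte Carlo robustness study can be sketched as the empirical Type I error of a simple two-sample mean test under normal versus skewed data; the z-type test and the distributions below are simplified stand-ins for the split-plot LMM/KR design of the paper.

```python
# Monte Carlo robustness sketch: empirical Type I error of a two-sample
# mean (z-type) test when the data are normal vs. skewed (exponential).
import math
import random

random.seed(4)

def reject(xs, ys, crit=1.96):
    # Welch-style z statistic with the nominal 5% two-sided cutoff.
    nx, ny = len(xs), len(ys)
    mx, my = sum(xs) / nx, sum(ys) / ny
    vx = sum((x - mx) ** 2 for x in xs) / (nx - 1)
    vy = sum((y - my) ** 2 for y in ys) / (ny - 1)
    z = (mx - my) / math.sqrt(vx / nx + vy / ny)
    return abs(z) > crit

def type1_rate(sampler, n=30, reps=4000):
    # Both groups drawn from the same distribution, so every rejection
    # is a Type I error.
    hits = 0
    for _ in range(reps):
        if reject([sampler() for _ in range(n)],
                  [sampler() for _ in range(n)]):
            hits += 1
    return hits / reps

rate_normal = type1_rate(lambda: random.gauss(0, 1))
rate_skewed = type1_rate(lambda: random.expovariate(1.0))  # skewness 2
```

Comparing the empirical rates against the nominal 0.05 level across distribution shapes and group sizes is exactly the logic of the robustness study, just with a far simpler test statistic.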
Abstract:
A thorough literature review about the current situation on the implementation of eye lens monitoring has been performed in order to provide recommendations regarding dosemeter types, calibration procedures and practical aspects of eye lens monitoring for interventional radiology personnel. The most relevant data and recommendations from about 100 papers have been analysed and classified into the following topics: current challenges in eye lens monitoring; conversion coefficients, phantoms and calibration procedures for eye lens dose evaluation; correction factors and dosemeters for eye lens dose measurements; dosemeter position and influence of protective devices. The major findings of the review can be summarised as follows: the recommended operational quantity for eye lens monitoring is Hp(3). At present, several dosemeters are available for eye lens monitoring and calibration procedures are being developed. However, in practice, alternative methods are very often used to assess the dose to the eye lens. A summary of correction factors found in the literature for the assessment of the eye lens dose is provided. These factors can give an estimate of the eye lens dose when alternative methods, such as the use of a whole-body dosemeter, are used. A wide range of values is found, indicating the large uncertainty associated with these simplified methods. Reduction factors for the most common protective devices, obtained experimentally and using Monte Carlo calculations, are presented. The paper concludes that a dosemeter placed at collar level outside the lead apron can provide a useful first estimate of the eye lens exposure. However, for workplaces where the estimated annual equivalent dose to the eye lens is close to the dose limit, specific eye lens monitoring should be performed. Finally, training of the involved medical staff on the risks of ionising radiation for the eye lens and on the correct use of protective systems is strongly recommended.
Abstract:
Objective: To perform a comparative dosimetric analysis, based on computer simulations, of temporary balloon implants with 99mTc and balloon brachytherapy with high-dose-rate (HDR) 192Ir, as boosts to radiotherapy. We hypothesized that the two techniques would produce equivalent doses under pre-established conditions of activity and exposure time. Materials and Methods: Simulations of implants with 99mTc-filled and HDR 192Ir-filled balloons were performed with Siscodes/MCNP5, modeling in voxels a magnetic resonance imaging set of a young female patient. Spatial dose rate distributions were determined. In the dosimetric analysis of the protocols, the exposure time and the level of activity required were specified. Results: The 99mTc balloon presented a weighted dose rate in the tumor bed of 0.428 cGy.h-1.mCi-1 at the balloon surface and 0.190 cGy.h-1.mCi-1 at 8-10 mm from the surface, compared with 0.499 and 0.150 cGy.h-1.mCi-1, respectively, for the HDR 192Ir balloon. An exposure time of 24 hours was required for the 99mTc balloon to produce a boost of 10.14 Gy with 1.0 Ci, whereas only 24 minutes with 10.0 Ci segments were required for the HDR 192Ir balloon to produce a boost of 5.14 Gy at the same reference point, or 10.28 Gy in two 24-minute fractions. Conclusion: Temporary 99mTc balloon implantation is an attractive option for adjuvant radiotherapy in breast cancer, because of its availability, economic viability, and similar dosimetry in comparison with HDR 192Ir balloon implantation, the current standard in clinical practice.
Abstract:
This dissertation is based on four articles dealing with the modeling of ozonation. The literature part considers models for hydrodynamics in bubble column simulation and reviews methods for obtaining mass transfer coefficients. The methods presented for obtaining mass transfer are general and can be applied to any gas-liquid system. Ozonation reaction models and methods for obtaining stoichiometric coefficients and reaction rate coefficients for ozonation reactions are discussed in the final section of the literature part. In the first article, ozone gas-liquid mass transfer into water in a bubble column was investigated for different pH values. A more general method for estimation of the mass transfer and Henry's coefficients was developed from the Beltrán method. The ozone volumetric mass transfer coefficient and the Henry's coefficient were determined simultaneously by parameter estimation using a nonlinear optimization method. A minor dependence of the Henry's law constant on pH was detected in the pH range 4-9. In the second article, a new method using the axial dispersion model for estimation of ozone self-decomposition kinetics in a semi-batch bubble column reactor was developed. The reaction rate coefficients for literature equations of ozone decomposition and the gas-phase dispersion coefficient were estimated and compared with literature data. In the pH range 7-10, reaction orders of 1.12 with respect to ozone and 0.51 with respect to the hydroxyl ion were obtained, in good agreement with the literature. The model parameters were determined by parameter estimation using a nonlinear optimization method. Sensitivity analysis was conducted using the objective function method to obtain information about the reliability and identifiability of the estimated parameters.
In the third article, the reaction rate coefficients and the stoichiometric coefficients in the reaction of ozone with the model compound p-nitrophenol were estimated in water at low pH using nonlinear optimization. A novel method for estimation of multireaction model parameters in ozonation was developed. In this method the concentration of unknown intermediate compounds is represented as a residual COD (chemical oxygen demand), calculated from the measured COD and the theoretical COD of the known species. The decomposition rate of p-nitrophenol on the pathway producing hydroquinone was found to be about two times faster than that on the pathway producing 4-nitrocatechol. In the fourth article, the reaction kinetics of p-nitrophenol ozonation was studied in a bubble column at pH 2. Using the new reaction kinetic model presented in the previous article, the reaction kinetic parameters, rate coefficients, and stoichiometric coefficients, as well as the mass transfer coefficient, were estimated by nonlinear estimation. The decomposition rate of p-nitrophenol was found to be equal on the pathway producing hydroquinone and on the pathway producing 4-nitrocatechol. Comparison of the rate coefficients with the case at initial pH 5 indicates that the p-nitrophenol degradation producing 4-nitrocatechol is more selective towards molecular ozone than the reaction producing hydroquinone. The identifiability and reliability of the estimated parameters were analyzed with the Markov chain Monte Carlo (MCMC) method. © All rights reserved. No part of the publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior permission of the author.
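The parameter-estimation approach used throughout the thesis (nonlinear least squares on measured concentrations) can be illustrated with a toy first-order decay model C(t) = C0·exp(-kt) fitted by a one-dimensional golden-section search; the rate constant, sampling times, and noise level are hypothetical.

```python
# Rate-coefficient estimation by nonlinear least squares: fit a
# first-order decay model to noisy concentration data by minimizing
# the sum of squared errors with a golden-section search.
import math
import random

random.seed(5)

K_TRUE, C0 = 0.12, 1.0                      # per-minute rate, initial conc.
TIMES = [0, 2, 4, 6, 8, 10, 15, 20]         # sampling times in minutes

# Synthetic "measured" concentrations with Gaussian noise.
data = [C0 * math.exp(-K_TRUE * t) + random.gauss(0, 0.01) for t in TIMES]

def sse(k):
    # Objective: sum of squared residuals between model and data.
    return sum((c - C0 * math.exp(-k * t)) ** 2 for t, c in zip(TIMES, data))

def golden_min(f, lo, hi, tol=1e-6):
    # 1-D golden-section minimization on [lo, hi].
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

k_hat = golden_min(sse, 0.0, 1.0)
```

In the thesis the same idea is applied to multi-parameter, multi-reaction models, where the 1-D search is replaced by a general nonlinear optimizer and the residual-COD trick supplies the unmeasured intermediate concentrations.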
Abstract:
This thesis focused on statistical analysis methods and proposes the use of Bayesian inference to extract the information contained in experimental data by estimating the parameters of an Ebola model. The model is a system of differential equations describing the behavior and dynamics of Ebola. Two data sets (onset and death data) were both used to estimate parameters, which had not been done by previous researchers (Chowell, 2004). To be able to use both data sets, a new version of the model was built. Model parameters were estimated and then used to calculate the basic reproduction number and to study the disease-free equilibrium. The parameter estimates were useful for determining how well the model fits the data and how good the estimates were, in terms of the information they provided about the possible relationship between variables. The solution showed that the Ebola model fits the observed onset data at 98.95% and the observed death data at 93.6%. Since Bayesian inference cannot be performed analytically, the Markov chain Monte Carlo approach was used to generate samples from the posterior distribution over the parameters. These samples were used to check the accuracy of the model and other characteristics of the target posteriors.
Abstract:
The purpose of this work was to examine the performance of the target organization's procurement process. The aim of the study was to provide the company with information and evaluation criteria that will allow it to develop its capabilities for more effective assessment of its own performance in the future. The study was carried out for the purchasing department of Skanska Oy in Helsinki. The subject of the study was the procurement process for period contracts in indirect procurement on the domestic market. The purpose of the centralized period-contract procurement process is to produce competitive contracts for the company and to achieve better control and transparency of the process. Information about the process under study was collected through interviews, discussion sessions, and company documents. The aim of the data collection was to gain a deeper picture of how the process works, its problem areas, and their causes and consequences. A second perspective for evaluating the process was provided by measuring its lead time. The collected material was classified with a model based on failure mode and effects analysis and with a program based on the Monte Carlo simulation method. As a result, suitable development measures and recommended measurement areas are proposed for the process under study.
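A Monte Carlo simulation of process lead time of the kind mentioned in the abstract can be illustrated by treating the total lead time as a sum of uncertain stage durations; the stages and triangular-distribution parameters below are entirely hypothetical.

```python
# Monte Carlo lead-time sketch: total procurement lead time as a sum of
# stage durations drawn from triangular distributions.
import random

random.seed(6)

# (min, mode, max) working days per process stage -- hypothetical values.
STAGES = {
    "prepare tender": (2, 4, 8),
    "supplier bidding": (5, 10, 20),
    "negotiation": (3, 5, 12),
    "contract signing": (1, 2, 5),
}

def simulate_lead_time():
    # random.triangular takes (low, high, mode).
    return sum(random.triangular(lo, hi, mode)
               for lo, mode, hi in STAGES.values())

runs = sorted(simulate_lead_time() for _ in range(10000))
mean_lt = sum(runs) / len(runs)
p90 = runs[int(0.9 * len(runs))]   # 90th-percentile lead time
```

Comparing the mean against a high percentile such as the 90th makes the right skew of the total lead time visible, which is the kind of insight such a simulation gives beyond a single-point estimate.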