914 results for Error Probability


Relevance:

20.00%

Publisher:

Abstract:

Location information is becoming increasingly necessary as every new smartphone incorporates a GPS (Global Positioning System) receiver, which allows the development of various applications based on it. However, it is not possible to properly receive the GPS signal in indoor environments. For this reason, new indoor positioning systems are being developed. As indoor environments are very challenging scenarios, it is necessary to study the precision of the obtained location information in order to determine whether these new positioning techniques are suitable for indoor positioning.
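
Positioning precision of this kind is usually summarized from repeated fixes against a known ground truth. The following is a minimal sketch (hypothetical data and metrics, not from the study) of common indoor-positioning error statistics:

```python
import numpy as np

def positioning_errors(estimated, truth):
    """Summarize 2-D positioning error between estimated fixes and ground truth (metres)."""
    estimated = np.asarray(estimated, dtype=float)
    truth = np.asarray(truth, dtype=float)
    d = np.linalg.norm(estimated - truth, axis=1)   # per-fix Euclidean error
    return {
        "mean_error_m": d.mean(),
        "rmse_m": np.sqrt((d ** 2).mean()),
        "p95_m": np.percentile(d, 95),              # 95 % circular error radius
    }

# Hypothetical example: five fixes around a ground-truth point at (10, 4)
est = [(10.8, 4.2), (9.5, 3.7), (10.1, 4.9), (11.2, 4.1), (9.9, 3.5)]
gt = [(10.0, 4.0)] * 5
print(positioning_errors(est, gt))
```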

Relevance:

20.00%

Publisher:

Abstract:

This study aimed to develop a hip screening tool that combines relevant clinical risk factors (CRFs) and quantitative ultrasound (QUS) at the heel to determine the 10-yr probability of hip fracture in elderly women. The EPISEM database, comprising approximately 13,000 women 70 yr of age, was derived from two population-based white European cohorts in France and Switzerland. All women had baseline data on CRFs and a baseline measurement of the stiffness index (SI) derived from QUS at the heel. Women were followed prospectively to identify incident fractures. Multivariate analysis was performed to determine the CRFs that contributed significantly to hip fracture risk, and these were used to generate a CRF score. Gradients of risk (GR; RR/SD change) and areas under receiver operating characteristic curves (AUC) were calculated for the CRF score, the SI, and a score combining both. The 10-yr probability of hip fracture was computed for the combined model. Three hundred seven hip fractures were observed over a mean follow-up of 3.2 yr. In addition to SI, the significant CRFs for hip fracture were body mass index (BMI), history of fracture, an impaired chair test, history of a recent fall, current cigarette smoking, and diabetes mellitus. The average GR for hip fracture was 2.10 per SD with the combined SI + CRF score, compared with a GR of 1.77 with SI alone and of 1.52 with the CRF score alone; thus, the use of CRFs enhanced the predictive value of SI alone. For example, in a woman 80 yr of age, the presence of two to four CRFs increased the probability of hip fracture from 16.9% to 26.6% and from 52.6% to 70.5% for SI Z-scores of +2 and -3, respectively. The combined use of CRFs and QUS SI is a promising tool to assess hip fracture probability in elderly women, especially when access to DXA is limited.
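
As a back-of-the-envelope illustration of what a gradient of risk means (the GR values are those reported in the abstract; the per-subject calculation itself is only an assumption for illustration, not the published model):

```python
# Illustrative only: convert a gradient of risk (RR per SD) into a relative risk
# for a score expressed in SD units worse than the population average.
def relative_risk(gr_per_sd: float, z_sd: float) -> float:
    return gr_per_sd ** z_sd

print(relative_risk(2.10, 1.5))  # combined SI + CRF score, 1.5 SD worse -> ~3.0
print(relative_risk(1.77, 1.5))  # SI alone, 1.5 SD worse            -> ~2.4
```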

Relevance:

20.00%

Publisher:

Abstract:

The ultimate goal of any research in the mechanism/kinematics/design area may be called predictive design, i.e. the optimisation of mechanism proportions in the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology that facilitates the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods in the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by closed-form classical or modern algebraic solution methods or by numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are (ia) the limited number of design specifications and (iia) the failure to handle design constraints, especially inequality constraints. The main drawbacks of approximate synthesis formulations are (ib) the difficulty of choosing a proper initial linkage and (iib) the difficulty of finding more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, which provides several solutions but cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature review it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal, defect-free mechanisms. Traditionally, the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at the system level demands the integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated with mechanical system simulation techniques. The developed kinematic design method is based on a combination of the two-precision-point formulation and the optimisation of substructures (using mathematical programming techniques or optimisation methods based on probability and statistics) with criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
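
To make the precision-point formulation concrete, the sketch below solves the textbook standard-form dyad equations for three prescribed positions: the coupler displacements delta_j and coupler rotations alpha_j are prescribed, the crank rotations beta_j are free choices, and the dyad vectors W and Z follow from a 2x2 complex linear system. This is a generic textbook sketch, not the solution machinery developed in the thesis (which additionally handles inequality constraints and positive-dimensional solution sets).

```python
import cmath
import numpy as np

def solve_dyad(delta, alpha, beta):
    """Standard-form dyad synthesis for three precision positions.

    W*(exp(i*beta_j) - 1) + Z*(exp(i*alpha_j) - 1) = delta_j,  j = 2, 3
    delta : prescribed coupler-point displacements (complex, j = 2, 3)
    alpha : prescribed coupler rotations (rad); beta : chosen crank rotations (rad)
    Returns the dyad vectors W (crank side) and Z (coupler side).
    """
    A = np.array([[cmath.exp(1j * beta[0]) - 1, cmath.exp(1j * alpha[0]) - 1],
                  [cmath.exp(1j * beta[1]) - 1, cmath.exp(1j * alpha[1]) - 1]],
                 dtype=complex)
    b = np.array(delta, dtype=complex)
    W, Z = np.linalg.solve(A, b)
    return W, Z

# Hypothetical prescribed task (complex displacements, radians) and free choices of beta
W, Z = solve_dyad(delta=[1.0 + 0.5j, 2.0 + 0.8j], alpha=[0.3, 0.6], beta=[0.8, 1.4])
print(W, Z)
```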

Relevance:

20.00%

Publisher:

Abstract:

The marketplace of the twenty-first century will demand that manufacturing assume a crucial role in a new competitive field. Two potential resources in the area of manufacturing are advanced manufacturing technology (AMT) and empowered employees. Surveys in Finland have shown the need to invest in new AMT in the Finnish sheet metal industry in the 1990s. In this drive the focus has been on hard technology, and less attention has been paid to the utilization of human resources. In many manufacturing companies an appreciable portion of the attainable profit is wasted due to poor quality of planning and workmanship. This thesis inspects the production error distribution in the production flow of sheet metal part based constructions. The objective of the thesis is to analyze the origins of production errors in the production flow of sheet metal based constructions. Employee empowerment is also investigated in theory, and the role of employee empowerment in reducing the overall number of production errors is discussed. This study is most relevant to the sheet metal part fabricating industry, which produces sheet metal part based constructions for the electronics and telecommunication industries. The study concentrates on the manufacturing function of a company and is based on a field study carried out in five Finnish case factories. In each studied case factory the work phases most prone to production errors were identified. It can be assumed that most of the production errors originate in manually operated work phases and in mass production work phases. However, no common pattern of production error distribution across the production flow could be found in the collected error data. The most important finding was nevertheless that most of the production errors in each case factory studied belong to the 'human activity based errors' category. This result indicates that most of the problems in the production flow are related to employees or work organization. Development activities must therefore be focused on the development of employee skills or of work organization. Employee empowerment provides the right tools and methods to achieve this.
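
A production error distribution of the kind collected in the case factories can be inspected with a simple Pareto-style tally by work phase and error category. The sketch below uses hypothetical phases, categories and counts, not the thesis data:

```python
from collections import Counter

# Hypothetical error records: (work phase, error category)
records = [
    ("punching", "human activity"), ("bending", "human activity"),
    ("bending", "machine"), ("assembly", "human activity"),
    ("assembly", "design"), ("painting", "human activity"),
]

by_category = Counter(cat for _, cat in records)
total = len(records)
for cat, n in by_category.most_common():
    print(f"{cat:>15}: {n:2d} ({100 * n / total:.0f} %)")
```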

Relevance:

20.00%

Publisher:

Abstract:

Mechanistic soil-crop models have become indispensable tools to investigate the effect of management practices on the productivity or environmental impacts of arable crops. Ideally these models may claim to be universally applicable because they simulate the major processes governing the fate of inputs such as fertiliser nitrogen or pesticides. However, because they deal with complex systems and uncertain phenomena, site-specific calibration is usually a prerequisite to ensure their predictions are realistic. This implies that some experimental knowledge of the system to be simulated should be available prior to any modelling attempt, which places a tremendous limitation on practical applications of models. Because the demand for more general simulation results is high, modellers have nevertheless taken the bold step of extrapolating a model tested within a limited sample of real conditions to a much larger domain. While methodological questions are often disregarded in this extrapolation process, they are specifically addressed in this paper, in particular the issue of a priori model parameterisation. We thus implemented and tested a standard procedure to parameterise the soil components of a modified version of the CERES models. The procedure converts routinely available soil properties into functional characteristics by means of pedo-transfer functions. The resulting predictions of soil water and nitrogen dynamics, as well as crop biomass, nitrogen content and leaf area index, were compared to observations from trials conducted in five locations across Europe (southern Italy, northern Spain, northern France and northern Germany). In three cases, the model's performance was judged acceptable when compared to experimental errors on the measurements, based on a test of the model's root mean squared error (RMSE). Significant deviations between observations and model outputs were nevertheless noted at all sites, and could be ascribed to various model routines. In decreasing order of importance, these were: the water balance, the turnover of soil organic matter, and crop N uptake. A better match to field observations could therefore be achieved by visually adjusting related parameters, such as the field-capacity water content or the size of the soil microbial biomass. As a result, model predictions fell within the measurement errors at all sites for most variables, and the model's RMSE was within the range of published values for similar tests. We conclude that the proposed a priori method yields acceptable simulations with only a 50% probability, a figure which may be greatly increased through a posteriori calibration. Modellers should thus exercise caution when extrapolating their models to a large sample of pedo-climatic conditions for which they have only limited information.
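
The acceptability test mentioned above compares the model's root mean squared error with the experimental error of the measurements. A minimal sketch of that comparison (hypothetical observations, simulations and measurement error, not the trial data) follows:

```python
import numpy as np

def rmse(simulated, observed):
    simulated, observed = np.asarray(simulated, float), np.asarray(observed, float)
    return np.sqrt(np.mean((simulated - observed) ** 2))

# Hypothetical soil water contents (m3/m3): observations, simulations, measurement error
obs = [0.31, 0.28, 0.24, 0.22, 0.20]
sim = [0.29, 0.27, 0.26, 0.21, 0.18]
measurement_error = 0.02

e = rmse(sim, obs)
print(f"RMSE = {e:.3f}, acceptable: {e <= measurement_error}")
```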

Relevance:

20.00%

Publisher:

Abstract:

Background and aims: Few studies have examined whether subjective experiences during first cannabis use are related to other illicit drug (OID) use. This study investigated this topic. Methods: Baseline data from a representative sample of young Swiss men were obtained from an ongoing Cohort Study on Substance Use Risk Factors (N = 5753). Logistic regressions were performed to examine the relationships of cannabis use and of subjective experiences during first cannabis use with 15 OID. Results: Positive experiences increased the likelihood of using hallucinogens (hallucinogens, salvia divinorum, spice; p ≤ 0.015), stimulants (speed, ecstasy, cocaine, amphetamines/methamphetamines; p ≤ 0.006) and also poppers, research chemicals, GHB/GBL, and crystal meth (p ≤ 0.049). Sniffed drugs (poppers, solvents for sniffing) and "hard" drugs (heroin, ketamine, research chemicals, GHB/GBL and crystal meth) were more likely to be used by participants who had experienced negative feelings on first use of cannabis (p ≤ 0.034). Conclusion: Subjective feelings seemed to amplify the association of cannabis with OID. The risk increased for drugs with effects resembling the feelings experienced on first cannabis use. Negative experiences should also be a concern, as they were associated with an increased risk of using the "hardest" illicit drugs.

Relevance:

20.00%

Publisher:

Abstract:

Approximate models (proxies) can be employed to reduce the computational costs of estimating uncertainty. The price to pay is that the approximations introduced by the proxy model can lead to a biased estimation. To avoid this problem and ensure a reliable uncertainty quantification, we propose to combine functional data analysis and machine learning to build error models that allow us to obtain an accurate prediction of the exact response without solving the exact model for all realizations. We build the relationship between proxy and exact model on a learning set of geostatistical realizations for which both exact and approximate solvers are run. Functional principal components analysis (FPCA) is used to investigate the variability in the two sets of curves and reduce the dimensionality of the problem while maximizing the retained information. Once obtained, the error model can be used to predict the exact response of any realization on the basis of the sole proxy response. This methodology is purpose-oriented as the error model is constructed directly for the quantity of interest, rather than for the state of the system. Also, the dimensionality reduction performed by FPCA allows a diagnostic of the quality of the error model to assess the informativeness of the learning set and the fidelity of the proxy to the exact model. The possibility of obtaining a prediction of the exact response for any newly generated realization suggests that the methodology can be effectively used beyond the context of uncertainty quantification, in particular for Bayesian inference and optimization.
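
A minimal sketch of the idea follows, using ordinary PCA on discretized curves as a stand-in for FPCA and a linear map between score spaces (the paper's actual machinery may differ): both solvers are run on a small learning set, a regression is fitted between proxy and exact principal-component scores, and exact responses are then predicted for all remaining realizations from their proxy responses alone.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical discretized response curves (n realizations x n time steps)
n, nt = 200, 50
exact = np.cumsum(rng.normal(size=(n, nt)), axis=1)
proxy = exact + rng.normal(scale=0.3, size=(n, nt)) + 0.1 * np.arange(nt)  # biased proxy

learn = np.arange(30)                       # realizations where both solvers were run
pca_p = PCA(n_components=5).fit(proxy)      # dimensionality reduction of proxy curves
pca_e = PCA(n_components=5).fit(exact[learn])

# Error model: map proxy scores -> exact scores, learned on the training subset
reg = LinearRegression().fit(pca_p.transform(proxy[learn]), pca_e.transform(exact[learn]))

# Predict the exact response of every realization from its proxy response alone
pred = pca_e.inverse_transform(reg.predict(pca_p.transform(proxy)))
print("mean absolute error of corrected responses:", np.mean(np.abs(pred - exact)))
```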

Relevance:

20.00%

Publisher:

Abstract:

Estimating and determining investment and energy production costs in the early phase of a project is an essential part of power plant life-cycle data management. At this stage, investment and energy production costs are compared between power plants, and on this basis the power plant concept to be used is selected as a compromise between an economical plant on the one hand and a high-performance plant on the other. Because investment costs strongly affect energy production costs, investment cost estimation plays a major role when selecting the power plant concept. The use of investment cost estimation methods in different project phases depends largely on the available realized cost data and on the level of accuracy that is appropriate to aim for in each project phase. With a large body of cost data, good estimation accuracy can be achieved even with limited input information. In the early phase of a project, however, it is not worth spending too much time on preparing the cost estimate, because the cost of estimation grows with its accuracy. This work aimed to find dependency relationships between power plant costs and performance values. Based on the observed relationships, investment and electricity production costs were determined for about 50 power plants. The investment costs were divided into 23 cost components within the cost groups of machinery and equipment and of civil engineering works. In addition, the total investment includes project costs, interest during construction, and a contingency reserve. Based on the calculated power plants, a dedicated calculation program was created for estimating and storing investment and energy production costs. The program aims to standardize the preparation of cost estimates while reducing the probability of human error in the estimates.
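
One common way to link investment cost to energy production cost is through an annuity on the investment. The sketch below uses hypothetical figures and a deliberately simplified cost model, not the calculation program developed in the thesis:

```python
def annuity_factor(rate: float, years: int) -> float:
    """Capital recovery factor for interest rate `rate` over `years`."""
    return rate / (1 - (1 + rate) ** -years)

def production_cost(investment, rate, years, fixed_om, fuel_cost, energy_mwh):
    """Levelized energy production cost (currency per MWh), simplified model."""
    annual_capital = investment * annuity_factor(rate, years)
    return (annual_capital + fixed_om + fuel_cost) / energy_mwh

# Hypothetical plant: 100 M EUR investment, 25-year life, 6 % interest,
# 2 M EUR/a fixed O&M, 8 M EUR/a fuel, 400 000 MWh/a production
print(production_cost(100e6, 0.06, 25, 2e6, 8e6, 400_000))  # ~44.6 EUR/MWh
```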

Relevance:

20.00%

Publisher:

Abstract:

The aim of the present study was to determine the impact of the trabecular bone score on the probability of fracture above that provided by the clinical risk factors used in FRAX. We performed a retrospective cohort study of 33,352 women aged 40-99 years from the province of Manitoba, Canada, with baseline measurements of lumbar spine trabecular bone score (TBS) and the FRAX risk variables. The analysis was cohort-specific rather than based on the Canadian version of FRAX. The associations between the trabecular bone score, the FRAX risk factors and the risk of fracture or death were examined using an extension of the Poisson regression model and used to calculate 10-year probabilities of fracture with and without TBS, and to derive an algorithm to adjust fracture probability to take account of the independent contribution of TBS to fracture and mortality risk. During a mean follow-up of 4.7 years, 1754 women died, 1639 sustained one or more major osteoporotic fractures (excluding hip fracture) and 306 women sustained one or more hip fractures. When fully adjusted for the FRAX risk variables, TBS remained a statistically significant predictor of major osteoporotic fractures excluding hip fracture (HR/SD 1.18, 95% CI 1.12-1.24), death (HR/SD 1.20, 95% CI 1.14-1.26) and hip fracture (HR/SD 1.23, 95% CI 1.09-1.38). Models adjusting major osteoporotic fracture and hip fracture probability were derived, accounting for age and trabecular bone score with death considered as a competing event. Lumbar spine texture analysis using TBS is a risk factor for osteoporotic fracture and a risk factor for death. The predictive ability of TBS is independent of the FRAX clinical risk factors and femoral neck BMD. Adjustment of fracture probability to take account of the independent contribution of TBS to fracture and mortality risk requires validation in independent cohorts.
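
The extended Poisson regression referred to above can be illustrated, in a heavily simplified form, with synthetic data: fracture counts are modelled with a log link and an offset for person-years at risk, and the exponentiated coefficient for TBS (per SD decrease) plays the role of the reported HR/SD. This is a generic illustration, not the study's competing-risk model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic cohort: fracture counts with a Poisson model and a follow-up-time offset.
n = 5000
age = rng.uniform(40, 99, n)
tbs_z = rng.normal(size=n)                         # TBS in SD units (mean 0)
followup = rng.uniform(1, 8, n)                    # years at risk
lam = np.exp(-6 + 0.05 * age - 0.17 * tbs_z) * followup
fractures = rng.poisson(lam)

X = sm.add_constant(np.column_stack([age, -tbs_z]))   # per SD *decrease* in TBS
fit = sm.GLM(fractures, X, family=sm.families.Poisson(),
             offset=np.log(followup)).fit()
print(np.exp(fit.params[-1]))   # rate ratio per SD decrease, ~exp(0.17) ~ 1.19
```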

Relevance:

20.00%

Publisher:

Abstract:

This thesis studies the evaluation of software development practices through error analysis. The work presents the software development process, software testing, software errors, error classification, and software process improvement methods. The practical part of the work presents results from the error analysis of one software process and gives improvement ideas for the project. It was noticed that the classification of the error data in the project was inadequate, which made it impossible to use the error data effectively. With the error analysis we were nevertheless able to show that there were deficiencies in the design and analysis phases, the implementation phase, and the testing phase. The work gives ideas for improving error classification and software development practices.
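
A usable error classification is the prerequisite for this kind of analysis. A minimal sketch (hypothetical classification scheme and records, not the studied project's data) of tallying where defects are injected and where they are detected:

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical defect records: the phase that introduced the error and the phase
# that detected it. The scheme is illustrative, not the project's classification.
@dataclass(frozen=True)
class Defect:
    injected_in: str   # "design", "implementation", ...
    detected_in: str   # "implementation", "testing", "production", ...

defects = [
    Defect("design", "testing"), Defect("design", "implementation"),
    Defect("implementation", "testing"), Defect("implementation", "production"),
    Defect("design", "production"),
]

# How far defects from each phase leak before being found
leakage = Counter((d.injected_in, d.detected_in) for d in defects)
for (src, dst), n in sorted(leakage.items()):
    print(f"injected in {src:<15} detected in {dst:<15} count {n}")
```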

Relevance:

20.00%

Publisher:

Abstract:

In the south-eastern sector of the Ebro Basin, the palaeomagnetic inclination obtained in the Oligocene alluvial successions is considerably lower than expected from the reference palaeolatitude calculated for this region during the Oligocene. This inclination error may be due to several factors, such as hydrodynamic control of the magnetic particles in the depositional environment, differential compaction of the sediment during burial, or tectonic deformation. This work focuses on its study in two dominantly alluvial successions whose magnetostratigraphy had previously been established. The alluvial and lacustrine lithofacies studied were grouped into five groups: grey sandstones, red and variegated sandstones, red silts, red mudstones, and limestones. A correlation between phyllosilicate abundance and inclination error has been demonstrated. Lithofacies with a low percentage of phyllosilicates (limestones and grey sandstones) show errors of about 5°, statistically not significant with respect to the reference inclination. In contrast, in materials with a higher percentage of phyllosilicates (silts and clays) the error can reach 25°. This has no repercussion for the interpretation of magnetic polarities, but it does affect palinspastic and palaeogeographic reconstructions based on palaeolatitudes calculated from palaeoinclinations. The results obtained demonstrate the need for caution when proposing conclusions based exclusively on this type of information.
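
The reference inclination comes from the standard geocentric axial dipole relation between palaeolatitude and inclination, tan I = 2 tan(latitude), so a shallowed inclination biases the palaeolatitude recovered from it. A minimal sketch (the formula is standard; the numbers are purely illustrative):

```python
import numpy as np

def inclination_from_latitude(lat_deg: float) -> float:
    """Geocentric axial dipole (GAD) field: tan(I) = 2 * tan(latitude)."""
    return np.degrees(np.arctan(2 * np.tan(np.radians(lat_deg))))

def latitude_from_inclination(inc_deg: float) -> float:
    return np.degrees(np.arctan(0.5 * np.tan(np.radians(inc_deg))))

ref_lat = 35.0                                   # illustrative Oligocene palaeolatitude
I_expected = inclination_from_latitude(ref_lat)  # ~54.5 degrees
I_observed = I_expected - 25.0                   # shallowing as in the clay-rich facies
print(I_expected, latitude_from_inclination(I_observed))  # apparent palaeolatitude ~16 degrees
```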

Relevance:

20.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has increased considerably over the years. Numerous problems are therefore arising, ranging from the prospection of new resources to the remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains the characterisation of the subsurface properties. A stochastic approach is then necessary to represent this uncertainty, by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of simulating the complex flow processes for each of these realizations. In the first part of the thesis, this problem is investigated in the context of uncertainty propagation, where an ensemble of realizations is identified as representing the subsurface properties. To propagate this uncertainty to the quantity of interest while limiting the computational cost, current methods rely on approximate flow models. This allows the identification of a subset of realizations representing the variability of the initial ensemble. The complex flow model is then evaluated only for this subset, and inference is made on the basis of these complex responses. Our objective is to improve the performance of this approach by using all the available information. To that end, the subset of approximate and exact responses is used to construct an error model, which then serves to correct the remaining approximate responses and to predict the response of the complex model. This method maximizes the use of the available information without a perceptible increase in computation time, and uncertainty propagation becomes more accurate and more robust. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the approximate and complex flow models. In the second part of the thesis, this methodology is formalized mathematically by introducing a regression model between the functional responses. As this problem is ill-posed, its dimensionality must be reduced. In this respect, the innovation of the presented work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in this functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computation time while correctly estimating the uncertainty. Moreover, for each approximate response, a prediction of the complex response is provided by the error model. The concept of a functional error model is therefore relevant for uncertainty propagation, but also for Bayesian inference problems. Markov chain Monte Carlo (MCMC) methods are the algorithms most commonly used to generate geostatistical realizations in agreement with the observations. However, these methods suffer from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. A two-step approach, two-stage MCMC, was introduced to avoid unnecessary simulations of the complex model through a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for the two-stage MCMC. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 compared with a classical MCMC implementation. One question remains open: how to choose the size of the training set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy so that, with each new flow simulation, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, where the methodology is applied to a saline intrusion problem in a coastal aquifer.
-- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and how do we identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
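
A minimal sketch of the two-stage Metropolis step described above (toy scalar model and Gaussian log-likelihoods as stand-ins; the actual application uses geostatistical realizations and flow solvers): the proxy corrected by an error model screens each proposal, and the exact model is run only for proposals that survive the first stage.

```python
import numpy as np

rng = np.random.default_rng(2)

def exact_loglik(x):            # expensive exact model (stand-in)
    return -0.5 * (x - 1.0) ** 2

def proxy_loglik(x):            # cheap proxy corrected by an error model (stand-in)
    return -0.5 * (x - 1.1) ** 2

x, n_exact, chain = 0.0, 0, []
for _ in range(5000):
    prop = x + rng.normal(scale=0.8)
    # Stage 1: screen the proposal using only the corrected proxy
    if np.log(rng.uniform()) >= proxy_loglik(prop) - proxy_loglik(x):
        chain.append(x)
        continue
    # Stage 2: run the exact model only for proposals that passed stage 1
    n_exact += 1
    a = (exact_loglik(prop) - exact_loglik(x)) - (proxy_loglik(prop) - proxy_loglik(x))
    if np.log(rng.uniform()) < a:
        x = prop
    chain.append(x)

print("exact-model runs:", n_exact, " posterior mean ~", np.mean(chain[1000:]))
```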