900 results for new keynesian models


Relevance:

30.00%

Publisher:

Abstract:

The aim of this study is to present an Activity-Based Costing spreadsheet tool for analyzing logistics costs. The tool can be used both by customer companies and by logistics service providers. The study discusses the influence of different activity models on costs, as well as logistical performance across the total supply chain. The study follows an analytical research approach, supplemented by literature material. The cost structure analysis is based on the theory of activity-based management, and the study is limited to spare-part logistics in the machine-shop industry. The outlines of logistics services and logistical performance discussed in this report are based on the new logistics business concept (LMS-concept), which has been presented earlier in the Valssi project. One of the aims of the study is to increase awareness of the effect of different activity models on logistics costs. The report paints an overall picture of the business environment and of the requirements for the new logistics concept.
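The allocation logic at the core of such a spreadsheet can be sketched in a few lines. The example below is a minimal, hypothetical illustration of activity-based costing; the activity names, cost drivers, and figures are invented for illustration and are not taken from the study:

```python
# Illustrative activity-based costing allocation (hypothetical activities,
# drivers, and figures). Each activity's total cost is distributed to the
# products in proportion to their consumption of that activity's cost driver.

def abc_allocate(activity_costs, driver_usage):
    """activity_costs: {activity: total cost}
    driver_usage: {activity: {product: driver units consumed}}
    Returns {product: allocated cost}."""
    allocated = {}
    for activity, cost in activity_costs.items():
        usage = driver_usage[activity]
        rate = cost / sum(usage.values())      # cost per driver unit
        for product, units in usage.items():
            allocated[product] = allocated.get(product, 0.0) + rate * units
    return allocated

costs = {"warehousing": 12000.0, "order_handling": 3000.0}
usage = {
    "warehousing": {"part_A": 300, "part_B": 100},   # e.g. pallet-days
    "order_handling": {"part_A": 40, "part_B": 60},  # e.g. order lines
}
print(abc_allocate(costs, usage))
```

Changing the activity model (which activities exist and which drivers they use) changes the allocation, which is exactly the sensitivity the study examines.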

Relevance:

30.00%

Publisher:

Abstract:

A rigorous unit operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation, as well as real-fluid effects, in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. Permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes with equation-of-state models for polymeric systems, and a new modification of the UNIFAC model is proposed for this purpose. Various thermodynamic models are extensively compared in order to verify their ability to predict and correlate experimental vapor-liquid equilibrium data. Penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which account for the bulk flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from experimental permeability coefficients at different temperatures and pressures. A fluid flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy, presented in discretized form using the control volume approach. A combination of the permeation model and the fluid flow model yields the desired rigorous model for vapor membrane separation. The model is implemented in an in-house process simulator, so that vapor membrane separation may be evaluated as an integral part of a process flowsheet.
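As a much simplified illustration of the permeation side of such a model (not the thesis's Maxwell-Stefan treatment, which additionally couples fluxes and handles real-fluid sorption), the basic solution-diffusion flux expression with an assumed Arrhenius-type temperature dependence of the permeability can be sketched as follows; all parameter values are hypothetical:

```python
import math

# Simplified solution-diffusion permeation sketch: the molar flux of one
# component through a membrane of given thickness is driven by its
# partial-pressure difference. Permeability follows an assumed
# Arrhenius-type temperature dependence; all numbers are hypothetical.

R = 8.314  # gas constant, J/(mol K)

def permeability(P0, Ep, T):
    """Pre-exponential factor P0, activation energy Ep [J/mol], T [K]."""
    return P0 * math.exp(-Ep / (R * T))

def flux(P0, Ep, T, thickness, p_feed, x_feed, p_perm, y_perm):
    """Molar flux for one component, solution-diffusion form:
    J = P/l * (p_feed*x - p_perm*y)."""
    P = permeability(P0, Ep, T)
    return P / thickness * (p_feed * x_feed - p_perm * y_perm)

# hypothetical hydrocarbon-selective rubbery membrane, one component
J = flux(P0=1e-6, Ep=15e3, T=313.0, thickness=2e-6,
         p_feed=5e5, x_feed=0.02, p_perm=1e5, y_perm=0.05)
print(J)
```

The rigorous model replaces this single-component expression with coupled Maxwell-Stefan fluxes and equation-of-state sorption, evaluated inside a control-volume flow model of the spiral-wound module.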

Relevance:

30.00%

Publisher:

Abstract:

Over 70% of the total costs of an end product are consequences of decisions made during the design process. A search for optimal cross-sections will often have only a marginal effect on the amount of material used if the geometry of a structure is fixed and the cross-sectional characteristics of its elements are properly designed by conventional methods. In recent years, optimal geometry has become a central area of research in the automated design of structures. It is generally accepted that no single optimisation algorithm is suitable for all engineering design problems; an appropriate algorithm must therefore be selected individually for each optimisation situation. Modelling is the most time-consuming phase in the optimisation of steel and metal structures. In this research, the goal was to develop a method and computer program that reduce the modelling and optimisation time in structural design. The program needed an optimisation algorithm suitable for various engineering design problems. Because finite element modelling is commonly used in the design of steel and metal structures, the interaction between a finite element tool and the optimisation tool needed a practical solution. The developed method and computer programs were tested with standard optimisation tests and practical design optimisation cases. Three generations of computer programs were developed; they combine an optimisation problem modelling tool and an FE-modelling program using three alternative methods. The modelling and optimisation were demonstrated in the design of a new boom construction and of the steel structures of flat and ridge roofs. This thesis demonstrates that the most time-consuming part, modelling, is significantly reduced, modelling errors are reduced, and the results are more reliable.
A new selection rule for the evolution algorithm, which eliminates the need for constraint weight factors, is tested with optimisation cases of steel structures that include hundreds of constraints. The tested algorithm can be used nearly as a black box, without parameter settings or penalty factors for the constraints.
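The kind of parameter-free constrained selection described above can be sketched with a well-known feasibility-based rule (a feasible candidate beats an infeasible one; two feasible candidates are compared by objective value, two infeasible ones by total constraint violation), which needs no penalty weights. The toy problem and the simple (1+1) evolution strategy below are illustrative only, not the thesis's algorithm or its structural-design formulation:

```python
import random

# Sketch of evolutionary search with a parameter-free constraint-handling
# selection rule: feasible beats infeasible; ties are broken by objective
# value (both feasible) or by total constraint violation (both infeasible).
# No penalty weights are required. Toy problem and bounds are illustrative.

def violation(constraints, x):
    """Sum of positive parts of the g_i(x) <= 0 constraints."""
    return sum(max(0.0, g(x)) for g in constraints)

def better(a, b, f, constraints):
    """True if candidate a is preferred over candidate b."""
    va, vb = violation(constraints, a), violation(constraints, b)
    if va == 0 and vb == 0:
        return f(a) < f(b)      # both feasible: compare objectives
    if va == 0 or vb == 0:
        return va == 0          # feasible dominates infeasible
    return va < vb              # both infeasible: less violation wins

def evolve(f, constraints, bounds, iters=2000, sigma=0.2, seed=1):
    """(1+1) evolution strategy with the selection rule above."""
    rng = random.Random(seed)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    for _ in range(iters):
        y = [min(max(xi + rng.gauss(0, sigma), lo), hi)
             for xi, (lo, hi) in zip(x, bounds)]
        if better(y, x, f, constraints):
            x = y
    return x

# toy: minimize x0 + x1 subject to x0*x1 >= 1, with x in [0, 5]^2
f = lambda x: x[0] + x[1]
cons = [lambda x: 1.0 - x[0] * x[1]]   # g(x) <= 0  <=>  x0*x1 >= 1
best = evolve(f, cons, [(0.0, 5.0)] * 2)
print(best, f(best))
```

Because the comparison is purely ordinal, the rule behaves like a black box: no constraint scaling or penalty tuning is needed, which mirrors the property reported for the tested algorithm.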

Relevance:

30.00%

Publisher:

Abstract:

This study examines how firms interpret new, potentially disruptive technologies in their own strategic context. The work presents a cross-case analysis of four potentially disruptive technologies or technical operating models: Bluetooth, WLAN, Grid computing, and the mobile peer-to-peer paradigm. The technologies were investigated from the perspective of three mobile operators, a device manufacturer, and a software company in the ICT industry. The theoretical background for the study consists of the resource-based view of the firm with a dynamic perspective, theories on the nature of technology and innovation, and the concept of the business model. The literature review builds up a propositional framework for estimating the amount of radical change in a company's business model with two mediating variables: the disruptiveness potential of a new technology and its strategic importance to the firm. The data was gathered in group discussion sessions in each company. The results of each case analysis were brought together to evaluate how firms interpret potential disruptiveness in terms of changes in product characteristics and added value, technology and market uncertainty, changes in product-market positions, possible competence disruption, and changes in value network positions. The results indicate that perceived disruptiveness in terms of product characteristics does not necessarily translate into strategic importance. In addition, the firms did not see the new technologies as a threat in terms of potential competence disruption.

Relevance:

30.00%

Publisher:

Abstract:

The conversion of cellular prion protein (PrPc), a GPI-anchored protein, into a proteinase K-resistant and infective form (generally termed PrPsc) is mainly responsible for transmissible spongiform encephalopathies (TSEs), characterized by neuronal degeneration and progressive loss of basic brain functions. Although PrPc is expressed by a wide range of tissues throughout the body, the complete repertoire of its functions has not been fully determined. Recent studies have confirmed its participation in basic physiological processes such as cell proliferation and the regulation of cellular homeostasis. Other studies indicate that PrPc interacts with several molecules to activate signaling cascades with a large number of cellular effects. To determine PrPc functions, transgenic mouse models have been generated in the last decade. In particular, mice lacking specific domains of the PrPc protein have revealed the contribution of these domains to neurodegenerative processes. A dual role of PrPc has been shown, since most authors report protective roles for this protein while others describe pro-apoptotic functions. In this review, we summarize new findings on PrPc functions, especially those related to neural degeneration and cell signaling.

Relevance:

30.00%

Publisher:

Abstract:

Current standard treatments for metastatic colorectal cancer (CRC) are based on combination regimens with one of two chemotherapeutic drugs, irinotecan or oxaliplatin. However, drug resistance frequently limits the clinical efficacy of these therapies. In order to gain new insights into the mechanisms associated with chemoresistance, and starting from three distinct CRC cell models, we generated a panel of human colorectal cancer cell lines with acquired resistance to either oxaliplatin or irinotecan. We characterized the resistant cell line variants with regard to their drug resistance profiles and transcriptomes, and matched our results with datasets generated from relevant clinical material to derive putative resistance biomarkers. We found that the chemoresistant cell line variants had distinctive irinotecan- or oxaliplatin-specific resistance profiles, with non-reciprocal cross-resistance. Furthermore, we identified several new, as well as some previously described, drug resistance-associated genes for each resistant cell line variant. Each chemoresistant cell line variant acquired a unique set of changes that may represent distinct functional subtypes of chemotherapy resistance. In addition, and given the potential implications for the selection of subsequent treatment, we also performed an exploratory analysis, in relevant patient cohorts, of the predictive value of each of the specific genes identified in our cellular models.
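As a toy illustration of how candidate resistance-associated genes can be nominated from such matched parental/resistant cell line pairs, one can screen for genes whose expression shifts strongly in the resistant variant. The study's actual analysis used full transcriptome profiles and matched clinical datasets; the gene names, values, and threshold below are invented:

```python
import math

# Toy fold-change screen for nominating drug-resistance-associated genes:
# compare mean expression in a resistant cell line variant against its
# parental line and keep genes whose absolute log2 fold-change exceeds a
# threshold. Gene names and values are hypothetical.

def resistance_candidates(parental, resistant, min_abs_log2fc=1.0):
    """parental/resistant: {gene: mean expression}; returns {gene: log2 FC}."""
    hits = {}
    for gene in parental:
        fc = math.log2(resistant[gene] / parental[gene])
        if abs(fc) >= min_abs_log2fc:
            hits[gene] = fc
    return hits

parental  = {"ABCB1": 10.0, "TOP1": 50.0, "GAPDH": 100.0}
resistant = {"ABCB1": 85.0, "TOP1": 20.0, "GAPDH": 105.0}
print(resistance_candidates(parental, resistant))
```

In practice such a screen is only a first filter; statistical testing across replicates and validation in independent cohorts follow, as in the study's exploratory patient analysis.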

Relevance:

30.00%

Publisher:

Abstract:

We show that a heavy quark moving sufficiently fast through a quark-gluon plasma may lose energy by Cherenkov-radiating mesons. We demonstrate that this takes place in all strongly coupled, large-Nc plasmas with a gravity dual. The energy loss is exactly calculable in these models despite being an O(1/Nc)-effect. We discuss phenomenological implications for heavy-ion collision experiments.
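Schematically, the mechanism rests on the standard Cherenkov condition, stated here generically rather than as the paper's full calculation: a quark moving with velocity $v$ can radiate into an in-medium meson mode with dispersion relation $\omega(k)$ only if its speed exceeds the mode's phase velocity,

```latex
v \;>\; \frac{\omega(k)}{k}.
```

In these strongly coupled plasmas the meson dispersion flattens at large momentum, giving the mesons a limiting velocity below the speed of light, so a sufficiently fast quark satisfies the condition and loses energy by meson emission.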

Relevance:

30.00%

Publisher:

Abstract:

Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment. In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert networks, social network models, and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results. We show by numerical simulations that even a moderate amount of random noise on the Barabási-Albert scale-free network links causes a significant loss of cooperation, to the point that cooperation almost vanishes altogether in the Prisoner's Dilemma when the noise rate is high enough. Moreover, when we consider fixed social-like networks, we find that current models of social networks may allow cooperation to emerge and to be at least as robust as in scale-free networks. In the framework of spatial networks, we investigate whether cooperation can evolve and be stable when agents move randomly or perform Lévy flights in a continuous space. We also consider discrete space, adopting purposeful mobility and a binary birth-death process to discover emergent cooperative patterns. The fundamental result is that cooperation may be enhanced when migration is opportunistic or even when agents follow very simple heuristics.
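A minimal version of this kind of simulation can be written without any external libraries: build a scale-free network by hand-rolled preferential attachment, let every agent play the Prisoner's Dilemma with its neighbors, and update strategies by imitating the best-scoring neighbor. The payoff values and update rule below are illustrative and far simpler than the dissertation's models:

```python
import random

# Minimal evolutionary Prisoner's Dilemma on a Barabási-Albert-style
# scale-free network (preferential attachment built by hand). Agents play
# all neighbors each round, then imitate their best-scoring neighbor.
# Payoffs and update rule are illustrative.

def ba_network(n, m, rng):
    """Preferential attachment: each new node links to m existing nodes.
    Degree weighting via a repeated-node list; early duplicate draws may
    yield slightly fewer than m links, which is fine for a sketch."""
    targets = list(range(m))
    repeated = []                       # node list weighted by degree
    edges = {i: set() for i in range(n)}
    for v in range(m, n):
        for t in set(targets):
            edges[v].add(t); edges[t].add(v)
            repeated += [v, t]
        targets = rng.sample(repeated, m)
    return edges

def play_round(edges, strategy, T=1.5, R=1.0, P=0.1, S=0.0):
    """Accumulate PD payoffs over every edge (T > R > P > S)."""
    payoff = {v: 0.0 for v in edges}
    for v in edges:
        for w in edges[v]:
            if w > v:                   # count each undirected edge once
                continue
            both = strategy[v] and strategy[w]
            payoff[v] += R if both else (S if strategy[v] else
                                         (T if strategy[w] else P))
            payoff[w] += R if both else (T if strategy[v] else
                                         (S if strategy[w] else P))
    return payoff

def step(edges, strategy, payoff):
    """Unconditional imitation of the best-scoring neighbor (or self)."""
    return {v: strategy[max(list(edges[v]) + [v], key=lambda u: payoff[u])]
            for v in edges}

rng = random.Random(42)
edges = ba_network(200, 3, rng)
strategy = {v: rng.random() < 0.5 for v in edges}   # True = cooperate
for _ in range(30):
    strategy = step(edges, strategy, play_round(edges, strategy))
print(sum(strategy.values()) / len(strategy))       # final cooperation level
```

Link noise of the kind studied in the dissertation would be added by randomly rewiring a fraction of the edges between rounds and observing how the final cooperation level degrades.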
In the experimental laboratory, we investigate the issue of social coordination between individuals located on networks of contacts. In contrast to simulations, we find that the dynamics of human players do not converge to the efficient outcome more often in a social-like network than in a random network. In another experiment, we study the behavior of people who play a pure coordination game in a spatial environment in which they can move around and where changing convention is costly. We find that each convention forms homogeneous clusters and is adopted by approximately half of the individuals. When we provide the players with global information, i.e., the number of subjects currently adopting one of the conventions, global consensus is reached in most, but not all, cases. Our results allow us to extract the heuristics used by the participants and to build a numerical simulation model that agrees very well with the experiments. Our findings have important implications for policymakers intending to promote specific, desired behaviors in a mobile population. Furthermore, we carry out an experiment with human subjects playing the Prisoner's Dilemma game in a diluted grid where people are able to move around. In contrast to previous results on purposeful rewiring in relational networks, we find no noticeable effect of mobility in space on the level of cooperation. Clusters of cooperators form momentarily, but within a few rounds they dissolve, as cooperators at the boundaries stop tolerating being cheated upon. Our results highlight the difficulty that mobile agents have in establishing a cooperative environment in a spatial setting without a device such as reputation or the possibility of retaliation, i.e., punishment. Finally, we test experimentally the evolution of cooperation in social networks in a setting where we allow people to make or break links at will.
In this work we give particular attention to whether information on an individual's actions is freely available to potential partners or not. Studying the role of information is relevant because information on other people's actions is often not available for free: a recruiting firm may need to call a job candidate's references, a bank may need to find out about the credit history of a new client, etc. We find that people cooperate almost fully when information on their actions is freely available to their potential partners. Cooperation is less likely, however, if people have to pay about half of what they gain from cooperating with a cooperator. Cooperation declines even further if people have to pay a cost that is almost equivalent to the gain from cooperating with a cooperator. Thus, costly information on potential neighbors' actions can undermine the incentive to cooperate in dynamical networks.

Relevance:

30.00%

Publisher:

Abstract:

Central serous chorioretinopathy (CSCR) is a major cause of visual impairment among middle-aged male individuals. Multimodal imaging has led to the description of a wide range of CSCR manifestations and has highlighted the contribution of the choroid and pigment epithelium to CSCR pathogenesis. However, the exact molecular mechanisms of CSCR have remained uncertain. The aim of this review is to recapitulate the clinical understanding of CSCR, with an emphasis on the most recent findings on epidemiology, risk factors, clinical and imaging diagnosis, and treatment options. It also gives an overview of the novel mineralocorticoid pathway hypothesis, from animal data to clinical evidence of the biological efficacy of oral mineralocorticoid antagonists in acute and chronic CSCR patients. In rodents, activation of the mineralocorticoid pathway in ocular cells, either by intravitreous injection of its specific ligand, aldosterone, or by over-expression of the receptor specifically in the vascular endothelium, induced ocular phenotypes carrying many features of acute CSCR. Molecular mechanisms include expression of the calcium-dependent potassium channel (KCa2.3) in the endothelium of choroidal vessels, inducing subsequent vasodilation. Inappropriate or over-activation of the mineralocorticoid receptor in ocular cells and other tissues (such as the brain and vessels) could link CSCR with the known co-morbidities observed in CSCR patients, including hypertension, coronary disease, and psychological stress.

Relevance:

30.00%

Publisher:

Abstract:

Langerhans cell histiocytosis (LCH) is a rare disease caused by the clonal accumulation of dendritic Langerhans cells, which is often accompanied by osteolytic lesions. It has been reported that osteoclast-like cells play a major role in the pathogenic bone destruction seen in patients with LCH, and these cells are postulated to originate from the fusion of DCs. However, due to the lack of reliable animal models, the pathogenesis of LCH is still poorly understood. In this study, we have established a mouse model of histiocytosis that recapitulates the osteolytic lesions seen in LCH patients. At 12 weeks after birth, severe bone lesions were observed in our multisystem histiocytosis (Mushi) model, in which transformed CD8α conventional dendritic cells (MuTuDCs) accumulate. Most importantly, our study demonstrates that bone loss in LCH can be accounted for by the transdifferentiation of MuTuDCs into functional osteoclasts, both in vivo and in vitro. Moreover, we have shown that injected MuTuDCs reverse the osteopetrotic phenotype of oc/oc mice in vivo. In conclusion, our results support a crucial role of DCs in the bone lesions of histiocytosis patients. Furthermore, our new model of LCH, based on adoptive transfer of MuTuDC lines and leading to bone lesions within 1-2 weeks, will be an important tool for investigating the pathophysiology of this disease and ultimately for evaluating the potential of anti-resorptive drugs for the treatment of bone lesions.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to analyse theories related to business models and, on the basis of different models, to construct a clear framework that companies can use when defining and analysing their business models. The companies studied could be divided into internally focused and externally oriented ones. Based on this division, it was possible to draw conclusions about the potential of their business models. The study was qualitative in nature. Its result is a tool suitable for constructing and analysing business models, which can be used in a company's strategic planning.

Relevance:

30.00%

Publisher:

Abstract:

During different forms of neurodegenerative disease, including retinal degeneration, several cell cycle proteins are expressed in the dying neurons, from Drosophila to human, revealing that these proteins are a hallmark of neuronal degeneration. This is true for animal models of Alzheimer's and Parkinson's diseases, amyotrophic lateral sclerosis, and retinitis pigmentosa, as well as for acute injuries such as stroke and light damage. Longitudinal investigations and loss-of-function studies attest that cell cycle proteins participate in the process of cell death, although with different impacts depending on the disease. In the retina, inhibition of cell cycle protein action can result in massive protection. Nonetheless, dissection of the molecular mechanisms of neuronal cell death is necessary to develop therapeutic tools adapted to efficiently protect photoreceptors as well as other neuron types.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis was to examine the business models of telecommunications equipment manufacturers. The thesis is divided into a theoretical and an empirical part. The theoretical part focuses mainly on defining the concept of the business model. Based on existing definitions, as well as on terms closely related to the business model concept, a new business model framework was created. The empirical part focuses on defining the business model of the case company, Cisco Systems, and on describing its development. The development of the business model was followed over a two-year period, mainly through the company's press releases, articles, and other public material. In addition to Cisco, the empirical part examined the development of the business models of eight other equipment manufacturers. The main objective of the empirical part was to find out how the business models of telecommunications equipment manufacturers are developing now and in the future.

Relevance:

30.00%

Publisher:

Abstract:

This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When MCMC methods are used, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open a way to essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM to model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models simultaneously to the same observations. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued in the course of several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of the aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
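The core idea of a DRAM-style sampler can be sketched in one dimension: an adaptive random-walk Metropolis step followed, on rejection, by one delayed-rejection retry with a shrunk proposal, accepted with the delayed-rejection ratio for symmetric proposals. This is an illustrative reduction with a toy target and ad hoc tuning constants, not the authors' implementation (real DRAM adapts a full proposal covariance in many dimensions):

```python
import math, random, statistics

# One-dimensional DRAM-style sketch: adaptive random-walk Metropolis with a
# single delayed-rejection (DR) stage that retries a rejected move with a
# shrunk proposal. The DR acceptance uses the symmetric-proposal delayed
# rejection ratio; adaptation crudely rescales the proposal width from the
# chain history. Toy target and constants are illustrative.

def dram_1d(log_target, n, x0=0.0, sigma=1.0, shrink=0.2,
            adapt_every=200, seed=7):
    rng = random.Random(seed)
    lnorm = lambda d, s: -0.5 * (d / s) ** 2 - math.log(s)  # log N(0,s), up to const
    chain, x, lx = [x0], x0, log_target(x0)
    for i in range(1, n):
        y1 = x + rng.gauss(0, sigma)                 # stage-1 proposal
        ly1 = log_target(y1)
        if math.log(rng.random()) < ly1 - lx:
            x, lx = y1, ly1
        else:                                        # delayed rejection stage
            y2 = x + rng.gauss(0, sigma * shrink)
            ly2 = log_target(y2)
            a1_fwd = min(1.0, math.exp(ly1 - lx))    # stage-1 ratio from x
            a1_rev = min(1.0, math.exp(ly1 - ly2))   # stage-1 ratio from y2
            if a1_rev < 1.0 and a1_fwd < 1.0:        # else DR ratio is 0
                num = ly2 + lnorm(y1 - y2, sigma) + math.log(1 - a1_rev)
                den = lx + lnorm(y1 - x, sigma) + math.log(1 - a1_fwd)
                if math.log(rng.random()) < num - den:
                    x, lx = y2, ly2
        chain.append(x)
        if i % adapt_every == 0:                     # crude adaptation step
            sigma = 2.4 * statistics.pstdev(chain) + 1e-6
    return chain

chain = dram_1d(lambda x: -0.5 * x * x, 20000)       # standard normal target
print(statistics.mean(chain), statistics.pvariance(chain))
```

The DR stage rescues moves that a too-wide proposal would reject, while the adaptation tunes the proposal scale automatically; combining the two is what makes DRAM robust in high-dimensional, non-linear problems.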

Relevance:

30.00%

Publisher:

Abstract:

Notre consommation en eau souterraine, en particulier comme eau potable ou pour l'irrigation, a considérablement augmenté au cours des années. De nombreux problèmes font alors leur apparition, allant de la prospection de nouvelles ressources à la remédiation des aquifères pollués. Indépendamment du problème hydrogéologique considéré, le principal défi reste la caractérisation des propriétés du sous-sol. Une approche stochastique est alors nécessaire afin de représenter cette incertitude en considérant de multiples scénarios géologiques et en générant un grand nombre de réalisations géostatistiques. Nous rencontrons alors la principale limitation de ces approches qui est le coût de calcul dû à la simulation des processus d'écoulements complexes pour chacune de ces réalisations. Dans la première partie de la thèse, ce problème est investigué dans le contexte de propagation de l'incertitude, où un ensemble de réalisations est identifié comme représentant les propriétés du sous-sol. Afin de propager cette incertitude à la quantité d'intérêt tout en limitant le coût de calcul, les méthodes actuelles font appel à des modèles d'écoulement approximés. Cela permet l'identification d'un sous-ensemble de réalisations représentant la variabilité de l'ensemble initial. Le modèle complexe d'écoulement est alors évalué uniquement pour ce sous-ensemble, et, sur la base de ces réponses complexes, l'inférence est faite. Notre objectif est d'améliorer la performance de cette approche en utilisant toute l'information à disposition. Pour cela, le sous-ensemble de réponses approximées et exactes est utilisé afin de construire un modèle d'erreur, qui sert ensuite à corriger le reste des réponses approximées et prédire la réponse du modèle complexe. Cette méthode permet de maximiser l'utilisation de l'information à disposition sans augmentation perceptible du temps de calcul. La propagation de l'incertitude est alors plus précise et plus robuste.
La stratégie explorée dans le premier chapitre consiste à apprendre d'un sous-ensemble de réalisations la relation entre les modèles d'écoulement approximé et complexe. Dans la seconde partie de la thèse, cette méthodologie est formalisée mathématiquement en introduisant un modèle de régression entre les réponses fonctionnelles. Comme ce problème est mal posé, il est nécessaire d'en réduire la dimensionnalité. Dans cette optique, l'innovation du travail présenté provient de l'utilisation de l'analyse en composantes principales fonctionnelles (ACPF), qui non seulement effectue la réduction de dimensionnalité tout en maximisant l'information retenue, mais permet aussi de diagnostiquer la qualité du modèle d'erreur dans cet espace fonctionnel. La méthodologie proposée est appliquée à un problème de pollution par une phase liquide non aqueuse et les résultats obtenus montrent que le modèle d'erreur permet une forte réduction du temps de calcul tout en estimant correctement l'incertitude. De plus, pour chaque réponse approximée, une prédiction de la réponse complexe est fournie par le modèle d'erreur. Le concept de modèle d'erreur fonctionnel est donc pertinent pour la propagation de l'incertitude, mais aussi pour les problèmes d'inférence bayésienne. Les méthodes de Monte Carlo par chaîne de Markov (MCMC) sont les algorithmes les plus communément utilisés afin de générer des réalisations géostatistiques en accord avec les observations. Cependant, ces méthodes souffrent d'un taux d'acceptation très bas pour les problèmes de grande dimensionnalité, résultant en un grand nombre de simulations d'écoulement gaspillées. Une approche en deux temps, le "MCMC en deux étapes", a été introduite afin d'éviter les simulations inutiles du modèle complexe par une évaluation préliminaire de la réalisation. Dans la troisième partie de la thèse, le modèle d'écoulement approximé couplé à un modèle d'erreur sert d'évaluation préliminaire pour le "MCMC en deux étapes".
Nous démontrons une augmentation du taux d'acceptation par un facteur de 1.5 à 3 en comparaison avec une implémentation classique de MCMC. Une question reste sans réponse : comment choisir la taille de l'ensemble d'entraînement et comment identifier les réalisations permettant d'optimiser la construction du modèle d'erreur. Cela requiert une stratégie itérative afin que, à chaque nouvelle simulation d'écoulement, le modèle d'erreur soit amélioré en incorporant les nouvelles informations. Ceci est développé dans la quatrième partie de la thèse, où cette méthodologie est appliquée à un problème d'intrusion saline dans un aquifère côtier. -- Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, for performing Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
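A toy version of the error-model idea can be sketched by learning, from a small training subset where both proxy and exact responses are known, a correction that maps proxy curves to exact curves, and then applying it to the remaining proxies. Here the correction is a pointwise linear regression per time step, whereas the thesis regresses in a functional principal component (FPCA) basis; all curves and coefficients below are synthetic:

```python
import random

# Toy functional error model: learn a correction from proxy curves to exact
# curves on a training subset, then apply it to new proxy curves. The
# correction here is a pointwise least-squares fit exact_t ≈ a_t*proxy_t + b_t
# (the thesis instead regresses in an FPCA basis). Synthetic data only.

def fit_pointwise(proxies, exacts):
    """Per-time-step least squares; returns a list of (a_t, b_t)."""
    n, T = len(proxies), len(proxies[0])
    coefs = []
    for t in range(T):
        xs = [p[t] for p in proxies]
        ys = [e[t] for e in exacts]
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        a = sxy / sxx if sxx > 0 else 1.0
        coefs.append((a, my - a * mx))
    return coefs

def correct(proxy, coefs):
    """Apply the learned correction to one proxy curve."""
    return [a * p + b for p, (a, b) in zip(proxy, coefs)]

# synthetic example: exact response = 1.3 * proxy + 0.5 + small noise
rng = random.Random(3)
proxies = [[rng.random() for _ in range(10)] for _ in range(20)]
exacts = [[1.3 * p + 0.5 + rng.gauss(0, 0.01) for p in curve]
          for curve in proxies]
coefs = fit_pointwise(proxies[:8], exacts[:8])    # training subset only
pred = correct(proxies[10], coefs)                # held-out realization
err = max(abs(a - b) for a, b in zip(pred, exacts[10]))
print(err)   # small: the learned correction recovers the exact response
```

In the two-stage MCMC setting of the thesis, such a corrected proxy response serves as the cheap preliminary screen, so the exact flow model is only run for proposals that survive it.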