919 results for Many-to-many-assignment problem
Abstract:
Our work aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as "functional data" and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First, we adapt the well-known spline smoothing techniques to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
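As an illustration of the pipeline sketched in this abstract (compositional transform, spline smoothing, trajectory dissimilarity, clustering), the following minimal Python sketch is one possible realisation. The ilr transform, the smoothing-spline settings and the shape-and-level distance below are illustrative assumptions, not the specific choices made in the work.

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.cluster.hierarchy import linkage, fcluster

    def ilr(comp):
        # Isometric log-ratio transform of compositions (an assumed choice of
        # compositional algebra; the work may use a different one).
        d = comp.shape[-1]
        logc = np.log(comp)
        out = []
        for i in range(1, d):
            g = logc[..., :i].mean(axis=-1)
            out.append(np.sqrt(i / (i + 1.0)) * (g - logc[..., i]))
        return np.stack(out, axis=-1)

    def smooth_trajectory(t, comps, s=0.1):
        # Smooth each ilr coordinate along the domain with a smoothing spline,
        # isolating the smooth part of the trajectory from measurement noise.
        z = ilr(comps)
        return np.column_stack([UnivariateSpline(t, z[:, j], s=s)(t)
                                for j in range(z.shape[1])])

    def dissimilarity(a, b, shape_only=False):
        # L2-type distance between smoothed trajectories; removing the mean
        # level gives a shape-only metric (an assumed analogue of the two
        # metrics proposed in the work).
        if shape_only:
            a, b = a - a.mean(axis=0), b - b.mean(axis=0)
        return np.sqrt(((a - b) ** 2).sum())

    # toy usage: 20 trajectories of 3-part compositions observed at 30 points
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 30)
    raw = rng.dirichlet([2.0, 3.0, 5.0], size=(20, 30))
    smooth = [smooth_trajectory(t, traj) for traj in raw]
    n = len(smooth)
    d = np.array([dissimilarity(smooth[i], smooth[j])
                  for i in range(n) for j in range(i + 1, n)])
    labels = fcluster(linkage(d, method="ward"), t=3, criterion="maxclust")
    print(labels)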
Abstract:
Purpose: Although young males encounter sexually related concerns, they are mostly absent from specialized services. Our objective is to assess whether the internet is used by boys to find answers to these types of problems and questions. Methods: In the context of a qualitative study assessing young males' barriers to accessing sexual and reproductive health facilities, we conducted two focus groups gathering 12 boys aged 17-20. Discussions were triggered by the presentation of four vignettes corresponding to questions posted by 17-20 year old boys and girls on an information website for adolescents (www.ciao.ch), concerning various sexual dysfunction situations. In order to avoid having to talk about their own experience, participants were asked what they would do in those cases. Results: In general, the internet was mentioned extensively, both as a means of searching for information through search engines and as a place to ask professionals for advice. Within the hierarchy of consultation possibilities, the internet was ranked first as a way to deal with these types of problems, presenting many advantages: (1) the internet makes it possible to maintain intimacy; (2) it is anonymous (use of a pseudonym); (3) it avoids having to confront someone face-to-face with personal problems, which can be embarrassing and challenging for one's pride; (4) it is free; and (5) it is accessible at all times. In other words, participants value the internet as a positive tool to avoid many barriers which prevent offline consultations from taking place. Most participants consider the internet at least as a first step in trying to solve a problem; for instance, by better defining the seriousness of a problem and judging whether it is worth consulting a doctor. However, despite the positive qualities of the internet, they do put forward the importance of having specialists answer questions, of trustworthiness, and of being followed up by the same person. Participants suggested that a strategy to break down the barriers that keep boys from consulting in face-to-face settings is to offer a consultation on the internet as a first step, which could then guide the person to an in-person consultation if necessary. Conclusions: The internet as a means of obtaining information or consulting received high marks overall. Although the internet cannot replace an in-person consultation, the screen and the keyboard have the advantage of not involving a face-to-face encounter and raise the possibility of discussing sexual problems anonymously and in private. Internet tools, together with other new technologies, should continue to be developed in a secure manner as a space providing prevention messages and as an easily accessible gateway to sexual and reproductive health services for young men, which can then guide youths to appropriate resource persons. Sources of support: This study was supported by the Maurice Chalumeau Foundation, Switzerland.
Abstract:
Delivery context-aware adaptive heterogeneous systems. Currently, the range of device types that have gained access to the network is large and diverse. Their different capabilities and characteristics, in addition to the different characteristics and preferences of users, have generated a new goal to overcome: how to adapt content taking this heterogeneity, known as the "delivery context," into account. The concepts of adaptation and accessibility have been widely discussed and have resulted in many proposals, standards and techniques designed to solve the problem, making it necessary to refine the analysis of the issues to be considered in the adaptation process. We present a tour of the various proposals and standards that have marked the area of heterogeneous systems, as well as work that has addressed real-time interaction through agent-based platforms. All target a common goal: the delivery context.
Abstract:
Background: Recent advances in high-throughput technologies have produced a vast amount of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has prompted the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, by knowledge-based potentials, and by combinations of both. Results: Here, we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy allows circumventing the problem of defining the reference state. In this approach we give the proof for a simple and linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to state-of-the-art methods. Besides, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores have detected near-native structures as accurately as state-of-the-art methods and have been successful in identifying wrongly modelled regions of many near-native conformations.
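A minimal sketch of the kind of composite Z-score described above: each knowledge-based scoring term is standardised over the set of candidate models, and the standardised terms are summed. The equal-weight sum and the toy data are assumptions for illustration; the paper optimises the combination of Z-scores.

    import numpy as np

    def composite_zscore(term_energies):
        # term_energies: array of shape (n_models, n_terms), each column one
        # knowledge-based scoring term evaluated on every candidate model.
        # Z-score each term over the models, then combine with equal weights
        # (assumed simplest combination; the weights could be optimised).
        z = (term_energies - term_energies.mean(axis=0)) / term_energies.std(axis=0)
        return z.sum(axis=1)

    # toy usage: rank 100 hypothetical models described by 4 scoring terms
    rng = np.random.default_rng(1)
    energies = rng.normal(size=(100, 4))
    score = composite_zscore(energies)
    best = np.argsort(score)[:5]   # lowest composite score = predicted near-native
    print(best)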
Abstract:
Carbon isotope ratio (CIR) analysis has been routinely and successfully used in sports drug testing for many years to uncover the misuse of endogenous steroids. One limitation of the method is the availability of steroid preparations exhibiting CIRs equal to endogenous steroids. To overcome this problem, hydrogen isotope ratios (HIR) of endogenous urinary steroids were investigated as a potential complement; results obtained from a reference population of 67 individuals are presented herein. An established sample preparation method was modified and improved to enable separate measurements of each analyte of interest where possible. From the fraction of glucuronidated steroids, pregnanediol, 16-androstenol, 11-ketoetiocholanolone, androsterone (A), etiocholanolone (E), dehydroepiandrosterone (D), 5α- and 5β-androstanediol, testosterone and epitestosterone were included. In addition, sulfate conjugates of A, E, D, epiandrosterone and 17α- and 17β-androstenediol were considered and analyzed after acidic solvolysis. The obtained results enabled the calculation of the first reference-population-based thresholds for HIR of urinary steroids that can readily be applied to routine doping control samples. Proof of concept was accomplished by investigating urine specimens collected after a single oral application of testosterone undecanoate. The HIR of most testosterone metabolites were found to be significantly influenced by the exogenous steroid beyond the established threshold values. Additionally, one regular doping control sample with an extraordinary testosterone/epitestosterone ratio of 100 but without a suspicious CIR was subjected to the complementary methodology of HIR analysis. The HIR data eventually provided evidence for the exogenous origin of the urinary testosterone metabolites. Although further investigations on HIR are advisable to corroborate the presented reference-population-based thresholds, the developed method proved to be a new tool supporting modern sports drug testing procedures.
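The abstract does not detail how the reference-population-based thresholds were constructed; a common approach, sketched below purely as an assumption, is to place the threshold a fixed number of standard deviations from the reference-population mean and to flag samples whose metabolite values fall beyond it. The direction of the comparison and all numbers here are illustrative only.

    import numpy as np

    def population_threshold(reference_values, k=3.0):
        # Reference-population-based threshold as mean minus k standard
        # deviations (an assumed construction, not the paper's exact statistic).
        m, s = np.mean(reference_values), np.std(reference_values, ddof=1)
        return m - k * s

    def flag_sample(delta_metabolite, threshold):
        # Flag a sample whose value lies beyond the population threshold
        # (illustrative direction of comparison).
        return delta_metabolite < threshold

    # toy usage with hypothetical delta-2H values (per mil) of a urinary metabolite
    rng = np.random.default_rng(2)
    reference_population = rng.normal(-230.0, 10.0, size=67)  # 67 individuals, as in the study
    thr = population_threshold(reference_population)
    print(thr, flag_sample(-290.0, thr))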
Abstract:
Isotopic and isotonic chains of superheavy nuclei are analyzed to search for spherical double shell closures beyond Z=82 and N=126 within the new effective field theory model of Furnstahl, Serot, and Tang for the relativistic nuclear many-body problem. We take into account several indicators to identify the occurrence of possible shell closures, such as two-nucleon separation energies, two-nucleon shell gaps, average pairing gaps, and the shell correction energy. The effective Lagrangian model predicts N=172 and Z=120 and N=258 and Z=120 as spherical doubly magic superheavy nuclei, whereas N=184 and Z=114 show some magic character depending on the parameter set. The magicity of a particular neutron (proton) number in the analyzed mass region is found to depend on the number of protons (neutrons) present in the nucleus.
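The shell-closure indicators named above are standard quantities; in terms of binding energies B(N,Z) they are conventionally defined (for neutrons, and analogously for protons) as

    S_{2n}(N,Z) = B(N,Z) - B(N-2,Z), \qquad
    \delta_{2n}(N,Z) = S_{2n}(N,Z) - S_{2n}(N+2,Z)

A sudden drop of S_{2n} beyond a given N, i.e. a pronounced peak of the two-neutron shell gap \delta_{2n}, signals a possible shell closure; the paper presumably uses these conventional definitions, which are not reproduced in the abstract.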
Abstract:
The performance of density-functional theory to solve the exact, nonrelativistic, many-electron problem for magnetic systems has been explored in a new implementation imposing space and spin symmetry constraints, as in ab initio wave function theory. Calculations on selected systems representative of organic diradicals, molecular magnets and antiferromagnetic solids carried out with and without these constraints lead to contradictory results, which provide numerical illustration on this usually obviated problem. It is concluded that the present exchange-correlation functionals provide reasonable numerical results although for the wrong physical reasons, thus evidencing the need for continued search for more accurate expressions.
Abstract:
The Iowa Department of Transportation has noticed an increase in the occurrence of excessively vibrated portland cement concrete (PCC) pavements. The overconsolidation of PCC pavements can be observed in several sections of PCC highways across the state of Iowa. Also, excessive vibration is believed to be a factor in the premature deterioration of several pavements in Iowa. To address the problem of excessive vibration, a research project was conducted to document the vibratory practices of PCC slipform paving in Iowa and determine the effect of vibration on the air content of pavement. The primary factors studied were paver speed, vibrator frequency, and air content relative to the location of the vibrator. The study concluded that the Iowa Department of Transportation specification of 5000 and 8000 vibrations per minute (vpm) for slipform pavers is effective for normal paver speeds observed on the three test paving projects. Excessive vibration was clearly identified on one project where a vibrator frequency was found to be 12,000 vpm. When the paver speed was reduced to half the normal speed, hard air contents indicated that excessive vibration was beginning to occur in the localized area immediately surrounding the vibrator at a frequency of 8000 vpm. Analysis of variance testing indicated many variables and interactions to be significant at a 95% confidence level; however, the variables and interactions that were found to be significant varied from project to project. This affirms the complexity of the process for consolidating PCC.
Abstract:
This report summarizes the analysis of transverse cracking in asphalt pavement by a five-state study team from Iowa, Kansas, Nebraska, North Dakota, and Oklahoma. The study was initiated under the sponsorship of the Federal Highway Administration, and four evaluation conferences were held during the course of the study. Each state conducted a crack inventory on its asphalt pavement. An effort was made to correlate this inventory with numerous factors considered to be pertinent to the cracking problem. One state did indicate that there was a correlation between transverse cracking severity and the subsurface geology. The other states were unable to identify any significant factors as being the primary contributors. The analysis of the problem was divided into (1) mix design, (2) maintenance, and (3) 3R rehabilitation. Many potential factors to be considered were identified under each of these three study divisions. There were many conclusions as to good and bad practices. One major conclusion was that a more effective crack maintenance program with early sealing was essential. Some new practices were suggested as potentially more cost-effective in design, construction and maintenance. The interchange of methods and procedures among the individual states yielded benefits, in that states selected practices that would be an improvement to their own programs.
Abstract:
With over 68 thousand miles of gravel roads in Iowa and the importance of these roads within the farm-to-market transportation system, proper water management becomes critical for maintaining the integrity of the roadway materials. However, the build-up of water within the aggregate subbase can lead to frost boils and ultimately to potholes forming at the road surface. The aggregate subbase and subgrade soils under these gravel roads are produced with material opportunistically chosen from local sources near the site and, many times, the compositions of these sublayers are far from ideal in terms of water drainage, with the full effects of this shortcut not being well understood. The primary objective of this project was to provide a physically based model for evaluating the drainability of potential subbase and subgrade materials for gravel roads in Iowa. The Richards equation provided the appropriate framework to study the transient unsaturated flow that usually occurs through the subbase and subgrade of a gravel road. From this framework, we identified the saturated hydraulic conductivity, Ks, as a key parameter driving the time to drain of subgrade soils found in Iowa, and thus a good proxy variable for assessing roadway drainability. Using Ks derived from soil texture, we were able to identify potential problem areas in terms of roadway drainage. It was found that there is a threshold for Ks of 15 cm/day that determines whether the roadway will drain efficiently, based on the requirement that the time to drain, Td, of the surface roadway layer does not exceed a 2-hr limit. Two of the three most abundant textures (loam and silty clay loam), which cover nearly 60% of the state of Iowa, were found to have average Td values greater than the 2-hr limit. With such a large percentage of the state at risk for the formation of boils due to soils with relatively low saturated hydraulic conductivity values, it seems pertinent to propose alternative design and/or maintenance practices to limit expensive repair work in Iowa. The addition of drain tiles or French mattresses may help address drainage problems. However, before pursuing this recommendation, a comprehensive cost-benefit analysis is needed.
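The report cites the Richards equation without reproducing it; the conventional mixed (head-based) form for one-dimensional vertical unsaturated flow, which the modelling presumably follows, is

    \frac{\partial \theta(h)}{\partial t} = \frac{\partial}{\partial z}\left[ K(h)\left( \frac{\partial h}{\partial z} + 1 \right) \right]

where \theta is the volumetric water content, h is the pressure head, z is the vertical coordinate, and K(h) is the unsaturated hydraulic conductivity, which equals Ks at saturation. Under the criterion stated above, a layer is considered adequately drainable when Td does not exceed 2 hr, which the study found to correspond to Ks of at least 15 cm/day.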
Abstract:
The selection of an electric drive is affected by several different factors. The selection can be based on knowledge of the physical behaviour of the process or the actuator; it can also be based on the need for sufficient performance in the process. This work examines the factors affecting the selection of an electric drive and the dimensioning of the drive. The work concentrates on the more common low-voltage motor types and their control. Many processes require the use of several motors to move the same load. Knowledge of the control of multi-motor drives helps in solving problem situations and provides the basis for selecting a multi-motor drive. This work deals with problems related to speed differences and uneven torque sharing in multi-motor drives.
Abstract:
In order to take into account the radius-dependent, non-uniform structure of the teeth and the other electrical and magnetic parts of the machine, the calculation of an axial flux permanent magnet machine is conventionally done by means of 3D FEM methods. This calculation procedure, however, requires a lot of time and computer resources. This study shows that analytical methods can also be applied to perform the calculation successfully. The analytical calculation procedure can be summarized in the following steps: first the magnet is divided into slices, the calculation is carried out for each section individually, and the partial results are then combined into the final results. It is obvious that this method can save a lot of design and calculation time. The calculation program is designed to model the magnetic and electrical circuits of surface-mounted axial flux permanent magnet synchronous machines in such a way that it takes into account possible magnetic saturation of the iron parts. The result of the calculation is the torque of the motor, including its fluctuations. The motor geometry, the materials, and either the torque or the pole angle are defined, and the motor can be fed with three-phase currents of arbitrary shape and amplitude. There are no limits on the size or the number of pole pairs, nor on many other factors. The calculation steps and the number of magnet sections are selectable, but the calculation time depends strongly on them. The results are compared to measurements of real prototypes. The permanent magnet creates part of the flux in the magnetic circuit. The form and amplitude of the flux density in the air gap depend on the geometry and material of the magnetic circuit, on the length of the air gap and on the remanence flux density of the magnet. Slotting is taken into account by using the Carter factor in the slot-opening area. The calculation is simple and fast if the shape of the magnet is a square and has no skew in relation to the stator slots. With a more complicated magnet shape the calculation has to be done in several sections; the larger the number of sections, the more accurate the result. In a radial flux motor all sections of the magnets create force at the same radius. In an axial flux motor, each radial section creates force at a different radius, and the torque is the sum of these contributions. The magnetic circuit of the motor, consisting of the stator iron, rotor iron, air gap, magnet and slot, is modelled with a reluctance network that accounts for the saturation of the iron. This means that several iterations, in which the permeability is updated, have to be performed to obtain the final results. The motor torque is calculated using the instantaneous flux linkage and stator currents. The flux linkage is the part of the flux that is created by the permanent magnets and the stator currents and that passes through the coils in the stator teeth. The angle between this flux and the phase currents defines the torque created by the magnetic circuit. Due to the winding structure of the stator, and in order to limit the leakage flux, the slot openings of the stator are normally not made of ferromagnetic material, even though, in some cases, semi-magnetic slot wedges are used. At the slot-opening faces the flux enters the iron almost normally (tangentially with respect to the rotor flux), creating tangential forces in the rotor. This phenomenon is called cogging.
The flux in the slot-opening area on the two sides of the opening and in the different slot openings is not equal, and so these forces do not compensate each other. In the calculation it is assumed that the flux entering the left side of the opening is the component to the left of the geometrical centre of the slot. This torque component, together with the torque component calculated using the Lorentz force, makes up the total torque of the motor. It is easy to see that when all the magnet edges, where the derivative of the magnet flux density is at its highest, enter the slot openings at the same time, the result is a considerable cogging torque. To reduce the cogging torque the magnet edges can be shaped so that they are not parallel to the stator slots, which is the common way to solve the problem. In doing so, the edge may be spread along the whole slot pitch and thus the high derivative component will also be spread evenly along the rotation. Besides shaping the magnets, they may also be placed somewhat asymmetrically on the rotor surface. The asymmetric distribution can be made in many different ways. All the magnets may have a different deflection from the symmetrical centre point, or they can, for example, be shifted in pairs. There are some factors that limit the deflection. The first is that the magnets cannot overlap; the magnet shape and the width relative to the pole define the deflection in this case. The other factor is that shifting the poles limits the maximum torque of the motor. If the edges of adjacent magnets are very close to each other, the leakage flux from one pole to the other increases, thus reducing the air-gap magnetization. The asymmetric model needs some assumptions and simplifications in order to limit the size of the model and the calculation time. The reluctance network is built for a symmetric distribution. If the magnets are distributed asymmetrically, the flux in the different pole pairs will not be exactly the same. Therefore, the assumption that the flux flows from the edges of the model to the next pole pairs, in the calculation model from one edge to the other, is not correct. If this fact were to be considered in multi-pole-pair machines, all the poles, in other words the whole machine, would have to be modelled with the reluctance network. The error resulting from this assumption is, nevertheless, negligible.
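The slice-and-sum idea described above (each radial section of an axial flux machine produces force at its own radius, and the torque is the sum of the contributions) can be illustrated with a minimal Python sketch. The per-slice force model below, a tangential stress equal to the product of air-gap flux density and stator linear current density, is a deliberately simplified assumption; the thesis computes the slice fluxes with a saturable reluctance network instead.

    import numpy as np

    def axial_flux_torque(r_in, r_out, n_slices, b_gap, linear_current_density):
        # Sum the torque contributions of radial slices of an axial flux machine:
        # each slice produces a tangential force F_i at its own radius r_i, and
        # the total torque is T = sum_i F_i * r_i.  The per-slice force model
        # (tangential stress sigma = B * A over an annulus) is illustrative only.
        radii = np.linspace(r_in, r_out, n_slices + 1)
        torque = 0.0
        for r0, r1 in zip(radii[:-1], radii[1:]):
            r_mid = 0.5 * (r0 + r1)
            area = np.pi * (r1**2 - r0**2)                        # annular slice area (one side)
            sigma = b_gap(r_mid) * linear_current_density(r_mid)  # tangential stress [N/m^2]
            torque += sigma * area * r_mid
        return torque

    # toy usage with hypothetical flux-density and current-loading profiles
    T = axial_flux_torque(
        r_in=0.05, r_out=0.10, n_slices=20,
        b_gap=lambda r: 0.85,                   # air-gap flux density [T]
        linear_current_density=lambda r: 3.0e4  # stator linear current density [A/m]
    )
    print(f"torque: {T:.1f} Nm")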
Abstract:
In electric drives, frequency converters are used to generate the AC voltage of variable frequency and amplitude for the electric motor. When considering the annual sales of drives, both in monetary value and in units sold, the use of low-performance drives appears to be predominant. These drives have to be very cost-effective to manufacture and use, while they are also expected to fulfil the harmonic distortion standards. One of the objectives has also been to extend the lifetime of the frequency converter. In a traditional frequency converter, a relatively large electrolytic DC-link capacitor is used. Electrolytic capacitors are large, heavy and rather expensive components. In many cases, the lifetime of the electrolytic capacitor is the main factor limiting the lifetime of the frequency converter. To overcome this problem, the electrolytic capacitor is replaced with a metallized polypropylene film capacitor (MPPF). The MPPF has improved properties compared to the electrolytic capacitor. By replacing the electrolytic capacitor with a film capacitor, the energy storage of the DC link is decreased. Thus, the instantaneous power supplied to the motor correlates with the instantaneous power taken from the network. This yields a continuous DC-link current fed by the diode rectifier bridge. As a consequence, the line current harmonics clearly decrease. Because of the decreased energy storage, the DC-link voltage fluctuates. This sets additional requirements for the controllers of the frequency converter to compensate for the fluctuation in the supplied motor phase voltages. In this work, three-phase and single-phase frequency converters with a small DC-link capacitor are analyzed. The evaluation is based on simulations and laboratory measurements.
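The consequence of shrinking the DC-link energy storage, namely a fluctuating DC-link voltage, can be illustrated with a minimal energy-balance sketch. The capacitance, power level and idealised rectifier power waveform below are hypothetical values chosen for illustration, not the converters studied in the work.

    import numpy as np

    # Energy balance of the DC link:  C * V * dV/dt = p_in(t) - p_out(t)
    C = 100e-6         # small film capacitor [F] (hypothetical value)
    V0 = 560.0         # initial DC-link voltage [V]
    f_line = 50.0      # supply frequency [Hz]
    P_motor = 1000.0   # assumed constant power drawn by the inverter [W]

    dt = 1e-6
    t = np.arange(0.0, 0.04, dt)
    # Idealised rectifier output power pulsating at twice the line frequency
    p_in = P_motor * (1.0 - np.cos(4.0 * np.pi * f_line * t))
    v = np.empty_like(t)
    v[0] = V0
    for k in range(1, len(t)):
        dv = (p_in[k - 1] - P_motor) / (C * v[k - 1]) * dt
        v[k] = v[k - 1] + dv

    print(f"DC-link voltage fluctuates between {v.min():.0f} V and {v.max():.0f} V")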
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning from a subset of realizations the relationship between proxy and exact curves. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem by a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide the approximate response used in the preliminary stage of the two-stage MCMC set-up. We demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
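A minimal sketch of a functional error model of the kind described: principal components are extracted from the proxy-versus-exact discrepancy curves of a small training subset, and a linear map from proxy curves to discrepancy scores is then used to correct the remaining proxy responses. The PCA-plus-linear-regression construction and the toy data are illustrative assumptions, not the exact formulation of the thesis.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    def fit_error_model(proxy_train, exact_train, n_components=3):
        # Learn, on a small training subset, how the exact responses deviate
        # from the proxy responses (curves discretised on a common time grid).
        residuals = exact_train - proxy_train
        pca = PCA(n_components=n_components).fit(residuals)
        scores = pca.transform(residuals)
        reg = LinearRegression().fit(proxy_train, scores)
        return pca, reg

    def correct(proxy, pca, reg):
        # Predict the residual curve from the proxy curve and add it back,
        # estimating the exact response without running the full flow model.
        return proxy + pca.inverse_transform(reg.predict(proxy))

    # toy usage: 200 proxy curves on 50 time steps, exact model run for only 30
    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 50)
    proxy_all = rng.random((200, 1)) * np.exp(-t)                 # crude proxy responses
    exact_all = proxy_all * 1.2 + 0.05 * np.sin(2 * np.pi * t)    # hypothetical exact responses
    train = slice(0, 30)
    pca, reg = fit_error_model(proxy_all[train], exact_all[train])
    corrected = correct(proxy_all, pca, reg)
    print(np.abs(corrected - exact_all).mean(), np.abs(proxy_all - exact_all).mean())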