945 results for mean field independent component analysis


Relevance:

100.00%

Publisher:

Abstract:

This article presents an overview of the field of discourse analysis that aims to be clarifying rather than exhaustive. This goal is pursued through the proposal that most work in discourse analysis can be classified under one of the following three perspectives: discourse as action, as system, and as information. The author reviews the basic concepts of each of these perspectives and integrates the proposals made by different 'schools' of discourse analysis into the scheme he proposes.

Relevance:

100.00%

Publisher:

Abstract:

Aim: Our aim was to understand the interplay of heterogeneous climatic and spatial landscapes in shaping the distribution of nuclear microsatellite variation in burrowing parrots, Cyanoliseus patagonus. Given the marked phenotypic differences between populations of burrowing parrots, we hypothesized an important role of geographical as well as climatic heterogeneity in the population structure of this species. Location: Southern South America. Methods: We applied a landscape genetics approach to investigate the explicit patterns of genetic spatial autocorrelation based on both geography and climate using spatial principal component analysis (sPCA). This necessitated a novel statistical estimation of the species' climatic landscape, considering temperature- and precipitation-based variables separately to evaluate their weight in shaping the distribution of genetic variation in our model system. Results: Geographical and climatic heterogeneity successfully explained molecular variance in burrowing parrots. sPCA divided the species distribution into two main areas, Patagonia and the pre-Andes, which were connected by an area of geographical and climatic transition. Moreover, sPCA revealed cryptic and conservation-relevant genetic structure: the pre-Andean populations and the transition localities were each divided into two groups, each representing a management unit for conservation. Main conclusions: sPCA, a method originally developed for spatial genetics, allowed us to unravel the genetic structure related to spatial and climatic landscapes and to visualize these patterns in landscape space. These novel climatic inferences underscore the importance of our modified sPCA approach in revealing how climatic variables can drive cryptic patterns of genetic structure, making the approach potentially useful in the study of any species distributed over a climatically heterogeneous landscape.
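The sPCA step described above can be sketched compactly. The toy example below is only an illustration of the idea, with invented allele frequencies, site coordinates and an inverse-distance connection network (the weighting choice is an assumption, not a detail given in the abstract): the covariance of centred allele frequencies is weighted by spatial proximity, so the leading eigenvectors pick out genetic structure that is also spatially autocorrelated.

# Conceptual sketch of spatial PCA (sPCA); hypothetical data, not the authors' code.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

rng = np.random.default_rng(0)
freqs = rng.random((40, 10))          # toy allele frequencies (individuals x alleles)
coords = rng.random((40, 2)) * 100.0  # toy site coordinates

# Row-standardised spatial weights from inverse distance (one of many possible
# networks; an analogous "climatic" network could be built the same way).
d = cdist(coords, coords)
np.fill_diagonal(d, np.inf)
W = 1.0 / d
W /= W.sum(axis=1, keepdims=True)

X = freqs - freqs.mean(axis=0)        # centre allele frequencies
n = X.shape[0]

# sPCA eigenproblem: axes that jointly maximise variance and spatial
# autocorrelation (large positive eigenvalues -> global structure,
# large negative -> local structure).
C = X.T @ (W + W.T) @ X / (2.0 * n)
evals, evecs = eigh(C)
scores = X @ evecs[:, ::-1]           # individual scores, sorted by eigenvalue
print(evals[::-1][:3])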

Relevance:

100.00%

Publisher:

Abstract:

The sensory, physical and chemical characteristics of 'Douradão' peaches cold stored in different modified atmosphere packaging (LDPE bags of 30, 50, 60 and 75 µm thickness) were studied. After 14, 21 and 28 days of cold storage (1 ± 1 ºC and 90 ± 5% RH), samples were withdrawn from MAP and kept for 4 days in ambient air for ripening. The descriptive terminology and sensory profile of the peaches were developed by a methodology based on Quantitative Descriptive Analysis (QDA). The assessors consensually defined the sensory descriptors, their respective reference materials and the descriptive evaluation ballot. Fourteen individuals were selected as judges based on their discrimination capacity and reproducibility. Seven descriptors were generated, showing similarities and differences among the samples. The data were analysed by ANOVA, Tukey's test and Principal Component Analysis (PCA). The atmospheres that developed inside the different packaging materials during cold storage differed significantly. The PCA showed that the MA50 and MA60 treatments were more strongly characterized by fresh peach flavour, fresh appearance, juiciness and flesh firmness, and were effective in keeping the good quality of 'Douradão' peaches during 28 days of cold storage. The control and MA30 treatments were characterized by mealiness, while the MA75 treatment showed lower intensity for all attributes evaluated; these treatments were ineffective in maintaining good fruit quality during cold storage. High positive correlation coefficients were found between fresh appearance and flesh firmness (0.95), fresh appearance and juiciness (0.97), and ratio and intensity of fresh peach smell (0.81), as well as high negative correlation coefficients between hue angle and intensity of yellow colour (-0.91), fresh appearance and mealiness (-0.92), juiciness and mealiness (-0.95), and firmness and mealiness (-0.94).
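The statistical pipeline mentioned above (ANOVA/Tukey on each descriptor, then PCA on the treatment-by-descriptor mean table) can be illustrated with a short sketch on made-up data; treatment and descriptor names are placeholders, not the study's measurements.

# Minimal sketch of the QDA-style analysis (Tukey's HSD and PCA) on toy sensory data.
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
treatments = ["Control", "MA30", "MA50", "MA60", "MA75"]
df = pd.DataFrame({
    "treatment": np.repeat(treatments, 14),            # 14 trained judges
    "flesh_firmness": rng.normal(5, 1, 70),
    "juiciness": rng.normal(5, 1, 70),
    "mealiness": rng.normal(3, 1, 70),
})

# Tukey's HSD for one descriptor across treatments
print(pairwise_tukeyhsd(df["flesh_firmness"], df["treatment"]))

# PCA on the treatment x descriptor mean table (the usual QDA sensory map)
means = df.groupby("treatment").mean(numeric_only=True)
scores = PCA(n_components=2).fit_transform(means - means.mean())
print(dict(zip(means.index, scores.round(2).tolist())))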

Relevance:

100.00%

Publisher:

Abstract:

We analyze the neutron skin thickness in finite nuclei with the droplet model and effective nuclear interactions. In the droplet model, the ratio of the bulk symmetry energy J to the so-called surface stiffness coefficient Q plays a prominent role in driving the size of neutron skins. We present a correlation between the density derivative of the nuclear symmetry energy at saturation and the J/Q ratio. We emphasize the role of the surface widths of the neutron and proton density profiles in the calculation of the neutron skin thickness when one uses realistic mean-field effective interactions. Next, taking as experimental baseline the neutron skin sizes measured in 26 antiprotonic atoms along the mass table, we explore the constraints arising from neutron skins on the value of the J/Q ratio. The results favor a relatively soft symmetry energy at subsaturation densities. Our predictions are compared with the recent constraints derived from other experimental observables. Though the various extractions predict different ranges of values, one finds a narrow window L ∼ 45-75 MeV for the coefficient L that characterizes the density derivative of the symmetry energy and that is compatible with all the different empirical indications.
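For reference, the slope parameter L quoted above comes from the standard expansion of the symmetry energy around saturation density (a textbook parametrization, not a result of this abstract):

S(\rho) \simeq J + L\,\frac{\rho - \rho_0}{3\rho_0} + \mathcal{O}\!\big((\rho - \rho_0)^2\big),
\qquad
L = 3\rho_0 \left.\frac{\partial S(\rho)}{\partial \rho}\right|_{\rho_0}.

In this notation, a relatively soft symmetry energy at subsaturation densities corresponds to a smaller L, consistent with the quoted window.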

Relevance:

100.00%

Publisher:

Abstract:

In traffic accidents involving motorcycles, paint traces can be transferred from the rider's helmet or smeared onto its surface. These traces are usually in the form of chips or smears and are frequently collected for comparison purposes. This research investigates the physical and chemical characteristics of the coatings found on motorcycle helmets. An evaluation of the similarities between helmet and automotive coating systems was also performed. Twenty-seven helmet coatings from 15 different brands and 22 models were considered. One sample per helmet was collected and observed using optical microscopy. FTIR spectroscopy was then used, and seven replicate measurements per layer were carried out to study the variability of each coating system (intravariability). Principal Component Analysis (PCA) and Hierarchical Cluster Analysis (HCA) were also performed on the infrared spectra of the clearcoats and basecoats of the data set. The most common systems were composed of two or three layers, consistently involving a clearcoat and a basecoat. The coating systems of helmets with composite shells systematically contained a minimum of three layers. FTIR spectroscopy results showed that acrylic urethane and alkyd urethane were the most frequent binders used for clearcoats and basecoats. A high proportion of the coatings (more than 95%) were differentiated based on microscopic examinations. The chemical and physical characteristics of the coatings allowed the differentiation of all but one pair of helmets of the same brand, model and color. Chemometrics (PCA and HCA) corroborated the classification based on visual comparisons of the spectra and allowed the study of the whole data set at once (i.e., all spectra of the same layer). Thus, the intravariability of each helmet and its proximity to the others (intervariability) could be more readily assessed. It was also possible to determine the most discriminative chemical variables based on the study of the PCA loadings. Chemometrics could therefore be used as a complementary decision-making tool when many spectra and replicates have to be taken into account. Similarities between automotive and helmet coating systems were highlighted, in particular with regard to automotive coating systems on plastic substrates (microscopy and FTIR). However, the primer layer of helmet coatings was shown to differ from the automotive primer. If the paint trace contains this layer, the risk of misclassification (i.e., helmet versus vehicle) is reduced. Nevertheless, a paint examiner should pay close attention to these similarities when analyzing paint traces, especially regarding smears or paint chips presenting an incomplete layer system.
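A minimal sketch of the PCA/HCA chemometric step on replicate spectra is given below. Random numbers stand in for the FTIR measurements, and Ward linkage on PCA scores is an assumed (common) choice rather than a detail taken from the abstract.

# Illustrative sketch: PCA and hierarchical cluster analysis on replicate spectra.
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
n_helmets, n_replicates, n_wavenumbers = 27, 7, 300
spectra = rng.random((n_helmets * n_replicates, n_wavenumbers))
labels = np.repeat(np.arange(n_helmets), n_replicates)

# PCA compresses the spectra; intravariability shows up as the spread of the
# 7 replicates of one helmet, intervariability as the distance between helmets.
scores = PCA(n_components=5).fit_transform(spectra - spectra.mean(axis=0))

# HCA (Ward linkage) on the PCA scores
Z = linkage(scores, method="ward")
clusters = fcluster(Z, t=10, criterion="maxclust")
print(clusters[labels == 0])   # cluster assignments of helmet 0's replicates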

Relevance:

100.00%

Publisher:

Abstract:

The recently developed semiclassical variational Wigner-Kirkwood (VWK) approach is applied to finite nuclei using external potentials and self-consistent mean fields derived from Skyrme interactions and from relativistic mean-field theory. VWK consists of the Thomas-Fermi part plus a pure, perturbative ℏ² correction. In external potentials, VWK passes through the average of the quantal values of the accumulated level density and total energy as a function of the Fermi energy. However, there is a problem of overbinding when the energy per particle is displayed as a function of the particle number. The situation is analyzed by comparing spherical and deformed harmonic-oscillator potentials. In the self-consistent case, we show for Skyrme forces that VWK binding energies are very close to those obtained from extended Thomas-Fermi functionals of ℏ⁴ order, pointing to the rapid convergence of the VWK theory. This satisfying result, however, does not cure the overbinding problem, i.e., the semiclassical energies show more binding than they should. This feature is more pronounced in the case of Skyrme forces than with the relativistic mean-field approach. However, even in the latter case the shell-correction energy for, e.g., 208Pb turns out to be only ∼ −6 MeV, which is about a factor of two or three off the generally accepted value. As an ad hoc remedy, increasing the kinetic energy by 2.5% leads to shell-correction energies that are acceptable throughout the periodic table. The general importance of the present studies for other finite Fermi systems, self-bound or in external potentials, is pointed out.
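For orientation, the quantities referred to above can be written schematically (standard definitions, assumed here rather than quoted from the paper):

E_{\mathrm{VWK}} = E_{\mathrm{TF}} + E_{\hbar^2},
\qquad
\delta E_{\mathrm{shell}} = E_{\mathrm{quantal}} - E_{\mathrm{semiclassical}},

so the ∼ −6 MeV quoted for 208Pb is the difference between the quantal mean-field energy and its semiclassical VWK estimate.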

Relevance:

100.00%

Publisher:

Abstract:

The present study evaluated the sensory quality of chocolates obtained from two cocoa cultivars (PH16 and SR162) resistant to Moniliophtora perniciosa mould, compared to a conventional cocoa that is not resistant to the disease. The acceptability of the chocolates was assessed so that promising cultivars with relevant sensory and commercial attributes could be indicated to cocoa producers and chocolate manufacturers. The descriptive terminology and the sensory profile of the chocolates were developed by Quantitative Descriptive Analysis (QDA). Ten panelists, selected on the basis of their discriminatory capacity and reproducibility, defined eleven sensory descriptors, their respective reference materials and the descriptive evaluation ballot. The data were analyzed using ANOVA, Principal Component Analysis (PCA) and Tukey's test to compare the means. The results revealed significant differences among the sensory profiles of the chocolates. Chocolates from the PH16 cultivar were characterized by a darker brown color, a more intense chocolate flavor and odor, bitterness and a firmer texture, which are important sensory and commercial attributes. Chocolates from the SR162 cultivar were characterized by greater sweetness and melting quality, and chocolates from the conventional treatment presented sensory characteristics intermediate between those of the other two chocolates. All samples showed high acceptance, but chocolates from the PH16 cultivar and the conventional treatment obtained higher purchase intention scores.

Relevance:

100.00%

Publisher:

Abstract:

The saturation properties of neutron-rich matter are investigated in a relativistic mean-field formalism using two accurately calibrated models: NL3 and FSUGold. The saturation properties (density, binding energy per nucleon, and incompressibility coefficient) are calculated as a function of the neutron-proton asymmetry α ≡ (N-Z)/A to all orders in α. Good agreement (at the 10% level or better) is found between these numerical calculations and analytic expansions that are given in terms of a handful of bulk parameters determined at saturation density. Using insights developed from the analytic approach and a general expression for the incompressibility coefficient of infinite neutron-rich matter, i.e., K0(α) = K0 + Kτα² + …, we construct a hybrid model with values for K0 and Kτ as suggested by recent experimental findings. Whereas the hybrid model provides a better description of the measured distribution of isoscalar monopole strength in the Sn isotopes relative to both NL3 and FSUGold, it significantly underestimates the distribution of strength in 208Pb. Thus, we conclude that the incompressibility coefficient of neutron-rich matter remains an important open problem.
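For orientation, the quadratic expansions behind these quantities are commonly written as (standard relations, assumed here rather than taken from the abstract):

\frac{E}{A}(\rho,\alpha) \simeq \frac{E}{A}(\rho,0) + S(\rho)\,\alpha^2,
\qquad
K_0(\alpha) = K_0 + K_\tau\,\alpha^2 + \mathcal{O}(\alpha^4),
\qquad
K_\tau = K_{\mathrm{sym}} - 6L - \frac{Q_0}{K_0}\,L,

where L and K_sym are the slope and curvature of the symmetry energy at saturation and Q_0 is the skewness of the symmetric-matter equation of state.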

Relevance:

100.00%

Publisher:

Abstract:

Using mean field theory, we have studied Bose-Fermi mixtures in a one-dimensional optical lattice in the case of an attractive boson-fermion interaction. We consider that the fermions are in the degenerate regime and that the laser intensities are such that quantum coherence across the condensate is ensured. We discuss the effect of the optical lattice on the critical rotational frequency for vortex line creation in the Bose-Einstein condensate, as well as how it affects the stability of the boson-fermion mixture. A reduction of the critical frequency for nucleating a vortex is observed as the strength of the applied laser is increased. The onset of instability of the mixture occurs for a sizably lower number of fermions in the presence of a deep optical lattice.

Relevance:

100.00%

Publisher:

Abstract:

The local thermodynamics of a system with long-range interactions in d dimensions is studied using the mean-field approximation. Long-range interactions are introduced through pair interaction potentials that decay as a power law in the interparticle distance. We compute the local entropy, Helmholtz free energy, and grand potential per particle in the microcanonical, canonical, and grand canonical ensembles, respectively. From the local entropy per particle we obtain the local equation of state of the system by using the condition of local thermodynamic equilibrium. This local equation of state has the form of the ideal gas equation of state, but with the density depending on the potential characterizing long-range interactions. By volume integration of the relation between the different thermodynamic potentials at the local level, we find the corresponding equation satisfied by the potentials at the global level. It is shown that the potential energy enters as a thermodynamic variable that modifies the global thermodynamic potentials. As a result, we find a generalized Gibbs-Duhem equation that relates the potential energy to the temperature, pressure, and chemical potential. For the marginal case where the power of the decaying interaction potential is equal to the dimension of the space, the usual Gibbs-Duhem equation is recovered. As examples of the application of this equation, we consider spatially uniform interaction potentials and the self-gravitating gas. We also point out a close relationship with the thermodynamics of small systems.
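For comparison, the familiar global relation that the abstract generalizes is the Gibbs-Duhem equation,

S\,dT - V\,dP + N\,d\mu = 0,

to which, according to the abstract, a term involving the potential energy of the long-range interactions must be added, the usual form being recovered only in the marginal case where the decay exponent of the pair potential equals the spatial dimension d.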

Relevance:

100.00%

Publisher:

Abstract:

Combining theories on social trust and social capital with sociopsychological approaches and applying contextual analyses to Swiss and European survey data, this thesis examines under what circumstances generalised trust, often understood as a public good, may not benefit everyone, but instead amplify inequality. The empirical investigation focuses on the Swiss context, but considers different scales of analysis. Two broader questions are addressed. First, might generalised trust imply more or less narrow visions of community and solidarity in different contexts? Applying nonlinear principal component analysis to aggregate indicators, Study 1 explores inclusive and exclusive types of social capital in Europe, measured as regional configurations of generalised trust, civic participation and attitudes towards diversity. Study 2 employs multilevel models to examine how generalised trust, as an individual predisposition and an aggregate climate at the level of Swiss cantons, is linked to equality-directed collective action intention versus radical right support. Second, might high-trust climates impact negatively on disadvantaged members of society, precisely because they reflect a normative discourse of social harmony that impedes recognition of inequality? Study 3 compares how climates of generalised trust at the level of Swiss micro-regions and subjective perceptions of neighbourhood cohesion moderate the negative relationship between socio-economic disadvantage and mental health. Overall, demonstrating beneficial as well as counterintuitive effects of social trust, this thesis proposes a critical and contextualised approach to the sources and dynamics of social cohesion in democratic societies.
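A rough stand-in for the Study 1 analysis is sketched below: deriving a low-dimensional social-capital configuration from standardized regional indicators. The thesis uses a nonlinear (optimal-scaling) PCA; the kernel PCA here is only a convenient nonlinear substitute for illustration, and the indicator names and data are invented.

# Conceptual stand-in for nonlinear PCA on regional aggregate indicators.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(3)
regions = pd.DataFrame({
    "generalised_trust": rng.random(60),
    "civic_participation": rng.random(60),
    "attitudes_to_diversity": rng.random(60),
})

X = StandardScaler().fit_transform(regions)
components = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)

# Two dimensions on which regions can be located, e.g. a more "inclusive" vs.
# more "exclusive" social-capital configuration in the thesis' terminology.
print(components[:5].round(2))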

Relevance:

100.00%

Publisher:

Abstract:

Background: Differences in the distribution of genotypes between individuals of the same ethnicity are an important confounding factor commonly underappreciated in typical association studies conducted in radiogenomics. Objective: To evaluate the genotypic distribution of SNPs in a wide set of Spanish prostate cancer patients in order to determine the homogeneity of the population and to disclose potential bias. Design, Setting, and Participants: A total of 601 prostate cancer patients from Andalusia, the Basque Country, the Canary Islands and Catalonia were genotyped for 10 SNPs located in 6 different genes associated with DNA repair: XRCC1 (rs25487, rs25489, rs1799782), ERCC2 (rs13181), ERCC1 (rs11615), LIG4 (rs1805388, rs1805386), ATM (rs17503908, rs1800057) and P53 (rs1042522). The SNP genotyping was performed on a Biotrove OpenArray NT Cycler. Outcome Measurements and Statistical Analysis: Comparisons of genotypic and allelic frequencies among populations, as well as haplotype analyses, were performed using the web-based environment SNPator. Principal component analysis was carried out using the SnpMatrix and XSnpMatrix classes and methods implemented as an R package. Unsupervised hierarchical clustering of SNPs was performed using MultiExperiment Viewer. Results and Limitations: We observed that the genotype distribution of 4 out of 10 SNPs was statistically different among the studied populations, showing the greatest differences between Andalusia and Catalonia. These observations were confirmed by cluster analysis, principal component analysis and the differential distribution of haplotypes among the populations. Because tumor characteristics have not been taken into account, it is possible that some polymorphisms may influence tumor characteristics in the same way that they may pose a risk factor for other disease characteristics. Conclusion: Differences in the distribution of genotypes within different populations of the same ethnicity could be an important confounding factor responsible for the lack of validation of SNPs associated with radiation-induced toxicity, especially when extensive meta-analyses with subjects from different countries are carried out.
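The population-structure check described above can be sketched as follows; the genotype matrix is simulated and the coding and clustering settings are assumptions for illustration, not the study's parameters.

# Illustrative sketch: PCA on a 0/1/2-coded genotype matrix plus unsupervised
# hierarchical clustering of the SNPs (toy data, 601 patients x 10 SNPs).
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(4)
genotypes = rng.integers(0, 3, size=(601, 10)).astype(float)  # minor-allele counts

# PCA of patients in genotype space; regional stratification would show up as
# structure along the leading components.
pcs = PCA(n_components=2).fit_transform(genotypes - genotypes.mean(axis=0))
print(pcs[:3].round(2))

# Hierarchical clustering of the SNPs themselves (columns).
Z = linkage(genotypes.T, method="average", metric="correlation")
print(Z.shape)  # (n_snps - 1, 4) merge table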

Relevance:

100.00%

Publisher:

Abstract:

Objective: To compare lower incisor dentoalveolar compensation and mandibular symphysis morphology among Class I and Class III malocclusion patients with different vertical facial skeletal patterns. Materials and Methods: Lower incisor extrusion and inclination, as well as buccal (LA) and lingual (LP) cortex depth and mandibular symphysis height (LH), were measured on 107 lateral cephalometric x-rays of adult patients without prior orthodontic treatment. In addition, malocclusion type (Class I or III) and vertical facial skeletal pattern were considered. Through a principal component analysis (PCA), related variables were reduced. Simple regression equations and multivariate analyses of variance were also used. Results: Incisor mandibular plane angle (P < .001) and extrusion (P = .03) values showed significant differences between the sagittal malocclusion groups. Variations in the mandibular plane have a negative correlation with LA (Class I P = .03 and Class III P = .01) and a positive correlation with LH (Class I P = .01 and Class III P = .02) in both groups. Within the Class III group, there was a negative correlation between the mandibular plane and LP (P = .02). PCA showed that the tendency toward a long face causes the symphysis to elongate and narrow. In Class III, alveolar narrowing is also found in normal faces. Conclusions: Vertical facial pattern is a significant factor in mandibular symphysis alveolar morphology and lower incisor positioning, both for Class I and Class III patients. Short-faced Class III patients have widened alveolar bone. However, for long-faced and normal-faced Class III patients, natural compensation elongates the symphysis and influences lower incisor position.

Relevance:

100.00%

Publisher:

Abstract:

The symmetry energy coefficients, incompressibility, and single-particle and isovector potentials of clusterized dilute nuclear matter are calculated at different temperatures employing the S-matrix approach to the evaluation of the equation of state. Calculations have been extended to understand the aforesaid properties of homogeneous and clusterized supernova matter in the subnuclear density region. A comparison of the results from the S-matrix and mean-field approaches reveals some subtle differences in the density and temperature region we explore.

Relevance:

100.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method) both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational costs and leads to an increase in accuracy and robustness of the uncertainty propagation. The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves.
In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnosis of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of the proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide an approximate response for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of three with respect to one-stage MCMC results. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires devising an iterative strategy to construct the error model, such that, as new flow simulations are performed, the error model is iteratively improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply this methodology to a problem of saline intrusion in a coastal aquifer.
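The functional error model described across the first two parts lends itself to a compact sketch. The code below is a toy illustration under assumed curve shapes, not the thesis' implementation: ordinary PCA on discretized curves stands in for FPCA, and a linear regression links proxy scores to error scores so that new proxy responses can be corrected without running the exact flow model.

# Toy functional error model: learn proxy-score -> error-score map on a
# training subset, then correct a new ensemble of proxy curves.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)
t = np.linspace(0.0, 1.0, 200)                      # time axis of the curves
n_train, n_new = 30, 100

def proxy_curve(a):
    return a * np.sin(2 * np.pi * t)                # cheap approximate response

def exact_curve(a):
    return a * np.sin(2 * np.pi * t) + 0.1 * a**2 * t   # "exact" response with bias

a_train = rng.random(n_train)
proxy_tr = np.array([proxy_curve(a) for a in a_train])
exact_tr = np.array([exact_curve(a) for a in a_train])

# FPCA is approximated here by ordinary PCA on the discretized curves.
pca_proxy = PCA(n_components=3).fit(proxy_tr)
pca_error = PCA(n_components=3).fit(exact_tr - proxy_tr)

# Regression between functional scores: proxy scores -> error scores.
reg = LinearRegression().fit(pca_proxy.transform(proxy_tr),
                             pca_error.transform(exact_tr - proxy_tr))

# Correct a new ensemble of proxy responses without running the exact model.
a_new = rng.random(n_new)
proxy_new = np.array([proxy_curve(a) for a in a_new])
err_hat = pca_error.inverse_transform(reg.predict(pca_proxy.transform(proxy_new)))
corrected = proxy_new + err_hat
print(np.abs(corrected - np.array([exact_curve(a) for a in a_new])).max())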