877 results for Non-stationary iterative method


Relevance: 30.00%

Publisher:

Abstract:

Electric motors driven by adjustable-frequency converters may produce periodic excitation forces that can cause torque and speed ripple. Interaction with the driven mechanical system may cause undesirable vibrations that affect the system performance and lifetime. Direct drives in sensitive applications, such as elevators or paper machines, emphasize the importance of smooth torque production. This thesis analyses the non-idealities of frequency converters that produce speed and torque ripple in electric drives. The origin of low-order harmonics in speed and torque is examined, and it is shown how different types of current measurement error affect the torque. As the application environment, the direct torque control (DTC) method is applied to permanent magnet synchronous machines (PMSM). A simulation model is created to analyse the effect of frequency converter non-idealities on the performance of electric drives. The model enables the identification of potential problems causing torque vibrations and possibly damaging oscillations in electrically driven machine systems, and it can be coupled with separate simulation software for complex mechanical loads. Furthermore, the simulation model of the frequency converter's control algorithm can be applied to control a real frequency converter. A commercial frequency converter with standard software, a permanent magnet axial-flux synchronous motor and a DC motor as the load are used to detect the effect of current measurement errors on load torque. A method to reduce the speed and torque ripple by compensating the current measurement errors is introduced. The method is based on analysing the amplitude of a selected harmonic component of speed as a function of time and selecting a suitable compensation alternative for the current error. The speed can be either measured or estimated, so the compensation method is also applicable to speed-sensorless drives.
The proposed compensation method is tested with a laboratory drive consisting of commercial frequency converter hardware with self-made software and a prototype PMSM. The speed and torque ripple of the test drive are reduced by applying the compensation method. In addition to direct torque controlled PMSM drives, the compensation method can also be applied to other motor types and control methods.
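The compensation scheme above rests on one building block: tracking the amplitude of a selected harmonic of the measured (or estimated) speed over time. A minimal sketch of that step, assuming a uniformly sampled speed signal and a known ripple frequency (the function and the synthetic signal are illustrative, not taken from the thesis):

```python
import numpy as np

def harmonic_amplitude(speed, fs, f_target):
    """Amplitude of one harmonic of a sampled speed signal via a single DFT bin."""
    n = len(speed)
    t = np.arange(n) / fs
    # Correlate with a complex exponential at the target frequency (single-bin DFT).
    phasor = np.exp(-2j * np.pi * f_target * t)
    return 2.0 * abs(np.dot(speed - speed.mean(), phasor)) / n

# Synthetic speed signal: mean speed plus a ripple component at 50 Hz.
fs = 10_000.0
t = np.arange(0, 1.0, 1 / fs)
speed = 100.0 + 0.5 * np.sin(2 * np.pi * 50.0 * t)
print(round(harmonic_amplitude(speed, fs, 50.0), 3))  # recovers the 0.5 ripple amplitude
```

A compensation loop would evaluate this amplitude repeatedly while trying the candidate current-error corrections and keep the one that minimizes it.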

Relevance: 30.00%

Publisher:

Abstract:

PURPOSE: To combine weighted iterative reconstruction with self-navigated free-breathing coronary magnetic resonance angiography for retrospective reduction of respiratory motion artifacts. METHODS: One-dimensional self-navigation was improved for robust respiratory motion detection, and the consistency of the acquired data was estimated from the detected motion. Based on the data consistency, the data fidelity term of the iterative reconstruction was weighted to reduce the effects of respiratory motion. In vivo experiments were performed in 14 healthy volunteers and the resulting image quality of the proposed method was compared to a navigator-gated reference in terms of acquisition time, vessel length, and sharpness. RESULTS: Although the sampling pattern of the proposed method contained 60% more samples than the reference, the scan efficiency was improved from 39.5 ± 10.1% to 55.1 ± 9.1%. The improved self-navigation showed a high correlation with the standard navigator signal, and the described weighting efficiently reduced respiratory motion artifacts. Overall, the average image quality of the proposed method was comparable to the navigator-gated reference. CONCLUSION: Self-navigated coronary magnetic resonance angiography was successfully combined with weighted iterative reconstruction to reduce the total acquisition time and efficiently suppress respiratory motion artifacts. The simplicity of the experimental setup and the promising image quality are encouraging toward future clinical evaluation. Magn Reson Med 73:1885-1895, 2015. © 2014 Wiley Periodicals, Inc.
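The numerical core of the approach, down-weighting the data-fidelity term for low-consistency (motion-corrupted) samples inside an iterative reconstruction, can be sketched on a toy linear system. The encoding matrix, weights, and corruption model below are illustrative assumptions, not the paper's CMR pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 40, 80                        # unknowns, measurements
A = rng.standard_normal((m, n))      # toy encoding matrix (stand-in for Fourier sampling)
x_true = rng.standard_normal(n)
b = A @ x_true
b[:20] += 2.0 * rng.standard_normal(20)   # first 20 samples "motion corrupted"

w = np.ones(m)
w[:20] = 0.1                         # low estimated consistency -> low fidelity weight

# Weighted least squares by gradient descent on (1/2)||W(Ax - b)||^2.
x = np.zeros(n)
step = 1.0 / np.linalg.norm(np.diag(w) @ A, 2) ** 2
for _ in range(2000):
    x -= step * (A.T @ (w**2 * (A @ x - b)))

err_weighted = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
x_uw = np.linalg.lstsq(A, b, rcond=None)[0]       # unweighted fit for comparison
err_unweighted = np.linalg.norm(x_uw - x_true) / np.linalg.norm(x_true)
print(err_weighted < err_unweighted)  # down-weighting corrupted samples reduces error
```

The same structure carries over to the imaging case, with A a non-uniform Fourier sampling operator and the weights derived from the self-navigation signal.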

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: In acute respiratory failure, arterial blood gas analysis (ABG) is used to diagnose hypercapnia. Once non-invasive ventilation (NIV) is initiated, ABG should be repeated within 1 h at the latest to assess the PaCO2 response to treatment and help detect NIV failure. The main aim of this study was to assess whether measuring end-tidal CO2 (EtCO2) with a dedicated naso-buccal sensor during NIV could predict PaCO2 variation and/or absolute PaCO2 values. The additional aim was to assess whether active or passive prolonged expiratory maneuvers could improve the agreement between expiratory CO2 and PaCO2. METHODS: This is a prospective study in adult patients suffering from acute hypercapnic respiratory failure (PaCO2 ≥ 45 mmHg) treated with NIV. EtCO2 and expiratory CO2 values during active and passive expiratory maneuvers were measured using a dedicated naso-buccal sensor and compared to concomitant PaCO2 values. The agreement between two consecutive values of EtCO2 (delta EtCO2) and two consecutive values of PaCO2 (delta PaCO2), and between PaCO2 and concomitant expiratory CO2 values, was assessed using the Bland-Altman method adjusted for the effects of repeated measurements. RESULTS: Fifty-four datasets from a population of 11 patients (8 COPD and 3 non-COPD patients) were included in the analysis. PaCO2 values ranged from 39 to 80 mmHg, and EtCO2 from 12 to 68 mmHg. In the agreement between delta EtCO2 and delta PaCO2, the bias was -0.3 mmHg, and the limits of agreement were -17.8 and 17.2 mmHg. In the agreement between PaCO2 and EtCO2, the bias was 14.7 mmHg, and the limits of agreement were -6.6 and 36.1 mmHg. Adding active and passive expiration maneuvers did not improve PaCO2 prediction. CONCLUSIONS: During NIV delivered for acute hypercapnic respiratory failure, EtCO2 measured with a dedicated naso-buccal sensor did not accurately predict either PaCO2 values or PaCO2 variations over time. Active and passive expiration maneuvers did not improve PaCO2 prediction.
TRIAL REGISTRATION: ClinicalTrials.gov: NCT01489150.
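The bias and limits of agreement quoted above are standard Bland-Altman quantities and can be computed directly. The paired values below are made up for illustration, and the sketch omits the repeated-measures adjustment the authors applied:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement series."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired PaCO2 (arterial) and EtCO2 (sensor) values in mmHg.
paco2 = [50, 55, 62, 48, 70, 58]
etco2 = [38, 42, 45, 40, 51, 44]
bias, (lo, hi) = bland_altman(paco2, etco2)
print(round(bias, 1), round(lo, 1), round(hi, 1))
```

Wide limits of agreement relative to the clinically tolerable difference, as reported in the study, are what justify the conclusion that one measurement cannot substitute for the other.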

Relevance: 30.00%

Publisher:

Abstract:

A method for dealing with monotonicity constraints in optimal control problems is used to generalize some results in the context of monopoly theory, also extending the generalization to a large family of principal-agent programs. Our main conclusion is that many results on diverse economic topics, achieved under assumptions of continuity and piecewise differentiability in connection with the endogenous variables of the problem, still remain valid after replacing such assumptions by two minimal requirements.

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: To assess the iodine status of Swiss population groups and to evaluate the influence of iodized salt as a vector for iodine fortification. DESIGN: The relationship between 24 h urinary iodine and Na excretions was assessed in the general population after correcting for confounders. Single-day intakes were estimated assuming that 92 % of dietary iodine was excreted in 24 h urine. Usual intake distributions were derived for male and female population groups after adjustment for within-subject variability. The estimated average requirement (EAR) cut-point method was applied as guidance to assess the inadequacy of the iodine supply. SETTING: Public health strategies to reduce dietary salt intake in the general population may affect its iodine supply. SUBJECTS: The study population (1481 volunteers, aged ≥15 years) was randomly selected from three different linguistic regions of Switzerland. RESULTS: The 24 h urine samples from 1420 participants were determined to be properly collected. Mean iodine intakes for men (n 705) and women (n 715) were 179 (sd 68.1) µg/d and 138 (sd 57.8) µg/d, respectively. Urinary Na and Ca excretions and BMI were significantly and positively associated with higher iodine intake, as were male sex and non-smoking. Fifty-four per cent of the total iodine intake originated from iodized salt. The prevalence of inadequate iodine intake as estimated by the EAR cut-point method was 2 % for men and 14 % for women. CONCLUSIONS: The estimated prevalence of inadequate iodine intake was within the optimal target range of 2-3 % for men, but not for women.
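The EAR cut-point method amounts to one operation: the prevalence of inadequacy is the share of the usual-intake distribution that falls below the Estimated Average Requirement. The sketch below assumes the IOM adult EAR for iodine (95 µg/d) and naive normal distributions built from the reported means and SDs; because it skips the within-subject-variability adjustment the authors applied, its prevalences come out higher than the published 2 % and 14 %:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed adult EAR for iodine (IOM value), µg/d.
EAR_IODINE = 95.0

# Toy usual-intake distributions mimicking the reported means/SDs
# (unadjusted, so wider than the study's usual-intake distributions).
men = rng.normal(179, 68.1, 100_000)
women = rng.normal(138, 57.8, 100_000)

for label, intakes in [("men", men), ("women", women)]:
    prevalence = np.mean(intakes < EAR_IODINE)  # EAR cut-point estimate
    print(f"{label}: {100 * prevalence:.0f}% below EAR")
```

This also shows why the adjustment matters: shrinking the distribution toward usual intakes removes day-to-day noise from the tails and lowers the estimated prevalence of inadequacy.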

Relevance: 30.00%

Publisher:

Abstract:

Sap flow could be used as a physiological parameter to assist the irrigation of screen-house citrus nursery trees through continuous estimation of water consumption. Herein we report a first set of results indicating the potential of the heat dissipation method for sap flow measurement in containerized citrus nursery trees. 'Valencia' sweet orange [Citrus sinensis (L.) Osbeck] budded on 'Rangpur' lime (Citrus limonia Osbeck) was evaluated for 30 days during summer. Heat dissipation probes and thermocouple sensors were constructed with low-cost, easily available materials in order to improve the accessibility of the method. Sap flow showed a high correlation with air temperature inside the screen house. However, errors due to the natural thermal gradient and plant tissue injuries affected measurement precision. Transpiration estimated by sap flow measurement was four times higher than that obtained gravimetrically. Improved micro-probes, adequate method calibration, and non-toxic insulating materials should be further investigated.
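The heat dissipation method mentioned above is conventionally evaluated with Granier's original empirical calibration, which converts the probe temperature difference into sap flux density. A sketch using the classic published coefficients (119e-6 and 1.231 are Granier's 1985 values; as the abstract itself notes, a species- and setup-specific recalibration would be needed here):

```python
def granier_sap_flux_density(dT, dT_max):
    """Classic Granier (1985) heat-dissipation calibration.

    dT     : measured probe temperature difference (deg C)
    dT_max : temperature difference at zero flow, e.g. pre-dawn (deg C)
    returns: sap flux density in m3 m-2 s-1
    """
    k = (dT_max - dT) / dT          # dimensionless flow index
    return 119e-6 * k ** 1.231      # Granier's empirical coefficients

# Example: dT_max = 10 degC at night (no flow), dT = 7 degC at midday.
print(f"{granier_sap_flux_density(7.0, 10.0):.2e} m3 m-2 s-1")
```

Multiplying the flux density by sapwood area and integrating over the day gives the water consumption estimate used to schedule irrigation.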

Relevance: 30.00%

Publisher:

Abstract:

A broad and simple method permitted halide ions in quaternary heteroaromatic and ammonium salts to be exchanged for a variety of anions using an anion exchange resin (AER, A− form) in non-aqueous media. The anion loading of the AER (OH− form) was examined using two different anion sources, acids or ammonium salts, and varying the polarity of the solvents. The AER (A− form) method in organic solvents was then applied to several quaternary heteroaromatic salts and ionic liquids (ILs), and the anion exchange proceeded in excellent to quantitative yields, concomitantly removing halide impurities. Relying on the hydrophobicity of the targeted ion pair for the counteranion swap, organic solvents of variable polarity were used, such as CH3OH, CH3CN and the dipolar non-hydroxylic solvent mixture CH3CN:CH2Cl2 (3:7); the anion exchange was equally successful with both lipophilic cations and anions.

Relevance: 30.00%

Publisher:

Abstract:

Subcritical water is pressurized water that remains in the liquid state below its critical temperature (374 °C). The density of water decreases with increasing temperature, and the solvent properties of water can be tuned by adjusting the temperature. The surface tension, viscosity, density and polarity of water all decrease as the temperature rises, and the material properties of subcritical water approach those of an organic solvent. The decrease in the dielectric constant of subcritical water is caused mainly by temperature and only slightly by pressure. Subcritical water has been used as an extraction solvent, but subcritical water chromatography is now also an emerging separation method. In the experimental part of this work, a chromatographic apparatus for subcritical water was developed and used to study the chromatographic separation of sugar alcohols and sugars with subcritical water. In addition, the thermal stability of the sugar alcohols, sugars and stationary phases was investigated. The components studied were sorbitol, mannitol, xylitol, arabinose, mannose, xylose, maltose and rhamnose. The stationary phases were a macroporous non-functionalized polystyrene-divinylbenzene copolymer as well as strong and weak divinylbenzene-cross-linked cation exchange resins in either the Na+ or Ca2+ ionic form. Raising the water temperature affects both the volume changes of the chromatographic stationary phase and the properties of the sample components. For the strong cation exchangers, the thermal volume changes were found to depend on the ionic form: Na+-form resins swell and Ca2+-form resins shrink as the temperature rises. The weak cation exchangers shrink in both ionic forms, but the Ca2+ form shrinks more strongly than the Na+ form. Of the sample components, the sugar alcohols were found to withstand high temperatures better than the sugars; xylitol was the most stable of the sugar alcohols and rhamnose of the sugars.
Depending on the stationary phase used, the peaks of the studied components were observed to become narrower, tail less, and elute earlier. The complexation ability of the Ca2+-form strong cation exchanger weakened with increasing temperature. However, the separation of the sample components did not improve with rising temperature on the stationary phases studied.

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Left atrial (LA) dilatation is associated with a large variety of cardiac diseases. Current cardiovascular magnetic resonance (CMR) strategies to measure LA volumes are based on multi-breath-hold multi-slice acquisitions, which are time-consuming and susceptible to misregistration. AIM: To develop a time-efficient single breath-hold 3D CMR acquisition and reconstruction method to precisely measure LA volumes and function. METHODS: A highly accelerated compressed-sensing multi-slice cine sequence (CS-cineCMR) was combined with a non-model-based 3D reconstruction method to measure LA volumes with high temporal and spatial resolution during a single breath-hold. This approach was validated in LA phantoms of different shapes and applied in 3 patients. In addition, the influence of slice orientation on accuracy was evaluated in the LA phantoms for the new approach in comparison with a conventional model-based biplane area-length reconstruction. As a reference in patients, a self-navigated high-resolution whole-heart 3D dataset (3D-HR-CMR) was acquired during mid-diastole to yield accurate LA volumes. RESULTS: Phantom studies: LA volumes were accurately measured by CS-cineCMR with a mean difference of -4.73 ± 1.75 ml (-8.67 ± 3.54%, r2 = 0.94). For the new method the calculated volumes were not significantly different when different orientations of the CS-cineCMR slices were applied to cover the LA phantoms. Long-axis slices "aligned" vs. "not aligned" with the phantom long axis yielded similar differences vs. the reference volume (-4.87 ± 1.73 ml vs. -4.45 ± 1.97 ml, p = 0.67), as did short-axis slices "perpendicular" vs. "not perpendicular" to the LA long axis (-4.72 ± 1.66 ml vs. -4.75 ± 2.13 ml; p = 0.98). The conventional biplane area-length method, in contrast, was susceptible to slice orientation (p = 0.0085 for the interaction of "slice orientation" and "reconstruction technique", 2-way ANOVA for repeated measures).
To use the 3D-HR-CMR as the reference for LA volumes in patients, it was validated in the LA phantoms (mean difference: -1.37 ± 1.35 ml, -2.38 ± 2.44%, r2 = 0.97). Patient study: The CS-cineCMR LA volumes of the mid-diastolic frame matched closely with the reference LA volumes (measured by 3D-HR-CMR), with a difference of -2.66 ± 6.5 ml (3.0% underestimation; true LA volumes: 63 ml, 62 ml, and 395 ml). Finally, high intra- and inter-observer agreement for maximal and minimal LA volume measurements was also shown. CONCLUSIONS: The proposed method combines a highly accelerated single breath-hold compressed-sensing multi-slice CMR technique with a non-model-based 3D reconstruction to accurately and reproducibly measure LA volumes and function.
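The conventional biplane area-length reconstruction that the new method is compared against is a one-line geometric formula, V = 0.85 · A1 · A2 / L, from two orthogonal long-axis areas and the chamber length; its reliance on exactly these two planes is why it is sensitive to slice orientation. A sketch with made-up inputs:

```python
def biplane_area_length_volume(a1_cm2, a2_cm2, length_cm):
    """Standard biplane area-length chamber-volume estimate: V = 0.85 * A1 * A2 / L."""
    return 0.85 * a1_cm2 * a2_cm2 / length_cm

# Hypothetical 2-chamber and 4-chamber LA areas (cm^2) and long-axis length (cm).
print(round(biplane_area_length_volume(20.0, 22.0, 5.5), 1), "ml")  # → 68.0 ml
```

A non-model-based 3D reconstruction instead integrates the chamber over all acquired slices, so no single mis-oriented plane dominates the estimate.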

Relevance: 30.00%

Publisher:

Abstract:

Intravenous thrombolysis (IVT) as treatment in acute ischaemic strokes may be insufficient to achieve recanalisation in certain patients. Predicting probability of non-recanalisation after IVT may have the potential to influence patient selection to more aggressive management strategies. We aimed at deriving and internally validating a predictive score for post-thrombolytic non-recanalisation, using clinical and radiological variables. In thrombolysis registries from four Swiss academic stroke centres (Lausanne, Bern, Basel and Geneva), patients were selected with large arterial occlusion on acute imaging and with repeated arterial assessment at 24 hours. Based on a logistic regression analysis, an integer-based score for each covariate of the fitted multivariate model was generated. Performance of integer-based predictive model was assessed by bootstrapping available data and cross validation (delete-d method). In 599 thrombolysed strokes, five variables were identified as independent predictors of absence of recanalisation: Acute glucose > 7 mmol/l (A), significant extracranial vessel STenosis (ST), decreased Range of visual fields (R), large Arterial occlusion (A) and decreased Level of consciousness (L). All variables were weighted 1, except for (L) which obtained 2 points based on β-coefficients on the logistic scale. ASTRAL-R scores 0, 3 and 6 corresponded to non-recanalisation probabilities of 18, 44 and 74 % respectively. Predictive ability showed AUC of 0.66 (95 %CI, 0.61-0.70) when using bootstrap and 0.66 (0.63-0.68) when using delete-d cross validation. In conclusion, the 5-item ASTRAL-R score moderately predicts non-recanalisation at 24 hours in thrombolysed ischaemic strokes. If its performance can be confirmed by external validation and its clinical usefulness can be proven, the score may influence patient selection for more aggressive revascularisation strategies in routine clinical practice.
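The integer scoring described above is straightforward to reproduce from the abstract's weights alone (one point per item, two for decreased level of consciousness); the function below is a sketch of that arithmetic, not the fitted logistic model behind it:

```python
def astral_r_score(glucose_over_7, extracranial_stenosis, visual_field_deficit,
                   large_occlusion, decreased_consciousness):
    """ASTRAL-R integer score as described in the abstract:
    1 point per present item, 2 points for decreased level of consciousness."""
    return (int(glucose_over_7) + int(extracranial_stenosis)
            + int(visual_field_deficit) + int(large_occlusion)
            + 2 * int(decreased_consciousness))

# Reported anchor points: scores 0, 3 and 6 correspond to non-recanalisation
# probabilities of roughly 18%, 44% and 74%.
print(astral_r_score(True, True, False, True, False))  # → 3
```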

Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Vitamin D deficiency is prevalent in HIV-infected individuals, and vitamin D supplementation is proposed according to standard care. This study aimed to characterize the kinetics of 25(OH)D in a cohort of HIV-infected individuals of European ancestry to better define the influence of genetic and non-genetic factors on 25(OH)D levels. These data were used to optimize vitamin D supplementation in order to reach therapeutic targets. METHODS: 1,397 25(OH)D plasma levels and relevant clinical information were collected from 664 participants during routine medical follow-up visits. Participants were genotyped for 7 SNPs in 4 genes known to be associated with 25(OH)D levels. 25(OH)D concentrations were analysed using a population pharmacokinetic approach. The percentage of individuals with 25(OH)D concentrations within the recommended range of 20-40 ng/ml during 12 months of follow-up was evaluated by simulation for several dosage regimens. RESULTS: A one-compartment model with linear absorption and elimination was used to describe 25(OH)D pharmacokinetics, while integrating endogenous baseline plasma concentrations. Covariate analyses confirmed the effect of seasonality, body mass index, smoking habits, the analytical method, darunavir/ritonavir and the genetic variant in GC (rs2282679) on 25(OH)D concentrations. 11% of the inter-individual variability in 25(OH)D levels was explained by seasonality and other non-genetic covariates, and 1% by genetics. The optimal supplementation for severely vitamin D deficient patients was 300,000 IU twice per year. CONCLUSIONS: This analysis identified factors associated with 25(OH)D plasma levels in HIV-infected individuals. Improvements to the dosage regimen and timing of vitamin D supplementation are proposed based on these results.
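A one-compartment model with first-order absorption and elimination plus an endogenous baseline, as used above, has a closed-form concentration profile. The parameter values below are purely illustrative placeholders, not the fitted population estimates from the study:

```python
import numpy as np

def one_compartment_with_baseline(t, dose, ka, ke, v, baseline):
    """First-order absorption/elimination on top of an endogenous baseline level
    (a generic textbook sketch, not the study's fitted population model)."""
    return baseline + dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Illustrative profile after a single large supplementation dose.
t = np.linspace(0, 180, 7)   # days post-dose
conc = one_compartment_with_baseline(t, dose=300_000, ka=0.5, ke=0.02, v=5_000,
                                     baseline=15)
print(np.round(conc, 1))     # starts at the baseline, peaks, then decays back
```

Simulating many such profiles across covariate values is how the percentage of individuals staying within the 20-40 ng/ml target window can be compared across dosage regimens.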

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVES: Immunohistochemistry (IHC) has become a promising method for pre-screening ALK-rearrangements in non-small cell lung carcinomas (NSCLC). Various ALK antibodies, detection systems and automated immunostainers are available. We therefore aimed to compare the performance of the monoclonal 5A4 (Novocastra, Leica) and D5F3 (Cell Signaling, Ventana) antibodies using two different immunostainers. Additionally we analyzed the accuracy of prospective ALK IHC-testing in routine diagnostics. MATERIALS AND METHODS: Seventy-two NSCLC with available ALK FISH results and enriched for FISH-positive carcinomas were retrospectively analyzed. IHC was performed on BenchMarkXT (Ventana) using 5A4 and D5F3, respectively, and additionally with 5A4 on Bond-MAX (Leica). Data from our routine diagnostics on prospective ALK-testing with parallel IHC, using 5A4, and FISH were available from 303 NSCLC. RESULTS: All three IHC protocols showed congruent results. Only 1/25 FISH-positive NSCLC (4%) was false negative by IHC. For all three IHC protocols the sensitivity, specificity, positive (PPV) and negative predictive values (NPV) compared to FISH were 96%, 100%, 100% and 97.8%, respectively. In the prospective cohort 3/32 FISH-positive (9.4%) and 2/271 FISH-negative (0.7%) NSCLC were false negative and false positive by IHC, respectively. In routine diagnostics the sensitivity, specificity, PPV and NPV of IHC compared to FISH were 90.6%, 99.3%, 93.5% and 98.9%, respectively. CONCLUSIONS: 5A4 and D5F3 are equally well suited for detecting ALK-rearranged NSCLC. BenchMark and BOND-MAX immunostainers can be used for IHC with 5A4. True discrepancies between IHC and FISH results do exist and need to be addressed when implementing IHC in an ALK-testing algorithm.
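The prospective-cohort figures quoted above follow directly from the 2×2 table implied by the abstract (32 FISH-positive cases with 3 IHC false negatives; 271 FISH-negative cases with 2 false positives):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives among all disease-positive
        "specificity": tn / (tn + fp),   # true negatives among all disease-negative
        "ppv": tp / (tp + fp),           # positives that are truly positive
        "npv": tn / (tn + fn),           # negatives that are truly negative
    }

# Prospective cohort: IHC vs. FISH (reference standard).
m = diagnostic_metrics(tp=29, fp=2, tn=269, fn=3)
print({k: round(100 * v, 1) for k, v in m.items()})
# sensitivity 90.6, specificity 99.3, ppv 93.5, npv 98.9 — matching the abstract
```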

Relevance: 30.00%

Publisher:

Abstract:

ETHNOPHARMACOLOGICAL RELEVANCE: The aim of this survey was to describe which traditional medicines (TM) are most commonly used for non-communicable diseases (NCD: diabetes, and hypertension related to excess weight and obesity) in Pacific islands, and with what perceived effectiveness. NCD, especially prevalent in the Pacific, have been subject to many public health interventions, often with rather disappointing results. Innovative interventions are required; one hypothesis is that some local, traditional approaches may have been overlooked. MATERIALS AND METHODS: The method used was a retrospective treatment-outcome study in a nation-wide representative sample of the adult population (about 15,000 individuals) of the Republic of Palau, an archipelago of Micronesia. RESULTS: Among 188 respondents (61% female, aged 16-87, median 48), 30 different plants were used, mostly self-prepared (69%) or obtained from a traditional healer (18%). For excess weight, when comparing the two most frequently used plants, Morinda citrifolia L. was associated with a more adequate outcome than Phaleria nishidae Kaneh. (P=0.05). In the case of diabetes, when comparing Phaleria nishidae (=Phaleria nisidai) and Morinda citrifolia, the former was statistically more often associated with the reported outcome "lower blood sugar" (P=0.01). CONCLUSIONS: A statistical association between a plant used and a reported outcome is not proof of effectiveness or safety, but it can help select plants of interest for further studies, e.g. through a reverse pharmacology process, in search of local products that may have a positive impact on population health.
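The plant-versus-outcome comparisons above are exactly the kind of 2×2 association a one-sided Fisher exact test quantifies. The counts below are invented for illustration (the survey's raw tables are not given in the abstract); the test itself is standard:

```python
from math import comb

def fisher_one_sided_p(a, b, c, d):
    """One-sided Fisher exact p-value for a 2x2 treatment/outcome table
    [[a, b], [c, d]]: probability of a table at least as extreme as observed
    under the hypergeometric null of no association."""
    row1, col1, n = a + b, a + c, a + b + c + d
    p = 0.0
    for k in range(a, min(row1, col1) + 1):   # tables with >= a "successes"
        p += comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    return p

# Hypothetical counts: plant A with 10/15 adequate outcomes vs. plant B with 4/15.
print(round(fisher_one_sided_p(10, 5, 4, 11), 3))
```

With small per-plant samples such as these, an exact test is preferable to a chi-square approximation, which is presumably why associations near P=0.05 should be read as hypothesis-generating, as the authors note.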

Relevance: 30.00%

Publisher:

Abstract:

In this paper we explore the use of non-linear transformations to improve the performance of an entropy-based voice activity detector (VAD). The idea of using a non-linear transformation comes from previous work in the speech linear prediction (LPC) field based on source separation techniques, where the score function was added to the classical equations in order to take into account the real distribution of the signal. We explore the possibility of estimating the entropy of frames after applying the score function, instead of using the original frames. We observe that if the signal is clean, the estimated entropy is essentially the same; but if the signal is noisy, the transformed frames (with the score function) yield different entropies for voiced and unvoiced frames. Experimental results show that this makes it possible to detect voice activity under high noise, where the simple entropy method fails.
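A minimal sketch of the entropy feature such a VAD thresholds, with a tanh nonlinearity standing in for the score-function transformation (the paper derives its score function from the signal's actual distribution via the LPC/source-separation framework, which is not reproduced here):

```python
import numpy as np

def spectral_entropy(frame):
    """Shannon entropy of the normalized power spectrum of one frame.
    Tonal (voiced) frames concentrate energy in few bins -> low entropy;
    broadband noise spreads energy -> high entropy."""
    psd = np.abs(np.fft.rfft(frame)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(2)
t = np.arange(400) / 8000.0                      # one 50 ms frame at 8 kHz
voiced = np.sin(2 * np.pi * 200 * t) + 0.1 * rng.standard_normal(t.size)
noise = rng.standard_normal(t.size)

# tanh is a common score nonlinearity for super-Gaussian sources in ICA,
# used here purely as an illustrative stand-in for the paper's transform.
for name, x in [("voiced", voiced), ("noise", noise)]:
    print(name, round(spectral_entropy(np.tanh(x)), 2))
```

The VAD decision then reduces to comparing the per-frame entropy against a threshold tracked from the noise floor.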

Relevance: 30.00%

Publisher:

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of this approach is the computational cost associated with performing complex flow simulations in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization.
Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in accuracy and robustness of the uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid, and the results show that the error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty.
The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is useful not only for uncertainty propagation, but also, and maybe even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from a low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, the approximate flow model coupled with an error model serves as the preliminary evaluation for two-stage MCMC, and we demonstrate an increase in the acceptance rate by a factor of 1.5 to 3 with respect to classical one-stage MCMC. An open question remains: how do we choose the size of the learning set and identify the realizations that optimize the construction of the error model? This requires an iterative strategy such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
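The functional error model described in the second part can be sketched end-to-end with plain linear algebra: a truncated SVD as the FPCA step, and a linear map from proxy-curve scores to error-curve scores fitted on the training subset. The curves and the proxy's error structure below are synthetic assumptions, not the thesis's transport simulations:

```python
import numpy as np

rng = np.random.default_rng(3)
n_real, n_t, n_train = 200, 50, 30
t = np.linspace(0, 1, n_t)

# Synthetic ensemble: each realization has an "exact" breakthrough-like curve
# and a cheaper "proxy" curve with a systematic, realization-dependent error.
params = rng.uniform(0.5, 2.0, n_real)
exact = np.array([1 - np.exp(-p * 5.0 * t) for p in params])
proxy = np.array([1 - np.exp(-p * 4.5 * t) for p in params]) + 0.05 * np.outer(params, t)

train = np.arange(n_train)              # subset where both proxy and exact were run
resid = exact[train] - proxy[train]     # functional error on the training subset

def fpca_scores(curves, mean, basis):
    """Project centered curves onto a truncated principal-component basis."""
    return (curves - mean) @ basis.T

mu_p, mu_r = proxy[train].mean(0), resid.mean(0)
Up = np.linalg.svd(proxy[train] - mu_p, full_matrices=False)[2][:3]  # proxy FPCA basis
Ur = np.linalg.svd(resid - mu_r, full_matrices=False)[2][:3]         # error FPCA basis

# Linear regression from proxy scores to error scores, fitted on the subset.
Sp = fpca_scores(proxy[train], mu_p, Up)
Sr = fpca_scores(resid, mu_r, Ur)
B = np.linalg.lstsq(Sp, Sr, rcond=None)[0]

# Correct every proxy curve and evaluate on the held-out realizations.
Sp_all = fpca_scores(proxy, mu_p, Up)
corrected = proxy + mu_r + (Sp_all @ B) @ Ur
rmse_proxy = np.sqrt(np.mean((exact[n_train:] - proxy[n_train:]) ** 2))
rmse_corr = np.sqrt(np.mean((exact[n_train:] - corrected[n_train:]) ** 2))
print(rmse_corr < rmse_proxy)   # corrected proxies track the exact curves better
```

In the two-stage MCMC setting, the same corrected proxy response serves as the cheap first-stage screen, so exact flow simulations are spent only on proposals likely to be accepted.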