57 results for TIGHT GAS. Low permeability. Hydraulic fracturing. Reservoir modeling. Numerical simulation
Abstract:
PECUBE is a three-dimensional thermal-kinematic code capable of solving the heat production-diffusion-advection equation under a temporally varying surface boundary condition. It was initially developed to assess the effects of time-varying surface topography (relief) on low-temperature thermochronological datasets. Thermochronometric ages are predicted by tracking the time-temperature histories of rock particles ending up at the surface and by combining these with various age-prediction models. In the decade since its inception, the PECUBE code has been under continuous development as its use widened to address different tectonic-geomorphic problems. This paper describes several major recent improvements in the code, including its integration with an inverse-modeling package based on the Neighborhood Algorithm, the incorporation of fault-controlled kinematics, several different ways to address topographic and drainage change through time, the ability to predict subsurface (tunnel or borehole) data, prediction of detrital thermochronology data and a method to compare these with observations, and the coupling with landscape-evolution (or surface-process) models. Each new development is described together with one or several applications, so that the reader and potential user can clearly assess and make use of the capabilities of PECUBE. We end by describing some developments that are currently underway or should take place in the foreseeable future. (C) 2012 Elsevier B.V. All rights reserved.
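For reference, a generic form of the heat production-diffusion-advection equation that a thermal-kinematic code of this kind solves can be written as follows; this is an assumed textbook form, and the exact formulation, symbols, and boundary treatment used in PECUBE may differ.

```latex
% Generic heat production-diffusion-advection equation (assumed form, not
% necessarily PECUBE's exact notation):
% T: temperature, t: time, v: rock advection velocity, k: thermal conductivity,
% rho: density, c: specific heat capacity, H: radiogenic heat production.
\rho c \left( \frac{\partial T}{\partial t} + \mathbf{v} \cdot \nabla T \right)
  = \nabla \cdot \left( k \, \nabla T \right) + \rho H
```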
Abstract:
Limited information is available regarding the methodology required to characterize hashish seizures and assess the presence or absence of a chemical link between two seizures. This casework report presents the methodology applied to assess whether two different police seizures originated from the same block before it was split. The chemical signature was extracted by GC-MS analysis, and the implemented methodology consists of a study of intra- and inter-variability distributions based on measuring the similarity of chemical profiles across a number of hashish seizures using the Pearson correlation coefficient. Different statistical scenarios (i.e., combinations of data pretreatment techniques and selections of target compounds) were tested to find the most discriminating one. Seven compounds showing high discrimination capability were selected, to which a specific statistical data pretreatment was applied. Based on the results, the statistical model built for comparing the hashish seizures leads to low error rates. Therefore, the implemented methodology is suitable for the chemical profiling of hashish seizures.
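As an illustration of the comparison logic described above (intra- and inter-variability distributions of Pearson correlation coefficients between pretreated chemical profiles), here is a minimal Python sketch with hypothetical data; the pretreatment and the simulated peak areas are placeholders, not the authors' exact choices.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

def pretreat(profile):
    """Example pretreatment: normalise peak areas to their sum, then take the
    square root (a placeholder, not the authors' exact pretreatment)."""
    p = np.asarray(profile, dtype=float)
    return np.sqrt(p / p.sum())

def similarity(a, b):
    """Pearson correlation coefficient between two profiles after pretreatment."""
    return np.corrcoef(pretreat(a), pretreat(b))[0, 1]

# Hypothetical GC-MS peak areas for 7 target compounds: 5 replicate analyses
# of two unrelated blocks, each with ~5% analytical variability.
base_A, base_B = rng.gamma(5.0, 10.0, size=(2, 7))
seizure_A = base_A * rng.normal(1.0, 0.05, size=(5, 7))
seizure_B = base_B * rng.normal(1.0, 0.05, size=(5, 7))

# Intra-variability: similarities between replicates of the same seizure.
intra = [similarity(x, y) for x, y in combinations(seizure_A, 2)]
# Inter-variability: similarities between profiles of different seizures.
inter = [similarity(x, y) for x in seizure_A for y in seizure_B]

# A linkage decision compares an observed similarity against these two
# distributions, e.g. with a threshold chosen to minimise the error rates.
print(f"intra: min={min(intra):.3f}  inter: max={max(inter):.3f}")
```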
Abstract:
The Admiral, a new microporous membrane oxygenator with a low surface area, a decreased priming volume and two separate reservoirs, was tested in 30 adult patients. This study was undertaken to evaluate blood path resistance, gas exchange capabilities and blood trauma in clinical use, with and without shed blood separation. Patients were divided into 3 groups. Group 1 had valve surgery without separation of suction, Group 2 had coronary artery bypass grafting (CABG) with direct blood aspiration and Group 3 had coronary artery bypass grafting with shed blood separation. The suctioned, separated, cardiotomy blood in Group 3 was treated with an autotransfusion device at the end of bypass before being returned to the patient. Theoretical blood flow could be achieved in all cases without problems. The pressure drop through the oxygenator averaged 88 +/- 13 mmHg at 4 l/min and 109 +/- 12 mmHg at 5 l/min. O(2) transfer was 163 +/- 27 ml/min. Free plasma haemoglobin rose in all groups, but significantly less in Group 3. Lactate dehydrogenase (LDH) rose significantly in Groups 1 and 2. Platelets decreased in all groups without significant differences. Clinical use of this new oxygenator was safe; the reduced membrane surface did not impair gas exchange, and blood trauma could easily be minimized by separating shed blood using the second cardiotomy reservoir.
Abstract:
Recently, Revil & Florsch proposed a novel mechanistic model, based on the polarization of the Stern layer and the formation of polarized cells around individual grains, relating the permeability of granular media to their spectral induced polarization (SIP) characteristics. To explore the practical validity of this model, we compare it to pertinent laboratory measurements on samples of quartz sands with a wide range of granulometric characteristics. In particular, we measure the hydraulic and SIP characteristics of all samples both in their loose, non-compacted and compacted states, which might allow for the detection of polarization processes that are independent of the grain size. We first verify the underlying grain size/permeability relationship upon which the model of Revil & Florsch is based and then proceed to compare the observed and predicted permeability values for our samples by substituting the grain size characteristics by corresponding SIP parameters, notably the so-called Cole-Cole time constant. In doing so, we also assess the quantitative impact of an observed shift in the Cole-Cole time constant related to textural variations in the samples and observe that changes related to the compaction of the samples are not relevant for the corresponding permeability predictions. We find that the proposed model does indeed provide an adequate prediction of the overall trend of the observed permeability values, but underestimates their actual values by approximately one order of magnitude. This discrepancy in turn points to the potential importance of phenomena which are currently not accounted for in the model and which tend to reduce the characteristic size of the prevailing polarization cells compared to the considered model, such as, for example, membrane polarization, contacts of double-layers of neighbouring grains, and incorrect estimation of the size of the polarized cells because of the irregularity of natural sand grains.
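To make the prediction chain concrete (a relaxation time is converted to an effective grain diameter, which is then mapped to permeability), here is a hedged Python sketch. The diffusion coefficient, the Cole-Cole time constant and the porosities are assumed values, and a generic Kozeny-Carman relation stands in for the actual Revil & Florsch permeability model.

```python
import numpy as np

def grain_diameter_from_tau(tau, D=1.3e-9):
    """Effective grain diameter from a Schwarz-type relation, d = 2*sqrt(2*D*tau).
    D (m^2/s) is an assumed counterion diffusion coefficient (here the
    free-electrolyte value for Na+); Stern-layer values would be smaller."""
    return 2.0 * np.sqrt(2.0 * D * tau)

def kozeny_carman_permeability(d, phi):
    """Generic Kozeny-Carman estimate k = d^2 * phi^3 / (180 * (1 - phi)^2),
    used here as a stand-in for the actual Revil & Florsch relation."""
    return d**2 * phi**3 / (180.0 * (1.0 - phi) ** 2)

tau = 1.0                        # hypothetical fitted Cole-Cole time constant (s)
d = grain_diameter_from_tau(tau)

for label, phi in (("loose", 0.42), ("compacted", 0.36)):   # assumed porosities
    k = kozeny_carman_permeability(d, phi)
    print(f"{label}: d = {d*1e6:.0f} um, k = {k:.2e} m^2")
```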
Abstract:
Phthalates are suspected to be endocrine disruptors. Di(2-ethylhexyl) phthalate (DEHP) is assumed to have low dermal absorption; however, previous in vitro skin permeation studies have shown large permeation differences. Our aims were to determine DEHP permeation parameters and to assess the extent of DEHP metabolism in skin among workers highly exposed to these lipophilic, low-volatility substances. Surgically removed skin from patients undergoing abdominoplasty was immediately dermatomed (800 μm) and mounted on flow-through diffusion cells (1.77 cm(2)) operating at 32°C with cell culture media (aqueous solution) as the reservoir liquid. The cells were dosed either with neat DEHP or with DEHP emulsified in aqueous solution (166 μg/ml). Samples were analysed by HPLC-MS/MS. DEHP permeated viable human skin only as the metabolite MEHP (100%) after 8 h of exposure. Human skin was able to further oxidize MEHP to 5-oxo-MEHP. Neat DEHP applied to the skin hardly permeated, whereas the aqueous solution permeated readily, measured in both cases as the concentration of MEHP in the receptor liquid. DEHP thus passes through human skin, detected as MEHP, when emulsified in aqueous solution, and to a far lesser degree when applied neat to the skin. Using results from older in vitro skin permeation studies with non-viable skin may underestimate skin exposures. Our results are in overall agreement with newer phthalate skin permeation studies.
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field.
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov Chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems in order to allow for a more accurate and realistic quantification of the uncertainty associated with the models thus inferred. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
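The gradual-deformation idea mentioned above can be illustrated with a short Python sketch: a current Gaussian field is blended with an independent realization using cosine/sine weights, which preserves the Gaussian prior and lets a single angle control the perturbation strength. This is a generic illustration under simplified assumptions (crude 1-D field simulator, no likelihood evaluation), not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def gaussian_field(n, corr_len):
    """Crude 1-D stationary standard Gaussian field via moving-average
    smoothing of white noise (a stand-in for a proper geostatistical simulator)."""
    w = rng.standard_normal(n + corr_len)
    kernel = np.ones(corr_len) / np.sqrt(corr_len)
    return np.convolve(w, kernel, mode="valid")[:n]

def gradual_deformation_proposal(m_current, theta, corr_len=20):
    """Combine the current field with an independent realization:
    m' = m*cos(theta) + u*sin(theta). For standard Gaussian fields this keeps
    the prior distribution invariant; small theta -> small perturbation."""
    u = gaussian_field(m_current.size, corr_len)
    return m_current * np.cos(theta) + u * np.sin(theta)

m = gaussian_field(200, corr_len=20)
m_new = gradual_deformation_proposal(m, theta=0.2)

# In an MCMC chain, m_new would be accepted or rejected with a
# Metropolis-Hastings rule based on the data likelihood.
print(np.corrcoef(m, m_new)[0, 1])  # high correlation -> gentle perturbation
```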
Abstract:
The unstable rock slope, Stampa, above the village of Flåm, Norway, shows signs of both active and postglacial gravitational deformation over an area of 11 km2. Detailed structural field mapping, annual differential Global Navigation Satellite System (GNSS) surveys, as well as geomorphic analysis of high-resolution digital elevation models based on airborne and terrestrial laser scanning indicate that slope deformation is complex and spatially variable. Numerical modeling was used to investigate the influence of former rockslide activity and to better understand the failure mechanism. Field observations, kinematic analysis and numerical modeling indicate a strong structural control of the unstable area. Based on the integration of the above analyses, we propose that the failure mechanism is dominated by (1) a toppling component, (2) subsiding bilinear wedge failure and (3) planar sliding along the foliation at the toe of the unstable slope. Using differential GNSS, 18 points were measured annually over a period of up to 6 years. Two of these points have an average yearly movement of around 10 mm/year. They are located at the frontal cliff on almost completely detached blocks with volumes smaller than 300,000 m3. Large fractures indicate deep-seated gravitational deformation of volumes reaching several hundred million m3, but the movement rates in these areas are below 2 mm/year. Two different lobes of prehistoric rock slope failures were dated with terrestrial cosmogenic nuclides. While the northern lobe gave an average age of 4,300 years BP, the southern one resulted in two different ages (2,400 and 12,000 years BP), which most likely represent multiple rockfall events. This reflects the currently observable deformation style, with unstable blocks in the northern part between Joasete and Furekamben, and no distinct blocks but high rockfall activity around Ramnanosi in the south. A relative susceptibility analysis concludes that small collapses of blocks along the frontal cliff will be more frequent. Larger collapses of free-standing blocks along the cliff with volumes > 100,000 m3, thus large enough to reach the fjord, cannot be ruled out. A larger collapse involving several million m3 is presently considered of very low likelihood.
Abstract:
OBJECTIVES: Mannan-binding lectin (MBL) acts as a pattern-recognition molecule directed against oligomannan, which is part of the cell wall of yeasts and various bacteria. We have previously shown an association between MBL deficiency and anti-Saccharomyces cerevisiae mannan antibody (ASCA) positivity. This study aims to evaluate whether MBL deficiency is associated with distinct Crohn's disease (CD) phenotypes. METHODS: Serum concentrations of MBL and ASCA were measured using ELISA (enzyme-linked immunosorbent assay) in 427 patients with CD, 70 with ulcerative colitis, and 76 healthy controls. CD phenotypes were grouped according to the Montreal Classification as follows: non-stricturing, non-penetrating (B1, n=182), stricturing (B2, n=113), penetrating (B3, n=67), and perianal disease (p, n=65). MBL was classified as deficient (<100 ng/ml), low (100-500 ng/ml), and normal (>500 ng/ml). RESULTS: Mean MBL was lower in B2 and B3 CD patients (1,503+/-1,358 ng/ml) compared with that in B1 phenotypes (1,909+/-1,392 ng/ml, P=0.013). B2 and B3 patients more frequently had low or deficient MBL and ASCA positivity compared with B1 patients (P=0.004 and P<0.001). Mean MBL was lower in ASCA-positive CD patients (1,562+/-1,319 ng/ml) compared with that in ASCA-negative CD patients (1,871+/-1,320 ng/ml, P=0.038). In multivariate logistic regression modeling, low or deficient MBL was associated significantly with B1 (negative association), complicated disease (B2+B3), and ASCA. MBL levels did not correlate with disease duration. CONCLUSIONS: Low or deficient MBL serum levels are significantly associated with complicated (stricturing and penetrating) CD phenotypes but are negatively associated with the non-stricturing, non-penetrating group. Furthermore, CD patients with low or deficient MBL are significantly more often ASCA positive, possibly reflecting delayed clearance of oligomannan-containing microorganisms by the innate immune system in the absence of MBL.
Sensitive headspace gas chromatography analysis of free and conjugated 1-methoxy-2-propanol in urine
Abstract:
Glycol ethers continue to be a workplace hazard due to their widespread use on an industrial scale. Chronic occupational exposures to low levels of xenobiotics are becoming increasingly relevant. Thus, sensitive analytical methods for detecting biomarkers of exposure are of interest in the field of occupational exposure assessment. 1-Methoxy-2-propanol (1M2P) is one of the dominant glycol ethers, and the unmetabolized urinary fraction has been identified as a good biological indicator of exposure. An existing analytical method involving solid-phase extraction and derivatization before GC/FID analysis is available but presents some disadvantages. We present here an alternative method for the determination of urinary 1M2P based on the headspace gas chromatography technique. We determined the 1M2P values by the direct headspace method for 47 samples that had previously been assayed by the solid-phase extraction and derivatization gas chromatography procedure. An inter-method comparison based on a Bland-Altman analysis showed that both techniques can be used interchangeably. The alternative method showed a tenfold lower limit of detection (0.1 mg/L) as well as good accuracy and precision, which were determined by several urinary 1M2P analyses carried out on a series of urine samples obtained from a human volunteer study. The within- and between-run precisions were generally about 10%, which corresponds to the usual injection variability. We observed that the differences between the results obtained with the two methods are not clinically relevant in comparison with the current biological exposure index of urinary 1M2P. Accordingly, the headspace gas chromatography technique turned out to be a more sensitive, accurate, and simple method for the determination of urinary 1M2P.
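The Bland-Altman inter-method comparison mentioned above boils down to a few lines of arithmetic on paired results. Here is a minimal Python sketch using hypothetical paired urinary 1M2P values, not the study's actual data.

```python
import numpy as np

# Hypothetical paired urinary 1M2P results (mg/L) from the headspace method
# and the SPE/derivatization GC method (placeholder values only).
headspace = np.array([0.4, 1.2, 2.5, 3.1, 5.8, 7.4, 9.9, 12.3])
spe_gc    = np.array([0.5, 1.1, 2.7, 3.0, 6.1, 7.1, 10.4, 12.0])

diff = headspace - spe_gc          # per-sample differences between methods
mean = (headspace + spe_gc) / 2.0  # per-sample means (x-axis of a Bland-Altman plot)

bias = diff.mean()                          # systematic difference between methods
sd = diff.std(ddof=1)                       # spread of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

print(f"bias = {bias:.3f} mg/L, limits of agreement = "
      f"[{loa[0]:.3f}, {loa[1]:.3f}] mg/L")
# The methods are judged interchangeable if these limits are narrow relative to
# what matters in practice (here, the biological exposure index for 1M2P).
```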
Abstract:
A gas chromatographic-mass spectrometric method is presented which allows the determination of chlorzoxazone and 6-hydroxychlorzoxazone after derivatization with the reagent N-tert.-butyldimethylsilyl-N-methyltrifluoroacetamide. No interference was observed from endogenous compounds following the extraction of plasma samples from six different human subjects. The standard curves were linear over a working range of 20 to 4000 ng/ml and of 20 to 1000 ng/ml for chlorzoxazone and 6-hydroxychlorzoxazone, respectively. Recoveries ranged from 65 to 97% for the two compounds and intra- and inter-day coefficients of variation were always less than 9%. The limit of quantitation of the method was found to be 5 ng/ml for the two compounds, hence allowing its use for single low dose pharmacokinetics.
Abstract:
Uncertainty quantification of petroleum reservoir models is one of the present challenges, which is usually approached with a wide range of geostatistical tools linked with statistical optimisation and/or inference algorithms. Recent advances in machine learning offer a novel approach to modelling the spatial distribution of petrophysical properties in complex reservoirs, as an alternative to geostatistics. The approach is based on semi-supervised learning, which handles both 'labelled' observed data and 'unlabelled' data that have no measured value but describe prior knowledge and other relevant information in the form of manifolds in the input space where the modelled property is continuous. The proposed semi-supervised Support Vector Regression (SVR) model has demonstrated its capability to represent realistic geological features and describe the stochastic variability and non-uniqueness of spatial properties. On the other hand, it is able to capture and preserve key spatial dependencies such as the connectivity of high-permeability geo-bodies, which is often difficult in contemporary petroleum reservoir studies. As a data-driven algorithm, semi-supervised SVR is designed to integrate various kinds of conditioning information and learn dependencies from them. The semi-supervised SVR model is able to balance signal/noise levels and control the prior belief in the available data. In this work, the stochastic semi-supervised SVR geomodel is integrated into a Bayesian framework to quantify the uncertainty of reservoir production with multiple models fitted to past dynamic observations (production history). Multiple history-matched models are obtained using stochastic sampling and/or MCMC-based inference algorithms, which evaluate the posterior probability distribution. Uncertainty of the model is described by the posterior probability of the model parameters that represent key geological properties: spatial correlation size, continuity strength, and smoothness/variability of the spatial property distribution. The developed approach is illustrated with a fluvial reservoir case. The resulting probabilistic production forecasts are described by uncertainty envelopes. The paper compares the performance of models with different combinations of unknown parameters and discusses sensitivity issues.
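To illustrate the basic regression ingredient, here is a supervised-only Python sketch of using SVR to interpolate a petrophysical property between wells. It is a simplified stand-in for the semi-supervised SVR geomodel described above; the semi-supervised (unlabelled manifold) and stochastic/Bayesian ingredients are omitted, and all data and parameter values are hypothetical.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)

# Hypothetical "well" data: (x, z) locations with measured log-permeability.
wells_xz = rng.uniform(0.0, 1000.0, size=(40, 2))
log_k = (2.0 + 0.003 * wells_xz[:, 0] - 0.002 * wells_xz[:, 1]
         + rng.normal(0.0, 0.3, size=40))

# The RBF kernel width (gamma) plays the role of a spatial correlation length;
# C and epsilon control the signal/noise balance mentioned in the abstract.
model = SVR(kernel="rbf", C=10.0, gamma=1.0 / (2 * 300.0**2), epsilon=0.1)
model.fit(wells_xz, log_k)

# Predict log-permeability on a coarse inter-well grid.
gx, gz = np.meshgrid(np.linspace(0, 1000, 21), np.linspace(0, 1000, 21))
grid = np.column_stack([gx.ravel(), gz.ravel()])
log_k_pred = model.predict(grid).reshape(gx.shape)
print(log_k_pred[:2, :5])
```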
Abstract:
Wildlife populations represent an important reservoir for emerging pathogens and trans-boundary livestock diseases. However, detailed information relating to the occurrence of endemic pathogens such as those of the order Chlamydiales in such populations is lacking. During the hunting season of 2008, 863 samples (including blood, conjunctival swabs, internal organs and faeces) were collected in the Eastern Swiss Alps from 99 free-living red deer (Cervus elaphus) and 64 free-living roe deer (Capreolus capreolus) and tested using ELISA, PCR and immunohistochemistry for members of the family Chlamydiaceae and the genus Parachlamydia. Parachlamydia spp. were detected in the conjunctival swabs, faeces and internal organs of both species of deer (2.4% positive, with a further 29.5% inconclusive). The very low occurrence of Chlamydiaceae (2.5%) was in line with serological data (0.7% seroprevalence for Chlamydia abortus). Further investigations are required to elucidate the zoonotic potential, pathogenicity, and distribution of Parachlamydia spp. in wild ruminants.
Abstract:
Studies of species range determinants have traditionally focused on abiotic variables (typically climatic conditions), and therefore the recent explicit consideration of biotic interactions represents an important advance in the field. While these studies clearly support the role of biotic interactions in shaping species distributions, most examine only the influence of a single species and/or a single interaction, failing to account for species being subject to multiple concurrent interactions. By fitting species distribution models (SDMs), we examine the influence of multiple vertical (i.e., grazing, trampling, and manuring by mammalian herbivores) and horizontal (i.e., competition and facilitation; estimated from the cover of dominant plant species) interspecific interactions on the occurrence and cover of 41 alpine tundra plant species. Adding plant-plant interactions to baseline SDMs (using five field-quantified abiotic variables) significantly improved models' predictive power for independent data, while herbivore-related variables had only a weak influence. Overall, abiotic variables had the strongest individual contributions to the distribution of alpine tundra plants, with the importance of horizontal interaction variables exceeding that of vertical interaction variables. These results were consistent across three modeling techniques, for both species occurrence and cover, demonstrating the pattern to be robust. Thus, the explicit consideration of multiple biotic interactions reveals that plant-plant interactions exert control over the fine-scale distribution of vascular species that is comparable to abiotic drivers and considerably stronger than herbivores in this low-energy system.
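The model comparison described above (a baseline SDM with abiotic predictors versus one that also includes plant-plant covariates, scored on independent data) can be sketched in Python as follows. The data are entirely synthetic, and logistic regression stands in for just one of the several modeling techniques the study used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 600
abiotic = rng.normal(size=(n, 5))             # e.g. field-quantified abiotic variables
dominant_cover = rng.uniform(0, 100, size=n)  # cover of a dominant plant species (%)

# Synthetic occurrence influenced by abiotic conditions and by competition.
logit = 0.8 * abiotic[:, 0] - 0.5 * abiotic[:, 1] - 0.02 * dominant_cover
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

X_base = abiotic
X_biotic = np.column_stack([abiotic, dominant_cover])

for name, X in (("abiotic only", X_base), ("abiotic + plant-plant", X_biotic)):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")  # compare predictive power of the two SDMs
```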
Abstract:
ABSTRACT: BACKGROUND: The prevalence of obesity has increased in societies of all socio-cultural backgrounds. To date, guidelines set forward to prevent obesity have universally emphasized optimal levels of physical activity. However, the empirical data supporting the assertion that low levels of energy expenditure in activity are a causal factor in the current obesity epidemic are very limited. METHODS: The Modeling the Epidemiologic Transition Study (METS) is a cohort study designed to assess the association between physical activity levels and relative weight, weight gain, and diabetes and cardiovascular disease risk in five population-based samples at different stages of economic development. Twenty-five hundred young adults, ages 25-45, were enrolled in the study: 500 from each of the sites in Ghana, South Africa, Seychelles, Jamaica and the United States. At baseline, physical activity levels were assessed using accelerometry and a questionnaire in all participants and by doubly labeled water in a subsample of 75 per site. We assessed dietary intake using two separate 24-h recalls, body composition using bioelectrical impedance analysis, and health history, social and economic indicators by questionnaire. Blood pressure was measured and blood samples collected for measurement of lipids, glucose, insulin and adipokines. Full examination, including physical activity measured by accelerometry, anthropometric data and fasting glucose, will take place at 12 and 24 months. The distribution of the main variables and the associations between physical activity (independent of energy intake), glucose metabolism and anthropometric measures will be assessed using cross-sectional and longitudinal analyses within and between sites. DISCUSSION: METS will provide insight into the relative contributions of physical activity and diet to excess weight, age-related weight gain and incident glucose impairment in five population-based samples of young adults at different stages of economic development. These data should be useful for the development of empirically based public health policy aimed at the prevention of obesity and associated chronic diseases.
Abstract:
The epithelial Na(+) channel (ENaC), located in the apical membrane of tight epithelia, allows vectorial Na(+) absorption. The amiloride-sensitive ENaC is highly selective for Na(+) and Li(+) ions. There is growing evidence that the short stretch of amino acid residues (preM2) preceding the putative second transmembrane domain M2 forms the outer channel pore with the amiloride binding site and the narrow ion-selective region of the pore. We have shown previously that mutations of the alphaS589 residue in the preM2 segment change the ion selectivity, making the channel permeant to K(+) ions. To understand the molecular basis of this important change in ionic selectivity, we have substituted alphaS589 with amino acids of different sizes and physicochemical properties. Here, we show that the molecular cutoff of the channel pore for inorganic and organic cations increases with the size of the amino acid residue at position alpha589, indicating that alphaS589 mutations enlarge the pore at the selectivity filter. Mutants with an increased permeability to large cations show a decrease in the ENaC unitary conductance of small cations such as Na(+) and Li(+). These findings demonstrate the critical role of the pore size at the alphaS589 residue for the selectivity properties of ENaC. Our data are consistent with the main chain carbonyl oxygens of the alphaS589 residues lining the channel pore at the selectivity filter with their side chain pointing away from the pore lumen. We propose that the alphaS589 side chain is oriented toward the subunit-subunit interface and that substitution of alphaS589 by larger residues increases the pore diameter by adding extra volume at the subunit-subunit interface.