194 results for Decreasing Scale
at Université de Lausanne, Switzerland
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. 
The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigate two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. 
The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach was proven to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
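The gradual-deformation perturbation can be sketched in a few lines. This is a minimal illustration of the generic gradual-deformation rule (linearly combining two independent Gaussian realizations with weights cos θ and sin θ, which preserves the Gaussian statistics), not the thesis's actual implementation; the field size and the value of θ are arbitrary choices for the example.

```python
import numpy as np

def gradual_deformation(m_current, m_independent, theta):
    """Combine two independent standard-Gaussian fields.

    Because cos^2(theta) + sin^2(theta) = 1, the result is again
    standard Gaussian with the same covariance for any theta; a small
    theta yields only a small perturbation of m_current.
    """
    return m_current * np.cos(theta) + m_independent * np.sin(theta)

rng = np.random.default_rng(0)
m1 = rng.standard_normal(1000)   # current model realization
m2 = rng.standard_normal(1000)   # independent proposal realization
m_new = gradual_deformation(m1, m2, theta=0.1)
```

Since the Gaussian statistics are preserved for any θ while small θ values yield proposals close to the current model, θ acts as a directly tunable perturbation strength, which is what the comparison with sequential resampling exploits.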
Abstract:
The Mississippi Valley-type (MVT) Pb-Zn ore district at Mezica is hosted by Middle to Upper Triassic platform carbonate rocks in the Northern Karavanke/Drau Range geotectonic units of the Eastern Alps, northeastern Slovenia. The mineralization at Mezica covers an area of 64 km² with more than 350 orebodies and numerous galena and sphalerite occurrences, which formed epigenetically, both conformable and discordant to bedding. While knowledge of the style of mineralization has grown considerably, the origin of the discordant mineralization is still debated. Sulfur stable isotope analyses of 149 sulfide samples from the different types of orebodies provide new insights into the genesis of these mineralizations and their relationship. Over the whole mining district, sphalerite and galena have δ³⁴S values in the range of −24.7 to −1.5‰ VCDT (−13.5 ± 5.0‰) and −24.7 to −1.4‰ (−10.7 ± 5.9‰), respectively. These values are in the range of the main MVT deposits of the Drau Range. All sulfide δ³⁴S values are negative within a broad range, with δ³⁴S(pyrite) < δ³⁴S(sphalerite) < δ³⁴S(galena) for both conformable and discordant orebodies, indicating isotopically heterogeneous H₂S in the ore-forming fluids and precipitation of the sulfides at thermodynamic disequilibrium. This clearly supports that the main sulfide sulfur originates from bacterially mediated sulfate reduction (BSR) of Middle to Upper Triassic seawater sulfate or evaporite sulfate. Thermochemical sulfate reduction (TSR) by organic compounds contributed a minor amount of ³⁴S-enriched H₂S to the ore fluid. The variations of δ³⁴S values of galena and coarse-grained sphalerite at orefield scale are generally larger than the differences observed in single hand specimens. 
The progressively more negative δ³⁴S values with time along the different sphalerite generations are consistent with mixing of different H₂S sources, with a decreasing contribution of H₂S from regional TSR and an increasing contribution from a local H₂S reservoir produced by BSR (i.e., sedimentary biogenic pyrite, organo-sulfur compounds). Galena in discordant ore (−11.9 to −1.7‰; −7.0 ± 2.7‰, n = 12) tends to be depleted in ³⁴S compared with conformable ore (−24.7 to −2.8‰; −11.7 ± 6.2‰, n = 39). A similar trend is observed from fine-crystalline sphalerite I to coarse, open-space-filling sphalerite II. Some variation of the sulfide δ³⁴S values is attributed to the inherent variability of bacterial sulfate reduction, including metabolic recycling in a locally partially closed system and contribution of H₂S from hydrolysis of biogenic pyrite and thermal cracking of organo-sulfur compounds. The results suggest that the conformable orebodies originated by mixing of hydrothermal saline metal-rich fluid with H₂S-rich pore waters during late burial diagenesis, while the discordant orebodies formed by mobilization of the earlier conformable mineralization.
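The inferred trend toward more negative δ³⁴S values as the TSR contribution wanes follows directly from two-end-member mixing. The sketch below uses illustrative end-member compositions (roughly −2‰ for TSR-derived and −20‰ for BSR-derived H₂S; assumed values for illustration, not measurements from the study):

```python
def delta34s_mix(f_tsr, delta_tsr=-2.0, delta_bsr=-20.0):
    """Two-end-member mixing of H2S sulfur isotope compositions (permil).

    f_tsr is the fraction of H2S derived from TSR (0..1); the remainder
    comes from the BSR reservoir. End-member deltas are illustrative.
    """
    return f_tsr * delta_tsr + (1.0 - f_tsr) * delta_bsr

# As the TSR fraction decreases over time, the mixture trends more negative:
trend = [delta34s_mix(f) for f in (0.8, 0.5, 0.2)]
```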
Abstract:
1. Digital elevation models (DEMs) are often used in landscape ecology to retrieve elevation or first-derivative terrain attributes such as slope or aspect in the context of species distribution modelling. However, DEM-derived variables are scale-dependent and, given the increasing availability of very high-resolution (VHR) DEMs, their ecological relevance must be assessed for different spatial resolutions. 2. In a study area located in the Swiss Western Alps, we computed VHR DEM-derived variables related to morphometry, hydrology and solar radiation. Based on an original spatial resolution of 0.5 m, we generated DEM-derived variables at 1, 2 and 4 m spatial resolutions, applying a Gaussian pyramid. Their associations with local climatic factors, measured by sensors (direct and ambient air temperature, air humidity and soil moisture), as well as ecological indicators derived from species composition, were assessed with multivariate generalized linear models (GLM) and mixed models (GLMM). 3. Specific VHR DEM-derived variables showed significant associations with climatic factors. In addition to slope, aspect and curvature, the underused wetness and ruggedness indices modelled measured ambient humidity and soil moisture, respectively. Remarkably, the spatial resolution of VHR DEM-derived variables had a significant influence on the models' strength, with coefficients of determination decreasing with coarser resolutions or showing a local optimum with a 2 m resolution, depending on the variable considered. 4. These results support the relevance of using multi-scale DEM variables to provide surrogates for important climatic variables such as humidity, moisture and temperature, offering suitable alternatives to direct measurements for evolutionary ecology studies at a local scale.
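The Gaussian-pyramid downscaling step (0.5 m → 1, 2 and 4 m) can be sketched as follows. This is a generic pyramid (smooth, then subsample by a factor of 2), using a simple binomial filter as the Gaussian approximation; the filter weights and the random stand-in for the 0.5 m DEM are assumptions for illustration.

```python
import numpy as np

def _smooth(a, kernel=(0.25, 0.5, 0.25)):
    """Separable binomial (approx. Gaussian) smoothing with edge padding."""
    k = np.asarray(kernel)
    pad = np.pad(a, 1, mode="edge")
    rows = pad[:, :-2] * k[0] + pad[:, 1:-1] * k[1] + pad[:, 2:] * k[2]
    return rows[:-2] * k[0] + rows[1:-1] * k[1] + rows[2:] * k[2]

def gaussian_pyramid(dem, levels):
    """Successively smooth and subsample a DEM by a factor of 2.

    Each level halves the resolution (0.5 m -> 1 m -> 2 m -> 4 m); the
    pre-filter suppresses aliasing before subsampling.
    """
    out = [dem]
    for _ in range(levels):
        out.append(_smooth(out[-1])[::2, ::2])
    return out

dem_05m = np.random.default_rng(1).random((64, 64))  # stand-in for the 0.5 m DEM
levels = gaussian_pyramid(dem_05m, levels=3)         # 0.5, 1, 2 and 4 m grids
```

Terrain attributes (slope, aspect, curvature, wetness, ruggedness) would then be recomputed on each level of the pyramid before being entered into the GLM/GLMM analyses.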
Abstract:
PURPOSE: The Cancer Vaccine Consortium of the Cancer Research Institute (CVC-CRI) conducted a multicenter HLA-peptide multimer proficiency panel (MPP) with a group of 27 laboratories to assess the performance of the assay. EXPERIMENTAL DESIGN: Participants used commercially available HLA-peptide multimers and a well-characterized common source of peripheral blood mononuclear cells (PBMC). The frequency of CD8+ T cells specific for two HLA-A2-restricted model antigens was measured by flow cytometry. The panel design allowed participants to use their preferred staining reagents and locally established protocols for cell labeling, data acquisition and analysis. RESULTS: We observed significant differences in both the performance characteristics of the assay and the reported frequencies of specific T cells across laboratories. These results emphasize the need to identify the critical variables responsible for the observed variability to allow for harmonization of the technique across institutions. CONCLUSIONS: Three key recommendations emerged that would likely reduce assay variability and thus move toward harmonization of this assay: (1) use more than two colors for the staining; (2) collect at least 100,000 CD8+ T cells; and (3) use a background control sample to appropriately set the analytical gates. We also provide more insight into the limitations of the assay and identify additional protocol steps that potentially impact the quality of the data generated and therefore should serve as primary targets for systematic analysis in future panels. Finally, we propose initial guidelines for harmonizing assay performance, which include the introduction of standard operating protocols to allow for adequate training of technical staff and auditing of test analysis procedures.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive abilities instead of concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following the fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces a new extension of the traditional support vector regression (SVR) algorithm. This extension allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is here learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns, which is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of ¹³⁷Cs activity given the measurements taken in the region of Briansk following the Chernobyl accident.
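A multi-scale kernel of the kind described can be sketched as a weighted sum of RBF kernels with different length scales: the short-scale component captures local anomalies, the long-scale component the regional trend. In this minimal illustration the scales and weights are fixed by hand, whereas the paper's algorithm learns the optimal mixture from the data.

```python
import numpy as np

def multi_scale_rbf(X, Y, scales=(0.1, 1.0), weights=(0.5, 0.5)):
    """Weighted sum of RBF (Gaussian) kernels with different length scales.

    X: (n, d) and Y: (m, d) arrays of coordinates; returns the (n, m)
    Gram matrix. Scales and weights here are illustrative defaults.
    """
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return sum(w * np.exp(-d2 / (2.0 * s ** 2)) for s, w in zip(scales, weights))

X = np.array([[0.0], [1.0], [2.0]])  # toy 1-D coordinates
K = multi_scale_rbf(X, X)            # symmetric Gram matrix, unit diagonal
```

A callable of this signature can be passed as the `kernel` argument of scikit-learn's `SVR`, although in the paper the mixture itself is part of the learning problem rather than fixed in advance.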
Abstract:
The aim of this study was to examine the factor structure and internal consistency of the TAS-20 in a sample of adolescents (n = 264), and to describe the distribution of alexithymic characteristics in this sample. The three-factor structure of the TAS-20 was confirmed by our confirmatory factor analysis. Internal consistency, measured with Cronbach's alpha, was acceptable for the first factor (difficulty identifying feelings, DIF), good for the second (difficulty describing feelings, DDF), but weak for the third factor (externally oriented thinking, EOT). The results of an ANOVA revealed a linear trend indicating that as age increases, the overall level of alexithymia (TAS-20 total score), difficulty identifying feelings and externally oriented thinking decrease. Regarding the prevalence of alexithymia, 38.5% of adolescents under 16 were classified as alexithymic, compared with 30.1% of 16- to 17-year-olds and 22% of those over 17. Our study thus indicates that the TAS-20 is an adequate instrument for assessing alexithymia in adolescence, while suggesting some caution given the developmental nature of this period.
Abstract:
BACKGROUND: The Adolescent Drug Abuse Diagnosis (ADAD) and the Health of the Nation Outcome Scales for Children and Adolescents (HoNOSCA) are both outcome measures for adolescent mental health services. AIMS: To compare the ADAD with HoNOSCA and to examine their clinical usefulness. METHODS: Comparison of the ADAD and HoNOSCA outcome measures for 20 adolescents attending a psychiatric day care unit. RESULTS: ADAD change was positively correlated with HoNOSCA change. HoNOSCA assesses the clinic's day-care programme more positively than the ADAD. The ADAD detects a group for which the mean score remains unchanged, whereas HoNOSCA does not. CONCLUSIONS: A good convergent validity emerges between the two assessment tools. The ADAD allows an evidence-based assessment and generally enables a better subject discrimination than HoNOSCA. HoNOSCA gives a less refined evaluation but is more economical in time and possibly more sensitive to change. Both assessment tools give useful information and enabled the Day-care Unit for Adolescents to rethink the processes of care and outcome, which benefited both the institution and the patients.
Abstract:
We investigated the dimensionality of the French version of the Rosenberg Self-Esteem Scale (RSES; Rosenberg, 1965) using confirmatory factor analysis. We tested models of 1 or 2 factors. Results suggest the RSES is a 1-dimensional scale with 3 highly correlated items. Comparison with the Revised NEO Personality Inventory (NEO-PI-R; Costa, McCrae, & Rolland, 1998) demonstrated that Neuroticism correlated strongly, and Extraversion and Conscientiousness moderately, with the RSES. Depression accounted for 47% of the variance of the RSES. Other NEO-PI-R facets were also moderately related to self-esteem.
Abstract:
In mammals, glycogen synthesis and degradation are dynamic processes regulating blood and cerebral glucose levels within a well-defined physiological range. Despite the essential role of glycogen in hepatic and cerebral metabolism, its spatiotemporal distribution at the molecular and cellular level is unclear. By correlating electron microscopy and ultra-high-resolution ion microprobe (NanoSIMS) imaging of tissue from fasted mice injected with ¹³C-labeled glucose, we demonstrate that liver glycogenesis initiates in the hepatocyte perinuclear region before spreading toward the cell membrane. In the mouse brain, we observe that ¹³C is inhomogeneously incorporated into astrocytic glycogen at a rate ~25 times slower than in the liver, in agreement with prior bulk studies. This experiment, using temporally resolved, nanometer-scale imaging of glycogen synthesis and degradation, provides greater insight into glucose metabolism in mammalian organs and shows how this technique can be used to explore biochemical pathways in healthy and diseased states. FROM THE CLINICAL EDITOR: By correlating electron microscopy and ultra-high-resolution ion microprobe imaging of tissue from fasted mice injected with ¹³C-labeled glucose, the authors demonstrate a method to image glycogen metabolism at the nanometer scale.
Abstract:
This observational study analyzed imatinib pharmacokinetics and response in 2478 chronic myeloid leukemia (CML) patients. Data were obtained through centralized therapeutic drug monitoring (TDM) at a median treatment duration of ≥2 years. First, individual initial trough concentrations under a 400 mg/day imatinib starting dose (Cmin,400mg) were estimated. Second, their correlation with reported treatment response was verified. Low imatinib levels were predicted in young male patients and those receiving P-gp/CYP3A4 inducers. These patients also had lower response rates (7% lower 18-month MMR in male patients, 17% lower 1-year CCyR in young patients; Kaplan-Meier estimates). Time-point-independent multivariate regression confirmed a correlation of individual Cmin,400mg with response and adverse events. Possibly due to confounding factors (e.g. dose modifications, patient selection bias), the relationship seemed, however, flatter than previously reported from prospective controlled studies. Nonetheless, these observational results strongly suggest that a subgroup of patients could benefit from early dosage optimization assisted by TDM, because of lower imatinib concentrations and lower response rates.
Abstract:
BACKGROUND/AIMS: Ligand activation of the mineralocorticoid receptor (MR) induces several post-translational modifications (PTMs). Among the different PTMs, MR is known to be dynamically ubiquitylated, with impact on its stability and transcriptional activity. Previously, we have shown that MR is monoubiquitylated at the basal state and that aldosterone stimulation induces monoubiquitylation removal, prompting polyubiquitin-dependent destabilization of the receptor and proteasomal degradation. This study investigated the role of the aldosterone-induced ubiquitin-specific protease USP2-45 on the ubiquitylation state of MR. METHODS: M1 renal epithelial cells were co-transfected with MR with or without wild-type or inactive USP2-45. The association of MR with USP2-45 or TSG101, as well as the MR ubiquitylation state, were determined by immunoprecipitation and immunoblotting. MR transcriptional activity was assessed via a luciferase reporter gene. RESULTS: We show that USP2-45 is able to bind MR and, similarly to aldosterone, induce MR monoubiquitylation removal, disruption of the MR/TSG101 association and destabilization of MR at the protein level. CONCLUSION: This study establishes a novel function for USP2-45, which plays a pivotal role in regulating the ubiquitylation state of MR, and reveals the existence of a negative feedback loop limiting the aldosterone-induced response.
Abstract:
Debris flow susceptibility mapping at a regional scale has been the subject of various studies. The complexity of the phenomenon and the variability of local controlling factors limit the use of process-based models for a first assessment. GIS-based approaches associating an automatic detection of the source areas with a simple assessment of the debris flow spreading may provide a substantial basis for a preliminary susceptibility assessment at the regional scale. The use of a digital elevation model with a 10 m resolution for the Canton de Vaud territory (Switzerland), together with a lithological map and a land use map, has allowed automatic identification of the potential source areas. The spreading estimates are based on basic probabilistic and energy calculations that allow the maximal runout distance of a debris flow to be defined.
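The abstract does not detail the energy calculation; one common simple model for the maximal runout distance is a constant energy-line (Fahrböschung) reach angle, sketched below. Both the 11° reach angle and the 500 m drop height are illustrative assumptions, not values from the study.

```python
import math

def max_runout(drop_height_m, reach_angle_deg=11.0):
    """Maximal horizontal runout from a constant energy-line (reach) angle.

    Assumes the simple Fahrboeschung model L = H / tan(alpha); an 11-degree
    reach angle is a commonly cited lower bound for debris flows, used here
    purely for illustration.
    """
    return drop_height_m / math.tan(math.radians(reach_angle_deg))

runout = max_runout(500.0)  # source area 500 m above the valley floor
```

Applied on a 10 m DEM, such a rule propagates downslope from each detected source cell until the energy line intersects the terrain, which yields the spreading envelope used for the susceptibility map.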
Abstract:
OBJECTIVES: The use of tenofovir is highly associated with the emergence of mutation K65R, which confers broad resistance to nucleoside/nucleotide analogue reverse transcriptase inhibitors (NRTIs), especially when tenofovir is combined with other NRTIs that also select for K65R. Although recent HIV-1 treatment guidelines discouraging these combinations resulted in reduced K65R selection with tenofovir, updated information on the impact of currently recommended regimens on the population selection rate of K65R is presently lacking. METHODS: In this study, we evaluated changes over time in the selection rate of the resistance mutation K65R in a large population of 2736 HIV-1-infected patients failing combination antiretroviral treatment between 2002 and 2010. RESULTS: The K65R resistance mutation was detected in 144 patients, a prevalence of 5.3%. A large majority of observed K65R cases were explained by the use of tenofovir, reflecting its wide use in clinical practice. However, changing patterns over time in the NRTIs accompanying tenofovir resulted in a persistently decreasing probability of K65R selection by tenofovir-based therapy. The currently recommended NRTI combination tenofovir/emtricitabine was associated with a low probability of K65R emergence. For any given dual NRTI combination including tenofovir, higher selection rates of K65R were consistently observed with a non-nucleoside reverse transcriptase inhibitor than with a protease inhibitor as the third agent. DISCUSSION: Our finding of a stable time trend of K65R despite elevated use of tenofovir illustrates the increased potency of current tenofovir-containing HIV-1 therapy.