Abstract:
AIM: To assess whether blockade of the renin-angiotensin system (RAS), a recognized strategy to prevent the progression of diabetic nephropathy, affects renal tissue oxygenation in type 2 diabetes mellitus (T2DM) patients. METHODS: Prospective randomized two-way crossover study; T2DM patients with (micro)albuminuria and/or hypertension underwent blood oxygenation level-dependent magnetic resonance imaging (BOLD-MRI) at baseline, after one month of enalapril (20 mg qd), and after one month of candesartan (16 mg qd). Each BOLD-MRI was performed before and after the administration of furosemide. The mean R2* (=1/T2*) values in the medulla and cortex were calculated, a low R2* indicating high tissue oxygenation. RESULTS: Twelve patients (mean age: 60±11 years, eGFR: 62±22 ml/min/1.73 m²) completed the study. Neither chronic enalapril nor candesartan intake modified renal cortical or medullary R2* levels. Furosemide significantly decreased cortical and medullary R2* levels, suggesting a transient increase in renal oxygenation. Medullary R2* levels correlated positively with urinary sodium excretion and systemic blood pressure, suggesting lower renal oxygenation at higher dietary sodium intake and blood pressure; cortical R2* levels correlated positively with glycemia and HbA1c. CONCLUSION: RAS blockade does not seem to increase renal tissue oxygenation in hypertensive T2DM patients. The response to furosemide and the association with 24-h urinary sodium excretion emphasize the crucial role of renal sodium handling as one of the main determinants of renal tissue oxygenation.
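The R2* metric used in this study is simply the reciprocal of T2*. A minimal sketch of the conversion (the T2* values below are invented, in milliseconds, purely to illustrate the unit handling):

```python
import numpy as np

# Hypothetical voxelwise T2* values (ms) from a medullary region of interest.
t2_star_ms = np.array([30.0, 25.0, 40.0, 35.0])

# R2* = 1/T2*, conventionally reported in s^-1, so convert ms -> s first.
r2_star = 1.0 / (t2_star_ms / 1000.0)

# A lower mean R2* indicates higher tissue oxygenation.
mean_r2 = r2_star.mean()
print(round(mean_r2, 2))  # mean R2* in s^-1
```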
Abstract:
Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high-frequency range this is compensated for by better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a low-frequency DQE comparable to that of screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.
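The NEQ and DQE figures of merit compared here are related by NEQ(f) = MTF²(f)/NPS(f) (with the NPS normalized to the squared large-area signal) and DQE(f) = NEQ(f)/q, where q is the incident photon fluence. A minimal sketch of that calculation, with all measurement values invented for illustration:

```python
import numpy as np

# Illustrative spatial frequencies (cycles/mm) and made-up detector measurements.
f = np.array([0.5, 1.0, 2.0, 4.0])
mtf = np.array([0.95, 0.85, 0.60, 0.30])        # presampled MTF (hypothetical)
nps = np.array([4e-6, 3.5e-6, 3e-6, 2.5e-6])    # normalized NPS, mm^2 (hypothetical)
q = 2.5e5                                       # incident photon fluence per mm^2 (assumed)

neq = mtf**2 / nps      # noise-equivalent quanta per mm^2
dqe = neq / q           # detective quantum efficiency, between 0 and 1
print(dqe)
```

With these invented inputs the DQE falls off with frequency, as it typically does for real screen-film systems.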
Abstract:
We previously showed, in a 3D rat brain cell in vitro model for glutaric aciduria type I, that repeated application of 1 mM 3-hydroxyglutarate (3-OHGA) caused ammonium accumulation, morphological alterations and induction of non-apoptotic cell death in developing brain cells. Here, we performed a dose-response study with lower concentrations of 3-OHGA. We exposed our cultures to 0.1, 0.33 and 1 mM 3-OHGA every 12 h over three days at two developmental stages (DIV5-8 and DIV11-14). Ammonium accumulation was observed at both stages starting from 0.1 mM 3-OHGA, in parallel with a glutamine decrease. Morphological changes started at 0.33 mM, with loss of MBP expression and loss of astrocytic processes. Neurons were not substantially affected. At DIV8, release of LDH into the medium and cellular TUNEL staining increased from 0.1 mM and 0.33 mM 3-OHGA exposure, respectively. No increase in activated caspase-3 was observed. We confirmed ammonium accumulation and non-apoptotic cell death of brain cells in our in vitro model at lower 3-OHGA concentrations, strongly suggesting that the observed effects are likely to take place in the brain of affected patients. The concomitant glutamine decrease suggests a defect in the astrocyte ammonium buffering system. Ammonium accumulation might be the cause of non-apoptotic cell death.
Abstract:
Major histocompatibility complex (MHC) molecules are of crucial importance for the immune system to recognize and defend the body against external attacks. Foreign antigens are presented by specialized cells, called antigen presenting cells, to T lymphocytes in the context of MHC molecules, thereby inducing T cell activation. In addition, MHC molecules are essential for Natural Killer (NK) cell biology, playing a role in NK cell education and activation. Recently, the NOD-like receptor (NLR) family member NLRC5 (NLR caspase recruitment domain containing protein 5) was found to act as a transcriptional regulator of MHC class I, in particular in T and NK cells. Its role in MHC class I expression is, however, minor in dendritic cells (DCs). This raised the question of whether inflammatory conditions, which augment the levels of NLRC5 in DCs, could increase its contribution to MHC class I expression. Our work shows that MHC class I transcript and intracellular levels depend on NLRC5, while its role in MHC class I surface expression is instead negligible. We describe, however, a general salvage mechanism that enables cells with low intracellular MHC class I levels to nevertheless maintain relatively high MHC class I on the cell surface. In addition, a thorough understanding of NLRC5 target gene specificity and mechanism of action is still lacking. Our work delineates the unique consensus sequence in MHC class I promoters required for NLRC5 recruitment and pinpoints conserved features conferring its specificity. Furthermore, through genome-wide analyses, we confirm that NLRC5 regulates classical MHC class I genes and identify novel target genes, all encoding non-classical MHC class I molecules exerting an array of functions in immunity and tolerance. Finally, we asked why a dedicated factor co-regulates MHC class I expression specifically in T and NK lymphocytes.
We show that deregulated NLRC5 expression affects the education of NK cells and alters the crosstalk between T and NK cells, leading to NK cell-mediated killing of T lymphocytes. Altogether, this thesis work brings insights into molecular and physiological aspects of NLRC5 function, which might help understand certain aspects of immune responses and disorders.
Abstract:
BACKGROUND: The risks of public exposure to sudden decompression have until now been related to civil aviation and, to a lesser extent, to diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This method has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, due to the enclosed environment, both decompression kinetics and safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow processing software (Ithink 5.0). Simulations were conducted to analyze "decompression scenarios" for a wide range of parameters relevant to the Swissmetro main study. RESULTS: Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations also showed that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model has proven useful in assisting both design choices and safety management.
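The kind of lumped compartment calculation such a model performs can be sketched as a cabin venting through a leak orifice into a low-pressure tunnel. The sketch below is not the study's model: it is a minimal two-compartment, isothermal approximation with standard choked/subsonic orifice flow, and every number (volumes, areas, pressures) is invented for illustration.

```python
import math

# Minimal sketch: cabin air venting through an orifice into a low-pressure
# tunnel treated as an infinite reservoir. All numbers are illustrative,
# not Swissmetro parameters.
GAMMA, R, T = 1.4, 287.0, 293.0          # air: heat-capacity ratio, gas constant, K
V_cabin = 300.0                          # cabin volume, m^3 (assumed)
A, Cd = 0.01, 0.8                        # orifice area m^2, discharge coeff. (assumed)
p_cabin, p_tunnel = 101325.0, 10000.0    # initial pressures, Pa (assumed)

# Flow is choked (sonic) while p_tunnel/p_cabin is below the critical ratio.
crit = (2.0 / (GAMMA + 1.0)) ** (GAMMA / (GAMMA - 1.0))

dt, t = 0.01, 0.0
while p_cabin > p_tunnel * 1.01:
    if p_tunnel / p_cabin < crit:        # choked mass flow through the orifice
        mdot = Cd * A * p_cabin * math.sqrt(GAMMA / (R * T)) * \
               (2.0 / (GAMMA + 1.0)) ** ((GAMMA + 1.0) / (2.0 * (GAMMA - 1.0)))
    else:                                # subsonic: crude linear taper to zero
        mdot = Cd * A * p_cabin * math.sqrt(GAMMA / (R * T)) * \
               (1.0 - p_tunnel / p_cabin)
    # isothermal cabin: p = (m/V) R T, so dp/dt = -(mdot/V) R T
    p_cabin -= mdot / V_cabin * R * T * dt
    t += dt

print(f"cabin reaches tunnel pressure after ~{t:.0f} s")
```

As the abstract notes, shrinking the orifice area A or raising the tunnel pressure stretches this timescale dramatically, which is the kind of trade-off the scenario simulations explore.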
Abstract:
A 47-year-old male taxi driver experienced multiple adverse drug reactions (ADRs) during therapy with clomipramine (CMI) and quetiapine for major depressive disorder, after having been unsuccessfully treated with adequate doses of mirtazapine and venlafaxine. Serum concentrations of CMI and quetiapine were significantly increased, and pharmacogenetic testing showed a poor metabolizer status for CYP2D6, low CYP3A4/5 activity and a normal CYP2C19 genotype. After reduction of the CMI dose and discontinuation of quetiapine, all ADRs subsided except for the increase in liver enzymes. The latter improved but did not normalize completely, even months later, possibly due to concomitant cholelithiasis.
Abstract:
BACKGROUND: Reading volume and mammography screening performance appear positively correlated. Quality and effectiveness were compared across low-volume screening programmes targeting relatively small populations and operating under the same decentralised healthcare system. Except for the accreditation of 2nd readers (restrictive vs non-restrictive strategy), these organised programmes had similar screening regimens/procedures and duration, which maximises comparability. Variation in performance and its determinants were explored in order to improve mammography practice and optimise screening performance. METHODS: Approximately 200,000 screens performed between 1999 and 2006 (4 rounds) in the 3 longest-standing Swiss cantonal programmes (Vaud, Geneva and Valais) were assessed. Indicators of quality and effectiveness were assessed according to European standards. Interval cancers were identified through linkage with cancer registry records. RESULTS: Swiss programmes met most European standards of performance, with a substantial, favourable cancer stage shift. Up to a two-fold variation occurred for several performance indicators. In subsequent rounds, compared with the programmes (Vaud and Geneva) that applied a restrictive selection strategy for 2nd readers, the proportions of in situ lesions and of small cancers (≤1 cm) were one third lower and halved, respectively, and the proportion of advanced lesions (stage II+) was nearly 50% higher in the programme without a restrictive selection strategy. The discrepancy in second-year proportional incidence of interval cancers appears to be multicausal. CONCLUSION: Differences in performance could partly be explained by a selective strategy for second readers and prior experience in service screening, but not by the levels of opportunistic screening and programme attendance. This study provides clues for enhancing mammography screening performance in low-volume programmes.
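The proportional incidence of interval cancers referred to above is the ratio of observed interval cancers to the number expected under the background (no-screening) incidence rate. A toy calculation with invented numbers, purely to show the arithmetic:

```python
# Hypothetical interval-cancer proportional incidence (all numbers invented).
interval_cancers = 18        # cancers surfacing between two screening rounds
person_years = 30000.0       # person-years at risk during the interval (assumed)
background_rate = 0.002      # expected annual incidence without screening (assumed)

expected_cancers = person_years * background_rate
proportional_incidence = interval_cancers / expected_cancers
print(proportional_incidence)  # fraction of the expected background incidence
```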
Abstract:
Many biological specimens do not arrange themselves in ordered assemblies (tubular or flat 2D crystals) suitable for electron crystallography, nor in perfectly ordered 3D crystals for X-ray diffraction; many others are simply too large to be approached by NMR spectroscopy. Therefore, single-particle analysis has become a progressively more important technique for the structural determination of large isolated macromolecules by cryo-electron microscopy. Nevertheless, the low signal-to-noise ratio (SNR) and the high electron-beam sensitivity of biological samples remain two main resolution-limiting factors when the specimens are observed in their native state. Cryo-negative staining is a recently developed technique that allows the study of biological samples with the electron microscope. The samples are observed at low temperature, in the vitrified state, but in the presence of a stain (ammonium molybdate). In the present work, the advantages of this novel technique are investigated: it is shown that cryo-negative staining can generally overcome most of the problems encountered with cryo-electron microscopy of vitrified native suspensions of biological particles. The specimens are faithfully represented with a 10-times higher SNR than in the case of unstained samples. Beam damage is found to be considerably reduced, as shown by comparison of multiple-exposure series of both stained and unstained samples. The present report also demonstrates that cryo-negative staining is capable of high-resolution analysis of biological macromolecules. The vitrified stain solution surrounding the sample does not forbid access to the internal features (i.e. the secondary structure) of a protein. This finding is of direct interest for structural biologists trying to combine electron microscopy and X-ray data. Finally, several application examples demonstrate the advantages of this newly developed electron microscopy technique.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time quality control of navigation using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail-line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys.
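The semblance velocity analysis mentioned above can be illustrated with a toy example: on a synthetic CMP gather containing a single reflection hyperbola, the trial stacking velocity that best flattens the event maximizes the semblance. All parameters below (sampling, offsets, velocities) are invented for illustration and are not taken from the surveys.

```python
import numpy as np

# Synthetic CMP gather: one spike reflector at t0 = 0.2 s, NMO velocity 1500 m/s.
dt, nt = 0.001, 600                      # 1 ms sampling, 0.6 s record
offsets = np.arange(25.0, 400.0, 25.0)   # source-receiver offsets, m
t0, v_true = 0.2, 1500.0

gather = np.zeros((len(offsets), nt))
for i, x in enumerate(offsets):          # place a spike on the reflection hyperbola
    tx = np.sqrt(t0**2 + (x / v_true)**2)
    gather[i, int(round(tx / dt))] = 1.0

def semblance(v):
    """Flatten the gather along the trial hyperbola and return the
    normalized stack energy (semblance, between 0 and 1)."""
    shifted = np.zeros_like(gather)
    for i, x in enumerate(offsets):
        lag = int(round((np.sqrt(t0**2 + (x / v)**2) - t0) / dt))
        shifted[i, :nt - lag] = gather[i, lag:]   # crude sample-shift NMO correction
    num = (shifted.sum(axis=0) ** 2).sum()
    den = len(offsets) * (shifted ** 2).sum()
    return num / den

vels = np.arange(1300.0, 1701.0, 25.0)           # trial velocity scan
best = vels[np.argmax([semblance(v) for v in vels])]
print(best)  # the scan peaks at the true NMO velocity, 1500.0
```

In a production flow the same scan is run over time windows at each analysis location, producing the semblance spectra from which interval velocities are picked.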
In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes. Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected at geological discontinuities at different depths and then travel back to the surface. The recorded signals not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition and any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles, providing a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional character of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought fresh impetus to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, yields final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work can resolve details on the order of one metre. The new system is based on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control navigation and trigger the shots of the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This allows the instruments to be positioned with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, notably regarding positioning. After processing, the data reveal several main seismic facies corresponding in particular to lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering.
Resumo:
One aim of this study is to determine the impact of water velocity on the uptake of indicator polychlorinated biphenyls (iPCBs) by silicone rubber (SR) and low-density polyethylene (LDPE) passive samplers. A second aim is to assess the efficiency of performance reference compounds (PRCs) in correcting for the impact of water velocity. SR and LDPE samplers were spiked with 11 or 12 PRCs and exposed for 6 weeks to four different velocities (in the range of 1.6 to 37.7 cm s−1) in river-like flow conditions using a channel system supplied with river water. A relationship between velocity and uptake was found for each iPCB, which makes it possible to determine the expected changes in uptake due to velocity variations. For both samplers, velocity increases from 2 to 10 cm s−1, 30 cm s−1 (interpolated data) and 100 cm s−1 (extrapolated data) lead to increases in uptake that do not exceed a factor of 2, 3 and 4.5, respectively. Results also showed that the influence of velocity decreased with increasing octanol-water partition coefficient (log Kow) of the iPCBs when SR was used, whereas the opposite effect was observed for LDPE. Time-weighted average (TWA) concentrations of iPCBs in water were calculated from iPCB uptake and PRC release. These calculations were performed using either a single PRC or all the PRCs. The efficiency of PRCs in correcting the impact of velocity was assessed by comparing the TWA concentrations obtained at the four tested velocities. For SR, good agreement was found among the four TWA concentrations with both methods (average RSD < 10%). For LDPE as well, PRCs offered a good correction of the impact of water velocity (average RSD of about 10 to 20%). These results contribute to the process of acceptance of passive sampling in routine regulatory monitoring programs.
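The TWA calculation described in this abstract rests on the standard first-order uptake model for passive samplers, in which a PRC's dissipation yields an in-situ sampling rate that converts the absorbed mass into a water concentration. A minimal sketch of that arithmetic follows; the function names and all numerical values are illustrative assumptions, not values from the study:

```python
import math

def sampling_rate_from_prc(f_retained, t_days, K_sw, m_sampler_kg):
    """Estimate the in-situ sampling rate Rs (L/day) from the retained
    fraction of a single PRC, using the first-order dissipation model
    f = exp(-Rs * t / (K_sw * m))."""
    ke = -math.log(f_retained) / t_days      # PRC elimination rate (1/day)
    return ke * K_sw * m_sampler_kg          # Rs in L/day

def twa_concentration(N_absorbed_ng, Rs_L_per_day, t_days):
    """Time-weighted average water concentration (ng/L), assuming the
    uptake stayed in the linear (integrative) regime."""
    return N_absorbed_ng / (Rs_L_per_day * t_days)

# Illustrative numbers: 40% of the PRC retained after a 6-week (42-day)
# exposure, a sampler-water partition coefficient of 1e5 L/kg, and a 3 g
# sampler that absorbed 12 ng of an iPCB.
Rs = sampling_rate_from_prc(f_retained=0.4, t_days=42,
                            K_sw=1.0e5, m_sampler_kg=0.003)
Cw = twa_concentration(N_absorbed_ng=12.0, Rs_L_per_day=Rs, t_days=42)
```

When several PRCs are used, as in the study, a sampling-rate estimate can be obtained from each (or from a joint fit over all of them) before computing the TWA concentration.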
Resumo:
PURPOSE: To optimize and preliminarily evaluate a three-dimensional (3D) radial balanced steady-state free precession (bSSFP) arterial spin labeled (ASL) sequence for nonenhanced MR angiography (MRA) of the extracranial carotid arteries. MATERIALS AND METHODS: The carotid arteries of 13 healthy subjects and 2 patients were imaged on a 1.5 Tesla MRI system using an undersampled 3D radial bSSFP sequence providing a scan time of ∼4 min and 1 mm³ isotropic resolution. A hybridized scheme that combined pseudocontinuous and pulsed ASL was used to maximize arterial coverage. The impact of a post label delay period, the sequence repetition time, and radiofrequency (RF) energy configuration of pseudocontinuous labeling on the display of the carotid arteries was assessed with contrast-to-noise ratio (CNR) measurements. Faster, more highly undersampled 2 and 1 min scans were tested. RESULTS: Using hybridized ASL MRA and a 3D radial bSSFP trajectory, arterial CNR was maximized with a post label delay of 0.2 s, repetition times ≥ 2.5 s (P < 0.05), and by eliminating RF energy during the pseudocontinuous control phase (P < 0.001). With higher levels of undersampling, the carotid arteries were displayed in ≤ 2 min. CONCLUSION: Nonenhanced MRA using hybridized ASL with a 3D radial bSSFP trajectory can display long lengths of the carotid arteries with 1 mm³ isotropic resolution. J. Magn. Reson. Imaging 2015;41:1150-1156. © 2014 Wiley Periodicals, Inc.
Resumo:
BACKGROUND: Frequent emergency department users represent a small number of patients but account for a large number of emergency department visits. They warrant particular attention because they are often vulnerable patients with many risk factors affecting their quality of life (QoL). Case management interventions have resulted in a significant decrease in emergency department visits, but their association with QoL has not been assessed. One aim of our study was to examine to what extent an interdisciplinary case management intervention, compared to standard emergency care, improved frequent emergency department users' QoL. METHODS: Data are part of a randomized controlled trial designed to improve frequent emergency department users' QoL and use of health-care resources at the Lausanne University Hospital, Switzerland. In total, 250 frequent emergency department users (≥5 attendances during the previous 12 months; ≥18 years of age) were interviewed between May 2012 and July 2013. Following an assessment focused on social characteristics; social, mental, and somatic determinants of health; risk behaviors; health-care use; and QoL, participants were randomly assigned to the control or the intervention group (n=125 in each group). The final sample included 194 participants (20 deaths, 36 dropouts; n=96 in the intervention group, n=99 in the control group). Participants in the intervention group received a case management intervention by an interdisciplinary, mobile team in addition to standard emergency care. The case management intervention involved four nurses and a physician who provided counseling and assistance concerning social determinants of health, substance-use disorders, and access to the health-care system. The participants' QoL was evaluated by a study nurse using the WHOQOL-BREF five times during the study (at baseline and at 2, 5.5, 9, and 12 months).
Four of the six WHOQOL dimensions of QoL were retained here: physical health, psychological health, social relationships, and environment, with scores ranging from 0 (low QoL) to 100 (high QoL). A linear mixed-effects model with participants as a random effect was run to analyze the change in QoL over time. The effects of time, participants' group, and the interaction between time and group were tested. These effects were adjusted for sociodemographic characteristics and health-related variables (i.e., age, gender, education, citizenship, marital status, type of financial resources, proficiency in French, somatic and mental health problems, and risk behaviors).
Resumo:
Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty of nonlinear inverse problems. Yet, application of such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on model-structure constraints, using different norms for the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model-structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data and are hence difficult to resolve. This problem can be partly mitigated if the plane-wave EM data are augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein successfully recovers the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
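The core mechanism this abstract describes, i.e. MCMC sampling of a pixel-based model under a data misfit plus a model-structure (smoothness) constraint, can be sketched with a toy Metropolis sampler. This is a generic illustration of the technique, not the authors' implementation; the forward model, parameter values, and regularization form here are illustrative assumptions:

```python
import math
import random

def log_posterior(m, data, forward, sigma, reg_weight):
    """Gaussian data misfit plus a first-difference (smoothness)
    model-structure penalty, playing the role of the regularization
    constraint discussed in the abstract."""
    pred = forward(m)
    misfit = sum((d - p) ** 2 for d, p in zip(data, pred)) / (2 * sigma ** 2)
    rough = sum((m[i + 1] - m[i]) ** 2 for i in range(len(m) - 1))
    return -misfit - reg_weight * rough

def metropolis(m0, data, forward, sigma, reg_weight, steps=5000, prop=0.1):
    """Single-component random-walk Metropolis sampler over the model."""
    random.seed(0)  # reproducible toy run
    m = list(m0)
    lp = log_posterior(m, data, forward, sigma, reg_weight)
    chain = []
    for _ in range(steps):
        i = random.randrange(len(m))             # perturb one "pixel"
        cand = list(m)
        cand[i] += random.gauss(0.0, prop)
        lp_cand = log_posterior(cand, data, forward, sigma, reg_weight)
        if math.log(random.random()) < lp_cand - lp:  # Metropolis accept
            m, lp = cand, lp_cand
        chain.append(list(m))
    return chain

# Toy example: identity forward model, three "data" points.
data = [1.0, 2.0, 3.0]
chain = metropolis([0.0, 0.0, 0.0], data, lambda m: m,
                   sigma=0.2, reg_weight=0.01, steps=5000)
```

In a real EM inversion the forward operator is the expensive part, and the regularization weight itself would be sampled hierarchically, as the abstract notes; the accept/reject logic is unchanged.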
Resumo:
The impact of round-the-clock cerebrospinal fluid (CSF) Gram stain on overnight empirical therapy for suspected central nervous system (CNS) infections was investigated. All consecutive overnight CSF Gram stains between 2006 and 2011 were included. The impact of a positive or a negative test on empirical therapy was evaluated and compared to other clinical and biological indications based on institutional guidelines. Bacterial CNS infection was documented in 51/241 suspected cases. Overnight CSF Gram stain was positive in 24/51. Upon validation, there were two false-positive and one false-negative results. The sensitivity and specificity were 41% and 99%, respectively. All patients but one had indications for empirical therapy other than the Gram stain alone. Upon obtaining the Gram result, empirical therapy was modified in 7/24 cases, including the addition of an appropriate agent (1), the addition of unnecessary agents (3), and the simplification of unnecessary combination therapy (3/11). Among 74 cases with a negative CSF Gram stain and without a formal indication for empirical therapy, antibiotics were withheld in only 29. Round-the-clock CSF Gram stain had a low impact on overnight empirical therapy for suspected CNS infections and was associated with several misinterpretation errors. Clinicians showed little confidence in CSF direct examination for simplifying or withholding therapy before definite microbiological results.
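The reported sensitivity and specificity follow from the standard 2×2 definitions. A minimal sketch, with one reading of the abstract's counts that is consistent with the reported percentages (the abstract does not give the full 2×2 table, so the exact cell counts below are an illustrative reconstruction):

```python
def sensitivity(tp, fn):
    """True positives over all documented cases."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negatives over all cases without the disease."""
    return tn / (tn + fp)

# Illustrative reconstruction: 51 documented infections among 241
# suspected cases leaves 190 non-cases; 2 false positives give
# 190 - 2 = 188 true negatives, and a ~41% sensitivity corresponds
# to about 21 true positives (21/51).
sens = sensitivity(tp=21, fn=30)   # ~0.41
spec = specificity(tn=188, fp=2)   # ~0.99
```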