228 results for Document imaging system

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

BACKGROUND: Early detection is a major goal in the management of malignant melanoma. Besides clinical assessment, many noninvasive technologies such as dermoscopy, digital dermoscopy and in vivo laser scanning microscopy are used as additional methods. Herein we tested a system that assesses lesional perfusion as a tool for early melanoma detection.

METHODS: Laser Doppler flow (FluxExplorer) and the mole analyzer (MA) score (FotoFinder) were applied to histologically verified melanocytic nevi (n = 33) and malignant melanomas (n = 12).

RESULTS: Mean perfusion and MA scores were significantly increased in melanomas compared to nevi. However, applying an empirically determined threshold of a 16% perfusion increase, only 42% of the melanomas fulfilled the criterion of malignancy, whereas 82% of the melanomas did so with the mole analyzer score.

CONCLUSION: Laser Doppler imaging is a highly sensitive technology for assessing skin and skin tumor perfusion in vivo. Although mean perfusion is higher in melanomas than in nevi, the high number of false-negative results hampers the use of this technology for early melanoma detection.
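The reported detection rates translate directly into per-method sensitivities. A minimal sketch (not the study's code), using only the counts stated in the abstract:

```python
# Sensitivity of each criterion on the melanoma group (12 melanomas;
# the 42% and 82% detection rates are taken from the abstract).
melanomas = 12

# Melanomas flagged as malignant by each method
flagged_perfusion = round(0.42 * melanomas)  # 16% perfusion-increase threshold
flagged_ma_score = round(0.82 * melanomas)   # mole analyzer (MA) score

def sensitivity(true_positives, total_positives):
    """Fraction of histologically verified melanomas the criterion detects."""
    return true_positives / total_positives

print(f"perfusion threshold: {sensitivity(flagged_perfusion, melanomas):.0%}")
print(f"MA score:            {sensitivity(flagged_ma_score, melanomas):.0%}")
```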

Relevance: 100.00%

Abstract:

BACKGROUND: Deep burn assessment made by clinical evaluation has an accuracy varying between 60% and 80% and determines whether a burn injury will need tangential excision and skin grafting or will be able to heal spontaneously. Laser Doppler Imaging (LDI) techniques allow an improved burn depth assessment, but their use is limited by time-consuming image acquisition, which may take up to 6 min per image. METHODS: To evaluate the effectiveness and reliability of a newly developed full-field LDI technology, 15 consecutive patients presenting with intermediate-depth burns were assessed both clinically and by FluxExplorer LDI technology. The two methods of assessment were compared. RESULTS: Image acquisition was completed within 6 s. FluxExplorer LDI technology achieved a significantly better accuracy of burn depth assessment than the clinical judgement of board-certified plastic and reconstructive surgeons (P < 0.05; 93% of burn injuries correctly assessed vs. 80% for clinical assessment). CONCLUSION: Technological improvements of LDI leading to decreased image acquisition time and reliable burn depth assessment allow the routine use of such devices in the acute setting of burn care without interfering with the patient's treatment. Rapid and reliable LDI may assist clinicians in burn depth assessment and may limit the morbidity of burn patients by minimizing the area of surgical debridement. Future technological improvements allowing the miniaturization of the device will further ease its clinical application.

Relevance: 100.00%

Abstract:

PURPOSE: EOS (EOS imaging S.A., Paris, France) is an x-ray imaging system that uses slot-scanning technology to optimize the trade-off between image quality and dose. The goal of this study was to characterize the EOS system in terms of occupational exposure, organ doses to patients and image quality for full spine examinations. METHODS: Occupational exposure was determined by measuring the ambient dose equivalents in the radiological room during a standard full spine examination. Patient dosimetry was performed using anthropomorphic phantoms representing an adolescent and a five-year-old child. The organ doses were measured with thermoluminescent detectors and then used to calculate effective doses. Patient exposure with EOS was then compared to dose levels reported for conventional radiological systems. Image quality was assessed in terms of spatial resolution and the different noise contributions in order to evaluate the detector performance of the system. The spatial-frequency signal transfer efficiency of the imaging system was quantified by the detective quantum efficiency (DQE). RESULTS: The use of a protective apron is recommended when the medical staff or parents have to stand near the cubicle in the radiological room. The estimated effective dose to patients undergoing a full spine examination with the EOS system was 290 μSv for an adult and 200 μSv for a child. The modulation transfer function (MTF) and noise power spectrum (NPS) are nonisotropic, with higher values in the scanning direction; they are also energy-dependent but scanning-speed independent. The system was shown to be quantum-limited, with a maximum DQE of 13%. The relevance of the DQE for slot-scanning systems is addressed. CONCLUSIONS: In summary, the estimated effective dose was 290 μSv for an adult, and the image quality remains comparable to that of conventional systems.
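The frequency-dependent DQE mentioned above combines the MTF, the normalized NPS and the incident photon fluence. The sketch below uses assumed, purely illustrative curves (not EOS measurements) to show the standard calculation for a linear detector:

```python
import numpy as np

# Illustrative inputs (assumed shapes and values, not EOS data)
f = np.linspace(0.05, 5.0, 100)        # spatial frequency [1/mm]
mtf = np.exp(-0.5 * f)                 # presampling MTF (assumed model)
nps = 3e-5 * (0.3 + np.exp(-0.8 * f))  # normalized NPS [mm^2] (assumed model)
q = 2.5e5                              # incident fluence [photons/mm^2] (assumed)

# Standard DQE estimate for a linear, quantum-limited detector:
#   DQE(f) = MTF(f)^2 / (q * NPS(f)), with NPS normalized by the mean signal
dqe = mtf**2 / (q * nps)

print(f"max DQE = {dqe.max():.2f} at f = {f[dqe.argmax()]:.2f} /mm")
```

With these assumed curves the maximum DQE lands near the low-frequency end, on the order of 10%, comparable in magnitude to the 13% reported for the EOS system.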

Relevance: 90.00%

Abstract:

Purpose. We evaluated the influence of the time between low-dose gadolinium (Gd) contrast administration and coronary vessel wall late gadolinium enhancement (LGE) detected by 3T magnetic resonance imaging (MRI) in healthy subjects and patients with coronary artery disease (CAD). Materials and Methods. Four healthy subjects (4 men, mean age 29 ± 3 years) and eleven CAD patients (6 women, mean age 61 ± 10 years) were studied on a commercial 3.0 Tesla (T) whole-body MR imaging system (Achieva 3.0 T; Philips, Best, The Netherlands). T1-weighted inversion-recovery coronary MRI was repeated up to 75 minutes after administration of low-dose Gd (0.1 mmol/kg Gd-DTPA). Results. LGE was seen in none of the healthy subjects but in all of the CAD patients. In CAD patients, fifty-six of 62 (90.3%) segments showed LGE of the coronary artery vessel wall at time interval 1 after contrast. At time interval 2, 34 of 42 (81.0%), and at time interval 3, 29 of 39 evaluable segments (74.4%) were enhanced. Conclusion. In this work, we demonstrate LGE of the coronary artery vessel wall using 3.0 T MRI after a single, low-dose Gd contrast injection in CAD patients but not in healthy subjects. In the majority of the evaluated coronary segments in CAD patients, LGE of the coronary vessel wall was already detectable 30-45 minutes after administration of the contrast agent.

Relevance: 90.00%

Abstract:

OBJECTIVES: Our objective is to test the hypothesis that coronary endothelial function (CorEndoFx) does not change with repeated isometric handgrip (IHG) stress in CAD patients or healthy subjects. BACKGROUND: Coronary responses to endothelium-dependent stressors are important measures of vascular risk that can change in response to environmental stimuli or pharmacologic interventions. The evaluation of the effect of an acute intervention on endothelial response is only valid if the measurement does not change significantly in the short term under normal conditions. Using 3.0 Tesla (T) MRI, we non-invasively compared two coronary artery endothelial function measurements separated by a ten-minute interval in healthy subjects and patients with coronary artery disease (CAD). METHODS: Twenty healthy adult subjects and 12 CAD patients were studied on a commercial 3.0 T whole-body MR imaging system. Coronary cross-sectional area (CSA), peak diastolic coronary flow velocity (PDFV) and blood flow were quantified before and during continuous IHG stress, an endothelium-dependent stressor. The IHG exercise with imaging was repeated after a 10-minute recovery period. RESULTS: In healthy adults, coronary artery CSA changes and blood-flow increases did not differ between the first and second stresses (mean % change ± SEM, first vs. second stress; CSA: 14.8% ± 3.3% vs. 17.8% ± 3.6%, p = 0.24; PDFV: 27.5% ± 4.9% vs. 24.2% ± 4.5%, p = 0.54; blood flow: 44.3% ± 8.3% vs. 44.8% ± 8.1%, p = 0.84). The coronary vasoreactive responses in the CAD patients also did not differ between the first and second stresses (mean % change ± SEM, first vs. second stress; CSA: -6.4% ± 2.0% vs. -5.0% ± 2.4%, p = 0.22; PDFV: -4.0% ± 4.6% vs. -4.2% ± 5.3%, p = 0.83; blood flow: -9.7% ± 5.1% vs. -8.7% ± 6.3%, p = 0.38). CONCLUSION: MRI measures of CorEndoFx are unchanged during repeated isometric handgrip exercise tests in CAD patients and healthy adults. These findings demonstrate the repeatability of noninvasive 3T MRI assessment of CorEndoFx and support its use in future studies designed to determine the effects of acute interventions on coronary vasoreactivity.

Relevance: 90.00%

Abstract:

BACKGROUND: Takayasu arteritis (TA) is a rare form of chronic inflammatory granulomatous arteritis of the aorta and its major branches. Late gadolinium enhancement (LGE) with magnetic resonance imaging (MRI) has demonstrated its value for the detection of vessel wall alterations in TA. The aim of this study was to assess LGE of the coronary artery wall in patients with TA compared to patients with stable coronary artery disease (CAD). METHODS: We enrolled 9 patients (8 female, average age 46 ± 13 years) with proven TA. The CAD group comprised 9 patients (8 male, average age 65 ± 10 years). Studies were performed on a commercial 3T whole-body MR imaging system (Achieva; Philips, Best, The Netherlands) using a 3D inversion-prepared navigator-gated spoiled gradient-echo sequence, which was repeated 34-45 minutes after low-dose gadolinium administration. RESULTS: No coronary vessel wall enhancement was observed prior to contrast in either group. Post contrast, coronary LGE on inversion-recovery scans was detected in 28 of 50 segments (56%) seen on T2-Prep scans in TA patients and in 25 of 57 segments (44%) in CAD patients. Quantitative assessment of coronary artery vessel wall contrast-to-noise ratio (CNR) post contrast revealed no significant difference between the two groups (CNR 6.0 ± 2.4 in TA and 7.3 ± 2.5 in CAD; p = 0.474). CONCLUSION: Our findings suggest that LGE of the coronary artery wall is common in patients with TA and similarly pronounced as in CAD patients. The observed coronary LGE seems to be rather nonspecific, and the differentiation between coronary vessel wall fibrosis and inflammation remains unclear.

Relevance: 80.00%

Abstract:

La tomodensitométrie (CT) est une technique d'imagerie dont l'intérêt n'a cessé de croître depuis son apparition au début des années 70. Dans le domaine médical, son utilisation est incontournable, à tel point que ce système d'imagerie pourrait devenir victime de son succès si son impact au niveau de l'exposition de la population ne fait pas l'objet d'une attention particulière. Bien évidemment, l'augmentation du nombre d'examens CT a permis d'améliorer la prise en charge des patients ou a rendu certaines procédures moins invasives. Toutefois, pour assurer que le compromis risque-bénéfice soit toujours en faveur du patient, il est nécessaire d'éviter de délivrer des doses non utiles au diagnostic.

Si cette action est importante chez l'adulte, elle doit être une priorité lorsque les examens se font chez l'enfant, en particulier lorsque l'on suit des pathologies qui nécessitent plusieurs examens CT au cours de la vie du patient. En effet, les enfants et jeunes adultes sont plus radiosensibles. De plus, leur espérance de vie étant supérieure à celle de l'adulte, ils présentent un risque accru de développer un cancer radio-induit dont la phase de latence peut être supérieure à vingt ans. Partant du principe que chaque examen radiologique est justifié, il devient dès lors nécessaire d'optimiser les protocoles d'acquisition pour s'assurer que le patient ne soit pas irradié inutilement.
L'avancée technologique au niveau du CT est très rapide et, depuis 2009, de nouvelles techniques de reconstruction d'images, dites itératives, ont été introduites afin de réduire la dose et d'améliorer la qualité d'image.

Le présent travail a pour objectif de déterminer le potentiel des reconstructions itératives statistiques pour réduire au minimum les doses délivrées lors d'examens CT chez l'enfant et le jeune adulte, tout en conservant une qualité d'image permettant le diagnostic, ceci afin de proposer des protocoles optimisés.

L'optimisation d'un protocole d'examen CT nécessite de pouvoir évaluer la dose délivrée et la qualité d'image utile au diagnostic. Alors que la dose est estimée au moyen d'indices CT (CTDIvol et DLP), ce travail a la particularité d'utiliser deux approches radicalement différentes pour évaluer la qualité d'image. La première approche, dite « physique », se base sur le calcul de métriques physiques (SD, MTF, NPS, etc.) mesurées dans des conditions bien définies, le plus souvent sur fantômes. Bien que cette démarche soit limitée car elle n'intègre pas la perception des radiologues, elle permet de caractériser de manière rapide et simple certaines propriétés d'une image. La seconde approche, dite « clinique », est basée sur l'évaluation de structures anatomiques (critères diagnostiques) présentes sur les images de patients. Des radiologues, impliqués dans l'étape d'évaluation, doivent qualifier la qualité des structures d'un point de vue diagnostique en utilisant une échelle de notation simple. Cette approche, lourde à mettre en place, a l'avantage d'être proche du travail du radiologue et peut être considérée comme méthode de référence.

Parmi les principaux résultats de ce travail, il a été montré que les algorithmes itératifs statistiques étudiés en clinique (ASIR, Veo) ont un important potentiel pour réduire la dose au CT (jusqu'à -90%).
Cependant, par leur fonctionnement, ils modifient l'apparence de l'image en entraînant un changement de texture qui pourrait affecter la qualité du diagnostic. En comparant les résultats fournis par les approches « clinique » et « physique », il a été montré que ce changement de texture se traduit par une modification du spectre fréquentiel du bruit dont l'analyse permet d'anticiper ou d'éviter une perte diagnostique. Ce travail montre également que l'intégration de ces nouvelles techniques de reconstruction en clinique ne peut se faire de manière simple sur la base de protocoles utilisant des reconstructions classiques. Les conclusions de ce travail ainsi que les outils développés pourront également guider de futures études dans le domaine de la qualité d'image, comme par exemple l'analyse de textures ou la modélisation d'observateurs pour le CT.

Computed tomography (CT) is an imaging technique in which interest has been growing since it first began to be used in the early 1970s. In the clinical environment, this imaging system has emerged as the gold standard modality because of its high sensitivity in producing accurate diagnostic images. However, even if a direct benefit to patient healthcare is attributed to CT, the dramatic increase in the number of CT examinations performed has raised concerns about the potential negative effects of ionizing radiation on the population. To ensure a benefit-risk balance that works in favor of the patient, it is important to balance image quality and dose in order to avoid unnecessary patient exposure.

If this balance is important for adults, it should be an absolute priority for children undergoing CT examinations, especially for patients suffering from diseases requiring several follow-up examinations over the patient's lifetime. Indeed, children and young adults are more sensitive to ionizing radiation and have an extended life span in comparison to adults.
For this population, the risk of developing cancer, whose latency period can exceed 20 years, is significantly higher than for adults. Assuming that each patient examination is justified, it then becomes a priority to optimize CT acquisition protocols in order to minimize the dose delivered to the patient. Over the past few years, CT technology has been advancing at a rapid pace. Since 2009, new iterative image reconstruction techniques, called statistical iterative reconstructions, have been introduced in order to decrease patient exposure and improve image quality.

The goal of the present work was to determine the potential of statistical iterative reconstructions to reduce dose as much as possible in children and young adult examinations without compromising image quality or diagnosis.

The optimization step requires the evaluation of both the delivered dose and the image quality useful for diagnosis. While the dose is estimated using CT indices (CTDIvol and DLP), the particularity of this research was to use two radically different approaches to evaluate image quality. The first approach, called the "physical approach", computed physical metrics (SD, MTF, NPS, etc.) measured on phantoms under well-known conditions. Although this technique has some limitations because it does not take the radiologist's perspective into account, it enables the physical characterization of image properties in a simple and timely way. The second approach, called the "clinical approach", was based on the evaluation of anatomical structures (diagnostic criteria) present in patient images. Radiologists, involved in the assessment step, were asked to score the image quality of structures for diagnostic purposes using a simple rating scale. This approach is relatively complicated to implement and also time-consuming.
Nevertheless, it has the advantage of being very close to the practice of radiologists and is considered a reference method.

Primarily, this work revealed that the statistical iterative reconstructions studied in the clinic (ASIR and Veo) have a strong potential to reduce CT dose (by up to 90%). However, by their mechanisms, they lead to a modification of the image appearance, with a change in image texture which may then affect the quality of the diagnosis. By comparing the results of the "clinical" and "physical" approaches, it was shown that a change in texture is related to a modification of the noise spectrum bandwidth. The NPS analysis makes it possible to anticipate or avoid a decrease in image quality. This project demonstrated that integrating these new statistical iterative reconstruction techniques can be complex and cannot be done on the basis of protocols using conventional reconstructions. The conclusions of this work and the image quality tools developed will be able to guide future studies in the field of image quality, such as texture analysis or model observers dedicated to CT.
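The "physical approach" above relies on metrics such as the NPS. A minimal sketch of the standard 2-D NPS estimator, run here on synthetic noise standing in for mean-subtracted uniform-phantom CT regions of interest (all values assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
pixel = 0.5          # pixel size [mm] (assumed)
n_roi, size = 16, 64  # number of uniform-phantom ROIs and ROI width [px]

# Synthetic stand-in for CT noise-only ROIs (sigma = 10 HU, assumed)
rois = rng.normal(0.0, 10.0, size=(n_roi, size, size))
rois -= rois.mean(axis=(1, 2), keepdims=True)  # remove per-ROI mean

# Standard 2-D NPS estimator: ensemble average of |DFT|^2, scaled by
# pixel area over the number of pixels
nps_2d = (np.abs(np.fft.fft2(rois))**2).mean(axis=0) * pixel**2 / size**2

# Sanity check (Parseval): integrating the NPS recovers the pixel variance
variance = nps_2d.sum() / (size**2 * pixel**2)
print(f"variance recovered from NPS: {variance:.1f} (noise sigma was 10)")
```

Comparing such spectra between filtered back-projection and iterative reconstructions is one way to quantify the texture change discussed in the text.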

Relevance: 80.00%

Abstract:

We introduce a microscopic method that determines quantitative optical properties beyond the optical diffraction limit and allows direct imaging of unstained living biological specimens. In established holographic microscopy, complex fields are measured using interferometric detection, allowing diffraction-limited phase measurements. Here, we show that non-invasive optical nanoscopy can achieve a lateral resolution of 90 nm by using a quasi-2π holographic detection scheme and complex deconvolution. We record holograms from different illumination directions on the sample plane and observe subwavelength tomographic variations of the specimen. Nanoscale apertures serve to calibrate the tomographic reconstruction and to characterize the imaging system by means of the coherent transfer function. This gives rise to realistic inverse filtering and guarantees true complex field reconstruction. The observations are shown for nanoscopic porous cell frustules (diatoms), for the direct study of bacteria (Escherichia coli), and for a time-lapse approach to explore the dynamics of living dendritic spines (neurones).
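The "realistic inverse filtering" step can be illustrated with a regularized (Wiener-type) complex deconvolution. This is a hedged sketch under assumed inputs: the complex field and the low-pass coherent transfer function (CTF) below are invented, whereas in the paper the CTF is characterized experimentally with nanoscale apertures:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128

# Hypothetical complex object field (stand-in for the specimen)
field = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

# Assumed smooth low-pass CTF on the discrete frequency grid
fx = np.fft.fftfreq(n)
ctf = np.exp(-4.0 * (fx[:, None]**2 + fx[None, :]**2)).astype(complex)

# Forward model: measured field = object spectrum shaped by the CTF
measured = np.fft.ifft2(ctf * np.fft.fft2(field))

# Wiener-regularized complex deconvolution (inverse filtering)
eps = 1e-3                                     # regularization (assumed)
wiener = np.conj(ctf) / (np.abs(ctf)**2 + eps)
recovered = np.fft.ifft2(wiener * np.fft.fft2(measured))

err = np.abs(recovered - field).mean() / np.abs(field).mean()
print(f"relative reconstruction error: {err:.4f}")
```

The regularization term keeps the filter bounded where the CTF is small, which is what makes the inversion "realistic" rather than a naive division by the transfer function.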

Relevance: 80.00%

Abstract:

T cell activation is triggered by the specific recognition of cognate peptides presented by MHC molecules. Altered peptide ligands are analogs of cognate peptides that have a high affinity for MHC molecules. Some of them induce complete T cell responses, i.e. they act as agonists, whereas others behave as partial agonists or even as antagonists. Here, we analyzed both early (intracellular Ca2+ mobilization) and late (interleukin-2 production) signal transduction events induced by a cognate peptide or a corresponding altered peptide ligand, using T cell hybridomas expressing or not expressing the CD8 alpha and beta chains. With a video imaging system, we showed that the intracellular Ca2+ response to an altered peptide ligand induces the appearance of a characteristic sustained intracellular Ca2+ concentration gradient, which can be detected shortly after T cell interaction with antigen-presenting cells. We also provide evidence that the same altered peptide ligand can be seen either as an agonist or as a partial agonist, depending on the presence of CD8beta in the CD8 co-receptor dimers expressed at the T cell surface.

Relevance: 80.00%

Abstract:

In recent years correlative microscopy, combining the power and advantages of different imaging systems, e.g., light, electrons, X-rays, NMR, etc., has become an important tool for biomedical research. Among all the possible combinations of techniques, light and electron microscopy have made an especially big step forward and are being implemented in more and more research labs. Electron microscopy profits from its high spatial resolution and the direct recognition of the cellular ultrastructure and identification of the organelles. It has, however, two severe limitations: the restricted field of view and the fact that no live imaging can be done. On the other hand, light microscopy has the advantage of live imaging, following a fluorescently tagged molecule in real time, and at lower magnifications its large field of view facilitates the identification and location of sparse individual cells in a large context, e.g., tissue. The combination of these two imaging techniques appears to be a valuable approach to dissect biological events at a submicrometer level. Light microscopy can be used to follow a labelled protein of interest, or a visible organelle such as mitochondria, over time; the sample is then fixed and exactly the same region is investigated by electron microscopy. The time resolution depends on the speed of penetration and fixation when chemical fixatives are used, and on the reaction time of the operator for cryo-fixation. Light microscopy can also be used to identify cells of interest, e.g., a special cell type in tissue or cells that have been modified by either transfection or RNAi, within a large population of non-modified cells. A further application is to find fluorescence labels in cells on a large section to reduce searching time in the electron microscope.
Multiple fluorescence labelling of a series of sections can be correlated with the ultrastructure of the individual sections to obtain 3D information on the distribution of the marked proteins: array tomography. More and more effort is being put into either converting a fluorescence label into an electron-dense product or preserving the fluorescence throughout the preparation for electron microscopy. Here, we review successful protocols and, where possible, try to extract common features to better understand the importance of the individual steps in the preparation. Furthermore, new instruments and software intended to ease correlative light and electron microscopy are discussed. Last but not least, we detail the approach we have chosen for correlative microscopy.

Relevance: 40.00%

Abstract:

The feasibility of three-dimensional (3D) whole-heart imaging of the coronary venous (CV) system was investigated, and the hypothesis that coronary magnetic resonance venography (CMRV) can be improved by using an intravascular contrast agent (CA) was tested. A simplified model of the contrast in T2-prepared steady-state free precession (SSFP) imaging was applied to calculate optimal T2-preparation durations for the various deoxygenation levels expected in venous blood. Non-contrast-agent (nCA)- and CA-enhanced images were compared for the delineation of the coronary sinus (CS) and its main tributaries. A quantitative analysis of the resulting contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR) in both approaches was performed. Precontrast visualization of the CV system was limited by the poor CNR between large portions of the venous blood and the surrounding tissue. Postcontrast, a significant increase in CNR between the venous blood and the myocardium (Myo) resulted in a clear delineation of the target vessels. The CNR improvement was 347% (P < 0.05) for the CS, 260% (P < 0.01) for the mid cardiac vein (MCV), and 430% (P < 0.05) for the great cardiac vein (GCV). The improvement in SNR was on average 155%, but was not statistically significant for the CS and the MCV. The signal of the Myo could be significantly reduced, to about 25% (P < 0.001).
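The CNR and percent-improvement figures above follow from a standard definition. A minimal sketch in which the signal and noise values are invented for illustration and only the formulas are standard:

```python
# Contrast-to-noise ratio between venous blood and myocardium, and the
# percent improvement after contrast (illustrative values, not study data).
def cnr(sig_vessel, sig_myo, noise_sd):
    """CNR = signal difference divided by the background noise SD."""
    return (sig_vessel - sig_myo) / noise_sd

pre = cnr(sig_vessel=38.0, sig_myo=30.0, noise_sd=4.0)   # assumed precontrast
post = cnr(sig_vessel=74.0, sig_myo=38.0, noise_sd=4.0)  # assumed postcontrast

improvement = (post - pre) / pre * 100.0
print(f"CNR pre = {pre:.1f}, post = {post:.1f}, improvement = {improvement:.0f}%")
```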

Relevance: 40.00%

Abstract:

Determining the groundwater flow paths of infiltrated river water is necessary for studying biochemical processes in the riparian zone, but their characterization is complicated by strong temporal and spatial heterogeneity. We investigated to what extent repeat 3D surface electrical resistance tomography (ERT) can be used to monitor the transport of a salt-tracer plume under close-to-natural gradient conditions. The aim was to estimate groundwater flow velocities and pathways at a site located within a riparian groundwater system adjacent to the perialpine Thur River in northeastern Switzerland. Our ERT time-lapse images provide constraints on the plume's shape, flow direction, and velocity, and allow the movement of the plume to be followed for 35 m. Although the hydraulic gradient is only 1.43 parts per thousand, the ERT time-lapse images demonstrate that the plume's center of mass and its front propagate with velocities of 2 × 10⁻⁴ m/s and 5 × 10⁻⁴ m/s, respectively. These velocities are compatible with groundwater resistivity monitoring data from two observation wells 5 m from the injection well. Five additional sensors in the 5-30 m distance range did not detect the plume. Comparison of the ERT time-lapse images with a groundwater transport model and time-lapse inversions of synthetic ERT data indicates that the movement of the plume can be described for the first 6 h after injection by a uniform transport model. Subsurface heterogeneity causes a change in the plume's direction and velocity at later times. Our results demonstrate the effectiveness of time-lapse 3D surface ERT for monitoring flow pathways in a challenging perialpine environment over larger scales than is practically possible with crosshole 3D ERT.
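As a back-of-the-envelope check on the reported velocities, the time for the tracer to cover the 35 m followed in the time-lapse images can be computed directly (velocities and distance taken from the abstract):

```python
# Travel time of the salt-tracer plume over the monitored flow path
v_center = 2e-4   # plume center-of-mass velocity [m/s]
v_front = 5e-4    # plume-front velocity [m/s]
distance = 35.0   # monitored flow path [m]

t_center_days = distance / v_center / 86400.0  # seconds -> days
t_front_days = distance / v_front / 86400.0
print(f"center of mass: ~{t_center_days:.1f} days, front: ~{t_front_days:.1f} days")
```

So the front crosses the imaged volume in under a day while the center of mass takes about two days, consistent with the plume being trackable over several time-lapse snapshots.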

Relevance: 40.00%

Abstract:

Three standard radiation qualities (RQA 3, RQA 5 and RQA 9) and two screens, Kodak Lanex Regular and Insight Skeletal, were used to compare the imaging performance and dose requirements of the new Kodak Hyper Speed G and the current Kodak T-MAT G/RA medical x-ray films. The noise equivalent quanta (NEQ) and detective quantum efficiencies (DQE) of the four screen-film combinations were measured at three gross optical densities and compared with the characteristics of the Kodak CR 9000 system with GP (general purpose) and HR (high resolution) phosphor plates. The new Hyper Speed G film has double the intrinsic sensitivity of the T-MAT G/RA film and a higher contrast in the high optical density range for comparable exposure latitude. By providing both high sensitivity and high spatial resolution, the new film significantly improves the compromise between dose and image quality. As expected, the new film has a higher noise level and a lower signal-to-noise ratio than the standard film, although in the high-frequency range this is compensated for by better resolution, giving better DQE results, especially at high optical density. Both screen-film systems outperform the phosphor plates in terms of MTF and DQE for standard imaging conditions (Regular screen at RQA 5 and RQA 9 beam qualities). At low energy (RQA 3), the CR system has a low-frequency DQE comparable to that of screen-film systems when used with a fine screen at low and middle optical densities, and a superior low-frequency DQE at high optical density.

Relevance: 40.00%

Abstract:

Un système efficace de sismique tridimensionnelle (3-D) haute résolution adapté à des cibles lacustres de petite échelle a été développé. Dans le Lac Léman, près de la ville de Lausanne, en Suisse, des investigations récentes en deux dimensions (2-D) ont mis en évidence une zone de faille complexe qui a été choisie pour tester notre système. Les structures observées incluent une couche mince (<40 m) de sédiments quaternaires sub-horizontaux, discordants sur des couches tertiaires de molasse pentées vers le sud-est. On observe aussi la zone de faille de « La Paudèze », qui sépare les unités de la Molasse du Plateau et de la Molasse Subalpine. Deux campagnes 3-D complètes, d'environ un kilomètre carré, ont été réalisées sur ce site de test. La campagne pilote (campagne I), effectuée en 1999 pendant 8 jours, a couvert 80 profils en utilisant une seule flûte. Pendant la campagne II (9 jours en 2001), le nouveau système à trois flûtes, bien paramétré pour notre objectif, a permis l'acquisition de données de très haute qualité sur 180 lignes CMP. Les améliorations principales incluent un système de navigation et de déclenchement de tirs grâce à un nouveau logiciel. Celui-ci comprend un contrôle qualité de la navigation du bateau en temps réel utilisant un GPS différentiel (dGPS) à bord et une station de référence près du bord du lac. De cette façon, les tirs peuvent être déclenchés tous les 5 mètres avec une erreur maximale non cumulative de 25 centimètres. Tandis que pour la campagne I la position des récepteurs de la flûte 48-traces a dû être déduite à partir des positions du bateau, pour la campagne II elles ont pu être calculées précisément (erreur <20 cm) grâce aux trois antennes dGPS supplémentaires placées sur des flotteurs attachés à l'extrémité de chaque flûte 24-traces. Il est maintenant possible de déterminer la dérive éventuelle de l'extrémité des flûtes (75 m) causée par des courants latéraux ou de petites variations de trajet du bateau.
De plus, deux bras télescopiques ont été construits pour maintenir les trois flûtes à une distance de 7.5 m les unes des autres, qui est la même distance que celle entre les lignes naviguées de la campagne II. En combinaison avec un espacement de récepteurs de 2.5 m, la dimension de chaque « bin » de données 3-D de la campagne II est de 1.25 m en ligne et 3.75 m latéralement. L'espacement plus grand en direction « in-line » par rapport à la direction « cross-line » est justifié par l'orientation structurale de la zone de faille, perpendiculaire à la direction « in-line ». L'incertitude sur la navigation et le positionnement pendant la campagne I, et le « binning » imprécis qui en résulte, se retrouvent dans les données sous forme d'une certaine discontinuité des réflecteurs. L'utilisation d'un canon à air à double chambre (qui permet d'atténuer l'effet bulle) a pu réduire l'aliasing observé dans les sections migrées en 3-D. Celui-ci était dû à la combinaison du contenu relativement haute fréquence (<2000 Hz) du canon à eau (utilisé à 140 bars et à 0.3 m de profondeur) et d'un pas d'échantillonnage latéral insuffisant. Le Mini G.I 15/15, utilisé à 80 bars et à 1 m de profondeur, est mieux adapté à la complexité de la cible, une zone faillée ayant des réflecteurs pentés jusqu'à 30°. Bien que ses fréquences ne dépassent pas les 650 Hz, cette source combine une pénétration du signal non aliasé jusqu'à 300 m dans le sol (par rapport aux 145 m pour le canon à eau) et une résolution verticale maximale de 1.1 m. Tandis que la campagne I a été acquise par groupes de plusieurs lignes de directions alternées, l'optimisation du temps d'acquisition du nouveau système à trois flûtes permet l'acquisition en géométrie parallèle, ce qui est préférable lorsqu'on utilise une configuration asymétrique (une source et un dispositif de récepteurs). Si on ne procède pas ainsi, les stacks sont différents selon la direction.
Toutefois, la configuration de flûtes, plus courtes que pour la compagne I, a réduit la couverture nominale, la ramenant de 12 à 6. Une séquence classique de traitement 3-D a été adaptée à l?échantillonnage à haute fréquence et elle a été complétée par deux programmes qui transforment le format non-conventionnel de nos données de navigation en un format standard de l?industrie. Dans l?ordre, le traitement comprend l?incorporation de la géométrie, suivi de l?édition des traces, de l?harmonisation des «bins» (pour compenser l?inhomogénéité de la couverture due à la dérive du bateau et de la flûte), de la correction de la divergence sphérique, du filtrage passe-bande, de l?analyse de vitesse, de la correction DMO en 3-D, du stack et enfin de la migration 3-D en temps. D?analyses de vitesse détaillées ont été effectuées sur les données de couverture 12, une ligne sur deux et tous les 50 CMP, soit un nombre total de 600 spectres de semblance. Selon cette analyse, les vitesses d?intervalles varient de 1450-1650 m/s dans les sédiments non-consolidés et de 1650-3000 m/s dans les sédiments consolidés. Le fait que l?on puisse interpréter plusieurs horizons et surfaces de faille dans le cube, montre le potentiel de cette technique pour une interprétation tectonique et géologique à petite échelle en trois dimensions. On distingue cinq faciès sismiques principaux et leurs géométries 3-D détaillées sur des sections verticales et horizontales: les sédiments lacustres (Holocène), les sédiments glacio-lacustres (Pléistocène), la Molasse du Plateau, la Molasse Subalpine de la zone de faille (chevauchement) et la Molasse Subalpine au sud de cette zone. Les couches de la Molasse du Plateau et de la Molasse Subalpine ont respectivement un pendage de ~8° et ~20°. La zone de faille comprend de nombreuses structures très déformées de pendage d?environ 30°. 
Des tests préliminaires avec un algorithme de migration 3-D en profondeur avant sommation et à amplitudes préservées démontrent que la qualité excellente des données de la campagne II permet l?application de telles techniques à des campagnes haute-résolution. La méthode de sismique marine 3-D était utilisée jusqu?à présent quasi-exclusivement par l?industrie pétrolière. Son adaptation à une échelle plus petite géographiquement mais aussi financièrement a ouvert la voie d?appliquer cette technique à des objectifs d?environnement et du génie civil.<br/><br/>An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km2. In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun in combination with real-time control on navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. 
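The distance-based triggering described above can be sketched in a few lines. The helper names and the planned-mark logic are illustrative assumptions (the in-house software is not described in detail in the text), but firing against absolute along-track marks, rather than against the distance since the last shot, is what keeps the positioning error non-cumulative:

```python
# Sketch: distance-based shot triggering from a dGPS track.
# Firing against *planned* along-track marks (0, 5, 10, ... m) keeps
# positioning errors non-cumulative; triggering "5 m after the last
# shot" would let small per-shot errors accumulate along the line.
import math

SHOT_INTERVAL = 5.0  # metres between planned shot points

def along_track(track):
    """Cumulative distance along a list of (x, y) dGPS fixes, in metres."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    return d

def shot_fixes(track):
    """Indices of the dGPS fixes at which the gun should fire."""
    d = along_track(track)
    fires, next_mark = [], 0.0
    for i, dist in enumerate(d):
        if dist >= next_mark:          # planned mark reached or passed
            fires.append(i)
            next_mark += SHOT_INTERVAL  # advance to the next absolute mark
    return fires
```

With dGPS fixes arriving at a few hertz and boat speeds of a few knots, the residual offset between the planned mark and the actual firing position stays bounded, consistent with the 25 cm figure quoted above.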
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be calculated accurately (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Because these antennas are towed 75 m behind the vessel, they allow the feathering of the streamer ends due to cross-line currents or small course variations to be determined. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The coarser cross-line sampling is justified by the known structural trend of the fault zone, which runs perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber, bubble-canceling air gun for Survey II. A 15/15 Mini G.I. air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
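The quoted bin sizes follow from the standard midpoint rule (a CMP bin is half the receiver or streamer spacing), and the 1.1 m figure is consistent with a quarter-wavelength (Rayleigh) criterion if one assumes a dominant frequency near 330 Hz, roughly half the 650 Hz upper limit of the source; that dominant frequency is an assumption, not a value stated in the text. A minimal sketch:

```python
# Sketch: CMP bin size and vertical resolution from the acquisition
# geometry. Standard rules of thumb: a midpoint bin is half the
# receiver spacing (in-line) / streamer spacing (cross-line), and
# vertical resolution is about a quarter wavelength (Rayleigh).
RECEIVER_SPACING = 2.5     # m, along each streamer
STREAMER_SPACING = 7.5     # m, between the three streamers
V_SHALLOW = 1450.0         # m/s, slowest interval velocity in the survey

inline_bin = RECEIVER_SPACING / 2     # midpoints fall every half spacing
crossline_bin = STREAMER_SPACING / 2

def rayleigh_resolution(velocity, dominant_freq):
    """Quarter-wavelength (Rayleigh) vertical resolution in metres."""
    return velocity / (4.0 * dominant_freq)

# Assumed dominant frequency of ~330 Hz (about half the 650 Hz upper
# limit of the Mini G.I. source) reproduces the ~1.1 m figure quoted
# for Survey II: 1450 / (4 * 330) ≈ 1.1 m.
```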
While Survey I was shot in patches of alternating directions, the reduced surveying time of the new three-streamer system allowed acquisition in a parallel geometry, which is preferable when using an asymmetric configuration (a single source and a receiver array); otherwise, the resulting stacks differ between opposite shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat and streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this analysis, interval velocities range from 1450 to 1650 m/s in the unconsolidated sediments and from 1650 to 3000 m/s in the consolidated sediments. The delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips of around 30° are visible. Preliminary tests with preserved-amplitude 3-D prestack depth migration demonstrate that the excellent data quality of Survey II allows the application of such sophisticated techniques even to high-resolution seismic surveys.
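The semblance spectra mentioned above rest on a standard coherence measure. A simplified single-sample sketch (not the processing code actually used, and omitting the usual smoothing over a short time window) is:

```python
# Sketch: semblance velocity analysis on a CMP gather.
# For each trial velocity, traces are sampled along the hyperbolic
# moveout curve and coherence is measured as semblance:
#   S = (sum over traces)^2 / (N * sum of squared amplitudes),
# which peaks (S -> 1) when the trial velocity flattens the event.
import numpy as np

def nmo_sample(gather, offsets, t0, v, dt):
    """Amplitude of each trace at its hyperbolic moveout time for one t0."""
    t = np.sqrt(t0**2 + (offsets / v) ** 2)       # two-way traveltime, s
    idx = np.round(t / dt).astype(int)            # nearest time sample
    idx = np.clip(idx, 0, gather.shape[1] - 1)
    return gather[np.arange(len(offsets)), idx]   # one amplitude per trace

def semblance(gather, offsets, t0, v, dt):
    a = nmo_sample(gather, offsets, t0, v, dt)
    den = len(a) * (a**2).sum()
    return a.sum() ** 2 / den if den > 0 else 0.0
```

Scanning trial velocities at each zero-offset time t0 and contouring the result yields one velocity spectrum per analysis location; the maxima give the stacking velocities picked during processing, from which interval velocities such as those quoted above can be derived.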
In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected by geological discontinuities at different depths and then travel back up to the surface, where they are recorded. The signals collected in this way provide information not only on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. In the case of sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition and any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, data were acquired along individual profiles, each providing a two-dimensional image of the subsurface. Images obtained in this way are only partially accurate, since they do not account for the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has given new impetus to the study of the subsurface. While the technique is now fully mature for imaging large geological structures both onshore and offshore, its adaptation to the scale of lakes and rivers has so far been the subject of only a few studies.

This thesis consisted of developing a seismic acquisition system similar to the one used for offshore petroleum prospecting, but adapted to lakes. It is accordingly smaller, lighter to deploy and, above all, delivers final images of much higher resolution: whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work resolves details on the order of one metre. The new system rests on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was developed specifically to control the navigation and to trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This allows the instruments to be positioned with an accuracy of about 20 cm.

To test our system, we chose an area of Lake Geneva near the city of Lausanne crossed by the Paudèze fault, which separates the Plateau Molasse from the Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images, using a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse within the fault zone and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible in vertical and horizontal seismic sections. The excellent quality of the data and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering.