33 results for System parameters
Abstract:
This study aims to design a wearable system for measuring the kinetics of multi-segment foot joints during long-distance walking and to investigate its suitability for clinical evaluations. The wearable system consisted of inertial sensors (3D gyroscopes and 3D accelerometers) on the toes, forefoot, hindfoot, and shank, and a plantar pressure insole. After calibration in a laboratory, 10 healthy elderly subjects and 12 patients with ankle osteoarthritis walked 50 m twice while wearing the system. Using inverse dynamics, 3D forces, moments, and power were calculated at the joints between the toes, forefoot, hindfoot, and shank. Compared with our previous estimates for a one-segment foot model, the sagittal and transverse moments and the power at the ankle joint, as measured with the multi-segment foot model, showed normalized RMS differences of less than 11%, 14%, and 13%, respectively, for healthy subjects, and 13%, 15%, and 14% for patients. As in our previous study, the coronal moments were not analyzed. Maxima-minima values of the anterior-posterior and vertical forces, sagittal moment, and power at the shank-hindfoot and hindfoot-forefoot joints were significantly different between patients and healthy subjects. Except for power, the inter-subject repeatability of these parameters was CMC > 0.90 for healthy subjects and CMC > 0.70 for patients. Repeatability of these parameters was lower for the forefoot-toes joint. The proposed measurement system estimated multi-segment foot joint kinetics with acceptable repeatability but showed differences compared with the estimates previously obtained for the one-segment foot model. These parameters could also distinguish patients from healthy subjects. Thus, this system is suggested for outcome evaluations of foot treatments.
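As a concrete reference for the normalized RMS difference used above to compare the multi-segment and one-segment foot models, the following minimal Python sketch shows one way such a curve comparison can be computed. It is not the authors' implementation: the synthetic moment curves and the normalization by the peak-to-peak range of the reference curve are assumptions for illustration only.

```python
import numpy as np

def normalized_rms_difference(reference, test):
    """RMS difference between two gait curves, expressed as a percentage of the
    peak-to-peak amplitude of the reference curve (assumed normalization)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    rms = np.sqrt(np.mean((test - reference) ** 2))
    return 100.0 * rms / (reference.max() - reference.min())

# Hypothetical sagittal ankle moment over one stance phase (arbitrary units),
# for a one-segment (reference) and a multi-segment foot model.
t = np.linspace(0.0, 1.0, 100)
one_segment = 1.4 * np.sin(np.pi * t) ** 2
multi_segment = 0.95 * one_segment + 0.02 * np.sin(4 * np.pi * t)
print(f"normalized RMS difference: {normalized_rms_difference(one_segment, multi_segment):.1f}%")
```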
Abstract:
Analysis of variance is commonly used in morphometry to ascertain differences in parameters between several populations. Failure to detect significant differences between populations (type II error) may be due to suboptimal sampling and lead to erroneous conclusions; the concept of statistical power allows one to avoid such failures by means of adequate sampling. Several examples from the morphometry of the nervous system show the use of the power of a hierarchical analysis of variance test for choosing appropriate sample and subsample sizes. In the first case, neuronal densities in the human visual cortex, we find that the number of observations has little effect. For dendritic spine densities in the visual cortex of mice and humans, the effect is somewhat larger. A substantial effect is shown in our last example, dendritic segment lengths in the monkey lateral geniculate nucleus. It is in the nature of the hierarchical model that sample size is always more important than subsample size. The relative weight to be attributed to subsample size thus depends on the magnitude of the between-observations variance relative to the between-individuals variance.
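To make the trade-off between sample size (individuals) and subsample size (observations per individual) concrete, here is a minimal Python sketch of a power calculation for comparing two populations under a nested sampling scheme. It uses a normal approximation rather than the exact noncentral F distribution of the hierarchical ANOVA, and all variance components and effect sizes below are made-up illustrations, not values from the examples in the paper.

```python
import math
from scipy import stats

def power_two_group_nested(delta, sigma_between, sigma_within,
                           n_individuals, m_subsamples, alpha=0.05):
    """Approximate power for detecting a mean difference `delta` between two
    populations, with n individuals per group and m subsamples per individual.
    The variance of a group mean is (sigma_between^2 + sigma_within^2 / m) / n."""
    var_group_mean = (sigma_between**2 + sigma_within**2 / m_subsamples) / n_individuals
    se_diff = math.sqrt(2.0 * var_group_mean)
    z_alpha = stats.norm.ppf(1.0 - alpha / 2.0)
    z = delta / se_diff
    return stats.norm.cdf(z - z_alpha) + stats.norm.cdf(-z - z_alpha)

# When between-individual variance dominates, extra subsamples add little power.
for m in (1, 5, 25):
    p = power_two_group_nested(delta=1.0, sigma_between=1.5, sigma_within=1.0,
                               n_individuals=8, m_subsamples=m)
    print(f"m = {m:2d} observations per individual -> power ~ {p:.2f}")
```

In this illustration the power rises only from about 0.20 to 0.26 as the number of observations per individual increases 25-fold, which is the qualitative point made above: when the between-individuals variance dominates, sample size matters far more than subsample size.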
Abstract:
With increased activity and reduced financial and human resources, there is a need for automation in clinical bacteriology. Initial processing of clinical samples includes repetitive and fastidious steps. These tasks are suitable for automation, and several instruments are now available on the market, including the WASP (Copan), Previ-Isola (BioMerieux), Innova (Becton-Dickinson) and Inoqula (KIESTRA) systems. These new instruments allow efficient and accurate inoculation of samples, comprising four main steps: (i) selecting the appropriate Petri dish; (ii) inoculating the sample; (iii) spreading the inoculum on agar plates to obtain, upon incubation, well-separated bacterial colonies; and (iv) accurately labelling and sorting each inoculated medium. The challenge for clinical bacteriologists is to determine which automated system is ideal for their own laboratory. Indeed, different solutions will be preferred according to the number and variety of samples, and to the types of sample that will be processed with the automated system. The final choice is difficult, because audits proposed by manufacturers risk being biased towards their own solutions, and because these automated systems may not be easily tested on site prior to the final decision, owing to the complexity of the computer connections between the laboratory information system and the instrument. This article therefore summarizes the main parameters that need to be taken into account when choosing the optimal system, and provides some clues to help clinical bacteriologists make their choice.
Abstract:
Developing a novel technique for the efficient, noninvasive clinical evaluation of bone microarchitecture remains both crucial and challenging. The trabecular bone score (TBS) is a new gray-level texture measurement that is applicable to dual-energy X-ray absorptiometry (DXA) images. Significant correlations between TBS and standard 3-dimensional (3D) parameters of bone microarchitecture have been obtained using a numerical simulation approach. The main objective of this study was to empirically evaluate such correlations in anteroposterior spine DXA images. Thirty dried human cadaver vertebrae were evaluated. Micro-computed tomography acquisitions of the bone pieces were obtained at an isotropic resolution of 93 μm. Standard parameters of bone microarchitecture were evaluated in a defined region within the vertebral body, excluding cortical bone. The bone pieces were measured on a Prodigy DXA system (GE Medical-Lunar, Madison, WI), using a custom-made positioning device and experimental setup. Significant correlations were detected between TBS and 3D parameters of bone microarchitecture, mostly independent of any correlation between TBS and bone mineral density (BMD). The greatest correlation was between TBS and connectivity density, with TBS explaining roughly 67.2% of the variance. Based on multivariate linear regression modeling, we established a model for interpreting the relationship between TBS and the 3D bone microarchitecture parameters. This model indicates that TBS adds value and differentiating power between samples with similar BMDs but different bone microarchitectures. These results show that bone microarchitectural status can be estimated from DXA imaging using TBS.
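The variance figure quoted above corresponds to the R² of a regression of TBS on a microarchitecture parameter. The following Python sketch shows how such a relationship can be fitted and summarized; the data are randomly generated placeholders (ranges and coefficients are invented), not the study's measurements.

```python
import numpy as np

# Hypothetical TBS and micro-CT connectivity density (Conn.D) values for 30 vertebrae.
rng = np.random.default_rng(0)
conn_d = rng.uniform(2.0, 8.0, 30)                      # 1/mm^3, made-up range
tbs = 0.9 + 0.06 * conn_d + rng.normal(0.0, 0.05, 30)   # invented linear relation plus noise

# Least-squares fit and the share of TBS variance explained (R^2), analogous to the
# roughly 67% reported for connectivity density in the study.
X = np.column_stack([np.ones_like(conn_d), conn_d])
beta, *_ = np.linalg.lstsq(X, tbs, rcond=None)
residuals = tbs - X @ beta
r_squared = 1.0 - residuals.var() / tbs.var()
print(f"slope = {beta[1]:.3f} per unit Conn.D, R^2 = {r_squared:.2f}")
```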
Abstract:
New Global Positioning System (GPS) receivers now allow a location on Earth to be measured at high frequency (5 Hz) with centimetric precision using the phase differential positioning method. We studied whether this technique is accurate enough to retrieve basic parameters of human locomotion. Eight subjects walked on an athletics track at four different imposed step frequencies (70-130 steps/min) plus one run at a freely chosen pace. Differential carrier phase localization between a fixed base station and the mobile antenna mounted on the walking person was calculated. In parallel, a triaxial accelerometer attached to the low back recorded body accelerations. The different parameters were averaged over 150 consecutive steps of each run for each subject (a total of 6000 steps analyzed). We observed a nearly perfect correlation between the average step duration measured by the accelerometer and by GPS (r=0.9998, N=40). Two parameters important for the calculation of the external work of walking were also analyzed, namely the vertical lift of the trunk and the velocity variation per step. For an average walking speed of 4.0 km/h, the average vertical lift and velocity variation were 4.8 cm and 0.60 km/h, respectively. The average intra-individual step-to-step variability at constant speed, which includes GPS errors and biological variation in gait style, was found to be 24.5% (coefficient of variation) for vertical lift and 44.5% for velocity variation. It is concluded that the GPS technique can provide useful biomechanical parameters for the analysis of an unlimited number of strides in an unconstrained, free-living environment.
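The per-step quantities reported above (vertical lift, velocity variation, and their coefficients of variation) can be computed directly from a 5 Hz position and velocity trace once step boundaries are known. The Python sketch below illustrates that bookkeeping on synthetic signals; the sinusoidal trunk motion, the noise level, and the assumption that step boundaries come from the accelerometer are illustrative choices, not the authors' processing chain.

```python
import numpy as np

FS = 5.0  # GPS sampling rate (Hz)

def per_step_parameters(height, speed, step_bounds):
    """Vertical lift (max - min height) and velocity variation (max - min speed)
    for each step, given sample indices of the step boundaries."""
    lifts, speed_variations = [], []
    for i0, i1 in zip(step_bounds[:-1], step_bounds[1:]):
        lifts.append(height[i0:i1].max() - height[i0:i1].min())
        speed_variations.append(speed[i0:i1].max() - speed[i0:i1].min())
    return np.array(lifts), np.array(speed_variations)

def coefficient_of_variation(x):
    return 100.0 * x.std(ddof=1) / x.mean()

# Synthetic 60 s walk at ~4 km/h with one step per second.
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 1.0 / FS)
height = 0.024 * np.sin(2 * np.pi * t) + rng.normal(0.0, 0.002, t.size)   # m
speed = 4.0 / 3.6 + 0.08 * np.sin(2 * np.pi * t)                          # m/s
bounds = np.arange(0, t.size, int(FS))          # assumed step boundaries (one per second)
lift, dv = per_step_parameters(height, speed, bounds)
print(f"mean vertical lift = {100 * lift.mean():.1f} cm (CV {coefficient_of_variation(lift):.1f}%)")
print(f"mean velocity variation = {3.6 * dv.mean():.2f} km/h")
```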
Abstract:
Visualization of the vascular systems of organs or of small animals is important for an assessment of basic physiological conditions, especially in studies that involve genetically manipulated mice. For a detailed morphological analysis of the vascular tree, it is necessary to demonstrate the system in its entirety. In this study, we present a new lipophilic contrast agent, Angiofil, for performing postmortem microangiography using microcomputed tomography. The new contrast agent was tested in 10 wild-type mice. Imaging of the vascular system revealed vessels down to the caliber of capillaries, and the digital three-dimensional data obtained from the scans allowed for virtual cutting, magnification, and scaling without destroying the sample. Using computer software, parameters such as vessel length and caliber could be quantified and remapped by color coding onto the surface of the vascular system. The liquid Angiofil is easy to handle and highly radio-opaque. Because of its lipophilic properties, it is retained intravascularly; this facilitates virtual vessel segmentation and yields an enduring signal, which is advantageous for repeated investigations or when samples need to be transported from the site of preparation to the place of analysis. These characteristics make Angiofil a promising novel contrast agent; when combined with microcomputed tomography, it has the potential to become a powerful method for rapid vascular phenotyping.
Abstract:
PURPOSE: The objective of this experiment is to establish a continuous postmortem circulation in the vascular system of porcine lungs and to evaluate the pulmonary distribution of the perfusate. This research is part of a broader project on the revascularization of Thiel-embalmed specimens. This technique enables teaching anatomy, practicing surgical procedures and doing research under lifelike circumstances. METHODS: After cannulation of the pulmonary trunk and the left atrium, the vascular system was flushed with paraffinum perliquidum (PP) through a heart-lung machine. A continuous circulation was then established using red PP, during which perfusion parameters were measured. The distribution of contrast-containing PP in the pulmonary circulation was visualized on computed tomography. Finally, the amount of leakage from the vascular system was calculated. RESULTS: Reperfusion of the vascular system was maintained for 37 min. The flow rate ranged between 80 and 130 ml/min throughout the experiment, with acceptable perfusion pressures (range: 37-78 mm Hg). Computed tomography imaging and 3D reconstruction revealed a diffuse vascular distribution of PP and a decreasing vascularization ratio in the cranial direction. A self-limiting leak (66.8% of the circulating volume) towards the tracheobronchial tree due to vessel rupture was also measured. CONCLUSIONS: PP enables circulation in an isolated porcine lung model with an acceptable pressure-flow relationship, resulting in excellent recruitment of the vascular system. Despite these promising results, rupture of vessel walls may cause leaks. Further exploration of the perfusion capacities of PP in other organs is necessary. Eventually, this could lead to the development of reperfused Thiel-embalmed human bodies, which have several applications.
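Two of the reported figures lend themselves to a quick back-of-the-envelope check in Python: the leaked fraction of the circulating volume and the hydraulic resistance implied by the pressure and flow ranges. The snippet below does only that arithmetic; the circulating volume and the pairing of the pressure and flow extremes are assumptions, since the abstract reports ranges rather than simultaneous values.

```python
# Illustrative arithmetic only; apart from the quoted 66.8%, 37-78 mm Hg and
# 80-130 ml/min, every number below is an assumption.
circulating_volume_ml = 1500.0                      # assumed circulating PP volume
leaked_volume_ml = 0.668 * circulating_volume_ml    # 66.8% leak reported above
print(f"leaked volume ~ {leaked_volume_ml:.0f} ml of {circulating_volume_ml:.0f} ml circulated")

# Hydraulic resistance at (assumed) matched extremes of the reported ranges.
for pressure_mmHg, flow_ml_min in ((37, 80), (78, 130)):
    resistance = pressure_mmHg / flow_ml_min        # mm Hg per (ml/min)
    print(f"{pressure_mmHg} mm Hg at {flow_ml_min} ml/min -> {resistance:.2f} mm Hg/(ml/min)")
```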
Abstract:
Many basic physiological functions exhibit circadian rhythmicity. These functional rhythms are driven, in part, by the circadian clock, a ubiquitous molecular mechanism allowing cells and tissues to anticipate regular environmental events and to prepare for them. This mechanism has been shown to play a particularly important role in maintaining stability (homeostasis) of internal conditions. Because the homeostatic equilibrium is continuously challenged by environmental changes, the role of the circadian clock is thought to consist of the anticipatory adjustment of homeostatic pathways in relation to the 24-h environmental cycle. The kidney is the principal organ responsible for the regulation of the composition and volume of extracellular fluids (ECF). Several major parameters of kidney function, including renal plasma flow (RPF), glomerular filtration rate (GFR), and tubular reabsorption and secretion, have been shown to exhibit strong circadian oscillations. Recent evidence suggests that the circadian clock can be involved in the generation of these rhythms through external circadian time cues (e.g. humoral factors, activity, and body temperature rhythms) or through the intrinsic renal circadian clock. Here, we discuss the role of renal circadian mechanisms in maintaining homeostasis of water and three major ions, namely Na(+), K(+) and Cl(-).
Abstract:
In order to distinguish dysfunctional gait, clinicians require reference values of gait parameters for each population. This study provided normative values for widely used parameters in more than 1400 able-bodied adults over the age of 65. We also measured foot clearance parameters (i.e., the height of the foot above the ground during the swing phase), which are crucial for understanding the complex relationship between gait and falls as well as obstacle negotiation strategies. We used a shoe-worn inertial sensor on each foot and previously validated algorithms to extract the gait parameters during 20 m walking trials in a corridor at a self-selected pace. We investigated differences in gait parameters between male and female participants while accounting for the effects of age and height. In addition, we examined the inter-relation of the clearance parameters with gait speed. The sample size and breadth of gait parameters provided in this study offer a unique reference resource for researchers.
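A male versus female comparison that accounts for age and height is typically done by adjusting for those covariates in a regression model. The Python sketch below shows that idea on synthetic data; the coefficients, sample size and noise levels are invented, and the study's actual statistical procedure may differ.

```python
import numpy as np

# Synthetic cohort: gait speed depends on age and height only, while height differs
# between sexes, so an unadjusted sex comparison would be misleading.
rng = np.random.default_rng(1)
n = 400
sex = rng.integers(0, 2, n)                                    # 0 = female, 1 = male
age = rng.uniform(65.0, 90.0, n)                               # years
height = 1.62 + 0.12 * sex + rng.normal(0.0, 0.06, n)          # m
speed = 1.45 - 0.006 * (age - 65.0) + 0.4 * (height - 1.70) + rng.normal(0.0, 0.1, n)

# Ordinary least squares with sex, age and height as regressors.
X = np.column_stack([np.ones(n), sex, age, height])
beta, *_ = np.linalg.lstsq(X, speed, rcond=None)
unadjusted = speed[sex == 1].mean() - speed[sex == 0].mean()
print(f"unadjusted male-female difference: {unadjusted:+.3f} m/s")
print(f"sex coefficient adjusted for age and height: {beta[1]:+.3f} m/s")
```

In this constructed example the unadjusted difference is almost entirely explained by height, so the adjusted sex coefficient is close to zero; that is the kind of distinction an adjustment for age and height is meant to expose.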
Abstract:
Introduction: Ankle arthrodesis (AD) and total ankle replacement (TAR) are typical treatments for ankle osteoarthritis (AO). Despite clinical interest, there is a lack of outcome evaluation using objective criteria. Gait analysis and plantar pressure assessment are appropriate for detecting pathologies in orthopaedics, but they are mostly used in the laboratory with few gait cycles. In this study, we propose an ambulatory device based on inertial and plantar pressure sensors to compare gait during long-distance trials between healthy subjects (H) and patients with AO or treated by AD or TAR. Methods: Our study included four groups: 11 patients with AO, 9 treated by TAR, 7 treated by AD and 6 control subjects. An ambulatory system (Physilog®, CH) was used for gait analysis; plantar pressure measurements were made using a portable insole (Pedar®-X, DE). The subjects were asked to walk 50 meters in two trials. The mean value and coefficient of variation of spatio-temporal gait parameters were calculated for each trial. Pressure distribution was analyzed in ten subregions of the foot. All parameters were compared among the four groups using a multi-level model-based statistical analysis. Results: Significant differences (p < 0.05) from controls were observed for AO patients in the maximum force in the medial hindfoot and forefoot and in the central forefoot. These differences were no longer significant in the TAR and AD groups. Cadence and speed in all pathologic groups differed significantly from controls. Both treatments showed a significant improvement in double support and stance. TAR decreased variability in speed, stride length and knee ROM. Conclusions: In spite of a small sample size, this study showed that ankle function after AO treatments can be evaluated objectively based on plantar pressure and spatio-temporal gait parameters measured during unconstrained walking outside the lab. The combination of these two ambulatory techniques provides a promising way to evaluate foot function in clinics.
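A multi-level model-based comparison across groups with repeated trials per subject can be expressed as a mixed-effects model with a random intercept per subject. The Python sketch below shows that structure on synthetic cadence data using statsmodels; the group sizes, effect sizes and noise levels are invented, and the exact model specification used in the study may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data set: four groups, several subjects per group, two walking trials each.
rng = np.random.default_rng(2)
rows = []
for group in ("H", "AO", "TAR", "AD"):
    for subject in range(8):
        subject_offset = rng.normal(0.0, 4.0)       # between-subject variability
        for trial in range(2):
            cadence = 110.0 - 8.0 * (group != "H") + subject_offset + rng.normal(0.0, 2.0)
            rows.append({"group": group, "subject": f"{group}{subject}", "cadence": cadence})
df = pd.DataFrame(rows)

# Random-intercept model: fixed effect of group, random effect of subject.
model = smf.mixedlm("cadence ~ C(group, Treatment(reference='H'))",
                    df, groups=df["subject"]).fit()
print(model.summary())
```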
Abstract:
The aim of this study was to develop an ambulatory system for three-dimensional (3D) knee kinematics evaluation that can be used outside a laboratory during long-term monitoring. To show the efficacy of this ambulatory system, knee function was analysed with it after an anterior cruciate ligament (ACL) lesion and after reconstructive surgery. The proposed system was composed of two 3D gyroscopes, fixed on the shank and on the thigh, and a portable data logger for signal recording. The measured parameters were the 3D mean range of motion (ROM), and the healthy knee was used as control. The precision of this system was first assessed against an ultrasound reference system, and the repeatability was also estimated. A clinical study was then performed on five unilateral ACL-deficient men (range: 19-36 years) prior to, and one year after, the surgery. The patients were evaluated with the IKDC score, and the kinematics measurements were carried out on a 30 m walking trial. The precision in comparison with the reference system was 4.4 degrees, 2.7 degrees and 4.2 degrees for flexion-extension, internal-external rotation, and abduction-adduction, respectively. The repeatability of the results for the three directions was 0.8 degrees, 0.7 degrees and 1.8 degrees. The averaged ROM of the five patients' healthy knee was 70.1 degrees (standard deviation (SD) 5.8 degrees), 24.0 degrees (SD 3.0 degrees) and 12.0 degrees (SD 6.3 degrees) for flexion-extension, internal-external rotation and abduction-adduction before surgery, and 76.5 degrees (SD 4.1 degrees), 21.7 degrees (SD 4.9 degrees) and 10.2 degrees (SD 4.6 degrees) one year after the reconstruction. The results for the pathologic knee were 64.5 degrees (SD 6.9 degrees), 20.6 degrees (SD 4.0 degrees) and 19.7 degrees (SD 8.2 degrees) during the first evaluation, and 72.3 degrees (SD 2.4 degrees), 25.8 degrees (SD 6.4 degrees) and 12.4 degrees (SD 2.3 degrees) during the second one. The performance of the system enabled us to detect modifications of knee function in the sagittal and transverse planes. Prior to the reconstruction, the ROM of the injured knee was lower in flexion-extension and internal-external rotation in comparison with the contralateral knee. One year after the surgery, four patients were classified as normal (A) and one as almost normal (B) according to the IKDC score, and changes in the kinematics of the five patients remained: lower flexion-extension ROM and higher internal-external rotation ROM in comparison with the contralateral knee. The 3D kinematics was changed after the ACL lesion and remained altered one year after the surgery.
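The core of the measurement is that the relative angular velocity between the thigh and shank gyroscopes, integrated over time, yields the joint angle whose excursion is the reported ROM. The Python sketch below reduces this to a single axis to show the principle; the real system works in 3D with sensor-to-segment alignment and drift handling, and the sampling rate and synthetic gait pattern below are assumptions.

```python
import numpy as np

FS = 200.0  # assumed gyroscope sampling rate (Hz)

def flexion_extension_rom(gyro_thigh, gyro_shank, fs=FS):
    """1-D illustration: integrate the shank-minus-thigh angular rate about the
    (assumed aligned) flexion axis and return the range of the resulting angle."""
    relative_rate = np.asarray(gyro_shank) - np.asarray(gyro_thigh)   # deg/s
    angle = np.cumsum(relative_rate) / fs                             # deg
    return angle.max() - angle.min()

# Synthetic 1 Hz gait-like flexion pattern with a 60 degree excursion.
t = np.arange(0.0, 10.0, 1.0 / FS)
knee_angle = 30.0 * (1.0 - np.cos(2 * np.pi * t))     # deg, oscillates between 0 and 60
knee_rate = np.gradient(knee_angle, 1.0 / FS)         # deg/s, what a gyroscope would see
rom = flexion_extension_rom(np.zeros_like(knee_rate), knee_rate)
print(f"estimated flexion-extension ROM ~ {rom:.1f} degrees")
```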
Abstract:
INTRODUCTION: In alpine skiing, chronometry is currently the most common tool to assess performance. It is widely used to rank competitors during races, as well as to manage athletes' training and to evaluate equipment. Usually, this measurement is accurately performed using timing cells. Nevertheless, these devices are too complex and expensive to allow chronometry of every gate crossing. On the other hand, differential GPS can be used for measuring gate crossing times (Waegli and Skaloud, 2007). However, this approach is complex (e.g. it requires recording gate positions with GPS) and is mainly used in research applications. The aim of the study was to propose a wearable system for timing gate crossings during alpine slalom (SL) skiing that is suitable for routine use. METHODS: The proposed system was composed of a 3D accelerometer (ADXL320®, Analog Devices, USA) placed at the sacrum of the athlete, a matrix of force sensors (Flexiforce®, Tekscan, USA) fixed on the right shin guard, and a data logger (Physilog®, BioAGM, Switzerland). The sensors were sampled at 500 Hz. The crossing times were calculated in two phases. First, the accelerometer was used to detect the turns from the maxima of the mediolateral peak acceleration. Then, the force sensors were used to detect the impacts with the gates from the maximum force variation. When no impact occurred, the detection was based on the acceleration and on features measured at the other gates. To assess the efficiency of the system, two different SL courses were each monitored twice for two World Cup level skiers, a male SL expert and a female downhill expert. RESULTS AND DISCUSSION: The combination of the accelerometer and force sensors allowed the gate crossing times to be clearly identified. When comparing the runs of the SL expert and the downhill expert, we noticed that the SL expert was faster. For example, for the first SL, the overall difference between the best run of each athlete was 5.47 s. The SL expert increased the time difference more slowly at the beginning of the course (0.27 s/gate) than at the end (0.34 s/gate). Furthermore, when comparing the runs of the SL expert, a maximum time difference of 20 ms at each gate was observed, showing the high repeatability of the SL expert. In contrast, the downhill expert, with a maximum time difference of 1 s at each gate, was clearly less repeatable. Neither skier was disturbed by the system. CONCLUSION: This study proposed a new wearable system, combining force and acceleration sensors, to automatically time gate crossings during alpine slalom skiing. The system was evaluated with two professional World Cup skiers and showed high potential. It could be extended to the timing of other parameters. REFERENCES: Waegli A, Skaloud J (2007). Inside GNSS, Spring, 24-34.
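The two-phase detection described in the METHODS (turns from mediolateral acceleration peaks, then gate impacts from the largest force variation near each turn) can be prototyped in a few lines. The Python sketch below does this on synthetic signals; the peak-spacing constraint, the ±0.25 s search window and the simulated signals are assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import find_peaks

FS = 500  # sampling rate (Hz), as stated above

def detect_gate_crossings(acc_ml, force, min_turn_interval_s=0.6):
    """Two-step sketch: turns from peaks in the mediolateral acceleration, then each
    gate crossing refined as the sample of largest force variation near the turn
    (the spacing constraint and search window are assumptions)."""
    turn_idx, _ = find_peaks(acc_ml, distance=int(min_turn_interval_s * FS))
    force_variation = np.abs(np.diff(force, prepend=force[0]))
    half_window = FS // 4                     # +/- 0.25 s around each detected turn
    crossings = []
    for i in turn_idx:
        lo, hi = max(0, i - half_window), min(len(force), i + half_window)
        crossings.append(lo + int(np.argmax(force_variation[lo:hi])))
    return np.array(crossings) / FS           # crossing times in seconds

# Synthetic 10 s run: a turn every 0.9 s and an impulsive shin-guard contact near each apex.
rng = np.random.default_rng(3)
t = np.arange(0.0, 10.0, 1.0 / FS)
acc_ml = np.sin(2 * np.pi * t / 0.9) + 0.1 * rng.normal(size=t.size)
force = np.zeros_like(t)
force[(np.arange(0.225, 10.0, 0.9) * FS).astype(int)] = 50.0
print(detect_gate_crossings(acc_ml, force)[:5])
```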
Abstract:
BACKGROUND: The risks of public exposure to a sudden decompression have until now been associated with civil aviation and, to a lesser extent, with diving activities. However, engineers are currently planning the use of low-pressure environments for underground transportation. This approach has been proposed for the future Swissmetro, a high-speed underground train designed for inter-urban links in Switzerland. HYPOTHESIS: The use of a low-pressure environment in an underground public transportation system must be considered carefully with regard to decompression risks. Indeed, owing to the enclosed environment, both the decompression kinetics and the safety measures may differ from aviation decompression cases. METHOD: A theoretical study of decompression risks was conducted at an early stage of the Swissmetro project. A three-compartment theoretical model, based on the physics of fluids, was implemented with flow-modelling software (Ithink 5.0). Simulations were conducted to analyze decompression scenarios over a wide range of parameters relevant to the Swissmetro main study. RESULTS: Simulation results cover a wide range from slow to explosive decompression, depending on the simulation parameters. Not surprisingly, the leaking orifice area has a tremendous impact on barotraumatic effects, while the tunnel pressure may significantly affect both hypoxic and barotraumatic effects. Calculations also showed that reducing the free space around the vehicle may significantly mitigate an accidental decompression. CONCLUSION: Numerical simulations are relevant for assessing decompression risks in the future Swissmetro system. The decompression model proved useful in assisting both design choices and safety management.
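For intuition about how strongly the leaking orifice area drives the pressure history, here is a deliberately simplified lumped-parameter sketch in Python: a single cabin venting into a low-pressure tunnel through one orifice, assuming isothermal air and a simple incompressible orifice law. It is not the study's three-compartment Ithink model (which would also need compressible or choked flow and hypoxia end-points), and every numerical value below is invented.

```python
import numpy as np

# Simplified single-orifice venting model (NOT the study's three-compartment model):
# isothermal air, incompressible orifice law, invented parameter values.
R_AIR = 287.0             # J/(kg K)
T = 293.0                 # K
CABIN_VOLUME = 60.0       # m^3 (assumed)
TUNNEL_PRESSURE = 10e3    # Pa (assumed partly evacuated tunnel)
CD = 0.6                  # orifice discharge coefficient (assumed)

def simulate(orifice_area_m2, p0=101e3, dt=0.01, t_end=300.0):
    """Integrate cabin pressure over time as air escapes through the leak orifice."""
    p = p0
    times, pressures = [0.0], [p]
    for step in range(int(t_end / dt)):
        rho = p / (R_AIR * T)                                  # cabin air density
        dp = max(p - TUNNEL_PRESSURE, 0.0)
        mdot = CD * orifice_area_m2 * np.sqrt(2.0 * rho * dp)  # leak mass flow (kg/s)
        p -= mdot * R_AIR * T / CABIN_VOLUME * dt              # isothermal ideal gas
        times.append((step + 1) * dt)
        pressures.append(p)
    return np.array(times), np.array(pressures)

for area in (0.001, 0.01, 0.1):                    # example leak areas (m^2)
    t, p = simulate(area)
    below = np.nonzero(p <= 55e3)[0]
    when = f"{t[below[0]]:.1f} s" if below.size else f"not within {t[-1]:.0f} s"
    print(f"orifice {area * 1e4:6.0f} cm^2 -> cabin reaches 55 kPa after {when}")
```

Even this toy model reproduces the qualitative result stated above: the time to reach a given cabin pressure scales roughly inversely with the leaking orifice area.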
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km².
In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, in combination with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and subsequent binning errors. Observed aliasing in the 3-D migration was due to insufficient lateral sampling combined with the relatively high frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines a penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks are different for the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and was complemented by two computer programs that format the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stack and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra.
According to this velocity analysis, interval velocities range from 1450-1650 m/s for the unconsolidated sediments and from 1650-3000 m/s for the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has almost exclusively been used by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.
Seismic reflection is a method for investigating the subsurface with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected at geological discontinuities at different depths and then travel back up to the surface, where they are recorded. The signals collected in this way not only provide information on the nature and geometry of the layers present, but also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformations or fractures, and thus their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles that provide a two-dimensional image of the subsurface. The images obtained in this way are only partially accurate, since they do not take into account the three-dimensional character of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new impetus to the study of the subsurface. Although it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work can resolve details on the order of one metre. The new system relies on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and receivers of the seismic waves) with great precision. Software was specially developed to control navigation and to trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This allows the instruments to be positioned with a precision of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to transform them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with respect to positioning. After processing, the data reveal several main seismic facies corresponding notably to lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in environmental and civil engineering studies.
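One step of the processing flow described above that is easy to make concrete is the assignment of each source-receiver midpoint to a 1.25 m × 3.75 m bin in the in-line/cross-line grid. The Python sketch below shows that geometry; the grid origin, azimuth and example coordinates are arbitrary placeholders, and the actual binning (and the bin harmonization that follows it) was done with dedicated processing software.

```python
import numpy as np

BIN_INLINE = 1.25                   # m, in-line bin dimension quoted above
BIN_XLINE = 3.75                    # m, cross-line bin dimension quoted above
GRID_ORIGIN = np.array([0.0, 0.0])  # local grid origin (assumed)
AZIMUTH_DEG = 30.0                  # in-line azimuth from the x-axis (assumed)

def bin_index(source_xy, receiver_xy):
    """Return (in-line, cross-line) bin indices for one source-receiver pair."""
    midpoint = (np.asarray(source_xy, float) + np.asarray(receiver_xy, float)) / 2.0
    a = np.radians(AZIMUTH_DEG)
    inline_axis = np.array([np.cos(a), np.sin(a)])
    xline_axis = np.array([-np.sin(a), np.cos(a)])
    offset = midpoint - GRID_ORIGIN
    return (int(np.floor(offset @ inline_axis / BIN_INLINE)),
            int(np.floor(offset @ xline_axis / BIN_XLINE)))

# Example: a shot and a receiver towed 30 m behind it along the sail line.
shot = (100.0, 60.0)
receiver = (shot[0] - 30.0 * np.cos(np.radians(AZIMUTH_DEG)),
            shot[1] - 30.0 * np.sin(np.radians(AZIMUTH_DEG)))
print(bin_index(shot, receiver))
```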
Abstract:
Summary for a general audience: The immune system associated with the gastrointestinal mucosa must be able to protect our organism against invasion by pathogens. At the same time, it must recognize as harmless such innocuous compounds as food or the billions of bacteria that reside in our gut. The work presented here addresses these two aspects, both essential to the proper functioning of our intestinal mucosa. In the first part, the protein called secretory component was studied for its protective properties against the viral pathogen rotavirus. The role of secretory component is to transport the antibodies we produce to mucosal surfaces. Beyond this well-known function, this protein might also be able to protect our organism against certain viruses. The working hypothesis was therefore that secretory component binds directly to the virus, thereby preventing it from infecting intestinal epithelial cells. Using various biochemical techniques, this hypothesis proved to be false: no interaction between secretory component and the virus could be observed and, logically, no protection took place. In contrast, secretory component binds to other pathogenic structures and thereby neutralises their harmful effects. Secretory component therefore actively participates in the protection of our mucosae, in addition to its transport role. The second part of this work dealt with the inappropriate reactions that the immune system sometimes mounts against a food, in other words, food allergies. A model of food allergy was therefore developed in the mouse, allowing several symptoms and factors linked to allergy to be measured. This model was then used to test the beneficial effects of a lactic acid bacterium, a so-called probiotic, on the development of allergy. It was observed that, under certain circumstances, administration of the lactic acid bacterium fully protected the mice against allergic reactions. The beneficial effect therefore depends on the probiotic but also on other factors that remain unknown to date. This study opens the way to understanding the mechanisms linked to food allergies and the impact that probiotic bacteria can have on this disease.
Abstract: The mucosal immune system associated with the gastrointestinal mucosa must efficiently distinguish between innocuous antigens, such as food proteins and commensal bacteria, and potentially infectious agents. The work presented here deals with these two essential aspects guaranteeing intestinal homeostasis. In the first part of this work, the protective properties of secretory component (SC) toward the pathogen rotavirus were investigated. SC, which allows the transport of polymeric immunoglobulins (Ig) to mucosal surfaces, is highly glycosylated with complex glycan structures. The abundance and the nature of these carbohydrates led us to speculate that SC might interact with rotavirus, which is known to bind target cells through glycan receptors. Using various biological and biochemical techniques, we demonstrated that SC did not interact with rotaviruses, nor did it protect epithelial cells from infection. However, SC was shown to bind to Clostridium difficile toxin A and to the enteropathogenic Escherichia coli adhesion molecule intimin in a glycan-dependent fashion. These interactions allow in vitro protection of epithelial cells using physiological concentrations of SC. These data identify SC as a microbial scavenger at mucosal surfaces which, in the context of secretory IgA, further enhances the neutralising properties of the complex. The second project was set in the domain of food allergy and aimed to test the modulatory functions of a probiotic strain of Lactobacillus paracasei toward allergic reactions. A model of food-mediated allergy was developed in the mouse using mucosal sensitisation. Several parameters associated with allergy were quantified after allergen challenge, including allergen-specific IgE, allergic signs such as diarrhea and temperature drop, and degranulation of mast cells. Administration of the probiotic strain was shown to completely protect mice from allergic reactions. However, these data were not reproduced, suggesting that unknown environmental factors are required for the protection mediated by the probiotic strain to occur. This study paves the way to understanding the mechanisms associated with allergy and highlights the tremendous complexity that probiotic treatments will have to face.