68 results for Two dimensional fuzzy fault tree analysis

at Université de Lausanne, Switzerland


Relevance: 100.00%

Abstract:

Several different sample preparation methods for two-dimensional electrophoresis (2-DE) analysis of Leishmania parasites were compared. From this work, we identified a solubilization method, using Nonidet P-40 as the detergent, that was simple to follow and produced 2-DE gels of high resolution and reproducibility.

Relevance: 100.00%

Abstract:

Differential protein labeling combined with 2-DE separation is an effective method for distinguishing differences in the protein composition of two or more protein samples. Here, we report on a sensitive infrared-based labeling procedure, adding a novel tool to the many existing labeling options. Defined amounts of newborn and adult mouse brain proteins and tubulin were exposed to the maleimide-conjugated infrared dyes DY-680 and DY-780, followed by 1- and 2-DE. The procedure allows less than 5 µg of cysteine-labeled protein mixtures to be detected (together with unlabeled proteins) in a single 2-DE step, with an LOD for individual proteins in the femtogram range; however, co-migration of unlabeled proteins and subsequent general protein stains are necessary for a precise comparison. Nevertheless, the most abundant thiol-labeled proteins, such as tubulin, were identified by MS, with cysteine-containing peptides influencing the accuracy of the identification score. Unfortunately, some infrared-labeled proteins were no longer detectable by Western blotting. In conclusion, differential thiol labeling with infrared dyes provides an additional tool for the detection of low-abundance cysteine-containing proteins and for rapid identification of differences in the protein composition of two sets of protein samples.

Relevance: 100.00%

Abstract:

Over the past decades, several sensitive post-electrophoretic stains have been developed for the identification of proteins in general, or for the specific detection of post-translational modifications such as phosphorylation, glycosylation or oxidation. Yet, for visualizing and quantifying protein differences, differential two-dimensional gel electrophoresis, termed DIGE, has become the method of choice for detecting differences between two sets of proteomes. The goal of this review is to evaluate the use of the most common non-covalent and covalent staining techniques in 2D electrophoresis gels, in order to obtain maximal information per electrophoresis gel and to identify potential biomarkers. We also discuss the use of detergents during covalent labeling and the identification of oxidative modifications, and review the influence of detergents on fingerprint analysis and MS/MS identification in relation to 2D electrophoresis.

Relevance: 100.00%

Abstract:

The choice of sample preparation protocol is a critical factor for isoelectric focusing, which in turn affects the two-dimensional gel result in terms of quality and protein species distribution. The optimal protocol varies depending on the nature of the sample for analysis and the properties of the constituent protein species (hydrophobicity, tendency to form aggregates, copy number) intended for resolution. This review explains the standard sample buffer constituents and illustrates a series of protocols for processing diverse samples for two-dimensional gel electrophoresis, including hydrophobic membrane proteins. Current methods for concentrating lower-abundance proteins by removing high-abundance proteins are also outlined. Finally, since protein staining is becoming increasingly incorporated into the sample preparation procedure, we describe the principles and applications of current (and future) pre-electrophoretic labelling methods.

Relevance: 100.00%

Abstract:

In this study we have demonstrated the potential of two-dimensional electrophoresis (2DE)-based technologies as tools for characterization of the Leishmania proteome (the expressed protein complement of the genome). Standardized neutral range (pH 5-7) proteome maps of Leishmania (Viannia) guyanensis and Leishmania (Viannia) panamensis promastigotes were reproducibly generated by 2DE of soluble parasite extracts, which were prepared using lysis buffer containing urea and Nonidet P-40 detergent. The Coomassie blue and silver nitrate staining systems both yielded good resolution and representation of protein spots, enabling the detection of approximately 800 and 1,500 distinct proteins, respectively. Several reference protein spots common to the proteomes of all parasite species/strains studied were isolated and identified by peptide mass spectrometry (LC-ES-MS/MS) and bioinformatics approaches as members of the heat shock protein family, ribosomal protein S12, kinetoplast membrane protein 11 and a hypothetical Leishmania-specific 13 kDa protein of unknown function. Immunoblotting of Leishmania protein maps using a monoclonal antibody resulted in the specific detection of the 81.4 kDa and 77.5 kDa subunits of paraflagellar rod proteins 1 and 2, respectively. Moreover, differences in protein expression profiles between distinct parasite clones were reproducibly detected through comparative proteome analyses of paired maps using image analysis software. These data illustrate the resolving power of 2DE-based proteome analysis. The production and basic characterization of good-quality Leishmania proteome maps provides an essential first step towards comparative protein expression studies aimed at identifying the molecular determinants of parasite drug resistance and virulence, as well as discovering new drug and vaccine targets.

Relevance: 100.00%

Abstract:

An epidemic model is formulated as a reaction-diffusion system in which spatial pattern formation is driven by cross-diffusion. The reaction terms describe the local dynamics of the susceptible and infected species, whereas the diffusion terms account for the spatial distribution dynamics. For both self-diffusion and cross-diffusion, nonlinear constitutive assumptions are suggested. To simulate the pattern formation, two finite volume formulations are proposed, which employ a conservative and a non-conservative discretization, respectively. An efficient simulation is obtained by a fully adaptive multiresolution strategy. Numerical examples illustrate the impact of the cross-diffusion on the pattern formation.
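The abstract describes cross-diffusion-driven pattern formation solved with finite volume schemes; the toy sketch below (Python) is not the paper's scheme, but it illustrates the basic ingredients on a uniform grid: self-diffusion of susceptible (S) and infected (I) densities, a linear cross-diffusion term coupling S to the spatial distribution of I, and local SI kinetics, advanced with an explicit Euler step. The grid size, parameter values and the specific constitutive laws are assumptions chosen only for illustration.

```python
import numpy as np

def laplacian(u, h):
    # 5-point Laplacian with replicated (approximately no-flux) boundaries
    up = np.pad(u, 1, mode="edge")
    return (up[:-2, 1:-1] + up[2:, 1:-1] + up[1:-1, :-2] + up[1:-1, 2:] - 4 * u) / h**2

def step(S, I, h, dt, dS=1.0, dI=0.5, dSI=2.0, beta=3.0, gamma=1.0):
    """One explicit Euler step: self-diffusion, linear cross-diffusion, SI kinetics."""
    infection = beta * S * I / (S + I + 1e-12)          # standard-incidence infection term
    S_new = S + dt * (dS * laplacian(S, h) + dSI * laplacian(I, h) - infection)
    I_new = I + dt * (dI * laplacian(I, h) + infection - gamma * I)
    return S_new, I_new

# usage: small random perturbation of a uniform susceptible state
n, h, dt = 64, 1.0, 1e-3
S = np.ones((n, n))
I = 0.01 * np.random.default_rng(1).random((n, n))
for _ in range(1000):
    S, I = step(S, I, h, dt)
print(S.mean(), I.mean())
```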

Relevance: 100.00%

Abstract:

PURPOSE: At 7 Tesla (T), conventional static field (B0) projection mapping techniques, e.g., FASTMAP and FASTESTMAP, lead to elevated specific absorption rates (SAR), requiring longer total acquisition times (TA). In this work, the series of adiabatic pulses needed for slab selection in FASTMAP is replaced by a single two-dimensional radiofrequency (2D-RF) pulse to minimize TA while ensuring equal shimming performance. METHODS: Spiral gradients and 2D-RF pulses were designed to excite thin slabs in the small tip angle regime. The corresponding selection profile was characterized in phantoms and in vivo. After optimization of the shimming protocol, the spectral linewidths obtained after 2D localized shimming were compared with conventional techniques and with published values (Emir et al., NMR Biomed 2012;25:152-160) in six different brain regions. RESULTS: Results in healthy volunteers show no significant difference (P > 0.5) between the spectroscopic linewidths obtained with the adiabatic sequence (TA = 4 min) and the new low-SAR, time-efficient FASTMAP sequence (TA = 42 s). The SAR can be reduced by three orders of magnitude and the TA shortened sixfold without impact on the shimming performance or the quality of the resulting spectra. CONCLUSION: Multidimensional pulses can be used to minimize the RF energy and time spent on automated shimming using projection mapping at high field. Magn Reson Med, 2014. © 2014 Wiley Periodicals, Inc.
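For readers unfamiliar with the small-tip-angle design mentioned in the METHODS, the standard k-space analysis of Pauly and colleagues states that the excited transverse magnetization is approximately the Fourier transform of the RF energy deposited along the excitation k-space trajectory traced by the (here spiral) gradients; the notation below is that general textbook formulation, not anything specific to this paper:

\[
M_{xy}(\mathbf{r}) \;\approx\; i\,\gamma\, M_0 \int_0^T B_1(t)\, e^{\,i\,\mathbf{r}\cdot\mathbf{k}(t)}\,\mathrm{d}t,
\qquad
\mathbf{k}(t) \;=\; -\gamma \int_t^T \mathbf{G}(s)\,\mathrm{d}s .
\]

Designing the 2D-RF pulse then amounts to sampling the Fourier transform of the desired thin-slab profile along the spiral trajectory \(\mathbf{k}(t)\) to obtain \(B_1(t)\).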

Relevance: 100.00%

Abstract:

The purpose of this study was to clinically validate a new two-dimensional preoperative planning software for cementless total hip arthroplasty (THA). Manual and two-dimensional computer-assisted planning were compared by an independent observer for each of the 30 patients with osteoarthritis who underwent THA. The study showed no statistically significant differences between the two preoperative plans in terms of stem size and neck length (<1 size) or hip rotation center position (<5 mm). Two-dimensional computer-assisted preoperative planning provided successful results comparable to those of the manual procedure, while allowing the surgeon to easily simulate various stem designs.

Relevance: 100.00%

Abstract:

Probabilistic inversion methods based on Markov chain Monte Carlo (MCMC) simulation are well suited to quantify parameter and model uncertainty in nonlinear inverse problems. Yet, applying such methods to CPU-intensive forward models can be a daunting task, particularly if the parameter space is high dimensional. Here, we present a 2-D pixel-based MCMC inversion of plane-wave electromagnetic (EM) data. Using synthetic data, we investigate how model parameter uncertainty depends on the model structure constraints, considering different norms for both the likelihood function and the model constraints, and study the added benefits of joint inversion of EM and electrical resistivity tomography (ERT) data. Our results demonstrate that model structure constraints are necessary to stabilize the MCMC inversion results of a highly discretized model. These constraints decrease model parameter uncertainty and facilitate model interpretation. A drawback is that these constraints may lead to posterior distributions that do not fully include the true underlying model, because some of its features exhibit a low sensitivity to the EM data and hence are difficult to resolve. This problem can be partly mitigated if the plane-wave EM data are augmented with ERT observations. The hierarchical Bayesian inverse formulation introduced and used herein is able to successfully recover the probabilistic properties of the measurement data errors and a model regularization weight. Application of the proposed inversion methodology to field data from an aquifer demonstrates that the posterior mean model realization is very similar to that derived from a deterministic inversion with similar model constraints.
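As a concrete illustration of the kind of sampler described above, the sketch below (Python) runs a Metropolis-Hastings chain over a pixel-parameterized model with a Gaussian data likelihood and a first-difference smoothness constraint whose weight is itself sampled, mimicking a hierarchical treatment of the regularization weight. The linear toy forward operator, the flat hyperprior on the weight and all tuning values are assumptions for illustration only, not the paper's plane-wave EM formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_data = 50, 30
G = rng.standard_normal((n_data, n_pix)) / np.sqrt(n_pix)   # toy linear forward operator
m_true = np.sin(np.linspace(0, 3 * np.pi, n_pix))
sigma = 0.05
d_obs = G @ m_true + sigma * rng.standard_normal(n_data)

def log_post(m, lam):
    """Gaussian likelihood + smoothness prior p(m | lam), with a flat hyperprior on lam."""
    misfit = d_obs - G @ m
    rough = np.diff(m)                                       # model-structure constraint
    return (-0.5 * misfit @ misfit / sigma**2
            - 0.5 * lam * rough @ rough
            + 0.5 * len(rough) * np.log(lam))                # prior normalization in lam

m, lam = np.zeros(n_pix), 1.0
lp = log_post(m, lam)
samples = []
for it in range(20000):
    m_prop = m + 0.05 * rng.standard_normal(n_pix)           # perturb all pixels
    lam_prop = lam * np.exp(0.1 * rng.standard_normal())     # log random walk on the weight
    lp_prop = log_post(m_prop, lam_prop)
    # Hastings term log(lam_prop/lam) accounts for the multiplicative lam proposal
    if np.log(rng.random()) < lp_prop - lp + np.log(lam_prop / lam):
        m, lam, lp = m_prop, lam_prop, lp_prop
    if it % 100 == 0:
        samples.append(m.copy())
print(len(samples), lam)
```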

Relevance: 100.00%

Abstract:

PURPOSE: To improve coronary magnetic resonance angiography (MRA) by combining a two-dimensional (2D) spatially selective radiofrequency (RF) pulse with a T2-preparation module ("2D-T2-Prep"). METHODS: An adiabatic T2-Prep was modified so that the first and last pulses differ in spatial selectivity. The first RF pulse was replaced by a 2D pulse, such that a pencil-beam volume is excited. The last RF pulse remains nonselective, thus restoring the T2-prepared pencil-beam while tipping the (formerly longitudinal) magnetization outside the pencil-beam into the transverse plane, where it is then spoiled. Thus, only a cylinder of T2-prepared tissue remains for imaging. Numerical simulations were followed by phantom validation and in vivo coronary MRA, where the technique was quantitatively evaluated. Reduced field-of-view (rFoV) images were similarly studied. RESULTS: In vivo, the full field-of-view 2D-T2-Prep significantly improved vessel sharpness as compared with the conventional T2-Prep, without adversely affecting the signal-to-noise ratio (SNR) or contrast-to-noise ratio (CNR). It also reduced respiratory motion artifacts. In rFoV images, the SNR, CNR, and vessel sharpness decreased, although scan time was reduced by 60%. CONCLUSION: Compared with the conventional T2-Prep, the 2D-T2-Prep improves vessel sharpness and decreases respiratory ghosting while preserving both SNR and CNR. It can also be used to acquire rFoV images for accelerated data acquisition.

Relevance: 100.00%

Abstract:

To determine the feasibility of data transfer, an interlaboratory comparison was conducted on colon carcinoma cell line (DLD-1) proteins resolved by two-dimensional polyacrylamide gel electrophoresis either on small (6 × 7 cm) or large (16 × 18 cm) gels. The gels were silver-stained and scanned by laser densitometry, and the image obtained was analyzed using Melanie software. The number of spots detected was 1337 ± 161 vs. 2382 ± 176 for small vs. large format gels, respectively. After gel calibration using landmarks determined with pI and Mr markers, large- and small-format gels were matched and 712 ± 36 proteins were found on both types of gels. Having performed accurate gel matching, it was possible to acquire additional information by accessing a 2-D PAGE reference database (http://www.expasy.ch/cgibin/map2/def?DLD1_HUMAN). Thus, the difference in gel size is not an obstacle for data transfer. This will facilitate exchanges between laboratories and consultation of existing databases.
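The gel-calibration and matching step described above can be pictured with a small sketch (Python); it is not the Melanie algorithm, only a minimal illustration in which an affine transform is fitted by least squares to landmark spots (e.g. the pI/Mr markers) shared by both gel formats, and remaining spots are then paired by nearest neighbour. All coordinates and the 2 mm tolerance are invented values.

```python
import numpy as np

# landmark spot positions (x, y in mm) seen on both gel formats (invented values)
small_lm = np.array([[10.0, 12.0], [35.0, 15.0], [20.0, 40.0], [45.0, 50.0]])
large_lm = np.array([[25.0, 28.0], [88.0, 36.0], [50.0, 98.0], [113.0, 123.0]])

# least-squares affine fit: [x_large, y_large] ~= [x_small, y_small, 1] @ A
X = np.hstack([small_lm, np.ones((len(small_lm), 1))])
A, *_ = np.linalg.lstsq(X, large_lm, rcond=None)

def match(small_spots, large_spots, tol=2.0):
    """Map small-gel spots into large-gel coordinates and pair them within tol (mm)."""
    mapped = np.hstack([small_spots, np.ones((len(small_spots), 1))]) @ A
    pairs = []
    for i, p in enumerate(mapped):
        d = np.linalg.norm(large_spots - p, axis=1)
        j = int(np.argmin(d))
        if d[j] < tol:
            pairs.append((i, j))
    return pairs

print(match(small_lm, large_lm))   # the landmarks themselves should all pair up
```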

Relevance: 100.00%

Abstract:

An efficient high-resolution (HR) three-dimensional (3D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, the offshore extension of a complex fault zone well mapped on land was chosen for testing our system. A preliminary two-dimensional seismic survey indicated structures that include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie south-east-dipping Tertiary Molasse beds and a major fault zone (Paudeze Fault Zone) that separates Plateau and Subalpine Molasse (SM) units. A 3D survey was conducted over this test site using a newly developed three-streamer system. It provided high-quality data, with penetration of non-aliased signal to depths of 300 m below the water bottom for dips up to 30° and a maximum vertical resolution of 1.1 m. The data were subjected to a conventional 3D processing sequence that included post-stack time migration. Tests with 3D pre-stack depth migration showed that such techniques can be applied to HR seismic surveys. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3D geometries can be distinguished. Three fault surfaces and the top of a Molasse surface were mapped in 3D. Analysis of the geometry of these surfaces and their relative orientation suggests that pre-existing structures within the Plateau Molasse (PM) unit influenced later faulting between the Plateau and Subalpine Molasse. In particular, a change in strike of the PM bed dip may indicate a fold formed by a regional stress regime whose orientation differed from the one responsible for the creation of the Paudeze Fault Zone. This structure might have later influenced the local stress regime and caused the curved shape of the Paudeze Fault in our surveyed area.

Relevance: 100.00%

Abstract:

An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this same test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house navigation software that automatically fires the gun, combined with real-time control of navigation quality using differential GPS (dGPS) onboard and a reference base near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm. Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Towed at a distance of 75 m behind the vessel, they allow the determination of feathering due to cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval of Survey I. With a receiver spacing of 2.5 m, the bin dimension of the 3-D data of Survey II is 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The greater cross-line versus in-line spacing is justified by the known structural trend of the fault zone perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migration was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m. While Survey I was shot in patches of alternating directions, the optimized surveying time of the new three-streamer system allowed acquisition in parallel geometry, which is preferable when using an asymmetric configuration (single source and receiver array); otherwise, the resulting stacks differ for the opposite directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry standards. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat/streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this velocity analysis, interval velocities range from 1450-1650 m/s in the unconsolidated sediments and from 1650-3000 m/s in the consolidated sediments. Delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse and its thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys. In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.

Seismic reflection is a method of subsurface investigation with very high resolving power. It consists of sending vibrations into the ground and recording the waves that reflect off geological discontinuities at different depths and then travel back to the surface, where they are recorded. The signals collected in this way provide information on the nature and geometry of the layers present and also allow a geological interpretation of the subsurface. For example, in the case of sedimentary rocks, seismic reflection profiles make it possible to determine their mode of deposition, any deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, seismic reflection data were acquired along individual profiles that provide a two-dimensional image of the subsurface. Such images are only partially accurate, since they do not account for the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismics has brought new life to the study of the subsurface. While it is now fully mastered for imaging large geological structures both onshore and offshore, its adaptation to the lacustrine or fluvial scale has so far been the subject of only a few studies. This thesis work consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes. It is therefore smaller, lighter to deploy and, above all, delivers much higher resolution in the final images. Whereas the petroleum industry is often limited to a resolution on the order of ten metres, the instrument developed in this work makes it possible to see details on the order of one metre. The new system rests on the ability to record seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the seismic source and receivers) with great precision. Software was developed specifically to control navigation and trigger the shots of the seismic source using differential GPS (dGPS) receivers on the boat and at the end of each streamer, which allows the instruments to be positioned with an accuracy of about 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse and Subalpine Molasse units. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic recordings were then processed to turn them into interpretable images, applying a 3-D processing sequence specially adapted to our data, in particular regarding positioning. After processing, the data reveal several main seismic facies corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone and the Subalpine Molasse south of that zone. The detailed 3-D geometry of the faults is visible in vertical and horizontal seismic sections. The excellent data quality and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to applications in environmental and civil engineering contexts.
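Two of the processing steps listed above, spherical divergence correction and bandpass filtering, are simple enough to sketch; the snippet below (Python) applies them to a synthetic trace. It is not the processing code used for these surveys, and the sampling interval, water velocity and corner frequencies are assumed values chosen merely to be consistent with the <650 Hz air-gun bandwidth quoted in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

dt = 0.25e-3                       # 0.25 ms sampling (assumed), i.e. 4 kHz
t = np.arange(0, 0.4, dt)          # 400 ms of data
rng = np.random.default_rng(0)
trace = rng.standard_normal(t.size)   # stand-in for a recorded trace

# Spherical divergence correction: compensate geometric spreading with a t * v(t)^2 gain;
# a constant 1500 m/s water velocity is assumed here.
v = 1500.0
trace_div = trace * (v ** 2) * np.maximum(t, dt)

# Zero-phase Butterworth bandpass keeping the usable source band (assumed 40-650 Hz).
sos = butter(4, [40.0, 650.0], btype="bandpass", fs=1.0 / dt, output="sos")
trace_filt = sosfiltfilt(sos, trace_div)
print(trace_filt.shape)
```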

Relevance: 100.00%

Abstract:

INTRODUCTION: Hip fractures are responsible for excessive mortality, decreasing the 5-year survival rate by about 20%. From an economic perspective, they represent a major source of expense, with direct costs of hospitalization, rehabilitation, and institutionalization. The incidence rate increases sharply after the age of 70, but it can be reduced in women aged 70-80 years by therapeutic interventions. Recent analyses suggest that the most efficient strategy is to implement such interventions in women at the age of 70 years. As several guidelines recommend bone mineral density (BMD) screening of postmenopausal women with clinical risk factors, our objective was to assess the cost-effectiveness of two screening strategies applied to elderly women aged 70 years and older. METHODS: A cost-effectiveness analysis was performed using decision-tree analysis and a Markov model. Two alternative strategies, one measuring BMD in all women and one measuring BMD only in those having at least one risk factor, were compared with the reference strategy of no screening. Cost-effectiveness ratios were expressed as cost per year gained without hip fracture. Most probabilities were based on data observed in the EPIDOS, SEMOF and OFELY cohorts. RESULTS: In this model, which is mostly based on observed data, the strategy "screen all" was more cost-effective than "screen women at risk". For one woman screened at the age of 70 and followed for 10 years, the incremental (additional) cost-effectiveness ratios of these two strategies compared with the reference were 4,235 euros and 8,290 euros, respectively. CONCLUSION: The results of this model, under the assumptions described in the paper, suggest that in women aged 70-80 years, screening all women with dual-energy X-ray absorptiometry (DXA) would be more effective than no screening or screening only women with at least one risk factor. Cost-effectiveness studies based on decision-analysis trees may be useful tools for helping decision makers, and further models based on different assumptions should be performed to improve the level of evidence on the cost-effectiveness ratios of the usual screening strategies for osteoporosis.
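To make the incremental cost-effectiveness ratio reported above concrete, the sketch below (Python) compares two strategies with a very simple yearly cohort model of hip fracture and reports cost per hip-fracture-free year gained. All probabilities and costs are invented placeholders, not values from the EPIDOS, SEMOF or OFELY cohorts, and the model is far simpler than the Markov model used in the study.

```python
def cohort(p_fracture_per_year, screening_cost, fracture_cost, years=10):
    """Return (expected cost, expected fracture-free years) for one woman."""
    alive_unfractured = 1.0
    cost, ffy = screening_cost, 0.0
    for _ in range(years):
        ffy += alive_unfractured                     # credit one fracture-free year
        fractures = alive_unfractured * p_fracture_per_year
        cost += fractures * fracture_cost
        alive_unfractured -= fractures               # fractured women leave the state
    return cost, ffy

# Hypothetical inputs: screening plus treatment lowers the annual fracture probability.
cost_ref, ffy_ref = cohort(p_fracture_per_year=0.012, screening_cost=0.0,
                           fracture_cost=20000.0)
cost_scr, ffy_scr = cohort(p_fracture_per_year=0.008, screening_cost=900.0,
                           fracture_cost=20000.0)
icer = (cost_scr - cost_ref) / (ffy_scr - ffy_ref)   # euros per fracture-free year gained
print(f"ICER = {icer:.0f} euros per hip-fracture-free year gained")
```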