942 results for Variable Sampling Interval Control Charts
Abstract:
In humans, NK receptors are expressed by natural killer cells and some T cells, the latter of which are preferentially alphabetaTCR+ CD8+ cytolytic T lymphocytes (CTL). In this study we analyzed the expression of nine NK receptors (p58.1, p58.2, p70, p140, ILT2, NKRP1A, ZIN176, CD94 and CD94/NKG2A) in PBL from both healthy donors and melanoma patients. The percentages of NK receptor-positive T cells (NKT cells) varied widely, and this variation was greater among individual patients than among individual healthy donors. In all individuals, the NKT cells were preferentially CD28-, and a significant correlation was found between the percentage of CD28- T cells and the percentage of NK receptor+ T cells. Based on these data and the known activated phenotype of CD28- T cells, we propose that the CD28- CD8+ T cell pool represents or contains the currently active CTL population, and that the frequent expression of NK receptors reflects regulatory mechanisms modulating the extent of CTL effector function. Preliminary results indicate that some tumor antigen-specific T cells may indeed be CD28- and express NK receptors in vivo.
Abstract:
Blowing and drifting snow is a major concern for transportation efficiency and road safety in regions where these phenomena are common. One common way to mitigate snowdrifts on roadways is to install plastic snow fences. Correct design of snow fences is critical for road safety and for keeping roads open during winter in the US Midwest and other states affected by large snow events, as well as for keeping the costs of snow accumulation on roads and of road repair to a minimum. Of critical importance for road safety is protection against snow drifting in regions with narrow rights of way, where standard fences cannot be deployed at the recommended distance from the road. Designing snow fences requires sound engineering judgment and a thorough evaluation of the potential for snow blowing and drifting at the construction site. The evaluation includes site-specific design parameters typically obtained with semi-empirical relations characterizing the local transport conditions. Among the critical parameters involved in fence design, and in the assessment of post-construction efficiency, is the quantification of snow accumulation at fence sites. The present study proposes a joint experimental and numerical approach to monitor snow deposits around snow fences, quantitatively estimate snow deposits in the field, assess the efficiency of snow fences, and improve their design. Snow deposit profiles were mapped using GPS-based real-time kinematic (RTK) surveys conducted at the monitored field site during and after snowstorms. The monitored site allowed testing of different snow fence designs under close-to-identical conditions over four winter seasons. The study also discusses the detailed monitoring system and the analysis of weather forecasts and meteorological conditions at the monitored sites.
A main goal of the present study was to assess the performance of lightweight plastic snow fences with a lower porosity than the typical 50% porosity used in standard designs of such fences. The field data collected during the first winter were used to identify the best design for snow fences with a porosity of 50%. Flow fields obtained from numerical simulations showed that the fence design that worked best during the first winter induced the formation of an elongated area of small velocity magnitude close to the ground. This information was used to identify other candidates for the optimum design of fences with a lower porosity. Two of the designs with a fence porosity of 30% that were found to perform well based on the numerical simulation results were tested in the field during the second winter, along with the best performing design for fences with a porosity of 50%. Field data showed that the length of the snow deposit away from the fence was reduced by about 30% for the two proposed lower-porosity (30%) fence designs compared to the best design identified for fences with a porosity of 50%. Moreover, one of the lower-porosity designs tested in the field showed no significant snow deposition within the bottom gap region beneath the fence. Thus, a major outcome of this study is the recommendation to use plastic snow fences with a porosity of 30%. It is expected that this lower-porosity design will continue to work well for even more severe snow events or for successive snow events occurring during the same winter. The approach advocated in the present study allowed general recommendations to be made for optimizing the design of lower-porosity plastic snow fences. This approach can be extended to improve the design of other types of snow fences. Some preliminary work on living snow fences is also discussed.
Another major contribution of this study is to propose, develop protocols for, and test a novel technique based on close-range photogrammetry (CRP) to quantify the snow deposits trapped by snow fences. As image data can be acquired continuously, the time evolution of the volume of snow retained by a snow fence during a storm, or over a whole winter season, can in principle be obtained. Moreover, CRP is a non-intrusive method that eliminates the need for manual measurements during storms, which are difficult and sometimes dangerous to perform. Presently, there is much empiricism in the design of snow fences, due to the lack of data on fence storage capacity, on how snow deposits change with fence design and snowstorm characteristics, and on the main parameters used by state DOTs to design snow fences at a given site. The availability of such information from CRP measurements should provide critical data for evaluating the performance of a given snow fence design tested by the IDOT. As part of the present study, the novel CRP method was tested at several sites. The present study also discusses some attempts and preliminary work to determine the snow relocation coefficient, one of the main variables that has to be estimated by IDOT engineers when using the standard snow fence design software (Snow Drift Profiler; Tabler, 2006). Our analysis showed that standard empirical formulas did not produce reasonable values when applied at the Iowa test sites monitored as part of the present study, and that simple methods of estimating this variable are not reliable. The present study makes recommendations for the development of a new methodology based on Large-Scale Particle Image Velocimetry that can directly measure snow drift fluxes and the amount of snow relocated by the fence.
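Whether the deposit profiles come from RTK surveys or from CRP, the volume estimate reduces to numerical integration of surveyed cross-sections. A minimal sketch (the helper names and the wedge-shaped test profile are hypothetical, not the study's data):

```python
def trapz(y, x):
    """Trapezoidal-rule integral of sampled values y over coordinates x."""
    return sum((y[i] + y[i + 1]) / 2.0 * (x[i + 1] - x[i]) for i in range(len(x) - 1))

def deposit_cross_section(distance_m, snow_elev_m, ground_elev_m):
    """Cross-sectional area (m^2) of one surveyed deposit profile:
    integrate snow-minus-ground thickness along the transect."""
    thickness = [max(s - g, 0.0) for s, g in zip(snow_elev_m, ground_elev_m)]
    return trapz(thickness, distance_m)

def deposit_volume(areas_m2, profile_spacing_m):
    """Deposit volume (m^3): integrate the cross-sectional areas of
    parallel profiles spaced along the fence."""
    stations = [i * profile_spacing_m for i in range(len(areas_m2))]
    return trapz(areas_m2, stations)

# Hypothetical profile: a 1 m high snow wedge tapering to zero over 20 m
x = [float(i) for i in range(21)]
area = deposit_cross_section(x, [1.0 - xi / 20.0 for xi in x], [0.0] * 21)  # -> 10.0 m^2
vol = deposit_volume([10.0, 10.0, 10.0], profile_spacing_m=5.0)             # -> 100.0 m^3
```

The same volume-differencing applies to successive CRP surface models, which is what makes the time evolution of the retained volume accessible.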
Abstract:
During the first two trimesters of intrauterine life, fetal sex steroid production is driven by maternal human chorionic gonadotropin (hCG). The hypothalamic-pituitary-gonadal (HPG) axis is activated around the third trimester and remains active for the first six months of neonatal life. This so-called mini-puberty is a developmental window that has profound effects on future potential for fertility. In early puberty, GnRH secretion is reactivated, first at night and then both night and day. Pulsatile GnRH stimulates both LH and FSH, which induce maturation of the seminiferous tubules and Leydig cells. Congenital hypogonadotropic hypogonadism (CHH) results from GnRH deficiency. Men with CHH lack the mini-pubertal and pubertal periods of Sertoli cell proliferation and thus present with prepubertal testes (<4 mL) and low serum inhibin levels, reflecting diminished Sertoli cell numbers. To induce full maturation of the testes, GnRH-deficient patients can be treated with pulsatile GnRH, hCG, or combined gonadotropin therapy (FSH + hCG). Fertility outcomes with each of these regimens are highly variable. Recently, a randomized, open-label treatment study (n=13) addressed the question of whether sequential treatment with FSH alone prior to LH and FSH (via GnRH pump) could enhance fertility outcomes. All men receiving the sequential treatment developed sperm in the ejaculate, whereas 2/6 men in the other group remained azoospermic. A large, multicenter clinical trial is needed to definitively establish the optimal treatment approach for severe CHH.
Abstract:
Helping behavior is any intentional behavior that benefits another living being or group (Hogg & Vaughan, 2010). People tend to underestimate the probability that others will comply with their direct requests for help (Flynn & Lake, 2008). This implies that when people need help, they assess the probability of getting it (DePaulo, 1982, cited in Flynn & Lake, 2008), tend to estimate one that is lower than the real chance, and so may not even consider it worth asking. Existing explanations for this phenomenon attribute it to a mistaken cost computation by the help seeker, who emphasizes the instrumental cost of “saying yes” while ignoring that the potential helper also needs to take into account the social cost of saying “no”. Especially in face-to-face interactions, the discomfort caused by refusing to help can be very high. In short, help seekers tend to fail to realize that it might be more costly to refuse a help request than to comply with it. A similar effect has been observed in the estimation of people's trustworthiness. Fetchenhauer and Dunning (2010) showed that people also tend to underestimate it. This bias is reduced when, instead of asymmetric feedback (received only when one decides to trust the other person), symmetric feedback (always given) is provided. The same cause could apply to help seeking, as people receive feedback only when they actually make a request, and not otherwise. Fazio, Shook, and Eiser (2004) studied something that could be reinforcing these outcomes: learning asymmetries. By means of a computer game called BeanFest, they showed that people learn better about negatively valenced objects (beans, in this case) than about positively valenced ones. This learning asymmetry stemmed from “information gain being contingent on approach behavior” (p. 293), which can be identified with what Fetchenhauer and Dunning call ‘asymmetric feedback’, and hence also with help requests. Fazio et al. also found a generalization asymmetry in favor of negative attitudes over positive ones. They attributed it to a negativity bias that “weights resemblance to a known negative more heavily than resemblance to a positive” (p. 300). Applied to help-seeking scenarios, this means that when facing an unknown situation, people will tend to generalize and infer that a negative outcome is more likely than a positive one; together with the mechanisms described above, this makes people more inclined to expect a “no” when requesting help. Denrell and Le Mens (2011) present a different perspective on judgment biases in general. They deviate from the classical account of inappropriate information processing (described, among others, by Fiske & Taylor, 2007, and Tversky & Kahneman, 1974) and explain such biases in terms of ‘adaptive sampling’. Adaptive sampling is a sampling mechanism in which the selection of sample items is conditioned by the previously observed values of the variable of interest (Thompson, 2011). Sampling adaptively allows individuals to safeguard themselves from experiences that once produced negative outcomes. However, it also prevents them from giving those experiences a second chance to yield an updated outcome, one that might turn out positive, more positive, or simply regress to the mean. As Denrell and Le Mens (2011) explain, this makes sense: if you go to a restaurant and do not like the food, you do not choose that restaurant again. This is what we think could be happening when asking for help: when we get a “no”, we stop asking.
Here we provide a complementary explanation, based on adaptive sampling, for the underestimation of the probability that others will comply with our direct help requests. First, we develop and explain a model that represents the theory. We then test it empirically by means of experiments and elaborate on the analysis of the results.
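A minimal simulation (our own illustration, not the authors' model) shows the core of the adaptive-sampling account: agents who stop asking after the first “no” end up, on average, with an estimate of the compliance rate well below the true probability, while agents forced to keep sampling do not. The true rate of 0.8 and the cap of 20 requests per agent are illustrative assumptions.

```python
import random

def experienced_compliance(p_yes, n_people, keep_asking_after_no=False, seed=1):
    """Mean final estimate of the chance of a 'yes' across agents who each
    make up to 20 requests; under adaptive sampling the first 'no' ends
    an agent's sampling, freezing a pessimistic estimate."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_people):
        outcomes = []
        for _ in range(20):
            outcomes.append(rng.random() < p_yes)
            if not outcomes[-1] and not keep_asking_after_no:
                break  # a "no" ends sampling (the restaurant is never revisited)
        estimates.append(sum(outcomes) / len(outcomes))
    return sum(estimates) / n_people

adaptive = experienced_compliance(0.8, 5000)                             # well below 0.8
unbiased = experienced_compliance(0.8, 5000, keep_asking_after_no=True)  # close to 0.8
```

The gap between the two averages is the judgment bias: it arises from the sampling rule alone, with no distorted information processing by the agents.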
Abstract:
The Federal Highway Administration (FHWA) mandated utilizing the Load and Resistance Factor Design (LRFD) approach for all new bridges initiated in the United States after October 1, 2007. As a result, there has been a progressive move among state Departments of Transportation (DOTs) toward increased use of LRFD in geotechnical design practice. For these reasons, the Iowa Highway Research Board (IHRB) sponsored three research projects: TR-573, TR-583 and TR-584. The research information is summarized on the project web site (http://srg.cce.iastate.edu/lrfd/). Two of a total of four report volumes have been published. Report Volume I by Roling et al. (2010) described the development of a user-friendly electronic database (PILOT). Report Volume II by Ng et al. (2011) summarized the 10 full-scale field tests conducted throughout Iowa and the data analyses. This report presents the development of regionally calibrated LRFD resistance factors for bridge pile foundations in Iowa based on reliability theory, focusing on the strength limit states and incorporating construction control aspects and soil setup into the design process. The calibration framework was selected to follow the guidelines provided by the American Association of State Highway and Transportation Officials (AASHTO), taking into consideration current local practices. The resistance factors were developed for general and in-house static analysis methods used for the design of pile foundations, as well as for dynamic analysis methods and dynamic formulas used for construction control.
The following notable benefits to bridge foundation design were attained in this project: 1) comprehensive design tables and charts were developed to facilitate the implementation of the LRFD approach, ensuring uniform reliability and consistency in the design and construction processes of bridge pile foundations; 2) the results showed a substantial gain in factored capacity compared with the 2008 AASHTO-LRFD recommendations; and 3) the project contributed to existing knowledge, thereby advancing foundation design and construction practices in Iowa and the nation.
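Reliability-based calibration of this kind is often introduced through the closed-form first-order second-moment (FOSM) approximation for lognormal resistance and loads. The sketch below is that generic textbook form, not the project's actual calibration (which followed AASHTO's framework), and every bias factor and coefficient of variation (COV) in it is an assumed illustrative value.

```python
import math

def fosm_resistance_factor(lam_R, cov_R, beta_T,
                           gam_D=1.25, gam_L=1.75, qd_ql=2.0,
                           lam_QD=1.05, cov_QD=0.10,
                           lam_QL=1.15, cov_QL=0.20):
    """Closed-form FOSM resistance factor for lognormal resistance and
    dead + live loads: lam_* are bias factors (mean/nominal), cov_* are
    COVs, gam_* load factors, qd_ql the dead-to-live load ratio, and
    beta_T the target reliability index."""
    cov_Q2 = cov_QD**2 + cov_QL**2
    num = lam_R * (gam_D * qd_ql + gam_L) * math.sqrt((1 + cov_Q2) / (1 + cov_R**2))
    den = (lam_QD * qd_ql + lam_QL) * math.exp(
        beta_T * math.sqrt(math.log((1 + cov_R**2) * (1 + cov_Q2))))
    return num / den

# Hypothetical pile-design inputs: unbiased method (lam_R = 1.0) with a
# 40% resistance COV, calibrated to a target reliability index of 2.33
phi = fosm_resistance_factor(lam_R=1.0, cov_R=0.40, beta_T=2.33)
```

A more scattered design method (larger cov_R) or a stricter target beta_T drives phi down, which is exactly the trade-off a regional calibration with construction control and soil setup aims to improve.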
Abstract:
The material presented in these notes covers the sessions Modelling of Electromechanical Systems, Passive Control Theory I and Passive Control Theory II of the II EURON/GEOPLEX Summer School on Modelling and Control of Complex Dynamical Systems. We start with a general description of what an electromechanical system is from a network-modelling point of view. Next, a general formulation in terms of port-Hamiltonian dynamical systems (PHDS) is introduced, and some of the previous electromechanical systems are rewritten in this formalism. Power converters, which are variable structure systems (VSS), can also be given a PHDS form. We conclude the modelling part of these lectures with a rather complex example showing the interconnection of subsystems from several domains, namely an arrangement to temporarily store the surplus energy in a section of a metropolitan transportation system based on dc motor vehicles, using either arrays of supercapacitors or an electrically powered flywheel. The second part of the lectures addresses control of PHD systems. We first present the idea of control as the power interconnection of a plant and a controller, discuss the obstacles this raises and how to circumvent them, and present the basic ideas of Interconnection and Damping Assignment (IDA) passivity-based control of PHD systems.
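As a reminder of the formalism assumed here (standard port-Hamiltonian notation, not reproduced from the notes themselves), a PHDS and the IDA-PBC design it supports can be summarized as:

```latex
% Port-Hamiltonian system with dissipation:
%   J skew-symmetric (interconnection), R = R^T >= 0 (dissipation)
\dot{x} = \bigl[J(x) - R(x)\bigr]\,\frac{\partial H}{\partial x}(x) + g(x)\,u,
\qquad
y = g^{\top}(x)\,\frac{\partial H}{\partial x}(x)

% Power balance (passivity): stored power never exceeds supplied power
\dot{H} = -\frac{\partial H}{\partial x}^{\!\top} R\,\frac{\partial H}{\partial x}
          + y^{\top} u \;\le\; y^{\top} u

% IDA-PBC: pick desired J_d = -J_d^T, R_d >= 0 and energy H_d with a
% minimum at the target equilibrium, then solve the matching equation for u
\bigl[J_d(x) - R_d(x)\bigr]\frac{\partial H_d}{\partial x}
  = \bigl[J(x) - R(x)\bigr]\frac{\partial H}{\partial x} + g(x)\,u
```

The power balance is what makes "control as power interconnection" natural: closing the loop with another passive (port-Hamiltonian) system preserves passivity of the interconnection.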
Abstract:
Introduction: We recently observed in a chronic ovine model that a shortening of action potential duration (APD), as assessed by the activation recovery interval (ARI), may be a mechanism whereby pacing-induced atrial tachycardia (PIAT) facilitates atrial fibrillation (AF), mediated by a return to 1:1 atrial capture after the effective refractory period has been reached. The aim of the present study is to evaluate the effect of long-term intermittent burst pacing on ARI before induction of AF. Methods: We specifically developed a chronic ovine model of PIAT using two pacemakers (PM), each with a right atrial (RA) lead, separated by ∼2 cm. The first PM (Vitatron T70) was used to record a broadband unipolar RA EGM (800 Hz, 0.4 Hz high-pass filter). The second was used to deliver PIAT during electrophysiological protocols at decremental pacing cycle lengths (400 beats, from 400 to 110 ms) and long-term intermittent RA burst pacing to promote electrical remodeling (5 s of burst followed by 2 s of sinus rhythm) until onset of sustained AF. ARI was defined as the time difference between the peak of the atrial repolarization wave and the first atrial depolarization. The mean ARIs of paired sequences (before and after remodeling), each consisting of 20 beats, were compared. Results: As shown in the figure, ARIs (n=4 sheep, 46 recordings) decreased post-remodeling compared to baseline (86±19 vs 103±12 ms, p<0.05). There was no difference in atrial structure, as assessed by light microscopy, between control and remodeled sheep. Conclusions: Using standard pacemaker technology, atrial ARIs, as a surrogate of APDs, were successfully measured in vivo during the electrical remodeling process leading to AF. The facilitation of AF by PIAT, mimicking salvos from pulmonary veins, is heralded by a significant shortening of ARI.
Abstract:
We examined the sequence variation of the mitochondrial DNA control region and cytochrome b gene of the house mouse (Mus musculus sensu lato) drawn from ca. 200 localities, with 286 new samples drawn primarily from previously unsampled portions of the Eurasian distribution, with the objective of further clarifying the evolutionary episodes of this species before and after the onset of human-mediated long-distance dispersals. Phylogenetic analysis of the expanded data detected five equally distinct clades, with geographic ranges of northern Eurasia (musculus, MUS), India and Southeast Asia (castaneus, CAS), Nepal (unspecified, NEP), western Europe (domesticus, DOM) and Yemen (gentilulus). Our results confirm previous suggestions of Southwestern Asia as the likely place of origin of M. musculus, and of the region of Iran, Afghanistan, Pakistan, and northern India specifically as the ancestral homeland of CAS. The divergence of the subspecies lineages and the internal sublineage differentiation within CAS were estimated at 0.37-0.47 and 0.14-0.23 million years ago (mya), respectively, assuming a split of M. musculus and Mus spretus at 1.7 mya. Of the four CAS sublineages detected, only one extends to eastern parts of India, Southeast Asia, Indonesia, the Philippines, South China, Northeast China, Primorye, Sakhalin and Japan, implying a dramatic range expansion of CAS out of its homeland during an evolutionarily short time, perhaps associated with the spread of agricultural practices. Multiple and non-coincident eastward dispersal events of MUS sublineages to distant geographic areas, such as northern China, Russia and Korea, are inferred, with the possibility of several different routes.
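Molecular dating against a fixed calibration point, as used here, amounts to proportional scaling of genetic distances under a clock assumption. A sketch with hypothetical distances (the 1.7 mya M. musculus / M. spretus split is the abstract's calibration; the distance values below are made up for illustration):

```python
def divergence_time_mya(d_pair, d_calibration, t_calibration_mya=1.7):
    """Clock-based divergence date: scale a pairwise genetic distance by a
    calibrated split (here M. musculus vs M. spretus at 1.7 mya)."""
    return t_calibration_mya * d_pair / d_calibration

# Hypothetical: a subspecies pair showing 25% of the calibration distance
t = divergence_time_mya(d_pair=0.025, d_calibration=0.10)  # -> 0.425 mya
```

A distance ratio of roughly a quarter of the calibration distance lands in the 0.37-0.47 mya window the study reports for the subspecies split; the uncertainty interval reflects, among other things, uncertainty in the distance estimates themselves.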
Abstract:
Aim. To evaluate the usefulness of the COOP/WONCA charts as a screening tool for mental disorders among immigrant healthcare users in primary care in Salt, and to measure the self-rated health of Salt's immigrant population using the COOP/WONCA charts and assess its associated factors. Design. Descriptive, cross-sectional study. Participants. 370 non-EU immigrant adults selected by consecutive sampling stratified by sex. Main measures. Personal information will be collected (age, sex, country of origin, years of residency in Spain, number of people living in the household and associated comorbidities). Each participant will complete the COOP/WONCA charts. An analysis of the validity of the diagnostic test will be performed: sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), ROC curve and area under the curve (AUC). All variables will be subjected to descriptive analysis. Bivariate and multivariate analyses between the variables collected (sex, years of residency in Spain, etc.) and the results of the COOP/WONCA charts will be performed. Results. Preliminary results are available from a pilot test with 30 patients. The prevalence of mental disorders is around 30%. Sensitivity (0.89), specificity (0.89), PPV (0.80), NPV (0.94), cutoff score (3.5) and AUC (0.941). Women, people with 10 or more years of residency in Spain, and unemployed people have worse self-rated health. Conclusions. Based on the preliminary results, it is possible to conclude that the COOP/WONCA charts could be a useful, valid and applicable screening test for mental disorders in primary care with an immigrant population.
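The validity indices reported for the pilot all follow from a 2×2 table of the screening result against the reference diagnosis. A minimal sketch (the counts below are hypothetical, chosen only to sit near the reported ~30% prevalence; they are not the pilot's actual data):

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from the counts of a 2x2
    table: true/false positives (tp, fp) and false/true negatives (fn, tn)."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly flagged
        "specificity": tn / (tn + fp),  # negatives correctly cleared
        "ppv": tp / (tp + fp),          # flagged patients truly positive
        "npv": tn / (tn + fn),          # cleared patients truly negative
    }

# Hypothetical 30-patient pilot: 9 true cases, screening cutoff flags 10
m = screening_metrics(tp=8, fp=2, fn=1, tn=19)
```

Note that sensitivity and specificity are properties of the test, while PPV and NPV shift with prevalence, which matters when extrapolating pilot figures to the full 370-participant sample.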
Abstract:
A specification for contractor moisture quality control (QC) in roadway embankment construction has been in use for approximately 10 years in Iowa, on about 190 projects. The use of this QC specification and the development of the soils certification program for the Iowa Department of Transportation (DOT) originated from Iowa Highway Research Board (IHRB) embankment quality research projects. Since this research, the Iowa DOT has applied compaction with moisture control on most embankment work under pavements. This study set out to independently evaluate the actual quality of compaction under the current specifications. Results show that Proctor tests conducted by Iowa State University (ISU), using representative material obtained from each test section where field testing was conducted, had optimum moisture contents and maximum dry densities that differed from those selected by the Iowa DOT for QC/quality assurance (QA) testing. Comparisons between the measured and selected values showed a standard error of 2.9 lb/ft³ for maximum dry density and 2.1% for optimum moisture content. The difference in optimum moisture content was as high as 4%, and the difference in maximum dry density was as high as 6.5 lb/ft³. The differences at most test locations, however, were within the allowable variation suggested in AASHTO T 99 for test results between different laboratories. The ISU testing results showed higher rates of data outside of the target limits than the available contractor QC data for cohesive materials indicated. Also, wet fill materials were often observed during construction. Several test points indicated that materials were placed and accepted wet of the target moisture contents.
The statistical analysis indicates that the results obtained from this study showed improvements over previous embankment quality research projects (TR-401 Phases I through III and TR-492) in terms of the percentage of data falling within the specification limits. Although there was evidence of improvement, QC/QA results are not consistently meeting the target limits/values. Recommendations are provided in this report for Iowa DOT consideration, with three proposed options for improving the current specifications. Option 1 provides enhancements to the current specifications in terms of material-dependent control limits, training, sampling, and process control. Option 2 addresses the development of alternative specifications that incorporate dynamic cone penetrometer or lightweight deflectometer testing into QC/QA. Option 3 addresses incorporating calibrated intelligent compaction measurements into QC/QA.
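As a sketch of what a moisture-and-density acceptance check involves, the following uses hypothetical limits (optimum moisture ±2 points, at least 95% of the Proctor maximum dry density); the actual Iowa DOT limits differ and, as Option 1 above suggests, would be material-dependent.

```python
def moisture_density_check(w, dry_density, w_opt, max_dry_density,
                           w_tol=2.0, min_compaction=0.95):
    """Flag one field QC test point against assumed acceptance limits:
    moisture within w_opt +/- w_tol points, and dry density at least
    min_compaction times the Proctor maximum. Returns (moisture_ok,
    density_ok)."""
    moisture_ok = (w_opt - w_tol) <= w <= (w_opt + w_tol)
    density_ok = dry_density >= min_compaction * max_dry_density
    return moisture_ok, density_ok

# A point placed wet of optimum (as often observed in this study):
# moisture fails while the density requirement still passes
flags = moisture_density_check(w=16.5, dry_density=108.0,
                               w_opt=13.0, max_dry_density=110.0)  # -> (False, True)
```

The example also shows why moisture control is specified separately from density: a point can meet relative compaction while being placed too wet to perform well.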
Abstract:
Ambulatory blood pressure monitoring (ABPM) has become indispensable for the diagnosis and control of hypertension. However, no consensus exists on how daytime and nighttime periods should be defined. OBJECTIVE: To compare daytime and nighttime blood pressure (BP) defined by an actigraph and by body position with BP resulting from arbitrary daytime and nighttime periods. PATIENTS AND METHOD: ABPM, sleeping periods and body position were recorded simultaneously using an actigraph (SenseWear Armband®) in patients referred for ABPM. BP results obtained with the actigraph (sleep and position) were compared with those obtained with fixed daytime (7 a.m.-10 p.m.) and nighttime (10 p.m.-7 a.m.) periods. RESULTS: Data from 103 participants were available. More than half of them were taking antihypertensive drugs. Nocturnal BP was lower (systolic BP: 2.08±4.50 mmHg; diastolic BP: 1.84±2.99 mmHg, P<0.05) and dipping was more marked (systolic BP: 1.54±3.76%; diastolic BP: 2.27±3.48%, P<0.05) when nighttime was defined with the actigraph. Standing BP was higher (systolic BP: 1.07±2.81 mmHg; diastolic BP: 1.34±2.50 mmHg) than daytime BP defined by a fixed period. CONCLUSION: Diurnal BP, nocturnal BP and dipping are influenced by the definition of daytime and nighttime periods. Studies evaluating the prognostic value of each method are needed to clarify which definition should be used.
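Dipping in ABPM studies is the relative day-to-night fall in BP; a one-line sketch (the ≥10% "dipper" threshold is the usual convention, not something stated in this abstract, and the example values are hypothetical):

```python
def nocturnal_dip_percent(day_bp, night_bp):
    """Percent nocturnal dip: (daytime BP - nighttime BP) / daytime BP * 100.
    By the usual convention, a 'dipper' shows a dip of at least 10%."""
    return 100.0 * (day_bp - night_bp) / day_bp

# Hypothetical systolic means: 135 mmHg awake, 120 mmHg asleep
dip = nocturnal_dip_percent(day_bp=135.0, night_bp=120.0)  # -> ~11.1%, a dipper
```

Because the dip is a ratio of two period means, even the ~2 mmHg shifts in nocturnal BP reported above can move a patient across the dipper/non-dipper boundary, which is why the period definition matters clinically.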
Abstract:
Background We analyzed the relationship between cholelithiasis and cancer risk in a network of case-control studies conducted in Italy and Switzerland in 1982-2009. Methods The analyses included 1997 oropharyngeal, 917 esophageal, 999 gastric, 23 small intestinal, 3726 colorectal, 684 liver, 688 pancreatic, 1240 laryngeal, 6447 breast, 1458 endometrial, 2002 ovarian, 1582 prostate, 1125 renal cell, 741 bladder cancers, and 21 284 controls. The odds ratios (ORs) were estimated by multiple logistic regression models. Results The ORs for subjects with a history of cholelithiasis, compared with those without, were significantly elevated for small intestinal (OR = 3.96), prostate (OR = 1.36), and kidney (OR = 1.57) cancers. These positive associations were observed ≥10 years after diagnosis of cholelithiasis and were consistent across strata of age, sex, and body mass index. No relation was found with the other selected cancers. A meta-analysis including this and three other studies on the relation of cholelithiasis with small intestinal cancer gave a pooled relative risk of 2.35 [95% confidence interval (CI) 1.82-3.03]. Conclusion In subjects with cholelithiasis, we showed an appreciably increased risk of small intestinal cancer, and our results suggest moderately increased risks of prostate and kidney cancers. We found no material association with the other cancers considered.
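Pooled estimates like the RR of 2.35 above typically come from inverse-variance weighting on the log scale. A generic fixed-effect sketch (the two study inputs below are hypothetical, not the four studies actually pooled):

```python
import math

def pooled_rr(rrs, cis):
    """Fixed-effect inverse-variance pooling of relative risks given their
    95% CIs: each log-RR is weighted by 1/SE^2, with SE recovered from the
    CI width on the log scale. Returns (estimate, ci_low, ci_high)."""
    wsum = lsum = 0.0
    for rr, (lo, hi) in zip(rrs, cis):
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log RR
        w = 1.0 / se**2
        wsum += w
        lsum += w * math.log(rr)
    mean = lsum / wsum
    half = 1.96 / math.sqrt(wsum)
    return math.exp(mean), math.exp(mean - half), math.exp(mean + half)

# Hypothetical inputs: RR 2.0 (95% CI 1.2-3.3) and RR 3.0 (95% CI 1.5-6.0)
est, lo, hi = pooled_rr([2.0, 3.0], [(1.2, 3.3), (1.5, 6.0)])
```

The pooled CI is narrower than any single study's because the weights add; with real heterogeneity a random-effects model would widen it again.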
Abstract:
INTRODUCTION: Prospective epidemiologic studies have consistently shown that levels of circulating androgens in postmenopausal women are positively associated with breast cancer risk. However, data in premenopausal women are limited. METHODS: A case-control study nested within the New York University Women's Health Study was conducted. A total of 356 cases (276 invasive and 80 in situ) and 683 individually matched controls were included. Matching variables included age and date, phase, and day of menstrual cycle at blood donation. Testosterone, androstenedione, dehydroepiandrosterone sulfate (DHEAS) and sex hormone-binding globulin (SHBG) were measured using direct immunoassays. Free testosterone was calculated. RESULTS: Premenopausal serum testosterone and free testosterone concentrations were positively associated with breast cancer risk. In models adjusted for known risk factors of breast cancer, the odds ratios for increasing quintiles of testosterone were 1.0 (reference), 1.5 (95% confidence interval (CI), 0.9 to 2.3), 1.2 (95% CI, 0.7 to 1.9), 1.4 (95% CI, 0.9 to 2.3) and 1.8 (95% CI, 1.1 to 2.9; Ptrend = 0.04), and for free testosterone were 1.0 (reference), 1.2 (95% CI, 0.7 to 1.8), 1.5 (95% CI, 0.9 to 2.3), 1.5 (95% CI, 0.9 to 2.3), and 1.8 (95% CI, 1.1 to 2.8, Ptrend = 0.01). A marginally significant positive association was observed with androstenedione (P = 0.07), but no association with DHEAS or SHBG. Results were consistent in analyses stratified by tumor type (invasive, in situ), estrogen receptor status, age at blood donation, and menopausal status at diagnosis. Intra-class correlation coefficients for samples collected from 0.8 to 5.3 years apart (median 2 years) in 138 cases and 268 controls were greater than 0.7 for all biomarkers except androstenedione (0.57 in controls). CONCLUSIONS: Premenopausal concentrations of testosterone and free testosterone are associated with breast cancer risk.
Testosterone and free testosterone measurements are also highly reliable (that is, a single measurement is reflective of a woman's average level over time). Results from other prospective studies are consistent with our results. The impact of including testosterone or free testosterone in breast cancer risk prediction models for women between the ages of 40 and 50 years should be assessed. Improving risk prediction models for this age group could help decision making regarding both screening and chemoprevention of breast cancer.
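The abstract notes that free testosterone was calculated rather than measured. A common way to do this is the mass-action approach of Vermeulen and colleagues, which solves a quadratic in the free fraction; whether that is the exact method used here is not stated, and the association constants and default albumin level below are assumed typical values, not figures from this study.

```python
import math

def free_testosterone_nmol(total_T, shbg, albumin_g_per_L=43.0,
                           K_shbg=1.0e9, K_alb=3.6e4):
    """Mass-action estimate of free testosterone (inputs in nmol/L) from
    total testosterone and SHBG, with assumed association constants
    (L/mol) for SHBG and albumin binding. Solves
    N*Kt*FT^2 + (N + Kt*(SHBG - T))*FT - T = 0 for FT, where
    N = 1 + K_alb*[albumin] lumps free plus albumin-bound hormone."""
    N = 1.0 + K_alb * (albumin_g_per_L / 69000.0)  # albumin in mol/L, MW ~69 kDa
    Kt = K_shbg * 1e-9                             # rescaled to per nmol/L
    a = N * Kt
    b = N + Kt * (shbg - total_T)
    c = -total_T
    return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

# Premenopausal-range inputs (hypothetical): T = 1.5 nmol/L, SHBG = 60 nmol/L
ft = free_testosterone_nmol(total_T=1.5, shbg=60.0)  # free fraction ~1-2%
```

The free fraction this yields (on the order of 1-2% in women) is why free and total testosterone can behave as distinct risk markers despite being derived from overlapping measurements.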
Abstract:
An underwater vehicle is modeled, and different control alternatives based on linearization are studied under the assumption of a prolate-spheroid geometry, obtaining plots of the state and the control over a time interval.
Abstract:
An efficient high-resolution three-dimensional (3-D) seismic reflection system for small-scale targets in lacustrine settings was developed. In Lake Geneva, near the city of Lausanne, Switzerland, past high-resolution two-dimensional (2-D) investigations revealed a complex fault zone (the Paudèze thrust zone), which was subsequently chosen for testing our system. Observed structures include a thin (<40 m) layer of subhorizontal Quaternary sediments that unconformably overlie southeast-dipping Tertiary Molasse beds and the Paudèze thrust zone, which separates Plateau and Subalpine Molasse units. Two complete 3-D surveys have been conducted over this test site, covering an area of about 1 km². In 1999, a pilot survey (Survey I), comprising 80 profiles, was carried out in 8 days with a single-streamer configuration. In 2001, a second survey (Survey II) used a newly developed three-streamer system with optimized design parameters, which provided an exceptionally high-quality data set of 180 common midpoint (CMP) lines in 9 days. The main improvements include a navigation and shot-triggering system with in-house software that automatically fires the gun under real-time quality control of the boat's navigation, using differential GPS (dGPS) onboard and a reference base station near the lake shore. Shots were triggered at 5-m intervals with a maximum non-cumulative error of 25 cm.
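The distance-based triggering scheme described above can be sketched in a few lines. This is a minimal illustration, not the authors' in-house navigation software; the local metric coordinate frame and the function name are assumptions. The key point is that the travelled distance is reset at every shot, so positioning errors stay non-cumulative:

```python
import math

SHOT_INTERVAL = 5.0  # m, along-track firing distance

def shot_trigger(fixes, interval=SHOT_INTERVAL):
    """Given a stream of (x, y) dGPS fixes in a local metric frame,
    return the indices of the fixes at which the gun would be fired.

    The along-track distance is accumulated fix by fix and reduced by
    one interval at each shot, so small positioning errors do not
    accumulate from one shot to the next."""
    shots = []
    travelled = 0.0
    last = None
    for i, (x, y) in enumerate(fixes):
        if last is not None:
            travelled += math.hypot(x - last[0], y - last[1])
        last = (x, y)
        if travelled >= interval:
            shots.append(i)
            travelled -= interval  # carry over the overshoot
    return shots
```

For a boat logging fixes every metre along a straight line, this fires at every fifth fix; in the field the overshoot carry-over keeps the shot spacing centred on 5 m even when fixes are irregular.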
Whereas the single 48-channel streamer system of Survey I required extrapolation of receiver positions from the boat position, for Survey II they could be accurately calculated (error <20 cm) with the aid of three additional dGPS antennas mounted on rafts attached to the end of each of the 24-channel streamers. Because the streamer ends are towed 75 m behind the vessel, this makes it possible to determine any feathering caused by cross-line currents or small course variations. Furthermore, two retractable booms hold the three streamers at a distance of 7.5 m from each other, which is the same distance as the sail line interval for Survey I. With a receiver spacing of 2.5 m, the bin dimensions of the 3-D data of Survey II are 1.25 m in the in-line direction and 3.75 m in the cross-line direction. The coarser cross-line sampling is justified by the known structural trend of the fault zone, which is perpendicular to the in-line direction. The data from Survey I showed some reflection discontinuity as a result of insufficiently accurate navigation and positioning and the resulting binning errors. Aliasing observed in the 3-D migrated sections was due to insufficient lateral sampling combined with the relatively high-frequency (<2000 Hz) content of the water gun source (operated at 140 bars and 0.3 m depth). These results motivated the use of a double-chamber, bubble-canceling air gun for Survey II. A 15/15 Mini G.I air gun, operated at 80 bars and 1 m depth, proved to be better adapted for imaging the complexly faulted target area, which has reflectors dipping up to 30°. Although its frequencies do not exceed 650 Hz, this air gun combines penetration of non-aliased signal to depths of 300 m below the water bottom (versus 145 m for the water gun) with a maximum vertical resolution of 1.1 m.
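The bin dimensions and the quoted vertical resolution follow from simple acquisition arithmetic: the natural CMP bin size is half the station spacing in each direction, and vertical resolution is commonly taken as a quarter of the dominant wavelength. A sketch of both calculations (the ~330 Hz dominant frequency is an assumption chosen to be consistent with the quoted 1.1 m figure, not a value stated in the text):

```python
RECEIVER_SPACING = 2.5   # m, along each streamer (in-line)
STREAMER_SPACING = 7.5   # m, between the three streamers (cross-line)

# Midpoints fall halfway between source and receiver, so the natural
# bin size is half the station spacing in each direction.
inline_bin = RECEIVER_SPACING / 2      # -> 1.25 m
crossline_bin = STREAMER_SPACING / 2   # -> 3.75 m

def vertical_resolution(velocity_m_s, dominant_freq_hz):
    """Quarter-wavelength (Rayleigh) criterion for vertical resolution."""
    return velocity_m_s / (4.0 * dominant_freq_hz)

# ~1450 m/s in the shallow unconsolidated sediments and an assumed
# ~330 Hz dominant frequency give roughly 1.1 m:
res = vertical_resolution(1450.0, 330.0)
```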
While Survey I was shot in patches of alternating directions, the reduced surveying time of the new three-streamer system allowed acquisition in a parallel geometry, which is preferable when using an asymmetric configuration (a single source and a receiver array); otherwise the resulting stacks differ between the two shooting directions. However, the shorter streamer configuration of Survey II reduced the nominal fold from 12 to 6. A conventional 3-D processing flow was adapted to the high sampling rates and complemented by two computer programs that convert the unconventional navigation data to industry-standard formats. Processing included trace editing, geometry assignment, bin harmonization (to compensate for uneven fold due to boat and streamer drift), spherical divergence correction, bandpass filtering, velocity analysis, 3-D DMO correction, stacking, and 3-D time migration. A detailed semblance velocity analysis was performed on the 12-fold data set for every second in-line and every 50th CMP, i.e. on a total of 600 spectra. According to this analysis, interval velocities range from 1450 to 1650 m/s in the unconsolidated sediments and from 1650 to 3000 m/s in the consolidated sediments. The delineation of several horizons and fault surfaces reveals the potential for small-scale geologic and tectonic interpretation in three dimensions. Five major seismic facies and their detailed 3-D geometries can be distinguished in vertical and horizontal sections: lacustrine sediments (Holocene), glaciolacustrine sediments (Pleistocene), Plateau Molasse, Subalpine Molasse, and the Subalpine Molasse thrust fault zone. Dips of beds within the Plateau and Subalpine Molasse are ~8° and ~20°, respectively. Within the fault zone, many highly deformed structures with dips around 30° are visible. Preliminary tests with 3-D preserved-amplitude prestack depth migration demonstrate that the excellent data quality of Survey II allows application of such sophisticated techniques even to high-resolution seismic surveys.
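Interval velocities such as those quoted are typically derived from the stacking (approximately RMS) velocities picked on the semblance spectra via the Dix equation. A minimal sketch; the time/velocity picks below are illustrative values, not picks from this survey:

```python
import math

def dix_interval_velocity(t1, v1, t2, v2):
    """Dix conversion of RMS velocities v1, v2 (m/s) picked at two-way
    times t1 < t2 (s) into the interval velocity of the layer between:

        v_int = sqrt((v2**2 * t2 - v1**2 * t1) / (t2 - t1))
    """
    return math.sqrt((v2**2 * t2 - v1**2 * t1) / (t2 - t1))

# Illustrative picks: RMS velocity rising from 1500 m/s at 0.10 s
# to 1560 m/s at 0.20 s two-way time.
v_int = dix_interval_velocity(0.10, 1500.0, 0.20, 1560.0)
```

With a constant RMS velocity the formula returns that same velocity, which is a quick sanity check when wiring it into a picking workflow.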
In general, the adaptation of the 3-D marine seismic reflection method, which to date has been used almost exclusively by the oil exploration industry, to a smaller geographical as well as financial scale has helped pave the way for applying this technique to environmental and engineering purposes.<br/><br/>Seismic reflection is a subsurface investigation method with very high resolving power. It consists of sending vibrations into the ground and recording the waves that are reflected by geological discontinuities at various depths and then travel back to the surface, where they are recorded. The signals collected in this way provide information on the nature and geometry of the layers present, and they also allow a geological interpretation of the subsurface. In the case of sedimentary rocks, for example, seismic reflection profiles make it possible to determine their mode of deposition, their possible deformation or faulting, and hence their tectonic history. Seismic reflection is the principal method of petroleum exploration. For a long time, data were acquired along individual lines, which provide a two-dimensional image of the subsurface. Such images are only partially accurate, since they do not account for the three-dimensional nature of geological structures. Over the past few decades, three-dimensional (3-D) seismic surveying has given new impetus to the study of the subsurface. While the method is now fully mastered for imaging large geological structures, both onshore and offshore, its adaptation to the scale of lakes and rivers has so far been the subject of only a few studies. This thesis consisted of developing a seismic acquisition system similar to that used for offshore petroleum prospecting, but adapted to lakes.
The system is therefore smaller, lighter to deploy, and, above all, it delivers final images of much higher resolution. Whereas the petroleum industry is often limited to a resolution on the order of ten meters, the instrument developed in this work resolves details on the order of one meter. The new system is based on recording seismic reflections simultaneously on three seismic cables (streamers) of 24 channels each. To obtain 3-D data, it is essential to position the instruments on the water (the source and the receivers of the seismic waves) with great precision. Software was specially developed to control the navigation and trigger the shots of the seismic source, using differential GPS (dGPS) receivers on the boat and at the end of each streamer. This makes it possible to position the instruments with a precision on the order of 20 cm. To test our system, we chose an area of Lake Geneva, near the city of Lausanne, crossed by the "La Paudèze" fault, which separates the Plateau Molasse from the Subalpine Molasse. Two 3-D seismic surveys were carried out there over an area of about 1 km². The seismic records were then processed to turn them into interpretable images. We applied a 3-D processing sequence specially adapted to our data, in particular with regard to positioning. After processing, the data reveal several main seismic facies, corresponding notably to the lacustrine sediments (Holocene), the glaciolacustrine sediments (Pleistocene), the Plateau Molasse, the Subalpine Molasse of the fault zone, and the Subalpine Molasse south of this zone. The detailed 3-D geometry of the faults is visible on vertical and horizontal seismic sections.
The excellent quality of the data and the interpretation of several horizons and fault surfaces demonstrate the potential of this technique for small-scale three-dimensional investigations, opening the way to its application in environmental and engineering studies.