979 results for 2D synchrosqueezed transforms
Abstract:
During the last few years the Institute of Geophysics of the University of Lausanne developed a 2D and 3D high-resolution multichannel seismic reflection acquisition system. The objective of the present work was to carry on this development while improving our knowledge of the geology under Lake Geneva, in particular by studying the configuration of the major fault structures affecting the Tertiary Molasse that makes up the basement of most Quaternary deposits. In its 2D configuration, our system makes it possible to acquire seismic profiles with a CDP interval of 1.25 m. The fold varies from 6 to 18 depending on the number of traces and the shooting interval. Our air gun (15/15 cu. in.) provides a vertical resolution of 1.25 m and a maximum penetration depth of approximately 300 m below the water bottom. We acquired more than 400 km of 2D sections in the Grand Lac and the Haut Lac between October 2000 and July 2004. A 3D seismic survey off the city of Evian provided data over an area of 442.5 m x 1450 m (0.64 km2).
Ship's navigation as well as hydrophone and source positioning were carried out with differential GPS. The seismic data were processed following a conventional sequence, without applying AGC and using post-stack migration. The interpretation of the pre-Quaternary substratum is based on seismic facies, on their relationships with terrestrial geological units and on some borehole data. We thus obtained a map of the geological units in the Grand Lac. We defined the location of the subalpine thrust from Lausanne, on the north shore, to the Sciez Basin, on the south shore. Within the Plateau Molasse, we identified the already known Pontarlier and St. Cergue strike-slip faults as well as several previously unrecognized faults. We mapped faults that affect the subalpine Molasse as well as the thrust plane between the Alpine flysch and the Molasse near the lake's south shore. A new tectonic map of the Lake Geneva region could thus be drawn up. The substratum does not show faults indicating a tectonic origin for the Lake Geneva Basin. However, we suggest that the orientation of glacial erosion, and thus the shape of Lake Geneva, was influenced by the presence of faults in the pre-Quaternary basement. The analysis of Quaternary sediments enabled us to draw up maps of the various discontinuities or internal units. The map of the top of the pre-Quaternary basement shows channels of glacial origin, the deepest of them reaching an elevation of -200 m a.s.l. The channels slope toward the north-east, opposite to the present-day direction of water flow. We explain this observation by the presence of artesian subglacial water circulation. Glacial sediments, whose maximum thickness reaches 150 m in the central part of the lake, record several glacial recurrences. In the Evian area, we found lenses of glacio-lacustrine sediments perched high on the flank of the Lake Geneva Basin. We correlated these units with on-land borehole data and concluded that they represent the lower complex of the Evian sedimentary pile.
The lower complex is older than 30,000 years, and it could be a kame deposit associated with a periglacial lake. Our 3D seismic reflection survey enables us to specify the supply direction of detrital material in this unit. With detailed seismic images we established how some units were affected by different types of erosion. The lacustrine sediments we imaged in Lake Geneva are more than 225 m thick, and probably 400 m or more under the Rhone Delta. They indicate several depositional mechanisms. Their base is a major turbidite, thirty meters thick on average, that spreads between the Dranse mouth and the Rhone delta. Above this unit, settling of suspended biological and detrital particles provides most of the sediments. In the eastern part of the lake, detrital input from the Rhone builds a delta that progrades to the west and interfingers with the settled sediments. The shallow structure of the Rhone delta evolved abruptly, probably after the catastrophic Tauredunum event (563 A.D.), whose probable trace coincides with an erosive surface that we mapped. As a result, the delta geometry changed, in particular with a displacement of the water-bottom channels. In all our seismic sections, we observe no faults in the Quaternary sediments that would attest to postglacial tectonic activity in the basement.
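The acquisition figures quoted above (a 1.25 m CDP interval, fold between 6 and 18 depending on channel count and shot interval) follow from standard CMP geometry. A minimal sketch of those relations, with channel counts and shot intervals chosen for illustration rather than taken from the survey:

```python
def cmp_fold(n_channels: int, group_interval: float, shot_interval: float) -> int:
    """Nominal CMP fold for 2D multichannel acquisition:
    fold = (n_channels * group_interval) / (2 * shot_interval)."""
    return int(n_channels * group_interval / (2.0 * shot_interval))

def cdp_interval(group_interval: float) -> float:
    """CDP (midpoint) spacing is half the receiver group interval."""
    return group_interval / 2.0

# A 2.5 m group interval gives the 1.25 m CDP spacing quoted in the abstract.
print(cdp_interval(2.5))          # 1.25
# e.g. 24 channels, 2.5 m groups, shooting every 5 m -> fold 6
print(cmp_fold(24, 2.5, 5.0))     # 6
# denser shooting every 2.5 m with 36 channels -> fold 18
print(cmp_fold(36, 2.5, 2.5))     # 18
```

Shot interval and channel count trade off against survey speed, which is consistent with the quoted fold range.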
Abstract:
In this article, the authors evaluate a merit function for 2D/3D registration called stochastic rank correlation (SRC). SRC is characterized by the fact that differences in image intensity do not influence the registration result; it therefore combines the numerical advantages of cross correlation (CC)-type merit functions with the flexibility of mutual-information-type merit functions. The basic idea is that registration is achieved on a random subset of the image, which allows for an efficient computation of Spearman's rank correlation coefficient. This measure is, by nature, invariant to monotonic intensity transforms in the images under comparison, which renders it an ideal solution for intramodal images acquired at different energy levels as encountered in intrafractional kV imaging in image-guided radiotherapy. Initial evaluation was undertaken using a 2D/3D registration reference image dataset of a cadaver spine. Even with no radiometric calibration, SRC shows a significant improvement in robustness and stability compared to CC. Pattern intensity, another merit function that was evaluated for comparison, gave rather poor results due to its limited convergence range. The time required for SRC with 5% image content compares well to the other merit functions; increasing the image content does not significantly influence the algorithm accuracy. The authors conclude that SRC is a promising measure for 2D/3D registration in IGRT and image-guided therapy in general.
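The SRC measure summarized above amounts to Spearman's rank correlation evaluated on a random pixel subset. A minimal sketch of that idea (the 5% subset fraction comes from the abstract; the images, seeding and the simple rank routine are invented for illustration, and ties are ignored):

```python
import numpy as np

def _ranks(x):
    """Ranks 0..n-1 by value (ties ignored for simplicity)."""
    order = np.argsort(x)
    ranks = np.empty(x.size, dtype=float)
    ranks[order] = np.arange(x.size)
    return ranks

def stochastic_rank_correlation(img_a, img_b, fraction=0.05, seed=None):
    """Spearman rank correlation on a random subset of pixel pairs.

    Rank correlation is invariant to monotonic intensity transforms,
    the property that makes SRC robust to images acquired at
    different energy levels."""
    rng = np.random.default_rng(seed)
    a, b = np.ravel(img_a), np.ravel(img_b)
    idx = rng.choice(a.size, size=max(2, int(fraction * a.size)), replace=False)
    return float(np.corrcoef(_ranks(a[idx]), _ranks(b[idx]))[0, 1])

rng = np.random.default_rng(0)
drr = rng.random((64, 64))            # stand-in for a rendered DRR
xray = np.sqrt(drr) * 100.0 + 7.0     # monotonic intensity transform of it
rho = stochastic_rank_correlation(drr, xray, fraction=0.05, seed=1)
print(rho)  # ~1.0: ranks are unchanged by a monotonic transform
```

An intensity-based measure such as plain cross correlation would need radiometric calibration to score these two images as well matched.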
Abstract:
Context. About 2/3 of Be stars present the so-called V/R variations, a phenomenon characterized by the quasi-cyclic variation in the ratio between the violet and red emission peaks of the H I emission lines. These variations are generally explained by global oscillations in the circumstellar disk forming a one-armed spiral density pattern that precesses around the star with a period of a few years. Aims. This paper presents self-consistent models of polarimetric, photometric, spectrophotometric, and interferometric observations of the classical Be star zeta Tauri. The primary goal is to conduct a critical quantitative test of the global oscillation scenario. Methods. Detailed three-dimensional, NLTE radiative transfer calculations were carried out using the radiative transfer code HDUST. The most up-to-date research on Be stars was used as input for the code in order to include a physically realistic description of the central star and the circumstellar disk. The model adopts a rotationally deformed, gravity-darkened central star, surrounded by a disk whose unperturbed state is given by a steady-state viscous decretion disk model. It is further assumed that this disk is in vertical hydrostatic equilibrium. Results. By adopting a viscous decretion disk model for zeta Tauri and a rigorous solution of the radiative transfer, a very good fit of the time-averaged properties of the disk was obtained. This provides strong theoretical evidence that the viscous decretion disk model is the mechanism responsible for disk formation. The global oscillation model successfully fitted spatially resolved VLTI/AMBER observations and the temporal V/R variations in the H alpha and Br gamma lines. This result convincingly demonstrates that the oscillation pattern in the disk is a one-armed spiral. Possible model shortcomings, as well as suggestions for future improvements, are also discussed.
Abstract:
Three new bimetallic oxamato-based magnets with the proligand 4,5-dimethyl-1,2-phenylenebis(oxamato) (dmopba) were synthesized using water or dimethylsulfoxide (DMSO) as solvents. Single crystal X-ray diffraction provided structures for two of them: [MnCu(dmopba)(H2O)3]n·4nH2O (1) and [MnCu(dmopba)(DMSO)3]n·nDMSO (2). The crystalline structures of both 1 and 2 consist of linearly ordered oxamato-bridged Mn(II)Cu(II) bimetallic chains. The magnetic characterization revealed behaviour typical of ferrimagnetic chains for 1 and 2. Least-squares fits of the experimental magnetic data performed in the 300-20 K temperature range led to J_MnCu = -27.9 cm⁻¹, g_Cu = 2.09 and g_Mn = 1.98 for 1, and J_MnCu = -30.5 cm⁻¹, g_Cu = 2.09 and g_Mn = 2.02 for 2 (spin Hamiltonian H = -J_MnCu Σ_i S_Mn,i · (S_Cu,i + S_Cu,i-1)). The two-dimensional ferrimagnetic system [Me4N]2n{Co2[Cu(dmopba)]3}·4nDMSO·nH2O (3) was prepared by reaction of Co(II) ions with an excess of [Cu(dmopba)]2- in DMSO. The study of the temperature dependence of the magnetic susceptibility, as well as the temperature and field dependences of the magnetization, revealed a cluster glass-like behaviour for 3.
Abstract:
This work presents the study and development of a combined fault location scheme for three-terminal transmission lines using wavelet transforms (WTs). The methodology is based on the low- and high-frequency components of the transient signals originating from fault situations, registered at the terminals of the system. By processing these signals with the WT, it is possible to determine the arrival times of the travelling waves of voltage and/or current from the fault point to the terminals, as well as to estimate the fundamental frequency components. The proposed approach combines several different solutions into a reliable and accurate fault location scheme. The main idea is to have a decision routine that selects which method should be used in each situation presented to the algorithm. The combined algorithm was tested for different fault conditions through simulations using the ATP (Alternative Transients Program) software. The results obtained are promising and demonstrate a highly satisfactory degree of accuracy and reliability for the proposed method.
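For the two-terminal case, the travelling-wave part of such a scheme reduces to a simple arrival-time relation once the wavefront instants have been picked (here assumed to come from the wavelet stage). A hedged sketch, with line length, wave speed and arrival times invented for illustration:

```python
def fault_distance(line_length_km, wave_speed_km_s, t_arrival_a_s, t_arrival_b_s):
    """Two-ended travelling-wave fault location.

    With the fault at distance d from terminal A on a line of length L,
    tA = d/v and tB = (L - d)/v, hence d = (L + v*(tA - tB)) / 2.
    """
    return (line_length_km
            + wave_speed_km_s * (t_arrival_a_s - t_arrival_b_s)) / 2.0

# 300 km line; wavefronts travel near the speed of light (~2.9e5 km/s assumed):
v = 2.9e5
d_true = 100.0
tA, tB = d_true / v, (300.0 - d_true) / v
d_est = fault_distance(300.0, v, tA, tB)
print(d_est)  # recovers the 100 km fault distance (up to rounding)
```

A three-terminal line adds a routing step (deciding which branch the fault is on), which is one role of the decision routine described in the abstract.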
Abstract:
A hybrid system to automatically detect, locate and classify disturbances affecting power quality in an electrical power system is presented in this paper. The disturbances characterized are events from an actual power distribution system simulated with the ATP (Alternative Transients Program) software. The hybrid approach consists of two stages. In the first stage, the wavelet transform (WT) is used to detect disturbances in the system and to locate the time of their occurrence. When such an event is flagged, the second stage is triggered and various artificial neural networks (ANNs) are applied to classify the data measured during the disturbance(s). A computational logic using WTs and ANNs, together with a graphical user interface (GUI) between the algorithm and its end user, is then implemented. The results obtained so far are promising and suggest that this approach could lead to a useful application in an actual distribution system.
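The first, wavelet-based stage can be sketched as thresholding the magnitudes of detail coefficients, which spike at abrupt waveform changes. A minimal single-level Haar illustration (the test signal, sag depth and threshold are invented, and the ANN classification stage is omitted):

```python
import numpy as np

def haar_detail(x):
    """One-level Haar wavelet detail coefficients (within-pair differences)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

def detect_disturbance(signal, threshold):
    """Index of the first detail coefficient whose magnitude exceeds threshold,
    or None when the signal stays smooth."""
    hits = np.nonzero(np.abs(haar_detail(signal)) > threshold)[0]
    return None if hits.size == 0 else int(hits[0])

fs = 3200                                # Hz: 64 samples per 50 Hz cycle
t = np.arange(640) / fs                  # 0.2 s window
v = np.sin(2 * np.pi * 50 * t)           # healthy voltage waveform
v[329:] *= 0.4                           # voltage sag starting mid-cycle

idx = detect_disturbance(v, threshold=0.15)
print(idx)  # 164: the coefficient pair straddling the sag onset (t ~ 0.1025 s)
```

The smooth 50 Hz carrier keeps detail magnitudes well below the threshold, so only the discontinuity at the sag onset is flagged; the flagged index maps back to a time via t = 2*idx/fs.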
Abstract:
Most post-processors for boundary element (BE) analysis use an auxiliary domain mesh to display domain results, which undermines the main benefit of the modelling process, namely a pure boundary discretization. This paper introduces a novel visualization technique that preserves the basic properties of boundary element methods. The proposed algorithm does not require any domain discretization and is based on the direct and automatic identification of isolines. Another critical aspect of the visualization of domain results in BE analysis is the effort required to evaluate results at interior points. To tackle this issue, the present article also compares the performance of two different BE formulations (conventional and hybrid). In addition, this paper presents an overview of the most common post-processing and visualization techniques in BE analysis, such as classical scan-line algorithms and interpolation over a domain discretization. The results presented herein show that the proposed algorithm performs very well compared with other visualization procedures.
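Isoline identification of the kind described can be built on one primitive: locating where a linearly interpolated field crosses a given level along a segment. A minimal sketch of that interpolation step only (not the paper's algorithm, and the sample points and values are invented):

```python
def isoline_crossing(p0, p1, u0, u1, level):
    """Point on segment p0-p1 where a linearly interpolated scalar field,
    with values u0 at p0 and u1 at p1, equals `level`.

    Returns None when the isoline does not cross the segment.
    """
    if (u0 - level) * (u1 - level) > 0 or u0 == u1:
        return None
    t = (level - u0) / (u1 - u0)          # parametric position of the crossing
    return (p0[0] + t * (p1[0] - p0[0]),
            p0[1] + t * (p1[1] - p0[1]))

# field value 1.0 at (0, 0) and 3.0 at (2, 0): the u = 2 isoline crosses midway
print(isoline_crossing((0.0, 0.0), (2.0, 0.0), 1.0, 3.0, 2.0))  # (1.0, 0.0)
```

Chaining such crossings between interior evaluation points traces an isoline without ever building a domain mesh, which is the spirit of the approach described above.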
Abstract:
This work deals with the determination of crack openings in 2D reinforced concrete structures using the Finite Element Method with either a smeared rotating crack model or an embedded crack model. In the smeared crack model, the strong discontinuity associated with the crack is spread throughout the finite element. As is well known, the continuity of the displacement field assumed for these models is incompatible with the actual discontinuity. However, this type of model has been used extensively due to the relative computational simplicity it provides by treating cracks in a continuum framework, as well as the reportedly good predictions of the structural behavior of reinforced concrete members. On the other hand, by enriching the displacement field within each finite element crossed by the crack path, the embedded crack model is able to describe the effects of actual discontinuities (cracks). This paper presents a comparative study of the abilities of these 2D models to predict the mechanical behavior of reinforced concrete structures. Structural responses are compared with experimental results from the literature, including crack patterns, crack openings and rebar stresses predicted by both models.
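In a smeared model, a discrete crack opening is typically recovered by integrating the cracking strain over a characteristic length of the element (crack-band style). A one-line sketch with invented numbers, not the paper's specific recovery procedure:

```python
def crack_opening_smeared(cracking_strain, characteristic_length_mm):
    """Crack-band estimate of a discrete opening: w = eps_cr * h,
    where h is the element's characteristic length."""
    return cracking_strain * characteristic_length_mm

# a cracking strain of 2e-3 smeared over a 100 mm element -> w = 0.2 mm
print(crack_opening_smeared(2e-3, 100.0))  # 0.2
```

An embedded model carries the opening as an explicit enrichment degree of freedom instead, which is why no such back-calculation is needed there.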
Abstract:
The classical approach for acoustic imaging consists of beamforming, and produces the source distribution of interest convolved with the array point spread function. This convolution smears the image of interest, significantly reducing its effective resolution. Deconvolution methods have been proposed to enhance acoustic images and have produced significant improvements. Other proposals involve covariance fitting techniques, which avoid deconvolution altogether. However, in their traditional presentation, these enhanced reconstruction methods have very high computational costs, mostly because they have no means of efficiently transforming back and forth between a hypothetical image and the measured data. In this paper, we propose the Kronecker Array Transform (KAT), a fast separable transform for array imaging applications. Under the assumption of a separable array, it enables the acceleration of imaging techniques by several orders of magnitude with respect to the fastest previously available methods, and enables the use of state-of-the-art regularized least-squares solvers. Using the KAT, one can reconstruct images with higher resolutions than was previously possible and use more accurate reconstruction techniques, opening new and exciting possibilities for acoustic imaging.
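The claim that beamforming yields the source distribution convolved with the array point spread function can be illustrated with a conventional (Bartlett) beamformer on a uniform linear array: a single point source produces a smeared peak centered on its direction. A minimal sketch, with array geometry and source angle invented:

```python
import numpy as np

def conventional_beamform(R, steering):
    """Bartlett beamformer power map: P_k = v_k^H R v_k per steering vector."""
    return np.real(np.einsum('ks,st,kt->k', steering.conj(), R, steering))

M, d = 16, 0.5                        # sensors and spacing (in wavelengths)
angles = np.linspace(-90.0, 90.0, 181)
m = np.arange(M)
V = np.exp(2j * np.pi * d * np.outer(np.sin(np.radians(angles)), m)) / np.sqrt(M)

src = np.radians(20.0)                # one far-field source at +20 degrees
a = np.exp(2j * np.pi * d * np.sin(src) * m)[:, None]
R = a @ a.conj().T                    # noise-free spectral (covariance) matrix

P = conventional_beamform(R, V)
print(angles[np.argmax(P)])           # 20.0: the power map peaks at the source
```

The whole map P is the PSF (the array's angular response) centered at 20 degrees; with several sources the maps add, which is exactly the convolutional smearing the deconvolution and covariance-fitting methods try to undo.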
Abstract:
In Part I ["Fast Transforms for Acoustic Imaging - Part I: Theory," IEEE Transactions on Image Processing], we introduced the Kronecker array transform (KAT), a fast transform for imaging with separable arrays. Given a source distribution, the KAT produces the spectral matrix which would be measured by a separable sensor array. In Part II, we establish connections between the KAT, beamforming and 2-D convolutions, and show how these results can be used to accelerate classical and state-of-the-art array imaging algorithms. We also propose using the KAT to accelerate general-purpose regularized least-squares solvers. Using this approach, we avoid ill-conditioned deconvolution steps and obtain more accurate reconstructions than previously possible, while maintaining low computational costs. We also show how the KAT performs when imaging near-field source distributions, and illustrate the trade-off between accuracy and computational complexity. Finally, we show that separable designs can deliver accuracy competitive with multi-arm logarithmic spiral geometries, while having the computational advantages of the KAT.
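The acceleration that separable (Kronecker-structured) transforms exploit rests on a standard identity: a large Kronecker matrix-vector product factors into two small matrix products. A small numerical check of that identity with random matrices (illustration only, not the actual KAT array manifold):

```python
import numpy as np

# Workhorse identity for separable operators, with vec = column stacking:
#     (A kron B) @ vec(X) == vec(B @ X @ A.T)
# One O(mn * pq) product becomes two small GEMMs.
rng = np.random.default_rng(0)
A, B = rng.random((6, 4)), rng.random((5, 3))
X = rng.random((3, 4))                      # "image" in matrix form

slow = np.kron(A, B) @ X.ravel(order='F')   # direct: build the 30 x 12 operator
fast = (B @ X @ A.T).ravel(order='F')       # separable: never form the Kronecker
print(np.allclose(slow, fast))              # True
```

For realistic image and array sizes the Kronecker factor matrices are tiny compared with the full operator, which is where the orders-of-magnitude speedups quoted for the KAT come from.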
Abstract:
In this study, twenty hydroxylated and acetoxylated 3-phenylcoumarin derivatives were evaluated as inhibitors of immune complex-stimulated neutrophil oxidative metabolism and possible modulators of the inflammatory tissue damage found in type III hypersensitivity reactions. By using lucigenin- and luminol-enhanced chemiluminescence assays (CL-luc and CL-lum, respectively), we found that the 6,7-dihydroxylated and 6,7-diacetoxylated 3-phenylcoumarin derivatives were the most effective inhibitors. Different structural features of the other compounds determined CL-luc and/or CL-lum inhibition. The 2D-QSAR analysis suggested the importance of hydrophobic contributions to explain these effects. In addition, a statistically significant 3D-QSAR model built applying GRIND descriptors allowed us to propose a virtual receptor site considering pharmacophoric regions and mutual distances. Furthermore, the 3-phenylcoumarins studied were not toxic to neutrophils under the assessed conditions.
Abstract:
Simulations provide a powerful means to help gain the understanding of crustal fault system physics required to progress towards the goal of earthquake forecasting. Cellular Automata are efficient enough to probe system dynamics but their simplifications render interpretations questionable. In contrast, sophisticated elasto-dynamic models yield more convincing results but are too computationally demanding to explore phase space. To help bridge this gap, we develop a simple 2D elastodynamic model of parallel fault systems. The model is discretised onto a triangular lattice and faults are specified as split nodes along horizontal rows in the lattice. A simple numerical approach is presented for calculating the forces at medium and split nodes such that general nonlinear frictional constitutive relations can be modeled along faults. Single and multi-fault simulation examples are presented using a nonlinear frictional relation that is slip and slip-rate dependent in order to illustrate the model.
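A nonlinear friction law of the slip- and slip-rate-dependent kind used in the simulations above can be sketched as linear slip weakening plus a logarithmic rate term. A toy illustration only; all coefficients are invented and this is not the paper's constitutive relation:

```python
import math

def friction_coefficient(slip, slip_rate, mu_s=0.6, mu_d=0.3, d_c=0.01,
                         a=0.01, v_ref=1e-6):
    """Toy slip- and slip-rate-dependent friction coefficient:
    linear weakening from mu_s to mu_d over critical slip d_c,
    plus a logarithmic rate-strengthening term."""
    weakening = mu_d + (mu_s - mu_d) * max(0.0, 1.0 - slip / d_c)
    rate_term = a * math.log(1.0 + slip_rate / v_ref)
    return weakening + rate_term

print(friction_coefficient(0.0, 0.0))    # 0.6: static friction before any slip
print(friction_coefficient(0.02, 0.0))   # 0.3: fully weakened beyond d_c
```

In a split-node scheme, a law like this sets the shear traction that the fault-parallel force balance at each split node must satisfy at every time step.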
Abstract:
OBJECTIVE - To determine whether obesity increases platelet reactivity and thrombin activity in patients with type 2 diabetes plus stable coronary artery disease. RESEARCH DESIGN AND METHODS - We assessed platelet reactivity and markers of thrombin generation and activity in 193 patients from nine clinical sites of the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial. Blood taken at the time of enrollment was used to assay the concentrations of prothrombin fragment 1.2 (PT1.2, released when prothrombin is activated) and fibrinopeptide A (FPA, released when fibrinogen is cleaved). Platelet activation was identified with the use of flow cytometry in response to 0, 0.2, and 1 μmol/l adenosine diphosphate (ADP). RESULTS - Concentrations of FPA and PT1.2 and platelet activation in the absence of agonist were low. Greater BMI was associated with higher platelet reactivity in response to 1 μmol/l ADP as assessed by surface expression of P-selectin (r = 0.29, P < 0.0001), but not as reflected by the binding of fibrinogen to activated glycoprotein IIb-IIIa. BMI was not associated with concentrations of FPA or PT1.2. Platelet reactivity correlated negatively with A1C (P < 0.04), was not related to the concentration of triglycerides in blood, and did not correlate with the concentration of C-reactive protein. CONCLUSIONS - Among patients enrolled in this substudy of BARI 2D, greater BMI was associated with higher platelet reactivity at the time of enrollment. Our results suggest that obesity and the insulin resistance that accompanies obesity may influence platelet reactivity in patients with type 2 diabetes.
Abstract:
We evaluated the associations between glycemic therapies and the prevalence of diabetic peripheral neuropathy (DPN) at baseline among participants in the Bypass Angioplasty Revascularization Investigation 2 Diabetes (BARI 2D) trial on medical and revascularization therapies for coronary artery disease (CAD) and on insulin-sensitizing vs. insulin-providing treatments for diabetes. A total of 2,368 patients with type 2 diabetes and CAD were evaluated. DPN was defined as a clinical examination score > 2 using the Michigan Neuropathy Screening Instrument (MNSI). DPN odds ratios across the different glycemic therapy groups were evaluated by multiple logistic regression adjusted for multiple covariates, including age, sex, hemoglobin A1c (HbA1c), and diabetes duration. Fifty-one percent of BARI 2D subjects with valid baseline characteristics and MNSI scores had DPN. After adjusting for all variables, use of insulin was significantly associated with DPN (OR = 1.57, 95% CI: 1.15-2.13). Patients on a sulfonylurea (SU) or a combination of SU/metformin (Met)/thiazolidinediones (TZD) had marginally higher rates of DPN than the Met/TZD group. This cross-sectional study in a cohort of patients with type 2 diabetes and CAD showed an association of insulin use with higher DPN prevalence, independent of disease duration, glycemic control, and other characteristics. Causality between a glycemic control strategy and DPN cannot be evaluated in this cross-sectional study, but continued assessment of DPN and the randomized therapies in the BARI 2D trial may provide further explanations of the development of DPN.