35 results for Weld data sensing
at Université de Lausanne, Switzerland
Abstract:
Yosemite Valley poses significant rockfall hazard and related risk due to its glacially steepened walls and approximately 4 million visitors annually. To assess rockfall hazard, it is necessary to evaluate the geologic structure that contributes to the destabilization of rockfall sources and locate the most probable future source areas. Coupling new remote sensing techniques (Terrestrial Laser Scanning, Aerial Laser Scanning) and traditional field surveys, we investigated the regional geologic and structural setting, the orientation of the primary discontinuity sets for large areas of Yosemite Valley, and the specific discontinuity sets present at active rockfall sources. This information, combined with better understanding of the geologic processes that contribute to the progressive destabilization and triggering of granitic rock slabs, contributes to a more accurate rockfall susceptibility assessment for Yosemite Valley and elsewhere.
Abstract:
Virulence factors of Pseudomonas aeruginosa include hydrogen cyanide (HCN). This secondary metabolite is maximally produced at low oxygen tension and high cell densities during the transition from exponential to stationary growth phase. The hcnABC genes encoding HCN synthase were identified on a genomic fragment complementing an HCN-deficient mutant of P. aeruginosa PAO1. The hcnA promoter was found to be controlled by the FNR-like anaerobic regulator ANR and by the quorum-sensing regulators LasR and RhlR. Primer extension analysis revealed two transcription starts, T1 and T2, separated by 29 bp. Their function was confirmed by transcriptional lacZ fusions. The promoter sequence displayed an FNR/ANR box at -42.5 bp upstream of T2 and a lux box centered around -42.5 bp upstream of T1. Expression of the hcn genes was completely abolished when this lux box was deleted or inactivated by two point mutations in conserved nucleotides. The lux box was recognized by both LasR [activated by N-(3-oxododecanoyl)-homoserine lactone] and RhlR (activated by N-butanoyl-homoserine lactone), as shown by expression experiments performed in quorum-sensing-defective P. aeruginosa mutants and in the N-acyl-homoserine lactone-negative heterologous host P. fluorescens CHA0. A second, less conserved lux box lying 160 bp upstream of T1 seems to account for enhanced quorum-sensing-dependent expression. Without LasR and RhlR, ANR could not activate the hcn promoter. Together, these data indicate that expression of the hcn promoter from T1 can occur under quorum-sensing control alone. Enhanced expression from T2 appears to rely on a synergistic action between LasR, RhlR, and ANR.
Compressed Sensing Single-Breath-Hold CMR for Fast Quantification of LV Function, Volumes, and Mass.
Abstract:
OBJECTIVES: The purpose of this study was to compare a novel compressed sensing (CS)-based single-breath-hold multislice magnetic resonance cine technique with the standard multi-breath-hold technique for the assessment of left ventricular (LV) volumes and function. BACKGROUND: Cardiac magnetic resonance is generally accepted as the gold standard for LV volume and function assessment. LV function is one of the most important cardiac parameters for diagnosis and the monitoring of treatment effects. Recently, CS techniques have emerged as a means to accelerate data acquisition. METHODS: The prototype CS cine sequence acquires 3 long-axis and 4 short-axis cine loops in 1 single breath-hold (temporal/spatial resolution: 30 ms/1.5 × 1.5 mm²; acceleration factor 11.0) to measure left ventricular ejection fraction (LVEFCS) as well as LV volumes and LV mass using LV model-based 4D software. For comparison, a conventional stack of multi-breath-hold cine images was acquired (temporal/spatial resolution 40 ms/1.2 × 1.6 mm²). As a reference for the left ventricular stroke volume (LVSV), aortic flow was measured by phase-contrast acquisition. RESULTS: In 94% of the 33 participants (12 volunteers: mean age 33 ± 7 years; 21 patients: mean age 63 ± 13 years with different LV pathologies), the image quality of the CS acquisitions was excellent. LVEFCS and LVEFstandard were similar (48.5 ± 15.9% vs. 49.8 ± 15.8%; p = 0.11; r = 0.96; slope 0.97; p < 0.00001). Agreement of LVSVCS with aortic flow was superior to that of LVSVstandard (overestimation vs. aortic flow: 5.6 ± 6.5 ml vs. 16.2 ± 11.7 ml, respectively; p = 0.012) with less variability (r = 0.91; p < 0.00001 for the CS technique vs. r = 0.71; p < 0.01 for the standard technique). The intraobserver and interobserver agreement for all CS parameters was good (slopes 0.93 to 1.06; r = 0.90 to 0.99).
CONCLUSIONS: The results demonstrated the feasibility of applying the CS strategy to evaluate LV function and volumes with high accuracy in patients. The single-breath-hold CS strategy has the potential to replace the multi-breath-hold standard cardiac magnetic resonance technique.
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e., when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of these models concerns the incorporation of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and models' inputs. This problem is approached using different nonlinear feature selection/feature extraction tools.
To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2, 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional geostatistical models.
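As a minimal illustration of the learning-from-data setting described above (supervised classification of geo-feature vectors), the sketch below uses a k-nearest-neighbour rule on hypothetical (x, y, elevation) samples. The data, labels, and function names are invented for illustration; in practice the SVM and ANN models discussed in the report would replace this simple classifier.

```python
import math

# Hypothetical geo-feature samples: (x, y, elevation) -> soil class
train = [
    ((0.0, 0.0, 100.0), "sand"),
    ((0.1, 0.2, 110.0), "sand"),
    ((5.0, 5.0, 300.0), "clay"),
    ((5.2, 4.8, 310.0), "clay"),
]

def knn_predict(query, samples, k=3):
    """Classify a geo-feature vector by majority vote of its k nearest neighbours."""
    nearest = sorted(samples, key=lambda s: math.dist(query, s[0]))[:k]
    votes = [label for _, label in nearest]
    return max(set(votes), key=votes.count)

print(knn_predict((0.2, 0.1, 105.0), train))  # -> sand
```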
Abstract:
Typically at dawn on a hot summer day, land plants need precise molecular thermometers to sense harmless increments in the ambient temperature to induce a timely heat shock response (HSR) and accumulate protective heat shock proteins in anticipation of harmful temperatures at mid-day. Here, we found that the cyclic nucleotide-gated calcium channel (CNGC) CNGCb gene from Physcomitrella patens and its Arabidopsis thaliana ortholog CNGC2 encode a component of cyclic nucleotide-gated Ca²⁺ channels that act as the primary thermosensors of land plant cells. Disruption of CNGCb or CNGC2 produced a hyper-thermosensitive phenotype, giving rise to an HSR and acquired thermotolerance at significantly milder heat-priming treatments than in wild-type plants. In an aequorin-expressing moss, CNGCb loss-of-function caused a hyper-thermoresponsive Ca²⁺ influx and altered Ca²⁺ signaling. Patch clamp recordings on moss protoplasts showed the presence of three distinct thermoresponsive Ca²⁺ channels in wild-type cells. Deletion of CNGCb led to a total absence of one and increased the open probability of the remaining two thermoresponsive Ca²⁺ channels. Thus, CNGC2 and CNGCb are expected to form heteromeric Ca²⁺ channels with other related CNGCs. These channels in the plasma membrane respond to increments in the ambient temperature by triggering an optimal HSR, leading to the onset of plant acquired thermotolerance.
Abstract:
PURPOSE: Respiratory motion correction remains a challenge in coronary magnetic resonance imaging (MRI), and current techniques, such as navigator gating, suffer from sub-optimal scan efficiency and ease-of-use. To overcome these limitations, an image-based self-navigation technique is proposed that uses "sub-images" and compressed sensing (CS) to obtain translational motion correction in 2D. The method was preliminarily implemented as a 2D technique and tested for feasibility for targeted coronary imaging. METHODS: During a 2D segmented radial k-space data acquisition, heavily undersampled sub-images were reconstructed from the readouts collected during each cardiac cycle. These sub-images may then be used for respiratory self-navigation. Alternatively, a CS reconstruction may be used to create these sub-images, so as to partially compensate for the heavy undersampling. Both approaches were quantitatively assessed using simulations and in vivo studies, and the resulting self-navigation strategies were then compared to conventional navigator gating. RESULTS: Sub-images reconstructed using CS showed a lower artifact level than sub-images reconstructed without CS. As a result, the final image quality was significantly better when using CS-assisted self-navigation as opposed to the non-CS approach. Moreover, while both self-navigation techniques led to a 69% scan time reduction (as compared to navigator gating), there was no significant difference in image quality between the CS-assisted self-navigation technique and conventional navigator gating, despite the significant decrease in scan time. CONCLUSIONS: CS-assisted self-navigation using 2D translational motion correction demonstrated the feasibility of producing coronary MRA data with image quality comparable to that obtained with conventional navigator gating, and does so without the use of additional acquisitions or motion modeling, while still allowing for 100% scan efficiency and an improved ease-of-use.
In conclusion, compressed sensing may become a critical adjunct for 2D translational motion correction in free-breathing cardiac imaging with high spatial resolution. An expansion to modern 3D approaches is now warranted.
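The sparsity-enforcing step at the heart of most compressed sensing reconstructions is the soft-thresholding (l1 proximal) operator; a minimal sketch with made-up coefficient values is shown below. A full CS reconstruction such as ISTA alternates this shrinkage with gradient steps enforcing consistency with the acquired k-space data; the abstracts above do not specify the reconstruction algorithm used.

```python
def soft_threshold(x, t):
    """Proximal operator of the l1 norm: shrink x toward zero by t."""
    if x > t:
        return x - t
    if x < -t:
        return x + t
    return 0.0

# Hypothetical transform coefficients of an undersampled image: large entries
# carry the signal; small entries are mostly aliasing/noise and are zeroed out.
coeffs = [5.0, -0.2, 0.1, -3.0, 0.05]
print([soft_threshold(c, 0.5) for c in coeffs])  # -> [4.5, 0.0, 0.0, -2.5, 0.0]
```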
Abstract:
The 2008 Data Fusion Contest organized by the IEEE Geoscience and Remote Sensing Data Fusion Technical Committee deals with the classification of high-resolution hyperspectral data from an urban area. Unlike in the previous issues of the contest, the goal was not only to identify the best algorithm but also to provide a collaborative effort: the decision fusion of the best individual algorithms aimed at further improving the classification performance, and the best algorithms were ranked according to their relative contribution to the decision fusion. This paper presents the five awarded algorithms and the conclusions of the contest, stressing the importance of decision fusion, dimension reduction, and supervised classification methods, such as neural networks and support vector machines.
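A minimal sketch of the decision-fusion idea: per-pixel majority voting over the label maps produced by several classifiers. The class names and toy label maps below are invented for illustration; the contest's actual fusion scheme, which also ranked algorithms by their contribution, is more elaborate than a plain vote.

```python
from collections import Counter

def majority_fusion(label_maps):
    """Fuse per-classifier label maps by pixel-wise majority vote."""
    return [Counter(pixel).most_common(1)[0][0] for pixel in zip(*label_maps)]

# Hypothetical outputs of three classifiers over the same four pixels
svm_map = ["road", "grass", "roof", "tree"]
ann_map = ["road", "tree",  "roof", "tree"]
knn_map = ["road", "grass", "road", "grass"]
print(majority_fusion([svm_map, ann_map, knn_map]))
# -> ['road', 'grass', 'roof', 'tree']
```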
Abstract:
The 2009-2010 Data Fusion Contest organized by the Data Fusion Technical Committee of the IEEE Geoscience and Remote Sensing Society was focused on the detection of flooded areas using multi-temporal and multi-modal images. Both high spatial resolution optical and synthetic aperture radar data were provided. The goal was not only to identify the best algorithms (in terms of accuracy), but also to investigate the further improvement derived from decision fusion. This paper presents the four awarded algorithms and the conclusions of the contest, investigating both supervised and unsupervised methods and the use of multi-modal data for flood detection. Interestingly, a simple unsupervised change detection method provided accuracy similar to that of the supervised approaches, and a digital elevation model-based predictive method yielded a comparable projected change detection map without using post-event data.
Abstract:
Nowadays, the joint exploitation of images acquired daily by remote sensing instruments and of images available from archives allows a detailed monitoring of the transitions occurring at the surface of the Earth. These modifications of the land cover generate spectral discrepancies that can be detected via the analysis of remote sensing images. Independently of the origin of the images and of the type of surface change, correct processing of such data requires the adoption of flexible, robust and possibly nonlinear methods, to correctly account for the complex statistical relationships characterizing the pixels of the images. This Thesis deals with the development and the application of advanced statistical methods for multi-temporal optical remote sensing image processing tasks. Three different families of machine learning models have been explored and fundamental solutions for change detection problems are provided. In the first part, change detection with user supervision has been considered. In a first application, a nonlinear classifier has been applied with the intent of precisely delineating flooded regions from a pair of images. In a second case study, the spatial context of each pixel has been injected into another nonlinear classifier to obtain a precise mapping of new urban structures. In both cases, the user provides the classifier with examples of what they believe has changed or not. In the second part, a completely automatic and unsupervised method for precise binary detection of changes has been proposed. The technique allows a very accurate mapping without any user intervention, which is particularly useful when readiness and reaction times of the system are a crucial constraint. In the third part, the problem of statistical distributions shifting between acquisitions is studied. Two approaches to transform the pair of bi-temporal images and reduce their differences unrelated to changes in land cover are studied.
The methods align the distributions of the images, so that the pixel-wise comparison can be carried out with higher accuracy. Furthermore, the second method can deal with images from different sensors, regardless of the dimensionality of the data or of the spectral information content. This opens the doors to possible solutions for a crucial problem in the field: detecting changes when the images have been acquired by two different sensors.
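A minimal sketch of the distribution-alignment idea in its simplest form: standardizing each acquisition to zero mean and unit variance removes a global gain/offset difference between the two images, after which the pixel-wise comparison becomes meaningful. All pixel values here are hypothetical, and the thesis's methods handle far more general distribution shifts than this moment matching.

```python
def standardize(pixels):
    """Zero-mean, unit-variance rescaling of one acquisition."""
    n = len(pixels)
    mean = sum(pixels) / n
    sd = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    return [(p - mean) / sd for p in pixels]

# Hypothetical bi-temporal band: image 2 = 2 * image 1 + 5 (sensor difference),
# except the last pixel, where the land cover genuinely changed.
img1 = [10, 12, 11, 14, 13, 10, 12, 11, 13, 12]
img2 = [2 * p + 5 for p in img1]
img2[-1] = 60.0
diff = [abs(a - b) for a, b in zip(standardize(img1), standardize(img2))]
print(max(range(len(diff)), key=diff.__getitem__))  # -> 9 (the changed pixel)
```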
Abstract:
In the preceding article, we demonstrated that activation of the hepatoportal glucose sensor led to a paradoxical development of hypoglycemia that was associated with increased glucose utilization by a subset of tissues. In this study, we tested whether GLUT2 plays a role in the portal glucose-sensing system that is similar to its involvement in pancreatic beta-cells. Awake RIPGLUT1 x GLUT2-/- and control mice were infused with glucose through the portal (Po-) or the femoral (Fe-) vein for 3 h at a rate equivalent to the endogenous glucose production rate. Blood glucose and plasma insulin concentrations were continuously monitored. Glucose turnover, glycolysis, and glycogen synthesis rates were determined by the ³H-glucose infusion technique. We showed that portal glucose infusion in RIPGLUT1 x GLUT2-/- mice did not induce the hypoglycemia observed in control mice but, in contrast, led to a transient hyperglycemic state followed by a return to normoglycemia; this glycemic pattern was similar to that observed in control Fe-mice and RIPGLUT1 x GLUT2-/- Fe-mice. Plasma insulin profiles during the infusion period were similar in control and RIPGLUT1 x GLUT2-/- Po- and Fe-mice. The lack of hypoglycemia development in RIPGLUT1 x GLUT2-/- mice was not due to the absence of GLUT2 in the liver. Indeed, reexpression by transgenesis of this transporter in hepatocytes did not restore the development of hypoglycemia after initiating portal vein glucose infusion. In the absence of GLUT2, glucose turnover increased in Po-mice to the same extent as that in RIPGLUT1 x GLUT2-/- or control Fe-mice. Finally, co-infusion of somatostatin with glucose prevented development of hypoglycemia in control Po-mice, but it did not affect the glycemia or insulinemia of RIPGLUT1 x GLUT2-/- Po-mice.
Together, our data demonstrate that GLUT2 is required for the function of the hepatoportal glucose sensor and that somatostatin could inhibit the glucose signal by interfering with GLUT2-expressing sensing units.
Abstract:
Data mining can be defined as the extraction of previously unknown and potentially useful information from large datasets. The main principle is to devise computer programs that run through databases and automatically seek deterministic patterns. It is applied in different fields of application, e.g., remote sensing, biometry, speech recognition, but has seldom been applied to forensic case data. The intrinsic difficulty related to the use of such data lies in its heterogeneity, which comes from the many different sources of information. The aim of this study is to highlight potential uses of pattern recognition that would provide relevant results from a criminal intelligence point of view. The role of data mining within a global crime analysis methodology is to detect all types of structures in a dataset. Once filtered and interpreted, those structures can point to previously unseen criminal activities. The interpretation of patterns for intelligence purposes is the final stage of the process. It allows the researcher to validate the whole methodology and to refine each step if necessary. An application to cutting agents found in illicit drug seizures was performed. A combinatorial approach was taken, using the presence and absence of products. Methods from graph theory were used to extract patterns in data constituted by links between products and the place and date of seizure. A data mining process carried out using graphing techniques is called "graph mining". Patterns were detected that had to be interpreted and compared with preliminary knowledge to establish their relevance. The illicit drug profiling process is actually an intelligence process that uses preliminary illicit drug classes to classify new samples. Methods proposed in this study could be used a priori to compare structures from preliminary and post-detection patterns.
This new knowledge of a repeated structure may provide valuable complementary information to profiling and become a source of intelligence.
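A minimal sketch of the graph-mining setting described above: cutting agents are linked whenever they co-occur in a seizure, and the simplest structure to extract is the set of connected components. The seizure records below are entirely hypothetical, and the study's actual pattern-extraction methods are richer than this breadth-first search.

```python
from collections import defaultdict, deque

# Hypothetical seizures: (place, date, cutting agents detected)
seizures = [
    ("Lausanne", "2024-01", {"caffeine", "paracetamol"}),
    ("Geneva",   "2024-02", {"paracetamol", "lidocaine"}),
    ("Bern",     "2024-03", {"levamisole"}),
]

# Link two agents whenever they occur in the same seizure
graph = defaultdict(set)
for _, _, agents in seizures:
    for a in agents:
        graph[a] |= agents - {a}

def components(graph):
    """Connected components by breadth-first search."""
    seen, comps = set(), []
    for start in list(graph):
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(sorted(sorted(c) for c in components(graph)))
# -> [['caffeine', 'lidocaine', 'paracetamol'], ['levamisole']]
```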
Abstract:
Recently, kernel-based Machine Learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot-spot detection and monitoring network optimization, are discussed as well.
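As a minimal, self-contained illustration of kernel-based mapping of continuous environmental data, the sketch below computes a Gaussian-kernel (Nadaraya-Watson) weighted average over hypothetical measurement locations. The coordinates, values, and function names are invented; the kernel methods in the paper (e.g., support vector regression) share the same kernel ingredient but fit a regularized model rather than a plain weighted average.

```python
import math

def rbf(a, b, sigma=1.0):
    """Gaussian (RBF) kernel between two 2-D coordinates."""
    return math.exp(-((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) / (2 * sigma ** 2))

def kernel_map(query, samples, sigma=1.0):
    """Nadaraya-Watson estimate: kernel-weighted average of measured values."""
    weights = [rbf(query, xy, sigma) for xy, _ in samples]
    return sum(w * v for w, (_, v) in zip(weights, samples)) / sum(weights)

# Hypothetical soil-contamination measurements at (x, y) locations
samples = [((0.0, 0.0), 5.0), ((1.0, 0.0), 7.0), ((10.0, 10.0), 1.0)]
print(round(kernel_map((0.5, 0.0), samples), 6))  # -> 6.0
```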
Abstract:
Waveform-based tomographic imaging of crosshole georadar data is a powerful method to investigate the shallow subsurface because of its ability to provide images of electrical properties in near-surface environments with unprecedented spatial resolution. A critical issue with waveform inversion is the a priori unknown source signal. Indeed, the estimation of the source pulse is notoriously difficult but essential for the effective application of this method. Here, we explore the viability and robustness of a recently proposed deconvolution-based procedure to estimate the source pulse during waveform inversion of crosshole georadar data, where changes in wavelet shape with location as a result of varying near-field conditions and differences in antenna coupling may be significant. Specifically, we examine whether a single, average estimated source current function can adequately represent the pulses radiated at all transmitter locations during a crosshole georadar survey, or whether a separate source wavelet estimation should be performed for each transmitter gather. Tests with synthetic and field data indicate that remarkably good tomographic reconstructions can be obtained using a single estimated source pulse when moderate to strong variability exists in the true source signal with antenna location. Only in the case of very strong variability in the true source pulse are tomographic reconstructions clearly improved by estimating a different source wavelet for each transmitter location.
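A minimal, hypothetical sketch of deconvolution-based source estimation: if the medium's impulse response is (approximately) known, dividing the data spectrum by the response spectrum, stabilized by a small water-level term, recovers the source pulse. All signals below are synthetic toys; in the waveform-inversion workflow of the abstract, the response would come from forward modelling with the current subsurface model.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

# Toy source pulse and toy medium impulse response (both hypothetical)
source = [1.0, 0.6, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0]
response = [0.0, 1.0, 0.0, 0.0, -0.5, 0.0, 0.0, 0.0]

# Synthesize the recorded trace: multiplication in the frequency domain
S, R = dft(source), dft(response)
trace = idft([s * r for s, r in zip(S, R)])

# Water-level deconvolution: estimate the source from trace and response
eps = 1e-9
D = dft(trace)
S_est = [d * r.conjugate() / (abs(r) ** 2 + eps) for d, r in zip(D, R)]
print(max(abs(a - b) for a, b in zip(idft(S_est), source)) < 1e-6)  # -> True
```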
Abstract:
In this paper, we propose two active learning algorithms for semiautomatic definition of training samples in remote sensing image classification. Based on predefined heuristics, the classifier ranks the unlabeled pixels and automatically chooses those that are considered the most valuable for its improvement. Once the pixels have been selected, the analyst labels them manually and the process is iterated. Starting with a small and nonoptimal training set, the model itself builds the optimal set of samples which minimizes the classification error. We have applied the proposed algorithms to a variety of remote sensing data, including very high resolution and hyperspectral images, using support vector machines. Experimental results confirm the consistency of the methods. The required number of training samples can be reduced to 10% using the methods proposed, reaching the same level of accuracy as larger data sets. A comparison with a state-of-the-art active learning method, margin sampling, is provided, highlighting advantages of the methods proposed. The effect of spatial resolution and separability of the classes on the quality of the selection of pixels is also discussed.
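A minimal sketch of the margin-sampling heuristic that the paper uses as a state-of-the-art comparison: rank the unlabeled pool by distance to the decision boundary and query the closest samples for manual labeling. Here a hypothetical one-dimensional classifier score with a fixed boundary stands in for the SVM decision function.

```python
def margin_sampling(pool, boundary, k):
    """Return the k unlabeled samples closest to the decision boundary."""
    return sorted(pool, key=lambda x: abs(x - boundary))[:k]

# Hypothetical classifier scores for five unlabeled pixels (boundary at 0.5)
pool = [0.05, 0.48, 0.9, 0.2, 0.55]
queried = margin_sampling(pool, boundary=0.5, k=2)
print(queried)  # -> [0.48, 0.55]
```

The queried samples would then be labeled by the analyst, added to the training set, and the classifier retrained, iterating until the error stops improving.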