33 results for Coded aperture compressive sensing
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
During the last decade, interest in space-borne Synthetic Aperture Radars (SAR) for remote sensing applications has grown, as testified by the number of recent and forthcoming missions such as TerraSAR-X, RADARSAT-2, COSMO-SkyMed, TanDEM-X and the Spanish SEOSAR/PAZ. In this sense, this thesis proposes to study and analyze the performance of state-of-the-art space-borne SAR systems with modes able to provide Moving Target Indication (MTI) capabilities, i.e. moving object detection and estimation. The research will focus on MTI processing techniques as well as on the architecture and/or configuration of the SAR instrument, setting out the limitations of current systems with MTI capabilities and proposing efficient solutions for future missions. Two European projects, to which the Universitat Politècnica de Catalunya provides support, are an excellent framework for the research activities suggested in this thesis. The NEWA project proposes a potential European space-borne radar system with MTI capabilities in order to fulfill the upcoming European security policies. This thesis will critically review the state-of-the-art MTI processing techniques as well as the readiness and maturity level of the developed capabilities. For each of the techniques, a performance analysis will be carried out based on the available technologies, deriving a roadmap and identifying the different technological gaps. In line with this study, a simulator tool will be developed in order to validate and evaluate different MTI techniques on the basis of a flexible space-borne radar configuration. The calibration of a SAR system is mandatory for the accurate formation of SAR images and turns out to be critical in advanced operation modes such as MTI. In this sense, the SEOSAR/PAZ project proposes the study and estimation of the radiometric budget.
This thesis will also focus on an exhaustive analysis of the radiometric budget, considering the current calibration concepts and their possible limitations. In the framework of this project, a key point will be the study of the Dual Receive Antenna (DRA) mode, which provides MTI capabilities to the mission. An additional aspect under study is the applicability of Digital Beamforming to multichannel and/or multistatic radar platforms, which constitute potential solutions for the NEWA project, with the aim of fully exploiting their capabilities jointly with MTI techniques.
Abstract:
Two-dimensional aperture synthesis radiometry is the technology selected for ESA's SMOS mission to provide high-resolution L-band radiometric imagery. The array topology is a Y-shaped structure. The position and number of redundant elements to minimise instrument degradation in case of element failure(s) are studied.
Abstract:
Positioning a robot with respect to objects by using data provided by a camera is a well-known technique called visual servoing. In order to perform a task, the object must exhibit visual features which can be extracted from different points of view. Visual servoing is therefore object-dependent, as it relies on the object's appearance. Performing the positioning task is thus not possible in the presence of non-textured objects, or of objects for which extracting visual features is too complex or too costly. This paper proposes a solution to tackle this limitation inherent in current visual servoing techniques. Our proposal is based on the coded structured light approach as a reliable and fast way to solve the correspondence problem. In this case, a coded light pattern is projected, providing robust visual features independently of the object's appearance.
Abstract:
Automatically obtaining the 3D profile of objects is one of the most important issues in computer vision. With this information, a large number of applications become feasible: from visual inspection of industrial parts to 3D reconstruction of the environment for mobile robots. In order to obtain 3D data, range finders can be used. The coded structured light approach is one of the most widely used techniques to retrieve 3D information of an unknown surface. An overview of the existing techniques, as well as a new classification of patterns for structured light sensors, is presented. Such systems belong to the group of active triangulation methods, which are based on projecting a light pattern and imaging the illuminated scene from one or more points of view. Since the patterns are coded, correspondences between points of the image(s) and points of the projected pattern can be easily found. Once correspondences are found, a classical triangulation strategy between camera(s) and projector device leads to the reconstruction of the surface. Advantages and constraints of the different patterns are discussed.
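In the simplest calibrated case, the triangulation step described above reduces to intersecting the camera's viewing ray with the plane of light that produced the decoded pattern point. A minimal sketch, assuming a pinhole camera at the origin with a known inverse intrinsic matrix and a projector slit plane expressed in the camera frame; all names and values are illustrative, not taken from the paper:

```python
def triangulate(pixel, K_inv, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with a projector light plane.

    pixel   : (u, v) image coordinates of a decoded pattern point
    K_inv   : 3x3 inverse camera intrinsic matrix (row-major nested lists)
    plane_n : normal (nx, ny, nz) of the projected slit plane, camera frame
    plane_d : plane offset, so the plane satisfies n . X = d
    Returns the 3D surface point in camera coordinates.
    """
    u, v = pixel
    # Back-project the pixel to a viewing ray direction through the camera centre.
    ray = [K_inv[i][0] * u + K_inv[i][1] * v + K_inv[i][2] for i in range(3)]
    # Ray parameter at the intersection with the plane n . (t * ray) = d.
    t = plane_d / sum(n * r for n, r in zip(plane_n, ray))
    return [t * r for r in ray]

# Toy check: identity intrinsics, principal ray, plane z = 2.
point = triangulate((0, 0), [[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0, 0, 1), 2.0)
```

With real data, `K_inv` comes from camera calibration, and each decoded codeword indexes the calibrated plane of the corresponding projected slit.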
Abstract:
This paper presents the implementation details of a coded structured light system for rapid shape acquisition of unknown surfaces. Such techniques are based on the projection of patterns onto a measuring surface and grabbing images of every projection with a camera. By analyzing the pattern deformations that appear in the images, 3D information of the surface can be calculated. The implemented technique projects a unique pattern, so it can be used to measure moving surfaces. The structure of the pattern is a grid where the color of the slits is selected using a De Bruijn sequence. Moreover, since both axes of the pattern are coded, the cross points of the grid have two codewords (which permits reconstructing them very precisely), while pixels belonging to horizontal and vertical slits also have a codeword. Different sets of colors are used for horizontal and vertical slits, so the resulting pattern is invariant to rotation. Therefore, the alignment constraint between camera and projector assumed by many authors is not necessary.
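The De Bruijn coding mentioned above guarantees that every window of n consecutive slit colors occurs exactly once, so a small neighbourhood of slits uniquely identifies its position in the pattern. A sketch using the standard recursive De Bruijn generator (alphabet size k, window length n), mapped onto a three-color alphabet; the palette is an illustrative assumption, not the paper's actual color choice:

```python
def de_bruijn(k, n):
    """Return a cyclic De Bruijn sequence over {0..k-1}: every length-n
    window (taken cyclically) occurs exactly once."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        # Classic Lyndon-word construction of the De Bruijn sequence.
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

# Illustrative palette: 3 colors, every run of 3 consecutive slits is unique.
colors = "RGB"
slit_colors = [colors[s] for s in de_bruijn(3, 3)]  # 3^3 = 27 slits
```

Decoding then only needs to observe n adjacent slit colors to recover an absolute position along the pattern axis, which is what makes a single projected pattern sufficient for moving surfaces.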
Abstract:
In a search for new sensor systems and new methods for underwater vehicle positioning based on visual observation, this paper presents a computer vision system based on coded light projection. 3D information is extracted from an underwater scene and used to test obstacle avoidance behaviour. In addition, the main ideas for achieving stabilisation of the vehicle in front of an object are presented.
Abstract:
The absolute necessity of obtaining 3D information of structured and unknown environments in autonomous navigation considerably reduces the set of sensors that can be used. Knowing, at each instant, the position of the mobile robot with respect to the scene is indispensable; furthermore, this information must be obtained in the least possible computing time. Stereo vision is an attractive and widely used method, but it is rather limited for building fast 3D surface maps due to the correspondence problem. The spatial and temporal correspondence among images can be alleviated using a method based on structured light: the relationship can be found directly by codifying the projected light, so that each imaged region of the projected pattern carries the information needed to solve the correspondence problem. We present the most significant techniques of recent years concerning the coded structured light method.
Abstract:
This letter presents a comparison between three Fourier-based motion compensation (MoCo) algorithms for airborne synthetic aperture radar (SAR) systems. These algorithms circumvent the limitations of conventional MoCo, namely the assumption of a reference height and the beam-center approximation. All these approaches rely on the inherent time–frequency relation in SAR systems but exploit it differently, with the consequent differences in accuracy and computational burden. After a brief overview of the three approaches, the performance of each algorithm is analyzed with respect to azimuthal topography accommodation, angle accommodation, and maximum frequency of track deviations with which the algorithm can cope. Also, an analysis of the computational complexity is presented. Quantitative results are shown using real data acquired by the Experimental SAR system of the German Aerospace Center (DLR).
Abstract:
Nanomotors are nanoscale devices capable of converting energy into movement and forces. Among them, self-propelled nanomotors offer considerable promise for developing novel bioanalytical and biosensing strategies based on the direct isolation of target biomolecules, or on changes in their movement in the presence of target analytes. The main achievement of this project is the development of receptor-functionalized nanomotors that offer direct and rapid target detection, isolation and transport from raw biological samples without preparatory and washing steps. For example, microtube engines functionalized with aptamer, antibody, lectin and enzyme receptors were used for the direct isolation of analytes of biomedical interest, including proteins and whole cells, among others. A target protein was also isolated from a complex sample using an antigen-functionalized microengine navigating in the reservoirs of a lab-on-a-chip device. The new nanomotor-based biomarker detection strategy not only offers a highly sensitive, rapid, simple and low-cost alternative for the isolation and transport of target molecules, but also represents a new dimension of analytical information based on motion. The recognition events can be easily visualized by optical microscope (without any sophisticated analytical instrument) to reveal the target's presence and concentration. The use of artificial nanomachines has proven useful not only for (bio)recognition and (bio)transport but also for the detection and remediation of environmental contamination. In this context, micromotors modified with a superhydrophobic layer effectively interacted with, captured, transported and removed oil droplets from oil-contaminated samples.
Finally, a unique micromotor-based strategy for water-quality testing, which mimics live-fish water-quality testing and is based on changes in the propulsion behavior of artificial biocatalytic microswimmers in the presence of aquatic pollutants, was also developed. The attractive features of the new micromachine-based target isolation and signal transduction protocols developed in this project offer numerous potential applications in biomedical diagnostics, environmental monitoring, and forensic analysis.
Abstract:
Report for the scientific sojourn carried out at the Institute for Computational Molecular Science of Temple University, United States, from 2010 to 2012. Two-component systems (TCS) are used by pathogenic bacteria to sense the environment within a host and activate mechanisms related to virulence and antimicrobial resistance. A prototypical example is the PhoQ/PhoP system, which is the major regulator of virulence in Salmonella. Hence, PhoQ is an attractive target for the design of new antibiotics against foodborne diseases. Inhibition of PhoQ-mediated bacterial virulence does not result in growth inhibition, presenting less selective pressure for the generation of antibiotic resistance. Moreover, PhoQ is a histidine kinase (HK), and HKs are absent in animals. Nevertheless, the design of satisfactory HK inhibitors has proven to be a challenge. To compete with intracellular ATP concentrations, the affinity of an HK inhibitor must be in the micromolar-nanomolar range, whereas the current lead compounds have at best millimolar affinities. Moreover, drug selectivity depends on the conformation of a highly variable loop, referred to as the "ATP-lid", which is difficult to study by X-ray crystallography due to its flexibility. I have investigated the binding of different HK inhibitors to PhoQ. In particular, all-atom molecular dynamics simulations have been combined with enhanced sampling techniques in order to provide structural and dynamic information on the conformation of the ATP-lid. Transient interactions between these drugs and the ATP-lid have been identified, and the free energy of the different binding modes has been estimated. The results obtained pinpoint the importance of protein flexibility in HK-inhibitor binding, and constitute a first step towards developing more potent and selective drugs.
The computational resources of the hosting institution as well as the experience of the members of the group in drug binding and free energy methods have been crucial to carry out this work.
Abstract:
The Silver Code (SilC) was originally discovered in [1–4] for 2×2 multiple-input multiple-output (MIMO) transmission. It has a non-vanishing minimum determinant of 1/7, slightly lower than the Golden code, but is fast-decodable, i.e., it allows reduced-complexity maximum likelihood decoding [5–7]. In this paper, we present a multidimensional trellis-coded modulation scheme for MIMO systems [11] based on set partitioning of the Silver Code, named Silver Space-Time Trellis Coded Modulation (SST-TCM). This lattice set partitioning is designed specifically to increase the minimum determinant. The branches of the outer trellis code are labeled with these partitions. The Viterbi algorithm is applied for trellis decoding, while the branch metrics are computed using a sphere-decoding algorithm. It is shown that the proposed SST-TCM performs very closely to the Golden Space-Time Trellis Coded Modulation (GST-TCM) scheme, yet with a much reduced decoding complexity thanks to its fast-decoding property.
Abstract:
When continuous data are coded to categorical variables, two types of coding are possible: crisp coding in the form of indicator, or dummy, variables with values either 0 or 1; or fuzzy coding where each observation is transformed to a set of "degrees of membership" between 0 and 1, using so-called membership functions. It is well known that the correspondence analysis of crisp coded data, namely multiple correspondence analysis, yields principal inertias (eigenvalues) that considerably underestimate the quality of the solution in a low-dimensional space. Since the crisp data only code the categories to which each individual case belongs, an alternative measure of fit is simply to count how well these categories are predicted by the solution. Another approach is to consider multiple correspondence analysis equivalently as the analysis of the Burt matrix (i.e., the matrix of all two-way cross-tabulations of the categorical variables), and then perform a joint correspondence analysis to fit just the off-diagonal tables of the Burt matrix - the measure of fit is then computed as the quality of explaining these tables only. The correspondence analysis of fuzzy coded data, called "fuzzy multiple correspondence analysis", suffers from the same problem, albeit attenuated. Again, one can count how many correct predictions are made of the categories which have highest degree of membership. But here one can also defuzzify the results of the analysis to obtain estimated values of the original data, and then calculate a measure of fit in the familiar percentage form, thanks to the resultant orthogonal decomposition of variance. Furthermore, if one thinks of fuzzy multiple correspondence analysis as explaining the two-way associations between variables, a fuzzy Burt matrix can be computed and the same strategy as in the crisp case can be applied to analyse the off-diagonal part of this matrix.
In this paper these alternative measures of fit are defined and applied to a data set of continuous meteorological variables, which are coded crisply and fuzzily into three categories. Measuring the fit is further discussed when the data set consists of a mixture of discrete and continuous variables.
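As an illustration of the fuzzy coding described above, a continuous value can be converted into three degrees of membership with triangular membership functions hinged at three reference points (for example the minimum, median and maximum of the variable). The hinge values below are illustrative, not the paper's actual coding of the meteorological data:

```python
def fuzzy_code(x, low, mid, high):
    """Fuzzy-code a continuous value into three categories using triangular
    membership functions hinged at (low, mid, high). The three degrees of
    membership lie in [0, 1] and sum to 1; crisp 0/1 coding is the limiting
    case where all membership falls on a single category."""
    if x <= low:
        return (1.0, 0.0, 0.0)
    if x >= high:
        return (0.0, 0.0, 1.0)
    if x <= mid:
        m = (x - low) / (mid - low)   # rises linearly from low to mid
        return (1.0 - m, m, 0.0)
    m = (x - mid) / (high - mid)      # rises linearly from mid to high
    return (0.0, 1.0 - m, m)
```

A temperature of 5 with hinges (0, 10, 20) would thus be coded as half "low" and half "medium", and defuzzification (the weighted sum of the hinge values by the memberships) recovers the original value, which is what makes the variance decomposition in the paper possible.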
Abstract:
A biplot, which is the multivariate generalization of the two-variable scatterplot, can be used to visualize the results of many multivariate techniques, especially those that are based on the singular value decomposition. We consider data sets consisting of continuous-scale measurements, their fuzzy coding and the biplots that visualize them, using a fuzzy version of multiple correspondence analysis. Of special interest is the way quality of fit of the biplot is measured, since it is well known that regular (i.e., crisp) multiple correspondence analysis seriously underestimates this measure. We show how the results of fuzzy multiple correspondence analysis can be defuzzified to obtain estimated values of the original data, and prove that this implies an orthogonal decomposition of variance. This permits a measure of fit to be calculated in the familiar form of a percentage of explained variance, which is directly comparable to the corresponding fit measure used in principal component analysis of the original data. The approach is motivated initially by its application to a simulated data set, showing how the fuzzy approach can lead to diagnosing nonlinear relationships, and finally it is applied to a real set of meteorological data.
Abstract:
Remote sensing spatial, spectral, and temporal resolutions of images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since the geometric calibration process is one of the most time-consuming steps when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting computation over shared-memory multi-threading hardware. A parallel implementation of the most time-consuming process in remote sensing geometric correction has been developed using OpenMP directives. This work compares the performance of the original serial binary against the parallelized implementation on several modern multi-threaded CPU architectures, discussing the approach to find the optimum hardware for a cost-effective execution.
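The row-wise domain decomposition that an OpenMP `parallel for` applies to the correction loop can be sketched with a thread pool; the geometric model below is a hypothetical stand-in (a plain column shift), since the real implementation evaluates a sensor/orbit model and interpolates the source image. A minimal sketch under those assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def correct_row(args):
    """Resample one output row through a toy geometric model.
    (Hypothetical: real code maps each pixel via the sensor model
    and interpolates the source image.)"""
    row, width, shift = args
    return [(row, col + shift) for col in range(width)]

def correct_image(height, width, shift, workers=4):
    """Split the output image by rows and correct them in parallel,
    mirroring an OpenMP `parallel for` over the row loop."""
    tasks = [(row, width, shift) for row in range(height)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map() preserves row order, like a static loop schedule.
        return list(pool.map(correct_row, tasks))
```

Python threads share memory as OpenMP threads do, though the interpreter lock limits CPU-bound speed-up in pure Python; the sketch shows only the decomposition pattern, with each row independent so no synchronization is needed beyond joining the pool.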