980 results for Remote sensing techniques
Abstract:
The leaf area index (LAI) of fast-growing Eucalyptus plantations is highly dynamic both seasonally and interannually, and is spatially variable depending on pedo-climatic conditions. LAI is very important in determining the carbon and water balance of a stand, but is difficult to measure over a complete stand rotation and at large scales. Remote-sensing methods allowing the accurate and precise retrieval of LAI time series are therefore necessary. Here, we tested two methods for LAI estimation from MODIS 250 m resolution red and near-infrared (NIR) reflectance time series. The first method involved the inversion of a coupled model of leaf reflectance and transmittance (PROSPECT4), soil reflectance (SOILSPECT) and canopy radiative transfer (4SAIL2). Model parameters other than the LAI were either fixed to measured constant values or allowed to vary seasonally and/or with stand age according to trends observed in field measurements. The LAI was assumed to vary throughout the rotation following a series of alternately increasing and decreasing sigmoid curves. The parameters of each sigmoid curve that allowed the best fit of simulated canopy reflectance to MODIS red and NIR reflectance data were obtained by minimization techniques. The second method was based on a linear relationship between the LAI and the GEneralized Soil Adjusted Vegetation Index (GESAVI), which was calibrated using destructive LAI measurements made in two seasons on Eucalyptus stands of different ages and productivity levels. The ability of each approach to reproduce field-measured LAI values was assessed, and uncertainty in the results and parameter sensitivities were examined. Both methods provided a good fit between measured and estimated LAI (R² = 0.80 and R² = 0.62 for the model inversion and GESAVI-based methods, respectively), but the GESAVI-based method overestimated the LAI at young ages.
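The GESAVI-based approach amounts to an index computation followed by a linear calibration against field LAI. The sketch below (Python/NumPy) illustrates that workflow under the assumption that GESAVI takes its usual form with soil-line parameters a, b and an adjustment term Z; the coefficients and the random numbers standing in for MODIS reflectances and destructive LAI measurements are placeholders, not values from the study.

```python
import numpy as np

def gesavi(nir, red, a=0.0, b=1.1, z=0.35):
    """GEneralized Soil Adjusted Vegetation Index.

    Assumes the common formulation GESAVI = (NIR - b*RED - a) / (RED + Z),
    where a and b describe the soil line (NIR_soil = b*RED + a) and Z is a
    soil-adjustment term. The default coefficients are placeholders, not the
    values calibrated in the study.
    """
    return (nir - b * red - a) / (red + z)

# Hypothetical calibration: fit LAI = m * GESAVI + c against destructive
# LAI measurements (random numbers stand in for field data here).
rng = np.random.default_rng(0)
red = rng.uniform(0.02, 0.15, 50)        # MODIS red reflectance
nir = rng.uniform(0.25, 0.45, 50)        # MODIS NIR reflectance
lai_field = rng.uniform(1.0, 4.0, 50)    # destructive LAI measurements

g = gesavi(nir, red)
m, c = np.polyfit(g, lai_field, 1)       # linear LAI-GESAVI relationship
lai_estimated = m * gesavi(nir, red) + c
```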
Abstract:
Estuaries are perhaps the most threatened environments in the coastal fringe; the coincidence of high natural value and attractiveness for human use has led to conflicts between conservation and development. These conflicts occur in the Sado Estuary, which lies next to the industrialised zone of the Setúbal Peninsula while, at the same time, a large part of the estuary is classified as a Natural Reserve because of its high biodiversity. This situation calls for a model of environmental management and quality assessment, based on methodologies that allow the quality of the Sado Estuary and the human pressures on it to be evaluated. These methodologies rely on the indicators that best depict the state of the environment, rather than on everything that could be measured or analysed. Sediments have long been regarded as a temporary source of some compounds, a sink for other materials, and an interface where a great diversity of biogeochemical transformations occur; for all these reasons they are of great importance in the design of coastal management systems. Many authors have used sediments to monitor aquatic contamination, with clear advantages over traditional water-column sampling. The main objective of this thesis was to develop an estuarine environmental management framework for the Sado Estuary based on the DPSIR model (EMMSado), covering data collection, data processing and data analysis. The supporting infrastructure of EMMSado was a set of spatially contiguous and homogeneous regions of sediment structure (management units). The environmental quality of the estuary was assessed through sediment quality assessment and integrated, at a preliminary stage, with the human pressure for development. Besides the advantages explained above, basing the assessment of estuarine quality mainly on indicators and indices of the sediment compartment also makes the methodology easier, faster and less demanding in human and financial resources, which are essential conditions for efficient environmental management of coastal areas. Data management, visualization, processing and analysis were carried out through the combined use of indicators and indices, sampling optimization techniques, Geographical Information Systems, remote sensing, statistics for spatial data, Global Positioning Systems and best expert judgment. Overall, of the nineteen management units delineated and analyzed, three showed no ecological risk (18.5% of the study area). The areas of most concern (5.6% of the study area) are located in the North Channel and are under strong human pressure, mainly from industrial activities. These areas also have low hydrodynamics and are therefore associated with high levels of deposition. In particular, the areas near the Lisnave and Eurominas industrial sites can also accumulate contamination coming from the Águas de Moura Channel, since particles transported from that channel can settle there owing to the residual flow. In these areas the contaminants of concern, among those analyzed, are heavy metals and metalloids (Cd, Cu, Zn and As exceeded the PEL guidelines) and the pesticides BHC isomers, heptachlor, isodrin, DDT and its metabolites, endosulfan and endrin. In the remaining management units (76% of the study area) there is a moderate potential for adverse ecological effects, and in some of these areas no stress agents could be identified.
This emphasizes the need for further research, since unmeasured chemicals may be causing or contributing to these adverse effects. Special attention must be paid to the units with a moderate potential for adverse ecological effects that are located inside the Natural Reserve. Non-point-source pollution from agriculture and aquaculture also appears to contribute an important pollution load to the estuary through the Águas de Moura Channel; this pressure is expressed as a moderate potential for ecological risk in the areas near the entrance of that channel. Pressures may also come from the Alcácer Channel, although they were not quantified in this study. The management framework presented here, including all of its methodological tools, can be applied and tested in other estuarine ecosystems, which will also allow comparisons between estuaries in other parts of the globe.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
Thesis submitted to the Instituto Superior de Estatística e Gestão de Informação da Universidade Nova de Lisboa in partial fulfillment of the requirements for the Degree of Doctor of Philosophy in Information Management – Geographic Information Systems
Abstract:
This paper presents a new parallel implementation of a previously developed hyperspectral coded aperture (HYCA) algorithm for compressive sensing on graphics processing units (GPUs). The HYCA method combines the ideas of spectral unmixing and compressive sensing, exploiting the high spatial correlation that can be observed in the data and the generally low number of endmembers needed to explain the data. The proposed implementation exploits the GPU architecture at a low level, taking full advantage of the computational power of GPUs through shared memory and coalesced memory accesses. The proposed algorithm is evaluated not only in terms of reconstruction error but also in terms of computational performance, using two different NVIDIA GPU architectures: the GeForce GTX 590 and the GeForce GTX TITAN. Experimental results using real data reveal significant speedups with respect to the serial implementation.
Abstract:
Remote hyperspectral sensors collect large amounts of data per flight, usually with low spatial resolution. Because the bandwidth of the connection between the satellite/airborne platform and the ground station is limited, onboard compression is desirable to reduce the amount of data to be transmitted. This paper presents a parallel implementation of a compressive sensing method, called parallel hyperspectral coded aperture (P-HYCA), for graphics processing units (GPUs) using the compute unified device architecture (CUDA). The method exploits two main properties of hyperspectral datasets, namely the high correlation among spectral bands and the generally low number of endmembers needed to explain the data, which largely reduces the number of measurements necessary to correctly reconstruct the original data. Experimental results obtained with synthetic and real hyperspectral datasets on two different NVIDIA GPU architectures, the GeForce GTX 590 and the GeForce GTX TITAN, reveal that GPUs can provide real-time compressive sensing performance. The achieved speedup is up to 20 times compared with the processing time of HYCA running on one core of an Intel i7-2600 CPU (3.4 GHz) with 16 GB of memory.
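The following is not the HYCA/P-HYCA algorithm itself (which couples coded apertures with spectral unmixing), but a minimal NumPy sketch of the property the abstract relies on: when pixels lie in a low-dimensional endmember subspace, far fewer measurements than spectral bands suffice to reconstruct them. All sizes and the random sensing matrix are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bands, n_endmembers, n_pixels = 200, 5, 1000
n_measurements = 20                      # far fewer than n_bands

# Synthetic linear mixtures: X = M @ A, with M the endmember signatures.
M = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))
A = rng.dirichlet(np.ones(n_endmembers), n_pixels).T   # abundances sum to one
X = M @ A

# Compressive measurements with a random sensing matrix H.
H = rng.standard_normal((n_measurements, n_bands))
Y = H @ X

# Reconstruction assuming the endmember subspace M is known:
# solve (H @ M) @ A_hat = Y for the abundances, then re-project.
A_hat, *_ = np.linalg.lstsq(H @ M, Y, rcond=None)
X_hat = M @ A_hat

print("relative reconstruction error:",
      np.linalg.norm(X_hat - X) / np.linalg.norm(X))
```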
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial resolution element and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located in that resolution element. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10]. Depending on the mixing scale at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13]. The nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]. The nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18]. Under the linear mixing model and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem, which can be addressed, for example, with the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, target signature space orthogonal projection). Approaches based on the maximum a posteriori probability (MAP) framework [25] and on projection pursuit [26, 27] have also been applied to hyperspectral data. In most cases the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data yielding statistically independent components. Given that hyperspectral data are, in certain circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the selection of the pixels that play the role of mixed sources is not straightforward.
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. IFA [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps. First, the source densities and noise covariance are estimated from the observed data by maximum likelihood. Second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique for unmixing independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance. Under the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets; in any case, these algorithms find the set of purest pixels in the data. Hyperspectral sensors collect spatial images over many narrow contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced. This chapter addresses the source dependence of hyperspectral data and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix minimizing the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources. The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing the independent sources represented by MOG with a mixture of Dirichlet sources. Compared with the geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations. The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and gives some illustrative examples. Section 6.8 concludes with some remarks.
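The linear mixing model and the constrained least-squares option mentioned above can be illustrated compactly. The following NumPy sketch (an illustration under stated assumptions, not code from the chapter) simulates mixed pixels from known endmember signatures and estimates the abundances with a sum-to-one constrained least-squares solve; positivity is only approximated by clipping rather than handled exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_endmembers, n_pixels = 100, 4, 500

# Linear mixing model: x = M a + n, with a >= 0 and sum(a) = 1.
M = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))      # endmember signatures
A = rng.dirichlet(np.ones(n_endmembers), n_pixels).T    # true abundances
X = M @ A + 0.001 * rng.standard_normal((n_bands, n_pixels))  # observed pixels

# Sum-to-one constrained least squares via an augmented system: append a
# heavily weighted row of ones to M and a matching 1 to each pixel.
delta = 1e3
M_aug = np.vstack([M, delta * np.ones((1, n_endmembers))])
X_aug = np.vstack([X, delta * np.ones((1, n_pixels))])
A_hat, *_ = np.linalg.lstsq(M_aug, X_aug, rcond=None)

# Positivity is only approximated here: clip and renormalize.
A_hat = np.clip(A_hat, 0.0, None)
A_hat /= A_hat.sum(axis=0, keepdims=True)

print("mean abundance error:", np.abs(A_hat - A).mean())
```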
Abstract:
This paper introduces a new method to blindly unmix hyperspectral data, termed dependent component analysis (DECA). The method decomposes a hyperspectral image into a collection of reflectance (or radiance) spectra of the materials present in the scene (endmember signatures) and the corresponding abundance fractions at each pixel. DECA assumes that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. These abundances are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. The mixing matrix is inferred by a generalized expectation-maximization (GEM) type algorithm. This method overcomes the limitations of unmixing methods based on independent component analysis (ICA) and on geometry-based approaches. The effectiveness of the proposed method is illustrated using simulated data based on U.S.G.S. laboratory spectra and real hyperspectral data collected by the AVIRIS sensor over Cuprite, Nevada.
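DECA's inference procedure (the GEM algorithm for the mixing matrix) is not reproduced here; the sketch below only illustrates, with placeholder values, the generative model the method assumes: abundance vectors drawn from a mixture of Dirichlet densities, hence non-negative and summing to one, combined linearly with the endmember signatures and perturbed by noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n_bands, n_endmembers, n_pixels = 150, 3, 400

# Endmember signatures (placeholder random spectra; the paper's simulations
# use U.S.G.S. laboratory spectra instead).
M = rng.uniform(0.0, 1.0, (n_bands, n_endmembers))

# Abundances drawn from a two-component mixture of Dirichlet densities:
# each component favours a different endmember, and every sample is
# non-negative and sums to one by construction.
weights = np.array([0.6, 0.4])
alphas = np.array([[8.0, 2.0, 2.0],
                   [2.0, 2.0, 8.0]])
component = rng.choice(2, size=n_pixels, p=weights)
A = np.stack([rng.dirichlet(alphas[k]) for k in component], axis=1)

# Observed pixels under the linear mixing model plus sensor noise.
X = M @ A + 0.005 * rng.standard_normal((n_bands, n_pixels))

assert np.allclose(A.sum(axis=0), 1.0)   # constant-sum constraint holds
```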
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Master's dissertation in Geology (specialization area: Valorization of Geological Resources)
Abstract:
Satellite remote sensing imagery is used for forestry, conservation and environmental applications, but insufficient spatial resolution and, in particular, the unavailability of images at the precise time required for a given application often prevent reaching a fully operational stage. Airborne remote sensing has the advantage of custom-tuned sensors, resolution and timing, but its cost prevents its use as a routine technique in these fields. Some unmanned aerial vehicles may provide a "third way": low-cost means of acquiring remotely sensed information under close control of the end-user, albeit at the expense of lower-quality instrumentation and reduced stability. This report evaluates a light remote sensing system based on a remotely controlled mini-UAV (ATMOS-3) equipped with a color infrared camera (VEGCAM-1), designed and operated by CATUAV. We conducted a test mission over a Mediterranean landscape dominated by an evergreen woodland of Aleppo pine (Pinus halepensis) and Holm oak (Quercus ilex) in the Montseny National Park (Catalonia, NE Spain). We took advantage of state-of-the-art ortho-rectified digital aerial imagery (acquired by the Institut Cartogràfic de Catalunya over the area during the previous year) and used it as a quality reference. In particular, we paid attention to: 1) the operational capability of flight and image acquisition according to a previously defined plan; 2) the radiometric and geometric quality of the images; and 3) the operational use of the images in the context of applications. We conclude that the system has reached an operational stage regarding flight activities, although with meteorological limits set by wind speed and turbulence. Appropriate landing areas can also sometimes be limiting, but the system is able to land on small and relatively rough terrain such as patches of grassland or short matorral, and we have operated the UAV as far as 7 km from the control unit. Radiometric quality is sufficient for interactive analysis, but probably insufficient for automated processing. A forthcoming camera is expected to greatly improve radiometric quality and consistency. Conventional GPS positioning through time synchronization provides coarse orientation of the images, with no roll information.
Abstract:
Detecting changes between images of the same scene taken at different times is of great interest for monitoring and understanding the environment. It is widely used in terrestrial applications but is subject to several constraints: change detection algorithms require highly accurate geometric and photometric registration, a requirement that has precluded their use on underwater imagery in the past. In this paper, the change detection techniques currently available for terrestrial applications are analyzed, and a method to automatically detect changes in sequences of underwater images is proposed. Target application scenarios are habitat restoration sites, or area monitoring after sudden impacts from hurricanes or ship groundings. The method is based on the creation of a 3D terrain model from one image sequence over an area of interest. This model allows textured views to be synthesized that correspond to the same viewpoints as a second image sequence. The generated views are photometrically matched and corrected against the corresponding frames of the second sequence. Standard change detection techniques are then applied to find areas of difference. Additionally, the paper shows that it is possible to detect false positives, resulting from non-rigid objects, by applying the same change detection method to the first sequence exclusively. The developed method was able to correctly find the changes between two challenging sequences of images of a coral reef taken one year apart and acquired with two different cameras.
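The core of the method is the synthesis of matching viewpoints from a 3D model, which cannot be reproduced in a few lines; the sketch below only illustrates the last two stages in a generic way, on placeholder arrays standing in for already co-registered frames: a simple global gain/bias photometric correction followed by thresholded absolute differencing. The photometric matching and change detection steps actually used in the paper may differ.

```python
import numpy as np

def photometric_correction(reference, target):
    """Match the target frame's global gain and bias to the reference frame.

    A simple linear correction; it stands in for the paper's photometric
    matching between synthesized and real views, not necessarily this one.
    """
    gain = reference.std() / target.std()
    bias = reference.mean() - gain * target.mean()
    return gain * target + bias

def change_mask(frame_a, frame_b, threshold=0.15):
    """Flag pixels whose absolute difference exceeds a threshold."""
    corrected_b = photometric_correction(frame_a, frame_b)
    return np.abs(frame_a - corrected_b) > threshold

# Placeholder co-registered grayscale frames (values in [0, 1]).
rng = np.random.default_rng(4)
frame_t0 = rng.uniform(0.0, 1.0, (128, 128))
frame_t1 = 0.8 * frame_t0 + 0.1                 # same scene, different exposure
frame_t1[40:60, 40:60] = 0.05                   # a simulated change

mask = change_mask(frame_t0, frame_t1)
print("changed pixels:", int(mask.sum()))
```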
Abstract:
During the last decade, interest in space-borne Synthetic Aperture Radars (SAR) for remote sensing applications has grown, as testified by the number of recent and forthcoming missions such as TerraSAR-X, RADARSAT-2, COSMO-SkyMed, TanDEM-X and the Spanish SEOSAR/PAZ. In this context, this thesis proposes to study and analyze the performance of state-of-the-art space-borne SAR systems with modes able to provide Moving Target Indication (MTI) capabilities, i.e. detection and estimation of moving objects. The research will focus on MTI processing techniques as well as on the architecture and/or configuration of the SAR instrument, establishing the limitations of the current systems with MTI capabilities and proposing efficient solutions for future missions. Two European projects, to which the Universitat Politècnica de Catalunya provides support, are an excellent framework for the research activities suggested in this thesis. The NEWA project proposes a potential European space-borne radar system with MTI capabilities in order to fulfill the upcoming European security policies. This thesis will critically review state-of-the-art MTI processing techniques as well as the readiness and maturity level of the developed capabilities. For each technique a performance analysis will be carried out based on the available technologies, deriving a roadmap and identifying the different technological gaps. In line with this study, a simulator tool will be developed in order to validate and evaluate different MTI techniques on the basis of a flexible space-borne radar configuration. The calibration of a SAR system is mandatory for the accurate formation of SAR images and becomes critical in advanced operation modes such as MTI. In this context, the SEOSAR/PAZ project proposes the study and estimation of the radiometric budget. This thesis will also focus on an exhaustive analysis of the radiometric budget, considering the current calibration concepts and their possible limitations. In the framework of this project, a key point will be the study of the Dual Receive Antenna (DRA) mode, which provides MTI capabilities to the mission. An additional aspect under study is the applicability of digital beamforming to multichannel and/or multistatic radar platforms, which constitute potential solutions for the NEWA project, with the aim of fully exploiting their capabilities jointly with MTI techniques.
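The abstract outlines research objectives rather than a specific processing chain; purely as an illustration of how a dual receive antenna enables MTI, the toy example below implements the textbook displaced phase centre antenna (DPCA) idea on synthetic one-dimensional complex data: after channel co-registration, subtracting the two receive channels cancels stationary clutter, while a moving target's extra along-track phase leaves a detectable residual. All numerical values are assumptions, not parameters of the missions named above.

```python
import numpy as np

rng = np.random.default_rng(5)
wavelength = 0.031            # m, X-band (assumed)
baseline = 2.4                # m, along-track phase-centre separation (assumed)
platform_speed = 7600.0       # m/s (assumed)
time_lag = baseline / (2.0 * platform_speed)   # effective DPCA time lag

n_cells = 512
clutter = rng.standard_normal(n_cells) + 1j * rng.standard_normal(n_cells)

# Channel 1 sees the scene; channel 2 sees it time_lag seconds later.
# Stationary clutter is identical in both channels, while a mover gains an
# extra two-way phase proportional to its radial velocity.
target_cell, target_velocity = 300, 12.0       # m/s radial velocity (assumed)
channel_1 = clutter.copy()
channel_2 = clutter.copy()
extra_phase = 4.0 * np.pi * target_velocity * time_lag / wavelength
channel_1[target_cell] += 3.0
channel_2[target_cell] += 3.0 * np.exp(1j * extra_phase)

residual = np.abs(channel_1 - channel_2)       # DPCA clutter cancellation
print("detected cell:", int(residual.argmax()))  # expected: 300
```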
Abstract:
Current monitoring techniques for determining the compaction of earthwork and asphalt generally involve destructive testing of the materials after placement. Advances in sensor technologies show significant promise for obtaining the necessary information through nondestructive and remote techniques. To develop a better understanding of suitable and potential technologies, this study conducted a synthesis review of nondestructive testing technologies and performed preliminary evaluations of selected technologies to better understand their application to the testing of geomaterials (soil fill, aggregate base, asphalt, etc.). The research resulted in a synthesis of potential technologies for compaction monitoring, with a strong emphasis on moisture sensing. Techniques were reviewed and selectively evaluated for their potential to improve field quality control operations. Activities included an extensive review of commercially available moisture sensors, a literature review, and evaluation of selected technologies. The technologies investigated in this study were dielectric, nuclear, near-infrared spectroscopy, seismic, electromagnetic induction, and thermal. The primary disadvantage of all the methods is the small sample volume measured. In addition, all the methods showed some sensitivity to non-moisture factors that affected the accuracy of the results. As the measurement volume increases, local variances are averaged out, providing better accuracy. Most dielectric methods, with the exception of ground-penetrating radar, have a very small measurement volume and are highly sensitive to variations in density, porosity, etc.
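The abstract does not state which calibration relates sensor readings to moisture; as one concrete example of the dielectric approach it reviews, the sketch below applies the widely used Topp et al. (1980) empirical polynomial that converts apparent dielectric constant to volumetric water content. The relation is quoted as a generic example, not as the one evaluated in the study.

```python
import numpy as np

def topp_vwc(apparent_permittivity):
    """Volumetric water content (m^3/m^3) from apparent dielectric constant.

    Topp et al. (1980) empirical polynomial, commonly used with TDR-type
    dielectric sensors; quoted here as a generic example, not as the
    calibration adopted in the study.
    """
    ka = np.asarray(apparent_permittivity, dtype=float)
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Example: permittivity readings from a hypothetical dielectric probe.
readings = np.array([4.0, 10.0, 20.0, 30.0])
print(topp_vwc(readings))   # roughly 0.06, 0.19, 0.35, 0.44 m^3/m^3
```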
Abstract:
Data mining can be defined as the extraction of previously unknown and potentially useful information from large datasets. The main principle is to devise computer programs that run through databases and automatically seek deterministic patterns. It is applied in many fields, e.g. remote sensing, biometry and speech recognition, but has seldom been applied to forensic case data. The intrinsic difficulty of using such data lies in its heterogeneity, which comes from the many different sources of information. The aim of this study is to highlight potential uses of pattern recognition that would provide relevant results from a criminal intelligence point of view. The role of data mining within a global crime analysis methodology is to detect all types of structure in a dataset. Once filtered and interpreted, those structures can point to previously unseen criminal activities. The interpretation of patterns for intelligence purposes is the final stage of the process; it allows the researcher to validate the whole methodology and to refine each step if necessary. An application to cutting agents found in illicit drug seizures was performed. A combinatorial approach was taken, using the presence and absence of products. Methods from graph theory were used to extract patterns from data consisting of links between products and the place and date of seizure. A data mining process carried out using graphing techniques is called "graph mining". Patterns were detected that had to be interpreted and compared with prior knowledge to establish their relevance. The illicit drug profiling process is in fact an intelligence process that uses previously established illicit drug classes to classify new samples. Methods proposed in this study could be used a priori to compare structures from preliminary and post-detection patterns. This new knowledge of a repeated structure may provide valuable complementary information to profiling and become a source of intelligence.
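No seizure data are available here; to make the graph-based pattern extraction concrete, the sketch below builds a small bipartite graph linking hypothetical seizures (with place and date attributes) to the cutting agents they contain, then reports agent co-occurrences and connected components as examples of the structures an analyst would interpret. Node names and attributes are invented for illustration.

```python
import itertools
from collections import Counter

import networkx as nx

# Hypothetical seizures: (seizure id, place, date, cutting agents present).
seizures = [
    ("S1", "Lausanne", "2009-03", {"caffeine", "paracetamol"}),
    ("S2", "Geneva",   "2009-04", {"caffeine", "paracetamol", "lidocaine"}),
    ("S3", "Lausanne", "2009-06", {"lidocaine"}),
    ("S4", "Bern",     "2009-07", {"phenacetin"}),
]

# Bipartite graph: seizure nodes on one side, cutting-agent nodes on the other.
G = nx.Graph()
for sid, place, date, agents in seizures:
    G.add_node(sid, bipartite="seizure", place=place, date=date)
    for agent in agents:
        G.add_node(agent, bipartite="agent")
        G.add_edge(sid, agent)

# Co-occurrence of agents within the same seizure: a simple pattern that an
# analyst could compare against prior profiling knowledge.
pair_counts = Counter()
for *_, agents in seizures:
    pair_counts.update(itertools.combinations(sorted(agents), 2))
print(pair_counts.most_common(3))

# Connected components group seizures that share cutting agents.
for component in nx.connected_components(G):
    print(sorted(component))
```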