502 results for Pixels
Abstract:
Land cover plays a key role in global to regional monitoring and modeling because it affects and is affected by climate change, and has thus become one of the essential variables for climate change studies. National and international organizations require timely and accurate land cover information for reporting and management actions. The North American Land Change Monitoring System (NALCMS) is an international cooperation of organizations and entities of Canada, the United States, and Mexico to map land cover change across North America's changing environment. This paper presents the methodology used to derive the land cover map of Mexico for the year 2005, which was integrated into the NALCMS continental map. Based on a time series of 250 m Moderate Resolution Imaging Spectroradiometer (MODIS) data and an extensive sample database, the complexity of the Mexican landscape required a specific approach to reflect land cover heterogeneity. To estimate the proportion of each land cover class for every pixel, several decision tree classifications were combined to obtain class membership maps, which were finally converted to a discrete map accompanied by a confidence estimate. The map yielded an overall accuracy of 82.5% (Kappa of 0.79) for pixels with at least 50% map confidence (71.3% of the data). An additional assessment with 780 stratified random samples, using primary and alternative calls in the reference data to account for ambiguity, indicated 83.4% overall accuracy (Kappa of 0.80). A high agreement of 83.6% for all pixels, and 92.6% for pixels with a map confidence of more than 50%, was found in the comparison between the land cover maps of 2005 and 2006. Further wall-to-wall comparisons with related land cover maps resulted in 56.6% agreement with the MODIS land cover product and a congruence of 49.5% with Globcover.
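The combination step described above can be sketched as follows. This is a hypothetical illustration (not the NALCMS production code): several decision-tree classifications each produce per-class membership maps, which are averaged; the most likely class per pixel gives the discrete map, and the winning class proportion serves as the per-pixel confidence estimate.

```python
import numpy as np

# Hypothetical sketch: membership maps from several trees are averaged,
# then converted to a discrete class map plus a confidence estimate.
rng = np.random.default_rng(0)
n_trees, n_classes, h, w = 5, 4, 3, 3

# One (h, w, n_classes) membership map per tree; rows sum to 1.
memberships = rng.dirichlet(np.ones(n_classes), size=(n_trees, h, w))

mean_membership = memberships.mean(axis=0)       # (h, w, n_classes)
discrete_map = mean_membership.argmax(axis=-1)   # most likely class per pixel
confidence = mean_membership.max(axis=-1)        # class proportion = confidence

# Accuracy in the paper was reported for pixels with >= 50% map confidence.
high_conf_mask = confidence >= 0.5
```

The confidence map makes it straightforward to restrict any accuracy assessment to well-determined pixels, as done in the study.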
Abstract:
The susceptibility of a catchment to flooding is affected by its soil moisture prior to an extreme rainfall event. While soil moisture is routinely observed by satellite instruments, results from previous work on the assimilation of remotely sensed soil moisture into hydrologic models have been mixed. This may have been due in part to the low spatial resolution of the observations used. In this study, the remote sensing aspects of a project attempting to improve flow predictions from a distributed hydrologic model by assimilating soil moisture measurements are described. Advanced Synthetic Aperture Radar (ASAR) Wide Swath data were used to measure soil moisture as, unlike low resolution microwave data, they have sufficient resolution to allow soil moisture variations due to local topography to be detected, which may help to take into account the spatial heterogeneity of hydrological processes. Surface soil moisture content (SSMC) was measured over the catchments of the Severn and Avon rivers in the South West UK. To reduce the influence of vegetation, measurements were made only over homogeneous pixels of improved grassland determined from a land cover map. Radar backscatter was corrected for terrain variations and normalized to a common incidence angle. SSMC was calculated using change detection. To search for evidence of a topographic signal, the mean SSMC from improved grassland pixels on low slopes near rivers was compared to that on higher slopes. When the mean SSMC on low slopes was 30–90%, the higher slopes were slightly drier than the low slopes. The effect was reversed for lower SSMC values. It was also more pronounced during a drying event. These findings contribute to the scant information in the literature on the use of high resolution SAR soil moisture measurement to improve hydrologic models.
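The change-detection retrieval of SSMC can be sketched as below. All numbers here are invented for illustration: each pixel's terrain-corrected, angle-normalized backscatter is scaled between its historical driest and wettest values to yield a relative surface soil moisture content in percent.

```python
import numpy as np

# Illustrative change-detection SSMC retrieval (synthetic values, not the
# study's data): backscatter is scaled between dry and wet references.
sigma0 = np.array([-14.0, -11.5, -9.0])      # dB, current observation
sigma_dry = np.array([-15.0, -14.0, -13.0])  # historical minimum per pixel
sigma_wet = np.array([-9.0, -8.0, -7.0])     # historical maximum per pixel

ssmc = 100.0 * (sigma0 - sigma_dry) / (sigma_wet - sigma_dry)
ssmc = np.clip(ssmc, 0.0, 100.0)             # keep within physical bounds
```

Restricting this calculation to homogeneous improved-grassland pixels, as the study does, reduces the confounding influence of vegetation on the backscatter signal.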
Abstract:
Georeferencing is one of the major tasks of satellite-borne remote sensing. Compared to traditional indirect methods, direct georeferencing through a Global Positioning System/inertial navigation system requires fewer and simpler steps to obtain the exterior orientation parameters of remotely sensed images. However, the pixel shift caused by geographic positioning error, which generally derives from boresight angle error as well as terrain topography variation, can greatly affect the precision of georeferencing. We quantitatively analyze the distribution of pixel shifts introduced by positioning error in a satellite linear push-broom image, using variations of the object-space coordinates to simulate different kinds of positioning error and terrain topography. A total differential method is then applied to establish a rigorous sensor model that mathematically relates pixel shift to positioning error. Finally, two simulation experiments using the imaging parameters of the Chang'E-1 satellite evaluate two different kinds of positioning error. The results show that, with these parameters, the maximum pixel shift can reach 1.74 pixels. The proposed approach can be extended as a generic method for imaging-error modeling in remote sensing with terrain variation.
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. 
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must be at least no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with standard deviation 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
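The core waterline-averaging step can be sketched as follows. The window size is an assumption for illustration: because the waterline is locally a quasi-contour, adjacent DEM heights along a short section are treated as samples with a common mean, and each central height is replaced by the local average.

```python
import numpy as np

# Sketch of waterline height averaging (window size is an assumption):
# each height along the waterline is replaced by the mean over a short
# surrounding section, reducing random height error.
def smooth_waterline_heights(heights, half_window=2):
    heights = np.asarray(heights, dtype=float)
    out = np.empty_like(heights)
    n = len(heights)
    for i in range(n):
        lo, hi = max(0, i - half_window), min(n, i + half_window + 1)
        out[i] = heights[lo:hi].mean()   # average over the local section
    return out

noisy = np.array([10.2, 9.7, 10.1, 9.9, 10.4, 9.8])  # synthetic DEM heights (m)
smoothed = smooth_waterline_heights(noisy)
```

Averaging n samples with a common mean reduces the standard error by roughly a factor of sqrt(n), which is the mechanism behind the reported error reduction.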
Abstract:
Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) on board NOAA-18 and the Cloud Profiling Radar (CPR) on board CloudSat. First, a simple method to obtain those collocations is presented and compared with a more complicated approach found in the literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centre points. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environmental Satellite, Data, and Information Service (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an artificial neural network and describe how it can be used to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm), which shows a small improvement in retrieval quality. The collocations described in the article are available for public use.
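A simple collocation search of the kind described can be sketched as below. The distance and time thresholds are illustrative assumptions, not the paper's values, and local planar coordinates in km are used to avoid great-circle geometry in the sketch.

```python
import numpy as np

# Simplified collocation search (thresholds are illustrative): a CPR pixel
# collocates with an MHS footprint when it lies within a chosen radius of
# the footprint centre and within a chosen time window.
def find_collocations(mhs, cpr, max_dist_km=7.5, max_dt_s=600.0):
    """mhs, cpr: arrays of rows (x_km, y_km, t_s); returns per-footprint index arrays."""
    hits = []
    for mx, my, mt in mhs:
        d = np.hypot(cpr[:, 0] - mx, cpr[:, 1] - my)   # planar distance
        dt = np.abs(cpr[:, 2] - mt)                    # time separation
        hits.append(np.where((d <= max_dist_km) & (dt <= max_dt_s))[0])
    return hits

mhs = np.array([[0.0, 0.0, 0.0]])
cpr = np.array([[1.0, 1.0, 30.0],    # close in space and time -> match
                [20.0, 0.0, 30.0],   # too far away
                [2.0, -2.0, 900.0]]) # too far apart in time
matches = find_collocations(mhs, cpr)
```

Counting the CPR indices per MHS footprint directly gives the "at least ten CloudSat pixels" statistic mentioned in the abstract.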
Abstract:
Sea surface temperature (SST) data are often provided as gridded products, typically at resolutions of order 0.05 degrees from satellite observations, to reduce data volume at the request of data users and to facilitate comparison against other products or models. Sampling uncertainty is introduced in gridded products where the full surface area of the ocean within a grid cell cannot be fully observed because of cloud cover. In this paper we parameterise uncertainties in SST as a function of the percentage of clear-sky pixels available and the SST variability in that subsample. This parameterisation is developed from Advanced Along-Track Scanning Radiometer (AATSR) data, but is applicable to all gridded L3U SST products at resolutions of 0.05-0.1 degrees, irrespective of instrument and retrieval algorithm, provided that instrument noise propagated into the SST is accounted for. Using related methods, we also calculate a sampling uncertainty of ~0.04 K in Global Area Coverage (GAC) Advanced Very High Resolution Radiometer (AVHRR) products.
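A toy version of the idea can be sketched as below. The simple finite-population form used here is an assumption for illustration; the paper fits its own parameterisation. The point is that the cell-mean SST uncertainty grows with the variability of the clear-sky subsample and shrinks as the observed fraction approaches one.

```python
import numpy as np

# Toy sampling-uncertainty model for one grid cell (the functional form is
# an assumption, not the paper's fitted parameterisation).
def sampling_uncertainty(clear_sst, n_total):
    clear_sst = np.asarray(clear_sst, dtype=float)
    n_clear = clear_sst.size
    f = n_clear / n_total                      # clear-sky fraction of the cell
    sd = clear_sst.std(ddof=1)                 # SST variability of the subsample
    return sd * np.sqrt((1.0 - f) / n_clear)   # -> 0 when the cell is fully observed

u = sampling_uncertainty([290.1, 290.3, 289.9, 290.2], n_total=100)  # mostly cloudy cell
full = sampling_uncertainty([290.0, 290.2], n_total=2)               # fully observed cell
```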
Abstract:
This paper presents an analysis of ground-based Aerosol Optical Depth (AOD) observations by the Aerosol Robotic Network (AERONET) in South America from 2001 to 2007 in comparison with the satellite AOD product of the Moderate Resolution Imaging Spectroradiometer (MODIS), aboard the TERRA and AQUA satellites. Data from 12 observation sites were used, with primary interest in AERONET sites located in or downwind of areas with high biomass burning activity and with measurements available for the full time range. Fires cause the predominant carbonaceous aerosol emission signal during the dry season in South America and are therefore a special focus of this study. The interannual and seasonal behavior of the observed AOD at different sites was investigated, showing clear differences between purely fire-influenced and urban-influenced sites. An intercomparison of AERONET and MODIS AOD annual correlations revealed no interannual long-term trend, and the correlations do not differ significantly owing to the different overpass times of TERRA and AQUA. Individual anisotropic representativity areas for each AERONET site were derived by correlating daily AOD of each site for all years with available individual MODIS AOD pixels gridded to 1 degree x 1 degree. Results showed that for many sites a good AOD correlation (R² > 0.5) persists for large, often strongly anisotropic, areas. The climatological areas of common regional aerosol regimes often extend over several hundreds of kilometers, sometimes far across national boundaries. As a practical application, these strongly inhomogeneous and anisotropic areas of influence are being implemented in the tropospheric aerosol data assimilation system of the Coupled Chemistry-Aerosol-Tracer Transport Model coupled to the Brazilian Regional Atmospheric Modeling System (CCATT-BRAMS) at the Brazilian National Institute for Space Research (INPE). This new information promises an improved exploitation of local site sampling and, thus, improved chemical weather forecasting.
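The representativity-area derivation can be sketched as follows, with entirely synthetic data: the daily AOD series at a site is correlated with each gridded MODIS AOD pixel over time, and pixels with R² > 0.5 form the area over which the site is taken to represent the regional aerosol regime.

```python
import numpy as np

# Sketch of deriving a site's representativity area (all values synthetic):
# correlate the site AOD time series with each grid pixel's series.
rng = np.random.default_rng(1)
n_days, h, w = 200, 4, 4
site_aod = rng.gamma(2.0, 0.1, n_days)          # synthetic daily AERONET AOD

grid = np.empty((h, w, n_days))
for i in range(h):
    for j in range(w):
        noise = rng.normal(0.0, 0.02 * (1 + i + j), n_days)  # farther -> noisier
        grid[i, j] = site_aod + noise

# Pearson R^2 of each grid pixel's time series against the site series.
r2 = np.empty((h, w))
for i in range(h):
    for j in range(w):
        r = np.corrcoef(site_aod, grid[i, j])[0, 1]
        r2[i, j] = r * r

representativity_mask = r2 > 0.5   # pixels the site can stand in for
```

Because the correlation is computed pixel by pixel rather than over a fixed radius, the resulting mask is free to take the anisotropic shapes reported in the study.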
Abstract:
This paper proposes a parallel hardware architecture for image feature detection based on the Scale Invariant Feature Transform algorithm and applied to the Simultaneous Localization And Mapping problem. The work also proposes specific hardware optimizations considered fundamental to embed such a robotic control system on-a-chip. The proposed architecture is completely stand-alone; it reads the input data directly from a CMOS image sensor and provides the results via a field-programmable gate array coupled to an embedded processor. The results may either be used directly in an on-chip application or accessed through an Ethernet connection. The system is able to detect features at up to 30 frames per second (320 x 240 pixels) and has accuracy similar to a PC-based implementation. The achieved system performance is at least one order of magnitude better than a PC-based solution, a result achieved by investigating the impact of several hardware-oriented optimizations on performance, area and accuracy.
Abstract:
The most significant radiation field nonuniformity is the well-known Heel effect. This nonuniform beam effect has a negative influence on the results of computer-aided diagnosis of mammograms, which is frequently used for early cancer detection. This paper presents a method to correct all pixels in a mammography image according to the excess or lack of radiation to which they have been exposed as a result of this effect. The current simulation method calculates the intensities at all points of the image plane. In the simulated image, the percentage of radiation received by each point is referenced to the center of the field. In the digitized mammography, the percentage optical density of every pixel of the analyzed image is also calculated. The Heel effect produces a Gaussian distribution across the anode-cathode axis and a logarithmic distribution parallel to it. These characteristic distributions are used to determine the center of the radiation field as well as the cathode-anode axis, allowing the correlation between the two data sets to be determined automatically. Measurements obtained with the proposed method differ on average by 2.49 mm perpendicular to the anode-cathode axis and by 2.02 mm parallel to it from those of commercial equipment. The method eliminates around 94% of the Heel effect in the radiological image, so that objects reflect their x-ray absorption. The method was evaluated with experimental data from known objects, but could also be applied to clinical and digital images.
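The correction principle can be sketched as below. The field-model parameters here are invented for illustration: a simulated relative field intensity, normalized to the field centre, is divided out of the recorded image so that each pixel is compensated for the excess or lack of radiation it received.

```python
import numpy as np

# Toy Heel-effect correction (field-model parameters are invented): the
# simulated field is Gaussian across the anode-cathode axis and varies
# along it; the image is divided by the field relative to its centre.
h, w = 64, 64
y, x = np.mgrid[0:h, 0:w]

# Simulated relative field intensity, 1.0 at the field centre.
field = np.exp(-((x - w / 2) ** 2) / (2 * 20.0 ** 2)) * (1.0 - 0.004 * (y - h / 2))
field /= field[h // 2, w // 2]

uniform_object = np.full((h, w), 100.0)   # ideal flat-absorption object
raw_image = uniform_object * field        # what the detector records
corrected = raw_image / field             # Heel effect removed
```

After correction a uniform object maps back to a uniform image, which is the sense in which objects "reflect their x-ray absorption" once the Heel effect is removed.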
Abstract:
In medical processes where ionizing radiation is used, dose planning and dose delivery are the key elements to patient safety and treatment success, particularly when the delivered dose in a single session of treatment can be an order of magnitude higher than the regular doses of radiotherapy. Therefore, the radiation dose should be well defined and precisely delivered to the target while minimizing radiation exposure to surrounding normal tissues [1]. Several methods have been proposed to obtain three-dimensional (3-D) dose distribution [2, 3]. In this paper, we propose an alternative method, which can be easily implemented in any stereotactic radiosurgery center with a magnetic resonance imaging (MRI) facility. A phantom with or without scattering centers filled with Fricke gel solution is irradiated with a Gamma Knife® system at a chosen spot. The phantom can be a replica of a human organ such as head, breast or any other organ. It can even be constructed from a real 3-D MR image of an organ of a patient using computer-aided construction and irradiated at a specific region corresponding to the tumor position determined by MRI. The spin-lattice relaxation time T1 of different parts of the irradiated phantom is determined by localized spectroscopy. The T1-weighted phantom images are used to correlate the image pixel intensity to the absorbed dose, and consequently a 3-D dose distribution with a high resolution is obtained.
Abstract:
Recently, the deterministic tourist walk has emerged as a novel approach for texture analysis. This method employs a traveler visiting image pixels using a deterministic walk rule. The resulting trajectories provide clues about pixel interaction in the image that can be used for image classification and identification tasks. This paper proposes a new walk rule for the tourist based on the contrast direction of a neighborhood. Results obtained with this approach are comparable with those of traditional texture analysis methods in the classification of a set of Brodatz textures and their rotated versions, confirming the potential of the method as a feasible texture analysis methodology. (C) 2010 Elsevier B.V. All rights reserved.
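A minimal deterministic tourist walk can be sketched as follows. This is a generic formulation of the walk (the paper's contrast-direction rule is a variant of the neighbour-choice step): from each start pixel the tourist repeatedly moves to the 4-neighbour with the most similar intensity, never revisiting a pixel seen within the last `memory` steps.

```python
import numpy as np

# Generic deterministic tourist walk on a grayscale image; the paper's
# contrast-direction rule would replace the neighbour-choice criterion.
def tourist_walk(img, start, memory=2, max_steps=50):
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    path = [start]
    for _ in range(max_steps):
        r, c = path[-1]
        recent = set(path[-(memory + 1):])    # tabu list of recently visited pixels
        candidates = [(rr, cc)
                      for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if 0 <= rr < h and 0 <= cc < w and (rr, cc) not in recent]
        if not candidates:
            break                             # tourist is trapped
        nxt = min(candidates, key=lambda p: abs(img[p] - img[r, c]))
        path.append(nxt)
    return path

img = np.array([[1, 2, 9],
                [8, 3, 9],
                [9, 4, 5]])
path = tourist_walk(img, (0, 0), max_steps=4)
```

Statistics of many such trajectories (e.g. transient and attractor lengths over all start pixels) form the texture signature used for classification.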
Abstract:
The Shuttle Radar Topography Mission (SRTM) was flown on the space shuttle Endeavour in February 2000, with the objective of acquiring a digital elevation model of all land between 60 degrees north and 56 degrees south latitude, using interferometric synthetic aperture radar (InSAR) techniques. The SRTM data are distributed at a horizontal resolution of 1 arc-second (~30 m) for areas within the USA and at 3 arc-second (~90 m) resolution for the rest of the world. A resolution of 90 m can be considered suitable for small- or medium-scale analysis, but it is too coarse for more detailed purposes. One alternative is to interpolate the SRTM data at a finer resolution; this will not increase the level of detail of the original digital elevation model (DEM), but it will lead to a surface with coherent angular properties (i.e. slope, aspect) between neighbouring pixels, which is an important characteristic when dealing with terrain analysis. This work intends to show how the proper adjustment of variogram and kriging parameters, namely the nugget effect and the maximum distance within which values are used in interpolation, can achieve quality results when resampling SRTM data from 3" to 1". We present results for a test area in the western USA, including different adjustment schemes (changes in nugget effect value and in the interpolation radius) and comparisons with the original 1" model of the area, with the national elevation dataset (NED) DEMs, and with other interpolation methods (splines and inverse distance weighting (IDW)).
The basic concepts for using kriging to resample terrain data are: (i) working only with the immediate neighbourhood of the predicted point, owing to the high spatial correlation of the topographic surface and the omnidirectional behaviour of the variogram at short distances; (ii) adding a very small random variation to the coordinates of the points prior to interpolation, to avoid punctual artifacts generated by predicted points at the same location as original data points; and (iii) using a small nugget effect value, to avoid smoothing that can obliterate terrain features. Drainage networks derived from the surfaces interpolated by kriging and by splines agree well with streams derived from the 1" NED, with correct identification of watersheds, even though a few differences occur in the positions of some rivers in flat areas. Although the 1" surfaces resampled by kriging and splines are very similar, we consider the results produced by kriging superior, since the spline-interpolated surface still presented some noise and linear artifacts, which kriging removed.
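Concepts (ii) and (iii) above can be sketched in a minimal ordinary-kriging prediction for a single point. The variogram model and all parameter values here are assumptions, not the paper's fitted variogram: a tiny random jitter is added to the data coordinates, and a small nugget is used in an exponential variogram.

```python
import numpy as np

# Minimal ordinary kriging for one target point (model and parameters are
# illustrative assumptions, not the paper's fitted variogram).
rng = np.random.default_rng(2)

def variogram(h, nugget=0.01, sill=1.0, a=300.0):
    # exponential model with a deliberately small nugget (concept iii)
    return nugget + (sill - nugget) * (1.0 - np.exp(-h / a))

# Elevation samples on a 90 m grid (synthetic values).
pts = np.array([[0.0, 0.0], [90.0, 0.0], [0.0, 90.0], [90.0, 90.0]])
z = np.array([100.0, 102.0, 101.0, 103.0])
pts = pts + rng.normal(0.0, 0.05, pts.shape)  # tiny coordinate jitter (concept ii)

def krige(pts, z, target):
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, n] = 0.0                               # Lagrange multiplier row/column
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(pts - target, axis=-1))
    w = np.linalg.solve(A, b)[:n]               # ordinary-kriging weights
    return float(w @ z)

z_center = krige(pts, z, np.array([45.0, 45.0]))  # value at a finer grid node
```

In a full resampling, concept (i) would restrict `pts` to the immediate neighbourhood of each predicted node rather than using all data points.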
Abstract:
This paper proposes a method to locate and track people by combining evidence from multiple cameras using the homography constraint. The proposed method uses foreground pixels from simple background subtraction to compute evidence of the location of people on a reference ground plane. The algorithm computes an amount of support that essentially corresponds to the "foreground mass" above each pixel; pixels that correspond to ground points therefore have more support. The support is normalized to compensate for perspective effects and accumulated on the reference plane over all camera views. Detecting people on the reference plane then becomes a search for regions of local maxima in the accumulator. Many false positives are filtered out by checking the visibility consistency of the detected candidates against all camera views. The remaining candidates are tracked using Kalman filters and appearance models. Experimental results on challenging data from PETS'06 show good performance of the method in the presence of severe occlusion, and ground truth data confirm its robustness. (C) 2010 Elsevier B.V. All rights reserved.
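The accumulation step can be sketched as follows. The homography here is a made-up example, not calibration data from PETS'06: foreground pixel coordinates from one camera are mapped through a plane homography and votes are accumulated on a reference-plane grid; ground-level pixels from several views pile up at the same plane location.

```python
import numpy as np

# Sketch of accumulating foreground evidence on a reference ground plane
# (the homography is a toy example, not real calibration data).
def project_to_plane(H, pixels):
    """Map Nx2 pixel coords through 3x3 homography H; returns Nx2 plane coords."""
    p = np.hstack([pixels, np.ones((len(pixels), 1))])
    q = p @ H.T
    return q[:, :2] / q[:, 2:3]               # dehomogenize

H = np.array([[0.5, 0.0, 10.0],
              [0.0, 0.5, 20.0],
              [0.0, 0.0, 1.0]])               # toy camera-to-plane homography

foreground = np.array([[40, 60], [40, 62], [100, 100]])   # foreground pixels
plane_pts = project_to_plane(H, foreground.astype(float))

# Vote accumulation on a coarse reference-plane grid; in the full method
# votes would be normalized for perspective and summed over all cameras.
acc = np.zeros((80, 80))
for u, v in np.round(plane_pts).astype(int):
    acc[v, u] += 1.0
peak = np.unravel_index(acc.argmax(), acc.shape)   # candidate person location
```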
Abstract:
The design of translation-invariant and locally defined binary image operators over large windows is made difficult by decreased statistical precision and increased training time. We present a complete framework for the application of stacked design, a recently proposed technique for creating two-stage operators that circumvents this difficulty. We propose a novel algorithm, based on information theory, to find groups of pixels that should be used together to predict the output value. We employ this algorithm to automate the creation of a set of first-level operators that are later combined into a global operator. We also propose a principled way to guide this combination, using feature selection and model comparison. Experimental results show that the proposed framework leads to better results than single-stage design. (C) 2009 Elsevier B.V. All rights reserved.
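The pixel-grouping step can be illustrated with a toy mutual-information score. This is a generic sketch, not the paper's exact algorithm: for binary training patterns, the mutual information between each pair of window pixels and the desired output is estimated, and the most informative group is kept for a first-level operator.

```python
import numpy as np
from itertools import combinations

# Toy information-theoretic grouping (generic sketch, not the paper's
# algorithm): score pixel pairs by mutual information with the output.
def mutual_information(x, y):
    """Mutual information in bits between two discrete sequences."""
    x, y = np.asarray(x), np.asarray(y)
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log2(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

# Each row: values of 3 window pixels; the label depends only on pixels 0 and 2.
patterns = np.array([[0, 0, 0], [0, 1, 0], [1, 0, 1],
                     [1, 1, 1], [0, 0, 1], [1, 1, 0]])
labels = np.array([p[0] ^ p[2] for p in patterns])   # XOR of pixels 0 and 2

scores = {}
for i, j in combinations(range(3), 2):
    joint = patterns[:, i] * 2 + patterns[:, j]      # encode the pair as one symbol
    scores[(i, j)] = mutual_information(joint, labels)
best_group = max(scores, key=scores.get)
```

The XOR label is a case where neither pixel alone is informative but the pair is, which is exactly why pixels are scored in groups rather than individually.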
Abstract:
The motivation for this thesis work is the need to improve equipment reliability and quality of service for railway passengers, as well as the requirement for cost-effective and efficient condition-maintenance management in rail transportation. This thesis develops a fusion of several machine vision analysis methods to achieve high performance in the automated inspection of wooden rail tracks. Condition monitoring in rail transport is traditionally done manually by a human operator, who relies on inference and assumptions to reach conclusions. Condition monitoring allows maintenance to be scheduled, or other actions to be taken, before a failure occurs. Manual or automated condition monitoring of materials in fields of public transportation such as railways, aerial navigation and traffic safety, where safety is of prime importance, requires non-destructive testing (NDT). In general, wooden railway sleeper inspection is performed manually by an operator moving along the track and judging sleeper quality by visual and aural analysis for the presence of cracks. In this project, a machine vision system is developed based on this manual visual analysis, using digital cameras and image processing software to perform similar inspections. Because manual inspection is laborious, error prone and sometimes difficult even for a human operator owing to frequent changes in the inspected material, the machine vision system instead classifies the condition of the material by examining individual image pixels, processing them, and drawing conclusions with the assistance of knowledge bases and features. A pattern recognition approach is developed based on the methodological knowledge from the manual procedure.
The pattern recognition approach was realised as a non-destructive testing method to identify flaws in the manual condition monitoring of sleepers. A test vehicle was designed to capture sleeper images in a manner similar to visual inspection by a human operator, and the captured images of the wooden sleepers provide the raw data for the pattern recognition approach. The NDT data were further processed and appropriate features extracted, with the aim of achieving reliable, high-accuracy classification. A key idea is to use an unsupervised classifier on the extracted features to discriminate the condition of wooden sleepers into either good or bad; a self-organising map is used as the classifier. To achieve greater integration, the data collected by the machine vision system were combined through a strategy called fusion, examined at two levels: sensor-level fusion and feature-level fusion. As the goal was to reduce human error in classifying rail sleepers as good or bad, the feature-level fusion results, compared with the actual classification, were satisfactory.
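The unsupervised classification step can be sketched with a minimal self-organising map. The features, map size and learning schedule here are all invented for illustration: feature vectors extracted from sleeper images are clustered on a small 1-D SOM, after which map units naturally separate into "good" and "bad" regions.

```python
import numpy as np

# Minimal 1-D self-organising map (features, map size and learning
# schedule are invented, not the thesis's actual configuration).
rng = np.random.default_rng(3)

# Synthetic 2-D features, e.g. crack density and texture roughness.
good = rng.normal([0.2, 0.2], 0.05, (30, 2))
bad = rng.normal([0.8, 0.8], 0.05, (30, 2))
data = np.vstack([good, bad])

n_units = 4
weights = rng.random((n_units, 2))
for epoch in range(50):
    lr = 0.5 * (1.0 - epoch / 50)                  # decaying learning rate
    for x in rng.permutation(data):
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best matching unit
        for u in range(n_units):
            h = np.exp(-abs(u - bmu))              # 1-D neighbourhood function
            weights[u] += lr * h * (x - weights[u])

# After training, different units respond to "good" and "bad" features.
bmu_good = np.argmin(np.linalg.norm(weights - good.mean(0), axis=1))
bmu_bad = np.argmin(np.linalg.norm(weights - bad.mean(0), axis=1))
```

Labelling the trained units (e.g. from a few known examples) then turns the unsupervised map into the good/bad sleeper classifier described above.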