940 results for Transform statistics
Abstract:
Spatial information captured from optical remote sensors on board unmanned aerial vehicles (UAVs) has great potential in automatic surveillance of electrical infrastructure. For an automatic vision-based power line inspection system, detecting power lines against a cluttered background is one of the most important and challenging tasks. In this paper, a novel method is proposed specifically for power line detection in aerial images. A pulse-coupled neural filter is developed to remove background noise and generate an edge map before the Hough transform is employed to detect straight lines. The Hough transform is then improved by performing knowledge-based line clustering in Hough space to refine the detection results. Experiments on real image data captured from a UAV platform demonstrate that the proposed approach is effective for automatic power line detection.
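The straight-line detection step in this abstract rests on the standard Hough voting scheme. The sketch below shows only that generic accumulator; the paper's pulse-coupled neural filter (which produces the edge map) and its knowledge-based clustering in Hough space are not reproduced, and the function name and parameters are illustrative.

```python
import numpy as np

def hough_lines(edge_map, n_theta=180):
    """Vote for lines (rho, theta) from a binary edge map.

    Each edge pixel (x, y) votes for all lines through it, using the
    normal form rho = x*cos(theta) + y*sin(theta); rho indices are
    shifted by the image diagonal so they are non-negative.
    """
    h, w = edge_map.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    ys, xs = np.nonzero(edge_map)
    for t, theta in enumerate(thetas):
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int) + diag
        np.add.at(acc, (rhos, np.full_like(rhos, t)), 1)  # unbuffered accumulation
    return acc, thetas, diag
```

Peaks in the accumulator correspond to detected lines; the paper's refinement step would then cluster nearby peaks in this (rho, theta) space.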
Abstract:
Light Detection and Ranging (LIDAR) has great potential to assist vegetation management in power line corridors by providing more accurate geometric information about the power line assets and the vegetation along the corridors. However, the development of algorithms for the automatic processing of LIDAR point cloud data, in particular for feature extraction and classification of raw point cloud data, is still in its infancy. In this paper, we take advantage of LIDAR intensity and classify ground and non-ground points by statistically analyzing the skewness and kurtosis of the intensity data. Moreover, the Hough transform is employed to detect power lines from the filtered object points. The experimental results show the effectiveness of our methods and indicate that better results were obtained by using LIDAR intensity data than elevation data.
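The statistics this abstract analyzes can be computed directly. The sketch below shows only the sample skewness and excess kurtosis of an intensity sample; the paper's actual ground/non-ground decision rule built on these statistics is not reproduced here.

```python
import numpy as np

def skew_kurtosis(intensity):
    """Sample skewness and excess kurtosis of LIDAR intensity values.

    Skewness is the third standardized moment; excess kurtosis is the
    fourth standardized moment minus 3 (zero for a normal distribution).
    """
    x = np.asarray(intensity, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean(), (z ** 4).mean() - 3.0
```

A symmetric, roughly normal intensity sample yields values near (0, 0); strong positive skew or heavy tails in a candidate subset are the kind of signal such a classifier can act on.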
Abstract:
Intuitively, any `bag of words' approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document's initial distribution. A secondary contribution is to investigate the practical application of this representation in case the queries become increasingly verbose. In the experiments (based on Lemur's search engine substrate) the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
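The core computation this abstract describes, taking the stationary distribution of a term co-occurrence Markov chain as the query or document model, can be sketched with power iteration. The uniform smoothing weight `alpha` below is an assumed choice, used here only to guarantee ergodicity; the paper establishes ergodicity from the language properties of the documents themselves.

```python
import numpy as np

def stationary_distribution(cooc, alpha=0.1, tol=1e-10, max_iter=1000):
    """Stationary distribution of a smoothed co-occurrence Markov chain.

    Rows of `cooc` are per-term co-occurrence counts. Uniform smoothing
    with weight `alpha` makes every transition probability positive, so
    the chain is ergodic and its stationary distribution is unique.
    """
    cooc = np.asarray(cooc, dtype=float)
    n = cooc.shape[0]
    rows = cooc / cooc.sum(axis=1, keepdims=True)  # assumes no all-zero row
    P = (1.0 - alpha) * rows + alpha / n
    pi = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        new = pi @ P  # one step of power iteration
        done = np.abs(new - pi).sum() < tol
        pi = new
        if done:
            break
    return pi
```

The resulting distribution replaces the raw (initial) term distribution in the usual language-modeling ranking formula.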
Abstract:
The refractive error of a human eye varies across the pupil and therefore may be treated as a random variable. The probability distribution of this random variable provides a means for assessing the main refractive properties of the eye without the necessity of a traditional functional representation of wavefront aberrations. To demonstrate this approach, the statistical properties of refractive error maps are investigated. Closed-form expressions are derived for the probability density function (PDF) and its statistical moments for the general case of rotationally symmetric aberrations. A closed-form expression for the PDF of a general non-rotationally symmetric wavefront aberration is difficult to derive. However, for specific cases, such as astigmatism, a closed-form expression of the PDF can be obtained. Further, interpretation of the distribution of the refractive error map as well as its moments is provided for a range of wavefront aberrations measured in real eyes. These are evaluated using a kernel density estimator and sample moment estimators. It is concluded that the refractive error domain allows non-functional analysis of wavefront aberrations based on simple statistics in the form of its sample moments. Clinicians may find this approach to wavefront analysis easier to interpret due to the clinical familiarity and intuitive appeal of refractive error maps.
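The two empirical tools named in this abstract, sample moments and a kernel density estimator, can be sketched as follows. The paper's closed-form PDFs are not reproduced; the Gaussian kernel and the bandwidth choice below are assumptions for illustration.

```python
import numpy as np

def sample_moments(refractive_error):
    """Mean and central moments (orders 2-4) of a refractive error map,
    treating the per-pupil-point values as draws of a random variable."""
    x = np.asarray(refractive_error, dtype=float).ravel()
    mean = x.mean()
    return {1: mean, **{k: ((x - mean) ** k).mean() for k in (2, 3, 4)}}

def kernel_density(grid, samples, bandwidth):
    """Gaussian kernel density estimate of the refractive error PDF,
    evaluated at each point of `grid` (dioptres)."""
    samples = np.asarray(samples, dtype=float).ravel()
    z = (np.asarray(grid, dtype=float)[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (bandwidth * np.sqrt(2.0 * np.pi))
```

The mean recovers the best-fit spherical correction, while the higher central moments summarize the spread and asymmetry of the refractive error across the pupil.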
Abstract:
The macerals in bituminous coals with varying organic sulfur content from the Early Permian Greta Coal Measures at three locations (Southland Colliery, Drayton Colliery and the Cranky Corner Basin), in and around the Sydney Basin (Australia), have been studied using light-element electron microprobe (EMP) analysis and micro-ATR–FTIR. Electron microprobe analysis of individual macerals reveals that the vitrinite in both the Cranky Corner Basin and Drayton Colliery (Puxtrees seam) samples have similar carbon contents (ca. 78% C in telocollinite), suggesting that they are of equivalent rank. However, the Cranky Corner coals have anomalously low vitrinite reflectance (down to 0.45%) vs. the Drayton materials (ca. 0.7%). They also have very high organic S content (3–6.5%) and lower O content (ca. 10%) than the equivalent macerals in the Drayton sample (0.7% S and 15.6% O). A study was carried out to investigate the impacts of the high organic S on the functional groups of the macerals in these two otherwise iso-rank, stratigraphically-equivalent seams. An iso-rank low-S coal from the overlying Wittingham Coal Measures near Muswellbrook and coals of slightly higher rank from the Greta Coal Measures at Southland Colliery near Cessnock were also evaluated using the same techniques to extend the data set. Although the telocollinite in the Drayton and Cranky Corner coals have very similar carbon content (ca. 78% C), the ATR–FTIR spectra of the vitrinite and inertinite macerals in these respectively low S and high S coals show some distinct differences in IR absorbance from various aliphatic and aromatic functional groups. The differences in absorbance of the aliphatic stretching bands (2800–3000 cm−1) and the aromatic carbon (C=C) peak at 1606 cm−1 are very obvious.
Compared to that of the Drayton sample (0.7% S and 15% O), the telocollinite of the Cranky Corner coal (6% S and 10% O) clearly shows: (i) less absorbance from OH groups, represented by a broad region around 3553 cm−1, (ii) much stronger aliphatic C–H absorbance (stretching modes around 3000–2800 cm−1 and bending modes around 1442 cm−1) and (iii) less absorbance from aromatic carbon functional groups (peaking at 1606 cm−1). Evaluation of the iso-rank Drayton and Cranky Corner coals shows that: (i) the aliphatic C–H absorbances decrease with increasing oxygen content but increase with increasing organic S content and (ii) the aromatic H to aliphatic H ratio (Har/Hali) for the telocollinite increases with (organic) O%, but decreases progressively with increasing organic S. The high organic S content in the maceral appears to be accompanied by a greater proportion of aliphatic functional groups, possibly as a result of some of the O within maceral ring structures in the high S coal samples being replaced.
Abstract:
Stereo vision is a method of depth perception, in which depth information is inferred from two (or more) images of a scene, taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications. For such applications, the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem. This problem involves locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census, were also compared. Both the rank and the census transforms were found to improve the reliability of matching in the presence of radiometric distortion, which is significant since radiometric distortion is a problem that commonly arises in practice. In addition, they have low computational complexity, making them amenable to fast hardware implementation. Therefore, it was decided that matching algorithms using these transforms would be the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process resulted in one constraint which must be satisfied for a correct match. This was termed the rank constraint.
The theoretical derivation of this constraint is in contrast to the existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs. In all cases, the modified algorithm consistently resulted in an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas. These included the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that the algorithm is able to remove a large proportion of invalid matches, and improve match accuracy.
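The two non-parametric transforms compared in this thesis can be sketched from their standard definitions; the thesis's rank constraint and matching algorithm are not reproduced here, and the window handling below (borders left at zero) is an illustrative simplification.

```python
import numpy as np

def rank_transform(img, radius=1):
    """Rank transform: each pixel becomes the number of pixels in the
    surrounding window whose intensity is strictly less than the centre."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.int32)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            win = img[y - radius:y + radius + 1, x - radius:x + radius + 1]
            out[y, x] = int((win < img[y, x]).sum())
    return out

def census_transform(img, radius=1):
    """Census transform: one bit per neighbour recording whether it is
    less than the centre pixel, packed into an integer per pixel."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint64)
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            bits = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    if dy == 0 and dx == 0:
                        continue
                    bits = (bits << 1) | int(img[y + dy, x + dx] < img[y, x])
            out[y, x] = bits
    return out
```

Because both transforms depend only on intensity orderings, they are unchanged by any monotonically increasing radiometric distortion (gain and offset), which is the robustness property the thesis exploits; matching then compares transformed windows (e.g. by Hamming distance for the census transform).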