842 results for Texture Feature


Relevance: 20.00%

Abstract:

Wide-angle images exhibit significant distortion for which existing scale-space detectors such as the scale-invariant feature transform (SIFT) are inappropriate. The required scale-space images for feature detection are correctly obtained through the convolution of the image, mapped to the sphere, with the spherical Gaussian. A new visual key-point detector, based on this principle, is developed and several computational approaches to the convolution are investigated in both the spatial and frequency domain. In particular, a close approximation is developed that has comparable computation time to conventional SIFT but with improved matching performance. Results are presented for monocular wide-angle outdoor image sequences obtained using fisheye and equiangular catadioptric cameras. We evaluate the overall matching performance (recall versus 1-precision) of these methods compared to conventional SIFT. We also demonstrate the use of the technique for variable frame-rate visual odometry and its application to place recognition.
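The abstract above describes building the scale space on the sphere rather than the image plane. As a rough, hedged illustration of the approximation idea (not the authors' detector), the sketch below varies the effective Gaussian width with each pixel's angle from the optical axis and blends two conventionally blurred copies; the per-pixel angle map `theta`, the `cos`-based scale mapping, and the clipping bounds are all assumptions introduced here for illustration.

```python
# Minimal sketch, assuming an equiangular-style wide-angle camera for which a
# per-pixel angle-from-axis map `theta` is available.  Not the paper's
# implementation: it only illustrates approximating spherical-Gaussian
# smoothing by a spatially varying blur built from two uniform blurs.
import numpy as np
from scipy.ndimage import gaussian_filter

def approx_spherical_blur(img, theta, sigma0):
    """img: 2-D float array; theta: per-pixel angle from the optical axis (rad);
    sigma0: nominal scale at the image centre."""
    # Illustrative mapping (not the paper's exact one): a fixed angular scale
    # corresponds to a smaller image-plane kernel away from the centre.
    sigma_map = sigma0 * np.clip(np.cos(theta), 0.2, 1.0)
    lo, hi = float(sigma_map.min()), float(sigma_map.max())
    blur_lo = gaussian_filter(img, lo)
    blur_hi = gaussian_filter(img, hi)
    w = (sigma_map - lo) / max(hi - lo, 1e-9)   # per-pixel blend weight
    return (1.0 - w) * blur_lo + w * blur_hi

# A scale-space pyramid would then stack approx_spherical_blur(img, theta, s)
# over a geometric series of scales s, as conventional SIFT does.
```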

Relevance: 20.00%

Abstract:

Background: Some dialysis patients fail to comply with their fluid restriction, causing problems due to volume overload. These patients sometimes blame excessive thirst. There has been little work in this area and none documenting polydipsia among peritoneal dialysis (PD) patients. Methods: We measured motivation to drink and fluid consumption in 46 haemodialysis (HD) patients, 39 PD patients and 42 healthy controls (HC) using a modified palmtop computer to collect visual analogue scores at hourly intervals. Results: Mean thirst scores were markedly depressed on the dialysis day (day 1) for HD (P < 0.0001). The profile for day 2 was similar to that of HC. PD generated consistently higher scores than HD day 1 and HC (P = 0.01 vs. HC and P < 0.0001 vs. HD day 1). Reported mean daily water consumption was similar for HD and PD, with both significantly less than HC (P < 0.001 for both). However, measured fluid losses were similar for PD and HC, whilst those for HD were lower (P < 0.001 for both), suggesting that the PD group may have underestimated their fluid intake. Conclusion: Our results indicate that HD causes a protracted period of reduced thirst, but that this population's thirst perception is similar to HC on the interdialytic day despite a reduced fluid intake. In contrast, the PD group recorded high thirst scores throughout the day and were apparently less compliant with their fluid restriction. This is potentially important because the volume status of PD patients influences their survival.

Relevance: 20.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
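The thesis estimates the generalized Gaussian shape parameter with a least-squares formulation; as a hedged stand-in, the sketch below uses the simpler and widely known moment-matching estimator based on the ratio E|x| / sqrt(E[x^2]). The function name, the bracketing interval for the root-finder, and the test-data comment are assumptions introduced here, not details from the abstract.

```python
# Illustrative sketch only: fit a generalized Gaussian distribution (GGD) to a
# wavelet subband by moment matching, a simpler stand-in for the
# least-squares shape-parameter estimator described in the abstract.
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape(coeffs):
    """Estimate the GGD shape parameter beta from subband coefficients."""
    x = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))  # observed moment ratio
    # For a GGD: E|x| / sqrt(E[x^2]) = Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)).
    f = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    # Bracketing interval is an assumption; typical wavelet subbands fall in it.
    return brentq(f, 0.1, 5.0)

# Sanity check: a Laplacian-like subband should give beta near 1,
# a Gaussian-like one near 2.
```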

Relevance: 20.00%

Abstract:

Previous research on the protection of soil organic C from decomposition suggests that soil texture affects soil C stocks. However, different pools of soil organic matter (SOM) might be differently related to soil texture. Our objective was to examine how soil texture differentially alters the distribution of organic C within physically and chemically defined pools of unprotected and protected SOM. We collected samples from two soil texture gradients where other variables influencing soil organic C content were held constant. One texture gradient (16-60% clay) was located near Stewart Valley, Saskatchewan, Canada and the other (25-50% clay) near Cygnet, OH. Soils were physically fractionated into coarse- and fine-particulate organic matter (POM), silt- and clay-sized particles within microaggregates, and easily dispersed silt- and clay-sized particles outside of microaggregates. Whole-soil organic C concentration was positively related to silt plus clay content at both sites. We found no relationship between soil texture and unprotected C (coarse- and fine-POM C). Biochemically protected C (nonhydrolyzable C) increased with increasing clay content in whole-soil samples, but the proportion of nonhydrolyzable C within silt- and clay-sized fractions was unchanged. As the amount of silt or clay increased, the amount of C stabilized within easily dispersed and microaggregate-associated silt or clay fractions decreased. Our results suggest that for a given level of C inputs, the relationship between mineral surface area and soil organic matter varies with soil texture for physically and biochemically protected C fractions. Because soil texture acts directly and indirectly on various protection mechanisms, it may not be a universal predictor of whole-soil C content.

Relevance: 20.00%

Abstract:

Trajectory design for Autonomous Underwater Vehicles (AUVs) is of great importance to the oceanographic research community. Intelligent planning is required to maneuver a vehicle to high-valued locations for data collection. We consider the use of ocean model predictions to determine the locations to be visited by an AUV, which then provides near-real-time, in situ measurements back to the model to increase the skill of future predictions. The motion planning problem of steering the vehicle between the computed waypoints is not considered here. Our focus is on the algorithm to determine relevant points of interest for a chosen oceanographic feature. This represents a first approach to an end-to-end autonomous prediction and tasking system for aquatic, mobile sensor networks. We design a sampling plan and present experimental results with AUV retasking in the Southern California Bight (SCB) off the coast of Los Angeles.
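The abstract does not give the waypoint-selection rule, so the following is only a hedged reduction of the idea: score a predicted ocean-model field, then pick the k strongest cells as waypoints subject to a minimum spacing. The function name, the spacing heuristic, and the choice of scoring field are assumptions made here for illustration.

```python
# Hedged illustration, not the paper's algorithm: choose AUV waypoints as the
# k grid cells where a predicted ocean-model field (e.g. a plume tracer or a
# temperature-gradient magnitude) is strongest, with a minimum separation so
# the waypoints are not all clustered in one neighbourhood.
import numpy as np

def select_waypoints(field, lon, lat, k=5, min_sep_deg=0.05):
    """field: 2-D model prediction on a lat/lon grid; lon, lat: 1-D axes."""
    order = np.argsort(field, axis=None)[::-1]          # strongest cells first
    picked = []
    for flat in order:
        i, j = np.unravel_index(flat, field.shape)
        cand = (lat[i], lon[j])
        if all(np.hypot(cand[0] - p[0], cand[1] - p[1]) >= min_sep_deg
               for p in picked):
            picked.append(cand)
        if len(picked) == k:
            break
    return picked   # (lat, lon) pairs handed to the vehicle's motion planner
```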

Relevance: 20.00%

Abstract:

Vector field visualisation is one of the classic sub-fields of scientific data visualisation. The need for effective visualisation of flow data arises in many scientific domains, ranging from medical sciences to aerodynamics. Though there has been much research on the topic, the question of how to communicate flow information effectively in real, practical situations is still largely an unsolved problem. This is particularly true for complex 3D flows. In this presentation we give a brief introduction and background to vector field visualisation and comment on the effectiveness of the most common solutions. We will then give some examples of current developments in texture-based techniques, and give practical examples of their use in CFD research and hydrodynamic applications.
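The texture-based techniques mentioned above are typified by line integral convolution (LIC), which smears a noise texture along streamlines of the field. The sketch below is a deliberately simple, hedged LIC (fixed step length, nearest-neighbour sampling, Euler integration) rather than anything shown in the presentation; all names are introduced here.

```python
# Minimal line integral convolution (LIC) sketch: convolve white noise along
# the flow so streak direction reveals the vector field.  Illustrative only.
import numpy as np

def lic(vx, vy, length=20, rng=np.random.default_rng(0)):
    """vx, vy: 2-D velocity components sampled on a pixel grid."""
    h, w = vx.shape
    noise = rng.random((h, w))
    out = np.zeros((h, w))
    mag = np.hypot(vx, vy) + 1e-9
    ux, uy = vx / mag, vy / mag                     # unit flow directions
    for i in range(h):
        for j in range(w):
            acc, n = 0.0, 0
            for sgn in (+1.0, -1.0):                # integrate both ways
                x, y = float(j), float(i)
                for _ in range(length):
                    ii, jj = int(round(y)), int(round(x))
                    if not (0 <= ii < h and 0 <= jj < w):
                        break
                    acc += noise[ii, jj]
                    n += 1
                    x += sgn * ux[ii, jj]           # Euler step along the flow
                    y += sgn * uy[ii, jj]
            out[i, j] = acc / max(n, 1)
    return out    # streaky grey-scale texture aligned with the flow
```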

Relevance: 20.00%

Abstract:

This paper presents a robust stochastic framework for the incorporation of visual observations into conventional estimation, data fusion, navigation and control algorithms. The representation combines Isomap, a non-linear dimensionality reduction algorithm, with expectation maximization, a statistical learning scheme. The joint probability distribution of this representation is computed offline based on existing training data. The training phase of the algorithm results in a nonlinear and non-Gaussian likelihood model of natural features conditioned on the underlying visual states. This generative model can be used online to instantiate likelihoods corresponding to observed visual features in real time. The instantiated likelihoods are expressed as a Gaussian mixture model and are conveniently integrated within existing non-linear filtering algorithms. Example applications based on real visual data from heterogeneous, unstructured environments demonstrate the versatility of the generative models.
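As a hedged sketch of the offline training stage only, the snippet below uses scikit-learn's Isomap and GaussianMixture as stand-ins for the paper's own embedding and EM implementation: embed the visual descriptors, then fit a mixture over the joint (embedding, state) space so that likelihoods can be instantiated online. The data shapes and placeholder arrays are assumptions.

```python
# Hedged sketch, not the authors' code: learn a generative model of visual
# features jointly with the vehicle state, for later use as an observation
# likelihood inside a non-linear filter.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

# Assumed training data: N visual descriptors and the states (e.g. 2-D poses)
# at which they were observed.  Random placeholders stand in for real data.
descriptors = np.random.rand(500, 128)
states = np.random.rand(500, 2)

embed = Isomap(n_components=3).fit(descriptors)
z = embed.transform(descriptors)           # non-linear low-dimensional features

joint = np.hstack([z, states])             # joint (embedding, state) samples
gmm = GaussianMixture(n_components=8, covariance_type="full").fit(joint)

# Online, a new observation is embedded with `embed.transform` and its joint
# density under `gmm` supplies a non-Gaussian likelihood that a particle or
# Gaussian-sum filter can use to weight state hypotheses.
```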

Relevance: 20.00%

Abstract:

Road surface macro-texture is an indicator used to determine skid resistance levels in pavements. Existing methods of quantifying macro-texture include the sand patch test and the laser profilometer. These methods utilise the 3D information of the pavement surface to extract the average texture depth. Recently, interest has arisen in image processing techniques as a quantifier of macro-texture, mainly using the Fast Fourier Transform (FFT). This paper reviews the FFT method, and then proposes two new methods, one using the autocorrelation function and the other using wavelets. The methods are tested on pictures obtained from a pavement surface extending more than 2 km. About 200 images were acquired from the surface at approximately 10 m intervals, from a height of 80 cm above the ground. The results obtained from image analysis methods using the FFT, the autocorrelation function and wavelets are compared with sensor-measured texture depth (SMTD) data obtained from the same paved surface. The results indicate that coefficients of determination (R²) exceeding 0.8 are obtained when up to 10% of outliers are removed.
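The abstract names FFT- and autocorrelation-based indicators without giving their exact form, so the sketch below is only an illustrative example of that machinery: it computes the fraction of spectral energy above a low-frequency cutoff and the autocorrelation via the Wiener-Khinchin theorem. The cutoff value and the indicator definitions are assumptions, not the paper's calibrated estimators.

```python
# Illustrative only: simple spectral macro-texture indicators from a
# greyscale pavement image.  The paper's FFT, autocorrelation and wavelet
# methods are calibrated against SMTD; this just shows the basic machinery.
import numpy as np

def texture_indicators(img, cutoff=0.05):
    """img: 2-D greyscale array of a pavement patch (float)."""
    g = img - img.mean()
    power = np.abs(np.fft.fft2(g)) ** 2
    h, w = g.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)
    # Fraction of energy above the cutoff: coarser macro-texture shifts
    # energy between bands, so this acts as a crude texture indicator.
    high_frac = power[radius > cutoff].sum() / power.sum()
    # Autocorrelation via the Wiener-Khinchin theorem; the width of its
    # central peak is another texture-scale proxy.
    acf = np.fft.ifft2(power).real
    acf /= acf[0, 0]
    return high_frac, acf

# Either indicator would then be regressed against sensor-measured texture
# depth (SMTD), as in the paper's R² comparison.
```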

Relevance: 20.00%

Abstract:

Uncooperative iris identification systems at a distance suffer from poor resolution of the captured iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, all existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values. This paper considers transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain-specific information from iris models, improved recognition performance compared to pixel-domain super-resolution can be achieved. This is the first paper to investigate the possibility of feature-domain super-resolution for iris recognition, and experiments confirm the validity of the proposed approach.
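The paper's feature-domain algorithm is not reproduced here; the toy sketch below only illustrates the underlying contrast with pixel-domain methods: fuse per-frame iris feature vectors (rather than pixel frames), weighting each frame by a quality score. The function name, the choice of weighted averaging, and the quality scores are assumptions for illustration.

```python
# Toy illustration of the feature-domain idea, not the paper's method:
# combine per-frame iris feature vectors (e.g. Gabor responses) with
# quality weights, so only recognition-relevant information is fused.
import numpy as np

def fuse_features(frame_features, quality):
    """frame_features: (n_frames, n_dims) per-frame feature vectors;
    quality: (n_frames,) focus/segmentation quality scores."""
    w = np.asarray(quality, dtype=float)
    w /= w.sum()                                   # normalise the weights
    return (w[:, None] * np.asarray(frame_features)).sum(axis=0)

# The fused vector is then matched against gallery templates exactly as a
# single-frame feature vector would be.
```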

Relevance: 20.00%

Abstract:

It is a big challenge to guarantee the quality of discovered relevance features in text documents for describing user preferences, because of the large number of terms, patterns, and noise. Most existing popular text mining and classification methods have adopted term-based approaches. However, they have all suffered from the problems of polysemy and synonymy. Over the years, people have often held the hypothesis that pattern-based methods should perform better than term-based ones in describing user preferences, but many experiments do not support this hypothesis. The innovative technique presented in this paper makes a breakthrough on this difficulty. This technique discovers both positive and negative patterns in text documents as higher-level features in order to accurately weight low-level features (terms) based on their specificity and their distributions in the higher-level features. Substantial experiments using this technique on Reuters Corpus Volume 1 and TREC topics show that the proposed approach significantly outperforms both the state-of-the-art term-based methods underpinned by Okapi BM25, Rocchio or Support Vector Machine, and pattern-based methods, on precision, recall and F-measure.
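As a hedged illustration only (not the paper's weighting scheme), the sketch below shows the general shape of the idea: terms inherit weight from the higher-level patterns they appear in, so that terms occurring in many specific patterns outrank terms that are merely frequent. The weighting formula and names are assumptions introduced here.

```python
# Illustrative sketch: weight low-level terms by their distribution across
# higher-level patterns mined from relevant documents.  Not the paper's
# exact deploying method.
from collections import defaultdict

def term_weights(patterns):
    """patterns: list of (termset, support) pairs discovered from the
    positive documents, e.g. [({'wavelet', 'compression'}, 0.4), ...]."""
    w = defaultdict(float)
    for termset, support in patterns:
        for t in termset:
            # Spread the pattern's support over its terms: longer (more
            # specific) patterns contribute less per individual term.
            w[t] += support / len(termset)
    return dict(w)

# Documents are then scored by summing the weights of the terms they contain;
# negative patterns can be used analogously to penalise misleading terms.
```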

Relevance: 20.00%

Abstract:

The journalism revolution is upon us. In a world where we are constantly being told that everyone can be a publisher, and where challenges are emerging from bloggers, Twitterers and podcasters, journalism educators are inevitably reassessing what skills we now need to teach to keep our graduates ahead of the game. QUT this year tackled that question head-on as a curriculum review and program restructure resulted in a greater emphasis on online journalism. The author spent a week in the online newsrooms of each of two of the major players, ABC online news and thecouriermail.com, to watch, listen and interview some of the key players. This, in addition to interviews with industry leaders from Fairfax and news.com, led to the conclusion that while there are some new skills involved in new media, much of what the industry is demanding is in fact good old-fashioned journalism. Themes of good spelling, grammar, accuracy and writing skills, and a nose for news, recurred when industry players were asked what they would like to see in new graduates. While speed was cited as one of the big attributes needed in online journalism, the conclusion of many of the players was that the skills of a good down-table sub or a journalist working for a wire service were not unlike those most used in online newsrooms.

Relevance: 20.00%

Abstract:

Despite many arguments to the contrary, the three-act story structure, as propounded and refined by Hollywood, continues to dominate the blockbuster and independent film markets. Recent successes in post-modern cinema could indicate new directions and opportunities for low-budget national cinemas.