999 results for Quad-Tree decomposition



Relevance: 100.00%

Abstract:

This paper presents a novel driver verification algorithm based on the recognition of handgrip patterns on the steering wheel. A pressure-sensitive mat mounted on the steering wheel is used to collect a series of pressure images exerted by the hands of drivers who intend to start the vehicle. Feature extraction from those images is then carried out in two major steps: Quad-Tree-based multi-resolution decomposition of the images and Principal Component Analysis (PCA)-based dimension reduction, followed by a likelihood-ratio classifier that labels drivers as known or unknown. The experimental results obtained in this study show that mean acceptance rates of 78.15% and 78.22% for the trained subjects and mean rejection rates of 93.92% and 90.93% for the untrained ones are achieved in two trials, respectively. It can be concluded that driver verification based on handgrip recognition on the steering wheel is promising and will be explored further in the near future.
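As an illustration of the decomposition step, a minimal Quad-Tree split of a square image can be sketched as follows; the variance-based homogeneity test, the threshold, and the minimum block size are assumptions here, since the abstract does not state the paper's exact criterion.

```python
import numpy as np

def quadtree_decompose(img, threshold, min_size=2):
    """Recursively split a square image into quadrants until each block's
    variance falls below `threshold` (an assumed homogeneity test).
    Returns leaf blocks as (x, y, size, mean) tuples."""
    blocks = []

    def split(x, y, size):
        block = img[y:y + size, x:x + size]
        if size <= min_size or block.var() <= threshold:
            blocks.append((x, y, size, float(block.mean())))
        else:
            half = size // 2
            for dy in (0, half):
                for dx in (0, half):
                    split(x + dx, y + dy, half)

    split(0, 0, img.shape[0])
    return blocks

# Toy "pressure image": uniform background with one handgrip-like patch.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
leaves = quadtree_decompose(img, threshold=0.01)
```

Homogeneous areas collapse into large leaves while the contact region is refined, so the leaf list itself is a coarse-to-fine feature vector that PCA can then compress.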

Relevance: 90.00%

Abstract:

A variety of data structures, such as the inverted file, multi-lists, quad tree, k-d tree, range tree, polygon tree, quintary tree, multidimensional tries, segment tree, doubly chained tree, the grid file, d-fold tree, super B-tree, and Multiple Attribute Tree (MAT), have been studied for multidimensional searching and related problems. Physical database organization, an important application of multidimensional searching, is traditionally and mostly handled with the inverted file. This study proposes the MAT data structure for bibliographic file systems, illustrating its superiority over the inverted file. Both methods are compared in terms of preprocessing, storage, and query costs. Worst-case complexity analysis of both methods, for a partial match query, is carried out in two cases: (a) when the directory resides in main memory, and (b) when the directory resides in secondary memory. In both cases, the MAT data structure is shown to be more efficient than the inverted file method. Arguments are given to illustrate the superiority of the MAT data structure in the average case as well. An efficient adaptation of the MAT data structure, which exploits the special features of the MAT structure and of bibliographic files, is proposed for bibliographic file systems. In this adaptation, suitable techniques for fixing and ranking the attributes of the MAT data structure are proposed. Conclusions and proposals for future research are presented.
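For contrast with MAT, a partial match query under the inverted file method — the baseline the study argues against — can be sketched as follows; the record fields are purely illustrative.

```python
from collections import defaultdict

# Bibliographic records: each a dict of attribute -> value (illustrative fields).
records = [
    {"author": "Smith", "year": 1980, "subject": "trees"},
    {"author": "Jones", "year": 1980, "subject": "files"},
    {"author": "Smith", "year": 1982, "subject": "files"},
]

# Build the inverted file: one posting list per (attribute, value) pair.
inverted = defaultdict(set)
for rid, rec in enumerate(records):
    for attr, val in rec.items():
        inverted[(attr, val)].add(rid)

def partial_match(**specified):
    """Answer a partial match query by intersecting the posting lists of
    the specified attributes; unspecified attributes match anything."""
    result = set(range(len(records)))
    for attr, val in specified.items():
        result &= inverted[(attr, val)]
    return sorted(result)

hits = partial_match(author="Smith")
both = partial_match(author="Smith", year=1980)
```

The query cost is one posting-list intersection per specified attribute, which is the cost model the worst-case comparison in the abstract is drawn against.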

Relevance: 80.00%

Abstract:

In this paper, a novel fast method for modeling mammograms using a deterministic fractal coding approach is presented, with the aim of detecting the presence of microcalcifications, which are early signs of breast cancer. The modeled mammogram obtained with fractal encoding is visually similar to the original image containing microcalcifications; therefore, when it is subtracted from the original mammogram, the presence of microcalcifications can be enhanced. The limitation of fractal image modeling is the tremendous time required for encoding. In the present work, instead of searching for a matching domain in the entire domain pool of the image, three methods based on mean and variance, on the dynamic range of the image blocks, and on mass-center features are used. These reduced the encoding time by factors of 3, 89, and 13, respectively, with respect to conventional fractal image coding with quad-tree partitioning. The mammograms obtained from the Mammographic Image Analysis Society database (ground truth available) gave total detection scores of 87.6%, 87.6%, 90.5%, and 87.6% for the conventional method and the three proposed methods, respectively.
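The pruning idea behind the first of the three methods can be sketched as follows: filter the domain pool by block mean and variance before any expensive matching, so the costly affine search runs on far fewer candidates. The tolerance value and the toy constant-valued domain pool are assumptions, not values from the paper.

```python
import numpy as np

def candidate_domains(range_block, domains, tol=0.1):
    """Prune the domain pool: keep only domain blocks whose mean and
    variance both lie within `tol` of the range block's (the mean/variance
    filter; `tol` is an assumed parameter)."""
    r_mean, r_var = range_block.mean(), range_block.var()
    keep = []
    for idx, d in enumerate(domains):
        if abs(d.mean() - r_mean) <= tol and abs(d.var() - r_var) <= tol:
            keep.append(idx)
    return keep

# Toy domain pool of constant blocks with means 0.0, 0.1, ..., 0.9.
domains = [np.full((4, 4), v / 10) for v in range(10)]
range_block = np.full((4, 4), 0.35)
kept = candidate_domains(range_block, domains)
```

Only the domains whose features fall inside the tolerance band survive; everything else is skipped without ever computing a block-to-block match error.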


Relevance: 80.00%

Abstract:

This paper presents a novel mobile sink area allocation scheme for consumer-based mobile robotic devices, with a proven application to robotic vacuum cleaners. In a home or office environment, rooms are physically separated by walls, and an automated robotic cleaner cannot decide on its own which room to move to for the cleaning task. Likewise, state-of-the-art cleaning robots do not move to other rooms without direct human interference. In a smart home monitoring system, sensor nodes may be deployed to monitor each separate room. In this work, a quad-tree-based data gathering scheme is proposed whereby the mobile sink physically moves through every room and logically links all separated sub-networks together. The proposed scheme sequentially collects data from the monitored environment and transmits the information back to a base station. Based on the sensor node information, the base station can command a cleaning robot to move to a specific location in the home environment. The quad-tree-based data gathering scheme minimizes the data gathering tour length and time through the efficient allocation of data gathering areas. A calculated shortest-path data gathering tour can be allocated efficiently to the robotic cleaner so that it completes the cleaning task within a minimum time period. Simulation results show that the proposed scheme can effectively allocate and control the cleaning area for the robot vacuum cleaner without any direct interference from the consumer. The performance of the proposed scheme is then validated with a set of practical sequential data gathering tours in a typical office/home environment.
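The area allocation idea can be sketched as follows — recursively split the monitored area into quadrants until each data gathering area holds few enough sensor nodes, then visit the non-empty area centres; the capacity limit and the node coordinates are assumptions, not values from the paper.

```python
def allocate_areas(nodes, x, y, size, capacity=2):
    """Recursively split a square region into quadrants until each leaf
    area contains at most `capacity` sensor nodes (`capacity` is an
    assumed parameter).  Returns (area, nodes) pairs for non-empty leaves."""
    inside = [(nx, ny) for nx, ny in nodes
              if x <= nx < x + size and y <= ny < y + size]
    if len(inside) <= capacity or size <= 1:
        return [((x, y, size), inside)] if inside else []
    half = size / 2
    areas = []
    for dx in (0, half):
        for dy in (0, half):
            areas += allocate_areas(inside, x + dx, y + dy, half, capacity)
    return areas

# Sensor nodes scattered over an 8x8 "home" area.
nodes = [(1, 1), (2, 1), (1, 2), (6, 6), (7, 5)]
areas = allocate_areas(nodes, 0, 0, 8)
tour = [(ax + s / 2, ay + s / 2) for (ax, ay, s), _ in areas]  # gathering stops
```

Dense corners of the home get small, tightly packed gathering areas while empty space is skipped entirely, which is what keeps the sink's tour short.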

Relevance: 80.00%

Abstract:

In this paper a new method to compute the saliency of source images is presented. This work is an extension of the universal quality index introduced by Wang and Bovik and improved by Piella. It defines saliency according to the change in topology of the quadratic tree decomposition between the source images and the fused image. The saliency function assigns a higher weight to the tree nodes that differ more, in terms of topology, in the fused image. Quadratic tree decomposition provides an easy and systematic way to add a saliency factor based on the segmented regions of the images.
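A crude version of such a topology-based weight can be sketched as follows, using the number of quad-tree leaves as the topology summary; the homogeneity threshold and the weight formula are assumptions, not the paper's actual saliency function.

```python
import numpy as np

def leaf_count(img, threshold=0.01, min_size=2):
    """Number of quad-tree leaves needed to cover `img` — a crude
    summary of the decomposition's topology (the variance-based
    homogeneity test is an assumption)."""
    block = np.asarray(img, dtype=float)
    if block.shape[0] <= min_size or block.var() <= threshold:
        return 1
    h = block.shape[0] // 2
    return sum(leaf_count(block[y:y + h, x:x + h], threshold, min_size)
               for y in (0, h) for x in (0, h))

def saliency_weight(source_block, fused_block):
    """Weight a region by how much its quad-tree topology changes
    between the source and fused images: bigger change, larger weight."""
    a, b = leaf_count(source_block), leaf_count(fused_block)
    return abs(a - b) / max(a, b)
```

A region that is homogeneous in a source image but structured in the fused image (or vice versa) produces very different leaf counts and therefore a large weight.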

Relevance: 80.00%

Abstract:

This paper presents a region-based methodology for segmenting Digital Elevation Models obtained from laser scanning data. The methodology is based on two sequential techniques: a recursive splitting technique using the quad-tree structure, followed by a region merging technique using the Markov Random Field model. The recursive splitting technique starts by dividing the Digital Elevation Model into homogeneous regions. However, due to slight height differences in the Digital Elevation Model, region fragmentation can be relatively high. In order to minimize the fragmentation, a region merging technique based on the Markov Random Field model is applied to the previously segmented data. The resulting regions are first structured using the so-called Region Adjacency Graph. Each node of the Region Adjacency Graph represents a region of the segmented Digital Elevation Model, and two nodes are connected if the corresponding regions share a common boundary. It is then assumed that the random variable associated with each node follows the Markov Random Field model. This hypothesis allows the derivation of the posterior probability distribution, whose solution is obtained by Maximum a Posteriori estimation. Regions presenting a high probability of similarity are merged. Experiments carried out with laser scanning data showed that the methodology separates the objects in the Digital Elevation Model with little fragmentation.
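The merging stage can be sketched on a small Region Adjacency Graph as follows, with a plain height-similarity test standing in for the paper's MAP/MRF criterion; the tolerance and the unweighted height averaging are assumptions.

```python
def merge_regions(means, adjacency, tol=0.5):
    """Greedily merge adjacent regions of a Region Adjacency Graph whose
    mean heights differ by less than `tol`.  `means` maps region id to
    mean height; `adjacency` maps region id to its neighbour set."""
    means = dict(means)
    adj = {r: set(n) for r, n in adjacency.items()}
    merged = True
    while merged:
        merged = False
        for r in list(means):
            for n in list(adj.get(r, ())):
                if r in means and n in means and abs(means[r] - means[n]) < tol:
                    # Merge n into r: average heights, union neighbour sets,
                    # and redirect n's neighbours to r.
                    means[r] = (means[r] + means[n]) / 2
                    adj[r] = (adj[r] | adj.pop(n)) - {r, n}
                    for m in adj[r]:
                        adj[m].discard(n)
                        adj[m].add(r)
                    del means[n]
                    merged = True
    return means

# Three regions: 1 and 2 are similar and adjacent; 3 stands apart.
regions = merge_regions({1: 10.0, 2: 10.2, 3: 15.0},
                        {1: {2}, 2: {1, 3}, 3: {2}})
```

Over-fragmented pieces of the same surface collapse into one region, while the distinctly higher region survives the merge — the behaviour the MRF-based criterion is designed to achieve more robustly.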

Relevance: 80.00%

Abstract:

Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the use of the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions. For the Row-Column method the array architecture has been adopted, and for the Vector-Radix method, the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree-based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed. Many of the obtained speed-up and efficiency measures show values close to their respective theoretical maxima. Where appropriate, comparisons are drawn between the different implementations.
The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
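The first of the three convolution methods — direct convolution in the spatial domain — can be sketched as follows; Python is used here for brevity, whereas the thesis implements the method in Occam on a Transputer array.

```python
def convolve2d(img, kernel):
    """Direct spatial-domain 2D convolution, valid region only: the
    kernel is flipped in both axes, then slid over the image."""
    kh, kw = len(kernel), len(kernel[0])
    oh = len(img) - kh + 1
    ow = len(img[0]) - kw + 1
    out = [[0.0] * ow for _ in range(oh)]
    for i in range(oh):
        for j in range(ow):
            out[i][j] = sum(
                img[i + u][j + v] * kernel[kh - 1 - u][kw - 1 - v]
                for u in range(kh) for v in range(kw)
            )
    return out

# A 3x3 averaging kernel over a constant image leaves values unchanged.
img = [[1.0] * 4 for _ in range(4)]
avg = [[1 / 9] * 3 for _ in range(3)]
out = convolve2d(img, avg)
```

Each output pixel is independent of the others, which is precisely why the inner loops parallelise so naturally onto an array architecture.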

Relevance: 80.00%

Abstract:

3D geographic information systems (GIS) are data- and computation-intensive in nature. Internet users are usually equipped with low-end personal computers and network connections of limited bandwidth. Data reduction and performance optimization techniques are therefore of critical importance in quality-of-service (QoS) management for online 3D GIS. In this research, QoS management issues in distributed 3D GIS presentation were studied in order to develop 3D TerraFly, an interactive 3D GIS that supports high-quality online terrain visualization and navigation.

To tackle the QoS management challenges, a multi-resolution rendering model, adaptive level-of-detail (LOD) control, and mesh simplification algorithms were proposed to effectively reduce the terrain model complexity. The rendering model is adaptively decomposed into sub-regions of up to three detail levels according to viewing distance and other dynamic quality measurements. The mesh simplification algorithm is a hybrid that combines edge straightening and quad-tree compression to reduce mesh complexity by removing geometrically redundant vertices. Its main advantage is that the grid mesh can be processed directly in parallel without triangulation overhead. Algorithms facilitating remote access and distributed processing of volumetric GIS data, such as data replication, directory service, request scheduling, predictive data retrieval, and caching, were also proposed.

A prototype of the proposed 3D TerraFly implemented in this research demonstrates the effectiveness of the proposed QoS management framework in handling interactive online 3D GIS. The system implementation details and future directions of this research are also addressed in this thesis.
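Two of the ideas above can be sketched in miniature: distance-driven selection among up to three detail levels, and grid decimation that keeps the mesh a regular grid so no triangulation is needed. The thresholds and the decimation rule here are illustrative assumptions, not TerraFly's actual algorithms.

```python
def lod_level(distance, near=100.0, far=500.0):
    """Map viewing distance to one of up to three detail levels
    (the `near`/`far` thresholds are illustrative values)."""
    if distance < near:
        return 0  # finest sub-region
    if distance < far:
        return 1  # intermediate
    return 2      # coarsest

def simplify_grid(grid, level):
    """Crude stand-in for the hybrid simplification: decimate a regular
    height grid by keeping every 2**level-th vertex, so the result is
    still a grid and needs no triangulation."""
    step = 2 ** level
    return [row[::step] for row in grid[::step]]

# A 5x5 height grid rendered from far away drops to its coarsest level.
terrain = [[float(i + j) for j in range(5)] for i in range(5)]
coarse = simplify_grid(terrain, lod_level(800.0))
```

Because each row is decimated independently, the simplification is embarrassingly parallel — the property the abstract highlights for the grid-based approach.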

Relevance: 40.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
