940 results for Computational complexity.


Relevance:

60.00%

Publisher:

Abstract:

In this paper, we propose a low-complexity and reliable wideband spectrum sensing technique that operates at sub-Nyquist sampling rates. Unlike the majority of other sub-Nyquist spectrum sensing algorithms, which rely on the Compressive Sensing (CS) methodology, the introduced method does not entail solving an optimisation problem. It is characterised by simplicity and low computational complexity, delivers substantial reductions in the operational sampling rate, and does not compromise system performance. Reliability guidelines for the devised non-compressive sensing approach are provided, and simulations are presented to illustrate its superior performance. © 2013 IEEE.
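The paper's specific sub-Nyquist front end is not reproduced here, but the energy-detection decision rule that non-CS sensing schemes typically build on can be sketched in a few lines. The signal model, threshold, and sample count below are illustrative assumptions, not the authors' design:

```python
import numpy as np

def energy_detect(samples, threshold):
    """Classic energy detector: declare the band occupied when the
    average sample power exceeds a preset threshold."""
    return np.mean(np.abs(samples) ** 2) > threshold

rng = np.random.default_rng(0)
n = 4096
noise = rng.normal(scale=1.0, size=n)             # noise-only observation
tone = 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(n))
occupied = tone + rng.normal(scale=1.0, size=n)   # signal present

threshold = 1.5  # between the noise power (~1) and signal-plus-noise power (~3)
print(energy_detect(noise, threshold))     # expect False
print(energy_detect(occupied, threshold))  # expect True
```

The threshold trades false alarms against missed detections; in practice it is set from the noise-floor estimate and a target false-alarm probability.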

Relevance:

60.00%

Publisher:

Abstract:

Compared with the ordinary adaptive filter, the variable-length adaptive filter is more efficient (lower computational complexity, lower power consumption, and higher output SNR) because of its tap-length learning algorithm, which dynamically adapts the tap-length towards the optimal value that best balances the complexity and the performance of the adaptive filter. Among existing tap-length algorithms, the LMS-style Variable Tap-Length Algorithm (also called the Fractional Tap-Length, or FT, Algorithm) proposed by Y. Gong performs best, with the fastest convergence rate and the best stability. However, in some cases its performance deteriorates dramatically. To solve this problem, we first analyze the FT algorithm and point out some of its defects. Second, we propose a new FT algorithm, the 'VSLMS' (Variable Step-size LMS) Style Tap-Length Learning Algorithm, which not only uses the concept of FT but also introduces a new concept of adaptive convergence slope. With this improvement the new FT algorithm achieves even faster convergence and better stability. Finally, we offer computer simulations to verify this improvement.
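A minimal sketch of the FT (fractional tap-length) idea in a white-noise system-identification setup; the step sizes, leakage alpha, segment offset delta, and the unknown system are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3])  # unknown FIR system, true length 8
Lmax, mu = 16, 0.01                    # maximum filter length, LMS step size
alpha, gamma, delta = 0.001, 0.05, 1   # FT leakage, FT step, segment offset

x = np.zeros(Lmax)    # input delay line, newest sample first
w = np.zeros(Lmax)    # adaptive weights (only the first L are active)
lf = 4.0              # fractional tap-length, deliberately started too short
hist = []
for n in range(30000):
    x[1:] = x[:-1]; x[0] = rng.normal()
    d = h @ x[:8] + 0.1 * rng.normal()          # desired signal plus noise
    L = int(np.clip(lf, delta + 1, Lmax))
    eL = d - w[:L] @ x[:L]                      # error using all L taps
    eS = d - w[:L - delta] @ x[:L - delta]      # error with delta fewer taps
    w[:L] += mu * eL * x[:L]                    # ordinary LMS weight update
    # FT rule: grow while dropping taps hurts, shrink slowly otherwise
    lf = np.clip(lf - alpha + gamma * (eS**2 - eL**2), delta + 1, Lmax)
    hist.append(lf)
print(round(np.mean(hist[-5000:])))  # settles just above the true length of 8
```

The ratio alpha/gamma sets how much residual power a tap must capture to be kept, which is the balance between complexity and performance the abstract refers to.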

Relevance:

60.00%

Publisher:

Abstract:

A novel geometric algorithm for blind image restoration is proposed in this paper, based on High-Dimensional Space Geometrical Informatics (HDSGI) theory. In this algorithm every image is considered as a point, and the location relationship of the points in high-dimensional space, i.e. the intrinsic relationship of the images, is analyzed. The geometric technique of "blurring-blurring-deblurring" is then adopted to obtain the deblurred images. Compared with existing algorithms such as the Wiener filter and super-resolution image restoration, the experimental results show that the proposed algorithm not only recovers better image detail but also reduces the computational complexity, requiring less computing time. The novel algorithm suggests a new direction for blind image restoration with a promising range of applications.

Relevance:

60.00%

Publisher:

Abstract:

IEEE Comp Soc, IFIP, Tianjin Normal Univ

Relevance:

60.00%

Publisher:

Abstract:

With the emergence of the digital all-sky imager (ASI) in aurora research, millions of images are captured annually, yet only a fraction of them can actually be used. To address the inefficiency of manual processing, an integrated image analysis and retrieval system is developed. To represent aurora images precisely, macroscopic and microscopic features are combined to describe aurora texture. To reduce the feature dimensionality of the huge dataset, a modified local binary pattern (LBP) called ALBP is proposed to depict the microscopic texture, and scale-invariant and orientation-invariant Gabor features are employed to extract the macroscopic texture. A physical property of the aurora is introduced as a region feature to bridge the gap between low-level visual features and high-level semantic description. The experimental results demonstrate that the ALBP method achieves a high classification rate with low computational complexity. The retrieval simulation results show that the developed retrieval system is efficient for huge datasets. (c) 2010 Elsevier Inc. All rights reserved.
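ALBP itself is the paper's modification, but the plain 8-neighbour LBP it starts from is standard and easy to sketch:

```python
import numpy as np

def lbp8(img):
    """Plain 8-neighbour local binary pattern: each interior pixel is
    encoded as an 8-bit code by thresholding its eight neighbours
    against the centre value."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neigh >= img[1:h - 1, 1:w - 1]).astype(np.uint8) << bit
    return out

img = np.array([[9, 9, 9],
                [9, 5, 1],
                [1, 1, 1]], dtype=np.int32)
print(lbp8(img))  # -> [[135]]: bits 0, 1, 2, 7 are set
```

A histogram of these codes over an image region is the usual texture descriptor; ALBP modifies the encoding to cut the descriptor's dimensionality.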

Relevance:

60.00%

Publisher:

Abstract:

Corner detection is widely used and underlies many computer vision tasks. This paper proposes a fast, high-accuracy corner detection algorithm that is simple and novel, with a distinctive corner condition and corner response function. Unlike previous work, the algorithm is designed around the local geometric features of corners, which greatly reduces the amount of data to be processed while preserving detection accuracy and other performance indicators. A comprehensive comparison with the widely used SUSAN and Harris algorithms in terms of correct-detection rate, missed detections, accuracy, noise robustness, and computational complexity shows that the algorithm performs well on both synthetic and natural images.
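The proposed algorithm itself is not specified in the abstract, but the Harris baseline it is compared against can be sketched; the test image, 3x3 window, and k value below are illustrative:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2, where M is the
    gradient structure tensor summed over a 3x3 window."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):  # 3x3 box sum via np.roll (wraps at borders; cropped below)
        out = np.zeros_like(a)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += np.roll(np.roll(a, dy, 0), dx, 1)
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

img = np.zeros((9, 9)); img[4:, 4:] = 1.0   # bright quadrant -> corner at (4, 4)
R = harris_response(img)
sub = R[2:-2, 2:-2]                          # ignore wrap-around borders
cy, cx = np.unravel_index(np.argmax(sub), sub.shape)
print(cy + 2, cx + 2)  # peak response at the corner: 4 4
```

Along a straight edge only one gradient direction is strong, so det(M) stays near zero and R goes negative; only at the corner are both directions strong, which is what the response isolates.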

Relevance:

60.00%

Publisher:

Abstract:

A visual spatial positioning system built from multiple linear-array cameras is introduced. Exploiting the speed and high resolution of linear-array cameras, the system locates targets in space by intersecting non-parallel spatial projection planes and solving the geometric projection relations. A neural-network-based nonlinear correction method for the linear-array cameras is also proposed, enabling the corrected PSD to output highly linear signals over a wide position range. Experimental results show that the multi-camera pose measurement system based on this nonlinear correction simplifies the computation of stereo-vision spatial positioning and achieves good positioning accuracy, positioning range, and sampling speed.

Relevance:

60.00%

Publisher:

Abstract:

For the fault-tolerant control allocation problem of an unmanned underwater vehicle (UUV) propulsion system, this paper proposes a hybrid algorithm combining SVD (singular value decomposition) with fixed-point allocation. Compared with traditional methods, it avoids computing a pseudoinverse matrix, reducing the computational load, and it satisfies the thruster saturation constraints. Simulations using a propulsion-system model of an underwater test platform verify the correctness and effectiveness of the algorithm.
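The paper's hybrid SVD/fixed-point scheme is not reproduced here; as a baseline, minimum-norm allocation via the SVD (with a naive saturation clip) looks like the following, where the 3-DOF, 4-thruster configuration matrix B is a made-up example:

```python
import numpy as np

# Hypothetical thruster configuration: generalized forces tau = B @ u
B = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.5, -0.5, 0.3, -0.3]])

def allocate(tau, u_max=1.0):
    """Minimum-norm thruster allocation via the SVD pseudoinverse,
    followed by a crude saturation clip. A fixed-point step, as in the
    paper, would redistribute the clipped demand instead of dropping it."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    u = Vt.T @ ((U.T @ tau) / s)   # u = B^+ tau without forming B^+ explicitly
    return np.clip(u, -u_max, u_max)

tau = np.array([1.0, 0.5, 0.2])    # desired generalized forces
u = allocate(tau)
print(np.allclose(B @ u, tau))     # True when no thruster saturates
```

Working with the SVD factors directly, rather than forming and storing the pseudoinverse, is also numerically safer when B is ill-conditioned after a thruster fault.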

Relevance:

60.00%

Publisher:

Abstract:

Terramechanics is applied to study the influence of the slip ratio on the drawbar pull, driving efficiency, and power consumption of a lunar rover wheel. A dynamic model of the interaction between a rigid wheel and soft lunar soil is established, and the driving dynamics of the wheel are analyzed through a simulation example. The results show that the wheel's drawbar pull, driving efficiency, and driving energy consumption are all governed by the slip ratio, and that an optimal slip-ratio interval exists within which the wheel obtains larger drawbar pull, higher driving efficiency, and lower driving energy consumption. The relative velocity between the wheel and the ground is derived to estimate the power dissipated by ground friction at the wheel.

Relevance:

60.00%

Publisher:

Abstract:

The task in text retrieval is to find the subset of a collection of documents relevant to a user's information request, usually expressed as a set of words. Classically, documents and queries are represented as vectors of word counts. In its simplest form, relevance is defined to be the dot product between a document and a query vector, a measure of the number of common terms. A central difficulty in text retrieval is that the presence or absence of a word is not sufficient to determine relevance to a query. Linear dimensionality reduction has been proposed as a technique for extracting the underlying structure of the document collection. In some domains (such as vision) dimensionality reduction reduces computational complexity; in text retrieval it is more often used to improve retrieval performance. We propose an alternative and novel technique that produces sparse representations constructed from sets of highly related words. Documents and queries are represented by their distance to these sets, and relevance is measured by the number of common clusters. This technique significantly improves retrieval performance, is efficient to compute, and shares properties with the optimal linear projection operator and the independent components of documents.
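The classical dot-product relevance measure described above is easy to make concrete (the toy documents and query are invented for illustration):

```python
from collections import Counter

def relevance(doc, query):
    """Classical vector-space relevance: the dot product of word-count
    vectors, i.e. a weighted count of the terms doc and query share."""
    d, q = Counter(doc.lower().split()), Counter(query.lower().split())
    return sum(d[w] * q[w] for w in q)

docs = ["the cat sat on the mat",
        "dogs and cats make good pets",
        "stock markets fell sharply today"]
query = "cat on a mat"
scores = [relevance(d, query) for d in docs]
print(scores)  # -> [3, 0, 0]
```

Note the second document scores zero despite being about cats ("cats" is not "cat"), which is exactly the word-match brittleness the cluster-based representation is meant to overcome.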

Relevance:

60.00%

Publisher:

Abstract:

Example-based methods are effective for parameter estimation problems when the underlying system is simple or the dimensionality of the input is low. For complex and high-dimensional problems such as pose estimation, the number of required examples and the computational complexity rapidly become prohibitively high. We introduce a new algorithm that learns a set of hashing functions that efficiently index examples relevant to a particular estimation task. Our algorithm extends a recently developed method for locality-sensitive hashing, which finds approximate neighbors in time sublinear in the number of examples. This method depends critically on the choice of hash functions; we show how to find the set of hash functions that are optimally relevant to a particular estimation problem. Experiments demonstrate that the resulting algorithm, which we call Parameter-Sensitive Hashing, can rapidly and accurately estimate the articulated pose of human figures from a large database of example images.
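Parameter-Sensitive Hashing learns its hash functions; the underlying locality-sensitive hashing idea it extends can be sketched with random hyperplanes (the dimensions and bit counts below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits = 16, 32
planes = rng.normal(size=(n_bits, dim))   # one random hyperplane per hash bit

def lsh_code(x):
    """Random-hyperplane LSH: one bit per hyperplane side. Vectors at a
    small angle agree on most bits, so hash buckets index approximate
    neighbours. PSH instead *learns* the hyperplanes so that collisions
    reflect similarity in the parameter (pose) space."""
    return (planes @ x > 0).astype(np.uint8)

def ham(u, v):
    return int(np.sum(lsh_code(u) != lsh_code(v)))

a = rng.normal(size=dim)
b = a + 0.05 * rng.normal(size=dim)       # near neighbour of a
c = rng.normal(size=dim)                  # unrelated vector
print(ham(a, b) < ham(a, c))              # near pair agrees on more bits
```

Lookup touches only the bucket(s) matching the query's code, which is where the sublinear query time comes from.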

Relevance:

60.00%

Publisher:

Abstract:

We consider the problem of detecting a large number of different classes of objects in cluttered scenes. Traditional approaches require applying a battery of different classifiers to the image, at multiple locations and scales. This can be slow and can require a lot of training data, since each classifier requires the computation of many different image features. In particular, for independently trained detectors, the (run-time) computational complexity and the (training-time) sample complexity scale linearly with the number of classes to be detected. It seems unlikely that such an approach will scale up to allow recognition of hundreds or thousands of objects. We present a multi-class boosting procedure (joint boosting) that reduces the computational and sample complexity by finding common features that can be shared across the classes (and/or views). The detectors for each class are trained jointly, rather than independently. For a given performance level, the total number of features required, and therefore the computational cost, is observed to scale approximately logarithmically with the number of classes. The features selected jointly are closer to edges and generic features typical of many natural structures, rather than specific object parts. These generic features generalize better and considerably reduce the computational cost of multi-class object detection.

Relevance:

60.00%

Publisher:

Abstract:

An iterative method for reconstructing a 3D polygonal mesh and color texture map from multiple views of an object is presented. In each iteration, the method first estimates a texture map given the current shape estimate. The texture map and its associated residual error image are obtained via maximum a posteriori estimation and reprojection of the multiple views into texture space. Next, the surface shape is adjusted to minimize residual error in texture space. The surface is deformed towards a photometrically-consistent solution via a series of 1D epipolar searches at randomly selected surface points. The texture space formulation has improved computational complexity over standard image-based error approaches, and allows computation of the reprojection error and uncertainty for any point on the surface. Moreover, shape adjustments can be constrained such that the recovered model's silhouette matches those of the input images. Experiments with real-world imagery demonstrate the validity of the approach.

Relevance:

60.00%

Publisher:

Abstract:

Object detection and recognition are important problems in computer vision. The challenges of these problems come from the presence of noise, background clutter, large within-class variations of the object class and limited training data. In addition, the computational complexity in the recognition process is also a concern in practice. In this thesis, we propose one approach to handle the problem of detecting an object class that exhibits large within-class variations, and a second approach to speed up the classification processes. In the first approach, we show that foreground-background classification (detection) and within-class classification of the foreground class (pose estimation) can be jointly solved using a multiplicative form of two kernel functions. One kernel measures similarity for foreground-background classification. The other kernel accounts for latent factors that control within-class variation and implicitly enables feature sharing among foreground training samples. For applications where explicit parameterization of the within-class states is unavailable, a nonparametric formulation of the kernel can be constructed with a proper foreground distance/similarity measure. Detector training is accomplished via standard Support Vector Machine learning. The resulting detectors are tuned to specific variations in the foreground class. They also serve to evaluate hypotheses of the foreground state. When the image masks for foreground objects are provided in training, the detectors can also produce object segmentation. Methods for generating a representative sample set of detectors are proposed that can enable efficient detection and tracking. In addition, because individual detectors verify hypotheses of foreground state, they can also be incorporated in a tracking-by-detection framework to recover foreground state in image sequences.
To run the detectors efficiently at the online stage, an input-sensitive speedup strategy is proposed to select the most relevant detectors quickly. The proposed approach is tested on data sets of human hands, vehicles and human faces. On all data sets, the proposed approach achieves improved detection accuracy over the best competing approaches. In the second part of the thesis, we formulate a filter-and-refine scheme to speed up recognition processes. The binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the face recognition grand challenge version 2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and estimation of the view angle on a multi-pose vehicle data set. On all data sets, our approach is at least five times faster than simply evaluating all foreground state hypotheses with virtually no loss in classification accuracy.
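A minimal sketch of the filter-and-refine idea, with random-projection bits standing in for the boosted detector's weak-classifier outputs and Euclidean distance standing in for the expensive refinement step (all names and sizes below are illustrative):

```python
import numpy as np

def filter_and_refine(codes, feats, probe_code, probe_feat, k=5):
    """Filter-and-refine: a cheap Hamming distance on binary signatures
    prunes the hypothesis set, then only the k survivors are scored with
    the expensive measure (Euclidean distance here as a stand-in)."""
    ham = np.count_nonzero(codes != probe_code, axis=1)       # cheap filter
    cand = np.argsort(ham)[:k]                                # keep k best
    dists = np.linalg.norm(feats[cand] - probe_feat, axis=1)  # costly refine
    return cand[np.argmin(dists)]

rng = np.random.default_rng(0)
feats = rng.normal(size=(1000, 8))                 # database of examples
planes = rng.normal(size=(8, 32))
codes = (feats @ planes > 0).astype(np.uint8)      # 32-bit binary signatures
probe = feats[42] + 0.01 * rng.normal(size=8)      # noisy copy of item 42
probe_code = (probe @ planes > 0).astype(np.uint8)
best = filter_and_refine(codes, feats, probe_code, probe)
print(best)  # the perturbed item's true index, 42
```

Because the expensive comparison runs on only k candidates instead of the whole database, the speedup is roughly the database size divided by k, which is the source of the five-fold-or-better gains reported above.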