923 results for Object Recognition
Abstract:
Object recognition has broad applications in many areas of everyday life, but factors such as occlusion and viewpoint change mean it still faces major challenges. Local features have attracted wide attention because of their inherent locality; combined with spatial-distribution constraints, they can carry high-level semantic information and improve an object recognition algorithm's robustness to occlusion and viewpoint change. This thesis analyzes and compares popular local feature detection methods, description methods, and spatial-distribution constraint methods, and proposes a "center-feature" structural model together with a corresponding object recognition method. First, local feature detection methods are introduced and local feature description methods are studied in depth, with a comparative analysis of their principles, invariance properties, matching speed, and applicable scenarios. Then, weighing the strengths and weaknesses of explicit and implicit models, a "center-feature" structural model is proposed. The model takes the object center as the reference point for measuring the positional relations among all local features: it retains the accuracy of star-shaped models while eliminating the special node, thereby avoiding the adverse effects of a missing special node and improving the algorithm's stability. Based on this spatial-distribution constraint model, a corresponding object recognition algorithm is proposed. The algorithm jointly considers how well appearance features and spatial positions match: a spatial-distribution constraint model is built from the appearance features and shape of the object in the template, appearance features of the object under detection are used to form hypotheses, and hypothesis testing quantitatively measures the location and likelihood of the object's presence. An accelerated algorithm for searching for the object center is also proposed. Experiments verify the method's effectiveness under similarity and affine transformations, along with a degree of robustness to missing parts.
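The "center-feature" idea of locating an object by having matched local features vote for its center resembles star-model voting. Below is a minimal numpy sketch of that general voting-and-hypothesis-testing scheme, assuming each template feature stores an offset vector to the object center; the function names, grid accumulator, and scoring are illustrative, not taken from the thesis.

```python
import numpy as np

def vote_for_center(matches, image_shape, cell=8):
    """Accumulate object-center hypotheses from matched local features.

    matches: list of (feature_xy, center_offset) pairs, where feature_xy is
      the feature's position in the test image and center_offset is the
      template's vector from that feature to the object center.
    Returns the best center hypothesis and the fraction of consistent votes.
    """
    h, w = image_shape
    acc = np.zeros((h // cell + 1, w // cell + 1))
    for (fx, fy), (dx, dy) in matches:
        cx, cy = fx + dx, fy + dy            # hypothesized object center
        if 0 <= cx < w and 0 <= cy < h:
            acc[int(cy) // cell, int(cx) // cell] += 1
    iy, ix = np.unravel_index(np.argmax(acc), acc.shape)
    center = (ix * cell + cell // 2, iy * cell + cell // 2)
    score = acc[iy, ix] / max(len(matches), 1)   # crude likelihood of presence
    return center, score

# Toy example: three features agreeing on a center near (50, 40).
matches = [((30, 20), (20, 20)), ((70, 60), (-20, -20)), ((50, 10), (0, 30))]
print(vote_for_center(matches, image_shape=(100, 100)))
```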
Abstract:
Illumination is one of the key factors affecting imaging. When illumination conditions change, images of the same object can differ greatly, sometimes even more than images of different objects. In many object recognition applications, illumination is beyond human control, making object recognition under varying illumination a common and challenging problem. This thesis analyzes in depth how changes in illumination properties such as intensity, direction, and color affect object images; it surveys currently popular illumination-robust object recognition methods, introduces their underlying principles, and analyzes the sources of their robustness and their conditions of applicability. First, an object recognition method based on frequency-domain image features under low illumination is proposed. By analyzing the relationship between affine transformations in the spatial and frequency domains, the method extracts features by pseudo-log sampling of the Fourier spectrum of the gradient image, which captures low- and mid-frequency features well, suppresses high-frequency noise, and avoids the adverse effects of illumination change; a neural network is used for recognition, effectively extracting affine-invariant features of the object with fast recognition speed. Second, an illumination-robust nonlinear correlation method for object recognition is proposed. It adopts an information-decomposition strategy, splitting gray-level information into two components that describe the regions where variation occurs and the degree of variation within those regions, and selects the more discriminative pixels for matching. Using the angle between vectors as the similarity measure, the method works directly on image gray levels and compares images in a high-dimensional vector space, overcoming the difficulties of extracting edges, corners, and shape features from low-illumination, low-SNR images. The similarity measure is unaffected by vector magnitude (multiplicative illumination change) and by vector translation (additive illumination change), and is therefore invariant to linear illumination change.
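The second method's similarity measure, the angle between gray-level vectors, is invariant to vector scaling (multiplicative illumination change) and, once means are removed, to vector translation (additive change). A minimal sketch of such an angle measure follows; the mean-subtraction step is one simple way to realize the additive invariance, and the thesis's decomposition into variation-region and variation-degree components is not reproduced here.

```python
import numpy as np

def illumination_invariant_similarity(img_a, img_b, eps=1e-9):
    """Angle-based similarity between two images viewed as vectors.

    Subtracting the mean removes an additive illumination shift (b), and
    normalizing removes a multiplicative gain (a), so the measure is
    unchanged under a linear illumination change I' = a * I + b, a > 0.
    """
    va = img_a.astype(float).ravel()
    vb = img_b.astype(float).ravel()
    va -= va.mean()
    vb -= vb.mean()
    return np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb) + eps)

rng = np.random.default_rng(0)
img = rng.random((32, 32))
brighter = 2.5 * img + 40.0          # linear illumination change
print(illumination_invariant_similarity(img, brighter))  # ~1.0
```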
Abstract:
According to research results reported over the past decades, it is well acknowledged that face recognition is not a trivial task. With the development of electronic devices, we are gradually revealing the secret of object recognition in the primate visual cortex. Therefore, it is time to reconsider face recognition using biologically inspired features. In this paper, we represent face images by utilizing the C1 units, which correspond to complex cells in the visual cortex and pool over S1 units with a maximum operation so that only the maximum response of each local area of S1 units is retained. The new representation is termed C1Face. Because C1Face is naturally a third-order tensor (a three-dimensional array), we propose three-way discriminative locality alignment (TWDLA), an extension of discriminative locality alignment, a discriminative manifold-learning-based subspace learning algorithm. TWDLA has the following advantages: (1) it takes third-order tensors directly as input, so structural information is well preserved; (2) it models the local geometry over every modality of the input tensors, so the spatial relations of input tensors within a class are preserved; (3) it maximizes the margin between a tensor and tensors from other classes over each modality, so it performs well in recognition tasks; and (4) it has no undersampling problem. Extensive experiments on the YALE and FERET datasets show that (1) the proposed C1Face representation represents face images better than raw pixels and (2) TWDLA duly preserves both the local geometry and the discriminative information over every modality for recognition.
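For readers unfamiliar with the S1/C1 construction, a small sketch follows: S1 maps are Gabor responses at several orientations, and C1 pools them with a local maximum, yielding the third-order tensor (rows x columns x orientation) that TWDLA consumes. Filter sizes, the number of orientations, and pooling parameters here are illustrative rather than the paper's settings.

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import maximum_filter

def gabor_kernel(theta, size=11, sigma=3.0, wavelength=6.0):
    """A simple S1-style Gabor kernel at orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + 0.8 * yr**2) / (2 * sigma**2)) \
        * np.cos(2 * np.pi * xr / wavelength)
    return g - g.mean()   # zero-mean so flat regions give no response

def c1_features(img, n_orientations=4, pool=8):
    """S1: Gabor responses per orientation; C1: local max pooling over S1."""
    maps = []
    for k in range(n_orientations):
        theta = np.pi * k / n_orientations
        s1 = np.abs(convolve2d(img, gabor_kernel(theta), mode='same'))
        c1 = maximum_filter(s1, size=pool)[::pool, ::pool]  # keep local maxima only
        maps.append(c1)
    return np.stack(maps, axis=-1)   # third-order tensor: rows x cols x orientation

img = np.random.default_rng(1).random((64, 64))
print(c1_features(img).shape)   # (8, 8, 4) -- the tensor input TWDLA expects
```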
Abstract:
To address the classical shape context algorithm's sensitivity to changes in the relative positions of an object's joints, an object recognition algorithm based on the local shape fill rate of silhouettes is proposed. Taking each of the object's contour control points as a circle center, the algorithm computes, for circles of different radii, the proportion of silhouette pixels among all pixels inside the circle; this proportion is the control point's local shape fill rate. The fill-rate values computed over different control points and radii form a feature matrix that reflects the statistical properties of the entire silhouette. Experimental results on several databases show that the algorithm describes object detail well, is insensitive to the relative positions of object joints, and is little affected by the number of contour control points.
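A minimal sketch of the fill-rate computation described above: for each contour control point and each radius, count the fraction of pixels inside the disk that belong to the silhouette, and collect the values into a control-point x radius feature matrix. The toy silhouette and control points are illustrative.

```python
import numpy as np

def fill_rate_matrix(silhouette, control_points, radii):
    """Local shape fill rate: for each contour control point and radius,
    the fraction of pixels inside the disk that belong to the silhouette."""
    h, w = silhouette.shape
    yy, xx = np.mgrid[0:h, 0:w]
    feats = np.zeros((len(control_points), len(radii)))
    for i, (cx, cy) in enumerate(control_points):
        d2 = (xx - cx) ** 2 + (yy - cy) ** 2
        for j, r in enumerate(radii):
            disk = d2 <= r * r
            feats[i, j] = silhouette[disk].mean()   # silhouette pixels / disk pixels
    return feats

# Toy silhouette: a filled square, with three illustrative control points.
sil = np.zeros((64, 64), bool)
sil[16:48, 16:48] = True
pts = [(16, 16), (32, 16), (48, 48)]
print(fill_rate_matrix(sil, pts, radii=[4, 8, 16]))
```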
Abstract:
As the range of mobile robot applications expands, improving autonomous navigation in dynamic, unstructured environments has become an urgent problem in mobile robotics. Among the key technologies for autonomous navigation, recognition is both the hardest and the most pressing. As an important navigation sensor, vision offers large information content, light weight, and low power consumption compared with other sensors, so vision-based recognition is widely regarded as the most promising research direction. Supported by a national defense basic research project and a Chinese Academy of Sciences open laboratory fund project, and using the wheel-leg hybrid robot and the unmanned aerial vehicle (UAV) developed at the Shenyang Institute of Automation as experimental platforms, this thesis targets application problems urgently needing solutions in the autonomous navigation of ground robots and UAVs, aiming to improve the adaptability of mobile robots in dynamic, unstructured environments. The main contents are as follows. First, to improve the autonomy of ground mobile robots in complex environments, an obstacle detection algorithm for outdoor unstructured environments based on stereo vision is proposed. A method is given for effectively estimating the main ground disparity (MGD) from the V-disparity image; suspected and final obstacles are then identified and localized in a coarse-to-fine manner. The method has been deployed on an autonomous ground mobile platform, and experiments in various scenes verify its accuracy and speed. Second, motivated by UAV skyline recognition, an accurate, real-time skyline recognition algorithm is proposed and used to estimate attitude angles. An energy functional model of the skyline is established, and the corresponding partial differential equation is derived via the variational principle. For real-time performance in practice, a piecewise-linear constraint is introduced to simplify the model, and the skyline is recognized coarse-to-fine: the image is preprocessed and partitioned vertically, a simplified horizontal-line model gives a coarse recognition that is refined by fitting, and the skyline is finally recognized accurately under the constraint of a hybrid gradient- and region-based open curve model, from which the UAV's roll and pitch angles are estimated. Third, based on an analysis of the target characteristics of infrared airport runways, a new parallel infrared image segmentation algorithm based on the 1D Haar wavelet is designed; features are then extracted from the segmented regions; finally, two common recognition methods, support vector machines (SVM) and voting, are used to classify and recognize the suspected target regions. Tests on real video and simulated infrared images verify the algorithm's speed, reliability, and real-time performance; the average processing time is 30 ms per frame. Finally, to address the problems encountered in automatic crowd monitoring during UAV aerial patrols, simplified as crowd density monitoring from a fixed viewpoint, a new algorithm based on velocity-field estimation is proposed for counting people crossing a line and estimating crowd density within a region. The algorithm treats the crossing crowd as a moving flow field and gives a motion estimation model for effectively estimating a 1D velocity field; by estimating the crowd's velocity and integrating over time, the crossing crowd is stitched into dynamic regions; finally, area and edge information is extracted from each dynamic region, and regression analysis is used to estimate crowd density. Unlike previous methods based on scene learning, this is an angle-based learning approach, which makes it convenient for practical application.
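As a small illustration of the V-disparity structure the first contribution builds on (not the thesis's MGD estimator), the sketch below builds a V-disparity image from a dense disparity map and reads off a crude per-row ground disparity; pixels whose disparity exceeds the ground's at their row protrude from the ground plane and become obstacle candidates.

```python
import numpy as np

def v_disparity(disp, n_bins=64, d_max=64):
    """V-disparity image: one disparity histogram per image row.
    A flat ground plane maps to a strong slanted line in this image."""
    h, _ = disp.shape
    vdisp = np.zeros((h, n_bins))
    for v in range(h):
        row = disp[v]
        vdisp[v] = np.histogram(row[row > 0], bins=n_bins, range=(0, d_max))[0]
    return vdisp

def main_ground_disparity(vdisp, d_max=64):
    """Crude per-row ground-disparity estimate: the dominant bin per row."""
    bins = vdisp.shape[1]
    return vdisp.argmax(axis=1) * (d_max / bins)

# Synthetic ground: disparity grows linearly toward the bottom of the image.
h, w = 120, 160
disp = np.tile(np.linspace(1, 40, h)[:, None], (1, w))
disp[40:60, 70:90] = 55.0        # an obstacle protruding from the ground
mgd = main_ground_disparity(v_disparity(disp))
print(mgd[:5], mgd[-5:])         # small near the horizon, large near the robot
```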
Abstract:
Crowding, generally defined as the deleterious influence of nearby contours on visual discrimination, is ubiquitous in spatial vision. Specifically, long-range effects of non-overlapping distractors can alter the appearance of an object, making it unrecognizable. Theories in many domains, including visual computation and high-level attention, have been proposed to account for crowding. However, neither the compulsory-averaging model nor the insufficient-spatial-resolution-of-attention account provides an adequate explanation. The present study examined the effects of perceptual organization on crowding. We hypothesize that target-distractor segmentation in crowding is analogous to figure-ground segregation in Gestalt psychology. When distractors can be grouped as a whole, or when they are similar to each other but different from the target, the target can be distinguished from the distractors; conversely, grouping target and distractors together by Gestalt principles may interfere with target-distractor separation. Six experiments were carried out to assess this theory. In experiments 1, 2, and 3, we manipulated the similarity between target and distractors as well as the configuration of distractors to investigate the effects of stimulus-driven grouping on target-distractor segmentation. In experiments 4, 5, and 6, we focused on the interaction between bottom-up and top-down grouping processes and their influence on target-distractor segmentation. Our results demonstrated that: (a) when distractors were similar to each other but different from the target, crowding was eased; (b) when distractors formed a subjective contour or were placed regularly, crowding was also reduced; and (c) both bottom-up and top-down processes could influence target-distractor grouping, mediating the effects of crowding. These results support our hypothesis that figure-ground segregation and target-distractor segmentation in crowding may share similar processes. The present study not only provides a novel explanation for crowding but also probes a processing bottleneck in object recognition. The findings have significant implications for computer vision and interface design, as well as for clinical practice in amblyopia and dyslexia.
Abstract:
Model-based object recognition commonly involves using a minimal set of matched model and image points to compute the pose of the model in image coordinates. Furthermore, recognition systems often rely on the "weak-perspective" imaging model in place of the perspective imaging model. This paper discusses computing the pose of a model from three corresponding points under weak-perspective projection. A new solution to the problem is proposed which, like previous solutions, involves solving a biquadratic equation. Here the biquadratic is motivated geometrically, and its solutions, comprising an actual solution and a false one, are interpreted graphically. The final equations take a new form, which leads to a simple expression for the image position of any unmatched model point.
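For orientation, here is a sketch of the weak-perspective (scaled orthographic) imaging model the paper assumes; it generates the three image points a pose solver would take as input, but it is not the paper's biquadratic solution, whose derivation is the contribution.

```python
import numpy as np

def weak_perspective_project(points_3d, R, s, t):
    """Weak-perspective (scaled orthographic) projection: rotate, drop the
    depth coordinate, scale uniformly, translate. A good approximation when
    the object's depth variation is small relative to viewing distance."""
    rotated = points_3d @ R.T
    return s * rotated[:, :2] + t    # image coordinates of the model points

# Three model points and an illustrative ground-truth pose.
P = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [0.0, 1.0, -0.1]])
angle = np.deg2rad(30)
R = np.array([[np.cos(angle), -np.sin(angle), 0.0],
              [np.sin(angle),  np.cos(angle), 0.0],
              [0.0,            0.0,           1.0]])
img_pts = weak_perspective_project(P, R, s=2.0, t=np.array([10.0, 5.0]))
print(img_pts)   # the three image points a pose solver would start from
```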
Abstract:
A common design for an object recognition system has two steps: a detection step followed by a foreground within-class classification step. For example, consider face detection by a boosted cascade of detectors followed by face ID recognition via one-vs-all (OVA) classifiers. Another example is human detection followed by pose recognition. Although the detection step can be quite fast, the foreground within-class classification process can be slow and becomes a bottleneck. In this work, we formulate a filter-and-refine scheme, in which the binary outputs of the weak classifiers in a boosted detector are used to identify a small number of candidate foreground state hypotheses quickly via Hamming distance or weighted Hamming distance. The approach is evaluated in three applications: face recognition on the FRGC V2 data set, hand shape detection and parameter estimation on a hand data set, and vehicle detection and view angle estimation on a multi-view vehicle data set. On all data sets, our approach has comparable accuracy and is at least five times faster than the brute-force approach.
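A minimal sketch of the filter-and-refine idea: rank the database by Hamming distance between binary weak-classifier outputs, then run the expensive classifier only on the top candidates. The weighted-Hamming variant and the paper's specific classifiers are omitted; the names and the toy refine function are illustrative.

```python
import numpy as np

def filter_and_refine(query_bits, db_bits, db_labels, refine_fn, k=10):
    """Filter: rank database entries by Hamming distance between binary
    weak-classifier outputs. Refine: run the expensive classifier only on
    the k nearest candidates instead of the whole database."""
    hamming = np.count_nonzero(db_bits != query_bits, axis=1)  # XOR-style distance
    candidates = np.argsort(hamming)[:k]
    scores = [refine_fn(i) for i in candidates]                # slow OVA-style scoring
    return db_labels[candidates[int(np.argmax(scores))]]

rng = np.random.default_rng(2)
db_bits = rng.integers(0, 2, size=(1000, 64), dtype=np.uint8)
db_labels = rng.integers(0, 50, size=1000)
query = db_bits[123] ^ (rng.random(64) < 0.05)   # a query close to entry 123
# Toy refine function: negative Hamming distance stands in for a real classifier.
print(filter_and_refine(query, db_bits, db_labels,
                        refine_fn=lambda i: -np.count_nonzero(db_bits[i] != query)))
```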
Abstract:
A method for deformable shape detection and recognition is described. Deformable shape templates are used to partition the image into a globally consistent interpretation, determined in part by the minimum description length principle. Statistical shape models enforce the prior probabilities on global, parametric deformations for each object class. Once trained, the system autonomously segments deformed shapes from the background, while not merging them with adjacent objects or shadows. The formulation can be used to group image regions based on any image homogeneity predicate; e.g., texture, color, or motion. The recovered shape models can be used directly in object recognition. Experiments with color imagery are reported.
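One crude way to operationalize the MDL criterion mentioned above is to charge a fixed model cost per template plus a per-pixel cost wherever the chosen templates and the image figure disagree; the interpretation is then the template subset minimizing total description length. The sketch below is an assumption-laden illustration of that selection principle, not the paper's actual encoding.

```python
import numpy as np
from itertools import combinations

def description_length(chosen_masks, image_mask, model_cost=50.0):
    """MDL score: model_cost 'bits' per template, plus one 'bit' per pixel
    where the chosen templates and the image figure disagree."""
    if chosen_masks:
        covered = np.logical_or.reduce(chosen_masks)
    else:
        covered = np.zeros_like(image_mask)
    residual = np.count_nonzero(image_mask ^ covered)   # symmetric difference
    return model_cost * len(chosen_masks) + residual

def best_interpretation(candidate_masks, image_mask):
    """Exhaustively pick the subset of candidate templates with minimal DL
    (fine for a handful of candidates; real systems search far smarter)."""
    best, best_dl = (), np.inf
    for r in range(len(candidate_masks) + 1):
        for subset in combinations(range(len(candidate_masks)), r):
            dl = description_length([candidate_masks[i] for i in subset], image_mask)
            if dl < best_dl:
                best, best_dl = subset, dl
    return best, best_dl

# Toy image containing two blobs, with three candidate template hypotheses.
img = np.zeros((40, 40), bool); img[5:15, 5:15] = True; img[25:35, 20:30] = True
cand = [np.zeros_like(img) for _ in range(3)]
cand[0][5:15, 5:15] = True       # explains blob 1
cand[1][25:35, 20:30] = True     # explains blob 2
cand[2][:, :] = True             # over-greedy template, penalized by residual
print(best_interpretation(cand, img))   # expect templates (0, 1)
```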
Abstract:
A novel method that combines shape-based object recognition and image segmentation is proposed for shape retrieval from images. Given a shape prior represented in a multi-scale curvature form, the proposed method identifies the target objects in images by grouping oversegmented image regions. The problem is formulated in a unified probabilistic framework and solved by a stochastic Markov Chain Monte Carlo (MCMC) mechanism. By this means, object segmentation and recognition are accomplished simultaneously. Within each sampling move during the simulation process, probabilistic region grouping operations are influenced by both the image information and the shape similarity constraint. The latter constraint is measured by a partial shape matching process. A generalized parallel algorithm by Barbu and Zhu, combined with a large sampling jump and other implementation improvements, greatly speeds up the overall stochastic process. The proposed method supports the segmentation and recognition of multiple occluded objects in images. Experimental results are provided for both synthetic and real images.
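A minimal single-site Metropolis sketch of sampling over region groupings follows; the paper's data-driven MCMC with large sampling jumps, the Barbu-Zhu parallel algorithm, and partial shape matching are far richer, so the per-region scores and the shape term here are illustrative stand-ins.

```python
import numpy as np

def mcmc_grouping(region_scores, shape_score_fn, n_iters=2000, T=1.0, seed=0):
    """Metropolis-style sampling over region subsets. A state is a binary
    vector saying which oversegmented regions belong to the object; its
    unnormalized log-probability combines per-region image evidence with
    a shape-similarity term supplied by shape_score_fn."""
    rng = np.random.default_rng(seed)
    n = len(region_scores)
    state = rng.integers(0, 2, n).astype(bool)

    def log_p(s):
        return region_scores[s].sum() + shape_score_fn(s)

    cur = log_p(state)
    for _ in range(n_iters):
        prop = state.copy()
        prop[rng.integers(n)] ^= True            # propose toggling one region
        new = log_p(prop)
        if np.log(rng.random() + 1e-300) < (new - cur) / T:  # accept/reject
            state, cur = prop, new
    return state, cur

# Toy: regions 0-2 match the shape prior; regions 3-4 are background clutter.
scores = np.array([1.0, 1.2, 0.8, -1.0, -0.5])
shape_fn = lambda s: 2.0 if s[:3].all() and not s[3:].any() else 0.0
print(mcmc_grouping(scores, shape_fn))
```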
Abstract:
CONFIGR (CONtour FIgure GRound) is a computational model based on principles of biological vision that completes sparse and noisy image figures. Within an integrated vision/recognition system, CONFIGR posits an initial recognition stage which identifies figure pixels from spatially local input information. The resulting, and typically incomplete, figure is fed back to the “early vision” stage for long-range completion via filling-in. The reconstructed image is then re-presented to the recognition system for global functions such as object recognition. In the CONFIGR algorithm, the smallest independent image unit is the visible pixel, whose size defines a computational spatial scale. Once pixel size is fixed, the entire algorithm is fully determined, with no additional parameter choices. Multi-scale simulations illustrate the vision/recognition system. Open-source CONFIGR code is available online, but all examples can be derived analytically, and the design principles applied at each step are transparent. The model balances filling-in as figure against complementary filling-in as ground, which blocks spurious figure completions. Lobe computations occur on a subpixel spatial scale. Originally designed to fill-in missing contours in an incomplete image such as a dashed line, the same CONFIGR system connects and segments sparse dots, and unifies occluded objects from pieces locally identified as figure in the initial recognition stage. The model self-scales its completion distances, filling-in across gaps of any length, where unimpeded, while limiting connections among dense image-figure pixel groups that already have intrinsic form. Long-range image completion promises to play an important role in adaptive processors that reconstruct images from highly compressed video and still camera images.
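CONFIGR itself balances filling-in as figure against filling-in as ground and self-scales its completion distances; as a far cruder stand-in that only illustrates the completion task it was designed for (connecting a dashed line), morphological closing can bridge short gaps:

```python
import numpy as np
from scipy.ndimage import binary_closing

# A dashed horizontal line: 4-pixel segments separated by 3-pixel gaps.
img = np.zeros((9, 40), bool)
for start in range(2, 36, 7):
    img[4, start:start + 4] = True

# Closing with a horizontal structuring element longer than the gap bridges
# the dashes. Unlike CONFIGR, this has a fixed completion range and no
# figure/ground competition to block spurious completions.
closed = binary_closing(img, structure=np.ones((1, 5), bool))
print(img[4].astype(int))
print(closed[4].astype(int))
```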
Abstract:
This paper introduces ART-EMAP, a neural architecture that uses spatial and temporal evidence accumulation to extend the capabilities of fuzzy ARTMAP. ART-EMAP combines supervised and unsupervised learning and a medium-term memory process to accomplish stable pattern category recognition in a noisy input environment. The ART-EMAP system features (i) distributed pattern registration at a view category field; (ii) a decision criterion for mapping between view and object categories which can delay categorization of ambiguous objects and trigger an evidence accumulation process when faced with a low confidence prediction; (iii) a process that accumulates evidence at a medium-term memory (MTM) field; and (iv) an unsupervised learning algorithm to fine-tune performance after a limited initial period of supervised network training. ART-EMAP dynamics are illustrated with a benchmark simulation example. Applications include 3-D object recognition from a series of ambiguous 2-D views.
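A minimal sketch of the evidence-accumulation process (iii) and decision criterion (ii): each 2-D view yields a distribution over object categories, a medium-term memory accumulates it with decay, and categorization is deferred until one category is sufficiently dominant. The threshold and decay values are illustrative, not ART-EMAP's actual dynamics.

```python
import numpy as np

def accumulate_and_decide(view_predictions, threshold=0.6, decay=0.9):
    """Medium-term-memory style evidence accumulation over ambiguous views.
    Each row of view_predictions is one view's distribution over object
    categories; evidence accumulates (with decay) until one category is
    confident enough, delaying categorization of ambiguous objects."""
    mtm = np.zeros(view_predictions.shape[1])
    for t, pred in enumerate(view_predictions):
        mtm = decay * mtm + pred                 # accumulate evidence in MTM
        conf = mtm.max() / (mtm.sum() + 1e-9)
        if conf >= threshold:                    # decision criterion met
            return int(mtm.argmax()), t + 1
    return int(mtm.argmax()), len(view_predictions)   # forced choice at the end

# Three ambiguous views that individually favor no category strongly,
# but jointly point to category 1.
views = np.array([[0.4, 0.45, 0.15],
                  [0.3, 0.50, 0.20],
                  [0.2, 0.70, 0.10]])
print(accumulate_and_decide(views))   # -> (1, number of views consumed)
```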
Abstract:
A neural theory is proposed in which visual search is accomplished by perceptual grouping and segregation, which occur simultaneously across the visual field, and object recognition, which is restricted to a selected region of the field. The theory offers an alternative hypothesis to recently developed variations on the Feature Integration Theory (Treisman and Sato, 1991) and the Guided Search Model (Wolfe, Cave, and Franzel, 1989). A neural architecture and search algorithm are specified that quantitatively explain a wide range of psychophysical search data (Wolfe, Cave, and Franzel, 1989; Cohen and Ivry, 1991; Mordkoff, Yantis, and Egeth, 1990; Treisman and Sato, 1991).
Abstract:
The recognition of 3-D objects from sequences of their 2-D views is modeled by a family of self-organizing neural architectures, called VIEWNET, that use View Information Encoded With NETworks. VIEWNET incorporates a preprocessor that generates a compressed but 2-D invariant representation of an image; a supervised incremental learning system that classifies the preprocessed representations into 2-D view categories whose outputs are combined into 3-D invariant object categories; and a working memory that makes a 3-D object prediction by accumulating evidence from 3-D object category nodes as multiple 2-D views are experienced. The simplest VIEWNET achieves high recognition scores without the need to explicitly code the temporal order of 2-D views in working memory. Working memories are also discussed that save memory resources by implicitly coding temporal order in terms of the relative activity of 2-D view category nodes, rather than as explicit 2-D view transitions. Variants of the VIEWNET architecture may also be used for scene understanding by using a preprocessor and classifier that can determine both What objects are in a scene and Where they are located. The present VIEWNET preprocessor includes the CORT-X 2 filter, which discounts the illuminant, regularizes and completes figural boundaries, and suppresses image noise. This boundary segmentation is rendered invariant under 2-D translation, rotation, and dilation by use of a log-polar transform. The invariant spectra undergo Gaussian coarse coding to further reduce noise and 3-D foreshortening effects, and to increase generalization. These compressed codes are input into the classifier, a supervised learning system based on the fuzzy ARTMAP algorithm. Fuzzy ARTMAP learns 2-D view categories that are invariant under 2-D image translation, rotation, and dilation as well as under 3-D image transformations that do not cause a predictive error. Evidence from sequences of 2-D view categories converges at 3-D object nodes that generate a response invariant under changes of 2-D view. These 3-D object nodes input to a working memory that accumulates evidence over time to improve object recognition. In the simplest working memory, each occurrence (nonoccurrence) of a 2-D view category increases (decreases) the corresponding node's activity in working memory. The maximally active node is used to predict the 3-D object. Recognition is studied with noisy and clean images using slow and fast learning. Slow learning at the fuzzy ARTMAP map field is adapted to learn the conditional probability of the 3-D object given the selected 2-D view category. VIEWNET is demonstrated on an MIT Lincoln Laboratory database of 128x128 2-D views of aircraft with and without additive noise. A recognition rate of up to 90% is achieved with one 2-D view and of up to 98.5% with three 2-D views. The properties of 2-D view and 3-D object category nodes are compared with those of cells in monkey inferotemporal cortex.
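The log-polar step is the part of the preprocessor that converts 2-D rotation and dilation into shifts, easing invariant classification. The sketch below shows a common construction of this kind, log-polar resampling of a Fourier magnitude spectrum (translation is discarded by taking magnitudes); VIEWNET's actual pipeline (CORT-X 2 boundary segmentation, then the log-polar transform, then Gaussian coarse coding) is not reproduced here.

```python
import numpy as np

def log_polar_spectrum(img, n_r=32, n_theta=32):
    """Log-polar resampling of an image's Fourier magnitude. Taking the
    magnitude discards translation; 2-D rotation and dilation of the input
    become shifts along the theta and log-r axes, easing invariant matching."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = mag.shape
    cy, cx = h / 2, w / 2
    r_max = min(cy, cx)
    rs = np.exp(np.linspace(0, np.log(r_max), n_r))        # logarithmic radii
    ts = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys = np.clip((cy + rs[:, None] * np.sin(ts)).astype(int), 0, h - 1)
    xs = np.clip((cx + rs[:, None] * np.cos(ts)).astype(int), 0, w - 1)
    return mag[ys, xs]      # n_r x n_theta representation for the classifier

img = np.zeros((128, 128)); img[40:90, 50:80] = 1.0
print(log_polar_spectrum(img).shape)   # (32, 32)
```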
Abstract:
A neural model is proposed of how laminar interactions in the visual cortex may learn and recognize object texture and form boundaries. The model brings together five interacting processes: region-based texture classification, contour-based boundary grouping, surface filling-in, spatial attention, and object attention. The model shows how form boundaries can determine regions in which surface filling-in occurs; how surface filling-in interacts with spatial attention to generate a form-fitting distribution of spatial attention, or attentional shroud; how the strongest shroud can inhibit weaker shrouds; and how the winning shroud regulates learning of texture categories, and thus the allocation of object attention. The model can discriminate abutted textures with blurred boundaries and is sensitive to texture boundary attributes such as discontinuities in orientation and texture flow curvature, as well as to the relative orientations of texture elements. The model quantitatively fits a large set of human psychophysical data on orientation-based textures. The object boundary output of the model is compared to computer vision algorithms using a set of human-segmented photographic images. The model classifies textures and suppresses noise using a multiple-scale oriented filterbank and a distributed Adaptive Resonance Theory (dART) classifier. The matched signal between the bottom-up texture inputs and top-down learned texture categories is utilized by oriented competitive and cooperative grouping processes to generate texture boundaries that control surface filling-in and spatial attention. Top-down modulatory attentional feedback from boundary and surface representations to early filtering stages results in enhanced texture boundaries and more efficient learning of texture within attended surface regions. Surface-based attention also provides a self-supervising training signal for learning new textures. The importance of surface-based attentional feedback in texture learning and classification is tested using a set of textured images from the Brodatz micro-texture album. Benchmark results vary from 95.1% to 98.6% correct with attention, and from 90.6% to 93.2% without attention.
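The "matched signal" between bottom-up texture inputs and top-down learned texture categories is the ART ingredient here. A winner-take-all fuzzy-ART-style sketch of that match follows; the model's dART classifier is distributed rather than winner-take-all, so this is a simplification with illustrative parameters.

```python
import numpy as np

def art_match(input_vec, prototypes, vigilance=0.7):
    """Fuzzy-ART-style category choice and match. The matched signal is the
    fuzzy AND (element-wise min) of the bottom-up input and the top-down
    learned prototype; the category is accepted only if the match ratio
    passes the vigilance test, otherwise the input is treated as novel."""
    inp = np.asarray(input_vec, float)
    best_j, best_choice, best_sig = None, -np.inf, None
    for j, w in enumerate(prototypes):
        sig = np.minimum(inp, w)                      # matched signal
        choice = sig.sum() / (0.01 + w.sum())         # choice function
        if choice > best_choice:
            best_j, best_choice, best_sig = j, choice, sig
    match = best_sig.sum() / inp.sum()
    return (best_j, best_sig) if match >= vigilance else (None, None)

protos = [np.array([0.9, 0.1, 0.8, 0.2]), np.array([0.1, 0.9, 0.2, 0.8])]
print(art_match([0.8, 0.2, 0.7, 0.1], protos))   # matches prototype 0
```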