399 results for Quantization
Abstract:
Steganography based on quantization index modulation (QIM) is increasingly threatened by steganalysis. This paper replaces the usual DCT-domain embedding with embedding in a non-uniform DCT (NDCT) domain, using the transform parameters as a key, and proposes an NDCT-QIM image steganography method. Because the embedded signal appears diffused in any domain an attacker guesses, NDCT-QIM hampers steganalytic detection of embedding features. Analysis and experiments show that it better resists steganalysis based on common statistics such as gradient energy, histograms, and wavelet statistical features, improving the covertness of the steganography.
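For illustration, here is a minimal sketch of standard scalar QIM embedding and extraction, assuming a single DCT coefficient and an assumed step size delta; it shows the basic QIM mechanism only, not the paper's non-uniform DCT (NDCT) variant or its keying.

```python
import numpy as np

def qim_embed(coeff: float, bit: int, delta: float = 8.0) -> float:
    """Embed one bit by quantizing onto the lattice offset by bit * delta/2."""
    offset = bit * delta / 2.0
    return float(np.round((coeff - offset) / delta) * delta + offset)

def qim_extract(coeff: float, delta: float = 8.0) -> int:
    """Recover the bit by finding the nearer of the two offset lattices."""
    d0 = abs(coeff - qim_embed(coeff, 0, delta))
    d1 = abs(coeff - qim_embed(coeff, 1, delta))
    return 0 if d0 <= d1 else 1
```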
Abstract:
Tracking small targets is a difficult problem in visual tracking. This paper first identifies two main problems in mean-shift tracking of small targets: interruption of tracking and loss of the target. Corresponding solutions are then presented. The traditional Parzen-window density estimator is improved and used to interpolate the histogram of the candidate target region, which largely resolves the tracking-interruption problem. The Kullback-Leibler distance is adopted as a new similarity measure between the target model and candidate targets, and the corresponding weight and position-update formulas are derived, improving tracking accuracy. Tracking experiments on several video sequences show that the proposed algorithm tracks small targets effectively, successfully following targets as small as 6×12 pixels with measurably improved accuracy.
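As a sketch of the similarity measure the abstract adopts, the following computes a Kullback-Leibler distance between a target-model histogram p and a candidate histogram q; the small eps stands in for the Parzen-window interpolation described above, which is not reproduced here.

```python
import numpy as np

def kl_distance(p: np.ndarray, q: np.ndarray, eps: float = 1e-8) -> float:
    """KL divergence D(p || q) between two (possibly unnormalized) histograms."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))
```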
Abstract:
A fast image-processing method is proposed for real-time multi-target recognition in image sequences. Guided by prior knowledge and predefined criteria, the method partitions images with complex backgrounds into windows; each window is independently processed with adaptive fast median filtering, adaptive re-quantization based on local gray-level information, and maximum-entropy segmentation, achieving fast and accurate extraction and recognition of designated targets across the panoramic field of view. The method offers a new image-processing approach for the visual localization, navigation, and target tracking of mobile robots operating among multiple targets in dynamic environments.
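A minimal sketch of the per-window maximum-entropy segmentation step, in the Kapur style; the windowing, median-filtering, and re-quantization stages are not reproduced, and hist is assumed to be a 256-bin intensity histogram of one window.

```python
import numpy as np

def max_entropy_threshold(hist: np.ndarray) -> int:
    """Pick the threshold maximizing the summed entropies of both classes."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, len(p) - 1):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
             - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```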
Abstract:
If the coordinates of a set of feature points can be measured in two spatial coordinate frames, the pose relation between two spatial objects can be solved. A prerequisite of such pose measurement is that the relative positions within the point set be identical in the different frames, yet computation errors break this fixed geometric relation. Two model-based 3D vision methods are therefore proposed: model-based monocular vision and model-based binocular vision. The former starts from the physical meaning of the vision computation and enforces the model constraint through a simple constrained iteration; the latter combines a simple constrained least-squares formulation with the model-based monocular method. With the model constraint, the monocular method attains very high measurement accuracy. The model-based binocular method improves displacement accuracy only modestly over conventional model-free stereo vision, but improves attitude accuracy considerably.
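As background for the pose problem the abstract starts from, here is a minimal sketch of the standard SVD-based solution for the rotation R and translation t between two frames given matched feature points; the paper's model-constrained iteration and constrained least squares are not reproduced.

```python
import numpy as np

def rigid_transform(A: np.ndarray, B: np.ndarray):
    """A, B: (N, 3) matched points; returns R, t such that B ~ A @ R.T + t."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:           # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t
```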
Abstract:
An extended adaptive quantizer is designed on the basis of JPEG still-image compression. Exploiting characteristics of human vision, it analyzes the local visual activity of each MCU block, determines the quantization factor from an MCU activity function, and introduces a luminance-masking operator to adjust the quantization parameters. Experimental results show that the proposed adaptive quantizer reduces subjective coding distortion, improves image quality, and yields better compression.
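A minimal sketch of activity-adaptive quantization for an 8×8 block in the spirit of the abstract: a local-activity function and a luminance-masking term scale the base quantization step. The particular functions and constants here are illustrative assumptions, not the paper's.

```python
import numpy as np

def adaptive_qstep(block: np.ndarray, base_step: float = 16.0) -> float:
    """Scale the base step by assumed activity and luminance-masking factors."""
    activity = float(np.var(block))           # local visual activity measure
    luminance = float(block.mean()) / 255.0   # brightness masking cue
    act_factor = np.log1p(activity) / 8.0     # busier block -> coarser step
    lum_factor = 0.5 + luminance              # brighter block -> coarser step
    return base_step * (1.0 + act_factor) * lum_factor
```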
Abstract:
Taking the adaptive quantizer of a tracking television system as the design context, this paper proposes a new real-time adaptive fast image quantization method, the successive-mean method. The mean-square quantization error of the method is first analyzed with Lloyd-Max optimal quantization theory, and the handling of isolated bright points in an image is discussed. The method's performance in a tracking television system is then described: simple and fast implementation, adaptivity to illumination changes, and image-contrast enhancement. Image-processing experiments verify the method's performance and the correctness of the theoretical analysis. It is concluded that the successive-mean quantizer is a near-optimal quantizer that can substitute for the Lloyd-Max optimal quantizer, meeting all the design requirements for adaptive quantizers in tracking television systems, and that it is also of interest wherever simple, real-time adaptive quantizers are needed.
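As background, a minimal sketch of the Lloyd-Max iteration the abstract benchmarks against: assignments to the nearest level alternate with conditional-mean level updates on sample data. The successive-mean method itself is not reproduced here.

```python
import numpy as np

def lloyd_max(samples: np.ndarray, n_levels: int, iters: int = 50) -> np.ndarray:
    """Iteratively refit quantization levels to minimize mean-square error."""
    levels = np.linspace(samples.min(), samples.max(), n_levels)
    for _ in range(iters):
        # Partition: each sample goes to its nearest quantization level.
        idx = np.argmin(np.abs(samples[:, None] - levels[None, :]), axis=1)
        # Update: each level becomes the conditional mean of its partition.
        for k in range(n_levels):
            if np.any(idx == k):
                levels[k] = samples[idx == k].mean()
    return levels
```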
Abstract:
In this paper we present some extensions to the k-means algorithm for vector quantization that permit its efficient use in image segmentation and pattern classification tasks. It is shown that by introducing state variables that correspond to certain statistics of the dynamic behavior of the algorithm, it is possible to find the representative centers of the lower-dimensional manifolds that define the boundaries between classes for clouds of multi-dimensional, multi-class data; this permits one, for example, to find class boundaries directly from sparse data (e.g., in image segmentation tasks) or to efficiently place centers for pattern classification (e.g., with local Gaussian classifiers). The same state variables can be used to define algorithms for determining adaptively the optimal number of centers for clouds of data with space-varying density. Some examples of the application of these extensions are also given.
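A minimal sketch of the plain k-means vector quantizer that the abstract extends; the state-variable extensions for boundary finding and adaptive center counts are not reproduced.

```python
import numpy as np

def kmeans_vq(X: np.ndarray, k: int, iters: int = 100, seed: int = 0):
    """Plain k-means: X is (N, D) data; returns the (k, D) codebook and labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each vector to its nearest center (Euclidean distance).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of the vectors assigned to it.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels
```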
Abstract:
Automated assembly of mechanical devices is studied by researching methods of operating assembly equipment in a variable manner; that is, systems which may be configured to perform many different assembly operations are studied. The general parts-assembly operation involves the removal of alignment errors within some tolerance and without damaging the parts. Two methods for eliminating alignment errors are discussed: a priori suppression, and measurement and removal. Both methods are studied, with the more novel measurement-and-removal technique examined in greater detail. During the study of this technique, a fast and accurate six degree-of-freedom position sensor based on a light-stripe vision technique was developed. Specifications for the sensor were derived from an assembly-system error analysis. Studies on extracting accurate information from the sensor by optimally reducing redundant information, filtering quantization noise, and careful calibration procedures were performed. Prototype assembly systems for both error-elimination techniques were implemented and used to assemble several products. The assembly system based on the a priori suppression technique uses a number of mechanical assembly tools and software systems which extend the capabilities of industrial robots. The need for the tools was determined through an assembly task analysis of several consumer and automotive products. The assembly system based on the measurement-and-removal technique used the six degree-of-freedom position sensor to measure part misalignments. Robot commands for aligning the parts were automatically calculated based on the sensor data and executed.
Abstract:
King, R. D. and Ouali, M. (2004). Poly-transformation. In Proceedings of the 5th International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2004), Springer LNCS 3177, pp. 99-107.
Abstract:
Q. Shen and R. Jensen, 'Selecting Informative Features with Fuzzy-Rough Sets and its Application for Complex Systems Monitoring,' Pattern Recognition, vol. 37, no. 7, pp. 1351-1363, 2004.
Abstract:
This paper shows how knowledge, in the form of fuzzy rules, can be derived from a self-organizing supervised learning neural network called fuzzy ARTMAP. Rule extraction proceeds in two stages: pruning removes those recognition nodes whose confidence index falls below a selected threshold; and quantization of continuous learned weights allows the final system state to be translated into a usable set of rules. Simulations on a medical prediction problem, the Pima Indian Diabetes (PID) database, illustrate the method. In the simulations, pruned networks about 1/3 the size of the original actually show improved performance. Quantization yields comprehensible rules with only slight degradation in test set prediction performance.
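A minimal sketch of the weight-quantization step in the rule-extraction procedure: continuous learned weights in [0, 1] are snapped to a few levels so each retained recognition node can be read as a rule with linguistic values. The three-level mapping is an illustrative assumption.

```python
import numpy as np

def quantize_weights(w: np.ndarray, n_levels: int = 3) -> np.ndarray:
    """Snap weights in [0, 1] to n_levels evenly spaced values."""
    return np.round(w * (n_levels - 1)) / (n_levels - 1)

# Hypothetical linguistic labels for a three-level quantization.
LABELS = {0.0: "low", 0.5: "medium", 1.0: "high"}
```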
Abstract:
Two classes of techniques have been developed to whiten the quantization noise in digital delta-sigma modulators (DDSMs): deterministic and stochastic. In this two-part paper, a design methodology for reduced-complexity DDSMs is presented. The design methodology is based on error masking. Rules for selecting the word lengths of the stages in multistage architectures are presented. We show that the hardware requirement can be reduced by up to 20% compared with a conventional design, without sacrificing performance. Simulation and experimental results confirm theoretical predictions. Part I addresses MultistAge noise SHaping (MASH) DDSMs; Part II focuses on single-quantizer DDSMs.
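A minimal sketch of the accumulator-based first-order stage from which MASH DDSMs are built: the carry-out stream is the quantized output, whose long-run mean encodes the input, and the residue left in the accumulator is the shaped quantization error. The multistage combination and the paper's word-length selection rules are not reproduced.

```python
def ddsm_first_order(x_seq, n_bits: int):
    """First-order DDSM: x_seq holds unsigned ints below 2**n_bits."""
    mod = 1 << n_bits
    acc = 0
    out = []
    for x in x_seq:
        acc += x
        out.append(acc // mod)   # carry bit = quantized output
        acc %= mod               # residue = shaped quantization error
    return out
```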
Abstract:
This work considers the effect of hardware constraints that typically arise in practical power-aware wireless sensor network systems. A rigorous methodology is presented that quantifies the effect of output power limit and quantization constraints on bit error rate performance. The approach uses a novel, intuitively appealing means of addressing the output power constraint, wherein the attendant saturation block is mapped from the output of the plant to its input and compensation is then achieved using a robust anti-windup scheme. A priori levels of system performance are attained using a quantitative feedback theory approach on the initial, linear stage of the design paradigm. This hybrid design is assessed experimentally using a fully compliant 802.15.4 testbed where mobility is introduced through the use of autonomous robots. A benchmark comparison between the new approach and a number of existing strategies is also presented.
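A minimal sketch of the two hardware constraints the abstract quantifies, output-power saturation followed by quantization to a radio's discrete power levels; the limit and level set here are illustrative assumptions, and the anti-windup compensation scheme itself is not reproduced.

```python
import numpy as np

def constrain_power(p_dbm: float, p_max: float = 0.0) -> float:
    """Clip the requested power, then snap it to assumed 1 dB radio levels."""
    levels = np.arange(-25.0, 0.5, 1.0)                   # assumed levels (dBm)
    p = min(p_dbm, p_max)                                 # output-power saturation
    return float(levels[np.argmin(np.abs(levels - p))])   # quantization
```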
Abstract:
We consider massless higher spin gauge theories with both electric and magnetic sources, with a special emphasis on the spin two case. We write the equations of motion at the linear level (with conserved external sources) and introduce Dirac strings so as to derive the equations from a variational principle. We then derive a quantization condition that generalizes the familiar Dirac quantization condition, and which involves the conserved charges associated with the asymptotic symmetries for higher spins. Next we discuss briefly how the result extends to the nonlinear theory. This is done in the context of gravitation, where the Taub-NUT solution provides the exact solution of the field equations with both types of sources. We rederive, in analogy with electromagnetism, the quantization condition from the quantization of the angular momentum. We also observe that the Taub-NUT metric is asymptotically flat at spatial infinity in the sense of Regge and Teitelboim (including their parity conditions). It follows, in particular, that one can consistently consider in the variational principle configurations with different electric and magnetic masses. © 2006 The American Physical Society.
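For reference, a minimal statement of the familiar spin-one Dirac condition that the paper generalizes, relating an electric charge e and a magnetic charge g (in Heaviside-Lorentz units with c = 1):

```latex
% Dirac quantization condition, Heaviside-Lorentz units, c = 1:
\[
  e\,g = 2\pi n \hbar, \qquad n \in \mathbb{Z}.
\]
```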
Abstract:
The AltiKa altimeter records the reflection of Ka-band radar pulses from the Earth’s surface, with the commonly used waveform product involving the summation of 96 returns to provide average echoes at 40 Hz. Occasionally there are one-second recordings of the complex individual echoes (IEs), which facilitate the evaluation of on-board processing and offer the potential for new processing strategies. Our investigation of these IEs over the ocean confirms the on-board operations, whilst noting that data quantization limits the accuracy in the thermal noise region. By constructing average waveforms from 32 IEs at a time, and applying an innovative subwaveform retracker, we demonstrate that accurate height and wave height information can be retrieved from very short sections of data. Early exploration of the complex echoes reveals structure in the phase information similar to that noted for Envisat’s IEs.
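A minimal sketch of the averaging step described, assuming ies holds the complex individual echoes as rows: each block of 32 echoes is converted to power and averaged to form a waveform for retracking. The subwaveform retracker is not reproduced.

```python
import numpy as np

def average_waveforms(ies: np.ndarray, block: int = 32) -> np.ndarray:
    """ies: (N, gates) complex echoes -> (N // block, gates) power waveforms."""
    power = np.abs(ies) ** 2
    n = (len(power) // block) * block             # drop any partial block
    return power[:n].reshape(-1, block, power.shape[1]).mean(axis=1)
```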