983 results for Adaptive Image Binarization
Abstract:
ACM Computing Classification System (1998): I.7, I.7.5.
Abstract:
We report a novel technique for adaptive image sequence coding. The number of reference frames and the intervals between them are adjusted to improve the temporal compensability of the input video, and bits are distributed more efficiently across frame types according to the temporal and spatial complexity of the scene. Experimental results show that this dynamic group-of-pictures (GOP) coding scheme is not only feasible but also better than the conventional fixed-GOP method in terms of perceptual quality and SNR. (C) 1996 Society of Photo-Optical Instrumentation Engineers.
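A minimal sketch of how such a dynamic-GOP decision rule could work, restarting a GOP on a scene change or when the current GOP grows too long; the function names and threshold values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def mean_abs_diff(frame_a, frame_b):
    """Mean absolute luminance difference, a crude temporal-complexity measure."""
    return np.mean(np.abs(frame_a.astype(np.float64) - frame_b.astype(np.float64)))

def dynamic_gop_boundaries(frames, scene_change_thresh=25.0, max_gop_len=30):
    """Return the frame indices where a new GOP (an I-frame) should start.

    A new GOP begins on a scene change (large temporal difference) or when
    the current GOP reaches max_gop_len; both thresholds are illustrative.
    """
    boundaries = [0]  # the first frame is always an I-frame
    for i in range(1, len(frames)):
        gop_len = i - boundaries[-1]
        if mean_abs_diff(frames[i - 1], frames[i]) > scene_change_thresh or gop_len >= max_gop_len:
            boundaries.append(i)
    return boundaries
```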
Abstract:
This paper details the design and implementation of a low-power, hardware-efficient, adaptive self-calibrating image rejection receiver based on blind source separation that alleviates RF analog front-end impairments. A hybrid strength-reduced, rescheduled-dataflow, low-power implementation of the adaptive self-calibration algorithm is developed, and its efficiency is demonstrated through simulation case studies. A behavioral and structural model is developed in Matlab, together with a low-level architectural design in VHDL, providing valuable test benches for the performance measurements undertaken on the detailed algorithms and structures.
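For context, a minimal sketch of the widely published circularity-restoring blind I/Q-imbalance compensator, which is one standard blind-source-separation approach to image rejection; this is an illustrative baseline, not necessarily the paper's exact algorithm, and the step size is an assumption:

```python
import numpy as np

def blind_iq_compensate(x, mu=1e-4):
    """Blind image-rejection (I/Q imbalance) compensation on complex
    baseband samples x.

    Uses the circularity-restoring update
        y(n) = x(n) + w * conj(x(n)),   w <- w - mu * y(n)**2,
    which drives E[y^2] toward zero for a proper desired signal and thereby
    suppresses the mirror-frequency (image) component.
    """
    w = 0.0 + 0.0j
    y = np.empty_like(x)
    for n in range(len(x)):
        y[n] = x[n] + w * np.conj(x[n])
        w -= mu * y[n] ** 2  # stochastic-gradient update toward properness
    return y
```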
Abstract:
The aim of the thesis was to design and develop spatially adaptive denoising techniques with edge and feature preservation for images corrupted with additive white Gaussian noise and for SAR images affected by speckle noise. Image denoising is a well-researched topic with multifaceted applications in everyday life, and denoising based on multiresolution analysis using the wavelet transform has received considerable attention in recent years. The directionlet-based denoising schemes presented in this thesis are effective in preserving image-specific features such as edges and contours. The scope of this research remains open in areas such as further optimization for speed and the extension of the techniques to related areas such as colour and video denoising; such studies would further augment the practical use of these techniques.
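As a reference point for this family of methods, here is a minimal sketch of classical wavelet-domain denoising by soft-thresholding (VisuShrink-style universal threshold, using PyWavelets); directionlet transforms are not covered by standard libraries, so a separable wavelet is assumed here purely for illustration:

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet="db4", level=3):
    """Baseline multiresolution denoising: soft-threshold the wavelet
    detail coefficients and reconstruct."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    # Robust noise estimate from the finest diagonal detail subband.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(img.size))  # universal threshold
    new_coeffs = [coeffs[0]]  # keep the approximation band untouched
    for detail in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(d, thresh, mode="soft") for d in detail))
    return pywt.waverec2(new_coeffs, wavelet)
```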
Abstract:
One of the main concerns of evolvable and adaptive systems is the need for a training mechanism, which is normally implemented using a training reference and a test input. The fitness function to be optimized during the evolution (training) phase is obtained by comparing the output of the candidate systems against the reference. The adaptivity that such systems may provide by re-evolving during operation is especially important for applications with runtime-variable conditions. However, fully automated self-adaptivity poses additional problems. For instance, in some cases it is not possible to have such a reference, because the changes in the environment conditions are unknown, so it becomes difficult to autonomously identify which problem needs to be solved and, hence, which conditions should be representative for an adequate re-evolution. In this paper, a solution to this dependency is presented and analyzed. The system consists of an image filter application mapped on an evolvable hardware platform, able to evolve using two consecutive frames from a camera as test and reference images. The system is entirely mapped in an FPGA, and native dynamic partial reconfiguration is used for evolution. It is also shown that using such images, both of them noisy, as input and reference in the evolution phase is equivalent or even superior to evolving the filter with offline images. The combination of both techniques results in the completely autonomous, noise type- and level-agnostic filtering system, with no reference image requirement, described throughout the paper.
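A minimal sketch of the reference-free fitness idea described above, scoring a candidate filter on one noisy frame against the next noisy frame; the use of mean squared error is an assumption for illustration, as the abstract does not name the exact error metric:

```python
import numpy as np

def fitness_without_reference(filter_fn, frame_t, frame_t1):
    """Reference-free fitness: filter the current frame and score it against
    the next (also noisy) frame. Because the noise realizations in the two
    frames are independent, minimizing this error still rewards filters that
    recover the underlying scene (lower is better)."""
    filtered = filter_fn(frame_t.astype(np.float64))
    return np.mean((filtered - frame_t1.astype(np.float64)) ** 2)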
Abstract:
Evolvable Hardware (EH) is a technique that uses reconfigurable hardware devices whose configuration is controlled by an Evolutionary Algorithm (EA). Our system is a fully FPGA-implemented scalable EH platform in which the Reconfigurable processing Core (RC) can adaptively grow or shrink. Figure 1 shows the architecture of the proposed System-on-Programmable-Chip (SoPC), consisting of a MicroBlaze processor responsible for controlling the whole system operation, a Reconfiguration Engine (RE), and a Reconfigurable processing Core able to change its size in both height and width. The system is used to implement image filters, which are generated autonomously by the evolutionary process, and is complemented with a camera that enables its use in real-time applications.
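For illustration only, a minimal sketch of the kind of (1+lambda) evolutionary loop a soft processor could orchestrate in such a platform; the function names are hypothetical, and in the real system candidate evaluation would go through the Reconfiguration Engine and the RC array rather than plain Python calls:

```python
def evolve_filter(evaluate, mutate, seed_cfg, generations=1000, lam=4):
    """(1+lambda) evolution strategy: 'evaluate' scores a filter
    configuration (lower is better) and 'mutate' returns a perturbed copy."""
    parent, parent_cost = seed_cfg, evaluate(seed_cfg)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(lam)]
        scored = [(evaluate(c), c) for c in children]
        best_cost, best = min(scored, key=lambda t: t[0])
        if best_cost <= parent_cost:  # accept equal cost to keep drifting
            parent, parent_cost = best, best_cost
    return parent
```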
Abstract:
Vision-based underwater navigation and obstacle avoidance demand robust computer vision algorithms, particularly for operation in turbid water with reduced visibility. This paper describes a novel method for simultaneous underwater image quality assessment, visibility enhancement, and disparity computation, which increases stereo range resolution under dynamic, natural lighting and turbid conditions. The technique estimates the visibility properties from a sparse 3D map of the original degraded image using a physical underwater light-attenuation model. First, an iterated distance-adaptive image contrast enhancement enables dense disparity computation and visibility estimation. Second, using a light-attenuation model for ocean water, a color-corrected stereo underwater image is obtained along with a visibility distance estimate. Experimental results in shallow, naturally lit, high-turbidity coastal environments show that the proposed technique improves range estimation over the original images, as well as image quality and color for habitat classification. Furthermore, the recursiveness and robustness of the technique allow implementation onboard an Autonomous Underwater Vehicle to improve navigation and obstacle avoidance performance.
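A minimal sketch of inverting a simplified underwater light-attenuation model of the kind this class of methods relies on; the specific model form, attenuation coefficient, and backscatter value are illustrative assumptions (in practice they are wavelength-dependent and estimated from the scene):

```python
import numpy as np

def restore_underwater(img, depth, beta=0.12, backlight=0.85):
    """Invert the simplified attenuation model
        I = J * exp(-beta * d) + B * (1 - exp(-beta * d))
    to recover the unattenuated radiance J from an observed image I in
    [0, 1], a per-pixel range map d (e.g. from stereo disparity), an
    attenuation coefficient beta, and a backscatter/ambient term B."""
    t = np.exp(-beta * depth)          # per-pixel transmission map
    t = np.clip(t, 1e-3, 1.0)          # avoid division blow-up at range
    J = (img.astype(np.float64) - backlight * (1.0 - t)) / t
    return np.clip(J, 0.0, 1.0)
```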
Abstract:
In this paper, we discuss issues related to word recognition in born-digital word images. We introduce a novel method of power-law (gamma) transformation of the word image for binarization, and show the improvement in binarization and the consequent increase in the recognition performance of an OCR engine on the word image. The optimal gamma for a word image is chosen automatically by our algorithm with a fixed stroke width threshold. We have experimented exhaustively with our algorithm by varying the gamma and stroke width threshold values, and found that it outperforms the results reported in the literature. On the ICDAR Robust Reading Systems Challenge-1: Word Recognition Task on the born-digital dataset, we achieved 82.9% using Omnipage OCR applied to images processed by our algorithm, compared with a recognition rate of 61.5% achieved by TH-OCR after suitable pre-processing by Yang et al. and 63.4% by ABBYY Fine Reader (used as the baseline by the competition organizers, without any preprocessing).
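A minimal sketch of the overall idea (gamma search followed by Otsu binarization, with the gamma accepted once a crude stroke-width estimate falls below a fixed threshold); the search grid, threshold value, and stroke-width estimator are illustrative assumptions, not the paper's exact procedure:

```python
import cv2
import numpy as np

def binarize_with_gamma_search(gray, gammas=np.arange(0.5, 3.1, 0.1), max_stroke_width=8.0):
    """Power-law transform + Otsu binarization, choosing the smallest gamma
    whose crude stroke-width estimate (twice the mean distance-transform
    value over ink pixels) falls below a fixed stroke-width threshold."""
    norm = gray.astype(np.float64) / 255.0
    for gamma in gammas:
        boosted = np.uint8(255.0 * norm ** gamma)        # power-law transform
        _, binary = cv2.threshold(boosted, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        ink = binary > 0
        if not ink.any():
            continue
        dist = cv2.distanceTransform(binary, cv2.DIST_L2, 3)
        stroke_width = 2.0 * dist[ink].mean()
        if stroke_width <= max_stroke_width:
            return gamma, binary
    # Fallback: plain Otsu on the untransformed image (gamma = 1).
    return 1.0, cv2.threshold(np.uint8(255 * norm), 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
```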
Abstract:
Based on singular value decomposition (SVD) and the principle of energy minimization, an adaptive image denoising algorithm is proposed, and an algebraic form of the bounded-variation energy denoising model is given. By minimizing the energy in the sense of a matrix norm, the number of singular values used to reconstruct the denoised image is determined adaptively. The distinctive feature of the algorithm is that it combines the energy-minimization principle with singular value decomposition, establishing an adaptive image denoising algorithm in an algebraic space. Compared with denoising methods based on the compression ratio and SVD, the algorithm avoids computing the image compression-ratio function and its inflection point, and is therefore fast and simple to implement. Experimental results demonstrate that the algorithm is effective.
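A minimal sketch of truncated-SVD image denoising of the kind the abstract builds on; the energy-fraction rule for picking the number of retained singular values is an illustrative stand-in for the paper's matrix-norm energy-minimization criterion:

```python
import numpy as np

def svd_denoise(img, noise_energy_fraction=0.02):
    """Truncated-SVD denoising: keep the smallest number of singular values
    that captures the assumed signal energy, discarding the low-energy
    components attributed to noise."""
    U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
    energy = np.cumsum(s ** 2) / np.sum(s ** 2)   # cumulative energy fraction
    k = int(np.searchsorted(energy, 1.0 - noise_energy_fraction)) + 1
    return (U[:, :k] * s[:k]) @ Vt[:k, :]         # rank-k reconstruction
```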
Abstract:
Taking the adaptive quantizer of a tracking television system as the design background, this paper proposes a new, real-time adaptive fast image quantization method, the successive-extremum mean method (逐极均值法). The mean-square quantization error of the method is first analyzed using Lloyd-Max optimal quantization theory, and the handling of isolated bright spots in the image is discussed. The performance of the method in a tracking television system is then described: simplicity and speed of implementation, adaptivity to illumination changes, and image contrast enhancement. Image processing experiments verify the method's performance and the correctness of the theoretical analysis. It is concluded that the successive-extremum mean quantizer is a suboptimal quantizer that can replace the Lloyd-Max optimal quantizer; it satisfies the various performance requirements for the design of an adaptive quantizer in tracking television systems and is also relevant to other applications requiring simple, real-time adaptive quantizers.
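For reference, a minimal sketch of the classical Lloyd-Max quantizer design that the abstract uses as its optimal baseline (the proposed successive-extremum mean method itself is not specified in enough detail here to reproduce):

```python
import numpy as np

def lloyd_max(samples, levels=8, iters=50):
    """Classical Lloyd-Max scalar quantizer design: decision thresholds sit
    midway between reconstruction levels, and each level is the mean
    (centroid) of the samples falling in its region."""
    x = np.sort(samples.astype(np.float64).ravel())
    # Initialize reconstruction levels uniformly over the data range.
    recon = np.linspace(x[0], x[-1], levels)
    for _ in range(iters):
        thresholds = 0.5 * (recon[:-1] + recon[1:])
        bins = np.digitize(x, thresholds)
        for j in range(levels):
            members = x[bins == j]
            if members.size:
                recon[j] = members.mean()   # centroid condition
    return recon, thresholds
```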
Abstract:
The rapid technical advances in computed tomography (CT) have led to an increased number of clinical indications. Unfortunately, radiation exposure to the population has increased at the same time because of the growing total number of CT examinations. In recent years, various publications have demonstrated the feasibility of reducing the radiation dose of CT examinations without compromising image quality or interpretation accuracy. The majority of the proposed methods for dose optimization are easy to apply and are independent of the detector array configuration. This article reviews indication-dependent principles (e.g. application of reduced tube voltage for CT angiography, selection of the collimation and the pitch, reducing the total number of imaging series, lowering the tube voltage and tube current for non-contrast CT scans), manufacturer-dependent principles (e.g. accurate application of automatic tube current modulation, use of adaptive image noise filters and use of iterative image reconstruction) and general principles (e.g. appropriate patient centering in the gantry, avoiding over-ranging of the CT scan, lowering the tube voltage and tube current for survey CT scans) that lead to radiation dose reduction.