992 results for Adaptive Image


Relevance: 30.00%

Publisher:

Abstract:

The tap-length, or the number of taps, is an important structural parameter of the linear MMSE adaptive filter. Although the optimum tap-length that balances performance and complexity varies with the scenario, most current adaptive filters fix the tap-length at some compromise value, making them inefficient to implement, especially in time-varying scenarios. A novel gradient-search-based variable tap-length algorithm is proposed, using the concept of the pseudo-fractional tap-length, and it is shown that the new algorithm can converge to the optimum tap-length in the mean. Results of computer simulations are also provided to verify the analysis.
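To make the idea concrete, below is a minimal sketch of a gradient-search variable tap-length LMS filter driven by a pseudo-fractional tap-length; the parameter names, step sizes and the exact update rule are illustrative assumptions for the sketch, not the algorithm exactly as published.

    import numpy as np

    def vtl_lms(x, d, n_init=10, mu=0.01, delta=4, alpha=0.01, gamma=0.1, n_max=100):
        """Variable tap-length LMS sketch using a pseudo-fractional tap-length.

        x, d   : input and desired signals (1-D arrays longer than n_max)
        delta  : tap-length comparison offset
        alpha  : leakage applied to the fractional tap-length
        gamma  : step size of the fractional tap-length update
        """
        n = n_init                 # integer tap-length actually in use
        nf = float(n_init)         # pseudo-fractional tap-length
        w = np.zeros(n_max)        # over-allocated weight vector
        length_history = []
        for k in range(n_max - 1, len(x)):
            u = x[k - n_max + 1:k + 1][::-1]            # most recent sample first
            e_full = d[k] - w[:n] @ u[:n]               # error with all n taps
            e_short = d[k] - w[:n - delta] @ u[:n - delta]  # error with n - delta taps
            w[:n] += mu * e_full * u[:n]                # standard LMS weight update
            # Fractional update: grow when the extra taps reduce the error, shrink otherwise.
            nf = (nf - alpha) - gamma * (e_full ** 2 - e_short ** 2)
            nf = min(max(nf, delta + 1), n_max)
            if abs(round(nf) - n) >= delta:             # change the structure only occasionally
                n = int(round(nf))
            length_history.append(n)
        return w[:n], length_history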

Relevance: 30.00%

Publisher:

Abstract:

In this paper, an adaptive approach to color image enhancement is proposed. In this approach, the saturation feedback technique is used as a means of improving color image sharpness and contrast. Saturation feedback can bring out image details that have low luminance contrast. The feedback parameters are the key component of the technique and are usually determined manually. To realize adaptive color image enhancement, a genetic algorithm is employed to automatically search for globally optimal saturation feedback parameters. The detailed procedures are described in the paper. Experimental results on color images show the feasibility of the proposed method.
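As an illustration of how such a search could be wired together, the sketch below evolves a single saturation-feedback gain with a toy genetic algorithm; the HSV-based feedback model, the contrast-based fitness and the use of OpenCV (cv2) are assumptions made for the example, not the paper's exact formulation.

    import numpy as np
    import cv2  # assumed dependency, used only for color-space conversion

    def enhance(img_bgr, k):
        """Feed a fraction k of the saturation channel back into the value channel."""
        hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        h, s, v = cv2.split(hsv)
        v = np.clip(v + k * s, 0, 255)                  # saturation feedback into luminance
        out = cv2.merge([h, s, v]).astype(np.uint8)
        return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)

    def fitness(img_bgr):
        """Illustrative fitness: gray-level standard deviation, a crude contrast measure."""
        return float(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).std())

    def ga_search(img_bgr, pop_size=20, generations=30, mutation=0.05):
        """Toy genetic algorithm over a single feedback gain k in [0, 1]."""
        rng = np.random.default_rng(0)
        pop = rng.uniform(0.0, 1.0, pop_size)
        for _ in range(generations):
            scores = np.array([fitness(enhance(img_bgr, k)) for k in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]   # keep the fitter half
            n_children = pop_size - len(parents)
            children = rng.choice(parents, n_children) + rng.normal(0.0, mutation, n_children)
            pop = np.clip(np.concatenate([parents, children]), 0.0, 1.0)
        return pop[np.argmax([fitness(enhance(img_bgr, k)) for k in pop])]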

Relevance: 30.00%

Publisher:

Abstract:

Traditional data compression algorithms for 2D images work within the information-theoretic paradigm, attempting to remove as much redundant information as possible. However, through the use of a depletion algorithm that takes advantage of characteristics of the human visual system, images can be displayed using only half or a quarter of the original information with no appreciable loss of quality.

The characteristic of the human visual system that allows the viewer to perceive a higher rate of information than is actually displayed is known as the beta, or picket fence, effect. It is called the picket fence effect because it is noticeable when a person travels along a picket fence. Although the person never has an unimpeded view of the objects behind the fence at any single instant, the objects behind the picket fence are clearly visible while the person is moving. In fact, in most cases the fence is hardly noticeable at all.

The technique we have developed uses this effect to achieve higher levels of compression than would otherwise be possible. Because a fundamental requirement of the beta effect is movement of the fence relative to the object, the effect can only be exploited in image sequences, where movement between the depletion pattern and objects within the image can be achieved.
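A minimal sketch of the idea is given below, assuming a simple shifting column mask as the depletion pattern; the actual depletion patterns used by the technique may differ.

    import numpy as np

    def deplete_sequence(frames, keep_every=2):
        """Apply a moving column-depletion pattern to an image sequence.

        frames     : iterable of 2-D (grayscale) numpy arrays
        keep_every : keep one column out of every `keep_every` (2 -> half the data,
                     4 -> a quarter); the kept columns shift each frame so that,
                     over time, every column is eventually displayed.
        """
        for t, frame in enumerate(frames):
            mask = np.zeros(frame.shape[1], dtype=bool)
            mask[(t % keep_every)::keep_every] = True   # shift the pattern each frame
            depleted = frame.copy()
            depleted[:, ~mask] = 0                      # drop (do not transmit) masked columns
            yield depleted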

As MPEG is the recognised standard by which image sequences are coded, compatibility with MPEG is essential. We have modified our technique so that it operates in conjunction with MPEG, providing further compression beyond MPEG alone.

Relevance: 30.00%

Publisher:

Abstract:

Background elimination models are widely used in motion tracking systems. Our aim is to develop a system that performs reliably under adverse lighting conditions. In particular, this includes indoor scenes lit partly or entirely by diffuse natural light. We present a modified "median value" model in which the detection threshold adapts to global changes in illumination. The responses of several models are compared, demonstrating the effectiveness of the new model.
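A minimal sketch of an approximate-median background model with an illumination-adaptive threshold is shown below; the specific adaptation rule (scaling the threshold by the change in global mean intensity) is an illustrative assumption, not the exact model proposed here.

    import numpy as np

    def track_foreground(frames, base_thresh=25.0, k=2.5):
        """Running-median background subtraction with an illumination-adaptive threshold.

        The background estimate steps toward each new frame by +/-1 per pixel (an
        approximate median). The detection threshold is enlarged in proportion to
        the global change in illumination so that diffuse lighting changes do not
        flood the foreground mask.
        """
        frames = iter(frames)
        bg = next(frames).astype(np.float32)
        prev_mean = bg.mean()
        for frame in frames:
            f = frame.astype(np.float32)
            global_shift = abs(f.mean() - prev_mean)    # global illumination change
            thresh = base_thresh + k * global_shift
            fg_mask = np.abs(f - bg) > thresh
            bg += np.sign(f - bg)                       # approximate median update
            prev_mean = f.mean()
            yield fg_mask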

Relevance: 30.00%

Publisher:

Abstract:

Traditional content-based image retrieval (CBIR) schemes, which assume independent individual images in large-scale collections, suffer from poor retrieval performance. In medical applications, images usually exist in the form of image bags, and each bag includes multiple relevant images with the same perceptual meaning. In this paper, based on these natural image bags, we explore a new scheme to improve the performance of medical image retrieval. It is feasible and efficient to search a bag-based medical image collection by providing a query bag. However, there is a critical problem: noisy images may be present in image bags and severely affect retrieval performance. A new three-stage solution is proposed to perform the retrieval and handle the noisy images. In stage 1, to alleviate the influence of noisy images, we associate each image in the image bags with a relevance degree. In stage 2, a novel similarity aggregation method is proposed to incorporate image relevance and feature importance into the similarity computation process. In stage 3, we obtain the final image relevance in an adaptive way that considers both image-bag similarity and individual-image similarity. The experiments demonstrate that the proposed approach can improve image retrieval performance significantly.
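The toy sketch below shows how the three stages could fit together for a single query-bag/database-bag pair; the centroid-based relevance degrees, the weighted-distance similarity and the fixed mixing weight are illustrative assumptions, not the actual formulas of the proposed method.

    import numpy as np

    def bag_similarity(query_bag, db_bag, feature_w=None, beta=0.5):
        """Toy bag-to-bag similarity in the spirit of the three-stage scheme.

        query_bag, db_bag : lists of feature vectors (one per image)
        feature_w         : per-dimension feature importance weights
        beta              : mix between bag-level and best single-image similarity
                            (chosen adaptively in the actual method)
        """
        q = np.asarray(query_bag, dtype=float)
        d = np.asarray(db_bag, dtype=float)
        w = np.ones(q.shape[1]) if feature_w is None else np.asarray(feature_w, float)

        # Stage 1: relevance degree of each database image, taken here as closeness
        # to its own bag centroid, so outliers (noisy images) receive small weights.
        centroid = d.mean(axis=0)
        rel = 1.0 / (1.0 + np.linalg.norm(d - centroid, axis=1))
        rel /= rel.sum()

        # Stage 2: relevance- and feature-weighted pairwise similarities.
        dists = np.sqrt((((q[:, None, :] - d[None, :, :]) ** 2) * w).sum(axis=2))
        sims = 1.0 / (1.0 + dists)                      # (n_query, n_db) similarity matrix
        bag_sim = (sims * rel[None, :]).sum(axis=1).mean()

        # Stage 3: blend bag-level similarity with the best individual match.
        return beta * bag_sim + (1.0 - beta) * sims.max()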

Relevance: 30.00%

Publisher:

Abstract:

Developing a watermarking method that is robust to cropping attacks and random bending attacks (RBAs) is a challenging task in image watermarking. In this paper, we propose a histogram-based image watermarking method to tackle both cropping attacks and RBAs. In this method, the gray levels are first divided into groups. Second, the groups for watermark embedding are selected according to the number of pixels in them, which makes the method fully based on the histogram shape of the original image and adaptive to different images. The watermark bits are then embedded by modifying the histogram of the selected groups. Since histogram shape is insensitive to cropping and independent of pixel positions, the proposed method is robust to cropping attacks and RBAs. It also has high robustness against other common attacks. Experimental results demonstrate the effectiveness of the proposed method. © 2014 IEEE.
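The sketch below shows one way such histogram-group embedding could look for a grayscale image; the group size, the selection threshold and the ratio-based embedding rule are illustrative assumptions, not the exact scheme of the paper.

    import numpy as np

    def embed_bits(img, bits, group_size=4, min_pixels=2000, t=1.1):
        """Histogram-shape watermarking sketch for a single-channel uint8 image.

        Gray levels are split into groups of `group_size` consecutive levels, and
        groups holding at least `min_pixels` pixels are selected, so the selection
        adapts to the image's own histogram. One bit goes into each selected group
        by forcing the pixel-count ratio of its lower half a and upper half b to
        satisfy a/b >= t (bit 1) or b/a >= t (bit 0).
        """
        out = img.copy()
        half = group_size // 2
        for bit, start in zip(bits, _selected_groups(img, group_size, min_pixels)):
            lo = (out >= start) & (out < start + half)
            hi = (out >= start + half) & (out < start + group_size)
            a, b = lo.sum(), hi.sum()
            if bit == 1 and a < t * b:
                n = int(np.ceil((t * b - a) / (1 + t)))       # pixels to move from hi to lo
                idx = np.flatnonzero(hi)[:n]
                out.ravel()[idx] = start + half - 1           # shift into the lower half
            elif bit == 0 and b < t * a:
                n = int(np.ceil((t * a - b) / (1 + t)))       # pixels to move from lo to hi
                idx = np.flatnonzero(lo)[:n]
                out.ravel()[idx] = start + half               # shift into the upper half
        return out

    def _selected_groups(img, group_size, min_pixels):
        hist = np.bincount(img.ravel(), minlength=256)
        return [g for g in range(0, 256, group_size)
                if hist[g:g + group_size].sum() >= min_pixels]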

Relevance: 30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 30.00%

Publisher:

Abstract:

Biological processes are very complex mechanisms, most of them being accompanied by or manifested as signals that reflect their essential characteristics and qualities. The development of diagnostic techniques based on signal and image acquisition from the human body is commonly regarded as one of the driving factors behind the recent advances in medicine and the biosciences. The instruments used for biological signal and image recording, like any other acquisition system, are affected by non-idealities that, to different degrees, negatively impact the accuracy of the recording. This work discusses how these effects can be attenuated, and ideally removed, with particular attention to ultrasound imaging and extracellular recordings. Original algorithms developed during the Ph.D. research activity are examined and compared to algorithms in the literature that tackle the same problems; results are drawn from comparative tests on both synthetic and in-vivo acquisitions, evaluating standard metrics in the respective fields of application. All the developed algorithms share an adaptive approach to signal analysis, meaning that their behavior is driven not only by designer choices but also by input signal characteristics. Performance comparisons following the state of the art in image quality assessment, contrast gain estimation and resolution gain quantification, as well as visual inspection, highlight very good results for the proposed ultrasound image deconvolution and restoration algorithms: axial resolution up to 5 times better than that of algorithms in the literature is possible. For extracellular recordings, the proposed denoising technique, compared to other signal processing algorithms, improves on the state of the art by almost 4 dB.

Relevance: 30.00%

Publisher:

Abstract:

In recent years, due to the rapid convergence of multimedia services, the Internet and wireless communications, there has been a growing trend toward heterogeneity (in terms of channel bandwidths, terminal mobility levels and end-user quality-of-service (QoS) requirements) in emerging integrated wired/wireless networks. Moreover, in today's systems a multitude of users coexists within the same network, each with its own QoS requirement and bandwidth availability. In this framework, embedded source coding, which allows partial decoding at various resolutions, is an appealing technique for multimedia transmission. This dissertation covers my PhD research, mainly devoted to the study of embedded multimedia bitstreams in heterogeneous networks, developed at the University of Bologna, advised by Prof. O. Andrisano and Prof. A. Conti, and at the University of California, San Diego (UCSD), where I spent eighteen months as a visiting scholar, advised by Prof. L. B. Milstein and Prof. P. C. Cosman. To improve multimedia transmission quality over wireless channels, joint source and channel coding optimization is investigated in a 2D time-frequency resource block for an OFDM system. We show that knowing the order of diversity in the time and/or frequency domain can assist image (video) coding in selecting optimal channel code rates (source and channel code rates). Adaptive modulation techniques, aimed at maximizing spectral efficiency, are then investigated as another possible way to improve multimedia transmission. For both slow and fast adaptive modulation, the effects of imperfect channel estimation are evaluated, showing that the fast technique, optimal in ideal systems, may be outperformed by slow adaptive modulation when a realistic test case is considered. Finally, the effects of co-channel interference and approximated bit error probability (BEP) on adaptive modulation techniques are evaluated, providing new decision-region concepts and showing how the widely used BEP approximations lead to a substantial loss in overall performance.
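As a small illustration of threshold-based adaptive modulation and of why channel-estimation errors matter, the sketch below selects a constellation from an SNR estimate with an optional back-off margin; the schemes, thresholds and margin values are illustrative, not results from the dissertation.

    # Illustrative SNR thresholds (dB) above which each scheme meets a target BEP;
    # real thresholds depend on the target BEP and on the channel model.
    SCHEMES = [("BPSK", 1, 7.0), ("QPSK", 2, 10.0), ("16-QAM", 4, 17.0), ("64-QAM", 6, 23.0)]

    def select_scheme(estimated_snr_db, margin_db=0.0):
        """Pick the highest-rate scheme whose threshold is met by the SNR estimate.

        `margin_db` is a back-off that can absorb channel-estimation error: a fast
        adaptation loop working on a noisy, outdated estimate may need a larger
        margin than a slow loop that adapts to average channel statistics.
        """
        chosen = SCHEMES[0]
        for name, bits_per_symbol, threshold in SCHEMES:
            if estimated_snr_db - margin_db >= threshold:
                chosen = (name, bits_per_symbol, threshold)
        return chosen

    # Example: a 1.5 dB estimation-error margin changes the selected constellation.
    print(select_scheme(18.0))                  # -> ('16-QAM', 4, 17.0)
    print(select_scheme(18.0, margin_db=1.5))   # -> ('QPSK', 2, 10.0)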

Relevance: 30.00%

Publisher:

Abstract:

A new generation of 64-slice high-definition computed tomography (HDCT) devices, complemented by a new iterative image reconstruction algorithm (adaptive statistical iterative reconstruction), offers substantially higher resolution than standard-definition CT (SDCT) scanners. As higher resolution comes with higher noise, we compared the image quality and radiation dose of coronary computed tomography angiography (CCTA) from HDCT versus SDCT. Consecutive patients (n = 93) underwent HDCT and were compared to 93 patients who had previously undergone CCTA with SDCT, matched for heart rate (HR), HR variability and body mass index (BMI). Tube voltage and current were adapted to the patient's BMI, using identical protocols in both groups. The image quality of all CCTA scans was evaluated by two independent readers in all coronary segments using a 4-point scale (1, excellent image quality; 2, blurring of the vessel wall; 3, image with artefacts but evaluable; 4, non-evaluable). Effective radiation dose was calculated from the DLP multiplied by a conversion factor (0.014 mSv/(mGy × cm)). The mean image quality score from HDCT versus SDCT was comparable (2.02 ± 0.68 vs. 2.00 ± 0.76). Mean effective radiation dose did not differ significantly between HDCT (1.7 ± 0.6 mSv, range 1.0-3.7 mSv) and SDCT (1.9 ± 0.8 mSv, range 0.8-5.5 mSv; P = n.s.). HDCT scanners therefore allow low-dose 64-slice CCTA scanning with higher resolution than SDCT while maintaining image quality and an equally low radiation dose. Whether this will translate into higher accuracy of HDCT for CAD detection remains to be evaluated.
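As a quick check of the conversion stated above, a dose-length product (DLP) can be turned into an effective dose as follows; the DLP value in the example is purely illustrative.

    def effective_dose_msv(dlp_mgy_cm, k=0.014):
        """Effective dose estimate: DLP (mGy*cm) times the conversion factor k (mSv per mGy*cm)."""
        return dlp_mgy_cm * k

    # Illustrative DLP of 121 mGy*cm -> about 1.69 mSv, the order of magnitude
    # of the mean doses reported above.
    print(effective_dose_msv(121))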

Relevance: 30.00%

Publisher:

Abstract:

For decades, Distance Transforms have proven useful in many image processing applications, and more recently they have started to be used in computer graphics. The goal of this paper is to propose a new technique based on Distance Transforms for detecting mesh elements that are close to the object's external contour (from a given point of view), and using this information to weight the approximation error that will be tolerated during the mesh simplification process. The results are evaluated in two ways: visually, and using an objective metric that measures the geometric difference between two polygonal meshes.
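A minimal sketch of how a Distance Transform could supply per-vertex weights for the simplification error is given below; the silhouette rasterization, the exponential weighting and the use of SciPy are assumptions made for the example, not the paper's exact procedure.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def contour_weights(silhouette_mask, projected_vertices, sigma=10.0):
        """Per-vertex weights that penalise simplification near the external contour.

        silhouette_mask    : boolean image, True on the object's external contour
                             as seen from the chosen viewpoint
        projected_vertices : (n, 2) array of vertex positions (x, y) projected into
                             that image
        sigma              : distance (in pixels) over which the weight decays
        """
        # Distance Transform: distance of every pixel to the nearest contour pixel.
        dist = distance_transform_edt(~silhouette_mask)
        rows = np.clip(projected_vertices[:, 1].astype(int), 0, dist.shape[0] - 1)
        cols = np.clip(projected_vertices[:, 0].astype(int), 0, dist.shape[1] - 1)
        # Vertices near the contour get weights near 1; distant ones decay toward 0,
        # so a simplifier can tolerate larger error away from the silhouette.
        return np.exp(-dist[rows, cols] / sigma)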

Relevance: 30.00%

Publisher:

Abstract:

Purpose: The rapid distal falloff of a proton beam allows sparing of normal tissues distal to the target. However, proton beams aimed directly toward critical structures are avoided because of range uncertainties, such as CT-number conversion errors and anatomy variations. We propose to eliminate range uncertainty and enable prostate treatment with a single anterior beam by detecting the proton range at the prostate-rectal interface and adaptively adjusting the range in vivo and in real time. Materials and Methods: A prototype device, consisting of an endorectal liquid scintillation detector and dual inverted Lucite wedges for range compensation, was designed to test the feasibility and accuracy of the technique. A liquid-scintillation-filled volume was fitted with an optical fiber and placed inside the rectum of an anthropomorphic pelvic phantom. The photodiode current signal was measured as a function of the proton beam's distal depth, and the spatial resolution of the technique was calculated by relating the variance in detecting proton spills to the maximum penetration depth. The relative water-equivalent thickness of the wedges was measured in a water phantom and prospectively tested to determine the accuracy of range corrections. Treatment simulation studies were performed to test the potential dosimetric benefit in sparing the rectum. Results: The spatial resolution of the detector in phantom measurements was 0.5 mm. The precision of the range correction was 0.04 mm. The residual margin to ensure CTV coverage was 1.1 mm. The composite distal margin for 95% treatment confidence was 2.4 mm. Planning studies based on a previously estimated 2 mm margin (90% treatment confidence) for 27 patients showed rectal sparing of up to 51% at 70 Gy and 57% at 40 Gy relative to IMRT and bilateral proton treatment. Conclusion: We demonstrated the feasibility of our design. Use of this technique allows proton treatment with a single anterior beam, significantly reducing the rectal dose.

Relevance: 30.00%

Publisher:

Abstract:

New-onset impairment of ocular motility causes incomitant strabismus, i.e., a gaze-dependent ocular misalignment. This ocular misalignment causes retinal disparity, that is, a deviation of the spatial position of an image on the retinas of the two eyes, which is a trigger for a vergence eye movement that restores ocular alignment. If the vergence movement fails, the eyes remain misaligned, resulting in double vision. Adaptive processes in response to such incomitant vergence stimuli are poorly understood. In this study, we investigated the physiological oculomotor response of saccadic and vergence eye movements in healthy individuals after shifting gaze from a viewing position without image disparity into a field of view with increased image disparity, thus in conditions mimicking incomitance. Repetitive saccadic eye movements into a visual field with increased stimulus disparity led to a rapid modification of the oculomotor response: (a) saccades showed immediate disconjugacy (p < 0.001), resulting in decreased retinal image disparity at the end of a saccade; (b) vergence kinetics improved over time (p < 0.001). This modified oculomotor response enables a more prompt restoration of ocular alignment in new-onset incomitance.