312 results for image reduction algorith
at Indian Institute of Science - Bangalore - India
Abstract:
Denoising of images in the compressed wavelet domain has potential application in transmission technologies such as mobile communication. In this paper, we present a new image denoising scheme based on restoration of the bit-planes of wavelet coefficients in the compressed domain. It exploits the fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels and the edge information associated with each band. The proposed scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results when compared with the conventional unrestored scheme in the context of error reduction, and it can adapt to situations where the noise level in the image varies. The approach has implications for the restoration of images corrupted by noisy channels. The scheme, in addition to being very flexible, tries to retain all the features of the image, including its edges. The proposed scheme is computationally efficient.
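The abstract gives no implementation details, but its central premise, that noise lives mainly in the fine-scale detail sub-bands, can be sketched with a single-level Haar transform and soft-thresholding. This is a simpler stand-in for the paper's bit-plane restoration, and the threshold value below is arbitrary:

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) sub-bands."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    a = np.empty((LL.shape[0], LL.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    out = np.empty((a.shape[0] * 2, a.shape[1]))
    out[0::2], out[1::2] = a + d, a - d
    return out

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, t=0.1):
    """Shrink only the detail sub-bands, where fine-grained noise concentrates,
    while leaving the approximation (LL) band untouched."""
    LL, LH, HL, HH = haar2d(img)
    return ihaar2d(LL, soft_threshold(LH, t),
                   soft_threshold(HL, t), soft_threshold(HH, t))
```

With `t=0` the round trip is lossless, which is a convenient sanity check that the transform pair is correct.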
Abstract:
Denoising of medical images in the wavelet domain has potential application in transmission technologies such as teleradiology. This technique becomes all the more attractive when we consider progressive transmission in a teleradiology system. The transmitted images are corrupted mainly by noisy channels. In this paper, we present a new real-time image denoising scheme based on limited restoration of the bit-planes of wavelet coefficients. The proposed scheme exploits the fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels and the edge information associated with each sub-band. The desired bit-rate control is achieved by applying the restoration to a limited number of bit-planes, subject to optimal smoothing. The proposed method adapts itself to the preference of the medical expert; a single parameter can be used to balance the preservation of (expert-dependent) relevant details against the degree of noise reduction. The scheme relies on the fact that noise commonly manifests itself as a fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results when compared with the unrestored case in the context of error reduction. It can also adapt to situations where the noise level in the image varies and to the changing requirements of medical experts. The approach has implications for the restoration of medical images in teleradiology systems. The proposed scheme is computationally efficient.
Abstract:
The effect of neutralizing endogenous follicle stimulating hormone (FSH) or luteinizing hormone (LH) with specific antisera on the in vivo and in vitro synthesis of estrogen in the ovary of the cycling hamster was studied. Neutralization of FSH or LH on proestrus resulted in a reduction in the estradiol concentration of the ovary on diestrus-2 and the next proestrus, suggesting an impairment in follicular development. Injection of FSH antiserum at 0900 h of diestrus-2 significantly reduced the ovarian estradiol concentration within 6–7 h. Further, these ovaries, on incubation with testosterone (T) in vitro at 1600 h of the same day or the next day, synthesized significantly lower amounts of estradiol compared to the corresponding control ovaries. Although testosterone itself, in the absence of endogenous FSH, could stimulate estrogen synthesis to some extent, FSH had to be supplemented with T to restore estrogen synthesis to the level seen in control ovaries incubated with T. Lack of FSH thus appeared to affect the aromatization step in the estrogen biosynthetic pathway in the hamster ovary on diestrus-2. In contrast, FSH antiserum given on the morning of proestrus had no effect on the in vivo and in vitro synthesis of estrogen when examined 6–7 h later. The results suggest that there could be a difference in the need for FSH at different times of the cycle. Neutralization of LH either on diestrus-2 or proestrus resulted in a drastic reduction in the estradiol concentration of the ovary. This block was at the level of androgen synthesis, since supplementing testosterone alone in vitro could stimulate estrogen synthesis to a more or less similar extent as in the ovaries of control hamsters.
Abstract:
The X-ray structure of 4a and the MNDO-optimized geometries of related 7-norbornenone derivatives show a clear tilt of the carbonyl bridge away from the C=C double bond. The preferred reduction from the more hindered face of the diester reveals the electronic/electrostatic origin of π-facial selectivity in these systems. The X-ray structure and MNDO calculations reveal the dominance of electronic effects in determining the π-facial selectivity in 4a.
Abstract:
DDHQ/TCC esters 3a–f and 7a–g were prepared either by oxidation of spiroketones 1 with DDQ/o-chloranil or by condensation of the acid chloride with DDHQ/TCC. NaBH4 reduction of the unsaturated DDHQ esters 3a–b and TCC esters 7a–c gave the corresponding allylic alcohols in good yield without any observable 1,4-addition products. Reduction of the saturated esters 3e and 7d gave the corresponding alcohols. Alkyl esters 5 and 6, methyl benzoate and phenyl benzoate remained unaffected under these reduction conditions. In the reduction of compound 7e, which contains both alkyl and TCC esters, the TCC ester is selectively reduced. Reduction of TCC monoesters 7f–g gave the lactones. The observed facile reduction has been rationalised.
Abstract:
Typical image-guided diffuse optical tomographic image reconstruction procedures reduce the number of optical parameters to be reconstructed to the number of distinct regions identified in the structural information provided by the traditional imaging modality. This makes the image reconstruction problem less ill-posed than the traditional underdetermined case. Still, the methods deployed in this case are the same as those used for traditional diffuse optical image reconstruction, which involve a regularization term as well as computation of the Jacobian. A gradient-free Nelder-Mead simplex method is proposed here to perform the image reconstruction procedure and is shown to provide solutions that closely match those obtained using established methods, even with highly noisy data. The proposed method also has the distinct advantage of being more efficient owing to being regularization-free, involving only repeated forward calculations. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
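A minimal sketch of the regularization-free, Jacobian-free idea: fit the few region-wise optical parameters by repeated forward calculations under Nelder-Mead. A toy linear forward model stands in for the real diffusion-equation solver, and the sensitivity matrix and region values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for the diffuse-optical forward model: a fixed
# sensitivity matrix maps three region-wise absorption values to twenty
# detector readings.  The real forward model is a PDE solver.
rng = np.random.default_rng(1)
sensitivity = rng.random((20, 3))          # 20 measurements, 3 tissue regions
mu_true = np.array([0.01, 0.1, 0.02])      # assumed region-wise absorption
data = sensitivity @ mu_true               # noise-free synthetic measurements

def misfit(mu):
    """Data-model mismatch: the only quantity Nelder-Mead needs.

    No Jacobian and no regularization term -- each evaluation is just
    one forward calculation, which is the efficiency argument above."""
    return np.sum((sensitivity @ mu - data) ** 2)

result = minimize(misfit, x0=np.full(3, 0.05), method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-12})
```

On this well-posed toy problem the simplex search recovers the region values to high accuracy; with real noisy data the abstract's claim is that the fit still closely matches regularized solutions.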
Abstract:
In order to reduce the motion artifacts in DSA, non-rigid image registration is commonly used before subtracting the mask from the contrast image. Since DSA registration requires a set of spatially non-uniform control points, a conventional MRF model is not very efficient. In this paper, we introduce the concept of pivotal and non-pivotal control points to address this, and propose a non-uniform MRF for DSA registration. We use quad-trees in a novel way to generate the non-uniform grid of control points. Our MRF formulation produces a smooth displacement field and therefore results in better artifact reduction than that of registering the control points independently. We achieve improved computational performance using pivotal control points without compromising on the artifact reduction. We have tested our approach using several clinical data sets, and have presented the results of quantitative analysis, clinical assessment and performance improvement on a GPU. (C) 2013 Elsevier Ltd. All rights reserved.
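A quad-tree pass that yields a non-uniform grid of control points, dense where the image is busy and sparse where it is flat, might look like the following sketch. The variance threshold and minimum cell size are made-up parameters; the paper's actual subdivision criterion is not given in the abstract:

```python
import numpy as np

def quadtree_points(img, max_var=0.01, min_size=4):
    """Collect non-uniform control-point coordinates by quad-tree subdivision:
    cells with little intensity variation stay coarse, busy cells split further.
    (Illustrative only -- the thresholds are assumptions.)"""
    points = set()

    def split(r0, r1, c0, c1):
        # every cell contributes its four corners as control points
        points.update({(r0, c0), (r0, c1), (r1, c0), (r1, c1)})
        if r1 - r0 <= min_size or c1 - c0 <= min_size:
            return
        if img[r0:r1, c0:c1].var() <= max_var:
            return                      # flat region: no further refinement
        rm, cm = (r0 + r1) // 2, (c0 + c1) // 2
        split(r0, rm, c0, cm); split(r0, rm, cm, c1)
        split(rm, r1, c0, cm); split(rm, r1, cm, c1)

    split(0, img.shape[0], 0, img.shape[1])
    return sorted(points)
```

A flat image yields only the four image corners, while a textured one is refined down to the minimum cell size, which is the spatial non-uniformity the MRF formulation is built on.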
B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy
Abstract:
An iterative image reconstruction technique employing a B-spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transitions and compact support, and are the shortest polynomial splines. Incorporating the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) techniques with a quadratic potential function shows its superiority over the others. The B-spline MAP technique can find applications in several fluorescence microscopy imaging modalities such as selective plane illumination microscopy, localization microscopy and STED. (C) 2015 Author(s).
Abstract:
In big data image/video analytics, we encounter the problem of learning an over-complete dictionary for sparse representation from a large training dataset, which cannot be processed at once because of storage and computational constraints. To tackle the problem of dictionary learning in such scenarios, we propose an algorithm that exploits the inherent clustered structure of the training data and makes use of a divide-and-conquer approach. The fundamental idea behind the algorithm is to partition the training dataset into smaller clusters and learn a local dictionary for each cluster. Subsequently, the local dictionaries are merged to form a global dictionary. Merging is done by solving another dictionary learning problem on the atoms of the locally trained dictionaries. This algorithm is referred to as the split-and-merge algorithm. We show that the proposed algorithm is efficient in its usage of memory and computational complexity, and performs on par with the standard learning strategy, which operates on the entire data at once. As an application, we consider the problem of image denoising. We present a comparative analysis of our algorithm with the standard learning techniques that use the entire database at once, in terms of training and denoising performance. We observe that the split-and-merge algorithm results in a remarkable reduction of training time without significantly affecting the denoising performance.
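The split-and-merge flow can be sketched as follows. For brevity, a truncated SVD stands in for a real sparse dictionary learner such as K-SVD (the abstract does not specify the learner), and the cluster labels are assumed given:

```python
import numpy as np

def local_dictionary(X, n_atoms):
    """Crude per-cluster 'dictionary': the top left singular vectors of the
    cluster's data matrix (samples as columns).  A stand-in for a real
    sparse dictionary learner such as K-SVD."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :n_atoms]

def split_and_merge(X, labels, n_local, n_global):
    """Split: learn a small dictionary per cluster.
    Merge: learn a global dictionary on the pooled local atoms, so the
    expensive learning step never touches the full dataset at once."""
    atoms = np.hstack([local_dictionary(X[:, labels == k], n_local)
                       for k in np.unique(labels)])
    return local_dictionary(atoms, n_global)   # merge step
```

The memory argument shows up directly: the merge step operates on `n_local x n_clusters` atoms rather than on all training samples.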
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth preventing effective utilization of parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
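The stage types the abstract names can be illustrated with a toy NumPy pipeline. Note that NumPy executes each stage as a separate full-image pass, whereas PolyMage's contribution is precisely to fuse and tile such stages automatically:

```python
import numpy as np

def pointwise(img):
    """Point-wise stage: each output pixel depends only on the same input pixel."""
    return np.clip(img * 1.2, 0.0, 1.0)

def stencil(img):
    """Stencil stage: 3x3 box blur -- each output pixel reads a fixed small
    neighbourhood, the access pattern a polyhedral compiler can tile."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out += padded[1 + dr: 1 + dr + img.shape[0],
                          1 + dc: 1 + dc + img.shape[1]]
    return out / 9.0

def reduction(img):
    """Reduction stage: collapse the whole image to a single statistic."""
    return float(img.mean())

# a three-stage pipeline: point-wise -> stencil -> reduction
img = np.random.default_rng(0).random((32, 32))
stat = reduction(stencil(pointwise(img)))
```

Each intermediate here is materialized in full, which is the memory-bandwidth cost that fusion across stages is meant to avoid.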
Abstract:
In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is obtained without referring to their original images (a 'no-reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with an imbalance in the number of samples per quality class depends critically on the input weights and bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalanced conditions for image quality assessment. The experimental results show that the estimated visual quality of the proposed RCGA-ELM emulates the mean opinion score very well. The results are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
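The basic ELM step the abstract builds on, random input weights and biases with analytically solved output weights, can be sketched as follows (the layer size and sigmoid activation are illustrative choices; choosing the random weights better is exactly what KS-ELM and RCGA-ELM address):

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """Extreme learning machine: input weights/biases are drawn at random
    (the step the abstract's selection schemes try to improve), and the
    output weights are solved analytically by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))     # hidden-layer activations
    beta = np.linalg.pinv(H) @ Y               # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

There is no iterative weight training at all: one random draw plus one pseudoinverse, which is what makes ELM fast and also what makes its performance sensitive to the random draw.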
Abstract:
Ce1-xSnxO2 (x = 0.1-0.5) solid solutions and their Pd-substituted analogue have been prepared by a single-step solution combustion method using a tin oxalate precursor. The compounds were characterized by X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), transmission electron microscopy (TEM), and H2 temperature-programmed reduction (TPR) studies. The cubic fluorite structure remained intact up to 50% Sn substitution in CeO2, and the compounds were stable up to 700 °C. The oxygen storage capacity of Ce1-xSnxO2 was found to be much higher than that of Ce1-xZrxO2 due to the accessible Ce4+/Ce3+ and Sn4+/Sn2+ redox couples at temperatures between 200 and 400 °C. Pd2+ ions in Ce0.78Sn0.2Pd0.02O2-delta are highly ionic, and the lattice oxygen of this catalyst is highly labile, leading to low-temperature CO to CO2 conversion. The rate of CO oxidation was 2 μmol g-1 s-1 at 50 °C. NO reduction by CO was observed with 70% N2 selectivity at ~200 °C and 100% N2 selectivity below 260 °C with 1000-5000 ppm NO. Thus, Pd2+-substituted Ce1-xSnxO2 is a superior catalyst compared to Pd2+ ions in CeO2, Ce1-xZrxO2, and Ce1-xTixO2 for low-temperature exhaust applications due to the involvement of the Sn2+/Sn4+ redox couple along with the Pd2+/Pd0 and Ce4+/Ce3+ couples.
Abstract:
Remote sensing provides a lucid and effective means for crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crop cultivated in a particular area. This information has immense potential in planning for further cultivation activities and for optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier can satisfactorily solve all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region. A detailed comparison of algorithms inspired by the social behaviour of insects with a conventional statistical method for crop classification is presented. These include the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. High-resolution satellite imagery has been used for the experiments.
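The conventional statistical baseline, the Maximum Likelihood Classifier, fits a Gaussian per class and assigns each pixel's feature vector to the most likely class. A minimal sketch (the feature dimensionality and the small covariance regularizer are illustrative assumptions):

```python
import numpy as np

class MaximumLikelihoodClassifier:
    """Per-class Gaussian model: each pixel's feature vector goes to the
    class maximizing the Gaussian log-likelihood -- the statistical
    baseline the swarm-based classifiers are compared against."""

    def fit(self, X, y):
        self.classes = np.unique(y)
        self.params = []
        for c in self.classes:
            Xc = X[y == c]
            mu = Xc.mean(axis=0)
            # small ridge keeps the covariance invertible (an assumption)
            cov = np.cov(Xc, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.params.append((mu, np.linalg.inv(cov),
                                np.linalg.slogdet(cov)[1]))
        return self

    def predict(self, X):
        scores = []
        for mu, cov_inv, logdet in self.params:
            d = X - mu
            # Gaussian log-likelihood up to a constant
            scores.append(-0.5 * (np.einsum("ij,jk,ik->i", d, cov_inv, d)
                                  + logdet))
        return self.classes[np.argmax(scores, axis=0)]
```

Unlike PSO or ACO, there is nothing to tune here beyond the Gaussian assumption itself, which is why MLC is the customary reference point in crop-cover studies.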
Abstract:
Birch reduction and reductive methylations of some substituted naphthoic acids have been examined. The factors influencing the mechanism of the reduction process are discussed. Some of the reduced naphthoic acids are useful synthons for synthesis.