281 results for Search image
at Indian Institute of Science - Bangalore - India
Abstract:
In this paper, we present a machine learning approach to measure the visual quality of JPEG-coded images. The features for predicting the perceived image quality are extracted by considering key human visual sensitivity (HVS) factors such as edge amplitude, edge length, background activity and background luminance. Image quality assessment involves estimating the functional relationship between HVS features and subjective test scores. The quality of the compressed images is estimated without referring to their original images (a 'No Reference' metric). Here, the problem of quality estimation is transformed into a classification problem and solved using the extreme learning machine (ELM) algorithm. In ELM, the input weights and the bias values are randomly chosen and the output weights are analytically calculated. The generalization performance of the ELM algorithm for classification problems with imbalance in the number of samples per quality class depends critically on the input weights and the bias values. Hence, we propose two schemes, namely the k-fold selection scheme (KS-ELM) and the real-coded genetic algorithm (RCGA-ELM), to select the input weights and the bias values such that the generalization performance of the classifier is maximized. Results indicate that the proposed schemes significantly improve the performance of the ELM classifier under imbalance conditions for image quality assessment. The experimental results show that the visual quality estimated by the proposed RCGA-ELM tracks the mean opinion score very well. The experimental results are compared with an existing JPEG no-reference image quality metric and the full-reference structural similarity image quality metric.
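The core ELM step described above can be sketched in a few lines. The following toy example (NumPy only) assumes a generic feature matrix X and one-hot quality labels T; it omits the HVS feature extraction and the KS-ELM/RCGA-ELM weight-selection schemes and only illustrates the basic mechanism: random input weights and biases, with output weights obtained analytically through a pseudo-inverse.

import numpy as np

def elm_train(X, T, n_hidden=40, rng=None):
    """Basic ELM: random input weights/biases, analytic output weights."""
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.uniform(-1.0, 1.0, size=(n_features, n_hidden))  # random input weights
    b = rng.uniform(-1.0, 1.0, size=n_hidden)                # random bias values
    H = np.tanh(X @ W + b)                                   # hidden-layer response
    beta = np.linalg.pinv(H) @ T                             # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = np.tanh(X @ W + b)
    return np.argmax(H @ beta, axis=1)  # predicted quality class

# toy usage: 5 HVS-style features, 3 quality classes (random stand-in data)
X = np.random.rand(100, 5)
T = np.eye(3)[np.random.randint(0, 3, 100)]   # one-hot targets
W, b, beta = elm_train(X, T, rng=0)
labels = elm_predict(X, W, b, beta)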
Abstract:
The presence of folded solution conformations in the peptides Boc-Ala-(Aib-Ala)₂-OMe, Boc-Val-(Aib-Val)₂-OMe, Boc-Ala-(Aib-Ala)₃-OMe and Boc-Val-(Aib-Val)₃-OMe has been established by 270 MHz ¹H NMR. Intramolecularly H-bonded NH groups have been identified using the temperature and solvent dependence of NH chemical shifts and paramagnetic-radical-induced broadening of NH resonances. Both pentapeptides adopt 3₁₀-helical conformations possessing 3 intramolecular H-bonds in CDCl₃ and (CD₃)₂SO. The heptapeptides favour helical structures with 5 H-bonds in CDCl₃; in (CD₃)₂SO only 4 H-bonds are readily detected.
Abstract:
Lateral or transaxial truncation of cone-beam data can occur either due to the field-of-view limitation of the scanning apparatus or due to region-of-interest tomography. In this paper, we suggest two new methods to handle lateral truncation in helical scan CT. It is seen that reconstruction with laterally truncated projection data, assuming it to be complete, gives severe artifacts which even penetrate into the field of view. A row-by-row data completion approach using linear prediction is introduced for helical scan truncated data. An extension of this technique, known as the windowed linear prediction approach, is also introduced. The efficacy of the two techniques is shown using simulations with standard phantoms. A quantitative image quality measure of the resulting reconstructed images is used to evaluate the performance of the proposed methods against an extension of a standard existing technique.
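As a rough illustration of row-by-row completion by linear prediction (a generic formulation, not the paper's exact method), the sketch below fits a least-squares autoregressive model to each measured projection row and extrapolates it past the truncation edge; the model order, the amount of extrapolation and the toy sinogram are arbitrary assumptions.

import numpy as np

def ar_extrapolate(row, n_extra, order=10):
    """Extend a 1-D projection row by linear prediction (least-squares AR fit)."""
    x = np.asarray(row, dtype=float)
    # Regression: x[t] ~ sum_k a[k] * x[t-1-k], fitted on the measured samples
    A = np.array([x[t - order:t][::-1] for t in range(order, len(x))])
    y = x[order:]
    a, *_ = np.linalg.lstsq(A, y, rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(np.dot(a, out[-1:-order - 1:-1]))  # predict next sample
    return np.array(out)

def complete_sinogram(sino, n_extra, order=10):
    """Row-by-row completion on the truncated side of a sinogram."""
    return np.vstack([ar_extrapolate(r, n_extra, order) for r in sino])

# toy usage: 180 views, detector truncated to 64 channels, extrapolate 16 more
sino = np.abs(np.random.randn(180, 64)).cumsum(axis=1)
completed = complete_sinogram(sino, n_extra=16)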
Abstract:
A novel method, designated the holographic spectrum reconstruction (HSR) method, is proposed for achieving simultaneous display of the spectrum and image of an object in a single plane. A study of the scaling behaviour of both the spectrum and the image has been carried out, and based on this study it is demonstrated that a lensless coherent optical processor can be realized.
Abstract:
In order to understand the molecular mechanism of the non-oxidative decarboxylation of aromatic acids observed in microbial systems, 2,3-dihydroxybenzoic acid (DHBA) decarboxylase from […] was purified to homogeneity by affinity chromatography. The enzyme (Mr 120 kDa) had four identical subunits (28 kDa each) and was specific for DHBA. It had a pH optimum of 5.2 and a Km of 0.34 mM. The decarboxylation did not require any cofactors, nor did the enzyme have any pyruvoyl group at the active site. The carboxyl group and a hydroxyl group in the ortho-position were required for activity. The preliminary spectroscopic properties of the enzyme are also reported.
Abstract:
Microsomes (105,000 × g sediment) prepared from induced cells of […] were found to hydroxylate progesterone to 11α-hydroxyprogesterone (11α-OHP) in high yields (85-90% in 30 min) in the presence of NADPH and O₂. The pH optimum for the hydroxylase was found to be 7.7; however, for the isolation of active microsomes, grinding of the mycelium should be carried out at pH 8.3. Metyrapone, carbon monoxide, SKF-525A, p-CMB and N-methylmaleimide inhibited the hydroxylase activity, indicating the involvement of a cytochrome P-450 system. The inhibition of the hydroxylase by cytochrome c and the presence of high levels of NADPH-cytochrome c reductase in induced microsomes suggest that the reductase could be one of the components of the hydroxylase system.
Abstract:
A soluble fraction of […] catalyzed the hydroxylation of mandelic acid to p-hydroxymandelic acid. The enzyme had a pH optimum of 5.4 and showed an absolute requirement for Fe²⁺, tetrahydropteridine and NADPH. p-Hydroxymandelate, the product of the enzyme reaction, was identified by paper chromatography, thin-layer chromatography, and UV and IR spectra.
Abstract:
tRNA isolated from […] grown in a medium containing [75Se]sodium selenosulfate was converted to nucleosides and analysed for selenonucleosides on a phosphocellulose column. Upon chromatography, the radioactivity resolved into three peaks. The first peak consisted of free selenium and traces of undigested nucleotides. The second peak was identified as 4-selenouridine by co-chromatography with an authentic sample of 4-selenouridine. The identity of the third peak was not established. The second and third peaks represented 93% and 7%, respectively, of the selenium present in the nucleosides.
Abstract:
In the prediction phase, the hierarchical tree structure obtained from the test image is used to predict each central pixel of the image from its four neighboring pixels. The prediction scheme generates a prediction-error image, to which a wavelet/sub-band coding algorithm can be applied to obtain efficient compression. In the quantization phase, we use a modified SPIHT algorithm to achieve efficiency in memory requirements; the memory constraint plays a vital role in wireless and bandwidth-limited applications. A single reusable list is used instead of the three continuously growing linked lists of SPIHT. This method is error resilient. The performance is measured in terms of PSNR and memory requirements. The algorithm shows good compression performance and significant savings in memory. (C) 2006 Elsevier B.V. All rights reserved.
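A minimal sketch of the prediction step, under the simplifying assumption that each interior pixel is predicted by the mean of its four neighbors (the paper's hierarchical tree structure and the modified SPIHT coder are not reproduced here):

import numpy as np

def neighbor_prediction_error(img):
    """Predict each interior pixel from its four neighbors (mean) and
    return the prediction-error image handed to the sub-band coder."""
    img = img.astype(float)
    pred = img.copy()
    pred[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1] +
                        img[1:-1, :-2] + img[1:-1, 2:]) / 4.0
    return img - pred

# toy usage
img = np.random.randint(0, 256, size=(64, 64))
err = neighbor_prediction_error(img)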
Abstract:
This paper focuses on optimisation algorithms inspired by swarm intelligence for satellite image classification from high-resolution multispectral satellite images. Amongst the multiple benefits and uses of remote sensing, one of the most important has been its use in solving the problem of land cover mapping. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Image classification forms the core of the solution to the land cover mapping problem. No single classifier can satisfactorily classify all the basic land cover classes of an urban region. In both supervised and unsupervised classification methods, evolutionary algorithms have not been exploited to their full potential. This work tackles land cover mapping using Ant Colony Optimisation (ACO) and Particle Swarm Optimisation (PSO), arguably the most popular algorithms in this category. We present the results of classification techniques using swarm intelligence for the problem of land cover mapping for an urban region. High-resolution QuickBird data have been used for the experiments.
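As a hedged illustration of PSO applied to unsupervised spectral classification (a generic formulation, not the authors' specific one), the sketch below lets each particle encode a set of cluster centres in spectral space and minimizes the total distance of pixels to their nearest centre; all parameter values and the toy data are assumptions.

import numpy as np

def pso_cluster_centres(pixels, k=4, n_particles=20, iters=50, seed=0):
    """Toy PSO: each particle encodes k spectral cluster centres; fitness is
    the total distance of pixels to their nearest centre."""
    rng = np.random.default_rng(seed)
    d = pixels.shape[1]
    lo, hi = pixels.min(0), pixels.max(0)
    pos = rng.uniform(lo, hi, size=(n_particles, k, d))
    vel = np.zeros_like(pos)

    def fitness(centres):
        dist = np.linalg.norm(pixels[:, None, :] - centres[None], axis=2)
        return dist.min(axis=1).sum()

    pbest = pos.copy()
    pbest_f = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        f = np.array([fitness(p) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# toy usage: 500 pixels with 4 spectral bands, 4 land-cover clusters
pixels = np.random.rand(500, 4)
centres = pso_cluster_centres(pixels)
labels = np.linalg.norm(pixels[:, None] - centres[None], axis=2).argmin(1)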
Abstract:
Denoising of images in the compressed wavelet domain has potential applications in transmission technologies such as mobile communication. In this paper, we present a new image denoising scheme based on restoration of the bit-planes of wavelet coefficients in the compressed domain. It exploits a fundamental property of the wavelet transform: its ability to analyze the image at different resolution levels, together with the edge information associated with each band. The proposed scheme relies on the fact that noise commonly manifests itself as fine-grained structure in the image, and the wavelet transform allows the restoration strategy to adapt itself to the directional features of edges. The proposed approach shows promising results in terms of error reduction when compared with the conventional unrestored scheme, and it can adapt to situations where the noise level in the image varies. The approach also has implications for the restoration of images degraded by noisy channels. In addition to being very flexible, the scheme retains all the features of the image, including edges, and is computationally efficient.
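The bit-plane restoration itself cannot be reconstructed from the abstract alone; as a stand-in that at least shows subband-wise processing of wavelet coefficients, the sketch below applies standard soft-thresholding per detail subband using PyWavelets (the wavelet, decomposition level and threshold rule are all assumptions, not the authors' scheme).

import numpy as np
import pywt  # PyWavelets

def subband_soft_threshold(img, wavelet="db4", level=2, sigma=10.0):
    """Subband-wise soft-thresholding of wavelet coefficients."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    thr = sigma * np.sqrt(2 * np.log(img.size))  # universal threshold (assumption)
    new_coeffs = [coeffs[0]]                     # keep the approximation band
    for (cH, cV, cD) in coeffs[1:]:
        new_coeffs.append(tuple(pywt.threshold(c, thr, mode="soft")
                                for c in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)

# toy usage
noisy = np.random.rand(128, 128) * 255
denoised = subband_soft_threshold(noisy)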
Abstract:
In positron emission tomography (PET), image reconstruction is a demanding problem: since PET image reconstruction is an ill-posed inverse problem, new methodologies need to be developed. Although previous studies show that incorporating spatial and median priors improves image quality, artifacts such as over-smoothing and streaking remain evident in the reconstructed image. In this work, we use a simple yet powerful technique to tackle the PET image reconstruction problem. The proposed technique is based on the integration of a Bayesian approach with a finite impulse response (FIR) filter. An FIR filter is designed whose coefficients are determined from a surface diffusion model. The resulting reconstructed image is iteratively filtered and fed back to obtain the new estimate. Experiments are performed on a simulated PET system. The results show that the proposed approach is better than the recently proposed MRP algorithm in terms of image quality and normalized mean square error.
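A toy sketch of the filter-and-feed-back idea, using a standard MLEM update and a generic three-tap FIR smoothing kernel in place of the surface-diffusion-derived coefficients; the system matrix and data are simulated at random, so this is an illustration of the structure only.

import numpy as np

def mlem_with_fir_feedback(A, y, n_iter=20, fir=(0.25, 0.5, 0.25)):
    """Toy reconstruction: each MLEM estimate is passed through a small
    FIR smoothing filter and fed back into the next iteration.
    A: system matrix (detectors x voxels), y: measured counts."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0) + 1e-12               # sensitivity image
    kern = np.asarray(fir)
    for _ in range(n_iter):
        proj = A @ x + 1e-12                   # forward projection
        x = x * (A.T @ (y / proj)) / sens      # MLEM update
        x = np.convolve(x, kern, mode="same")  # FIR filtering feedback step
    return x

# toy usage: 64 voxels, 128 detector bins
rng = np.random.default_rng(0)
A = rng.random((128, 64))
truth = np.zeros(64); truth[20:40] = 5.0
y = rng.poisson(A @ truth).astype(float)
est = mlem_with_fir_feedback(A, y)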
Abstract:
In this paper, we develop a multithreaded VLSI processor linear array architecture to render complex environments based on the radiosity approach. The processing elements are identical and multithreaded, and they work in Single Program Multiple Data (SPMD) mode. A new algorithm for the radiosity computations, based on the progressive refinement approach [2], is proposed. Simulation results indicate that the architecture is latency tolerant and scalable. It is shown that a linear array of 128 uni-threaded processing elements sustains a throughput close to 0.4 million patches/sec.
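For reference, the sequential progressive refinement loop that such an architecture parallelizes can be sketched as follows; this is the standard formulation with form factors assumed given, not the paper's new algorithm or its SPMD mapping onto the processor array.

import numpy as np

def progressive_refinement(E, rho, F, n_steps=100):
    """Standard progressive refinement radiosity: repeatedly shoot the
    largest unshot radiosity to all other patches."""
    B = E.copy()             # current radiosity estimate
    dB = E.copy()            # unshot radiosity
    A = np.ones_like(E)      # patch areas (uniform here for simplicity)
    for _ in range(n_steps):
        i = np.argmax(dB * A)                      # patch with most unshot energy
        shoot = rho * F[i] * dB[i] * A[i] / A      # radiosity received by each patch
        shoot[i] = 0.0
        B += shoot
        dB += shoot
        dB[i] = 0.0                                # patch i has shot its energy
    return B

# toy usage: 16 patches with random row-normalised form factors, one emitter
rng = np.random.default_rng(1)
F = rng.random((16, 16)); np.fill_diagonal(F, 0); F /= F.sum(1, keepdims=True)
E = np.zeros(16); E[0] = 1.0
rho = np.full(16, 0.5)        # reflectivities
B = progressive_refinement(E, rho, F)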
Abstract:
The efficacy of the multifractal spectrum as a tool for characterizing images has been studied. The spectrum has been computed for digitized images of the nucleus of human cervical cancer cells, and it was observed that the entire spectrum is almost fully reproduced for a normal cell, while only the right half (q < 0) of the spectrum is reproduced for a cancerous cell. Cells in stages between the two extremes show a shortening of the left half of the spectrum proportional to their condition. The extent of this shortening has been found to be sufficient to permit a classification between three classes of cells at varying distances from a basal cancerous layer: the superficial cells, the intermediate cells and the parabasal cells. This technique may be used for automatic screening of the population while also indicating the stage of malignancy.
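A box-counting sketch of how such a multifractal f(α) spectrum can be estimated from a grey-level image, following the standard Chhabra-Jensen moment method; the box sizes and q range are illustrative, and the paper's exact procedure is not specified in the abstract.

import numpy as np

def multifractal_spectrum(img, qs=np.linspace(-5, 5, 21),
                          box_sizes=(2, 4, 8, 16, 32)):
    """Estimate alpha(q) and f(q) from a grey-level image by box counting."""
    img = img.astype(float)
    img = img / img.sum()                          # normalised measure
    alphas, fs = [], []
    for q in qs:
        a_pts, f_pts, logs = [], [], []
        for s in box_sizes:
            h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
            p = img[:h, :w].reshape(h // s, s, w // s, s).sum(axis=(1, 3))
            p = p[p > 0]
            mu = p ** q / np.sum(p ** q)           # q-weighted measure
            a_pts.append(np.sum(mu * np.log(p)))
            f_pts.append(np.sum(mu * np.log(mu)))
            logs.append(np.log(s / max(img.shape)))
        # slopes of the two sums against log(scale) give alpha(q) and f(q)
        alphas.append(np.polyfit(logs, a_pts, 1)[0])
        fs.append(np.polyfit(logs, f_pts, 1)[0])
    return np.array(alphas), np.array(fs)

# toy usage
img = np.random.rand(128, 128)
alpha, f_alpha = multifractal_spectrum(img)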
Abstract:
This paper presents a low-cost, high-resolution retinal image acquisition system for the human eye. The images acquired by a CMOS image sensor are communicated through the Universal Serial Bus (USB) interface to a personal computer for viewing and further processing. The image acquisition time was estimated to be 2.5 seconds. This system can also be used in telemedicine applications.