Abstract:
A technique is proposed for classifying respiratory volume waveforms (RVW) into normal and abnormal categories of respiratory pathways. The proposed method transforms the temporal sequence into the frequency domain using an orthogonal transform, namely the discrete cosine transform (DCT), and the transformed signal is then pole-zero modelled. A Bayes classifier using the model pole angles as the feature vector performed satisfactorily when a limited number of RVWs recorded under the deep and rapid (DR) manoeuvre were classified.
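As an illustration, a minimal Python sketch of this pipeline, assuming an order-6 autoregressive (all-pole) fit via Yule-Walker equations and a Gaussian naive-Bayes classifier; the model order, the all-pole (rather than full pole-zero) fit, and the classifier variant are assumptions, not the paper's exact configuration:

```python
# Sketch: DCT -> all-pole (AR) model -> pole angles -> Bayes classifier.
# Assumptions: order-6 Yule-Walker AR fit, GaussianNB as the Bayes classifier.
import numpy as np
from scipy.fft import dct
from sklearn.naive_bayes import GaussianNB

def pole_angle_features(waveform, order=6):
    spec = dct(waveform, norm='ortho')                     # orthogonal transform
    r = np.correlate(spec, spec, 'full')[len(spec) - 1:]   # autocorrelation lags
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])                 # Yule-Walker AR coefficients
    poles = np.roots(np.concatenate(([1.0], -a)))          # poles of the AR model
    return np.sort(np.abs(np.angle(poles)))                # pole angles as features

# X_train: list of RVW signals, y_train: 0 = normal, 1 = abnormal (hypothetical)
# clf = GaussianNB().fit([pole_angle_features(w) for w in X_train], y_train)
```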
Abstract:
This paper presents image reconstruction using the fan-beam filtered backprojection (FBP) algorithm with no backprojection weight from truncated projection data completed by windowed linear prediction (WLP). Image reconstruction from truncated projections aims to reconstruct the object accurately from the available limited projection data. Due to the incomplete projection data, the reconstructed image contains truncation artifacts which extend into the region of interest (ROI), making the reconstructed image unsuitable for further use. Data completion techniques have been shown to be effective in such situations. We use the windowed linear prediction technique for projection completion and then use the fan-beam FBP algorithm with no backprojection weight for 2-D image reconstruction. We evaluate the quality of the image reconstructed with the fan-beam FBP algorithm with no backprojection weight after WLP completion.
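A minimal sketch of the completion step only, assuming an order-10 predictor fit by least squares on the samples adjacent to the truncation and a cosine taper window; the paper's actual WLP formulation and its fan-beam FBP weighting details may differ:

```python
# Sketch: complete a truncated projection row by linear prediction.
# Assumptions: least-squares AR predictor, cosine taper toward zero.
import numpy as np

def lp_complete(row, n_missing, order=10, fit_len=64):
    known = row[-fit_len:]                        # samples next to the cut
    X = np.array([known[i:i + order] for i in range(fit_len - order)])
    y = known[order:]
    a = np.linalg.lstsq(X, y, rcond=None)[0]      # least-squares predictor
    ext = list(known[-order:])
    for _ in range(n_missing):
        ext.append(np.dot(ext[-order:], a))       # recursive extrapolation
    tail = np.array(ext[order:])
    tail *= np.cos(np.linspace(0, np.pi / 2, n_missing))  # window: taper to zero
    return np.concatenate([row, tail])
```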
Abstract:
Earthquakes cause massive road damage, which in turn has adverse effects on society. Previous studies have quantified the damage caused to residential and commercial buildings; however, not many studies have been conducted to quantify road damage caused by earthquakes. In this study, an attempt has been made to propose a new scale to classify and quantify road damage due to earthquakes based on data collected from major earthquakes in the past. The proposed classification for road damage due to earthquakes is called the road damage scale (RDS). Earthquake details such as magnitude, distance of road damage from the epicenter, focal depth, and photographs of damaged roads have been collected from various sources with reported modified Mercalli intensity (MMI). The widely used MMI scale is found to be inadequate for clearly defining road damage. The proposed RDS is applied to various reported cases of road damage, which are reclassified as per the RDS. The correlation between RDS and the earthquake parameters of magnitude, epicenter distance, hypocenter distance, and the combination of magnitude with epicenter and hypocenter distance has been studied using the available data. It is observed that the proposed RDS correlates better with the available earthquake data than the MMI scale does. Among the several correlations, the one between RDS and the combination of magnitude and epicenter distance is the most appropriate. A summary of these correlations, their limitations, and the applicability of the proposed scale to forecast road damage and to carry out vulnerability analysis in urban areas is presented in the paper.
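A small sketch of how such a correlation could be computed; the data values below are hypothetical placeholders, and the functional form (linear in magnitude and log distance) is an assumption, not the paper's fitted model:

```python
# Sketch: correlate RDS grade with magnitude and epicentral distance.
import numpy as np

M = np.array([6.1, 6.8, 7.2, 7.9, 6.5])       # magnitudes (hypothetical)
R = np.array([40.0, 25.0, 60.0, 15.0, 80.0])  # epicentral distance, km (hypothetical)
rds = np.array([2, 3, 3, 5, 1])               # observed RDS grades (hypothetical)

A = np.column_stack([M, np.log10(R), np.ones_like(M)])
coef, *_ = np.linalg.lstsq(A, rds, rcond=None)
pred = A @ coef
corr = np.corrcoef(pred, rds)[0, 1]           # goodness of the combined fit
print(coef, corr)
```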
Abstract:
Editors' note: Flexible, large-area display and sensor arrays are finding growing applications in multimedia and future smart homes. This article first analyzes and compares current flexible devices, then discusses the implementation, requirements, and testing of flexible sensor arrays. —Jiun-Lang Huang (National Taiwan University) and Kwang-Ting (Tim) Cheng (University of California, Santa Barbara)
Abstract:
Fusion of multi-sensor imaging data enables a synergetic interpretation of complementary information obtained by sensors of different spectral ranges. Multi-sensor data of diverse spectral, spatial and temporal resolutions require advanced numerical techniques for analysis and interpretation. This paper reviews ten advanced pixel-based image fusion techniques: component substitution (COS), local mean and variance matching, modified IHS (Intensity Hue Saturation), fast Fourier transform-enhanced IHS, Laplacian pyramid, local regression, smoothing filter (SF), Sparkle, SVHC and synthetic variable ratio. The above techniques were tested on IKONOS data (panchromatic band at 1 m spatial resolution and four multispectral bands at 4 m spatial resolution). Evaluation of the fused results through various accuracy measures revealed that the SF and COS methods produce images closest to what the corresponding multispectral sensor would observe at the highest resolution level (1 m).
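To make the pixel-based fusion idea concrete, a sketch of one simple component-substitution-style scheme, where the panchromatic band replaces a mean-intensity component of the upsampled multispectral bands; this ratio injection is an illustrative stand-in, not any of the ten specific variants the paper evaluates:

```python
# Sketch: inject panchromatic spatial detail into multispectral bands.
import numpy as np

def ratio_fuse(ms, pan):
    """ms: (bands, H, W) multispectral upsampled to pan's grid; pan: (H, W)."""
    intensity = ms.mean(axis=0)             # crude intensity component
    gain = pan / (intensity + 1e-6)         # avoid division by zero
    return ms * gain                        # detail-injected fused bands
```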
Abstract:
In this paper, we give a brief review of pattern classification algorithms based on discriminant analysis. We then apply these algorithms to classify movement direction based on multivariate local field potentials recorded from a microelectrode array in the primary motor cortex of a monkey performing a reaching task. We obtain prediction accuracies between 55% and 90% using the different methods, which are significantly above the chance level of 12.5%.
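A minimal sketch of this kind of decoding, assuming linear discriminant analysis with cross-validation on an eight-direction task (hence the 12.5% chance level); the feature extraction and data shown are hypothetical, and the paper compares several discriminant variants:

```python
# Sketch: discriminant-analysis decoding of movement direction from LFP features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: 160 trials x 32 LFP features, 8 directions.
X = np.random.randn(160, 32)
y = np.repeat(np.arange(8), 20)
acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
print(f"decoding accuracy: {acc:.2f} (chance = 1/8 = 0.125)")
```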
Abstract:
Proving the unsatisfiability of propositional Boolean formulas has applications in a wide range of fields. Minimal Unsatisfiable Sets (MUS) are signatures of the property of unsatisfiability in formulas, and our understanding of these signatures can be very helpful in answering various algorithmic and structural questions relating to unsatisfiability. In this paper, we explore some combinatorial properties of MUS and use them to devise a classification scheme for MUS. We also derive bounds on the sizes of MUS in Horn, 2-SAT and 3-SAT formulas.
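For readers unfamiliar with MUS, a sketch of the standard deletion-based extraction algorithm, using the python-sat (pysat) package; this is only an illustration of the object being studied, not a method from the paper, whose contribution is combinatorial:

```python
# Sketch: deletion-based extraction of a Minimal Unsatisfiable Set (MUS).
from pysat.solvers import Glucose3

def is_unsat(clauses):
    with Glucose3(bootstrap_with=clauses) as s:
        return not s.solve()

def deletion_mus(clauses):
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]    # try dropping clause i
        if is_unsat(trial):
            core = trial                   # clause i was redundant
        else:
            i += 1                         # clause i is necessary
    return core

# Example: an unsatisfiable formula over variable 1
print(deletion_mus([[1], [-1], [1, -1]]))  # -> [[1], [-1]]
```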
Abstract:
In this paper, we consider the problem of time series classification. Using piecewise linear interpolation, various novel kernels are obtained which can be used with support vector machines for designing classifiers capable of deciding the class of a given time series. The approach is general and is applicable in many scenarios. We apply the method to the task of online Tamil handwritten character recognition with promising results.
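A simplified sketch of the idea, assuming each variable-length series is piecewise-linearly interpolated onto a common grid and an RBF kernel is applied to the interpolants; the paper derives richer kernels from the interpolation, so this stand-in only illustrates the construction:

```python
# Sketch: a time-series kernel built from piecewise linear interpolation.
import numpy as np
from sklearn.svm import SVC

GRID = np.linspace(0.0, 1.0, 64)

def resample(series):
    t = np.linspace(0.0, 1.0, len(series))
    return np.interp(GRID, t, series)       # piecewise linear interpolant

def pl_kernel(A, B, gamma=1.0):
    RA = np.array([resample(a) for a in A])
    RB = np.array([resample(b) for b in B])
    d2 = ((RA[:, None, :] - RB[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# K = pl_kernel(train_series, train_series)            # hypothetical data
# clf = SVC(kernel='precomputed').fit(K, labels)
```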
Abstract:
Image segmentation is formulated as a stochastic process whose invariant distribution is concentrated at points of the desired region. By choosing multiple seed points, different regions can be segmented. The algorithm is based on the theory of time-homogeneous Markov chains and has been largely motivated by the technique of simulated annealing. The method proposed here has been found to perform well on clean as well as noisy real-world images while being computationally far less expensive than stochastic optimisation techniques.
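A toy sketch in the same spirit: stochastic region growing from a seed with an acceptance rule at fixed temperature, so the chain is time-homogeneous and concentrates on pixels similar to the seed. The transition and acceptance design here is an illustrative assumption, not the paper's chain:

```python
# Sketch: stochastic region growing with an annealing-style acceptance rule.
import numpy as np

def stochastic_segment(img, seed, steps=20000, temp=0.1):
    rng = np.random.default_rng(0)
    region = [seed]
    in_region = {seed}
    mu = float(img[seed])                       # seed intensity as region model
    for _ in range(steps):
        y, x = region[rng.integers(len(region))]
        dy, dx = rng.integers(-1, 2, size=2)    # propose a neighbouring pixel
        ny, nx = y + dy, x + dx
        if (ny, nx) in in_region or not (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]):
            continue
        cost = abs(float(img[ny, nx]) - mu)     # dissimilarity to the seed
        if rng.random() < np.exp(-cost / temp): # fixed-temperature acceptance
            region.append((ny, nx))
            in_region.add((ny, nx))
    return in_region                            # call once per seed for multiple regions
```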
Abstract:
Conventional encryption techniques are usually applicable to text data and often unsuited to encrypting multimedia objects, for two reasons. Firstly, the huge sizes associated with multimedia objects make conventional encryption computationally costly. Secondly, multimedia objects come with massive redundancies which are useful in avoiding encryption of the objects in their entirety. Hence a class of encryption techniques devoted to encrypting multimedia objects like images has been developed. These techniques make use of the fact that the data comprising multimedia objects like images can in general be segregated into two disjoint components, namely salient and non-salient. While the former component contributes to the perceptual quality of the object, the latter only adds minor details to it. In the context of images, the salient component is often much smaller in size than the non-salient component. Encryption effort is considerably reduced if only the salient component is encrypted while leaving the other component unencrypted. A key challenge is to find means to achieve a desirable segregation so that the unencrypted component does not reveal any information about the object itself. In this study, an image encryption approach that uses fractal structures known as space-filling curves in order to reduce the encryption overhead is presented. In addition, the approach also enables a high-quality lossy compression of images.
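A heavily simplified sketch of selective encryption along a space-filling curve. A Z-order (Morton) curve stands in for the paper's fractal curve, the "salient" component is approximated by the most significant bit planes, and the XOR keystream is a toy stand-in for a real cipher; none of these choices are the paper's:

```python
# Sketch: encrypt only the salient (MSB) component along a space-filling curve.
import numpy as np

def morton_coords(n):
    """Pixel coordinates of an n x n image (n a power of two) in Z-curve order."""
    d = np.arange(n * n)
    x = np.zeros(n * n, dtype=np.int64)
    y = np.zeros(n * n, dtype=np.int64)
    for b in range(int(np.log2(n))):
        x |= ((d >> (2 * b)) & 1) << b          # de-interleave even bits
        y |= ((d >> (2 * b + 1)) & 1) << b      # de-interleave odd bits
    return y, x

def encrypt_salient(img, key=1234):
    n = img.shape[0]                            # assume square, power-of-two size
    ys, xs = morton_coords(n)
    # Toy keystream; a real system would use a proper stream cipher here.
    stream = np.random.default_rng(key).integers(0, 256, n * n, dtype=np.uint8)
    out = img.copy()
    pix = out[ys, xs]                           # pixels in curve order
    salient = pix & 0xF0                        # 4 MSB planes = salient part
    out[ys, xs] = (salient ^ (stream & 0xF0)) | (pix & 0x0F)  # LSBs untouched
    return out
```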
Abstract:
The following topics were dealt with: document analysis and recognition; multimedia document processing; character recognition; document image processing; cheque processing; form processing; music processing; document segmentation; electronic documents; character classification; handwritten character recognition; information retrieval; postal automation; font recognition; Indian language OCR; handwriting recognition; performance evaluation; graphics recognition; oriental character recognition; and word recognition.
Abstract:
Preferential accumulation and agglomeration kinetics of nanoparticles suspended in an acoustically levitated water droplet under radiative heating have been studied. Particle image velocimetry performed to map the internal flow field shows a single-cell recirculation with increasing strength for decreasing viscosities. Infrared thermography and high-speed imaging show details of the heating process for various concentrations of nanosilica droplets. The initial stage of heating is marked by fast vaporization of the liquid and a sharp temperature rise. Following this stage, aggregation of nanoparticles is seen, resulting in various structure formations. At low concentrations, a bowl structure of the droplet is dominant, maintained at a constant temperature. At high concentrations, the viscosity of the solution increases, leading to rotation about the levitator axis due to the dominance of centrifugal motion. Such complex fluid motion inside the droplet due to acoustic streaming eventually results in the formation of a ring structure. This horizontal ring eventually reorients itself due to an imbalance of acoustic forces on the ring, exposing a larger area for laser absorption and a subsequent sharp temperature rise.
Abstract:
Existing approaches to digital halftoning of images are based primarily on thresholding. We propose a general framework for image halftoning where some function of the output halftone tracks another function of the input gray-tone. This approach is shown to unify most existing algorithms and to provide useful insights. Further, the new interpretation allows us to remedy problems in existing algorithms such as error diffusion, and subsequently to achieve halftones having superior quality. The very general nature of the proposed method is an advantage, since it offers a wide choice of the three filters and the update rule. An interesting product of this framework is that equally good, or better, halftones can be obtained by thresholding a noise process instead of the image itself.
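As a concrete instance of the tracking view, a sketch of classical Floyd-Steinberg error diffusion, where the diffused error makes the output halftone track the input gray-tone; the filter taps below are the standard ones, and the paper's generalized filters and update rule go beyond this special case:

```python
# Sketch: Floyd-Steinberg error diffusion as a tracking halftoner.
import numpy as np

def floyd_steinberg(gray):
    """gray: 2-D float array in [0, 1]; returns a binary halftone."""
    img = gray.astype(float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            out[y, x] = 1.0 if img[y, x] >= 0.5 else 0.0  # threshold
            err = img[y, x] - out[y, x]                   # tracking error
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16             # diffuse right
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16     # diffuse down-left
                img[y + 1, x] += err * 5 / 16             # diffuse down
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16     # diffuse down-right
    return out
```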