65 results for post-processing
at Indian Institute of Science, Bangalore, India
Abstract:
This paper addresses the problem of resolving ambiguities in frequently confused online Tamil character pairs by employing script-specific algorithms as a post-classification step. Robust structural cues and temporal information of the preprocessed character are extensively utilized in the design of these algorithms. The methods are quite robust in automatically extracting the discriminative sub-strokes of confused characters for further analysis. Experimental validation on the IWFHR database indicates error rates of less than 3% for the confused characters. These post-processing steps therefore have good potential to improve the performance of online Tamil handwritten character recognition.
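The abstract's idea of extracting a discriminative sub-stroke and using its temporal/structural cues can be sketched generically. The following is a minimal illustration with assumed details (uniform arc-length resampling, the final quarter of the stroke as the sub-stroke, and mean tail direction as the cue); the paper's actual algorithms are script-specific and more elaborate:

```python
import math

def resample(points, n=32):
    """Uniformly resample a stroke (list of (x, y) points) by arc length."""
    dists = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dists.append(dists[-1] + math.hypot(x1 - x0, y1 - y0))
    total = dists[-1]
    out = []
    for i in range(n):
        target = total * i / (n - 1)
        j = 0
        while j < len(dists) - 2 and dists[j + 1] < target:
            j += 1
        seg = (dists[j + 1] - dists[j]) or 1.0
        t = (target - dists[j]) / seg
        x = points[j][0] + t * (points[j + 1][0] - points[j][0])
        y = points[j][1] + t * (points[j + 1][1] - points[j][1])
        out.append((x, y))
    return out

def final_substroke_direction(points, frac=0.25):
    """Mean writing direction (radians) over the last `frac` of the stroke."""
    pts = resample(points)
    tail = pts[int(len(pts) * (1 - frac)):]
    dx = tail[-1][0] - tail[0][0]
    dy = tail[-1][1] - tail[0][1]
    return math.atan2(dy, dx)

def disambiguate(points, angle_a, angle_b):
    """Assign the confused pair's class A or B by whichever prototype
    tail direction is angularly closer (hypothetical prototypes)."""
    theta = final_substroke_direction(points)
    da = abs(math.atan2(math.sin(theta - angle_a), math.cos(theta - angle_a)))
    db = abs(math.atan2(math.sin(theta - angle_b), math.cos(theta - angle_b)))
    return "A" if da < db else "B"
```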
Abstract:
We address the problem of speech enhancement in real-world noisy scenarios. We propose to solve the problem in two stages, the first comprising a generalized spectral subtraction technique, followed by a sequence of perceptually motivated post-processing algorithms. The role of the post-processing algorithms is to compensate for the effects of noise and to suppress any artifacts created by the first-stage processing. The key post-processing mechanisms are aimed at suppressing musical noise, enhancing the formant structure of voiced speech, and denoising the linear-prediction residual. The parameter values in the techniques are fixed optimally by experimentally evaluating the enhancement performance as a function of the parameters. We used the Carnegie Mellon University Arctic database for our experiments and considered three real-world noise types: fan noise, car noise, and motorbike noise. The enhancement performance was evaluated by conducting listening experiments on 12 subjects. For positive signal-to-noise ratios (SNRs), the listeners reported a clear improvement in perceived quality over the noisy signal (a mean-opinion-score (MOS) improvement of 0.5 on average). For negative SNRs, however, the improvement was found to be marginal.
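The first stage can be sketched as textbook magnitude spectral subtraction with over-subtraction and a spectral floor (the floor itself already limits musical noise). This is a generic illustration with assumed parameters (`alpha`, `beta`, frame/hop sizes), not the paper's exact generalized formulation:

```python
import numpy as np

def spectral_subtraction(noisy, noise_only, frame=256, hop=128, alpha=2.0, beta=0.01):
    """Subtract an averaged noise magnitude spectrum from each windowed frame,
    keep the noisy phase, and overlap-add the result."""
    window = np.hanning(frame)
    # Average noise magnitude spectrum from a noise-only segment.
    noise_frames = [noise_only[i:i + frame] * window
                    for i in range(0, len(noise_only) - frame + 1, hop)]
    noise_mag = np.mean([np.abs(np.fft.rfft(f)) for f in noise_frames], axis=0)

    out = np.zeros(len(noisy))
    for i in range(0, len(noisy) - frame + 1, hop):
        spec = np.fft.rfft(noisy[i:i + frame] * window)
        mag, phase = np.abs(spec), np.angle(spec)
        clean = mag - alpha * noise_mag          # over-subtraction
        clean = np.maximum(clean, beta * mag)    # spectral floor
        out[i:i + frame] += np.fft.irfft(clean * np.exp(1j * phase), frame)
    return out
```

Feeding a pure-noise signal through this sketch should remove most of its energy, since nearly every bin falls below the subtracted noise estimate and is floored.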
Abstract:
In this paper we propose a post-processing technique for a spectrogram-diffusion-based harmonic/percussion decomposition algorithm. The proposed technique removes harmonic instrument leakage in the percussion-enhanced outputs of the baseline algorithm. It uses median filtering and an adaptive detection of percussive segments in subbands, followed by piecewise signal reconstruction using envelope properties, to ensure that percussion is enhanced while harmonic leakage is suppressed. A new binary mask is created for the percussion signal which, when applied to the original signal, improves harmonic-versus-percussion separation. We compare our algorithm with two recent techniques and show that, on a database of polyphonic Indian music, the post-processing algorithm improves the harmonic-versus-percussion decomposition significantly.
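The median-filtering component can be illustrated with the standard harmonic/percussive idea: medians along time preserve horizontal (harmonic) ridges, medians along frequency preserve vertical (percussive) ridges. This is a generic sketch of that principle, not the paper's adaptive subband method:

```python
import numpy as np
from scipy.ndimage import median_filter

def percussive_mask(spec_mag, h_len=17, p_len=17):
    """Binary mask selecting percussive energy in a magnitude spectrogram
    (rows = frequency bins, columns = time frames)."""
    harmonic = median_filter(spec_mag, size=(1, h_len))    # smooth over time
    percussive = median_filter(spec_mag, size=(p_len, 1))  # smooth over frequency
    return (percussive >= harmonic).astype(float)
```

On a toy spectrogram with one steady tone (a horizontal line) and one broadband hit (a vertical line), the mask keeps the hit and rejects the tone.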
Abstract:
While it is well known that extremely long low-density parity-check (LDPC) codes perform exceptionally well in error-correction applications, short-length codes are preferable in practice. However, short-length LDPC codes suffer from performance degradation owing to graph-based impairments such as short cycles, trapping sets, and stopping sets in the bipartite graph of the LDPC matrix. In particular, performance degradation at moderate to high Eb/N0 is caused by oscillations in the bit-node a posteriori probabilities induced by short cycles and trapping sets. In this study, a computationally efficient algorithm is proposed to improve the performance of short-length LDPC codes at moderate to high Eb/N0. The algorithm makes use of the information generated by the belief propagation (BP) algorithm in the iterations before a decoding failure occurs. Using this information, a reliability-based estimation is performed at each bit node to supplement the BP algorithm. The proposed algorithm gives an appreciable coding gain compared with BP decoding for LDPC codes of rate 1/2 or less. The coding gains are modest to significant for regular LDPC codes optimised for bipartite-graph conditioning, and large for unoptimised codes. Hence, this algorithm is useful for relaxing some stringent constraints on the graphical structure of the LDPC code and for developing hardware-friendly designs.
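One simple way to exploit previous-iteration information after a BP failure is to damp the oscillations by averaging each bit node's a posteriori LLR over the last few iterations before taking hard decisions. This is a hedged sketch of that general idea, not the paper's specific reliability estimator:

```python
import numpy as np

def posterior_average_decision(llr_history, k=5):
    """Fallback hard decision after a BP decoding failure.
    llr_history: array (n_iterations, n_bits) of a posteriori LLRs recorded
    during BP. Averaging the last k iterations damps the oscillations
    induced by short cycles and trapping sets."""
    avg = np.mean(llr_history[-k:], axis=0)
    return (avg < 0).astype(int)   # negative LLR -> bit 1
```

For a bit whose LLR oscillates between +3 and -1, the average is still positive, so the decision is 0 even if the final iteration alone would have flipped it.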
Abstract:
The Modified Crack Closure Integral (MCCI) technique, based on Irwin's crack closure integral concept, is very effective for estimating strain energy release rates G in individual as well as mixed-mode configurations in linear elastic fracture mechanics problems. In a finite element approach, MCCI can be evaluated in the post-processing stage in terms of nodal forces and displacements near the crack tip. The MCCI expressions are, however, element-dependent and require a systematic derivation using stress and displacement distributions in the crack-tip elements. Earlier, the present authors proposed a general procedure for deriving MCCI expressions for 3-dimensional (3-d) crack problems modelled with 8-noded brick elements, along with a concept of sub-area integration to estimate strain energy release rates at a large number of points along the crack front. In the present paper a similar procedure is adopted to derive MCCI expressions for 3-d cracks modelled with 20-noded brick elements. Numerical results are presented for centre-crack tension and edge-crack shear specimens in thick slabs, comparing the present results with those available in the literature.
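In its simplest (lowest-order) form, the crack closure idea reduces to one nodal force times one crack-opening displacement per mode; the element-dependent 20-noded brick expressions the paper derives are weighted sums of such products. A minimal sketch of the one-term version:

```python
def mcci_release_rate(nodal_force, cod, closed_area):
    """One-term modified crack closure integral, G ~ F * du / (2 * dA):
    nodal_force  - force at the crack-tip node resisting opening,
    cod          - crack-opening displacement of the node just behind the tip,
    closed_area  - area virtually closed (element length x width along the
                   crack front in 3-D).
    Use components normal to the crack plane for mode I, tangential
    components for modes II/III."""
    return nodal_force * cod / (2.0 * closed_area)
```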
Abstract:
In this paper, we present an unrestricted Kannada online handwritten character recognizer that is viable for real-time applications. It handles Kannada and Indo-Arabic numerals, punctuation marks, and special symbols such as $, &, and #, apart from all the aksharas of the Kannada script. The dataset used contains the handwriting of 69 people from four different locations, making the recognition writer-independent. We found that for the DTW classifier, using smoothed first derivatives as features raised the performance to 89% compared with 85% for preprocessed co-ordinates, but was too inefficient in terms of time. To overcome this, we used Statistical Dynamic Time Warping (SDTW) and achieved 46-times-faster classification with comparable accuracy (88%), making it fast enough for practical applications. The accuracies reported are raw symbol recognition results from the classifier, so there is good scope for improvement in actual applications, where domain constraints such as a fixed vocabulary, language models, and post-processing can be employed. A working demo is also available on a tablet PC for recognition of Kannada words.
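The baseline DTW classifier mentioned above rests on the classic dynamic-programming alignment of two feature sequences. A minimal, generic implementation (Euclidean local cost, no windowing or the statistical extensions of SDTW):

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two sequences of equal-length
    feature vectors, O(len(a) * len(b)) time and memory."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two feature vectors.
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]
```

Because the warping path may repeat frames, a sequence with a duplicated sample still aligns to its original at zero cost.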
Guided Wave based Damage Detection in a Composite T-joint using 3D Scanning Laser Doppler Vibrometer
Abstract:
Composite T-joints are commonly used in modern composite airframes, pressure vessels, piping structures, and multi-cell thin-walled structures, mainly to increase the bending strength of the joint and to prevent buckling of plates and shells. Here we report a detailed study on the propagation of guided ultrasonic wave modes in a composite T-joint and their interactions with delamination in the co-cured, co-bonded flange. A well-designed guiding path is employed wherein the waves undergo a two-step mode conversion process: one due to the web and joint filler on the back face of the flange, and the other due to the delamination edges close to the accessible surface of the flange. A 3D laser Doppler vibrometer is used to obtain the three components of surface displacements/velocities on the accessible face of the flange, with the waves launched by a piezoceramic wafer bonded to the back surface of the flange. What is novel in the proposed method is that the location of any change in material/geometric properties can be traced by computing a frequency-domain power flow along a scan line, which can be chosen over a grid either during the scan or during off-line post-processing of the scan data. The proposed technique eliminates the need for baseline data and for disassembly of the structure for interrogation.
Abstract:
A CMOS gas sensor array platform with digital read-out containing 27 sensor pixels and a reference pixel is presented. A signal-conditioning circuit at each pixel includes digitally programmable gain stages for sensor signal amplification followed by a second-order continuous-time delta-sigma modulator for digitization. Each sensor pixel can be functionalized with a distinct sensing material that facilitates transduction based on impedance change. The impedance spectrum (up to 10 kHz) of the sensor is obtained off-chip by computing the fast Fourier transform of the sensor and reference pixel outputs; the reference pixel also compensates for the phase shift introduced by the signal-processing circuits. The chip also contains a temperature sensor with digital readout for ambient temperature measurement. A sensor pixel is functionalized with polycarbazole conducting polymer for sensing volatile organic gases, and measurement results are presented. The chip is fabricated in a 0.35 μm CMOS technology, requires a single post-processing step for functionalization, and consumes 57 mW from a 3.3 V supply.
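The off-chip step of computing a spectrum from the sensor and reference outputs can be sketched as a simple FFT ratio. This is an illustrative simplification (assumed digitized time-domain outputs rather than delta-sigma bitstreams); taking the ratio cancels any response common to both channels, which is how the reference pixel compensates the read-out phase shift:

```python
import numpy as np

def impedance_spectrum(sensor_out, reference_out):
    """Estimate the sensor's complex frequency response as the per-bin
    ratio of the FFTs of the sensor and reference pixel outputs."""
    s = np.fft.rfft(sensor_out)
    r = np.fft.rfft(reference_out)
    return s / r   # complex ratio per frequency bin
```

For a sensor output that is simply half the reference output, every bin of the ratio is 0.5 with zero phase.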
Abstract:
Isolated magnetic nanowires have been studied extensively, and the magnetization reversal mechanism in these systems is well understood. When such nanowires are joined together in different architectures, however, they behave differently and can exhibit novel properties. Using this approach, one can engineer network architectures to obtain artificial anisotropy. Here, we report six-fold anisotropy obtained by joining magnetic nanowires into a hexagonal network. For this study, we also benchmark the widely used micromagnetic packages OOMMF, Nmag, and LLG-simulator. Further, we propose a local hysteresis method based on post-processing the spatial magnetization information. With this approach we obtain the hysteresis of individual nanowires, which explains the six-fold anisotropy and the reversal mechanism within the hexagonal networks.
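The local-hysteresis post-processing can be sketched as masked spatial averaging: for each applied-field step of the simulation, average the stored magnetization component over a chosen region (e.g. one wire segment of the hexagonal network) to obtain that segment's own loop. A minimal version with an assumed data layout:

```python
import numpy as np

def local_hysteresis(mag_frames, region):
    """Local hysteresis loop from spatially resolved magnetization data.
    mag_frames: array (n_field_steps, ny, nx) holding one magnetization
                component per field step (layout assumed for illustration);
    region:     boolean mask (ny, nx) selecting the nanowire segment."""
    return np.array([frame[region].mean() for frame in mag_frames])
```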
Abstract:
In this article, we prove convergence of weakly penalized adaptive discontinuous Galerkin methods. Unlike other works, we derive the contraction property for various discontinuous Galerkin methods assuming only that the stabilizing parameters are large enough to stabilize the method. A central idea in the analysis is to construct an auxiliary solution from the discontinuous Galerkin solution by a simple post-processing step. Based on this auxiliary solution, we define the adaptive algorithm that leads to convergence of the adaptive discontinuous Galerkin methods.
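The post-processing idea can be illustrated in a generic, simplified form (the paper's actual operator depends on the particular DG method and penalization): a conforming auxiliary solution is obtained from the DG solution by averaging its nodal values,

```latex
u_h^{*}(z) \;=\; \frac{1}{\#\omega_z} \sum_{K \in \omega_z} \bigl(u_h|_K\bigr)(z),
\qquad \omega_z := \{\, K : z \in \overline{K} \,\},
```

where $z$ ranges over the Lagrange nodes of the mesh. The averaged function $u_h^{*}$ lies in the conforming subspace, and its distance to $u_h$ is controlled by the inter-element jumps of $u_h$.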
Abstract:
This paper presents a GPU implementation of normalized cuts for the road extraction problem using panchromatic satellite imagery. The roads are extracted in three stages: pre-processing, image segmentation, and post-processing. Initially, the image is pre-processed to improve tolerance by reducing the clutter (which mostly represents buildings, vegetation, and fallow regions). The road regions are then extracted using the normalized cuts algorithm, a graph-based partitioning approach that focuses on extracting the global impression (perceptual grouping) of an image rather than local features. For the segmented image, post-processing is carried out using the morphological operations of erosion and dilation. Finally, the road-extracted image is overlaid on the original image. A GPGPU (general-purpose graphics processing unit) approach has been adopted to implement the same algorithm on the GPU for fast processing, and a performance comparison of the proposed GPU implementation of normalized cuts with the earlier CPU implementation is presented. From the results, we conclude that the computational gain in processing time of the GPU implementation grows as the image size increases. A qualitative and quantitative assessment of the segmentation results is also presented.
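The erosion-plus-dilation post-processing step amounts to a morphological opening of the segmented road mask: erosion deletes small spurious blobs, and dilation restores the width of the roads that survive. A minimal sketch with an assumed 3x3 structuring element:

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation

def morphological_cleanup(road_mask, iterations=1):
    """Morphological opening of a boolean road mask: erode to remove
    small spurious components, then dilate to restore road width."""
    structure = np.ones((3, 3), dtype=bool)
    eroded = binary_erosion(road_mask, structure, iterations=iterations)
    return binary_dilation(eroded, structure, iterations=iterations)
```

An isolated single-pixel speck is removed, while a solid road-like block is recovered at its original extent.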
Abstract:
In this work, we describe a system that recognises open-vocabulary, isolated, online handwritten Tamil words, and extend it to recognize a paragraph of writing. We explain in detail each step involved in the process: segmentation, preprocessing, feature extraction, classification, and bigram-based post-processing. On our database of 45,000 handwritten words collected on a tablet PC, we have obtained symbol-level accuracies of 78.5% and 85.3% without and with post-processing using symbol-level language models, respectively. Word-level accuracies for the same are 40.1% and 59.6%. A line- and word-level segmentation strategy is proposed, which gives promising results of 100% line segmentation and 98.1% word segmentation accuracy on our initial trials of 40 handwritten paragraphs. The two modules have been combined to obtain a full-fledged page recognition system for online handwritten Tamil data. To the knowledge of the authors, this is the first attempt at recognition of open-vocabulary, online handwritten paragraphs in any Indian language.
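Bigram-based post-processing of this kind is typically a Viterbi search over the classifier's per-position candidate symbols, combining classifier scores with symbol-level bigram log-probabilities. A generic sketch (the paper's exact scoring and candidate lists are not specified here):

```python
import math

def bigram_decode(candidates, bigram):
    """Rescore classifier outputs with a symbol-level bigram model.
    candidates: per position, a list of (symbol, classifier_log_prob);
    bigram:     dict mapping (prev_symbol, symbol) -> log transition prob.
    Returns the best-scoring symbol sequence (Viterbi search)."""
    paths = {sym: (lp, [sym]) for sym, lp in candidates[0]}
    for options in candidates[1:]:
        new_paths = {}
        for sym, lp in options:
            best = max(
                (score + bigram.get((prev, sym), -math.inf) + lp, seq)
                for prev, (score, seq) in paths.items()
            )
            new_paths[sym] = (best[0], best[1] + [sym])
        paths = new_paths
    return max(paths.values())[1]
```

In the usage below, the classifier's raw top choices are "a" then "d", but the bigram model makes the jointly consistent pair "b", "d" win, which is exactly how language-model post-processing lifts symbol accuracy.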
Abstract:
We present a quantum-dot-based DNA nanosensor specifically targeting the cleavage step in the reaction cycle of the essential DNA-modifying enzyme mycobacterial topoisomerase I. The design takes advantage of the unique photophysical properties of quantum dots to generate visible fluorescence recovery upon specific cleavage by mycobacterial topoisomerase I. This report demonstrates, for the first time, the possibility of quantifying the cleavage activity of the mycobacterial enzyme without pre-processing sample purification or post-processing signal amplification. The cleavage-induced signal response has also proven reliable in biological matrices, such as whole-cell extracts prepared from Escherichia coli and human Caco-2 cells. It is expected that the assay may contribute to the clinical diagnostics of bacterial diseases, as well as the evaluation of treatment outcomes.