72 results for Densité a priori
Abstract:
We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy, and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate, a priori, the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of each document image is manually annotated by users to create a ground truth. The idea is to correlate the values of the measures with the user-annotated data: if a computed measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easy to estimate. The metrics proposed in this paper are based on feedback from home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the script and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Occasionally, a document image containing broken characters scores well on the evaluated metrics; this remains an unsolved challenge. The proposed measures work on grayscale document images and fail to provide reliable information on binarized document images.
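As a rough illustration of the kind of measures listed above, the sketch below computes two plausible document-quality statistics, grayscale histogram entropy and the variance of connected-component areas, on a grayscale page image; the paper's exact definitions may differ, and the function names are ours.

```python
# Illustrative document-quality measures (not the paper's exact definitions).
import numpy as np
from scipy import ndimage

def grayscale_entropy(img):
    """Shannon entropy of the grayscale histogram (img: 2-D uint8 array)."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def component_area_variance(img, threshold=128):
    """Variance of the areas of connected components of dark (ink) pixels."""
    mask = img < threshold                 # dark pixels taken as ink
    labels, n = ndimage.label(mask)
    if n == 0:
        return 0.0
    areas = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return float(np.var(areas))
```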
Abstract:
A fully discrete $C^0$ interior penalty finite element method is proposed and analyzed for the extended Fisher-Kolmogorov (EFK) equation $u_t + \gamma \Delta^2 u - \Delta u + u^3 - u = 0$ with appropriate initial and boundary conditions, where $\gamma$ is a positive constant. We derive a regularity estimate for the solution $u$ of the EFK equation that is explicit in $\gamma$, and as a consequence we derive a priori error estimates that are robust in $\gamma$.
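For context, a hedged sketch of a standard $C^0$ interior penalty bilinear form for the fourth-order term $\gamma \Delta^2 u$ (the usual Brenner-Sung-type form is shown; the paper's exact discrete form and penalty parameter $\sigma$ may differ):

```latex
% Standard C^0 interior penalty form for the biharmonic term (illustrative;
% sigma is the penalty parameter, |e| the edge length).
a_h(w,v) = \sum_{T \in \mathcal{T}_h} \int_T D^2 w : D^2 v \, dx
  + \sum_{e \in \mathcal{E}_h} \int_e \Big( \{\!\{\partial^2_n w\}\!\} \, [\![\partial_n v]\!]
  + \{\!\{\partial^2_n v\}\!\} \, [\![\partial_n w]\!] \Big) \, ds
  + \sum_{e \in \mathcal{E}_h} \frac{\sigma}{|e|} \int_e [\![\partial_n w]\!] \, [\![\partial_n v]\!] \, ds
```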
Abstract:
Bilateral filters perform edge-preserving smoothing and are widely used for image denoising. The denoising performance is sensitive to the choice of the bilateral filter parameters. We propose an optimal parameter selection for bilateral filtering of images corrupted with Poisson noise. We employ the Poisson Unbiased Risk Estimate (PURE), an unbiased estimate of the Mean Squared Error (MSE). It does not require a priori knowledge of the ground truth and is useful in practical scenarios where there is no access to the original image. Experimental results show that the quality of denoising obtained with PURE-optimal bilateral filters is almost indistinguishable from that of the Oracle-MSE-optimal bilateral filters.
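A minimal sketch of how such a selection can be carried out, assuming a brute-force bilateral filter and a first-order PURE surrogate whose weighted divergence term is estimated by Monte Carlo perturbation (a black-box approximation, not necessarily the authors' estimator):

```python
# Parameter selection by minimizing a PURE surrogate (illustrative).
import numpy as np

def bilateral(y, sigma_s, sigma_r, radius=3):
    """Brute-force bilateral filter on a 2-D image."""
    H, W = y.shape
    pad = np.pad(y, radius, mode='reflect')
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    gs = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))   # spatial kernel
    out = np.empty_like(y, dtype=float)
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            w = gs * np.exp(-(patch - y[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def pure_surrogate(y, f, eps=0.5, rng=np.random.default_rng(0)):
    """PURE up to a constant independent of the filter parameters, using
    F(y - e_i) ~ F(y) - dF_i/dy_i and a Monte Carlo divergence estimate."""
    fy = f(y)
    b = rng.choice([-1.0, 1.0], size=y.shape)
    div = np.sum(b * y * (f(y + eps * b) - fy)) / eps  # ~ sum_i y_i dF_i/dy_i
    return (np.sum(fy**2) - 2*np.sum(y*fy) + 2*div) / y.size

y = np.random.default_rng(1).poisson(20.0, size=(64, 64)).astype(float)
best = min((pure_surrogate(y, lambda z: bilateral(z, ss, sr)), ss, sr)
           for ss in (1.0, 2.0, 3.0) for sr in (2.0, 5.0, 10.0))
print("PURE-optimal (sigma_s, sigma_r):", best[1:])
```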
Abstract:
Distributed compressed sensing exploits the information redundancy inbuilt in multi-signal ensembles with inter- as well as intra-signal correlations to reconstruct undersampled signals. In this paper we revisit this problem from a different perspective: streaming data from several correlated sources are taken as input to a real-time system which, without any a priori information, incrementally learns and admits each source into the system.
Abstract:
Nearly pollution-free solutions of the Helmholtz equation for k-values corresponding to visible light are demonstrated and verified through experimentally measured forward scattered intensity from an optical fiber. Numerically accurate solutions are, in particular, obtained through a novel reformulation of the $H^1$-optimal Petrov-Galerkin weak form of the Helmholtz equation. Specifically, within a globally smooth polynomial reproducing framework, the compact and smooth test functions are so designed that their normal derivatives are zero everywhere on the local boundaries of their compact supports. This circumvents the need for a priori knowledge of the true solution on the support boundary and relieves the weak form of any jump boundary terms. For numerical demonstration of the above formulation, we used a multimode optical fiber in an index-matching liquid as the object. The scattered intensity and its normal derivative are computed from the scattered field obtained by solving the Helmholtz equation, using both the new formulation and the conventional finite element method. By comparing the results with the experimentally measured scattered intensity, the stability of the solution obtained with the new formulation is demonstrated and its closeness to the experimental measurements verified.
Abstract:
Traditional taxonomy based on morphology has often failed in accurate species identification owing to the occurrence of cryptic species, which are reproductively isolated but morphologically identical. Molecular data have thus been used to complement morphology in species identification. The sexual advertisement calls in several groups of acoustically communicating animals are species-specific and can thus complement molecular data as non-invasive tools for identification. Several statistical tools and automated identifier algorithms have been used to investigate the efficiency of acoustic signals in species identification. Despite a plethora of such methods, there is a general lack of knowledge regarding their appropriate usage in specific taxa. In this study, we investigated the performance of two commonly used statistical methods, discriminant function analysis (DFA) and cluster analysis, in identification and classification based on the acoustic signals of field cricket species belonging to the subfamily Gryllinae. Using a comparative approach, we evaluated, for both methods, the optimal number of species and calling-song characteristics that lead to the most accurate classification and identification. The accuracy of classification using DFA was high and was not affected by the number of taxa used. However, a constraint in using discriminant function analysis is the need for a priori classification of songs. The accuracy of classification using cluster analysis, which does not require a priori knowledge, was maximal for 6-7 taxa and decreased significantly when more than ten taxa were analysed together. We also investigated the efficacy of two novel derived acoustic features in improving the accuracy of identification. Our results show that DFA is a reliable statistical tool for species identification using acoustic signals, and that cluster analysis of acoustic signals in crickets works effectively for species classification and identification.
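A minimal sketch of the two tools applied to a hypothetical feature table of calling-song measurements (the file names, features, and cross-validation setup are illustrative, not the paper's):

```python
# DFA (supervised, needs a priori labels) vs. hierarchical cluster analysis
# (unsupervised) on acoustic features; data files are hypothetical.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from scipy.cluster.hierarchy import linkage, fcluster

X = np.loadtxt("song_features.csv", delimiter=",")       # rows = songs
species = np.loadtxt("species_labels.csv", dtype=str)    # a priori labels

# DFA: requires the a priori species classification of songs
dfa = LinearDiscriminantAnalysis()
acc = cross_val_score(dfa, X, species, cv=5).mean()
print(f"DFA cross-validated identification accuracy: {acc:.2f}")

# Cluster analysis: no a priori labels required
Z = linkage(X, method="ward")
clusters = fcluster(Z, t=len(set(species)), criterion="maxclust")
```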
Abstract:
We describe a framework to explore and visualize the movement of cloud systems. Using techniques from computational topology and computer vision, our framework allows the user to study this movement at various scales in space and time. Such movements can have large temporal and spatial scales, such as the Madden Julian Oscillation (MJO), which has a spatial scale ranging from 1000 km to 10000 km and an oscillation period of around 40 days. Embedded within these larger scale oscillations is a hierarchy of cloud clusters with smaller spatial and temporal scales, such as the Nakazawa cloud clusters. These smaller cloud clusters, while being part of the equatorial MJO, sometimes move at speeds different from the larger scale and in a direction opposite to that of the MJO envelope. Hitherto, one could only speculate about such movements by selectively analysing data and using a priori knowledge of such systems. Our framework automatically delineates such cloud clusters and does not depend on the prior experience of the user to define them. Analysis using our framework also shows that most tropical systems, such as cyclones, contain multi-scale interactions between clouds and cloud systems. We show the effectiveness of our framework by tracking an organized cloud system during one such rainfall event, which occurred in Mumbai, India, in July 2005, and for cyclone Aila, which occurred in the Bay of Bengal during May 2009.
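As a simple stand-in for the topology-based delineation described above, the sketch below labels contiguous cold-cloud regions in a satellite brightness-temperature field; the threshold and data source are illustrative assumptions:

```python
# Delineating cloud clusters by thresholding and connected-component
# labelling (a crude proxy for the paper's topological delineation).
import numpy as np
from scipy import ndimage

def delineate_clusters(tb, threshold=240.0):
    """Label contiguous cold-cloud regions (tb: 2-D brightness temp, K)."""
    mask = tb < threshold                  # deep convective cloud proxy
    labels, n = ndimage.label(mask)
    centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))
    return labels, centroids

# Tracking across time steps could then match clusters between consecutive
# frames by spatial overlap or nearest centroids.
```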
Abstract:
Epoch is defined as the instant of significant excitation within a pitch period of voiced speech. Epoch extraction continues to attract the interest of researchers because of its significance in speech analysis. Existing high-performance epoch extraction algorithms require either dynamic programming techniques or a priori information about the average pitch period. An algorithm without such requirements is proposed, based on the integrated linear prediction residual (ILPR), which resembles the voice source signal. The half-wave rectified and negated ILPR (or the Hilbert transform of the ILPR) is used as the pre-processed signal. A new non-linear temporal measure named the plosion index (PI) is proposed for detecting 'transients' in the speech signal. An extension of the PI, called the dynamic plosion index (DPI), is applied to the pre-processed signal to estimate the epochs. The proposed DPI algorithm is validated using six large databases that provide simultaneous EGG recordings. Creaky and singing voice samples are also analyzed. The algorithm has been tested for its robustness in the presence of additive white and babble noise and on simulated telephone-quality speech. The performance of the DPI algorithm is found to be comparable to or better than that of five state-of-the-art techniques for the experiments considered.
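A hedged sketch of the pre-processing and a plosion-index-style measure; the LPC order, the window parameters m1 and m2, and the exact DPI recursion are illustrative assumptions rather than the paper's settings:

```python
# ILPR pre-processing and a plosion-index-style transient measure
# (illustrative; parameters are not the paper's).
import numpy as np
import librosa
from scipy.signal import lfilter

def ilpr(speech, order=12):
    """Inverse-filter the (non-pre-emphasized) speech with LPC coefficients
    estimated from pre-emphasized speech; resembles the voice source."""
    a = librosa.lpc(librosa.effects.preemphasis(speech), order=order)
    return lfilter(a, [1.0], speech)

def preprocess(speech):
    """Half-wave rectified and negated ILPR."""
    return np.maximum(-ilpr(speech), 0.0)

def plosion_index(x, n, m1=20, m2=400):
    """|x[n]| relative to the mean |x| over the m2 samples that precede
    sample n by m1 (assumes n > m1 + m2)."""
    past = np.abs(x[n - m1 - m2:n - m1])
    return np.abs(x[n]) / (past.mean() + 1e-12)
```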
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to determine the instruments that are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
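A minimal sketch of the weight-only EM update, with per-frame log-likelihoods under fixed, pre-trained instrument models as input (the pre-trained models would be the paper's Student's-t mixtures; names are illustrative):

```python
# EM estimation of mixture weights over fixed instrument models; the
# resulting weights are read as instrument activity.
import numpy as np

def em_instrument_weights(frame_loglik, n_iter=50):
    """frame_loglik: (T, K) log-likelihood of each of T frames under each
    of K instrument models; returns the K mixture weights."""
    T, K = frame_loglik.shape
    w = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each instrument per frame
        log_post = np.log(w) + frame_loglik
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)
        # M-step: weights = average responsibility across frames
        w = post.mean(axis=0)
    return w
```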
Abstract:
Numerous algorithms have been proposed recently for sparse signal recovery in Compressed Sensing (CS). In practice, the number of measurements can be very limited due to the nature of the problem, and/or the underlying statistical distribution of the non-zero elements of the sparse signal may not be known a priori. It has been observed that the performance of any sparse signal recovery algorithm depends on these factors, which makes the selection of a suitable sparse recovery algorithm difficult. To take advantage of such situations, we propose a fusion framework in which we employ multiple sparse signal recovery algorithms and fuse their estimates to get a better estimate. Theoretical results justifying the performance improvement are shown. The efficacy of the proposed scheme is demonstrated by Monte Carlo simulations using synthetic sparse signals and ECG signals selected from the MIT-BIH database.
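One plausible instance of such a fusion rule, sketched below: take the union of the supports returned by two participating algorithms, solve least squares on that union, and prune to the k largest coefficients (the paper's exact rule may differ):

```python
# Fusing the estimates of two sparse recovery algorithms (illustrative).
import numpy as np
from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

def fused_estimate(A, y, k):
    """A: (m, n) measurement matrix; y: (m,) measurements; k: sparsity."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(A, y)
    lasso = Lasso(alpha=0.01).fit(A, y)
    support = np.union1d(np.flatnonzero(omp.coef_),
                         np.flatnonzero(lasso.coef_))
    # least squares restricted to the joint support
    x = np.zeros(A.shape[1])
    x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    x[support] = x_s
    # prune to the k largest-magnitude entries
    x[np.argsort(np.abs(x))[:-k]] = 0.0
    return x
```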
Abstract:
The distributed, low-feedback, timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within $\Delta$ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario where the number of nodes is unknown. We consider two cases in which the probability distribution of the number of nodes is either known a priori or is unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
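A minimal simulation sketch of the scheme and its failure mode, with a toy decreasing metric-to-timer mapping (not the optimal mapping derived in the paper):

```python
# Simulating timer-based best-node selection: selection fails if a second
# timer expires within delta of the earliest one.
import numpy as np

def success_probability(mapping, delta, n_nodes, trials=100_000,
                        rng=np.random.default_rng(0)):
    wins = 0
    for _ in range(trials):
        metrics = rng.uniform(size=n_nodes)
        timers = np.sort(mapping(metrics))   # best metric expires first
        if n_nodes == 1 or timers[1] - timers[0] > delta:
            wins += 1
    return wins / trials

T_max = 1.0
def inverse_mapping(mu):                      # toy decreasing mapping
    return T_max * (1.0 - mu)

print(success_probability(inverse_mapping, delta=0.05, n_nodes=5))
```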
Abstract:
A Monte Carlo filter, based on the idea of averaging over characteristics and fashioned after a particle-based, time-discretized approximation to the Kushner-Stratonovich (KS) nonlinear filtering equation, is proposed. A key aspect of the new filter is the gain-like additive update, designed to approximate the innovation integral in the KS equation and implemented through an annealing-type iterative procedure, which aims to render the innovation (observation-prediction mismatch) for a given time-step a zero-mean Brownian increment corresponding to the measurement noise. This may be contrasted with the weight-based multiplicative updates in most particle filters, which are known to precipitate the numerical problem of weight collapse within a finite-ensemble setting. A study to estimate the a priori error bounds in the proposed scheme is undertaken. The numerical evidence, presently gathered from the assessed performance of the proposed and a few other competing filters on a class of nonlinear dynamic system identification and target tracking problems, is suggestive of the remarkably improved convergence and accuracy of the new filter.
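The flavor of a gain-like additive update can be sketched with an ensemble-Kalman-style gain standing in for the paper's annealed, innovation-driven iteration; this is illustrative only:

```python
# Additive (gain-based) ensemble update, contrasted with weight-based
# multiplicative updates; an EnKF-style gain is used for illustration.
import numpy as np

def additive_update(particles, y_obs, h, R, rng=np.random.default_rng(0)):
    """particles: (N, d) predicted ensemble; h maps a state to an m-vector;
    R: (m, m) measurement noise covariance."""
    N, d = particles.shape
    Hx = np.array([h(x) for x in particles])          # (N, m) predictions
    C = np.cov(np.hstack([particles, Hx]).T)          # joint covariance
    P_xy, S = C[:d, d:], C[d:, d:] + R
    K = P_xy @ np.linalg.inv(S)                       # gain
    innov = y_obs + rng.multivariate_normal(np.zeros(len(R)), R, N) - Hx
    return particles + innov @ K.T                    # additive update
```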
Abstract:
The objective of this work is to develop downscaling methodologies to obtain a long time record of inundation extent at high spatial resolution based on the existing low-spatial-resolution results of the Global Inundation Extent from Multi-Satellites (GIEMS) dataset. In semiarid regions, high-spatial-resolution a priori information can be provided by visible and infrared observations from the Moderate Resolution Imaging Spectroradiometer (MODIS). The study concentrates on the Inner Niger Delta, where MODIS-derived inundation extent has been estimated at a 500-m resolution. The space-time variability is first analyzed using a principal component analysis (PCA). This is particularly effective for understanding the inundation variability, interpolating in time, or filling in missing values. Two innovative methods are developed (linear regression and matrix inversion), both based on the PCA representation. These GIEMS downscaling techniques have been calibrated using the 500-m MODIS data. The downscaled fields show the expected space-time behaviors from MODIS. A 20-yr dataset of the inundation extent at 500 m is derived from this analysis for the Inner Niger Delta. The methods are very general and may be applied to many basins and to variables other than inundation, provided enough a priori high-spatial-resolution information is available. The derived high-spatial-resolution dataset will be used in the framework of the Surface Water Ocean Topography (SWOT) mission to develop and test the instrument simulator, as well as to select the calibration/validation sites (with high space-time inundation variability). In addition, once SWOT observations are available, the downscaling methodology will be calibrated on them in order to downscale the GIEMS datasets and to extend the SWOT benefits back in time to 1993.
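A minimal sketch of the PCA-plus-linear-regression variant: learn PCA modes from the high-resolution MODIS fields, regress their temporal coefficients on the coarse GIEMS values, and reconstruct high-resolution fields for dates where only GIEMS is available (array names and shapes are illustrative):

```python
# PCA-based downscaling calibrated on matched high/low resolution data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# hires:  (T, P) MODIS inundation maps flattened to P pixels, T dates
# coarse: (T, Q) matching GIEMS low-resolution values
def fit_downscaler(hires, coarse, n_modes=10):
    pca = PCA(n_components=n_modes).fit(hires)
    scores = pca.transform(hires)                  # (T, n_modes)
    reg = LinearRegression().fit(coarse, scores)   # coarse -> PCA scores
    return pca, reg

def downscale(pca, reg, coarse_new):
    """Reconstruct high-resolution fields from new coarse observations."""
    return pca.inverse_transform(reg.predict(coarse_new))
```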
Abstract:
Although many sparse recovery algorithms have been proposed recently in compressed sensing (CS), it is well known that the performance of any sparse recovery algorithm depends on many parameters, such as the dimension of the sparse signal, the level of sparsity, and the measurement noise power. It has been observed that a satisfactory performance of the sparse recovery algorithms requires a minimum number of measurements, and this minimum number differs across algorithms. In many applications, the number of measurements is unlikely to meet this requirement, and any scheme that improves performance with fewer measurements is of significant interest in CS. Empirically, it has also been observed that the performance of the sparse recovery algorithms depends on the underlying statistical distribution of the nonzero elements of the signal, which may not be known a priori in practice. Interestingly, the performance degradation of the sparse recovery algorithms in these cases does not always imply a complete failure. In this paper, we study this scenario and show that by fusing the estimates of multiple sparse recovery algorithms, which work on different principles, we can improve the sparse signal recovery. We present a theoretical analysis to derive sufficient conditions for performance improvement of the proposed schemes, and we demonstrate the advantage of the proposed methods through numerical simulations for both synthetic and real signals.
Abstract:
Polyolefin-based blends have tremendous commercial importance in view of their exceptional properties. In this study, the interface of a biphasic polymer blend of PE (polyethylene) and PEO (polyethylene oxide) has been tailored to reduce the interfacial tension between the phases and to render a finer morphology. This was accomplished by employing various strategies: addition of maleated PE (PE-grafted maleic anhydride), immobilizing PE chains ex situ onto MWNTs by covalent grafting, and in situ grafting of PE chains onto MWNTs during melt processing. Multiwalled nanotubes (MWNTs) with different surface functional groups were either synthesized a priori or generated during melt mixing at higher temperature. NH2-terminated MWNTs were synthesized by grafting ethylene diamine (EDA) onto carboxyl-functionalized carbon nanotubes (COOH-MWNTs) and were further used to reactively couple with maleated PE to immobilize PE chains on the surface of the MWNTs. The covalent coupling of maleated PE with NH2-terminated MWNTs was also realized in situ in the melt extruder at high temperature. Both the NH2-terminated MWNTs and the PE brush formed in situ on the MWNTs during melt mixing revealed a significant improvement in the mechanical properties of the blend, besides remarkably improving the dispersion of the minor phase (PEO) in the blends. Structural properties of the composites were evaluated, and the tensile-fractured morphology was assessed using scanning electron microscopy.