814 results for data gathering algorithm
Abstract:
We address the problem of estimating the fundamental frequency of voiced speech. We present a novel solution motivated by the importance of amplitude modulation in sound processing and speech perception. The new algorithm is based on a cumulative spectrum computed from the temporal envelopes of various subbands. We provide a theoretical analysis to derive the new pitch estimator based on the temporal envelope of the bandpass speech signal. We report extensive experimental results for synthetic as well as natural vowels, for both real-world noisy and noise-free data. The experiments show that the new technique performs accurate pitch estimation and is robust to noise. We also show that the technique is superior to the autocorrelation technique for pitch estimation.
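As a rough illustration of the idea, here is a minimal sketch assuming a Hilbert-transform envelope, a hypothetical three-band filter bank, and a cumulative spectrum formed by summing the envelope magnitude spectra; the paper's exact subband decomposition and estimator may differ.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_pitch(x, fs, bands=((300, 800), (800, 1500), (1500, 2500)),
                   fmin=60.0, fmax=400.0):
    """Estimate pitch from a cumulative spectrum of subband envelopes."""
    n = len(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    cumulative = np.zeros(len(freqs))
    for lo, hi in bands:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        sub = sosfiltfilt(sos, x)
        env = np.abs(hilbert(sub))          # temporal envelope of the subband
        env -= env.mean()                   # remove DC before the FFT
        cumulative += np.abs(np.fft.rfft(env))
    band = (freqs >= fmin) & (freqs <= fmax)
    return freqs[band][np.argmax(cumulative[band])]
```

The motivation is that the envelope of each voiced subband is modulated at the fundamental frequency, so its spectrum peaks near F0 even in bands where the fundamental itself is absent.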
Abstract:
With the introduction of 2D flat-panel X-ray detectors, 3D image reconstruction using helical cone-beam tomography is fast replacing conventional 2D reconstruction techniques. In 3D image reconstruction, the source orbit or scanning geometry should satisfy the data sufficiency or completeness condition for exact reconstruction. The helical scan geometry satisfies this condition and hence can give exact reconstruction. The theoretically exact helical cone-beam reconstruction algorithm proposed by Katsevich is a breakthrough and has attracted interest in 3D reconstruction using helical cone-beam computed tomography. In many practical situations, the available projection data is incomplete. One such case arises when the detector plane does not completely cover the full lateral extent of the object being imaged, resulting in truncated projections. This results in artifacts that mask small features near the periphery of the ROI when reconstruction is done using the convolution back-projection (CBP) method under the assumption that the projection data is complete. A number of techniques exist that complete the missing data before CBP reconstruction. In 2D, linear prediction (LP) extrapolation has been shown to be efficient for data completion, involving minimal assumptions about the nature of the data and producing smooth extensions of the missing projection data. In this paper, we propose to extend the LP approach to extrapolating truncated helical cone-beam data. In the truncated-data situation, the projection on the multi-row flat-panel detector has missing columns toward either end in the lateral direction. The available data from each detector row is modeled using a linear predictor, the missing data is extrapolated, and the completed projection data is backprojected using the Katsevich algorithm. Simulation results show the efficacy of the proposed method.
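The row-wise extrapolation step might look like the following sketch, which fits an AR model by ordinary least squares to the available samples of one detector row; the Katsevich backprojection stage is not shown, and the model order and number of missing columns are placeholders.

```python
import numpy as np

def lp_extrapolate_row(row, order=10, n_missing=32):
    """Extend a 1-D detector row using a linear predictor fitted to the
    available samples (forward extrapolation; mirror the row to extend
    the other end)."""
    N = len(row)
    # Least-squares system: row[t] ~ sum_k a[k] * row[t - k - 1]
    A = np.column_stack([row[order - k - 1:N - k - 1] for k in range(order)])
    b = row[order:]
    a, *_ = np.linalg.lstsq(A, b, rcond=None)
    ext = list(row)
    for _ in range(n_missing):
        # Predict the next sample from the previous `order` samples.
        ext.append(np.dot(a, ext[-1:-order - 1:-1]))
    return np.asarray(ext)
```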
Abstract:
This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher-frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
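A loose pipeline sketch: a plain least-squares all-pole fit stands in here for the paper's weighted Steiglitz-McBride extension (which also estimates zeros), and the DPCM stage uses a hypothetical uniform quantizer with a fixed step.

```python
import numpy as np
from scipy.fft import dct

def fit_allpole(sig, order=12):
    """Stand-in for the weighted Steiglitz-McBride pole-zero fit:
    least-squares AR coefficients for the DCT-domain signal."""
    A = np.column_stack([sig[order - k - 1:-k - 1] for k in range(order)])
    a, *_ = np.linalg.lstsq(A, sig[order:], rcond=None)
    return a

def dpcm_encode(params, step=0.01):
    """DPCM the model parameters: quantize successive differences,
    tracking the decoder-side reconstruction to avoid drift."""
    codes, prev = [], 0.0
    for p in params:
        q = int(round((p - prev) / step))
        codes.append(q)
        prev += q * step
    return codes

ecg = np.random.randn(1024)          # placeholder for a single-lead ECG block
coeffs = dct(ecg, norm="ortho")      # model the DCT-domain signal
bitstream = dpcm_encode(fit_allpole(coeffs))
```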
Abstract:
This paper deals with the problem of multisensor data fusion for a single-target scenario as detected by an airborne track-while-scan radar. The details of a neural network implementation, various training algorithms based on standard backpropagation, and the results of training and testing the neural network are presented. The promising capabilities of the RPROP algorithm for multisensor data fusion across various parameters are shown in comparison to other adaptive techniques.
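For reference, the Rprop update that the comparison centers on adapts a per-weight step size from gradient signs alone. The sketch below implements the common iRprop- variant with textbook constants, which may differ in detail from the paper's version.

```python
import numpy as np

def rprop_step(w, grad, prev_grad, step, eta_plus=1.2, eta_minus=0.5,
               step_min=1e-6, step_max=50.0):
    """One Rprop update over flat weight/gradient arrays."""
    sign_change = grad * prev_grad
    # Grow the step where the gradient kept its sign, shrink where it flipped.
    step = np.where(sign_change > 0, np.minimum(step * eta_plus, step_max), step)
    step = np.where(sign_change < 0, np.maximum(step * eta_minus, step_min), step)
    grad = np.where(sign_change < 0, 0.0, grad)   # skip the update after a flip
    w = w - np.sign(grad) * step
    return w, grad, step
```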
Abstract:
This paper proposes a new approach for solving the state estimation problem. The approach aims to produce a robust estimator that rejects bad data, even when the bad data are associated with leverage-point measurements. This is achieved by solving a sequence of Linear Programming (LP) problems. Optimization is carried out via a new algorithm that combines an "upper bound optimization technique" with an "improved algorithm for discrete linear approximation". In this formulation of the LP problem, constraints corresponding to bounds on the state variables are included in addition to the constraints corresponding to the measurement set, which makes the LP problem more effective at rejecting bad data, even when it is associated with leverage-point measurements. Results of the proposed estimator on the IEEE 39-bus system and a 24-bus EHV equivalent system of the southern Indian grid are presented for illustration.
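One LP in such a sequence can be sketched as a least-absolute-value fit with explicit state bounds; scipy's general-purpose linprog stands in for the specialized LP algorithms named above, and the linearized measurement model z ≈ Hx is an assumption.

```python
import numpy as np
from scipy.optimize import linprog

def lav_state_estimate(H, z, x_lb, x_ub):
    """Least-absolute-value estimate with state bounds:
    minimize sum(u + v)  s.t.  Hx + u - v = z,  u, v >= 0,
    x_lb <= x <= x_ub (the bound constraints that help reject bad data)."""
    m, n = H.shape
    c = np.concatenate([np.zeros(n), np.ones(2 * m)])     # variables [x, u, v]
    A_eq = np.hstack([H, np.eye(m), -np.eye(m)])
    bounds = [(lo, hi) for lo, hi in zip(x_lb, x_ub)] + [(0, None)] * (2 * m)
    res = linprog(c, A_eq=A_eq, b_eq=z, bounds=bounds, method="highs")
    return res.x[:n]
```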
Abstract:
Channel-aware assignment of subchannels to users in the downlink of an OFDMA system requires extensive feedback of channel state information (CSI) to the base station. Since bandwidth is scarce, schemes that limit feedback are necessary. We develop a novel, low-feedback, distributed splitting-based algorithm called SplitSelect to opportunistically assign each subchannel to its most suitable user. SplitSelect explicitly handles the multiple-access control aspects associated with CSI feedback and scales well with the number of users. In SplitSelect, each user locally maintains a scheduling metric for each subchannel according to a scheduling criterion. The goal is to select, for each subchannel, the user with the highest scheduling metric. At any time, each user contends for the subchannel for which it has the largest scheduling metric among the unallocated subchannels. A tractable asymptotic analysis of a system with many users is central to SplitSelect's simple design. Extensive simulation results demonstrate the speed with which subchannels and users are paired. The net data throughput, when the time overhead of selection is accounted for, is shown to be substantially better than that of several schemes proposed in the literature. We also show how fairness and user prioritization can be ensured by suitably defining the scheduling metric.
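The splitting-based contention mechanics are the contribution here; the sketch below only renders the idealized, collision-free selection dynamic the algorithm targets, with a hypothetical metric matrix and at least one user assumed.

```python
import numpy as np

def greedy_pairing(metrics):
    """Idealized outcome sketch: each user contends for its best
    unallocated subchannel, and the highest-metric contender wins it.
    metrics[u, s] = scheduling metric of user u on subchannel s."""
    n_users, n_sub = metrics.shape
    unallocated = set(range(n_sub))
    assignment = {}
    while unallocated:
        # Each user targets its best remaining subchannel.
        targets = {u: max(unallocated, key=lambda s: metrics[u, s])
                   for u in range(n_users)}
        # On each contended subchannel, the highest metric wins.
        for s in list(unallocated):
            contenders = [u for u, t in targets.items() if t == s]
            if contenders:
                assignment[s] = max(contenders, key=lambda u: metrics[u, s])
                unallocated.discard(s)
    return assignment
```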
Abstract:
This paper presents an artificial feedforward neural network (FFNN) approach for the assessment of power system voltage stability. A novel approach based on the input-output relation between real and reactive power, as well as the voltage vectors of generators and load buses, is used to train the neural network (NN). The inputs to the feedforward network are generated from offline training data over various simulated loading conditions using a conventional voltage stability algorithm based on the L-index. The neural network is trained with the L-index output as the target vector for each of the system loads. Two separately trained NNs, corresponding to normal loading and contingency conditions, are investigated on a practical 367-node power system network. The performance of the trained artificial neural network (ANN) is also investigated under various voltage stability assessment conditions. Compared to the computationally intensive conventional benchmark software, near-accurate values of the L-index, and thus the voltage profile, were obtained. The proposed algorithm is fast, robust, and accurate, and can be used online to predict the L-indices of all the power system buses. The proposed ANN approach is also shown to be effective and computationally feasible for voltage stability assessment, as well as for potential enhancements within an overall energy management system for determining local and global stability indices.
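A sketch of the training setup under stated assumptions: sklearn's MLPRegressor stands in for the paper's FFNN, and the arrays are random placeholders for the offline P/Q/voltage features and per-bus L-index targets computed by the conventional algorithm.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical offline training data: P, Q and |V| features per simulated
# loading condition; one conventional L-index target per load bus.
X_train = np.random.rand(500, 30)    # placeholder feature matrix
y_train = np.random.rand(500, 10)    # placeholder L-index targets

nn = MLPRegressor(hidden_layer_sizes=(40,), activation="tanh",
                  max_iter=2000, random_state=0)
nn.fit(X_train, y_train)             # offline training

l_index_pred = nn.predict(np.random.rand(1, 30))   # fast online prediction
```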
Abstract:
This paper presents hierarchical clustering algorithms for the land cover mapping problem using multi-spectral satellite images. Unsupervised techniques have not fully exploited the automatic determination of the number of clusters and their centers for a huge database. Hence, a hierarchical clustering algorithm that uses splitting and merging techniques is proposed. Initially, the splitting method searches for the best possible number of clusters and their centers using Mean Shift Clustering (MSC), Niche Particle Swarm Optimization (NPSO) or Glowworm Swarm Optimization (GSO). Using these clusters and their centers, the merging method groups the data points based on a parametric method (the k-means algorithm). A performance comparison of the proposed hierarchical clustering algorithms (MSC, NPSO and GSO) is presented using two typical multi-spectral satellite images: Landsat 7 Thematic Mapper and QuickBird. From the results obtained, we conclude that the proposed GSO-based hierarchical clustering algorithm is the most accurate and robust.
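The split-then-merge structure can be sketched as follows, using sklearn's MeanShift for the splitting stage (one of the three options above) and k-means seeded with the discovered centers for the merge stage; the swarm-based variants are not shown.

```python
from sklearn.cluster import MeanShift, KMeans

def split_merge_cluster(pixels):
    """pixels: (n_samples, n_bands) multi-spectral feature vectors."""
    # Split: let mean shift discover the number of clusters and their centers.
    ms = MeanShift().fit(pixels)
    centers = ms.cluster_centers_
    # Merge: group the data points with k-means initialized at those centers.
    km = KMeans(n_clusters=len(centers), init=centers, n_init=1).fit(pixels)
    return km.labels_
```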
Abstract:
We address the problem of phase retrieval, which is frequently encountered in optical imaging. The measured quantity is the magnitude of the Fourier spectrum of a function (in optics, the function is also referred to as an object). The goal is to recover the object from the magnitude measurements. In doing so, the standard assumptions are that the object is compactly supported and positive. In this paper, we consider objects that admit a sparse representation in some orthonormal basis. We develop a variant of the Fienup algorithm that incorporates the sparsity condition and successively estimates and refines the phase starting from the magnitude measurements. We show that the proposed iterative algorithm possesses Cauchy convergence properties. As far as the modality is concerned, we work with measurements obtained using a frequency-domain optical-coherence tomography experimental setup. The experimental results on real measured data show that the proposed technique exhibits good reconstruction performance even with fewer coefficients taken into account for reconstruction. It also suppresses the autocorrelation artifacts to a significant extent, since it estimates the phase accurately.
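A minimal 1-D rendering of such an iteration, assuming sparsity is enforced by hard thresholding in the identity basis alongside the support and positivity constraints; the optical-coherence tomography specifics and the paper's exact refinement schedule are omitted.

```python
import numpy as np

def sparse_fienup(mag, support, k, n_iter=200):
    """mag: measured Fourier magnitudes; support: boolean object support;
    k: number of nonzero coefficients retained (sparsity constraint)."""
    rng = np.random.default_rng(0)
    phase = np.exp(1j * rng.uniform(0, 2 * np.pi, mag.shape))
    for _ in range(n_iter):
        g = np.fft.ifft(mag * phase).real    # object-domain estimate
        g[~support] = 0.0                    # compact support
        g[g < 0] = 0.0                       # positivity
        g[np.argsort(np.abs(g))[:-k]] = 0.0  # keep only the k largest entries
        G = np.fft.fft(g)
        phase = np.exp(1j * np.angle(G))     # keep phase, restore magnitude
    return g
```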
Abstract:
The boxicity of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of k-dimensional axis-parallel boxes in R^k. Equivalently, it is the minimum number of interval graphs on the vertex set V such that the intersection of their edge sets is E. It is known that boxicity cannot be approximated, even for graph classes like bipartite, co-bipartite and split graphs, within an O(n^(0.5-ε)) factor for any ε > 0 in polynomial time unless NP = ZPP. To date, there is no well-known graph class of unbounded boxicity for which even an n^ε-factor approximation algorithm for computing boxicity is known, for any ε < 1. In this paper, we study the boxicity problem on circular arc graphs - intersection graphs of arcs of a circle. We give a (2 + 1/k)-factor polynomial-time approximation algorithm for computing the boxicity of any circular arc graph, along with a corresponding box representation, where k ≥ 1 is its boxicity. For Normal Circular Arc (NCA) graphs, with an NCA model given, this can be improved to an additive 2-factor approximation algorithm. The time complexity of the algorithms for approximately computing the boxicity is O(mn + n^2) in both cases, and in O(mn + kn^2) time, which is at most O(n^3), we also get the corresponding box representations, where n is the number of vertices of the graph and m is its number of edges. The additive 2-factor algorithm works directly for any proper circular arc graph, since an NCA model for it can be computed in polynomial time.
Abstract:
Ranking problems have become increasingly important in machine learning and data mining in recent years, with applications ranging from information retrieval and recommender systems to computational biology and drug discovery. In this paper, we describe a new ranking algorithm that directly maximizes the number of relevant objects retrieved at the absolute top of the list. The algorithm is a support vector style algorithm, but due to the different objective, it no longer leads to a quadratic programming problem. Instead, the dual optimization problem involves l1,∞ constraints; we solve this dual problem using the recent l1,∞ projection method of Quattoni et al. (2009). Our algorithm can be viewed as an l∞-norm extreme of the lp-norm based algorithm of Rudin (2009) (albeit in a support vector setting rather than a boosting setting); thus we refer to the algorithm as the 'Infinite Push'. Experiments on real-world data sets confirm the algorithm's focus on accuracy at the absolute top of the list.
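The paper works in the dual with l1,∞ projections; as a simplification, the sketch below instead minimizes a primal l∞-push surrogate by subgradient descent, concentrating the hinge loss on the worst-ranked negative. The regularization and step-size values are placeholders.

```python
import numpy as np

def infinite_push(X_pos, X_neg, lam=0.1, lr=0.01, n_iter=500):
    """Subgradient descent on
    lam/2 ||w||^2 + max_j sum_i max(0, 1 - w.(x_i+ - x_j-)):
    the loss is charged against the single worst-ranked negative."""
    w = np.zeros(X_pos.shape[1])
    for _ in range(n_iter):
        s_pos, s_neg = X_pos @ w, X_neg @ w
        losses = np.maximum(0.0, 1.0 - (s_pos[:, None] - s_neg[None, :]))
        j = int(np.argmax(losses.sum(axis=0)))      # worst negative example
        active = losses[:, j] > 0                   # positives with active hinge
        g = lam * w - (X_pos[active] - X_neg[j]).sum(axis=0)
        w -= lr * g
    return w
```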
Abstract:
This paper proposes an algorithm for joint data detection and tracking of the dominant singular mode of a time-varying channel at the transmitter and receiver of a time-division duplex multiple-input multiple-output beamforming system. The proposed method is a modified expectation-maximization algorithm that uses an initial estimate to blindly track the dominant modes of the channel at the transmitter and the receiver while simultaneously detecting the unknown data. Furthermore, each estimate is constrained to lie within a confidence interval of the previous estimate in order to improve tracking performance and mitigate the effect of error propagation. Monte Carlo simulation results of the symbol error rate and the mean square inner product between the estimated and the true singular vectors are plotted to show the performance benefits offered by the proposed method compared to existing techniques.
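The confidence-interval constraint alone might be rendered as below, assuming it is enforced by bounding the correlation between successive singular-vector estimates; the EM data-detection machinery is not shown, and the shrinkage rule is a hypothetical stand-in.

```python
import numpy as np

def constrain_estimate(v_new, v_prev, min_corr=0.9):
    """Pull the updated dominant-mode estimate back toward the previous
    one whenever their correlation drops below min_corr; v_prev is
    assumed unit-norm."""
    v_new = v_new / np.linalg.norm(v_new)
    if np.vdot(v_prev, v_new).real < 0:      # resolve the sign ambiguity
        v_new = -v_new
    while np.vdot(v_prev, v_new).real < min_corr:
        v_new = v_new + 0.1 * (v_prev - v_new)   # shrink toward the prior
        v_new = v_new / np.linalg.norm(v_new)
    return v_new
```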
Abstract:
In this paper we propose a new algorithm for learning polyhedral classifiers, which we call the Polyceptron. It is a Perceptron-like algorithm that updates the parameters only when the current classifier misclassifies a training point. We give both batch and online versions of the Polyceptron algorithm. Finally, we give experimental results to show the effectiveness of our approach.
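A sketch of a possible online update, assuming the polyhedral classifier labels a point positive only when all K hyperplanes agree and the perceptron-style correction is applied to the least-satisfied hyperplane; the paper's exact update rule may differ.

```python
import numpy as np

def polyceptron_online(X, y, K=3, n_epochs=20):
    """y in {+1, -1}; a point is positive iff all K hyperplanes give > 0.
    Parameters are updated only on misclassification (perceptron style)."""
    rng = np.random.default_rng(0)
    W = rng.standard_normal((K, X.shape[1])) * 0.01
    b = np.zeros(K)
    for _ in range(n_epochs):
        for x, label in zip(X, y):
            vals = W @ x + b
            pred = 1 if np.all(vals > 0) else -1
            if pred == label:
                continue                 # no update on correctly classified points
            k = int(np.argmin(vals))     # most violated / least positive plane
            W[k] += label * x
            b[k] += label
    return W, b
```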
Abstract:
We address the problem of mining targeted association rules over multidimensional market-basket data. Here, each transaction has, in addition to the set of purchased items, ancillary dimension attributes associated with it. Based on these dimensions, transactions can be visualized as distributed over the cells of an n-dimensional cube. In this framework, a targeted association rule is of the form {X -> Y}_R, where R is a convex region in the cube and X -> Y is a traditional association rule within region R. We first describe the TOARM algorithm, based on classical techniques, for identifying targeted association rules. Then, we discuss the concepts of bottom-up aggregation and cubing, leading to the CellUnion technique. This approach is further extended, using notions of cube-count interleaving and credit-based pruning, to derive the IceCube algorithm. Our experiments demonstrate that IceCube consistently provides the best execution-time performance, especially for large and complex data cubes.
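As a naive baseline for what a targeted rule means (roughly the starting point the algorithms above improve upon), with hypothetical transaction and region representations:

```python
def rule_stats(transactions, region, X, Y):
    """transactions: iterable of (dims, items) pairs, where dims is a tuple
    of dimension-attribute values and items is the set of purchased items.
    region(dims) -> bool tests membership in the convex region R.
    Returns (support, confidence) of X -> Y restricted to R."""
    in_region = [items for dims, items in transactions if region(dims)]
    n_x = sum(1 for items in in_region if X <= items)
    n_xy = sum(1 for items in in_region if (X | Y) <= items)
    support = n_xy / len(in_region) if in_region else 0.0
    confidence = n_xy / n_x if n_x else 0.0
    return support, confidence

# Example: rules over transactions in the region age in [20, 30).
stats = rule_stats(
    transactions=[((25,), {"milk", "bread"}), ((40,), {"milk"})],
    region=lambda dims: 20 <= dims[0] < 30,
    X={"milk"}, Y={"bread"},
)
```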
Abstract:
We have benchmarked the maximum obtainable recognition accuracy on five publicly available standard word image data sets using semi-automated segmentation and a commercial OCR. These images have been cropped from camera-captured scene images, born-digital images (BDI) and street view images. Using a Matlab-based tool developed by us, we have annotated, at the pixel level, more than 3600 word images from the five data sets. The word images binarized by the tool, as well as by our own midline analysis and propagation of segmentation (MAPS) algorithm, are recognized using the trial version of Nuance Omnipage OCR, and these two sets of results are compared with the best reported in the literature. The benchmark word recognition rates obtained on the ICDAR 2003, Sign evaluation, Street view, Born-digital and ICDAR 2011 data sets are 83.9%, 89.3%, 79.6%, 88.5% and 86.7%, respectively. The recognition rates obtained from MAPS-binarized word images, without the use of any lexicon, are 64.5% and 71.7% for ICDAR 2003 and 2011, respectively; these values are higher than the best reported in the literature, 61.1% and 41.2%, respectively. The MAPS result of 82.8% on the BDI 2011 data set matches the performance of the state-of-the-art method based on the power-law transform.