121 results for Weighted
Abstract:
It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm dependent and cannot be easily extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm which iteratively enhance the performance of any given arbitrary sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method using numerical experiments with both synthetic and real-world data. (C) 2014 Elsevier B.V. All rights reserved.
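As a rough illustration of the framework's premise of wrapping an arbitrary sparse solver in a reweighting loop, the following Python sketch applies generic column reweighting around a user-supplied solver. The weight update 1/(|x_i| + eps), the iteration count, and the solver interface are illustrative assumptions, not the paper's specific algorithm or its RIP-based analysis.

```python
import numpy as np

def reweighted(solver, A, y, n_iter=5, eps=1e-3):
    """Wrap an arbitrary sparse reconstruction routine in a reweighting loop.

    solver(Aw, y) -> sparse estimate; any off-the-shelf algorithm can be plugged in.
    """
    n = A.shape[1]
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(n_iter):
        Aw = A / w                    # rescale dictionary columns by 1/w_j
        z = solver(Aw, y)             # run the given solver on the reweighted problem
        x = z / w                     # map the estimate back to the original scale
        w = 1.0 / (np.abs(x) + eps)   # penalise small coefficients more next round
    return x
```

The inner `solver` here could be any routine returning a sparse estimate (e.g. a matching-pursuit or basis-pursuit implementation); the wrapper itself is solver-agnostic.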
Abstract:
For a general tripartite system in some pure state, an observer possessing any two parts will see them in a mixed state. As a consequence of the Hughston-Jozsa-Wootters theorem, each basis set of local measurement on the third part corresponds to a particular decomposition of the bipartite mixed state into a weighted sum of pure states. It is possible to associate an average bipartite entanglement $\bar{S}$ with each of these decompositions. The maximum value of $\bar{S}$ is called the entanglement of assistance ($E_A$), while the minimum value is called the entanglement of formation ($E_F$). An appropriate choice of the basis set of local measurement corresponds to an optimal value of $\bar{S}$; we find here a generic optimality condition for the choice of the basis set. In the present context, we analyze the tripartite states W and GHZ and show how they are fundamentally different. (C) 2014 Elsevier B.V. All rights reserved.
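For reference, the quantities named above follow the standard definitions: over pure-state decompositions of the bipartite mixed state, the average entanglement is the decomposition-weighted entropy of entanglement, and $E_A$ and $E_F$ are its extreme values (the notation here is generic and may differ from the paper's).

```latex
\rho_{AB} = \sum_i p_i\,|\psi_i\rangle\langle\psi_i|, \qquad
\bar{S}\bigl(\{p_i,\psi_i\}\bigr) = \sum_i p_i\,E(\psi_i), \qquad
E_A(\rho_{AB}) = \max_{\{p_i,\psi_i\}} \bar{S}, \qquad
E_F(\rho_{AB}) = \min_{\{p_i,\psi_i\}} \bar{S}
```

Here $E(\psi_i)$ is the entropy of entanglement of $|\psi_i\rangle$, i.e. the von Neumann entropy of either of its reduced states.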
Abstract:
A nonlinear stochastic filtering scheme based on a Gaussian sum representation of the filtering density and an annealing-type iterative update, which is additive and uses an artificial diffusion parameter, is proposed. The additive nature of the update relieves the problem of weight collapse often encountered with filters employing weighted-particle-based empirical approximations to the filtering density. The proposed Monte Carlo filter bank conforms in structure to the parent nonlinear filtering (Kushner-Stratonovich) equation and possesses excellent mixing properties, enabling adequate exploration of the phase space of the state vector. The filter bank, presently assessed against a few carefully chosen numerical examples, provides ample evidence of remarkable performance in terms of filter convergence and estimation accuracy vis-a-vis most other competing filters, especially in higher-dimensional dynamic system identification problems, including cases that may demand estimating relatively minor variations in the parameter values from their reference states. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
We compute the instantaneous contributions to the spherical harmonic modes of gravitational waveforms from compact binary systems in general orbits up to the third post-Newtonian (PN) order. We further extend these results for compact binaries in quasielliptical orbits using the 3PN quasi-Keplerian representation of the conserved dynamics of compact binaries in eccentric orbits. Using the multipolar post-Minkowskian formalism, starting from the different mass and current-type multipole moments, we compute the spin-weighted spherical harmonic decomposition of the instantaneous part of the gravitational waveform. These are terms which are functions of the retarded time and do not depend on the history of the binary evolution. Together with the hereditary part, which depends on the binary's dynamical history, these waveforms form the basis for construction of accurate templates for the detection of gravitational wave signals from binaries moving in quasielliptical orbits.
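For context, the spin-weighted spherical harmonic decomposition referred to here is conventionally written as below; the mode normalisation and sign conventions may differ from those adopted in the paper.

```latex
h_{+} - i\,h_{\times} \;=\; \sum_{\ell \ge 2}\,\sum_{m=-\ell}^{\ell} h^{\ell m}\, {}_{-2}Y_{\ell m}(\theta,\phi)
```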
Abstract:
This paper presents an experimental procedure to determine the acoustic and vibration behavior of an inverter-fed induction motor based on measurements of the current spectrum, acoustic noise spectrum, overall noise in dB, and overall A-weighted noise in dBA. Measurements are carried out on space-vector modulated 8-hp and 3-hp induction motor drives over a range of carrier frequencies at different modulation frequencies. The experimental data help to distinguish between regions of high and low acoustic noise levels. The measurements also bring out the impact of carrier frequency on the acoustic noise. The sensitivity of the overall noise to carrier frequency is indicative of the relative dominance of the high-frequency electromagnetic noise over mechanical and aerodynamic components of noise. Based on the measured current and acoustic noise spectra, the ratio of dynamic deflection on the stator surface to the product of fundamental and harmonic current amplitudes is obtained at each operating point. The variation of this ratio of deflection to current product with carrier frequency indicates the resonant frequency clearly and also gives a measure of the amplification of vibration at frequencies close to the resonant frequency. This ratio is useful to predict the magnitude of acoustic noise corresponding to significant time-harmonic currents flowing in the stator winding.
Abstract:
In this work, the hypothesis testing problem of spectrum sensing in a cognitive radio is formulated as a goodness-of-fit test against the general class of noise distributions used in most communications-related applications. A simple, general, and powerful spectrum sensing technique based on the number of weighted zero-crossings in the observations is proposed. For the cases of uniform and exponential weights, an expression for computing the near-optimal detection threshold that meets a given false-alarm probability constraint is obtained. The proposed detector is shown to be robust to two commonly encountered types of noise uncertainty, namely, noise model uncertainty, where the PDF of the noise process is not completely known, and noise parameter uncertainty, where the parameters associated with the noise PDF are either partially or completely unknown. Simulation results validate our analysis and illustrate the performance benefits of the proposed technique relative to existing methods, especially in the low-SNR regime and in the presence of noise uncertainties.
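A minimal Python sketch of the test statistic described above follows: a weighted count of sign changes with exponential (or, for rho = 1, uniform) weights. The decision direction and the Monte Carlo threshold calibration are illustrative assumptions; the paper instead gives a closed-form near-optimal threshold for a target false-alarm probability.

```python
import numpy as np

def weighted_zero_crossings(x, rho=1.0):
    """Weighted number of zero-crossings; rho = 1 gives uniform weights,
    0 < rho < 1 gives exponentially decaying weights."""
    s = np.sign(x)
    flips = (s[1:] * s[:-1]) < 0                 # sign change between consecutive samples
    weights = rho ** np.arange(flips.size)
    return np.sum(weights * flips)

def calibrate_threshold(noise_gen, n, pfa, rho=1.0, trials=10000):
    """Illustrative Monte Carlo threshold under the noise-only hypothesis:
    noise alone crosses zero often, so an unusually small weighted count
    is taken here as evidence that a signal is present."""
    stats = np.sort([weighted_zero_crossings(noise_gen(n), rho) for _ in range(trials)])
    return stats[int(pfa * trials)]              # declare occupancy when the statistic falls below this
```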
Abstract:
Local polynomial approximation of data is an approach towards signal denoising. Savitzky-Golay (SG) filters are finite-impulse-response kernels which convolve with the data to yield a polynomial approximation for a chosen set of filter parameters. When the noise follows Gaussian statistics, minimization of the mean-squared error (MSE) between the noisy signal and its polynomial approximation is optimum in the maximum-likelihood (ML) sense, but the MSE criterion is not optimal for non-Gaussian noise conditions. In this paper, we robustify the SG filter for applications involving noise following a heavy-tailed distribution. The optimal filtering criterion is achieved by $\ell_1$-norm minimization of the error through the iteratively reweighted least-squares (IRLS) technique. It is interesting to note that at any stage of the iteration we solve a weighted SG filter by minimizing an $\ell_2$ norm, but the process converges to the $\ell_1$-minimized output. The results show consistent improvement over the standard SG filter performance.
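The IRLS idea above can be sketched per analysis frame as follows: each iteration is an ordinary weighted least-squares (i.e. weighted SG) polynomial fit, and the inverse-residual weights drive the sequence of $\ell_2$ solutions toward the $\ell_1$-optimal fit. The frame length, polynomial order, and the small regulariser eps are illustrative; a full filter would slide this over the signal and keep the centre sample of each frame.

```python
import numpy as np

def irls_poly_frame(y, order=3, n_iter=20, eps=1e-6):
    """l1 polynomial fit of one frame via iteratively reweighted least squares."""
    n = len(y)
    t = np.arange(n) - (n - 1) / 2.0          # centred sample index
    V = np.vander(t, order + 1)               # polynomial design matrix
    w = np.ones(n)
    for _ in range(n_iter):
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(sw[:, None] * V, sw * y, rcond=None)  # weighted SG step
        r = y - V @ c
        w = 1.0 / np.maximum(np.abs(r), eps)  # reweight by inverse absolute residual
    return V @ c                              # denoised (l1-fitted) frame
```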
Abstract:
This paper considers the problem of receive antenna selection (AS) in a multiple-antenna communication system having a single radio-frequency (RF) chain. The AS decisions are based on noisy channel estimates obtained using known pilot symbols embedded in the data packets. The goal is to minimize the average packet error rate (PER) by exploiting the known temporal correlation of the channel. As the underlying channels are only partially observed using the pilot symbols, the problem of AS for PER minimization is cast into a partially observable Markov decision process (POMDP) framework. Under mild assumptions, the optimality of a myopic policy is established for the two-state channel case. Moreover, two heuristic AS schemes are proposed based on a weighted combination of the estimated channel states on the different antennas. These schemes utilize the continuous-valued received pilot symbols to make the AS decisions, and are shown to offer performance comparable to the POMDP approach, which requires one to quantize the channel and observations to a finite set of states. The performance improvement offered by the POMDP solution and the proposed heuristic solutions relative to existing AS training-based approaches is illustrated using Monte Carlo simulations.
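Purely as an illustration of the weighted-combination idea mentioned above, the sketch below tracks a per-antenna channel-quality estimate with an exponentially weighted recursion on the received pilot energies and selects the best antenna. The recursion, the weight alpha, and the assumption that a pilot observation is available for every antenna in each decision epoch are simplifications and not the paper's schemes.

```python
import numpy as np

def select_antenna(pilot_obs, state, alpha=0.7):
    """pilot_obs: complex pilot observations, one per antenna, for the current packet.
    state: running per-antenna channel-quality estimate (returned updated).
    alpha: weight on the past estimate, standing in for temporal channel correlation."""
    state = alpha * state + (1.0 - alpha) * np.abs(pilot_obs) ** 2
    return int(np.argmax(state)), state
```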
Abstract:
In this paper we prove weighted mixed norm estimates for Riesz transforms on the Heisenberg group and Riesz transforms associated to the special Hermite operator. From these results vector-valued inequalities for sequences of Riesz transforms associated to generalised Grushin operators and Laguerre operators are deduced.
Abstract:
The problem of cooperative beamforming for maximizing the achievable data rate of an energy-constrained two-hop amplify-and-forward (AF) network is considered. Assuming perfect channel state information (CSI) of all the nodes, we evaluate the optimal scaling factor for the relay nodes. Along with an individual power constraint on each of the relay nodes, we consider a weighted sum power constraint. The proposed iterative algorithm initially solves a set of relaxed problems with the weighted sum power constraint and then updates the solution to accommodate the individual constraints. These relaxed problems are in turn solved using a sequence of quadratic eigenvalue problems (QEPs). The key contribution of this letter is the generalization of cooperative beamforming to incorporate both the individual and the weighted sum constraints. Furthermore, we propose a novel QEP-based algorithm and discuss its convergence.
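For readers unfamiliar with QEPs, the sketch below shows how a generic quadratic eigenvalue problem $(\lambda^2 A + \lambda B + C)x = 0$ can be reduced to a standard generalised eigenvalue problem via companion linearisation; the letter's specific QEP arising from the relay scaling factors is not reproduced here.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(A, B, C):
    """Solve (lam**2 * A + lam * B + C) x = 0 by companion linearisation."""
    n = A.shape[0]
    Z, I = np.zeros((n, n)), np.eye(n)
    L = np.block([[Z, I], [-C, -B]])
    M = np.block([[I, Z], [Z, A]])
    lam, V = eig(L, M)            # generalised eigenproblem  L z = lam M z
    return lam, V[:n, :]          # 2n eigenvalues and the x-part of each eigenvector
```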
Abstract:
We are given a set of sensors at given locations, a set of potential locations for placing base stations (BSs, or sinks), and another set of potential locations for placing wireless relay nodes. There is a cost for placing a BS and a cost for placing a relay. The problem we consider is to select a set of BS locations, a set of relay locations, and an association of sensor nodes with the selected BS locations, so that the number of hops in the path from each sensor to its BS is bounded by $h_{\max}$, and among all such feasible networks, the cost of the selected network is the minimum. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard, and is hard to even approximate within a constant factor. For this problem, we propose a polynomial time approximation algorithm (SmartSelect) based on a relay placement algorithm proposed in our earlier work, along with a modification of the greedy algorithm for weighted set cover. We have analyzed the worst case approximation guarantee for this algorithm. We have also proposed a polynomial time heuristic to improve upon the solution provided by SmartSelect. Our numerical results demonstrate that the algorithms provide good quality solutions using very little computation time in various randomly generated network scenarios.
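Since SmartSelect builds on a modification of the greedy algorithm for weighted set cover, a plain (unmodified) greedy weighted set cover is sketched below for reference; the adaptation to hop-constrained BS/relay selection is not reproduced here.

```python
def greedy_weighted_set_cover(universe, sets, costs):
    """Pick, at each step, the set with the smallest cost per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best, best_ratio = None, float("inf")
        for i, s in enumerate(sets):
            gain = len(uncovered & s)
            if gain and costs[i] / gain < best_ratio:
                best, best_ratio = i, costs[i] / gain
        if best is None:                     # some element cannot be covered at all
            return None
        chosen.append(best)
        uncovered -= sets[best]
    return chosen                            # indices of the selected sets
```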
Abstract:
Network theory has become an excellent method of choice through which biological data are smoothly integrated to gain insights into complex biological problems. Understanding protein structure, folding, and function has been an important problem, which is being extensively investigated by the network approach. Since the sequence uniquely determines the structure, this review focuses on the networks of non-covalently connected amino acid side chains in proteins. Questions in structural biology are addressed within the framework of such a formalism. While general applications are mentioned in this review, challenging problems which have demanded the attention of the scientific community for a long time, such as allostery and protein folding, are considered in greater detail. Our aim has been to explore these important problems through the eyes of networks. Various methods of constructing protein structure networks (PSN) are consolidated. They include methods based on geometry, on edges weighted by different schemes, and on bipartite networks of protein-nucleic acid complexes. A number of network metrics that elegantly capture general features, as well as specific features related to phenomena such as allostery and protein model validation, are described. Additionally, an integration of network theory with ensembles of equilibrium structures of a single protein, or with a large number of structures from the data bank, is presented to perceive complex phenomena from a network perspective. Finally, we briefly discuss the capabilities, limitations, and scope for further exploration of protein structure networks.
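As one concrete and deliberately simplified instance of the geometry-based construction mentioned above, the sketch below builds a residue contact network from representative coordinates and a distance cutoff; the cutoff value and the inverse-distance edge weight are illustrative choices, not the interaction-strength schemes used for side-chain PSNs in the review.

```python
import numpy as np
import networkx as nx

def protein_structure_network(coords, labels, cutoff=6.5):
    """coords: (N, 3) representative coordinates, one per residue; labels: residue ids;
    cutoff: contact distance in angstroms (6.5 A is an illustrative value)."""
    G = nx.Graph()
    G.add_nodes_from(labels)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)  # pairwise distances
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            if d[i, j] <= cutoff:
                G.add_edge(labels[i], labels[j], weight=1.0 / d[i, j])    # closer pairs weighted higher
    return G
```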
Abstract:
In this work, we study the well-known r-DIMENSIONAL k-MATCHING ((r, k)-DM) and r-SET k-PACKING ((r, k)-SP) problems. Given a universe $U := U_1 \cup \dots \cup U_r$ and an $r$-uniform family $\mathcal{F} \subseteq U_1 \times \dots \times U_r$, the (r, k)-DM problem asks if $\mathcal{F}$ admits a collection of $k$ mutually disjoint sets. Given a universe $U$ and an $r$-uniform family $\mathcal{F} \subseteq 2^U$, the (r, k)-SP problem asks if $\mathcal{F}$ admits a collection of $k$ mutually disjoint sets. We employ techniques based on dynamic programming and representative families. This leads to a deterministic algorithm with running time $O(2.851^{(r-1)k} \cdot |\mathcal{F}| \cdot n \log^2 n \cdot \log W)$ for the weighted version of (r, k)-DM, where $W$ is the maximum weight in the input, and a deterministic algorithm with running time $O(2.851^{(r-0.5501)k} \cdot |\mathcal{F}| \cdot n \log^2 n \cdot \log W)$ for the weighted version of (r, k)-SP. Thus, we significantly improve the previous best known deterministic running times for (r, k)-DM and (r, k)-SP and the previous best known running times for their weighted versions. We rely on structural properties of (r, k)-DM and (r, k)-SP to develop algorithms that are faster than those that can be obtained by a standard use of representative sets. Incorporating the principles of iterative expansion, we obtain a better algorithm for (3, k)-DM, running in time $O(2.004^{3k} \cdot |\mathcal{F}| \cdot n \log^2 n)$. We believe that this algorithm demonstrates an interesting application of representative families in conjunction with more traditional techniques. Furthermore, we present kernels of size $O(e^r r (k-1)^r \log W)$ for the weighted versions of (r, k)-DM and (r, k)-SP, improving the previous best known kernels of size $O(r!\, r (k-1)^r \log W)$ for these problems.
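To make the (r, k)-SP problem statement concrete, a naive exponential-time search is sketched below; it merely checks the definition and contains none of the representative-families machinery responsible for the running times quoted above.

```python
def has_k_disjoint(family, k, used=frozenset()):
    """Does the family (a list of sets) contain k mutually disjoint sets?
    Brute-force recursion, exponential in the worst case."""
    if k == 0:
        return True
    for i, s in enumerate(family):
        if not (s & used):                                   # s is disjoint from the sets chosen so far
            if has_k_disjoint(family[i + 1:], k - 1, used | s):
                return True
    return False
```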
Abstract:
Approximately 140 million years ago, the Indian plate separated from Gondwana and migrated by almost 90° of latitude to its current location, forming the Himalayan-Tibetan system. Large discrepancies exist in the rate of migration of the Indian plate during the Phanerozoic. Here we describe a new approach to paleo-latitudinal reconstruction based on simultaneous determination of the carbonate formation temperature and the $\delta^{18}O$ of soil carbonates, constrained by the abundances of $^{13}C$-$^{18}O$ bonds in palaeosol carbonates. Assuming that the palaeosol carbonates have a strong relationship with the composition of the meteoric water, the $\delta^{18}O$ of palaeosol carbonate can constrain the paleo-latitudinal position. Weighted mean annual rainfall $\delta^{18}O_w$ values measured at several stations across the southern latitudes are used to derive a polynomial equation, $\delta^{18}O_w = -0.006 \times \mathrm{LAT}^2 - 0.294 \times \mathrm{LAT} - 5.29$, which is used for latitudinal reconstruction. We use this approach to show the northward migration of the Indian plate from 46.8 ± 5.8°S during the Permian (269 M.y.) to 30 ± 11°S during the Triassic (248 M.y.), 14.7 ± 8.7°S during the early Cretaceous (135 M.y.), and 28 ± 8.8°S during the late Cretaceous (68 M.y.). Soil carbonate $\delta^{18}O$ provides an alternative method for tracing the latitudinal position of the Indian plate in the past, and the estimates are consistent with the paleo-magnetic records which document the position of the Indian plate prior to 135 ± 3 M.y.
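The quoted calibration can be inverted directly to recover latitude from a measured $\delta^{18}O_w$, as sketched below; this assumes, as the station data and the reconstructed positions suggest, that LAT denotes degrees of southern latitude in the 0-90° range.

```python
import math

def latitude_from_d18Ow(d18Ow):
    """Invert d18Ow = -0.006*LAT**2 - 0.294*LAT - 5.29 for LAT (degrees south),
    keeping the root that falls in the physically meaningful 0-90 range."""
    a, b, c = 0.006, 0.294, 5.29 + d18Ow     # 0.006*LAT**2 + 0.294*LAT + (5.29 + d18Ow) = 0
    disc = b * b - 4 * a * c
    if disc < 0:
        raise ValueError("d18Ow outside the calibrated range")
    return (-b + math.sqrt(disc)) / (2 * a)

# e.g. latitude_from_d18Ow(-32.2) gives roughly 46.8 (degrees south)
```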
Abstract:
Standard approaches to ellipse fitting are based on the minimization of an algebraic or geometric distance between the given data and a template ellipse. When the data are noisy and come from a partial ellipse, the state-of-the-art methods tend to produce biased ellipses. We rely on the sampling structure of the underlying signal and show that the x- and y-coordinate functions of an ellipse are finite-rate-of-innovation (FRI) signals, and that their parameters can be estimated from partial data. We consider both uniform and nonuniform sampling scenarios in the presence of noise and show that the data can be modeled as a sum of random amplitude-modulated complex exponentials. A low-pass filter is used to suppress noise and approximate the data as a sum of weighted complex exponentials. The annihilating filter used in FRI approaches is applied to estimate the sampling interval in closed form. We perform experiments on simulated and real data, and assess both objective and subjective performance in comparison with state-of-the-art ellipse fitting methods. The proposed method produces ellipses with less bias, and the mean-squared error is lower by about 2 to 10 dB. We show applications of ellipse fitting to iris images, starting from partial edge contours, and to free-hand ellipses drawn on a touch-screen tablet.
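A minimal sketch of the annihilating-filter step central to FRI methods is given below: it recovers, in the least-squares sense, the exponentials of a signal modelled as a sum of K complex exponentials. The paper's full pipeline (low-pass filtering of noisy samples and the mapping from the recovered parameters to the sampling interval and ellipse parameters) is not reproduced here.

```python
import numpy as np

def annihilating_filter_roots(x, K):
    """For x[n] = sum_k a_k * u_k**n, estimate the exponentials u_k.

    With h[0] = 1, the remaining filter taps satisfy
        x[n] + h[1]*x[n-1] + ... + h[K]*x[n-K] = 0   for n = K, ..., N-1,
    solved here in the least-squares sense; the roots of h recover u_k."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    A = np.array([[x[n - i] for i in range(1, K + 1)] for n in range(K, N)])
    b = -x[K:N]
    h, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.roots(np.concatenate(([1.0 + 0j], h)))
```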