180 results for Polynomial penalty functions
Abstract:
The problem of sensor-network-based distributed intrusion detection in the presence of clutter is considered. It is argued that sensing is best regarded as a local phenomenon, in that only sensors in the immediate vicinity of an intruder are triggered. In such a setting, lack of knowledge of the intruder's location gives rise to correlated sensor readings. A signal-space viewpoint is introduced in which the noise-free sensor readings associated with intruder and clutter appear as surfaces f(s) and f(g), and the problem reduces to determining, in a distributed fashion, whether the current noisy sensor reading is best classified as intruder or clutter. Two approaches to distributed detection are pursued. In the first, a decision surface separating f(s) and f(g) is identified using the Neyman-Pearson criterion. Thereafter, the individual sensor nodes interactively exchange bits to determine whether the sensor readings lie on one side or the other of the decision surface. Bounds on the number of bits that need to be exchanged are derived, based on communication-complexity (CC) theory. A lower bound derived for the two-party average-case CC of general functions is compared against the performance of a greedy algorithm. The extension to the multi-party case is straightforward and is briefly discussed. The average-case CC of the relevant greater-than (GT) function is characterized to within two bits. Under the second approach, each sensor node broadcasts a single bit arising from an appropriate two-level quantization of its own sensor reading, keeping in mind the fusion rule to be subsequently applied at a local fusion center. The optimality of a threshold test as a quantization rule is proved under simplifying assumptions. Finally, results from a QualNet simulation of the algorithms are presented, including intruder tracking using a naive polynomial-regression algorithm.
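As a rough illustration of the second approach, the sketch below has each sensor broadcast one bit from a threshold test on its noisy reading, with a simple counting rule at the fusion center; the signatures f(s) and f(g), the midpoint thresholds, and the majority fusion rule are illustrative assumptions, not the paper's optimized design.

```python
import numpy as np

rng = np.random.default_rng(0)

def quantize(readings, thresholds):
    """Each sensor broadcasts a single bit: is its noisy reading above its threshold?"""
    return (readings > thresholds).astype(int)

def fusion_counting(bits, k):
    """Local fusion center declares 'intruder' if at least k sensors report a 1."""
    return int(bits.sum() >= k)

# Illustrative setup: 10 sensors, hypothetical noise-free signatures f_S (intruder)
# and f_G (clutter), additive Gaussian noise, simple midpoint thresholds.
n = 10
f_S = np.linspace(1.0, 2.0, n)
f_G = np.zeros(n)
sigma = 0.5
thresholds = (f_S + f_G) / 2

intruder_reading = f_S + sigma * rng.standard_normal(n)
clutter_reading = f_G + sigma * rng.standard_normal(n)

print(fusion_counting(quantize(intruder_reading, thresholds), k=n // 2))  # expected 1
print(fusion_counting(quantize(clutter_reading, thresholds), k=n // 2))   # expected 0
```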
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques such as ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; each iteration solves an MKL problem involving m kernels and requires m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
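For intuition about the MD extension, the following sketch shows one mirror-descent step over the set of trace-normalized psd matrices, using the von Neumann entropy as mirror map (the matrix exponentiated-gradient update); this is a generic illustration rather than the EMKL algorithm itself, and the similarity matrix, loss gradient, and step size are placeholders.

```python
import numpy as np
from scipy.linalg import expm, logm

def md_step_psd(K, grad, eta):
    """One mirror-descent step on {K psd, tr(K) = 1} with the von Neumann entropy
    as mirror map (matrix exponentiated-gradient update)."""
    M = expm(logm(K) - eta * grad)
    return M / np.trace(M)

# Illustrative use: push K towards alignment with a placeholder similarity matrix S.
n = 5
rng = np.random.default_rng(0)
S = rng.standard_normal((n, n)); S = S @ S.T        # placeholder similarity matrix
K = np.eye(n) / n                                    # maximally mixed starting point
for _ in range(50):
    grad = -S / np.linalg.norm(S)                    # placeholder (linear) loss gradient
    K = md_step_psd(K, grad, eta=0.1)

print(np.trace(K))                                   # stays 1
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))        # stays psd
```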
Abstract:
We study the exact one-electron propagator and spectral function of a solvable model of interacting electrons due to Schulz and Shastry. The solution previously found for the energies and wave functions is extended to give spectral functions that turn out to be computable, interesting, and nontrivial. They provide one of the few cases in which the spectral functions are known both exactly and asymptotically.
Abstract:
The paper deals with the existence of a quadratic Lyapunov function V = x′P(t)x for an exponentially stable linear system with varying coefficients described by the vector differential equation ẋ = A(t)x. The derivative dV/dt is allowed to be only negative semi-definite, provided the locus dV/dt = 0 does not contain any arc of a system trajectory. It is then shown that the coefficient matrix A(t) of the exponentially stable system …
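In the time-invariant special case ẋ = Ax, the corresponding construction is the algebraic Lyapunov equation A′P + PA = −Q; a minimal sketch of that special case (not the time-varying construction studied in the paper) follows.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Time-invariant illustration: for a Hurwitz A, solve A^T P + P A = -Q for P > 0,
# so that V(x) = x^T P x is a quadratic Lyapunov function.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])               # eigenvalues -1 and -2: exponentially stable
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)     # solves A^T P + P A = -Q

print(np.linalg.eigvalsh(P))               # positive eigenvalues => V = x'Px is valid
print(A.T @ P + P @ A + Q)                 # ~ 0, confirming the Lyapunov equation
```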
Abstract:
Analytical studies are carried out to minimize the acquisition time in phase-locked loop (PLL) applications using aiding functions. A second-order aided PLL is realized, with the help of the quasi-stationary approach, to verify the acquisition behavior in the absence of noise. The acquisition time is measured both from a study of the LPF output transient and by employing a lock-detecting and indicating circuit to cross-check experimental and analytical results. A closed-form solution is obtained for the evaluation of the acquisition time using different aiding functions. The aiding signal is simple and economical and can be used with state-of-the-art hardware.
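For context, the classical closed-form approximation of the unaided pull-in time of a second-order PLL, T_p ≈ (Δω)²/(2ζω_n³), gives the baseline that aiding functions are meant to shorten; the sketch below evaluates it with illustrative loop parameters (this is the textbook approximation, not the paper's closed-form result for aided acquisition).

```python
import numpy as np

def pull_in_time(delta_w, wn, zeta):
    """Classical approximation of the unaided pull-in time of a second-order PLL:
    T_p ~ (delta_w)^2 / (2 * zeta * wn^3)."""
    return delta_w**2 / (2.0 * zeta * wn**3)

# Illustrative loop parameters (not taken from the paper).
delta_w = 2 * np.pi * 500.0   # initial frequency offset, rad/s
wn = 2 * np.pi * 50.0         # loop natural frequency, rad/s
zeta = 0.707                  # damping factor

print(f"estimated unaided pull-in time: {pull_in_time(delta_w, wn, zeta) * 1e3:.1f} ms")
```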
Abstract:
We address the problem of local-polynomial modeling of smooth time-varying signals with unknown functional form, in the presence of additive noise. The problem formulation is in the time domain and the polynomial coefficients are estimated in the pointwise minimum mean square error (PMMSE) sense. The choice of the window length for local modeling introduces a bias-variance tradeoff, which we solve optimally by using the intersection-of-confidence-intervals (ICI) technique. The combination of the local polynomial model and the ICI technique gives rise to an adaptive signal model equipped with a time-varying PMMSE-optimal window length whose performance is superior to that obtained by using a fixed window length. We also evaluate the sensitivity of the ICI technique with respect to the confidence-interval width. Simulation results on electrocardiogram (ECG) signals show that at 0 dB signal-to-noise ratio (SNR), one can achieve about 12 dB improvement in SNR. Monte Carlo performance analysis shows that the performance is comparable to that of basic wavelet techniques. For 0 dB SNR, the adaptive window technique yields about 2-3 dB higher SNR than wavelet regression techniques, and for SNRs greater than 12 dB, the wavelet techniques yield about 2 dB higher SNR.
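The window-selection step lends itself to a short illustration. Below is a minimal sketch assuming a causal least-squares local-polynomial fit and the standard ICI stopping rule with an illustrative Γ; the paper's PMMSE weighting is not reproduced, and the window grid, test signal, and noise level are placeholders.

```python
import numpy as np

def local_poly_estimate(y, t, h, order=1):
    """LS fit of a degree-`order` polynomial over the causal window [t-h, t],
    evaluated at sample t. Returns (estimate, std of the estimate)."""
    idx = np.arange(max(0, t - h), t + 1)
    X = np.vander(idx - t, order + 1)                 # columns: (k-t)^order, ..., 1
    coef, res, *_ = np.linalg.lstsq(X, y[idx], rcond=None)
    est = coef[-1]                                    # value of the fit at k = t
    dof = max(len(idx) - (order + 1), 1)
    sigma2 = (res[0] / dof) if res.size else 0.0
    var = sigma2 * np.linalg.pinv(X.T @ X)[-1, -1]
    return est, np.sqrt(var)

def ici_window(y, t, windows, gamma=2.0):
    """ICI rule: grow the window while the confidence intervals
    [est - gamma*std, est + gamma*std] still intersect."""
    lo, hi, best = -np.inf, np.inf, None
    for h in windows:
        est, std = local_poly_estimate(y, t, h)
        lo, hi = max(lo, est - gamma * std), min(hi, est + gamma * std)
        if lo > hi:
            break
        best = est
    return best

# Illustrative use on a noisy smooth signal.
rng = np.random.default_rng(0)
n = 512
clean = np.sin(2 * np.pi * np.arange(n) / 128)
noisy = clean + 0.3 * rng.standard_normal(n)
den = [ici_window(noisy, t, windows=[4, 8, 16, 32, 64]) for t in range(64, n)]
print(np.mean((np.array(den) - clean[64:n]) ** 2))    # denoised MSE
```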
Abstract:
A method of testing for parametric faults of analog circuits, based on a polynomial representation of the fault-free function of the circuit, is presented. The response of the circuit under test (CUT) is estimated as a polynomial in the applied input voltage at relevant frequencies, apart from DC. Classification of the CUT is based on a comparison of the estimated polynomial coefficients with those of the fault-free circuit. The method needs very little augmentation of the circuit to make it testable, as only output parameters are used for classification. The procedure is shown to uncover several parametric faults causing deviations smaller than 5% from the nominal values. Fault diagnosis based upon the sensitivity of the polynomial coefficients at relevant frequencies is also proposed.
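As an illustration of the classification step, the following sketch fits the measured CUT response as a polynomial in the input voltage and compares the estimated coefficients against fault-free ("golden") coefficients; the transfer curve, noise level, and the coefficient tolerance are illustrative placeholders, not the paper's decision rule.

```python
import numpy as np

def estimate_coeffs(v_in, v_out, degree=3):
    """Fit the measured CUT response as a polynomial in the applied input voltage."""
    return np.polyfit(v_in, v_out, degree)

def classify_cut(coeffs, golden_coeffs, rel_tol=0.05):
    """Flag the CUT as faulty if any estimated coefficient deviates from its
    fault-free (golden) value by more than rel_tol (illustrative decision rule)."""
    dev = np.abs(coeffs - golden_coeffs) / np.abs(golden_coeffs)
    return bool(np.any(dev > rel_tol)), dev

# Illustrative golden response: a mildly non-linear transfer curve (placeholder values).
rng = np.random.default_rng(0)
v_in = np.linspace(-1.0, 1.0, 21)
golden = np.array([0.2, -0.1, 0.9, 0.1])                       # cubic, ..., constant
v_out_good = np.polyval(golden, v_in) + 5e-4 * rng.standard_normal(v_in.size)
v_out_bad = np.polyval(golden * [1.0, 1.0, 0.93, 1.0], v_in)   # ~7% gain fault

print(classify_cut(estimate_coeffs(v_in, v_out_good), golden)[0])  # expected False
print(classify_cut(estimate_coeffs(v_in, v_out_bad), golden)[0])   # expected True
```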
Abstract:
DC testing of parametric faults in non-linear analog circuits, based on a new transformation called the V-Transform that acts on the polynomial coefficient expansion of the circuit function, is presented. The V-Transform serves the dual purpose of monotonizing the polynomial coefficients of the circuit-function expansion and increasing the sensitivity of these coefficients to circuit parameters. The sensitivity of the V-Transform Coefficients (VTCs) to circuit parameters is up to 3x-5x higher than that of the polynomial coefficients. As a case study, we consider a benchmark elliptic filter to validate our method. The technique is shown to uncover hitherto untestable parametric faults whose sizes are smaller than 10% of the nominal values.
Abstract:
In this paper, we develop and analyze C⁰ penalty methods for the fully nonlinear Monge-Ampère equation det(D²u) = f in two dimensions. The key idea in designing our methods is to build discretizations such that the resulting discrete linearizations are symmetric, stable, and consistent with the continuous linearization. We are then able to show the well-posedness of the penalty method as well as quasi-optimal error estimates using the Banach fixed-point theorem as our main tool. Numerical experiments are presented which support the theoretical results.
Abstract:
The statistically steady humidity distribution resulting from an interaction of advection, modelled as an uncorrelated random walk of moist parcels on an isentropic surface, and a vapour sink, modelled as immediate condensation whenever the specific humidity exceeds a specified saturation humidity, is explored with theory and simulation. A source supplies moisture at the deep-tropical southern boundary of the domain and the saturation humidity is specified as a monotonically decreasing function of distance from the boundary. The boundary source balances the interior condensation sink, so that a stationary, spatially inhomogeneous humidity distribution emerges. An exact solution of the Fokker-Planck equation delivers a simple expression for the resulting probability density function (PDF) of the water-vapour field, and also of the relative humidity. This solution agrees completely with a numerical simulation of the process, and the humidity PDF exhibits several features of interest, such as bimodality close to the source and unimodality further from the source. The PDFs of specific and relative humidity are broad and non-Gaussian. The domain-averaged relative-humidity PDF is bimodal with distinct moist and dry peaks, a feature which we show agrees with middleworld isentropic PDFs derived from the ERA-Interim dataset.
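The advection-condensation process is easy to mimic numerically. The following is a minimal sketch assuming a 1-D domain on [0, 1], Brownian parcels, an illustrative exponential saturation profile, instantaneous condensation, and resaturation of parcels reaching the source boundary; it is a toy simulation of the same process, not the paper's Fokker-Planck solution.

```python
import numpy as np

rng = np.random.default_rng(0)

def q_sat(y, alpha=3.0):
    """Illustrative saturation humidity, decreasing away from the source at y = 0."""
    return np.exp(-alpha * y)

# 1-D advection-condensation toy model on [0, 1].
n_parcels, n_steps, dt, kappa = 20_000, 5_000, 2e-4, 1.0
y = rng.uniform(0.0, 1.0, n_parcels)
q = q_sat(y)                                  # start saturated

for _ in range(n_steps):
    y += np.sqrt(2 * kappa * dt) * rng.standard_normal(n_parcels)
    y = np.abs(y)                             # reflect at y = 0
    y = np.where(y > 1.0, 2.0 - y, y)         # reflect at y = 1
    q = np.minimum(q, q_sat(y))               # immediate condensation sink
    at_source = y < 0.01
    q[at_source] = q_sat(y[at_source])        # boundary moisture source

rh = q / q_sat(y)                             # relative humidity of each parcel
hist, _ = np.histogram(rh, bins=50, range=(0.0, 1.0), density=True)
print(hist[:5], hist[-5:])                    # dry and moist ends of the RH PDF
```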
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
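To make the trade-off concrete, here is a minimal numeric sketch of a single user's rate choice, assuming a logarithmic utility, a quadratic disutility of the shortfall, and a linear payment term; the multiplicative congestion-penalty and the paper's exact objective are not reproduced, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def best_rate(contracted, price, a=1.0, b=2.0, r_max=15.0):
    """Illustrative single-user problem: maximize a concave utility of the rate minus
    a convex disutility of the shortfall (contracted minus sent rate) minus payment."""
    utility = lambda r: a * np.log(1.0 + r)
    disutility = lambda r: b * max(contracted - r, 0.0) ** 2
    objective = lambda r: -(utility(r) - disutility(r) - price * r)
    return minimize_scalar(objective, bounds=(0.0, r_max), method="bounded").x

# As the price rises the chosen rate falls, but the disutility term keeps it
# close to the contracted rate.
for price in (0.1, 0.4, 0.9):
    print(price, round(best_rate(contracted=2.0, price=price), 3))
```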