291 results for Rejection-sampling Algorithm

at Indian Institute of Science - Bangalore - India


Relevance:

90.00%

Publisher:

Abstract:

Using a Girsanov change of measures, we propose novel variations within a particle-filtering algorithm, as applied to the inverse problem of state and parameter estimations of nonlinear dynamical systems of engineering interest, toward weakly correcting for the linearization or integration errors that almost invariably occur whilst numerically propagating the process dynamics, typically governed by nonlinear stochastic differential equations (SDEs). Specifically, the correction for linearization, provided by the likelihood or the Radon-Nikodym derivative, is incorporated within the evolving flow in two steps. Once the likelihood, an exponential martingale, is split into a product of two factors, correction owing to the first factor is implemented via rejection sampling in the first step. The second factor, which is directly computable, is accounted for via two different schemes, one employing resampling and the other using a gain-weighted innovation term added to the drift field of the process dynamics thereby overcoming the problem of sample dispersion posed by resampling. The proposed strategies, employed as add-ons to existing particle filters, the bootstrap and auxiliary SIR filters in this work, are found to non-trivially improve the convergence and accuracy of the estimates and also yield reduced mean square errors of such estimates vis-a-vis those obtained through the parent-filtering schemes.
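
As a rough illustration of the two-step correction, the Python sketch below applies rejection sampling against the first likelihood factor and resampling against the second; the factorized values w1, w2 and the bound w1_max are illustrative stand-ins for the paper's exact expressions, not its scheme.

```python
import numpy as np

rng = np.random.default_rng(1)

def weak_correction(particles, w1, w2, w1_max):
    """Two-step weak correction of linearization error (illustrative sketch).
    Step 1: rejection sampling against the first likelihood factor w1.
    Step 2: multinomial resampling against the second, computable factor w2."""
    n = len(particles)
    # Step 1: accept each particle with probability w1 / w1_max; a rejected
    # particle is replaced by a uniform draw from the accepted pool.
    accept = rng.uniform(size=n) < w1 / w1_max
    if accept.any():
        pool = np.flatnonzero(accept)
        idx = np.where(accept, np.arange(n), rng.choice(pool, size=n))
        particles, w2 = particles[idx], w2[idx]
    # Step 2: resample proportionally to the second, directly computable factor.
    p = w2 / w2.sum()
    return particles[rng.choice(n, size=n, p=p)]
```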

Relevance:

80.00%

Publisher:

Abstract:

The Girsanov linearization method (GLM), proposed earlier in Saha, N., and Roy, D., 2007, "The Girsanov Linearisation Method for Stochastically Driven Nonlinear Oscillators," J. Appl. Mech., 74, pp. 885-897, is reformulated to arrive at a nearly exact, semianalytical, weak and explicit scheme for nonlinear mechanical oscillators under additive stochastic excitations. At the heart of the reformulated linearization is a temporally localized rejection sampling strategy that, combined with a resampling scheme, enables selecting from and appropriately modifying an ensemble of locally linearized trajectories while weakly applying the Girsanov correction (the Radon-Nikodym derivative) for the linearization errors. The semianalyticity is due to an explicit linearization of the nonlinear drift terms, and it plays a crucial role in keeping the Radon-Nikodym derivative "nearly bounded" above by the inverse of the linearization time step (which means that only a subset of linearized trajectories with low, yet finite, probability exceeds this bound). Drift linearization is conveniently accomplished via the first few (lower-order) terms in the associated stochastic (Ito) Taylor expansion to exclude (multiple) stochastic integrals from the numerical treatment. Similarly, the Radon-Nikodym derivative, which is a strictly positive, exponential (super-) martingale, is converted to a canonical form and evaluated over each time step without directly computing the stochastic integrals appearing in its argument. Through their numerical implementations for a few low-dimensional nonlinear oscillators, the proposed variants of the scheme, presently referred to as the Girsanov corrected linearization method (GCLM), are shown to exhibit remarkably higher numerical accuracy over a much larger range of the time step size than is possible with the local drift-linearization schemes on their own.
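
A minimal sketch of the core idea under simplifying assumptions: one Euler-type step of a scalar SDE with the drift linearized about the current state, plus a crudely discretized incremental Radon-Nikodym weight capped at 1/dt. The scalar oscillator, the discretization of the Girsanov exponent and the capping rule are illustrative, not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def gclm_like_step(x, dt, sigma, b, db):
    """One locally linearized step with an incremental Girsanov weight
    (a crude discretization of the Radon-Nikodym exponent; illustrative only)."""
    a = lambda y: b(x) + db(x) * (y - x)        # drift linearized about x
    dW = rng.normal(scale=np.sqrt(dt))
    x_new = x + a(x) * dt + sigma * dW          # propagate the linearized SDE
    phi = (b(x_new) - a(x_new)) / sigma         # drift mismatch at the step end
    w = np.exp(phi * dW - 0.5 * phi**2 * dt)    # incremental RN weight
    return x_new, min(w, 1.0 / dt)              # weights above ~1/dt are rare; cap them here

# example: cubic (Duffing-type) drift b(x) = -x**3 with db(x) = -3*x**2
x, w = 0.5, 1.0
for _ in range(100):
    x, w_inc = gclm_like_step(x, 1e-2, 0.3, lambda y: -y**3, lambda y: -3 * y**2)
    w *= w_inc
```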

Relevance:

30.00%

Publisher:

Abstract:

The application of computer-aided inspection, integrated with coordinate measuring machines and laser scanners, to inspect manufactured aircraft parts using robust registration of two point datasets is a subject of active research in computational metrology. This paper presents a novel approach to automated inspection that matches shapes using a modified iterative closest point (ICP) method to define a criterion for the acceptance or rejection of a part. This procedure improves upon existing methods by eliminating the need to construct either a tessellated or smooth representation of the inspected part, as well as the requirement for a priori knowledge of approximate registration and correspondence between the points representing the computer-aided design datasets and the part to be inspected. In addition, this procedure establishes a better measure of the error between the two matched datasets, and localized region-based triangulation is proposed for tracking this error. The approach described improves the convergence of the ICP technique with a dramatic decrease in computational effort. Experimental results obtained by implementing the proposed approach on both synthetic and practical data show that the method is efficient and robust, validating the algorithm; the examples demonstrate its potential for use in engineering applications.
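
For context, the sketch below implements only the textbook ICP core (brute-force nearest-neighbour correspondences plus an SVD-based rigid transform); the paper's modifications, such as region-based triangulation for error tracking, are not reproduced.

```python
import numpy as np

def icp(measured, cad, iters=50):
    """Textbook ICP between two 3-D point sets (rows are points)."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = measured @ R.T + t
        # closest CAD point for every measured point (brute-force correspondence)
        d = np.linalg.norm(moved[:, None, :] - cad[None, :, :], axis=2)
        matched = cad[d.argmin(axis=1)]
        # best rigid transform for these correspondences (Kabsch / SVD)
        mu_m, mu_c = moved.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((moved - mu_m).T @ (matched - mu_c))
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        R, t = R_step @ R, R_step @ t + (mu_c - R_step @ mu_m)
    residual = np.linalg.norm(measured @ R.T + t - matched, axis=1)
    return R, t, residual.mean()   # mean point-to-point error as an accept/reject metric
```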

Relevance:

30.00%

Publisher:

Abstract:

Orthogonal Frequency Division Multiplexing (OFDM) is a form of multi-carrier modulation in which the data stream is transmitted over a number of mutually orthogonal carriers, i.e., the carrier spacing is selected such that each carrier is located at the zeroes of all other carriers in the spectral domain. This paper proposes a novel sampling offset estimation algorithm for an OFDM system that allows the OFDM data symbols to be received error-free over a noisy channel and achieves fine timing synchronization between the transmitter and the receiver. The performance of this algorithm has been successfully evaluated over AWGN, ADSL and SUI channels.
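
The paper's estimator is not reproduced here; as background, the sketch below shows a standard cyclic-prefix correlation timing metric commonly used for coarse symbol/sampling offset estimation in OFDM.

```python
import numpy as np

def cp_timing_metric(rx, n_fft, n_cp):
    """Normalized correlation between each candidate cyclic prefix and the tail of
    the would-be OFDM symbol; the coarse symbol start lies near the metric's peak."""
    metric = np.empty(len(rx) - n_fft - n_cp)
    for d in range(len(metric)):
        head = rx[d:d + n_cp]
        tail = rx[d + n_fft:d + n_fft + n_cp]
        metric[d] = np.abs(np.vdot(tail, head)) / (np.linalg.norm(head) * np.linalg.norm(tail) + 1e-12)
    return metric

# usage sketch: coarse_start = cp_timing_metric(received, n_fft=64, n_cp=16).argmax()
```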

Relevance:

30.00%

Publisher:

Abstract:

Sampling-based planners have been successful in path planning for robots with many degrees of freedom, but they remain ineffective when the configuration space has a narrow passage. We present a new technique, based on a random walk strategy, to generate samples in narrow regions quickly, thus improving the efficiency of probabilistic roadmap planners. The algorithm substantially reduces the number of collision checks and thereby decreases computational time. The method is effective even when the structure of the narrow passage is not known, giving a significant improvement over other known methods.
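
A minimal sketch of the random-walk idea, with assumed helper names (is_free is a user-supplied collision checker): diffuse from a free configuration by small fixed-length steps, keeping only collision-free moves, so that the visited configurations densify regions that uniform sampling rarely reaches.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_walk_samples(q0, is_free, step=0.02, n_steps=200):
    """Diffuse from a free configuration q0 by small random steps, keeping only
    collision-free moves; the visited configurations become roadmap nodes."""
    q = np.asarray(q0, dtype=float)
    samples = [q.copy()]
    for _ in range(n_steps):
        direction = rng.normal(size=q.shape)
        q_try = q + step * direction / np.linalg.norm(direction)
        if is_free(q_try):                 # a single collision check per step
            q = q_try
            samples.append(q.copy())
    return np.array(samples)
```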

Relevance:

30.00%

Publisher:

Abstract:

Compressive Sampling Matching Pursuit (CoSaMP) is one of the popular greedy methods in the emerging field of Compressed Sensing (CS). In addition to its appealing empirical performance, CoSaMP also has strong theoretical guarantees for convergence. In this paper, we propose a modification to CoSaMP that adaptively chooses the dimension of the search space in each iteration using a threshold-based approach. Using Monte Carlo simulations, we show that this modification improves the reconstruction capability of the CoSaMP algorithm in both clean and noisy measurement cases. From empirical observations, we also propose an optimum value of the threshold to use in applications.
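
A sketch of the adaptive identification step, assuming an illustrative threshold rule (keep proxy entries above tau times the largest magnitude); the paper's exact rule and its recommended threshold value are not reproduced.

```python
import numpy as np

def cosamp_adaptive(Phi, y, s, tau=0.1, iters=20):
    """CoSaMP with a threshold-based identification set: keep proxy entries whose
    magnitude exceeds tau * (largest magnitude) instead of a fixed 2s entries."""
    m, n = Phi.shape
    x = np.zeros(n)
    for _ in range(iters):
        proxy = Phi.T @ (y - Phi @ x)                      # residual correlations
        big = np.abs(proxy) >= tau * np.abs(proxy).max()   # adaptive identification
        support = np.union1d(np.flatnonzero(big), np.flatnonzero(x))
        z = np.zeros(n)
        z[support] = np.linalg.lstsq(Phi[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(z))[-s:]                  # prune back to s largest entries
        x = np.zeros(n)
        x[keep] = z[keep]
    return x
```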

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose low-complexity algorithms based on Monte Carlo sampling for signal detection and channel estimation on the uplink in large-scale multiuser multiple-input-multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. A BS receiver that employs a novel mixed sampling technique (which makes a probabilistic choice between Gibbs sampling and random uniform sampling in each coordinate update) for detection and a Gibbs-sampling-based method for channel estimation is proposed. The algorithm proposed for detection alleviates the stalling problem encountered at high signal-to-noise ratios (SNRs) in conventional Gibbs-sampling-based detection and achieves near-optimal performance in large systems with M-ary quadrature amplitude modulation (M-QAM). A novel ingredient in the detection algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a mixed Gibbs sampling (MGS) strategy coupled with a multiple restart (MR) strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64 and 128 BS antennas and users). The proposed Gibbs-sampling-based channel estimation algorithm refines an initial estimate of the channel obtained during the pilot phase through iterations with the proposed MGS-based detection during the data phase. In time-division duplex systems where channel reciprocity holds, these channel estimates can be used for multiuser MIMO precoding on the downlink. The proposed receiver is shown to achieve good performance and scale well for large dimensions.
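
A toy version of the mixed coordinate update for min ||y - Hx||^2 over a finite real alphabet is sketched below; the multiple-restart strategy, the QAM mapping and the paper's exact mixing probability are omitted, and all parameter names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def mgs_detect(H, y, alphabet, sigma2, q=None, n_iter=200):
    """Mixed Gibbs sampling sketch: each coordinate update is a Gibbs draw with
    probability 1 - q and a purely random draw with probability q."""
    K = H.shape[1]
    q = 1.0 / K if q is None else q
    x = rng.choice(alphabet, size=K)
    best, best_cost = x.copy(), np.linalg.norm(y - H @ x) ** 2
    for _ in range(n_iter):
        for k in range(K):
            if rng.uniform() < q:                 # random move (breaks stalling)
                x[k] = rng.choice(alphabet)
            else:                                 # standard Gibbs move
                trial, costs = x.copy(), []
                for a in alphabet:
                    trial[k] = a
                    costs.append(np.linalg.norm(y - H @ trial) ** 2)
                p = np.exp(-(np.array(costs) - min(costs)) / sigma2)
                x[k] = alphabet[rng.choice(len(alphabet), p=p / p.sum())]
        cost = np.linalg.norm(y - H @ x) ** 2
        if cost < best_cost:                      # track the best sample visited
            best, best_cost = x.copy(), cost
    return best

# usage sketch: x_hat = mgs_detect(H, y, alphabet=np.array([-3., -1., 1., 3.]), sigma2=0.5)
```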

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider the problem of finding a spectrum hole of a specified bandwidth in a given wide band of interest. We propose a new, simple and easily implementable sub-Nyquist sampling scheme for signal acquisition and a spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy in the frequency domain by testing a group of adjacent subbands in a single test. The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent sub-bands. Energy-based hypothesis tests are used to provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes. We extend this framework to a multi-stage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Further, we provide the analytical means to optimize the hypothesis tests with respect to the detection thresholds, number of samples and group size to minimize the detection delay under a given error rate constraint. Depending on the sparsity and SNR, the proposed algorithms can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme; the latter is in fact a special case of the group test when the group size is set to 1. We validate our analytical results via Monte Carlo simulations.
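
A stripped-down sketch of the group test: assuming per-subband energies are already available (the aliased sub-Nyquist acquisition and the optimized thresholds are not modeled), a candidate hole is declared whenever a group of adjacent sub-bands has total energy below a threshold.

```python
def find_holes(subband_energy, group_size, threshold):
    """Scan for contiguous spectrum holes with one energy test per group of
    adjacent sub-bands; bin-by-bin detection is the special case group_size = 1."""
    holes, i = [], 0
    while i + group_size <= len(subband_energy):
        if sum(subband_energy[i:i + group_size]) < threshold:
            holes.append((i, i + group_size))   # an idle group of sub-bands
            i += group_size                     # jump past the detected hole
        else:
            i += 1
    return holes
```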

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a low-complexity algorithm based on the Markov chain Monte Carlo (MCMC) technique for signal detection on the uplink in large-scale multiuser multiple-input multiple-output (MIMO) systems with tens to hundreds of antennas at the base station (BS) and a similar number of uplink users. The algorithm employs a randomized sampling method (which makes a probabilistic choice between Gibbs sampling and random sampling in each iteration) for detection. The proposed algorithm alleviates the stalling problem encountered at high SNRs in conventional MCMC algorithms and achieves near-optimal performance in large systems with M-QAM. A novel ingredient in the algorithm that is responsible for achieving near-optimal performance at low complexity is the joint use of a randomized MCMC (R-MCMC) strategy coupled with a multiple restart strategy with an efficient restart criterion. Near-optimal detection performance is demonstrated for a large number of BS antennas and users (e.g., 64, 128, and 256 BS antennas/users).

Relevance:

30.00%

Publisher:

Abstract:

Structural information over the entire course of binding interactions, based on analyses of energy landscapes, is described; this provides a framework to understand the events involved during biomolecular recognition. The conformational dynamics underlying malectin's exquisite selectivity for diglucosylated N-glycan (Dig-N-glycan), a highly flexible oligosaccharide comprising numerous dihedral torsion angles, are described as an example. For this purpose, a novel approach based on hierarchical sampling is proposed for acquiring metastable molecular conformations constituting low-energy minima, in order to understand the structural features involved in such recognition. Four variants of principal component analysis were employed recursively in both Cartesian space and dihedral angle space, characterized by free energy landscapes, to select the most stable conformational substates. Subsequently, a k-means clustering algorithm was implemented for geometric separation of the major native state to acquire a final ensemble of metastable conformers. A comparison of malectin complexes was then performed to characterize their conformational properties. Analyses of stereochemical metrics and other concerted binding events revealed surface complementarity, cooperative and bidentate hydrogen bonds, water-mediated hydrogen bonds, and carbohydrate-aromatic interactions, including CH-pi and stacking interactions, involved in this recognition. Additionally, a striking structural transition from loop to beta-strands in the malectin CRD upon specific binding to Dig-N-glycan is observed. The interplay of the above-mentioned binding events in malectin and Dig-N-glycan supports an extended conformational selection model as the underlying binding mechanism.
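
The recursive, four-variant PCA pipeline is not reproduced here; the sketch below shows only the generic building blocks such an analysis assumes: an SVD-based PCA projection of conformations followed by plain k-means clustering of the projected frames.

```python
import numpy as np

rng = np.random.default_rng(5)

def pca_project(X, n_pc=2):
    """Project conformations (rows = frames, columns = flattened coordinates or
    dihedral features) onto the leading principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_pc].T

def kmeans(X, k, iters=100):
    """Plain k-means used to separate conformational substates of the projected data."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.linalg.norm(X[:, None] - centers[None], axis=2).argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if (labels == j).any() else centers[j]
                            for j in range(k)])
    return labels, centers

# usage sketch: labels, _ = kmeans(pca_project(frames, n_pc=3), k=4)
```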

Relevance:

30.00%

Publisher:

Abstract:

Remote sensing of physiological parameters could be a cost-effective approach to improving health care, and low-power sensors are essential for remote sensing because these sensors are often energy constrained. This paper presents a power-optimized photoplethysmographic sensor interface to sense arterial oxygen saturation, a technique to dynamically trade off SNR for power during sensor operation, and a simple algorithm to choose when to acquire samples in photoplethysmography. A prototype of the proposed pulse oximeter, built using commercial off-the-shelf (COTS) components, is tested on 10 adults. The dynamic adaptation techniques described reduce power consumption considerably compared to our reference implementation, and our approach is competitive with state-of-the-art implementations. The techniques presented in this paper may be applied to low-power sensor interface designs where acquiring samples is expensive in terms of power, as epitomized by pulse oximetry.
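
A deliberately simple stand-in for the sample-acquisition decision (not the paper's algorithm): linearly extrapolate the photoplethysmogram from the last two samples and fire the LED only when the predicted excursion exceeds a tolerance.

```python
def acquire_next_sample(times, values, t_now, tolerance):
    """Decide whether to fire the LED for a new PPG sample at time t_now.
    The waveform is linearly extrapolated from the last two samples; a sample is
    acquired only when the predicted excursion since the last one exceeds tolerance."""
    (t1, y1), (t2, y2) = (times[-2], values[-2]), (times[-1], values[-1])
    slope = (y2 - y1) / (t2 - t1)
    predicted = y2 + slope * (t_now - t2)
    return abs(predicted - y2) > tolerance
```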

Relevance:

30.00%

Publisher:

Abstract:

FlexRay is a high-speed communication protocol designed for distributed control in automotive applications. Control performance depends not only on the control algorithm but also on the scheduling constraints in communication, so a balance between control performance and communication constraints must be struck when choosing the sampling rates of the control loops in a node. In this paper, optimum sampling periods of the control loops that minimize the cost function while satisfying the scheduling constraints are obtained. An algorithm to obtain the delay in service of each task in a node of the control loop within the hyperperiod has also been developed.
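
An illustrative brute-force version of the period-selection problem, under assumed per-loop cost functions and a single utilization bound standing in for the FlexRay scheduling constraints; the paper's actual formulation and solution method are not reproduced.

```python
from itertools import product

def choose_periods(loops, candidate_periods, u_max):
    """Exhaustively pick one sampling period per control loop so that the summed
    control cost is minimal while total utilization sum(C_i / h_i) stays below u_max."""
    best, best_cost = None, float("inf")
    for periods in product(candidate_periods, repeat=len(loops)):
        util = sum(c / h for (c, _), h in zip(loops, periods))
        cost = sum(f(h) for (_, f), h in zip(loops, periods))
        if util <= u_max and cost < best_cost:
            best, best_cost = periods, cost
    return best, best_cost

# usage sketch: loops = [(0.002, lambda h: 50 * h), (0.001, lambda h: 80 * h)]
#               choose_periods(loops, candidate_periods=[0.005, 0.01, 0.02], u_max=0.7)
```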

Relevance:

30.00%

Publisher:

Abstract:

Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data, but due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
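
As an illustration of one of the compared ideas, the sketch below shows a CSL-style estimate for a binary RBM: log p(v) is approximated by the log of the average of p(v | h) over hidden samples assumed to have been drawn from the model by Gibbs chains elsewhere. Shapes and the small numerical guard are illustrative.

```python
import numpy as np

def csl_log_likelihood(v_test, hidden_samples, W, b_vis):
    """CSL-style estimate of log p(v) for a binary RBM, given hidden samples
    drawn from the model; W has shape (n_visible, n_hidden), hidden_samples (S, n_hidden)."""
    # p(v_i = 1 | h) = sigmoid(W[i] @ h + b_vis[i])
    probs = 1.0 / (1.0 + np.exp(-(hidden_samples @ W.T + b_vis)))        # (S, n_visible)
    log_pv_given_h = (v_test * np.log(probs + 1e-12) +
                      (1 - v_test) * np.log(1 - probs + 1e-12)).sum(axis=1)
    m = log_pv_given_h.max()                                             # log-mean-exp
    return m + np.log(np.exp(log_pv_given_h - m).mean())
```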

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information based upon periodic exchange of exact state information between neighbouring nodes to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large dimensional hypercubes in the presence of time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
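
A toy rendering of the estimation idea (the exact estimator and transfer policy in ELISA may differ): between status-exchange epochs a node extrapolates each buddy's queue length from the last exact value, and transfers a task only to a buddy that still looks strictly less loaded.

```python
def estimated_queue_length(last_exact, elapsed, arrival_rate, service_rate):
    """Between status-exchange epochs, extrapolate a buddy's queue length from the
    last exact value using its (assumed known) arrival and service rates."""
    return max(0.0, last_exact + (arrival_rate - service_rate) * elapsed)

def pick_transfer_target(own_load, buddy_estimates, threshold):
    """Transfer a task only when the local queue exceeds `threshold`, and only to
    the buddy with the smallest estimated load, provided it looks strictly lighter."""
    if own_load <= threshold or not buddy_estimates:
        return None
    target = min(buddy_estimates, key=buddy_estimates.get)
    return target if buddy_estimates[target] < own_load else None
```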