168 results for Probabilistic Algorithms
Abstract:
This paper proposes a novel approach to solving the ordinal regression problem using Gaussian processes. The proposed approach, probabilistic least squares ordinal regression (PLSOR), obtains the probability distribution over ordinal labels using a particular likelihood function. It performs model selection (hyperparameter optimization) using the leave-one-out cross-validation (LOO-CV) technique. PLSOR has the conceptual simplicity and ease of implementation of the least squares approach. Unlike existing Gaussian process ordinal regression (GPOR) approaches, PLSOR does not use any approximation techniques for inference. We compare the proposed approach with state-of-the-art GPOR approaches on synthetic and benchmark data sets. Experimental results show the competitiveness of the proposed approach.
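The abstract does not give PLSOR's likelihood, but the LOO-CV model-selection step can be illustrated on plain GP regression, where the leave-one-out residuals have a closed form (Rasmussen and Williams, Sec. 5.4.2). A minimal sketch, not the authors' code; the RBF kernel and grid values are illustrative:

```python
import numpy as np

def rbf_kernel(X, lengthscale, variance):
    # Squared-exponential kernel matrix.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def loo_cv_error(X, y, lengthscale, variance, noise):
    # Closed-form leave-one-out residuals for GP regression.
    K = rbf_kernel(X, lengthscale, variance) + noise * np.eye(len(y))
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    loo_residuals = alpha / np.diag(Kinv)   # y_i minus LOO predictive mean
    return np.mean(loo_residuals ** 2)

# Hyperparameter selection by grid search over the LOO-CV objective.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
grid = [(l, 1.0, 0.01) for l in (0.3, 1.0, 3.0)]
best = min(grid, key=lambda h: loo_cv_error(X, y, *h))
print("selected lengthscale:", best[0])
```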
Abstract:
Numerous algorithms have been proposed recently for sparse signal recovery in compressed sensing (CS). In practice, the number of measurements can be very limited due to the nature of the problem, and/or the underlying statistical distribution of the non-zero elements of the sparse signal may not be known a priori. It has been observed that the performance of any sparse signal recovery algorithm depends on these factors, which makes the selection of a suitable recovery algorithm difficult. To exploit such situations, we propose a fusion framework in which we employ multiple sparse signal recovery algorithms and fuse their estimates to obtain a better estimate. Theoretical results justifying the performance improvement are presented. The efficacy of the proposed scheme is demonstrated by Monte Carlo simulations using synthetic sparse signals and ECG signals selected from the MIT-BIH database.
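The abstract does not spell out the fusion rule; one natural sketch is to pool the supports of the participating estimates and re-fit by least squares on the pooled support. The function below is illustrative only (`fuse_estimates` and its pruning rule are assumptions, not the paper's exact method):

```python
import numpy as np

def fuse_estimates(A, y, estimates, k):
    """Fuse sparse estimates: pool their supports, re-fit by least
    squares on the pooled support, then keep the k largest entries."""
    support = set()
    for x in estimates:
        support |= set(np.argsort(np.abs(x))[-k:])  # top-k support of each
    s = sorted(support)
    z, *_ = np.linalg.lstsq(A[:, s], y, rcond=None)
    x_fused = np.zeros(A.shape[1])
    x_fused[s] = z
    x_fused[np.argsort(np.abs(x_fused))[:-k]] = 0.0  # prune back to k-sparse
    return x_fused
```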
Abstract:
Recently, it has been shown that fusing the estimates of a set of sparse recovery algorithms yields an estimate better than the best one in the set, especially when the number of measurements is very limited. Though these schemes provide better sparse signal recovery performance, their higher computational requirement makes them less attractive for low-latency applications. To alleviate this drawback, in this paper we develop a progressive fusion based scheme for low-latency applications in compressed sensing. In progressive fusion, the estimates of the participating algorithms are fused progressively as they become available; the availability of an estimate depends on the computational complexity of the participating algorithm and, in turn, on its latency. Unlike other fusion algorithms, the proposed progressive fusion algorithm provides quick interim results and successive refinements during the fusion process, which is highly desirable in low-latency applications. We analyse the developed scheme by providing sufficient conditions for improvement of CS reconstruction quality and show its practical efficacy through numerical experiments using synthetic and real-world data.
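A hedged sketch of the progressive idea, reusing the hypothetical `fuse_estimates` from the previous sketch: estimates are folded in one at a time as they arrive, and an interim fused result is emitted after each arrival:

```python
def progressive_fusion(A, y, estimate_stream, k):
    """Fuse estimates one at a time as they become available (e.g., the
    fastest algorithm's output first), yielding an interim result after
    each arrival. Sketch only; the paper's exact rule may differ."""
    fused = None
    for x in estimate_stream:
        fused = x if fused is None else fuse_estimates(A, y, [fused, x], k)
        yield fused   # quick interim result for low-latency consumers
```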
Abstract:
This study presents the response of a vertically loaded pile in undrained clay with spatially distributed undrained shear strength. The probabilistic study treats the undrained shear strength as a random variable, and the analysis is conducted using random field theory. Inherent soil variability is considered as the source of variability, and the field is modeled as a two-dimensional non-Gaussian homogeneous random field. The random field is simulated using the Cholesky decomposition technique within a finite difference program, and a Monte Carlo simulation approach is used for the probabilistic analysis. The influence of the variance and spatial correlation of undrained shear strength on the ultimate capacity of the pile, taken as the sum of ultimate skin friction and end bearing resistance, is examined. It is observed that the coefficient of variation and the spatial correlation distance are the most important parameters affecting the pile's ultimate capacity.
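A minimal sketch of the field-simulation step, assuming an exponential autocorrelation and a lognormal marginal (a common non-Gaussian choice; the paper's actual model may differ):

```python
import numpy as np

def simulate_su_field(coords, mean, cov, corr_len, n_sims, seed=0):
    """Simulate correlated non-Gaussian (here lognormal) undrained shear
    strength on given points via Cholesky decomposition."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    C = np.exp(-2.0 * d / corr_len)            # exponential autocorrelation
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(coords)))
    rng = np.random.default_rng(seed)
    G = L @ rng.standard_normal((len(coords), n_sims))  # correlated N(0,1)
    # Translate to a lognormal field with the target mean and COV.
    s_ln = np.sqrt(np.log(1.0 + cov**2))
    m_ln = np.log(mean) - 0.5 * s_ln**2
    return np.exp(m_ln + s_ln * G)             # one column per realization

# Each column would drive one finite difference run in the Monte Carlo loop.
```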
Abstract:
Sparse estimation methods that utilize the $\ell_p$-norm, with $0 < p < 1$, have shown better utility in providing optimal solutions to the inverse problem in diffuse optical tomography. These $\ell_p$-norm-based regularizations make the optimization function nonconvex, and algorithms that implement $\ell_p$-norm minimization utilize approximations to the original $\ell_p$-norm function. In this work, three such typical methods for implementing the $\ell_p$-norm were considered: iteratively reweighted $\ell_1$-minimization (IRL1), iteratively reweighted least squares (IRLS), and the iterative thresholding method (ITM). These methods were deployed for diffuse optical tomographic image reconstruction, and a systematic comparison was carried out on three numerical and gelatin phantom cases. The results indicate that the three implementations of $\ell_p$-minimization yield similar results, with IRL1 faring marginally better in the cases considered here in terms of shape recovery and quantitative accuracy of the reconstructed diffuse optical tomographic images.
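Of the three, IRLS is the easiest to sketch: each iteration solves a weighted least squares problem whose weights smooth the $\ell_p$ penalty. The sketch below treats the noiseless underdetermined case $Ax = y$; the tomographic inverse problem adds a regularized data-fit term:

```python
import numpy as np

def irls_lp(A, y, p=0.5, eps=1e-3, iters=30):
    """IRLS for l_p (0 < p < 1) minimization subject to A x = y.
    Each step minimizes sum_i w_i x_i^2 under the constraint, which has
    the closed form x = W^-1 A^T (A W^-1 A^T)^-1 y."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(iters):
        w = (x**2 + eps) ** (p / 2.0 - 1.0)   # smoothed l_p weights
        W_inv = np.diag(1.0 / w)
        x = W_inv @ A.T @ np.linalg.solve(A @ W_inv @ A.T, y)
        eps = max(eps * 0.5, 1e-8)            # anneal the smoothing term
    return x
```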
Abstract:
Precise experimental implementation of unitary operators is one of the most important tasks for quantum information processing. Numerical optimization techniques are widely used to find optimized control fields that realize a desired unitary operator. However, finding high-fidelity control pulses to realize an arbitrary unitary operator in larger spin systems is still a difficult task. In this work, we demonstrate that a combination of the GRAPE algorithm, a numerical pulse optimization technique, and a unitary operator decomposition algorithm [Ajoy et al., Phys. Rev. A 85, 030303 (2012)] can realize unitary operators with high experimental fidelity. This is illustrated by simulating the mirror-inversion propagator of an XY spin chain in a five-spin dipolar-coupled nuclear spin system. Further, this simulation has been used to demonstrate the transfer of entangled states from one end of the spin chain to the other.
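A toy GRAPE loop for a single spin illustrates the idea: piecewise-constant control amplitudes are adjusted by gradient ascent on the gate fidelity. For brevity the gradient here is finite-difference, whereas GRAPE proper uses an analytic first-order gradient; the target gate, step sizes, and pulse length are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2    # spin-1/2 operators
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
U_target = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard

def propagator(u, dt=0.1):
    # Piecewise-constant controls u[k] = (ux, uy), each applied for time dt.
    U = np.eye(2, dtype=complex)
    for ux, uy in u:
        U = expm(-1j * dt * (ux * sx + uy * sy)) @ U
    return U

def fidelity(u):
    # Phase-insensitive gate fidelity |Tr(U_target^dag U)|^2 / N^2.
    return abs(np.trace(U_target.conj().T @ propagator(u)) / 2) ** 2

rng = np.random.default_rng(0)
u = 0.1 * rng.standard_normal((20, 2))    # small random initial pulse
h, lr = 1e-6, 1.0
for _ in range(300):                      # gradient ascent on fidelity
    g = np.zeros_like(u)
    for idx in np.ndindex(u.shape):
        du = np.zeros_like(u); du[idx] = h
        g[idx] = (fidelity(u + du) - fidelity(u)) / h
    u += lr * g
print("final fidelity:", fidelity(u))     # typically climbs toward 1
```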
Abstract:
Smoothed functional (SF) schemes for gradient estimation are known to be efficient in stochastic optimization algorithms, especially when the objective is to improve the performance of a stochastic system. However, the performance of these methods depends on several parameters, such as the choice of a suitable smoothing kernel. Different kernels have been studied in the literature, including Gaussian, Cauchy, and uniform distributions, among others. This article studies a new class of kernels based on the q-Gaussian distribution, which has gained popularity in statistical physics over the last decade. Though the importance of this family of distributions is attributed to its ability to generalize the Gaussian distribution, we observe that this class encompasses almost all existing smoothing kernels. This motivates us to study SF schemes for gradient estimation using the q-Gaussian distribution. Using the derived gradient estimates, we propose two-timescale algorithms for optimization of a stochastic objective function in a constrained setting with a projected gradient search approach. We prove the convergence of our algorithms to the set of stationary points of an associated ODE. We also demonstrate their performance numerically through simulations on a queuing model.
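A minimal sketch of a projected SF iteration with Gaussian perturbations, the $q \to 1$ member of the q-Gaussian family (sampling general q-Gaussians needs a generalized Box-Muller step, omitted here), collapsed to a single timescale for brevity:

```python
import numpy as np

def sf_gradient(f, theta, beta=0.05, rng=None):
    """Two-sided smoothed-functional gradient estimate with a Gaussian
    smoothing kernel: eta * (f(theta+b*eta) - f(theta-b*eta)) / (2b)."""
    rng = rng or np.random.default_rng()
    eta = rng.standard_normal(theta.shape)
    return eta * (f(theta + beta * eta) - f(theta - beta * eta)) / (2 * beta)

# Projected stochastic gradient search (sketch).
f = lambda th: np.sum((th - 1.0) ** 2)    # stand-in objective; in practice
theta = np.zeros(3)                       # this is a noisy simulation output
rng = np.random.default_rng(0)
for k in range(1, 2001):
    theta = theta - (1.0 / k) * sf_gradient(f, theta, rng=rng)
    theta = np.clip(theta, -5.0, 5.0)     # projection onto the constraint set
print(theta)                              # -> approximately [1, 1, 1]
```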
Abstract:
We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized $q$-colorable induced subgraph of an input graph $G$. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if $q$ is part of the input, but gave an $n^{O(q)}$ algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by $q$, even on split graphs. However, when parameterized by $\ell$, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time $5.44^{\ell} (n + \#\alpha(G))^{O(1)}$, where $\#\alpha(G)$ is the number of maximal independent sets of the input graph. The second algorithm runs in time $q^{\ell + o(\ell)} n^{O(1)} T_{\alpha}$, where $T_{\alpha}$ is the time required to find a maximum independent set in any induced subgraph of $G$. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets, as is the case for split graphs and co-chordal graphs. The running time of the second algorithm is FPT in $\ell$ alone (whenever $T_{\alpha}$ is a polynomial in $n$), since $q \le \ell$ in all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if $q$ is part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of $q \ge 2$.
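For intuition, the problem itself is easy to state as a brute force over vertex labelings (color 0 = dropped); this $(q+1)^n$ check is exactly what the FPT algorithms above avoid, and it is only feasible for tiny graphs:

```python
from itertools import product

def max_q_colorable_induced_subgraph(n, edges, q):
    """Exact brute force: color 0 means the vertex is dropped; colors
    1..q must properly color the kept vertices. Tiny graphs only."""
    best = []
    for labeling in product(range(q + 1), repeat=n):
        kept = [v for v in range(n) if labeling[v] > 0]
        ok = all(labeling[a] != labeling[b] for a, b in edges
                 if labeling[a] > 0 and labeling[b] > 0)
        if ok and len(kept) > len(best):
            best = kept
    return best

# A 5-cycle has no 2-colorable induced subgraph on all 5 vertices,
# so the answer has 4 vertices.
print(max_q_colorable_induced_subgraph(5, [(0,1),(1,2),(2,3),(3,4),(4,0)], 2))
```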
Abstract:
We present the first q-Gaussian smoothed functional (SF) estimator of the Hessian and the first Newton-based stochastic optimization algorithm that estimates both the Hessian and the gradient of the objective function using q-Gaussian perturbations. Our algorithm requires only two system simulations (regardless of the parameter dimension) and estimates both the gradient and the Hessian at each update epoch from these. We also present a proof of convergence of the proposed algorithm. In related recent work (Ghoshdastidar, Dukkipati, & Bhatnagar, 2014), we presented gradient SF algorithms based on q-Gaussian perturbations. Our work extends prior work on SF algorithms by generalizing the class of perturbation distributions, as most distributions reported in the literature for which SF algorithms are known to work turn out to be special cases of the q-Gaussian distribution. Besides studying the convergence properties of our algorithm analytically, we also show results of numerical simulations on a model of a queuing network that illustrate the significance of the proposed method. In particular, we observe that our algorithm performs better in most cases, over a wide range of q-values, than Newton SF algorithms with Gaussian and Cauchy perturbations, as well as the gradient q-Gaussian SF algorithms.
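A sketch of the two-simulation structure in the Gaussian ($q \to 1$) special case: the same pair of simulations $f(\theta \pm \beta\eta)$ feeds both the gradient and the Hessian estimate, followed by a damped Newton step. The paper's q-Gaussian estimators and two-timescale averaging are not reproduced here:

```python
import numpy as np

def sf_newton_step(f, theta, beta=0.1, damp=1.0, rng=None):
    """One update using Gaussian-SF gradient and Hessian estimates built
    from the same two simulations f(theta +/- beta*eta). In practice the
    Hessian estimate is averaged across iterates on a slower timescale
    and projected to be positive definite; 'damp' crudely stands in."""
    rng = rng or np.random.default_rng()
    d = theta.size
    eta = rng.standard_normal(d)
    fp, fm = f(theta + beta * eta), f(theta - beta * eta)
    grad = eta * (fp - fm) / (2 * beta)
    hess = (np.outer(eta, eta) - np.eye(d)) * (fp + fm) / (2 * beta**2)
    hess = hess + damp * np.eye(d)     # regularize toward invertibility
    return theta - np.linalg.solve(hess, grad)
```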
Abstract:
It has been shown that iterative re-weighted strategies often improve the performance of many sparse reconstruction algorithms. However, these strategies are algorithm dependent and cannot easily be extended to an arbitrary sparse reconstruction algorithm. In this paper, we propose a general iterative framework and a novel algorithm that iteratively enhance the performance of any given sparse reconstruction algorithm. We theoretically analyze the proposed method using the restricted isometry property and derive sufficient conditions for convergence and performance improvement. We also evaluate the performance of the proposed method in numerical experiments with both synthetic and real-world data.
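The paper's exact procedure is not given in the abstract; the wrapper below is one plausible instantiation of the idea, treating the base algorithm as a black box `alg(A, y, k) -> x` and accepting a refinement only if it lowers the residual:

```python
import numpy as np

def iterative_enhance(alg, A, y, k, iters=5):
    """Generic wrapper around an arbitrary sparse recovery algorithm:
    correct the current estimate using the residual, prune to k-sparse,
    re-fit by least squares, and keep the result only if it improves."""
    x_best = alg(A, y, k)
    for _ in range(iters):
        r = y - A @ x_best                    # current measurement residual
        x_new = x_best + alg(A, r, k)         # residual-driven correction
        s = np.argsort(np.abs(x_new))[-k:]    # prune to the k largest
        x_prune = np.zeros_like(x_new)
        x_prune[s] = np.linalg.lstsq(A[:, s], y, rcond=None)[0]
        if np.linalg.norm(y - A @ x_prune) < np.linalg.norm(y - A @ x_best):
            x_best = x_prune
        else:
            break                             # no further improvement
    return x_best
```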
Abstract:
We present a new Hessian estimator based on the simultaneous perturbation procedure that requires three system simulations regardless of the parameter dimension. We then present two Newton-based simulation optimization algorithms that incorporate this Hessian estimator. The two algorithms differ primarily in the manner in which the Hessian estimate is used. Neither algorithm computes the inverse Hessian explicitly, thereby saving computational effort. While our first algorithm directly obtains the product of the inverse Hessian with the gradient of the objective, our second algorithm makes use of the Sherman-Morrison matrix inversion lemma to recursively estimate the inverse Hessian. We provide proofs of convergence for both algorithms. Next, we consider an application of our algorithms to a problem of road traffic control. Our algorithms are seen to exhibit better performance than two Newton algorithms from recent prior work.
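The Sherman-Morrison step itself is standard: if the Hessian estimate receives a rank-one correction u v^T per iteration, its inverse can be updated directly, with no explicit inversion. The code below is the identity itself, not the paper's full recursion:

```python
import numpy as np

def sherman_morrison_update(H_inv, u, v):
    """Return (H + u v^T)^{-1} given H^{-1}, via Sherman-Morrison:
    H^{-1} - (H^{-1} u)(v^T H^{-1}) / (1 + v^T H^{-1} u)."""
    Hu = H_inv @ u
    vH = v @ H_inv
    return H_inv - np.outer(Hu, vH) / (1.0 + v @ Hu)

# Quick check against a direct inverse.
H = np.eye(3); H_inv = np.eye(3)
u = np.array([0.5, 0.0, 0.2]); v = np.array([0.1, 0.3, 0.0])
print(np.allclose(sherman_morrison_update(H_inv, u, v),
                  np.linalg.inv(H + np.outer(u, v))))   # True
```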
Abstract:
The work presented in this paper involves stochastic finite element analysis of composite-epoxy adhesive lap joints using Monte Carlo simulation. A set of composite adhesive lap joints was prepared and loaded to failure to obtain their strength. The peel and shear strains in the bond line region at different load levels were obtained using digital image correlation (DIC). The corresponding stresses were computed assuming a plane strain condition. The finite element model was verified by comparing the numerical and experimental stresses; the stresses exhibited similar behavior, and good correlation was obtained. The finite element model was then used to perform the stochastic analysis using Monte Carlo simulation. The parameters influencing the stress distribution were provided as random input variables, and the resulting probabilistic variation of the maximum peel and shear stresses was studied. It was found that the adhesive modulus and bond line thickness had a significant influence on the maximum stress variation. While the adherend thickness had a major influence, the effect of variation in the longitudinal and shear moduli on the stresses was found to be small.
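The Monte Carlo step is easy to sketch once the finite element run is abstracted away; here `max_peel_stress` is a stand-in surrogate, and the input distributions are illustrative, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sims = 10_000

# Random input variables (illustrative distributions only):
E_adh = rng.lognormal(np.log(3.0e9), 0.10, n_sims)     # adhesive modulus [Pa]
t_bond = rng.normal(0.20e-3, 0.02e-3, n_sims)          # bond line thickness [m]

def max_peel_stress(E_adh, t_bond):
    # Stand-in surrogate: one call here would be one finite element run.
    return 1e-3 * np.sqrt(E_adh / t_bond)

stress = max_peel_stress(E_adh, t_bond)
print("mean [Pa]:", stress.mean())
print("COV:", stress.std() / stress.mean())   # spread of the max peel stress
```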
Abstract:
We investigate the problem of timing recovery for two-dimensional magnetic recording (TDMR) channels. We develop a timing error model for the TDMR channel considering phase and frequency offsets with noise. We propose a 2-D data-aided phase-locked loop (PLL) architecture for tracking variations in the position and movement of the read head in the down-track and cross-track directions, and analyze the convergence of the algorithm under non-separable timing errors. We further develop a 2-D interpolation-based timing recovery scheme that works in conjunction with the 2-D PLL. We quantify the efficiency of the proposed algorithms by simulations over a 2-D magnetic recording channel with timing errors.
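A 1-D sketch of the tracking loop: a second-order (proportional-integral) data-aided PLL whose integral branch absorbs a constant frequency offset. The paper's 2-D architecture couples such loops across the down-track and cross-track directions; the gains here are arbitrary:

```python
import numpy as np

def pll_track(true_phase, kp=0.1, ki=0.01):
    """Second-order data-aided PLL: the integral branch learns the
    frequency offset, the proportional branch corrects the residual
    phase error at each sample."""
    phase_hat, freq_hat, est = 0.0, 0.0, []
    for ph in true_phase:
        e = ph - phase_hat            # data-aided timing error detector
        freq_hat += ki * e            # integral (frequency) branch
        phase_hat += freq_hat + kp * e
        est.append(phase_hat)
    return np.array(est)

true_phase = 0.3 + 0.01 * np.arange(500)   # phase offset + frequency offset
est = pll_track(true_phase)
print("final tracking error:", true_phase[-1] - est[-1])   # -> near zero
```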
Abstract:
This paper proposes a probabilistic prediction-based approach for providing quality of service (QoS) to delay-sensitive traffic in the Internet of Things (IoT). A joint packet scheduling and dynamic bandwidth allocation scheme is proposed to provide service differentiation and preferential treatment to delay-sensitive traffic. The scheduler focuses on reducing the waiting time of high-priority delay-sensitive services in the queue while keeping the waiting time of other services within tolerable limits. The scheme uses the difference between the probable average queue length of high-priority packets in the previous cycle and in the current cycle to determine the probable average weight required in the current cycle. This yields optimized bandwidth allocation across all services by avoiding the allocation of excess resources to high-priority services while still guaranteeing their service requirements. The performance of the algorithm is investigated using MPEG-4 traffic traces under different system loads. The results show improved waiting times for high-priority packets while keeping the waiting time and packet loss of other services within tolerable limits.
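The abstract's weight-update idea can be sketched as follows; the class name, gain, and clamping bounds are assumptions, since the exact update rule is not given in the abstract:

```python
class DynamicScheduler:
    """Sketch: the bandwidth share of the high-priority class for the next
    cycle is nudged by the change in its average queue length between
    consecutive cycles, so unused capacity is released to other classes."""

    def __init__(self, w_high=0.5, gain=0.05, w_min=0.2, w_max=0.9):
        self.w_high, self.gain = w_high, gain
        self.w_min, self.w_max = w_min, w_max
        self.prev_avg_q = 0.0

    def update(self, avg_q_high):
        # Queue grew -> grant more bandwidth; queue shrank -> release excess.
        delta = avg_q_high - self.prev_avg_q
        self.prev_avg_q = avg_q_high
        self.w_high = min(self.w_max,
                          max(self.w_min, self.w_high + self.gain * delta))
        return self.w_high  # the remaining 1 - w_high is shared by others
```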