168 results for Probabilistic Algorithms

at Indian Institute of Science - Bangalore - India


Relevance:

30.00%

Publisher:

Abstract:

The random early detection (RED) technique has seen a lot of research over the years. However, the functional relationship between RED performance and its parameters, viz., queue weight (omega(q)), marking probability (max(p)), minimum threshold (min(th)) and maximum threshold (max(th)), is not analytically available. In this paper, we formulate a probabilistic constrained optimization problem by assuming a nonlinear relationship between the RED average queue length and its parameters. This problem involves all the RED parameters as the variables of the optimization problem. We use the barrier and the penalty function approaches for its solution. However, as above, the exact functional relationship between the barrier and penalty objective functions and the optimization variables is not known, but noisy samples of these are available for different parameter values. Thus, for obtaining the gradient and Hessian of the objective, we use certain recently developed simultaneous perturbation stochastic approximation (SPSA) based estimates of these. We propose two four-timescale stochastic approximation algorithms based on certain modified second-order SPSA updates for finding the optimum RED parameters. We present the results of detailed simulation experiments conducted over different network topologies and network/traffic conditions/settings, comparing the performance of our algorithms with variants of RED and a few other well-known adaptive queue management (AQM) techniques discussed in the literature.
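
The exact RED objective is not given in the abstract; the sketch below only illustrates, with assumed names and a toy noisy quadratic objective, how a two-sided SPSA gradient estimate is formed from just two noisy function evaluations, which is the building block that the paper's four-timescale algorithms refine with second-order (Hessian) estimates.

```python
import numpy as np

def spsa_gradient(loss, theta, c, rng):
    """Two-sided SPSA gradient estimate from two noisy loss evaluations.
    `theta` stands in for a RED parameter vector (e.g. w_q, max_p, min_th,
    max_th); names and scales here are illustrative, not the paper's."""
    delta = rng.choice([-1.0, 1.0], size=theta.shape)    # Rademacher perturbation
    y_plus = loss(theta + c * delta)
    y_minus = loss(theta - c * delta)
    return (y_plus - y_minus) / (2.0 * c * delta)        # every coordinate from just 2 evaluations

# Toy usage: drive a noisy quadratic toward its minimum with a plain SPSA descent loop.
rng = np.random.default_rng(0)
noisy_loss = lambda th: float(np.sum((th - 1.0) ** 2) + 0.01 * rng.normal())
theta = np.zeros(4)
for k in range(1, 2001):
    g = spsa_gradient(noisy_loss, theta, c=0.1 / k ** 0.25, rng=rng)
    theta -= (0.1 / k) * g                               # decaying step size
print(np.round(theta, 2))                                # should be close to [1, 1, 1, 1]
```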

Relevance:

30.00%

Publisher:

Abstract:

Two algorithms are outlined, each of which has interesting features for modeling the spatial variability of rock depth. In this paper, the reduced level of rock at Bangalore, India, is arrived at from data of 652 boreholes in an area covering 220 sq. km. Support vector machine (SVM) and relevance vector machine (RVM) have been utilized to predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth. The SVM, which is firmly grounded in statistical learning theory, uses a regression technique based on an epsilon-insensitive loss function. RVM is a probabilistic model similar to the widespread SVM, but where the training takes place in a Bayesian framework. Prediction results show the ability of these learning machines to build accurate models of the spatial variability of rock depth with strong predictive capabilities. The paper also highlights the capability of RVM over the SVM model.
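
A minimal sketch of epsilon-insensitive support vector regression using scikit-learn's SVR on synthetic borehole-like data; the coordinates, target values and hyperparameters (C, epsilon, RBF kernel) are assumptions for illustration, not the paper's data or settings, and the RVM counterpart is not shown.

```python
import numpy as np
from sklearn.svm import SVR

# Hypothetical borehole records: (easting, northing) -> reduced level of rock (m).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 15.0, size=(200, 2))                       # synthetic coordinates, km
y = 900.0 - 0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, 200)

# epsilon-insensitive regression with an RBF kernel.
model = SVR(kernel="rbf", C=10.0, epsilon=0.2)
model.fit(X, y)

query = np.array([[7.5, 3.2]])          # predict rock level at an unsampled location
print(model.predict(query))
```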

Relevance:

30.00%

Publisher:

Abstract:

We consider the problem of optimal routing in a multi-stage network of queues with constraints on queue lengths. We develop three algorithms for probabilistic routing for this problem using only the total end-to-end delays. These algorithms use the smoothed functional (SF) approach to optimize the routing probabilities. In our model, all the queues are assumed to have constraints on the average queue length. We also propose a novel quasi-Newton based SF algorithm. Policies like Join Shortest Queue or Least Work Left work only for unconstrained routing and, moreover, assume knowledge of the queue lengths at all the queues. If the only information available is the expected end-to-end delay, as in our case, such policies cannot be used. We also give simulation results showing the performance of the SF algorithms for this problem.
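
The sketch below illustrates a two-measurement smoothed functional gradient estimate on a toy noisy delay function; the function names, the Gaussian smoothing parameter beta and the crude simplex projection are illustrative assumptions, not the paper's quasi-Newton SF algorithm.

```python
import numpy as np

def sf_gradient(cost, theta, beta, rng):
    """Two-measurement smoothed functional (SF) gradient estimate: a Gaussian
    perturbation eta smooths the noisy cost, and the scaled difference of two
    cost samples estimates the gradient of the smoothed cost."""
    eta = rng.standard_normal(theta.shape)
    return eta * (cost(theta + beta * eta) - cost(theta - beta * eta)) / (2.0 * beta)

def project_to_simplex(p):
    """Crude renormalisation keeping routing probabilities valid (>= 0, sum to 1)."""
    p = np.clip(p, 1e-6, None)
    return p / p.sum()

# Toy usage: tune two routing probabilities against a noisy synthetic "end-to-end delay".
rng = np.random.default_rng(0)
delay = lambda p: float((p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2 + 0.01 * rng.normal())
p = np.array([0.5, 0.5])
for k in range(1, 5001):
    p = project_to_simplex(p - (0.5 / k) * sf_gradient(delay, p, beta=0.05, rng=rng))
print(np.round(p, 2))     # drifts toward the feasible point nearest (0.3, 0.7)
```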

Relevance:

30.00%

Publisher:

Abstract:

Latent variable methods, such as PLCA (Probabilistic Latent Component Analysis), have been successfully used for the analysis of non-negative signal representations. In this paper, we formulate PLCS (Probabilistic Latent Component Segmentation), which models each time frame of a spectrogram as a spectral distribution. Given the signal spectrogram, the segmentation boundaries are estimated using a maximum-likelihood approach. For an efficient solution, the algorithm imposes a hard constraint that each segment is modelled by a single latent component. The hard constraint facilitates the solution of ML boundary estimation using dynamic programming. Unlike earlier ML segmentation techniques, the PLCS framework does not impose a parametric assumption. PLCS can be naturally extended to model coarticulation between successive phones. Experiments on the TIMIT corpus show that the proposed technique is promising compared to most state-of-the-art speech segmentation algorithms.
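
A hedged sketch of the dynamic-programming boundary search: given any per-segment modelling cost, the optimal contiguous segmentation into K segments is recovered by DP. The within-segment variance used as `seg_cost` here is a stand-in for the single-latent-component negative log-likelihood that PLCS actually uses.

```python
import numpy as np

def segment_dp(n_frames, n_segments, seg_cost):
    """Optimal split of frames 0..n_frames-1 into `n_segments` contiguous segments
    by dynamic programming; returns the interior boundary indices.  In PLCS the
    per-segment cost would be the negative log-likelihood under one latent
    component; `seg_cost(i, j)` is any cost of modelling frames i..j-1 together."""
    INF = float("inf")
    dp = np.full((n_segments + 1, n_frames + 1), INF)
    back = np.zeros((n_segments + 1, n_frames + 1), dtype=int)
    dp[0, 0] = 0.0
    for k in range(1, n_segments + 1):
        for j in range(k, n_frames + 1):
            for i in range(k - 1, j):
                c = dp[k - 1, i] + seg_cost(i, j)
                if c < dp[k, j]:
                    dp[k, j], back[k, j] = c, i
    bounds, j = [], n_frames                  # walk back through the table
    for k in range(n_segments, 0, -1):
        j = back[k, j]
        bounds.append(j)
    return sorted(bounds)[1:]                 # drop the leading 0

# Toy usage: three flat "spectral" regions; stand-in cost = within-segment variance * length.
x = np.concatenate([np.full(30, 1.0), np.full(40, 5.0), np.full(30, 2.0)])
cost = lambda i, j: float(np.var(x[i:j]) * (j - i))
print(segment_dp(len(x), 3, cost))            # approximately [30, 70]
```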

Relevance:

20.00%

Publisher:

Abstract:

This article presents the results of probabilistic seismic hazard analysis (PSHA) for Bangalore, South India. Analyses have been carried out considering the seismotectonic parameters of the region covering a radius of 350 km with Bangalore as the center. The seismic hazard parameter `b' has been evaluated from the available earthquake data using (1) the Gutenberg-Richter (G-R) relationship and (2) the Kijko and Sellevoll (1989, 1992) method utilizing extreme and complete catalogs. The `b' parameter was estimated to be 0.62 to 0.98 from the G-R relation and 0.87 +/- 0.03 from the Kijko and Sellevoll method. The results obtained are a little higher than the `b' values published earlier for southern India. Further, probabilistic seismic hazard analysis for the Bangalore region has been carried out considering six seismogenic sources. From the analysis, the mean annual rate of exceedance and the cumulative probability hazard curve for peak ground acceleration (PGA) and spectral acceleration (Sa) have been generated. The quantified hazard values in terms of the rock level peak ground acceleration (PGA) are mapped for 10% probability of exceedance in 50 years on a grid size of 0.5 km x 0.5 km. In addition, the Uniform Hazard Response Spectrum (UHRS) at rock level is also developed for 5% damping corresponding to 10% probability of exceedance in 50 years. The peak ground acceleration (PGA) value of 0.121 g obtained from the present investigation is slightly lower than (but comparable to) the PGA values obtained from the deterministic seismic hazard analysis (DSHA) for the same area. However, the PGA value obtained in the current investigation is higher than the PGA values reported in the global seismic hazard assessment program (GSHAP) maps of Bhatia et al. (1999) for the shield area.
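
As a worked illustration of the Gutenberg-Richter step only (log10 N(>=M) = a - b*M), the sketch below estimates `b' from a synthetic catalogue using a least-squares fit of the cumulative counts and the standard Aki-Utsu maximum-likelihood formula; the magnitude of completeness and the catalogue are invented, and the Kijko and Sellevoll method is not reproduced.

```python
import numpy as np

# Gutenberg-Richter relation: log10 N(>=M) = a - b*M.  Two common estimates of b:
# (1) a least-squares fit of the cumulative counts, (2) the Aki-Utsu maximum likelihood.
rng = np.random.default_rng(0)
m_c = 3.0                                               # assumed magnitude of completeness
mags = m_c + rng.exponential(scale=1.0 / (0.9 * np.log(10)), size=2000)  # synthetic catalog, b ~ 0.9

# (1) least squares on the cumulative frequency-magnitude curve
bins = np.arange(m_c, mags.max(), 0.1)
counts = np.array([(mags >= m).sum() for m in bins])
b_lsq = -np.polyfit(bins, np.log10(counts), 1)[0]

# (2) Aki-Utsu maximum-likelihood estimate
b_ml = np.log10(np.e) / (mags.mean() - m_c)

print(f"b (least squares) = {b_lsq:.2f}, b (max. likelihood) = {b_ml:.2f}")
```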

Relevance:

20.00%

Publisher:

Abstract:

Recently, efficient scheduling algorithms based on Lagrangian relaxation have been proposed for scheduling parallel machine systems and job shops. In this article, we develop real-world extensions to these scheduling methods. In the first part of the paper, we consider the problem of scheduling single-operation jobs on parallel identical machines and extend the methodology to handle multiple classes of jobs, taking into account setup times and setup costs. The proposed methodology uses Lagrangian relaxation and simulated annealing in a hybrid framework. In the second part of the paper, we consider a Lagrangian relaxation based method for scheduling job shops and extend it to obtain a scheduling methodology for a real-world flexible manufacturing system with centralized material handling.
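
A minimal simulated-annealing sketch for a toy parallel-machine instance with job classes and class-change setups; the instance, the tardiness cost and the cooling schedule are assumptions for illustration, and the Lagrangian-relaxation half of the paper's hybrid is not shown.

```python
import math, random

# Toy instance: jobs with (class, processing time, due date), identical machines, and a
# setup paid whenever consecutive jobs on a machine belong to different classes.
JOBS = [(j % 3, 2 + (j % 5), 10 + 2 * j) for j in range(12)]
N_MACHINES, SETUP = 3, 1.5

def total_tardiness(assign):
    """assign: one job-index list per machine, in processing order."""
    cost = 0.0
    for seq in assign:
        t, prev_cls = 0.0, None
        for j in seq:
            cls, p, due = JOBS[j]
            t += (SETUP if prev_cls is not None and cls != prev_cls else 0.0) + p
            cost += max(0.0, t - due)
            prev_cls = cls
    return cost

def neighbour(assign):
    """Move one randomly chosen job to a random position on a random machine."""
    new = [list(s) for s in assign]
    src = random.choice([i for i in range(N_MACHINES) if new[i]])
    job = new[src].pop(random.randrange(len(new[src])))
    dst = random.randrange(N_MACHINES)
    new[dst].insert(random.randrange(len(new[dst]) + 1), job)
    return new

random.seed(0)
cur = [list(range(i, len(JOBS), N_MACHINES)) for i in range(N_MACHINES)]
cur_cost, temp = total_tardiness(cur), 5.0
for _ in range(20000):
    cand = neighbour(cur)
    cand_cost = total_tardiness(cand)
    if cand_cost < cur_cost or random.random() < math.exp(-(cand_cost - cur_cost) / temp):
        cur, cur_cost = cand, cand_cost
    temp = max(0.01, temp * 0.9995)           # geometric cooling
print(cur_cost)
```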

Relevance:

20.00%

Publisher:

Abstract:

A computational study of the convergence acceleration of Euler and Navier-Stokes computations with upwind schemes has been conducted in a unified framework. It involves the flux-vector splitting algorithms due to Steger-Warming and Van Leer, the flux-difference splitting algorithms due to Roe and Osher, and the hybrid algorithms AUSM (Advection Upstream Splitting Method) and HUS (Hybrid Upwind Splitting). Implicit time integration with line Gauss-Seidel relaxation and multigrid are among the procedures that have been systematically investigated on an individual as well as cumulative basis. The upwind schemes have been tested in various implicit-explicit operator combinations so that the optimal among them can be determined based on extensive computations for two-dimensional flows in subsonic, transonic, supersonic and hypersonic flow regimes. In this study, the performance of these implicit time-integration procedures has been systematically compared with that of a multigrid-accelerated explicit Runge-Kutta method. It has been demonstrated that a multigrid method employed in conjunction with an implicit time-integration scheme yields distinctly superior convergence compared to either acceleration procedure used alone, provided that effective smoothers, identified in this investigation, are prescribed in the implicit operator.
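
As a drastically simplified stand-in for the schemes studied, the sketch below combines a first-order upwind discretisation with an implicit (backward Euler) step solved by Gauss-Seidel relaxation sweeps, applied to 1-D linear advection; it only illustrates the upwind-plus-implicit-relaxation combination in miniature, not the Euler/Navier-Stokes splittings or the multigrid acceleration of the paper.

```python
import numpy as np

# 1-D linear advection u_t + a*u_x = 0 (a > 0): first-order upwind fluxes, one
# implicit (backward Euler) step per time level, solved by Gauss-Seidel sweeps.
a, dx, dt = 1.0, 0.01, 0.05                  # CFL = a*dt/dx = 5, acceptable implicitly
c = a * dt / dx
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.25) ** 2)         # initial Gaussian pulse

for step in range(10):                        # 10 implicit time steps
    u_old = u.copy()
    u[0] = 0.0                                # simple inflow boundary value
    for sweep in range(3):                    # forward sweeps solve this lower-triangular system
        for i in range(1, len(u)):
            u[i] = (u_old[i] + c * u[i - 1]) / (1.0 + c)
print(round(float(x[np.argmax(u)]), 2))       # pulse centre has moved right (with strong numerical diffusion)
```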

Relevance:

20.00%

Publisher:

Abstract:

Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes in which an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs are assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first and/or second ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies and hence for integrated computational and experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
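
A hedged sketch of posing l1-constrained sparse regression as a linear program via variable splitting, solved with SciPy's linprog on synthetic data; the l1 residual loss used here keeps the problem linear and may differ from the paper's exact LP formulation, and the data and budget t are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, t = 40, 10, 2.0                               # samples, predictors, l1 budget (invented)
X = rng.standard_normal((n, p))
w_true = np.zeros(p)
w_true[[1, 4]] = [1.5, -0.8]                        # a sparse "regulatory" signal
y = X @ w_true + 0.05 * rng.standard_normal(n)

# Variables z = [w+, w-, r+, r-]; minimise the l1 residual subject to ||w||_1 <= t.
c = np.concatenate([np.zeros(2 * p), np.ones(2 * n)])
A_eq = np.hstack([X, -X, -np.eye(n), np.eye(n)])    # X(w+ - w-) - (r+ - r-) = y
A_ub = np.concatenate([np.ones(2 * p), np.zeros(2 * n)])[None]   # sum(w+) + sum(w-) <= t
res = linprog(c, A_ub=A_ub, b_ub=[t], A_eq=A_eq, b_eq=y,
              bounds=(0, None), method="highs")
w_hat = res.x[:p] - res.x[p:2 * p]
print(np.round(w_hat, 2))                           # nonzero weights near indices 1 and 4
```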

Relevance:

20.00%

Publisher:

Abstract:

Doppler weather radars with fast scanning rates must estimate spectral moments based on a small number of echo samples. This paper concerns the estimation of mean Doppler velocity in a coherent radar using a short complex time series. Specific results are presented based on 16 samples. A wide range of signal-to-noise ratios are considered, and attention is given to ease of implementation. It is shown that FFT estimators fare poorly in low SNR and/or high spectrum-width situations. Several variants of a vector pulse-pair processor are postulated and an algorithm is developed for the resolution of phase angle ambiguity. This processor is found to be better than conventional processors at very low SNR values. A feasible approximation to the maximum entropy estimator is derived as well as a technique utilizing the maximization of the periodogram. It is found that a vector pulse-pair processor operating with four lags for clear air observation and a single lag (pulse-pair mode) for storm observation may be a good way to estimate Doppler velocities over the entire gamut of weather phenomena.
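
A sketch of the classical pulse-pair estimate of mean Doppler velocity from the phase of the lag-1 autocorrelation of a 16-sample complex series; the wavelength, pulse repetition time and sign convention are assumptions chosen to match the synthetic signal generated here, and the vector pulse-pair and maximum-entropy variants discussed in the paper are not shown.

```python
import numpy as np

wavelength, prt = 0.10, 1e-3          # assumed radar wavelength (m) and pulse repetition time (s)
v_true, noise_amp = 6.0, 0.2          # "true" radial velocity and additive noise level
rng = np.random.default_rng(0)

n = np.arange(16)                     # 16 samples, as considered in the paper
f_d = 2.0 * v_true / wavelength       # Doppler shift of the synthetic echo
x = np.exp(2j * np.pi * f_d * n * prt) + noise_amp * (rng.standard_normal(16) + 1j * rng.standard_normal(16))

r1 = np.mean(np.conj(x[:-1]) * x[1:])                 # lag-1 autocorrelation estimate
v_est = wavelength * np.angle(r1) / (4.0 * np.pi * prt)
print(f"estimated velocity: {v_est:.2f} m/s (true {v_true} m/s)")
```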

Relevance:

20.00%

Publisher:

Abstract:

We propose four variants of the recently proposed multi-timescale algorithm of [1] for ant colony optimization and study their application to a multi-stage shortest path problem. We study the performance of the various algorithms in this framework and observe that one of the variants consistently outperforms the algorithm of [1].
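
For orientation, a plain single-timescale ant-colony sketch on a synthetic multi-stage shortest-path instance is given below; the stage sizes, costs, evaporation rate and deposit rule are assumptions, and the multi-timescale variants of [1] and of this paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
stages, width = 5, 4
cost = rng.uniform(1.0, 10.0, size=(stages - 1, width, width))   # edge costs between stages
tau = np.ones((stages - 1, width, width))                        # pheromone levels

def sample_path():
    """Sample a stage-by-stage path, biased by pheromone / cost, starting from node 0."""
    path, i = [0], 0
    for s in range(stages - 1):
        w = tau[s, i] / cost[s, i]
        j = int(rng.choice(width, p=w / w.sum()))
        path.append(j)
        i = j
    return path

def path_cost(path):
    return sum(cost[s, path[s], path[s + 1]] for s in range(stages - 1))

for _ in range(200):                       # plain ACO iterations
    ants = [sample_path() for _ in range(20)]
    tau *= 0.9                             # pheromone evaporation
    for p in ants:
        deposit = 1.0 / path_cost(p)
        for s in range(stages - 1):
            tau[s, p[s], p[s + 1]] += deposit
best = min((sample_path() for _ in range(100)), key=path_cost)
print(best, round(path_cost(best), 2))
```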

Relevance:

20.00%

Publisher:

Abstract:

Two algorithms that improve upon the sequent-peak procedure for reservoir capacity calculation are presented. The first incorporates storage-dependent losses (like evaporation losses) exactly as the standard linear programming formulation does. The second extends the first so as to enable designing with less than maximum reliability even when allowable shortfall in any failure year is also specified. Together, the algorithms provide a more accurate, flexible and yet fast method of calculating the storage capacity requirement in preliminary screening and optimization models.
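
For reference, a minimal sketch of the standard sequent-peak recursion that the two proposed algorithms improve upon, run on invented inflow data; the paper's handling of storage-dependent losses and designing below maximum reliability is not shown.

```python
def sequent_peak_capacity(inflows, demand):
    """Standard sequent-peak recursion: accumulate the deficit
    K_t = max(0, K_{t-1} + demand - inflow_t) over the record run twice;
    the required capacity is the largest deficit encountered."""
    k, k_max = 0.0, 0.0
    for q in inflows * 2:                        # repeat the record, as is customary
        k = max(0.0, k + demand - q)
        k_max = max(k_max, k)
    return k_max

inflows = [8, 3, 2, 9, 12, 4, 1, 6, 10, 7]       # invented annual inflow volumes
print(sequent_peak_capacity(inflows, demand=6.0))   # required storage capacity
```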

Relevance:

20.00%

Publisher:

Abstract:

The paper presents two new algorithms for the direct parallel solution of systems of linear equations. The algorithms employ a novel recursive doubling technique to obtain solutions to an nth-order system in n steps with no more than 2n(n - 1) processors. Comparing their performance with the Gaussian elimination algorithm (GE), we show that they are almost 100% faster than the latter. This speedup is achieved by dispensing with all the computation involved in the back-substitution phase of GE. It is also shown that the new algorithms exhibit error characteristics which are superior to GE. An n(n + 1) systolic array structure is proposed for the implementation of the new algorithms. We show that complete solutions can be obtained, through these single-phase solution methods, in 5n - log2(n) - 4 computational steps, without the need for intermediate I/O operations.
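
The paper's n-step direct solver is not reproduced here; as a hedged illustration of the recursive doubling principle it builds on, the sketch below solves the simpler problem of a first-order linear recurrence (a lower bidiagonal system) in O(log n) doubling steps, each of which could be executed in parallel across all indices.

```python
import numpy as np

def recursive_doubling(a, b):
    """Solve x_i = a_i*x_{i-1} + b_i (with a_0 = 0, so x_0 = b_0) by recursive
    doubling: each step composes the affine maps of elements 2^d apart, so all
    x_i are available after ceil(log2 n) steps.  Within a step every index can
    be updated independently, which is where the parallelism comes from."""
    a, b = np.asarray(a, float).copy(), np.asarray(b, float).copy()
    n, d = len(a), 1
    while d < n:
        a_prev = np.concatenate([np.ones(d), a[:-d]])      # a_{i-d} (identity below i = d)
        b_prev = np.concatenate([np.zeros(d), b[:-d]])     # b_{i-d}
        mask = np.arange(n) >= d
        a, b = np.where(mask, a * a_prev, a), np.where(mask, a * b_prev + b, b)
        d *= 2
    return b                                               # b_i now equals x_i

a = [0.0, 0.5, -0.2, 0.3, 0.7, 0.1, -0.4, 0.25]            # a[0] = 0 encodes x_0 = b_0
b = [1.0, 2.0, -1.0, 0.5, 3.0, -2.0, 1.5, 0.0]
x = recursive_doubling(a, b)

x_seq = [b[0]]                                             # sequential check
for i in range(1, len(a)):
    x_seq.append(a[i] * x_seq[-1] + b[i])
print(np.allclose(x, x_seq))                               # True
```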

Relevance:

20.00%

Publisher:

Abstract:

Based on trial interchanges, this paper develops three algorithms for the solution of the placement problem of logic modules in a circuit. A significant decrease in the computation time of such placement algorithms can be achieved by restricting the trial interchanges to only a subset of all the modules in a circuit. The three algorithms are simulated on a DEC 1090 system in Pascal, and the performance of these algorithms in terms of total wirelength and computation time is compared with the results obtained by Steinberg for the 34-module backboard wiring problem. Performance analysis of the first two algorithms reveals that algorithms based on pairwise trial interchanges (2-interchanges) achieve a desired placement faster than the algorithms based on trial N interchanges. The first two algorithms do not perform better than Steinberg's algorithm, whereas the third algorithm, based on trial pairwise interchange among unconnected pairs of modules (UPM) and connected pairs of modules (CPM), performs better than Steinberg's algorithm, both in terms of total wirelength (TWL) and computation time.
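
A hedged sketch of placement improvement by pairwise trial interchange on an invented 16-module grid instance; the netlist and wirelength model are assumptions, and the paper's restriction of trials to connected/unconnected module pairs is not reproduced.

```python
import itertools, random

random.seed(0)
N_MODULES, GRID_W = 16, 4                         # 16 modules on a 4 x 4 grid of slots
nets = [(random.randrange(N_MODULES), random.randrange(N_MODULES)) for _ in range(30)]
nets = [(a, b) for a, b in nets if a != b]        # invented two-pin connections
slot_of = list(range(N_MODULES))                  # module i currently occupies slot slot_of[i]

def dist(s1, s2):
    """Manhattan distance between two grid slots."""
    return abs(s1 % GRID_W - s2 % GRID_W) + abs(s1 // GRID_W - s2 // GRID_W)

def total_wirelength(slot_of):
    return sum(dist(slot_of[a], slot_of[b]) for a, b in nets)

improved = True
while improved:                                   # repeat passes of pairwise trial interchanges
    improved = False
    for i, j in itertools.combinations(range(N_MODULES), 2):
        before = total_wirelength(slot_of)
        slot_of[i], slot_of[j] = slot_of[j], slot_of[i]
        if total_wirelength(slot_of) < before:
            improved = True                       # keep the improving interchange
        else:
            slot_of[i], slot_of[j] = slot_of[j], slot_of[i]   # undo the trial
print(total_wirelength(slot_of))
```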

Relevance:

20.00%

Publisher:

Abstract:

In this paper, the design and implementation of a single shared bus, shared memory multiprocessing system using Intel's single board computers is presented. The hardware configuration and the operating system developed to execute the parallel algorithms are discussed. The performance evaluation studies carried out on Image are outlined.

Relevance:

20.00%

Publisher:

Abstract:

The Reeb graph tracks topology changes in level sets of a scalar function and finds applications in scientific visualization and geometric modeling. We describe an algorithm that constructs the Reeb graph of a Morse function defined on a 3-manifold. Our algorithm maintains connected components of the two-dimensional level sets as a dynamic graph and constructs the Reeb graph in O(n log n + n log g (log log g)^3) time, where n is the number of triangles in the tetrahedral mesh representing the 3-manifold and g is the maximum genus over all level sets of the function. We extend this algorithm to construct Reeb graphs of d-manifolds in O(n log n (log log n)^3) time, where n is the number of triangles in the simplicial complex that represents the d-manifold. Our result is a significant improvement over the previously known O(n^2) algorithm. Finally, we present experimental results of our implementation and demonstrate that our algorithm for 3-manifolds performs efficiently in practice.
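
A hedged sketch of a much simpler building block than the paper's algorithm: a union-find sweep that tracks connected components of sublevel sets of a scalar function on a graph (a join tree), assuming distinct function values; the graph, the values and the event encoding are invented for illustration.

```python
def join_tree_events(values, edges):
    """Sweep vertices in increasing function value, merging components of the
    sublevel set across edges with a union-find; record component births and
    merges.  Assumes distinct function values."""
    order = sorted(range(len(values)), key=lambda v: values[v])
    parent, events = {}, []

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]         # path halving
            v = parent[v]
        return v

    lower = {v: [] for v in range(len(values))}   # lower-valued neighbours of each vertex
    for a, b in edges:
        hi, lo = (b, a) if values[a] < values[b] else (a, b)
        lower[hi].append(lo)

    for v in order:
        parent[v] = v
        roots = {find(u) for u in lower[v]}
        if not roots:
            events.append(("birth", v))           # a new sublevel-set component appears
        elif len(roots) > 1:
            events.append(("merge", v))           # two or more components join here
        for r in roots:
            parent[r] = v
    return events

# Tiny example: a path graph whose function has minima at vertices 0, 2 and 4.
values = [0.0, 2.0, 1.0, 3.0, 0.5, 2.5]
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]
print(join_tree_events(values, edges))            # births at 0, 4, 2; merges at 1 and 3
```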