168 results for Probabilistic Algorithms
Abstract:
A new class of nets, called S-nets, is introduced for the performance analysis of scheduling algorithms used in real-time systems. Deterministic timed Petri nets do not adequately model the scheduling of resources encountered in real-time systems, and need to be augmented with resource places, signal places, and a scheduler block to facilitate the modeling of scheduling algorithms. The tokens are colored, and the transition firing rules are suitably modified. Further, the concept of transition folding is used to obtain intuitively simple models of multiframe real-time systems. Two generic performance measures, called "load index" and "balance index," which characterize the resource utilization and the uniformity of workload distribution, respectively, are defined. The utility of S-nets for evaluating heuristic-based scheduling schemes is illustrated by considering three heuristics for real-time scheduling. S-nets are useful in tuning the hardware configuration and the underlying scheduling policy so that system utilization is maximized and the workload distribution among the computing resources is balanced.
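The abstract does not give closed-form definitions of the two measures, so the following minimal sketch assumes the load index is the mean utilization across computing resources and the balance index is the ratio of the least-loaded to the most-loaded resource; both formulas are illustrative assumptions, not the paper's definitions.

```python
# Illustrative (assumed) formulas for the two generic performance measures.
def load_index(utilizations):
    """Assumed: average utilization across all computing resources."""
    return sum(utilizations) / len(utilizations)

def balance_index(utilizations):
    """Assumed: uniformity of workload distribution, 1.0 = perfectly balanced."""
    return min(utilizations) / max(utilizations) if max(utilizations) > 0 else 1.0

if __name__ == "__main__":
    # Utilization of three processors under some scheduling heuristic.
    u = [0.80, 0.75, 0.60]
    print(f"load index    = {load_index(u):.2f}")
    print(f"balance index = {balance_index(u):.2f}")
```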
Abstract:
An important tool in signal processing is the use of eigenvalue and singular value decompositions for extracting information from time-series/sensor array data. These tools are used in the so-called subspace methods that underlie solutions to the harmonic retrieval problem in time series and the directions-of-arrival (DOA) estimation problem in array processing. The subspace methods require the knowledge of eigenvectors of the underlying covariance matrix to estimate the parameters of interest. Eigenstructure estimation in signal processing has two important classes: (i) estimating the eigenstructure of the given covariance matrix and (ii) updating the eigenstructure estimates given the current estimate and new data. In this paper, we survey some algorithms for both these classes useful for harmonic retrieval and DOA estimation problems. We begin by surveying key results in the literature and then describe, in some detail, energy function minimization approaches that underlie a class of feedback neural networks. Our approaches estimate some or all of the eigenvectors corresponding to the repeated minimum eigenvalue and also multiple orthogonal eigenvectors corresponding to the ordered eigenvalues of the covariance matrix. Our presentation includes some supporting analysis and simulation results. We note that eigensubspace estimation is a vast area, and not all of its aspects can be covered in a single paper. (C) 1995 Academic Press, Inc.
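As a point of reference for class (i) above, the following sketch computes the eigenstructure of a sample covariance matrix from array snapshots with a plain batch eigendecomposition; it is not the neural-network energy-minimization approach surveyed in the paper, and the array/source parameters are illustrative.

```python
# Batch eigenstructure estimation from sensor-array snapshots (generic baseline).
import numpy as np

def sample_eigenstructure(snapshots):
    """snapshots: (n_sensors, n_samples) complex array of sensor data."""
    n = snapshots.shape[1]
    R = snapshots @ snapshots.conj().T / n          # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(R)            # ascending eigenvalues
    return eigvals[::-1], eigvecs[:, ::-1]          # return in descending order

# Example: 4-sensor array, 200 snapshots of one narrowband source plus noise.
rng = np.random.default_rng(0)
steering = np.exp(1j * np.pi * np.arange(4) * np.sin(0.3))[:, None]
x = steering * rng.standard_normal((1, 200)) + 0.1 * rng.standard_normal((4, 200))
vals, vecs = sample_eigenstructure(x)
print("eigenvalues (descending):", np.round(vals, 3))
```

The dominant eigenvector spans the signal subspace used by subspace methods; the remaining eigenvectors span the noise subspace.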
Abstract:
Genetic algorithms (GAs) are search methods that are being employed in a multitude of applications with extremely large search spaces. Recently, there has been considerable interest among GA researchers in understanding and formalizing the working of GAs. In an earlier paper, we introduced the notion of binomially distributed populations as the central idea behind an exact "populationary" model of the large-population dynamics of the GA operators for objective functions called "functions of unitation." In this paper, we extend this populationary model of GA dynamics to a more general class of objective functions called functions of unitation variables. We generalize the notion of a binomially distributed population to a generalized binomially distributed population (GBDP). We show that the effects of selection, crossover, and mutation can be exactly modelled after decomposing the population into GBDPs. Based on this generalized model, we have implemented a GA simulator for functions of two unitation variables (GASIM 2), and the distributions predicted by GASIM 2 match those obtained from actual GA runs. The generalized populationary model of GA dynamics not only presents a novel and natural way of interpreting the workings of GAs with large populations, but it also provides for an efficient implementation of the model as a GA simulator. (C) Elsevier Science Inc. 1997.
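For concreteness, the toy GA below runs on a single function of unitation (fitness depends only on the number of 1-bits in a chromosome), exercising the selection, crossover, and mutation operators whose large-population dynamics the populationary model describes; it is not the GASIM 2 simulator, and all parameters are illustrative.

```python
# Toy GA on a function of unitation: fitness depends only on the bit count.
import random

L, POP, GENS, PMUT = 20, 100, 30, 0.01

def fitness(chrom):            # function of unitation: here simply the number of ones
    return sum(chrom)

def select(pop):               # fitness-proportionate (roulette-wheel) selection
    return random.choices(pop, weights=[fitness(c) + 1e-9 for c in pop], k=1)[0]

def crossover(a, b):           # single-point crossover
    p = random.randint(1, L - 1)
    return a[:p] + b[p:]

def mutate(c):                 # bit-flip mutation
    return [bit ^ 1 if random.random() < PMUT else bit for bit in c]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]
print("best unitation after run:", max(fitness(c) for c in pop))
```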
Abstract:
We consider a discrete-time queue with finite capacity and i.i.d. and Markov-modulated arrivals. Efficient algorithms are developed to calculate the moments and the distributions of the first time to overflow and the regeneration length. Results are extended to the multiserver queue. Some illustrative numerical examples are provided.
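The exact algorithms are not reproduced in the abstract; as a sanity-check sketch, the Monte Carlo simulation below estimates the mean first time to overflow for a discrete-time single-server queue with finite capacity and i.i.d. arrivals, with illustrative parameter values.

```python
# Monte Carlo estimate (not the paper's exact algorithms) of the mean first time
# to overflow for a finite-capacity discrete-time queue with i.i.d. arrivals.
import random

def first_overflow_time(K=10, p=0.6, arrivals_per_slot=2, max_slots=10**6):
    q = 0
    for t in range(1, max_slots + 1):
        a = sum(random.random() < p for _ in range(arrivals_per_slot))  # arrivals this slot
        q += a
        if q > K:                      # overflow: capacity exceeded in this slot
            return t
        q = max(q - 1, 0)              # at most one departure per slot
    return max_slots

samples = [first_overflow_time() for _ in range(2000)]
print("estimated mean first time to overflow:", sum(samples) / len(samples))
```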
Abstract:
ASICs offer the best realization of DSP algorithms in terms of performance, but the cost is prohibitive, especially when the volumes involved are low. However, if the architecture synthesis trajectory for such algorithms is such that the target architecture can be identified as an interconnection of elementary parameterized computational structures, then it is possible to attain a close match to an ASIC, in both performance and power, for any algorithmic parameters of the given algorithm. Such an architecture is weakly programmable (configurable) and can be viewed as an application-specific integrated processor (ASIP). In this work, we present a methodology to synthesize ASIPs for DSP algorithms. (C) 1999 Elsevier Science B.V. All rights reserved.
Abstract:
The conventional Cornell source-based approach to probabilistic seismic-hazard assessment (PSHA) has been employed all around the world, and many studies rely on the use of computer packages such as FRISK (McGuire, FRISK: a computer program for seismic risk analysis. Open-File Report 78-1007, United States Geological Survey, Department of Interior, Washington, 1978) and SEISRISK III (Bender and Perkins, SEISRISK III: a computer program for seismic hazard estimation. Bulletin 1772, United States Geological Survey, Department of Interior, Washington, 1987). A "black-box" syndrome may result if the user of the software does not have another simple and robust PSHA method that can be used to make comparisons. An alternative method for PSHA, namely the direct amplitude-based (DAB) approach, has been developed as a heuristic and efficient method enabling users to undertake their own sanity checks on outputs from computer packages. This paper applies the DAB approach to three cities in China, Iran, and India, respectively, and compares the results with documented results computed by the source-based approach. Several insights regarding the procedure of conducting PSHA have also been obtained, which could be useful for future seismic-hazard studies.
Abstract:
This paper considers the problem of spectrum sensing in cognitive radio networks when the primary user is using Orthogonal Frequency Division Multiplexing (OFDM). For this we develop cooperative sequential detection algorithms that use the autocorrelation property of cyclic prefix (CP) used in OFDM systems. We study the effect of timing and frequency offset, IQ-imbalance and uncertainty in noise and transmit power. We also modify the detector to mitigate the effects of these impairments. The performance of the proposed algorithms is studied via simulations. We show that sequential detection can significantly improve the performance over a fixed sample size detector.
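The sketch below illustrates the core idea in a simplified, single-sensor form: the cyclic prefix makes the first Ncp samples of each OFDM symbol copies of its last Ncp samples, so a normalised lag-Nc autocorrelation statistic is accumulated in an SPRT-style sequential test. Symbol timing is assumed known here (unlike in the paper, which handles timing and frequency offsets), and the thresholds and signal parameters are illustrative; this is not the cooperative detector proposed in the paper.

```python
# Simplified sequential CP-autocorrelation detector for OFDM spectrum sensing.
import numpy as np

def sequential_cp_detector(x, Nc, Ncp, up=3.0, down=-3.0):
    """Returns (decision, OFDM symbols used). Thresholds are illustrative."""
    blk, stat = Nc + Ncp, 0.0
    for k in range(len(x) // blk):
        b = x[k * blk:(k + 1) * blk]
        num = np.real(np.sum(b[:Ncp] * np.conj(b[Nc:Nc + Ncp])))   # CP correlation
        den = np.sum(np.abs(b[:Ncp]) * np.abs(b[Nc:Nc + Ncp])) + 1e-12
        stat += num / den - 0.5        # positive drift only if a cyclic prefix is present
        if stat > up:
            return "primary user present", k + 1
        if stat < down:
            return "primary user absent", k + 1
    return "undecided", len(x) // blk

# Example: OFDM signal with CP in noise versus noise alone.
rng = np.random.default_rng(1)
Nc, Ncp, nsym = 64, 16, 50
syms = rng.standard_normal((nsym, Nc)) + 1j * rng.standard_normal((nsym, Nc))
t = np.fft.ifft(syms, axis=1)
sig = np.concatenate([t[:, -Ncp:], t], axis=1).ravel()
noise = 0.05 * (rng.standard_normal(sig.size) + 1j * rng.standard_normal(sig.size))
print(sequential_cp_detector(sig + noise, Nc, Ncp))   # typically: present
print(sequential_cp_detector(noise, Nc, Ncp))          # typically: absent
```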
Abstract:
Parallel execution of computational mechanics codes requires efficient mesh-partitioning techniques. These mesh-partitioning techniques divide the mesh into a specified number of submeshes of approximately the same size while minimising the interface nodes of the submeshes. This paper describes a new mesh-partitioning technique employing Genetic Algorithms. The proposed algorithm operates on the deduced graph (dual or nodal graph) of the given finite element mesh rather than directly on the mesh itself. The algorithm works by first constructing a coarse graph approximation using an automatic graph coarsening method. The coarse graph is partitioned and the results are interpolated onto the original graph to initialise an optimisation of the graph partition problem. In practice, a hierarchy of (usually more than two) graphs is used to obtain the final graph partition. The proposed partitioning algorithm is applied to graphs derived from unstructured finite element meshes describing practical engineering problems, and also to several example graphs related to finite element meshes given in the literature. The test results indicate that the proposed GA-based graph partitioning algorithm generates high-quality partitions that are superior to those produced by spectral and multilevel graph partitioning algorithms.
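To make the optimisation target concrete, the toy GA below bisects a small graph: a chromosome assigns each vertex to part 0 or 1, and the fitness penalises both cut edges and imbalance between the parts. It omits the coarsening hierarchy and interpolation steps of the paper; the example graph and penalty weight are illustrative.

```python
# Toy GA for graph bisection: minimise edge cut plus an imbalance penalty.
import random

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5), (5, 6), (6, 7), (7, 4), (3, 4)]
n, POP, GENS, PMUT = 8, 60, 80, 0.05

def cost(chrom):
    cut = sum(chrom[u] != chrom[v] for u, v in edges)
    imbalance = abs(sum(chrom) - n / 2)
    return cut + 2.0 * imbalance            # penalty weight is illustrative

def offspring(a, b):
    p = random.randint(1, n - 1)            # single-point crossover
    child = a[:p] + b[p:]
    return [g ^ 1 if random.random() < PMUT else g for g in child]

pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=cost)
    parents = pop[:POP // 2]                # truncation selection
    pop = parents + [offspring(random.choice(parents), random.choice(parents))
                     for _ in range(POP - len(parents))]
best = min(pop, key=cost)
print("partition:", best, "cut + penalty:", cost(best))
```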
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (m ≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle Cartesian products of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m^2 log n^2) iterations; in each iteration one solves an MKL involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decompositions. An alternative to both EMKL and REKL is also suggested, which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
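As a much-simplified analogue of the mirror descent idea (and emphatically not the EMKL/REKL algorithms, which operate on Cartesian products of psd matrices), the sketch below runs mirror descent with the entropy mirror map, i.e. exponentiated gradient, over the simplex of scalar kernel combination weights, using an assumed quadratic loss that measures the fit of the combined kernel to a target y·yᵀ.

```python
# Exponentiated-gradient (entropy mirror descent) over kernel combination weights.
import numpy as np

def mirror_descent_weights(kernels, y, steps=300, eta=0.05):
    m = len(kernels)
    target = np.outer(y, y)
    w = np.full(m, 1.0 / m)                       # start at the simplex centre
    for _ in range(steps):
        combined = sum(wi * K for wi, K in zip(w, kernels))
        # gradient of ||sum_i w_i K_i - y y^T||_F^2 with respect to each weight
        grad = np.array([2.0 * np.sum((combined - target) * K) for K in kernels])
        w = w * np.exp(-eta * grad)               # multiplicative (entropy) update
        w /= w.sum()                              # renormalise onto the simplex
    return w

# Example with three random psd similarity matrices and +/-1 labels.
rng = np.random.default_rng(0)
n = 20
y = np.sign(rng.standard_normal(n))
Ks = []
for _ in range(3):
    A = rng.standard_normal((n, n))
    Ks.append(A @ A.T / n)                        # random psd similarity matrix
print("learned weights:", np.round(mirror_descent_weights(Ks, y), 3))
```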
Abstract:
Instruction scheduling with an automaton-based resource conflict model is well established for normal scheduling. Such models have been generalized to software pipelining in the modulo-scheduling framework. One weakness of existing methods is that a distinct automaton must be constructed for each combination of a reservation table and initiation interval. In this work, we present a different approach to modelling conflicts. We construct one automaton for each reservation table, which acts as a compact encoding of all the conflict automata for this table and from which they can be recovered for use in modulo scheduling. The basic premise of the construction is to move away from the Proebsting-Fraser model of the conflict automaton to the Muller model of an automaton modelling issue sequences. The latter turns out to be useful and efficient in this situation. Having constructed this automaton, we show how to improve the estimate of the resource-constrained initiation interval. Such a bound is always better than the average-use estimate. We show that our bound is safe: it never exceeds the true initiation interval. This use of the automaton is orthogonal to its use in modulo scheduling. Once we generate the required information during pre-processing, we can compute the lower bound for a program without any further reference to the automaton.
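For reference, the sketch below computes the classical average-use resource-constrained lower bound on the initiation interval (ResMII), which is the estimate the paper's automaton-based bound improves on. The reservation tables and machine configuration are illustrative.

```python
# Average-use lower bound on the initiation interval from reservation tables.
import math
from collections import Counter

def average_use_mii(reservation_tables, units_per_resource):
    """reservation_tables: one dict per operation, resource -> cycles it occupies.
    units_per_resource: resource -> number of available functional units."""
    use = Counter()
    for table in reservation_tables:
        for resource, cycles in table.items():
            use[resource] += len(cycles)
    return max(math.ceil(use[r] / units_per_resource[r]) for r in use)

# Example: a loop body with three operations on a machine with 2 ALUs and 1 multiplier.
ops = [
    {"alu": [0]},                   # add
    {"alu": [0], "mult": [1, 2]},   # multiply-accumulate (multiplier busy 2 cycles)
    {"mult": [0, 1]},               # multiply
]
print("average-use lower bound on II:", average_use_mii(ops, {"alu": 2, "mult": 1}))
```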
Abstract:
We have developed two reduced-complexity bit-allocation algorithms for MP3/AAC-based audio encoding, which can be useful at low bit rates. One algorithm derives the optimum bit allocation using constrained optimization of the weighted noise-to-mask ratio, and the second algorithm uses decoupled iterations for distortion control and rate control, with convergence criteria. MUSHRA-based evaluation indicated that the new algorithm is comparable to AAC while requiring only about one-tenth the complexity.
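For context, a generic greedy allocator is sketched below: it repeatedly gives one more bit to the scale-factor band with the worst noise-to-mask ratio until the frame's bit budget is exhausted. This is a textbook-style baseline, not either of the two algorithms in the paper, and the 6 dB-per-bit noise reduction and all numbers are illustrative assumptions.

```python
# Greedy noise-to-mask-ratio driven bit allocation (illustrative baseline).
def greedy_bit_allocation(noise_db, mask_db, lines_per_band, budget):
    bits = [0] * len(noise_db)
    noise = list(noise_db)
    while True:
        nmr = [n - m for n, m in zip(noise, mask_db)]        # noise-to-mask ratio, dB
        worst = max(range(len(nmr)), key=lambda i: nmr[i])
        if nmr[worst] <= 0 or budget < lines_per_band[worst]:
            break                                            # fully masked or out of bits
        bits[worst] += 1                                     # one more bit per spectral line
        noise[worst] -= 6.0                                  # assumed ~6 dB gain per bit
        budget -= lines_per_band[worst]
    return bits

alloc = greedy_bit_allocation(
    noise_db=[30, 28, 25, 20, 18, 15],
    mask_db=[12, 14, 10, 12, 16, 14],
    lines_per_band=[8, 8, 6, 6, 4, 4],   # spectral lines per band (illustrative)
    budget=120,
)
print("bits per line in each band:", alloc)
```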
Abstract:
Web services are now a key ingredient of software services offered by software enterprises. Many standardized web services are now available as commodity offerings from web service providers. An important problem for a web service requester is the web service composition problem, which involves selecting the right mix of web service offerings to execute an end-to-end business process. Web service offerings are now available in bundled form as composite web services, and, more recently, volume discounts are also on offer, based on the number of executions of web services requested. In this paper, we develop efficient algorithms for the web service composition problem in the presence of composite web service offerings and volume discounts. We model this problem as a combinatorial auction with volume discounts. We first develop efficient polynomial-time algorithms when the end-to-end service involves a linear workflow of web services. Next, we develop efficient polynomial-time algorithms when the end-to-end service involves a tree workflow of web services.
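The flavour of the linear-workflow case can be illustrated with a shortest-path style dynamic program, assuming each offering (single or composite) covers a contiguous block of workflow steps at a fixed price; volume discounts are not modelled here, and the offerings and prices are hypothetical, so this is only a hedged sketch of the setting, not the paper's algorithm.

```python
# DP for covering a linear workflow with single and composite service offerings.
def cheapest_composition(num_steps, offerings):
    """offerings: list of (first_step, last_step, price), steps numbered from 0."""
    INF = float("inf")
    best = [0.0] + [INF] * num_steps          # best[i] = min cost to cover steps 0..i-1
    choice = [None] * (num_steps + 1)
    for i in range(1, num_steps + 1):
        for first, last, price in offerings:
            if last == i - 1 and best[first] + price < best[i]:
                best[i] = best[first] + price
                choice[i] = (first, last, price)
    plan, i = [], num_steps
    while i > 0 and choice[i] is not None:    # trace back the chosen offerings
        plan.append(choice[i])
        i = choice[i][0]
    return best[num_steps], list(reversed(plan))

# Four-step process; single services plus one composite covering steps 1-2 at a discount.
offers = [(0, 0, 5), (1, 1, 7), (2, 2, 6), (3, 3, 4), (1, 2, 9)]
print(cheapest_composition(4, offers))   # cost 18, using the composite (1, 2, 9)
```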
Abstract:
We present two online algorithms for maintaining a topological order of a directed acyclic graph as arcs are added, and detecting a cycle when one is created. Our first algorithm takes O(m^(1/2)) amortized time per arc and our second algorithm takes O(n^(2.5)/m) amortized time per arc, where n is the number of vertices and m is the total number of arcs. For sparse graphs, our O(m^(1/2)) bound improves the best previous bound by a factor of log n and is tight to within a constant factor for a natural class of algorithms that includes all the existing ones. Our main insight is that the two-way search method of previous algorithms does not require an ordered search, but can be more general, allowing us to avoid the use of heaps (priority queues). Instead, the deterministic version of our algorithm uses (approximate) median-finding; the randomized version of our algorithm uses uniform random sampling. For dense graphs, our O(n^(2.5)/m) bound improves the best previously published bound by a factor of n^(1/4) and a recent bound obtained independently of our work by a factor of log n. Our main insight is that graph search is wasteful when the graph is dense and can be avoided by searching the topological order space instead. Our algorithms extend to the maintenance of strong components, in the same asymptotic time bounds.
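To show the problem being solved, the naive baseline below maintains a topological order under arc insertions and detects cycles; on each insertion it does a plain reachability search and, when the new arc violates the current order, rebuilds the order with Kahn's algorithm. It is far from the O(m^(1/2)) and O(n^(2.5)/m) algorithms of the paper and serves only as a correctness reference.

```python
# Naive incremental topological order maintenance with cycle detection.
from collections import defaultdict, deque

class IncrementalDAG:
    def __init__(self, n):
        self.n, self.adj = n, defaultdict(list)
        self.order = {v: v for v in range(n)}       # any order is valid with no arcs

    def _reachable(self, src, dst):                 # plain DFS reachability check
        seen, stack = {src}, [src]
        while stack:
            x = stack.pop()
            if x == dst:
                return True
            for y in self.adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return False

    def add_arc(self, u, v):
        if self._reachable(v, u):
            return False                            # inserting (u, v) would create a cycle
        self.adj[u].append(v)
        if self.order[u] > self.order[v]:           # order violated: rebuild from scratch
            indeg = {x: 0 for x in range(self.n)}
            for x in range(self.n):
                for y in self.adj[x]:
                    indeg[y] += 1
            q, pos = deque(x for x in range(self.n) if indeg[x] == 0), 0
            while q:                                # Kahn's algorithm
                x = q.popleft()
                self.order[x] = pos
                pos += 1
                for y in self.adj[x]:
                    indeg[y] -= 1
                    if indeg[y] == 0:
                        q.append(y)
        return True

g = IncrementalDAG(4)
print(g.add_arc(2, 1), g.add_arc(1, 0), g.add_arc(0, 3))   # True True True
print(g.add_arc(3, 2))                                      # False: would close a cycle
```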
Abstract:
A modified lattice model using the finite element method has been developed to study the mode-I fracture analysis of heterogeneous materials like concrete. In this model, the truss members always join at points where aggregates are located; the aggregates are modeled as plane-stress triangular elements. The truss members are given the properties of the cement mortar matrix randomly, so as to represent the randomness of strength in concrete. It is widely accepted that the fracture of concrete structures should not be based on a strength criterion alone, but should be coupled with an energy criterion. Here, the energy concept is introduced by incorporating strain softening through a parameter ‘α’. The softening branch of the load-displacement curves was successfully obtained. From the sensitivity study, it was observed that the maximum load of a beam is most sensitive to the tensile strength of the mortar. It is seen that by varying the values of the mortar properties according to a normal random distribution, better results can be obtained for the load-displacement diagram.