953 results for optimal sequential search


Relevance:

30.00%

Publisher:

Abstract:

A numerically stable sequential Primal–Dual LP algorithm for reactive power optimisation (RPO) is presented in this article. The algorithm minimises the voltage stability index C2 [1] of all load buses to improve the static voltage stability of the system. It handles real-time requirements effectively, such as numerical stability and identification of the most effective subset of controllers, thereby curtailing the number of controllers and their movement. The algorithm naturally selects the most effective subset of controllers for improving the objective, and hence curtails insignificant controllers. Comparison with a transmission loss minimisation objective indicates that the most effective subset of controllers, and their solution, identified under the static voltage stability improvement objective differ from those identified under the loss minimisation objective. The proposed algorithm is suitable for real-time application to improve the static voltage stability of the system.
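
As a loose illustration only (not the paper's sequential Primal-Dual algorithm), a single linearised LP step over reactive-power control movements could be set up as follows; the sensitivity vector, movement limits, and coupling constraint are hypothetical placeholders.

```python
# Illustrative single linearised LP step (not the paper's sequential
# Primal-Dual algorithm): minimise a linearised stability index c^T dx over
# control movements dx subject to per-step movement limits and one coupling
# constraint. The sensitivities and limits are made-up placeholders.
import numpy as np
from scipy.optimize import linprog

c = np.array([0.8, -0.3, 0.5, -0.1])        # d(index)/d(control), hypothetical sensitivities
step = 0.05                                  # per-iteration control movement limit (p.u.)
A_ub = np.array([[1.0, 1.0, 1.0, 1.0]])      # hypothetical linearised operating constraint
b_ub = np.array([0.08])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-step, step)] * len(c), method="highs")
print("control movements for this step:", res.x)
```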

Relevance:

30.00%

Publisher:

Abstract:

This paper considers sequential hypothesis testing in a decentralized framework. We start with two simple decentralized sequential hypothesis testing algorithms, one of which is later proved to be asymptotically Bayes optimal. We also consider composite versions of decentralized sequential hypothesis testing, and we develop a novel nonparametric version based on universal source coding theory. Finally, we design a simple decentralized multihypothesis sequential detection algorithm.
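
The building block behind such algorithms is Wald's sequential probability ratio test; a minimal sketch for two simple Gaussian hypotheses, with thresholds set from assumed target error rates, is:

```python
# Minimal Wald SPRT for H0: N(0,1) vs H1: N(1,1).
# Thresholds use Wald's approximations from target error rates alpha, beta.
import numpy as np

def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, alpha=0.01, beta=0.01):
    """Return (decision, number_of_samples_used)."""
    a = np.log(beta / (1 - alpha))        # lower threshold (accept H0)
    b = np.log((1 - beta) / alpha)        # upper threshold (accept H1)
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio increment for a Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= b:
            return "H1", n
        if llr <= a:
            return "H0", n
    return "undecided", len(samples)

rng = np.random.default_rng(0)
print(sprt(rng.normal(1.0, 1.0, size=1000)))   # data generated under H1
```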

Relevance:

30.00%

Publisher:

Abstract:

This paper considers cooperative spectrum sensing in Cognitive Radios. In our previous work we developed DualSPRT, a distributed algorithm for cooperative spectrum sensing that uses the Sequential Probability Ratio Test (SPRT) at the Cognitive Radios as well as at the fusion center. This algorithm works well but is not optimal. In this paper we propose an improved algorithm, SPRT-CSPRT, motivated by Cumulative Sum (CUSUM) procedures, and analyse it theoretically. We also modify the algorithm to handle uncertainties in SNRs and fading.
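
SPRT-CSPRT itself is not detailed in this abstract; for background, the one-sided CUSUM recursion that motivates it can be sketched as follows, with an assumed Gaussian mean-shift model and an arbitrary threshold:

```python
# One-sided CUSUM for detecting a positive mean shift in Gaussian data:
# g_k = max(0, g_{k-1} + log-likelihood-ratio increment); alarm when g_k > h.
import numpy as np

def cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0):
    g = 0.0
    for k, xi in enumerate(x, start=1):
        g = max(0.0, g + ((xi - mu0) ** 2 - (xi - mu1) ** 2) / (2 * sigma ** 2))
        if g > h:
            return k          # first alarm time
    return None               # no alarm raised

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print("alarm at sample:", cusum(data))   # expected shortly after sample 200
```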

Relevance:

30.00%

Publisher:

Abstract:

This paper develops a GIS (geographical information system)-based data mining approach for optimally selecting the locations and determining the installed capacities of distributed biomass power generation systems, in the context of decentralized energy planning for rural regions. The optimal locations within a cluster of villages are obtained by matching the installed capacity with the demand for power while minimizing the cost of transporting biomass from dispersed sources to the power generation system and the cost of distributing electricity from the power generation system to the demand centers or villages. The methodology was validated by using it to develop an optimal plan for implementing distributed biomass-based power systems to meet the rural electricity needs of Tumkur district in India, which consists of 2700 villages. The approach uses a k-medoid clustering algorithm to divide the region into clusters of villages and locate the biomass power generation systems at the medoids. The optimal value of k is determined iteratively by running the algorithm over the entire search space for different values of k, subject to demand-supply matching constraints, and is chosen to minimize the total cost of system installation, biomass transportation, and transmission and distribution. A smaller region, consisting of 293 villages, was selected to study the sensitivity of the results to varying demand and supply parameters. The clustering results are represented on a GIS map of the region.
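
The clustering step can be sketched generically; the following plain alternating k-medoids routine on synthetic village coordinates illustrates the idea, without the GIS layers, demand-supply matching, or cost model of the paper:

```python
# Plain alternating k-medoids on 2-D "village" coordinates (synthetic data,
# not the Tumkur data set). The medoids mark candidate plant locations.
import numpy as np

def k_medoids(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)  # pairwise distances
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(d[:, medoids], axis=1)          # assign each point to its nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                # New medoid: the member minimising total distance to its cluster.
                new_medoids[j] = members[np.argmin(d[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, labels

villages = np.random.default_rng(2).uniform(0, 100, size=(300, 2))
medoids, labels = k_medoids(villages, k=5)
print("candidate plant sites (medoid indices):", medoids)
```

An outer loop over k, with the demand-supply and cost checks described in the abstract, would wrap a routine of this kind.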

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider signal detection in nt × nr underdetermined MIMO (UD-MIMO) systems, where i) nt > nr with an overload factor α = nt/nr > 1, ii) nt symbols are transmitted per channel use through spatial multiplexing, and iii) nt and nr are large (in the range of tens). A low-complexity detection algorithm based on reactive tabu search is considered. A stopping criterion based on a variable threshold is proposed, which offers near-optimal performance in large UD-MIMO systems at low complexity. A lower bound on the maximum likelihood (ML) bit error performance of large UD-MIMO systems is also obtained for comparison. The proposed algorithm is shown to achieve BER performance within 0.6 dB of the ML lower bound at an uncoded BER of 10⁻² in a 16 × 8 V-BLAST UD-MIMO system with 4-QAM (32 bps/Hz). Similar near-ML performance results are shown for 32 × 16 and 32 × 24 V-BLAST UD-MIMO with 4-QAM/16-QAM as well. A performance and complexity comparison between the proposed algorithm and the λ-generalized sphere decoder (λ-GSD) algorithm for UD-MIMO shows that the proposed algorithm achieves almost the same performance as λ-GSD at significantly lower complexity.
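
A stripped-down tabu search of this general flavour, using single-symbol flips over a BPSK alphabet rather than the paper's 4-/16-QAM constellations and reactive, variable-threshold stopping rule, could look like this:

```python
# Stripped-down tabu search for ML-like detection: minimise ||y - H x||^2 over
# x in {-1,+1}^nt (BPSK stand-in; the paper's reactive tabu search and
# variable-threshold stopping criterion are not reproduced).
import numpy as np

def tabu_detect(H, y, iters=200, tenure=5, seed=0):
    rng = np.random.default_rng(seed)
    nt = H.shape[1]
    x = rng.choice([-1.0, 1.0], size=nt)              # random initial symbol vector
    best, best_cost = x.copy(), np.sum((y - H @ x) ** 2)
    tabu_until = np.zeros(nt, dtype=int)              # iteration until which each flip is tabu
    for it in range(1, iters + 1):
        costs = []
        for i in range(nt):                           # neighbourhood: flip one symbol at a time
            xn = x.copy(); xn[i] = -xn[i]
            costs.append(np.sum((y - H @ xn) ** 2))
        for i in np.argsort(costs):                   # best non-tabu move (aspiration: beats best)
            if tabu_until[i] <= it or costs[i] < best_cost:
                x[i] = -x[i]
                tabu_until[i] = it + tenure
                if costs[i] < best_cost:
                    best, best_cost = x.copy(), costs[i]
                break
    return best

H = np.random.default_rng(3).normal(size=(8, 16))     # underdetermined: nr = 8 < nt = 16
x_true = np.random.default_rng(4).choice([-1.0, 1.0], size=16)
y = H @ x_true + 0.1 * np.random.default_rng(5).normal(size=8)
print("bit errors:", int(np.sum(tabu_detect(H, y) != x_true)))
```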

Relevance:

30.00%

Publisher:

Abstract:

This paper considers cooperative spectrum sensing algorithms for Cognitive Radios which focus on reducing the number of samples needed for a reliable detection. We propose algorithms based on decentralized sequential hypothesis testing in which the Cognitive Radios sequentially collect observations, make local decisions and send them to the fusion center, which processes them further to make a final decision on spectrum usage. The reporting channel between the Cognitive Radios and the fusion center is modelled, more realistically, as a Multiple Access Channel (MAC) with receiver noise. Furthermore, the communication for reporting is limited, thereby reducing the communication cost. We start with an algorithm in which the fusion center uses an SPRT-like (Sequential Probability Ratio Test) procedure and theoretically analyze its performance. Asymptotically, its performance is close to that of the optimal centralized test without fusion center noise. We further modify this algorithm to improve its performance at practical operating points. Later we generalize these algorithms to handle uncertainties in SNR and fading. (C) 2014 Elsevier B.V. All rights reserved.
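
A toy simulation of this kind of scheme, with local SPRT-style accumulators, hard reports over a Gaussian MAC, and an SPRT-like cumulative test at the fusion centre, is sketched below; every constant and the noise model are illustrative assumptions rather than the paper's setup.

```python
# Toy decentralised detection: each CR accumulates a local LLR and, once past
# a local threshold, transmits +b or -b every slot; the fusion centre sees the
# noisy sum (Gaussian MAC) and runs its own cumulative threshold test.
# All constants are illustrative, not the paper's operating points.
import numpy as np

rng = np.random.default_rng(0)
L, b, local_th, fc_th = 5, 1.0, 4.0, 10.0     # nodes, report amplitude, thresholds
mu0, mu1 = 0.0, 0.5                           # Gaussian means under H0 / H1 (sigma = 1)

llr = np.zeros(L)                             # local LLR accumulators
fc_stat, decision = 0.0, None
for t in range(1, 10_000):
    x = rng.normal(mu1, 1.0, size=L)          # observations generated under H1
    llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / 2.0
    reports = b * np.sign(llr) * (np.abs(llr) >= local_th)   # transmit only after a local crossing
    fc_rx = reports.sum() + rng.normal(0.0, 1.0)             # MAC: superposition plus receiver noise
    fc_stat += fc_rx
    if abs(fc_stat) >= fc_th:
        decision = ("H1" if fc_stat > 0 else "H0", t)
        break
print(decision)
```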

Relevance:

30.00%

Publisher:

Abstract:

Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear, non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model, for instance to obtain maximum likelihood estimates of model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011] an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. Lp bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these Lp bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
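
The filter-derivative recursion itself is not reproduced here; for orientation, the plain bootstrap particle filter on which such estimators are built can be sketched for a linear-Gaussian toy model (all model settings below are illustrative):

```python
# Plain bootstrap particle filter for x_t = a x_{t-1} + v_t, y_t = x_t + w_t
# (linear-Gaussian toy model; the filter-derivative recursion of the paper
# is not implemented here).
import numpy as np

def bootstrap_filter(y, a=0.9, sig_v=1.0, sig_w=1.0, N=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, size=N)                           # initial particles
    means = []
    for yt in y:
        x = a * x + rng.normal(0.0, sig_v, size=N)             # propagate through the dynamics
        logw = -0.5 * ((yt - x) / sig_w) ** 2                  # Gaussian likelihood (unnormalised)
        w = np.exp(logw - logw.max()); w /= w.sum()            # normalise weights stably
        means.append(np.sum(w * x))                            # filtered mean estimate
        x = x[rng.choice(N, size=N, p=w)]                      # multinomial resampling
    return np.array(means)

rng = np.random.default_rng(1)
x_true, y = 0.0, []
for _ in range(100):
    x_true = 0.9 * x_true + rng.normal()
    y.append(x_true + rng.normal())
print(bootstrap_filter(y)[-5:])
```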

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear, non-Gaussian state-space models arise in numerous applications in control and signal processing. Sequential Monte Carlo (SMC) methods, also known as Particle Filters, are numerical techniques based on importance sampling for solving the optimal state estimation problem. Calibrating the state-space model is an important problem frequently faced by practitioners, and the observed data may be used to estimate the parameters of the model. The aim of this paper is to present a comprehensive overview of the SMC methods that have been proposed for this task, accompanied by a discussion of their advantages and limitations.

Relevance:

30.00%

Publisher:

Abstract:

A search for dielectron decays of heavy neutral resonances has been performed using proton-proton collision data collected at √s = 7 TeV by the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider (LHC) in 2011. The data sample corresponds to an integrated luminosity of 5 fb⁻¹. The dielectron mass distribution is consistent with Standard Model (SM) predictions. An upper limit on the ratio of the cross section times branching fraction of new bosons to that of the Z boson is set at the 95% confidence level. This result is translated into lower limits on the mass of new neutral particles: 2120 GeV for the Z′ in the Sequential Standard Model, 1810 GeV for the superstring-inspired Z′_ψ resonance, and 1940 (1640) GeV for Kaluza-Klein gravitons with coupling parameter k/M_Pl of 0.10 (0.05).

Relevance:

30.00%

Publisher:

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires selecting the most promising design parameters from a large design space, based on an evaluation against specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters, which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form: they give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration power needed to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
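
A minimal sketch of a plain real-coded genetic algorithm maximising a product of preference functions is given below; the GA operators and preference shapes are generic placeholders, not the hGA or vGA of the paper.

```python
# Generic real-coded GA maximising a product of "preference" scores for two
# design parameters (placeholder preference shapes, not the paper's hGA/vGA).
import numpy as np

def preference(x):
    # Degree-of-satisfaction in (0, 1] for each criterion (hypothetical shapes).
    cost_pref = np.exp(-((x[:, 0] - 2.0) ** 2))              # prefer parameter 1 near 2.0
    safety_pref = 1.0 / (1.0 + np.exp(-(x[:, 1] - 1.0)))     # prefer parameter 2 above 1.0
    return cost_pref * safety_pref                           # combined evaluation measure

def ga(pop=60, gens=100, bounds=(0.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(*bounds, size=(pop, 2))
    for _ in range(gens):
        fit = preference(x)
        parents = x[rng.choice(pop, size=pop, p=fit / fit.sum())]   # fitness-proportional selection
        mates = parents[rng.permutation(pop)]
        alpha = rng.uniform(size=(pop, 1))
        children = alpha * parents + (1 - alpha) * mates            # blend crossover
        children += rng.normal(0.0, 0.1, size=children.shape)       # Gaussian mutation
        x = np.clip(children, *bounds)
    return x[np.argmax(preference(x))]

print("best design parameters:", ga())
```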

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.

Relevance:

30.00%

Publisher:

Abstract:

The location of a flame front is often taken as the point of maximum OH gradient. Planar laser-induced fluorescence of OH can therefore be used to obtain the flame front by extracting the points of maximum gradient, an operation typically performed with an edge detection algorithm. Choosing the operating parameters a priori poses significant robustness problems when handling images with a range of signal-to-noise ratios. A statistical method of parameter selection originating in the image processing literature is detailed, and its merit for this application is demonstrated. A reduced-search-space method is proposed to decrease the computational cost and render the technique viable for large data sets; it gives nearly identical output to the full method. These methods substantially decrease data rejection compared to the use of a priori parameters, and they are viable for any application where maximum-gradient contours must be accurately extracted from images of species or temperature, even at very low signal-to-noise ratios.
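
The statistical parameter-selection scheme is not reproduced here; a bare-bones version of the underlying operation, extracting a maximum-gradient contour from a noisy synthetic scalar field with a hand-picked smoothing width, might look like this:

```python
# Bare-bones maximum-gradient extraction from a synthetic "OH" field with a
# smoothed step edge plus noise (the paper's statistical parameter selection
# and reduced-search-space method are not reproduced; sigma is a guess).
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 256)
profile = 1.0 / (1.0 + np.exp(-20 * x))       # smoothed step edge at x = 0
field = np.tile(profile, (256, 1))            # 2-D synthetic image
field += 0.05 * rng.normal(size=field.shape)  # noise stand-in

smoothed = ndimage.gaussian_filter(field, sigma=2.0)   # denoise before differentiating
gy, gx = np.gradient(smoothed)
gmag = np.hypot(gx, gy)

# The per-row maximum-gradient location approximates the front contour.
front_cols = np.argmax(gmag, axis=1)
print("mean front position (pixel column):", front_cols.mean())   # expect ~128
```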

Relevance:

30.00%

Publisher:

Abstract:

A method is given for solving an optimal H2 approximation problem for SISO linear time-invariant stable systems. The method, based on constructive algebra, guarantees that the global optimum is found; it does not involve any gradient-based search, and hence avoids the usual problems of local minima. We mainly examine the case in which the model order is reduced by one and the original system has distinct poles; this case exhibits special structure which allows us to provide a complete solution. The problem is converted into linear algebra by exhibiting a finite-dimensional basis for a certain space, and can then be solved by eigenvalue calculations, following the methods developed by Stetter and Moeller. The use of Buchberger's algorithm is avoided by writing the first-order optimality conditions in a special form, from which a Groebner basis is immediately available. Compared with our previous work, the method presented here has much smaller time and memory requirements, and can therefore be applied to systems of significantly higher McMillan degree. In addition, some hypotheses which were required in the previous work have been removed. Some examples are included.
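
The constructive-algebra machinery is beyond a short snippet; what can be sketched is the numerical check such a method ultimately supports, namely evaluating the H2 norm of the error system G - Ĝ via the controllability Gramian (the systems below are arbitrary placeholders, not the paper's solution):

```python
# H2 norm of a stable state-space system via the controllability Gramian:
# ||G||_H2^2 = trace(C P C^T) with A P + P A^T + B B^T = 0. Used here only to
# evaluate the error G - G_hat for a toy system and a hypothetical reduced model.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def h2_norm(A, B, C):
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    return float(np.sqrt(np.trace(C @ P @ C.T)))

# Toy 3rd-order SISO system and a hypothetical 2nd-order candidate reduction.
A1, B1, C1 = np.diag([-1.0, -2.0, -5.0]), np.ones((3, 1)), np.array([[1.0, 1.0, 1.0]])
A2, B2, C2 = np.diag([-1.1, -2.2]), np.ones((2, 1)), np.array([[1.0, 1.0]])

# Error system G - G_hat as a block-diagonal realisation with output [C1, -C2].
Ae = np.block([[A1, np.zeros((3, 2))], [np.zeros((2, 3)), A2]])
Be = np.vstack([B1, B2])
Ce = np.hstack([C1, -C2])
print("H2 approximation error:", h2_norm(Ae, Be, Ce))
```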

Relevance:

30.00%

Publisher:

Abstract:

Nonlinear non-Gaussian state-space models arise in numerous applications in control and signal processing. Sequential Monte Carlo (SMC) methods, also known as Particle Filters, provide very good numerical approximations to the associated optimal state estimation problems. However, in many scenarios, the state-space model of interest also depends on unknown static parameters that need to be estimated from the data. In this context, standard SMC methods fail and it is necessary to rely on more sophisticated algorithms. The aim of this paper is to present a comprehensive overview of SMC methods that have been proposed to perform static parameter estimation in general state-space models. We discuss the advantages and limitations of these methods. © 2009 IFAC.

Relevance:

30.00%

Publisher:

Abstract:

POMDP algorithms have made significant progress in recent years, allowing practitioners to find good solutions to increasingly large problems. Most approaches (including point-based and policy iteration techniques) operate by refining a lower bound on the optimal value function. Several approaches (e.g., HSVI2, SARSOP, grid-based approaches and online forward search) also refine an upper bound. However, approximating the optimal value function by an upper bound is computationally expensive, so tightness is often sacrificed to improve efficiency (e.g., the sawtooth approximation). In this paper, we describe a new approach that efficiently computes tighter bounds by i) conducting a prioritized breadth-first search over the reachable beliefs, ii) propagating upper bound improvements with an augmented POMDP, and iii) using exact linear programming (instead of the sawtooth approximation) for upper bound interpolation. As a result, we can represent the bounds more compactly and significantly reduce the gap between upper and lower bounds on several benchmark problems. Copyright © 2011, Association for the Advancement of Artificial Intelligence. All rights reserved.
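
As a rough illustration of the LP interpolation step only (the belief points, bound values, and the POMDP itself are invented), the upper bound at a query belief can be computed as the cheapest convex combination of stored belief points that reproduces it:

```python
# Exact LP interpolation of a POMDP upper bound at a query belief b:
#   minimise sum_i w_i * V(b_i)  s.t.  sum_i w_i * b_i = b,  w >= 0,
# over corner beliefs plus stored interior belief points (all values here are
# invented for illustration).
import numpy as np
from scipy.optimize import linprog

corners = np.eye(3)                                    # 3-state belief simplex corners
stored = np.array([[0.5, 0.3, 0.2], [0.2, 0.2, 0.6]])  # stored interior belief points
points = np.vstack([corners, stored])
values = np.array([10.0, 6.0, 8.0, 5.0, 4.5])          # upper-bound values at those points

b = np.array([0.4, 0.3, 0.3])                          # query belief
res = linprog(c=values, A_eq=points.T, b_eq=b,
              bounds=[(0, None)] * len(values), method="highs")
print("interpolated upper bound at b:", res.fun)
```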

Relevance:

30.00%

Publisher:

Abstract:

Combinatorial testing is an important testing method. It requires the test cases to cover various combinations of the parameters of the system under test. The test generation problem for combinatorial testing can be modeled as constructing a matrix with certain properties. This paper first discusses two combinatorial testing criteria, covering arrays and orthogonal arrays, and then proposes a backtracking search algorithm to construct matrices satisfying them. Several search heuristics and symmetry breaking techniques are used to reduce the search time. The paper also introduces techniques for generating large covering array instances from smaller ones. All the techniques have been implemented in a tool called EXACT (EXhaustive seArch of Combinatorial Test suites), which has found a new optimal covering array.
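
For concreteness, the strength-2 covering property is easy to check in a few lines; the sketch below verifies the classical orthogonal array OA(9, 4, 3, 2), and is unrelated to EXACT's backtracking search and symmetry breaking.

```python
# Check whether matrix M (rows = test cases, columns = parameters, entries in
# 0..v-1) is a strength-2 covering array: every pair of columns must cover
# all v*v value combinations.
from itertools import combinations

def is_covering_array(M, v):
    for c1, c2 in combinations(range(len(M[0])), 2):
        seen = {(row[c1], row[c2]) for row in M}
        if len(seen) < v * v:                 # some value pair for (c1, c2) is missing
            return False
    return True

# Classic OA(9, 4, 3, 2): 9 rows cover all pairs over 4 three-valued parameters,
# built from rows (i, j, i+j mod 3, i+2j mod 3).
OA9 = [
    [0, 0, 0, 0], [0, 1, 1, 2], [0, 2, 2, 1],
    [1, 0, 1, 1], [1, 1, 2, 0], [1, 2, 0, 2],
    [2, 0, 2, 2], [2, 1, 0, 1], [2, 2, 1, 0],
]
print(is_covering_array(OA9, v=3))            # True
```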