31 results for Multi-objective optimization problem
Abstract:
In this brief, a hybrid filter algorithm is developed to deal with the state estimation (SE) problem for power systems by taking into account the impact of phasor measurement units (PMUs). Our aim is to include PMU measurements when designing dynamic state estimators for power systems with traditional measurements. Also, as data dropouts inevitably occur in the transmission channels carrying traditional measurements from the meters to the control center, the missing-measurement phenomenon is also tackled in the state estimator design. In the framework of the extended Kalman filter (EKF) algorithm, the PMU measurements are treated as inequality constraints on the states with the aid of a statistical criterion, and the addressed SE problem then becomes a constrained optimization problem based on the probability-maximization method. The resulting constrained optimization problem is solved using the particle swarm optimization algorithm together with the penalty function approach. The proposed algorithm is applied to estimate the states of power systems with both traditional and PMU measurements in the presence of probabilistic data missing. Extensive simulations are carried out on the IEEE 14-bus test system, and it is shown that the proposed algorithm gives much improved estimation performance over the traditional EKF method.
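As a rough illustration of the solution step above, the following sketch shows penalty-function particle swarm optimization on a toy constrained problem. The objective, constraint, and all PSO parameters are hypothetical stand-ins, not the paper's actual SE formulation.

```python
import numpy as np

def pso_penalty(f, g, dim, n_particles=30, iters=200, penalty=1e3, seed=0):
    """Minimize f(x) subject to g(x) <= 0 via PSO with a quadratic penalty."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities

    def cost(p):
        # Penalized objective: violations g(p) > 0 are charged quadratically.
        return f(p) + penalty * max(0.0, g(p)) ** 2

    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Standard inertia + cognitive + social velocity update.
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        for i, p in enumerate(x):
            c = cost(p)
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = p.copy(), c
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest

# Toy problem: minimize ||x - 2||^2 subject to x[0] <= 1.
sol = pso_penalty(lambda p: float(np.sum((p - 2.0) ** 2)),
                  lambda p: p[0] - 1.0, dim=2)
```

With the penalty weight above, the swarm settles near (1, 2), i.e. on the constraint boundary in the first coordinate.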
Abstract:
Recently there has been an increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and architectural complexity). Once one has learned a model with the devised method, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Unfortunately, the standard tests used for this purpose are not able to consider performance measures jointly. The aim of this paper is to resolve this issue by developing statistical procedures that are able to account for multiple competing measures at the same time. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases is usually very small in such comparisons. Real data from a comparison among general purpose classifiers are used to show a practical application of our tests.
Abstract:
There has been an increasing interest in the development of new methods using Pareto optimality to deal with multi-objective criteria (for example, accuracy and time complexity). Once one has developed an approach to a problem of interest, the problem is then how to compare it with the state of the art. In machine learning, algorithms are typically evaluated by comparing their performance on different data sets by means of statistical tests. Standard tests used for this purpose can neither consider performance measures jointly nor compare multiple competitors at once. The aim of this paper is to resolve these issues by developing statistical procedures that are able to account for multiple competing measures at the same time and to compare multiple algorithms altogether. In particular, we develop two tests: a frequentist procedure based on the generalized likelihood-ratio test and a Bayesian procedure based on a multinomial-Dirichlet conjugate model. We further extend them by discovering conditional independences among measures to reduce the number of parameters of such models, since the number of studied cases is usually very small in such comparisons. Data from a comparison among general purpose classifiers are used to show a practical application of our tests.
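The multinomial-Dirichlet idea above can be illustrated on toy data: each dataset yields one joint outcome over two measures, the outcomes are counted per category, and the conjugate posterior over category probabilities is sampled. The counts, category scheme, and prior below are invented for illustration and are not the paper's model specification.

```python
import numpy as np

# Hypothetical counts over datasets: outcomes of comparing classifiers A and B
# jointly on two measures (e.g. accuracy and complexity). Categories:
# [A better on both, A better on one, B better on one, B better on both]
counts = np.array([12, 5, 4, 9])
prior = np.ones_like(counts)  # symmetric Dirichlet(1, 1, 1, 1) prior

rng = np.random.default_rng(0)
# Conjugacy: posterior is Dirichlet(counts + prior); sample it directly.
theta = rng.dirichlet(counts + prior, size=100_000)

# Posterior probability that "A better on both" is the more likely outcome
# than "B better on both".
p_a_dominates = float((theta[:, 0] > theta[:, 3]).mean())
```

For these invented counts the posterior mildly favors classifier A, without reaching a decisive probability.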
Abstract:
An overview of a many-body approach to calculation of electronic transport in molecular systems is given. The physics required to describe electronic transport through a molecule at the many-body level, without relying on commonly made assumptions such as the Landauer formalism or linear response theory, is discussed. Physically, our method relies on the incorporation of scattering boundary conditions into a many-body wavefunction and application of the maximum entropy principle to the transport region. Mathematically, this simple physical model translates into a constrained nonlinear optimization problem. A strategy for solving the constrained optimization problem is given. (C) 2004 Wiley Periodicals, Inc.
Abstract:
We examine the dynamic optimization problem for not-for-profit financial institutions (NFPs) that maximize consumer surplus, not profits. We characterize the optimal dynamic policy and find that it involves credit rationing. Interest rates set by mature NFPs will typically be more favorable to customers than market rates, as any surplus is distributed in the form of interest rate subsidies, with credit rationing being required to prevent these subsidies from distorting loan volumes from their optimal levels. Rationing overcomes a fundamental problem in NFPs; it allows them to distribute the surplus without distorting the volume of activity from the efficient level.
Abstract:
There is an increasing need to identify the effect of mix composition on the rheological properties of composite cement pastes using simple tests to determine the fluidity, the cohesion and other mechanical properties of grouting applications such as compressive strength. This paper reviews statistical models developed using a fractional factorial design carried out to model the influence of key parameters on properties affecting the performance of composite cement paste. The responses included fluidity (mini-slump and flow time using the Marsh cone), cohesion measured by the Lombardi plate meter, unit weight, and compressive strength at 3 d, 7 d and 28 d. The models are valid for mixes with 0.35 to 0.42 water-to-binder ratio (W/B), 10% to 40% of pulverised fuel ash (PFA) as replacement of cement by mass, 0.02% to 0.06% of viscosity enhancer admixture (VEA) by mass of binder, and 0.3% to 1.2% of superplasticizer (SP) by mass of binder. The derived models, which enable the identification of the underlying primary factors and their interactions that influence the modelled responses of composite cement paste, are presented. Such parameters can be useful to reduce the test protocol needed for proportioning of composite cement paste. This paper also demonstrates the usefulness of the models for understanding trade-offs between parameters and for comparing the responses obtained from the various test methods. Multi-parametric optimization is used to establish isoresponses for a desirability function of the composite cement paste. Results indicate that the replacement of cement by PFA compromises the early compressive strength and that, at replacement levels up to 26%, the desirability function decreased.
Abstract:
Current high temperature superconducting (HTS) wires exhibit high current densities enabling their use in electrical rotating machinery. The possibility of designing high power density superconducting motors operating at reasonable temperatures allows for new applications in mobile systems in which size and weight represent key design parameters. Thus, all-electric aircraft represent a promising application for HTS motors. The design of such a complex system as an aircraft consists of a multi-variable optimization that requires computer models and advanced design procedures. This paper presents a specific sizing model of superconducting propulsion motors to be used in aircraft design. The model also takes into account the cooling system. The requirements for this application are presented in terms of power and dynamics, as well as a load profile corresponding to a typical mission. We discuss the design implications of using a superconducting motor on an aircraft, the integration of the electrical propulsion in the aircraft, and the scaling laws derived from physics-based modeling of HTS motors.
Abstract:
The pressure and velocity field in a one-dimensional acoustic waveguide can be sensed in a non-intrusive manner using spatially distributed microphones. Experimental characterization with sensor arrangements of this type has many applications in measurement and control. This paper presents a method for measuring the acoustic variables in a duct under fluctuating propagation conditions with specific focus on in-system calibration and tracking of the system parameters of a three-microphone measurement configuration. The tractability of the non-linear optimization problem that results from taking a parametric approach is investigated alongside the influence of extraneous measurement noise on the parameter estimates. The validity and accuracy of the method are experimentally assessed in terms of the ability of the calibrated system to separate the propagating waves under controlled conditions. The tracking performance is tested through measurements with a time-varying mean flow, including an experiment conducted under propagation conditions similar to those in a wind instrument during playing.
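The wave-separation step mentioned above can be illustrated with a minimal least-squares sketch: pressures at three microphones are fitted to a sum of two counter-propagating plane waves. The frequency, microphone positions, and wave amplitudes below are hypothetical, and the paper's in-system calibration and parameter tracking are not reproduced.

```python
import numpy as np

# Plane-wave model p(x) = A*exp(-1j*k*x) + B*exp(+1j*k*x) sampled at three
# microphone positions; least squares separates the two propagating waves.
k = 2 * np.pi * 500 / 343.0          # wavenumber at 500 Hz, c = 343 m/s
mic_x = np.array([0.0, 0.05, 0.12])  # microphone positions in metres

A_true, B_true = 1.0 + 0.3j, 0.4 - 0.2j
p = A_true * np.exp(-1j * k * mic_x) + B_true * np.exp(1j * k * mic_x)

# Overdetermined linear system: one column per propagating direction.
M = np.column_stack([np.exp(-1j * k * mic_x), np.exp(1j * k * mic_x)])
(A_est, B_est), *_ = np.linalg.lstsq(M, p, rcond=None)
```

With three microphones and two unknown wave amplitudes the system is overdetermined, which is what makes in-system calibration and noise rejection possible in practice.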
Abstract:
We present an implementation of quantum annealing (QA) via lattice Green's function Monte Carlo (GFMC), focusing on its application to the Ising spin glass in transverse field. In particular, we study whether or not such a method is more effective than path-integral Monte Carlo (PIMC) based QA, as well as classical simulated annealing (CA), previously tested on the same optimization problem. We identify the issue of importance sampling, i.e., the necessity of possessing reasonably good (variational) trial wave functions, as the key point of the algorithm. We performed GFMC-QA runs using such a Boltzmann-type trial wave function, finding results for the residual energies that are qualitatively similar to those of CA (but at a much larger computational cost), and definitely worse than PIMC-QA. We conclude that, at present, without a serious effort in constructing reliable importance sampling variational wave functions for a quantum glass, GFMC-QA is not a true competitor of PIMC-QA.
Abstract:
Polar codes are one of the most recent advancements in coding theory and they have attracted significant interest. While they are provably capacity achieving over various channels, they have seen limited practical applications. Unfortunately, the successive nature of successive cancellation based decoders hinders fine-grained adaptation of the decoding complexity to design constraints and operating conditions. In this paper, we propose a systematic method for enabling complexity-performance trade-offs by constructing polar codes based on an optimization problem which minimizes the complexity under a suitably defined mutual information based performance constraint. Moreover, a low-complexity greedy algorithm is proposed in order to solve the optimization problem efficiently for very large code lengths.
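A greedy trade-off of the kind described above might look like the following sketch, which accumulates candidates by mutual-information-per-complexity ratio until a performance constraint is met. The candidate list, costs, and constraint are invented placeholders, not the paper's actual construction.

```python
# Hypothetical candidates: (name, complexity cost, mutual-information gain).
candidates = [
    ("c1", 4.0, 0.90), ("c2", 1.0, 0.55),
    ("c3", 2.0, 0.70), ("c4", 3.0, 0.60),
]
target_mi = 1.8  # required total mutual information (illustrative constraint)

# Greedy: repeatedly take the candidate with the best MI-per-complexity
# ratio until the accumulated mutual information meets the constraint.
chosen, total_mi, total_cost = [], 0.0, 0.0
for name, cost, mi in sorted(candidates, key=lambda c: c[2] / c[1], reverse=True):
    if total_mi >= target_mi:
        break
    chosen.append(name)
    total_mi += mi
    total_cost += cost
```

Such a greedy pass runs in O(n log n) for n candidates, which is what makes this style of construction tractable for very large code lengths.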
Abstract:
The development of smart grid technologies and appropriate charging strategies are key to accommodating large numbers of Electric Vehicles (EV) charging on the grid. In this paper a general framework is presented for formulating the EV charging optimization problem and three different charging strategies are investigated and compared from the perspective of charging fairness while taking into account power system constraints. Two strategies are based on distributed algorithms, namely, Additive Increase and Multiplicative Decrease (AIMD), and Distributed Price-Feedback (DPF), while the third is an ideal centralized solution used to benchmark performance. The algorithms are evaluated using a simulation of a typical residential low voltage distribution network with 50% EV penetration. © 2013 IEEE.
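The AIMD strategy mentioned above can be sketched in a few lines: each EV ramps its charging rate up additively and backs off multiplicatively when the aggregate load exceeds capacity, which drives the individual rates toward equal (fair) shares. The network capacity, fleet size, and AIMD gains are hypothetical.

```python
# Minimal AIMD sketch with assumed parameters.
n_evs, capacity = 5, 30.0        # fleet size and feeder capacity (kW)
alpha, beta = 0.5, 0.5           # additive increase / multiplicative decrease
rates = [1.0, 2.0, 3.0, 4.0, 5.0]  # deliberately unequal starting rates (kW)

history = []
for _ in range(200):
    if sum(rates) > capacity:
        # Congestion event: every EV cuts its rate by the factor beta.
        rates = [r * beta for r in rates]
    else:
        # Otherwise each EV ramps up its rate by alpha.
        rates = [r + alpha for r in rates]
    history.append(sum(rates))

avg_load = sum(history[50:]) / len(history[50:])
```

The multiplicative back-off shrinks the spread between rates at every congestion event while the additive phase preserves it, so the rates converge toward equality; meanwhile the aggregate load oscillates just below the capacity limit.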
Abstract:
In a Bayesian learning setting, the posterior distribution of a predictive model arises from a trade-off between its prior distribution and the conditional likelihood of observed data. Such distribution functions usually rely on additional hyperparameters which need to be tuned in order to achieve optimum predictive performance; this operation can be efficiently performed in an Empirical Bayes fashion by maximizing the posterior marginal likelihood of the observed data. Since the score function of this optimization problem is in general characterized by the presence of local optima, it is necessary to resort to global optimization strategies, which require a large number of function evaluations. Given that the evaluation is usually computationally intensive and badly scaled with respect to the dataset size, the maximum number of observations that can be treated simultaneously is quite limited. In this paper, we consider the case of hyperparameter tuning in Gaussian process regression. A straightforward implementation of the posterior log-likelihood for this model requires O(N^3) operations for every iteration of the optimization procedure, where N is the number of examples in the input dataset. We derive a novel set of identities that allow, after an initial overhead of O(N^3), the evaluation of the score function, as well as the Jacobian and Hessian matrices, in O(N) operations. We prove how the proposed identities, that follow from the eigendecomposition of the kernel matrix, yield a reduction of several orders of magnitude in the computation time for the hyperparameter optimization problem. Notably, the proposed solution provides computational advantages even with respect to state of the art approximations that rely on sparse kernel matrices.
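The computational trick described above can be illustrated in a simplified special case: when the kernel is a fixed matrix scaled by a signal variance plus a noise term, one O(N^3) eigendecomposition makes every subsequent marginal-likelihood evaluation O(N) work on the eigenvalues. This sketch is a reduced instance of the idea, not the paper's full set of identities.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
X = rng.uniform(-3, 3, N)
y = np.sin(X) + 0.1 * rng.standard_normal(N)

# Fixed-lengthscale RBF kernel; eigendecompose once at O(N^3) cost.
K0 = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2)
eigvals, Q = np.linalg.eigh(K0)
Qty = Q.T @ y  # projected targets, also computed once

def neg_log_marginal_likelihood(signal_var, noise_var):
    """O(N) evaluation: K = signal_var*K0 + noise_var*I shares eigenvectors
    with K0, so the quadratic form and log-determinant reduce to sums over
    the (shifted, scaled) eigenvalues."""
    lam = signal_var * eigvals + noise_var
    return 0.5 * (np.sum(Qty ** 2 / lam) + np.sum(np.log(lam))
                  + N * np.log(2 * np.pi))

# Tuning noise_var now costs O(N) per candidate instead of O(N^3).
grid = [0.001, 0.01, 0.1, 1.0]
best = min(grid, key=lambda nv: neg_log_marginal_likelihood(1.0, nv))
```

Each grid (or optimizer) iteration touches only the length-N vectors `eigvals` and `Qty`, which is the source of the orders-of-magnitude speed-up claimed above.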
Abstract:
A simple yet efficient harmony search (HS) method with a new pitch adjustment rule (NPAHS) is proposed for dynamic economic dispatch (DED) of electrical power systems, a large-scale non-linear real-time optimization problem subject to a number of complex constraints. The new pitch adjustment rule is based on the perturbation information and the mean value of the harmony memory, which is simple to implement and helps to enhance solution quality and convergence speed. A new constraint handling technique is also developed to effectively handle various constraints in the DED problem, and the violation of ramp rate limits between the first and last scheduling intervals, often ignored by existing approaches for DED problems, is effectively eliminated. To validate its effectiveness, NPAHS is first tested on 10 popular benchmark functions with 100 dimensions, in comparison with four HS variants and five state-of-the-art evolutionary algorithms. Then, NPAHS is used to solve three 24-h DED systems with 5, 15 and 54 units, which consider the valve point effects, transmission loss, emission and prohibited operating zones. Simulation results on all these systems show the scalability and superiority of the proposed NPAHS on various large scale problems.
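A generic harmony search with a mean-guided pitch adjustment is sketched below. The update rule is an assumed form inspired by the description above, not the paper's exact NPAHS rule, and the benchmark is a toy sphere function rather than a DED instance.

```python
import numpy as np

def harmony_search(f, dim, bounds, hms=10, iters=5000, hmcr=0.9, par=0.3, seed=0):
    """Generic HS with a pitch adjustment pulled toward the harmony-memory
    mean (assumed rule, for illustration only)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    hm = rng.uniform(lo, hi, (hms, dim))       # harmony memory
    costs = np.array([f(h) for h in hm])
    for _ in range(iters):
        new = np.empty(dim)
        mean = hm.mean(axis=0)                 # memory mean guides the pitch
        for j in range(dim):
            if rng.random() < hmcr:            # memory consideration
                new[j] = hm[rng.integers(hms), j]
                if rng.random() < par:         # pitch adjustment toward mean
                    new[j] += rng.normal() * (mean[j] - new[j])
            else:                               # random re-initialization
                new[j] = rng.uniform(lo, hi)
        c = f(new)
        worst = costs.argmax()
        if c < costs[worst]:                   # replace the worst harmony
            hm[worst], costs[worst] = new, c
    return hm[costs.argmin()], float(costs.min())

# Toy check on the sphere function in 5 dimensions.
best_x, best_f = harmony_search(lambda x: float(np.sum(x ** 2)), 5, (-10, 10))
```

The mean-based adjustment contracts new harmonies toward the memory's center of mass as the search progresses, which is one plausible way a mean-of-memory rule can speed convergence.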
Abstract:
This paper investigates the identification and output tracking control of a class of Hammerstein systems through a wireless network within an integrated framework; the statistical characteristics of the wireless network are modelled using the inverse Gaussian cumulative distribution function. In the proposed framework, a new networked identification algorithm is proposed to compensate for the influence of the wireless network delays so as to acquire a more precise Hammerstein system model. Then, the identified model together with the model-based approach is used to design an output tracking controller. Mean square stability conditions are given using linear matrix inequalities (LMIs) and the optimal controller gains can be obtained by solving the corresponding optimization problem expressed using LMIs. Illustrative numerical simulation examples are given to demonstrate the effectiveness of our proposed method.
Abstract:
As one of the key technologies in photovoltaic converter control, Maximum Power Point Tracking (MPPT) methods can keep the power conversion efficiency as high as nearly 99% under uniform solar irradiance conditions. However, these methods may fail when shading occurs, and the power loss can reach as much as 70% because the power-voltage curve exhibits multiple maxima under shading conditions, versus a single maximum point under uniform solar irradiance. In this paper, a Real Maximum Power Point Tracking (RMPPT) strategy under Partially Shaded Conditions (PSCs) is introduced to deal with this kind of problem. An optimization problem, based on a predictive model that adapts to the environment, is developed to track the global maximum, and a corresponding adaptive control strategy is presented. No additional circuits are required to obtain the environmental uncertainties. Finally, simulations show the effectiveness of the proposed method.
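The multiple-maxima issue described above can be reproduced with a toy two-peak power-voltage curve: local perturb-and-observe hill climbing locks onto the nearer (lower) peak, while a global search over the operating range recovers the true maximum power point. The curve shape and all values are hypothetical, and this is not the paper's RMPPT strategy.

```python
import numpy as np

# Hypothetical two-peak power-voltage curve under partial shading:
# a lower local peak near 12 V and the true maximum near 28 V.
def pv_power(v):
    return 40 * np.exp(-((v - 12) ** 2) / 8) + 90 * np.exp(-((v - 28) ** 2) / 6)

voltages = np.linspace(0, 40, 401)

# Local hill climbing (perturb-and-observe) starting near the lower peak.
v = 10.0
for _ in range(100):
    step = 0.1 if pv_power(v + 0.1) > pv_power(v) else -0.1
    v += step
local_v = v

# A global scan over the full voltage range recovers the real maximum.
global_v = voltages[np.argmax(pv_power(voltages))]
```

The hill climber converges to the 12 V local peak and stays there, which is exactly the failure mode that motivates global (real) MPPT under partially shaded conditions.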