103 results for Logic-based optimization algorithm
Abstract:
This paper presents an efficient hybrid evolutionary optimization algorithm, called ACO-SA, that combines ant colony optimization (ACO) with simulated annealing (SA). Distribution feeder reconfiguration (DFR) is one of the most important control schemes in distribution networks, and it is affected by distributed generations (DGs) in the multi-objective DFR setting. In such a case, DGs are used to help minimize the real power loss, the deviation of node voltages and the number of switching operations. The approach is applied to a real distribution feeder, and the simulation results show that the proposed evolutionary optimization algorithm is robust and suitable for solving the DFR problem.
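For a concrete sense of how the three DFR objectives could be folded into a single cost for the hybrid search, here is a minimal Python sketch using an assumed weighted sum; the weights, the switch-state encoding and the power-flow surrogates are hypothetical and are not taken from the paper.

```python
# Hypothetical weights; the paper does not specify how the three
# objectives are combined, so a weighted sum is assumed here.
W_LOSS, W_VOLT, W_SWITCH = 1.0, 0.5, 0.1

def dfr_cost(config, base_config, power_loss, voltage_deviation):
    """Weighted multi-objective cost for a candidate feeder configuration.

    config / base_config: tuples of switch states (0 = open, 1 = closed).
    power_loss, voltage_deviation: callables supplied by a power-flow model
    (placeholders here; a real implementation would run a load-flow study).
    """
    switching_ops = sum(a != b for a, b in zip(config, base_config))
    return (W_LOSS * power_loss(config)
            + W_VOLT * voltage_deviation(config)
            + W_SWITCH * switching_ops)

# Toy usage with dummy power-flow surrogates.
if __name__ == "__main__":
    base = (1, 1, 1, 0, 0)
    candidate = (1, 0, 1, 1, 0)
    loss = lambda c: 0.1 * sum(c)              # stand-in for real power loss
    volt = lambda c: 0.02 * (len(c) - sum(c))  # stand-in for voltage deviation
    print(dfr_cost(candidate, base, loss, volt))
```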
Abstract:
The increase in data center dependent services has made energy optimization of data centers one of the most exigent challenges in today's Information Age. Green, energy-efficient measures are essential for reducing carbon footprints and exorbitant energy costs. However, inefficient application management of data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data centers. To address this problem, a Penalty-based Genetic Algorithm (GA) is presented in this paper to solve a defined profile-based application assignment problem whilst maintaining a trade-off between power consumption and resource utilization performance. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
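The core idea of a penalty-based GA fitness for this kind of assignment problem, sketched under assumed server-capacity and power models (the encoding, penalty weight and parameters below are illustrative, not the paper's), might look like this:

```python
import random

# Minimal sketch of a penalty-based GA fitness for assigning application
# profiles to servers. The encoding (one server index per application),
# the capacity model and the penalty weight are assumptions.
PENALTY_WEIGHT = 1000.0

def fitness(assignment, app_demand, server_capacity, idle_power, power_per_unit):
    """Lower is better: power cost plus a penalty for overloaded servers."""
    load = [0.0] * len(server_capacity)
    for app, server in enumerate(assignment):
        load[server] += app_demand[app]
    power = sum(idle_power + power_per_unit * l for l in load if l > 0)
    overload = sum(max(0.0, l - c) for l, c in zip(load, server_capacity))
    return power + PENALTY_WEIGHT * overload  # penalty discourages infeasibility

if __name__ == "__main__":
    demand = [0.4, 0.7, 0.2, 0.9]
    capacity = [1.0, 1.0, 1.0]
    random_assignment = [random.randrange(len(capacity)) for _ in demand]
    print(random_assignment,
          fitness(random_assignment, demand, capacity, 100.0, 50.0))
```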
Abstract:
We present a new penalty-based genetic algorithm for the multi-source and multi-sink minimum vertex cut problem, and illustrate the algorithm’s usefulness with two real-world applications. It is proved in this paper that the genetic algorithm always produces a feasible solution by exploiting some domain-specific knowledge. The genetic algorithm has been implemented on the example applications and evaluated to show how well it scales as the problem size increases.
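One plausible way a GA individual could be repaired into a guaranteed-feasible vertex cut using domain knowledge is sketched below, under assumed graph and repair-rule conventions; the paper's actual mechanism may differ.

```python
from collections import deque

# Hedged sketch: a candidate set of cut vertices is repaired until it
# separates every source from every sink. The graph format, repair rule
# and example data are assumptions for illustration only.
def find_path(graph, sources, sinks, removed):
    """BFS over vertices not in `removed`; returns a source-sink path or None."""
    parents = {}
    queue = deque(s for s in sources if s not in removed)
    seen = set(queue)
    while queue:
        v = queue.popleft()
        if v in sinks:
            path = [v]
            while path[-1] in parents:
                path.append(parents[path[-1]])
            return path[::-1]
        for w in graph.get(v, ()):
            if w not in removed and w not in seen:
                seen.add(w)
                parents[w] = v
                queue.append(w)
    return None

def repair(graph, sources, sinks, cut):
    """Add interior vertices of surviving paths until no source reaches a sink."""
    cut = set(cut)
    while True:
        path = find_path(graph, sources, sinks, cut)
        if path is None:
            return cut
        interior = [v for v in path if v not in sources and v not in sinks]
        cut.add(interior[0] if interior else path[0])  # crude repair choice

if __name__ == "__main__":
    g = {"s1": ["a"], "s2": ["b"], "a": ["b", "t1"], "b": ["t2"]}
    print(repair(g, {"s1", "s2"}, {"t1", "t2"}, set()))
```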
Abstract:
Mobile robots are widely used in many industrial fields. Research on path planning is one of the most important aspects of mobile robotics research. Path planning for a mobile robot is the task of finding a collision-free route, through the robot's environment with obstacles, from a specified start location to a desired goal destination while satisfying certain optimization criteria. Most existing path planning methods, such as the visibility graph, cell decomposition, and potential field methods, are designed for static environments in which there are only stationary obstacles. However, in practical systems such as marine science research, mining robots, and RoboCup games, robots usually face dynamic environments in which both moving and stationary obstacles exist. Because of the complexity of dynamic environments, research on path planning in environments with dynamic obstacles is limited; only a small number of papers have been published in this area, compared with hundreds of reports on path planning in stationary environments in the open literature.

Recently, a genetic algorithm based approach has been introduced to plan the optimal path for a mobile robot in a dynamic environment with moving obstacles. However, as the number of obstacles in the environment increases, and as the moving speed and direction of the robot and obstacles change, the size of the problem to be solved grows sharply, and the performance of the genetic algorithm based approach deteriorates significantly. This motivates the present work. This research develops and implements a simulated annealing algorithm based approach to find the optimal path for a mobile robot in a dynamic environment with moving obstacles. The simulated annealing algorithm is an optimization algorithm similar in principle to the genetic algorithm, but our investigation and simulations indicate that the simulated annealing based approach is simpler and easier to implement. Its performance is also shown to be superior to that of the genetic algorithm based approach in both online and offline processing times, as well as in obtaining the optimal solution for path planning of the robot in the dynamic environment.

The first step of many path planning methods is to search for an initial feasible path for the robot. A commonly used method for finding the initial path is to randomly pick vertices of the obstacles in the search space. This is time consuming in both static and dynamic path planning, and has an important impact on the efficiency of dynamic path planning. This research proposes a heuristic method to find a feasible initial path efficiently, and the heuristic method is then incorporated into the proposed simulated annealing based approach for dynamic robot path planning. Simulation experiments show that, with the heuristic method incorporated, the developed simulated annealing based approach requires much shorter processing time to obtain optimal solutions to the dynamic path planning problem. Furthermore, the quality of the solution, as characterized by the length of the planned path, is also improved with the incorporated heuristic method for both online and offline path planning.
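A minimal sketch of the combination described above, under assumed geometry, move operator and cooling schedule (none of which are taken from the thesis): a greedy heuristic builds the initial feasible path, and simulated annealing then shortens it.

```python
import math
import random

# Hedged sketch of simulated annealing over a waypoint path, with a simple
# greedy heuristic to build the initial feasible path. The world model and
# parameters are illustrative assumptions only.
def path_length(path):
    return sum(math.dist(a, b) for a, b in zip(path, path[1:]))

def collides(p, obstacles, clearance=0.5):
    return any(math.dist(p, o) < clearance for o in obstacles)

def heuristic_initial_path(start, goal, obstacles, step=1.0):
    """Greedy walk toward the goal, sidestepping when the next point collides."""
    path, current = [start], start
    while math.dist(current, goal) > step:
        dx, dy = goal[0] - current[0], goal[1] - current[1]
        norm = math.hypot(dx, dy)
        nxt = (current[0] + step * dx / norm, current[1] + step * dy / norm)
        if collides(nxt, obstacles):
            nxt = (nxt[0] - dy / norm, nxt[1] + dx / norm)  # sidestep
        path.append(nxt)
        current = nxt
    path.append(goal)
    return path

def anneal(path, obstacles, t0=5.0, cooling=0.99, iters=2000):
    best = cur = path
    t = t0
    for _ in range(iters):
        cand = list(cur)
        i = random.randrange(1, len(cand) - 1)           # keep endpoints fixed
        cand[i] = (cand[i][0] + random.uniform(-1, 1),
                   cand[i][1] + random.uniform(-1, 1))   # perturb one waypoint
        if not collides(cand[i], obstacles):
            delta = path_length(cand) - path_length(cur)
            if delta < 0 or random.random() < math.exp(-delta / t):
                cur = cand
                if path_length(cur) < path_length(best):
                    best = cur
        t *= cooling
    return best

if __name__ == "__main__":
    obstacles = [(3.0, 3.0), (5.0, 4.0)]
    init = heuristic_initial_path((0.0, 0.0), (8.0, 6.0), obstacles)
    print(round(path_length(init), 2), round(path_length(anneal(init, obstacles)), 2))
```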
Abstract:
Information Retrieval is an important albeit imperfect component of information technologies. A problem of insufficient diversity of retrieved documents is one of the primary issues studied in this research. This study shows that this problem leads to a decrease of precision and recall, the traditional measures of information retrieval effectiveness. This thesis presents an adaptive IR system based on the theory of adaptive dual control. The aim of the approach is the optimization of retrieval precision after all feedback has been issued, which is done by increasing the diversity of retrieved documents; this study shows that the value of recall reflects this diversity. The Probability Ranking Principle is viewed in the literature as the “bedrock” of current probabilistic Information Retrieval theory. Neither the proposed approach nor other methods of diversification of retrieved documents from the literature conform to this principle. This study shows by counterexample that the Probability Ranking Principle does not in general lead to optimal precision in a search session with feedback (for which it may not have been designed, but for which it is actively used). To accomplish the aim, retrieval precision of the search session should be optimized with a multistage stochastic programming model. However, such models are computationally intractable; therefore, approximate linear multistage stochastic programming models are derived in this study, where the multistage improvement of the probability distribution is modelled using the proposed feedback correctness method. The proposed optimization models are based on several assumptions, starting with the assumption that Information Retrieval is conducted in units of topics. The use of clusters is the primary reason why a new method of probability estimation is proposed. The adaptive dual control of the topic-based IR system was evaluated in a series of experiments conducted on the Reuters, Wikipedia and TREC document collections. The Wikipedia experiment revealed that the dual control feedback mechanism improves precision and S-recall when all the underlying assumptions are satisfied. In the TREC experiment, this feedback mechanism was compared to a state-of-the-art adaptive IR system based on BM-25 term weighting and the Rocchio relevance feedback algorithm. The baseline system exhibited better effectiveness than the cluster-based optimization model of ADTIR; the main reason for this was the insufficient quality of the clusters generated from the TREC collection, which violated the underlying assumption.
Abstract:
Focusing on the conditions that an optimization problem may comply with, so-called convergence conditions are proposed, and subsequently a stochastic optimization algorithm named the DSZ algorithm is presented to deal with both unconstrained and constrained optimization. The principle is discussed through a theoretical model of the DSZ algorithm, from which a practical model is derived. The efficiency of the practical model is demonstrated by comparison with similar algorithms, such as enhanced simulated annealing (ESA), Monte Carlo simulated annealing (MCS), Sniffer Global Optimization (SGO), Directed Tabu Search (DTS), and the genetic algorithm (GA), on a set of well-known unconstrained and constrained optimization test cases. Further attention is given to strategies for optimizing high-dimensional unconstrained problems with the DSZ algorithm.
Abstract:
Many real-world problems in networks of networks (NoNs) can be abstracted as a so-called minimum interconnection cut problem, which is fundamentally different from the classical minimum cut problems of graph theory. It is therefore desirable to develop an efficient and effective algorithm for the minimum interconnection cut problem. In this paper we formulate the problem in graph-theoretic terms, transform it into a multi-objective, multi-constraint combinatorial optimization problem, and propose a hybrid genetic algorithm (HGA) for it. The HGA is a penalty-based genetic algorithm (GA) that incorporates an effective heuristic procedure to locally optimize the individuals in the GA population. The HGA has been implemented and evaluated experimentally; the results show that it is both effective and efficient.
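As a rough illustration of the hybrid GA's local-optimization step, the sketch below greedily drops redundant vertices from a penalty-evaluated individual; the bit-vector encoding, cost model and feasibility rule are assumptions, not the paper's operators.

```python
# Hedged sketch of the "hybrid GA" idea: after the usual GA variation step,
# each individual (a candidate cut encoded as a bit vector over vertices)
# is locally improved by a simple heuristic.
def cut_cost(bits, weights, penalty, is_feasible):
    cost = sum(w for b, w in zip(bits, weights) if b)
    return cost + (0 if is_feasible(bits) else penalty)

def local_improve(bits, weights, is_feasible):
    """Greedily drop selected vertices whose removal keeps the cut feasible."""
    bits = list(bits)
    for i in sorted(range(len(bits)), key=lambda i: -weights[i]):
        if bits[i]:
            bits[i] = 0
            if not is_feasible(bits):
                bits[i] = 1  # undo: this vertex is still needed
    return bits

if __name__ == "__main__":
    # Toy feasibility rule: at least two of the first three vertices selected.
    feasible = lambda b: sum(b[:3]) >= 2
    weights = [4.0, 1.0, 2.0, 3.0]
    individual = [1, 1, 1, 1]
    improved = local_improve(individual, weights, feasible)
    print(improved, cut_cost(improved, weights, 100.0, feasible))
```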
Abstract:
Composite web services comprise several component web services. When a composite web service is executed centrally, a single web service engine is responsible for coordinating the execution of the components, which may create a bottleneck and degrade the overall throughput of the composite service when there are a large number of service requests. Potentially this problem can be handled by decentralizing execution of the composite web service, but this raises the issue of how to partition a composite service into groups of component services such that each group can be orchestrated by its own execution engine while ensuring acceptable overall throughput of the composite service. Here we present a novel penalty-based genetic algorithm to solve the composite web service partitioning problem. Empirical results show that our new algorithm outperforms existing heuristic-based solutions.
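A hedged sketch of one possible chromosome encoding and objective for the partitioning problem, assuming gene i stores the execution engine (group) of component i and that cross-group messaging plus per-group overload are what is penalized; the paper's actual formulation is not reproduced here.

```python
import random

# The message matrix, load model and penalty constant are assumptions.
PENALTY = 1e3

def partition_cost(genes, messages, loads, capacity):
    """genes[i] = group of component i; messages[i][j] = traffic i -> j."""
    cross = sum(messages[i][j]
                for i in range(len(genes))
                for j in range(len(genes))
                if genes[i] != genes[j])
    group_load = {}
    for comp, grp in enumerate(genes):
        group_load[grp] = group_load.get(grp, 0.0) + loads[comp]
    overload = sum(max(0.0, l - capacity) for l in group_load.values())
    return cross + PENALTY * overload

if __name__ == "__main__":
    msgs = [[0, 5, 1], [5, 0, 2], [1, 2, 0]]
    loads = [0.6, 0.5, 0.4]
    genes = [random.randrange(2) for _ in loads]   # two candidate engines
    print(genes, partition_cost(genes, msgs, loads, capacity=1.0))
```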
Abstract:
Appearance-based localization is increasingly used for loop closure detection in metric SLAM systems. Since it relies only upon the appearance-based similarity between images from two locations, it can perform loop closure regardless of accumulated metric error. However, the computation time and memory requirements of current appearance-based methods scale linearly not only with the size of the environment but also with the operation time of the platform. These properties impose severe restrictions on long-term autonomy for mobile robots, as loop closure performance will inevitably degrade with increased operation time. We present a set of improvements to the appearance-based SLAM algorithm CAT-SLAM to constrain computation scaling and memory usage with minimal degradation in performance over time. The appearance-based comparison stage is accelerated by exploiting properties of the particle observation update, and nodes in the continuous trajectory map are removed according to minimal information loss criteria. We demonstrate constant time and space loop closure detection in a large urban environment with recall performance exceeding FAB-MAP by a factor of 3 at 100% precision, and investigate the minimum computational and memory requirements for maintaining mapping performance.
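A very rough sketch of pruning trajectory nodes by a "minimal information loss" rule; the geometric criterion used here is an assumption for illustration only, since CAT-SLAM's actual measure also involves appearance statistics.

```python
import math

# Repeatedly remove the node whose deletion changes the trajectory the least
# (perpendicular distance to the chord between its neighbours).
def point_to_chord(p, a, b):
    ax, ay = b[0] - a[0], b[1] - a[1]
    px, py = p[0] - a[0], p[1] - a[1]
    norm = math.hypot(ax, ay) or 1e-9
    return abs(ax * py - ay * px) / norm

def prune(nodes, keep):
    nodes = list(nodes)
    while len(nodes) > keep:
        losses = [(point_to_chord(nodes[i], nodes[i - 1], nodes[i + 1]), i)
                  for i in range(1, len(nodes) - 1)]
        _, idx = min(losses)      # node whose removal loses least information
        nodes.pop(idx)
    return nodes

if __name__ == "__main__":
    trajectory = [(0, 0), (1, 0.1), (2, 0), (3, 2), (4, 2.1), (5, 2)]
    print(prune(trajectory, keep=4))
```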
Abstract:
To ensure the small-signal stability of a power system, power system stabilizers (PSSs) are extensively applied to damp low-frequency power oscillations by modulating the excitation supplied to synchronous machines, and increasing interest has been focused on developing different PSS schemes to tackle the threat that poorly damped oscillations pose to power system stability. This paper examines four different PSS models and investigates their performance in damping power system dynamics using both small-signal eigenvalue analysis and large-signal dynamic simulations. The four PSSs examined are the Conventional PSS (CPSS), Single Neuron based PSS (SNPSS), Adaptive PSS (APSS) and Multi-band PSS (MBPSS). A steepest descent parameter optimization algorithm is employed to seek the optimal PSS design parameters. To evaluate the effects of these PSSs on improving power system dynamic behavior, case studies are carried out on an 8-unit 24-bus power system through both small-signal eigenvalue analysis and large-signal time-domain simulations.
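For illustration, a steepest-descent parameter search with a numerical gradient might be sketched as follows; the quadratic objective is a stand-in, since the paper's objective would come from eigenvalue analysis of the linearized power system, which is not reproduced here.

```python
# Hedged sketch of steepest-descent tuning of PSS parameters against a
# scalar damping objective (surrogate quadratic used below).
def numerical_gradient(f, x, h=1e-6):
    grad = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        grad.append((f(xp) - f(xm)) / (2 * h))
    return grad

def steepest_descent(f, x0, step=0.1, iters=200):
    x = list(x0)
    for _ in range(iters):
        g = numerical_gradient(f, x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

if __name__ == "__main__":
    # Stand-in objective: distance of (gain, lead, lag) from a "good" setting.
    target = (20.0, 0.15, 0.05)
    objective = lambda p: sum((pi - ti) ** 2 for pi, ti in zip(p, target))
    print([round(v, 3) for v in steepest_descent(objective, [5.0, 0.5, 0.5])])
```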
Abstract:
The challenge of persistent appearance-based navigation and mapping is to develop an autonomous robotic vision system that can simultaneously localize, map and navigate over the lifetime of the robot. However, the computation time and memory requirements of current appearance-based methods typically scale not only with the size of the environment but also with the operation time of the platform; also, repeated revisits to locations will develop multiple competing representations which reduce recall performance. In this paper we present a solution to the persistent localization, mapping and global path planning problem in the context of a delivery robot in an office environment over a one-week period. Using a graphical appearance-based SLAM algorithm, CAT-Graph, we demonstrate constant time and memory loop closure detection with minimal degradation during repeated revisits to locations, along with topological path planning that improves over time without using a global metric representation. We compare the localization performance of CAT-Graph to openFABMAP, an appearance-only SLAM algorithm, and the path planning performance to occupancy-grid based metric SLAM. We discuss the limitations of the algorithm with regard to environment change over time and illustrate how the topological graph representation can be coupled with local movement behaviors for persistent autonomous robot navigation.
Abstract:
An energy storage system (ESS) can provide ancillary services such as frequency regulation and reserves, as well as smooth the fluctuations of wind power outputs, and hence improve the security and economics of the power system concerned. The combined operation of a wind farm and an ESS has become a widely accepted operating mode, so it appears necessary to consider this operating mode in transmission system expansion planning, which is the issue systematically addressed in this work. Firstly, the relationship between the cost of the NaS-based ESS and its discharging cycle life is analyzed. A strategy for the combined operation of a wind farm and an ESS is then presented, so as to achieve a good compromise between the operating cost of the ESS and the smoothing of wind power output fluctuations. Next, a transmission system expansion planning model is developed, with the sum of the transmission investment costs, the investment and operating costs of ESSs, and the penalty cost of lost wind energy as the objective function to be minimized. An improved particle swarm optimization algorithm is employed to solve the developed planning model. Finally, the essential features of the developed model and adopted algorithm are demonstrated on 18-bus and 46-bus test systems.
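A plain particle swarm optimizer over a (relaxed) vector of expansion decisions could be sketched as below; the cost function, bounds and PSO constants are illustrative assumptions, and the paper's improved PSO variant and detailed planning model are not reproduced.

```python
import random

# Hedged sketch of a basic PSO minimizing a stand-in planning cost.
def pso(cost, dim, bounds, particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = bounds
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(particles)]
    vs = [[0.0] * dim for _ in range(particles)]
    pbest = [list(x) for x in xs]
    gbest = min(pbest, key=cost)
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * random.random() * (pbest[i][d] - xs[i][d])
                            + c2 * random.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if cost(xs[i]) < cost(pbest[i]):
                pbest[i] = list(xs[i])
                if cost(pbest[i]) < cost(gbest):
                    gbest = list(pbest[i])
    return gbest

if __name__ == "__main__":
    # Stand-in planning cost: investment + operating + lost-wind penalty terms.
    cost = lambda x: sum(xi ** 2 for xi in x) + 0.5 * abs(x[0] - 1.0)
    print([round(v, 3) for v in pso(cost, dim=3, bounds=(-2.0, 2.0))])
```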
Abstract:
Background: While the burden of chronic cough in children has been documented, etiologic factors across multiple settings and ages have not been described. In children with chronic cough, we aimed (1) to evaluate the burden and etiologies using a standard management pathway in various settings, and (2) to determine the influence of age and setting on disease burden and etiologies, and of etiology on disease burden. We hypothesized that the etiology, but not the burden, of chronic cough in children is dependent on the clinical setting and age.
Methods: From five major hospitals and three rural-remote clinics, 346 children (mean age 4.5 years) newly referred with chronic cough (> 4 weeks) were prospectively managed in accordance with an evidence-based cough algorithm. We used a priori definitions, timeframes, and validated outcome measures (parent-proxy cough-specific quality of life [PC-QOL], a generic QOL measure [pediatric quality of life (PedsQL)], and a cough diary).
Results: The burden of chronic cough (PC-QOL, cough duration) differed significantly between settings (P = .014 and P = .021, respectively), but was not influenced by age or etiology. PC-QOL and PedsQL did not correlate with age. The frequency of etiologies was significantly different across dissimilar settings (P = .0001); 17.6% of children had a serious underlying diagnosis (bronchiectasis, aspiration, cystic fibrosis). Except for protracted bacterial bronchitis, the frequency of other common diagnoses (asthma, bronchiectasis, resolved without specific diagnosis) was similar across age categories.
Conclusions: The high burden of cough is independent of children’s age and etiology but dependent on clinical setting. Irrespective of setting and age, children with chronic cough should be carefully evaluated and child-specific evidence-based algorithms used.
Abstract:
This paper presents an efficient algorithm for optimizing the operation of battery storage in a low voltage distribution network with a high penetration of PV generation. A predictive control solution is presented that uses wavelet neural networks to predict the load and PV generation at hourly intervals for twelve hours into the future. The load and generation forecast, together with the previous twelve hours of load and generation history, are used to assemble a load profile. A diurnal charging profile can be compactly represented by a vector of Fourier coefficients, allowing a direct search optimization algorithm to be applied. The optimal profile is updated hourly, allowing the state-of-charge profile to respond to changing load forecasts.
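The compact Fourier representation and direct search could be sketched as follows; the peak-shaving cost, the number of harmonics and the random direct-search routine are assumptions for illustration, not the paper's exact formulation.

```python
import math
import random

# Hedged sketch: a 24-hour battery charging profile reconstructed from a
# few Fourier coefficients and tuned by a simple random direct search.
HOURS = 24

def profile_from_coeffs(coeffs):
    """coeffs = [a0, a1, b1, a2, b2, ...] -> hourly charge/discharge power."""
    a0, rest = coeffs[0], coeffs[1:]
    profile = []
    for h in range(HOURS):
        value = a0
        for k in range(len(rest) // 2):
            angle = 2 * math.pi * (k + 1) * h / HOURS
            value += rest[2 * k] * math.cos(angle) + rest[2 * k + 1] * math.sin(angle)
        profile.append(value)
    return profile

def peak_shaving_cost(coeffs, net_load):
    """Variance of (net load + battery power): flatter is better."""
    combined = [l + p for l, p in zip(net_load, profile_from_coeffs(coeffs))]
    mean = sum(combined) / HOURS
    return sum((c - mean) ** 2 for c in combined)

def direct_search(cost, dim, iters=3000, scale=0.2):
    best = [0.0] * dim
    for _ in range(iters):
        cand = [b + random.uniform(-scale, scale) for b in best]
        if cost(cand) < cost(best):
            best = cand
    return best

if __name__ == "__main__":
    # Toy net load (PV surplus at midday, evening peak).
    net_load = [1.0 + math.sin(2 * math.pi * (h - 18) / 24) for h in range(HOURS)]
    coeffs = direct_search(lambda c: peak_shaving_cost(c, net_load), dim=5)
    print([round(c, 3) for c in coeffs])
```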
Abstract:
Energy-efficient embedded computing enables new application scenarios in mobile devices, such as software-defined radio and video processing. The hierarchical multiprocessor considered in this work may contain dozens or hundreds of resource-efficient VLIW CPUs. Programming this number of CPU cores is a complex task that requires compiler support. The stream programming paradigm provides beneficial properties that help to support automatic partitioning. This work describes a compiler for streaming applications targeting the self-built hierarchical CoreVA-MPSoC multiprocessor platform. The compiler is supported by a programming model tailored to the stream programming paradigm. We present a novel simulated-annealing (SA) based partitioning algorithm, called Smart SA. The overall speedup of Smart SA is 12.84 for an MPSoC with 16 CPU cores compared to a single-CPU implementation. Comparison with a state-of-the-art partitioning algorithm shows an average performance improvement of 34.07%.
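In the spirit of the described SA-based partitioner, a minimal sketch might assign stream filters to CPU cores and minimize the bottleneck core's workload; the workload model, move operator and cooling schedule are assumptions, not the CoreVA-MPSoC compiler's actual cost model.

```python
import math
import random

# Hedged sketch of an SA partitioner: each filter gets a core, and the
# maximum per-core load (the throughput bottleneck) is minimized.
def bottleneck(assign, work, cores):
    load = [0.0] * cores
    for filt, core in enumerate(assign):
        load[core] += work[filt]
    return max(load)

def sa_partition(work, cores, t0=10.0, cooling=0.995, iters=5000):
    assign = [random.randrange(cores) for _ in work]
    best = list(assign)
    t = t0
    for _ in range(iters):
        cand = list(assign)
        cand[random.randrange(len(work))] = random.randrange(cores)  # move one filter
        delta = bottleneck(cand, work, cores) - bottleneck(assign, work, cores)
        if delta < 0 or random.random() < math.exp(-delta / t):
            assign = cand
            if bottleneck(assign, work, cores) < bottleneck(best, work, cores):
                best = list(assign)
        t *= cooling
    return best

if __name__ == "__main__":
    workloads = [random.uniform(1, 10) for _ in range(32)]   # 32 stream filters
    assignment = sa_partition(workloads, cores=16)
    print(round(bottleneck(assignment, workloads, 16), 2))
```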