942 results for Multi-objective optimization techniques
Abstract:
Opportunistic land encroachment occurs in many low-income countries, gradually yet pervasively, until discrete areas of common land disappear. This paper, motivated by field observations in Karnataka, India, demonstrates that such an evolution of property rights from common to private may be efficient when the boundaries between common and private land are poorly defined, or "fuzzy." Using a multi-period optimization model, and introducing the concept of stock and flow enforcement, I show how the effectiveness of enforcement effort, the reversibility of encroachment, and punitive fines influence whether an area of common land is fully defined and protected, or gradually or rapidly encroached upon.
Abstract:
The aim of this study was, within a sensitivity analysis framework, to determine whether additional model complexity gives a better capability to model the hydrology and nitrogen dynamics of a small Mediterranean forested catchment, or whether the additional parameters cause over-fitting. Three nitrogen models of varying hydrological complexity were considered. For each model, general sensitivity analysis (GSA) and Generalized Likelihood Uncertainty Estimation (GLUE) were applied, each based on 100,000 Monte Carlo simulations. The results highlighted the most complex structure as the most appropriate, providing the best representation of the non-linear patterns observed in the flow and streamwater nitrate concentrations between 1999 and 2002. Its 5% and 95% GLUE bounds, obtained with a multi-objective approach, provide the narrowest band for streamwater nitrogen, which suggests increased model robustness, though all models exhibit periods of inconsistently good and poor fits between simulated outcomes and observed data. The results confirm the importance of the riparian zone in controlling the short-term (daily) streamwater nitrogen dynamics in this catchment, but not the overall flux of nitrogen from the catchment. It was also shown that as the complexity of a hydrological model increases, over-parameterisation occurs, but the converse is true for a water quality model, where additional process representation leads to additional acceptable model simulations. Water quality data help constrain the hydrological representation in process-based models, and the increased complexity was justifiable for modelling river-system hydrochemistry.
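For readers unfamiliar with GLUE, the Python sketch below illustrates the mechanics behind the 5% and 95% GLUE bounds: sample parameters from a prior, keep the behavioural Monte Carlo runs, and compute likelihood-weighted quantiles at each time step. The one-parameter exponential model, uniform prior and acceptance threshold are assumptions for illustration, not those of the study.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100)
observed = np.exp(-t / 30.0) + 0.02 * rng.standard_normal(t.size)  # synthetic "observations"

def toy_model(k):                                # stand-in for the catchment model
    return np.exp(-t / k)

n_sim = 10000                                    # the study used 100,000 runs per model
k_samples = rng.uniform(5.0, 80.0, n_sim)        # uniform prior on the single parameter
sims = np.array([toy_model(k) for k in k_samples])

# Likelihood measure: Nash-Sutcliffe efficiency of each Monte Carlo run
sse = ((sims - observed) ** 2).sum(axis=1)
nse = 1.0 - sse / ((observed - observed.mean()) ** 2).sum()

behavioural = nse > 0.7                          # acceptability threshold (assumed)
beh_sims = sims[behavioural]
weights = nse[behavioural] / nse[behavioural].sum()

# Likelihood-weighted 5% and 95% GLUE bounds at each time step
lower, upper = [], []
for j in range(t.size):
    idx = np.argsort(beh_sims[:, j])
    cdf = np.cumsum(weights[idx])
    lower.append(beh_sims[idx, j][np.searchsorted(cdf, 0.05)])
    upper.append(beh_sims[idx, j][np.searchsorted(cdf, 0.95)])
print("mean width of the 5-95% band:", float(np.mean(np.array(upper) - np.array(lower))))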
Abstract:
This paper explores the development of multi-feature classification techniques used to identify tremor-related characteristics in Parkinsonian patients. Local field potentials were recorded from the subthalamic nucleus and the globus pallidus internus of eight Parkinsonian patients through the implanted electrodes of a deep brain stimulation (DBS) device prior to device internalization. A range of signal processing techniques was evaluated with respect to their tremor detection capability and used as inputs to a multi-feature neural network classifier to identify the activity of Parkinsonian tremor. The results of this study show that a trained multi-feature neural network is able, under certain conditions, to achieve excellent detection accuracy on patients unseen during training. Overall, the tremor detection accuracy was mixed, although an accuracy of over 86% was achieved in four of the eight patients.
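The toy sketch below conveys the general flavour of multi-feature tremor classification, not the paper's pipeline: synthetic two-second epochs stand in for the LFP recordings, three hand-picked spectral/amplitude features are extracted, and a single-layer logistic classifier stands in for the multi-feature neural network. All signal parameters, bands and thresholds are assumptions.

import numpy as np

rng = np.random.default_rng(1)
fs = 200                                     # assumed sampling rate (Hz)
t = np.arange(2 * fs) / fs                   # 2-second epochs

def epoch(tremor):
    x = rng.standard_normal(t.size)          # background LFP-like noise
    if tremor:
        x += 2.0 * np.sin(2 * np.pi * 5.0 * t)   # ~5 Hz tremor-band oscillation
    return x

def features(x):
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    band = lambda lo, hi: spec[(freqs >= lo) & (freqs < hi)].sum()
    return np.array([band(3, 8) / spec.sum(),    # relative tremor-band power
                     band(13, 30) / spec.sum(),  # relative beta-band power
                     x.std()])                   # overall amplitude

X = np.array([features(epoch(i % 2 == 0)) for i in range(400)])
y = np.array([1.0 if i % 2 == 0 else 0.0 for i in range(400)])

# Single-layer logistic classifier trained by gradient descent (a stand-in for the
# multi-feature neural network used in the paper)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    w -= 0.1 * (X.T @ (p - y)) / y.size
    b -= 0.1 * (p - y).mean()

pred = (1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))) > 0.5
print("training accuracy:", (pred == (y == 1)).mean())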
Abstract:
Bloom filters are a data structure for storing data in a compressed form. They offer excellent space and time efficiency at the cost of some loss of accuracy (so-called lossy compression). This work presents the yes-no Bloom filter, a data structure consisting of two parts: the yes-filter, which is a standard Bloom filter, and the no-filter, which is another Bloom filter whose purpose is to represent those objects that were recognised incorrectly by the yes-filter (that is, to recognise the false positives of the yes-filter). By querying the no-filter after an object has been recognised by the yes-filter, we get a chance of rejecting it, which improves the accuracy of data recognition in comparison with a standard Bloom filter of the same total length. A further increase in accuracy is possible if one chooses the objects to include in the no-filter so that it recognises as many false positives as possible but no true positives, thus producing the most accurate yes-no Bloom filter among all yes-no Bloom filters. This paper studies how optimization techniques can be used to maximize the number of false positives recognised by the no-filter, under the constraint that it recognise no true positives. To achieve this aim, an Integer Linear Program (ILP) is proposed for the optimal selection of false positives. In practice the problem size is normally large, making the optimal solution intractable. Given the similarity of the ILP to the Multidimensional Knapsack Problem, an Approximate Dynamic Programming (ADP) model is developed, making use of a reduced ILP for the value function approximation. Numerical results show that the ADP model performs best compared with a number of heuristics as well as the CPLEX built-in branch-and-bound (B&B) solver, and it is therefore recommended for use in yes-no Bloom filters. In the wider context of the study of lossy compression algorithms, our research is an example of how the arsenal of optimization methods can be applied to improving the accuracy of compressed data.
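The core yes-no Bloom filter idea described above can be sketched directly; the Python below is a minimal illustration (the hash choice, filter sizes and example keys are arbitrary), and it deliberately leaves out the ILP/ADP machinery the paper uses to select which false positives to store.

import hashlib

class BloomFilter:
    def __init__(self, m, k):
        self.m, self.k = m, k
        self.bits = [False] * m

    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits[p] = True

    def __contains__(self, item):
        return all(self.bits[p] for p in self._positions(item))

class YesNoBloomFilter:
    def __init__(self, m_yes, m_no, k):
        self.yes = BloomFilter(m_yes, k)
        self.no = BloomFilter(m_no, k)

    def add(self, item):
        self.yes.add(item)

    def add_false_positive(self, item):
        # Items chosen (e.g. by the paper's ILP/ADP selection) go into the no-filter.
        self.no.add(item)

    def __contains__(self, item):
        return item in self.yes and item not in self.no

# Usage: store some keys, then suppress a discovered false positive.
f = YesNoBloomFilter(m_yes=256, m_no=64, k=3)
for key in ["alpha", "beta", "gamma"]:
    f.add(key)
# Suppose "delta" is a false positive of the yes-filter; adding it to the no-filter
# rejects it, at the risk of also rejecting true positives that collide in the no-filter.
f.add_false_positive("delta")
print("alpha" in f, "delta" in f)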
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may affect the follow-up of the components over time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a very powerful statistical tool to analyse jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, exhaustively varying the quantities associated with the method. Our results show that, even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within 1 per cent. Even for a non-precessing jet, our optimization method could successfully point out the lack of precession.
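A generic sketch of the cross-entropy method for continuous optimization is given below; the quadratic objective is a stand-in for the misfit between a precession model and the observed offsets, and the population size, elite fraction and stopping rule are assumptions rather than the values used in the paper.

import numpy as np

rng = np.random.default_rng(2)

def objective(theta):
    # Stand-in for the misfit between model predictions and observed jet-knot offsets.
    target = np.array([1.5, -0.7, 3.0])
    return np.sum((theta - target) ** 2)

dim, n_samples, n_elite = 3, 200, 20
mu = np.zeros(dim)                 # initial mean of the sampling distribution
sigma = np.full(dim, 5.0)          # initial standard deviations (broad search)

for iteration in range(60):
    samples = mu + sigma * rng.standard_normal((n_samples, dim))
    scores = np.array([objective(s) for s in samples])
    elite = samples[np.argsort(scores)[:n_elite]]       # best-performing samples
    mu = elite.mean(axis=0)                              # refit the distribution to the elite set
    sigma = elite.std(axis=0) + 1e-8
    if sigma.max() < 1e-6:                               # sampling distribution has collapsed
        break

print("estimated parameters:", np.round(mu, 3))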
Abstract:
Clustering is a difficult task: there is no single cluster definition, and the data can have more than one underlying structure. Pareto-based multi-objective genetic algorithms (e.g., MOCK, Multi-Objective Clustering with automatic K-determination, and MOCLE, Multi-Objective Clustering Ensemble) were proposed to tackle these problems. However, the output of such algorithms can often contain a high number of partitions, making it difficult for an expert to analyze all of them manually. In order to deal with this problem, we present two selection strategies, based on the corrected Rand index, to choose a subset of solutions. To test them, they are applied to the sets of solutions produced by MOCK and MOCLE on several datasets. The study was also extended to select a reduced set of partitions from the initial population of MOCLE. These analyses show that both versions of the proposed selection strategy are very effective. They can significantly reduce the number of solutions and, at the same time, keep the quality and the diversity of the partitions in the original set of solutions. (C) 2010 Elsevier B.V. All rights reserved.
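The selection strategies are built on the corrected (adjusted) Rand index; the short sketch below shows a standard implementation of that index for two example partitions (the partitions themselves are invented for illustration).

from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_comb = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:          # degenerate case: both partitions trivial
        return 1.0
    return (sum_comb - expected) / (max_index - expected)

p1 = [0, 0, 0, 1, 1, 1, 2, 2]
p2 = [0, 0, 1, 1, 1, 1, 2, 2]
print(round(adjusted_rand_index(p1, p2), 3))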
Abstract:
This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time for the customers, with enough packages for each customer, using the available resources, and, of course, to be as efficient as possible.

Although this problem seems very easy to solve with a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times, truck capacities, etc. This makes it a so-called Multi-Constraint Optimization Problem (MCOP). What is more, this problem is intractable with the amount of computational power available to most of us. As the number of customers grows, the number of calculations to be done grows exponentially fast, because all constraints have to be checked for each customer, and it should not be forgotten that the goal is to find a solution that is good enough before the time available for the calculation runs out. This problem is introduced in the first chapter: starting from its basis, the Traveling Salesman Problem, some theoretical and mathematical background is used to show why this problem is so hard to optimize and why, even though it is so hard and no best algorithm is known for a huge number of customers, it is worth dealing with. Just think about a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if we knew the optimal path for all our packages.

Although no best algorithm is known for this kind of optimization problem, we try to give an acceptable solution in the second and third chapters, where two algorithms are described: the Genetic Algorithm and Simulated Annealing. Both of them are based on imitating processes from nature and materials science. These algorithms will hardly ever be able to find the best solution for the problem, but they are able to give a very good solution in special cases within an acceptable calculation time. In these chapters (2nd and 3rd) the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the "real world" through their terminology to their basic implementation. The work puts a stress on the limits of these algorithms, their advantages and disadvantages, and also their comparison to each other.

Finally, after all of this theory has been presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. They both solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as are the test results obtained. Finally, the possible improvements of these algorithms are discussed, and the work tries to answer the "big" question, "Which algorithm is better?", if this question even exists.
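As a taste of the second of the two algorithms discussed, the sketch below applies simulated annealing to a tiny travelling-salesman instance, the VRP's simpler ancestor; the instance, cooling schedule and move operator (segment reversal) are illustrative choices, not those of the thesis.

import math, random

random.seed(3)
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

current = list(range(len(cities)))
random.shuffle(current)
best, best_len = current[:], tour_length(current)
temperature = 1.0

while temperature > 1e-3:
    i, j = sorted(random.sample(range(len(cities)), 2))
    candidate = current[:i] + current[i:j + 1][::-1] + current[j + 1:]   # 2-opt style reversal
    delta = tour_length(candidate) - tour_length(current)
    # Accept improvements always, and worse moves with Boltzmann probability
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        current = candidate
        if tour_length(current) < best_len:
            best, best_len = current[:], tour_length(current)
    temperature *= 0.999          # geometric cooling schedule

print("best tour length found:", round(best_len, 3))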
Abstract:
In the past, the focus of drainage design was on sizing pipes and storages in order to provide sufficient network capacity. This traditional approach, together with computer software and technical guidance, had been successful for many years. However, due to rapid population growth and urbanisation, the requirements of a "good" drainage design have also changed significantly. In addition to water management, other aspects such as environmental impacts, amenity values and carbon footprint have to be considered during the design process. Going forward, we need to address the key sustainability issues carefully and practically. The key challenge of moving from simple objectives (e.g. capacity and costs) to complicated objectives (e.g. capacity, flood risk, environment, amenity, etc.) is the difficulty of striking a balance between the various objectives and of justifying potential benefits and compromises. In order to assist decision makers, we developed a new decision support system for drainage design. The system consists of two main components: a multi-criteria evaluation framework for drainage systems and a multi-objective optimisation tool. The evaluation framework is used for the quantification of the performance, life-cycle costs and benefits of different drainage systems. The optimisation tool can search for feasible combinations of design parameters, such as the sizes, order and type of drainage components, that maximise multiple benefits. In this paper, we will discuss real-world applications of the decision support system. A number of case studies have been developed based on recent drainage projects in China. We will use the case studies to illustrate how the evaluation framework highlights and compares the pros and cons of various design options. We will also discuss how the design parameters can be optimised based on the preferences of decision makers. The work described here is the output of an EngD project funded by EPSRC and XP Solutions.
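One building block of such a multi-objective optimisation tool is the Pareto (non-dominated) filtering of candidate designs; the sketch below illustrates it on a handful of invented drainage options scored on cost, flood risk and an amenity penalty (smaller is better for all three). It is an illustration of the concept, not the decision support system itself.

def dominates(a, b):
    # a dominates b if it is no worse on every objective and strictly better on at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    front = []
    for name, objs in designs.items():
        others = [o for n, o in designs.items() if n != name]
        if not any(dominates(other, objs) for other in others):
            front.append(name)
    return front

# Hypothetical design options: (cost, flood risk, amenity penalty)
candidates = {
    "large pipes only":     (950, 0.30, 0.10),
    "pipes + storage tank": (800, 0.20, 0.20),
    "pipes + green roofs":  (700, 0.25, 0.05),
    "undersized pipes":     (400, 0.60, 0.10),
}
print(pareto_front(candidates))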
Abstract:
An underwater gas pipeline is the portion of a pipeline that crosses a river beneath its bottom. Underwater gas pipelines are subject to increasing dangers as time goes by. An accident at an underwater gas pipeline can lead to a technological and environmental disaster on the scale of an entire region. Therefore, timely troubleshooting of all underwater gas pipelines in order to prevent any potential accidents will remain a pressing task for the industry. The most important aspect of resolving this challenge is the quality of the automated system in question. Currently the industry does not have any automated system that fully meets the needs of the experts working in the field maintaining underwater gas pipelines. Principal aim of this research: this work aims to develop a new automated monitoring system that would simplify the process of evaluating the technical condition and of making decisions on planning, preventive maintenance and repair work on underwater gas pipelines. Objectives: creation of a shared model for the new automated system via IDEF3; development of a new database system to store all information about underwater gas pipelines; development of a new application that works with database servers and provides an explanation of the results obtained from the server; calculation of MTBF values for specified pipelines based on quantitative data obtained from tests of the system. Conclusion: the new automated system, PodvodGazExpert, has been developed for the timely and qualitative determination of the physical condition of underwater gas pipelines; the mathematical analysis of this new automated system is based on the principal component analysis method; determining the physical condition of an underwater gas pipeline with this new automated system increases the MTBF by a factor of 8.18 over the existing system used in the industry today.
Abstract:
When an accurate hydraulic network model is available, direct modeling techniques are very straightforward and reliable for on-line leakage detection and localization applied to a large class of water distribution networks. In general, this type of technique, based on analytical models, can be seen as an application of the well-known fault detection and isolation theory for complex industrial systems. Nonetheless, the assumption of single-leak scenarios is usually made, considering a certain leak size pattern that may not hold in real applications. Upgrading a leak detection and localization method based on a direct modeling approach to handle multiple-leak scenarios can be, on the one hand, quite straightforward but, on the other hand, highly computationally demanding for a large class of water distribution networks, given the huge number of potential water loss hotspots. This paper presents a leakage detection and localization method suitable for multiple-leak scenarios and a large class of water distribution networks. This method can be seen as an upgrade of the above-mentioned direct modeling approach, in which a global search method based on genetic algorithms has been integrated in order to estimate the network water loss hotspots and the size of the leaks. It is an inverse/direct modeling method that tries to benefit from both approaches: on the one hand, the exploration capability of genetic algorithms to estimate network water loss hotspots and the size of the leaks, and on the other hand, the straightforwardness and reliability offered by an accurate hydraulic model to assess the network areas close to the estimated hotspots. The application of the resulting method in a DMA of the Barcelona water distribution network is presented and discussed. The results obtained show that leakage detection and localization under multiple-leak scenarios can be performed efficiently following a simple procedure.
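The genetic-algorithm layer of such an inverse/direct method can be sketched as follows; simulate_pressures() is a hypothetical toy surrogate for the hydraulic model (it is not the authors' tool), and the encoding, operators and parameters are illustrative assumptions only. Candidate solutions encode a set of leak nodes and their sizes, and fitness is the mismatch between simulated and measured pressures.

import random
random.seed(4)

N_NODES, N_LEAKS, POP, GENERATIONS = 50, 2, 40, 60
true_leaks = {7: 1.2, 31: 0.8}                      # synthetic "real" leaks

def simulate_pressures(leaks):
    # Toy surrogate: each leak lowers pressure at nearby nodes proportionally to its size.
    p = [10.0] * N_NODES
    for node, size in leaks.items():
        for j in range(N_NODES):
            p[j] -= size / (1 + abs(j - node))
    return p

measured = simulate_pressures(true_leaks)

def fitness(individual):                            # individual: [(node, size), ...]
    sim = simulate_pressures(dict(individual))
    return -sum((s - m) ** 2 for s, m in zip(sim, measured))

def random_individual():
    return [(random.randrange(N_NODES), random.uniform(0.1, 2.0)) for _ in range(N_LEAKS)]

population = [random_individual() for _ in range(POP)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP // 2]
    children = []
    while len(children) < POP - len(survivors):
        a, b = random.sample(survivors, 2)
        child = [random.choice(pair) for pair in zip(a, b)]        # uniform crossover
        if random.random() < 0.3:                                  # mutation
            k = random.randrange(N_LEAKS)
            child[k] = (random.randrange(N_NODES), random.uniform(0.1, 2.0))
        children.append(child)
    population = survivors + children

print("best candidate leaks:", sorted(population, key=fitness, reverse=True)[0])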
Abstract:
In the field of operational water management, Model Predictive Control (MPC) has gained popularity owing to its versatility and flexibility. The MPC controller, which takes predictions, time delays and uncertainties into account, can be designed for multi-objective management problems and for large-scale systems. Nonetheless, a critical obstacle that needs to be overcome in MPC is the large computational burden when a large-scale system is considered or a long prediction horizon is involved. In order to solve this problem, we use an adaptive prediction accuracy (APA) approach that can reduce the computational burden almost by half. The proposed MPC-APA scheme is tested on the northern Dutch water system, which comprises Lake IJssel, Lake Marker, the River IJssel and the North Sea Canal. The simulation results show that by using the MPC-APA scheme, the computational time can be reduced to a large extent and a flood protection problem over longer prediction horizons can be solved well.
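The receding-horizon mechanism at the heart of MPC can be sketched on a single-reservoir toy problem, as below; the mass-balance model, horizon, weights and bounds are assumptions, and neither the Dutch system model nor the APA scheme itself is reproduced.

import numpy as np
from scipy.optimize import minimize

dt, horizon = 1.0, 12                   # time step and prediction horizon (steps)
area, setpoint = 1000.0, 0.0            # storage area and target level deviation
inflow_forecast = 5.0 * np.ones(48)     # predicted inflow over the simulation

def predict(level0, outflows, inflows):
    levels, level = [], level0
    for q_out, q_in in zip(outflows, inflows):
        level = level + dt * (q_in - q_out) / area   # simple mass balance
        levels.append(level)
    return np.array(levels)

def cost(outflows, level0, inflows):
    levels = predict(level0, outflows, inflows)
    # Penalise deviation from the target level and abrupt changes in the control
    return np.sum((levels - setpoint) ** 2) + 1e-4 * np.sum(np.diff(outflows) ** 2)

level, applied = 0.5, []
for k in range(24):                     # closed-loop simulation
    inflows = inflow_forecast[k:k + horizon]
    res = minimize(cost, x0=5.0 * np.ones(horizon), args=(level, inflows),
                   bounds=[(0.0, 20.0)] * horizon)
    u = res.x[0]                        # apply only the first control move, then re-optimize
    applied.append(u)
    level = level + dt * (inflow_forecast[k] - u) / area

print("final level deviation:", round(level, 4))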
Abstract:
Modern society depends on efficient communication systems able to transmit and receive information with ever higher speed and reliability. The need for ever more efficient devices motivates optimization techniques for microstrip devices, such as techniques to increase bandwidth: thicker substrates and substrate structures with EBG (Electromagnetic Band Gap) and PBG (Photonic Band Gap) materials. This work aims to study the application of PBG materials to the substrates of planar microstrip structures, more precisely in quadrature directional couplers, rat-race couplers and impedance transformers. A study of planar microstrip structures and EBG substrates is presented. PBG substrates can be used to optimize radiation through the air, thus reducing the occurrence of surface waves and the resulting edge diffraction responsible for degradation of the radiation pattern. The frequencies and couplings for each structure were obtained through specific programs in FORTRAN Power Station, using the program PACMO (Computer Aided Design in Microwave). Frequency and coupling results are obtained for the devices, varying the frequency band used (cellular communication and WiMAX systems) and the permittivity of the substrate, and comparing the results for conventional materials and PBG materials in the s and p polarizations.
Abstract:
In this paper, a multi-objective approach for observing the performance of distribution systems with embedded generators in the steady state, based on heuristics and power system analysis, is proposed. The proposed hybrid performance index describes the quality of the operating state in each considered distribution network configuration. In order to represent the system state, the loss allocation in the distribution systems, based on the Z-bus loss allocation method and a compensation-based power flow algorithm, is determined. An investigation of the impact of the integration of embedded generators on the overall performance of the distribution systems in the steady state is also performed. Results obtained from several case studies are presented and discussed. Copyright (C) 2004 John Wiley & Sons, Ltd.
Abstract:
A combinatorial mathematical model, in tandem with a metaheuristic technique, for solving transmission network expansion planning (TNEP) using an AC model associated with reactive power planning (RPP) is presented in this paper. AC-TNEP is handled through a prior DC model, while additional lines as well as VAr plants are used as reinforcements to cope with real network requirements. A solution for AC-TNEP at the reinforcement stage can be obtained by assuming that all reactive demands are supplied locally; by then neglecting the local reactive sources, reactive power planning (RPP) is carried out to find the minimum required reactive power sources. A binary GA as well as a real-coded genetic algorithm (RCA) are employed as metaheuristic optimization techniques for solving this combinatorial TNEP as well as the RPP problem. High-quality results with lower investment costs, obtained through case studies on test systems, show the usefulness of the proposal when working directly with the AC model in transmission network expansion planning instead of relaxed models. (C) 2010 Elsevier B.V. All rights reserved.