958 results for Cost Optimization


Relevance: 30.00%

Publisher:

Abstract:

This paper presents a decentralized, peer-to-peer parallel version of the vector evaluated particle swarm optimization (VEPSO) algorithm for multi-objective design optimization of laminated composite plates using the message passing interface (MPI). The design optimization of laminated composite plates, being a combinatorially explosive constrained non-linear optimization problem (CNOP) with many design variables and a vast solution space, warrants the use of non-parametric, heuristic optimization algorithms such as PSO. The optimization requires simultaneously minimizing both the weight and the cost of the composite plates, which renders the problem multi-objective; hence VEPSO, a multi-objective variant of PSO, is used. Even with such a heuristic, the problem is computationally intensive and suffers from long execution times under sequential computation. A parallel version of the algorithm has therefore been developed to run on several nodes of an IBM P720 cluster. Using MPI's collective communication directives, the proposed parallel algorithm establishes a peer-to-peer relationship between the constituent parallel processes, deviating from the more common master-slave approach, and achieves a reduction in computation time by a factor of up to 10. Finally, we show the effectiveness of the proposed parallel algorithm by comparing it with a serial implementation of VEPSO and a parallel implementation of the vector evaluated genetic algorithm (VEGA) for the same design problem. (c) 2012 Elsevier Ltd. All rights reserved.
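
A minimal sketch of the peer-to-peer exchange step at the heart of such a parallel VEPSO, written here with mpi4py collective communication; the objective functions, parameter values, and update rule below are illustrative assumptions rather than the paper's implementation:

    # Each MPI process runs one swarm on one objective; swarms exchange their best
    # particles via a collective allgather, so no master process is involved.
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    def objective(x, which):
        # Placeholder objectives standing in for plate weight and plate cost.
        return np.sum(x**2) if which == 0 else np.sum((x - 1.0)**2)

    dim, n_particles = 8, 20
    rng = np.random.default_rng(rank)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p, rank % 2) for p in pos])

    for _ in range(100):
        gbest = pbest[np.argmin(pbest_val)]
        # Peer-to-peer step: every swarm broadcasts its best to all others.
        all_best = comm.allgather(gbest)
        guide = all_best[(rank + 1) % size]  # VEPSO-style: follow a neighbouring swarm's best
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (guide - pos)
        pos = pos + vel
        val = np.array([objective(p, rank % 2) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]

Run with, for example, mpiexec -n 2 python vepso_sketch.py so that each objective gets its own process.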

Relevance: 30.00%

Publisher:

Abstract:

Groundwater management problems are typically solved by the simulation-optimization approach, in which complex numerical models are used to simulate groundwater flow and/or contaminant transport. These numerical models take a long time to solve the management problems and hence become computationally expensive. In this study, Artificial Neural Network (ANN) and Particle Swarm Optimization (PSO) models were developed and coupled for the management of groundwater in the Dore river basin in France. An Analytic Element Method (AEM) based flow model was developed and used to generate the dataset for training and testing the ANN model. The developed ANN-PSO model was applied to minimize the pumping cost of the wells, including the cost of the pipeline. The discharge and location of the pumping wells were taken as the decision variables, and the ANN-PSO model was applied to find the optimal locations of the wells. The results of the ANN-PSO model are similar to those obtained by the AEM-PSO model. The results show that the ANN model can reduce the computational burden significantly, as it is able to analyze different scenarios, and that the ANN-PSO model is capable of identifying the optimal locations of wells efficiently.
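
A minimal sketch of the surrogate idea described above, assuming a scikit-learn multilayer perceptron as the ANN and a toy stand-in for the flow simulator; the cost function, sampling ranges and PSO parameters are illustrative assumptions:

    # Train an ANN on samples from an (expensive) simulator, then let PSO
    # minimize the ANN-predicted pumping cost instead of calling the simulator.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    def simulator_cost(x):
        # Stand-in for the AEM flow model: cost as a function of well settings.
        return np.sum(x**2, axis=-1) + 0.1 * np.sum(np.sin(3 * x), axis=-1)

    # 1) Build the training set from the simulator.
    X_train = rng.uniform(-2, 2, (500, 4))
    y_train = simulator_cost(X_train)
    ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    ann.fit(X_train, y_train)

    # 2) PSO over the cheap ANN surrogate.
    n, dim = 30, 4
    pos = rng.uniform(-2, 2, (n, dim)); vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), ann.predict(pos)
    for _ in range(200):
        gbest = pbest[np.argmin(pbest_val)]
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, -2, 2)
        val = ann.predict(pos)
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]

    print("surrogate-optimal design:", pbest[np.argmin(pbest_val)])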

Relevance: 30.00%

Publisher:

Abstract:

An exciting application of crowdsourcing is the use of social networks for complex task execution. In this paper, we address the problem of a planner who needs to incentivize agents within a network to seek their help both in executing an atomic task and in recruiting other agents to execute it. We study this mechanism design problem under two natural resource optimization settings: (1) cost-critical tasks, where the planner's goal is to minimize the total cost, and (2) time-critical tasks, where the goal is to minimize the total time elapsed before the task is executed. We identify a set of desirable properties that should ideally be satisfied by a crowdsourcing mechanism. In particular, sybil-proofness and collapse-proofness are two complementary properties in our desiderata. We prove that no mechanism can satisfy all the desirable properties simultaneously. This leads us naturally to explore approximate versions of the critical properties. We focus our attention on approximate sybil-proofness, and our exploration leads to a parametrized family of payment mechanisms that satisfy collapse-proofness. We characterize the approximate versions of the desirable properties in the cost-critical and time-critical domains.

Relevance: 30.00%

Publisher:

Abstract:

Groundwater management involves conflicting objectives, as maximization of discharge contradicts the criteria of minimum pumping cost and minimum piping cost. In addition, the available data contain uncertainties such as market fluctuations, variations in the water levels of wells, and variations in groundwater policies. A fuzzy model is therefore needed to tackle the uncertainties, and a multiobjective optimization must be conducted to simultaneously satisfy the contradicting objectives. Towards this end, a multiobjective fuzzy optimization model is evolved. To obtain the upper and lower bounds of the individual objectives, particle swarm optimization (PSO) is adopted. The analytic element method (AEM) is employed to obtain the operating potentiometric head. In this study, a multiobjective fuzzy optimization model considering three conflicting objectives is developed using the PSO and AEM methods to obtain a sustainable groundwater management policy. The developed model is applied to a case study, and it is demonstrated that the compromise solution satisfies all the objectives with adequate levels of satisfaction. Sensitivity analysis is carried out by varying the parameters, and it is shown that the effect of any such variation is quite significant. Copyright (c) 2015 John Wiley & Sons, Ltd.
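
For context, a common way to combine such bounds into a single compromise problem is the max-min fuzzy formulation sketched below; the paper's exact membership functions and constraint set are not reproduced here, so treat this as the generic pattern rather than the authors' model. Each objective $f_i$ is mapped to a linear membership (degree of satisfaction) using the bounds $f_i^L, f_i^U$ obtained from the single-objective PSO runs, and the smallest satisfaction level is maximized:

\[ \mu_i(x) = \frac{f_i^{U} - f_i(x)}{f_i^{U} - f_i^{L}} \quad \text{for a minimization objective (reversed for the discharge-maximization objective)}, \]
\[ \max_{x,\,\lambda} \ \lambda \quad \text{subject to} \quad \mu_i(x) \ge \lambda, \ \ i = 1, 2, 3, \qquad x \in X, \]

where $X$ denotes the feasible region defined by the groundwater constraints.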

Relevance: 30.00%

Publisher:

Abstract:

Selection of relevant features is an open problem in brain-computer interface (BCI) research. Features extracted from brain signals are often high-dimensional, which in turn affects the accuracy of the classifier. Selecting the most relevant features improves the performance of the classifier and reduces the computational cost of the system. In this study, we have used a combination of Bacterial Foraging Optimization and Learning Automata to determine the best subset of features from a given motor imagery electroencephalography (EEG) based BCI dataset. We have employed the Discrete Wavelet Transform to obtain a high-dimensional feature set and classified it with the Distance Likelihood Ratio Test. Our proposed feature selector produced an accuracy of 80.291% in 216 seconds.
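
A hedged illustration of the pipeline shape (wavelet features, subset selection, classification): a greedy forward-selection wrapper and an LDA classifier stand in for the paper's Bacterial Foraging Optimization / Learning Automata search and the Distance Likelihood Ratio Test, and the synthetic "EEG" data and all parameters are assumptions:

    import numpy as np
    import pywt
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_trials, n_samples = 120, 256
    X_raw = rng.standard_normal((n_trials, n_samples))   # placeholder EEG trials
    y = rng.integers(0, 2, n_trials)                      # placeholder class labels

    def dwt_features(sig):
        # Concatenate all wavelet coefficients into a high-dimensional feature vector.
        return np.concatenate(pywt.wavedec(sig, "db4", level=4))

    X = np.array([dwt_features(s) for s in X_raw])

    def score(mask):
        if not mask.any():
            return 0.0
        clf = LinearDiscriminantAnalysis()
        return cross_val_score(clf, X[:, mask], y, cv=5).mean()

    # Greedy forward selection over feature indices (stand-in for the BFO + LA search).
    mask = np.zeros(X.shape[1], dtype=bool)
    for _ in range(10):
        gains = [(score(np.logical_or(mask, np.eye(X.shape[1], dtype=bool)[j])), j)
                 for j in np.flatnonzero(~mask)]
        best_gain, best_j = max(gains)
        if best_gain <= score(mask):
            break
        mask[best_j] = True

    print("selected features:", np.flatnonzero(mask), "cv accuracy:", score(mask))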

Relevance: 30.00%

Publisher:

Abstract:

In metropolitan cities, public transportation plays a vital role in the mobility of people, and new routes have to be introduced frequently due to the fast development of the city in terms of population growth and city size. Whenever a new route is introduced or the frequency of buses is increased, the non-revenue kilometers covered by the buses increase, because the depot and the route starting/ending points are at different places. These non-revenue, or dead, kilometers depend on the distance between the depot and the route starting/ending point. The dead kilometers not only result in revenue loss but also increase the operating cost because of the extra kilometers covered by buses. Reducing dead kilometers is therefore necessary for the economic growth of the public transportation system. In this study, attention is focused on minimizing dead kilometers by optimizing the allocation of buses to depots based on the shortest distance between the depot and the route starting/ending points. We also consider depot capacity and the period of operation during the allocation of buses, to ensure parking safety and proper maintenance of buses. A mathematical model, a mixed integer program, is developed considering the aforementioned parameters and applied to the Bangalore Metropolitan Transport Corporation (BMTC) routes currently in operation in order to obtain the optimal allocation of buses to depots. A database of dead kilometers for all BMTC schedules is generated using the Form-4 (trip sheet) of each schedule to analyze depot-wise and division-wise dead kilometers. This study also suggests alternative locations where depots could be placed to reduce dead kilometers. Copyright (C) 2015 John Wiley & Sons, Ltd.
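
A minimal sketch of this kind of allocation model, assuming binary schedule-to-depot assignment variables, a dead-kilometre matrix, and depot capacities; the data and the use of the PuLP modelling library are illustrative, not taken from the paper:

    # Assign each schedule (bus) to a depot so that total dead kilometres are
    # minimised, subject to depot capacity limits.
    import pulp

    schedules = ["s1", "s2", "s3", "s4", "s5"]
    depots = ["d1", "d2"]
    capacity = {"d1": 3, "d2": 3}
    # dead_km[s][d]: distance between the start/end point of schedule s and depot d.
    dead_km = {
        "s1": {"d1": 2.0, "d2": 5.0},
        "s2": {"d1": 4.0, "d2": 1.5},
        "s3": {"d1": 3.0, "d2": 3.5},
        "s4": {"d1": 6.0, "d2": 2.0},
        "s5": {"d1": 1.0, "d2": 4.0},
    }

    prob = pulp.LpProblem("dead_km_minimisation", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("assign", (schedules, depots), cat="Binary")

    # Objective: total dead kilometres over all assignments.
    prob += pulp.lpSum(dead_km[s][d] * x[s][d] for s in schedules for d in depots)

    # Each schedule is parked at exactly one depot.
    for s in schedules:
        prob += pulp.lpSum(x[s][d] for d in depots) == 1

    # Depot capacity limits (parking safety and maintenance).
    for d in depots:
        prob += pulp.lpSum(x[s][d] for s in schedules) <= capacity[d]

    prob.solve()
    for s in schedules:
        for d in depots:
            if x[s][d].value() == 1:
                print(s, "->", d)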

Relevance: 30.00%

Publisher:

Abstract:

Campaigners are increasingly using online social networking platforms to promote products, ideas and information. A popular method of promoting a product or even an idea is to incentivize individuals to evangelize it vigorously by providing them with referral rewards in the form of discounts, cashbacks, or social recognition. Due to budget constraints on scarce resources such as money and manpower, it may not be possible to provide incentives to the entire population, and hence incentives need to be allocated judiciously to appropriate individuals to ensure the highest possible outreach. We do so by formulating and solving an optimization problem using percolation theory. In particular, we compute the set of individuals to be incentivized so as to minimize the expected cost while ensuring a given outreach size. We also solve the problem of computing the set of individuals to be incentivized so as to maximize the outreach size for a given cost budget. The optimization problem turns out to be nontrivial: it involves quantities that must be computed by numerically solving a fixed point equation. Our primary contribution is to show that, for a fairly general cost structure, these optimization problems can be solved via a simple linear program. We believe that our approach of using percolation theory to formulate an optimization problem is the first of its kind. (C) 2016 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and the information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, approaches for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem, known as a convex relaxation, that provides a meaningful bound for the original problem. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structure such as symmetry is exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
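
For reference, the Hoeffding bound that serves as the baseline states that for independent random variables $X_i$ with $a_i \le X_i \le b_i$ and $S_n = \sum_{i=1}^{n} X_i$,

\[ \Pr\left(S_n - \mathbb{E}[S_n] \ge t\right) \le \exp\!\left(-\frac{2t^2}{\sum_{i=1}^{n} (b_i - a_i)^2}\right), \qquad t > 0. \]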

Relevance: 30.00%

Publisher:

Abstract:

We are at the cusp of a historic transformation of both the communication system and the electricity system. This creates challenges as well as opportunities for the study of networked systems. Problems in these systems typically involve a huge number of endpoints that require intelligent coordination in a distributed manner. In this thesis, we develop models, theories, and scalable distributed optimization and control algorithms to overcome these challenges.

This thesis focuses on two specific areas: multi-path TCP (Transmission Control Protocol) and electricity distribution system operation and control. Multi-path TCP (MP-TCP) is a TCP extension that allows a single data stream to be split across multiple paths. MP-TCP has the potential to greatly improve both the reliability and the efficiency of communication devices. We propose a fluid model for a large class of MP-TCP algorithms and identify design criteria that guarantee the existence, uniqueness, and stability of the system equilibrium. We clarify how algorithm parameters impact TCP-friendliness, responsiveness, and window oscillation, and demonstrate an inevitable tradeoff among these properties. We discuss the implications of these properties for the behavior of existing algorithms and motivate a new algorithm, Balia (balanced linked adaptation), which generalizes existing algorithms and strikes a good balance among TCP-friendliness, responsiveness, and window oscillation. We have implemented Balia in the Linux kernel and use our prototype to compare the proposed algorithm with existing MP-TCP algorithms.

Our second focus is on designing computationally efficient algorithms for electricity distribution system operation and control. First, we develop efficient algorithms for feeder reconfiguration in distribution networks. The feeder reconfiguration problem chooses the on/off status of the switches in a distribution network in order to minimize a certain cost, such as power loss. It is a mixed integer nonlinear program and hence hard to solve. We propose a heuristic algorithm based on the recently developed convex relaxation of the optimal power flow problem. The algorithm is efficient and successfully computes an optimal configuration on all networks that we have tested. Moreover, we prove that the algorithm solves the feeder reconfiguration problem optimally under certain conditions. We also propose an even more efficient algorithm, which incurs a loss in optimality of less than 3% on the test networks.

Second, we develop efficient distributed algorithms that solve the optimal power flow (OPF) problem on distribution networks. The OPF problem determines a network operating point that minimizes a certain objective, such as generation cost or power loss. Traditionally, OPF is solved in a centralized manner. With the increasing penetration of volatile renewable energy resources in distribution systems, we need faster and distributed solutions for real-time feedback control. This is difficult because the power flow equations are nonlinear and Kirchhoff's laws are global. We propose solutions for both balanced and unbalanced radial distribution networks. They exploit recent results showing that a globally optimal solution of OPF over a radial network can be obtained through a second-order cone program (SOCP) or semidefinite program (SDP) relaxation. Our distributed algorithms are based on the alternating direction method of multipliers (ADMM), but unlike standard ADMM-based distributed OPF algorithms, which require solving optimization subproblems using iterative methods, the proposed solutions exploit the problem structure to greatly reduce the computation time. Specifically, for balanced networks, our decomposition allows us to derive closed-form solutions for these subproblems, which speeds up convergence by a factor of 1000 in simulations. For unbalanced networks, the subproblems reduce to either closed-form solutions or eigenvalue problems whose size remains constant as the network scales up, and the computation time is reduced by a factor of 100 compared with iterative methods.
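
For reference, the standard scaled-form ADMM iteration that such distributed OPF schemes build on, for a problem $\min_{x,z} f(x) + g(z)$ subject to $Ax + Bz = c$ with penalty parameter $\rho > 0$, is shown below; the thesis-specific per-bus decomposition and its closed-form subproblem solutions are not reproduced here:

\[ x^{k+1} = \arg\min_{x}\ f(x) + \tfrac{\rho}{2}\bigl\|Ax + Bz^{k} - c + u^{k}\bigr\|_2^2, \]
\[ z^{k+1} = \arg\min_{z}\ g(z) + \tfrac{\rho}{2}\bigl\|Ax^{k+1} + Bz - c + u^{k}\bigr\|_2^2, \]
\[ u^{k+1} = u^{k} + Ax^{k+1} + Bz^{k+1} - c. \]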

Relevance: 30.00%

Publisher:

Abstract:

An experimental culture of P. monodon based on an extension approach was conducted in two brackish-water earthen ponds of the Demonstration Farm and Training Center (DFTC), Kaliganj, Satkhira. The experiment aimed to provide farmers with appropriate technology that can immediately improve pond yield while keeping the environment in a friendly condition. To optimize the stocking density for cost-effective, environmentally friendly improved extensive shrimp farming, the ponds were stocked with coastal river post-larvae of P. monodon at stocking rates of 2 pls/m² and 2.5 pls/m² without supplementary feeding. To control experimental error, another five farmers' ghers were used as replicates of each demonstration pond. Considering the farmers' buying ability, the cost of inputs and other facilities was kept minimal. The impact of stocking density was evaluated on the basis of growth, survival rate, production and economic return. Better production (average 299.01 kg/ha) with the same survival rate (39.33%) was found at a stocking density of 2.5 pls/m², without causing any deterioration of the culture environment.

Relevance: 30.00%

Publisher:

Abstract:

Pile reuse has become an increasingly popular option in foundation design, mainly due to its potential cost and environmental benefits and the problem of underground congestion in urban areas. However, key geotechnical concerns remain regarding the behavior of reused piles and the modeling of foundation systems involving old and new piles to support building loads of the new structure. In this paper, a design and analysis tool for pile reuse projects will be introduced. The tool allows coupling of superstructure stiffness with the foundation model, and includes an optimization algorithm to obtain the best configuration of new piles to work alongside reused piles. Under the concept of Pareto Optimality, multi-objective optimization analyses can also reveal the relationship between material usage and the corresponding foundation performance, providing a series of reuse options at various foundation costs. The components of this analysis tool will be discussed and illustrated through a case history in London, where 110 existing piles are reused at a site to support the proposed new development. The case history reveals the difficulties faced by foundation reuse in urban areas and demonstrates the application of the design tool to tackle these challenges. © ASCE 2011.

Relevance: 30.00%

Publisher:

Abstract:

Multi-objective genetic algorithms have become a popular choice to aid in optimising the sizing of a whole hybrid power train. Within these optimisation processes, other optimisation techniques for the control strategy are implemented. This optimisation within an optimisation requires many simulations to be run, so reducing the computational cost is highly desirable. This paper presents an optimisation framework consisting of a series hybrid optimisation algorithm, in which a global search optimises a submarine propulsion system using low-fidelity models and, to refine the results, a local search is used with high-fidelity models. The effectiveness of the hybrid optimisation algorithm is demonstrated through the optimisation of a submarine propulsion system. © 2011 EPE Association - European Power Electr.
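
A hedged sketch of the two-stage pattern (global search on a cheap low-fidelity model, then local refinement against a high-fidelity model). The toy cost models and the use of SciPy's differential evolution and Nelder-Mead solvers are illustrative assumptions; the paper's framework itself is GA-based:

    import numpy as np
    from scipy.optimize import differential_evolution, minimize

    def low_fidelity_cost(x):
        # Fast surrogate of the propulsion-system cost.
        return np.sum((x - 0.5) ** 2)

    def high_fidelity_cost(x):
        # Slower, more accurate model (here: the surrogate plus extra detail).
        return np.sum((x - 0.5) ** 2) + 0.05 * np.sum(np.cos(10 * x))

    bounds = [(-1.0, 1.0)] * 3

    # Stage 1: global search on the low-fidelity model.
    coarse = differential_evolution(low_fidelity_cost, bounds, seed=0)

    # Stage 2: local refinement on the high-fidelity model, started from the global result.
    refined = minimize(high_fidelity_cost, coarse.x, method="Nelder-Mead")

    print("coarse optimum:", coarse.x, "refined optimum:", refined.x)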

Relevance: 30.00%

Publisher:

Abstract:

A low-cost solar drier was constructed using locally available materials. The drier measured 20x3.6x3 and had a drying capacity of 80 kg of SIS (w/w). Optimization of moisture content was observed for mola, dhela, chapila, chanda and puti at temperature ranges of 40-45°C and 50-55°C in the solar tunnel drier. There was little or no change in moisture content at temperatures below 40°C during the first 3 hours; thereafter the moisture content declined gradually as the drying period increased. On the other hand, at temperatures between 50-55°C, the moisture content started to decline after 2 hours of drying. The moisture content of the samples reached about 16% after 26 hours of drying at 40-45°C and 20 hours at 50-55°C. The optimum temperature for producing high-quality dried products in the solar tunnel drier was 45-50°C. The temperature and relative humidity outside and inside the driers (with fish) at various locations were recorded from 8:00 am to 4:00 pm. Over this period the ambient atmospheric temperature was in the range of 25-37°C, and the atmospheric relative humidity was in the range of 30-58%. The maximum temperature inside the driers was in the range of 28-65°C; the lowest temperature, 28°C, was recorded in the morning, and the highest, 65°C, was recorded at 13:00. The maximum relative humidity of 58% was found in the afternoon and the minimum of 28% at noon. There was an inverse relationship between the intensity of sunshine and humidity, which decreased as sunshine increased. In total, it took around 26 hours of drying to reduce the moisture level to about 16%.

Relevance: 30.00%

Publisher:

Abstract:

Superconductors are known for their ability to trap magnetic field. A thermally actuated magnetization (TAM) flux pump is a system that uses a thermal material to generate multiple small magnetic pulses, resulting in a high magnetization accumulated in the superconductor. Ferrites are good thermal-material candidates for future TAM flux pumps because the relative permeability of ferrite changes significantly with temperature, particularly around the Curie temperature. Several soft ferrites have been specially synthesized to reduce the cost and improve the efficiency of the TAM flux pump. Various ferrite compositions have been tested under temperature variations ranging from 77 K to 300 K. The experimental results for the synthesized soft ferrite Cu0.3Zn0.7Ti0.04Fe1.96O4, including the Curie temperature, relative magnetic permeability and volume magnetization (emu/cm3), are presented in this paper. The results are compared with those of the original thermal material, gadolinium, used in the TAM flux pump system. Cu0.3Zn0.7Ti0.04Fe1.96O4 exhibits superior characteristics and is believed to be a suitable material for the next-generation TAM flux pump. © 2011 IEEE.