236 results for Global Optimization


Relevance:

20.00%

Publisher:

Abstract:

Climate change is one of the most important global environmental challenges, with implications for food production, water supply, health, energy, etc. Addressing climate change requires a good scientific understanding as well as coordinated action at the national and global levels. This paper addresses these challenges. Historically, the responsibility for the increase in greenhouse gas emissions lies largely with the industrialized world, though the developing countries are likely to be the source of an increasing proportion of future emissions. The projected climate change under various scenarios is likely to have implications for food production, water supply, coastal settlements, forest ecosystems, health, energy security, etc. The adaptive capacity of communities likely to be impacted by climate change is low in developing countries. The efforts made under the UNFCCC and the Kyoto Protocol provisions are clearly inadequate to address the climate change challenge. The most effective way to address climate change is to adopt a sustainable development pathway by shifting to environmentally sustainable technologies and promoting energy efficiency, renewable energy, forest conservation, reforestation, water conservation, etc. The issue of highest importance to developing countries is reducing the vulnerability of their natural and socio-economic systems to the projected climate change. India and other developing countries will face the challenge of promoting mitigation and adaptation strategies, bearing the cost of such an effort, and managing its implications for economic development.

Relevance:

20.00%

Publisher:

Abstract:

We propose a novel equalizer for ultrawideband (UWB) multiple-input multiple-output (MIMO) channels characterized by severe delay spreads. The proposed equalizer is based on reactive tabu search (RTS), a heuristic originally designed to obtain approximate solutions to combinatorial optimization problems. The proposed RTS equalizer is shown to perform increasingly better as the number of multipath components (MPCs) grows, and to achieve near-maximum-likelihood (ML) performance for a large number of MPCs at much lower complexity than the ML detector. The proposed RTS equalizer is shown to perform to within 0.4 dB of single-input multiple-output AWGN performance at an uncoded BER of 10^-3 on a severely delay-spread UWB MIMO channel with 48 equal-energy MPCs.
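
As a rough illustration of the search heuristic involved, the sketch below applies reactive tabu search to the generic detection problem of minimizing ||y - Hx||^2 over antipodal symbols. It is a minimal sketch under stated assumptions (real-valued BPSK symbols, a single-flip neighborhood, a simple revisit-driven tenure rule), not the equalizer proposed in the paper.

```python
import numpy as np

def rts_detect(H, y, max_iters=200, seed=0):
    """Reactive tabu search over x in {-1, +1}^n minimizing ||y - H x||^2."""
    rng = np.random.default_rng(seed)
    n = H.shape[1]
    cost = lambda v: float(np.sum((y - H @ v) ** 2))   # ML metric
    x = rng.choice([-1.0, 1.0], size=n)                # random initial symbol vector
    best, best_cost = x.copy(), cost(x)
    tabu = np.zeros(n, dtype=int)                      # remaining tabu tenure per symbol
    tenure, visited = 2, set()
    for _ in range(max_iters):
        # Evaluate all single-symbol-flip neighbours of the current vector.
        neigh_costs = []
        for i in range(n):
            xn = x.copy()
            xn[i] = -xn[i]
            neigh_costs.append(cost(xn))
        # Best admissible move: non-tabu, or tabu but improving the best (aspiration).
        move = None
        for i in sorted(range(n), key=lambda k: neigh_costs[k]):
            if tabu[i] == 0 or neigh_costs[i] < best_cost:
                move = i
                break
        if move is None:                               # all moves tabu: take the least bad one
            move = int(np.argmin(neigh_costs))
        x[move] = -x[move]
        tabu = np.maximum(tabu - 1, 0)
        tabu[move] = tenure
        # Reactive rule: lengthen the tabu tenure when the search revisits a solution.
        key = tuple(x)
        if key in visited:
            tenure += 1
        else:
            visited.add(key)
            tenure = max(2, tenure - 1)
        if neigh_costs[move] < best_cost:
            best, best_cost = x.copy(), neigh_costs[move]
    return best

# Tiny usage example with a random 4x4 real channel and BPSK symbols.
rng = np.random.default_rng(1)
H = rng.normal(size=(4, 4))
x_true = rng.choice([-1.0, 1.0], size=4)
y = H @ x_true + 0.05 * rng.normal(size=4)
print(rts_detect(H, y), x_true)
```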

Relevance:

20.00%

Publisher:

Abstract:

There are a number of large networks which occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is a possibility of an input (like power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the result required is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equality constraints are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has also been embedded into the first one. This is called the total residue approach. It changes the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems — both LAN and WAN — suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed — one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define an algorithm that is faster and uses minimum communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
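
As a concrete, if highly simplified, illustration of the Lagrange multiplier technique described above, the sketch below applies a Newton iteration to the necessary (KKT) conditions of a toy equality-constrained problem standing in for a nonlinear network; note that the constraint Jacobian computed for the variable update is the same matrix that appears in the multiplier equations. This is a generic sketch under assumed toy cost and constraint functions, not the paper's algorithm.

```python
import numpy as np

def grad_f(x):              # nodal cost gradient, f(x) = x1^2 + x2^2
    return 2.0 * x

def g(x):                   # nonlinear "flow balance" constraint, g(x) = x1*x2 - 1
    return np.array([x[0] * x[1] - 1.0])

def g_jac(x):
    return np.array([[x[1], x[0]]])

x, lam = np.array([2.0, 2.0]), np.zeros(1)
for _ in range(20):
    J = g_jac(x)
    # Lagrangian Hessian of the toy problem: 2*I + lam * Hessian(g).
    H = 2.0 * np.eye(2) + lam[0] * np.array([[0.0, 1.0], [1.0, 0.0]])
    kkt = np.block([[H, J.T], [J, np.zeros((1, 1))]])             # J appears in both blocks
    rhs = -np.concatenate([grad_f(x) + J.T @ lam, g(x)])
    if np.linalg.norm(rhs) < 1e-10:
        break
    step = np.linalg.solve(kkt, rhs)
    x, lam = x + step[:2], lam + step[2:]

print(x, lam)   # approaches x = (1, 1), lam = -2 in a handful of iterations
```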

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we present a generic method/model for multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector-evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work, a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: a failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, which include Particle Swarm Optimization (PSO), Artificial Immune System (AIS) and Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
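
The defining feature of a vector-evaluated swarm method is that separate sub-populations each select on a single objective while periodically exchanging their best members. The sketch below illustrates that exchange on discrete stacking-sequence candidates; the weight and cost models, the allowed ply angles and the omission of the strength constraint are illustrative assumptions, not the formulation used in the paper.

```python
import random

ANGLES = [0, 45, -45, 90]                       # allowed ply orientations [degrees], illustrative
PLY_T = 0.125e-3                                # ply thickness [m], placeholder
SWARM = 20

def random_design():
    return [random.choice(ANGLES) for _ in range(random.randint(8, 32))]

def weight(d):                                  # placeholder: areal mass grows with ply count
    return len(d) * PLY_T * 1600.0

def cost(d):                                    # placeholder: off-axis plies cost a little more
    return sum(1.0 if a == 0 else 1.3 for a in d)

def mutate(d):                                  # neighbour: tweak an angle, or add/drop a ply
    d = list(d)
    r = random.random()
    if r < 0.6 or len(d) <= 8:
        d[random.randrange(len(d))] = random.choice(ANGLES)
    elif r < 0.8:
        d.insert(random.randrange(len(d) + 1), random.choice(ANGLES))
    else:
        d.pop(random.randrange(len(d)))
    return d

swarm_w = [random_design() for _ in range(SWARM)]   # sub-swarm selecting on weight
swarm_c = [random_design() for _ in range(SWARM)]   # sub-swarm selecting on cost
for _ in range(100):
    swarm_w = sorted(swarm_w + [mutate(d) for d in swarm_w], key=weight)[:SWARM]
    swarm_c = sorted(swarm_c + [mutate(d) for d in swarm_c], key=cost)[:SWARM]
    # Vector-evaluated exchange: each sub-swarm receives the other's current best member.
    swarm_w[-1], swarm_c[-1] = swarm_c[0], swarm_w[0]

print(len(swarm_w[0]), round(weight(swarm_w[0]), 4), round(cost(swarm_c[0]), 2))
```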

Relevance:

20.00%

Publisher:

Abstract:

Methodologies are presented for minimization of risk in a river water quality management problem. A risk minimization model is developed to minimize the risk of low water quality along a river in the face of conflict among various stakeholders. The model consists of three parts: a water quality simulation model, a risk evaluation model with uncertainty analysis, and an optimization model. Sensitivity analysis, First Order Reliability Analysis (FORA) and Monte-Carlo simulations are performed to evaluate the fuzzy risk of low water quality. Fuzzy multiobjective programming is used to formulate the multiobjective model. Probabilistic Global Search Lausanne (PGSL), a recently developed global search algorithm, is used for solving the resulting non-linear optimization problem. The algorithm is based on the assumption that better sets of points are more likely to be found in the neighborhood of good sets of points, thereby intensifying the search in the regions that contain good solutions. Another model is developed for risk minimization, which deals only with the moments of the generated probability density functions of the water quality indicators. Suitable skewness values of the water quality indicators, which lead to low fuzzy risk, are identified. Results of the models are compared with the results of a deterministic fuzzy waste load allocation model (FWLAM) when the methodologies are applied to the case study of the Tunga-Bhadra river system in southern India, with a steady-state BOD-DO model. The fractional removal levels resulting from the risk minimization model are slightly higher, but result in a significant reduction in the risk of low water quality. (c) 2005 Elsevier Ltd. All rights reserved.
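
To make the uncertainty analysis concrete, the sketch below shows one way the Monte-Carlo part of such a fuzzy risk evaluation can be set up: the fuzzy risk of low water quality is taken as the expected membership of the simulated dissolved-oxygen (DO) level in a fuzzy "low water quality" set. The DO distribution and the membership breakpoints are illustrative placeholders, not the paper's calibrated values for the Tunga-Bhadra system.

```python
import numpy as np

def membership_low_quality(do_mg_l, lo=4.0, hi=6.0):
    """1 when DO <= lo (clearly low quality), 0 when DO >= hi, linear in between."""
    return np.clip((hi - do_mg_l) / (hi - lo), 0.0, 1.0)

rng = np.random.default_rng(42)
do_samples = rng.normal(loc=5.5, scale=0.8, size=100_000)   # uncertain DO at a checkpoint
fuzzy_risk = membership_low_quality(do_samples).mean()       # expected fuzzy membership
print(f"fuzzy risk of low water quality ~ {fuzzy_risk:.3f}")
```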

Relevance:

20.00%

Publisher:

Abstract:

The optimization of a photovoltaic pumping system based on an induction-motor-driven pump powered by a solar array is presented in this paper. The motor-pump subsystem is analyzed from the point of view of optimizing the power requirement of the induction motor, which leads to an optimum voltage-frequency (U-f) relationship useful in controlling the motor. The complete pumping system is implemented using a dc-dc converter, a three-phase inverter, and an induction motor-pump set. The dc-dc converter is used as a power conditioner, and its duty cycle is controlled so as to match the load to the array. A microprocessor-based controller is used to carry out the load matching.
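
The load-matching role of the dc-dc converter can be pictured as a perturb-and-observe loop on the duty cycle, nudging it in whichever direction increases the measured array power. The sketch below is a hypothetical illustration of that idea, not the paper's microprocessor code; the sensor and PWM interfaces are stand-ins.

```python
def load_matching_loop(read_array_voltage, read_array_current, set_duty_cycle,
                       d0=0.5, step=0.01, iterations=1000):
    """Perturb-and-observe adjustment of the converter duty cycle toward maximum array power."""
    d = d0
    prev_power = read_array_voltage() * read_array_current()
    direction = +1
    for _ in range(iterations):
        d = min(max(d + direction * step, 0.05), 0.95)   # keep the duty cycle in a safe range
        set_duty_cycle(d)
        power = read_array_voltage() * read_array_current()
        if power < prev_power:        # last perturbation reduced power: reverse direction
            direction = -direction
        prev_power = power
    return d
```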

Relevance:

20.00%

Publisher:

Abstract:

The ion energy distribution of an inductively coupled plasma ion source for focused ion beam applications is measured using a four-grid retarding field energy analyzer. Without any Faraday shield, the ion energy spread is found to be 50 eV or more. Moreover, the ion energy distribution is found to have double peaks, showing that the power coupling to the plasma is not purely inductive, but that a strong parasitic capacitive coupling is also present. By optimizing the various source parameters and the Faraday shield, an ion energy distribution with a single peak, well separated from zero energy and with an ion energy spread of 4 eV, is achieved. A novel plasma chamber, with a proper Faraday shield, is designed to ignite the plasma at low RF powers which would otherwise require 300-400 W of RF power. The optimization of the various parameters of the ion source to achieve ions with very low energy spread, together with the experimental results, is presented in this article. (C) 2010 Elsevier Ltd. All rights reserved.
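
For context on how such measurements are usually reduced, the sketch below derives an ion energy distribution from a retarding field analyzer I-V trace as the negative derivative of collector current with respect to the retarding grid voltage, and reads the energy spread off as the full width at half maximum (FWHM). The synthetic trace and beam parameters are illustrative, not the article's data.

```python
import numpy as np

def ion_energy_distribution(v_retard, i_collector):
    """Return (energy_eV, IED) with IED ~ -dI/dV, assuming singly charged ions."""
    ied = -np.gradient(i_collector, v_retard)
    return v_retard, np.clip(ied, 0.0, None)

def fwhm(x, y):
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    return x[above[-1]] - x[above[0]]

# Example with synthetic data: a beam centred near 40 eV with a few eV of spread.
v = np.linspace(0.0, 80.0, 400)
i = 1e-6 * 0.5 * (1.0 - np.tanh((v - 40.0) / 2.4))   # smoothed retarding I-V curve
e, ied = ion_energy_distribution(v, i)
print("FWHM energy spread ~ {:.1f} eV".format(fwhm(e, ied)))
```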

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a hybrid solar cooking system in which solar energy is transported to the kitchen. The thermal energy source is used to supplement the Liquefied Petroleum Gas (LPG) that is in common use in kitchens. Solar energy is transferred to the kitchen by means of a circulating fluid. The energy collected from the sun is maximized by changing the flow rate dynamically. This paper proposes a concept of maximum power point tracking (MPPT) for the solar thermal collector. The diameter of the pipe is selected to optimize the overall energy transfer. The design and sizing of the different components of the system are explained. The concept of MPPT is validated with simulation and experimental results. (C) 2010 Elsevier Ltd. All rights reserved.
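
A plausible reading of the thermal MPPT idea is a perturb-and-observe adjustment of the circulating-fluid flow rate, using the collected power P = m_dot * c_p * (T_out - T_in) as the tracked quantity. The sketch below illustrates one such step; the fluid properties, sensor readings and actuator interface are hypothetical placeholders rather than the paper's implementation.

```python
CP_FLUID = 2000.0     # specific heat of the circulating fluid [J/(kg K)], illustrative

def collected_power(m_dot, t_out, t_in):
    """Useful thermal power carried by the fluid [W]."""
    return m_dot * CP_FLUID * (t_out - t_in)

def mppt_step(m_dot, step, prev_power, read_temps, set_flow_rate):
    """One perturb-and-observe step on the flow rate; returns (new m_dot, step, power)."""
    t_out, t_in = read_temps()
    power = collected_power(m_dot, t_out, t_in)
    if power < prev_power:        # last change hurt: reverse the perturbation direction
        step = -step
    m_dot = max(m_dot + step, 0.0)
    set_flow_rate(m_dot)
    return m_dot, step, power
```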

Relevance:

20.00%

Publisher:

Abstract:

Based on a method presented in detail in a previous work by the authors, similar solutions have been obtained for the steady inviscid quasi‐one‐dimensional nonreacting flow in the supersonic nozzle of a CO2–N2 gasdynamic laser system, with either H2O or He as the catalyst. It has been demonstrated how these solutions could be used to optimize the small‐signal gain coefficient on a specified vibrational‐rotational transition. Results presented for a wide range of mixture compositions include optimum values for the small‐signal gain, area ratio, reservoir temperature, and a binary scaling parameter, which is the product of reservoir pressure and nozzle shape factor.

Relevance:

20.00%

Publisher:

Abstract:

Based on a method proposed by Reddy and Daum, the equations governing the steady inviscid nonreacting gasdynamic laser (GDL) flow in a supersonic nozzle are reduced to a universal form so that the solutions depend on a single parameter which combines all the other parameters of the problem. Solutions are obtained for a sample case of available data and compared with existing results to validate the present approach. Also, similar solutions for a sample case are presented.

Relevance:

20.00%

Publisher:

Abstract:

It is well known that in the time-domain acquisition of NMR data, the signal-to-noise ratio (S/N) improves as the square root of the number of transients accumulated. However, the amplitude of the measured signal varies during the time of detection, having a functional form that depends on the coherence detected. Matching the time spent signal averaging to the expected amplitude of the observed signal should therefore also improve the detected signal-to-noise ratio. Following this reasoning, Barna et al. (J. Magn. Reson. 75, 384, 1987) demonstrated the utility of exponential sampling in one- and two-dimensional NMR, using maximum-entropy methods to analyze the data. It is proposed here that for two-dimensional experiments the exponential sampling be replaced by exponential averaging. The data thus collected can be analyzed by standard fast-Fourier-transform routines. We demonstrate the utility of exponential averaging in 2D NOESY spectra of the protein ubiquitin, in which an enhanced S/N is observed. It is also shown that the method acquires delayed double-quantum-filtered COSY spectra without phase distortion.
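
To illustrate what exponential averaging means in practice, the sketch below allocates the number of transients per t1 increment in proportion to an exponentially decaying expected signal, normalizes the accumulated data by the per-increment scan count, and passes the result to an ordinary FFT. The decay constant, scan budget and simulated signal are illustrative assumptions, not parameters from the ubiquitin experiments.

```python
import numpy as np

n_t1, max_scans, decay = 256, 64, 60.0          # t1 increments, scans at t1 = 0, decay constant (dwell units)
t1 = np.arange(n_t1)
scans = np.maximum(1, np.round(max_scans * np.exp(-t1 / decay)).astype(int))   # exponential scan schedule

# Simulate accumulation: the signal decays with t1, and noise adds per transient.
rng = np.random.default_rng(1)
true_signal = np.exp(-t1 / decay) * np.cos(2 * np.pi * 0.1 * t1)
accumulated = np.array([
    n * s + rng.normal(0.0, 1.0, size=n).sum()   # n transients: n*signal + summed noise
    for n, s in zip(scans, true_signal)
])
averaged = accumulated / scans                   # normalize so the FFT sees an unweighted interferogram
spectrum = np.fft.rfft(averaged)                 # standard FFT processing follows
```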

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an optimization algorithm for an ammonia reactor based on a regression model relating the yield to several parameters, control inputs and disturbances. The model is derived from data generated by hybrid simulation of the steady-state equations describing the reactor behaviour. The simplicity of the optimization program, along with its ability to take into account constraints on flow variables, makes it well suited to supervisory control applications.
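
As a hypothetical illustration of this kind of supervisory optimization, the sketch below maximizes a fitted quadratic regression model of yield over two control inputs, subject to actuator bounds and a total-flow constraint. The model coefficients, variable names and limits are placeholders, not the paper's identified reactor model.

```python
import numpy as np
from scipy.optimize import minimize

# yield ~ b0 + b1*u1 + b2*u2 + b11*u1^2 + b22*u2^2 + b12*u1*u2  (fitted offline)
b = dict(b0=10.0, b1=1.2, b2=0.8, b11=-0.05, b22=-0.04, b12=0.01)

def neg_yield(u):
    u1, u2 = u
    y = (b["b0"] + b["b1"] * u1 + b["b2"] * u2
         + b["b11"] * u1**2 + b["b22"] * u2**2 + b["b12"] * u1 * u2)
    return -y                                   # minimize the negative to maximize yield

res = minimize(
    neg_yield,
    x0=[5.0, 5.0],
    method="SLSQP",
    bounds=[(0.0, 20.0), (0.0, 20.0)],          # actuator limits on the control inputs
    constraints=[{"type": "ineq", "fun": lambda u: 25.0 - (u[0] + u[1])}],  # total flow limit
)
print(res.x, -res.fun)                          # optimal inputs and the predicted yield
```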

Relevance:

20.00%

Publisher:

Abstract:

Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and against data that can practically be obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
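
A toy version of the continuous "state function" idea is sketched below: each residue carries a value in [0, 1] interpolating between polar and hydrophobic, a smooth energy over a fixed contact list rewards H-H contacts while penalizing total hydrophobicity, and projected gradient descent followed by rounding yields a discrete H/P sequence. The contact list, energy form and weights are assumptions chosen for illustration, not the authors' criterion.

```python
import numpy as np

contacts = [(0, 5), (1, 4), (2, 7)]             # non-bonded contact pairs of a toy 8-mer conformation
n, lam, lr = 8, 0.3, 0.1                        # chain length, hydrophobicity penalty weight, step size

s = np.full(n, 0.5)                             # start each residue undecided between H (1) and P (0)
for _ in range(500):
    grad = np.full(n, lam)                      # d/ds_i of the penalty  lam * sum(s)
    for i, j in contacts:                       # d/ds of  -sum s_i*s_j  over the contact list
        grad[i] -= s[j]
        grad[j] -= s[i]
    s = np.clip(s - lr * grad, 0.0, 1.0)        # projected gradient step, keep s in [0, 1]

sequence = "".join("H" if si > 0.5 else "P" for si in s)
print(sequence)                                  # contacting residues become H, isolated ones P
```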

Relevance:

20.00%

Publisher:

Abstract:

Higher-order LCL filters are essential in meeting the interconnection standard requirements for grid-connected voltage source converters. LCL filters offer better harmonic attenuation and better efficiency at a smaller size when compared to traditional L filters. The focus of this paper is to analyze the LCL filter design procedure from the point of view of power loss and efficiency. The IEEE 1547-2008 specifications for high-frequency current ripple are used as a major constraint early in the design to ensure that all subsequent optimizations remain compliant with the standard. Power loss in each individual filter component is calculated on a per-phase basis. The total per-unit inductance of the LCL filter is varied, and the LCL parameter values which give the highest efficiency while simultaneously meeting the stringent standard requirements are identified. The power loss and harmonic output spectrum of the grid-connected LCL filter are experimentally verified, and measurements confirm the predicted trends.
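
The sort of sweep described above can be pictured as follows: for a fixed filter capacitance, candidate totals of per-phase inductance and their split between the converter-side and grid-side inductors are screened against a switching-frequency ripple limit and a resonance placement rule, and the smallest compliant total inductance is used as a proxy for the lowest-loss design. The sketch below is illustrative only; the numerical limits, the loss proxy and the resonance rule are assumptions, not the paper's loss model or the standard's exact limits.

```python
import numpy as np

F_GRID, F_SW = 50.0, 10e3            # grid and switching frequencies [Hz], illustrative
C = 10e-6                            # filter capacitance per phase [F], illustrative
RIPPLE_LIMIT = 0.003                 # allowed grid current per volt of switching-frequency ripple [A/V]

def grid_current_per_volt(L1, L2, C, f):
    """|ig/vi| of the LCL network at frequency f, with the grid treated as a short for harmonics."""
    w = 2 * np.pi * f
    return 1.0 / abs(w * (L1 + L2) - w**3 * L1 * L2 * C)

def resonance_hz(L1, L2, C):
    return np.sqrt((L1 + L2) / (L1 * L2 * C)) / (2 * np.pi)

best = None
for L_total in np.linspace(0.5e-3, 5e-3, 46):            # candidate total inductances [H]
    for split in np.linspace(0.2, 0.8, 13):               # fraction placed on the converter side
        L1, L2 = split * L_total, (1 - split) * L_total
        f_res = resonance_hz(L1, L2, C)
        ok = (grid_current_per_volt(L1, L2, C, F_SW) <= RIPPLE_LIMIT
              and 10 * F_GRID < f_res < F_SW / 2)          # common resonance placement rule
        if ok and (best is None or L_total < best[0]):
            best = (L_total, split, f_res)

print(best)   # smallest total inductance meeting the ripple and resonance screens
```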