299 results for Parameter Optimization
Abstract:
The overall performance of random early detection (RED) routers in the Internet is determined by the settings of their associated parameters. Because no functional relationship between RED performance and its parameters is available, optimization techniques cannot be applied directly to tune the RED parameters. In this paper, we formulate a generic optimization framework using a stochastically bounded delay metric to dynamically adapt the RED parameters. The constrained optimization problem thus formulated is solved using traditional nonlinear programming techniques; here, we implement the barrier and penalty function approaches. We adopt a second-order nonlinear optimization framework and propose a novel four-timescale stochastic approximation algorithm to estimate the gradient and Hessian of the barrier and penalty objectives and to update the RED parameters. A convergence analysis of the proposed algorithm is briefly sketched. We perform simulations to evaluate the performance of our algorithm with both barrier and penalty objectives and compare these with RED and a variant of it from the literature. We observe improved performance with our proposed algorithm over both RED and this variant.
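As a rough illustration of the penalty-function idea this abstract refers to, the sketch below tunes hypothetical RED parameters (min_th, max_th, max_p) against a toy throughput objective under a delay bound. The surrogate models, the bound value, and the use of SciPy's Nelder-Mead solver are all assumptions made for illustration; the paper's four-timescale stochastic approximation scheme is not reproduced here.

import numpy as np
from scipy.optimize import minimize

# Hypothetical performance model: throughput(theta) and delay(theta) as functions of
# RED parameters theta = (min_th, max_th, max_p). In the paper these are available
# only through noisy measurements; here they are stand-in analytic surrogates.
def throughput(theta):
    min_th, max_th, max_p = theta
    return -((min_th - 30.0) ** 2 + (max_th - 90.0) ** 2) * 1e-3 - max_p

def mean_delay(theta):
    min_th, max_th, max_p = theta
    return 0.5 * max_th * (1.0 - max_p)        # toy surrogate delay model

DELAY_BOUND = 40.0                              # delay constraint: mean_delay(theta) <= DELAY_BOUND

def penalty_objective(theta, mu):
    """Penalty objective: maximize throughput subject to the delay bound by
    minimizing -throughput plus a quadratic penalty on the constraint violation."""
    violation = max(0.0, mean_delay(theta) - DELAY_BOUND)
    return -throughput(theta) + mu * violation ** 2

theta = np.array([20.0, 80.0, 0.1])
for mu in [1.0, 10.0, 100.0]:                   # increase the penalty weight gradually
    res = minimize(penalty_objective, theta, args=(mu,), method="Nelder-Mead")
    theta = res.x
print("tuned RED parameters:", theta)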
Abstract:
The present work concerns the static scheduling of jobs on parallel identical batch processors with incompatible job families so as to minimize the total weighted tardiness. This scheduling problem arises in burn-in operations and wafer fabrication in semiconductor manufacturing. We decompose the problem into two stages, batch formation and batch scheduling, as in the literature. An Ant Colony Optimization (ACO) based algorithm, called the ATC-BACO algorithm, is developed, in which ACO is used to solve the batch scheduling problem. Our computational experiments show that the proposed ATC-BACO algorithm performs better than the best available traditional dispatching rule, the ATC-BATC rule.
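For context, the sketch below shows the standard Apparent Tardiness Cost (ATC) priority index that dispatching rules of this family are built on; the job data and the look-ahead parameter k are hypothetical, and the batching and ant-colony components of ATC-BATC/ATC-BACO are not reproduced.

import math

def atc_index(t, w, p, d, k, p_bar):
    """Apparent Tardiness Cost (ATC) priority index for a job at time t:
    I_j(t) = (w_j / p_j) * exp(-max(d_j - p_j - t, 0) / (k * p_bar)),
    where w_j is the weight, p_j the processing time, d_j the due date,
    k a look-ahead parameter and p_bar the average processing time."""
    slack = max(d - p - t, 0.0)
    return (w / p) * math.exp(-slack / (k * p_bar))

# Toy example: pick the highest-priority job among three waiting jobs at t = 0.
jobs = [            # (weight, processing time, due date)
    (2.0, 4.0, 10.0),
    (1.0, 2.0, 6.0),
    (3.0, 5.0, 20.0),
]
p_bar = sum(p for _, p, _ in jobs) / len(jobs)
best = max(jobs, key=lambda j: atc_index(0.0, *j, k=2.0, p_bar=p_bar))
print("next job to schedule:", best)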
Abstract:
The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold into a 3-D conformation that minimizes the combined inter-residue interaction energy. There are two distinct protein design problems: predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems have much in common with engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called the native state, which has a unique global minimum energy value compared to all other unfolded conformations. This involves a search in the conformation space. It is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) while obtaining a sequence that would fold to the desired conformation and satisfy the criteria of folding. This requires a search in the sequence space. It is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues, and present some initial results.
Abstract:
There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, messages, goods, etc.), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).

The optimization problem involves selecting inputs at nodes so as to optimize an objective function; the objective may be a cost function of the inputs to be minimized, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables, and stage two the multipliers. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first one. It modifies the equality constraints so that faster convergence of the iterations is obtained. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve such optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define algorithms that are fast and use minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
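A minimal sketch of the two-stage Lagrange-multiplier iteration described above, assuming a toy quadratic objective and a single linear flow-balance constraint in place of the actual network equations: stage one solves the stationarity conditions for the flow variables at fixed multipliers, and stage two updates the multipliers from the constraint residual.

import numpy as np

# Toy equality-constrained problem standing in for the network model:
#   minimize   0.5 * x^T Q x + c^T x      (e.g. a quadratic loss/cost function)
#   subject to A x = b                    (e.g. a linearized flow-balance equation)
Q = np.diag([2.0, 1.0, 3.0])
c = np.array([-1.0, 0.0, 2.0])
A = np.array([[1.0, 1.0, 1.0]])            # one flow-balance constraint
b = np.array([10.0])

x = np.zeros(3)
lam = np.zeros(1)
for _ in range(50):
    # Stage one: solve the stationarity conditions for x at fixed multipliers,
    # using the Jacobian of the necessary conditions (here simply Q).
    x = np.linalg.solve(Q, -(c + A.T @ lam))
    # Stage two: update the multipliers from the constraint residual; the same
    # Jacobian information reappears in this step, as the abstract notes.
    lam = lam + 0.5 * (A @ x - b)

print("flows:", x, "multiplier:", lam)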
Abstract:
In this paper, we present a generic method/model for the multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: a failure-mechanism-based criterion, the maximum stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, including Particle Swarm Optimization (PSO), Artificial Immune System (AIS) and Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
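As an illustration of one of the failure checks named above, the sketch below evaluates the plane-stress Tsai-Wu failure index for a single lamina; the strength values and stress state are hypothetical, and the lamination-theory stress analysis and the VEABC search are not reproduced.

import math

def tsai_wu_index(s1, s2, t12, Xt, Xc, Yt, Yc, S):
    """Plane-stress Tsai-Wu failure index for a single lamina.
    s1, s2, t12: stresses in the material axes; Xt/Xc, Yt/Yc: tensile/compressive
    strengths along and across the fibres; S: in-plane shear strength.
    Failure is predicted when the index reaches 1."""
    F1, F2 = 1 / Xt - 1 / Xc, 1 / Yt - 1 / Yc
    F11, F22, F66 = 1 / (Xt * Xc), 1 / (Yt * Yc), 1 / S ** 2
    F12 = -0.5 * math.sqrt(F11 * F22)           # common estimate for the interaction term
    return (F1 * s1 + F2 * s2 + F11 * s1 ** 2 + F22 * s2 ** 2
            + F66 * t12 ** 2 + 2 * F12 * s1 * s2)

# Hypothetical carbon/epoxy strengths (MPa) and a candidate ply stress state.
idx = tsai_wu_index(s1=800.0, s2=20.0, t12=30.0,
                    Xt=1500.0, Xc=1200.0, Yt=50.0, Yc=200.0, S=70.0)
print("Tsai-Wu index:", idx, "-> safe" if idx < 1.0 else "-> failed")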
Abstract:
The apparent contradiction between the exact nature of the interaction parameter formalism as presented by Lupis and Elliott and the inconsistencies discussed recently by Pelton and Bale arises from the truncation of the Maclaurin series in the latter treatment. The truncation removes the exactness of the expression for the logarithm of the activity coefficient of a solute in a multi-component system; the integrals are therefore path dependent. Formulae for integration along paths of constant X_i or constant X_i/X_j are presented. The expression for ln γ_solvent given by Pelton and Bale is valid only in the limit that the mole fraction of the solvent tends to one. The truncation also destroys the general relations between interaction parameters derived by Lupis and Elliott; for each specific choice of parameters, special relationships are obtained between the interaction parameters.
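For reference, a sketch of the first-order (Wagner-type) truncated expansion at issue, written in LaTeX; the notation is generic rather than taken from either of the papers cited above.

% Truncated interaction-parameter expansion for a dilute solute i in a
% multicomponent system; epsilon_i^j are the first-order interaction parameters
% and X_j the solute mole fractions.
\[
  \ln \gamma_i \;\approx\; \ln \gamma_i^{\circ} \;+\; \sum_{j} \varepsilon_i^{\,j} X_j ,
  \qquad
  \varepsilon_i^{\,j} \;=\; \left( \frac{\partial \ln \gamma_i}{\partial X_j} \right)_{X \to 0}.
\]
% Because the Maclaurin series is cut off after the first-order terms, d(ln gamma_i)
% is no longer an exact differential, so Gibbs-Duhem integrations become path
% dependent, e.g. along paths of constant X_i or constant X_i/X_j.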
Abstract:
The optimization of a photovoltaic pumping system based on an induction motor driven pump that is powered by a solar array is presented in this paper. The motor-pump subsystem is analyzed from the point of view of optimizing the power requirement of the induction motor, leading to an optimum u-f relationship useful in controlling the motor. The complete pumping system is implemented using a dc-dc converter, a three-phase inverter, and an induction motor-pump set. The dc-dc converter is used as a power conditioner and its duty cycle is controlled so as to match the load to the array. A microprocessor-based controller is used to carry out the load matching.
Abstract:
The ion energy distribution of an inductively coupled plasma ion source for focused ion beam applications is measured using a four-grid retarding-field energy analyzer. Without any Faraday shield, the ion energy spread is found to be 50 eV or more. Moreover, the ion energy distribution is found to have double peaks, showing that the power coupling to the plasma is not purely inductive but includes a strong parasitic capacitive component. By optimizing the various source parameters and the Faraday shield, an ion energy distribution having a single peak, well separated from zero energy and with an ion energy spread of 4 eV, is achieved. A novel plasma chamber, with a proper Faraday shield, is designed to ignite the plasma at low RF powers that would otherwise require 300-400 W of RF power. The optimization of the various parameters of the ion source to achieve ions with very low energy spread, and the experimental results, are presented in this article. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
This paper proposes a hybrid solar cooking system in which solar energy is transported to the kitchen. The thermal energy source is used to supplement the Liquefied Petroleum Gas (LPG) that is in common use in kitchens. Solar energy is transferred to the kitchen by means of a circulating fluid. The energy collected from the sun is maximized by changing the flow rate dynamically. This paper proposes a concept of maximum power point tracking (MPPT) for the solar thermal collector. The diameter of the pipe is selected to optimize the overall energy transfer. The design and sizing of the different components of the system are explained. The concept of MPPT is validated with simulation and experimental results. (C) 2010 Elsevier Ltd. All rights reserved.
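A minimal sketch of a perturb-and-observe tracking loop of the kind MPPT schemes commonly use, recast here for the flow-rate adjustment described above; the collector power model, step size, and starting point are hypothetical stand-ins for the actual thermal system.

def collected_power(flow_rate):
    """Hypothetical stand-in for the collector: thermal power delivered to the
    kitchen as a function of the circulating-fluid flow rate (peaks at an optimum)."""
    return max(0.0, 1200.0 - 40.0 * (flow_rate - 3.0) ** 2)

def track_maximum_power(flow_rate=1.0, step=0.2, iterations=30):
    """Perturb-and-observe loop: nudge the flow rate, keep the direction that
    increases the collected power, reverse it otherwise."""
    power = collected_power(flow_rate)
    direction = +1.0
    for _ in range(iterations):
        candidate = flow_rate + direction * step
        new_power = collected_power(candidate)
        if new_power >= power:
            flow_rate, power = candidate, new_power   # keep moving the same way
        else:
            direction = -direction                    # overshot the peak, reverse
    return flow_rate, power

print(track_maximum_power())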
Abstract:
It is well known that in the time-domain acquisition of NMR data, the signal-to-noise ratio (S/N) improves as the square root of the number of transients accumulated. However, the amplitude of the measured signal varies during the time of detection, with a functional form that depends on the coherence detected. Matching the time spent signal averaging to the expected amplitude of the observed signal should therefore also improve the detected S/N. Following this reasoning, Barna et al. (J. Magn. Reson. 75, 384, 1987) demonstrated the utility of exponential sampling in one- and two-dimensional NMR, using maximum-entropy methods to analyze the data. It is proposed here that for two-dimensional experiments the exponential sampling be replaced by exponential averaging. The data thus collected can be analyzed by standard fast-Fourier-transform routines. We demonstrate the utility of exponential averaging in 2D NOESY spectra of the protein ubiquitin, in which an enhanced S/N is observed. It is also shown that the method allows delayed double-quantum-filtered COSY spectra to be acquired without phase distortion.
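A minimal sketch of the exponential-averaging idea: the transient budget is distributed over the t1 increments in proportion to an assumed exp(-t1/T2*) signal envelope, so more averaging time is spent where the signal is large. The increment count, total number of scans, and T2* value are hypothetical.

import numpy as np

def exponential_scan_schedule(n_increments, total_scans, t1_max, t2_star):
    """Distribute a fixed budget of transients over the t1 increments in
    proportion to the expected signal envelope exp(-t1/T2*), in contrast to
    exponential sampling of the t1 points themselves."""
    t1 = np.linspace(0.0, t1_max, n_increments)
    envelope = np.exp(-t1 / t2_star)
    scans = np.maximum(1, np.round(total_scans * envelope / envelope.sum()))
    return t1, scans.astype(int)

# Hypothetical acquisition: 128 t1 increments, 4096 transients in total, T2* = 30 ms.
t1, scans = exponential_scan_schedule(n_increments=128, total_scans=4096,
                                      t1_max=0.060, t2_star=0.030)
print("scans for the first five increments:", scans[:5])
# The accumulated data can be normalized by the per-increment scan count and then
# processed with a standard FFT, as the abstract notes.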
Abstract:
Novel one- and two-dimensional NMR techniques are proposed and used to determine the signs of the order parameters employed in studying the mobility of fatty acid chains. The experiments designed to extract this information make use of the intensities of the sidebands in the spectra of oriented systems spinning at the magic angle. Advantages of the two-dimensional technique over the one-dimensional method are discussed. The utility of the method for studying the dynamic properties of membranes and model systems is pointed out.
Abstract:
This paper presents an optimization algorithm for an ammonia reactor based on a regression model relating the yield to several parameters, control inputs and disturbances. This model is derived from data generated by hybrid simulation of the steady-state equations describing the reactor behaviour. The simplicity of the optimization program, along with its ability to take into account constraints on flow variables, makes it well suited to supervisory control applications.
Abstract:
Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space with a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking folding pathways and in creating energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and against data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
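For concreteness, a minimal sketch of the standard HP lattice contact energy with which such self-avoiding compact lattice models score a sequence on a given conformation; the example chain is hypothetical, and the continuous state-function formulation and graph spectral method of the paper are not reproduced here.

def hp_contact_energy(sequence, coords, eps_hh=-1.0):
    """Energy of an HP lattice-model conformation: each non-bonded pair of
    hydrophobic (H) residues occupying adjacent lattice sites contributes eps_hh.
    sequence: string of 'H'/'P'; coords: list of (x, y) lattice positions."""
    energy = 0.0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):           # skip chain neighbours
            if sequence[i] == 'H' and sequence[j] == 'H':
                dx = abs(coords[i][0] - coords[j][0])
                dy = abs(coords[i][1] - coords[j][1])
                if dx + dy == 1:                        # adjacent lattice sites
                    energy += eps_hh
    return energy

# Toy example: a 4-residue chain folded into a square, giving one H-H contact.
print(hp_contact_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))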