874 results for Objective function values
Abstract:
The appealing concept of optimal harvesting is often used in fisheries to derive new management strategies. However, optimality depends on the objective function, which often varies, reflecting the interests of different groups of people. The aim of maximum sustainable yield is to extract the greatest amount of food from replenishable resources in a sustainable way. Maximum sustainable yield may not be desirable from an economic point of view. Maximum economic yield maximizes the profit of fishing fleets (the harvesting sector) but ignores socio-economic benefits such as employment and other positive externalities. It may be more appropriate to use a maximum economic yield based on the value chain of the overall fishing sector, to better reflect society's interests. How to make more efficient use of a fishery for society rather than for fishing operators depends critically on the gain function parameters, including multiplier effects and the inclusion or exclusion of certain costs. In particular, the optimal effort level based on the overall value chain moves closer to the optimal effort for maximum sustainable yield because of the multiplier effect. These issues are illustrated using the Australian Northern Prawn Fishery.
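As a rough illustration of how these optima interact, here is a hedged sketch of my own using a textbook Gordon-Schaefer surplus-production model with hypothetical parameter values (not the Northern Prawn Fishery model): a value-chain multiplier on gross revenue pushes the optimal effort from the fleet-profit optimum toward the MSY effort.

```python
# Hypothetical Gordon-Schaefer illustration (not from the paper): a value-chain
# multiplier moves the profit-maximizing effort from E_MEY toward E_MSY.
r, K, q = 0.8, 1000.0, 0.001   # intrinsic growth, carrying capacity, catchability
p, c = 10.0, 4.0               # price per unit catch, cost per unit effort

def sustainable_yield(E):
    """Equilibrium yield Y(E) = qE * K * (1 - qE/r)."""
    return q * E * K * (1.0 - q * E / r)

def optimal_effort(multiplier=1.0):
    """Effort maximizing multiplier*p*Y(E) - c*E (closed form for this model)."""
    return (r / (2.0 * q)) * (1.0 - c / (multiplier * p * q * K))

E_msy = r / (2.0 * q)          # effort giving maximum sustainable yield
E_mey = optimal_effort(1.0)    # fleet-profit (harvesting sector) optimum
E_vc  = optimal_effort(2.5)    # value-chain optimum with a hypothetical multiplier

print(f"E_MSY = {E_msy:.0f}, E_MEY = {E_mey:.0f}, value-chain optimum = {E_vc:.0f}")
```

With these made-up numbers the value-chain optimum (336) sits between the fleet-profit optimum (240) and the MSY effort (400), which is the qualitative effect the abstract describes.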
Abstract:
A method is presented for the identification of parameters in unconfined aquifers from pumping tests, based on minimising a least-squares objective function. Four parameters are evaluated, namely: the hydraulic conductivity in the radial and vertical directions, the storage coefficient, and the specific yield. The sensitivity analysis technique is used to solve the optimisation problem. Besides eliminating the subjectivity involved in the graphical procedure, the method takes into account the field data at all time intervals, without classifying them into small and large time intervals, and does not use the approximation that the ratio of the storage coefficient to the specific yield tends to zero. Two illustrative examples are presented, and it is found that the parameter estimates from the computational and graphical procedures differ appreciably.
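A minimal sketch of this style of least-squares identification follows. It is my illustration, not the paper's method: `drawdown_model` is a hypothetical placeholder, since the actual unconfined-aquifer well function depends on (Kr, Kz, S, Sy) through a far longer analytical expression.

```python
# Sketch: fit four aquifer parameters to pumping-test data by least squares.
import numpy as np
from scipy.optimize import least_squares

def drawdown_model(params, t):
    kr, kz, s, sy = params  # radial/vertical conductivity, storage, specific yield
    # Hypothetical smooth response blending early (S-controlled) and late
    # (Sy-controlled) behaviour -- a placeholder, not the real well function.
    return np.log1p(kr * t) / (s + sy) + kz * np.sqrt(t)

def residuals(params, t, observed):
    return drawdown_model(params, t) - observed

t = np.linspace(0.1, 100.0, 50)                   # observation times
true = np.array([0.5, 0.05, 0.001, 0.1])
observed = drawdown_model(true, t) + 0.01 * np.random.default_rng(0).normal(size=t.size)

fit = least_squares(residuals, x0=[1.0, 0.1, 0.01, 0.2], args=(t, observed),
                    bounds=(1e-6, np.inf))        # keep parameters physical
print("estimated parameters:", fit.x)
```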
Abstract:
An adaptive learning scheme, based on a fuzzy approximation to the gradient descent method for training a pattern classifier using unlabeled samples, is described. The objective function defined for the fuzzy ISODATA clustering procedure is used as the loss function for computing the gradient. Learning is based on simultaneous fuzzy decision-making and estimation. It uses conditional fuzzy measures on unlabeled samples. An exponential membership function is assumed for each class, and the parameters constituting these membership functions are estimated, using the gradient, in a recursive fashion. The induced possibility of occurrence of each class is useful for estimation and is computed using 1) the membership of the new sample in that class and 2) the previously computed average possibility of occurrence of the same class. An inductive entropy measure is defined in terms of the induced possibility distribution to measure the extent of learning. The method is illustrated with relevant examples.
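The core of such a scheme can be illustrated compactly. This is a simplified sketch of mine: batch gradient descent on the fuzzy ISODATA objective with exponential membership functions, rather than the paper's sample-by-sample recursion with possibility estimates.

```python
# Sketch: gradient descent on the fuzzy ISODATA objective
#   J(V) = sum_k sum_i u_ik^m ||x_k - v_i||^2
# with exponential memberships (simplified illustration, not the paper's recursion).
import numpy as np

def memberships(X, V, beta=1.0):
    # Exponential membership of each sample in each class, row-normalized.
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    u = np.exp(-beta * d2)
    return u / u.sum(axis=1, keepdims=True)

def gradient_step(X, V, lr=0.5, m=2.0):
    U = memberships(X, V) ** m                     # fuzzified memberships u_ik^m
    # dJ/dv_i = -2 * sum_k u_ik^m (x_k - v_i), holding U fixed for this step
    grad = -2.0 * np.einsum('ki,kd->id', U, X) + 2.0 * U.sum(0)[:, None] * V
    return V - lr * grad / len(X)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
V = rng.normal(1, 0.5, (2, 2))                     # two class prototypes
for _ in range(200):
    V = gradient_step(X, V)
print("learned class prototypes:\n", V.round(2))
```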
Abstract:
Systems-level modelling and simulation of biological processes are proving invaluable in obtaining a quantitative and dynamic perspective on various aspects of cellular function. In particular, constraint-based analyses of metabolic networks have gained considerable popularity for simulating cellular metabolism, of which flux balance analysis (FBA) is the most widely used. Unlike mechanistic simulations that depend on accurate kinetic data, which are scarcely available, FBA is based on the principle of conservation of mass in a network; it uses the stoichiometric matrix and a biologically relevant objective function to identify optimal reaction flux distributions. FBA has been used to analyse genome-scale reconstructions of several organisms; it has also been used to analyse the effects of perturbations, such as gene deletions or drug inhibition, in silico. This article reviews the usefulness of FBA as a tool for gaining biological insights, advances in methodology enabling the integration of regulatory information and thermodynamic constraints, and the challenges that lie ahead. Various use scenarios and biological insights obtained from FBA, and applications in fields such as metabolic engineering and drug target identification, are also discussed. Genome-scale constraint-based models have immense potential for building and testing hypotheses, as well as for guiding experimentation.
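At its core, FBA is a linear program: maximise c^T v subject to the steady-state constraint S v = 0 and flux bounds. A minimal sketch on a toy three-reaction network of my own (not a genome-scale model):

```python
# Sketch of flux balance analysis: maximize a biomass-like objective c^T v
# subject to steady state S v = 0 and flux bounds. Toy network: uptake -> A,
# A -> B, B -> biomass.
import numpy as np
from scipy.optimize import linprog

S = np.array([[ 1, -1,  0],    # metabolite A balance
              [ 0,  1, -1]])   # metabolite B balance
c = np.array([0.0, 0.0, 1.0])  # objective: maximize flux through "biomass"
bounds = [(0, 10), (0, 10), (0, 10)]

res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)  # linprog minimizes
print("optimal fluxes:", res.x, "biomass flux:", -res.fun)
```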
Abstract:
We propose certain discrete parameter variants of well known simulation optimization algorithms. Two of these algorithms are based on the smoothed functional (SF) technique while two others are based on the simultaneous perturbation stochastic approximation (SPSA) method. They differ from each other in the way perturbations are obtained and also the manner in which projections and parameter updates are performed. All our algorithms use two simulations and two-timescale stochastic approximation. As an application setting, we consider the important problem of admission control of packets in communication networks under dependent service times. We consider a discrete time slotted queueing model of the system and consider two different scenarios - one where the service times have a dependence on the system state and the other where they depend on the number of arrivals in a time slot. Under our settings, the simulated objective function appears ill-behaved with multiple local minima and a unique global minimum characterized by a sharp dip in the objective function in a small region of the parameter space. We compare the performance of our algorithms on these settings and observe that the two SF algorithms show the best results overall. In fact, in many cases studied, SF algorithms converge to the global minimum.
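The smoothed functional idea can be sketched roughly as follows. This is my illustration on a toy one-dimensional objective with local minima, not the paper's queueing simulation; the paper's discrete-parameter variants additionally project the iterate onto a discrete set.

```python
# Sketch of a two-simulation smoothed functional (SF) gradient estimate:
#   grad f(theta) ~= eta/(2*beta) * (f(theta + beta*eta) - f(theta - beta*eta))
# with Gaussian perturbation eta, followed by projection onto the feasible set.
import numpy as np

rng = np.random.default_rng(0)

def simulated_cost(theta):
    # Hypothetical noisy objective with local minima and a sharp global dip.
    return (theta - 3.0) ** 2 + 0.5 * np.sin(5.0 * theta) + rng.normal(0.0, 0.1)

def sf_step(theta, beta=0.5, lr=0.02):
    eta = rng.normal()
    ghat = eta / (2.0 * beta) * (simulated_cost(theta + beta * eta)
                                 - simulated_cost(theta - beta * eta))
    return min(max(theta - lr * ghat, 0.0), 10.0)   # projection onto [0, 10]

theta = 8.0
for _ in range(5000):
    theta = sf_step(theta)
print("estimated minimizer:", round(theta, 2))
```

The Gaussian smoothing is what helps such algorithms escape the shallow local minima and settle near the sharp global dip.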
Abstract:
In this paper, we consider robust joint linear precoder/receive filter designs for the multiuser multi-input multi-output (MIMO) downlink that minimize the sum mean square error (SMSE) in the presence of imperfect channel state information at the transmitter (CSIT). The base station (BS) is equipped with multiple transmit antennas, and each user terminal is equipped with one or more receive antennas. We consider a stochastic error (SE) model and a norm-bounded error (NBE) model for the CSIT error. When the CSIT error follows the SE model, we compute the desired downlink precoder/receive filter matrices by exploiting the uplink-downlink duality of the MSE region to solve the simpler uplink problem. When the CSIT error follows the NBE model, we take the worst-case SMSE as the objective function and propose an iterative algorithm for the robust transceiver design. The robustness of the proposed algorithms to imperfections in CSIT is illustrated through simulations.
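The sum-MSE building block such designs iterate on can be sketched directly. This is my illustration with perfect CSIT and a single user; the robust designs in the paper replace H with an imperfect estimate plus an error model.

```python
# Sketch of the sum-MSE building block: given channel H and precoder F, the
# MMSE receive filter and error matrix are
#   W = F^H H^H (H F F^H H^H + sigma^2 I)^(-1),   E = I - W H F,
# and the sum MSE is trace(E).
import numpy as np

rng = np.random.default_rng(0)
nt, nr, ns = 4, 2, 2                        # tx antennas, rx antennas, streams
H = (rng.normal(size=(nr, nt)) + 1j * rng.normal(size=(nr, nt))) / np.sqrt(2)
F = rng.normal(size=(nt, ns)) + 1j * rng.normal(size=(nt, ns))
F /= np.linalg.norm(F)                      # transmit power constraint
sigma2 = 0.1                                # receiver noise variance

G = H @ F
W = G.conj().T @ np.linalg.inv(G @ G.conj().T + sigma2 * np.eye(nr))
E = np.eye(ns) - W @ G                      # MSE matrix
print("sum MSE:", np.trace(E).real)
```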
Abstract:
In this paper, we consider the machining condition optimization models presented in earlier studies. Finding the optimal combination of machining conditions within the constraints is a difficult task; hence, earlier studies used standard optimization methods. The non-linear nature of the objective function and the constraints that must be satisfied make it difficult to apply standard optimization methods. In this paper, we present a real-coded genetic algorithm (RCGA) to find the optimal combination of machining conditions. We discuss various issues related to the RCGA, such as solution representation, crossover operators, and the repair algorithm, in detail. We also present the results obtained for these models using the RCGA and discuss its advantages for these problems. From the results obtained, we conclude that the RCGA is reliable and accurate for solving machining condition optimization models.
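A minimal sketch of the ingredients named above, with a toy objective and bounds of my own standing in for a machining-cost model: blend (BLX-alpha) crossover on real-valued vectors plus a repair step that clips offspring back into the feasible box.

```python
# Sketch of a real-coded GA: BLX-alpha crossover + repair to bounds.
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 5.0])   # bounds on conditions

def cost(x):
    # Hypothetical stand-in for a machining-cost model.
    return (x[0] - 3.1) ** 2 + (x[1] - 1.7) ** 2

def blx_crossover(p1, p2, alpha=0.5):
    span = np.abs(p1 - p2)
    child = rng.uniform(np.minimum(p1, p2) - alpha * span,
                        np.maximum(p1, p2) + alpha * span)
    return np.clip(child, lo, hi)                      # repair to bounds

pop = rng.uniform(lo, hi, size=(30, 2))
for _ in range(100):
    pop = pop[np.argsort([cost(x) for x in pop])]      # rank by fitness
    parents = pop[:10]                                 # truncation selection
    children = [blx_crossover(*parents[rng.choice(10, 2, replace=False)])
                for _ in range(20)]
    pop = np.vstack([parents, children])
print("best machining conditions:", pop[0].round(3), "cost:", round(cost(pop[0]), 4))
```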
Abstract:
This study develops a real options approach for analyzing the optimal risk adoption policy in an environment where adoption means a switch from one stochastic flow representation to another. We establish that increased volatility need not decelerate investment, as predicted by the standard literature on real options, once the underlying volatility of the state is made endogenous. We prove that for a decision maker with a convex (concave) objective function, increased post-adoption volatility increases (decreases) the expected cumulative present value of the post-adoption profit flow, which consequently decreases (increases) the option value of waiting and, therefore, accelerates (decelerates) current investment.
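The convexity mechanism can be checked numerically in a few lines. This is my illustration on a geometric Brownian motion with hypothetical drift and payoffs; the paper works with general stochastic flow representations.

```python
# Monte Carlo check of the Jensen-type mechanism: for a convex payoff of a GBM,
# higher volatility raises the expected payoff; for a concave payoff it lowers it.
import numpy as np

rng = np.random.default_rng(0)

def expected_payoff(sigma, payoff, mu=0.02, T=1.0, n=200_000):
    z = rng.normal(size=n)
    xT = np.exp((mu - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * z)  # GBM, x0 = 1
    return payoff(xT).mean()

convex = lambda x: x ** 2          # convex objective
concave = lambda x: np.sqrt(x)     # concave objective
for sigma in (0.1, 0.3, 0.5):
    print(f"sigma={sigma}: E[convex]={expected_payoff(sigma, convex):.3f}, "
          f"E[concave]={expected_payoff(sigma, concave):.3f}")
```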
Abstract:
Four algorithms, all variants of Simultaneous Perturbation Stochastic Approximation (SPSA), are proposed. The original one-measurement SPSA uses an estimate of the gradient of the objective function L that contains an additional bias term not present in two-measurement SPSA. As a result, the asymptotic covariance matrix of the iterate convergence process has a bias term. We propose a one-measurement algorithm that eliminates this bias and has asymptotic convergence properties that allow easier comparison with two-measurement SPSA. Under certain conditions, the algorithm outperforms both forms of SPSA, with the only overhead being the storage of a single measurement. We also propose a similar algorithm that uses perturbations obtained from normalized Hadamard matrices. The convergence w.p. 1 of both algorithms is established. We extend measurement reuse to design two second-order SPSA algorithms and sketch their convergence analysis. Finally, we present simulation results on an illustrative minimization problem.
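For reference, the two-measurement SPSA baseline these variants are compared against looks like this. It is a minimal sketch on a toy quadratic with the standard gain schedules, not the paper's test problem.

```python
# Sketch of two-measurement SPSA: the gradient of L is estimated from two
# evaluations at theta +/- c_k*Delta with Bernoulli(+-1) perturbations Delta.
import numpy as np

rng = np.random.default_rng(0)

def L(theta):                                        # noisy objective measurement
    return np.sum((theta - 1.0) ** 2) + rng.normal(0, 0.1)

theta = np.zeros(4)
for k in range(1, 2001):
    a_k = 0.1 / k ** 0.602                           # standard SPSA gain schedules
    c_k = 0.1 / k ** 0.101
    delta = rng.choice([-1.0, 1.0], size=theta.size)
    ghat = (L(theta + c_k * delta) - L(theta - c_k * delta)) / (2 * c_k * delta)
    theta -= a_k * ghat
print("theta after SPSA:", np.round(theta, 2))       # should approach all ones
```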
Abstract:
In this paper we address the problem of distributed transmission of functions of correlated sources over a fast fading multiple access channel (MAC). This is a basic building block in a hierarchical sensor network used to estimate a random field, where the cluster head is interested only in estimating a function of the observations. The observations are transmitted to the cluster head through a fast fading MAC. We provide sufficient conditions for lossy transmission when the encoders and decoders are provided with partial information about the channel state. Furthermore, side information may be available at the encoders and the decoder. Various previous studies are shown to be special cases. Efficient joint source-channel coding schemes are discussed for the transmission of discrete and continuous alphabet sources to recover function values.
Abstract:
There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together some important constraints based on the abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.
There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (such as power, water, messages, or goods), an output, or neither. Normally, the network equations describe the flows among nodes through the arcs. These network equations couple variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs. To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables).
The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables x, and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.
A second solution procedure, called the total residue approach, has been embedded into the first one. It changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.
The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve such optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach used was to define algorithms that are fast and use minimal communication.
These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
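The two-stage Lagrange-multiplier iteration can be sketched on a toy network. This is my example with quadratic arc losses, so a single Newton solve of the KKT system suffices; a nonlinear loss would repeat this solve each iteration, reusing the same Jacobian structure for both the variables x and the multipliers lambda.

```python
# Sketch: minimize a quadratic arc-loss function over arc flows subject to
# nodal balance A x = b, via the KKT conditions
#   Q x + A^T lam = 0,   A x = b.
import numpy as np

# 3 nodes, 3 arcs: nodal balance rows (one redundant node row dropped).
A = np.array([[ 1.0,  1.0,  0.0],
              [-1.0,  0.0,  1.0]])
b = np.array([2.0, 0.0])                 # nodal injections
Q = np.diag([1.0, 2.0, 4.0])             # quadratic arc-loss coefficients

kkt = np.block([[Q, A.T],
                [A, np.zeros((2, 2))]])  # shared "Jacobian" of both stages
rhs = np.concatenate([np.zeros(3), b])
sol = np.linalg.solve(kkt, rhs)
x, lam = sol[:3], sol[3:]
print("arc flows:", x.round(3), "\nnode multipliers:", lam.round(3))
```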
Abstract:
A method is presented for optimising the performance indices of aperture antennas in the presence of blockage. An N-dimensional objective function is formed for maximising the directivity factor of a circular aperture with blockage under sidelobe-level constraints, and is minimised using the simplex search method. Optimum aperture distributions are computed for a circular aperture with a blockage of circular geometry, giving the maximum directivity factor under sidelobe-level constraints.
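A schematic of this kind of constrained simplex search follows. In my sketch both the directivity figure of merit and the sidelobe level are toy surrogates (the real quantities come from aperture-field integrals), and the sidelobe constraint is folded in as a penalty so that Nelder-Mead can handle it.

```python
# Sketch: Nelder-Mead (simplex search) over aperture-taper coefficients with a
# sidelobe-level constraint imposed as a penalty. Both functions are toy
# surrogates, not the true aperture-field expressions.
import numpy as np
from scipy.optimize import minimize

def directivity(w):
    # Toy surrogate: maximized by a uniform taper, value = number of rings.
    return np.sum(w) ** 2 / np.sum(w ** 2)

def sidelobe_level(w):
    # Toy surrogate: uniform tapers "raise" sidelobes, edge tapering lowers them.
    w = np.abs(w)
    return 0.3 * w.min() / w.max()

def objective(w):
    penalty = 1e3 * max(0.0, sidelobe_level(w) - 0.1) ** 2
    return -directivity(w) + penalty       # minimize negative directivity

w0 = np.linspace(1.0, 0.2, 6)              # initial taper over 6 aperture rings
res = minimize(objective, w0, method='Nelder-Mead')
print("optimized taper:", np.round(res.x, 3),
      "directivity:", round(directivity(res.x), 3))
```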
Abstract:
Chronic obstructive pulmonary disease (COPD) is a slowly progressive disease characterized by airway inflammation and largely irreversible airflow limitation. One major risk factor for COPD is cigarette smoking. Since the inflammatory process starts many years before the onset of clinical symptoms and continues after smoking cessation, there is an urgent need for simple non-invasive biomarkers that can be used in the early diagnosis of COPD and that could help predict disease progression. The first aim of the present study was to evaluate the involvement of different oxidative/nitrosative stress markers, matrix metalloproteinases (MMPs) and their tissue inhibitor-1 (TIMP-1) in smokers and in COPD. Elevated numbers of cells positive for inducible nitric oxide synthase (iNOS), nitrotyrosine, myeloperoxidase (MPO) and 4-hydroxy-2-nonenal (4-HNE), together with increased levels of 8-isoprostane and lactoferrin, were found in the sputum of non-symptomatic smokers compared to non-smokers, and especially in subjects with stable mild to moderate COPD, and they correlated with the severity of airway obstruction. This suggests that an increased oxidant burden already exists in the airways of smokers with normal lung function values. However, none of these markers could differentiate healthy smokers from symptomatic smokers with normal lung function values, i.e. those individuals who are at risk of developing COPD. In contrast to what is known about asthma, exhaled nitric oxide (FENO) was lower in smokers than in non-smokers, and the reduced FENO value was significantly associated with neutrophilic inflammation and the elevated oxidant burden (cells positive for iNOS, nitrotyrosine and MPO). The levels of sputum MMP-8 and plasma MMP-12 appeared to differentiate subjects at risk of COPD development, but these findings require further investigation. The levels of all studied MMPs correlated with the numbers of neutrophils, and MMP-8 and MMP-9 with markers of neutrophil activation (MPO, lactoferrin), suggesting that neutrophil-derived oxidants in particular may stimulate the tissue-destructive MMPs already in the lungs of smokers who do not yet experience any airflow limitation. When investigating the role of neutrophil proteases (neutrophil elastase, MMP-8, MMP-9) during COPD exacerbation and the subsequent recovery period, we found that the levels of all these proteases were increased in the sputum of patients with COPD exacerbation compared to stable COPD and controls, and decreased during the one-month recovery period, providing evidence for a role of these enzymes in COPD exacerbations. In the last study, the effects of subjects' age and smoking habits on the plasma levels of surfactant protein A (SP-A), SP-D, MMP-9 and TIMP-1 were evaluated. Long-term smoking increased the levels of all of these proteins. SP-A correlated most clearly with age, pack-years and lung function decline (FEV1/FVC), and based on receiver operating characteristic (ROC) curve analysis, SP-A was the best marker for discriminating subjects with COPD from controls. In conclusion, these findings support the hypothesis that neutrophil-derived oxidants in particular may activate MMPs and induce an active remodeling process already in the lungs of smokers with normal lung function values. The marked increase in sputum levels of neutrophil proteases in smokers, in stable COPD and during its exacerbations suggests that these enzymes play a role in the development and progression of COPD.
Based on the comparison of the various biomarkers, SP-A can be proposed as a sensitive biomarker of COPD development.
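The discrimination step at the end rests on standard ROC analysis, which can be sketched as follows with simulated biomarker values (not the study's data):

```python
# Sketch of ROC analysis for comparing biomarkers: AUC of a candidate marker
# (e.g. plasma SP-A) against COPD status, using simulated values.
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(0)
controls = rng.normal(20.0, 5.0, 60)      # hypothetical marker levels, controls
copd = rng.normal(30.0, 6.0, 40)          # hypothetical marker levels, COPD
y = np.concatenate([np.zeros(60), np.ones(40)])
marker = np.concatenate([controls, copd])

auc = roc_auc_score(y, marker)
fpr, tpr, thresholds = roc_curve(y, marker)
print(f"AUC = {auc:.2f}; best Youden threshold = "
      f"{thresholds[np.argmax(tpr - fpr)]:.1f}")
```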
Abstract:
In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms that require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle the Cartesian product of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; each iteration solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably defining a restriction on the objective function, a faster version of EMKL, called REKL, is proposed, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested, which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
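The core update behind mirror descent over psd matrices is the matrix exponentiated gradient, sketched below on a toy objective of my own (a single psd block rather than the paper's Cartesian product); note the per-step eigen-decompositions that a REKL-style restriction is designed to avoid.

```python
# Sketch of mirror descent over the psd cone with the von Neumann entropy
# mirror map (matrix exponentiated gradient): the iterate stays psd with unit
# trace by construction. Toy quadratic loss, not the paper's MKL objective.
import numpy as np

rng = np.random.default_rng(0)
n = 4
T = rng.normal(size=(n, n)); T = (T + T.T) / 2.0   # hypothetical symmetric target

def sym_fn(K, fn):
    # Apply a scalar function to a symmetric matrix via eigen-decomposition.
    w, V = np.linalg.eigh(K)
    return (V * fn(w)) @ V.T

K = np.eye(n) / n                                  # unit-trace psd starting point
for t in range(1, 201):
    step = 0.5 / np.sqrt(t)
    grad = K - T                                   # gradient of 0.5*||K - T||_F^2
    M = sym_fn(sym_fn(K, np.log) - step * grad, np.exp)   # exponentiated gradient
    K = M / np.trace(M)                            # renormalise to unit trace
print("final loss:", round(0.5 * np.linalg.norm(K - T) ** 2, 3))
```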
Abstract:
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub, which are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design from the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis problem and the optimization problem using response surface methods, which should encourage the use of optimization methods by the helicopter industry.
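The response-surface step can be sketched in a few lines: evaluate the expensive analysis at central-composite-design points around the baseline, fit a second-order polynomial, and then optimize the cheap surrogate instead. In my sketch, `expensive_analysis` is a hypothetical stand-in for the aeroelastic vibratory-hub-load computation.

```python
# Sketch: fit a second-order response surface on a central composite design.
import numpy as np
from itertools import product

def expensive_analysis(x):                  # stand-in for vibratory hub loads
    return 1.0 + 0.5 * x[0] ** 2 + 0.8 * x[1] ** 2 - 0.3 * x[0] * x[1] + 0.1 * x[0]

# Central composite design in 2 design variables: factorial + axial + center.
alpha = np.sqrt(2)
pts = [np.array(p, float) for p in product([-1, 1], repeat=2)]
pts += [np.array([ alpha, 0.0]), np.array([-alpha, 0.0]),
        np.array([0.0,  alpha]), np.array([0.0, -alpha]), np.zeros(2)]

def features(x):                            # full quadratic basis in 2 variables
    return [1.0, x[0], x[1], x[0] ** 2, x[1] ** 2, x[0] * x[1]]

X = np.array([features(p) for p in pts])
y = np.array([expensive_analysis(p) for p in pts])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("fitted quadratic coefficients:", np.round(coef, 3))
```

Because the surrogate is an explicit quadratic, the subsequent design optimization needs no further calls to the expensive aeroelastic analysis, which is the decoupling the paper highlights.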