930 results for Linear optimization approach
Abstract:
A method is developed by which the input leading to the highest possible response of a class of non-linear systems over an interval of time can be determined. If the input is deterministic, it is constrained to have a known finite energy (or norm) over the interval under consideration. In the case of random inputs, the energy is constrained to have a known probability distribution function. The approach is applicable when a system is to be exploited to maximum advantage by extracting the largest possible output, or when a system must be designed for the highest possible response with only the input energy, or its distribution, known. The method is also useful in arriving at a bound on the distribution of the highest peak of the response when the excitation is a known random process. As an illustration, the Duffing oscillator is analysed and some numerical results are presented.
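The flavour of the problem can be illustrated numerically. The sketch below is not the paper's method: it restricts the input to a hypothetical sinusoidal family with fixed energy over the interval, integrates a Duffing oscillator (with invented parameter values) by RK4, and keeps the frequency that gives the largest peak response.

```python
import numpy as np

def duffing_peak(omega, E, T=20.0, n=2000, c=0.1, k=1.0, eps=0.5):
    """Peak |x| of x'' + c x' + k x + eps x^3 = u(t) on [0, T], x(0) = x'(0) = 0.
    The input u(t) = A sin(omega t) has amplitude A chosen so that its
    energy int_0^T u^2 dt is (approximately) the prescribed value E."""
    A = np.sqrt(2.0 * E / T)  # since int_0^T A^2 sin^2(omega t) dt ~ A^2 T / 2
    dt = T / n

    def f(t, y):
        x, v = y
        return np.array([v, A * np.sin(omega * t) - c * v - k * x - eps * x**3])

    y = np.zeros(2)
    t, peak = 0.0, 0.0
    for _ in range(n):  # classical RK4 integration
        k1 = f(t, y)
        k2 = f(t + dt / 2, y + dt / 2 * k1)
        k3 = f(t + dt / 2, y + dt / 2 * k2)
        k4 = f(t + dt, y + dt * k3)
        y = y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += dt
        peak = max(peak, abs(y[0]))
    return peak

# Grid search over the input frequency at fixed input energy E.
E = 1.0
omegas = np.linspace(0.2, 2.5, 16)
peaks = [duffing_peak(w, E) for w in omegas]
best = omegas[int(np.argmax(peaks))]
print(f"best frequency ~ {best:.2f}, peak response ~ {max(peaks):.3f}")
```

The paper's method searches over all inputs of given energy, not a one-parameter family; the grid search here merely shows how the peak response varies within the constrained input set.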
Abstract:
Motivated by developments in spacecraft dynamics, the asymptotic behaviour and boundedness of solutions of a special class of time-varying systems, in which each term appears as the sum of a constant and a time-varying part, are analysed in this paper. Standard textbook results cannot be applied to such systems, which are originally of second order. Some of the existing results are reformulated. Four theorems are developed which relate the asymptotic behaviour/boundedness of the constant-coefficient system, obtained by equating the time-varying terms to zero, to the corresponding behaviour of the time-varying system. The results show the behaviour of the two systems to be intimately related, provided the solutions of the constant-coefficient system approach zero or remain bounded for large values of time, and the time-varying terms are suitably restrained. Two problems are tackled using these theorems.
Abstract:
The voltage stability control problem has become an important concern for utilities transmitting power over long distances. This paper presents an approach using fuzzy set theory for reactive power control with the purpose of improving the voltage stability of a power system. The proposed fuzzy logic control (FLC) aims to minimize the voltage deviations of all load buses from their desired values, and is based on the sensitivities of these deviations with respect to the reactive power control variables. The control variables considered are switchable VAR compensators, On Load Tap Changing (OLTC) transformers and generator excitations. Voltage deviations and control variables are translated into fuzzy set notation to formulate the relation between the voltage deviations and the controlling ability of the control devices. The developed fuzzy system is tested on a few simulated practical Indian power systems and some IEEE standard test systems, and its performance is compared with a conventional optimization technique; the results obtained are encouraging. Results for a 24-node equivalent EHV system of part of the Indian southern grid and the IEEE New England 39-bus system are presented for illustration. The proposed fuzzy-expert technique is found suitable for on-line application in energy control centres, since the solution is obtained quickly, with significant speedups.
Abstract:
The notion of optimization is inherent in protein design. A long linear chain of twenty types of amino acid residues is known to fold to a 3-D conformation that minimizes the combined inter-residue energy interactions. There are two distinct protein design problems, viz. predicting the folded structure from a given sequence of amino acid monomers (the folding problem) and determining a sequence for a given folded structure (the inverse folding problem). These two problems bear much similarity to engineering structural analysis and structural optimization problems, respectively. In the folding problem, a protein chain with a given sequence folds to a conformation, called a native state, which has a unique global minimum energy value when compared to all other unfolded conformations. This involves a search in the conformation space. It is somewhat akin to the principle of minimum potential energy that determines the deformed static equilibrium configuration of an elastic structure of given topology, shape, and size subjected to certain boundary conditions. In the inverse folding problem, one has to design a sequence with some objectives (having a specific feature of the folded structure, docking with another protein, etc.) and constraints (the sequence being fixed in some portion, a particular composition of amino acid types, etc.) while obtaining a sequence that would fold to the desired conformation satisfying the criteria of folding. This requires a search in the sequence space. It is similar to structural optimization in the design-variable space, wherein a certain feature of the structural response is optimized subject to some constraints while satisfying the governing static or dynamic equilibrium equations. Based on this similarity, in this work we apply topology optimization methods to protein design, discuss modeling issues and present some initial results.
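The idea of folding as a global search over conformation space can be made concrete with the classic HP lattice model, a standard toy model that is not part of this work: a short chain of hydrophobic (H) and polar (P) residues is folded on a 2D grid, every self-avoiding conformation is enumerated, and the "native state" is the one with the most H-H contacts (energy -1 per contact). The example sequence below is invented.

```python
MOVES = [(1, 0), (-1, 0), (0, 1), (0, -1)]

def enumerate_folds(seq):
    """Enumerate all self-avoiding 2D lattice conformations of `seq` and
    return (min_energy, best_path). Energy = -1 per non-bonded H-H contact."""
    n = len(seq)
    best = (1, None)  # any real conformation has energy <= 0

    def energy(path):
        pos = {p: i for i, p in enumerate(path)}
        e = 0
        for i, p in enumerate(path):
            if seq[i] != 'H':
                continue
            for dx, dy in MOVES:
                j = pos.get((p[0] + dx, p[1] + dy))
                if j is not None and seq[j] == 'H' and j > i + 1:
                    e -= 1  # each H-H pair counted once (j > i + 1)
        return e

    def grow(path):
        nonlocal best
        if len(path) == n:
            e = energy(path)
            if e < best[0]:
                best = (e, list(path))
            return
        x, y = path[-1]
        for dx, dy in MOVES:
            q = (x + dx, y + dy)
            if q not in path:  # self-avoidance
                path.append(q)
                grow(path)
                path.pop()

    grow([(0, 0), (1, 0)])  # fix the first bond to remove some symmetry
    return best

e_min, fold = enumerate_folds("HPHPPHPH")
print("native-state energy:", e_min)
```

Exhaustive enumeration is only feasible for very short chains; the exponential growth of the conformation space is precisely why real folding prediction requires the optimization machinery the abstract alludes to.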
Abstract:
1,3-Dipolar cycloaddition of an organic azide and an acetylenic unit, often referred to as the "click reaction", has become an important ligation tool in the context of both materials chemistry and biology. Thus, the development of simple approaches to directly generate polymers that bear either an azide or an alkyne unit has gained considerable importance. We describe here a straightforward approach to directly prepare linear and hyperbranched polyesters that carry terminal propargyl groups. To achieve the former, we designed an AB-type monomer that carries a hydroxyl group and a propargyl ester, which upon self-condensation under standard transesterification conditions yielded a polyester that carries a single propargyl group at one of its chain-ends. Similarly, an AB(2)-type monomer that carries one hydroxyl group and two propargyl ester groups, when polymerized under the same conditions, yielded a hyperbranched polymer with numerous "clickable" propargyl groups at its molecular periphery. These propargyl groups can be readily clicked with different organic azides, such as benzyl azide, ω-azido heptaethyleneglycol monomethylether or 9-azidomethyl anthracene. When an anthracene chromophore is clicked, the molecular weight of the linear polyester could be readily estimated using both UV-visible and fluorescence spectroscopic measurements. Furthermore, the reactive propargyl end group could also provide an opportunity to prepare block copolymers in the case of the linear polyesters, and to generate nanodimensional scaffolds to anchor a variety of functional units in the case of the hyperbranched polymer. (C) 2010 Wiley Periodicals, Inc. J Polym Sci Part A: Polym Chem 48: 3200-3208, 2010.
Abstract:
A fuzzy system is developed using a linearized performance model of the gas turbine engine to perform gas turbine fault isolation from noisy measurements. By using a priori information about measurement uncertainties, and through design variable linking, the design of the fuzzy system is posed as an optimization problem with a small number of design variables, which can be solved using a genetic algorithm in a considerably short amount of computer time. The faults modeled are module faults in five modules: fan, low pressure compressor, high pressure compressor, high pressure turbine and low pressure turbine. The measurements used are deviations in exhaust gas temperature, low rotor speed, high rotor speed and fuel flow from a baseline 'good engine'. The genetic fuzzy system (GFS) allows rapid development of the rule base when the fault signatures and measurement uncertainties change, as happens for different engines and airlines. In addition, the genetic fuzzy system reduces the human effort needed in the trial-and-error process used to design the fuzzy system, and makes the development of such a system easier and faster. A radial basis function neural network (RBFNN) is also used to preprocess the measurements before fault isolation. The RBFNN achieves significant noise reduction and, when combined with the GFS, leads to a diagnostic system that is highly robust to the presence of noise in the data, showing the advantage of a soft computing approach for gas turbine diagnostics.
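The RBFNN preprocessing step can be sketched in isolation. The toy example below uses an invented signal and noise level, not the paper's engine data: a Gaussian radial basis function network is fitted to noisy samples by ridge-regularized least squares, and the smoothed estimate is compared against the clean signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean "measurement deviation" signal plus additive sensor noise (invented).
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * t)
noisy = clean + rng.normal(0.0, 0.3, t.size)

# Gaussian RBF design matrix: 12 centers spread over the interval.
centers = np.linspace(0.0, 1.0, 12)
sigma = 0.08
Phi = np.exp(-((t[:, None] - centers[None, :]) ** 2) / (2 * sigma**2))

# Ridge-regularized least squares for the output-layer weights.
lam = 1e-3
w = np.linalg.solve(Phi.T @ Phi + lam * np.eye(12), Phi.T @ noisy)
denoised = Phi @ w

rmse_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
rmse_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
print(f"RMSE noisy = {rmse_noisy:.3f}, denoised = {rmse_denoised:.3f}")
```

With far fewer basis functions than samples, the network cannot track the noise and so acts as a smoother, which is the sense in which RBFNN preprocessing reduces noise before fault isolation.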
Abstract:
As with metal and semiconductor nanoparticles, the melting temperature of free inert-gas nanoparticles decreases with decreasing size. The variation is linear in the inverse of the particle size for large nanoparticles and deviates from linearity for small nanoparticles. The decrease in melting temperature is slower for free nanoparticles with non-wetting surfaces and faster for nanoparticles with wetting surfaces. Though depression of the melting temperature has been reported for inert-gas nanoparticles in porous glasses, superheating has also been observed when the nanoparticles are embedded in some matrices. By using a simple classical approach, the influence of size, geometry and the matrix on the melting temperature of nanoparticles is understood quantitatively and shown to be applicable to other materials. It is also shown that the classical approach can be applied to understand the size-dependent freezing temperature of nanoparticles.
Abstract:
Metabolomics is a rapidly growing research field that studies the response of biological systems to environmental factors, disease states and genetic modifications. It aims at measuring the complete set of endogenous metabolites, i.e. the metabolome, in a biological sample such as plasma or cells. Because metabolites are the intermediates and end products of biochemical reactions, metabolite compositions and metabolite levels in biological samples can provide a wealth of information on on-going processes in a living system. Due to the complexity of the metabolome, metabolomic analysis poses a challenge to analytical chemistry. Adequate sample preparation is critical to accurate and reproducible analysis, and the analytical techniques must have high resolution and sensitivity to allow detection of as many metabolites as possible. Furthermore, as the information contained in the metabolome is immense, the data set collected from metabolomic studies is very large. In order to extract the relevant information from such large data sets, efficient data processing and multivariate data analysis methods are needed. In the research presented in this thesis, metabolomics was used to study mechanisms of polymeric gene delivery to retinal pigment epithelial (RPE) cells. The aim of the study was to detect differences in metabolomic fingerprints between transfected cells and non-transfected controls, and thereafter to identify metabolites responsible for the discrimination. The plasmid pCMV-β was introduced into RPE cells using the vector polyethyleneimine (PEI). The samples were analyzed using high performance liquid chromatography (HPLC) and ultra performance liquid chromatography (UPLC) coupled to a triple quadrupole (QqQ) mass spectrometer (MS). The software MZmine was used for raw data processing and principal component analysis (PCA) was used in statistical data analysis.
The results revealed differences in metabolomic fingerprints between transfected cells and non-transfected controls. However, reliable fingerprinting data could not be obtained because of low analysis repeatability; therefore, no attempt was made to identify the metabolites responsible for the discrimination between sample groups. The repeatability and accuracy of the analyses could be improved by protocol optimization, but in this study optimization of the analytical methods was hindered by the very small number of samples available for analysis. In conclusion, this study demonstrates that obtaining reliable fingerprinting data is technically demanding, and that the protocols need to be thoroughly optimized in order to gain information on the mechanisms of gene delivery.
Abstract:
The paper examines the needs, premises and criteria for effective public participation in tactical forest planning. A method for participatory forest planning utilizing the techniques of preference analysis, professional expertise and heuristic optimization is introduced. The techniques do not cover the whole process of participatory planning, but are applied as a tool constituting the numerical core for decision support. The complexity of multi-resource management is addressed by hierarchical decision analysis, which assesses the public values, preferences and decision criteria relevant to the planning situation. An optimal management plan is sought using heuristic optimization. The plan can be further improved through mutual negotiations, if necessary. The use of the approach is demonstrated with an illustrative example; its merits and challenges for participatory forest planning and decision making are discussed, and a model for applying it in a general forest planning context is depicted. Using the approach, valuable information can be obtained about public preferences and about the effects of taking them into consideration on the choice of the combination of standwise treatment proposals for a forest area. Participatory forest planning calculations carried out with the approach can be utilized in conflict management and in developing compromises between competing interests.
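A heuristic search over standwise treatment combinations can be sketched as follows. The stand scores, criterion weights and hill-climbing search below are all invented for illustration; the paper's own heuristic and elicited preference data are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

n_stands, n_treatments, n_criteria = 30, 4, 3
# score[s, a, c]: contribution of treatment a in stand s to criterion c (invented).
score = rng.random((n_stands, n_treatments, n_criteria))
# Criterion weights, as might be elicited from participants (e.g. timber,
# recreation, biodiversity) via preference analysis.
weights = np.array([0.5, 0.3, 0.2])

def utility(plan):
    """Weighted multi-criteria utility of a plan (one treatment index per stand)."""
    return float((score[np.arange(n_stands), plan] * weights).sum())

# Hill climbing: change one stand's treatment at a time, keep strict improvements.
plan = rng.integers(0, n_treatments, n_stands)
initial = utility(plan)
improved = True
while improved:
    improved = False
    for s in range(n_stands):
        for a in range(n_treatments):
            trial = plan.copy()
            trial[s] = a
            if utility(trial) > utility(plan):
                plan, improved = trial, True
final = utility(plan)
print(f"utility: {initial:.3f} -> {final:.3f}")
```

Because this toy utility decomposes stand by stand, hill climbing reaches the exact optimum; in real tactical plans, area-level constraints couple the stands, which is what makes heuristic optimization genuinely necessary.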
Abstract:
Based on a method proposed by Reddy and Daum, the equations governing the steady inviscid nonreacting gasdynamic laser (GDL) flow in a supersonic nozzle are reduced to a universal form so that the solutions depend on a single parameter which combines all the other parameters of the problem. Solutions are obtained for a sample case of available data and compared with existing results to validate the present approach. Also, similar solutions for a sample case are presented.
Abstract:
A detailed theoretical analysis of flow through a quadrant plate weir is made in the light of the generalized theory of proportional weirs, using a numerical optimization procedure. It is shown that the flow through the quadrant plate weir has a linear discharge-head relationship valid for certain ranges of head. It is shown that the weir is associated with a reference plane, or datum, from which all heads are reckoned. Further, it is shown that the measuring range of the quadrant plate weir can be considerably enhanced by extending the tangents to the quadrants at the terminals of the weir. The importance of this weir (when the datum of the weir lies below its crest) as an outlet weir for grit chambers is highlighted. Experiments show excellent agreement with the theory, giving a constant average coefficient of discharge.
Abstract:
A linear state feedback gain vector used in the control of a single input dynamical system may be constrained because of the way feedback is realized. Some examples of feedback realizations which impose constraints on the gain vector are: static output feedback, constant gain feedback for several operating points of a system, and two-controller feedback. We consider a general class of problems of stabilization of single input dynamical systems with such structural constraints and give a numerical method to solve them. Each of these problems is cast into a problem of solving a system of equalities and inequalities. In this formulation, the coefficients of the quadratic and linear factors of the closed-loop characteristic polynomial are the variables. To solve the system of equalities and inequalities, a continuous realization of the gradient projection method and a barrier method are used under the homotopy framework. Our method is illustrated with an example for each class of control structure constraint.
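The idea of casting constrained stabilization as a system of inequalities can be shown on a toy problem. The example below is invented (it is not from the paper, and a crude grid search replaces the gradient projection/homotopy machinery): the structural constraint forces the gain vector to be g·[1, 1] with one free scalar g, and stability becomes two inequalities on the closed-loop characteristic polynomial coefficients.

```python
import numpy as np

# Single-input system xdot = A x + b u with the structural constraint
# u = g * (x1 + x2), i.e. gain vector k = g * [1, 1] with one free parameter.
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
b = np.array([0.0, 1.0])

def charpoly_coeffs(g):
    """Coefficients (p1, p0) of s^2 + p1 s + p0 for the closed loop A + b k^T."""
    Acl = A + np.outer(b, g * np.array([1.0, 1.0]))
    return -np.trace(Acl), float(np.linalg.det(Acl))

# For a second-order system, stability <=> p1 > 0 and p0 > 0 (Routh-Hurwitz),
# so the design problem is a system of inequalities in the free gain parameter.
def violation(g):
    p1, p0 = charpoly_coeffs(g)
    return max(0.0, -p1) + max(0.0, -p0)

# Crude grid search over the scalar gain in place of gradient projection.
grid = np.linspace(-10.0, 10.0, 2001)
feasible = [g for g in grid if violation(g) == 0.0]
g_star = feasible[0]
p1, p0 = charpoly_coeffs(g_star)
print(f"stabilizing gain g = {g_star:.2f}: s^2 + {p1:.2f} s + {p0:.2f}")
```

For higher-order systems with multi-parameter constraints, grid search is hopeless, which motivates the continuous gradient projection and homotopy methods the paper develops.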
Abstract:
Potential transients are obtained by using "Padé approximants" (an accurate approximation procedure valid globally, not just perturbatively) for all amplitudes of concentration polarization and current densities. This is done for several mechanistic schemes under constant current conditions. We invert the non-linear current-potential relationship in the form of power series (using the Lagrange or the Ramanujan method) appropriate to the two extremes, namely near-reversible and near-irreversible. Transforming both into Padé expressions, we construct the potential-time profile by retaining whichever of the two is the more accurate. The effectiveness of this method is demonstrated through illustrations which include couplings of homogeneous chemical reactions to the electron-transfer step.
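The Padé construction itself is standard and can be illustrated on a generic function rather than the paper's electrochemical series: below, the [2/2] Padé approximant of exp(x) is built from its first five Taylor coefficients by solving a small linear system, and its accuracy at x = 1 is compared with the truncated series it was built from.

```python
import numpy as np

# Taylor coefficients of exp(x): c_k = 1/k!
c = np.array([1.0, 1.0, 1.0 / 2, 1.0 / 6, 1.0 / 24])

# [2/2] Pade approximant p(x)/q(x) with q(0) = 1: the denominator
# coefficients b1, b2 make the x^3 and x^4 terms of q(x)*exp(x) - p(x) vanish:
#   c2 b1 + c1 b2 = -c3
#   c3 b1 + c2 b2 = -c4
M = np.array([[c[2], c[1]],
              [c[3], c[2]]])
b1, b2 = np.linalg.solve(M, -c[3:5])

# Numerator coefficients follow from the low-order terms of q(x) * exp(x).
a0 = c[0]
a1 = c[1] + b1 * c[0]
a2 = c[2] + b1 * c[1] + b2 * c[0]

x = 1.0
pade = (a0 + a1 * x + a2 * x**2) / (1.0 + b1 * x + b2 * x**2)
taylor = sum(ck * x**k for k, ck in enumerate(c))
print(f"exp(1): pade error {abs(pade - np.e):.2e}, taylor error {abs(taylor - np.e):.2e}")
```

Built from the same five coefficients, the rational form is markedly more accurate than the polynomial, which is the property the paper exploits to extend the near-reversible and near-irreversible series beyond their perturbative ranges.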
Abstract:
This thesis studies the effect of income inequality on economic growth by analyzing panel data from several countries, with both short and long time dimensions. Two of the chapters study the direct effect of inequality on growth, and one chapter also examines a possible indirect effect of inequality on growth by assessing the effect of inequality on savings. In Chapter Two, the effect of inequality on growth is studied using a panel of 70 countries and the new EHII2008 inequality measure. The chapter addresses two problems that panel econometric studies on the economic effects of inequality have recently encountered: the comparability problem associated with the commonly used Deininger and Squire's Gini index, and the problem of estimating group-related elasticities in panel data. A simple way to 'bypass' the vagueness related to the use of parametric methods to estimate group-related parameters is presented: the idea is to estimate the group-related elasticities implicitly, using a set of group-related instrumental variables. The estimation results with the new data and method indicate that the relationship between income inequality and growth is likely to be non-linear. Chapter Three uses the EHII2.1 inequality measure and a panel of annual time series observations from 38 countries to test for the existence of long-run equilibrium relations between inequality and the level of GDP. Panel unit root tests indicate that both the logarithmic EHII2.1 inequality measure and the logarithmic GDP per capita series are I(1) nonstationary processes. They are also found to be cointegrated of order one, which implies a long-run equilibrium relation between them. The long-run growth elasticity of inequality is found to be negative in middle-income and rich economies, but the results for poor economies are inconclusive.
In the fourth chapter, macroeconomic data on nine developed economies, spanning four decades starting from 1960, are used to study the effect of changes in the top income share on national and private savings. The income share of the top 1% of the population is used as a proxy for the distribution of income. The effect of inequality on private savings is found to be positive in the Nordic and Central European countries, but for the Anglo-Saxon countries the direction of the effect (positive vs. negative) remains somewhat ambiguous. Inequality is found to have an effect on national savings only in the Nordic countries, where the effect is positive.
Abstract:
An asymptotically correct analysis is developed for a Macro Fiber Composite unit cell using the Variational Asymptotic Method (VAM). VAM splits the 3D nonlinear problem into two parts: a 1D nonlinear problem along the length of the fiber and a linear 2D cross-sectional problem. Closed-form solutions, expressed in terms of the 1D parameters, are obtained for the 2D problem.