944 results for Constrained evolutionary optimization


Relevance:

20.00%

Publisher:

Abstract:

A new high-order finite volume method based on local reconstruction is presented in this paper. The method, called the multi-moment constrained finite volume (MCV) method, uses point values defined at equally spaced points within a single cell as the model variables (or unknowns). The time evolution equations used to update the unknowns are derived from a set of constraint conditions imposed on multiple kinds of moments, i.e. the cell-averaged value and the point-wise values of the state variable and its derivatives. The finite volume constraint on the cell average guarantees the numerical conservation of the method. Most constraint conditions are imposed on the cell boundaries, where the numerical flux and its derivatives are obtained by solving generalized Riemann problems. A multi-moment constrained Lagrange interpolation reconstruction of the required order of accuracy is constructed over a single cell and converts the evolution equations of the moments into those of the unknowns. The presented method provides a general framework for constructing efficient high-order schemes. The basic formulations for hyperbolic conservation laws on one- and two-dimensional structured grids are detailed, together with numerical results for widely used benchmark tests. (C) 2009 Elsevier Inc. All rights reserved.
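
As a minimal illustration of the cell-average constraint at the heart of such multi-moment reconstructions (not code from the paper), the sketch below builds the quadratic Lagrange interpolant through three equally spaced point values in a single cell and checks that its cell average equals the Simpson-rule combination of those values; the cell size and sample values are assumptions.

    import numpy as np

    # Illustrative sketch: one cell [0, h] with three equally spaced point values
    # (assumed unknowns of a third-order MCV-type reconstruction).
    h = 1.0
    x_pts = np.array([0.0, 0.5 * h, h])
    u_pts = np.array([1.0, 2.5, 2.0])          # arbitrary sample values

    # Quadratic Lagrange interpolant through the three point values.
    poly = np.poly1d(np.polyfit(x_pts, u_pts, deg=2))

    # Cell average of the interpolant (exact integral of the quadratic over the cell).
    cell_avg = np.polyint(poly)(h) / h

    # The same average expressed as a constraint on the point values (Simpson's rule).
    simpson_avg = (u_pts[0] + 4.0 * u_pts[1] + u_pts[2]) / 6.0

    print(cell_avg, simpson_avg)               # identical up to round-off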

Relevance:

20.00%

Publisher:

Abstract:

This paper presents an experimental study of air staging in a 1 MW (heat input power) tangentially fired pulverized coal furnace. The influence of several air-staging variables on NOx reduction efficiency and unburned carbon in fly ash was investigated; these variables included the air stoichiometric ratio of the primary combustion zone (SR1), the location of the over-fire air (OFA) nozzles along the furnace height, and the ratio of the coal concentration in the fuel-rich stream to that in the fuel-lean stream (RRL) in the primary air nozzles. The experimental results indicate that SR1 and RRL have optimum values for NOx reduction of 0.85 and 3:1, respectively. NOx reduction efficiency increases monotonically as the OFA nozzles are located higher up the furnace. Under the optimized air-staging operating conditions, a NOx reduction efficiency of 47% can be attained. Although air staging can effectively reduce NOx emissions, the accompanying increase of unburned carbon in fly ash should be noted. (C) 2008 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

This thesis discusses various methods for learning and optimization in adaptive systems. Overall, it emphasizes the relationship between optimization, learning, and adaptive systems; and it illustrates the influence of underlying hardware upon the construction of efficient algorithms for learning and optimization. Chapter 1 provides a summary and an overview.

Chapter 2 discusses a method for using feed-forward neural networks to filter the noise out of noise-corrupted signals. The networks use back-propagation learning, but they use it in a way that qualifies as unsupervised learning. The networks adapt based only on the raw input data; there are no external teachers providing information on correct operation during training. The chapter contains an analysis of the learning and develops a simple expression that, based only on the geometry of the network, predicts performance.

Chapter 3 explains a simple model of the piriform cortex, an area in the brain involved in the processing of olfactory information. The model was used to explore the possible effect of acetylcholine on learning and on odor classification. According to the model, the piriform cortex can classify odors better when acetylcholine is present during learning but not present during recall. This is interesting since it suggests that learning and recall might be separate neurochemical modes (corresponding to whether or not acetylcholine is present). When acetylcholine is turned off at all times, even during learning, the model exhibits behavior somewhat similar to Alzheimer's disease, a disease associated with the degeneration of cells that distribute acetylcholine.

Chapters 4, 5, and 6 discuss algorithms appropriate for adaptive systems implemented entirely in analog hardware. The algorithms inject noise into the systems and correlate the noise with the outputs of the systems. This allows them to estimate gradients and to implement noisy versions of gradient descent, without having to calculate gradients explicitly. The methods require only noise generators, adders, multipliers, integrators, and differentiators; and the number of devices needed scales linearly with the number of adjustable parameters in the adaptive systems. With the exception of one global signal, the algorithms require only local information exchange.
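
The core of these noise-correlation algorithms can be sketched in a few lines: perturb the adjustable parameters with small random noise, measure the resulting change in the system's cost, and correlate the noise with that change to obtain a gradient estimate for noisy gradient descent. The sketch below is a generic, software-only illustration of that principle, not the analog-hardware algorithms of the thesis; the quadratic cost, noise amplitude and learning rate are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def cost(w):
        """Assumed stand-in for the adaptive system's measurable performance."""
        return np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2)

    w = np.zeros(3)      # adjustable parameters
    sigma = 1e-3         # amplitude of the injected noise
    eta = 0.1            # learning rate

    for step in range(2000):
        xi = rng.standard_normal(w.shape)          # injected noise
        delta = cost(w + sigma * xi) - cost(w)     # measured change in the output
        grad_est = (delta / sigma) * xi            # correlate the noise with the change
        w -= eta * grad_est                        # noisy gradient descent

    print(w)             # approaches [1.0, -2.0, 0.5] without any explicit gradient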

Relevance:

20.00%

Publisher:

Abstract:

Granular crystals are compact periodic assemblies of elastic particles in Hertzian contact whose dynamic response can be tuned from strongly nonlinear to linear by the addition of a static precompression force. This unique feature allows for a wide range of studies that include the investigation of new fundamental nonlinear phenomena in discrete systems, such as solitary waves, shock waves, discrete breathers and other defect modes. In the absence of precompression, a particularly interesting property of these systems is their ability to support the formation and propagation of spatially localized soliton-like waves with highly tunable properties. The wealth of parameters one can modify (particle size, geometry and material properties, periodicity of the crystal, presence of a static force, type of excitation, etc.) makes them ideal candidates for the design of new materials for practical applications. This thesis describes several ways to optimally control and tailor the propagation of stress waves in granular crystals through the use of heterogeneities (interstitial defect particles and material heterogeneities) in otherwise perfectly ordered systems. We focus on uncompressed two-dimensional granular crystals with interstitial spherical intruders and composite hexagonal packings and study their dynamic response using a combination of experimental, numerical and analytical techniques. We first investigate the interaction of defect particles with a solitary wave and use this fundamental knowledge in the optimal design of novel composite waveguides and shock or vibration absorbers, obtained using gradient-based optimization methods.
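
For context only (not material from the thesis), the contact law underlying such crystals is the Hertzian force F = A*delta**1.5 between compressed spheres, and even a one-dimensional chain of identical beads governed by it supports the highly localized solitary waves mentioned above. The sketch below integrates such an uncompressed chain after striking the first bead; the stiffness constant, masses and impact velocity are illustrative assumptions.

    import numpy as np
    from scipy.integrate import solve_ivp

    N = 40        # number of beads in the chain
    m = 1.0       # bead mass (assumed)
    A = 1.0e3     # Hertzian contact stiffness (assumed)

    def rhs(t, y):
        """Uncompressed Hertzian chain: F = A * overlap**1.5, acting only in compression."""
        u, v = y[:N], y[N:]
        overlap = np.maximum(u[:-1] - u[1:], 0.0)   # positive only when neighbors overlap
        f = A * overlap ** 1.5                      # contact force between neighbors
        acc = np.zeros(N)
        acc[:-1] -= f / m                           # reaction on the left bead
        acc[1:] += f / m                            # push on the right bead
        return np.concatenate([v, acc])

    y0 = np.zeros(2 * N)
    y0[N] = 1.0   # impulse: strike the first bead with unit velocity

    sol = solve_ivp(rhs, (0.0, 2.0), y0, max_step=1e-3)
    v_final = sol.y[N:, -1]
    print("peak velocity at t = 2 is carried by bead", np.argmax(v_final))  # a localized pulse travels down the chain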

Relevance:

20.00%

Publisher:

Abstract:

A constrained high-order statistical algorithm is proposed to blindly deconvolve measured spectral data and simultaneously estimate the instrument response function. In this algorithm, no prior knowledge is necessary except a proper length for the unit-impulse response. This length can easily be set to the width of the narrowest spectral line observed in the measured data. The feasibility of the method has been demonstrated experimentally on measured Raman and absorption spectral data.

Relevance:

20.00%

Publisher:

Abstract:

The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally efficient and distributed solutions, and apply them to improve systems operations and architecture.

Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially on the distribution side, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management. We will also show how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
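
As a generic illustration of the price-based, distributed flavor of such algorithms (not the dissertation's own designs), the sketch below coordinates a few DERs toward a target aggregate injection by dual decomposition: a coordinator broadcasts a price, each DER minimizes its own quadratic cost minus revenue using only local data, and the price is updated from the aggregate mismatch. All costs, limits and the target are assumptions.

    import numpy as np

    # Assumed DER fleet: quadratic costs c_i(p) = 0.5 * a_i * p**2 and limits [0, pmax_i].
    a = np.array([1.0, 2.0, 4.0])
    pmax = np.array([3.0, 3.0, 3.0])
    demand = 5.0          # target aggregate injection
    step = 0.2            # price (dual) update step size

    price = 0.0
    for _ in range(200):
        # Each DER reacts to the broadcast price using only its own cost and limits:
        # argmin_p 0.5*a_i*p**2 - price*p  subject to  0 <= p <= pmax_i.
        p = np.clip(price / a, 0.0, pmax)
        # The coordinator updates the price from the supply-demand mismatch.
        price += step * (demand - p.sum())

    print("price:", round(price, 3), "dispatch:", np.round(p, 3))   # dispatch sums to ~demand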

The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work focuses on achieving this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
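
As a toy instance of this design philosophy (not the dissertation's construction), the sketch below sets up a small congestion-style game, which is an exact potential game, and runs asynchronous best-response dynamics, a simple distributed learning rule that provably converges to a Nash equilibrium in potential games. Each agent revises its choice using only the cost it would locally incur. The agent count, resources and costs are assumptions.

    import random

    random.seed(0)

    # Three resources whose congestion cost equals the number of agents using them.
    RESOURCES = ["a", "b", "c"]
    N_AGENTS = 6

    def agent_cost(choices, i, r):
        """Local cost agent i would pay for resource r, given the others' choices."""
        others = choices[:i] + choices[i + 1:]
        return sum(1 for c in others if c == r) + 1   # resulting load on r

    choices = [random.choice(RESOURCES) for _ in range(N_AGENTS)]

    # Asynchronous best-response dynamics: agents revise one at a time.
    changed = True
    while changed:
        changed = False
        for i in range(N_AGENTS):
            best = min(RESOURCES, key=lambda r: agent_cost(choices, i, r))
            if agent_cost(choices, i, best) < agent_cost(choices, i, choices[i]):
                choices[i] = best
                changed = True

    print(choices)   # a Nash equilibrium: the six agents end up spread two per resource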

Relevance:

20.00%

Publisher:

Abstract:

The effect of mixing pulsed two-color fields on the generation of an isolated attosecond pulse has been systematically investigated. One main color is 800 nm and the other (secondary) color is varied from 1.2 to 2.4 μm. This work shows that the continuum length behaves in a similar way to the difference in the square of the amplitude of the strongest and next-strongest cycle. As the mixing ratio is increased, the optimal wavelength for the extended continuum shifts toward the shorter-wavelength side. There is a certain mixing ratio of intensities at which the continuum length bifurcates, i.e., two optimal wavelengths exist. As the mixing ratio is further increased, each branch bifurcates again into two sub-branches. This 2D map analysis of the mixing ratio and the wavelength of the secondary field allows one to readily select a proper wavelength and mixing ratio for a given pulse duration of the primary field. The study shows that an isolated sub-100-attosecond pulse can be generated by mixing an 11 fs full-width-at-half-maximum (FWHM), 800 nm laser pulse with an 1840 nm pulse. Furthermore, the results reveal that a 33 fs FWHM, 800 nm pulse can produce an isolated pulse below 200 as when properly mixed. (c) 2008 Optical Society of America.
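
The quantity highlighted above, the difference in the square of the amplitude of the strongest and next-strongest cycle, is straightforward to evaluate once a two-color field is written down. The sketch below synthesizes an assumed pair of Gaussian-envelope pulses and compares the two largest field extrema as a simple proxy for those cycle amplitudes; the wavelengths, durations and 2:1 amplitude ratio are illustrative assumptions, not the paper's parameters.

    import numpy as np
    from scipy.signal import argrelextrema

    c = 299.792458                           # speed of light in nm/fs
    t = np.linspace(-40.0, 40.0, 20000)      # time axis in fs

    def pulse(t, wavelength_nm, fwhm_fs, amplitude, phase=0.0):
        """Gaussian-envelope carrier field (one color of the assumed two-color mix)."""
        omega = 2.0 * np.pi * c / wavelength_nm            # carrier frequency in rad/fs
        envelope = np.exp(-2.0 * np.log(2.0) * (t / fwhm_fs) ** 2)
        return amplitude * envelope * np.cos(omega * t + phase)

    # Assumed mixing: an 800 nm main pulse plus a weaker 1800 nm secondary pulse.
    field = pulse(t, 800.0, 11.0, 1.0) + pulse(t, 1800.0, 11.0, 0.5)

    # Half-cycle amplitudes, taken as the local maxima of |E(t)|.
    peaks = np.abs(field)[argrelextrema(np.abs(field), np.greater)]
    strongest, second = np.sort(peaks)[-2:][::-1]

    print("difference of squared amplitudes:", strongest**2 - second**2)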

Relevance:

20.00%

Publisher:

Abstract:

In this thesis, we provide a statistical theory for the vibrational pooling and fluorescence time dependence observed in infrared laser excitation of CO on an NaCl surface. The pooling is seen in experiment and in computer simulations. In the theory, we assume a rapid equilibration of the quanta in the substrate and minimize the free energy subject to the constraint at any time t of a fixed number of vibrational quanta N(t). At low incident intensity, the distribution is limited to one-quantum exchanges with the solid and so the Debye frequency of the solid plays a key role in limiting the range of this one-quantum domain. The resulting inverted vibrational equilibrium population depends only on fundamental parameters of the oscillator (ωe and ωeχe) and the surface (ωD and T). Possible applications and relation to the Treanor gas phase treatment are discussed. Unlike the solid phase system, the gas phase system has no Debye-constraining maximum. We discuss the possible distributions for arbitrary N-conserving diatom-surface pairs, and include application to H:Si(111) as an example.

Computations are presented to describe and analyze the high levels of infrared laser-induced vibrational excitation of a monolayer of adsorbed 13CO on an NaCl(100) surface. The calculations confirm that, for situations where the Debye-frequency-limited n-domain restriction approximately holds, the vibrational state population deviates from a Boltzmann population linearly in n. Nonetheless, the full kinetic calculation is necessary to capture the result in detail.

We discuss the one-to-one relationship between N and γ and examine the state space of the new distribution function for varied γ. We derive the free energy, F = NγkT − kT ln(∑Pn), and the effective chemical potential, μn ≈ γkT, for the vibrational pool. We also find that the anticorrelation of neighboring vibrations leads to an emergent correlation that appears to extend beyond nearest neighbors.
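
To make the shape of such a pooled, inverted population concrete, the sketch below evaluates a Treanor-like distribution P_n proportional to exp(γn - E_n/kT) for an anharmonic oscillator with CO-like constants, truncated at an assumed one-quantum-domain cutoff, and bisects for the γ that yields a prescribed N. This is only a generic stand-in for the constrained free-energy minimization described above; the temperature, cutoff and target N are assumptions.

    import numpy as np

    # CO-like anharmonic constants in cm^-1; temperature, cutoff and target N are assumed.
    we, wexe = 2170.0, 13.3
    kT = 0.695 * 22.0                  # kT in cm^-1 at an assumed 22 K (0.695 cm^-1 per K)
    n = np.arange(0, 26)               # assumed one-quantum-domain cutoff n_max = 25
    E = we * n - wexe * n * (n + 1)    # vibrational energy above the ground state

    def distribution(gamma):
        """Treanor-like weights P_n proportional to exp(gamma*n - E_n/kT), normalized."""
        logP = gamma * n - E / kT
        logP -= logP.max()             # guard against overflow before normalizing
        P = np.exp(logP)
        return P / P.sum()

    def mean_quanta(gamma):
        return float((n * distribution(gamma)).sum())

    # One-to-one N <-> gamma relationship: bisect for the gamma giving N = 1 quantum/molecule.
    target_N, lo, hi = 1.0, 0.0, 300.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if mean_quanta(mid) < target_N else (lo, mid)
    gamma = 0.5 * (lo + hi)

    P = distribution(gamma)
    print("gamma:", round(gamma, 2))
    print("population pools at n = 0 and at the cutoff:", P[0], P[-1])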

Relevance:

20.00%

Publisher:

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.
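
As a small self-contained illustration of the OUQ idea (not an example from the thesis), the sketch below computes the best possible bound on P(X >= a) over all distributions supported on [0, 1] with a prescribed mean, by discretizing the support and solving a linear program over candidate probability weights; the optimum recovers Markov's inequality m/a. The grid size, mean and threshold are assumptions.

    import numpy as np
    from scipy.optimize import linprog

    # Worst-case P(X >= a) over all distributions on [0, 1] with E[X] = m,
    # approximated by a discrete distribution on a fine grid (a linear program in the weights).
    m, a = 0.2, 0.5
    x = np.linspace(0.0, 1.0, 2001)

    c = -(x >= a).astype(float)            # maximize P(X >= a)  <=>  minimize its negative
    A_eq = np.vstack([np.ones_like(x),     # weights sum to one
                      x])                  # mean constraint E[X] = m
    b_eq = np.array([1.0, m])

    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0.0, None), method="highs")
    print("worst-case P(X >= a):", -res.fun)   # ~0.4, i.e. Markov's bound m / a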

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.

Relevance:

20.00%

Publisher:

Abstract:

Optical parametric chirped-pulse amplification with different pump wavelengths was investigated using an LBO crystal, at a signal central wavelength of 800 nm. According to our theoretical simulation, when the pump wavelength is 492.5 nm there is a maximal gain bandwidth of 190 nm centered at 805 nm at the optimal noncollinear angle in LBO. Presently, a pump wavelength of 492.5 nm can be obtained from second-harmonic generation of a Yb:Sr5(PO4)3F laser. The broad gain bandwidth can fully support pulses of about 6 fs with the seed spectral centre at 800 nm. The deviation from the optimal noncollinear angle can be compensated by accurately tuning the crystal angle for phase matching. The gain spectrum with a pump wavelength of 492.5 nm is much better than those with pump wavelengths of 400, 526.5 and 532 nm, at a signal centre of 800 nm. (c) 2005 Elsevier B.V. All rights reserved.

Relevance:

20.00%

Publisher:

Abstract:

We show that the peak intensity of single attosecond x-ray pulses is enhanced by 1 or 2 orders of magnitude, the pulse duration is greatly compressed, and the optimal propagation distance is shortened by genetic-algorithm optimization of the chirp and initial phase of 5 fs laser pulses. However, as the laser intensity increases, more efficient nonadiabatic self-phase matching can lead to a dramatically enhanced harmonic yield, and the optimization becomes less effective at enhancing and compressing the generated attosecond pulses. (c) 2006 Optical Society of America.

Relevance:

20.00%

Publisher:

Abstract:

We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
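
For reference, a minimal sketch of the kind of energy mentioned above (not the thesis's parameterization machinery): the Biot strain is S - I, where S is the symmetric factor of the polar decomposition F = RS of the deformation gradient, and a simple quadratic energy density in it is mu*||S - I||_F^2 + (lambda/2)*tr(S - I)^2. The code evaluates that density for a single 2D deformation gradient via the SVD; the Lamé parameters and the test matrices are assumptions.

    import numpy as np

    def biot_energy_density(F, mu=1.0, lam=1.0):
        """Quadratic energy in the Biot strain S - I, with F = R S the polar decomposition."""
        U, sigma, Vt = np.linalg.svd(F)
        S = Vt.T @ np.diag(sigma) @ Vt          # symmetric stretch factor of F
        E_biot = S - np.eye(F.shape[0])         # Biot strain measure
        return mu * np.sum(E_biot ** 2) + 0.5 * lam * np.trace(E_biot) ** 2

    # Assumed test deformation gradient: a stretch plus a bit of shear.
    F = np.array([[1.2, 0.1],
                  [0.0, 0.9]])
    print(biot_energy_density(F))

    # Sanity check: a pure rotation carries no Biot strain, so its energy is ~0.
    theta = 0.3
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    print(biot_energy_density(R))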

Relevance:

20.00%

Publisher:

Abstract:

A general framework for multi-criteria optimal design is presented which is well-suited for automated design of structural systems. A systematic computer-aided optimal design decision process is developed which allows the designer to rapidly evaluate and improve a proposed design by taking into account the major factors of interest related to different aspects such as design, construction, and operation.

The proposed optimal design process requires the selection of the most promising choice of design parameters from a large design space, based on an evaluation using specified criteria. The design parameters specify a particular design, and so they relate to member sizes, structural configuration, etc. The evaluation of the design uses performance parameters which may include structural response parameters, risks due to uncertain loads and modeling errors, construction and operating costs, etc. Preference functions are used to implement the design criteria in a "soft" form. These preference functions give a measure of the degree of satisfaction of each design criterion. The overall evaluation measure for a design is built up from the individual measures for each criterion through a preference combination rule. The goal of the optimal design process is to obtain the design with the highest overall evaluation measure, which is an optimization problem.
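
To make the evaluation step concrete, the sketch below shows one simple way such "soft" preference functions and a combination rule could look: each performance parameter is mapped to a degree of satisfaction in [0, 1], and the overall evaluation measure is a weighted product of the individual measures. The particular preference shapes, weights and performance values are illustrative assumptions, not the forms prescribed by the framework.

    def preference(value, ideal, unacceptable):
        """Soft preference: 1 at the ideal value, falling linearly to 0 at the unacceptable one."""
        t = (value - ideal) / (unacceptable - ideal)
        return min(1.0, max(0.0, 1.0 - t))

    # Assumed performance parameters of one candidate structural design.
    performance = {"peak_drift_ratio": 0.011, "material_cost": 420.0, "failure_prob": 2e-4}

    criteria = {  # (ideal, unacceptable, weight) -- all illustrative
        "peak_drift_ratio": (0.005, 0.020, 0.4),
        "material_cost":    (300.0, 600.0, 0.3),
        "failure_prob":     (1e-5,  1e-3,  0.3),
    }

    # Weighted-product combination rule: overall = product of pref_i ** weight_i.
    overall = 1.0
    for name, (ideal, bad, weight) in criteria.items():
        overall *= preference(performance[name], ideal, bad) ** weight

    print("overall evaluation measure:", round(overall, 3))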

Genetic algorithms are stochastic optimization methods based on evolutionary theory. They provide the exploration capability needed to search high-dimensional design spaces for these optimal solutions. Two special genetic algorithms, hGA and vGA, are presented here for continuous and discrete optimization problems, respectively.
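
For readers unfamiliar with the machinery, a bare-bones real-coded genetic algorithm is sketched below as a generic illustration (it is not the hGA or vGA formulation of this work): a population of candidate parameter vectors is evolved by tournament selection, blend crossover and Gaussian mutation toward higher evaluation scores. The fitness function and all hyperparameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def fitness(x):
        """Assumed stand-in for the overall evaluation measure (higher is better)."""
        return -np.sum((x - 0.3) ** 2)

    POP, DIM, GENS = 40, 4, 100
    pop = rng.uniform(0.0, 1.0, size=(POP, DIM))   # candidate design parameters in [0, 1]

    for _ in range(GENS):
        scores = np.array([fitness(ind) for ind in pop])
        children = []
        for _ in range(POP):
            i, j = rng.integers(0, POP, 2), rng.integers(0, POP, 2)   # two binary tournaments
            p1 = pop[i[np.argmax(scores[i])]]
            p2 = pop[j[np.argmax(scores[j])]]
            alpha = rng.uniform(0.0, 1.0, DIM)     # blend crossover
            child = alpha * p1 + (1.0 - alpha) * p2
            child += rng.normal(0.0, 0.02, DIM)    # Gaussian mutation
            children.append(np.clip(child, 0.0, 1.0))
        pop = np.array(children)

    best = max(pop, key=fitness)
    print(np.round(best, 3))   # approaches the assumed optimum of 0.3 in every coordinate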

The methodology is demonstrated with several examples involving the design of truss and frame systems. These examples are solved by using the proposed hGA and vGA.