892 results for ISE and ITSE optimization
Abstract:
This paper presents a lower bound limit analysis approach for solving an axisymmetric stability problem by using the Drucker-Prager (D-P) yield cone in conjunction with finite elements and nonlinear optimization. In principal stress space, the tip of the yield cone has been smoothed by applying a hyperbolic approximation. The nonlinear optimization has been performed by employing an interior point method based on the logarithmic barrier function. A new proposal has also been given to simulate the D-P yield cone with the Mohr-Coulomb hexagonal yield pyramid. For the sake of illustration, bearing capacity factors N_c, N_q and N_gamma have been computed, as a function of phi, both for smooth and rough circular foundations. The results obtained from the analysis compare quite well with the solutions reported in the literature.
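A minimal numerical sketch of the apex smoothing described above (function names and parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def dp_yield(I1, J2, alpha, k):
    # Classical Drucker-Prager yield function f = alpha*I1 + sqrt(J2) - k;
    # its gradient is undefined at the cone tip, where J2 = 0.
    return alpha * I1 + np.sqrt(J2) - k

def dp_yield_hyperbolic(I1, J2, alpha, k, a):
    # Hyperbolic approximation f = alpha*I1 + sqrt(J2 + a**2) - k:
    # the parameter a > 0 rounds off the apex, making f smooth everywhere;
    # the original cone is recovered in the limit a -> 0.
    return alpha * I1 + np.sqrt(J2 + a**2) - k
```

Because the smoothed function is differentiable even at J2 = 0, gradient-based interior point solvers can be applied directly.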
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs a point-wise, stencil, reduction, or data-dependent operation on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also demand high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
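A toy sketch of the two most common stage types mentioned above (point-wise and stencil), written in plain NumPy rather than PolyMage's DSL; the names are illustrative:

```python
import numpy as np

def brighten(img, gain):
    # Point-wise stage: each output pixel depends only on one input pixel.
    return np.clip(img * gain, 0.0, 1.0)

def box_blur(img):
    # 3x3 stencil stage: each output pixel reads a small neighborhood,
    # which creates the memory-bandwidth pressure the paper targets.
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.zeros_like(img)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += p[dy:dy + h, dx:dx + w]
    return out / 9.0

# A two-stage pipeline: stages chained as a small dataflow graph.
image = np.random.default_rng(0).random((64, 64))
result = box_blur(brighten(image, 1.2))
```

Fusing the two stages so the blur consumes brightened tiles while they are still in cache is exactly the kind of transformation the polyhedral framework automates.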
Abstract:
In this paper we address the problem of the separation and recovery of convolutively mixed autoregressive processes in a Bayesian framework. Solving this problem requires the ability to solve integration and/or optimization problems over complicated posterior distributions. We thus propose efficient stochastic algorithms based on Markov chain Monte Carlo (MCMC) methods. We present three algorithms. The first is a classical Gibbs sampler that generates samples from the posterior distribution. The other two are stochastic optimization algorithms that allow one to optimize either the marginal distribution of the sources, or the marginal distribution of the parameters of the sources and mixing filters, conditional upon the observation. Simulations are presented.
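A minimal Gibbs sampler of the kind the paper's first algorithm builds on, shown here for a toy bivariate Gaussian target rather than the paper's source-separation posterior:

```python
import numpy as np

def gibbs_bivariate_normal(rho, n_samples, rng):
    # Target: zero-mean bivariate normal with unit variances and
    # correlation rho. Each full conditional is available in closed form,
    # x_i | x_j ~ N(rho * x_j, 1 - rho**2), so we alternate the two draws.
    x = np.zeros(2)
    samples = np.empty((n_samples, 2))
    sd = np.sqrt(1.0 - rho ** 2)
    for t in range(n_samples):
        x[0] = rng.normal(rho * x[1], sd)
        x[1] = rng.normal(rho * x[0], sd)
        samples[t] = x
    return samples

chain = gibbs_bivariate_normal(0.8, 20000, np.random.default_rng(42))
```

After discarding a burn-in, the empirical correlation of the chain converges to the target's correlation; the same sweep structure extends to the sources and filter parameters of the mixing model.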
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~ 10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(-2R)) is injected into the interferometer's output port, the SQL can be beaten with a much reduced laser power: h/h_SQL ~ √(W_circ^SQL / (W_circ e^(2R))). For realistic parameters (e^(2R) ≃ 10 and W_circ ≃ 800 to 2000 kW), the SQL can be beaten by a factor of ~ 3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
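Reading e^(2R) ≃ 10 as the power-squeeze factor and the strain ratio as scaling like h/h_SQL ~ √(W_circ^SQL / (W_circ e^(2R))), the quoted improvement factors can be checked with back-of-the-envelope arithmetic (a sketch under those assumptions, not code from the thesis):

```python
import math

def sql_beat_factor(w_circ_kw, squeeze=1.0, w_sql_kw=800.0):
    # Improvement over the standard quantum limit:
    # h_SQL / h = sqrt(squeeze * W_circ / W_circ^SQL),
    # where squeeze = e^{2R} is the power-squeeze factor and
    # W_circ^SQL ~ 800 kW is the LIGO-II circulating power.
    return math.sqrt(squeeze * w_circ_kw / w_sql_kw)

no_squeeze = sql_beat_factor(800.0)               # just reaches the SQL
low_power = sql_beat_factor(800.0, squeeze=10.0)  # ~3.2
high_power = sql_beat_factor(2000.0, squeeze=10.0)
```

With W_circ between 800 and 2000 kW and a factor-10 squeeze, the raw scaling gives factors of roughly 3 to 5, in the range of the ~3 to 4 quoted above once the narrowbanding caveat is accounted for.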
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the H → VV' → 4l process, and the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings, and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher order CP-odd couplings to be compatible with zero, and in the range [−0.40, 0.43], and the mixing between the HZZ tree-level coupling and higher order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; namely, compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb⁻¹, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
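The core of such an analysis is an unbinned likelihood that is continuous in a mixing parameter. A stripped-down one-dimensional toy, with two Gaussian component densities standing in for the 8-dimensional signal and background pdfs (all names and numbers illustrative):

```python
import numpy as np

def nll(f, x, p0, p1):
    # Unbinned negative log-likelihood for the mixture (1-f)*p0 + f*p1.
    return -np.sum(np.log((1.0 - f) * p0(x) + f * p1(x)))

def gaussian_pdf(mu, sigma):
    return lambda x: np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(7)
p0, p1 = gaussian_pdf(0.0, 1.0), gaussian_pdf(2.0, 1.0)
f_true, n = 0.3, 5000
is_p1 = rng.random(n) < f_true
data = np.where(is_p1, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

# Scan the likelihood over the mixing fraction and take the minimum.
grid = np.linspace(0.01, 0.99, 99)
f_hat = grid[np.argmin([nll(f, data, p0, p1) for f in grid])]
```

The fitted fraction lands close to the true value; the thesis's likelihood plays the same role with couplings as parameters and detector transfer functions convolved into the component densities.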
Abstract:
Reducing energy consumption is a major challenge for "energy-intensive" industries such as papermaking. A commercially viable energy-saving solution is to employ data-based optimization techniques to obtain a set of "optimized" operational settings that satisfy certain performance indices. The difficulties are: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might result in compromising other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolation incurs the risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. Then a multiobjective gradient descent algorithm is used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in a MATLAB-based interactive tool, DataExplorer, supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two U.K.-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method.
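A minimal sketch of a preference-weighted multiobjective gradient step of the kind mentioned above (the weighting scheme and names are assumptions, not the paper's exact algorithm):

```python
import numpy as np

def mo_gradient_descent(grad_fns, x0, weights, lr=0.1, steps=200):
    # Descend a preference-weighted sum of objective gradients; the
    # weights encode the user's trade-off between the objectives.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = sum(w * grad(x) for w, grad in zip(weights, grad_fns))
        x = x - lr * g
    return x

# Toy: two conflicting quadratics, minimized at x = 0 and x = 2.
grad_energy = lambda x: 2.0 * x            # d/dx of x**2
grad_quality = lambda x: 2.0 * (x - 2.0)   # d/dx of (x - 2)**2
x_star = mo_gradient_descent([grad_energy, grad_quality], [5.0], [0.5, 0.5])
```

With equal weights the iterate settles at x = 1, the even compromise; shifting the weights moves the solution along the trade-off curve, which is how user preference steers the search.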
Abstract:
This paper presents the development and application of a multi-objective optimization framework for the design of two-dimensional multi-element high-lift airfoils. An innovative and efficient optimization algorithm, namely Multi-Objective Tabu Search (MOTS), has been selected as the core of the framework. The flow field around the multi-element configuration is simulated using the commercial computational fluid dynamics (CFD) suite ANSYS CFX. Element shapes and deployment settings have been considered as design variables in the optimization of the Garteur A310 airfoil, as presented here. A validation and verification process of the CFD simulation for the Garteur airfoil is performed using available wind tunnel data. Two design examples are presented in this study: a single-point optimization aiming at concurrently increasing the lift and drag performance of the test case at a fixed angle of attack, and a multi-point optimization. The latter aims at introducing operational robustness and off-design performance into the design process. Finally, the performance of the MOTS algorithm is assessed by comparison with the leading NSGA-II (Non-dominated Sorting Genetic Algorithm) optimization strategy. An equivalent framework developed by the authors within the industrial sponsor's environment is used for the comparison. To eliminate CFD solver dependencies, three optimum solutions from the Pareto-optimal set have been cross-validated. As a result of this study, MOTS has been demonstrated to be an efficient and effective algorithm for aerodynamic optimization. Copyright © 2012 Tech Science Press.
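Both MOTS and NSGA-II ultimately report a Pareto-optimal set; the dominance test underneath is simple (minimization convention, illustrative code):

```python
def dominates(a, b):
    # a dominates b if it is no worse in every objective and strictly
    # better in at least one (all objectives to be minimized).
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # Keep only the non-dominated objective vectors.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Toy objective pairs, e.g. (drag, -lift): (3.0, 4.0) is dominated
# by (2.0, 3.0), which is no worse in both objectives.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)
```

Cross-validating a few members of this front in an independent CFD solver, as the paper does, guards against the front merely exploiting solver artifacts.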
Abstract:
Lattice materials are characterized at the microscopic level by a regular pattern of voids confined by walls. Recent rapid prototyping techniques allow their manufacture from a wide range of solid materials, ensuring high degrees of accuracy and limited costs. The microstructure of lattice materials yields macroscopic properties and structural performance, such as very high stiffness-to-weight ratios, high anisotropy, high specific energy dissipation capability, and an extended elastic range, which cannot be attained by uniform materials. Among several applications, lattice materials are of special interest for the design of morphing structures, energy-absorbing components, and hard-tissue scaffolds for biomedical prostheses. Their macroscopic mechanical properties can be finely tuned by properly selecting the lattice topology and the material of the walls. Nevertheless, since the number of design parameters involved is very high, and their correlation to the final macroscopic properties of the material is quite complex, reliable and robust multiscale mechanics analysis and design optimization tools are a necessary aid for their practical application. In this paper, the optimization of lattice material parameters is illustrated with reference to the design of a bracket subjected to a point load. Given the geometric shape and the boundary conditions of the component, the parameters of four selected topologies have been optimized to concurrently maximize the component stiffness and minimize its mass. Copyright © 2011 by ASME.
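As a toy illustration of the stiffness/mass trade-off being optimized, one can sweep the lattice relative density under the classical Gibson-Ashby scaling for bending-dominated lattices, E*/E_s ≈ C (ρ*/ρ_s)² — a standard textbook relation, not the paper's multiscale model:

```python
def relative_stiffness(rel_density, C=1.0):
    # Gibson-Ashby scaling for bending-dominated lattices:
    # E*/E_s ~ C * (rho*/rho_s)**2.
    return C * rel_density ** 2

# Sweep candidate relative densities; mass scales with rel_density,
# so each point is one (mass proxy, stiffness proxy) trade-off.
candidates = [0.1, 0.2, 0.3, 0.4]
trade_offs = [(rd, relative_stiffness(rd)) for rd in candidates]
```

Under this monotone scaling, stiffness can only be bought with mass; the richer trade-offs the paper explores arise because topology choice and wall parameters shift the scaling constant and exponent.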
Abstract:
The optimization of the organic modifier concentration in micellar electrokinetic capillary chromatography (MECC) has been achieved by a uniform design and iterative optimization method, originally developed for optimizing the composition of the mobile phase in high performance liquid chromatography. In the proposed method, the uniform design technique is applied to design the starting experiments, which reduces the number of experiments compared with traditional simultaneous methods such as the orthogonal design. The hierarchical chromatographic response function has been modified to evaluate the separation quality of a chromatogram in MECC. An iterative procedure has been adopted to search for the optimal concentration of organic modifiers, improving the accuracy of retention prediction and the quality of the chromatogram. The validity of the optimization method has been demonstrated by the separation of 31 aromatic compounds in MECC. (C) 2000 John Wiley & Sons, Inc.
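The uniform design step can be sketched with the standard good-lattice-point construction (the run count and generator choice here are illustrative, not the paper's design):

```python
from math import gcd

def uniform_design(n, generators):
    # Good-lattice-point uniform design: run i, factor j takes level
    # ((i * h_j) mod n), centered and scaled into (0, 1). Each generator
    # must be coprime with n so every column visits every level exactly once.
    assert all(gcd(h, n) == 1 for h in generators)
    return [[(((i * h) % n) + 0.5) / n for h in generators]
            for i in range(n)]

# A 7-run design in two factors (e.g. two organic-modifier concentrations).
design = uniform_design(7, (1, 3))
```

Compared with a full factorial grid, the 7 runs spread evenly over the factor space, which is what lets the iterative procedure start from far fewer experiments.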