123 results for Minimization of open stack problem
Abstract:
A modified linear prediction (MLP) method is proposed in which the reference sensor is optimally located on the extended line of the array. The criterion of optimality is the minimization of the prediction error power, where the prediction error is defined as the difference between the reference sensor output and the weighted array outputs. It is shown that the L2-norm of the least-squares array weights attains a minimum value for the optimum spacing of the reference sensor, subject to a soft constraint on the signal-to-noise ratio (SNR). It is then described how this minimum-norm property can be used to find the optimum spacing of the reference sensor. The performance of the MLP method is studied and compared with that of the linear prediction (LP) method using resolution, detection bias, and variance as the performance measures. The study reveals that the MLP method performs much better than the LP technique.
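As an illustration of the minimum-norm idea, the sketch below scans candidate reference-sensor spacings on the extended array line and picks the one whose least-squares prediction weights have the smallest L2 norm. The array geometry, signal model, SNR, and grid of candidate spacings are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

M, N = 8, 2000            # array sensors, snapshots (assumed)
theta = np.deg2rad(20.0)  # assumed source direction
snr_db = 10.0             # assumed SNR

d = 0.5                                     # inter-sensor spacing (wavelengths)
pos = d * np.arange(M)                      # sensor positions on a line
s = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sigma_n = 10 ** (-snr_db / 20)

def snapshots(positions):
    """Narrowband plane-wave snapshots plus white noise at the given positions."""
    a = np.exp(2j * np.pi * positions[:, None] * np.sin(theta))
    noise = sigma_n * (rng.standard_normal((len(positions), N))
                       + 1j * rng.standard_normal((len(positions), N))) / np.sqrt(2)
    return a * s + noise

X = snapshots(pos)                          # array outputs (M x N)

best = None
for d_ref in np.linspace(0.1, 2.0, 40):     # candidate spacings beyond the last sensor
    x_ref = snapshots(np.array([pos[-1] + d_ref]))[0]   # reference-sensor output
    w, *_ = np.linalg.lstsq(X.T, x_ref, rcond=None)     # least-squares prediction weights
    norm_w = np.linalg.norm(w)
    if best is None or norm_w < best[1]:
        best = (d_ref, norm_w)

print(f"spacing with minimum weight norm: {best[0]:.3f} wavelengths")
```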
Abstract:
Extensive research work has been carried out in the last few years on the synthesis and characterization of several families of open-framework materials, including aluminosilicates,[1] phosphates,[2] and carboxylates.[3] These studies have shown the occurrence of a variety of three-dimensional (3D) architectures containing channels and other features.
Abstract:
In this article, ultrasonic wave propagation in a graphene sheet is studied using nonlocal elasticity theory, which incorporates small-scale effects. The graphene sheet is modeled as an isotropic plate one atom thick. For this model, the nonlocal governing differential equations of motion are derived from the minimization of the total potential energy of the entire system. An ultrasonic wave propagation model is also derived for the graphene sheet. The nonlocal scale parameter introduces a band-gap region in the in-plane and flexural wave modes where no wave propagation occurs. This is manifested in the wavenumber plots as the region where the wavenumber tends to infinity or the wave speed tends to zero. The frequency at which this phenomenon occurs is called the escape frequency. Explicit expressions for the cutoff and escape frequencies are derived. The escape frequencies arise mainly because of the nonlocal elasticity and are therefore functions of the nonlocal scaling parameter. They are also found to be independent of the y-directional wavenumber, which means that, for any type of nanostructure, the escape frequencies are purely a function of the nonlocal scaling parameter and are independent of the geometry of the structure. The cutoff frequencies, in contrast, are found to be functions of both the nonlocal scaling parameter (e0a) and the y-directional wavenumber (ky). For a given nanostructure, the nonlocal small-scale coefficient can be obtained by matching the results from molecular dynamics (MD) simulations with those of the nonlocal elasticity calculations. At that value of the nonlocal scale coefficient, the waves will propagate in the nanostructure at the corresponding cutoff frequency. In the present paper, different values of e0a are used; the exact e0a for a given graphene sheet can be obtained by matching MD simulation results for graphene with the results presented in this paper. (C) 2010 Elsevier B.V. All rights reserved.
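As a hedged illustration of how an escape frequency can arise from the nonlocal scale parameter, the sketch below uses the standard one-dimensional Eringen-type dispersion form ω = c·k / sqrt(1 + (e0a·k)²); this simple form, the wave speed, and the e0a values are assumptions for illustration, not the in-plane and flexural relations derived in the paper.

```python
import numpy as np

c = 1.0e4                               # assumed wave speed (m/s), placeholder value
e0a_values = [0.5e-9, 1.0e-9, 2.0e-9]   # assumed nonlocal scale parameters e0*a (m)
k = np.linspace(1e6, 1e10, 2000)        # wavenumber range (1/m)

for e0a in e0a_values:
    # standard 1D nonlocal (Eringen-type) dispersion: omega saturates as k -> infinity
    omega = c * k / np.sqrt(1.0 + (e0a * k) ** 2)
    escape = c / e0a                    # limiting (escape) frequency, independent of k
    print(f"e0a = {e0a:.1e} m: escape frequency ~ {escape:.3e} rad/s, "
          f"max omega on grid = {omega.max():.3e} rad/s")
```

In this assumed relation, ω saturates at c/e0a as k grows without bound, so the limiting frequency depends only on the nonlocal scaling parameter and not on the wavenumber or geometry, mirroring the qualitative behaviour described above.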
Abstract:
In this paper, we present a generic method/model for the multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector-evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientation of the layers), and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated based on three failure criteria: a failure-mechanism-based criterion, the maximum-stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated in comparison with other nature-inspired techniques, including Particle Swarm Optimization (PSO), Artificial Immune System (AIS), and Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
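A minimal sketch of the discrete design space and the two competing objectives is given below; it is not the VEABC algorithm, and the materials, allowables, loads, and the crude strength screen are hypothetical placeholders standing in for classical lamination theory and the three failure criteria.

```python
from itertools import product

# hypothetical materials: (name, density kg/m^3, cost $/kg, allowable stress Pa)
materials = [("carbon/epoxy", 1600.0, 60.0, 800e6),
             ("glass/epoxy",  1900.0, 15.0, 450e6)]
area = 1.0                           # panel area, m^2 (assumed)
N_load = 1.2e6                       # applied uniaxial load, N/m (assumed)

layer_counts = range(4, 33, 4)       # discrete variable: number of plies
thicknesses = [0.125e-3, 0.25e-3]    # discrete variable: ply thickness, m

designs = []
for (name, rho, cost_kg, sigma_allow), n, t in product(materials, layer_counts, thicknesses):
    total_t = n * t
    # crude max-stress style screen (placeholder for CLT plus the three failure criteria)
    if N_load / total_t > sigma_allow:
        continue
    weight = rho * area * total_t
    cost = cost_kg * weight
    designs.append((weight, cost, name, n, t))

# keep the non-dominated (Pareto) designs in (weight, cost)
pareto = [d for d in designs
          if not any(o[0] <= d[0] and o[1] <= d[1] and (o[0], o[1]) != (d[0], d[1])
                     for o in designs)]
for w, c, name, n, t in sorted(pareto):
    print(f"{name}: {n:2d} plies x {t*1e3:.3f} mm -> weight {w:.2f} kg, cost ${c:.2f}")
```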
Abstract:
Using normal-mode analysis, Rayleigh-Taylor instability is investigated for a three-layer viscous, stratified, incompressible steady flow in which the top (third) and bottom (first) layers extend to infinity and the middle layer has a small thickness δ. The wave Reynolds number in the middle layer is assumed to be sufficiently small. A dispersion relation (a seventh-degree polynomial in the wave frequency ω), valid up to the order of the maximal value of all possible K^j (j ≤ 0, where K is the wave number) in each coefficient of the polynomial, is obtained. A sufficient condition for instability is found for the first time by pursuing a medium-wavelength analysis. It depends on the ratios (α and β) of the coefficients of viscosity, the thickness of the middle layer δ, the surface-tension ratio T, and the wave number K. This is a new analytical criterion for Rayleigh-Taylor instability of three-layer fluids, and it recovers the results of the corresponding problem for two-layer fluids. Among the results obtained, it is observed that taking the coefficients of viscosity of the second and third layers to be the same can completely inhibit the effect of surface tension. For a large wave number K, the thickness of the middle layer should be correspondingly small to keep the domain of dependence of the threshold wave number Kc constant for fixed α, β and T.
Abstract:
Analytical and numerical solutions of a general problem related to the radially symmetric inward spherical solidification of a superheated melt are studied in this paper. In the radiation-convection type boundary condition, the heat transfer coefficient is taken to be time dependent and may be infinite at time t = 0; this is necessary for the initiation of instantaneous solidification of the superheated melt over its surface. The analytical solution consists of employing suitable fictitious initial temperatures and fictitious extensions of the original region occupied by the melt. The numerical solution consists of a finite-difference scheme in which the grid points move with the freezing front. The numerical scheme can handle with ease the density changes in the solid and liquid states and the shrinkage or expansion of volume due to those density changes. In the numerical results obtained for the moving boundary and temperatures, the effects of several parameters such as the latent heat, the Boltzmann constant, density ratios, and heat transfer coefficients are shown. The correctness of the numerical results has also been checked by verifying the integral heat balance at every time step.
Abstract:
A new, fast and efficient marching algorithm is introduced to solve, by the method of characteristics, the basic quasilinear hyperbolic partial differential equations describing unsteady flow in conduits. The details of the marching method are presented with an illustration of the waterhammer problem in a simple piping system for both frictional and frictionless cases. It is shown that, for the same accuracy, the new marching method requires fewer computational steps and less computer memory and time.
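For context, the following is a minimal sketch of the standard method-of-characteristics interior-node and boundary updates for the waterhammer problem (reservoir upstream, instantaneously closed valve downstream). It is the textbook scheme, not the new marching method of the paper, and the pipe data are placeholder values.

```python
import numpy as np

# assumed pipe and flow data (placeholders)
a, g = 1000.0, 9.81        # wave speed (m/s), gravity (m/s^2)
L, D, f = 600.0, 0.5, 0.018
A = np.pi * D**2 / 4
H0, Q0 = 100.0, 0.2        # reservoir head (m), initial flow (m^3/s)

N = 20                     # reaches
dx = L / N
dt = dx / a                # time step tied to the characteristic grid
B = a / (g * A)
R = f * dx / (2 * g * D * A**2)

H = np.full(N + 1, H0)     # steady friction ignored in the initial state for brevity
Q = np.full(N + 1, Q0)

for step in range(200):                 # march in time
    Hn, Qn = H.copy(), Q.copy()
    # interior nodes: intersect the C+ and C- characteristics
    CP = H[:-2] + B * Q[:-2] - R * Q[:-2] * np.abs(Q[:-2])
    CM = H[2:]  - B * Q[2:]  + R * Q[2:]  * np.abs(Q[2:])
    Hn[1:-1] = 0.5 * (CP + CM)
    Qn[1:-1] = (CP - CM) / (2 * B)
    # upstream reservoir: head fixed, use the C- characteristic
    cm0 = H[1] - B * Q[1] + R * Q[1] * np.abs(Q[1])
    Hn[0], Qn[0] = H0, (H0 - cm0) / B
    # downstream valve closed instantaneously: flow zero, use the C+ characteristic
    cpN = H[-2] + B * Q[-2] - R * Q[-2] * np.abs(Q[-2])
    Qn[-1], Hn[-1] = 0.0, cpN
    H, Q = Hn, Qn

print(f"head at the valve after {200*dt:.2f} s: {H[-1]:.1f} m")
```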
Abstract:
An implicit sub-grid scale model for large eddy simulation is presented by utilising the concept of a relaxation system for the one-dimensional Burgers' equation in a novel way. The Burgers' equation is solved for three different unsteady flow situations by varying the ratio of the relaxation parameter (ε) to the time step. The coarse-mesh results obtained with the relaxation scheme are compared with the filtered DNS solution of the same problem on a fine mesh, computed using a fourth-order CWENO discretisation in space and a third-order TVD Runge-Kutta discretisation in time. The numerical solutions obtained through the relaxation system have the same order of accuracy in space and time and closely match the filtered DNS solutions.
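A minimal sketch of a first-order relaxation (Jin-Xin type) discretisation of the one-dimensional Burgers' equation is shown below, illustrating how the ratio of the relaxation parameter ε to the time step enters the update; the grid, initial condition, relaxation speed, and parameter values are assumptions, and this is not the paper's CWENO/TVD Runge-Kutta setup.

```python
import numpy as np

# assumed grid and parameters
nx, L = 400, 2.0 * np.pi
dx = L / nx
x = np.arange(nx) * dx
a = 1.5                         # relaxation speed, chosen to dominate |f'(u)| = |u|
cfl = 0.5
dt = cfl * dx / a
eps = 0.1 * dt                  # relaxation parameter; vary the ratio eps/dt to experiment

u = np.sin(x)                   # smooth initial condition that steepens into a shock
f = lambda q: 0.5 * q * q       # Burgers flux
v = f(u)                        # start on the local equilibrium v = f(u)

def upwind_shift(w, speed):
    """First-order upwind update of w_t + speed * w_x = 0 on a periodic grid."""
    if speed > 0:
        return w - speed * dt / dx * (w - np.roll(w, 1))
    return w - speed * dt / dx * (np.roll(w, -1) - w)

for _ in range(int(1.5 / dt)):
    # transport step in the characteristic variables w± = v ± a*u (speeds ±a)
    wp = upwind_shift(v + a * u,  a)
    wm = upwind_shift(v - a * u, -a)
    u = (wp - wm) / (2.0 * a)
    v = (wp + wm) / 2.0
    # stiff relaxation step (implicit): drives v back toward the Burgers flux f(u)
    v = (v + (dt / eps) * f(u)) / (1.0 + dt / eps)

print("u range after t ~ 1.5:", u.min(), u.max())
```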
Abstract:
This work is conducted to answer the practically important question of whether the down conductors of lightning protection systems on tall towers and buildings can be electrically isolated from the structure itself. As a first step, it is presumed that a down conductor placed on a metallic tower is a pessimistic representation of the actual problem, the reasoning being that the proximity of a heavy metallic structure will have a large damping effect. The post-stroke current distributions along the down conductors and towers, which can be quite different from that in the lightning channel, govern the post-stroke near field and the resulting gradient in the soil. Also, for a reliable estimation of the actual stroke current from the measured down conductor currents, it is essential to know the current distribution characteristics along the down conductors. In view of these considerations, the present work attempts to deduce the post-stroke current and voltage distribution along typical down conductors and towers. A solution of the governing field equations on an electromagnetic model of the system is sought for the investigation. Simulations providing the spatio-temporal distribution of the post-stroke current and voltage yield very interesting results. It is concluded that it is almost impossible to achieve electrical isolation between the structure and the down conductor. Furthermore, there will be significant induction into the steel matrix of the supporting structure.
Abstract:
We present a complete solution to the problem of coherent-mode decomposition of the most general anisotropic Gaussian Schell-model (AGSM) beams, which constitute a ten-parameter family. Our approach is based on symmetry considerations. Concepts and techniques familiar from the context of quantum mechanics in the two-dimensional plane are used to exploit the Sp(4, R) dynamical symmetry underlying the AGSM problem. We take advantage of the fact that the symplectic group of first-order optical systems acts unitarily, through the metaplectic operators, on the Hilbert space of wave amplitudes over the transverse plane, and, using the Iwasawa decomposition for the metaplectic operator and the classic theorem of Williamson on the normal forms of positive definite symmetric matrices under linear canonical transformations, we demonstrate the unitary equivalence of the AGSM problem to a separable problem studied earlier by Li and Wolf [Opt. Lett. 7, 256 (1982)] and Gori and Guattari [Opt. Commun. 48, 7 (1983)]. This connection enables one to write down, almost by inspection, the coherent-mode decomposition of the general AGSM beam. A universal feature of the eigenvalue spectrum of the AGSM family is noted.
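For orientation, a minimal numerical sketch of coherent-mode decomposition is given below for a one-dimensional, isotropic Gaussian Schell-model source (not the ten-parameter anisotropic family treated in the paper); the widths are illustrative assumptions, and the modes are obtained as eigenvectors of the discretised cross-spectral density.

```python
import numpy as np

# assumed 1D Gaussian Schell-model parameters (illustrative)
sigma_s, sigma_mu = 1.0, 0.5        # intensity width and coherence width
n, span = 400, 8.0
x = np.linspace(-span, span, n)
dx = x[1] - x[0]

S = np.exp(-x**2 / (2 * sigma_s**2))                    # spectral density
X1, X2 = np.meshgrid(x, x, indexing="ij")
mu = np.exp(-(X1 - X2)**2 / (2 * sigma_mu**2))          # degree of coherence
W = np.sqrt(np.outer(S, S)) * mu                        # cross-spectral density W(x1, x2)

# coherent modes: eigen-decomposition of the (Hermitian) kernel W
eigval = np.linalg.eigvalsh(W * dx)[::-1]               # dx approximates the integral operator
print("first five eigenvalues (mode weights):", np.round(eigval[:5], 4))
print("ratio lambda_1/lambda_0:", round(eigval[1] / eigval[0], 4))
```

For a one-dimensional Gaussian Schell-model source the consecutive eigenvalues fall off geometrically, which is the kind of universal spectral feature alluded to above.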
Abstract:
This paper looks at the complexity of four different incremental problems: (1) interval partitioning of a flow graph, (2) breadth-first search (BFS) of a directed graph, (3) lexicographic depth-first search (DFS) of a directed graph, and (4) constructing the postorder listing of the nodes of a binary tree. The last problem arises from the need to incrementally compute the Sethi-Ullman (SU) ordering [1] of the subtrees of a tree after it has undergone changes of a given type. These problems are among those that claimed our attention while we were designing algorithmic techniques for incremental code generation. BFS and DFS certainly have numerous other applications, but as far as our work is concerned, incremental code generation is the common thread linking these problems. The complexity of these problems is studied from two different perspectives. The theory of incremental relative lower bounds (IRLBs) is given in [2]; we use this theory to derive the IRLBs of the first three problems. We then use the notion of a bounded incremental algorithm [4] to prove the unboundedness of the fourth problem with respect to the locally persistent model of computation. Possibly the most interesting result is the lower bound for lexicographic DFS. In [5] the author considers lexicographic DFS to be a problem for which the incremental version may require recomputation of the entire solution from scratch. In that sense, our IRLB result provides further evidence for this possibility, with the proviso that the incremental DFS algorithms considered are ones that do not require too much preprocessing.
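As background for problem (4), a minimal sketch of the classical (batch, non-incremental) Sethi-Ullman labelling and a postorder listing of a binary expression tree is given below; the tree and node names are illustrative.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    name: str
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    label: int = 0              # Sethi-Ullman number (registers needed)

def su_label(t: Optional[Node]) -> int:
    """Classical Sethi-Ullman labelling: a leaf needs 1 register; an internal node
    needs the max of its children's labels if they differ, otherwise one more."""
    if t is None:
        return 0
    if t.left is None and t.right is None:
        t.label = 1
        return 1
    l, r = su_label(t.left), su_label(t.right)
    t.label = max(l, r) if l != r else l + 1
    return t.label

def postorder(t: Optional[Node], out: List[str]) -> List[str]:
    """Postorder listing of the nodes (children before parent)."""
    if t is not None:
        postorder(t.left, out)
        postorder(t.right, out)
        out.append(t.name)
    return out

# example expression tree: (a + b) * (c - d)
tree = Node("*", Node("+", Node("a"), Node("b")), Node("-", Node("c"), Node("d")))
print("registers needed:", su_label(tree))          # -> 3
print("postorder:", postorder(tree, []))            # -> ['a','b','+','c','d','-','*']
```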
Abstract:
An analytical method is developed for solving an inverse problem for Helmholtz's equation associated with two semi-infinite incompressible fluids of different variable refractive indices, separated by a plane interface. The unknowns of the inverse problem are: (i) the refractive indices of the two fluids, (ii) the ratio of the densities of the two fluids, and (iii) the strength of an acoustic source assumed to be situated at the interface of the two fluids. These are determined from the pressure on the interface produced by the acoustic source. The effect of the surface tension force at the interface is taken into account. The application of the proposed analytical method to the inverse problem is illustrated with several examples. In particular, exact solutions of two direct problems are first derived using standard classical methods; these are then used in the proposed inverse method to recover the unknowns of the corresponding inverse problems. The results are found to be in excellent agreement.
Abstract:
In this paper, we consider the robust design of a MIMO-relay precoder and receive filters for the destination nodes in a non-regenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a single MIMO-relay node. The source and destination nodes are single-antenna nodes, whereas the MIMO-relay node has multiple transmit and multiple receive antennas. The channel state information (CSI) available at the MIMO-relay node for precoding purposes is assumed to be imperfect. We assume that the norms of the errors in the CSI are upper-bounded and that the MIMO-relay node knows these bounds. We consider the robust design of the MIMO-relay precoder and receive filter based on the minimization of the total MIMO-relay transmit power with constraints on the mean square error (MSE) at the destination nodes. We show that this design problem can be solved by solving an alternating sequence of minimization and worst-case analysis problems. The minimization problem is formulated as a convex optimization problem that can be solved efficiently using interior-point methods. The worst-case analysis problem can be solved analytically using an approximation for the MSEs at the destination nodes. We demonstrate the robust performance of the proposed design through simulations.
Abstract:
Reduction of carbon emissions is of paramount importance in the context of global warming. Countries and global companies are now engaged in understanding systematic ways of achieving well-defined emission targets; in fact, carbon credits have become significant and strategic instruments of finance for countries and global companies. In this paper, we formulate and suggest a solution to the carbon allocation problem, which involves determining a cost-minimizing allocation of carbon credits among different emitting agents. We address this problem in the context of a global company faced with the challenge of determining an allocation of carbon credit caps among its divisions in a cost-effective way. The problem is formulated as a reverse auction in which the company plays the role of a buyer or carbon planning authority, and the different divisions within the company are the emitting agents that specify cost curves for carbon credit reductions. Two natural variants of the problem are considered: (a) with an unlimited budget and (b) with a limited budget. Suitable assumptions are made on the cost curves, and in each of the two cases we show that the resulting formulation is a knapsack problem that can be solved optimally using a greedy heuristic. The solution of the allocation problem provides critical decision support to global companies seriously engaged in green programs.
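A minimal sketch of the greedy idea follows, with each division's cost curve expressed as blocks of reduction at increasing per-unit cost; the divisions, numbers, and the block representation are hypothetical illustrations rather than the paper's exact formulation.

```python
# each division offers blocks of emission reduction (tonnes) at a per-tonne cost
# (hypothetical numbers; increasing block prices mimic convex cost curves)
offers = {
    "division_A": [(100, 5.0), (100, 9.0), (100, 15.0)],   # (tonnes, $/tonne)
    "division_B": [(150, 4.0), (150, 11.0)],
    "division_C": [(80, 7.0), (80, 12.0)],
}
target = 400           # total reduction to allocate (tonnes)
budget = 3500.0        # set to float('inf') for the unlimited-budget variant

# greedy: take the cheapest available blocks first
blocks = sorted((price, qty, div) for div, curve in offers.items()
                for qty, price in curve)

allocated, spent, plan = 0, 0.0, {}
for price, qty, div in blocks:
    if allocated >= target or spent >= budget:
        break
    affordable = (budget - spent) / price if price > 0 else qty
    take = int(min(qty, target - allocated, affordable))
    if take <= 0:
        continue
    allocated += take
    spent += take * price
    plan[div] = plan.get(div, 0) + take

print(f"allocated {allocated} t for ${spent:.2f}: {plan}")
```

Setting `budget` to `float('inf')` gives the unlimited-budget variant; taking the cheapest reductions first mirrors the greedy heuristic described above.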
Abstract:
An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion, and axial deformation. The objective of the improved design is to reduce the vibratory loads at the rotor hub, which are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design of the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis and optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry. (C) 2002 Elsevier Science Ltd. All rights reserved.
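A minimal sketch of fitting a second-order polynomial response surface by least squares to samples arranged in a central-composite-style pattern is given below; the two design variables, the sample levels, and the stand-in objective are illustrative placeholders, not the aeroelastic hub-load model.

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(1)

def quadratic_features(X):
    """Design matrix for a full second-order polynomial: 1, x_i, x_i*x_j."""
    n, d = X.shape
    cols = [np.ones(n)] + [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i, j in combinations_with_replacement(range(d), 2)]
    return np.column_stack(cols)

# hypothetical stand-in for the expensive aeroelastic analysis (2 stiffness variables)
def expensive_objective(X):
    x1, x2 = X[:, 0], X[:, 1]
    return 1.0 + 0.3 * x1 - 0.2 * x2 + 0.5 * x1**2 + 0.1 * x1 * x2 + 0.4 * x2**2

# central-composite-style samples around the baseline design (scaled variables)
axial = 1.414
X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1],                 # factorial points
              [-axial, 0], [axial, 0], [0, -axial], [0, axial],   # axial points
              [0, 0]])                                            # centre point
y = expensive_objective(X)

beta, *_ = np.linalg.lstsq(quadratic_features(X), y, rcond=None)

# the fitted surface can now stand in for the analysis inside an optimizer
X_test = rng.uniform(-1, 1, size=(5, 2))
pred = quadratic_features(X_test) @ beta
print("surrogate vs. true:", np.round(pred, 3), np.round(expensive_objective(X_test), 3))
```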