183 results for Two-Level Optimization


Relevance:

30.00%

Publisher:

Abstract:

We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. In the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. In a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA outperform OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA shows the best overall performance. Unlike most other DPSO algorithms in the literature, our algorithms are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
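
The abstract describes the algorithms only at a high level. As a rough illustration of the idea, here is a minimal sketch assuming a Rademacher-perturbation SPSA estimator and a randomized-rounding projection onto an integer grid (the paper's random projection technique is not specified here, so the projection rule below is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def spsa_discrete(cost, theta0, grid_lo, grid_hi, iters=200, a=0.5, c=1.0):
    """SPSA-style search over an integer grid.

    cost             : noisy scalar cost estimator, e.g. a short simulation run
    theta0           : initial integer parameter vector
    grid_lo, grid_hi : elementwise bounds of the discrete set
    """
    theta = np.asarray(theta0, dtype=float)
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.25
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        g = (cost(np.rint(theta + ck * delta)) -
             cost(np.rint(theta - ck * delta))) / (2 * ck) / delta
        theta = theta - ak * g
        # randomized projection back to the discrete grid: round each
        # coordinate up or down with probability given by its fractional part
        frac = theta - np.floor(theta)
        theta = np.floor(theta) + (rng.random(theta.shape) < frac)
        theta = np.clip(theta, grid_lo, grid_hi)
    return theta.astype(int)

# toy usage: minimize a noisy quadratic over {0, ..., 100}^2
f = lambda x: np.sum((x - 37) ** 2) + rng.normal(scale=5.0)
print(spsa_discrete(f, [50, 50], 0, 100))
```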

Relevance:

30.00%

Publisher:

Abstract:

A three-level inverter produces six active vectors at each of the normalized magnitudes 1, 0.866, and 0.5, besides a zero vector. The vectors of relative length 0.5 are termed pivot vectors. The three nearest voltage vectors are usually used to synthesize the reference vector. In most continuous pulse-width modulation (PWM) schemes, the switching sequence begins from a pivot vector and ends with the same pivot vector; thus, the pivot vector is applied twice in a subcycle or half-carrier cycle. This paper proposes and investigates alternative switching sequences, which use the pivot vector only once but employ one of the other two vectors twice within the subcycle. The total harmonic distortion (THD) in the fundamental line current pertaining to these novel sequences is studied theoretically as well as experimentally over the whole range of modulation. Compared with centered space vector PWM, two of the proposed sequences lead to reduced THD at high modulation indices for a given average switching frequency.
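
For context, synthesizing the reference vector from three chosen vectors reduces to a volt-second balance, which the sketch below solves directly; the example vectors and reference are illustrative choices in per unit, not taken from the paper:

```python
import numpy as np

def dwell_fractions(v1, v2, v3, vref):
    """Per-unit dwell times d1, d2, d3 such that
    d1*v1 + d2*v2 + d3*v3 = vref and d1 + d2 + d3 = 1 (volt-second balance).
    Vectors are complex space-vector values."""
    A = np.array([[v1.real, v2.real, v3.real],
                  [v1.imag, v2.imag, v3.imag],
                  [1.0, 1.0, 1.0]])
    b = np.array([vref.real, vref.imag, 1.0])
    return np.linalg.solve(A, b)

# example: a pivot vector (0.5), a medium vector (0.866), and a large vector
# (1.0) of one sector, with magnitudes as given in the abstract
v_pivot = 0.5 * np.exp(1j * 0.0)
v_med = 0.866 * np.exp(1j * np.pi / 6)
v_large = 1.0 * np.exp(1j * 0.0)
vref = 0.7 * np.exp(1j * np.pi / 12)
print(dwell_fractions(v_pivot, v_med, v_large, vref))  # fractions sum to 1
```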

Relevance:

30.00%

Publisher:

Abstract:

Topology optimization methods have been shown to have extensive application in the design of microsystems. However, their utility in practical situations is restricted to predominantly planar configurations because most microfabrication techniques cannot realize structures with arbitrary topologies in the direction perpendicular to the substrate. This study addresses the problem of synthesizing optimal topologies in the out-of-plane direction while obeying the constraints imposed by surface micromachining. This paper presents a new formulation that achieves this by defining a design space that implicitly obeys the manufacturing constraints through a continuous design parameterization. This is in contrast to including manufacturing cost in the objective function or constraints. The resulting solutions of the new formulation, obtained with gradient-based optimization, directly provide the photolithographic mask layouts. Two examples that illustrate the approach for the case of stiff structures are included.
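
The abstract does not detail the parameterization. One way such an implicit manufacturing constraint could look, assuming a per-column "deposited height" design variable (a hypothetical stand-in for the paper's formulation, not its actual design space):

```python
import numpy as np

def layered_density(heights, z_levels, beta=20.0):
    """Map a continuous per-column 'deposited height' design variable to a
    density field that is automatically buildable from the substrate,
    mimicking a surface-micromachining constraint: material at elevation z
    exists only where the column height exceeds z (smoothed for gradients)."""
    h = np.asarray(heights)[:, None]          # (n_columns, 1)
    z = np.asarray(z_levels)[None, :]         # (1, n_layers)
    return 1.0 / (1.0 + np.exp(-beta * (h - z)))  # smooth Heaviside

rho = layered_density(heights=[0.2, 0.8, 0.5], z_levels=np.linspace(0, 1, 5))
print(np.round(rho, 2))  # each column is dense up to its height, void above
```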

Relevance:

30.00%

Publisher:

Abstract:

Swarm intelligence algorithms are applied for optimal control of flexible smart structures bonded with piezoelectric actuators and sensors. The optimal locations of actuators/sensors and the feedback gain are obtained by maximizing the energy dissipated by the feedback control system. We provide a mathematical proof that this system is uncontrollable if the actuators and sensors are placed at the nodal points of the mode shapes. Finding the optimal locations of actuators/sensors and the feedback gain is a constrained nonlinear optimization problem, which is converted to an unconstrained optimization problem by using penalty functions. Two swarm intelligence algorithms, namely, the artificial bee colony (ABC) and glowworm swarm optimization (GSO) algorithms, are considered to obtain the optimal solution. In earlier published research, a cantilever beam with one and two collocated actuator(s)/sensor(s) was considered and numerical results were obtained using genetic algorithms and gradient-based optimization methods. We consider the same problem and present the results obtained using the swarm intelligence algorithms ABC and GSO. An extension of this cantilever beam problem to five collocated actuators/sensors is also considered, and the numerical results obtained using the ABC and GSO algorithms are presented. The effect of increasing the number of design variables (locations of actuators and sensors, and the gain) on the optimization process is investigated. It is shown that the ABC and GSO algorithms are robust and are good choices for the optimization of smart structures.
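
A minimal sketch of the penalty-function conversion mentioned above, with an illustrative toy objective standing in for the dissipated energy (the penalty weight and constraints are assumptions for the example):

```python
import numpy as np

def penalized(objective, constraints, mu=1e3):
    """Convert a constrained maximization into an unconstrained one with
    quadratic exterior penalties; constraints are callables g(x) <= 0."""
    def f(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return objective(x) - mu * violation   # maximize dissipated energy
    return f

# toy usage: maximize a concave surrogate of dissipated energy subject to
# the actuator location x[0] staying inside the beam, 0 <= x[0] <= 1
energy = lambda x: -(x[0] - 0.3) ** 2 + 1.0
g1 = lambda x: -x[0]          # x[0] >= 0
g2 = lambda x: x[0] - 1.0     # x[0] <= 1
f = penalized(energy, [g1, g2])
print(f(np.array([0.3])), f(np.array([1.4])))  # feasible vs. penalized
```

The resulting unconstrained function f can then be handed to any swarm optimizer such as ABC or GSO.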

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we consider robust joint designs of the relay precoder and destination receive filters in a nonregenerative multiple-input multiple-output (MIMO) relay network. The network consists of multiple source-destination node pairs assisted by a MIMO relay node. The channel state information (CSI) available at the relay node is assumed to be imperfect. We consider robust designs for two models of CSI error. The first is a stochastic error (SE) model, in which the CSI error is Gaussian distributed; this model is applicable when the imperfect CSI is mainly due to errors in channel estimation. For this model, we propose robust minimum sum mean square error (SMSE), MSE-balancing, and relay transmit power minimizing precoder designs. The second is a norm-bounded error (NBE) model, in which the CSI error is specified by an uncertainty set; this model is applicable when the CSI error is dominated by quantization errors. In this case, we adopt a worst-case design approach and propose a robust precoder design that minimizes the total relay transmit power under constraints on the MSEs at the destination nodes. We show that the proposed robust design problems can be reformulated as convex optimization problems that can be solved efficiently using interior-point methods. We demonstrate the robust performance of the proposed designs through simulations.
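
The paper's SMSE and power-minimization formulations are not reproduced here. As a minimal illustration of the worst-case (NBE) design philosophy only, the classical robust least-squares problem min_x max over ||dH|| <= rho of ||(H + dH)x - b|| has the exact convex reformulation ||Hx - b|| + rho*||x||, which a generic interior-point solver handles directly:

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(1)
H_hat = rng.standard_normal((8, 4))   # estimated channel (imperfect CSI)
b = rng.standard_normal(8)
rho = 0.3                             # norm bound on the CSI error

x = cp.Variable(4)
# exact reformulation of min_x max_{||dH|| <= rho} ||(H_hat + dH) x - b||
worst_case = cp.norm(H_hat @ x - b, 2) + rho * cp.norm(x, 2)
cp.Problem(cp.Minimize(worst_case)).solve()
print(x.value)
```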

Relevance:

30.00%

Publisher:

Abstract:

Miniaturization has given the technological world a new dimension, and a major breakthrough has evolved in the form of MOEMS, technically more advanced than MEMS. This breakthrough has paved the way for scientists to research and realize their innovations. This paper presents a mathematical analysis of wave propagation along a non-uniform waveguide, with refractive index varying along the z axis, implemented on the cantilever beam of an MZI-based MOEMS accelerometer. A study of waveguide bends with minimum power loss, focusing on the two main aspects of bend angle and curvature angle, is also presented.

Relevance:

30.00%

Publisher:

Abstract:

We present a method for obtaining conjugate and conjoined shapes and tilings in the context of structural design using topology optimization. Optimal material distribution is achieved in topology optimization by setting up a selection field in the design domain to determine the presence or absence of material there. We generalize this approach by presenting a paradigm in which the material left out by the selection field is also utilized. We obtain conjugate shapes when the selected region and the left-out region are solutions to two problems, each with a different functionality. On the other hand, if the left-out region is connected to the selected region in some predetermined fashion to achieve a single functionality, then we get conjoined shapes. The utilization of the left-out material gives the notion of material economy in both cases; material wastage is thus avoided in the practical realization of these designs using many manufacturing techniques, in contrast to the wastage of left-out material during the manufacture of traditional topology-optimized designs. We illustrate such shapes for stiff structures and compliant mechanisms. When such designs are suitably made on the domain of the unit cell of a tiling, new tilings are formed that are functionally useful. Such shapes are valuable not only for their functionality and economy of material and manufacturing, but also for their aesthetic value.
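
A minimal sketch of how one selection field can serve two structures, assuming a SIMP-type stiffness interpolation (an illustration of the idea, not the paper's formulation); a combined objective would then sum the two compliances, each from its own finite-element analysis:

```python
import numpy as np

def conjugate_stiffness(rho, E0=1.0, p=3, eps=1e-9):
    """SIMP-style interpolation sharing one selection field between two
    structures: structure A uses the selected material rho, structure B
    uses the complementary (left-out) material 1 - rho."""
    E_a = eps + (E0 - eps) * rho ** p
    E_b = eps + (E0 - eps) * (1.0 - rho) ** p
    return E_a, E_b

rho = np.array([0.1, 0.5, 0.9])
E_a, E_b = conjugate_stiffness(rho)
print(E_a, E_b)  # where A is solid, B is void, and vice versa
```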

Relevance:

30.00%

Publisher:

Abstract:

An attempt is made to study the two-dimensional (2D) effective electron mass (EEM) in quantum wells (QWs), inversion layers (ILs), and NIPI superlattices of Kane-type semiconductors in the presence of strong external photoexcitation, on the basis of newly formulated electron dispersion laws within the framework of the k·p formalism. It is found, taking InAs and InSb as examples, that the EEM in QWs, ILs, and superlattices increases with increasing concentration, light intensity, and wavelength of the incident light, respectively, and that the numerical magnitudes in each case are band-structure dependent. The EEM in ILs is quantum-number dependent, exhibiting quantum jumps for specified values of the surface electric field; in NIPI superlattices, the EEM is a function of the Fermi energy and the subband index characterizing such 2D structures. The humps in the respective curves are due to the redistribution of the electrons among the quantized energy levels when the quantum number corresponding to the highest occupied level changes from one fixed value to another. Although the EEM varies in various manners with all the variables, as evident from the curves, the rates of variation depend entirely on the specific dispersion relation of the particular 2D structure. Under certain limiting conditions, all the results derived in this paper reduce to well-known formulas for the EEM and the electron statistics in the absence of external photoexcitation, thus confirming the compatibility test. The results of this paper find three applications in the field of microstructures. (C) 2011 Elsevier Ltd. All rights reserved.
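
The photoexcitation-dependent dispersion laws are not reproduced in the abstract. For orientation, the standard two-band Kane dispersion (an assumption here, not the paper's full result) already yields an energy-dependent EEM that grows with carrier energy, consistent with the reported concentration dependence:

```latex
E\left(1 + \frac{E}{E_g}\right) = \frac{\hbar^2 k^2}{2\,m^*(0)},
\qquad
m^*(E) \equiv \hbar^2 k \left(\frac{dE}{dk}\right)^{-1}
       = m^*(0)\left(1 + \frac{2E}{E_g}\right).
```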

Relevance:

30.00%

Publisher:

Abstract:

This study presents the development of a computational fluid dynamics (CFD) model to predict unsteady, two-dimensional temperature, moisture, and velocity distributions inside a novel, biomass-fired, natural-convection agricultural dryer. Results show that in the initial stages of drying, when the material surface is wet and moisture is easily available, the moisture removal rate from the surface depends upon the condition of the drying air. Subsequently, the material surface becomes dry and the moisture removal rate is driven by diffusion of moisture from the inside to the material surface. An optimum nine-tray configuration is found to be more efficient for the same mass of material and volume of dryer. A new configuration of the dryer, intended mainly to explore its potential for increasing drying uniformity across all trays, is also analyzed. This configuration involves diverting a portion of the hot air before it enters over the first tray and supplying it directly at an intermediate location in the dryer. Uniformity in drying across trays is increased for the kind of material simulated.
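
A lumped sketch of the two drying regimes described above: a constant, surface-evaporation-limited rate while the surface is wet, then a diffusion-limited falling rate. The rate constants and moisture contents are illustrative, not taken from the CFD model:

```python
import numpy as np

def drying_curve(M0, M_crit, M_eq, k_const, k_diff, dt=1.0, t_end=600.0):
    """Lumped two-regime drying: constant rate while moisture is above the
    critical content (wet surface), then diffusion-limited exponential decay."""
    t, M, hist = 0.0, M0, []
    while t < t_end:
        rate = k_const if M > M_crit else k_diff * (M - M_eq)
        M = max(M_eq, M - rate * dt)
        hist.append((t, M))
        t += dt
    return np.array(hist)

curve = drying_curve(M0=2.5, M_crit=1.2, M_eq=0.1, k_const=0.01, k_diff=0.005)
print(curve[::100])  # (time, moisture) samples across both regimes
```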

Relevance:

30.00%

Publisher:

Abstract:

We show that it is possible to change from a subnatural electromagnetically induced transparency (EIT) feature to a subnatural electromagnetically induced absorption (EIA) feature in a (degenerate) three-level system. The change is effected by turning on a second control beam counter-propagating with respect to the first. We observe this change in the D2 line of Rb in a room-temperature vapor cell. The observations are supported by a density-matrix analysis of the complete sublevel structure, including the effect of Doppler averaging, but can be understood qualitatively as arising from the formation of N-type systems with the two control beams. Since many of the applications of EIT and EIA rely on the anomalous dispersion near the resonances, this introduces a new ability to control the sign of the dispersion. Copyright (C) EPLA, 2012
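
For orientation, the textbook three-level Λ-system probe coherence (a simplification; the paper analyzes the full Rb sublevel structure with Doppler averaging) shows how a control field of Rabi frequency Ω_c reshapes the probe response and hence the dispersion:

```latex
\rho_{eg} \;=\; \frac{i\,\Omega_p/2}{\gamma_{eg} - i\Delta_p
      + \dfrac{\Omega_c^2/4}{\gamma_{gg'} - i(\Delta_p - \Delta_c)}},
\qquad \chi \propto \frac{\rho_{eg}}{\Omega_p}.
```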

Relevance:

30.00%

Publisher:

Abstract:

Accurate estimation of mass transport parameters is necessary for the overall design and evaluation of waste disposal facilities. The mass transport parameters, such as the effective diffusion coefficient, retardation factor, and diffusion-accessible porosity, are estimated from observed diffusion data by inverse analysis. Recently, the particle swarm optimization (PSO) algorithm has been used to develop inverse models for estimating these parameters, alleviating existing limitations in the inverse analysis. However, a PSO solver yields different solutions in successive runs because of the stochastic nature of the algorithm and the presence of multiple optimum solutions, so the estimated mean solution from independent runs is significantly different from the best solution. In this paper, two variants of the PSO algorithm are proposed to improve the performance of the inverse analysis. The proposed algorithms use a perturbation equation for the gbest particle to gain information around the gbest region of the search space, and catfish particles in alternate iterations to improve exploration. Performance comparison of the developed solvers on synthetic test data for two different diffusion problems reveals that one of the proposed solvers, CPPSO, significantly improves overall performance, with improved best, worst, and mean fitness values. The developed solver is further used to estimate transport parameters from 12 sets of experimentally observed diffusion data obtained from three diffusion problems, and the estimates are compared with published values from the literature. The proposed solver is quick, simple, and robust on different diffusion problems. (C) 2012 Elsevier Ltd. All rights reserved.
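
A minimal PSO sketch with a gbest-perturbation step, as a simplified stand-in for the proposed variants (the exact perturbation equation and catfish scheduling are not given in the abstract, so the Gaussian jitter below is an assumption):

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(cost, dim, n=30, iters=300, w=0.72, c1=1.49, c2=1.49, sigma=0.05):
    """Basic PSO plus a gbest perturbation: after each update the global best
    is jittered and kept if it improves, probing the region around gbest."""
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([cost(p) for p in x])
    i = np.argmin(pcost)
    g, gcost = pbest[i].copy(), pcost[i]
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        i = np.argmin(pcost)
        if pcost[i] < gcost:
            g, gcost = pbest[i].copy(), pcost[i]
        trial = g + rng.normal(scale=sigma, size=dim)  # perturb gbest region
        tc = cost(trial)
        if tc < gcost:
            g, gcost = trial, tc
    return g, gcost

print(pso(lambda z: np.sum(z ** 2), dim=3))
```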

Relevance:

30.00%

Publisher:

Abstract:

Purpose: To optimize the data-collection strategy for diffuse optical tomography and to obtain a set of independent measurements among the total measurements using the characteristics of the model-based data-resolution matrix. Methods: The data-resolution matrix is computed from the sensitivity matrix and the regularization scheme used in the reconstruction procedure by matching the predicted data with the actual data. The diagonal values of the data-resolution matrix show the importance of a particular measurement, and the magnitudes of the off-diagonal entries show the dependence among measurements. The independent measurements are chosen based on the closeness of the diagonal value magnitude to the off-diagonal entries. The reconstruction results obtained using all measurements were compared with those obtained using only the independent measurements, in both numerical and experimental phantom cases. Traditional singular value analysis was also performed for comparison with the proposed method. Results: The results indicate that choosing only independent measurements based on data-resolution matrix characteristics does not compromise the reconstructed image quality significantly and in turn reduces the data-collection time associated with the procedure. When the same number of measurements (equivalent to the independent ones) was chosen at random, the reconstruction results had poor quality, with major boundary artifacts. The number of independent measurements obtained using the data-resolution matrix analysis is much higher than that obtained using singular value analysis. Conclusions: The data-resolution matrix analysis is able to provide the high level of optimization needed for effective data collection in diffuse optical imaging. The analysis itself is independent of the noise characteristics in the data, resulting in a universal framework to characterize and optimize a given data-collection strategy. (C) 2012 American Association of Physicists in Medicine. [http://dx.doi.org/10.1118/1.4736820]
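
For a Tikhonov-regularized reconstruction with sensitivity matrix J and regularization parameter λ, the data-resolution matrix is D = J(JᵀJ + λI)⁻¹Jᵀ, which maps the measured data to the data predicted after reconstruction. A minimal sketch, with an illustrative diagonal-dominance selection rule (the paper's exact selection criterion may differ):

```python
import numpy as np

def data_resolution_matrix(J, lam):
    """D = J (J^T J + lam*I)^{-1} J^T maps measured data to the data
    predicted after a Tikhonov-regularized reconstruction; D = I would
    mean every measurement is perfectly resolved and independent."""
    n = J.shape[1]
    return J @ np.linalg.solve(J.T @ J + lam * np.eye(n), J.T)

def independent_measurements(D, ratio=2.0):
    """Keep a measurement if its diagonal entry dominates its largest
    off-diagonal coupling by the given ratio (selection rule illustrative)."""
    off = np.abs(D - np.diag(np.diag(D)))
    return np.where(np.diag(D) > ratio * off.max(axis=1))[0]

rng = np.random.default_rng(3)
J = rng.standard_normal((50, 20))          # stand-in sensitivity matrix
D = data_resolution_matrix(J, lam=0.1)
print(independent_measurements(D))
```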

Relevance:

30.00%

Publisher:

Abstract:

We investigate the impact of the Indian Ocean Dipole (IOD) and the El Niño-Southern Oscillation (ENSO) on sea level variations in the North Indian Ocean during 1957-2008. Using tide-gauge and altimeter data, we show that IOD and ENSO leave characteristic signatures in the sea level anomalies (SLAs) in the Bay of Bengal. During a positive IOD event, negative SLAs are observed during April-December, with the SLAs decreasing continuously to a peak during September-November. During El Niño, negative SLAs are observed twice (April-December and November-July), with a relaxation between the two peaks. SLA signatures during negative IOD and La Niña events are much weaker. We use a linear, continuously stratified model of the Indian Ocean to simulate the sea level patterns of IOD and ENSO events. We then separate the solutions into parts that correspond to specific processes: coastal alongshore winds, remote forcing from the equator via reflected Rossby waves, and direct forcing by interior winds within the bay. During pure IOD events, the SLAs are forced both from the equator and by direct wind forcing. During ENSO events, they are primarily equatorially forced, with only a minor contribution from direct wind forcing. Using a lead/lag covariance analysis between the Nino-3.4 SST index and Indian Ocean wind stress, we derive a composite wind field for a typical El Niño event; the resulting solution has two negative SLA peaks. The IOD and ENSO signatures are not evident off the west coast of India.
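
A minimal sketch of a lead/lag regression of a field onto a climate index, the kind of analysis used above to build the composite wind field (the data and lag structure below are synthetic):

```python
import numpy as np

def lead_lag_regression(index, field, max_lag=12):
    """Regress a field time series (e.g., wind stress at one point) onto a
    climate index (e.g., Nino-3.4 SST) at a range of leads/lags; positive
    lag means the index leads the field. Returns coefficients per lag."""
    coeffs = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = index[:len(index) - lag], field[lag:]
        else:
            a, b = index[-lag:], field[:lag]
        a = a - a.mean()
        coeffs[lag] = np.dot(a, b - b.mean()) / np.dot(a, a)
    return coeffs

rng = np.random.default_rng(4)
nino34 = rng.standard_normal(624)                           # 52 years, monthly
tau = np.roll(nino34, 3) + 0.5 * rng.standard_normal(624)   # field lags by 3
c = lead_lag_regression(nino34, tau)
print(max(c, key=lambda k: abs(c[k])))                      # peaks near lag = 3
```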

Relevance:

30.00%

Publisher:

Abstract:

In space applications, precision level measurement of cryogenic liquids in storage tanks is done using triple-redundant capacitance level sensors, from a control and safety point of view. The linearity of each sensor element depends upon the cylindricity and concentricity of the internal and external electrodes. The complexity of calibrating all sensors together has been addressed by a two-step calibration methodology, which has been developed and used for the calibration of six capacitance sensors. All calibrations are done using liquid nitrogen (LN2) as the cryogenic fluid. In the first step, one element of the liquid hydrogen (LH2) level sensor is calibrated against a 700 mm, eleven-point discrete diode array read out using the four-wire method; a linearity curve for this single LH2 element is thus obtained. In the second step, the equation obtained for this element is used as the reference for calibrating the remaining elements of the same LH2 sensor and the other level sensors (either liquid oxygen (LOX) or LH2). The elimination of stray capacitance in the capacitance level probes has also been attempted. Automatic data logging of capacitance values over GPIB is done using LabVIEW 8.5.
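
A minimal sketch of the two-step idea, assuming a linear capacitance-to-level response; all numbers below are illustrative, not calibration data from the paper:

```python
import numpy as np

# Step 1: fit a linear calibration for one LH2 sensor element against the
# eleven discrete diode-array level points (values below are synthetic).
level_mm = np.linspace(0.0, 700.0, 11)               # diode positions
cap_pf = 52.0 + 0.031 * level_mm + np.random.default_rng(5).normal(0, 0.05, 11)
slope, intercept = np.polyfit(cap_pf, level_mm, 1)   # level = f(capacitance)

# Step 2: the fitted element now serves as the reference; any co-immersed
# element's capacitance reading is converted to level via the same curve.
def level_from_capacitance(c_pf):
    return slope * c_pf + intercept

print(level_from_capacitance(60.0))  # mm of liquid, per the reference fit
```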

Relevance:

30.00%

Publisher:

Abstract:

Service systems are labor intensive, and their workload tends to vary greatly with time. Adapting the staffing levels to the workloads in such systems is nontrivial due to the large number of parameters and operational variations, but crucial for business objectives such as minimal labor inventory. One of the central challenges is to optimize the staffing while maintaining system steady-state and compliance with aggregate SLA constraints. We formulate this problem as a parametrized constrained Markov process and propose a novel stochastic optimization algorithm for solving it. Our algorithm is a multi-timescale stochastic approximation scheme that incorporates an SPSA-based algorithm for 'primal descent' and couples it with a 'dual ascent' scheme for the Lagrange multipliers. We validate this optimization scheme on five real-life service systems and compare it with OptQuest, a state-of-the-art optimization toolkit. Being two orders of magnitude faster than OptQuest, our scheme is particularly suitable for adaptive labor staffing. Moreover, it guarantees convergence and, in many cases, finds better solutions than OptQuest.
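
A minimal sketch of the multi-timescale idea: SPSA gradient steps on a noisy Lagrangian for primal descent, coupled with a slower dual ascent on the multiplier. The toy staffing cost, constraint, and stepsizes are illustrative, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(6)

def lagrangian_spsa(cost, constraint, theta0, iters=2000):
    """Two-timescale scheme: SPSA steps on a noisy Lagrangian in theta
    ('primal descent', faster stepsize) coupled with ascent on the Lagrange
    multiplier ('dual ascent', slower stepsize). constraint(theta) <= 0
    encodes the aggregate SLA requirement."""
    theta = np.asarray(theta0, dtype=float)
    lam = 0.0
    for k in range(1, iters + 1):
        a, b, ck = 1.0 / k ** 0.6, 1.0 / k, 1.0 / k ** 0.25
        L = lambda th: cost(th) + lam * constraint(th)
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        g = (L(theta + ck * delta) - L(theta - ck * delta)) / (2 * ck) / delta
        theta -= a * g                               # primal descent (fast)
        lam = max(0.0, lam + b * constraint(theta))  # dual ascent (slow)
    return theta, lam

# toy staffing problem: minimize total staff subject to covering a workload
cost = lambda th: th.sum() + rng.normal(scale=0.1)          # noisy labor cost
cons = lambda th: 10.0 - th.sum() + rng.normal(scale=0.1)   # need >= 10 staff
print(lagrangian_spsa(cost, cons, theta0=np.array([3.0, 3.0])))
```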