958 results for Cost Optimization
Abstract:
Some of the well-known formulations for topology optimization of compliant mechanisms can lead to lumped compliant mechanisms. In lumped compliance, most of the elastic deformation in a mechanism occurs at a few points, while the rest of the mechanism remains more or less rigid. Such points are referred to as point-flexures. It has been noted in the literature that high relative rotation is associated with point-flexures. The literature also offers a formulation of a local constraint on relative rotations to avoid lumped compliance. However, it is well known that a global constraint is easier for a numerical optimization algorithm to handle than a local constraint. The current work presents a way of imposing a global constraint on relative rotations. This constraint is also simpler to implement, since it uses the linearized rotation at the center of finite elements to compute relative rotations. I show the results obtained by using this constraint on the following benchmark problems: displacement inverter and gripper.
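As a rough illustration of how such a global constraint might be assembled, the sketch below computes the linearized rotation at each element center of a regular quad mesh from nodal displacements and aggregates the relative rotations of neighbouring elements into a single constraint value via a p-norm. The p-norm aggregation and the mesh layout are assumptions made for illustration, not details taken from the abstract.

```python
import numpy as np

def element_rotations(ux, uy, dx, dy):
    """Linearized rotation theta = 0.5 * (d u_y/dx - d u_x/dy) at the center
    of each element of a regular quad mesh, from nodal displacement grids
    ux, uy of shape (ny + 1, nx + 1)."""
    duy_dx = (uy[:, 1:] - uy[:, :-1]) / dx               # (ny + 1, nx)
    dux_dy = (ux[1:, :] - ux[:-1, :]) / dy               # (ny, nx + 1)
    duy_dx_c = 0.5 * (duy_dx[:-1, :] + duy_dx[1:, :])    # average to centers
    dux_dy_c = 0.5 * (dux_dy[:, :-1] + dux_dy[:, 1:])
    return 0.5 * (duy_dx_c - dux_dy_c)                   # (ny, nx)

def global_rotation_constraint(theta, limit=0.05, p=8):
    """Single global constraint g <= 0 that bounds the relative rotation of
    every pair of adjacent elements, using a p-norm to smooth the maximum."""
    rel_x = np.abs(theta[:, 1:] - theta[:, :-1])          # horizontal neighbours
    rel_y = np.abs(theta[1:, :] - theta[:-1, :])          # vertical neighbours
    rel = np.concatenate([rel_x.ravel(), rel_y.ravel()])
    return np.sum((rel / limit) ** p) ** (1.0 / p) - 1.0

# Toy usage on a smooth displacement field (no point-flexures expected)
ny, nx, dx, dy = 8, 16, 1.0, 1.0
X, Y = np.meshgrid(np.arange(nx + 1) * dx, np.arange(ny + 1) * dy)
ux, uy = 0.001 * Y, 0.002 * X                             # simple shear-like field
theta = element_rotations(ux, uy, dx, dy)
print(global_rotation_constraint(theta))
```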
Abstract:
The purpose of this article is to show the applicability and benefits of design-of-experiments techniques as an optimization tool for discrete simulation models. Simulated systems are computational representations of real-life systems; their state evolves continually as discrete events occur over time. In this study, a production system designed around the JIT (Just in Time) business philosophy is used, which seeks to achieve excellence in organizations through waste reduction in all operational aspects. The most typical tool of JIT systems is KANBAN production control, which seeks to synchronize demand with the flow of materials, minimize work in process, and define production metrics. Using experimental design techniques for stochastic optimization, the impact of the operational factors on the efficiency of the KANBAN/CONWIP simulation model is analyzed. The results show the effectiveness of integrating experimental design techniques and discrete simulation models in the calculation of the operational parameters. Furthermore, the reliability of the methodologies found was improved with a new statistical consideration.
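As a minimal sketch of the design-of-experiments idea described above, the following fragment runs a 2^3 full factorial design against a toy stand-in for the simulation and estimates the main effects by least squares. The factor names (kanban cards, container size, setup time) and the response model are hypothetical placeholders, not the article's actual KANBAN/CONWIP model.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-in for the KANBAN/CONWIP discrete-event simulation: returns a
# throughput-like response for coded factor levels (purely illustrative).
def simulate_kanban(n_cards, container_size, setup_time):
    noise = rng.normal(0.0, 0.5)
    return (50 + 4 * n_cards + 2 * container_size - 3 * setup_time
            + 1.5 * n_cards * container_size + noise)

# 2^3 full factorial design in coded units (-1 = low level, +1 = high level)
design = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

# Replicate each design point to account for simulation stochasticity
replicates = 5
response = np.array([np.mean([simulate_kanban(*row) for _ in range(replicates)])
                     for row in design])

# Estimate main effects by least squares on the coded design matrix
X = np.column_stack([np.ones(len(design)), design])
beta, *_ = np.linalg.lstsq(X, response, rcond=None)
print("estimated main effects (cards, container, setup):", beta[1:])
```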
Abstract:
The purpose of this study is to examine the changes in energy cost during continuous jogging in high-heeled shoes. Thirteen healthy female volunteers participated in this study, wearing shoes with heel heights of 1, 4.5, and 7 cm, respectively. Each subject jogged on a treadmill wearing a K4b2 portable gas analysis system. The results showed that ventilation, relative oxygen consumption, and energy expenditure increased with heel height, and these values were significantly larger when the heel height reached 7 cm. The present study suggests that jogging in high-heeled shoes could directly increase energy consumption, causing neuromuscular fatigue.
Abstract:
The theoretical optimization of the design parameters N_A, N_D and W_P has been carried out for efficient operation of an Au-p-n Si solar cell, including thermionic field emission, the dependence of lifetime and mobility on impurity concentrations, the dependence of the absorption coefficient on wavelength, and the variation of barrier height and hence the optimum thickness of the p region with illumination. The optimized design parameters N_D = 5×10^20 m^-3, N_A = 3×10^24 m^-3 and W_P = 11.8 nm yield efficiency η = 17.1% (AM0) and η = 19.6% (AM1). These are reduced to 14.9% and 17.1%, respectively, if the metal layer series resistance and transmittance with a ZnS antireflection coating are included. A practical value of W_P = 97.0 nm gives an efficiency of 12.2% (AM1).
Abstract:
Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.
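A much-simplified sketch of the idea of combining performance and reliability: assuming independent exponential processor failures and a linear rate degradation (assumptions made here for illustration, not the paper's shared/replicated-resources model), an expected instruction execution rate and a computation-reliability-style measure can be computed as follows.

```python
from math import comb, exp

def expected_execution_rate(n, rate_per_processor, fail_rate, t):
    """Expected instruction execution rate at mission time t of a gracefully
    degrading system of n identical processors, assuming independent
    exponential failures and that k surviving processors deliver k * rate."""
    p = exp(-fail_rate * t)                  # single-processor survival prob.
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k) * k * rate_per_processor
               for k in range(n + 1))

def computation_reliability(n, min_processors, fail_rate, t):
    """Probability that at least min_processors survive to time t, i.e. that
    a required execution rate is still met."""
    p = exp(-fail_rate * t)
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(min_processors, n + 1))

print(expected_execution_rate(n=4, rate_per_processor=1.0e6, fail_rate=1e-4, t=1000.0))
print(computation_reliability(n=4, min_processors=2, fail_rate=1e-4, t=1000.0))
```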
Abstract:
A hybrid simulation technique for identification and steady-state optimization of a tubular reactor used in ammonia synthesis is presented. The parameter identification program finds the catalyst activity factor and certain heat transfer coefficients that minimize the sum of squared deviations between simulated and actual temperature measurements obtained from an operating plant. The optimization program then finds the values of three flows to the reactor that maximize the ammonia yield, using the estimated parameter values. Powell's direct method of optimization is used in both cases. The results obtained are compared with the plant data.
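Powell's direct method is available off the shelf, so the two-stage scheme described above can be sketched roughly as follows. The reactor temperature model, the yield expression, and all numbers are placeholder stand-ins for the actual plant simulation, used only to show the structure of identification followed by flow optimization.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = np.linspace(0.0, 1.0, 20)        # normalized position along the reactor

# Placeholder stand-in for the tubular-reactor simulation: returns a
# temperature profile given a catalyst activity factor and a heat-transfer
# coefficient (not the actual reactor model).
def simulate_temperatures(activity, heat_transfer):
    return 650.0 + 80.0 * activity * np.exp(-heat_transfer * z)

# "Plant" measurements (synthetic here, for illustration only)
T_plant = simulate_temperatures(0.9, 2.5) + rng.normal(0.0, 1.0, z.size)

# Step 1: identification - minimize the sum of squared temperature deviations
def sse(params):
    activity, heat_transfer = params
    return np.sum((simulate_temperatures(activity, heat_transfer) - T_plant) ** 2)

ident = minimize(sse, x0=[1.0, 1.0], method="Powell")
activity_hat, U_hat = ident.x

# Step 2: optimization - choose three feed flows to maximize a placeholder
# ammonia-yield expression built on the identified parameters
def neg_yield(flows):
    f1, f2, f3 = np.clip(flows, 0.0, None)
    return -(activity_hat * np.sqrt(f1 * f2) - 0.05 * (f1 + f2 + f3 - 10.0) ** 2)

opt = minimize(neg_yield, x0=[3.0, 3.0, 3.0], method="Powell")
print("identified parameters:", ident.x, "optimal flows:", opt.x)
```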
Abstract:
Deriving an estimate of optimal fishing effort, or even an approximate estimate, is very valuable for managing fisheries with multiple target species. The most challenging task associated with this is allocating effort to individual species when only the total effort is recorded. Spatial information on the distribution of each species within a fishery can be used to justify the allocations, but often such information is not available. To determine the long-term overall effort required to achieve maximum sustainable yield (MSY) and maximum economic yield (MEY), we consider three methods for allocating effort: (i) optimal allocation, which optimally allocates effort among target species; (ii) fixed proportions, which chooses proportions based on past catch data; and (iii) economic allocation, which splits effort based on the expected catch value of each species. Determining the overall fishing effort required to achieve these management objectives is a maximization problem subject to constraints arising from economic and social considerations. We illustrated the approaches using a case study of the Moreton Bay Prawn Trawl Fishery in Queensland (Australia). The results were consistent across the three methods. Importantly, our analysis demonstrated that the optimal total effort was very sensitive to daily fishing costs: the effort ranged from 9500-11 500, to 6000-7000, 4000, and 2500 boat-days for daily cost estimates of $0, $500, $750, and $950, respectively. The zero daily cost corresponds to MSY, while a daily cost of $750 most closely represents the actual present fishing cost. Given the recent debate on which costs should be factored into analyses for deriving MEY, our findings highlight the importance of including an appropriate cost function for practical management advice. The approaches developed here could be applied to other multispecies fisheries where only aggregated fishing effort data are recorded, as the literature on this type of modelling is sparse.
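A hedged sketch of method (i), optimal allocation: given a surplus-production model per species, the long-term effort split can be found by maximizing equilibrium profit subject to effort bounds. All parameter values below are invented for illustration and are not the Moreton Bay estimates; a cost of $0 per boat-day would recover an MSY-style solution, while a positive daily cost gives an MEY-style one.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative Schaefer surplus-production model for two species.
r = np.array([0.8, 1.2])              # intrinsic growth rates (hypothetical)
K = np.array([5000.0, 3000.0])        # carrying capacities, t (hypothetical)
q = np.array([4e-4, 6e-4])            # catchabilities per boat-day (hypothetical)
price = np.array([8000.0, 12000.0])   # $/t (hypothetical)
daily_cost = 750.0                    # $/boat-day

def long_term_catch(effort):
    """Equilibrium (sustainable) catch per species under constant effort."""
    effort = np.asarray(effort)
    return q * effort * K * np.maximum(1.0 - q * effort / r, 0.0)

def neg_profit(effort):
    revenue = np.sum(price * long_term_catch(effort))
    return -(revenue - daily_cost * np.sum(effort))

# Optimal per-species effort allocation (MEY-style when daily_cost > 0)
res = minimize(neg_profit, x0=[500.0, 500.0],
               bounds=[(0.0, 5000.0)] * 2, method="L-BFGS-B")
print("effort (boat-days):", res.x, "equilibrium profit ($):", -res.fun)
```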
Abstract:
An analytical method has been proposed to optimize the small-signal optical gain of CO2-N2 gasdynamic lasers (GDL) employing two-dimensional (2D) wedge nozzles. Following our earlier work, the equations governing the steady, inviscid, quasi-one-dimensional flow in the wedge nozzle of the GDL are reduced to a universal form so that their solutions depend on a single unifying parameter. These equations are solved numerically to obtain similar solutions for the various flow quantities, which are subsequently used to optimize the small-signal gain. The corresponding optimum values, such as reservoir pressure and temperature and 2D nozzle area ratio, have also been predicted and graphed for a wide range of laser gas compositions, with either H2O or He as the catalyst. A large number of graphs are presented which may be used to obtain the optimum values of small-signal gain for a wide range of laser gas compositions without further computations.
Abstract:
Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971) considered the optimal set size for ranked set sampling (RSS) with fixed operational costs. This framework can be very useful in practice to determine whether RSS is beneficial and to obtain the optimal set size that minimizes the variance of the population estimator for a fixed total cost. In this article, we propose a scheme of general RSS in which more than one observation can be taken from each ranked set. This is shown to be more cost-effective in some cases when the cost of ranking is not so small. We demonstrate, using the example in Nahhas, Wolfe, and Chen (2002, Biometrics 58, 964-971), that taking two or more observations from each set can be more beneficial even when compared with the RSS design at its optimal set size.
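The cost trade-off can be illustrated with a small Monte Carlo sketch: for the same number of measured units, taking two observations per ranked set requires fewer units to be ranked than classic balanced RSS. The rank-assignment rule used below is one illustrative choice, not the scheme proposed in the article.

```python
import numpy as np

rng = np.random.default_rng(7)

def rss_sample_mean(set_size, per_set, n_cycles, draw=rng.standard_normal):
    """Mean estimate from a ranked-set-style sample: in each cycle, set_size
    ranked sets are formed and, from the i-th set, per_set designated order
    statistics are actually measured (perfect ranking assumed).  per_set > 1
    mimics the idea of taking more than one observation per ranked set."""
    obs = []
    for _ in range(n_cycles):
        for i in range(set_size):
            ranked = np.sort(draw(set_size))
            ranks = [(i + k) % set_size for k in range(per_set)]
            obs.extend(ranked[r] for r in ranks)
    return np.mean(obs)

# Both schemes measure 24 units per sample, but the ranking burden differs:
# classic RSS ranks 4 * 4 * 6 = 96 units, the per_set=2 scheme ranks 6 * 6 * 2 = 72.
reps = 3000
var_rss = np.var([rss_sample_mean(set_size=4, per_set=1, n_cycles=6) for _ in range(reps)])
var_grss = np.var([rss_sample_mean(set_size=6, per_set=2, n_cycles=2) for _ in range(reps)])
print(f"classic RSS variance: {var_rss:.4f}   two-per-set variance: {var_grss:.4f}")
```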
Abstract:
We propose four variants of the recently proposed multi-timescale algorithm of [1] for ant colony optimization and study their application to a multi-stage shortest path problem. We study the performance of the various algorithms in this framework and observe that one of the variants consistently outperforms the algorithm of [1].
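For context, a plain ant colony optimization baseline on a multi-stage (layered) shortest path problem can be sketched as below. This is the standard single-timescale ACO recipe, not the multi-timescale algorithm of [1] or its variants, and all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def aco_multistage(costs, n_ants=20, n_iter=200, alpha=1.0, beta=2.0, rho=0.1, Q=1.0):
    """Basic ACO on a layered shortest-path problem: costs[s] is an
    (n_s x n_{s+1}) matrix of edge costs between stage s and stage s+1.
    Ants start from node 0 of the first stage."""
    tau = [np.ones_like(c) for c in costs]              # pheromone per edge
    best_path, best_cost = None, np.inf
    for _ in range(n_iter):
        paths, path_costs = [], []
        for _ in range(n_ants):
            node, path, cost = 0, [], 0.0
            for s, c in enumerate(costs):
                desirability = tau[s][node] ** alpha * (1.0 / c[node]) ** beta
                probs = desirability / desirability.sum()
                nxt = rng.choice(len(probs), p=probs)    # probabilistic transition
                path.append((s, node, nxt))
                cost += c[node, nxt]
                node = nxt
            paths.append(path)
            path_costs.append(cost)
            if cost < best_cost:
                best_path, best_cost = path, cost
        for s in range(len(costs)):                      # pheromone evaporation
            tau[s] *= 1.0 - rho
        for path, cost in zip(paths, path_costs):        # pheromone deposit
            for s, i, j in path:
                tau[s][i, j] += Q / cost
    return best_path, best_cost

# Example: 4 stages of 3 nodes each, random positive edge costs
costs = [rng.uniform(1.0, 10.0, size=(3, 3)) for _ in range(3)]
print(aco_multistage(costs))
```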
Abstract:
Recent decreases in costs, and improvements in performance, of silicon array detectors open a range of potential applications of relevance to plant physiologists, associated with spectral analysis in the visible and short-wave near infra-red (far-red) spectrum. The performance characteristics of three commercially available ‘miniature’ spectrometers based on silicon array detectors operating in the 650–1050-nm spectral region (MMS1 from Zeiss, S2000 from Ocean Optics, and FICS from Oriel, operated with a Larry detector) were compared with respect to the application of non-invasive prediction of sugar content of fruit using near infra-red spectroscopy (NIRS). The FICS–Larry gave the best wavelength resolution; however, the narrow slit and small pixel size of the charge-coupled device detector resulted in a very low sensitivity, and this instrumentation was not considered further. Wavelength resolution was poor with the MMS1 relative to the S2000 (e.g. full width at half maximum of the 912 nm Hg peak, 13 and 2 nm for the MMS1 and S2000, respectively), but the large pixel height of the array used in the MMS1 gave it a sensitivity comparable to that of the S2000. The signal-to-signal standard error ratio of spectra was greater by an order of magnitude with the MMS1, relative to the S2000, at both near-saturation and low light levels. Calibrations were developed using reflectance spectra of filter paper soaked in a range of sucrose concentrations (0–20% w/v), using a modified partial least squares procedure. Calibrations developed with the MMS1 were superior to those developed using the S2000 (e.g. coefficient of correlation of 0.90 and 0.62, and standard error of cross-validation of 1.9 and 5.4%, respectively), indicating the importance of a high signal-to-noise ratio, rather than wavelength resolution, for calibration accuracy. The design of a bench-top assembly using the MMS1 for the non-invasive assessment of mesocarp sugar content of (intact) melon fruit is reported in terms of light source and angle between detector and light source, and optimisation of math treatment (derivative condition and smoothing function).
Abstract:
Multi-objective optimization is an active field of research with broad applicability in aeronautics. This report details a variant of the original NSGA-II software aimed at improving the performance of this widely used genetic algorithm in finding the optimal Pareto front of a multi-objective optimization problem, for use in UAV and aircraft design and optimisation. The original NSGA-II works on a population of predetermined, constant size, and its computational cost to evaluate one generation is O(mn^2), where m is the number of objective functions and n is the population size. The basic idea motivating this work is to reduce the computational cost of the NSGA-II algorithm by making it work on a population of variable size, in order to obtain better convergence towards the Pareto front in less time. In this work, several test functions are run with both the original NSGA-II and the VPNSGA-II algorithms; each test is timed in order to measure the computational cost of each trial, and the results are compared.
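The O(mn^2) per-generation cost quoted above comes from the pairwise dominance checks in NSGA-II's fast non-dominated sorting, which can be sketched as follows; this is a generic reimplementation for illustration, not the report's VPNSGA-II code.

```python
import numpy as np

def fast_non_dominated_sort(objectives):
    """Rank a population into Pareto fronts (NSGA-II style).
    objectives: (n, m) array, all objectives to be minimized.
    The pairwise dominance checks give the O(m * n^2) cost."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # indices that solution i dominates
    dom_count = np.zeros(n, dtype=int)      # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(objectives[i] <= objectives[j]) and np.any(objectives[i] < objectives[j]):
                dominated_by[i].append(j)
            elif np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Example: 2 objectives, 6 candidate designs
pts = np.array([[1, 5], [2, 3], [3, 1], [4, 4], [2, 6], [5, 2]])
print(fast_non_dominated_sort(pts))      # -> [[0, 1, 2], [3, 4, 5]]
```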
Calciothermic reduction of TiO2: A diagrammatic assessment of the thermodynamic limit of deoxidation
Abstract:
Calciothermic reduction of TiO2 provides a potentially low-cost route to titanium production. Presented in this article is a suitably designed diagram, useful for assessing the degree of reduction of TiO2 and the residual oxygen contamination in the metal as a function of reduction temperature and other process parameters. The oxygen chemical potential diagram à la Ellingham-Richardson-Jeffes is useful for visualizing the thermodynamics of reduction reactions at high temperatures. Although traditionally the diagram depicts oxygen potentials corresponding to the oxidation of different metals to their corresponding oxides, or of lower oxides to higher oxides, oxygen potentials associated with solution phases at constant composition can be readily superimposed. The usefulness of the diagram for an insightful analysis of calciothermic reduction, either direct or through an electrochemical process, is discussed, and possible process variations as well as modeling and optimization strategies are identified.
Abstract:
During the past few decades, the development of efficient methods to solve dynamic facility layout problems has received significant attention from practitioners and researchers. More specifically, meta-heuristic algorithms, especially the genetic algorithm, have proven increasingly helpful in generating sub-optimal solutions for large-scale dynamic facility layout problems. Nevertheless, the uncertainty of the manufacturing factors, in addition to the scale of the layout problem, calls for a combined genetic algorithm-robust optimization approach that can provide a single layout design. The present research aims to devise a customized permutation-based robust genetic algorithm for dynamic manufacturing environments that generates a single robust layout for all manufacturing periods. The numerical results of the proposed robust genetic algorithm indicate significant cost improvements compared to conventional genetic algorithm methods and a selection of other heuristic and meta-heuristic techniques.
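A minimal sketch of a permutation-encoded GA in this spirit: a single department-to-location permutation is scored against the flow matrices of all periods, so the evolved layout is "robust" across the planning horizon. The operators (order crossover, swap mutation, truncation selection) and the data are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

def layout_cost(perm, flows, dist):
    """Material-handling cost of placing department i at location perm[i],
    summed over all planning periods (one flow matrix per period)."""
    loc = np.asarray(perm)
    return sum(np.sum(f * dist[np.ix_(loc, loc)]) for f in flows)

def order_crossover(p1, p2):
    """OX crossover for permutation chromosomes."""
    n = len(p1)
    a, b = sorted(rng.choice(n, size=2, replace=False))
    child = [-1] * n
    child[a:b] = p1[a:b]
    fill = [g for g in p2 if g not in child]
    for i, g in zip([i for i in range(n) if child[i] == -1], fill):
        child[i] = g
    return child

def robust_ga(flows, dist, pop_size=60, generations=300, mut_rate=0.2):
    n = dist.shape[0]
    pop = [list(rng.permutation(n)) for _ in range(pop_size)]
    for _ in range(generations):
        costs = np.array([layout_cost(p, flows, dist) for p in pop])
        elite = [pop[i] for i in np.argsort(costs)[: pop_size // 2]]  # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            i, j = rng.choice(len(elite), size=2, replace=False)
            child = order_crossover(elite[i], elite[j])
            if rng.random() < mut_rate:                               # swap mutation
                a, b = rng.choice(n, size=2, replace=False)
                child[a], child[b] = child[b], child[a]
            children.append(child)
        pop = elite + children
    costs = np.array([layout_cost(p, flows, dist) for p in pop])
    return pop[int(np.argmin(costs))], float(costs.min())

# Hypothetical data: 6 departments, 3 periods of flow, grid-distance matrix
n = 6
flows = [rng.integers(0, 20, size=(n, n)) for _ in range(3)]
xy = np.array([[i % 3, i // 3] for i in range(n)])
dist = np.abs(xy[:, None, :] - xy[None, :, :]).sum(axis=2)
best, cost = robust_ga(flows, dist)
print("robust layout:", best, "total cost:", cost)
```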
Abstract:
Demagnetization to a zero remanent value or to a predetermined value is of interest to magnet manufacturers and material users. Conventional methods of demagnetization, using a varying alternating demagnetizing field under a damped oscillatory or conveyor system, result in either high demagnetization cost or large power dissipation. A simple technique using thyristors is presented for demagnetizing the material. Power consumption occurs mainly in the first two half-cycles of the applied voltage; hence power dissipation is greatly reduced. A calculation of the optimum thyristor triggering angle for demagnetizing high-coercivity materials is also presented.