905 results for Optimization algorithms
Abstract:
Traditionally, an instruction decoder is designed as a monolithic structure, which inhibits leakage energy optimization. In this paper, we consider a split instruction decoder that enables leakage energy optimization. We also propose a compiler scheduling algorithm that exploits instruction slack to increase the simultaneous active and idle durations in the instruction decoder. The proposed compiler-assisted scheme obtains a further 14.5% reduction in the energy consumption of the instruction decoder over a hardware-only scheme for a VLIW architecture. The benefits are 17.3% and 18.7% in the context of 2-clustered and 4-clustered VLIW architectures, respectively.
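The abstract does not spell out the scheduling algorithm, but the idea of exploiting slack to cluster decoder activity can be illustrated with a minimal greedy sketch; the interval model of slack and all names below are illustrative assumptions, not the paper's method:

```python
def compact_schedule(ops):
    """Assign each op an issue cycle inside its slack window
    (earliest, latest) so that active cycles cluster together,
    lengthening the contiguous idle runs a decoder could sleep through."""
    active = set()
    for earliest, latest in sorted(ops, key=lambda w: w[1]):
        # reuse an already-active cycle in the window if one exists,
        # otherwise defer as late as the slack allows
        reuse = [c for c in active if earliest <= c <= latest]
        active.add(min(reuse) if reuse else latest)
    return active

# Example: three ops with overlapping windows collapse onto one active cycle.
print(sorted(compact_schedule([(0, 4), (2, 5), (3, 4), (7, 9)])))  # [4, 9]
```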
Abstract:
Four hybrid algorithms have been developed for the solution of the unit commitment problem. They use simulated annealing as one of the constituent techniques and produce lower-cost schedules; two of them have less overhead than other soft-computing techniques. They are also more robust to the choice of parameters. A special technique avoids generating infeasible schedules, thus reducing computation time.
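As a rough illustration of the simulated-annealing ingredient (the hybrid algorithms and the paper's feasibility technique are not specified in the abstract), here is a minimal SA loop for a toy unit-commitment instance; the cost model, neighbourhood move, and rejection-based feasibility check are all simplifying assumptions:

```python
import math, random

CAP    = [300, 200, 100]         # unit capacities (MW), assumed
COST   = [18, 22, 30]            # running cost per MW-hour, assumed
START  = [500, 200, 50]          # startup costs, assumed
DEMAND = [250, 420, 560, 380]    # hourly demand (MW), assumed

def feasible(s):
    # committed capacity must cover demand in every hour
    return all(sum(c for c, on in zip(CAP, hour) if on) >= d
               for hour, d in zip(s, DEMAND))

def cost(s):
    total, prev = 0.0, [0] * len(CAP)
    for hour in s:
        # simplified: committed units run at full capacity
        total += sum(COST[i] * CAP[i] for i, on in enumerate(hour) if on)
        total += sum(START[i] for i, on in enumerate(hour) if on and not prev[i])
        prev = hour
    return total

def anneal(T0=1000.0, alpha=0.9995, iters=20000):
    s = [[1, 1, 1] for _ in DEMAND]          # fully committed start: feasible
    cur_c = cost(s)
    best, best_c = [h[:] for h in s], cur_c
    for k in range(iters):
        T = T0 * alpha ** k
        h, u = random.randrange(len(DEMAND)), random.randrange(len(CAP))
        cand = [row[:] for row in s]
        cand[h][u] ^= 1                      # flip one unit-hour decision
        if not feasible(cand):               # never accept infeasible schedules
            continue
        c = cost(cand)
        if c < cur_c or random.random() < math.exp((cur_c - c) / T):
            s, cur_c = cand, c
            if c < best_c:
                best, best_c = [r[:] for r in cand], c
    return best, best_c
```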
Abstract:
Reactive pulsed laser deposition is a single-step process wherein the ablated elemental metal reacts with a low-pressure ambient gas to form a compound. We report here a secondary ion mass spectrometry based analytical methodology for conducting a minimum number of experiments to arrive at the optimal process parameters for obtaining high-quality TiN thin films. The quality of these films was confirmed by electron microscopic analysis. This methodology can be extended to the optimization of other process parameters and materials.
Abstract:
We propose two algorithms for Q-learning that use the two-timescale stochastic approximation methodology. The first updates the Q-values of all feasible state–action pairs at each instant, while the second updates the Q-values of states with actions chosen according to the 'current' randomized policy updates. A proof of convergence of the algorithms is given. Finally, numerical experiments using the proposed algorithms on an application of routing in communication networks are presented for a few different settings.
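A minimal, noise-free sketch of the first (synchronous) variant is given below; the tabular MDP arrays, the step-size schedules, and the way the randomized policy tracks the greedy one are assumptions chosen to satisfy the usual two-timescale conditions, not the paper's exact recursions:

```python
import numpy as np

def two_timescale_q(P, R, gamma=0.9, T=5000):
    """Q-values move on the fast timescale a_n; the randomized policy
    tracks the greedy policy on the slower timescale b_n.
    P has shape (nS, nA, nS); R has shape (nS, nA)."""
    nS, nA = R.shape
    Q = np.zeros((nS, nA))
    pi = np.full((nS, nA), 1.0 / nA)       # randomized policy
    for n in range(1, T + 1):
        a_n, b_n = 1.0 / n ** 0.6, 1.0 / n  # b_n / a_n -> 0: two timescales
        # fast timescale: expected Q update over all state-action pairs
        V = Q.max(axis=1)
        Q += a_n * (R + gamma * P @ V - Q)
        # slow timescale: policy drifts toward the current greedy policy
        greedy = np.eye(nA)[Q.argmax(axis=1)]
        pi += b_n * (greedy - pi)
    return Q, pi

# Toy example: random 4-state, 2-action MDP with rewards to maximize.
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(4), size=(4, 2))
R = rng.random((4, 2))
Q, pi = two_timescale_q(P, R)
```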
Abstract:
Next-generation wireless systems employ an orthogonal frequency division multiplexing (OFDM) physical layer owing to the high data rates that are possible without an increase in bandwidth. While TCP performance has been extensively studied in its interaction with link-layer ARQ, little attention has been given to the interaction of TCP with the MAC layer. In this work, we explore cross-layer interactions in an OFDM-based wireless system, specifically focusing on channel-aware resource allocation strategies at the MAC layer and their impact on TCP congestion control. Both efficiency- and fairness-oriented MAC resource allocation strategies were designed for evaluating the performance of TCP. The former schemes try to exploit channel diversity to maximize system throughput, while the latter try to provide a fair resource allocation over a sufficiently long time duration. From a TCP goodput standpoint, we show that the class of MAC algorithms that incorporate a fairness metric and consider the backlog outperforms the channel-diversity-exploiting schemes.
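The two scheduler families can be sketched as per-frame user-selection rules; the proportional-fair metric and the backlog check below are standard constructions assumed for illustration, not necessarily the paper's exact schemes:

```python
def pick_user(rates, avg_tput, backlog, fairness=True):
    """rates: per-user instantaneous channel rates this frame;
    avg_tput: smoothed throughput each user has received so far;
    backlog: queued bytes per user."""
    users = range(len(rates))
    if not fairness:
        # efficiency-oriented: ride channel diversity, max instantaneous rate
        return max(users, key=lambda u: rates[u])
    # fairness-oriented: proportional-fair metric, restricted to users
    # that actually have backlogged data to send
    eligible = [u for u in users if backlog[u] > 0] or list(users)
    return max(eligible, key=lambda u: rates[u] / max(avg_tput[u], 1e-9))

print(pick_user([5.0, 2.0, 3.0], [4.0, 0.5, 1.0], [0, 10, 4]))         # 1
print(pick_user([5.0, 2.0, 3.0], [4.0, 0.5, 1.0], [0, 10, 4], False))  # 0
```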
Abstract:
We consider the problem of computing an approximate minimum cycle basis of an undirected edge-weighted graph G with m edges and n vertices; the extension to directed graphs is also discussed. In this problem, a {0,1} incidence vector is associated with each cycle and the vector space over F_2 generated by these vectors is the cycle space of G. A set of cycles is called a cycle basis of G if it forms a basis for its cycle space. A cycle basis where the sum of the weights of the cycles is minimum is called a minimum cycle basis of G. Cycle bases of low weight are useful in a number of contexts, e.g. the analysis of electrical networks, structural engineering, chemistry, and surface reconstruction. We present two new algorithms to compute an approximate minimum cycle basis. For any integer k >= 1, we give (2k-1)-approximation algorithms with expected running time O(kmn^(1+2/k) + mn^((1+1/k)(ω-1))) and deterministic running time O(n^(3+2/k)), respectively. Here ω is the best exponent of matrix multiplication; it is presently known that ω < 2.376. Both algorithms are o(m^ω) for dense graphs. This is the first time that any algorithm which computes sparse cycle bases with a guarantee drops below the Θ(m^ω) bound. We also present a 2-approximation algorithm with O(m^ω √(n log n)) expected running time, a linear-time 2-approximation algorithm for planar graphs, and an O(n^3)-time 2.42-approximation algorithm for the complete Euclidean graph in the plane.
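For intuition, the simplest cycle basis to construct is the fundamental basis induced by a spanning tree: one cycle per non-tree edge. It is a valid basis of the cycle space but generally far from minimum weight, which is exactly what the approximation algorithms above improve on. A minimal sketch, with the graph representation assumed:

```python
from collections import deque

def fundamental_cycle_basis(n, edges):
    """edges: list of (u, v, w) on vertices 0..n-1 (connected graph).
    Returns one (cycle_as_edge_indices, weight) pair per non-tree edge
    of a BFS spanning tree rooted at vertex 0."""
    adj = [[] for _ in range(n)]
    for i, (u, v, w) in enumerate(edges):
        adj[u].append((v, i)); adj[v].append((u, i))
    parent, tree, order = {0: (None, None)}, set(), deque([0])
    while order:                       # BFS spanning tree
        u = order.popleft()
        for v, i in adj[u]:
            if v not in parent:
                parent[v] = (u, i); tree.add(i); order.append(v)

    def root_path(x):                  # edge indices from x up to the root
        p = []
        while parent[x][0] is not None:
            p.append(parent[x][1]); x = parent[x][0]
        return p

    basis = []
    for i, (u, v, w) in enumerate(edges):
        if i in tree:
            continue
        pu, pv = root_path(u), root_path(v)
        # symmetric difference: shared ascent to the root cancels out
        cycle = [i] + [e for e in pu + pv if (e in pu) != (e in pv)]
        basis.append((cycle, sum(edges[e][2] for e in cycle)))
    return basis
```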
Abstract:
Optimizing the energy consumption of existing synchronization mechanisms can lead to substantial gains in network lifetime in wireless sensor networks (WSNs). In this paper, we analyze ERBS and TPSN, two existing synchronization algorithms for WSNs which use widely different approaches, and compare their performance in large-scale WSNs, each consisting of a different type of platform and with varying node density. We then propose a novel algorithm, PROBESYNC, which takes advantage of the differences in the power required to transmit and receive a message in ERBS and TPSN and mitigates the shortcomings of each of these algorithms. This leads to considerable improvement in energy conservation and enhanced lifetime of large-scale WSNs.
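The transmit/receive asymmetry the abstract alludes to can be made concrete with a back-of-the-envelope message-count model; the per-message energies and message counts below are illustrative assumptions (receiver-receiver sync in the style of ERBS vs. sender-receiver handshakes in the style of TPSN), not measured values from the paper:

```python
E_TX, E_RX = 50e-6, 30e-6   # assumed radio energy per message (J)

def tpsn_energy(n):
    """Sender-receiver sync: a two-way handshake per spanning-tree edge
    (n - 1 edges for n nodes); each message costs one TX and one RX."""
    return (n - 1) * 2 * (E_TX + E_RX)

def erbs_energy(n):
    """Receiver-receiver sync: one reference beacon (1 TX, n - 1 RX),
    then receivers exchange observed timestamps pairwise.  In practice
    the exchange can be limited to neighbours; this is the worst case."""
    beacon = E_TX + (n - 1) * E_RX
    exchange = (n - 1) * (n - 2) // 2 * 2 * (E_TX + E_RX)
    return beacon + exchange

for n in (10, 50):
    print(n, tpsn_energy(n), erbs_energy(n))
```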
Abstract:
In this paper, we consider the machining-condition optimization models presented in earlier studies. Finding the optimal combination of machining conditions within the constraints is a difficult task, and earlier studies used standard optimization methods for it. However, the non-linear nature of the objective function and the constraints that must be satisfied make standard optimization methods difficult to apply. We therefore present a real-coded genetic algorithm (RCGA) to find the optimal combination of machining conditions, and discuss in detail issues such as solution representation, crossover operators, and the repair algorithm. We also present the results obtained for these models using the RCGA and discuss its advantages for these problems. From the results obtained, we conclude that the RCGA is reliable and accurate for solving machining-condition optimization models.
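A minimal real-coded GA sketch with blend (BLX-α) crossover and a clipping repair step is shown below; the operator choices, parameters, and the toy fitness function are assumptions for illustration, not necessarily the ones used in the paper:

```python
import random

def rcga(fitness, bounds, pop_size=30, gens=100, alpha=0.5, mut=0.1):
    """Minimize fitness over real-valued genes constrained to bounds."""
    dim = len(bounds)
    def repair(x):   # pull genes back inside the machining constraints
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]       # elitist truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p, q = random.sample(survivors, 2)
            # BLX-alpha: each gene sampled from the parents' interval,
            # widened by alpha on both sides
            child = [a + random.uniform(-alpha, 1 + alpha) * (b - a)
                     for a, b in zip(p, q)]
            if random.random() < mut:         # uniform mutation of one gene
                i = random.randrange(dim)
                child[i] = random.uniform(*bounds[i])
            children.append(repair(child))
        pop = survivors + children
    return min(pop, key=fitness)

# Toy use: a quadratic stand-in for machining cost over
# (cutting speed, feed rate) within assumed bounds.
best = rcga(lambda x: (x[0] - 150) ** 2 + 40 * (x[1] - 0.3) ** 2,
            bounds=[(50.0, 400.0), (0.05, 1.0)])
print(best)
```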
Abstract:
In this study, the stability of an anchored cantilever sheet pile wall in sandy soils is investigated using reliability analysis. Target stability is formulated as an optimization problem in the framework of an inverse first-order reliability method. A sensitivity analysis is conducted to investigate the effect of the parameters influencing the stability of the sheet pile wall. The backfill soil properties, the soil-steel pile interface friction angle, the depth of the water table from the top of the sheet pile wall, the total depth of embedment below the dredge line, the yield strength of the steel, the section modulus of the steel sheet pile, and the anchor pull are all treated as random variables. The sheet pile wall system is modeled as a series combination of failure modes. Penetration depth, anchor pull, and section modulus are calculated for various target component and system reliability indices based on three limit states: rotational failure about the position of the anchor rod, expressed in terms of the moment ratio; the sliding failure mode, expressed in terms of the force ratio; and flexural failure of the steel sheet pile wall, expressed in terms of the section modulus ratio. An attempt is made to propose reliability-based design charts considering the failure criteria as well as the variability in the parameters. The results of the study are compared with studies in the literature.
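At the core of any first-order reliability method is the computation of a reliability index; the inverse FORM of the abstract wraps such a computation in a search over a design parameter (e.g. penetration depth) until the index hits its target. A minimal Hasofer-Lind/Rackwitz-Fiessler iteration in standard normal space is sketched below, with a toy linear limit state standing in for the moment-ratio criterion (all coefficients assumed):

```python
import numpy as np

def hlrf_beta(g, grad_g, dim, iters=50, tol=1e-8):
    """HL-RF iteration: find the design point u* on g(u) = 0 nearest
    the origin in standard normal space; beta = ||u*||."""
    u = np.zeros(dim)
    for _ in range(iters):
        dg = grad_g(u)
        u_new = (dg @ u - g(u)) * dg / (dg @ dg)
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    return np.linalg.norm(u)

# Toy limit state: standardized margin of resisting over overturning
# moment, g(u) = 3.0 - 1.2*u0 - 0.8*u1 (assumed coefficients).
g = lambda u: 3.0 - 1.2 * u[0] - 0.8 * u[1]
grad = lambda u: np.array([-1.2, -0.8])
print(hlrf_beta(g, grad, 2))   # ~ 3.0 / sqrt(1.2**2 + 0.8**2) ≈ 2.08
```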
Abstract:
Radiation therapy (RT) currently plays a significant role in the curative treatment of several cancers. External beam RT is carried out mostly using megavoltage beams from linear accelerators. Tumor eradication and normal-tissue complications correlate with the dose absorbed in tissues. Normally this dependence is steep, and it is crucial that the actual dose within the patient corresponds accurately to the planned dose. All factors in an RT procedure involve uncertainties, requiring strict quality assurance. From a hospital physicist's point of view, technical quality control (QC), dose calculations, and methods for verifying the correct treatment location are the most important subjects. The most important factor in technical QC is verifying that the radiation production of an accelerator, called its output, stays within narrow acceptable limits. The output measurements are carried out according to a locally chosen dosimetric QC program defining the measurement time interval and action levels. Dose calculation algorithms need to be configured for the accelerators using measured beam data, and the uncertainty of these data sets a limit on the best achievable calculation accuracy. All these dosimetric measurements require considerable experience, are laborious, take up resources needed for treatments, and are prone to several random and systematic sources of error. Appropriate verification of the treatment location is more important in intensity-modulated radiation therapy (IMRT) than in conventional RT, owing to the steep dose gradients produced within or close to healthy tissues located only a few millimetres from the targeted volume.

This thesis concentrated on investigating the quality of dosimetric measurements, the efficacy of dosimetric QC programs, the verification of measured beam data, and the effect of positional errors on the dose received by the major salivary glands in head-and-neck IMRT. A method was developed for estimating the effect of using different dosimetric QC programs on the overall uncertainty of dose, and data were provided to facilitate the choice of a sufficient QC program. The method takes into account the local output stability and the reproducibility of the dosimetric QC measurements; a method based on model fitting of the QC measurement results was proposed for estimating both of these factors. The reduction of random measurement errors and the optimization of the QC procedure were also investigated, and a method and suggestions were presented for these purposes. The accuracy of beam data was evaluated in Finnish RT centres, a sufficient accuracy level was estimated for the beam data, and a method based on the use of reference beam data was developed for the QC of beam data. Dosimetric and geometric accuracy requirements were evaluated for head-and-neck IMRT when the function of the major salivary glands is intended to be spared; these criteria are based on the dose response obtained for the glands.

Random measurement errors could be reduced, enabling the lowering of action levels and the prolongation of the measurement time interval from 1 month to as much as 6 months while maintaining dose accuracy. The combined effect of the proposed methods, suggestions, and criteria was found to facilitate the avoidance of maximal dose errors of up to about 8%. In addition, their use may make the strictest recommended overall dose accuracy level of 3% (1 SD) achievable.
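One plausible reading of the "model fitting of QC results" idea is separating slow output drift (stability) from measurement scatter (reproducibility); the simple linear-trend fit below is assumed purely for illustration, as the thesis's actual model is not given in the abstract:

```python
import numpy as np

def fit_output_drift(days, output):
    """Fit output ≈ a + b*days to accelerator output QC readings: the
    slope b estimates output drift (stability), while the residual
    standard deviation estimates measurement reproducibility."""
    A = np.vstack([np.ones_like(days), days]).T
    coeffs, *_ = np.linalg.lstsq(A, output, rcond=None)
    resid = output - A @ coeffs
    return coeffs[1], resid.std(ddof=2)

# Synthetic example: 0.02 %/day drift plus 0.3 % measurement noise.
rng = np.random.default_rng(1)
t = np.arange(0.0, 180.0, 7.0)        # weekly QC over ~6 months
y = 100.0 + 0.02 * t + rng.normal(0.0, 0.3, t.size)
drift_per_day, repro_sd = fit_output_drift(t, y)
print(drift_per_day, repro_sd)
```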
Abstract:
Due to their non-stationarity, finite-horizon Markov decision processes (FH-MDPs) have one probability transition matrix per stage. Thus the curse of dimensionality affects FH-MDPs more severely than infinite-horizon MDPs. We propose two parametrized 'actor-critic' algorithms to compute optimal policies for FH-MDPs. Both algorithms use the two-timescale stochastic approximation technique, simultaneously performing gradient search in the parametrized policy space (the 'actor') on a slower timescale and learning the policy gradient (the 'critic') via a faster recursion. This is in contrast to methods where the critic recursions learn the cost-to-go proper. We show convergence w.p. 1 to a set satisfying the necessary condition for constrained optima. The proposed parameterization is for FH-MDPs with compact action sets, although certain exceptions can be handled. Further, a third algorithm for the stochastic control of stopping-time processes is presented. We explain why current policy evaluation methods do not work as the critic for the proposed actor recursion. Simulation results from flow control in communication networks attest to the performance advantages of all three algorithms.
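The essential ingredients, per-stage parameters and two step-size schedules, can be sketched in a tabular softmax setting; this is an assumed simplification, not the paper's compact-action-set parameterization or its exact critic:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fh_actor_critic(P, C, H, episodes=5000, seed=0):
    """P: (nS, nA, nS) transitions, C: (nS, nA) per-stage costs, H:
    horizon.  One actor block and one critic table per stage, mirroring
    the per-stage transition matrices of an FH-MDP."""
    rng = np.random.default_rng(seed)
    nS, nA = C.shape
    theta = np.zeros((H, nS, nA))   # actor: per-stage action preferences
    Q = np.zeros((H, nS, nA))       # critic: per-stage Q estimates
    for n in range(1, episodes + 1):
        fast, slow = 1.0 / n ** 0.6, 1.0 / n   # two-timescale step sizes
        s = 0
        for h in range(H):
            pi = softmax(theta[h, s])
            a = rng.choice(nA, p=pi)
            s2 = rng.choice(nS, p=P[s, a])
            cont = 0.0 if h + 1 == H else softmax(theta[h + 1, s2]) @ Q[h + 1, s2]
            # critic (faster recursion): stagewise temporal-difference step
            Q[h, s, a] += fast * (C[s, a] + cont - Q[h, s, a])
            # actor (slower recursion): softmax policy-gradient descent on cost
            theta[h, s] -= slow * pi * (Q[h, s] - pi @ Q[h, s])
            s = s2
    return theta, Q
```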
Abstract:
Tin monosulfide (SnS) films with varying distance between the source and substrate (DSS) were prepared by the thermal evaporation technique at a temperature of 300 °C to investigate the effect of the DSS on the physical properties. The physical properties of the as-deposited films are strongly influenced by the variation of the DSS. The thickness, Sn to S at.% ratio, grain size, and root-mean-square (rms) roughness of the films decreased with increasing DSS. The films grown at DSS = 10 and 15 cm exhibited a nearly single-crystalline nature with low electrical resistivity. From Hall-effect measurements, it is observed that the films grown at DSS <= 15 cm have p-type conduction and the films grown at larger distances have n-type conduction, due to the variation of the Sn/S ratio. The films grown at DSS = 15 cm showed a higher optical band gap of 1.36 eV compared with the films grown at other distances. The effect of the DSS on the physical properties of SnS films is discussed and reported.
Abstract:
We present a generic method/model for the multi-objective design optimization of laminated composite components, based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, the stacking sequence (the orientations of the layers), and the thickness of each layer. Classical lamination theory is utilized to determine the stresses in the component, and the design is evaluated based on three failure criteria: the failure-mechanism-based criterion, the maximum-stress criterion, and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial, and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented.
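The co-evolutionary idea in VEPSO is that each objective gets its own swarm, and each swarm's particles are steered by the best particle of the other swarm. A minimal continuous-variable sketch follows (the paper's version handles discrete laminate variables; the two toy objectives standing in for weight and cost are assumptions):

```python
import random

def vepso(objectives, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """One swarm per objective; the social term of swarm i follows the
    global best of the other swarm, coupling the objectives."""
    dim = len(bounds)
    def particle():
        x = [random.uniform(lo, hi) for lo, hi in bounds]
        return {"x": x[:], "v": [0.0] * dim, "best": x[:]}
    swarms = [[particle() for _ in range(n)] for _ in objectives]
    gbest = [min(s, key=lambda p: f(p["x"]))["x"][:]
             for f, s in zip(objectives, swarms)]
    for _ in range(iters):
        for i, (f, swarm) in enumerate(zip(objectives, swarms)):
            guide = gbest[(i + 1) % len(swarms)]   # other swarm's leader
            for p in swarm:
                for d in range(dim):
                    lo, hi = bounds[d]
                    p["v"][d] = (w * p["v"][d]
                                 + c1 * random.random() * (p["best"][d] - p["x"][d])
                                 + c2 * random.random() * (guide[d] - p["x"][d]))
                    p["x"][d] = min(max(p["x"][d] + p["v"][d], lo), hi)
                if f(p["x"]) < f(p["best"]):
                    p["best"] = p["x"][:]
            gbest[i] = min(swarm, key=lambda q: f(q["best"]))["best"][:]
    return gbest

# Toy bi-objective stand-ins for laminate weight and cost.
weight = lambda x: x[0] ** 2 + x[1] ** 2
cost = lambda x: (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2
print(vepso([weight, cost], bounds=[(-3.0, 3.0), (-3.0, 3.0)]))
```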
Abstract:
We describe a real-time system that supports the design of optimal flight paths over terrains. These paths either maximize view coverage or minimize the vehicle's exposure to the ground. A volume-rendered display of multi-viewpoint visibility and a haptic interface assist the user in selecting, assessing, and refining the computed flight path. We design a three-dimensional scalar field representing the visibility of a point above the terrain, describe an efficient algorithm to compute the visibility field, and develop visual and haptic schemes to interact with the visibility field. Given the origin and destination, the desired flight path is computed using an efficient simulation of an articulated rope under the influence of the visibility gradient. The simulation framework also accepts user input, via the haptic interface, thereby allowing manual refinement of the flight path.
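The rope simulation can be sketched as gradient descent on a sampled scalar field, with neighbour springs keeping the path connected; the 2D field below is a synthetic stand-in for the paper's 3D visibility field, and all parameters are assumptions:

```python
import numpy as np

def relax_rope(field, start, goal, nodes=40, spring=0.25, step=0.5, iters=500):
    """Endpoints stay fixed; interior nodes descend the field's numerical
    gradient (lower visibility = lower exposure) while a discrete
    Laplacian spring term keeps the rope smooth and connected."""
    pts = np.linspace(np.asarray(start, float), np.asarray(goal, float), nodes)
    gy, gx = np.gradient(field)
    for _ in range(iters):
        ij = np.clip(pts.round().astype(int), 0, np.array(field.shape) - 1)
        grad = np.stack([gy[ij[:, 0], ij[:, 1]],
                         gx[ij[:, 0], ij[:, 1]]], axis=1)
        lap = np.zeros_like(pts)
        lap[1:-1] = pts[:-2] + pts[2:] - 2.0 * pts[1:-1]
        pts[1:-1] += spring * lap[1:-1] - step * grad[1:-1]
    return pts

# Synthetic "visibility" field with a highly exposed ridge in the middle;
# the relaxed rope bends around it.
yy, xx = np.mgrid[0:100, 0:100]
vis = np.exp(-((xx - 50.0) ** 2) / 200.0)
path = relax_rope(vis, start=(5, 5), goal=(95, 95))
```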