994 results for Time minimization
Abstract:
Random Access Scan, which addresses individual flip-flops in a design through a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume, and test power dissipation compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need be modified to create the next test vector; unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that takes advantage of basis vectors and linear algebra to further optimize test application in RAS by performing write operations on multiple bits consecutively. Simulations on benchmark circuits show an average 2-3 times speedup in test write time compared to conventional RAS.
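The basis-vector idea can be illustrated with a small, hedged sketch: over GF(2), the bit flips needed to turn the current scan word into the next test vector can sometimes be covered by a few multi-bit basis-vector writes instead of one write per differing bit. The decoder basis set, the word contents, and the brute-force search below are illustrative assumptions, not the paper's actual decoder.

```python
from itertools import combinations

def xor_vecs(vectors, n):
    """XOR together 0/1 vectors of length n."""
    out = [0] * n
    for v in vectors:
        out = [a ^ b for a, b in zip(out, v)]
    return out

def fewest_basis_writes(basis, diff):
    """Smallest subset of basis vectors whose XOR equals diff (brute force over GF(2))."""
    n = len(diff)
    for size in range(len(basis) + 1):
        for subset in combinations(range(len(basis)), size):
            if xor_vecs([basis[i] for i in subset], n) == list(diff):
                return subset
    return None   # diff is not in the span of the basis

current = [1, 0, 1, 1, 0, 0, 1, 0]          # present contents of one scan word
target  = [1, 1, 1, 0, 0, 0, 1, 0]          # next test vector for that word
diff    = [a ^ b for a, b in zip(current, target)]   # bits that must flip
basis   = [[1, 1, 0, 0, 0, 0, 0, 0],        # hypothetical decoder basis vectors
           [0, 0, 1, 1, 0, 0, 0, 0],
           [0, 1, 0, 1, 0, 0, 0, 0]]
print(fewest_basis_writes(basis, diff))     # (2,): one basis-vector write flips both bits
```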
Abstract:
In this paper we consider the implementation of time- and energy-efficient trajectories on a test-bed autonomous underwater vehicle. The trajectories are closely connected to the results of applying the maximum principle to the controlled mechanical system. We use a numerical algorithm to compute efficient trajectories, designed using geometric control theory, to optimize a given cost function. Experimental results are shown for the time minimization problem.
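A minimal statement of the underlying time-minimization problem, as it is usually posed via the maximum principle, may help fix ideas; the notation is generic (sign conventions vary across texts) and is not taken from the paper itself.

```latex
\begin{align*}
&\min_{u(\cdot)} \; T = \int_0^T 1\,dt
\quad \text{subject to } \dot{x} = f(x,u),\; x(0)=x_0,\; x(T)=x_f,\; u \in U,\\
&H(x,\lambda,u) = 1 + \lambda^{\top} f(x,u),\qquad
\dot{\lambda} = -\frac{\partial H}{\partial x},\\
&u^*(t) = \arg\min_{u \in U} H\bigl(x^*(t),\lambda(t),u\bigr),\qquad
H\bigl(x^*(t),\lambda(t),u^*(t)\bigr) = 0 \ \text{(free final time)}.
\end{align*}
```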
Abstract:
In this note, we consider the scheduling problem of minimizing the sum of the weighted completion times on a single machine with one non-availability interval on the machine under the non-resumable scenario. Together with a recent 2-approximation algorithm designed by Kacem [I. Kacem, Approximation algorithm for the weighted flow-time minimization on a single machine with a fixed non-availability interval, Computers & Industrial Engineering 54 (2008) 401–410], this paper is the first successful attempt to develop a constant-ratio approximation algorithm for this problem. We present two approaches to designing such an algorithm. Our best algorithm guarantees a worst-case performance ratio of 2+ε. © 2008 Elsevier B.V. All rights reserved.
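The objective being approximated can be made concrete by evaluating a candidate sequence against one non-availability interval under the non-resumable rule. This is only a schedule evaluation on made-up data, not the 2+ε approximation algorithm itself.

```python
def weighted_completion_time(jobs, unavailable):
    """jobs: list of (processing_time, weight) in the order they are run.
    unavailable: (start, end) interval during which the machine is down.
    Non-resumable: a job that cannot finish before `start` must start at `end`."""
    s, e = unavailable
    t, total = 0.0, 0.0
    for p, w in jobs:
        if t < s and t + p > s:      # job would straddle the downtime: push it after
            t = e
        elif s <= t < e:             # current time falls inside the downtime
            t = e
        t += p
        total += w * t               # accumulate w_j * C_j
    return total

jobs = [(2, 3), (4, 1), (1, 5), (3, 2)]      # (p_j, w_j), already sequenced
print(weighted_completion_time(jobs, (5, 8)))   # 115.0
```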
Abstract:
Routing is a very important step in VLSI physical design. In multi-net global routing, a set of nets is routed under delay and resource constraints. In this paper a delay-driven, congestion-aware global routing algorithm is developed; it is a heuristic-based method for a multi-objective NP-hard optimization problem. The proposed delay-driven Steiner tree construction method has O(n² log n) complexity, where n is the number of terminal points, and it provides an n-approximation solution to the critical time minimization problem for a certain class of grid graphs. The existing timing-driven method (Hu and Sapatnekar, 2002) has complexity O(n⁴) and is implemented on nets with a small number of sinks. Next we propose an FPTAS Gradient algorithm for minimizing the total overflow. This is a concurrent approach that considers all the nets simultaneously, in contrast to existing sequential rip-up-and-reroute approaches. The algorithms are implemented on ISPD98-derived benchmarks and a drastic reduction in overflow is observed. (C) 2014 Elsevier Inc. All rights reserved.
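The quantity targeted by the overflow-minimization step can be stated directly: the overflow of a grid-graph edge is the excess of its usage over its capacity, summed over all edges. The edge names, routes, and capacities in the sketch below are invented for illustration.

```python
from collections import Counter

def total_overflow(routes, capacity):
    """routes: for each net, the list of grid-graph edges its route uses.
    capacity: dict mapping edge -> routing capacity.
    Returns sum over edges of max(0, usage - capacity)."""
    usage = Counter(edge for route in routes for edge in route)
    return sum(max(0, used - capacity[e]) for e, used in usage.items())

# two nets sharing the edge ((0, 0), (0, 1)), whose capacity is 1
routes = [
    [((0, 0), (0, 1)), ((0, 1), (0, 2))],
    [((0, 0), (0, 1)), ((0, 1), (1, 1))],
]
capacity = {((0, 0), (0, 1)): 1, ((0, 1), (0, 2)): 2, ((0, 1), (1, 1)): 2}
print(total_overflow(routes, capacity))   # 1
```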
Abstract:
In this paper we deal with the one-dimensional integer cutting stock problem, which consists of cutting a set of objects available in stock in order to produce smaller ordered items so as to optimize a given objective function, which in this paper is composed of three different objectives: minimization of the number of objects to be cut (raw material), minimization of the number of different cutting patterns (setup time), and minimization of the number of saw cycles (saw productivity). To solve this complex problem we adopt a multiobjective approach in which we adapt, for the problem studied, a symbiotic genetic algorithm proposed in the literature. Some theoretical and computational results are presented.
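The three objectives can be read off a candidate cutting plan with a short sketch. The data layout (a list of (pattern, repetitions) pairs) and the parameter objects_per_saw_cycle (how many stacked objects the saw cuts at once) are assumptions made for illustration, not the paper's model.

```python
def cutting_objectives(plan, objects_per_saw_cycle):
    """plan: list of (pattern, repetitions); a pattern is a tuple of item lengths
    cut from one object. Returns the three objective values of the plan."""
    objects_cut  = sum(reps for _, reps in plan)                  # raw material
    num_patterns = len({pattern for pattern, _ in plan})          # setup time
    saw_cycles   = sum(-(-reps // objects_per_saw_cycle)          # ceiling division
                       for _, reps in plan)                       # saw productivity
    return objects_cut, num_patterns, saw_cycles

plan = [((45, 30, 25), 12), ((60, 40), 7)]
print(cutting_objectives(plan, objects_per_saw_cycle=5))   # (19, 2, 5)
```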
Abstract:
This paper presents a technique, based on evolutionary modeling, for determining the shortest route of a fire engine to the fire site under a time-minimization criterion. The algorithm for its realization, over both the complete and an optimized search space of possible solutions, is explored. Aspects of forming the goal function and of the software implementation of the method are considered. Experimental verification is carried out and the results of a comparative analysis against expert conclusions are presented.
Abstract:
We adapt a variable neighborhood search heuristic to address the traveling salesman problem with time windows (TSPTW) when the objective is to minimize the arrival time at the destination depot. We use efficient methods to check the feasibility and profitability of a move. We explore the neighborhoods in orders that reduce the search space. The resulting method is competitive with the state of the art. We improve the best known solutions for two classes of instances, and we report results on several TSPTW instances for the first time.
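The move-evaluation ingredient can be sketched as the basic feasibility-and-arrival-time check on a tour. This plain full re-evaluation only makes the objective concrete; the incremental checks used in the paper are not reproduced, and the travel times and windows below are invented.

```python
def evaluate_tour(tour, travel, windows):
    """tour: node indices from the origin depot to the destination depot.
    travel[i][j]: travel time between nodes. windows[i] = (earliest, latest).
    Returns the arrival time at the destination depot, or None if infeasible
    (waiting is allowed when arriving before a window opens)."""
    t = windows[tour[0]][0]
    for a, b in zip(tour, tour[1:]):
        t += travel[a][b]
        earliest, latest = windows[b]
        if t > latest:
            return None          # time-window violation
        t = max(t, earliest)     # wait for the window to open
    return t

travel = [[0, 4, 6, 3],
          [4, 0, 2, 5],
          [6, 2, 0, 4],
          [3, 5, 4, 0]]
windows = [(0, 0), (3, 10), (6, 14), (0, 30)]
print(evaluate_tour([0, 1, 2, 3], travel, windows))   # 10
```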
Abstract:
Conventional Random Access Scan (RAS) for testing has lower test application time, lower power dissipation, and lower test data volume than standard serial scan chain based design. In this paper, we present two cluster-based techniques, namely Serial Input Random Access Scan and Variable Word Length Random Access Scan, to reduce test application time even further by exploiting the parallelism among the clusters and performing write operations on multiple bits. Experimental results on benchmark circuits show on average a 2-3 times speedup in test write time and an average 60% reduction in write test data volume.
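The write-count saving from grouping flip-flops into words or clusters can be illustrated with a small count; the word size and the care-bit positions below are made up for the sketch.

```python
def write_counts(flip_bits, word_size):
    """flip_bits: indices of flip-flops whose values must change for the next vector.
    Bit-addressable RAS needs one write per changed bit; word/cluster-organized RAS
    needs one write per word containing at least one changed bit."""
    bit_writes = len(flip_bits)
    word_writes = len({b // word_size for b in flip_bits})
    return bit_writes, word_writes

flips = [3, 5, 6, 17, 18, 40]        # care bits that differ from the previous response
print(write_counts(flips, word_size=8))   # (6, 3): only words 0, 2 and 5 are touched
```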
Abstract:
This study compares the procurement cost-minimizing and productive efficiency performance of the auction mechanism used by independent system operators (ISOs) in wholesale electricity auction markets in the U.S. with that of a proposed alternative. The current practice allocates energy contracts as if the auction featured a discriminatory final payment method when, in fact, the markets are uniform price auctions. The proposed alternative explicitly accounts for the market clearing price during the allocation phase. We find that the proposed alternative largely outperforms the current practice on the basis of procurement costs in the context of simple auction markets featuring both day-ahead and real-time auctions and that the procurement cost advantage of the alternative is complete when we simulate the effects of increased competition. We also find that a trade-off between the objectives of procurement cost minimization and productive efficiency emerges in our simple auction markets and persists in the face of increased competition.
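The gap between a discriminatory (pay-as-bid) view of an allocation and uniform-price settlement can be made concrete with a toy single-round merit-order clearing. The bids and demand below are invented, and the sketch does not reproduce the ISO mechanism or the proposed alternative; it only shows how the two settlement rules price the same accepted offers differently.

```python
def clear_auction(bids, demand):
    """bids: list of (price, quantity) supply offers. Accept the cheapest offers until
    demand is met. Returns (pay-as-bid cost, uniform-price cost at the clearing price)."""
    accepted, remaining = [], demand
    for price, qty in sorted(bids):
        if remaining <= 0:
            break
        take = min(qty, remaining)
        accepted.append((price, take))
        remaining -= take
    clearing_price = accepted[-1][0]                     # price of the marginal offer
    pay_as_bid = sum(p * q for p, q in accepted)
    uniform    = clearing_price * sum(q for _, q in accepted)
    return pay_as_bid, uniform

bids = [(20, 50), (25, 40), (32, 60), (40, 30)]
print(clear_auction(bids, demand=120))   # (2960, 3840)
```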
Abstract:
Autonomous underwater vehicles (AUVs) are increasingly used in both military and civilian applications. These vehicles are limited mainly by the intelligence we give them and the life of their batteries. Research is active to extend vehicle autonomy in both aspects. Our intent is to give the vehicle the ability to adapt its behavior under different mission scenarios (emergency maneuvers versus long-duration monitoring). This involves a search for optimal trajectories minimizing time, energy, or a combination of both. Despite some success stories in AUV control, optimal control is still a very underdeveloped area. Adaptive control research has contributed to cost minimization problems, but vehicle design has been the driving force for advancement in optimal control research. We look to advance the development of optimal control theory by expanding the motions along which AUVs travel. Traditionally, AUVs have taken the role of performing long data-gathering missions in the open ocean with little to no interaction with their surroundings, MacIver et al. (2004). The AUV is used to find the shipwreck, and the remotely operated vehicle (ROV) handles the exploration up close. AUV mission profiles of this sort are best suited to a torpedo-shaped AUV, Bertram and Alvarez (2006), since straight lines and minimal (0 deg - 30 deg) angular displacements are all that are necessary to perform the transects and grid lines for these applications. However, the torpedo-shaped AUV lacks the ability to perform low-speed maneuvers in cluttered environments, such as autonomous exploration close to the seabed and around obstacles, MacIver et al. (2004). Thus, we consider an agile vehicle capable of movement in six degrees of freedom without any preference of direction.
Abstract:
A novel approach for lossless as well as lossy compression of monochrome images using Boolean minimization is proposed. The image is split into bit planes. Each bit plane is divided into windows or blocks of variable size. Each block is transformed into a Boolean switching function in cubical form, treating the pixel values as the output of the function. Compression is performed by minimizing these switching functions using ESPRESSO, a cube-based two-level function minimizer. The minimized cubes are encoded using a code set which satisfies the prefix property. Our lossless compression technique uses linear prediction as a preprocessing step and has a compression ratio comparable to that of the JPEG lossless compression technique. Our lossy compression technique reduces the number of bit planes as a preprocessing step, which incurs minimal loss of image information. The bit planes that remain after preprocessing are compressed using our lossless compression technique based on Boolean minimization. Qualitatively, one cannot visually distinguish between the original image and the lossy image, and the mean square error is kept low. For a mean square error close to that of the JPEG lossy compression technique, our method gives a better compression ratio. The compression scheme is relatively slow, while the decompression time is comparable to that of JPEG.
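The bit-plane split that precedes the Boolean minimization step can be sketched with NumPy. The 8-bit depth and the tiny example image are assumptions made for illustration; the ESPRESSO minimization and the encoding stages are not shown.

```python
import numpy as np

def to_bit_planes(img):
    """Split an 8-bit grayscale image into 8 binary planes (index k = bit position)."""
    return [(img >> k) & 1 for k in range(8)]

def from_bit_planes(planes):
    """Reassemble an image from its bit planes (inverse of to_bit_planes)."""
    return sum(plane.astype(np.uint8) << k for k, plane in enumerate(planes))

img = np.array([[12, 200], [77, 255]], dtype=np.uint8)
planes = to_bit_planes(img)
assert np.array_equal(from_bit_planes(planes), img)

# dropping the two least significant planes illustrates the lossy preprocessing idea
lossy = from_bit_planes([np.zeros_like(img), np.zeros_like(img)] + planes[2:])
print(lossy)
```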
Abstract:
We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of Multiple Access Interference at the relay. The harmful effect of deep channel fade conditions can be effectively mitigated by a proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that deep fades occur when the channel fade coefficient vector falls in one of a finite number of vector subspaces, which are referred to as the singular fade subspaces. A DSTC design criterion, referred to as the singularity minimization criterion, under which the number of such vector subspaces is minimized, is obtained. A criterion to maximize the coding gain of the DSTC is also obtained. Explicit low-decoding-complexity DSTC designs which satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high Signal to Noise Ratio, the DSTC scheme provides large gains compared to the conventional Exclusive OR network code and performs better than the adaptive network coding scheme.
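The notion of a singular fade state can be made concrete: with the noiseless relay observation h_A x_A + h_B x_B, two distinct transmit pairs collide exactly when the channel ratio γ = h_B / h_A equals -(x_A - x_A') / (x_B - x_B'). The enumeration below for a unit-energy 4-PSK set is an illustrative assumption, not the paper's construction, and it leaves aside the h_A = 0 case (infinite ratio).

```python
import itertools, cmath

def singular_fade_states(signal_set, tol=1e-9):
    """Channel ratios gamma = h_B / h_A at which two distinct pairs (x_A, x_B) and
    (x_A', x_B') give the same noiseless superimposed signal at the relay."""
    states = []
    for (xa, xb), (xa2, xb2) in itertools.combinations(
            itertools.product(signal_set, repeat=2), 2):
        if abs(xb - xb2) < tol:          # ratio undefined; corresponds to h_A -> 0
            continue
        gamma = -(xa - xa2) / (xb - xb2)
        if not any(abs(gamma - g) < tol for g in states):
            states.append(gamma)
    return states

qpsk = [cmath.exp(1j * cmath.pi * (2 * k + 1) / 4) for k in range(4)]  # unit-energy 4-PSK
print(len(singular_fade_states(qpsk)))   # number of distinct singular fade states
```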