934 results for Axle Load
Abstract:
A model comprising several servers, each equipped with its own queue and possibly a different service speed, is considered. Each server receives a dedicated arrival stream of jobs; there is also a stream of generic jobs that arrive at a job scheduler and can be individually allocated to any of the servers. It is shown that if the arrival streams are all Poisson and all jobs have the same exponentially distributed service requirements, the probabilistic splitting of the generic stream that minimizes the average job response time is the one that balances the server idle times in a weighted least-squares sense, where the weighting coefficients are related to the service speeds of the servers. The corresponding result holds for nonexponentially distributed service times if the service speeds are all equal. This result is used to develop adaptive quasi-static algorithms for allocating jobs in the generic arrival stream when the load parameters are unknown. The algorithms utilize server idle-time measurements that are sent periodically to the central job scheduler. A model is developed for these measurements, and the above result is used to cast the problem as one of finding a projection of the root of an affine function when only noisy values of the function can be observed.
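The optimal-splitting result above can be illustrated in a simplified setting with no dedicated arrival streams: splitting a single Poisson stream of rate Λ across independent M/M/1 servers to minimize the mean number of jobs in the system yields the classic square-root water-filling allocation. The sketch below is ours (not the paper's adaptive algorithm) and solves for the water level by bisection:

```python
def optimal_split(mu, total):
    """Split a Poisson stream of rate `total` across M/M/1 servers with
    service rates `mu`, minimizing sum_i lam_i / (mu_i - lam_i) (the mean
    number of jobs).  The stationarity condition gives the square-root
    water-filling form lam_i = max(0, mu_i - sqrt(mu_i) * t); bisect on t."""
    assert total < sum(mu), "total arrival rate must keep the system stable"

    def alloc(t):
        return [max(0.0, m - (m ** 0.5) * t) for m in mu]

    lo, hi = 0.0, max(m ** 0.5 for m in mu)
    for _ in range(100):
        mid = (lo + hi) / 2
        if sum(alloc(mid)) > total:
            lo = mid  # water level too low: allocating too much
        else:
            hi = mid
    return alloc((lo + hi) / 2)
```

With two identical servers the split is even; with very unequal speeds the slow server may receive nothing at all.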
Abstract:
The major contribution of this paper is to introduce load compatibility constraints into the mathematical model for the capacitated vehicle routing problem with pickups and deliveries. The employee transportation problem in Indian call centers and the transportation of hazardous materials provided the motivation for this variation. In this paper we develop an integer programming model for the vehicle routing problem with load compatibility constraints. Specifically, two types of load compatibility constraints are introduced, namely mutual exclusion and conditional exclusion. The model is demonstrated with an application from the employee transportation problem in Indian call centers.
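As a rough illustration of the two constraint types (the encodings below are hypothetical readings of the abstract, not the paper's IP formulation), a feasibility check for one vehicle's load set might look like:

```python
def route_feasible(loads, mutual_excl, cond_excl):
    """Check one vehicle's set of loads against two illustrative constraints:
      mutual_excl: set of frozensets {a, b} -- a and b may never share a vehicle
      cond_excl:   dict (a, b) -> c -- a and b may share a vehicle only if c
                   is also aboard (one plausible reading of 'conditional
                   exclusion'; the paper may define it differently)."""
    aboard = set(loads)
    for pair in mutual_excl:
        if pair <= aboard:          # both mutually excluded loads present
            return False
    for (a, b), c in cond_excl.items():
        if a in aboard and b in aboard and c not in aboard:
            return False            # condition item missing
    return True
```

In the full IP model these checks would appear as linear constraints over binary assignment variables rather than a post-hoc filter.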
Abstract:
The effect of the catchment area on the diffuse loading of phosphorus and nitrogen.
Abstract:
Load-deflection curves for a notched beam under three-point load are determined using the Fictitious Crack Model (FCM) and Blunt Crack Model (BCM). Two values of fracture energy GF are used in this analysis: (i) GF obtained from the size effect law and (ii) GF obtained independently of the size effect. The predicted load-deflection diagrams are compared with the experimental ones obtained for the beams tested by Jenq and Shah. In addition, the values of maximum load (Pmax) obtained by the analyses are compared with the experimental ones for beams tested by Jenq and Shah and by Bažant and Pfeiffer. The results indicate that the descending portion of the load-deflection curve is very sensitive to the GF value used.
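For context, the size effect law referenced above (Bažant's) relates the nominal strength of a notched specimen to its characteristic size d; a minimal sketch, where B·f_t and d0 are parameters fitted from tests of geometrically similar specimens:

```python
def nominal_strength(d, B_ft, d0):
    """Bazant's size effect law: sigma_N = B*f_t / sqrt(1 + d/d0).
    For d << d0 the strength criterion governs (sigma_N ~ B*f_t);
    for d >> d0 it approaches the LEFM scaling sigma_N ~ d**-0.5."""
    return B_ft / (1.0 + d / d0) ** 0.5
```

Fitting this law to maximum loads of different-size beams is one of the two routes to GF compared in the paper.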
Abstract:
Reduction of the execution time of a job through equitable distribution of the work load among the processors in a distributed system is the goal of load balancing. The performance of static and dynamic load balancing algorithms for the extended hypercube is discussed. Threshold algorithms are well-known algorithms for dynamic load balancing in distributed systems. An extension of the threshold algorithm, called the multilevel threshold algorithm, is proposed. The hierarchical interconnection network of the extended hypercube is suitable for implementing the proposed algorithm. The new algorithm has been implemented on a transputer-based system, and its performance for an extended hypercube is compared with those for mesh and binary hypercube networks.
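The basic single-level threshold policy that the multilevel algorithm extends can be sketched as follows (an illustrative placement rule, not the paper's hierarchical version):

```python
def place_job(local, loads, threshold):
    """Threshold policy: keep the job if the local queue is below the
    threshold; otherwise transfer it to the least-loaded node that is
    below the threshold, or keep it locally if no such node exists.
    `loads` maps node id -> current queue length."""
    if loads[local] < threshold:
        return local
    candidates = [n for n in loads if loads[n] < threshold]
    if not candidates:
        return local
    return min(candidates, key=lambda n: loads[n])
```

The multilevel variant would apply such a rule hierarchically, matching the extended hypercube's hierarchical interconnection network.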
Abstract:
This study concerns the effect of duration of load increment (up to 24 h) on the consolidation properties of expansive black cotton soil (liquid limit = 81%) and nonexpansive kaolinite (liquid limit = 49%). It indicates that the amount and rate of compression are not noticeably affected by the duration of loading for a standard sample of 25 mm in height and 76.2 mm in diameter with double drainage. Hence, the compression index and coefficient of consolidation can be obtained with reasonable accuracy even if the duration of each load increment is as short as 4 h. The secondary compression coefficient (Cαε) for kaolinite can be obtained for any pressure range with 1/2 h of loading, which, however, requires 4 h for black cotton soil. This is because primary consolidation is completed early in the case of kaolinite. The paper proves that the conventional consolidation test can be carried out with much shorter duration of loading (less than 4 h) than the standard specification of 24 h or more even for remolded fine-grained soils.
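The coefficient of consolidation discussed above is routinely computed from oedometer time-settlement readings, e.g. by Taylor's square-root-of-time method; a minimal sketch, assuming t90 has already been picked off the curve:

```python
def cv_taylor(height_mm, t90_min, double_drainage=True):
    """Coefficient of consolidation (mm^2/min) by Taylor's method:
    cv = T90 * d^2 / t90, with time factor T90 = 0.848 and drainage path
    d = H/2 for double drainage (as in the 25 mm sample above) or d = H
    for single drainage."""
    d = height_mm / (2.0 if double_drainage else 1.0)
    return 0.848 * d * d / t90_min
```

The study's point is that t90 (and hence cv) is essentially unchanged whether each increment lasts 4 h or 24 h.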
Abstract:
In this paper, we look at the problem of scheduling expression trees with reusable registers on delayed load architectures. Reusable registers come into the picture when the compiler has a data-flow analyzer which is able to estimate the extent of use of the registers. Earlier work considered the same problem without allowing for register variables. Subsequently, Venugopal considered non-reusable registers in the tree. We further extend these efforts to consider a much more general form of the tree. We describe an approximate algorithm for the problem. We formally prove that the code schedule produced by this algorithm will, in the worst case, generate one interlock and use just one more register than that used by the optimal schedule. Spilling is minimized. The approximate algorithm is simple and has linear complexity.
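As background to the register-constrained scheduling above, the classic baseline for register need on expression trees is Sethi–Ullman labeling (shown below as a general illustration, not the paper's delayed-load algorithm, which additionally accounts for interlocks and reusable registers):

```python
class Node:
    """Expression-tree node: internal nodes are binary operators, leaves
    are operands (a common simplifying assumption)."""
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

def registers_needed(n):
    """Sethi-Ullman label: the minimum number of registers needed to
    evaluate the subtree without spilling.  If the two children need the
    same number, one extra register holds the first result while the
    second child is evaluated."""
    if n.left is None and n.right is None:
        return 1
    l = registers_needed(n.left)
    r = registers_needed(n.right)
    return max(l, r) if l != r else l + 1
```

The paper's guarantee (at most one interlock and one extra register versus optimal) is stated relative to schedules over such trees.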
Abstract:
This paper presents a new strategy for load distribution in a single-level tree network equipped with or without front-ends. The load is distributed in more than one installment in an optimal manner to minimize the processing time. This is a deviation and an improvement over earlier studies in which the load distribution is done in only one installment. Recursive equations for the general case, and their closed form solutions for a special case in which the network has identical processors and identical links, are derived. An asymptotic analysis of the network performance with respect to the number of processors and the number of installments is carried out. Discussions of the results in terms of some practical issues like the tradeoff relationship between the number of processors and the number of installments are also presented.
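The single-installment baseline that this paper improves upon has a well-known closed form for identical processors and links: choosing the fractions so all processors finish simultaneously gives a geometric sequence. A sketch of that baseline (our notation; w and z are the computation and communication times per unit load):

```python
def single_installment_shares(m, w, z):
    """Optimal single-installment load fractions for m identical processors
    fed sequentially over identical links.  The 'all finish simultaneously'
    condition  z*sum(a[:i+1]) + w*a[i] = z*sum(a[:i+2]) + w*a[i+1]
    reduces to a[i+1] = a[i] * w / (w + z), then normalize to sum to 1."""
    q = w / (w + z)
    raw = [q ** i for i in range(m)]
    s = sum(raw)
    return [r / s for r in raw]
```

Distributing in several installments, as the paper proposes, lets processors start computing earlier and shortens this finish time further.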
Abstract:
Ultra low-load-dynamic microhardness testing facilitates hardness measurements in a very low volume of the material and thus is suited for characterization of the interfaces in MMCs. This paper details studies on the age-hardening behavior of the interfaces in Al-Cu-5SiC(p) composites characterized using this technique. Results of the hardness studies have been further substantiated by TEM observations. In the solution-treated condition, hardness is maximum at the particle/matrix interface and decreases with increasing distance from the interface. This could be attributed to the presence of maximum dislocation density at the interface, which decreases with increasing distance from the interface. In the case of composites subjected to high temperature aging, hardening at the interface is found to be faster than in the bulk matrix, and the aging kinetics becomes progressively slower with increasing distance from the interface. This is attributed to the dislocation density gradient at the interface, leading to enhanced nucleation and growth of precipitates at the interface compared to the bulk matrix. TEM observations reveal that the sizes of the precipitates decrease with increasing distance from the interface and thus confirm the retardation in aging kinetics with increasing distance from the interface.
Abstract:
Models for electricity planning require inclusion of demand. Depending on the type of planning, the demand is usually represented as an annual demand for electricity (GWh), a peak demand (MW) or in the form of annual load-duration curves. The demand for electricity varies with the seasons, economic activities, etc. Existing schemes do not capture the dynamics of demand variations that are important for planning. For this purpose, we introduce the concept of representative load curves (RLCs). Advantages of RLCs are demonstrated in a case study for the state of Karnataka in India. Multiple discriminant analysis is used to cluster the 365 daily load curves for 1993-94 into nine RLCs. Further analyses of these RLCs help to identify important factors, namely, seasonal, industrial, agricultural, and residential (water heating and air-cooling) demand variations besides rationing by the utility.
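The paper groups daily curves with multiple discriminant analysis; as a stand-in illustration of how daily load curves can be clustered into representative curves, here is a dependency-free k-means sketch (our substitution, not the paper's method):

```python
def kmeans_curves(curves, k, iters=50):
    """Group daily load curves (equal-length lists of hourly loads) into k
    clusters; the centroids play the role of representative load curves.
    Deterministic initialization from the first k curves, for reproducibility."""
    cents = [list(c) for c in curves[:k]]
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for c in curves:
            j = min(range(k),
                    key=lambda i: sum((x - y) ** 2 for x, y in zip(c, cents[i])))
            groups[j].append(c)
        cents = [[sum(v) / len(g) for v in zip(*g)] if g else cents[j]
                 for j, g in enumerate(groups)]
    return cents, groups
```

For the paper's setting, `curves` would hold 365 daily curves and k = 9; the resulting centroids are then interpreted against seasonal and sectoral drivers.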
Abstract:
In this paper, power management algorithms for energy harvesting sensors (EHS) that operate purely on energy harvested from the environment are proposed. To maintain energy neutrality, EHS nodes schedule their utilization of the harvested power so as to save/draw energy into/from an inefficient battery during peak/low energy harvesting periods, respectively. Under this constraint, one of the key system design goals is to transmit as much data as possible given the energy harvesting profile. For implementational simplicity, it is assumed that the EHS transmits at a constant data rate with power control, when the channel is sufficiently good. By converting the data rate maximization problem into a convex optimization problem, the optimal load scheduling (power management) algorithm that maximizes the average data rate subject to energy neutrality is derived. Also, the energy storage requirements on the battery for implementing the proposed algorithm are calculated. Further, robust schemes that account for the insufficiency of battery storage capacity, or errors in the prediction of the harvested power, are proposed. The superior performance of the proposed algorithms over conventional scheduling schemes is demonstrated through computations using numerical data from solar energy harvesting databases.
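A much-simplified version of this scheduling problem can be sketched directly: given a slotted harvesting profile, a battery with charging inefficiency, and finite capacity, find the largest constant per-slot consumption that never drains the battery below zero (a toy feasibility/bisection sketch, not the paper's convex-optimization solution):

```python
def max_constant_load(harvest, capacity, eta, init=0.0):
    """Largest constant per-slot consumption p sustainable over the horizon.
    Surpluses are stored at charging efficiency eta (0 < eta <= 1), deficits
    are drawn from the battery at no loss -- a simplified inefficiency model."""
    def feasible(p):
        b = init
        for h in harvest:
            if h >= p:
                b = min(capacity, b + eta * (h - p))  # store surplus
            else:
                b -= (p - h)                          # draw deficit
                if b < 0:
                    return False
        return True

    lo, hi = 0.0, max(harvest) + init
    for _ in range(60):
        mid = (lo + hi) / 2
        if feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

Even this toy version shows the two effects the paper quantifies: battery inefficiency and storage limits both cap the sustainable (and hence transmittable) load.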
Abstract:
Relay selection combined with buffering of packets at relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays due to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes that enables a receiver to decode a packet from non-contiguous and unordered portions of the received signal. In it, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that the packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off with transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Despite encountering analytically intractable G/GI/1 queues in our system, we also gain insights about the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
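The core mechanism (each relay ignores coded bits with a probability that grows with its queue length) can be sketched with an illustrative rule; the exponential mapping and `scale` parameter below are our assumptions, not the paper's:

```python
import math

def keep_prob(queue_len, scale=5.0):
    """Probability that a relay keeps (rather than ignores) a received coded
    bit.  Decreasing in the queue length, so a lightly loaded relay
    accumulates coded bits faster and tends to decode the packet first
    (illustrative exponential rule; the paper's mapping may differ)."""
    return math.exp(-queue_len / scale)
```

Because rateless codes allow decoding from any sufficiently large, possibly non-contiguous subset of coded bits, dropping bits at busy relays costs little throughput while steering packets toward short queues.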