340 results for Load estimation
Abstract:
In this paper, we look at the problem of scheduling expression trees with reusable registers on delayed load architectures. Reusable registers come into the picture when the compiler has a data-flow analyzer which is able to estimate the extent of use of the registers. Earlier work considered the same problem without allowing for register variables. Subsequently, Venugopal considered non-reusable registers in the tree. We further extend these efforts to consider a much more general form of the tree. We describe an approximate algorithm for the problem. We formally prove that the code schedule produced by this algorithm will, in the worst case, generate one interlock and use just one more register than that used by the optimal schedule. Spilling is minimized. The approximate algorithm is simple and has linear complexity.
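As background for register-need computation on expression trees, the sketch below shows the classic Sethi-Ullman labeling, which counts the minimum registers needed to evaluate a tree without spilling. This is an illustrative baseline only, not the paper's algorithm: it does not model delayed loads, interlocks, or reusable register variables, and the example tree is invented.

```python
# Sethi-Ullman labeling: minimum registers to evaluate an expression
# tree without spilling (baseline only; the paper's algorithm further
# handles delayed loads and reusable registers).

class Node:
    def __init__(self, left=None, right=None):
        self.left, self.right = left, right

def label(node):
    """Return the register need of the subtree rooted at `node`."""
    if node.left is None and node.right is None:
        return 1                      # a leaf needs one register to load
    l, r = label(node.left), label(node.right)
    # unequal needs: evaluate the costlier side first and hold its result;
    # equal needs: one extra register is unavoidable
    return max(l, r) if l != r else l + 1

# (a+b)*(c+d): each '+' needs 2 registers, the '*' then needs 3
leaf = Node
tree = Node(Node(leaf(), leaf()), Node(leaf(), leaf()))
print(label(tree))  # 3
```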
Abstract:
An important tool in signal processing is the use of eigenvalue and singular value decompositions for extracting information from time-series/sensor array data. These tools are used in the so-called subspace methods that underlie solutions to the harmonic retrieval problem in time series and the direction-of-arrival (DOA) estimation problem in array processing. The subspace methods require knowledge of the eigenvectors of the underlying covariance matrix to estimate the parameters of interest. Eigenstructure estimation in signal processing falls into two important classes: (i) estimating the eigenstructure of a given covariance matrix and (ii) updating the eigenstructure estimates given the current estimate and new data. In this paper, we survey algorithms of both classes that are useful for the harmonic retrieval and DOA estimation problems. We begin by surveying key results in the literature and then describe, in some detail, energy function minimization approaches that underlie a class of feedback neural networks. Our approaches estimate some or all of the eigenvectors corresponding to the repeated minimum eigenvalue, as well as multiple orthogonal eigenvectors corresponding to the ordered eigenvalues of the covariance matrix. Our presentation includes supporting analysis and simulation results. We note that eigensubspace estimation is a vast area that cannot be fully covered in a single paper. (C) 1995 Academic Press, Inc.
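As a point of contrast with the feedback-network estimators this abstract surveys, the simplest iterative eigenvector scheme is power iteration. The sketch below, with an invented 2x2 symmetric matrix, only illustrates the kind of computation such adaptive eigenstructure estimators perform.

```python
# Power iteration: repeatedly multiply by A and normalize; the iterate
# converges to the dominant eigenvector of a symmetric matrix.
# The matrix and iteration count are arbitrary illustration choices.
import math

def power_iteration(A, iters=200):
    """Dominant eigenvector and eigenvalue of a small symmetric matrix A."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # Rayleigh quotient of the converged vector
    lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    return v, lam

A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues are 3 and 1
v, lam = power_iteration(A)
print(round(lam, 6))                 # 3.0
```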
Abstract:
In the past few years there have been attempts to develop subspace methods for DoA (direction of arrival) estimation using a fourth-order cumulant, which is known to de-emphasize Gaussian background noise. To gauge the relative performance of the cumulant MUSIC (MUltiple SIgnal Classification) (c-MUSIC) and the standard MUSIC, based on the covariance function, an extensive numerical study has been carried out, in which a narrow-band signal source was considered and distributed Gaussian noise sources producing a spatially correlated background noise were present. These simulations indicate that, even though the cumulant approach is capable of de-emphasizing the Gaussian noise, both the bias and the variance of the DoA estimates are higher than those for MUSIC. To achieve comparable results the cumulant approach requires much more data, three to ten times that for MUSIC, depending upon the number of sources and how close they are. This is attributed to the fact that estimating the cumulant requires averaging a product of four random variables. Therefore, compared to the evaluation of the covariance function, there are more cross terms which do not go to zero unless the data length is very large. It is felt that these cross terms contribute to the large bias and variance observed in c-MUSIC. However, the ability to de-emphasize Gaussian noise, white or colored, is of great significance, since the standard MUSIC fails when there is colored background noise. Through simulation it is shown that c-MUSIC does yield good results, but only at the cost of more data.
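For reference, the covariance-based standard MUSIC that serves as the benchmark here can be sketched in a few lines. The snapshot model below is idealized (noiseless single source, 8-element half-wavelength array, 30-degree arrival); all of these numbers are illustration choices, not the study's settings.

```python
# Standard MUSIC: eigendecompose the array covariance, take the noise
# subspace, and scan steering vectors for orthogonality peaks.
import numpy as np

M, d = 8, 0.5                        # sensors, spacing in wavelengths
theta0 = 30.0                        # true direction of arrival (degrees)

def steer(theta):
    k = 2 * np.pi * d * np.sin(np.deg2rad(theta))
    return np.exp(1j * k * np.arange(M))

R = np.outer(steer(theta0), steer(theta0).conj())   # ideal rank-1 covariance
w, V = np.linalg.eigh(R)             # ascending eigenvalues
En = V[:, :-1]                       # noise subspace: all but the top eigenvector

grid = np.linspace(-90.0, 90.0, 1801)
spectrum = [1.0 / (np.linalg.norm(En.conj().T @ steer(t)) ** 2 + 1e-12)
            for t in grid]
print(grid[int(np.argmax(spectrum))])   # peak near 30.0
```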
Abstract:
This paper presents a new strategy for load distribution in a single-level tree network equipped with or without front-ends. The load is distributed in more than one installment in an optimal manner to minimize the processing time. This deviates from, and improves on, earlier studies in which the load distribution is done in a single installment. Recursive equations for the general case, and their closed-form solutions for a special case in which the network has identical processors and identical links, are derived. An asymptotic analysis of the network performance with respect to the number of processors and the number of installments is carried out. The results are also discussed in terms of practical issues such as the tradeoff between the number of processors and the number of installments.
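The benefit of multiple installments can be seen with a toy makespan simulator: children start computing as soon as their first chunk arrives instead of waiting for their full share. The model below (sequential link, identical children, communication cost z and computation cost w per unit load) and all its numbers are invented for illustration; the paper derives the actual recursive equations and closed forms.

```python
# Toy makespan model: a root sends load fractions over one shared link
# (z time units per unit load); each child computes a chunk (w per unit)
# after it is fully received and any earlier chunk is finished.

def makespan(chunks, z=1.0, w=2.0, n=2):
    """chunks: list of (processor_index, load_fraction) in send order."""
    t_link = 0.0
    free = [0.0] * n          # time each child finishes its current work
    for p, frac in chunks:
        t_link += z * frac                 # link is used sequentially
        start = max(t_link, free[p])       # wait for receive and prior chunk
        free[p] = start + w * frac
    return max(free)

one = makespan([(0, 0.5), (1, 0.5)])                          # 1 installment
two = makespan([(0, 0.25), (1, 0.25), (0, 0.25), (1, 0.25)])  # 2 installments
print(one, two)   # the two-installment schedule finishes earlier
```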
Abstract:
This paper proposes a sensorless vector control scheme for general-purpose induction motor drives using the current error space phasor-based hysteresis controller. A new technique for sensorless operation is developed to estimate the rotor voltage, and hence the rotor flux position, from the stator current error during zero-voltage space vectors. It gives performance comparable to a vector control drive using sensors, especially at very low speeds of operation (less than 1 Hz). Since no voltage sensing is performed, the dead-time effect and the loss of accuracy in voltage sensing at low speed are avoided, with the inherent advantages of the current error space phasor-based hysteresis controller. However, appropriate device on-state drops are compensated to achieve steady-state operation below 1 Hz. Moreover, using a parabolic boundary for the current error, the switching frequency of the inverter can be kept constant over the entire operating speed range. A simple sigma L_s estimation is proposed, and the sensitivity of the control scheme to changes in the stator resistance R_s is also investigated. Extensive experimental results are shown at speeds below 1 Hz to verify the proposed concept. The same control scheme is further extended from below 1 Hz up to rated 50 Hz six-step operation of the inverter. Magnetic saturation is ignored in the control scheme.
Abstract:
Interest in low bit rate video coding has increased considerably. Despite rapid progress in storage density and digital communication system performance, demand for data-transmission bandwidth and storage capacity continues to exceed the capabilities of available technologies. The growth of data-intensive digital audio and video applications, and the increased use of bandwidth-limited media such as video conferencing and full-motion video, have not only sustained the need for efficient ways to encode analog signals, but also made signal compression central to digital communication and data-storage technology. In this paper we explore techniques for compressing image sequences in a manner that optimizes the results for the human receiver. We propose a new motion estimator using two novel block match algorithms that are based on human perception. Simulations with image sequences have shown an improved bit rate while maintaining "image quality" when compared to conventional motion estimation techniques using the MAD block match criterion.
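The conventional baseline mentioned here, full-search block matching under the mean-absolute-difference (MAD) criterion, is small enough to sketch. The frames, block size, and search radius below are made-up test values, not the paper's perceptual algorithms.

```python
# Full-search motion estimation with the MAD block match criterion.

def mad(a, b):
    """Mean absolute difference between two equal-size blocks."""
    return sum(abs(x - y) for ra, rb in zip(a, b)
               for x, y in zip(ra, rb)) / (len(a) * len(a[0]))

def block(frame, y, x, B):
    return [row[x:x + B] for row in frame[y:y + B]]

def full_search(cur, ref, y, x, B, radius):
    """Best motion vector (dy, dx) for the BxB block of `cur` at (y, x)."""
    best = (None, float("inf"))
    target = block(cur, y, x, B)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(ref) - B and 0 <= xx <= len(ref[0]) - B:
                cost = mad(target, block(ref, yy, xx, B))
                if cost < best[1]:
                    best = ((dy, dx), cost)
    return best[0]

# 8x8 test pair: a bright 2x2 patch moves down 1 row and right 2 columns
ref = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for dy in range(2):
    for dx in range(2):
        ref[3 + dy][4 + dx] = 255
        cur[2 + dy][2 + dx] = 255
print(full_search(cur, ref, 2, 2, 2, 3))  # (1, 2)
```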
Abstract:
Ultra-low-load dynamic microhardness testing enables hardness measurements in a very small volume of material and is thus well suited to characterizing the interfaces in MMCs. This paper details studies on the age-hardening behavior of the interfaces in Al-Cu-5SiC(p) composites characterized using this technique. The results of the hardness studies have been further substantiated by TEM observations. In the solution-treated condition, hardness is maximum at the particle/matrix interface and decreases with increasing distance from the interface. This can be attributed to the dislocation density being maximum at the interface and decreasing with increasing distance from it. For composites subjected to high-temperature aging, hardening at the interface is found to be faster than in the bulk matrix, and the aging kinetics become progressively slower with increasing distance from the interface. This is attributed to the dislocation density gradient at the interface, which leads to enhanced nucleation and growth of precipitates at the interface compared to the bulk matrix. TEM observations reveal that the sizes of the precipitates decrease with increasing distance from the interface, confirming the retardation in aging kinetics with increasing distance from the interface.
Abstract:
Models for electricity planning require inclusion of demand. Depending on the type of planning, the demand is usually represented as an annual demand for electricity (GWh), a peak demand (MW) or in the form of annual load-duration curves. The demand for electricity varies with the seasons, economic activities, etc. Existing schemes do not capture the dynamics of demand variations that are important for planning. For this purpose, we introduce the concept of representative load curves (RLCs). Advantages of RLCs are demonstrated in a case study for the state of Karnataka in India. Multiple discriminant analysis is used to cluster the 365 daily load curves for 1993-94 into nine RLCs. Further analyses of these RLCs help to identify important factors, namely, seasonal, industrial, agricultural, and residential (water heating and air-cooling) demand variations besides rationing by the utility. (C) 1999 Elsevier Science Ltd. All rights reserved.
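The grouping step behind representative load curves can be illustrated with a simple clustering pass over daily curves. The paper uses multiple discriminant analysis; the plain two-cluster k-means below is only a stand-in for the idea, and the 6-point "daily" curves are invented.

```python
# Group daily load curves so each cluster mean can serve as a
# representative load curve (RLC). Stand-in method: 2-cluster Lloyd's
# k-means seeded deterministically with the most dissimilar pair.

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans2(curves, iters=20):
    """Assign each curve to one of two clusters; returns the label list."""
    i, j = max(((i, j) for i in range(len(curves))
                for j in range(i + 1, len(curves))),
               key=lambda p: dist2(curves[p[0]], curves[p[1]]))
    centroids = [list(curves[i]), list(curves[j])]
    labels = [0] * len(curves)
    for _ in range(iters):
        labels = [0 if dist2(c, centroids[0]) <= dist2(c, centroids[1]) else 1
                  for c in curves]
        for k in (0, 1):
            members = [c for c, l in zip(curves, labels) if l == k]
            if members:
                centroids[k] = [sum(v) / len(members) for v in zip(*members)]
    return labels

weekday = [200, 250, 400, 420, 380, 300]   # peaky working-day style curve
weekend = [150, 160, 180, 190, 185, 170]   # flat low-demand curve
curves = [weekday, [x + 5 for x in weekday],
          weekend, [x - 5 for x in weekend]]
print(kmeans2(curves))   # [0, 0, 1, 1]
```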
Abstract:
We describe here two non-interferometric methods for the estimation of the phase of transmitted wavefronts through refracting objects. The phase of the wavefronts obtained is used to reconstruct either the refractive index distribution of the objects or their contours. Refraction corrected reconstructions are obtained by the application of an iterative loop incorporating digital ray tracing for forward propagation and a modified filtered back projection (FBP) for reconstruction. The FBP is modified to take into account non-straight path propagation of light through the object. When the iteration stagnates, the difference between the projection data and an estimate of it obtained by ray tracing through the final reconstruction is reconstructed using a diffraction tomography algorithm. The reconstruction so obtained, viewed as a correction term, is added to the estimate of the object from the loop to obtain an improved final refractive index reconstruction.
Abstract:
The statistical performance of the ESPRIT, root-MUSIC, and minimum-norm methods for direction estimation under finite-data perturbations, using the modified spatially smoothed covariance matrix, is analyzed. Expressions for the mean-squared error in the direction estimates are derived within a common framework. The analysis shows that the modified smoothed covariance matrix improves the performance of the methods when the sources are fully correlated. Performance also remains good when the number of subarrays is large, unlike with the conventionally smoothed covariance matrix. However, the performance for uncorrelated sources deteriorates because of an artificial correlation introduced by the modified smoothing. The theoretical expressions are validated using extensive simulations. (C) 1999 Elsevier Science B.V. All rights reserved.
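The smoothing operations being compared can be sketched directly: forward spatial smoothing averages subarray covariances, and the modified (forward-backward) variant adds the exchange-conjugated term. The array size, subarray length, and source angles below are illustrative choices; the point shown is only that smoothing restores the covariance rank that coherent sources destroy.

```python
# Forward and forward-backward ("modified") spatial smoothing of a ULA
# covariance. Two fully coherent sources make the plain covariance
# rank-1; smoothing restores rank 2 so subspace methods can work.
import numpy as np

M, L = 8, 5                          # sensors, subarray length

def steer(theta):
    k = np.pi * np.sin(np.deg2rad(theta))    # half-wavelength spacing
    return np.exp(1j * k * np.arange(M))

a = steer(10.0) + steer(40.0)        # coherent (identical-waveform) sources
R = np.outer(a, a.conj())            # rank-1 array covariance

K = M - L + 1                        # number of forward subarrays
Rf = sum(R[k:k + L, k:k + L] for k in range(K)) / K
J = np.eye(L)[::-1]                  # exchange matrix
Rfb = (Rf + J @ Rf.conj() @ J) / 2   # modified (forward-backward) smoothing

print(np.linalg.matrix_rank(R), np.linalg.matrix_rank(Rfb))  # 1 2
```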
Abstract:
The problem of estimating multiple Carrier Frequency Offsets (CFOs) in the uplink of MIMO-OFDM systems with Co-Channel (CC) and OFDMA-based carrier allocation is considered. A tri-linear data model for a generalized multiuser OFDM system is formulated. A novel blind subspace-based estimator of multiple CFOs, built on the Khatri-Rao product, is proposed for arbitrary carrier allocation schemes in OFDMA systems and for CC users in OFDM systems. The method works where the conventional subspace method fails. The performance of the proposed methods is compared with a pilot-based Least-Squares method.
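The Khatri-Rao product the estimator builds on is the column-wise Kronecker product of two equally wide matrices. A minimal pure-Python version with a made-up 2x2 example:

```python
# Khatri-Rao product: column c of the result is the Kronecker product
# of column c of A with column c of B.

def khatri_rao(A, B):
    """Column-wise Kronecker product of equally wide matrices A and B."""
    cols = len(A[0])
    assert cols == len(B[0]), "A and B need the same number of columns"
    out = []
    for i in range(len(A)):
        for j in range(len(B)):
            out.append([A[i][c] * B[j][c] for c in range(cols)])
    return out

A = [[1, 2],
     [3, 4]]
B = [[0, 1],
     [1, 0]]
print(khatri_rao(A, B))   # [[0, 2], [1, 0], [0, 4], [3, 0]]
```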
Abstract:
In this paper, power management algorithms are proposed for energy harvesting sensors (EHS) that operate purely on energy harvested from the environment. To maintain energy neutrality, EHS nodes schedule their utilization of the harvested power so as to save energy into, or draw energy from, an inefficient battery during peak and low energy harvesting periods, respectively. Under this constraint, a key system design goal is to transmit as much data as possible given the energy harvesting profile. For simplicity of implementation, it is assumed that the EHS transmits at a constant data rate with power control, when the channel is sufficiently good. By converting the data rate maximization problem into a convex optimization problem, the optimal load scheduling (power management) algorithm that maximizes the average data rate subject to energy neutrality is derived. The energy storage requirements on the battery for implementing the proposed algorithm are also calculated. Further, robust schemes are proposed that account for insufficient battery storage capacity or errors in the prediction of the harvested power. The superior performance of the proposed algorithms over conventional scheduling schemes is demonstrated through computations using numerical data from solar energy harvesting databases.
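The energy-neutrality constraint at the heart of the problem can be checked with a short simulation: surplus harvest charges an inefficient battery, deficits drain it, and a schedule is infeasible if the battery would go negative. This is only a feasibility check for a candidate constant drain, not the paper's convex-optimization solution; the profile, efficiency, and power levels are invented.

```python
# Energy-neutrality check: can a constant power drain p be sustained by
# a harvest profile when storing surplus costs an efficiency factor eta?

def feasible(harvest, p, eta=0.8, capacity=float("inf")):
    """True if the battery never goes negative under constant drain p."""
    battery = 0.0
    for h in harvest:
        if h >= p:
            battery = min(capacity, battery + eta * (h - p))  # store surplus
        else:
            battery -= (p - h)                                # cover deficit
            if battery < 0:
                return False
    return True

profile = [5, 5, 0, 0, 5, 5, 0, 0]      # alternating harvest/blackout slots
print(feasible(profile, 2.0), feasible(profile, 3.0))  # True False
```

Sweeping p upward until `feasible` first fails gives the largest sustainable constant load for a known profile, which is the quantity the optimal scheduler trades off against battery size.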
Abstract:
Relay selection combined with buffering of packets at the relays can substantially increase the throughput of a cooperative network that uses rateless codes. However, buffering also increases the end-to-end delays owing to the additional queuing delays at the relay nodes. In this paper we propose a novel method that exploits a unique property of rateless codes: a receiver can decode a packet from non-contiguous and unordered portions of the received signal. In our method, each relay, depending on its queue length, ignores its received coded bits with a given probability. We show that this substantially reduces the end-to-end delays while retaining almost all of the throughput gain achieved by buffering. In effect, the method increases the odds that a packet is first decoded by a relay with a smaller queue. Thus, the queuing load is balanced across the relays and traded off against transmission times. We derive explicit necessary and sufficient conditions for the stability of this system when the various channels undergo fading. Although the system gives rise to analytically intractable G/GI/1 queues, we gain further insight into the method by analyzing a similar system with a simpler model for the relay-to-destination transmission times.
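The load-balancing intuition can be made concrete with a deterministic toy: if a relay with backlog q keeps only a fraction of each slot's coded bits that shrinks with q, lightly loaded relays reach the decoding threshold first and absorb the new packet. The keep fraction 1/(1+q), slot size, and threshold below are all invented for illustration and are not the paper's stability analysis.

```python
# Toy sketch of queue-aware thinning with rateless codes: the expected
# bits a relay keeps per slot fall with its queue length, so a relay
# with a shorter queue decodes (and enqueues the packet) first.

def slots_to_decode(queue_len, k_bits=100, bits_per_slot=10):
    """Slots until a relay with the given backlog accumulates k_bits."""
    kept = bits_per_slot / (1 + queue_len)   # thinning rule (illustrative)
    got, slots = 0.0, 0
    while got < k_bits:
        got += kept
        slots += 1
    return slots

busy, idle = slots_to_decode(queue_len=4), slots_to_decode(queue_len=0)
print(idle, busy)   # 10 50: the empty-queue relay decodes five times sooner
```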
Abstract:
Owing to increased customer demand for make-to-order products and shorter product life-cycles, assembly lines today are designed to ensure a quick switch-over from one product model to another, which is essential for a company's survival in the marketplace. The complexity of the decisions pertaining to the type of training, the number of workers, and their exposure to different tasks, especially in the current era of customized production, is a serious problem facing managers and HRD practitioners in industry. This paper aims to determine the amount of cross-training and the dynamic deployment policy required for workforce flexibility in a make-to-order assembly. These issues are addressed by adopting the concept of an evolutionary fuzzy system, because of the linguistic nature of the attributes associated with product variety and task complexity. A fuzzy system-based methodology is proposed to determine the amount of cross-training and the dynamic deployment policy. The proposed methodology is tested on 10 sample products of varying complexities, and the results obtained are in line with the conclusions drawn by previous researchers.