44 results for Referential stamp
at Indian Institute of Science - Bangalore - India
Abstract:
Possible integration of the Single Electron Transistor (SET) with CMOS technology is making the study of the semiconductor SET more important than that of the metallic SET, and consequently the study of energy quantization effects on semiconductor SET devices and circuits is gaining significance. In this paper, for the first time, the effects of energy quantization on SET inverter performance are examined through analytical modeling and Monte Carlo simulations. It is observed that the primary effect of energy quantization is to change the Coulomb blockade region and drain current of SET devices and, as a result, to affect the noise margin, power dissipation, and propagation delay of the SET inverter. A new model for the noise margin of the SET inverter is proposed which includes the energy quantization effects. Using the noise margin as a metric, the robustness of the SET inverter is studied against the effects of energy quantization. It is shown that an SET inverter designed with CT : CG = 1/3 (where CT and CG are the tunnel junction and gate capacitances, respectively) offers maximum robustness against energy quantization.
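For orientation, the standard orthodox-theory energy scales behind these effects (textbook relations, not the model proposed in the paper): with island capacitance $C_\Sigma = 2C_T + C_G$, the electron addition energy without and with energy quantization is

$$E_{add}^{cl} = \frac{e^2}{C_\Sigma}, \qquad E_{add}^{q} = \frac{e^2}{C_\Sigma} + \Delta E,$$

where $\Delta E$ is the discrete energy level spacing on the island; this extra $\Delta E$ is what widens the Coulomb blockade region and alters the drain current.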
Abstract:
This paper presents real-time simulation models of electrical machines on an FPGA platform. Implementation of real-time numerical integration methods with digital logic elements is discussed. Several numerical integration methods are presented. A real-time simulation of a DC machine is carried out on this FPGA platform and important transient results are presented. These results are compared with simulation results obtained through commercial off-line simulation software.
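As a rough illustration of the kind of model being discretized (the explicit-Euler choice and all parameter values below are assumptions for this sketch, not details from the paper), a separately excited DC machine reduces to two coupled first-order equations whose fixed-step update maps naturally onto FPGA logic:

# Minimal sketch: forward-Euler integration of a separately excited DC
# machine at a fixed step. All parameter values are illustrative assumptions.
Ra, La = 0.5, 0.01       # armature resistance [ohm], inductance [H]
J, B = 0.02, 0.001       # rotor inertia [kg*m^2], viscous friction [N*m*s]
Ke = Kt = 0.5            # back-EMF and torque constants
dt = 1e-5                # fixed real-time integration step [s]

def euler_step(i, w, V, TL):
    """Advance armature current i [A] and speed w [rad/s] by one step."""
    di = (V - Ra * i - Ke * w) / La     # armature voltage loop
    dw = (Kt * i - B * w - TL) / J      # mechanical equation
    return i + dt * di, w + dt * dw

i = w = 0.0
for _ in range(100_000):                # 1 s of simulated time
    i, w = euler_step(i, w, V=200.0, TL=5.0)   # 200 V step, 5 N*m load
print(f"after 1 s: i = {i:.2f} A, w = {w:.1f} rad/s")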
Abstract:
Simultaneous consideration of both performance and reliability issues is important in the choice of computer architectures for real-time aerospace applications. One of the requirements for such a fault-tolerant computer system is the characteristic of graceful degradation. A shared and replicated resources computing system represents such an architecture. In this paper, a combinatorial model is used for the evaluation of the instruction execution rate of a degradable, replicated resources computing system such as a modular multiprocessor system. Next, a method is presented to evaluate the computation reliability of such a system utilizing a reliability graph model and the instruction execution rate. Finally, this computation reliability measure, which simultaneously describes both performance and reliability, is applied as a constraint in an architecture optimization model for such computing systems.
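A minimal sketch of the combinatorial idea (only the structure; the paper's actual model and parameters are not reproduced): if each of N identical processors is up independently with probability p, the expected instruction execution rate is the availability-weighted sum of per-configuration rates, with the per-configuration rate sublinear in the number of working processors because of shared-resource contention.

from math import comb

def expected_rate(N, p, rate_of):
    """E[rate] = sum_i P(exactly i processors up) * rate_of(i)."""
    return sum(comb(N, i) * p**i * (1 - p)**(N - i) * rate_of(i)
               for i in range(N + 1))

# Illustrative: 4 processors, availability 0.9, saturating rate model.
print(expected_rate(4, 0.9, lambda i: 1.0e6 * i / (1 + 0.1 * i)))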
Abstract:
The success of automatic speaker recognition in laboratory environments suggests applications in forensic science for establishing the identity of individuals on the basis of features extracted from speech. A theoretical model for such a verification scheme for continuous, normally distributed features is developed. The three cases of using (a) a single feature, (b) multiple independent measurements of a single feature, and (c) multiple independent features are explored. The number of independent features needed for a reliable personal identification is computed based on the theoretical model and an exploratory study of some speech features.
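For intuition, a textbook Gaussian-detection relation of the kind such a model rests on (a sketch, not the paper's derivation): if each feature separates the claimed speaker from an impostor by $d$ standard deviations, then $n$ independent features give

$$d_n = d\sqrt{n}, \qquad P_e = Q\!\left(\tfrac{1}{2}\,d\sqrt{n}\right),$$

so the number of features needed for a target error probability $P_e$ grows as $n = \left(2\,Q^{-1}(P_e)/d\right)^2$.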
Abstract:
To detect errors in decision tables one needs to decide whether a given set of constraints is feasible or not. This paper describes an algorithm to do so when the constraints are linear in variables that take only integer values. Decision tables with such constraints occur frequently in business data processing and in nonnumeric applications. The aim of the algorithm is to exploit the abundance of very simple constraints that occur in typical decision table contexts. Essentially, the algorithm is a backtrack procedure in which the solution space is pruned using the set of simple constraints. After some simplifications, the simple constraints are captured in an acyclic directed graph with weighted edges. Further, only those partial vectors are considered for extension which can be extended to assignments that at least satisfy the simple constraints. This is how pruning of the solution space is achieved. For every partial assignment considered, the graph representation of the simple constraints provides a lower bound for each variable which is not yet assigned a value. These lower bounds play a vital role in the algorithm, and they are obtained efficiently by updating older lower bounds. The present algorithm also incorporates an idea by which it can be checked whether or not an (m - 2)-ary vector can be extended to a solution vector of m components, whereby backtracking is reduced by one component.
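A minimal sketch of the pruning idea, reading the "simple constraints" as difference constraints x[j] >= x[i] + w for concreteness (an assumption for illustration; the paper's class of simple constraints may be broader). The constraints form a weighted digraph; assigning a value to a variable propagates updated lower bounds to its successors, and a partial vector is extended only while every bound stays within its variable's domain.

def feasible(n, domain, edges, general_ok):
    """domain[k] = (lo, hi) for integer x[k]; edges hold (i, j, w) meaning
    x[j] >= x[i] + w, with variables indexed in topological order of the
    acyclic constraint graph; general_ok checks the remaining constraints."""
    succs = [[] for _ in range(n)]
    for i, j, w in edges:
        succs[i].append((j, w))

    def extend(x, lb):
        k = len(x)
        if k == n:
            return general_ok(x)
        for v in range(lb[k], domain[k][1] + 1):
            new_lb = list(lb)               # update older bounds incrementally
            ok = True
            for j, w in succs[k]:
                new_lb[j] = max(new_lb[j], v + w)
                if new_lb[j] > domain[j][1]:
                    ok = False              # bound leaves the domain: prune
                    break
            if ok and extend(x + [v], new_lb):
                return True
        return False

    return extend([], [lo for lo, hi in domain])

# x1 >= x0 + 2, x2 >= x1 + 1, all in [0, 5]; full vectors need an even sum.
print(feasible(3, [(0, 5)] * 3, [(0, 1, 2), (1, 2, 1)],
               lambda x: sum(x) % 2 == 0))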
Abstract:
The role of melt convection in the performance of heat sinks with Phase Change Material (PCM) is presented in this paper. The heat sink consists of aluminum plate fins embedded in PCM, and heat flux is supplied from the bottom. The design of such a heat sink requires optimization with respect to its geometrical parameters. The objective of the optimization is to maximize the heat sink operation time for the prescribed heat flux and the critical chip temperature. The parameters considered for optimization are fin number and fin thickness. The height and base plate thickness of the heat sink are kept constant in the present analysis. An enthalpy based CFD model is developed, which is capable of simulating phase change and associated melt convection. The CFD model is coupled with a Genetic Algorithm (GA) for carrying out the optimization. Two cases are considered, one without melt convection (conduction regime) and the other with convection. It is found that the geometrical optimizations of heat sinks are different for the two cases, indicating the importance of melt convection in the design of heat sinks with PCMs.
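A minimal sketch of the optimization loop's shape (the GA operators, parameter bounds, and the stand-in objective below are illustrative assumptions; in the paper each evaluation is a full enthalpy-based CFD run):

import random

def operation_time(n_fins, t_fin):
    """Stub for the CFD evaluation: time until the chip hits its critical
    temperature. Hypothetical surrogate peaking at 6 fins, 1.5 mm thickness."""
    return -(n_fins - 6) ** 2 - 40 * (t_fin - 1.5) ** 2

def ga(pop_size=20, generations=30):
    pop = [(random.randint(2, 12), random.uniform(0.5, 3.0))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: operation_time(*p), reverse=True)
        parents = pop[:pop_size // 2]                     # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            (n1, t1), (n2, t2) = random.sample(parents, 2)
            n, t = random.choice((n1, n2)), (t1 + t2) / 2     # crossover
            if random.random() < 0.2:                         # mutation
                n = min(12, max(2, n + random.choice((-1, 1))))
                t = min(3.0, max(0.5, t + random.gauss(0, 0.1)))
            children.append((n, t))
        pop = parents + children
    return max(pop, key=lambda p: operation_time(*p))

print(ga())   # converges near (6, 1.5) for the surrogate objective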
Abstract:
In this paper, the validity of the single fault assumption in deriving diagnostic test sets is examined with respect to crosspoint faults in programmable logic arrays (PLAs). The control input procedure developed here can be used to convert PLAs having undetectable crosspoint faults into crosspoint-irredundant PLAs for testing purposes. All crosspoints will be testable in crosspoint-irredundant PLAs. The control inputs are used as extra variables during testing. They are maintained at logic 1 during normal operation. A useful heuristic for obtaining a near-minimal number of control inputs is suggested. Expressions for calculating bounds on the number of control inputs have also been obtained.
Abstract:
This paper describes a method of adjusting the stator power factor angle for the control of an induction motor fed from a current source inverter (CSI), based on the concept of space vectors (or Park vectors). It is shown that, under steady state, keeping the torque angle constant over the entire operating range has the advantage of keeping the slip frequency constant. This can be utilized to dispense with the speed feedback and simplify the control scheme for the drive, such that the stator voltage integral zero crossings alone can be used as feedback for deciding the triggering instants of the CSI thyristors under stable operation of the system. A closed-loop control strategy based on this principle is developed for the drive using a microprocessor-based control system and is implemented on a laboratory prototype CSI-fed induction motor drive.
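For context, a standard steady-state relation in rotor-flux coordinates (a textbook sketch, not necessarily the paper's formulation): with $i_d$ the flux-producing and $i_q$ the torque-producing stator current components and $\tau_r = L_r/R_r$ the rotor time constant,

$$\omega_{sl} = \frac{1}{\tau_r}\,\frac{i_q}{i_d} = \frac{\tan\delta}{\tau_r},$$

so holding the torque angle $\delta$ (between the stator current vector and the rotor flux) constant fixes the slip frequency $\omega_{sl}$.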
Abstract:
The paper presents a new adaptive delta modulator, called the hybrid constant factor incremental delta modulator (HCFIDM), which uses instantaneous as well as syllabic adaptation of the step size. Three instantaneous algorithms have been used: two new instantaneous algorithms (CFIDM-3 and CFIDM-2) and the third, Song's voice ADM (SVADM). The quantisers have been simulated on a digital computer and their performances studied. The figure of merit used is the SNR with correlated, RC-shaped Gaussian signals and real speech as the input. The results indicate that the hybrid technique is superior to the nonhybrid adaptive quantisers. Also, the two new instantaneous algorithms developed have improved SNR and faster response to step inputs compared with the earlier systems.
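A minimal sketch of constant-factor instantaneous adaptation in a delta modulator (generic CFDM logic, assumed for illustration; HCFIDM additionally layers syllabic adaptation on top): the step size is multiplied by a constant K while successive output bits agree (slope overload) and divided by K when they alternate (granular noise).

import math

def cfdm_encode(x, step0=0.05, K=1.5, step_min=1e-3, step_max=1.0):
    """Generic constant-factor adaptive delta modulator (illustrative)."""
    bits, est, step, prev_bit = [], 0.0, step0, 1
    for sample in x:
        bit = 1 if sample >= est else -1        # 1-bit quantizer
        step = (min(step_max, step * K) if bit == prev_bit
                else max(step_min, step / K))   # instantaneous adaptation
        est += bit * step                       # local decoder estimate
        bits.append(bit)
        prev_bit = bit
    return bits

bits = cfdm_encode(math.sin(2 * math.pi * 50 * n / 8000) for n in range(8000))
print(len(bits), "bits,", sum(b == 1 for b in bits), "positive")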
Abstract:
The statistical minimum risk pattern recognition problem, when the classification costs are random variables of unknown statistics, is considered. Using medical diagnosis as a possible application, the problem of learning the optimal decision scheme is studied for a two-class, two-action case, as a first step. This reduces to the problem of learning the optimum threshold (for taking appropriate action) on the a posteriori probability of one class. A recursive procedure for updating an estimate of the threshold is proposed. The estimation procedure does not require knowledge of the actual class labels of the sample patterns in the design set. The adaptive scheme of using the present threshold estimate for taking action on the next sample is shown to converge, in probability, to the optimum. The results of a computer simulation study of three learning schemes demonstrate the theoretically predictable salient features of the adaptive scheme.
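A generic stochastic-approximation recursion of the type referred to (a sketch of the Robbins-Monro form, not the paper's exact update):

$$\hat\theta_{n+1} = \hat\theta_n - a_n Z_n, \qquad a_n = \frac{a}{n}, \quad \sum_n a_n = \infty, \ \sum_n a_n^2 < \infty,$$

where $Z_n$ is a noisy observation whose mean vanishes exactly at the optimum threshold $\theta^*$; under these gain conditions $\hat\theta_n \to \theta^*$ in probability.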
Abstract:
Non-stationary signal modeling is a well addressed problem in the literature. Many methods have been proposed to model non-stationary signals, such as time varying linear prediction and AM-FM modeling, the latter being more popular. Estimation techniques to determine the AM-FM components of a narrow-band signal, such as the Hilbert transform, DESA-1, DESA-2, the auditory processing approach, the zero crossing (ZC) approach, etc., are prevalent, but their robustness to noise is not clearly addressed in the literature. This is critical for most practical applications, such as in communications. We explore the robustness of different AM-FM estimators in the presence of white Gaussian noise. Also, we propose three new methods for instantaneous frequency (IF) estimation based on non-uniform samples of the signal and multi-resolution analysis. Experimental results show that ZC based methods give better results than popular methods such as DESA in clean as well as noisy conditions.
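As a concrete point of reference, a minimal sketch of a zero-crossing IF estimator alongside the Teager energy operator on which the DESA family is built (standard definitions; the paper's three proposed methods, based on non-uniform sampling and multi-resolution analysis, are more involved):

import numpy as np

def teager(x):
    """Teager-Kaiser energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

def zc_if(x, fs):
    """ZC-based IF: the gap between consecutive zero crossings is half a
    period, so the local frequency estimate is fs / (2 * gap)."""
    zc = np.where(np.diff(np.signbit(x)))[0]   # samples just before sign flips
    return fs / (2.0 * np.diff(zc))            # one IF estimate per gap

fs = 8000.0
t = np.arange(2048) / fs
x = np.cos(2 * np.pi * (500 * t + 200 * t ** 2))   # chirp: IF = 500 + 400 t Hz
print(zc_if(x, fs)[:5])                            # starts near 500 Hz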
Abstract:
Power conversion using high frequency (HF) link converters is popular because of the compact size and light weight of the high-frequency transformer. This study focuses on improved utilisation of the HF transformer in DC-AC applications. In practical application, the operating condition of the power converter deviates significantly from the design considerations. These deviating factors are commutation requirements (dead-time, overlap), mismatch in device drops, and the presence of the fundamental frequency in the load current. As a result, the HF transformer handles some amount of low-frequency components (including DC) other than the desired HF components. This causes the operating point on the B-H curve to shift away from its normal or idealised position and hence results in poor utilisation of the HF transformer and unwanted losses. This study investigates the nature of the problem through experimental determination of an approximate lumped parameter model and the saturation behaviour (B-H curve) of the HF transformer. A simple closed-loop control algorithm with online tuning of the controller parameters is proposed to improve the utilisation of the isolation transformer. Simulation and experimental results are presented.
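A minimal sketch of the flux-balancing idea commonly used against this drift (a generic scheme for illustration; the paper's controller and its online parameter tuning are not reproduced here): low-pass filter the sensed transformer current to estimate its DC content, then trim the applied volt-seconds with slow integral action.

def flux_balance_step(i_meas, state, dt, k_i=0.5, alpha=0.001):
    """One control step (illustrative). Returns a small duty correction that
    rebalances positive and negative volt-seconds across the transformer."""
    state["i_dc"] += alpha * (i_meas - state["i_dc"])     # slow low-pass: DC term
    state["corr"] -= k_i * state["i_dc"] * dt             # integral action
    state["corr"] = max(-0.05, min(0.05, state["corr"]))  # keep it a small trim
    return state["corr"]

state = {"i_dc": 0.0, "corr": 0.0}
# inside the PWM loop: duty_pos = d + flux_balance_step(i_sensed, state, 1/20e3)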
Abstract:
Software transactional memory (STM) has been proposed as a promising programming paradigm for shared memory multi-threaded programs, as an alternative to conventional lock based synchronization primitives. Typical STM implementations employ a conflict detection scheme which works with uniform access granularity, tracking shared data accesses either at word/cache-line or at object level. It is well known that a single fixed access tracking granularity cannot meet the conflicting goals of reducing false conflicts without impacting concurrency adversely. A fine grained granularity, while improving concurrency, can have an adverse impact on performance due to lock aliasing, lock validation overheads, and additional cache pressure. On the other hand, a coarse grained granularity can impact performance due to reduced concurrency. Thus, in general, a fixed or uniform granularity access tracking (UGAT) scheme is application-unaware and rarely matches the access patterns of an individual application or parts of an application, leading to sub-optimal performance for different parts of the application(s). To mitigate the disadvantages of the UGAT scheme, we propose a Variable Granularity Access Tracking (VGAT) scheme in this paper. We propose a compiler based approach wherein the compiler uses inter-procedural whole program static analysis to select the access tracking granularity for different shared data structures of the application based on the application's data access pattern. We describe our prototype VGAT scheme, using TL2 as our STM implementation. Our experimental results reveal that the VGAT-STM scheme can improve the performance of STAMP benchmarks by between 1.87% and 21.2%.
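To make the granularity notion concrete, a minimal sketch of TL2-style lock striping (the shift values and the per-structure table are illustrative assumptions, not the paper's implementation): the number of low address bits dropped before indexing the lock table is the access tracking granularity, and a VGAT-style compiler would choose a different shift for each shared data structure.

def lock_index(addr, shift, table_size=1 << 20):
    """Map an address to a versioned-lock slot. shift sets the granularity:
    shift=3 tracks 8-byte words; shift=12 tracks 4 KiB chunks (much coarser)."""
    return (addr >> shift) % table_size

# UGAT: one shift for every access. VGAT (illustrative): per-structure shifts.
granularity = {"hot_array": 3,            # fine: frequent conflicting writes
               "big_readonly_tree": 12}   # coarse: rare conflicts, less overhead
print(lock_index(0x7F3A1040, granularity["hot_array"]))
print(lock_index(0x7F3A1040, granularity["big_readonly_tree"]))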
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any arbitrary set of k nodes. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to any arbitrary set of d nodes and downloading an amount of data that is typically far less than the size of the data file. This amount of download is termed the repair bandwidth. Minimum storage regenerating (MSR) codes are a subclass of regenerating codes that require the least amount of network storage; every such code is a maximum distance separable (MDS) code. Further, when a replacement node stores data identical to that in the failed node, the repair is termed exact. The four principal results of the paper are (a) the explicit construction of a class of MDS codes for d = n - 1 >= 2k - 1, termed the MISER code, that achieves the cut-set bound on the repair bandwidth for the exact repair of systematic nodes, (b) a proof of the necessity of interference alignment in exact-repair MSR codes, (c) a proof showing the impossibility of constructing linear, exact-repair MSR codes for d < 2k - 3 in the absence of symbol extension, and (d) the construction, also explicit, of high-rate MSR codes for d = k + 1. Interference alignment (IA) is a theme that runs throughout the paper: the MISER code is built on the principles of IA, and IA is also a crucial component of the nonexistence proof for d < 2k - 3. To the best of our knowledge, the constructions presented in this paper are the first explicit constructions of regenerating codes that achieve the cut-set bound.
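For reference, the minimum-storage operating point implied by the cut-set bound (standard regenerating-code relations, stated for orientation rather than quoted from the paper): for a file of size $B$ stored across $n$ nodes, with any $k$ nodes sufficient for data recovery and $d$ helper nodes contacted during repair,

$$\alpha = \frac{B}{k}, \qquad \beta = \frac{B}{k(d-k+1)}, \qquad \gamma = d\beta = \frac{B}{k}\cdot\frac{d}{d-k+1},$$

where $\alpha$ is the per-node storage, $\beta$ the download per helper, and $\gamma$ the total repair bandwidth; for $k > 1$, $\gamma < B$ whenever $d > k$.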
Abstract:
Linear phase (LP) Finite Impulse Response (FIR) filters are widely used in many signal processing systems which are sensitive to phase distortion. In this article, we obtain a canonic lattice structure for an LP-FIR filter with a complex impulse response. This lattice structure is based on some novel lattice stages obtained from properties of symmetric polynomials. The canonic lattice structure exploits the redundancy in the zeros of an LP-FIR filter.
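The redundancy referred to is the standard zero-pairing of linear phase filters (a textbook property, stated for context): a length-$N$ FIR filter with complex impulse response $h(n)$ has generalized linear phase when

$$h(n) = e^{j\theta}\, h^{*}(N-1-n),$$

which forces the zeros of $H(z)$ to occur in conjugate-reciprocal pairs $(z_0,\ 1/z_0^{*})$, so roughly half of the zero locations determine the rest.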