92 results for Run
at Indian Institute of Science - Bangalore - India
Abstract:
A common trick for designing faster quantum adiabatic algorithms is to apply the adiabaticity condition locally at every instant. However, it is often difficult to determine the instantaneous gap between the lowest two eigenvalues, which is an essential ingredient in the adiabaticity condition. In this paper we present a simple linear-algebraic technique for obtaining a lower bound on the instantaneous gap even in such a situation. As an illustration, we investigate the adiabatic unordered search of van Dam et al. [17] and Roland and Cerf [15] when the non-zero entries of the diagonal final Hamiltonian are perturbed by a polynomial (in log N, where N is the length of the unordered list) amount. We use our technique to derive a bound on the running time of a local adiabatic schedule in terms of the minimum gap between the lowest two eigenvalues.
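To make the local schedule concrete, here is a minimal numerical sketch, assuming the standard unperturbed gap profile of the adiabatic Grover search (the perturbed Hamiltonians analysed in the paper are not reproduced); the adiabaticity parameter eps and the quadrature step count are illustrative choices:

```python
import numpy as np

def grover_gap(s, N):
    # Instantaneous gap between the two lowest eigenvalues for the
    # unperturbed adiabatic unordered search (Roland-Cerf form).
    return np.sqrt(1.0 - 4.0 * s * (1.0 - s) * (1.0 - 1.0 / N))

def local_schedule_runtime(N, eps=0.1, steps=100_000):
    # Local adiabaticity requires |ds/dt| <= eps * g(s)^2, so the total
    # running time is T = (1/eps) * integral_0^1 ds / g(s)^2.
    s = np.linspace(0.0, 1.0, steps)
    ds = s[1] - s[0]
    return np.sum(1.0 / grover_gap(s, N) ** 2) * ds / eps

# For large N this grows as O(sqrt(N)), the expected quadratic speed-up.
print(local_schedule_runtime(2 ** 20))
```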
Abstract:
Modern wireline and wireless communication devices are multimode, multifunctional devices. To support multiple standards on a single platform, it is necessary to develop a reconfigurable architecture that can provide the required flexibility and performance. The channel decoder is one of the most compute-intensive and essential elements of any communication system. Most standards require a reconfigurable channel decoder capable of performing both Viterbi decoding and Turbo decoding, and the decoder must also support different configurations of each. In this paper, we propose a reconfigurable channel decoder that can be configured for standards such as WCDMA, CDMA2000, IEEE 802.11, DAB, DVB and GSM. Parameters such as code rate, constraint length, generator polynomials and truncation length can be configured to map any of the above standards. A multiprocessor approach has been followed to provide higher throughput and scalable power consumption in the various configurations of the reconfigurable Viterbi and Turbo decoders. We also propose a hybrid register-exchange approach for the multiprocessor architecture to minimize power consumption.
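As a deliberately simple reference point, a sequential hard-decision Viterbi decoder for one illustrative configuration (rate 1/2, constraint length K = 3, generator polynomials 7 and 5 in octal) can be sketched as follows; this reflects neither the multiprocessor architecture nor the hybrid register-exchange scheme proposed in the paper:

```python
# Rate-1/2, K=3 convolutional encoder and hard-decision Viterbi decoder.
G = [0b111, 0b101]        # generator polynomials (7, 5 in octal)
K = 3                     # constraint length
NSTATES = 1 << (K - 1)

def encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << (K - 1)) | state       # newest bit enters at the top
        out += [bin(reg & g).count("1") & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, nbits):
    INF = float("inf")
    metric = [0.0] + [INF] * (NSTATES - 1)  # encoder starts in state 0
    paths = [[] for _ in range(NSTATES)]
    for t in range(nbits):
        r = received[2 * t:2 * t + 2]
        new_metric = [INF] * NSTATES
        new_paths = [None] * NSTATES
        for s in range(NSTATES):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << (K - 1)) | s
                ns = reg >> 1
                expect = [bin(reg & g).count("1") & 1 for g in G]
                m = metric[s] + sum(x != y for x, y in zip(r, expect))
                if m < new_metric[ns]:      # survivor path selection
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(NSTATES), key=lambda s: metric[s])]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(encode(msg), len(msg)) == msg
```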
Abstract:
Numerical Linear Algebra (NLA) kernels are at the heart of many computational problems. These kernels require hardware acceleration for increased throughput. NLA solvers for dense and sparse matrices differ in the way the matrices are stored and operated upon, although they exhibit similar computational properties. While ASIC solutions for NLA solvers can deliver high performance, they are not scalable and hence not commercially viable. In this paper, we show how NLA kernels can be accelerated on REDEFINE, a scalable runtime-reconfigurable hardware platform. Compared to a software implementation, the direct solver (modified Faddeev's algorithm) on REDEFINE shows a 29x improvement on average, and the iterative solver (Conjugate Gradient algorithm) shows a 15-20% improvement. We further show that the solution on REDEFINE scales to larger problem sizes without any notable degradation in performance.
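For reference, the iterative solver named above is the textbook Conjugate Gradient method; a plain NumPy baseline (no REDEFINE-specific mapping implied) looks like this:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    # Solves Ax = b for a symmetric positive-definite matrix A.
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                  # initial residual
    p = r.copy()                   # initial search direction
    rs = r @ r
    for _ in range(max_iter or n):
        Ap = A @ p
        alpha = rs / (p @ Ap)      # optimal step along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
print(conjugate_gradient(A, b))    # ~ [0.0909, 0.6364]
```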
Abstract:
Before installation, a voltage source converter is usually subjected to a heat-run test to verify its thermal design and performance under load. For the heat-run test, the converter needs to be operated at rated voltage and rated current for a substantial length of time. Hence, such tests consume a huge amount of energy in the case of high-power converters. Also, the capacities of the source and loads available in the research and development (R&D) centre or the production facility may be inadequate to conduct such tests. This paper proposes a method to conduct heat-run tests on high-power, pulse width modulated (PWM) converters with low energy consumption. The experimental set-up consists of the converter under test and another converter (of similar or higher rating), both connected in parallel on the ac side and left open on the dc side. Vector control, or synchronous reference frame control, is employed to control the converters such that one draws a certain amount of reactive power and the other supplies it; only the system losses are drawn from the mains. The performance of the controller is validated through simulation and experiments. Experimental results pertaining to heat-run tests on a high-power PWM converter are presented at power levels of 25 kVA to 150 kVA.
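A back-of-the-envelope sketch of why the scheme saves energy, with assumed numbers (the paper's loss figures are not quoted here): since the two converters merely exchange reactive power, the mains supplies only their conversion losses:

```python
# Illustrative energy accounting for the back-to-back heat-run scheme.
S_rated_kVA = 150.0   # assumed rated apparent power of each converter
eff = 0.97            # assumed per-converter efficiency at rated load

loss_per_converter_kW = S_rated_kVA * (1.0 - eff)
mains_draw_kW = 2 * loss_per_converter_kW   # only the losses of both units

print(f"circulating power: {S_rated_kVA:.0f} kVA per converter")
print(f"mains draw: {mains_draw_kW:.1f} kW "
      f"(vs a {S_rated_kVA:.0f} kVA supply for a direct load test)")
# => roughly 9 kW drawn from the mains instead of a full 150 kVA source.
```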
Abstract:
Information spreading in a population can be modeled as an epidemic. Campaigners (e.g., election campaign managers, companies marketing products or movies) are interested in spreading a message by a given deadline, using limited resources. In this paper, we formulate the above situation as an optimal control problem, and the solution (using Pontryagin's Maximum Principle) prescribes an optimal resource allocation over the duration of the campaign. We consider two different scenarios: in the first, the campaigner can adjust a direct control (over time) which allows her to recruit individuals from the population (at some cost) to act as spreaders in the Susceptible-Infected-Susceptible (SIS) epidemic model. In the second, we allow the campaigner to adjust the effective spreading rate by incentivizing the infected in the Susceptible-Infected-Recovered (SIR) model, in addition to the direct recruitment. Our formulation uses a time-varying information spreading rate to model the changing interest level of individuals in the campaign as the deadline approaches. In both cases, we show the existence of a solution and its uniqueness for sufficiently small campaign deadlines. For a fixed spreading rate, we show the effectiveness of the optimal control strategy against a constant control strategy, a heuristic control strategy and no control. We also show the sensitivity of the optimal control to the spreading rate profile when it is time-varying.
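A minimal forward simulation of the first (direct-recruitment SIS) scenario; the parameter values and the controls compared are illustrative, not the Pontryagin-optimal control derived in the paper:

```python
import numpy as np

def informed_at_deadline(u, beta, gamma=0.1, i0=0.01, T=10.0, steps=1000):
    # Euler integration of di/dt = beta(t)*i*(1-i) - gamma*i + u(t)*(1-i),
    # where u(t) is the rate of recruiting spreaders from the population.
    dt = T / steps
    i = i0
    for k in range(steps):
        t = k * dt
        i += dt * (beta(t) * i * (1 - i) - gamma * i + u(t) * (1 - i))
    return i   # fraction of the population informed at the deadline

# Time-varying spreading rate modelling waning interest near the deadline.
beta = lambda t: 0.3 * (1.0 + 0.5 * np.cos(np.pi * t / 10.0))

print("no control:      ", informed_at_deadline(lambda t: 0.00, beta))
print("constant control:", informed_at_deadline(lambda t: 0.05, beta))
```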
Abstract:
The discovery of a Higgs boson with a mass of 126 GeV at the LHC, when combined with the non-observation of new physics in both direct and indirect searches, imposes strong constraints on supersymmetric models and in particular on the top squark sector. Experiments for the direct detection of dark matter have provided yet more constraints on the neutralino LSP mass and its interactions. After imposing limits from the Higgs, flavour and dark matter sectors, we examine the feasibility of a light stop in the context of the pMSSM, in light of current results for stop and other SUSY searches at the LHC. We only require that the neutralino dark matter explain a fraction of the cosmologically measured dark matter abundance. We find that a stop with mass below ~500 GeV is still allowed. We further study various probes of the light stop scenario that could be performed at the LHC Run-II, either through direct searches for the light and heavy stop, or through SUSY searches not currently available in simplified model results. Moreover, we study the characteristics of the heavy Higgs for the points in the parameter space allowed by all the available constraints and illustrate the region with large cross sections to fermionic or electroweakino channels. Finally, we show that nearly all scenarios with a small stop-LSP mass difference will be tested by Xenon1T provided the NLSP is a chargino, thus probing a region hard to access at the LHC.
Abstract:
The ergodic or long-run average cost control problem for a partially observed finite-state Markov chain is studied via the associated fully observed separated control problem for the nonlinear filter. Dynamic programming equations for the latter are derived, leading to existence and characterization of optimal stationary policies.
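In standard notation (assumed here; the paper's own notation may differ), the separated problem replaces the hidden state by the belief π_t, propagated by the discrete-time nonlinear filter, and the ergodic dynamic programming equation takes the usual average-cost form:

```latex
% Belief update (nonlinear filter) for transition probabilities p_{ij}(u),
% observation likelihoods b_j(y) and control u_t:
\[
  \pi_{t+1}(j) \;=\;
  \frac{b_j(y_{t+1}) \sum_i p_{ij}(u_t)\,\pi_t(i)}
       {\sum_{j'} b_{j'}(y_{t+1}) \sum_i p_{ij'}(u_t)\,\pi_t(i)} .
\]
% Average-cost dynamic programming equation for the separated problem,
% with optimal ergodic cost \rho and relative value function V:
\[
  \rho + V(\pi) \;=\;
  \min_{u}\Big( \langle \pi,\, c(\cdot,u) \rangle
  + \mathbb{E}\big[\, V(\pi_{t+1}) \;\big|\; \pi_t = \pi,\ u_t = u \,\big] \Big).
\]
```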
Abstract:
We believe the Babcock-Leighton process of poloidal field generation to be the main source of irregularity in the solar cycle. The random nature of this process may make the poloidal field in one hemisphere stronger than that in the other hemisphere at the end of a cycle. We expect this to induce an asymmetry in the next sunspot cycle. We look for evidence of this in the observational data and then model it theoretically with our dynamo code. Since actual polar field measurements exist only from the 1970s, we use the polar faculae number data recorded by Sheeley (1991, 2008) as a proxy of the polar field and estimate the hemispheric asymmetry of the polar field in different solar minima during the major part of the twentieth century. This asymmetry is found to have a reasonable correlation with the asymmetry of the next cycle. We then run our dynamo code by feeding information about this asymmetry at the successive minima and compare the results with observational data. We find that the theoretically computed asymmetries of different cycles compare favorably with the observational data, with the correlation coefficient being 0.73. Due to the coupling between the two hemispheres, any hemispheric asymmetry tends to get attenuated with time. The hemispheric asymmetry of a cycle either from observational data or from theoretical calculations statistically tends to be less than the asymmetry in the polar field (as inferred from the faculae data) in the preceding minimum. This reduction factor turns out to be 0.43 and 0.51 respectively in observational data and theoretical simulations.
Abstract:
The literature contains many examples of digital procedures for the analytical treatment of electroencephalograms, but there is as yet no standard by which those techniques may be judged or compared. This paper proposes one method of generating an EEG, based on a computer program for Zetterberg's simulation. It is assumed that the statistical properties of an EEG may be represented by stationary processes having rational transfer functions, achieved by a system of software filters and random number generators. The model represents neither the neurological mechanism responsible for generating the EEG, nor any particular type of EEG record; transient phenomena such as spikes, sharp waves and alpha bursts are also excluded. The basis of the program is a valid ‘partial’ statistical description of the EEG; that description is then used to produce a digital representation of a signal which, if plotted sequentially, might or might not by chance resemble an EEG. That is unimportant: what matters is that the statistical properties of the series remain those of a real EEG, and it is in this sense that the output is a simulation of the EEG. There is considerable flexibility in the form of the output, i.e. its alpha, beta and delta content, which may be selected by the user, the same selected parameters always producing the same statistical output. The filtered outputs from the random number sequences may be scaled to provide realistic power distributions in the accepted EEG frequency bands and then summed to create a digital output signal, the ‘stationary EEG’. It is suggested that the simulator might act as a test input to digital analytical techniques for the EEG, enabling at least a substantial part of those techniques to be compared and assessed in an objective manner. The equations necessary to implement the model are given. The program has been run on a DEC1090 computer but is suitable for any microcomputer having more than 32 kBytes of memory; the execution time required to generate a 25 s simulated EEG is in the region of 15 s.
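The same construction is easy to re-create today: pass independent white-noise sequences through band-limiting filters, scale each band, and sum. The sampling rate, band edges, gains and filter order below are illustrative choices, not Zetterberg's original parameter values:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 256                        # sampling rate in Hz (assumed)
n = 25 * fs                     # 25 s of simulated signal
rng = np.random.default_rng(0)  # fixed seed: same parameters, same output

def band_noise(lo_hz, hi_hz, gain):
    # White noise shaped by a Butterworth band-pass filter.
    b, a = butter(4, [lo_hz / (fs / 2), hi_hz / (fs / 2)], btype="band")
    return gain * lfilter(b, a, rng.standard_normal(n))

# Delta, alpha and beta contributions with user-selectable gains,
# summed into the 'stationary EEG'.
eeg = band_noise(0.5, 4.0, 3.0) + band_noise(8.0, 13.0, 2.0) \
      + band_noise(13.0, 30.0, 1.0)
```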
Abstract:
A new method of specifying the syntax of programming languages, known as hierarchical language specifications (HLS), is proposed. Efficient parallel algorithms for parsing languages generated by HLS are presented. These algorithms run on an exclusive-read exclusive-write parallel random-access machine. They require O(n) processors and O(log² n) time, where n is the length of the string to be parsed. The most important feature of these algorithms is that they do not use a stack.
Abstract:
We present a fast algorithm for computing a Gomory-Hu tree or cut tree for an unweighted undirected graph G = (V, E). The expected running time of our algorithm is Õ(mc), where |E| = m and c is the maximum u-v edge connectivity over u, v ∈ V. When the input graph is also simple (i.e., it has no parallel edges), the u-v edge connectivity for each pair of vertices u and v is at most n − 1, so the expected running time of our algorithm for simple unweighted graphs is Õ(mn). All the algorithms currently known for constructing a Gomory-Hu tree [8, 9] use n − 1 minimum s-t cut (i.e., max flow) subroutines. In conjunction with the current fastest Õ(n^(20/9)) max flow algorithm due to Karger and Levine [11], this yields the current best running time of Õ(n^(20/9) · n) for Gomory-Hu tree construction on simple unweighted graphs with m edges and n vertices. Thus we present the first Õ(mn) algorithm for constructing a Gomory-Hu tree for simple unweighted graphs. We do not use a max flow subroutine here; instead we present an efficient tree packing algorithm for computing Steiner edge connectivity and use it as our main subroutine. The advantage of using a tree packing algorithm for constructing a Gomory-Hu tree is that the work done in computing a minimum Steiner cut for a Steiner set S ⊆ V can be reused for computing a minimum Steiner cut for certain Steiner sets S′ ⊆ S.
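The tree-packing algorithm itself is not reproduced here, but the defining property of a Gomory-Hu tree (the u-v edge connectivity equals the minimum edge weight on the u-v tree path) is easy to check on a small unweighted graph with networkx's built-in, max-flow-based construction:

```python
import networkx as nx

G = nx.petersen_graph()
nx.set_edge_attributes(G, 1, "capacity")   # unweighted => unit capacities
T = nx.gomory_hu_tree(G)

u, v = 0, 7
path = nx.shortest_path(T, u, v)
lam = min(T[a][b]["weight"] for a, b in zip(path, path[1:]))
assert lam == nx.edge_connectivity(G, u, v)
print(lam)   # 3: the Petersen graph is 3-edge-connected
```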
Abstract:
Friction characteristics of journal bearings made from a cast graphitic aluminium particulate composite alloy were determined under mixed lubrication and compared with those of the base alloy (without graphite) and leaded phosphor bronze. All three materials ran without seizure, while the performance of the particulate composite and the leaded phosphor bronze improved with running. Temperature rise in the journal bearing under mixed/boundary lubrication was also measured. It was found that with 0.3D/1000 to 1.5D/1000 clearance, a low lubrication rate (a typical value for a bearing of diameter 35 mm × length 35 mm is 80 mm³/min) and a PV value of 73 × 10⁶ N m⁻² · m min⁻¹, graphitic aluminium alloy journal bearings operate satisfactorily without seizure or excessive temperature rise. In comparison, the bronze bearings, with all other parameters remaining the same, could not run without excessive temperature rise at clearances below D/1000 and lubrication rates lower than 200 mm³/min.
Abstract:
A computationally efficient agglomerative clustering algorithm based on multilevel theory is presented. Here, the data set is divided randomly into a number of partitions. The samples of each partition are clustered separately using a hierarchical agglomerative clustering algorithm to form sub-clusters. These sub-clusters are merged at higher levels to obtain the final classification. This algorithm leads to the same classification as the hierarchical agglomerative clustering algorithm when the clusters are well separated. Its advantages are short run time and small storage requirements. It is observed that the savings in storage space and computation time increase nonlinearly with the sample size.
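A compact sketch of the multilevel idea with assumed details (the linkage method and the sub-cluster and final cluster counts are illustrative): cluster random partitions separately, then merge the resulting sub-clusters at the top level through their centroids:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def multilevel_cluster(X, n_parts=4, sub_k=8, final_k=3, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # random division into partitions
    centroids, members = [], []
    for part in np.array_split(idx, n_parts):
        Z = linkage(X[part], method="average")
        labels = fcluster(Z, t=sub_k, criterion="maxclust")
        for c in np.unique(labels):        # summarize each sub-cluster
            pts = part[labels == c]
            centroids.append(X[pts].mean(axis=0))
            members.append(pts)
    # Merge the sub-clusters at the higher level via their centroids.
    Z = linkage(np.vstack(centroids), method="average")
    top = fcluster(Z, t=final_k, criterion="maxclust")
    final = np.empty(len(X), dtype=int)
    for pts, label in zip(members, top):
        final[pts] = label
    return final

X = np.random.default_rng(1).normal(size=(200, 2))
print(np.bincount(multilevel_cluster(X))[1:])   # sizes of the final clusters
```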
Abstract:
A numerical scheme is presented for accurate simulation of fluid flow using the lattice Boltzmann equation (LBE) on an unstructured mesh. A finite volume approach is adopted to discretize the LBE on a cell-centered, arbitrarily shaped, triangular tessellation. The formulation includes a formal, second-order discretization using a Total Variation Diminishing (TVD) scheme for the terms representing advection of the distribution function in physical space, due to microscopic particle motion. The advantage of the LBE approach is exploited by implementing the scheme in a new computer code to run on a parallel computing system. Performance of the new formulation is systematically investigated by simulating four benchmark flows of increasing complexity, namely (1) flow in a plane channel, (2) unsteady Couette flow, (3) flow caused by a moving lid over a 2D square cavity and (4) flow over a circular cylinder. For each of these flows, the present scheme is validated against results from Navier-Stokes computations as well as lattice Boltzmann simulations on a regular mesh. It is shown that the scheme is robust and accurate for the different test problems studied.
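For orientation, the regular-mesh lattice Boltzmann baseline that the unstructured scheme is validated against fits in a few lines; this is the standard D2Q9 BGK stream-and-collide update with periodic boundaries, not the finite-volume TVD formulation of the paper:

```python
import numpy as np

# D2Q9 lattice velocities and weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    cu = np.einsum("qd,xyd->xyq", c, u)
    usq = np.einsum("xyd,xyd->xy", u, u)[..., None]
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def step(f, tau=0.6):
    rho = f.sum(axis=-1)
    u = np.einsum("xyq,qd->xyd", f, c) / rho[..., None]
    f = f + (equilibrium(rho, u) - f) / tau      # BGK collision
    for q in range(9):                           # streaming (periodic)
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    return f

# Uniform fluid at rest on a 64 x 64 periodic lattice.
f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))
for _ in range(100):
    f = step(f)
```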
Abstract:
This paper presents the results of the rise time calculation of a SAW resonator. The total rise time is given by

total rise time = [(rise time of cavity)² + (rise time of reflectors)² + (rise time of IDT)²]^(1/2).

These rise times are calculated in terms of the effective length of the cavity, the characteristics of the reflector, and the number of finger pairs in the IDT. The rise time of a 38 MHz one-port resonator on Y-Z LiNbO3 calculated using this approach is found to be in good agreement with experimental results.
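A worked instance of the root-sum-square combination above, with assumed (not measured) component rise times:

```python
# Hypothetical component rise times, in microseconds.
t_cavity, t_reflectors, t_idt = 2.0, 1.0, 0.5
t_total = (t_cavity**2 + t_reflectors**2 + t_idt**2) ** 0.5
print(f"total rise time = {t_total:.2f} us")   # ~2.29 us
```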