989 results for Memory -- Testing
Abstract:
A new six-component accelerometer force balance has been developed and used in the HST2 shock tunnel of the Indian Institute of Science. Aerodynamic forces and moments for a hypersonic slender body, measured with this balance system at a free-stream Mach number of 5.75, a Reynolds number of 1.5 million, and stagnation enthalpies of 1.5 and 2 MJ/kg, are presented. The measured values compare well with theoretical values estimated using modified Newtonian theory.
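For orientation, the modified Newtonian estimate referred to above is the classical relation Cp = Cp,max sin^2(theta), with Cp,max set by the Rayleigh Pitot formula. The minimal Python sketch below illustrates that textbook relation only; the perfect-gas gamma = 1.4, the function names, and the 10-degree example are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

GAMMA = 1.4  # ratio of specific heats, assuming a perfect gas

def cp_max(mach):
    """Stagnation-point pressure coefficient behind a normal shock
    (Rayleigh Pitot formula), the scaling used in modified Newtonian
    theory."""
    g = GAMMA
    # Pitot-to-freestream pressure ratio across a normal shock
    p02_pinf = (((g + 1) ** 2 * mach ** 2
                 / (4 * g * mach ** 2 - 2 * (g - 1))) ** (g / (g - 1))
                * (1 - g + 2 * g * mach ** 2) / (g + 1))
    return 2.0 * (p02_pinf - 1.0) / (g * mach ** 2)

def cp_modified_newtonian(theta_deg, mach=5.75):
    """Surface pressure coefficient Cp = Cp_max * sin^2(theta), where
    theta is the local surface inclination to the free stream."""
    theta = np.radians(theta_deg)
    return cp_max(mach) * np.sin(theta) ** 2

# Illustrative example: a surface element inclined 10 degrees at Mach 5.75
print(cp_modified_newtonian(10.0))
```

Integrating such Cp values over the body surface gives the theoretical force and moment estimates against which balance measurements of this kind are compared.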
Abstract:
We address the problem of recognition and retrieval of relatively weak industrial signals, such as partial discharges (PD), buried in excessive noise. The major bottleneck is the recognition and suppression of stochastic pulsive interference (PI), which has time-frequency characteristics similar to those of the PD pulse; conventional frequency-based DSP techniques are therefore not useful for retrieving PD pulses. We employ statistical signal modeling based on a combination of a long-memory process and probabilistic principal component analysis (PPCA). A parametric analysis of the signal is carried out to extract the features of the desired pulses. We incorporate a wavelet-based bootstrap method for obtaining noise training vectors from the observed data. The procedure adopted in this work differs from that reported in the literature, which is generally based on the desired signal frequency and the noise frequency.
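The abstract does not spell out how the PPCA component is fitted. As a hedged sketch of that half alone, the closed-form maximum-likelihood PPCA solution of Tipping and Bishop can be written as follows; the function name, data shapes, and the random example are illustrative, not from the paper.

```python
import numpy as np

def ppca_fit(X, q):
    """Maximum-likelihood PPCA (Tipping & Bishop): returns the loading
    matrix W and the isotropic noise variance sigma2 for a data matrix
    X of shape (n_samples, d), retaining q principal components."""
    Xc = X - X.mean(axis=0)
    S = np.cov(Xc, rowvar=False)                 # sample covariance
    evals, evecs = np.linalg.eigh(S)             # ascending order
    evals, evecs = evals[::-1], evecs[:, ::-1]   # sort descending
    sigma2 = evals[q:].mean()                    # noise = mean discarded eigenvalue
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2

# Illustrative use: model a low-dimensional pulse subspace plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 16))   # stand-in for windowed signal vectors
W, sigma2 = ppca_fit(X, q=3)
```

In a scheme of this kind, vectors whose residual outside the fitted subspace is consistent with sigma2 are treated as noise or interference, while the retained subspace captures the desired pulse features.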
Abstract:
We study the distribution of the first passage time for Lévy-type anomalous diffusion. A fractional Fokker-Planck equation framework is introduced. For the zero-drift case, an explicit analytic solution for the first passage time density function is obtained using fractional calculus, in terms of Fox H-functions, and the asymptotic behaviour of the density function is discussed. For the nonzero-drift case, we obtain an expression for the Laplace transform of the first passage time density function, from which the mean first passage time and the variance are derived.
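The abstract does not display the governing equation. For orientation, one standard space-fractional Fokker-Planck form for Lévy flights (following the common Metzler-Klafter conventions, which may differ from the paper's exact setup) reads:

```latex
% One standard space-fractional Fokker-Planck equation for Levy flights
% (illustrative conventions; the paper's may differ):
\frac{\partial W(x,t)}{\partial t}
  = \left[ -\,v\,\frac{\partial}{\partial x}
           + K_\alpha \frac{\partial^\alpha}{\partial |x|^\alpha} \right] W(x,t),
\qquad 1 < \alpha \le 2 .
```

Here \(\partial^\alpha/\partial|x|^\alpha\) denotes the Riesz fractional derivative, \(v\) the drift, and \(K_\alpha\) the anomalous diffusion coefficient; the zero-drift case of the abstract corresponds to \(v = 0\), and \(\alpha = 2\) recovers ordinary diffusion.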
Abstract:
Investigations of the switching behaviour of arsenic-tellurium glasses with Ge or Al additives yield interesting information about the dependence of switching on network rigidity, the coordination of the constituents, the glass transition and ambient temperatures, and the glass-forming ability.
Abstract:
Today's SoCs are complex designs with multiple embedded processors, memory subsystems, and application-specific peripherals. The memory architecture of embedded SoCs strongly influences the power and performance of the entire system, and the memory subsystem constitutes a major part (typically up to 70%) of the silicon area of a current-day SoC. In this article, we address on-chip memory architecture exploration for DSP processors organized as multiple memory banks, where banks can be single- or dual-ported with non-uniform bank sizes. We propose two different methods for physical memory architecture exploration and identify the strengths and applicability of each in a systematic way. Both methods address memory architecture exploration for a given target application by considering the application's data-access characteristics, and both generate a set of Pareto-optimal design points that are interesting from a power, performance, and VLSI-area perspective. To the best of our knowledge, this is the first comprehensive work on memory-space exploration at the physical memory level that integrates data layout and memory exploration to address the system objectives from both the hardware-design and the application-software-development perspectives. Further, we propose an automatic framework that explores the design space, identifying hundreds of Pareto-optimal design points within a few hours on a standard desktop configuration.
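The abstract does not describe how the Pareto-optimal set is extracted from the candidate architectures. As a generic hedged sketch of the final filtering step, non-dominated selection over (power, latency, area) tuples, all to be minimized, can be done as follows; the type names and candidate values are illustrative, not the paper's framework.

```python
from typing import List, Tuple

Design = Tuple[float, float, float]  # (power, latency, area), all minimized

def dominates(a: Design, b: Design) -> bool:
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(designs: List[Design]) -> List[Design]:
    """Return the non-dominated subset of the candidate design points."""
    return [d for d in designs
            if not any(dominates(o, d) for o in designs if o != d)]

# Illustrative example: three candidate memory architectures
candidates = [(1.0, 2.0, 3.0), (0.9, 2.5, 3.1), (1.1, 2.1, 3.2)]
print(pareto_front(candidates))  # the third point is dominated by the first
```

An exploration framework of the kind described would enumerate or search candidate bank configurations, evaluate each against the application's data-access profile, and apply exactly this kind of filter to report only the non-dominated points.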
Abstract:
Composition-dependent investigations of bulk GeTe chalcogenide alloys with different added selenium concentrations are carried out by X-ray diffraction (XRD), Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), electron probe micro-analysis (EPMA), and differential scanning calorimetry (DSC). The measurements reveal that GeTe crystals are predominant in alloys up to 0.20 at.% Se content, indicating interstitial occupancy of Se in the Ge vacancies. Raman modes of the GeTe alloys change to GeSe modes with the addition of Se. Amorphousness in the alloys increases with increasing Se, and the 0.50 at.% Se alloy forms a homogeneous amorphous phase with a mixture of Ge-Se and Te-Se bonds. The structural changes are explained with the help of the bond theory of solids. The crystallization temperature is found to increase with increasing Se, which improves the stability of the amorphous phase. For the optimum 0.50 at.% Se alloy, the melting temperature is reduced, which will lower the RESET current requirement for phase-change memory applications.
Abstract:
Prediction of the Sun's magnetic activity is important because of its effect on the space environment and climate. However, recent efforts to predict the amplitude of the solar cycle have resulted in diverging forecasts with no consensus. Yeates et al. have shown that the dynamical memory of the solar dynamo mechanism governs predictability, and that this memory differs between advection- and diffusion-dominated solar convection zones. By utilizing stochastically forced, kinematic dynamo simulations, we demonstrate that the inclusion of downward turbulent pumping of magnetic flux reduces the memory of both advection- and diffusion-dominated solar dynamos to only one cycle; stronger pumping degrades this memory further. Thus, our results reconcile the diverging dynamo-model-based forecasts for the amplitude of solar cycle 24. We conclude that reliable predictions for the maximum of solar activity can be made only at the preceding minimum, allowing about five years of advance planning for space weather. For more accurate predictions, sequential data assimilation would be necessary in forecasting models to account for the Sun's short memory.
Abstract:
We present external-memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates considered in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions.
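The I/O-efficient structures themselves are beyond a short sketch, but the range-COUNT query semantics can be illustrated by a simple in-memory 2D analogue: inclusion-exclusion over a prefix-sum grid. This is purely illustrative of what the query computes; the paper's structures are external-memory and operate on arbitrary point sets, not fixed grids.

```python
import numpy as np

class GridRangeCount:
    """In-memory 2D range-COUNT over a fixed grid, answered by
    inclusion-exclusion on a 2D prefix-sum table; an illustrative
    analogue, not the paper's external-memory structure."""

    def __init__(self, counts: np.ndarray):
        # prefix[i, j] = number of points with row < i and column < j
        self.prefix = np.zeros((counts.shape[0] + 1, counts.shape[1] + 1),
                               dtype=np.int64)
        self.prefix[1:, 1:] = counts.cumsum(axis=0).cumsum(axis=1)

    def count(self, r0: int, c0: int, r1: int, c1: int) -> int:
        """Points in the half-open query rectangle [r0, r1) x [c0, c1)."""
        p = self.prefix
        return int(p[r1, c1] - p[r0, c1] - p[r1, c0] + p[r0, c0])

# Illustrative example: 3 coincident points in a 4x4 grid cell
counts = np.zeros((4, 4), dtype=np.int64)
counts[1, 2] = 3
g = GridRangeCount(counts)
print(g.count(0, 0, 4, 4))  # 3
```

The external-memory versions achieve the stated O(log_B N) I/O bounds by laying such aggregate information out in B-sized blocks so that each query touches only a root-to-leaf path of blocks.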
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation, at constant stress, fail at inordinately different times, even under laboratory test conditions. To be able to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress is assumed to be based on a temperature-dependent inverse power law.
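The abstract names the statistical treatment only in outline. A hedged sketch of one common version of this workflow is shown below: a two-parameter Weibull fit of times-to-failure at each stress level, followed by an inverse power law L = k * E^(-n) fitted to the characteristic lives across stress levels. All numbers, names, and the choice of a two-parameter fit are assumptions for illustration, not the paper's data or exact method.

```python
import numpy as np
from scipy import stats

# Illustrative times-to-failure (hours) at two constant accelerated
# electric stresses (kV/mm); values are made up for this sketch.
failures = {30.0: [120.0, 210.0, 340.0, 500.0],
            40.0: [15.0, 28.0, 45.0, 70.0]}

# Two-parameter Weibull fit per stress level (location fixed at 0);
# the fitted scale parameter is the characteristic life (63.2% point).
char_life = {}
for stress, times in failures.items():
    shape, loc, scale = stats.weibull_min.fit(times, floc=0.0)
    char_life[stress] = scale

# Inverse power law L = k * E**(-n): a straight line in log-log space.
E = np.array(sorted(char_life))
L = np.array([char_life[e] for e in E])
slope, log_k = np.polyfit(np.log(E), np.log(L), 1)
print(f"life exponent n = {-slope:.2f}, k = {np.exp(log_k):.3g}")
```

The fitted exponent n then extrapolates accelerated-stress lives down to the service stress, which is the point of running such ageing experiments.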
Abstract:
We consider a visual search problem, studied by Sripati and Olson, in which the objective is to identify an oddball image embedded among multiple distractor images as quickly as possible. We model this visual search task as an active sequential hypothesis testing problem (ASHT problem). Chernoff (1959) proposed a policy for which the expected delay to decision is asymptotically optimal, the asymptotics being as the error probabilities vanish. We first prove a stronger property on the moments of the delay until a decision, under the same asymptotics. Applying the result to the visual search problem, we then propose a "neuronal metric" on the measured neuronal responses that captures the discriminability between images. From an empirical study we obtain a remarkable correlation (r = 0.90) between the proposed neuronal metric and the speed of discrimination between the images. Although this correlation is lower than that obtained with the L1 metric used by Sripati and Olson, the proposed metric has the advantage of being firmly grounded in formal decision theory.
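The abstract does not define the metric itself. As a hedged illustration of the flavour of a decision-theoretic discriminability index, the sketch below computes a symmetrized Kullback-Leibler divergence between two vectors of mean Poisson firing rates; whether this matches the paper's metric is an assumption, and the rate values are invented for the example.

```python
import numpy as np

def poisson_kl(lam, mu):
    """KL divergence between independent Poisson populations with
    positive mean-rate vectors lam and mu."""
    lam, mu = np.asarray(lam, float), np.asarray(mu, float)
    return float(np.sum(lam * np.log(lam / mu) - lam + mu))

def discriminability(lam, mu):
    """Symmetrized divergence as a candidate 'neuronal metric':
    larger values should correspond to faster discrimination."""
    return 0.5 * (poisson_kl(lam, mu) + poisson_kl(mu, lam))

# Hypothetical example: responses of the same three neurons to two images
rates_a = [12.0, 3.5, 8.0]   # mean firing rates to image A (spikes/s)
rates_b = [4.0, 6.0, 9.5]    # mean firing rates to image B (spikes/s)
print(discriminability(rates_a, rates_b))
```

In sequential testing, divergences of this kind set the asymptotic rate at which evidence accumulates, which is why such a metric can predict decision speed.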
Abstract:
Fast content-addressable data access mechanisms have compelling applications in today's systems. Many of these exploit the powerful wildcard-matching capabilities provided by ternary content addressable memories (TCAMs). For example, TCAM-based implementations of important data-mining algorithms have been developed in recent years; these achieve an order-of-magnitude speedup over prevalent techniques. However, large hardware TCAMs are still prohibitively expensive in terms of power consumption and cost per bit, which has been a barrier to extending their use beyond niche and special-purpose systems. We propose an approach to overcome this barrier by extending the traditional virtual memory hierarchy to scale up the user-visible capacity of TCAMs while mitigating the power-consumption overhead. By exploiting the notion of content locality (as opposed to spatial locality), we devise a novel combination of software and hardware techniques to provide the abstraction of a large virtual ternary content-addressable space. In the long run, such abstractions enable applications to disassociate considerations of spatial locality and contiguity from the way data is referenced. If successful, ideas for making content addressability a first-class abstraction in computing systems could open up a radical shift in the way applications are optimized for memory locality, just as storage-class memories are soon expected to shift applications away from the way they are typically optimized for disk-access locality.
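To make the wildcard matching that TCAMs provide concrete, here is a hedged software sketch of ternary matching with value/mask pairs. It is a functional model only: a real TCAM performs this comparison across all entries in parallel in hardware, and the class and rule names are invented for the example.

```python
from typing import List, Optional, Tuple

class SoftTcam:
    """Functional model of a ternary CAM: each entry stores (value, mask);
    mask bits set to 1 must match the key, 0 bits are wildcards ('X').
    Entries are searched in priority (insertion) order."""

    def __init__(self, width: int):
        self.width = width
        self.entries: List[Tuple[int, int, object]] = []

    def add(self, pattern: str, data: object) -> None:
        """pattern is a string over {'0', '1', 'X'}, e.g. '10XX'."""
        assert len(pattern) == self.width
        value = int(pattern.replace("X", "0"), 2)
        mask = int("".join("0" if c == "X" else "1" for c in pattern), 2)
        self.entries.append((value, mask, data))

    def lookup(self, key: int) -> Optional[object]:
        for value, mask, data in self.entries:
            if key & mask == value & mask:
                return data  # first (highest-priority) match wins
        return None

tcam = SoftTcam(4)
tcam.add("10XX", "rule-A")
tcam.add("1XXX", "rule-B")
print(tcam.lookup(0b1011))  # 'rule-A': the more specific rule was inserted first
```

A virtualized TCAM of the kind proposed would keep the hot subset of such entries in hardware and page the rest in and out based on content locality, much as a conventional virtual memory hierarchy pages data based on spatial locality.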