957 results for Test Set
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a serious threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating, and timing window derating, and/or by exploiting application redundancy, e.g., redundancy in the firmware/software executing on the hardened hardware. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed capture methodology. Experimental results show up to a 23% reduction in hardware overhead when considering individual and combined derating factors.
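As a rough illustration of how such derating factors compound, the sketch below discounts a raw per-node soft error rate by logical, electrical, and timing-window derating probabilities; all node names and numbers are invented for illustration and are not taken from the paper.

# Sketch: combining derating factors to estimate an effective soft error rate (SER).
# The per-node raw SER and derating probabilities below are illustrative values only.

nodes = {
    # node: (raw_ser_FIT, logical_derating, electrical_derating, timing_derating)
    "n1": (0.10, 0.60, 0.80, 0.50),
    "n2": (0.08, 0.30, 0.90, 0.40),
    "n3": (0.12, 0.90, 0.70, 0.60),
}

def effective_ser(raw, logical, electrical, timing):
    # A particle strike is observed only if it propagates logically, has
    # sufficient electrical strength, and lands inside the latching window.
    return raw * logical * electrical * timing

raw_total = sum(v[0] for v in nodes.values())
eff_total = sum(effective_ser(*v) for v in nodes.values())
print(f"raw SER: {raw_total:.3f} FIT, derated SER: {eff_total:.3f} FIT")
print("only nodes whose derated SER exceeds the target budget need hardening")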
Abstract:
In this article, we investigate the performance of a volume integral equation code on the BlueGene/L system. The volume integral equation (VIE) is solved for homogeneous and inhomogeneous dielectric objects for radar cross section (RCS) calculation in a highly parallel environment. Pulse basis functions and the point matching technique are used to convert the volume integral equation into a set of simultaneous linear equations, which is solved using the parallel numerical library ScaLAPACK on IBM's distributed-memory supercomputer BlueGene/L with different numbers of processors to compare the speed-up and test the scalability of the code.
Abstract:
This paper considers the problem of weak signal detection in the presence of navigation data bits for Global Navigation Satellite System (GNSS) receivers. Typically, a set of partial coherent integration outputs is non-coherently accumulated to combat the effects of model uncertainties such as the presence of navigation data bits and/or frequency uncertainty, resulting in a sub-optimal test statistic. In this work, the test statistic for weak signal detection in the presence of navigation data bits is derived from the likelihood ratio. It is highlighted that averaging the likelihood ratio based test statistic over the prior distributions of the unknown data bits and the carrier phase uncertainty leads to the conventional Post Detection Integration (PDI) technique for detection. To improve performance in the presence of model uncertainties, a novel cyclostationarity-based sub-optimal PDI technique is proposed. The test statistic is analytically characterized and shown to be robust to navigation data bits and to frequency, phase, and noise uncertainties. Monte Carlo simulation results illustrate the validity of the theoretical results and the superior performance of the proposed detector in the presence of model uncertainties.
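A minimal sketch of the conventional PDI statistic the abstract refers to, assuming illustrative block sizes and SNR: partial coherent sums are magnitude-squared before accumulation, so unknown data-bit sign flips and the carrier phase cannot cancel the signal.

# Conventional non-coherent Post Detection Integration (PDI) sketch.
# All parameters (block length, block count, SNR) are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N_coh, N_blocks, snr_lin = 200, 20, 0.05   # samples/block, blocks, linear SNR

bits = rng.choice([-1.0, 1.0], size=N_blocks)   # unknown navigation data bits
phase = rng.uniform(0, 2 * np.pi)               # unknown carrier phase
signal = np.sqrt(snr_lin) * np.exp(1j * phase)

T = 0.0
for b in bits:
    noise = (rng.standard_normal(N_coh) + 1j * rng.standard_normal(N_coh)) / np.sqrt(2)
    x = b * signal + noise
    y = x.sum() / np.sqrt(N_coh)   # partial coherent integration over one block
    T += np.abs(y) ** 2            # non-coherent accumulation (PDI statistic)

print(f"PDI test statistic: {T:.2f} (compared against a false-alarm-rate threshold)")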
Abstract:
Analysis of high-resolution satellite images has been an important research topic for urban analysis, and one of the important tasks in urban analysis is automatic road network extraction. Two approaches for road extraction, based on the Level Set and Mean Shift methods, are proposed. Extracting roads directly from the original image is difficult and computationally expensive due to the presence of other road-like features with straight edges. The image is therefore preprocessed to improve tolerance by reducing noise (buildings, parking lots, vegetation regions, and other open spaces): roads are first extracted as elongated regions, and non-linear noise segments are removed using a median filter (based on the fact that road networks consist of a large number of small linear structures). Road extraction is then performed using the Level Set and Mean Shift methods. Finally, the accuracy of the extracted road images is evaluated using quality measures. 1 m resolution IKONOS data were used for the experiment.
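The preprocessing step can be illustrated with a toy example: a median filter removes small isolated (non-linear) noise segments while an elongated road-like structure survives. The synthetic image and filter size below are illustrative only.

# Median filtering as road-extraction preprocessing (toy illustration).
import numpy as np
from scipy.ndimage import median_filter

img = np.zeros((100, 100))
img[48:52, :] = 1.0                          # elongated road-like stripe
rng = np.random.default_rng(1)
ys, xs = rng.integers(0, 100, 40), rng.integers(0, 100, 40)
img[ys, xs] = 1.0                            # isolated noise pixels (buildings etc.)

filtered = median_filter(img, size=5)        # small blobs vanish, the stripe stays
print("foreground pixels before:", int(img.sum()))
print("foreground pixels after median filter:", int(filtered.sum()))
print("road pixels kept:", int(filtered[48:52, :].sum()))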
Abstract:
Subsurface lithology and seismic site classification of the Lucknow urban center, located in the central part of the Indo-Gangetic Basin (IGB), are presented based on detailed shallow subsurface investigations and borehole analysis. These were carried out through 47 seismic surface wave tests using multichannel analysis of surface waves (MASW) and 23 boreholes drilled up to 30 m with standard penetration test (SPT) N values. Subsurface lithology profiles drawn from the drilled boreholes show low- to medium-compressibility clay and silty to poorly graded sand down to a depth of 30 m. In addition, deeper borehole records (depth > 150 m) were collected from the Lucknow Jal Nigam (Water Corporation), Government of Uttar Pradesh, to understand the deeper subsoil stratification. These reports show the presence of clay mixed with sand and Kankar at some locations down to a depth of 150 m, followed by layers of sand, clay, and Kankar up to 400 m. Based on the available details, shallow and deep cross-sections through Lucknow are presented. Shear wave velocity (SWV) and N-SPT values were measured for the study area using MASW and SPT testing, and measured SWV and N-SPT values at the same locations were found to be comparable. These values were used to estimate 30 m average values of N-SPT (N-30) and SWV (V-s(30)) for seismic site classification of the study area as per the National Earthquake Hazards Reduction Program (NEHRP) soil classification system. Based on the NEHRP classification, the study area falls into site classes C and D based on V-s(30), and site classes D and E based on N-30. The possibility of larger amplification during future seismic events is highlighted for the major part of the study area that falls under site classes D and E. Moreover, the mismatch between site classes based on N-30 and V-s(30) raises the question of the suitability of the NEHRP classification system for the study region. Further, 17 sets of SPT and SWV data are used to develop a correlation between N-SPT and SWV. This represents a first attempt at seismic site classification and correlation between N-SPT and SWV in the Indo-Gangetic Basin.
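For reference, a minimal sketch of the V-s(30) harmonic average and the corresponding NEHRP site-class lookup; the layer thicknesses and velocities below are invented for illustration.

# Vs30 computation and NEHRP site-class lookup (illustrative layer data).
def vs30(thicknesses_m, velocities_mps):
    """Vs30 = 30 / sum(d_i / v_i) over the top 30 m."""
    assert abs(sum(thicknesses_m) - 30.0) < 1e-9
    return 30.0 / sum(d / v for d, v in zip(thicknesses_m, velocities_mps))

def nehrp_class(v):
    # NEHRP site classes by Vs30 (m/s)
    if v > 1500: return "A"
    if v > 760:  return "B"
    if v > 360:  return "C"
    if v > 180:  return "D"
    return "E"

layers_d = [2.0, 8.0, 20.0]       # layer thicknesses summing to 30 m (illustrative)
layers_v = [150.0, 220.0, 400.0]  # shear wave velocities from MASW (illustrative)

v30 = vs30(layers_d, layers_v)
print(f"Vs30 = {v30:.0f} m/s -> NEHRP site class {nehrp_class(v30)}")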
Abstract:
Burn-in testing of power converters consumes a large amount of energy, which increases the cost of testing and certification in medium- and high-power applications. A simple test configuration for burn-in testing of a PWM rectifier induction motor drive, using a Doubly Fed Induction Machine (DFIM) to circulate power back to the grid, is presented. The configuration uses only one power electronic converter, namely the converter under test. The test method ensures soft synchronization of the DFIM and the Squirrel Cage Induction Machine (SCIM), and a simple volt-per-hertz control of the drive is sufficient for conducting the test. To synchronize the DFIM with the SCIM, the rotor terminal voltage of the DFIM is measured and used as an indication of the speed mismatch between the two machines; synchronization is performed when the DFIM rotor voltage is at its minimum. Analysis of the DFIM characteristics confirms that such a test can be performed effectively, with smooth start-up and loading of the test set-up. After synchronization, the speed command to the SCIM is changed to load the set-up in the motoring or regenerative mode of operation. Experimental results are presented that validate the proposed test method.
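A minimal sketch of the synchronization criterion, under the simplifying assumption that the rotor voltage magnitude is proportional to slip and that slip decays as the drive ramps up; all constants are illustrative, not measured values from the paper.

# Triggering synchronization at the DFIM rotor-voltage minimum (toy model).
import numpy as np

t = np.linspace(0.0, 5.0, 5001)
slip = 0.2 * np.exp(-t)                 # slip decaying as the drive ramps (illustrative)
v_rotor = np.abs(slip) * 400.0          # rotor voltage ~ slip * stator voltage

threshold = 5.0                         # volts; synchronize when voltage is minimal
sync_idx = np.argmax(v_rotor < threshold)
print(f"close synchronizing contactor at t = {t[sync_idx]:.2f} s "
      f"(rotor voltage {v_rotor[sync_idx]:.1f} V)")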
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G(f)) in order to obtain a size-independent value (G(F)). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G(f) measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G(F) is very nearly the same and independent of the size of the specimen. In this paper, we provide further evidence in support of this important conclusion using extensive independent test results from three different concrete mixes ranging in compressive strength from 57 to 122 MPa.
Abstract:
We present an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and carbon isotopic ratio of air-CO2. The set-up includes traps for collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual inlet peripheral of a high sensitivity isotope ratio mass spectrometer (IRMS) MAT 253. The internal reproducibility (precision) for the delta C-13 measurement, established based on repeat analyses of CO2, is +/- 0.03 parts per thousand. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, 'OASIS AIRMIX', is prepared by mixing CO2 from a high purity cylinder with O-2 and N-2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are +/- 7 mu mol.mol(-1) (n = 169) and +/- 0.05 parts per thousand (n = 169), respectively, based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction due to the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a +0.2 parts per thousand correction in the delta C-13 value is determined for an N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratios and CO2 mixing ratios in air samples collected during the event differ from those of neighbouring samples, suggesting the role of atmospheric inversion in trapping CO2 emitted from the urban atmosphere during the eclipse.
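Assuming the reported N2O correction scales linearly with the N2O mixing ratio (an assumption made here purely for illustration; the paper determines the correction empirically), it can be applied as in the following sketch.

# N2O isobaric-interference correction on delta C-13 (linear-scaling assumption).
def correct_delta13c(delta_measured, n2o_ppb, ref_shift=0.2, ref_ppb=329.0):
    # +0.2 per mil at the reference N2O level of 329 ppb, per the abstract
    return delta_measured + ref_shift * (n2o_ppb / ref_ppb)

print(correct_delta13c(-8.45, 329.0))   # -> -8.25 per mil at the reference level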
Abstract:
In this paper, we explore fundamental limits on the number of tests required to identify a given number of "healthy" items from a large population containing a small number of "defective" items, in a nonadaptive group testing framework. Specifically, we derive mutual information-based upper bounds on the number of tests required to identify the required number of healthy items. Our results show that an impressive reduction in the number of tests is achievable compared to the conventional approach of using classical group testing to first identify the defective items and then pick the required number of healthy items from the complement set. For example, to identify L healthy items out of a population of N items containing K defective items, when the tests are reliable, our results show that O(K(L - 1)/(N - K)) measurements are sufficient. In contrast, the conventional approach requires O(K log(N/K)) measurements. We derive our results in a general sparse signal setup, and hence they are also applicable to other sparse signal based applications such as compressive sensing.
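A minimal simulation of the healthy-item identification idea, under the simple decoder that certifies every member of a negative pool as healthy; the random pool design below is a heuristic for illustration, not the paper's construction or its optimal test count.

# Nonadaptive group testing for healthy-item identification (toy simulation).
import numpy as np

rng = np.random.default_rng(2)
N, K, L, T = 200, 5, 10, 40          # population, defectives, wanted healthy, max tests
defective = set(rng.choice(N, size=K, replace=False))

pool_size = N // (K + 1)             # heuristic pool size
healthy_found = set()
for _ in range(T):
    pool = rng.choice(N, size=pool_size, replace=False)
    if not (set(pool) & defective):  # negative test: every pooled item is healthy
        healthy_found |= set(pool)
    if len(healthy_found) >= L:
        break

print(f"certified {len(healthy_found)} healthy items (needed {L})")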
Abstract:
The primary objective of the present study is to show that, for the most common configuration of an impactor system, the accelerometer cannot exactly reproduce the dynamic response of a specimen subjected to impact loading. An equivalent Lumped Parameter Model (LPM) of the drop-weight impactor set-up has been formulated for assessing the accuracy of an accelerometer mounted in it for an axially loaded specimen. The specimen under impact loading is represented by a non-linear spring of varying stiffness, while the accelerometer is assumed to behave linearly owing to its high stiffness. Specimens made of steel, aluminium, and fibre-reinforced composite (FRC) are used in the present study. Assuming the force-displacement response obtained in an actual impact test to be the true behaviour of the test specimen, a suitable numerical approach has been used to solve the governing non-linear differential equations of the three degrees-of-freedom (DOF) system in a piecewise linear manner. The numerical solution of the governing differential equations following an explicit time integration scheme yields an excellent reproduction of the mechanical behaviour of the specimen, confirming the accuracy of the numerical approach. However, the spring representing the accelerometer predicts a response that qualitatively matches the assumed force-displacement response of the test specimen but with a perceptibly lower magnitude of load.
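A sketch of the kind of explicit time integration described above, for a chain of three lumped masses with a piecewise-linear specimen spring and a stiff linear accelerometer spring; every parameter value is illustrative rather than taken from the paper's set-up.

# Explicit integration of a 3-DOF lumped parameter impact model (illustrative).
import numpy as np

m = np.array([10.0, 0.01, 0.5])    # kg: impactor, accelerometer, anvil (illustrative)
k_acc = 1e8                        # N/m: stiff linear spring modelling the accelerometer
k_contact = 5e7                    # N/m: unilateral impactor-anvil contact stiffness

def specimen_force(x):
    # piecewise-linear hardening spring standing in for the specimen (illustrative)
    return 2e6 * x if x < 2e-3 else 2e6 * 2e-3 + 8e6 * (x - 2e-3)

dt, steps = 1e-7, 60000            # small step keeps the stiff modes stable
u = np.zeros(3)
v = np.array([4.0, 4.0, 0.0])      # impactor and accelerometer arrive at 4 m/s
peak = 0.0
for _ in range(steps):
    f_acc = k_acc * (u[0] - u[1])               # accelerometer riding on the impactor
    f_con = k_contact * max(u[0] - u[2], 0.0)   # contact acts only in compression
    f_spec = specimen_force(max(u[2], 0.0))     # specimen reaction on the anvil
    f = np.array([-f_acc - f_con, f_acc, f_con - f_spec])
    v += dt * f / m                             # semi-implicit (symplectic) Euler step
    u += dt * v
    peak = max(peak, f_spec)

print(f"peak specimen force: {peak / 1e3:.1f} kN")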
Abstract:
The contour tree is a topological abstraction of a scalar field that captures the evolution of level set connectivity. It is an effective representation for visual exploration and analysis of scientific data. We describe a work-efficient, output-sensitive, and scalable parallel algorithm for computing the contour tree of a scalar field defined on a domain that is represented using either an unstructured mesh or a structured grid. A hybrid implementation of the algorithm using the GPU and a multi-core CPU can compute the contour tree of an input containing 16 million vertices in less than ten seconds, with a speedup factor of up to 13. Experiments based on an implementation in a multi-core CPU environment show near-linear speedup for large data sets.
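For background, the sequential union-find sweep that underlies contour tree algorithms can be sketched as follows: processing vertices in decreasing scalar order produces the join tree (the split tree is symmetric, and the two merge into the contour tree). This is the textbook sequential baseline, not the paper's parallel GPU/CPU algorithm; the grid and field are illustrative.

# Join tree of a scalar field on a grid via a union-find sweep (sequential sketch).
import numpy as np

rng = np.random.default_rng(3)
field = rng.random((32, 32))
H, W = field.shape

def neighbors(i, j):
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < H and 0 <= nj < W:
            yield ni, nj

parent = {}
def find(v):
    while parent[v] != v:
        parent[v] = parent[parent[v]]   # path compression
        v = parent[v]
    return v

order = sorted(((i, j) for i in range(H) for j in range(W)),
               key=lambda v: field[v], reverse=True)
join_edges = []
for v in order:                          # sweep from high to low function value
    parent[v] = v
    for n in neighbors(*v):
        if n in parent and find(n) != find(v):
            join_edges.append((find(n), v))   # component merges at v
            parent[find(n)] = v

print(f"augmented join tree edges: {len(join_edges)} (V - 1 = {H * W - 1})")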
Abstract:
We report on the design, development, and performance study of a packaged piezoelectric thin film impact sensor and its potential application in non-destructive material discrimination. The impact sensing element employed was a thin circular diaphragm of flexible Phynox alloy. A piezoelectric ZnO thin film, serving as the impact sensing layer, was deposited onto the Phynox alloy diaphragm by RF reactive magnetron sputtering. The deposited ZnO thin film was characterized by X-ray diffraction (XRD), Atomic Force Microscopy (AFM), and Scanning Electron Microscopy (SEM) techniques. The d(31) piezoelectric coefficient of the ZnO thin film, measured by the 4-point bending method, was 4.7 pm V-1. The ZnO film-coated diaphragm-based sensing element was packaged in a suitable housing made of High Density Polyethylene (HDPE). The packaged impact sensor was used in an experimental set-up designed and developed in-house for non-destructive material discrimination studies, with materials of different densities (iron, glass, wood, and plastic) used as test specimens. Analysis of the output voltage waveforms reveals valuable information about the impacted material: the sensor was able to discriminate the test materials on the basis of the difference in their densities. The output response of the packaged impact sensor shows high linearity and repeatability, and the sensor is highly sensitive, reliable, and cost-effective.
Abstract:
In this paper, we consider the problem of finding a spectrum hole of a specified bandwidth in a given wide band of interest. We propose a new, simple, and easily implementable sub-Nyquist sampling scheme for signal acquisition and a spectrum hole search algorithm that exploits sparsity in the primary spectral occupancy in the frequency domain by testing a group of adjacent subbands in a single test. The sampling scheme deliberately introduces aliasing during signal acquisition, resulting in a signal that is the sum of signals from adjacent subbands. Energy-based hypothesis tests provide an occupancy decision over the group of subbands, and this forms the basis of the proposed algorithm to find contiguous spectrum holes. We extend this framework to a multi-stage sensing algorithm that can be employed in a variety of spectrum sensing scenarios, including non-contiguous spectrum hole search. Further, we provide the analytical means to optimize the hypothesis tests with respect to the detection thresholds, number of samples, and group size in order to minimize the detection delay under a given error rate constraint. Depending on the sparsity and SNR, the proposed algorithms can lead to significantly lower detection delays compared to a conventional bin-by-bin energy detection scheme; the latter is in fact a special case of the group test with the group size set to 1. We validate our analytical results via Monte Carlo simulations.
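A toy simulation of the group-then-refine idea: one energy test covers a whole group of (aliased) subbands, and only flagged groups are examined bin-by-bin. The thresholds, sample counts, and group size below are illustrative, not the optimized values derived in the paper.

# Group energy detection for spectrum hole search (toy simulation).
import numpy as np

rng = np.random.default_rng(4)
n_subbands, group_size, n_samples, snr = 64, 8, 128, 2.0
occupied = np.zeros(n_subbands, bool)
occupied[rng.choice(n_subbands, size=4, replace=False)] = True   # sparse occupancy

def energy(band_occupied):
    # average energy of a noisy observation, with extra power if occupied
    x = rng.standard_normal(n_samples)
    if band_occupied:
        x += np.sqrt(snr) * rng.standard_normal(n_samples)
    return np.mean(x ** 2)

tests, holes = 0, []
for g in range(n_subbands // group_size):
    idx = np.arange(g * group_size, (g + 1) * group_size)
    tests += 1
    # One group-level test: the aliased signal carries energy if any subband is busy.
    if energy(occupied[idx].any()) < 1.3:        # group-level threshold
        holes.extend(idx)                        # whole group is a spectrum hole
    else:
        for i in idx:                            # refine: bin-by-bin energy tests
            tests += 1
            if energy(occupied[i]) < 1.3:
                holes.append(i)

print(f"{tests} tests vs {n_subbands} for bin-by-bin; found {len(holes)} free subbands")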
Abstract:
Detection of explosives, especially trinitrotoluene (TNT), is of utmost importance due to its highly explosive nature and environmental hazard, and has therefore been a matter of great concern to the scientific community worldwide. Herein, a new aggregation-induced phosphorescent emission (AIPE)-active iridium(III) bis(2-(2,4-difluorophenyl)pyridinato-N,C2')(2-(2-pyridyl)benzimidazolato-N,N') complex [FIrPyBiz] has been developed and serves as a molecular probe for the detection of TNT in the vapor phase, solid phase, and aqueous media. In addition, phosphorescent test strips have been constructed by impregnating Whatman filter paper with aggregates of FIrPyBiz for trace detection of TNT in contact mode, with detection limits in nanograms, taking advantage of the excited state interaction of the AIPE-active phosphorescent iridium(III) complex with TNT and the associated photophysical properties.
Abstract:
A scheme for built-in self-test of analog signals with minimal area overhead, measuring on-chip voltages in an all-digital manner, is presented. The method is well suited to a distributed architecture, where the routing of analog signals over long paths is minimized. A clock is routed serially to sampling heads placed at the nodes of the analog test voltages. The sampling head at each test node, which consists of a pair of delay cells and a pair of flip-flops, locally converts the test voltage to a skew between a pair of subsampled signals, giving rise to as many subsampled signal pairs as there are nodes. To measure a particular analog voltage, the corresponding subsampled signal pair is fed to a delay measurement unit that measures the skew between the pair. The concept is validated with a test chip designed in a UMC 130-nm CMOS process. Sub-millivolt accuracy for static signals is demonstrated for a measurement time of a few seconds, and an effective number of bits of 5.29 is demonstrated for low-bandwidth signals in the absence of sample-and-hold circuitry.
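A back-of-the-envelope sketch of the measurement principle, assuming a linear voltage-to-delay model for the delay cells and Gaussian jitter on the subsampled edges; all constants are invented for illustration.

# Voltage measurement via delay-cell skew and subsampled averaging (toy model).
import numpy as np

rng = np.random.default_rng(5)
k_delay = 50e-12          # s per volt: assumed delay-cell sensitivity (illustrative)
v_test = 0.63             # analog node voltage to be measured

skew = k_delay * v_test   # voltage encoded as a time skew between the signal pair
jitter = 5e-12            # rms jitter of the subsampled edges (illustrative)

# Delay measurement unit: averaging many noisy skew samples digitizes the skew,
# which is why long measurement times yield sub-millivolt accuracy.
samples = skew + jitter * rng.standard_normal(100_000)
skew_hat = samples.mean()
print(f"estimated voltage: {skew_hat / k_delay * 1000:.2f} mV "
      f"(true {v_test * 1000:.1f} mV)")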