96 results for test adaptation
Abstract:
Symmetry-adapted linear combinations of valence-bond (VB) diagrams are constructed for arbitrary point groups and total spin S using diagrammatic VB methods. VB diagrams are related uniquely to invariant subspaces whose size reflects the number of group elements; their nonorthogonality leads to sparser matrices and is fully incorporated into a binary integer representation. Symmetry-adapted linear combinations of VB diagrams are constructed for the 1764 singlets of a half-filled cube of eight sites, the 2.8 million π-electron singlets of anthracene, and for illustrative S > 0 systems.
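The quoted singlet counts can be cross-checked by elementary dimension counting (number of Sz = 0 determinants minus number of Sz = 1 determinants); a minimal sketch, independent of the authors' diagrammatic VB machinery:

```python
from math import comb

# Half-filled cube: 8 sites, 4 up and 4 down electrons.
n_sz0 = comb(8, 4) * comb(8, 4)   # Sz = 0 determinants: 70 * 70 = 4900
n_sz1 = comb(8, 5) * comb(8, 3)   # Sz = 1 determinants: 56 * 56 = 3136
print(n_sz0 - n_sz1)              # 1764 singlets, matching the abstract

# Same counting for the 14 pi electrons of anthracene on 14 sites.
print(comb(14, 7) ** 2 - comb(14, 8) * comb(14, 6))   # 2,760,615 ~ 2.8 million
```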
Abstract:
The no-hiding theorem says that if any physical process leads to bleaching of quantum information from the original system, then it must reside in the rest of the Universe with no information being hidden in the correlation between these two subsystems. Here, we report an experimental test of the no-hiding theorem with the technique of nuclear magnetic resonance. We use the quantum state randomization of a qubit as one example of the bleaching process and show that the missing information can be fully recovered up to local unitary transformations in the ancilla qubits.
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements on strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
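The role of the dummy independent variable can be illustrated with a toy bootstrap particle filter that assimilates several static measurements one at a time; the stiffness model, noise levels and particle counts below are invented for illustration and are not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical static model: displacement u = F / k for applied load F.
# Stiffness k is identified from noisy readings gathered over several
# tests and sensors (flattened here into one list of 12 measurements).
F, k_true = 10.0, 2.0
measurements = F / k_true + rng.normal(0.0, 0.05, size=12)

n_particles, sigma_meas = 5000, 0.05
particles = rng.uniform(0.5, 5.0, n_particles)    # prior over k

# The "dummy independent variable" is simply the measurement index:
# each reading is assimilated as one pseudo-time step of the filter.
for y in measurements:
    pred = F / particles                               # model prediction
    w = np.exp(-0.5 * ((y - pred) / sigma_meas) ** 2)  # likelihood weights
    w /= w.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w)          # resample
    particles = particles[idx] + rng.normal(0.0, 0.01, n_particles)  # jitter

print(particles.mean())   # posterior mean of k, close to 2.0
```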
Abstract:
Random Access Scan (RAS), which addresses individual flip-flops in a design using a memory-array-like row and column decoder architecture, has recently attracted widespread attention due to its potential for lower test application time, test data volume and test power dissipation when compared to traditional Serial Scan. This is because typically only a very limited number of random "care" bits in a test response need to be modified to create the next test vector. Unlike traditional scan, most flip-flops need not be updated. Test application efficiency can be further improved by organizing the access by word instead of by bit. In this paper we present a new decoder structure that takes advantage of basis vectors and linear algebra to further optimize test application in RAS by performing write operations on multiple bits consecutively. Simulations performed on benchmark circuits show an average 2-3x speedup in test write time compared to conventional RAS.
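How basis vectors over GF(2) let a decoder reach the next test vector in a few multi-bit writes can be sketched as follows; the vector width, the basis and the greedy decomposition are hypothetical illustrations, not the proposed decoder structure:

```python
import numpy as np

def gf2_decompose(diff, basis):
    """Return indices of basis vectors that XOR together to give `diff`.

    Assumes a reduced GF(2) basis: each row's leading 1 occurs in no
    other row, so a single greedy pass suffices.
    """
    d = np.array(diff, dtype=np.uint8)
    selected = []
    for i, row in enumerate(basis):
        b = np.array(row, dtype=np.uint8)
        pivot = int(np.argmax(b))       # position of this row's leading 1
        if b[pivot] and d[pivot]:
            d ^= b
            selected.append(i)
    if d.any():
        raise ValueError("difference not in the span of the basis")
    return selected

# Each selected basis vector corresponds to one consecutive multi-bit write.
basis = [[1, 0, 0, 1, 0, 0],
         [0, 1, 0, 0, 1, 0],
         [0, 0, 1, 0, 0, 1]]
current = np.array([0, 0, 0, 0, 0, 0], dtype=np.uint8)
target  = np.array([1, 1, 0, 1, 1, 0], dtype=np.uint8)

print(gf2_decompose(current ^ target, basis))   # [0, 1]: two writes suffice
```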
Abstract:
In the area of testing communication systems, the interfaces between the systems to be tested and their testers have a great impact on test generation and fault detectability. Several types of such interfaces have been standardized by the International Organization for Standardization (ISO). A general distributed test architecture, containing distributed interfaces, has been presented in the literature for testing distributed systems based on the Open Distributed Processing (ODP) Basic Reference Model (BRM); it is a generalized version of the ISO distributed test architecture. In this paper we study the issue of test selection with respect to such a test architecture. In particular, we consider communication systems that can be modeled by finite state machines with several distributed interfaces, called ports. A test generation method, based on the idea of synchronizable test sequences, is developed for generating test sequences for such finite state machines. Starting from the initial effort by Sarikaya, a certain amount of work has been done on generating test sequences for finite state machines with respect to the ISO distributed test architecture, all based on the idea of modifying existing test generation methods to generate synchronizable test sequences. However, none of these works studies the fault coverage provided by its method. We investigate the issue of fault coverage and point out that the methods given in the literature for the distributed test architecture cannot ensure the same fault coverage as the corresponding original testing methods. We also study the limitation of fault detectability in the distributed test architecture.
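The synchronizability constraint on consecutive transitions can be stated compactly; the port names, transition encoding and check below are an illustrative sketch of the standard notion, not the paper's test generation method:

```python
# A pair of consecutive transitions is synchronizable if the tester at
# the port sending the next input either sent the previous input or
# received some output in the previous transition (so it can tell when
# to apply its input without extra coordination messages).

# Each transition: (input_port, input, {output_port: output, ...})
def is_synchronizable(sequence):
    for prev, curr in zip(sequence, sequence[1:]):
        prev_in_port, _, prev_outputs = prev
        involved = {prev_in_port} | set(prev_outputs)
        if curr[0] not in involved:
            return False
    return True

seq_ok  = [("U", "a", {"U": "x", "L": "y"}),   # lower tester L observes y ...
           ("L", "b", {"L": "z"})]             # ... so it may send b next
seq_bad = [("U", "a", {"U": "x"}),             # only port U is involved here
           ("L", "b", {"L": "z"})]             # L cannot know when to send b

print(is_synchronizable(seq_ok), is_synchronizable(seq_bad))   # True False
```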
Abstract:
Direct use of experimental eigenvalues of the vibrational secular equation in the ab initio predicted eigenvector space is suggested as a means of obtaining a reliable set of intramolecular force constants. This method, which we have termed RECOVES (recovery in the eigenvector space), is computationally simple and free from arbitrariness. The RECOVES force constants, by definition, reproduce the experimental vibrational frequencies of the parent molecule exactly. The ab initio calculations were carried out for ethylene as a test molecule, and the force constants obtained by the present procedure also correctly predict the vibrational frequencies of the deuterated species. The RECOVES force constants for ethylene are compared with those obtained by using the SQM procedure.
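The RECOVES idea can be sketched in mass-weighted coordinates, where the ab initio eigenvectors are orthonormal and the force constant matrix rebuilt from experimental eigenvalues reproduces the experimental frequencies exactly; the matrices and scale factors below are made up for illustration and are not the ethylene data:

```python
import numpy as np

# A made-up symmetric "ab initio" force constant matrix (3 modes).
F_ab_initio = np.array([[5.0, 0.3, 0.1],
                        [0.3, 3.0, 0.2],
                        [0.1, 0.2, 1.0]])
lam_ab, L = np.linalg.eigh(F_ab_initio)          # ab initio eigenvectors L

# Hypothetical experimental eigenvalues (from observed frequencies).
lam_exp = lam_ab * np.array([1.05, 0.97, 1.02])

# RECOVES-style reconstruction: experimental eigenvalues placed in the
# ab initio eigenvector space, F = L diag(lambda_exp) L^T.
F_recoves = L @ np.diag(lam_exp) @ L.T

# By construction, the new matrix reproduces lam_exp exactly.
print(np.allclose(np.linalg.eigvalsh(F_recoves), np.sort(lam_exp)))   # True
```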
Abstract:
The use of delayed coefficient adaptation in the least mean square (LMS) algorithm has enabled the design of pipelined architectures for real-time transversal adaptive filtering. However, the convergence speed of this delayed LMS (DLMS) algorithm, when compared with that of the standard LMS algorithm, is degraded and worsens as the adaptation delay increases. Existing pipelined DLMS architectures have large adaptation delay and hence degraded convergence speed. In this paper, we first present a pipelined DLMS architecture with minimal adaptation delay for any given sampling rate. The architecture is synthesized by applying a number of function-preserving transformations to the signal flow graph representation of the DLMS algorithm. With the use of carry-save arithmetic, the pipelined architecture can support high sampling rates, limited only by the delay of a full adder and a 2-to-1 multiplexer. In the second part of this paper, we extend the synthesis methodology described in the first part to synthesize pipelined DLMS architectures whose power dissipation meets a specified budget. This low-power architecture exploits the parallelism in the DLMS algorithm to meet the required computational throughput. The architecture exhibits a novel tradeoff between algorithmic performance (convergence speed) and power dissipation. (C) 1999 Elsevier Science B.V. All rights reserved.
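A minimal sketch of delayed coefficient adaptation, assuming the standard DLMS update in which the weight update uses the error and regressor from D samples earlier (which is what permits pipelining at the cost of slower convergence as D grows); filter length, step size and delay are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, mu, D = 8, 0.02, 4                     # taps, step size, adaptation delay
w_true = rng.normal(size=N)               # unknown system to identify

w = np.zeros(N)
x_hist, e_hist = [], []                   # delayed regressors and errors

for n in range(5000):
    x = rng.normal(size=N)                # input regressor vector
    d = w_true @ x + 0.01 * rng.normal()  # desired signal
    e = d - w @ x                         # current output error
    x_hist.append(x); e_hist.append(e)
    if n >= D:                            # DLMS: adapt with D-delayed data
        w = w + mu * e_hist[n - D] * x_hist[n - D]

print(np.linalg.norm(w - w_true))         # small residual misadjustment
```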
Abstract:
The consistency of very soft sediments prevents the conventional oedometer test from being applied to study their compressibility and permeability characteristics. The existing hydraulic consolidation test requires sophisticated instrumentation and testing procedures. The present paper proposes a seepage-force-induced consolidation testing procedure for studying the compressibility and permeability behavior of soft sediments at low effective stress levels. The good agreement observed between the results from the proposed method and the conventional oedometer test at overlapping effective stress levels indicates that the proposed method can satisfactorily be used to study the compressibility and permeability characteristics of soft sediments at low effective stress levels.
Abstract:
The effect of the test gas on the flow field around a 120° apex-angle blunt cone has been investigated in a shock tunnel at a nominal Mach number of 5.75. The shock standoff distance around the blunt cone was measured by an electrical discharge technique using both carbon dioxide and air as test gases. The forebody laminar convective heat transfer to the blunt cone was measured with platinum thin-film sensors in both air and carbon dioxide environments. An increase of 10 to 15% in the measured heat transfer values was observed with carbon dioxide as the test gas in comparison to air. The measured thickness of the shock layer along the stagnation streamline was 3.57 +/- 0.17 mm in air and 3.29 +/- 0.26 mm in carbon dioxide. The computed shock layer thicknesses for air and carbon dioxide were 3.98 mm and 3.02 mm, respectively. The observed increase in the measured heat transfer rates in carbon dioxide compared to air is attributed to the higher density ratio across the bow shock wave and the reduced shock layer thickness.
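The density-ratio argument can be checked with the ideal-gas normal-shock relation, assuming frozen flow and nominal specific-heat ratios of 1.4 for air and about 1.29 for carbon dioxide; real-gas effects in CO2 push the ratio higher still:

```python
# Rankine-Hugoniot density ratio across a normal shock for an ideal gas:
# rho2/rho1 = (g + 1) M^2 / ((g - 1) M^2 + 2)
def density_ratio(gamma, mach):
    return (gamma + 1.0) * mach**2 / ((gamma - 1.0) * mach**2 + 2.0)

M = 5.75
print(round(density_ratio(1.40, M), 2))   # ~5.2 for air
print(round(density_ratio(1.29, M), 2))   # ~6.5 for CO2: denser, thinner shock layer
```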
Abstract:
The heterotrophic bacterium Paenibacillus polymyxa was adapted to pyrite, chalcopyrite, galena and sphalerite minerals by repeatedly subculturing the bacteria in the presence of each mineral until their growth characteristics became similar to growth in the absence of the mineral. The unadapted and adapted bacterial surfaces have been chemically characterised by zeta-potential, contact angle, adherence to hydrocarbons and FT-IR spectroscopic studies. The surface free energies of the bacteria have been calculated following the equation-of-state and surface tension component approaches. The aim of the present paper is to understand the changes in the surface chemical properties of the bacteria during adaptation to sulfide minerals and the projected consequences for bioflotation and bioflocculation processes. The mineral-adapted cells became more hydrophilic than the unadapted cells. There are no significant changes in the surface charge of the bacteria before and after adaptation, and all the bacteria exhibit an iso-electric point below pH 2.5. The contact angles are observed to be more reliable for hydrophobicity assessment than adherence to hydrocarbons. The Lifshitz–van der Waals/acid–base approach to calculating surface free energy is found to be relevant for mineral–bacteria interactions. The diffuse reflectance FT-IR absorbance bands for all the bacteria are the same, indicating a similar surface chemical composition. However, the band intensities for unadapted and adapted cells vary significantly, owing to different amounts of bacterial secretions under the different growth conditions.
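The Lifshitz–van der Waals/acid–base (van Oss type) calculation amounts to solving a small linear system built from contact angles measured with three probe liquids; in the sketch below the liquid parameters are commonly quoted literature values and the contact angles are invented for illustration, not the measured bacterial data:

```python
import numpy as np

# van Oss relation per probe liquid:
# gamma_L (1 + cos t) = 2 [ sqrt(gLW_S gLW_L) + sqrt(gP_S gM_L) + sqrt(gM_S gP_L) ]
# Unknowns: sqrt of the solid's LW, acid (+) and base (-) components.

# columns: total gamma_L, LW, acid (+), base (-)   [mJ/m^2]
liquids = {
    "water":         (72.8, 21.8, 25.5, 25.5),
    "formamide":     (58.0, 39.0,  2.3, 39.6),
    "diiodomethane": (50.8, 50.8,  0.0,  0.0),
}
theta_deg = {"water": 20.0, "formamide": 15.0, "diiodomethane": 40.0}  # illustrative

A, b = [], []
for name, (gL, gLW, gP, gM) in liquids.items():
    t = np.radians(theta_deg[name])
    A.append([np.sqrt(gLW), np.sqrt(gM), np.sqrt(gP)])
    b.append(gL * (1.0 + np.cos(t)) / 2.0)

x = np.linalg.solve(np.array(A), np.array(b))     # sqrt of solid components
gS_LW, gS_plus, gS_minus = x**2
gS_total = gS_LW + 2.0 * np.sqrt(gS_plus * gS_minus)
print(round(gS_LW, 1), round(gS_plus, 1), round(gS_minus, 1), round(gS_total, 1))
```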
Abstract:
Verification is one of the important stages in designing an SoC (system on chip) and consumes up to 70% of the design time. In this work, we present a methodology to automatically generate verification test-cases for a class of SoCs and also to enable re-use of the verification resources created for one SoC on another. A prototype implementation for generating the test-cases is also presented.