95 results for test automation
at Indian Institute of Science - Bangalore - India
Location of concentrators in a computer communication network: a stochastic automaton search method
Abstract:
The following problem is considered. Given the locations of the central processing unit (CPU) and of the terminals which have to communicate with it, to determine the number and locations of the concentrators and to assign the terminals to the concentrators in such a way that the total cost is minimized. There is also a fixed cost associated with each concentrator, and there is an upper limit to the number of terminals which can be connected to a concentrator. The terminals can also be connected directly to the CPU. In this paper it is assumed that the concentrators can be located anywhere in the area A containing the CPU and the terminals. Then this becomes a multimodal optimization problem. In the proposed algorithm a stochastic automaton is used as a search device to locate the minimum of the multimodal cost function. The proposed algorithm involves the following. The area A containing the CPU and the terminals is divided into an arbitrary number of regions (say K). An approximate value for the number of concentrators is assumed (say m); the optimum number is determined later by iteration. The m concentrators can be assigned to the K regions in m^K ways (m > K) or K^m ways (K > m). (All possible assignments are feasible, i.e. a region can contain 0, 1, ..., m concentrators.) Each possible assignment is taken to represent a state of the stochastic variable-structure automaton. To start with, all the states are assigned equal probabilities. At each stage of the search the automaton visits a state according to the current probability distribution. At each visit the automaton selects a 'point' inside that state with uniform probability. The cost associated with that point is calculated and the average cost of that state is updated. Then the probabilities of all the states are updated; the probabilities are taken to be inversely proportional to the average costs of the states. After a certain number of searches the search probabilities become stationary and the automaton visits a particular state again and again; the automaton is then said to have converged to that state. By conducting a local gradient search within that state, the exact locations of the concentrators are determined. This algorithm was applied to a set of test problems and the results were compared with those given by Cooper's (1964, 1967) EAC algorithm; on average, the proposed algorithm was found to perform better.
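The search loop described above maps naturally onto a short sketch. The following Python illustrates only the probability-update scheme from the abstract; the rectangular region geometry, the cost-function interface, the strictly positive costs, and the convergence threshold are assumptions made for illustration, not the paper's exact formulation.

import itertools
import random

def automaton_search(regions, m, cost_fn, n_steps=5000, conv_prob=0.99):
    # regions: list of (xmin, xmax, ymin, ymax) rectangles covering area A
    # cost_fn: maps a tuple of m (x, y) concentrator locations to a cost > 0
    states = list(itertools.product(range(len(regions)), repeat=m))
    prob = [1.0 / len(states)] * len(states)   # equal initial probabilities
    avg_cost = [0.0] * len(states)
    visits = [0] * len(states)
    for _ in range(n_steps):
        s = random.choices(range(len(states)), weights=prob)[0]
        # Select one point uniformly inside each region of this assignment.
        pts = tuple((random.uniform(regions[r][0], regions[r][1]),
                     random.uniform(regions[r][2], regions[r][3]))
                    for r in states[s])
        visits[s] += 1
        avg_cost[s] += (cost_fn(pts) - avg_cost[s]) / visits[s]  # running mean
        # Probabilities inversely proportional to average cost; unvisited
        # states keep the best weight seen so far so they stay explorable.
        best = max(1.0 / avg_cost[i] for i in range(len(states)) if visits[i])
        inv = [1.0 / avg_cost[i] if visits[i] else best
               for i in range(len(states))]
        total = sum(inv)
        prob = [w / total for w in inv]
        if max(prob) > conv_prob:              # the automaton has converged
            break
    return states[max(range(len(states)), key=lambda i: prob[i])]

The returned assignment would then seed the local gradient search that pins down the exact concentrator locations.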
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
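As a rough illustration of what a CTG computation involves in the flat (non-hierarchical) case, the sketch below pairs specification and test-purpose states and keeps only the part of the product from which the accepting purpose state remains reachable. The dict-based machine encoding and all names are assumptions; the actual CTG of Jeron and Morel additionally distinguishes inputs from outputs and labels verdicts.

def product_reachable(spec, purpose, spec_init, purp_init, accept):
    # spec, purpose: {state: {action: next_state}} transition maps.
    # Returns product edges lying on a path from the initial pair to an
    # accepting pair -- the skeleton from which test strategies are drawn.
    edges, stack = [], [(spec_init, purp_init)]
    seen = {(spec_init, purp_init)}
    while stack:                                  # forward reachability
        s, p = stack.pop()
        for action, s2 in spec[s].items():
            p2 = purpose[p].get(action, p)        # purpose self-loops otherwise
            edges.append(((s, p), action, (s2, p2)))
            if (s2, p2) not in seen:
                seen.add((s2, p2))
                stack.append((s2, p2))
    good = {q for q in seen if q[1] == accept}    # backward pruning
    changed = True
    while changed:
        changed = False
        for src, _, dst in edges:
            if dst in good and src not in good:
                good.add(src)
                changed = True
    return [e for e in edges if e[0] in good and e[2] in good]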
Abstract:
Verification is one of the important stages in designing an SoC (system on chip) and consumes up to 70% of the design time. In this work, we present a methodology to automatically generate verification test-cases to verify a class of SoCs, and also to enable re-use of verification resources created for one SoC on another. A prototype implementation for generating the test-cases is also presented.
Abstract:
A geometric and non-parametric procedure for testing whether two finite sets of points are linearly separable is proposed. The linear separability test is equivalent to a test that determines whether a strictly positive point h > 0 exists in the range of a matrix A (related to the points in the two finite sets). The algorithm proposed in the paper iteratively checks whether a strictly positive point exists in a subspace by projecting a strictly positive vector with equal co-ordinates (p) onto the subspace. At the end of each iteration, the subspace is reduced to a lower-dimensional subspace. The test is completed within r ≤ min(n, d + 1) steps, for both linearly separable and non-separable problems (r is the rank of A, n is the number of points and d is the dimension of the space containing the points). The worst-case time complexity of the algorithm is O(nr^3) and its space complexity is O(nd). A short review of some of the prominent algorithms and their time complexities is included. The worst-case computational complexity of our algorithm is lower than the worst-case computational complexity of the Simplex, Perceptron, Support Vector Machine and Convex Hull algorithms if d
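The abstract's criterion itself is easy to state in code. The sketch below builds A from the two point sets (rows [x, 1] for one class and [-y, -1] for the other, so A @ z > 0 for some z exactly when the sets are separable) and checks for a strictly positive point in the range of A with a linear program; this stands in for the paper's iterative projection scheme, whose reduction step the abstract does not fully specify, and the tolerance is an assumption.

import numpy as np
from scipy.optimize import linprog

def linearly_separable(X, Y):
    # Rows [x, 1] and [-y, -1]; some z with A @ z > 0 iff separable.
    A = np.vstack([np.hstack([X, np.ones((len(X), 1))]),
                   -np.hstack([Y, np.ones((len(Y), 1))])])
    n, k = A.shape
    # maximize t subject to A z >= t*1 and t <= 1; separable iff optimum t > 0
    c = np.zeros(k + 1)
    c[-1] = -1.0                                   # linprog minimizes, so -t
    G = np.hstack([-A, np.ones((n, 1))])           # -A z + t*1 <= 0
    res = linprog(c, A_ub=G, b_ub=np.zeros(n),
                  bounds=[(None, None)] * k + [(None, 1.0)])
    return res.status == 0 and -res.fun > 1e-9     # optimal t strictly positive

For example, linearly_separable(np.array([[0., 0.], [1., 0.]]), np.array([[0., 2.], [1., 2.]])) returns True, since the two sets are split by a horizontal line.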
Abstract:
Designing and implementing thread-safe multithreaded libraries can be a daunting task, as developers of these libraries need to ensure that their implementations are free from concurrency bugs, including deadlocks. The usual practice involves employing software testing and/or dynamic analysis to detect deadlocks. Their effectiveness depends on well-designed multithreaded test cases. Unsurprisingly, developing multithreaded tests is significantly harder than developing sequential tests, for obvious reasons. In this paper, we address the problem of automatically synthesizing multithreaded tests that can induce deadlocks. The key insight of our approach is that a subset of the properties observed when a deadlock manifests in a concurrent execution can also be observed in a single-threaded execution. We design a novel, automatic, scalable and directed approach that identifies these properties and synthesizes a deadlock-revealing multithreaded test. The input to our approach is the library implementation under consideration and the output is a set of deadlock-revealing multithreaded tests. We have implemented our approach in a tool named OMEN. OMEN is able to synthesize multithreaded tests for many multithreaded Java libraries. Applying a dynamic deadlock detector to the execution of the synthesized tests results in the detection of a number of deadlocks, including 35 real deadlocks in classes documented as thread-safe. Moreover, our experimental results show that dynamic analysis of multithreaded tests that are either synthesized randomly or developed by third-party programmers is ineffective in detecting these deadlocks.
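The key insight, that deadlock ingredients are visible in single-threaded runs, can be illustrated compactly: record the nested lock-acquisition order of each single-threaded library call as edges of a lock-order graph, and flag any pair of calls whose edges form a cycle as a candidate for a synthesized two-thread test. The sketch below covers only this observation step, with illustrative names; OMEN's actual trace analysis and test synthesis are considerably more involved.

def lock_order_edges(trace):
    # trace: sequence of ('acquire', lock) / ('release', lock) events from
    # ONE single-threaded run; yields (already-held, newly-acquired) pairs.
    held = []
    for op, lock in trace:
        if op == 'acquire':
            for h in held:
                yield (h, lock)
            held.append(lock)
        else:
            held.remove(lock)

def deadlock_candidate(trace_a, trace_b):
    # True if one run acquires L1 then L2 while the other acquires L2 then
    # L1: the classic cyclic pattern worth turning into a two-thread test.
    edges_a = set(lock_order_edges(trace_a))
    return any((l2, l1) in edges_a for (l1, l2) in lock_order_edges(trace_b))

One could, for instance, trace every public method once in a single thread and then synthesize a multithreaded test for every method pair that reports a candidate cycle.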
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and final conclusions. The Authors claim to have presented a new method of determining c_v; however, the note does not contain a new method. In fact, the method proposed is an attempt to generate settlement-versus-time data using only two values of (t, δ). The Authors have used the rectangular hyperbola method to determine c_v from the predicted δ-t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted c_v values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
Using Terzaghi's relationship between the degree of consolidation, U, and the time factor, T: if M_U1 and M_U2 (M_U1 ≠ M_U2) are the slopes of the U-√T curve at any two time factors T_U1 and T_U2, then it can be shown that a unique relationship exists between T_U2/T_U1, M_U1/M_U2, and T_U1 (or T_U2), and knowing any two of these, the third can be uniquely determined. A chart, called the T chart, has been plotted using these three variables for quickly determining T and U at any experimental time, t, in order to determine the coefficient of consolidation, c_v, the corrected zero settlement, δ_0, and the ultimate primary settlement, δ_100. The chart can be used even in those cases where the settlement and time at the instant of load increment are not known.
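Since the underlying U-T relationship is standard, the chart's quantities can be evaluated directly. The sketch below uses Terzaghi's usual two-branch approximation (an assumption; the paper may work from the exact series solution), with U in percent:

import math

def time_factor(U):
    # T = (pi/4)(U/100)^2 for U <= 60%, else 1.781 - 0.933*log10(100 - U)
    return (math.pi / 4) * (U / 100) ** 2 if U <= 60 \
        else 1.781 - 0.933 * math.log10(100 - U)

def slope_U_vs_sqrtT(U, dU=0.01):
    # M_U = dU/d(sqrt T) at U, estimated by a central difference
    t1, t2 = time_factor(U - dU), time_factor(U + dU)
    return 2 * dU / (math.sqrt(t2) - math.sqrt(t1))

def c_v(U, t, H):
    # coefficient of consolidation from one (t, U) reading; H = drainage path
    return time_factor(U) * H ** 2 / t

For two readings, the chart's coordinates are time_factor(U2)/time_factor(U1) against slope_U_vs_sqrtT(U1)/slope_U_vs_sqrtT(U2), from which the third quantity is read off.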
Abstract:
Taylor (1948) suggested a method for determining the settlement, δ, corresponding to 90% consolidation utilizing the characteristics of the plot of the degree of consolidation, U, versus the square root of the time factor, √T. Based on the properties of the slope of the U-√T curve, a new method is proposed to determine δ corresponding to any U above 70% consolidation for evaluation of the coefficient of consolidation, c_v. The effects of secondary consolidation on the c_v value at different percentages of consolidation can be studied. Values of c_v closer to the field values can be determined in less time than with Taylor's method. At any U between 75 and 95% consolidation, c_v(U) due to the new method lies between Taylor's c_v and Casagrande's c_v.
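Taylor's construction generalizes in exactly the direction this abstract describes: at U = 90% the actual √T is 1.15 times the value on the extension of the initial straight-line portion of the U-√T curve, and the same ratio can be formed at any U. A self-contained sketch, again assuming Terzaghi's standard approximation:

import math

def taylor_factor(U):
    # Ratio of the actual sqrt(T) at U% to the initial-linear-portion value
    # sqrt(T_linear) = (sqrt(pi)/2)*(U/100); reproduces Taylor's 1.15 at U=90.
    T = (math.pi / 4) * (U / 100) ** 2 if U <= 60 \
        else 1.781 - 0.933 * math.log10(100 - U)
    return math.sqrt(T) / ((math.sqrt(math.pi) / 2) * (U / 100))

taylor_factor(90) evaluates to about 1.155, Taylor's familiar 1.15; the values at, say, 80% or 95% give the corresponding constructions for other degrees of consolidation.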
Abstract:
This article discusses the design and development of GRDB (General Purpose Relational Data Base System), which has been implemented on a DEC-1090 system in Pascal. GRDB is a general-purpose database system designed to be completely independent of the nature of the data to be handled, since it is not tailored to the specific requirements of any particular enterprise. It can handle different types of data, such as variable-length records and textual data. Apart from the usual database facilities such as data definition and data manipulation, GRDB supports a User Definition Language (UDL) and a security definition language. These facilities are provided through a SEQUEL-like General Purpose Query Language (GQL). GRDB provides adequate protection facilities down to the relation level. The concept of a "security matrix" is used to provide database protection, and unique identification numbers (UIDs) and passwords are used to ensure user identification and authentication. Static integrity constraints are used to ensure data integrity. Considerable effort has been made to improve the response time through indexing on the data files and query optimisation. GRDB is designed for interactive use, but provision has also been made for use in batch mode. A typical Air Force application (consisting of data about personnel, inventory control, and maintenance planning) has been used to test GRDB, and it has been found to perform satisfactorily.
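The "security matrix" concept amounts to a grid of privileges consulted per user and relation before any operation is executed. The abstract does not describe GRDB's actual structures, so the sketch below is only the concept, with invented identifiers:

# (user, relation) -> privilege set, checked before every GQL operation
security_matrix = {
    ('uid042', 'PERSONNEL'): {'SELECT'},
    ('uid042', 'INVENTORY'): {'SELECT', 'UPDATE'},
}

def authorized(uid, relation, privilege):
    return privilege in security_matrix.get((uid, relation), set())

Here authorized('uid042', 'PERSONNEL', 'UPDATE') is False, so such a query would be rejected before it ever touches the data files.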
Abstract:
The test based on comparison of the characteristic coefficients of the adjacency matrices of the corresponding graphs for detection of isomorphism in kinematic chains has been shown to fail in the case of two pairs of ten-link, simple-jointed chains, one pair corresponding to single-freedom chains and the other pair corresponding to three-freedom chains. An assessment of the merits and demerits of available methods for detection of isomorphism in graphs and kinematic chains is presented, keeping in view the suitability of the methods for use in computerized structural synthesis of kinematic chains. A new test based on the characteristic coefficients of the "degree" matrix of the corresponding graph is proposed for detection of isomorphism in kinematic chains. The new test is found to be successful in a number of examples of graphs where the test based on the characteristic coefficients of the adjacency matrix fails. It has also been found to be successful in distinguishing the structures of all known simple-jointed kinematic chains in the categories of (a) single-freedom chains with up to 10 links, (b) two-freedom chains with up to 9 links and (c) three-freedom chains with up to 10 links.
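The comparison step is straightforward to sketch: two chains can be isomorphic only if the characteristic coefficients of the chosen matrix agree. The abstract does not spell out the paper's "degree" matrix construction, so the variant below (each adjacency entry weighted by the degree of the incident vertex) is an assumption made for illustration:

import numpy as np

def char_coeffs(M, decimals=6):
    # Coefficients of det(xI - M), rounded for a stable comparison.
    return tuple(np.round(np.poly(M), decimals))

def degree_matrix(adj):
    # One plausible construction: weight each edge by its end-vertex degree.
    deg = adj.sum(axis=1)
    return adj * deg[np.newaxis, :]

def possibly_isomorphic(adj1, adj2):
    # Necessary condition only: equal coefficients do not prove isomorphism,
    # which is exactly the failure mode reported for the adjacency matrix.
    return char_coeffs(degree_matrix(adj1)) == char_coeffs(degree_matrix(adj2))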
Abstract:
Combustion is a complex phenomenon involving a multiplicity of variables. Some important variables measured in flame tests are as follows [1]. In order to characterize ignition, such related parameters as ignition time, ease of ignition, flash ignition temperature, and self-ignition temperature are measured. For studying the propagation of the flame, parameters such as distance burned or charred, area of flame spread, time of flame spread, burning rate, charred or melted area, and fire endurance are measured. Smoke characteristics are studied by determining such parameters as specific optical density, maximum specific optical density, time of occurrence of the densities, maximum rate of density increase, visual obscuration time, and smoke obscuration index. In addition to the above variables, there are a number of specific properties of the combustible system which could be measured. These are soot formation, toxicity of combustion gases, heat of combustion, dripping phenomena during the burning of thermoplastics, afterglow, flame intensity, fuel contribution, visual characteristics, limiting oxygen concentration (OI), products of pyrolysis and combustion, and so forth. A multitude of flammability tests measuring one or more of these properties have been developed [2]. Admittedly, no one small-scale test is adequate to mimic or assess the performance of a plastic in a real fire situation. The conditions are much too complicated [3, 4]. Some conceptual problems associated with flammability testing of polymers have been reviewed [5, 6].
Abstract:
Functional Programming (FP) systems are modified and extended to form Nondeterministic Functional Programming (NFP) systems in which nondeterministic programs can be specified and both deterministic and nondeterministic programs can be verified essentially within the system. It is shown that the algebra of NFP programs has simpler laws in comparison with the algebra of FP programs. "Regular" forms are introduced to put forward a disciplined way of reasoning about programs. Finally, an alternative definition of "linear" forms is proposed for reasoning about recursively defined programs. This definition, when used to test the linearity of forms, results in simpler verification conditions than those generated by the original definition of linear forms.
Abstract:
A very general and numerically quite robust algorithm has been proposed by Sastry and Gauvrit (1980) for system identification. The present paper takes it up and examines its performance on a real test example. The example considered is the lateral dynamics of an aircraft. This is used as a vehicle for demonstrating the performance of various aspects of the algorithm in several possible modes.
Abstract:
The pollen of Parthenium hysterophorus, an alien weed growing wild in India, was found to be a potential source of allergic rhinitis. A clinical survey showed that 34% of the patients suffering from rhinitis and 12% of those suffering from bronchial asthma gave positive skin-prick test reactions to Parthenium pollen antigen extracts. Parthenium-specific IgE was detected in the sera of sixteen out of twenty-four patients suffering from seasonal rhinitis. There was a 66% correlation between the skin test and RAST.
Abstract:
Salt-fog tests as per International Electrotechnical Commission (IEC) recommendations were conducted on station-type insulators with large leakage lengths. Later, tests were conducted to simulate natural conditions. From these tests, it was understood that pollution flashover occurs because of nonuniform pollution layers causing nonuniform voltage distribution during a natural drying-up period. The leakage current under test conditions was very small, and the evidence was that the leakage current did not play any significant role in causing flashovers. In the light of the experimental results, some modifications of the test procedure are suggested.