48 results for test case generation
Abstract:
Conformance testing focuses on checking whether an implementation under test (IUT) behaves according to its specification. Typically, testers are interested in performing targeted tests that exercise certain features of the IUT. This intention is formalized as a test purpose. The tester needs a "strategy" to reach the goal specified by the test purpose. Also, for a particular test case, the strategy should tell the tester whether the IUT has passed, failed, or deviated from the test purpose. In [8], Jeron and Morel show how to compute, for a given finite state machine specification and a test purpose automaton, a complete test graph (CTG) which represents all test strategies. In this paper, we consider the case when the specification is a hierarchical state machine and show how to compute a hierarchical CTG which preserves the hierarchical structure of the specification. We also propose an algorithm for an online test oracle which avoids the space overhead associated with the CTG.
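The underlying idea lends itself to a compact illustration. Below is a minimal Python sketch of composing a specification FSM with a test-purpose automaton and replaying an observed IUT trace to obtain a verdict; the state machines (SPEC, PURPOSE, ACCEPT) and the verdict rules are hypothetical simplifications, not the CTG construction of [8].

```python
# Minimal sketch: synchronous product of a spec FSM and a test-purpose
# automaton, plus an online verdict check (all names hypothetical).
from collections import deque

SPEC = {                       # state -> {action: next_state}
    "s0": {"a": "s1", "b": "s0"},
    "s1": {"a": "s1", "c": "s2"},
    "s2": {},
}
PURPOSE = {                    # test-purpose automaton over the same actions
    "p0": {"a": "p1"},
    "p1": {"c": "p_acc"},
    "p_acc": {},
}
ACCEPT = {"p_acc"}

def product(spec, purpose, start=("s0", "p0")):
    """Breadth-first construction of the reachable synchronous product."""
    graph, seen, queue = {}, {start}, deque([start])
    while queue:
        s, p = queue.popleft()
        graph[(s, p)] = {}
        for act, s2 in spec[s].items():
            p2 = purpose.get(p, {}).get(act, p)  # purpose stays put otherwise
            graph[(s, p)][act] = (s2, p2)
            if (s2, p2) not in seen:
                seen.add((s2, p2))
                queue.append((s2, p2))
    return graph

def online_verdict(trace, spec, purpose):
    """Replay an observed trace: 'fail' = not allowed by the spec,
    'pass' = test purpose reached, 'inconclusive' otherwise."""
    s, p = "s0", "p0"
    for act in trace:
        if act not in spec[s]:
            return "fail"
        s = spec[s][act]
        p = purpose.get(p, {}).get(act, p)
        if p in ACCEPT:
            return "pass"
    return "inconclusive"

print(len(product(SPEC, PURPOSE)), "reachable product states")
print(online_verdict(["a", "c"], SPEC, PURPOSE))   # pass
print(online_verdict(["b", "b"], SPEC, PURPOSE))   # inconclusive
```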
Abstract:
This work aims at the dimensional reduction of non-linear isotropic hyperelastic plates in an asymptotically accurate manner. The problem is both geometrically and materially non-linear. The geometric non-linearity is handled by allowing for finite deformations and generalized warping, while the material non-linearity is incorporated through a hyperelastic material model. The development, based on the Variational Asymptotic Method (VAM) with moderate strains and a very small ratio of thickness to the shortest wavelength of the deformation along the plate reference surface as small parameters, begins with three-dimensional (3-D) non-linear elasticity and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a two-dimensional (2-D) plate analysis. The major contributions of this paper are the derivation of closed-form analytical expressions for the warping functions and stiffness coefficients and a set of recovery relations to express approximately the 3-D displacement, strain and stress fields. Consistent with the 2-D non-linear constitutive law, a 2-D plate theory and a corresponding finite element program have been developed. Validation of the present theory is carried out with a standard test case and the results match well. Distributions of 3-D results are provided for another test case. (c) 2012 Elsevier Ltd. All rights reserved.
Abstract:
Flood is one of the most detrimental hydro-meteorological threats to mankind, which compels the development of efficient flood assessment models. In this paper, we propose remote-sensing-based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. However, SAR images suffer from speckle noise. Hence, the SAR image is processed in two stages: speckle removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The Gamma MAP filtered image is then segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). GLCM is a texture analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient-ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is more efficient for flood mapping.
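For illustration, here is a minimal sketch of the Lee filter (one of the three speckle filters compared above) in its simple local-statistics form; the window size, noise variance, and the synthetic speckled image are assumed placeholder values, not those used in the paper.

```python
# Lee filter sketch: output = local mean + W * (pixel - local mean),
# with W derived from local variance (simple local-statistics form).
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, win=7, noise_var=0.1):
    mean = uniform_filter(img, win)
    sq_mean = uniform_filter(img * img, win)
    var = np.maximum(sq_mean - mean * mean, 0.0)
    w = var / (var + noise_var)     # -> 1 in textured areas, -> 0 in flat areas
    return mean + w * (img - mean)

# Usage on a synthetic speckled image (multiplicative gamma speckle, mean 1):
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 1.0
speckled = clean * rng.gamma(shape=4.0, scale=0.25, size=clean.shape)
filtered = lee_filter(speckled)
```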
Abstract:
A computationally efficient approach that computes the optimal regularization parameter for the Tikhonov minimization scheme is developed for photoacoustic imaging. This approach is based on the least-squares QR decomposition, a well-known dimensionality reduction technique for large systems of equations. It is shown that the proposed framework is effective in terms of quantitative and qualitative reconstruction of the initial pressure distribution, enabled by finding an optimal regularization parameter. The computational efficiency and performance of the proposed method are shown using a test case of a numerical blood vessel phantom, where the initial pressure is exactly known for quantitative comparison. (C) 2013 Society of Photo-Optical Instrumentation Engineers (SPIE)
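As a sketch of the general idea, assuming the least-squares QR method corresponds to LSQR as implemented in SciPy (whose `damp` argument plays exactly the role of the Tikhonov parameter), the snippet below sweeps candidate parameters on a random stand-in forward model; this naive sweep is illustrative only, not the paper's optimal-selection scheme.

```python
# Tikhonov regularization via LSQR: lsqr solves
# min ||Ax - b||^2 + damp^2 ||x||^2, so `damp` is the regularization parameter.
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 100))           # stand-in forward operator
x_true = rng.standard_normal(100)             # "known" initial pressure analogue
b = A @ x_true + 0.05 * rng.standard_normal(200)

best = None
for lam in np.logspace(-4, 1, 20):            # sweep candidate parameters
    x = lsqr(A, b, damp=lam)[0]
    err = np.linalg.norm(x - x_true)          # possible here only because x_true is known
    if best is None or err < best[1]:
        best = (lam, err)
print("best lambda:", best[0])
```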
Abstract:
Asymptotically accurate dimensional reduction from three to two dimensions and recovery of the 3-D displacement field of non-prestretched dielectric hyperelastic membranes are carried out using the Variational Asymptotic Method (VAM), with moderate strains and a very small ratio of the membrane thickness to its shortest wavelength of the deformation along the plate reference surface chosen as the small parameters for the asymptotic expansion. The present work incorporates large deformations (displacements and rotations), material nonlinearity (hyperelasticity), and electrical effects. It begins with the 3-D nonlinear electroelastic energy and mathematically splits the analysis into a one-dimensional (1-D) through-the-thickness analysis and a 2-D nonlinear plate analysis. The major contribution of this paper is a comprehensive nonlinear through-the-thickness analysis which provides a 2-D energy asymptotically equivalent to the 3-D energy, a 2-D constitutive relation between the 2-D generalized strain and stress tensors for the plate analysis, and a set of recovery relations to express the 3-D displacement field. Analytical expressions are derived for the warping functions and stiffness coefficients. This is the first attempt to integrate an analytical work on an asymptotically accurate nonlinear electro-elastic constitutive relation for a compressible dielectric hyperelastic model with a generalized finite element analysis of plates to provide 3-D displacement fields using VAM. A unified software package, `VAMNLM' (Variational Asymptotic Method applied to Non-Linear Material models), was developed to carry out the 1-D non-linear analysis (analytical), the 2-D non-linear finite element analysis and the 3-D recovery analysis. The applicability of the current theory is demonstrated through an actuation test case, for which distributions of 3-D displacements are provided. (C) 2014 Elsevier Ltd. All rights reserved.
Abstract:
Atomization is the process of disintegration of a liquid jet into ligaments and subsequently into smaller droplets. A liquid jet injected from a circular orifice into cross flow of air undergoes atomization primarily due to the interaction of the two phases rather than an intrinsic break up. Direct numerical simulation of this process resolving the finest droplets is computationally very expensive and impractical. In the present study, we resort to multiscale modelling to reduce the computational cost. The primary break up of the liquid jet is simulated using Gerris, an open source code, which employs Volume-of-Fluid (VOF) algorithm. The smallest droplets formed during primary atomization are modeled as Lagrangian particles. This one-way coupling approach is validated with the help of the simple test case of tracking a particle in a Taylor-Green vortex. The temporal evolution of the liquid jet forming the spray is captured and the flattening of the cylindrical liquid column prior to breakup is observed. The size distribution of the resultant droplets is presented at different distances downstream from the location of injection and their spatial evolution is analyzed.
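The validation test mentioned above is easy to reproduce in outline: the 2-D Taylor-Green vortex has an analytic velocity field, so a tracer particle can be integrated through it directly. The sketch below assumes an inertialess (zero-Stokes-number) tracer and illustrative parameter values, which is a simplification of a full Lagrangian droplet model.

```python
# One-way coupled tracking of a tracer in a steady 2-D Taylor-Green vortex:
#   u =  U0 sin(x) cos(y),   v = -U0 cos(x) sin(y)
import numpy as np
from scipy.integrate import solve_ivp

U0 = 1.0  # velocity scale (illustrative)

def velocity(t, xy):
    x, y = xy
    return [U0 * np.sin(x) * np.cos(y), -U0 * np.cos(x) * np.sin(y)]

# An inertialess tracer follows the fluid velocity exactly.
sol = solve_ivp(velocity, t_span=(0.0, 10.0), y0=[0.3, 0.4],
                rtol=1e-8, dense_output=True)
print("final position:", sol.y[:, -1])
```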
Abstract:
Identification of residue-residue contacts from the primary sequence can be used to guide protein structure prediction. Using Escherichia coli CcdB as the test case, we describe an experimental method termed saturation-suppressor mutagenesis to acquire residue contact information. In this methodology, exhaustive screens for suppressors were performed for each of five inactive CcdB mutants. Proximal suppressors were accurately discriminated from distal suppressors based on their phenotypes when present as single mutants. Experimentally identified putative proximal pairs formed spatial constraints to recover >98% of native-like models of CcdB from a decoy dataset. The suppressor methodology was also applied to the integral membrane protein diacylglycerol kinase A, for which the structures determined by X-ray crystallography and NMR differ significantly. Suppressor as well as sequence co-variation data clearly point to the X-ray structure being the functional one adopted in vivo. The methodology is applicable to any macromolecular system for which a convenient phenotypic assay exists.
Abstract:
This study aims at understanding the need for decentralized power generation systems and at exploring the potential, feasibility and environmental implications of biomass gasifier-based electricity generation systems for village electrification. The electricity needs of villages are in the range of 5–20 kW depending on the size of the village. Decentralized power generation systems are desirable for low-load village situations, as the cost of power transmission lines is reduced and transmission and distribution losses are minimised. A biomass gasifier-based electricity generation system is one of the feasible options; the technology is readily available and has already been field tested. To meet the lighting and stationary power needs of 500,000 villages in India, the land required is only 16 Mha, compared to over 100 Mha of degraded land available for tree planting. In fact, all of the 95 Mt of woody biomass required for gasification could be obtained through biomass conservation programmes such as biogas and improved cook stoves; thus, dedicating land to energy plantations may not be required. A shift to a biomass gasifier-based power generation system leads to local benefits such as village self-reliance, local employment and skill generation, and promotion of in situ plant diversity, plus global benefits such as no net CO2 emission (as sustainable biomass harvests are possible) and a reduction in CO2 emissions (when used to substitute thermal power and diesel in irrigation pump sets).
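A back-of-the-envelope check of the per-village figures quoted above (illustrative arithmetic only, not from the paper):

```python
# Per-village land and biomass implied by the national totals quoted above.
villages = 500_000
land_ha = 16e6                 # 16 Mha in hectares
biomass_t = 95e6               # 95 Mt in tonnes
print(land_ha / villages, "ha of land per village")            # 32.0
print(biomass_t / villages, "t of woody biomass per village")  # 190.0
```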
Abstract:
A general procedure for arriving at 3-D models of disulphide-rich polypeptide systems based on covalent cross-link constraints has been developed. The procedure, which has been coded as a computer program, RANMOD, assigns a large number of random, permitted backbone conformations to the polypeptide and identifies stereochemically acceptable structures as plausible models based on strainless disulphide bridge modelling. Disulphide bond modelling is performed using the procedure MODIP, developed earlier in connection with the choice of suitable sites where disulphide bonds could be engineered in proteins (Sowdhamini, R., Srinivasan, N., Shoichet, B., Santi, D.V., Ramakrishnan, C. and Balaram, P. (1989) Protein Engng, 3, 95-103). The method RANMOD has been tested on small disulphide loops, and the structures compared against preferred backbone conformations derived from an analysis of a putative disulphide sub-database and from model calculations. RANMOD has been applied to disulphide-rich peptides and found to give rise to several stereochemically acceptable structures. The results obtained on the modelling of two test cases, α-conotoxin GI and endothelin I, are presented. Available NMR data suggest that such small systems exhibit conformational heterogeneity in solution. Hence, this approach for obtaining several distinct models is particularly attractive for the study of conformational excursions.
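The geometric core of disulphide-site screening can be sketched as a simple distance filter over candidate residue pairs. The Cβ-Cβ window below is a commonly quoted approximation and the coordinates are hypothetical, so this is an illustration of the idea only, not MODIP's actual stereochemical criteria.

```python
# Flag residue pairs whose C-beta atoms fall inside a distance window
# compatible with an S-S bridge (window values are an assumption).
import numpy as np

def disulphide_candidates(cb_coords, lo=3.0, hi=5.0):
    """Return (i, j, distance) for C-beta pairs within [lo, hi] Angstrom."""
    pairs = []
    n = len(cb_coords)
    for i in range(n):
        for j in range(i + 2, n):      # skip sequence-adjacent residues
            d = np.linalg.norm(cb_coords[i] - cb_coords[j])
            if lo <= d <= hi:
                pairs.append((i, j, round(float(d), 2)))
    return pairs

cb = np.array([[0.0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [3.8, 4.0, 0]])  # hypothetical
print(disulphide_candidates(cb))       # [(1, 3, 4.0)]
```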
Abstract:
Technology scaling has caused Negative Bias Temperature Instability (NBTI) to emerge as a major circuit reliability concern. Simultaneously, leakage power is becoming a greater fraction of the total power dissipated by logic circuits. As both NBTI and leakage power are highly dependent on the vectors applied at the circuit's inputs, they can be minimized by applying carefully chosen input vectors during periods when the circuit is in standby or idle mode. Unfortunately, input vectors that minimize leakage power are not the ones that minimize NBTI degradation, so there is a need for a methodology to generate input vectors that minimize both. This paper proposes such a systematic methodology for generating input vectors that minimize leakage power under the constraint that NBTI degradation does not exceed a specified limit. These input vectors can be applied at the primary inputs of a circuit when it is in standby/idle mode; they are chosen so that the gates dissipate only a small amount of leakage power while a large majority of the transistors on critical paths are in the "recovery" phase of NBTI degradation. The advantage of this methodology is that allowing circuit designers to constrain NBTI degradation to below a specified limit enables tighter guardbanding, increasing performance. Our methodology guarantees that the generated input vector dissipates the least leakage power among all input vectors that satisfy the degradation constraint. We formulate the problem as a zero-one integer linear program and show that this formulation produces input vectors whose leakage power is within 1% of a minimum leakage vector selected by a search algorithm, while simultaneously reducing NBTI by about 5.75% of maximum circuit delay compared to the worst-case NBTI degradation. We also propose two new algorithms for identifying the circuit paths that are most affected by NBTI degradation. The number of such paths identified by our algorithms is an order of magnitude smaller than with previously proposed heuristics.
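The zero-one formulation can be illustrated with a toy model: minimize total leakage subject to a cap on total NBTI stress. The per-input coefficients and the purely linear objective below are hypothetical stand-ins for the paper's gate-level formulation.

```python
# Toy 0-1 ILP: choose input bits x_i to minimize leakage while keeping
# total NBTI stress under a limit (all coefficients hypothetical).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

n = 4
leak = [5.0, 2.0, 7.0, 3.0]    # leakage cost if input i is 1
nbti = [1.0, 4.0, 0.5, 2.0]    # NBTI stress contributed if input i is 0
limit = 5.0                    # allowed total NBTI stress

prob = LpProblem("min_leakage_under_nbti", LpMinimize)
x = [LpVariable(f"x{i}", cat=LpBinary) for i in range(n)]
prob += lpSum(leak[i] * x[i] for i in range(n))                  # minimize leakage
prob += lpSum(nbti[i] * (1 - x[i]) for i in range(n)) <= limit   # degradation cap
prob.solve(PULP_CBC_CMD(msg=False))
print([int(v.value()) for v in x])   # e.g. [0, 1, 0, 0] for these coefficients
```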
Abstract:
This paper develops a GIS (geographical information system)-based data mining approach for optimally selecting the locations and determining the installed capacities for setting up distributed biomass power generation systems, in the context of decentralized energy planning for rural regions. The optimal locations within a cluster of villages are obtained by matching the installed capacity needed with the demand for power while minimizing the cost of transporting biomass from dispersed sources to the power generation system and the cost of distributing electricity from the power generation system to demand centers or villages. The methodology was validated by using it to develop an optimal plan for implementing distributed biomass-based power systems to meet the rural electricity needs of Tumkur district in India, consisting of 2700 villages. The approach uses a k-medoid clustering algorithm to divide the total region into clusters of villages and to locate biomass power generation systems at the medoids. The optimal value of k is determined iteratively by running the algorithm over the entire search space for different values of k, subject to demand-supply matching constraints, and is chosen so as to minimize the total cost of system installation, biomass transportation, and transmission and distribution. A smaller region consisting of 293 villages was selected to study the sensitivity of the results to varying demand and supply parameters. The results of the clustering are represented on a GIS map of the region.
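A minimal sketch of the clustering step follows, assuming plain Euclidean distance as a stand-in for the transport cost and a hypothetical fixed installation cost per system; the alternating update is the simple Lloyd-style k-medoids variant, and the village coordinates are random placeholders.

```python
# k-medoids over village coordinates, with a sweep over k that trades
# within-cluster transport cost against per-system installation cost.
import numpy as np

def k_medoids(points, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    medoids = rng.choice(len(points), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(dist[:, medoids], axis=1)
        # For each cluster, pick the member minimizing total distance to the rest.
        new = np.array([
            np.where(labels == c)[0][
                np.argmin(dist[np.ix_(labels == c, labels == c)].sum(axis=1))]
            for c in range(k)])
        if np.array_equal(new, medoids):
            break
        medoids = new
    labels = np.argmin(dist[:, medoids], axis=1)
    cost = dist[np.arange(len(points)), medoids[labels]].sum()
    return medoids, labels, cost

villages = np.random.default_rng(1).uniform(0, 100, size=(40, 2))  # hypothetical
# Sweep k; 50.0 per system is a hypothetical installation cost.
best = min((k_medoids(villages, k)[2] + 50.0 * k, k) for k in range(2, 8))
print("best k:", best[1])
```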
Abstract:
A novel test of recent theories of the origin of optical activity has been designed based on the inclusion of certain alkyl 2-methylhexanoates into urea channels.
Abstract:
Biomethanation of herbaceous biomass feedstock has the potential to provide a clean energy source for cooking and other activities in areas where such biomass predominates. A biomethanation concept is described that involves fermentation of biomass residues in three steps, occurring in three zones of the fermentor. This approach, while attempting to take advantage of multistage reactors, simplifies reactor operation and obviates the need for a high degree of process control or a complex reactor design. Typical herbaceous biomass decomposes with a rapid volatile fatty acid (VFA) flux initially (with a tendency to float), followed by a slower decomposition showing a balanced process of VFA generation and its utilization by methanogens that colonize the biomass slowly. The tendency to float at the initial stages is suppressed by allowing the previous day's feed to hold the fresh feed below the digester liquid, which permits VFA to disperse into the digester liquid without causing process inhibition. This approach has been used to build and operate simple biomass digesters that provide cooking gas in rural areas from weeds and agro-residues. With appropriate modifications, the same concept has been used for digesting municipal solid waste in small towns where large fermentors are not viable, and, with further modifications, for solid-liquid feed fermentors. Methanogen-colonized leaf biomass has been used as a biofilm support to treat coffee processing wastewater and crop litter alternately within a year: during summer the unit functions as a biomass-based biogas plant operating in the three-zone mode, while in winter biomass feeding is suspended and high-strength coffee processing wastewater is let into the fermentor, achieving over 90% BOD reduction. Early field experience with these fermentors is presented.
Abstract:
In this study, reduction and desorption of oxides of nitrogen (NOx) were conducted using an electrical discharge plasma technique. The study was carried out using a simulated gas mixture to explore the possibility of regenerating used adsorbents by a nonthermal plasma desorption technique. Three different types of corona electrodes, namely pipe, helical wire, and straight wire, were used to analyze their effectiveness in NOx reduction/desorption. The pipe-type corona electrode exhibited a nitric oxide (NO) conversion of 50%, which is 1.5 times that of the straight-wire-type electrode at an energy density of 175 J/L. The helical-wire-type corona electrode exhibited a NOx desorption efficiency almost 4 times that of the pipe-type electrode, indicating the possibility that corona-generated species play a crucial role in desorption.
Abstract:
The test based on comparison of the characteristic coefficients of the adjacency matrices of the corresponding graphs for detection of isomorphism in kinematic chains has been shown to fail in the case of two pairs of ten-link, simple-jointed chains, one pair corresponding to single-freedom chains and the other pair corresponding to three-freedom chains. An assessment of the merits and demerits of available methods for detection of isomorphism in graphs and kinematic chains is presented, keeping in view the suitability of the methods for use in computerized structural synthesis of kinematic chains. A new test based on the characteristic coefficients of the "degree" matrix of the corresponding graph is proposed for detection of isomorphism in kinematic chains. The new test is found to be successful in a number of examples of graphs where the test based on the characteristic coefficients of the adjacency matrix fails. It has also been found to be successful in distinguishing the structures of all known simple-jointed kinematic chains in the categories of (a) single-freedom chains with up to 10 links, (b) two-freedom chains with up to 9 links and (c) three-freedom chains with up to 10 links.
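The failure mode of the adjacency-coefficient test is easy to demonstrate, since co-spectral non-isomorphic graphs exist. The sketch below compares characteristic coefficients with NumPy on the classic pair K1,4 (the 4-star) versus C4 plus an isolated vertex; the degree-based matrix used here (diagonal degree matrix plus adjacency, i.e. the signless Laplacian) is one simple illustrative choice, not necessarily the "degree" matrix proposed in the paper.

```python
# Characteristic-coefficient tests on co-spectral, non-isomorphic graphs.
import numpy as np

def char_coeffs(M):
    """Coefficients of det(xI - M), highest power first."""
    return np.round(np.poly(M), 6)

def adjacency_test(A1, A2):
    return np.allclose(char_coeffs(A1), char_coeffs(A2))

def degree_test(A1, A2):
    # Degree-augmented matrix D + A (signless Laplacian) -- illustrative only.
    D1 = np.diag(A1.sum(axis=1)) + A1
    D2 = np.diag(A2.sum(axis=1)) + A2
    return np.allclose(char_coeffs(D1), char_coeffs(D2))

star = np.zeros((5, 5)); star[0, 1:] = star[1:, 0] = 1      # K1,4
c4k1 = np.zeros((5, 5))                                      # C4 + isolated vertex
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0)]:
    c4k1[i, j] = c4k1[j, i] = 1

print(adjacency_test(star, c4k1))  # True  -> adjacency test cannot distinguish them
print(degree_test(star, c4k1))     # False -> this degree-based test can
```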