34 results for automatic test case generation
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SoC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. To this end, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Together, the input stimuli and coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
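The idea of pruning invalid scenarios from a constrained-random input space can be illustrated with a small sketch. This is not the authors' tool: the parameter names and the validity constraint below are hypothetical, and Python stands in for the testbench language.

```python
import itertools
import random

# Hypothetical parameter domains for a bus-interface testbench
# (names and constraints are illustrative, not from the paper).
domains = {
    "burst_len": [1, 4, 8, 16],
    "addr_align": [1, 2, 4],
    "mode": ["read", "write"],
}

def is_valid(tc):
    # Example constraint: long bursts require word alignment.
    return not (tc["burst_len"] >= 8 and tc["addr_align"] < 4)

# Enumerate the constrained input space once; stimuli are then drawn
# only from valid scenarios, so no simulation time is spent on
# irrelevant or invalid test cases.
space = [dict(zip(domains, vals))
         for vals in itertools.product(*domains.values())]
valid_space = [tc for tc in space if is_valid(tc)]

def draw_stimulus(rng=random):
    return rng.choice(valid_space)
```

A coverage model built over `valid_space` (rather than over the full cross product) is the analogue of the paper's PD-matched coverage models: coverage goals exist only for scenarios that can actually be generated.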
Abstract:
Mutation testing has been used to assess the quality of test case suites by analyzing their ability to distinguish the artifact under testing from a set of alternative artifacts, the so-called mutants. The mutants are generated from the artifact under testing by applying a set of mutant operators, which produce artifacts with simple syntactical differences. The mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language, named MuDeL (MUtant DEfinition Language), for the definition of mutant operators, aiming not only at automating mutant generation but also at providing precision and formality to the operator definitions. The proposed language is based on concepts from the transformational and logical programming paradigms, as well as on context-free grammar theory. The formal framework of denotational semantics is employed to define the semantics of the MuDeL language. We also describe a system, named mudelgen, developed to support the use of this language. An executable representation of the denotational semantics of the language is used to check the correctness of the implementation of mudelgen. Finally, a mutant generator module is produced, which can be incorporated into a specific mutation tool or environment. (C) 2008 Elsevier Ltd. All rights reserved.
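As a rough illustration of how a mutant operator turns one artifact into a set of syntactically different mutants (MuDeL itself defines operators declaratively; this Python sketch and the chosen operator are only illustrative):

```python
import re

def arithmetic_operator_replacement(source):
    """A toy mutant operator: each occurrence of '+' yields one mutant
    with that occurrence replaced by '-'. Real operators are derived
    from a fault model and applied to the parsed artifact, not to raw
    text."""
    mutants = []
    for m in re.finditer(r"\+", source):
        mutants.append(source[:m.start()] + "-" + source[m.end():])
    return mutants

program = "total = a + b + c"
for mut in arithmetic_operator_replacement(program):
    print(mut)  # one mutant per '+' occurrence
```

A test suite that produces different outputs for the original and for every mutant is said to kill all mutants, which is the quality measure the abstract refers to.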
Abstract:
Carrying information about the microstructure and stress behaviour of ferromagnetic steels, magnetic Barkhausen noise (MBN) has been used as the basis for effective non-destructive testing methods, opening new areas in industrial applications. One of the factors that determines the quality and reliability of MBN analysis is the way information is extracted from the signal. Commonly, simple scalar parameters such as the amplitude maximum and the signal root mean square are used to characterize the information content. This paper presents a new approach based on time-frequency analysis. The experimental test case concerns the use of MBN signals to characterize hardness gradients in an AISI 4140 steel. To that purpose, different time-frequency representations (TFRs) and time-scale representations (TSRs) are assessed: the spectrogram, the Wigner-Ville distribution, the Capongram, the ARgram obtained from an autoregressive model, the scalogram, and the Mellingram obtained from a Mellin transform. It is shown that, due to the nonstationary characteristics of MBN, TFRs can provide a rich and new panorama of these signals. Extraction techniques for some time-frequency parameters are used to enable a diagnostic process. Comparison with results obtained by the classical method highlights the improvement in diagnosis provided by the proposed method.
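The spectrogram named above is the squared magnitude of a windowed Fourier transform slid along the signal. A minimal pure-Python sketch, using a didactic O(N^2) DFT; a real MBN analysis would use an FFT library and carefully chosen window parameters:

```python
import cmath
import math

def stft_magnitudes(signal, win_len, hop):
    """Magnitude spectrogram of a 1-D signal: slide a Hann-windowed
    frame along the signal and take the DFT magnitude of each frame."""
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        seg = signal[start:start + win_len]
        # Hann window reduces spectral leakage at the frame edges.
        win = [s * 0.5 * (1 - math.cos(2 * math.pi * n / (win_len - 1)))
               for n, s in enumerate(seg)]
        spectrum = []
        for k in range(win_len // 2 + 1):
            acc = sum(w * cmath.exp(-2j * math.pi * k * n / win_len)
                      for n, w in enumerate(win))
            spectrum.append(abs(acc))
        frames.append(spectrum)
    return frames

# A sine with 8 cycles per 64-sample window should peak at bin 8.
sig = [math.sin(2 * math.pi * 8 * n / 64) for n in range(128)]
frames = stft_magnitudes(sig, 64, 32)
```

For a nonstationary signal such as MBN, the frequency content of each frame changes over time, which is exactly the information scalar parameters like the RMS discard.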
Abstract:
Quantitation of progesterone (P(4)) in biological fluids is often performed by radioimmunoassay (RIA), whereas liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) has been used much less often. Due to its autoconfirmatory nature, LC-MS/MS greatly minimizes false positives and interference. Herein we report, and compare with RIA, an optimized LC-MS/MS method for rapid, efficient, and cost-effective quantitation of P(4) in cattle plasma with no sample derivatization. As a test case, the quantitation of plasma P(4) released from three nonbiodegradable, commercial, intravaginal P(4)-releasing devices (IPRD) over 192 h in six ovariectomized cows was compared in a pairwise study. Both techniques showed similar P(4) kinetics (P > 0.05), whereas P(4) quantitation by RIA was consistently higher than by LC-MS/MS (P < 0.05) due to interference and matrix effects. The LC-MS/MS method was validated according to the recommended analytical standards and displayed P(4) limits of detection (LOD) and quantitation (LOQ) of 0.08 and 0.25 ng/mL, respectively. The highly selective LC-MS/MS method proposed herein for P(4) quantitation eliminates the risks associated with radioactive handling; it also requires no sample derivatization, which is a common requirement for LC-MS/MS quantitation of steroid hormones. Its application to multisteroid assays is also viable, and it is envisaged that it may provide a gold standard technique for hormone quantitation in animal reproductive science studies. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
Background. Visceral leishmaniasis (VL) is caused by Leishmania donovani and Leishmania infantum chagasi. Genome-wide linkage studies from Sudan and Brazil identified a putative susceptibility locus on chromosome 6q27. Methods. Twenty-two single-nucleotide polymorphisms (SNPs) at genes PHF10, C6orf70, DLL1, FAM120B, PSMB1, and TBP were genotyped in 193 VL cases from 85 Sudanese families, and 8 SNPs at genes PHF10, C6orf70, DLL1, PSMB1, and TBP were genotyped in 194 VL cases from 80 Brazilian families. Family-based association, haplotype, and linkage disequilibrium analyses were performed. Multispecies comparative sequence analysis was used to identify conserved noncoding sequences carrying putative regulatory elements. Quantitative reverse-transcription polymerase chain reaction measured expression of candidate genes in splenic aspirates from Indian patients with VL compared with that in the control spleen sample. Results. Positive associations were observed at PHF10, C6orf70, DLL1, PSMB1, and TBP in Sudan, but only at DLL1 in Brazil (combined P = 3 x 10^-4 at DLL1 across Sudan and Brazil). No functional coding region variants were observed in resequencing of 22 Sudanese VL cases. DLL1 expression was significantly (P = 2 x 10^-7) reduced (mean fold change, 3.5 [SEM, 0.7]) in splenic aspirates from patients with VL, whereas other 6q27 genes showed higher levels (1.27 x 10^-6 < P < .01) than did the control spleen sample. A cluster of conserved noncoding sequences with putative regulatory variants was identified in the distal promoter of DLL1. Conclusions. DLL1, which encodes Delta-like 1, the ligand for Notch3, is strongly implicated as the chromosome 6q27 VL susceptibility gene.
Abstract:
Unveiling the mechanisms of energy relaxation in biomolecules is key to our understanding of protein stability, allostery, intramolecular signaling, and long-lasting quantum coherence phenomena at ambient temperatures. Yet, the relationship between the pathways of energy transfer and the functional role of the residues involved remains largely unknown. Here, we develop a simulation method of mapping out residues that are highly efficient in relaxing an initially localized excess vibrational energy and perform site-directed mutagenesis functional assays to assess the relevance of these residues to protein function. We use the ligand binding domains of thyroid hormone receptor (TR) subtypes as a test case and find that conserved arginines, which are critical to TR transactivation function, are the most effective heat diffusers across the protein structure. These results suggest a hitherto unsuspected connection between a residue's ability to mediate intramolecular vibrational energy redistribution and its functional relevance.
Abstract:
We study the reconstruction of visual stimuli from spike trains, representing the reconstructed stimulus by a Volterra series up to second order. We illustrate this procedure in a prominent example of spiking neurons, recording simultaneously from the two H1 neurons located in the lobula plate of the fly Chrysomya megacephala. The fly views two types of stimuli, corresponding to rotational and translational displacements. Second-order reconstructions require the manipulation of potentially very large matrices, which obstructs the use of this approach when there are many neurons. We avoid the computation and inversion of these matrices by expanding our variables in a convenient set of basis functions. This requires approximating the spike train four-point functions by combinations of two-point functions, through relations that would be exact for Gaussian stochastic processes. In our test case, this approximation does not reduce the quality of the reconstruction. The overall contribution of the second-order kernels to stimulus reconstruction, measured by the mean squared error, is only about 5% of the first-order contribution. Yet at specific stimulus-dependent instants, the addition of second-order kernels represents up to a 100% improvement, but only for rotational stimuli. We present a perturbative scheme to facilitate the application of our method to weakly correlated neurons.
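The Gaussian approximation invoked above is Wick's theorem: for a Gaussian process, a 2n-point function factorizes into a sum over all pair partitions of products of two-point functions. A small sketch (a hypothetical helper, not the authors' code) enumerating those pairings:

```python
def pairings(indices):
    """Enumerate all ways to split an even-length index list into
    unordered pairs. Under the Gaussian approximation, a 2n-point
    function is the sum over these pairings of products of two-point
    functions; for four points this gives the familiar three terms
    <12><34> + <13><24> + <14><23>."""
    if not indices:
        return [[]]
    first, rest = indices[0], indices[1:]
    result = []
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            result.append([(first, partner)] + sub)
    return result

four_point = pairings([1, 2, 3, 4])
# Three pairings: (1,2)(3,4), (1,3)(2,4), (1,4)(2,3)
```

The number of pairings grows as (2n-1)!! (3 for four points, 15 for six), which is why replacing measured four-point functions by these two-point combinations keeps the second-order reconstruction tractable.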
Abstract:
The immersed boundary method is a versatile tool for the investigation of flow-structure interaction. In a large number of applications, the immersed boundaries or structures are very stiff, and strong tangential forces on these interfaces induce a well-known, severe time-step restriction for explicit discretizations. This excessive stability constraint can be removed with fully implicit or suitable semi-implicit schemes, but at a seemingly prohibitive computational cost. While economical alternatives have been proposed recently for some special cases, there is a practical need for a computationally efficient approach that can be applied more broadly. In this context, we revisit a robust semi-implicit discretization introduced by Peskin in the late 1970s which has received renewed attention recently. This discretization, in which the spreading and interpolation operators are lagged, leads to a linear system of equations for the interface configuration at the future time, when the interfacial force is linear. However, this linear system is large and dense, and thus it is challenging to streamline its solution. Moreover, while the same linear system or one of similar structure could potentially be used in Newton-type iterations, nonlinear and highly stiff immersed structures pose additional challenges to iterative methods. In this work, we address these problems and propose cost-effective computational strategies for solving Peskin's lagged-operators type of discretization. We do this by first constructing a sufficiently accurate approximation to the system's matrix, and we obtain a rigorous estimate for this approximation. This matrix is expeditiously computed by using a combination of pre-calculated values and interpolation. The availability of a matrix allows for more efficient matrix-vector products and facilitates the design of effective iterative schemes.
We propose efficient iterative approaches to deal with both linear and nonlinear interfacial forces and simple or complex immersed structures with tethered or untethered points. One of these iterative approaches employs a splitting in which we first solve a linear problem for the interfacial force and then we use a nonlinear iteration to find the interface configuration corresponding to this force. We demonstrate that the proposed approach is several orders of magnitude more efficient than the standard explicit method. In addition to considering the standard elliptical drop test case, we show both the robustness and efficacy of the proposed methodology with a 2D model of a heart valve. (C) 2009 Elsevier Inc. All rights reserved.
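The stability constraint that motivates the semi-implicit treatment can be seen in a scalar stiff model. The sketch below is not the paper's scheme; it only contrasts explicit and implicit Euler on y' = -λy, where explicit stepping is stable only for Δt < 2/λ:

```python
def explicit_euler(lam, dt, steps, y0=1.0):
    """y' = -lam*y, a scalar stand-in for a stiff immersed structure.
    Explicit Euler multiplies y by (1 - dt*lam) each step, so it is
    stable only when dt < 2/lam."""
    y = y0
    for _ in range(steps):
        y = y - dt * lam * y
    return y

def implicit_euler(lam, dt, steps, y0=1.0):
    """Implicit Euler divides y by (1 + dt*lam) each step: stable for
    any dt, at the cost of solving a (here trivial) system per step.
    Making that solve cheap is the point of the strategies above."""
    y = y0
    for _ in range(steps):
        y = y / (1.0 + dt * lam)
    return y

lam, dt = 1000.0, 0.01  # dt far above the explicit limit 2/lam = 0.002
```

With these values the explicit solution blows up by a factor of 9 per step while the implicit one decays, mirroring the trade-off between the excessive explicit time-step restriction and the per-step cost of implicit solves.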
Abstract:
This paper proposes an approach based on optimal sensitivity applied to the tertiary loop of automatic generation control. The approach rests on the theorem of non-linear perturbation. From an optimal operating point obtained by an optimal power flow, a new optimal operating point is determined directly after a perturbation, i.e., without the need for an iterative process. This new optimal operating point satisfies the constraints of the problem for small perturbations in the loads. The participation factors and the voltage set points of the automatic voltage regulators (AVR) of the generators are determined by the optimal sensitivity technique, considering the effects of active power loss minimization and the network constraints. The participation factors and voltage set points of the generators are supplied directly to a computational program for dynamic simulation of automatic generation control, named the power sensitivity mode. Test results are presented to show the good performance of this approach. (C) 2008 Elsevier B.V. All rights reserved.
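The non-iterative update can be sketched on a toy quadratic dispatch problem (illustrative coefficients, not the paper's system): the sensitivity of the optimum with respect to the load gives the new optimal point directly after a perturbation, with no re-optimization.

```python
def dispatch(a1, a2, load):
    """Closed-form optimum of min a1*x1^2 + a2*x2^2 s.t. x1 + x2 = load,
    a toy stand-in for the optimal power flow. Equal marginal costs
    give x1 = a2/(a1 + a2) * load."""
    x1 = a2 / (a1 + a2) * load
    return x1, load - x1

a1, a2, load, d_load = 0.5, 1.0, 100.0, 5.0
x1, x2 = dispatch(a1, a2, load)

# Optimal-sensitivity update: dx1/d(load) = a2/(a1 + a2), so the new
# optimum after a small load perturbation follows directly.
s1 = a2 / (a1 + a2)
x1_pred = x1 + s1 * d_load
x1_exact, _ = dispatch(a1, a2, load + d_load)
```

In this linear-sensitivity toy the prediction is exact; in a real optimal power flow the update is a first-order approximation valid for small load perturbations, which is the regime the abstract describes.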
Abstract:
In this paper, we consider a classical problem of complete test generation for deterministic finite-state machines (FSMs) in a more general setting. The first generalization is that the number of states in implementation FSMs can even be smaller than that of the specification FSM. Previous work deals only with the case in which the implementation FSMs are allowed to have the same number of states as the specification FSM. This generalization provides more options to the test designer: when traditional methods trigger a test explosion for large specification machines, tests with a lower, but still guaranteed, fault coverage can still be generated. The second generalization is that tests can be generated starting from a user-defined test suite, by incrementally extending it until the desired fault coverage is achieved. Solving the generalized test derivation problem, we formulate sufficient conditions for test suite completeness that are weaker than the existing ones and use them to elaborate an algorithm that can be used both for extending user-defined test suites to achieve the desired fault coverage and for test generation. We present experimental results indicating that the proposed algorithm makes it possible to trade off test suite length against fault coverage.
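Fault coverage in this setting means that the test suite's outputs distinguish the specification FSM from every faulty implementation in the fault model. A minimal sketch with toy Mealy machines (illustrative, not the paper's algorithm):

```python
def run(fsm, start, inputs):
    """Apply an input sequence to a deterministic Mealy FSM given as
    {(state, input): (next_state, output)}; return the output sequence."""
    state, outputs = start, []
    for x in inputs:
        state, out = fsm[(state, x)]
        outputs.append(out)
    return outputs

def distinguishes(suite, spec, impl, start="s0"):
    """A test suite 'kills' a faulty implementation if some test yields
    an output sequence different from the specification's."""
    return any(run(spec, start, t) != run(impl, start, t) for t in suite)

# Toy specification FSM and one faulty implementation with a single
# corrupted output (states and inputs are illustrative).
spec = {("s0", "a"): ("s1", 0), ("s0", "b"): ("s0", 0),
        ("s1", "a"): ("s0", 1), ("s1", "b"): ("s1", 0)}
mutant = dict(spec)
mutant[("s1", "a")] = ("s0", 0)  # output fault

suite = [["a", "a"], ["b", "a"]]
```

A complete test suite is one for which `distinguishes` holds for every implementation in the fault domain; the paper's incremental generation extends a user-defined `suite` until that condition (or a weaker, guaranteed-coverage variant) is met.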
Abstract:
The design of supplementary damping controllers to mitigate the effects of electromechanical oscillations in power systems is a highly complex and time-consuming process, which requires a significant amount of knowledge on the part of the designer. In this study, the authors propose an automatic technique that takes the burden of tuning the controller parameters away from the power engineer and places it on the computer. Unlike other approaches that do the same based on robust control theories or evolutionary computing techniques, our proposed procedure uses an optimisation algorithm that works over a formulation of the classical tuning problem in terms of bilinear matrix inequalities. Using this formulation, it is possible to apply linear matrix inequality solvers to find a solution to the tuning problem via an iterative process, with the advantage that these solvers are widely available and have well-known convergence properties. The proposed algorithm is applied to tune the parameters of supplementary controllers for thyristor-controlled series capacitors placed in the New England/New York benchmark test system, aiming at the improvement of the damping factor of inter-area modes under several different operating conditions. The results of the linear analysis are validated by non-linear simulation and demonstrate the effectiveness of the proposed procedure.
Abstract:
This paper presents two strategies for the upgrade of set-up generation systems for tandem cold mills. Even though these mills have been modernized mainly due to quality requirements, their upgrades may be undertaken with the aim of replacing pre-calculated reference tables. For this case, the Bryant and Osborn mill model without an adaptive technique is proposed. For a more demanding modernization, the Bland and Ford model including adaptation is recommended, although it requires more complex computational hardware. The advantages and disadvantages of the two systems are compared and discussed, and experimental results obtained from an industrial cold mill are shown.
Abstract:
This paper describes an automatic device for in situ and continuous monitoring of the ageing process occurring in natural and synthetic resins widely used in art and in the conservation and restoration of cultural artefacts. The results of tests carried out under accelerated ageing conditions are also presented. This easy-to-assemble palm-top device essentially consists of oscillators based on quartz crystal resonators coated with films of the organic materials whose response to environmental stress is to be addressed. The device contains a microcontroller which, at pre-defined time intervals, selects the oscillators and records and stores their oscillation frequency. The ageing of the coatings, caused by the environmental stress and resulting in a shift in the oscillation frequency of the modified crystals, can be straightforwardly monitored in this way. The kinetics of this process reflects the level of damage risk associated with a specific microenvironment. In this case, natural and artificial resins broadly employed in art and in the restoration of artistic and archaeological artefacts (dammar and Paraloid B72) were applied onto the crystals. The environmental stress was represented by visible and UV radiation, since the chosen materials are known to be photochemically active to different extents. In the case of dammar, the results obtained are consistent with previous data obtained using bench-top equipment by impedance analysis through discrete measurements, and confirm that the ageing of this material is reflected in the gravimetric response of the modified quartz crystals. As for Paraloid B72, the outcome of the assays indicates that the resin is resistant to visible light but very sensitive to UV irradiation. The use of a continuous monitoring system, apart from being obviously more practical, is essential to identify short-term (i.e. reversible) events, like water vapour adsorption/desorption processes, and to highlight ageing trends or sudden changes in such trends. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
A case-control study was carried out in litters of 1- to 7-day-old piglets to identify the main infectious agents involved in neonatal diarrhea in pigs. Fecal samples (n=276) from piglets were collected on pig farms in the State of Rio Grande do Sul, Brazil, from May to September 2007. Litters with diarrhea were considered cases (n=129) and normal litters (n=147) controls. The samples were examined by latex agglutination test, PAGE, conventional isolation techniques, ELISA, PCR, and microscopic methods in order to detect rotavirus, bacterial pathogens (Escherichia coli, Clostridium perfringens types A and C, and Clostridium difficile), and parasites (Coccidia and Cryptosporidium spp.). Outbreaks of diarrhea were not observed during sampling. At least one agent was detected in fecal samples on 25 of the 28 farms (89.3%), and on 16 farms (57.1%) more than one agent was found. The agents most frequently diagnosed were Coccidia (42.86%) and rotavirus (39.29%). The main agents identified in litters with diarrhea were Clostridium difficile (10.6%), Clostridium perfringens type A (8.8%), and rotavirus (7.5%); in control litters, they were Clostridium difficile (16.6%) and Coccidia (8.5%). Beta-hemolytic Escherichia coli and Clostridium perfringens type C were not detected. When compared with controls, no agent was significantly associated with diarrhea in case litters. These findings stress the need for caution in the interpretation of laboratory diagnoses of mild diarrhea in neonatal pigs, as the mere detection of an agent does not necessarily indicate that it is the cause of the problem.
Abstract:
Background: In areas with limited infrastructure for microscopy diagnosis, rapid diagnostic tests (RDT) have been demonstrated to be effective. Method: The cost-effectiveness of the OptiMAL (R) test and thick smear microscopy was estimated and compared. Data were collected in remote areas of 12 municipalities in the Brazilian Amazon. Data sources included the National Malaria Control Programme of the Ministry of Health, the National Healthcare System reimbursement table, hospitalization records, primary data collected from the municipalities, and the scientific literature. The perspective was that of the Brazilian public health system, the analytical horizon extended from the onset of fever until the diagnostic result was provided to the patient, and the temporal reference was the year 2006. The results were expressed as costs per adequately diagnosed case in 2006 U.S. dollars. Sensitivity analysis was performed considering key model parameters. Results: In the base case scenario, considering 92% and 95% sensitivity of thick smear microscopy for Plasmodium falciparum and Plasmodium vivax, respectively, and 100% specificity for both species, thick smear microscopy is more costly and more effective, with an incremental cost estimated at US$ 549.9 per adequately diagnosed case. In the sensitivity analysis, when the sensitivity and specificity of microscopy for P. vivax were 0.90 and 0.98, respectively, and when its sensitivity for P. falciparum was 0.83, the RDT was more cost-effective than microscopy. Conclusion: Microscopy is more cost-effective than OptiMAL (R) in these remote areas if high accuracy of microscopy is maintained in the field. The decision regarding the use of rapid tests for diagnosis of malaria in these areas depends on the current accuracy of microscopy in the field.
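The incremental cost quoted in the results is an incremental cost-effectiveness ratio (ICER). A minimal sketch with illustrative numbers (not the study's actual cost and effectiveness inputs):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio: extra cost per extra unit
    of effect -- here, per additional adequately diagnosed case."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical inputs: the more effective strategy costs US$ 200 more
# and adequately diagnoses 5 more cases per 100 patients.
extra_cost_per_case = icer(cost_new=1200.0, effect_new=0.95,
                           cost_old=1000.0, effect_old=0.90)
```

A strategy is adopted when its ICER falls below the decision maker's willingness-to-pay threshold, which is why the conclusion hinges on whether microscopy's field accuracy, and hence its incremental effectiveness, holds up.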