916 results for automatic test case generation
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
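As a rough illustration of the classification step described above, the following Python sketch trains a logistic-regression feasibility classifier on bag-of-event-ID features. The toy data, the feature construction, and the use of scikit-learn are illustrative assumptions, not the tooling developed in the thesis.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

def event_id_features(test_case):
    # Count occurrences of each unique event ID in a test case (a list of event IDs).
    counts = {}
    for event_id in test_case:
        counts[event_id] = counts.get(event_id, 0) + 1
    return counts

# Hypothetical data: each test case is a sequence of event IDs plus a feasibility label
# (1 = executable, 0 = infeasible).
test_cases = [["e1", "e7", "e3"], ["e1", "e2"], ["e7", "e7", "e9"],
              ["e2", "e3", "e4"], ["e9", "e1"], ["e7", "e9"]]
feasible = [1, 1, 0, 1, 1, 0]

vec = DictVectorizer(sparse=False)
X = vec.fit_transform(event_id_features(tc) for tc in test_cases)

# In the study the model is trained on a randomly selected training suite;
# here we simply fit on the toy data to show the mechanics.
clf = LogisticRegression(max_iter=1000).fit(X, feasible)
print(confusion_matrix(feasible, clf.predict(X)))
```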
Abstract:
In the near future, the LHC experiments will continue to be upgraded as the LHC luminosity increases from the design value of 10³⁴ to 7.5 × 10³⁴ with the HL-LHC project, to reach 3000 fb⁻¹ of accumulated statistics. After the end of the data-collection period, CERN will face a long shutdown to improve overall performance by upgrading the experiments and implementing more advanced technologies and infrastructures. In particular, ATLAS will upgrade parts of the detector, the trigger, and the data acquisition system. It will also implement new strategies and algorithms for processing and transferring the data to the final storage. This PhD thesis presents a study of a new pattern recognition algorithm to be used in the trigger system, which is software designed to provide the information necessary to select physical events from background data. The idea is to use the well-known Hough Transform as an algorithm for detecting particle trajectories. The effectiveness of the algorithm has already been validated in the past, independently of particle physics applications, to detect generic shapes in images. Here, a software emulation tool is proposed for the hardware implementation of the Hough Transform, to reconstruct the tracks in the ATLAS Trigger and Data Acquisition system. Until now, it has never been implemented in electronics in particle physics experiments, and as a hardware implementation it would provide overall latency benefits. A comparison between the simulated data and the physical system was performed on a Xilinx UltraScale+ FPGA device.
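For readers unfamiliar with the technique, the sketch below shows the classic line-detecting Hough Transform over 2-D hit coordinates: every hit votes in a (theta, r) accumulator and peaks correspond to candidate straight tracks. This is a plain NumPy illustration of the underlying idea, not the FPGA firmware or the ATLAS emulation tool; the binning parameters and toy hits are arbitrary assumptions.

```python
import numpy as np

def hough_accumulator(points, n_theta=180, n_r=200, r_max=10.0):
    # Fill a (theta, r) accumulator with votes from 2-D hit coordinates.
    # Each hit (x, y) votes for every line r = x*cos(theta) + y*sin(theta)
    # passing through it; peaks in the accumulator are candidate tracks.
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=np.int32)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)              # one r per theta bin
        r_bins = np.round((r + r_max) / (2 * r_max) * (n_r - 1)).astype(int)
        valid = (r_bins >= 0) & (r_bins < n_r)
        acc[np.arange(n_theta)[valid], r_bins[valid]] += 1
    return acc, thetas

# Toy example: hits lying (approximately) on a straight track.
hits = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
acc, thetas = hough_accumulator(hits)
i_theta, i_r = np.unravel_index(np.argmax(acc), acc.shape)
print("strongest line candidate: theta bin", i_theta, "r bin", i_r, "votes", acc[i_theta, i_r])
```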
Abstract:
Recent advances in CMOS technology have allowed the fabrication of transistors with submicron dimensions, making possible the integration of tens of millions of devices on a single chip that can be used to build very complex electronic systems. This increase in design complexity has created a need for more efficient verification tools that incorporate more appropriate physical and computational models. Timing verification aims at determining whether the timing constraints imposed on the design can be satisfied. It can be performed by circuit simulation or by timing analysis. Although simulation tends to furnish the most accurate estimates, it has the drawback of being stimuli dependent. Hence, in order to ensure that the critical situation is taken into account, one must exercise all possible input patterns; obviously, this is not feasible given the high complexity of current designs. To circumvent this problem, designers must rely on timing analysis. Timing analysis is an input-independent verification approach that models each combinational block of a circuit as a directed acyclic graph, which is used to estimate the critical delay. The first timing analysis tools used only circuit topology information to estimate circuit delay, and are thus referred to as topological timing analyzers. However, such methods may produce overly pessimistic delay estimates, since the longest paths in the graph may not be able to propagate a transition, that is, they may be false. Functional timing analysis, in turn, considers not only circuit topology but also the temporal and functional relations between circuit elements. Functional timing analysis tools may differ in three aspects: the set of sensitization conditions necessary to declare a path sensitizable (the so-called path sensitization criterion), the number of paths handled simultaneously, and the method used to determine whether the sensitization conditions are satisfiable. Currently, the two most efficient approaches test the sensitizability of entire sets of paths at a time: one is based on automatic test pattern generation (ATPG) techniques, and the other translates the timing analysis problem into a satisfiability (SAT) problem. Although timing analysis has been studied exhaustively over the last fifteen years, some specific topics have not yet received the required attention. One such topic is the applicability of functional timing analysis to circuits containing complex gates. This is the central concern of this thesis. In addition, and as a necessary step to set the scene, a detailed and systematic study of functional timing analysis is also presented.
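The topological step described above amounts to a longest-path computation over the circuit graph. The following sketch, with a hypothetical three-gate circuit and made-up delays, computes worst-case arrival times in exactly that way; functional timing analysis would additionally discard false paths, which this sketch does not attempt.

```python
from functools import lru_cache

# Hypothetical combinational block as a DAG: gate -> (delay, list of fan-in gates).
# Primary inputs have no fan-ins and zero delay.
circuit = {
    "a": (0.0, []), "b": (0.0, []), "c": (0.0, []),
    "g1": (1.2, ["a", "b"]),
    "g2": (0.8, ["b", "c"]),
    "g3": (1.5, ["g1", "g2"]),   # primary output
}

@lru_cache(maxsize=None)
def arrival_time(node):
    # Topological (worst-case) arrival time: gate delay plus the latest fan-in arrival.
    delay, fanins = circuit[node]
    return delay + max((arrival_time(f) for f in fanins), default=0.0)

# The topological critical delay is the largest arrival time over primary outputs.
# A functional analyzer would also check that the corresponding longest path can
# actually propagate a transition (i.e., is not a false path).
print(arrival_time("g3"))
```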
Abstract:
Formal methods and software testing are tools to obtain and control software quality. When used together, they provide mechanisms for software specification, verification, and error detection. Even though formal methods allow software to be mathematically verified, they are not enough to assure that a system is free of faults; thus, software testing techniques are necessary to complement the verification and validation process. Model-based testing techniques allow tests to be generated from other software artifacts such as specifications and abstract models. Using formal specifications as the basis for test creation, we can generate better-quality tests, because these specifications are usually precise and free of ambiguity. Fernanda Souza (2009) proposed a method to define test cases from B Method specifications. This method used information from the machine's invariant and the operation's precondition to define positive and negative test cases for an operation, using techniques based on equivalence class partitioning and boundary value analysis. However, the method proposed in 2009 was not automated and had conceptual deficiencies; for instance, it did not fit into a well-defined coverage criteria classification. We started our work with a case study applying the method to an example of a B specification from industry. Based on this case study, we obtained insights to improve the method. In our work we evolved the proposed method, rewriting it and adding characteristics to make it compatible with a test classification used by the community. We also improved the method to support specifications structured in different components, to use information from the operation's behavior in the test case generation process, and to use new coverage criteria. Moreover, we implemented a tool to automate the method and applied it to more complex case studies.
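As a small illustration of the equivalence class partitioning and boundary value analysis ideas this method builds on (not the authors' tool, and using a hypothetical precondition), one might derive positive and negative test inputs as follows:

```python
def partition_tests(lower, upper):
    # Derive positive and negative test values for a precondition lower <= x <= upper,
    # using equivalence class partitioning and boundary value analysis.
    midpoint = (lower + upper) // 2
    positive = {lower, lower + 1, midpoint, upper - 1, upper}   # valid class plus boundaries
    negative = {lower - 1, upper + 1}                           # just outside each boundary
    return sorted(positive), sorted(negative)

# Hypothetical operation precondition from a B machine: 0 <= seats <= 100.
valid_cases, invalid_cases = partition_tests(0, 100)
print("positive test inputs:", valid_cases)
print("negative test inputs:", invalid_cases)
```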
Abstract:
The behavior of composed Web services depends on the results of the invoked services; unexpected behavior of one of the invoked services can threaten the correct execution of an entire composition. This paper proposes an event-based approach to black-box testing of Web service compositions based on event sequence graphs, which are extended by facilities to deal not only with service behavior under regular circumstances (i.e., where cooperating services are working as expected) but also with their behavior in undesirable situations (i.e., where cooperating services are not working as expected). Furthermore, the approach can be used independently of artifacts (e.g., Business Process Execution Language) or type of composition (orchestration/choreography). A large case study, based on a commercial Web application, demonstrates the feasibility of the approach and analyzes its characteristics. Test generation and execution are supported by dedicated tools. In particular, the use of an enterprise service bus for test execution is noteworthy and differs from other approaches. The results of the case study suggest that the new approach has the power to detect faults systematically, performing properly even with complex and large compositions. Copyright © 2012 John Wiley & Sons, Ltd.
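To make the event-sequence-graph idea concrete, the following sketch derives one test sequence per edge of a small, hypothetical graph by prefixing each edge with a shortest path from the entry event. The graph, the event names, and the simplification of not forcing sequences to end at an exit event are all illustrative assumptions rather than the paper's algorithm.

```python
from collections import deque

# Hypothetical event sequence graph of a Web service composition:
# nodes are events (service invocations), edges are allowed "followed by" relations.
START = "["
esg = {
    START: ["login"],
    "login": ["search", "logout"],
    "search": ["book", "search"],
    "book": ["pay", "cancel"],     # "cancel" models an undesirable situation
    "pay": ["logout"],
    "cancel": ["logout"],
    "logout": [],
}

def shortest_path(graph, source, target):
    # Plain BFS; returns the list of nodes from source to target.
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# One test sequence per edge: reach the edge's source from the entry, then take the edge.
tests = []
for src, successors in esg.items():
    for dst in successors:
        prefix = shortest_path(esg, START, src)
        tests.append([e for e in prefix + [dst] if e != START])

for t in tests:
    print(" -> ".join(t))
```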
Abstract:
With the ever-growing adoption of smartphones and tablets, Android is becoming more and more popular every day. With more than one billion active users to date, Android is the leading technology in the smartphone arena. In addition, Android also runs on Android TV, Android smart watches, and cars. Therefore, in recent years, Android applications have become one of the major development sectors in the software industry. As of mid 2013, the number of published applications on Google Play had exceeded one million, and the cumulative number of downloads was more than 50 billion. A 2013 survey also revealed that 71% of mobile application developers work on developing Android applications. Given this scale, it is quite evident that people rely on these applications on a daily basis for everything from simple tasks like keeping track of the weather to rather complex tasks like managing one's bank accounts. Hence, like every other kind of code, Android code also needs to be verified in order to work properly and achieve a certain confidence level. Because of the sheer number of applications, it becomes really hard to test Android applications manually, especially when they have to be verified for various versions of the OS and for various device configurations such as different screen sizes and different hardware availability. Hence, in recent years there has been a lot of work in the computer science community on developing different testing methods for Android applications. Android attracts researchers because of its open-source nature: research is more streamlined when the code for both the application and the platform is readily available for analysis. There has therefore been a great deal of research in testing and static analysis of Android applications, much of it focused on test input generation. As a result, several testing tools are now available that focus on the automatic generation of test cases for Android applications. These tools differ from one another in the strategies and heuristics they use to generate test cases. However, very little work has been done on comparing these testing tools and the strategies they use. Recently, some research has been carried out in this regard, comparing the performance of various available tools with respect to code coverage, fault detection, ability to work on multiple platforms, and ease of use, by running the tools on a total of 60 real-world Android applications. The results of this research showed that, although effective, the strategies used by these tools also face limitations and hence have room for improvement. The purpose of this thesis is to extend this research in a more specific, attribute-oriented way. Attributes refer to the tasks that can be completed using the Android platform, ranging from a basic system call for receiving an SMS to more complex tasks like sending the user from the current application to another one. The idea is to develop a benchmark for Android testing tools based on their performance with respect to these attributes, allowing the tools to be compared attribute by attribute. For example, if an application plays an audio file, will the testing tool be able to generate a test input that triggers the playback of that audio file?
Using multiple applications that exercise different attributes, it becomes visible which testing tool is more useful for which kinds of attributes. In this thesis, 9 attributes covering basic kinds of tasks were targeted for the assessment of three testing tools; later, this can be extended to many more attributes to compare even more testing tools. The aim of this work is to show that this approach is effective and can be used on a much larger scale. One of the flagship features of this work, which also differentiates it from previous work, is that the applications used were all specially made for this research. The reason is to analyze in isolation just the specific attribute on which each application focuses, and not allow the tool to be bottlenecked by something trivial that is not the main attribute under test. This means 9 applications, each focused on one specific attribute. The main contributions of this thesis are:
• A summary of the three existing testing tools and their respective techniques for automatic test input generation for Android applications.
• A detailed study of the usage of these testing tools on the 9 applications specially designed and developed for this study.
• An analysis of the results of the study and a comparison of the performance of the selected tools.
Abstract:
Model-Based Testing (MBT) has emerged as a promising strategy to minimize problems related to the lack of time and resources in software testing, and aims to check whether the implementation under test conforms to its specification. Test cases are generated automatically from behavioral models produced during the software development cycle. Among the existing modeling techniques, Input/Output Transition Systems (IOTSs) are widely used in MBT because they are more expressive than Finite State Machines (FSMs). Despite the existing methods for test generation from IOTSs, the test case selection problem remains a difficult and important topic. Existing methods for IOTSs are non-deterministic, unlike the existing theory for FSMs, which provides a guarantee of complete coverage based on a fault model. This thesis investigates the application of fault models to deterministic test generation methods for IOTSs. A method for generating test suites based on the W method for FSMs is proposed. The method generates test suites deterministically and satisfies sufficiency conditions for coverage of the specification and of all faults in the defined fault domain. Empirical studies evaluated the applicability and effectiveness of the proposed method: experimental results analyzing the cost of test suite generation using randomly generated IOTSs, and a case study with industrial specifications, show the effectiveness of the generated suites in comparison with Tretmans' traditional method.
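For context on the classical FSM theory that the thesis adapts, the sketch below assembles a W-method-style test suite for a hypothetical two-state FSM by concatenating a transition cover with a characterization set W, under the usual assumption that the implementation has no extra states. The FSM and W are made up for illustration and are not taken from the thesis.

```python
from itertools import product

# Hypothetical deterministic FSM: (state, input) -> (next_state, output).
fsm = {
    ("s0", "a"): ("s1", "0"), ("s0", "b"): ("s0", "1"),
    ("s1", "a"): ("s0", "1"), ("s1", "b"): ("s1", "0"),
}
inputs, initial = ["a", "b"], "s0"

def run(seq):
    # Final state reached from the initial state after applying an input sequence.
    state = initial
    for x in seq:
        state, _ = fsm[(state, x)]
    return state

def transition_cover():
    # Breadth-first state cover extended by every input: a prefix-closed set of
    # input sequences that reaches every state and exercises every transition.
    cover, frontier, visited = [()], [()], {initial}
    while frontier:
        seq = frontier.pop(0)
        for x in inputs:
            ext = seq + (x,)
            cover.append(ext)
            if run(ext) not in visited:
                visited.add(run(ext))
                frontier.append(ext)
    return cover

# Characterization set W: input sequences whose outputs distinguish every pair of
# states (assumed given here; "a" separates s0 and s1 by its output).
W = [("a",)]

# W-method test suite (no extra implementation states assumed): P concatenated with W.
suite = sorted({p + w for p, w in product(transition_cover(), W)})
for test in suite:
    print(".".join(test))
```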
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2015.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug, but, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit exactly the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulus and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
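The parameter-domain idea can be pictured as constrained-random generation that never emits invalid scenarios. The Python sketch below uses a hypothetical configuration space and constraint; it only illustrates the principle and is not the PD formalism or the generated stimuli sources themselves.

```python
import random

# Hypothetical parameter domains for a DUT configuration (illustrative, not the PD tool's syntax).
domains = {
    "burst_len": [1, 2, 4, 8, 16],
    "addr_mode": ["aligned", "unaligned"],
    "cacheable": [True, False],
}

def valid(stimulus):
    # Constraint removing invalid scenarios from the input space:
    # unaligned bursts longer than 4 are not supported by the (hypothetical) DUT.
    return not (stimulus["addr_mode"] == "unaligned" and stimulus["burst_len"] > 4)

def constrained_random_stimuli(n, seed=0):
    rng = random.Random(seed)
    produced = 0
    while produced < n:
        stimulus = {name: rng.choice(values) for name, values in domains.items()}
        if valid(stimulus):          # invalid test-case scenarios never reach the DUT
            produced += 1
            yield stimulus

for s in constrained_random_stimuli(5):
    print(s)
```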
Abstract:
The 3G SGSN (3rd Generation Serving GPRS Support Node) is a network element of the UMTS (Universal Mobile Telecommunications System) packet network that acts as a link between an external network (such as the Internet) and the radio network. The 3G SGSN maintains subscriber and location data as the terminal moves within the network, and also forwards data between the external network and the mobile terminal. The subject of this Master's thesis is the automation of 3G SGSN functional testing. In this work, a test automation system suitable for 3G SGSN functional testing is developed in a Linux environment using the AuTS (Automated Test Sequences) automation tool. The development work includes the design and implementation of the system. The implemented automation system executes session management and mobility management test cases of 3G SGSN functional testing. From the results of the executed test cases, an HTML (Hypertext Markup Language) report is generated, containing information and analyses of the executed test cases as well as links to the test case logs. Good automation requires design that takes usability and maintainability into account, and particular attention has been paid to these properties in the implementation. The end result is a working automation system that demonstrates that automation brings considerable benefits to the efficiency of functional testing. The thesis also discusses software testing and test automation and their principles.
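In the spirit of the HTML report described above (though not the AuTS tool itself), a minimal report generator might look like the following sketch; the test names, statuses, and log paths are hypothetical.

```python
import html

# Hypothetical results of executed session- and mobility-management test cases.
results = [
    {"name": "attach_detach", "status": "PASS", "log": "logs/attach_detach.log"},
    {"name": "routing_area_update", "status": "FAIL", "log": "logs/routing_area_update.log"},
]

rows = "\n".join(
    "<tr><td>{0}</td><td>{1}</td><td><a href='{2}'>log</a></td></tr>".format(
        html.escape(r["name"]), r["status"], html.escape(r["log"]))
    for r in results
)

report = ("<html><body><h1>3G SGSN functional test report</h1><table border='1'>"
          "<tr><th>Test case</th><th>Result</th><th>Log</th></tr>\n"
          + rows + "</table></body></html>")

with open("report.html", "w") as f:
    f.write(report)
```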
Abstract:
In this paper we review the novel meccano method. We summarize the main stages (subdivision, mapping, optimization) of this automatic tetrahedral mesh generation technique and focus the study on complex genus-zero solids. In this case, our procedure only requires a surface triangulation of the solid. A crucial consequence of our method is the volume parametrization of the solid to a cube. We construct volume T-meshes for isogeometric analysis by using this result. The efficiency of the proposed technique is shown with several examples. A comparison between the meccano method and standard mesh generation techniques is introduced.
Abstract:
The paper presents the application of a new risk-informed methodology for the identification of the Emergency Management Requirements (EMR) to a Generation II large-size reactor and a Generation III+ Small Modular Reactor. The results obtained in this test case demonstrate that the current EMR is conservative, as expected, for the Gen II reactor, while the new methodology could be applied to define EMRs for new-generation Nuclear Power Plants, with a possible reduction of the emergency area without loss of safety level. By adopting both probabilistic and deterministic approaches, the study addresses possible accidents and the corresponding release scenarios for the two types of reactor, calculates the areas where the accidents have an impact on the population, and defines the new EMR considering the health effects on the population.
Abstract:
With the advent of object-oriented languages and the portability of Java, the development and use of class libraries has become widespread. Effective class reuse depends on class reliability, which in turn depends on thorough testing. This paper describes a class testing approach based on modeling each test case with a tuple and then generating large numbers of tuples to thoroughly cover an input space with many interesting combinations of values. The testing approach is supported by the Roast framework for the testing of Java classes. Roast provides automated tuple generation based on boundary values, unit operations that support driver standardization, and test case templates used for code generation. Roast produces thorough, compact test drivers with low development and maintenance cost. The framework and tool support are illustrated on a number of non-trivial classes, including a graphical user interface policy manager. Quantitative results are presented to substantiate the practicality and effectiveness of the approach. Copyright © 2002 John Wiley & Sons, Ltd.
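As a rough sketch of the tuple-based idea (in Python rather than Java, and not the Roast framework's API), each test case can be modeled as a tuple drawn from the Cartesian product of per-parameter boundary values; the parameter names and ranges below are hypothetical.

```python
from itertools import product

def boundary_values(lower, upper):
    # Classic boundary selection for an integer parameter range [lower, upper].
    return [lower, lower + 1, (lower + upper) // 2, upper - 1, upper]

# Hypothetical parameters of a method under test, each with its own input range.
parameter_ranges = {"row": (0, 9), "column": (0, 9), "value": (1, 100)}

# Each test case is a tuple combining one boundary value per parameter.
tuples = list(product(*(boundary_values(lo, hi) for lo, hi in parameter_ranges.values())))
print(len(tuples), "test tuples, e.g.", tuples[0])
```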