892 results for Test Case Generator
Abstract:
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work we provided a metric, APFD, for measuring rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severities are uniform. In this paper, we present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering them.
Abstract:
We present a generalized test case generation method, called the G method. Although inspired by the W method, the G method, in contrast, allows test suite generation even in the absence of characterization sets for the specification models. Instead, the G method relies on knowledge about the index of certain equivalences induced on the implementation models. We show that the W method can be derived from the G method as a particular case. Moreover, we discuss some naturally occurring infinite classes of FSM models over which the G method generates test suites that are exponentially more compact than those produced by the W method.
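For readers unfamiliar with the W method referenced here: it assembles its suite as the concatenation P · X^(≤k) · W, where P is a transition cover of the specification, W a characterization set, X the input alphabet, and k the assumed number of extra states in the implementation. A minimal sketch of that concatenation step in Python (the cover and characterization set are assumed given; computing them is not shown):

```python
def w_method_suite(trans_cover, char_set, alphabet, extra_states=0):
    """Build the W-method suite P . X^<=k . W as a sorted list of input words."""
    # X^<=k: every input sequence of length at most `extra_states`.
    middles, layer = [""], [""]
    for _ in range(extra_states):
        layer = [s + a for s in layer for a in alphabet]
        middles.extend(layer)
    # Each test: a transfer sequence p, an optional middle m, a distinguishing w.
    return sorted({p + m + w
                   for p in trans_cover for m in middles for w in char_set})
```

The suite size grows as |P| · |X|^k · |W|, which is the exponential blow-up (in k and in the size of W) that the G method's abstract claims to avoid on certain FSM classes.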
Abstract:
Wolfram von Eschenbach’s novel Parzival is a courtly romance composed in German shortly after 1200. In a project based at the University of Bern, a new critical edition of the poem is being prepared in electronic and printed form. It visualizes parallel textual versions which, depending on particular circumstances of oral performance, developed in the early stage of the poem’s transmission. Philological research as well as phylogenetic techniques common in the natural sciences, e.g. in molecular biology, have been used to demonstrate the existence of these early textual versions. The article shows how both methods work and how they are applied to the ongoing edition. Exemplary passages to be presented include the text of some rare fragments written in the first decades of the 13th century, which might even go back to the author’s lifetime and which allow us to date the existence of the versions they belong to.
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment to production, all pieces of a software product must be tested thoroughly before the product can be released. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the program under test. Among the wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, and in particular on one of its most successful enabling techniques, symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis builds on a general constraint-based TCG framework for imperative object-oriented programs (e.g., Java) based on constraint logic programming (CLP): the imperative program under test is first translated into an equivalent CLP program, which is then symbolically executed using the standard evaluation mechanisms of CLP, extended with special operations for handling dynamically allocated data structures.
Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium-sized or large programs. The contributions of this dissertation can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed which alleviates the inter-procedural path explosion problem by analyzing each component (e.g., method) of the program under test separately, storing the results as method summaries, and reusing them incrementally to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource-consumption information about the program under test to drive symbolic execution towards those parts of the program that comply with a user-provided resource policy, avoiding the exploration of those parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as trace generators to steer the execution according to structural selection criteria. (4) A new constraint solver is proposed which efficiently handles constraints on the global dynamic memory (heap), including aliasing of references, during symbolic execution, and which greatly outperforms the state-of-the-art technique known as lazy initialization. (5) All of the above techniques have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and TCG.
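To illustrate the basic idea behind symbolic-execution-based TCG described in this abstract: each feasible path through a program is characterized by a path condition (the conjunction of branch constraints along it), and solving each condition yields one test input per path. A toy sketch in Python (the program `classify`, its hand-listed path conditions, and the brute-force stand-in for a constraint solver are all invented for this example, not the thesis's CLP machinery):

```python
def classify(x):
    """Tiny hypothetical program under test with three execution paths."""
    if x < 0:
        return "neg"
    if x > 10:
        return "big"
    return "mid"

def enumerate_paths():
    # One path condition per feasible path of `classify`: the conjunction
    # of the branch constraints taken along that path.
    paths = [
        [lambda x: x < 0],                     # first branch taken
        [lambda x: x >= 0, lambda x: x > 10],  # second branch taken
        [lambda x: x >= 0, lambda x: x <= 10], # fall-through
    ]
    tests = []
    for pc in paths:
        # "Solve" the path condition by scanning a small input domain,
        # standing in for a real constraint solver.
        witness = next(v for v in range(-50, 51) if all(c(v) for c in pc))
        tests.append(witness)
    return tests
```

The path-explosion problem the abstract targets is visible even here: each extra branch can double the number of path conditions, which is what the compositional and guided approaches (contributions 1-3) aim to keep in check.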
Abstract:
This paper presents a gravimetric study (based on 382 gravimetric stations in an area of about 32 km²) of a nearly flat basin: the Low Andarax valley. This alluvial basin, close to its river mouth, is located in the extreme south of the province of Almería and coincides with one of the existing depressions in the Betic Cordillera. The paper presents new methodological work to adapt a published inversion approach (GROWTH method) to the case of an alluvial valley (sedimentary stratification, with density increasing downward). The adjusted 3D density model reveals several features in the topography of the discontinuity layers between the calcareous basement (2,700 kg/m³) and two sedimentary layers (2,400 and 2,250 kg/m³). We interpret several low-density alignments as corresponding to SE faults striking about N140–145°E. Some detected basement elevations (such as the one, previously known from boreholes, in Viator village) are apparently connected with the fault pattern. The outcomes of this work are: (1) new gravimetric data, (2) new methodological options, and (3) the resulting structural conclusions.
Abstract:
Many small bacterial, archaebacterial, and eukaryotic genomes have been sequenced, and the larger eukaryotic genomes are predicted to be completely sequenced within the next decade. In all genomes sequenced to date, a large portion of these organisms’ predicted protein coding regions encode polypeptides of unknown biochemical, biophysical, and/or cellular functions. Three-dimensional structures of these proteins may suggest biochemical or biophysical functions. Here we report the crystal structure of one such protein, MJ0577, from a hyperthermophile, Methanococcus jannaschii, at 1.7-Å resolution. The structure contains a bound ATP, suggesting MJ0577 is an ATPase or an ATP-mediated molecular switch, which we confirm by biochemical experiments. Furthermore, the structure reveals different ATP binding motifs that are shared among many homologous hypothetical proteins in this family. This result indicates that structure-based assignment of molecular function is a viable approach for the large-scale biochemical assignment of proteins and for discovering new motifs, a basic premise of structural genomics.
Abstract:
In this CEPS Commentary, the former Irish Prime Minister calls the precedents being set in the Cypriot banking case “troubling” and reflective of a lack of clarity and consistency of thought by both the eurozone Finance Ministers and the European Commission. He welcomes the rejection of the deal by the Cypriot Parliament as it now gives eurozone policy-makers a chance to think again about the underlying philosophy of their approach to the financial crisis.
Abstract:
The eruption of Tambora (Indonesia) in April 1815 had substantial effects on global climate and led to the ‘Year Without a Summer’ of 1816 in Europe and North America. Although a tragic event — tens of thousands of people lost their lives — the eruption was also an ‘experiment of nature’ from which science continues to learn today. The aim of this study is to summarize our current understanding of the Tambora eruption and its effects on climate as expressed in early instrumental observations, climate proxies and geological evidence, climate reconstructions, and model simulations. Progress has been made with respect to our understanding of the eruption process and the estimated amount of SO2 injected into the atmosphere, although large uncertainties still exist with respect to the altitude and hemispheric distribution of Tambora aerosols. With respect to climate effects, the global and Northern Hemispheric cooling are well constrained by proxies, whereas there is no strong signal in Southern Hemisphere proxies. Newly recovered early instrumental information for Western Europe and parts of North America, regions with particularly strong climate effects, allows Tambora’s effect on the weather systems to be addressed. Climate models respond to prescribed Tambora-like forcing with a strengthening of the wintertime stratospheric polar vortex, global cooling and a slowdown of the water cycle, weakening of the summer monsoon circulations, a strengthening of the Atlantic Meridional Overturning Circulation, and a decrease of atmospheric CO₂. Combining observations, climate proxies, and model simulations for the case of Tambora, a better understanding of climate processes has emerged.
Abstract:
"April 1965."
Abstract:
"August 1966."