6 results for Rimestad, Sebastian
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
Static analysis tools report software defects that may or may not be detected by other verification methods. Two challenges complicating the adoption of these tools are spurious false positive warnings and legitimate warnings that are not acted on. This paper reports automated support to help address these challenges using logistic regression models that predict the foregoing types of warnings from signals in the warnings and implicated code. Because examining many potential signaling factors in large software development settings can be expensive, we use a screening methodology to quickly discard factors with low predictive power and cost-effectively build predictive models. Our empirical evaluation indicates that these models can achieve high accuracy in predicting accurate and actionable static analysis warnings, and suggests that the models are competitive with alternative models built without screening.
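As a rough illustration of the screening-then-modeling approach described above: score each candidate factor on its own, discard factors with low predictive power, and fit a logistic regression only on the survivors. The factor names, screening cutoff, and data in this sketch are hypothetical, not the factors or criteria used in the paper.

    # Hypothetical sketch of screening factors before fitting a logistic regression
    # that predicts whether a static analysis warning is accurate/actionable.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 500
    # Toy data: each row describes one static analysis warning.
    factors = {
        "warning_priority": rng.integers(1, 4, n),    # tool-assigned priority (1-3)
        "file_churn":       rng.poisson(5, n),        # recent edits to the implicated file
        "method_length":    rng.integers(5, 200, n),  # size of the implicated method
        "random_noise":     rng.normal(size=n),       # a factor with no real signal
    }
    X = np.column_stack([v.astype(float) for v in factors.values()])
    # Toy label: 1 = warning was accurate/acted on, 0 = ignored or false positive.
    y = (factors["warning_priority"] + 0.05 * factors["file_churn"]
         + rng.normal(size=n) > 3).astype(int)

    # Screening step: keep only factors whose single-factor model beats a cutoff.
    kept = []
    for i, name in enumerate(factors):
        single = LogisticRegression().fit(X[:, [i]], y)
        auc = roc_auc_score(y, single.predict_proba(X[:, [i]])[:, 1])
        if auc > 0.55:  # illustrative cutoff for "low predictive power"
            kept.append(i)
            print(f"keeping factor {name} (AUC {auc:.2f})")

    # The final model is built only from the screened-in factors.
    model = LogisticRegression().fit(X[:, kept], y)
    print("combined model AUC:",
          round(roc_auc_score(y, model.predict_proba(X[:, kept])[:, 1]), 2))

The point of the screening pass is that cheap single-factor checks run once per candidate, so expensive data collection and model building are spent only on factors that show some signal.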
Abstract:
Where the creation, understanding, and assessment of software testing and regression testing techniques are concerned, controlled experimentation is an indispensable research methodology. Obtaining the infrastructure necessary to support such experimentation, however, is difficult and expensive. As a result, progress in experimentation with testing techniques has been slow, and empirical data on the costs and effectiveness of techniques remains relatively scarce. To help address this problem, we have been designing and constructing infrastructure to support controlled experimentation with testing and regression testing techniques. This paper reports on the challenges faced by researchers experimenting with testing techniques, including those that inform the design of our infrastructure. The paper then describes the infrastructure that we are creating in response to these challenges, and that we are now making available to other researchers, and discusses the impact that this infrastructure has and can be expected to have.
Abstract:
Many tools and techniques for addressing software maintenance problems rely on code coverage information. Often, this coverage information is gathered for a specific version of a software system, and then used to perform analyses on subsequent versions of that system without being recalculated. As a software system evolves, however, modifications to the software alter the software’s behavior on particular inputs, and code coverage information gathered on earlier versions of a program may not accurately reflect the coverage that would be obtained on later versions. This discrepancy may affect the success of analyses dependent on code coverage information. Despite the importance of coverage information in various analyses, in our search of the literature we find no studies specifically examining the impact of software evolution on code coverage information. Therefore, we conducted empirical studies to examine this impact. The results of our studies suggest that even relatively small modifications can greatly affect code coverage information, and that the degree of impact of change on coverage may be difficult to predict.
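As a small illustration of the discrepancy these studies examine, the sketch below compares statement coverage recorded on one version of a program with coverage re-measured on a later version. The line numbers are invented; real coverage data would come from an instrumentation tool.

    # Hypothetical sketch: compare coverage gathered on version N with coverage
    # re-measured on version N+1 of the same program for the same test suite.
    def coverage_drift(old_cov, new_cov):
        """Return lines whose covered status differs between the two versions."""
        gained = new_cov - old_cov   # covered now, not covered before
        lost = old_cov - new_cov     # covered before, not covered now
        return gained, lost

    # Invented statement coverage for one module on two versions.
    v1_covered = {10, 11, 12, 20, 21, 35}
    v2_covered = {10, 11, 13, 20, 36, 37}   # a small edit shifted behavior

    gained, lost = coverage_drift(v1_covered, v2_covered)
    print("newly covered lines:", sorted(gained))
    print("no longer covered lines:", sorted(lost))
    # Reusing version N's coverage on version N+1 would miss both differences.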
Abstract:
Transferring data across applications is a common end user task, and copying and pasting via the clipboard lets users do so relatively easily. Using the clipboard, however, can also introduce inefficiencies and errors in user tasks. To help researchers and tool developers understand and address these problems, we studied how end users interact with the clipboard through cut, copy, and paste actions. This study was performed by logging clipboard interactions while end users performed everyday tasks. From the clipboard usage data, we have identified several usage patterns that describe how data is transferred within the desktop environment. Such patterns help us understand end user behavior and indicate areas in which clipboard support tools can be improved.
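A minimal sketch of the kind of log analysis such a study involves: given a sequence of logged clipboard events, tally a few simple usage patterns (cross-application transfers, multiple pastes of one copy, copies that are never pasted). The event format, pattern names, and log contents here are hypothetical.

    # Hypothetical sketch of mining usage patterns from a clipboard event log.
    # Each event is (action, application); the log below is invented.
    from collections import Counter

    log = [
        ("copy", "browser"), ("paste", "editor"), ("paste", "editor"),
        ("copy", "spreadsheet"), ("copy", "browser"), ("paste", "email"),
    ]

    patterns = Counter()
    pending = None  # (source application, number of pastes so far)
    for action, app in log:
        if action in ("copy", "cut"):
            if pending is not None and pending[1] == 0:
                patterns["copy never pasted"] += 1
            pending = (app, 0)
        elif action == "paste" and pending is not None:
            src, pastes = pending
            pastes += 1
            pending = (src, pastes)
            patterns["cross-application paste" if src != app else "same-application paste"] += 1
            if pastes > 1:
                patterns["multiple pastes of one copy"] += 1

    if pending is not None and pending[1] == 0:
        patterns["copy never pasted"] += 1

    print(dict(patterns))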
Abstract:
Test case prioritization techniques schedule test cases for regression testing in an order that increases their ability to meet some performance goal. One performance goal, rate of fault detection, measures how quickly faults are detected within the testing process. In previous work we provided a metric, APFD, for measuring rate of fault detection, and techniques for prioritizing test cases to improve APFD, and reported the results of experiments using those techniques. This metric and these techniques, however, applied only in cases in which test costs and fault severity are uniform. In this paper, we present a new metric for assessing the rate of fault detection of prioritized test cases that incorporates varying test case and fault costs. We present the results of a case study illustrating the application of the metric. This study raises several practical questions that might arise in applying test case prioritization; we discuss how practitioners could go about answering these questions.
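For context, the APFD metric from the earlier work can be sketched as follows: APFD = 1 - (TF1 + ... + TFm)/(n*m) + 1/(2n), where n is the number of tests, m the number of faults, and TFi the position, in the prioritized order, of the first test that detects fault i. The fault-detection data below is invented, and the new cost-aware metric introduced in this paper (which additionally weights tests by cost and faults by severity) is not reproduced here.

    # Sketch of the APFD rate-of-fault-detection metric from the prior work.
    def apfd(order, detects):
        """order: test ids in prioritized order; detects: fault -> set of detecting test ids."""
        n, m = len(order), len(detects)
        position = {test: i + 1 for i, test in enumerate(order)}
        tf = [min(position[t] for t in tests) for tests in detects.values()]
        return 1 - sum(tf) / (n * m) + 1 / (2 * n)

    # Invented fault matrix: which tests reveal which faults.
    detects = {"fault1": {"t3"}, "fault2": {"t1", "t4"}, "fault3": {"t5"}}
    print(apfd(["t1", "t2", "t3", "t4", "t5"], detects))   # original order -> 0.5
    print(apfd(["t3", "t5", "t1", "t2", "t4"], detects))   # prioritized order -> 0.7

The second ordering scores higher because the tests that expose faults run earlier; a cost-cognizant variant replaces the uniform counts with per-test costs and per-fault severities.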
Abstract:
End-user programmers are increasingly relying on web authoring environments to create web applications. Although often consisting primarily of web pages, such applications are increasingly going further, harnessing the content available on the web through “programs” that query other web applications for information to drive other tasks. Unfortunately, errors can be pervasive in web applications, impacting their dependability. This paper reports the results of an exploratory study of end-user web application developers, performed with the aim of exposing prevalent classes of errors. The results suggest that end users struggle the most with the identification and manipulation of variables when structuring requests to obtain data from other web sites. To address this problem, we present a family of techniques that help end-user programmers perform this task, reducing possible sources of error. The techniques focus on simplification and characterization of the data that end users must analyze while developing their web applications. We report the results of an empirical study in which these techniques are applied to several popular web sites. Our results reveal several potential benefits for end users who wish to “engineer” dependable web applications.
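To make the “structuring requests” task concrete, the sketch below decomposes an example query URL into its fixed part and its candidate variables, the identification step the study found end users struggle with. The URL and parameter roles are illustrative and not taken from the study.

    # Hypothetical sketch: separate the fixed part of a web request from the
    # candidate "variables" an end user might want to change when reusing it.
    from urllib.parse import urlsplit, parse_qs

    url = "https://example.com/search?city=Lincoln&units=metric&days=5"
    parts = urlsplit(url)
    params = parse_qs(parts.query)

    print("fixed part:", f"{parts.scheme}://{parts.netloc}{parts.path}")
    for name, values in params.items():
        print(f"variable  : {name} = {values[0]}")
    # A tool in this spirit would let the user mark which parameters should vary
    # (e.g. city) and which should stay fixed (e.g. units) across requests.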