11 results for software repository

in DRUM (Digital Repository at the University of Maryland)


Relevance:

30.00%

Publisher:

Abstract:

Large component-based systems are often built from many of the same components. As individual component-based software systems are developed, tested, and maintained, these shared components are repeatedly manipulated. As a result, there are often significant overlaps and synergies across and among the different test efforts of different component-based systems. However, in practice, testers of different systems rarely collaborate, taking a test-all-by-yourself approach. As a result, redundant effort is spent testing common components, and important information that could be used to improve testing quality is lost. The goal of this research is to demonstrate that, if done properly, testers of shared software components can save effort by avoiding redundant work, and can improve test effectiveness for each component as well as for each component-based software system by using information obtained when testing across multiple components. To achieve this goal, I have developed collaborative testing techniques and tools for developers and testers of component-based systems with shared components, applied the techniques to subject systems, and evaluated the cost and effectiveness of applying the techniques. The dissertation research is organized into three parts. First, I investigated current testing practices for component-based software systems to find the testing overlap and synergy we conjectured exist. Second, I designed and implemented infrastructure and related tools to facilitate communication and data sharing between testers. Third, I designed two testing processes to implement different collaborative testing algorithms and applied them to large, actively developed software systems. This dissertation has shown the benefits of collaborative testing across component developers who share their components. With collaborative testing, researchers can design algorithms and tools to support collaboration processes, achieve better efficiency in testing configurations, and discover inter-component compatibility faults within a minimal time window after they are introduced.
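As a rough illustration of the kind of data sharing this describes, the sketch below lets testers of different systems record results for shared components in a common registry and skip configurations a collaborator has already covered. All names (TestRegistry, record, untested) and the data are hypothetical, not the dissertation's actual infrastructure or algorithms.

```python
# A sketch, assuming a shared registry of test results per shared component:
# testers publish what they covered, and other teams query for the gaps.
from collections import defaultdict

class TestRegistry:
    """Shared record of which (component, version, config) triples were tested."""

    def __init__(self):
        # (component, version) -> {configuration: test passed?}
        self.results = defaultdict(dict)

    def record(self, component, version, config, passed):
        self.results[(component, version)][config] = passed

    def untested(self, component, version, configs):
        """Configurations of a shared component no collaborator has covered yet."""
        seen = self.results[(component, version)]
        return [c for c in configs if c not in seen]

registry = TestRegistry()
# One team tests a shared component and publishes the result:
registry.record("libparse", "2.1", ("linux", "gcc"), passed=True)
# A second team testing the same component reuses that record:
todo = registry.untested("libparse", "2.1",
                         [("linux", "gcc"), ("linux", "clang"), ("macos", "clang")])
print(todo)  # -> [('linux', 'clang'), ('macos', 'clang')]
```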

Relevance:

30.00%

Publisher:

Abstract:

Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier. To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
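A minimal sketch of a feasibility classifier in this spirit, assuming bag-of-event-ID features; the event sequences and labels below are invented toy data, not the study's real feature-extraction pipeline over MBT tool output.

```python
# Logistic regression over event-ID counts, labeling test cases as
# feasible (executable) or infeasible. All data here are placeholders.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Each test case is a sequence of GUI event IDs (hypothetical values).
test_cases = ["e1 e4 e2", "e1 e2 e9", "e3 e4 e2", "e3 e9 e9",
              "e1 e4 e9", "e3 e2 e2", "e1 e9 e4", "e3 e4 e4"]
feasible = [1, 0, 1, 0, 0, 1, 0, 1]  # 1 = executable, 0 = infeasible

# Count how often each event ID occurs in each test case.
X = CountVectorizer(token_pattern=r"\S+").fit_transform(test_cases)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, feasible, test_size=0.25, random_state=0, stratify=feasible)

clf = LogisticRegression().fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # held-out accuracy of the feasibility model
```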

Relevance:

30.00%

Publisher:

Abstract:

With the increasing complexity of today's software, the software development process is becoming highly time- and resource-consuming. The increasing number of software configurations, input parameters, usage scenarios, supporting platforms, external dependencies, and versions plays an important role in expanding the costs of maintaining and repairing unforeseeable software faults. To repair software faults, developers spend considerable time identifying the scenarios leading to those faults and root-causing the problems. While software debugging remains largely manual, this is not the case for software testing and verification. The goal of this research is to improve the software development process in general, and the software debugging process in particular, by devising techniques and methods for automated software debugging that leverage the advances in automatic test case generation and replay. In this research, novel algorithms are devised to discover faulty execution paths in programs by utilizing already existing software test cases, which can be either automatically or manually generated. The execution traces, or alternatively, the sequence covers of the failing test cases are extracted. Afterwards, commonalities between these test case sequence covers are extracted, processed, analyzed, and then presented to the developers in the form of subsequences that may be causing the fault. The hypothesis is that code sequences shared among a number of test cases that fail for the same reason resemble the faulty execution path, and hence the search space for the faulty execution path can be narrowed down by using a large number of test cases. To achieve this goal, an efficient algorithm is implemented for finding common subsequences among a set of code sequence covers. Optimization techniques are devised to generate shorter and more logical sequence covers, and to select subsequences with a high likelihood of containing the root cause among the set of all possible common subsequences. A hybrid static/dynamic analysis approach is designed to trace back the common subsequences from the end to the root cause. A debugging tool is created to enable developers to use the approach and to integrate it with an existing integrated development environment. The tool is also integrated with the environment's program editors so that developers can benefit from both the tool's suggestions and their source code counterparts. Finally, a comparison between the developed approach and state-of-the-art techniques shows that developers need to inspect only a small number of lines in order to find the root cause of the fault. Furthermore, experimental evaluation shows that the algorithm optimizations lead to better results in terms of both the algorithm's running time and the output subsequence length.
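To make the core idea concrete, here is a minimal sketch of extracting contiguous subsequences shared by all failing traces as candidate faulty paths. The traces are invented, and the dissertation's actual algorithm and optimizations (shorter covers, likelihood-based ranking, hybrid static/dynamic trace-back) go well beyond this.

```python
# Candidate faulty paths: contiguous subsequences common to every
# failing test case's sequence cover, longest first.
def shared_subsequences(traces):
    """Return contiguous subsequences common to all traces, longest first."""
    base = traces[0]
    candidates = {tuple(base[i:j])
                  for i in range(len(base))
                  for j in range(i + 1, len(base) + 1)}
    for trace in traces[1:]:
        grams = {tuple(trace[i:j])
                 for i in range(len(trace))
                 for j in range(i + 1, len(trace) + 1)}
        candidates &= grams  # keep only subsequences seen in every trace
    return sorted(candidates, key=len, reverse=True)

# Sequence covers (statement IDs) of three tests failing for the same reason:
failing = [["s1", "s4", "s7", "s8", "s9"],
           ["s2", "s4", "s7", "s8", "s5"],
           ["s4", "s7", "s8", "s3", "s9"]]
print(shared_subsequences(failing)[0])  # ('s4', 's7', 's8') -- suspect path
```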

Relevance:

30.00%

Publisher:

Abstract:

Software updates are critical to the security of software systems and devices. Yet users often do not install them in a timely manner, leaving their devices open to security exploits. Through three studies, this research explored a redesign of automatic software updates on desktop and mobile devices to improve the uptake of updates. First, using interviews, we studied users’ updating patterns and behaviors on desktop machines in a formative study. Second, we distilled these findings into the design of a low-fi prototype for desktops, and evaluated its efficacy for automating updates by means of a think-aloud study. Third, we investigated individual differences in update automation on Android devices using a large-scale survey and interviews. In this thesis, I present the findings of all three studies and provide evidence for how automatic updates can be better appropriated to fit users on both desktops and mobile devices. Additionally, I provide user interface design suggestions for software updates and outline recommendations for future work to improve the user experience of software updates.

Relevance:

20.00%

Publisher:

Abstract:

Figer (to congeal, to solidify) is a quadraphonic electroacoustic composition. It was completed in the fall of 2003. Several software programs were used in creating and assembling the piece (C-Sound, Grain Mill, AL/Erwin (grain generator), Sound Forge and Acid Music). The sounds used in the piece are of two general types: synthesized and sampled, both of which were subjected to various processing techniques. The most important of these techniques, and one that formally defines large portions of the piece, is granular synthesis.

Form

The notion of time perception is of great importance in this piece. Figer addresses this question in several ways. In one sense, the form of Figer is simple. There are three layers of activity (see diagram). Layer 1 is continuous and non-sectional and supplies a backdrop (not necessarily a background) for the other two. The second and third layers overlap and interrupt one another. Each consists of two blocks of sound. The layers, and blocks within, relate to each other in various ways. Layer 1 is formally continuous. Layer 2 consists of well-defined columns of sound that evolve from soft and mild to loud and abrasive. The layer is, in reality, a whole that is simply cut into two parts (block 1 and block 2). In contrast, the blocks of layer 3 do not constitute a whole. Each is a complete unit and has its own self-contained evolutionary path. Those paths, however, do cross the paths of other units (layers, blocks), influencing them and absorbing some of their essence. At the heart of Figer lies a constant process of presenting materials or ideas and immediately, or, at times, simultaneously, commenting, reflecting on, or reinterpreting that material. All of the layers of this piece deal, both at local and global levels, with the problem of time and its perception relative to the materials, sonic or otherwise, that occupy it and the manner in which they unfold and relate to each other.
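For readers unfamiliar with granular synthesis, the technique said above to formally define large portions of the piece, here is a toy sketch: a source sample is cut into short enveloped grains that are scattered and overlapped in time. All parameters are arbitrary placeholders; the piece itself was realized with tools such as C-Sound and dedicated grain generators, not this code.

```python
# Toy granular synthesis: overlap short, enveloped grains drawn from a
# source sample at random read and write positions.
import numpy as np

sr = 44100
source = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # 1 s placeholder tone

def granulate(source, grain_ms=50, n_grains=200, out_secs=2.0, seed=0):
    rng = np.random.default_rng(seed)
    glen = int(sr * grain_ms / 1000)
    env = np.hanning(glen)                     # smooth grain envelope
    out = np.zeros(int(sr * out_secs))
    for _ in range(n_grains):
        src = rng.integers(0, len(source) - glen)  # where to read a grain
        dst = rng.integers(0, len(out) - glen)     # where to place it
        out[dst:dst + glen] += source[src:src + glen] * env
    return out / np.max(np.abs(out))           # normalize overlapping grains

cloud = granulate(source)  # a 2 s granular "cloud" built from the sample
```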

Relevance:

20.00%

Publisher:

Abstract:

Gemstone Team Renewables

Relevance:

20.00%

Publisher:

Abstract:

We present a novel system to be used in the rehabilitation of patients with forearm injuries. The system uses surface electromyography (sEMG) recordings from a wireless sleeve to control video games designed to provide engaging biofeedback to the user. An integrated hardware/software system uses a neural net to classify the signals from a user’s muscles as they perform one of a number of common forearm physical therapy exercises. These classifications are used as input for a suite of video games that have been custom-designed to hold the patient’s attention and decrease the risk of noncompliance with the physical therapy regimen necessary to regain full function in the injured limb. The data is transmitted wirelessly from the on-sleeve board to a laptop computer using a custom-designed signal-processing algorithm that filters and compresses the data prior to transmission. We believe that this system has the potential to significantly improve the patient experience and efficacy of physical therapy using biofeedback that leverages the compelling nature of video games.
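As a rough illustration of the classification stage described here, the sketch below reduces windowed sEMG channels to simple RMS features and trains a small neural network to label the exercise being performed. The data, feature choice, and network size are all placeholders; the team's actual on-sleeve filtering, compression, and classifier are not shown.

```python
# Windowed sEMG -> per-channel RMS features -> small neural net classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CHANNELS, WINDOW = 8, 200  # 8 electrodes, 200-sample windows (placeholders)

def rms_features(window):
    """Root-mean-square amplitude of each sEMG channel in one window."""
    return np.sqrt((window ** 2).mean(axis=1))

# Fake training windows: each exercise class activates the channels with a
# different amplitude pattern (stand-ins for real recordings).
patterns = rng.uniform(0.5, 2.0, size=(3, N_CHANNELS))
labels = rng.integers(0, 3, size=300)
windows = rng.normal(size=(300, N_CHANNELS, WINDOW)) * patterns[labels][:, :, None]

X = np.array([rms_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, labels)
print(clf.predict(X[:5]))  # predicted exercise IDs would drive the game input
```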

Relevance:

20.00%

Publisher:

Abstract:

College students receive a wealth of information through electronic communications that they are unable to process efficiently. This information overload negatively impacts their affect, which psychology defines as the experience of feeling or emotion. To address this problem, we postulated that we could create an application that organizes and presents incoming content in a manner that optimizes users’ ability to process information. First, we conducted surveys that quantitatively measured each participant’s psychological affect while handling electronic communications, and we used the results to tailor the features of the application to users’ desires. After designing and implementing the application, we again measured participants’ affect as they used the product. Our goal was to show that the program promoted a positive change in affect. Our application, Brevitus, was able to match Gmail on affect reduction profiles, while succeeding in implementing certain user interface specifications.

Relevance:

20.00%

Publisher:

Abstract:

An abstract of this work will be presented at the Compiler, Architecture and Tools Conference (CATC) at the Intel Development Center, Haifa, Israel, on November 23, 2015.