2 results for Mobile application testing
in DRUM (Digital Repository at the University of Maryland)
Abstract:
This dissertation presents a case study of collaborative research through design with Floracaching, a gamified mobile application for citizen science biodiversity data collection. One contribution of this study is the articulation of collaborative research through design (CRtD), an approach that blends cooperative design approaches with the research through design methodology (RtD). Collaborative research through design is thus defined as an iterative process of cooperative design, where the collaborative vision of an ideal state is embedded in a design. Applying collaborative research through design with Floracaching illustrates how a number of cooperative techniques—especially contextual inquiry, prototyping, and focus groups—may be applied in a research through design setting. Four suggestions for collaborative research through design (recruit from a range of relevant backgrounds; take flexibility as a goal; enable independence and agency; and, choose techniques that support agreement or consensus) are offered to help others who wish to experiment with this new approach. Applying collaborative research through design to Floracaching yielded a new prototype of the application, accompanied by design annotations in the form of framing constructs for designing to support mobile, place-based citizen science activities. The prototype and framing constructs, which may inform other designers of similar citizen science technologies, are a second contribution of this research.
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.
To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
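The classification approach described in the second abstract can be illustrated with a minimal sketch. The data, event IDs, and bag-of-event-IDs encoding below are all hypothetical stand-ins (the dissertation does not publish its feature construction here); the sketch only shows the general shape of a supervised logistic regression over event-ID features that separates feasible from infeasible test cases:

```python
# Hypothetical sketch, NOT the dissertation's implementation: a tiny
# logistic-regression classifier over bag-of-event-ID features, showing how
# unique event IDs in MBT test cases might predict feasibility.
import math

# Toy data: each test case is a sequence of event IDs from an imaginary
# MBT tool; label 1 = feasible (executable), 0 = infeasible.
train = [
    (["e1", "e2", "e5"], 1),
    (["e1", "e3", "e5"], 1),
    (["e2", "e4", "e6"], 0),
    (["e1", "e2", "e6"], 1),
    (["e7", "e8", "e5"], 0),
    (["e7", "e2", "e6"], 0),
    (["e1", "e8", "e5"], 1),
    (["e7", "e3", "e6"], 0),
]

# Vocabulary of unique event IDs observed in the training suite.
vocab = sorted({e for case, _ in train for e in case})

def featurize(case):
    # Binary bag-of-event-IDs vector (one of several possible encodings).
    return [1.0 if e in case else 0.0 for e in vocab]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit weights with batch gradient descent on the logistic loss.
w = [0.0] * len(vocab)
b = 0.0
learning_rate = 0.5
for _ in range(2000):
    grad_w = [0.0] * len(vocab)
    grad_b = 0.0
    for case, y in train:
        x = featurize(case)
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y  # gradient of the logistic loss w.r.t. the logit
        for i, xi in enumerate(x):
            grad_w[i] += err * xi
        grad_b += err
    w = [wi - learning_rate * g / len(train) for wi, g in zip(w, grad_w)]
    b -= learning_rate * grad_b / len(train)

def predict(case):
    # Binary feasibility decision at the usual 0.5 threshold.
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, featurize(case))) + b)
    return 1 if p >= 0.5 else 0
```

In this toy data the event IDs happen to separate the classes linearly, so the learned weights classify the training cases correctly; real MBT suites would need the richer event-ID feature constructions and held-out evaluation the abstract describes.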