Abstract:
Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app’s Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of a synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions.
Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
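The division of labour described above can be illustrated with a toy sketch in Python (the problem, domains, and function names are invented for illustration and are not taken from Pasket): a small unknown with outsized influence is concretized to candidate values, and each resulting sub-problem is handed to a cheap explicit search; the sub-problems are independent, which is what makes the approach easy to parallelize.

```python
import random

# Toy synthesis problem (illustrative only): find unknowns (a, b), with
# a in 0..7 and b in 0..1023, such that f(x) = a*x + b matches the
# observed input/output examples.
EXAMPLES = [(1, 17), (2, 22), (5, 37)]  # consistent with a=5, b=12

def check(a, b):
    return all(a * x + b == y for x, y in EXAMPLES)

def solve_subproblem(a):
    # Explicit search: with the influential unknown `a` concretized,
    # the remaining search over `b` is cheap to enumerate.
    for b in range(1024):
        if check(a, b):
            return (a, b)
    return None

def adaptive_concretization():
    # Concretize the highly influential unknown `a` to candidate values;
    # each call to solve_subproblem is independent and could run in a
    # separate parallel worker.
    for a in random.sample(range(8), 8):
        sol = solve_subproblem(a)
        if sol is not None:
            return sol
    return None

print(adaptive_concretization())  # (5, 12)
```

Concretizing `a` shrinks each sub-problem from an 8×1024 joint search to a 1024-element scan, at the cost of running several sub-problems; choosing *which* unknowns to concretize, and how much, is the adaptive part of the real algorithm.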
Abstract:
Modern software application testing, such as the testing of software driven by graphical user interfaces (GUIs) or leveraging event-driven architectures in general, requires paying careful attention to context. Model-based testing (MBT) approaches first acquire a model of an application, then use the model to construct test cases covering relevant contexts. A major shortcoming of state-of-the-art automated model-based testing is that many test cases proposed by the model are not actually executable. These infeasible test cases threaten the integrity of the entire model-based suite, and any coverage of contexts the suite aims to provide. In this research, I develop and evaluate a novel approach for classifying the feasibility of test cases. I identify a set of pertinent features for the classifier, and develop novel methods for extracting these features from the outputs of MBT tools. I use a supervised logistic regression approach to obtain a model of test case feasibility from a randomly selected training suite of test cases. I evaluate this approach with a set of experiments. The outcomes of this investigation are as follows: I confirm that infeasibility is prevalent in MBT, even for test suites designed to cover a relatively small number of unique contexts. I confirm that the frequency of infeasibility varies widely across applications. I develop and train a binary classifier for feasibility with average overall error, false positive, and false negative rates under 5%. I find that unique event IDs are key features of the feasibility classifier, while model-specific event types are not. I construct three types of features from the event IDs associated with test cases, and evaluate the relative effectiveness of each within the classifier.
To support this study, I also develop a number of tools and infrastructure components for scalable execution of automated jobs, which use state-of-the-art container and continuous integration technologies to enable parallel test execution and the persistence of all experimental artifacts.
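A minimal sketch of the kind of feasibility classifier described above, in Python with no external dependencies (the event IDs, labels, and feasibility rule below are invented for illustration; the actual study extracts its features from MBT tool outputs): test cases are featurized as bag-of-event-ID counts and fed to a logistic regression trained by stochastic gradient descent.

```python
import math

# Hypothetical labeled test cases: each is a sequence of GUI event IDs,
# with label 1 = feasible, 0 = infeasible. In this toy data a test case
# is feasible when "open" events occur at least as often as "click_save".
TRAIN = [
    (["open", "click_save"], 1),
    (["open", "open", "click_save"], 1),
    (["click_save"], 0),
    (["open", "click_save", "click_save"], 0),
    (["open"], 1),
    (["click_save", "click_save"], 0),
]

VOCAB = sorted({e for seq, _ in TRAIN for e in seq})

def featurize(seq):
    # Bag-of-event-IDs: a bias term plus one count feature per unique ID.
    return [1.0] + [float(seq.count(e)) for e in VOCAB]

def train(data, epochs=4000, lr=0.5):
    # Plain stochastic gradient descent on the logistic loss.
    w = [0.0] * (len(VOCAB) + 1)
    for _ in range(epochs):
        for seq, y in data:
            x = featurize(seq)
            p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x)]
    return w

def is_feasible(w, seq):
    x = featurize(seq)
    return sum(wi * xi for wi, xi in zip(w, x)) > 0.0

w = train(TRAIN)
print([is_feasible(w, seq) for seq, _ in TRAIN])
```

Count features like these are order-blind, which is one reason the study evaluates several different feature constructions over event IDs rather than relying on a single one.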
Abstract:
The Graphical User Interface (GUI) is an integral component of contemporary computer software. A stable and reliable GUI is necessary for the correct functioning of software applications. Comprehensive verification of the GUI is a routine part of most software development life-cycles. The input space of a GUI is typically large, making exhaustive verification difficult. GUI defects are often revealed by exercising parts of the GUI that interact with each other. It is challenging for a verification method to drive the GUI into states that might contain defects. In recent years, model-based methods that target specific GUI interactions have been developed. These methods create a formal model of the GUI’s input space from the specification of the GUI, visible GUI behaviors, and static analysis of the GUI’s program-code. GUIs are typically dynamic in nature: their user-visible state is guided by the underlying program-code and dynamic program-state. This research extends existing model-based GUI testing techniques by modelling interactions between the visible GUI of a GUI-based application and its underlying program-code. The new model is able to test the GUI, efficiently and effectively, in ways that were not possible using existing methods. The thesis is this: Long, useful GUI testcases can be created by examining the interactions between the GUI, of a GUI-based application, and its program-code. To explore this thesis, a model-based GUI testing approach is formulated and evaluated. In this approach, program-code level interactions between GUI event handlers are examined, modelled, and deployed to construct long GUI testcases. These testcases are able to drive the GUI into states that were not reachable using existing models. Implementation and evaluation have been conducted using GUITAR, a fully-automated, open-source GUI testing framework.
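The core idea, chaining events whose handlers interact at the code level, can be sketched in Python (handler names and read/write sets are invented; in the approach itself such sets would come from static analysis of the event handlers, and the models are built with GUITAR):

```python
# Hypothetical event handlers with the program state each reads/writes.
HANDLERS = {
    "typeText":  {"reads": set(),                    "writes": {"textBuffer"}},
    "clickBold": {"reads": {"selection"},            "writes": {"style"}},
    "selectAll": {"reads": {"textBuffer"},           "writes": {"selection"}},
    "clickSave": {"reads": {"textBuffer", "style"},  "writes": {"file"}},
}

def interacts(first, second):
    # Two events interact when the first one's handler writes program
    # state that the second one's handler reads.
    return bool(HANDLERS[first]["writes"] & HANDLERS[second]["reads"])

def long_testcases(length):
    # Grow test cases by chaining events along write->read dependencies,
    # keeping only chains where every step interacts with the previous one.
    chains = [[e] for e in HANDLERS]
    for _ in range(length - 1):
        chains = [c + [e] for c in chains for e in HANDLERS if interacts(c[-1], e)]
    return chains

for tc in long_testcases(4):
    print(" -> ".join(tc))
```

On this toy handler set, only one four-event chain survives (`typeText -> selectAll -> clickBold -> clickSave`), illustrating how code-level interaction pruning keeps long test cases meaningful instead of combinatorially exploding.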
Abstract:
This study compares two simulation techniques: Discrete Event Simulation (DES) and Agent Based Simulation (ABS). DES is one of the best-known simulation techniques in Operational Research. Recently, another technique, namely ABS, has emerged. One of the qualities of ABS is that it helps to gain a better understanding of complex systems that involve the interaction of people with their environment, as it allows one to model concepts like autonomy and pro-activeness, which are important attributes to consider. Although there is a lot of literature relating to DES and ABS, we have found none that focuses on exploring the capability of both in tackling the human behaviour issues related to queuing time and customer satisfaction in the retail sector. Therefore, the objective of this study is to identify empirically the differences between these simulation techniques by simulating the potential economic benefits of introducing new policies in a department store. To apply the new strategy, the behaviour of consumers in a retail store will be modelled using both the DES and ABS approaches and the results will be compared. We aim to understand which simulation technique is better suited to human behaviour modelling by investigating the capability of both techniques to predict the best solution for an organisation using management practices. Our main concern is to maximise customer satisfaction, for example by minimising waiting times for the different services provided.
Abstract:
Alzheimer’s disease (AD) is the most common cause of dementia, causing a progressive and irreversible impairment of several cognitive functions. The aging population has increased significantly in recent decades, and this disease affects mainly the elderly. Its diagnostic accuracy is relatively low, and there is no biomarker able to detect AD without invasive tests. Despite progress in better understanding the disease, there remains no prospect of a cure, at least in the near future. The electroencephalogram (EEG) is a widely available technology in clinical settings. It may help in the diagnosis of brain disorders, since it can be used in patients who have cognitive impairment involving a general decrease in overall brain function or in patients with a localized deficit. This study presents a new approach to improve the scalp localization and detection of the sources of brain anomalies (EEG temporal events) associated with AD using the EEG.
Abstract:
Automatic analysis of human behaviour in large collections of videos is gaining interest, even more so with the advent of file sharing sites such as YouTube. However, challenges still exist owing to several factors such as inter- and intra-class variations, cluttered backgrounds, occlusion, camera motion, scale, view and illumination changes. This research focuses on modelling human behaviour for action recognition in videos. The developed techniques are validated on large-scale benchmark datasets and applied to real-world scenarios such as soccer videos. Three major contributions are made. The first contribution is in the proper choice of a feature representation for videos. This involved a study of state-of-the-art techniques for action recognition, feature extraction, and dimensionality reduction, so as to yield the best performance with optimal computational requirements. Secondly, temporal modelling of human behaviour is performed. This involved frequency analysis and temporal integration of local information in the video frames to yield a temporal feature vector. Current practices mostly average the frame information over an entire video and neglect the temporal order. Lastly, the proposed framework is applied and further adapted to a real-world scenario, soccer videos. To this end, a dataset consisting of video sequences depicting players falling is created from actual match data and used to experimentally evaluate the proposed framework.
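The difference between averaging frame features and the frequency-based temporal modelling described above can be sketched as follows (a simplified illustration, not the dissertation's actual feature pipeline): keeping low-order DFT magnitudes per feature dimension preserves how a feature evolves over the clip, which a plain average discards.

```python
import cmath
import math

def temporal_descriptor(frames, k=3):
    # `frames` is a list of per-frame feature vectors. For each feature
    # dimension, take the magnitudes of the first k DFT coefficients of
    # its time series: coefficient 0 is the per-clip average, while the
    # higher ones capture how the feature oscillates over time.
    n, dims = len(frames), len(frames[0])
    desc = []
    for d in range(dims):
        signal = [f[d] for f in frames]
        for freq in range(k):
            c = sum(signal[t] * cmath.exp(-2j * cmath.pi * freq * t / n)
                    for t in range(n))
            desc.append(abs(c) / n)
    return desc

# Two 8-frame clips with identical average feature values but different
# temporal behaviour: one flat, one oscillating once over the clip.
flat = [[0.5] for _ in range(8)]
wave = [[0.5 + 0.5 * math.sin(2 * math.pi * t / 8)] for t in range(8)]

print(temporal_descriptor(flat))
print(temporal_descriptor(wave))
```

The zeroth coefficient (the average) is identical for both clips, so a mean-pooled representation cannot tell them apart; the first-frequency magnitude separates them immediately.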
Abstract:
Sequences of timestamped events are currently being generated across nearly every domain of data analytics, from e-commerce web logging to electronic health records used by doctors and medical researchers. Every day, this data type is reviewed by humans who apply statistical tests, hoping to learn everything they can about how these processes work, why they break, and how they can be improved upon. To further uncover how these processes work the way they do, researchers often compare two groups, or cohorts, of event sequences to find the differences and similarities between outcomes and processes. With temporal event sequence data, this task is complex because of the variety of ways single events and sequences of events can differ between the two cohorts of records: the structure of the event sequences (e.g., event order, co-occurring events, or frequencies of events), the attributes about the events and records (e.g., gender of a patient), or metrics about the timestamps themselves (e.g., duration of an event). Running statistical tests to cover all these cases and determining which results are significant becomes cumbersome. Current visual analytics tools for comparing groups of event sequences emphasize a purely statistical or purely visual approach for comparison. Visual analytics tools leverage humans' ability to easily see patterns and anomalies that they were not expecting, but are limited by uncertainty in their findings. Statistical tools emphasize finding significant differences in the data, but often require researchers to have a concrete question and do not facilitate more general exploration of the data. Combining visual analytics tools with statistical methods leverages the benefits of both approaches for quicker and easier insight discovery.
Integrating statistics into a visualization tool presents many challenges on the frontend (e.g., displaying the results of many different metrics concisely) and in the backend (e.g., scalability challenges with running various metrics on multi-dimensional data at once). I begin by exploring the problem of comparing cohorts of event sequences and understanding the questions that analysts commonly ask in this task. From there, I demonstrate that combining automated statistics with an interactive user interface amplifies the benefits of both types of tools, thereby enabling analysts to conduct quicker and easier data exploration, hypothesis generation, and insight discovery. The direct contributions of this dissertation are: (1) a taxonomy of metrics for comparing cohorts of temporal event sequences, (2) a statistical framework for exploratory data analysis with a method I refer to as high-volume hypothesis testing (HVHT), (3) a family of visualizations and guidelines for interaction techniques that are useful for understanding and parsing the results, and (4) a user study, five long-term case studies, and five short-term case studies which demonstrate the utility and impact of these methods in various domains: four in the medical domain, one in web log analysis, two in education, and one each in social networks, sports analytics, and security. My dissertation contributes an understanding of how cohorts of temporal event sequences are commonly compared and the difficulties associated with applying and parsing the results of these metrics. It also contributes a set of visualizations, algorithms, and design guidelines for balancing automated statistics with user-driven analysis to guide users to significant, distinguishing features between cohorts. 
This work opens avenues for future research in comparing two or more groups of temporal event sequences, opening traditional machine learning and data mining techniques to user interaction, and extending the principles found in this dissertation to data types beyond temporal event sequences.
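The "many metrics at once" problem that HVHT tackles can be shown in miniature (the cohorts, metric names, choice of a permutation test, and Bonferroni correction below are illustrative assumptions, not the dissertation's actual framework): each metric gets its own hypothesis test, and the significance threshold is corrected for the number of tests run.

```python
import random
import statistics

# Hypothetical cohorts of patient records, each with two metrics: a
# per-record event duration (hours) and a binary readmission attribute.
COHORT_A = {"duration": [2.1, 2.4, 2.2, 2.5, 2.3, 2.6],
            "readmit":  [0, 0, 1, 0, 0, 1]}
COHORT_B = {"duration": [3.0, 3.2, 2.9, 3.1, 3.3, 3.4],
            "readmit":  [0, 1, 0, 1, 0, 1]}

def permutation_pvalue(a, b, n_perm=2000, seed=0):
    # Two-sided permutation test on the difference of cohort means.
    rng = random.Random(seed)
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) -
                   statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def run_all_metrics(a, b, alpha=0.05):
    # Test every metric, then apply a Bonferroni correction so that
    # running many tests at once does not inflate false positives.
    pvals = {m: permutation_pvalue(a[m], b[m]) for m in a}
    return {m: p < alpha / len(pvals) for m, p in pvals.items()}

print(run_all_metrics(COHORT_A, COHORT_B))
```

Even in this two-metric toy, one metric survives correction and one does not; scaled up to the full taxonomy of structural, attribute, and timestamp metrics, surfacing and explaining these results is exactly where the interactive visualization layer earns its keep.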
Abstract:
Discrete Event Simulation (DES) is a very popular simulation technique in Operational Research. Recently, another technique, namely Agent Based Simulation (ABS), has emerged. Although there is a lot of literature relating to DES and ABS, we have found little that focuses on exploring the capabilities of both in tackling human behaviour issues. In order to understand the gap between these two simulation techniques, our aim is to understand how DES and ABS models differ when modelling and simulating real-world human behaviour. To achieve this aim, we have carried out a case study at a department store. The DES and ABS models will be compared on the same problem domain, concerning management policy in a fitting room. The behaviour of staff while working and customers’ satisfaction will be modelled in both, in order to understand the behaviour of each model.
Abstract:
Field lab in marketing: Children consumer behaviour
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model using standard methods. As a case study we have chosen the retail sector, and here in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and an Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is better suited to modelling human reactive behaviour in the retail sector. In order to study the output accuracy of both models, we have carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We had to determine an efficient implementation of management policy in the store’s fitting room using DES and ABS. Overall, we have found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
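For readers unfamiliar with the mechanics, the defining feature of DES, a time-ordered event list that the system state jumps along, can be sketched for a fitting-room queue in a few lines (the arrival and service rates and the room count are invented placeholders, not the study's calibrated model; an ABS version would instead give each customer and staff member its own decision-making loop):

```python
import heapq
import random

def fitting_room_des(n_customers=200, n_rooms=4, seed=1):
    # Minimal DES of a fitting-room queue: arrivals and departures are
    # events in a time-ordered event list, and state only changes when
    # an event is popped. Rates are illustrative placeholders.
    rng = random.Random(seed)
    events, t = [], 0.0
    for _ in range(n_customers):
        t += rng.expovariate(1 / 2.0)          # ~2 min between arrivals
        heapq.heappush(events, (t, "arrival"))
    busy, queue, waits = 0, [], []
    while events:
        now, kind = heapq.heappop(events)
        if kind == "arrival":
            if busy < n_rooms:                 # a room is free: no wait
                busy += 1
                waits.append(0.0)
                heapq.heappush(events, (now + rng.expovariate(1 / 6.0),
                                        "departure"))
            else:                              # all rooms busy: queue up
                queue.append(now)
        else:                                  # departure frees a room
            if queue:                          # next customer enters now
                waits.append(now - queue.pop(0))
                heapq.heappush(events, (now + rng.expovariate(1 / 6.0),
                                        "departure"))
            else:
                busy -= 1
    return sum(waits) / len(waits)             # mean waiting time (min)

print(fitting_room_des())
```

Changing a management policy (more rooms, a different queue discipline) means changing how events are generated and handled, which is why DES and ABS implementations of the same fitting room can diverge in output accuracy.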
Abstract:
No other technology has affected daily life to this degree, or been adopted as rapidly, as the mobile phone. At the same time, mobile media has developed into a serious marketing tool for all kinds of businesses, and the industry has grown explosively in recent years. The objective of this thesis is to inspect the mobile marketing process of an international event. This thesis is a qualitative case study. The chosen case is the mobile marketing process of the Falun2015 FIS Nordic World Ski Championships, due to the researcher’s interest in the topic and contacts with people around the event. The empirical findings were acquired by conducting two interviews with three experts from the case organisation and its partner organisation. The interviews were performed as semi-structured interviews utilising themes arising from the chosen theoretical framework. The framework distinguished six phases in the process: (i) campaign initiation, (ii) campaign design, (iii) campaign creation, (iv) permission management, (v) delivery, and (vi) evaluation and analysis. Phases one and five were not examined in this thesis, because campaign initiation was not purely seen as part of the campaign implementation, and investigating phase five would have required a very technical viewpoint. In addition to the interviews, some pre-established documents were used as supporting data. The empirical findings of this thesis mainly follow the theoretical framework utilised. However, some modifications to the model could be made, mainly related to the order of the different phases. In the revised model, the actions are categorised depending on the time at which they should be conducted, i.e. before, during or after the event. Regardless of the categorisation, the phases can be in a different order and overlapping. In addition, the business network was highly emphasised by the empirical findings and is thus added to the modified model.
Five managerial recommendations can be drawn from the empirical findings of this thesis: (i) the importance of a business network should be highly valued in a mobile marketing process; (ii) clear goals should be defined for mobile marketing actions in order to make sure that everyone involved is aware of them; (iii) interactivity should be perceived as part of mobile marketing communication; (iv) enough time should be allowed for the development of a mobile marketing process in order to exploit all the potential it can offer; and (v) attention should be paid to measuring and analysing matters that are of relevance.