950 results for Symbolic Execution
Abstract:
Symbolic execution is a powerful program analysis technique, but it is very challenging to apply to programs built using event-driven frameworks, such as Android. The main reason is that the framework code itself is too complex to symbolically execute. The standard solution is to manually create a framework model that is simpler and more amenable to symbolic execution. However, developing and maintaining such a model by hand is difficult and error-prone. We claim that we can leverage program synthesis to introduce a high degree of automation to the process of framework modeling. To support this thesis, we present three pieces of work. First, we introduced SymDroid, a symbolic executor for Android. While Android apps are written in Java, they are compiled to the Dalvik bytecode format. Instead of analyzing an app's Java source, which may not be available, or decompiling from Dalvik back to Java, which requires significant engineering effort and introduces yet another source of potential bugs in an analysis, SymDroid works directly on Dalvik bytecode. Second, we introduced Pasket, a new system that takes a first step toward automatically generating Java framework models to support symbolic execution. Pasket takes as input the framework API and tutorial programs that exercise the framework. From these artifacts and Pasket's internal knowledge of design patterns, Pasket synthesizes an executable framework model by instantiating design patterns, such that the behavior of the synthesized model on the tutorial programs matches that of the original framework. Lastly, in order to scale program synthesis to framework models, we devised adaptive concretization, a novel program synthesis algorithm that combines the best of the two major synthesis strategies: symbolic search, i.e., using SAT or SMT solvers, and explicit search, e.g., stochastic enumeration of possible solutions. Adaptive concretization parallelizes multiple sub-synthesis problems by partially concretizing highly influential unknowns in the original synthesis problem. Thanks to adaptive concretization, Pasket can generate a large-scale model, e.g., thousands of lines of code. In addition, we have used an Android model synthesized by Pasket and found that the model is sufficient to allow SymDroid to execute a range of apps.
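To make the last idea concrete, here is a toy sketch (not Pasket's actual implementation, which relies on SAT/SMT-based sketching) of the intuition behind adaptive concretization: a highly influential unknown is concretized up front, splitting the synthesis problem into independent sub-problems that could be solved in parallel. The example problem, names, and domains are illustrative assumptions.

```python
# Toy synthesis problem: find unknowns (op, c) such that f(x) = x <op> c
# reproduces the observed input/output examples below (here f(x) = x + 2).
EXAMPLES = [(1, 3), (4, 6), (10, 12)]
OPS = {"add": lambda x, c: x + c, "mul": lambda x, c: x * c}
CONSTS = range(0, 5)

def solve_subproblem(op):
    """Search the remaining unknown once the influential one ('op') is fixed."""
    for c in CONSTS:
        if all(OPS[op](x, c) == y for x, y in EXAMPLES):
            return (op, c)
    return None

# Treating 'op' as the highly influential unknown: concretizing it up front
# splits the search into independent sub-problems that could run in parallel
# (solved sequentially here for simplicity).
solutions = [s for s in (solve_subproblem(op) for op in OPS) if s is not None]
print(solutions)   # [('add', 2)]
```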
Abstract:
Testing is nowadays the most widely used technique to validate software and assess its quality. It is integrated into all practical software development methodologies and plays a crucial role in the success of any software project. From the smallest units of code to the most complex components, their integration into a software system, and later deployment, all pieces of a software product must be tested thoroughly before the product can be released to a production environment. The main limitation of software testing is that it remains a mostly manual task, representing a large fraction of the total development cost. In this scenario, test automation is paramount to alleviate such high costs. Test case generation (TCG) is the process of automatically generating test inputs that achieve high coverage of the system under test. Among a wide variety of approaches to TCG, this thesis focuses on structural (white-box) TCG, where one of the most successful enabling techniques is symbolic execution. In symbolic execution, the program under test is executed with symbolic expressions as input arguments rather than concrete values. This thesis builds on a general constraint-based TCG framework for imperative object-oriented programs (e.g., Java) based on Constraint Logic Programming (CLP): the imperative program under test is first translated into an equivalent CLP program, which is then symbolically executed using the standard evaluation mechanisms of CLP, extended with special operations for handling dynamically allocated data structures. Improving the scalability and efficiency of symbolic execution constitutes a major challenge. It is well known that symbolic execution quickly becomes impractical due to the large number of execution paths that must be explored and the size of the constraints that must be handled. Moreover, symbolic-execution-based TCG tends to produce an unnecessarily large number of test cases when applied to medium or large programs. The contributions of this thesis can be summarized as follows. (1) A compositional approach to CLP-based TCG is developed, which alleviates the inter-procedural path explosion problem by separately analyzing each component (e.g., method) of the program under test, storing the results as method summaries, and incrementally reusing them to obtain whole-program results. A similar compositional strategy based on program specialization (partial evaluation) is also developed for the state-of-the-art symbolic execution tool Symbolic PathFinder (SPF). (2) Resource-driven TCG is proposed as a methodology that uses resource consumption information to drive symbolic execution towards those parts of the program under test that comply with a given resource policy, avoiding the exploration of the parts that violate it. (3) A generic methodology is proposed to guide symbolic execution towards the most interesting parts of a program, using abstractions as oracles to steer the execution according to structural selection criteria towards the parts that interest the programmer or tester most. (4) A new heap-constraint solver is proposed, which efficiently handles heap-related constraints and aliasing of references during symbolic execution and greatly outperforms the standard technique used for this purpose, lazy initialization. (5) All of the above techniques have been implemented in the PET system (the compositional approach has also been implemented in the SPF tool). Experimental evaluation has confirmed that they considerably improve the scalability and efficiency of symbolic execution and TCG.
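As a minimal illustration of the white-box TCG setting described above (not the CLP-based machinery of PET or SPF), the following sketch enumerates the paths of a tiny hand-written program, builds a path condition per path, and picks one concrete input per feasible path by brute force; a real system would use a constraint solver instead. The example program and the small input domain are assumptions for illustration.

```python
# Tiny program under test (written out by hand as its branch structure):
#     if x < 0: x = -x
#     if x > y: return x
#     else:     return y

def path_conditions():
    """One predicate over a concrete input (x, y) per branch-outcome combination."""
    conds = []
    for neg in (True, False):          # outcome of the first branch: x < 0 ?
        for gt in (True, False):       # outcome of the second branch, after the first
            def cond(x, y, neg=neg, gt=gt):
                xp = -x if neg else x
                return (x < 0) == neg and (xp > y) == gt
            conds.append(cond)
    return conds

def generate_tests(domain=range(-3, 4)):
    """Pick one concrete input per feasible path; a solver would do this in practice."""
    tests = []
    for cond in path_conditions():
        witness = next(((x, y) for x in domain for y in domain if cond(x, y)), None)
        if witness is not None:
            tests.append(witness)
    return tests

print(generate_tests())   # four inputs, one exercising each feasible path
```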
Abstract:
The term "Logic Programming" refers to a variety of computer languages and execution models which are based on the traditional concept of Symbolic Logic. The expressive power of these languages promises to be of great assistance in facing the programming challenges of present and future symbolic processing applications in Artificial Intelligence, Knowledge-based systems, and many other areas of computing. The sequential execution speed of logic programs has been greatly improved since the advent of the first interpreters. However, higher inference speeds are still required in order to meet the demands of applications such as those contemplated for next-generation computer systems. The execution of logic programs in parallel is currently considered a promising strategy for attaining such inference speeds. Logic Programming in turn appears as a suitable programming paradigm for parallel architectures because of the many opportunities for parallel execution present in the implementation of logic programs. This dissertation presents an efficient parallel execution model for logic programs. The model is described from the source language level down to an "Abstract Machine" level suitable for direct implementation on existing parallel systems or for the design of special-purpose parallel architectures. Few assumptions are made at the source language level and therefore the techniques developed and the general Abstract Machine design are applicable to a variety of logic (and also functional) languages. These techniques offer efficient solutions to several areas of parallel Logic Programming implementation previously considered problematic or a source of considerable overhead, such as the detection and handling of variable binding conflicts in AND-Parallelism, the specification of control and management of the execution tree, the treatment of distributed backtracking, and goal scheduling and memory management issues. A parallel Abstract Machine design is offered, specifying data areas, operation, and a suitable instruction set. This design is based on extending to a parallel environment the techniques introduced by the Warren Abstract Machine, which have already made very fast and space-efficient sequential systems a reality. Therefore, the model herein presented is capable of retaining sequential execution speed similar to that of high-performance sequential systems, while extracting additional gains in speed by efficiently implementing parallel execution. These claims are supported by simulations of the Abstract Machine on sample programs.
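One of the issues named above, detecting variable binding conflicts in AND-Parallelism, is commonly handled with an independence check: two goals may run in parallel if they share no unbound variables at run time. The sketch below illustrates only that check, with an assumed, highly simplified goal representation; it is not the dissertation's Abstract Machine.

```python
def unbound_vars(goal, bindings):
    """Variables of a goal that are still unbound under the current bindings."""
    return {v for v in goal["vars"] if v not in bindings}

def independent(goal1, goal2, bindings):
    """Goals with disjoint sets of unbound variables cannot conflict on
    bindings, so they may safely be scheduled on different workers."""
    return unbound_vars(goal1, bindings).isdisjoint(unbound_vars(goal2, bindings))

# Body of  p(X, Y) :- q(X, Z), r(Y, Z).  with X and Y already bound by the caller:
q = {"name": "q", "vars": {"X", "Z"}}
r = {"name": "r", "vars": {"Y", "Z"}}
print(independent(q, r, {"X": 1, "Y": 2}))            # False: both goals may bind Z
print(independent(q, r, {"X": 1, "Y": 2, "Z": 3}))    # True: Z is already bound
```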
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física
Abstract:
This paper presents the theoretical background, the architecture (using the "4+1" model), and the use of AdapLib, a library for the execution of adaptive devices. The library was created to be faithful to adaptive device theory and to allow easy extension to the specific details of solutions that employ this kind of device. As an example, a case study is presented in which the library was used to create a proof of concept for monitoring and diagnosing problems in an online news portal.
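The abstract does not show AdapLib's API, so the following is only a rough, hypothetical sketch of the core idea of an adaptive device: a rule-driven automaton whose rules can carry adaptive actions that rewrite the rule set itself during execution. All class, rule, and state names are illustrative.

```python
class AdaptiveAutomaton:
    def __init__(self, rules, state):
        # rules: {(state, symbol): (next_state, adaptive_action or None)}
        self.rules = dict(rules)
        self.state = state

    def step(self, symbol):
        next_state, action = self.rules[(self.state, symbol)]
        if action is not None:
            action(self.rules)          # the device modifies its own rule set
        self.state = next_state
        return self.state

# After reading 'a' once in state "s0", the device rewires that rule to go to "s2".
def learn_a(rules):
    rules[("s0", "a")] = ("s2", None)

device = AdaptiveAutomaton({("s0", "a"): ("s1", learn_a),
                            ("s1", "a"): ("s0", None),
                            ("s2", "a"): ("s2", None)}, "s0")
print([device.step("a") for _ in range(3)])   # ['s1', 's0', 's2']
```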
Abstract:
Movement-related potentials (MRPs) reflect increasing cortical activity related to the preparation and execution of voluntary movement. Execution and preparatory components may be separated by comparing MRPs recorded from actual and imagined movement. Imagined movement initiates preparatory processes, but not motor execution activity. MRPs are maximal over the supplementary motor area (SMA), an area of the cortex involved in the planning and preparation of movement. The SMA receives input from the basal ganglia, which are affected in Huntington's disease (HD), a hyperkinetic movement disorder. In order to further elucidate the effects of the disorder upon the cortical activity relating to movement, MRPs were recorded from ten HD patients and ten age-matched controls whilst they performed, and imagined performing, a sequential button-pressing task. HD patients produced MRPs of significantly reduced size for both performed and imagined movement. The component relating to movement execution was obtained by subtracting the MRP for imagined movement from the MRP for performed movement, and was found to be normal in HD. The movement preparation component was found by subtracting the MRP for a control condition of watching the visual cues from the MRP for imagined movement. This preparation component in HD was reduced in early slope, peak amplitude, and post-peak slope. This study therefore reported abnormal MRPs in HD, particularly in terms of the components relating to movement preparation, and this finding may further explain the movement deficits reported in the disease.
Abstract:
Studies of functional brain imaging in humans and single-cell recordings in monkeys have generally shown preferential involvement of the medially located supplementary motor area (SMA) in self-initiated movement and the lateral premotor cortex in externally cued movement. Studies of event-related cortical potentials recorded during movement preparation, however, generally show increased cortical activity prior to self-initiated movements but little activity at early stages prior to movements that are externally cued at unpredictable times. In this study, the spatial location and relative timing of activation for self-initiated and externally triggered movements were examined using rapid event-related functional MRI. Twelve healthy right-handed subjects were imaged while performing a brief finger sequence movement (three rapid alternating button presses: index-middle-index finger) made either in response to an unpredictably timed auditory cue (between 8 and 24 s after the previous movement) or at self-paced irregular intervals. Both movement conditions involved similarly strong activation of medial motor areas, including the pre-SMA, SMA proper, and rostral cingulate cortex, as well as activation within contralateral primary motor, superior parietal, and insula cortex. Activation within the basal ganglia was found for self-initiated movements only, while externally triggered movements involved additional bilateral activation of primary auditory cortex. Although the level of SMA and cingulate cortex activation did not differ significantly between movement conditions, the timing of the hemodynamic response within the pre-SMA was significantly earlier for self-initiated compared with externally triggered movements. This clearly reflects involvement of the pre-SMA in early processes associated with the preparation for voluntary movement.
Abstract:
Objective To evaluate the influence of oral contraceptives (OCs) containing 20 μg ethinylestradiol (EE) and 150 μg gestodene (GEST) on the autonomic modulation of heart rate (HR) in women. Methods One hundred and fifty-five women aged 24 ± 2 years were divided into four groups according to their physical activity and the use or not of an OC: active-OC, active-non-OC (NOC), sedentary-OC, and sedentary-NOC. The heart rate was registered in real time from the electrocardiogram signal for 15 minutes in the supine position. Heart rate variability (HRV) was analysed using Shannon's entropy (SE), conditional entropy (complexity index [CInd] and normalised CInd [NCI]), and symbolic analysis (0V%, 1V%, 2LV%, and 2ULV%). For statistical analysis, the Kruskal-Wallis test with Dunn's post hoc test and the Wilcoxon test were applied (p < 0.05 was considered significant). Results Treatment with this OC caused no significant changes in SE, CInd, NCI, or symbolic analysis in either active or sedentary groups. Active groups presented higher values for SE and 2ULV%, and lower values for 0V%, when compared to sedentary groups (p < 0.05). Conclusion HRV patterns differed depending on lifestyle; the non-linear method applied was highly reliable for identifying these changes. The use of OCs containing 20 μg EE and 150 μg GEST does not influence HR autonomic modulation.
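For readers unfamiliar with the symbolic indices named above (0V%, 1V%, 2LV%, 2ULV%), the sketch below follows one common symbolic-analysis scheme for heart rate variability: quantize the RR-interval series into six levels and classify three-beat patterns by the number and direction of their variations. The specific parameter choices (six levels, three-beat words, the toy input series) are assumptions, not details taken from this study.

```python
def symbolic_hrv(rr, levels=6):
    """Percentages of 0V, 1V, 2LV and 2ULV three-beat patterns in an RR series."""
    lo, hi = min(rr), max(rr)
    symbols = [min(int((x - lo) / (hi - lo) * levels), levels - 1) for x in rr]
    counts = {"0V": 0, "1V": 0, "2LV": 0, "2ULV": 0}
    for a, b, c in zip(symbols, symbols[1:], symbols[2:]):
        d1, d2 = b - a, c - b
        if d1 == 0 and d2 == 0:
            counts["0V"] += 1                 # no variation
        elif d1 == 0 or d2 == 0:
            counts["1V"] += 1                 # exactly one variation
        elif (d1 > 0) == (d2 > 0):
            counts["2LV"] += 1                # two like (same-direction) variations
        else:
            counts["2ULV"] += 1               # two unlike (opposite) variations
    total = sum(counts.values())
    return {k: 100.0 * v / total for k, v in counts.items()}

# Toy RR-interval series in milliseconds (illustrative values only).
print(symbolic_hrv([800, 810, 805, 805, 820, 790, 800, 800]))
```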
Abstract:
The foramen of Vesalius (FV) is located in the greater wing of the sphenoid bone, between the foramen ovale (FO) and the foramen rotundum in an intracranial view. The FO allows the passage of the mandibular branch of the trigeminal nerve, which is the target of trigeminal radiofrequency rhizotomy. We analyzed its location, morphology, morphometry, and interrelation with other foramina. Four hundred macerated adult human skulls were examined. A digital microscope (Dino-Lite Plus®) was used to capture images of the FV. A digital caliper was used to measure the distances between the FV and other foramina (the FO, the foramen spinosum, and the carotid canal) in an extracranial view of the skull base. Of the 400 analyzed skulls, the FV was identified in 135 (33.75%) and absent on both sides in 265 (66.25%). The FV was present bilaterally in 15.5% of the skulls. A unilateral foramen was found in 18.25% of the skulls, 7.75% on the right side and 10.5% on the left side. The average diameter of the FV was 0.65 mm: 0.63 mm on the right side and 0.67 mm on the left side. We verified that the positive correlations among the three analyzed distances were statistically significant. This study intends to offer specific anatomical data and morphological patterns (macroscopic and mesoscopic) to increase the understanding of FV features such as frequency, incidence, and the relevant distances to adjacent foramina.
Abstract:
In this work we investigate the population dynamics of cooperative hunting, extending the McCann and Yodzis model for a three-species food chain system with a predator, a prey, and a resource species. The new model considers that a given fraction sigma of the predators cooperates in hunting the prey, while the remaining fraction 1 - sigma hunts without cooperation. We use the theory of symbolic dynamics to study the topological entropy and the parameter-space ordering of the kneading sequences associated with one-dimensional maps that reproduce significant aspects of the dynamics of the species under several degrees of cooperative hunting. Our model also allows us to investigate the so-called deterministic extinction via chaotic crisis and transient chaos in the framework of cooperative hunting. The symbolic sequences allow us to identify a critical boundary in the parameter spaces (K, C_0) and (K, sigma) which separates two scenarios: (i) all-species coexistence and (ii) predator extinction via chaotic crisis. We show that the crisis value of the carrying capacity K_c decreases with increasing sigma, indicating that predator populations with a high degree of cooperative hunting are more sensitive to chaotic crises. We also show that the control method of Dhamala and Lai [Phys. Rev. E 59, 1646 (1999)] can sustain the chaotic behavior after the crisis for systems with cooperative hunting. We finally analyze and quantify the inner structure of the target regions obtained with this control method for wider parameter values beyond the crisis, showing a power-law dependence of the extinction transients on such critical parameters.
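The following toy sketch illustrates the symbolic-dynamics machinery referred to above, using the logistic map as a stand-in for the paper's one-dimensional maps (the map, parameters, and word length are assumptions): an orbit is encoded as an L/R itinerary relative to the critical point, and topological entropy is estimated from the growth rate of the number of distinct symbolic words.

```python
from math import log

def itinerary(f, x0, critical, n, transient=500):
    """Symbolic (L/R) itinerary of an orbit of a unimodal map."""
    x = x0
    for _ in range(transient):                 # discard transient iterations
        x = f(x)
    symbols = []
    for _ in range(n):
        symbols.append("L" if x < critical else "R")
        x = f(x)
    return "".join(symbols)

def entropy_estimate(symbols, word_len=10):
    """h ~ (1/n) log(#distinct symbolic words of length n observed)."""
    words = {symbols[i:i + word_len] for i in range(len(symbols) - word_len)}
    return log(len(words)) / word_len

logistic = lambda x: 4.0 * x * (1.0 - x)       # fully chaotic case, entropy log 2
s = itinerary(logistic, 0.3, critical=0.5, n=20000)
print(entropy_estimate(s))                     # close to log(2) ~ 0.693
```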
Abstract:
This paper presents an algorithm to efficiently generate the state space of systems specified using the IOPT Petri net modeling formalism. IOPT nets are a non-autonomous Petri net class based on Place-Transition nets, with an extended set of features designed to allow the rapid prototyping and synthesis of system controllers through an existing hardware-software co-design framework. To obtain coherent and deterministic operation, IOPT nets use a maximal-step execution semantics where, in a single execution step, all enabled transitions fire simultaneously. This increases the resulting state-space complexity and can cause an arc "explosion" effect: real-world applications with several million states may reach an order of magnitude more arcs, leading to the need for high-performance state-space generation algorithms. The proposed algorithm applies a compilation approach: it reads a PNML file containing one IOPT model and automatically generates an optimized C program that calculates the corresponding state space.
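The sketch below illustrates, in much simplified form, what state-space generation under a maximal-step semantics looks like: at each step every enabled transition fires simultaneously, and new markings are explored breadth-first. It is only a conceptual illustration with an assumed toy net; IOPT nets additionally involve input signals and events, priorities, and conflict resolution, and the actual tool compiles the PNML model to an optimized C program rather than interpreting it.

```python
from collections import deque

# transitions: name -> (tokens consumed per place, tokens produced per place)
TRANSITIONS = {
    "t1": ({"p1": 1}, {"p2": 1}),
    "t2": ({"p2": 1}, {"p3": 1}),
    "t3": ({"p3": 1}, {"p1": 1}),
}

def enabled(marking):
    return [t for t, (pre, _) in TRANSITIONS.items()
            if all(marking.get(p, 0) >= n for p, n in pre.items())]

def maximal_step(marking):
    """Fire all enabled transitions simultaneously (assumes they do not conflict)."""
    new = dict(marking)
    for t in enabled(marking):
        pre, post = TRANSITIONS[t]
        for p, n in pre.items():
            new[p] -= n
        for p, n in post.items():
            new[p] = new.get(p, 0) + n
    return new

def state_space(initial):
    """Breadth-first exploration of all reachable markings and maximal-step arcs."""
    seen, frontier, arcs = set(), deque([initial]), []
    while frontier:
        m = frontier.popleft()
        key = tuple(sorted(m.items()))
        if key in seen:
            continue
        seen.add(key)
        succ = maximal_step(m)
        arcs.append((m, succ))
        frontier.append(succ)
    return seen, arcs

states, arcs = state_space({"p1": 1, "p2": 0, "p3": 0})
print(len(states), "states,", len(arcs), "maximal-step arcs")
```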
Abstract:
This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas of Lukasiewicz logic. For this we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network having as activation function the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows the easy injection of formulas into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, in which, at each learning iteration, we restricted the dissemination of knowledge in the network structure. This makes the descriptive power of the produced neural networks similar to that of the Lukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated network, the method simplifies it in a pruning phase using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For tests on real data, we selected the Mushrooms data set, available in the UCI Machine Learning Repository.
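As a concrete instance of the fact the method relies on, each Lukasiewicz connective can be computed by a single "neuron" whose activation is the identity truncated to [0, 1]; the sketch below spells this out (the helper names are illustrative).

```python
def neuron(weights, bias):
    """Affine combination followed by the identity truncated to [0, 1]."""
    return lambda *xs: min(1.0, max(0.0, sum(w * x for w, x in zip(weights, xs)) + bias))

luk_and  = neuron([1, 1], -1)   # x (*) y = max(0, x + y - 1)
luk_or   = neuron([1, 1],  0)   # x (+) y = min(1, x + y)
luk_impl = neuron([-1, 1], 1)   # x -> y  = min(1, 1 - x + y)
luk_not  = neuron([-1], 1)      # ~x      = 1 - x

# 0.3, 1.0, 0.9 and 0.3 respectively (up to floating-point rounding):
print(luk_and(0.7, 0.6), luk_or(0.7, 0.6), luk_impl(0.7, 0.6), luk_not(0.7))
```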