974 results for Control-flow Analysis
Abstract:
This thesis analyses problems related to the applicability, in business environments, of Process Mining tools and techniques. The first contribution is a presentation of the state of the art of Process Mining and a characterization of companies in terms of their "process awareness". The work continues by identifying the circumstances where problems can emerge: data preparation, the actual mining, and the interpretation of results. Further problems are the configuration of parameters by non-expert users and computational complexity. We concentrate on two possible scenarios: "batch" and "on-line" Process Mining. Concerning batch Process Mining, we first investigate the data preparation problem and propose a solution for the identification of "case-ids" whenever this field is not explicitly indicated. We then concentrate on problems at mining time and propose a generalization of a well-known control-flow discovery algorithm that exploits non-instantaneous events. The use of interval-based recording leads to a significant performance improvement. Later on, we report our work on parameter configuration for non-expert users. We present two approaches to select the "best" parameter configuration: one is completely autonomous; the other requires human interaction to navigate a hierarchy of candidate models. Concerning data interpretation and results evaluation, we propose two metrics: a model-to-model metric and a model-to-log metric. Finally, we present an automatic approach for extending a control-flow model with social information, in order to simplify the analysis of these perspectives. The second part of this thesis deals with control-flow discovery algorithms in on-line settings. We propose a formal definition of the problem and two baseline approaches. Two actual mining algorithms are proposed: the first adapts a frequency-counting algorithm to the control-flow discovery problem; the second is a framework of models that can be used for different kinds of streams (stationary versus evolving).
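The on-line discovery idea in the last sentences (adapting a frequency-counting algorithm to control-flow discovery over an event stream) can be illustrated with a minimal sketch in the style of Lossy Counting over directly-follows pairs; the class and method names (DirectlyFollowsCounter, process_event) and the error parameter epsilon are illustrative assumptions, not the thesis's actual algorithm.

# Illustrative sketch only: a Lossy-Counting-style approximation of
# directly-follows frequencies over a stream of (case_id, activity) events.
import math

class DirectlyFollowsCounter:
    def __init__(self, epsilon=0.01):
        self.epsilon = epsilon                    # allowed relative count error
        self.bucket_width = math.ceil(1 / epsilon)
        self.current_bucket = 1
        self.pairs_seen = 0
        self.counts = {}                          # (a, b) -> [count, max_error]
        self.last_activity = {}                   # case_id -> last activity seen

    def process_event(self, case_id, activity):
        prev = self.last_activity.get(case_id)
        self.last_activity[case_id] = activity
        if prev is None:
            return                                # first event of this case
        pair = (prev, activity)
        self.pairs_seen += 1
        if pair in self.counts:
            self.counts[pair][0] += 1
        else:
            self.counts[pair] = [1, self.current_bucket - 1]
        if self.pairs_seen % self.bucket_width == 0:
            # end of bucket: prune pairs whose count cannot exceed the error bound
            self.counts = {p: ce for p, ce in self.counts.items()
                           if ce[0] + ce[1] > self.current_bucket}
            self.current_bucket += 1

    def frequent_pairs(self, support):
        # pairs whose true relative frequency may exceed the support threshold
        threshold = (support - self.epsilon) * self.pairs_seen
        return {p: c for p, (c, _) in self.counts.items() if c >= threshold}

dfc = DirectlyFollowsCounter(epsilon=0.1)
for case, act in [("c1", "a"), ("c1", "b"), ("c2", "a"), ("c2", "b"), ("c1", "c")]:
    dfc.process_event(case, act)
print(dfc.frequent_pairs(support=0.3))

The frequent directly-follows pairs kept by such a counter are the raw material from which a control-flow model can then be derived, with bounded memory regardless of the stream length.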
Abstract:
An optimizing compiler's internal representation fundamentally affects the clarity, efficiency and feasibility of the optimization algorithms employed by the compiler. Static Single Assignment (SSA), the state-of-the-art program representation, has great advantages yet can still be improved. This dissertation explores the domain of single assignment beyond SSA and presents two novel program representations: Future Gated Single Assignment (FGSA) and Recursive Future Predicated Form (RFPF). Both FGSA and RFPF embed control-flow and data-flow information, enabling efficient traversal of program information and thus leading to better and simpler optimizations. We introduce the future value concept, the design basis of both FGSA and RFPF, which permits a consumer instruction to be encountered before the producer of its source operand(s) in a control-flow setting. We show that FGSA is efficiently computable using a series of T1/T2/TR transformations, yielding an expected linear-time algorithm that combines the construction of the pruned single assignment form with liveness analysis, for both reducible and irreducible graphs. As a result, the approach yields an average reduction of 7.7%, and a maximum of 67%, in the number of gating functions compared to the pruned SSA form on the SPEC2000 benchmark suite. We present a solid and near-optimal framework for performing the inverse transformation from single assignment programs. We demonstrate the importance of unrestricted code motion and present RFPF. We develop algorithms which enable instruction movement in acyclic as well as cyclic regions, and show how easily optimizations such as Partial Redundancy Elimination can be performed on RFPF.
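As background to the gating functions mentioned above, here is a minimal sketch of classical SSA renaming on a diamond-shaped control-flow graph; it illustrates standard SSA with a phi-function at the join point, not FGSA or RFPF themselves, and the helper name rename_linear is hypothetical.

# Minimal sketch of classical SSA renaming (standard SSA, not the paper's FGSA/RFPF):
# each assignment to x becomes a fresh version x_1, x_2, ..., and the join point of a
# diamond CFG receives a phi-function merging the versions from either branch.
def rename_linear(block, counters, env):
    """Rename 'dst = expr' statements of one basic block into SSA versions."""
    out = []
    for dst, expr in block:
        expr = [env.get(tok, tok) for tok in expr]   # uses refer to current versions
        counters[dst] = counters.get(dst, 0) + 1
        new_name = f"{dst}_{counters[dst]}"
        env[dst] = new_name
        out.append((new_name, expr))
    return out

counters, env = {}, {}
entry = rename_linear([("x", ["1"])], counters, env)
then_env, else_env = dict(env), dict(env)
then_blk = rename_linear([("x", ["x", "+", "1"])], counters, then_env)
else_blk = rename_linear([("x", ["x", "*", "2"])], counters, else_env)
# join point: a phi selects the reaching definition from whichever branch executed
counters["x"] += 1
phi = (f"x_{counters['x']}", ["phi(", then_env["x"], ",", else_env["x"], ")"])
print(entry, then_blk, else_blk, phi)

In gated single assignment forms, such a merge additionally records the controlling predicate of the branch, which is the information the gating functions in the abstract carry.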
Abstract:
Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, errors quite often occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even back-in-time debuggers do not help answer the question, "Where did this object come from?" The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control-flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back in time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.
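As a rough analogy to the object-flow tracking described above (in Python rather than the Smalltalk/VM setting of Compass, with all class and attribute names hypothetical), one can wrap an object so that every read and write is recorded together with its call site, yielding a back-in-time trail of the code that touched the object.

# Rough illustrative analogy only (hypothetical names, not Compass itself):
# wrap an object so that every attribute read or write is recorded together
# with the function and line that performed it.
import inspect

class FlowTracked:
    def __init__(self, target):
        object.__setattr__(self, "_target", target)
        object.__setattr__(self, "_history", [])

    def _record(self, kind, name):
        caller = inspect.stack()[2]               # the frame that touched us
        self._history.append((kind, name, caller.function, caller.lineno))

    def __getattr__(self, name):
        self._record("read", name)
        return getattr(self._target, name)

    def __setattr__(self, name, value):
        self._record("write", name)
        setattr(self._target, name, value)

class Account:
    def __init__(self):
        self.balance = 0

def deposit(acct, amount):
    acct.balance = acct.balance + amount          # one recorded read, one recorded write

acct = FlowTracked(Account())
deposit(acct, 10)
print(acct._history)   # e.g. [('read', 'balance', 'deposit', ...), ('write', 'balance', 'deposit', ...)]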
Abstract:
OBJECTIVE We sought to evaluate the feasibility of k-t parallel imaging for accelerated 4D flow MRI in the hepatic vascular system by investigating the impact of different acceleration factors. MATERIALS AND METHODS k-t GRAPPA accelerated 4D flow MRI of the liver vasculature was evaluated in 16 healthy volunteers at 3T with acceleration factors R = 3, R = 5, and R = 8 (2.0 × 2.5 × 2.4 mm³, TR = 82 ms), and R = 5 (TR = 41 ms); GRAPPA R = 2 was used as the reference standard. Qualitative flow analysis included grading of 3D streamlines and time-resolved particle traces. Quantitative evaluation assessed velocities, net flow, and wall shear stress (WSS). RESULTS Significant scan time savings were realized for all acceleration factors compared to standard GRAPPA R = 2 (21-71 %) (p < 0.001). Quantification of velocities and net flow offered similar results between k-t GRAPPA R = 3 and R = 5 compared to standard GRAPPA R = 2. Significantly increased leakage artifacts and noise were seen between standard GRAPPA R = 2 and k-t GRAPPA R = 8 (p < 0.001), with significant underestimation of peak velocities and WSS of up to 31 % in the hepatic arterial system (p < 0.05). WSS was significantly underestimated by up to 13 % in all vessels of the portal venous system for k-t GRAPPA R = 5, while significantly higher values were observed for the same acceleration with higher temporal resolution in two veins (p < 0.05). CONCLUSION k-t acceleration of 4D flow MRI is feasible for liver hemodynamic assessment with acceleration factors R = 3 and R = 5, resulting in a scan time reduction of at least 40 % with similar quantitation of liver hemodynamics compared with GRAPPA R = 2.
Abstract:
Ice cores provide a robust reconstruction of past climate. However, development of timescales by annual-layer counting, essential to detailed climate reconstruction and interpretation, on ice cores collected at low-accumulation sites or in regions of compressed ice, is problematic due to closely spaced layers. Ice-core analysis by laser ablation–inductively coupled plasma–mass spectrometry (LA-ICP-MS) provides sub-millimeter-scale sampling resolution (on the order of 100 μm in this study) and the low detection limits (ng L⁻¹) necessary to measure the chemical constituents preserved in ice cores. We present a newly developed cryocell that can hold a 1 m long section of ice core, and an alternative strategy for calibration. Using ice-core samples from central Greenland, we demonstrate the repeatability of multiple ablation passes, highlight the improved sampling resolution, verify the calibration technique and identify annual layers in the chemical profile in a deep section of an ice core where annual layers have not previously been identified using chemistry. In addition, using sections of cores from the Swiss/Italian Alps we illustrate the relationship between Ca, Na and Fe and particle concentration and conductivity, and validate the LA-ICP-MS Ca profile through a direct comparison with continuous flow analysis results.
Abstract:
This work aims to reflect on the forms that control of the public administration takes in our country, with particular emphasis on social control. A review and analysis of various norms and programs, together with their intersection with the different modalities of control, reveal different instances of participation that constitute new fields of citizen intervention. In each and every one of them, access to public information appears as a genuine precondition for participation. These are new instruments and modalities in which, nevertheless, we must highlight the non-binding character of public hearings and the narrow margin left for active participation in the various programs. In any case, these instruments can be improved, and in the future thought must be given to how to transform this incipient participation into an active and vigorous intervention that defines a new State/society relationship.
Abstract:
The presence and abundance of anaerobic ammonium-oxidizing (anammox) bacteria was investigated in continental shelf and slope sediments (300-3000 m water depth) off northwest Africa in a combined approach applying quantitative polymerase chain reaction (q-PCR) analysis of anammox-specific 16S rRNA genes and anammox-specific ladderane biomarker lipids. We used the presence of an intact ladderane monoether lipid with a phosphocholine (PC) headgroup as a direct indicator for living anammox bacteria and compared it with the abundance of ladderane core lipids derived from both living and dead bacterial biomass. All investigated sediments contained ladderane lipids, both intact and core lipids, in agreement with the presence of anammox-specific 16S rRNA gene copies, indicating that anammox occurs at all sites. Concentrations of ladderane core lipids in core top sediments varied between 0.3 and 97 ng g⁻¹ sediment, with the highest concentrations detected at the sites located on the shelf at shallower water depths between 300 and 500 m. In contrast, the C20 [3]-ladderane monoether-PC lipid was most abundant in a core top sediment from 1500 m water depth. Both anammox-specific 16S rRNA gene copy numbers and the concentration of the C20 [3]-ladderane monoether-PC lipid increased downcore in sediments located at greater water depths, showing the highest concentrations of 1.2 × 10⁸ copies g⁻¹ sediment and 30 pg g⁻¹ sediment, respectively, at the deepest station at 3000 m water depth. This suggests that the relative abundance of anammox bacteria is higher in sediments at intermediate to deep water depths, where carbon mineralization rates are lower but where anammox is probably more important than denitrification.
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is, at this point, relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Existing proposals generally restrict, in one way or another, the classes of programs which can be analyzed. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially the recent ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs which use the full power of the language.
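To make the notion of abstract-interpretation-based data-flow analysis concrete, here is a deliberately generic sketch: a sign-domain analysis of a tiny imperative loop, not the Prolog-specific domains discussed in the paper. The fixpoint iteration with a join at the loop head is the machinery such frameworks share.

# Generic illustration of abstract interpretation (not the Prolog-specific analysis
# described above): sets of signs ordered by inclusion form the abstract domain,
# and iteration to a fixpoint over "x = 0; while ...: x = x + 1" proves that x is
# never negative at the loop head.
SIGNS = frozenset({"neg", "zero", "pos"})

def abstract_add(a, b):
    """Abstract addition on sets of signs."""
    table = {
        ("neg", "neg"): {"neg"}, ("neg", "zero"): {"neg"}, ("zero", "neg"): {"neg"},
        ("pos", "pos"): {"pos"}, ("pos", "zero"): {"pos"}, ("zero", "pos"): {"pos"},
        ("zero", "zero"): {"zero"},
        ("neg", "pos"): set(SIGNS), ("pos", "neg"): set(SIGNS),
    }
    out = set()
    for sa in a:
        for sb in b:
            out |= table[(sa, sb)]
    return frozenset(out)

x = frozenset({"zero"})                          # abstract state after "x = 0"
while True:
    body = abstract_add(x, frozenset({"pos"}))   # effect of "x = x + 1"
    joined = x | body                            # join of "loop not taken" / "taken"
    if joined == x:                              # fixpoint reached
        break
    x = joined

print(x)   # frozenset({'zero', 'pos'}): x is provably non-negative at the loop head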
Abstract:
Global data-flow analysis of (constraint) logic programs, which is generally based on abstract interpretation [7], is reaching a comparatively high level of maturity. A natural question is whether it is time for its routine incorporation in standard compilers, something which, beyond a few experimental systems, has not happened to date. Such incorporation arguably makes good sense only if: • the range of applications of global analysis is large enough to justify the additional complication in the compiler, and • global analysis technology can deal with all the features of "practical" languages (e.g., the ISO-Prolog built-ins) and "scales up" for large programs. We present a tutorial overview of a number of concepts and techniques directly related to the issues above, with special emphasis on the first one. In particular, we concentrate on novel uses of global analysis during program development and debugging, rather than on the more traditional application area of program optimization. The idea of using abstract interpretation for validation and diagnosis has been studied in the context of imperative programming [2] and also of logic programming. The latter work includes issues such as using approximations to reduce the burden posed on programmers by declarative debuggers [6, 3] and automatically generating and checking assertions [4, 5] (which includes the more traditional type checking of strongly typed languages, such as Gödel or Mercury [1, 8, 9]). We also review some solutions for scalability, including modular analysis, incremental analysis, and widening. Finally, we discuss solutions for dealing with meta-predicates, side-effects, delay declarations, constraints, dynamic predicates, and other such features which may appear in practical languages. In the discussion we will draw both from the literature and from our experience and that of others in the development and use of the CIAO system analyzer. In order to emphasize the practical aspects of the solutions discussed, the presentation of several concepts will be illustrated by examples run on the CIAO system, which makes extensive use of global analysis and assertions.
Abstract:
Concurrent data types are concurrent implementations of classical data abstractions, specifically designed to exploit the high degree of parallelism available in modern multiprocessor and multi-core architectures. The correct manipulation of concurrent data types is essential for the overall correctness of the software systems built using them. A major difficulty in designing and verifying concurrent data types arises from the need to reason about any number of threads invoking the data type simultaneously, which requires considering parametrized systems. In this work we study the formal verification of temporal properties of parametrized concurrent systems, with a special focus on programs that manipulate concurrent data structures. The main difficulty in reasoning about concurrent parametrized systems comes from the combination of their inherently high concurrency and the manipulation of dynamic memory. This parametrized verification problem is very challenging, because it requires reasoning about complex concurrent data structures being accessed and modified by threads which simultaneously manipulate the heap using unstructured synchronization methods. In this work, we present a formal framework based on deductive methods which is capable of dealing with the verification of safety and liveness properties of concurrent parametrized systems that manipulate complex data structures. Our framework includes special proof rules and techniques adapted for parametrized systems, which work in collaboration with specialized decision procedures for complex data structures. A novel aspect of our framework is that it cleanly separates the analysis of the program control flow from the analysis of the data being manipulated. The program control flow is analyzed using deductive proof rules and verification techniques specifically designed for coping with parametrized systems. Starting from a concurrent program and a temporal specification, our techniques generate a finite collection of verification conditions whose validity entails the satisfaction of the temporal specification by any client system, regardless of the number of threads. The verification conditions correspond to the data manipulation. We study the design of specialized decision procedures to deal with these verification conditions fully automatically. We investigate decidable theories capable of describing rich properties of complex pointer-based data types such as stacks, queues, lists and skiplists. For each of these theories we present a decision procedure and its practical implementation on top of existing SMT solvers. These decision procedures are ultimately used to automatically verify the verification conditions generated by our parametrized verification techniques. Finally, we show how, using our framework, it is possible to prove not only safety but also liveness properties of concurrent versions of some mutual exclusion protocols and of programs that manipulate concurrent data structures.
The approach we present in this work is very general, and can be applied to verify a wide range of similar concurrent data types.
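The final step described above (discharging machine-generated verification conditions with an SMT solver) can be illustrated with a toy condition checked through Z3's Python bindings; the condition shown is invented for illustration and is unrelated to the concurrent data structures verified in the thesis.

# Toy illustration of discharging a verification condition with an SMT solver
# (the condition is made up; it is not one generated by the framework above).
# A verification condition is valid iff its negation is unsatisfiable.
from z3 import Ints, Implies, And, Not, Solver, unsat

x, y = Ints("x y")
# Example VC: if 0 <= x < y then 0 <= x + 1 <= y (e.g., a loop-index bound)
vc = Implies(And(x >= 0, x < y), And(x + 1 >= 0, x + 1 <= y))

s = Solver()
s.add(Not(vc))                       # search for a counterexample
if s.check() == unsat:
    print("verification condition is valid")
else:
    print("counterexample:", s.model())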
Abstract:
We used 2D protein gel electrophoresis and DNA microarray technologies to systematically analyze genes under glucose repression in Bacillus subtilis. In particular, we focused on genes expressed after the shift from glycolytic to gluconeogenic metabolism at the mid-logarithmic phase of growth in a nutrient sporulation medium, which remained repressed by the addition of glucose. We also examined whether or not glucose repression of these genes was mediated by CcpA, the catabolite control protein of this bacterium. The wild-type and ccpA1 cells were grown with and without glucose, and their proteomes and transcriptomes were compared. 2D gel electrophoresis allowed us to identify 11 proteins whose synthesis was under glucose repression. Of these proteins, the synthesis of four (IolA, I, S and PckA) was under CcpA-independent control. Microarray analysis enabled us to detect 66 glucose-repressive genes, 22 of which (glmS, acoA, C, yisS, speD, gapB, pckA, yvdR, yxeF, iolA, B, C, D, E, F, G, H, I, J, R, S and yxbF) were at least partially under CcpA-independent control. Furthermore, we found that CcpA and IolR, a repressor of the iol divergon, were involved in the glucose repression of the synthesis of inositol dehydrogenase, encoded by iolG and included in the above list. The CcpA-independent glucose repression of the iol genes appeared to be explained by inducer exclusion.