918 results for control-flow checking
Abstract:
Branch divergence is a commonly occurring performance problem in GPGPUs, in which the execution of diverging branches is serialized so that only one control-flow path executes at a time. The existing hardware mechanism, which reconverges threads using a stack, causes duplicate execution of code for unstructured control-flow graphs. The stack mechanism also cannot effectively exploit the parallelism available among diverging branches, and the amount of nested divergence allowed is limited by the depth of the branch-divergence stack. In this paper we propose a simple and elegant transformation that handles all of the above problems. The transformation converts an unstructured CFG to a structured CFG without duplicating user code, incurring only a linear increase in the number of basic blocks and in the number of instructions. Our solution linearizes the CFG using a predicate variable. This mechanism reconverges divergent threads as early as possible and also reduces the depth of the reconvergence stack. The parallelism available in nested branches can be extracted effectively by scheduling the basic blocks to reduce the effect of stalls due to memory accesses, and the transformation can also increase the execution efficiency of nested loops with different trip counts for different threads. We implemented the proposed transformation at the PTX level using the Ocelot compiler infrastructure and evaluated it on various benchmarks, showing that it is effective in handling the performance problems caused by divergence in unstructured CFGs.
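As a hedged illustration of the kind of transformation described above (a toy C sketch with invented block names, not the paper's algorithm or its PTX output): the unstructured region below would normally require duplicating block C to become structured; introducing a predicate variable linearizes it instead, so C appears exactly once.

/* Hypothetical example. unstructured(): block C is shared by the A and B
 * paths while the B path may also bypass it, so structuring by duplication
 * would copy C. structured(): the region is linearized with a predicate
 * variable, so C appears once and runs only when the predicate allows it. */
#include <stdio.h>

static int unstructured(int x) {
    int r = 0;
    if (x > 0) goto B;
    r += 1;                      /* block A */
    goto C;
B:  r += 10;                     /* block B */
    if (x > 5) goto D;           /* edge that bypasses C */
C:  r += 100;                    /* block C (two predecessors: A and B) */
D:  r += 1000;                   /* block D */
    return r;
}

static int structured(int x) {
    int r = 0;
    int run_C;                   /* predicate variable introduced by the transformation */
    if (x > 0) {
        r += 10;                 /* block B */
        run_C = !(x > 5);
    } else {
        r += 1;                  /* block A */
        run_C = 1;
    }
    if (run_C)
        r += 100;                /* block C, not duplicated */
    r += 1000;                   /* block D */
    return r;
}

int main(void) {
    for (int x = -2; x <= 8; x++)
        printf("x=%2d  unstructured=%4d  structured=%4d\n",
               x, unstructured(x), structured(x));
    return 0;
}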
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. Finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization-noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization-noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values.

A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise-injection technique that groups the signals in the system, introduces the noise terms for each group independently and then combines the results. In this way, the number of noise sources in the system at any given time is kept under control and the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results caused by the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level for the final results of the optimization process, we can use more relaxed levels, and therefore considerably fewer samples per simulation, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to ×240 for small/medium-sized problems.

Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that implements the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new system-modelling and word-length optimization techniques. We describe its workflow, justify the design decisions taken, explain its public API and give a step-by-step demonstration of its operation. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
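To make the Monte-Carlo side of the word-length problem concrete, the following is a minimal C sketch, assuming a toy datapath y = 0.6·x + 0.3·x² and uniform inputs in [-1, 1]; it is illustrative only and is not HOPLITE code. It estimates, by simulation, the round-off noise power for several fractional word-lengths, i.e. the per-configuration cost that the interpolative and incremental methods above aim to reduce.

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Round v to the nearest multiple of 2^-f (round-to-nearest quantization). */
static double quantize(double v, int f) {
    double s = ldexp(1.0, f);                 /* 2^f */
    return floor(v * s + 0.5) / s;
}

/* Toy datapath y = 0.6*x + 0.3*x*x with every intermediate result
 * quantized to f fractional bits. */
static double datapath_fixed(double x, int f) {
    double a = quantize(0.6, f), b = quantize(0.3, f);
    double t1 = quantize(a * x, f);
    double t2 = quantize(b * quantize(x * x, f), f);
    return quantize(t1 + t2, f);
}

static double datapath_ref(double x) { return 0.6 * x + 0.3 * x * x; }

int main(void) {
    const int samples = 100000;               /* Monte-Carlo sample count */
    srand(1);
    for (int f = 6; f <= 14; f += 2) {
        double power = 0.0;
        for (int i = 0; i < samples; i++) {
            double x = 2.0 * rand() / RAND_MAX - 1.0;   /* uniform in [-1, 1] */
            double e = datapath_fixed(quantize(x, f), f) - datapath_ref(x);
            power += e * e;
        }
        printf("f = %2d fractional bits -> estimated noise power %.3e\n",
               f, power / samples);
    }
    return 0;
}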
Abstract:
The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is a growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors are detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show a significant increase in system reliability, exceeding two orders of magnitude in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly acceptable for low-cost systems.
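A common way to realize the software data-redundancy side of such an approach is to triplicate critical variables and vote on every read; the C sketch below illustrates that general idea only (the paper's exact scheme, and its reuse of the on-chip debug interface for control-flow checking, are not reproduced).

#include <stdint.h>
#include <stdio.h>

/* A critical variable stored three times; a single upset in one copy is
 * out-voted (and scrubbed) on the next read. */
typedef struct { uint32_t c[3]; } tmr_u32;

static void tmr_write(tmr_u32 *v, uint32_t x) { v->c[0] = v->c[1] = v->c[2] = x; }

static uint32_t tmr_read(tmr_u32 *v) {
    /* bitwise majority vote: a bit is set iff at least two copies agree on it */
    uint32_t m = (v->c[0] & v->c[1]) | (v->c[0] & v->c[2]) | (v->c[1] & v->c[2]);
    v->c[0] = v->c[1] = v->c[2] = m;          /* scrub the faulty copy */
    return m;
}

int main(void) {
    tmr_u32 acc;
    tmr_write(&acc, 41);
    acc.c[1] ^= 0x8u;                         /* simulated SEU: flip bit 3 of one copy */
    unsigned long v = tmr_read(&acc) + 1;
    printf("value after upset and vote: %lu\n", v);   /* prints 42 */
    return 0;
}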
Abstract:
Timing analysis of assembler code is essential to achieve the strongest possible guarantee of correctness for safety-critical, real-time software. Previous work has shown how timing constraints on control-flow paths through high-level language programs can be formalised using the semantics of the statements comprising the path. We extend these results to assembler-level code, where it becomes possible not only to determine timing constraints, but also to verify them against the known execution times for each instruction. A minimal formal model is developed with both a weakest liberal precondition and a strongest postcondition semantics. Despite the formalism's simplicity, it is shown that complex timing behaviour associated with instruction pipelining and iterative code can be modelled accurately.
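A minimal flavour of what such a semantics can look like, assuming only that every instruction advances an auxiliary clock variable $\tau$ by its known worst-case execution time $t_{\mathit{instr}}$ (the paper's actual model is richer than this sketch):

\[
  \mathrm{wlp}(\mathit{instr},\,Q) \;=\; Q[\,\tau + t_{\mathit{instr}} \,/\, \tau\,]
  \qquad
  \mathrm{sp}(P,\,\mathit{instr}) \;=\; \exists\,\tau_0.\;
      P[\tau_0/\tau] \,\wedge\, \tau = \tau_0 + t_{\mathit{instr}}
\]

A deadline $d$ on a straight-line sequence $i_1;\dots;i_n$ is then verified by establishing $\{\tau = 0\}\; i_1;\dots;i_n \;\{\tau \le d\}$, which for this toy model reduces to $\sum_{k=1}^{n} t_{i_k} \le d$.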
Abstract:
Control-flow checking is one of the effective means of defending against single-event effects. The current mainstream approach uses embedded signature techniques, but these introduce too many checking instructions and thus degrade program efficiency. This paper uses a basic-block consolidation technique: starting from the original basic blocks, a suitable constraint value is chosen to re-partition them, reducing the number of checking instructions introduced. A performance comparison on eight common algorithms shows that the method effectively improves the efficiency of the target program while keeping soft-error detection coverage essentially unchanged.
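For context, the embedded-signature technique that the consolidation builds on works roughly as in the C sketch below (a toy example in the spirit of CFCSS with invented signatures, not the paper's consolidated scheme): every block entry updates and checks a runtime signature register, which is exactly the per-block cost that re-partitioning into larger blocks reduces.

/* Each basic block i gets a static signature s[i]; at run time a signature
 * register G is updated on block entry with a precomputed difference
 * d[i] = s[pred(i)] ^ s[i] and compared against s[i]. An illegal jump leaves
 * G with the wrong value and is caught at the next check. */
#include <stdio.h>
#include <stdlib.h>

enum { B0, B1, B2, NBLOCKS };
static const unsigned s[NBLOCKS] = { 0x1, 0x2, 0x4 };   /* static signatures */
static const unsigned d[NBLOCKS] = { 0x1,               /* entry block: G starts at 0 */
                                     0x1 ^ 0x2,         /* legal predecessor of B1 is B0 */
                                     0x2 ^ 0x4 };       /* legal predecessor of B2 is B1 */
static unsigned G = 0;                                  /* runtime signature register */

static void check(int block) {
    G ^= d[block];
    if (G != s[block]) {
        fprintf(stderr, "control-flow error detected entering block %d\n", block);
        exit(1);
    }
}

int main(void) {
    check(B0);  /* ... code of block B0 ... */
    check(B1);  /* ... code of block B1 ... */
    /* G ^= 0x2;   <- uncomment to inject a control-flow error before B2 */
    check(B2);  /* ... code of block B2 ... */
    puts("run completed, signatures consistent");
    return 0;
}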
Abstract:
As the complexity of microelectronic devices increases, the impact of space radiation on the correctness of computer programs is becoming increasingly apparent. In general these effects are not permanent; they are transient faults. Whether in space-borne information-processing systems, embedded real-time control systems, computer clusters or high-performance supercomputers, erroneous outputs can lead to catastrophic consequences. Traditional high-reliability systems use radiation-hardened components and redundant hardware to meet reliability requirements, but these are expensive and their performance lags behind today's commercial off-the-shelf (COTS) parts. To compensate for the limited fault tolerance of COTS components, software fault-tolerance techniques can effectively improve the reliability of a computer system without changing its hardware. At the software level, transient faults mainly manifest themselves as control-flow errors and data-flow errors; this thesis focuses on tolerating control-flow errors. Software-implemented control-flow fault tolerance inserts redundant fault-tolerance logic at compile time so that control-flow errors can be detected and handled while the program runs. The main problem to be solved is how to minimize the system overhead introduced by the redundant logic while preserving the fault-tolerance capability. This thesis studies control-flow fault tolerance from several angles: the basic concepts of control-flow errors, the choice of the fault-tolerance unit, the construction of signature information, and the placement of signature and check points. The main contributions are:
1. Common control-flow fault-tolerance methods are analysed and compared, and their strengths and weaknesses are discussed.
2. Control-flow errors are classified and, on that basis, a control-flow fault-tolerance method based on related predecessor basic blocks (CFCLRB) is proposed.
3. A signature-flow model is proposed, together with a control-flow fault-tolerance method based on it (CFCSF). The method detects inter-basic-block control-flow errors with low time and space overhead and high error coverage. It can flexibly insert and remove signature and check points according to the required fault-tolerance granularity, giving it strong extensibility, and it can handle control flow that is hard to determine at compile time, such as dynamic function pointers.
4. The above methods are implemented at the assembly-instruction level, and the widely used methods Control Flow Checking by Software Signatures (CFCSS) and Control-flow Error Detection through Assertions (CEDA) are implemented for comparison. Fault tolerance is added to the original programs by inserting redundant instruction logic.
5. Control-flow error injection is implemented on top of the PIN tool, and CFCLRB, CFCSF, CFCSS and CEDA are compared under the same experimental conditions. The experiments show that CFCLRB has a time overhead of 26.9% and a space overhead of 27.6%, raising error coverage from 66.50% for the unprotected program to 97.32%. CFCSF has a time overhead of 14.7% and a space overhead of 22.1%, raising error coverage from 66.50% to 96.79%. Compared with CFCSS, CFCSF reduces the time overhead from 37.2% to 14.7% and the space overhead from 31.2% to 22.1%, while improving error coverage from 95.16% to 96.79%. Compared with CEDA, it reduces the time overhead from 26.9% to 14.7% and the space overhead from 27.1% to 22.1%, with error coverage dropping only slightly from 97.39% to 96.79%.
Finally, future research directions for control-flow fault tolerance are discussed.
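As a stand-alone, hedged sketch of the error-injection step in point 5 (the thesis uses the PIN binary-instrumentation tool, whose API is not reproduced here), the C program below models execution as a table of basic-block handlers, corrupts a successor index to emulate a control-flow error, and detects the illegal transfer against a table of legal successors.

/* Hypothetical stand-alone injection/detection demo. Each "basic block"
 * returns the index of its successor; the injector flips a bit in that index,
 * and the checker rejects any transfer not in the static successor table. */
#include <stdio.h>
#include <stdlib.h>

#define NBLOCKS 4
static int b0(void) { puts("b0"); return 1; }
static int b1(void) { puts("b1"); return 2; }
static int b2(void) { puts("b2"); return 3; }
static int b3(void) { puts("b3"); return -1; }           /* exit block */
static int (*const block[NBLOCKS])(void) = { b0, b1, b2, b3 };

/* legal[i][j] != 0 iff a transfer from block i to block j is allowed */
static const int legal[NBLOCKS][NBLOCKS] = {
    { 0, 1, 0, 0 }, { 0, 0, 1, 0 }, { 0, 0, 0, 1 }, { 0, 0, 0, 0 },
};

int main(int argc, char **argv) {
    int inject = (argc > 1);          /* pass any argument to inject a fault */
    int cur = 0;
    (void)argv;
    while (cur != -1) {
        int next = block[cur]();
        if (inject && cur == 1) next ^= 0x2;   /* simulated SEU in the branch target */
        if (next == -1) break;
        if (next < 0 || next >= NBLOCKS || !legal[cur][next]) {
            fprintf(stderr, "control-flow error detected: %d -> %d\n", cur, next);
            return EXIT_FAILURE;
        }
        cur = next;
    }
    puts("normal termination");
    return EXIT_SUCCESS;
}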
Abstract:
High system reliability is an important metric in the aerospace domain. Because of the particular nature of the space environment, radiation and high-energy particles cause transient errors in computer systems. Such errors, known as soft errors, have a significant impact on aerospace devices and severely reduce system reliability, so detecting and protecting against them is one of the important research directions for aerospace systems. Soft-error detection and protection techniques include hardware protection and detection, hybrid hardware/software detection, and purely software-based detection. With the widespread use of commercial off-the-shelf components, the software-based soft-error detection methods that accompany them have been studied in depth; among them, control-flow checking is one of the effective means of defending against single-event effects. The current mainstream approach uses embedded signature techniques, but these introduce too many checking instructions and thus degrade program efficiency. Starting from the common traits of control-flow checking techniques, this paper analyses the cause of this inefficiency: the constraints in the definition of a basic block produce too many basic blocks, so the code-injection step inserts too many comparison and jump instructions, which lowers program efficiency. To address this, a basic-block consolidation method based on source-code analysis is proposed. By relaxing the constraints in the basic-block definition, each basic block under the new definition can hold more instructions, so fewer checking instructions are injected and efficiency improves; moreover, existing control-flow checking methods can be applied directly to the new basic blocks without modification. The method re-partitions the basic blocks with a suitable constraint value without modifying the benchmark source code or the control-flow checking methods, reducing the number of checking instructions introduced. The method is validated against three control-flow checking methods, ECCA, CFCSS and RSCFC. Using these three methods under different constraint values, soft-error coverage and efficiency tests were carried out on benchmarks of eight common algorithms. Repeated experiments show that the method improves the efficiency of the checking algorithms while keeping soft-error detection coverage essentially unchanged. Alongside the optimization of the control-flow checking algorithms, a control-flow analysis tool, a simulator-based fault-injection tool and a code-fragment execution-time measurement tool were also built and used to evaluate and test the optimized algorithms.
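A hedged before/after sketch of the consolidation idea (hypothetical C code; check() stands in for whatever signature update and comparison ECCA, CFCSS or RSCFC would insert, and the exact consolidation constraint is abstracted away):

/* Counting the runtime checks executed under a fine-grained basic-block
 * partition versus a consolidated one. */
#include <stdio.h>

static int checks = 0;
static void check(const char *block) { checks++; (void)block; }

/* Fine-grained blocks: every conditional boundary starts a new block, so each
 * loop iteration pays for a loop-head check plus a branch-arm check. */
static int sum_fine(const int *a, int n) {
    int s = 0;
    check("entry");
    for (int i = 0; i < n; i++) {
        check("loop-head");
        if (a[i] > 0) { check("then"); s += a[i]; }
        else          { check("else"); s -= a[i]; }
    }
    check("exit");
    return s;
}

/* Consolidated blocks: the then/else arms are folded into the enclosing
 * region (a relaxed basic-block definition), so one check covers the body. */
static int sum_consolidated(const int *a, int n) {
    int s = 0;
    check("entry");
    for (int i = 0; i < n; i++) {
        check("loop-body");
        s += (a[i] > 0) ? a[i] : -a[i];
    }
    check("exit");
    return s;
}

int main(void) {
    int a[] = { 3, -1, 4, -1, 5 };
    checks = 0; int s1 = sum_fine(a, 5);         int c1 = checks;
    checks = 0; int s2 = sum_consolidated(a, 5); int c2 = checks;
    printf("sum=%d/%d  checks: fine=%d consolidated=%d\n", s1, s2, c1, c2);
    return 0;
}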
Abstract:
The application of blown jet vortex generators to control flow separation in a diffuser with an opening angle of 10° has been studied using the computational fluid dynamics (CFD) code Fluent 6™. Experimental data is available for the uncontrolled flow in the diffuser. The section of the duct upstream of the diffuser has a height H equal to 15 mm; its length and breadth are 101H and 41H respectively; the diffuser has an expansion ratio of 4.7:1. Fully developed flow is achieved upstream of the diffuser. Pipes of diameters equal to 1.5%, 2.5% and 5% of H were considered; pitch angle was constant at 45° and yaw angle was fixed at 60°; velocity ratio was varied from 1.7 to 8.0; both co-rotating and counter-rotating arrays were studied. The best results were obtained with a counter-rotating array of generators with a hole diameter of 5% of H and a velocity ratio of 3.7.
Abstract:
By definition, regulatory rules (called norms in a legal context) are intended to achieve specific behaviour from business processes, and may be relevant to the whole of a business process or to part of it. They can impose conditions on different aspects of process models, e.g., control flow, data and resources. Based on their rule sets, norms can be classified into various classes and sub-classes according to their effects. This paper presents an abstract framework consisting of a list of norms and a generic compliance-checking approach based on the idea of (possible) executions of processes. The proposed framework is independent of any existing formalism and provides a conceptually rich and exhaustive ontology and semantics of the norms needed for business process compliance checking. The possible uses of the proposed framework include comparing different compliance management frameworks (CMFs).
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss of data-flow precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective, in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of Constant Propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running times over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
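A hedged toy example, not taken from the paper, of why restructuring at a control-flow merge helps Constant Propagation: in f_merged the merge destroys the constant value of x, while in f_split the tail has been duplicated onto each path (the code-size cost mentioned above), so each copy folds to a constant.

#include <stdio.h>

/* Merge point: constant propagation must treat x as unknown after the merge. */
static int f_merged(int c) {
    int x;
    if (c) x = 1; else x = 2;
    return x * 10;                           /* cannot be folded */
}

/* Restructured: the code after the merge is duplicated per path, so x is a
 * known constant in every copy and the multiplications fold away. */
static int f_split(int c) {
    if (c) { int x = 1; return x * 10; }     /* folds to 10 */
    else   { int x = 2; return x * 10; }     /* folds to 20 */
}

int main(void) {
    printf("%d %d %d %d\n", f_merged(1), f_merged(0), f_split(1), f_split(0));
    return 0;
}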
Abstract:
Compiler optimizations need precise and scalable analyses to discover program properties. We propose a partially flow-sensitive framework that tries to draw on the scalability of flow-insensitive algorithms while providing more precision at specific program points. Provided with a set of critical nodes (basic blocks at which more precise information is desired), our partially flow-sensitive algorithm computes a reduced control-flow graph by collapsing sets of non-critical nodes. The algorithm is more scalable than a fully flow-sensitive one because, assuming that the number of critical nodes is small, the reduced flow graph is much smaller than the original one. At the same time, much more precise information is obtained at those program points than would have been obtained from a flow-insensitive algorithm.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means that the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This necessitates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals; this can significantly improve software quality and is still a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of software bugs that are otherwise hard to detect more effective, using static analysis of machine code. The focus of this work is to develop methods that automatically localize faults and optimize the code, thereby improving both the debugging process and the quality of the code.
Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae and their compliance is tested individually on all possible execution paths of the application programs.
An incorrect sequence of machine-code patterns is identified using slicing techniques on the control-flow graph generated from the machine code. An algorithm is proposed to assist the compiler in eliminating redundant bank-switching code and deciding on the optimum data allocation to banked memory, resulting in a minimum number of bank-switching instructions in embedded system software. A relation matrix and a state-transition diagram formed for the active-memory-bank state transitions corresponding to each bank-selection instruction are used for the detection of redundant code. Instances of code redundancy based on the stipulated rules for the target processor are identified. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of the compiler/assembler and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code patterns, which drastically reduces state-space creation and contributes to improved state-of-the-art model checking. Though the technique described is general, the implementation is architecture oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
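A hedged sketch of the redundant bank-switching elimination idea (illustrative only: the relation-matrix formulation is not reproduced, the PIC16F87X RP0/RP1 encodings are abstracted into a banksel pseudo-instruction, and the pass is valid only within a straight-line region with no incoming branches):

#include <stdio.h>

typedef enum { OP_BANKSEL, OP_OTHER } opkind;
typedef struct { opkind kind; int bank; const char *text; } insn;

/* Drop a banksel whenever its target bank is already the active one.
 * 'active' must be reset to "unknown" (-1) at every join/label in real code. */
static int remove_redundant_banksel(insn *code, int n) {
    int active = -1, kept = 0;
    for (int i = 0; i < n; i++) {
        if (code[i].kind == OP_BANKSEL) {
            if (code[i].bank == active) continue;    /* redundant: eliminate */
            active = code[i].bank;
        }
        code[kept++] = code[i];
    }
    return kept;
}

int main(void) {
    insn prog[] = {
        { OP_BANKSEL, 1, "banksel 1"   },
        { OP_OTHER,  -1, "movwf TRISB" },
        { OP_BANKSEL, 1, "banksel 1"   },            /* redundant, removed */
        { OP_OTHER,  -1, "movwf TRISC" },
        { OP_BANKSEL, 0, "banksel 0"   },
        { OP_OTHER,  -1, "movwf PORTB" },
    };
    int n = remove_redundant_banksel(prog, (int)(sizeof prog / sizeof prog[0]));
    for (int i = 0; i < n; i++) puts(prog[i].text);
    return 0;
}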