998 results for Program Compilation
Abstract:
The technique of Abstract Interpretation [11] has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have been traditionally applied to optimization during program compilation. However, recently, novel and promising applications of semantic approximations have been proposed in the more general context of program validation and debugging [3,9,7].
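To make the idea concrete, the following is a minimal sketch (in Python, not drawn from the cited papers) of abstract interpretation over the classic sign domain, where each integer is approximated by one of NEG, ZERO, POS, or TOP ("unknown") and the abstract operators soundly over-approximate the concrete ones:

    # Illustrative sign-domain abstract interpretation; the element names
    # and the tiny operator set are assumptions made for this sketch.
    NEG, ZERO, POS, TOP = "-", "0", "+", "T"

    def abstract(n):
        """Map a concrete integer to its sign abstraction."""
        return NEG if n < 0 else ZERO if n == 0 else POS

    def add(a, b):
        """Abstract addition; '+' plus '-' is unknown, hence TOP."""
        if ZERO in (a, b):
            return b if a == ZERO else a
        return a if a == b else TOP

    def mul(a, b):
        """Abstract multiplication; zero annihilates even TOP."""
        if ZERO in (a, b):
            return ZERO
        if TOP in (a, b):
            return TOP
        return POS if a == b else NEG

    # If the analysis knows x < 0, it proves x*x + 1 > 0 without running
    # the program: exactly the kind of fact an optimizer can exploit.
    x = NEG
    print(add(mul(x, x), abstract(1)))   # prints '+'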
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of speed of the analysis, precision, and usefulness of the information obtained. To this end, design and implementation aspects are discussed for two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations and, as an example of an application, a study of the usefulness of the mode information obtained in reducing run-time checks in independent and-parallelism. Based on the results obtained, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
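As a hedged illustration of how mode information removes checks (the representation below is an assumption, not the MA3 or Ms data structures): in independent and-parallelism, two goals may safely run in parallel if every variable they share is known to be ground; when the analysis cannot establish this, a run-time independence check must remain.

    GROUND, FREE, UNKNOWN = "ground", "free", "unknown"

    def needs_runtime_check(goal1_vars, goal2_vars, modes):
        """Keep the independence check unless every shared variable
        is proven ground by the global flow analysis."""
        shared = set(goal1_vars) & set(goal2_vars)
        return any(modes.get(v, UNKNOWN) != GROUND for v in shared)

    # Modes inferred at this program point by the analyzer.
    modes = {"X": GROUND, "Y": GROUND, "Z": UNKNOWN}
    print(needs_runtime_check({"X", "Y"}, {"Y", "Z"}, modes))  # False: only Y is shared, and Y is ground
    print(needs_runtime_check({"X", "Z"}, {"Z"}, modes))       # True: Z may be unbound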
Abstract:
High-level language program compilation strategies can be proven correct by modelling the process as a series of refinement steps from source code to a machine-level description. We show how this can be done for programs containing recursively-defined procedures in the well-established predicate transformer semantics for refinement. To do so the formalism is extended with an abstraction of the way stack frames are created at run time for procedure parameters and variables.
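For reference, the textbook predicate-transformer equations this builds on are (our summary of the standard definitions, not the paper's extended stack-frame semantics):

    \begin{align*}
    wp(x := e,\; Q) &= Q[e/x] \\
    wp(S_1 ; S_2,\; Q) &= wp(S_1,\, wp(S_2,\, Q)) \\
    S \sqsubseteq T &\iff \forall Q.\; wp(S, Q) \Rightarrow wp(T, Q)
    \end{align*}

A compilation step from source S to machine-level T is then correct precisely when T refines S; recursively-defined procedures are handled by taking the transformer of a recursive body as a fixed point.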
Abstract:
This paper addresses the issue of the practicality of global flow analysis in logic program compilation, in terms of both speed and precision of analysis. It discusses design and implementation aspects of two practical abstract interpretation-based flow analysis systems: MA3, the MCC And-parallel Analyzer and Annotator; and Ms, an experimental mode inference system developed for SB-Prolog. The paper also provides performance data obtained from these implementations. Based on these results, it is concluded that the overhead of global flow analysis is not prohibitive, while the results of analysis can be quite precise and useful.
Abstract:
CiaoPP is the abstract interpretation-based preprocessor of the Ciao multi-paradigm (Constraint) Logic Programming system. It uses modular, incremental abstract interpretation as a fundamental tool to obtain information about programs. In CiaoPP, the semantic approximations thus produced have been applied to perform high- and low-level optimizations during program compilation, including transformations such as multiple abstract specialization, parallelization, partial evaluation, resource usage control, and program verification. More recently, novel and promising applications of such semantic approximations are being applied in the more general context of program development, such as program verification. In this work, we describe our extension of the system to incorporate Abstraction-Carrying Code (ACC), a novel approach to mobile code safety. ACC follows the standard strategy of associating safety certificates to programs, originally proposed in Proof-Carrying Code. A distinguishing feature of ACC is that we use an abstraction (or abstract model) of the program computed by standard static analyzers as a certificate. The validity of the abstraction on the consumer side is checked in a single pass by a very efficient and specialized abstract interpreter. We have implemented and benchmarked ACC within CiaoPP. The experimental results show that the checking phase is indeed faster than the proof generation phase, and that the sizes of certificates are reasonable. Moreover, the preprocessor is based on compile-time (and run-time) tools for the certification of CLP programs with resource consumption assurances.
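The key to the efficiency claim is that the consumer never iterates to a fixpoint: it only verifies that the shipped abstraction already is one. A minimal sketch of such a single-pass check, with transfer and leq standing in for the abstract domain's operations (assumed names, not the CiaoPP API):

    def check_certificate(program_points, certificate, transfer, leq):
        """Accept the certificate iff re-applying the abstract transfer
        function at every point yields a result already entailed by the
        certified abstraction; one pass, no fixpoint iteration."""
        for point in program_points:
            recomputed = transfer(point, certificate)
            if not leq(recomputed, certificate[point]):
                return False   # abstraction does not cover this point: reject
        return True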
Abstract:
The technique of Abstract Interpretation [13] has allowed the development of sophisticated program analyses which are provably correct and practical. The semantic approximations produced by such analyses have been traditionally applied to optimization during program compilation. However, recently, novel and promising applications of semantic approximations have been proposed in the more general context of program verification and debugging [3], [10], [7].
Abstract:
Processor emulators are a software tool for allowing legacy computer programs to be executed on a modern processor. In the past, emulators have been used in trivial applications such as maintenance of video games. Now, however, processor emulation is being applied to safety-critical control systems, including military avionics. These applications demand utmost guarantees of correctness, but no verification techniques exist for proving that an emulated system preserves the original system’s functional and timing properties. Here we show how this can be done by combining concepts previously used for reasoning about real-time program compilation with an understanding of the new and old software architectures. In particular, we show how both the old and new systems can be given a common semantics, thus allowing their behaviours to be compared directly.
Abstract:
Previous work on formally modelling and analysing program compilation has shown the need for a simple and expressive semantics for assembler level programs. Assembler programs contain unstructured jumps and previous formalisms have modelled these by using continuations, or by embedding the program in an explicit emulator. We propose a simpler approach, which uses techniques from compiler theory in a formal setting. This approach is based on an interpretation of programs as collections of program paths, each of which has a weakest liberal precondition semantics. We then demonstrate, by example, how we can use this formalism to justify the compilation of block-structured high-level language programs into assembler.
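In this reading (standard equations; the notation is ours, not necessarily the paper's), each path is a sequence of assignments and assume-statements, and a program's semantics is the conjunction of its paths' weakest liberal preconditions:

    \begin{align*}
    wlp(x := e,\; Q) &= Q[e/x] \\
    wlp(\mathsf{assume}\; b,\; Q) &= b \Rightarrow Q \\
    wlp(\pi_1 ; \pi_2,\; Q) &= wlp(\pi_1,\, wlp(\pi_2,\, Q)) \\
    wlp(P,\; Q) &= \bigwedge_{\pi \,\in\, \mathit{paths}(P)} wlp(\pi,\, Q)
    \end{align*}

An unstructured jump then simply extends the set of paths, rather than requiring continuations or an explicit emulator.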
Abstract:
The current program of research addresses the need for multi-level programs to target the major increase in injury rates that occurs throughout adolescence. Specifically, it involves the investigation of school connectedness as a protective factor for adolescent injury, and the development of school connectedness as a component of an injury prevention program. To date, school-based risk taking and injury prevention has frequently been limited to addressing adolescents' knowledge and attitudes to risk behaviours, and has largely overlooked the importance of the wider school social context as a protective factor in adolescent development. Additionally, school connectedness has been primarily studied in terms of its impact on student achievement, wellbeing and risk taking behaviour, and research has not yet addressed possible links with injury. Further, school connectedness intervention programs have targeted risk taking behaviours without evaluating their potential impact on injury outcomes. This is the first reported research to develop strategies to increase school connectedness as part of a school-based injury prevention program. The research program was conceptualised as three distinct stages. The development of these research stages was informed by a comprehensive review of the literature on adolescent risk taking, injury and school-based prevention, as well as on school connectedness and its importance in adolescence. A review of the school connectedness literature indicated that students' connectedness is largely influenced by relationships within the school context including with teachers and other school staff, and is therefore a potentially modifiable factor that may be targeted in school-based programs. Overall, the literature shows school connectedness to be a key protective factor in adolescent development. This review established a foundation from which the current program of research was designed. The first stage of the research involved an empirical investigation of the relationship between adolescent risk taking-related injuries and school connectedness. Stage one incorporated two studies. The first involved the development of a measure of adolescent injury, the Extended Adolescent Injury Checklist (E-AIC), for use in the current research as well as in future school-based studies and program evaluation. The results of this study also highlighted the extent of the problem of risk-related injury in adolescence. The second study in Stage one examined the relationship between students' reports of school connectedness, risk taking behaviour and risk taking-related injuries on the E-AIC. The results of this study showed significant relationships between increased school connectedness and reduced reported engagement in transport and violence risk taking, and fewer associated injuries. This study therefore suggested the potential for school-based injury prevention programs to incorporate strategies targeting increased adolescent connectedness to school. The second stage of this research involved the compilation of an evidence base to inform the design of a school connectedness intervention. Stage two also incorporated two studies. The first study in Stage two involved a systematic review of programs that have targeted school connectedness for reduced risk taking and injury. 
The results of this study revealed that interventions targeting school connectedness can be effective in reducing adolescent risk taking behaviour, and also provided an evidence base for the design of the current school connectedness intervention. The second study in Stage two examined teachers' understanding and perceptions of school connectedness. This qualitative study indicated that teachers consider students' connectedness to be an important factor that relates to their risk taking behaviour, and also provided directions and content for the intervention design stage. The third stage of this research built upon the findings of each of the previous studies, and involved the design, implementation and evaluation of a school connectedness intervention as a component of an adolescent injury prevention program, Skills for Preventing Injury in Youth (SPIY). This connectedness intervention was designed as a professional development workshop for teachers of 13- to 14-year-old adolescents, and was developed as a complementary component to the curriculum-based SPIY program. The SPIY connectedness component was implemented and evaluated using process and six-month impact evaluation methodologies. The results of this study revealed that teachers saw value in the program and made use of the strategies presented, and that program school students' self-reported violence risk behaviour was reduced at six-month follow-up. Despite these promising findings, the results of this study did not demonstrate a significant impact of the program on change in students' connectedness to school, relative to comparison schools. The positive impact on self-reported violence risk behaviour was, however, replicated in additional analyses comparing students participating in the connectedness version of SPIY with students participating in an earlier curriculum-only version of the program. This finding indicated that the connectedness component has additional benefits relating to reduction in violence risks, over and above a curriculum-only version of the program. This research was the first reported to address the relationship between school connectedness and adolescent injury outcomes, and to develop school connectedness as a component of an adolescent injury prevention program. Overall, the results of this program of research have demonstrated the importance of incorporating strategies targeting the wider school social context, including school connectedness, in adolescent injury prevention programs. This research has important implications for future research and practice in adolescent injury prevention.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but is now being increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control flow dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control flow dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to CPU and GPU, scheduling and insertion of required data transfer. The proposed compiler was implemented and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
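As a hedged sketch of the mapping step only (the cost model and interface below are assumptions, not MEGHA's actual heuristics): process kernels in a dependence-respecting order and place each on whichever device minimizes its own execution cost plus the transfers induced by cross-device dependencies.

    def map_kernels(kernels, cpu_cost, gpu_cost, transfer_cost, deps):
        """Greedy CPU/GPU placement; 'kernels' must be in topological
        order so every dependence is already placed."""
        placement = {}
        for k in kernels:
            def cost(device):
                own = cpu_cost[k] if device == "cpu" else gpu_cost[k]
                cross = sum(transfer_cost[d] for d in deps[k]
                            if placement[d] != device)
                return own + cross
            placement[k] = min(("cpu", "gpu"), key=cost)
        return placement

    deps = {"k1": [], "k2": ["k1"]}
    print(map_kernels(["k1", "k2"],
                      {"k1": 5, "k2": 50},    # CPU execution costs
                      {"k1": 20, "k2": 10},   # GPU execution costs
                      {"k1": 8, "k2": 8},     # transfer costs per value
                      deps))
    # {'k1': 'cpu', 'k2': 'gpu'}: k2 pays one transfer, but 10 + 8 < 50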
Abstract:
The program SuSeFLAV is introduced for computing supersymmetric mass spectra with flavour violation in various supersymmetry-breaking scenarios, with or without the see-saw mechanism. A short user guide summarizing the compilation, the executables and the input files is provided.
Abstract:
Parallelizing compilers have difficulty analysing and optimising complex code. To address this, some analysis may be delayed until run-time, and techniques such as speculative execution used. Furthermore, to enhance performance, a feedback loop may be set up between the compile-time and run-time analysis systems, as in iterative compilation. To extend this, it is proposed that the run-time analysis collects information about the values of variables not already determined, and estimates a probability measure for the sampled values. These measures may be used to guide optimisations in further analyses of the program. To address the problem of variables with measures as values, this paper also presents an outline of a novel combination of previous probabilistic denotational semantics models, applied to a simple imperative language.
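A minimal sketch of the run-time side of this proposal (the interface is our assumption): sample a variable whose value static analysis could not determine, and estimate an empirical probability measure that later compilation passes can consult.

    from collections import Counter

    def estimate_measure(samples):
        """Empirical probability measure over one variable's sampled values."""
        counts = Counter(samples)
        return {value: c / len(samples) for value, c in counts.items()}

    # If 'stride' is 1 on most samples, a later iterative-compilation pass
    # might speculatively specialize the loop for stride == 1.
    print(estimate_measure([1, 1, 1, 4, 1, 1, 1, 1, 2, 1]))
    # {1: 0.8, 4: 0.1, 2: 0.1}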
Abstract:
The Integrated Mass Transit Systems are an initiative of the Colombian Government to replicate the experience of Bogota’s Bus Rapid Transit System —Transmilenio— in large urban areas of the country, most of them extending over municipal perimeters to provide transportation services to areas undergoing a metropolization process. Management of these large-scale metropolitan infrastructure projects involves complex setups that present new challenges in the interaction between stakeholders and interests across municipalities, tiers of government, and the public and private sectors. This article presents a compilation of the management process of these projects from the national context, based on a document review of the regulatory framework, complemented by interviews with key stakeholders at the national level. Research suggests that the implementation of large-scale metropolitan projects requires a management framework oriented to overcome the traditional tensions between centralism and municipal autonomy.