411 results for Compiler


Relevance:

10.00%

Publisher:

Abstract:

In recent years, advanced compiler optimizations based on data dependence analysis have become an important part of modern compiler research and development. To address the problem of testing this class of optimizations, this paper proposes an automatic test program generation method that can generate test programs according to specified data dependence features. The LoSpec language is first designed to describe test programs; a model well suited to representing data dependence relations, the procedure graph, is then adopted as the intermediate representation to implement automatic test program generation, and the automated testing tool LoTester is developed. Compared with existing methods, this approach targets advanced optimizations more directly and offers a higher degree of automation. LoTester has been applied in the development of EECC, an optimizing compiler for multimedia applications, with good results.
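
The abstract gives neither LoSpec syntax nor LoTester internals, so the following is only a minimal Python sketch of the general step it describes: turning a specified data dependence feature into a C test program. The DepSpec record and the emitted loop shape are hypothetical illustrations, not the paper's notation.

from dataclasses import dataclass

@dataclass
class DepSpec:
    """Hypothetical stand-in for a LoSpec-style data dependence feature."""
    kind: str        # e.g. "flow" (write in one iteration, read in a later one)
    distance: int    # dependence distance in loop iterations
    trip_count: int  # loop trip count of the generated test program

def emit_test_program(spec: DepSpec) -> str:
    """Emit a C test program whose single loop carries the requested dependence."""
    if spec.kind != "flow":
        raise NotImplementedError("sketch only handles flow dependences")
    d, n = spec.distance, spec.trip_count
    return "\n".join([
        "#include <stdio.h>",
        f"int a[{n + d}];",
        "int main(void) {",
        f"    for (int i = 0; i < {n}; i++) {{",
        f"        a[i + {d}] = a[i] + 1;   /* loop-carried flow dependence, distance {d} */",
        "    }",
        f"    printf(\"%d\\n\", a[{n + d - 1}]);",
        "    return 0;",
        "}",
    ])

if __name__ == "__main__":
    print(emit_test_program(DepSpec(kind="flow", distance=2, trip_count=100)))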

Relevance:

10.00%

Publisher:

Abstract:

Compiler quality assurance plays an important role in improving the quality of software products, and testing of compiler optimizations is a core part of it. Testing compiler optimizations requires a large number of test case programs. Constructing these test cases by hand is inefficient, while grammar-based construction methods lack specificity. Automatically constructing test cases from formal descriptions of the optimizations can overcome these drawbacks. This paper designs and implements a method for generating compiler optimization test case programs based on formal descriptions. The method constructs a key-vertex control flow graph from the temporal logic description of a compiler optimization, then progressively transforms it into a full control flow graph and derives the test case program. Coverage experiments against GCC (version 4.1.1) show that the method can generate highly targeted test cases and achieve a considerable degree of coverage.
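
As a rough illustration of the key-vertex-graph-to-program step, the Python sketch below expands a small hand-written key-vertex graph into a C test function. The node annotations and single-path expansion are assumptions made for illustration; the paper's actual construction from temporal logic descriptions is not reproduced here.

# node id -> (C statements that must appear at this vertex, successor ids)
from typing import Dict, List, Tuple

KeyCFG = Dict[str, Tuple[List[str], List[str]]]

def expand_to_c(cfg: KeyCFG, entry: str, exit_: str) -> str:
    """Walk one entry->exit path and splice the required statements into a C main()."""
    body, node, seen = [], entry, set()
    while True:
        stmts, succs = cfg[node]
        body.extend("    " + s for s in stmts)
        if node == exit_:
            break
        seen.add(node)
        # pick the first unvisited successor; a real generator would enumerate paths
        node = next(s for s in succs if s not in seen)
    return "int main(void) {\n" + "\n".join(body) + "\n    return 0;\n}\n"

# Example: a pattern that should trigger constant propagation (x defined to a
# constant, then used with no intervening redefinition).
cfg: KeyCFG = {
    "entry": (["int x = 5;"], ["use"]),
    "use":   (["int y = x + 1;"], ["exit"]),
    "exit":  (["printf(\"%d\\n\", y);"], []),
}

if __name__ == "__main__":
    print('#include <stdio.h>')
    print(expand_to_c(cfg, "entry", "exit"))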

Relevance:

10.00%

Publisher:

Abstract:

As computer chips get faster, device threshold voltages keep dropping, making transient faults caused by single event upsets ever easier to trigger. For computer systems operating in space in particular, cosmic rays make transient faults even more frequent and put system reliability to a harsher test.

To improve the reliability of computer systems, two general approaches exist: hardware redundancy and software redundancy. Compared with hardware fault tolerance, software fault tolerance is cheaper, more cost-effective, and more flexible to deploy, but it incurs extra time and space overhead and burdens programmers with writing additional fault tolerance code. Compiler-based software fault tolerance methods have recently appeared that insert redundant fault tolerance logic automatically during compilation, yet they still carry significant time and space overhead. How to preserve the fault tolerance while minimizing this overhead remains an open research problem.

This thesis advances the compiler-based fault tolerance direction by using variable information from the source code to prune the redundant fault tolerance logic, reducing time and space overhead while preserving fault tolerance and protecting data in both memory and registers. The main contributions are:

1. A design blueprint for a fault-tolerant compilation environment, SCC, setting out the long-term vision for a fault-tolerant compilation tool.

2. VarBIFT, an instruction-level compiler-based fault detection method. With an average time overhead of only 0.0069x and space overhead of 0.3620x, it raises the combined probability that, when a transient fault occurs, the program either executes correctly or the fault is detected from 39.1% to 76.9% on average.

3. VarRIFT, an instruction-level compiler-based fault recovery method that restores correct data after a transient fault. With an average time overhead of only 0.043x and space overhead of 0.69x, it raises the probability that the program still executes correctly when a transient fault occurs from 44.8% to 78.7% on average.

4. Implementations of VarBIFT and VarRIFT on top of the open-source compiler LCC. Only intermediate logic independent of specific CPU instructions was modified, so both implementations can be ported easily to other CPU architectures such as SPARC and MIPS.

5. A fault injection tool, used to evaluate the fault tolerance of VarBIFT and VarRIFT.
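
The thesis instruments LCC's intermediate code; purely as a source-level illustration of the generic duplicate-and-compare idea behind instruction-level fault detection (the pruning by variable information is not shown), here is a minimal Python sketch. The function and variable names are hypothetical.

def fault_detected() -> None:
    raise RuntimeError("transient fault detected: primary and shadow copies differ")

def checked_sum(values):
    """Compute a sum twice, in primary and shadow state, and compare the two
    copies before the result is allowed to leave the function."""
    total = 0          # primary copy
    total_shadow = 0   # redundant shadow copy
    for v in values:
        total += v
        total_shadow += v
    if total != total_shadow:   # check inserted before the value is used externally
        fault_detected()
    return total

print(checked_sum([1, 2, 3, 4]))  # 10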

Relevance:

10.00%

Publisher:

Abstract:

This paper proposes a method for generating key-node control graphs from temporal logic formulas; the generated test cases are highly targeted and easy to extend. The method is used to improve an automated compiler optimization testing tool, largely eliminating its test redundancy and improving testing efficiency.

Relevance:

10.00%

Publisher:

Abstract:

Compiler optimization is an indispensable part of modern compilers. Optimization techniques have made remarkable progress over the past few decades and play an irreplaceable role in speeding up programs, saving storage space, and reducing energy consumption. Their reliability, however, leaves much to be desired: optimizations are numerous, complex, weakly reusable, and error-prone, and even mature compilers keep yielding optimization-related bugs. Compiler reliability directly affects the reliability and security of software products, and as optimizations make up an ever larger share of modern compilers, their reliability attracts growing attention.

Software testing is one of the basic means of assuring optimization reliability, but optimization testing involves writing test programs and executing tests, which is time-consuming and laborious to do by hand, so automated testing methods are needed to improve its efficiency. Driven by this practical need, industry vendors such as Intel, MEI (Matsushita Electric Industrial), and DaimlerChrysler AG have in recent years partnered with research institutions to study automated compiler optimization testing.

Most existing automated compiler testing methods are driven mainly by the syntax and semantics of the programming language. They suit basic compilation functions such as syntax checking, semantic checking, and code generation, but they lack specificity for optimization testing and are relatively inefficient, while the few existing optimization-oriented automated testing methods characterize optimizations imprecisely and offer limited automation.

This thesis proposes TEMCOFS, an automated compiler optimization testing method based on formal descriptions, carried out in four stages: (1) build formal descriptions of the optimizations; (2) analyze the correctness of those descriptions; (3) automatically generate test programs from the descriptions; (4) automatically execute the tests. Within this framework, the thesis studies methods for formally describing optimizations, for analyzing the correctness of those descriptions, and two description-based automated testing methods, realizing automated testing of three typical optimization classes: expression optimizations, dataflow optimizations, and loop optimizations. The main contributions are:

(1) For formal description, besides applying prior work, the TRANS language, to describe expression and dataflow optimizations, the TRANS language is extended to provide a formal description mechanism for loop optimizations.

(2) For correctness analysis, a dependence foundation theorem is proved that shows how program data dependences affect the correctness of program transformations, providing a basis for analyzing loop optimization correctness; general methods for analyzing optimization correctness are then discussed.

(3) For automated test execution, an automated metamorphic testing execution method for compiler optimizations is proposed. It brings the idea of metamorphic testing into optimization testing and uses equivalence properties of the test programs to judge test results automatically, avoiding the result-judgment problems of traditional methods.

(4) For automated test program generation, generation methods based on the formal descriptions are proposed for each of the two execution methods, the reference (comparison) method and the metamorphic method; they automatically generate test program suites from the formal descriptions of expression, dataflow, and loop optimizations.

Experiments on the GCC compiler show that the automatically generated test suites quickly bring GCC's optimization modules to a high level of test coverage. Compared with other automated compiler testing methods, this method targets optimization testing more directly and offers a higher degree of automation. Overall, it offers good practical and reference value for improving the efficiency of optimization testing and assuring the quality of optimizing compilers.
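
The metamorphic execution idea in contribution (3) can be illustrated with a minimal Python sketch: compile the same test program with and without the optimization level under test and compare the program outputs. The gcc flags, file names, and availability of gcc on the PATH are assumptions for illustration, not the TEMCOFS harness itself.

import subprocess, tempfile, os

def run_variant(source, cflags):
    """Compile `source` with the given flags and return the program's stdout."""
    with tempfile.TemporaryDirectory() as tmp:
        src = os.path.join(tmp, "test.c")
        exe = os.path.join(tmp, "test")
        with open(src, "w") as f:
            f.write(source)
        subprocess.run(["gcc", *cflags, "-o", exe, src], check=True)
        return subprocess.run([exe], check=True, capture_output=True, text=True).stdout

TEST_PROGRAM = r"""
#include <stdio.h>
int main(void) {
    int s = 0;
    for (int i = 0; i < 100; i++) s += i * i;
    printf("%d\n", s);
    return 0;
}
"""

if __name__ == "__main__":
    baseline = run_variant(TEST_PROGRAM, ["-O0"])
    optimized = run_variant(TEST_PROGRAM, ["-O2"])
    # Equivalent compilations must produce identical output; a mismatch
    # signals a suspected optimization bug.
    print("PASS" if baseline == optimized else "FAIL")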

Relevance:

10.00%

Publisher:

Abstract:

This paper discusses how aspect-oriented software development, or aspect-oriented programming, can apply formal methods for model checking. It briefly introduces aspect-oriented software development and applies compiler theory to analyze the use of tools related to aspect-oriented programming. It explains the difficulties and common problems that aspect-oriented software development tends to encounter when testing code, shows how known formal methods can be used to analyze and describe these problems and to perform model checking that locates the points where the code goes wrong, and explains how to make code produced by aspect-oriented development more robust, stable, and reliable.

Relevance:

10.00%

Publisher:

Abstract:

We give a positive result for deniable zero-knowledge in the common reference string (CRS) model: an efficient transformation from Σ-protocols to deniable zero-knowledge in the CRS model. By the lower bound given by Pass at CRYPTO 2003, our compiler achieves optimal round efficiency. Moreover, the additional communication complexity introduced by the transformation is small.
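
As background only (the abstract does not spell out the transformation itself), a Σ-protocol is a three-move proof of knowledge, a commitment a, a random challenge e, and a response z, with special soundness and honest-verifier zero knowledge. A standard example, written in LaTeX below, is Schnorr's protocol for knowledge of a discrete logarithm w with h = g^w in a group of prime order q.

% Schnorr's Sigma-protocol (background example, not the paper's CRS-model compiler)
\begin{align*}
  \text{P} \rightarrow \text{V}:\quad & a = g^{r}, \quad r \xleftarrow{\$} \mathbb{Z}_q \\
  \text{V} \rightarrow \text{P}:\quad & e \xleftarrow{\$} \mathbb{Z}_q \\
  \text{P} \rightarrow \text{V}:\quad & z = r + e\,w \bmod q \\
  \text{V accepts iff}\quad & g^{z} = a \cdot h^{e}.
\end{align*}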

Relevance:

10.00%

Publisher:

Abstract:

Compilers are important tools in the software industry, and their quality assurance matters greatly. Compiler optimization is a key compiler feature, and its quality has a major impact on overall compiler quality. Software testing can be used to assure the quality of a compiler's optimization modules. Testing requires test cases, and a test case for a compiler optimization must trigger the optimization, that is, it must be a source program exhibiting an optimizable feature. This feature differs from one optimization to another, so the features corresponding to different optimizations must be inserted into source programs to construct optimization test case programs.

The TRANS language, which incorporates temporal logic, describes different compiler optimizations, including the code features before and after optimization and the conditions and methods under which the optimization is applied. The pre-optimization code features and the conditions for applying the optimization can serve as the features needed to construct optimization test case programs. A framework for a temporal-logic-based method of generating such test case programs has been proposed; it generates test case programs from a variant of the TRANS description, but the framework is incomplete and faces several problems. Drawing on the ideas of that framework, this thesis designs a test case program generation method for compiler optimizations that solves some of the framework's problems: it handles complex descriptions, preserves the legality and semantics of the formulas, and makes the original framework concrete and complete. The result is a targeted, automatic method for generating optimization test case programs. A prototype system is implemented and used to obtain test case programs. Experiments testing GCC's optimization modules, with coverage as the evaluation metric, are designed and carried out to assess the quality of the generated programs. The experiments show that the generated test case programs are well targeted and that the method is effective for testing optimization modules. The method also has room for further application; with improvements it could be used for testing combinations of optimizations, checking optimization correctness, and so on.
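
The feature-insertion idea described above can be pictured with a minimal Python sketch: splice an optimization-triggering code feature into a fixed C program skeleton. The pattern table and skeleton below are illustrative assumptions, not the TRANS-based prototype.

# optimization name -> C statements exhibiting the pre-optimization code feature
FEATURES = {
    # the same subexpression (b + c) computed twice: common subexpression elimination
    "cse": ["int x = b + c;", "int y = b + c;", "sink = x + y;"],
    # a loop-invariant product recomputed every iteration: loop-invariant code motion
    "licm": ["for (int i = 0; i < 100; i++) { sink += b * c; }"],
}

SKELETON = """#include <stdio.h>
int main(void) {{
    int b = 3, c = 4, sink = 0;
    {feature}
    printf("%d\\n", sink);
    return 0;
}}
"""

def make_test_case(opt: str) -> str:
    """Build a C test case program exhibiting the feature for optimization `opt`."""
    return SKELETON.format(feature="\n    ".join(FEATURES[opt]))

if __name__ == "__main__":
    print(make_test_case("cse"))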

Relevance:

10.00%

Publisher:

Abstract:

Scientists are faced with a dilemma: either they can write abstract programs that express their understanding of a problem, but which do not execute efficiently; or they can write programs that computers can execute efficiently, but which are difficult to write and difficult to understand. We have developed a compiler that uses partial evaluation and scheduling techniques to provide a solution to this dilemma.
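
As a toy illustration of partial evaluation, the Python sketch below specializes a general dot product to a known vector length, unrolling the loop into straight-line residual code. This is only a sketch of the general technique under assumed names; it is not the abstract's compiler, which targets scientific programs for a parallel machine.

def dot(xs, ys):
    """General, abstract version: works for any length, pays loop overhead."""
    return sum(x * y for x, y in zip(xs, ys))

def specialize_dot(n):
    """'Partially evaluate' dot for a known vector length n: unroll the loop
    into straight-line code and compile the residual function."""
    terms = " + ".join(f"xs[{i}] * ys[{i}]" for i in range(n))
    src = f"def dot_{n}(xs, ys):\n    return {terms}\n"
    namespace = {}
    exec(src, namespace)          # residual, specialized code
    return namespace[f"dot_{n}"]

dot3 = specialize_dot(3)
print(dot([1, 2, 3], [4, 5, 6]))   # 32, general version
print(dot3([1, 2, 3], [4, 5, 6]))  # 32, specialized straight-line version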

Relevance:

10.00%

Publisher:

Abstract:

We describe an approach to parallel compilation that seeks to harness the vast amount of fine-grain parallelism that is exposed through partial evaluation of numerically-intensive scientific programs. We have constructed a compiler for the Supercomputer Toolkit parallel processor that uses partial evaluation to break down data abstractions and program structure, producing huge basic blocks that contain large amounts of fine-grain parallelism. We show that this fine-grain parallelism can be effectively utilized even on coarse-grain parallel architectures by selectively grouping operations together so as to adjust the parallelism grain-size to match the inter-processor communication capabilities of the target architecture.
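
The grain-size adjustment can be pictured as packing fine-grain operations into groups whose compute time dominates the cost of an inter-processor message. The Python sketch below does this with a greedy packer and made-up cost numbers, purely as an illustration of the idea, not the Toolkit compiler's scheduler.

OP_CYCLES = 1          # assumed cost of one fine-grain operation
COMM_CYCLES = 50       # assumed cost of one inter-processor message

def group_operations(ops, target_ratio=10.0):
    """Greedily pack `ops` into groups whose compute time is at least
    target_ratio times the communication cost."""
    min_ops_per_group = int(target_ratio * COMM_CYCLES / OP_CYCLES)
    groups, current = [], []
    for op in ops:
        current.append(op)
        if len(current) >= min_ops_per_group:
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

ops = [f"op{i}" for i in range(1200)]
groups = group_operations(ops)
print(len(groups), "groups of about", len(groups[0]), "operations each")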

Relevance:

10.00%

Publisher:

Abstract:

We describe the key role played by partial evaluation in the Supercomputer Toolkit, a parallel computing system for scientific applications that effectively exploits the vast amount of parallelism exposed by partial evaluation. The Supercomputer Toolkit parallel processor and its associated partial evaluation-based compiler have been used extensively by scientists at M.I.T., and have made possible recent results in astrophysics showing that the motion of the planets in our solar system is chaotically unstable.

Relevance:

10.00%

Publisher:

Abstract:

This report describes Processor Coupling, a mechanism for controlling multiple ALUs on a single integrated circuit to exploit both instruction-level and inter-thread parallelism. A compiler statically schedules individual threads to discover available intra-thread instruction-level parallelism. The runtime scheduling mechanism interleaves threads, exploiting inter-thread parallelism to maintain high ALU utilization. ALUs are assigned to threads on a cycle-by-cycle basis, and several threads can be active concurrently. Simulation results show that Processor Coupling performs well both on single-threaded and multi-threaded applications. The experiments address the effects of memory latencies, function unit latencies, and communication bandwidth between function units.
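
The interleaving idea can be sketched as a toy cycle-by-cycle simulation in Python in which ready operations from several threads are assigned to the free ALUs every cycle, so stalls in one thread are filled with work from another. The round-robin priority and one-cycle-per-operation model below are illustrative assumptions, not the simulated machine of the report.

from collections import deque

def simulate(threads, num_alus):
    """threads: list of deques of ready ops; return cycles needed to drain them."""
    cycles, rr = 0, 0                     # rr rotates priority so no thread starves
    while any(threads):
        free_alus = num_alus
        for k in range(len(threads)):
            t = threads[(rr + k) % len(threads)]
            while t and free_alus > 0:    # fill remaining ALUs from this thread
                t.popleft()
                free_alus -= 1
        rr = (rr + 1) % len(threads)
        cycles += 1
    return cycles

threads = [deque(range(n)) for n in (8, 5, 3)]   # 16 ops spread over 3 threads
print(simulate(threads, num_alus=4))             # -> 4 cycles on 4 ALUs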

Relevance:

10.00%

Publisher:

Abstract:

Security policies are increasingly being implemented by organisations. To enforce them, policies are mapped to device configurations, a task typically performed manually by network administrators, and the development and management of these enforcement policies is difficult and error-prone. This thesis describes the development and evaluation of an off-line firewall policy parser and validation tool. It gives the system administrator a textual interface and the vendor-specific low-level languages they trust and are familiar with, supported by an off-line compiler tool. The tool was created using the Microsoft C#.NET language and the Microsoft Visual Studio Integrated Development Environment (IDE), which provided an object-oriented environment for building a flexible and extensible system, as well as simple Web and Windows prototyping facilities for creating GUI front-end applications for testing and evaluation. A CLI was provided with the tool for more experienced users, but it was also designed to be easily integrated into GUI-based applications for non-expert users. The system was evaluated from a custom-built GUI application that can create test firewall rule sets containing synthetic rules, supplying a variety of experimental conditions and recording various performance metrics. The validation tool was built around a pragmatic outlook on the needs of the network administrator. Modularity was important in the design because the network device languages being processed change rapidly; an object-oriented approach was taken for maximum changeability and extensibility, and a flexible tool was developed to serve the needs of different types of users. System administrators want low-level, CLI-based tools that they can trust and drive easily from scripting languages, whereas inexperienced users may prefer a more abstract, high-level GUI or wizard that is easier to learn. Built around these ideas, the tool was implemented and proved to be a usable, complementary addition to the many network policy-based systems currently available. It has a flexible design and comprehensive functionality, as opposed to some other tools that cover multiple vendor languages but do not implement a deep range of options for any of them, and it complements existing systems such as policy compliance tools and abstract policy analysis systems. Its validation algorithms were evaluated for both completeness and performance, and the tool was found to correctly process large firewall policies in just a few seconds. A framework for a policy-based management system, with which the tool would integrate, is also proposed; it is based around a vendor-independent XML-based repository of device configurations, which could be used to bring together existing policy management and analysis systems.
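
One example of the kind of validation check such a tool can perform is detecting a rule that is completely shadowed by an earlier rule with a different action. The Python sketch below uses an assumed, simplified rule representation purely for illustration; the thesis's tool itself is written in C# and parses vendor-specific languages with far richer rule semantics.

from dataclasses import dataclass
import ipaddress

@dataclass
class Rule:
    action: str       # "permit" or "deny"
    src: str          # source prefix, e.g. "10.0.0.0/8"
    dst: str          # destination prefix
    dport: int        # destination port (single port, for simplicity)

def covers(earlier: Rule, later: Rule) -> bool:
    """True if every packet matched by `later` is already matched by `earlier`."""
    return (ipaddress.ip_network(later.src).subnet_of(ipaddress.ip_network(earlier.src))
            and ipaddress.ip_network(later.dst).subnet_of(ipaddress.ip_network(earlier.dst))
            and earlier.dport == later.dport)

def find_shadowed(rules):
    """Report rules that can never fire because an earlier rule with a
    different action already matches all of their traffic."""
    problems = []
    for i, later in enumerate(rules):
        for earlier in rules[:i]:
            if earlier.action != later.action and covers(earlier, later):
                problems.append((earlier, later))
                break
    return problems

policy = [
    Rule("deny",   "10.0.0.0/8",  "0.0.0.0/0", 23),
    Rule("permit", "10.1.0.0/16", "0.0.0.0/0", 23),   # shadowed by the deny above
]
for earlier, later in find_shadowed(policy):
    print("shadowed:", later, "by", earlier)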

Relevance:

10.00%

Publisher:

Abstract:

Simon, B., Hanks, B., Murphy, L., Fitzgerald, S., McCauley, R., Thomas, L., and Zander, C. 2008. Saying isn't necessarily believing: influencing self-theories in computing. In Proceedings of the Fourth International Workshop on Computing Education Research (Sydney, Australia, September 6-7, 2008). ICER '08. ACM, New York, NY, 173-184.