11 results for nationella program
at Massachusetts Institute of Technology
Abstract:
Recognizing standard computational structures (cliches) in a program can help an experienced programmer understand the program. We develop a graph parsing approach to automating program recognition in which programs and cliches are represented in an attributed graph grammar formalism and recognition is achieved by graph parsing. In studying this approach, we evaluate our representation's ability to suppress many common forms of variation which hinder recognition. We investigate the expressiveness of our graph grammar formalism for capturing programming cliches. We empirically and analytically study the computational cost of our recognition approach with respect to two medium-sized, real-world simulator programs.
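The abstract does not reproduce the formalism itself; as a rough, hypothetical sketch of the underlying idea, a cliche can be pictured as a small dataflow-graph pattern, and an occurrence of it can be located by subgraph matching. All names and the naive matcher below are invented for illustration and are not the thesis's representation.

```python
# Hypothetical sketch: a "cliche" as a tiny dataflow-graph pattern,
# recognized by naive subgraph matching on operation labels and edges.
from itertools import permutations

# A graph is (labels, edges): labels maps node id -> operation name,
# edges is a set of (source, target) dataflow arcs.
def find_cliche(pattern, program):
    p_labels, p_edges = pattern
    g_labels, g_edges = program
    p_nodes = list(p_labels)
    candidates = list(g_labels)
    # Try every injective assignment of pattern nodes to program nodes.
    for assignment in permutations(candidates, len(p_nodes)):
        mapping = dict(zip(p_nodes, assignment))
        if all(g_labels[mapping[n]] == p_labels[n] for n in p_nodes) and \
           all((mapping[a], mapping[b]) in g_edges for (a, b) in p_edges):
            return mapping          # one occurrence of the cliche
    return None

# "Accumulate a running sum" cliche: initialize to zero, then repeatedly add.
ACCUMULATE = ({"init": "const0", "acc": "add"},
              {("init", "acc"), ("acc", "acc")})

# A toy program graph that sums squares: contains the cliche plus extra nodes.
program = ({"z": "const0", "s": "add", "m": "mul", "x": "input"},
           {("z", "s"), ("s", "s"), ("m", "s"), ("x", "m")})

print(find_cliche(ACCUMULATE, program))   # e.g. {'init': 'z', 'acc': 's'}
```

A real flow-graph parser avoids this exponential enumeration; the sketch only shows what "recognizing a cliche embedded in a larger graph" means.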
Abstract:
Introducing function sharing into designs allows eliminating costly structure by adapting existing structure to perform its function. This can eliminate many inefficiencies of reusing general components in specific contexts. "Redistribution of intermediate results" focuses on instances where adaptation requires only the addition or deletion of data flow and the removal of unused code. I show that this approach unifies and extends several well-known classes of optimization. The system performs search and screening by deriving, using a novel explanation-based generalization technique, operational filtering predicates from input teleological information. The key advantage is to focus the system's effort on optimizations that are easier to prove safe.
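As a made-up, minimal illustration of the flavor of "redistribution of intermediate results" (the functions and the example below are not from the thesis): an intermediate value already computed by one component is routed to a second consumer, whose own recomputation of it then becomes dead code and is removed.

```python
# Hypothetical before/after: a value that one fragment already computes is
# re-routed (a data-flow edge added) to a second consumer, and the second,
# redundant computation is deleted as dead code.

def mean_and_residuals_before(xs):
    total = sum(xs)                    # intermediate result
    mean = total / len(xs)
    total_again = sum(xs)              # recomputed by the reused fragment
    return mean, [x - total_again / len(xs) for x in xs]

def mean_and_residuals_after(xs):
    total = sum(xs)                    # computed once...
    mean = total / len(xs)
    # ...and its data flow redistributed to the second consumer; the
    # redundant sum above is now unused and has been removed.
    return mean, [x - mean for x in xs]

xs = [1.0, 2.0, 3.0, 4.0]
assert mean_and_residuals_before(xs) == mean_and_residuals_after(xs)
```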
Abstract:
The key to understanding a program is recognizing familiar algorithmic fragments and data structures in it. Automating this recognition process will make it easier to perform many tasks which require program understanding, e.g., maintenance, modification, and debugging. This report describes a recognition system, called the Recognizer, which automatically identifies occurrences of stereotyped computational fragments and data structures in programs. The Recognizer is able to identify these familiar fragments and structures, even though they may be expressed in a wide range of syntactic forms. It does so systematically and efficiently by using a parsing technique. Two important advances have made this possible. The first is a language-independent graphical representation for programs and programming structures which canonicalizes many syntactic features of programs. The second is an efficient graph parsing algorithm.
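A hypothetical sketch of the parsing idea, with invented productions and a deliberately simplified notion of a graph (just a bag of operation labels): grammar rules rewrite a recognized group of operations into a single higher-level node, so familiar structure is summarized bottom-up. The Recognizer's actual grammar also constrains edges and attributes.

```python
# A production pairs a nonterminal with the set of node labels that realize it.
GRAMMAR = [
    ("SQUARE",         frozenset({"dup", "mul"})),
    ("SUM-OF-SQUARES", frozenset({"SQUARE", "accumulate"})),
]

def parse(nodes):
    """Repeatedly replace a production body found in `nodes` by its nonterminal."""
    nodes = list(nodes)
    changed = True
    while changed:
        changed = False
        for lhs, body in GRAMMAR:
            if body <= set(nodes):
                for label in body:
                    nodes.remove(label)
                nodes.append(lhs)
                changed = True
    return nodes

# A toy operation-level view of a program that sums the squares of its inputs.
print(parse(["read", "dup", "mul", "accumulate", "print"]))
# -> ['read', 'print', 'SUM-OF-SQUARES']
```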
Abstract:
Artificial Intelligence research involves the creation of extremely complex programs which must possess the capability to introspect, learn, and improve their expertise. Any truly intelligent program must be able to create procedures and to modify them as it gathers information from its experience. [Sussman, 1975] produced such a system for a 'mini-world'; but truly intelligent programs must be considerably more complex. A crucial stepping stone in AI research is the development of a system which can understand complex programs well enough to modify them. There is also a complexity barrier in the world of commercial software which is making the cost of software production and maintenance prohibitive. Here too, a system which is capable of understanding complex programs is a necessary step. The Programmer's Apprentice Project [Rich and Shrobe, 1976] is attempting to develop an interactive programming tool which will help expert programmers deal with the complexity involved in engineering a large software system. This report describes REASON, the deductive component of the programmer's apprentice. REASON is intended to help expert programmers in the process of evolutionary program design. REASON utilizes the engineering techniques of modelling, decomposition, and analysis by inspection to determine how modules interact to achieve the desired overall behavior of a program. REASON coordinates its various sources of knowledge by using a dependency-directed structure which records the justification for each deduction it makes. Once a program has been analyzed, these justifications can be summarized into a teleological structure called a plan which helps the system understand the impact of a proposed program modification.
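A minimal, invented sketch of what a dependency-directed record of justifications might look like: each conclusion keeps the facts it was justified by, so the conclusions affected by a proposed modification can be traced. REASON's actual structures and deduction machinery are far richer.

```python
# Hypothetical justification table: conclusion -> the facts supporting it.
justifications = {
    "output-is-sorted":         {"loop-maintains-invariant", "loop-terminates"},
    "loop-maintains-invariant": {"swap-preserves-multiset"},
    "module-contract-met":      {"output-is-sorted"},
}

def affected_by(changed_fact):
    """Every recorded conclusion whose justification (transitively) uses the fact."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for conclusion, support in justifications.items():
            if conclusion not in hit and (changed_fact in support or support & hit):
                hit.add(conclusion)
                changed = True
    return hit

# If a proposed modification invalidates the swap step, which conclusions
# must be re-examined?  All three recorded conclusions, in this toy table.
print(affected_by("swap-preserves-multiset"))
```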
Abstract:
A program was written to solve calculus word problems. The program, CARPS (CALculus Rate Problem Solver), is restricted to rate problems. The overall plan of the program is similar to Bobrow's STUDENT, the primary difference being the introduction of "structures" as the internal model in CARPS. Structures are stored internally as trees. Each structure is designed to hold the information gathered about one object. A description of CARPS is given by working through two problems, one in great detail. Also included is a critical analysis of STUDENT.
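A small, hypothetical sketch of the "structure" idea, with invented slot names: one tree-shaped record per object mentioned in the rate problem, filled in as sentences are read and searched when a quantity is needed. The thesis's actual structures differ in detail.

```python
from dataclasses import dataclass, field

@dataclass
class Structure:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. shape, known rates
    parts: list = field(default_factory=list)        # sub-structures (children)

    def find(self, name):
        """Depth-first search of the tree for the structure describing `name`."""
        if self.name == name:
            return self
        for child in self.parts:
            found = child.find(name)
            if found:
                return found
        return None

# "Water flows into a conical tank at 2 cubic feet per minute."
tank = Structure("tank", {"shape": "cone"})
tank.parts.append(Structure("water", {"inflow-rate": "2 ft^3/min"}))

print(tank.find("water").attributes["inflow-rate"])   # 2 ft^3/min
```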
Abstract:
SIR is a computer system, programmed in the LISP language, which accepts information and answers questions expressed in a restricted form of English. This system demonstrates what can reasonably be called an ability to "understand" semantic information. SIR's semantic and deductive ability is based on the construction of an internal model, which uses word associations and property lists, for the relational information normally conveyed in conversational statements. A format-matching procedure extracts semantic content from English sentences. If an input sentence is declarative, the system adds appropriate information to the model. If an input sentence is a question, the system searches the model until it either finds the answer or determines why it cannot find the answer. In all cases SIR reports its conclusions. The system has some capacity to recognize exceptions to general rules, resolve certain semantic ambiguities, and modify its model structure in order to save computer memory space. Judging from its conversational ability, SIR is a first step toward intelligent man-machine communication. The author proposes a next step by describing how to construct a more general system which is less complex and yet more powerful than SIR. This proposed system contains a generalized version of the SIR model, a formal logical system called SIR1, and a computer program for testing the truth of SIR1 statements with respect to the generalized model by using partial proof procedures in the predicate calculus. The thesis also describes the formal properties of SIR1 and how they relate to the logical structure of SIR.
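A toy, invented sketch of a property-list model of the kind the abstract describes: declarative sentences add relation entries to a word's association list, and questions search those lists, here with one step of deduction through is-a links. SIR's actual model, matching, and deduction are far more elaborate.

```python
model = {}          # word -> list of (relation, other-word) pairs

def tell(subject, relation, obj):
    """A declarative sentence adds an entry to the subject's property list."""
    model.setdefault(subject, []).append((relation, obj))

def ask(subject, relation):
    """A question searches the model: direct lookup, then via is-a links."""
    for rel, obj in model.get(subject, []):
        if rel == relation:
            return obj
    for rel, parent in model.get(subject, []):
        if rel == "is-a":
            answer = ask(parent, relation)
            if answer:
                return answer
    return None

tell("boy", "is-a", "person")
tell("person", "has-part", "hand")
print(ask("boy", "has-part"))     # hand, deduced through boy -> person
```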
Abstract:
A computer program, named ADEPT (A Distinctly Empirical Prover of Theorems), has been written which proves theorems taken from the abstract theory of groups. Its operation is basically heuristic, incorporating many of the techniques of the human mathematician in a "natural" way. This program has proved almost 100 theorems, as well as serving as a vehicle for testing and evaluating special-purpose heuristics. A detailed description of the program is supplemented by accounts of its performance on a number of theorems, thus providing many insights into the particular problems inherent in the design of a procedure capable of proving a variety of theorems from this domain. Suggestions have been formulated for further efforts along these lines, and comparisons with related work previously reported in the literature have been made.
Abstract:
This report is concerned with the problem of achieving flexibility (additivity, modularity) and efficiency (performance, expertise) simultaneously in one AI program. It deals with the domain of elementary electronic circuit design. The proposed solution is to provide a deduction-driven problem solver with built-in control-structure concepts. This problem solver and its knowledge base in the application areas of design and electronics are described. The program embodying it is being used to explore the solution of some modest problems in circuit design. It is concluded that shallow reasoning about problem-solver plans is necessary for flexibility, and can be implemented with reasonable efficiency.
Abstract:
This paper describes a system for the computer understanding of English. The system answers questions, executes commands, and accepts information in normal English dialog. It uses semantic information and context to understand discourse and to disambiguate sentences. It combines a complete syntactic analysis of each sentence with a "heuristic understander" which uses different kinds of information about a sentence, other parts of the discourse, and general information about the world in deciding what the sentence means. It is based on the belief that a computer cannot deal reasonably with language unless it can "understand" the subject it is discussing. The program is given a detailed model of the knowledge needed by a simple robot having only a hand and an eye. We can give it instructions to manipulate toy objects, interrogate it about the scene, and give it information it will use in deduction. In addition to knowing the properties of toy objects, the program has a simple model of its own mentality. It can remember and discuss its plans and actions as well as carry them out. It enters into a dialog with a person, responding to English sentences with actions and English replies, and asking for clarification when its heuristic programs cannot understand a sentence through use of context and physical knowledge.
Abstract:
This white paper reports emerging findings at the end of Phase I of the Lean Aircraft Initiative in the Policy focus group area. Specifically, it provides details about research on program instability. Its objective is to discuss high-level findings detailing: 1) the relative contribution of different factors to a program’s overall instability; 2) the cost impact of program instability on acquisition programs; and 3) some strategies recommended by program managers for overcoming and/or mitigating the negative effects of program instability on their programs. Because this report comes while the research is still underway, it is not meant to be a definitive document on the subject. Rather, it is anticipated that this research may produce a number of reports on program instability-related topics. The government managers of military acquisition programs rated annual budget or production rate changes, changes in requirements, and technical difficulties, respectively, as the top three contributors to program instability. When asked to partition the actual variance in their program’s planned cost and schedule among these factors, it was found that the combined effects of unplanned budget and requirement changes accounted for 5.2% annual cost growth and 20% total program schedule slip. At a rate of approximately 5% annual cost growth from these factors, even conservative estimates of the cost benefits to be gained from acquisition reforms and process improvements can quickly be eclipsed by the added cost associated with program instability. Program management practices involving the integration of stakeholders from throughout the value chain into the decision-making process were rated the most effective at avoiding program instability. The use of advanced information technologies was rated the most effective at mitigating the negative impact of program instability.
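As a rough check on the magnitude of the quoted figure (the program lengths below are assumed for illustration; the paper does not fix one here), 5.2% annual cost growth compounds substantially over a multi-year program:

```python
# Compounding the reported 5.2% annual cost growth over assumed program lengths.
annual_growth = 0.052
for years in (5, 10):
    factor = (1 + annual_growth) ** years
    print(f"{years} years: planned cost grows by about {factor - 1:.0%}")
# 5 years:  about 29% above the original plan
# 10 years: about 66% above the original plan
```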
Abstract:
This white paper reports emerging findings at the end of Phase I of the Lean Aircraft Initiative in the Policy focus group area. Specifically, it provides details about research on program instability. Its objective is to discuss high-level findings detailing: 1) the relative contribution of different factors to a program’s overall instability; 2) the cost impact of program instability on acquisition programs; and 3) some strategies recommended by program managers for overcoming and/or mitigating the negative effects of program instability on their programs. Because this report comes while the research is still underway, it is not meant to be a definitive document on the subject. Rather, it is anticipated that this research may produce a number of reports on program instability-related topics. The government managers of military acquisition programs rated annual budget or production rate changes, changes in requirements, and technical difficulties, respectively, as the top three contributors to program instability. When asked to partition the actual variance in their program’s planned cost and schedule among these factors, it was found that the combined effects of unplanned budget and requirement changes accounted for 5.2% annual cost growth and 20% total program schedule slip. At a rate of approximately 5% annual cost growth from these factors, even conservative estimates of the cost benefits to be gained from acquisition reforms and process improvements can quickly be eclipsed by the added cost associated with program instability. Program management practices involving the integration of stakeholders from throughout the value chain into the decision-making process were rated the most effective at avoiding program instability. The use of advanced information technologies was rated the most effective at mitigating the negative impact of program instability.