925 results for Subroutines in Procedural Programming Languages
Abstract:
Students in introductory programming language courses often have difficulty understanding the basic principles of procedural programming. In this paper we discuss the importance of an early understanding of the subroutine mechanism. Two approaches to self-training, static and dynamic, are presented and compared. The static approach is appropriate for written text in a paper textbook. The dynamic approach is suitable for interactive training on a computer. An interactive module was developed for teaching subroutines.
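As context for the abstract above (an illustration of ours, not material from the paper), a subroutine in a procedural language packages a sequence of statements behind a name: the caller passes arguments, control transfers to the routine, and a value and control return to the call site. A minimal Python sketch:

    def average(values):
        """A subroutine: receives parameters, does local work, returns to its caller."""
        total = 0
        for v in values:               # statements executed inside the subroutine
            total += v
        return total / len(values)     # control (and a value) return to the call site

    # Call site: control transfers into the subroutine, then execution resumes here.
    print(average([4, 8, 15]))         # prints 9.0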
Abstract:
A programming style can be seen as a particular model of shaping thought, or a special way of codifying language to solve a problem. An adaptive device is made up of an underlying formalism, for instance an automaton, a grammar, a decision tree, etc., and an adaptive mechanism responsible for providing features for self-modification. Adaptive languages are obtained by using some programming language as the device's underlying formalism. The conception of such languages calls for a new programming style, since the application of adaptive technology in the field of programming languages suggests a new way of thinking. Adaptive languages have the basic feature of allowing the expression of programs that modify themselves through adaptive actions at runtime. With the adaptive style, program code can be structured in such a way that the codified program modifies or adapts itself to the needs of the problem. The adaptive programming style may be a feasible alternative for obtaining consistent self-modifying code, suitable for modern applications of self-modifying code.
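As a loose illustration of the adaptive style described above (a sketch of ours, not code from the paper), the following Python fragment keeps one of its own rules in a table and rewrites that table at runtime through an "adaptive action":

    # A tiny "adaptive" program: its rule table is part of its own state,
    # and an adaptive action rewrites that table while the program runs.
    rules = {"greet": lambda name: "Hello, " + name}

    def adaptive_action():
        # Self-modification step: replace the behaviour of an existing rule.
        rules["greet"] = lambda name: "Hi again, " + name

    print(rules["greet"]("Ada"))   # Hello, Ada
    adaptive_action()              # the program changes itself at runtime
    print(rules["greet"]("Ada"))   # Hi again, Ada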
Abstract:
Interactive theorem provers are tools designed for the certification of formal proofs developed by means of man-machine collaboration. Formal proofs obtained in this way cover a large variety of logical theories, ranging from the branches of mainstream mathematics to the field of software verification. The border between these two worlds is marked by results in theoretical computer science and proofs related to the metatheory of programming languages. This last field, an obvious application of interactive theorem proving, nonetheless poses a serious challenge to the users of such tools, due both to the particularly structured way in which these proofs are constructed and to difficulties related to the management of notions typical of programming languages, such as variable binding. This thesis is composed of two parts, discussing our experience in the development of the Matita interactive theorem prover and its use in the mechanization of the metatheory of programming languages. More specifically, part I covers: - the results of our effort to provide a better framework for the development of tactics for Matita, in order to make their implementation and debugging easier, also resulting in much clearer code; - a discussion of the implementation of two tactics, providing infrastructure for the unification of constructor forms and the inversion of inductive predicates; we point out interactions between induction and inversion and provide an advancement over the state of the art. In the second part of the thesis, we focus on aspects related to the formalization of programming languages. We describe two works of ours: - a discussion of basic issues we encountered in our formalizations of part 1A of the POPLmark challenge, where we apply the extended inversion principles we implemented for Matita; - a formalization of an algebraic logical framework, posing more complex challenges, including multiple binding and a form of hereditary substitution; this work adopts, for the encoding of binding, an extension of Masahiko Sato's canonical locally named representation that we designed during our visit to the Laboratory for Foundations of Computer Science at the University of Edinburgh, under the supervision of Randy Pollack.
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
"UIUCDCS-R-74-652"
Abstract:
Thesis (M.S.)--University of Illinois at Urbana-Champaign.
Abstract:
Bibliography: p. 120-123.
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend of recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow. We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started out with a small, simple imperative language and only added features that we deemed most important for practical use. One language feature we addressed is arrays. Due to the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach to lenient array operations, which leads to simple and lenient typing of arrays. Type inference is necessary because a user is usually concerned only with the security types of a program's input/output variables and would like to have the types of all auxiliary variables inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system, and therefore facilitates informative error reporting generated in a cascading fashion. Algorithm B and error reporting have been implemented and tested. Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally based on feedback from type checking/inference. Core computations interact with code modules from the involved parties only through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
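For readers unfamiliar with the topic, the snippet below is a minimal sketch of the core idea behind secure information flow checking, using a hypothetical two-level lattice; it is not the dissertation's type system or its algorithm B:

    # Minimal two-level information-flow check (illustrative only; the levels
    # and function names are hypothetical, not the dissertation's system).
    LEVELS = {"low": 0, "high": 1}

    def flow_allowed(source_level, dest_level):
        # Information may only flow upward in the lattice: "low" data may be
        # written into a "high" variable, but not the other way around.
        return LEVELS[source_level] <= LEVELS[dest_level]

    print(flow_allowed("low", "high"))   # True: public data into a secret variable
    print(flow_allowed("high", "low"))   # False: would leak secret data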
Abstract:
It is widely accepted that solving programming exercises is fundamental to learning how to program. Nevertheless, solving exercises is only effective if students receive an assessment of their work. An exercise solved incorrectly will consolidate a false belief, and without feedback many students will not be able to overcome their difficulties. However, creating, managing and accessing a large number of exercises, covering every point in the curriculum of a programming course, in classes with large numbers of students, can be a daunting task without the appropriate tools working in unison. This involves a diversity of tools, from the environments where programs are coded, to automatic program evaluators providing feedback on students' attempts, to the authoring, management and sequencing of programming exercises as learning objects. We believe that the integration of these tools will have a great impact on the acquisition of programming skills. Our research objective is to manage and coordinate a network of eLearning systems where students can solve computer programming exercises. Networks of this kind include systems such as learning management systems (LMS), evaluation engines (EE), learning object repositories (LOR) and exercise resolution environments (ERE). Our strategy for achieving interoperability among these tools is based on a shared definition of a programming exercise as a Learning Object (LO).
Abstract:
Adaptive devices have the characteristic of changing themselves dynamically in response to input stimuli, with no interference from external agents. Occasional changes in behaviour are immediately detected by the devices, which react to them spontaneously. Historically, such devices derive from research in the field of formal languages and automata; however, the formalism has spurred applications in several other fields. Based on the operation of adaptive automata, the elementary ideas generating adaptive programming languages are presented.
Abstract:
In this paper the architecture of an experimental multi-paradigm programming environment is sketched, showing how its parts combine with application modules in order to integrate program modules written in different programming languages and paradigms. Adaptive automata are special self-modifying formal state machines used as a design and implementation tool in the representation of complex systems. Adaptive automata have been proven to have the same formal power as Turing machines; therefore, at least in theory, arbitrarily complex systems may be modeled with adaptive automata. The present work briefly introduces this formal tool and presents case studies showing how to use it in two very different situations: the first, in the name management module of a multi-paradigm, multi-language programming environment, and the second, in an application program implementing an adaptive automaton that accepts a context-sensitive language.
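As an illustration of the kind of behaviour described above (a sketch of ours, not the automaton from the paper), the following Python program simulates a tiny adaptive automaton whose transition table grows at runtime, letting it accept the context-sensitive language a^n b^n c^n, which no fixed finite automaton can recognize:

    class AdaptiveAutomaton:
        """Transitions are a plain dictionary that the automaton itself rewrites."""

        def __init__(self):
            self.t = {("A", "a"): "A"}   # initially it only knows how to read 'a'
            self.n = 0                   # number of a's seen so far

        def grow(self):
            """Adaptive action: extend the b-chain and the c-chain by one state."""
            self.n += 1
            i = self.n
            if i == 1:
                self.t[("A", "b")] = "B1"
                self.t[("B1", "c")] = "F"
            else:
                # Insert a new state at the front of the b-chain ...
                self.t[("B%d" % i, "b")] = self.t[("A", "b")]
                self.t[("A", "b")] = "B%d" % i
                # ... and a new state at the front of the c-chain.
                self.t[("C%d" % (i - 1), "c")] = self.t[("B1", "c")]
                self.t[("B1", "c")] = "C%d" % (i - 1)

        def accepts(self, word):
            state = "A"
            for symbol in word:
                if (state, symbol) not in self.t:
                    return False
                if state == "A" and symbol == "a":
                    self.grow()          # the self-modification happens here
                state = self.t[(state, symbol)]
            return state == "F"

    for w in ["abc", "aabbcc", "aaabbbccc", "aabbc", "abbc"]:
        print(w, AdaptiveAutomaton().accepts(w))   # True for the first three only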
Abstract:
Mainstream hardware is becoming parallel, heterogeneous, and distributed: it is on every desk, in every home and in every pocket. As a consequence, in recent years software has been taking an epochal turn toward concurrency, distribution and interaction, pushed by the evolution of hardware architectures and the growing availability of networks. This calls for introducing further abstraction layers on top of those provided by classical mainstream programming paradigms, to tackle more effectively the new complexities that developers face in everyday programming. A convergence is recognizable in the mainstream toward the adoption of the actor paradigm as a means to unite object-oriented programming and concurrency. Nevertheless, we argue that the actor paradigm can only be considered a good starting point for a more comprehensive response to such a fundamental and radical change in software development. Accordingly, the main objective of this thesis is to propose Agent-Oriented Programming (AOP) as a high-level general-purpose programming paradigm, a natural evolution of actors and objects, introducing a further level of human-inspired concepts for programming software systems, meant to simplify the design and programming of concurrent, distributed, reactive/interactive programs. To this end, in the dissertation we first construct the required background by studying the state of the art of both actor-oriented and agent-oriented programming, and then we focus on the engineering of integrated programming technologies for developing agent-based systems in their classical application domains: artificial intelligence and distributed artificial intelligence. Then, we shift the perspective from the development of intelligent software systems toward general-purpose software development. Using the expertise gained during the construction of this background, we introduce a general-purpose programming language named simpAL, which is rooted in general principles and practices of software development and, at the same time, provides an agent-oriented level of abstraction for the engineering of general-purpose software systems.
Abstract:
Studying independence of goals has proven very useful in the context of logic programming. In particular, it has provided a formal basis for powerful automatic parallelization tools, since independence ensures that two goals may be evaluated in parallel while preserving correctness and efficiency. We extend the concept of independence to constraint logic programs (CLP) and prove that it also ensures the correctness and efficiency of the parallel evaluation of independent goals. Independence for CLP languages is more complex than for logic programming, as search space preservation is necessary but no longer sufficient for ensuring correctness and efficiency. Two additional issues arise. The first is that the cost of constraint solving may depend upon the order in which constraints are encountered. The second is the need to handle dynamic scheduling. We clarify these issues by proposing various types of search independence and constraint solver independence, and show how they can be combined to allow different optimizations, from parallelism to intelligent backtracking. Sufficient conditions for independence which can be evaluated "a priori" at run-time are also proposed. Our study also yields new insights into independence in logic programming languages. In particular, we show that search space preservation is not only a sufficient but also a necessary condition for ensuring correctness and efficiency of parallel execution.
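For context (an illustration of ours, not from the paper), the classical sufficient condition for independence in logic programming is strict independence: two goals that share no variables can be evaluated in parallel without affecting each other's search space. A hypothetical Python sketch of such an a-priori check and the resulting parallel evaluation:

    from concurrent.futures import ThreadPoolExecutor

    def independent(vars_a, vars_b):
        """A-priori sufficient condition: the two goals share no variables."""
        return vars_a.isdisjoint(vars_b)

    def run_goals(goal_a, goal_b):
        (vars_a, solve_a), (vars_b, solve_b) = goal_a, goal_b
        if independent(vars_a, vars_b):
            # Independent goals can be evaluated in parallel.
            with ThreadPoolExecutor(max_workers=2) as pool:
                return pool.submit(solve_a).result(), pool.submit(solve_b).result()
        # Otherwise fall back to sequential, left-to-right evaluation.
        return solve_a(), solve_b()

    # Hypothetical goals p(X, Y) and q(Z): no shared variables, so they run in parallel.
    print(run_goals(({"X", "Y"}, lambda: "p solved"),
                    ({"Z"}, lambda: "q solved")))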