6 results for Restrictions de signe

in Boston University Digital Common


Relevance:

10.00%

Publisher:

Abstract:

We consider the problems of typability[1] and type checking[2] in the Girard/Reynolds second-order polymorphic typed λ-calculus, for which we use the short name "System F" and which we consider in the "Curry style", where types are assigned to pure λ-terms. These problems have been considered and proven to be decidable or undecidable for various restrictions and extensions of System F and other related systems, and lower-bound complexity results for System F have been achieved, but they have remained "embarrassing open problems"[3] for System F itself. We first prove that type checking in System F is undecidable, by a reduction from semi-unification. We then prove that typability in System F is undecidable, by a reduction from type checking. Since the reverse reduction is already known, this implies the two problems are equivalent. The second reduction uses a novel method of constructing λ-terms such that in all type derivations, specific bound variables must always be assigned a specific type. Using this technique, we can require that specific subterms be typable using a specific, fixed type assignment in order for the entire term to be typable at all. Any desired type assignment may be simulated. We develop this method, which we call "constants for free", for both the λK and λI calculi.
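For reference, the two problems named above can be stated as follows (a standard formulation, using the Curry-style System F typing judgment Γ ⊢ M : τ; this adds nothing beyond what the bracketed citations refer to):

    Typability:     given a pure λ-term M, decide whether there exist an
                    environment Γ and a type τ such that Γ ⊢ M : τ.
    Type checking:  given Γ, M, and τ, decide whether Γ ⊢ M : τ.

The reductions thus run semi-unification → type checking → typability, and the previously known reduction from typability to type checking makes the two problems equivalent.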

Relevance:

10.00%

Publisher:

Abstract:

Various restrictions on the terms allowed for substitution give rise to different cases of semi-unification. Semi-unification on finite and regular terms has already been considered in the literature. We introduce a general case of semi-unification where substitutions are allowed on non-regular terms, and we prove the equivalence of this general case to a well-known undecidable database dependency problem, thus establishing the undecidability of general semi-unification. We present a unified way of looking at the various problems of semi-unification and give some properties that are common to all of its cases. We also examine the principality property and the solution set for those problems, and we prove that semi-unification on general terms has the principality property. Finally, we present a recursive inseparability result between semi-unification on regular terms and semi-unification on general terms.
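For context, a standard formulation of the underlying problem (not spelled out in the abstract itself): an instance of semi-unification is a finite set of inequalities

    s₁ ≤ t₁, …, sₙ ≤ tₙ

between first-order terms, and it is solvable if there is a substitution S such that, for every i, some substitution Rᵢ satisfies Rᵢ(S(sᵢ)) = S(tᵢ). The cases compared above differ in which terms the substitutions may introduce: finite terms, regular (rational) infinite terms, or arbitrary, possibly non-regular, infinite terms.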

Relevance:

10.00%

Publisher:

Abstract:

snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system, data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first-class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefits beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that builds cost and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
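The abstract does not give STEP syntax, so the following Haskell fragment is only an illustration of the idea of a multi-dimensional sized type: an Image type carries lower/upper bounds on each dimension, and composing an operation that has resolution restrictions amounts to intersecting bound intervals, with an empty intersection meaning the constraints are unsatisfiable. All names here are hypothetical.

    -- Illustration only (not STEP): tracking width/height bounds on images.
    data Bounds  = Bounds  { lo :: Int, hi :: Int } deriving Show
    data ImageTy = ImageTy { widthB :: Bounds, heightB :: Bounds } deriving Show

    -- Intersect two intervals; Nothing means the size constraints conflict.
    meet :: Bounds -> Bounds -> Maybe Bounds
    meet (Bounds l1 u1) (Bounds l2 u2)
      | l <= u    = Just (Bounds l u)
      | otherwise = Nothing
      where l = max l1 l2
            u = min u1 u2

    -- Feeding an image of type 'arg' into a stage whose input must satisfy
    -- 'req': the combined type exists only if every dimension is compatible.
    constrain :: ImageTy -> ImageTy -> Maybe ImageTy
    constrain req arg =
      ImageTy <$> meet (widthB req)  (widthB arg)
              <*> meet (heightB req) (heightB arg)

    main :: IO ()
    main = do
      -- Hypothetical stage needing at least 640x480 input; camera delivers up to 1920x1080.
      let stage  = ImageTy (Bounds 640 4096) (Bounds 480 4096)
          camera = ImageTy (Bounds 320 1920) (Bounds 240 1080)
      print (constrain stage camera)   -- Just (ImageTy (Bounds 640 1920) (Bounds 480 1080))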

Relevance:

10.00%

Publisher:

Abstract:

Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable λ-terms. More interestingly, every finite-rank restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference. This is in contrast to System F, where the restriction to any finite rank of 3 or above has neither principal typings nor decidable type inference. This is also in contrast to earlier presentations of intersection types, where the status of these properties is not known for the finite-rank restrictions at 3 and above. Furthermore, the notion of principal typings for our system involves only one operation, substitution, rather than several operations (not all substitution-based) as in earlier presentations of principality for intersection types (of unrestricted rank). A unification-based type inference algorithm is presented using a new form of unification, β-unification.
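As a standard illustration of intersection typing (not specific to the system described above), self-application, which has no simple type, becomes typable once a bound variable may be assigned an intersection of types:

    λx. x x  :  ((σ → τ) ∩ σ) → τ

Here the single bound variable x is used once at type σ → τ and once at type σ; a principal typing is then one from which every other typing of the term is obtained by the allowed operations (for the system above, substitution alone).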

Relevance:

10.00%

Publisher:

Abstract:

A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection runs. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is no accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, thereby allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule; such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program results. We define conditions that allow other garbage-collection algorithms to coexist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
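As a concrete, informal illustration of the behavior being formalized (not of the λweak calculus itself), Haskell's System.Mem.Weak exposes weak references whose observable result can depend on whether a collection has run:

    import System.Mem      (performGC)
    import System.Mem.Weak (mkWeakPtr, deRefWeak)

    -- A weak reference does not keep its referent alive: after a collection
    -- the second dereference below may yield Nothing, so the program's output
    -- can depend on when the collector runs (the non-determinism discussed above).
    main :: IO ()
    main = do
      let obj = [1 .. 10 :: Int]        -- some heap-allocated value
      w <- mkWeakPtr obj Nothing         -- weak pointer to it, no finalizer
      deRefWeak w >>= print              -- likely Just [1,...,10]
      performGC                          -- a collection may clear the weak pointer
      deRefWeak w >>= print              -- possibly Nothing (not guaranteed)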

Relevance:

10.00%

Publisher:

Abstract:

Sensor applications in Sensoria [1] are expressed using STEP (Sensorium Task Execution Plan). SNAFU (Sensor-Net Applications as Functional Units) serves as a high-level sensor-programming language which is compiled into STEP. In SNAFU’s current form, its differences from STEP are relatively minor, being limited to shorthands and macros not available in STEP. We show that, however restrictive it may seem, SNAFU in fact has universal power; technically, it is a Turing-complete language, i.e., any Turing-machine computation can be expressed in SNAFU (though not always conveniently). Although STEP, as a low-level language not directly available to Sensorium users, may be allowed to have universal power, SNAFU programmers may use this power for malicious purposes or inadvertently introduce errors with destructive consequences. In future developments of SNAFU, we plan to introduce restrictions and high-level features with safety guards, such as those provided by a type system, which will make SNAFU programming safer.
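The abstract does not sketch the proof, but the usual route to such a result is to show that the language can express unbounded recursion; the following Haskell fragment (illustrative only, not SNAFU syntax) shows how a single fixed-point combinator yields loops whose length cannot be bounded in advance:

    -- Illustration only: general recursion via a fixed-point combinator is the
    -- standard ingredient behind Turing-completeness of functional languages.
    fix :: (a -> a) -> a
    fix f = f (fix f)

    -- Number of Collatz steps from n down to 1; no bound on the recursion
    -- depth is known in advance.
    collatzSteps :: Integer -> Integer
    collatzSteps = fix (\rec n ->
      if n == 1 then 0
      else 1 + rec (if even n then n `div` 2 else 3 * n + 1))

    main :: IO ()
    main = print (collatzSteps 27)   -- prints 111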