899 results for Fractional Calculus Operators
Abstract:
We consider the problems of typability[1] and type checking[2] in the Girard/Reynolds second-order polymorphic typed λ-calculus, for which we use the short name "System F" and which we use in the "Curry style" where types are assigned to pure λ-terms. These problems have been considered and proven to be decidable or undecidable for various restrictions and extensions of System F and other related systems, and lower-bound complexity results for System F have been achieved, but they have remained "embarrassing open problems"[3] for System F itself. We first prove that type checking in System F is undecidable by a reduction from semi-unification. We then prove typability in System F is undecidable by a reduction from type checking. Since the reverse reduction is already known, this implies the two problems are equivalent. The second reduction uses a novel method of constructing λ-terms such that in all type derivations, specific bound variables must always be assigned a specific type. Using this technique, we can require that specific subterms must be typable using a specific, fixed type assignment in order for the entire term to be typable at all. Any desired type assignment may be simulated. We develop this method, which we call "constants for free", for both the λK and λI calculi.
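For reference, the two decision problems can be stated as follows (a standard formulation for Curry-style System F; Γ ranges over type assignments, M over pure λ-terms, τ over System F types):

    TC:   given Γ, M, τ, decide whether  Γ ⊢ M : τ        (everything is given)
    TYP:  given M, decide whether  ∃Γ, τ such that  Γ ⊢ M : τ   (only the term is given)

The abstract's second reduction (type checking to typability), together with the previously known reverse reduction, is what makes the two problems equivalent.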
Abstract:
We study the problem of type inference for a family of polymorphic type disciplines containing the power of Core-ML. This family comprises all levels of the stratification of the second-order lambda-calculus by "rank" of types. We show that typability is an undecidable problem at every rank k ≥ 3 of this stratification. While it was already known that typability is decidable at rank ≤ 2, no direct and easy-to-implement algorithm was available. To design such an algorithm, we develop a new notion of reduction and show how to use it to reduce the problem of typability at rank 2 to the problem of acyclic semi-unification. A by-product of our analysis is a simple solution procedure for acyclic semi-unification.
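For orientation, the rank stratification the abstract refers to is usually defined by a grammar along these lines (one common formulation; details vary slightly across presentations):

    R(0)   ::= α | R(0) → R(0)                        (quantifier-free types)
    R(k+1) ::= R(k) | R(k) → R(k+1) | ∀α. R(k+1)      (quantifiers to the left of an arrow must come from the lower rank)

Under this stratification, the decidability boundary established here sits between rank 2 and rank 3.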
Abstract:
If every lambda-abstraction in a lambda-term M binds at most one variable occurrence, then M is said to be "linear". Many questions about linear lambda-terms are relatively easy to answer, e.g. they are all beta-strongly normalizing and all simply-typable. We extend the syntax of the standard lambda-calculus L to a non-standard lambda-calculus L^ satisfying a linearity condition generalizing the notion in the standard case. Specifically, in L^ a subterm Q of a term M can be applied to several subterms R1, ..., Rk in parallel, which we write as (Q . R1 ∧ ... ∧ Rk). The appropriate notion of beta-reduction beta^ for the calculus L^ is such that, if Q is the lambda-abstraction (λx.P) with m ≥ 0 bound occurrences of x, the reduction can be carried out provided k = max(m,1). Every M in L^ is thus beta^-SN. We relate standard beta-reduction and non-standard beta^-reduction in several different ways, and draw several consequences, e.g. a new simple proof of the fact that a standard term M is beta-SN iff M can be assigned a so-called "intersection" type ("top" type disallowed).
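Under our reading of the abstract (in particular, pairing each R_i with one bound occurrence of x is our assumption), the non-standard reduction behaves as in these two schematic cases:

    (λx. x x . R1 ∧ R2)  →beta^  R1 R2    (m = 2 occurrences of x, so k = max(2,1) = 2)
    (λx. y . R1)  →beta^  y               (m = 0 occurrences, so k = max(0,1) = 1; the argument is discarded)

Since each abstraction consumes exactly as many arguments as it has bound occurrences (or discards a single one), no reduction step duplicates a subterm, which is the intuition behind every M in L^ being beta^-SN.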
Abstract:
Weak references are references that do not prevent the object they point to from being garbage collected. Most realistic languages, including Java, SML/NJ, and OCaml to name a few, have some facility for programming with weak references. Weak references are used in implementing idioms like memoizing functions and hash-consing in order to avoid potential memory leaks. However, the semantics of weak references in many languages are not clearly specified. Without a formal semantics for weak references, it becomes impossible to prove the correctness of implementations making use of this feature. Previous work by Hallett and Kfoury extends λgc, a language for modeling garbage collection, to λweak, a similar language with weak references. Using this previously formalized semantics for weak references, we consider two issues related to the well-behavedness of programs. Firstly, we provide a new, simpler proof of the well-behavedness of the syntactically restricted fragment of λweak defined previously. Secondly, we give a natural semantic criterion for well-behavedness, much broader than the syntactic restriction, which is useful as a principle for programming with weak references. Furthermore, we extend the result, proved previously for λgc, which allows one to use type inference to collect some reachable objects that are never used. We prove that this result holds for our language, and we extend it to allow the collection of weakly-referenced reachable garbage without incurring the computational overhead sometimes associated with collecting weak bindings (e.g. the need to recompute a memoized function). Lastly, we extend the semantic framework to model the key/value weak references found in Haskell, and we prove that the Haskell semantics is equivalent to a simpler semantics due to the lack of side effects in our language.
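To make the memoization idiom concrete, here is a minimal sketch in OCaml (one of the languages named above), using the stdlib's Weak module; the function weak_memo, the fixed table size, and the int -> string instantiation are illustrative assumptions of ours, not constructs of λweak:

    (* A memo table whose cached results are held only weakly: the GC is
       free to reclaim any cached result, in which case we recompute it.
       This avoids the memory leak of a strong cache at the cost of
       possible recomputation, the trade-off discussed in the abstract. *)
    let weak_memo (f : int -> string) : int -> string =
      let size = 64 in
      let results : string Weak.t = Weak.create size in  (* weak slots  *)
      let keys = Array.make size None in                 (* strong keys *)
      fun n ->
        let i = Hashtbl.hash n mod size in
        match keys.(i), Weak.get results i with
        | Some k, Some v when k = n -> v      (* hit: result still alive *)
        | _ ->
            let v = f n in                    (* miss, or entry collected *)
            keys.(i) <- Some n;
            Weak.set results i (Some v);
            v

Because a result is reachable only through its weak slot, a cached entry may be collected between calls; the function's observable behavior is unaffected, which is the kind of well-behavedness property the paper is concerned with.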
Abstract:
We consider different types of fractional branes on a Z2 orbifold of the conifold and analyze in detail the corresponding gauge/gravity duality. The gauge theory possesses a rich and varied dynamics, both in the UV and in the IR. We find the dual supergravity solution, which contains both untwisted and twisted 3-form fluxes, related to what are known as deformation and N=2 fractional branes, respectively. We analyze the resulting renormalization group flow from the supergravity perspective, by developing an algorithm to extract it easily. We find hints of a generalization of the familiar cascade of Seiberg dualities due to a nontrivial interplay between the different types of fractional branes. We finally consider the IR behavior in several limits, where the dominant effective dynamics is either confining, in a Coulomb phase, or runaway, and discuss the resolution of singularities in the dual geometric background.
Abstract:
There are three main approaches to the representation of temporal information in the AI literature: the so-called method of temporal arguments, which simply extends the functions and predicates of a first-order language to include time as an additional argument; modal temporal logics, which are extensions of the propositional or predicate calculus with modal temporal operators; and reified temporal logics, which reify standard propositions of some initial language (e.g., the classical first-order or modal logic) as objects denoting propositional terms. The objective of this paper is to provide an overview of the temporal reified approach by looking closely at some representative existing systems featuring reified propositions, including those of Allen, McDermott, Shoham, Reichgelt, Galton, and Ma and Knight. We shall demonstrate that, although reified logics might be more complicated in expressing assertions about some given objects with respect to different times, they accord a special status to time and therefore have several distinct advantages in talking about some important issues which would be difficult (if not impossible) to express in other approaches.
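On a single example sentence such as "block a is on block b at time t", the three approaches can be contrasted schematically (the predicate and operator names below are illustrative, not those of any one surveyed system):

    Temporal arguments:    On(a, b, t)          (time is an extra argument of the predicate)
    Modal temporal logic:  F On(a, b)           (a temporal operator, here "at some future time", prefixes the formula)
    Reified:               HOLDS(on(a, b), t)   (the proposition on(a, b) is a term, related to the time t by the meta-predicate HOLDS)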
Abstract:
Lennart Åqvist (1992) proposed a logical theory of legal evidence, based on the Bolding-Ekelöf degrees of evidential strength. This paper reformulates Åqvist's model in terms of the probabilistic version of the kappa calculus. Proving its acceptability in the legal context is beyond the present scope, but the epistemological debate about Bayesian Law is clearly relevant. While the present model is a possible link to that line of inquiry, we offer some considerations about the broader picture of the potential of AI & Law in the evidentiary context. Whereas probabilistic reasoning is well-researched in AI, calculations about the threshold of persuasion in litigation, whatever their value, are just the tip of the iceberg. The bulk of the modeling desiderata is arguably elsewhere, if one is to ideally make the most of AI's distinctive contribution as envisaged for legal evidence research.
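For readers unfamiliar with it, the kappa calculus grades propositions by integer degrees of disbelief, which the probabilistic version reads as orders of magnitude of an infinitesimal probability (this summary follows the standard Spohn/Goldszmidt-Pearl presentation rather than Åqvist's reformulation itself):

    κ(φ) = k    iff    P(φ) has the order of magnitude of ε^k, for an infinitesimal ε
    κ(φ ∨ ψ) = min(κ(φ), κ(ψ))    (disbelief in a disjunction is the lesser of the two disbeliefs)

so that κ = 0 marks unsurprising propositions and larger κ marks increasingly firm disbelief.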
Abstract:
A class of generalized Lévy Laplacians, which contain as a special case the ordinary Lévy Laplacian, is considered. Topics such as the limit average of the second-order functional derivative with respect to a certain equally dense (uniformly bounded) orthonormal base, the relations with Kuo’s Fourier transform, and other infinite-dimensional Laplacians are studied.
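For orientation, the ordinary Lévy Laplacian is exactly such a limit average (a standard definition; {e_n} is the equally dense orthonormal base and F″ the second functional derivative):

    Δ_L F(x) = lim_{N→∞} (1/N) Σ_{n=1}^{N} ⟨F″(x) e_n, e_n⟩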
Abstract:
Review of a semi-staged performance of Calculus by Carl Djerassi at the Royal Institution, London on 30 September 2002.