746 results for Syntax
Abstract:
Information Systems for complex situations often fail to adequately deliver quality and suitability. One reason for this failure is an inability to identify comprehensive user requirements. Seldom do all stakeholders, especially those "invisible" or "back room" system users, have a voice when systems are designed. If this is a global problem, it may affect both the public and private sectors in terms of their ability to perform, produce and stay competitive. To improve upon this, system designers use rich pictures as a diagrammatic means of identifying differing world views with the aim of creating a shared understanding of the organisation. Rich pictures have predominantly been used as freeform, unstructured tools with no commonly agreed syntax. This research has collated, analysed and documented a substantial collection of rich pictures into a single dataset. Attention has been focussed on three main research areas: how the rich picture is facilitated, how the rich picture is constructed, and how to interpret the resultant pictures. This research highlights the importance of the rich picture tool and argues the value of adding levels of structure in certain cases. It is shown that providing a pre-drawing session, a common key of symbols and a framework for icon understanding brings considerable benefits for both the interpreter and the creator. In conclusion, it is suggested that there is some evidence that a framework which aims to support the rich picture process and aid interpretation is valuable.
Abstract:
We show that children’s syntactic production is immediately affected by individual experiences of structures and verb–structure pairings within a dialogue, but that these effects have different timecourses. In a picture-matching game, three- to four-year-olds were more likely to describe a transitive action using a passive immediately after hearing the experimenter produce a passive than an active (abstract priming), and this tendency was stronger when the verb was repeated (lexical boost). The lexical boost disappeared after two intervening utterances, but the abstract priming effect persisted. This pattern did not differ significantly from that of control adults. Children also showed a cumulative priming effect. Our results suggest that whereas the same mechanism may underlie children’s immediate syntactic priming and long-term syntactic learning, different mechanisms underlie the lexical boost versus long-term learning of verb–structure links. They also suggest broad continuity of syntactic processing in production between this age group and adults.
Abstract:
This dissertation attempts to work out a compromise between strictly morphological and universalist approaches to the problem of aspect. Its main goal is to show by what linguistic means perfectivity and imperfectivity are expressed in Danish and Polish. The starting point is a view of aspect as a semantic category whose exponents may be of varied character. At the same time, it is assumed that aspect is expressed in a context wider than the verb form itself, and that the final aspectual value at the sentence level is closely tied to the type of action (Aktionsart) to which the situation described in the sentence belongs. The notion of a verbal constellation is defined (after Smith 1997), consisting of a subject, a predicate and, optionally, an object or an adverbial of place/direction; it constitutes the minimal context in which types of action can be expressed. The proposed theory of aspect provides for two superordinate semantic values, perfectivity and imperfectivity, within which, owing to the lack of a semantic invariant, three types of perfective meaning (terminative, inchoative and semelfactive) and two types of imperfective meaning (cursive and habitual-generic) are distinguished. In order to establish common semantic denominators for the two superordinate groups, Reichenbach's event time, speech time and reference time are redefined. The resulting definitions link perfectivity with a punctual reference time and imperfectivity with a linear reference time. The scope of the study is limited to the means of expressing perfectivity and imperfectivity in Danish and Polish simple declarative sentences without negation that refer to the past. Phenomena connected with so-called secondary perfectivity and imperfectivity, including multi-prefixed verbs, are also left aside. Despite certain similarities in how perfectivity and imperfectivity are expressed in the two languages studied, the differences in the character of the exponents of aspectual meaning in Danish and Polish are considerable. Since Danish cannot express aspectual meanings by means of dedicated affixes, there is a considerable risk of misinterpreting the aspectual value of a sentence. There is, however, a range of syntactic constructions that allow perfectivity or imperfectivity to be marked unambiguously. In Polish, aspectual meanings are expressed primarily by morphological exponents, but also by syntactic constructions.
Abstract:
The main aim of the dissertation is an interpretation of the work of Buland al-Ḥaydarī, an Iraqi poet of Kurdish origin (1926-1996). Al-Ḥaydarī's poems display the characteristic features of the modern Arabic poem. They also describe the most important historical, social and personal experiences of the poet's literary generation, as well as of other members of Arab society in the period after the Second World War. The doctoral thesis consists of two parts. The first part is preceded by a short survey of the state of research on Al-Ḥaydarī's poetry. Its first chapter presents the poet's life and literary biography and explains notions such as 'modern Arabic poetry', 'free verse', 'prose poem' and 'the free verse movement'. It also mentions the main currents in Arabic poetry of the 1950s and 1960s and shows how the modernist poets understood poetry. The second chapter briefly presents the characteristic features (stylistic, syntactic and melodic) of the modern Arabic poem and illustrates them with excerpts from Al-Ḥaydarī's works. The second part of the dissertation consists of five chapters. Each presents one or more principal motifs and their variants: the first chapter treats the motif of love; the second, the motif of existence (life and death); the third, the motifs of homeland, exile, the poet's social engagement and his alienation; the fourth, the motif of space (e.g. home, road, paradise, hell); and the fifth, the motif of time (past, present, future).
Abstract:
Predictability - the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements - is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems - possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing - cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems - not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the CLEOPATRA programming language. CLEOPATRA features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. CLEOPATRA is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of CLEOPATRA has been in use as a specification and simulation language for embedded time-critical robotic processes.
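To make the flavor of the formalism concrete, the sketch below models a time-constrained reactive automaton as a Haskell data type in which every transition must respond within a bounded, strictly positive delay, so that instantaneous or clairvoyant behaviour simply cannot be written down. The record layout, field names and time model are illustrative assumptions, not the actual TRA definition and not CLEOPATRA syntax.

    -- Minimal sketch of a time-constrained reactive automaton (TRA).
    -- All names and the time model are assumptions made for exposition.
    module TRA where

    type Time    = Double          -- global real time
    type Channel = String          -- named input/output events

    data Event = Event { channel :: Channel, at :: Time }
      deriving Show

    -- A transition reacts to an input event and must emit its output
    -- within a bounded, strictly positive delay: no zero-delay or
    -- clairvoyant reactions, no unbounded waiting.
    data Transition st = Transition
      { trigger  :: Channel
      , minDelay :: Time                 -- > 0, rules out instantaneous response
      , maxDelay :: Time                 -- finite, rules out waiting forever
      , step     :: st -> (st, Channel)  -- next state and output channel
      }

    data TRA st = TRA
      { initial     :: st
      , transitions :: [Transition st]
      }

    -- One reactive step: consume an input event and produce the new state
    -- together with an output event stamped inside the allowed delay window.
    react :: TRA st -> st -> Event -> Maybe (st, Event)
    react tra s (Event c t) =
      case [tr | tr <- transitions tra, trigger tr == c] of
        []       -> Nothing
        (tr : _) ->
          let (s', out) = step tr s
          in Just (s', Event out (t + minDelay tr))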
Abstract:
Predictability -- the ability to foretell that an implementation will not violate a set of specified reliability and timeliness requirements -- is a crucial, highly desirable property of responsive embedded systems. This paper overviews a development methodology for responsive systems, which enhances predictability by eliminating potential hazards resulting from physically-unsound specifications. The backbone of our methodology is the Time-constrained Reactive Automaton (TRA) formalism, which adopts a fundamental notion of space and time that restricts expressiveness in a way that allows the specification of only reactive, spontaneous, and causal computation. Using the TRA model, unrealistic systems -- possessing properties such as clairvoyance, caprice, infinite capacity, or perfect timing -- cannot even be specified. We argue that this "ounce of prevention" at the specification level is likely to spare a lot of time and energy in the development cycle of responsive systems -- not to mention the elimination of potential hazards that would otherwise have gone unnoticed. The TRA model is presented to system developers through the Cleopatra programming language. Cleopatra features a C-like imperative syntax for the description of computation, which makes it easier to incorporate in applications already using C. It is event-driven, and thus appropriate for embedded process control applications. It is object-oriented and compositional, thus advocating modularity and reusability. Cleopatra is semantically sound; its objects can be transformed, mechanically and unambiguously, into formal TRA automata for verification purposes, which can be pursued using model-checking or theorem-proving techniques. Since 1989, an ancestor of Cleopatra has been in use as a specification and simulation language for embedded time-critical robotic processes.
Abstract:
We present a type system, StaXML, which employs the stacked type syntax to represent essential aspects of the potential roles of XML fragments in the structure of complete XML documents. The simplest application of this system is to enforce well-formedness upon the construction of XML documents without requiring the use of templates or balanced "gap plugging" operators; this allows it to be applied to programs written according to common imperative web scripting idioms, particularly the echoing of unbalanced XML fragments to an output buffer. The system can be extended to verify particular XML applications such as XHTML and to identify individual XML tags constructed from their lexical components. We also present StaXML for PHP, a prototype precompiler for the PHP4 scripting language which infers StaXML types for expressions without assistance from the programmer.
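To give a flavour of what a stacked type can track, the Haskell sketch below types each XML fragment by its net effect on a stack of open element names: the tags it expects to close and the tags it leaves open. The representation and names (FragType, seqFrag, the exact-match sequencing rule) are illustrative assumptions, not StaXML's actual type syntax.

    -- Illustrative sketch: typing unbalanced XML fragments by their effect
    -- on a stack of open element names. An assumed model, not StaXML itself.
    module StackedXml where

    type Tag = String

    -- The "type" of a fragment: the open tags it expects (and closes),
    -- and the tags it leaves newly opened, innermost first.
    data FragType = FragType { closes :: [Tag], opens :: [Tag] }
      deriving (Eq, Show)

    -- A fragment that only opens <html><body>.
    header :: FragType
    header = FragType { closes = [], opens = ["body", "html"] }

    -- A fragment that only closes </body></html>.
    footer :: FragType
    footer = FragType { closes = ["body", "html"], opens = [] }

    -- Sequencing two fragments: what the second closes must match what the
    -- first leaves open. Simplified to the exact-match case; a fuller
    -- version would cancel matching prefixes and propagate the rest.
    seqFrag :: FragType -> FragType -> Maybe FragType
    seqFrag (FragType c1 o1) (FragType c2 o2)
      | o1 == c2  = Just (FragType c1 o2)
      | otherwise = Nothing                -- ill-formed echo order

    -- A whole document must neither expect nor leave open tags.
    wellFormed :: FragType -> Bool
    wellFormed t = null (closes t) && null (opens t)

Under this reading, echoing the header fragment followed by the footer fragment composes to an empty stack effect and is well-formed, while omitting the footer leaves the opens component non-empty and the document is rejected.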
Abstract:
Mitchell defined and axiomatized a subtyping relationship (also known as containment, coercibility, or subsumption) over the types of System F (with "→" and "∀"). This subtyping relationship is quite simple and does not involve bounded quantification. Tiuryn and Urzyczyn quite recently proved this subtyping relationship to be undecidable. This paper supplies a new undecidability proof for this subtyping relationship. First, a new syntax-directed axiomatization of the subtyping relationship is defined. Then, this axiomatization is used to prove a reduction from the undecidable problem of semi-unification to subtyping. The undecidability of subtyping implies the undecidability of type checking for System F extended with Mitchell's subtyping, also known as "F plus eta".
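As one concrete instance of this containment relation (an example, not its full axiomatization), type instantiation is a subtyping step: ∀t.σ is contained in σ[τ/t] for any type τ; for example, ∀t.(t→t) is contained in (u→u)→(u→u), taking τ to be u→u.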
Abstract:
If every lambda-abstraction in a lambda-term M binds at most one variable occurrence, then M is said to be "linear". Many questions about linear lambda-terms are relatively easy to answer, e.g. they all are beta-strongly normalizing and all are simply-typable. We extend the syntax of the standard lambda-calculus L to a non-standard lambda-calculus L^ satisfying a linearity condition generalizing the notion in the standard case. Specifically, in L^ a subterm Q of a term M can be applied to several subterms R1,...,Rk in parallel, which we write as (Q. R1 ∧ ... ∧ Rk). The appropriate notion of beta-reduction beta^ for the calculus L^ is such that, if Q is the lambda-abstraction (λx.P) with m ≥ 0 bound occurrences of x, the reduction can be carried out provided k = max(m,1). Every M in L^ is thus beta^-SN. We relate standard beta-reduction and non-standard beta^-reduction in several different ways, and draw several consequences, e.g. a new simple proof for the fact that a standard term M is beta-SN iff M can be assigned a so-called "intersection" type ("top" type disallowed).
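As a rough illustration of the non-standard application form, the Haskell sketch below represents L^ terms with an application node carrying a list of parallel arguments and performs one root-level beta^ step, firing only when the number of arguments equals max(m,1) for the m occurrences of the bound variable. How the k arguments are distributed over the occurrences (here: left to right, one per occurrence, with capture-avoidance omitted) is an assumption made for illustration, not a claim about the calculus.

    -- Sketch of the non-standard calculus L^: an application node carries
    -- several arguments in parallel, (Q. R1 ∧ ... ∧ Rk). The distribution
    -- of arguments over occurrences below is an illustrative assumption.
    module LWedge where

    data Term
      = Var String
      | Lam String Term
      | App Term [Term]        -- (Q. R1 ∧ ... ∧ Rk)
      deriving Show

    -- Number of free occurrences of x in a term.
    occurrences :: String -> Term -> Int
    occurrences x (Var y)    = if x == y then 1 else 0
    occurrences x (Lam y b)  = if x == y then 0 else occurrences x b
    occurrences x (App f as) = occurrences x f + sum (map (occurrences x) as)

    -- One beta^ step at the root: (λx.P. R1 ∧ ... ∧ Rk) reduces only when
    -- k = max(m,1), where m is the number of occurrences of x in P.
    betaStep :: Term -> Maybe Term
    betaStep (App (Lam x p) rs)
      | length rs == max (occurrences x p) 1 = Just (fst (subst p rs))
      where
        -- Substitute the arguments for successive occurrences of x, left
        -- to right, returning the unused arguments (capture-avoidance and
        -- the m = 0 discard case are simplified for the sketch).
        subst (Var y) args
          | y == x, (a : rest) <- args = (a, rest)
          | otherwise                  = (Var y, args)
        subst (Lam y b) args
          | y == x    = (Lam y b, args)
          | otherwise = let (b', args') = subst b args in (Lam y b', args')
        subst (App f ts) args =
          let (f', args1)  = subst f args
              (ts', args2) = substList ts args1
          in (App f' ts', args2)
        substList [] args     = ([], args)
        substList (t:ts) args =
          let (t', args1)  = subst t args
              (ts', args2) = substList ts args1
          in (t' : ts', args2)
    betaStep _ = Nothing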
Abstract:
System F is the well-known polymorphically-typed λ-calculus with universal quantifiers ("∀"). F+η is System F extended with the eta rule, which says that if term M can be given type τ and M η-reduces to N, then N can also be given the type τ. Adding the eta rule to System F is equivalent to adding the subsumption rule using the subtyping ("containment") relation that Mitchell defined and axiomatized [Mit88]. The subsumption rule says that if M can be given type τ and τ is a subtype of type σ, then M can be given type σ. Mitchell's subtyping relation involves no extensions to the syntax of types, i.e., no bounded polymorphism and no supertype of all types, and is thus unrelated to the system F≤ ("F-sub"). Typability for F+η is the problem of determining for any term M whether there is any type τ that can be given to it using the type inference rules of F+η. Typability has been proven undecidable for System F [Wel94] (without the eta rule), but the decidability of typability has been an open problem for F+η. Mitchell's subtyping relation has recently been proven undecidable [TU95, Wel95b], implying the undecidability of "type checking" for F+η. This paper reduces the problem of subtyping to the problem of typability for F+η, thus proving the undecidability of typability. The proof methods are similar in outline to those used to prove the undecidability of typability for System F, but the fine details differ greatly.
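Written as inference rules (a direct transcription of the two prose statements above, in LaTeX, with Γ standing for a type environment):

    \frac{\Gamma \vdash M : \tau \qquad M \to_{\eta} N}{\Gamma \vdash N : \tau}\;(\textsc{eta})
    \qquad
    \frac{\Gamma \vdash M : \tau \qquad \tau \le \sigma}{\Gamma \vdash M : \sigma}\;(\textsc{sub})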
Abstract:
snBench is a platform on which novice users compose and deploy distributed Sense and Respond programs for simultaneous execution on a shared, distributed infrastructure. It is a natural imperative that we have the ability to (1) verify the safety/correctness of newly submitted tasks and (2) derive the resource requirements for these tasks such that correct allocation may occur. To achieve these goals we have established a multi-dimensional sized type system for our functional-style Domain Specific Language (DSL) called Sensor Task Execution Plan (STEP). In such a type system data types are annotated with a vector of size attributes (e.g., upper and lower size bounds). Tracking multiple size aspects proves essential in a system in which Images are manipulated as a first class data type, as image manipulation functions may have specific minimum and/or maximum resolution restrictions on the input they can correctly process. Through static analysis of STEP instances we not only verify basic type safety and establish upper computational resource bounds (i.e., time and space), but we also derive and solve data and resource sizing constraints (e.g., Image resolution, camera capabilities) from the implicit constraints embedded in program instances. In fact, the static methods presented here have benefit beyond their application to Image data, and may be extended to other data types that require tracking multiple dimensions (e.g., image "quality", video frame-rate or aspect ratio, audio sampling rate). In this paper we present the syntax and semantics of our functional language, our type system that builds costs and resource/data constraints, and (through both formalism and specific details of our implementation) provide concrete examples of how the constraints and sizing information are used in practice.
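As a small illustration of multi-dimensional sized types, the Haskell sketch below annotates an image type with lower and upper bounds on two size attributes and solves the implicit compatibility constraint between a producer and a consumer. The names and concrete bounds (cameraOutput, faceDetectInput, 640x480, and so on) are hypothetical examples, not part of snBench or STEP.

    -- Sketch of a multi-dimensional sized type in the spirit of the STEP
    -- system described above. All names and bounds are illustrative.
    module SizedTypes where

    -- Inclusive bounds on one size attribute (e.g. width or height in pixels).
    data Bounds = Bounds { lower :: Int, upper :: Int }
      deriving Show

    -- An Image type annotated with a vector of size attributes.
    data ImageTy = ImageTy { width :: Bounds, height :: Bounds }
      deriving Show

    -- Intersect two bounds; Nothing means they are incompatible.
    meet :: Bounds -> Bounds -> Maybe Bounds
    meet (Bounds l1 u1) (Bounds l2 u2)
      | l <= u    = Just (Bounds l u)
      | otherwise = Nothing
      where l = max l1 l2
            u = min u1 u2

    -- A (hypothetical) camera producing frames between 320x240 and 1920x1080.
    cameraOutput :: ImageTy
    cameraOutput = ImageTy (Bounds 320 1920) (Bounds 240 1080)

    -- A (hypothetical) detector that needs at least 640x480 input.
    faceDetectInput :: ImageTy
    faceDetectInput = ImageTy (Bounds 640 maxBound) (Bounds 480 maxBound)

    -- Composing producer and consumer: solve the implicit size constraints.
    -- Nothing means the program instance can be rejected statically.
    compatible :: ImageTy -> ImageTy -> Maybe ImageTy
    compatible (ImageTy w1 h1) (ImageTy w2 h2) =
      ImageTy <$> meet w1 w2 <*> meet h1 h2

Here compatible cameraOutput faceDetectInput narrows the admissible resolution to 640-1920 by 480-1080, whereas an incompatible pair yields Nothing and the task can be rejected before deployment.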
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, broad accessibility has not been a priority in the design of formal verification tools that can provide these benefits. We propose a few design criteria to address these issues: a simple, familiar, and conventional concrete syntax that is independent of any environment, application, or verification strategy, and the possibility of reducing workload and entry costs by employing features selectively. We demonstrate the feasibility of satisfying such criteria by presenting our own formal representation and verification system. Our system's concrete syntax overlaps with English, LaTeX and MediaWiki markup wherever possible, and its verifier relies on heuristic search techniques that make the formal authoring process more manageable and consistent with prevailing practices. We employ techniques and algorithms that ensure a simple, uniform, and flexible definition and design for the system, so that it is easy to augment, extend, and improve.
Abstract:
In research areas involving mathematical rigor, there are numerous benefits to adopting a formal representation of models and arguments: reusability, automatic evaluation of examples, and verification of consistency and correctness. However, accessibility has not been a priority in the design of formal verification tools that can provide these benefits. In earlier work [30] we attempted to address this broad problem by proposing several specific design criteria organized around the notion of a natural context: the sphere of awareness a working human user maintains of the relevant constructs, arguments, experiences, and background materials necessary to accomplish the task at hand. In this report we evaluate our proposed design criteria by using, within the context of novel research, a formal reasoning system that is designed according to these criteria. In particular, we consider how the design and capabilities of the formal reasoning system that we employ influence, aid, or hinder our ability to accomplish a formal reasoning task: the assembly of a machine-verifiable proof pertaining to the NetSketch formalism. NetSketch is a tool for the specification of constrained-flow applications and the certification of desirable safety properties imposed thereon. NetSketch is conceived to assist system integrators in two types of activities: modeling and design. It provides capabilities for compositional analysis based on a strongly-typed domain-specific language (DSL) for describing and reasoning about constrained-flow networks and invariants that need to be enforced thereupon. In a companion paper [13] we overview NetSketch, highlight its salient features, and illustrate how it could be used in actual applications. In this paper, we define, using a machine-readable syntax, major parts of the formal system underlying the operation of NetSketch, along with its semantics and a corresponding notion of validity. We then provide a proof of soundness for the formalism that can be partially verified using a lightweight formal reasoning system that simulates natural contexts. A traditional presentation of these definitions and arguments can be found in the full report on the NetSketch formalism [12].
Abstract:
This paper formally defines the operational semantics for TRAFFIC, a specification language for flow composition applications proposed in BUCS-TR-2005-014, and presents a type system based on desired safety assurance. We provide proofs on reduction (weak confluence, strong normalization and unique normal form), on soundness and completeness of the type system with respect to reduction, and on equivalence classes of flow specifications. Finally, we provide a pseudo-code listing of a syntax-directed type checking algorithm that implements the rules of the type system and is capable of inferring the type of a closed flow specification.
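Since the abstract does not spell out TRAFFIC's rules, the Haskell sketch below only illustrates the general shape of a syntax-directed type checking algorithm: one clause per syntactic construct, so the applicable rule is determined by the form of the term. The placeholder term and type languages (Var/Lam/App, TyFlow/TyFun) are assumptions for exposition, unrelated to TRAFFIC's actual flow specifications.

    -- Generic shape of a syntax-directed type checker: the structure of the
    -- term dictates which rule applies. Placeholder syntax, not TRAFFIC's.
    module SyntaxDirected where

    import qualified Data.Map as Map

    data Ty   = TyFlow | TyFun Ty Ty          deriving (Eq, Show)
    data Term = Var String
              | Lam String Ty Term
              | App Term Term                 deriving Show

    type Env = Map.Map String Ty

    check :: Env -> Term -> Either String Ty
    check env (Var x) =
      maybe (Left ("unbound: " ++ x)) Right (Map.lookup x env)
    check env (Lam x t body) = do
      bodyTy <- check (Map.insert x t env) body
      pure (TyFun t bodyTy)
    check env (App f a) = do
      fTy <- check env f
      aTy <- check env a
      case fTy of
        TyFun dom cod | dom == aTy -> pure cod
        _ -> Left "application of a non-function or argument type mismatch"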
Abstract:
We present a type inference algorithm, in the style of compositional analysis, for the language TRAFFIC—a specification language for flow composition applications proposed in [2]—and prove that this algorithm is correct: the typings it infers are principal typings, and the typings agree with syntax-directed type checking on closed flow specifications. This algorithm is capable of verifying partial flow specifications, which is a significant improvement over the syntax-directed type checking algorithm presented in [3]. We also show that this algorithm runs efficiently, i.e., in low-degree polynomial time.