937 results for Formal languages
Abstract:
NASA is working on complex future missions that require cooperation between multiple satellites or rovers. To implement these systems, developers are proposing and using intelligent and autonomous systems. These autonomous missions are new to NASA, and the software development community is just learning to develop such systems. With these new systems, new verification and validation techniques must be used. Current techniques have been developed based on large monolithic systems. These techniques have worked well and reliably, but do not translate to the new autonomous systems that are highly parallel and nondeterministic.
Abstract:
The semantic model developed in this research was a response to the difficulty a group of mathematics learners had with conventional mathematical language and their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to include the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment introduced another layer of complexity, both in terms of the learning process and the underlying relationship between the four cognitive processes.
Abstract:
The challenge of detecting a change in the distribution of data is a sequential decision problem that is relevant to many engineering solutions, including quality control and machine and process monitoring. This dissertation develops techniques for exact solution of change-detection problems with discrete time and discrete observations. Change-detection problems are classified as Bayes or minimax based on the availability of information on the change-time distribution. A Bayes optimal solution uses prior information about the distribution of the change time to minimize the expected cost, whereas a minimax optimal solution minimizes the cost under the worst-case change-time distribution. Both types of problems are addressed. The most important result of the dissertation is the development of a polynomial-time algorithm for the solution of important classes of Markov Bayes change-detection problems. Existing techniques for epsilon-exact solution of partially observable Markov decision processes have complexity exponential in the number of observation symbols. A new algorithm, called constellation induction, exploits the concavity and Lipschitz continuity of the value function, and has complexity polynomial in the number of observation symbols. It is shown that change-detection problems with a geometric change-time distribution and identically- and independently-distributed observations before and after the change are solvable in polynomial time. Also, change-detection problems on hidden Markov models with a fixed number of recurrent states are solvable in polynomial time. A detailed implementation and analysis of the constellation-induction algorithm are provided. Exact solution methods are also established for several types of minimax change-detection problems. Finite-horizon problems with arbitrary observation distributions are modeled as extensive-form games and solved using linear programs. Infinite-horizon problems with linear penalty for detection delay and identically- and independently-distributed observations can be solved in polynomial time via epsilon-optimal parameterization of a cumulative-sum procedure. Finally, the properties of policies for change-detection problems are described and analyzed. Simple classes of formal languages are shown to be sufficient for epsilon-exact solution of change-detection problems, and methods for finding minimally sized policy representations are described.
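The cumulative-sum procedure referred to in the final results can be illustrated with a minimal sketch. The Python below implements the classic Page CUSUM recursion for independently and identically distributed observations with known pre- and post-change densities; the Gaussian example and the threshold value are hypothetical placeholders, not parameters taken from the dissertation, and the epsilon-optimal parameterization described there is not reproduced here.

    def cusum_change_index(observations, log_likelihood_ratio, threshold):
        """Classic Page CUSUM recursion: return the index at which a change is
        declared, or None if the statistic never crosses the threshold.

        log_likelihood_ratio(x) should return log(f1(x) / f0(x)), where f0 and
        f1 are the pre- and post-change observation densities.
        """
        s = 0.0
        for n, x in enumerate(observations):
            s = max(0.0, s + log_likelihood_ratio(x))  # statistic resets at zero
            if s >= threshold:
                return n
        return None

    # Hypothetical example: Gaussian observations whose mean shifts from 0 to 1.
    def llr(x, mu0=0.0, mu1=1.0, sigma=1.0):
        return ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)

    data = [0.1, -0.3, 0.2, 1.4, 0.9, 1.2, 1.1, 1.3]
    print(cusum_change_index(data, llr, threshold=3.0))   # -> 7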
Facilitating formal specification acquisition by using recursive functions on context-free languages
Abstract:
Although formal specification techniques are very useful in software development, the acquisition of formal specifications is a difficult task. This paper presents the formal specification language LFC, which is designed to facilitate the acquisition and validation of formal specifications. LFC uses context-free languages for the syntactic aspect of specifications and relies on a new kind of recursive function, namely recursive functions on context-free languages, for the semantic aspect. Construction and validation of LFC specifications are machine-aided. The basic ideas behind LFC, its main features, and its use are described, together with illustrative examples.
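The pairing of a context-free syntax with functions defined by recursion over its productions can be sketched informally. The Python below is not LFC itself; it is only a hypothetical illustration in which a tiny grammar for binary numerals supplies the syntactic aspect and a function defined by recursion on the parse tree supplies the semantic aspect.

    # Hypothetical illustration (not LFC): syntax given by a context-free
    # grammar, semantics given by a function recursive on its productions.
    #
    #     Bin -> Bin '0' | Bin '1' | '0' | '1'
    #
    # Parse trees are nested tuples tagged with the production that built them.

    def parse_bin(s):
        """Parse a non-empty binary numeral into a tree following the grammar."""
        if len(s) == 1:
            return ('digit', s)                          # Bin -> '0' | '1'
        return ('append', parse_bin(s[:-1]), s[-1])      # Bin -> Bin digit

    def value(tree):
        """Semantic function defined by recursion on the grammar's productions."""
        if tree[0] == 'digit':
            return int(tree[1])
        _, prefix, digit = tree
        return 2 * value(prefix) + int(digit)

    print(value(parse_bin('1011')))   # -> 11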
Abstract:
This paper shows how formal and informal modeling languages can be cooperatively used in the MDA framework, and how transformations between models in these languages can be achieved using an MDA development environment. The integrated approach also provides an effective V&V technique for the MDA.
Abstract:
In this thesis I examine a variety of linguistic elements which involve "alternative" semantic values (a class arguably including focus, interrogatives, indefinites, and disjunctions) and the connections between these elements. This study focusses on the analysis of such elements in Sinhala, with comparison to Malayalam, Tlingit, and Japanese. The central part of the study concerns the proper syntactic and semantic analysis of Q[uestion]-particles (including Sinhala "da", Malayalam "-oo", Japanese "ka"), which, in many languages, appear not only in interrogatives, but also in the formation of indefinites, disjunctions, and relative clauses. This set of contexts is syntactically heterogeneous, and so syntax does not offer an explanation for the appearance of Q-particles in this particular set of environments. I propose that these contexts can be united in terms of semantics, as all involving some element which denotes a set of "alternatives". Both wh-words and disjunctions can be analysed as creating Hamblin-type sets of "alternatives". Q-particles can be treated as uniformly denoting variables over choice functions which apply to the aforementioned Hamblin-type sets, thus "restoring" the derivation to normal Montagovian semantics. The treatment of Q-particles as uniformly denoting variables over choice functions provides an explanation for why these particles appear in just this set of contexts: they all include an element with Hamblin-type semantics. However, we also find variation in the use of Q-particles, including, in some languages, the appearance of multiple morphologically distinct Q-particles in different syntactic contexts. Such variation can be handled largely by positing that Q-particles may vary in their formal syntactic feature specifications, determining which syntactic contexts they are licensed in. The unified analysis of Q-particles as denoting variables over choice functions also raises various questions about the proper analysis of interrogatives, indefinites, and disjunctions, including issues concerning the nature of the semantics of wh-words and the syntactic structure of disjunction. As well, I observe that indefinites involving Q-particles have a crosslinguistic tendency to be epistemic indefinites, i.e. indefinites which explicitly signal ignorance of details regarding who or what satisfies the existential claim. I provide an account of such indefinites which draws on the analysis of Q-particles as variables over choice functions. These pragmatic "signals of ignorance" (which I argue to be presuppositions) also have a further role to play in determining the distribution of Q-particles in disjunctions. The final section of this study investigates the historical development of focus constructions and Q-particles in Sinhala. This diachronic study allows us not only to observe the origin and development of such elements, but also serves to delimit the range of possible synchronic analyses, thus providing us with further insights into the formal syntactic and semantic properties of Q-particles. This study highlights both the importance of considering various components of the grammar (e.g. syntax, semantics, pragmatics, morphology) and the use of philology in developing plausible formal analyses of complex linguistic phenomena such as the crosslinguistic distribution of Q-particles.
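The core compositional step described above, applying a choice function denoted by the Q-particle to a Hamblin-type set of alternatives, can be stated schematically. The notation below follows the standard choice-function treatment and is an illustration only, not a quotation from the thesis (it assumes the amsmath and stmaryrd LaTeX packages for \text and \llbracket/\rrbracket).

    \[
      \llbracket \textit{who} \rrbracket \;=\; \{\, x : \mathrm{person}(x) \,\}
      \qquad \text{(a Hamblin-type set of alternatives)}
    \]
    \[
      \llbracket \textit{da} \rrbracket \;=\; f \in \mathrm{CF},
      \qquad \mathrm{CF} \;=\; \{\, f : f(S) \in S \ \text{for every nonempty set } S \,\}
    \]
    \[
      \llbracket \textit{who da} \rrbracket \;=\; f\bigl(\{\, x : \mathrm{person}(x) \,\}\bigr)
      \qquad \text{(an ordinary individual-type denotation again)}
    \]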
Abstract:
We survey several of the research efforts pursued by the iBench and snBench projects in the CS Department at Boston University over the last half dozen years. These activities use ideas and methodologies inspired by recent developments in other parts of computer science -- particularly in formal methods and in the foundations of programming languages -- but now specifically applied to the certification of safety-critical networking systems. This is research jointly led by Azer Bestavros and Assaf Kfoury with the participation of Adam Bradley, Andrei Lapets, and Michael Ocean.
Abstract:
A weak reference is a reference to an object that is not followed by the pointer tracer when garbage collection is called. That is, a weak reference cannot prevent the object it references from being garbage collected. Weak references remain a troublesome programming feature largely because there is not an accepted, precise semantics that describes their behavior (in fact, we are not aware of any formalization of their semantics). The trouble is that weak references allow reachable objects to be garbage collected, therefore allowing garbage collection to influence the result of a program. Despite this difficulty, weak references continue to be used in practice for reasons related to efficient storage management, and are included in many popular programming languages (Standard ML, Haskell, OCaml, and Java). We give a formal semantics for a calculus called λweak that includes weak references and is derived from Morrisett, Felleisen, and Harper’s λgc. λgc formalizes the notion of garbage collection by means of a rewrite rule. Such a formalization is required to precisely characterize the semantics of weak references. However, the inclusion of a garbage-collection rewrite-rule in a language with weak references introduces non-deterministic evaluation, even if the parameter-passing mechanism is deterministic (call-by-value in our case). This raises the question of confluence for our rewrite system. We discuss natural restrictions under which our rewrite system is confluent, thus guaranteeing uniqueness of program result. We define conditions that allow other garbage collection algorithms to co-exist with our semantics of weak references. We also introduce a polymorphic type system to prove the absence of erroneous program behavior (i.e., the absence of “stuck evaluation”) and a corresponding type inference algorithm. We prove the type system sound and the inference algorithm sound and complete.
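The behaviour being formalised, that a weak reference does not keep its referent alive and that collection can therefore affect an observable result, is easy to see in any language with weak references. The Python sketch below uses the standard weakref and gc modules purely as an illustration; the calculus itself is defined over λgc, not Python.

    import gc
    import weakref

    class Cache:
        """Placeholder object that we will reference only weakly."""
        pass

    obj = Cache()
    weak = weakref.ref(obj)     # a weak reference: it does not keep obj alive
    print(weak() is obj)        # True: the referent is still strongly reachable

    del obj                     # drop the last strong reference
    gc.collect()                # give the collector a chance to run
    print(weak())               # None (typically): the referent has been collected

    # Whether and when the referent is collected determines the program's
    # output, which is exactly the nondeterminism that a semantics of weak
    # references must account for.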
Abstract:
Use of structuring mechanisms (such as modularisation) is widely believed to be one of the key ways to improve software quality. Structuring is considered to be at least as important for specification documents as for source code, since it is assumed to improve comprehensibility. Yet, as with most widely held assumptions in software engineering, there is little empirical evidence to support this hypothesis. Also, even if structuring can be shown to be a good thing, we do not know how much structuring is optimal. One of the more popular formal specification languages, Z, encourages structuring through its schema calculus. A controlled experiment is described in which two hypotheses about the effects of structure on the comprehensibility of Z specifications are tested. Evidence was found that structuring a specification into schemas of about 20 lines significantly improved comprehensibility over a monolithic specification. However, there seems to be no perceived advantage in breaking down the schemas into much smaller components. The experiment can be fully replicated.
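For readers unfamiliar with Z's schema calculus, a schema groups a set of declarations with a constraining predicate. The small example below is the well-known BirthdayBook schema from Spivey's reference manual, typeset with the zed-csp LaTeX package; it is included only as a generic illustration of what one short schema looks like, not as material from the experiment.

    \begin{schema}{BirthdayBook}
      known : \power NAME \\
      birthday : NAME \pfun DATE
    \where
      known = \dom birthday
    \end{schema}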
Abstract:
The Zipf curves of log of frequency against log of rank for a large English corpus of 500 million word tokens, 689,000 word types, and for a large Spanish corpus of 16 million word tokens, 139,000 word types, are shown to have the usual slope close to –1 for rank less than 5,000, but then for higher ranks they turn to give a slope close to –2. This is apparently mainly due to foreign words and place names. Other Zipf curves for highly inflected Indo-European languages, Irish and ancient Latin, are also given. Because of the larger number of word types per lemma, they remain flatter than the English curve, maintaining a slope of –1 until turning points of about rank 30,000 for Irish and 10,000 for Latin. A formula which calculates the number of tokens given the number of types is derived in terms of the rank at the turning point, 5,000 for both English and Spanish, 30,000 for Irish and 10,000 for Latin.
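A rough way to see where such a token-count formula comes from, sketched here under the two-slope model described above and not taken from the paper: if the frequency of the rank-r word falls as C/r up to the turning-point rank b and as C·b/r² beyond it (so the two pieces meet at rank b), then the token count is approximately the sum of these frequencies over all word types. A small Python check with hypothetical numbers:

    def approx_tokens(num_types, turn_rank, top_freq):
        """Approximate token count under a two-slope Zipf model (illustrative only).

        The word of rank r is modelled with frequency
            top_freq / r                   for r <= turn_rank   (slope -1)
            top_freq * turn_rank / r**2    for r >  turn_rank   (slope -2)
        so that the two pieces agree at r = turn_rank.
        """
        total = 0.0
        for r in range(1, num_types + 1):
            if r <= turn_rank:
                total += top_freq / r
            else:
                total += top_freq * turn_rank / r ** 2
        return total

    # Hypothetical numbers, not the corpora analysed in the paper:
    print(round(approx_tokens(num_types=100_000, turn_rank=5_000, top_freq=1_000_000)))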
Abstract:
In this paper we continue our investigation into the development of computational-science software based on the identification and formal specification of Abstract Data Types (ADTs) and their implementation in Fortran 90. In particular, we consider the consequences of using pointers when implementing a formally specified ADT in Fortran 90. Our aim is to highlight the resulting conflict between the goal of information hiding, which is central to the ADT methodology, and the space efficiency of the implementation. We show that the issue of storage recovery cannot be avoided by the ADT user, and present a range of implementations of a simple ADT to illustrate various approaches towards satisfactory storage management. Finally, we propose a set of guidelines for implementing ADTs using pointers in Fortran 90. These guidelines offer a way to provide disposal operations gracefully in Fortran 90. Such an approach is desirable since Fortran 90 does not provide the automatic garbage collection offered by many object-oriented languages, including Eiffel, Java, Smalltalk, and Simula.
Abstract:
Intimate Ecologies considers the practice of exhibition-making over the past decade in formal museum and gallery spaces and its relationship to creating a concept of craft in contemporary Britain. Different forms of expression found in traditions of still life painting, film and moving image, poetic text and performance are examined to highlight the complex layers of language at play in exhibitions and within a concept of craft. The thesis presents arguments for understanding the value of embodied material knowledge to aesthetic experience in exhibitions, across a spectrum of human expression. These are supported by reference to exhibition case studies, critical and theoretical works from fields including social anthropology, architecture, art and design history and literary criticism and a range of individual, original works of art. Intimate Ecologies concludes that the museum exhibition, as a creative medium for understanding objects, becomes enriched by close study of material practice, and embodied knowledge that draws on a concept of craft. In turn a concept of craft is refreshed by the makers’ participation in shifting patterns of exhibition-making in cultural spaces that allow the layers of language embedded in complex objects to be experienced from different perspectives. Both art-making and the experience of objects are intimate, and infinitely varied: a vibrant ecology of exhibition-making gives space to this diversity.