941 results for context-free language


Relevance: 100.00%

Abstract:

Mutation testing has been used to assess the quality of test case suites by analyzing their ability to distinguish the artifact under test from a set of alternative artifacts, the so-called mutants. The mutants are generated from the artifact under test by applying a set of mutant operators, which produce artifacts with simple syntactic differences. The mutant operators are usually based on typical errors that occur during software development and can be related to a fault model. In this paper, we propose a language, named MuDeL (MUtant DEfinition Language), for the definition of mutant operators, aiming not only at automating mutant generation, but also at providing precision and formality to the operator definitions. The proposed language is based on concepts from the transformational and logic programming paradigms, as well as on context-free grammar theory. The formal framework of denotational semantics is employed to define the semantics of the MuDeL language. We also describe a system, named mudelgen, developed to support the use of this language. An executable representation of the denotational semantics of the language is used to check the correctness of the mudelgen implementation. As the final product, a mutant generator module is produced, which can be incorporated into a specific mutation testing tool or environment. (C) 2008 Elsevier Ltd. All rights reserved.
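
To make the idea concrete, here is a loose sketch (not MuDeL syntax; the operator, the toy program and the function name arithmetic_operator_replacement are invented for this illustration) of how a single mutant operator yields one mutant per matching point of the artifact under test:

```python
# Loose sketch of mutant generation: a mutant operator is applied at
# every matching point of the artifact under test, producing one mutant
# per application. MuDeL defines such operators formally over the
# grammar of the target language; here we simply patch raw text.

def arithmetic_operator_replacement(source: str):
    """Yield one mutant per occurrence of '+' replaced by '-'."""
    for i, ch in enumerate(source):
        if ch == "+":
            yield source[:i] + "-" + source[i + 1:]

original = "def total(a, b, c):\n    return a + b + c\n"
for n, mutant in enumerate(arithmetic_operator_replacement(original), 1):
    print(f"--- mutant {n} ---\n{mutant}")
```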

Relevance: 100.00%

Abstract:

We present initial research regarding a system capable of generating novel card games. We furthermore propose a method for computationally analysing existing games of the same genre. Ultimately, we present a formalisation of card game rules and a context-free grammar, G_cardgame, capable of expressing the rules of a large variety of card games. Example derivations are given for the poker variant Texas hold'em, Blackjack and UNO. Stochastic simulations are used both to verify the implementation of these well-known games and to evaluate the results of new game rules derived from the grammar. In future work, this grammar will be used to evolve completely novel card games using a grammar-guided genetic program.
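
As a rough illustration of how a context-free grammar can derive rule texts for card games (the miniature grammar and the derive helper below are invented; they are far simpler than the paper's G_cardgame), a random derivation can be generated as follows:

```python
import random

# Toy context-free grammar for card game rules, given as a dict from
# nonterminals to lists of alternative right-hand sides. This is an
# invented miniature, not the G_cardgame grammar from the paper.
GRAMMAR = {
    "<game>":    [["<setup>", "<turn>", "<wincond>"]],
    "<setup>":   [["deal", "<number>", "cards", "to", "each", "player", "."]],
    "<turn>":    [["each", "turn", ",", "a", "player", "<action>", "."]],
    "<action>":  [["draws", "a", "card"], ["plays", "a", "card"],
                  ["bets", "<number>", "chips"]],
    "<wincond>": [["the", "player", "with", "the", "<goal>", "wins", "."]],
    "<goal>":    [["highest", "hand"], ["fewest", "cards"], ["most", "chips"]],
    "<number>":  [["two"], ["five"], ["seven"]],
}

def derive(symbol: str, rng: random.Random) -> list[str]:
    """Expand a symbol into a list of terminals by random derivation."""
    if symbol not in GRAMMAR:          # terminal symbol
        return [symbol]
    rhs = rng.choice(GRAMMAR[symbol])  # pick one production
    return [tok for part in rhs for tok in derive(part, rng)]

print(" ".join(derive("<game>", random.Random(1))))
```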

Relevance: 90.00%

Abstract:

The human language-learning ability persists throughout life, indicating considerable flexibility at the cognitive and neural level. This ability spans from expanding the vocabulary of the mother tongue to acquiring a new language with its lexicon and grammar. The present thesis consists of five studies that tap both of these aspects of adult language learning by using magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) during language processing and language learning tasks. The thesis shows that learning novel phonological word forms, either in the native tongue or when exposed to a foreign phonology, activates the brain in similar ways. The results also show that novel native words readily become integrated in the mental lexicon. Several studies in the thesis highlight the left temporal cortex as an important brain region in learning and accessing phonological forms. Incidental learning of foreign phonological word forms was reflected in functionally distinct temporal lobe areas that, respectively, reflected short-term memory processes and more stable learning that persisted to the next day. In a study where explicitly trained items were tracked for ten months, it was found that enhanced naming-related temporal and frontal activation one week after learning was predictive of good long-term memory. The results suggest that memory maintenance is an active process that depends on mechanisms of reconsolidation, and that these processes vary considerably between individuals. The thesis puts special emphasis on studying language learning in the context of language production. The neural foundation of language production has been studied considerably less than that of language perception, especially at the sentence level. A well-known paradigm in language production studies is picture naming, also used as a clinical tool in neuropsychology. This thesis shows that accessing the meaning and accessing the phonological form of a depicted object are subserved by different neural implementations. Moreover, a comparison between action and object naming from identical images indicated that the grammatical class of the retrieved word (verb, noun) is less important than the visual content of the image. In the present thesis, picture naming was further modified into a novel paradigm in order to probe sentence-level speech production in a newly learned miniature language. Neural activity related to grammatical processing did not differ between the novel language and the mother tongue, but stronger neural activation for the novel language was observed during the planning of the upcoming output, likely related to more demanding lexical retrieval and short-term memory. In sum, the thesis aimed at examining language learning by combining different linguistic domains, such as phonology, semantics, and grammar, in a dynamic description of language processing in the human brain.

Relevance: 90.00%

Abstract:

The thesis presents results obtained during the author's PhD studies. First, systems of language equations of a simple form, consisting of just two equations, are proved to be computationally universal. These are systems over a unary alphabet, which are seen as systems of equations over natural numbers. The systems contain only an equation X+A=B and an equation X+X+C=X+X+D, where A, B, C and D are eventually periodic constants. It is proved that for every recursive set S there exist natural numbers p and d, and eventually periodic sets A, B, C and D, such that a number n is in S if and only if np+d is in the unique solution of the above-mentioned system of two equations; thus all recursive sets can be represented in an encoded form. It is also proved that not all recursive sets can be represented as they are, so the encoding is really needed. Furthermore, it is proved that the family of languages generated by Boolean grammars is closed under injective gsm-mappings and inverse gsm-mappings. The arguments apply also to the families of unambiguous Boolean languages, conjunctive languages and unambiguous languages. Finally, characterizations of morphisms preserving subfamilies of context-free languages are presented. It is shown that the families of deterministic and LL context-free languages are closed under codes if and only if these codes are of bounded deciphering delay. These families are also closed under those non-codes that map every letter into a submonoid generated by a single word. The family of unambiguous context-free languages is closed under all codes and under the same non-codes as the families of deterministic and LL context-free languages.
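
Written out, the two-equation system described above takes the following form (a sketch reconstructed from the abstract only; X is the unknown, A, B, C, D are eventually periodic sets of natural numbers, and + denotes elementwise addition of sets):

```latex
% The two-equation system over sets of natural numbers (unary languages).
\begin{align*}
  X + A &= B,\\
  X + X + C &= X + X + D.
\end{align*}
% Representation claim from the abstract: for every recursive set S there
% exist natural numbers p, d and eventually periodic sets A, B, C, D such
% that the system has a unique solution X_0 with
%   n \in S \iff np + d \in X_0.
```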

Relevance: 90.00%

Abstract:

This study examined the effectiveness of context on the acquisition of new vocabulary for good and poor readers. Twenty-eight Grade Three children, fourteen good readers and fourteen poor readers, took part in a word-learning task within three conditions: (1) strong sentence context, (2) weak sentence context, and (3) list condition. The primary hypothesis was that poor readers would show less learning in the list condition than good readers and that there would be no difference in the amount of learning in the sentence conditions. Results revealed that: (a) words are read faster in sentence contexts than in list contexts; (b) more learning or greater improvement in performance occurs in list contexts and weak sentence contexts as opposed to strong sentence contexts; and (c) most of these differences can be attributed to the build-up of meaning in sentences. Results indicated that good and poor readers learned more about words in all three conditions. More learning and greater performance occurred in the list condition as opposed to the two sentence conditions for both subject groups. However, the poor readers learned significantly more about words in both the list condition and the weak sentence condition than the good readers.

Relevance: 90.00%

Abstract:

Analysis by reduction is a method used in linguistics for checking the correctness of sentences of natural languages. This method is modelled by restarting automata. All types of restarting automata considered in the literature up to now accept at least the deterministic context-free languages. Here we introduce and study a new type of restarting automaton, the so-called t-RL-automaton, which is an RL-automaton that is rather restricted in that it has a window of size one only, and that it works under a minimal acceptance condition. On the other hand, it is allowed to perform up to t rewrite (that is, delete) steps per cycle. Here we study the gap-complexity of these automata. The membership problem for a language that is accepted by a t-RL-automaton with a bounded number of gaps can be solved in polynomial time. On the other hand, t-RL-automata with an unbounded number of gaps accept NP-complete languages.
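
To convey the flavour of the underlying analysis by reduction (a loose sketch only, not a faithful simulator of a t-RL-automaton; the acceptance condition is simplified to "the tape has been reduced to the empty word" and the function name is invented), consider cycles that each perform at most t = 3 delete steps:

```python
# Loose illustration of analysis by reduction: each cycle performs up to
# t = 3 delete steps (one 'a', one 'b', one 'c'), and a word is accepted
# once the tape has been reduced to the empty word. The recognized
# language { w : |w|_a = |w|_b = |w|_c } is not context-free, but this is
# NOT a faithful model of a t-RL-automaton (window size 1, minimal
# acceptance condition); it only sketches the reduction idea.

def accepts_equal_abc(word: str) -> bool:
    tape = list(word)
    while tape:                       # one cycle per iteration
        for symbol in "abc":          # up to t = 3 delete steps per cycle
            if symbol not in tape:
                return False          # the cycle cannot be completed: reject
            tape.remove(symbol)       # delete the leftmost occurrence
    return True                       # tape reduced to the empty word: accept

for w in ["", "abc", "cabbac", "aabbcc", "aabbc", "abd"]:
    print(repr(w), accepts_equal_abc(w))
```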

Relevance: 90.00%

Abstract:

This thesis deals with restarting automata and extensions of restarting automata. Restarting automata are a tool for recognizing formal languages. They are motivated by the linguistic method of analysis by reduction and were introduced in 1995 by Jancar, Mráz, Plátek and Vogel. A restarting automaton consists of a finite control, a read/write window of fixed size, and a flexible tape, which initially contains the input together with tape boundary symbols. The computation of a restarting automaton proceeds in so-called cycles: each cycle starts at the left end of the tape in the initial state, performs a local rewrite on the tape, and ends with a restart, which moves the read/write window back to the left end and re-enters the initial state. This thesis is mainly concerned with two extensions of restarting automata: CD-systems of restarting automata and non-forgetting restarting automata. Non-forgetting restarting automata may finish a cycle in an arbitrary state, and a CD-system of restarting automata consists of a set of restarting automata that process the input together, their cooperation being governed by a mode of operation similar to that of CD grammar systems. For both extensions it turns out that the deterministic models are more powerful than deterministic standard restarting automata. It is shown that CD-systems of restarting automata can in many cases be simulated by non-forgetting restarting automata and that, conversely, non-forgetting restarting automata can also be simulated by CD-systems of restarting automata. Furthermore, restarting automata and non-forgetting restarting automata are investigated that are nondeterministic but make no errors. It turns out that these automata can be simulated by deterministic (non-forgetting) restarting automata if they begin a new cycle immediately after the rewrite, or if they can move their window both to the left and to the right. Moreover, all (non-forgetting) restarting automata that may make errors, but recognize them after finitely many cycles, can be simulated by (non-forgetting) restarting automata that make no errors. Another important result states that deterministic monotone non-forgetting restarting automata with auxiliary symbols that end the cycle immediately after the rewrite step recognize exactly the deterministic context-free languages, whereas deterministic monotone non-forgetting restarting automata with auxiliary symbols without this restriction recognize strictly more, namely the left-to-right regular languages. Thus, for the first time, restarting automata with auxiliary symbols that end their cycle immediately after the rewrite step are separated from restarting automata of the same type without this restriction. It is particularly noteworthy that both types of automata characterize well-known language classes.

Relevance: 90.00%

Abstract:

We study cooperating distributed systems (CD-systems) of stateless deterministic restarting automata with window size 1 that are governed by an external pushdown store. In this way we obtain an automata-theoretical characterization for the class of context-free trace languages.

Relevance: 90.00%

Abstract:

Machine translation has been a particularly difficult problem in the area of Natural Language Processing for over two decades. Early approaches to translation failed in part because interaction effects among complex phenomena made translation appear unmanageable. Later approaches to the problem have succeeded (although only bilingually), but are based on many language-specific rules of a context-free nature. This report presents an alternative approach to natural language translation that relies on principle-based descriptions of grammar rather than rule-oriented descriptions. The model that has been constructed is based on abstract principles as developed by Chomsky (1981) and several other researchers working within the "Government and Binding" (GB) framework. Thus, the grammar is viewed as a modular system of principles rather than a large set of ad hoc language-specific rules.

Relevance: 90.00%

Abstract:

Hierarchical structure with nested nonlocal dependencies is a key feature of human language and can be identified theoretically in most pieces of tonal music. However, previous studies have argued against the perception of such structures in music. Here, we show processing of nonlocal dependencies in music. We presented chorales by J. S. Bach and modified versions in which the hierarchical structure was rendered irregular whereas the local structure was kept intact. Brain electric responses differed between regular and irregular hierarchical structures, in both musicians and nonmusicians. This finding indicates that, when listening to music, humans apply cognitive processes that are capable of dealing with long-distance dependencies resulting from hierarchically organized syntactic structures. Our results reveal that a brain mechanism fundamental for syntactic processing is engaged during the perception of music, indicating that processing of hierarchical structure with nested nonlocal dependencies is not just a key component of human language, but a multidomain capacity of human cognition.

Relevance: 90.00%

Abstract:

Some programs may have their input data specified by formal context-free grammars. This formalization facilitates the use of tools that systematize and raise the quality of the testing process. Within this category of programs, compilers were the first to use such tools to automate their tests. In this work we present an approach for defining tests from the formal description of a program's inputs. Sentences are generated taking into account the syntactic aspects defined by the input specification, that is, the grammar. For optimization, coverage criteria are used to limit the number of tests without diminishing their quality. Our approach uses these criteria to drive generation so that it produces sentences satisfying a specific coverage criterion. The presented approach is based on the Lua language, relying heavily on its coroutines and dynamic construction of functions. With these resources, we propose a simple and compact implementation that can be optimized and controlled in different ways in order to satisfy the different implemented coverage criteria. To make the tool simpler to use, the EBNF notation was adopted for specifying the inputs. Its parser was specified in the Meta-Environment tool for rapid prototyping.
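
The tool itself is written in Lua and exploits coroutines; as a language-agnostic sketch of the idea (the toy grammar, the "all productions" criterion and all function names below are invented for illustration), sentences can be generated until a coverage target is met:

```python
import random

# Sketch of coverage-driven sentence generation from a context-free
# grammar. The criterion used here is "cover every production at least
# once"; the tool described in the abstract supports other criteria and
# is implemented in Lua using coroutines rather than Python functions.
GRAMMAR = {
    "expr":   [["term", "+", "expr"], ["term"]],
    "term":   [["factor", "*", "term"], ["factor"]],
    "factor": [["(", "expr", ")"], ["id"], ["num"]],
}

def generate(symbol, rng, covered, depth=0):
    """Randomly derive one sentence, recording which productions fired."""
    if symbol not in GRAMMAR:
        return [symbol]
    alternatives = GRAMMAR[symbol]
    # Bias towards the last (shortest) alternative once the derivation is deep.
    index = rng.randrange(len(alternatives)) if depth < 6 else len(alternatives) - 1
    covered.add((symbol, index))
    out = []
    for part in alternatives[index]:
        out.extend(generate(part, rng, covered, depth + 1))
    return out

def sentences_until_all_productions_covered(rng, limit=100):
    target = {(nt, i) for nt, alts in GRAMMAR.items() for i in range(len(alts))}
    covered, result = set(), []
    while covered != target and len(result) < limit:
        result.append(" ".join(generate("expr", rng, covered)))
    return result

for s in sentences_until_all_productions_covered(random.Random(0)):
    print(s)
```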

Relevance: 90.00%

Abstract:

In the feminine imaginary of the Amazon region of Pará, migrating is a dream whose content is part not only of a survival strategy, but also of a search to give new meaning to the places, constructions and imaginaries attributed to the feminine within the sexist, racialized and heteronormative cultural heritage imposed on and for the region. Many women dream of living in a context free of violence, of having a beautiful house, healthy children and a kind husband; others dream of earning a lot of money working in prostitution, as dancers, or in any job that would make that dream or others possible. All of them have heard stories of successful women who migrated and today own a car, expensive clothes and a house to live in. They hear that there are good prospects around the large development projects ("Grandes Projetos"), but they have no idea how to get there, since many of these places, such as mines and informal mining camps, are difficult to reach. They have heard that "abroad" their exoticism earns a lot of money. Others have heard bad stories of people who were enslaved, imprisoned, deported or killed; still, they trust their luck and believe the risk is worth it. They know how hard it is to leave the country, obtain a passport, negotiate in another language and another currency, and deal with a complex, demanding bureaucracy and a rigid, restrictive legislation. They believe that if they tried to migrate alone, without the support of someone experienced in the business, they would probably not succeed. Until someone shows up claiming to have that experience and offering to arrange everything with a simple wave of a magic wand... Human trafficking, especially trafficking of the feminine for purposes of sexual superexploitation, which includes women, travestis and transgender people, is a human rights violation in the context of migration. According to the United Nations (UN), it is the third most lucrative illicit activity on the planet, behind only drug trafficking and arms trafficking. It has a multifaceted nature marked by a double regulation, capitalist and identity-based, whose end is always slave labor, including servile marriage and forced prostitution. Its context goes beyond the criminal sphere and cuts across cultural and gender issues. Confronting it requires the recognition of democratic diversity, of the right to non-discrimination, and of human rights standards.

Relevance: 90.00%

Abstract:

This thesis concerns artificially intelligent natural language processing systems that are capable of learning the properties of lexical items (properties like verbal valency or inflectional class membership) autonomously while they are fulfilling the tasks for which they have been deployed in the first place. Many of these tasks require a deep analysis of language input, which can be characterized as a mapping of utterances in a given input C to a set S of linguistically motivated structures with the help of linguistic information encoded in a grammar G and a lexicon L: G + L + C → S (1). The idea that underlies intelligent lexical acquisition systems is to modify this schematic formula in such a way that the system is able to exploit the information encoded in S to create a new, improved version of the lexicon: G + L + S → L' (2). Moreover, the thesis claims that a system can only be considered intelligent if it does not just make maximum use of the learning opportunities in C, but is also able to revise falsely acquired lexical knowledge. So, one of the central elements in this work is the formulation of a set of criteria for intelligent lexical acquisition systems, subsumed under one paradigm: the Learn-Alpha design rule. The thesis describes the design and quality of a prototype for such a system, whose acquisition components have been developed from scratch and built on top of one of the state-of-the-art Head-driven Phrase Structure Grammar (HPSG) processing systems. The quality of this prototype is investigated in a series of experiments in which the system is fed with extracts of a large English corpus. While the idea of using machine-readable language input to automatically acquire lexical knowledge is not new, we are not aware of a system that fulfills Learn-Alpha and is able to deal with large corpora. To name four major challenges in constructing such a system: a) the high number of possible structural descriptions caused by highly underspecified lexical entries demands a parser with a very effective ambiguity management system, b) the automatic construction of concise lexical entries out of a bulk of observed lexical facts requires a special technique of data alignment, c) the reliability of these entries depends on the system's decision on whether it has seen 'enough' input, and d) general properties of language might render some lexical features indeterminable if the system tries to acquire them with too high a precision. The cornerstone of this dissertation is the motivation and development of a general theory of automatic lexical acquisition that is applicable to every language and independent of any particular theory of grammar or lexicon. This work is divided into five chapters. The introductory chapter first contrasts three different and mutually incompatible approaches to (artificial) lexical acquisition: cue-based queries, head-lexicalized probabilistic context-free grammars, and learning by unification. Then the postulation of the Learn-Alpha design rule is presented. The second chapter outlines the theory that underlies Learn-Alpha and exposes all the related notions and concepts required for a proper understanding of artificial lexical acquisition. Chapter 3 develops the prototyped acquisition method, called ANALYZE-LEARN-REDUCE, a framework which implements Learn-Alpha. The fourth chapter presents the design and results of a bootstrapping experiment conducted on this prototype: lexeme detection, learning of verbal valency, categorization into nominal count/mass classes, and selection of prepositions and sentential complements, among others. The thesis concludes with a review of the conclusions, motivation for further improvements, and proposals for future research on the automatic induction of lexical features.
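
A deliberately tiny sketch of the loop expressed by formulas (1) and (2) (every name and data structure below is invented; the actual system operates on HPSG analyses and additionally revises falsely acquired entries, as required by Learn-Alpha):

```python
from collections import defaultdict

# Toy sketch of formulas (1) and (2): parse the corpus C with grammar G
# and lexicon L to obtain structures S, then exploit S to produce an
# improved lexicon L'. Everything here is invented for illustration.
def parse(corpus, grammar, lexicon):
    """Stand-in for G + L + C -> S: record which verbs occur with objects."""
    structures = []
    for sentence in corpus:
        tokens = sentence.lower().rstrip(".").split()
        for i, tok in enumerate(tokens):
            if lexicon.get(tok, {}).get("pos") == "verb":
                structures.append({"verb": tok, "has_object": i + 1 < len(tokens)})
    return structures

def acquire(lexicon, structures, threshold=2):
    """Stand-in for G + L + S -> L': add valency once evidence is sufficient."""
    evidence = defaultdict(int)
    for s in structures:
        if s["has_object"]:
            evidence[s["verb"]] += 1
    updated = {word: dict(entry) for word, entry in lexicon.items()}
    for verb, count in evidence.items():
        if count >= threshold:
            updated[verb]["valency"] = "transitive"
    return updated

lexicon = {"reads": {"pos": "verb"}, "sleeps": {"pos": "verb"}}
corpus = ["Mary reads books.", "John reads newspapers.", "The cat sleeps."]
print(acquire(lexicon, parse(corpus, grammar=None, lexicon=lexicon)))
```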

Relevance: 90.00%

Abstract:

In contrast to formal semantics, the conjunction and is non-symmetrical in pragmatics. The events in Marc went to bed and fell asleep seem to have occurred chronologically, although no explicit time reference is given. As the temporal interpretation appears to be weaker in Mia ate chocolate and drank milk, it seems that the kind and nature of the events presented in a context influences the interpretation of the conjunction. This work focuses on contextual influences on the interpretation of the German conjunction und ('and'). A variety of theoretical approaches are concerned with whether and contributes to the establishment of discourse coherence via pragmatic processes or whether the conjunction has complex semantic meaning. These approaches are discussed with respect to how they explain the temporal and additive interpretations of the conjunction and the role of context in the interpretation process. It turned out that most theoretical approaches do not consider the importance of different kinds of context in the interpretation process. In experimental pragmatics there are currently only very few studies that investigate the interpretation of the conjunction. As there are no studies that systematically investigate contextual influences on the interpretation of und or that investigate preschoolers' interpretation of the conjunction, research questions such as How do (preschool) children interpret 'und'? and Does the kind of events conjoined influence children's and adults' interpretation? are yet to be answered. Therefore, this dissertation systematically investigates how different types of context influence children's interpretation of und. Three auditory comprehension studies were conducted in German. Of special interest was whether and how the order of events introduced in a context contributes to the temporal reading of the conjunction und. Results indicate that the interpretation of und is, at least in German, context-dependent: the conjunction is interpreted temporally more often when events that typically occur in a certain order are connected (typical contexts) compared to events without a typical event order (neutral contexts). This suggests that the type of events conjoined influences the interpretation process. Moreover, older children and adults interpret the conjunction temporally more often than the younger cohorts if the conjoined events typically occur in a certain order. In neutral contexts, additive interpretations increase with age. 5-year-olds reject reversed-order statements more often in typical contexts compared to neutral contexts. However, they have more difficulties with reversed-order statements in typical contexts, where they perform at chance level. This suggests that not only the type of event but also other age-dependent factors, such as knowledge about scripts, influence children's performance. The type of event conjoined influences children's and adults' interpretation of the conjunction. Therefore, the influence of different event types and script knowledge on the interpretation process has to be considered not only in future experimental studies on language acquisition and pragmatics but also in experimental pragmatics in general. In linguistic theories, context has to be given a central role, and a definition of context that considers the consequences arising from different event types has to be agreed upon.

Relevance: 90.00%

Abstract:

The domain of context-free languages has been extensively explored and there exist numerous techniques for parsing (all or a subset of) context-free languages. Unfortunately, some programming languages are not context-free. Using standard context-free parsing techniques to parse a context-sensitive programming language poses a considerable challenge. Implementors of programming language parsers have adopted various techniques, such as hand-written parsers, special lexers, or post-processing of an ambiguous parser output, to deal with that challenge. In this paper we suggest a simple extension of a top-down parser with contextual information. Contrary to the traditional approach that uses only the input stream as an input to a parsing function, we use a parsing context that provides access to a stream and possibly to other context-sensitive information. At the same time we keep the context-free formalism, so a grammar definition stays simple, without mind-blowing context-sensitive rules. We show that our approach can be used for various purposes, such as indent-sensitive parsing, high-precision island parsing, or parsing XML with arbitrary element names. We demonstrate our solution with PetitParser, a parsing-expression-grammar-based, top-down parser combinator framework written in Smalltalk.
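
PetitParser is written in Smalltalk; the following minimal sketch (in Python, with invented names, not PetitParser's API) illustrates the core idea of threading a parsing context, here a stream position plus indentation information, through otherwise context-free-shaped rules:

```python
from dataclasses import dataclass

# Minimal sketch of a top-down parser that threads a parsing *context*
# (stream position plus indentation information) instead of only an
# input stream, so that indent-sensitive structure can be recognized
# while the rule shapes stay context-free. Invented example; this is
# not PetitParser's Smalltalk API.
@dataclass
class Context:
    lines: list          # list of (indent, name) pairs
    pos: int = 0         # index of the next unconsumed line

def tokenize(text: str):
    """Split the input into (indentation, content) pairs, skipping blanks."""
    return [(len(line) - len(line.lstrip(" ")), line.strip())
            for line in text.splitlines() if line.strip()]

def parse_block(ctx: Context, level: int):
    """Parse lines at indentation `level`; deeper lines become children."""
    items = []
    while ctx.pos < len(ctx.lines):
        indent, name = ctx.lines[ctx.pos]
        if indent < level:
            break                                  # dedent ends this block
        if indent > level:
            if not items:
                raise SyntaxError("unexpected indent")
            items[-1] = (items[-1][0], parse_block(ctx, indent))
            continue
        items.append((name, []))                   # same level: a new item
        ctx.pos += 1
    return items

source = "animals\n  cats\n  dogs\nplants\n  trees"
print(parse_block(Context(tokenize(source)), level=0))
```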