970 results for modular languages
Abstract:
This article discusses the ways in which languages can be integrated into histories of war and conflict, by exploring ongoing research in two case studies: the liberation and occupation of Western Europe (1944–47), and peacekeeping/peace building in Bosnia-Herzegovina (1995–2000). The article suggests that three methodological approaches have been of particular value in this research: adopting an historical framework; following the “translation” of languages into war situations; and contextualizing the figure of the interpreter/translator. The process of incorporating languages into histories of conflict, the article argues, has helped to uncover a broader languages landscape within the theatres of war.
Abstract:
While learners’ attitudes to Modern Foreign Languages (MFL) and to Physical Education (PE) in the UK have been widely investigated in previous research, an under-explored area is learners’ feelings about being highly able in these subjects. The present study explored this issue among 78 learners (aged 12–13) from two schools in England: a Specialist Language College and a Specialist Sports College. Learners completed a questionnaire exploring their feelings about the prospect of being identified as gifted/talented in these subjects, and their perceptions of the characteristics of highly able learners in MFL and PE. Questionnaires were chosen as the data collection method to encourage more open responses from these young learners than might have been elicited in an interview. While learners were enthusiastic about the idea of being highly able in both subjects, this enthusiasm was more muted for MFL. School specialism was related to learners’ enthusiasm only in the Sports College. Learners expressed fairly stereotypical views of the characteristics of the highly able in MFL and PE. The relevance of these findings for motivation and curriculum design within both subjects is discussed.
Abstract:
This paper juxtaposes postmodernist discourses on language, identity and cultural power with historical forms of language inequalities grounded in the nation-state. The discussion is presented in three sections. The first section focuses on the mixed legacies of language-state relations within the pluralist nation-state, colonial and postcolonial language policies. The second section examines the concept of linguistic minority rights beyond the nation-state. This incorporates discussion of transmigration, the breaking up of previous power blocs in Eastern Europe and the role of language in the articulation of emergent 'ethnic' nationalisms. The third section examines the concept of multilingualism within the interactive cultural landscape defined by 'informationalism'. Discussing the collective impact of these variables on the shaping of new cultural, economic and political inequalities, the paper highlights the tensions in which the concept of linguistic minority rights exists in the world today.
Abstract:
Spiking neural networks are usually limited in their applications due to their complex mathematical models and the lack of intuitive learning algorithms. In this paper, a simpler, novel neural network derived from a leaky integrate-and-fire neuron model, the ‘cavalcade’ neuron, is presented. A simulation environment for the neural network has been developed and two basic learning algorithms implemented within it. These algorithms successfully learn some basic temporal and instantaneous problems. Inspiration for neural network structures is then taken from these experiments and applied to processing sensor information so as to successfully control a mobile robot.
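The ‘cavalcade’ neuron itself is not specified in this abstract, so purely as background, here is a minimal sketch of the standard leaky integrate-and-fire dynamics it is described as being derived from (Euler discretisation; the function name simulate_lif and all parameter values are illustrative assumptions, not the authors' model):

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_reset=0.0, v_thresh=1.0, r_m=1.0):
    """Euler-discretised leaky integrate-and-fire neuron (illustrative only).

    input_current: 1-D array of input current per time step.
    Returns the membrane potential trace and the spike times (in steps).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Membrane potential leaks towards rest while being driven by the input.
        v += dt * (-(v - v_rest) + r_m * i_in) / tau
        if v >= v_thresh:      # threshold crossing: emit a spike ...
            spikes.append(t)
            v = v_reset        # ... and reset the membrane potential
        trace.append(v)
    return np.array(trace), spikes

# A constant supra-threshold current after 50 silent steps produces regular spiking.
current = np.concatenate([np.zeros(50), 1.5 * np.ones(200)])
_, spike_times = simulate_lif(current)
print("spike times:", spike_times)
```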
Abstract:
Recall in many types of verbal memory task is reliably disrupted by the presence of auditory distracters, with verbal distracters frequently proving the most disruptive (Beaman, 2005). A multinomial processing tree model (Schweickert, 1993) is applied to the effects on free recall of background speech from a known or an unknown language. The model reproduces the free recall curve and the impact on memory of verbal distracters for which a lexical entry exists (i.e., verbal items from a known language). The effect of semantic relatedness of distracters within a language is found to depend upon a redintegrative factor thought to reflect the contribution of the speech-production system. The differential impacts of known and unknown languages cannot be accounted for in this way, but the same effects of distraction are observed amongst bilinguals, regardless of distracter language.
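Schweickert's (1993) multinomial processing tree account is usually summarised as a two-branch tree: an item is recalled either because its memory trace is still intact or, failing that, because the degraded trace is redintegrated. A minimal sketch of that recall probability follows; the parameter names are ours and the example values are invented purely for illustration.

```python
def p_recall(intact, redintegration):
    """Two-branch multinomial processing tree for immediate recall:
    an item is recalled if its trace is intact, or, if degraded, if it
    can be reconstructed (redintegrated) via the speech-production system."""
    return intact + (1.0 - intact) * redintegration

# Invented values: per the abstract, semantic relatedness of distracters
# within a language is assumed to act on the redintegration parameter,
# whereas the known- vs. unknown-language difference is not captured this way.
print(p_recall(intact=0.6, redintegration=0.5))  # roughly 0.80
print(p_recall(intact=0.6, redintegration=0.3))  # roughly 0.72
```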
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology for predicting the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. One such alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale-up results of a first prototype implementation.
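As background for readers new to the family, the ‘separate and conquer’ loop that Prism-style algorithms follow can be sketched roughly as below. This is our own illustrative, categorical-attribute-only sketch (the function and variable names are invented), not the implementation discussed in the paper.

```python
def induce_prism_rules(instances, target_class):
    """Illustrative Prism-style rule induction for one class.

    instances: list of (attribute_dict, class_label) pairs, categorical only.
    Returns a list of rules, each a dict mapping attribute -> required value.
    """
    remaining, rules = list(instances), []
    # Keep inducing rules until every instance of the target class is covered.
    while any(label == target_class for _, label in remaining):
        rule, covered = {}, remaining
        # Specialise the rule one attribute-value term at a time until it
        # covers only target-class instances (or no terms are left to add).
        while any(label != target_class for _, label in covered):
            candidates = {(a, v) for attrs, _ in covered
                          for a, v in attrs.items() if a not in rule}
            if not candidates:
                break
            best_term, best_prob = None, -1.0
            for attr, value in candidates:
                subset = [(x, c) for x, c in covered if x.get(attr) == value]
                prob = sum(c == target_class for _, c in subset) / len(subset)
                if prob > best_prob:
                    best_term, best_prob = (attr, value), prob
            attr, value = best_term
            rule[attr] = value
            covered = [(x, c) for x, c in covered if x.get(attr) == value]
        rules.append(rule)
        # "Separate": remove the instances this rule covers, then "conquer" the rest.
        remaining = [(x, c) for x, c in remaining
                     if not all(x.get(a) == v for a, v in rule.items())]
    return rules

weather = [({"outlook": "sunny", "windy": "no"},  "play"),
           ({"outlook": "sunny", "windy": "yes"}, "play"),
           ({"outlook": "rain",  "windy": "yes"}, "stay")]
print(induce_prism_rules(weather, "play"))
```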
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
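The abstract gives no implementation detail, but the general shape of a blackboard arrangement, in which independent workers induce rules over local data partitions and post them to a shared board for a moderator to combine, might be caricatured as follows. This is a single-process toy with invented names, not the DRI system itself.

```python
class Blackboard:
    """Toy stand-in for a distributed blackboard: a shared space that
    workers write partial results to and a moderator reads from."""
    def __init__(self):
        self.rules = []

    def post(self, worker_id, rule):
        self.rules.append((worker_id, rule))

def worker(worker_id, partition, board):
    # Stand-in for a rule-induction pass over this worker's local partition;
    # a real worker would run a Prism-style learner here and post its rules.
    board.post(worker_id, {"covers": len(partition), "rule": f"rule-{worker_id}"})

board = Blackboard()
for wid, partition in enumerate([range(100), range(100, 180), range(180, 300)]):
    worker(wid, list(partition), board)
print(board.rules)
```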
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed, such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation of classification rule induction algorithms, most of the work has concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the ‘separate and conquer’ approach to inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard-based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
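Without claiming to reproduce J-PMCRI itself, the core opportunity for parallelising a ‘separate and conquer’ learner can be sketched as follows: the expensive step, scoring candidate rule terms, is computed per data partition and only small count tables are merged centrally. All names here are invented, and the score is the simple covering probability used in the sketch above.

```python
from collections import Counter
from multiprocessing import Pool

def local_term_counts(args):
    """On one data partition, count how often each (attribute, value) pair
    occurs overall and how often it co-occurs with the target class."""
    partition, target_class = args
    cover, hits = Counter(), Counter()
    for attrs, label in partition:
        for term in attrs.items():
            cover[term] += 1
            if label == target_class:
                hits[term] += 1
    return cover, hits

def best_term(partitions, target_class):
    # Each worker scans only its own partition; only the count tables are
    # shipped back and merged, not the raw training data.
    with Pool(len(partitions)) as pool:
        results = pool.map(local_term_counts,
                           [(p, target_class) for p in partitions])
    cover, hits = Counter(), Counter()
    for c, h in results:
        cover.update(c)
        hits.update(h)
    return max(cover, key=lambda t: hits[t] / cover[t])

if __name__ == "__main__":
    data = [({"outlook": "sunny", "windy": "no"},  "play"),
            ({"outlook": "rain",  "windy": "yes"}, "stay"),
            ({"outlook": "sunny", "windy": "yes"}, "play"),
            ({"outlook": "rain",  "windy": "no"},  "stay")]
    print(best_term([data[:2], data[2:]], "play"))
```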
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, which has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees, many pre-pruning and post-pruning methods exist; for Prism algorithms, however, only one pre-pruning method has been developed: J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms, and evaluates it empirically.
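J-pruning and Jmax-pruning are both based on the J-measure of rule information content (due to Smyth and Goodman). A hedged sketch of the measure is given below, with the two pruning policies only summarised in comments and all example numbers invented.

```python
from math import log2

def j_measure(p_x, p_y_given_x, p_y):
    """J-measure of the rule IF x THEN y: the probability that the rule
    fires, p(x), weighted by the cross-entropy between the class
    distribution given x and the prior class distribution."""
    def term(p, q):
        return p * log2(p / q) if p > 0 else 0.0
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# Roughly: J-pruning stops specialising a rule when appending the next term
# would lower its J-value; Jmax-pruning lets the rule grow and then truncates
# it back to the term at which the J-value peaked.  Invented numbers:
print(j_measure(p_x=0.30, p_y_given_x=0.80, p_y=0.50))
print(j_measure(p_x=0.10, p_y_given_x=0.95, p_y=0.50))
```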