65 results for modular languages
Abstract:
This article discusses the ways in which languages can be integrated into histories of war and conflict, by exploring ongoing research in two case studies: the liberation and occupation of Western Europe (1944–47), and peacekeeping/peace building in Bosnia-Herzegovina (1995–2000). The article suggests that three methodological approaches have been of particular value in this research: adopting an historical framework; following the “translation” of languages into war situations; and contextualizing the figure of the interpreter/translator. The process of incorporating languages into histories of conflict, the article argues, has helped to uncover a broader languages landscape within the theatres of war.
Abstract:
While learners’ attitudes to Modern Foreign Languages (MFL) and to Physical Education (PE) in the UK have been widely investigated in previous research, an under-explored area is learners’ feelings about being highly able in these subjects. The present study explored this issue among 78 learners (aged 12–13) from two schools in England: a Specialist Language College and a Specialist Sports College. Learners completed a questionnaire exploring their feelings about the prospect of being identified as gifted/talented in these subjects, and their perceptions of the characteristics of highly able learners in MFL and PE. Questionnaires were chosen as the data collection method to encourage more open responses from these young learners than might have been elicited in an interview. While learners were enthusiastic about the idea of being highly able in both subjects, this enthusiasm was more muted for MFL. School specialism was related to learners’ enthusiasm only in the Sports College. Learners expressed fairly stereotypical views of the characteristics of the highly able in MFL and PE. The relevance of these findings for motivation and curriculum design within both subjects is discussed.
Abstract:
This paper juxtaposes postmodernist discourses on language, identity and cultural power with historical forms of language inequalities grounded in the nation-state. The discussion is presented in three sections. The first section focuses on the mixed legacies of language-state relations within the pluralist nation-state, colonial and postcolonial language policies. The second section examines the concept of linguistic minority rights beyond the nation-state. This incorporates discussion of transmigration, the breaking up of previous power blocs in Eastern Europe and the role of language in the articulation of emergent 'ethnic' nationalisms. The third section examines the concept of multilingualism within the interactive cultural landscape defined by 'informationalism'. Discussing the collective impact of these variables on the shaping of new cultural, economic and political inequalities, the paper highlights the tensions in which the concept of linguistic minority rights exists in the world today.
Abstract:
Spiking neural networks are usually limited in their applications due to their complex mathematical models and the lack of intuitive learning algorithms. In this paper, a simpler, novel neural network derived from a leaky integrate-and-fire neuron model, the ‘cavalcade’ neuron, is presented. A simulation environment for the neural network has been developed and two basic learning algorithms implemented within it. These algorithms successfully learn some basic temporal and instantaneous problems. Inspiration for neural network structures is then taken from these experiments and applied to the processing of sensor information so as to successfully control a mobile robot.
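The ‘cavalcade’ neuron itself is defined in the paper; purely for orientation, a minimal sketch of the standard leaky integrate-and-fire update it derives from might look as follows (all parameter values are illustrative assumptions, not those of the paper):

```python
import numpy as np

def simulate_lif(input_current, dt=1e-3, tau_m=0.02, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0, r_m=1.0):
    """Simulate a standard leaky integrate-and-fire neuron.

    input_current: 1-D array of injected current, one value per time step.
    Returns the membrane potential trace and the spike times (in steps).
    """
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leaky integration: the potential decays towards rest while
        # integrating the (scaled) input current.
        v += (-(v - v_rest) + r_m * i_in) * (dt / tau_m)
        if v >= v_thresh:          # threshold crossing -> emit a spike
            spikes.append(t)
            v = v_reset            # reset after firing
        trace.append(v)
    return np.array(trace), spikes

# Example: constant supra-threshold drive produces regular spiking.
trace, spikes = simulate_lif(np.full(500, 1.5))
```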
Abstract:
Recall in many types of verbal memory task is reliably disrupted by the presence of auditory distracters, with verbal distracters frequently proving the most disruptive (Beaman, 2005). A multinomial processing tree model (Schweickert, 1993) is applied to the effects on free recall of background speech from a known or an unknown language. The model reproduces the free recall curve and the impact on memory of verbal distracters for which a lexical entry exists (i.e., verbal items from a known language). The effect of semantic relatedness of distracters within a language is found to depend upon a redintegrative factor thought to reflect the contribution of the speech-production system. The differential impacts of known and unknown languages cannot be accounted for in this way, but the same effects of distraction are observed amongst bilinguals, regardless of distracter language.
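For readers unfamiliar with the modelling framework, Schweickert’s two-parameter multinomial processing tree account treats recall as success either from an intact trace or, failing that, from redintegration of a degraded trace. A minimal sketch under that assumption (the parameter values are invented for illustration and are not the fitted values reported here):

```python
def mpt_recall_probability(intact, redintegration):
    """Two-parameter multinomial processing tree model of recall
    (Schweickert, 1993): an item is recalled if its trace is intact,
    or, if degraded, if it can be redintegrated from long-term knowledge.
    """
    return intact + (1.0 - intact) * redintegration

# Illustrative numbers only: distraction is assumed to lower the probability
# of an intact trace, and a known-language distracter is assumed to also
# lower the redintegrative contribution of the speech-production system.
p_quiet = mpt_recall_probability(intact=0.6, redintegration=0.5)   # 0.80
p_known = mpt_recall_probability(intact=0.4, redintegration=0.3)   # 0.58
```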
Abstract:
In a world where massive amounts of data are recorded on a large scale, we need data mining technologies to gain knowledge from the data in a reasonable time. The Top Down Induction of Decision Trees (TDIDT) algorithm is a very widely used technology to predict the classification of newly recorded data. However, alternative technologies have been derived that often produce better rules but do not scale well on large datasets. Such an alternative to TDIDT is the PrismTCS algorithm. PrismTCS performs particularly well on noisy data but does not scale well on large datasets. In this paper we introduce Prism and investigate its scaling behaviour. We describe how we improved the scalability of the serial version of Prism and investigate its limitations. We then describe our work to overcome these limitations by developing a framework to parallelise algorithms of the Prism family and similar algorithms. We also present the scale up results of a first prototype implementation.
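For context, the ‘separate and conquer’ strategy shared by the Prism family can be sketched roughly as follows; this is a simplified illustration, not the serial or parallel implementation evaluated in the paper:

```python
def precision(data, term, target_class):
    """Fraction of instances matching the term that belong to the target class."""
    a, v = term
    subset = [x for x in data if x[a] == v]
    return (sum(x['class'] == target_class for x in subset) / len(subset)
            if subset else 0.0)

def induce_rules_for_class(instances, attributes, target_class):
    """Separate-and-conquer rule induction in the style of the Prism family.

    instances: list of dicts mapping attribute name -> value, plus 'class'.
    Returns a list of rules; each rule is a list of (attribute, value) terms.
    """
    remaining = list(instances)
    rules = []
    while any(x['class'] == target_class for x in remaining):
        covered = list(remaining)
        rule, unused = [], set(attributes)
        # Specialise ('conquer') until the rule covers only the target class
        # or no attributes are left to add.
        while unused and any(x['class'] != target_class for x in covered):
            best = max(
                ((a, v) for a in unused for v in {x[a] for x in covered}),
                key=lambda term: precision(covered, term, target_class))
            rule.append(best)
            unused.discard(best[0])
            covered = [x for x in covered if x[best[0]] == best[1]]
        rules.append(rule)
        # 'Separate': drop the instances covered by the new rule.
        remaining = [x for x in remaining
                     if not all(x[a] == v for a, v in rule)]
    return rules
```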
Abstract:
The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
Abstract:
Induction of classification rules is one of the most important technologies in data mining. Most of the work in this field has concentrated on the Top Down Induction of Decision Trees (TDIDT) approach. However, alternative approaches have been developed such as the Prism algorithm for inducing modular rules. Prism often produces qualitatively better rules than TDIDT but suffers from higher computational requirements. We investigate approaches that have been developed to minimize the computational requirements of TDIDT, in order to find analogous approaches that could reduce the computational requirements of Prism.
Abstract:
Inducing rules from very large datasets is one of the most challenging areas in data mining. Several approaches exist for scaling up classification rule induction to large datasets, namely data reduction and the parallelisation of classification rule induction algorithms. In the area of parallelisation of classification rule induction algorithms, most of the work has been concentrated on the Top Down Induction of Decision Trees (TDIDT), also known as the ‘divide and conquer’ approach. However, powerful alternative algorithms exist that induce modular rules. Most of these alternative algorithms follow the ‘separate and conquer’ approach of inducing rules, but very little work has been done to make the ‘separate and conquer’ approach scale better on large training data. This paper examines the potential of the recently developed blackboard based J-PMCRI methodology for parallelising modular classification rule induction algorithms that follow the ‘separate and conquer’ approach. A concrete implementation of the methodology is evaluated empirically on very large datasets.
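As a rough illustration of the general idea behind such parallelisation, with attribute partitions evaluated by workers and the best candidate rule term selected centrally (the role the blackboard plays), the following sketch uses a local process pool in place of the distributed blackboard; it is an assumption-laden simplification, not the J-PMCRI protocol itself:

```python
from concurrent.futures import ProcessPoolExecutor

def precision(data, term, target_class):
    """Fraction of instances matching the term that belong to the target class."""
    a, v = term
    subset = [x for x in data if x[a] == v]
    return (sum(x['class'] == target_class for x in subset) / len(subset)
            if subset else 0.0)

def best_local_term(args):
    """Worker: evaluate candidate rule terms for one attribute partition and
    return the locally best (term, score) pair."""
    covered, attributes, target_class = args
    candidates = [((a, v), precision(covered, (a, v), target_class))
                  for a in attributes for v in {x[a] for x in covered}]
    return max(candidates, key=lambda c: c[1]) if candidates else (None, 0.0)

def best_global_term(covered, attribute_partitions, target_class):
    """Coordinator: collect the locally best terms from all workers and keep
    the global winner for the next rule specialisation step."""
    with ProcessPoolExecutor() as pool:
        local_bests = list(pool.map(
            best_local_term,
            [(covered, part, target_class) for part in attribute_partitions]))
    return max(local_bests, key=lambda c: c[1])[0]
```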
Abstract:
The Prism family of algorithms induces modular classification rules which, in contrast to decision tree induction algorithms, do not necessarily fit together into a decision tree structure. Classifiers induced by Prism algorithms achieve accuracy comparable with decision trees and in some cases even outperform them. Both kinds of algorithms tend to overfit on large and noisy datasets, which has led to the development of pruning methods. Pruning methods use various metrics to truncate decision trees or to eliminate whole rules or single rule terms from a Prism rule set. For decision trees many pre-pruning and post-pruning methods exist; however, for Prism algorithms only one pre-pruning method has been developed, J-pruning. Recent work with Prism algorithms examined J-pruning in the context of very large datasets and found that the current method does not use its full potential. This paper revisits the J-pruning method for the Prism family of algorithms, develops a new pruning method, Jmax-pruning, discusses it in theoretical terms, and evaluates it empirically.
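Both pruning methods are driven by the J-measure of a rule. A minimal sketch of that measure, with invented example numbers, is given below; the exact way Jmax-pruning exploits it is described in the paper itself:

```python
from math import log2

def j_measure(p_y, p_x, p_x_given_y):
    """J-measure of a rule 'IF y THEN x' (Smyth & Goodman): the rule's
    coverage p(y) times the cross-entropy between the class distribution
    given the rule body and the prior class distribution.

    p_y        : probability that an instance matches the rule body.
    p_x        : prior probability of the target class.
    p_x_given_y: probability of the target class given the rule body.
    """
    def term(p, q):
        return p * log2(p / q) if p > 0 else 0.0
    return p_y * (term(p_x_given_y, p_x) + term(1 - p_x_given_y, 1 - p_x))

# J-pruning stops specialising a rule when appending a further term would
# reduce its J-value; Jmax-pruning (this paper) additionally considers the
# maximum J-value the rule could still reach. Illustrative numbers only:
before = j_measure(p_y=0.30, p_x=0.40, p_x_given_y=0.70)   # ~0.080
after  = j_measure(p_y=0.10, p_x=0.40, p_x_given_y=0.75)   # ~0.036
```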
Abstract:
The Prism family of algorithms induces modular classification rules in contrast to the Top Down Induction of Decision Trees (TDIDT) approach, which induces classification rules in the intermediate form of a tree structure. Both approaches achieve a comparable classification accuracy; however, in some cases Prism outperforms TDIDT. For both approaches, pre-pruning facilities have been developed in order to prevent the induced classifiers from overfitting on noisy datasets, by cutting rule terms or whole rules or by truncating decision trees according to certain metrics. There have been many pre-pruning mechanisms developed for the TDIDT approach, but for the Prism family the only existing pre-pruning facility is J-pruning. J-pruning works not only on Prism algorithms but also on TDIDT. Although it has been shown that J-pruning produces good results, this work points out that J-pruning does not use its full potential. The original J-pruning facility is examined and the use of a new pre-pruning facility, called Jmax-pruning, is proposed and evaluated empirically. A possible pre-pruning facility for TDIDT based on Jmax-pruning is also discussed.
Abstract:
Advances in hardware and software over the past decade make it possible to capture, record and process fast data streams at a large scale. The research area of data stream mining has emerged as a consequence of these advances, in order to cope with the real-time analysis of potentially large and changing data streams. Examples of data streams include Google searches, credit card transactions, telemetric data and data from continuous chemical production processes. In some cases the data can be processed in batches by traditional data mining approaches. However, in some applications the data must be analysed in real time as soon as it is captured, for example if the data stream is infinite, fast-changing, or simply too large to be stored. One of the most important data mining techniques on data streams is classification. This involves training the classifier on the data stream in real time and adapting it to concept drift. Most data stream classifiers are based on decision trees. However, it is well known in the data mining community that there is no single optimal algorithm: an algorithm may work well on one or several datasets but badly on others. This paper introduces eRules, a new rule-based adaptive classifier for data streams based on an evolving set of rules. eRules induces a set of rules that is constantly evaluated and adapted to changes in the data stream by adding new rules and removing old ones. It differs from the more popular decision-tree-based classifiers in that it tends to leave data instances unclassified rather than forcing a classification that could be wrong. The ongoing development of eRules aims to improve its accuracy further through dynamic parameter setting, which will also address the problem of changing feature domain values.
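A minimal sketch of the abstain-rather-than-guess behaviour and the drift handling described above might look as follows; the rule induction step, the accuracy threshold and the minimum evaluation count are assumptions for illustration, not the eRules implementation:

```python
class AbstainingRuleClassifier:
    """Rule-based stream classifier in the spirit of eRules: predicts only
    when a rule fires, otherwise abstains, and adapts by dropping rules whose
    recent accuracy falls below a threshold. Rule induction itself (e.g. a
    Prism-style learner over recent instances) is assumed and not shown here.
    """

    def __init__(self, min_accuracy=0.7, min_evaluations=20):
        self.rules = []                  # list of (terms, predicted_class)
        self.stats = {}                  # rule index -> [correct, total]
        self.min_accuracy = min_accuracy
        self.min_evaluations = min_evaluations

    def predict(self, instance):
        """Return (rule_index, label) for the first matching rule, or
        (None, None) to abstain instead of guessing."""
        for i, (terms, label) in enumerate(self.rules):
            if all(instance.get(a) == v for a, v in terms):
                return i, label
        return None, None

    def update(self, instance, true_label):
        """Update rule statistics with the revealed label and remove rules
        whose accuracy on recent data has degraded (concept drift)."""
        i, label = self.predict(instance)
        if i is None:
            return
        correct, total = self.stats.setdefault(i, [0, 0])
        self.stats[i] = [correct + (label == true_label), total + 1]
        c, t = self.stats[i]
        if t >= self.min_evaluations and c / t < self.min_accuracy:
            del self.rules[i]
            self.stats = {}              # indices shifted; reset bookkeeping
```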