954 results for Analogy (Linguistics)
Abstract:
The Leaving Certificate (LC) is the national, standardised state examination in Ireland required for entry to third-level education; it presents a massive raw corpus of data with the potential to yield invaluable insight into the phenomena of learner interlanguage. From samples of official LC Spanish examination data, this project has compiled a digitised corpus of learner Spanish comprising the written and oral production of 100 candidates. This corpus was then analysed using a specific investigative corpus technique, Computer-aided Error Analysis (CEA; Dagneaux et al., 1998). CEA is a powerful apparatus in that it greatly facilitates the quantification and analysis of a large learner corpus in digital format. The corpus was both compiled and analysed with the UAM Corpus Tool (O'Donnell, 2013). The tool records candidate-specific variables such as grade, examination level, task type and gender, thereby allowing critical analysis of the corpus as one unit, as separate written and oral sub-corpora, and of performance per task, level and gender. This is an interdisciplinary work combining aspects of Applied Linguistics, Learner Corpus Research and Foreign Language (FL) Learning. Beginning with a review of the context of FL learning in Ireland and Europe, I go on to discuss the disciplinary context and theoretical framework for this work and to outline the methodology applied. I then perform detailed quantitative and qualitative analyses before combining all research findings and outlining the principal conclusions. This investigation makes no a priori assumptions about the data set, the LC Spanish examination, the context of FLs, or any aspect of learner competence. It undertakes to provide the linguistic research community and the domain of Spanish language learning and pedagogy in Ireland with an empirical, descriptive profile of real learner performance, characterising learner difficulty.
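Since the abstract emphasises quantification across candidate-specific variables, a minimal sketch of the kind of cross-tabulation this enables is given below; the column names, variable values and error tags are hypothetical illustrations, not those of the actual corpus or of the UAM Corpus Tool.

    import pandas as pd

    # Hypothetical CEA-style error annotations; all names and tags are
    # illustrative only, not taken from the study described above.
    annotations = pd.DataFrame({
        "candidate": [1, 1, 2, 3, 3, 4],
        "level":     ["Higher", "Higher", "Ordinary", "Higher", "Higher", "Ordinary"],
        "gender":    ["F", "F", "M", "M", "M", "F"],
        "task":      ["written", "oral", "written", "written", "oral", "oral"],
        "error_tag": ["AGR", "LEX", "AGR", "VT", "LEX", "AGR"],
    })

    # Error counts per sub-corpus (written vs. oral) and examination level:
    # the kind of cross-tabulation candidate-specific variables permit.
    counts = annotations.groupby(["task", "level"])["error_tag"].value_counts()
    print(counts.unstack(fill_value=0))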
Abstract:
We consider massless higher spin gauge theories with both electric and magnetic sources, with a special emphasis on the spin two case. We write the equations of motion at the linear level (with conserved external sources) and introduce Dirac strings so as to derive the equations from a variational principle. We then derive a quantization condition that generalizes the familiar Dirac quantization condition, and which involves the conserved charges associated with the asymptotic symmetries for higher spins. Next we discuss briefly how the result extends to the nonlinear theory. This is done in the context of gravitation, where the Taub-NUT solution provides the exact solution of the field equations with both types of sources. We rederive, in analogy with electromagnetism, the quantization condition from the quantization of the angular momentum. We also observe that the Taub-NUT metric is asymptotically flat at spatial infinity in the sense of Regge and Teitelboim (including their parity conditions). It follows, in particular, that one can consistently consider in the variational principle configurations with different electric and magnetic masses.
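As a point of reference for the generalization described above, the familiar electromagnetic statement can be summarized as follows; the units (Heaviside-Lorentz, with c = 1) and normalizations are conventional choices made here, not taken from the paper.

    % Field angular momentum stored in the combined field of a charge e and a
    % magnetic monopole g, directed along the line joining them:
    \[ J \;=\; \frac{e\,g}{4\pi} \]
    % Requiring J to be a half-integer multiple of \hbar yields the Dirac
    % quantization condition:
    \[ \frac{e\,g}{4\pi} \;=\; \frac{n\hbar}{2}, \qquad n \in \mathbb{Z},
       \qquad\text{i.e.}\qquad e\,g \;=\; 2\pi n\hbar . \]

In the spin-two case discussed above, the roles of e and g are played by the electric (ordinary) mass and the magnetic (NUT-type) mass of the sources.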
Abstract:
In this work we introduce a new mathematical tool for the optimization of routes, topology design, and energy efficiency in wireless sensor networks. We introduce a vector field formulation that models communication in the network, and routing is performed in the direction of this vector field at every location of the network. The magnitude of the vector field at every location represents the density of the data traffic transiting that location. We define the total communication cost in the network as the integral of a quadratic form of the vector field over the network area. With this formulation, we introduce mathematical machinery based on partial differential equations very similar to Maxwell's equations in electrostatics. We show that in order to minimize the cost, the routes should be found from the solution of these partial differential equations. In our formulation, the sensors are sources of information, analogous to positive charges in electrostatics; the destinations are sinks of information, analogous to negative charges; and the network is analogous to a non-homogeneous dielectric medium with a variable dielectric constant (or permittivity coefficient). As one application of our mathematical model based on vector fields, we offer a scheme for energy-efficient routing. Our routing scheme sets the permittivity coefficient to a high value in the parts of the network where nodes have high residual energy and to a low value where the nodes have little energy left. Our simulations show that our method gives a significant increase in network lifetime compared to the shortest-path and weighted shortest-path schemes. Our initial focus is on the case where there is only one destination in the network; later we extend our approach to the case of multiple destinations. With multiple destinations, we need to partition the network into several areas known as the regions of attraction of the destinations. Each destination is responsible for collecting all messages generated in its region of attraction. The difficulty of the optimization problem in this case lies in defining the regions of attraction of the destinations and in deciding how much communication load to assign to each destination so as to optimize the performance of the network. We use our vector field model to solve the optimization problem for this case. We define a vector field which is conservative, and hence can be written as the gradient of a scalar field (also known as a potential field). We then show that in the optimal assignment of the communication load of the network to the destinations, the value of that potential field must be equal at the locations of all the destinations. Another application of our vector field model is finding the optimal locations of the destinations in the network. We show that the vector field gives the gradient of the cost function with respect to the locations of the destinations. Based on this fact, we suggest an algorithm, to be applied during the design phase of a network, that relocates the destinations to reduce the communication cost. The performance of our proposed schemes is confirmed by several examples and simulation experiments.
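Schematically, and in notation chosen here rather than taken from the text, the vector-field formulation above amounts to the following, where D is the information-flux field, rho is the density of sources (sensors, positive) and sinks (destinations, negative), and k is the permittivity-like coefficient:

    % Conservation of information flow, and the quadratic communication cost:
    \[ \nabla \cdot \mathbf{D}(x) \;=\; \rho(x), \qquad
       \text{cost} \;=\; \int_{A} \frac{\lvert\mathbf{D}(x)\rvert^{2}}{2\,k(x)}\, dA . \]
    % Minimizing the cost subject to the conservation constraint forces the
    % flux to be the gradient of a potential, so the optimal routes follow the
    % solution of a Poisson-type equation:
    \[ \mathbf{D} \;=\; -\,k\,\nabla\phi, \qquad
       \nabla \cdot \bigl(k\,\nabla\phi\bigr) \;=\; -\,\rho , \]
    % exactly as for the electric displacement in an inhomogeneous dielectric.

Raising k where residual energy is high then makes flux cheaper there, which is the mechanism behind the energy-efficient routing scheme described above.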
In another part of this work we focus on the notions of responsiveness and conformance of TCP traffic in communication networks. We introduce the notion of responsiveness for TCP aggregates and define it as the degree to which a TCP aggregate reduces its sending rate to the network in response to packet drops. We define metrics that describe the responsiveness of TCP aggregates and suggest two methods for determining their values. The first method is based on a test in which we intentionally drop a few packets from the aggregate and measure the resulting rate decrease of that aggregate. This kind of test is not robust to multiple simultaneous tests performed at different routers. We make the test robust to multiple simultaneous tests by using ideas from the CDMA approach to multiple-access channels in communication theory. Based on this approach, we introduce a responsiveness test for aggregates that we call the CDMA-based Aggregate Perturbation Method (CAPM). We use CAPM to perform congestion control. A distinguishing feature of our congestion control scheme is that it maintains a degree of fairness among different aggregates. In the next step we modify CAPM to offer methods for estimating the proportion of an aggregate of TCP traffic that does not conform to protocol specifications, and hence may belong to a DDoS attack. Our methods work by intentionally perturbing the aggregate, dropping a very small number of packets from it, and observing the response of the aggregate. We offer two methods for conformance testing. In the first method, we apply the perturbation tests to SYN packets sent at the start of the TCP three-way handshake, using the fact that the rate of ACK packets exchanged in the handshake should follow the rate of perturbations. In the second method, we apply the perturbation tests to the TCP data packets, using the fact that the rate of retransmitted data packets should follow the rate of perturbations. In both methods, we use signature-based perturbations, meaning that packet drops are performed at a rate given by a function of time. We use the analogy between our problem and multiple-access communication to find the signatures. Specifically, we assign orthogonal CDMA-based signatures to different routers in a distributed implementation of our methods. As a result of the orthogonality, performance does not degrade from the cross-interference caused by simultaneously testing routers. We demonstrate the efficacy of our methods through mathematical analysis and extensive simulation experiments.
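A minimal sketch of the orthogonal-signature idea follows; the numbers, names and the linear response model are assumptions made here for illustration, not details taken from the work.

    import numpy as np

    def hadamard(n):
        """Sylvester construction of an n x n Walsh-Hadamard matrix (n a power of 2)."""
        H = np.array([[1]])
        while H.shape[0] < n:
            H = np.block([[H, H], [H, -H]])
        return H

    # Hypothetical scenario: 4 routers perturb the same TCP aggregate over 8 slots.
    n_slots = 8
    signatures = hadamard(n_slots)[1:5]    # orthogonal +/-1 rows; the all-ones row
                                           # is skipped so the baseline cancels out
    base, amp = 0.01, 0.005                # baseline / perturbation drop probabilities
    drop_rates = base + amp * signatures   # per-slot drop probability at each router

    # Assumed linear response model: the aggregate's per-slot rate reduction is
    # proportional to the total drop rate it experiences, plus measurement noise.
    rng = np.random.default_rng(0)
    true_responsiveness = 0.6
    response = (true_responsiveness * drop_rates.sum(axis=0)
                + 0.001 * rng.standard_normal(n_slots))

    # Each router correlates the observed response with its own signature;
    # orthogonality removes the perturbations of the other testing routers.
    for i, sig in enumerate(signatures):
        estimate = sig @ response / (amp * n_slots)
        print(f"router {i}: estimated responsiveness ~ {estimate:.2f}")

Because the signatures are mutually orthogonal, each correlation recovers roughly the same responsiveness estimate even though all four routers test simultaneously.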
Abstract:
Numerical approximation of the long-time behavior of a stochastic differential equation (SDE) is considered. Error estimates for time-averaging estimators are obtained and then used to show that the stationary behavior of the numerical method converges to that of the SDE. The error analysis is based on using an associated Poisson equation for the underlying SDE. The main advantages of this approach are its simplicity and universality. It works equally well for a range of explicit and implicit schemes, including those with simple simulation of random variables, and for hypoelliptic SDEs. To simplify the exposition, we consider only the case where the state space of the SDE is a torus, and we study only smooth test functions. However, we anticipate that the approach can be applied more widely. An analogy between our approach and Stein's method is indicated. Some practical implications of the results are discussed.
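In notation introduced here, not necessarily that of the article, the two central objects are the time-averaging estimator and the Poisson equation for the generator of the SDE:

    % Time average of a test function f along the numerical iterates X_n,
    % and the Poisson equation for the generator L of the SDE:
    \[ \hat{\pi}_N(f) \;=\; \frac{1}{N}\sum_{n=0}^{N-1} f(X_n) \;\approx\; \pi(f),
       \qquad
       \mathcal{L}\,\phi \;=\; f - \pi(f) , \]
    % where pi denotes the invariant (stationary) measure.

Applying Ito's formula, or its discrete analogue, to the solution of the Poisson equation telescopes the time average and splits the error into a martingale part plus small one-step remainders, which is the standard mechanism behind error estimates of this kind.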
Abstract:
HYPERJOSEPH combines hypertext, information retrieval, literary studies, Biblical scholarship, and linguistics. Dialectically, this paper contrasts hypertextual form (the extant tool) with AI-captured content (a desideratum) in the HYPERJOSEPH project. The discussion, however, is more general and epistemologically oriented.
Abstract:
Reviews of: [1] James E. Hoch, Semitic Words in Egyptian Texts of the New Kingdom and Third Intermediate Period, (1994), Princeton University Press. [2] Daniel Sivan and Zipora Cochavi-Rainey, West Semitic Vocabulary in Egyptian Script of the 14th to the 10th Centuries BCE, (1992), Ben-Gurion University of the Negev Press.
Abstract:
Review of: Vardah Shiloh, Millon 'Ivri-'Arami-'Aššuri bs-Lahag Yihude Zaxo (A New Neo-Aramaic Dictionary: Jewish Dialect of Zakho). Volume I: 'alef–nun; Volume II: samex–tav. V. Shilo (16 Ben-Gamla Street), Jerusalem 1995. Pp. xiv + 488 (Vol. I); 489–963 (Vol. II). (Modern Hebrew, Zakho Jewish Neo-Aramaic). Hbk.
Abstract:
This paper presents work on document retrieval based on first-time participation in the CLEF 2001 monolingual retrieval task using French. The experimental findings indicated that Okapi, the text retrieval system used, can successfully be applied to non-English text retrieval. A great deal of internal pre-processing was required to convert the data into Okapi access formats; various shell scripts were written to achieve the conversion in a UNIX environment, and failure to do so would have significantly impeded overall performance. The findings with Okapi, originally designed for English, made it clear that although most European languages share conventional word boundaries and form variant word morphemes by the addition of suffixes, there are significant differences between French and English retrieval depending on how the indexing and search strategies are adapted. No sophisticated method for higher recall and precision, such as stemming, phrase translation or de-compounding, was employed in the experiment, and our results were accordingly poor. Future participation would include more refined query translation tools.
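Purely by way of illustration, a basic normalization step of the general kind the pre-processing stage required might look as follows; the tokenizer and accent-stripping are generic assumptions, and Okapi's actual access formats and the original shell scripts are not reproduced here.

    import re
    import unicodedata

    def tokenize(text):
        # Conventional word boundaries, shared by most European languages.
        return re.findall(r"\w+", text)

    def normalize(token):
        # Lowercase and strip diacritics: a basic normalization step only; no
        # stemming, phrase translation or de-compounding, as in the experiment.
        decomposed = unicodedata.normalize("NFD", token.lower())
        return "".join(c for c in decomposed if unicodedata.category(c) != "Mn")

    doc = "Les systèmes de recherche d'information traitent des requêtes en français."
    print([normalize(t) for t in tokenize(doc)])
    # ['les', 'systemes', 'de', 'recherche', 'd', 'information', 'traitent', ...]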
Abstract:
In this paper, we consider what is meant by e-learning and contrast the delivery of material with the actual learning process, using an analogy derived from Searle. A case study describes an attempt, which met with mixed results, to use a groupware system in a knowledge management course. The reasons for these results are explored, with issues of extrinsic and intrinsic motivation and of scaffolding considered in the e-learning context.
Abstract:
Benati provides clarity about the characteristics and notion of language proficiency in the field of second language acquisition. He looks at four areas of research paradigmatically related to the role of proficiency: theorizing and measuring second language proficiency; the dimensions of L2 proficiency; factors contributing to the attainment of L2 proficiency; and attaining L2 proficiency in the classroom. The book also contains a variety of research accounts of the specific factors that affect proficiency, together with a theorised measurement of proficiency in second language research. It will be required reading for researchers in applied linguistics and second language acquisition.
Abstract:
Japanese Language Teaching examines the practical aspects of the acquisition of Japanese as a second language, underpinned by current theory and research. Each chapter examines the theory and practice of language teaching and progresses to the practical design of teaching tasks. The final section applies theory and practice to an empirical case study drawn from a Japanese-as-a-second-language classroom. With its emphasis on practice underpinned by contemporary theory, this book will be of interest to postgraduates studying second language acquisition and applied linguistics. [Source: publisher's description.]
Abstract:
Key Terms in Second Language Acquisition includes definitions of key terms within second language acquisition and provides accessible summaries of the key issues within this complex area of study. The final section presents a list of key readings that signposts the reader towards classic articles and provides a springboard to further study.
Abstract:
This volume tracks the impact processing instruction has made since its inception. It provides an overview of new research trends in measuring the relative effects of processing instruction. The authors first explain processing instruction, covering both its main theoretical underpinnings and the guidelines for developing structured-input practices. They then review the empirical research conducted to date, giving readers an overview of new research on the effects of processing instruction. Finally, the authors reflect on the generalizability and limits of the research on processing instruction and offer future directions for processing instruction research.