361 results for Language representation
Abstract:
Starting with the incident now known as the Cow’s Head Protest, this article traces and unpacks the events, techniques, and conditions surrounding the representation of ethno-religious minorities in Malaysia. The author suggests that the Malaysian Indians’ struggle to correct the dominant reading of their community as an impoverished and humbled underclass is a disruption of the dominant cultural order in Malaysia. It is also among the key events to have set in motion a set of dynamics—the visual turn—introduced by new media into the politics of ethno-communal representation in Malaysia. Believing that this situation requires urgent examination, the author attempts to outline the problematics of the task.
Abstract:
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners’ understanding of spoken language. This paper presents an argument for rethinking the emphasis on comprehension and advocates augmenting current teaching with an explicit focus on strategies. Drawing on the literature, the paper provides three models of strategy instruction for the teaching and development of listening skills. The models include steps for implementation that accord with their respective approaches to explicit instruction. The final section of the paper synthesises key points from the models as a guide for application in the second language classroom. The premise underpinning the paper is that the teaching of strategies can provide learners with active and explicit measures for managing and expanding their listening capacities, both in the learning and ‘real world’ use of a second language.
Abstract:
This chapter reports on a study of oracy in a first-year university Business course, with particular interest in the oracy demands for second language-using international students. The research is relevant at a time when Higher Education is characterised by the confluence of increased international enrolments, more dialogic teaching and learning, and imperatives for teamwork and collaboration. Data sources for the study included videotaped lectures and tutorials, course documents, student surveys, and an interview with the lecturer. The findings pointed to a complex, oracy-laden environment where interactive talk fulfilled high-stakes functions related to social inclusion, the co-construction of knowledge, and the accomplishment of assessment tasks. The salience of talk posed significant challenges for students negotiating these core functions in their second language. The study highlights the oracy demands in university courses and foregrounds the need for university teachers, curriculum writers and speaking test developers to recognise these demands and explicate them for the benefit of all students.
Abstract:
Privacy issues have hindered the evolution of e-health since its emergence. Patients demand better solutions for the protection of private information, while health professionals demand open access to patient health records. Existing e-health systems find it difficult to fulfill these competing requirements. In this paper, we present an information accountability framework (IAF) for e-health systems. The IAF is intended to address privacy issues and the competing concerns related to e-health. The capabilities of the IAF adhere to information accountability principles and e-health requirements. Policy representation and policy reasoning are key capabilities introduced in the IAF, and we investigate how they can be realized using Semantic Web technologies. Using a case scenario, we discuss how the different types of policies in the IAF can be represented using the Open Digital Rights Language (ODRL).
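The policy-representation capability described above can be made concrete. Below is a minimal sketch of an ODRL 2.2 policy in its JSON-LD serialization, built as a plain Python dict; the patient, clinician, and purpose identifiers are hypothetical illustrations, not taken from the paper's case scenario.

```python
import json

# Hypothetical e-health policy: a named clinician may read one patient's
# record, but only for the purpose of treatment. All URIs are invented.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Agreement",
    "uid": "http://example.com/policy/ehr-001",
    "permission": [{
        "target": "http://example.com/ehr/patient-42/record",
        "assigner": "http://example.com/patient-42",
        "assignee": "http://example.com/clinician/dr-lee",
        "action": "read",
        "constraint": [{
            "leftOperand": "purpose",
            "operator": "eq",
            "rightOperand": "http://example.com/purpose/treatment"
        }]
    }]
}

serialized = json.dumps(policy, indent=2)
```

Because the serialization is JSON-LD, the same document can be loaded by a Semantic Web reasoner as RDF triples, which is what makes policy reasoning over such representations feasible.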
Abstract:
With the recognition that language both reflects and constructs culture, and with English now widely acknowledged as an international language, the cultural content of language teaching materials is being problematised. Through a quantitative analysis, this chapter focuses on opportunities for intercultural understanding and connectedness through representations of the identities that appear in two leading English language textbooks. The analyses reveal that the textbooks orientate towards British and Western identities, with representations of people from non-European/non-Western backgrounds being notable for their absence, while others are hidden from view. Indeed, there would appear to be a neocolonialist orientation in operation in the textbooks, one that aligns English with the West. The chapter proposes arguments for considering cultural diversity in English language teaching (ELT) textbook design, promoting intercultural awareness, and acknowledging the contexts in which English is now being used. It also offers ways that teachers can critically reflect on existing ELT materials and argues for including different varieties of English in order to ensure a level of intercultural understanding and connectedness.
Abstract:
Service-oriented Architectures (SOA) and Web services leverage the technical value of solutions in the areas of distributed systems and cross-enterprise integration. The emergence of Internet marketplaces for business services is driving the need to describe services not only at a technical level, but also from a business and operational perspective. While SOA and Web services reside in an IT layer, organizations owning Internet marketplaces need to advertise and trade business services, which reside in a business layer. As a result, the gap between business and IT needs to be closed. This paper presents USDL (Unified Service Description Language), a specification language that describes services from a business, operational and technical perspective. USDL plays a major role in the Internet of Services, describing tradable services that are advertised in electronic marketplaces. The language has been tested using two service marketplaces as use cases.
Abstract:
In information retrieval (IR) research, increasing focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we combine the AM with Association Rule (AR) mining. The AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimate of the high-order term associations by discovering association rules in the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
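The segmentation step the abstract describes (chunking feedback documents with multiple sliding windows) can be sketched in a few lines; the window sizes, step, and toy document below are illustrative assumptions, not the paper's parameters.

```python
def sliding_chunks(tokens, window, step):
    """Segment a token list into overlapping fixed-size chunks."""
    if len(tokens) <= window:
        return [tokens]
    return [tokens[i:i + window]
            for i in range(0, len(tokens) - window + 1, step)]

doc = "query terms often co occur with related terms in nearby text".split()

# "Multiple sliding windows": run several window sizes over the same
# document and pool the resulting chunks for association mining.
chunks = [c for w in (4, 6) for c in sliding_chunks(doc, w, step=2)]
```

Term associations to the query subsets would then be mined from `chunks`, e.g. by counting how often a chunk containing a query-term subset also contains a candidate expansion term.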
Abstract:
In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
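As a hedged illustration of language-model-based review classification (a far simpler stand-in for the paper's text mining and semantic language models), a unigram model per class with add-one smoothing can score a review against spam and ham training text. The toy corpora below are invented for demonstration.

```python
import math
from collections import Counter

def log_likelihood(tokens, counts, total, vocab_size):
    """Unigram log-likelihood with add-one (Laplace) smoothing."""
    return sum(math.log((counts[t] + 1) / (total + vocab_size))
               for t in tokens)

# Invented toy training corpora, one per class.
spam = Counter("amazing amazing best buy best product ever".split())
ham = Counter("okay decent works fine arrived late".split())
vocab = len(set(spam) | set(ham))

def classify(review):
    toks = review.split()
    s = log_likelihood(toks, spam, sum(spam.values()), vocab)
    h = log_likelihood(toks, ham, sum(ham.values()), vocab)
    return "spam" if s > h else "ham"
```

A semantic language model of the kind the paper proposes would go beyond such surface unigrams, but the scoring skeleton (compare class-conditional likelihoods) is the same.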
Abstract:
Error correction is perhaps the most widely used method for responding to student writing. While various studies have investigated the effectiveness of providing error correction, there has been relatively little research incorporating teachers' beliefs, practices, and students' preferences in written error correction. The current study adopted features of an ethnographic research design in order to explore the beliefs and practices of ESL teachers, and investigate the preferences of L2 students regarding written error correction in the context of a language institute situated in the Brisbane metropolitan district. In this study, two ESL teachers and two groups of adult intermediate L2 students were interviewed and observed. The beliefs and practices of the teachers were elicited through interviews and classroom observations. The preferences of L2 students were elicited through focus group interviews. Responses of the participants were encoded and analysed. Results of the teacher interviews showed that teachers believe that providing written error correction has advantages and disadvantages. Teachers believe that providing written error correction helps students improve their proof-reading skills in order to revise their writing more efficiently. However, results also indicate that providing written error correction is very time consuming. Furthermore, teachers prefer to provide explicit written feedback strategies during the early stages of the language course, and move to a more implicit strategy of providing written error correction in order to facilitate language learning. On the other hand, results of the focus group interviews suggest that students regard their teachers' practice of written error correction as important in helping them locate their errors and revise their writing. However, students also feel that the process of providing written error correction is time consuming. 
Nevertheless, students want and expect their teachers to provide written feedback because they believe that the benefits they gain from receiving feedback on their writing outweigh the apparent disadvantages of their teachers' written error correction strategies.
Abstract:
In this paper, we argue that second language (L2) reading research, which has been informed by studies involving first language (L1) alphabetic English reading, may be less relevant to L2 readers with non-alphabetic reading backgrounds, such as Chinese readers with an L1 logographic (Chinese character) learning history. We provide both neuroanatomical and behavioural evidence from Chinese language reading studies to support our claims. The paper concludes with an argument outlining the need for a universal L2 reading model which can adequately account for readers with diverse L1 orthographic language learning histories.
Abstract:
Language use has proven to be the most complex and complicating of all Internet features, yet people and institutions invest enormously in language and cross-language features because they are fundamental to the success of the Internet’s past, present and future. The thesis focuses on the development of the latter – features that facilitate and signify linking between or across languages – in both their historical and current contexts. In the theoretical analysis, the conceptual platform of inter-language linking is developed both to accommodate efforts towards a new social complexity model for the co-evolution of languages and language content, and to create an open analytical space for language and cross-language related features of the Internet and beyond. The practiced uses of inter-language linking have changed over the last decades. Before and during the first years of the WWW, mechanisms of inter-language linking were at best important elements used to create new institutional or content arrangements, but on a large scale they were insignificant. This changed with the emergence of the WWW and its development into a web in which content in different languages co-evolves. The thesis traces the inter-language linking mechanisms that facilitated these dynamic changes by analysing what these linking mechanisms are, how their historical and current contexts can be understood, and what kinds of cultural-economic innovation they enable and impede. The study discusses this alongside four empirical cases of bilingual or multilingual media use, ranging from television and web services for languages of smaller populations, to large-scale web ventures involving multiple languages by the British Broadcasting Corporation, the Special Broadcasting Service Australia, Wikipedia and Google.
To sum up, the thesis introduces the concepts of ‘inter-language linking’ and the ‘lateral web’ to model the social complexity and co-evolution of languages online. The resulting model reconsiders existing social complexity models in that it is the first that can explain the emergence of large-scale, networked co-evolution of languages and language content facilitated by the Internet and the WWW. Finally, the thesis argues that the Internet enables an open space for language and cross-language related features and investigates how far this process is facilitated by (1) amateurs and (2) human-algorithmic interaction cultures.
Abstract:
The question of the conditions under which conceptual representation is compositional remains debated within cognitive science. This paper proposes a well-developed mathematical apparatus for a probabilistic representation of concepts, drawing upon methods developed in quantum theory to propose a formal test that can determine whether a specific conceptual combination is compositional or not. This test examines a joint probability distribution modeling the combination, asking whether or not it is factorizable. Empirical studies indicate that some combinations should be considered non-compositional.
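The factorizability test has a simple classical analogue: a joint distribution P(x, y) over a two-word combination is compositional in this narrow sense only if it equals the product of its marginals in every cell. A minimal numeric check (the toy distributions are invented, and this sketch omits the paper's quantum formalism):

```python
def marginals(joint):
    """Row and column marginals of a joint distribution given as a 2-D list."""
    px = [sum(row) for row in joint]
    py = [sum(col) for col in zip(*joint)]
    return px, py

def is_factorizable(joint, tol=1e-9):
    """True iff P(x, y) == P(x) * P(y) for every cell, up to tolerance."""
    px, py = marginals(joint)
    return all(abs(joint[i][j] - px[i] * py[j]) <= tol
               for i in range(len(px)) for j in range(len(py)))

independent = [[0.12, 0.18], [0.28, 0.42]]   # marginals 0.3/0.7 and 0.4/0.6
correlated  = [[0.50, 0.00], [0.00, 0.50]]   # the two variables are coupled
```

Here `independent` passes the test (it factorizes into its marginals), while `correlated` fails, marking the corresponding combination as non-compositional under this criterion.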
Abstract:
This thesis is about defining participation in the context of fostering research cohesion in the field of Participatory Design. The systematic and incremental building of new knowledge is the process by which science and research are advanced. This process requires a certain type of cohesion in the way research is undertaken for new knowledge to be built from the knowledge provided by previous projects and research. To support this process and to foster research cohesion, three conditions are necessary: common ground between practitioners, problem-space positioning, and adherence to clear research criteria. The challenge of fostering research cohesion in Participatory Design is apparent in at least four themes raised in the literature: the role of politics within Participatory Design epistemology, the role of participation, design with users, and the ability to translate theory into practice. These four thematic challenges frame the context in which the research gap is situated. These themes are further investigated, and the research gap – a general lack of research cohesion – along with one avenue for addressing this gap – a clear and operationalizable definition for participation – are identified. The intended contribution of this thesis is to develop a framework and visual tool to address this research gap. In particular, an initial approximation of a clear and operationalizable definition for participation will be proposed such that it can be used within the field of Participatory Design to run projects and foster research cohesion. In pursuit of this contribution, a critical lens is developed and used to analyse some of the principles and practices of Participatory Design that are regarded as foundational.
This lens addresses how to define participation in a way that adheres to basic principles of scientific rigour – namely, ensuring that the elements of a theory are operationalizable, falsifiable, generalizable, and useful, and treating participation as a construct rather than as a single variable. A systematic analysis is performed using this lens on the principles and practices that are considered foundational within the field. From this analysis, three components of the participation construct – impact, influence, and agency – are identified. These components are then broken down into two constituent variables each (six in all) and represented visually. Impact is described as the relationship between the quality and use of information. Influence is described as the relationship between the amount and scope of decision making. Agency is described as the relationship between the motivation of the participant and the solidarity of the group. Thus, as a construct, participation is described as the relationship between a participant’s impact, influence, and agency. In the concluding section, the value of this participation construct is explored for its utility in enhancing project work and fostering research cohesion. Three items of potential value emerge: the creation of a visual tool through the representation of these six constituent variables in one image; the elaboration of a common language for researchers based on the six constituent variables identified; and the ability to systematically identify and remedy participation gaps throughout the life of a project. While future research exploring the applicability of the participation construct in real-world projects is necessary, it is intended that this initial approximation of a participation construct in the form of the visual tool will serve as the basis for a cohesive and rigorous discussion about participation in Participatory Design.
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
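Sparse-coding classification, in its simplest 1-sparse form, reduces to scoring a test sample by its reconstruction error against a dictionary of "normal" atoms: a high residual flags an unusual event. The 2-D atoms and threshold below are invented for illustration and are not the paper's learned dictionary or DBM features.

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def residual_norm(x, atoms):
    """Reconstruction error after projecting x onto its best-matching
    unit-norm atom (a 1-sparse approximation of sparse coding)."""
    best = max(atoms, key=lambda a: abs(dot(x, a)))
    c = dot(x, best)
    return math.sqrt(sum((xi - c * ai) ** 2 for xi, ai in zip(x, best)))

# Hypothetical dictionary of unit-norm atoms describing normal motion.
atoms = [(1.0, 0.0), (0.0, 1.0)]
THRESHOLD = 1.0  # invented anomaly cut-off

def is_unusual(feature):
    return residual_norm(feature, atoms) > THRESHOLD
```

A feature lying near a dictionary atom reconstructs with a small residual and is classified as normal; a feature the dictionary cannot represent sparsely leaves a large residual and is flagged.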