965 results for Language production
Abstract:
Language Modeling (LM) has been successfully applied to Information Retrieval (IR). However, most existing LM approaches rely only on term occurrences in documents, queries and document collections. In traditional unigram-based models, terms (or words) are usually considered to be independent. In some recent studies, dependence models have been proposed to incorporate term relationships into LM, so that links can be created between words in the same sentence, and term relationships (e.g. synonymy) can be used to expand the document model. In this study, we further extend this family of dependence models in two ways: (1) term relationships are used to expand the query model instead of the document model, so that the query expansion process can be implemented naturally; (2) we exploit more sophisticated inferential relationships extracted with Information Flow (IF). Information flow relationships are not simply pairwise term relationships like those used in previous studies, but hold between a set of terms and another term. They allow for context-dependent query expansion. Our experiments on TREC collections show that our approach yields large and significant improvements. This study shows that LM is an appropriate framework for implementing effective query expansion.
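To make the idea above concrete, the following is a minimal sketch, not the authors' implementation, of query-model expansion in a language-modelling framework: the maximum-likelihood query model is interpolated with an expansion model derived from term relationships that hold between a set of query terms and another term. The `information_flow` structure, the `lam` weight and the example relationships are all illustrative assumptions.

```python
# Hedged sketch of context-dependent query expansion in an LM framework.
from collections import Counter

def expand_query_model(query_terms, information_flow, lam=0.7):
    """Interpolate the ML query model with relationship-based expansion terms.

    information_flow: maps a frozenset of query terms (the source context) to
    a dict of {related_term: strength}. Purely illustrative of set-to-term,
    context-dependent expansion, not the paper's exact estimation procedure.
    """
    # Maximum-likelihood estimate of the original query model P(w|Q)
    counts = Counter(query_terms)
    total = sum(counts.values())
    p_q = {w: c / total for w, c in counts.items()}

    # Collect expansion terms whose source context is contained in the query
    expansion = Counter()
    query_set = set(query_terms)
    for context, related in information_flow.items():
        if set(context) <= query_set:      # whole term set must match the query
            for term, strength in related.items():
                expansion[term] += strength
    z = sum(expansion.values()) or 1.0

    # Expanded model: P'(w|Q) = lam * P(w|Q) + (1 - lam) * P_exp(w|Q)
    vocab = set(p_q) | set(expansion)
    return {w: lam * p_q.get(w, 0.0) + (1 - lam) * expansion.get(w, 0.0) / z
            for w in vocab}

# Example usage with a made-up relationship set
flows = {frozenset({"space", "program"}): {"nasa": 0.6, "shuttle": 0.4}}
print(expand_query_model(["space", "program"], flows))
```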
Abstract:
This article presents a two-stage analytical framework that integrates ecological crop (animal) growth and economic frontier production models to analyse the productive efficiency of crop (animal) production systems. The ecological crop (animal) growth model estimates "potential" output levels given the genetic characteristics of crops (animals) and the physical conditions of locations where the crops (animals) are grown (reared). The economic frontier production model estimates "best practice" production levels, taking into account economic, institutional and social factors that cause farm and spatial heterogeneity. In the first stage, both ecological crop growth and economic frontier production models are estimated to calculate three measures of productive efficiency: (1) technical efficiency, as the ratio of actual to "best practice" output levels; (2) agronomic efficiency, as the ratio of actual to "potential" output levels; and (3) agro-economic efficiency, as the ratio of "best practice" to "potential" output levels. Also in the first stage, the economic frontier production model identifies factors that determine technical efficiency. In the second stage, agro-economic efficiency is analysed econometrically in relation to economic, institutional and social factors that cause farm and spatial heterogeneity. The proposed framework has several important advantages in comparison with existing proposals. Firstly, it allows the systematic incorporation of all physical, economic, institutional and social factors that cause farm and spatial heterogeneity in analysing the productive performance of crop and animal production systems. Secondly, location-specific physical factors are not modelled in the same way as other economic inputs of production. Thirdly, climate change and technological advancements in crop and animal sciences can be modelled in a "forward-looking" manner. Fourthly, knowledge in agronomy and data from experimental studies can be utilised for socio-economic policy analysis. The proposed framework can be easily applied in empirical studies due to the current availability of ecological crop (animal) growth models, farm or secondary data, and econometric software packages. The article highlights several directions for empirical studies that researchers may pursue in the future.
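As a small illustration of the three ratios defined above, the following sketch assumes that the actual, "best practice" (frontier) and "potential" (ecological model) output levels are already available for a farm; the numeric values are made up.

```python
# Hedged sketch of the three efficiency measures described in the abstract.
def efficiency_measures(actual, best_practice, potential):
    return {
        "technical_efficiency": actual / best_practice,        # actual vs. "best practice"
        "agronomic_efficiency": actual / potential,             # actual vs. "potential"
        "agro_economic_efficiency": best_practice / potential,  # "best practice" vs. "potential"
    }

# Illustrative numbers only (e.g. tonnes per hectare)
print(efficiency_measures(actual=4.2, best_practice=5.0, potential=7.0))
# technical ~ 0.84, agronomic = 0.60, agro-economic ~ 0.71
```

Note that, by construction, agronomic efficiency equals the product of technical and agro-economic efficiency, which is what links the two stages of the framework.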
Abstract:
Intuitively, any ‘bag of words’ approach in IR should benefit from taking term dependencies into account. Unfortunately, for years the results of exploiting such dependencies have been mixed or inconclusive. To improve the situation, this paper shows how the natural language properties of the target documents can be used to transform and enrich the term dependencies into more useful statistics. This is done in three steps. The term co-occurrence statistics of queries and documents are each represented by a Markov chain. The paper proves that such a chain is ergodic, and therefore its asymptotic behavior is unique, stationary, and independent of the initial state. Next, the stationary distribution is taken to model queries and documents, rather than their initial distributions. Finally, ranking is achieved following the customary language modeling paradigm. The main contribution of this paper is to argue why the asymptotic behavior of the document model is a better representation than just the document’s initial distribution. A secondary contribution is to investigate the practical application of this representation as queries become increasingly verbose. In the experiments (based on Lemur’s search engine substrate) the default query model was replaced by the stable distribution of the query. Just modeling the query this way already resulted in significant improvements over a standard language model baseline. The results were on a par with or better than more sophisticated algorithms that use fine-tuned parameters or extensive training. Moreover, the more verbose the query, the more effective the approach seems to become.
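The following is a minimal sketch, not the paper's implementation, of the central idea: build a term co-occurrence Markov chain for a piece of text, smooth it so the chain is ergodic, and use its stationary distribution (obtained here by power iteration) instead of the raw term frequencies as the representation. The window size, smoothing value and toy input are illustrative assumptions.

```python
# Hedged sketch: stationary distribution of a term co-occurrence Markov chain.
import numpy as np

def stationary_term_distribution(tokens, window=2, smoothing=0.01, iters=200):
    vocab = sorted(set(tokens))
    idx = {t: i for i, t in enumerate(vocab)}
    n = len(vocab)

    # Co-occurrence counts within a sliding window; smoothing keeps every
    # transition probability positive, so the chain is ergodic.
    C = np.full((n, n), smoothing)
    for i, t in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                C[idx[t], idx[tokens[j]]] += 1.0

    # Row-normalise to obtain the transition matrix of the Markov chain
    P = C / C.sum(axis=1, keepdims=True)

    # Power iteration converges to the unique stationary distribution
    pi = np.full(n, 1.0 / n)
    for _ in range(iters):
        pi = pi @ P
    return dict(zip(vocab, pi))

# Toy example: the resulting distribution would replace the raw term counts
# in the usual language-modeling ranking formula.
print(stationary_term_distribution("the cat sat on the mat".split()))
```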
Abstract:
Journeys with Friends Truna aka J. Turner, Giselle Rosman and Matt Ditton Panel Session description: We are no longer an industry (alone); we are a sector. Where the model once consisted of industry making games, we now see the rise of a cultural sector playing in the game space – industry, indies (for whatever that distinction implies), artists (another odd distinction), individuals and, well … everyone and their mums. This evolution has an effect – on audiences and who they are, what they expect and want, and how they understand the purpose and language of these ‘digital game forms’; on how we talk about our worlds and the kinds of issues that are raised; on what we create and how we create it; and on our communities and who we are. This evolution has an effect on how these works are understood within the wider social context and how we present this understanding to the next generation of makers and players. We can see the potential of this evolution from industry to sector in the rise of the Australian indie. We can see the potential fractures created by this evolution in the new voices that ask questions about diversity and social justice. And yet, we still see a ‘solution’-type reaction to the current changing state of our sector which announces the monolithic, Fordist model as desirable (albeit in smaller form) – with the subsequent ramifications for ‘training’ and the production of local talent. Experts talk about a mismatch of graduate skills and industry needs, insufficient linkages between industry and education providers, and the need to explore opportunities for the now-passing model in new spaces such as adver-games and serious games. Head counts of the Australian industry don’t recognise transmedia producers as being part of their purview or opportunity; they don’t count the rise of cultural, playful, game-inspired creative works as part of their team. Such perspectives are indeed relevant to the Australian Games Industry, but what about the emerging Australian Games Sector? How do we enable a future in such a space? This emerging sector is perhaps best represented by Melbourne’s Freeplay audience: a heady mix of indie developers, players, artists, critical thinkers and industry. Such audiences are no longer content with an ‘industry’ alone; they are the community who already see themselves as an important, vibrant cultural sector. Part of the discussion presented here seeks to identify and understand the resources, primarily in the context of community and educational opportunities, available to an evolving sector that now relies more on creative processes. This creative process and community building is already visibly growing within the context of smaller development studios, often involving more multiskilled production methodologies where the definition of ‘game’ clearly evolves beyond the traditional one.
Abstract:
Beryl & Gael discuss the ‘new’ metalanguage for knowledge about language presented in the Australian Curriculum English (ACARA, 2010). Their discussion connects to practice by recounting how one teacher scaffolds her students through detailed understandings of noun and adjective groups in reading activities. The stimulus text is the novel ‘A wrinkle in time’ (L’Engle, 1962, reproduced 2007) and the purpose is to build students’ understandings so they can work towards ‘expressing and developing ideas’ in written text (ACARA, 2010).
Abstract:
This paper reports results from a study exploring the multimedia search functionality of Chinese language search engines. Web searching in Chinese (Mandarin) is a growing research area and a technical challenge for popular commercial Web search engines. Few studies have been conducted on Chinese language search engines. We investigate two research questions: which Chinese language search engines provide multimedia searching, and what multimedia search functionalities are available in Chinese language Web search engines. Specifically, we examine each Web search engine's (1) features permitting Chinese language multimedia searches, (2) extent of search personalization and user control of multimedia search variables, and (3) the relationships between Web search engines and their features in the Chinese context. Key findings show that Chinese language Web search engines offer limited multimedia search functionality, and general search engines provide a wider range of features than specialized multimedia search engines. Study results have implications for Chinese Web users, Website designers and Web search engine developers.
Abstract:
Power relations and small and medium-sized enterprise strategies for capturing value in global production networks: visual effects (VFX) service firms in the Hollywood film industry, Regional Studies. This paper provides insights into the way in which non-lead firms manoeuvre in global value chains in the pursuit of a larger share of revenue and how power relations affect these manoeuvres. It examines the nature of value capture and power relations in the global supply of visual effects (VFX) services and the range of strategies VFX firms adopt to capture higher value in the global value chain. The analysis is based on a total of thirty-six interviews with informants in the industry in Australia, the United Kingdom and Canada, and a database of VFX credits for 3323 visual products for 640 VFX firms.
Abstract:
In second language classrooms, listening is gaining recognition as an active element in the processes of learning and using a second language. Currently, however, much of the teaching of listening prioritises comprehension without sufficient emphasis on the skills and strategies that enhance learners’ understanding of spoken language. This paper presents an argument for rethinking the emphasis on comprehension and advocates augmenting current teaching with an explicit focus on strategies. Drawing on the literature, the paper provides three models of strategy instruction for the teaching and development of listening skills. The models include steps for implementation that accord with their respective approaches to explicit instruction. The final section of the paper synthesises key points from the models as a guide for application in the second language classroom. The premise underpinning the paper is that the teaching of strategies can provide learners with active and explicit measures for managing and expanding their listening capacities, both in the learning and ‘real world’ use of a second language.
Abstract:
This chapter reports on a study of oracy in a first-year university Business course, with particular interest in the oracy demands for second language-using international students. The research is relevant at a time when Higher Education is characterised by the confluence of increased international enrolments, more dialogic teaching and learning, and imperatives for teamwork and collaboration. Data sources for the study included videotaped lectures and tutorials, course documents, student surveys, and an interview with the lecturer. The findings pointed to a complex, oracy-laden environment where interactive talk fulfilled high-stakes functions related to social inclusion, the co-construction of knowledge, and the accomplishment of assessment tasks. The salience of talk posed significant challenges for students negotiating these core functions in their second language. The study highlights the oracy demands in university courses and foregrounds the need for university teachers, curriculum writers and speaking test developers to recognise these demands and explicate them for the benefit of all students.
Abstract:
The concept of produsage developed from the realisation that new language was needed to describe the new phenomena emerging from the intersection of Web 2.0, user-generated content, and social media since the early years of the new millennium. When hundreds, thousands, maybe tens of thousands of participants utilise online platforms to collaborate in the development and continuous improvement of a wide variety of content – from software to informational resources to creative works – and when this work takes place through a series of more or less unplanned, ad hoc, almost random cooperative encounters, then to describe these processes using terms which were developed during the industrial revolution no longer makes much sense. When – exactly because what takes place here is no longer a form of production in any conventional sense of the word – the outcomes of these massively distributed collaborations appear in the form of constantly changing, permanently mutable bodies of work which are owned at once by everyone and no-one, by the community of contributors as a whole but by none of them as individuals, then to conceptualise them as fixed and complete products in the industrial meaning of the term is missing the point. When what results from these efforts is of a quality (in both depth and breadth) that enables it to substitute for, replace, and even undermine the business model of long-established industrial products, even though it relies precariously on volunteer contributions, and when those volunteer efforts make it possible for some contributors to find semi- or fully professional employment in their field, then conventional industrial logic is put on its head.
Abstract:
With the recognition that language both reflects and constructs culture and English now widely acknowledged as an international language, the cultural content of language teaching materials is now being problematised. Through a quantitative analysis, this chapter focuses on opportunities for intercultural understanding and connectedness through representations of the identities that appear in two leading English language textbooks. The analyses reveal that the textbooks orientate towards British and western identities with representations of people from non-European/non-Western backgrounds being notable for their absence, while others are hidden from view. Indeed there would appear to be a neocolonialist orientation in operation in the textbooks, one that aligns English with the West. The chapter proposes arguments for the consideration of cultural diversity in English language teaching (ELT) textbook design, and promoting intercultural awareness and acknowledging the contexts in which English is now being used. It also offers ways that teachers can critically reflect on existing ELT materials and proposes arguments for including different varieties of English in order to ensure a level of intercultural understanding and connectedness.
Abstract:
Service-oriented Architectures (SOA) and Web services leverage the technical value of solutions in the areas of distributed systems and cross-enterprise integration. The emergence of Internet marketplaces for business services is driving the need to describe services not only from a technical perspective, but also from a business and operational perspective. While SOA and Web services reside in an IT layer, organizations owning Internet marketplaces require the advertising and trading of business services, which reside in a business layer. As a result, the gap between business and IT needs to be closed. This paper presents USDL (Unified Service Description Language), a specification language to describe services from a business, operational and technical perspective. USDL plays a major role in the Internet of Services to describe tradable services which are advertised in electronic marketplaces. The language has been tested using two service marketplaces as use cases.
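Purely as an illustration of the kind of multi-perspective description USDL targets, the sketch below groups business, operational and technical facets of one hypothetical tradable service. The field names and values are invented for illustration and do not reproduce the actual USDL schema.

```python
# Illustrative (non-normative) multi-perspective service description.
invoice_service = {
    "business": {
        "provider": "Example Corp",                   # made-up provider
        "pricing": {"model": "per_invoice", "amount": 0.05, "currency": "EUR"},
        "legal": {"sla": "99.5% availability", "jurisdiction": "DE"},
    },
    "operational": {
        "availability_hours": "24x7",
        "support_channels": ["email", "phone"],
        "delivery_mode": "online",
    },
    "technical": {
        "interface": "SOAP",                          # the Web-service layer
        "endpoint": "https://example.com/invoicing",  # placeholder URL
        "operations": ["createInvoice", "getInvoiceStatus"],
    },
}
```

The point of the sketch is only that one marketplace listing must carry all three perspectives at once, which is the gap between business and IT that the paper describes.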
Abstract:
In information retrieval (IR) research, more and more focus has been placed on optimizing a query language model by detecting and estimating the dependencies between the query and the observed terms occurring in the selected relevance feedback documents. In this paper, we propose a novel Aspect Language Modeling framework featuring term association acquisition, document segmentation, query decomposition, and an Aspect Model (AM) for parameter optimization. Through the proposed framework, we advance the theory and practice of applying high-order and context-sensitive term relationships to IR. We first decompose a query into subsets of query terms. Then we segment the relevance feedback documents into chunks using multiple sliding windows. Finally, we discover the higher-order term associations, that is, the terms in these chunks with a high degree of association to the subsets of the query. In this process, we adopt an approach that combines the AM with Association Rule (AR) mining. In our approach, the AM not only considers the subsets of a query as “hidden” states and estimates their prior distributions, but also evaluates the dependencies between the subsets of a query and the observed terms extracted from the chunks of feedback documents. The AR mining provides a reasonable initial estimation of the high-order term associations by discovering association rules from the document chunks. Experimental results on various TREC collections verify the effectiveness of our approach, which significantly outperforms a baseline language model and two state-of-the-art query language models, namely the Relevance Model and the Information Flow model.
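As a rough illustration of two of the steps described above, the sketch below decomposes a query into subsets of its terms, segments feedback documents into overlapping sliding-window chunks, and counts how often non-query terms co-occur with each query subset. It only illustrates set-to-term associations; it is not the paper's Aspect Model or Association Rule estimation, and the window sizes and toy document are assumptions.

```python
# Hedged sketch: query decomposition, sliding-window segmentation, and
# simple counts of set-to-term associations.
from itertools import combinations
from collections import defaultdict

def sliding_chunks(tokens, size=8, step=4):
    # Overlapping windows over the token sequence
    return [tokens[i:i + size] for i in range(0, len(tokens), step)]

def subset_term_associations(query_terms, documents, size=8, step=4):
    # Subsets of the query (limited here to singletons and pairs)
    subsets = [frozenset(c) for r in (1, 2) for c in combinations(set(query_terms), r)]
    assoc = defaultdict(lambda: defaultdict(int))
    for doc in documents:
        for chunk in sliding_chunks(doc.split(), size, step):
            chunk_set = set(chunk)
            for s in subsets:
                if s <= chunk_set:                   # subset appears in this chunk
                    for term in chunk_set - set(query_terms):
                        assoc[s][term] += 1          # co-occurrence count
    return assoc

# Toy feedback document for illustration
docs = ["the space program launched a new shuttle for the space station"]
print(dict(subset_term_associations(["space", "program"], docs)))
```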
Abstract:
In the era of Web 2.0, huge volumes of consumer reviews are posted to the Internet every day. Manual approaches to detecting and analyzing fake reviews (i.e., spam) are not practical due to the problem of information overload. However, the design and development of automated methods of detecting fake reviews is a challenging research problem. The main reason is that fake reviews are specifically composed to mislead readers, so they may appear the same as legitimate reviews (i.e., ham). As a result, discriminatory features that would enable individual reviews to be classified as spam or ham may not be available. Guided by the design science research methodology, the main contribution of this study is the design and instantiation of novel computational models for detecting fake reviews. In particular, a novel text mining model is developed and integrated into a semantic language model for the detection of untruthful reviews. The models are then evaluated based on a real-world dataset collected from amazon.com. The results of our experiments confirm that the proposed models outperform other well-known baseline models in detecting fake reviews. To the best of our knowledge, the work discussed in this article represents the first successful attempt to apply text mining methods and semantic language models to the detection of fake consumer reviews. A managerial implication of our research is that firms can apply our design artifacts to monitor online consumer reviews to develop effective marketing or product design strategies based on genuine consumer feedback posted to the Internet.
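For readers who want a concrete starting point, the sketch below shows a generic supervised spam/ham review classifier. It is not the authors' text-mining or semantic language model; the tiny labelled reviews are fabricated placeholders for a real annotated corpus, and scikit-learn is assumed to be available.

```python
# Generic baseline for spam/ham review classification (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

reviews = [
    "Best product ever!!! Buy now, changed my life overnight",            # spam-like
    "Battery lasts about two days with moderate use, case feels solid",   # ham-like
    "Amazing amazing amazing, five stars, everyone must purchase",        # spam-like
    "Shipping took a week; the manual is thin but the device works",      # ham-like
]
labels = [1, 0, 1, 0]  # 1 = fake (spam), 0 = genuine (ham); placeholder labels

# Word and bigram TF-IDF features feed a simple linear classifier
vectoriser = TfidfVectorizer(ngram_range=(1, 2))
X = vectoriser.fit_transform(reviews)
classifier = LogisticRegression().fit(X, labels)

# Classify a new, made-up review
new_review = ["Unbelievable miracle gadget, buy three immediately"]
print(classifier.predict(vectoriser.transform(new_review)))
```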