42 results for Greek letters (Collections)


Relevance:

20.00%

Publisher:

Abstract:

It is possible to write many different histories of Australian television, and these different histories draw on different primary sources. The ABC of Drama, for example, draws on the ABC Document Archives (Jacka 1991). Most of the information for Images and Industry: television drama production in Australia is taken from original interviews with television production staff (Moran 1985). Ending the Affair draws on archival work as well as ‘over ten years of watching … Australian television current affairs’ (Turner 2005, xiii). Moran’s Guide to Australian TV Series draws exhaustively on extant archives: the ABC Document Archives, material sourced through the ABC Drama department, the Australian Film Commission, the library of the Australian Film, Television and Radio School, and the Australian Film Institute (Moran 1993, xi)...

Relevance:

20.00%

Publisher:

Abstract:

The use of symbols and abbreviations adds uniqueness and complexity to the mathematical language register. In this article, the reader’s attention is drawn to the multitude of symbols and abbreviations used in mathematics. The conventions that underpin their use, and the linguistic difficulties that learners of mathematics may encounter because of this symbolic language, are discussed. Items from the 2010 NAPLAN numeracy tests are used to illustrate the complexities of the symbolic language of mathematics.
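The following short LaTeX fragment gives generic examples of the kinds of conventions at issue; the examples are illustrative only and are not drawn from the NAPLAN items analysed in the article.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Generic illustrations of symbolic conventions (not NAPLAN items):
% the same letter can play different roles, and juxtaposition is read
% differently depending on context.
\[
  \pi \approx 3.14159 \text{ is a constant, yet } \pi(10) = 4
  \text{ counts the primes up to } 10;
\]
\[
  \sum_{i=1}^{n} x_i \text{ uses } \Sigma \text{ as an operator, while elsewhere }
  \Sigma \text{ names an alphabet;}
\]
\[
  3x = 3 \times x, \qquad \text{whereas } 3\tfrac{1}{2} = 3 + \tfrac{1}{2}.
\]
\end{document}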

Relevance:

20.00%

Publisher:

Abstract:

In 2010, the State Library of Queensland (SLQ) donated its out-of-copyright Queensland images to Wikimedia Commons. One direct effect of publishing the collections on Wikimedia Commons is that general audiences can participate and help the library process the images in the collection. This paper discusses a project that explored user participation in the categorisation of the State Library of Queensland digital image collections. The outcomes of this project can be used to gain a better understanding of the user participation that leads to improved access to library digital collections. Two data collection techniques were used: document analysis and interviews. Document analysis was performed on the Wikimedia Commons monthly reports, while interviews served as the main data collection technique. The data collected from document analysis helped the researchers devise appropriate interview questions. The interviews were undertaken with participants divided into two groups: SLQ staff members and Wikimedians (users who participate in Wikimedia). The two sets of data were analysed independently and then compared, which allowed the researchers to understand how the experience of categorisation differs between the librarians’ and the users’ perspectives. The paper discusses the preliminary findings that emerged from each participant group. The research provides preliminary information about the extent of user participation in the categorisation of SLQ collections on Wikimedia Commons, which SLQ and other interested libraries can use when categorising and describing their digital content to improve user access to their collections in the future.
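The categorisation work contributed by Wikimedians is visible programmatically through the MediaWiki API on Wikimedia Commons. The short Python sketch below reads the categories attached to a file; the file title is a placeholder rather than a specific SLQ image, and the script is illustrative only, not code from the project described.

import json
import urllib.parse
import urllib.request

API = "https://commons.wikimedia.org/w/api.php"

def categories_of(file_title):
    """List the categories currently attached to a Commons file."""
    params = urllib.parse.urlencode({
        "action": "query",
        "prop": "categories",
        "titles": file_title,
        "cllimit": "max",
        "format": "json",
    })
    request = urllib.request.Request(
        f"{API}?{params}",
        headers={"User-Agent": "category-demo/0.1 (illustrative script)"},
    )
    with urllib.request.urlopen(request) as response:
        data = json.load(response)
    page = next(iter(data["query"]["pages"].values()))
    return [c["title"] for c in page.get("categories", [])]

# Placeholder title; any Commons file name could be substituted here.
print(categories_of("File:Example.jpg"))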

Relevance:

20.00%

Publisher:

Abstract:

In this paper, we describe a machine-translated parallel English corpus for the NTCIR Chinese, Japanese and Korean (CJK) Wikipedia collections. This document collection is named the CJK2E Wikipedia XML corpus. The corpus could be used by the information retrieval research community, and for knowledge sharing in Wikipedia, in many ways; for example, it could support experiments in cross-lingual information retrieval, cross-lingual link discovery, or omni-lingual information retrieval research. Furthermore, the translated CJK articles could be used to expand the current coverage of the English Wikipedia.
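A hedged sketch of how such an XML corpus might be consumed for retrieval experiments follows; the element names (article, id, title, body) are assumptions made for illustration and may differ from the actual CJK2E schema.

import xml.etree.ElementTree as ET
from collections import defaultdict

# A tiny stand-in document; real corpus files would be read from disk.
SAMPLE = """<collection>
  <article lang="ja-en"><id>1</id><title>Mount Fuji</title>
    <body>Mount Fuji is the highest mountain in Japan.</body></article>
  <article lang="ko-en"><id>2</id><title>Hangul</title>
    <body>Hangul is the Korean alphabet.</body></article>
</collection>"""

def build_index(xml_text):
    """Build a very small inverted index: lower-cased term -> article ids."""
    index = defaultdict(set)
    for article in ET.fromstring(xml_text).iter("article"):
        doc_id = article.findtext("id")
        text = f"{article.findtext('title')} {article.findtext('body')}"
        for term in text.lower().split():
            index[term.strip(".,")].add(doc_id)
    return index

index = build_index(SAMPLE)
print(sorted(index["japan"]))   # -> ['1']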

Relevance:

20.00%

Publisher:

Abstract:

The Queensland University of Technology (QUT) Library, like many other academic and research institution libraries in Australia, has been collaborating with a range of academic and service provider partners to develop research data management services and collections. Three main strategies are being employed, and an overview of the process, infrastructure, usage and benefits of each of these service aspects is provided. The development of processes and infrastructure to facilitate the strategic identification and management of QUT-developed datasets has been a major focus. A number of Australian National Data Service (ANDS) sponsored projects, including Seeding the Commons, Metadata Hub / Store, Data Capture and Gold Standard Record Exemplars, have provided or will provide QUT with a data registry system, linkages to storage, processes for identifying and describing datasets, and a degree of academic awareness. QUT supports open access and has established a culture of making its research outputs available via the QUT ePrints institutional repository. Incorporating open access research datasets into the library collections is an equally important aspect of facilitating the adoption of data-centric eresearch methods. Some datasets are available commercially, and the library has collaborated with QUT researchers, particularly in the QUT Business School, to identify and procure a rapidly growing range of financial datasets to support research. The library undertakes the licensing and uses the Library Resource Allocation to pay for the subscriptions. It is a new area of collection development, with much still to be learned. The final strategy discussed is the library acting as “data broker”: QUT Library has been working with researchers to identify such datasets and to undertake licensing, payment and access as a centrally supported service on behalf of researchers.

Relevance:

20.00%

Publisher:

Abstract:

Collections of solid particles from the Earth's stratosphere have been a significant part of atmospheric research programs since 1965 [1], but it has only been in the past decade that space-related disciplines have provided the impetus for a continued interest in these collections. Early research on specific particle types collected from the stratosphere established that interplanetary dust particles (IDPs) can be collected efficiently and in reasonable abundance using flat-plate collectors [2-4]. The tenacity of Brownlee and co-workers in this subfield of cosmochemistry has led to the establishment of a successful IDP collection and analysis program (using flat-plate collectors on high-flying aircraft) based on samples available for distribution from Johnson Space Center [5]. Other stratospheric collections are made, but the program at JSC offers a unique opportunity to study well-documented, individual particles (or groups of particles) from a wide variety of sources [6]. The nature of the collection and curation process, as well as the timeliness of some sampling periods [7], ensures that all data obtained from stratospheric particles are a valuable resource for scientists from a wide range of disciplines. A few examples of the uses of these stratospheric dust collections are outlined below.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents novel techniques for addressing the problems of continuous change and inconsistencies in large process model collections. The developed techniques treat process models as a collection of fragments and facilitate version control, standardization and automated process model discovery using fragment-based concepts. Experimental results show that the presented techniques are beneficial in consolidating large process model collections, specifically when there is a high degree of redundancy.
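As a rough illustration of the fragment-based idea (a sketch under simplifying assumptions, not the technique developed in the thesis), fragments can be reduced to canonical hashes so that redundancy across a model collection becomes an index lookup:

import hashlib
from collections import defaultdict

def fragment_hash(edges):
    """Canonical hash of a fragment given as (source, target) activity edges."""
    canonical = ";".join(f"{a}->{b}" for a, b in sorted(edges))
    return hashlib.sha1(canonical.encode("utf-8")).hexdigest()

def index_fragments(models):
    """models: {model_id: [fragment, ...]}; each fragment is a list of edges."""
    index = defaultdict(set)
    for model_id, fragments in models.items():
        for fragment in fragments:
            index[fragment_hash(fragment)].add(model_id)
    return index

models = {
    "claims_v1": [[("check", "assess"), ("assess", "pay")]],
    "claims_v2": [[("check", "assess"), ("assess", "pay")], [("pay", "archive")]],
}
shared = {h: ids for h, ids in index_fragments(models).items() if len(ids) > 1}
print(shared)   # the fragment shared by both versions appears under one hash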

Relevance:

20.00%

Publisher:

Abstract:

Automated process discovery techniques aim at extracting process models from information system logs. Existing techniques in this space are effective when applied to relatively small or regular logs, but generate spaghetti-like and sometimes inaccurate models when confronted with logs with high variability. In previous work, trace clustering has been applied in an attempt to reduce the size and complexity of automatically discovered process models. The idea is to split the log into clusters and to discover one model per cluster. This leads to a collection of process models – each one representing a variant of the business process – as opposed to an all-encompassing model. Still, models produced in this way may exhibit unacceptably high complexity and low fitness. In this setting, this paper presents a two-way divide-and-conquer process discovery technique, wherein the discovered process models are split on the one hand by variants and on the other hand hierarchically using subprocess extraction. Splitting is performed in a controlled manner in order to achieve user-defined complexity or fitness thresholds. Experiments on real-life logs show that the technique produces collections of models substantially smaller than those extracted by applying existing trace clustering techniques, while allowing the user to control the fitness of the resulting models.
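To make the split-until-a-threshold-is-met idea concrete, the following Python sketch splits a toy log recursively until each cluster falls under a complexity bound; the splitting criterion and the complexity measure are crude stand-ins chosen for brevity, not the clustering or fitness metrics used in the paper.

from collections import defaultdict

def directly_follows(traces):
    """All distinct directly-follows pairs of activities in a set of traces."""
    pairs = set()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            pairs.add((a, b))
    return pairs

def split_by_first_activity(traces):
    """Crude variant split used purely for illustration."""
    groups = defaultdict(list)
    for trace in traces:
        groups[trace[0] if trace else ""].append(trace)
    return list(groups.values())

def divide_until_simple(traces, max_pairs=10):
    """Recursively split a log until each cluster is 'simple enough' to model."""
    if len(directly_follows(traces)) <= max_pairs or len(traces) <= 1:
        return [traces]                       # simple enough: one model per cluster
    parts = split_by_first_activity(traces)
    if len(parts) == 1:                       # cannot split further by this criterion
        return [traces]
    clusters = []
    for part in parts:
        clusters.extend(divide_until_simple(part, max_pairs))
    return clusters

log = [("a", "b", "c"), ("a", "c", "b"), ("d", "e"), ("d", "e", "f")]
for i, cluster in enumerate(divide_until_simple(log, max_pairs=2)):
    print(f"cluster {i}: {cluster}")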

Relevance:

20.00%

Publisher:

Abstract:

The television quiz program Letters and Numbers, broadcast on the SBS network, has recently become quite popular in Australia. This paper considers an implementation in Excel 2010 and its potential as a vehicle to showcase a range of mathematical and computing concepts and principles.
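As a brief sketch of what such an implementation involves (in Python here, not the Excel 2010 workbook the paper considers), the Letters Game reduces to checking which dictionary words can be assembled from the nine drawn letters; the word list below is a placeholder.

from collections import Counter

# Placeholder word list; a real run would load a full dictionary file.
WORDS = ["ratio", "train", "rational", "notion", "iteration"]

def best_word(letters, words=WORDS):
    """Longest word that can be assembled from the drawn letters."""
    rack = Counter(letters.lower())
    playable = [w for w in words if not (Counter(w) - rack)]
    return max(playable, key=len, default=None)

print(best_word("RATIONALE"))   # -> "rational" with this placeholder list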

Relevance:

20.00%

Publisher:

Abstract:

The television quiz program Letters and Numbers, broadcast on the SBS network, has recently become quite popular in Australia. This paper explores the potential of this game to illustrate a range of fundamental concepts of computer science and mathematics and to engage student interest in them. The Numbers Game in particular has a rich mathematical structure whose analysis and solution involve concepts of counting and problem size, discrete (tree) structures, language theory, recurrences, computational complexity, and even advanced memory management. This paper presents an analysis of these games and their teaching applications, and reports some initial results from their use in student assignments.
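The tree search behind the Numbers Game can be sketched compactly. The solver below is an illustration of the recursive structure the abstract alludes to, not the authors' implementation, and the example uses four numbers to keep the search tree small.

# Illustrative recursive solver for a Numbers-Game-style puzzle: combine some
# or all of the numbers with +, -, * and exact / to reach the target.

def solve(numbers, target):
    """Return one expression that evaluates to target, or None."""
    def search(items):
        # items: list of (value, expression-string) pairs still available
        for value, expr in items:
            if value == target:
                return expr
        n = len(items)
        for i in range(n):
            for j in range(n):
                if i == j:
                    continue
                (a, ea), (b, eb) = items[i], items[j]
                rest = [items[k] for k in range(n) if k not in (i, j)]
                candidates = [(a + b, f"({ea}+{eb})"), (a * b, f"({ea}*{eb})")]
                if a > b:
                    candidates.append((a - b, f"({ea}-{eb})"))
                if b != 0 and a % b == 0:
                    candidates.append((a // b, f"({ea}/{eb})"))
                for value, expr in candidates:
                    found = search(rest + [(value, expr)])
                    if found:
                        return found
        return None
    return search([(n, str(n)) for n in numbers])

# A four-number example keeps the search tree small; the broadcast game draws
# six numbers, which is where the counting and complexity questions arise.
print(solve([25, 8, 3, 2], 206))   # e.g. ((25*8)+(3*2))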