992 results for Metadata


Relevance: 10.00%

Abstract:

QUT Library and the High Performance Computing and Research Support (HPC) Team have been collaborating on developing and delivering a range of research support services, including those designed to assist researchers to manage their data. QUT’s Management of Research Data policy has been available since 2010 and is complemented by the Data Management Guidelines and Checklist. QUT has partnered with the Australian National Data Service (ANDS) on a number of projects, including Seeding the Commons, Metadata Hub (with Griffith University) and the Data Capture program. The HPC Team has also been developing the QUT Research Data Repository based on the Arcitecta Mediaflux system and has run several pilots with faculties. Library and HPC staff have been trained in the principles of research data management and are providing a range of research data management seminars and workshops for researchers and HDR students.

Relevance: 10.00%

Abstract:

Website customization can help to better fulfill the needs and wants of individual customers. It is an important aspect of customer satisfaction with online banking, especially among the younger generation. This dimension, however, is poorly addressed, particularly in the Australian context. The proposed research aims to fill this gap by exploring the use of a popular Web 2.0 technology, known as tags or user-assigned metadata, to facilitate customization at the interaction level. A prototype is proposed to demonstrate the various interaction-based customization types, evaluated through a series of experiments to assess the impact on customer satisfaction. The expected research outcome is a set of guidelines, akin to interaction design patterns, for aiding the design and implementation of the proposed tag-based approach.

Relevance: 10.00%

Abstract:

This article compares YouTube and the National Film and Sound Archive (NFSA) as resources for television historians interested in viewing old Australian television programs. The author searched for seventeen important television programs, identified in a previous research project, to compare what was available in the two archives and how easy it was to find. The analysis focused on differences in curatorial practices of accessioning and cataloguing. NFSA is stronger in current affairs and older programs, while YouTube is stronger in game shows and lifestyle programs. YouTube is stronger than the NFSA on “human interest” material—births, marriages, and deaths. YouTube accessioning more strongly accords with popular histories of Australian television. Both NFSA and YouTube offer complete episodes of programs, while YouTube also offers many short clips of “moments.” YouTube has more surprising pieces of rare ephemera. YouTube cataloguing is more reliable than that of the NFSA, with fewer broken links. The YouTube metadata can be searched more intuitively. The NFSA generally provides more useful reference information about production and broadcast dates.

Relevance: 10.00%

Abstract:

The final shape of the "Internet of Things" that ubiquitous computing promises relies on a cybernetic system of inputs (in the form of sensory information), computation or decision making (based on the prefiguration of rules, contexts, and user-generated or user-defined metadata), and outputs (associated action from ubiquitous computing devices). My interest in this paper lies in the computational intelligences that suture these positions together, and in how positioning these intelligences as autonomous agents extends the dialogue between human users and ubiquitous computing technology. Drawing specifically on the scenarios surrounding the employment of ubiquitous computing within aged care, I argue that agency is something that cannot be traded without serious consideration of the associated ethics.
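The cybernetic loop described above can be sketched in a few lines: sensory inputs feed a set of prefigured rules, which produce actions for devices to carry out. Every name here (sensor readings, thresholds, actions) is invented to illustrate an aged-care scenario, not drawn from a real system:

```python
# Hypothetical sketch of a cybernetic loop: inputs -> rule-based decision -> outputs.
# All sensor names, thresholds and actions below are illustrative assumptions.

def decide(readings, rules):
    """Apply prefigured rules to sensor readings and return the actions to take."""
    actions = []
    for condition, action in rules:
        if condition(readings):
            actions.append(action)
    return actions

# Prefigured rules for an invented aged-care scenario.
rules = [
    (lambda r: r["motion"] == 0 and r["hours_idle"] > 12, "alert_carer"),
    (lambda r: r["room_temp_c"] < 15, "raise_heating"),
]

readings = {"motion": 0, "hours_idle": 14, "room_temp_c": 21}
print(decide(readings, rules))  # ['alert_carer']
```

The ethical question the paper raises sits precisely in this loop: once the rule set acts autonomously, the human user is no longer the sole locus of agency.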

Relevance: 10.00%

Abstract:

This paper investigates the existing and potential scope of Dublin Core metadata in Knowledge Management contexts. Modelling knowledge is identified as a conceptual prerequisite in this investigation, principally for the purpose of clarifying scope prior to identifying the range of tasks associated with organising knowledge. A variety of models is presented and the relationships between data, information, and knowledge are discussed. It is argued that the two most common modes of organisation, hierarchies and networks, influence the effectiveness and flow of knowledge. Practical perspective is provided by reference to implementations and projects that provide evidence of how DC metadata is applied in such contexts. A sense-making model is introduced that can be used as a shorthand reference for identifying useful facets of knowledge that might be described using metadata. The discussion aims to present this model in a way that both validates current applications and points to potential novel applications.
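For concreteness, a minimal Dublin Core description of a knowledge resource might look like the following simple mapping of DC elements to values. The record content is invented for illustration; only the element names are standard DC:

```python
# An invented Dublin Core record for a knowledge-management resource,
# expressed as a plain mapping of standard DC element names to values.
dc_record = {
    "dc:title": "Post-project review: data migration lessons",
    "dc:creator": "Knowledge Management Team",
    "dc:subject": "data migration; lessons learned",
    "dc:type": "Text",
    "dc:relation": "project-report-2012",  # link to a related resource
    "dc:description": "Sense-making notes capturing what worked and what failed.",
}
print(sorted(dc_record))
```

Note how `dc:relation` supports the network mode of organisation discussed above, while `dc:subject` typically carries terms from a hierarchy.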

Relevance: 10.00%

Abstract:

Encryption is a well-established technology for protecting sensitive data. However, once encrypted, the data can no longer be easily queried, and database performance depends on how the sensitive data is encrypted. In this paper we review conventional encryption methods, which support only partial querying, and propose an encryption method for numerical data that can be queried effectively. The proposed system includes the design of the service scenario and the metadata.
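One common technique for querying encrypted numeric data, which the abstract's "metadata" design is reminiscent of, is bucketization: each value is stored encrypted together with a coarse bucket id, so the server can answer range queries approximately and the client refines the result after decryption. The sketch below illustrates that general idea only; it is not the paper's exact scheme, and the XOR "cipher" is a placeholder, not secure encryption:

```python
# Bucketization sketch: ciphertext plus coarse bucket metadata per value.
# The XOR cipher is a stand-in for real encryption (illustration only).

def bucket_id(value, width=10):
    """Coarse bucket metadata stored in plaintext alongside the ciphertext."""
    return value // width

def encrypt(value, key=42):    # toy placeholder cipher, NOT secure
    return value ^ key

def decrypt(ciphertext, key=42):
    return ciphertext ^ key

# Encrypted table: (ciphertext, bucket metadata)
table = [(encrypt(v), bucket_id(v)) for v in [3, 17, 25, 42, 58]]

def range_query(table, lo, hi, width=10):
    """Server filters candidates by bucket metadata; client decrypts and filters exactly."""
    buckets = set(range(bucket_id(lo, width), bucket_id(hi, width) + 1))
    candidates = [ct for ct, b in table if b in buckets]   # server-side step
    return sorted(v for v in map(decrypt, candidates) if lo <= v <= hi)

print(range_query(table, 10, 30))  # [17, 25]
```

The trade-off is visible in the code: narrower buckets mean fewer false candidates shipped to the client, but leak more information about value order.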

Relevance: 10.00%

Abstract:

The development of the Learning and Teaching Academic Standards Statement for Architecture (the Statement) centred on requirements for the Master of Architecture and proceeded alongside similar developments in the building and construction discipline under the guidance and support of the Australian Deans of Built Environment and Design (ADBED). Through their representation of Australian architecture programs, ADBED have provided high-level leadership for the Learning and Teaching Academic Standards Project in Architecture (LTAS Architecture). The threshold learning outcomes (TLOs), the description of the nature and extent of the discipline, and accompanying notes were developed through wide consultation with the discipline and profession nationally. They have been considered and debated by ADBED on a number of occasions and have, in their final form, been strongly endorsed by the Deans. ADBED formed the core of the Architecture Reference Group (chaired by an ADBED member) that drew together representatives of every peak organisation for the profession and discipline in Australia. The views of the architectural education community and profession have been provided both through individual submissions and the voices of a number of peak bodies. Over two hundred individuals from the practising profession, the academic workforce and the student cohort have worked together to build consensus about the capabilities expected of a graduate of an Australian Master of Architecture degree. It was critical from the outset that the Statement should embrace the wisdom of the greater ‘tribe’, should ensure that graduates of the Australian Master of Architecture were eligible for professional registration and, at the same time, should allow for scope and diversity in the shape of Australian architectural education. A consultation strategy adopted by the Discipline Scholar involved meetings and workshops in Perth, Melbourne, Sydney, Canberra and Brisbane.
Stakeholders from all jurisdictions and most universities participated in the early phases of consultation through a series of workshops that concluded late in October 2010. The Draft Architecture Standards Statement was formed from these early meetings and consultation in respect of that document continued through early 2011. This publication represents the outcomes of work to establish an agreed standards statement for the Master of Architecture. Significant further work remains to ensure the alignment of professional accreditation and recognition procedures with emerging regulatory frameworks cascading from the establishment of the Tertiary Education Quality and Standards Agency (TEQSA). The Australian architecture community hopes that mechanisms can be found to integrate TEQSA’s quality assurance purpose with well-established and understood systems of professional accreditation to ensure the good standing of Australian architectural education into the future. The work to build renewed and integrated quality assurance processes and to foster the interests of this project will continue, for at least the next eighteen months, under the auspices of Australian Learning and Teaching Council (ALTC)-funded Architecture Discipline Network (ADN), led by ADBED and Queensland University of Technology. The Discipline Scholar gratefully acknowledges the generous contributions given by those in stakeholder communities to the formulation of the Statement. Professional and academic colleagues have travelled and gathered to shape the Standards Statement. Debate has been vigorous and spirited and the Statement is rich with the purpose, critical thinking and good judgement of the Australian architectural education community. The commitments made to the processes that have produced this Statement reflect a deep and abiding interest by the constituency in architectural education. 
This commitment bodes well for the vibrancy and productivity of the emergent Architecture Discipline Network (ADN). Endorsement, in writing, was received from the Australian Institute of Architects National Education Committee (AIA NEC): The National Education Committee (NEC) of the Australian Institute of Architects thank you for your work thus far in developing the Learning and Teaching Academic Standards for Architecture. In particular, we acknowledge your close consultation with the NEC on the project, along with a comprehensive cross-section of the professional and academic communities in architecture. The TLOs with the nuanced levels of capacities – to identify, develop, explain, demonstrate etc. – are described at an appropriate level to be understood as minimum expectations for a Master of Architecture graduate. The Architects Accreditation Council of Australia (AACA) has noted: There is a clear correlation between the current processes for accreditation and what may be the procedures in the future following the current review. The requirement of the outcomes as outlined in the draft paper to demonstrate capability is an appropriate way of expressing the measure of whether the learning outcomes have been achieved. The measure of capability as described in the outcome statements is enhanced with explanatory descriptions in the accompanying notes.

Relevance: 10.00%

Abstract:

As one of the first institutional repositories in Australia and the first in the world to have an institution-wide deposit mandate, QUT ePrints has great ‘brand recognition’ within the University (Queensland University of Technology) and beyond. The repository is managed by the library but, over the years, the Library’s repository team has worked closely with other departments (especially the Office of Research and IT Services) to ensure that QUT ePrints was embedded into the business processes and systems our academics use regularly. For example, the repository is the source of the publication information which displays on each academic’s Staff Profile page. The repository pulls in citation data from Scopus and Web of Science and displays the data in the publications records. Researchers can monitor their citations at a glance via the repository ‘View’ which displays all their publications. A trend in recent years has been to populate institutional repositories with publication details imported from the University’s research information system (RIS). The main advantage of the RIS to Repository workflow is that it requires little input from the academics as the publication details are often imported into the RIS from publisher databases. Sadly, this is also its main disadvantage. Generally, only the metadata is imported from the RIS and the lack of engagement by the academics results in very low proportions of records with open access full-texts. Consequently, while we could see the value of integrating the two systems, we were determined to make the repository the entry point for publication data. In 2011, the University funded a project to convert a number of paper-based processes into web-based workflows. This included a workflow to replace the paper forms academics used to complete to report new publications (which were later used by the data entry staff to input the details into the RIS). 
Publication details and full-text files are uploaded to the repository (by the academics or their nominees). Each night, the repository (QUT ePrints) pushes the metadata for new publications into a holding table. The data is checked by Office of Research staff the next day and then ‘imported’ into the RIS. Publication details (including the repository URLs) are pushed from the RIS to the Staff Profiles system. Previously, academics were required to supply the Office of Research with photocopies of their publications (for verification/auditing purposes). The repository is now the source of verification information. Library staff verify the accuracy of the publication details and, where applicable, the peer review status of the work. The verification metadata is included in the information passed to the Office of Research. The RIS at QUT comprises two separate systems built on an Oracle database: a proprietary product (ResearchMaster) plus a locally produced system known as RAD (Research Activity Database). The repository platform is EPrints, which is built on a MySQL database. This partly explains why the data is passed from one system to the other via a holding table. The new workflow went live in early April 2012. Tests of the technical integration have all been successful. At the end of the first 12 months, the impact of the new workflow on the proportion of full-texts deposited will be evaluated.
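The nightly push into a holding table can be sketched as follows. This is a hypothetical miniature using SQLite for both stores (the real systems are EPrints on MySQL and a RIS on Oracle); the table and column names are invented, not QUT's actual schema:

```python
# Toy holding-table workflow: repository -> holding table -> (later) RIS import.
# SQLite stands in for both databases; schema names are illustrative assumptions.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE eprints (id INTEGER PRIMARY KEY, title TEXT, pushed INTEGER DEFAULT 0);
    CREATE TABLE holding (eprint_id INTEGER, title TEXT, checked INTEGER DEFAULT 0);
    INSERT INTO eprints (title) VALUES ('New dataset paper'), ('Old record');
    UPDATE eprints SET pushed = 1 WHERE title = 'Old record';
""")

def nightly_push(db):
    """Copy metadata for not-yet-pushed publications into the holding table."""
    rows = db.execute("SELECT id, title FROM eprints WHERE pushed = 0").fetchall()
    db.executemany("INSERT INTO holding (eprint_id, title) VALUES (?, ?)", rows)
    db.execute("UPDATE eprints SET pushed = 1 WHERE pushed = 0")
    return len(rows)

print(nightly_push(db))  # prints 1: one new record moved to the holding table
```

The holding table decouples the two systems: the repository writes at night, and Office of Research staff check and import from it the next day at their own pace.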

Relevance: 10.00%

Abstract:

The Queensland University of Technology (QUT) in Brisbane, Australia, is involved in a number of projects funded by the Australian National Data Service (ANDS). Currently, QUT is working on a project (Metadata Stores Project) that uses open source VIVO software to aid in the storage and management of metadata relating to data sets created/managed by the QUT research community. The registry (called QUT Research Data Finder) will support the sharing and reuse of research datasets, within and external to QUT. QUT uses VIVO for both the display and the editing of research metadata.

Relevance: 10.00%

Abstract:

The Queensland University of Technology (QUT) Library, like many other academic and research institution libraries in Australia, has been collaborating with a range of academic and service provider partners to develop a range of research data management services and collections. Three main strategies are being employed, and an overview of the process, infrastructure, usage and benefits of each of these service aspects is provided. The development of processes and infrastructure to facilitate the strategic identification and management of QUT-developed datasets has been a major focus. A number of Australian National Data Service (ANDS) sponsored projects – including Seeding the Commons, Metadata Hub / Store, Data Capture and Gold Standard Record Exemplars – have provided or will provide QUT with a data registry system, linkages to storage, processes for identifying and describing datasets, and a degree of academic awareness. QUT supports open access and has established a culture for making its research outputs available via the QUT ePrints institutional repository. Incorporating open access research datasets into the library collections is an equally important aspect of facilitating the adoption of data-centric eresearch methods. Some datasets are available commercially, and the library has collaborated with QUT researchers, especially in the QUT Business School, to identify and procure a rapidly growing range of financial datasets to support research. The library undertakes licensing and uses the Library Resource Allocation to pay for the subscriptions. It is a new area of collection development with much to be learned. The final strategy discussed is the library acting as “data broker”. QUT Library has been working with researchers to identify these datasets and undertake the licensing, payment and access as a centrally supported service on behalf of researchers.

Relevance: 10.00%

Abstract:

Speaker diarization is the process of annotating an input audio with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of ‘who spoke when’. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology, through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meetings audio.
Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed, to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single threshold based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
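The segmentation and clustering decisions above hinge on a model-selection test: is a stretch of audio better modelled by one Gaussian or by two? The standard non-Bayesian baseline for this decision is the Bayesian Information Criterion (BIC); the dissertation's Bayes-factor criterion refines it, but a BIC sketch shows the shape of the decision. The feature vectors below are synthetic stand-ins for acoustic features:

```python
# Delta-BIC sketch for speaker-change detection: positive values favour
# splitting the window into two full-covariance Gaussians (a boundary).
# Synthetic 2-D "features" stand in for real acoustic frames.
import numpy as np

def delta_bic(x, y, lam=1.0):
    """Delta-BIC for a hypothesised boundary between frame blocks x and y."""
    z = np.vstack([x, y])
    n, d = z.shape
    def term(m):  # len(m) * log|sample covariance|
        return len(m) * np.linalg.slogdet(np.cov(m, rowvar=False))[1]
    penalty = lam * 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return 0.5 * (term(z) - term(x) - term(y)) - penalty

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, (200, 2))      # frames from "speaker A"
b = rng.normal(4.0, 1.0, (200, 2))      # frames from "speaker B"
print(delta_bic(a, b) > 0)              # True: boundary detected
print(delta_bic(a[:100], a[100:]) > 0)  # False: same speaker, no boundary
```

A single fixed threshold on such a statistic is exactly the baseline the heuristic multi-pass rules and Bayes-factor criteria aim to improve upon when segments are short and covariance estimates are uncertain.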

Relevance: 10.00%

Abstract:

Queensland University of Technology (QUT) Library offers a range of resources and services to researchers as part of their research support portfolio. This poster will present key features of two of the data management services offered by research support staff at QUT Library. The first service is QUT Research Data Finder (RDF), a product of the Australian National Data Service (ANDS)-funded Metadata Stores project. RDF is a data registry (metadata repository) that aims to publicise datasets that are research outputs arising from completed QUT research projects. The second is a software and code registry, which is currently under development with the sole purpose of improving discovery of source code and software as QUT research outputs. RESEARCH DATA FINDER As an integrated metadata repository, Research Data Finder aligns with institutional sources of truth, such as QUT’s research administration system, ResearchMaster, as well as QUT’s Academic Profiles system, to provide high quality data descriptions that increase awareness of, and access to, shareable research data. The repository and its workflows are designed to foster better data management practices, enhance opportunities for collaboration and research, promote cross-disciplinary research and maximise the impact of existing research data sets. SOFTWARE AND CODE REGISTRY The QUT Library software and code registry project stems from concerns amongst researchers with regard to development activities, storage, accessibility, discoverability and impact, sharing, copyright and IP ownership of software and code. As a result, the Library is developing a registry for code and software research outputs, which will use the existing Research Data Finder architecture. The underpinning software for both registries is VIVO, open source software developed by Cornell University.
The registry will use the Research Data Finder service instance of VIVO and will include a searchable interface, links to code/software locations and metadata feeds to Research Data Australia. Key benefits of the project include: improving the discoverability and reuse of QUT researchers’ code and software within QUT and the wider research community; increasing the profile of QUT research outputs on a national level by providing a metadata feed to Research Data Australia; and improving the metrics for access and reuse of code and software in the repository.

Relevance: 10.00%

Abstract:

Tags, or personal metadata for annotating web resources, have been widely adopted in Web 2.0 sites. However, as tags are freely chosen by users, the vocabularies are diverse, ambiguous and sometimes only meaningful to individuals. Tag recommenders may assist users during the tagging process. Their objective is to suggest relevant tags as well as to help consolidate the vocabulary in the systems. In this paper we discuss our approach for providing personalized tag recommendation by making use of an existing domain ontology generated from folksonomy. Specifically, we evaluated the approach in a sparse-data situation. The evaluation shows that the proposed ontology-based method improved the accuracy of tag recommendation in this situation.
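The general shape of ontology-assisted tag recommendation can be sketched as follows: tags from a user's history are expanded through ontology relations, then the pooled candidates are ranked. The ontology, tag names and weights below are toy assumptions, not the paper's actual data or scoring:

```python
# Toy ontology-based tag recommender: expand the user's tags via relations
# in a (folksonomy-derived) ontology, then rank by weighted frequency.
# Ontology content and the 0.5 expansion weight are illustrative assumptions.
from collections import Counter

ontology = {
    "python": ["programming", "scripting"],
    "ml": ["machine-learning", "ai"],
}

def recommend(user_tags, ontology, k=3):
    """Rank direct tags above ontology-expanded ones via weighted counts."""
    scores = Counter()
    for tag in user_tags:
        scores[tag] += 1.0                  # direct evidence from the user
        for related in ontology.get(tag, []):
            scores[related] += 0.5          # weaker, ontology-expanded evidence
    return [t for t, _ in scores.most_common(k)]

print(recommend(["python", "python", "ml"], ontology))
```

Even when a user has tagged very little, the expansion step still yields ranked candidates, which is what makes this style of approach attractive in sparse settings.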

Relevance: 10.00%

Abstract:

Tag recommendation is a specific recommendation task: recommending metadata (tags) for a web resource (item) during the user annotation process. In this context, the sparsity problem refers to situations where tags need to be produced for items with few annotations or for users who tag few items. Most state-of-the-art approaches to tag recommendation are rarely evaluated, or perform poorly, in this situation. This paper presents a combined method for mitigating the sparsity problem in tag recommendation, mainly by expanding and ranking candidate tags based on similar items’ tags and an existing tag ontology. We evaluated the approach on two public social bookmarking datasets. The experimental results show better accuracy for recommendation in sparse situations than several state-of-the-art methods.
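The item-similarity half of such an expand-and-rank method can be sketched like this: for a sparsely annotated item, borrow tags from similar items (here via Jaccard similarity over existing tags) and rank the pooled candidates by similarity-weighted votes. The corpus, similarity measure and scoring are illustrative assumptions, not the paper's evaluation setup:

```python
# Toy expand-and-rank tag recommender for a sparsely annotated item:
# candidate tags are borrowed from similar items and ranked by weighted votes.
# Corpus content and the Jaccard weighting are illustrative assumptions.

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def expand_and_rank(item_tags, corpus, k=3):
    """corpus: {item_id: tag list}. Score candidates by similarity-weighted votes."""
    scores = {}
    for other_tags in corpus.values():
        w = jaccard(item_tags, other_tags)
        if w == 0:
            continue                      # skip entirely dissimilar items
        for t in other_tags:
            if t not in item_tags:        # only recommend tags the item lacks
                scores[t] = scores.get(t, 0.0) + w
    return sorted(scores, key=scores.get, reverse=True)[:k]

corpus = {
    "i1": ["web", "ajax", "javascript"],
    "i2": ["web", "css", "design"],
    "i3": ["cooking", "recipes"],
}
print(expand_and_rank(["web", "javascript"], corpus))
```

A full combined method would merge these similarity-weighted candidates with ontology-expanded ones before the final ranking, which is the step that mitigates sparsity on both the item and user sides.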