955 results for Reuse
Abstract:
Queensland University of Technology (QUT) is faced with a rapidly growing research agenda built upon a strategic research capacity-building program. This presentation will outline the results of a project that has recently investigated QUT's research support requirements and developed a model for the support of eResearch across the university. QUT's research-building strategy has produced growth at the faculty level and within its research institutes. This increased research activity is pushing the need for university-wide eResearch platforms capable of providing infrastructure and support in areas such as collaboration, data, networking, authentication and authorisation, workflows and the grid. One of the driving forces behind the investigation is the data-centric nature of modern research. It is now critical that researchers have access to supported infrastructure that allows the collection, analysis, aggregation and sharing of large data volumes for exploration and mining, in order to gain new insights and to generate new knowledge. However, recent surveys of current research data management practices by the Australian Partnership for Sustainable Repositories (APSR) and by QUT itself have revealed serious shortcomings in areas such as research data management, especially its long-term maintenance for reuse and as authoritative evidence of research findings. While these internal university pressures are building, external pressures are magnifying them. For example, recent compliance guidelines from bodies such as the ARC, NHMRC and Universities Australia indicate that institutions need to provide facilities for the safe and secure storage of research data, along with a surrounding set of policies on its retention, ownership and accessibility. The newly formed Australian National Data Service (ANDS) is developing strategies and guidelines for research data management, with research institutions as a central focus, responsible for managing and storing institutional data on platforms that can be federated nationally and internationally for wider use. For some time QUT has recognised the importance of eResearch and has been active in a number of related areas: ePrints to digitally publish research papers, grid computing portals and workflows, institution-wide provisioning and authentication systems, and legal protocols for copyright management. QUT also has two widely recognised centres focused on fundamental research into eResearch itself: the OAK LAW project (Open Access to Knowledge), which focuses on legal issues relating to eResearch, and the Microsoft QUT eResearch Centre, whose goal is to accelerate scientific research discovery through new smart software. In order to better harness all of these resources and improve research outcomes, the university recently established a project to investigate how it might better organise the support of eResearch. This presentation will outline the project outcomes, which include a flexible and sustainable eResearch support service model addressing short- and longer-term research needs, identification of the resources required to establish and sustain the service, and the development of research data management policies and implementation plans.
Abstract:
Cooperative collision warning systems for road vehicles, enabled by recent advances in positioning systems and wireless communication technologies, can potentially reduce traffic accidents significantly. To improve such systems, we propose a graph model to represent interactions between multiple road vehicles in a specific region and at a specific time. Given a list of vehicles in the vicinity, we can generate the interaction graph using several rules that consider vehicle properties such as position, speed and heading. Safety applications can use the model to improve emergency warning accuracy and optimize wireless channel usage. The model also allows us to develop congestion control strategies for an efficient multi-hop broadcast protocol.
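As a rough illustration of the kind of interaction graph described above, the following Python sketch links vehicles whose positions and headings satisfy simple proximity and convergence rules. The Vehicle fields, thresholds and linking rules are illustrative assumptions, not the rules defined in the paper.

```python
import math
from dataclasses import dataclass

@dataclass
class Vehicle:
    vid: str
    x: float        # position east (m)
    y: float        # position north (m)
    speed: float    # m/s
    heading: float  # radians, 0 = east

def interacts(a: Vehicle, b: Vehicle,
              max_range: float = 150.0,
              heading_tol: float = math.pi / 4) -> bool:
    """Toy rule: link vehicles within radio/safety range whose headings are
    roughly aligned or roughly opposing (same road, either direction)."""
    if math.hypot(a.x - b.x, a.y - b.y) > max_range:
        return False
    dh = abs((a.heading - b.heading + math.pi) % (2 * math.pi) - math.pi)
    return dh < heading_tol or dh > math.pi - heading_tol

def interaction_graph(vehicles):
    """Build an undirected interaction graph as an adjacency dictionary."""
    graph = {v.vid: set() for v in vehicles}
    for i, a in enumerate(vehicles):
        for b in vehicles[i + 1:]:
            if interacts(a, b):
                graph[a.vid].add(b.vid)
                graph[b.vid].add(a.vid)
    return graph

if __name__ == "__main__":
    fleet = [
        Vehicle("car1", 0, 0, 20, 0.0),
        Vehicle("car2", 80, 5, 22, 0.05),
        Vehicle("car3", 400, 0, 15, math.pi),
    ]
    print(interaction_graph(fleet))  # car1 and car2 interact; car3 is isolated
```

Safety applications could traverse such a graph to decide which neighbours genuinely need a warning relayed, rather than rebroadcasting to every vehicle in radio range.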
Abstract:
With the widespread application of electronic learning (e-Learning) technologies to education at all levels, an increasing number of online educational resources and messages are generated from the corresponding e-Learning environments. Nevertheless, it is quite difficult, if not totally impossible, for instructors to read through and analyze the online messages to predict the progress of their students on the fly. The main contribution of this paper is the illustration of a novel concept map generation mechanism which is underpinned by a fuzzy domain ontology extraction algorithm. The proposed mechanism can automatically construct concept maps based on the messages posted to online discussion forums. By browsing the concept maps, instructors can quickly identify the progress of their students and adjust the pedagogical sequence on the fly. Our initial experimental results reveal that the accuracy and the quality of the automatically generated concept maps are promising. Our research work opens the door to the development and application of intelligent software tools to enhance e-Learning.
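To make the idea of automatically building concept maps from forum posts concrete, here is a minimal Python sketch that links terms by how often they co-occur in the same message. This is a deliberately simplified stand-in for the fuzzy domain ontology extraction algorithm, which the abstract does not specify; the stopword list, threshold and example posts are assumptions.

```python
from collections import Counter
from itertools import combinations

STOPWORDS = {"the", "a", "an", "of", "to", "and", "is", "in", "for"}

def tokens(message):
    """Lowercase alphabetic tokens with a tiny stopword filter."""
    return [w for w in message.lower().split()
            if w.isalpha() and w not in STOPWORDS]

def concept_map(messages, min_cooccurrence=2):
    """Link pairs of terms that co-occur in at least `min_cooccurrence` messages."""
    pair_counts = Counter()
    for msg in messages:
        terms = set(tokens(msg))
        pair_counts.update(combinations(sorted(terms), 2))
    return {pair: n for pair, n in pair_counts.items() if n >= min_cooccurrence}

posts = [
    "recursion uses a base case and a recursive case",
    "my recursion has no base case so it loops forever",
    "arrays are indexed from zero",
]
print(concept_map(posts))
# edges: base-case, base-recursion, case-recursion
```

An instructor browsing even this crude map would see that the discussion is clustering around recursion and base cases, which is the kind of on-the-fly progress signal the paper targets.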
Abstract:
Context: The School of Information Technology at QUT has recently undertaken a major restructuring of its Bachelor of Information Technology (BIT) course. The aims of this restructuring include reducing first-year attrition and providing an attractive degree course that meets both student and industry expectations. Emphasis has been placed on the first semester in the context of retaining students, by introducing a set of four units that complement one another and provide introductory material on technology, programming and related skills, and generic skills that will aid the students throughout their undergraduate course and in their careers. This discussion relates to one of these four first-semester units, namely Building IT Systems. The aim of this unit is to create small Information Technology (IT) systems that use programming or scripting and databases, as either standalone applications or web applications. In the prior history of teaching introductory computer programming at QUT, programming has been taught as a stand-alone subject, and integration of computer applications with other systems such as databases and networks was not undertaken until students had been given a thorough grounding in those topics as well. Feedback has indicated that students do not believe that working with a database requires programming skills. In fact, the teaching of the building blocks of computer applications has been compartmentalized, with each topic taught in isolation from the others. The teaching of introductory computer programming has been an industry requirement of IT degree courses, as many jobs require at least some knowledge of the topic. Yet computer programming is not a skill that all students have equal capabilities of learning (Bruce et al., 2004), and this is clearly shown by the volume of publications dedicated to this topic in the literature over a broad period of time (Eckerdal & Berglund, 2005; Mayer, 1981; Winslow, 1996). The teaching of this introductory material has changed little over the past thirty years. During the period in which introductory computer programming courses have been taught at QUT, a number of different programming languages and programming paradigms have been used, and different approaches to teaching and learning have been attempted in an effort to find the golden thread that would allow students to learn this complex topic. Unfortunately, computer programming is not a skill that can be learnt in one semester; some basics can be learnt, but it can take many years to master (Norvig, 2001). Faculty data has typically shown a bimodal distribution of results for students undertaking introductory programming courses, with a high proportion of students receiving a high mark and a high proportion receiving a low or failing mark. This indicates that there are students who understand and excel with the introductory material, while another group struggles to understand the concepts and practices required to translate a specification or problem statement into a computer program that achieves what is being requested. The consequence of a large group of students failing the introductory programming course has been a high level of attrition amongst first-year students. This attrition level does not provide good continuity in student numbers in later years of the degree program, and the current approach is not seen as sustainable.
Abstract:
Modern Engineering Asset Management (EAM) requires the accurate assessment of current asset health and the prediction of future asset health condition. Appropriate mathematical models that are capable of estimating times to failure and the probability of future failures are essential in EAM. In most real-life situations, the lifetime of an engineering asset is influenced and/or indicated by different factors that are termed covariates. Hazard prediction with covariates is an elemental notion in reliability theory: it estimates the tendency of an engineering asset to fail instantaneously beyond the current time, given that it has already survived up to the current time. A number of statistical covariate-based hazard models have been developed; however, none of them explicitly incorporates both external and internal covariates into one model. This paper introduces a novel covariate-based hazard model to address this concern, named the Explicit Hazard Model (EHM). Both the semi-parametric and non-parametric forms of this model are presented in the paper. The major purpose of this paper is to illustrate the theoretical development of EHM. Due to page limitations, a case study with reliability field data is presented in the applications part of this study.
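The abstract does not give the functional form of EHM, so the sketch below only illustrates the general covariate-based hazard idea it builds on, using the familiar proportional-hazards form h(t|z) = h0(t)·exp(β·z); the baseline hazard, covariate values and weights are invented for illustration and are not the EHM itself.

```python
import numpy as np

def hazard(t, z, beta, h0=lambda t: 1e-5 * t):
    """Covariate-based hazard in the standard proportional-hazards form:
    baseline hazard h0(t) scaled by exp(beta . z). This is only the
    generic covariate-based notion that models like EHM extend."""
    return h0(t) * np.exp(np.dot(beta, z))

def reliability(t_end, z, beta, h0=lambda t: 1e-5 * t, steps=2000):
    """R(t) = exp(-cumulative hazard), integrated with the trapezoid rule."""
    t = np.linspace(0.0, t_end, steps)
    h = hazard(t, z, beta, h0)
    cum_hazard = np.sum((h[:-1] + h[1:]) * (t[1] - t[0]) / 2.0)
    return float(np.exp(-cum_hazard))

beta = np.array([0.8, 0.5])         # illustrative covariate weights
z = np.array([1.2, 0.3])            # e.g. standardised temperature, vibration
print(reliability(100.0, z, beta))  # survival probability at t = 100 (about 0.86)
```

Whatever its exact form, a model in this family lets an analyst ask "given this asset's measured covariates and its survival so far, how likely is it to fail in the next interval?", which is the quantity EAM decisions rest on.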
Abstract:
Hazard and reliability prediction of an engineering asset is one of the significant fields of research in Engineering Asset Health Management (EAHM). In real-life situations where an engineering asset operates under dynamic operational and environmental conditions, its lifetime can be influenced and/or indicated by different factors that are termed covariates. The Explicit Hazard Model (EHM), a covariate-based hazard model, is a new approach to hazard prediction which explicitly incorporates both internal and external covariates into one model. EHM is an appropriate model for the analysis of lifetime data in the presence of both internal and external covariates in the reliability field. This paper presents applications of the methodology introduced and illustrated in the theory part of this study. In this paper, the semi-parametric EHM is applied to a case study to predict the hazard and reliability of resistance elements on a Resistance Corrosion Sensor Board (RCSB).
Abstract:
The need for large-scale environmental monitoring to manage environmental change is well established. Ecologists have long used acoustics as a means of monitoring the environment in their field work, and so the value of an acoustic environmental observatory is evident. However, the volume of data generated by such an observatory would quickly overwhelm even the most fervent scientist using traditional methods. In this paper we present our steps towards realising a complete acoustic environmental observatory, i.e. the cohesive set of hardware sensors, management utilities, and analytical tools required for large-scale environmental monitoring. Concrete examples of these elements, which are in active use by ecological scientists, are also presented.
Abstract:
We propose an efficient and low-complexity scheme for estimating and compensating clipping noise in OFDMA systems. Conventional clipping noise estimation schemes, which need all demodulated data symbols, may become infeasible in OFDMA systems, where a specific user may only know his or her own modulation scheme. The proposed scheme first uses the equalized output to identify a limited number of candidate clips, and then exploits the information on known subcarriers to reconstruct the clipped signal. Simulation results show that the proposed scheme can significantly improve system performance.
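A minimal numpy sketch of the clipping problem follows: it generates one OFDM(A)-style symbol, clips its amplitude, and flags samples at the clipping level as candidate clip positions. For simplicity it thresholds the clipped time-domain signal directly rather than the equalized receiver output used by the proposed scheme, and the subcarrier count and clipping ratio are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256             # subcarriers in this toy symbol
clip_ratio = 1.2    # clipping level relative to RMS amplitude

# One OFDM symbol: random QPSK on all subcarriers, converted to time domain.
X = (rng.choice([-1, 1], N) + 1j * rng.choice([-1, 1], N)) / np.sqrt(2)
x = np.fft.ifft(X) * np.sqrt(N)            # unit-average-power time signal

# Amplitude clipping (models the transmitter's amplifier/DAC limit).
A = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
x_clipped = np.where(np.abs(x) > A, A * x / np.abs(x), x)
clip_noise = x_clipped - x                 # what the receiver must estimate

# Flag samples at (or very near) the clipping level as candidate clip
# positions -- the "limited number of candidate clips".
candidates = np.flatnonzero(np.abs(x_clipped) > 0.99 * A)
true_clips = np.flatnonzero(np.abs(x) > A)
print(len(true_clips), "true clips;", len(candidates), "candidates;",
      "clip-noise power:", float(np.mean(np.abs(clip_noise) ** 2)))
```

The point of restricting attention to a handful of candidate positions is that the clipping distortion can then be reconstructed from the few known (e.g. pilot) subcarriers without requiring every user's data symbols.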
Abstract:
It is a significant challenge to clearly identify the boundary between positive and negative streams. Several attempts have used negative feedback to address this challenge; however, two issues arise when using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern mining based approach to select some offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
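As a toy illustration of splitting extracted terms into positive specific, general and negative specific categories, the sketch below scores each term by the fraction of its training occurrences that come from positive documents. The thresholds and example documents are assumptions, and the paper's pattern-mining-based selection of offenders is not reproduced here.

```python
def categorize_terms(pos_docs, neg_docs, low=0.3, high=0.7):
    """Toy split of terms into positive-specific, general and negative-specific
    categories, based on how often each term appears in positive vs. negative
    training documents (documents are given as sets of terms)."""
    vocab = {t for d in pos_docs + neg_docs for t in d}
    categories = {"positive_specific": set(), "general": set(),
                  "negative_specific": set()}
    for term in vocab:
        p = sum(term in d for d in pos_docs) / max(len(pos_docs), 1)
        n = sum(term in d for d in neg_docs) / max(len(neg_docs), 1)
        ratio = p / (p + n) if p + n else 0.0
        if ratio >= high:
            categories["positive_specific"].add(term)
        elif ratio <= low:
            categories["negative_specific"].add(term)
        else:
            categories["general"].add(term)
    return categories

pos = [{"stock", "market", "earnings"}, {"market", "dividend"}]
neg = [{"market", "football"}, {"football", "league"}]
print(categorize_terms(pos, neg))
# market -> general; stock, earnings, dividend -> positive_specific;
# football, league -> negative_specific
```

Once terms are separated this way, different revision strategies can be applied per category, e.g. strengthening positive specific terms while discounting those that the selected negative samples show to be noisy.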
Abstract:
This article explores two matrix methods to induce the "shades of meaning" (SoM) of a word. A matrix representation of a word is computed from a corpus of traces based on the given word. Non-negative Matrix Factorisation (NMF) and Singular Value Decomposition (SVD) each compute a set of vectors, each vector corresponding to a potential shade of meaning. The two methods were evaluated based on loss of conditional entropy with respect to two sets of manually tagged data: one set reflects concepts generally appearing in text, and the second comprises words used in investigations of word sense disambiguation. Results show that NMF consistently outperforms SVD for inducing both SoM of general concepts and word senses. The problem of inducing the shades of meaning of a word is more subtle than that of word sense induction, and hence relevant to thematic analysis of opinion, where nuances of opinion can arise.
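A short sketch of the two factorisations on a toy word-by-context co-occurrence matrix, using scikit-learn's NMF and TruncatedSVD; the matrix here is random and only stands in for the trace-based matrix the article builds from a corpus, and the number of components is an arbitrary choice.

```python
import numpy as np
from sklearn.decomposition import NMF, TruncatedSVD

# Toy matrix for a target word: rows are occurrence traces, columns are
# context terms. Counts must be non-negative for NMF.
rng = np.random.default_rng(1)
traces = rng.poisson(1.0, size=(40, 12)).astype(float)

k = 3  # number of candidate shades of meaning

# NMF: non-negative factors, so each component can be read directly as a
# weighted bundle of context terms, i.e. a candidate shade of meaning.
nmf = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
W = nmf.fit_transform(traces)   # trace-to-shade weights
H = nmf.components_             # shade-to-context-term weights

# SVD: components may mix signs, which makes them harder to interpret
# as shades of meaning, though they capture variance optimally.
svd = TruncatedSVD(n_components=k, random_state=0)
U = svd.fit_transform(traces)
V = svd.components_

print("NMF shade loadings:", H.shape, "SVD shade loadings:", V.shape)
```

The non-negativity constraint is the usual intuition for why NMF components tend to be more interpretable than SVD components in this setting, which is consistent with the entropy-based comparison reported above.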
Abstract:
We argue that web service discovery technology should help the user navigate a complex problem space by providing suggestions for services which they may not be able to formulate themselves, because they lack the epistemic resources to do so. Free-text documents in service environments provide an untapped source of information for augmenting the epistemic state of the user and hence their ability to search effectively for services. A quantitative approach to semantic knowledge representation is adopted, in the form of semantic space models computed from these free-text documents. Knowledge of the user's agenda is promoted by associational inferences computed from the semantic space. The inferences are suggestive and aim to promote human abductive reasoning, guiding the user from fuzzy search goals towards a better understanding of the problem space surrounding the given agenda. Experimental results are discussed based on a complex and realistic planning activity.
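As a simplified stand-in for the semantic space approach, the sketch below ranks hypothetical service descriptions against a fuzzy user goal using TF-IDF vectors and cosine similarity; the service names, descriptions and query are invented, and the paper's actual semantic space model and abductive-inference mechanism are not reproduced.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical free-text descriptions of services in the environment.
services = {
    "RoutePlanner":   "plan multi stop delivery routes with traffic forecasts",
    "FleetTracker":   "track vehicle positions and delivery status in real time",
    "InvoiceService": "generate and email customer invoices",
}

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(services.values())

def suggest(query, top_n=2):
    """Rank services by similarity to a (possibly fuzzy) user goal,
    surfacing options the user might not have formulated directly."""
    q = vectorizer.transform([query])
    scores = cosine_similarity(q, doc_matrix).ravel()
    ranked = sorted(zip(services, scores), key=lambda pair: -pair[1])
    return ranked[:top_n]

print(suggest("where are my delivery trucks"))
```

The point of such associational suggestions is not to return an exact match but to surface nearby services that sharpen the user's understanding of what the problem space offers.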
Abstract:
While spoken term detection (STD) systems based on word indices provide good accuracy, there are several practical applications where it is infeasible or too costly to employ a large vocabulary continuous speech recognition (LVCSR) engine. An STD system is presented which is designed to incorporate a fast phonetic decoding front-end and to be robust to decoding errors whilst still allowing rapid search speeds. This goal is achieved through mono-phone open-loop decoding coupled with fast hierarchical phone lattice search. Results demonstrate that an STD system designed with the constraint of a fast and simple phonetic decoding front-end requires a compromise between search speed and search accuracy.
Abstract:
The use of the PC and Internet for placing telephone calls will present new opportunities to capture vast amounts of untranscribed speech for a particular speaker. This paper investigates how best to exploit this data for speaker-dependent speech recognition. Supervised and unsupervised experiments in acoustic model and language model adaptation are presented. Using one hour of automatically transcribed speech per speaker, with a word error rate of 36.0%, unsupervised adaptation resulted in an absolute gain of 6.3%, equivalent to 70% of the gain from the supervised case, with additional adaptation data likely to yield further improvements. LM adaptation experiments suggested that although there seems to be a small degree of speaker idiolect, adaptation to the speaker alone, without considering the topic of the conversation, is in itself unlikely to improve transcription accuracy.
Abstract:
Public transportation is an environment with great potential for applying location-based services through mobile devices. The BusTracker study is looking at how real-time passenger information systems can provide a core platform to improve commuters' experiences. These systems rely on mobile computing and GPS technology to provide accurate information on transport vehicle locations. BusTracker builds on this mobile computing platform and geospatial information. The pilot study is running on the open-source BugLabs computing platform, using a GPS module for accurate location information.
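As a small, hypothetical illustration of the kind of computation a real-time passenger information service performs, the sketch below estimates a straight-line arrival time from a bus's latest GPS fix using the haversine distance; the coordinates and average speed are invented, and BusTracker's actual arrival-prediction logic is not described in the abstract.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def eta_minutes(bus_fix, stop, avg_speed_kmh=25.0):
    """Naive straight-line ETA from the bus's latest GPS fix to a stop."""
    dist = haversine_m(*bus_fix, *stop)
    return dist / (avg_speed_kmh * 1000 / 60)

# Hypothetical fixes near a city centre.
print(round(eta_minutes((-27.4700, 153.0230), (-27.4775, 153.0280)), 1), "min")
```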
Abstract:
This position paper examines the development of a dedicated service aggregator role in business networks. We predict that these intermediaries will soon emerge in service ecosystems and add value through the application of dedicated domain knowledge in the process of creating new, innovative services or service bundles, based on the aggregation, composition, integration or orchestration of existing services procured from different service providers in the service ecosystem. We discuss the general foundations of service aggregators and present Fourth-Party Logistics Providers as a real-world example of emerging business service aggregators. We also point out the need for future research, e.g. into governance models, risk management tools, service portfolio management approaches and service bundling techniques, in order to better understand the core determinants of competitiveness and success of service aggregators.