728 results for web-based self-service systems
Abstract:
The dynamic interplay between the elements of existing learning frameworks (people, pedagogy, learning spaces and technology) is challenging the traditional lecture. A paradigm is emerging from the correlated change amongst these elements, offering new possibilities for improving the quality of the learning experience. For many universities, the design of physical learning spaces has been the focal point for blending technology and flexible learning spaces to promote learning and teaching. As the pace of technological change intensifies, affording new opportunities for engaging learners, pedagogical practice in higher education is not evolving at a comparable rate. The resulting disparity presents an opportunity to reconsider pedagogical practice so that physical learning spaces become sites of active learning and increased student engagement. This interplay between students, staff and technology is challenging the value for students of attending physical learning spaces such as the traditional lecture. Why should students attend classes devoted to content delivery when streaming and web technologies afford more flexible learning opportunities? Should we still lecture? Reconsideration of pedagogy is driving learning design at Queensland University of Technology, which is seeking new approaches that afford increased student engagement via active learning experiences within large lectures. This paper provides an overview and evaluation of one of these initiatives, Open Web Lecture (OWL), an experimental web-based student response application developed by Queensland University of Technology. OWL seamlessly integrates a virtual learning environment within physical learning spaces, fostering active learning opportunities. The paper evaluates the pilot of this initiative, considering its effectiveness in increasing student engagement through web-enabled active learning opportunities in physical learning spaces.
Abstract:
This paper investigates the use of the dimensionality-reduction techniques weighted linear discriminant analysis (WLDA) and weighted median Fisher discriminant analysis (WMFD) before probabilistic linear discriminant analysis (PLDA) modeling, for the purpose of improving speaker verification performance in the presence of high inter-session variability. Recently it was shown that WLDA techniques can provide improvement over traditional linear discriminant analysis (LDA) for channel compensation in i-vector based speaker verification systems. We show in this paper that the speaker-discriminative information available in the distances between pairs of speakers clustered in the development i-vector space can also be exploited in heavy-tailed PLDA modeling by applying the weighted discriminant approaches prior to PLDA modeling. Based upon the results presented in this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that WLDA and WMFD projections before PLDA modeling provide an improved approach compared to uncompensated PLDA modeling for i-vector based speaker verification systems.
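To make the weighting idea concrete, here is a minimal sketch of a WLDA-style projection estimated over development i-vectors, assuming a simple inverse-distance weighting of speaker pairs; the exact weighting schemes and the heavy-tailed PLDA back-end used in the paper are not reproduced, and all function and parameter names are illustrative.

```python
# Minimal WLDA-style projection sketch (illustrative, not the paper's code).
import numpy as np
from scipy.linalg import eigh

def wlda_projection(ivectors, labels, out_dim, alpha=1.0):
    """Estimate a weighted-LDA projection from development i-vectors.

    ivectors: (N, D) array of i-vectors; labels: (N,) speaker labels;
    out_dim:  target dimensionality; alpha: weight exponent, w = d2**(-alpha).
    """
    classes = np.unique(labels)
    means = np.array([ivectors[labels == c].mean(axis=0) for c in classes])
    dim = ivectors.shape[1]

    # Within-class scatter, pooled over all development speakers.
    Sw = np.zeros((dim, dim))
    for c, m in zip(classes, means):
        X = ivectors[labels == c] - m
        Sw += X.T @ X

    # Weighted between-class scatter: nearby speaker pairs (the hard ones
    # to separate) receive larger weights than distant pairs.
    Sb = np.zeros((dim, dim))
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            diff = means[i] - means[j]
            d2 = float(diff @ diff) + 1e-12      # squared pair distance
            Sb += (d2 ** -alpha) * np.outer(diff, diff)

    # Generalised eigenproblem Sb v = lambda Sw v; keep the leading
    # out_dim directions (eigh returns eigenvalues in ascending order).
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(dim))
    return vecs[:, np.argsort(vals)[::-1][:out_dim]]

# Usage: project i-vectors before PLDA training and scoring, e.g.
# W = wlda_projection(dev_ivectors, dev_labels, out_dim=200)
# projected = dev_ivectors @ W
```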
Abstract:
Clearly identifying the boundary between positive and negative document streams is a significant challenge. Several attempts have used negative feedback to address this challenge; however, two issues arise when using negative relevance feedback to improve the effectiveness of information filtering. The first is how to select constructive negative samples in order to reduce the space of negative documents. The second is how to decide which noisy extracted features should be updated based on the selected negative samples. This paper proposes a pattern-mining-based approach to select offenders from the negative documents, where an offender can be used to reduce the side effects of noisy features. It also classifies the extracted features (i.e., terms) into three categories: positive specific terms, general terms, and negative specific terms. In this way, multiple revising strategies can be used to update the extracted features. An iterative learning algorithm is also proposed to implement this approach on RCV1, and substantial experiments show that the proposed approach achieves encouraging performance.
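As an illustration of the revising strategies described above, the following sketch classifies extracted terms by their presence in positive documents versus selected offenders, and applies a different weight update to each category; the update rules and penalty values are assumptions for illustration, not the paper's pattern-mining formulation.

```python
# Illustrative term-revision step using negative feedback (assumed rules).
from collections import Counter

def revise_features(term_weights, positive_docs, offender_docs, penalty=0.5):
    """Revise term weights using selected negative documents (offenders).

    term_weights:  dict mapping term -> weight learned from positive feedback
    positive_docs: list of token lists (relevant training documents)
    offender_docs: list of token lists (negative documents chosen as offenders)
    """
    pos_df = Counter(t for doc in positive_docs for t in set(doc))
    neg_df = Counter(t for doc in offender_docs for t in set(doc))

    revised = {}
    for term, w in term_weights.items():
        in_pos, in_neg = pos_df[term] > 0, neg_df[term] > 0
        if in_pos and not in_neg:        # positive specific term: keep
            revised[term] = w
        elif in_pos and in_neg:          # general term: mild penalty
            revised[term] = w * (1 - 0.5 * penalty)
        else:                            # negative specific term: strong penalty
            revised[term] = w * (1 - penalty)
    return revised

# Toy usage: 'storm' appears only in the offender, so its weight is halved.
weights = {"weather": 1.2, "forecast": 0.8, "storm": 0.6}
pos = [["weather", "forecast"], ["weather"]]
neg = [["storm", "weather"]]
print(revise_features(weights, pos, neg))
```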
Abstract:
As one of the first institutional repositories in Australia, and the first in the world to have an institution-wide deposit mandate, QUT ePrints has great ‘brand recognition’ within the University (Queensland University of Technology) and beyond. The repository is managed by the Library but, over the years, the Library’s repository team has worked closely with other departments (especially the Office of Research and IT Services) to ensure that QUT ePrints was embedded into the business processes and systems our academics use regularly. For example, the repository is the source of the publication information displayed on each academic’s Staff Profile page. The repository pulls in citation data from Scopus and Web of Science and displays the data in the publication records. Researchers can monitor their citations at a glance via the repository ‘View’, which displays all their publications. A trend in recent years has been to populate institutional repositories with publication details imported from the university’s research information system (RIS). The main advantage of the RIS-to-repository workflow is that it requires little input from the academics, as the publication details are often imported into the RIS from publisher databases. Sadly, this is also its main disadvantage: generally, only the metadata is imported from the RIS, and the lack of engagement by the academics results in very low proportions of records with open access full-texts. Consequently, while we could see the value of integrating the two systems, we were determined to make the repository the entry point for publication data. In 2011, the University funded a project to convert a number of paper-based processes into web-based workflows. This included a workflow to replace the paper forms academics previously completed to report new publications (forms which were later used by data entry staff to input the details into the RIS). Publication details and full-text files are now uploaded to the repository (by the academics or their nominees). Each night, the repository (QUT ePrints) pushes the metadata for new publications into a holding table. The data is checked by Office of Research staff the next day and then ‘imported’ into the RIS. Publication details (including the repository URLs) are pushed from the RIS to the Staff Profiles system. Previously, academics were required to supply the Office of Research with photocopies of their publications (for verification/auditing purposes); the repository is now the source of verification information. Library staff verify the accuracy of the publication details and, where applicable, the peer review status of the work. The verification metadata is included in the information passed to the Office of Research. The RIS at QUT comprises two separate systems built on an Oracle database: a proprietary product (ResearchMaster) and a locally produced system known as RAD (Research Activity Database). The repository platform is EPrints, which is built on a MySQL database; this partly explains why the data is passed from one system to the other via a holding table. The new workflow went live in early April 2012. Tests of the technical integration have all been successful. At the end of the first 12 months, the impact of the new workflow on the proportion of full-texts deposited will be evaluated.
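The nightly handoff described above can be pictured with a short sketch. Here sqlite3 stands in for the real MySQL (EPrints) and Oracle (RIS) databases, and all table and column names are hypothetical; only the shape of the workflow (select new deposits, stage them in a holding table for checking) follows the description.

```python
# Hypothetical sketch of the nightly repository-to-RIS staging step.
import sqlite3
from datetime import datetime, timedelta

def push_new_deposits(eprints_db, holding_db, since_hours=24):
    """Copy metadata for recent deposits into a holding table for checking."""
    cutoff = (datetime.now() - timedelta(hours=since_hours)).isoformat()
    src = sqlite3.connect(eprints_db)
    dst = sqlite3.connect(holding_db)
    dst.execute("""CREATE TABLE IF NOT EXISTS ris_holding (
                     eprint_id INTEGER, title TEXT, authors TEXT,
                     pub_type TEXT, verified INTEGER DEFAULT 0,
                     repository_url TEXT, pushed_at TEXT)""")
    rows = src.execute(
        """SELECT eprintid, title, creators, type
           FROM eprint WHERE datestamp >= ?""", (cutoff,)).fetchall()
    for eprintid, title, creators, pub_type in rows:
        url = f"https://eprints.qut.edu.au/{eprintid}/"
        # verified stays 0 until Office of Research staff check the record.
        dst.execute("INSERT INTO ris_holding VALUES (?,?,?,?,0,?,?)",
                    (eprintid, title, creators, pub_type, url,
                     datetime.now().isoformat()))
    dst.commit()
    return len(rows)
```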
Abstract:
Phenomenography is a qualitative research approach that seeks to explore variation in how people experience various aspects of their world. Phenomenography has been used in numerous information research studies that have explored various phenomena of interest in the library and information sphere. This paper provides an overview of the phenomenographic method and discusses key assumptions that underlie this approach to research. Aspects including data collection, data analysis and the outcomes of phenomenographic research are also detailed. The paper concludes with an illustration of how phenomenography was used in research to investigate students’ experiences of web-based information searching. The results of this research demonstrate how phenomenography can reveal variation, making it possible to develop greater understanding of the phenomenon as it was experienced, and to draw upon these experiences to improve and enhance current practice.
Abstract:
Young drivers are overrepresented in motor vehicle crash rates, and their risk increases when carrying similar-aged passengers. Graduated Driver Licensing strategies have demonstrated effectiveness in reducing fatalities among young drivers; however, complementary approaches may further reduce crash rates. Previous studies conducted by the researchers have shown that there is considerable potential for a passenger focus in youth road safety interventions, particularly involving the encouragement of young passengers to intervene in their peers’ risky driving (Buckley, Chapman, Sheehan & Davidson, 2012). Additionally, this research has shown that technology-based applications may be a promising means of delivering passenger safety messages, particularly as young people are increasingly accessing web-based and mobile technologies. This research describes the participatory design process undertaken to develop a web-based road safety program, including feasibility testing of storyboards for a youth passenger safety application. Storyboards and framework web-based materials were initially developed for a passenger safety program, using the results of previous studies involving online and school-based surveys with young people. Focus groups were then conducted with 8 school staff and 30 senior school students at one public high school in the Australian Capital Territory. Young people were asked about the situations in which passengers may feel unsafe and potential strategies for intervening in their peers’ risky driving. Students were also shown the storyboards and framework web-based material and asked to comment on design and content issues. Teachers were shown the same material and asked about their perceptions of the program’s design and feasibility. The focus group data will be used as part of the participatory design process in further developing the passenger safety program. This research describes an evidence-based approach to the development of a web-based application for youth passenger safety. The findings of this research, and the resulting technology, will have important implications for the road safety education of senior high school students.
Abstract:
Introduction: Delirium is a serious issue associated with high morbidity and mortality in older hospitalised people. Early recognition enables diagnosis and treatment of underlying cause/s, which can lead to improved patient outcomes. However, research shows that nurses’ knowledge and accurate recognition of delirium are poor, and a lack of education appears to be a key issue underlying this problem. Thus, the purpose of this randomised controlled trial (RCT) was to evaluate, in a sample of registered nurses, the usability and effectiveness of a web-based learning site, designed using constructivist learning principles, to improve acute care nurses’ knowledge and recognition of delirium. Prior to undertaking the RCT, preliminary phases were completed involving validation of vignettes, video-taping of five of the validated vignettes, website development, and pilot testing. Methods: The cluster RCT involved consenting registered nurse participants (N = 175) from twelve clinical areas within three acute health care facilities in Queensland, Australia. Data were collected through a variety of measures and instruments. The primary outcomes were improved ability of nurses to recognise delirium, assessed using written validated vignettes, and improved knowledge of delirium, assessed using a delirium knowledge questionnaire. The secondary outcomes concerned nurse satisfaction and the usability of the website. Primary outcome measures were taken at baseline (T1), directly after the intervention (T2) and two months later (T3). The secondary outcomes were measured at T2 by participants in the intervention group. Following baseline data collection, the remaining participants were assigned to either the intervention (n = 75) or control (n = 72) group. Participants in the intervention group were given access to the learning intervention, while the control group continued to work in their clinical areas and, at that time, did not receive access to the learning intervention. Data from the primary outcome measures were examined in mixed model analyses. Results: Overall, the effects of the online learning intervention over time, comparing the intervention and control groups, were positive. The intervention group’s scores were higher, and the changes over time were statistically significant [T3 vs. T1 (t = 3.78, p < 0.001) and T2 vs. T1 (t = 5.83, p < 0.001)]. Statistically significant improvements were also seen for delirium recognition between the control and intervention groups when comparing T2 and T1 results (t = 2.58, p = 0.012), but not for changes in delirium recognition scores between the two groups from T1 to T3 (t = 1.80, p = 0.074). The majority of participants rated the website highly on its visual, functional and content elements. Additionally, nearly 80% of participants liked the overall website features, and registered nurses in the intervention group self-reported improvements in delirium knowledge and recognition. Discussion: Findings from this study support the view that online learning is an effective and satisfying method of information delivery. Embedded within a constructivist learning environment, the site produced a high level of satisfaction and usability for the registered nurse end-users. Additionally, the results showed that the website significantly improved delirium knowledge and recognition scores, and the improvement in delirium knowledge was retained at a two-month follow-up.
Given the strong effect of the intervention, the online delirium intervention should be utilised as a way of providing information to registered nurses. It is envisaged that this knowledge would lead to improved recognition of delirium as well as improvement in patient outcomes; however, translation of this knowledge attainment into clinical practice was outside the scope of this study. A critical next step is demonstrating the effect of the intervention in changing clinical behaviour and improving patient health outcomes.
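For readers unfamiliar with the analysis, the following is a minimal sketch of the kind of group-by-time mixed-model comparison reported above, using statsmodels on toy data; the dataframe layout, variable names and model specification are assumptions rather than the study's actual analysis code.

```python
# Sketch of a group x time mixed-model analysis on toy data (assumed layout).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Long format: one row per nurse per time point (T1, T2, T3).
rng = np.random.default_rng(0)
rows = []
for group, effect in [("control", 0.0), ("intervention", 4.0)]:
    for i in range(40):                        # 40 toy nurses per group
        base = rng.normal(18, 2)               # baseline knowledge score
        for time, gain in [("T1", 0.0), ("T2", effect), ("T3", 0.8 * effect)]:
            rows.append((f"{group}_{i}", group, time,
                         base + gain + rng.normal(0, 1)))
df = pd.DataFrame(rows, columns=["nurse_id", "group", "time", "knowledge"])

# Random intercept per nurse; the group:time interaction terms test whether
# the intervention group improved more than the control group over time.
model = smf.mixedlm("knowledge ~ group * time", df, groups=df["nurse_id"])
print(model.fit().summary())
```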
Abstract:
Given that culture and language are interrelated in second language learning (SLL), SLL websites should be designed to integrate cultural aspects. Yet many SLL websites fail to integrate cultural aspects and/or focus on language acquisition only. This study identified three issues: (1) the anthropologists’ cultural models most often adopted in cross-cultural web user interface design have been used only superficially; (2) web designers treat culture as something fixed that needs to be modeled into interface design elements; so (3) there is a need for a communication framework between educators and design practitioners that can be utilized in web design processes. This paper discusses what anthropology can contribute to language learning, mediated through web design processes, and suggests a cultural user experience framework for web-based SLL, presenting an exemplary matrix. To evaluate the effectiveness of the framework, the key stakeholders (learners, teachers, and designers) participated in a case scenario-based evaluation. The results suggest that the framework can enhance effective communication and collaboration for cultural integration.
Abstract:
Our contemporary public sphere has seen the 'emergence of new political rituals, which are concerned with the stains of the past, with self disclosure, and with ways of remembering once taboo and traumatic events' (Misztal, 2005). A recent case of this phenomenon occurred in Australia in 2009 with the apology to the 'Forgotten Australians': a group who suffered abuse and neglect after being removed from their parents (either in Australia or in the UK) and placed in Church- and State-run institutions in Australia between 1930 and 1970. This campaign for recognition by a profoundly marginalized group coincided with the decade in which the opportunities of Web 2.0 were seen to be diffusing throughout different social groups and were considered a tool for social inclusion. This paper examines the case of the Forgotten Australians as an opportunity to investigate the role of the internet in cultural trauma and public apology. As such, it adds to recent scholarship on the role of digital web-based technologies in commemoration and memorials (Arthur, 2009; Haskins, 2007; Cohen and Willis, 2004), and on digital storytelling in the context of trauma (Klaebe, 2011), by locating their role in a broader and emerging domain of social responsibility and political action (Alexander, 2004).
Abstract:
Flexible information exchange is critical to successful design-analysis integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this article we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. We then discuss how a shared mapping process that is flexible and user-friendly supports non-programmers in creating these custom connections. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We then discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
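A toy sketch may help picture the proposed process networks: each tool is wrapped as a node, and user-defined mappings translate one tool's output schema into the next tool's input schema, so no master data model is imposed. All class, node and field names below are illustrative assumptions, not the platform's actual API.

```python
# Illustrative process-network sketch: tools as nodes, mappings as edges.
from typing import Callable, List

class Node:
    """Wraps one discrete tool (or web service) as a step in the network."""
    def __init__(self, name: str, run: Callable[[dict], dict]):
        self.name, self.run = name, run
        self.downstream: List["Edge"] = []

class Edge:
    """A user-defined mapping between two tools' data schemas."""
    def __init__(self, target: "Node", mapping: Callable[[dict], dict]):
        self.target, self.mapping = target, mapping

def connect(src: Node, dst: Node, mapping: Callable[[dict], dict]) -> None:
    src.downstream.append(Edge(dst, mapping))

def execute(node: Node, data: dict) -> None:
    out = node.run(data)
    print(f"{node.name} -> {out}")
    for edge in node.downstream:          # push results through each mapping
        execute(edge.target, edge.mapping(out))

# Toy network: a parametric geometry tool feeding an energy-analysis tool.
geometry = Node("geometry", lambda d: {"floor_area": d["w"] * d["l"]})
energy = Node("energy", lambda d: {"load_kwh": 120.0 * d["area_m2"]})

# The user-controlled mapping renames fields between the two schemas,
# instead of forcing both tools onto a shared master model.
connect(geometry, energy, lambda out: {"area_m2": out["floor_area"]})
execute(geometry, {"w": 10.0, "l": 8.0})
```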
Abstract:
Flexible information exchange is critical to successful design integration, but current top-down, standards-based and model-oriented strategies impose restrictions that contradict this flexibility. In this paper we present a bottom-up, user-controlled and process-oriented approach to linking design and analysis applications that is more responsive to the varied needs of designers and design teams. Drawing on research into scientific workflows, we present a framework for integration that capitalises on advances in cloud computing to connect discrete tools via flexible and distributed process networks. Adopting a services-oriented system architecture, we propose a web-based platform that enables data, semantics and models to be shared on the fly. We discuss potential challenges and opportunities for its development as a flexible, visual, collaborative, scalable and open system.
Abstract:
Educators are faced with many challenging questions in designing an effective curriculum. What prerequisite knowledge do students have before commencing a new subject? At what level of mastery? What is the spread of capabilities between bare-passing students and the top-performing group? How does the intended learning specification compare to student performance at the end of a subject? In this paper we present a conceptual model that helps in answering some of these questions. It has the following main capabilities: capturing the learning specification in terms of syllabus topics and outcomes; capturing mastery levels to model progression; capturing the minimal vs. aspirational learning design; capturing confidence and reliability metrics for each of these mappings; and finally, comparing and reflecting on the learning specification against actual student performance. We present a web-based implementation of the model, and validate it by mapping the final exams from four programming subjects against the ACM/IEEE CS2013 topics and outcomes, using Bloom's Taxonomy as the mastery scale. We then import the itemised exam grades from 632 students across the four subjects and compare the demonstrated student performance against the expected learning for each. The key contributions of this work are the validated conceptual model for capturing and comparing expected learning vs. demonstrated performance, and a web-based implementation of this model, which is made freely available online as a community resource.
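The core of such a model can be sketched briefly: exam items are mapped to syllabus topics at a Bloom's-Taxonomy mastery level, with a confidence value on each mapping, so demonstrated performance can be compared against minimal and aspirational expectations. The class and field names below are illustrative assumptions, not the tool's actual schema.

```python
# Sketch of the topic/mastery mapping at the heart of the conceptual model.
from dataclasses import dataclass, field
from enum import IntEnum
from statistics import mean

class Bloom(IntEnum):
    REMEMBER = 1; UNDERSTAND = 2; APPLY = 3
    ANALYSE = 4; EVALUATE = 5; CREATE = 6

@dataclass
class TopicExpectation:
    topic: str                # e.g. an ACM/IEEE CS2013 topic
    minimal: Bloom            # bare-pass expectation
    aspirational: Bloom       # top-performer expectation
    confidence: float = 1.0   # how reliable this mapping is judged to be

@dataclass
class ExamItem:
    topic: str
    level: Bloom                                 # mastery level assessed
    scores: list = field(default_factory=list)   # per-student marks, 0..1

def compare(spec: dict, items: list) -> None:
    """Compare demonstrated performance with expected learning per topic."""
    for item in items:
        exp = spec[item.topic]
        flag = "ok" if item.level >= exp.minimal else "below minimal spec"
        print(f"{item.topic}: assessed at {item.level.name}, "
              f"mean score {mean(item.scores):.2f} ({flag})")

spec = {"SDF/Algorithms": TopicExpectation("SDF/Algorithms",
                                           Bloom.APPLY, Bloom.ANALYSE)}
items = [ExamItem("SDF/Algorithms", Bloom.APPLY, [0.7, 0.4, 0.9])]
compare(spec, items)
```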
Abstract:
Social tags in Web 2.0 are becoming another important information source for profiling users' interests and preferences in order to make personalized recommendations. To solve the problem of low information sharing caused by the free-style vocabulary of tags and the long-tailed distributions of tags and items, this paper proposes an approach that integrates the social tags given by users with the item taxonomy, which is provided by experts and has a standard vocabulary and hierarchical structure, to make personalized recommendations. The experimental results show that the proposed approach can effectively improve information sharing and recommendation accuracy.
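The integration idea can be illustrated with a small sketch: free-form tags are mapped onto expert taxonomy categories, and user profiles are built over the taxonomy with damped weight propagated to ancestor categories, so users with disjoint tag vocabularies can still be compared. The mapping table, damping factor and similarity measure below are assumptions for illustration.

```python
# Illustrative tag-to-taxonomy profiling sketch (assumed mapping and weights).
from collections import Counter
from math import sqrt

# Expert taxonomy: category -> parent (a small illustrative hierarchy).
parent = {"jazz": "music", "rock": "music", "music": None}

# Mapping from noisy free-form tags to taxonomy categories.
tag_to_category = {"bebop": "jazz", "jazzy": "jazz", "guitar!!": "rock"}

def profile(user_tags):
    """Build a taxonomy-level profile, propagating weight to ancestors."""
    prof = Counter()
    for tag in user_tags:
        cat = tag_to_category.get(tag)
        weight = 1.0
        while cat is not None:        # share information up the hierarchy
            prof[cat] += weight
            weight *= 0.5             # damped contribution to ancestors
            cat = parent[cat]
    return prof

def cosine(p, q):
    dot = sum(p[k] * q[k] for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * \
           sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

alice = profile(["bebop", "jazzy"])
bob = profile(["guitar!!"])
print(cosine(alice, bob))  # non-zero: both share the 'music' ancestor
```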
Abstract:
Video presented as part of the BPM2011 demonstration track (France). In this video we show a prototype BPMN process modelling tool that uses Augmented Reality techniques to increase the sense of immersion when editing a process model. The avatar represents a remotely logged-in user and facilitates greater insight into the editing actions of a collaborator than current 2D web-based approaches to collaborative process modelling. We modified the Second Life client to integrate the ARToolkit in order to support pattern-based AR.
Abstract:
Video presented as part of the Smart Services CRC Participants conference. This video shows an example of the latest version of our middleware linking the YAWL workflow engine to Open Simulator. We have created a simple example of an accident victim being brought into a hospital to be processed. The preliminary interface to the YAWL accident treatment workflow is shown as a worklist on the left of the image. Tasks are presented to the avatar via this interface, in a similar manner to web-based workflow systems. Objects in the simulator are instrumented with a knowledge base that enables the validation of actions within the world, ensuring that tasks are carried out correctly.