119 results for Web-Centric Expert System


Relevance: 30.00%

Abstract:

Collecting regular personal reflections from first-year teachers in rural and remote schools is challenging, as they are busily absorbed in their practice and separated from each other and the researchers by thousands of kilometres. In response, an innovative web-based solution was designed both to collect data and to be a responsive support system for early career teachers as they came to terms with their new professional identities within rural and remote school settings. Using an emailed link to a web-based application named goingok.com, the participants are charting their first-year plotlines on a sliding scale from ‘distressed’ through ‘ok’ to ‘soaring’ and describing their self-assessment in short descriptive posts. These reflections are visible to the participants as a developing online journal, while the collections of de-identified developing plotlines are visible to the research team, alongside numerical data. This paper explores important aspects of the design process, together with the challenges and opportunities encountered in its implementation. A number of the key considerations for choosing to develop a web application for data collection are initially identified, and the resultant application features and scope are then examined. Examples are then provided of how a responsive software development approach can form part of a supportive feedback loop for participants while also being an effective data collection process. Opportunities for further development are also suggested, with projected implications for future research.
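
A minimal sketch of the kind of reflection record the abstract describes (a slider position plus a short post, with de-identified plotlines for the researchers). The field names and the 0.0–1.0 score mapping are assumptions for illustration; the actual goingok.com schema is not given in the abstract.

```python
# Sketch of a single reflection and the numeric plotline derived from it.
# Field names and the score scale are hypothetical, not goingok.com's schema.
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class ReflectionPoint:
    author_pseudonym: str   # de-identified participant key
    recorded_at: datetime
    score: float            # slider position: 0.0 'distressed', 0.5 'ok', 1.0 'soaring'
    text: str               # short descriptive post

def plotline(points: List[ReflectionPoint]) -> List[Tuple[datetime, float]]:
    """Return the numeric plotline (time, score) visible to the research team."""
    return [(p.recorded_at, p.score) for p in sorted(points, key=lambda p: p.recorded_at)]
```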

Relevance: 30.00%

Abstract:

1. Expert knowledge continues to gain recognition as a valuable source of information in a wide range of research applications. Despite recent advances in defining expert knowledge, comparatively little attention has been given to how to view expertise as a system of interacting contributory factors and, thereby, to quantify an individual’s expertise. 2. We present a systems approach to describing expertise that accounts for many contributing factors and their interrelationships, and allows quantification of an individual’s expertise. A Bayesian network (BN) was chosen for this purpose. For the purpose of illustration, we focused on taxonomic expertise. The model structure was developed in consultation with professional taxonomists. The relative importance of the factors within the network was determined by a second set of senior taxonomists. This second set of experts (i.e. supra-experts) also provided validation of the model structure. Model performance was then assessed by applying the model to hypothetical career states in the discipline of taxonomy. Hypothetical career states were used to incorporate the greatest possible differences in career states and provide an opportunity to test the model against known inputs. 3. The resulting BN model consisted of 18 primary nodes feeding through one to three higher-order nodes before converging on the target node (Taxonomic Expert). There was strong consistency among node weights provided by the supra-experts for some nodes, but not others. The higher-order nodes, “Quality of work” and “Total productivity”, had the greatest weights. Sensitivity analysis indicated that although some factors had stronger influence in the outer nodes of the network, there was relatively equal influence of the factors leading directly into the target node. Despite differences in the node weights provided by our supra-experts, there was remarkably good agreement among assessments of our hypothetical experts, which accurately reflected the differences we had built into them. 4. This systems approach provides a novel way of assessing the overall level of expertise of individuals, accounting for multiple contributory factors and their interactions. Our approach is adaptable to other situations where it is desirable to understand the components of expertise.
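
The structural idea (primary factors feeding higher-order nodes that converge on a target node) can be illustrated with a simple deterministic stand-in. The node names and weights below are hypothetical, and this weighted-average sketch is not the authors’ elicited Bayesian network; it only shows the shape of the aggregation.

```python
# Illustrative stand-in for the aggregation structure described in the abstract.
# Node names and weights are hypothetical; the paper's model is a Bayesian
# network elicited from taxonomists, not this deterministic average.

def weighted(scores: dict, weights: dict) -> float:
    """Weighted average of child-node scores (all scores on a 0-1 scale)."""
    total = sum(weights.values())
    return sum(scores[k] * w for k, w in weights.items()) / total

def taxonomic_expertise(primary: dict) -> float:
    quality = weighted(primary, {"peer_reviewed_papers": 0.4,
                                 "revisionary_monographs": 0.6})
    productivity = weighted(primary, {"species_described": 0.5,
                                      "identification_requests": 0.5})
    experience = weighted(primary, {"years_in_discipline": 0.3,
                                    "taxa_breadth": 0.7})
    # Higher-order nodes converge on the target node "Taxonomic Expert";
    # quality and productivity are given the larger weights here.
    return weighted({"quality": quality, "productivity": productivity,
                     "experience": experience},
                    {"quality": 0.4, "productivity": 0.4, "experience": 0.2})

print(taxonomic_expertise({
    "peer_reviewed_papers": 0.8, "revisionary_monographs": 0.6,
    "species_described": 0.7, "identification_requests": 0.9,
    "years_in_discipline": 0.5, "taxa_breadth": 0.4}))
```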

Relevance: 30.00%

Abstract:

Facial expression recognition (FER) systems must ultimately work on real data in uncontrolled environments, although most research studies have been conducted on lab-based data with posed or evoked facial expressions obtained in pre-set laboratory environments. It is very difficult to obtain data in real-world situations because privacy laws prevent unauthorized capture and use of video from events such as funerals, birthday parties, marriages etc. It is a challenge to acquire such data on a scale large enough for benchmarking algorithms. Although video obtained from TV or movies or postings on the World Wide Web may also contain ‘acted’ emotions and facial expressions, it may be more ‘realistic’ than the lab-based data currently used by most researchers. But is it? One way of testing this is to compare feature distributions and FER performance. This paper describes a database that has been collected from television broadcasts and the World Wide Web containing a range of environmental and facial variations expected in real conditions and uses it to answer this question. A fully automatic system that uses a fusion-based approach for FER on such data is introduced for performance evaluation. Performance improvements arising from the fusion of point-based texture and geometry features, and the robustness to image scale variations, are experimentally evaluated on this image and video dataset. Differences in FER performance between lab-based and realistic data, between different feature sets, and between different train-test data splits are investigated.
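
A minimal sketch of feature-level fusion as the abstract outlines it: texture and geometry feature vectors are concatenated per face and fed to a single classifier. The random features, dimensions, and the SVM choice below are placeholders, not the paper’s actual extractors or pipeline.

```python
# Feature-level fusion sketch: concatenate texture and geometry features,
# then train one classifier. Feature values here are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_texture, n_geometry = 200, 59, 34       # arbitrary sizes
texture = rng.normal(size=(n_samples, n_texture))    # stand-in for point-based texture features
geometry = rng.normal(size=(n_samples, n_geometry))  # stand-in for geometry features
labels = rng.integers(0, 6, size=n_samples)          # six basic expressions

fused = np.hstack([texture, geometry])               # the fusion step
X_train, X_test, y_train, y_test = train_test_split(fused, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```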

Relevance: 30.00%

Abstract:

In this paper, we present WebPut, a prototype system that adopts a novel web-based approach to the data imputation problem. Towards this, WebPut utilizes the available information in an incomplete database in conjunction with the data consistency principle. Moreover, WebPut extends effective Information Extraction (IE) methods for the purpose of formulating web search queries that are capable of effectively retrieving missing values with high accuracy. WebPut employs a confidence-based scheme that efficiently leverages our suite of data imputation queries to automatically select the most effective imputation query for each missing value. A greedy iterative algorithm is proposed to schedule the imputation order of the different missing values in a database, and in turn the issuing of their corresponding imputation queries, to improve the accuracy and efficiency of WebPut. In addition, several optimization techniques are proposed to reduce the cost of estimating the confidence of imputation queries at both the tuple level and the database level. Experiments based on several real-world data collections demonstrate not only the effectiveness of WebPut compared to existing approaches, but also the efficiency of our proposed algorithms and optimization techniques.
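
A sketch of the greedy scheduling idea in the spirit the abstract describes: at each step, impute the missing cell whose best web query currently has the highest estimated confidence, so earlier imputations can inform later queries. The functions candidate_queries, estimate_confidence and run_query are hypothetical stand-ins for WebPut’s query formulation, confidence estimation and web retrieval, which are not given here.

```python
# Greedy imputation-order sketch with hypothetical stand-in callables.
from typing import Callable, Dict, List, Optional, Tuple

Cell = Tuple[int, str]                  # (row index, attribute name)
Row = Dict[str, Optional[str]]          # None marks a missing value

def greedy_impute(table: List[Row],
                  candidate_queries: Callable[[List[Row], Cell], List[str]],
                  estimate_confidence: Callable[[str], float],
                  run_query: Callable[[str], str]) -> None:
    """Fill missing cells in confidence order; earlier fills can inform later queries."""
    missing = [(i, attr) for i, row in enumerate(table)
               for attr, value in row.items() if value is None]
    while missing:
        # Re-score every remaining cell by the confidence of its best query.
        def best_confidence(cell: Cell) -> float:
            return max((estimate_confidence(q) for q in candidate_queries(table, cell)),
                       default=0.0)
        cell = max(missing, key=best_confidence)
        queries = candidate_queries(table, cell)
        if queries:
            i, attr = cell
            table[i][attr] = run_query(max(queries, key=estimate_confidence))
        missing.remove(cell)
```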

Relevance: 30.00%

Abstract:

There is currently a wide range of research into the recent introduction of student response systems in higher education and tertiary settings (Banks 2006; Kay and Le Sange 2009; Beatty and Gerace 2009; Lantz 2010; Sprague and Dahl 2009). However, most of this pedagogical literature has generated ‘how to’ approaches regarding the use of ‘clickers’, keypads, and similar response technologies. There are currently no systematic reviews of the effectiveness of ‘GoSoapBox’ – a more recent, and increasingly popular, student response system – or of its capacity to enhance critical thinking and achieve sustained learning outcomes. With rapid developments in teaching and learning technologies across all undergraduate disciplines, there is a need to obtain comprehensive, evidence-based advice on these types of technologies, their uses, and overall efficacy. This paper addresses this current gap in knowledge. Our teaching team, in an undergraduate Sociology and Public Health unit at the Queensland University of Technology (QUT), introduced GoSoapBox as a mechanism for discussing controversial topics, such as sexuality, gender, economics, religion, and politics during lectures, and for taking opinion polls on social and cultural issues affecting human health. We also used this new teaching technology to allow students to interact with each other during class – on both social and academic topics – and to generate discussions and debates during lectures. The paper reports on a data-driven study into how this interactive online tool worked to improve engagement and the quality of academic work produced by students. Firstly, this paper covers the recent literature reviewing student response systems in tertiary settings. Secondly, it outlines the theoretical framework used to generate this pedagogical research. In keeping with the social and collaborative features of Web 2.0 technologies, Bandura’s Social Learning Theory (SLT) is applied here to investigate the effectiveness of GoSoapBox as an online tool for improving learning experiences and the quality of academic output by students. Bandura has emphasised the Internet as a tool for ‘self-controlled learning’ (Bandura 2001), as it provides the education sector with an opportunity to reconceptualise the relationship between learning and thinking (Glassman & Kang 2011). Thirdly, we describe the methods used to implement GoSoapBox in our lectures and tutorials, the aspects of the technology we drew on for learning purposes, and the methods for obtaining feedback from the students about the effectiveness or otherwise of this tool. Fourthly, we report findings from an examination of all student/staff activity on GoSoapBox, as well as reports from students about its benefits and limitations as a learning aid. We then present a theoretical model, produced via an iterative analytical process between SLT and our data analysis, for use by academics and teachers across the undergraduate curriculum. The model has implications for all teachers considering the use of student response systems to improve the learning experiences of their students. Finally, we consider some of the negative aspects of GoSoapBox as a learning aid.

Relevance: 30.00%

Abstract:

Physical design objects such as sketches, drawings, collages, storyboards and models play an important role in supporting communication and coordination in design studios. CAM (Cooperative Artefact Memory) is a mobile-tagging based messaging system that allows designers to collaboratively store relevant information onto their design objects in the form of messages, annotations and external web links. We studied the use of CAM in a Product Design studio over three weeks, involving three different design teams. In this paper, we briefly describe CAM and show how it serves as 'object memory'.
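
A tiny sketch of the ‘object memory’ idea behind CAM: each physical design object carries a tag, and scanning that tag appends or retrieves messages, annotations and links. The class and field names here are hypothetical, not CAM’s actual implementation.

```python
# Hypothetical 'object memory' store keyed by a design object's tag ID.
from collections import defaultdict
from datetime import datetime

class ObjectMemory:
    def __init__(self):
        self._entries = defaultdict(list)   # tag_id -> list of entries

    def post(self, tag_id: str, author: str, body: str, kind: str = "message"):
        """Attach a message, annotation or web link to a tagged design object."""
        self._entries[tag_id].append(
            {"author": author, "kind": kind, "body": body, "at": datetime.now()})

    def read(self, tag_id: str):
        """Everything the team has attached to this object, oldest first."""
        return list(self._entries[tag_id])

memory = ObjectMemory()
memory.post("sketch-042", "alice", "Handle feels too thin in the foam model.")
memory.post("sketch-042", "bob", "https://example.com/ergonomics-note", kind="link")
print(memory.read("sketch-042"))
```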

Relevance: 30.00%

Abstract:

DeepBlue is much more than just an orchestra. Its innovative approach to audience engagement led it to develop ESP, its Electronic Show Programme web app, which allows for real-time (synchronous) and delayed (asynchronous) audience interaction, customer feedback and research. The show itself is driven invisibly by a music technology operating system (currently QUT's Yodel) that allows it to adapt to a wide range of performance venues and varied types of presentation. DeepBlue's community engagement program has enabled over 5,500 young musicians and community choristers to participate in professional productions; it is also a cornerstone of DeepBlue's successful business model. You can view the ESP mobile web app at m.deepblue.net.au; if only the landing page is active, there is no show taking place or imminent. The ESP prototype has already been in use for 18 months. Imagine knowing what your audience really thinks – in real time – so you can track their feelings and thoughts through the show. This tool has been developed and used by the performing group DeepBlue since late 2012 in Australia and Asia (it has even been translated into Vietnamese). It has largely superseded DeepBlue's real-time SMS communication during shows. It enables an event presenter or performance group to take the pulse of an audience through a series of targeted questions that can be anonymous or attributed. This helps build better, longer-lasting, and more meaningful relationships with groups and individuals in the community. It can be used on a tablet, mobile phone or future platforms. Three organisations are trialling it so far.
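
A minimal sketch of the audience-interaction model the abstract implies: targeted questions whose responses may be anonymous or attributed, collected in real time or after the show. The class names and fields are hypothetical; ESP’s actual data model is not described in the abstract.

```python
# Hypothetical model of a targeted audience question and its responses.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional

@dataclass
class Response:
    text: str
    submitted_at: datetime
    respondent: Optional[str] = None        # None -> anonymous

@dataclass
class AudienceQuestion:
    prompt: str
    responses: List[Response] = field(default_factory=list)

    def answer(self, text: str, respondent: Optional[str] = None) -> None:
        self.responses.append(Response(text, datetime.now(), respondent))

q = AudienceQuestion("How did the second movement make you feel?")
q.answer("Uplifted")                                  # anonymous
q.answer("A little tense", respondent="seat-14C")     # attributed
print(len(q.responses), "responses so far")
```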

Relevance: 30.00%

Abstract:

The banking industry is under pressure. In order to compete, banks should adapt by concentrating on specific customer needs, following an outside-in perspective. This paper presents the design of a business model for banks that considers this development by providing flexible and comprehensive support for retail banking clients. It is demonstrated that the identification of customer processes and the consequent alignment of banking services to those processes offer great potential to increase customer retention in banking. It is also shown that information technology – especially smartphones – can serve as an interface between customers and suppliers to enable an alignment of offerings to customer processes. This approach enables the integration of banks into their customers’ lifestyles, creating emotional added value, improving the personal relationship and strengthening the customers’ affiliation with the bank. The paper presents the design of such a customer-process-centric smartphone application and derives success factors for its implementation.

Relevance: 30.00%

Abstract:

Health care systems are highly dynamic, not just due to developments and innovations in diagnosis and treatment, but also by virtue of emerging management techniques supported by modern information and communication technology. A multitude of stakeholders, such as patients, nurses, general practitioners or social carers, can be integrated by modeling the complex interactions necessary for managing the provision and consumption of health care services. Furthermore, it is the availability of Service-oriented Architecture (SOA) that supports those integration efforts by enabling the flexible and reusable composition of autonomous, loosely-coupled and web-enabled software components. However, there is still a gap between SOA and predominantly business-oriented perspectives (e.g. business process models). The alignment of both views is crucial not just for the guided development of SOA but also for the sustainable evolution of holistic enterprise architectures. In this paper, we combine the Semantic Object Model (SOM) and the Business Process Modelling Notation (BPMN) towards a model-driven approach to service engineering. By addressing a business system in Home Telecare and deriving a business process model that can eventually be controlled and executed by machines, in particular by composed web services, the full potential of a process-centric SOA is exploited.
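
A small sketch of the process-centric idea the abstract points to: an ordered process definition that orchestrates loosely-coupled services. The services below are plain Python callables standing in for composed web services, and the home-telecare steps are invented for illustration, not taken from the paper’s SOM/BPMN models.

```python
# Hypothetical home-telecare process executed step by step, the way a process
# engine would drive composed services. Services are stand-in callables.
from typing import Callable, Dict, List

Service = Callable[[Dict], Dict]

def collect_vitals(ctx: Dict) -> Dict:
    ctx["vitals"] = {"pulse": 72, "systolic": 128}     # stand-in for a device service
    return ctx

def assess_risk(ctx: Dict) -> Dict:
    ctx["alert"] = ctx["vitals"]["systolic"] > 140     # stand-in for a rules service
    return ctx

def notify_carer(ctx: Dict) -> Dict:
    if ctx["alert"]:
        ctx["notified"] = "general practitioner"       # stand-in for a messaging service
    return ctx

# The process model (here just an ordered list) is what the engine executes.
process: List[Service] = [collect_vitals, assess_risk, notify_carer]

context: Dict = {"patient_id": "demo-001"}
for step in process:
    context = step(context)
print(context)
```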

Relevance: 30.00%

Abstract:

The ability to identify and assess user engagement with transmedia productions is vital to the success of individual projects and the sustainability of this mode of media production as a whole. It is essential that industry players have access to tools and methodologies that offer the most complete and accurate picture of how audiences/users engage with their productions and which assets generate the most valuable returns on investment. Drawing upon research conducted with Hoodlum Entertainment, a Brisbane-based transmedia producer, this chapter outlines an initial assessment of the way engagement tends to be understood, why standard web analytics tools are ill-suited to measuring it, how a customised tool could offer solutions, and why this question of measuring engagement is so vital to the future of transmedia as a sustainable industry.

Relevance: 30.00%

Abstract:

Relevation! is a system for performing relevance judgements for information retrieval evaluation. Relevation! is web-based, fully configurable and expandable; it allows researchers to effectively collect assessments and additional qualitative data. The system is easily deployed allowing assessors to smoothly perform their relevance judging tasks, even remotely. Relevation! is available as an open source project at: http://ielab.github.io/relevation.
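
Relevance judgements for IR evaluation are commonly exchanged in the TREC qrels format ("topic iteration doc_id relevance", one judgement per line). The sketch below writes collected assessments in that format; whether Relevation! stores judgements this way internally is not stated in the abstract, and the function name is illustrative.

```python
# Write collected relevance assessments in the standard TREC qrels format.
from typing import Iterable, Tuple

Judgement = Tuple[str, str, int]   # (topic_id, doc_id, graded relevance)

def write_qrels(judgements: Iterable[Judgement], path: str) -> None:
    with open(path, "w") as out:
        for topic, doc, rel in judgements:
            out.write(f"{topic} 0 {doc} {rel}\n")   # '0' is the unused iteration field

write_qrels([("101", "clinical-report-17", 2),
             ("101", "clinical-report-42", 0)], "assessments.qrels")
```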

Relevance: 30.00%

Abstract:

To ensure the safe operation of Web-based systems in Web environments, we propose an SSPA (Server-based SHA-1 Page-digest Algorithm) to verify the integrity of Web content before the server issues an HTTP response to a user request. In addition to standard security measures, our Java implementation of the SSPA, called the Dynamic Security Surveillance Agent (DSSA), provides further security in terms of content integrity for Web-based systems. Its function is to prevent the display of Web content that has been altered through the malicious acts of attackers and intruders, protecting client machines. This is to protect the reputation of organisations from cyber-attacks and to ensure the safe operation of Web systems by dynamically monitoring the integrity of a Web site's content on demand. We discuss our findings in terms of the applicability and practicality of the proposed system. We also discuss its time metrics, specifically in relation to its computational overhead at the Web server, as well as the overall latency from the clients' point of view, using different Internet access methods. The SSPA, our DSSA implementation, some experimental results and related work are all discussed.
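
The core check the abstract describes can be sketched in a few lines: hash the page content with SHA-1 and compare it to a trusted digest recorded for that page before serving the response. The actual DSSA is a Java agent whose interfaces are not given here; the function names, the digest store, and the example digest value below are illustrative only.

```python
# Illustrative page-integrity check: compare SHA-1 of content to a trusted digest.
import hashlib
from typing import Dict

trusted_digests: Dict[str, str] = {
    "/index.html": "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12",  # example value
}

def page_digest(content: bytes) -> str:
    return hashlib.sha1(content).hexdigest()

def safe_to_serve(path: str, content: bytes) -> bool:
    """Return True only if the page still matches its recorded digest."""
    expected = trusted_digests.get(path)
    return expected is not None and page_digest(content) == expected

if not safe_to_serve("/index.html", b"<html>...</html>"):
    print("content altered or unknown: withhold the response and raise an alert")
```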

Relevance: 30.00%

Abstract:

This paper reports on the 2nd ShARe/CLEFeHealth evaluation lab, which continues our evaluation resource building activities for the medical domain. In this lab we focus on patients' information needs, as opposed to the more common campaign focus on the specialised information needs of physicians and other healthcare workers. The usage scenario of the lab is to support patients and their next-of-kin in understanding eHealth information, in particular clinical reports. The 1st ShARe/CLEFeHealth evaluation lab was held in 2013 and consisted of three tasks: Task 1 focused on named entity recognition and normalization of disorders; Task 2 on normalization of acronyms/abbreviations; and Task 3 on information retrieval to address questions patients may have when reading clinical reports. This year's lab introduces a new challenge in Task 1 on visual-interactive search and exploration of eHealth data. Its aim is to help patients (or their next-of-kin) with readability issues related to their hospital discharge documents and related information search on the Internet. Task 2 then continues the information extraction work of the 2013 lab, specifically focusing on disorder attribute identification and normalization from clinical text. Finally, this year's Task 3 further extends the 2013 information retrieval task by cleaning the 2013 document collection and introducing a new query generation method and multilingual queries. The de-identified clinical reports used by the three tasks were from US intensive care and originated from the MIMIC II database. Other text documents for Tasks 1 and 3 were from the Internet and originated from the Khresmoi project. Task 2 annotations originated from the ShARe annotations. For Tasks 1 and 3, new annotations, queries, and relevance assessments were created. In total, 50, 79, and 91 people registered their interest in Tasks 1, 2, and 3, respectively. Twenty-four unique teams participated, with 1, 10, and 14 teams in Tasks 1, 2 and 3, respectively. The teams were from Africa, Asia, Canada, Europe, and North America. The Task 1 submission, reviewed by 5 expert peers, related to the task evaluation category of Effective use of interaction and targeted the needs of both expert and novice users. The best system had an Accuracy of 0.868 in Task 2a, an F1-score of 0.576 in Task 2b, and Precision at 10 (P@10) of 0.756 in Task 3. The results demonstrate the substantial community interest in, and capabilities of, these systems in making clinical reports easier to understand for patients. The organisers have made data and tools available for future research and development.
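
Precision at 10 (P@10), reported for Task 3, is the fraction of the top ten ranked documents that are relevant. The sketch below shows the standard definition on synthetic data; it is not the lab’s own evaluation script.

```python
# Standard P@k computation on a ranked list and a set of relevant documents.
from typing import List, Set

def precision_at_k(ranked_docs: List[str], relevant: Set[str], k: int = 10) -> float:
    top_k = ranked_docs[:k]
    return sum(1 for d in top_k if d in relevant) / k

ranking = [f"doc{i}" for i in range(1, 21)]
qrels = {"doc1", "doc2", "doc4", "doc7", "doc9", "doc11", "doc12", "doc15"}
print(precision_at_k(ranking, qrels))   # 5 of the top 10 are relevant -> 0.5
```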

Relevance: 30.00%

Abstract:

In today’s information-driven society, many studies are exploring the usefulness and ease of use of technology. Research into personalizing next-generation user interfaces is also ever increasing. A better understanding of the factors that influence users’ perception of web search engine performance would contribute to achieving this. This study measures and examines how users’ perceived level of prior knowledge and experience influences their perceived level of satisfaction in using web search engines, and how their perceived level of satisfaction affects their perceived intention to reuse the system. Fifty participants from an Australian university took part in the current study, in which they performed three search tasks and completed survey questionnaires. A research model was constructed to test the proposed hypotheses. Correlation and regression analyses indicated a significant correlation between (1) users’ prior level of experience and their perceived level of satisfaction in using the web search engines, and (2) their perceived level of satisfaction in using the systems and their perceived intention to reuse the systems. A theoretical model is proposed to illustrate the causal relationships. The implications and limitations of the study are also discussed.
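
A minimal sketch of the kind of correlation/regression analysis the abstract reports, run on synthetic Likert-style scores (the study’s own data are not available here): prior experience versus satisfaction, and satisfaction versus intention to reuse.

```python
# Correlation and simple regression on synthetic survey-style data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50                                              # the study had 50 participants
experience = rng.integers(1, 8, size=n).astype(float)           # 7-point scale
satisfaction = np.clip(0.6 * experience + rng.normal(0, 1, n), 1, 7)
reuse_intent = np.clip(0.7 * satisfaction + rng.normal(0, 1, n), 1, 7)

r1, p1 = stats.pearsonr(experience, satisfaction)
r2, p2 = stats.pearsonr(satisfaction, reuse_intent)
slope, intercept, r, p, se = stats.linregress(satisfaction, reuse_intent)

print(f"experience~satisfaction:   r={r1:.2f}, p={p1:.3f}")
print(f"satisfaction~reuse intent: r={r2:.2f}, p={p2:.3f}, slope={slope:.2f}")
```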

Relevance: 30.00%

Abstract:

There is a new type of home education parent challenging long-held assumptions about homeschooling (cf. Morton 2012). These parents are well educated (cf. Beck 2010) but have chosen to eschew the social and cultural capital (Bourdieu & Wacquant 1992) of school in favour of something completely different. They are unschoolers, which involves ‘allowing children as much freedom to learn in the world as their parents can possibly bear’ (cf. Holt & Farenga 2003: 238). This chapter presents the approach taken by one researcher to explore the reasons families choose unschooling. These families can be difficult to access, because they often fail to register with home education units and thus remain outside the education system (cf. Townsend 2012). Their lack of registration makes them largely invisible, affecting their ability to make an important contribution to debates around education. In spite of this invisibility, many unschoolers are keen to talk to researchers to increase wider understanding of unschooling.