577 results for web tool
Abstract:
Young drivers aged 17-24 are consistently overrepresented in motor vehicle crashes. Research has shown that a young driver’s crash risk increases when carrying similarly aged passengers, with fatal crash risk increasing two- to three-fold with two or more passengers. Recent growth in access to and use of the internet has led to a corresponding increase in the number of web-based behaviour change interventions. An increasing body of literature describes the evaluation of web-based programs targeting risk behaviours and health issues. Evaluations have shown promise for such strategies, with evidence of positive changes in knowledge, attitudes and behaviour. The growing popularity of web-based programs is due in part to their wide accessibility, their capacity for personalised tailoring of intervention messages, and the self-direction and pacing of online content. Young people are also highly receptive to the internet, and the interactive elements of online programs are particularly attractive to them. The current study was designed to assess the feasibility of a web-based intervention to increase the use of personal and peer protective strategies among young adult passengers. An extensive review was conducted of the development and evaluation of web-based programs. Year 12 students were also surveyed about their use of the internet in general and for health and road safety information in particular. All students reported internet access at home or at school, and 74% had searched for road safety information. Additional findings show promise for the development of a web-based passenger safety program for young adults. Design and methodological issues will be discussed.
Abstract:
Query reformulation is a key user behavior during Web search. Our research goal is to develop predictive models of query reformulation during Web searching. This article reports results from a study in which we automatically classified the query-reformulation patterns for 964,780 Web searching sessions, composed of 1,523,072 queries, to predict the next query reformulation. We employed an n-gram modeling approach to describe the probability of users transitioning from one query-reformulation state to another to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction, coverage of the dataset, and complexity of the possible pattern set. The results show that Reformulation and Assistance account for approximately 45% of all query reformulations; furthermore, the results demonstrate that the first- and second-order models provide the best predictability, between 28 and 40% overall and higher than 70% for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance.
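As an illustrative sketch (not the study's actual implementation), a first-order model of this kind reduces to a Markov transition table over reformulation states; the state labels and session data below are hypothetical:

```python
from collections import defaultdict, Counter

def train_bigram_model(sessions):
    """Estimate first-order transition probabilities between
    query-reformulation states from observed state sequences."""
    counts = defaultdict(Counter)
    for session in sessions:
        for prev, nxt in zip(session, session[1:]):
            counts[prev][nxt] += 1
    model = {}
    for prev, nxt_counts in counts.items():
        total = sum(nxt_counts.values())
        model[prev] = {s: c / total for s, c in nxt_counts.items()}
    return model

def predict_next(model, state):
    """Predict the most probable next reformulation state."""
    if state not in model:
        return None
    return max(model[state], key=model[state].get)

# Hypothetical sessions labeled with reformulation states
sessions = [
    ["New", "Reformulation", "Assistance", "Reformulation"],
    ["New", "Specialization", "Reformulation", "Assistance"],
    ["New", "Reformulation", "Reformulation", "Content Change"],
]
model = train_bigram_model(sessions)
print(predict_next(model, "New"))  # most likely state after "New"
```

Higher-order models extend the same idea by conditioning on the previous two, three, or four states, trading coverage of the dataset for specificity of the prediction.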
Abstract:
In this paper, we use time series analysis to evaluate predictive scenarios using search engine transactional logs. Our goal is to develop models for the analysis of searchers’ behaviors over time and to investigate whether time series analysis is a valid method for predicting relationships between searcher actions. Time series analysis is a method often used to understand the underlying characteristics of temporal data in order to make forecasts. In this study, we used a Web search engine transactional log and time series analysis to investigate users’ actions. We conducted our analysis in two phases. In the initial phase, we employed a basic analysis and found that 10% of searchers clicked on sponsored links. However, from 22:00 to 24:00, searchers almost exclusively clicked on the organic links, with almost no clicks on sponsored links. In the second and more extensive phase, we used a one-step prediction time series analysis method along with a transfer function method. Time period rarely affected navigational queries, while rates for transactional queries varied across periods. Our results show that the average length of a searcher session is approximately 2.9 interactions and that this average is consistent across time periods. Most importantly, our findings show that searchers who submit the shortest queries (i.e., in number of terms) click on the highest-ranked results. We discuss implications, including predictive value, and future research.
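One-step prediction can be illustrated with a minimal autoregressive sketch. This AR(1) least-squares fit is an assumption for illustration, not the transfer-function model used in the study, and the hourly interaction counts are invented:

```python
def ar1_fit(series):
    """Fit y_t = a + b * y_{t-1} by ordinary least squares,
    a minimal form of one-step-ahead time series prediction."""
    x = series[:-1]
    y = series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

def ar1_predict(a, b, last_value):
    """One-step-ahead forecast from the last observed value."""
    return a + b * last_value

# Hypothetical hourly counts of searcher interactions
counts = [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
a, b = ar1_fit(counts)
print(ar1_predict(a, b, counts[-1]))  # forecast for the next hour
```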
Abstract:
This paper reports results from a study in which we automatically classified the query reformulation patterns for 964,780 Web searching sessions (composed of 1,523,072 queries) in order to predict what the next query reformulation would be. We employed an n-gram modeling approach to describe the probability of searchers transitioning from one query reformulation state to another and to predict their next state. We developed first-, second-, third-, and fourth-order models and evaluated each model for accuracy of prediction. Findings show that Reformulation and Assistance account for approximately 45 percent of all query reformulations. Searchers seem to seek system searching assistance early in the session or after a content change. The results of our evaluations show that the first- and second-order models provided the best predictability, between 28 and 40 percent overall, and higher than 70 percent for some patterns. Implications are that the n-gram approach can be used for improving searching systems and searching assistance in real time.
Abstract:
This paper reports preliminary results from a study modeling the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Study participants conducted three Web searches on personal information problems. Data collection techniques included pre- and post-search questionnaires, think-aloud protocols, Web search logs, observation, and post-search interviews. Key findings include: (1) users’ Web searches included multitasking, cognitive shifting and cognitive coordination processes; (2) cognitive coordination is the hinge linking multitasking and cognitive shifting that enables Web search construction; (3) cognitive shift levels determine the process of cognitive coordination; and (4) cognitive coordination is the interplay of the task, mechanism and strategy levels that underpin multitasking and task switching. An initial model depicts the interplay between multitasking, cognitive coordination, and cognitive shifts during Web search. Implications of the findings and further research are also discussed.
Abstract:
It is important to detect and treat malnutrition in hospital patients so as to improve clinical outcomes and reduce hospital stay. The aim of this study was to develop and validate a nutrition screening tool with a simple and quick scoring system for acute hospital patients in Singapore. In this study, 818 newly admitted patients aged over 18 years were screened using five parameters that contribute to the risk of malnutrition. A dietitian blinded to the nutrition screening score assessed the same patients using the reference standard, Subjective Global Assessment (SGA), within 48 hours. Sensitivity and specificity were established using the Receiver Operating Characteristic (ROC) curve, and the best cutoff scores were determined. The parameter combination with the largest Area Under the ROC Curve (AUC) was chosen as the final screening tool, which was named 3-Minute Nutrition Screening (3-MinNS). The combination of the parameters weight loss, intake and muscle wastage gave the largest AUC when compared with SGA. Using 3-MinNS, the best cutoff point to identify malnourished patients is three (sensitivity 86%, specificity 83%). The cutoff score to identify subjects at risk of severe malnutrition is five (sensitivity 93%, specificity 86%). 3-Minute Nutrition Screening is a valid, simple and rapid tool to identify patients at risk of malnutrition in Singapore acute hospital patients. It is able to differentiate between patients at risk of moderate malnutrition and severe malnutrition for prioritization and management purposes.
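To illustrate how a screening cutoff is evaluated against a reference standard, here is a hedged sketch: the scores and SGA labels below are invented, and Youden's index is one common criterion for choosing a cutoff, not necessarily the one used in this study:

```python
def sens_spec(scores, malnourished, cutoff):
    """Sensitivity and specificity of a screening score at a cutoff:
    a patient screens positive when score >= cutoff."""
    tp = sum(1 for s, m in zip(scores, malnourished) if m and s >= cutoff)
    fn = sum(1 for s, m in zip(scores, malnourished) if m and s < cutoff)
    tn = sum(1 for s, m in zip(scores, malnourished) if not m and s < cutoff)
    fp = sum(1 for s, m in zip(scores, malnourished) if not m and s >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

def best_cutoff(scores, labels):
    """Pick the cutoff maximizing Youden's index (sens + spec - 1)."""
    return max(sorted(set(scores)),
               key=lambda c: sum(sens_spec(scores, labels, c)))

# Hypothetical screening scores and SGA reference labels
scores = [0, 1, 2, 3, 4, 5, 6, 2, 3, 5]
labels = [False, False, False, True, True, True, True, False, True, True]
print(best_cutoff(scores, labels))
```

In practice, each candidate cutoff traces one point on the ROC curve, and the AUC summarizes performance across all cutoffs.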
Abstract:
Dealing with the ever-growing information overload on the Internet, Recommender Systems are widely used online to suggest to potential customers items they may like or find useful. Collaborative Filtering is the most popular technique for Recommender Systems; it collects opinions from customers in the form of ratings on items, services or service providers. In addition to customer ratings of service providers, a large amount of online customer feedback is available over the Internet as customer reviews, comments, newsgroup posts, discussion forums or blogs, collectively called user-generated content. This information can be used to derive the public reputation of service providers. To do this, data mining techniques, especially the recently emerged field of opinion mining, can be a useful tool. In this paper we present a state-of-the-art review of opinion mining from online customer feedback. We critically evaluate the existing work and highlight cutting-edge areas of interest in opinion mining. We also classify the approaches taken by different researchers into several categories and sub-categories, and analyse the strengths and limitations of each.
Abstract:
The CDIO (Conceive-Design-Implement-Operate) Initiative has been globally recognised as an enabler for engineering education reform. With the CDIO process, the CDIO Standards and the CDIO Syllabus, many scholarly contributions have been made around cultural change, curriculum reform and learning environments. In the Australasian region, reform is gaining significant momentum within the engineering education community, the profession, and higher education institutions. This paper presents the CDIO Syllabus cast into the Australian context by mapping it to the Engineers Australia Graduate Attributes, the Washington Accord Graduate Attributes and the Queensland University of Technology Graduate Capabilities. Furthermore, in recognition that many secondary schools and technical training institutions offer introductory engineering technology subjects, this paper presents an extended self-rating framework suited for recognising developing levels of proficiency at a preparatory level. A demonstrator mapping tool has been created to demonstrate the application of this extended graduate attribute mapping framework as a precursor to an integrated curriculum information model.
Abstract:
This paper reports on the research and development of an ICT tool to facilitate the learning of ratio and fractions by adult prisoners. The design of the ICT tool was informed by a semiotic framework for mathematical meaning-making. The ICT tool thus employed multiple semiotic resources, including topological, typological, and social-actional resources. The results showed that each individual semiotic resource could represent only part of the mathematical concept, while at the same time it might signify something else and create a misconception. When multiple semiotic resources were utilised, the mathematical ideas could be better learnt.
Abstract:
The artwork describes the web as a network environment and a space where people are connected; as a result, it can reshape you as an interactive participant who is able to regenerate an object into a new form through truly collaborative and cooperative interactions with others. The artwork was created based on research findings on the characteristics of the web: 1) Participatory (Slater 2002, p.536), 2) Communicational (Rheingold 1993), 3) Connected (Jordan 1999, p.80), and 4) Stylising (Jordan 1999, p.69). The artwork conceptualises and visualises those characteristics of the web based on principles of graphic design and visual communication.
Abstract:
Web design elements are significantly important for web designers in understanding target users, in designing effective communication, and in developing a successful web site. However, commonly known web design elements are so broad and varied that they are hard to conceive and classify, so many practitioners and design researchers approach web design elements from the perspective of graphic and visual design, which mainly focuses on print media. This paper discusses web design elements in terms of online user experience, as web media certainly differ from print media. It aims to propose a fundamentally new concept, called 'UEDUs: User Experience Design Units', which enables web designers to define web design elements and conceptualise user experience according to the purpose of web site development.
Abstract:
We argue that web service discovery technology should help users navigate a complex problem space by providing suggestions for services which they may not be able to formulate themselves, because they lack the epistemic resources to do so. Free text documents in service environments provide an untapped source of information for augmenting the epistemic state of the user and hence their ability to search effectively for services. A quantitative approach to semantic knowledge representation is adopted in the form of semantic space models computed from these free text documents. Knowledge of the user’s agenda is promoted by associational inferences computed from the semantic space. The inferences are suggestive and aim to promote human abductive reasoning to guide the user from fuzzy search goals into a better understanding of the problem space surrounding the given agenda. Experimental results are discussed based on a complex and realistic planning activity.
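A minimal sketch of a semantic space computed from free text, assuming simple term-document co-occurrence vectors and cosine similarity (the study's actual model construction may differ); the tokenized service descriptions are hypothetical:

```python
import math

def term_vectors(documents):
    """Build a simple semantic space: each term is represented by
    its occurrence counts across the documents."""
    vocab = sorted({t for doc in documents for t in doc})
    return {t: [doc.count(t) for doc in documents] for t in vocab}

def cosine(u, v):
    """Cosine similarity between two count vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def suggest(space, query_term, k=2):
    """Suggest the k terms most associated with the query term,
    as a crude form of associational inference."""
    q = space[query_term]
    ranked = sorted((t for t in space if t != query_term),
                    key=lambda t: cosine(space[t], q), reverse=True)
    return ranked[:k]

# Hypothetical free-text service descriptions, already tokenized
docs = [
    ["flight", "booking", "travel"],
    ["hotel", "booking", "travel"],
    ["insurance", "quote", "policy"],
]
space = term_vectors(docs)
print(suggest(space, "booking"))
```

Suggestions surfaced this way can point the user toward terms and services they did not know to ask for, which is the abductive prompt the abstract describes.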
Abstract:
Given the size and state of the Internet today, a good-quality approach to organizing this mass of information is of great importance. Clustering web pages into groups of similar documents is one approach, but it relies heavily on good feature extraction and document representation, as well as a good clustering approach and algorithm. Because the changing nature of the Internet results in a dynamic dataset, an incremental approach is preferred. In this work we propose an enhanced incremental clustering approach to develop a better clustering algorithm that can help to better organize the information available on the Internet in an incremental fashion. Experiments show that the enhanced algorithm outperforms the original histogram-based algorithm by up to 7.5%.
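A minimal single-pass sketch of incremental clustering, assuming cosine similarity over term-weight vectors and a fixed similarity threshold; this is not the enhanced histogram-based algorithm itself, and the page vectors are invented:

```python
import math

def similarity(u, v):
    """Cosine similarity between two document term-weight dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def incremental_cluster(documents, threshold=0.5):
    """Single-pass incremental clustering: each arriving document joins
    the most similar existing cluster, or starts a new one."""
    clusters = []  # each cluster: {"centroid": dict, "members": list}
    for i, doc in enumerate(documents):
        best, best_sim = None, threshold
        for cluster in clusters:
            sim = similarity(doc, cluster["centroid"])
            if sim >= best_sim:
                best, best_sim = cluster, sim
        if best is None:
            clusters.append({"centroid": dict(doc), "members": [i]})
        else:
            best["members"].append(i)
            n = len(best["members"])
            c = best["centroid"]
            for t, w in doc.items():  # running mean of member vectors
                c[t] = c.get(t, 0.0) * (n - 1) / n + w / n
            for t in list(c):
                if t not in doc:
                    c[t] *= (n - 1) / n
    return [cluster["members"] for cluster in clusters]

# Hypothetical term-weight vectors for four web pages
pages = [
    {"python": 1.0, "code": 1.0},
    {"python": 1.0, "tutorial": 1.0},
    {"cooking": 1.0, "recipe": 1.0},
    {"recipe": 1.0, "baking": 1.0},
]
print(incremental_cluster(pages, threshold=0.4))
```

Because each page is assigned as it arrives, the clustering stays current as the dataset changes, which is the motivation for the incremental approach.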
Abstract:
Expert elicitation is the process of retrieving and quantifying expert knowledge in a particular domain. Such information is of particular value when the empirical data is expensive, limited, or unreliable. This paper describes a new software tool, called Elicitator, which assists in quantifying expert knowledge in a form suitable for use as a prior model in Bayesian regression. Potential environmental domains for applying this elicitation tool include habitat modeling, assessing detectability or eradication, ecological condition assessments, risk analysis, and quantifying inputs to complex models of ecological processes. The tool has been developed to be user-friendly, extensible, and facilitate consistent and repeatable elicitation of expert knowledge across these various domains. We demonstrate its application to elicitation for logistic regression in a geographically based ecological context. The underlying statistical methodology is also novel, utilizing an indirect elicitation approach to target expert knowledge on a case-by-case basis. For several elicitation sites (or cases), experts are asked simply to quantify their estimated ecological response (e.g. probability of presence), and its range of plausible values, after inspecting (habitat) covariates via GIS.
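As a hedged illustration of turning an elicited response and its plausible range into a prior distribution, one simple option is moment-matching a Beta distribution; reading the range as roughly plus or minus two standard deviations is an assumption for this sketch, not the Elicitator methodology:

```python
def beta_from_elicitation(mean, low, high):
    """Moment-match a Beta prior to an elicited probability and its
    plausible range (range taken as ~ +/- 2 standard deviations)."""
    sd = (high - low) / 4.0
    var = sd * sd
    nu = mean * (1.0 - mean) / var - 1.0  # alpha + beta
    alpha = mean * nu
    beta = (1.0 - mean) * nu
    return alpha, beta

# Expert judges presence probability about 0.3, plausibly 0.1 to 0.5
alpha, beta = beta_from_elicitation(0.3, 0.1, 0.5)
print(alpha, beta)
```

A prior built this way, elicited case by case at different sites, is the kind of input the indirect approach feeds into the Bayesian regression.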