347 results for Tangible User Interfaces
Abstract:
Detection of Regions of Interest (ROI) in a video leads to more efficient use of bandwidth: any ROI in a given frame can be encoded at higher quality than the rest of that frame, with little or no degradation of quality as perceived by viewers, so it is unnecessary to uniformly encode the whole video at high quality. One approach to determining ROIs is to use saliency detectors to locate salient regions. This paper proposes a methodology for obtaining ground-truth saliency maps to measure the effectiveness of ROI detection, taking into account the role of user experience during the labelling of such maps. User perceptions can be captured and incorporated into the definition of salience in a particular video, taking advantage of human visual recall within a given context. Experiments with two state-of-the-art saliency detectors confirm the effectiveness of this approach to validating visual saliency in video. The datasets associated with the experiments are also provided.
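As an illustrative sketch (not the paper's own evaluation code), a detector's output can be scored against such a ground-truth saliency map with the linear correlation coefficient (CC), a standard saliency-evaluation metric; the function name and flat-list map representation here are assumptions for the example.

```python
import math

def pearson_cc(pred, gt):
    """Linear correlation coefficient (CC) between a predicted
    saliency map and a ground-truth map, both given as flat lists
    of per-pixel saliency values."""
    n = len(pred)
    mp = sum(pred) / n
    mg = sum(gt) / n
    cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gt))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    sg = math.sqrt(sum((g - mg) ** 2 for g in gt))
    return cov / (sp * sg)

# A detector that exactly reproduces the ground truth scores CC = 1.
gt = [0.0, 0.2, 0.9, 0.4]
print(round(pearson_cc(gt, gt), 6))  # 1.0
```

A CC near 1 indicates the detector's map varies pixel-for-pixel with the user-labelled ground truth; a CC near 0 indicates no linear agreement.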
Abstract:
Tangible programming elements offer the dynamic and programmable properties of a computer without the complexity introduced by the keyboard, mouse and screen. This paper explores the extent to which programming skills are used by children during interactions with a set of tangible programming elements: the Electronic Blocks. An evaluation of the Electronic Blocks indicates that children become heavily engaged with the blocks, and learn simple programming with a minimum of adult support.
Abstract:
Recommender systems are a recent invention to deal with ever-growing information overload. Collaborative filtering is the most popular technique in recommender systems; with sufficient background information on item ratings, its performance is promising. However, research shows that it performs very poorly in cold-start situations where previous rating data is sparse. As an alternative, trust can be used for neighbor formation to generate automated recommendations. Explicit trust ratings assigned by users, indicating how much they trust each other, are used for this purpose. However, reliable explicit trust data is not always available. In this paper we propose a new method of developing trust networks based on users' interest similarity in the absence of explicit trust data. To identify interest similarity, we use users' personalized tagging information. This trust network can be used to find neighbors for making automated recommendations. Our experimental results show that the proposed trust-based method outperforms the traditional collaborative filtering approach, which uses users' rating data. Its performance improves even further when trust propagation techniques are utilized to broaden the range of the neighborhood.
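A minimal sketch of the core idea, assuming tag-frequency profiles and a cosine-similarity threshold (the threshold value and function names are illustrative, not taken from the paper): users whose tagging behaviour is sufficiently similar are treated as implicit trust neighbors.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse tag-frequency vectors
    represented as {tag: count} dicts."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def trust_neighbors(profiles, user, threshold=0.3):
    """Users whose tagging profile is similar enough to `user`'s
    form that user's implicit trust neighborhood."""
    me = profiles[user]
    return sorted(
        other for other, tags in profiles.items()
        if other != user and cosine(me, tags) >= threshold
    )

profiles = {
    "alice": {"python": 3, "recsys": 2},
    "bob":   {"python": 1, "recsys": 4},
    "carol": {"gardening": 5},
}
print(trust_neighbors(profiles, "alice"))  # ['bob']
```

Trust propagation, as mentioned in the abstract, would then extend this neighborhood transitively (e.g., neighbors of neighbors with discounted similarity).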
Abstract:
Information overload has become a serious issue for web users. Personalisation can provide effective solutions to this problem, and recommender systems are one popular personalisation tool to help users deal with it. As the basis of personalisation, the accuracy and efficiency of web user profiling greatly affects the performance of recommender systems and other personalisation systems. In Web 2.0, emerging user information provides new possible solutions for profiling users. Folksonomy, or tag information, is a typical kind of Web 2.0 information: it implies users' topic interests and opinions, and has become another important source of user information for profiling users and making recommendations. However, since tags are arbitrary words given by users, folksonomy contains a lot of noise, such as tag synonyms, semantic ambiguities and personal tags. Such noise makes it difficult to profile users accurately or to make quality recommendations. This thesis investigates the distinctive features and multiple relationships of folksonomy and explores novel approaches to solve the tag-quality problem and profile users accurately. Harvesting the wisdom of crowds and experts, three new user profiling approaches are proposed: a folksonomy-based approach, a taxonomy-based approach, and a hybrid approach based on both folksonomy and taxonomy. The proposed user profiling approaches are applied to recommender systems to improve their performance. Based on the generated user profiles, user- and item-based collaborative filtering approaches, combined with content filtering methods, are proposed to make recommendations. The proposed user profiling and recommendation approaches have been evaluated through extensive experiments. The effectiveness evaluation experiments were conducted on two real-world datasets collected from the Amazon.com and CiteULike websites. The experimental results demonstrate that the proposed user profiling and recommendation approaches outperform related state-of-the-art approaches. In addition, this thesis proposes a parallel, scalable user profiling implementation based on advanced cloud computing techniques such as Hadoop, MapReduce and Cascading. The scalability evaluation experiments were conducted on a large-scale dataset collected from the Del.icio.us website. This thesis contributes to effectively using the wisdom of crowds and experts to help users solve information overload issues by providing more accurate, effective and efficient user profiling and recommendation approaches. It also contributes to better use of taxonomy information given by experts and folksonomy information contributed by users in Web 2.0.
Abstract:
The large-scale emerging user-created information in Web 2.0, such as tags, reviews, comments and blogs, can be used to profile users' interests and preferences to make personalized recommendations. To solve the scalability problem of current user profiling and recommender systems, this paper proposes a parallel user profiling approach and a scalable recommender system. Current advanced cloud computing techniques, including Hadoop, MapReduce and Cascading, are employed to implement the proposed approaches. The experiments were conducted on Amazon EC2 Elastic MapReduce and S3 with a real-world large-scale dataset from the Del.icio.us website.
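The MapReduce pattern behind such a parallel profiling pipeline can be sketched in-process (this is an illustrative emulation, not the paper's Hadoop/Cascading code; record layout and function names are assumptions): the map phase emits (user, tag) pairs from each bookmark record, and the reduce phase sums counts per user into a tag-based profile.

```python
from collections import defaultdict

def map_phase(records):
    """Map: each (user, tags) record emits a ((user, tag), 1) pair
    per tag, mirroring a MapReduce mapper's key-value output."""
    for user, tags in records:
        for tag in tags:
            yield (user, tag), 1

def reduce_phase(pairs):
    """Reduce: sum counts per (user, tag) key to build each user's
    tag-frequency profile."""
    profiles = defaultdict(lambda: defaultdict(int))
    for (user, tag), count in pairs:
        profiles[user][tag] += count
    return {u: dict(tags) for u, tags in profiles.items()}

records = [
    ("alice", ["python", "web"]),
    ("alice", ["python"]),
    ("bob", ["design"]),
]
print(reduce_phase(map_phase(records)))
# {'alice': {'python': 2, 'web': 1}, 'bob': {'design': 1}}
```

In an actual Hadoop deployment the map and reduce functions run on separate nodes over dataset shards, which is what makes the approach scale to Del.icio.us-sized data.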
Abstract:
Previous studies exploring the incidence and readmission rates of cardiac patients with type 2 diabetes admitted to a coronary care unit (CCU) [1] have been undertaken by the first author. Interviews with these patients regarding their experiences in managing their everyday conditions [2] provided the basis for developing the initial cardiac–diabetes self-management programme (CDSMP) [3]. Findings from each of these previous studies highlighted the complexity of self-management for patients with both conditions and contributed to the creation of a new self-management programme, the CDSMP, based on Bandura's (2004) self-efficacy theory [4]. From patient and staff feedback on the CDSMP [3], it became evident that further revision of the programme was needed to improve patients' self-management levels and to explore the possibility of incorporating information technology (IT). Little is known about the applicability of different technologies for delivering self-management programmes to patients with chronic diseases such as type 2 diabetes and cardiac conditions. Although there is some evidence supporting the benefits and great potential of using IT in self-management programmes, it is not strong, and further research on the use of IT in such programmes is recommended [5–7]. Therefore, this study was designed to pilot-test the feasibility of the CDSMP incorporating telephone and text-messaging as follow-up approaches.
Abstract:
Intelligent agents are an advanced technology utilized in Web Intelligence. When searching for information in a distributed Web environment, information is retrieved by multi-agents at the client site and fused at the broker site. Current information fusion techniques rely on the cooperation of agents to provide statistics. Such techniques are computationally expensive and unrealistic in the real world. In this paper, we introduce a model that uses a world ontology constructed from the Dewey Decimal Classification to acquire user profiles. By searching with specific and exhaustive user profiles, information fusion techniques no longer rely on the statistics provided by agents. The model has been successfully evaluated using the large INEX dataset simulating the distributed Web environment.
Abstract:
Relevance feedback (RF) has proven very effective for improving retrieval accuracy. Adaptive information filtering (AIF) technology has benefited from improvements achieved in all of its component tasks over the last decades. A difficult problem in AIF has been how to update the system with new feedback efficiently and effectively. In current feedback methods, the updating processes focus on updating system parameters. In this paper, we develop a new approach, Adaptive Relevance Features Discovery (ARFD). It automatically updates the system's knowledge based on a sliding window over positive and negative feedback to solve a nonmonotonic problem efficiently. Some of the new training documents are selected using the knowledge the system currently holds; specific features are then extracted from the selected documents, and different methods are used to merge and revise the weights of features in a vector space. The new model is designed for Relevance Features Discovery (RFD), a pattern-mining-based approach that uses negative relevance feedback to improve the quality of features extracted from positive feedback. Learning algorithms are also proposed to implement this approach on Reuters Corpus Volume 1 and TREC topics. Experiments show that the proposed approach works efficiently and achieves encouraging performance.
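The sliding-window idea can be sketched as follows; this is a simplified stand-in, assuming a fixed-size window of judged documents and a naive positive-minus-negative term weighting, not ARFD's actual pattern-mining feature weights.

```python
from collections import Counter, deque

class SlidingFeedback:
    """Keeps only the most recent feedback documents (a sliding
    window), so feature weights adapt as new judgments arrive and
    stale feedback is forgotten."""

    def __init__(self, window_size=4):
        # deque with maxlen silently evicts the oldest entry
        # once the window is full.
        self.window = deque(maxlen=window_size)

    def update(self, terms, relevant):
        self.window.append((terms, relevant))

    def feature_weights(self):
        # Naive weighting: +1 per occurrence in a positive
        # document, -1 per occurrence in a negative one,
        # computed over the current window only.
        w = Counter()
        for terms, relevant in self.window:
            for t in terms:
                w[t] += 1 if relevant else -1
        return dict(w)

fb = SlidingFeedback(window_size=2)
fb.update(["filter", "spam"], relevant=True)
fb.update(["spam"], relevant=False)
print(fb.feature_weights())  # {'filter': 1, 'spam': 0}
```

Because old judgments drop out of the window, a term's weight can rise and later fall again, which is the nonmonotonic behaviour the abstract refers to.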
Abstract:
In larger developments there is potential for construction cranes to encroach into the airspace of neighbouring properties. To resolve issues of this nature, a statutory right of user may be sought under s 180 of the Property Law Act 1974 (Qld). Section 180 allows the court to impose a statutory right of user on servient land where it is reasonably necessary in the interests of effective use, in any reasonable manner, of the dominant land. Such an order will not be made unless the court is satisfied that it is consistent with the public interest, that the owner of the servient land can be adequately recompensed for any loss or disadvantage which may be suffered from the imposition, and that the owner of the servient land has unreasonably refused to agree to accept the imposition of that obligation. In applying the statutory provision, a key practical concern for legal advisers will be the basis for assessment of compensation. A recent decision of the Queensland Supreme Court (Douglas J) provides guidance concerning matters relevant to this assessment. The decision is Lang Parade Pty Ltd v Peluso [2005] QSC 112.
Abstract:
The decision of McMurdo J in Pacific Coast Investments Pty Ltd v Cowlishaw [2005] QSC 259 concerned an application under s 180 of the Property Law Act 1974 (Qld) for a statutory right of user.
Abstract:
A vast proportion of companies nowadays are looking to design, focusing on end users as a means of driving new projects. However, many companies are still drawn to the technological improvements that drive innovation within their industry context. The Australian livestock industry is no different: to date, the adoption of new products and services within the industry has been documented as quite slow. This paper investigates how disruptive innovation should be a priority for these technologically focused companies and demonstrates how design-led innovation can bring about higher-quality engagement between end users and companies alike. A case study linking participatory design and design thinking is presented. Within it, a conceptual model of presenting future scenarios to internal and external stakeholders is applied to the livestock industry, helping companies apply strategy and culture to advance meaningful product offerings to consumers.
Abstract:
This paper reports an empirical study on measuring transit service reliability using data from a Web-based passenger survey on a major transit corridor in Brisbane, Australia. After an introduction to transit service reliability measures, the paper presents results from the case study, including the study area, data collection, and the reliability measures obtained. This includes an exploration of boarding/arrival lateness, in-vehicle time variation, waiting time variation, and headway adherence. Impacts of peak-period effects and separate operation on service reliability are examined. Relationships between transit service characteristics and passenger waiting time are also discussed. A summary of key findings and an agenda for future research are offered in conclusion.
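One of the standard reliability measures named above, headway adherence, is commonly quantified as the coefficient of variation of headways, which in turn drives the expected waiting time of randomly arriving passengers via E[W] = (h/2)(1 + cv^2). The sketch below illustrates that standard relationship; it is not the paper's own analysis code.

```python
import math

def headway_reliability(headways):
    """Given observed headways (minutes), return the coefficient of
    variation of headways (a headway-adherence measure) and the
    expected waiting time for randomly arriving passengers:
    E[W] = (mean headway / 2) * (1 + cv**2)."""
    n = len(headways)
    mean = sum(headways) / n
    var = sum((h - mean) ** 2 for h in headways) / n
    cv = math.sqrt(var) / mean
    expected_wait = (mean / 2) * (1 + cv ** 2)
    return cv, expected_wait

# Perfectly even 10-minute headways: cv = 0, expected wait = 5 min.
cv, w = headway_reliability([10, 10, 10, 10])
print(cv, w)  # 0.0 5.0
```

Irregular headways with the same mean raise the expected wait: for headways of 5 and 15 minutes, cv = 0.5 and E[W] = 6.25 minutes, which is why headway adherence matters for perceived reliability.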