871 results for User-based collaborative filtering
Abstract:
Ubiquitous Computing is an emerging paradigm that enables users to access their preferred services wherever they are, whenever they want, and in the way they need, with zero administration. While moving from one place to another, the user does not need to specify and configure the surrounding environment; the system initiates the necessary adaptation by itself to cope with the changing environment. In this paper we propose a system that provides context-aware ubiquitous multimedia services without user intervention. We analyze the user's context based on weights, identify the UMMS (Ubiquitous Multimedia Service) from the collected context information and the user profile, search for the optimal server to provide the required service, and then adapt the service to the user's local environment and preferences. The experiment was conducted several times with different context parameters, weights, and preferences for a user. The results are quite encouraging.
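The weight-based context analysis described above can be illustrated with a small sketch. Everything here (parameter names, weights, the matching rule) is an illustrative assumption, not the paper's actual model:

```python
# Minimal sketch of weighted context matching for service selection.
# Parameter names and weights are invented for illustration.

def score_services(context, weights, services):
    """Rank candidate services by a weighted match against the user context.

    context  -- dict of context parameters, e.g. {"location": "home", ...}
    weights  -- dict assigning a relative weight to each parameter
    services -- list of dicts, each with a "requires" dict of parameter values
    """
    ranked = []
    for service in services:
        score = sum(
            weights.get(param, 0.0)
            for param, value in service["requires"].items()
            if context.get(param) == value
        )
        ranked.append((score, service["name"]))
    return sorted(ranked, reverse=True)

context = {"location": "home", "network": "wifi", "device": "tablet"}
weights = {"location": 0.5, "network": 0.3, "device": 0.2}
services = [
    {"name": "hd-video", "requires": {"network": "wifi", "device": "tablet"}},
    {"name": "audio-only", "requires": {"network": "3g"}},
]
print(score_services(context, weights, services))  # hd-video scores highest
```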
Abstract:
This paper proposes a hierarchical probabilistic model for ordinal matrix factorization. Unlike previous approaches, we model the ordinal nature of the data and take a principled approach to incorporating priors for the hidden variables. Two algorithms are presented for inference, one based on Gibbs sampling and one based on variational Bayes. Importantly, these algorithms may be implemented in the factorization of very large matrices with missing entries. The model is evaluated on a collaborative filtering task, where users have rated a collection of movies and the system is asked to predict their ratings for other movies. The Netflix data set is used for evaluation, which consists of around 100 million ratings. Using root mean-squared error (RMSE) as an evaluation metric, results show that the suggested model outperforms alternative factorization techniques. Results also show how Gibbs sampling outperforms variational Bayes on this task, despite the large number of ratings and model parameters. Matlab implementations of the proposed algorithms are available from cogsys.imm.dtu.dk/ordinalmatrixfactorization.
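For readers unfamiliar with the setup, the following is a minimal sketch of matrix factorization for rating prediction evaluated with RMSE. It is a plain SGD-trained factorization on synthetic data, not the paper's hierarchical ordinal model or its Gibbs/variational inference:

```python
# Minimal sketch: latent-factor rating prediction trained by SGD and
# evaluated with RMSE. Stand-in random data replaces the Netflix ratings.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, k = 100, 50, 5
# (user, item, rating) triples with ratings on a 1-5 scale
ratings = [(rng.integers(n_users), rng.integers(n_items),
            float(rng.integers(1, 6))) for _ in range(2000)]

U = 0.1 * rng.standard_normal((n_users, k))   # user factors
V = 0.1 * rng.standard_normal((n_items, k))   # item factors
lr, reg = 0.01, 0.05
for epoch in range(20):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

rmse = np.sqrt(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]))
print(f"train RMSE: {rmse:.3f}")
```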
Abstract:
Traditional collaborative filtering recommendation methods make recommendations mainly on the basis of individual interest profiles. In intra-organizational collaboration scenarios, in order to achieve knowledge sharing and reuse, a recommender system must consider not only user interests but also the tasks of users and user groups; traditional collaborative filtering methods can no longer meet this requirement. The CoP (Community of Practice) is the main form of personnel organization inside an enterprise, and its profile is a reflection of its members' tasks. Building on existing collaborative filtering research and D-S (Dempster-Shafer) theory, this paper proposes a CoP profile construction algorithm and, on that basis, studies CoP-oriented collaborative filtering recommendation.
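The D-S machinery the paper builds on can be sketched briefly. The following shows only Dempster's rule of combination on toy task evidence; the CoP profile construction algorithm itself is the paper's contribution and is not reproduced here:

```python
# Minimal sketch of Dempster's rule of combination over two mass functions.
# The task topics and mass values are invented for illustration.
from itertools import product

def combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2          # mass assigned to disjoint sets
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two members' evidence about the CoP's task topics
m1 = {frozenset({"search"}): 0.7, frozenset({"search", "ranking"}): 0.3}
m2 = {frozenset({"ranking"}): 0.4, frozenset({"search", "ranking"}): 0.6}
print(combine(m1, m2))
```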
Abstract:
This dissertation investigates the practice of leadership in collaboratively designed and funded research in a university setting. More specifically, this research explores the meaning of leadership as experienced by researchers who were, or still are, engaged in collaborative research projects funded by the Social Sciences and Humanities Research Council (SSHRC) in a university setting. This qualitative study (Gay & Airasian, 2003) is situated within a social constructivist paradigm (Kezar, Carducci, & Contreras-McGavin, 2006) and involves an analysis of the responses from 12 researchers who answered 11 questions related to my overarching research question: What is the impact of leadership on university-based collaborative research projects funded by the Social Sciences and Humanities Research Council, based on the experiences of the researchers involved? The data that emerged supported and enhanced the existing literature on leadership and collaborative groups in academia. The preferred type of leadership that emerged from this research suggests that the leader best suited to this context might be described as a functional collaborative expert.
Abstract:
Medical universities and teaching hospitals in Iraq are facing a shortage of professional staff due to the ongoing violence that forces them to flee the country. These professionals are now distributed outside the country, which reduces the chances for staff and students to be physically in one place to continue teaching and limits the efficiency of consultations in hospitals. A survey was conducted among students and professional staff in Iraq to identify the problems in the learning and clinical systems and how Information and Communication Technology could improve them. The survey showed that 86% of the participants use the Internet as a learning resource and 25% for clinical purposes, while less than 11% use it for collaboration between different institutions. A web-based collaborative tool is proposed to improve the teaching and clinical systems. The tool helps users collaborate remotely to increase the quality of the learning system, and it can also be used for remote medical consultation in hospitals.
Abstract:
Wikipedia is a free, web-based, collaborative, multilingual encyclopedia project supported by the non-profit Wikimedia Foundation. Because Wikipedia is free and allows anyone to edit articles, article quality may suffer. Since people differ in their level of knowledge and hold different opinions about a topic, contributions made by different authors may vary. To address this, it is important to classify articles so that good-quality articles can be separated from poor-quality ones, which can then be removed from the database. The aim of this study is to classify Wikipedia articles into two classes, class 0 (poor quality) and class 1 (good quality), using the Adaptive Neuro Fuzzy Inference System (ANFIS) and data mining techniques. Two ANFIS models are built using the Fuzzy Logic Toolbox [1] available in Matlab. The first ANFIS is based on the rules obtained from the J48 classifier in WEKA, while the other was built using expert knowledge. The data used for this research contains records of 226 articles taken from the German version of Wikipedia. The dataset consists of 19 inputs and one output. The data was preprocessed to remove similar attributes. The input variables relate to the editors, contributors, length of articles, and the lifecycle of articles. Finally, the methods implemented in this research are analyzed to compare the performance of each classification method used.
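As a rough illustration of the fuzzy rule evaluation underlying an ANFIS-style classifier (the study itself uses Matlab's Fuzzy Logic Toolbox), here is a pure-Python sketch; the features, membership functions, and rules are invented for illustration:

```python
# Minimal sketch of fuzzy rule evaluation for article-quality scoring.
# Feature names, membership functions, and rules are illustrative only.

def triangular(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(article):
    # Degree to which the article "has many editors" and "is long"
    many_editors = triangular(article["editors"], 2, 20, 60)
    long_text = triangular(article["length"], 500, 5000, 20000)
    # Rule strengths: min for AND, complement for the "poor" rule
    good = min(many_editors, long_text)        # IF many editors AND long
    poor = 1.0 - max(many_editors, long_text)  # IF few editors AND short
    # Weighted-average defuzzification to a class score in [0, 1]
    return good / (good + poor) if good + poor else 0.5

print(classify({"editors": 25, "length": 6000}))  # near class 1 (good)
print(classify({"editors": 1, "length": 300}))    # near class 0 (poor)
```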
Abstract:
Motivated by the development of a graphical representation of networks with a large number of vertices, useful for collaborative filtering applications, this work proposes the use of cohesion surfaces over a multidimensionally scaled thematic base. To this end, it combines classical multidimensional scaling and Procrustes analysis in an iterative algorithm that produces partial solutions, which are then combined into a global solution. Applied to an example of book-loan transactions from the Karl A. Boedecker Library, the proposed algorithm produces interpretable and thematically coherent outputs and exhibits lower stress than the classical scaling solution.
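The two building blocks the algorithm combines, classical multidimensional scaling and Procrustes alignment of partial solutions, can be sketched as follows; the iterative merging into a global solution is the paper's contribution and is not shown:

```python
# Minimal sketch: classical MDS (Torgerson scaling) on two overlapping
# sub-matrices, then Procrustes alignment on the shared points.
import numpy as np
from scipy.spatial import procrustes

def classical_mds(dist, dim=2):
    """Embed a distance matrix into `dim` dimensions."""
    n = dist.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    B = -0.5 * J @ (dist ** 2) @ J           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:dim]     # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0))

rng = np.random.default_rng(0)
points = rng.random((12, 2))                 # toy "thematic base"
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# Two overlapping partial solutions, aligned on their 4 shared points
sol_a = classical_mds(dist[:8, :8])          # points 0..7
sol_b = classical_mds(dist[4:, 4:])          # points 4..11
aligned_a, aligned_b, disparity = procrustes(sol_a[4:], sol_b[:4])
print(f"disparity on the 4 shared points: {disparity:.4f}")
```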
Abstract:
Due to the large amount of television content that emerged with Digital TV, viewers face a new challenge: how to find interesting content intuitively and efficiently. Personalized Electronic Programming Guides (pEPG) arise as an answer to this complex challenge. We propose TrendTV, a layered architecture that allows the formation of social networks among viewers of Interactive Digital TV based on online microblogging. Associated with a pEPG, this social network allows viewers to filter content on a particular subject based on the indications made by other viewers in their network. Viewers can create their own indications for particular content while it is displayed, or analyze the importance of a particular program online based on these indications. Any user can thus filter content and generate or exchange information with other users in a flexible and transparent way, using several different devices (TVs, smartphones, tablets, or PCs). Moreover, this architecture defines a mechanism for automatically switching channels based on the best program currently showing, suggesting new components to be added to the middleware of the Brazilian Digital TV System (Ginga). The result is a dynamically constructed database containing the classification of several TV programs, as well as an application that automatically switches to the best channel of the moment.
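The indication-based selection of the "best channel of the moment" might look like the following sketch; the data structures are illustrative assumptions, not TrendTV's actual middleware interfaces:

```python
# Minimal sketch: pick the channel most indicated right now by viewers in
# the user's own network. All names and structures are invented.
from collections import Counter

def best_channel(indications, my_network):
    """indications: list of (viewer, channel) pairs currently active."""
    votes = Counter(
        channel for viewer, channel in indications if viewer in my_network
    )
    return votes.most_common(1)[0][0] if votes else None

indications = [("ana", "news"), ("bob", "sports"), ("ana", "sports"),
               ("eve", "movies"), ("bob", "news"), ("cao", "sports")]
my_network = {"ana", "bob", "cao"}
print(best_channel(indications, my_network))  # "sports" (3 indications)
```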
Abstract:
Service-oriented architectures (SOA) based on Simple Object Access Protocol (SOAP) Web services have attracted the attention of enterprises, mainly for business-to-business integration and for creating composite applications that execute business processes. An existing problem is the lack of attention to non-technical users: to create a composite application that fulfills their needs, they must work with IT staff. To overcome this issue, enterprises can take advantage of Web 2.0, introducing technologies such as mashups and concepts such as user empowerment, collaborative work, and collective intelligence into the development stage. Some results [3] [13] have shown how Web 2.0 concepts can help non-technical users produce relatively complex business processes. However, traditional enterprise requirements go beyond typical Web 2.0 solutions in several respects: (1) traditional enterprise systems are based on a heterogeneous stack of technologies that are not directly exploitable from a web-based client (where SOAP web services play an important role); (2) web browsers impose cross-domain security constraints that make it difficult to integrate services from diverse domains. In this paper, a contribution to two Web 2.0 research projects [14] [15] partially solves the problems described: it provides a way to invoke cross-domain backend services (based on SOAP technologies) directly, using only client-side languages, without the need for any adaptation layer. © 2010 ACM.
Abstract:
The focus of this thesis is on recommender systems and their characteristics. The use of these mechanisms is increasingly widespread on the web, with a parallel development of ever more accurate and efficient solutions. Among all existing approaches, we chose to examine the one implemented in Apache Mahout. This open-source library implements collaborative filtering, basing the recommendation process on the preferences users express for different items. Using Apache Mahout and the basic principles of the various types of recommendation, it was possible to build a web application that produces recommendations in the domain of scientific publications, selecting the articles most similar to those published by the current user. This project led to the definition of a hybrid system: Apache Mahout's approach to recommendation is not fully adaptable to this setting, so its components were extended and tailored to the case study. The aim was thus to combine collaborative filtering and content-based filtering in a single approach. From Apache Mahout we kept the algorithm used to examine the data set, completely leaving aside user preferences, since users do not rate the articles. From content-based filtering we used the idea of comparing publication titles. The evaluation of this application revealed several limitations, but also possible future developments that could improve the quality of the recommendations and, above all, performance. With Apache Hadoop, for example, distributed computation would make it possible to process thousands of records with more than acceptable results.
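The content-based half of the hybrid, comparing publication titles, can be sketched with TF-IDF cosine similarity; this uses scikit-learn as a stand-in for the extended Mahout components described above, and the titles are invented:

```python
# Minimal sketch: rank candidate papers by title similarity to the current
# user's publications, using TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

user_titles = ["Collaborative filtering for scientific articles"]
candidates = [
    "A survey of collaborative filtering techniques",
    "Deep learning for image segmentation",
    "Content-based recommendation of scientific publications",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(user_titles + candidates)
scores = cosine_similarity(matrix[:1], matrix[1:]).ravel()
for title, score in sorted(zip(candidates, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {title}")
```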
Abstract:
Background: Young children are known to be the most frequent hospital users compared to older children and young adults. They are therefore an important population from the economic and policy perspectives of health care delivery. In Switzerland, complete hospitalization discharge records for children [<5 years] from four consecutive years [2002–2005] were evaluated in order to analyze variation in patterns of hospital use. Methods: Stationary and outpatient hospitalization rates at the aggregated ZIP code level were calculated based on census data provided by the Swiss federal statistical office (BfS). Thirty-seven hospital service areas for children [HSAP] were created with the method of "small area analysis", reflecting user-based health markets. Descriptive statistics and general linear models were applied to analyze the data. Results: The mean stationary hospitalization rate over four years was 66.1 discharges per 1000 children. Hospitalizations for respiratory problems are most common in young children (25.9%), and the highest hospitalization rates are associated with geographical factors of urban areas and specific language regions. Statistical models yielded significant effect estimates for these factors and a significant association between ambulatory/outpatient and stationary hospitalization rates. Conclusion: The utilization-based approach, using HSAP as a spatial representation of user-based health markets, is a valid instrument and allows assessing the supply and demand of children's health care services. The study provides, for the first time, estimates for several factors associated with the large variation in the utilization and provision of paediatric health care resources in Switzerland.
Abstract:
This paper reports the results of a research project comparing two virtual collaborative environments, one with first-person visual immersion (first-perspective interaction) and one in which the user interacts through a sound-kinetic virtual representation of himself (avatar), as stress-coping environments in real-life situations. Recent developments in coping research propose a shift from a trait-oriented approach to coping to a more situation-specific treatment. We defined a real-life situation as a target-oriented situation that demands a complex coping skills inventory of high self-efficacy and internal or external "locus of control" strategies. The participants were 90 normal adults with healthy or impaired coping skills, 25-40 years of age, randomly spread across two groups, with the same number of participants in each group and gender balance within groups. Both groups went through two phases. In Phase I (Solo), each participant was assessed individually using a three-stage assessment inspired by the transactional stress theory of Lazarus and the stress inoculation theory of Meichenbaum: a coping skills measurement over the time course of various hypothetical stressful encounters, performed under two conditions and a control condition. In Condition A, the participant was given a virtual stress assessment scenario from a first-person perspective (VRFP). In Condition B, the participant was given a virtual stress assessment scenario with a behaviorally realistic, motion-controlled avatar with sonic feedback (VRSA). In Condition C, the No Treatment Condition (NTC), the participant received just an interview. In Phase II, all three groups were mixed and performed the same tasks, but with two participants working in pairs. The results showed that the VRSA group performed notably better in terms of cognitive appraisals, emotions, and attributions than the other two groups in Phase I (VRSA, 92%; VRFP, 85%; NTC, 34%). In Phase II, the difference again favored the VRSA group over the other two. These results indicate that a virtual collaborative environment seems to be a consistent coping environment, tapping two classes of stress: (a) aversive or ambiguous situations, and (b) loss or failure situations, in relation to stress inoculation theory. In terms of coping behaviors, a distinction is made between self-directed and environment-directed strategies. A great advantage of the virtual collaborative environment with the behaviorally enhanced sound-kinetic avatar is the consideration of team coping intentions at different stages. Even if the aim is to tap transactional processes in real-life situations, it may be better to conduct research using a sound-kinetic avatar-based collaborative environment than a virtual first-person perspective scenario alone. The VE consisted of two dual-processor PC systems, a video splitter, a digital camera, and two stereoscopic CRT displays. The system was programmed in C++ with the VRScape Immersive Cluster from VRCO, creating an artificial environment that encodes the user's motion from a video camera targeted at the user's face and from physiological sensors attached to the body.
Abstract:
In this paper we introduce the idea of using a reliability measure associated with the predictions made by recommender systems based on collaborative filtering. This reliability measure is based on the usual notion that the more reliable a prediction, the less liable it is to be wrong. Here we define a general reliability measure suitable for any arbitrary recommender system. We also show a method for obtaining specific reliability measures specially fitted to the needs of different specific recommender systems.
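One concrete instance of such a reliability measure, not the paper's general formulation, could combine the number of contributing neighbors with their agreement:

```python
# Minimal sketch: reliability of a k-NN collaborative filtering prediction
# grows with the number of neighbors and falls with their disagreement.
# The combination rule below is an illustrative assumption.
import statistics

def predict_with_reliability(neighbor_ratings):
    """neighbor_ratings: ratings (1-5) that similar users gave the item."""
    if not neighbor_ratings:
        return None, 0.0
    prediction = statistics.mean(neighbor_ratings)
    spread = statistics.pstdev(neighbor_ratings)   # 0 = full agreement
    support = len(neighbor_ratings) / (len(neighbor_ratings) + 5)
    agreement = 1.0 / (1.0 + spread)
    return prediction, support * agreement         # reliability in (0, 1)

print(predict_with_reliability([4, 4, 5, 4]))   # high reliability
print(predict_with_reliability([1, 5]))         # low reliability
```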
Abstract:
In this paper we provide a method that allows the visualization of the similarity relationships present between items of collaborative filtering recommender systems, as well as the relative importance of each of these. The objective is to offer visual representations of the recommender system's set of items and of their relationships; these graphs show us where the most representative information can be found and which items are rated in a more similar way by the recommender system's community of users. The visual representations achieved take the shape of phylogenetic trees, displaying the numerical similarity and the reliability between each pair of items considered to be similar. As a case study we provide the results obtained using the public database Movielens 1M, which contains 3900 movies.
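A related, simpler visualization can be sketched with SciPy: clustering items by rating similarity and reading off the resulting tree. The paper's phylogenetic trees additionally annotate edges with similarity and reliability, which this toy dendrogram omits:

```python
# Minimal sketch: item-item cosine distances from a toy user-item rating
# matrix, agglomeratively clustered into a tree-shaped layout.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist

# rows = users, columns = items (0 = unrated, treated as 0 for simplicity)
ratings = np.array([[5, 4, 0, 1],
                    [4, 5, 1, 0],
                    [1, 0, 5, 4],
                    [0, 1, 4, 5]], dtype=float)

item_dist = pdist(ratings.T, metric="cosine")   # distances between items
tree = linkage(item_dist, method="average")
info = dendrogram(tree, no_plot=True, labels=["A", "B", "C", "D"])
print(info["ivl"])  # leaf order groups similarly rated items together
```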
Abstract:
The importance of recommender systems has grown exponentially with the rise of social networks. In this PhD thesis I provide a broad view of the state of the art of recommender systems. They were initially based on demographic, content-based, or collaborative filtering. Currently, these systems incorporate some social information into the recommendation process; in the future they will use implicit, local, and personal information from the Internet of Things. Recommender systems based on collaborative filtering can be modified to make recommendations to groups of users. Previous works have introduced such modifications at different stages of the collaborative filtering algorithm: establishing the neighborhood, the prediction phase, and the determination of recommended items. In this thesis I provide a new method that carries out the unification process (merging several users into one group) in the first stage of the collaborative filtering algorithm: the computation of the similarity metric. I provide a full formalization of the proposed method, explain how to obtain the k nearest neighbors of the group of users, and show how to obtain recommendations using those neighbors. I also include a running example of a recommender system with 8 users and 10 items, detailing every step of the proposed method. The main highlights of the proposed method are: (a) it is faster (more efficient) than the alternatives provided by other authors, and (b) it is at least as precise and accurate as other studied solutions. To test this hypothesis I conduct several experiments measuring the accuracy, precision, and performance of the method, and compare the results with those generated by other group recommendation approaches. The experiments are carried out using the MovieLens and Netflix datasets.
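A minimal sketch of the unification idea, merging the group into a "virtual user" before the similarity step, might look as follows; the averaging rule and cosine similarity are simple illustrative choices, whereas the thesis performs the unification inside the similarity metric itself:

```python
# Minimal sketch: group-to-one unification, then k-NN search and
# recommendation for the group. Toy data; 0 means "unrated".
import numpy as np

ratings = np.array([[5, 3, 0, 4, 0],     # rows = users, cols = items
                    [4, 0, 0, 4, 2],
                    [1, 1, 5, 0, 4],
                    [0, 2, 4, 1, 5],
                    [5, 4, 0, 5, 1]], dtype=float)
group = [0, 1]                            # recommend to users 0 and 1

# 1) Unify the group into one pseudo-rating vector (mean of known ratings)
mask = ratings[group] > 0
virtual = ratings[group].sum(0) / np.maximum(mask.sum(0), 1)

# 2) k nearest neighbors of the virtual user by cosine similarity
others = [u for u in range(len(ratings)) if u not in group]
def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
neighbors = sorted(others, key=lambda u: -cos(virtual, ratings[u]))[:2]

# 3) Score items unseen by the whole group via the neighbors' mean rating
#    (unrated neighbors count as 0 in this toy example)
unseen = np.where(~mask.any(0))[0]
scores = {int(i): ratings[neighbors, i].mean() for i in unseen}
print(neighbors, scores)
```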