866 results for Dipl.-Wi.-Ing. Guido Gravenkötter
Abstract:
In people-to-people matching systems, filtering is widely applied to find the most suitable matches. When the search is generic the results returned are too many, and when it is specific they are too few, so a more sophisticated recommendation approach becomes necessary. Traditionally, the object of recommendation is an inanimate item. In online dating systems, reciprocal recommendation is required: a partner should be suggested only when both the user and the recommended candidate are satisfied. In this paper, an innovative reciprocal collaborative method is developed based on the idea of similarity and common neighbors, utilizing relevance feedback and feature importance information. Extensive experiments are carried out using data gathered from a real online dating service. Compared to benchmark methods, our results show the proposed method achieves noticeably better performance.
Abstract:
A new community and communication type of social network - online dating - is gaining momentum. With many people joining the dating network, users become overwhelmed by the choices for an ideal partner. A solution to this problem is to provide users with partner recommendations based on their interests and activities. Traditional recommendation methods ignore users' differing needs and provide recommendations in the same way to all users. In this paper, we propose a recommendation approach that applies different recommendation strategies to different groups of members. A segmentation method using the Gaussian Mixture Model (GMM) is proposed to capture users' needs, and a targeted recommendation strategy is then applied to each identified segment. Empirical results show that the proposed approach outperforms several existing recommendation methods.
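The segmentation step can be sketched for the one-dimensional case. This is not the paper's fitted model; the component parameters below are illustrative, and the sketch only shows how a fitted GMM assigns a user to the segment with the highest posterior responsibility.

```python
import math

def gaussian_pdf(x: float, mu: float, sigma: float) -> float:
    """Density of a univariate normal distribution at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def assign_segment(x: float, components) -> int:
    """components: list of (weight, mu, sigma) tuples for a fitted GMM.

    Returns the index of the component with the largest (unnormalised)
    posterior responsibility weight * pdf.
    """
    posts = [w * gaussian_pdf(x, mu, s) for (w, mu, s) in components]
    return max(range(len(posts)), key=lambda i: posts[i])
```

A user whose activity score lies near a component's mean is assigned to that component's segment, after which a segment-specific recommendation strategy would apply.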
Abstract:
The rapid development of the World Wide Web has created massive amounts of information, leading to the information overload problem. Under these circumstances, personalization techniques have emerged to help users find content that meets their personal interests or needs out of the ever-increasing volume of information. User profiling techniques play the core role in this research. Traditionally, most user profiling techniques create user representations in a static way; however, in real-world applications user interests change over time. In this research we develop algorithms for mining user interests by integrating time decay mechanisms into topic-based user interest profiling. Time forgetting functions are integrated into the calculation of topic interest measurements at a fine-grained level. The experimental study shows that accounting for the temporal effects of user interests through time forgetting mechanisms yields better recommendation performance.
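A time forgetting function of the kind described can be sketched as exponential decay. The half-life parameter here is an assumption for illustration, not a value from the paper: older observations of interest in a topic contribute less to the profile weight.

```python
def decayed_weight(raw_weight: float, age_days: float, half_life_days: float = 30.0) -> float:
    """Exponentially discount an observation by its age (assumed half-life)."""
    return raw_weight * 0.5 ** (age_days / half_life_days)

def topic_interest(observations) -> float:
    """observations: list of (raw_weight, age_days) pairs for one topic.

    The topic's interest measurement is the decay-weighted sum, so recent
    activity dominates the profile.
    """
    return sum(decayed_weight(w, a) for w, a in observations)
```

With a 30-day half-life, an observation from 30 days ago contributes half as much as one from today.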
Abstract:
Most recommender systems use collaborative filtering, content-based filtering or a hybrid approach to recommend items to new users. Collaborative filtering recommends items to new users based on their similar neighbours, while content-based filtering tries to recommend items that are similar to new users' profiles. The fundamental issues include how to profile new users and how to deal with over-specialization in content-based recommender systems. Since the terms used to describe items can be organised as a concept hierarchy, we aim to describe user profiles or information needs with concept vectors. This paper presents a new method for acquiring user information needs, which allows new users to describe their preferences on a concept hierarchy rather than by rating items. It also develops a new ranking function to recommend items to new users based on their information needs. The proposed approach is evaluated on Amazon book datasets. The experimental results demonstrate that the proposed approach can largely improve the effectiveness of recommender systems.
Abstract:
Different reputation models are used on the web to generate reputation values for products from users' review data. Most current reputation models use review ratings and neglect users' textual reviews, because text is more difficult to process. However, we argue that an overall reputation score for an item does not reflect the actual reputation of all of its features, which is why using users' textual reviews is necessary. In this work we introduce a new reputation model that defines a new method for aggregating users' opinions about product features extracted from review text. Our model uses a feature ontology to define the general features and sub-features of a product, and it also reflects the frequencies of positive and negative opinions. We provide a case study showing how our results compare with those of other reputation models.
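Feature-level aggregation of this kind can be sketched as follows. The weighting scheme here (mentions-weighted average of per-feature positive fractions) is an assumed simplification, not the paper's exact model, but it shows how opinion frequencies rather than star ratings drive the score.

```python
def feature_reputation(pos: int, neg: int) -> float:
    """Fraction of positive opinion mentions; neutral 0.5 with no evidence."""
    total = pos + neg
    return pos / total if total else 0.5

def product_reputation(feature_counts) -> float:
    """feature_counts: {feature_name: (pos_mentions, neg_mentions)}.

    Aggregates per-feature scores, weighting each feature by how often
    reviewers mention it, so frequently discussed features dominate.
    """
    total_mentions = sum(p + n for p, n in feature_counts.values())
    if not total_mentions:
        return 0.5
    return sum((p + n) * feature_reputation(p, n)
               for p, n in feature_counts.values()) / total_mentions
```

A product praised for its battery but criticised for its screen ends up with a middling overall score, while the per-feature scores retain the distinction.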
Abstract:
Textual document sets have become an important and rapidly growing information source on the web. Text classification is one of the crucial technologies for information organisation and management, and it has attracted wide attention from researchers in different fields. This paper first introduces the main feature selection methods, implementation algorithms and applications of text classification. However, because the knowledge extracted by current data-mining techniques for text classification contains much noise, considerable uncertainty arises in the classification process, from both knowledge extraction and knowledge usage; more innovative techniques and methods are therefore needed to improve performance. Further improving knowledge extraction and the effective utilisation of the extracted knowledge remains a critical and challenging step. A Rough Set decision-making approach is proposed that uses Rough Set decision techniques to more precisely classify textual documents that classic text classification methods find difficult to separate. The purpose of this paper is to give an overview of existing text classification technologies; to demonstrate Rough Set concepts and the decision-making approach based on Rough Set theory for building a more reliable and effective text classification framework with higher precision; to set up an innovative evaluation metric, named CEI, which is effective for performance assessment in similar research; and to propose a promising research direction for addressing the challenging problems in text classification, text mining and related fields.
Abstract:
This paper describes a new method of indexing and searching large binary signature collections to efficiently find similar signatures, addressing the scalability problem in signature search. Signatures offer efficient computation with acceptable measure of similarity in numerous applications. However, performing a complete search with a given search argument (a signature) requires a Hamming distance calculation against every signature in the collection. This quickly becomes excessive when dealing with large collections, presenting issues of scalability that limit their applicability. Our method efficiently finds similar signatures in very large collections, trading memory use and precision for greatly improved search speed. Experimental results demonstrate that our approach is capable of finding a set of nearest signatures to a given search argument with a high degree of speed and fidelity.
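The brute-force baseline the paper improves upon can be sketched directly: Hamming distance between integer-packed signatures via XOR and popcount, applied in a linear scan. The paper's contribution is an index that avoids this full scan by trading memory and precision for speed; that structure is not reproduced here.

```python
def hamming(a: int, b: int) -> int:
    """Hamming distance between two integer-packed binary signatures."""
    return bin(a ^ b).count("1")

def nearest(query: int, collection):
    """Linear scan over the collection: the scheme the paper's index
    replaces. Returns (signature, distance) of the closest match.
    """
    return min(((s, hamming(query, s)) for s in collection),
               key=lambda t: t[1])
```

Even at a few nanoseconds per comparison, this scan grows linearly with collection size, which is exactly the scalability problem the abstract describes.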
Abstract:
How influential is the Australian Document Computing Symposium (ADCS)? What do ADCS articles speak about and who cites them? Who is the ADCS community and how has it evolved? This paper considers eighteen years of ADCS, investigating both the conference and its community. A content analysis of the proceedings uncovers the diversity of topics covered in ADCS and how these have changed over the years. Citation analysis reveals the impact of the papers. The number of authors and where they originate from reveal who has contributed to the conference. Finally, we generate co-author networks which reveal the collaborations within the community. These networks show how clusters of researchers form, the effect geographic location has on collaboration, and how these have evolved over time.
Abstract:
Aims This paper is a report on the effectiveness of a self-management programme based on the self-efficacy construct, in older people with heart failure. Background Heart failure is a major health problem worldwide, with high mortality and morbidity, making it a leading cause of hospitalization. Heart failure is associated with a complex set of symptoms that arise from problems in fluid and sodium retention. Hence, managing salt and fluid intake is important and can be enhanced by improving patients' self-efficacy in changing their behaviour. Design Randomized controlled trial. Methods Heart failure patients attending cardiac clinics in northern Taiwan from October 2006–May 2007 were randomly assigned to two groups: control (n = 46) and intervention (n = 47). The intervention group received a 12-week self-management programme that emphasized self-monitoring of salt/fluid intake and heart failure-related symptoms. Data were collected at baseline as well as 4 and 12 weeks later. Data analysis to test the hypotheses used repeated-measures ANOVA models. Results Participants who received the intervention programme had significantly better self-efficacy for salt and fluid control and self-management behaviour, and significantly fewer heart failure-related symptoms, than participants in the control group. However, the two groups did not differ significantly in health service use. Conclusion The self-management programme improved self-efficacy for salt and fluid control, improved self-management behaviours, and decreased heart failure-related symptoms in older Taiwanese outpatients with heart failure. Nursing interventions to improve health-related outcomes for patients with heart failure should emphasize self-efficacy in the self-management of their disease.
Abstract:
The bed nucleus of the stria terminalis (BNST) is believed to be a critical relay between the central nucleus of the amygdala (CE) and the paraventricular nucleus of the hypothalamus in the control of hypothalamic–pituitary–adrenal (HPA) responses elicited by conditioned fear stimuli. If correct, lesions of CE or BNST should block expression of HPA responses elicited by either a specific conditioned fear cue or a conditioned context. To test this, rats were subjected to cued (tone) or contextual classical fear conditioning. Two days later, electrolytic or sham lesions were placed in CE or BNST. After 5 days, the rats were tested for both behavioral (freezing) and neuroendocrine (corticosterone) responses to tone or contextual cues. CE lesions attenuated conditioned freezing and corticosterone responses to both tone and context. In contrast, BNST lesions attenuated these responses to contextual but not tone stimuli. These results suggest CE is indeed an essential output of the amygdala for the expression of conditioned fear responses, including HPA responses, regardless of the nature of the conditioned stimulus. However, because lesions of BNST only affected behavioral and endocrine responses to contextual stimuli, the results do not support the notion that BNST is critical for HPA responses elicited by conditioned fear stimuli in general. Instead, the BNST may be essential specifically for contextual conditioned fear responses, including both behavioral and HPA responses, by virtue of its connections with the hippocampus, a structure essential to contextual conditioning. The results are also not consistent with the hypothesis that BNST is only involved in unconditioned aspects of fear and anxiety.
Abstract:
Learning and memory depend on signaling molecules that affect synaptic efficacy. The cytoskeleton has been implicated in regulating synaptic transmission but its role in learning and memory is poorly understood. Fear learning depends on plasticity in the lateral nucleus of the amygdala. We therefore examined whether the cytoskeletal-regulatory protein, myosin light chain kinase, might contribute to fear learning in the rat lateral amygdala. Microinjection of ML-7, a specific inhibitor of myosin light chain kinase, into the lateral nucleus of the amygdala before fear conditioning, but not immediately afterward, enhanced both short-term memory and long-term memory, suggesting that myosin light chain kinase is involved specifically in memory acquisition rather than in posttraining consolidation of memory. Myosin light chain kinase inhibitor had no effect on memory retrieval. Furthermore, ML-7 had no effect on behavior when the training stimuli were presented in a non-associative manner. Anatomical studies showed that myosin light chain kinase is present in cells throughout lateral nucleus of the amygdala and is localized to dendritic shafts and spines that are postsynaptic to the projections from the auditory thalamus to lateral nucleus of the amygdala, a pathway specifically implicated in fear learning. Inhibition of myosin light chain kinase enhanced long-term potentiation, a physiological model of learning, in the auditory thalamic pathway to the lateral nucleus of the amygdala. When ML-7 was applied without associative tetanic stimulation it had no effect on synaptic responses in lateral nucleus of the amygdala. Thus, myosin light chain kinase activity in lateral nucleus of the amygdala appears to normally suppress synaptic plasticity in the circuits underlying fear learning, suggesting that myosin light chain kinase may help prevent the acquisition of irrelevant fears. Impairment of this mechanism could contribute to pathological fear learning.
Abstract:
The terrorist attacks in the United States on September 11, 2001 appeared to be a harbinger of increased terrorism and violence in the 21st century, bringing terrorism and political violence to the forefront of public discussion. Questions about these events abound, and “Estimating the Historical and Future Probabilities of Large Scale Terrorist Event” [Clauset and Woodard (2013)] asks specifically, “how rare are large scale terrorist events?” and, in general, encourages discussion on the role of quantitative methods in terrorism research and in policy and decision-making. Answering the primary question raises two challenges. The first is identifying terrorist events. The second is finding a simple yet robust model for rare events that has good explanatory and predictive capabilities. The challenge of identifying terrorist events is acknowledged and addressed by reviewing and using data from two well-known and reputable sources: the Memorial Institute for the Prevention of Terrorism-RAND database (MIPT-RAND) [Memorial Institute for the Prevention of Terrorism] and the Global Terrorism Database (GTD) [National Consortium for the Study of Terrorism and Responses to Terrorism (START) (2012), LaFree and Dugan (2007)]. Clauset and Woodard (2013) provide a detailed discussion of the limitations of the data and the models used, in the context of the larger issues surrounding terrorism and policy.
Abstract:
A known limitation of the Probability Ranking Principle (PRP) is that it does not cater for dependence between documents. Recently, the Quantum Probability Ranking Principle (QPRP) has been proposed, which implicitly captures dependencies between documents through “quantum interference”. This paper explores whether this new ranking principle leads to improved performance for subtopic retrieval, where novelty and diversity are required. In a thorough empirical investigation, models based on the PRP, as well as other recently proposed ranking strategies for subtopic retrieval (i.e. Maximal Marginal Relevance (MMR) and Portfolio Theory (PT)), are compared against the QPRP. On the given task, it is shown that the QPRP outperforms these other ranking strategies. Unlike MMR and PT, the QPRP requires no parameter estimation or tuning, making it both simple and effective. This research demonstrates that the application of quantum theory to problems within information retrieval can lead to significant improvements.
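A greedy QPRP-style re-ranking can be sketched as follows. The interference term here is an assumption for illustration only: it is approximated as -2·sqrt(p_d·p_s)·sim(d, s), one common instantiation; the papers themselves evaluate several approximations, and the exact form used is not reproduced here.

```python
import math

def qprp_rerank(docs, rel, sim, k):
    """Greedy QPRP-style selection (sketch, assumed interference form).

    docs: list of document ids
    rel:  dict mapping id -> estimated relevance probability
    sim:  function (id, id) -> similarity in [0, 1]
    k:    number of documents to select

    At each step, pick the document whose relevance minus accumulated
    interference with already-selected documents is largest, so highly
    similar documents penalise each other and diversity emerges.
    """
    selected = []
    remaining = list(docs)
    while remaining and len(selected) < k:
        best = max(
            remaining,
            key=lambda d: rel[d] - sum(
                2 * math.sqrt(rel[d] * rel[s]) * sim(d, s) for s in selected
            ),
        )
        selected.append(best)
        remaining.remove(best)
    return selected
```

With two near-duplicate relevant documents and one dissimilar one, the dissimilar document is promoted above the second duplicate, which is the diversity behaviour subtopic retrieval requires.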
Abstract:
The Quantum Probability Ranking Principle (QPRP) has been proposed recently, and accounts for interdependent document relevance when ranking. However, to be instantiated, the QPRP requires a method to approximate the “interference” between two documents. In this poster, we empirically evaluate a number of different methods of approximation on two TREC test collections for subtopic retrieval. It is shown that these approximations can lead to significantly better retrieval performance over the state of the art.