5 results for class imbalance problems

in Aston University Research Archive


Relevance: 80.00%

Publisher:

Abstract:

In recent years, the rapid spread of smartphones has led to the increasing popularity of Location-Based Social Networks (LBSNs). Although a number of research studies and articles in the press have shown the dangers of exposing personal location data, the inherent nature of LBSNs encourages users to publish information about their current location (i.e., their check-ins). The same is true for the majority of the most popular social networking websites, which offer the possibility of associating the current location of users to their posts and photos. Moreover, some LBSNs, such as Foursquare, let users tag their friends in their check-ins, thus potentially releasing location information of individuals who have no control over the published data. This raises additional privacy concerns for the management of location information in LBSNs. In this paper we propose and evaluate a series of techniques for the identification of users from their check-in data. More specifically, we first present two strategies according to which users are characterized by the spatio-temporal trajectory emerging from their check-ins over time and by the frequency of their visits to specific locations, respectively. In addition to these approaches, we also propose a hybrid strategy that is able to exploit both types of information. It is worth noting that these techniques can be applied to a more general class of problems where locations and social links of individuals are available in a given dataset. We evaluate our techniques on three real-world LBSN datasets, demonstrating that a very small number of data points is sufficient to identify a user with a high degree of accuracy. For instance, we show that in some datasets we are able to classify more than 80% of the users correctly.
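The frequency-of-visit strategy can be sketched in a few lines. This is a hypothetical illustration, not the paper's exact method: the function names and the cosine-similarity matching rule are assumptions. Each user is profiled by the share of their check-ins at each location, and an anonymous sample of check-ins is attributed to the known user whose profile it most resembles.

```python
from collections import Counter
import math

def visit_profile(checkins):
    """Normalised frequency-of-visit vector: location -> share of check-ins.

    `checkins` is a list of (location, timestamp) pairs."""
    counts = Counter(loc for loc, _t in checkins)
    total = sum(counts.values())
    return {loc: c / total for loc, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse profile dictionaries."""
    dot = sum(p[k] * q.get(k, 0.0) for k in p)
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q) if norm_p and norm_q else 0.0

def identify(anonymous_checkins, known_profiles):
    """Attribute an anonymous check-in sample to the best-matching user."""
    target = visit_profile(anonymous_checkins)
    return max(known_profiles, key=lambda u: cosine(target, known_profiles[u]))
```

Even this crude matcher conveys the abstract's point: a handful of check-ins at characteristic locations is often enough to single a user out.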

Relevance: 40.00%

Publisher:

Abstract:

In this work the solution of a class of capital investment problems is considered within the framework of mathematical programming. On the basis of the net present value criterion, the problems in question are mainly characterized by the fact that the cost of capital is defined as a non-decreasing function of the investment requirements. Capital rationing and some cases of technological dependence are also included, this approach leading to zero-one non-linear programming problems, for which specifically designed solution procedures, supported by a general branch-and-bound development, are presented. In the context of both this development and the relevant mathematical properties of the previously mentioned zero-one programs, a generalized zero-one model is also discussed. Finally, a variant of the scheme, connected with the search sequencing of optimal solutions, is presented as an alternative with reduced storage requirements.
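As an illustration of the branch-and-bound machinery, here is a minimal sketch for the plain capital-rationing case: 0-1 project selection maximising total NPV under a single budget, with a fractional-relaxation upper bound for pruning. This is an assumption-laden simplification — the paper's investment-dependent cost of capital and technological dependences are omitted, and all names are hypothetical.

```python
def branch_and_bound(npv, cost, budget):
    """0-1 selection of projects maximising total NPV under a capital budget.

    Upper bound at each node: the fractional (LP-relaxation) knapsack value,
    obtained by taking remaining projects greedily by NPV/cost ratio."""
    n = len(npv)
    order = sorted(range(n), key=lambda i: npv[i] / cost[i], reverse=True)
    best = [0.0, []]                  # best NPV found so far, chosen indices

    def bound(k, value, cap):
        # Optimistic bound: fill the remaining budget fractionally.
        for i in order[k:]:
            if cost[i] <= cap:
                cap -= cost[i]
                value += npv[i]
            else:
                value += npv[i] * cap / cost[i]
                break
        return value

    def visit(k, value, cap, chosen):
        if value > best[0]:           # record incumbent
            best[0], best[1] = value, chosen[:]
        if k == len(order) or bound(k, value, cap) <= best[0]:
            return                    # leaf reached, or node fathomed
        i = order[k]
        if cost[i] <= cap:            # branch: include project i
            chosen.append(i)
            visit(k + 1, value + npv[i], cap - cost[i], chosen)
            chosen.pop()
        visit(k + 1, value, cap, chosen)   # branch: exclude project i

    visit(0, 0.0, budget, [])
    return best[0], sorted(best[1])
```

The zero-one *non-linear* programs in the paper would replace the additive objective with one in which the discount rate itself depends on total investment; the enumeration skeleton stays the same.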

Relevance: 30.00%

Publisher:

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem, as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
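A concrete example of a class with VC dimension 1 is the set of thresholds on the real line, h_a(x) = 1 iff x >= a: any single point can be shattered, but no pair can (the labelling "smaller point positive, larger point negative" is unrealisable). A consistent learner for this class fits in a few lines — a hypothetical illustration, not the paper's general VC-dimension-1 algorithm:

```python
def learn_threshold(samples):
    """Consistent learner for thresholds on the real line (VC dimension 1).

    `samples` is a list of (x, label) pairs with label in {0, 1}.
    Returns the smallest threshold a such that h_a(x) = [x >= a] is
    consistent with the sample."""
    positives = [x for x, y in samples if y == 1]
    negatives = [x for x, y in samples if y == 0]
    if not positives:
        return float("inf")           # hypothesis labelling everything 0
    a = min(positives)
    if negatives and max(negatives) >= a:
        raise ValueError("sample is not consistent with any threshold")
    return a
```

Any consistent learner like this one achieves the generic PAC upper bound; the abstract's point is that for VC dimension 1 a carefully chosen algorithm attains the *minimal* sample-size requirement, which generic bounds do not pin down.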

Relevance: 30.00%

Publisher:

Abstract:

Background - Modelling the interaction between potentially antigenic peptides and Major Histocompatibility Complex (MHC) molecules is a key step in identifying potential T-cell epitopes. For Class II MHC alleles, the binding groove is open at both ends, causing ambiguity in the positional alignment between the groove and peptide, as well as creating uncertainty as to what parts of the peptide interact with the MHC. Moreover, the antigenic peptides have variable lengths, making naive modelling methods difficult to apply. This paper introduces a kernel method that can handle variable-length peptides effectively by quantifying similarities between peptide sequences and integrating these into the kernel. Results - The kernel approach presented here shows increased prediction accuracy, with a significantly higher number of true positives and negatives, on multiple MHC class II alleles when tested on data sets from MHCPEP [1], MHCBN [2], and MHCBench [3]. Evaluation by cross-validation, when segregating binders and non-binders, produced an average of 0.824 AROC for the MHCBench data sets (up from 0.756), and an average of 0.96 AROC for multiple alleles of the MHCPEP database. Conclusion - The method improves performance over existing state-of-the-art methods of MHC class II peptide binding prediction by using a custom, knowledge-based representation of peptides. Similarity scores, in contrast to a fixed-length, pocket-specific representation of amino acids, provide a flexible and powerful way of modelling MHC binding, and can easily be applied to other dynamic sequence problems.
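One generic way to turn similarities between variable-length sequences into a kernel is the k-mer spectrum kernel. The sketch below is a stand-in for the paper's custom, knowledge-based similarity scores (which it does not reproduce); it only shows how alignment-free sequence similarities yield a Gram matrix usable by any kernel classifier:

```python
from collections import Counter

def kmer_spectrum(seq, k=3):
    """Counts of all overlapping k-mers in a peptide sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def spectrum_kernel(s, t, k=3):
    """Inner product of k-mer count vectors.

    Handles peptides of different lengths with no positional alignment,
    which is exactly the difficulty posed by the open Class II groove."""
    a, b = kmer_spectrum(s, k), kmer_spectrum(t, k)
    return sum(a[m] * b[m] for m in a)

def gram_matrix(seqs, k=3):
    """Kernel (Gram) matrix ready for an SVM or other kernel method."""
    return [[spectrum_kernel(s, t, k) for t in seqs] for s in seqs]
```

The paper's approach replaces the raw k-mer match count with knowledge-based similarity scores between amino acids, but the pipeline — pairwise sequence similarity in, Gram matrix out — is the same.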

Relevance: 30.00%

Publisher:

Abstract:

We describe a parallel multi-threaded approach for high-performance modelling of a wide class of phenomena in ultrafast nonlinear optics. A specific implementation has been developed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.
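The numerical workhorse for this class of problems is typically the split-step Fourier method. The CPU sketch below is an assumption — the abstract does not name the scheme, and the GPU kernels themselves are not shown — but it illustrates the per-step structure (FFT, pointwise multiply, inverse FFT, pointwise nonlinearity) that maps naturally onto the massively parallel threads of a graphics processor:

```python
import numpy as np

def split_step_nlse(u0, dt, steps, dz, beta2=-1.0, gamma=1.0):
    """Symmetrised split-step Fourier integration of the nonlinear
    Schroedinger equation u_z = -i(beta2/2) u_tt + i gamma |u|^2 u,
    a standard model in ultrafast nonlinear optics.

    `u0` is the complex field envelope sampled on a grid of spacing `dt`."""
    n = len(u0)
    w = 2.0 * np.pi * np.fft.fftfreq(n, d=dt)          # angular frequencies
    half_linear = np.exp(0.5j * (beta2 / 2.0) * w**2 * dz)
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
        u *= np.exp(1j * gamma * np.abs(u)**2 * dz)    # full nonlinear step
        u = np.fft.ifft(half_linear * np.fft.fft(u))   # half linear step
    return u
```

Every operation here is either an FFT or an elementwise array update, which is why such solvers gain so much from GPU execution: each grid point can be handled by an independent thread.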