5 results for complexity regularization
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Studying preference relations within a separate framework therefore not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems to predict individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. To improve the performance of our methods, we introduce several non-linear kernel functions. The contribution of this thesis is thus twofold: kernel functions for structured data, which take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to parse ranking in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently on large amounts of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only efficient training of the algorithms but also fast regularization parameter selection, multiple-output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
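The core technique named in the abstract, regularized least-squares over preference relations, can be illustrated with a short sketch. The following is a minimal RankRLS-style pairwise ranker, not the thesis's actual implementation: the function names, the Gaussian kernel choice, and the complete-graph Laplacian are illustrative assumptions, and the closed-form system follows from substituting f = Ka into the pairwise squared-error objective.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between two sets of row vectors.
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def fit_pairwise_rls(X, y, lam=1.0, gamma=1.0):
    """Kernel regularized least squares over all pairwise score
    differences (a RankRLS-style objective; an illustrative sketch,
    not the thesis's exact algorithm).

    Minimizes  sum_{i,j} ((f(x_i) - f(x_j)) - (y_i - y_j))^2 + lam * ||f||_K^2.
    With f = K a, the pairwise sum reduces to the Laplacian
    L = n*I - 1 1^T of the complete preference graph, giving the
    linear system  (K L K + lam K) a = K L y.
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    L = n * np.eye(n) - np.ones((n, n))
    # Small ridge term added only for numerical stability of the solve.
    a = np.linalg.solve(K @ L @ K + lam * K + 1e-9 * np.eye(n), K @ L @ y)
    return a

def rank_scores(X_train, a, X_test, gamma=1.0):
    # Higher score means preferred; only the ordering of scores matters.
    return rbf_kernel(X_test, X_train, gamma) @ a
```

Sorting test points by rank_scores then yields the predicted ordering. The O(n^3) dense solve above is precisely the bottleneck that the linearly scaling and sparse variants mentioned in the abstract are designed to avoid.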
Abstract:
In wireless communications the transmitted signals may be corrupted by noise. The receiver must decode the received message, which can be modelled mathematically as a search for the lattice point closest to a given vector. This problem is NP-hard in general, but for communications applications there exist algorithms that, within a certain range of system parameters, offer polynomial expected complexity. The purpose of the thesis is to study the sphere decoding algorithm introduced in the article "On Maximum-Likelihood Detection and the Search for the Closest Lattice Point", published by M. O. Damen, H. El Gamal, and G. Caire in 2003. We concentrate especially on its computational complexity when used in space–time coding. Computer simulations are used to study how different system parameters affect the computational complexity of the algorithm, with the aim of finding ways to improve the algorithm from a complexity point of view. The main contribution of the thesis is the construction of two new modifications of the sphere decoding algorithm, which are shown to perform faster than the original algorithm within a range of system parameters.
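To make the closest-lattice-point search concrete, here is a minimal depth-first sphere decoder with Schnorr-Euchner-style enumeration. It is a generic sketch of the technique, not the exact formulation from the Damen/El Gamal/Caire paper or the thesis's modifications; the function name and the per-coordinate alphabet parameter are illustrative assumptions.

```python
import numpy as np

def sphere_decode(H, y, alphabet, radius=np.inf):
    """Solve min_x ||y - H x||^2 over x with entries from a finite
    alphabet, by depth-first search with radius pruning (a generic
    sphere decoder sketch)."""
    m = H.shape[1]
    Q, R = np.linalg.qr(H)           # ||y - Hx||^2 = ||Q^T y - R x||^2 + const
    z = Q.T @ y
    best = {"x": None, "r2": radius}

    def search(level, x, dist2):
        if dist2 >= best["r2"]:
            return                   # prune: partial path already leaves the sphere
        if level < 0:
            best["x"], best["r2"] = x.copy(), dist2   # shrink the sphere radius
            return
        # Residual of row `level`, given the symbols already fixed below it.
        acc = z[level] - R[level, level + 1:] @ x[level + 1:]
        # Schnorr-Euchner ordering: try the most promising symbols first.
        for s in sorted(alphabet, key=lambda s: abs(acc - R[level, level] * s)):
            x[level] = s
            search(level - 1, x, dist2 + (acc - R[level, level] * s) ** 2)

    search(m - 1, np.zeros(m), 0.0)
    return best["x"]
```

For example, sphere_decode(H, y, [-3.0, -1.0, 1.0, 3.0]) decodes the real-valued components of a 16-QAM constellation after the usual complex-to-real conversion. Tightening the initial radius and choosing the enumeration order are the typical levers for the kind of complexity improvements the thesis pursues.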
Abstract:
A company’s competence in managing its product portfolio complexity is becoming critically important in a rapidly changing business environment. The continuous evolution of customer needs, the competitive market environment, and internal product development lead to increasing complexity in product portfolios. Companies that manage this complexity in product development are more profitable in the long run. The complexity derives from product development and management processes in which the development of new product variants is not managed efficiently. Complexity is managed through modularization, a method that divides the product structure into modules. In modularization, it is essential to take into account the trade-off between the perceived customer value and the commonality of modules or components across the products. Another goal is to make product configuration more flexible. The benefits are achieved by optimizing the complexity of the module offering and deriving new product variants more flexibly and accurately. The developed modularization process includes steps for preparation, mapping the current situation, creating a modular strategy, and implementing that strategy. The organization and support systems also have to be adapted to follow up on targets and to execute modularization in practice.
Disturbing Whiteness: The Complexity of White Female Identity in Selected Works by Joyce Carol Oates
Abstract:
This thesis describes an approach to overcoming the complexity of software product management (SPM) and consists of several studies that investigate the activities and roles in product management, as well as issues related to the adoption of SPM. The thesis focuses on organizations that have started adopting SPM but have faced difficulties due to its complexity and fuzziness, and it suggests frameworks for overcoming these challenges using the principles of decomposition and iterative improvement. The research process consisted of three phases, each of which provided complementary results and empirical observations on the problem of overcoming the complexity of SPM. Overall, the product management processes and practices of 13 companies were studied and analysed, and additional data was collected through a worldwide survey. The collected data were analysed using grounded theory (GT) to identify possible ways to overcome the complexity of SPM; complementary research methods, such as elements of the Theory of Constraints, were used for deeper analysis. The results of the thesis indicate that decomposing SPM activities according to the specific characteristics of companies and roles is a useful approach for simplifying existing SPM frameworks. Companies would benefit from the results by adopting SPM activities more efficiently and effectively, and by spending fewer resources on adoption through concentrating on the most important SPM activities.