41 results for kernel method


Relevance:

20.00%

Publisher:

Abstract:

The aim of the thesis is to identify the central and most suitable customer portfolio models and customer matrices for defining customer relationships. The study focuses on valuing customer relationships and identifying key accounts in the case company. The most central and suitable customer portfolio models are taken into account in the evaluation of customers. The theoretical part of the thesis presents the best-known and most widely used customer portfolio models and matrices on the basis of the literature in the field. In addition, perspectives from the theories of relationship marketing, customer relationship management, and product portfolios are combined with the customer portfolio models. The main literature sources are from the fields of management and marketing. The empirical part of the thesis presents the case company and its current customer relationship management practice. Furthermore, improvement proposals are made for the case company's current customer relationship valuation method so that the calculation of customer relationship value corresponds as closely as possible to the company's current needs. A focus group interview is also used to determine the value of the customer relationships. The key accounts are identified, and the situation is illustrated by placing the key accounts in a customer portfolio.

Relevance:

20.00%

Publisher:

Abstract:

The aim of the study was to build a model for the case company for estimating short-term profitability. The research method is constructive, and the model was developed with the assistance of the company's accounting staff. The theoretical part reviews profitability, budgeting, and forecasting itself by means of a literature review, seeking methods that could be used in estimating short-term profitability. Based on the requirements set for the model, a judgmental method was chosen. According to the study, profitability is affected by the sales price and volume, production, raw material prices, and the change in inventory. The constructed model works reasonably well in the case company, and it is notable that the differences between different plants and different machines can be fairly large. These differences are mainly due to plant size and differences in the models. However, the practical usefulness of the model becomes most apparent when it is used by the accounting staff. Forecasting always involves its own problems, and even new methods do not necessarily eliminate them.

Relevance:

20.00%

Publisher:

Abstract:

The aim of the thesis is to examine which market entry method would best suit the case company for entering foreign markets. All common international market entry methods are presented, along with their advantages and disadvantages. After assessing the commissioning company's resources, expectations, and requirements, it is concluded that a cooperative market entry is the most suitable option. Subsequently, the most suitable partner company is selected from a pre-selected group of candidate companies, and this company's suitability for cooperation with the case company is tested. The suitability for cooperation between the companies is assessed by analyzing the companies through interviews and the theories presented in the thesis. The fit is found to be good, covering 71 percent of the analyzed items. Twenty-nine percent of the items are found to be ones in which the mutual understanding between the companies does not meet the commissioning company's minimum requirements. These items will be used as the basis for planning further negotiations to initiate the cooperation.

Relevance:

20.00%

Publisher:

Abstract:

Recent advances in machine learning increasingly enable the automatic construction of computer-assisted methods that would be difficult or laborious for human experts to program. The tasks for which such tools are needed arise in many areas, here especially in bioinformatics and natural language processing. Machine learning methods may not work satisfactorily unless they are appropriately tailored to the task in question; their learning performance can often be improved by exploiting deeper insight into the application domain or the learning problem at hand. This thesis develops kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in an advantageous way, together with computationally efficient algorithms for training the learning machines for specific tasks. In kernel-based learning, prior knowledge is often incorporated by designing appropriate kernel functions; another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words, and we show that using this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions, design a fast cross-validation algorithm for regularized least-squares learning, and propose an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks. In summary, we demonstrate that incorporating prior knowledge is both possible and beneficial, and that the novel kernels and cost functions can be used in efficient algorithms.
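As a rough illustration of the general approach described above, the following minimal Python sketch (not taken from the thesis) combines a word-similarity matrix with positional weighting into a context kernel and plugs it into a regularized least-squares learner. The exponential positional decay, the similarity matrix S, and all names and parameter values are assumptions made purely for illustration.

```python
# Illustrative sketch only: a context kernel with positional weighting
# used inside regularized least-squares (kernel ridge regression).
import numpy as np

def context_kernel(ctx_a, ctx_b, S, decay=0.5):
    """ctx_* are lists of (word_id, offset) pairs; offsets are positions
    relative to the ambiguous word. S is a word-similarity matrix."""
    value = 0.0
    for wa, pa in ctx_a:
        for wb, pb in ctx_b:
            w_pos = np.exp(-decay * abs(pa)) * np.exp(-decay * abs(pb))
            value += w_pos * S[wa, wb]
    return value

def train_rls(contexts, y, S, lam=1.0):
    """Regularized least-squares: solve (K + lam I) a = y."""
    n = len(contexts)
    K = np.array([[context_kernel(ci, cj, S) for cj in contexts]
                  for ci in contexts])
    a = np.linalg.solve(K + lam * np.eye(n), y)
    return a, K

# Toy usage: three words, a hand-made similarity matrix, binary sense labels.
S = np.array([[1.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 1.0]])
contexts = [[(0, -1), (1, 1)], [(2, -2), (1, 2)], [(0, 1), (2, -1)]]
y = np.array([1.0, -1.0, 1.0])
a, K = train_rls(contexts, y, S)
print(K @ a)  # fitted scores on the training contexts
```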

Relevance:

20.00%

Publisher:

Abstract:

Research on language equations has been active during the last decades. Compared to equations on words, equations on languages are much more difficult to solve: even very simple equations that are easy to solve for words can be very hard for languages. In this thesis we study two such equations, namely the commutation and conjugacy equations. We study these equations in some restricted special cases and compare some of the results to the solutions of the corresponding equations on words. For both equations we study the maximal solutions, the centralizer and the conjugator. We present a fixed point method that can be used to search for these maximal solutions and analyze the reasons why this method does not succeed for all languages. We also give several examples to illustrate the behaviour of this method.
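As a concrete toy illustration of the commutation equation XL = LX in the simplest case of finite languages (this is only a direct check, not the fixed point method of the thesis), concatenation of languages is the set of all pairwise concatenations of their words:

```python
# Toy check of the commutation equation XL = LX for finite languages.
def concat(A, B):
    """Concatenation of two finite languages given as sets of strings."""
    return {a + b for a in A for b in B}

def commutes(X, L):
    """Check whether XL = LX holds."""
    return concat(X, L) == concat(L, X)

# Any subset of the powers of a common root commutes with the language:
L = {"ab"}
X = {"", "ab", "abab"}          # X is a subset of (ab)*
print(commutes(X, L))           # True
print(commutes({"a", "b"}, L))  # False
```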

Relevance:

20.00%

Publisher:

Abstract:

Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves predicting an ordering of the data points rather than a single numerical value, as in regression, or a class label, as in classification. Therefore, studying preference relations within a separate framework not only facilitates a better theoretical understanding of the problem but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics, and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from vast amounts of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that take advantage of various non-vectorial data representations, and preference learning algorithms suitable for different tasks, namely efficient learning of preference relations, learning with large amounts of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in bioinformatics. Training kernel-based ranking algorithms can be infeasible when the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with large amounts of data. For situations in which only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
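The following is a minimal sketch of a regularized least-squares ranking learner in the spirit of the methods described above (often called RankRLS in the literature). It minimizes a pairwise squared ranking loss plus a kernel-norm penalty, which reduces to solving the linear system (L K + lambda I) a = L y, where L is the Laplacian of the preference graph. The Gaussian kernel, the complete preference graph, and all parameter values are illustrative assumptions, not the thesis implementation.

```python
# Sketch of a regularized least-squares ranking learner (RankRLS-style).
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_rankrls(X, y, lam=1.0, gamma=1.0):
    """Solve (L K + lam I) a = L y, where L = n I - 1 1^T is the Laplacian
    of the complete preference graph over all training pairs."""
    n = len(y)
    K = gaussian_kernel(X, X, gamma)
    L = n * np.eye(n) - np.ones((n, n))
    return np.linalg.solve(L @ K + lam * np.eye(n), L @ y)

def predict(a, X_train, X_test, gamma=1.0):
    return gaussian_kernel(X_test, X_train, gamma) @ a

# Toy usage: scores increase with the first feature, so the learned
# ranking should roughly preserve that ordering.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
y = X[:, 0] + 0.1 * rng.normal(size=20)
a = train_rankrls(X, y, lam=0.5)
print(np.argsort(predict(a, X, X)))  # predicted ranking of training points
```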

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, they are applied to gray level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3–10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner; the DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map in which the weights are not constant but the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, whereas their use in image compression is very rare; this thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are presented; the best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
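A minimal, unoptimized sketch of an iterated two-pass transform in the spirit of the DTOCS described above is given below. The local distance used here (|ΔG| + 1 between 8-neighbours) and the interface are assumptions based on the description, not the thesis code.

```python
# Sketch of an iterated two-pass DTOCS-like gray level distance transform.
import numpy as np

def dtocs(gray, inside, max_rounds=10):
    """gray:   2-D array of gray levels.
    inside: boolean mask of the region of calculation (distance 0 outside).
    Assumed local distance between 8-neighbours p, q: |gray[p]-gray[q]| + 1."""
    d = np.where(inside, np.inf, 0.0)
    h, w = gray.shape
    # Neighbours already visited in a top-left to bottom-right raster scan;
    # the backward pass mirrors them.
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for _ in range(max_rounds):
        changed = False
        for offsets, rows, cols in ((fwd, range(h), range(w)),
                                    (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in rows:
                for x in cols:
                    if not inside[y, x]:
                        continue
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cand = d[ny, nx] + abs(float(gray[y, x]) - float(gray[ny, nx])) + 1
                            if cand < d[y, x]:
                                d[y, x] = cand
                                changed = True
        if not changed:   # repeat the two passes until the map stabilizes
            break
    return d

# Toy usage: distances from one seed pixel over a ramp of gray levels.
gray = np.arange(16, dtype=float).reshape(4, 4)
inside = np.ones((4, 4), dtype=bool)
inside[0, 0] = False   # seed pixel with distance 0
print(dtocs(gray, inside))
```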

Relevance:

20.00%

Publisher:

Abstract:

Induction motors are widely used in industry, and they are generally considered very reliable. They often have a critical role in industrial processes, and their failure can lead to significant losses as a result of shutdown times. Typical failures of induction motors can be classified into stator, rotor, and bearing failures. One of the causes of bearing damage, and eventually bearing failure, is bearing currents. Bearing currents in induction motors can be divided into two main categories: classical bearing currents and inverter-induced bearing currents. Bearing damage caused by bearing currents results, for instance, from electrical discharges that take place through the lubricant film between the raceways of the inner and outer rings and the rolling elements of a bearing. This phenomenon can be considered similar to electrical discharge machining, where material is removed by a series of rapidly recurring electrical arcing discharges between an electrode and a workpiece. This thesis concentrates on bearing currents with special reference to bearing current detection in induction motors. A bearing current detection method based on radio frequency impulse reception and detection is studied. The thesis describes how a motor can work as a "spark gap" transmitter and discusses a discharge in a bearing as a source of radio frequency impulses. It is shown that a discharge occurring due to bearing currents can be detected at a distance of several meters from the motor. The issues of interference, detection, and location techniques are discussed. The applicability of the method is shown with a series of measurements on a specially constructed test motor and on an unmodified frequency-converter-driven motor. The radio frequency method studied provides a nonintrusive way to detect harmful bearing currents in the drive system. If bearing current mitigation techniques are applied, their effectiveness can be verified immediately with the proposed method. The method also gives a tool to estimate the harmfulness of the bearing currents by making it possible to detect and locate individual discharges inside the bearings of electric motors.

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a calibration method that can be utilized for the analysis of SEM images. The field of application of the developed method is the calculation of the surface potential distribution of a biased silicon edgeless detector. The suggested processing of the data collected by the SEM consists of several stages and takes into account different aspects affecting the SEM image. The calibration method is not intended to be highly precise, but it nevertheless gives the basic potential distribution when different biasing voltages are applied to the detector.

Relevance:

20.00%

Publisher:

Abstract:

This thesis was produced for the Technology Marketing unit at the Nokia Research Center. Technology marketing was a new function at the Nokia Research Center and needed an established framework capable of taking multiple aspects into account when measuring team performance. Technology marketing functions had existed in other parts of Nokia, yet no single method had been agreed upon for measuring their performance. The purpose of this study was to develop a performance measurement system for Nokia Research Center Technology Marketing. The target was that Nokia Research Center Technology Marketing would have a framework of separate metrics, including benchmarked starting levels and target values for future planning (the numeric values were kept confidential within the company). As a result of this research, the Balanced Scorecard model of Kaplan and Norton was chosen as the performance measurement system for Nokia Research Center Technology Marketing. This research selected the indicators that were utilized in the chosen performance measurement system. Furthermore, the performance measurement system was defined to guide the Head of Marketing in managing the Nokia Research Center Technology Marketing team. During the research process the team's mission, vision, strategy, and critical success factors were outlined.