10 results for Regression methods
in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland
Abstract:
Singular Value Decomposition (SVD), Principal Component Analysis (PCA) and Multiple Linear Regression (MLR) are some of the mathematical preliminaries that are discussed prior to explaining the Partial Least Squares (PLS) and Principal Component Regression (PCR) models. Both PLS and PCR are applied to real spectral data, and their differences and similarities are discussed in this thesis. The challenge lies in establishing the optimum number of components to be included in either of the models, but this has been overcome by using various diagnostic tools suggested in this thesis. Correspondence Analysis (CA) and PLS were also applied to ecological data. The idea of CA was to correlate the macrophyte species and the lakes. The differences between the PLS model for ecological data and the PLS model for spectral data are noted and explained in this thesis.
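A minimal sketch of the kind of comparison the abstract describes, assuming scikit-learn and synthetic data in place of the thesis's real spectra: PCR (PCA followed by linear regression) and PLS are fitted for a growing number of components, and cross-validated error is used as a diagnostic for choosing the component count. All variable names and data here are illustrative, not taken from the thesis.

```python
# Illustrative PCR vs. PLS comparison with cross-validated component selection.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))                       # stand-in for a spectral matrix
y = X[:, :5] @ rng.normal(size=5) + rng.normal(scale=0.1, size=60)

def cv_rmse(model):
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

for k in range(1, 11):
    pcr = make_pipeline(PCA(n_components=k), LinearRegression())
    pls = PLSRegression(n_components=k)
    print(f"{k:2d} components  PCR RMSE={cv_rmse(pcr):.3f}  PLS RMSE={cv_rmse(pls):.3f}")
```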
Abstract:
Using panel regressions, the market efficiency of the 3-month Euribor futures contract was studied, as well as whether a normal backwardation effect is present. The results were contradictory.
Abstract:
The fatigue life estimates of three different welded joints are analysed by means of multivariable regression analysis. The regression is based on an extensive S-N database collected from the literature. The joints considered are the butt joint, the cruciform joint and the longitudinal stiffener on a plate. The variables are the stress range, the thickness of the loaded plate and the loading mode. The thickness effect is re-analysed for all three joints; this re-analysis confirms the existence of the thickness effect before moving on to the multivariable regression. Linear fatigue life equations are derived for the three welded joints, taking into account the thickness of the loaded plate and the loading mode. The fatigue life equations are compared and discussed in the light of test results collected from the literature. Four case studies are carried out on the collected fatigue tests, and various fatigue life assessment methods are used to estimate the fatigue life. The results are examined and discussed in the light of the actual tests. The case studies cover a 2 mm and a 6 mm symmetric longitudinal stiffener on a plate, a 12.7 mm asymmetric longitudinal stiffener, a 38 mm symmetric longitudinal stiffener under bending loading, and a 25 mm/38 mm load-carrying cruciform joint under bending loading. The modelling is done as close to the test joint as possible. The fatigue life assessment methods include the hot-spot method, in which the hot-spot stress is calculated using two linear extrapolations and a nonlinear extrapolation as well as through-thickness integration. The notch stress and fracture mechanics methods are used in the calculations for the cruciform joint.
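A hedged sketch of a multivariable fatigue-life regression of the kind described above, not the thesis's actual fit: log fatigue life is regressed on log stress range, log plate thickness and a loading-mode indicator. The data points and resulting coefficients are invented purely for illustration.

```python
# Illustrative multivariable linear regression on synthetic S-N style data.
import numpy as np

# columns: stress range [MPa], loaded-plate thickness [mm], loading mode (0 = axial, 1 = bending)
data = np.array([
    [100., 12., 0], [150., 12., 0], [200., 25., 0],
    [100., 25., 1], [150., 38., 1], [200., 38., 1],
])
cycles = np.array([2.0e6, 6.0e5, 1.5e5, 3.0e6, 8.0e5, 2.5e5])   # invented lives

# regress log10(N) on log10(stress range), log10(thickness) and loading mode
X = np.column_stack([
    np.ones(len(data)),
    np.log10(data[:, 0]),
    np.log10(data[:, 1]),
    data[:, 2],
])
coef, *_ = np.linalg.lstsq(X, np.log10(cycles), rcond=None)
print("intercept, slope(log dS), slope(log t), loading-mode shift:", coef)
```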
Abstract:
Recent advances in machine learning methods increasingly enable the automatic construction of various types of computer-assisted methods that have been difficult or laborious to program by human experts. The tasks for which this kind of tool is needed arise in many areas, here especially in the fields of bioinformatics and natural language processing. Machine learning methods may not work satisfactorily if they are not appropriately tailored to the task in question. However, their learning performance can often be improved by taking advantage of deeper insight into the application domain or the learning problem at hand. This thesis considers developing kernel-based learning algorithms that incorporate this kind of prior knowledge of the task in question in an advantageous way. Moreover, computationally efficient algorithms for training the learning machines for specific tasks are presented. In the context of kernel-based learning methods, the incorporation of prior knowledge is often done by designing appropriate kernel functions. Another well-known way is to develop cost functions that fit the task under consideration. For disambiguation tasks in natural language, we develop kernel functions that take into account the positional information and the mutual similarities of words. It is shown that the use of this information significantly improves the disambiguation performance of the learning machine. Further, we design a new cost function that is better suited to the task of information retrieval and to more general ranking problems than the cost functions designed for regression and classification. We also consider other applications of the kernel-based learning algorithms, such as text categorization and pattern recognition in differential display. We develop computationally efficient algorithms for training the considered learning machines with the proposed kernel functions. We also design a fast cross-validation algorithm for regularized least-squares type learning algorithms. Further, an efficient version of the regularized least-squares algorithm that can be used together with the new cost function for preference learning and ranking tasks is proposed. In summary, we demonstrate that the incorporation of prior knowledge is possible and beneficial, and that novel advanced kernels and cost functions can be used in algorithms efficiently.
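As a hedged illustration of the regularized least-squares family and fast cross-validation mentioned above (not the thesis's own algorithm), the sketch below fits a kernel regularized least-squares model with a Gaussian kernel and computes leave-one-out residuals from the hat matrix instead of retraining, a standard shortcut for this model class. Data, kernel width and regularization value are invented.

```python
# Kernel regularized least-squares with a hat-matrix leave-one-out shortcut.
import numpy as np

def gaussian_kernel(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=50)

lam = 1.0
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients
H = K @ np.linalg.inv(K + lam * np.eye(len(X)))        # hat (smoother) matrix
f = K @ alpha                                          # in-sample predictions

# leave-one-out residuals without retraining: (y_i - f_i) / (1 - H_ii)
loo_residuals = (y - f) / (1.0 - np.diag(H))
print("LOO mean squared error:", np.mean(loo_residuals ** 2))
```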
Abstract:
Seaports play an important part in the wellbeing of a nation. Many nations are highly dependent on foreign trade, and most trade is carried by sea vessels. This study is part of a larger research project in which a simulation model is required in order to carry out further analyses of Finnish macro-logistical networks. The objective of this study is to create a system dynamics simulation model that gives an accurate forecast for the development of demand at Finnish seaports up to 2030. The emphasis of this study is on showing how a detailed system dynamics model of harbor demand can be created with the help of statistical methods. The forecasting methods used were ARIMA (autoregressive integrated moving average) and regression models. The resulting simulation model gives a forecast with confidence intervals and allows different scenarios to be studied. The building process was found to be useful, and the model can be expanded to be more detailed. Required capacity for other parts of the Finnish logistical system could easily be included in the model.
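A minimal sketch of the ARIMA part of such a forecast, assuming statsmodels and an invented annual cargo-volume series rather than the study's data: the model order and series are illustrative only, and the point is simply that the fitted model yields a multi-year forecast with confidence intervals.

```python
# Illustrative ARIMA forecast with confidence intervals on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
idx = pd.date_range("1990", periods=26, freq="YS")                 # annual series
volume = pd.Series(100 + np.cumsum(rng.normal(1.0, 2.0, len(idx))), index=idx)

model = ARIMA(volume, order=(1, 1, 1))     # order chosen only for illustration
result = model.fit()

forecast = result.get_forecast(steps=15)   # forecast toward 2030
print(forecast.predicted_mean.tail())
print(forecast.conf_int(alpha=0.05).tail())  # 95% confidence intervals
```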
Abstract:
Learning of preference relations has recently received significant attention in the machine learning community. It is closely related to classification and regression analysis and can be reduced to these tasks. However, preference learning involves prediction of an ordering of the data points rather than prediction of a single numerical value, as in the case of regression, or a class label, as in the case of classification. Therefore, studying preference relations within a separate framework not only facilitates better theoretical understanding of the problem, but also motivates the development of efficient algorithms for the task. Preference learning has many applications in domains such as information retrieval, bioinformatics and natural language processing. For example, algorithms that learn to rank are frequently used in search engines for ordering the documents retrieved by a query. Preference learning methods have also been applied to collaborative filtering problems for predicting individual customer choices from the vast amount of user-generated feedback. In this thesis we propose several algorithms for learning preference relations. These algorithms stem from the well-founded and robust class of regularized least-squares methods and have many attractive computational properties. In order to improve the performance of our methods, we introduce several non-linear kernel functions. Thus, the contribution of this thesis is twofold: kernel functions for structured data that are used to take advantage of various non-vectorial data representations, and preference learning algorithms that are suitable for different tasks, namely efficient learning of preference relations, learning with a large amount of training data, and semi-supervised preference learning. The proposed kernel-based algorithms and kernels are applied to the parse ranking task in natural language processing, document ranking in information retrieval, and remote homology detection in the bioinformatics domain. Training of kernel-based ranking algorithms can be infeasible when the size of the training set is large. This problem is addressed by proposing a preference learning algorithm whose computational complexity scales linearly with the number of training data points. We also introduce a sparse approximation of the algorithm that can be trained efficiently with a large amount of data. For situations where only a small amount of labeled data but a large amount of unlabeled data is available, we propose a co-regularized preference learning algorithm. To conclude, the methods presented in this thesis address not only the problem of efficient training of the algorithms but also fast regularization parameter selection, multiple output prediction, and cross-validation. Furthermore, the proposed algorithms lead to notably better performance in many of the preference learning tasks considered.
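A hedged sketch of the general idea, not the thesis's own algorithms: a linear scoring function is learned from pairwise preferences with a regularized least-squares objective, so that preferred items score roughly one unit higher than the items they beat. Features, preference pairs and the regularization value are all invented for illustration.

```python
# Pairwise preference learning with a regularized least-squares objective.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))                     # item feature vectors
true_w = np.array([1.0, -0.5, 0.0, 2.0, 0.0])
scores = X @ true_w

# preference pairs (a, b) meaning "item a is preferred to item b"
pairs = [(a, b) for a in range(30) for b in range(30) if scores[a] > scores[b] + 0.5]
D = np.array([X[a] - X[b] for a, b in pairs])    # pairwise difference vectors

lam = 1.0
w = np.linalg.solve(D.T @ D + lam * np.eye(5), D.T @ np.ones(len(D)))

# fraction of preference pairs ranked correctly by the learned scorer
print("pairs ranked correctly:", np.mean([(X[a] - X[b]) @ w > 0 for a, b in pairs]))
```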
Abstract:
Forest inventories are used to estimate forest characteristics and the condition of forests for many different applications: operational tree logging for the forest industry, forest health state estimation, carbon balance estimation, land-cover and land-use analysis in order to avoid forest degradation, etc. Recent inventory methods are strongly based on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand. All the data handling and parameter tuning should be objective and automated as much as possible. The methods also need to be robust when applied to different forest types. Since there generally are no extensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of the model, which is based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. To connect the auxiliary data to the inventory parameters being estimated, field work must be performed. In larger study areas with dense forests, field work is expensive and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The mathematical model parameter definition steps are automated, and the cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of new area characteristics.
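The thesis does not specify a particular selection algorithm here, so the sketch below only illustrates the general idea of automated variable selection over many possibly collinear remote-sensing features, using cross-validated lasso as one stand-in technique. The feature matrix and field-measured response are synthetic.

```python
# Illustrative automatic variable selection with cross-validated lasso.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 80))                  # e.g. laser/image metrics per field plot
y = X[:, :4] @ np.array([3.0, 1.5, -2.0, 0.5]) + rng.normal(scale=1.0, size=200)

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, y)

coef = model.named_steps["lassocv"].coef_
print("selected feature indices:", np.flatnonzero(coef))
```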
Abstract:
Machine learning provides tools for the automated construction of predictive models in data-intensive areas of engineering and science. The family of regularized kernel methods has in recent years become one of the mainstream approaches to machine learning, due to a number of advantages the methods share. The approach provides theoretically well-founded solutions to the problems of under- and overfitting, allows learning from structured data, and has been empirically demonstrated to yield high predictive performance on a wide range of application domains. Historically, the problems of classification and regression have gained the majority of attention in the field. In this thesis we focus on another type of learning problem, that of learning to rank. In learning to rank, the aim is to learn, from a set of past observations, a ranking function that can order new objects according to how well they match some underlying criterion of goodness. As an important special case of the setting, we can recover the bipartite ranking problem, corresponding to maximizing the area under the ROC curve (AUC) in binary classification. Ranking applications appear in a large variety of settings; examples encountered in this thesis include document retrieval in web search, recommender systems, information extraction and automated parsing of natural language. We consider the pairwise approach to learning to rank, where ranking models are learned by minimizing the expected probability of ranking any two randomly drawn test examples incorrectly. The development of computationally efficient kernel methods based on this approach has in the past proven to be challenging. Moreover, it is not clear which techniques for estimating the predictive performance of learned models are the most reliable in the ranking setting, and how these techniques can be implemented efficiently. The contributions of this thesis are as follows. First, we develop RankRLS, a computationally efficient kernel method for learning to rank that is based on minimizing a regularized pairwise least-squares loss. In addition to training methods, we introduce a variety of algorithms for tasks such as model selection, multi-output learning, and cross-validation, based on computational shortcuts from matrix algebra. Second, we improve the fastest known training method for the linear version of the RankSVM algorithm, which is one of the most well-established methods for learning to rank. Third, we study the combination of the empirical kernel map and reduced set approximation, which allows the large-scale training of kernel machines using linear solvers, and propose computationally efficient solutions to cross-validation when using the approach. Next, we explore the problem of reliable cross-validation when using AUC as a performance criterion, through an extensive simulation study. We demonstrate that the proposed leave-pair-out cross-validation approach leads to more reliable performance estimation than commonly used alternative approaches. Finally, we present a case study on applying machine learning to information extraction from biomedical literature, which combines several of the approaches considered in the thesis. The thesis is divided into two parts. Part I provides the background for the research work and summarizes the most central results; Part II consists of the five original research articles that are the main contribution of this thesis.
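A minimal sketch of leave-pair-out cross-validation for AUC as described above, assuming scikit-learn, a small synthetic binary data set and logistic regression as the scorer (the thesis's own models are not reproduced here): each positive-negative pair is held out in turn, and the AUC estimate is the fraction of held-out pairs that the retrained model orders correctly.

```python
# Illustrative leave-pair-out cross-validation for AUC estimation.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
y = (X[:, 0] + 0.5 * rng.normal(size=40) > 0).astype(int)

pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
correct, total = 0, 0
for i in pos:
    for j in neg:
        train = np.setdiff1d(np.arange(len(y)), [i, j])   # hold out one pos-neg pair
        clf = LogisticRegression().fit(X[train], y[train])
        s = clf.decision_function(X[[i, j]])
        correct += s[0] > s[1]                             # positive should outrank negative
        total += 1

print("leave-pair-out AUC estimate:", correct / total)
```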
Abstract:
In today's logistics environment, there is a tremendous need for accurate cost information and cost allocation. Companies searching for the proper solution often come across activity-based costing (ABC) or one of its variations, which uses cost drivers to allocate the costs of activities to cost objects. In order to allocate the costs accurately and reliably, the selection of appropriate cost drivers is essential for realizing the benefits of the costing system. The purpose of this study is to validate the transportation cost drivers of a Finnish wholesaler company and ultimately to select the best possible driver alternatives for the company. The use of cost driver combinations as an alternative is also studied. The study is conducted as a part of the case company's applied ABC project, using statistical research as the main research method, supported by a theoretical, literature-based method. The main research tools featured in the study include simple and multiple regression analyses, which, together with a practicality analysis based on the literature and on observations, form the basis for the proposed methods. The results suggest that the most appropriate cost driver alternatives are the delivery drops and the internal delivery weight. The use of cost driver combinations is not recommended, as they do not provide substantially better results while at the same time increasing measurement costs, complexity and the burden of use. The use of internal freight cost drivers is also questionable, as the results indicate a weakening trend in cost allocation capability towards the end of the period. Therefore, more research on internal freight cost drivers should be conducted before taking them into use.
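A hedged sketch of the kind of driver validation described above, assuming statsmodels and invented data rather than the case company's figures: each candidate driver is tested in a simple regression against transportation cost, and a multiple regression illustrates a driver combination. The names "drops" and "weight" stand in for delivery drops and internal delivery weight.

```python
# Illustrative simple and multiple regressions for cost driver validation.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "drops": rng.poisson(20, n),
    "weight": rng.gamma(5.0, 100.0, n),
})
df["cost"] = 50 + 12 * df["drops"] + 0.4 * df["weight"] + rng.normal(0, 80, n)

# simple regressions: how much of the cost each candidate driver explains alone
for driver in ["drops", "weight"]:
    r2 = sm.OLS(df["cost"], sm.add_constant(df[driver])).fit().rsquared
    print(f"{driver}: R^2 = {r2:.2f}")

# multiple regression: both drivers together (a driver combination)
combined = sm.OLS(df["cost"], sm.add_constant(df[["drops", "weight"]])).fit()
print(combined.summary().tables[1])
```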
Abstract:
Mass spectrometry (MS)-based proteomics has seen significant technical advances during the past two decades, and mass spectrometry has become a central tool in many biosciences. Despite the popularity of MS-based methods, the handling of systematic non-biological variation in the data remains a common problem. This biasing variation can result from several sources, ranging from sample handling to differences caused by the instrumentation. Normalization is the procedure that aims to account for this biasing variation and make samples comparable. Many normalization methods commonly used in proteomics have been adapted from the DNA-microarray world. Studies comparing normalization methods on proteomics data sets using some variability measures exist. However, a more thorough comparison, looking at the quantitative and qualitative differences in the performance of the different normalization methods and at their ability to preserve the true differential expression signal of proteins, is lacking. In this thesis, several popular and widely used normalization methods (linear regression normalization, local regression normalization, variance stabilizing normalization, quantile normalization, median central tendency normalization, and also variants of some of the aforementioned methods), representing different strategies in normalization, are compared and evaluated with a benchmark spike-in proteomics data set. The normalization methods are evaluated in several ways. The performance of the normalization methods is evaluated qualitatively and quantitatively on a global scale and in pairwise comparisons of sample groups. In addition, it is investigated whether performing the normalization globally on the whole data set or pairwise for the comparison pairs examined affects the performance of the normalization method in normalizing the data and preserving the true differential expression signal. In this thesis, both major and minor differences in the performance of the different normalization methods were found. Also, the way in which the normalization was performed (global normalization of the whole data set or pairwise normalization of the comparison pair) affected the performance of some of the methods in pairwise comparisons. Differences among variants of the same methods were also observed.
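A minimal sketch of two of the normalization strategies named above, median central tendency normalization and quantile normalization, applied to an invented log-intensity matrix (proteins x samples); this is a generic textbook-style implementation, not the thesis's evaluated code.

```python
# Illustrative median central tendency and quantile normalization.
import numpy as np

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=8.0, sigma=1.0, size=(500, 6))   # proteins x samples
log_data = np.log2(intensities)

# median central tendency normalization: shift each sample to a common median
median_norm = log_data - np.median(log_data, axis=0) + np.median(log_data)

# quantile normalization: force every sample to share the same intensity distribution
ranks = np.argsort(np.argsort(log_data, axis=0), axis=0)     # rank of each value per sample
mean_quantiles = np.mean(np.sort(log_data, axis=0), axis=1)  # target value for each rank
quantile_norm = mean_quantiles[ranks]

print("per-sample medians after median normalization:",
      np.round(np.median(median_norm, axis=0), 3))
print("per-sample medians after quantile normalization:",
      np.round(np.median(quantile_norm, axis=0), 3))
```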