34 results for Kernel Smoothing
Abstract:
Electricity spot prices have always been a demanding data set for time series analysis, mostly because of the non-storability of electricity. This feature, which makes electric power unlike other commodities, causes outstanding price spikes. Moreover, the last several years in the financial world seem to show that 'spiky' behaviour of time series is no longer an exception but rather a regular phenomenon. The purpose of this paper is to seek patterns and relations within electricity price outliers and to verify how they affect the overall statistics of the data. The study uses techniques such as the classical Box-Jenkins approach, DFT series smoothing and GARCH models. The results obtained for two geographically different price series show that patterns in the occurrence of outliers are not straightforward. Additionally, there seems to be no rule that would predict the appearance of a spike from volatility, while the reverse effect is quite prominent. It is concluded that spikes cannot be predicted based on the price series alone; some geographical and meteorological variables probably need to be included in the modeling.
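As an illustration of the DFT-based smoothing mentioned above, the following minimal Python sketch low-pass filters a spiky price series by zeroing high-frequency Fourier coefficients; the cutoff fraction and the synthetic data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def dft_smooth(prices, keep_fraction=0.05):
    """Smooth a 1-D price series by keeping only the lowest-frequency
    DFT coefficients (a simple low-pass filter); keep_fraction is a
    hypothetical tuning parameter, not a value from the paper."""
    coeffs = np.fft.rfft(prices)
    cutoff = max(1, int(len(coeffs) * keep_fraction))
    coeffs[cutoff:] = 0.0                      # drop high-frequency content
    return np.fft.irfft(coeffs, n=len(prices))

# Synthetic spiky series: smooth seasonal pattern plus occasional spikes.
rng = np.random.default_rng(0)
t = np.arange(730)
prices = 40 + 10 * np.sin(2 * np.pi * t / 365) + rng.normal(0, 2, t.size)
prices[rng.choice(t.size, 10, replace=False)] += rng.uniform(50, 150, 10)

baseline = dft_smooth(prices)
spikes = prices - baseline                     # residual: candidate outliers
```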
Abstract:
In many industrial applications, accurate and fast surface reconstruction is essential for quality control. Variation in surface finishing parameters, such as surface roughness, can reflect defects in a manufacturing process, non-optimal product operational efficiency, and reduced life expectancy of the product. This thesis considers the reconstruction and analysis of high-frequency variation, that is, roughness, on planar surfaces. Standard roughness measures in industry are calculated from surface topography. A fast and non-contact method to obtain surface topography is to apply photometric stereo in the estimation of surface gradients and to reconstruct the surface by integrating the gradient fields. Alternatively, visual methods, such as statistical measures, fractal dimension and distance transforms, can be used to characterize surface roughness directly from gray-scale images. In this thesis, the accuracy of distance transforms, statistical measures, and fractal dimension is evaluated in the estimation of surface roughness from gray-scale images and topographies. The results are contrasted with standard industry roughness measures. In distance transforms, the key idea is that distance values calculated along a highly varying surface are greater than distances calculated along a smoother surface. Statistical measures and fractal dimension are common surface roughness measures. In the experiments, skewness and variance of the brightness distribution, fractal dimension, and distance transforms exhibited strong linear correlations with standard industry roughness measures. One of the key strengths of the photometric stereo method is the acquisition of higher-frequency variation of surfaces. In this thesis, the reconstruction of planar high-frequency varying surfaces is studied in the presence of imaging noise and blur. Two Wiener filter-based methods are proposed, of which one is optimal in the sense of surface power spectral density given the spectral properties of the imaging noise and blur. Experiments show that the proposed methods preserve the inherent high-frequency variation in the reconstructed surfaces, whereas traditional reconstruction methods typically handle incorrect measurements by smoothing, which dampens the high-frequency variation.
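The thesis's Wiener-filter reconstruction is not reproduced here; the sketch below only shows the standard frequency-domain (Frankot-Chellappa style) least-squares integration of gradient fields that such surface-from-gradients methods build on. The function name and the synthetic test surface are assumptions for illustration.

```python
import numpy as np

def integrate_gradients(p, q):
    """Reconstruct a surface z from gradient fields p = dz/dx (columns) and
    q = dz/dy (rows) by least-squares integration in the Fourier domain
    (Frankot-Chellappa). A standard baseline, not the thesis's Wiener-filter
    method; the mean height is unrecoverable and is set to zero."""
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2 * np.pi      # spatial frequencies (per pixel)
    wy = np.fft.fftfreq(rows) * 2 * np.pi
    u, v = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = u**2 + v**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * u * P - 1j * v * Q) / denom
    Z[0, 0] = 0.0
    return np.real(np.fft.ifft2(Z))

# Synthetic periodic test surface and its numerical gradients.
yy, xx = np.mgrid[0:128, 0:128] / 128.0
z_true = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)
q_field, p_field = np.gradient(z_true)         # d/drow (y), then d/dcol (x)
z_rec = integrate_gradients(p_field, q_field)  # matches z_true up to a constant
```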
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented. As a new application for distance transforms, they are applied to gray-level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All the other gray-weighted distance function algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way; it gives a weighted version of the chessboard distance map, where the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
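To make the two-pass idea concrete, here is a hedged sketch of a DTOCS-style transform in which each step between 8-neighbors costs one chessboard unit plus the absolute gray-level difference, as described above. The exact local distance definition, the mask convention and the code are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np

def dtocs(gray, region):
    """Two-pass DTOCS-style transform on a gray-level image.
    `region` is a boolean mask: True = pixels whose distance is computed,
    False = zero-distance reference pixels (the calculation region is
    defined by a binary image, as in the abstract)."""
    g = gray.astype(float)
    d = np.where(region, np.inf, 0.0)
    rows, cols = g.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbors already visited
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # in the reverse raster scan

    def sweep(row_order, col_order, nbrs):
        for y in row_order:
            for x in col_order:
                best = d[y, x]
                for dy, dx in nbrs:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        # chessboard step weighted by the gray-value difference
                        cand = d[ny, nx] + 1.0 + abs(g[y, x] - g[ny, nx])
                        best = min(best, cand)
                d[y, x] = best

    # One forward and one backward raster scan; complicated images may
    # require repeating the pair of passes a few times, as noted above.
    sweep(range(rows), range(cols), fwd)
    sweep(range(rows - 1, -1, -1), range(cols - 1, -1, -1), bwd)
    return d
```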
Abstract:
The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, are becoming a more integral part of everyday life, problems in the quality of the RGB reproduction from the spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality being the degree of excellence of the image, and image fidelity the measure of the match of the image under study to the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both. There are very few works dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF) and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three parameters of quality: colorfulness, vividness and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model proved effective in the respective experiments.
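A minimal sketch of how kernel functions of the kinds listed above (polynomial, Gaussian RBF, sigmoid) can score the similarity of two spectra; the function names, parameter values and example vectors are illustrative assumptions, not the thesis's actual fidelity measures.

```python
import numpy as np

def polynomial_kernel(x, y, degree=2, c=1.0):
    return (np.dot(x, y) + c) ** degree

def rbf_kernel(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def sigmoid_kernel(x, y, alpha=0.01, c=0.0):
    return np.tanh(alpha * np.dot(x, y) + c)

# Two hypothetical pixel spectra sampled over the same wavelength bands.
s1 = np.array([0.12, 0.15, 0.22, 0.35, 0.41, 0.38])
s2 = np.array([0.11, 0.16, 0.20, 0.33, 0.44, 0.40])

for k in (polynomial_kernel, rbf_kernel, sigmoid_kernel):
    print(k.__name__, k(s1, s2))   # larger value = more similar spectra
```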
Abstract:
Preference relations, and their modeling, have played a crucial role in both the social sciences and applied mathematics. A special category of preference relations is represented by cardinal preference relations, which are relations that also take into account the degree of preference. Preference relations play a pivotal role in most multi-criteria decision making methods and in operational research. This thesis aims at showing some recent advances in their methodology. There are a number of open issues in this field, and the contributions presented in this thesis can be grouped accordingly. The first issue regards the estimation of a weight vector given a preference relation. A new and efficient algorithm for estimating the priority vector of a reciprocal relation, i.e. a special type of preference relation, is presented. The same section contains a proof that twenty methods already proposed in the literature lead to unsatisfactory results, as they employ a conflicting constraint in their optimization model. The second area of interest concerns consistency evaluation, and it is possibly the kernel of the thesis. This thesis contains proofs that some indices are equivalent and that, therefore, some seemingly different formulae end up leading to the very same result. Moreover, some numerical simulations are presented. The section ends with some considerations on a new method for fairly evaluating consistency. The third matter regards incomplete relations and how to estimate missing comparisons. This section reports a numerical study of the methods already proposed in the literature and analyzes their behavior in different situations. The fourth and last topic proposes a way to deal with group decision making by connecting preference relations with social network analysis.
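The thesis's new estimation algorithm is not given in the abstract; as a point of reference only, the sketch below derives a priority vector from a multiplicative pairwise comparison matrix with the standard geometric-mean method, using an invented 3x3 example.

```python
import numpy as np

def geometric_mean_priorities(A):
    """Priority vector of a multiplicative pairwise comparison matrix A,
    where A[i, j] expresses how strongly alternative i is preferred to j
    and A[j, i] = 1 / A[i, j].  Standard geometric-mean estimation,
    shown only as a baseline, not the algorithm proposed in the thesis."""
    w = np.prod(A, axis=1) ** (1.0 / A.shape[0])   # row geometric means
    return w / w.sum()                             # normalize to sum to 1

# Hypothetical reciprocal comparison matrix for three alternatives.
A = np.array([[1.0,   3.0,   5.0],
              [1/3.0, 1.0,   2.0],
              [1/5.0, 1/2.0, 1.0]])
print(geometric_mean_priorities(A))   # weights ordered w1 > w2 > w3
```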
Abstract:
The purpose of this work is to develop the short-term demand forecasting process at VAASAN Oy, where some of the products are manufactured on the basis of demand forecasts. The lack of storage possibilities due to the nature of the manufactured products, the high delivery reliability target and the large number of forecasts required place great demands on the demand forecasting process. The theoretical part of the work discusses the need for demand forecasting, the uses of forecasts and demand forecasting methods. Demand forecasting alone, however, does not lead to an optimal outcome for the supply chain; comprehensive demand management is needed. Demand management is a process whose aim is to balance supply chain capabilities and customer requirements as efficiently as possible. The study examined forecasts produced in the company over a three-month period with the exponential smoothing method, as well as the changes the forecasters made to them. Based on the study, the optimal exponential smoothing alpha coefficient is 0.6. The adjustments the forecasters made to the statistical forecasts improved forecast accuracy and were particularly effective in minimizing delivery shortages. In addition, the work produced a number of tools that ease and automate the forecasters' daily routines.
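For reference, a minimal sketch of simple exponential smoothing with the alpha value 0.6 reported above; the recursion is the textbook formula and the demand figures are invented, so this is not the company's forecasting system.

```python
def exponential_smoothing(demand, alpha=0.6):
    """Simple exponential smoothing: each new forecast is
    alpha * latest observation + (1 - alpha) * previous forecast.
    alpha = 0.6 is the coefficient reported in the study;
    the demand series below is purely illustrative."""
    forecast = demand[0]                 # initialize with the first observation
    forecasts = [forecast]
    for d in demand[1:]:
        forecast = alpha * d + (1 - alpha) * forecast
        forecasts.append(forecast)       # forecasts[t] predicts period t + 1
    return forecasts

weekly_demand = [120, 135, 128, 150, 160, 149, 170]
print(exponential_smoothing(weekly_demand))
```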
Abstract:
The supply of traditional biofuels is limited, and new, previously unexploited biofuels therefore need to be developed to meet the CO2 emission targets set by the EU and the ever-increasing demand for energy. In recent years, interest has turned towards thermal energy recovery from various residue fractions and wastes. In the production of vehicle fuel from biomass, the solid residue is often the largest process stream in the production plant. Proper handling of the residues would make the production more profitable and more ecologically sustainable. One alternative is to produce electricity and/or heat through combustion, since these residues are considered CO2-neutral. The objective of this thesis was to study the combustion properties of some solid residues that arise in the production of renewable vehicle fuels. The four materials investigated are rapeseed cake, palm kernel cake, dried distillers' grains and stabilized digested sewage sludge. The study uses a wide selection of research methods, from laboratory scale to full-scale combustion, to identify the main challenges associated with burning the residues in fluidized bed boilers. Detailed fuel characterization showed the residues to be a valuable source for heat and power production. The chemical composition of the residues varies greatly compared with more traditionally used biofuels. A common factor for all the residues studied is a high phosphorus content. Because of the low phosphorus concentrations in traditional biofuels, the element has so far not been considered to play any major role in ash chemistry. The experiments now showed that phosphorus can no longer be neglected when studying the chemistry of combustion processes, as more and more phosphorus-rich fuels enter the energy market.
Abstract:
This work investigated the fatigue strength design values of welded plate joints. The fatigue strength design values of the welds were determined with a 2D FEM program applying linear elastic fracture mechanics. From the results of the fracture mechanics calculations, FAT classes corresponding to the nominal stress fatigue design method were determined for different joint geometries and loading types, taking into account the structural stress perpendicular to the weld. The geometries of the joints studied generally differed from the tabulated cases contained in design standards and guidelines. The calculations took into account the angle at which the weld joins the base material, the weld toe rounding and the incomplete weld penetration. Variation in loading types was studied by changing the bending proportion of the structural stress and the relative magnitudes of the crossing loads of load-carrying X-joints. Fatigue strengths were determined for the load-case-specific membrane and bending stresses as well as for the averages of these stress distributions. The FAT classes obtained in this work can be applied to corresponding geometries and loadings, and by interpolation also to intermediate values of the results. The methods used in this work can improve the accuracy of the nominal stress design method and extend it to joints outside the tabulated cases. The results present FAT classes for T-, X- and butt joints and their different load combinations.
Abstract:
Mass-produced paper electronics (large area organic printed electronics on paper-based substrates, “throw-away electronics”) has the potential to introduce the use of flexible electronic applications in everyday life. While paper manufacturing and printing have a long history, they were not developed with electronic applications in mind. Modifications to paper substrates and printing processes are required in order to obtain working electronic devices. This should be done while maintaining the high throughput of conventional printing techniques and the low cost and recyclability of paper. An understanding of the interactions between the functional materials, the printing process and the substrate is required for successful manufacturing of advanced devices on paper. Based on this understanding, a recyclable, multilayer-coated paper-based substrate that combines adequate barrier and printability properties for printed electronics and sensor applications was developed in this work. In this multilayer structure, a thin top-coating consisting of mineral pigments is coated on top of a dispersion-coated barrier layer. The top-coating provides well-controlled sorption properties through controlled thickness and porosity, thus enabling the printability of functional materials to be optimized. The penetration of ink solvents and functional materials stops at the barrier layer, which not only improves the performance of the functional material but also eliminates potential fiber swelling and de-bonding that can occur when the solvents are allowed to penetrate into the base paper. The multilayer-coated paper under consideration in the current work consists of a pre-coating and a smoothing layer on which the barrier layer is deposited. Coated fine paper may also be used directly as basepaper, ensuring a smooth base for the barrier layer. The top layer is thin and smooth, consisting of mineral pigments such as kaolin, precipitated calcium carbonate, silica or blends of these. All the materials in the coating structure have been chosen in order to maintain the recyclability and sustainability of the substrate. The substrate can be coated in steps, sequentially layer by layer, which requires detailed understanding and tuning of the wetting properties and topography of the barrier layer versus the surface tension of the top-coating. A cost-competitive method for industrial-scale production is the curtain coating technique, which allows extremely thin top-coatings to be applied simultaneously with a closed and sealed barrier layer. The understanding of the interactions of functional materials, formulated and applied on paper as inks, makes it possible to create a paper-based substrate that can be used to manufacture printed electronics-based devices and sensors on paper. The multitude of functional materials and their complex interactions make it challenging to draw general conclusions in this topic area. Inevitably, the results become partially specific to the device chosen and the materials needed in its manufacturing. Based on the results, it is clear that for inks based on dissolved or small-size functional materials, a barrier layer is beneficial and ensures the functionality of the printed material in a device. The required active barrier lifetime depends on the solvents or analytes used and their volatility. High aspect ratio mineral pigments, which create tortuous pathways and physical barriers within the barrier layer, limit the penetration of solvents used in functional inks.
The surface pore volume and pore size can be optimized for a given printing process and ink through the choice of pigment type and coating layer thickness. However, when manufacturing multilayer functional devices, such as transistors, which consist of several printed layers, compromises have to be made. For example, while a thick and porous top-coating is preferable for printing source and drain electrodes with a silver particle ink, a thinner and less absorbing surface is required to form a functional semiconducting layer. With the multilayer coating structure concept developed in this work, it was possible to make the paper substrate suitable for printed functionality. The possibility of printing functional devices, such as transistors, sensors and pixels, in a roll-to-roll process on paper is demonstrated, which may enable introducing paper for use in disposable “one-time use” or “throw-away” electronics and sensors, such as lab-on-strip devices for various analyses, consumer packages equipped with product quality sensors, or remote tracking devices.
Abstract:
Tropical forests are sources of many ecosystem services, but these forests are vanishing rapidly. The situation is severe in Sub-Saharan Africa and especially in Tanzania. The causes of change are multidimensional and strongly interdependent, and only understanding them comprehensively helps to change the ongoing unsustainable trends of forest decline. Ongoing forest changes, their spatiality and their connection to humans and the environment can be studied with the methods of Land Change Science. The knowledge produced with these methods helps to make arguments about the actors, actions and causes that are behind the forest decline. In this study of Unguja Island in Zanzibar, the focus is on the current forest cover and its changes between 1996 and 2009. The cover and changes are measured with commonly used remote sensing methods: automated land cover classification and post-classification comparison of medium-resolution satellite images. Kernel Density Estimation is used to determine clusters of change, sub-area analysis provides information about the differences between regions, and distance and regression analyses connect the changes to environmental factors. These analyses not only explain the changes that have taken place but also allow quantitative and spatial future scenarios to be built. No similar study has been made for Unguja, so this work provides new information that is beneficial for the whole society. The results show that 572 km2 of Unguja is still forested, but 0.82-1.19% of these forests are disappearing annually. Besides deforestation, vertical degradation and spatial changes are also significant problems. Deforestation is most severe in the communal indigenous forests, but agroforests are also decreasing. Spatially, deforestation is concentrated in areas close to the coastline, populated areas and Zanzibar Town. Biophysical factors, on the other hand, do not seem to influence the ongoing deforestation process. If the current trend continues, approximately 485 km2 of forest should remain in 2025. Solutions to these deforestation problems should be sought in sustainable land use management, surveying and protection of the forests in risk areas, and spatially targeted self-sustaining tree planting schemes.
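To illustrate how Kernel Density Estimation can reveal clusters of change of the kind described above, the sketch below fits a 2-D Gaussian KDE to hypothetical forest-loss point coordinates with scipy; the coordinates and the automatic bandwidth are invented assumptions, not data from the study.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical (x, y) coordinates of detected forest-loss pixels, in km.
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=(5.0, 12.0), scale=0.8, size=(200, 2))
cluster_b = rng.normal(loc=(18.0, 4.0), scale=1.5, size=(120, 2))
change_points = np.vstack([cluster_a, cluster_b]).T        # shape (2, n)

kde = gaussian_kde(change_points)                          # Gaussian kernels

# Evaluate the density on a grid; high values mark change hot spots.
xs, ys = np.mgrid[0:25:200j, 0:20:160j]
density = kde(np.vstack([xs.ravel(), ys.ravel()])).reshape(xs.shape)
hotspot = np.unravel_index(density.argmax(), density.shape)
print("densest change cluster near", xs[hotspot], ys[hotspot])
```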
Abstract:
The purpose of this thesis was to study the design of demand forecasting processes and the management of demand. In the literature review, different processes were identified and forecasting methods and techniques were reviewed. The role of the bullwhip effect in the supply chain was also examined, together with how to manage it through information sharing. The empirical part of the study first describes the current situation and challenges in the case company. After that, a new way to handle demand is introduced through target budget creation, and it is shown how information sharing for five products and a few customers would bring benefits to the company. A new S&OP process, and the organization for it, was also created within this study.
Abstract:
In this thesis, the suitability of different trackers for finger tracking in high-speed videos was studied. Tracked finger trajectories from the videos were post-processed and analysed using various filtering and smoothing methods. Position derivatives of the trajectories, speed and acceleration, were extracted for the purposes of hand motion analysis. Overall, two methods, Kernelized Correlation Filters and Spatio-Temporal Context Learning tracking, performed better than the others in the tests. Both achieved high accuracy for the selected high-speed videos and also allowed real-time processing, being able to process over 500 frames per second. In addition, the results showed that different filtering methods can be applied to produce more appropriate velocity and acceleration curves from the tracking data. Local Regression filtering and the Unscented Kalman Smoother gave the best results in the tests. Furthermore, the results showed that the tracking and filtering methods are suitable for high-speed hand tracking and trajectory-data post-processing.
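A minimal sketch of the kind of post-processing described above: smoothing a tracked fingertip trajectory and differentiating it to obtain speed and acceleration. It uses a Savitzky-Golay filter as a generic local-regression-style smoother; the frame rate, window length and trajectory are assumptions, not values or methods from the thesis.

```python
import numpy as np
from scipy.signal import savgol_filter

fps = 500.0                      # assumed high-speed camera frame rate
dt = 1.0 / fps

# Hypothetical tracked fingertip positions (pixels) over 300 frames.
t = np.arange(300) * dt
rng = np.random.default_rng(2)
x = 200 + 80 * np.sin(4 * np.pi * t) + rng.normal(0, 1.5, t.size)
y = 150 + 40 * np.cos(4 * np.pi * t) + rng.normal(0, 1.5, t.size)

# Smooth the noisy trajectory, then take first and second derivatives.
window, order = 21, 3
vx = savgol_filter(x, window, order, deriv=1, delta=dt)
vy = savgol_filter(y, window, order, deriv=1, delta=dt)
ax = savgol_filter(x, window, order, deriv=2, delta=dt)
ay = savgol_filter(y, window, order, deriv=2, delta=dt)

speed = np.hypot(vx, vy)         # pixels per second
accel = np.hypot(ax, ay)         # pixels per second squared
print(speed.max(), accel.max())
```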
Abstract:
Identification of low-dimensional structures and main sources of variation from multivariate data are fundamental tasks in data analysis. Many methods aimed at these tasks involve the solution of an optimization problem. Thus, the objective of this thesis is to develop computationally efficient and theoretically justified methods for solving such problems. Most of the thesis is based on a statistical model where ridges of the density estimated from the data are considered as relevant features. Finding ridges, which are generalized maxima, necessitates the development of advanced optimization methods. An efficient and convergent trust region Newton method for projecting a point onto a ridge of the underlying density is developed for this purpose. The method is utilized in a differential equation-based approach for tracing ridges and computing projection coordinates along them. The density estimation is done nonparametrically by using Gaussian kernels. This allows the application of ridge-based methods with only mild assumptions on the underlying structure of the data. The statistical model and the ridge finding methods are adapted to two different applications. The first one is the extraction of curvilinear structures from noisy data mixed with background clutter. The second one is a novel nonlinear generalization of principal component analysis (PCA) and its extension to time series data. The methods have a wide range of potential applications where most of the earlier approaches are inadequate. Examples include identification of faults from seismic data and identification of filaments from cosmological data. The applicability of the nonlinear PCA to climate analysis and the reconstruction of periodic patterns from noisy time series data are also demonstrated. Other contributions of the thesis include the development of an efficient semidefinite optimization method for embedding graphs into Euclidean space. The method produces structure-preserving embeddings that maximize interpoint distances. It is primarily developed for dimensionality reduction, but it also has potential applications in graph theory and various areas of physics, chemistry and engineering. The asymptotic behaviour of ridges and maxima of Gaussian kernel densities is also investigated when the kernel bandwidth approaches infinity. The results are applied to the nonlinear PCA and to finding significant maxima of such densities, which is a typical problem in visual object tracking.
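The trust-region Newton projection developed in the thesis is not reproduced here; the sketch below only shows the ingredients such ridge methods work with: a Gaussian kernel density estimate, its gradient and Hessian (up to a common normalizing constant), and a simple subspace-constrained mean-shift style step toward a one-dimensional ridge. The bandwidth, data and step rule are illustrative assumptions.

```python
import numpy as np

def gaussian_kde_parts(x, data, h):
    """Gaussian KDE value, gradient and Hessian at point x, up to the
    normalizing constant (which does not affect ridge directions);
    data has shape (n, d), h is an isotropic bandwidth."""
    diff = data - x                                    # (n, d), x_i - x
    w = np.exp(-0.5 * np.sum(diff**2, axis=1) / h**2)  # kernel weights
    f = w.sum()
    grad = (w[:, None] * diff).sum(axis=0) / h**2
    outer = np.einsum('n,ni,nj->ij', w, diff, diff) / h**4
    hess = outer - f * np.eye(x.size) / h**2
    return f, grad, hess, w

def ridge_step(x, data, h):
    """One subspace-constrained mean-shift style step toward a 1-D ridge:
    move along the mean-shift vector projected onto the Hessian eigenvectors
    with the smallest eigenvalues.  A simple fixed-point step, not the
    trust-region Newton method of the thesis."""
    f, grad, hess, w = gaussian_kde_parts(x, data, h)
    mean_shift = (w[:, None] * data).sum(axis=0) / w.sum() - x
    vals, vecs = np.linalg.eigh(hess)                  # ascending eigenvalues
    V = vecs[:, :-1]                                   # drop the ridge direction
    return x + V @ (V.T @ mean_shift)

# Noisy points along a circle; iterate the step to project onto the ridge.
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 400)
data = np.column_stack([np.cos(t), np.sin(t)]) + rng.normal(0, 0.1, (400, 2))
x = np.array([0.8, 0.1])
for _ in range(30):
    x = ridge_step(x, data, h=0.3)
print(x, np.linalg.norm(x))   # should end up near the unit circle
```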
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach forms a basis for the analysis; the Box-Jenkins approach is an improvement of the classical approach and deals with stationary time series. The state space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. The Kalman filter method and smoothing techniques are also integrated with the state space approach and the MLE method to estimate parameters that are allowed to change over time. Parameter estimation is carried out repeatedly and integrated with MCMC to inspect how well the different estimation methods can identify the optimal model parameters. Identification is performed in both a probabilistic and a general sense, and the results are compared in order to study and represent identifiability in a more informative way.
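As a concrete reference for the state space machinery mentioned above, here is a minimal Kalman filter for a local-level (random-walk-plus-noise) model; the model choice, variances and data are illustrative assumptions rather than anything specified in the thesis.

```python
import numpy as np

def kalman_filter_local_level(y, q, r, m0=0.0, p0=1e6):
    """Kalman filter for the local-level model
        x_t = x_{t-1} + w_t,  w_t ~ N(0, q)   (hidden level)
        y_t = x_t + v_t,      v_t ~ N(0, r)   (observation)
    Returns filtered state means and variances.  The variances q and r
    would normally be estimated, e.g. by MLE; here they are given."""
    m, p = m0, p0
    means, variances = [], []
    for obs in y:
        p = p + q                    # predict
        k = p / (p + r)              # Kalman gain
        m = m + k * (obs - m)        # update with the new observation
        p = (1 - k) * p
        means.append(m)
        variances.append(p)
    return np.array(means), np.array(variances)

rng = np.random.default_rng(5)
level = np.cumsum(rng.normal(0, 0.3, 200))          # hidden random walk
y = level + rng.normal(0, 1.0, 200)                 # noisy observations
m, p = kalman_filter_local_level(y, q=0.3**2, r=1.0**2)
```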
Abstract:
The aim of this work is to invert the ionospheric electron density profile from riometer (Relative Ionospheric Opacity Meter) measurements. The new riometer instrument KAIRA (Kilpisjärvi Atmospheric Imaging Receiver Array) is used to measure the cosmic HF radio noise absorption that takes place in the D-region ionosphere between 50 and 90 km. In order to invert the electron density profile, synthetic data are used to constrain the unknown parameter Neq with a spline height method, which works by parameterizing the electron density profile at different altitudes. Moreover, a smoothing prior method is used to sample from the posterior distribution by truncating the prior covariance matrix. The smoothing prior approach makes it easier to find the posterior using the MCMC (Markov chain Monte Carlo) method.
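A minimal sketch of the general idea of combining a smoothness prior with Metropolis-Hastings sampling of a layered profile; the linear forward model, prior strength and altitude grid are invented placeholders and do not correspond to the KAIRA inversion described above.

```python
import numpy as np

rng = np.random.default_rng(6)

n_alt = 20                                   # hypothetical altitude layers (50-90 km)
A = rng.uniform(0, 1, (5, n_alt))            # placeholder linear forward model
x_true = np.exp(-0.5 * ((np.arange(n_alt) - 12) / 4.0) ** 2)
y = A @ x_true + rng.normal(0, 0.05, 5)      # synthetic measurements

D = np.diff(np.eye(n_alt), 2, axis=0)        # second-difference operator
sigma, delta = 0.05, 0.5                     # noise level, prior smoothness weight

def log_post(x):
    """Gaussian likelihood plus a smoothness (second-difference) prior."""
    misfit = y - A @ x
    return (-0.5 * np.sum(misfit**2) / sigma**2
            - 0.5 * delta * np.sum((D @ x)**2))

# Random-walk Metropolis sampling of the profile.
x = np.zeros(n_alt)
lp = log_post(x)
samples = []
for _ in range(20000):
    prop = x + rng.normal(0, 0.02, n_alt)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        x, lp = prop, lp_prop
    samples.append(x.copy())

posterior_mean = np.mean(samples[5000:], axis=0)   # discard burn-in
```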