55 results for STOCKINGS, COMPRESSION

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance:

20.00%

Publisher:

Abstract:

The problem of selecting an appropriate wavelet filter is always present in signal compression based on the wavelet transform. In this report, we propose a method for selecting a wavelet filter from a predefined set of filters for the compression of spectra from a multispectral image. The wavelet filter selection is based on Learning Vector Quantization (LVQ). In the training phase, the best wavelet filter for each spectrum of the test images is found by a careful compression-decompression evaluation. Certain spectral features are used to characterize the pixel spectra. The LVQ is used to form the best wavelet filter class for different types of spectra from multispectral images. When a new image is to be compressed, a set of spectra from that image is selected, the spectra are classified by the trained LVQ, and the filter associated with the largest class is selected for the compression of every spectrum of the multispectral image. The results show that in almost every case our method finds the most suitable wavelet filter from the predefined set.
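A minimal sketch of the selection step just described, assuming a trained LVQ codebook is already available; the prototype array, its labels, and the filter list are hypothetical placeholders rather than the report's code:

```python
import numpy as np

def select_filter(sample_spectra, prototypes, proto_labels, filters):
    # sample_spectra: (n, d) feature vectors sampled from the new image
    # prototypes:     (m, d) trained LVQ codebook vectors
    # proto_labels:   (m,) integer filter-class index of each prototype
    # filters:        candidate wavelet filter names, one per class
    dists = np.linalg.norm(
        sample_spectra[:, None, :] - prototypes[None, :, :], axis=2)
    classes = proto_labels[np.argmin(dists, axis=1)]  # nearest prototype
    counts = np.bincount(classes, minlength=len(filters))
    return filters[int(np.argmax(counts))]  # filter of the largest class
```

If the predefined set were, say, PyWavelets filters, `filters` might hold names such as "db4" or "bior4.4".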

Relevance:

20.00%

Publisher:

Abstract:

Multispectral images contain information from several spectral wavelengths. They are currently widely used in remote sensing and are becoming more common in computer vision and industrial applications. In remote sensing, a single multispectral image may occupy hundreds of megabytes of disk space, and several such images may be produced by one measurement. This study considers the compression of multispectral images. The lossy compression is based on the wavelet transform, and we compare the suitability of different wavelet filters for the compression. A method for selecting a wavelet filter for the compression and reconstruction of multispectral images is developed. The performance of the compression based on the multidimensional wavelet transform is compared to that of other compression methods such as PCA, ICA, SPIHT, and DCT/JPEG. The quality of the compression and reconstruction is measured by quantitative measures such as the signal-to-noise ratio. In addition, we have developed a qualitative measure that combines information from the spatial and spectral dimensions of a multispectral image and also accounts for the visual quality of the bands of the multispectral image.
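As a sketch of the quantitative side, one standard definition of the signal-to-noise ratio between an original and a reconstructed image cube; the thesis's exact variant, and its qualitative measure, are not reproduced here:

```python
import numpy as np

def snr_db(original, reconstructed):
    # Ratio of signal energy to reconstruction-error energy, in decibels.
    signal = original.astype(float)
    noise = signal - reconstructed.astype(float)
    return 10.0 * np.log10(np.sum(signal ** 2) / np.sum(noise ** 2))
```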

Relevance:

20.00%

Publisher:

Abstract:

Technological progress has made a huge amount of data available at increasing spatial and spectral resolutions; therefore, the compression of hyperspectral data is an area of active research. In some fields the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction-based and transform-based methods are studied. Two kinds of prediction-based methods are considered. In the first, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of this linear prediction method is also presented. Two transform-based methods are presented as well. Vector Quantization (VQ) is used together with a new coding of the residual image. In addition, we have developed a new back end for a compression method utilizing Principal Component Analysis (PCA) and the Integer Wavelet Transform (IWT). The performance of the compression methods is compared to that of other compression methods. The results show that the proposed linear prediction methods outperform the previous methods. In addition, a novel fast exact nearest-neighbour search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
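As a sketch of the second, per-pixel scheme: the causal context below is hypothetical and a single previous-band regressor is assumed for brevity, so this illustrates the idea rather than the thesis's actual predictor.

```python
import numpy as np

# Hypothetical causal context: already-decoded neighbours of the current
# pixel (west, north-west, north, north-east). Border pixels need special
# handling, omitted here.
CONTEXT = [(0, -1), (-1, -1), (-1, 0), (-1, 1)]

def predict_sample(cube, b, y, x):
    """Recompute the prediction coefficient at every pixel by least
    squares over the causal context, then predict the current sample of
    band b from the co-located sample of band b - 1."""
    a = np.array([[float(cube[b - 1, y + dy, x + dx])] for dy, dx in CONTEXT])
    t = np.array([float(cube[b, y + dy, x + dx]) for dy, dx in CONTEXT])
    coef, *_ = np.linalg.lstsq(a, t, rcond=None)
    prediction = coef[0] * float(cube[b - 1, y, x])
    return int(round(prediction))  # residual = actual - prediction is coded
```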

Relevance:

20.00%

Publisher:

Abstract:

The need to compress image data has become ever more evident during the last ten years with the emergence of applications based on image data. Nowadays, special attention is paid to spectral images, whose storage and transmission require plenty of disk space and bandwidth. The wavelet transform has proven to be a good solution for lossy data compression. Its implementation in subband coding is based on wavelet filters, and the problem is the selection of a suitable wavelet filter for the different images to be compressed. This work presents a review of compression methods based on the wavelet transform. The focus of the work is the determination of orthogonal filters by parameterization. The work also establishes, by means of algebraic equations, the similarity of two different approaches. The experimental part contains a set of tests that justify the need for parameterization: different images require different filters, and different compression ratios are achieved with different filters. Finally, the compression of spectral images with the wavelet transform is carried out.
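As a sketch of what such a parameterization looks like (the thesis's own parameterization may be more general), the length-4 orthonormal low-pass filters form a one-parameter family:

```latex
h_0(\alpha)=\frac{1-\cos\alpha+\sin\alpha}{2\sqrt{2}},\qquad
h_1(\alpha)=\frac{1+\cos\alpha+\sin\alpha}{2\sqrt{2}},\qquad
h_2(\alpha)=\frac{1+\cos\alpha-\sin\alpha}{2\sqrt{2}},\qquad
h_3(\alpha)=\frac{1-\cos\alpha-\sin\alpha}{2\sqrt{2}}
```

Every value of the angle yields a valid orthogonal filter bank (the squared coefficients sum to one and the shift-by-two inner product vanishes); in particular, alpha = pi/2 recovers the Haar filter and alpha = pi/3 the Daubechies D4 filter, so sweeping the parameter searches the whole family.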

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this thesis is to present a new approach to the lossy compression of multispectral images. The proposed algorithm is based on a combination of clustering and quantization: clustering is investigated for compressing the spatial dimension, and vector quantization is applied for compressing the spectral dimension. The algorithm compresses a multispectral image in two passes. During the first pass, the class etalons are defined; in other words, each uniform area located inside the image is given a class number. Pixels not yet assigned to any cluster are handled during the second pass and assigned to the closest etalon. Finally, the compressed image is represented as a flat index image pointing to a codebook of etalons. The decompression stage is likewise immediate. The proposed method has been tested on satellite multispectral images from different sources. Numerical results and illustrative examples of the method are also presented.
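A minimal sketch of the index-image representation, assuming the etalon codebook has already been built during the first pass (the uniform-area detection itself is omitted); array names are illustrative:

```python
import numpy as np

def compress(cube, etalons):
    # cube: (rows, cols, bands); etalons: (k, bands) codebook from the
    # first pass. Every pixel spectrum is mapped to its closest etalon,
    # leaving a flat index image plus the codebook.
    h, w, bands = cube.shape
    spectra = cube.reshape(-1, bands).astype(float)
    d = np.linalg.norm(spectra[:, None, :] - etalons[None, :, :], axis=2)
    return np.argmin(d, axis=1).reshape(h, w), etalons

def decompress(index_image, etalons):
    # Decompression is a single table lookup, hence effectively instant.
    return etalons[index_image]
```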

Relevance:

20.00%

Publisher:

Abstract:

The main purpose of this thesis is to introduce a new lossless compression algorithm for multispectral images. The proposed algorithm reduces the band-ordering problem to the problem of finding a minimum spanning tree in a weighted directed graph, where the set of graph vertices corresponds to the bands of the multispectral image and the arc weights are computed using a newly invented adaptive linear prediction model. The adaptive prediction model is an extended unification of 2- and 4-neighbour pixel-context linear prediction schemes. The algorithm predicts each image band individually, using the optimal prediction scheme defined by the adaptive prediction model and the optimal predicting band suggested by the minimum spanning tree. Its efficiency has been compared with that of the best lossless compression algorithms for multispectral images; three recently introduced algorithms were considered. The numerical results produced by these algorithms allow us to conclude that the adaptive-prediction-based algorithm is the best for the lossless compression of multispectral images. Real multispectral data captured from an airplane were used for the testing.
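The graph construction can be sketched as follows. The residual-energy cost below is a crude placeholder for the thesis's adaptive prediction model, and networkx's Edmonds-algorithm routine for minimum spanning arborescences stands in for whatever spanning-tree solver was actually used:

```python
import itertools
import numpy as np
import networkx as nx

def prediction_cost(ref_band, target_band):
    # Residual energy of a least-squares affine prediction of
    # target_band from ref_band (stand-in for the adaptive model).
    a = np.column_stack([ref_band.ravel(), np.ones(ref_band.size)])
    coef, *_ = np.linalg.lstsq(a, target_band.ravel(), rcond=None)
    residual = target_band.ravel() - a @ coef
    return float(np.sum(residual ** 2))

def band_ordering(cube):
    # cube: (bands, rows, cols). Vertices are bands; arc (i, j) is
    # weighted by the cost of predicting band j from band i. The
    # arborescence tells each band which band to predict it from;
    # the root band is coded without a reference.
    g = nx.DiGraph()
    for i, j in itertools.permutations(range(cube.shape[0]), 2):
        g.add_edge(i, j, weight=prediction_cost(cube[i], cube[j]))
    return nx.minimum_spanning_arborescence(g)
```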

Relevance:

20.00%

Publisher:

Abstract:

The strength properties of the paper coating layer are very important in converting and printing operations. Too high or too low a coating strength can cause several problems in printing. One problem related to coating strength is cracking at the fold. After printing, the paper is folded into its final form and the pages are stapled together. In folding, the coating can crack, causing aesthetic damage to the printed image, or in the worst case the centre sheet can fall off during stapling. When the paper is folded, one side undergoes tensile stresses and the other side compressive stresses; if the difference between these stresses is too high, the coating can crack at the fold. To better predict and prevent cracking at the fold, it is useful to know the strength properties of the coating layer. The tensile strength of the coating layer has been measured before, but not its compressive strength. This study sought a way to measure the compressive strength of the coating layer and investigated how different coatings behave in compression. The short-span crush test, normally used to measure the in-plane compressive strength of paperboard, was applied to the coating layer; in this method the free span of the specimen is very small, which prevents buckling. The compressive strength was measured for free coating films as well as for coated paper, and the tensile strength and Bendtsen air permeance of the coating films were also measured. The results showed that the shape of the pigment has a great effect on the strength of the coating. A platy pigment gave much better strength than round or needle-like pigments. On the other hand, calcined kaolin, which is also platy but whose particles are aggregated, decreased the strength substantially. The difference in strength can be explained by the packing of the particles, which affects the porosity and thus the strength: the platy kaolin packs much better than the others and creates a less porous structure. The results also showed that the binder properties have a great effect on the compressive strength of the coating layer. Both the amount of latex and its glass transition temperature, Tg, affect the strength. As the amount of latex increases, the strength of the coating also increases: a larger amount of latex binds the pigment particles together better and decreases the porosity. The compressive strength increased with increasing Tg, because a hard latex gives a stiffer and less elastic film than a soft latex.

Relevance:

20.00%

Publisher:

Abstract:

This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application they are applied to gray-level image compression. Both new transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modifications, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image defining the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighbourhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map whose weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms: three new image compression algorithms based on the DTOCS, and one based on the EDTOCS, are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally; it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
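A sketch in the spirit of the two-pass propagation described above, with the step cost taken as the gray-value difference plus one; the published DTOCS/EDTOCS may differ in details, and only the two image buffers mentioned in the abstract are used:

```python
import numpy as np

def dtocs_like(gray, region):
    # gray:   2-D gray-level image
    # region: boolean mask, True at the zero-distance reference pixels
    h, w = gray.shape
    dist = np.where(region, 0.0, np.inf)
    forward = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # forward-pass kernel
    backward = [(1, -1), (1, 0), (1, 1), (0, 1)]      # backward-pass kernel
    passes = [(forward, range(h), range(w)),
              (backward, range(h - 1, -1, -1), range(w - 1, -1, -1))]
    for offsets, rows, cols in passes:
        for y in rows:
            for x in cols:
                for dy, dx in offsets:
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        step = abs(float(gray[y, x]) - float(gray[yy, xx])) + 1.0
                        if dist[yy, xx] + step < dist[y, x]:
                            dist[y, x] = dist[yy, xx] + step
    return dist
```

As the abstract notes, for complicated images the pair of passes can simply be repeated until the distance map stops changing.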

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this thesis was to investigate the compression of filter cakes at high filtration pressures with five different test materials, and to compare the energy consumption of high-pressure compression with that of thermal drying. A secondary target was to investigate the particle deformation of the test materials during filtration and compression. The literature part consists of the basic theory of filtration and compression and of the basic parameters that influence the filtration process. There is also a brief description of all the test materials, including their properties and their industrial production and processing. Theoretical equations for calculating the energy consumption of filtration under different conditions are also presented. At the beginning of the experimental part, basic filtration tests were done with all five test materials. The filtration tests were made at eight different pressures, from 6 bar up to 100 bar, using a piston-press pressure filter, and were then repeated using a cylinder with a smaller slurry volume. Separate filtration tests were also done to investigate the deformation of solid particles during filtration and to find the optimal curve for raising the filtration pressure. The differences in energy consumption between high-pressure filtration and an ideal thermal drying process were determined partly experimentally and partly using theoretical equations. By comparing these two water-removal methods, the optimal ranges for their use were found with respect to energy efficiency. The results of the measurements show that the filtration rate increased and the moisture content of the filter cakes decreased as the filtration pressure was increased. The porosity of the filter cakes also mainly decreased when the filtration pressure was increased. Particle deformation during filtration was observed only with coal particles.
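The attraction of mechanical dewatering is easy to see from idealized numbers. A back-of-the-envelope sketch using generic physical constants, not the thesis's measured values:

```python
# Idealized energy per water-removal route.
LATENT_HEAT = 2.26e6  # J per kg of water evaporated (approximate)

def thermal_energy(kg_water):
    # Ideal thermal drying: latent heat of vaporization only.
    return LATENT_HEAT * kg_water

def filtration_energy(pressure_pa, filtrate_m3):
    # Ideal hydraulic work of pressing out the filtrate: p * V.
    return pressure_pa * filtrate_m3

# Removing 1 kg (~0.001 m3) of water at 100 bar costs about 10 kJ
# mechanically versus about 2260 kJ thermally.
print(filtration_energy(100e5, 1e-3))  # 10000.0 J
print(thermal_energy(1.0))             # 2260000.0 J
```

This roughly two-orders-of-magnitude gap is why pressure filtration is worth pushing as far as it can reach the target moisture, with thermal drying reserved for the remainder.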

Relevance:

20.00%

Publisher:

Abstract:

Most modern passenger aeroplanes use air cycle cooling. A high-speed air cycle is a reliable and light option, but not very efficient. This thesis presents research work done to design a novel vapour cooling cycle for aeroplanes. Owing to advancements in high-speed permanent magnet motors, the vapour cycle is seen as a competitive alternative to the air cycle in aeroplanes. The aerospace industry places tighter demands on the weight, reliability and environmental effects of the machinery than those met by conventional chillers, and thus modifications to the conventional design are needed. The thesis is divided into four parts: the initial screening of the working fluid, the 1-D design and performance values of the compressor, the 1-D off-design value predictions of the compressor, and the 3-D design of the compressor. R245fa was selected as the working fluid based on the screening study. The off-design range of the compressor was predicted to be wide and suitable for the application. The air-conditioning system developed is considerably smaller than previous designs using centrifugal compressors.

Relevance:

20.00%

Publisher:

Abstract:

The subject of the thesis is automatic sentence compression with machine learning, such that the compressed sentences remain grammatical and retain their essential meaning. There are multiple possible uses for the compression of natural language sentences; in this thesis the focus is the generation of television programme subtitles, which are often compressed versions of the original script of the programme. The main part of the thesis consists of machine learning experiments for automatic sentence compression using different approaches to the problem. The machine learning methods used are linear-chain conditional random fields (CRF) and support vector machines. We also examine which automatic text analysis methods provide useful features for the task. The data used for machine learning was supplied by Lingsoft Inc. and consists of Finnish-language subtitles in both compressed and uncompressed form. The models are compared to a baseline system taken from the literature, and comparisons are made both automatically and by human evaluation, because of the potentially subjective nature of the output. The best result is achieved using CRF sequence classification with a rich feature set. All the text analysis methods tried help classification, and the most useful is morphological analysis.
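One common way to cast the task, consistent with the linear-chain CRF approach above, is per-token keep/drop labelling. The sketch below uses sklearn-crfsuite as an example toolkit and crude suffix features as a stand-in for full morphological analysis; everything here is illustrative, not the thesis's actual setup:

```python
import sklearn_crfsuite

def token_features(tokens, i):
    # Per-token features; the suffix is a crude stand-in for the
    # morphological analysis that the thesis found most useful.
    w = tokens[i]
    return {
        "lower": w.lower(),
        "suffix3": w[-3:],
        "is_first": i == 0,
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

def sentence_features(tokens):
    return [token_features(tokens, i) for i in range(len(tokens))]

# X: one feature-dict list per sentence; y: "KEEP"/"DROP" label sequences
# obtained by aligning uncompressed subtitles with their compressed form.
def train(X, y):
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", c1=0.1, c2=0.1,
                               max_iterations=100)
    crf.fit(X, y)
    return crf
```

Dropping the tokens labelled "DROP" then yields the compressed sentence; grammaticality is what the human evaluation mentioned above is meant to check.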

Relevance:

10.00%

Publisher:

Abstract:

This master's thesis continues the development of the bridge crane calculation program of KCI Konecranes. The most important further development targets of the program were surveyed with a user questionnaire, and from these the most requested ones, and those best suited to the structural-mechanics scope of the thesis, were selected. The two topics chosen for the work are working out the strength calculation of a box-girder profile with a two-part web, and the design of a finite element model for the eight-wheel end carriage of a bridge crane. The thesis works out the theory related to these development targets, but the actual programming is left outside the scope of the work. In a box profile with a two-part web, the upper part of the web under the trolley rail is made thicker so that the web withstands the local stress caused by the trolley wheel load, the so-called crushing stress. Determining the crushing stress in the web plates is the most important task in the strength calculation of the two-part web. The most suitable methods for determining the membrane stress and the stress concentrations caused by the crushing in different constructions were sought from the literature and from standards. The membrane stress can be determined reliably using either the 45-degree rule or the method given in the standard, and the magnitude of the stress concentrations is obtained by multiplying the membrane stress by stress concentration factors. The validity of the methods was verified by building dozens of finite element models of the web with different dimensions and boundary conditions and by comparing the results of the finite element models with hand calculations; the hand-calculated stresses were made to correspond accurately to the results of the finite element models. The buckling and fatigue calculation of the two-part web was studied preliminarily. Eight-wheel end carriages are used in large bridge cranes to reduce the wheel loads and the crushing stresses of the runway. Finite element models were designed for both constructions used for the eight-wheel end carriage of a bridge crane: the articulated model and the rigid-frame model. Existing models were utilized in building the finite element models, which speeds up adding them to the program code and ensures compatibility with the other calculation modules. The boundary conditions of the vibration analysis of the finite element models were examined; based on the study, the boundary conditions of the vibration analysis need no changes, but the boundary conditions of the static analysis still require further study.
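For orientation, the 45-degree rule mentioned above admits a compact statement. This is a sketch of the generic load-dispersion idea only; the exact formula prescribed by the standard used in the thesis may differ:

```latex
% A wheel load F is assumed to disperse at 45 degrees through the rail
% (height h_r) and the top flange (thickness t_f) onto the web plate
% (thickness t_w), giving an effective loaded length and a membrane
% (crushing) stress at the top of the web:
\ell_{\mathrm{eff}} = b + 2\,(h_r + t_f), \qquad
\sigma_{\mathrm{crush}} = \frac{F}{\ell_{\mathrm{eff}}\, t_w}
% where b is the wheel contact width.
```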