Abstract:
Multispectral images contain information from several spectral wavelengths. They are widely used in remote sensing and are becoming more common in computer vision and industrial applications. A single multispectral image in remote sensing may occupy hundreds of megabytes of disk space, and several such images may be produced by a single measurement. This study considers the compression of multispectral images. The lossy compression is based on the wavelet transform, and the suitability of different wavelet filters for the compression is compared. A method for selecting a wavelet filter for the compression and reconstruction of multispectral images is developed. The performance of the compression based on the multidimensional wavelet transform is compared to that of other compression methods such as PCA, ICA, SPIHT, and DCT/JPEG. The quality of the compression and reconstruction is measured by quantitative measures such as the signal-to-noise ratio. In addition, a qualitative measure has been developed which combines information from the spatial and spectral dimensions of a multispectral image and which also accounts for the visual quality of the individual bands.
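The signal-to-noise ratio used above as a quantitative quality measure can be sketched as follows; this is a minimal illustration of the standard definition, not the thesis's own evaluation code, and the sample values are invented.

```python
import math

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in decibels between an original band
    and its reconstruction (a common quantitative quality measure)."""
    signal_power = sum(x * x for x in original)
    noise_power = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    if noise_power == 0:
        return float("inf")  # perfect (lossless) reconstruction
    return 10.0 * math.log10(signal_power / noise_power)

# Example: one band distorted by small reconstruction errors
band = [10.0, 12.0, 11.0, 9.0]
approx = [10.1, 11.9, 11.0, 9.2]
print(round(snr_db(band, approx), 1))
```

Higher values indicate a reconstruction closer to the original; a lossless method would score infinity on this scale.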
Abstract:
Technological progress has made a huge amount of data available at ever-increasing spatial and spectral resolutions, so the compression of hyperspectral data is an area of active research. In some fields the original quality of a hyperspectral image cannot be compromised, and in these cases lossless compression is mandatory. The main goal of this thesis is to provide improved methods for the lossless compression of hyperspectral images. Both prediction- and transform-based methods are studied. Two kinds of prediction-based methods are considered. In the first, the spectra of a hyperspectral image are first clustered and an optimized linear predictor is calculated for each cluster. In the second, the linear prediction coefficients are not fixed but are recalculated for each pixel. A parallel implementation of the latter method is also presented. Two transform-based methods are presented as well. Vector quantization (VQ) was used together with a new coding of the residual image. In addition, a new back end was developed for a compression method utilizing principal component analysis (PCA) and the integer wavelet transform (IWT). The performance of the compression methods is compared to that of other compression methods, and the results show that the proposed linear prediction methods outperform previous ones. Finally, a novel fast exact nearest-neighbor search method is developed and used to speed up the Linde-Buzo-Gray (LBG) clustering method.
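The core idea behind prediction-based lossless compression can be sketched in a few lines: predict each sample from its predecessors, store only the small integer residuals, and invert the prediction exactly on decoding. This is a generic illustration with made-up data, not the thesis's clustered or per-pixel predictor.

```python
def predict_residuals(samples, coeffs):
    """Encode: each sample is predicted from the previous len(coeffs)
    samples; only the (typically small) residuals are stored."""
    order = len(coeffs)
    residuals = list(samples[:order])  # warm-up samples stored verbatim
    for i in range(order, len(samples)):
        pred = round(sum(c * samples[i - order + j] for j, c in enumerate(coeffs)))
        residuals.append(samples[i] - pred)
    return residuals

def reconstruct(residuals, coeffs):
    """Decode: invert the prediction exactly, so compression is lossless."""
    order = len(coeffs)
    samples = list(residuals[:order])
    for i in range(order, len(residuals)):
        pred = round(sum(c * samples[i - order + j] for j, c in enumerate(coeffs)))
        samples.append(residuals[i] + pred)
    return samples

spectrum = [100, 102, 105, 107, 110, 112]
res = predict_residuals(spectrum, [-1.0, 2.0])  # simple linear extrapolation
assert reconstruct(res, [-1.0, 2.0]) == spectrum
```

Because prediction and reconstruction round identically, the round trip is exact; the compression gain comes from entropy-coding the small residuals instead of the raw values.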
Abstract:
Summary: Anther culture ability and associated genetic markers in progeny of crosses between cultivated oat and wild oat
Abstract:
This master's thesis deals with the measurement of paper surface roughness, one of the central problems in the study of paper materials. The measurement methods used in the paper industry have several drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, as well as demanding laboratory conditions and slow operation. This work investigates methods based on optical scattering for determining surface roughness. Machine vision and image processing techniques were studied on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results show that surface roughness can be measured by imaging. The best agreement between the traditional method and the imaging-based method was obtained with a method based on the fractal dimension.
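One standard way to estimate a fractal dimension, box counting, can be sketched as below. This is an assumption about the general technique, not the thesis's Matlab® implementation: count the boxes a point set occupies at several scales and fit the slope of log(count) against log(1/scale).

```python
import math

def box_counting_dimension(points, scales):
    """Estimate the fractal dimension of a 2-D point set by counting
    occupied boxes at several scales and fitting log(count) vs log(1/s)."""
    logs = []
    for s in scales:
        boxes = {(math.floor(x / s), math.floor(y / s)) for x, y in points}
        logs.append((math.log(1.0 / s), math.log(len(boxes))))
    # least-squares slope of log(count) against log(1/scale)
    n = len(logs)
    mx = sum(a for a, _ in logs) / n
    my = sum(b for _, b in logs) / n
    return (sum((a - mx) * (b - my) for a, b in logs)
            / sum((a - mx) ** 2 for a, _ in logs))

# Sanity check: a straight line should have dimension close to 1
line = [(i / 1000.0, i / 1000.0) for i in range(1000)]
print(box_counting_dimension(line, [0.1, 0.05, 0.025, 0.0125]))
```

For a rough surface profile the estimated dimension lies between 1 and 2, and rougher profiles give larger values, which is what makes the measure usable as a roughness indicator.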
Abstract:
This thesis gives an overview of the use of level set methods in image science. The related fast marching method is discussed for comparison, and the narrow band and particle level set methods are also introduced. The level set method is a numerical scheme for representing, deforming, and recovering structures in an arbitrary number of dimensions. It approximates and tracks moving interfaces, dynamic curves, and surfaces. The level set method does not define how or why a boundary is advancing the way it is; it simply represents and tracks the boundary. The principal idea of the level set method is to represent an N-dimensional boundary in N+1 dimensions, which gives the generality to represent even complex boundaries. Level set methods can be powerful tools for representing dynamic boundaries, but they can require a lot of computing power; the basic level set method in particular carries a considerable computational burden. This burden can be alleviated with more sophisticated versions of the algorithm, such as the narrow band level set method, or with a programmable hardware implementation. A parallel approach can also be used in suitable applications. It is concluded that these methods can be used in a broad range of image applications, such as computer vision and graphics, scientific visualization, and the solution of problems in computational physics. Level set methods, and methods derived from and inspired by them, will remain at the front line of image processing in the future.
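The "N-dimensional boundary in N+1 dimensions" idea can be made concrete with a signed distance function: a 1-D curve (a circle) is stored implicitly as the zero level set of a 2-D function. The sketch below is a minimal illustration of that representation, not an evolution scheme.

```python
import math

def circle_sdf(x, y, cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside.
    The circle itself is the zero level set of this function."""
    return math.hypot(x - cx, y - cy) - r

def zero_level_cells(sdf, nx, ny, h):
    """Grid cells where the level set function changes sign mark the
    implicit boundary (a 1-D curve represented in 2-D, i.e. N+1 dims)."""
    cells = []
    for i in range(nx - 1):
        for j in range(ny - 1):
            vals = [sdf(i * h, j * h), sdf((i + 1) * h, j * h),
                    sdf(i * h, (j + 1) * h), sdf((i + 1) * h, (j + 1) * h)]
            if min(vals) < 0.0 <= max(vals):
                cells.append((i, j))
    return cells

boundary = zero_level_cells(lambda x, y: circle_sdf(x, y, 0.5, 0.5, 0.3),
                            50, 50, 0.02)
print(len(boundary))  # number of grid cells the circle passes through
```

Deforming the boundary then amounts to updating the function values on the grid, which is why topology changes (merging and splitting of curves) come for free in the level set formulation.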
Abstract:
An important task in environmental monitoring is to assess the current state of the environment and the changes caused to it by human activity, and to analyse and identify the relationships between them. Environmental change can be managed by collecting and analysing data. This master's thesis studies observed changes in aquatic vegetation using remotely sensed measurement data and image analysis methods. Aerial images of Lake Saimaa, the largest lake in Finland, taken in 1996 and 1999, were used for the environmental monitoring. The first step of the image analysis is geometric correction, whose purpose is to align and register the images to a common coordinate system. The second step is to match the corresponding local regions and to detect changes in the vegetation. Several approaches to vegetation classification were used, including both supervised and unsupervised methods. The study used real, noisy measurement data, and the experiments based on it gave good results.
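Once two images are registered to a common coordinate system, the simplest form of change detection is pixelwise differencing with a threshold. The sketch below illustrates that baseline idea on toy data; the thesis's supervised and unsupervised classifiers are more elaborate than this.

```python
def detect_changes(before, after, threshold):
    """Pixelwise change detection between two co-registered grayscale
    images (lists of rows): flag pixels whose difference exceeds a threshold."""
    return [[1 if abs(b - a) > threshold else 0
             for b, a in zip(rb, ra)]
            for rb, ra in zip(before, after)]

img_1996 = [[10, 10, 10],
            [10, 50, 10],
            [10, 10, 10]]
img_1999 = [[10, 10, 10],
            [10, 90, 10],
            [12, 10, 10]]
mask = detect_changes(img_1996, img_1999, threshold=20)
print(mask)  # only the strongly changed centre pixel is flagged
```

The threshold suppresses small differences caused by measurement noise, which matters precisely because the study used real, noisy data.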
Abstract:
The main objective of the study was to help the Myllykoski group determine which factors the future image of the group's new sales organisation, Myllykoski Sales, should consist of. The study therefore sought to establish the current state of the Myllykoski group's corporate identity and the desired image factors of Myllykoski Sales, and to compare how well they correspond. In addition, the study examined the group's current image and its desired future image. For an image-building process to succeed, the image to be built and the image factors to be communicated should be based on the actual characteristics of the corporate identity. Corporate identity can be defined as equivalent to the corporate image held by internal stakeholders, and the current corporate identity can therefore be revealed by studying employees' opinions of their work organisation. Accordingly, the study was carried out with two e-mail surveys directed at the sales and marketing personnel of the Myllykoski group. The response rates were 71.4% (management, 14 questionnaires sent) and 51.9% (other personnel, 108 questionnaires sent). The responses were analysed both qualitatively and quantitatively. They clearly revealed the current state of the Myllykoski group's corporate identity, the current and desired image, and the desired image factors of Myllykoski Sales. When the desired image factors were compared with the group's corporate identity, most of them were found to correspond to the current characteristics of the identity, and these factors could therefore safely be communicated when building the image of Myllykoski Sales. The communication of certain desired image factors, however, should be seriously reconsidered so that an unrealistic image is not built.
Abstract:
This master's thesis surveys the literature on how evolutionary algorithms are used to solve different search and optimisation problems in the area of software engineering. Evolutionary algorithms are methods that imitate the process of natural evolution. An artificial evolution process evaluates the fitness of each individual, the individuals being candidate solutions. The next population of candidate solutions is formed from the good properties of the current population by applying mutation and crossover operations. Different kinds of evolutionary algorithm applications related to software engineering were searched for in the literature, then classified and described, and the necessary basics of evolutionary algorithms were presented. It was concluded that the majority of evolutionary algorithm applications in software engineering concern software design or testing. For example, there were applications for classifying software production data, project scheduling, static task scheduling in parallel computing, allocating modules to subsystems, N-version programming, test data generation, and generating an integration test order. Many applications were experimental rather than ready for real production use. There were also some computer-aided software engineering (CASE) tools based on evolutionary algorithms.
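The evolutionary loop described above, with fitness evaluation, selection, crossover, and mutation, can be sketched as a minimal genetic algorithm. The OneMax fitness function and all parameter values here are illustrative choices, not taken from any surveyed application.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def evolve(fitness, length, pop_size=30, generations=60, mutation_rate=0.02):
    """A minimal genetic algorithm over bit strings: truncation selection,
    one-point crossover, and bit-flip mutation."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # keep the fitter half
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randint(1, length - 1)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ 1 if random.random() < mutation_rate else bit
                     for bit in child]           # bit-flip mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

# OneMax: fitness is simply the number of 1-bits in the string
best = evolve(fitness=sum, length=20)
print(sum(best))
```

In software engineering uses, the bit string would instead encode, say, a test input or a module-to-subsystem assignment, and the fitness function would measure coverage or coupling.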
Abstract:
OBJECTIVES OF THE STUDY. The aim of the thesis was first to create an overview of the role of brand marketing in industrial markets and of the significance of relationship marketing in industrial brand marketing. The second essential objective was to describe theoretically the structure of brand identity in an industrial company and its effects on the sales personnel; in addition, the added value of brands to both the customer and the seller was examined. Identity and its effects, especially image, were also studied empirically. DATA AND METHODOLOGY. The theoretical part of the thesis is based on literature, academic journals and earlier studies, focusing on brand marketing, identity and image, and on relationship marketing as part of brand marketing. The approach of the study is descriptive, and both qualitative and quantitative. The study is a case study in which an international packaging board company was chosen as the case company. The empirical part was carried out with a web-based survey used to collect data from the sales personnel of the case company, and it was further extended by examining secondary sources such as the company's internal written documents and studies. RESULTS. As a result of the theoretical and empirical study, a model was created that can be used to support brand marketing decision-making in the packaging board industry. Industrial brand management should focus in particular on branding customer relationships, which could be called industrial relationship branding. Product elements and values, differentiation and positioning, internal corporate image and communication are the cornerstones of industrial brand identity, which together create the brand image. The product and corporate images held by the case company's sales personnel proved to be good overall. The CKB products have the best image, while the WLC products have the weakest.
Industrial brands can create many kinds of added value for both the customer company and the seller company.
Abstract:
Current-day web search engines (e.g., Google) do not crawl and index a significant portion of the Web; hence, web users relying on search engines alone are unable to discover and access a large amount of information in the non-indexable part of the Web. Specifically, dynamic pages generated from parameters provided by a user via web search forms (or search interfaces) are not indexed by search engines and cannot be found in search results. Such search interfaces provide web users with online access to myriads of databases on the Web. To obtain information from a web database of interest, a user issues a query by specifying query terms in a search form and receives the query results: a set of dynamic pages that embed the required information from the database. At the same time, issuing a query via an arbitrary search interface is an extremely complex task for any kind of automatic agent, including web crawlers, which, at least up to the present day, do not even attempt to pass through web forms on a large scale. In this thesis, our primary and key object of study is the huge portion of the Web (hereafter referred to as the deep Web) hidden behind web search interfaces. We concentrate on three classes of problems around the deep Web: characterizing the deep Web, finding and classifying deep web resources, and querying web databases. Characterizing the deep Web: Though the term deep Web was coined in 2000, a long time ago for any web-related concept or technology, we still do not know many important characteristics of the deep Web. Another matter of concern is that the surveys of the deep Web conducted so far are predominantly based on studies of deep web sites in English, so their findings may be biased, especially given the steady increase in non-English web content.
Surveying national segments of the deep Web is thus of interest not only to national communities but to the whole web community as well. In this thesis, we propose two new methods for estimating the main parameters of the deep Web. We use the suggested methods to estimate the scale of one specific national segment of the Web and report our findings. We also build and make publicly available a dataset describing more than 200 web databases from that national segment. Finding deep web resources: The deep Web has been growing at a very fast pace, and it has been estimated that there are hundreds of thousands of deep web sites. Due to the huge volume of information in the deep Web, there has been significant interest in approaches that allow users and computer applications to leverage this information. Most approaches have assumed that the search interfaces to the web databases of interest are already discovered and known to the query systems. Such assumptions do not hold, however, mostly because of the large scale of the deep Web: for any given domain of interest there are simply too many web databases with relevant content. Thus, the ability to locate search interfaces to web databases becomes a key requirement for any application accessing the deep Web. In this thesis, we describe the architecture of the I-Crawler, a system for finding and classifying search interfaces. The I-Crawler is intentionally designed to be used in deep web characterization studies and for constructing directories of deep web resources. Unlike almost all other approaches to the deep Web so far, the I-Crawler is able to recognize and analyze JavaScript-rich and non-HTML searchable forms. Querying web databases: Retrieving information by filling out web search forms is a typical task for a web user, all the more so as the interfaces of conventional search engines are also web forms.
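The first step of finding a deep web resource, recognizing that a page contains a searchable form at all, can be sketched with a rough heuristic: a page is a candidate if it has a form with at least one free-text input. This toy detector is an assumption for illustration, not the I-Crawler's actual classifier (which also handles JavaScript-rich and non-HTML forms).

```python
from html.parser import HTMLParser

class SearchFormFinder(HTMLParser):
    """Flag pages that contain a form with a free-text input field,
    a rough heuristic for 'this page offers a search interface'."""
    def __init__(self):
        super().__init__()
        self.in_form = False
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "form":
            self.in_form = True
        elif tag == "input" and self.in_form:
            attr = dict(attrs)
            # an <input> without a type attribute defaults to text
            if attr.get("type", "text") in ("text", "search"):
                self.found = True

    def handle_endtag(self, tag):
        if tag == "form":
            self.in_form = False

page = ('<html><body><form action="/search"><input name="q" type="text">'
        '<input type="submit"></form></body></html>')
finder = SearchFormFinder()
finder.feed(page)
print(finder.found)  # True
```

A real classifier must go further and separate search interfaces from login or registration forms, which is where the learning component of a system like the I-Crawler comes in.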
At present, a user needs to manually provide input values to search interfaces and then extract the required data from the result pages. Filling out forms manually is tedious and cumbersome for complex queries, yet such queries are essential for many web searches, especially in the area of e-commerce. The automation of querying and retrieving data behind search interfaces is therefore desirable and essential for tasks such as building domain-independent deep web crawlers and automated web agents, searching for domain-specific information (vertical search engines), and extracting and integrating information from various deep web resources. We present a data model for representing search interfaces and discuss techniques for extracting field labels, client-side scripts, and structured data from HTML pages. We also describe a representation of result pages and discuss how to extract and store the results of form queries. In addition, we present a user-friendly and expressive form query language that allows one to retrieve information behind search interfaces and to extract useful data from the result pages based on specified conditions. We implement a prototype system for querying web databases and describe its architecture and component design.
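The essence of a search-interface data model can be sketched as a mapping from human-readable field labels to the form's internal field names, from which the query a browser would submit is generated mechanically. The form structure and names below (`example.org`, fields `t` and `a`) are hypothetical, and this is only a GET-style illustration, not the thesis's full model or query language.

```python
from urllib.parse import urlencode

def build_form_query(form, values):
    """Given a simple model of a GET search form (action URL plus a
    label -> field-name mapping), produce the URL a browser would request."""
    params = {form["fields"][label]: value for label, value in values.items()}
    return form["action"] + "?" + urlencode(params)

# A hypothetical book-search interface described by its extracted labels
book_form = {
    "action": "http://example.org/search",
    "fields": {"Title": "t", "Author": "a"},
}
url = build_form_query(book_form, {"Title": "compression", "Author": "Smith"})
print(url)
```

Separating labels from field names is the key design choice: a query can then be stated against the labels a user sees, while the system handles the site-specific parameter names.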
Abstract:
Coating and filler pigments have a strong influence on the properties of paper. The filler content can exceed 30%, and the pigment content of a coating is about 85-95 weight per cent. The physical and chemical properties of pigments differ, and knowledge of these properties is important for optimising the optical and printing properties of paper. The size and shape of pigment particles can be measured by various analysers based on sedimentation, laser diffraction, changes in an electric field, etc. This master's thesis investigated particle properties in particular with a scanning electron microscope (SEM) and image analysis programs. The research covered nine pigments of different particle size and shape. The pigments were analysed with two image analysis programs (INCA Feature and Poikki), a Coulter LS230 (laser diffraction), and a SediGraph 5100 (sedimentation). The results were compared to see how particle shape affects the performance of the analysers. Only the image analysis programs provided parameters describing particle shape. One part of the research was also sample preparation for SEM: in an ideal sample, the individual particles are separated and distinct. The analysis methods gave different results, but the results of the image analysis programs corresponded to either sedimentation or laser diffraction depending on the particle shape. Detailed analysis of particle shape required a high magnification in the SEM, but the measured parameters described the shape of the particles very well. Large particles (ecd ~1 µm) could also be used in 3D modelling, which enabled measurement of particle thickness. The scanning electron microscope and image analysis programs proved to be effective and versatile tools for particle analysis. Further development and experience will determine the suitability of the method for routine use.
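The equivalent circular diameter (ecd) mentioned above is a standard size parameter in image analysis: the diameter of a circle with the same projected area as the measured particle. A minimal sketch of it, together with one simple shape parameter, is shown below; the sample values are invented.

```python
import math

def equivalent_circular_diameter(area):
    """ecd: diameter of a circle with the same projected area as the
    particle (areas and diameters in consistent units, e.g. µm)."""
    return 2.0 * math.sqrt(area / math.pi)

def aspect_ratio(major, minor):
    """A simple shape parameter: ratio of the longest to the shortest axis.
    1.0 for a circle; larger values for elongated (e.g. platy) particles."""
    return major / minor

# A particle whose projected area equals that of a circle of radius 0.5 µm
area = math.pi * 0.5 ** 2          # µm²
print(round(equivalent_circular_diameter(area), 3))  # 1.0
```

Sedimentation and laser diffraction report their own equivalent diameters under different physical assumptions, which is why the analysers in the study disagree for non-spherical particles while image analysis can report shape explicitly.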
Abstract:
The number of digital images has been increasing exponentially in the last few years. People have problems managing their image collections and finding specific images. An automatic image categorization system could help them do both. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. Because the system is unsupervised, it does not need manually obtained, labeled training images, so the number of possible categories and images can be huge. The system extracts local features from the images. These local features are used to build a codebook, and the local features and the codebook are then used to generate a feature vector for each image. Images are categorized based on these feature vectors. The system is able to categorize any given set of images based on their visual appearance: images that have similar image regions are grouped into the same category, so that, for example, images containing cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine, which could categorize images so that a user can easily find a specific type of image.
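The step from local features and a codebook to a per-image feature vector is the classic bag-of-visual-words construction: assign each local feature to its nearest codeword and count the assignments. The sketch below uses toy 2-D features for illustration; real descriptors are high-dimensional and the codebook is itself learned by clustering.

```python
def nearest(codebook, feature):
    """Index of the codeword closest to a local feature (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], feature)))

def image_histogram(codebook, features):
    """Bag-of-visual-words descriptor: count how often each codeword is
    the nearest match among the image's local features."""
    hist = [0] * len(codebook)
    for f in features:
        hist[nearest(codebook, f)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # toy 2-D codewords
features = [(0.1, 0.0), (0.9, 1.1), (0.2, 0.1), (0.0, 0.8)]
print(image_histogram(codebook, features))  # [2, 1, 1]
```

Images with similar regions produce similar histograms, so an ordinary clustering of these vectors groups, say, car images together without any labels.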
Abstract:
This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray level images are presented, and as a new application, distance transforms are applied to gray level image compression. Both new distance transforms extend the well known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (the DTOCS) and a real-valued distance transform (the EDTOCS) on gray level images. Both the DTOCS and the EDTOCS require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All the other gray-weighted distance functions (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner; the DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way:
it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare, and this thesis introduces a new application area for them. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented, along with several decompressed images. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
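The underlying two-pass idea that the DTOCS builds on can be sketched on a binary image: a forward raster sweep followed by a backward sweep, each updating a pixel from half of its 8-neighborhood. This is only the classic chessboard transform for illustration; the DTOCS itself additionally weights each step by the gray value differences of the original image.

```python
def chessboard_distance_transform(binary):
    """Two-pass chessboard distance transform on a binary image.
    Input: rows of 0 (object) / 1 (background); output: the chessboard
    distance of every pixel to the nearest object pixel."""
    INF = 10 ** 9
    h, w = len(binary), len(binary[0])
    d = [[0 if v == 0 else INF for v in row] for row in binary]
    # forward pass: top-left to bottom-right, upper-left half of the kernel
    for y in range(h):
        for x in range(w):
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    # backward pass: bottom-right to top-left, lower-right half of the kernel
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            for dy, dx in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    d[y][x] = min(d[y][x], d[ny][nx] + 1)
    return d

img = [[1, 1, 1],
       [1, 0, 1],
       [1, 1, 1]]
print(chessboard_distance_transform(img))  # all 1s around the single 0
```

Only the distance buffer and the input image are needed, which matches the two-buffer property claimed for the DTOCS; replacing the constant step cost 1 with a gray-level difference is the direction in which the DTOCS generalizes this scheme.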