32 results for Image quality perception

in Doria (National Library of Finland DSpace Services) - National Library of Finland, Finland


Relevance: 100.00%

Abstract:

This study discusses how audiovisual content can influence brand quality perceptions. The purpose of the study is to explore how audiovisual content creation can increase brand quality perceptions. The research problem is addressed with three sub-questions, which aim at clarifying the role of emotions between content marketing and brand quality perception, explaining how different functions of audiovisual content can increase brand quality perception, and identifying and comparing the key differences in content creation in business-to-consumer and business-to-business contexts. The theoretical background of the study lies in the literature on brand personality, consumer emotions, consumer-brand relationships, content marketing, and B2B branding. The empirical part is a single-case study. The case company was a Swiss startup that wished to build a high-quality brand for both B2C and B2B segments. The empirical data was collected in September 2014. Eight interviews were conducted: seven with target-segment representatives and one with an existing customer of the case company. The empirical findings were analyzed with thematic analysis, and a five-stage framework offering a guideline for high-quality content creation was created based on the findings. The study finds that emotions play an important role in brand quality perceptions. Psychological processes (emotion, cognition, and conation) influence the engagement process of the target segment, which can ultimately lead to activation and electronic word-of-mouth. Brand quality perception is the result of the overall emotion of the brand, which derives from brand personality, brand concept, product attributes, and the utilitarian benefits of the brand. The entertaining and educational functions of audiovisual content can target and evoke these emotional processes and result in increased quality perceptions.
In the B2B context, emotions are found to play a relatively smaller role in quality perception processes. However, their significance cannot be ignored, since they can emphasize the value for the buying organization and build trust and loyalty among potential customers. The final framework presents five stages of content creation that ultimately improve brand quality perceptions. These stages help marketers design and implement their content and evoke positive emotions in their target segment as part of a quality-based marketing strategy. Further research is warranted to quantitatively test the generalizability of the framework, and to make the framework adaptable to different stages of the brand life cycle.

Relevance: 100.00%

Abstract:

Image filtering is a highly demanded image enhancement approach in digital imaging system design. It is widely used in television and camera design to improve the quality of the output image and to avoid problems such as image blurring, which gains importance in the design of large displays and digital cameras. This thesis proposes a new image filtering method based on visual characteristics of the human eye, such as the modulation transfer function (MTF). In contrast to traditional filtering methods based on human visual characteristics, this thesis takes into account the anisotropy of human vision. The proposed method is based on laboratory measurements of the human-eye MTF and accounts for the degradation of the image that it causes: the image is pre-compensated for the degradation introduced by the eye's MTF, so that the viewer perceives the original image quality. The thesis gives a basic understanding of the image filtering approach and the concept of the MTF, and describes an algorithm for image enhancement based on the MTF of the human eye. Experiments have shown good results according to human evaluation, and suggestions for future improvements of the algorithm are also given.
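The pre-compensation idea can be sketched in the frequency domain. The thesis relies on measured, anisotropic eye-MTF data; the isotropic Gaussian MTF model, the cutoff parameter `sigma`, and the regularization constant `eps` below are illustrative assumptions only:

```python
import numpy as np

def eye_mtf(fx, fy, sigma=0.15):
    """Assumed isotropic Gaussian model of the human-eye MTF
    (the thesis uses measured, anisotropic data; this is a stand-in)."""
    f = np.sqrt(fx**2 + fy**2)
    return np.exp(-(f / sigma) ** 2)

def precompensate(image, sigma=0.15, eps=0.05):
    """Amplify the frequencies that the eye attenuates, so that the image
    degraded by the eye MTF approximates the original."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    mtf = eye_mtf(fx, fy, sigma)
    spectrum = np.fft.fft2(image)
    # Regularized inverse filter: avoid dividing by near-zero MTF values.
    filtered = spectrum * mtf / (mtf**2 + eps)
    return np.real(np.fft.ifft2(filtered))
```

The regularization term keeps the inverse filter bounded at high frequencies, where any measured MTF approaches zero.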

Relevance: 100.00%

Abstract:

The ongoing development of digital media has brought a new set of challenges with it. As images containing more than three wavelength bands, often called spectral images, become a more integral part of everyday life, problems in the quality of RGB reproductions from spectral images have turned into an important area of research. The notion of image quality is often thought to comprise two distinct areas, image quality itself and image fidelity, both dealing with similar questions: image quality is the degree of excellence of the image, and image fidelity is the measure of the match between the image under study and the original. In this thesis, both image fidelity and image quality are considered, with an emphasis on the influence of color and spectral image features on both; very few works are dedicated to the quality and fidelity of spectral images. Several novel image fidelity measures were developed in this study, including kernel similarity measures and 3D-SSIM (structural similarity index). The kernel measures incorporate the polynomial, Gaussian radial basis function (RBF), and sigmoid kernels. The 3D-SSIM is an extension of the traditional gray-scale SSIM measure, developed to incorporate spectral data. The novel image quality model presented in this study is based on the assumption that the statistical parameters of the spectra of an image influence its overall appearance. The spectral image quality model comprises three quality parameters: colorfulness, vividness, and naturalness. The quality prediction is done by modeling the preference function expressed in JNDs (just noticeable differences). Both the image fidelity measures and the image quality model have proven effective in the respective experiments.
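The abstract does not specify the kernel measures in detail; the following is a minimal sketch of how a Gaussian RBF kernel could score fidelity between two spectral images, with `gamma` as an assumed free parameter:

```python
import numpy as np

def rbf_kernel_similarity(spec_a, spec_b, gamma=1.0):
    """Gaussian RBF kernel similarity between two spectra:
    k(a, b) = exp(-gamma * ||a - b||^2). Returns 1.0 for identical spectra."""
    diff = np.asarray(spec_a, float) - np.asarray(spec_b, float)
    return float(np.exp(-gamma * np.dot(diff, diff)))

def image_kernel_fidelity(img_a, img_b, gamma=1.0):
    """Mean per-pixel RBF kernel similarity between two spectral images
    of shape (H, W, bands); a crude stand-in for the thesis's measures."""
    diff = img_a.astype(float) - img_b.astype(float)
    sq = np.sum(diff**2, axis=-1)
    return float(np.mean(np.exp(-gamma * sq)))
```

A score of 1.0 indicates a perfect spectral match; the score decays toward 0 as the per-pixel spectral distance grows.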

Relevance: 90.00%

Abstract:

The topic of this thesis is how lesions in the retina caused by diabetic retinopathy can be detected from color fundus images using machine vision methods. Methods were developed for equalizing uneven illumination in fundus images, detecting regions of poor image quality due to inadequate illumination, and recognizing abnormal lesions. The developed methods exploit mainly color information and simple shape features to detect lesions. In addition, a graphical tool for collecting lesion data was developed. The tool was used by an ophthalmologist, who marked lesions in the images to support method development and evaluation. The tool is a general-purpose one, and it can thus be reused in similar projects. The developed methods were tested with a separate test set of 128 color fundus images. From the test results it was calculated how accurately the methods classify abnormal fundi as abnormal (sensitivity) and healthy fundi as normal (specificity). The sensitivity values were 92% for hemorrhages, 73% for red small dots (microaneurysms and small hemorrhages), and 77% for exudates (hard and soft exudates). The specificity values were 75% for hemorrhages, 70% for red small dots, and 50% for exudates. Thus, the developed methods detected hemorrhages accurately, and microaneurysms and exudates moderately.
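The reported figures follow the standard definitions of sensitivity and specificity, which can be stated directly (the counts below are illustrative, not the thesis's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): share of abnormal fundi flagged abnormal.
    Specificity = TN / (TN + FP): share of healthy fundi classified normal."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 92 of 100 abnormal cases detected,
# 75 of 100 healthy cases passed as normal.
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=75, fp=25)
```

With these counts the function returns 0.92 and 0.75, matching the hemorrhage figures quoted above.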

Relevance: 90.00%

Abstract:

Multispectral images contain information from several spectral wavelengths. They are widely used in remote sensing and are becoming more common in computer vision and industrial applications. A single multispectral image in remote sensing may occupy hundreds of megabytes of disk space, and several such images may be produced by one measurement. This study considers the compression of multispectral images. The lossy compression is based on the wavelet transform, and the suitability of different wavelet filters for the compression is compared. A method for selecting a wavelet filter for the compression and reconstruction of multispectral images is developed. The performance of the compression based on the multidimensional wavelet transform is compared to other compression methods such as PCA, ICA, SPIHT, and DCT/JPEG. The quality of the compression and reconstruction is measured with quantitative measures such as the signal-to-noise ratio. In addition, a qualitative measure is developed that combines information from the spatial and spectral dimensions of a multispectral image and also accounts for the visual quality of its bands.
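The quantitative measure mentioned, signal-to-noise ratio, can be sketched for a reconstructed cube as follows (a generic textbook definition, not necessarily the exact variant used in the thesis):

```python
import numpy as np

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in decibels between an original multispectral
    cube and its reconstruction after lossy compression."""
    original = np.asarray(original, float)
    noise = original - np.asarray(reconstructed, float)
    signal_power = np.sum(original**2)
    noise_power = np.sum(noise**2)
    if noise_power == 0:
        return float("inf")   # perfect reconstruction
    return 10.0 * np.log10(signal_power / noise_power)
```

Higher values mean a more faithful reconstruction; a lossless round trip yields infinite SNR.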

Relevance: 90.00%

Abstract:

This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, they are applied to gray-level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz, and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-value distance transform (EDTOCS) on gray-level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image that defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance algorithms (GRAYMAT etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map, in which the weights are not constant but the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way.
It propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented. Several decompressed images are presented; the best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
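Under the description above, a two-pass DTOCS can be sketched as follows. The seed convention and the per-step cost of 1 plus the gray-value difference follow the text; the 8-neighborhood kernel and the iteration handling are simplifying assumptions:

```python
import numpy as np

def dtocs(gray, seeds, n_iter=2):
    """Sketch of the two-pass DTOCS: a chessboard-like distance transform on
    a gray-level image where each step costs 1 plus the absolute gray-value
    difference between neighbors. `seeds` is boolean; True pixels get 0."""
    h, w = gray.shape
    g = gray.astype(float)
    d = np.where(seeds, 0.0, np.inf)
    # Causal (forward) and anti-causal (backward) halves of the 8-neighborhood.
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]
    for _ in range(n_iter):           # complex images may need several rounds
        for offsets, rows, cols in (
            (fwd, range(h), range(w)),
            (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1)),
        ):
            for y in rows:
                for x in cols:
                    for dy, dx in offsets:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            cand = d[ny, nx] + abs(g[y, x] - g[ny, nx]) + 1.0
                            if cand < d[y, x]:
                                d[y, x] = cand
    return d
```

On a flat image the gray-difference term vanishes and the result reduces to the ordinary chessboard distance, as the text states.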

Relevance: 90.00%

Abstract:

This thesis presents two graphical user interfaces for the project DigiQ - Fusion of Digital and Visual Print Quality, a project for computationally modeling the subjective human experience of print quality by measuring the image with certain metrics. After presenting the user interfaces, methods are given for reducing the computation time of several of the metrics and of the image registration process required to compute them, together with details of their performance. The weighted sample method for the image registration process significantly decreased the calculation times, at the cost of some error. The random sampling method for the metrics greatly reduced calculation time while maintaining excellent accuracy, but worked with only two of the metrics.
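The random sampling idea can be sketched generically: evaluate the metric on a random subset of pixels instead of all of them. The function below is an illustration, not the DigiQ implementation; `metric` stands in for any per-pixel statistic:

```python
import numpy as np

def sampled_metric(image, metric, n_samples=1000, rng=None):
    """Estimate a per-pixel quality metric from a random pixel sample,
    trading a little accuracy for a large reduction in computation."""
    gen = np.random.default_rng(rng)
    # Flatten to a list of pixels (works for gray or multichannel images).
    flat = image.reshape(-1, image.shape[-1]) if image.ndim == 3 else image.ravel()
    idx = gen.choice(flat.shape[0], size=min(n_samples, flat.shape[0]),
                     replace=False)
    return metric(flat[idx])
```

For a metric like the mean, sampling error shrinks with the sample size, which is why accuracy can stay high while runtime drops.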

Relevance: 90.00%

Abstract:

The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers in many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure of visual quality cannot be defined without ambiguity, because it is ultimately the subjective opinion of an “end-user” observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of perceived image quality. The problem is limited to the perceived quality of printed photographs from the viewpoint of a consumer, and the study focuses only on digital printing methods such as inkjet and electrophotography. The main contributions of this thesis are two novel methods for estimating the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data, and shown to predict the subjective evaluation results well.
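The first method's visible-difference idea can be illustrated with the simplest colour-difference formula, CIE76 ΔE, averaged over the image. The thesis's actual measure is more sophisticated; this sketch also assumes both images are already converted to CIELAB:

```python
import numpy as np

def mean_delta_e(reference_lab, reproduction_lab):
    """Mean CIE76 colour difference between a reference image and its
    reproduction, both of shape (H, W, 3) in CIELAB coordinates.
    A crude visible-difference score: larger means worse fidelity."""
    diff = reference_lab.astype(float) - reproduction_lab.astype(float)
    return float(np.mean(np.sqrt(np.sum(diff**2, axis=-1))))
```

A score of 0 means a pixel-perfect reproduction; as a rough rule of thumb, CIE76 differences around 2 to 3 are near the threshold of visibility.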

Relevance: 80.00%

Abstract:

This thesis reports the development of an automatic analysis system for high-speed image sequences of hybrid welding. The purpose of the system was to produce information that helps an analyst assess the quality of the imaged welding process. The research focused on measuring the regularity of the arc frequency and the flight directions of filler-material droplets. Arcs were detected in the image sequences with the fuzzy c-means clustering method, and the time interval between successive arcs was used as the measure of arc-frequency regularity. Droplets were located with a method combining principal component analysis and a support vector classifier. A Kalman filter was used to estimate the flight directions and velocities of the droplets, and the flight-direction method classified droplets by their estimated flight directions. The image sequences available for developing the system differed considerably in image quality and droplet appearance, owing to differences in the imaging and welding processes. The analysis system was developed to work on a small subset of sequences with a particular imaging and welding process and similar image quality and droplet appearance, but it was also tested on sequences outside this subset. The test results showed that the flight-direction accuracy was reasonably high within the subset and low for the other sequences. The arc-frequency regularity measurement was accurate in a larger number of the sequences.
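The Kalman-filter stage can be sketched with a standard constant-velocity model over 2D droplet positions. The state layout, the noise levels `q` and `r`, and the assumption of position-only measurements are illustrative; the thesis's filter configuration is not given in the abstract:

```python
import numpy as np

def track_constant_velocity(measurements, dt=1.0, q=1e-3, r=1.0):
    """Minimal constant-velocity Kalman filter over 2D positions, as a
    sketch of how droplet flight direction and speed could be estimated.
    Returns the final state [x, y, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], float)   # state transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], float)   # we observe position only
    Q = q * np.eye(4)                      # process noise covariance
    R = r * np.eye(2)                      # measurement noise covariance
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4)
    for z in measurements[1:]:
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new position measurement.
        y = np.asarray(z, float) - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
    return x
```

The flight direction then follows from the velocity components, e.g. `atan2(vy, vx)` of the final state.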

Relevance: 80.00%

Abstract:

The purpose of this thesis is to present a new approach to the lossy compression of multispectral images. The proposed algorithm combines quantization and clustering: clustering is investigated for compressing the spatial dimension, and vector quantization is applied for compressing the spectral dimension. The algorithm compresses a multispectral image in two stages. In the first stage, class etalons are defined; in other words, each uniform area located inside the image is given a class number. Pixels not yet assigned to one of the clusters are handled during the second pass and assigned to the closest etalon. Finally, the compressed image is represented as a flat index image pointing into a codebook of etalons. Decompression is likewise immediate. The proposed method has been tested on satellite multispectral images from different sources, and numerical results and illustrative examples of the method are presented.
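A toy version of the etalon/codebook idea, using plain k-means as a stand-in for the thesis's two-stage clustering plus vector quantization scheme (the value of `k`, the iteration count, and the random initialization are illustrative):

```python
import numpy as np

def vq_compress(cube, k=4, n_iter=10, rng=0):
    """Toy vector quantization of a multispectral cube (H, W, bands):
    k-means finds k spectral etalons, and the image is stored as a flat
    index map plus the codebook. A sketch, not the thesis's algorithm."""
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(float)
    gen = np.random.default_rng(rng)
    codebook = pixels[gen.choice(len(pixels), size=k, replace=False)]
    idx = np.zeros(len(pixels), int)
    for _ in range(n_iter):
        # Assign each pixel spectrum to its nearest etalon.
        d = np.linalg.norm(pixels[:, None, :] - codebook[None, :, :], axis=2)
        idx = np.argmin(d, axis=1)
        # Move each etalon to the mean of its assigned spectra.
        for j in range(k):
            if np.any(idx == j):
                codebook[j] = pixels[idx == j].mean(axis=0)
    return idx.reshape(h, w), codebook

def vq_decompress(index_map, codebook):
    """Reconstruct the cube by looking each index up in the codebook."""
    return codebook[index_map]
```

The compressed representation is one integer per pixel plus `k` etalon spectra, instead of `bands` values per pixel.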

Relevance: 80.00%

Abstract:

Image quality is among the most studied and applied topics. This thesis examines color and spectral image quality. An overview is given of existing quality assessment methods for compressed and individual images, with an emphasis on applying these methods to spectral images. The thesis introduces a spectral color appearance model for the quality assessment of color images. The model is applied to color images reproduced from spectral images. It is based both on a statistical spectral image model, which links the parameters of spectral images and color images, and on the general appearance of the image. The relationship between the statistical spectral parameters and the physical parameters of color images has been verified by computational image modeling. Based on the properties of the model, an experimental method for color image quality assessment has been developed. An expert-based questionnaire method and a fuzzy inference system for color image quality assessment were developed. The study shows that the spectral-color relationship and the fuzzy inference system are effective for color image quality assessment.

Relevance: 80.00%

Abstract:

Optical microscopy is living its renaissance. The diffraction limit, although still physically true, plays a minor role in the achievable resolution in far-field fluorescence microscopy: super-resolution techniques enable fluorescence microscopy at nearly molecular resolution. Modern (super-resolution) microscopy methods rely strongly on software. Software tools are needed all the way from data acquisition, data storage, image reconstruction, restoration, and alignment to quantitative image analysis and image visualization. These tools play a key role in all aspects of microscopy today, and their importance in the coming years is certainly going to increase as microscopy little by little transitions from single cells to more complex and even living model systems. In this thesis, a series of bioimage informatics software tools are introduced for STED super-resolution microscopy. Tomographic reconstruction software, coupled with a novel image acquisition method (STED<), is shown to enable axial (3D) super-resolution imaging in a standard 2D-STED microscope. Software tools are introduced for STED super-resolution correlative imaging with transmission electron microscopes or atomic force microscopes. A novel method for automatically ranking image quality within microscope image datasets is introduced and utilized, for example, to select the best images in a STED microscope image dataset.
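Automatic image-quality ranking can be illustrated with a very simple sharpness proxy, the variance of the Laplacian. This is only a generic focus measure, not the ranking method developed in the thesis:

```python
import numpy as np

def sharpness_score(image):
    """Variance of a 4-neighbor Laplacian over the image interior:
    a simple sharpness/focus proxy (higher means more fine detail)."""
    img = image.astype(float)
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(np.var(lap))

def rank_images(images):
    """Return indices of images sorted from sharpest to blurriest."""
    return sorted(range(len(images)), key=lambda i: -sharpness_score(images[i]))
```

Selecting the "best" images in a dataset then amounts to keeping the top of the ranking.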

Relevance: 80.00%

Abstract:

The print substrate influences the print result in dry toner electrophotography, which is a widely used digital printing method. The influence of the substrate can be seen more easily in color printing, as that is a more complex process compared to monochrome printing. However, the print quality is also affected by the print substrate in grayscale printing. It is thus in the interests of both substrate producers and printing equipment manufacturers to understand the substrate properties that influence the quality of printed images in more detail. In dry toner electrophotography, the image is printed by transferring charged toner particles to the print substrate in the toner transfer nip, utilizing an electric field, in addition to the forces linked to the contact between toner particles and substrate in the nip. The toner transfer and the resulting image quality are thus influenced by the surface texture and the electrical and dielectric properties of the print substrate. In the investigation of the electrical and dielectric properties of the papers and the effects of substrate roughness, in addition to commercial papers, controlled sample sets were made on pilot paper machines and coating machines to exclude uncontrolled variables from the experiments. The electrical and dielectric properties of the papers investigated were electrical resistivity and conductivity, charge acceptance, charge decay, and the dielectric permittivity and losses at different frequencies, including the effect of temperature. The objective was to gain an understanding of how the electrical and dielectric properties are affected by normal variables in papermaking, including basis weight, material density, filler content, ion and moisture contents, and coating. In addition, the dependency of substrate resistivity on the electric field applied was investigated. Local discharging did not inhibit transfer with the paper roughness levels that are normal in electrophotographic color printing. 
The potential decay of paper revealed that the charge decay cannot be accurately described with a single exponential function, since in charge decay there are overlapping mechanisms of conduction and depolarization of the paper. The resistivity of the paper depends on the NaCl content and exponentially on the moisture content, although it is also strongly dependent on the electric field applied. This dependency is influenced by the thickness, density, and filler content of the paper. Furthermore, the Poole-Frenkel model can be applied to the resistivity of uncoated paper. The real part of the dielectric constant ε' increases with NaCl content and relative humidity, but when these materials cannot polarize freely, the increase cannot be explained by summing the effects of their dielectric constants. Dependencies between the dielectric constant and dielectric loss factor and the NaCl content, temperature, and frequency show that, in the presence of a sufficient amount of moisture and NaCl, new structures with a relaxation time of the order of 10^-3 s are formed in the paper. The ε' of coated papers is influenced by the addition of pigments and other coating additives with polarizable groups, and by the increase in density. The charging potential decreases, and the electrical conductivity, potential decay rate, and dielectric constant of paper increase, with increasing temperature. The dependencies are exponential, and the temperature dependencies and their activation energies are altered by the ion content. The results have been utilized in manufacturing substrates for electrophotographic color printing.
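The Poole-Frenkel field dependence referred to above is conventionally written as follows (standard textbook form with symbols as usually defined; the thesis's fitted parameters are not reproduced here):

```latex
% Poole-Frenkel field dependence of conductivity:
% barrier \phi lowered in proportion to the square root of the field E.
\sigma(E) = \sigma_0 \exp\!\left( \frac{\beta_{\mathrm{PF}}\sqrt{E} - \phi}{k_B T} \right),
\qquad
\beta_{\mathrm{PF}} = \sqrt{\frac{e^{3}}{\pi \varepsilon_0 \varepsilon_r}}
```

The square-root-of-field term in the exponent is what produces the strong dependence of resistivity on the applied electric field noted in the abstract.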

Relevance: 40.00%

Abstract:

This thesis addresses the measurement of paper surface roughness, one of the central problems in the study of paper materials. The measurement methods used in the paper industry have several drawbacks, such as inaccuracy and unsuitability for measuring smooth papers, as well as demanding laboratory requirements and slowness. The thesis investigates methods based on optical scattering for determining surface roughness. Machine vision and image-processing techniques were studied on rough paper surfaces. The algorithms used in the study were implemented in Matlab®. The results demonstrate that surface roughness can be measured by imaging. The best agreement between the traditional method and the imaging method was given by a method based on the fractal dimension.
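The fractal-dimension approach can be illustrated with the standard box-counting estimator (a generic sketch; the thesis's exact estimator and its Matlab implementation are not described in the abstract):

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8, 16)):
    """Estimate the fractal (box-counting) dimension of a binary image:
    count occupied s-by-s boxes for several box sizes s and fit the slope
    of log(count) versus log(1/s)."""
    binary = np.asarray(binary, bool)
    counts = []
    for s in sizes:
        h, w = binary.shape
        hs, ws = h // s * s, w // s * s          # crop to a multiple of s
        blocks = binary[:hs, :ws].reshape(hs // s, s, ws // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

A completely filled region yields a dimension of 2, a smooth curve yields about 1, and a rough surface profile falls in between, which is what makes the estimate usable as a roughness descriptor.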