33 results for Digital processing image
Abstract:
Diabetic retinopathy, age-related macular degeneration and glaucoma are the leading causes of blindness worldwide. Automatic methods for diagnosis exist, but their performance is limited by the quality of the data. Spectral retinal images provide a significantly better representation of the colour information than common grayscale or red-green-blue retinal imaging, and thus have the potential to improve the performance of automatic diagnosis methods. This work studies the image processing techniques required for composing spectral retinal images with accurate reflection spectra, including wavelength channel image registration, spectral and spatial calibration, illumination correction, and the estimation of depth information from image disparities. The composition of a spectral retinal image database of patients with diabetic retinopathy is described. The database includes gold standards for a number of pathologies and retinal structures, marked by two expert ophthalmologists. The diagnostic applications of the reflectance spectra are studied using supervised classifiers for lesion detection. In addition, inversion of a model of light transport is used to estimate histological parameters from the reflectance spectra. Experimental results suggest that the methods for composing, calibrating and postprocessing spectral images presented in this work can be used to improve the quality of the spectral data. The experiments on the direct and indirect use of the data show the diagnostic potential of spectral retinal data over standard retinal images. The use of spectral data could improve automatic and semi-automated diagnostics for the screening of retinal diseases, for the quantitative detection of retinal changes in follow-up, for clinically relevant end-points in clinical studies, and for the development of new therapeutic modalities.
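The abstract names illumination correction as one of the processing steps but gives no implementation details. The sketch below shows one common way to flatten slowly varying illumination in a multi-channel (spectral) image, dividing each wavelength channel by a low-pass estimate of the illumination field; the function name, the Gaussian filter and the sigma value are assumptions for illustration, not the calibration pipeline of the thesis.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination(spectral_cube, sigma=51.0, eps=1e-6):
    """Flatten slowly varying illumination in each wavelength channel.

    spectral_cube: float array of shape (H, W, n_channels).
    The illumination field is estimated with a wide Gaussian low-pass filter
    and divided out channel by channel; this is one common approach only,
    not the calibration method used in the thesis.
    """
    corrected = np.empty_like(spectral_cube, dtype=np.float64)
    for c in range(spectral_cube.shape[2]):
        channel = spectral_cube[:, :, c].astype(np.float64)
        background = gaussian_filter(channel, sigma=sigma)   # low-pass illumination estimate
        corrected[:, :, c] = channel / (background + eps)
    return corrected

if __name__ == "__main__":
    # Synthetic example: a 3-channel "spectral" image with a horizontal illumination ramp.
    rng = np.random.default_rng(0)
    cube = rng.uniform(0.2, 0.8, size=(256, 256, 3))
    ramp = np.linspace(0.5, 1.5, 256)[None, :, None]
    flat = correct_illumination(cube * ramp)
    print(flat.shape, float(flat.mean()))
```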
Abstract:
This Master's thesis studies techniques for embedding a watermark into a spectral image, and methods for detecting and identifying watermarks in spectral images. The spectral dimensionality of the original images was reduced using the PCA (Principal Component Analysis) algorithm. The watermark was embedded into the spectral image in the transform domain. According to the proposed model, a transform-domain component was replaced with a linear combination of the watermark and another transform-domain component. The set of parameters used in the embedding was studied. The quality of the watermarked images was measured and analysed, and recommendations for watermark embedding were given. Several methods were used for watermark identification, and the identification results were analysed. The robustness of the watermarks against different attacks was examined. A set of detection experiments was carried out, taking into account the parameters used in the watermark embedding. The ICA (Independent Component Analysis) method is considered one possible option for watermark detection.
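The abstract describes an embedding scheme in which one PCA transform component is replaced by a linear combination of the watermark and another component. The following is a minimal sketch of that idea, assuming the spectral image is an (H, W, bands) array; the component indices and the weights alpha and beta are illustrative placeholders, not the parameters studied in the thesis.

```python
import numpy as np
from sklearn.decomposition import PCA

def embed_watermark(cube, watermark, component=2, donor=0, alpha=0.9, beta=0.1):
    """Embed a watermark in the PCA transform domain of a spectral image.

    cube: (H, W, B) spectral image; watermark: (H, W) array in [0, 1].
    One transform component is replaced by a linear combination of the
    watermark and another ("donor") component, loosely following the scheme
    described in the abstract. The indices and weights are only examples.
    """
    h, w, b = cube.shape
    pixels = cube.reshape(-1, b).astype(np.float64)
    pca = PCA(n_components=b)
    scores = pca.fit_transform(pixels)                  # transform-domain representation
    scale = np.std(scores[:, component])                # keep the replacement at a similar magnitude
    scores[:, component] = alpha * scores[:, donor] + beta * scale * watermark.ravel()
    return pca.inverse_transform(scores).reshape(h, w, b)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.normal(size=(64, 64, 10))
    mark = (rng.random((64, 64)) > 0.5).astype(float)
    print(embed_watermark(cube, mark).shape)
```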
Abstract:
The topic of this thesis is studying how lesions in the retina caused by diabetic retinopathy can be detected from color fundus images using machine vision methods. Methods for equalizing uneven illumination in fundus images, detecting regions of poor image quality due to inadequate illumination, and recognizing abnormal lesions were developed during the work. The developed methods exploit mainly the color information and simple shape features to detect lesions. In addition, a graphical tool for collecting lesion data was developed. The tool was used by an ophthalmologist who marked lesions in the images to help method development and evaluation. The tool is a general-purpose one, and thus it is possible to reuse it in similar projects. The developed methods were tested with a separate test set of 128 color fundus images. From the test results it was calculated how accurately the methods classify abnormal funduses as abnormal (sensitivity) and healthy funduses as normal (specificity). The sensitivity values were 92% for hemorrhages, 73% for red small dots (microaneurysms and small hemorrhages), and 77% for exudates (hard and soft exudates). The specificity values were 75% for hemorrhages, 70% for red small dots, and 50% for exudates. Thus, the developed methods detected hemorrhages accurately and microaneurysms and exudates moderately well.
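As a rough illustration of color-based lesion detection on fundus images (not the methods developed in the thesis), the sketch below finds dark red-lesion candidates on the green channel by subtracting a median-filtered background estimate and keeping blobs of plausible size; all parameter values and the function name are assumptions.

```python
import cv2
import numpy as np

def red_lesion_candidates(fundus_bgr, bg_kernel=31, thresh=15, min_area=5, max_area=500):
    """Very rough red-lesion candidate detection on an 8-bit colour fundus image.

    Works on the green channel (highest lesion/vessel contrast), removes the
    slowly varying background with a median filter and keeps dark blobs of
    plausible size. Illustrative only; not the thesis's detection method.
    """
    green = fundus_bgr[:, :, 1]
    background = cv2.medianBlur(green, bg_kernel)
    shade_corrected = cv2.subtract(background, green)        # dark lesions become bright
    _, mask = cv2.threshold(shade_corrected, thresh, 255, cv2.THRESH_BINARY)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    candidates = []
    for i in range(1, n):                                    # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_area <= area <= max_area:
            candidates.append((stats[i, cv2.CC_STAT_LEFT], stats[i, cv2.CC_STAT_TOP], int(area)))
    return mask, candidates
```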
Abstract:
This work reports the development and implementation of a practical assignment for the Active and Robot Vision course. In the assignment, a system is designed and implemented that moves objects with a robot arm in three-dimensional space. The system uses digital images to determine the positions of the objects. In the assignment implementation presented in this work, thresholding in the HSV colour space was used to segment the objects from the image based on their colours. The binary image resulting from the segmentation was filtered with a median filter to remove noise from the image. The position of an object in the binary image was determined by labelling connected pixel groups with a connected-component labelling method. The position of the object was taken to be the position of the largest labelled pixel group. The object positions in the image were mapped to three-dimensional coordinates using a calibrated camera. The system moved the objects based on their estimated three-dimensional positions.
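A minimal sketch of the described pipeline (HSV thresholding, median filtering, connected-component labelling and picking the largest component) could look as follows; the HSV range is an arbitrary example and the mapping to three-dimensional coordinates with the calibrated camera is omitted.

```python
import cv2
import numpy as np

def locate_object(image_bgr, hsv_low=(100, 80, 80), hsv_high=(130, 255, 255)):
    """Locate a coloured object in an image, roughly as described in the assignment:
    HSV thresholding, median filtering, connected-component labelling, and taking
    the centroid of the largest component. The HSV range here happens to select a
    blue-ish object and is only an example value."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.medianBlur(mask, 5)                         # suppress segmentation noise
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return None                                        # only background was found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    return tuple(centroids[largest])                       # (x, y) in image coordinates
```

The returned image coordinates would then be back-projected to 3-D with the calibrated camera model, which is outside the scope of this sketch.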
Abstract:
Perceiving the world visually is a basic act for humans, but for computers it is still an unsolved problem. The variability present in natural environments is an obstacle for effective computer vision. The goal of invariant object recognition is to recognise objects in a digital image despite variations in, for example, pose, lighting or occlusion. In this study, invariant object recognition is considered from the viewpoint of feature extraction. The differences between local and global features are studied, with emphasis on Hough transform and Gabor filtering based feature extraction. The methods are examined with respect to four capabilities: generality, invariance, stability, and efficiency. Invariant features are presented using both the Hough transform and Gabor filtering. A modified Hough transform technique is also presented, where the distortion tolerance is increased by incorporating local information. In addition, methods for decreasing the computational cost of the Hough transform by employing parallel processing and local information are introduced.
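As an illustration of Gabor-filtering-based feature extraction (one of the two approaches discussed, though not the specific invariant features of the thesis), the sketch below builds a small Gabor filter bank and summarises the response magnitudes into a feature vector; the frequencies, orientations and pooling are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(frequency, theta, sigma=3.0, size=21):
    """Complex Gabor kernel with the given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)              # coordinate along the orientation
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * frequency * xr)
    return envelope * carrier

def gabor_features(image, frequencies=(0.1, 0.2), n_orientations=4):
    """Feature vector of mean response magnitudes per (frequency, orientation) pair.
    A minimal sketch of Gabor-based feature extraction, not the thesis's features."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            response = fftconvolve(image, gabor_kernel(f, theta), mode="same")
            feats.append(np.abs(response).mean())
    return np.array(feats)
```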
Abstract:
An important task in environmental monitoring is to assess the current state of the environment and the changes caused to it by human activity, and to analyse and identify the relationships between these. Changes in the environment can be managed by collecting and analysing data. This Master's thesis studies changes observed in aquatic vegetation using remotely sensed measurement data and image analysis methods. Aerial images of Lake Saimaa, the largest lake in Finland, taken in 1996 and 1999 were used for the environmental monitoring. The first stage of the image analysis is geometric correction, the purpose of which is to align and register the images into a common coordinate system. The second stage is to match the corresponding local regions and to detect changes in the vegetation. Several approaches were used for vegetation detection, including both supervised and unsupervised classification methods. The study used real, noisy measurement data, and the experiments carried out with it gave good results, indicating the success of the approach.
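For the unsupervised side of the change detection, a generic sketch is shown below: two co-registered grayscale images are differenced and the pixels are clustered into "changed" and "unchanged" groups with k-means. This is only an illustrative baseline, not the exact procedure applied to the Lake Saimaa imagery.

```python
import numpy as np
from sklearn.cluster import KMeans

def unsupervised_change_map(img_t1, img_t2, n_clusters=2):
    """Unsupervised change detection between two co-registered grayscale images.

    Pixels are clustered on the absolute difference of the two dates with
    k-means; the cluster with the larger mean difference is labelled "changed".
    A generic sketch of the unsupervised approach, not the method of the thesis.
    """
    diff = np.abs(img_t1.astype(np.float64) - img_t2.astype(np.float64))
    features = diff.reshape(-1, 1)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)
    means = [features[labels == k].mean() for k in range(n_clusters)]
    changed_cluster = int(np.argmax(means))
    return (labels == changed_cluster).reshape(img_t1.shape)
```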
Abstract:
Mottling is one of the key defects in offset printing. Mottling can be defined as unwanted unevenness of print. In this work, the diameter of a mottle spot is defined to be between 0.5 and 10.0 mm. There are several types of mottling, but the reason behind the problem is still not fully understood. Several commercial machine vision products for the evaluation of print unevenness have been presented. Two of the methods used in these products have been implemented in this thesis: the cluster method and the band-pass method. The properties of the human visual system have been taken into account in the implementation of both methods. The index produced by the cluster method is a weighted sum of the number of found spots, and the index produced by the band-pass method is a weighted sum of the coefficients of variation of the gray levels for each spatial band. Both methods produce larger indices for visually poor samples, so they can discern good samples from poor ones. The difference between the indices for good and poor samples is slightly larger with the cluster method. However, without samples evaluated by human experts, the goodness of these results is still questionable. This comparison is left to the next phase of the project.
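The band-pass index described above (a weighted sum of the coefficients of variation of the gray levels per spatial band) could be sketched as follows, using differences of Gaussians to form the spatial bands; the band limits and equal weights are assumptions, and the human-visual-system weighting used in the thesis is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_mottling_index(gray, sigmas=(1, 2, 4, 8, 16), weights=None):
    """Band-pass mottling index sketch: the coefficient of variation of the gray
    levels is computed in each spatial band and combined as a weighted sum.
    Bands are formed as differences of Gaussians; the sigma values and equal
    weights are illustrative, and no HVS weighting is applied here.
    """
    gray = gray.astype(np.float64)
    if weights is None:
        weights = np.ones(len(sigmas) - 1) / (len(sigmas) - 1)
    mean_level = gray.mean()
    index = 0.0
    for w, (s1, s2) in zip(weights, zip(sigmas[:-1], sigmas[1:])):
        band = gaussian_filter(gray, s1) - gaussian_filter(gray, s2)   # one spatial band
        cv = band.std() / (mean_level + 1e-9)        # coefficient of variation in this band
        index += w * cv
    return index
```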
Abstract:
The number of digital images has been increasing exponentially in the last few years. People have problems managing their image collections and finding a specific image. An automatic image categorization system could help them manage images and find specific images. In this thesis, an unsupervised visual object categorization system was implemented to categorize a set of unknown images. Because the system is unsupervised, it does not need manually labelled training images, and therefore the number of possible categories and images can be huge. The system implemented in the thesis extracts local features from the images. These local features are used to build a codebook. The local features and the codebook are then used to generate a feature vector for each image. Images are categorized based on the feature vectors. The system is able to categorize any given set of images based on their visual appearance. Images that have similar image regions are grouped together in the same category. Thus, for example, images which contain cars are assigned to the same cluster. The unsupervised visual object categorization system can be used in many situations, e.g., in an Internet search engine. The system can categorize images for a user, and the user can then easily find a specific type of image.
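A minimal bag-of-visual-words sketch of the described pipeline (local features, codebook, per-image codeword histograms, clustering of the histograms into categories) is given below; ORB descriptors and the k-means codebook are stand-ins for whatever local features the thesis actually uses, and all sizes are illustrative.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def categorize_images(gray_images, codebook_size=100, n_categories=5):
    """Unsupervised bag-of-visual-words categorisation sketch.

    Local features (ORB here, only as an accessible stand-in) are extracted from
    all images, a codebook is built with k-means, each image is described by a
    normalised codeword histogram, and the histograms are clustered into categories.
    """
    orb = cv2.ORB_create()
    per_image, all_desc = [], []
    for img in gray_images:
        _, desc = orb.detectAndCompute(img, None)
        desc = np.zeros((0, 32), np.uint8) if desc is None else desc
        per_image.append(desc)
        all_desc.append(desc)
    all_desc = np.vstack(all_desc).astype(np.float64)
    codebook = KMeans(n_clusters=codebook_size, n_init=4, random_state=0).fit(all_desc)
    histograms = []
    for desc in per_image:
        words = codebook.predict(desc.astype(np.float64)) if len(desc) else np.array([], int)
        hist, _ = np.histogram(words, bins=np.arange(codebook_size + 1))
        histograms.append(hist / max(hist.sum(), 1))          # normalised codeword histogram
    return KMeans(n_clusters=n_categories, n_init=10, random_state=0).fit_predict(histograms)
```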
Abstract:
This thesis deals with distance transforms, which are a fundamental issue in image processing and computer vision. In this thesis, two new distance transforms for gray level images are presented. As a new application for distance transforms, they are applied to gray level image compression. The new distance transforms are both extensions of the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer numbers (DTOCS) and a real-value distance transform (EDTOCS) on gray level images. Both distance transforms, the DTOCS and the EDTOCS, require only two passes over the gray level image and are extremely simple to implement. Only two image buffers are needed: the original gray level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3-10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms, such as GRAYMAT, find the minimum path joining two points by the smallest sum of gray levels or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way. The DTOCS gives a weighted version of the chessboard distance map; the weights are not constant, but the gray value differences of the original image. The difference between the DTOCS map and other distance transforms for gray level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning; their use in image compression is very rare. This thesis introduces a new application area for distance transforms. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points that are considered fundamental for the reconstruction of the image, are selected from the gray level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as new control points, and the second group of methods compares the DTOCS distance to the binary image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally. It is shown that the time complexity of the algorithms is independent of the number of control points, i.e. the compression ratio. Also a new morphological image decompression scheme, the 8 kernels' method, is presented. Several decompressed images are presented. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
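The following is a simplified, chamfer-style sketch of a two-pass gray-weighted distance transform in the spirit of the DTOCS, where a step to a neighbour costs the absolute gray-level difference plus one and the transform is computed only inside a binary region of interest. It illustrates the two raster passes and the repeated iterations mentioned above, but it is not the exact DTOCS or EDTOCS definition from the thesis.

```python
import numpy as np

def gray_weighted_two_pass(gray, roi_mask, n_iter=3):
    """Two-pass, chamfer-style gray-weighted distance transform sketch.

    gray: 2-D gray level image; roi_mask: boolean array, True where the distance
    is computed and False in the source region (distance 0). Each step to an
    8-neighbour costs abs(gray difference) + 1. Simplified illustration only.
    """
    gray = gray.astype(np.float64)
    h, w = gray.shape
    dist = np.where(roi_mask, np.inf, 0.0)
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours already visited in a raster scan
    bwd = [(-dy, -dx) for dy, dx in fwd]          # their mirror images for the reverse scan
    for _ in range(n_iter):                       # several passes for complicated images
        for neighbours, rows, cols in ((fwd, range(h), range(w)),
                                       (bwd, range(h - 1, -1, -1), range(w - 1, -1, -1))):
            for y in rows:
                for x in cols:
                    if not roi_mask[y, x]:
                        continue
                    for dy, dx in neighbours:
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            step = abs(gray[y, x] - gray[ny, nx]) + 1.0
                            if dist[ny, nx] + step < dist[y, x]:
                                dist[y, x] = dist[ny, nx] + step
    return dist
```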
Abstract:
Peer-to-Peer (P2P) technology has revolutionized file exchange activities besides enhancing the distribution of processing power. As such, this technology, which is nowadays made freely available to all internet users, also poses a threat, as it enables the illegal distribution of copyrighted digital work. P2P technology continuously evolves at a greater pace than copyright legislation, leading to compatibility gaps between the applicability of copyright law and illicit file sharing and downloading. Such issues give consumers a high incentive to practise piracy using P2P systems with a low perceived risk of prosecution, leading to substantial losses for copyright owners. This study focuses on developing insights for content owners on consumer behaviour towards piracy in Finland; quantitative analyses are carried out on a data set based on a survey conducted by the Helsinki Institute for IT. The research approach investigates the significance of three fundamental areas for evaluating consumer behaviour: environment-related factors, innovation-related factors, and consumer-related factors. Each of these integrates concepts derived from previous theoretical models such as the technology acceptance model, the theory of reasoned action, the theory of planned behaviour, the issue-risk-judgement model and Hunt & Vitell's model.
Abstract:
Dirt counting and dirt particle characterisation of pulp samples is an important part of quality control in pulp and paper production. There is also a critical need for an automatic image analysis system that can characterise dirt particles in various pulp samples. However, existing image analysis systems utilise a single threshold to segment the dirt particles in different pulp samples, which limits their precision. Designing an automatic image analysis system that overcomes this deficiency would therefore be very useful. In this study, a developed version of the Niblack thresholding method is proposed; the method defines the threshold based on the number of segmented particles. In addition, Kittler thresholding is utilised. Both of these thresholding methods can determine the dirt count of the different pulp samples accurately compared to visual inspection and the Digital Optical Measuring and Analysis System (DOMAS). In addition, the minimum resolution needed for acquiring a scanner image is defined. Considering the variation in the dirt particle features, the curl shows an acceptable difference for discriminating the bark and the fibre bundles in different pulp samples. Three classifiers, k-Nearest Neighbour, Linear Discriminant Analysis and Multi-layer Perceptron, are utilised to categorise the dirt particles. Linear Discriminant Analysis and Multi-layer Perceptron are the most accurate in classifying the dirt particles segmented by Kittler thresholding with morphological processing. The results show that the dirt particles are successfully categorised into bark and fibre bundles.
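For reference, classical Niblack thresholding computes a local threshold T = m + k·s from the local mean m and local standard deviation s; the sketch below segments dark particles with that rule. The thesis's developed variant, which chooses the threshold based on the number of segmented particles, is not reproduced here, and the window size and k value are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def niblack_threshold(gray, window=25, k=-0.2):
    """Classical Niblack local thresholding: T = local mean + k * local std.

    Dark dirt particles are segmented as pixels below the local threshold.
    Standard formulation with illustrative parameters; the particle-count-based
    threshold selection developed in the thesis is not implemented here.
    """
    gray = gray.astype(np.float64)
    mean = uniform_filter(gray, size=window)
    mean_sq = uniform_filter(gray * gray, size=window)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
    threshold = mean + k * std
    return gray < threshold        # True where a dirt particle pixel is detected
```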
Abstract:
The purpose of steganography is to hide a secret message within other information. Based on the literature, this thesis studies steganography and digital watermarking of images. The thesis also includes an experimental part, which presents a test system developed for detecting watermarked images and the results of the test runs. In the test runs, series of images were watermarked with selected watermarking methods using varying parameters. Feature extraction is performed on the images to be identified. The extracted features are given as parameters to a classifier, which makes the final detection decision. In the study, a working software system was implemented for embedding watermarks and for detecting watermarked images within a set of images. Based on the results, a detection accuracy of over 95 percent is achieved with a suitable feature extractor and a support vector machine classifier.
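The detection setup described above (feature extraction followed by a support vector machine classifier) could be sketched as below; the feature extractor here is a deliberately simple placeholder, since the abstract does not specify which features were used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def simple_features(image):
    """Placeholder feature extractor: first-order statistics of the image and of
    its horizontal pixel differences. Only an illustrative stand-in for the
    feature extraction used in the thesis."""
    img = image.astype(np.float64)
    diff = np.diff(img, axis=1)
    return np.array([img.mean(), img.std(), diff.mean(), diff.std(),
                     np.abs(diff).mean(), np.percentile(np.abs(diff), 95)])

def train_watermark_detector(marked_images, clean_images):
    """Train an SVM to separate watermarked from clean images, mirroring the
    feature-extraction + classifier setup described in the abstract."""
    X = np.array([simple_features(im) for im in marked_images + clean_images])
    y = np.array([1] * len(marked_images) + [0] * len(clean_images))
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, y)
    return clf
```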
Abstract:
Feature extraction is the part of pattern recognition where the sensor data is transformed into a form more suitable for the machine to interpret. The purpose of this step is also to reduce the amount of information passed to the next stages of the system, while preserving the information essential for discriminating the data into different classes. For instance, in image analysis the actual image intensities are vulnerable to various environmental effects, such as lighting changes, and feature extraction can be used as a means of detecting features which are invariant to certain types of illumination change. Finally, classification tries to make decisions based on the previously transformed data. The main focus of this thesis is on developing new methods for embedded feature extraction based on local non-parametric image descriptors. Feature analysis is also carried out for the selected image features. Low-level Local Binary Pattern (LBP) based features play a main role in the analysis. In the embedded domain, the pattern recognition system must usually meet strict performance constraints, such as high speed, compact size and low power consumption. The characteristics of the final system can be seen as a trade-off between these metrics, which is largely affected by the decisions made during the implementation phase. The implementation alternatives of LBP-based feature extraction are explored in the embedded domain in the context of focal-plane vision processors. In particular, the thesis demonstrates LBP extraction with the MIPA4k massively parallel focal-plane processor IC. Higher-level processing is also incorporated into this framework by means of a framework for implementing a single-chip face recognition system. Furthermore, a new method for determining optical flow based on LBPs, designed in particular for the embedded domain, is presented. Inspired by some of the principles observed through the feature analysis of Local Binary Patterns, an extension to the well-known non-parametric rank transform is proposed, and its performance is evaluated in face recognition experiments with a standard dataset. Finally, an a priori model where the LBPs are seen as combinations of n-tuples is also presented.
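For readers unfamiliar with the descriptor, the basic 8-neighbour Local Binary Pattern operator and its histogram are sketched below; this is the textbook formulation only, not the embedded or focal-plane implementations developed in the thesis.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern code for each interior pixel:
    each neighbour that is >= the centre contributes one bit to the code."""
    g = gray.astype(np.int32)
    c = g[1:-1, 1:-1]                                   # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= ((neighbour >= c).astype(np.int32) << bit)
    return codes

def lbp_histogram(gray, bins=256):
    """Normalised LBP code histogram, the typical texture descriptor."""
    codes = lbp_image(gray)
    hist, _ = np.histogram(codes, bins=np.arange(bins + 1))
    return hist / hist.sum()
```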
Abstract:
The Saimaa ringed seal is one of the most endangered seals in the world. It is a symbol of Lake Saimaa, and much effort has been made to save it. Traditional methods of seal monitoring include capturing the animals and installing sensors on their bodies. These invasive identification methods can be painful and affect the behavior of the animals. Automatic identification of seals using computer vision provides a more humane method for monitoring. This Master's thesis focuses on automatic image-based identification of the Saimaa ringed seals. This consists of detection and segmentation of a seal in an image, analysis of its ring patterns, and identification of the detected seal based on the features of the ring patterns. The proposed algorithm is evaluated with a dataset of 131 individual seals. Based on the experiments with 363 images, 81% of the images were successfully segmented automatically. Furthermore, a new approach for interactive identification of Saimaa ringed seals is proposed. The results of this research are a starting point for future research on seal photo-identification.
Abstract:
Marketing has changed because of digitalization. Marketing is moving towards digital channels, and more companies are transitioning from "pushing" advertising messages to "pull" marketing, which attracts the audience with content that interests and benefits them. This kind of marketing is called content marketing or "inbound" marketing. This study focuses on how marketing communications agencies utilize digital content marketing and what the best practices are with the selected digital content marketing channels. In this study, those channels include blogs, Facebook, Twitter, and LinkedIn. A qualitative research method was utilized in order to examine the phenomenon of digital content marketing in depth. The chosen data collection method was semi-structured interviewing. A total of seven marketing communications agencies, who currently utilize digital content marketing, were selected as case companies and interviewed. All the case companies are from the marketing communications industry because that industry can be assumed to be well adapted to digital content marketing techniques. There is a research gap regarding digital content marketing in the B2B context, which increases the novelty value of this research. The study examines what digital content marketing is, why B2B companies use digital content marketing, and how digital content marketing should be conducted through blogs and social media. The informants perceived digital marketing to be a fundamental part of all their marketing. They conduct digital content marketing for the following reasons: to increase sales, to improve their brand image and to demonstrate their own skills. Concrete results of digital content marketing for the case companies include sales leads, new clients, a better brand image, and easier recruiting. The most important success factors with blogs and social media are the following: 1) Audience-centric thinking. All content planning should start from figuring out which themes interest the target audience. Social media channel choices should be based on where the target audience can be reached. 2) Companies should not talk only about themselves. Instead, content is made about themes that interest the target audience. On social media channels, only a fragment of all shared content is about the company. Rather, most of the shared content is industry-specific content that helps the potential client.