12 results for colour image processing

in Aston University Research Archive


Relevance: 100.00%

Abstract:

This report presents and evaluates a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The benefits of performing MP in the transform domain are analysed in detail. The main contribution of this work is extending MP with wavelets to colour coding and proposing a coding method. We exploit correlations between image subbands after wavelet transformation in RGB colour space. A new and simple quantisation and coding scheme for the colour MP decomposition, based on Run Length Encoding (RLE) and inspired by the coding of indexes in relational databases, is then applied. As a final coding step, arithmetic coding is used, assuming uniform distributions of MP atom parameters. The target application is compression at low and medium bit-rates. Coding performance is compared to JPEG 2000, showing the potential to outperform the latter once data models more sophisticated than the uniform ones are used for the arithmetic coder. Results are presented for grayscale and colour coding of 12 standard test images.
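
As an illustration of the greedy selection loop at the heart of MP (a generic sketch, assuming a unit-norm dictionary; not the authors' wavelet-domain codec, whose dictionary and quantisation are only summarised above):

```python
# Minimal matching pursuit: repeatedly pick the dictionary atom with the
# largest inner product with the residual and subtract its contribution.
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """dictionary: (n_samples, n_total_atoms) array with unit-norm columns."""
    residual = signal.astype(float).copy()
    decomposition = []                        # (atom_index, coefficient) pairs
    for _ in range(n_atoms):
        correlations = dictionary.T @ residual
        k = int(np.argmax(np.abs(correlations)))
        coeff = correlations[k]
        residual -= coeff * dictionary[:, k]
        decomposition.append((k, coeff))
    return decomposition, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)                # normalise atoms
x = rng.standard_normal(64)
atoms, r = matching_pursuit(x, D, n_atoms=10)
print(f"residual energy after 10 atoms: {np.sum(r**2) / np.sum(x**2):.3f}")
```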

Relevance: 100.00%

Abstract:

Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with describing homogeneous texture in terms of a signal model and with developing a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can be adequately represented by their Fourier amplitude spectra alone, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, this model leads to a stationary random process. Results indicate that the assumption is valid for textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of each filter output is taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial-frequency domains, thus providing high spatial and class resolution. The filter outputs are processed by a multi-resolution classifier which applies a clustering algorithm to the data at low spatial resolution and then performs a boundary estimation operation over a range of spatial resolutions. Results demonstrate high performance, in terms of classification error, for a range of synthetic and natural textures.
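
A minimal sketch of the amplitude-only model, assuming uniformly random phase (and ignoring the Hermitian symmetry a real image's spectrum would have, hence the final real-part approximation):

```python
# Keep an image's Fourier amplitude spectrum, randomise the phase, invert.
# Under the thesis's model the result should resemble the original texture.
import numpy as np

def random_phase_texture(image, seed=0):
    rng = np.random.default_rng(seed)
    amplitude = np.abs(np.fft.fft2(image))
    phase = rng.uniform(-np.pi, np.pi, image.shape)
    # Without Hermitian symmetry the inverse FFT is complex, so taking the
    # real part is only an approximation of the model.
    return np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))

texture = np.random.default_rng(1).standard_normal((128, 128))
synthetic = random_phase_texture(texture)
```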

Relevance: 100.00%

Abstract:

The aim of this project was to carry out a fundamental study to assess the potential of colour image analysis for use in investigations of fire-damaged concrete. This involved:
(a) quantification (rather than purely visual assessment) of colour change as an indicator of the thermal history of concrete;
(b) quantification of the nature and intensity of crack development as an indication of the thermal history of concrete, supporting and in addition to colour change observations;
(c) further understanding of changes in the physical and chemical properties of aggregate and mortar matrix after heating;
(d) an indication of the relationship between cracking and non-destructive methods of testing, e.g. ultrasonic pulse velocity (UPV) or the Schmidt hammer.
Results showed that colour image analysis can be used to quantify the colour changes found when concrete is heated. Development of red colouration coincided with a significant reduction in compressive strength. Such measurements may be used to determine the thermal history of concrete by providing information on the temperature distribution that existed at the height of a fire. The actual colours observed depended on the types of cement and aggregate used to make the concrete; with some aggregates it may be more appropriate to analyse only the mortar matrix. Petrographic techniques may also be used to determine the nature and density of the cracks that develop at elevated temperatures, and values of crack density correlate well with measurements of residual compressive strength. Small differences in crack density were observed with different cements and aggregates, although good correlations with residual compressive strength were always found. Taken together, these two techniques can provide further useful information for the evaluation of fire-damaged concrete, especially since petrographic analysis can also provide information on the quality of the original concrete, such as cement content and water/cement ratio. Concretes made with blended cements showed small differences in physical and chemical properties compared with those made with unblended cements. There is some evidence to suggest that a coarsening of the pore structure in blended cements may lead to the onset of cracking at lower temperatures. DTA/TGA was of little use in assessing the thermal history of concrete made with blended cements. Corner spalling and sloughing off, as observed in columns, were effectively reproduced in tests on small-scale specimens and the crack distributions measured. Relationships between compressive strength/cracking and non-destructive test methods are discussed, and an outline procedure for site investigations of fire-damaged concrete is described.
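
As a rough illustration of quantifying colour change, a sketch that computes the mean red chromaticity of an RGB region; this particular metric and normalisation are assumptions for the example, not the calibrated procedure used in the project:

```python
# Mean red chromaticity r = R / (R + G + B) over an image region; a higher
# value indicates stronger red colouration of the heated concrete surface.
import numpy as np

def mean_red_chromaticity(rgb, eps=1e-9):
    """rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    total = rgb.sum(axis=2) + eps             # avoid division by zero
    return float((rgb[..., 0] / total).mean())

region = np.random.default_rng(2).uniform(size=(64, 64, 3))  # placeholder data
print(f"mean red chromaticity: {mean_red_chromaticity(region):.3f}")
```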

Relevance: 100.00%

Abstract:

We present and evaluate a novel idea for scalable lossy colour image coding with Matching Pursuit (MP) performed in a transform domain. The idea is to exploit correlations in RGB colour space between image subbands after wavelet transformation, rather than in the spatial domain. We propose a simple quantisation and coding scheme for the colour MP decomposition based on Run Length Encoding (RLE), which achieves performance comparable to JPEG 2000 even though the latter uses careful data modelling at the coding stage. The obtained image representation therefore has the potential to outperform JPEG 2000 given a more sophisticated coding algorithm.
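
A minimal sketch of run-length encoding of the kind applied to the mostly-zero stream of MP atom data; the exact symbols the paper's scheme encodes are an assumption here:

```python
# Run-length encoding: collapse repeated values into (value, count) pairs.
def rle_encode(values):
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1                  # extend the current run
        else:
            runs.append([v, 1])               # start a new run
    return runs

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

stream = [0, 0, 0, 5, 0, 0, 0, 0, 3, 3, 0]    # sparse atom amplitudes
assert rle_decode(rle_encode(stream)) == stream
```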

Relevance: 100.00%

Abstract:

The aim of this Interdisciplinary Higher Degrees project was the development of a high-speed method of photometrically testing vehicle headlamps, based on the use of image processing techniques, for Lucas Electrical Limited. Photometric testing involves measuring the illuminance produced by a lamp at certain points in its beam distribution. Headlamp performance is best represented by an iso-lux diagram, showing illuminance contours, produced from a two-dimensional array of data. Conventionally, the tens of thousands of measurements required are made using a single stationary photodetector and a two-dimensional mechanical scanning system which enables a lamp's horizontal and vertical orientation relative to the photodetector to be changed. Even using motorised scanning and computerised data-logging, the data acquisition time for a typical iso-lux test is about twenty minutes. A detailed study was made of the concept of using a video camera and a digital image processing system to scan and measure a lamp's beam without the need for time-consuming mechanical movement. Although the concept was shown to be theoretically feasible, and a prototype system designed, it could not be implemented because of the technical limitations of commercially available equipment. An alternative high-speed approach was developed, however, and a second prototype system designed. The proposed arrangement again uses an image processing system, but in conjunction with a one-dimensional array of photodetectors and a one-dimensional mechanical scanning system in place of a video camera. This system can be implemented using commercially available equipment and, although not entirely eliminating the need for mechanical movement, greatly reduces the amount required, resulting in a predicted data acquisition time of about twenty seconds for a typical iso-lux test. As a consequence of the work undertaken, the company initiated an £80,000 programme to implement the system proposed by the author.
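
To illustrate what an iso-lux diagram is, a sketch that contours a two-dimensional illuminance grid; the Gaussian beam data and contour levels are synthetic stand-ins:

```python
# Iso-lux diagram: illuminance contours over horizontal/vertical beam angles.
import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(-20, 20, 200)                 # horizontal angle (degrees)
v = np.linspace(-10, 10, 100)                 # vertical angle (degrees)
H, V = np.meshgrid(h, v)
illuminance = 30.0 * np.exp(-((H / 8) ** 2 + ((V + 2) / 3) ** 2))  # lux

cs = plt.contour(H, V, illuminance, levels=[1, 5, 10, 20])
plt.clabel(cs, fmt="%d lux")
plt.xlabel("horizontal angle (deg)")
plt.ylabel("vertical angle (deg)")
plt.title("Synthetic iso-lux diagram")
plt.show()
```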

Relevance: 100.00%

Abstract:

This thesis considers sparse approximation of still images as the basis of a lossy compression system. The Matching Pursuit (MP) algorithm is presented as a method particularly suited for application in lossy scalable image coding. Its multichannel extension, capable of exploiting inter-channel correlations, is found to be an efficient way to represent colour data in RGB colour space. Known problems with MP, the high computational complexity of encoding and dictionary design, are tackled by finding an appropriate partitioning of an image. The idea of performing MP in the spatio-frequency domain after a transform such as the Discrete Wavelet Transform (DWT) is explored. The main challenge, though, is encoding the image representation obtained after MP into a bit-stream. Novel approaches to encoding the atomic decomposition of a signal and to quantising colour amplitudes are proposed and evaluated. The resulting image codec is capable of competing with scalable coders such as JPEG 2000 and SPIHT in terms of compression ratio.
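
A sketch of the multichannel idea, assuming a joint-energy criterion that selects one atom position for all three colour channels with a separate amplitude per channel; the thesis's actual selection rule may differ:

```python
# Multichannel matching pursuit: one atom index per iteration, shared across
# channels, so inter-channel correlations cost only extra amplitudes.
import numpy as np

def multichannel_mp(signals, dictionary, n_atoms):
    """signals: (n_samples, 3) array; dictionary has unit-norm columns."""
    residual = signals.astype(float).copy()
    atoms = []
    for _ in range(n_atoms):
        corr = dictionary.T @ residual               # (n_total_atoms, 3)
        k = int(np.argmax((corr ** 2).sum(axis=1)))  # maximise joint energy
        coeffs = corr[k]                             # one amplitude per channel
        residual -= np.outer(dictionary[:, k], coeffs)
        atoms.append((k, coeffs))
    return atoms, residual

rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
atoms, res = multichannel_mp(rng.standard_normal((64, 3)), D, n_atoms=8)
```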

Relevance: 100.00%

Abstract:

Accurate measurement of the intervertebral kinematics of the cervical spine can support the diagnosis of widespread diseases related to neck pain, such as chronic whiplash dysfunction, arthritis, and segmental degeneration. The natural inaccessibility of the spine, its complex anatomy, and the small range of motion make accurate measurement in vivo difficult. Low-dose X-ray fluoroscopy allows time-continuous screening of the cervical spine during the patient's spontaneous motion. To obtain accurate motion measurements, each vertebra was tracked by means of image processing along a sequence of radiographic images. To obtain a time-continuous representation of motion and to reduce noise in the experimental data, smoothing spline interpolation was used. Intervertebral motion for the cervical segments was estimated by processing the patient's fluoroscopic sequence; the intervertebral angle and displacement and the instantaneous centre of rotation were computed. The RMS fitting error was about 0.2 degrees for rotation and 0.2 mm for displacement.
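
A sketch of the smoothing-spline step using SciPy's UnivariateSpline on synthetic per-frame rotation angles; the data and smoothing factor are illustrative:

```python
# Smooth noisy per-frame vertebra angles into a time-continuous curve.
import numpy as np
from scipy.interpolate import UnivariateSpline

t = np.linspace(0.0, 4.0, 81)                      # frame times (s)
rng = np.random.default_rng(3)
angle = 10 * np.sin(0.5 * np.pi * t) + rng.normal(0, 0.2, t.size)  # degrees

spline = UnivariateSpline(t, angle, s=t.size * 0.2 ** 2)  # smoothing spline
rms_error = np.sqrt(np.mean((spline(t) - angle) ** 2))
print(f"RMS fitting error: {rms_error:.2f} deg")   # cf. ~0.2 deg reported
```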

Relevance: 90.00%

Abstract:

Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by compression schemes. The validity of this approach is demonstrated by showing that the fractal complexity measure of fractal dimension is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to substantially speed up the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
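
As a concrete example of a space-filling-curve scan (the thesis considers many curves; the Hilbert curve is used here purely as the canonical instance), a sketch mapping distance along the curve to pixel coordinates:

```python
# Map distance d along a Hilbert curve of side 2**order to (x, y), so that
# consecutive 1-D positions correspond to spatially adjacent pixels.
def hilbert_d2xy(order, d):
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

scan_order = [hilbert_d2xy(3, d) for d in range(4 ** 3)]  # 8x8 image scan
```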

Relevance: 90.00%

Abstract:

Image segmentation is one of the most computationally intensive operations in image processing and computer vision. This is because a large volume of data is involved and many different features have to be extracted from the image data. This thesis is concerned with the investigation of practical issues related to the implementation of several classes of image segmentation algorithms on parallel architectures. The Transputer is used as the basic building block of the hardware architectures and Occam is used as the programming language. The segmentation methods chosen for implementation are convolution, for edge-based segmentation; the Split and Merge algorithm, for segmenting non-textured regions; and the Granlund method, for segmentation of textured images. Three different convolution methods have been implemented. The direct method of convolution, carried out in the spatial domain, uses the array architecture. The other two methods, based on convolution in the frequency domain, require the two-dimensional Fourier transform. Parallel implementations of two different Fast Fourier Transform algorithms have been developed, incorporating original solutions: the Row-Column method adopts the array architecture, and the Vector-Radix method the pyramid architecture. The texture segmentation algorithm, for which a system-level design is given, demonstrates a further application of the Vector-Radix Fourier transform. A novel concurrent version of the quad-tree-based Split and Merge algorithm has been implemented on the pyramid architecture. The performance of the developed parallel implementations is analysed; many of the obtained speed-up and efficiency measures are close to their theoretical maxima. Where appropriate, comparisons are drawn between different implementations. The thesis concludes with comments on general issues related to the use of the Transputer system as a development tool for image processing applications, and on issues related to the engineering of concurrent image processing applications.
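
A sketch of the Row-Column decomposition of the 2-D FFT: rows are transformed independently, then columns, which is what lets the work be spread across processors on the array architecture (here the two passes simply run sequentially):

```python
# Row-column 2-D FFT: 1-D FFTs over all rows, then over all columns.
import numpy as np

def fft2_row_column(image):
    rows_done = np.fft.fft(image, axis=1)     # independent row transforms
    return np.fft.fft(rows_done, axis=0)      # independent column transforms

img = np.random.default_rng(4).standard_normal((8, 8))
assert np.allclose(fft2_row_column(img), np.fft.fft2(img))
```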

Relevance: 90.00%

Abstract:

Following the miniaturisation of cameras and their integration into mobile devices such as smartphones, combined with the intensive use of the latter, it is likely that in the near future the majority of digital images will be captured using such devices rather than dedicated cameras. Since many users decide to keep their photos on their mobile devices, effective methods for managing these image collections are required. Common image browsers prove to be of only limited use, especially for large image sets [1].

Relevance: 90.00%

Abstract:

Purpose: To examine the use of real-time, generic edge detection image processing techniques to enhance television viewing for the visually impaired. Design: Prospective clinical experimental study. Method: One hundred and two sequential visually impaired subjects (average age 73.8 ± 14.8 years; 59% female) at a single center optimized a dynamic television image with respect to edge detection filter (Prewitt, Sobel, or the two combined), color (red, green, blue, or white), and intensity (one to 15 times) of the overlaid edges. They then rated the original television footage against a black-and-white image displaying only the detected edges and against the original image with the detected edges overlaid in the chosen color and at the selected intensity. Footage of news, an advertisement, and end-of-program credits was subjectively assessed in random order. Results: The Prewitt filter was preferred (44%) over the Sobel filter (27%) or a combination of the two (28%). Green and white were equally popular for displaying the detected edges (32% each), with blue (22%) and red (14%) less so. The average preferred edge intensity was 3.5 ± 1.7 times. The image-enhanced television was significantly preferred to the original (P < .001), which in turn was preferred to viewing the detected edges alone (P < .001) for each footage clip. Preference did not depend on the condition causing visual impairment. Seventy percent of participants were definitely willing to buy a set-top box that could achieve these effects for a reasonable price. Conclusions: Simple generic edge detection image enhancement can be performed on television in real time and significantly enhances viewing for the visually impaired.
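
A sketch of the enhancement, assuming a Prewitt gradient magnitude and a simple additive overlay; the parameter names and normalisation are assumptions, and the study's real-time implementation is not described at this level:

```python
# Detect edges with a Prewitt filter and add them onto the frame in a chosen
# colour at a chosen intensity.
import numpy as np
from scipy.ndimage import prewitt

def overlay_edges(rgb, colour=(0.0, 1.0, 0.0), intensity=3.5):
    """rgb: float array (H, W, 3) in [0, 1]; returns the enhanced frame."""
    grey = rgb.mean(axis=2)
    edges = np.hypot(prewitt(grey, axis=0), prewitt(grey, axis=1))
    edges /= edges.max() + 1e-9               # normalise edge magnitude
    out = rgb + intensity * edges[..., None] * np.array(colour)
    return np.clip(out, 0.0, 1.0)

frame = np.random.default_rng(5).uniform(size=(48, 64, 3))  # placeholder frame
enhanced = overlay_edges(frame)
```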

Relevance: 90.00%

Abstract:

Efficient and effective approaches to dealing with the vast amount of visual information available nowadays are highly sought after. This is particularly the case for image collections, both personal and commercial. Due to the magnitude of these ever-expanding image repositories, annotating all images is infeasible, and search in such an image collection therefore becomes inherently difficult. Although content-based image retrieval techniques have shown much potential, such approaches suffer from various problems that make them difficult to adopt in practice. In this paper, we follow a different approach, namely browsing image databases for image retrieval. In our Honeycomb Image Browser, large image databases are visualised on a hexagonal lattice with image thumbnails occupying the hexagons. Arranged in a space-filling manner, visually similar images are located close together, enabling large image datasets to be navigated hierarchically. Various browsing tools are incorporated to allow interactive exploration of the database. Experimental results confirm that our approach affords efficient image retrieval.
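
A sketch of the standard axial-coordinate maths for laying out a pointy-top hexagonal lattice, of the kind needed to place thumbnails; the browser's own placement scheme may differ:

```python
# Map axial hex coordinates (q, r) to the 2-D centre of a pointy-top hexagon.
import math

def hex_centre(q, r, size=1.0):
    x = size * math.sqrt(3) * (q + r / 2)
    y = size * 1.5 * r
    return x, y

# the six neighbours of the hexagon at the origin
neighbours = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
print([hex_centre(q, r) for q, r in neighbours])
```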