961 results for Techniques: image processing


Relevance:

30.00%

Publisher:

Abstract:

Fourier-phase information is important in determining the appearance of natural scenes, but the structure of natural-image phase spectra is highly complex and difficult to relate directly to human perceptual processes. This problem is addressed by extending previous investigations of human visual sensitivity to the randomisation and quantisation of Fourier phase in natural images. The salience of the image changes induced by these physical processes is shown to depend critically on the nature of the original phase spectrum of each image, and the processes of randomisation and quantisation are shown to be perceptually equivalent provided that they shift image phase components by the same average amount. These results are explained by assuming that the visual system is sensitive to those phase-domain image changes which also alter certain global higher-order image statistics. This assumption may be used to place constraints on the likely nature of cortical processing: mechanisms which correlate the outputs of a bank of relative-phase-sensitive units are found to be consistent with the patterns of sensitivity reported here.
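The randomisation process described above is easy to make concrete. The sketch below is an illustrative reconstruction, not the authors' experimental code (the function name and parameterisation are mine): it shifts each Fourier phase component of an image by a random amount of up to ±alpha·π while leaving the amplitude spectrum untouched, antisymmetrising the shift field so the result remains a real image.

```python
import numpy as np

def randomise_phase(image, alpha, rng=None):
    """Shift each Fourier phase component by an amount drawn uniformly
    from [-alpha*pi, +alpha*pi]; amplitudes are left untouched.
    alpha=0 returns the image unchanged, alpha=1 fully randomises phase."""
    rng = np.random.default_rng() if rng is None else rng
    spectrum = np.fft.fft2(image)
    shift = rng.uniform(-alpha * np.pi, alpha * np.pi, size=image.shape)
    # Antisymmetrise the shift field (shift(-k) = -shift(k)) so the
    # modified spectrum stays conjugate-symmetric and the output real.
    shift = (shift - np.roll(shift[::-1, ::-1], 1, axis=(0, 1))) / 2
    new_phase = np.angle(spectrum) + shift
    return np.fft.ifft2(np.abs(spectrum) * np.exp(1j * new_phase)).real
```

Because the amplitude spectrum is preserved exactly, any perceptual change in the output is attributable to phase alone, which is the manipulation the abstract describes.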

Relevance:

30.00%

Publisher:

Abstract:

This thesis studies three techniques for improving the performance of some standard forecasting models, applied to energy demand and prices; the focus is on forecasting demand and price one day ahead. First, the wavelet transform was used as a pre-processing procedure with two approaches: multicomponent forecasts and direct forecasts. We compared these approaches empirically and found that the former consistently outperformed the latter. Second, adaptive models were introduced to continuously update model parameters in the testing period by combining filters with standard forecasting methods. Among these adaptive models, the adaptive LR-GARCH model was proposed for the first time in this thesis. Third, with regard to the noise distributions of the dependent variables in the forecasting models, we used either Gaussian or Student-t distributions. The thesis proposes a novel algorithm to infer the parameters of Student-t noise models. The method extends earlier work for models that are linear in their parameters to the non-linear multilayer perceptron, and so broadens the range of models that can use a Student-t noise distribution. Because these techniques cannot stand alone, they must be combined with prediction models to improve their performance. We combined them with several standard forecasting models: the multilayer perceptron, radial basis functions, linear regression, and linear regression with GARCH. These techniques and forecasting models were applied to two datasets from the UK energy markets: daily electricity demand (which is stationary) and gas forward prices (non-stationary). The results showed that these techniques substantially improved prediction performance.
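The multicomponent strategy can be sketched as follows. This is a minimal illustration using a one-level Haar transform and AR(1) component forecasts; the thesis's actual wavelets, models and data are not reproduced, and all function names are mine.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    """Invert the one-level Haar transform."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def ar1_forecast(series):
    """One-step-ahead AR(1) forecast fitted by least squares."""
    y, z = series[1:], series[:-1]
    denom = np.dot(z, z)
    phi = np.dot(z, y) / denom if denom > 0 else 0.0
    return phi * series[-1]

def multicomponent_forecast(x):
    """Forecast each wavelet component separately, then invert the
    transform -- one new (a, d) pair yields the next two samples."""
    a, d = haar_decompose(x)
    full = haar_reconstruct(np.append(a, ar1_forecast(a)),
                            np.append(d, ar1_forecast(d)))
    return full[-2:]
```

The direct-forecast alternative would apply `ar1_forecast` (or any other model) to `x` itself; the comparison between the two is the empirical question the abstract addresses.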

Relevance:

30.00%

Publisher:

Abstract:

Queueing theory is an effective tool in the analysis of computer communication systems. Many results in queueing analysis have been derived in the form of Laplace- and z-transform expressions. Accurate inversion of these transforms is very important in the study of computer systems, but the inversion is very often difficult. In this thesis, methods for solving some of these queueing problems by use of digital signal processing techniques are presented. The z-transform of the queue length distribution for the M/G^Y/1 system is derived. Two numerical methods for the inversion of the transform, together with the standard numerical technique for solving transforms with multiple queue-state dependence, are presented. Bilinear and Poisson transform sequences are presented as useful ways of representing continuous-time functions in numerical computations.
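One standard DSP route to such inversions (a hedged sketch, not one of the thesis's specific methods) is to sample the transform at equally spaced points on a circle inside its region of convergence and apply a DFT: for a probability generating function P(z) = Σ p_k z^k, the DFT of the samples returns the p_k up to an aliasing error that shrinks as the number of samples grows.

```python
import numpy as np

def invert_z_transform(G, n_terms, radius=1.0, n_samples=1024):
    """Recover coefficients g_k of G(z) = sum_k g_k z^k by sampling G on
    a circle of the given radius (inside the region of convergence) and
    applying a DFT.  Aliasing error decays as n_samples grows."""
    z = radius * np.exp(2j * np.pi * np.arange(n_samples) / n_samples)
    g = np.fft.fft(G(z)).real / n_samples
    return g[:n_terms] / radius ** np.arange(n_terms)

# Sanity check against a queue-like example with a known answer:
# a geometric distribution, P(z) = (1-p)/(1-p*z), so p_k = (1-p)*p^k.
p = 0.4
probs = invert_z_transform(lambda z: (1 - p) / (1 - p * z), 10)
```

For heavier-tailed distributions a radius slightly below the radius of convergence trades aliasing error against round-off amplification, which is the usual tuning concern with this technique.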

Relevance:

30.00%

Publisher:

Abstract:

The growth and advances made in computer technology have led to the present interest in picture processing techniques. In image data compression the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders considered. Beyond this the techniques of source coding differ substantially: one technique involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons between the schemes indicate the best one to use for a particular channel error rate.
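The two source-coding strategies can be contrasted in a few lines. This is an illustrative sketch with an orthonormal DCT built from first principles; the thesis's actual coder parameters, bit allocation and quantisers are not reproduced, and the function names are mine.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: coefficients = D @ block @ D.T."""
    x = np.arange(n)
    D = np.cos(np.pi * (2 * x[None, :] + 1) * x[:, None] / (2 * n))
    D *= np.sqrt(2.0 / n)
    D[0] /= np.sqrt(2.0)
    return D

def zonal_code(block, keep):
    """Zonal coding: keep the fixed low-frequency zone u + v < keep."""
    D = dct_matrix(block.shape[0])
    c = D @ block @ D.T
    u, v = np.indices(c.shape)
    c[u + v >= keep] = 0.0
    return D.T @ c @ D          # reconstruct from the retained zone

def threshold_code(block, n_keep):
    """Threshold coding: keep the n_keep largest-magnitude coefficients,
    wherever they fall in the spectrum."""
    D = dct_matrix(block.shape[0])
    c = D @ block @ D.T
    cutoff = np.sort(np.abs(c).ravel())[-n_keep]
    c[np.abs(c) < cutoff] = 0.0
    return D.T @ c @ D
```

Zonal coding keeps a fixed low-frequency zone regardless of content; threshold coding adapts the retained set to each block, usually at the cost of also transmitting the coefficient positions.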

Relevance:

30.00%

Publisher:

Abstract:

Textured regions in images can be defined as those regions containing a signal which has some measure of randomness. This thesis is concerned with the description of homogeneous texture in terms of a signal model and with the development of a means of spatially separating regions of differing texture. A signal model is presented which is based on the assumption that a large class of textures can adequately be represented by their Fourier amplitude spectra only, with the phase spectra modelled by a random process. It is shown that, under mild restrictions, the above model leads to a stationary random process. Results indicate that this assumption is valid for those textures lacking significant local structure. A texture segmentation scheme is described which separates textured regions on the assumption that each texture has a different distribution of signal energy within its amplitude spectrum. A set of bandpass quadrature filters is applied to the original signal and the envelope of the output of each filter taken. The filters are designed to have maximum mutual energy concentration in both the spatial and spatial-frequency domains, thus providing high spatial and class resolutions. The outputs of these filters are processed using a multi-resolution classifier which applies a clustering algorithm to the data at a low spatial resolution and then performs a boundary-estimation operation in which processing is carried out over a range of spatial resolutions. Results demonstrate a high performance, in terms of classification error, for a range of synthetic and natural textures.
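The quadrature-filter-plus-envelope idea can be illustrated in one dimension (a minimal sketch under my own parameter choices, not the thesis's filter design, which optimises joint spatial/spatial-frequency concentration): a complex Gabor kernel acts as a quadrature pair, and the magnitude of the complex response is the envelope.

```python
import numpy as np

def gabor_envelope(signal, freq, sigma):
    """Filter with a quadrature (complex Gabor) pair tuned to `freq`
    and return the envelope, i.e. the magnitude of the complex response."""
    t = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-t**2 / (2 * sigma**2)) * np.exp(2j * np.pi * freq * t)
    kernel /= np.sum(np.abs(kernel))
    real = np.convolve(signal, kernel.real, mode='same')
    imag = np.convolve(signal, kernel.imag, mode='same')
    return np.hypot(real, imag)     # smooth local-energy estimate
```

A 2-D version replaces the kernel with an oriented Gabor function; the envelope then gives a smooth local-energy map per filter, suitable as input to the clustering stage.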

Relevance:

30.00%

Publisher:

Abstract:

Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by the compression schemes. The validity of this approach is demonstrated by showing that the fractal complexity measure of fractal dimension is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to speed up substantially the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
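A space-filling scan of the kind described is easy to state concretely. The sketch below is the standard Hilbert-curve index-to-coordinate conversion (not code from the thesis, which does not identify which curve is optimal): it converts a position d along the curve into (x, y) coordinates, so an n×n image can be read out in an order that preserves spatial locality.

```python
def hilbert_d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n x n grid
    (n must be a power of two).  Consecutive d values land on adjacent
    pixels, which is what makes the scan order locality-preserving."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                       # rotate the quadrant as needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# Scan order for an 8x8 image: visit pixels along the curve.
scan = [hilbert_d2xy(8, d) for d in range(64)]
```

Re-ordering pixels along `scan` before run-length or predictive coding tends to produce longer smooth runs than raster order, which is the compression benefit the abstract refers to.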

Relevance:

30.00%

Publisher:

Abstract:

The advent of personal communication systems within the last decade has depended upon the utilization of advanced digital schemes for source and channel coding and for modulation. The inherent digital nature of the communications processing has allowed the convenient incorporation of cryptographic techniques to implement security in these communications systems. There are various security requirements, of both the service provider and the mobile subscriber, which may be provided for in a personal communications system. Such security provisions include the privacy of user data, the authentication of communicating parties, the provision for data integrity, and the provision for both location confidentiality and party anonymity. This thesis is concerned with an investigation of the private-key and public-key cryptographic techniques pertinent to the security requirements of personal communication systems, and an analysis of the security provisions of Second-Generation personal communication systems is presented. Particular attention has been paid to the properties of the cryptographic protocols employed in current Second-Generation systems. It has been found that certain security-related protocols implemented in these systems have specific weaknesses. A theoretical evaluation of these protocols has been performed using formal analysis techniques, and certain assumptions made during the development of the systems are shown to contribute to the security weaknesses. Various attack scenarios which exploit these protocol weaknesses are presented. The Fiat-Shamir zero-knowledge cryptosystem is presented as an example of how asymmetric-algorithm cryptography may be employed as part of an improved security solution. Various modifications to this cryptosystem have been evaluated, and their critical parameters are shown to be capable of being optimized to suit a particular application. The implementation of such a system using current smart card technology has been evaluated.
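The basic Fiat-Shamir scheme is compact enough to sketch (a toy with numbers far too small to be secure, and not the modified variants evaluated in the thesis): the prover convinces the verifier that it knows a square root s of a public value v modulo n = pq, without revealing s.

```python
import random

def fiat_shamir_round(n, s, v, rng=random.Random(42)):
    """One identification round.  An impostor who does not know s can
    pass a single round with probability 1/2, so rounds are repeated."""
    r = rng.randrange(2, n)                 # prover's ephemeral secret
    x = pow(r, 2, n)                        # commitment sent to verifier
    e = rng.randrange(2)                    # verifier's random challenge bit
    y = (r * pow(s, e, n)) % n              # prover's response
    return pow(y, 2, n) == (x * pow(v, e, n)) % n   # verifier's check

# Toy key: n = p*q with p, q secret primes (real systems use ~1024+ bits).
p, q = 1009, 1013
n = p * q
s = 123457 % n          # prover's secret
v = pow(s, 2, n)        # public value
accepted = all(fiat_shamir_round(n, s, v) for _ in range(20))
```

The check always holds for an honest prover because y² = r²·s²ᵉ = x·vᵉ (mod n); twenty rounds drive a cheater's success probability below one in a million.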

Relevance:

30.00%

Publisher:

Abstract:

Category-specific disorders are frequently explained by suggesting that living and non-living things are processed in separate subsystems (e.g. Caramazza & Shelton, 1998). If subsystems exist, there should be benefits for normal processing, beyond the influence of structural similarity. However, no previous study has separated the relative influences of similarity and semantic category. We created novel examples of living and non-living things so category and similarity could be manipulated independently. Pre-tests ensured that our images evoked appropriate semantic information and were matched for familiarity. Participants were trained to associate names with the images and then performed a name-verification task under two levels of time pressure. We found no significant advantage for living things alongside strong effects of similarity. Our results suggest that similarity rather than category is the key determinant of speed and accuracy in normal semantic processing. We discuss the implications of this finding for neuropsychological studies. © 2005 Psychology Press Ltd.

Relevance:

30.00%

Publisher:

Abstract:

The assessment of the reliability of systems which learn from data is a key issue to investigate thoroughly before the actual application of information processing techniques to real-world problems. In recent years Gaussian processes and Bayesian neural networks have come to the fore, and in this thesis their generalisation capabilities are analysed from theoretical and empirical perspectives. Upper and lower bounds on the learning curve of Gaussian processes are investigated in order to estimate the amount of data required to guarantee a certain level of generalisation performance. In this thesis we analyse the effects on the bounds and the learning curve induced by the smoothness of stochastic processes described by four different covariance functions. We also explain the early, linearly decreasing behaviour of the curves, and we investigate the asymptotic behaviour of the upper bounds. The effects of the noise and of the characteristic lengthscale of the stochastic process on the tightness of the bounds are also discussed. The analysis is supported by several numerical simulations. The generalisation error of a Gaussian process is affected by the dimension of the input vector and may be decreased by input-variable reduction techniques. In conventional approaches to Gaussian process regression, the positive definite matrix estimating the distance between input points is often taken to be diagonal. In this thesis we show that a general distance matrix is able to estimate the effective dimensionality of the regression problem as well as to discover the linear transformation from the manifest variables to the hidden-feature space, with a significant reduction of the input dimension. Numerical simulations confirm the significant superiority of the general distance matrix with respect to the diagonal one. The thesis also presents an empirical investigation of the generalisation errors of neural networks trained by two Bayesian algorithms, the Markov chain Monte Carlo method and the evidence framework; the neural networks have been trained on the task of labelling segmented outdoor images.
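The role of a general (non-diagonal) distance matrix can be sketched as follows (a minimal illustration in my own notation, not the thesis's formulation): the kernel is a squared exponential whose metric is induced by a linear map A, so a full A can absorb the linear transformation to the hidden-feature space.

```python
import numpy as np

def gp_posterior_mean(X, y, Xs, A, noise=0.1):
    """GP regression with kernel k(x, x') = exp(-0.5 * ||A (x - x')||^2).
    A diagonal A recovers conventional per-dimension lengthscales (ARD);
    a general A additionally captures a linear feature transformation."""
    def K(P, Q):
        Pt, Qt = P @ A.T, Q @ A.T          # map inputs to feature space
        sq = (np.sum(Pt**2, 1)[:, None] + np.sum(Qt**2, 1)[None, :]
              - 2.0 * Pt @ Qt.T)
        return np.exp(-0.5 * np.maximum(sq, 0.0))
    Kxx = K(X, X) + noise**2 * np.eye(len(X))
    return K(Xs, X) @ np.linalg.solve(Kxx, y)
```

If the target depends on the inputs only through a projection w·x, setting A = w[None, :] makes the kernel blind to the irrelevant directions: the effective dimensionality drops to one, which is the phenomenon the abstract describes.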

Relevance:

30.00%

Publisher:

Abstract:

This thesis is organised into four parts. In Part I relevant literature is reviewed and presented in three chapters. Chapter 1 examines legal and cultural factors in identifying the boundaries of rape. Chapter 2 discusses idiographic features and causal characteristics of rape suspects and victims. Chapter 3 reviews the evidence relating to attitudes toward rape, attribution of responsibility to victims, and the routine management of rape cases by the police. Part II comprises an experimental investigation of observer perception of the victims of violent crime. The experiment examined the processes by which impressions were attributed to victims of personal crime. The results suggested that discrepancies from observers' stereotypes were an important factor in their polarisation of victim ratings. The relevance of examining both the structure and process of impression formation was highlighted. Part III describes an extensive field study in which the West Midlands police files on rape for an eight-year period (1971-1978) were analysed. The study revealed a large number of interesting findings related to a wide range of relevant features of the crime. Further, the impact of common misconceptions and "myths" of rape was investigated across the legal and judicial processing of rape cases. The evidence suggests that these "myths" lead to differential biasing effects at different stages in the process. In the final part of the thesis, salient issues raised by the experiment and field study are discussed within the framework outlined in Part I. Potential implications for future developments and research are presented.

Relevance:

30.00%

Publisher:

Abstract:

A technique is presented for the development of a high-precision and high-resolution Mean Sea Surface (MSS) model. The model utilises radar altimetric sea surface heights extracted from the geodetic phase of the ESA ERS-1 mission. The methodology uses a modified Le Traon et al. (1995) cubic-spline fit of dual ERS-1 and TOPEX/Poseidon crossovers for the minimisation of radial orbit error. The procedure then uses Fourier-domain processing techniques for spectral optimal interpolation of the mean sea surface in order to reduce residual errors within the model. Additionally, a multi-satellite mean sea surface integration technique is investigated to supplement the first model with additional enhanced data from the GEOSAT geodetic mission. The methodology employs a novel technique that combines the Stokes and Vening-Meinesz transformations, again in the spectral domain. This allows the presentation of a new enhanced GEOSAT gravity anomaly field.
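The spectral-domain link between geoid height and gravity anomaly can be sketched in the flat-Earth approximation (a deliberately simplified stand-in for the thesis's combined Stokes/Vening-Meinesz formulation, with my own function names): each Fourier mode of the geoid is scaled by γ|k|, where γ is the normal gravity.

```python
import numpy as np

GAMMA = 9.81  # normal gravity, m/s^2

def geoid_to_gravity_anomaly(geoid, dx):
    """Planar (flat-Earth) inverse Stokes transform: scale each Fourier
    mode of the geoid height by GAMMA * |k| to obtain the gravity
    anomaly (in m/s^2) on the same grid; dx is the grid spacing in m."""
    ky = 2 * np.pi * np.fft.fftfreq(geoid.shape[0], d=dx)
    kx = 2 * np.pi * np.fft.fftfreq(geoid.shape[1], d=dx)
    k = np.hypot(ky[:, None], kx[None, :])
    return np.real(np.fft.ifft2(GAMMA * k * np.fft.fft2(geoid)))
```

Multiplying the result by 1e5 converts m/s² to mGal, the unit usually quoted for anomaly fields; the full spherical Stokes kernel the thesis uses reduces to this γ|k| scaling at short wavelengths.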

Relevance:

30.00%

Publisher:

Abstract:

The Alborz Mountain range separates the northern part of Iran from the southern part. It also isolates a narrow coastal strip to the south of the Caspian Sea from the Central Iran plateau. Until the 1950s, communication between the south and north was via two roads and one rail link. In 1963 work was completed on a major access road via the Haraz Valley (the most physically hostile area in the region). From the beginning the road was plagued by accidents resulting from unstable slopes on either side of the valley. Heavy casualties persuaded the government to undertake major engineering works to eliminate "black spots" and make the road safe. However, despite substantial and prolonged expenditure the problems were not solved, and casualties increased steadily with the increase in traffic using the road. Another road was built to bypass the Haraz road and opened to traffic in 1983. But closure of the Haraz road was still impossible because of the growth of settlements along the route and the need for access to other installations such as the Lar Dam. The aim of this research was to explore the possibility of applying Landsat MSS imagery to locating black spots along the road and to the associated instability problems. Landsat data had not previously been applied to highway engineering problems in the study area. Aerial photographs are in general better than satellite images for detailed mapping, but Landsat images are superior for reconnaissance and adequate for mapping at the 1:250,000 scale. The broad overview and lack of distortion in the Landsat imagery make the images ideal for structural interpretation. The results of Landsat digital image analysis showed that certain rock types and structural features can be delineated and mapped. The most unstable areas, comprising steep slopes free of vegetation cover, can be identified using image processing techniques. Structural lineaments revealed by the image analysis led to improved results (delineation of unstable features). Damavand Quaternary volcanics were found to be the dominant rock type along a 40 km stretch of the road. These rocks are inherently unstable and partly responsible for the difficulties along the road. For more detailed geological and morphological interpretation a sample of small subscenes was selected and analysed. A specially developed image analysis package was designed at Aston for use on a non-specialised computing system. Using this package a new and unique method for image classification was developed, allowing accurate delineation of the critical features of the study area.
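The flavour of such a classification can be conveyed with a minimal unsupervised clusterer over pixel spectral signatures (an illustrative stand-in only; the thesis's own classification method is not described in enough detail here to reproduce, and the function name is mine).

```python
import numpy as np

def kmeans_classify(bands, k, n_iter=20):
    """Cluster the pixels of a multi-band image by spectral signature.
    `bands` has shape (n_bands, rows, cols); returns a label map."""
    X = bands.reshape(bands.shape[0], -1).T.astype(float)  # pixels x bands
    # Deterministic initialisation: k pixels spread through the image.
    centres = X[np.linspace(0, len(X) - 1, k).astype(int)].copy()
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)                # nearest spectral centre
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(0)
    return labels.reshape(bands.shape[1:])
```

On real Landsat MSS data each pixel carries four band values; clusters then correspond to cover types such as bare steep slopes versus vegetated ground, which is the distinction the abstract exploits.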

Relevance:

30.00%

Publisher:

Abstract:

Separate physiological mechanisms which respond to spatial and temporal stimulation have been identified in the visual system. Some pathological conditions may selectively affect these mechanisms, offering a unique opportunity to investigate how psychophysical and electrophysiological tests reflect these visual processes, and thus to enhance the use of the tests in clinical diagnosis. Amblyopia and optical blur were studied, representing spatial visual defects of neural and optical origin, respectively. Selective defects of the visual pathways were also studied: optic neuritis, which affects the optic nerve, and dementia of the Alzheimer type, in which the higher association areas are believed to be affected but the primary projections spared. Seventy control subjects from 10 to 79 years of age were investigated. This provided material for an additional study of the effect of age on the psychophysical and electrophysiological responses. Spatial processing was measured by visual acuity, the contrast sensitivity function, or spatial modulation transfer function (MTF), and the pattern-reversal and pattern onset-offset visual evoked potential (VEP). Temporal, or luminance, processing was measured by the de Lange curve, or temporal MTF, and the flash VEP. The pattern VEP was shown to reflect the integrity of the optic nerve, the geniculostriate pathway and the primary projections, and was related to high-temporal-frequency processing. The individual components of the flash VEP differed in their characteristics. The results suggested that the P2 component reflects the function of the higher association areas and is related to low-temporal-frequency processing, while the P1 component reflects the primary projection areas. The combination of a delayed flash P2 component and a normal-latency pattern VEP appears to be specific to dementia of the Alzheimer type and represents an important diagnostic test for this condition.

Relevance:

30.00%

Publisher:

Abstract:

All-optical data processing is expected to play a major role in future optical communications. Nonlinear effects in optical fibres have many attractive features and great, as yet not fully explored, potential for optical signal processing. Here we overview our recent advances in developing novel techniques and approaches to all-optical processing based on optical fibre nonlinearities.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents an experimental investigation of different effects and techniques that can be used to upgrade legacy WDM communication systems. The main issue in upgrading legacy systems is that the fundamental setup, including component settings such as EDFA gains, must not be altered; the improvement must therefore be carried out at the network terminal. A general introduction to optical fibre communications is given at the beginning, including optical communication components and system impairments. Experimental techniques for performing laboratory optical transmission experiments are presented before the experimental work of this thesis. These techniques include optical transmitter and receiver designs as well as the design and operation of the recirculating loop. The main experimental work comprises three studies. The first involves the development of line monitoring equipment that can be reliably used to monitor the performance of optically amplified long-haul undersea systems. This equipment can instantly locate faults along a legacy communication link, which in turn enables rapid repair and hence upgrading of the legacy system. The second study investigates the effect of changing the proportion of transmitted 1s and 0s on the performance of a WDM system. This effect can, in reality, be seen in some coding systems, e.g. the forward error correction (FEC) technique, where the proportion of 1s and 0s is changed at the transmitter by adding extra bits to the original bit sequence. The final study presents transmission results after all-optical format conversion from NRZ to CSRZ and from RZ to CSRZ using a semiconductor optical amplifier in a nonlinear optical loop mirror (SOA-NOLM). This study is mainly motivated by the fact that all-optical processing, including format conversion, has become attractive for future data networks, which are proposed to be all-optical. The feasibility of the SOA-NOLM device for converting single and WDM signals is described. The optical conversion bandwidth and its limitations for WDM conversion are also investigated. All studies in this thesis employ 10 Gbit/s single-channel or WDM signals transmitted over a dispersion-managed fibre span in the recirculating loop. The fibre span is composed of single-mode fibre (SMF) whose loss and dispersion are compensated using erbium-doped fibre amplifiers (EDFAs) and dispersion-compensating fibre (DCF), respectively. Different configurations of the fibre span are presented in different parts.
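Why the 1s/0s proportion matters can be seen in a toy receiver model. The assumptions here are mine, not the thesis's measurements: Gaussian noise with a larger variance on marks than on spaces (as is typical of optically amplified links) and a fixed decision threshold.

```python
import math

def ber(mark_ratio, mu1=1.0, mu0=0.0, sig1=0.2, sig0=0.05, thresh=0.5):
    """Bit-error rate with a fixed decision threshold.  Because marks
    are noisier than spaces, shifting the transmitted mark/space
    proportion shifts the overall error rate even though the threshold
    is unchanged."""
    Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))
    p_mark_err = Q((mu1 - thresh) / sig1)     # a 1 read as a 0
    p_space_err = Q((thresh - mu0) / sig0)    # a 0 read as a 1
    return mark_ratio * p_mark_err + (1 - mark_ratio) * p_space_err
```

With these numbers, raising the mark ratio raises the BER; FEC bit-stuffing changes exactly this ratio, which is the kind of dependence the second study measures experimentally.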