35 results for Mesh generation from image data
in Aston University Research Archive
Abstract:
The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression, the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost entirely on transform coding. The transform adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the two source coding techniques differ substantially: one involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence and then in the presence of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. The results obtained from each combined source coding and channel error protection scheme, each of which is described in full, are then presented. Comparisons between the schemes indicate the best one to use for a given channel error rate.
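A minimal sketch of the two source-coding strategies the thesis compares, assuming a standard orthonormal block DCT; the 8x8 block size and retention settings are illustrative choices, not taken from the thesis:

```python
# Zonal coding keeps a fixed low-frequency zone of DCT coefficients;
# threshold coding keeps the largest-magnitude coefficients wherever they lie.
import numpy as np
from scipy.fft import dctn, idctn

def zonal_code(block, zone=4):
    """Keep only the top-left zone x zone low-frequency DCT coefficients."""
    c = dctn(block, norm="ortho")
    mask = np.zeros_like(c)
    mask[:zone, :zone] = 1.0
    return idctn(c * mask, norm="ortho")

def threshold_code(block, keep_fraction=0.25):
    """Keep the keep_fraction largest-magnitude DCT coefficients."""
    c = dctn(block, norm="ortho")
    thresh = np.quantile(np.abs(c), 1.0 - keep_fraction)
    return idctn(np.where(np.abs(c) >= thresh, c, 0.0), norm="ortho")

block = np.random.default_rng(0).random((8, 8))  # stand-in 8x8 image block
for name, rec in [("zonal", zonal_code(block)), ("threshold", threshold_code(block))]:
    print(name, "RMSE:", np.sqrt(np.mean((block - rec) ** 2)))
```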
Abstract:
We consider return-to-zero (RZ) pulses with random phase modulation propagating in a nonlinear channel (modelled by the integrable nonlinear Schrödinger equation, NLSE). We suggest two different models for the phase fluctuations of the optical field: (i) Gaussian short-correlated fluctuations and (ii) a generalized telegraph process. Using a rectangular pulse shape, we demonstrate that the presence of phase fluctuations of both types strongly influences the number of solitons generated in the channel. It is also shown that increasing the correlation time of the random phase fluctuations affects the coherent content of a pulse in a non-trivial way. The results obtained have potential consequences for all-optical processing and the design of optical decision elements.
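A toy sketch of the setting: a rectangular RZ pulse carrying telegraph-process phase noise is propagated through the focusing NLSE by split-step Fourier integration. All pulse, grid and noise parameters are illustrative assumptions, and counting the generated solitons (which requires a Zakharov-Shabat eigenvalue analysis) is omitted:

```python
# Propagate i u_z + u_tt/2 + |u|^2 u = 0 with a random telegraph phase on the input.
import numpy as np

rng = np.random.default_rng(1)
n, T = 1024, 40.0                            # grid points, time window
t = np.linspace(-T / 2, T / 2, n, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(n, T / n)     # angular frequencies

# Telegraph phase: flips between +/- theta with correlation time t_corr.
theta, t_corr, dt_grid = 0.5, 1.0, T / n
state, s = np.empty(n), 1.0
for i in range(n):
    if rng.random() < dt_grid / t_corr:      # Poisson switching events
        s = -s
    state[i] = s
u = np.where(np.abs(t) < 5.0, 1.0, 0.0) * np.exp(1j * theta * state)

dz, steps = 0.01, 500                        # propagation step and distance
for _ in range(steps):                       # symmetric split-step Fourier
    u = np.fft.ifft(np.exp(-0.5j * w**2 * dz / 2) * np.fft.fft(u))
    u *= np.exp(1j * np.abs(u) ** 2 * dz)    # nonlinear half of the step
    u = np.fft.ifft(np.exp(-0.5j * w**2 * dz / 2) * np.fft.fft(u))
print("output peak power:", np.max(np.abs(u)) ** 2)
```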
Abstract:
Generation of picosecond pulses with a peak power in excess of 7W and a duration of 24ps from a gain-switched InGaN diode laser is demonstrated for the first time.
Abstract:
This thesis presents a detailed, experiment-based study of the generation of ultrashort optical pulses from diode lasers. Simple and cost-effective techniques were used to generate high-power, high-quality short optical pulses in various wavelength windows. The major achievements presented in the thesis are summarised as follows. High-power pulse generation is one of the major topics discussed in the thesis. Although gain switching is the simplest way to generate ultrashort pulses, it proves quite effective for delivering high-energy pulses, provided that pumping pulses with extremely fast rise times and sufficiently high amplitude are applied to specially designed pulse generators. In an experiment on a grating-coupled surface-emitting laser (GCSEL), peak power as high as 1W was achieved even when the spectral bandwidth was kept within 0.2nm. In another experiment, violet picosecond pulses with peak power as high as 7W were achieved when intense electrical pulses were applied, on top of an optimised DC bias, to pump an InGaN violet diode laser. We consider that the physical mechanism of this phenomenon may be attributed to the self-organised quantum-dot structure in the laser. Control of pulse quality, including spectral quality and temporal profile, is an important issue for high-power pulse generation. The methods of pulse-quality control described in the thesis are also based on simple and effective techniques. For instance, the GCSEL used in our experiment has a specially designed air-grating structure for out-coupling of optical signals; a tiny flat aluminium mirror placed close to the grating section yielded a wavelength tuning range of over 100nm and a best sideband suppression ratio of 40dB. Self-seeding, an effective technique for spectral control of pulsed lasers, was demonstrated for the first time in a violet diode laser. In addition, control of the temporal profile of the pulse is demonstrated in an overdriven DFB laser, where wavelength-tuneable fibre Bragg gratings were used to tailor the large energy tail of the high-power pulse. The whole system was compact and robust. The ultimate purpose of our study is to design a new family of compact ultrafast diode lasers. Some practical ideas for laser design based on gain-switched and Q-switched devices are provided at the end.
Abstract:
Digital image processing is exploited in many diverse applications, but the size of digital images places excessive demands on current storage and transmission technology. Image data compression is required to permit further use of digital image processing. Conventional image compression techniques based on statistical analysis have reached a saturation level, so it is necessary to explore more radical methods. This thesis is concerned with novel methods, based on the use of fractals, for achieving significant compression of image data within reasonable processing time without introducing excessive distortion. Images are modelled as fractal data and this model is exploited directly by compression schemes. The validity of this approach is demonstrated by showing that the fractal dimension, a measure of fractal complexity, is an excellent predictor of image compressibility. A method of fractal waveform coding is developed which has low computational demands and performs better than conventional waveform coding methods such as PCM and DPCM. Fractal techniques based on the use of space-filling curves are developed as a mechanism for hierarchical application of conventional techniques. Two particular applications are highlighted: the re-ordering of data during image scanning and the mapping of multi-dimensional data to one dimension. It is shown that there are many possible space-filling curves which may be used to scan images, and that selection of an optimum curve leads to significantly improved data compression. The multi-dimensional mapping property of space-filling curves is used to substantially speed up the lookup process in vector quantisation. Iterated function systems are compared with vector quantisers, and the computational complexity of iterated function system encoding is also reduced by using the efficient matching algorithms identified for vector quantisers.
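As an illustration of the scanning idea, here is a minimal sketch using the best-known space-filling curve, the Hilbert curve (the thesis itself compares many candidate curves); the 16x16 stand-in image and curve order are illustrative:

```python
# Re-order image pixels along a Hilbert curve so that spatially close pixels
# stay close in the 1-D scan, improving downstream 1-D coding.
import numpy as np

def d2xy(order, d):
    """Map distance d along a Hilbert curve filling a 2**order square to (x, y)."""
    x = y = 0
    s, t = 1, d
    while s < 2 ** order:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant if needed
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x, y = x + s * rx, y + s * ry
        t //= 4
        s *= 2
    return x, y

img = np.arange(256).reshape(16, 16)     # stand-in 16x16 image
scan = [img[y, x] for x, y in (d2xy(4, d) for d in range(256))]
print("first 8 samples of the Hilbert scan:", scan[:8])
```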
Abstract:
There is a growing demand for data transmission over digital networks involving mobile terminals. An important class of data for transmission to mobile terminals is image information such as street maps, floor plans and identikit images. This sort of transmission is of particular interest to services such as the police force, fire brigade, medical services and others. Such transmission cannot be provided directly to mobile terminals because of the limited capacity of the mobile channels and the transmission errors caused by multipath (Rayleigh) fading. In this research, the transmission of line diagram images, such as floor plans and street maps, over digital networks involving mobile terminals at transmission rates of 2400 bits/s and 4800 bits/s has been studied. A low bit-rate source encoding technique using geometric codes is found to be suitable for representing line diagram images. In geometric encoding, the amount of data required to represent or store a line diagram image is proportional to the image detail; thus a simple line diagram image requires only a small amount of data. To study the effect of transmission errors due to mobile channels on the transmitted images, error sources (error files), which represent mobile channels under different conditions, have been produced using channel modelling techniques. Satisfactory models of the mobile channel have been obtained when compared with field test measurements. Subjective performance tests have been carried out to evaluate the quality and usefulness of the received line diagram images under various mobile channel conditions, and the effect of mobile transmission errors on the quality of the received images has been determined. To improve the quality of the received images under various mobile channel conditions, forward error correcting (FEC) codes with interleaving and automatic repeat request (ARQ) schemes have been proposed. The performance of the error control codes has been evaluated under various mobile channel conditions. It has been shown that an FEC code with interleaving can be used effectively to improve the quality of the received images under both normal and severe mobile channel conditions. Under normal channel conditions, similar results have been obtained when using ARQ schemes; however, under severe mobile channel conditions, the FEC code with interleaving shows better performance.
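A small sketch of the burst-spreading mechanism that makes FEC with interleaving effective on fading channels; the interleaver dimensions and burst position are illustrative, not the thesis parameters:

```python
# A block interleaver writes codeword symbols by rows and transmits by
# columns, so a contiguous channel burst is dispersed across many codewords.
import numpy as np

depth, width = 8, 16                       # interleaver dimensions
bits = np.arange(depth * width)            # stand-in codeword symbols, 8 rows
tx = bits.reshape(depth, width).T.ravel()  # write rows, transmit columns

rx = tx.copy()
rx[40:48] = -1                             # 8-symbol channel error burst

deint = rx.reshape(width, depth).T.ravel() # de-interleave at the receiver
hits = np.flatnonzero(deint == -1)
print("corrupted positions after de-interleaving:", hits)
# The 8-symbol burst lands as a single error in each of the 8 interleaved
# rows, which a modest FEC code can correct row by row.
```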
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
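A minimal sketch of case (ii), assuming a standard nonlinear least-squares fit; the scan frequencies, linewidth and noise level are illustrative stand-ins for a Brillouin gain scan:

```python
# Fit a Lorentzian to noisy intensity samples taken at fixed frequencies and
# recover the extremum position, plus a covariance-based error estimate.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, a, f0, gamma):
    """Peak amplitude a, centre f0, half-width at half-maximum gamma."""
    return a * gamma**2 / ((f - f0) ** 2 + gamma**2)

rng = np.random.default_rng(0)
f = np.linspace(10.6, 11.0, 41)                  # GHz, fixed scan frequencies
truth = (1.0, 10.82, 0.015)                      # a, f0 (GHz), gamma (GHz)
y = lorentzian(f, *truth) + 0.02 * rng.standard_normal(f.size)

popt, pcov = curve_fit(lorentzian, f, y, p0=(y.max(), f[y.argmax()], 0.02))
print("fitted extremum position f0 = %.4f GHz" % popt[1])
print("1-sigma error from covariance: %.4f GHz" % np.sqrt(pcov[1, 1]))
```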
Abstract:
This paper assesses the impact of regional technological diversification on the emergence of new innovators across EU regions. Integrating analyses from the regional economics, economic geography and technological change literatures, we explore the role that the regional embeddedness of actors characterised by diverse technological competencies may have in fostering novel and sustained interactions leading to new technological combinations. In particular, we test whether greater technological diversification improves regional 'combinatorial' opportunities leading to the emergence of new innovators. The analysis is based on panel data obtained by merging regional economic data from Eurostat and patent data from the CRIOS-PATSTAT database over the period 1997–2006, covering 178 regions across 10 EU countries. Accounting for different measures of economic and innovative activity at the NUTS2 level, our findings suggest that the regional co-location of diverse technological competencies contributes to the entry of new innovators, thereby shaping technological change and industry dynamics. Thus, this paper brings to the fore a better understanding of the relationship between regional diversity and technological change.
Abstract:
Sensory sensitivity is typically measured using behavioural techniques (psychophysics), which rely on observers responding to very large numbers of stimulus presentations. Psychophysics can be problematic when working with special populations, such as children or clinical patients, because they may lack the compliance or cognitive skills to perform the behavioural tasks. We used an auditory gap-detection paradigm to develop an accurate measure of sensory threshold derived from passively recorded MEG data. Auditory evoked responses were elicited by silent gaps of varying durations in an on-going noise stimulus. Source modelling was used to spatially filter the MEG data, and sigmoidal 'cortical psychometric functions' relating response amplitude to gap duration were obtained for each individual participant. Fitting the functions with a curve and estimating the gap duration at which the evoked response exceeded one standard deviation of the prestimulus brain activity provided an excellent prediction of psychophysical threshold. We have thus demonstrated that accurate sensory thresholds can be reliably extracted from MEG data recorded while participants listen passively to a stimulus. Because we required no behavioural task, the method is suitable for studies of populations where variations in cognitive skills or vigilance make traditional psychophysics unsuitable.
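A sketch of the threshold-extraction step under stated assumptions: a sigmoid is fitted to stand-in response amplitudes versus gap duration, and the threshold is read off where the fitted curve first exceeds one standard deviation of the prestimulus baseline. The sigmoid form and all numbers here are illustrative, not the study's data:

```python
# Fit a 'cortical psychometric function' and derive a sensory threshold.
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, top, x50, slope):
    return top / (1.0 + np.exp(-(x - x50) / slope))

gaps = np.array([1, 2, 3, 5, 8, 12, 20, 40], float)        # gap durations, ms
amps = np.array([0.1, 0.2, 0.6, 1.9, 3.2, 3.8, 4.0, 4.1])  # stand-in amplitudes
baseline_sd = 0.5                          # stand-in prestimulus SD

popt, _ = curve_fit(sigmoid, gaps, amps, p0=(amps.max(), 5.0, 2.0))
fine = np.linspace(gaps[0], gaps[-1], 2000)
threshold = fine[np.argmax(sigmoid(fine, *popt) > baseline_sd)]
print("estimated gap-detection threshold: %.2f ms" % threshold)
```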
Abstract:
This letter compares two nonlinear media for simultaneous carrier recovery and generation of frequency-symmetric signals from a 42.7-Gb/s nonreturn-to-zero binary phase-shift-keyed input, by exploiting four-wave mixing in a semiconductor optical amplifier and in a highly nonlinear optical fiber, for use in a phase-sensitive amplifier.
Abstract:
We investigate two numerical procedures for the Cauchy problem in linear elasticity, involving the relaxation of either the given boundary displacements (Dirichlet data) or the prescribed boundary tractions (Neumann data) on the over-specified boundary, in the alternating iterative algorithm of Kozlov et al. (1991). The two mixed direct (well-posed) problems associated with each iteration are solved using the method of fundamental solutions (MFS), in conjunction with the Tikhonov regularization method, while the optimal value of the regularization parameter is chosen via the generalized cross-validation (GCV) criterion. An efficient regularizing stopping criterion, which terminates the iterative procedure at the point where the accumulation of noise becomes dominant and the errors in predicting the exact solutions increase, is also presented. The MFS-based iterative algorithms with relaxation are tested for Cauchy problems for isotropic linear elastic materials in various geometries to confirm the numerical convergence, stability, accuracy and computational efficiency of the proposed method.
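A minimal sketch of the Tikhonov-plus-GCV ingredient used inside each direct solve, applied here to a generic ill-conditioned least-squares system rather than an MFS discretization; the test matrix and noise level are illustrative:

```python
# Tikhonov regularization via SVD, with the parameter chosen by minimising
# the GCV functional G(lam) = ||A x_lam - b||^2 / (m - sum of filter factors)^2.
import numpy as np

def tikhonov_gcv(A, b, lams):
    """Return x_lambda for the lambda in lams minimising the GCV functional."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    best = (np.inf, None)
    for lam in lams:
        f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
        resid = np.linalg.norm((1 - f) * beta) ** 2 \
                + np.linalg.norm(b - U @ beta) ** 2
        gcv = resid / (len(b) - f.sum()) ** 2
        if gcv < best[0]:
            best = (gcv, lam)
    lam = best[1]
    x = Vt.T @ (s / (s**2 + lam**2) * beta)    # regularized solution
    return x, lam

rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 12)       # ill-conditioned test matrix
x_true = rng.standard_normal(12)
b = A @ x_true + 1e-3 * rng.standard_normal(40)
x, lam = tikhonov_gcv(A, b, np.logspace(-8, 0, 50))
print("chosen lambda:", lam)
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```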
Abstract:
We propose and investigate a method for the stable determination of a harmonic function from knowledge of its value and its normal derivative on a part of the boundary of the (bounded) solution domain (Cauchy problem). We reformulate the Cauchy problem as an operator equation on the boundary using the Dirichlet-to-Neumann map. To discretize the obtained operator, we modify and employ a method denoted as Classic II given in [J. Helsing, Faster convergence and higher accuracy for the Dirichlet–Neumann map, J. Comput. Phys. 228 (2009), pp. 2578–2586, Section 3], which is based on Fredholm integral equations and Nyström discretization schemes. Then, for stability reasons, to solve the discretized integral equation we use the method of smoothing projection introduced in [J. Helsing and B.T. Johansson, Fast reconstruction of harmonic functions from Cauchy data using integral equation techniques, Inverse Probl. Sci. Eng. 18 (2010), pp. 381–399, Section 7], which makes it possible to solve the discretized operator equation in a stable way with minor computational cost and high accuracy. With this approach, for sufficiently smooth Cauchy data, the normal derivative can also be accurately computed on the part of the boundary where no data is initially given.
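As a generic illustration of the Nyström ingredient (not the paper's Dirichlet-to-Neumann kernel or the Classic II modification), the following sketch solves a second-kind Fredholm equation on a periodic boundary with trapezoidal quadrature; the kernel and right-hand side are toy choices whose exact solution is u(x) = (4/3) sin x:

```python
# Nystrom discretization: replace the integral by a quadrature rule and solve
# the resulting dense linear system at the quadrature nodes.
import numpy as np

n = 64
t = 2 * np.pi * np.arange(n) / n           # quadrature nodes on the boundary
w = 2 * np.pi / n                          # trapezoid weights (periodic rule)

def kernel(x, y):
    return np.cos(x - y) / (4 * np.pi)     # smooth toy kernel

K = kernel(t[:, None], t[None, :])
f = np.sin(t)                              # toy right-hand side
u = np.linalg.solve(np.eye(n) - w * K, f)  # Nystrom system (I - wK) u = f
print("max error vs exact solution:", np.abs(u - (4.0 / 3.0) * np.sin(t)).max())
# Between nodes, u is recovered by the Nystrom interpolation formula:
# u(x) = f(x) + w * sum_j kernel(x, t_j) * u_j
```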
Abstract:
We consider the problem of stable determination of a harmonic function from knowledge of the solution and its normal derivative on a part of the boundary of the (bounded) solution domain. The alternating method is a procedure to generate an approximation to the harmonic function from such Cauchy data and we investigate a numerical implementation of this procedure based on Fredholm integral equations and Nyström discretization schemes, which makes it possible to perform a large number of iterations (millions) with minor computational cost (seconds) and high accuracy. Moreover, the original problem is rewritten as a fixed point equation on the boundary, and various other direct regularization techniques are discussed to solve that equation. We also discuss how knowledge of the smoothness of the data can be used to further improve the accuracy. Numerical examples are presented showing that accurate approximations of both the solution and its normal derivative can be obtained with much less computational time than in previous works.
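A toy, runnable illustration of the alternating iteration under simplifying assumptions: on a strip 0 < y < 1 that is periodic in x, each Fourier mode decouples, so the two well-posed mixed solves performed per sweep (realised in the paper via integral equations and Nyström discretization) reduce to closed-form per-mode updates. The single mode and all numbers are illustrative:

```python
# Reconstruct the missing data at y = 1 from Cauchy data (value phi and
# normal derivative psi) given at y = 0, for one Fourier mode of a harmonic
# function u = a*cosh(k*y) + b*sinh(k*y) (times exp(i*k*x)).
import numpy as np

k = 1.0                                    # Fourier mode number
a, b = 1.0, 0.5                            # exact mode coefficients
phi, psi = a, k * b                        # Cauchy data at y = 0
exact_trace = a * np.cosh(k) + b * np.sinh(k)
exact_flux = k * (a * np.sinh(k) + b * np.cosh(k))

eta = 0.0                                  # initial Neumann guess at y = 1
for _ in range(30):
    # Mixed solve 1: Dirichlet phi at y=0, Neumann eta at y=1 -> trace at y=1.
    beta = (eta - k * phi * np.sinh(k)) / (k * np.cosh(k))
    trace = phi * np.cosh(k) + beta * np.sinh(k)
    # Mixed solve 2: Neumann psi at y=0, Dirichlet trace at y=1 -> new eta.
    beta2 = psi / k
    alpha2 = (trace - beta2 * np.sinh(k)) / np.cosh(k)
    eta = k * (alpha2 * np.sinh(k) + beta2 * np.cosh(k))
print("trace error:", abs(trace - exact_trace), "flux error:", abs(eta - exact_flux))
```

In this toy setting each full sweep contracts the error by a factor tanh²(k), which illustrates why the method needs many cheap iterations and hence why a fast per-iteration solver matters.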