14 results for finite integral transform technique
in Aston University Research Archive
Abstract:
The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform technique adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Thereafter the techniques of source coding differ substantially: one technique involves zonal coding, the other involves threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each particular combined source and channel error protection scheme, each of which is described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use for a given channel error rate.
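As a rough illustration of the two coefficient-selection strategies the thesis compares (this is not the author's coder), the sketch below applies an orthonormal 2-D DCT to an 8x8 block and retains coefficients either from a fixed low-frequency zone (zonal coding) or by a magnitude test (threshold coding); the block size, zone shape and threshold value are assumptions chosen for the example.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix: C[k, m] = a_k * cos(pi*(2m+1)*k/(2n))."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def dct2(block):
    c = dct_matrix(block.shape[0])
    return c @ block @ c.T

def idct2(coeffs):
    c = dct_matrix(coeffs.shape[0])
    return c.T @ coeffs @ c

block = np.random.default_rng(0).integers(0, 256, (8, 8)).astype(float)
F = dct2(block)

# Zonal coding: keep a fixed triangular zone of low-frequency coefficients.
zone = np.add.outer(np.arange(8), np.arange(8)) <= 3
zonal = np.where(zone, F, 0.0)

# Threshold coding: keep whichever coefficients exceed a magnitude threshold.
keep = np.abs(F) >= 20.0                   # illustrative threshold
thresholded = np.where(keep, F, 0.0)

print("zonal reconstruction error:", np.abs(idct2(zonal) - block).mean())
print("threshold reconstruction error:", np.abs(idct2(thresholded) - block).mean())
```

In a practical coder the retained coefficients would then be quantised and entropy coded, and it is that resulting bit stream which the protection schemes in the thesis guard against channel errors.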
Abstract:
The first part of the thesis compares Roth's method with other methods, in particular the method of separation of variables and the finite cosine transform method, for solving certain elliptic partial differential equations arising in practice. In particular we consider the solution of steady state problems associated with insulated conductors in rectangular slots. Roth's method has two main disadvantages, namely the slow rate of convergence of the double Fourier series and the restrictive form of the allowable boundary conditions. A combined Roth-separation of variables method is derived to remove the restrictions on the form of the boundary conditions, and various Chebyshev approximations are used to try to improve the rate of convergence of the series. All the techniques are then applied to the Neumann problem arising from balanced rectangular windings in a transformer window. Roth's method is then extended to deal with problems other than those resulting from static fields. First we consider a rectangular insulated conductor in a rectangular slot when the current is varying sinusoidally with time. An approximate method is also developed and compared with the exact method. The approximation is then used to consider the problem of an insulated conductor in a slot facing an air gap. We also consider the exact method applied to the determination of the eddy-current loss produced in an isolated rectangular conductor by a transverse magnetic field varying sinusoidally with time. The results obtained using Roth's method are critically compared with those obtained by other authors using different methods. The final part of the thesis investigates further the application of Chebyshev methods to the solution of elliptic partial differential equations, an area where Chebyshev approximations have rarely been used. A Poisson equation with a polynomial term is treated first, followed by a slot problem in cylindrical geometry.
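For orientation only, here is a minimal one-dimensional sketch (not taken from the thesis) of how the finite cosine transform method turns a Neumann problem into algebra. For $u''(x) = f(x)$ on $(0,a)$ with $u'(0) = u'(a) = 0$, define
\[
U_n = \int_0^a u(x)\cos\frac{n\pi x}{a}\,dx, \qquad F_n = \int_0^a f(x)\cos\frac{n\pi x}{a}\,dx .
\]
Integrating by parts twice and using the Neumann conditions gives
\[
-\left(\frac{n\pi}{a}\right)^2 U_n = F_n \quad (n \ge 1),
\]
while the $n = 0$ equation reduces to the compatibility condition $\int_0^a f\,dx = 0$, with $U_0$ free up to the usual additive constant of the Neumann problem. The solution is then recovered from the inversion formula
\[
u(x) = \frac{U_0}{a} + \frac{2}{a}\sum_{n=1}^{\infty} U_n \cos\frac{n\pi x}{a}.
\]
The two-dimensional slot problems treated in the thesis follow the same pattern with a double series, which is where the slow convergence addressed by the Chebyshev approximations arises.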
Abstract:
This paper presents a forecasting technique for forward energy prices, one day ahead. The technique combines a wavelet transform with forecasting models such as the multi-layer perceptron, linear regression or GARCH. These techniques are applied to real data from the UK gas markets to evaluate their performance. The results show that the forecasting accuracy is improved significantly by using the wavelet transform. The methodology can also be applied to forecasting market clearing prices and electricity/gas loads.
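A minimal sketch of the general wavelet-plus-model idea (not the paper's implementation): decompose the series into frequency bands with a discrete wavelet transform, forecast each band with a simple model, and combine the band forecasts. It assumes the PyWavelets package (pywt) and substitutes a plain least-squares autoregression for the MLP/GARCH models; the wavelet, decomposition level and synthetic data are illustrative.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def ar_forecast(series, lags=3):
    """Least-squares autoregression; returns a one-step-ahead prediction."""
    X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
    y = series[lags:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return series[-lags:] @ coef

def wavelet_forecast(prices, wavelet="db4", level=3):
    coeffs = pywt.wavedec(prices, wavelet, level=level)
    forecast = 0.0
    # Reconstruct each frequency band on its own, forecast it, and add the pieces up.
    for i in range(len(coeffs)):
        only_band = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        band_signal = pywt.waverec(only_band, wavelet)
        forecast += ar_forecast(band_signal)
    return forecast

prices = 50.0 + np.cumsum(np.random.default_rng(1).normal(size=256))  # synthetic stand-in series
print("one-day-ahead forecast:", wavelet_forecast(prices))
```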
The transformational implementation of JSD process specifications via finite automata representation
Abstract:
Conventional structured methods of software engineering are often based on the use of functional decomposition coupled with the Waterfall development process model. This approach is argued to be inadequate for coping with the evolutionary nature of large software systems. Alternative development paradigms, including the operational paradigm and the transformational paradigm, have been proposed to address the inadequacies of this conventional view of software development, and these are reviewed. JSD is presented as an example of an operational approach to software engineering, and is contrasted with other well documented examples. The thesis shows how aspects of JSD can be characterised with reference to formal language theory and automata theory. In particular, it is noted that Jackson structure diagrams are equivalent to regular expressions and can be thought of as specifying corresponding finite automata. The thesis discusses the automatic transformation of structure diagrams into finite automata using an algorithm adapted from compiler theory, and then extends the technique to deal with areas of JSD which are not strictly formalisable in terms of regular languages. In particular, an elegant and novel method for dealing with so-called recognition (or parsing) difficulties is described. Various applications of the extended technique are described. They include a new method of automatically implementing the dismemberment transformation; an efficient way of implementing inversion in languages lacking a goto-statement; and a new in-the-large implementation strategy.
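The structure-diagram/regular-expression correspondence can be made concrete with a toy sketch (this is not the thesis's algorithm, and the node representation is invented for the example): sequence components concatenate, selection components become alternation, and an iterated component becomes a Kleene star.

```python
# A toy rendering of the observation that Jackson structure diagrams are
# equivalent to regular expressions: each diagram node is a sequence, a
# selection, an iteration, or a leaf action.
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                      # "leaf", "seq", "sel" or "iter"
    name: str = ""
    children: list = field(default_factory=list)

def to_regex(node: Node) -> str:
    if node.kind == "leaf":
        return node.name
    if node.kind == "seq":                      # ordered parts: concatenation
        return "".join(to_regex(c) for c in node.children)
    if node.kind == "sel":                      # alternative parts: union
        return "(" + "|".join(to_regex(c) for c in node.children) + ")"
    if node.kind == "iter":                     # iterated part: Kleene star
        return "(" + to_regex(node.children[0]) + ")*"
    raise ValueError(node.kind)

# "Process a file": a header, then zero or more records, each a debit or a credit.
diagram = Node("seq", children=[
    Node("leaf", "h"),
    Node("iter", children=[Node("sel", children=[Node("leaf", "d"),
                                                 Node("leaf", "c")])]),
])
print(to_regex(diagram))   # h((d|c))*
```

The resulting expression could then be turned into a finite automaton by the usual compiler-theory constructions (Thompson construction followed by subset construction), which is the flavour of transformation the thesis automates.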
Abstract:
The work described in this thesis deals with the development and application of a finite element program for the analysis of several cracked structures. In order to simplify the organisation of the material presented herein, the thesis has been subdivided into two Sections. In the first Section the development of a finite element program for the analysis of two-dimensional problems of plane stress or plane strain is described. The element used in this program is the six-node isoparametric triangular element, which permits the accurate modelling of curved boundary surfaces. Various cases of material anisotropy are included in the derivation of the element stiffness properties. A digital computer program is described and examples of its application are presented. In the second Section, on fracture problems, several cracked configurations are analysed by embedding into the finite element mesh a sub-region containing the singularities, over which an analytic solution is used. The modifications necessary to augment a standard finite element program, such as that developed in Section I, are discussed and complete programs for each cracked configuration are presented. Several examples are included to demonstrate the accuracy and flexibility of the technique.
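To fix ideas about what "derivation of the element stiffness properties" involves, here is a sketch for the simpler 3-node constant-strain triangle in plane stress (the thesis uses the richer six-node isoparametric triangle and also covers anisotropic materials); the material constants are illustrative.

```python
import numpy as np

def cst_stiffness(xy, E=200e9, nu=0.3, thickness=1.0):
    """Plane-stress stiffness matrix (6x6) of a 3-node constant-strain triangle.

    xy is a (3, 2) array of nodal coordinates. This is a simpler relative of
    the six-node isoparametric triangle used in the thesis."""
    x, y = xy[:, 0], xy[:, 1]
    # Twice the triangle area from the cross product of two edge vectors.
    area2 = (x[1] - x[0]) * (y[2] - y[0]) - (x[2] - x[0]) * (y[1] - y[0])
    b = np.array([y[1] - y[2], y[2] - y[0], y[0] - y[1]])
    c = np.array([x[2] - x[1], x[0] - x[2], x[1] - x[0]])
    # Strain-displacement matrix B (3x6): rows are eps_x, eps_y, gamma_xy.
    B = np.zeros((3, 6))
    B[0, 0::2] = b
    B[1, 1::2] = c
    B[2, 0::2] = c
    B[2, 1::2] = b
    B /= area2
    # Isotropic plane-stress constitutive matrix D; an anisotropic D would slot in here.
    D = (E / (1 - nu**2)) * np.array([[1, nu, 0],
                                      [nu, 1, 0],
                                      [0, 0, (1 - nu) / 2]])
    return thickness * 0.5 * abs(area2) * (B.T @ D @ B)

print(cst_stiffness(np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])).shape)  # (6, 6)
```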
Abstract:
Numerical techniques have been finding increasing use in all aspects of fracture mechanics, and often provide the only means for analyzing fracture problems. The work presented here is concerned with the application of the finite element method to cracked structures. The present work was directed towards the establishment of a comprehensive two-dimensional finite element, linear elastic, fracture analysis package. Significant progress has been made to this end, and features which can now be studied include multi-crack tip mixed-mode problems involving partial crack closure. The crack tip core element was refined and special local crack tip elements were employed to reduce the element density in the neighbourhood of the core region. The work builds upon experience gained by previous research workers and, as part of the general development, the program was modified to incorporate the eight-node isoparametric quadrilateral element. Also, a more flexible solving routine was developed, which provided a very compact method of solving large sets of simultaneous equations stored in a segmented form. To complement the finite element analysis programs, an automatic mesh generation program has been developed, which enables complex problems, involving fine element detail, to be investigated with a minimum of input data. The scheme has proven to be versatile and reasonably easy to implement. Numerous examples are given to demonstrate the accuracy and flexibility of the finite element technique.
Abstract:
The aim of this research was to investigate the integration of computer-aided drafting and finite-element analysis in a linked computer-aided design procedure and to develop the necessary software. The Bézier surface patch for surface representation was used to bridge the gap between the rather separate fields of drafting and finite-element analysis, because the surfaces are defined by analytical functions which allow systematic and controlled variation of the shape and provide continuous derivatives up to any required degree. The objectives of this research were achieved by establishing: (i) a package which interprets the engineering drawings of plate and shell structures and prepares the Bézier net necessary for surface representation; (ii) a general purpose stand-alone meshed-surface modelling package for surface representation of plates and shells using the Bézier surface patch technique; (iii) a translator which adapts the geometric description of plate and shell structures as given by the meshed-surface modeller to the form needed by the finite-element analysis package. The translator was extended to suit fan impellers by taking advantage of their sectorial symmetry. The linking processes were carried out for simple test structures, and for simplified and actual fan impellers, to verify the flexibility and usefulness of the linking technique adopted. Finite-element results for thin plate and shell structures showed excellent agreement with those obtained by other investigators, while results for the simplified and actual fan impellers also showed good agreement with those obtained in an earlier investigation where the finite-element analysis input data were manually prepared. Some extensions of this work are also discussed.
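A minimal sketch of what a Bézier surface patch is (the generic tensor-product form, not the thesis's software): each surface point is a Bernstein-weighted blend of a net of control points, so the shape varies smoothly and controllably with that net. The bicubic net below is an invented example.

```python
import numpy as np
from math import comb

def bezier_patch(control_net, u, v):
    """Evaluate a Bézier surface patch at parameters (u, v) in [0, 1] x [0, 1].

    control_net has shape (m+1, n+1, 3): the Bézier net of 3-D control points."""
    m, n = control_net.shape[0] - 1, control_net.shape[1] - 1
    bu = np.array([comb(m, i) * u**i * (1 - u)**(m - i) for i in range(m + 1)])
    bv = np.array([comb(n, j) * v**j * (1 - v)**(n - j) for j in range(n + 1)])
    # Tensor-product combination of Bernstein polynomials with the control net.
    return np.einsum("i,j,ijk->k", bu, bv, control_net)

# A bicubic net lifted from a flat 4x4 grid, with one interior control point raised.
net = np.zeros((4, 4, 3))
net[..., 0], net[..., 1] = np.meshgrid(np.linspace(0, 3, 4), np.linspace(0, 3, 4), indexing="ij")
net[1, 2, 2] = 1.0
print(bezier_patch(net, 0.5, 0.5))
```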
Abstract:
The present dissertation is concerned with the determination of the magnetic field distribution in magnetic electron lenses by means of the finite element method. In the differential form of this method a Poisson-type equation is solved by numerical methods over a finite boundary. Previous methods of adapting this procedure to the requirements of digital computers have restricted its use to computers of extremely large core size. It is shown that by reformulating the boundary conditions, a considerable reduction in core store can be achieved for a given accuracy of field distribution. The magnetic field distribution of a lens may also be calculated by the integral form of the finite element method. This eliminates the boundary problems mentioned but introduces other difficulties. After a careful analysis of both methods it has proved possible to combine the advantages of both in a new approach to the problem, which may be called the 'differential-integral' finite element method. The application of this method to the determination of the magnetic field distribution of some new types of magnetic lenses is described. In the course of the work considerable re-programming of standard programs was necessary in order to reduce the core store requirements to a minimum.
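For readers unfamiliar with the differential form of the method, a one-dimensional toy version (not the dissertation's lens code, which is two-dimensional and far more memory-conscious) shows the basic assemble-and-solve pattern for a Poisson-type equation; the mesh size and boundary values are arbitrary.

```python
import numpy as np

def fem_poisson_1d(f, n=50, length=1.0, u0=0.0, u1=0.0):
    """Solve -u'' = f on (0, length) with Dirichlet ends, using linear elements."""
    h = length / n
    x = np.linspace(0.0, length, n + 1)
    K = np.zeros((n + 1, n + 1))
    F = np.zeros(n + 1)
    for e in range(n):                      # assemble element stiffness and load
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        xm = 0.5 * (x[e] + x[e + 1])        # midpoint rule for the load integral
        fe = f(xm) * h * np.array([0.5, 0.5])
        K[e:e + 2, e:e + 2] += ke
        F[e:e + 2] += fe
    # Impose the boundary values by replacing the first and last equations.
    K[0, :], K[-1, :] = 0.0, 0.0
    K[0, 0], K[-1, -1] = 1.0, 1.0
    F[0], F[-1] = u0, u1
    return x, np.linalg.solve(K, F)

x, u = fem_poisson_1d(lambda x: np.pi**2 * np.sin(np.pi * x))
print("max error vs sin(pi x):", np.abs(u - np.sin(np.pi * x)).max())
```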
Abstract:
Regions containing internal boundaries, such as composite materials, arise in many applications. We consider a situation of a layered domain in ℝ³ containing a finite number of bounded cavities. The model is stationary heat transfer given by the Laplace equation with piecewise constant conductivity. The heat flux (a Neumann condition) is imposed on the bottom of the layered region and various boundary conditions are imposed on the cavities. The usual transmission (interface) conditions are satisfied at the interface layer, that is continuity of the solution and its normal derivative. To efficiently calculate the stationary temperature field in the semi-infinite region, we employ a Green's matrix technique and reduce the problem to boundary integral equations (weakly singular) over the bounded surfaces of the cavities. For the numerical solution of these integral equations, we use Wienert's approach [20]. Assuming that each cavity is homeomorphic with the unit sphere, a fully discrete projection method with super-algebraic convergence order is proposed. A proof of an error estimate for the approximation is given as well. Numerical examples are presented that further highlight the efficiency and accuracy of the proposed method.
Abstract:
We use a functional integral formalism developed earlier for the pure Luttinger liquid (LL) to find an exact representation for the electron Green function of the LL in the presence of a single backscattering impurity in the low-temperature limit. This allows us to reproduce results (well known from bosonization techniques) for the suppression of the electron local density of states (LDOS) at the position of the impurity and for the Friedel oscillations at finite temperature. In addition, we have extracted from the exact representation an analytic dependence of the LDOS on the distance from the impurity and shown how it crosses over to that for the pure LL.
Abstract:
Pavement analysis and design for fatigue cracking involves a number of practical problems, such as material assessment/screening and performance prediction. A mechanics-aided method can answer these questions with satisfactory accuracy in a convenient way when it is appropriately implemented. This paper presents two techniques to implement the pseudo J-integral based Paris' law to evaluate and predict fatigue cracking in asphalt mixtures and pavements. The first technique, quasi-elastic simulation, provides a rational and appropriate reference modulus for the pseudo analysis (i.e., viscoelastic to elastic conversion) by making use of a widely used material property: the dynamic modulus. The physical significance of the quasi-elastic simulation is clarified. Introduction of this technique facilitates the implementation of fracture mechanics models as well as continuum damage mechanics models to characterize fatigue cracking in asphalt pavements. The second technique, modeling the fracture coefficients of the pseudo J-integral based Paris' law, simplifies the prediction of fatigue cracking without the need to perform fatigue tests. The developed prediction models for the fracture coefficients rely on readily available mixture design properties that directly affect fatigue performance, including the relaxation modulus, air void content, asphalt binder content, and aggregate gradation. Sufficient data are collected to develop such prediction models and the R² values are around 0.9. The presented case studies serve as examples to illustrate how the pseudo J-integral based Paris' law predicts the fatigue resistance of asphalt mixtures and assesses the fatigue performance of asphalt pavements. Future applications include the estimation of the fatigue life of asphalt mixtures/pavements through a distinct criterion that defines fatigue failure by its physical significance.
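As a schematic of how a calibrated Paris-type law turns into a fatigue-life estimate (the coefficients, crack sizes and driving-force function below are invented, not the paper's calibrated models), one simply integrates the reciprocal of the crack growth rate over crack length.

```python
import numpy as np

def fatigue_life(a0, af, delta_j, A=1e-10, n=2.0, steps=10000):
    """Integrate a Paris-type crack growth law, da/dN = A * (dJ)^n, to estimate
    how many load cycles it takes a crack to grow from length a0 to af.

    delta_j(a) returns the (pseudo) J-integral range at crack length a; A and n
    stand in for the fracture coefficients the paper predicts from mixture
    design properties. All numbers here are illustrative."""
    a = np.linspace(a0, af, steps)
    inv_rate = 1.0 / (A * delta_j(a) ** n)     # dN/da at each crack length
    # Trapezoidal rule: N = integral of dN/da over the crack-length range.
    return np.sum(0.5 * (inv_rate[1:] + inv_rate[:-1]) * np.diff(a))

# Toy driving force: the J-integral range grows linearly with crack length.
cycles = fatigue_life(a0=1e-3, af=25e-3, delta_j=lambda a: 50.0 + 2000.0 * a)
print(f"estimated fatigue life: {cycles:.3e} cycles")
```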
Abstract:
The nonlinear Fourier transform, also known as eigenvalue communications, is a transmission and signal processing technique that makes positive use of the nonlinear properties of fibre channels. I will discuss recent progress in this field.
Abstract:
In this work, we introduce the periodic nonlinear Fourier transform (PNFT) method as an alternative and efficacious tool for compensation of the nonlinear transmission effects in optical fiber links. In Part I, we introduce the algorithmic platform of the technique, describing in detail the direct and inverse PNFT operations, also known as the inverse scattering transform for the periodic (in the time variable) nonlinear Schrödinger equation (NLSE). We pay special attention to explaining the potential advantages of PNFT-based processing over the previously studied nonlinear Fourier transform (NFT) based methods. Further, we address the issue of numerical PNFT computation: we compare the performance of four known numerical methods applicable to the calculation of the nonlinear spectral data (the direct PNFT), in particular taking the main spectrum (utilized further in Part II for modulation and transmission) associated with some simple example waveforms as the quality indicator for each method. We show that the Ablowitz-Ladik discretization approach for the direct PNFT provides the best performance in terms of accuracy and computational time consumption.
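To indicate what the direct PNFT computes, here is a sketch using the simplest piecewise-constant transfer-matrix scheme (not the Ablowitz-Ladik discretization favoured in the paper) for the Zakharov-Shabat system: the monodromy matrix over one period gives the Floquet discriminant, and the main spectrum consists of the points where that discriminant equals ±1. The example waveform and the real-axis sample points for λ are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm

def monodromy(q_samples, dt, lam):
    """Monodromy matrix of the Zakharov-Shabat system over one period of q(t),
    built from piecewise-constant transfer matrices (a simpler scheme than the
    Ablowitz-Ladik discretization studied in the paper)."""
    M = np.eye(2, dtype=complex)
    for q in q_samples:
        A = np.array([[-1j * lam, q],
                      [-np.conj(q), 1j * lam]])
        M = expm(A * dt) @ M
    return M

def floquet_discriminant(q_samples, dt, lam):
    # Main-spectrum points are the values of lambda where this equals +1 or -1;
    # for real lambda the trace is real in the focusing case.
    return 0.5 * np.trace(monodromy(q_samples, dt, lam)).real

# One period of a plane-wave example waveform, probed along the real lambda axis.
T, N = 2 * np.pi, 256
t = np.linspace(0.0, T, N, endpoint=False)
q = 0.5 * np.exp(1j * t)
for lam in (0.0, 0.3, 0.6):
    print(f"lambda = {lam:.1f}:  Delta = {floquet_discriminant(q, T / N, lam):+.4f}")
```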
Abstract:
In this paper we propose the design of communication systems based on the periodic nonlinear Fourier transform (PNFT), following the introduction of the method in Part I. We show that the famous "eigenvalue communication" idea [A. Hasegawa and T. Nyu, J. Lightwave Technol. 11, 395 (1993)] can also be generalized to the PNFT application: in this case, the main spectrum attributed to the PNFT signal decomposition remains constant during propagation down the optical fiber link. Therefore, the main PNFT spectrum can be encoded with data in the same way as soliton eigenvalues in the original proposal. The results are presented in terms of bit-error rate (BER) values for different modulation techniques and different constellation sizes versus the propagation distance, showing the good potential of the technique.