936 results for locational accuracy


Relevance:

10.00%

Publisher:

Abstract:

In recent times, the improved levels of accuracy obtained by Automatic Speech Recognition (ASR) technology have made it viable for use in a number of commercial products. Unfortunately, these types of applications are limited to only a few of the world's languages, primarily because ASR development is reliant on the availability of large amounts of language-specific resources. This motivates the need for techniques which reduce this language-specific resource dependency. Ideally, these approaches should generalise across languages, thereby providing scope for rapid creation of ASR capabilities for resource-poor languages. Cross-lingual ASR emerges as a means for addressing this need. Underpinning this approach is the observation that sound production is largely influenced by the physiological construction of the vocal tract, and accordingly is human, rather than language, specific. As a result, a common inventory of sounds exists across languages; a property which is exploitable, as sounds from a resource-poor target language can be recognised using models trained on resource-rich source languages. One of the initial impediments to the commercial uptake of ASR technology was its fragility in more challenging environments, such as conversational telephone speech. Subsequent improvements in these environments have gained consumer confidence. Pragmatically, if cross-lingual techniques are to be considered a viable alternative when resources are limited, they need to perform under the same types of conditions. Accordingly, this thesis evaluates cross-lingual techniques in two speech environments: clean read speech and conversational telephone speech. The languages used in the evaluations are German, Mandarin, Japanese and Spanish. Results highlight that previously proposed approaches provide respectable results for simpler environments such as read speech, but degrade significantly in the more taxing conversational environment. Two separate approaches for addressing this degradation are proposed. The first is based on deriving a better lexical representation of the target language in terms of the source-language model set. The second, and ultimately more successful, approach focuses on improving the classification accuracy of context-dependent (CD) models by catering for the adverse influence of language-specific phonotactic properties. Whilst the primary research goal in this thesis is directed towards improving cross-lingual techniques, the catalyst for investigating their use was expressed interest from several organisations in an Indonesian ASR capability. The fact that, in Indonesia alone, there are over 200 million speakers of some Malay variant provides further impetus and commercial justification for speech-related research on this language. Unfortunately, at the beginning of the candidature, limited research had been conducted on the Indonesian language in the field of speech science, and virtually no resources existed. This thesis details the investigative and development work dedicated towards obtaining an ASR system with a 10,000-word recognition vocabulary for the Indonesian language.
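The core cross-lingual idea (recognising target-language sounds with source-language models) can be illustrated with a minimal sketch. The phone inventories, feature vectors and Euclidean metric below are assumptions for illustration only; real systems compare full acoustic models, for example with divergence measures.

```python
import numpy as np

# Hypothetical acoustic signatures: mean feature vectors per phone,
# estimated from resource-rich source languages and a small sample of
# the resource-poor target language.
source_phones = {"a": np.array([1.0, 0.2]), "e": np.array([0.8, 0.6]),
                 "t": np.array([-0.5, 1.1]), "d": np.array([-0.4, 0.9])}
target_phones = {"a": np.array([0.95, 0.3]), "t'": np.array([-0.45, 1.0])}

def nearest_source_phone(vec, inventory):
    """Map a target phone to its closest source phone (Euclidean here;
    model-based distances are more common in practice)."""
    return min(inventory, key=lambda p: np.linalg.norm(inventory[p] - vec))

# Target words can then be decoded using only source-language acoustic
# models, via this target-to-source phone mapping.
phone_map = {t: nearest_source_phone(v, source_phones)
             for t, v in target_phones.items()}
print(phone_map)  # {'a': 'a', "t'": 't'}
```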

Relevance:

10.00%

Publisher:

Abstract:

With the massive decline in savings arising from the Global Financial Crisis (GFC), it is timely to review superannuation fund investment and disclosure strategies in the lead-up to the crisis. Accordingly, this study examines differences among superannuation funds’ default investment options in terms of naming and framing over three years from 2005 to 2007, as presented in product disclosure statements (PDSs). The findings indicate that default options are becoming more alike regardless of their name, and consequently, members may face increasing difficulties in distinguishing between balanced and growth-named default options when comparing them across superannuation funds. Comparability is also likely to be constrained by variations in the framing of default options presented in investment option menus in PDSs. These findings highlight the need for standardisation of default option definitions and disclosures to ensure descriptive accuracy, transparency and comparability.

Relevance:

10.00%

Publisher:

Abstract:

The performance of an adaptive filter may be studied through the behaviour of the optimal and adaptive coefficients in a given environment. This thesis investigates the performance of finite impulse response adaptive lattice filters for two classes of input signals: (a) frequency modulated signals with polynomial phases of order p in complex Gaussian white noise (as nonstationary signals), and (b) impulsive autoregressive processes with alpha-stable distributions (as non-Gaussian signals). Initially, an overview is given of linear prediction and adaptive filtering. The convergence and tracking properties of stochastic gradient algorithms are discussed for stationary and nonstationary input signals. It is explained that the stochastic gradient lattice algorithm has many advantages over the least-mean square algorithm, including a modular structure, easily guaranteed stability, less sensitivity to the eigenvalue spread of the input autocorrelation matrix, and easy quantization of the filter coefficients (normally called reflection coefficients). We then characterize the performance of the stochastic gradient lattice algorithm for frequency modulated signals through the optimal and adaptive lattice reflection coefficients. This is a difficult task due to the nonlinear dependence of the adaptive reflection coefficients on the preceding stages and the input signal. To ease the derivations, we assume that the reflection coefficients of each stage are independent of the inputs to that stage. The optimal lattice filter is then derived for frequency modulated signals, by computing the optimal values of the residual errors, reflection coefficients, and recovery errors. Next, we show the tracking behaviour of the adaptive reflection coefficients for frequency modulated signals, by computing the average tracking model of these coefficients under the stochastic gradient lattice algorithm. The second-order convergence of the adaptive coefficients is investigated by modeling the theoretical asymptotic variance of the gradient noise at each stage. The accuracy of the analytical results is verified by computer simulations. Using these analytical results, we demonstrate a new property of adaptive lattice filters: the polynomial order reducing property, which may be used to reduce the order of the polynomial phase of input frequency modulated signals. Two examples show how this property may be used in processing frequency modulated signals. In the first example, a detection procedure is carried out on a frequency modulated signal with a second-order polynomial phase in complex Gaussian white noise. We show that, using this technique, a better probability of detection is obtained for the reduced-order phase signals than with the traditional energy detector. It is also empirically shown that the distribution of the gradient noise in the first adaptive reflection coefficients approximates the Gaussian law. In the second example, the instantaneous frequency of the same observed signal is estimated, and we show that this technique achieves a lower mean square error for the estimated frequencies at high signal-to-noise ratios than the adaptive line enhancer. The performance of adaptive lattice filters is then investigated for the second type of input signals, i.e., impulsive autoregressive processes with alpha-stable distributions. The concept of alpha-stable distributions is first introduced. We discuss how the stochastic gradient algorithm, which gives desirable results for finite-variance input signals (like frequency modulated signals in noise), does not converge quickly for infinite-variance stable processes (due to its use of the minimum mean-square error criterion). To deal with such problems, the concepts of the minimum dispersion criterion and fractional lower order moments, along with recently developed algorithms for stable processes, are introduced. We then study the possibility of using the lattice structure for impulsive stable processes. Accordingly, two new algorithms, the least-mean P-norm lattice algorithm and its normalized version, are proposed for lattice filters based on fractional lower order moments. Simulation results show that the proposed algorithms achieve faster convergence for parameter estimation of autoregressive stable processes with low to moderate degrees of impulsiveness than many other algorithms. We also discuss the effect of the impulsiveness of stable processes in generating misalignment between the estimated parameters and the true values. Due to the infinite variance of stable processes, the performance of the proposed algorithms is investigated using extensive computer simulations only.
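As a minimal, hedged sketch of the stochastic gradient lattice idea discussed above (a single stage only; the step size, input process and lack of normalisation are illustrative assumptions rather than the thesis's algorithms):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in input: a correlated random process (the thesis studies FM
# signals in noise and alpha-stable AR processes instead).
x = np.convolve(rng.standard_normal(5000), [1.0, 0.8], mode="same")

mu = 0.01        # assumed step size (unnormalised, for simplicity)
k = 0.0          # adaptive reflection coefficient of stage 1
b_prev = 0.0     # delayed backward error b_0(n-1)

for f0 in x:                      # stage 0: f_0(n) = b_0(n) = x(n)
    f1 = f0 + k * b_prev          # forward prediction error  f_1(n)
    b1 = b_prev + k * f0          # backward prediction error b_1(n)
    # stochastic gradient descent on E[f_1^2 + b_1^2] with respect to k
    k -= mu * (f1 * b_prev + b1 * f0)
    b_prev = f0                   # delay b_0 for the next sample

print("converged reflection coefficient:", round(k, 3))  # approx -r(1)/r(0)
```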

Relevance:

10.00%

Publisher:

Abstract:

This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. The essential processes considered in this research encompass efficient characterization of the evolving configuration of the vocal tract to follow phonemic features with high fidelity, representation of speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and finally, quantization of the resulting parameters at the nominated rates. For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points on the time trajectories of spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for decomposition of feature parameters which significantly reduces the complexity of the interpolation process. Speech excitation in this work is characterized using a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to excitation and also computes pitch and voicing with high accuracy. New methods for interpolative coding of pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy. The thesis also addresses a new quantization method applied to spectral features, based on the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
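Temporal decomposition reconstructs each spectral-parameter track as a weighted sum of overlapping event functions anchored at event centroids. The sketch below illustrates this; the raised-cosine event shape, event locations and target values are assumptions for illustration, not the thesis's basis function or event-location algorithm.

```python
import numpy as np

frames = 100
events = [10, 35, 60, 90]                 # assumed event centroid frames
targets = np.array([0.2, 0.9, 0.5, 0.7])  # assumed event targets for one
                                          # spectral parameter (e.g. an LSF)

def event_fn(t, centre, width=25.0):
    """Assumed raised-cosine event function, unit height at its centroid."""
    d = np.abs(t - centre)
    return np.where(d < width, 0.5 * (1 + np.cos(np.pi * d / width)), 0.0)

t = np.arange(frames)
phi = np.stack([event_fn(t, c) for c in events])  # (events, frames)
phi /= phi.sum(axis=0, keepdims=True)             # normalise to sum to 1

# Reconstructed trajectory: only 4 targets encode 100 frames, and the
# data rate is set by the event count, not the frame rate.
track = targets @ phi
print(track[events].round(2))  # close to the targets at the centroids
```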

Relevance:

10.00%

Publisher:

Abstract:

During the past decade, a significant amount of research has been conducted internationally with the aim of developing, implementing, and verifying "advanced analysis" methods suitable for non-linear analysis and design of steel frame structures. Application of these methods permits comprehensive assessment of the actual failure modes and ultimate strengths of structural systems in practical design situations, without resort to simplified elastic methods of analysis and semi-empirical specification equations. Advanced analysis has the potential to extend the creativity of structural engineers and simplify the design process, while ensuring greater economy and more uniform safety with respect to the ultimate limit state. The application of advanced analysis methods has previously been restricted to steel frames comprising only members with compact cross-sections that are not subject to the effects of local buckling. This precluded the use of advanced analysis in the design of steel frames comprising a significant proportion of the most commonly used Australian sections, which are non-compact and subject to the effects of local buckling. This thesis contains a detailed description of research conducted over the past three years to extend the scope of advanced analysis by developing methods that include the effects of local buckling in a non-linear analysis formulation, suitable for practical design of steel frames comprising non-compact sections. Two alternative concentrated plasticity formulations are presented in this thesis: the refined plastic hinge method and the pseudo plastic zone method. Both methods implicitly account for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. The accuracy and precision of the methods for the analysis of steel frames comprising non-compact sections have been established by comparison with a comprehensive range of analytical benchmark frame solutions. Both the refined plastic hinge and pseudo plastic zone methods are more accurate and precise than the conventional individual member design methods based on elastic analysis and specification equations. For example, the pseudo plastic zone method predicts the ultimate strength of the analytical benchmark frames with an average conservative error of less than one percent, and has an acceptable maximum unconservative error of less than five percent. The pseudo plastic zone model can allow the design capacity to be increased by up to 30 percent for simple frames, mainly due to the consideration of inelastic redistribution. The benefits may be even more significant for complex frames with significant redundancy, which provides greater scope for inelastic redistribution. The analytical benchmark frame solutions were obtained using a distributed plasticity shell finite element model. A detailed description of this model and the results of all 120 benchmark analyses are provided. The model explicitly accounts for the effects of gradual cross-sectional yielding, longitudinal spread of plasticity, initial geometric imperfections, residual stresses, and local buckling. Its accuracy was verified by comparison with a variety of analytical solutions and the results of three large-scale experimental tests of steel frames comprising non-compact sections. A description of the experimental method and test results is also provided.

Relevance:

10.00%

Publisher:

Abstract:

Stereo vision is a method of depth perception in which depth information is inferred from two (or more) images of a scene taken from different perspectives. Practical applications for stereo vision include aerial photogrammetry, autonomous vehicle guidance, robotics and industrial automation. The initial motivation behind this work was to produce a stereo vision sensor for mining automation applications, where the input stereo images would consist of close-range scenes of rocks. A fundamental problem faced by matching algorithms is the matching or correspondence problem: locating corresponding points or features in two images. For this application, speed, reliability, and the ability to produce a dense depth map are of foremost importance. This work implemented a number of area-based matching algorithms to assess their suitability for this application. Area-based techniques were investigated because of their potential to yield dense depth maps, their amenability to fast hardware implementation, and their suitability to textured scenes such as rocks. In addition, two non-parametric transforms, the rank and census transforms, were compared. Both were found to improve the reliability of matching in the presence of radiometric distortion; this is significant, since radiometric distortion commonly arises in practice. They also have low computational complexity, making them amenable to fast hardware implementation. Therefore, matching algorithms using these transforms became the subject of the remainder of the thesis. An analytic expression for the process of matching using the rank transform was derived from first principles. This work resulted in a number of important contributions. Firstly, the derivation process yielded a constraint which must be satisfied for a correct match, termed the rank constraint. The theoretical derivation of this constraint is in contrast to existing matching constraints, which have little theoretical basis. Experimental work with actual and contrived stereo pairs has shown that the new constraint is capable of resolving ambiguous matches, thereby improving match reliability. Secondly, a novel matching algorithm incorporating the rank constraint has been proposed. This algorithm was tested using a number of stereo pairs, and in all cases consistently produced an increased proportion of correct matches. Finally, the rank constraint was used to devise a new method for identifying regions of an image where the rank transform, and hence matching, are more susceptible to noise. The rank constraint was also incorporated into a new hybrid matching algorithm, where it was combined with a number of other ideas, including the use of an image pyramid for match prediction, and a method of edge localisation to improve match accuracy in the vicinity of edges. Experimental results obtained from the new algorithm showed that it is able to remove a large proportion of invalid matches and improve match accuracy.
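A minimal sketch of the census transform and its Hamming-distance matching cost (window size, image sizes and the simulated distortion are illustrative assumptions; the thesis's rank constraint and hybrid algorithm are not reproduced here):

```python
import numpy as np

def census_transform(img, r=1):
    """Census transform: encode each pixel as a bit string recording
    which window neighbours are darker than the centre pixel."""
    out = np.zeros(img.shape, dtype=np.uint64)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            out = (out << np.uint64(1)) | (shifted < img).astype(np.uint64)
    return out

def hamming(a, b):
    """Bit-level Hamming distance between two census codes; the matching
    cost for a window is the sum of these over its pixels."""
    return bin(int(a) ^ int(b)).count("1")

# Radiometric (gain/offset) distortion preserves intensity ordering,
# so census codes, and hence matching costs, are unaffected by it.
left = np.random.default_rng(1).integers(0, 255, (32, 32))
right = 0.8 * left + 30.0                 # simulated radiometric distortion
cl, cr = census_transform(left), census_transform(right)
print(hamming(cl[16, 16], cr[16, 16]))    # 0: ordering is preserved
```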

Relevance:

10.00%

Publisher:

Abstract:

Shell structures find use in many fields of engineering, notably the structural, mechanical, aerospace and nuclear-reactor disciplines. Axisymmetric shell structures are used as dome roofs, hyperbolic cooling towers, silos for the storage of grain, oil and industrial chemicals, and water tanks. Despite their thin walls, these structures derive strength from their curvature. The generally high strength-to-weight ratio of the shell form, combined with its inherent stiffness, has formed the basis of this vast application. With advances in computation technology, the finite element method and optimisation techniques, structural engineers have extremely versatile tools for the optimum design of such structures. Optimisation of shell structures can result not only in improved designs, but also in large savings of material. The finite element method, being a general numerical procedure that can treat any shell problem to any desired degree of accuracy, requires several runs in order to obtain a complete picture of the effect of one parameter on the shell structure. This redesign/re-analysis cycle has been achieved via structural optimisation in the present research, and MSC/NASTRAN (a commercially available finite element code) has been used in this context for volume optimisation of axisymmetric shell structures under axisymmetric and non-axisymmetric loading conditions. The parametric study of different axisymmetric shell structures has revealed that the hyperbolic shape is the most economical solution for shells of revolution. To establish this, axisymmetric loading (self-weight and hydrostatic pressure) and non-axisymmetric loading (wind pressure and earthquake dynamic forces) were modelled in a graphical pre- and post-processor (PATRAN) and analysed with two finite element codes (ABAQUS and NASTRAN); numerical model verification studies were performed; and the optimum material volume required in the walls of cylindrical, conical, parabolic and hyperbolic forms of axisymmetric shell structures was evaluated and reviewed. Free vibration and transient earthquake analyses of hyperbolic shells were performed once it was established that the hyperbolic shape is the most economical under all possible loading conditions. The effects of important parameters of hyperbolic shell structures (shell wall thickness, height and curvature) were evaluated, and empirical relationships were developed to estimate an approximate value of the lowest (first) natural frequency of vibration. The outcome of this thesis has been the generation of new research information on the performance characteristics of axisymmetric shell structures that will facilitate improved designs of shells with better choice of shapes and enhanced levels of economy and performance. Keywords: axisymmetric shell structures, finite element analysis, volume optimisation, free vibration, transient response.

Relevance:

10.00%

Publisher:

Abstract:

OneSteel Australian Tube Mills has recently developed a new hollow flange channel cold-formed section, known as the LiteSteel Beam (LSB). The innovative LSB sections combine the beneficial characteristics of torsionally rigid closed rectangular flanges with economical fabrication from a single strip of high strength steel, offering the stability of hot-rolled steel sections together with the high strength-to-weight ratio of conventional cold-formed steel sections. LSB sections are commonly used as flexural members in residential, industrial and commercial buildings. To ensure safe and efficient designs, many research studies have been undertaken on the flexural behaviour of LSBs; however, no research had been undertaken on their shear behaviour. This thesis therefore investigated the ultimate shear strength behaviour of LSBs with and without web openings, including their elastic buckling and post-buckling characteristics, using both experimental and finite element analyses, and developed accurate shear design rules. Currently, the elastic shear buckling coefficients of web panels are determined by conservatively assuming that the web panels are simply supported at the junction between the web and flange elements. Finite element analyses were therefore conducted first to investigate the elastic shear buckling behaviour of LSBs and determine the true support condition at this junction. An equation for the higher elastic shear buckling coefficient of LSBs was developed and included in the shear capacity equations of the cold-formed steel structures code, AS/NZS 4600. Predicted shear capacities from the modified equations and the available experimental results demonstrated the improvements to the shear capacities of LSBs due to the higher level of fixity at the LSB flange-to-web juncture. A detailed study of the shear flow distribution of LSBs was also undertaken prior to the elastic buckling study. The experimental study of ten LSB sections included 42 shear tests of LSBs with aspect ratios of 1.0 and 1.5, loaded at midspan until failure, in both single and back-to-back LSB arrangements. Test specimens were chosen such that all three types of shear failure (shear yielding, inelastic shear buckling and elastic shear buckling) occurred in the tests. Experimental results showed that the current cold-formed steel design rules are very conservative for the shear design of LSBs. Significant improvements to web shear buckling occurred due to the presence of the rectangular hollow flanges, while considerable post-buckling strength was also observed. Experimental results were compared with corresponding predictions from the current design rules, and appropriate improvements were proposed for the shear strength of LSBs based on the AISI (2007) design equations and the test results. Suitable design rules were also developed in the direct strength method (DSM) format. This thesis also includes the shear test results of cold-formed lipped channel beams from LaBoube and Yu (1978a), and new design rules developed from them using the same approach as for LSBs. Finite element models of LSBs in shear were also developed to investigate the ultimate shear strength behaviour of LSBs, including their elastic and post-buckling characteristics, and were validated by comparing their results with the experimental test results. Details of the finite element models, the nonlinear analysis results and their comparisons with experimental results are presented in this thesis. The finite element analysis results confirmed that the current cold-formed steel design rules are very conservative for the shear design of LSBs, and corroborated the other experimental findings relating to the elastic and post-buckling shear strength of LSBs. A detailed parametric study based on the validated finite element model was undertaken to develop an extensive shear strength database, which was then used to confirm the accuracy of the new shear strength equations proposed in this thesis. Experimental and numerical studies were also undertaken to investigate the shear behaviour of LSBs with web openings. Twenty-six shear tests were first undertaken using a three-point loading arrangement. It was found that the AS/NZS 4600 and Shan et al.'s (1997) design equations are conservative for the shear design of LSBs with web openings, while McMahon et al.'s (2008) design equation is unconservative. Finite element models of LSBs with web openings were then developed and validated against the experimental results; the developed nonlinear finite element model was found to predict the shear capacity of LSBs with web openings with very good accuracy. Improved design equations were proposed for the shear capacity of LSBs with web openings based on both the experimental and FEA parametric study results. This thesis presents the details of the experimental and numerical studies of the shear behaviour and strength of LSBs with and without web openings, the results, and the accurate design rules developed.
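For context, under the conservative simply supported assumption the elastic shear buckling stress of a web panel takes the standard plate-theory form below; the thesis's contribution is a higher shear buckling coefficient reflecting the flange-to-web fixity of LSBs, and that improved equation is not reproduced here.

```latex
% Standard elastic shear buckling of a flat plate (simply supported edges):
\[
  \tau_{cr} = \frac{k_v\,\pi^{2}E}{12\,(1-\nu^{2})\,(d_1/t_w)^{2}},
  \qquad
  k_v = 5.34 + \frac{4.00}{(a/d_1)^{2}} \quad \text{for } a/d_1 \ge 1,
\]
% where $E$ is the elastic modulus, $\nu$ Poisson's ratio, $d_1$ the clear
% web depth, $t_w$ the web thickness and $a$ the shear panel length.
```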

Relevance:

10.00%

Publisher:

Abstract:

The LiteSteel Beam (LSB) is a new hollow flange channel section developed by OneSteel Australian Tube Mills using a patented Dual Electric Resistance Welding technique. The LSB has a unique geometry consisting of torsionally rigid rectangular hollow flanges and a relatively slender web. It is commonly used for rafters, floor joists, bearers and roof beams in residential, industrial and commercial buildings, and is on average 40% lighter than traditional hot-rolled steel beams of equivalent performance. LSB flexural members are subject to a relatively new lateral distortional buckling mode, which reduces the member moment capacity. Unlike the commonly observed lateral torsional buckling of steel beams, lateral distortional buckling of LSBs is characterised by simultaneous lateral deflection, twist and web distortion. Current member moment capacity design rules for lateral distortional buckling in AS/NZS 4600 (SA, 2005) do not include the effect of the section geometry of hollow flange beams, although this effect is considered to be important. Detailed experimental and finite element analyses (FEA) were therefore carried out to investigate the lateral distortional buckling behaviour of LSBs, including the effect of section geometry. The results showed that the current design rules in AS/NZS 4600 (SA, 2005) are over-conservative in the inelastic lateral buckling region, and new improved design rules were therefore developed for LSBs based on both FEA and experimental results. A geometrical parameter (K), defined as the ratio of the flange torsional rigidity to the major axis flexural rigidity of the web (GJf/EIx,web), was identified as the critical parameter affecting the lateral distortional buckling of hollow flange beams, and the effect of section geometry was included in the new design rules through this parameter. The new design rule developed by including this parameter was found to be accurate in calculating the member moment capacities not only of LSBs, but also of other types of hollow flange steel beams such as Hollow Flange Beams (HFBs), Monosymmetric Hollow Flange Beams (MHFBs) and Rectangular Hollow Flange Beams (RHFBs). Although past section moment capacity tests revealed that inelastic reserve bending capacity is present in LSBs, it had not been investigated, and the Australian and American cold-formed steel design codes limit sections to the first yield moment. Both experimental and FEA studies were therefore carried out to investigate the section moment capacity behaviour of LSBs. A comparison of the section moment capacity results from FEA, experiments and current cold-formed steel design codes showed that compact and non-compact LSB sections, classified based on AS 4100 (SA, 1998), have some inelastic reserve capacity, while slender LSBs have none beyond their first yield moment. It was found that Shifferaw and Schafer's (2008) proposed equations and the Eurocode 3 Part 1.3 (ECS, 2006) design equations can be used to include the inelastic bending capacities of compact and non-compact LSBs in design. As a simple design approach, the section moment capacity of compact LSB sections can be taken as 1.10 times their first yield moment, while for non-compact sections it is the first yield moment; for slender LSB sections, current cold-formed steel codes can be used to predict the section moment capacities. It was believed that the use of transverse web stiffeners could improve the lateral distortional buckling moment capacities of LSBs; however, there were no design equations to predict the elastic lateral distortional buckling and member moment capacities of LSBs with web stiffeners under uniform moment conditions. A detailed study was therefore conducted using FEA to simulate both experimental and ideal conditions of LSB flexural members. It was shown that 3 to 5 mm steel plate stiffeners, welded or screwed to the inner faces of the top and bottom flanges of LSBs at third-span points and supports, provided an optimum web stiffener arrangement. Suitable design rules were developed to calculate the improved elastic buckling and ultimate moment capacities of LSBs with these optimum web stiffeners, and a design rule using the geometrical parameter K was also developed to improve the accuracy of ultimate moment capacity predictions. This thesis presents the details and results of the experimental and numerical studies of the section and member moment capacities of LSBs conducted in this research, including recommendations regarding the accuracy of the current design rules and the new design rules for lateral distortional buckling, which include the effects of the section geometry of hollow flange steel beams. It also presents the method of using web stiffeners to reduce lateral distortional buckling effects, and the associated design rules to calculate the improved moment capacities.

Relevance:

10.00%

Publisher:

Abstract:

Artificial neural network (ANN) learning methods provide a robust, non-linear approach to approximating the target function for many classification, regression and clustering problems, and have demonstrated good predictive performance in a wide variety of practical problems. However, there are strong arguments as to why ANNs are not sufficient for the general representation of knowledge: the poor comprehensibility of the learned ANN, and its inability to represent explanation structures. The overall objective of this thesis is to address these issues by: (1) explaining the decision process of ANNs in the form of symbolic rules (predicate rules with variables); and (2) providing explanatory capability by mapping the general conceptual knowledge learned by the neural networks into a knowledge base to be used in a rule-based reasoning system. A multi-stage methodology, GYAN, is developed and evaluated for the task of extracting knowledge from trained ANNs. The extracted knowledge is represented in the form of restricted first-order logic rules, and subsequently allows user interaction by interfacing with a knowledge-based reasoner. The performance of GYAN is demonstrated using a number of real-world and artificial data sets. The empirical results demonstrate that: (1) an equivalent symbolic interpretation is derived describing the overall behaviour of the ANN with high accuracy and fidelity; and (2) a concise explanation is given (in terms of the rules, facts and predicates activated in a reasoning episode) as to why a particular instance is classified into a certain category.
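GYAN's multi-stage extraction is not reproduced here; the sketch below illustrates only the general idea of pedagogical rule extraction (fitting an interpretable rule learner to a trained network's input-output behaviour), with the data set and parameters chosen purely for illustration.

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. Train the "opaque" network.
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                    random_state=0).fit(X, y)

# 2. Fit an interpretable surrogate to the network's *predictions*,
#    so the extracted rules describe the ANN, not the raw labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, ann.predict(X))

# 3. Read off the rules; fidelity measures how faithfully they mimic
#    the network's behaviour.
print(export_text(surrogate, feature_names=load_iris().feature_names))
fidelity = (surrogate.predict(X) == ann.predict(X)).mean()
print(f"fidelity to the ANN: {fidelity:.2%}")
```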

Relevance:

10.00%

Publisher:

Abstract:

The numerical modelling of electromagnetic waves has been the focus of much past research. Specific applications of electromagnetic wave scattering arise in the fields of microwave heating and radar communication systems. The equations that govern the fundamental behaviour of electromagnetic wave propagation in waveguides and cavities are Maxwell's equations. A number of methods have been employed in the literature to solve these equations. Of these, the classical Finite-Difference Time-Domain (FDTD) scheme, which uses a staggered time and space discretisation, is the best known and most widely used; however, it is complicated to implement on an irregular computational domain using an unstructured mesh. In this work, a coupled method is introduced for the solution of Maxwell's equations, in which the free-space component of the solution is computed in the time domain, whilst the load is resolved using the frequency-dependent electric field Helmholtz equation. This methodology results in a time-frequency domain hybrid scheme. For the Helmholtz equation, boundary conditions are generated from the time-dependent free-space solutions, with the boundary information mapped into the frequency domain using the Discrete Fourier Transform. The solution for the electric field components is obtained by solving a sparse complex system of linear equations. The hybrid method has been tested for both waveguide and cavity configurations. Numerical tests performed on waveguides and cavities with inhomogeneous lossy materials highlight the accuracy and computational efficiency of the newly proposed hybrid computational electromagnetic strategy.
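A minimal 1D sketch of the time-domain, free-space half of such a hybrid scheme: the classical staggered (Yee) FDTD update in normalised units. The grid size, Courant number and source are illustrative assumptions.

```python
import numpy as np

# 1D FDTD: E and H live on staggered grids and leapfrog in time.
nz, nt = 200, 400
E = np.zeros(nz)        # E_x at integer grid points
H = np.zeros(nz - 1)    # H_y at half grid points
S = 0.5                 # Courant number c*dt/dz (<= 1 for stability)

for n in range(nt):
    H += S * (E[1:] - E[:-1])          # update H from the curl of E
    E[1:-1] += S * (H[1:] - H[:-1])    # update E from the curl of H
    E[nz // 4] += np.exp(-((n - 60) / 15.0) ** 2)  # soft Gaussian source

# In the hybrid method, field time-histories recorded on the load
# boundary would now be mapped to the frequency domain with a DFT
# (e.g. np.fft.rfft) to supply boundary data for the Helmholtz solve.
print("peak |E| after", nt, "steps:", np.abs(E).max().round(3))
```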

Relevance:

10.00%

Publisher:

Abstract:

This thesis applies Monte Carlo techniques to the study of X-ray absorptiometric methods of bone mineral measurement, seeking information that can be used to improve the accuracy of bone mineral measurements. A Monte Carlo computer code for X-ray photon transport at diagnostic energies has been developed from first principles. This development was undertaken because no readily available code included electron binding energy corrections for incoherent scattering, and one objective of the project was to study the effects of including these corrections in Monte Carlo models. The code comprises the main Monte Carlo program plus utilities for dealing with input data, together with a number of geometrical subroutines which can be used to construct complex geometries. The accuracy of the Monte Carlo code has been evaluated against the predictions of theory and the results of experiments: the results show a high correlation with theoretical predictions, and in comparisons with direct experimental measurements, agreement to within the model and experimental variances is obtained. The code is an accurate and valid modelling tool. A study of the significance of including electron binding energy corrections for incoherent scatter in the Monte Carlo code has been made. The results show this significance to be very dependent upon the type of application; the most significant effect is a reduction of low-angle scatter flux for high atomic number scatterers. To apply the Monte Carlo code effectively to the study of bone mineral density measurement by photon absorptiometry, the results must be considered in the context of a theoretical framework for the extraction of energy-dependent information from planar X-ray beams. Such a theoretical framework is developed, and the two-dimensional nature of tissue decomposition based on attenuation measurements alone is explained. This framework forms the basis for analytical models of bone mineral measurement by dual energy X-ray photon absorptiometry techniques. Monte Carlo models of dual energy X-ray absorptiometry (DEXA) have been established and used to study the contribution of scattered radiation to the measurements. It has been demonstrated that the measurement geometry has a significant effect upon the scatter contribution to the detected signal; for the geometry of the models studied in this work, the scatter has no significant effect upon the results of the measurements. The model has also been used to study a proposed technique, designated DPA(+), which involves dual energy X-ray transmission measurements plus a linear measurement of the distance along the ray path. The addition of the linear measurement enables the tissue decomposition to be extended to three components: bone mineral, fat and lean soft tissue. The results of the model demonstrate that the measurement of bone mineral using this technique is stable over a wide range of soft tissue compositions, indicating the potential to overcome a major problem of the two-component DEXA technique. However, the results also show that the accuracy of the DPA(+) technique is highly dependent upon the composition of the non-mineral components of bone, and that it has poorer precision (approximately twice the coefficient of variation) than standard DEXA measurements. These factors may limit the usefulness of the technique.

These studies illustrate the value of Monte Carlo computer modelling of quantitative X-ray measurement techniques. The Monte Carlo models of bone densitometry measurement have:

1. demonstrated the significant effects of the measurement geometry upon the contribution of scattered radiation to the measurements;
2. demonstrated that the statistical precision of the proposed DPA(+) three-tissue-component technique is poorer than that of the standard DEXA two-tissue-component technique;
3. demonstrated that the proposed DPA(+) technique has difficulty providing accurate simultaneous measurement of body composition in terms of a three-component model of fat, lean soft tissue and bone mineral; and
4. provided a knowledge base for input to decisions about the development (or otherwise) of a physical prototype DPA(+) imaging system.

The Monte Carlo computer code, data, utilities and associated models represent a set of significant, accurate and valid modelling tools for quantitative studies of physical problems in the fields of diagnostic radiology and radiography.
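The two-component decomposition underlying DEXA reduces, for a single ray, to two attenuation (Beer-Lambert) equations in two unknown areal densities. A hedged numerical sketch follows; the attenuation coefficients and thicknesses are illustrative assumptions, not reference data.

```python
import numpy as np

# Dual-energy transmission: at each energy, ln(I0/I) = mu_b*t_b + mu_s*t_s,
# where t_b, t_s are areal densities (g/cm^2) of bone mineral and soft
# tissue. Two energies give two equations in the two unknowns.
mu = np.array([[0.60, 0.25],    # low energy:  [bone, soft tissue], cm^2/g
               [0.30, 0.20]])   # high energy: [bone, soft tissue], cm^2/g

t_true = np.array([1.2, 18.0])           # assumed areal densities
log_atten = mu @ t_true                  # simulated ln(I0/I) at both energies

t_est = np.linalg.solve(mu, log_atten)   # decomposition: invert the 2x2 system
print("bone mineral, soft tissue (g/cm^2):", t_est.round(3))
# The DPA(+) technique adds a ray path-length measurement, extending
# this system to a third component (fat vs lean soft tissue).
```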

Relevance:

10.00%

Publisher:

Abstract:

Automatic spoken Language Identification (LID) is the process of identifying the language spoken within an utterance. The challenge this task presents is that no prior information is available indicating the content of the utterance or the identity of the speaker. The trend of globalization and the pervasive popularity of the Internet will amplify the need for the capabilities spoken language identification systems provide. A prominent application arises in call centres dealing with speakers of different languages; another important application is indexing and searching huge speech data archives and corpora that contain multiple languages. The aim of this research is to develop techniques targeted at producing a faster and more accurate automatic spoken LID system than those of the previous National Institute of Standards and Technology (NIST) Language Recognition Evaluation. Acoustic and phonetic speech information are targeted as the most suitable features for representing the characteristics of a language. The acoustic speech features are modelled using a Gaussian Mixture Model based approach, while phonetic speech information is extracted using existing speech recognition technology. Various techniques to improve LID accuracy are also studied, including the use of Vocal Tract Length Normalization to reduce the speech variation caused by different speakers, and a linear data fusion technique to combine the various aspects of information extracted from speech. As a result of this research, a LID system was implemented and presented for evaluation in the 2003 Language Recognition Evaluation conducted by NIST.
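A minimal sketch of the GMM-based acoustic scoring component: one mixture model per language, with the language decided by the highest average log-likelihood. The synthetic features and parameters are assumptions; real systems use cepstral features and add the phonotactic scores, VTLN and fusion described above.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in acoustic feature frames (e.g. MFCC-like vectors) per language.
train = {"english": rng.normal(0.0, 1.0, (2000, 12)),
         "mandarin": rng.normal(0.5, 1.2, (2000, 12))}

# One GMM per language models the distribution of that language's frames.
models = {lang: GaussianMixture(n_components=8, random_state=0).fit(X)
          for lang, X in train.items()}

def identify(utterance):
    """Score the utterance's frames against each language GMM and pick
    the language with the highest average log-likelihood."""
    return max(models, key=lambda lang: models[lang].score(utterance))

test = rng.normal(0.5, 1.2, (300, 12))   # frames drawn like "mandarin"
print(identify(test))                    # -> mandarin (most likely model)
```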

Relevance:

10.00%

Publisher:

Abstract:

Assessment of the condition of connectors in the overhead electricity network has traditionally relied on the heat dissipation or voltage drop from the existing load current (50 Hz) as a measurable parameter to differentiate between satisfactory and failing connectors. This research has developed a technique which does not rely on the 50 Hz current, and a prototype connector tester has been developed. In this system a high-frequency signal is injected into the section of line under test, and the instrument measures the resistive voltage drop and the current at the test frequency to yield the resistance in micro-ohms. From the value of resistance, a decision can be made as to whether a connector is satisfactory or approaching failure. Determining the resistive voltage drop in the presence of a large induced voltage was achieved by the innovative approach of using a representative sample of the magnetic flux producing the induced voltage as the phase-angle reference for the signal processing, rather than the phase angle of the current, which can be affected by the presence of nearby metal objects. Laboratory evaluation of the connector tester has validated the measurement technique, and the magnitude of the 50 Hz load current has minimal effect on the measurement accuracy. The remaining development steps towards a production instrument are the addition of a suitable battery-based power supply and isolated communications (probably radio), and refinement of the printed circuit board design and software.
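A hedged sketch of the underlying phase-sensitive (synchronous) detection idea: multiply the measured voltage by a reference aligned with the resistive component and low-pass filter, which rejects the quadrature induced voltage. All signal values are illustrative assumptions, and an ideal phase reference is assumed (the instrument derives its reference from a flux sample rather than from the current).

```python
import numpy as np

fs, f_test, n = 100_000, 1_000, 100_000   # sample rate, test frequency, samples
t = np.arange(n) / fs

R, I_amp = 50e-6, 10.0                    # assumed 50 micro-ohm joint, 10 A signal
v_resistive = R * I_amp * np.sin(2 * np.pi * f_test * t)   # in phase with current
v_induced = 5e-3 * np.cos(2 * np.pi * f_test * t)          # 90 degrees out of phase
noise = 1e-4 * np.random.default_rng(0).standard_normal(n)
v = v_resistive + v_induced + noise       # measured voltage across the joint

# Synchronous detection: multiplying by the in-phase reference and
# averaging (a low-pass) rejects the much larger quadrature induced
# term, leaving the amplitude of the resistive component.
ref = np.sin(2 * np.pi * f_test * t)
v_r = 2 * np.mean(v * ref)
print(f"measured resistance: {v_r / I_amp * 1e6:.1f} micro-ohms")  # ~50.0
```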