929 results for Two dimensional distribution
Abstract:
Packed beds have many industrial applications and are increasingly used in the process industries due to their low pressure drop. With the introduction of more efficient packings, novel packing materials (e.g. adsorbents) and new applications (e.g. flue gas desulphurisation), the aspect ratio (height to diameter) of such beds is decreasing. Obtaining uniform gas distribution in such beds is of crucial importance in minimising operating costs and optimising plant performance. Since a packed bed acts to some extent as its own distributor, the importance of obtaining uniform gas distribution has increased as aspect ratios decrease. There is no rigorous design method for distributors, owing to a limited understanding of the fluid flow phenomena and in particular of the effect of the bed-base/free-fluid interface. This study is based on a combined theoretical and modelling approach. The starting point is the Ergun equation, which is used to determine the pressure drop over a bed where the flow is uni-directional. This equation has been applied in a vectorial form, so that it can be applied to maldistributed and multi-directional flows, and has been realised in the computational fluid dynamics code PHOENICS. The use of this equation and its application have been verified by modelling experimental measurements of maldistributed gas flows where there is no free-fluid/bed-base interface. A novel, two-dimensional experiment has been designed to investigate the fluid mechanics of maldistributed gas flows in shallow packed beds. The flow through the outlet of the duct below the bed can be controlled, permitting a rigorous investigation. The results from this apparatus provide useful insights into the fluid mechanics of flow in and around a shallow packed bed and show the critical effect of the bed base. The PHOENICS/vectorial Ergun equation model has been adapted to model this situation. The model has been improved by the inclusion of spatial voidage variations in the bed and the prescription of a novel bed-base boundary condition. This boundary condition is based on the logarithmic law for velocities near walls, without restricting the velocity at the bed base to zero, and is applied within a turbulence model. The flow in a curved bed section, which is three-dimensional in nature, is examined experimentally. The effects of the walls and of changes in gas direction on the gas flow are shown to be particularly significant. As before, the relative amounts of gas flowing through the bed and the duct outlet can be controlled. The model and the improved understanding of the underlying physical phenomena form the basis for the development of new distributors and rigorous design methods for them.
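For reference, the one-dimensional Ergun equation referred to above, and the kind of vectorial generalisation that can serve as a momentum sink in a CFD code, are sketched below; the exact resistance formulation implemented in the PHOENICS model may differ.

```latex
% One-dimensional Ergun equation (pressure drop per unit bed length)
\frac{\Delta P}{L}
  = \frac{150\,\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3} d_p^{2}}\,U
  + \frac{1.75\,\rho\,(1-\varepsilon)}{\varepsilon^{3} d_p}\,U^{2}

% Vectorial form: the superficial velocity U is replaced by the velocity
% vector u, with the quadratic term made direction-aware via |u| u
-\nabla p
  = \frac{150\,\mu\,(1-\varepsilon)^{2}}{\varepsilon^{3} d_p^{2}}\,\mathbf{u}
  + \frac{1.75\,\rho\,(1-\varepsilon)}{\varepsilon^{3} d_p}\,\lvert\mathbf{u}\rvert\,\mathbf{u}
```

Here ε is the bed voidage, d_p the particle diameter, and μ and ρ the gas viscosity and density.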
Abstract:
This work is concerned with a study of certain phenomena related to the performance and design of distributors in gas fluidized beds, with particular regard to flowback of solid particles. The work to be described is divided into two parts. I. In Part one, a review of published material pertaining to distribution plates, including details from the patent specifications, has been prepared. After a chapter on the determination of the incipient fluidizing velocity, the following aspects of multi-orifice distributor plates in gas fluidized beds have been studied: (i) The effect of the distributor on bubble formation, related to the way in which even distribution of bubbles on the top surface of the fluidized bed is obtained, e.g. the desirable pressure drop ratio ΔPD/ΔPB for the even distribution of gas across the bed. Ratios of distributor pressure drop ΔPD to bed pressure drop ΔPB at which stable fluidization occurs show reasonable agreement with industrial practice. There is evidence that larger diameter beds tend to be less stable than smaller diameter beds when these are operated with shallow beds. Experiments show that in the presence of the bed the distributor pressure drop is reduced relative to the pressure drop without the bed, and the pressure drop in the former condition is regarded as the appropriate parameter for the design of the distributor. (ii) Experimental measurements of bubble distribution at the surface have been used to indicate maldistribution within the bed. Maldistribution is more likely at low gas flow rates and with distributors having large fractional free area characteristics (i.e. with distributors having low pressure drops). Bubble sizes obtained from this study, as well as those of others, have been successfully correlated. The correlation produced implies the existence of a bubble at the surface of an orifice and its growth by the addition of excess gas from the fluidized bed. (iii) For a given solid system, the amount of defluidized particles stagnating on the distributor plate is influenced by the orifice spacing, bed diameter and gas flow rate, but is independent of the initial bed height and the way the orifices are arranged on the distributor plate. II. In Part two, solids flowback through single and multi-orifice distributors in two-dimensional and cylindrical beds of solids fluidized with air has been investigated. Distributors equipped with long cylindrical nozzles have also been included in the study. An equation for the prediction of free flowback of solids through multi-orifice distributors has been derived. Under fluidized conditions two regimes of flowback have been differentiated, namely dumping and weeping. Data in the weeping regime have been successfully correlated. The limiting gas velocity through the distributor orifices at which flowback is completely excluded is found to be independent of bed height, but a function of distributor design and of the physical properties of the gas and solid used. A criterion for the prediction of this velocity has been established. The decisive advantage of increasing the distributor thickness or using nozzles to minimize solids flowback in fluidized beds has been observed, and the opportunity has been taken to explore this poorly studied subject area. It has been noted, probably for the first time, that with long nozzles there exists a critical nozzle length above which uncontrollable downflow of solids occurs.
A theoretical model for predicting the critical length of a bundle of nozzles in terms of gas velocity through the nozzles has been set up. Theoretical calculations compared favourably with experiments.
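As a rough illustration of the design quantities discussed above, the sketch below computes the bed pressure drop of a bubbling bed from the weight of suspended solids per unit area and sizes the distributor pressure drop as a fraction of it. The 30% ratio and the orifice-equation form are textbook rules of thumb, not values taken from this thesis, and all numbers are illustrative.

```python
import math

def bed_pressure_drop(h_bed, voidage, rho_solid, rho_gas, g=9.81):
    """Pressure drop across a fluidized bed ~ weight of suspended solids per unit area (Pa)."""
    return (1.0 - voidage) * (rho_solid - rho_gas) * g * h_bed

def distributor_pressure_drop(dp_bed, ratio=0.3):
    """Size the distributor pressure drop as a fraction of the bed pressure drop (rule of thumb)."""
    return ratio * dp_bed

def orifice_velocity(dp_dist, rho_gas, cd=0.6):
    """Gas velocity through a distributor orifice from the orifice equation (m/s)."""
    return cd * math.sqrt(2.0 * dp_dist / rho_gas)

# Example: a 0.5 m deep bed of sand fluidized with air
dp_b = bed_pressure_drop(h_bed=0.5, voidage=0.45, rho_solid=2600.0, rho_gas=1.2)
dp_d = distributor_pressure_drop(dp_b)
print(f"Bed dP = {dp_b:.0f} Pa, distributor dP = {dp_d:.0f} Pa, "
      f"orifice velocity = {orifice_velocity(dp_d, 1.2):.1f} m/s")
```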
Abstract:
This work is the first to use patterned soft underlayers in multilevel three-dimensional vertical magnetic data storage systems. The motivation stems from an exponentially growing information stockpile, and a corresponding need for more efficient storage devices with higher density. The world information stockpile currently exceeds 150 EB (1 exabyte = 1×10¹⁸ bytes), most of which is in analog form. Among the storage technologies (semiconductor, optical and magnetic), magnetic hard disk drives are poised to play a major role in personal, network and corporate storage. However, this mode suffers from the superparamagnetic limit, which caps achievable areal density due to fundamental quantum mechanical stability requirements. Many techniques are considered viable for deferring superparamagnetism into the hundreds of Gbit/in², such as patterned media, Heat-Assisted Magnetic Recording (HAMR), Self-Organized Magnetic Arrays (SOMA), antiferromagnetically coupled structures (AFC), and perpendicular magnetic recording. Nonetheless, these techniques utilize a single magnetic layer and can thus be viewed as two-dimensional in nature. In this work a novel three-dimensional vertical magnetic recording approach is proposed. This approach utilizes the entire thickness of a magnetic multilayer structure to store information, with potential areal density well into the Tbit/in² regime. There are several possible implementations for 3D magnetic recording, each presenting its own set of requirements, merits and challenges. The issues and considerations pertaining to the development of such systems are examined and analyzed using empirical and numerical analysis techniques. Two novel key approaches are proposed and developed: (1) a patterned soft underlayer (SUL), which allows for enhanced recording of thicker media; (2) a combinatorial approach for 3D media development that facilitates concurrent investigation of various film parameters against a predefined performance metric. A case study is presented using combinatorial overcoats of tantalum and zirconium oxides for corrosion protection in magnetic media. The feasibility of 3D recording is demonstrated, and 3D media development is identified as a key prerequisite. The patterned SUL shows significant enhancement over a conventional "un-patterned" SUL, and shows that geometry can be used as a design tool to achieve a favorable field distribution wherever magnetic storage and magnetic phenomena are involved.
Abstract:
The spacing of adjacent wheel lines of dual-lane loads induces different lateral live load distributions on bridges, which cannot be determined using the current American Association of State Highway and Transportation Officials (AASHTO) Load and Resistance Factor Design (LRFD) or Load Factor Design (LFD) equations for vehicles with standard axle configurations. Current Iowa law requires dual-lane loads to meet a five-foot requirement, the adequacy of which needs to be verified. To improve the state policy and AASHTO code specifications, it is necessary to understand the actual effects of wheel-line spacing on lateral load distribution. The main objective of this research was to investigate the impact of the wheel-line spacing of dual-lane loads on the lateral load distribution on bridges. To achieve this objective, a numerical evaluation using two-dimensional linear elastic finite element (FE) models was performed. For simulation purposes, 20 prestressed-concrete bridges, 20 steel bridges, and 20 slab bridges were randomly sampled from the Iowa bridge database. Based on the FE results, the load distribution factors (LDFs) of the concrete and steel bridges and the equivalent lengths of the slab bridges were derived. To investigate the variations of LDFs, a total of 22 types of single-axle four-wheel-line dual-lane loads were taken into account with configurations consisting of combinations of various interior and exterior wheel-line spacing. The corresponding moment and shear LDFs and equivalent widths were also derived using the AASHTO equations and the adequacy of the Iowa DOT five-foot requirement was evaluated. Finally, the axle weight limits per lane for different dual-lane load types were further calculated and recommended to complement the current Iowa Department of Transportation (DOT) policy and AASHTO code specifications.
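For context, the sketch below shows the kind of AASHTO LRFD distribution-factor calculation that such FE results are typically compared against: the standard LRFD expression for moment in interior beams of concrete-deck-on-girder bridges with two or more loaded lanes. The formula is quoted from memory and should be checked against the current specification; it is not the modified formulation derived in this study, and the example numbers are illustrative.

```python
def lrfd_moment_df_interior(S_ft, L_ft, ts_in, Kg_in4):
    """AASHTO LRFD moment distribution factor, interior girder, two or more
    design lanes loaded (lanes per girder). S: girder spacing (ft),
    L: span (ft), ts: deck slab thickness (in), Kg: longitudinal
    stiffness parameter (in^4)."""
    return (0.075
            + (S_ft / 9.5) ** 0.6
            * (S_ft / L_ft) ** 0.2
            * (Kg_in4 / (12.0 * L_ft * ts_in ** 3)) ** 0.1)

# Hypothetical girder bridge: 8 ft spacing, 100 ft span, 8 in deck, Kg = 1.0e6 in^4
print(f"g_moment = {lrfd_moment_df_interior(8.0, 100.0, 8.0, 1.0e6):.3f}")
```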
Abstract:
In this work the split-field finite-difference time-domain (SF-FDTD) method has been extended to the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparison with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. For the 2D case, the shift in resonant waveguides is corroborated. Not only the scalar Kerr effect is considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The use of nonlinear materials in this kind of device makes it possible to design tunable devices such as variable band filters. However, the third-order nonlinear susceptibility is usually small and high intensities are needed to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shift of the resonance peaks for both TE and TM polarizations is obtained numerically. The use of a numerical method based on the finite-difference time-domain method makes it possible to analyse this behaviour in the time domain, so bistability curves are also computed by means of the numerical method. These curves show how the nonlinear effect modifies the properties of the structure as a function of the input pump field. When the nonlinear behaviour is taken into account, the estimation of the electric field components becomes more challenging. In this paper, we present a set of acceleration strategies based on parallel software and hardware solutions.
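As a reminder of the scale of the effect mentioned above, the sketch below converts a scalar third-order susceptibility into an intensity-dependent refractive index using the standard Kerr relations n = n0 + n2·I and n2 = 3χ⁽³⁾/(4 ε0 c n0²). The numerical values are illustrative placeholders, not values from the paper.

```python
import scipy.constants as const

def kerr_n2(chi3, n0):
    """Nonlinear index n2 (m^2/W) from the scalar third-order susceptibility chi3 (m^2/V^2)."""
    return 3.0 * chi3 / (4.0 * const.epsilon_0 * const.c * n0 ** 2)

def kerr_index(n0, chi3, intensity):
    """Intensity-dependent refractive index n = n0 + n2 * I (intensity in W/m^2)."""
    return n0 + kerr_n2(chi3, n0) * intensity

# Illustrative: chi3 ~ 1e-21 m^2/V^2, n0 = 1.5, I = 1 GW/cm^2 = 1e13 W/m^2
print(f"n = {kerr_index(1.5, 1e-21, 1e13):.6f}")  # index shift ~1e-6: why high intensities are needed
```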
Abstract:
Insufficient availability of osteogenic cells limits bone regeneration through cell-based therapies. This study investigated the potential of amniotic fluid-derived stem (AFS) cells to synthesize mineralized extracellular matrix within porous medical-grade poly-ε-caprolactone (mPCL) scaffolds. The AFS cells were initially differentiated in two-dimensional (2D) culture to determine appropriate osteogenic culture conditions and verify physiologic mineral production by the AFS cells. The AFS cells were then cultured on 3D mPCL scaffolds (6 mm diameter × 9 mm height) and analyzed for their ability to differentiate to osteoblastic cells in this environment. The amount and distribution of mineralized matrix production were quantified throughout the mPCL scaffold using nondestructive micro-computed tomography (microCT) analysis and confirmed through biochemical assays. Sterile microCT scanning provided longitudinal analysis of long-term cultured mPCL constructs to determine the rate and distribution of mineral matrix within the scaffolds. The AFS cells deposited mineralized matrix throughout the mPCL scaffolds and remained viable after 15 weeks of 3D culture. The effect of predifferentiation of the AFS cells on subsequent bone formation in vivo was determined in a rat subcutaneous model. Cells that were predifferentiated for 28 days in vitro produced seven times more mineralized matrix when implanted subcutaneously in vivo. This study demonstrated the potential of AFS cells to produce 3D mineralized bioengineered constructs in vitro and in vivo and suggests that AFS cells may be an effective cell source for functional repair of large bone defects.
Abstract:
Areal bone mineral density (aBMD) is the most common surrogate measurement for assessing the bone strength of the proximal femur associated with osteoporosis. Additional factors, however, contribute to the overall strength of the proximal femur, primarily the anatomical geometry. Finite element analysis (FEA) is an effective and widely used computer-based simulation technique for modeling mechanical loading of various engineering structures, providing predictions of displacement and induced stress distribution due to the applied load. FEA is therefore inherently dependent upon both density and anatomical geometry. FEA may be performed on both three-dimensional and two-dimensional models of the proximal femur derived from radiographic images, from which the mechanical stiffness may be predicted. It is examined whether the outcome measures of two-dimensional FEA of X-ray images (FEXI) and three-dimensional FEA, namely the computed stiffness of the proximal femur, were more sensitive than aBMD to changes in trabecular bone density and femur geometry. It is assumed that if an outcome measure follows known trends with changes in density and geometric parameters, then an increased sensitivity will be indicative of an improved prediction of bone strength. All three outcome measures increased non-linearly with trabecular bone density, increased linearly with cortical shell thickness and neck width, decreased linearly with neck length, and were relatively insensitive to neck-shaft angle. For femoral head radius, aBMD was relatively insensitive, with two-dimensional FEXI and three-dimensional FEA demonstrating a non-linear increase and decrease in sensitivity, respectively. For neck anteversion, aBMD decreased non-linearly, whereas both two-dimensional FEXI and three-dimensional FEA demonstrated a parabolic-type relationship, with maximum stiffness achieved at an angle of approximately 15°. Multi-parameter analysis showed that all three outcome measures demonstrated their highest sensitivity to a change in cortical thickness. When changes in all input parameters were considered simultaneously, three- and two-dimensional FEA had statistically equal sensitivities (0.41±0.20 and 0.42±0.16 respectively, p = ns) that were significantly higher than the sensitivity of aBMD (0.24±0.07; p = 0.014 and 0.002 for three-dimensional and two-dimensional FEA respectively). This simulation study suggests that since mechanical integrity and FEA are inherently dependent upon anatomical geometry, FEXI stiffness, being derived from conventional two-dimensional radiographic images, may provide an improvement in the prediction of bone strength of the proximal femur over that currently provided by aBMD.
Abstract:
In the structure of the title compound, C₂H₁₀N₂²⁺·C₈H₂Cl₂O₄²⁻, the dications and dianions form hydrogen-bonded ribbon substructures which enclose conjoint cyclic R²₁(7), R¹₂(7) and R⁴₂(8) associations and extend down the c-axis direction. These ribbons inter-associate down b, giving a two-dimensional sheet structure. In the dianions, one of the carboxylate groups is essentially coplanar with the benzene ring, while the other is normal to it [C-C-C-O torsion angles = 177.67 (12) and 81.94 (17)°, respectively].
Abstract:
The structures of the 1:1 proton-transfer compounds of L-tartaric acid with 3-aminopyridine [3-aminopyridinium hydrogen (2R,3R)-tartrate dihydrate, C₅H₇N₂⁺·C₄H₅O₆⁻·2H₂O, (I)], pyridine-3-carboxylic acid (nicotinic acid) [anhydrous 3-carboxypyridinium hydrogen (2R,3R)-tartrate, C₆H₆NO₂⁺·C₄H₅O₆⁻, (II)] and pyridine-2-carboxylic acid [2-carboxypyridinium hydrogen (2R,3R)-tartrate monohydrate, C₆H₆NO₂⁺·C₄H₅O₆⁻·H₂O, (III)] have been determined. In (I) and (II), there is a direct pyridinium-carboxyl N⁺-H···O hydrogen-bonding interaction, four-centred in (II), giving conjoint cyclic R¹₂(5) associations. In contrast, the N-H···O association in (III) is with a water O-atom acceptor, which provides links to separate tartrate anions through O(hydroxy) acceptors. All three compounds have the head-to-tail C(7) hydrogen-bonded chain substructures commonly associated with 1:1 proton-transfer hydrogen tartrate salts. These chains are extended into two-dimensional sheets which, in hydrates (I) and (III), additionally involve the solvent water molecules. Three-dimensional hydrogen-bonded structures are generated via crosslinking through the associative functional groups of the substituted pyridinium cations. In the sheet structure of (I), both water molecules act as donors and acceptors in interactions with separate carboxyl and hydroxy O-atom acceptors of the primary tartrate chains, closing conjoint cyclic R⁴₄(8), R³₄(11) and R³₃(12) associations. Also, in (II) and (III) there are strong cation carboxyl-carboxyl O-H···O hydrogen bonds [O···O = 2.5387 (17) Å in (II) and 2.441 (3) Å in (III)], which in (II) form part of a cyclic R²₂(6) inter-sheet association. This series of heteroaromatic Lewis base-hydrogen L-tartrate salts provides further examples of molecular assembly facilitated by the presence of the classical two-dimensional hydrogen-bonded hydrogen tartrate or hydrogen tartrate-water sheet substructures, which are expanded into three-dimensional frameworks via peripheral cation bifunctional substituent-group crosslinking interactions.
Abstract:
Purpose: To ascertain the effectiveness of object-centered three-dimensional representations for the modeling of corneal surfaces. Methods: Three-dimensional (3D) surface decompositions into series of basis functions, including (i) spherical harmonics, (ii) hemispherical harmonics, and (iii) 3D Zernike polynomials, were considered and compared to the traditional viewer-centered representation of two-dimensional (2D) Zernike polynomial expansion for a range of retrospective videokeratoscopic height data from three clinical groups. The data were collected using the Medmont E300 videokeratoscope. The groups included 10 normal corneas with corneal astigmatism less than −0.75 D, 10 astigmatic corneas with corneal astigmatism between −1.07 D and −3.34 D (mean = −1.83 D, SD = ±0.75 D), and 10 keratoconic corneas. Only data from the right eyes of the subjects were considered. Results: All object-centered decompositions led to significantly better fits to corneal surfaces (in terms of the RMS error values) than the corresponding 2D Zernike polynomial expansions with the same number of coefficients, for all considered corneal surfaces, corneal diameters (2, 4, 6, and 8 mm), and model orders (4th to 10th radial orders). The best results (smallest RMS fit error) were obtained with the spherical harmonics decomposition, which led to about a 22% reduction in the RMS fit error compared to the traditional 2D Zernike polynomials. Hemispherical harmonics and the 3D Zernike polynomials reduced the RMS fit error by about 15% and 12%, respectively. Larger reductions in RMS fit error were achieved for smaller corneal diameters and lower order fits. Conclusions: Object-centered 3D decompositions provide viable alternatives to the traditional viewer-centered 2D Zernike polynomial expansion of a corneal surface. They achieve better fits to videokeratoscopic height data and could be particularly suited to the analysis of multiple corneal measurements, where there can be slight variations in the position of the cornea from one map acquisition to the next.
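The model comparison above amounts to linear least-squares fitting of height data in different bases and comparing RMS residuals. A minimal sketch of that procedure follows; for brevity it uses a simple polynomial basis as a stand-in for the Zernike and spherical-harmonic bases named in the abstract, and the data are synthetic.

```python
import numpy as np

def fit_surface(points, heights, basis_funcs):
    """Least-squares fit of surface heights to a linear basis expansion.
    Returns the coefficients and the RMS fit error."""
    A = np.column_stack([f(points) for f in basis_funcs])  # design matrix
    coeffs, *_ = np.linalg.lstsq(A, heights, rcond=None)
    rms = np.sqrt(np.mean((A @ coeffs - heights) ** 2))
    return coeffs, rms

# Synthetic "corneal" height data on a unit disc (stand-in for videokeratoscope output)
rng = np.random.default_rng(0)
r = np.sqrt(rng.uniform(0, 1, 2000))
t = rng.uniform(0, 2 * np.pi, 2000)
x, y = r * np.cos(t), r * np.sin(t)
z = 0.5 * (x**2 + y**2) + 0.05 * x + rng.normal(0, 0.01, 2000)

# Stand-in basis: low-order polynomials in x and y (real code: Zernike or harmonic terms)
basis = [lambda p: np.ones_like(p[0]),
         lambda p: p[0],
         lambda p: p[1],
         lambda p: p[0]**2 + p[1]**2]
coeffs, rms = fit_surface((x, y), z, basis)
print(f"RMS fit error: {rms:.4f}")
```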
Abstract:
The structures of proton-transfer compounds of 4,5-dichlorophthalic acid (DCPA) with the aliphatic Lewis bases triethylamine, diethylamine, n-butylamine and piperidine, namely triethylaminium 2-carboxy-4,5-dichlorobenzoate, C₆H₁₆N⁺·C₈H₃Cl₂O₄⁻ (I), diethylaminium 2-carboxy-4,5-dichlorobenzoate, C₄H₁₂N⁺·C₈H₃Cl₂O₄⁻ (II), bis(n-butylaminium) 4,5-dichlorophthalate monohydrate, 2(C₄H₁₂N⁺)·C₈H₂Cl₂O₄²⁻·H₂O (III), and bis(piperidinium) 4,5-dichlorophthalate monohydrate, 2(C₅H₁₂N⁺)·C₈H₂Cl₂O₄²⁻·H₂O (IV), have been determined at 200 K. All compounds have hydrogen-bonding associations, giving discrete cation-anion units in (I), linear chains in (II), and two-dimensional structures in (III) and (IV). In (I) a discrete cation-anion unit is formed through an asymmetric cyclic R²₁(4) N⁺-H···O,O′ hydrogen-bonding association, whereas in (II) one-dimensional chains are formed through linear N-H···O associations by both aminium H donors. In compounds (III) and (IV) the primary N-H···O linked cation-anion units are extended into a two-dimensional sheet structure via aminium N-H···O(carboxyl) and N-H···O(carbonyl) interactions. In the 1:1 salts [(I) and (II)], the hydrogen 4,5-dichlorophthalate anions are essentially planar, with short intramolecular carboxylic acid O-H···O(carboxyl) hydrogen bonds [O···O = 2.4223 (14) and 2.388 (2) Å, respectively]. This work provides a further example of the uncommon zero-dimensional hydrogen-bonded DCPA-Lewis base salt, as well as of the one-dimensional chain structure type, while even in the hydrate structures of the 1:2 salts with the primary and secondary amines, the low dimensionality generally associated with 1:1 DCPA salts is also found.
Abstract:
In this paper we describe the development of a three-dimensional (3D) imaging system for a 3500 tonne mining machine (dragline). Draglines are large walking cranes used for removing the dirt that covers a coal seam. Our group has been developing a dragline swing automation system since 1994. The system so far has been 'blind' to its external environment. The work presented in this paper attempts to give the dragline an ability to sense its surroundings. A 3D digital terrain map (DTM) is created from data obtained from a two-dimensional laser scanner while the dragline swings. Experimental data from an operational dragline are presented.
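A minimal sketch of the geometry involved: each 2D scan return (range and in-plane beam angle) is rotated by the boom swing angle to give 3D points in a machine-centred frame. The frame conventions and the absence of sensor offsets and calibration terms here are simplifying assumptions for illustration, not details from the paper.

```python
import numpy as np

def scan_to_points(ranges, scan_angles, swing_angle):
    """Convert one 2D laser scan (taken in a vertical plane) into 3D points.

    ranges:      array of range returns (m)
    scan_angles: in-plane beam angles, 0 = horizontal, positive downwards (rad)
    swing_angle: dragline swing (rotation about the vertical axis) (rad)
    Returns an (N, 3) array of x, y, z points in a machine-centred frame.
    """
    horiz = ranges * np.cos(scan_angles)      # horizontal distance within the scan plane
    x = horiz * np.cos(swing_angle)           # rotate the scan plane by the swing angle
    y = horiz * np.sin(swing_angle)
    z = -ranges * np.sin(scan_angles)         # depth below the scanner
    return np.column_stack([x, y, z])

# Accumulate points from successive scans as the machine swings,
# then grid them into a digital terrain map (DTM)
scans = [(np.full(5, 40.0), np.linspace(0.2, 0.6, 5), th)
         for th in np.linspace(0.0, 1.5, 30)]  # synthetic scans over a ~86 degree swing
cloud = np.vstack([scan_to_points(r, a, th) for r, a, th in scans])
print(cloud.shape)  # (150, 3) points, ready for gridding into a DTM
```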
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
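For intuition on fitting the generalized Gaussian model mentioned above: the ratio of the squared mean absolute value to the variance is a monotone function of the shape parameter, so it can be inverted numerically. The sketch below uses this standard moment-matching variant as an illustration; it is not the thesis's exact least-squares formulation.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_ratio(beta):
    """E[|x|]^2 / E[x^2] for a zero-mean generalized Gaussian with shape parameter beta."""
    return gamma(2.0 / beta) ** 2 / (gamma(1.0 / beta) * gamma(3.0 / beta))

def estimate_ggd_shape(coeffs):
    """Moment-matching estimate of the GGD shape parameter for wavelet coefficients."""
    r = np.mean(np.abs(coeffs)) ** 2 / np.mean(coeffs ** 2)
    return brentq(lambda b: ggd_ratio(b) - r, 0.05, 10.0)  # invert the monotone ratio function

# Check on synthetic Laplacian data (a GGD with shape = 1)
rng = np.random.default_rng(1)
sample = rng.laplace(size=100_000)
print(f"estimated shape ~ {estimate_ggd_shape(sample):.2f}")  # expect ~1.0
```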
Abstract:
This thesis deals with the problem of instantaneous frequency (IF) estimation of sinusoidal signals. This topic plays a significant role in signal processing and communications. Depending on the type of the signal, two major approaches are considered. For IF estimation of single-tone or digitally-modulated sinusoidal signals (like frequency shift keying signals) the approach of digital phase-locked loops (DPLLs) is considered, and this is Part I of this thesis. For FM signals the approach of time-frequency analysis is considered, and this is Part II of the thesis. In Part I we have utilized sinusoidal DPLLs with a non-uniform sampling scheme, as this type is widely used in communication systems. The digital tanlock loop (DTL) has introduced significant advantages over other existing DPLLs. In the last 10 years many efforts have been made to improve DTL performance. However, this loop and all of its modifications utilize a Hilbert transformer (HT) to produce a signal-independent 90-degree phase-shifted version of the input signal. The Hilbert transformer can be realized approximately using a finite impulse response (FIR) digital filter. This realization introduces further complexity into the loop, in addition to approximations and frequency limitations on the input signal. We have tried to avoid the practical difficulties associated with the conventional tanlock scheme while keeping its advantages. A time delay is utilized in the tanlock scheme of the DTL to produce a signal-dependent phase shift. This gave rise to the time-delay digital tanlock loop (TDTL). Fixed point theorems are used to analyze the behavior of the new loop. As such, the TDTL combines the two major approaches in DPLLs: the non-linear approach of the sinusoidal DPLL based on fixed point analysis, and the linear tanlock approach based on arctan phase detection. The TDTL preserves the main advantages of the DTL despite its reduced structure. An application of the TDTL in FSK demodulation is also considered. The idea of replacing the HT by a time delay may be of interest in other signal processing systems. Hence we have analyzed and compared the behaviors of the HT and the time delay in the presence of additive Gaussian noise. Based on the above analysis, the behavior of the first- and second-order TDTLs has been analyzed in additive Gaussian noise. Since DPLLs need time for locking, they are normally not efficient in tracking the continuously changing frequencies of non-stationary signals, i.e. signals with time-varying spectra. Non-stationary signals are of importance in synthetic and real-life applications. An example is the frequency-modulated (FM) signals widely used in communication systems. Part II of this thesis is dedicated to the IF estimation of non-stationary signals. For such signals the classical spectral techniques break down, due to the time-varying nature of their spectra, and more advanced techniques must be utilized. For the purpose of instantaneous frequency estimation of non-stationary signals there are two major approaches: parametric and non-parametric. We chose the non-parametric approach, which is based on time-frequency analysis. This approach is computationally less expensive and more effective in dealing with multicomponent signals, which are the main aim of this part of the thesis. A time-frequency distribution (TFD) of a signal is a two-dimensional transformation of the signal to the time-frequency domain. Multicomponent signals can be identified by multiple energy peaks in the time-frequency domain.
Many real-life and synthetic signals are of a multicomponent nature and there is little in the literature concerning IF estimation of such signals. This is why we have concentrated on multicomponent signals in Part II. An adaptive algorithm for IF estimation using quadratic time-frequency distributions has been analyzed. A class of time-frequency distributions that are more suitable for this purpose has been proposed. The kernels of this class are time-only, or one-dimensional, rather than time-lag (two-dimensional) kernels. Hence this class has been named the T-class. If the parameters of these TFDs are properly chosen, they are more efficient than the existing fixed-kernel TFDs in terms of resolution (energy concentration around the IF) and artifact reduction. The T-distributions have been used in the adaptive IF algorithm and proved to be efficient in tracking rapidly changing frequencies. They also enable direct amplitude estimation for the components of a multicomponent signal.
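In its simplest form, the non-parametric approach described above reduces to computing a TFD and tracking its energy peak along time. The sketch below does this with a plain spectrogram rather than the quadratic T-class distributions proposed in the thesis, which is enough to show the mechanics for a single-component FM test signal.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0                                   # sample rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Linear FM test signal: IF sweeps 50 -> 250 Hz over 2 s
true_if = 50.0 + 100.0 * t
x = np.cos(2 * np.pi * (50.0 * t + 50.0 * t ** 2))

# TFD (here: a spectrogram); the IF estimate is the peak frequency of each time slice
f, tt, S = spectrogram(x, fs=fs, nperseg=256, noverlap=224)
if_est = f[np.argmax(S, axis=0)]

err = if_est - np.interp(tt, t, true_if)
print(f"mean |IF error| = {np.mean(np.abs(err)):.2f} Hz")
```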
Abstract:
This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent a zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time.
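For intuition on the IMCS/path-sampling idea: for an exponential-family model such as the autologistic MRF, the derivative of the log normalization constant with respect to a parameter equals the expected canonical statistic, so a log-NC ratio can be recovered by integrating MCMC estimates of that expectation along a path in parameter space. The sketch below applies this to a tiny single-parameter Ising-type model; the grid size, parameter path and Gibbs settings are illustrative choices, not those of the thesis.

```python
import numpy as np

def gibbs_ising(theta, n=16, sweeps=200, rng=None):
    """Gibbs-sample a small Ising-type binary MRF on a torus; return the mean canonical
    statistic S(x) = sum over neighbour pairs of x_i * x_j, averaged over the last half."""
    rng = rng or np.random.default_rng(0)
    x = rng.integers(0, 2, (n, n)) * 2 - 1                  # spins in {-1, +1}
    stats = []
    for s in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (x[(i - 1) % n, j] + x[(i + 1) % n, j]
                      + x[i, (j - 1) % n] + x[i, (j + 1) % n])
                p = 1.0 / (1.0 + np.exp(-2.0 * theta * nb))  # P(spin = +1 | neighbours)
                x[i, j] = 1 if rng.random() < p else -1
        if s >= sweeps // 2:                                 # discard burn-in sweeps
            stats.append((x * np.roll(x, 1, 0)).sum() + (x * np.roll(x, 1, 1)).sum())
    return np.mean(stats)

# Path sampling: log Z(theta_1) - log Z(theta_0) = integral of E_theta[S] d theta
thetas = np.linspace(0.0, 0.3, 7)
means = np.array([gibbs_ising(th) for th in thetas])
log_nc_ratio = float(np.sum(np.diff(thetas) * (means[:-1] + means[1:]) / 2.0))  # trapezoid rule
print(f"log Z(0.3) - log Z(0.0) ~ {log_nc_ratio:.1f}")
```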