975 results for Lattice renormalization


Relevance:

10.00%

Publisher:

Abstract:

While spoken term detection (STD) systems based on word indices provide good accuracy, there are several practical applications where it is infeasible or too costly to employ an LVCSR engine. An STD system is presented, which is designed to incorporate a fast phonetic decoding front-end and be robust to decoding errors whilst still allowing for rapid search speeds. This goal is achieved through mono-phone open-loop decoding coupled with fast hierarchical phone lattice search. Results demonstrate that an STD system that is designed with the constraint of a fast and simple phonetic decoding front-end requires a compromise to be made between search speed and search accuracy.
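The robustness-versus-speed trade-off described above can be illustrated with a much-simplified sketch: error-tolerant search for a query phone sequence in a 1-best decoded phone stream via approximate substring matching. This is an illustrative stand-in, not the system described here, which searches a hierarchical phone lattice rather than a single decoding.

```python
def std_search(decoded, query, max_cost=1):
    """Report end positions (1-based) in `decoded` where `query` matches
    within `max_cost` phone errors (substitution/insertion/deletion).
    Classic approximate-substring DP: a match may start anywhere, so the
    cost of matching the empty query prefix is always zero."""
    m = len(query)
    prev = list(range(m + 1))  # distances against the empty decoded prefix
    hits = []
    for i, phone in enumerate(decoded, start=1):
        cur = [0]  # a match may start at any decoded position
        for j in range(1, m + 1):
            sub = prev[j - 1] + (phone != query[j - 1])  # (mis)match
            skip_dec = prev[j] + 1      # extra decoded phone (insertion)
            skip_qry = cur[j - 1] + 1   # missed query phone (deletion)
            cur.append(min(sub, skip_dec, skip_qry))
        if cur[m] <= max_cost:
            hits.append(i)
        prev = cur
    return hits
```

Raising `max_cost` makes the search more tolerant of decoding errors at the price of more false alarms, mirroring the accuracy/speed compromise noted in the abstract.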

Relevance:

10.00%

Publisher:

Abstract:

Tungsten trioxide (WO3) is a potential semiconducting material for sensing NH3, CO, CH4 and acetaldehyde gases. The current research aims at the development, microstructural characterization and gas-sensing properties of WO3 thin films. In this paper, we present the microstructural characterization of these films as a function of post-annealing heat treatment. Microstructural and elemental analyses of electron-beam evaporated WO3 thin films and iron-doped WO3 films (WO3:Fe) have been carried out using analytical techniques such as transmission electron microscopy (TEM), Rutherford backscattering spectrometry (RBS) and X-ray photoelectron spectroscopy (XPS). TEM analysis revealed that annealing at 300 °C for 1 hour improves the crystallinity of the WO3 film. Both WO3 and WO3:Fe films had uniform thickness, and the values corresponded to those measured during deposition. RBS results show a fairly high concentration of oxygen at the film surface as well as in the bulk for both films, which might be due to adsorption of oxygen from the atmosphere or to the lattice oxygen vacancies inherent in the WO3 structure. XPS results indicate that tungsten exists in the 4d electronic state at the surface, but at a depth of 10 nm both 4d and 4f electronic states were observed. Atomic force microscopy reveals nanosized particles and the porous structure of the film. This study shows that the e-beam evaporation technique produces nanoparticulate, porous WO3 films suitable for gas-sensing applications, and that doping with iron decreases the porosity and particle size, which can help improve gas selectivity.

Relevance:

10.00%

Publisher:

Abstract:

Osteoporosis is a disease characterized by low bone mass and micro-architectural deterioration of bone tissue, with a consequent increase in bone fragility and susceptibility to fracture. Osteoporosis affects over 200 million people worldwide, with an estimated 1.5 million fractures annually in the United States alone, and with attendant costs exceeding $10 billion per annum. Osteoporosis reduces bone density through a series of structural changes to the honeycomb-like trabecular bone structure (micro-structure). The reduced bone density, coupled with the microstructural changes, results in significant loss of bone strength and increased fracture risk. Vertebral compression fractures are the most common type of osteoporotic fracture and are associated with pain, increased thoracic curvature, reduced mobility, and difficulty with self-care. Surgical interventions, such as kyphoplasty or vertebroplasty, are used to treat osteoporotic vertebral fractures by restoring vertebral stability and alleviating pain. These minimally invasive procedures involve injecting bone cement into the fractured vertebrae. The techniques are still relatively new and, while initial results are promising, with the procedures relieving pain in 70-95% of cases, medium-term investigations are now indicating an increased risk of adjacent level fracture following the procedure. With the aging population, understanding and treatment of osteoporosis is an increasingly important public health issue in developed Western countries. The aim of this study was to investigate the biomechanics of spinal osteoporosis and osteoporotic vertebral compression fractures by developing multi-scale computational, Finite Element (FE) models of both healthy and osteoporotic vertebral bodies. The multi-scale approach included the overall vertebral body anatomy, as well as a detailed representation of the internal trabecular microstructure.
This novel, multi-scale approach overcame limitations of previous investigations by allowing simultaneous investigation of the mechanics of the trabecular micro-structure as well as overall vertebral body mechanics. The models were used to simulate the progression of osteoporosis, the effect of different loading conditions on vertebral strength and stiffness, and the effects of vertebroplasty on vertebral and trabecular mechanics. The model development process began with the development of an individual trabecular strut model using 3D beam elements, which was used as the building block for lattice-type, structural trabecular bone models, which were in turn incorporated into the vertebral body models. At each stage of model development, model predictions were compared to analytical solutions and in-vitro data from existing literature. The incremental process provided confidence in the predictions of each model before incorporation into the overall vertebral body model. The trabecular bone model, vertebral body model and vertebroplasty models were validated against in-vitro data from a series of compression tests performed using human cadaveric vertebral bodies. Firstly, trabecular bone samples were acquired and morphological parameters for each sample were measured using high-resolution micro-computed tomography (micro-CT). Apparent mechanical properties for each sample were then determined using uniaxial compression tests. Bone tissue properties were inversely determined using voxel-based FE models based on the micro-CT data. Specimen-specific trabecular bone models were developed and the predicted apparent stiffness and strength were compared to the experimentally measured apparent stiffness and strength of the corresponding specimen. Following the trabecular specimen tests, a series of 12 whole cadaveric vertebrae were then divided into treated and non-treated groups and vertebroplasty performed on the specimens of the treated group.
The vertebrae in both groups underwent clinical-CT scanning and destructive uniaxial compression testing. Specimen specific FE vertebral body models were developed and the predicted mechanical response compared to the experimentally measured responses. The validation process demonstrated that the multi-scale FE models comprising a lattice network of beam elements were able to accurately capture the failure mechanics of trabecular bone; and a trabecular core represented with beam elements enclosed in a layer of shell elements to represent the cortical shell was able to adequately represent the failure mechanics of intact vertebral bodies with varying degrees of osteoporosis. Following model development and validation, the models were used to investigate the effects of progressive osteoporosis on vertebral body mechanics and trabecular bone mechanics. These simulations showed that overall failure of the osteoporotic vertebral body is initiated by failure of the trabecular core, and the failure mechanism of the trabeculae varies with the progression of osteoporosis; from tissue yield in healthy trabecular bone, to failure due to instability (buckling) in osteoporotic bone with its thinner trabecular struts. The mechanical response of the vertebral body under load is highly dependent on the ability of the endplates to deform to transmit the load to the underlying trabecular bone. The ability of the endplate to evenly transfer the load through the core diminishes with osteoporosis. Investigation into the effect of different loading conditions on the vertebral body found that, because the trabecular bone structural changes which occur in osteoporosis result in a structure that is highly aligned with the loading direction, the vertebral body is consequently less able to withstand non-uniform loading states such as occurs in forward flexion. 
Changes in vertebral body loading due to disc degeneration were simulated, but proved to have little effect on osteoporotic vertebra mechanics. Conversely, differences in vertebral body loading between simulated in-vivo (uniform endplate pressure) and in-vitro conditions (where the vertebral endplates are rigidly cemented) had a dramatic effect on the predicted vertebral mechanics. This investigation suggested that in-vitro loading using bone cement potting of both endplates has major limitations in its ability to represent vertebral body mechanics in-vivo. Lastly, an FE investigation into the biomechanical effect of vertebroplasty was performed. The results of this investigation demonstrated that the effect of vertebroplasty on overall vertebra mechanics is strongly governed by the cement distribution achieved within the trabecular core. In agreement with a recent study, the models predicted that vertebroplasty cement distributions which do not form one continuous mass which contacts both endplates have little effect on vertebral body stiffness or strength. In summary, this work presents the development of a novel, multi-scale Finite Element model of the osteoporotic vertebral body, which provides a powerful new tool for investigating the mechanics of osteoporotic vertebral compression fractures at the trabecular bone micro-structural level, and at the vertebral body level.
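The reported shift in trabecular failure mode, from tissue yield in healthy bone to elastic buckling in thin osteoporotic struts, follows from elementary beam theory: the Euler buckling load falls with the fourth power of strut radius while the yield load falls only with the square. The sketch below compares the two for an idealised pin-ended cylindrical strut; the tissue modulus, yield stress and dimensions are assumed round numbers, not values from the thesis.

```python
import math

def failure_mode(r, L=1.0e-3, E=10e9, sigma_y=100e6):
    """Compare the Euler buckling load with the yield load for an idealised
    pin-ended cylindrical trabecular strut of radius r and length L.
    E (tissue modulus) and sigma_y (tissue yield stress) are assumed
    illustrative values, not properties measured in this work."""
    A = math.pi * r ** 2         # cross-sectional area
    I = math.pi * r ** 4 / 4.0   # second moment of area
    P_buckle = math.pi ** 2 * E * I / L ** 2   # Euler critical load
    P_yield = sigma_y * A                      # load at tissue yield
    return "buckling" if P_buckle < P_yield else "yield"
```

With these assumptions, a 100 µm radius strut fails by tissue yield, while a 30 µm strut of the same length buckles first, consistent with the trend the simulations describe.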

Relevance:

10.00%

Publisher:

Abstract:

We conduct a detailed numerical investigation of a nanomanipulation and nanofabrication technique: thermal tweezers with dynamic evolution of the surface temperature, caused by absorption of interfering laser pulses in a thin metal film or any other absorbing surface. This technique uses random Brownian forces in the presence of strong temperature modulation (surface thermophoresis) for effective manipulation of particles/adatoms with nanoscale resolution. Substantial redistribution of particles on the surface is shown to occur, with a typical size of the obtained pattern elements of ∼100 nm, which is significantly smaller than the wavelength of the incident pulses used (532 nm). It is also demonstrated that thermal tweezers based on surface thermophoresis of particles/adatoms are much more effective in achieving permanent high maximum-to-minimum concentration ratios than bulk thermophoresis, which is explained by the interaction of diffusing particles with the periodic lattice potential on the surface. Typically required pulse regimes, including pulse lengths and energies, are also determined. The approach is applicable for reproducing any holographically achievable surface patterns, and can thus be used for engineering the properties of surfaces, including nanopatterning and the design of surface metamaterials.
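The mechanism, enhanced Brownian hopping out of hot regions leading to accumulation in cold regions, can be caricatured with a 1-D Arrhenius hopping model. Everything here (lattice size, temperatures, activation energy, step counts) is an illustrative assumption, not the paper's full treatment of laser-driven surface temperature dynamics.

```python
import math
import random

def simulate_thermophoresis(n_particles=500, n_sites=20, steps=200000,
                            T_hot=600.0, T_cold=300.0, Ea_over_k=900.0,
                            seed=1):
    """Toy 1-D hopping model of surface thermophoresis: particles on a
    periodic lattice hop left/right with an Arrhenius probability set by a
    sinusoidal surface temperature. Hops out of hot sites are more frequent,
    so particles pile up where the surface is cold. All parameter values
    are illustrative, not taken from the paper."""
    rng = random.Random(seed)
    # Sinusoidal temperature: hottest at site 0, coldest at site n_sites//2
    T = [0.5 * (T_hot + T_cold) + 0.5 * (T_hot - T_cold)
         * math.cos(2 * math.pi * i / n_sites) for i in range(n_sites)]
    pos = [rng.randrange(n_sites) for _ in range(n_particles)]
    for _ in range(steps):
        k = rng.randrange(n_particles)
        i = pos[k]
        # Arrhenius escape probability from the local temperature
        if rng.random() < math.exp(-Ea_over_k / T[i]):
            pos[k] = (i + rng.choice((-1, 1))) % n_sites
    counts = [0] * n_sites
    for i in pos:
        counts[i] += 1
    return counts, T
```

For this dynamics the stationary occupancy of a site scales as exp(Ea/kT), so the cold sites end up several times more populated than the hot ones, which is the concentration-contrast effect the paper quantifies.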

Relevance:

10.00%

Publisher:

Abstract:

Unmanned Aerial Vehicles (UAVs) are emerging as an ideal platform for a wide range of civil applications such as disaster monitoring, atmospheric observation and outback delivery. However, the operation of UAVs is currently restricted to specially segregated regions of airspace outside of the National Airspace System (NAS). Mission Flight Planning (MFP) is an integral part of UAV operation that addresses some of the requirements (such as safety and the rules of the air) of integrating UAVs in the NAS. Automated MFP is a key enabler for a number of UAV operating scenarios as it aids in increasing the level of onboard autonomy. For example, onboard MFP is required to ensure continued conformance with the NAS integration requirements when there is an outage in the communications link. MFP is a motion planning task concerned with finding a path between a designated start waypoint and goal waypoint. This path is described with a sequence of 4 Dimensional (4D) waypoints (three spatial and one time dimension) or equivalently with a sequence of trajectory segments (or tracks). It is necessary to consider the time dimension as the UAV operates in a dynamic environment. Existing methods for generic motion planning, UAV motion planning and general vehicle motion planning cannot adequately address the requirements of MFP. The flight plan needs to optimise for multiple decision objectives including mission safety objectives, the rules of the air and mission efficiency objectives. Online (in-flight) replanning capability is needed as the UAV operates in a large, dynamic and uncertain outdoor environment. This thesis derives a multi-objective 4D search algorithm entitled Multi-Step A* (MSA*) based on the seminal A* search algorithm. MSA* is proven to find the optimal (least cost) path given a variable successor operator (which enables arbitrary track angle and track velocity resolution).
Furthermore, it is shown to be of comparable complexity to multi-objective, vector neighbourhood based A* (Vector A*, an extension of A*). A variable successor operator enables the imposition of a multi-resolution lattice structure on the search space (which results in fewer search nodes). Unlike cell decomposition based methods, soundness is guaranteed with multi-resolution MSA*. MSA* is demonstrated through Monte Carlo simulations to be computationally efficient. It is shown that multi-resolution, lattice-based MSA* finds paths of equivalent cost (less than 0.5% difference) to Vector A* (the benchmark) in a third of the computation time (on average). This is the first contribution of the research. The second contribution is the discovery of the additive consistency property for planning with multiple decision objectives. Additive consistency ensures that the planner is not biased (which results in a suboptimal path) by ensuring that the cost of traversing a track using one step equals that of traversing the same track using multiple steps. MSA* mitigates uncertainty through online replanning, Multi-Criteria Decision Making (MCDM) and tolerance. Each trajectory segment is modelled with a cell sequence that completely encloses the trajectory segment. The tolerance, measured as the minimum distance between the track and cell boundaries, is the third major contribution. Even though MSA* is demonstrated for UAV MFP, it is extensible to other 4D vehicle motion planning applications. Finally, the research proposes a self-scheduling replanning architecture for MFP. This architecture replicates the decision strategies of human experts to meet the time constraints of online replanning. Based on a feedback loop, the proposed architecture switches between fast, near-optimal planning and optimal planning to minimise the need for hold manoeuvres.
The derived MFP framework is original and shown, through extensive verification and validation, to satisfy the requirements of UAV MFP. As MFP is an enabling factor for operation of UAVs in the NAS, the presented work is both original and significant.
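As a point of reference for the planners discussed above, a minimal A* on a fixed 8-connected lattice can be written in a few lines. This is the textbook baseline that vector-neighbourhood and multi-resolution variants such as Vector A* and MSA* generalise; the grid world, single cost objective and Euclidean heuristic below are purely illustrative, not the thesis implementation.

```python
import heapq
import math

def a_star(grid, start, goal):
    """Minimal A* on a fixed 8-connected lattice. grid[r][c] is True where a
    cell is blocked; start/goal are (row, col). Returns the least-cost path
    as a list of cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # admissible straight-line heuristic
        return math.hypot(p[0] - goal[0], p[1] - goal[1])

    g_best = {start: 0.0}
    parent = {start: None}
    open_set = [(h(start), start)]
    closed = set()
    while open_set:
        _, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                nb = (r + dr, c + dc)
                if not (0 <= nb[0] < rows and 0 <= nb[1] < cols):
                    continue
                if grid[nb[0]][nb[1]]:
                    continue
                ng = g_best[node] + math.hypot(dr, dc)  # step cost 1 or sqrt(2)
                if ng < g_best.get(nb, float("inf")):
                    g_best[nb] = ng
                    parent[nb] = node
                    heapq.heappush(open_set, (ng + h(nb), nb))
    return None
```

The fixed successor set (8 neighbours) is exactly what MSA*'s variable successor operator relaxes: it allows arbitrary track angles and multi-resolution step lengths instead of the hard-coded unit moves used here.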

Relevance:

10.00%

Publisher:

Abstract:

Transition metal oxides are functional materials that have advanced applications in many areas, because of their diverse properties (optical, electrical, magnetic, etc.), hardness, thermal stability and chemical resistance. Novel applications of the nanostructures of these oxides are attracting significant interest as new synthesis methods are developed and new structures are reported. Hydrothermal synthesis is an effective process to prepare various delicate structures of metal oxides on scales from a few to tens of nanometres, specifically the highly dispersed intermediate structures that are hardly obtainable through pyro-synthesis. In this thesis, a range of new metal oxide (stable and metastable titanate, niobate) nanostructures, namely nanotubes and nanofibres, was synthesised via a hydrothermal process. Further structural modifications were conducted, and potential applications in catalysis, photocatalysis, adsorption and the construction of ceramic membranes were studied. The morphology evolution during the hydrothermal reaction between Nb2O5 particles and concentrated NaOH was monitored. The study demonstrates that by optimising the reaction parameters (temperature, amount of reactants), one can obtain a variety of nanostructured solids, from intermediate-phase niobate bars and fibres to stable-phase cubes. Trititanate (Na2Ti3O7) nanofibres and nanotubes were obtained by the hydrothermal reaction between TiO2 powders or a titanium compound (e.g. TiOSO4·xH2O) and concentrated NaOH solution by controlling the reaction temperature and NaOH concentration. The trititanate possesses a layered structure, and the Na ions that exist between the negatively charged titanate layers are exchangeable with other metal ions or H+ ions. Ion exchange has a crucial influence on the phase transition of the exchanged products.
The exchange of the sodium ions in the titanate with H+ ions yields protonated titanate (H-titanate), and subsequent phase transformation of the H-titanate enables various TiO2 structures with retained morphology. H-titanate, in either nanofibre or nanotube form, can be converted to pure TiO2(B), pure anatase, or mixed TiO2(B) and anatase phases by controlled calcination, or by a two-step process of acid treatment and subsequent calcination. Controlled calcination of the sodium titanate, on the other hand, yields new titanate structures (metastable titanate with formula Na1.5H0.5Ti3O7, with retained fibril morphology) that can be used for the removal of radioactive ions and heavy metal ions from water. The structures and morphologies of the metal oxides were characterised by advanced techniques. Titania nanofibres of mixed anatase and TiO2(B) phases, pure anatase and pure TiO2(B) were obtained by calcining H-titanate nanofibres at different temperatures between 300 and 700 °C. The fibril morphology was retained after calcination, which is suitable for transmission electron microscopy (TEM) analysis. TEM analysis showed that in the mixed-phase structure the interfaces between the anatase and TiO2(B) phases are not random contacts between the engaged crystals of the two phases, but form from well-matched lattice planes of the two phases. For instance, the (101) planes of anatase and the (101) planes of TiO2(B) are similar in d-spacing (~0.18 nm), and they join together to form a stable interface. The interfaces between the two phases act as a one-way valve that permits the transfer of photogenerated charge from anatase to TiO2(B). This reduces the recombination of photogenerated electrons and holes in anatase, enhancing the activity for photocatalytic oxidation.
Therefore, the mixed-phase nanofibres exhibited higher photocatalytic activity for the degradation of sulforhodamine B (SRB) dye under ultraviolet (UV) light than nanofibres of either pure phase alone, or mechanical mixtures (which have no interfaces) of the two pure-phase nanofibres with a similar phase composition. This verifies the theory that the difference between the conduction band edges of the two phases may result in charge transfer from one phase to the other, which effectively separates the photogenerated charges and thus facilitates the redox reactions involving these charges. Such an interface structure facilitates charge transfer across the interfaces. The knowledge acquired in this study is important not only for the design of efficient TiO2 photocatalysts but also for understanding the photocatalysis process. Moreover, fibril titania photocatalysts have a great advantage over nanoparticles of the same scale when they are separated from a liquid for reuse by filtration, sedimentation or centrifugation. The surface structure of TiO2 also plays a significant role in catalysis and photocatalysis. Four types of large-surface-area TiO2 nanotubes with different phase compositions (labelled NTA, NTBA, NTMA and NTM) were synthesised by calcination and acid treatment of the H-titanate nanotubes. Using in situ FTIR emission spectroscopy (IES), the desorption and re-adsorption of surface OH-groups on the oxide surface can be tracked. In this work, the surface OH-group regeneration ability of the TiO2 nanotubes was investigated. The abilities of the four samples were distinctly different, in the order: NTA > NTBA > NTMA > NTM.
The same order was observed for the catalytic activity when the samples served as photocatalysts for the decomposition of the synthetic dye SRB under UV light, and as supports of gold (Au) catalysts (where gold particles were loaded by a colloid-based method) for the photodecomposition of formaldehyde under visible light and for the catalytic oxidation of CO at low temperatures. Therefore, the ability of TiO2 nanotubes to generate surface OH-groups is an indicator of catalytic activity. The reason behind the correlation is that oxygen vacancies at bridging O2- sites of the TiO2 surface can generate surface OH-groups, and these groups facilitate the adsorption and activation of O2 molecules, which is the key step of the oxidation reactions. A structure for the oxygen vacancies at bridging O2- sites is proposed. Also, a new mechanism for photocatalytic formaldehyde decomposition with the Au-TiO2 catalysts is proposed: the visible light absorbed by the gold nanoparticles, due to the surface plasmon resonance effect, induces transitions of the 6sp electrons of gold to higher energy levels. These energetic electrons can migrate to the conduction band of TiO2, where they are seized by oxygen molecules. Meanwhile, the gold nanoparticles capture electrons from the formaldehyde molecules adsorbed on them because of gold's high electronegativity. O2 adsorbed on the TiO2 support surface is the major electron acceptor; the more O2 adsorbed, the higher the oxidation activity the photocatalyst will exhibit. The last part of this thesis demonstrates two innovative applications of the titanate nanostructures. Firstly, trititanate and metastable titanate (Na1.5H0.5Ti3O7) nanofibres are used as intelligent absorbents for the removal of radioactive cations and heavy metal ions, utilising their ion-exchange ability, deformable layered structure and fibril morphology.
Environmental contamination with radioactive ions and heavy metal ions poses a serious threat to the health of a large part of the population. Treatment of the wastes is needed to produce a waste product suitable for long-term storage and disposal. The ion-exchange ability of the layered titanate structure permits adsorption of bivalent toxic cations (Sr2+, Ra2+, Pb2+) from aqueous solution. More importantly, the adsorption is irreversible: the deformation of the structure, induced by the strong interaction between the adsorbed bivalent cations and the negatively charged TiO6 octahedra, results in permanent entrapment of the toxic cations in the fibres, so that the toxic ions can be safely deposited. Compared to conventional clay and zeolite sorbents, the fibril absorbents have the great advantage that they can be readily dispersed into, and separated from, a liquid. Secondly, new-generation membranes were constructed by using large titanate and small γ-alumina nanofibres as intermediate and top layers, respectively, on a porous alumina substrate via a spin-coating process. Compared to conventional ceramic membranes constructed from spherical particles, a ceramic membrane constructed from fibres permits high flux because of the large porosity of its separation layers. The voids in the separation layer determine the selectivity and flux of a separation membrane. When the sizes of the voids are similar (which means a similar selectivity of the separation layer), the flux passing through the membrane increases with the volume of the voids, which are the filtration passages. For the ideal and simplest texture, a mesh constructed from nanofibres 10 nm thick with a uniform pore size of 60 nm, the porosity is greater than 73.5%. In contrast, the porosity of a separation layer that possesses the same pore size but is constructed from metal oxide spherical particles, as in conventional ceramic membranes, is 36% or less.
The membrane constructed from titanate nanofibres and a layer of randomly oriented alumina nanofibres was able to filter out 96.8% of latex spheres of 60 nm size, while maintaining a high flux rate of between 600 and 900 L m⁻² h⁻¹, more than 15 times higher than that of the conventional membrane reported in the most recent study.
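The two porosity figures quoted above can be reproduced with simple geometry. For an idealised square mesh of straight fibres, each unit cell is (pore + fibre) wide with a square opening of width pore, so the open-area fraction is (pore/(pore + fibre))²; this is a back-of-envelope check under that assumption, not the thesis's own calculation.

```python
def mesh_porosity(pore, fibre):
    """Open-area fraction of an idealised square fibre mesh: each unit cell
    is (pore + fibre) wide and contains a square opening `pore` wide."""
    return (pore / (pore + fibre)) ** 2

# Fibre mesh from the text: 10 nm fibres, 60 nm pores -> (60/70)^2 ~ 0.735,
# i.e. the "greater than 73.5%" porosity quoted above.
fibre_mesh = mesh_porosity(60.0, 10.0)

# For comparison, random close packing of equal spheres leaves roughly 36%
# voids regardless of sphere size, matching the spherical-particle figure.
```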

Relevance:

10.00%

Publisher:

Abstract:

In this thesis an investigation into theoretical models for the formation and interaction of nanoparticles is presented. The work presented includes a literature review of current models followed by a series of five chapters of original research. This thesis has been submitted in partial fulfilment of the requirements for the degree of doctor of philosophy by publication, and therefore each of the five chapters consists of a peer-reviewed journal article. The thesis is then concluded with a discussion of what has been achieved during the PhD candidature, the potential applications for this research and ways in which the research could be extended in the future. In this thesis we explore stochastic models pertaining to the interaction and evolution mechanisms of nanoparticles. In particular, we explore in depth the stochastic evaporation of molecules due to thermal activation and its ultimate effect on nanoparticle sizes and concentrations. Secondly, we analyse the thermal vibrations of nanoparticles suspended in a fluid and subject to standing oscillating drag forces (as would occur in a standing sound wave) and, finally, the motion of particles on lattice surfaces in the presence of high heat gradients. We have described in this thesis a number of new models for the description of multicompartment networks joined by multiple, stochastically evaporating links. The primary motivation for this work is the description of thermal fragmentation, in which multiple molecules holding parts of a carbonaceous nanoparticle may evaporate. Ultimately, these models predict the rate at which the network or aggregate fragments into smaller networks/aggregates and with what aggregate size distribution. The models are highly analytic and describe the fragmentation of a link holding multiple bonds using Markov processes that best describe different physical situations, and these processes have been analysed using a number of mathematical methods.
The fragmentation of the network/aggregate is then predicted using combinatorial arguments. Whilst there is some scepticism in the scientific community pertaining to the proposed mechanism of thermal fragmentation, we have presented compelling evidence in this thesis supporting the currently proposed mechanism and shown that our models can accurately match experimental results. This was achieved using a realistic simulation of the fragmentation of the fractal carbonaceous aggregate structure using our models. Furthermore, in this thesis a method of manipulation using acoustic standing waves is investigated. In our investigation we analysed the effect of frequency and particle size on the ability of the particle to be manipulated by means of a standing acoustic wave. In our results, we report the existence of a critical frequency for a particular particle size. This frequency is inversely proportional to the Stokes time of the particle in the fluid. We also find that for large frequencies the subtle Brownian motion of even larger particles plays a significant role in the efficacy of the manipulation. This is due to the decreasing size of the boundary layer between acoustic nodes. Our model utilises a multiple-time-scale approach to calculating the long-term effects of the standing acoustic field on the particles interacting with the sound. These effects are then combined with the effects of Brownian motion in order to obtain a complete mathematical description of the particle dynamics in such acoustic fields. Finally, in this thesis, we develop a numerical routine for the description of "thermal tweezers". Currently, the technique of thermal tweezers is predominantly theoretical; however, there has been a handful of successful experiments which demonstrate the effect in practice. Thermal tweezers is the name given to the way in which particles can be easily manipulated on a lattice surface by careful selection of a heat distribution over the surface.
Typically, theoretical simulations of the effect can be rather time consuming, with supercomputer facilities processing data over days or even weeks. Our alternative numerical method for the simulation of particle distributions pertaining to the thermal tweezers effect uses the Fokker-Planck equation to derive a quick numerical method for the calculation of the effective diffusion constant resulting from the lattice and the temperature. We then use this diffusion constant and solve the diffusion equation numerically using the finite volume method. This saves the algorithm from calculating many individual particle trajectories, since it describes the flow of the probability distribution of particles in a continuous manner. The alternative method outlined in this thesis can produce a larger quantity of accurate results on a household PC in a matter of hours, which is much better than was previously achievable.
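The second stage of the scheme described above, feeding a position-dependent effective diffusion constant into a finite-volume solver for the probability density, can be sketched in 1-D. The grid, zero-flux boundaries and parameter values in this snippet are illustrative assumptions, not the thesis code.

```python
def diffuse_fv(c, D, dx, dt, steps):
    """Explicit finite-volume update for dc/dt = d/dx( D(x) dc/dx ) on a 1-D
    grid with zero-flux walls. `c` holds the cell-averaged density and `D`
    an effective diffusion coefficient per cell (e.g. one derived from the
    lattice potential and local temperature, as described above)."""
    n = len(c)
    c = list(c)
    for _ in range(steps):
        flux = [0.0] * (n + 1)  # flux[i] crosses the face between cells i-1, i
        for i in range(1, n):
            D_face = 0.5 * (D[i - 1] + D[i])      # face-averaged coefficient
            flux[i] = -D_face * (c[i] - c[i - 1]) / dx
        # Conservative update: what leaves one cell enters its neighbour
        c = [c[i] - dt / dx * (flux[i + 1] - flux[i]) for i in range(n)]
    return c
```

Because the update is written in terms of face fluxes, total probability is conserved to machine precision; the explicit step is stable provided D·dt/dx² ≤ 1/2 in every cell.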

Relevance:

10.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project them onto the lattice outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of the reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of the reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Hydrogel polymers are used for the manufacture of soft (or disposable) contact lenses worldwide today, but have a tendency to dehydrate on the eye. In vitro methods that can probe the potential for a given hydrogel polymer to dehydrate in vivo are much sought after. Nuclear magnetic resonance (NMR) has been shown to be effective in characterising water mobility and binding in similar systems (Barbieri, Quaglia et al., 1998, Larsen, Huff et al., 1990, Peschier, Bouwstra et al., 1993), predominantly through measurement of the spin-lattice relaxation time (T1), the spin-spin relaxation time (T2) and the water diffusion coefficient (D). The aim of this work was to use NMR to quantify the molecular behaviour of water in a series of commercially available contact lens hydrogels, and relate these measurements to the binding and mobility of the water, and ultimately the potential for the hydrogel to dehydrate.

As a preliminary study, in vitro evaporation rates were measured for a set of commercial contact lens hydrogels. Following this, comprehensive measurement of the temperature and water content dependencies of T1, T2 and D was performed for a series of commercial hydrogels that spanned the spectrum of equilibrium water content (EWC) and common compositions of contact lenses that are manufactured today. To quantify material differences, the data were then modelled based on theory that had been used for similar systems in the literature (Walker, Balmer et al., 1989, Hills, Takacs et al., 1989). The differences were related to differences in water binding and mobility.

The evaporative results suggested that the EWC of the material was important in determining a material's potential to dehydrate in this way. Similarly, the NMR water self-diffusion coefficient was also found to be largely (if not wholly) determined by the water content (WC).
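Relaxation analyses of this kind are commonly built on population-weighted averaging of relaxation rates between a mobile ("free") water environment and a more strongly bound or exchanging one. The following is a minimal two-site fast-exchange sketch only; the function name, the single bound fraction, and the numeric T2 values in the usage are simplifying assumptions, not the full proton exchange model used in this work:

```python
def observed_t2(p_bound, t2_bound, t2_free):
    # Fast-exchange limit: the observed relaxation rate is the
    # population-weighted average of the site rates,
    #   1 / T2_obs = p_bound / T2_bound + (1 - p_bound) / T2_free
    return 1.0 / (p_bound / t2_bound + (1.0 - p_bound) / t2_free)
```

As the bound fraction falls, as it would with increasing EWC, the observed T2 approaches that of free water, consistent with the trend of average water mobility reported here.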
A specific binding model confirmed that the WC was the dominant factor in determining the diffusive behaviour, but also suggested that subtle differences existed between the materials used, based on their equilibrium WC (EWC). However, an alternative modified free volume model suggested that only the current water content of the material was important in determining the diffusive behaviour, and not the equilibrium water content.

It was shown that T2 relaxation was dominated by chemical exchange between water and exchangeable polymer protons for materials that contained exchangeable polymer protons. The data were analysed using a proton exchange model, and the results were again reasonably correlated with EWC. Specifically, it was found that the average water mobility increased with increasing EWC, approaching that of free water. The T1 relaxation was also shown to be reasonably well described by the same model.

The main conclusion that can be drawn from this work is that the hydrogel EWC is an important parameter, which largely determines the behaviour of water in the gel. A higher EWC results in a hydrogel with water that behaves more like bulk water on average, or is less strongly 'bound' on average, compared with a lower EWC material. Based on the set of materials used, significant differences due to composition (for materials of the same or similar water content) could not be found. Similar studies could be used in the future to highlight hydrogels that deviate significantly from this 'average' behaviour, and may therefore have the least/greatest potential to dehydrate on the eye.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the zeroes recorded. These may represent zero response given some threshold (presence) or that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of this.

Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for parameters in these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy.
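The binary MRF layer lends itself to single-site Gibbs sampling, because the autologistic full conditionals are logistic in the neighbour sum. Below is a minimal sketch for a two-parameter (α, β) autologistic model on a square lattice; the three-parameter version discussed in the thesis would split the interaction by direction, and the function name, free boundaries, and sweep scheme are assumptions:

```python
import math, random

def gibbs_autologistic(n, alpha, beta, sweeps, seed=0):
    # Single-site Gibbs sampler for a binary autologistic MRF on an n x n
    # lattice with first-order (4-neighbour) interactions, free boundaries.
    # Full conditional: P(x_ij = 1 | rest) = logistic(alpha + beta * s_ij),
    # where s_ij counts the occupied neighbours of site (i, j).
    rng = random.Random(seed)
    x = [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                s = sum(x[a][b]
                        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                        if 0 <= a < n and 0 <= b < n)
                p = 1.0 / (1.0 + math.exp(-(alpha + beta * s)))
                x[i][j] = 1 if rng.random() < p else 0
    return x
```

In the hierarchical setting this update is interleaved with Metropolis steps for (α, β) and the data-layer regression parameters, which is the hybrid Metropolis/Gibbs structure described above.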
This requires evaluation of a normalization constant, a notoriously difficult problem. Difficulty with estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea present, though not fully developed, in the literature and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7.

Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer. A major contribution of the thesis is the development of a fully Bayesian approach to inference for these hierarchical models for the first time.

Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
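The ISMC identity log(Z₁/Z₀) = log E₀[f₁(X)/f₀(X)] can be validated on a lattice small enough that the normalization constant is available by brute-force enumeration. A sketch for a 2 x 2 binary autologistic model follows; the toy edge set and parameter names are assumptions, and real applications preclude the exact enumeration used here only for checking:

```python
import itertools, math, random

EDGES = [(0, 1), (2, 3), (0, 2), (1, 3)]  # 2 x 2 lattice, 4-neighbour pairs

def unnorm(x, alpha, beta):
    # Unnormalized autologistic density:
    # f(x) = exp(alpha * sum_i x_i + beta * sum_{i~j} x_i x_j)
    return math.exp(alpha * sum(x) + beta * sum(x[i] * x[j] for i, j in EDGES))

def exact_log_nc(alpha, beta):
    # Brute-force normalization constant; feasible only for tiny lattices.
    return math.log(sum(unnorm(x, alpha, beta)
                        for x in itertools.product((0, 1), repeat=4)))

def ismc_log_nc_ratio(alpha, beta0, beta1, n=20000, seed=1):
    # ISMC: log(Z(beta1) / Z(beta0)) = log E_{p0}[ f1(X) / f0(X) ],
    # with X drawn from the normalized model at beta0.
    rng = random.Random(seed)
    states = list(itertools.product((0, 1), repeat=4))
    weights = [unnorm(x, alpha, beta0) for x in states]
    draws = rng.choices(states, weights=weights, k=n)  # exact draws from p0
    mean_ratio = sum(unnorm(x, alpha, beta1) / unnorm(x, alpha, beta0)
                     for x in draws) / n
    return math.log(mean_ratio)
```

ISMC degrades quickly when the two densities overlap poorly, which is exactly the regime where path sampling methods such as IMCS, integrating over a path of intermediate parameter values, pay off.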

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Keyword spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting.

The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short duration target word class.

The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds.

The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers.
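At the core of dynamic sequence matching is a dynamic-programming alignment between the target phone sequence and a decoded phone sequence, tolerating a bounded number of recognition errors. The sketch below is reduced from lattices to a single observed sequence, with uniform substitution/insertion/deletion costs; the published method operates over lattice realisations, and the names, costs, and threshold rule here are assumptions:

```python
def min_edit_cost(target, observed, sub=1.0, ins=1.0, dele=1.0):
    # d[i][j] = cheapest alignment of target[:i] with observed[:j]
    # using substitution, insertion, and deletion costs.
    m, n = len(target), len(observed)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0.0 if target[i - 1] == observed[j - 1] else sub
            d[i][j] = min(d[i - 1][j - 1] + match,
                          d[i][j - 1] + ins,
                          d[i - 1][j] + dele)
    return d[m][n]

def spot(target, observed, threshold):
    # Hypothesise the keyword when the alignment cost is low enough.
    return min_edit_cost(target, observed) <= threshold
```

Allowing a nonzero threshold is what provides robustness to erroneous phone realisations: a decoding that confuses one phone still scores within reach of the target sequence.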
The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Cell invasion involves a population of cells which are motile and proliferative. Traditional discrete models of proliferation involve agents depositing daughter agents on nearest-neighbor lattice sites. Motivated by time-lapse images of cell invasion, we propose and analyze two new discrete proliferation models in the context of an exclusion process with an undirected motility mechanism. These discrete models are related to a family of reaction-diffusion equations and can be used to make predictions over a range of scales appropriate for interpreting experimental data. The new proliferation mechanisms are biologically relevant and mathematically convenient as the continuum-discrete relationship is more robust for the new proliferation mechanisms relative to traditional approaches.
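A discrete motility-plus-proliferation exclusion process of this general kind can be simulated directly. The following one-dimensional sketch uses nearest-neighbour movement and nearest-neighbour daughter deposition; the parameters, random sequential update order, and 1-D restriction are assumptions for illustration, not the specific new proliferation mechanisms proposed in the paper:

```python
import random

def simulate(width, n_agents, p_move, p_prolif, steps, seed=0):
    # 1-D exclusion process: in each step every agent (in random order)
    # attempts a nearest-neighbour move with probability p_move and then
    # attempts to deposit a daughter on a nearest-neighbour site with
    # probability p_prolif; any attempt onto an occupied site is aborted.
    rng = random.Random(seed)
    occ = [False] * width
    for site in rng.sample(range(width), n_agents):
        occ[site] = True
    for _ in range(steps):
        agents = [i for i, o in enumerate(occ) if o]
        rng.shuffle(agents)
        for i in agents:
            if rng.random() < p_move:
                t = i + rng.choice((-1, 1))
                if 0 <= t < width and not occ[t]:
                    occ[i], occ[t] = False, True
                    i = t
            if rng.random() < p_prolif:
                t = i + rng.choice((-1, 1))
                if 0 <= t < width and not occ[t]:
                    occ[t] = True
    return occ
```

Averaging many such realisations over a fine lattice is what connects this agent-level description to the reaction-diffusion continuum limit discussed above.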

Relevância:

10.00% 10.00%

Publicador:

Resumo:

Exclusion processes on a regular lattice are used to model many biological and physical systems at a discrete level. The average properties of an exclusion process may be described by a continuum model given by a partial differential equation. We combine a general class of contact interactions with an exclusion process. We determine that many different types of contact interactions at the agent level always give rise to a nonlinear diffusion equation, with a vast variety of diffusion functions D(C). We find that these functions may be dependent on the chosen lattice and the defined neighborhood of the contact interactions. Mild to moderate contact interaction strength generally results in good agreement between discrete and continuum models, while strong interactions often show discrepancies between the two, particularly when D(C) takes on negative values. We present a measure to predict the goodness of fit between the discrete and continuum models, and thus the validity of the continuum description of a motile, contact-interacting population of agents. This work has implications for modeling cell motility and interpreting cell motility assays, giving the ability to incorporate biologically realistic cell-cell interactions and develop global measures of discrete microscopic data.
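The continuum limit here is a nonlinear diffusion equation, ∂C/∂t = ∂/∂x(D(C) ∂C/∂x), which can be integrated with a conservative explicit scheme for any candidate diffusivity. A minimal 1-D sketch follows; the particular D(C) in the usage is hypothetical, and a sufficiently small time step is assumed for stability:

```python
def diffusion_step(c, D, dt, dx):
    # One explicit conservative step of  dC/dt = d/dx( D(C) dC/dx )  on a
    # 1-D cell grid with zero-flux boundaries; D is evaluated at the
    # midpoint value of C on each interior face.
    n = len(c)
    flux = [0.0] * (n + 1)  # flux[k] crosses the face between cells k-1 and k
    for k in range(1, n):
        mid = 0.5 * (c[k - 1] + c[k])
        flux[k] = -D(mid) * (c[k] - c[k - 1]) / dx
    return [c[k] - dt / dx * (flux[k + 1] - flux[k]) for k in range(n)]
```

Because updates are written as flux differences, total mass is conserved exactly, and a D(C) that dips negative (the strong-interaction regime noted above) makes the scheme, like the PDE itself, ill-behaved there.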