570 results for lattice Boltzmann method


Relevance: 20.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
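
As background to the coefficient model described above, the following is a minimal Python sketch of fitting a generalized Gaussian model to a wavelet subband. For brevity it uses the standard moment-ratio estimator for the shape parameter rather than the nonlinear least-squares formulation developed in the thesis; the function names and the synthetic Laplacian data are illustrative only.

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_shape_from_moments(coeffs):
    """Estimate the generalized Gaussian shape parameter beta for a zero-mean
    subband by inverting the moment ratio
    r(beta) = E|x| / sqrt(E[x^2]) = Gamma(2/beta) / sqrt(Gamma(1/beta) * Gamma(3/beta))."""
    x = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(x)) / np.sqrt(np.mean(x ** 2))
    ratio = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    return brentq(ratio, 0.05, 10.0)   # beta = 2 recovers the Gaussian case

def ggd_scale(coeffs, beta):
    """Scale parameter alpha of p(x) ~ exp(-|x/alpha|^beta), matched to the variance."""
    var = np.var(np.asarray(coeffs, dtype=float))
    return np.sqrt(var * gamma(1.0 / beta) / gamma(3.0 / beta))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    subband = rng.laplace(scale=2.0, size=50_000)   # Laplacian data: expect beta near 1
    beta = ggd_shape_from_moments(subband)
    print(f"estimated shape beta = {beta:.3f}, scale alpha = {ggd_scale(subband, beta):.3f}")
```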

Relevance: 20.00%

Abstract:

This dissertation is primarily an applied statistical modelling investigation, motivated by a case study comprising real data and real questions. Theoretical questions on modelling and computation of normalization constants arose from pursuit of these data analytic questions. The essence of the thesis can be described as follows. Consider binary data observed on a two-dimensional lattice. A common problem with such data is the ambiguity of the recorded zeroes: a zero may represent a response below some threshold (presence) or indicate that the threshold has not been triggered (absence). Suppose that the researcher wishes to estimate the effects of covariates on the binary responses, whilst taking into account underlying spatial variation, which is itself of some interest. This situation arises in many contexts, and the dingo, cypress and toad case studies described in the motivation chapter are examples of it. Two main approaches to modelling and inference are investigated in this thesis. The first is frequentist and based on generalized linear models, with spatial variation modelled by using a block structure or by smoothing the residuals spatially. The EM algorithm can be used to obtain point estimates, coupled with bootstrapping or asymptotic MLE estimates for standard errors. The second approach is Bayesian and based on a three- or four-tier hierarchical model, comprising a logistic regression with covariates for the data layer, a binary Markov random field (MRF) for the underlying spatial process, and suitable priors for the parameters of these main models. The three-parameter autologistic model is a particular MRF of interest. Markov chain Monte Carlo (MCMC) methods comprising hybrid Metropolis/Gibbs samplers are suitable for computation in this situation. Model performance can be gauged by MCMC diagnostics. Model choice can be assessed by incorporating another tier in the modelling hierarchy. This requires evaluation of a normalization constant, a notoriously difficult problem. The difficulty of estimating the normalization constant for the MRF can be overcome by using a path integral approach, although this is a highly computationally intensive method. Different methods of estimating ratios of normalization constants (NCs) are investigated, including importance sampling Monte Carlo (ISMC), dependent Monte Carlo based on MCMC simulations (MCMC), and reverse logistic regression (RLR). I develop an idea that is present, though not fully developed, in the literature, and propose the integrated mean canonical statistic (IMCS) method for estimating log NC ratios for binary MRFs. The IMCS method falls within the framework of the newly identified path sampling methods of Gelman & Meng (1998) and outperforms ISMC, MCMC and RLR. It also does not rely on simplifying assumptions, such as ignoring spatio-temporal dependence in the process. A thorough investigation is made of the application of IMCS to the three-parameter autologistic model. This work introduces background computations required for the full implementation of the four-tier model in Chapter 7. Two different extensions of the three-tier model to a four-tier version are investigated. The first extension incorporates temporal dependence in the underlying spatio-temporal process. The second extension allows the successes and failures in the data layer to depend on time. The MCMC computational method is extended to incorporate the extra layer.
A major contribution of the thesis is the development, for the first time, of a fully Bayesian approach to inference for these hierarchical models. Note: The author of this thesis has agreed to make it open access but invites people downloading the thesis to send her an email via the 'Contact Author' function.
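
To make the path-sampling idea concrete, here is a minimal Python sketch, under assumed forms rather than the thesis code, of estimating a log normalization-constant ratio for a two-parameter {0,1} autologistic model on a small torus lattice: Gibbs sampling estimates the mean canonical (sufficient) statistic over a grid of interaction values, and that curve is integrated numerically, which is the essence of the integrated mean canonical statistic approach.

```python
import numpy as np

def gibbs_sweeps(field, alpha, beta, sweeps, rng):
    """In-place Gibbs sweeps for a {0,1} autologistic model on a 4-neighbour torus:
    p(x) proportional to exp(alpha * sum_i x_i + beta * sum_{i~j} x_i x_j)."""
    n, m = field.shape
    for _ in range(sweeps):
        for i in range(n):
            for j in range(m):
                nb = (field[(i - 1) % n, j] + field[(i + 1) % n, j]
                      + field[i, (j - 1) % m] + field[i, (j + 1) % m])
                p1 = 1.0 / (1.0 + np.exp(-(alpha + beta * nb)))
                field[i, j] = rng.random() < p1
    return field

def pair_statistic(field):
    """Canonical statistic paired with beta: number of 1-1 neighbour pairs on the torus."""
    return float(np.sum(field * np.roll(field, 1, axis=0))
                 + np.sum(field * np.roll(field, 1, axis=1)))

def log_nc_ratio(alpha, beta0, beta1, shape=(12, 12), grid=11, burn=100, keep=100, seed=1):
    """Path-sampling estimate of log Z(beta1) - log Z(beta0) at fixed alpha: estimate the
    mean canonical statistic on a grid of beta values and integrate it numerically."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(beta0, beta1, grid)
    field = (rng.random(shape) < 0.5).astype(float)
    means = []
    for b in betas:                                   # warm-start the chain along the path
        gibbs_sweeps(field, alpha, b, burn, rng)
        stats = [pair_statistic(gibbs_sweeps(field, alpha, b, 1, rng)) for _ in range(keep)]
        means.append(np.mean(stats))
    means = np.array(means)
    return float(np.sum(np.diff(betas) * (means[:-1] + means[1:]) / 2.0))  # trapezoid rule

if __name__ == "__main__":
    print("estimated log NC ratio:", log_nc_ratio(alpha=-0.5, beta0=0.0, beta1=0.4))
```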

Relevance: 20.00%

Abstract:

Keyword spotting is the task of detecting keywords of interest within continuous speech. The applications of this technology range from call centre dialogue systems to covert speech surveillance devices. Keyword spotting is particularly well suited to data mining tasks such as real-time keyword monitoring and unrestricted vocabulary audio document indexing. However, to date, many keyword spotting approaches have suffered from poor detection rates, high false alarm rates, or slow execution times, thus reducing their commercial viability. This work investigates the application of keyword spotting to data mining tasks. The thesis makes a number of major contributions to the field of keyword spotting. The first major contribution is the development of a novel keyword verification method named Cohort Word Verification. This method combines high-level linguistic information with cohort-based verification techniques to obtain dramatic improvements in verification performance, in particular for the problematic short-duration target word class. The second major contribution is the development of a novel audio document indexing technique named Dynamic Match Lattice Spotting. This technique augments lattice-based audio indexing principles with dynamic sequence matching techniques to provide robustness to erroneous lattice realisations. The resulting algorithm obtains significant improvement in detection rate over lattice-based audio document indexing while still maintaining extremely fast search speeds. The third major contribution is the study of multiple verifier fusion for the task of keyword verification. The reported experiments demonstrate that substantial improvements in verification performance can be obtained through the fusion of multiple keyword verifiers. The research focuses on combinations of speech background model based verifiers and cohort word verifiers. The final major contribution is a comprehensive study of the effects of limited training data for keyword spotting. This study is performed with consideration as to how these effects impact the immediate development and deployment of speech technologies for non-English languages.
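
The dynamic sequence matching step can be illustrated with a small, hedged Python sketch: a weighted edit distance between a target phone sequence and a phone sequence hypothesised from the lattice, with low scores indicating putative keyword occurrences. The cost values and example phones are illustrative, not those used in the thesis.

```python
from typing import Sequence

def dynamic_match_score(target: Sequence[str], hypothesis: Sequence[str],
                        sub_cost: float = 1.0, ins_cost: float = 1.0,
                        del_cost: float = 1.0) -> float:
    """Weighted edit (Levenshtein) distance between a target phone sequence and a
    phone sequence read off a recognition lattice; lower scores mean a closer match."""
    n, m = len(target), len(hypothesis)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * del_cost
    for j in range(1, m + 1):
        d[0][j] = j * ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if target[i - 1] == hypothesis[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + match,   # substitution or exact match
                          d[i - 1][j] + del_cost,    # phone deleted from the hypothesis
                          d[i][j - 1] + ins_cost)    # phone inserted into the hypothesis
    return d[n][m]

if __name__ == "__main__":
    query = ["k", "ae", "t"]                       # target keyword as a phone sequence
    lattice_path = ["k", "ah", "t"]                # an erroneous lattice realisation
    print(dynamic_match_score(query, lattice_path))   # 1.0: one substitution away
```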

Relevance: 20.00%

Abstract:

The modal strain energy method, which relies on the vibration characteristics of the structure, has been reasonably successful in identifying and localising structural damage. However, existing strain energy methods require the first few modes to be measured to provide meaningful damage detection, and using individual modes with these methods may produce false alarms or fail to detect damage located at or near nodal points. This paper proposes a new modal strain energy based damage index that can detect and localise damage using any single measured mode, and illustrates its application to beam structures. It becomes evident that the proposed strain energy based damage index also has potential for damage quantification.
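
For context, here is a minimal Python sketch of a classical strain-energy damage index of the Stubbs/Kim type for a beam, evaluated from a single mode's curvatures. This is the conventional formulation, not the new index proposed in the paper, and the element discretisation and example data are illustrative assumptions.

```python
import numpy as np

def strain_energy_damage_index(curv_undamaged, curv_damaged, dx=1.0):
    """Classical modal-strain-energy damage index (Stubbs/Kim type) for beam elements,
    computed from one mode's curvature (second derivative of the mode shape) sampled
    element by element; beta_j > 1 flags element j as a damage candidate."""
    c0 = np.asarray(curv_undamaged, dtype=float) ** 2   # (phi'')^2, undamaged state
    c1 = np.asarray(curv_damaged, dtype=float) ** 2     # (phi*'')^2, damaged state
    total0 = np.sum(c0) * dx
    total1 = np.sum(c1) * dx
    beta = ((c1 * dx + total1) / (c0 * dx + total0)) * (total0 / total1)
    z = (beta - beta.mean()) / beta.std()               # normalised damage index
    return beta, z

if __name__ == "__main__":
    # Illustrative data: first bending mode of a simply supported beam, with a local
    # curvature increase near mid-span standing in for a localised stiffness loss.
    x = np.linspace(0.0, 1.0, 41)[1:-1]
    curv0 = np.sin(np.pi * x)            # curvature shape of mode 1 (up to sign/scale)
    curv1 = curv0.copy()
    curv1[18:21] *= 1.3                  # damage raises local curvature
    beta, z = strain_energy_damage_index(curv0, curv1, dx=x[1] - x[0])
    print("most suspect element:", int(np.argmax(z)))
```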

Relevance: 20.00%

Abstract:

The literature was reviewed and analyzed to determine the feasibility of using a combination of acid hydrolysis and CO2-C release during long-term incubation to determine soil organic carbon (SOC) pool sizes and mean residence times (MRTs). Analysis of 1100 data points showed that the SOC remaining after hydrolysis with 6 M HCl ranged from 30 to 80% of the total SOC, depending on soil type, depth, texture, and management. Nonhydrolyzable carbon (NHC) in conventional-till soils represented 48% of SOC; no-till averaged 56%, forest 55%, and grassland 56%. Carbon dating showed an average MRT 1200 yr greater for the NHC fraction than for total SOC. Long-term incubation, involving measurement of CO2 evolution and curve fitting, measured active and slow pools. Active-pool C comprised 2 to 8% of the SOC with MRTs of days to months; the slow pool comprised 45 to 65% of the SOC and had MRTs of 10 to 80 yr. Comparison of field C-14 and C-13 data with hydrolysis-incubation data showed a high correlation between the independent techniques across soil types and experiments. There were large differences in MRTs depending on the length of the experiment. Insertion of hydrolysis-incubation derived estimates of the active (C-a), slow (C-s), and resistant (C-r) pools into the DAYCENT model provided estimates of daily field CO2 evolution rates, which were well correlated with field CO2 measurements. Although not without some interpretation problems, acid hydrolysis combined with laboratory incubation is useful for determining SOC pools and fluxes, especially when used in combination with associated measurements.
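
To illustrate the incubation curve-fitting step, here is a hedged Python sketch that fits a two-pool (active and slow) first-order model to cumulative CO2-C evolution data. The pool structure follows the description above, but the synthetic data, parameter names and starting values are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def two_pool_cumulative(t, c_active, k_active, c_slow, k_slow):
    """Cumulative CO2-C evolved (mg C per g soil) from first-order decay of an active
    pool and a slow pool: C(t) = Ca*(1 - exp(-ka*t)) + Cs*(1 - exp(-ks*t))."""
    return (c_active * (1.0 - np.exp(-k_active * t))
            + c_slow * (1.0 - np.exp(-k_slow * t)))

if __name__ == "__main__":
    # Illustrative incubation data: days of incubation vs cumulative CO2-C evolved.
    days = np.array([0, 7, 14, 28, 56, 112, 224, 365, 550, 800], dtype=float)
    true = two_pool_cumulative(days, 0.6, 0.08, 9.0, 0.002)
    rng = np.random.default_rng(0)
    observed = true + rng.normal(scale=0.05, size=days.size)

    p0 = [0.5, 0.05, 5.0, 0.001]                  # starting guesses: Ca, ka, Cs, ks
    popt, _ = curve_fit(two_pool_cumulative, days, observed, p0=p0,
                        bounds=(0, [np.inf, 1.0, np.inf, 0.1]))
    c_a, k_a, c_s, k_s = popt
    print(f"active pool: {c_a:.2f} mg C/g, MRT ~ {1.0 / k_a:.0f} d")
    print(f"slow pool:  {c_s:.2f} mg C/g, MRT ~ {1.0 / k_s:.0f} d")
```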

Relevance: 20.00%

Abstract:

Grassland management affects soil organic carbon (SOC) storage and can be used to mitigate greenhouse gas emissions. However, for a country to assess emission reductions due to grassland management, there must be an inventory method for estimating the change in SOC storage. The Intergovernmental Panel on Climate Change (IPCC) has developed a simple carbon accounting approach for this purpose, and here we derive new grassland management factors that represent the effect of changing management on carbon storage for this method. Our literature search identified 49 studies dealing with effects of management practices that either degraded or improved conditions relative to nominally managed grasslands. On average, degradation reduced SOC storage to 95% +/- 0.06 and 97% +/- 0.05 of the carbon stored under nominal conditions in temperate and tropical regions, respectively. In contrast, improving grasslands with a single management activity enhanced SOC storage by 14% +/- 0.06 and 17% +/- 0.05 in temperate and tropical regions, respectively, and with additional improvements, storage increased by another 11% +/- 0.04. We applied the newly derived factor coefficients to analyze the C sequestration potential of managed grasslands in the U.S., and found that over a 20-year period changing management could sequester from 5 to 142 Tg C yr(-1), or 0.1 to 0.9 Mg C ha(-1) yr(-1), depending on the level of change. This analysis provides revised factor coefficients for the IPCC method that can be used to estimate impacts of management; it also provides a methodological framework for countries to derive factor coefficients specific to conditions in their region.
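
A hedged Python sketch of the IPCC-style stock-change calculation that such factor coefficients feed into is given below; the reference stock, factor values and area are placeholders, not the coefficients derived in the paper.

```python
def ipcc_soc_change(soc_ref, f_lu, f_mg_old, f_mg_new, f_i=1.0,
                    area_ha=1.0, period_yr=20.0):
    """IPCC-style SOC stock-change estimate for a grassland management change.

    Equilibrium stock = reference stock (t C/ha) x land-use factor x management
    factor x input factor x area; the annual change is the difference between the
    new and old equilibria spread over the default transition period."""
    soc_old = soc_ref * f_lu * f_mg_old * f_i * area_ha
    soc_new = soc_ref * f_lu * f_mg_new * f_i * area_ha
    return (soc_new - soc_old) / period_yr        # t C per year

if __name__ == "__main__":
    # Placeholder example: improving 1000 ha of nominally managed temperate grassland,
    # using an illustrative improved-management factor of 1.14 (cf. the 14% figure above).
    d_soc = ipcc_soc_change(soc_ref=80.0, f_lu=1.0, f_mg_old=1.0,
                            f_mg_new=1.14, area_ha=1000.0)
    print(f"annual stock change ~ {d_soc:.0f} t C per year over 20 years")
```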

Relevance: 20.00%

Abstract:

Fractional Fokker-Planck equations (FFPEs) have gained much interest recently for describing transport dynamics in complex systems that are governed by anomalous diffusion and nonexponential relaxation patterns. However, effective numerical methods and analytic techniques for the FFPE are still in their embryonic state. In this paper, we consider a class of time-space fractional Fokker-Planck equations with a nonlinear source term (TSFFPE-NST), which involve the Caputo time fractional derivative (CTFD) of order α ∈ (0, 1) and the symmetric Riesz space fractional derivative (RSFD) of order μ ∈ (1, 2). By approximating the CTFD with the L1 algorithm and the RSFD with the shifted Grünwald method, a computationally effective numerical method is presented for solving the TSFFPE-NST. The stability and convergence of the proposed numerical method are investigated. Finally, numerical experiments are carried out to support the theoretical claims.
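
As a concrete illustration of the two approximations named above, here is a short Python sketch that generates the L1 weights for the Caputo derivative of order α ∈ (0, 1) and the shifted Grünwald weights for a Riemann-Liouville derivative of order μ ∈ (1, 2), together with the factor that combines the two one-sided operators into the symmetric Riesz form. It shows only the weight sequences, not the full TSFFPE-NST solver.

```python
import numpy as np
from math import gamma, cos, pi

def l1_weights(alpha, n):
    """Weights b_j = (j+1)^(1-alpha) - j^(1-alpha) of the L1 approximation to the
    Caputo derivative of order alpha in (0,1):
    D_t^alpha u(t_n) ~ dt^(-alpha)/Gamma(2-alpha) * sum_j b_j * (u_{n-j} - u_{n-j-1})."""
    j = np.arange(n)
    return (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)

def grunwald_weights(mu, n):
    """Shifted Grunwald weights g_k = (-1)^k * C(mu, k) for a Riemann-Liouville
    derivative of order mu in (1,2), via the stable recursion
    g_0 = 1, g_k = g_{k-1} * (k - 1 - mu) / k."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1.0 - mu) / k
    return g

if __name__ == "__main__":
    alpha, mu, dt = 0.8, 1.6, 0.01
    b = l1_weights(alpha, 10)
    g = grunwald_weights(mu, 10)
    riesz_factor = -1.0 / (2.0 * cos(pi * mu / 2.0))   # links the two one-sided operators
                                                       # to the symmetric Riesz derivative
    print("L1 prefactor:", dt ** (-alpha) / gamma(2.0 - alpha))
    print("b_j:", np.round(b, 4))
    print("g_k:", np.round(g, 4), " Riesz factor:", round(riesz_factor, 4))
```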

Relevance: 20.00%

Abstract:

This paper outlines a method of constructing narratives about an individual's self-efficacy. Self-efficacy is defined as "people's judgments of their capabilities to organise and execute courses of action required to attain designated types of performances" (Bandura, 1986, p. 391), and as such represents a useful construct for thinking about personal agency. Social cognitive theory provides the theoretical framework for understanding the sources of self-efficacy, that is, the elements that contribute to a sense of self-efficacy. The narrative approach adopted offers an alternative to traditional, positivist psychology, characterised by a preoccupation with measuring psychological constructs (like self-efficacy) by means of questionnaires and scales. It is argued that these instruments yield scores which are somewhat removed from the lived experience of the person (respondent or subject) associated with the score. The method involves a cyclical and iterative process using qualitative interviews to collect data from participants, four mature-aged university students. The method builds on a three-interview procedure designed for life history research (Dolbeare & Schuman, cited in Seidman, 1998). This is achieved by introducing reflective homework tasks, as well as written data generated by research participants, as they are guided in reflecting on those experiences (including behaviours, cognitions and emotions) that constitute a sense of self-efficacy, in narrative and by narrative. The method illustrates how narrative analysis is used "to produce stories as the outcome of the research" (Polkinghorne, 1995, p. 15), with detail and depth contributing to an appreciation of the 'lived experience' of the participants. The method is highly collaborative, with narratives co-constructed by researcher and research participants. The research outcomes suggest that an enhanced understanding of self-efficacy contributes to motivation, application of effort and persistence in overcoming difficulties. The paper concludes with an evaluation of the research process by the students who participated in the author's doctoral study.

Relevance: 20.00%

Abstract:

Proper application of sunscreen is essential for it to be an effective public health strategy for skin cancer prevention. Insufficient application is common among sunbathers; it decreases sun protection and may therefore lead to increased UV damage of the skin. However, no objective measure of sunscreen application thickness (SAT) is currently available for field-based use. We present a method to detect SAT on human skin for determining the amount of sunscreen applied, enabling comparisons with manufacturer recommendations. Using a skin swabbing method and subsequent spectrophotometric analysis, SAT on human skin (in mg sunscreen per cm2 of skin area) was derived through the concentration-absorption relationship of the sunscreen determined in laboratory experiments. The analysis differentiated SATs between 0.25 and 4 mg cm−2 and showed a small but significant decrease in concentration over time post-application. A field study was performed in which the heterogeneity of sunscreen application could be investigated. The proposed method is a low-cost, noninvasive method for the determination of SAT on skin, and it can be used as a valid tool in field- and population-based studies.
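
The calibration-based conversion the method relies on can be sketched as follows (a hedged Python illustration, not the paper's procedure): a linear concentration-absorbance calibration fitted from laboratory standards is inverted to turn a swab extract's absorbance into sunscreen mass, which is divided by the swabbed skin area. The linearity assumption, variable names and example numbers are all illustrative.

```python
import numpy as np

def fit_calibration(concentrations, absorbances):
    """Least-squares fit of a linear (Beer-Lambert-like) calibration
    absorbance = slope * concentration + intercept, from laboratory standards."""
    slope, intercept = np.polyfit(concentrations, absorbances, deg=1)
    return slope, intercept

def sat_from_swab(absorbance, slope, intercept, extract_volume_ml, swab_area_cm2):
    """Convert a swab extract's absorbance into sunscreen application thickness
    (mg sunscreen per cm^2 of swabbed skin), assuming complete recovery by the swab."""
    conc_mg_per_ml = (absorbance - intercept) / slope
    return conc_mg_per_ml * extract_volume_ml / swab_area_cm2

if __name__ == "__main__":
    # Illustrative calibration standards (mg sunscreen per mL of extract vs absorbance).
    std_conc = np.array([0.0, 0.1, 0.2, 0.4, 0.8])
    std_abs = np.array([0.02, 0.11, 0.21, 0.40, 0.79])
    slope, intercept = fit_calibration(std_conc, std_abs)

    # A hypothetical field swab: 10 mL extract, 10 cm^2 swabbed area, absorbance 0.50.
    sat = sat_from_swab(0.50, slope, intercept, extract_volume_ml=10.0, swab_area_cm2=10.0)
    print(f"estimated application thickness ~ {sat:.2f} mg/cm^2")
```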

Relevance: 20.00%

Abstract:

Extensive groundwater withdrawal has resulted in a severe seawater intrusion problem in the Gooburrum aquifers at Bundaberg, Queensland, Australia. Better management strategies can be implemented by understanding the seawater intrusion processes in those aquifers. To study the seawater intrusion process in the region, a two-dimensional density-dependent, saturated and unsaturated flow and transport computational model is used. The model consists of a coupled system of two non-linear partial differential equations. The first equation describes the flow of a variable-density fluid, and the second equation describes the transport of dissolved salt. A two-dimensional control volume finite element model is developed for simulating the seawater intrusion into the heterogeneous aquifer system at Gooburrum. The simulation results provide a realistic mechanism by which to study the convoluted transport phenomena evolving in this complex heterogeneous coastal aquifer.
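
For reference, one standard form of such a coupled variable-density system, written here from the general literature rather than taken from the paper, is a fluid mass balance with a density-dependent Darcy flux and an advection-dispersion equation for salt, coupled through a linear density-concentration relation:

```latex
% Fluid mass balance with a density-dependent Darcy flux q
% (phi porosity, S_w saturation, rho fluid density, k permeability,
%  k_{rw} relative permeability, mu viscosity, p pressure, z elevation):
\[
\frac{\partial}{\partial t}\bigl(\phi S_w \rho\bigr) + \nabla\cdot\bigl(\rho\,\mathbf{q}\bigr) = 0,
\qquad
\mathbf{q} = -\frac{k\,k_{rw}}{\mu}\bigl(\nabla p + \rho g \nabla z\bigr).
\]

% Advection-dispersion transport of the relative salt concentration c
% (D dispersion tensor), with a linear density-concentration coupling:
\[
\frac{\partial}{\partial t}\bigl(\phi S_w \rho\, c\bigr)
  + \nabla\cdot\bigl(\rho\,\mathbf{q}\, c\bigr)
  - \nabla\cdot\bigl(\phi S_w \rho\,\mathbf{D}\,\nabla c\bigr) = 0,
\qquad
\rho = \rho_0\bigl(1 + \epsilon\, c\bigr).
\]
```

The control volume finite element model described above discretises a coupled nonlinear system of this general kind.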