981 results for Broadband spectral shaping


Relevance:

20.00%

Publisher:

Abstract:

Technology and Nursing Practice explains and critically engages with the practice implications of technology for nursing. It takes a broad view of technology, covering not only health informatics, but also 'tele-nursing' and the use of equipment in clinical practice.

Relevance:

20.00%

Publisher:

Abstract:

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system in a verification-style framework, calculating false-match and false-non-match likelihoods on the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan's Fourier-Mellin based hashing method, achieving at least a 1% improvement in equal error rate (EER) under noise, scaling and sharpening.
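To make the pipeline concrete, the following Python sketch shows a key-dependent Radon-projection hash. It is a deliberately simplified stand-in, not the paper's method: it substitutes plain Fourier magnitudes for the higher-order spectral features, and the function name, angle count and thresholding rule are illustrative assumptions.

import numpy as np
from skimage.transform import radon

def radon_hash(img, key, n_angles=32, bits_per_proj=8):
    # Key-dependent projection angles stand in for the paper's
    # random permutations of the input.
    rng = np.random.default_rng(key)
    angles = np.sort(rng.uniform(0.0, 180.0, n_angles))
    sinogram = radon(img.astype(float), theta=angles, circle=False)
    bits = []
    for proj in sinogram.T:  # one projection per angle
        # Low-order Fourier magnitudes; the paper extracts
        # higher-order spectral (bispectral) features here instead.
        spec = np.abs(np.fft.rfft(proj))[1:bits_per_proj + 1]
        bits.append(spec > np.median(spec))  # coarse, robust quantization
    return np.concatenate(bits)  # binary hash vector

Because each projection integrates over many pixels and only coarse spectral magnitudes are kept, small perturbations such as compression noise tend to leave most hash bits unchanged, which is the intuition behind the robustness claims.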

Relevance:

20.00%

Publisher:

Abstract:

The paper documents the development of an ethical framework for my current PhD project. I am a practice-led researcher with a background in creative writing. My project involves conducting a number of oral history interviews with individuals living in Brisbane, Queensland, Australia. I use the interviews to inform a novel set in Brisbane. In doing so, I hope to provide a lens into a cultural and historical space by creating a rich, textured and vivid narrative while still retaining some of the essential aspects of the oral history. While developing a methodology for fictionalising these oral histories, I have encountered a diverse range of ethical issues. In particular, I have had to confront my role as a writer and researcher working with other people's stories. In order to grapple with the complex ethics of such an engagement, I examine the devices and strategies employed by other creative practitioners working in similar fields. I focus chiefly on Miguel Barnet's Biography of a Runaway Slave (published in English in 1968) and Dave Eggers' What Is the What: The Autobiography of Valentino Achak Deng, a Novel (2006) in order to understand the complex processes of mediation involved in the artful shaping of oral histories. The paper explores how I have confronted and resolved ethical considerations in my theoretical and creative work.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the most suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision.

To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands.

To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and the scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a non-optimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.

Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. non-uniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
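As an aside on the coefficient model above, a generalized Gaussian can be fitted to a subband in a few lines of Python. This is a minimal sketch using the standard moment-ratio estimator rather than the least-squares formulation developed in the thesis; the function name and the root-search bracket are illustrative assumptions.

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_ggd(coeffs):
    # Fit p(x) ~ exp(-(|x| / alpha) ** beta) to subband coefficients via
    # the moment-ratio method (not the thesis's least-squares estimator).
    c = np.asarray(coeffs, dtype=float)
    m1 = np.mean(np.abs(c))
    m2 = np.mean(c ** 2)
    r = m1 / np.sqrt(m2)  # observed moment ratio
    # Solve Gamma(2/b) / sqrt(Gamma(1/b) * Gamma(3/b)) = r for the shape b.
    f = lambda b: gamma(2.0 / b) / np.sqrt(gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(f, 0.1, 5.0)  # bracket assumed wide enough for real subbands
    alpha = np.sqrt(m2 * gamma(1.0 / beta) / gamma(3.0 / beta))  # scale
    return beta, alpha

The fitted parameters then drive the noise-shaping bit allocation across subbands.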
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented.

The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
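The ridge-extraction step leans on the local ridge dominant directions. The abstract does not spell out the estimator, so as a hedged illustration the sketch below uses the standard gradient-tensor method for fingerprint orientation fields; the function name and block size are assumptions.

import numpy as np
from scipy.ndimage import sobel, uniform_filter

def ridge_orientation(img, block=16):
    # Local ridge dominant direction from smoothed gradient-tensor
    # components (a common textbook estimator, not necessarily the
    # thesis's own method).
    g = img.astype(float)
    gx = sobel(g, axis=1)
    gy = sobel(g, axis=0)
    gxy = uniform_filter(gx * gy, block)
    gxx = uniform_filter(gx * gx, block)
    gyy = uniform_filter(gy * gy, block)
    # The doubled-angle trick averages orientations without cancellation;
    # add pi/2 because ridges run perpendicular to the gradient.
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0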

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook.

In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model of the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have also been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology.

The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
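A minimal Python sketch of the split-VQ flavour of product-code quantization, together with the index-statistics idea behind the lossless back end, might look as follows; the data, vector dimension and codebook sizes are invented for illustration and do not reflect the thesis's actual configuration.

import numpy as np
from scipy.cluster.vq import kmeans, vq

# Hypothetical training set: rows are 10-dimensional spectral vectors.
train = np.random.default_rng(0).normal(size=(5000, 10))

# Product-code (split) VQ: quantize the two halves independently, so two
# small codebooks replace one prohibitively large joint codebook.
cb_lo, _ = kmeans(train[:, :5], 64)  # 6 bits for the lower half
cb_hi, _ = kmeans(train[:, 5:], 64)  # 6 bits for the upper half

idx_lo, _ = vq(train[:, :5], cb_lo)
idx_hi, _ = vq(train[:, 5:], cb_hi)

# Index entropy below log2(64) = 6 bits signals headroom for the lossless
# coding of codebook indices described above.
p = np.bincount(idx_lo, minlength=64) / idx_lo.size
print("lower-half index entropy: %.2f bits" % -(p[p > 0] * np.log2(p[p > 0])).sum())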

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents an original approach to parametric speech coding at rates below 1 kbit/sec, primarily for speech storage applications. The essential processes considered in this research encompass efficient characterization of the evolving configuration of the vocal tract to follow phonemic features with high fidelity, representation of the speech excitation using minimal parameters with minor degradation in the naturalness of synthesized speech, and, finally, quantization of the resulting parameters at the nominated rates.

For encoding speech spectral features, a new method relying on Temporal Decomposition (TD) is developed which efficiently compresses spectral information through interpolation between the most steady points along the time trajectories of the spectral parameters, using a new basis function. The compression ratio provided by the method is independent of the updating rate of the feature vectors, and hence allows high resolution in tracking significant temporal variations of speech formants with no effect on the spectral data rate. Accordingly, regardless of the quantization technique employed, the method yields a high compression ratio without sacrificing speech intelligibility. Several new techniques for improving the performance of the interpolation of spectral parameters through phonetically-based analysis are proposed and implemented in this research, comprising event-approximated TD, near-optimal shaping of event-approximating functions, efficient speech parametrization for TD on the basis of an extensive investigation originally reported in this thesis, and a hierarchical error minimization algorithm for the decomposition of feature parameters which significantly reduces the complexity of the interpolation process.

Speech excitation in this work is characterized using a novel Multi-Band Excitation paradigm which accurately determines the harmonic structure in the LPC (linear predictive coding) residual spectra, within individual bands, using the concept of Instantaneous Frequency (IF) estimation in the frequency domain. The model yields an effective two-band approximation to the excitation and computes pitch and voicing with high accuracy as well.

New methods for interpolative coding of the pitch and gain contours are also developed in this thesis. For pitch, relying on the correlation between phonetic evolution and pitch variations during voiced speech segments, TD is employed to interpolate the pitch contour between critical points introduced by event centroids. This compresses the pitch contour by a ratio of about 1/10 with negligible error. To approximate the gain contour, a set of uniformly-distributed Gaussian event-like functions is used, which reduces the amount of gain information to about 1/6 with acceptable accuracy.

The thesis also addresses a new quantization method applied to spectral features on the basis of the statistical properties and spectral sensitivity of the spectral parameters extracted from TD-based analysis. The experimental results show that good quality speech, comparable to that of conventional coders at rates over 2 kbits/sec, can be achieved at rates of 650-990 bits/sec.
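The temporal-decomposition idea of interpolating between steady points can be sketched crudely in Python. The fragment below picks the most stable frames as 'events' and interpolates linearly between them; the thesis uses dedicated event-approximating basis functions and a hierarchical error minimization, not this naive selection, and the function name is an assumption.

import numpy as np

def td_sketch(traj, n_events):
    # traj: (frames x dims) trajectory of spectral parameters.
    frames, dims = traj.shape
    # Frame-to-frame motion; frame 0 gets inf so it is never 'most steady'.
    motion = np.r_[np.inf, np.linalg.norm(np.diff(traj, axis=0), axis=1)]
    steady = np.argsort(motion)[:n_events]  # most steady frames = events
    events = np.unique(np.r_[0, steady, frames - 1])  # keep the endpoints
    t = np.arange(frames)
    approx = np.column_stack(
        [np.interp(t, events, traj[events, d]) for d in range(dims)])
    return events, approx  # only the event frames need to be stored

Only the parameter vectors at event frames (plus their positions) need to be stored, which is why the achievable compression ratio is independent of the analysis frame rate.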

Relevance:

20.00%

Publisher:

Abstract:

Norman K. Denzin (1989) claims that the central assumption of the biographical method—that a life can be captured and represented in a text—is open to question. This paper explores Denzin's statement by documenting the role of creative writers in re-presenting oral histories in two case studies from Queensland, Australia. The first, The Queensland Business Leaders Hall of Fame, was a commercial research project commissioned by the State Library of Queensland (SLQ) in 2009, and involved semi-formal qualitative interviews and digital stories. The second is an ongoing practice-led PhD project, The Artful Life: Oral History and Fiction, which investigates the fictionalisation of oral histories. Both projects enter into a dialogue around the re-presentation of oral and life histories, with attention given to the critical scholarship and creative practice in the process. Creative writers representing a life have particular preoccupations, with techniques that align more closely with fiction than with non-fiction (Hirsch and Dixon 2008). In this context, oral history resources are viewed not so much as repositories of historical facts, but as ambiguous and fluid narrative sources. The comparison of the two case studies also demonstrates that the aims of a particular project dictate the nature of the re-presentation, revealing that writing about another's life is a complex act of artful 'shaping'. Alistair Thomson (2007) notes the growing interdisciplinary nature of oral history scholarship since the 1980s; oral histories are used increasingly in art-based contexts to produce diverse cultural artefacts, such as digital stories and works of fiction, which are very different from traditional histories. What are the methodological implications of such projects? This paper will draw on self-reflexive practice to explore this question.

Relevance:

20.00%

Publisher:

Abstract:

The focus of this paper is the role of Australian parents in early childhood education and care (ECEC), in particular, their role in shaping ECEC public policy. The paper reports the findings of a study investigating the different ways in which a group of parents viewed and experienced this role. Set against a policy backdrop where parents are positioned as 'consumers' and 'participants' in ECEC, the study employed a phenomenographic research approach to describe this role as viewed and experienced by parents. The study identified four logically related, qualitatively different ways of constituting this role among this group of parents, ranging from 'no role in shaping public policy' (the no role conception) to 'participating in policy decision-making', particularly where policy was likely to affect their child and family (the participating in policy decision-making conception). The study provides an insider perspective on the role of parents in shaping policy and highlights variation in how this role is constituted by parents. The study also identifies factors perceived by parents as influencing their participation and discusses their implications for both policy and practice.

Relevance:

20.00%

Publisher:

Abstract:

Since a recent Australian study found that university law students experience higher rates of depression than medical students and legal professionals (Kelk et al. 2009), the mental health of law students has increasingly become a target of government. To date, however, there has been no attempt to analyse these practices as an activity of government in advanced liberal societies. This paper addresses this gap by providing an initial analytics of the government of depression in law schools. It demonstrates how students are responsibilised to manage the risks and uncertainties of legal education by constructing resilient forms of personal and professional personae. It highlights that, in order to avoid depression, students are encouraged not just to shape their minds and bodies according to psychological and biomedical discourses, but also to govern their ethical dispositions and become virtuous persons. This paper also argues that these forms of government are tied to advanced liberal forms of rule, as they position the law student as the locus of responsibility for depression, imply that depression is caused by an individual failing, and entrench students within responsibilising and entrepreneurial forms of subjectivity.