335 results for Factor-Augmented Vector Autoregression (FAVAR).
Abstract:
In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with those obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
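To make the prototype-update behaviour concrete, the sketch below shows an LVQ2.1-style margin step in Python of the kind the abstract says falls out of the margin formulation. It is an illustration under assumed names (lvq_margin_step, a squared-Euclidean distance, a single learning rate), not the authors' exact LMVQ derivation, which works from class-conditional density estimates.

import numpy as np

def lvq_margin_step(x, y, prototypes, labels, lr=0.05):
    # prototypes: (k, d) array of codebook vectors; labels: (k,) class labels.
    # Move the nearest correct prototype towards x and the nearest incorrect
    # prototype away from it -- the LVQ2-like behaviour the paper derives
    # from margin maximisation.
    d2 = np.sum((prototypes - x) ** 2, axis=1)            # squared distances to all prototypes
    same = np.flatnonzero(labels == y)
    other = np.flatnonzero(labels != y)
    i_plus = same[np.argmin(d2[same])]                    # nearest prototype of the correct class
    i_minus = other[np.argmin(d2[other])]                 # nearest prototype of a wrong class
    prototypes[i_plus] += lr * (x - prototypes[i_plus])   # attract
    prototypes[i_minus] -= lr * (x - prototypes[i_minus]) # repel
    return d2[i_minus] - d2[i_plus]                       # crude margin proxy: positive when x is correctly placed

Iterating this step over a training set with a decaying learning rate reproduces the qualitative LVQ2/LVQ3 behaviour described above; the actual LMVQ update would instead follow the gradient of the margin objective.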
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
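As an illustration of the coefficient-modelling step described above, the sketch below fits a zero-mean generalized Gaussian to a subband of wavelet coefficients. It uses the common moment-matching (ratio-of-moments) estimator rather than the least-squares formulation on the shape parameter used in the thesis; the function name fit_generalized_gaussian and the bracketing interval for the shape parameter are assumptions for the example.

import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def fit_generalized_gaussian(coeffs):
    # Fit a zero-mean generalized Gaussian p(x) ~ exp(-(|x|/alpha)**beta)
    # to a subband of wavelet coefficients; return (alpha, beta).
    x = np.asarray(coeffs, dtype=float).ravel()
    r = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)          # moment ratio (E|x|)^2 / E[x^2]
    # Invert r(beta) = Gamma(2/beta)^2 / (Gamma(1/beta) * Gamma(3/beta));
    # the bracket [0.2, 5] covers typical subband statistics
    # (beta = 2 is Gaussian, beta = 1 is Laplacian).
    f = lambda b: gamma(2.0 / b) ** 2 / (gamma(1.0 / b) * gamma(3.0 / b)) - r
    beta = brentq(f, 0.2, 5.0)
    alpha = np.sqrt(np.mean(x ** 2) * gamma(1.0 / beta) / gamma(3.0 / beta))  # scale from the variance
    return alpha, beta

The fitted (alpha, beta) pair for each subband would then feed the noise-shaping bit allocation and the choice of lattice truncation level and scaling factor.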
Abstract:
This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20 bits, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification. The motivation for this aspect of the research arose from a need to simultaneously preserve speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications, where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective, namely maximizing the identification rate, the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model based on the use of compressed speech are put forward.
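The following sketch illustrates the two ideas being coupled: a split (product-code) vector quantizer for one frame of spectral parameters, and a zeroth-order statistical model of the resulting codebook indices whose entropy shows how lossless coding can push the rate below the fixed index length. The function names and the split-VQ structure are illustrative assumptions; the thesis's PCVQ codebooks and index model may be organised differently.

import numpy as np

def pcvq_encode(frame, codebooks):
    # Product-code (split) VQ of one spectral-parameter frame.
    # frame: 1-D array (e.g. 10 LSFs); codebooks: list of (n_i, d_i) arrays
    # whose sub-dimensions d_i sum to len(frame). Returns one index per sub-vector.
    indices, start = [], 0
    for cb in codebooks:
        sub = frame[start:start + cb.shape[1]]
        indices.append(int(np.argmin(np.sum((cb - sub) ** 2, axis=1))))  # full search within this sub-codebook
        start += cb.shape[1]
    return indices

def index_stream_bits(index_stream, codebook_size):
    # Ideal bit cost of a stream of indices under a zeroth-order probability model,
    # compared with the fixed cost of log2(codebook_size) bits per index.
    counts = np.bincount(np.asarray(index_stream), minlength=codebook_size).astype(float)
    probs = counts / counts.sum()
    nz = probs > 0
    entropy_bits = float(-np.sum(counts[nz] * np.log2(probs[nz])))
    fixed_bits = len(index_stream) * np.log2(codebook_size)
    return entropy_bits, fixed_bits

Whenever the index distribution is non-uniform, entropy_bits falls below fixed_bits, which is the mechanism behind reducing the 24 bits per frame figure quoted above.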
Abstract:
This study, to elucidate the role of des(1-3)IGF-I in the maturation of IGF-I, used two strategies. The first was to detect the presence of enzymes in tissues which would act on IGF-I to produce des(1-3)IGF-I, and the second was to detect the potential products of such enzymic activity, namely Gly-Pro-Glu (GPE), Gly-Pro (GP) and des(1-3)IGF-I. No neutral tripeptidyl peptidase (TPP II), which would release the tripeptide GPE from IGF-I, was detected in brain, urine, or red or white blood cells. The TPP-like activity which was detected was attributed to the combined action of a dipeptidyl peptidase (DPP IV) and an aminopeptidase (APA). A true TPP II was, however, detected in platelets. Two purified TPP II enzymes were investigated, but they did not release GPE from IGF-I under a variety of conditions. Consequently, TPP II seemed unlikely to participate in the formation of des(1-3)IGF-I. In contrast, an acidic tripeptidyl peptidase activity (TPP I) was detected in brain and colostrum, the former with a pH optimum of 4.5 and the latter 3.8. It seems likely that such an enzyme would participate in the formation of des(1-3)IGF-I in these tissues in vitro, i.e. that des(1-3)IGF-I may have been produced as an artifact in the isolation of IGF-I from brain and colostrum under acidic conditions. This contrasts with suggestions of an in vivo role for des(1-3)IGF-I, as reported by others. The activity of a dipeptidyl peptidase IV (DPP IV) from urine, which should release the dipeptide GP from IGF-I, was assessed under a variety of conditions and with a variety of additives and potential enzyme stimulants, but there was no release of GP. The DPP IV also exhibited a transferase activity with synthetic substrates in the presence of dipeptides, at lower concentrations than previously reported for other acceptors or other proteolytic enzymes. In addition, a low concentration of a product, possibly the tetrapeptide Gly-Pro-Gly-Leu, was detected with the action of the enzyme on IGF-I in the presence of the dipeptide Gly-Leu. As part of attempts to detect tissue production of des(1-3)IGF-I, a monoclonal antibody (MAb) directed towards the GPE- end of IGF-I was produced by immunisation with a 10-mer covalently attached to a carrier protein. By the use of indirect ELISA and inhibitor studies, the MAb was shown to selectively recognise peptides with an N-terminal GPE- sequence, and it was applied to the indirect detection of des(1-3)IGF-I. The concentration of GPE in brain, measured by mass spectrometry (MS), was low, and the concentration of total IGF-I (measured by ELISA with a commercial polyclonal antibody [PAb]) was 40 times higher at 50 nmol/kg. This also was not consistent with the action of a tripeptidyl peptidase in brain that converted all IGF-I to des(1-3)IGF-I plus GPE. Contrasting ELISA results, using the MAb prepared in this study, suggest an even higher concentration of intact IGF-I of 150 nmol/kg. This would argue against the presence of any des(1-3)IGF-I in brain, but in turn it indicates either the presence of other substances containing a GPE amino-terminus or another cross-reacting epitope. Although the results of the specificity studies reported in Chapter 5 would make this latter possibility seem unlikely, it cannot be completely excluded. No GP was detected in brain by MS.
No GPE was detected in colostrum by capillary electrophoresis (CE), but interference from extraneous substances reduced the detectability of GPE by CE, and this approach would require further prior purification and concentration steps. A molecule with a migration time equal to that of the peptide GP was detected in colostrum by CE, but its concentration (approximately 10 µmol/L) was much higher than the IGF-I concentration measured by radio-immunoassay using a PAb (80 nmol/L) or using a MAb (300-400 nmol/L). A DPP IV enzyme was detected in colostrum, and this could account for the GP, derived from substrates other than IGF-I. Based on the differential results of the two antibody assays, there was no indication of the presence of des(1-3)IGF-I in brain or colostrum. In the absence of any enzyme activity directed towards the amino terminus of IGF-I and in the absence of any potential products, IGF-I therefore does not appear to "mature" via des(1-3)IGF-I in the brain, nor in the neutral colostrum. In spite of these results, which indicate the absence of an enzymic attack on IGF-I and the absence of the expected products in tissues, the possibility that the conversion of IGF-I may occur in neutral conditions in limited amounts cannot be ruled out. It remains possible that, in the extracellular environment of the membrane, a complex interaction of IGF-I, binding protein, aminopeptidase(s) and receptor produces des(1-3)IGF-I as a transient product which is bound to the receptor and internalised.
Abstract:
The joints of a humanoid robot experience disturbances of markedly different magnitudes during the course of a walking gait. Consequently, simple feedback control techniques track desired joint trajectories poorly. This paper explores the addition of a control system inspired by the architecture of the cerebellum to improve system response. This system learns to compensate for the changes in load that occur during a cycle of motion. The joint compensation scheme, called Trajectory Error Learning, augments the existing feedback control loop on a humanoid robot. Results from tests on the GuRoo platform show an improvement in system response when the feedback loop is augmented with the cerebellar compensator.
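A minimal sketch of the kind of cerebellum-inspired compensation described above is given below: a feedforward correction table indexed by gait phase, trained by feedback-error learning so that it gradually absorbs the repeatable part of the load disturbance. The class name, bin count, gains and learning rate are assumptions for illustration; they do not describe the actual Trajectory Error Learning implementation on GuRoo.

import numpy as np

class PhaseIndexedCompensator:
    # Feedforward correction stored per gait-phase bin and learned from the
    # feedback controller's own output (feedback-error learning).
    def __init__(self, n_bins=100, lr=0.1):
        self.correction = np.zeros(n_bins)
        self.lr = lr

    def command(self, phase, q_des, q_meas, dq_err=0.0, kp=20.0, kd=0.5):
        # phase in [0, 1) identifies the point in the walking cycle.
        i = int(phase * len(self.correction)) % len(self.correction)
        u_fb = kp * (q_des - q_meas) + kd * dq_err    # existing feedback loop
        u_ff = self.correction[i]                     # learned compensation for this phase
        self.correction[i] += self.lr * u_fb          # residual feedback effort drives learning
        return u_fb + u_ff

Over successive gait cycles the feedback term shrinks for repeatable disturbances, which mirrors the improved tracking reported for the augmented controller.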
Abstract:
Background: The androgen receptor (AR) is a ligand-induced transcription factor which plays an important role in the normal development of the prostate as well as in the progression of prostate cancer to a hormone-refractory state. We previously reported the identification of a novel AR coactivator protein, L-dopa decarboxylase (DDC), which can act at the cytoplasmic level to enhance AR activity. We have also shown that DDC is a neuroendocrine (NE) marker of prostate cancer and that its expression is increased after hormone-ablation therapy and progression to androgen independence. In the present study, we generated tetracycline-inducible LNCaP-DDC prostate cancer stable cells to identify DDC downstream target genes by oligonucleotide microarray analysis. Results: Comparison of induced DDC-overexpressing cells versus non-induced control cell lines revealed a number of changes in the expression of androgen-regulated transcripts encoding proteins with a variety of molecular functions, including signal transduction, binding and catalytic activities. There were a total of 35 differentially expressed genes, 25 up-regulated and 10 down-regulated, in the DDC-overexpressing cell line. In particular, we found that a well-known androgen-induced gene, TMEPAI, was up-regulated in DDC-overexpressing cells, supporting its known co-activation function. In addition, DDC also further augmented the transcriptional repression function of AR for a subset of androgen-repressed genes. Changes in cellular gene transcription detected by microarray analysis were confirmed for selected genes by quantitative real-time RT-PCR. Conclusion: Taken together, our results provide evidence linking DDC action with AR signaling, which may be important for orchestrating the molecular changes responsible for prostate cancer progression.
Abstract:
The treatment of challenging fractures and large osseous defects presents a formidable problem for orthopaedic surgeons. Tissue engineering/regenerative medicine approaches seek to solve this problem by delivering osteogenic signals within scaffolding biomaterials. In this study, we introduce a hybrid growth factor delivery system that consists of an electrospun nanofiber mesh tube for guiding bone regeneration, combined with a peptide-modified alginate hydrogel injected inside the tube for sustained growth factor release. We tested the ability of this system to deliver recombinant human bone morphogenetic protein-2 (rhBMP-2) for the repair of critically sized segmental bone defects in a rat model. Longitudinal µ-CT analysis and torsional testing provided quantitative assessment of bone regeneration. Our results indicate that the hybrid delivery system resulted in consistent bony bridging of the challenging bone defects. However, in the absence of rhBMP-2, the use of the nanofiber mesh tube and alginate did not result in substantial bone formation. Perforations in the nanofiber mesh accelerated the rhBMP-2-mediated bone repair and resulted in functional restoration of the regenerated bone. µ-CT-based angiography indicated that perforations did not significantly affect the revascularization of defects, suggesting that some other interaction with the tissue surrounding the defect, such as improved infiltration of osteoprogenitor cells, contributed to the observed differences in repair. Overall, our results indicate that the hybrid alginate/nanofiber mesh system is a promising growth factor delivery strategy for the repair of challenging bone injuries.