726 results for Training method


Relevance:

20.00%

Publisher:

Abstract:

Four studies report on outcomes for long-term unemployed individuals who attend occupational skills/personal development training courses in Australia. Levels of distress, depression, guilt, anger, helplessness, positive and negative affect, life satisfaction and self-esteem were used as measures of well-being. Employment value, employment expectations and employment commitment were used as measures of work attitude. Social support, financial strain, and use of community resources were used as measures of life situation. Other variables investigated were causal attribution, unemployment blame, levels of coping, self-efficacy, the personality variable of neuroticism, the psycho-social climate of the training course, and changes to occupational status. Training courses were (a) government-funded occupational skills-based programs which included some components of personal development training, and (b) a specially developed course which focused exclusively on improving well-being, and which utilised the cognitive-behavioural therapy (CBT) approach. Data for all studies were collected longitudinally by having subjects complete questionnaires pre-course, post-course, and (for 3 of the 4 studies) at 3 months follow-up, in order to investigate long-term effects. One of the studies utilised the case-study methodology and was designed to be illustrative and assist in interpreting the quantitative data from the other 3 evaluations. The outcomes for participants were contrasted with those of control subjects who met the same selection criteria for training. Results confirmed earlier findings that the experiences of unemployment were negative. Immediate effects of the courses were to improve well-being. Improvements were greater for those who attended courses with higher levels of personal development input, and the best results were obtained from the specially developed CBT program.
Participants who had lower levels of well-being at the beginning of the courses did better as a result of training than those who were already functioning at higher levels. Course participants gained only marginal advantages over control subjects in relation to improving their occupational status. Many of the short-term well-being gains made as a result of attending the courses were still evident at 3 months follow-up. Best results were achieved for the specially designed CBT program. Results were discussed in the context of prevailing theories of unemployment (Fryer, 1986, 1988; Jahoda, 1981, 1982; Warr, 1987a, 1987b).

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities in human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
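The generalized Gaussian modelling step above can be illustrated with a small sketch. Note the hedges: this uses a moment-ratio matching estimator solved by bisection, a common alternative to the least-squares formulation the thesis describes, and the function names and search interval are illustrative assumptions, not the author's implementation:

```python
import math
import numpy as np

def ggd_ratio(beta):
    # Theoretical value of (E|X|)^2 / E[X^2] for a zero-mean
    # generalized Gaussian with shape parameter beta.
    return math.gamma(2.0 / beta) ** 2 / (
        math.gamma(1.0 / beta) * math.gamma(3.0 / beta))

def estimate_ggd_shape(coeffs, lo=0.1, hi=5.0, iters=60):
    # Match the sample moment ratio to its theoretical value by
    # bisection; ggd_ratio is monotonically increasing in beta.
    x = np.asarray(coeffs, dtype=float)
    target = np.mean(np.abs(x)) ** 2 / np.mean(x ** 2)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rng = np.random.default_rng(0)
sample = rng.laplace(size=50_000)      # true shape parameter: 1 (Laplacian)
beta_hat = estimate_ggd_shape(sample)  # should land near 1.0
```

Beta = 2 recovers the Gaussian and beta = 1 the Laplacian; wavelet detail coefficients typically fit with beta below 1, which is what motivates modelling them with a GGD rather than assuming Gaussianity.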
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these onto the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms.
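To make the two lattice parameters concrete, the following sketch quantizes to a scaled cubic lattice Z^n. This is a deliberately simplified stand-in, not the piecewise-uniform pyramid LVQ the thesis proposes; in particular, the projection onto the outermost shell shown here is exactly the naive step the proposed algorithm avoids:

```python
import numpy as np

def lattice_quantize(vec, scale, radius):
    # Quantize to the scaled integer lattice Z^n (a cubic stand-in for
    # the denser lattices used in LVQ). `scale` is the scaling factor
    # and `radius` the truncation level: a point outside the truncated
    # region is pulled back toward the outermost shell.
    point = np.round(np.asarray(vec, dtype=float) / scale)
    norm = np.linalg.norm(point)
    if norm > radius:
        point = np.round(point * (radius / norm))
    return point * scale
```

A finer `scale` lowers distortion but enlarges the codebook, while `radius` bounds how many lattice points can be indexed; choosing both per subband from the source statistics, instead of fixing them in advance, is the adaptation the thesis argues for.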
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
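Local ridge dominant directions, on which the extraction above relies, are commonly estimated from the gradient structure tensor of an image block. The sketch below shows that generic textbook technique under stated assumptions; it is not the thesis's own algorithm, and the block size and angle convention are arbitrary choices:

```python
import numpy as np

def local_ridge_direction(block):
    # Dominant ridge orientation of a grey-level image block, from the
    # gradient structure tensor (a standard fingerprint technique).
    gy, gx = np.gradient(np.asarray(block, dtype=float))
    gxx = (gx * gx).sum()
    gyy = (gy * gy).sum()
    gxy = (gx * gy).sum()
    # Gradients point across the ridges, so the ridge direction is
    # perpendicular to the dominant gradient orientation.
    return 0.5 * np.arctan2(2.0 * gxy, gxx - gyy) + np.pi / 2.0

cols = np.tile(np.arange(32), (32, 1))
vertical_ridges = np.sin(0.5 * cols)           # intensity varies along x only
theta = local_ridge_direction(vertical_ridges)  # pi/2: ridges run vertically
```

Averaging the doubled angle through `arctan2` avoids the 180-degree ambiguity of ridge orientations, which is why the tensor entries are combined before halving.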

Relevance:

20.00%

Publisher:

Abstract:

This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy compression and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20 millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
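The codebook training the abstract refers to can be sketched with the Lloyd iteration at the heart of the widely used LBG algorithm. This is a generic illustration, not the thesis's PCVQ training procedure; the deterministic initialization and iteration count are assumptions made for brevity:

```python
import numpy as np

def train_vq_codebook(vectors, size, iters=20):
    # Lloyd-style codebook training: alternate nearest-codeword
    # assignment and centroid update until the codebook settles.
    data = np.asarray(vectors, dtype=float)
    # Simple deterministic spread initialization (the LBG codebook
    # splitting step is omitted for brevity).
    codebook = data[np.linspace(0, len(data) - 1, size).astype(int)].copy()
    labels = np.zeros(len(data), dtype=int)
    for _ in range(iters):
        # Assign each vector to its nearest codeword
        # (squared Euclidean distortion).
        dists = ((data[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # Move each codeword to the mean of its cell.
        for k in range(size):
            cell = data[labels == k]
            if len(cell):
                codebook[k] = cell.mean(axis=0)
    return codebook, labels
```

In a product-code scheme the spectral vector is split into sub-vectors, each with its own small codebook trained this way, so the encoder searches several small codebooks instead of one prohibitively large one; the transmitted indices are then candidates for the lossless index-modelling stage the abstract describes.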
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.

Relevance:

20.00%

Publisher:

Abstract:

Throughout the twentieth century increased interest in the training of actors resulted in the emergence of a plethora of acting theories and innovative theatrical movements in Europe, the UK and the USA. The individuals or groups involved with the formulation of these theories and movements developed specific terminologies, or languages of acting, in an attempt to clearly articulate the nature and the practice of acting according to their particular pedagogy or theatrical aesthetic. Now at the dawning of the twenty-first century, Australia boasts quite a number of schools and university courses professing to train actors. This research aims to discover the language used in actor training on the east coast of Australia today. Using interviews with staff of the National Institute of Dramatic Art, the Victorian College of the Arts, and the Queensland University of Technology as the primary source of data, a constructivist grounded theory has emerged to assess the influence of last century's theatrical theorists and practitioners on Australian training and to ascertain the possibility of a distinctly Australian language of acting.

Relevance:

20.00%

Publisher:

Abstract:

Science and technology are promoted as major contributors to national development. Consequently, improved science education has been placed high on the agenda of tasks to be tackled in many developing countries, although progress has often been limited. In fact, there have been claims that the enormous investment in teaching science in developing countries has basically failed, with many reports of how efforts to teach science in developing countries often result in rote learning of strange concepts, mere copying of factual information, and a general lack of understanding on the part of local students. These generalisations can be applied to science education in Fiji. Muralidhar (1989) has described a situation in which upper primary and middle school students in Fiji were given little opportunity to engage in practical work; an extremely didactic form of teacher exposition was the predominant method of instruction during science lessons. He concluded that, amongst other things, teachers' limited understanding, particularly of aspects of physical science, resulted in their rigid adherence to the textbook or the omission of certain activities or topics. Although many of the problems associated with science education in developing countries have been documented, few attempts have been made to understand how non-Western students might better learn science. This study addresses the issue of Fiji pre-service primary teachers' understanding of a key aspect of physical science, namely, matter and how it changes, and their responses to learning experiences based on a constructivist epistemology. Initial interviews were used to probe pre-service primary teachers' understanding of this domain of science. The data were analysed to identify students' alternative and scientific conceptions. These conceptions were then used to construct Concept Profile Inventories (CPIs) which allowed for qualitative comparison of the concepts of the two ethnic groups who took part in the study.
This phase of the study also provided some insight into the interaction of scientific information and traditional beliefs in non-Western societies. A quantitative comparison of the groups' conceptions was conducted using a Science Concept Survey instrument developed from the CPIs. These data provided considerable insight into the aspects of matter where the pre-service teachers' understanding was particularly weak. On the basis of these preliminary findings, a six-week teaching program aimed at improving the students' understanding of matter was implemented in an experimental design with a group of students. The intervention involved elements of pedagogy, such as the use of analogies and concept maps, which were novel to most of those who took part. At the conclusion of the teaching programme, the learning outcomes of the experimental group were compared with those of a control group taught in a more traditional manner. These outcomes were assessed quantitatively by means of pre- and post-tests and a delayed post-test, and qualitatively using an interview protocol. The students' views on the various teaching strategies used with the experimental group were also sought. The findings indicate that in the domain of matter little variation exists in the alternative conceptions held by Fijian and Indian students, suggesting that cultural influences may be minimal in their construction. Furthermore, the teaching strategies implemented with the experimental group of students, although largely derived from Western research, showed considerable promise in the context of Fiji, where they appeared to be effective in improving the understanding of students from different cultural backgrounds. These outcomes may be of significance to those involved in teacher education and curriculum development in other developing countries.