883 results for Identification with supervisor


Relevance:

30.00%

Publisher:

Abstract:

Financial processes may possess long memory, and their probability densities may display heavy tails. Many models have been developed to deal with this tail behaviour, which reflects jumps in the sample paths. The presence of long memory, on the other hand, contradicts the efficient market hypothesis and remains a subject of debate. These features pose challenges for memory detection and for modelling the co-presence of long memory and heavy tails. This PhD project aims to respond to these challenges. The first part detects memory in a large number of financial time series of stock prices and exchange rates using their scaling properties. Since financial time series often exhibit stochastic trends, a common form of nonstationarity, strong trends in the data can lead to false detection of memory. We take advantage of multifractal detrended fluctuation analysis (MF-DFA), a technique that can systematically eliminate trends of different orders. The method is based on identifying the scaling of the q-th-order moments and generalises standard detrended fluctuation analysis (DFA), which uses only the second moment, that is, q = 2. We also consider rescaled range (R/S) analysis and the periodogram method to detect memory in financial time series, and compare their results with those of MF-DFA. An interesting finding is that short memory is detected in stock prices of the American Stock Exchange (AMEX), while long memory is found present in the time series of two exchange rates, namely the French franc and the Deutsche mark. Electricity price series of the five states of Australia are also found to possess long memory. For these electricity price series, heavy tails are also pronounced in their probability densities. The second part of the thesis develops models to represent the short-memory and long-memory financial processes detected in Part I.
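The scaling check at the heart of this memory-detection step can be sketched in a few lines. The following is a minimal q = 2 version, i.e. plain DFA with linear detrending, not the full MF-DFA with q-th-order moments and higher-order trends; the window sizes are illustrative:

```python
import math
import random

def dfa_exponent(series, window_sizes):
    """Estimate the DFA scaling exponent (the q = 2 case of MF-DFA).

    The series is integrated (cumulative sum of deviations from the mean),
    split into non-overlapping windows, a linear trend is removed from each
    window by least squares, and the RMS fluctuation F(n) is computed for
    each window size n.  The slope of log F(n) against log n is the scaling
    exponent: about 0.5 for uncorrelated noise, above 0.5 for long memory.
    """
    mean = sum(series) / len(series)
    profile, acc = [], 0.0
    for x in series:
        acc += x - mean
        profile.append(acc)

    log_n, log_f = [], []
    for n in window_sizes:
        sq = []
        for start in range(0, len(profile) - n + 1, n):
            seg = profile[start:start + n]
            # least-squares linear detrend of this window
            t = list(range(n))
            tbar, ybar = sum(t) / n, sum(seg) / n
            num = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, seg))
            den = sum((ti - tbar) ** 2 for ti in t)
            slope = num / den
            sq.extend((yi - (ybar + slope * (ti - tbar))) ** 2
                      for ti, yi in zip(t, seg))
        log_n.append(math.log(n))
        log_f.append(math.log(math.sqrt(sum(sq) / len(sq))))

    # slope of the log-log fluctuation plot
    nb = len(log_n)
    xb, yb = sum(log_n) / nb, sum(log_f) / nb
    return (sum((x - xb) * (y - yb) for x, y in zip(log_n, log_f))
            / sum((x - xb) ** 2 for x in log_n))

random.seed(0)
white = [random.gauss(0, 1) for _ in range(4096)]
alpha = dfa_exponent(white, [8, 16, 32, 64, 128])
print(alpha)  # close to 0.5 for uncorrelated noise
```

A long-memory series would instead yield an exponent noticeably above 0.5, which is the signature searched for in the stock-price and exchange-rate data.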
These models take the form of continuous-time AR(∞)-type equations whose kernel is the Laplace transform of a finite Borel measure. Imposing appropriate conditions on this measure produces either short memory or long memory in the dynamics of the solution. A specific form of the models, which has a good MA(∞)-type representation, is presented for the short-memory case. Parameter estimation for these models is performed via least squares, and the models are applied to the AMEX stock prices, which were established in Part I to possess short memory. By selecting the kernel in the continuous-time AR(∞)-type equations to have the form of a Riemann-Liouville fractional derivative, we obtain a fractional stochastic differential equation driven by Brownian motion. Equations of this type are used to represent financial processes with long memory, whose dynamics are described by the fractional derivative in the equation. These models are estimated via quasi-likelihood, namely a continuous-time version of the Gauss-Whittle method. The models are applied to the exchange rates and the electricity prices of Part I with the aim of confirming their possible long-range dependence established by MF-DFA. The third part of the thesis applies the results established in Parts I and II to characterise and classify financial markets. We pay attention to the New York Stock Exchange (NYSE), the American Stock Exchange (AMEX), the NASDAQ Stock Exchange (NASDAQ) and the Toronto Stock Exchange (TSX). The parameters from MF-DFA and those of the short-memory AR(∞)-type models are employed in this classification. We propose the Fisher discriminant algorithm to find a classifier in the two- and three-dimensional spaces of the data sets, and then use cross-validation to verify discriminant accuracies. This classification is useful for understanding and predicting the behaviour of different processes within the same market.
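In the two-class, two-dimensional case, the Fisher discriminant step reduces to projecting the data onto w = Sw⁻¹(m₁ − m₂), where Sw is the pooled within-class scatter matrix. A minimal sketch, with made-up (exponent, parameter) pairs standing in for the real MF-DFA and model parameters:

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher linear discriminant in 2-D.

    Returns the projection direction w = Sw^{-1} (mean_a - mean_b);
    samples are then classified by comparing their projection onto w
    with the projected midpoint of the two class means."""
    def mean(pts):
        n = len(pts)
        return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

    def scatter(pts, m):
        sxx = sum((p[0] - m[0]) ** 2 for p in pts)
        syy = sum((p[1] - m[1]) ** 2 for p in pts)
        sxy = sum((p[0] - m[0]) * (p[1] - m[1]) for p in pts)
        return [[sxx, sxy], [sxy, syy]]

    ma, mb = mean(class_a), mean(class_b)
    sa, sb = scatter(class_a, ma), scatter(class_b, mb)
    sw = [[sa[0][0] + sb[0][0], sa[0][1] + sb[0][1]],
          [sa[1][0] + sb[1][0], sa[1][1] + sb[1][1]]]
    det = sw[0][0] * sw[1][1] - sw[0][1] * sw[1][0]
    d = (ma[0] - mb[0], ma[1] - mb[1])
    # w = Sw^{-1} d, with the 2x2 inverse written out explicitly
    w = ((sw[1][1] * d[0] - sw[0][1] * d[1]) / det,
         (-sw[1][0] * d[0] + sw[0][0] * d[1]) / det)
    return w, ma, mb

# Two hypothetical "markets" described by (scaling exponent, model parameter)
a = [(0.55, 1.0), (0.60, 1.1), (0.58, 0.9)]
b = [(0.75, 2.0), (0.80, 2.1), (0.78, 1.9)]
w, ma, mb = fisher_direction(a, b)
midpoint = sum(w[i] * (ma[i] + mb[i]) / 2 for i in range(2))
proj = lambda p: w[0] * p[0] + w[1] * p[1]
print(all(proj(p) > midpoint for p in a))  # True: class a on one side
print(all(proj(p) < midpoint for p in b))  # True: class b on the other
```

Cross-validation, as used in the thesis, would repeat this fit on held-out splits to estimate the discriminant accuracy.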
The fourth part of the thesis investigates the heavy-tailed behaviour of financial processes which may also possess long memory. We consider fractional stochastic differential equations driven by stable noise to model financial processes such as electricity prices. The long memory of electricity prices is represented by a fractional derivative, while the stable noise input models their non-Gaussianity via the tails of their probability density. A method using the empirical densities and MF-DFA is provided to estimate all the parameters of the model and to simulate sample paths of the equation. The method is then applied to analyse daily spot prices for five states of Australia. Comparisons with the results obtained from R/S analysis, the periodogram method and MF-DFA are provided. The results from fractional SDEs agree with those from MF-DFA, which are based on multifractal scaling, while those from the periodograms, which are based on the second order, seem to underestimate the long-memory dynamics of the process. This highlights the need for, and usefulness of, fractal methods in modelling non-Gaussian financial processes with long memory.
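Stable driving noise of the kind used here can be simulated with the Chambers-Mallows-Stuck transform. The sketch below covers only the symmetric (β = 0) case and is an illustration of heavy-tailed noise generation, not the estimation method of the thesis:

```python
import math
import random

def symmetric_stable_sample(alpha, rng):
    """One draw from a symmetric alpha-stable law (beta = 0) via the
    Chambers-Mallows-Stuck transform.  alpha = 2 recovers the Gaussian
    (up to scale); alpha < 2 produces the heavy tails seen in the
    electricity-price densities."""
    v = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    return (math.sin(alpha * v) / math.cos(v) ** (1.0 / alpha)
            * (math.cos(v - alpha * v) / w) ** ((1.0 - alpha) / alpha))

rng = random.Random(42)
noise = [symmetric_stable_sample(1.5, rng) for _ in range(10000)]
# heavy tails: extreme draws far beyond what a Gaussian would produce
print(max(abs(x) for x in noise))
```

Feeding such increments into a discretised fractional SDE yields sample paths that combine long memory (from the fractional kernel) with heavy tails (from the stable noise).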


In children, joint hypermobility (typified by structural instability of joints) manifests clinically as neuro-muscular and musculo-skeletal conditions and conditions associated with the development and organization of control of posture and gait (Finkelstein, 1916; Jahss, 1919; Sobel, 1926; Larsson, Mudholkar, Baum and Srivastava, 1995; Murray and Woo, 2001; Hakim and Grahame, 2003; Adib, Davies, Grahame, Woo and Murray, 2005). The process of controlling the relative proportions of joint mobility and stability, whilst maintaining equilibrium in standing posture and gait, depends upon the complex interrelationship between skeletal, muscular and neurological function (Massion, 1998; Gurfinkel, Ivanenko, Levik and Babakova, 1995; Shumway-Cook and Woollacott, 1995). Its efficiency relies upon the integrity of neuro-muscular and musculo-skeletal components (ligaments, muscles, nerves), the Central Nervous System's capacity to interpret, process and integrate sensory information from visual, vestibular and proprioceptive sources (Crotts, Thompson, Nahom, Ryan and Newton, 1996; Riemann, Guskiewicz and Shields, 1999; Schmitz and Arnold, 1998), and the development and incorporation of this information into a representational scheme (postural reference frame) of body orientation with respect to internal and external environments (Gurfinkel et al., 1995; Roll and Roll, 1988). Sensory information from the base of support (the feet) makes a significant contribution to the development of reference frameworks (Kavounoudias, Roll and Roll, 1998). Problems with the structure and/or function of any one, or a combination, of these components or systems may result in partial loss of equilibrium and, therefore, ineffectiveness or a significant reduction in the capacity to interact with the environment, which may result in disability and/or injury (Crotts et al., 1996; Rozzi, Lephart, Sterner and Kuligowski, 1999b).
Whilst literature focusing upon clinical associations between joint hypermobility and conditions requiring therapeutic intervention has been abundant (Crego and Ford, 1952; Powell and Cantab, 1983; Dockery, in Jay, 1999; Grahame, 1971; Childs, 1986; Barton, Bird, Lindsay, Newton and Wright, 1995a; Rozzi et al., 1999b; Kerr, Macmillan, Uttley and Luqmani, 2000; Grahame, 2001), there has been a deficit of controlled studies in which the neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility have been quantified and considered within the context of the organization of postural control in standing balance and gait. This was the aim of this project, undertaken as three studies. The major study (Study One) compared the fundamental neuro-muscular and musculo-skeletal characteristics of 15 children with joint hypermobility and 15 age- (8 and 9 years), gender-, height- and weight-matched non-hypermobile controls. Significant differences were identified between previously undiagnosed hypermobile (n=15) and non-hypermobile children (n=15) in passive joint ranges of motion of the lower limbs and lumbar spine, muscle tone of the lower leg and foot, barefoot CoP displacement, and in parameters of barefoot gait. Clinically relevant differences were also noted in barefoot single-leg balance time. There were no differences between groups in isometric muscle strength in ankle dorsiflexion, knee flexion or extension. The second comparative study investigated foot morphology under non-weight-bearing and weight-bearing load conditions in the same children with and without joint hypermobility, using three-dimensional images (plaster casts) of their feet. The preliminary phase of this study evaluated the casting technique against direct measures of foot length, forefoot width, RCSP and forefoot-to-rearfoot angle. Results indicated accurate representation of elementary foot morphology within the plaster images.
The comparative study examined the between- and within-group differences in measures of foot length and width, and in measures above the support surface (heel inclination angle, forefoot-to-rearfoot angle, normalized arch height, height of the widest point of the heel) in the two load conditions. Results of measures from plaster images identified that hypermobile children have different barefoot weight-bearing foot morphology above the support surface than non-hypermobile children, despite no differences in measures of foot length or width. Based upon the differences in components of control of posture and gait in the hypermobile group identified in Study One and Study Two, the final study (Study Three), using the same subjects, tested the immediate effect of specifically designed custom-made foot orthoses upon the balance and gait of hypermobile children. The design of the orthoses was evaluated against the direct measures and the measures from plaster images of the feet. This ascertained the differences in morphology between the modified casts used to mould the orthoses and the original image of the foot. The orthoses were fitted into standardized running shoes. The effect of the shoe alone was tested upon the non-hypermobile children as the non-therapeutic equivalent condition. Immediate improvement in balance was noted in single-leg stance and CoP displacement in the hypermobile group, together with significant immediate improvement in the percentage of gait phases and in the percentage of the gait cycle at which maximum plantar flexion of the ankle occurred in gait. The neuro-muscular and musculo-skeletal characteristics of children with joint hypermobility are different from those of non-hypermobile children. The Beighton, Solomon and Soskolne (1973) screening criteria successfully classified joint hypermobility in children.
As a result of this study, joint hypermobility has been identified as a variable which must be controlled in studies of foot morphology and function in children. The outcomes of this study provide a basis upon which to further explore the association between joint hypermobility and neuro-muscular and musculo-skeletal conditions, and have relevance for the physical education of children with joint hypermobility, for footwear and orthotic design processes and, in particular, for the clinical identification and treatment of children with joint hypermobility.


In this research, we aim to identify factors that significantly affect the clickthrough of Web searchers. Our underlying goal is to determine more efficient methods of optimizing the clickthrough rate. We devise a clickthrough metric for measuring customer satisfaction with search engine results, using the number of links visited, the number of queries a user submits, and the rank of clicked links. We use a neural network to detect the significant influence of searching characteristics on future user clickthrough. Our results show that high occurrences of query reformulation, lengthy searching duration, longer query length, and the higher ranking of prior clicked links correlate positively with future clickthrough. We provide recommendations for leveraging these findings to improve the performance of search engine retrieval and result ranking, along with implications for search engine marketing.
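The metric itself is not reproduced in this abstract; a hypothetical composite of the three listed ingredients (links visited, queries submitted, ranks of clicked links) might look like the following sketch, where the weighting is an assumption for illustration only:

```python
def clickthrough_score(links_visited, queries_submitted, clicked_ranks):
    """Illustrative composite clickthrough metric (hypothetical weighting;
    the study combines these three quantities but this exact formula is
    not taken from it).

    Higher scores mean more engagement per query, with clicks on
    higher-ranked (smaller-rank) results rewarded more."""
    if queries_submitted == 0 or not clicked_ranks:
        return 0.0
    # reciprocal rank rewards clicks near the top of the result list
    rank_quality = sum(1.0 / r for r in clicked_ranks) / len(clicked_ranks)
    return (links_visited / queries_submitted) * rank_quality

# A session with 6 visited links over 2 queries, clicks at ranks 1, 2 and 5
print(clickthrough_score(6, 2, [1, 2, 5]))  # 3 * (1.7 / 3) = 1.7
```

A score of this kind could then serve as the target variable for the neural network described above.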


This paper considers the implications of the permanent/transitory decomposition of shocks for identification of structural models in the general case where the model might contain more than one permanent structural shock. It provides a simple and intuitive generalization of the influential work of Blanchard and Quah [1989. The dynamic effects of aggregate demand and supply disturbances. The American Economic Review 79, 655–673], and shows that structural equations with known permanent shocks cannot contain error correction terms, thereby freeing up the latter to be used as instruments in estimating their parameters. The approach is illustrated by a re-examination of the identification schemes used by Wickens and Motto [2001. Estimating shocks and impulse response functions. Journal of Applied Econometrics 16, 371–387], Shapiro and Watson [1988. Sources of business cycle fluctuations. NBER Macroeconomics Annual 3, 111–148], King et al. [1991. Stochastic trends and economic fluctuations. American Economic Review 81, 819–840], Gali [1992. How well does the IS-LM model fit postwar US data? Quarterly Journal of Economics 107, 709–735; 1999. Technology, employment, and the business cycle: Do technology shocks explain aggregate fluctuations? American Economic Review 89, 249–271] and Fisher [2006. The dynamic effects of neutral and investment-specific technology shocks. Journal of Political Economy 114, 413–451].


This paper studies receiver autonomous integrity monitoring (RAIM) algorithms and the performance benefits of RTK solutions with multiple constellations. The proposed method is termed multi-constellation RAIM (McRAIM). The McRAIM algorithms take advantage of the ambiguity-invariant character to assist fast identification of multiple satellite faults in the context of multiple constellations, and then detect faulty satellites in the follow-up ambiguity search and position estimation processes. The concept of a Virtual Galileo Constellation (VGC) is used to generate useful dual-constellation data sets for performance analysis. Experimental results from a 24-h data set demonstrate that, with GPS and VGC constellations, McRAIM can significantly enhance the probabilities of detection and exclusion of two simultaneous faulty satellites in RTK solutions.
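The fault-detection idea can be illustrated with a deliberately simplified scalar analogue. Real RAIM tests residuals of the position/ambiguity estimation; the toy version below only shows the shared principle, namely that redundancy exposes measurements inconsistent with the consensus estimate:

```python
import statistics

def detect_faulty_measurements(measurements, threshold=3.0):
    """Toy residual-based fault detection in the spirit of RAIM (a scalar
    stand-in for the real position/ambiguity estimation): estimate the
    state as the median of redundant measurements, then flag any
    measurement whose residual exceeds `threshold` times the robust
    spread (median absolute deviation)."""
    est = statistics.median(measurements)
    abs_res = [abs(m - est) for m in measurements]
    mad = statistics.median(abs_res) or 1e-12  # guard against zero spread
    return [i for i, r in enumerate(abs_res) if r / mad > threshold]

# Nine consistent pseudorange-like values plus two simultaneous faults,
# mirroring the two-fault scenario considered in the paper
clean = [100.02, 99.98, 100.01, 99.99, 100.00, 100.03, 99.97, 100.02, 99.98]
faulty = clean + [103.5, 96.2]
print(detect_faulty_measurements(faulty))  # [9, 10]
```

The extra redundancy of a second constellation plays the same role as the extra measurements here: with more consistent observations, simultaneous faults stand out more clearly.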


Objective: To quantify the extent to which alcohol-related injuries are adequately identified in hospitalisation data using ICD-10-AM codes indicative of alcohol involvement. Method: A random sample of 4373 injury-related hospital separations from 1 July 2002 to 30 June 2004 was obtained from a stratified random sample of 50 hospitals across 4 states of Australia. From this sample, cases were identified as involving alcohol if they contained an ICD-10-AM diagnosis or external cause code referring to alcohol, or if the text description extracted from the medical records mentioned alcohol involvement. Results: Overall, identification of alcohol involvement using ICD codes detected 38% of the alcohol-related sample, whilst almost 94% of alcohol-related cases were identified through a search of the text extracted from the medical records. The resultant estimate of alcohol involvement in injury-related hospitalisations in this sample was 10%. Emergency department records were the most likely to identify whether the injury was alcohol-related, with almost three-quarters of alcohol-related cases mentioning alcohol in the text abstracted from these records. Conclusions and Implications: The current best estimates of the frequency of hospital admissions where alcohol is involved prior to the injury underestimate the burden by around 62%. This substantial underestimate has major implications for public policy, and highlights the need for further work on improving the quality and completeness of routine administrative data sources for the identification of alcohol-related injuries.
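The text-search step that recovered almost 94% of alcohol-related cases can be sketched as a keyword screen over the free-text descriptions. The pattern below uses a hypothetical keyword list for illustration, not the study's actual search terms:

```python
import re

# Hypothetical keyword screen for alcohol involvement in free-text
# injury records, illustrating the text-search step used alongside
# ICD-10-AM codes (the study's actual keyword list is not published here).
ALCOHOL_PATTERN = re.compile(
    r"\b(alcohol|intoxicat\w*|drunk|etoh|inebriat\w*)\b", re.IGNORECASE)

def mentions_alcohol(record_text):
    """True if the free-text description suggests alcohol involvement."""
    return bool(ALCOHOL_PATTERN.search(record_text))

records = [
    "Fell from ladder, ETOH on breath, laceration to scalp",
    "Sports injury, twisted ankle during football match",
    "Assault outside hotel, patient intoxicated",
]
flags = [mentions_alcohol(r) for r in records]
print(flags)  # [True, False, True]
```

Comparing such text-derived flags against the coded ICD-10-AM flags, case by case, is what yields the 38% versus 94% detection rates reported above.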


Service bundling can be regarded as an option for service providers to strengthen their competitive advantage, cope with dynamic market conditions and address heterogeneous consumer demand. Despite these positive effects, actual guidance for identifying service bundles, and for the act of bundling itself, remains a gap. Previous research has produced a conceptualization of a service bundling method relying on a structured service description in order to fill this gap. This method supports reasoning about the suitability of services to be part of a bundle, based on analyzing existing relationships between services captured by a description language. This paper extends the aforementioned research by presenting an initial set of empirically derived relationships between services in existing bundles that can subsequently be utilized to identify potential new bundles. Additionally, a gap analysis points out to what extent prominent ontologies and service description languages accommodate the identified relationships.


Indigenous Australians have lower levels of health than mainstream Australians and (as far as statistics are able to indicate) higher levels of disability, yet there is little information on Indigenous social and cultural constructions of disability or the Indigenous experience of disability. This research seeks to address these gaps by using an ethnographic approach, couched within a critical medical anthropology (CMA) framework and using the “three bodies” approach, to study the lived experience of urban Indigenous people with an adult-onset disability. The research approach takes account of the debate about the legitimacy of research into Indigenous Australians, Foucault's governmentality, and the arguments for different models of disability. The possibility of a cultural model of disability is raised. After a series of initial interviews with contacts who were primarily service providers, more detailed ethnographic research was conducted with three Indigenous women in their homes and with four groups of Indigenous women and men at an Indigenous respite centre. The research involved multiple visits over a period extending more than two years, and the establishment of relationships with all participants. An iterative, inductive approach utilising constant comparison (i.e. a form of grounded theory) was adopted, enabling the generation and testing of working hypotheses. The findings point to the lack of an Indigenous construct of disability, related to the holistic construction of health among Indigenous Australians. Shame emerges as a factor which affects the way that Indigenous Australians respond to disability, and which operates in apparent contradiction to expectations of community support. Aspects of shame relate to governmentality, suggesting that self-disciplinary mechanisms have been taken up and support the more obvious exertion of government power. A key finding is the strength of Indigenous identity above and beyond other forms of identification, e.g.
as a person with a disability, expressed in forms of resistance by individuals and service providers to the categories and procedures of the mainstream. The implications of a holistic construction of health are discussed in relation to the use of CMA, the interpretation of the “three bodies”, governmentality and resistance. The explanatory value of the concept of sympatricity is discussed, as is the potential value of a cultural model of disability which takes into account the cultural politics of a defiant Indigenous identity.


Advances in symptom management strategies through a better understanding of cancer symptom clusters depend on the identification of symptom clusters that are valid and reliable. The purpose of this exploratory research was to investigate alternative analytical approaches to identifying symptom clusters for patients with cancer, using readily accessible statistical methods, and to justify which methods of identification may be appropriate in this context. Three studies were undertaken: (1) a systematic review of the literature, to identify the analytical methods commonly used for symptom cluster identification for cancer patients; (2) a secondary data analysis to identify symptom clusters and compare alternative methods, as a guide to best-practice approaches in cross-sectional studies; and (3) a secondary data analysis to investigate the stability of symptom clusters over time. The systematic literature review identified, in the 10 years prior to March 2007, 13 cross-sectional studies implementing multivariate methods to identify cancer-related symptom clusters. The methods commonly used to group symptoms were exploratory factor analysis, hierarchical cluster analysis and principal components analysis. Common factor analysis methods were recommended as the best-practice cross-sectional methods for cancer symptom cluster identification. A comparison of alternative common factor analysis methods was conducted in a secondary analysis of a sample of 219 ambulatory cancer patients with mixed diagnoses, assessed within one month of commencing chemotherapy treatment. Principal axis factoring, unweighted least squares and image factor analysis identified five consistent symptom clusters, based on patients' self-reported distress ratings of 42 physical symptoms. Extraction of an additional cluster was necessary when using alpha factor analysis to determine clinically relevant symptom clusters.
The recommended approaches for symptom cluster identification using non-multivariate-normal data were: principal axis factoring or unweighted least squares for factor extraction, followed by oblique rotation; and use of the scree plot and the Minimum Average Partial procedure to determine the number of factors. In contrast to other studies, which typically interpret pattern coefficients alone, in these studies symptom clusters were determined on the basis of structure coefficients. This approach was adopted for the stability of the results, as structure coefficients are correlations between factors and symptoms that are unaffected by the correlations between factors. Symptoms could be associated with multiple clusters, as a foundation for investigating potential interventions. The stability of these five symptom clusters was investigated in separate common factor analyses 6 and 12 months after chemotherapy commenced. Five qualitatively consistent symptom clusters were identified over time (Musculoskeletal-discomforts/lethargy, Oral-discomforts, Gastrointestinal-discomforts, Vasomotor-symptoms, Gastrointestinal-toxicities), but at 12 months two additional clusters were determined (Lethargy and Gastrointestinal/digestive symptoms). Future studies should include physical, psychological, and cognitive symptoms. Further investigation of the identified symptom clusters is required for validation, to examine causality, and potentially to suggest interventions for symptom management. Future studies should use longitudinal analyses to investigate change in symptom clusters, the influence of patient-related factors, and the impact on outcomes (e.g., daily functioning) over time.
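The distinction between pattern and structure coefficients is a single matrix product: with pattern matrix P and factor correlation matrix Phi from an oblique rotation, the structure matrix is S = P·Phi. A toy illustration with hypothetical loadings (not the study's data):

```python
def structure_coefficients(pattern, phi):
    """S = P * Phi: convert pattern coefficients from an oblique factor
    solution into structure coefficients, i.e. the plain correlations
    between each symptom and each factor.  Matrices are lists of rows;
    the sizes are tiny, so explicit loops suffice."""
    n, k = len(pattern), len(phi)
    return [[sum(pattern[i][m] * phi[m][j] for m in range(k))
             for j in range(k)]
            for i in range(n)]

# Hypothetical 3-symptom, 2-factor example with correlated factors
pattern = [[0.8, 0.0],
           [0.1, 0.7],
           [0.4, 0.4]]
phi = [[1.0, 0.3],
       [0.3, 1.0]]
S = structure_coefficients(pattern, phi)
for row in S:
    print([round(v, 2) for v in row])
```

Note how the third symptom, with modest pattern loadings on both factors, acquires sizeable structure coefficients on both; this is exactly the situation in which symptoms can be associated with multiple clusters, as described above.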


The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for use by the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high-quality reconstructed images with better compression ratios than other available algorithms.
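As a toy illustration of the subband-plus-quantization pipeline described above (using the plain Haar wavelet and a uniform scalar quantizer rather than the thesis's tailored wavelet packet structure and lattice VQ):

```python
def haar_step(signal):
    """One level of the 1-D Haar wavelet transform: pairwise averages
    (low band) and differences (high band), the simplest instance of
    the subband decompositions discussed above."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def quantize(band, step):
    """Uniform scalar quantization of one subband (a stand-in for the
    lattice vector quantizer); only the integer codes are stored."""
    return [round(x / step) for x in band]

def dequantize(codes, step):
    return [c * step for c in codes]

sig = [8.0, 6.0, 7.0, 9.0, 12.0, 14.0, 13.0, 11.0]
low, high = haar_step(sig)
codes = quantize(high, 0.5)       # detail band tolerates coarse coding
rec_high = dequantize(codes, 0.5)
print(low)       # smooth approximation band
print(rec_high)  # reconstructed detail band
```

In the real system, each subband's quantizer step (or lattice truncation level and scaling factor) would be chosen from its fitted generalized Gaussian model and the bit allocation procedure, rather than fixed by hand as here.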
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.


This thesis investigates aspects of encoding the speech spectrum at low bit rates, with extensions to the effect of such coding on automatic speaker identification. Vector quantization (VQ) is a technique for jointly quantizing a block of samples at once, in order to reduce the bit rate of a coding system. The major drawback in using VQ is the complexity of the encoder. Recent research has indicated the potential applicability of the VQ method to speech when product code vector quantization (PCVQ) techniques are utilized. The focus of this research is the efficient representation, calculation and utilization of the speech model as stored in the PCVQ codebook. In this thesis, several VQ approaches are evaluated, and the efficacy of two training algorithms is compared experimentally. It is then shown that these product-code vector quantization algorithms may be augmented with lossless compression algorithms, thus yielding an improved overall compression rate. An approach using a statistical model for the vector codebook indices for subsequent lossless compression is introduced. This coupling of lossy and lossless compression enables further compression gain. It is demonstrated that this approach is able to reduce the bit rate requirement from the current 24 bits per 20-millisecond frame to below 20, using a standard spectral distortion metric for comparison. Several fast-search VQ methods for use in speech spectrum coding have been evaluated. The usefulness of fast-search algorithms is highly dependent upon the source characteristics and, although previous research has been undertaken for coding of images using VQ codebooks trained with the source samples directly, the product-code structured codebooks for speech spectrum quantization place new constraints on the search methodology. The second major focus of the research is an investigation of the effect of low-rate spectral compression methods on the task of automatic speaker identification.
The motivation for this aspect of the research arose from a need to simultaneously preserve the speech quality and intelligibility and to provide for machine-based automatic speaker recognition using the compressed speech. This is important because there are several emerging applications of speaker identification where compressed speech is involved. Examples include mobile communications where the speech has been highly compressed, or where a database of speech material has been assembled and stored in compressed form. Although these two application areas have the same objective - that of maximizing the identification rate - the starting points are quite different. On the one hand, the speech material used for training the identification algorithm may or may not be available in compressed form. On the other hand, the new test material on which identification is to be based may only be available in compressed form. Using the spectral parameters which have been stored in compressed form, two main classes of speaker identification algorithm are examined. Some studies have been conducted in the past on bandwidth-limited speaker identification, but the use of short-term spectral compression deserves separate investigation. Combining the major aspects of the research, some important design guidelines for the construction of an identification model when based on the use of compressed speech are put forward.
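The basic VQ encoding step underlying all of the above can be sketched as a nearest-neighbour search against a codebook; only the winning indices are transmitted or stored. This is a minimal full-search sketch with a made-up two-dimensional codebook (a product-code VQ, as in the thesis, would split each vector and quantize the parts against separate codebooks; fast-search methods avoid the exhaustive scan):

```python
def vq_encode(vectors, codebook):
    """Nearest-neighbour vector quantization: each input vector is
    replaced by the index of the closest codeword under squared
    Euclidean distance, so only indices need to be transmitted."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist2(v, codebook[i]))
            for v in vectors]

codebook = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
frames = [(0.1, -0.1), (0.9, 1.2), (1.8, 0.2), (1.1, 0.8)]
indices = vq_encode(frames, codebook)
print(indices)  # [0, 1, 2, 1]
```

The lossless-compression stage described above would then model the statistics of these index streams (some codewords recur far more often than others) to squeeze out further bits.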


Automatic spoken Language Identification (LID) is the process of identifying the language spoken within an utterance. The challenge that this task presents is that no prior information is available indicating the content of the utterance or the identity of the speaker. The trend of globalization and the pervasive popularity of the Internet will amplify the need for the capabilities spoken language identification systems provide. A prominent application arises in call centers dealing with speakers speaking different languages. Another important application is to index or search huge speech data archives and corpora that contain multiple languages. The aim of this research is to develop techniques targeted at producing a fast and more accurate automatic spoken LID system compared to the previous National Institute of Standards and Technology (NIST) Language Recognition Evaluation. Acoustic and phonetic speech information are targeted as the most suitable features for representing the characteristics of a language. To model the acoustic speech features, a Gaussian Mixture Model based approach is employed. Phonetic speech information is extracted using existing speech recognition technology. Various techniques to improve LID accuracy are also studied. One approach examined is the employment of Vocal Tract Length Normalization to reduce the speech variation caused by different speakers. A linear data fusion technique is adopted to combine the various aspects of information extracted from speech. As a result of this research, a LID system was implemented and presented for evaluation in the 2003 Language Recognition Evaluation conducted by NIST.
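GMM-based scoring of acoustic features reduces to comparing per-language mixture log-likelihoods over the frames of an utterance. A one-dimensional toy version follows; the mixture parameters are hypothetical, and a real system would use multivariate cepstral features with many mixture components trained per language:

```python
import math

def gmm_loglik(x, weights, means, variances):
    """Log-likelihood of a scalar feature under a 1-D Gaussian mixture.
    Each candidate language has its own GMM; the utterance is assigned
    to the language whose model gives the highest total log-likelihood
    summed over frames."""
    p = sum(w * math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for w, m, v in zip(weights, means, variances))
    return math.log(p)

# Hypothetical per-language mixtures over a single acoustic feature
models = {
    "lang_a": ([0.5, 0.5], [-1.0, 0.0], [0.5, 0.5]),
    "lang_b": ([0.5, 0.5], [2.0, 3.0], [0.5, 0.5]),
}
utterance = [1.9, 2.4, 3.1, 2.7]   # frame-level feature values
scores = {lang: sum(gmm_loglik(x, *params) for x in utterance)
          for lang, params in models.items()}
print(max(scores, key=scores.get))  # lang_b
```

Fusion, as used in the thesis, would linearly combine such acoustic scores with scores from the phonetic (speech-recognition-based) stream before the final decision.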


The use of artificial neural networks (ANNs) to identify and control induction machines is proposed. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Both systems are inherently adaptive as well as self-commissioning. The current controller is a completely general nonlinear controller which can be used together with any drive algorithm. Various advantages of these control schemes over conventional schemes are cited, and the combined speed and current control scheme is compared with the standard vector control scheme.


This paper proposes the use of artificial neural networks (ANNs) to identify and control an induction machine. Two systems are presented: a system to adaptively control the stator currents via identification of the electrical dynamics, and a system to adaptively control the rotor speed via identification of the mechanical and current-fed system dynamics. Various advantages of these control schemes over other conventional schemes are cited, and the performance of the combined speed and current control scheme is compared with that of the standard vector control scheme.