985 results for face classification
Abstract:
Computer Aided Control Engineering involves three parallel streams: Simulation and modelling, Control system design (off-line), and Controller implementation. In industry the bottleneck problem has always been modelling, and this remains the case - that is where control (and other) engineers put most of their technical effort. Although great advances in software tools have been made, the cost of modelling remains very high - too high for some sectors. Object-oriented modelling, enabling truly re-usable models, seems to be the key enabling technology here. Software tools to support control systems design have two aspects to them: aiding and managing the work-flow in particular projects (whether of a single engineer or of a team), and provision of numerical algorithms to support control-theoretic and systems-theoretic analysis and design. The numerical problems associated with linear systems have been largely overcome, so that most problems can be tackled routinely without difficulty - though problems remain with (some) systems of extremely large dimensions. Recent emphasis on control of hybrid and/or constrained systems is leading to the emerging importance of geometric algorithms (ellipsoidal approximation, polytope projection, etc). Constantly increasing computational power is leading to renewed interest in design by optimisation, an example of which is MPC. The explosion of embedded control systems has highlighted the importance of autocode generation, directly from modelling/simulation products to target processors. This is the 'new kid on the block', and again much of the focus of commercial tools is on this part of the control engineer's job. Here the control engineer can no longer ignore computer science (at least, for the time being). © 2006 IEEE.
Abstract:
Holistic representations of natural scenes are an effective and powerful source of information for semantic classification and analysis of arbitrary images. Recently, the frequency domain has been successfully exploited to holistically encode the content of natural scenes in order to obtain a robust representation for scene classification. In this paper, we present a new approach to naturalness classification of scenes using the frequency domain. The proposed method is based on the ordering of the Discrete Fourier Power Spectra. Features extracted from this ordering are shown to be sufficient to build a robust holistic representation for Natural vs. Artificial scene classification. Experiments show that the proposed frequency-domain method matches the accuracy of other state-of-the-art solutions. © 2008 Springer Berlin Heidelberg.
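The abstract above describes features built from an ordering of the Discrete Fourier Power Spectrum, but the exact construction is not given. A minimal illustrative sketch of the idea (the feature count `k`, the normalisation, and all function names are assumptions, not taken from the paper) might look like:

```python
import numpy as np

def power_spectrum_features(image, k=32):
    """Sketch: ordered Fourier power-spectrum features for a grayscale image.

    Sorts the 2D power spectrum values in descending order and keeps the
    k largest as a holistic descriptor; the paper's actual ordering-based
    features may differ from this simplification.
    """
    spectrum = np.fft.fft2(image)
    power = np.abs(spectrum) ** 2
    ordered = np.sort(power.ravel())[::-1]   # descending order of power
    feats = ordered[:k]
    return feats / (feats.sum() + 1e-12)     # normalise to unit sum

# Toy usage on a synthetic 16x16 "scene"
img = np.outer(np.sin(np.linspace(0, np.pi, 16)),
               np.cos(np.linspace(0, np.pi, 16)))
f = power_spectrum_features(img, k=8)
```

Such a descriptor could then be fed to any standard classifier for the Natural vs. Artificial decision.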
Abstract:
This paper investigates several approaches to bootstrapping a new spoken language understanding (SLU) component in a target language given a large dataset of semantically-annotated utterances in some other source language. The aim is to reduce the cost associated with porting a spoken dialogue system from one language to another by minimising the amount of data required in the target language. Since word-level semantic annotations are costly, Semantic Tuple Classifiers (STCs) are used in conjunction with statistical machine translation models, both of which are trained from unaligned data to further reduce development time. The paper presents experiments in which a French SLU component in the tourist information domain is bootstrapped from English data. Results show that training STCs on automatically translated data produced the best performance for predicting the utterance's dialogue act type; however, individual slot/value pairs are best predicted by training STCs on the source language and using them to decode translated utterances. © 2010 ISCA.
Abstract:
Most HMM-based TTS systems use a hard voiced/unvoiced classification to produce a discontinuous F0 signal which is used for the generation of the source excitation. When a mixed source excitation is used, this decision can be based on two different sources of information: the state-specific MSD-prior of the F0 models, and/or the frame-specific features generated by the aperiodicity model. This paper examines the meaning of these variables in the synthesis process, their interaction, and how they affect the perceived quality of the generated speech. The results of several perceptual experiments show that when using mixed excitation, subjects consistently prefer samples with very few or no false unvoiced errors, whereas a reduction in the rate of false voiced errors does not produce any perceptual improvement. This suggests that rather than using any form of hard voiced/unvoiced classification, e.g., the MSD-prior, it is better for synthesis to use a continuous F0 signal and rely on the frame-level soft voiced/unvoiced decision of the aperiodicity model. © 2011 IEEE.
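The frame-level soft voiced/unvoiced decision advocated in the abstract above amounts to blending periodic and noise sources per frame rather than switching between them. A minimal sketch under assumed parameters (sampling rate, frame length, and the crude cosine pulse source are all illustrative, not the paper's vocoder):

```python
import numpy as np

def mixed_excitation(f0, aperiodicity, fs=16000, frame_len=80):
    """Sketch: frame-level mixed excitation from a continuous F0 track.

    Each frame blends a periodic source and white noise by its
    aperiodicity weight in [0, 1] (1 = fully noise), so there is no
    hard voiced/unvoiced switch anywhere in the signal.
    """
    rng = np.random.default_rng(0)
    out, phase = [], 0.0
    for f, ap in zip(f0, aperiodicity):
        t = np.arange(frame_len)
        phase_inc = 2.0 * np.pi * f / fs
        periodic = np.cos(phase + phase_inc * t)  # crude periodic source
        phase += phase_inc * frame_len            # keep phase continuous
        noise = rng.standard_normal(frame_len)
        out.append((1.0 - ap) * periodic + ap * noise)
    return np.concatenate(out)

# Two 80-sample frames: mostly periodic, then mostly noise
sig = mixed_excitation([100.0, 120.0], [0.1, 0.9])
```

Because F0 is continuous, the "unvoiced" behaviour emerges only from the aperiodicity weight, matching the paper's argument against the MSD-prior hard decision.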
Abstract:
A brief description is given of a program to carry out two-way classification analysis of variance on the MICRO 2200, for use in fishery data processing.
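The original program ran on a MICRO 2200 and is not reproduced in this abstract; for illustration, the underlying computation for a balanced two-way classification with one observation per cell can be sketched in Python (function name and the example table are assumptions):

```python
import numpy as np

def two_way_anova(table):
    """Two-way classification ANOVA, one observation per cell.

    `table` is an r x c array whose rows and columns are the two
    classification factors. Returns the F statistics for the row
    and column effects against the residual mean square.
    """
    table = np.asarray(table, dtype=float)
    r, c = table.shape
    grand = table.mean()
    ss_rows = c * ((table.mean(axis=1) - grand) ** 2).sum()
    ss_cols = r * ((table.mean(axis=0) - grand) ** 2).sum()
    ss_total = ((table - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols          # residual (interaction)
    df_rows, df_cols, df_err = r - 1, c - 1, (r - 1) * (c - 1)
    f_rows = (ss_rows / df_rows) / (ss_err / df_err)
    f_cols = (ss_cols / df_cols) / (ss_err / df_err)
    return f_rows, f_cols

# Toy 3x3 table, e.g. catch measurements by site (rows) and season (cols)
f_rows, f_cols = two_way_anova([[1, 2, 3], [2, 3, 5], [4, 5, 6]])
```

The F values would then be compared against F(df_rows, df_err) and F(df_cols, df_err) critical values in the usual way.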