25 results for Man-Machine Perceptual Performance.

in the Cambridge University Engineering Department Publications Database


Relevance:

100.00%

Publisher:

Abstract:

In FEA of ring rolling processes, the tools' motions are usually defined prior to simulation. This procedure neglects the closed-loop control used in industrial processes to control up to eight degrees of freedom (rotations, feed rates, guide rolls) in real time, taking into account the machine's performance limits as well as the process evolution. To close this gap, in the new simulation approach all tool motions are controlled according to sensor values calculated within the FE simulation. This procedure yields simulation results that agree more closely with the real machine behaviour. © 2012 CIRP.
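The closed-loop idea described in this abstract can be sketched as a loop in which each FE increment feeds an in-simulation sensor value back to a controller that sets the next tool command. The proportional controller, the scalar sensor, and all gains and limits below are illustrative assumptions, not the paper's implementation:

```python
def clamp(x, lo, hi):
    return max(lo, min(x, hi))

class PController:
    """Proportional controller for one degree of freedom (illustrative)."""
    def __init__(self, gain, limits):
        self.gain = gain
        self.lo, self.hi = limits

    def update(self, target, sensor):
        # Command proportional to the error, clamped to the machine's limits.
        return clamp(self.gain * (target - sensor), self.lo, self.hi)

def simulate(steps, target_growth, fe_step):
    """Closed-loop drive: the tool feed is set at every increment from a
    sensor value computed inside the FE model (here a stand-in function)."""
    ctrl = PController(gain=0.5, limits=(0.0, 2.0))
    sensor = 0.0  # e.g. a ring dimension measured within the FE model
    for _ in range(steps):
        feed = ctrl.update(target_growth, sensor)
        sensor = fe_step(sensor, feed)  # one FE increment with this command
    return sensor
```

Passing a toy `fe_step` that grows the sensed value with the commanded feed shows the loop saturating at the machine limit and then converging on the target, which is the behaviour a prescribed open-loop motion cannot reproduce.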

Relevance:

40.00%

Publisher:

Abstract:

This paper describes the development of a computer-controlled bowing machine that can bow a string with a range of gestures matching or exceeding the capabilities of a human violinist. Example measurements of string vibration under controlled bowing conditions are shown, including a Schelleng diagram and a set of Guettler diagrams, for the open D string of a cello. For some results a rosin-coated rod was used in place of a conventional bow, to provide quantitative data for comparison with theoretical predictions. The results show qualitative consistency with the predictions of Schelleng and Guettler, but details are revealed that go beyond the limitations of existing models. © S. Hirzel Verlag · EAA.

Relevance:

30.00%

Publisher:

Abstract:

This study is the first step in the psychoacoustic exploration of perceptual differences between the sounds of different violins. A method was used which enabled the same performance to be replayed on different "virtual violins," so that the relationships between acoustical characteristics of violins and perceived qualities could be explored. Recordings of real performances were made using a bridge-mounted force transducer, giving an accurate representation of the signal from the violin string. These were then played through filters corresponding to the admittance curves of different violins. Initially, limits of listener performance in detecting changes in acoustical characteristics were characterized. These consisted of shifts in frequency or increases in amplitude of single modes or frequency bands that have been proposed previously to be significant in the perception of violin sound quality. Thresholds were significantly lower for musically trained than for untrained subjects but were not significantly affected by the violin used as a baseline. Thresholds for the musicians typically ranged from 3 to 6 dB for amplitude changes and 1.5%-20% for frequency changes. Interpretation of the results using excitation patterns showed that thresholds for the best subjects were quite well predicted by a multichannel model based on optimal processing. © 2007 Acoustical Society of America.
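The "multichannel model based on optimal processing" mentioned in this abstract combines the sensitivities of independent auditory channels. A standard way to express such optimal combination (an illustrative sketch, not the paper's actual model) is that per-channel d' values add in quadrature:

```python
import math

def combined_dprime(channel_dprimes):
    """Optimal combination of independent channels: d' adds in quadrature."""
    return math.sqrt(sum(d * d for d in channel_dprimes))

# A spectral change that is only weakly detectable within each excitation-
# pattern channel can still be clearly detectable overall:
weak_channels = [0.3] * 25               # d' = 0.3 in each of 25 channels
overall = combined_dprime(weak_channels) # 0.3 * sqrt(25) = 1.5
```

This is why a change spread across many frequency bands can fall below threshold in every single channel yet be heard reliably when the channels are pooled optimally.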

Relevance:

30.00%

Publisher:

Abstract:

This paper investigates several approaches to bootstrapping a new spoken language understanding (SLU) component in a target language given a large dataset of semantically-annotated utterances in some other source language. The aim is to reduce the cost associated with porting a spoken dialogue system from one language to another by minimising the amount of data required in the target language. Since word-level semantic annotations are costly, Semantic Tuple Classifiers (STCs) are used in conjunction with statistical machine translation models, both of which are trained from unaligned data to further reduce development time. The paper presents experiments in which a French SLU component in the tourist information domain is bootstrapped from English data. Results show that training STCs on automatically translated data produced the best performance for predicting the utterance's dialogue act type; however, individual slot/value pairs are best predicted by training STCs on the source language and using them to decode translated utterances. © 2010 ISCA.
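The two bootstrapping strategies this abstract compares can be sketched side by side. A toy keyword classifier and a word-for-word dictionary stand in for the paper's STCs and statistical MT system; all data, lexicons, and function names here are invented for illustration:

```python
# Toy French/English lexicon standing in for a statistical MT system.
EN_FR = {"what": "quel", "is": "est", "the": "le", "price": "prix"}
FR_EN = {v: k for k, v in EN_FR.items()}

def translate(utterance, lexicon):
    """Word-for-word 'translation'; unknown words pass through."""
    return " ".join(lexicon.get(w, w) for w in utterance.split())

def train(examples):
    """Toy classifier: remember which word predicts each dialogue act."""
    model = {}
    for text, act in examples:
        for w in text.split():
            model.setdefault(w, act)
    return model

def classify(model, text):
    for w in text.split():
        if w in model:
            return model[w]
    return None

english_data = [("what is the price", "request(price)"),
                ("goodbye", "bye()")]
french_test = "quel est le prix"

# Strategy A: translate the training data into the target language, then
# train and classify entirely in French (best for dialogue act type).
model_a = train((translate(t, EN_FR), a) for t, a in english_data)
act_a = classify(model_a, french_test)

# Strategy B: train on the source language and decode translated test
# utterances (best for individual slot/value pairs).
model_b = train(english_data)
act_b = classify(model_b, translate(french_test, FR_EN))
```

Both routes reach an answer here, but they differ in where translation noise enters: Strategy A absorbs it at training time, Strategy B at test time, which is why the paper finds they favour different prediction tasks.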

Relevance:

30.00%

Publisher: