3 results for Time code (Audio-visual technology)

in Digital Peer Publishing


Relevance: 100.00%

Abstract:

People with disabilities often encounter difficulties when trying to learn, because, for example, teaching material is not accessible to blind people, or the rooms where courses take place are not accessible to wheelchair users. E-learning offers people with disabilities an opportunity to overcome such barriers. With the new German law on the equalisation of opportunities for people with disabilities, access to information technology was explicitly addressed in German legislation for the first time. As a consequence of this law, the framework law on universities (Hochschulrahmengesetz) was amended: it now commits universities not to discriminate against disabled students in their studies. For references on how universities can design accessible e-learning content and provide accessible information online, see http://wob11.de/links/anleitungen.html#elearning.

Relevance: 100.00%

Abstract:

Audio-visual documents obtained from German TV news are classified according to the IPTC topic categorization scheme. To this end, standard text classification techniques are adapted to speech, video, and non-speech audio. For each of the three modalities, word analogues are generated: sequences of syllables for speech, “video words” based on low-level color features (color moments, color correlogram, and color wavelet), and “audio words” based on low-level spectral features (spectral envelope and spectral flatness) for non-speech audio. Such audio and video words provide a means to represent the different modalities in a uniform way. The frequencies of the word analogues represent audio-visual documents: the standard bag-of-words approach. Support vector machines are used for supervised classification in a 1 vs. n setting. Classification based on speech outperforms all other single modalities. Combining speech with non-speech audio improves classification, and classification is further improved by supplementing speech and non-speech audio with video words. Optimal F-scores range between 62% and 94%, corresponding to 50% to 84% above chance. The optimal combination of modalities depends on the category to be recognized. The construction of audio and video words from low-level features provides a good basis for the integration of speech, non-speech audio, and video.
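The bag-of-words step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and the vocabulary of "word analogues" (syllables, video words, audio words) are hypothetical, and the resulting frequency vectors would then be fed to a 1 vs. n SVM classifier.

```python
# Hypothetical sketch of the bag-of-words representation: a document is a
# sequence of "word analogues" from any modality, and its feature vector is
# the frequency of each entry in a fixed shared vocabulary.
from collections import Counter

def bag_of_words(word_analogues, vocabulary):
    """Represent a document as the frequency of each vocabulary entry."""
    counts = Counter(word_analogues)
    return [counts[w] for w in vocabulary]

# Illustrative vocabulary mixing syllables, a video word, and an audio word.
vocab = ["syl_a", "syl_b", "vid_7", "aud_3"]
doc = ["syl_a", "syl_a", "vid_7", "syl_b"]
print(bag_of_words(doc, vocab))  # -> [2, 1, 1, 0]
```

Because all three modalities are reduced to the same kind of count vector, combining modalities amounts to concatenating their vocabularies and vectors.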

Relevance: 100.00%

Abstract:

In this paper, the software architecture of a framework that simplifies the development of applications in the area of Virtual and Augmented Reality is presented. It is based on VRML/X3D to enable rendering of audio-visual information. We extended our VRML rendering system with a device management system based on the concept of a data-flow graph. The aim of the system is to create Mixed Reality (MR) applications simply by plugging together small prefabricated software components, instead of compiling monolithic C++ applications. The flexibility and the advantages of the presented framework are explained on the basis of an exemplary implementation of a classic Augmented Reality application and its extension to a collaborative remote expert scenario.
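The data-flow-graph idea behind the device management system can be sketched as follows. This is a minimal illustration under stated assumptions, not the framework's API: the `Node` class and the `tracker`/`offset` components are hypothetical, standing in for the prefabricated components that are plugged together.

```python
# Minimal sketch of a pull-based data-flow graph: each node computes its
# output from the outputs of its upstream nodes, so an application is built
# by plugging nodes together rather than writing monolithic code.
# (Node, tracker, offset are illustrative names, not the framework's API.)

class Node:
    """A processing step whose output is derived from upstream node outputs."""

    def __init__(self, fn, inputs=()):
        self.fn = fn                # function applied to upstream values
        self.inputs = list(inputs)  # upstream nodes this node is plugged into

    def evaluate(self):
        # Pull values through the graph: evaluate upstreams, then apply fn.
        return self.fn(*(node.evaluate() for node in self.inputs))

# Example: a fake tracker source node feeding a coordinate-offset filter node.
tracker = Node(lambda: (1.0, 2.0, 3.0))
offset = Node(lambda p: tuple(c + 0.5 for c in p), [tracker])
print(offset.evaluate())  # -> (1.5, 2.5, 3.5)
```

Swapping a device or a filter then only means reconnecting nodes, which mirrors the "plugging together small prefabricated software components" design goal described above.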