928 results for Time code (Audio-visual technology)
Abstract:
Presentation at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Simple reaction time (SRT) in response to visual stimuli can be influenced by many stimulus features. The speed and accuracy with which observers respond to a visual stimulus may be improved by prior knowledge about the stimulus location, which can be obtained by manipulating the spatial probability of the stimulus. However, when higher spatial probability is achieved by holding constant the stimulus location throughout successive trials, the resulting improvement in performance can also be due to local sensory facilitation caused by the recurrent spatial location of a visual target (position priming). The main objective of the present investigation was to quantitatively evaluate the modulation of SRT by the spatial probability structure of a visual stimulus. In two experiments the volunteers had to respond as quickly as possible to the visual target presented on a computer screen by pressing an optic key with the index finger of the dominant hand. Experiment 1 (N = 14) investigated how SRT changed as a function of both the different levels of spatial probability and the subject's explicit knowledge about the precise probability structure of visual stimulation. We found a gradual decrease in SRT with increasing spatial probability of a visual target regardless of the observer's previous knowledge concerning the spatial probability of the stimulus. Error rates, below 2%, were independent of the spatial probability structure of the visual stimulus, suggesting the absence of a speed-accuracy trade-off. Experiment 2 (N = 12) examined whether changes in SRT in response to a spatially recurrent visual target might be accounted for simply by sensory and temporally local facilitation. The findings indicated that the decrease in SRT brought about by a spatially recurrent target was associated with its spatial predictability, and could not be accounted for solely in terms of sensory priming.
Abstract:
Please consult the paper edition of this thesis to read. It is available on the 5th Floor of the Library at Call Number: Z 9999.5 E38 L64 2008
Abstract:
The label "one-man band" is applied to a wide variety of musicians who distinguish themselves by performing alone what is normally played by several people. The diversity this form has taken over time is not reflected in popular culture, which offers a relatively constant image of the figure, as seen in Walt Disney's Mary Poppins (1964) and Pixar's One-man Band (2005): a single performer in a colourful costume with a bass drum on his back, cymbals between his legs, a guitar or another string instrument in his hands, and a small wind instrument fixed close enough to his mouth to let him alternate between singing and playing. This thesis proposes an analysis of the one-man band that goes beyond mere musical production, situating the phenomenon as a spectacular genre that transmits symbolic content through a tripartite relationship between entertaining performance, spectator and image. The symbolic content is tied to ideas characteristic of the Enlightenment, such as liberty, the individual and a relationship with technology, and is embodied simultaneously by the performers and by the representation of the one-man band in the collective imagination. At the same time, each performance serves to reaffirm the image of the one-man band, an image that through repetition has become a cultural commonplace, existing beyond any single performer or performance. The visual aspect of the one-man band plays an important role in this process, through an unexpected use of the body, a causal relationship between body, technology and musical production, and the use of colourful clothing and non-musical props such as puppets, fireworks or live animals. These spectacular elements entertain the spectators, which translates, among other things, into financial gain for the performer. The entertainment has a phatic function that facilitates the communication of the symbolic content.
Abstract:
Embedded systems are usually designed for a single task or a specified set of tasks. This specificity means the system design, as well as its hardware/software development, can be highly optimized. Embedded software must meet requirements such as highly reliable operation on resource-constrained platforms, real-time constraints and rapid development. This motivates the adoption of static machine-code analysis tools, running on a host machine, for the validation and optimization of embedded system code, which can help meet all of these goals. Such analysis could significantly improve software quality, yet remains a challenging field.
This dissertation contributes an architecture-oriented code validation, error localization and optimization technique that assists the embedded system designer in software debugging, making early detection of otherwise hard-to-find software bugs more effective through static analysis of machine code. The focus of this work is on methods that automatically localize faults and optimize the code, thereby improving both the debugging process and code quality.
Validation is done with the help of rules of inference formulated for the target processor. The rules govern the occurrence of illegitimate or out-of-place instructions and code sequences for executing the computational and integrated peripheral functions. The stipulated rules are encoded in propositional logic formulae, and their compliance is tested individually along all possible execution paths of the application program. An incorrect sequence of machine-code patterns is identified using slicing techniques on the control flow graph generated from the machine code.
An algorithm is proposed that assists the compiler in eliminating redundant bank-switching code and in deciding on the optimal allocation of data to banked memory, so as to minimize the number of bank-switching instructions in embedded system software. A relation matrix and a state transition diagram, formed for the active-memory-bank state transition corresponding to each bank-selection instruction, are used to detect redundant code; instances of code redundancy are identified from the rules stipulated for the target processor. This validation and optimization tool can be integrated into the system development environment. It is a novel approach, independent of compiler and assembler, and applicable to a wide range of processors once appropriate rules are formulated. Program states are identified mainly by machine-code pattern, which drastically reduces state-space creation and contributes to improved model checking. Although the technique described is general, the implementation is architecture-oriented, and hence the feasibility study is conducted on PIC16F87X microcontrollers. The proposed tool will be very useful in steering novices towards correct use of difficult microcontroller features when developing embedded systems.
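The redundant-bank-switch elimination described above boils down to tracking the active-bank state across each bank-selection instruction: a selection whose target bank is already active is a self-loop in the state transition diagram and can be dropped. A minimal sketch in Python, using a hypothetical SELECT_BANK pseudo-instruction rather than real PIC16F87X opcodes:

```python
# Hedged sketch of redundant bank-switch elimination: a bank-select
# instruction is redundant when the bank it selects is already active.
# Instruction names are illustrative, not actual PIC16F87X encodings.

def eliminate_redundant_bank_switches(program):
    """Remove bank-select pseudo-instructions that re-select the active bank."""
    active_bank = 0          # assume bank 0 is active after reset
    optimized = []
    for instr in program:
        if instr.startswith("SELECT_BANK"):
            bank = int(instr.split()[1])
            if bank == active_bank:
                continue     # redundant: the state transition is a self-loop
            active_bank = bank
        optimized.append(instr)
    return optimized

program = [
    "SELECT_BANK 0",         # redundant: bank 0 is active after reset
    "MOVWF counter",
    "SELECT_BANK 1",
    "MOVWF option_reg",
    "SELECT_BANK 1",         # redundant: bank 1 is already active
    "MOVWF trisb",
]
print(eliminate_redundant_bank_switches(program))
```

A full implementation would track the bank state per path of the control flow graph rather than linearly, merging states at join points.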
Abstract:
The modern telecommunication industry demands higher-capacity networks with high data rates. Orthogonal frequency division multiplexing (OFDM) is a promising technique for high-data-rate wireless communication at reasonable complexity over wireless channels, and has been adopted in many wireless systems, such as IEEE 802.11a wireless local area networks and digital audio/video broadcasting (DAB/DVB). The proposed research focuses on a concatenated coding scheme that improves the performance of OFDM-based wireless communications. It uses a Redundant Residue Number System (RRNS) code as the outer code and a convolutional code as the inner code. The bit error rate (BER) performance of the proposed system is investigated under different channel conditions, including the effects of additive white Gaussian noise (AWGN), multipath delay spread, peak power clipping and frame-start synchronization error. The simulation results show that the proposed RRNS-convolutional concatenated coding (RCCC) scheme provides significant improvement in system performance by exploiting the inherent properties of RRNS.
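The error-detecting property of the outer RRNS code rests on redundant moduli: a value is represented by its residues modulo a set of pairwise-coprime moduli, and any legitimate value, reconstructed by the Chinese Remainder Theorem over all residues, must fall within the range spanned by the non-redundant moduli alone. A minimal sketch with assumed moduli (3, 5, 7) plus one redundant modulus 11 — not the parameters used in the paper:

```python
# Hedged RRNS sketch: moduli and range are illustrative assumptions.
from math import prod

MODULI = [3, 5, 7, 11]        # the last modulus is the redundant one
LEGITIMATE_RANGE = 3 * 5 * 7  # valid codewords decode to x < 105

def rrns_encode(x):
    return [x % m for m in MODULI]

def crt_decode(residues):
    """Chinese Remainder Theorem reconstruction over all moduli."""
    M = prod(MODULI)
    x = 0
    for r, m in zip(residues, MODULI):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(.., -1, m): modular inverse
    return x % M

def detect_error(residues):
    """A corrupted residue pushes the decoded value outside the legal range."""
    return crt_decode(residues) >= LEGITIMATE_RANGE

code = rrns_encode(42)
print(detect_error(code))           # False: clean codeword
code[1] = (code[1] + 1) % 5         # corrupt one residue
print(detect_error(code))           # True: error detected
```

With one redundant modulus any single corrupted residue is detectable; more redundant moduli enable correction as well, which is the property the concatenated scheme exploits.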
Abstract:
Today's students are familiar with technologies such as MP3s, podcasts, downloads, social networks and mobile phones; for this reason, the author argues that teachers should understand audio technology and how to use it for classroom teaching. Teachers are given practical advice on recording, editing and mixing audio and on choosing equipment, and are shown that audio is a medium both for engaging primary and secondary pupils in learning and for developing other skills they will need in their lives outside school.
Abstract:
Major challenges remain in the automatic indexing and retrieval of multimedia content for very large corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually carry no knowledge about the semantic content of the data. With the increasing amount of multimedia content, this approach is becoming inefficient. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with NLP support, as automatically generated Topic Maps. The framework is described in the context of film post-production.
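At its core, a topic-map-style index links topics (semantic concepts) to occurrences (locations in the media where they appear), which is what enables retrieval by meaning rather than by keyword. A deliberately tiny sketch of that idea — this is not the DREAM implementation, and the topic names and file names are invented:

```python
# Toy topic-map-style index: topics map to occurrences in media files.
# All names below are illustrative, not from the DREAM project.
topic_map = {}

def add_occurrence(topic, media_file, timecode):
    """Record that a semantic topic occurs in a media file at a timecode."""
    topic_map.setdefault(topic, []).append((media_file, timecode))

def retrieve(topic):
    """Retrieve by semantic content rather than by keyword match."""
    return topic_map.get(topic, [])

add_occurrence("car chase", "scene_04.mov", "00:12:31")
add_occurrence("car chase", "scene_07.mov", "00:03:02")
add_occurrence("dialogue", "scene_04.mov", "00:14:10")

print(retrieve("car chase"))
```

A real Topic Map additionally models associations between topics and typed occurrences, which this sketch omits.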
Abstract:
Greater attention has been focused on the use of CDMA for future cellular mobile communications. A near-far resistant detector for asynchronous code-division multiple-access (CDMA) systems operating in additive white Gaussian noise (AWGN) channels is presented. The multiuser interference caused by K users transmitting simultaneously, each with a specific signature sequence, is completely removed at the receiver. The complexity of this detector grows only linearly with the number of users, whereas the optimum multiuser detector requires complexity exponential in the number of users. A modified algorithm based on time diversity is also described; it performs detection on a bit-by-bit basis and avoids the complexity of a sequence detector. The performance of this detector is shown to be superior to that of the conventional receiver.
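A linear detector of this kind removes multiuser interference by applying the inverse of the signature cross-correlation matrix to the matched-filter outputs (the decorrelating idea). A toy synchronous two-user sketch — the paper treats the harder asynchronous case, and the signature sequences here are illustrative:

```python
# Hedged sketch of a linear decorrelating detector for a synchronous
# 2-user CDMA system; signatures and bits are illustrative assumptions.

def correlate(a, b):
    return sum(x * y for x, y in zip(a, b)) / len(a)

def decorrelate(y, R):
    """Apply R^-1 to matched-filter outputs y (explicit 2x2 inverse)."""
    det = R[0][0] * R[1][1] - R[0][1] * R[1][0]
    return [( R[1][1] * y[0] - R[0][1] * y[1]) / det,
            (-R[1][0] * y[0] + R[0][0] * y[1]) / det]

s1 = [1, 1, 1, -1]                     # +/-1 chip signature of user 1
s2 = [1, 1, -1, -1]                    # +/-1 chip signature of user 2
R = [[correlate(s1, s1), correlate(s1, s2)],
     [correlate(s2, s1), correlate(s2, s2)]]

bits = [+1, -1]                        # transmitted bits of users 1 and 2
received = [sum(b * c for b, c in zip(bits, chips)) for chips in zip(s1, s2)]
y = [correlate(received, s1), correlate(received, s2)]   # matched-filter bank

decisions = [1 if v > 0 else -1 for v in decorrelate(y, R)]
print(decisions)                       # recovers [1, -1] despite correlation
```

The 2x2 inversion generalizes to K users, which is why the complexity grows only polynomially (here linearly per bit once R^-1 is precomputed) rather than exponentially.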
Abstract:
A new generation of advanced surveillance systems is being conceived as a collection of multi-sensor components, such as video, audio and mobile robots, interacting cooperatively to enhance the situation-awareness capabilities of surveillance personnel. The prominent issues these systems face are: improving existing intelligent video surveillance systems, incorporating wireless networks, using low-power sensors, the design architecture, communication between different components, fusing data from different types of sensors, locating personnel (providers and consumers), and system scalability. This paper focuses on the aspects pertaining to real-time distributed architecture and scalability. For example, to meet real-time requirements, these systems need to process data streams in concurrent environments designed with scheduling and synchronisation in mind. The paper proposes a framework for the design of visual surveillance systems based on components derived from the principles of Real Time Networks/Data Oriented Requirements Implementation Scheme (RTN/DORIS). It also proposes implementing these components using the well-known middleware technology Common Object Request Broker Architecture (CORBA). Results for video surveillance using this architecture are presented through an implemented prototype.
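The concurrent stream processing mentioned above typically follows a producer/consumer pattern: sensor components push data into a bounded buffer and a fusion component consumes it, with the buffer providing the synchronisation. A minimal sketch of that pattern in Python threads — this is an illustration of the general idea, not the paper's RTN/DORIS components or its CORBA middleware:

```python
# Hedged sketch: bounded producer/consumer stream processing with threads.
import queue
import threading

frame_queue = queue.Queue(maxsize=8)   # bounded: back-pressure on producers

def sensor(sensor_id, n_frames):
    """Producer: a sensor component pushing frames into the shared queue."""
    for i in range(n_frames):
        frame_queue.put((sensor_id, i))    # blocks when the queue is full
    frame_queue.put((sensor_id, None))     # end-of-stream marker

def fusion(n_sensors, results):
    """Consumer: fuses frames until every sensor has signalled completion."""
    done = 0
    while done < n_sensors:
        sensor_id, frame = frame_queue.get()
        if frame is None:
            done += 1
        else:
            results.append((sensor_id, frame))

results = []
producers = [threading.Thread(target=sensor, args=(s, 3)) for s in range(2)]
consumer = threading.Thread(target=fusion, args=(2, results))
for t in producers:
    t.start()
consumer.start()
for t in producers:
    t.join()
consumer.join()
print(len(results))    # 6 frames fused from 2 sensors
```

A real-time design would replace the unbounded blocking with deadline-aware scheduling, which is precisely what RTN/DORIS-style analysis addresses.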
Abstract:
Visual motion cues play an important role in animal and human locomotion without the need to extract actual ego-motion information. This paper demonstrates a method for estimating the visual motion parameters, namely the time-to-contact (TTC), focus of expansion (FOE) and image angular velocities, from a sparse optical flow estimate registered by a downward-looking camera. The presented method can estimate the visual motion parameters under complicated 6-degree-of-freedom motion, in real time and with accuracy suitable for the visual navigation of mobile robots.
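The geometry behind FOE and TTC can be seen in the simplest case: under pure translation towards the scene, each flow vector satisfies v_i = (p_i - FOE) / TTC, so two noise-free flow samples determine both quantities in closed form. A hedged sketch of that special case — the paper's method handles full 6-degree-of-freedom motion and fits many sparse flow vectors, which this two-point version does not:

```python
# Hedged sketch: FOE and TTC from two noise-free flow vectors under pure
# translation. Coordinates and the synthetic flow field are illustrative.

def foe_and_ttc(p1, v1, p2, v2):
    """Solve p_i - TTC * v_i = FOE for the two unknowns (FOE, TTC)."""
    ttc = (p1[0] - p2[0]) / (v1[0] - v2[0])      # TTC from x components
    foe = (p1[0] - ttc * v1[0], p1[1] - ttc * v1[1])
    return foe, ttc

# Synthetic flow field: FOE at (160, 120), TTC = 2.0 (in frame intervals)
true_foe, true_ttc = (160.0, 120.0), 2.0
p1, p2 = (200.0, 150.0), (100.0, 80.0)
v1 = ((p1[0] - true_foe[0]) / true_ttc, (p1[1] - true_foe[1]) / true_ttc)
v2 = ((p2[0] - true_foe[0]) / true_ttc, (p2[1] - true_foe[1]) / true_ttc)

foe, ttc = foe_and_ttc(p1, v1, p2, v2)
print(foe, ttc)        # recovers (160.0, 120.0) and 2.0
```

With noisy sparse flow, the same linear relation is solved in the least-squares sense over all vectors, and rotational components must first be compensated.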
Abstract:
The influence of visual stimulus intensity on manual reaction time (RT) was investigated under two different attentional settings: high (Experiment 1) and low (Experiment 2) stimulus-location predictability. Both experiments were also run under binocular and monocular viewing conditions. We observed that RT decreased as stimulus intensity increased; it also decreased when the viewing condition changed from monocular to binocular and when location predictability shifted from low to high. A significant interaction was found between stimulus intensity and viewing condition, but no interaction was observed between either of these factors and location predictability. These findings support the idea that the stimulus intensity effect arises from purely sensory, pre-attentive mechanisms rather than from more efficient attentional capture. (C) 2010 Elsevier Ireland Ltd. All rights reserved.
Abstract:
SANTANA, André M.; SANTIAGO, Gutemberg S.; MEDEIROS, Adelardo A. D. Real-Time Visual SLAM Using Pre-Existing Floor Lines as Landmarks and a Single Camera. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... Juiz de Fora: CBA, 2008.
Abstract:
Initial teacher training is normally not sufficient to provide all the tools for an up-to-date, efficient teaching practice. Presented here is one way of complementing initial training through a continuing-education course. The course is based on inquiry teaching, which is considered an important strategy for science education: it improves students' reasoning and cognitive skills, fosters cooperation among them, develops their understanding of the nature of scientific work, and motivates them to think about the relationships between science, technology, society and the environment. For this dissertation, a continuing-education course based on this approach was followed in order to evaluate what it can contribute to teaching practice. The course was followed in three stages: first, a questionnaire and an informal interview; next, participant observation with audio and video recording; and third, a semi-structured interview. The collected information was analyzed using Content Analysis. Pedagogical material on inquiry teaching, including examples and applications of the approach, was produced for the course, with the aim of supporting the teachers after the course ends. The results showed that the course was very useful and quite different from traditional ones, and the teachers who put the approach into practice found it very positive. It can thus be said that some of the participating teachers will try to apply it again, to contextualize teaching situations more closely with their students' everyday lives, and to make the students more active and critical. The study also shows that inquiry teaching is very different from what these teachers were taught and are accustomed to using, and that theoretical comprehension, acceptance and change of practice is a complicated process that demands time.
Abstract:
Includes bibliography