3 results for Human behaviour recognition
in WestminsterResearch - UK
Abstract:
Face recognition from images or video footage requires a certain level of recorded image quality. This paper derives acceptable bitrates (relating to levels of compression and consequently quality) of footage with human faces, using an industry implementation of the standard H.264/MPEG-4 AVC and the Closed-Circuit Television (CCTV) recording systems on London buses. The London buses application is utilized as a case study for setting up a methodology and implementing suitable data analysis for face recognition from recorded footage, which has been degraded by compression. The majority of CCTV recorders on buses use a proprietary format based on the H.264/MPEG-4 AVC video coding standard, exploiting both spatial and temporal redundancy. Low bitrates are favored in the CCTV industry for saving storage and transmission bandwidth, but they compromise the image usefulness of the recorded imagery. In this context, usefulness is determined by the presence of enough facial information remaining in the compressed image to allow a specialist to recognize a person. The investigation includes four steps: (1) Development of a video dataset representative of typical CCTV bus scenarios. (2) Selection and grouping of video scenes based on local (facial) and global (entire scene) content properties. (3) Psychophysical investigations to identify the key scenes, which are most affected by compression, using an industry implementation of H.264/MPEG-4 AVC. (4) Testing of CCTV recording systems on buses with the key scenes and further psychophysical investigations. The results showed a dependency upon scene content properties. Very dark scenes and scenes with high levels of spatial–temporal busyness were the most challenging to compress, requiring higher bitrates to maintain useful information.
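A minimal sketch of the kind of bitrate-ladder encoding step such a study implies, assuming FFmpeg's libx264 encoder is available on the system; the file names and bitrate values below are illustrative, not the parameters used in the paper.

```python
# Illustrative sketch (not the paper's actual pipeline): encode one source
# scene at several H.264/MPEG-4 AVC bitrates with FFmpeg, producing variants
# that could then be shown to observers in a psychophysical rating test.
import subprocess

SOURCE = "bus_scene.avi"                    # hypothetical uncompressed source clip
BITRATES_KBPS = [64, 128, 256, 512, 1024]   # illustrative bitrate ladder

for kbps in BITRATES_KBPS:
    out = f"bus_scene_{kbps}k.mp4"
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", SOURCE,
            "-c:v", "libx264",      # H.264/MPEG-4 AVC encoder
            "-b:v", f"{kbps}k",     # target bitrate under test
            "-an",                  # footage analysed without audio
            out,
        ],
        check=True,
    )
    print(f"Encoded {out} at {kbps} kbit/s")
```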
Abstract:
The ability to learn new tasks rapidly is a prominent characteristic of human behaviour. This ability relies on flexible cognitive systems that adapt in order to encode temporary programs for processing non-automated tasks. Previous functional imaging studies have revealed distinct roles for the lateral frontal cortices (LFCs) and the ventral striatum in intentional learning processes. However, the human LFCs are complex; they house multiple distinct sub-regions, each of which co-activates with a different functional network. It remains unclear how these LFC networks differ in their functions and how they coordinate with each other, and the ventral striatum, to support intentional learning. Here, we apply a suite of fMRI connectivity methods to determine how LFC networks activate and interact at different stages of two novel tasks, in which arbitrary stimulus–response rules are learnt either from explicit instruction or by trial-and-error. We report that the networks activate en masse and in synchrony when novel rules are being learnt from instruction. However, these networks are not homogeneous in their functions; instead, the directed connectivities between them vary asymmetrically across the learning timecourse and they disengage from the task sequentially along a rostro-caudal axis. Furthermore, when negative feedback indicates the need to switch to alternative stimulus–response rules, there is additional input to the LFC networks from the ventral striatum. These results support the hypotheses that LFC networks interact as a hierarchical system during intentional learning and that signals from the ventral striatum have a driving influence on this system when the internal program for processing the task is updated.
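As a rough illustration of what a directed (rather than undirected) connectivity measure captures, the toy script below simulates a striatal signal driving an LFC-network signal at a one-sample lag and compares lagged correlations in each direction; this is only a crude proxy under simulated data, not the suite of fMRI connectivity methods used in the study.

```python
# Toy illustration of asymmetric, directed influence between two signals.
import numpy as np

rng = np.random.default_rng(0)
n = 300                                # number of simulated fMRI time points
striatum = rng.normal(size=n)          # toy ventral-striatum signal
lfc = np.zeros(n)                      # toy LFC-network signal
for t in range(1, n):
    # LFC signal partly driven by the striatal signal one step earlier
    lfc[t] = 0.5 * striatum[t - 1] + rng.normal(scale=0.5)

def lagged_corr(source, target, lag=1):
    # Correlation between source at time t-lag and target at time t:
    # a crude proxy for a directed influence from source to target.
    return np.corrcoef(source[:-lag], target[lag:])[0, 1]

print(f"striatum -> LFC : {lagged_corr(striatum, lfc):.2f}")  # expected to be clearly positive
print(f"LFC -> striatum : {lagged_corr(lfc, striatum):.2f}")  # expected to be near zero
```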
Abstract:
Human-Computer Interaction (HCI) with interfaces has been an active and challenging field in industry over the past decades, opening the way to communication by means of verbal, hand and body gestures using the latest technologies for a variety of applications in areas such as video games, training and simulation. However, accurate recognition of gestures is still a challenge. In this paper, we review the basic principles and current methodologies used for collecting raw gesture data from the user and recognizing the actions users perform, as well as the technologies currently used for gesture-based HCI in the games industry. In addition, we present a set of projects from various applications in the games industry that use gestural interaction.
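To make the recognition step concrete, here is a minimal, hypothetical sketch that classifies flattened joint-coordinate features with a k-nearest-neighbour model; the sensor, joint count, gesture labels and synthetic data are all assumptions for illustration, not material from the reviewed projects.

```python
# Minimal illustrative sketch (not from the paper): classify hand gestures
# from flattened joint-coordinate features with a k-nearest-neighbour model.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(42)
N_SAMPLES, N_JOINTS = 200, 21             # e.g. 21 hand joints, 3D coordinates
GESTURES = ["swipe", "grab", "point", "wave"]

# Synthetic stand-in for captured gesture data (x, y, z per joint, flattened)
X = rng.normal(size=(N_SAMPLES, N_JOINTS * 3))
y = rng.choice(GESTURES, size=N_SAMPLES)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(f"Held-out accuracy on synthetic data: {model.score(X_test, y_test):.2f}")
```

In a real gesture-HCI pipeline the synthetic array would be replaced by features extracted from the chosen capture device, and the simple k-NN model by whatever classifier the application requires.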