15 results for tutkimusvälineet - kenttätyö - video


Relevance:

20.00%

Publisher:

Abstract:

Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims to replicate this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by audio–video integration and fusion is quantified in terms of tracking precision and accuracy, as well as speaker-diarisation error rate and precision–recall (recognition). Improvements over the closest works are evaluated: a 56% improvement in sound-source-localisation computational cost over an audio-only system, an 8% improvement in speaker-diarisation error rate over an audio-only speaker-recognition unit, and a 36% improvement on the precision–recall metric over an audio–video dominant-speaker-recognition method.
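The core idea of combining modalities so that one compensates when the other fails (occlusion for video, cross-talk for audio) can be sketched as a simple confidence-weighted fusion of position estimates. This is a minimal illustration only; the paper's actual system fuses the signals at several semantic levels, and the function name and confidence scheme here are hypothetical:

```python
def fuse_estimates(audio_pos, audio_conf, video_pos, video_conf):
    """Confidence-weighted fusion of audio and video position estimates.

    Falls back to the single reliable modality when the other drops out
    (e.g. visual occlusion or acoustic cross-talk). Illustrative sketch,
    not the paper's actual fusion scheme.
    """
    if audio_conf <= 0 and video_conf <= 0:
        return None  # neither modality is reliable
    if audio_conf <= 0:
        return video_pos  # audio dropped out: trust video alone
    if video_conf <= 0:
        return audio_pos  # video occluded: trust audio alone
    # both available: weight each estimate by its relative confidence
    w = audio_conf / (audio_conf + video_conf)
    return tuple(w * a + (1 - w) * v for a, v in zip(audio_pos, video_pos))
```

With equal confidences the fused position is the midpoint of the two estimates; as one confidence falls to zero, the output collapses onto the surviving modality.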

Relevance:

20.00%

Publisher:

Abstract:

In this paper we present a convolutional neural network (CNN)-based model for human head-pose estimation in low-resolution multi-modal RGB-D data. We pose the problem as one of classification of human gazing direction. We further fine-tune a regressor based on the learned deep classifier. Next we combine the two models (classification and regression) to estimate approximate regression confidence. We present state-of-the-art results on datasets that span the range from high-resolution human–robot interaction data (close-up faces plus depth information) to challenging low-resolution outdoor surveillance data. We build upon our robust head-pose estimation and further introduce a new visual-attention model to recover interaction with the environment. Using this probabilistic model, we show that many higher-level scene-understanding tasks, such as human–human/scene interaction detection, can be achieved. Our solution runs in real time on commercial hardware.
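One way to derive an approximate confidence from a classifier–regressor pair is to check how well the regressed angle agrees with the classifier's winning direction bin. The sketch below assumes discretised gaze-direction bins and a Gaussian agreement kernel; the function name, the `sigma` parameter and the weighting scheme are illustrative assumptions, not the paper's exact formulation:

```python
import math

def regression_confidence(class_probs, bin_centers, regressed_angle, sigma=15.0):
    """Approximate confidence for a regressed head-pose angle.

    Combines the discrete classifier and the regressor: the classifier's
    certainty in its top bin is down-weighted by how far the regressed
    angle falls from that bin's centre (degrees). Hypothetical scheme
    for illustration only.
    """
    # index of the classifier's most probable direction bin
    top = max(range(len(class_probs)), key=class_probs.__getitem__)
    # angular disagreement between regressor and the winning bin centre
    dist = abs(regressed_angle - bin_centers[top])
    # Gaussian agreement kernel scaled by classifier certainty
    return class_probs[top] * math.exp(-(dist / sigma) ** 2)
```

When the regressor lands exactly on the winning bin centre, the confidence equals the classifier's top probability; large disagreement drives it towards zero.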

Relevance:

20.00%

Publisher:

Abstract:

FPGAs and GPUs are often used when real-time performance in video processing is required. An accelerated processor is chosen based on task-specific priorities (power consumption, processing time and detection accuracy), and this decision is normally made once, at design time. All three characteristics are important, particularly in battery-powered systems. Here we propose a method for moving the selection of processing platform from a single design-time choice to a continuous run-time one. We implement Histogram of Oriented Gradients (HOG) detectors for cars and people and Mixture of Gaussians (MoG) motion detectors running across FPGA, GPU and CPU in a heterogeneous system, and use this to detect illegally parked vehicles in urban scenes. Power, time and accuracy information for each detector is characterised. An anomaly measure is assigned to each detected object based on its trajectory and location, compared against learned contextual movement patterns. This drives processor and implementation selection, so that scenes with high behavioural anomalies are processed with faster but more power-hungry implementations, while routine or static time periods are processed with power-optimised, less accurate, slower versions. Real-time performance is evaluated on video datasets including i-LIDS. Compared to power-optimised static selection, automatic dynamic implementation mapping is 10% more accurate but draws 12 W extra power in our testbed desktop system.
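The run-time mapping the abstract describes can be sketched as a lookup over per-platform characterisation data: high behavioural anomaly selects the lowest-latency (more power-hungry) implementation, while routine periods select the lowest-power one. The power/latency/accuracy figures and the threshold below are invented for illustration; the paper characterises real detector implementations:

```python
# Hypothetical characterisation table: power (W), latency (ms), accuracy
# per platform. Values are invented for illustration.
PROFILES = {
    "fpga": {"power": 5.0,  "latency": 12.0, "accuracy": 0.88},
    "gpu":  {"power": 75.0, "latency": 4.0,  "accuracy": 0.93},
    "cpu":  {"power": 35.0, "latency": 40.0, "accuracy": 0.90},
}

def select_platform(anomaly, profiles=PROFILES, threshold=0.5):
    """Run-time implementation mapping driven by the scene anomaly measure.

    High anomaly -> fastest implementation (accept the power cost);
    routine/static periods -> lowest-power implementation.
    Sketch of the selection policy only, not the paper's full system.
    """
    if anomaly >= threshold:
        # behaviourally anomalous scene: minimise processing latency
        return min(profiles, key=lambda p: profiles[p]["latency"])
    # routine scene: minimise power draw
    return min(profiles, key=lambda p: profiles[p]["power"])
```

In a real deployment the anomaly measure would come from the learned contextual movement model, and the profile table from measured power, timing and accuracy characterisation of each detector build.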

Relevance:

20.00%

Publisher:

Abstract:

Background: The use of simulation in medical education is increasing, with students taught and assessed using simulated patients and manikins. Medical students at Queen's University Belfast are taught advanced life support cardiopulmonary resuscitation as part of the undergraduate curriculum. Teaching and feedback in these skills have been developed at Queen's University with high-fidelity manikins. This study aimed to evaluate the effectiveness of video compared to verbal feedback in the assessment of student cardiopulmonary resuscitation performance.

Methods: Final-year students participated in this study using a high-fidelity manikin in the Clinical Skills Centre, Queen's University Belfast. Cohort A received verbal feedback only on their performance and cohort B received video feedback only. Video analysis using 'StudioCode' software was distributed to students. Each group returned for a second scenario and evaluation 4 weeks later. An assessment tool was created for performance assessment, which included individual-skill and global-score evaluation.

Results: One hundred and thirty-eight final-year medical students completed the study; 62% were female and the mean age was 23.9 years. Students receiving video feedback had significantly greater improvement in overall scores than those receiving verbal feedback (p = 0.006, 95% CI: 2.8–15.8). Individual skills, including ventilation quality, and the global score were significantly better with video feedback (p = 0.002 and p < 0.001, respectively) when compared with cohort A. There was a positive change in overall score for cohort B from session one to session two (p < 0.001, 95% CI: 6.3–15.8), indicating that video feedback significantly benefited skill retention. In addition, video feedback showed a significant improvement in the global score (p < 0.001, 95% CI: 3.3–7.2) and drug-administration timing (p = 0.004, 95% CI: 0.7–3.8) of cohort B participants from session one to session two.

Conclusions: There is increased use of simulation in medicine but a paucity of published data comparing feedback methods in cardiopulmonary resuscitation training. Our study shows that the use of video feedback when teaching cardiopulmonary resuscitation is more effective than verbal feedback and enhances skill retention. This is one of the first studies to demonstrate the benefit of video feedback in cardiopulmonary resuscitation teaching.