213 results for video capture
Abstract:
First-principles calculations for a hexagonal (graphene-like) boron nitride (g-BN) monolayer sheet in the presence of a boron-atom vacancy show promising properties for the capture and activation of carbon dioxide. CO2 is found to decompose, producing an oxygen molecule via an intermediate chemisorption state on the defective g-BN sheet. The three stationary states and two transition states in the reaction pathway are confirmed by a minimum-energy-pathway search and frequency analysis. The values computed for the two energy barriers involved in this catalytic reaction, after enthalpy correction, indicate that the reaction should proceed readily at room temperature.
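To make "proceed readily at room temperature" concrete, the sketch below applies a transition-state-theory (Eyring) rate estimate to a few illustrative barrier heights; the 0.3-0.8 eV values are hypothetical stand-ins, not the barriers reported in the paper.

```python
import math

# Eyring-style estimate of a rate constant from an activation barrier.
# The barrier values below are purely illustrative, NOT values reported
# in the abstract above.
KB_EV = 8.617333262e-5   # Boltzmann constant, eV/K
H_EVS = 4.135667696e-15  # Planck constant, eV*s

def eyring_rate(barrier_ev: float, temperature_k: float = 298.15) -> float:
    """Rate constant k = (kB*T/h) * exp(-Ea / (kB*T))."""
    prefactor = KB_EV * temperature_k / H_EVS        # attempt frequency, 1/s
    return prefactor * math.exp(-barrier_ev / (KB_EV * temperature_k))

if __name__ == "__main__":
    for ea in (0.3, 0.5, 0.8):  # illustrative barriers in eV
        print(f"Ea = {ea:.1f} eV  ->  k ~ {eyring_rate(ea):.3e} 1/s")
```

Even a 0.8 eV barrier still gives a measurable rate at 298 K, which is the sense in which modest computed barriers imply a room-temperature reaction.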
Abstract:
Concern about the increasing atmospheric CO2 concentration and its impact on the environment has directed increasing attention toward advanced materials and technologies for efficient CO2 capture, storage, and purification of clean-burning natural gas. In this letter, we perform a comprehensive theoretical investigation of CO2, N2, CH4 and H2 adsorption on B2CNTs. Our study shows that CO2 molecules can form strong interactions with B2CNTs in different charge states, whereas N2, CH4 and H2 form only very weak interactions. The study therefore demonstrates that B2CNTs could serve as promising materials for CO2 capture and gas separation.
Abstract:
This paper describes work being conducted in the baseline rail level crossing project, supported by the Australian rail industry and the Cooperative Research Centre for Rail Innovation. The paper discusses the limitations of near-miss data obtained using current level crossing occurrence reporting practices. The project addresses these limitations through the development of a data collection and analysis system with an underlying level crossing accident causation model. An overview of the methodology and the improved data recording process is given. The paper concludes with a brief discussion of the benefits this project is expected to provide to the Australian rail industry.
Abstract:
Collisions between pedestrians and vehicles continue to be a major problem throughout the world. Pedestrians trying to cross roads and railway tracks without caution are highly susceptible to collisions with vehicles and trains. Continuing financial, human and other losses have prompted transport-related organizations to propose various solutions to this issue, yet the quest for new and significant improvements is ongoing. This work addresses the issue by building a general framework that uses computer vision techniques to automatically monitor pedestrian movements in such high-risk areas, enabling better analysis of activity and the creation of future alerting strategies. Rapid development in the electronics and semiconductor industries has led to extensive deployment of CCTV cameras in public places, and the resulting footage can be used to analyse crowd activities in those places. This work seeks to identify abnormal behaviour of individuals in video footage. We propose using a Semi-2D Hidden Markov Model (HMM), a Full-2D HMM and a Spatial HMM to model the normal activities of people; outliers of the model (i.e. those observations with insufficient likelihood) are identified as abnormal activities. Location features, flow features and optical flow textures are used as the features for the model. The proposed approaches are evaluated using the publicly available UCSD datasets, and we demonstrate improved performance using the Semi-2D HMM compared to other state-of-the-art methods. Further, we illustrate how the proposed methods can be applied to detect anomalous events at rail level crossings.
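As a minimal sketch of the outlier idea described above (observations with insufficient likelihood under a model of normal activity are flagged as abnormal), the code below trains an ordinary Gaussian HMM on synthetic "normal" feature vectors and thresholds the per-clip log-likelihood. The `hmmlearn` library, the 3-sigma threshold, and the synthetic features are assumptions; the paper's Semi-2D/Full-2D/Spatial HMM variants and its UCSD features are not reproduced here.

```python
import numpy as np
from hmmlearn import hmm  # assumed stand-in; the paper's Semi-2D HMM is custom

# Train on features extracted from NORMAL footage only (here: synthetic data
# standing in for location / flow / optical-flow-texture features).
rng = np.random.default_rng(0)
normal_train = rng.normal(0.0, 1.0, size=(500, 8))   # 500 frames, 8-dim features

model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
model.fit(normal_train)

def clip_loglik(clip: np.ndarray) -> float:
    """Average per-frame log-likelihood of a clip under the normal-activity model."""
    return model.score(clip) / len(clip)

# Calibrate a likelihood threshold on held-out normal clips (simple 3-sigma rule).
held_out = [rng.normal(0.0, 1.0, size=(60, 8)) for _ in range(20)]
scores = np.array([clip_loglik(c) for c in held_out])
threshold = scores.mean() - 3.0 * scores.std()

# A clip whose likelihood falls below the threshold is flagged as abnormal.
odd_clip = rng.normal(4.0, 1.0, size=(60, 8))        # shifted = "abnormal" motion
print("abnormal" if clip_loglik(odd_clip) < threshold else "normal")
```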
Abstract:
The brief for the creative work was to produce a digital backdrop that would be projected behind, and enhance, a dance performance. The animation needed to display a static kolam pattern that would then dissolve at a choreographed point in the performance. The dissolving mimics the fragmentation of physical kolam patterns throughout the day as people interact with the drawings. The final animated work was incorporated into Vanessa Mafe-Keane’s performance “Paired Back”, performed at the Judith Wright Centre, Brisbane, in 2013 as part of “Dance. Indie Dance.” Through the use of motion capture technology, the dissolving of the pattern is a direct result of the performer’s movements, allowing a visual and temporal connection between the motion of the performer and the digital graphic to be observed. This creative work presented an opportunity to expand upon experiments in the production of experimental visual forms undertaken at QUT using the Xsens MVN Inertial Motion Capture System. The project took the form of an investigation into practice, focusing on the additional complexities of capturing and then applying multiple data sources in the production of animated visuals, and on the considerations involved in producing this type of generative artwork for live performance. The reported outcomes from this investigation have contributed to a larger study on the use of motion capture in the generative arts, furthering the understanding of, and generating theories on, practice.
Abstract:
This paper examines the use of short video tutorials in a post-graduate accounting subject, as a means of helping students transition from dependent to more independent learners. Five short (three to five minute) video tutorials were introduced in an effort to shift the reliance for learning from the lecturer to the student. Students’ usage of video tutorials, comments by students, and reliance on teaching staff for individual assistance were monitored over three semesters from 2008 to 2009. Interviews with students were then conducted in late 2009 to more comprehensively evaluate the use and benefits of video tutorials. Findings reveal preliminary but positive outcomes in terms of both more efficient teaching and more effective learning.
Abstract:
Motion capture continues to be adopted across a range of creative fields including animation, games, visual effects, dance, live theatre and the visual arts. This panel will discuss and showcase the use of motion capture across these creative fields and consider the future of virtual production in the creative industries.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with applications in practical domains such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation develops a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination.

Addressing the first problem, multi-object tracking and localisation, requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras that are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image.

For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours, determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions.

The second problem, camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc. of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met. To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced, and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
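A hedged sketch of the ground-plane-homography step described above: given image-plane detections of the robot and the global coordinates it broadcast at the same instants, a homography can be fitted and then used to map new image points onto the ground plane. OpenCV's `findHomography` is a generic stand-in and all point values are illustrative; the dissertation's distributed protocol and occupancy-map fusion are not reproduced.

```python
import numpy as np
import cv2  # OpenCV used as a generic stand-in for the homography fit

# Image-plane locations of the robot (pixels) and the global ground-plane
# coordinates (metres) it broadcast at the same instants. Values are
# illustrative, not from the dissertation.
img_pts = np.array([[320, 400], [520, 380], [600, 250], [200, 230],
                    [410, 300]], dtype=np.float64)
world_pts = np.array([[0.0, 0.0], [2.0, 0.0], [3.0, 4.0], [-1.5, 4.5],
                      [1.0, 2.5]], dtype=np.float64)

# Fit the image-plane -> ground-plane homography (RANSAC tolerates outliers
# in the robot detections).
H, mask = cv2.findHomography(img_pts, world_pts, method=cv2.RANSAC)

def localise(pixel_xy):
    """Map one image point to global ground-plane coordinates via H."""
    p = np.array([[pixel_xy]], dtype=np.float64)     # shape (1, 1, 2)
    return cv2.perspectiveTransform(p, H)[0, 0]

print(localise((410, 300)))   # should land near (1.0, 2.5)
```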
Abstract:
Mobile video, as an emerging market and a promising research field, has attracted much attention from both industry and researchers. Taking the quality of user experience as the crux of mobile video services, this chapter aims to provide a guide to user-centered studies of mobile video quality. This will benefit future research in better understanding user needs and experiences, designing effective research, and providing solid solutions to improve the quality of mobile video. The chapter is organized in three main parts: (1) a review of recent user studies from the perspectives of research focus, user study methods, and data analysis methods; (2) an example of conducting a user study in mobile video research, together with a discussion of a series of related issues such as participants, materials and devices, study procedure, and analysis of results; and (3) a conclusion with an open discussion of challenges and opportunities in mobile video research and associated potential future improvements.
Abstract:
An experimental dataset representing a typical flow field in a stormwater gross pollutant trap (GPT) was visualised. A technique was developed to apply the image-based flow visualisation (IBFV) algorithm to the raw dataset. Particle image velocimetry (PIV) software had previously been used to capture the flow field data by tracking neutrally buoyant particles with a high-speed camera. The dataset consisted of scattered 2D point velocity vectors, and the IBFV visualisation facilitates flow feature characterisation within the GPT. These flow features play a pivotal role in understanding stormwater pollutant capture and retention behaviour within the GPT. The IBFV animations revealed otherwise unnoticed flow features and experimental artefacts. For example, a circular tracer marker in the IBFV program visually highlighted streamlines, allowing investigation of the possible flow paths of pollutants entering the GPT. The investigated flow paths were compared with the behaviour of pollutants monitored during experiments.
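IBFV itself is a texture-advection technique; the sketch below only illustrates the preprocessing step that any grid-based visualisation of scattered PIV output needs, interpolating scattered point velocity vectors onto a regular grid and drawing streamlines as a static stand-in for the animation. The synthetic vortex field, grid resolution, and library choices (`scipy`, `matplotlib`) are assumptions, not the paper's dataset or algorithm.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

# Scattered 2D PIV vectors (x, y, u, v) standing in for the GPT dataset;
# a swirling synthetic field is used here, not the experimental data.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(400, 2))
u = -pts[:, 1]         # synthetic vortex: u = -y
v = pts[:, 0]          #                   v =  x

# Interpolate the scattered vectors onto a regular grid, the step any
# grid-based visualisation (IBFV, streamlines) needs from raw PIV output.
gx, gy = np.meshgrid(np.linspace(-1, 1, 60), np.linspace(-1, 1, 60))
gu = griddata(pts, u, (gx, gy), method="linear", fill_value=0.0)
gv = griddata(pts, v, (gx, gy), method="linear", fill_value=0.0)

# Streamlines as a static stand-in for the animated IBFV output.
plt.streamplot(gx, gy, gu, gv, density=1.2)
plt.title("Streamlines from scattered PIV vectors (synthetic)")
plt.show()
```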
Abstract:
“Made by Motion” is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN Inertial Motion Capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In this first series, the visual investigation focused on manipulating the movement data at the instance of capture, capture being the recording of three-dimensional movement as ‘seen’ by the hardware and ‘understood’ through the calibration of the software. By repositioning the capture hardware on the body, we can effectively change how the same sequence of movements is ‘seen’ by the motion capture system, thus generating a different visual result from effectively identical movement. The outcomes from the experiments clearly demonstrate the effectiveness of using motion capture hardware as a creative tool to manipulate the perception of the capture subject, in this case a sequence of dance movements. The creative work exhibited is a cross-section of the experiments conducted in practice, with the first animated work (Movement A - Control) using the motion capture hardware in its default ‘normal’ configuration. Following this are the lower body moved to the upper body (Lb-Ub), the right arm moved onto the left arm (Ra-La), the right leg moved onto the left leg (Rl-Ll) and, finally, the left leg moved onto an object that is then held in the left hand (Ll-Pf (Lh)).
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May: a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by, and influencing, the movement of a dancer, along with how those forces can be captured and applied to generate visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN Inertial Motion Capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss selected creative works from the project, along with the process and considerations behind the work.
Abstract:
The aim of this project was to capture the voice of early adolescents (aged between 11 and 13 years) about the things that are genuinely important to them in their lives. Eight participants were asked to record a private video diary entry each night for one week. A number of thematic topics were identified, including their experiences of and perspectives on school curriculum and assessment, opinions about schooling structures, and the importance of friendship and family. Giving young adolescents the opportunity to voice their opinions has been valuable in gaining insight into the relative impacts of teaching and learning approaches in their school contexts and the issues they consider most important in their lives.
Abstract:
The balance between player competence and the challenge presented by a task has been acknowledged as a major factor in providing optimal experience in video games. While Dynamic Difficulty Adjustment (DDA) provides methods for adjusting difficulty in real time during single-player games, little research has explored its application in competitive multiplayer games, where challenge is dictated by the competence of human opponents. A formal review of 180 existing competitive multiplayer games found that a large number of modern games utilize DDA techniques to balance challenge between human opponents. From this data, we propose a preliminary framework for classifying Multiplayer Dynamic Difficulty Adjustment (mDDA) instances.
Abstract:
This paper presents an investigation into event detection in crowded scenes, where the event of interest co-occurs with other activities and only binary labels at the clip level are available. The proposed approach incorporates a fast feature descriptor from the MPEG domain, and a novel multiple instance learning (MIL) algorithm using sparse approximation and random sensing. MPEG motion vectors are used to build particle trajectories that represent the motion of objects in uniform video clips, and the MPEG DCT coefficients are used to compute a foreground map to remove background particles. Trajectories are transformed into the Fourier domain, and the Fourier representations are quantized into visual words using the K-Means algorithm. The proposed MIL algorithm models the scene as a linear combination of independent events, where each event is a distribution of visual words. Experimental results show that the proposed approaches achieve promising results for event detection compared to the state-of-the-art.
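A minimal sketch of the quantisation stage described above (Fourier representations of trajectories clustered into visual words), under stated assumptions: synthetic trajectories stand in for the MPEG motion-vector tracks, and NumPy's FFT plus scikit-learn's KMeans stand in for the paper's pipeline. The MPEG-domain extraction, foreground filtering, and MIL algorithm itself are not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic particle trajectories standing in for MPEG motion-vector tracks:
# each trajectory is T complex samples x(t) + i*y(t).
rng = np.random.default_rng(2)
T, N = 32, 300
t = np.linspace(0, 1, T)
trajs = np.array([
    np.exp(2j * np.pi * rng.integers(1, 4) * t) + 0.05 * rng.normal(size=T)
    for _ in range(N)
])

# Fourier representation: magnitudes of the low-frequency coefficients
# (translation-invariant once the DC term is dropped).
fft = np.fft.fft(trajs, axis=1)
desc = np.abs(fft[:, 1:9])          # keep 8 low-frequency magnitudes

# Quantise the descriptors into a vocabulary of visual words with K-Means.
kmeans = KMeans(n_clusters=20, n_init=10, random_state=0).fit(desc)
words = kmeans.predict(desc)

# A clip is then represented as a histogram over visual words, the input
# to the MIL stage described above.
hist = np.bincount(words, minlength=20)
print(hist)
```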