209 results for Video-camera
at Queensland University of Technology - ePrints Archive
Abstract:
A simple but accurate method for measuring the Earth’s radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 +/- 118 km, which is within 1.8% of the accepted average value of 6371 km and well within the experimental error. The experiment is suitable as a high school or university project and should produce a value for Earth’s radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
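To make the calculation concrete, here is a minimal sketch of the kind of computation described, assuming the standard sunset geometry cos θ = R/(R + h), where the terminator climbs a height h on the building while the Earth rotates through θ = 2πt/T over the sidereal day T. The geometric relation and the numbers below are illustrative assumptions, not values taken from the paper.

```python
# Hypothetical worked example: Earth's radius from the shadow-rise timing.
# Assumes the standard sunset geometry cos(theta) = R / (R + h), where the
# shadow rises a height h on the wall while the Earth rotates through
# theta = 2*pi*t / T_sidereal. All numbers are illustrative only.
import math

T_SIDEREAL = 86164.1   # length of the sidereal day in seconds
h = 20.0               # distance the shadow rose up the wall (m), illustrative
t = 35.0               # time taken for the shadow to rise that far (s), illustrative

theta = 2.0 * math.pi * t / T_SIDEREAL                      # Earth's rotation during the rise (rad)
radius_m = h * math.cos(theta) / (1.0 - math.cos(theta))    # from cos(theta) = R / (R + h)
print(f"Estimated Earth radius: {radius_m / 1000.0:.0f} km")
```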
Abstract:
Statement: Jams, Jelly Beans and the Fruits of Passion. Let us search, instead, for an epistemology of practice implicit in the artistic, intuitive processes which some practitioners do bring to situations of uncertainty, instability, uniqueness, and value conflict. (Schön 1983, p40) Game On was born out of the idea of creative community; finding, networking, supporting and inspiring the people behind the face of an industry, those in the midst of the machine and those intending to join. We understood this moment to be a pivotal opportunity to nurture a new emerging form of game making, in an era of change, where the old industry models were proving to be unsustainable. As soon as we started putting people into a room under pressure, to make something in 48hrs, a whole pile of evolutionary creative responses emerged. People refashioned their craft in a moment of intense creativity that demanded different ways of working, an adaptive approach to the craft of making games – small – fast – indie. An event like the 48hrs forces participants’ attention on the process as much as the outcome. As one game industry professional taking part in a challenge for the first time observed: there are three paths in the genesis from idea to finished work: the path that focuses on mechanics; the path that focuses on team structure and roles; and the path that focuses on the idea, the spirit – and the more successful teams need to put the spirit of the work first and foremost. The spirit drives the adaptation; it becomes improvisation. As Schön says: “Improvisation consists in varying, combining and recombining a set of figures within the schema which bounds and gives coherence to the performance.” (1983, p55). This improvisational approach is all about those making the games: the people and the principles of their creative process. This documentation evidences the intensity of their passion, determination and the shit that they are prepared to put themselves through to achieve their goal – to win a cup full of jellybeans and make a working game in 48hrs. The 48hr is a project where, on all levels, analogue meets digital. This concept was further explored through the documentation process. This set of four videos was created by Cameron Owen on the fly during the challenge, using both the iPhone video camera and editing software, in order to be available with immediacy and to allow the event audience to share the experience – and perhaps to give some insights into the creative process exposed by the 48 hour challenge. ____________________________ Schön, D. A. 1983, The Reflective Practitioner: How Professionals Think in Action, Basic Books, New York
Abstract:
Coral reefs are biologically complex ecosystems that support a wide variety of marine organisms. These are fragile communities under enormous threat from natural and human-based influences. Properly assessing and measuring the growth and health of reefs is essential to understanding impacts of ocean acidification, coastal urbanisation and global warming. In this paper, we present an innovative 3-D reconstruction technique based on visual imagery as a non-intrusive, repeatable, in situ method for estimating physical parameters, such as surface area and volume for efficient assessment of long-term variability. The reconstruction algorithms are presented, and benchmarked using an existing data set. We validate the technique underwater, utilising a commercial-off-the-shelf camera and a piece of staghorn coral, Acropora cervicornis. The resulting reconstruction is compared with a laser scan of the coral piece for assessment and validation. The comparison shows that 77% of the pixels in the reconstruction are within 0.3 mm of the ground truth laser scan. Reconstruction results from an unknown video camera are also presented as a segue to future applications of this research.
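The accuracy figure above suggests a point-to-point comparison against the laser scan. The sketch below illustrates one plausible way to compute such a figure using nearest-neighbour distances, assuming the two point clouds are already aligned in the same coordinate frame; the paper's exact comparison procedure is not detailed in the abstract, and the data here are random stand-ins.

```python
# Hypothetical sketch of the reported accuracy check: the fraction of
# reconstructed points lying within 0.3 mm of the ground-truth laser scan.
# Point clouds are assumed to be pre-aligned in the same coordinate frame.
import numpy as np
from scipy.spatial import cKDTree

def fraction_within_tolerance(reconstruction, laser_scan, tol_mm=0.3):
    """reconstruction, laser_scan: (N, 3) arrays of points in millimetres."""
    tree = cKDTree(laser_scan)                 # index the ground-truth scan
    dists, _ = tree.query(reconstruction)      # nearest-neighbour distance per point
    return np.mean(dists <= tol_mm)

# Illustrative use with random stand-in data:
recon = np.random.rand(1000, 3) * 100.0
scan = recon + np.random.normal(scale=0.2, size=recon.shape)
print(f"{fraction_within_tolerance(recon, scan):.0%} of points within 0.3 mm")
```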
Abstract:
This study examined the everyday practices of families within the context of family mealtime to investigate how members accomplished mealtime interactions. Using an ethnomethodological approach, conversation analysis and membership categorization analysis, the study investigated the interactional resources that family members used to assemble their social orders moment by moment during family mealtimes. While there is interest in mealtimes within educational policy, health research and the media, there remain few studies that provide fine-grained detail about how members produce the social activity of having a family meal. Findings from this study contribute empirical understandings about families and family mealtime. Two families with children aged 2 to 10 years were observed as they accomplished their everyday mealtime activities. Data collection took place in the family homes where family members video recorded their naturally occurring mealtimes. Each family was provided with a video camera for a one-month period and they decided which mealtimes they recorded, a method that afforded participants greater agency in the data collection process and made available to the analyst a window into the unfolding of the everyday lives of the families. A total of 14 mealtimes across the two families were recorded, capturing 347 minutes of mealtime interactions. Selected episodes from the data corpus, which includes centralised breakfast and dinnertime episodes, were transcribed using the Jeffersonian system. Three data chapters examine extended sequences of family talk at mealtimes, to show the interactional resources used by members during mealtime interactions. The first data chapter explores multiparty talk to show how the uniqueness of the occasion of having a meal influences turn design. It investigates the ways in which members accomplish two-party talk within a multiparty setting, showing how one child "tells" a funny story to accomplish the drawing together of his brothers as an audience. As well, this chapter identifies the interactional resources used by the mother to cohort her children to accomplish the choralling of grace. The second data chapter draws on sequential and categorical analysis to show how members are mapped to a locally produced membership category. The chapter shows how the mapping of members into particular categories is consequential for social order; for example, aligning members who belong to the membership category "had haircuts" and keeping out those who "did not have haircuts". Additional interactional resources such as echoing, used here to refer to the use of exactly the same words, similar prosody and physical action, and increasing physical closeness, are identified as important to the unfolding talk particularly as a way of accomplishing alignment between the grandmother and grand-daughter. The third and final data analysis chapter examines topical talk during family mealtimes. It explicates how members introduce topics of talk with an orientation to their co-participant and the way in which the take up of a topic is influenced both by the sequential environment in which it is introduced and the sensitivity of the topic. Together, these three data chapters show aspects of how family members participated in family mealtimes. The study contributes four substantive themes that emerged during the analytic process and, as such, the themes reflect what the members were observed to be doing. 
The first theme identified how family knowledge was relevant and consequential for initiating and sustaining interaction during mealtime with, for example, members buying into the talk of other members or being requested to help out with knowledge about a shared experience. Knowledge about members and their activities was evident with the design of questions evidencing an orientation to coparticipant’s knowledge. The second theme found how members used topic as a resource for social interaction. The third theme concerned the way in which members utilised membership categories for producing and making sense of social action. The fourth theme, evident across all episodes selected for analysis, showed how children’s competence is an ongoing interactional accomplishment as they manipulated interactional resources to manage their participation in family mealtime. The way in which children initiated interactions challenges previous understandings about children’s restricted rights as conversationalists. As well as making a theoretical contribution, the study offers methodological insight by working with families as research participants. The study shows the procedures involved as the study moved from one where the researcher undertook the decisions about what to videorecord to offering this decision making to the families, who chose when and what to videorecord of their mealtime practices. Evident also are the ways in which participants orient both to the video-camera and to the absent researcher. For the duration of the mealtime the video-camera was positioned by the adults as out of bounds to the children; however, it was offered as a "treat" to view after the mealtime was recorded. While situated within family mealtimes and reporting on the experiences of two families, this study illuminates how mealtimes are not just about food and eating; they are social. The study showed the constant and complex work of establishing and maintaining social orders and the rich array of interactional resources that members draw on during family mealtimes. The family’s interactions involved members contributing to building the social orders of family mealtime. With mealtimes occurring in institutional settings involving young children, such as long day care centres and kindergartens, the findings of this study may help educators working with young children to see the rich interactional opportunities mealtimes afford children, the interactional competence that children demonstrate during mealtimes, and the important role/s that adults may assume as co-participants in interactions with children within institutional settings.
Abstract:
Potential adverse effects on children's health may result from exposure to airborne particles at school. To address this issue, measurements of particle number concentration, particle size distribution and black carbon (BC) concentration were performed in three school buildings in Cassino (Italy) and its suburbs, outside and inside the classrooms during normal occupancy and use. Additional time-resolved information was gathered on ventilation conditions and classroom activity, and traffic count data around the schools were obtained using a video camera. Across the three investigated school buildings, the outdoor and indoor particle number concentrations, monitored from 4 nm up to 3 µm, ranged from 2.8×10⁴ part cm⁻³ to 4.7×10⁴ part cm⁻³ and from 2.0×10⁴ part cm⁻³ to 3.5×10⁴ part cm⁻³, respectively. The total particle concentrations were usually higher outdoors than indoors, because no indoor sources were detected. The measured indoor/outdoor (I/O) ratio was less than 1 (varying in a relatively narrow range from 0.63 to 0.74); however, one school exhibited indoor concentrations higher than outdoor concentrations during the morning rush hours. The particle size distribution at the outdoor site showed high particle concentrations in different size ranges, varying during the day; two modes were found in relation to the start and end of the school day. BC concentrations were 5 times higher at the urban school than at the suburban schools, and the suburban-to-urban differences were larger than the relative differences in ultrafine particle concentrations.
Abstract:
In 2011 Queensland suffered both floods and cyclones, leaving residents without homes and their communities in ruins. This paper presents how researchers from QUT, who are also members of the Oral History Association of Australia (OHAA) Queensland chapter, are using oral history, photographs, videography and digital storytelling to help heal and empower rural communities around the state, and how evaluation has become a key element of our research. QUT researchers ran storytelling workshops in the capital city of Brisbane in early 2011, after the city suffered severe flooding. Cyclone Yasi then struck the town of Cardwell (in February 2011), destroying its historical museum and recording equipment. We delivered an 'emergency workshop', offering participants hands-on use of the equipment along with ethics and interviewing theory, so that the community could start to build a new collection. We included oral history workshops as well as sessions on how best to use a video camera and a digital camera, and creative writing sessions, so the community would also know how to make 'products' or exhibition pieces out of the interviews they were recording. We returned six months later to conduct follow-up workshops, and the material produced by and with the community was amazing. More funding has now been secured to replicate the audio/visual/writing workshops in other remote rural Queensland communities, including Townsville, Mackay, Cunnamulla and Toowoomba, in 2012, highlighting the need for a multimedia approach to leverage the most out of oral history interviews as a mechanism to restore and promote community resilience and pride.
Abstract:
Wildlife conservation involves an understanding of a specific animal, its environment and the interactions within a local ecosystem. Unmanned Aerial Vehicles (UAVs) present a cost-effective, non-intrusive solution for detecting animals over large areas, and the use of thermal imaging cameras offers the ability to detect animals that would otherwise be concealed from visible light cameras. This report examines some of the limitations of using SURF for the development of large maps from multiple still images extracted from the thermal imaging video camera which contain wildlife (e.g. koalas).
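As a rough illustration of the mapping step examined in the report, the sketch below matches SURF keypoints between consecutive thermal frames and estimates a pairwise homography of the kind that can be chained into a larger mosaic. It assumes an OpenCV build with the non-free xfeatures2d module enabled, and the frame filenames are placeholders.

```python
# Minimal sketch: SURF keypoints matched between consecutive thermal video
# frames and a pairwise homography estimated with RANSAC. SURF requires an
# OpenCV build with the non-free xfeatures2d module; frame paths are
# illustrative placeholders.
import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def pairwise_homography(frame_a, frame_b):
    """Estimate the homography mapping frame_b onto frame_a."""
    kp_a, des_a = surf.detectAndCompute(frame_a, None)
    kp_b, des_b = surf.detectAndCompute(frame_b, None)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H

frame_a = cv2.imread("thermal_000.png", cv2.IMREAD_GRAYSCALE)  # illustrative paths
frame_b = cv2.imread("thermal_001.png", cv2.IMREAD_GRAYSCALE)
print(pairwise_homography(frame_a, frame_b))
```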
Abstract:
Surveillance networks are typically monitored by a few people, viewing several monitors displaying the camera feeds. It is then very difficult for a human operator to effectively detect events as they happen. Recently, computer vision research has begun to address ways to automatically process some of this data, to assist human operators. Object tracking, event recognition, crowd analysis and human identification at a distance are being pursued as a means to aid human operators and improve the security of areas such as transport hubs. The task of object tracking is key to the effective use of more advanced technologies. To recognize an event, people and objects must be tracked. Tracking also enhances the performance of tasks such as crowd analysis or human identification. Before an object can be tracked, it must be detected. Motion segmentation techniques, widely employed in tracking systems, produce a binary image in which objects can be located. However, these techniques are prone to errors caused by shadows and lighting changes. Detection routines often fail, either due to erroneous motion caused by noise and lighting effects, or due to the detection routines being unable to split occluded regions into their component objects. Particle filters can be used as a self-contained tracking system, and make it unnecessary for the task of detection to be carried out separately, except for an initial (often manual) detection to initialise the filter. Particle filters use one or more extracted features to evaluate the likelihood of an object existing at a given point in each frame. Such systems, however, do not easily allow for multiple objects to be tracked robustly, and do not explicitly maintain the identity of tracked objects. This dissertation investigates improvements to the performance of object tracking algorithms through improved motion segmentation and the use of a particle filter. A novel hybrid motion segmentation / optical flow algorithm, capable of simultaneously extracting multiple layers of foreground and optical flow in surveillance video frames, is proposed. The algorithm is shown to perform well in the presence of adverse lighting conditions, and the optical flow is capable of extracting a moving object. The proposed algorithm is integrated within a tracking system and evaluated using the ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO - Evaluation for video understanding) database, and significant improvement in detection and tracking performance is demonstrated when compared to a baseline system. A Scalable Condensation Filter (SCF), a particle filter designed to work within an existing tracking system, is also developed. The creation and deletion of modes and the maintenance of identity are handled by the underlying tracking system, and the tracking system is able to benefit from the improved performance that a particle filter provides in uncertain conditions arising from occlusion and noise. The system is evaluated using the ETISEO database. The dissertation then investigates fusion schemes for multi-spectral tracking systems. Four fusion schemes for combining a thermal and visual colour modality are evaluated using the OTCBVS (Object Tracking and Classification in and Beyond the Visible Spectrum) database. It is shown that a middle fusion scheme yields the best results and demonstrates a significant improvement in performance when compared to a system using either mode individually.
Findings from the thesis contribute to improving the performance of semi-automated video processing, and therefore to improving security in areas under surveillance.
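For readers unfamiliar with the technique, a highly simplified particle-filter tracking step of the general kind discussed above is sketched below. It is a generic illustration of predict, weight and resample, not the Scalable Condensation Filter or the thesis's implementation; the motion model and likelihood function are assumptions.

```python
# Generic, simplified particle-filter tracking step: particles are propagated
# with a random-walk motion model, weighted by a feature likelihood at each
# location, and resampled. This sketches the general technique only, not the
# Scalable Condensation Filter described in the dissertation.
import numpy as np

def particle_filter_step(particles, weights, likelihood_fn, motion_std=5.0):
    """particles: (N, 2) image positions; likelihood_fn(x, y) -> float."""
    # Predict: diffuse particles with a simple random-walk motion model.
    particles = particles + np.random.normal(scale=motion_std, size=particles.shape)
    # Update: re-weight each particle by the observation likelihood.
    weights = np.array([likelihood_fn(x, y) for x, y in particles])
    weights = weights / (weights.sum() + 1e-12)
    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    estimate = particles.mean(axis=0)          # tracked object position
    return particles, weights, estimate
```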
Abstract:
CCTV and surveillance networks are increasingly being used for operational as well as security tasks. One emerging area of technology that lends itself to operational analytics is soft biometrics. Soft biometrics can be used to describe a person and to detect them throughout a sparse multi-camera network. This enables them to be used to perform tasks such as determining the time taken to get from point to point, and the paths taken through an environment, by detecting and matching people across disjoint views. However, in a busy environment where there are hundreds if not thousands of people, such as an airport, attempting to monitor everyone is highly unrealistic. In this paper we propose an average soft biometric that can be used to identify people who look distinct, and are thus suitable for monitoring through a large, sparse camera network. We demonstrate how an average soft biometric can be used to identify unique people in order to calculate operational measures such as the time taken to travel from point to point.
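A minimal sketch of the underlying idea follows: subjects whose soft-biometric descriptors lie far from the population average are flagged as distinct enough to monitor across the network. The feature encoding and threshold below are assumptions for illustration, not the paper's formulation.

```python
# Illustrative sketch of the "average soft biometric" idea: people whose
# soft-biometric descriptors sit far from the population average are treated
# as distinct enough to re-detect across a sparse camera network. The feature
# encoding and threshold are hypothetical, not the paper's exact method.
import numpy as np

def distinct_subjects(descriptors, threshold=2.0):
    """descriptors: (N, D) soft-biometric vectors (e.g. clothing colour, height)."""
    mean = descriptors.mean(axis=0)
    std = descriptors.std(axis=0) + 1e-12
    # Distance from the average appearance, in units of population spread.
    distances = np.linalg.norm((descriptors - mean) / std, axis=1)
    return np.where(distances > threshold)[0]   # indices of distinct-looking people
```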
Abstract:
Topographic structural complexity of a reef is highly correlated to coral growth rates, coral cover and overall levels of biodiversity, and is therefore integral in determining ecological processes. Modeling these processes commonly includes measures of rugosity obtained from a wide range of different survey techniques that often fail to capture rugosity at different spatial scales. Here we show that accurate estimates of rugosity can be obtained from video footage captured using underwater video cameras (i.e., monocular video). To demonstrate the accuracy of our method, we compared the results to in situ measurements of a 2 m x 20 m area of forereef from Glovers Reef atoll in Belize. Sequential pairs of images were used to compute fine-scale bathymetric reconstructions of the reef substrate from which precise measurements of rugosity and reef topographic structural complexity can be derived across multiple spatial scales. To achieve accurate bathymetric reconstructions from uncalibrated monocular video, the position of the camera for each image in the video sequence and the intrinsic parameters (e.g., focal length) must be computed simultaneously. We show that these parameters can often be determined when the data exhibits parallax-type motion, and that rugosity and reef complexity can be accurately computed from existing video sequences taken from any type of underwater camera in any reef habitat or location. This technique provides an infinite array of possibilities for future coral reef research by providing a cost-effective and automated method of determining structural complexity and rugosity in both new and historical video surveys of coral reefs.
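As a simple illustration of the rugosity measure, the sketch below computes the ratio of true surface area to planar area from a gridded height map. The grid spacing and data are placeholders, and the paper's multi-scale procedure is not reproduced; this only shows the standard surface-to-planar-area notion of rugosity.

```python
# Hypothetical illustration of a rugosity measure derived from a bathymetric
# reconstruction: the ratio of true surface area to planar area, computed
# from a gridded height map. Grid spacing and data are illustrative only.
import numpy as np

def rugosity(height_map, cell_size=0.01):
    """height_map: 2-D array of depths (m); cell_size: grid spacing (m)."""
    gz_y, gz_x = np.gradient(height_map, cell_size)
    # Area of each surface element relative to its planar footprint.
    area_ratio = np.sqrt(1.0 + gz_x ** 2 + gz_y ** 2)
    return area_ratio.mean()    # 1.0 for a flat substrate, larger when complex

print(rugosity(np.random.rand(200, 200) * 0.05))
```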
Abstract:
Distributed Wireless Smart Camera (DWSC) networks are a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs sees great potential for growth, with applications in a vast practical domain such as security surveillance and health care, it suffers from tremendous constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks, and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems that are discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images from the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions. The second problem of camera placement emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types etc. of the cameras must be chosen in a way that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
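The ground-plane calibration described above can be illustrated with a short sketch: image-plane observations of the robot are paired with the global positions it broadcasts, a homography is fitted, and subsequent detections are mapped into ground-plane coordinates. All point values here are illustrative, and this is a simplification of the approach rather than the dissertation's implementation.

```python
# Simplified sketch of ground-plane calibration: each camera pairs image-plane
# observations of the robot with the global positions the robot broadcasts,
# fits a homography, and then uses it to localise other objects.
# Point values are illustrative placeholders.
import cv2
import numpy as np

# Image-plane positions where the camera saw the robot (pixels), illustrative.
image_pts = np.float32([[120, 340], [300, 330], [480, 360], [250, 200]])
# Corresponding robot positions in the global ground-plane frame (metres).
world_pts = np.float32([[1.0, 2.0], [2.5, 2.1], [4.0, 1.8], [2.2, 5.0]])

H, _ = cv2.findHomography(image_pts, world_pts, cv2.RANSAC)

def localise(pixel_xy):
    """Map an image-plane detection to global ground-plane coordinates."""
    pt = np.float32([[pixel_xy]])                       # shape (1, 1, 2)
    return cv2.perspectiveTransform(pt, H)[0, 0]

print(localise((320, 300)))
```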
Abstract:
At the highest level of competitive sport, nearly all performances of athletes (both training and competitive) are chronicled using video. The video is then often viewed by expert coaches/analysts who manually label important performance indicators to gauge performance. Stroke rate and pacing are important performance measures in swimming, and these have previously been digitised manually by a human. This is problematic as annotating large volumes of video can be costly and time-consuming. Further, since it is difficult to accurately estimate the position of the swimmer at each frame, measures such as stroke rate are generally aggregated over an entire swimming lap. Vision-based techniques which can automatically, objectively and reliably track the swimmer and their location can potentially solve these issues and allow for large-scale analysis of a swimmer across many videos. However, the aquatic environment is challenging due to fluctuations in the scene from splashes and reflections, and because swimmers are frequently submerged at different points in a race. In this paper, we temporally segment races into distinct and sequential states, and propose a multimodal approach which employs individual detectors tuned to each race state. Our approach allows the swimmer to be located and tracked smoothly in each frame despite a diverse range of constraints. We test our approach on a video dataset compiled at the 2012 Australian Short Course Swimming Championships.
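A schematic sketch of the state-based dispatch implied by this approach is shown below; the race-state names and detector functions are hypothetical placeholders rather than the paper's trained models.

```python
# Schematic of the multimodal, state-based approach: a detector tuned to the
# current (temporally segmented) race state is applied to each frame.
# State names and detector functions are hypothetical placeholders.
from typing import Callable, Dict, Optional, Tuple

Detection = Optional[Tuple[int, int]]   # swimmer location (x, y) in pixels, or None

def detect_dive(frame) -> Detection: ...           # placeholder detector
def detect_underwater(frame) -> Detection: ...     # placeholder detector
def detect_free_swimming(frame) -> Detection: ...  # placeholder detector

STATE_DETECTORS: Dict[str, Callable] = {
    "dive": detect_dive,
    "underwater": detect_underwater,
    "free_swimming": detect_free_swimming,
}

def locate_swimmer(frame, race_state: str) -> Detection:
    """Apply the detector tuned to the current race state."""
    return STATE_DETECTORS[race_state](frame)
```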
Abstract:
This paper discusses the idea of, and demonstrates an early prototype of, a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems provide the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to distinguish the locations of the live feeds from experience or from a list-based key-value pairing of locations and camera IDs. During an incident, this method of interaction may cause the user to spend increased amounts of time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and efficiently respond to an incident.
Abstract:
The location of previously unseen and unregistered individuals in complex camera networks from semantic descriptions is a time-consuming and often inaccurate process carried out by human operators, or security staff on the ground. To promote the development and evaluation of automated semantic-description-based localisation systems, we present a new, publicly available, unconstrained 110-sequence database, collected from 6 stationary cameras. Each sequence contains detailed semantic information for a single search subject who appears in the clip (gender, age, height, build, hair and skin colour, clothing type, texture and colour), and between 21 and 290 frames for each clip are annotated with the target subject location (over 11,000 frames are annotated in total). A novel approach for localising a person given a semantic query is also proposed and demonstrated on this database. The proposed approach incorporates clothing colour and type (for clothing worn below the waist), as well as height and build, to detect people. A method to assess the quality of candidate regions, as well as a symmetry-driven approach to aid in modelling clothing on the lower half of the body, is proposed within this approach. An evaluation on the proposed dataset shows that a relative improvement in localisation accuracy of up to 21% is achieved over the baseline technique.
Abstract:
This sensory ethnography explores the affordances and constraints of multimodal design for representing the emotions and appraisal associated with experiencing local places. Digital video production, walking with the camera, and the use of a think-aloud protocol to reflect on the videos provided an opportunity for the primary school children to represent their emotions and appraisal of places multimodally. Applying a typology from Martin and White's (2005) framework for the Language of Evaluation, children's multimodal emotional responses to places in this study tended toward happiness, security, and satisfaction. The findings demonstrate an explicit connection between children's emotions and local places as represented through video, while highlighting the potential for teachers to use digital filmmaking to allow children to reflect actively on their placed experiences and represent their emotional reactions to places through multiple modes.