954 results for Video-camera
Abstract:
Smart cameras perform on-board image analysis, adapt their algorithms to changes in their environment, and collaborate with other networked cameras to analyze the dynamic behavior of objects. A proposed computational framework adopts the concepts of self-awareness and self-expression to manage the complex tradeoffs among performance, flexibility, resources, and reliability more efficiently. The Web extra at http://youtu.be/NKe31-OKLz4 is a video demonstrating CamSim, a smart-camera simulation tool that enables users to test self-adaptive and self-organizing smart-camera techniques without deploying a physical smart-camera network.
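The abstract does not specify the framework's mechanisms, but the flavour of a self-aware, self-expressive camera node can be illustrated with a toy sketch: each camera monitors its own resource load and viewing conditions (self-awareness) and offers an object to whichever neighbour is expected to track it best (self-expression). All names and the utility rule below are illustrative assumptions, not CamSim's API.

```python
# Toy sketch of a self-aware / self-expressive camera node (illustrative only;
# the class, fields and handoff rule are assumptions, not CamSim's API).
from dataclasses import dataclass

@dataclass
class CameraNode:
    cam_id: int
    cpu_load: float  # self-awareness: monitored resource usage in [0, 1]

    def confidence(self, obj) -> float:
        """Self-awareness: expected tracking quality for this object,
        discounted by how busy the camera already is."""
        dist = obj["dist_to"][self.cam_id]
        return max(0.0, 1.0 - dist / 30.0) * (1.0 - self.cpu_load)

def handoff(owner: CameraNode, neighbours: list, obj) -> CameraNode:
    """Self-expression: offer the object to the camera expected to track it
    best, trading tracking quality against resource usage."""
    return max([owner] + neighbours, key=lambda cam: cam.confidence(obj))

# Example: a heavily loaded camera hands a distant object to an idle neighbour.
a = CameraNode(cam_id=1, cpu_load=0.8)
b = CameraNode(cam_id=2, cpu_load=0.1)
obj = {"dist_to": {1: 25.0, 2: 8.0}}
print(handoff(a, [b], obj).cam_id)  # -> 2
```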
Abstract:
In this thesis, we introduce DeReEs-4V, an algorithm for unsupervised and automatic registration of two video frames captured by depth-sensing cameras. DeReEs-4V receives two RGBD video streams from two depth-sensing cameras arbitrarily located in an indoor space, whose captured scenes share at least 25% overlap. The motivation of this research is to employ multiple depth-sensing cameras to enlarge the field of view and acquire more complete and accurate 3D information about the environment. A typical way to combine multiple views from different cameras is through manual calibration. However, this process is time-consuming and may require some technical knowledge. Moreover, calibration has to be repeated whenever the location or position of the cameras changes. In this research, we demonstrate how DeReEs-4V registration can be used to find the transformation of the view of one camera with respect to the other at interactive rates. Our algorithm automatically finds the 3D transformation that matches the views from the two cameras, requires no human intervention, and is robust to camera movement during capture. To validate this approach, a thorough examination of the system performance under different scenarios is presented. The system presented here supports any application that might benefit from the wider field of view provided by the combined scene from both cameras, including applications in 3D telepresence, gaming, people tracking, videoconferencing and computer vision.
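The DeReEs-4V algorithm itself is not detailed in the abstract, but its core step, recovering the rigid 3D transformation that aligns one camera's view with the other, can be sketched with the standard Kabsch/SVD solution once 3D correspondences between the two RGBD views are available. This is a minimal sketch under that assumption, not the authors' implementation.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= R @ P + t, given N
    matched 3D points P, Q of shape (N, 3) (Kabsch/SVD solution)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)            # centroids
    H = (P - cP).T @ (Q - cQ)                           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t

# Example: recover a known rotation/translation from synthetic correspondences.
rng = np.random.default_rng(0)
P = rng.uniform(-1, 1, size=(100, 3))
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([0.5, -0.2, 1.0])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```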
Abstract:
Situational awareness is achieved naturally by the human senses of sight and hearing in combination. Automatic scene understanding aims at replicating this human ability using microphones and cameras in cooperation. In this paper, audio and video signals are fused and integrated at different levels of semantic abstraction. We detect and track a speaker who is relatively unconstrained, i.e., free to move indoors within an area larger than that in comparable reported work, which is usually limited to round-table meetings. The system is relatively simple, consisting of just four microphone pairs and a single camera. Results show that the overall multimodal tracker is more reliable than single-modality systems, tolerating large occlusions and cross-talk. System evaluation is performed on both single- and multi-modality tracking. The performance improvement given by the audio–video integration and fusion is quantified in terms of tracking precision and accuracy as well as speaker diarisation error rate and precision–recall (recognition). Improvements over the closest related works are evaluated: 56% in sound-source localisation computational cost over an audio-only system, 8% in speaker diarisation error rate over an audio-only speaker recognition unit, and 36% on the precision–recall metric over an audio–video dominant-speaker recognition method.
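As a rough illustration of decision-level audio–video fusion (not the tracker described above), the sketch below combines noisy speaker-position measurements from an audio localiser and a video detector in a single constant-velocity Kalman filter; the noise values, frame period and 2D state are assumptions.

```python
import numpy as np

class FusionTracker:
    """Constant-velocity Kalman filter over (x, y, vx, vy); audio and video
    each supply a noisy (x, y) measurement of the speaker's position."""
    def __init__(self, dt=0.04):
        self.x = np.zeros(4)                          # state: position + velocity
        self.P = np.eye(4)
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = 0.01 * np.eye(4)                     # process noise (assumed)
        self.R = {"video": 0.05 * np.eye(2),          # video: precise but occludable
                  "audio": 0.50 * np.eye(2)}          # audio: coarse but always on

    def step(self, z, modality):
        # Predict with the constant-velocity motion model.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with whichever modality produced a measurement this frame.
        S = self.H @ self.P @ self.H.T + self.R[modality]
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]

tracker = FusionTracker()
print(tracker.step([1.0, 2.0], "audio"))   # coarse audio fix
print(tracker.step([1.1, 2.1], "video"))   # refined by the video detection
```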
Abstract:
Estimating the relative orientation and position of a camera is one of the central topics in the field of computer vision. The accuracy of a Finnish technology company’s traffic-sign inventory and localization process can be improved by utilizing this concept. The company’s localization process uses video data produced by a vehicle-mounted camera, and the accuracy of the estimated traffic-sign locations depends on the relative orientation between the camera and the vehicle. This thesis proposes a computer-vision-based software solution that estimates a camera’s orientation relative to the vehicle’s direction of travel from video data. The task was solved using feature-based methods and open-source software. On simulated data sets, the camera-orientation estimates had a mean absolute error of 0.31 degrees. The software solution can be integrated into the traffic-sign localization pipeline of the company in question.
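One common feature-based way to approach this task, sketched below with OpenCV rather than the thesis's actual pipeline, is to recover the camera's translation direction between consecutive frames from the essential matrix and compare it with the optical axis; under the assumption that the vehicle is driving straight, the angle between the two approximates the camera-to-vehicle misalignment. The intrinsic matrix and file names are illustrative.

```python
import cv2
import numpy as np

def camera_vs_motion_angle(frame1, frame2, K):
    """Angle (degrees) between the camera's optical axis and its translation
    direction between two consecutive frames, via the essential matrix."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(frame1, None)
    k2, d2 = orb.detectAndCompute(frame2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    t = t.ravel() / np.linalg.norm(t)
    optical_axis = np.array([0.0, 0.0, 1.0])          # camera looks along +Z
    # abs() sidesteps the sign ambiguity of the epipolar translation.
    return np.degrees(np.arccos(abs(np.dot(t, optical_axis))))

K = np.array([[700.0, 0, 640], [0, 700.0, 360], [0, 0, 1]])   # assumed intrinsics
f1 = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)       # hypothetical frames
f2 = cv2.imread("frame_0002.png", cv2.IMREAD_GRAYSCALE)
print(camera_vs_motion_angle(f1, f2, K))
```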
Abstract:
Designers need to develop good observational skills in order to conduct user studies that reveal the subtleties of human interactions and adequately inform design activity. In this paper we describe a game format that we have used in concert with wiki-web technology to engage our IT and Information Environments students in developing much sharper observational skills. The Video Card Game is a method of video analysis that is suited to design practitioners as well as to researchers. It uses the familiar format of a card game similar to "Happy Families" to help students develop themes of interactions from watching video clips. Students then post their interaction themes on wiki-web pages, which allows the teaching team and other students to edit and comment on them. We found that the tangible (cards), game, role-playing and sharing aspects of this method led to much more interaction and discussion between student groups and between students and the teaching team than we have achieved using our traditional teaching methods, while taking no more time on the part of the teaching staff. The quality of the resulting interaction themes indicates that this method fosters development of observational skills. In the paper we describe the motivations, method and results in full. We also describe the research context in which we collected the videotape data, and how this method relates to state-of-the-art research methods in interaction design for ubiquitous computing technology.
Abstract:
Debates concerning the veracity, ethics and politics of the documentary form circle endlessly around the function of those who participate in it, and the meaning attributed to their participation. Great significance is attached to the way that documentary filmmakers do or do not participate in the world they seek to represent, just as great significance is attached to those subjects whose participation extends beyond playing the part of eyewitness or expert, such that they become part of the very filmmaking process itself. This Ph.D. explores the interface between documentary practice and participatory culture by looking at how their practices, discursive fields and histories intersect, but also by looking at how participating in one might mean participating in the other. In short, the research is an examination of participatory culture through the lens of documentary practice and documentary criticism. In the process, however, this examination of participatory culture will in turn shed light on documentary thinking, especially the meaning and function of ‘the participant’ in contemporary documentary practice.

A number of ways of conceiving of participation in documentary practice are discussed in this research, but one of the ideas that gives purpose to that investigation is the notion that the participant in contemporary documentary practice is someone who belongs to a participatory culture in particular. Not only are the subjects who play a part in a documentary already informed by their engagement with a range of everyday media practices before the documentary apparatus arrives; the audience for such films is similarly informed and engaged. This audience has its own expectations about how it should be addressed by media producers in general, a fact that feeds back into its expectations about participatory approaches to documentary practice too. It is the ambition of this research to get closer to understanding the relationship between participants in the audience, in documentary and ancillary media texts, as well as behind the camera, and to think about how these relationships constitute a context for the production and reception of documentary films, but also how this context might provide a model for thinking about participatory culture itself.

One way that documentary practice and participatory culture converge in this research is in the kind of participatory documentary that I call the ‘Camera Movie’, a narrow mode of documentary filmmaking that appeals directly to contemporary audiences’ desires for innovation and participation, something that is achieved in this case by giving documentary subjects control of the camera. If there is a certain inevitability about this research having to contend with the notion of the ‘participatory documentary’, the ‘participatory camera’ also emerges strongly in this context, especially as a conduit between producer and consumer.

Making up the creative component of this research are two documentaries about the reality television event Band In A Bubble, and participatory media practices more broadly. The single-screen film, Hubbub, gives form to the collective intelligence and polyphonous voice of contemporary audiences who must be addressed and solicited in increasingly innovative ways. One More Like That is a split-screen DVD-Video with alternate audio channels selected by a user who thereby chooses who listens and who speaks in the ongoing conversation between media producers and media consumers.
It should be clear from the description above that my own practice does not extend to highly interactive, multi-authored or web-enabled practices, nor to the distributed practices one might associate with social media and online collaboration. Mine is fundamentally a single-authored documentary video practice that seeks to analyse and represent participatory culture on screen, and for this reason the Ph.D. refrains from a sustained discussion of the kinds of collaborative practices listed above. This is not to say that such practices don’t also represent an important intersection of documentary practice and participatory culture; they simply represent a different point of intersection. Being practice-led, this research takes its procedural cues from the nature of the practice itself, and sketches the parameters that are most enabling of the idea that the practice sets the terms of its own investigation.
Abstract:
When should a person who has a heart attack not be resuscitated? When should a patient no longer be kept alive on a ventilator, or be provided with food and water by a tube? When should a person not be given a blood transfusion they need to stay alive? The answers to these questions depend on a number of factors including the mental or physical condition of the patient and any wishes they have expressed prior to losing the ability to make this decision, as well as the requirements of good medical practice. This video is a record of a public lecture held on 7 July 2004 by the Faculty of Law at the Queensland University of Technology, in association with the Faculty of Health, the Centre for Palliative Care Research and Education, and Palliative Care Queensland.
Abstract:
Manual calibration of large and dynamic networks of cameras is labour-intensive and time-consuming. This is a strong motivator for the development of automatic calibration methods. Automatic calibration relies on the ability to find correspondences between multiple views of the same scene. If the cameras are sparsely placed, this can be a very difficult task. This PhD project focuses on the further development of uncalibrated wide-baseline matching techniques.
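As a hedged illustration of the kind of correspondence search that automatic calibration builds on (not the project's own method), the sketch below matches SIFT features between two widely separated views, applies Lowe's ratio test, and rejects outliers with a RANSAC fundamental-matrix fit using OpenCV; the file names are hypothetical.

```python
import cv2
import numpy as np

def wide_baseline_matches(img1, img2):
    """Putative correspondences between two widely separated views:
    SIFT features, Lowe's ratio test, then RANSAC on the fundamental
    matrix to reject outliers. Returns inlier point pairs and F."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    knn = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in knn if m.distance < 0.75 * n.distance]   # ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel().astype(bool)
    return pts1[inliers], pts2[inliers], F

# Hypothetical image files from two sparsely placed cameras.
a = cv2.imread("cam_a.png", cv2.IMREAD_GRAYSCALE)
b = cv2.imread("cam_b.png", cv2.IMREAD_GRAYSCALE)
p1, p2, F = wide_baseline_matches(a, b)
print(len(p1), "inlier correspondences")
```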