964 results for "choreography for the camera"


Relevance: 90.00%

Abstract:

In this article we present an approach to object-tracking handover in a network of smart cameras, based on self-interested autonomous agents that exchange responsibility for tracking objects in a market mechanism in order to maximise their own utility. A novel ant-colony-inspired mechanism is used to learn the vision graph, that is, the camera neighbourhood relations, at runtime; this graph may then be used to optimise communication between cameras. The key benefits of our completely decentralised approach are, on the one hand, that the vision graph is generated online, enabling efficient deployment in unknown scenarios and camera network topologies, and, on the other hand, that only local information is used, increasing the robustness of the system. Since our market-based approach does not rely on a priori topology information, the need for any multi-camera calibration can be avoided. We have evaluated our approach both in a simulation study and in a network of real distributed smart cameras.
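The ant-colony-inspired idea can be caricatured as pheromone bookkeeping on camera pairs: successful handovers reinforce a link, and evaporation forgets unused ones. A minimal sketch (all names and parameters are hypothetical, not the paper's implementation):

```python
# Hypothetical sketch of an ant-colony-inspired vision-graph learner:
# each successful handover deposits "pheromone" on the camera-pair link,
# and all links evaporate over time, so stale neighbour relations fade.
class VisionGraph:
    def __init__(self, deposit=1.0, evaporation=0.1):
        self.strength = {}          # (cam_a, cam_b) -> pheromone level
        self.deposit = deposit
        self.evaporation = evaporation

    def record_handover(self, cam_a, cam_b):
        # Reinforce the link between the two cameras involved in a handover.
        key = tuple(sorted((cam_a, cam_b)))
        self.strength[key] = self.strength.get(key, 0.0) + self.deposit

    def evaporate(self):
        # Decay all links; prune those that have become negligible.
        for key in list(self.strength):
            self.strength[key] *= (1.0 - self.evaporation)
            if self.strength[key] < 1e-3:
                del self.strength[key]

    def neighbours(self, cam, threshold=0.5):
        # Cameras whose link pheromone exceeds the threshold.
        out = []
        for (a, b), s in self.strength.items():
            if s >= threshold and cam in (a, b):
                out.append(b if a == cam else a)
        return out
```

The learned neighbour lists could then restrict handover auctions to cameras that have actually exchanged objects before, which is the communication saving the abstract alludes to.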

Relevance: 90.00%

Abstract:

Smart cameras allow video data to be pre-processed on the camera instead of being sent to a remote server for further analysis. A network of smart cameras allows various vision tasks to be processed in a distributed fashion. While cameras may have different tasks, we concentrate on distributed tracking in smart camera networks. This application introduces several highly interesting problems. Firstly, how can conflicting goals be satisfied, such as when cameras in the network try to track objects while also trying to keep communication overhead low? Secondly, how can cameras in the network self-adapt in response to the behavior of objects and changes in scenarios, to ensure continued efficient performance? Thirdly, how can cameras organise themselves to improve the overall network's performance and efficiency? This paper presents a simulation environment, called CamSim, that allows distributed self-adaptation and self-organisation algorithms to be tested without setting up a physical smart camera network. The simulation tool is written in Java and hence is highly portable between operating systems. Abstracting away various computer vision and network communication problems enables a focus on implementing and testing new self-adaptation and self-organisation algorithms for the cameras.

Relevance: 90.00%

Abstract:

A recent trend in smart camera networks is the ability to modify their functionality at runtime to better reflect changes in the observed scenes and in the specified monitoring tasks. In this paper we focus on different configuration methods for such networks. A configuration is given by three components: (i) a description of the camera nodes, (ii) a specification of the area of interest by means of observation points and the associated monitoring activities, and (iii) a description of the analysis tasks. We introduce centralized, distributed and proprioceptive configuration methods and compare their properties and performance. © 2012 IEEE.

Relevance: 90.00%

Abstract:

From 12 to 17 July 2016, the research vessel Maria S. Merian entered the Nordvestfjord of Scoresby Sound (East Greenland) as part of research cruise MSM56, "Ecological chemistry in Arctic fjords". A large variety of chemical and biological parameters of fjord water and meltwater were measured during this cruise to characterize biogeochemical fluxes in Arctic fjords. The photo documentation described here was a side project. It was started when we were close to the Daugaard-Jensen glacier at the end of the Nordvestfjord and realized that not many people have seen this area before, and that photos available to scientists are probably rare. These pictures shall help to document climate and landscape changes in a remote area of East Greenland.

Pictures were taken with a Panasonic Lumix G6 equipped with either a 14-42 mm or a 45-150 mm lens (the zoom factor is available in the jpg metadata). Polarizer filters were used on both lenses. The time between taking a picture and writing down the coordinates was at most one minute but usually shorter. The uncertainty in position is therefore small, as we were steaming slowly (i.e. below 5 knots) most of the time the pictures were taken. I assume the uncertainty is in most cases below a 200 m radius of the noted position. At the beginning I did not check with a compass the direction the camera was pointed; hence, the noted direction is an approximation based on the navigation map and the positioning of the ship. The uncertainty was probably around +/- 40°, but initially (pictures 1-17) perhaps even higher, as this documentation was a spontaneous idea and it took some time to get the orientation right. It should be easy, however, to find the location of the mountains and glaciers when at the respective positions, because the mountains have quite characteristic shapes. In a later stage of this documentation, I took pictures from the bridge and used the gyros to approximate the direction the camera was pointed; here the uncertainty was much lower (i.e. +/- 20° or better). Directions approximated with the help of gyros are given as degree values in the overview table.

The ship data provided in the MSM56 cruise report will contain all kinds of sensor data from the Maria S. Merian sensor setup. These data can also be used to further constrain the positions the pictures were taken from, because the exact time a photo was shot is noted in the metadata of the .jpg file. The shipboard clock was set to UTC and was 57 minutes and 45 seconds behind the time in the camera; for example, 12:57:45 on the camera was 12:00:00 UTC on the ship.

All pictures provided here can be used for scientific purposes. In case of usage in presentations etc., please acknowledge RV Maria S. Merian (MSM56) and Lennart T. Bach as author. Please inform me and ask for reprint permission in case you want to use the pictures in scientific publications. I would like to thank all participants and the crew of Maria S. Merian Cruise 56 (MSM56, Ecological chemistry in Arctic fjords).
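The stated clock offset can be applied mechanically when matching photo timestamps to ship sensor data; a minimal sketch:

```python
from datetime import datetime, timedelta

# Per the text: the camera clock was 57 min 45 s ahead of shipboard UTC
# (12:57:45 on the camera corresponds to 12:00:00 UTC on the ship),
# so subtract the offset from EXIF timestamps to get UTC.
CAMERA_AHEAD_OF_UTC = timedelta(minutes=57, seconds=45)

def camera_to_utc(camera_time: datetime) -> datetime:
    """Convert a camera EXIF timestamp to shipboard UTC."""
    return camera_time - CAMERA_AHEAD_OF_UTC

# Example from the text:
# camera_to_utc(datetime(2016, 7, 15, 12, 57, 45)) -> 2016-07-15 12:00:00 UTC
```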

Relevance: 90.00%

Abstract:

As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies. 
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
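As a toy illustration of the remapping operation itself (shifting a sensitivity map opposite to the upcoming saccade, before the eye actually moves), not of the paper's sheet-based network:

```python
import numpy as np

def remap(activity, saccade):
    """Toy presaccadic remapping of a 2D activity map.

    After a saccade by (dx, dy), the scene shifts on the retina by
    (-dx, -dy); remapping applies that shift in advance, driven by the
    corollary discharge of the saccade command. Illustration only.
    """
    dx, dy = saccade
    return np.roll(np.roll(activity, -dy, axis=0), -dx, axis=1)

# A target represented at map location (row=2, col=3) before a rightward
# saccade of 1 unit is represented at (row=2, col=2) just before the
# eyes move, so downstream pointing stays continuously accurate.
```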

Relevance: 90.00%

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Surging development in robotics and computer vision has pushed the need for algorithms that infer, understand, and utilize information about the position and orientation of the sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
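For orientation (no pun intended), the Bingham special case mentioned above has, up to normalization, the following density on the unit hypersphere; this is given as a standard reminder, not in the thesis's own notation:

```latex
f(\mathbf{x};\, M, Z) \;=\; \frac{1}{F(Z)}\,
  \exp\!\left(\mathbf{x}^{\top} M Z M^{\top} \mathbf{x}\right),
  \qquad \|\mathbf{x}\| = 1,
```

where \(M\) is orthogonal (principal directions), \(Z\) is a diagonal matrix of concentration parameters, and \(F(Z)\) is the normalizing constant. The antipodal symmetry \(f(\mathbf{x}) = f(-\mathbf{x})\) is what makes the distribution natural for unit quaternions, which double-cover the rotation group; the mirrored normal-Bingham distribution of the thesis joins such an orientation component with a normal component over positions.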

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
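At its core, the feature-space inverse perspective mapping described above amounts to pushing coordinates (e.g. feature-cell centres) through a homography rather than resampling pixels. A minimal coordinate-mapping sketch, with the homography `H` assumed given (hypothetical, not the thesis's implementation):

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of points through a 3x3 homography H.

    For inverse perspective mapping, H would be the (hypothetical)
    ground-plane rectifying homography; applied to HOG cell centres
    instead of pixels, it avoids warping the whole image.
    """
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize
```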

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Relevance: 90.00%

Abstract:

In recent years, depth cameras have been widely utilized in camera tracking for augmented and mixed reality. Many studies focus on methods that generate the reference model simultaneously with the tracking, allowing operation in unprepared environments. However, methods that rely on predefined CAD models have their advantages: measurement errors are not accumulated into the model, they are tolerant of inaccurate initialization, and the tracking is always performed directly in the reference model's coordinate system. In this paper, we present a method for tracking a depth camera with existing CAD models and the Iterative Closest Point (ICP) algorithm. In our approach, we render the CAD model using the latest pose estimate and construct a point cloud from the corresponding depth map. We construct another point cloud from the currently captured depth frame, and find the incremental change in camera pose by aligning the two point clouds. We utilize a GPGPU-based implementation of ICP which efficiently uses all the depth data in the process. The method runs in real time, is robust to outliers, and does not require any preprocessing of the CAD models. We evaluated the approach using the Kinect depth sensor and compared the results to a 2D edge-based method, to a depth-based SLAM method, and to the ground truth. The results show that the approach is more stable than the edge-based method and suffers less from drift than the depth-based SLAM method.
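The render-then-align loop described above rests on standard point-to-point ICP. A deliberately naive sketch of that inner alignment (brute-force nearest neighbours and a Kabsch/SVD rigid fit; the paper's GPGPU implementation and CAD rendering are not reproduced here):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Kabsch: least-squares R, t aligning paired points src -> dst."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Point-to-point ICP with O(N*M) nearest neighbours (sketch only)."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d2.argmin(1)]               # closest model point each
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

In the paper's setting, `dst` would come from the depth map rendered off the CAD model at the previous pose, `src` from the live depth frame, and the returned rigid motion is the incremental pose update.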

Relevance: 90.00%

Abstract:

NOGUEIRA, Marcelo B.; MEDEIROS, Adelardo A. D.; ALSINA, Pablo J. Pose Estimation of a Humanoid Robot Using Images from a Mobile Extern Camera. In: IFAC WORKSHOP ON MULTIVEHICLE SYSTEMS, 2006, Salvador, BA. Anais... Salvador: MVS 2006, 2006.

Relevance: 90.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-08

Relevance: 90.00%

Abstract:

When thinking what paintings are, I am continually brought back to my memory of a short sequence in Alfred Hitchcock’s Vertigo. In the scene, Kim Novak’s Madeleine is seated on a bench in an art gallery. She is apparently transfixed by a painting, Portrait of Carlotta. Alongside James Stewart, we watch her looking intently. Madeleine is pretending to be a ghost. At this stage she does not expect us to believe she is a ghost, but simply to immerse ourselves in the conceit, to delight in the shudder. Madeleine’s back is turned away from us, and as the camera draws near to show that the knot pattern in her hair mirrors the image in the portrait, I imagine Madeleine suppressing a smile. She resolutely shows us her back, though, so her feint is not betrayed. Madeleine’s stillness in this scene makes her appear as an object, a thing in the world, a rock or a pile of logs perhaps. We are not looking at that thing, however, but rather a residual image of something creaturely, a spectre. This after-image is held to the ground both by the gravity suggested by its manifestation and by the fine lie, the camouflage, of pretending to be a ghost. Encountering a painting is like meeting Madeleine. It sits in front of its own picture, gazing at it. Despite being motionless and having its back to us, there is a lurching sensation the painting brings about by pretending to be the ghost of its picture, and, at the same time, never really anticipating your credulity.

Relevance: 90.00%

Abstract:

Estimating the relative orientation and position of a camera is one of the central topics in computer vision. The accuracy of a certain Finnish technology company’s traffic sign inventory and localization process can be improved by utilizing this concept. The company’s localization process uses video data produced by a vehicle-mounted camera, and the accuracy of the estimated traffic sign locations depends on the relative orientation between the camera and the vehicle. This thesis proposes a computer-vision-based software solution which estimates a camera’s orientation relative to the movement direction of the vehicle from video data. The task was solved using feature-based methods and open-source software. On simulated data sets, the camera orientation estimates had an average absolute error of 0.31 degrees. The software solution can be integrated into the traffic sign localization pipeline of the company in question.
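The abstract does not detail the method, but one standard feature-based way to relate camera orientation to vehicle motion is via the focus of expansion (FOE) of the optical flow: for a mostly translating camera, flow vectors radiate from a single image point whose offset from the principal point gives the motion direction in camera coordinates. A hedged sketch, with the intrinsics (`fx`, `fy`, `cx`, `cy`) hypothetical:

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares intersection of the lines carried by flow vectors.

    Each flow vector at image point p defines a line through the FOE;
    with unit normal n perpendicular to the flow, the FOE f satisfies
    n . f = n . p for every track.
    """
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)      # unit normals
    b = (n * points).sum(axis=1)
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

def yaw_pitch_offset(foe, fx, fy, cx, cy):
    """Angles (degrees) between motion direction and optical axis,
    given hypothetical pinhole intrinsics."""
    yaw = np.degrees(np.arctan2(foe[0] - cx, fx))
    pitch = np.degrees(np.arctan2(foe[1] - cy, fy))
    return yaw, pitch
```

A FOE at the principal point means the camera looks exactly along the driving direction; any offset converts directly into the yaw/pitch misalignment the localization pipeline must compensate for.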

Relevance: 90.00%

Abstract:

The aim of this text is to discuss how it is possible to manage the art-creating process in a film project, where the circumstances are often turbulent. Normative project management literature proceeds from the idea that a project is realised in a stable world, working toward a clear goal. In a film project there is often a need to change plans, to improvise both in front of and behind the camera. In the theoretical cinematic literature, responsibility for the final film text is increasingly viewed as a product not only of the director but of the whole team's work. Consequently, leadership and management in a film team can be viewed from a relational perspective, where the director and those s/he interacts with are jointly responsible for the actions, relations and social situations they construe in the process of filmmaking. The organization of a film project is a temporary one: the members of a team are seldom the same from one production to another, and the creative process is always unique. According to process thinking, organizing can be seen as the ongoing creative activity by which we structure and stabilize a chaotic, moving reality. For a film project, where the filmic expression is in a process of becoming, careful plans on the one hand, and improvisation and flexibility in action on the other, are preconditions for its realisation. The director, in giving linguistic formulation to what is to be done, can be considered a practical author.