926 results for cameras and camera accessories
Abstract:
This thesis addresses the reconstruction of a 3D model from multiple images. The 3D model is built on a hierarchical voxel representation in the form of an octree. A cube enclosing the 3D model is computed from the camera positions; this cube contains the voxels and defines the positions of virtual cameras. The 3D model is initialised with a convex envelope derived from the uniform background colour of the images, which allows the periphery of the model to be carved away. A weighted cost is then computed to evaluate how well each voxel belongs to the object's surface; this cost takes into account the similarity of the pixels contributed by each image associated with the virtual camera. Finally, for each virtual camera, a surface is computed from this cost using the SGM method. SGM takes the neighbourhood into account when computing depth, and this thesis presents a variation of the method that accounts for voxels previously excluded from the model by the initialisation step or carved away by another surface. The computed surfaces are then used to carve and finalise the 3D model. This thesis presents an innovative combination of steps for creating a 3D model from an existing set of images, or from a series of images captured sequentially, potentially leading to real-time 3D model creation.
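The pipeline above begins with a bounding cube derived from the camera layout, subdivided hierarchically as an octree. A minimal sketch of those two steps (the helper names and the padding margin are illustrative assumptions, not the thesis's actual code):

```python
import numpy as np

def bounding_cube(camera_positions, margin=0.5):
    """Axis-aligned cube enclosing the camera positions (hypothetical helper).

    The thesis derives the model's bounding cube from the camera layout;
    here we simply take the centroid and the largest extent, padded by a margin.
    """
    pts = np.asarray(camera_positions, dtype=float)
    center = pts.mean(axis=0)
    half = np.abs(pts - center).max() * (1.0 + margin)
    return center, half

def subdivide(center, half):
    """Split a cube into its 8 octree children (center and half-size each)."""
    child_half = half / 2.0
    offsets = np.array([[sx, sy, sz] for sx in (-1, 1)
                                     for sy in (-1, 1)
                                     for sz in (-1, 1)], dtype=float)
    return [(center + off * child_half, child_half) for off in offsets]

cams = [[0, 0, 2], [2, 0, 0], [0, 2, 0], [-2, 0, 0]]
c, h = bounding_cube(cams)
children = subdivide(c, h)
print(len(children))  # 8 children per octree node
```

Carving then amounts to recursively discarding or refining children according to the silhouette and cost tests described above.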
Abstract:
This paper describes a simple method for internal camera calibration for computer vision. The method is based on tracking image features through a sequence of images while the camera undergoes pure rotation. The location of the features relative to the camera or to each other need not be known, and therefore this method can be used both for laboratory calibration and for self-calibration in autonomous robots working in unstructured environments. A second method of calibration is also presented. This method uses simple geometric objects such as spheres and straight lines to determine the camera parameters. Calibration is performed using both methods and the results are compared.
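The constraint the first method exploits can be checked numerically: under a pure rotation R with intrinsics K, image points map through the infinite homography H = K R K⁻¹ regardless of scene depth, which is what makes depth-free self-calibration possible. A small sketch with assumed intrinsic values:

```python
import numpy as np

# Intrinsics K and a pure rotation R (illustrative values, not from the paper).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(10.0)
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])

# Under pure rotation, image points map by the infinite homography
# H = K R K^{-1}, independent of scene depth -- this is the constraint
# that tracked features provide for self-calibration.
H = K @ R @ np.linalg.inv(K)

# A scene point at any depth projects consistently through H:
X = np.array([1.0, 0.5, 4.0])            # 3D point in the first camera frame
x1 = K @ X; x1 /= x1[2]                  # projection before rotation
x2 = K @ (R @ X); x2 /= x2[2]            # projection after rotation
x1_mapped = H @ x1; x1_mapped /= x1_mapped[2]
print(np.allclose(x1_mapped, x2))        # True
```

Given enough tracked correspondences, H can be estimated from the images alone and K recovered from it.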
Abstract:
Getting images from your mobile phone is best done using Bluetooth. Remember that the image quality on these phones will not be high, and you may find you can only print very small images; however, camera phones are great for ease of use, and the photos look fine on screen.
Abstract:
The paper reports an interactive tool for calibrating a camera, suitable for use in outdoor scenes. The motivation for the tool was the need to obtain an approximate calibration for images taken with no explicit calibration data. Such images are frequently presented to research laboratories, especially in surveillance applications, with a request to demonstrate algorithms. The method decomposes the calibration parameters into intuitively simple components, and relies on the operator interactively adjusting the parameter settings to achieve a visually acceptable agreement between a rectilinear calibration model and his own perception of the scene. Using the tool, we have been able to calibrate images of unknown scenes, taken with unknown cameras, in a matter of minutes. The standard of calibration has proved to be sufficient for model-based pose recovery and tracking of vehicles.
Abstract:
Capsule Avian predators are principally responsible. Aims To document the fate of Spotted Flycatcher nests and to identify the species responsible for nest predation. Methods During 2005-06, purpose-built, remote, digital nest-cameras were deployed at 65 out of 141 Spotted Flycatcher nests monitored in two study areas, one in south Devon and the second on the border of Bedfordshire and Cambridgeshire. Results Of the 141 nests monitored, 90 were successful (non-camera nests, 49 out of 76 successful, camera nests, 41 out of 65). Fate was determined for 63 of the 65 nests monitored by camera, with 20 predation events documented, all of which occurred during daylight hours. Avian predators carried out 17 of the 20 predations, with the principal nest predator identified as Eurasian Jay Garrulus glandarius. The only mammal recorded predating nests was the Domestic Cat Felis catus, the study therefore providing no evidence that Grey Squirrels Sciurus carolinensis are an important predator of Spotted Flycatcher nests. There was no evidence of differences in nest survival rates at nests with and without cameras. Nest remains following predation events gave little clue as to the identity of the predator species responsible. Conclusions Nest-cameras can be useful tools in the identification of nest predators, and may be deployed with no subsequent effect on nest survival. The majority of predation of Spotted Flycatcher nests in this study was by avian predators, principally the Jay. There was little evidence of predation by mammalian predators. Identification of specific nest predators enhances studies of breeding productivity and predation risk.
Abstract:
Garment information tracking is required for clean room garment management. In this paper, we present a robust camera-based system implementing Optical Character Recognition (OCR) techniques to perform garment label recognition. In the system, a camera is used for image capturing; an adaptive thresholding algorithm is employed to generate binary images; Connected Component Labelling (CCL) is then adopted for object detection in the binary image as part of finding the ROI (Region of Interest); Artificial Neural Networks (ANNs) with the BP (Back Propagation) learning algorithm are used for digit recognition; and finally the result is verified against a system database. The system has been tested. The results show that it is capable of coping with variance in lighting, digit twisting, background complexity, and font orientations. The system's performance in terms of digit recognition rate has met the design requirement, achieving real-time and error-free garment information tracking during testing.
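The CCL step in the pipeline above groups foreground pixels of the binarised label image into candidate digit regions. A toy stand-in for that stage, using 4-connected breadth-first labelling on a small binary grid (the input image is invented for illustration):

```python
from collections import deque

def connected_components(binary):
    """4-connected component labelling on a binary image (list of lists).

    A toy stand-in for the CCL step used to locate digit regions
    before they are passed to the neural network.
    """
    rows, cols = len(binary), len(binary[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and labels[r][c] == 0:
                current += 1                      # start a new component
                queue = deque([(r, c)])
                labels[r][c] = current
                while queue:                      # flood-fill its pixels
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return current, labels

img = [[1, 1, 0, 0],
       [1, 0, 0, 1],
       [0, 0, 0, 1],
       [1, 0, 0, 0]]
n, _ = connected_components(img)
print(n)  # 3 separate blobs
```

Each labelled blob's bounding box would then serve as an ROI candidate for the digit classifier.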
Abstract:
Calibrated cameras are an extremely useful resource for computer vision scenarios. Typically, cameras are calibrated through calibration targets, measurements of the observed scene, or self-calibrated through features matched between cameras with overlapping fields of view. This paper considers an approach to camera calibration based on observations of a pedestrian and compares the resulting calibration to a commonly used approach requiring that measurements be made of the scene.
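The core idea of pedestrian-based calibration can be reduced to a pinhole relation: a person of roughly known height acts as a vertical ruler in the scene. All numbers below are illustrative assumptions, not values from the paper:

```python
# A pedestrian of roughly known height acts as a vertical ruler.
PEDESTRIAN_HEIGHT_M = 1.75   # assumed real-world height
distance_m = 8.0             # assumed distance from the camera
image_height_px = 175.0      # measured head-to-foot extent in the image

# Pinhole projection: h_px = f * H / Z  =>  f = h_px * Z / H
focal_px = image_height_px * distance_m / PEDESTRIAN_HEIGHT_M
print(focal_px)  # 800.0
```

In practice the paper's approach uses many pedestrian observations rather than a single measurement, but each observation contributes a constraint of this form.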
Abstract:
The distribution of dust in the ecliptic plane between 0.96 and 1.04 au has been inferred from impacts on the two Solar Terrestrial Relations Observatory (STEREO) spacecraft through observation of secondary particle trails and unexpected off-points in the heliospheric imager (HI) cameras. This study made use of analysis carried out by members of a distributed web-based citizen science project Solar Stormwatch. A comparison between observations of the brightest particle trails and a survey of fainter trails shows consistent distributions. While there is no obvious correlation between this distribution and the occurrence of individual meteor streams at Earth, there are some broad longitudinal features in these distributions that are also observed in sources of the sporadic meteor population. The different position of the HI instrument on the two STEREO spacecraft leads to each sampling different populations of dust particles. The asymmetry in the number of trails seen by each spacecraft and the fact that there are many more unexpected off-points in the HI-B than in HI-A indicates that the majority of impacts are coming from the apex direction. For impacts causing off-points in the HI-B camera, these dust particles are estimated to have masses in excess of 10⁻¹⁷ kg with radii exceeding 0.1 μm. For off-points observed in the HI-A images, which can only have been caused by particles travelling from the anti-apex direction, the distribution is consistent with that of secondary ‘storm’ trails observed by HI-B, providing evidence that these trails also result from impacts with primary particles from an anti-apex source. Investigating the mass distribution for the off-points of both HI-A and HI-B, it is apparent that the differential mass index of particles from the apex direction (causing off-points in HI-B) is consistently above 2, indicating that the majority of the mass is within the smaller particles of this population. In contrast, the differential mass index of particles from the anti-apex direction (causing off-points in HI-A) is consistently below 2, indicating that the majority of the mass is to be found in the larger particles of this distribution.
Abstract:
This chapter explores the distinctive qualities of the Matt Smith era Doctor Who, focusing on how dramatic emphases are connected with emphases on visual style, and how this depends on the programme's production methods and technologies. Doctor Who was first made in the 1960s era of live, studio-based, multi-camera television with monochrome pictures. However, as technical innovations such as colour filming, stereo sound, CGI and post-production effects technology, and now High Definition (HD) cameras, have been routinely introduced into the programme, they have given Doctor Who's creators new ways of making visually distinctive narratives. Indeed, it has been argued that since the 1980s television drama has become increasingly like cinema in its production methods and aesthetic aims. Viewers' ability to view the programme on high-specification TV sets, and to record and repeat episodes using digital media, also encourages attention to visual style in television as much as in cinema. The chapter evaluates how these new circumstances affect what Doctor Who has become and engages with arguments that visual style has been allowed to override characterisation and story in the current Doctor Who. The chapter refers to specific episodes, and frames the analysis with reference to earlier years in Doctor Who's long history. For example, visual spectacle using green-screen and CGI can function as a set-piece (at the opening or ending of an episode) but can also work 'invisibly' to render a setting realistically. Shooting on location using HD cameras provides a rich and detailed image texture, but also highlights mistakes and especially problems of lighting. The reduction of Doctor Who's budget has led to Steven Moffat's episodes relying less on visual extravagance, connecting back both to Russell T. Davies's concern to show off the BBC's investment in the series and to British traditions of gritty and intimate social drama.
Pressures to capitalise on Doctor Who as a branded product are the final aspect of the chapter's analysis, where the role of Moffat as 'showrunner' links him to an American (not British) style of television production in which the preservation of format and brand values gives him unusual power over the look of the series.
Abstract:
In the last decade, several research results have presented formulations for the auto-calibration problem. Most of these have relied on the evaluation of vanishing points to extract the camera parameters. Normally vanishing points are evaluated using pedestrians or the Manhattan World assumption, i.e. it is assumed that the scene is necessarily composed of orthogonal planar surfaces. In this work, we present a robust framework for auto-calibration, with improved results and generalisability for real-life situations. This framework is capable of handling problems such as occlusions and the presence of unexpected objects in the scene. In our tests, we compare our formulation with the state-of-the-art in auto-calibration using pedestrians and Manhattan World-based assumptions. This paper reports on the experiments conducted using publicly available datasets; the results have shown that our formulation represents an improvement over the state-of-the-art.
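The vanishing-point constraint that underlies this family of methods can be verified on synthetic data: for vanishing points v1, v2 of two orthogonal scene directions, with principal point p, zero skew and unit aspect ratio (all assumptions here), (v1 − p)·(v2 − p) = −f². A small numerical check:

```python
import numpy as np

# Ground-truth intrinsics used to fabricate the synthetic vanishing points.
f, p = 700.0, np.array([320.0, 240.0])
K = np.array([[f, 0.0, p[0]],
              [0.0, f, p[1]],
              [0.0, 0.0, 1.0]])

def vanishing_point(direction):
    """Image of the point at infinity along a 3D direction."""
    v = K @ direction
    return v[:2] / v[2]

# Two orthogonal scene directions (both with a forward z component).
v1 = vanishing_point(np.array([1.0, 0.0, 1.0]) / np.sqrt(2.0))
v2 = vanishing_point(np.array([-1.0, 0.0, 1.0]) / np.sqrt(2.0))

# Orthogonality constraint: (v1 - p) . (v2 - p) = -f^2
f_est = np.sqrt(-np.dot(v1 - p, v2 - p))
print(f_est)  # recovers 700.0
```

Real methods must first estimate the vanishing points robustly from the scene, which is exactly where occlusions and unexpected objects cause the failures this paper targets.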
Abstract:
Traditionally, spoor (tracks, pug marks) have been used as a cost-effective tool to assess the presence of larger mammals. Automated camera traps are now increasingly utilized to monitor wildlife, primarily because their cost has greatly declined and statistical approaches to data analysis have improved. While camera traps have become ubiquitous, we have little understanding of their effectiveness when compared to traditional approaches using spoor in the field. Here, we a) test the success of camera traps in recording a range of carnivore species against spoor; b) ask whether simple measures of spoor size taken by amateur volunteers are likely to allow individual identification of leopards; and c) ask whether, for a trained tracker, this approach may allow individual leopards to be followed with confidence in savannah habitat. We found that camera traps significantly under-recorded mammalian top and meso-carnivores, and were more likely to under-record the presence of smaller carnivores (civet 64%; genet 46%; Meller's mongoose 45%) than larger ones (jackal sp. 30%; brown hyena 22%), while leopard was more likely to be recorded by camera trap (all recorded by camera trap only). We found that amateur trackers could be beneficial for collecting presence data; however, the large variance in spoor measurements taken in the field by volunteers suggests that this approach is unlikely to add further data. Nevertheless, the use of simple spoor measurements in the field by a trained field researcher increases their ability to reliably follow a leopard trail in difficult terrain. This allows researchers to glean further data on leopard behaviour and habitat utilisation without the need for complex analysis.
Abstract:
This article describes a prototype system for quantifying bioassays and for exchanging the results of the assays digitally with physicians located off-site. The system uses paper-based microfluidic devices for running multiple assays simultaneously, camera phones or portable scanners for digitizing the intensity of color associated with each colorimetric assay, and established communications infrastructure for transferring the digital information from the assay site to an off-site laboratory for analysis by a trained medical professional; the diagnosis then can be returned directly to the healthcare provider in the field. The microfluidic devices were fabricated in paper using photolithography and were functionalized with reagents for colorimetric assays. The results of the assays were quantified by comparing the intensities of the color developed in each assay with those of calibration curves. An example of this system quantified clinically relevant concentrations of glucose and protein in artificial urine. The combination of patterned paper, a portable method for obtaining digital images, and a method for exchanging results of the assays with off-site diagnosticians offers new opportunities for inexpensive monitoring of health, especially in situations that require physicians to travel to patients (e.g., in the developing world, in emergency management, and during field operations by the military) to obtain diagnostic information that might be obtained more effectively by less valuable personnel.
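The quantification step described above inverts a calibration curve: the digitised colour intensity of an assay zone is mapped back to a concentration. A minimal sketch with an invented curve (the numbers are illustrative, not the article's data):

```python
import numpy as np

# Hypothetical calibration curve: mean colour intensity of the glucose
# assay zone measured at known concentrations (values are made up).
conc_mg_dl = np.array([0.0, 50.0, 100.0, 200.0, 400.0])
intensity = np.array([10.0, 55.0, 95.0, 160.0, 230.0])

# An unknown sample is quantified by inverting the curve with
# piecewise-linear interpolation (intensity must be monotonic here).
sample_intensity = 75.0
estimated = np.interp(sample_intensity, intensity, conc_mg_dl)
print(estimated)  # 75.0 mg/dL
```

In the real system the curve would be built per assay and per imaging device, since camera phones and scanners digitise colour differently.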
Abstract:
This thesis is about new digital moving image recording technologies and how they augment the distribution of creativity and the flexibility in moving image production systems, but also impose constraints on how images flow through the production system. The central concept developed in this thesis is 'creative space', which links quality and efficiency in moving image production to time for creative work, capacity of digital tools, user skills and the constitution of digital moving image material. The empirical evidence of this thesis is primarily based on semi-structured interviews conducted with Swedish film and TV production representatives. This thesis highlights the importance of pre-production technical planning and proposes a design management support tool (MI-FLOW) as a way to leverage functional workflows, which are a prerequisite for efficient and cost-effective moving image production.
Abstract:
In recent years the number of industrial applications for Augmented Reality (AR) and Virtual Reality (VR) environments has significantly increased. Optical tracking systems are an important component of AR/VR environments. In this work, a low-cost optical tracking system with adequate attributes for professional use is proposed. The system works in the infrared spectral region to reduce optical noise. A high-speed camera, equipped with a daylight-blocking filter and infrared flash strobes, transfers uncompressed grayscale images to a regular PC, where image pre-processing software and the PTrack tracking algorithm recognize a set of retro-reflective markers and extract their 3D position and orientation. Included in this work is a comprehensive survey of image pre-processing and tracking algorithms. A testbed was built to perform accuracy and precision tests. Results show that the system reaches accuracy and precision levels slightly worse than, but still comparable to, professional systems. Due to its modularity, the system can be expanded by using several one-camera tracking modules linked by a sensor fusion algorithm, in order to obtain a larger working range. A setup with two modules was built and tested, resulting in performance similar to the stand-alone configuration.
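Recovering the 3D position and orientation of a rigid marker set, as the system above does, is commonly solved with a least-squares rigid alignment (Kabsch/SVD). The sketch below is an illustrative building block under that standard formulation, not the PTrack implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation + translation aligning src to dst (Kabsch).

    Given the known marker geometry (src) and its reconstructed 3D
    positions (dst), recover the rigid-body pose of the marker set.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])               # guard against reflections
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t

# Synthetic check: rotate + translate a marker set and recover the pose.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.5])
marker = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R, t = rigid_transform(marker, marker @ R_true.T + t_true)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

With noisy reconstructed marker positions the same routine returns the least-squares pose, which is what makes it robust enough for tracking loops.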
Abstract:
Image stitching is the process of joining several images to obtain a bigger view of a scene. It is used, for example, in tourism to give the viewer the sensation of being in another place. I present an inexpensive solution for automatic real-time video and image stitching with two web cameras as the video/image sources. The proposed solution relies on several markers in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (relative to the markers), video/image scaling and 3D transformation, and image translation. Wii remote controllers are used to support several steps in the process: the built-in IR camera provides clean marker detection, which facilitates camera pose determination. The only restriction of the algorithm is that the markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm. The algorithm is able to perform video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments in objects at the same depth as the markers; misalignments in the background and foreground are larger. The capture process is simple enough that anyone can perform a stitching after a very short explanation. Although real-time video stitching can be achieved by this affordable approach, there are a few shortcomings in the current version. For example, contrast inconsistency along the stitching line could be reduced by applying a color correction algorithm to each source video. In addition, the misalignments in stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple's Quartz Composer, a visual programming environment. A library of extended functions was developed using Xcode tools, also from Apple.
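The final image-translation step of the pipeline above reduces, in the simplest case, to shifting one image so that a shared marker coincides in both views. A toy sketch with invented pixel coordinates:

```python
# Align two views on a shared marker (toy sketch; coordinates are invented).
marker_in_left = (540, 210)    # marker pixel position in the left image
marker_in_right = (60, 205)    # same marker as seen by the right camera

# Offset that makes the marker coincide in both views.
dx = marker_in_left[0] - marker_in_right[0]
dy = marker_in_left[1] - marker_in_right[1]
# Paste the right image shifted by (dx, dy) so the markers overlap.
print(dx, dy)  # 480 5
```

Objects off the marker plane still misalign under a pure translation, which matches the background/foreground misalignments reported above; the 3D transformation step exists to reduce exactly that error.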