954 results for Video-camera
Abstract:
A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, the distance, and the length of the sidereal day were used to calculate the radius of the Earth. The radius was measured as 6394.3 ± 118 km, which differs from the accepted mean value of 6371 km by only about 0.4% and agrees with it well within the 1.8% experimental uncertainty. The experiment is suitable as a high school or university project and should produce a value for Earth's radius within a few per cent at latitudes towards the equator, where at some times of the year the ecliptic is approximately normal to the horizon.
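Stated as code, the calculation reduces to a few lines. The sketch below uses illustrative numbers (a 50 m rise and a 54.3 s rise time; the abstract does not quote the paper's actual measurements) together with the sunset geometry cos(theta) = R / (R + h), where theta is the angle the terminator sweeps during the measured time:

    import math

    T_SIDEREAL = 86164.1   # length of the sidereal day in seconds
    h = 50.0               # height the shadow rises, in metres (assumed value)
    dt = 54.3              # measured rise time in seconds (assumed value)

    # The terminator sweeps theta = 2*pi*dt/T while the shadow climbs the
    # building; the sunset geometry gives cos(theta) = R / (R + h).
    theta = 2.0 * math.pi * dt / T_SIDEREAL
    R = h * math.cos(theta) / (1.0 - math.cos(theta))
    print(f"Earth radius ~ {R / 1000:.0f} km")   # ~6378 km with these inputs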
Abstract:
With the use of a baited stereo-video camera system, this study semiquantitatively defined the habitat associations of 4 species of Lutjanidae: Opakapaka (Pristipomoides filamentosus), Kalekale (P. sieboldii), Onaga (Etelis coruscans), and Ehu (E. carbunculus). Fish abundance and length data from 6 locations in the main Hawaiian Islands were evaluated for species-specific and size-specific differences between regions and habitat types. Multibeam bathymetry and backscatter were used to classify habitats into 4 types on the basis of substrate (hard or soft) and slope (high or low). Depth was a major influence on bottomfish distributions. Opakapaka occurred at depths shallower than the depths at which other species were observed, and this species showed an ontogenetic shift to deeper water with increasing size. Opakapaka and Ehu had an overall preference for hard substrate with low slope (hard-low), and Onaga was found over both hard-low and hard-high habitats. No significant habitat preferences were recorded for Kalekale. Opakapaka, Kalekale, and Onaga exhibited size-related shifts with habitat type. A move into hard-high environments with increasing size was evident for Opakapaka and Kalekale. Onaga was seen predominantly in hard-low habitats at smaller sizes and in either hard-low or hard-high at larger sizes. These ontogenetic habitat shifts could be driven by reproductive triggers because they roughly coincided with the length at sexual maturity of each species. However, further studies are required to determine causality. No ontogenetic shifts were seen for Ehu, but only a limited number of juveniles were observed. Regional variations in abundance and length were also found and could be related to fishing pressure or large-scale habitat features.
Abstract:
Automating the model generation process of infrastructure can substantially reduce the modeling time and cost. This paper presents a method to generate a sparse point cloud of an infrastructure scene using a single video camera under practical constraints. It is the first step towards establishing an automatic framework for object-oriented as-built modeling. Motion blur and key frame selection criteria are considered. Structure from motion and bundle adjustment are explored. The method is demonstrated in a case study where the scene of a reinforced concrete bridge is videotaped, reconstructed, and metrically validated. The result indicates the applicability, efficiency, and accuracy of the proposed method.
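As a rough illustration of the key-frame selection step, the sketch below keeps only sharp frames, scoring blur with the common variance-of-Laplacian measure; the stride and threshold are assumptions rather than the paper's actual criteria:

    import cv2

    def select_key_frames(video_path, stride=15, blur_threshold=100.0):
        # Keep every stride-th frame whose sharpness passes a blur test.
        cap = cv2.VideoCapture(video_path)
        key_frames, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % stride == 0:
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                # Variance of the Laplacian is a standard motion-blur proxy:
                # sharp frames have strong edges and hence high variance.
                if cv2.Laplacian(gray, cv2.CV_64F).var() >= blur_threshold:
                    key_frames.append(frame)
            idx += 1
        cap.release()
        return key_frames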
Abstract:
A seminar run by four teachers from different schools in Vizcaya to develop five 'projects', each containing exercises for different levels. The immediate aim is to practise language structures in a communicative setting; the overall aim is to produce a film on the project's central theme. The topics covered are: using the video camera, news, advertisements (advertising), video clips, and stories and narratives. The results are judged very positive, owing to the motivation provided by filming the final product.
Abstract:
In this paper we report on the reliability of image sequences taken by off-the-shelf TV cameras for modeling camera rotation and reconstructing 3D structure using computer vision techniques, even though such cameras are designed for human viewing rather than for machine vision. Our scenario consists of a static scene and a mobile camera moving through it. The scene is any long axial building dominated by features along the three principal orientations and with at least one wall containing prominent repetitive planar features such as doors, windows, bricks, etc. The camera is an ordinary commercial camcorder moving along the axial axis of the scene and is allowed to rotate freely within ±10° in all directions. This makes it possible for the camera to be held by a walking non-professional cameraman with a normal gait, or to be mounted on a mobile robot. The system has been tested successfully on sequences of images of a variety of structured but fairly cluttered scenes taken by different walking cameramen. Potential application areas of the system include medicine, robotics, and photogrammetry.
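A minimal sketch of recovering inter-frame camera rotation with standard tools (OpenCV's essential-matrix pipeline, not the paper's own algorithm) is shown below; the intrinsic matrix K is assumed known from a prior calibration:

    import cv2
    import numpy as np

    def relative_rotation(img1, img2, K):
        # Match ORB features between two consecutive frames.
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
        # Robustly estimate the essential matrix and decompose it.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R   # 3x3 rotation between the two camera poses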
Abstract:
Groups of belugas, Delphinapterus leucas, were videotaped concurrently with observer counts during annual NMFS aerial surveys of Cook Inlet, Alaska, from 1994 to 2000. The videotapes provided permanent records of whale groups that could be examined and compared to group size estimates made by aerial observers. Examination of the video recordings resulted in 275 counts of 79 whale groups. The McLaren formula was used to account for whales missed while they were underwater (average correction factor 2.03; SD=0.64). A correction for whales missed due to video resolution was developed by using a second, paired video camera that magnified images relative to the standard video. This analysis showed that some whales were missed either because their image size fell below the resolution of the standard video recording or because two whales surfaced so close to each other that their images appeared to be one large whale. The resulting correction method depended on knowing the average whale image size in the videotapes. Image sizes were measured for 2,775 whales from 275 different passes over whale groups. Corrected group sizes were calculated as the product of the original count from video, the correction factor for whales missed underwater, and the correction factor for whales missed due to video resolution (average 1.17; SD=0.06). A regression formula was developed to estimate group sizes from aerial observer counts; the independent variables were the aerial counts and an interaction term relative to encounter rate (whales per second during the counting of a group), which were regressed against the respective group sizes as calculated from the videotapes. Significant effects of encounter rate, either positive or negative, were found for several observers. This formula was used to estimate group size when video was not available. The estimated group sizes were used in the annual abundance estimates.
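Numerically, the correction is a simple product. Using the average factors reported above with an illustrative video count of 12 whales (not a figure from the study):

    video_count = 12        # whales counted on a video pass (illustrative)
    underwater_cf = 2.03    # average correction for whales missed underwater
    resolution_cf = 1.17    # average correction for whales missed by resolution
    corrected = video_count * underwater_cf * resolution_cf
    print(f"corrected group size = {corrected:.1f}")   # 28.5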
Abstract:
This memo describes the initial results of a project to create a self-supervised algorithm for learning object segmentation from video data. Developmental psychology and computational experience have demonstrated that the motion segmentation of objects is a simpler, more primitive process than the detection of object boundaries by static image cues. Therefore, motion information provides a plausible supervision signal for learning the static boundary detection task and for evaluating performance on a test set. A video camera and previously developed background subtraction algorithms can automatically produce a large database of motion-segmented images at minimal cost. The purpose of this work is to use the information in such a database to learn how to detect the object boundaries in novel images using static information, such as color, texture, and shape. This work was funded in part by the Office of Naval Research contract #N00014-00-1-0298, in part by the Singapore-MIT Alliance agreement of 11/6/98, and in part by a National Science Foundation Graduate Student Fellowship.
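The data-collection step can be approximated with off-the-shelf tools; the sketch below uses OpenCV's MOG2 background subtractor as a stand-in for the memo's previously developed algorithms, with an assumed minimum-motion threshold:

    import cv2

    def motion_masks(video_path, min_area=500):
        cap = cv2.VideoCapture(video_path)
        subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
        masks = []
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fg = subtractor.apply(frame)          # per-pixel foreground mask
            fg = cv2.medianBlur(fg, 5)            # suppress speckle noise
            if cv2.countNonZero(fg) >= min_area:  # keep frames with real motion
                masks.append((frame, fg))
        cap.release()
        return masks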
Abstract:
In this project we design and implement a centralized hash table in the snBench sensor network environment. We discuss the feasibility of this approach and compare and contrast it with the distributed hashing architecture, with particular attention to the conditions under which a centralized architecture makes sense. There are numerous computational tasks that require persistence of data in a sensor network environment. To help motivate the need for data storage in snBench, we demonstrate a practical application of the technology whereby a video camera monitors a room, detects the presence of a person, and sends an alert to the appropriate authorities.
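Stripped of snBench specifics, the core of a centralized hash table is a single authoritative store that all sensor nodes read and write; the sketch below is a generic illustration of that idea, not snBench's actual API:

    import threading

    class CentralStore:
        # One shared table behind one lock: the central point of control.
        def __init__(self):
            self._table = {}
            self._lock = threading.Lock()

        def put(self, key, value):
            with self._lock:
                self._table[key] = value

        def get(self, key, default=None):
            with self._lock:
                return self._table.get(key, default)

    # e.g. a camera node publishing a detection for other nodes to read:
    store = CentralStore()
    store.put("room-101/person_detected", True)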
Abstract:
In this paper we study a variational problem derived from a computer vision application: video camera calibration with a smoothing constraint. By video camera calibration we mean estimating the location, orientation, and lens zoom setting of the camera for each video frame, taking into account visible image features. To simplify the problem we assume that the camera is mounted on a tripod; in that case, for each frame captured at time t, the calibration is given by three parameters: (1) P(t) (PAN), the rotation about the tripod's vertical axis; (2) T(t) (TILT), the rotation about the tripod's horizontal axis; and (3) Z(t) (CAMERA ZOOM), the camera lens zoom setting. The calibration function t -> u(t) = (P(t), T(t), Z(t)) is obtained as a minimum of an energy functional I[u]. In this paper we study the existence of minima of this energy functional as well as the solutions of the associated Euler-Lagrange equations.
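The abstract does not reproduce the energy functional itself; a typical smoothing-constrained form, written here purely as an assumption about its general shape, couples a per-frame data term \Phi with a penalty on the parameter velocities:

    I[u] = \int_0^T \Phi(u(t), t)\,dt + \lambda \int_0^T |u'(t)|^2\,dt,
    \qquad u(t) = (P(t), T(t), Z(t)),

whose Euler-Lagrange equations are 2\lambda\, u''(t) = \nabla_u \Phi(u(t), t), with natural boundary conditions u'(0) = u'(T) = 0.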
Abstract:
Visual fixation is employed by humans and some animals to keep a specific 3D location at the center of the visual gaze. Inspired by this phenomenon in nature, this paper explores transferring this mechanism to video stabilization for a hand-held video camera. A novel approach is presented that stabilizes a video by fixating on automatically extracted 3D target points. This approach differs from existing automatic solutions, which stabilize the video by smoothing. To determine the 3D target points, the recorded scene is analyzed with a state-of-the-art structure-from-motion algorithm, which estimates camera motion and reconstructs a 3D point cloud of the static scene objects. Special algorithms are presented that search for either virtual or real 3D target points that back-project close to the center of the image for as long a period of time as possible. The stabilization algorithm then transforms the original images of the sequence so that these 3D target points are kept exactly in the center of the image, which, in the case of real 3D target points, produces a perfectly stable result at the image center. Furthermore, different methods of additional user interaction are investigated. It is shown that the stabilization process can easily be controlled and that it can be combined with state-of-the-art tracking techniques to obtain a powerful image stabilization tool. The approach is evaluated on a variety of videos taken with a hand-held camera in natural scenes.
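As a simplified illustration of the fixation step (translation only; the paper's actual image transform is derived from the reconstructed camera motion and may differ), the sketch below shifts each frame so that an assumed 3x4 projection matrix P maps the chosen 3D target point X to the image centre:

    import cv2
    import numpy as np

    def fixate(frame, P, X):
        # Project the homogeneous 3D target point into the image.
        x = P @ np.append(X, 1.0)
        u, v = x[0] / x[2], x[1] / x[2]
        h, w = frame.shape[:2]
        # Translate the frame so the projection lands at the image centre.
        T = np.float32([[1, 0, w / 2 - u],
                        [0, 1, h / 2 - v]])
        return cv2.warpAffine(frame, T, (w, h))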