786 results for Video cameras
Abstract:
Functional linkage between reef habitat quality and fish growth and production has remained elusive. Most current research is focused on correlative relationships between a general habitat type and the presence/absence of a species, an index of species abundance, or species diversity. Such descriptive information largely ignores how reef attributes regulate reef fish abundance (density-dependent habitat selection), trophic interactions, and physiological performance (growth and condition). To determine the functional relationship between habitat quality, fish abundance, trophic interactions, and physiological performance, we are using an experimental reef system in the northeastern Gulf of Mexico where we apply advanced sensor and biochemical technologies. Our study site controls for reef attributes (size, cavity space, and reef mosaics) and focuses on the processes that regulate gag grouper (Mycteroperca microlepis) abundance, behavior, and performance (growth and condition), and the availability of their pelagic prey. We combine mobile and fixed-active (fisheries) acoustics, passive acoustics, video cameras, and advanced biochemical techniques. Fisheries acoustics quantifies the abundance of pelagic prey fishes associated with the reefs and their behavior. Passive acoustics and video allow direct observation of gag and prey fish behavior and the acoustic environment, and provide a direct visual reference for the interpretation of fixed fisheries acoustics measurements. Novel application of biochemical techniques, such as the Electron Transport System (ETS) assay, allows in situ measurement of the metabolic expenditure of gag and relates it back to reef attributes, gag behavior, and prey fish availability. Here, we provide an overview of our integrated technological approach for understanding and quantifying the functional relationship between reef habitat quality and one element of production – gag grouper growth on shallow coastal reefs.
Abstract:
A preliminary study of reef fish and sharks was conducted at Navassa Island in the Caribbean Sea during a 24-h period beginning 9 September 1998. Conducting a study at Navassa Island was of particular interest because exploitation of Navassa Island’s fishery resources has been considered minimal due to its remote location (southwest of the Windward Passage, Caribbean Sea) and lack of human habitation. Reef fish (and associated habitats) were assessed with stationary underwater video cameras at 3 survey sites; sharks were assessed by bottom longlining at 5 survey sites. Fifty-seven reef fish identifications to lowest possible taxon were made from video footage. Longline catches produced 3 shark species and 3 incidental catch species. When results from the 1998 National Marine Fisheries Service (NMFS) project are combined with a previous 1977 NMFS survey of Navassa Island, 27 fish families, 79 fish identifications to lowest possible taxon, 4 invertebrate orders or families, 3 coralline families, and 2 macroalgae phyla are reported.
Abstract:
The relative abundance of Bristol Bay red king crab (Paralithodes camtschaticus) is estimated each year for stock assessment by using catch-per-swept-area data collected on the Alaska Fisheries Science Center’s annual eastern Bering Sea bottom trawl survey. To estimate survey trawl capture efficiency for red king crab, an experiment was conducted with an auxiliary net (fitted with its own heavy chain-link footrope) that was attached beneath the trawl to capture crabs escaping under the survey trawl footrope. Capture probability was then estimated by fitting a model to the proportion of crabs captured and crab size data. For males, mean capture probability was 72% at 95 mm (carapace length), the size at which full vulnerability to the survey trawl is assigned in the current management model; 84.1% at 135 mm, the legal size for the fishery; and 93% at 184 mm, the maximum size observed in this study. For females, mean capture probability was 70% at 90 mm, the size at which full vulnerability to the survey trawl is assigned in the current management model, and 77% at 162 mm, the maximum size observed in this study. The precision of our estimates for each sex decreased for juveniles under 60 mm and for the largest crab because of small sample sizes. In situ data collected from trawl-mounted video cameras were used to determine the importance of various factors associated with the capture of individual crabs. Capture probability was significantly higher when a crab was standing when struck by the footrope, rather than crouching, and higher when a crab was hit along its body axis, rather than from the side. Capture probability also increased as a function of increasing crab size but decreased with increasing footrope distance from the bottom and when artificial light was provided for the video camera.
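Size-dependent capture probability of the kind reported above is commonly modeled with a logistic selectivity curve fitted to the proportion of crabs captured at each size. The sketch below is a minimal Python illustration; the coefficients are hypothetical, not the fitted values from the study:

```python
import math

def capture_probability(length_mm, a, b):
    """Logistic selectivity curve: probability that a crab of the given
    carapace length (mm) is captured by the survey trawl."""
    return 1.0 / (1.0 + math.exp(-(a + b * length_mm)))

# Hypothetical coefficients chosen for illustration only
a, b = -3.0, 0.042

# Evaluate at the three male sizes discussed in the abstract
probs = [capture_probability(L, a, b) for L in (95, 135, 184)]
# Capture probability rises monotonically with carapace length
```

In practice the coefficients would be estimated by maximum likelihood from the paired catches of the survey trawl and the auxiliary net.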
Abstract:
A number of methods (e.g., time-of-flight and visual triangulation) are commonly used today to collect spatial data of infrastructure. However, current practice lacks a solution that is accurate, automatic, and cost-efficient at the same time. This paper presents a videogrammetric framework for acquiring spatial data of infrastructure that holds the promise of addressing this limitation. It uses a calibrated set of low-cost, high-resolution video cameras that is progressively traversed around the scene and aims to produce a dense 3D point cloud that is updated in each frame. It allows for progressive reconstruction, as opposed to point-and-shoot followed by point cloud stitching. The feasibility of the framework is studied in this paper. The required steps of the process are presented, and the unique challenges of each step are identified. Results specific to each step are also presented.
Abstract:
On-site tracking in open construction sites is often difficult because of the large number of items that are present and need to be tracked. Additionally, the many occlusions/obstructions present create a highly complex tracking environment. Existing tracking methods are based mainly on radio-frequency technologies, including Global Positioning Systems (GPS), Radio Frequency Identification (RFID), Bluetooth, Wireless Fidelity (Wi-Fi), Ultra-Wideband, etc. These methods require a considerable amount of pre-processing time, since tags must be deployed manually and records kept of the items they are placed on. In construction sites with numerous entities, tag installation, maintenance, and decommissioning become an issue, since they increase the cost and time needed to implement these tracking methods. This paper presents a novel method for open-site tracking with construction cameras based on machine vision. According to this method, video feed is collected from on-site video cameras, and the user selects the entity to be tracked. The entity is tracked in each video using 2D vision tracking. Epipolar geometry is then used to calculate the depth of the marked area and provide the 3D location of the entity. This method addresses the limitations of radio-frequency methods by being unobtrusive and using inexpensive, easy-to-deploy equipment. The method has been implemented in a C++ prototype, and preliminary results indicate its effectiveness.
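For a rectified stereo pair, the epipolar-geometry depth computation reduces to depth from disparity, Z = f·B/d. The snippet below is a simplified sketch of that final step; the focal length, baseline, and pixel coordinates are hypothetical values for illustration:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth (m) of a point seen at column x_left in the left image and
    x_right in the right image of a rectified stereo pair: Z = f * B / d."""
    disparity = x_left - x_right  # pixels
    if disparity <= 0:
        raise ValueError("matched point must have positive disparity")
    return focal_px * baseline_m / disparity

# A feature at column 400 (left) and 380 (right), f = 1000 px, B = 0.5 m
z = stereo_depth(400.0, 380.0, 1000.0, 0.5)  # -> 25.0 m
```

With depth known, the full 3D location follows by back-projecting the pixel through the calibrated camera model.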
Abstract:
Tracking applications provide real-time on-site information that can be used to detect travel path conflicts, calculate crew productivity, and eliminate unnecessary processes at the site. This paper presents the validation of a novel vision-based tracking methodology at the Egnatia Odos Motorway in Thessaloniki, Greece. Egnatia Odos is a motorway that connects Turkey with Italy through Greece. Its multiple open construction sites serve as an ideal multi-site test bed for validating construction site tracking methods. The vision-based tracking methodology uses video cameras and computer algorithms to calculate the 3D position of project-related entities (e.g. personnel, materials, and equipment) in construction sites. The approach provides an unobtrusive, inexpensive way of effectively identifying and tracking the 3D location of entities. The process followed in this study starts by acquiring video data from multiple synchronous cameras at several large-scale project sites of Egnatia Odos, such as tunnels, interchanges, and bridges under construction. Subsequent steps include the evaluation of the collected data and, finally, performing the 3D tracking operations on selected entities (heavy equipment and personnel). The accuracy and precision of the method's results are evaluated by comparing them with the actual 3D positions of the objects, thus assessing the 3D tracking method's effectiveness.
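Comparing tracked positions against surveyed ground-truth positions, as described above, typically reduces to a point-wise Euclidean error summary. A minimal sketch (the coordinate values are invented for illustration):

```python
def rmse_3d(tracked, actual):
    """Root-mean-square Euclidean error between corresponding
    tracked and ground-truth 3D points."""
    sq_errors = [sum((a - b) ** 2 for a, b in zip(p, q))
                 for p, q in zip(tracked, actual)]
    return (sum(sq_errors) / len(sq_errors)) ** 0.5

# Toy example: each tracked point is off by 0.5 m along the Z axis
tracked = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]
actual  = [(1.0, 2.0, 3.5), (4.0, 5.0, 6.5)]
err = rmse_3d(tracked, actual)  # -> 0.5
```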
Abstract:
Commercial far-range (>10 m) methods for collecting spatial data of infrastructure are not completely automated. They require a significant amount of manual post-processing work, and in some cases the equipment costs are substantial. This paper presents a method that is the first step of a stereo videogrammetric framework and holds the promise of addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A set of data collected from an ongoing infrastructure project is used to show the merits of the method. A comparison with existing tools is also shown to indicate how the proposed method differs in level of automation and accuracy of results.
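The feature-matching step across simultaneous frames is commonly done with nearest-neighbour descriptor matching plus Lowe's ratio test to discard ambiguous matches. A pure-Python illustration; the descriptors here are toy 2-vectors, not real feature descriptors, and the abstract does not name a specific matcher:

```python
def match_features(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test."""
    matches = []
    for i, da in enumerate(desc_a):
        # Squared Euclidean distance from da to every candidate in desc_b
        dists = sorted((sum((x - y) ** 2 for x, y in zip(da, db)), j)
                       for j, db in enumerate(desc_b))
        best, second = dists[0], dists[1]
        # Keep the match only if it clearly beats the runner-up
        if best[0] < (ratio ** 2) * second[0]:
            matches.append((i, best[1]))
    return matches

desc_a = [(0.0, 0.0), (5.0, 5.0)]
desc_b = [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]
pairs = match_features(desc_a, desc_b)  # -> [(0, 0), (1, 1)]
```

The surviving matched pixel pairs are then triangulated to obtain the sparse 3D points.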
Abstract:
Most existing automated machine vision-based techniques for as-built documentation of civil infrastructure utilize only point features to recover the 3D structure of a scene. However, in man-made structures (e.g., buildings and roofs) it is often the case that not enough point features can be reliably detected; this can potentially lead to the failure of these techniques. To address the problem, this paper exploits the prominence of straight lines in infrastructure scenes. It presents a hybrid approach that benefits from both point and line features. A calibrated stereo set of video cameras is used to collect data. Point and line features are then detected and matched across video frames. Finally, the 3D structure of the scene is recovered by finding the 3D coordinates of the matched features. The proposed approach has been tested in realistic outdoor environments, and preliminary results indicate its capability to deal with a variety of scenes.
Abstract:
This paper describes a video transmission and monitoring technique applied to a remotely operated underwater vehicle (ROV). An optical multiplexer is used to transmit data and video between the surface and the underwater unit, industrial Ethernet is used to control the cameras and lights of the entire system, and the video signals transmitted to the surface are displayed, overlaid, and stored. The technique has been successfully applied in a newly developed ROV with good results.
Abstract:
Several methods exist to evaluate vegetation growth and the rate of soil cover. Precise and rapid measurements can be obtained from digital processing of images generated by photographic or video cameras. Several image processors are available on the market with similar basic functions, but with particular features that may bring greater benefit to the user depending on the application. SPRING, developed by INPE, is in the public domain and is broader than an image processor, as it also includes geoprocessing functions. ENVI was developed for the analysis of multispectral and hyperspectral images and can also be used, for example, to process images obtained from video cameras. KS-300 is a hardware and software package for processing and quantifying microscopic images, allowing direct capture of images generated through magnifying lenses, electron microscopes, or video cameras. SIARCS was developed by Embrapa Agricultural Instrumentation to streamline the capture of data from a system. This work presents the basic theoretical foundations of the image analysis technique, the main characteristics of the software packages mentioned above, and their application in quantifying the growth rate and soil cover of plant species.
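The kind of cover quantification these packages perform can be illustrated with a simple vegetation index threshold on RGB pixels. The sketch below uses the excess-green index; the threshold value is arbitrary and not taken from any of the packages above:

```python
def ground_cover_fraction(pixels, threshold=20):
    """Fraction of pixels classified as vegetation using the
    excess-green index ExG = 2*G - R - B (hypothetical threshold)."""
    veg = sum(1 for r, g, b in pixels if 2 * g - r - b > threshold)
    return veg / len(pixels)

# Two toy pixels: one green (vegetation), one red (bare soil)
frac = ground_cover_fraction([(10, 200, 10), (200, 10, 10)])  # -> 0.5
```

A real workflow would apply this per-pixel test to a whole image and report the vegetated fraction as percent soil cover.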
Abstract:
A novel method for 3D head tracking in the presence of large head rotations and facial expression changes is described. Tracking is formulated in terms of color image registration in the texture map of a 3D surface model. Model appearance is recursively updated via image mosaicking in the texture map as the head orientation varies. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. Parameters are estimated via a robust minimization procedure; this provides robustness to occlusions, wrinkles, shadows, and specular highlights. The system was tested on a variety of sequences taken with low-quality, uncalibrated video cameras. Experimental results are reported.
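Robust minimization of the kind mentioned above is typically achieved by replacing the squared-error penalty with a loss that grows only linearly for large residuals, such as the Huber loss; the abstract does not specify this particular loss, so the sketch below is illustrative:

```python
def huber(residual, delta=1.0):
    """Huber penalty: quadratic for |r| <= delta, linear beyond, which
    limits the influence of outlier pixels (occlusions, highlights)."""
    a = abs(residual)
    if a <= delta:
        return 0.5 * residual * residual
    return delta * (a - 0.5 * delta)

inlier = huber(0.5)   # -> 0.125 (quadratic regime)
outlier = huber(3.0)  # -> 2.5   (linear regime; a squared penalty would give 4.5)
```

Summing this penalty over pixel residuals, instead of their squares, keeps occluded or specular pixels from dominating the registration estimate.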
Abstract:
Carbendazim is highly toxic to earthworms and is used as a standard control substance when running field-based trials of pesticides, but results using carbendazim are highly variable. In the present study, the impacts on earthworms of the timing of rainfall events following carbendazim application were investigated. Lumbricus terrestris were maintained in soil columns to which carbendazim and then deionized water (a rainfall substitute) were applied. Carbendazim was applied at 4 kg/ha, the rate recommended in pesticide field trials. Three rainfall regimes were investigated: initial and delayed heavy rainfall 24 h and 6 d after carbendazim application, and frequent rainfall every 48 h. Earthworm mortality and movement of carbendazim through the soil were assessed 14 d after carbendazim application. No detectable movement of carbendazim occurred through the soil in any of the treatments or controls. Mortality in the initial heavy and frequent rainfall treatments was significantly higher (approximately 55%) than in the delayed rainfall treatment (approximately 25%). This was due to reduced bioavailability of carbendazim in the latter treatment, caused by a prolonged period of sorption of carbendazim to soil particles before rainfall events. The impact of carbendazim application on earthworm surface activity was assessed using video cameras. Carbendazim applications significantly reduced surface activity due to avoidance behavior of the earthworms. Surface activity reductions were least in the delayed rainfall treatment due to the reduced bioavailability of the carbendazim. Because rainfall events affect the response of earthworms to carbendazim applications in this way, records of rainfall events preceding and following applications during field trials should be made at a higher level of resolution than is currently practiced under standard International Organization for Standardization protocols.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
In general, a land-based mobile mapping system is characterized by a vehicle with a pair of video cameras mounted on the top and positioning and navigation sensors loaded in the vehicle. Considering the pair of video cameras mounted on the roof of the vehicle as a stereo camera pointing forward, with both optical axes parallel to each other and orthogonal to the stereo base, whose length is 0.94 m, this paper analyzes the interior and exterior camera orientation parameters and the object point coordinates estimated by phototriangulation when the length constraint on the stereo base is and is not considered. The results show that the stereo base constraint affects convergence of the estimation, but it neither improves the object point coordinate estimation at a significance level of 5% nor influences the interior orientation parameters. Finally, it was observed that the optical axes are not truly parallel to each other or orthogonal to the stereo base: there is a convergence of approximately 0.5 degrees between the optical axes, and they do not lie in the same plane (approximately 0.8 degrees deviation).
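A convergence of the optical axes like the one reported above can be quantified as the angle between the two axis direction vectors. A small sketch; the direction vectors are illustrative, not the calibrated values from the paper:

```python
import math

def angle_between_deg(u, v):
    """Angle in degrees between two 3D direction vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (norm_u * norm_v)))

# Left axis along +Z; right axis converging toward it by 0.5 degrees
left = (0.0, 0.0, 1.0)
right = (math.sin(math.radians(0.5)), 0.0, math.cos(math.radians(0.5)))
conv = angle_between_deg(left, right)  # -> ~0.5 degrees
```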