140 results for functional vision
Abstract:
These three papers describe an approach to the synthesis of solutions to a class of mechanical design problems that involve the transmission and transformation of mechanical forces and motion and can be described by a set of inputs and outputs. The approach involves (1) identifying a set of primary functional elements and rules for combining them, and (2) developing appropriate representations and reasoning procedures for synthesising solution concepts using these elements and their combination rules; these synthesis procedures can produce an exhaustive set of solution concepts, in terms of their topological as well as spatial configurations, for a given design problem. This paper (Part III) describes a constraint propagation procedure which, using a knowledge base of spatial information about a set of primary functional elements, can produce possible spatial configurations for the solution concepts generated in Part II.
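The constraint propagation idea can be illustrated with a small, generic sketch in Python. The element names, orientation domains, and compatibility rule below are invented for illustration and are not taken from the paper's knowledge base; the sketch only shows how pruning inconsistent spatial values and enumerating the surviving configurations might look.

```python
from itertools import product

# Candidate orientations (stand-in "spatial knowledge") for three hypothetical elements.
domains = {
    "lever": {"horizontal", "vertical"},
    "gear":  {"horizontal", "vertical"},
    "shaft": {"vertical"},
}

# Pairwise spatial constraints between connected elements (invented rule:
# connected elements must share the same orientation).
constraints = [("lever", "gear"), ("gear", "shaft")]
arcs = constraints + [(y, x) for x, y in constraints]
compatible = lambda a, b: a == b

# Propagate: repeatedly discard values with no compatible partner value.
changed = True
while changed:
    changed = False
    for x, y in arcs:
        for vx in set(domains[x]):
            if not any(compatible(vx, vy) for vy in domains[y]):
                domains[x].discard(vx)
                changed = True

# Enumerate the spatial configurations that survive propagation.
names = list(domains)
configs = []
for values in product(*(domains[n] for n in names)):
    cfg = dict(zip(names, values))
    if all(compatible(cfg[x], cfg[y]) for x, y in constraints):
        configs.append(cfg)
print(configs)  # -> [{'lever': 'vertical', 'gear': 'vertical', 'shaft': 'vertical'}]
```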
Abstract:
In this paper we demonstrate laser emission from emulsion-based polymer dispersed liquid crystals. Such lasers can be easily formed on single substrates with no alignment layers. Remarkably, it is shown that two radically different laser emission profiles can exist, namely photonic band-edge lasing and non-resonant random lasing. The emission is controlled by simple changes in the emulsification procedure. Low mixing speeds generate larger droplets that favor photonic band-edge lasing, with the requisite helical alignment produced by film shrinkage. Higher mixing speeds generate small droplets, which facilitate random lasing by a non-resonant scattering feedback process. Lasing threshold and linewidth data are presented, showing the potential of controllable-linewidth lasing sources. Sequential and stacked layers demonstrate the possibility of achieving complex, simultaneous multi-wavelength and "white-light" laser output from a wide variety of substrates including glass, metallic, paper and flexible plastic. © 2011 Copyright Society of Photo-Optical Instrumentation Engineers (SPIE).
Abstract:
The electronic and magnetic properties of the transition metal sesquioxides Cr2O3, Ti2O3, and Fe2O3 have been calculated using the screened exchange (sX) hybrid density functional. This functional is found to give a band structure, bandgap, and magnetic moment in better agreement with experiment than the local density approximation (LDA) or the LDA+U method. Ti2O3 is found to be a spin-paired insulator with a bandgap of 0.22 eV within the Ti d orbitals. Cr2O3 in its anti-ferromagnetic phase is an intermediate charge-transfer Mott-Hubbard insulator with an indirect bandgap of 3.31 eV. Fe2O3, with anti-ferromagnetic order, is found to be a wide-bandgap charge-transfer semiconductor with a 2.41 eV gap. Interestingly, sX outperforms the HSE functional for the bandgaps of these oxides.
Abstract:
Vision tracking has significant potential for tracking resources on large-scale, congested construction sites, where a small number of cameras strategically placed around the site could replace hundreds of tracking tags. Correlating the 2D positions from vision tracking in multiple views can provide the 3D position. However, many 2D vision trackers are available in the literature, and little information exists on which is most effective for construction applications. In this paper, a comparative study of various vision tracker categories is carried out to identify which is most effective in tracking construction resources. Testing parameters for evaluating categories of trackers are identified, and the benefits and limitations of each category are presented. The most promising trackers are tested using a database of construction operations videos. The results indicate the effectiveness of each tracker with respect to each test parameter and identify the most suitable tracker for developing effective 3D vision tracking of construction resources.
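The abstract does not name the evaluation metrics, but one common testing parameter for comparing 2D trackers is per-frame bounding-box overlap with ground-truth annotations. The short sketch below, with placeholder boxes rather than data from the construction-video database, shows how such an overlap score could be computed.

```python
# Per-frame intersection-over-union between tracker output and ground truth.
def iou(box_a, box_b):
    """Boxes are (x, y, width, height); returns intersection-over-union."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

# Placeholder per-frame boxes (not taken from the paper's video database).
tracker_boxes = [(100, 50, 40, 80), (104, 52, 40, 80)]
ground_truth  = [(102, 51, 40, 80), (110, 55, 40, 80)]

overlaps = [iou(t, g) for t, g in zip(tracker_boxes, ground_truth)]
print("mean overlap:", sum(overlaps) / len(overlaps))
```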
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant computational time required impedes their adoption in real-time applications, especially for video data with many frames per second. To overcome this limitation, this paper presents and evaluates the accuracy of a solution that finds F by combining two fast and consistent methods: SURF for selecting a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that the method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
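As a rough illustration of the SURF + eight-point pipeline described above (not the authors' implementation), the sketch below uses OpenCV; SURF requires the opencv-contrib build, and the image file names are placeholders.

```python
import cv2
import numpy as np

# Placeholder construction-site image pair.
img1 = cv2.imread("site_view_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("site_view_b.jpg", cv2.IMREAD_GRAYSCALE)

# SURF features (available via cv2.xfeatures2d in opencv-contrib builds).
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# Match descriptors and keep the better correspondences (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Eight-point estimate of the fundamental matrix F from the correspondences.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_8POINT)
print(F)
```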
Abstract:
Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates using the installed camera geometry (the distance between the cameras and their view angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
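A minimal sketch of the last step, recovering 3D coordinates from the two 2D detections, is given below; the projection matrices and image coordinates are hypothetical stand-ins for the installed camera system described in the paper, and OpenCV's triangulation routine stands in for whatever the framework actually uses.

```python
import numpy as np
import cv2

# Hypothetical 3x4 projection matrices (intrinsics assumed folded in) for two
# static cameras: camera 1 at the origin, camera 2 offset 1 m along the x axis.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
R, t = np.eye(3), np.array([[1.0], [0.0], [0.0]])
P2 = np.hstack([R, -R @ t])

# 2D coordinates of the same tracked entity in each view for one frame
# (placeholder values in normalized image coordinates).
pt1 = np.array([[0.32], [0.24]])
pt2 = np.array([[0.12], [0.24]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                 # Euclidean 3D position for this frame
print(X)
```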
Abstract:
When tracking resources in large-scale, congested, outdoor construction sites, the cost and time of purchasing, installing and maintaining the position sensors needed to track thousands of materials and hundreds of pieces of equipment and personnel can be significant. To alleviate this problem, a novel vision-based tracking method has been proposed that allows each sensor (camera) to monitor the positions of multiple entities simultaneously. This paper presents the full-scale validation experiments for this method. The validation included testing the method under harsh conditions at a variety of mega-project construction sites. The procedure for collecting data from the sites, the testing procedure, metrics, and results are reported. Full-scale validation demonstrates that the novel vision-based tracking method provides a good solution for tracking different entities on a large, congested construction site.
Abstract:
Manually inspecting concrete surface defects (e.g., cracks and air pockets) is labor-intensive and not always reliable. To overcome these limitations, automated inspection using image processing techniques has been proposed. However, current work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). Appropriate threshold values for these VIRs are selected through a manual rating survey. This way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether the VIRs are below the threshold values or not. The method presented in this paper was implemented in C++ and tested on a database of concrete surface images to validate its performance.
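To make the visual impact ratio (VIR) idea concrete, a minimal sketch follows. The binary defect masks, the area-based ratio, and the threshold values are all placeholders for illustration; the paper derives its thresholds from a manual rating survey and also uses attributes such as the air-pocket count.

```python
import numpy as np

def visual_impact_ratio(defect_mask: np.ndarray) -> float:
    """Fraction of the surface image covered by detected defect pixels."""
    return float(np.count_nonzero(defect_mask)) / defect_mask.size

# Hypothetical binary masks produced by the air-pocket and discoloration detectors.
air_pocket_mask = np.zeros((480, 640), dtype=bool)
discoloration_mask = np.zeros((480, 640), dtype=bool)
air_pocket_mask[100:110, 200:210] = True        # a small cluster of air pockets
discoloration_mask[300:340, 100:180] = True     # one discolored region

# Assumed acceptance thresholds (the paper selects these via a rating survey).
VIR_LIMITS = {"air_pockets": 0.01, "discoloration": 0.05}

ratios = {
    "air_pockets": visual_impact_ratio(air_pocket_mask),
    "discoloration": visual_impact_ratio(discoloration_mask),
}
acceptable = all(ratios[k] <= VIR_LIMITS[k] for k in ratios)
print(ratios, "acceptable" if acceptable else "needs repair")
```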
Abstract:
On-site tracking in open construction sites is often difficult because of the large number of items that are present and need to be tracked. Additionally, the many occlusions and obstructions present create a highly complex tracking environment. Existing tracking methods are based mainly on radio frequency technologies, including Global Positioning Systems (GPS), Radio Frequency Identification (RFID), Bluetooth, Wireless Fidelity (Wi-Fi), and Ultra-Wideband (UWB). These methods require considerable pre-processing time, since tags must be manually deployed and a record kept of the items they are placed on. On construction sites with numerous entities, tag installation, maintenance and decommissioning become an issue, increasing the cost and time needed to implement these tracking methods. This paper presents a novel method for open site tracking with construction cameras based on machine vision. In this method, video feeds are collected from on-site video cameras, and the user selects the entity to be tracked. The entity is tracked in each video using 2D vision tracking. Epipolar geometry is then used to calculate the depth of the marked area and provide the 3D location of the entity. This method addresses the limitations of radio frequency methods by being unobtrusive and using inexpensive, easy-to-deploy equipment. The method has been implemented in a C++ prototype, and preliminary results indicate its effectiveness.
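A rough sketch of the first stage, selecting an entity and following it in a single camera's video with an off-the-shelf 2D tracker, appears below; it is not the C++ prototype, the video path is a placeholder, and OpenCV's CSRT tracker stands in for whichever 2D vision tracker the method uses.

```python
import cv2

# Placeholder video file standing in for one on-site construction camera.
cap = cv2.VideoCapture("site_camera_1.mp4")
ok, frame = cap.read()

# The user marks the entity to track by drawing a bounding box on the first frame.
roi = cv2.selectROI("select entity", frame, showCrosshair=True, fromCenter=False)

# CSRT stands in for the 2D vision tracker (may require an opencv-contrib build).
tracker = cv2.TrackerCSRT_create()
tracker.init(frame, roi)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = (int(v) for v in box)
        # The box centre (x + w/2, y + h/2) is the 2D coordinate handed to the
        # epipolar-geometry step that recovers depth from the second camera.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()
```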