118 results for Vision-based row tracking algorithm


Relevance: 100.00%

Abstract:

Manual inspection of concrete surface defects (e.g., cracks and air pockets) is labor-intensive and not always reliable. To overcome these limitations, automated inspection using image processing techniques has been proposed. However, current work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). The appropriate threshold values for these VIRs are selected through a manual rating survey. This way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether their VIRs are below the threshold values. The method presented in this paper was implemented in C++ and tested on a database of concrete surface images to validate its performance. Read More: http://ascelibrary.org/doi/abs/10.1061/%28ASCE%29CO.1943-7862.0000126?journalCode=jcemd4
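
The abstract does not give the exact VIR formulas or the survey-derived thresholds; a minimal sketch of the thresholding step, assuming each VIR is simply the fraction of the inspected surface covered by detected defect pixels and using placeholder threshold values, might look like this:

```python
import numpy as np

# Placeholder acceptance thresholds; the paper derives its values from a manual rating survey.
AIR_POCKET_VIR_THRESHOLD = 0.02
DISCOLORATION_VIR_THRESHOLD = 0.05

def visual_impact_ratio(defect_mask: np.ndarray) -> float:
    """Fraction of the inspected surface covered by detected defect pixels."""
    return float(np.count_nonzero(defect_mask)) / defect_mask.size

def assess_surface(air_pocket_mask: np.ndarray, discoloration_mask: np.ndarray) -> dict:
    """Judge whether each defect's VIR stays below its acceptance threshold."""
    air_vir = visual_impact_ratio(air_pocket_mask)
    disc_vir = visual_impact_ratio(discoloration_mask)
    return {
        "air_pocket_vir": air_vir,
        "discoloration_vir": disc_vir,
        "air_pockets_acceptable": air_vir <= AIR_POCKET_VIR_THRESHOLD,
        "discoloration_acceptable": disc_vir <= DISCOLORATION_VIR_THRESHOLD,
    }
```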

Relevance: 100.00%

Abstract:

Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily be changed by external factors such as wind, vibrations, or an unintentional push from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally intended for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which encodes the epipolar constraints; the intrinsic and extrinsic properties of the cameras are then recovered from this calculation. Test results are presented along with recommendations for further improvement.
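
As an illustration of the fundamental-matrix step described above (not the authors' full auto-calibration pipeline), two frames can be matched and the epipolar geometry estimated with standard OpenCV calls; recovering the relative pose additionally assumes an initial intrinsic matrix K is available, which the paper's method would instead refine as part of self-calibration:

```python
import cv2
import numpy as np

def epipolar_geometry_from_frames(frame1, frame2, K):
    """Estimate the fundamental matrix from two frames and recover relative pose.

    K is an assumed initial intrinsic matrix; refinement of intrinsics under
    the paper's constraints is not reproduced here.
    """
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # The fundamental matrix encodes the epipolar constraint x2^T F x1 = 0.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)

    # Essential matrix E = K^T F K, decomposed into rotation and translation.
    E = K.T @ F @ K
    inliers = mask.ravel() == 1
    _, R, t, _ = cv2.recoverPose(E, pts1[inliers], pts2[inliers], K)
    return F, R, t
```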

Relevance: 100.00%

Abstract:

Existing machine vision-based 3D reconstruction software programs provide a promising low-cost, and in some cases automatic, solution for infrastructure as-built documentation. However, in several steps of the reconstruction process they rely only on detecting and matching corner-like features across multiple views of a scene. In infrastructure scenes that include uniform materials and poorly textured surfaces, these programs therefore fail with high probability due to a lack of feature points. Moreover, except for a few programs that generate dense 3D models through significantly time-consuming algorithms, most provide only a sparse reconstruction that does not necessarily include required points such as corners or edges; these points then have to be matched manually across views, which can make the process considerably laborious. To address these limitations, this paper presents a video-based as-built documentation method that automatically builds detailed 3D maps of a scene by aligning edge points between video frames. Compared to corner-like features, edge points are far more plentiful even in untextured scenes and often carry important semantic associations. The method has been tested on poorly textured infrastructure scenes, and the results indicate that a combination of edge and corner-like features would allow a broader range of scenes to be handled.
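
The abstract does not detail the alignment step; a simplified sketch of the general idea, sampling Canny edge points in one frame and following them into the next with pyramidal Lucas-Kanade optical flow, could look like the following (the authors' actual edge-alignment and mapping procedure is more involved):

```python
import cv2
import numpy as np

def track_edge_points(prev_gray, next_gray, max_points=5000):
    """Sample edge points in one grayscale frame and follow them into the next.

    This only illustrates edge-point correspondence between consecutive video
    frames; dense mapping and pose estimation are omitted.
    """
    edges = cv2.Canny(prev_gray, 50, 150)
    ys, xs = np.nonzero(edges)
    if len(xs) > max_points:                 # subsample edge pixels for speed
        idx = np.random.choice(len(xs), max_points, replace=False)
        xs, ys = xs[idx], ys[idx]
    pts_prev = np.float32(np.stack([xs, ys], axis=1)).reshape(-1, 1, 2)

    # Pyramidal Lucas-Kanade optical flow gives each edge point's position in
    # the next frame, plus a status flag marking points that were found.
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts_prev, None)
    good = status.ravel() == 1
    return pts_prev[good].reshape(-1, 2), pts_next[good].reshape(-1, 2)
```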

Relevance: 100.00%

Abstract:

Large concrete structures need to be inspected in order to assess their current physical and functional state, to predict future conditions, to support investment planning and decision making, and to allocate limited maintenance and rehabilitation resources. Current procedures for condition and safety assessment of large concrete structures are performed manually, leading to subjective and unreliable results, costly and time-consuming data collection, and safety issues. To address these limitations, automated machine vision-based inspection procedures have increasingly been proposed by the research community. This paper presents current achievements and open challenges in vision-based inspection of large concrete structures. First, the general concept of Building Information Modeling is introduced. Then, vision-based 3D reconstruction and as-built spatial modeling of concrete civil infrastructure are presented. Following that, the focus is set on structural member recognition as well as on concrete damage detection and assessment, exemplified for concrete columns. Although some challenges are still under investigation, it can be concluded that vision-based inspection methods have improved significantly over the last 10 years; as-built spatial modeling as well as damage detection and assessment of large concrete structures now have the potential to be fully automated.

Relevance: 100.00%

Abstract:

Two new maximum power point tracking (MPPT) algorithms are presented: the input-voltage-sensor and duty-ratio MPPT algorithm (ViSD algorithm), and the output-voltage-sensor and duty-ratio MPPT algorithm (VoSD algorithm). The ViSD and VoSD algorithms have the features, characteristics, and advantages of the incremental conductance (INC) algorithm; but unlike the INC algorithm, which requires two sensors (a voltage sensor and a current sensor), the two algorithms are more desirable because they require only one sensor: a voltage sensor. Moreover, the VoSD technique is less complex and hence requires less computational processing. Both the ViSD and the VoSD techniques operate by maximising power at the converter output instead of the input. The ViSD algorithm uses a voltage sensor placed at the input of a boost converter, while the VoSD algorithm uses a voltage sensor placed at the output of a boost converter. © 2011 IEEE.
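
The abstract does not give the published update laws; a minimal sketch of the single-sensor idea behind the VoSD variant, adjusting the boost converter's duty ratio using only the measured output voltage and assuming a resistive load so that output power rises and falls with the output voltage, might look like this (function name, step size, and duty limits are illustrative, not from the paper):

```python
def vosd_mppt_step(v_out: float, state: dict, step: float = 0.005) -> float:
    """One update of a hill-climbing duty-ratio controller driven only by the
    output-voltage sensor of a boost converter.

    Assumes a resistive load, so output power is proportional to v_out**2 and
    maximising v_out maximises power. Illustrative reconstruction only, not
    the published VoSD update law.
    """
    prev_v = state.get("prev_v_out", 0.0)
    duty = state.get("duty", 0.5)
    direction = state.get("direction", +1)

    # If the last duty-ratio change reduced the output voltage, reverse course.
    if v_out < prev_v:
        direction = -direction

    duty = min(max(duty + direction * step, 0.05), 0.95)  # keep duty in a safe range

    state.update(prev_v_out=v_out, duty=duty, direction=direction)
    return duty

# Usage sketch: call once per control period with the latest output-voltage sample.
# state = {}
# duty = vosd_mppt_step(read_output_voltage(), state)   # read_output_voltage() is hypothetical
```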

Relevance: 100.00%

Abstract:

We present a novel framework for identifying and tracking dominant agents in groups. Our proposed approach relies on a causality detection scheme that is capable of ranking agents with respect to their contribution in shaping the system's collective behaviour based exclusively on the agents' observed trajectories. Further, the reasoning paradigm is made robust to multiple emissions and clutter by employing a class of recently introduced Markov chain Monte Carlo-based group tracking methods. Examples are provided that demonstrate the strong potential of the proposed scheme in identifying actual leaders in swarms of interacting agents and moving crowds. © 2011 IEEE.
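
The abstract does not specify the causality measure; one common trajectory-based choice is a Granger-style test, which scores agent j's influence on agent i by how much agent j's past positions improve the prediction of agent i's motion. The sketch below is a stand-in for the paper's causality detection scheme, not a reproduction of it, and ignores the group-tracking front end:

```python
import numpy as np

def granger_influence(target: np.ndarray, driver: np.ndarray, lag: int = 2) -> float:
    """Granger-style influence of `driver` on `target` (two 1-D position series).

    Compares the residual variance of an autoregressive model of the target
    with and without the driver's lagged positions; a larger score means the
    driver helps predict the target more.
    """
    steps = range(lag, len(target))
    y = target[lag:]
    X_self = np.array([target[t - lag:t] for t in steps])
    X_full = np.array([np.concatenate([target[t - lag:t], driver[t - lag:t]]) for t in steps])

    def residual_variance(X):
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(np.var(y - X @ coef)) + 1e-12   # epsilon guards the ratio below

    return float(np.log(residual_variance(X_self) / residual_variance(X_full)))

def rank_dominant_agents(trajectories: np.ndarray) -> list:
    """trajectories has shape (num_agents, num_steps), e.g. one coordinate of each
    agent over time; agents are ranked by their summed influence on the others."""
    k = trajectories.shape[0]
    scores = [sum(granger_influence(trajectories[i], trajectories[j])
                  for i in range(k) if i != j)
              for j in range(k)]
    return sorted(range(k), key=lambda j: scores[j], reverse=True)
```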

Relevance: 100.00%

Abstract:

Tracking of project-related entities such as construction equipment, materials, and personnel is used to calculate productivity, detect travel path conflicts, enhance safety on the site, and monitor the project. Radio frequency tracking technologies (Wi-Fi, RFID, UWB) and GPS are commonly used for this purpose. However, on large-scale sites, deploying, maintaining, and removing such systems can be costly and time-consuming. In addition, privacy issues with personnel tracking often limit the usability of these technologies on construction sites. This paper presents a vision-based tracking framework that holds promise to address these limitations. The framework uses videos from a set of two or more static cameras placed on construction sites. In each camera view, the framework identifies and tracks construction entities, providing 2D image coordinates across frames. By combining the 2D coordinates according to the installed camera geometry (the distance between the cameras and their viewing angles), 3D coordinates are calculated at each frame. The results of each step are presented to illustrate the feasibility of the framework.
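
The final step, combining matched 2D image coordinates from two calibrated static cameras into a 3D position, can be illustrated with OpenCV's triangulation routine, assuming the 3x4 projection matrices of both cameras are known from the installation; the framework's own 2D tracking and correspondence steps are not shown:

```python
import cv2
import numpy as np

def triangulate_entity(P1: np.ndarray, P2: np.ndarray,
                       pt_cam1: tuple, pt_cam2: tuple) -> np.ndarray:
    """Recover the 3D position of a tracked entity from its 2D image
    coordinates in two calibrated camera views.

    P1 and P2 are the 3x4 projection matrices of the two static cameras,
    assumed known from the installed camera geometry.
    """
    x1 = np.array(pt_cam1, dtype=np.float64).reshape(2, 1)
    x2 = np.array(pt_cam2, dtype=np.float64).reshape(2, 1)
    X_h = cv2.triangulatePoints(P1, P2, x1, x2)   # homogeneous 4x1 result
    return (X_h[:3] / X_h[3]).ravel()             # Euclidean 3D coordinates
```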

Relevance: 100.00%

Abstract:

When tracking resources on large-scale, congested, outdoor construction sites, the cost and time of purchasing, installing, and maintaining the position sensors needed to track thousands of materials and hundreds of pieces of equipment and personnel can be significant. To alleviate this problem, a novel vision-based tracking method that allows each sensor (camera) to monitor the positions of multiple entities simultaneously has been proposed. This paper presents the full-scale validation experiments for this method. The validation included testing the method under harsh conditions at a variety of mega-project construction sites. The procedure for collecting data from the sites, the testing procedure, the metrics, and the results are reported. The full-scale validation demonstrates that the novel vision-based tracking method provides a good solution for tracking different entities on large, congested construction sites.

Relevance: 100.00%

Abstract:

Tracking applications provide real-time on-site information that can be used to detect travel path conflicts, calculate crew productivity, and eliminate unnecessary processes on the site. This paper presents the validation of a novel vision-based tracking methodology at the Egnatia Odos Motorway in Thessaloniki, Greece. Egnatia Odos is a motorway that connects Turkey with Italy through Greece. Its multiple open construction sites serve as an ideal multi-site test bed for validating construction site tracking methods. The vision-based tracking methodology uses video cameras and computer algorithms to calculate the 3D positions of project-related entities (e.g., personnel, materials, and equipment) on construction sites. The approach provides an unobtrusive, inexpensive way of effectively identifying and tracking the 3D location of entities. The process followed in this study starts by acquiring video data from multiple synchronous cameras at several large-scale project sites of Egnatia Odos, such as tunnels, interchanges, and bridges under construction. Subsequent steps include the evaluation of the collected data and, finally, the 3D tracking of selected entities (heavy equipment and personnel). The accuracy and precision of the method's results are evaluated by comparing them with the actual 3D positions of the objects, thus assessing the 3D tracking method's effectiveness.
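
The evaluation described above compares estimated 3D positions against known ground-truth positions; a small sketch of the kind of metrics such a comparison would produce (accuracy as mean Euclidean error, precision as its spread) is shown below, noting that the study's own metric definitions are not reproduced here:

```python
import numpy as np

def tracking_error_metrics(estimated: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Compare estimated 3D positions (N x 3) with their ground-truth positions.

    Accuracy is reported as the mean Euclidean error and precision as the
    standard deviation of that error; the study's definitions may differ.
    """
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return {
        "mean_error": float(errors.mean()),   # accuracy
        "std_error": float(errors.std()),     # precision / repeatability
        "max_error": float(errors.max()),
    }
```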

Relevance: 100.00%

Abstract:

Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide the spatial locations of a large number of entities, which is beneficial on large-scale, congested construction sites. However, even a few false detections prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. The method detects construction equipment in the video view by taking advantage of entities' motion, shape, and color distribution: background subtraction, Haar-like features, and eigen-images are used for the motion, shape, and color information, respectively. A tracking algorithm then steps into the process to compensate for false detections, which are identified by catching drastic changes in object size and appearance and are replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance detection performance.
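
As a rough illustration of the fusion idea (not the authors' implementation), the sketch below combines a background-subtraction motion mask with a Haar cascade detector and falls back to template matching when a detection's bounding box changes size too drastically between frames; the cascade file name and the size-change threshold are placeholders, and the eigen-image color cue is omitted:

```python
import cv2
import numpy as np

# Placeholder cascade assumed to be trained for the target equipment class.
detector = cv2.CascadeClassifier("equipment_cascade.xml")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

MAX_AREA_CHANGE = 0.5   # placeholder: flag detections whose area jumps by more than 50%

def detect_or_track(frame, prev_box, prev_template):
    """Detect equipment in a frame; if the detection looks like a false positive
    (drastic size change), fall back to template-matching based tracking."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    fg_mask = bg_subtractor.apply(frame)                  # motion cue
    moving = cv2.bitwise_and(gray, gray, mask=fg_mask)    # restrict search to moving pixels

    boxes = detector.detectMultiScale(moving, scaleFactor=1.1, minNeighbors=4)
    if len(boxes) > 0:
        x, y, w, h = boxes[0]
        if prev_box is not None:
            change = abs(w * h - prev_box[2] * prev_box[3]) / float(prev_box[2] * prev_box[3])
            if change > MAX_AREA_CHANGE:                  # likely a false detection
                return track_by_template(gray, prev_box, prev_template)
        return (x, y, w, h), gray[y:y + h, x:x + w]

    if prev_box is not None:                              # nothing detected: keep tracking
        return track_by_template(gray, prev_box, prev_template)
    return None, None

def track_by_template(gray, prev_box, prev_template):
    """Locate the entity's previous appearance in the current frame."""
    result = cv2.matchTemplate(gray, prev_template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x, y) = cv2.minMaxLoc(result)
    w, h = prev_box[2], prev_box[3]
    return (x, y, w, h), gray[y:y + h, x:x + w]
```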