26 results for Coproducts in frames
in the Cambridge University Engineering Department Publications Database
Abstract:
Vision-based tracking can provide the spatial locations of construction entities such as equipment, workers, and materials on large-scale, congested construction sites. It tracks entities in video streams by inferring their locations from the entities' visual features and motion histories. To initiate the process, it is necessary to determine the pixel areas corresponding to the construction entities to be tracked in the subsequent video frames. Fully automating the process therefore requires an automated means of initialization. This paper presents a construction worker detection method that automatically recognizes and localizes construction workers in video frames. The method first finds the foreground areas of moving objects using background subtraction. Within these foreground areas, construction workers are recognized using the histogram of oriented gradients (HOG) and a histogram of HSV colors. HOGs have proven effective for detecting people, and the HSV color histogram helps differentiate ordinary pedestrians from construction workers wearing safety vests. Preliminary experiments show that the proposed method has the potential to automate the initialization of vision-based tracking.
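As an illustration of the pipeline this abstract describes, below is a minimal sketch, assuming OpenCV in Python, of background subtraction followed by a stock HOG pedestrian detector restricted to the moving regions. The video file name, blob-size threshold, and detector settings are illustrative assumptions, not values from the paper.

```python
# Sketch: background subtraction to find moving regions, then a HOG person
# detector run only inside those regions. Thresholds are illustrative.
import cv2

cap = cv2.VideoCapture("site_video.mp4")          # hypothetical input video
bg_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = bg_sub.apply(frame)                 # foreground = moving pixels
    fg_mask = cv2.medianBlur(fg_mask, 5)
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w < 64 or h < 128:                     # smaller than the HOG window
            continue
        roi = frame[y:y + h, x:x + w]
        # Run the pedestrian-style HOG detector only inside the moving region.
        rects, _ = hog.detectMultiScale(roi, winStride=(8, 8))
        for (rx, ry, rw, rh) in rects:
            cv2.rectangle(frame, (x + rx, y + ry),
                          (x + rx + rw, y + ry + rh), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == 27:               # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```

A color-histogram check on each HOG hit (see the sketch after the third abstract) would then separate vest-wearing workers from other pedestrians.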
Abstract:
Vision-based object detection has been introduced in construction for recognizing and locating construction entities in on-site camera views. It can provide the spatial locations of a large number of entities, which is beneficial on large-scale, congested construction sites. However, even a few false detections prevent its practical application. To resolve this issue, this paper presents a novel hybrid method for locating construction equipment that fuses detection and tracking algorithms. The method detects construction equipment in the video view by exploiting the entities' motion, shape, and color distribution: background subtraction, Haar-like features, and eigen-images capture the motion, shape, and color information, respectively. A tracking algorithm then steps into the process to compensate for false detections, which are identified by catching drastic changes in object size and appearance and are replaced with tracking results. Preliminary experiments show that the combination with tracking has the potential to enhance detection performance.
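The false-detection check described in this abstract might be sketched as follows. This is an assumed Python/OpenCV illustration, not the paper's implementation; the size-ratio and histogram-similarity thresholds, the helper names, and the abstract tracker fallback are placeholders.

```python
# Sketch: reject a new detection when its size or appearance changes too
# sharply relative to the last accepted box, and keep the tracker's
# prediction instead. Thresholds are illustrative, not the paper's values.
import cv2

def appearance_hist(frame, box):
    """Normalized hue/saturation histogram of the boxed region."""
    x, y, w, h = box
    roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([roi], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def is_false_detection(frame, new_box, prev_box, prev_hist,
                       max_size_ratio=1.5, min_similarity=0.5):
    """Flag detections whose size or appearance changes drastically."""
    area_new = new_box[2] * new_box[3]
    area_prev = prev_box[2] * prev_box[3]
    size_ratio = max(area_new, area_prev) / max(min(area_new, area_prev), 1)
    similarity = cv2.compareHist(appearance_hist(frame, new_box), prev_hist,
                                 cv2.HISTCMP_CORREL)
    return size_ratio > max_size_ratio or similarity < min_similarity

# Usage inside a detect-then-verify loop (tracker object left abstract):
# if is_false_detection(frame, detected_box, last_box, last_hist):
#     detected_box = tracker_predicted_box   # fall back to the tracking result
```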
Abstract:
Monitoring the location of resources on large-scale, congested, outdoor sites can be performed more efficiently with vision tracking, as this approach does not require any pre-tagging of resources. However, the greatest impediment to the use of vision tracking here is the lack of detection methods needed to automatically mark the resources of interest and initiate the tracking. This paper presents such a novel method for construction worker detection, which localizes construction workers in video frames. The proposed method exploits motion, shape, and color cues to narrow the detection regions down to moving objects, people, and finally construction workers, respectively. The three cues are characterized using background subtraction, the histogram of oriented gradients (HOG), and the HSV color histogram. The method has been tested on videos taken in various environments, and the results demonstrate its suitability for the automatic initialization of vision trackers.
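The color cue used in this abstract to separate vest-wearing workers from ordinary pedestrians could be approximated as below. The HSV ranges for high-visibility orange and yellow-green, the torso band, and the pixel-fraction threshold are assumptions made for illustration only.

```python
# Sketch: after a person detection, check whether the torso region contains a
# large fraction of high-visibility (safety-vest) colors. Ranges and the 0.2
# fraction are illustrative assumptions, not values from the paper.
import cv2

def looks_like_safety_vest(frame_bgr, box, min_fraction=0.2):
    x, y, w, h = box
    # Torso: roughly the middle band of the detected person box.
    torso = frame_bgr[y + h // 5: y + 3 * h // 5, x: x + w]
    hsv = cv2.cvtColor(torso, cv2.COLOR_BGR2HSV)
    # Fluorescent orange and yellow-green ranges on OpenCV's 0-179 hue scale.
    orange = cv2.inRange(hsv, (5, 120, 120), (20, 255, 255))
    yellow_green = cv2.inRange(hsv, (25, 120, 120), (45, 255, 255))
    vest_pixels = cv2.countNonZero(cv2.bitwise_or(orange, yellow_green))
    return vest_pixels / max(torso.shape[0] * torso.shape[1], 1) >= min_fraction
```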
Abstract:
Human choices are remarkably susceptible to the manner in which options are presented. This so-called "framing effect" represents a striking violation of standard economic accounts of human rationality, although its underlying neurobiology is not understood. We found that the framing effect was specifically associated with amygdala activity, suggesting a key role for an emotional system in mediating decision biases. Moreover, across individuals, orbital and medial prefrontal cortex activity predicted a reduced susceptibility to the framing effect. This finding highlights the importance of incorporating emotional processes within models of human choice and suggests how the brain may modulate the effect of these biasing influences to approximate rationality.
Abstract:
To manipulate an object skillfully, the brain must learn its dynamics, specifying the mapping between applied force and motion. A fundamental issue in sensorimotor control is whether such dynamics are represented in an extrinsic frame of reference tied to the object or an intrinsic frame of reference linked to the arm. Although previous studies have suggested that objects are represented in arm-centered coordinates [1-6], all of these studies have used objects with unusual and complex dynamics. Thus, it is not known how objects with natural dynamics are represented. Here we show that objects with simple (or familiar) dynamics and those with complex (or unfamiliar) dynamics are represented in object- and arm-centered coordinates, respectively. We also show that objects with simple dynamics are represented with an intermediate coordinate frame when vision of the object is removed. These results indicate that object dynamics can be flexibly represented in different coordinate frames by the brain. We suggest that with experience, the representation of the dynamics of a manipulated object may shift from a coordinate frame tied to the arm toward one that is linked to the object. The additional complexity required to represent dynamics in object-centered coordinates would be economical for familiar objects because such a representation allows object use regardless of the orientation of the object in hand.