18 results for: level set methods, nevus image segmentation, medical images, regularization
Abstract:
Commercial far-range (>10 m) infrastructure spatial data collection methods are not fully automated. They require a significant amount of manual post-processing, and in some cases the equipment costs are substantial. This paper presents a method that is the first step of a stereo videogrammetric framework and holds promise for addressing these issues. Under this method, video streams are initially collected from a calibrated set of two video cameras. For each pair of simultaneous video frames, visual feature points are detected and their spatial coordinates are then computed. The result, in the form of a sparse 3D point cloud, is the basis for the next steps in the framework (i.e., camera motion estimation and dense 3D reconstruction). A data set collected from an ongoing infrastructure project is used to demonstrate the merits of the method. A comparison with existing tools is also presented to indicate how the proposed method differs in level of automation and accuracy of results.
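For readers unfamiliar with the sparse-reconstruction step described above (feature detection in a pair of simultaneous frames followed by triangulation), the following is a minimal Python/OpenCV sketch. It is not the paper's implementation: the feature type (ORB), the matcher, and the calibrated projection matrices `P1`, `P2` are assumptions made for illustration.

```python
# Minimal sketch of one sparse-reconstruction step for a calibrated stereo frame pair.
# NOT the paper's implementation: the feature detector (ORB), the matcher, and the
# 3x4 projection matrices P1, P2 (from prior calibration) are illustrative assumptions.
import cv2
import numpy as np

def sparse_cloud_from_frame_pair(left, right, P1, P2):
    """Detect features in both frames, match them, and triangulate a sparse 3D point cloud."""
    orb = cv2.ORB_create(nfeatures=2000)                # assumed feature detector
    kp1, des1 = orb.detectAndCompute(left, None)
    kp2, des2 = orb.detectAndCompute(right, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # 2xN pixel coordinates of the matched features in each view
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches]).T
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches]).T

    # Linear triangulation: homogeneous 4xN result, normalised to Euclidean 3D points
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)
    return (X_h[:3] / X_h[3]).T                         # N x 3 sparse point cloud
```

In the framework described above, a cloud like this would feed the subsequent camera motion estimation and dense reconstruction stages.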
Abstract:
Looking for a target in a visual scene becomes more difficult as the number of stimuli increases. In a signal detection theory view, this is due to the cumulative effect of noise in the encoding of the distractors and, potentially on top of that, to an increase of noise (i.e., a decrease in precision) per stimulus with set size, reflecting divided attention. It has long been argued that human visual search behavior can be accounted for by the first factor alone. While such an account seems adequate for search tasks in which all distractors have the same, known feature value (i.e., are maximally predictable), we recently found a clear effect of set size on encoding precision when distractors are drawn from a uniform distribution (i.e., when they are maximally unpredictable). Here we interpolate between these two extreme cases to examine which of the two conclusions holds more generally as distractor statistics are varied. In one experiment, we vary the level of distractor heterogeneity; in another, we dissociate distractor homogeneity from predictability. In all conditions of both experiments, we find a strong decrease in precision with increasing set size, suggesting that set-size-independent precision is the exception rather than the rule.
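The following is a minimal simulation sketch of the two accounts contrasted above: distractor noise alone versus an additional loss of per-item precision with set size. The max-rule decision model, the noise levels, and the assumed 1/n precision scaling are illustrative choices, not the paper's experimental data or model fits.

```python
# Illustrative comparison of two signal-detection accounts of visual search:
# (1) per-item precision fixed across set sizes, (2) precision falling as 1/n
# ("divided attention"). All parameters are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def two_afc_accuracy(set_size, sigma, trials=20000):
    """P(max response on a target-present display exceeds max on a target-absent display)."""
    present = rng.normal(0.0, sigma, (trials, set_size))
    present[:, 0] += 1.0                                  # one item carries the target signal
    absent = rng.normal(0.0, sigma, (trials, set_size))
    return np.mean(present.max(axis=1) > absent.max(axis=1))

for n in (2, 4, 8):
    fixed = two_afc_accuracy(n, sigma=0.5)                   # precision independent of set size
    divided = two_afc_accuracy(n, sigma=0.5 * np.sqrt(n))    # precision proportional to 1/n (assumed)
    print(f"set size {n}: fixed-precision acc = {fixed:.2f}, divided-attention acc = {divided:.2f}")
```

Both accounts predict worse performance at larger set sizes (more noisy distractors feed the max rule), but the divided-attention variant predicts a steeper decline, which is why the two must be separated by estimating encoding precision directly, as in the experiments described above.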
Abstract:
Approximately 40% of annual worldwide demand for steel is used to replace products that have failed. With this percentage set to rise, extending the lifespan of steel in products presents a significant opportunity to reduce demand and thus decrease carbon dioxide emissions from steel production. This article presents a new, simplified framework with which to analyse product failure. When applied to the products that dominate steel use, this framework reveals that they are often replaced because a component/sub-assembly becomes degraded, inferior, unsuitable or worthless. In light of this, four products, representative of high-steel-content products in general, are analysed at the component level to determine steel mass and cost profiles over the lifespan of each product. The results show that the majority of the steel components are underexploited (still functioning when the product is discarded); in particular, the potential lifespan of the steel-rich structure is typically much greater than its actual lifespan. Twelve case studies in which product or component life has been extended are then presented. The resulting evidence is used to tailor life-extension strategies to each reason for product failure and to identify the economic motivations for implementing these strategies. The results suggest that a product template in which the long-lived structure accounts for a relatively high share of costs, while short-lived components can be easily replaced (offering profit to the producer and enhanced utility to owners), encourages product life extension. © 2013 The Author.
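The component-level bookkeeping described above can be illustrated with a small sketch: compare each component's potential lifespan with the product's actual lifespan at discard and flag the steel mass that is still functional. The component names and all numbers below are hypothetical and serve only to show the form of the calculation.

```python
# Illustrative component-level underexploitation check. All names and figures are
# hypothetical; they are not taken from the article's four product case studies.
PRODUCT_LIFESPAN_YEARS = 12   # hypothetical actual lifespan of the product at discard

components = [
    # (name, steel mass in kg, potential lifespan in years)
    ("structure/frame", 180.0, 30),
    ("drive assembly",   40.0, 12),
    ("outer panels",     25.0, 15),
]

underexploited = sum(mass for _, mass, life in components if life > PRODUCT_LIFESPAN_YEARS)
total = sum(mass for _, mass, _ in components)
print(f"underexploited steel: {underexploited:.0f} of {total:.0f} kg "
      f"({100 * underexploited / total:.0f}%) still functional at discard")
```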