265 results for Image matching
Abstract:
Real-time cardiac ultrasound allows monitoring of heart motion during intracardiac beating-heart procedures. Our application assists atrial septal defect (ASD) closure techniques under real-time 3D ultrasound guidance. One major image-processing challenge is handling information at a high frame rate. We present an optimized block flow technique, which combines probability-based velocity computation for an entire block with template matching. We propose adapted similarity constraints, both from frame to frame to conserve energy and globally to minimize errors. We show tracking results on eight in-vivo 4D datasets acquired from porcine beating-heart procedures. Computing velocity at the block level with an optimized scheme, our technique tracks ASD motion at 41 frames/s. We analyze the errors of motion estimation and recover the cardiac cycle in ungated images. © 2007 IEEE.
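For reference, the template-matching component of such a tracker can be sketched as a brute-force normalized cross-correlation search. This is a minimal NumPy illustration assuming 2D grayscale arrays; the paper's optimized block-flow scheme is designed precisely to avoid this exhaustive per-pixel scan.

```python
import numpy as np

def ncc_match(frame, template):
    """Find the best match of `template` in `frame` by exhaustive
    normalized cross-correlation. Returns ((y, x), score)."""
    th, tw = template.shape
    fh, fw = frame.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_pos = -1.0, (0, 0)
    for y in range(fh - th + 1):
        for x in range(fw - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.linalg.norm(p) * tn
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

Computing one velocity per block, rather than scoring every candidate position as above, is what lets the authors' scheme reach 41 frames/s.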
Abstract:
Current procedures for post-earthquake safety and structural assessment are performed manually by a skilled triage team of structural engineers and certified inspectors. These procedures, and particularly the physical measurement of damage properties, are time-consuming and qualitative in nature. This paper proposes a novel method that automatically detects spalled regions on the surface of reinforced concrete columns and measures their properties in image data. Spalling is accepted as an important indicator of significant damage to structural elements during an earthquake. In this method, the region of spalling is first isolated by a local entropy-based thresholding algorithm. The exposure of longitudinal reinforcement (depth of spalling into the column) and the length of spalling along the column are then measured using a novel global adaptive thresholding algorithm in conjunction with template matching and morphological operations. The method was tested on a database of damaged RC column images collected after the 2010 Haiti earthquake, and comparison of the results with manual measurements indicates the validity of the method.
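The entropy-based thresholding step can be illustrated with a global, Kapur-style maximum-entropy threshold over a grayscale histogram. This is a simplified stand-in for the paper's local variant, and the 256-bin histogram is an assumption.

```python
import numpy as np

def max_entropy_threshold(hist):
    """Pick the threshold t that maximizes the summed Shannon
    entropies of the two classes split at t (Kapur-style criterion)
    over a 256-bin intensity histogram."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t] / p0          # within-class distributions
        q1 = p[t:] / p1
        h = -(q0[q0 > 0] * np.log(q0[q0 > 0])).sum() \
            - (q1[q1 > 0] * np.log(q1[q1 > 0])).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```

On a bimodal histogram, the criterion places the threshold in the valley between the two intensity modes, separating spalled from intact surface regions.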
Abstract:
In today's dynamic construction environment, it is important to access information quickly and efficiently in order to improve decision-making for construction managers. This is, in most cases, straightforward with today's technologies for data types with an inherent structure, residing primarily in established database structures such as estimating and scheduling software. However, previous research has demonstrated that a significant percentage of construction data is stored in semi-structured or unstructured formats (text, images, etc.), and that manually locating and identifying such data is a hard and time-consuming task. This paper focuses on construction site image data and presents a novel image retrieval model that interfaces with established construction data management structures. The model is designed to retrieve images from related objects in project models or construction databases using location, date, and material information (extracted from the image content with pattern recognition techniques).
Abstract:
Computational models of visual cortex, in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse, or how over-complete, a sparse representation should be has gone without a principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions, although for the Student-t prior the associated optimal basis size is only modestly over-complete.
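For concreteness, the sparsity penalty induced by a Student-t prior on coefficients can be written in its standard form; the degrees-of-freedom value `alpha` below is illustrative, not a value fitted in the paper.

```python
import numpy as np

def student_t_penalty(s, alpha=2.5):
    """Negative log (up to an additive constant) of a Student-t prior
    on a coefficient vector s: ((alpha+1)/2) * sum log(1 + s_i^2/alpha).
    Heavier tails (smaller alpha) penalize large coefficients less,
    which is what makes the prior sparsity-promoting."""
    return ((alpha + 1.0) / 2.0) * np.log1p(s ** 2 / alpha).sum()
```

Unlike an L2 penalty, this cost grows only logarithmically for large coefficients, so a few large activations are cheap while many small ones are not.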
Abstract:
Many transductive inference algorithms assume that distributions over training and test estimates should be related, e.g. by providing a large margin of separation on both sets. We use this idea to design a transduction algorithm which can be used without modification for classification, regression, and structured estimation. At its heart we exploit the fact that for a good learner the distributions over the outputs on training and test sets should match. This is a classical two-sample problem which can be solved efficiently in its most general form by using distance measures in Hilbert Space. It turns out that a number of existing heuristics can be viewed as special cases of our approach.
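The Hilbert-space two-sample distance referred to above can be realized as the (biased) squared Maximum Mean Discrepancy between the training and test output samples; the Gaussian RBF kernel and its bandwidth here are illustrative assumptions, not a prescription from the abstract.

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Biased squared Maximum Mean Discrepancy between samples X and Y
    (rows are observations) under a Gaussian RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

A transductive learner in this spirit would penalize hypotheses whose test-set outputs make this statistic large relative to the training-set outputs.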
Abstract:
Time-resolved particle image velocimetry (PIV) has been performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle, 80 μm in orifice diameter, was used to eject single droplets at a speed of 5 m/s. An optical microscope was used with an ultra-high-speed camera to capture the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus, at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle over a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement. The model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
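The core PIV computation, estimating a displacement from the cross-correlation peak of two interrogation windows, can be sketched with an FFT-based correlation. This is a minimal version assuming periodic shifts and integer-pixel displacements, not the paper's full time-resolved pipeline.

```python
import numpy as np

def piv_displacement(win_a, win_b):
    """Estimate the (dy, dx) pixel displacement between two interrogation
    windows by locating the peak of their FFT-based cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices in [0, N) to signed shifts in (-N/2, N/2].
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(int(v) for v in shift)
```

Real PIV codes refine this with sub-pixel peak fitting and window overlap; the sketch shows only the correlation principle.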
Abstract:
We present quantitative analysis of the ultra-high photoconductivity in amorphous oxide semiconductor (AOS) thin film transistors (TFTs), taking into account the sub-gap optical absorption in oxygen deficiency defects. We analyze the basis of photoconductivity in AOSs, explained in terms of an extended electron lifetime due to recombination retarded by hole localization. In addition, photoconductive gain in AOS photo-TFTs can be maximized by reducing the transit time associated with short channel lengths, making device scaling favourable for high-sensitivity operation. © 2012 IEEE.
Abstract:
The Phase Response Curve (PRC) has proven a useful tool for the reduction of complex oscillator models. It is also information that is often experimentally available to the biologist. This paper introduces a numerical tool based on sensitivity analysis of the PRC to adapt initial model parameters so as to match a particular PRC shape. We illustrate the approach on a simple biochemical model of a circadian oscillator. © 2011 IEEE.
Abstract:
A location- and scale-invariant predictor is constructed which exhibits good probability matching for extreme predictions outside the span of data drawn from a variety of (stationary) general distributions. It is constructed via the three-parameter (μ, σ, ξ) Generalized Pareto Distribution (GPD). The predictor is designed to provide exact probability matching for the GPD in both the extreme heavy-tailed limit and the extreme bounded-tail limit, whilst giving a good approximation to probability matching at all intermediate values of the tail parameter ξ. The predictor is valid even for sample sizes N as small as N = 3. The main purpose of this paper is to present the somewhat lengthy derivations, which draw heavily on the theory of hypergeometric functions, particularly the Lauricella functions. Whilst the construction is inspired by the Bayesian approach to the prediction problem, it considers the case of vague prior information about both parameters and model, and all derivations are undertaken using sampling theory.
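As background, the three-parameter GPD enters through its standard quantile function, shown below in textbook form; this is not the paper's probability-matching predictor itself.

```python
import math

def gpd_quantile(p, mu, sigma, xi):
    """Quantile function of the three-parameter (mu, sigma, xi)
    Generalized Pareto Distribution, inverting
    F(x) = 1 - (1 + xi*(x - mu)/sigma)**(-1/xi)."""
    if abs(xi) < 1e-12:                      # xi -> 0: exponential tail
        return mu - sigma * math.log(1.0 - p)
    return mu + sigma * ((1.0 - p) ** (-xi) - 1.0) / xi
```

The two regimes the paper treats exactly correspond to the limits of ξ: large positive ξ gives heavy tails, while ξ = -1 gives the bounded uniform case on [μ, μ + σ].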
Abstract:
This paper gives a new solution to the output feedback H2 model matching problem for a large class of delayed information sharing patterns. Existing methods for similar problems typically reduce the decentralized problem to a centralized problem of higher state dimension. In contrast, this paper demonstrates that the decentralized model matching solution can be constructed from the original centralized solution via quadratic programming. © 2013 AACC American Automatic Control Council.
Abstract:
This paper addresses the detection of bipedal motion in video sequences using point trajectories within a classification framework. Given a number of point trajectories, we find the subset of points arising from feet in bipedal motion by analysing their spatio-temporal correlation in a pairwise fashion. To this end, we introduce probabilistic trajectories as new features which associate each point over a sufficiently long time period in the presence of noise. They are extracted from directed acyclic graphs whose edges represent temporal point correspondences and are weighted by their matching probability in terms of appearance and location. The benefit of the new representation is that, in practice, it tolerates inherent ambiguity, for example due to occlusions. We then learn the correlation between the motion of two feet from the probabilistic trajectories using a decision forest classifier. The effectiveness of the algorithm is demonstrated in experiments on image sequences captured with a static camera, and extensions to deal with a moving camera are discussed. © 2013 Elsevier B.V. All rights reserved.