3 results for image reconstruction
in Boston University Digital Common
Abstract:
Material discrimination based on conventional or dual-energy X-ray computed tomography (CT) imaging can be ambiguous. X-ray diffraction imaging (XDI) constructs diffraction profiles of objects, providing molecular signature information that can be used to characterize the presence of specific materials. Combining X-ray CT and diffraction imaging can therefore lead to enhanced detection and identification of explosives in luggage screening. In this work we investigate techniques for joint reconstruction of CT absorption and X-ray diffraction profile images of objects to achieve improved image quality and enhanced material classification. The initial results have been validated via simulation of X-ray absorption and coherent scattering in two dimensions.
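As a rough, hypothetical illustration of what a joint reconstruction of this kind can look like (not this paper's actual algorithm), the sketch below treats absorption CT and diffraction imaging as two linear inverse problems coupled by a quadratic similarity penalty, solved by alternating gradient steps. The forward operators A and B, the coupling weight mu, the step size, and all variable names are illustrative assumptions.

```python
# Hypothetical sketch: jointly reconstruct an absorption image x_a and a
# diffraction-profile image x_d from linear measurements y_a = A x_a and
# y_d = B x_d, coupled by a simple quadratic penalty. This is NOT the
# paper's algorithm; the operators and weights are made up.
import numpy as np

def joint_reconstruct(A, y_a, B, y_d, mu=0.1, lr=1e-3, iters=500):
    """Alternating gradient descent on
       ||A x_a - y_a||^2 + ||B x_d - y_d||^2 + mu * ||x_a - x_d||^2."""
    x_a = np.zeros(A.shape[1])
    x_d = np.zeros(B.shape[1])
    for _ in range(iters):
        # Gradient step on the absorption image (CT data term + coupling).
        g_a = A.T @ (A @ x_a - y_a) + mu * (x_a - x_d)
        x_a -= lr * g_a
        # Gradient step on the diffraction image (XDI data term + coupling).
        g_d = B.T @ (B @ x_d - y_d) + mu * (x_d - x_a)
        x_d -= lr * g_d
    return x_a, x_d

# Toy usage with random matrices standing in for the CT and
# coherent-scatter forward operators.
rng = np.random.default_rng(0)
A = rng.standard_normal((120, 64))
B = rng.standard_normal((120, 64))
truth = rng.standard_normal(64)
x_a, x_d = joint_reconstruct(A, A @ truth, B, B @ truth)
```

In practice the two images represent different physical quantities, so a realistic coupling term would enforce shared structure (for example, aligned edges) rather than raw pixel similarity; the quadratic penalty here is just the simplest way to show the coupling.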
Abstract:
A non-linear supervised learning architecture, the Specialized Mapping Architecture (SMA), and its application to articulated body pose reconstruction from single monocular images are described. The architecture comprises a number of specialized mapping functions, each responsible for mapping certain portions (connected or not) of the input space, together with a feedback matching process. A probabilistic model for the architecture is described, along with a mechanism for learning its parameters. The learning problem is approached within a maximum likelihood estimation framework; we present Expectation-Maximization (EM) algorithms for two different instances of the likelihood probability. Performance is characterized by estimating human body postures from low-level visual features, showing promising results.
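To make the EM idea concrete, here is a minimal sketch assuming a mixture of linear mappings with Gaussian noise. It mirrors the flavor of fitting several specialized functions with EM, but it is not the paper's exact model, likelihood, or parameterization; all names and dimensions are assumptions.

```python
# Minimal sketch (not the paper's exact model): EM for a mixture of linear
# "specialized" mappings y ~ N(W_k x, sigma_k^2 I), loosely mirroring the
# idea of several mapping functions, each covering part of the input space.
import numpy as np

def em_mixture_of_mappings(X, Y, K=3, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    N, d_in = X.shape
    d_out = Y.shape[1]
    W = rng.standard_normal((K, d_out, d_in)) * 0.1
    pi = np.full(K, 1.0 / K)
    sigma2 = np.ones(K)
    for _ in range(iters):
        # E-step: responsibility of each mapping for each sample.
        log_r = np.empty((N, K))
        for k in range(K):
            resid = Y - X @ W[k].T
            sq = (resid ** 2).sum(axis=1)
            log_r[:, k] = (np.log(pi[k])
                           - 0.5 * d_out * np.log(2 * np.pi * sigma2[k])
                           - 0.5 * sq / sigma2[k])
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares per mapping, then noise and weights.
        for k in range(K):
            Rw = r[:, k:k + 1]
            XtRX = X.T @ (Rw * X) + 1e-6 * np.eye(d_in)
            W[k] = np.linalg.solve(XtRX, X.T @ (Rw * Y)).T
            resid = Y - X @ W[k].T
            sigma2[k] = ((Rw[:, 0] * (resid ** 2).sum(axis=1)).sum()
                         / (d_out * Rw.sum() + 1e-12))
        pi = r.mean(axis=0)
    return W, pi, sigma2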
Abstract:
A system for recovering 3D hand pose from monocular color sequences is proposed. The system employs a non-linear supervised learning framework, the Specialized Mappings Architecture (SMA), to map image features to likely 3D hand poses. The SMA's fundamental components are a set of specialized forward mapping functions and a single feedback matching function. The forward functions are estimated directly from training data, which in our case are examples of hand joint configurations and their corresponding visual features. The joint angle data in the training set is obtained via a CyberGlove, a glove with 22 sensors that monitor the angular motion of the palm and fingers. In training, the visual features are generated using a computer graphics module that renders the hand from arbitrary viewpoints given the 22 joint angles. We test the system both on synthetic sequences and on sequences captured with a color camera. The system automatically detects and tracks both hands of the user, computes the appropriate features, and estimates the 3D hand joint angles from those features. The results are encouraging given the complexity of the task.
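The inference step described above, in which the forward mappings propose candidate poses and the feedback function scores them, can be sketched as follows. The graphics renderer is replaced by a linear stand-in, and every function, matrix, and dimension here is an illustrative assumption rather than the paper's implementation.

```python
# Hypothetical sketch of the SMA inference loop: each specialized forward
# function proposes a hand pose from image features, a feedback function
# (a stand-in for the graphics renderer) maps each candidate pose back to
# feature space, and the candidate whose re-rendered features best match
# the observation wins. All functions here are illustrative stand-ins.
import numpy as np

def estimate_pose(features, forward_fns, feedback_fn):
    """Pick the candidate pose whose feedback features best match the input."""
    candidates = [f(features) for f in forward_fns]   # one pose per mapping
    errors = [np.linalg.norm(feedback_fn(p) - features) for p in candidates]
    return candidates[int(np.argmin(errors))]

# Toy stand-ins: linear forward mappings and a linear "renderer".
rng = np.random.default_rng(1)
d_feat, d_pose = 10, 22                               # 22 joint angles
Ws = [rng.standard_normal((d_pose, d_feat)) for _ in range(3)]
R = rng.standard_normal((d_feat, d_pose))             # pose -> features
forward_fns = [lambda v, W=W: W @ v for W in Ws]
feedback_fn = lambda pose: R @ pose

obs = rng.standard_normal(d_feat)
pose = estimate_pose(obs, forward_fns, feedback_fn)
```

Selecting among candidates by re-rendering is what lets the system use several forward mappings at once: each mapping only needs to be accurate on its own region of feature space, and the feedback match resolves which region the current input belongs to.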