202 results for visual computation


Relevance: 20.00%

Abstract:

Biological sensing is explored through novel stable colloidal dispersions of silica nanocomposites copolymerized with pyrrole and pyrrole-benzophenone (PPy-SiO2-PPyBPh), which allow covalent linking of biological molecules through light mediation. The mechanism of nanocomposite attachment to a model protein is studied using gold-labeled cholera toxin B (CTB) to enhance the contrast in electron microscopy imaging. The biological test itself is carried out without gold labeling, i.e., using CTB only. The protein is shown to be covalently bound through the benzophenone groups. When the reactive PPy-SiO2-PPyBPh-CTB nanocomposite is exposed to specific recognition anti-CTB immunoglobulins, a qualitative visual agglutination assay occurs spontaneously, producing a positive result, PPy-SiO2-PPyBPh-CTB-anti-CTB, in less than 1 h, while the control solution of PPy-SiO2-PPyBPh-CTB alone remains well-dispersed over the same period. These dispersions were characterized by cryogenic transmission electron microscopy (cryo-TEM), scanning electron microscopy (SEM), FTIR, and X-ray photoelectron spectroscopy (XPS).

Relevance: 20.00%

Abstract:

Spread Transform (ST) is a quantization watermarking algorithm in which vectors of the wavelet coefficients of a host work are quantized, using one of two dithered quantizers, to embed hidden information bits; Loo had some success in applying such a scheme to still images. We extend ST to the video watermarking problem. Visibility considerations require that each spreading vector refer to corresponding pixels in each of several frames, that is, a multi-frame embedding approach. Use of the hierarchical complex wavelet transform (CWT) for a visual mask reduces computation and improves robustness to jitter and valumetric scaling. We present a method of recovering temporal synchronization at the detector, and give initial results demonstrating the robustness and capacity of the scheme.
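The core idea the abstract builds on, quantizing the projection of a host vector onto a spreading vector with one of two dithered quantizers, can be sketched as follows. This is a minimal scalar-projection illustration, not the paper's video scheme: the function names, the uniform quantizer step `delta`, and the dither pair (0 and `delta/2`) are all assumptions of this sketch.

```python
import numpy as np

def st_embed(x, s, bit, delta):
    """Embed one bit in host vector x along spreading vector s
    via dithered quantization of the projection (spread transform)."""
    s = s / np.linalg.norm(s)          # unit spreading vector
    p = x @ s                          # projection of the host onto s
    d = 0.0 if bit == 0 else delta / 2.0   # dither selects the quantizer
    q = np.round((p - d) / delta) * delta + d  # dithered uniform quantizer
    # Move the host along s so its projection lands exactly on q
    return x + (q - p) * s

def st_detect(y, s, delta):
    """Recover the bit by finding which dithered quantizer
    lies closer to the received projection."""
    s = s / np.linalg.norm(s)
    p = y @ s
    r0 = abs(p - np.round(p / delta) * delta)
    r1 = abs(p - (np.round((p - delta / 2) / delta) * delta + delta / 2))
    return 0 if r0 <= r1 else 1
```

In a multi-frame video setting, as the abstract describes, each spreading vector would span corresponding wavelet coefficients across several frames rather than a single vector of one image.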

Relevance: 20.00%

Abstract:

This paper addresses the problem of automatically obtaining the object/background segmentation of a rigid 3D object observed in a set of images that have been calibrated for camera pose and intrinsics. Such segmentations can be used to obtain a shape representation of a potentially texture-less object by computing a visual hull. We propose an automatic approach in which the object to be segmented is identified by the pose of the cameras instead of user input such as 2D bounding rectangles or brush-strokes. The key to our method is a pairwise MRF framework that combines (a) foreground/background appearance models, (b) epipolar constraints and (c) weak stereo correspondence into a single segmentation cost function that can be efficiently solved by Graph-cuts. The segmentation thus obtained is further improved using silhouette coherency and then used to update the foreground/background appearance models, which are fed into the next Graph-cut computation. These two steps are iterated until the segmentation converges. Our method can automatically provide a 3D surface representation even in texture-less scenes where multi-view stereo (MVS) methods might fail. Furthermore, it confers improved performance in images where the object is not readily separable from the background in colour space, an area that previous segmentation approaches have found challenging. © 2011 IEEE.
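The alternating structure the abstract describes, re-estimating appearance models from the current labels and then re-solving an MRF labeling, can be sketched as below. This is an illustrative toy on a single intensity image, not the authors' method: graph-cut optimization is replaced by simple ICM sweeps, the Gaussian appearance model and all names are assumptions, and the epipolar/stereo terms are omitted.

```python
import numpy as np

def segment_iterative(img, init_mask, n_outer=5, n_icm=10, lam=1.0):
    """Alternate between (1) fitting per-class Gaussian appearance
    models and (2) minimizing a unary + Potts pairwise MRF cost.
    ICM sweeps stand in here for the Graph-cut solver."""
    labels = init_mask.astype(int)
    h, w = img.shape
    for _ in range(n_outer):
        # (1) re-estimate foreground/background appearance from labels
        fg, bg = img[labels == 1], img[labels == 0]
        mu = [bg.mean(), fg.mean()]
        sd = [bg.std() + 1e-6, fg.std() + 1e-6]
        prev = labels.copy()
        # (2) approximate MRF minimization by iterated conditional modes
        for _ in range(n_icm):
            for i in range(h):
                for j in range(w):
                    costs = []
                    for l in (0, 1):
                        # negative Gaussian log-likelihood (unary term)
                        unary = 0.5 * ((img[i, j] - mu[l]) / sd[l]) ** 2 \
                                + np.log(sd[l])
                        # Potts penalty for disagreeing neighbours
                        pair = sum(lam
                                   for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                                   if 0 <= i + di < h and 0 <= j + dj < w
                                   and labels[i + di, j + dj] != l)
                        costs.append(unary + pair)
                    labels[i, j] = int(np.argmin(costs))
        if np.array_equal(labels, prev):
            break  # segmentation has converged
    return labels
```

In the paper's full pipeline the unary term would also carry the epipolar and weak-stereo costs, the solver would be an exact Graph-cut, and the silhouette-coherency step would refine the labels between iterations.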