3 results for "radial continuous transmittance filter" at Boston University Digital Common
Abstract:
Oceanic bubble plumes caused by ship wakes or breaking waves disrupt sonar communication because of the dramatic change in sound speed and attenuation in the bubbly fluid. Experiments in bubbly fluids have suffered from the inability to quantitatively characterize the fluid because of continuous air bubble motion. Conversely, single-bubble experiments, where the bubble is trapped by a pressure field or a stabilizing object, are limited in usable frequency range, apparatus complexity, or the invasive nature of the stabilizing object (wire, plate, etc.). Suspension of a bubble in a viscoelastic Xanthan gel allows acoustically forced oscillations with negligible translation over a broad frequency band. Assuming only linear, radial motion, laser scattering from a bubble oscillating below, through, and above its resonance is measured. As the bubble dissolves in the gel, different bubble sizes are measured in the range 240–470 μm radius, corresponding to the frequency range 6–14 kHz. Equalization of the cell response in the raw data isolates the frequency response of the bubble. Comparison to theory for a bubble in water shows good agreement between the predicted resonance frequency and damping, such that the bubble behaves as if it were oscillating in water.
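The quoted radius and frequency ranges can be sanity-checked against the standard Minnaert resonance formula for an air bubble in water, which is the regime the abstract concludes applies. A minimal Python sketch (not taken from the paper; it assumes atmospheric pressure, adiabatic air, and neglects surface tension and damping):

import math

# Minnaert resonance frequency for a spherical air bubble in water:
#   f0 = sqrt(3 * gamma * p0 / rho) / (2 * pi * R)
GAMMA = 1.4      # ratio of specific heats for air (adiabatic)
P0 = 101325.0    # ambient pressure, Pa (assumed atmospheric)
RHO = 1000.0     # density of water, kg/m^3

def minnaert_frequency(radius_m: float) -> float:
    """Resonance frequency (Hz) of a bubble of the given radius (m)."""
    return math.sqrt(3.0 * GAMMA * P0 / RHO) / (2.0 * math.pi * radius_m)

for radius_um in (240, 470):
    f0 = minnaert_frequency(radius_um * 1e-6)
    print(f"R = {radius_um} um  ->  f0 = {f0 / 1e3:.1f} kHz")
# R = 240 um -> ~13.7 kHz; R = 470 um -> ~7.0 kHz,
# consistent with the 6-14 kHz band quoted in the abstract.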
Abstract:
The algorithm presented in this paper aims to segment the foreground objects in video (e.g., people) given time-varying, textured backgrounds. Examples of time-varying backgrounds include waves on water, moving clouds, trees waving in the wind, automobile traffic, moving crowds, escalators, etc. We have developed a novel foreground-background segmentation algorithm that explicitly accounts for the non-stationary nature and clutter-like appearance of many dynamic textures. The dynamic texture is modeled by an Autoregressive Moving Average (ARMA) model. A robust Kalman filter algorithm iteratively estimates the intrinsic appearance of the dynamic texture, as well as the regions of the foreground objects. Preliminary experiments with this method have demonstrated promising results.
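To illustrate the robust-gating idea only, here is a toy per-pixel sketch in Python; the parameter names q, r, and k_sigma are hypothetical, and the paper's actual method estimates a full ARMA state of the texture rather than an independent scalar filter per pixel:

import numpy as np

def robust_kalman_background(frames, q=4.0, r=16.0, k_sigma=2.5):
    """Toy per-pixel robust Kalman background estimator.

    frames  : iterable of float grayscale images (H, W)
    q, r    : process / measurement noise variances (hypothetical values)
    k_sigma : innovation gate; pixels beyond it are flagged as foreground
    Returns the final background estimate and the last foreground mask.
    """
    frames = iter(frames)
    bg = next(frames).astype(np.float32)     # initialize state with frame 0
    p = np.full_like(bg, r)                  # per-pixel state variance
    mask = np.zeros(bg.shape, dtype=bool)
    for frame in frames:
        p_pred = p + q                       # predict: variance grows
        innov = frame - bg                   # innovation (residual)
        s = p_pred + r                       # innovation variance
        mask = innov**2 > (k_sigma**2) * s   # robust gate -> foreground
        gain = p_pred / s
        gain[mask] = 0.0                     # keep foreground out of background
        bg = bg + gain * innov               # update background estimate
        p = (1.0 - gain) * p_pred
    return bg, mask

The gating step is what makes the filter "robust" in spirit: measurements whose innovation is implausibly large under the current model are treated as foreground and excluded from the background update.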
Abstract:
The What-and-Where filter forms part of a neural network architecture for spatial mapping, object recognition, and image understanding. The Where filter responds to an image figure that has been separated from its background. It generates a spatial map whose cell activations simultaneously represent the position, orientation, and size of all the figures in a scene (where they are). This spatial map may be used to direct spatially localized attention to these image features. A multiscale array of oriented detectors, followed by competitive and interpolative interactions between position, orientation, and size scales, is used to define the Where filter. This analysis discloses several issues that need to be dealt with by a spatial mapping system based upon oriented filters, such as the role of cliff filters with and without normalization, the double-peak problem of maximum orientation across size scale, and the self-similar interpolation properties that differ across orientation and across size scale. Several computationally efficient Where filters are proposed. The Where filter may be used for parallel transformation of multiple image figures into invariant representations that are insensitive to the figures' original position, orientation, and size. These invariant figural representations form part of a system devoted to attentive object learning and recognition (what it is). Unlike some alternative models, where serial search for a target occurs, a What-and-Where representation can be used to rapidly search in parallel for a desired target in a scene. Such a representation can also be used to learn multidimensional representations of objects and their spatial relationships for purposes of image understanding. The What-and-Where filter is inspired by neurobiological data showing that a Where processing stream in the cerebral cortex is used for attentive spatial localization and orientation, whereas a What processing stream is used for attentive object learning and recognition.
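As a rough illustration of the front end only, the following Python sketch builds a small bank of oriented detectors at several sizes and applies a winner-take-all competition per position. The kernel shapes, orientations, and scales are placeholders, and the paper's interpolative interactions across position, orientation, and size scales are not modeled here:

import numpy as np
from scipy.ndimage import convolve, rotate

def oriented_kernel(size, sigma_long, sigma_short, theta_deg):
    """Elongated Gaussian kernel at angle theta (a crude stand-in for
    the paper's oriented detectors)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 / (2 * sigma_long**2) + yy**2 / (2 * sigma_short**2)))
    k = rotate(k, theta_deg, reshape=False)
    return k / k.sum()

def where_map(image, orientations=(0, 45, 90, 135), scales=(4, 8, 16)):
    """Toy Where-style map: per pixel, the winning (orientation, scale)
    after a max (winner-take-all) competition across the filter bank."""
    responses = []
    for s in scales:                              # outer loop: size scales
        for th in orientations:                   # inner loop: orientations
            k = oriented_kernel(4 * s + 1, s, s / 4, th)
            responses.append(convolve(image.astype(float), k))
    stack = np.stack(responses)                   # (n_filters, H, W)
    winner = stack.argmax(axis=0)                 # competitive selection
    scale_idx, orient_idx = np.divmod(winner, len(orientations))
    return orient_idx, scale_idx, stack.max(axis=0)

The per-pixel argmax here is only the simplest form of competition; the issues the abstract raises (normalization of cliff filters, the double-peak problem across size scales) arise precisely because such a bare maximum is not sufficient in practice.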