2 results for Detection of a castaway, sonar, UUV, acoustic underwater ICARUS, upward looking

in Boston University Digital Common


Relevance:

100.00%

Publisher:

Abstract:

Stabilized micron-sized bubbles, known as contrast agents, are often injected into the body to enhance ultrasound imaging of blood flow. The ability to detect such bubbles in blood depends on the ratio of the acoustic power backscattered from the microbubbles (the ‘signal’) to the power backscattered from the red blood cells (the ‘noise’). Erythrocytes are acoustically small (Rayleigh regime) and weak scatterers, so the backscatter coefficient (BSC) of blood increases as the fourth power of frequency throughout the diagnostic frequency range. Microbubbles, on the other hand, are either resonant or super-resonant in the range 5-30 MHz; above resonance, their total scattering cross-section remains constant with increasing frequency. In this thesis, a theoretical model of the BSC of a suspension of red blood cells is presented and compared to the BSC of Optison® contrast agent microbubbles. The model predicts that, as frequency increases, the BSC of red blood cell suspensions eventually exceeds that of the strongly scattering microbubbles, leading to a dramatic reduction in signal-to-noise ratio (SNR). This decrease in SNR with increasing frequency was also confirmed experimentally with an active cavitation detector for different concentrations of Optison® microbubbles in erythrocyte suspensions of different hematocrits. The magnitude of the observed decrease in SNR agreed well with theoretical predictions in most cases, except for very dense suspensions of red blood cells, where it is hypothesized that the close proximity of erythrocytes inhibits the acoustic response of the microbubbles.
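
For intuition on the frequency argument above, the short Python sketch below (a simplified illustration, not the thesis model) contrasts a Rayleigh-scaled blood BSC, which grows as the fourth power of frequency, with a frequency-flat microbubble BSC above resonance, and estimates where the SNR falls to unity. All numerical values (the reference blood BSC, the reference frequency, and the microbubble BSC) are assumed for illustration only.

import numpy as np

# Assumed reference values for illustration only (not measured quantities).
BSC_BLOOD_REF = 1e-5   # 1/(sr*cm), assumed blood BSC at the reference frequency
F_REF_MHZ = 5.0        # reference frequency in MHz
BSC_BUBBLES = 1e-1     # 1/(sr*cm), assumed microbubble BSC above resonance (flat)

def bsc_blood(f_mhz):
    # Rayleigh regime: blood BSC scales as the fourth power of frequency.
    return BSC_BLOOD_REF * (f_mhz / F_REF_MHZ) ** 4

def snr(f_mhz):
    # Signal-to-noise ratio: microbubble backscatter over blood backscatter.
    return BSC_BUBBLES / bsc_blood(f_mhz)

for f in np.linspace(5.0, 60.0, 12):
    print(f"{f:5.1f} MHz  SNR = {snr(f):10.2f}")

# Crossover where blood backscatter equals the microbubble backscatter (SNR = 1);
# above this frequency the erythrocyte 'noise' dominates the bubble 'signal'.
f_cross = F_REF_MHZ * (BSC_BUBBLES / BSC_BLOOD_REF) ** 0.25
print(f"SNR reaches 1 near {f_cross:.1f} MHz for these assumed values")

With these assumed numbers the crossover lands near 50 MHz; the point of the sketch is only the 1/f^4 roll-off of the SNR above resonance, not any specific crossover frequency.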

Relevance:

100.00%

Publisher:

Abstract:

An automated system for the detection of head movements is described. The goal is to label relevant head gestures in video of American Sign Language (ASL) communication. In the system, a 3D head tracker recovers head rotation and translation parameters from monocular video. Relevant head gestures are then detected by analyzing the length and frequency of the peaks and valleys in the motion signal. Each parameter is analyzed independently, because many relevant head movements in ASL involve major changes around a single rotational axis. No explicit training of the system is necessary. Currently, the system can detect "head shakes." In an experimental evaluation, classification performance is compared against ground-truth labels obtained from ASL linguists. Initial results are promising: the system matches the linguists' labels in a significant number of cases.
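
As a rough illustration of the peak-and-valley analysis described above (a minimal sketch, not the authors' detector), the following Python code flags a "head shake" when a single rotation parameter, here yaw, shows several closely spaced alternating peaks and valleys. The thresholds (min_amplitude_deg, min_extrema, max_gap_s) and the use of scipy.signal.find_peaks are assumptions made for the example.

import numpy as np
from scipy.signal import find_peaks

def detect_head_shake(yaw_deg, fps=30.0,
                      min_amplitude_deg=3.0,   # assumed minimum swing size
                      min_extrema=3,           # assumed number of peaks/valleys
                      max_gap_s=0.6):          # assumed max spacing between extrema
    # Analyze one rotation parameter on its own; other parameters (pitch, roll,
    # translations) would each get an independent call of the same kind.
    yaw = np.asarray(yaw_deg, dtype=float)
    peaks, _ = find_peaks(yaw, prominence=min_amplitude_deg)
    valleys, _ = find_peaks(-yaw, prominence=min_amplitude_deg)
    extrema = np.sort(np.concatenate([peaks, valleys]))
    if len(extrema) < min_extrema:
        return False
    # A head shake should look like a rapid oscillation: extrema close together.
    gaps_s = np.diff(extrema) / fps
    return bool(np.all(gaps_s <= max_gap_s))

# Synthetic example: a 1.5 s yaw trace oscillating at about 2 Hz.
t = np.arange(0.0, 1.5, 1.0 / 30.0)
yaw = 8.0 * np.sin(2.0 * np.pi * 2.0 * t)
print(detect_head_shake(yaw))  # prints True for this synthetic trace

No training is involved here either; detection reduces to counting and spacing extrema on one motion-parameter signal, in the spirit of the per-axis analysis in the abstract.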