4 results for planar intersect waveguide
in Boston University Digital Common
Abstract:
In the Spallation Neutron Source (SNS) facility at Oak Ridge National Laboratory (ORNL), the deposition of a high-energy proton beam into the liquid mercury target forms bubbles whose asymmetric collapse causes Cavitation Damage Erosion (CDE) to the container walls, thereby reducing the container's usable lifetime. One proposed solution for mitigating this damage is to inject a population of microbubbles into the mercury, yielding a compliant and attenuative medium that reduces the resulting cavitation damage. This potential solution presents the task of creating a diagnostic tool to monitor the bubble population in the mercury flow in order to correlate void fraction and damage. Details of an acoustic waveguide for the eventual measurement of two-phase mercury-helium flow void fraction are discussed. The assembly's waveguide is a vertically oriented stainless steel cylinder with a 5.08 cm ID, a 1.27 cm wall thickness, and a 40 cm length. For water experiments, a 2.54 cm thick stainless steel plate at the bottom supports the fluid, provides an acoustically rigid boundary condition, and serves as the mounting point for a hydrophone. A port near the bottom is the inlet for the fluid of interest. A spillover reservoir welded to the upper portion of the main tube allows for a flow-through design, yielding a pressure-release top boundary condition for the waveguide. A cover on the reservoir supports an electrodynamic shaker that is driven by linear frequency sweeps to excite the tube. The hydrophone captures the frequency response of the waveguide. The sound speed of the flowing medium is calculated by assuming a linear dependence of axial mode number on modal frequency (plane-wave propagation). Assuming that the medium has an effective-mixture sound speed, and that it contains bubbles much smaller than the resonance radii at the highest frequency of interest (Wood's limit), the void fraction of the flow is calculated. Results for water and for bubbly water of varying void fraction are presented, and they demonstrate the accuracy and precision of the apparatus.
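The abstract's two inversion steps (plane-wave sound speed from the spacing of the axial modal frequencies, then void fraction from Wood's effective-medium relation) can be sketched as below. This is a minimal illustration, assuming a rigid-bottom / pressure-release-top mode spacing of f_n = (2n-1)c/4L and air-water properties at room conditions; the modal frequencies listed are hypothetical, not measured data.

```python
# Hedged sketch: plane-wave sound speed from axial modal frequencies, then
# void fraction via Wood's effective-medium relation. Property values and the
# rigid-bottom / pressure-release-top mode spacing are illustrative assumptions.
import numpy as np
from scipy.optimize import brentq

def mixture_sound_speed(mode_numbers, modal_freqs_hz, length_m):
    """Fit f_n vs n; for a rigid/pressure-release tube f_n = (2n-1)c/(4L),
    so the slope df/dn equals c/(2L) and c = 2*L*slope."""
    slope, _ = np.polyfit(mode_numbers, modal_freqs_hz, 1)
    return 2.0 * length_m * slope

def wood_void_fraction(c_mix, rho_l=998.0, c_l=1481.0, rho_g=1.2, c_g=343.0):
    """Invert Wood's relation 1/(rho_m c_m^2) = beta/(rho_g c_g^2) + (1-beta)/(rho_l c_l^2),
    with rho_m = beta*rho_g + (1-beta)*rho_l, valid well below bubble resonance."""
    def residual(beta):
        rho_m = beta * rho_g + (1.0 - beta) * rho_l
        inv_K = beta / (rho_g * c_g**2) + (1.0 - beta) / (rho_l * c_l**2)
        return 1.0 / (rho_m * inv_K) - c_mix**2
    return brentq(residual, 1e-9, 0.5)

# Example: hypothetical modal frequencies measured in a 0.40 m column.
n = np.array([1, 2, 3, 4, 5])
f = np.array([310.0, 930.0, 1550.0, 2170.0, 2790.0])   # Hz, made-up values
c = mixture_sound_speed(n, f, 0.40)
print(f"mixture sound speed ~ {c:.0f} m/s, void fraction ~ {wood_void_fraction(c):.2e}")
```

The root-finding step simply enforces the condition the abstract calls Wood's limit: the relation only holds when the bubbles are far below their individual resonances at the frequencies of interest.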
Abstract:
A specialized formulation of Azarbayejani and Pentland's framework for recursive recovery of motion, structure, and focal length from feature correspondences tracked through an image sequence is presented. The specialized formulation addresses the case where all tracked points lie on a plane. This planarity constraint reduces the dimension of the original state vector and, consequently, the number of feature points needed to estimate the state. Experiments with synthetic data and real imagery illustrate the system's performance. The experiments confirm that, when the tracked points lie on a plane, the specialized formulation provides improved accuracy, greater stability to observation noise, and a faster rate of convergence in estimation.
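As a rough illustration of why the planarity constraint shrinks the state, the sketch below backprojects image features onto a single plane described by three parameters (unit normal plus distance), so no per-point depth unknowns are needed. The parameterization and the pinhole model here are illustrative assumptions, not the paper's exact recursive formulation.

```python
# Hedged sketch: with a pinhole model, every feature's depth follows from three
# plane parameters instead of one unknown per point. The (normal, distance)
# parameterization is an illustrative choice, not necessarily the paper's state.
import numpy as np

def backproject_to_plane(uv, focal_length, plane_normal, plane_dist):
    """Intersect the viewing ray of each pixel (u, v) with the plane n.X = h.
    uv: (N, 2) image coordinates in the camera frame."""
    n = plane_normal / np.linalg.norm(plane_normal)
    rays = np.column_stack([uv[:, 0] / focal_length,
                            uv[:, 1] / focal_length,
                            np.ones(len(uv))])          # one viewing ray per feature
    depths = plane_dist / (rays @ n)                    # scale each ray so n.X = h
    return rays * depths[:, None]                       # (N, 3) points on the plane

# Example: 50 tracked features, yet the structure is described by 3 numbers.
uv = np.random.default_rng(0).uniform(-100, 100, size=(50, 2))
normal = np.array([0.1, -0.2, 1.0])
pts = backproject_to_plane(uv, focal_length=500.0, plane_normal=normal, plane_dist=4.0)
print(np.allclose(pts @ (normal / np.linalg.norm(normal)), 4.0))   # True: all on the plane
```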
Abstract:
Standard structure-from-motion algorithms recover the 3D structure of points. If a surface representation is desired, for example a piecewise planar representation, then a two-step procedure typically follows: in the first step the plane membership of points is determined manually, and in a subsequent step planes are fitted to the sets of points thus determined and their parameters are recovered. This paper presents an approach for automatically segmenting planar structures from a sequence of images while simultaneously estimating their parameters. In the proposed approach the plane membership of points is determined automatically, and the planar structure parameters are recovered directly within the algorithm rather than indirectly in a post-processing stage. Simulated and real experimental results show the efficacy of this approach.
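For contrast, the conventional second step that the abstract argues against can be sketched as a standard total-least-squares plane fit (SVD on mean-centered points) applied to a manually determined point set. This is only the baseline described above, not the proposed simultaneous segmentation-and-estimation algorithm; the data below are synthetic.

```python
# Hedged sketch of the standard post-processing plane fit: once plane membership
# is known, recover (normal, distance) by total least squares via SVD.
import numpy as np

def fit_plane(points):
    """Return (unit normal, distance) minimizing squared orthogonal distance
    for points of shape (N, 3); the plane is n.X = d."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]                        # direction of least variance
    return normal, float(normal @ centroid)

# Example: noisy synthetic samples from the plane z = 0.5x - 0.2y + 2.
rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(200, 2))
z = 0.5 * xy[:, 0] - 0.2 * xy[:, 1] + 2 + 0.01 * rng.standard_normal(200)
n, d = fit_plane(np.column_stack([xy, z]))
print(n, d)   # normal ~ parallel to (-0.5, 0.2, 1), up to sign and normalization
```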
Abstract:
We introduce a method for recovering the spatial and temporal alignment between two or more views of objects moving over a ground plane. Existing approaches either assume that the streams are globally synchronized, so that only the spatial alignment needs to be solved, or assume that the temporal misalignment is small enough for an exhaustive search to be performed. In contrast, our approach recovers both the spatial and the temporal alignment. We compute for each trajectory a number of interesting segments, and we use their descriptions to form putative matches between trajectories. Each pair of corresponding interesting segments induces a temporal alignment and defines an interval of common support across two views of an object that is used to recover the spatial alignment. Interesting segments and their descriptors are defined using algebraic projective invariants measured along the trajectories. Similarity between interesting segments is computed by taking into account the statistics of such invariants. Candidate alignment parameters are verified by checking the consistency, in terms of the symmetric transfer error, of all the putative pairs of corresponding interesting segments. Experiments are conducted with two different sets of data: one with two views of an outdoor scene featuring moving people and cars, and one with four views of a laboratory sequence featuring moving radio-controlled cars.
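The verification criterion named in the abstract, the symmetric transfer error, can be sketched for a ground-plane homography H between two views as below. The homography and point sets are made up for illustration; in the paper they would come from the candidate alignment induced by a pair of corresponding interesting segments.

```python
# Hedged sketch of the symmetric transfer error for a homography H mapping
# view A to view B: reprojection error summed in both image planes.
import numpy as np

def symmetric_transfer_error(H, pts_a, pts_b):
    """pts_a, pts_b: (N, 2) corresponding points; H maps view A to view B."""
    def apply(M, pts):
        homog = np.column_stack([pts, np.ones(len(pts))]) @ M.T
        return homog[:, :2] / homog[:, 2:3]
    err_ab = np.sum((apply(H, pts_a) - pts_b) ** 2, axis=1)
    err_ba = np.sum((apply(np.linalg.inv(H), pts_b) - pts_a) ** 2, axis=1)
    return float(np.mean(err_ab + err_ba))

# Example with an exact correspondence (error should be ~0).
H = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
pts_a = np.random.default_rng(2).uniform(0, 100, size=(20, 2))
homog = np.column_stack([pts_a, np.ones(20)]) @ H.T
pts_b = homog[:, :2] / homog[:, 2:3]
print(symmetric_transfer_error(H, pts_a, pts_b))   # ~0
```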