7 results for LORENTZIAN TRIANGULATION
in the Cambridge University Engineering Department Publications Database
Abstract:
This paper presents a method for the fast and direct extraction of model parameters for capacitive MEMS resonators, such as quality factor, resonant frequency, and motional resistance, from their measured transmission response. We show that these parameters may be extracted without having to first de-embed the resonator motional current from the feedthrough. The series and parallel resonances of the measured electrical transmission are used to determine the MEMS resonator circuit parameters. The theoretical basis for the method is elucidated using both the Nyquist and susceptance frequency response plots, and the method is applicable in the limit where CF > CmQ, which is commonly the case when characterizing MEMS resonators at RF. The method is then applied to the measured electrical transmission of capacitively transduced MEMS resonators and compared against parameters obtained using a Lorentzian fit to the measured response. Close agreement between the two methods is reported. © 2010 IEEE.
Abstract:
This paper presents a method for the fast and accurate determination of parameters relevant to the characterization of capacitive MEMS resonators, such as the quality factor (Q), the resonant frequency (fn), and equivalent circuit parameters such as the motional capacitance (Cm). In the presence of a parasitic feedthrough capacitor (CF) appearing across the input and output ports, the transmission characteristic is marked by two resonances: series (S) and parallel (P). Using these series and parallel resonances, close approximations of the circuit parameters are obtained without having to first de-embed the resonator motional current, which is typically buried in feedthrough. While previous methods with the same objective are well known, we show that these are limited to the condition where CF ≪ CmQ. In contrast, this work focuses on moderate capacitive feedthrough levels where CF > CmQ, which are more common in MEMS resonators. The method is applied to data obtained from the measured electrical transmission of fabricated SOI MEMS resonators. Parameter values deduced via direct extraction are then compared against those obtained by a full extraction procedure in which de-embedding is performed first, followed by a Lorentzian fit to the data based on the classical transfer function of a generic LRC series resonant circuit. © 2011 Elsevier B.V. All rights reserved.
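The two-resonance extraction described above can be illustrated with the textbook relations for a series RLC motional branch (Rm, Lm, Cm) shunted by a feedthrough capacitance CF. The sketch below is not the paper's exact procedure, and all numerical values are illustrative:

```python
import math

def extract_rlc(fs, fp, cf, bw3db):
    """Estimate motional-branch parameters of a capacitive MEMS resonator
    from the measured series (fs) and parallel (fp) resonant frequencies,
    the feedthrough capacitance cf, and the -3 dB bandwidth bw3db.

    Uses the standard relations for a series RLC branch shunted by CF:
        fs = 1 / (2*pi*sqrt(Lm*Cm))
        fp = fs * sqrt(1 + Cm/CF)  ->  Cm = CF * ((fp/fs)**2 - 1)
    """
    q = fs / bw3db                              # quality factor from bandwidth
    cm = cf * ((fp / fs) ** 2 - 1.0)            # motional capacitance
    lm = 1.0 / ((2 * math.pi * fs) ** 2 * cm)   # motional inductance
    rm = 2 * math.pi * fs * lm / q              # motional resistance
    return q, cm, lm, rm

# Illustrative numbers only (not taken from the paper):
q, cm, lm, rm = extract_rlc(fs=10e6, fp=10e6 * math.sqrt(1.01),
                            cf=100e-15, bw3db=1e3)
```

The key point is that the parallel resonance offset (fp - fs) directly encodes the ratio Cm/CF, so no de-embedding of the feedthrough current is needed to get Cm.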
Abstract:
A number of methods are commonly used today to collect infrastructure spatial data (time-of-flight, visual triangulation, etc.). However, current practice lacks a solution that is simultaneously accurate, automatic, and cost-efficient. This paper presents a videogrammetric framework for acquiring spatial data of infrastructure that promises to address this limitation. It uses a calibrated set of low-cost, high-resolution video cameras that is progressively traversed around the scene, and it aims to produce a dense 3D point cloud that is updated in each frame. This allows for progressive reconstruction, as opposed to point-and-shoot capture followed by point cloud stitching. The feasibility of the framework is studied in this paper. The required steps in this process are presented, the unique challenges of each step are identified, and results specific to each step are also presented.
Abstract:
Commercial far-range (>10 m) spatial data collection methods for acquiring infrastructure geometric data are not completely automated because of the manual pre- and/or post-processing work they require. The required amount of human intervention and, in some cases, the high equipment costs associated with these methods impede their adoption for the majority of infrastructure mapping activities. This paper presents an automated stereo vision-based method, as an alternative and inexpensive solution, for producing a sparse Euclidean 3D point cloud of an infrastructure scene from two video streams captured by a pair of calibrated cameras. In this process, SURF features are automatically detected and matched between each pair of stereo video frames, and the 3D coordinates of the matched feature points are then calculated via triangulation. The detected SURF features in two successive video frames are automatically matched, and the RANSAC algorithm is used to discard mismatches. The quaternion motion estimation method is then used, along with bundle adjustment optimization, to register successive point clouds. The method was tested on a database of infrastructure stereo video streams. The validity and statistical significance of the results were evaluated by comparing the spatial distances of randomly selected feature points with their corresponding tape measurements.
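The point-cloud registration step can be sketched with Horn's closed-form quaternion solution for absolute orientation, which is the standard quaternion motion estimation technique. This is a minimal version assuming known point correspondences (e.g., the RANSAC-filtered feature matches) and omitting the bundle-adjustment refinement; all numbers are illustrative:

```python
import numpy as np

def horn_quaternion_register(P, Q):
    """Horn's quaternion method: find rotation R and translation t
    with Q ~= P @ R.T + t, given corresponding (N,3) point sets P, Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    A, B = P - cp, Q - cq
    S = A.T @ B                       # 3x3 cross-covariance matrix
    # Horn's symmetric 4x4 matrix; its top eigenvector is the rotation quaternion
    N = np.array([
        [S[0,0]+S[1,1]+S[2,2], S[1,2]-S[2,1],        S[2,0]-S[0,2],        S[0,1]-S[1,0]],
        [S[1,2]-S[2,1],        S[0,0]-S[1,1]-S[2,2], S[0,1]+S[1,0],        S[2,0]+S[0,2]],
        [S[2,0]-S[0,2],        S[0,1]+S[1,0],       -S[0,0]+S[1,1]-S[2,2], S[1,2]+S[2,1]],
        [S[0,1]-S[1,0],        S[2,0]+S[0,2],        S[1,2]+S[2,1],       -S[0,0]-S[1,1]+S[2,2]],
    ])
    eigvals, eigvecs = np.linalg.eigh(N)
    w, x, y, z = eigvecs[:, -1]       # unit quaternion of the largest eigenvalue
    R = np.array([
        [1-2*(y*y+z*z), 2*(x*y - w*z), 2*(x*z + w*y)],
        [2*(x*y + w*z), 1-2*(x*x+z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y), 2*(y*z + w*x), 1-2*(x*x+y*y)],
    ])
    t = cq - R @ cp
    return R, t

# Illustrative check: a 30-degree rotation about z plus a translation
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 2, 3]])
Q = P @ R_true.T + t_true
R_est, t_est = horn_quaternion_register(P, Q)
```

The closed-form solution typically serves as the initial estimate that bundle adjustment then refines over all frames.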
Abstract:
A number of methods are commonly used today to collect as-built spatial data (time-of-flight, visual triangulation, etc.). However, current practice lacks a solution that is accurate, automatic, and cost-efficient at the same time. LiDAR methods generate high-resolution depth information, but the significant cost of the equipment counteracts their benefits for the majority of construction projects. This is true especially for small projects, where the projected savings hardly justify adopting this technology. Vision-based technologies, such as videogrammetry, are potentially able to address these limitations.
Abstract:
Vision trackers have been proposed as a promising alternative for tracking at large-scale, congested construction sites. They provide the location of a large number of entities in a camera view across frames. However, vision trackers provide only two-dimensional (2D) pixel coordinates, which are not adequate for construction applications. This paper proposes and validates a method that overcomes this limitation by employing stereo cameras and converting 2D pixel coordinates to three-dimensional (3D) metric coordinates. The proposed method consists of four steps: camera calibration, camera pose estimation, 2D tracking, and triangulation. Given that the method employs fixed, calibrated stereo cameras with a long baseline, appropriate algorithms are selected for each step. Once the first two steps reveal camera system parameters, the third step determines 2D pixel coordinates of entities in subsequent frames. The 2D coordinates are triangulated on the basis of the camera system parameters to obtain 3D coordinates. The methodology presented in this paper has been implemented and tested with data collected from a construction site. The results demonstrate the suitability of this method for on-site tracking purposes.
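The final step above, converting matched 2D pixel coordinates into 3D metric coordinates given the calibrated camera projection matrices, can be sketched with the standard linear (DLT) triangulation method. The camera parameters below are illustrative, not from the paper:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover a 3D point from its pixel
    coordinates x1, x2 in two views with 3x4 projection matrices P1, P2."""
    # Each view contributes two rows of A from the constraint x cross (P X) = 0
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous 3D point
    return X[:3] / X[3]

# Illustrative calibrated stereo rig with a 0.5 m baseline along x
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])

# Project a known 3D point into both views, then triangulate it back
X_true = np.array([0.4, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With a long baseline, as in the paper's setup, the two rays intersect at a wide angle, which is what keeps the depth estimate from this linear solve well conditioned.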
Abstract:
The polarization dependence of the double-resonant Raman scattering (2D) band in bilayer graphene (BLG) is studied as a function of the excitation laser energy. It has been known that the complex shape of the 2D band of BLG can be decomposed into four Lorentzian peaks with different Raman frequency shifts, attributable to four individual scattering paths in energy-momentum space. From our polarization dependence study, however, we reveal that each of the four peaks is actually doubly degenerate in its scattering channels, i.e., there are two different scattering paths with similar Raman frequency shifts for each peak. We find theoretically that one of these two paths, ignored for a long time, makes only a small contribution to the scattering intensity but is critical to understanding the polarization dependence. Because of this, the maximum-to-minimum intensity ratios of the four peaks show a strong dependence on the excitation energy, unlike the case of single-layer graphene (SLG). Our findings thus reveal another interesting aspect of electron-phonon interactions in graphitic systems. © 2014 Elsevier Ltd. All rights reserved.
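Decomposing a measured 2D-band spectrum into four Lorentzian components is typically done with nonlinear least squares. A minimal sketch using scipy follows; the peak positions, widths, and amplitudes are entirely synthetic and illustrative, not values from the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amp, x0, gamma):
    """Single Lorentzian with amplitude amp, center x0, and HWHM gamma."""
    return amp * gamma**2 / ((x - x0)**2 + gamma**2)

def four_lorentzians(x, *p):
    """Sum of four Lorentzians; p = (amp, x0, gamma) repeated 4 times."""
    return sum(lorentzian(x, *p[3*i:3*i+3]) for i in range(4))

# Synthetic 2D-band-like spectrum (illustrative parameters only)
x = np.linspace(2550.0, 2850.0, 600)           # Raman shift (cm^-1)
true = [1.0, 2660.0, 15.0,  0.8, 2690.0, 15.0,
        0.9, 2710.0, 15.0,  0.6, 2735.0, 15.0]
y = four_lorentzians(x, *true)

# Fit, starting from slightly perturbed initial guesses
p0 = [1.2, 2655.0, 12.0,  0.7, 2695.0, 12.0,
      1.0, 2705.0, 12.0,  0.5, 2740.0, 12.0]
popt, _ = curve_fit(four_lorentzians, x, y, p0=p0)
```

Because the four components overlap strongly, reasonable initial guesses for the centers are needed for the fit to converge to the physically meaningful decomposition.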