969 results for Multi-frequency Bioimpedance
Abstract:
The MID-K, a new kind of multi-pipe-string detection tool, is introduced. This tool provides a means of evaluating the condition of in-place pipe strings, such as tubing and casing. It is capable of discriminating between inner- and outer-surface defects and of estimating the thickness of tubing and casing. This is accomplished by means of a low-frequency eddy current to detect flaws on the inner surface and magnetic flux leakage to inspect the full thickness. The measurement principle, the technology and applications are presented in this paper.
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected and active heterogeneous components and usually operate in uncertain environments and with incomplete information. Problems associated with those systems are typically large-scale and computationally intractable, yet they are also very well-structured and have features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools to exploit those structures that can lead to computationally-efficient and distributed solutions, and apply them to improve systems operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is to design distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability. However, there are daunting technical challenges in managing these DERs and optimizing their operation. The focus of this dissertation is to develop scalable, distributed, and real-time control and optimization to achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we will present how to exploit the power network structure to design efficient and distributed markets and algorithms for energy management. We will also show how to connect the algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multiagent systems is to design local control laws for the individual agents to ensure that the emergent global behavior is desirable with respect to the given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the system objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition provides the system designer with tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multiagent systems. Furthermore, in many settings the resulting controllers will be inherently robust to a host of uncertainties including asynchronous clock rates, delays in information, and component failures.
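As a hedged illustration of the utility-design idea described above, the toy example below gives each agent its marginal contribution to a system objective (a "wonderful life" style utility). The global objective is then an exact potential for the resulting game, so simple best-response dynamics converges. The coverage objective, target values, and action sets are invented for illustration, not taken from the dissertation.

```python
# Toy potential game: agents pick sets of targets to cover; each agent's
# utility is its marginal contribution to the total covered value, which
# makes the welfare function an exact potential for the game.

TARGETS = {"a": 3.0, "b": 2.0, "c": 1.0}   # value of covering each target (assumed)
ACTIONS = [frozenset(s) for s in ({"a"}, {"b"}, {"c"}, {"a", "b"})]

def welfare(profile):
    covered = set().union(*profile) if profile else set()
    return sum(TARGETS[t] for t in covered)

def marginal_utility(i, profile):
    # agent i's utility = welfare with i present minus welfare without i
    others = profile[:i] + profile[i + 1:]
    return welfare(profile) - welfare(others)

def best_response_dynamics(profile, rounds=10):
    profile = list(profile)
    for _ in range(rounds):
        changed = False
        for i in range(len(profile)):
            best = max(ACTIONS,
                       key=lambda a: marginal_utility(i, profile[:i] + [a] + profile[i + 1:]))
            if marginal_utility(i, profile[:i] + [best] + profile[i + 1:]) > marginal_utility(i, profile):
                profile[i], changed = best, True
        if not changed:
            break
    return profile

eq = best_response_dynamics([frozenset({"c"}), frozenset({"c"})])
print([sorted(a) for a in eq], welfare(eq))
```

Starting from a poor profile (both agents covering the low-value target), the dynamics reaches an equilibrium that covers all targets; with marginal-contribution utilities, every unilateral improvement also raises the system objective.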
Abstract:
We investigate high-order harmonic emission and isolated attosecond pulse (IAP) generation in atoms driven by a two-colour multi-cycle laser field consisting of an 800 nm pulse and an infrared laser pulse at an arbitrary wavelength. With moderate laser intensity, an IAP of ~220 as can be generated in helium atoms by using two-colour laser pulses of 35 fs/800 nm and 46 fs/1150 nm. The discussion, based on the three-step semiclassical model and time-frequency analysis, shows a clear picture of high-order harmonic generation in the waveform-controlled laser field, which benefits the generation of XUV IAPs and attosecond electron pulses. When the propagation effect is included, the duration of the IAP can be shorter than 200 as when the driving laser pulses are focused 1 mm before the gas medium with a length between 1.5 mm and 2 mm.
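A minimal sketch of the three-step semiclassical model mentioned above: the harmonic cutoff energy is E_cutoff = I_p + 3.17 U_p, where U_p is the ponderomotive energy. The intensity value below is an assumed, illustrative "moderate" intensity, not one quoted in the abstract.

```python
# Cutoff energy in the three-step semiclassical model of HHG:
#   E_cutoff = I_p + 3.17 * U_p,
# with the ponderomotive energy in practical units
#   U_p [eV] ~= 9.33e-14 * I [W/cm^2] * (lambda [um])^2.

def ponderomotive_energy_eV(intensity_w_cm2, wavelength_um):
    return 9.33e-14 * intensity_w_cm2 * wavelength_um**2

def cutoff_energy_eV(ip_eV, intensity_w_cm2, wavelength_um):
    return ip_eV + 3.17 * ponderomotive_energy_eV(intensity_w_cm2, wavelength_um)

IP_HELIUM = 24.59  # ionisation potential of helium, eV
I0 = 3e14          # assumed moderate intensity, W/cm^2 (illustrative)

for lam in (0.8, 1.15):  # the two driving wavelengths, um
    print(f"{lam} um: cutoff ~ {cutoff_energy_eV(IP_HELIUM, I0, lam):.1f} eV")
```

The quadratic wavelength scaling of U_p is why adding the longer-wavelength second colour extends the cutoff and broadens the XUV bandwidth available for an IAP.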
Abstract:
Many applications in cosmology and astrophysics at millimeter wavelengths including CMB polarization, studies of galaxy clusters using the Sunyaev-Zeldovich effect (SZE), and studies of star formation at high redshift and in our local universe and our galaxy, require large-format arrays of millimeter-wave detectors. Feedhorn and phased-array antenna architectures for receiving mm-wave light present numerous advantages for control of systematics, for simultaneous coverage of both polarizations and/or multiple spectral bands, and for preserving the coherent nature of the incoming light. This enables the application of many traditional "RF" structures such as hybrids, switches, and lumped-element or microstrip band-defining filters.
Simultaneously, kinetic inductance detectors (KIDs) using high-resistivity materials like titanium nitride are an attractive sensor option for large-format arrays because they are highly multiplexable and because they can have sensitivities reaching the condition of background-limited detection. A KID is an LC resonator whose inductance includes both the geometric inductance and the kinetic inductance of the inductor in the superconducting phase. A photon absorbed by the superconductor breaks Cooper pairs into normal-state electrons and perturbs the kinetic inductance, rendering the resonator a detector of light. The responsivity of a KID is given by the fractional frequency shift of the LC resonator per unit optical power.
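The detection principle above can be sketched numerically: with f0 = 1/(2π√(LC)) and total inductance L = L_geo + L_k, a small kinetic-inductance change dL_k shifts the resonance by df/f0 ≈ -(α/2)(dL_k/L_k), where α = L_k/L is the kinetic inductance fraction. All component values below are illustrative assumptions, not from the thesis.

```python
import math

# Fractional frequency shift of an LC resonator when its kinetic
# inductance is perturbed by pair-breaking photons.

def resonance_hz(L, C):
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

L_geo = 6e-9    # geometric inductance, H (assumed)
L_k   = 4e-9    # kinetic inductance, H (assumed)
C     = 0.4e-12 # capacitance, F (assumed)

f0 = resonance_hz(L_geo + L_k, C)
dLk = 1e-12     # perturbation from broken Cooper pairs, H (assumed)
f1 = resonance_hz(L_geo + L_k + dLk, C)

alpha = L_k / (L_geo + L_k)                 # kinetic inductance fraction
approx = -0.5 * alpha * (dLk / L_k)         # first-order fractional shift
exact = (f1 - f0) / f0
print(f"f0 = {f0/1e9:.3f} GHz, exact shift {exact:.2e}, approx {approx:.2e}")
```

The shift is downward and proportional to absorbed power for small perturbations, which is exactly the fractional-frequency-shift responsivity the abstract refers to.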
However, coupling these types of optical reception elements to KIDs is challenging because of the impedance mismatch between the microstrip transmission line exiting these architectures and the high resistivity of titanium nitride. Mitigating direct absorption of light through free-space coupling to the inductor of the KID is another challenge. We present a detailed titanium nitride KID design that addresses these challenges. The KID inductor is capacitively coupled to the microstrip in such a way as to form a lossy termination without creating an impedance mismatch. A parallel-plate capacitor design using hydrogenated amorphous silicon mitigates direct absorption and yields acceptable noise. We show that the optimized design can yield expected sensitivities very close to the fundamental limit for a long wavelength imager (LWCam) that covers six spectral bands from 90 to 400 GHz for SZE studies.
Excess phase (frequency) noise has been observed in KIDs and is very likely caused by two-level systems (TLS) in dielectric materials. The TLS hypothesis is supported by the measured dependence of the noise on resonator internal power and temperature. However, there is still no unified microscopic theory that can quantitatively model the properties of the TLS noise. In this thesis we derive the noise power spectral density due to the coupling of TLS with the phonon bath, based on an existing model, and compare the theoretical predictions of the power and temperature dependences with experimental data. We discuss the limitations of such a model and propose directions for future study.
Abstract:
Morphogenesis is a phenomenon of intricate balance and dynamic interplay between processes occurring at a wide range of scales (spatial, temporal and energetic). During development, a variety of physical mechanisms are employed by tissues to simultaneously pattern, move, and differentiate based on information exchange between constituent cells, perhaps more than at any other time during an organism's life. To fully understand such events, a combined theoretical and experimental framework is required to assist in deciphering the correlations at both structural and functional levels, at scales that include the intracellular and tissue levels as well as organs and organ systems. Microscopy, especially diffraction-limited light microscopy, has emerged as a central tool to capture the spatio-temporal context of life processes. Imaging has the unique advantage of watching biological events as they unfold over time at single-cell resolution in the intact animal. In this work I present a range of problems in morphogenesis, each unique in its requirements for novel quantitative imaging, both in terms of technique and analysis. Understanding the molecular basis for a developmental process involves investigating how genes and their products (mRNA and proteins) function in the context of a cell. Structural information holds the key to insights into mechanisms, and imaging fixed specimens is the first step towards deciphering gene function. The work presented in this thesis starts with the demonstration that the fluorescent signal from the challenging environment of whole-mount imaging, obtained by in situ hybridization chain reaction (HCR), scales linearly with the number of copies of target mRNA to provide quantitative sub-cellular mapping of mRNA expression within intact vertebrate embryos. The work then progresses to address aspects of imaging live embryonic development in a number of species.
While processes such as avian cartilage growth require high spatial resolution and lower time resolution, dynamic events during zebrafish somitogenesis require higher time resolution to capture the protein localization as the somites mature. The requirements on imaging are even more stringent in the case of the embryonic zebrafish heart, which beats with a frequency of ~2-2.5 Hz, thereby requiring very fast imaging techniques based on a two-photon light-sheet microscope to capture its dynamics. In each of the aforementioned cases, ranging from the level of molecules to organs, an imaging framework is developed, both in terms of technique and analysis, to allow quantitative assessment of the process in vivo. Overall, the work presented in this thesis combines new quantitative tools with novel microscopy for the precise understanding of processes in embryonic development.
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there are not many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (a millimeter or less in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we specify it at the start of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
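One basic ingredient of any photon-transport Monte Carlo of this kind can be sketched in a few lines: free-path lengths between interactions follow the Beer-Lambert law, p(s) = μ_t exp(-μ_t s), and are sampled by inverting the CDF. The interaction coefficient below is an illustrative value, not one used in the thesis, and this sketch omits the thesis's importance sampling and photon splitting.

```python
import math, random

# Sampling photon free-path lengths from an exponential distribution:
#   s = -ln(xi) / mu_t,  xi ~ Uniform(0, 1),
# so the mean free path equals 1 / mu_t.

def sample_free_path(mu_t, rng):
    return -math.log(rng.random()) / mu_t

rng = random.Random(0)
mu_t = 10.0  # total interaction coefficient, 1/mm (assumed)
paths = [sample_free_path(mu_t, rng) for _ in range(100_000)]
mean_path = sum(paths) / len(paths)
print(f"mean free path ~ {mean_path:.4f} mm (theory: {1/mu_t:.4f} mm)")
```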
Next we address the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we can interpret an OCT image completely and precisely without help from a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we reach 93%. We achieved this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited) in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that structure to predict the length of the different layers, thereby reconstructing the ground truth of the image. We also demonstrate that ideas from Deep Learning can further improve the performance.
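The prediction pipeline described above (classify the structure, then dispatch to a structure-specific regressor) can be sketched as below. The models are trivial stand-ins: a nearest-centroid classifier and hard-coded per-structure regression functions, invented for illustration rather than taken from the thesis.

```python
# "Committee of experts" dispatch: a classifier routes each input to the
# regressor trained for its predicted structure type.

def nearest_centroid(x, centroids):
    """Return the label of the closest centroid (stand-in classifier)."""
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(x, centroids[lbl])))

class Committee:
    def __init__(self, centroids, experts):
        self.centroids = centroids  # structure label -> feature centroid
        self.experts = experts      # structure label -> regression function

    def predict(self, x):
        structure = nearest_centroid(x, self.centroids)
        return structure, self.experts[structure](x)

# Two toy "structures" with hypothetical per-structure regressors.
committee = Committee(
    centroids={"2-layer": (0.0, 0.0), "3-layer": (1.0, 1.0)},
    experts={
        "2-layer": lambda x: [10 * x[0] + 5],       # hypothetical layer model
        "3-layer": lambda x: [4 * x[0], 6 * x[1]],  # hypothetical layer model
    },
)

print(committee.predict((0.1, 0.2)))  # routed to the 2-layer expert
print(committee.predict((0.9, 1.1)))  # routed to the 3-layer expert
```

Training each expert only on data from its own structure is what lets the second stage specialize, at the cost of the first stage's classification accuracy bounding the whole pipeline.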
It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model can recover precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting such a task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.
Abstract:
In multi-carrier systems, small carrier frequency offsets result in significant performance degradation, and the offset must be compensated before demodulation can be performed. In this paper, we consider a generic multi-carrier system with pulse shaping and estimate the frequency offset by exploiting the cyclostationarity of the received signal. By transforming the time-domain signal to the cyclic correlation domain, we are able to estimate the frequency offset without the aid of pilot symbols or the cyclic prefix. A Bayesian framework is used to obtain the estimate, and we show how the estimation process can be simplified. © 1999 IEEE.
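As a much-simplified illustration of blind (pilot-free) offset estimation in the same spirit, the sketch below uses the classic square-law estimator for BPSK rather than the paper's cyclic-correlation method: squaring removes the data modulation, so the offset appears as a pure phase rotation in a lag correlation.

```python
import cmath, random

# For a BPSK signal s[n] = a[n] * exp(j*2*pi*f_off*n/fs), a[n] in {-1,+1},
# z[n] = s[n]^2 = exp(j*4*pi*f_off*n/fs) is data-free, and the phase of the
# lag-D correlation of z recovers f_off without pilots or a cyclic prefix.

def estimate_offset_bpsk(samples, lag, fs):
    z = [s * s for s in samples]  # square-law nonlinearity strips the data
    corr = sum(z[n] * z[n - lag].conjugate() for n in range(lag, len(z)))
    return cmath.phase(corr) * fs / (4 * cmath.pi * lag)

rng = random.Random(7)
fs, f_off, lag, N = 1000.0, 13.0, 4, 2000  # assumed, illustrative values
signal = [rng.choice((-1, 1)) * cmath.exp(2j * cmath.pi * f_off * n / fs)
          for n in range(N)]
print(f"estimated offset: {estimate_offset_bpsk(signal, lag, fs):.2f} Hz")
```

Note the acquisition range is limited by phase wrapping (|4π f_off lag / fs| < π here); the paper's cyclic-correlation approach handles the general pulse-shaped multi-carrier case.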
Abstract:
The paper is devoted to extending a new, efficient frequency-domain method of adjoint Green's function calculation to curvilinear multi-block RANS domains for mid- and far-field sound computations. Numerical details of the method, such as grids, boundary conditions and convergence acceleration, are discussed. Two acoustic source models are considered in conjunction with the method, and acoustic modelling results are presented for a benchmark low-Reynolds-number jet case.
Abstract:
Transmission terahertz time-domain spectroscopy (THz-TDS) measurements of carbon nanotube arrays are presented. A relatively thin film with vertically aligned multi-walled carbon nanotubes has been prepared and measured using THz-TDS. Experimental results were obtained from 80 GHz to 2.5 THz, and the sample has been characterized by extracting the relative permittivity of the carbon nanotubes. A combination of the Maxwell-Garnett and Drude models provides a good fit to the measured permittivity within this frequency range.
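The two models named above combine as sketched below: a Drude response for the conducting inclusions and Maxwell-Garnett mixing for a dilute array embedded in a host. All parameter values (plasma frequency, damping rate, fill fraction) are illustrative assumptions, not fitted to the measurements.

```python
import cmath, math

# Drude permittivity:  eps(w) = eps_inf - wp^2 / (w^2 + i*w*gamma)
# Maxwell-Garnett mixing for inclusions eps_i in host eps_h, fill f:
#   eps_eff = eps_h * (eps_i + 2*eps_h + 2*f*(eps_i - eps_h))
#                   / (eps_i + 2*eps_h -   f*(eps_i - eps_h))

def eps_drude(omega, eps_inf, omega_p, gamma):
    return eps_inf - omega_p**2 / (omega**2 + 1j * omega * gamma)

def eps_maxwell_garnett(eps_i, eps_h, fill):
    num = eps_i + 2 * eps_h + 2 * fill * (eps_i - eps_h)
    den = eps_i + 2 * eps_h - fill * (eps_i - eps_h)
    return eps_h * num / den

f_hz = 1.0e12                 # 1 THz, inside the measured band
w = 2 * math.pi * f_hz
eps_i = eps_drude(w, eps_inf=1.0,
                  omega_p=2 * math.pi * 5e12,   # assumed plasma frequency
                  gamma=2 * math.pi * 1e12)      # assumed damping rate
eps_eff = eps_maxwell_garnett(eps_i, eps_h=1.0, fill=0.05)  # 5% fill (assumed)
print(f"effective permittivity at 1 THz: {eps_eff:.3f}")
```

With a small fill fraction the effective permittivity stays close to that of the host, with a small positive imaginary part reflecting the loss contributed by the nanotubes.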
Abstract:
Large digital chips use a significant amount of energy to distribute a multi-GHz clock. Because the clock network is discharged to ground every cycle, the energy stored in this large capacitor is wasted. Instead, the energy can be recovered using an on-chip DC-DC converter. This paper investigates the integration of two DC-DC converter topologies, boost and buck-boost, with a high-speed clock driver. The high operating frequency significantly shrinks the required size of the L and C components so they can be placed on-chip; typical converters place them off-chip. The clock driver and DC-DC converter are able to share the entire tapered buffer chain, including the widest drive transistors in the final stage. To achieve voltage regulation, the clock duty cycle must be modulated, implying that only single-edge-triggered flops should be used. However, this minor drawback is eclipsed by the benefits: by recovering energy from the clock, the output power can actually exceed the additional power needed to operate the converter circuitry, resulting in an effective efficiency greater than 100%. Furthermore, the converter output can be used to operate additional power-saving features like low-voltage islands or body bias voltages. ©2008 IEEE.
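A back-of-the-envelope sketch of why the effective efficiency can exceed 100%: the energy (1/2)CV² stored in the clock network each cycle is normally dumped to ground, so any fraction the converter recovers offsets its own input power. All numbers below are illustrative assumptions, not values from the paper.

```python
# Energy-recovery arithmetic for a clock network treated as a capacitor
# charged to V_dd and discharged once per cycle.

C_clk = 2e-9   # clock network capacitance, F (assumed)
V_dd  = 1.0    # supply voltage, V (assumed)
f_clk = 3e9    # clock frequency, Hz (assumed)

p_stored = 0.5 * C_clk * V_dd**2 * f_clk   # power otherwise wasted, W
recovery = 0.6                              # fraction recovered (assumed)
p_recovered = recovery * p_stored

p_converter_in = 1.0                        # extra converter input power, W (assumed)
p_out = p_recovered + 0.8 * p_converter_in  # 80% conventional efficiency (assumed)

eta_eff = p_out / p_converter_in            # output power / extra input power
print(f"recovered {p_recovered:.2f} W -> effective efficiency {eta_eff:.0%}")
```

Because the recovered clock energy is "free" from the converter's point of view, the ratio of output power to the converter's extra input power can exceed one, which is the sense of "effective efficiency greater than 100%" above.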
Abstract:
This work addresses the problem of deriving F0 from distant-talking speech signals acquired by a microphone network. The method proposed here exploits the redundancy across the channels by jointly processing the different signals. To this purpose, a multi-microphone periodicity function is derived from the magnitude spectra of all the channels. This function allows F0 to be estimated reliably, even under reverberant conditions, without the need for any post-processing or smoothing technique. Experiments conducted on real data showed that the proposed frequency-domain algorithm is more suitable than other time-domain-based ones.
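A hedged, simplified sketch of the idea (not the paper's exact periodicity function): sum the magnitude spectra of all channels, then score each F0 candidate by harmonic summation over the combined spectrum. The signals, phase offsets, and candidate grid below are invented for illustration.

```python
import cmath, math

# Multi-channel F0 estimation by summing magnitude spectra across
# microphones and picking the candidate with the largest harmonic sum.

def magnitude_spectrum(x):
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n)))
            for k in range(n // 2)]

def estimate_f0(channels, fs, f0_grid, n_harm=4):
    n = len(channels[0])
    combined = [sum(mags) for mags in zip(*(magnitude_spectrum(c) for c in channels))]
    def score(f0):
        return sum(combined[min(round(h * f0 * n / fs), len(combined) - 1)]
                   for h in range(1, n_harm + 1))
    return max(f0_grid, key=score)

fs, f0_true, n = 800.0, 100.0, 400
channels = [[math.sin(2 * math.pi * f0_true * t / fs + phase)      # fundamental
             + 0.5 * math.sin(2 * math.pi * 2 * f0_true * t / fs)  # 2nd harmonic
             for t in range(n)]
            for phase in (0.0, 0.7, 1.4)]  # per-channel phase offsets (toy stand-in for room effects)
print(estimate_f0(channels, fs, f0_grid=range(60, 201, 10)))
```

Summing magnitudes rather than complex spectra is what makes the combination robust to per-channel phase differences, the same motivation given above for working in the frequency domain.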
Abstract:
This paper presents a long-range, effectively error-free ultra high frequency (UHF) radio frequency identification (RFID) interrogation system. The system is based on a novel technique whereby two or more spatially separated transmit and receive antennas are used to enable greatly enhanced tag detection performance over longer distances, using antenna diversity combined with frequency and phase hopping. The novel technique is first theoretically modelled using a Rician fading channel. It is shown that conventional RFID systems suffer from multi-path fading, resulting in nulls in the radio environment. We demonstrate, for the first time, that the nulls can be moved around by varying the phase and frequency of the interrogation signals in a multi-antenna system. As a result, much enhanced coverage can be achieved. A proof-of-principle prototype RFID system is built based on an Impinj R2000 transceiver. The demonstrator system shows that the new approach improves the tag detection accuracy from <50% to 100% and the tag backscatter signal strength by 10 dB over a 20 m × 9 m area, compared with a conventional switched multi-antenna RFID system.
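A toy two-path model makes the null-steering idea concrete: the received amplitude |1 + ρ·exp(j(2πf·Δτ + φ))| has deep fades at certain carrier frequencies and phases, but hopping f or φ moves the fade, so a tag stuck in a null can be recovered. The path strength, delay, and hop sets below are illustrative assumptions, not the paper's Rician model.

```python
import cmath, math

# Direct path (unit amplitude) plus one reflection of relative strength
# rho arriving dtau seconds later, with extra transmit phase phi.

def rx_amplitude(f_hz, phi, rho=0.9, dtau=5e-9):
    return abs(1 + rho * cmath.exp(1j * (2 * math.pi * f_hz * dtau + phi)))

f0 = 915e6  # UHF RFID carrier in the 915 MHz ISM band
fixed = rx_amplitude(f0, 0.0)                    # tag sitting in a fade
hopped = max(rx_amplitude(f0 + df, phi)          # hop over frequency and phase
             for df in (0.0, 1e6, 2e6)
             for phi in (0.0, math.pi / 2, math.pi))
print(f"fixed: {fixed:.3f}, best over hops: {hopped:.3f}")
```

With the assumed geometry the fixed interrogation sees a deep fade while at least one frequency/phase combination restores a strong signal, which is the mechanism behind the coverage improvement reported above.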
Abstract:
Optically-fed distributed antenna system (DAS) technology is combined with passive ultra high frequency (UHF) radio frequency identification (RFID). It is shown that RFID signals can be carried on directly modulated radio-over-fiber links without impacting their performance. It is also shown that a multi-antenna DAS can greatly reduce the number of nulls experienced by RFID in a complex radio environment, increasing the likelihood of successful tag detection. Optimization of the DAS reduces the nulls further. We demonstrate RFID tag reading using a three-antenna DAS over a 20 m × 6 m area, limited by building constraints, where 100% of the test points can be successfully read. The detected signal strength from the tag is also observed to increase by an average of approximately 10 dB compared with a conventional switched multi-antenna RFID system. This improvement is achieved at +31 dBm equivalent isotropically radiated power (EIRP) from all three antenna units (AUs).
Abstract:
Nonlinear analysis of thermoacoustic instability is essential for prediction of the frequencies and amplitudes of limit cycles. In frequency-domain analyses, a quasi-linear transfer function between acoustic velocity and heat release rate perturbations, called the flame describing function (FDF), is obtained from a flame model or experiments. The FDF is a function of the frequency and amplitude of velocity perturbations but only contains the heat release response at the forcing frequency. While the gain and phase of the FDF provide insight into the nonlinear dynamics of the system, the accuracy of its predictions remains to be verified for different types of nonlinearity. In time-domain analyses, the governing equations of the fully coupled problem are solved to find the time evolution of the system. One method is to discretize the governing equations using a suitable basis, such as the natural acoustic modes of the system. The number of modes used in the discretization alters the accuracy of the solution. In our previous work we have shown that predictions using the FDF are almost exactly the same as those obtained from the time domain using only one mode for the discretization. We call this the single-mode method. In this paper we compare results from the single-mode and multi-mode methods, applied to a thermoacoustic system of a premixed flame in a tube. For some cases, the results differ greatly in both amplitude and frequency content. This study shows that the contribution from higher harmonics and subharmonics to the nonlinear dynamics can be significant and must be considered for an accurate and comprehensive analysis of thermoacoustic systems. Hence multi-mode simulations are necessary, and the single-mode method or the FDF may be insufficient to capture some of the complex nonlinear behaviour in thermoacoustics.
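The "fundamental-only" character of a describing function can be illustrated with a generic saturating nonlinearity standing in for a flame model (this is not the paper's FDF): force it with A·sin(ωt), keep only the output component at the forcing frequency, and the resulting gain N(A) falls as the amplitude grows, which is what sets the limit-cycle amplitude in an FDF analysis.

```python
import math

# Describing function of an ideal saturation y = sat(u), |y| <= limit:
# the gain is the fundamental Fourier component of sat(A*sin(theta))
# divided by the input amplitude A.

def describing_gain(A, limit=1.0, n=2000):
    acc = 0.0
    for k in range(n):                      # midpoint rule over one period
        theta = 2 * math.pi * (k + 0.5) / n
        y = max(-limit, min(limit, A * math.sin(theta)))
        acc += y * math.sin(theta)
    b1 = (2.0 / n) * acc                    # fundamental amplitude of the output
    return b1 / A                           # amplitude-dependent gain N(A)

for A in (0.5, 1.0, 2.0, 4.0):
    print(f"A = {A}: N(A) = {describing_gain(A):.3f}")
```

By construction N(A) discards all harmonics and subharmonics of the output, which is precisely the information the multi-mode time-domain simulations above show can matter.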
Abstract:
We review the potential of graphene in ultra-high-speed circuits. To date, most high-frequency graphene circuits consist of a single transistor integrated with a few passive components. The development of multi-transistor graphene integrated circuits operating at GHz frequencies can pave the way for applications in which high operating speed is traded off against power consumption and circuit complexity. Novel vertical and planar devices based on a combination of graphene and layered materials could broaden the scope and performance of future devices. © 2013 IEEE.