13 results for COHERENCE

in CaltechTHESIS


Relevance: 20.00%

Publisher:

Abstract:

The relentlessly increasing demand for network bandwidth, driven primarily by Internet-based services such as mobile computing, cloud storage and video-on-demand, calls for more efficient utilization of the available communication spectrum, such as that afforded by resurgent DSP-powered coherent optical communications. Encoding information in the phase of the optical carrier, using multilevel phase modulation formats, and employing coherent detection at the receiver allows for enhanced spectral efficiency and thus enables increased network capacity. The distributed feedback (DFB) semiconductor laser has served as the near-exclusive light source powering the fiber-optic, long-haul network for over 30 years. The transition to coherent communication systems is pushing the DFB laser to the limits of its abilities, because its limited temporal coherence directly restricts the number of different phases that can be imparted to a single optical pulse and thus the data capacity. Temporal coherence, most commonly quantified by the spectral linewidth Δν, is limited by phase noise, the result of quantum-mandated spontaneous emission of photons due to random recombination of carriers in the active region of the laser.

In this work we develop a fundamentally new type of semiconductor laser with the requisite coherence properties. We demonstrate electrically driven lasers characterized by a quantum-noise-limited spectral linewidth as low as 18 kHz. This narrow linewidth is the result of a new laser design philosophy that separates the functions of photon generation and storage, enabled by a hybrid Si/III-V integration platform. Photons generated in the active region of the III-V material are readily stored in the low-loss Si that hosts the bulk of the laser field, thereby enabling high-Q photon storage. The storage of a large number of coherent quanta acts as an optical flywheel, which by its inertia reduces the effect of spontaneous-emission-mandated phase perturbations on the laser field, while the enhanced photon lifetime effectively reduces the emission rate of incoherent quanta into the lasing mode. Narrow linewidths are obtained over a wavelength bandwidth spanning the entire optical communication C-band (1530–1575 nm) at only a fraction of the input power required by conventional DFB lasers. The results presented in this thesis hold great promise for the large-scale integration of lithographically tuned, high-coherence laser arrays for use in coherent communications, enabling Tb/s-scale data capacities.
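The coherence benefit of high-Q photon storage can be made concrete with the modified Schawlow-Townes relation, in which the quantum-limited linewidth scales inversely with the square of the loaded cavity Q. The Python sketch below uses illustrative parameter values; the spontaneous-emission factor, Henry factor, output power, and Q values are assumptions, not the thesis's measured numbers:

```python
import math

def schawlow_townes_linewidth(nu, Q, P_out, n_sp=2.0, alpha=3.0):
    """Quantum-limited laser linewidth (Hz), modified Schawlow-Townes form.

    nu    : optical carrier frequency (Hz)
    Q     : loaded cavity quality factor
    P_out : output power (W)
    n_sp  : spontaneous-emission factor (assumed value)
    alpha : Henry linewidth-enhancement factor (assumed value)
    """
    h = 6.62607015e-34           # Planck constant (J s)
    dnu_cav = nu / Q             # cavity linewidth (Hz)
    return math.pi * h * nu * dnu_cav**2 * n_sp * (1 + alpha**2) / P_out

nu = 2.998e8 / 1.55e-6           # ~193 THz carrier at 1550 nm
lw_lo_q = schawlow_townes_linewidth(nu, Q=1e5, P_out=1e-3)
lw_hi_q = schawlow_townes_linewidth(nu, Q=1e6, P_out=1e-3)
print(lw_lo_q / lw_hi_q)         # 1/Q^2 scaling: 10x higher Q -> 100x narrower
```

Raising the loaded Q tenfold, as the flywheel picture above suggests, narrows the quantum-limited linewidth a hundredfold, which is why moving the field into low-loss Si pays off so strongly.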

Relevance: 20.00%

Publisher:

Abstract:

Optical coherence tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of biomedical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system. (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects, simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The take-away message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter in human tissue). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before being back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scattering of the contributing photons). This fact alone makes OCT images very hard to interpret.

This thesis addresses the above challenges by developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but also provides the underlying ground truth of the simulated images, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, careful implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh interception, and parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical techniques, which are explained in detail later in the thesis.
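To give a flavor of the physics being simulated, here is a deliberately minimal photon-packet random walk: a toy baseline, not the thesis's accelerated simulator. The optical coefficients are assumed round numbers, the geometry is a bare 1-D slab, and the variance-reduction machinery (importance sampling, photon splitting, the voxel mesh) responsible for the 10,000x speedup is omitted:

```python
import math
import random

def _spin(ux, uy, uz, cos_t, phi):
    """Rotate direction (ux, uy, uz) by polar angle acos(cos_t), azimuth phi."""
    sin_t = math.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    if abs(uz) > 0.99999:                      # nearly vertical: simple form
        return sin_t * math.cos(phi), sin_t * math.sin(phi), math.copysign(cos_t, uz)
    tmp = math.sqrt(1.0 - uz * uz)
    nux = sin_t * (ux * uz * math.cos(phi) - uy * math.sin(phi)) / tmp + ux * cos_t
    nuy = sin_t * (uy * uz * math.cos(phi) + ux * math.sin(phi)) / tmp + uy * cos_t
    nuz = -sin_t * math.cos(phi) * tmp + uz * cos_t
    return nux, nuy, nuz

def photon_walk(mu_s=10.0, mu_a=0.1, g=0.9, depth=1.0, rng=random):
    """One photon packet in a slab (mm units). Returns the weight escaping the top.

    mu_s, mu_a : scattering/absorption coefficients per mm (assumed values)
    g          : Henyey-Greenstein anisotropy (must be nonzero here)
    """
    ux, uy, uz, z, w = 0.0, 0.0, 1.0, 0.0, 1.0
    for _ in range(10000):
        step = -math.log(1.0 - rng.random()) / (mu_s + mu_a)  # free path
        z += uz * step
        if z < 0.0:
            return w                            # back-scattered out: OCT signal
        if z > depth:
            return 0.0                          # left the region of interest
        w *= mu_s / (mu_s + mu_a)               # deposit absorbed weight
        # sample the Henyey-Greenstein phase function for the new polar angle
        s = (1 - g * g) / (1 - g + 2 * g * rng.random())
        cos_t = (1 + g * g - s * s) / (2 * g)
        ux, uy, uz = _spin(ux, uy, uz, cos_t, 2 * math.pi * rng.random())
    return 0.0

random.seed(0)
n = 2000
signal = sum(photon_walk() for _ in range(n)) / n
print(0.0 < signal < 1.0)   # mean escaping weight is a proper fraction
```

Even this toy version shows why depth is hard: packets reaching deeper voxels return with exponentially small weight, which is exactly what importance sampling and photon splitting are designed to counteract.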

Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we achieve 93%. We accomplished this through extensive use of machine learning. The success of the Monte Carlo simulation already puts us in a strong position by providing a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and types of layers present in the image); the image is then handed to a regression model trained specifically for that particular structure to predict the lengths of the different layers and thereby reconstruct the ground truth of the image. We also demonstrate that ideas from deep learning can be used to further improve the performance.
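The two-stage pipeline described above (a gating classifier routing each image to a structure-specific regressor) can be sketched schematically. The features, thresholds, and linear "experts" below are entirely hypothetical, chosen only to show the control flow:

```python
# Toy "committee of experts": a gating classifier routes each image's feature
# vector to a structure-specific regressor, mirroring the two-stage pipeline.
def gate(features):
    """Toy structure classifier: one layer vs. two layers (rule on mean value)."""
    return "two_layer" if sum(features) / len(features) > 0.5 else "one_layer"

experts = {
    # Each expert predicts layer lengths for one structure class (toy linear fits).
    "one_layer": lambda f: [10.0 * f[0]],
    "two_layer": lambda f: [10.0 * f[0], 5.0 * f[1]],
}

def predict(features):
    """Classify first, then hand off to the matching specialist regressor."""
    return experts[gate(features)](features)

print(predict([0.2, 0.1]))   # -> [2.0]       routed to the one-layer expert
print(predict([0.9, 0.8]))   # -> [9.0, 4.0]  routed to the two-layer expert
```

The design choice this illustrates: each regressor only ever sees images of one structure class, so it can specialize, at the cost of an upfront classification whose errors propagate downstream.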

It is worth pointing out that solving the inverse problem automatically improves the imaging depth: previously the lower half of an OCT image (i.e., greater depth) could hardly be seen, but it now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that a well-trained machine learning model, when fed with them, yields precisely the true structure of the object being imaged. This is another case where artificial intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt to reconstruct an OCT image at the pixel level. Even attempting this kind of task requires many fully annotated OCT images (hundreds or even thousands), which is clearly impossible without a powerful simulation tool like the one developed in this thesis.

Relevance: 20.00%

Publisher:

Abstract:

This thesis presents an investigation of endoscopic optical coherence tomography (OCT). As a noninvasive imaging modality, OCT has emerged as an increasingly important diagnostic tool for many clinical applications. Despite its many merits, such as high resolution and depth resolvability, a major limitation is the relatively shallow penetration depth in tissue (about 2–3 mm), mainly due to tissue scattering and absorption. To overcome this limitation, many different endoscopic OCT systems have been developed. By utilizing a minimally invasive endoscope, the OCT probing beam can be brought into close vicinity of the tissue of interest, bypassing the scattering of intervening tissues, so that it can collect the reflected light signal from the desired depth and provide a clear image of the physiological structure of the region, which cannot be obtained by traditional OCT. In this thesis, three endoscope designs are studied. While they rely on vastly different principles, they all converge to solve this long-standing problem.

A hand-held endoscope with manual scanning is explored first. When a user holds a hand-held endoscope to examine samples, the movement of the device provides a natural scanning. We proposed and implemented an optical tracking system to estimate and record the trajectory of the device. By registering each OCT axial scan with the spatial information obtained from the tracking system, one can simply ‘paint’ a desired volume and realize any arbitrary scanning pattern by manually waving the endoscope over the region of interest. The accuracy of the tracking system was measured to be about 10 microns, which is comparable to the lateral resolution of most OCT systems. Targeted phantom samples and biological samples were manually scanned, and the reconstructed images verified the method.
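The registration step can be illustrated with a toy accumulator: tracked lateral positions are quantized to voxels, and repeated A-scans landing in the same voxel are averaged. The voxel size and scan data below are hypothetical, not the system's actual parameters:

```python
def paint_volume(scans, voxel=0.01):
    """Accumulate tracked A-scans into a lateral voxel grid (toy sketch, mm units).

    scans : list of ((x, y), a_scan) pairs, where (x, y) comes from the optical
            tracker and a_scan is a list of depth samples. Revisited voxels
            (natural with free-hand 'painting') are averaged.
    """
    grid, counts = {}, {}
    for (x, y), a in scans:
        key = (round(x / voxel), round(y / voxel))   # quantize lateral position
        if key in grid:
            grid[key] = [g + v for g, v in zip(grid[key], a)]
            counts[key] += 1
        else:
            grid[key], counts[key] = list(a), 1
    return {k: [v / counts[k] for v in g] for k, g in grid.items()}

vol = paint_volume([((0.0, 0.0), [1.0, 2.0]),
                    ((0.001, 0.0), [3.0, 4.0]),   # same voxel as above: averaged
                    ((0.05, 0.0), [5.0, 6.0])])
print(vol[(0, 0)])   # -> [2.0, 3.0]
print(vol[(5, 0)])   # -> [5.0, 6.0]
```

The ~10 micron tracker accuracy quoted above is what makes such quantization sensible: the position error is smaller than a lateral resolution cell.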

Next, we investigated a mechanical way to steer the beam in an OCT endoscope, termed paired-angle-rotation scanning (PARS). This concept was proposed by my colleague, and we further developed the technology by enhancing the longevity of the device, reducing the diameter of the probe, and shrinking the form factor of the hand-piece. Several families of probes have been designed and fabricated with varying optical performance. They have been applied to different applications, including examination of collector channels for glaucoma stent implantation and vitreous remnant detection during live animal vitrectomy.

Lastly, a novel scanning method with no moving parts has been devised. This approach is based on the electro-optic (EO) effect of a KTN crystal. With Ohmic contacts at the electrodes, the KTN crystal can exhibit a special mode of the EO effect, termed the space-charge-controlled electro-optic effect, in which carrier electrons are injected into the material via the Ohmic contacts. By applying a high voltage across the material, a linear phase profile can be built up in this mode, which in turn deflects the light beam passing through. We constructed a relay telescope to adapt the KTN deflector into a bench-top OCT scanning system. One of the major technical challenges for this system is the strong chromatic dispersion of the KTN crystal within the wavelength band of the OCT system. We investigated its impact on the acquired OCT images and proposed a new approach to estimate and compensate the actual dispersion. Compared with traditional methods, the new method is more computationally efficient and accurate. Some biological samples were scanned by this KTN-based system, and the acquired images demonstrated the feasibility of using this system in an endoscopy setting. Above all, my research aims to provide solutions for implementing an OCT endoscope. As technology evolves from manual, to mechanical, to electrical approaches, different solutions are presented. Since each has its own advantages and disadvantages, one has to determine the actual requirements and select the best fit for a specific application.
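The deflection mechanism follows from elementary wave optics: a linear transverse phase profile tilts the wavefront by an angle proportional to the phase gradient. A small sketch with illustrative numbers (the aperture, wavelength, and phase ramp are not taken from the thesis):

```python
import math

def deflection_angle(dphi_dx, wavelength):
    """Small-angle beam deflection (rad) from a linear transverse phase gradient.

    A phase profile phi(x) with constant slope dphi_dx (rad/m) tilts the
    wavefront by theta ~ (lambda / 2 pi) * dphi_dx.
    """
    return wavelength / (2 * math.pi) * dphi_dx

# Illustrative numbers: a 2*pi phase ramp across a 1 mm aperture at 1310 nm
# deflects the beam by ~1.31 mrad.
theta = deflection_angle(2 * math.pi / 1e-3, 1.31e-6)
print(round(theta * 1e3, 2))   # mrad -> 1.31
```

In the space-charge-controlled mode described above, the injected charge is what shapes the index profile so that the phase accumulated across the aperture is (approximately) linear in x.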

Relevance: 20.00%

Publisher:

Abstract:

Cells in the lateral intraparietal cortex (LIP) of rhesus macaques respond vigorously and in spatially tuned fashion to briefly memorized visual stimuli. Responses to stimulus presentation, memory maintenance, and task completion are seen, in varying combination from neuron to neuron. To help elucidate this functional segmentation, a new system for simultaneous recording from multiple neighboring neurons was developed. The two parts of this dissertation discuss the technical achievements and scientific discoveries, respectively.

Technology. Simultaneous recordings from multiple neighboring neurons were made with four-wire bundle electrodes, or tetrodes, which were adapted to the awake behaving primate preparation. Signals from these electrodes were partitionable into a background process with a 1/f-like spectrum and foreground spiking activity spanning 300–6000 Hz. Continuous voltage recordings were sorted into spike trains using a state-of-the-art clustering algorithm, producing a mean of 3 cells per site. The algorithm classified 96% of spikes correctly when tetrode recordings were confirmed with simultaneous intracellular signals. Recording locations were verified with a new technique that creates electrolytic lesions visible in magnetic resonance imaging, eliminating the need for histological processing. In anticipation of future multi-tetrode work, the chronic chamber microdrive, a device for long-term tetrode delivery, was developed.
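The clustering step at the heart of spike sorting can be caricatured with a minimal k-means on two-channel peak amplitudes. The feature values and the deterministic initialization below are assumptions for illustration, not the dissertation's actual algorithm (which operates on all four tetrode channels with richer features):

```python
def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k=2, iters=20):
    """Minimal k-means standing in for the spike-sorting clustering step.

    Toy deterministic init for k=2: first and last point (assumed distinct).
    """
    centers = [points[0], points[-1]]
    labels = []
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

# Toy peak-amplitude features (tetrode channels 1 and 2) for spikes from two
# putative cells: cell A is large on channel 1, cell B on channel 2.
cell_a = [(1.0 + 0.005 * i, 0.2) for i in range(20)]
cell_b = [(0.2, 1.0 + 0.005 * i) for i in range(20)]
labels = kmeans(cell_a + cell_b)
print(set(labels[:20]), set(labels[20:]))   # -> {0} {1}: cleanly separated
```

Real tetrode sorting exploits exactly this geometry: the same spike appears with different amplitudes on the four closely spaced wires, so each cell traces out a separable cluster in amplitude space.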

Science. Simultaneously recorded neighboring LIP neurons were found to have similar preferred targets in the memory saccade paradigm, but dissimilar peristimulus time histograms (PSTHs). A majority of neighboring cell pairs had a difference in preferred directions of under 45°, while the trial time of maximal response showed a broader distribution, suggesting homogeneity of tuning with heterogeneity of function. A continuum of response characteristics was present, rather than a set of specific response types; however, a mapping experiment suggests this may be because a given cell's PSTH changes shape as well as amplitude through the response field. Spike train autocovariance was tuned over target and changed through trial epoch, suggesting different mechanisms during memory versus background periods. Mean frequency-domain spike-to-spike coherence was concentrated below 50 Hz with a significant maximum of 0.08; mean time-domain coherence had a narrow peak in the range ±10 ms with a significant maximum of 0.03. Time-domain coherence was found to be untuned for short lags (10 ms), but significantly tuned at larger lags (50 ms).
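The time-domain analysis referred to above rests on lagged covariance between binned spike trains. A minimal sketch, with toy spike trains and an arbitrary bin size (not the dissertation's estimator, which also normalizes to a proper coherence):

```python
def cross_covariance(x, y, max_lag):
    """Covariance of two binned spike trains at lags -max_lag..max_lag (bins)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(x[i], y[i + lag]) for i in range(n) if 0 <= i + lag < n]
        out[lag] = sum((a - mx) * (b - my) for a, b in pairs) / len(pairs)
    return out

# Two toy trains in which the second cell tends to fire one bin after the first:
x = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
y = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
cov = cross_covariance(x, y, max_lag=2)
print(max(cov, key=cov.get))   # -> 1: covariance peaks at a +1-bin lag
```

A peak near zero lag would suggest shared drive or direct coupling; structure only at longer lags, as reported above, points instead to slower co-modulation of the pair.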

Relevance: 10.00%

Publisher:

Abstract:

This thesis details investigations of the unconventional low-energy quasiparticle excitations in electron-type cuprate superconductors and electron-type iron-based superconductors, as well as the electronic properties of Dirac fermions in graphene and three-dimensional strong topological insulators, through spatially resolved scanning tunneling spectroscopy (STS) experiments.

Magnetic-field- and temperature-dependent evolution of the spatially resolved quasiparticle spectra in the electron-type cuprate La0.1Sr0.9CuO2 (La-112, TC = 43 K) is investigated experimentally. For temperatures (T) below the superconducting transition temperature (TC), and in zero field, the quasiparticle spectra of La-112 exhibit gapped behavior with two coherence peaks and no satellite features. For magnetic-field measurements at T < TC, the first observation of vortices in La-112 is reported. Moreover, pseudogap-like spectra are revealed inside the cores of vortices, where superconductivity is suppressed. The intra-vortex pseudogap-like spectra are characterized by an energy gap VPG = 8.5 ± 0.6 meV, while the inter-vortex quasiparticle spectra show larger peak-to-peak gap values, with Δpk-pk(H) > VPG and Δpk-pk(0) = 12.2 ± 0.8 meV > Δpk-pk(H > 0). The quasiparticle spectra are found to be gapped at all locations up to the highest magnetic field examined (H = 6 T) and reveal an apparent low-energy cutoff at the VPG energy scale.

Magnetic-field- and temperature-dependent evolution of the spatially resolved quasiparticle spectra in the electron-type "122" iron-based superconductor Ba(Fe1-xCox)2As2 is investigated for multiple doping levels (x = 0.06, 0.08, 0.12, with TC = 14 K, 24 K, and 20 K, respectively). For all doping levels at T < TC, two-gap superconductivity is observed. Both superconducting gaps decrease monotonically in size with increasing temperature and disappear above the superconducting transition temperature, TC. Magnetic resonant modes that follow the temperature dependence of the superconducting gaps have been identified in the tunneling quasiparticle spectra. Together with quasiparticle interference (QPI) analysis and magnetic field studies, this provides strong evidence for two-gap sign-changing s-wave superconductivity.

Additionally, spatially resolved scanning tunneling spectroscopic studies are performed on mechanically exfoliated graphene and on graphene grown by chemical vapor deposition. In all cases, lattice strain exerts a strong influence on the electronic properties of the sample. In particular, topological defects give rise to pseudomagnetic fields (B ~ 50 T) and charging effects, resulting in quantized conductance peaks associated with integer and fractional quantum Hall states.

Finally, spectroscopic studies on the three-dimensional strong topological insulator Bi2Se3 found evidence of impurity resonances in the surface state. The impurities are in the unitary limit, and the spectral resonances are localized spatially to within ~0.2 nm of the impurity. The spectral weight of the impurity resonance diverges as the Fermi energy approaches the Dirac point, and the rapid recovery of the surface state suggests robust topological protection against perturbations that preserve time-reversal symmetry.

Relevance: 10.00%

Publisher:

Abstract:

Signal processing techniques play important roles in the design of digital communication systems, including information manipulation, transmitter signal processing, channel estimation, channel equalization and receiver signal processing. By interacting with communication theory and system implementation technologies, signal processing specialists develop efficient schemes for various communication problems by wisely exploiting mathematical tools such as analysis, probability theory, matrix theory and optimization theory, among many others. In recent years, researchers realized that multiple-input multiple-output (MIMO) channel models are applicable to a wide range of different physical communication channels. Using elegant matrix-vector notation, many MIMO transceiver (precoder and equalizer) design problems can be solved by matrix and optimization theory. Furthermore, researchers showed that majorization theory and matrix decompositions, such as the singular value decomposition (SVD), geometric mean decomposition (GMD) and generalized triangular decomposition (GTD), provide unified frameworks for solving many point-to-point MIMO transceiver design problems.

In this thesis, we consider transceiver design problems for linear time-invariant (LTI) flat MIMO channels, linear time-varying narrowband MIMO channels, flat MIMO broadcast channels, and doubly selective scalar channels. The channel estimation problem is also considered. The main contributions of this dissertation are the development of new matrix decompositions, and the use of matrix decompositions and majorization theory in practical transmit-receive scheme design for transceiver optimization problems. Elegant solutions are obtained, novel transceiver structures are developed, ingenious algorithms are proposed, and performance analyses are derived.

The first part of the thesis focuses on transceiver design for LTI flat MIMO channels. We propose a novel matrix decomposition which decomposes a complex matrix into a product of several sets of semi-unitary matrices and upper triangular matrices in an iterative manner. The complexity of the new decomposition, the generalized geometric mean decomposition (GGMD), is always less than or equal to that of the geometric mean decomposition (GMD). The optimal GGMD parameters which yield the minimal complexity are derived. Based on the channel state information (CSI) at both the transmitter (CSIT) and receiver (CSIR), GGMD is used to design a butterfly-structured decision feedback equalizer (DFE) MIMO transceiver which achieves the minimum average mean square error (MSE) under a total transmit power constraint. A novel iterative detection algorithm for the receiver is also proposed. For application to cyclic prefix (CP) systems, in which the SVD of the equivalent channel matrix can be easily computed, the proposed GGMD transceiver has a K/log_2(K)-fold complexity advantage over the GMD transceiver, where K is the number of data symbols per data block and is a power of 2. The performance analysis shows that the GGMD DFE transceiver can convert a MIMO channel into a set of parallel subchannels with the same bias and signal-to-interference-plus-noise ratios (SINRs). Hence, the average bit error rate (BER) is automatically minimized without the need for bit allocation. Moreover, the proposed transceiver can achieve the channel capacity simply by applying independent scalar Gaussian codes of the same rate to the subchannels.
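The property exploited here, that a GMD-type decomposition equalizes the subchannel gains at the geometric mean of the channel's singular values, can be checked numerically. The sketch below illustrates only this property; it does not implement the GGMD algorithm itself, and the singular values are made-up numbers:

```python
import math

def gmd_diagonal(singular_values):
    """Common diagonal value of R in a geometric mean decomposition H = Q R P*.

    GMD produces an upper triangular R with *equal* diagonal entries, namely
    the geometric mean of H's singular values, so every subchannel of the DFE
    transceiver sees the same gain (same SINR, no bit allocation needed).
    """
    k = len(singular_values)
    return math.exp(sum(math.log(s) for s in singular_values) / k)

# Illustrative channel with very unequal singular values:
sv = [8.0, 2.0, 0.5]
r = gmd_diagonal(sv)
print(round(r, 4))                              # -> 2.0 (geometric mean)
# The product of subchannel gains (hence capacity) is preserved:
print(math.isclose(r ** 3, 8.0 * 2.0 * 0.5))    # -> True
```

This is the reason the equal-rate Gaussian coding claim above works: after the decomposition, no subchannel is stronger than any other.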

In the second part of the thesis, we focus on MIMO transceiver design for slowly time-varying MIMO channels under the zero-forcing or MMSE criterion. Even though the GGMD/GMD DFE transceivers work for slowly time-varying MIMO channels by exploiting the instantaneous CSI at both ends, their performance is by no means optimal, since the temporal diversity of the time-varying channels is not exploited. Based on the GTD, we develop the space-time GTD (ST-GTD) for the decomposition of linear time-varying flat MIMO channels. Under the assumption that CSIT, CSIR and channel prediction are available, we use the proposed ST-GTD to develop space-time geometric mean decomposition (ST-GMD) DFE transceivers under the zero-forcing or MMSE criterion. Under perfect channel prediction, the new system minimizes both the average MSE at the detector in each space-time (ST) block (which consists of several coherence blocks), and the average per-ST-block BER in the moderate-to-high SNR region. Moreover, the ST-GMD DFE transceiver designed under the MMSE criterion maximizes the Gaussian mutual information over the equivalent channel seen by each ST block. In general, the newly proposed transceivers perform better than the GGMD-based systems, since the superimposed temporal precoder is able to exploit the temporal diversity of time-varying channels. For practical applications, a novel ST-GTD based system which does not require channel prediction but shares the same asymptotic BER performance with the ST-GMD DFE transceiver is also proposed.

The third part of the thesis considers two quality-of-service (QoS) transceiver design problems for flat MIMO broadcast channels. The first is the power minimization problem (min-power) with a total bitrate constraint and per-stream BER constraints. The second is the rate maximization problem (max-rate) with a total transmit power constraint and per-stream BER constraints. Exploiting a particular class of joint triangularization (JT), we are able to jointly optimize the bit allocation and the broadcast DFE transceiver for the min-power and max-rate problems. The resulting optimal designs are called the minimum-power JT broadcast DFE transceiver (MPJT) and the maximum-rate JT broadcast DFE transceiver (MRJT), respectively. In addition to the optimal designs, two suboptimal designs based on the QR decomposition are proposed; they are realizable for an arbitrary number of users.

Finally, we investigate the design of a discrete Fourier transform (DFT) modulated filterbank transceiver (DFT-FBT) for LTV scalar channels. For both known LTV channels and unknown wide-sense stationary uncorrelated scattering (WSSUS) statistical channels, we show how to optimize the transmitting and receiving prototypes of a DFT-FBT such that the SINR at the receiver is maximized. In addition, a novel pilot-aided subspace channel estimation algorithm is proposed for orthogonal frequency division multiplexing (OFDM) systems with quasi-stationary multi-path Rayleigh fading channels. Using the concept of a difference co-array, the new technique can construct M^2 co-pilots from M physical pilot tones with alternating pilot placement. Subspace methods, such as MUSIC and ESPRIT, can be used to estimate the multipath delays, and the number of identifiable paths is theoretically up to O(M^2). With the delay information, an MMSE estimator of the frequency response is derived. It is shown through simulations that the proposed method outperforms the conventional subspace channel estimator when the number of multipath components is greater than or equal to the number of physical pilots minus one.
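The co-pilot construction rests on the difference co-array: the set of pairwise differences of M pilot positions can contain on the order of M^2 distinct lags. A minimal sketch with a hypothetical sparse pilot placement (not the alternating design of the thesis):

```python
def difference_coarray(pilot_positions):
    """Distinct pairwise differences of pilot tone indices (the co-array)."""
    return sorted({p - q for p in pilot_positions for q in pilot_positions})

# M sparse pilots yield up to M^2 - M + 1 distinct co-array lags:
pilots = [0, 1, 4, 9]                # illustrative placement, M = 4
lags = difference_coarray(pilots)
print(len(pilots), len(lags))        # -> 4 13, i.e. M^2 - M + 1 for M = 4
```

This is why subspace estimators applied to the co-array can resolve far more multipath delays than the raw pilot count would suggest.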

Relevance: 10.00%

Publisher:

Abstract:

Optical frequency combs (OFCs) provide a direct phase-coherent link between optical and RF frequencies, and enable precision measurement of optical frequencies. In recent years, a new class of frequency combs (microcombs) has emerged, based on parametric frequency conversion in dielectric microresonators. Microcombs have large line spacings, from tens to hundreds of GHz, allowing easy access to individual comb lines for arbitrary waveform synthesis. They also provide broadband parametric gain bandwidth, not limited by the specific atomic or molecular transitions of conventional OFCs. Emerging applications of microcombs include low-noise microwave generation, astronomical spectrograph calibration, direct comb spectroscopy, and high-capacity telecommunications.

In this thesis, research is presented starting with the introduction of a new type of chemically etched, planar silica-on-silicon disk resonator. A record Q factor of 875 million is achieved for on-chip devices. A simple and accurate approach to characterize the FSR and dispersion of microcavities is demonstrated. Microresonator-based frequency combs (microcombs) are demonstrated with microwave repetition rates below 80 GHz on a chip for the first time. Low threshold powers (as low as 1 mW) are demonstrated across a wide range of resonator FSRs, from 2.6 to 220 GHz, in surface-loss-limited disk resonators. The rich and complex dynamics of microcomb RF noise are studied. High-coherence RF phase locking of microcombs is demonstrated, in which injection locking of the subcomb offset frequencies is observed via pump-detuning alignment. Moreover, temporal mode locking, featuring subpicosecond pulses from a parametric 22 GHz microcomb, is observed. We further demonstrate shot-noise-limited white phase noise of a microcomb for the first time. Finally, stabilization of the microcomb repetition rate is realized by phase-lock-loop control.
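The connection between the reported Q factor and photon storage can be sketched with the standard resonator relations delta_nu = nu/Q and tau = Q/(2*pi*nu); the 1550 nm operating wavelength below is an assumption for illustration:

```python
import math

c = 2.998e8   # speed of light (m/s)

def cavity_linewidth_hz(wavelength_m, Q):
    """Loaded cavity linewidth: delta_nu = nu / Q."""
    return (c / wavelength_m) / Q

def photon_lifetime_s(wavelength_m, Q):
    """Photon storage time: tau = Q / (2 pi nu)."""
    return Q / (2 * math.pi * c / wavelength_m)

# A Q of 875 million at an assumed 1550 nm corresponds to a ~221 kHz
# resonance linewidth and a photon lifetime approaching a microsecond:
Q = 875e6
linewidth_khz = cavity_linewidth_hz(1.55e-6, Q) / 1e3
lifetime_s = photon_lifetime_s(1.55e-6, Q)
print(round(linewidth_khz))   # -> 221
print(lifetime_s)
```

Long photon storage is what drives the parametric oscillation threshold down toward the milliwatt (and, for the Brillouin devices below, microwatt) level.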

For another major nonlinear optical application of disk resonators, highly coherent stimulated Brillouin lasers (SBLs) on silicon are also demonstrated, with Schawlow-Townes noise below 0.1 Hz^2/Hz, a record low for any chip-based laser, and technical noise comparable to commercial narrow-linewidth fiber lasers. The SBL devices are efficient, featuring more than 90% quantum efficiency and thresholds as low as 60 microwatts. Moreover, novel properties of the SBL are studied, including cascaded operation, threshold tuning, and mode-pulling phenomena. Furthermore, high-performance microwave generation using on-chip cascaded Brillouin oscillation is demonstrated. It is also robust enough to enable incorporation as the optical voltage-controlled oscillator in the first demonstration of a photonic-based microwave frequency synthesizer. Finally, applications of microresonators as frequency reference cavities and low-phase-noise optomechanical oscillators are presented.

Relevance: 10.00%

Publisher:

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.
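The textbook relation behind this measurement is that the two-photon coincidence rate at a balanced beamsplitter falls linearly with the mode overlap of the two particles, which sets the dip visibility. A minimal sketch of that ideal lossless model (not an analysis of the actual plasmonic data):

```python
def coincidence_probability(overlap):
    """Two-particle coincidence probability at a 50:50 beamsplitter.

    overlap : mode overlap between the two single particles
              (1 = indistinguishable, 0 = fully distinguishable).
    P_cc = (1 - overlap) / 2, so perfect overlap gives the full HOM dip.
    """
    return (1.0 - overlap) / 2.0

def visibility(overlap):
    """HOM dip visibility: (P_distinguishable - P(overlap)) / P_distinguishable."""
    p_dist = coincidence_probability(0.0)   # 0.5 for distinguishable particles
    return (p_dist - coincidence_probability(overlap)) / p_dist

print(visibility(1.0))              # -> 1.0, the ideal indistinguishable case
print(round(visibility(0.93), 2))   # -> 0.93, comparable to the measured value
```

In this model the visibility equals the mode overlap itself, which is why a 93% dip is read as evidence that the two plasmons remained almost perfectly indistinguishable despite propagating as electron-density waves in a lossy metal.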

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance: 10.00%

Publisher:

Abstract:

The field of cavity optomechanics, which concerns the coupling of a mechanical object's motion to the electromagnetic field of a high finesse cavity, allows for exquisitely sensitive measurements of mechanical motion, from large-scale gravitational wave detection to microscale accelerometers. Moreover, it provides a potential means to control and engineer the state of a macroscopic mechanical object at the quantum level, provided one can realize sufficiently strong interactions relative to the ambient thermal noise. Recent experiments utilizing the optomechanical interaction to cool mechanical resonators to their motional quantum ground state allow for a variety of quantum engineering applications, including preparation of non-classical mechanical states and coherent optical to microwave conversion. Optomechanical crystals (OMCs), in which bandgaps for both optical and mechanical waves can be introduced through patterning of a material, provide one particularly attractive means for realizing strong interactions between high-frequency mechanical resonators and near-infrared light. Beyond the usual paradigm of cavity optomechanics involving isolated single mechanical elements, OMCs can also be fashioned into planar circuits for photons and phonons, and arrays of optomechanical elements can be interconnected via optical and acoustic waveguides. Such coupled OMC arrays have been proposed as a way to realize quantum optomechanical memories, nanomechanical circuits for continuous variable quantum information processing and phononic quantum networks, and as a platform for engineering and studying quantum many-body physics of optomechanical meta-materials.

However, while ground state occupancies (that is, average phonon occupancies less than one) have been achieved in OMC cavities utilizing laser cooling techniques, parasitic absorption and the concomitant degradation of the mechanical quality factor fundamentally limit this approach. On the other hand, the high mechanical frequency of these systems allows for the possibility of using a dilution refrigerator to simultaneously achieve low thermal occupancy and long mechanical coherence time by passively cooling the device to the millikelvin regime. This thesis describes efforts to realize the measurement of OMC cavities inside a dilution refrigerator, including the development of fridge-compatible optical coupling schemes and the characterization of the heating dynamics of the mechanical resonator at sub-kelvin temperatures.
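The appeal of passive millikelvin cooling can be seen from the Bose-Einstein occupancy of a GHz-frequency mechanical mode. A quick numeric sketch (the 5 GHz mode frequency is an illustrative value, not a number from the thesis):

```python
import math

def thermal_occupancy(f_mech_hz, temp_k):
    """Bose-Einstein thermal occupancy n = 1 / (exp(hf/kT) - 1)."""
    h = 6.62607015e-34   # Planck constant, J*s
    kB = 1.380649e-23    # Boltzmann constant, J/K
    x = h * f_mech_hz / (kB * temp_k)
    return 1.0 / math.expm1(x)

# A ~5 GHz OMC mechanical mode (illustrative):
print(thermal_occupancy(5e9, 0.010))  # ~4e-11 phonons at 10 mK
print(thermal_occupancy(5e9, 4.0))    # ~16 phonons at 4 K
```

The steep exponential is why a dilution refrigerator, rather than a 4 K cryostat, puts such a mode deep in its quantum ground state without any laser cooling.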

We will begin by summarizing the theoretical framework used to describe cavity optomechanical systems, as well as a handful of the quantum applications envisioned for such devices. Then, we will present background on the design of the nanobeam OMC cavities used for this work, along with details of the design and characterization of tapered fiber couplers for optical coupling inside the fridge. Finally, we will present measurements of the devices at fridge base temperatures of T_f = 10 mK, using both heterodyne spectroscopy and time-resolved sideband photon counting, as well as detailed analysis of the prospects for future quantum applications based on the observed optically-induced heating.
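Sideband photon counting of this kind is commonly used for thermometry: assuming equal measurement efficiency on the two sidebands, the ratio of anti-Stokes to Stokes count rates gives the phonon occupancy directly. A minimal sketch (the count rates are hypothetical):

```python
def occupancy_from_asymmetry(rate_anti_stokes, rate_stokes):
    """Infer phonon occupancy n from sideband count rates.

    With equal detection efficiency on both sidebands,
    Gamma_AS / Gamma_S = n / (n + 1),  so  n = r / (1 - r).
    """
    r = rate_anti_stokes / rate_stokes
    if not 0 <= r < 1:
        raise ValueError("anti-Stokes rate must be below the Stokes rate")
    return r / (1.0 - r)

# Hypothetical sideband count rates (counts/s):
print(occupancy_from_asymmetry(20.0, 120.0))  # n = 0.2
```

The asymmetry vanishes as n grows (r -> 1) and becomes maximal in the ground state, which is what makes this a self-calibrating quantum thermometer.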

Resumo:

With the advent of the laser in the year 1960, the field of optics experienced a renaissance from what was considered to be a dull, solved subject to an active area of development, with applications and discoveries which are yet to be exhausted 55 years later. Light is now nearly ubiquitous not only in cutting-edge research in physics, chemistry, and biology, but also in modern technology and infrastructure. One quality of light, that of the imparted radiation pressure force upon reflection from an object, has attracted intense interest from researchers seeking to precisely monitor and control the motional degrees of freedom of an object using light. These optomechanical interactions have inspired myriad proposals, ranging from quantum memories and transducers in quantum information networks to precision metrology of classical forces. Alongside advances in micro- and nano-fabrication, the burgeoning field of optomechanics has yielded a class of highly engineered systems designed to produce strong interactions between light and motion.

Optomechanical crystals are one such system in which the patterning of periodic holes in thin dielectric films traps both light and sound waves to a micro-scale volume. These devices feature strong radiation pressure coupling between high-quality optical cavity modes and internal nanomechanical resonances. Whether for applications in the quantum or classical domain, the utility of optomechanical crystals hinges on the degree to which light radiating from the device, having interacted with mechanical motion, can be collected and detected in an experimental apparatus consisting of conventional optical components such as lenses and optical fibers. While several efficient methods of optical coupling exist to meet this task, most are unsuitable for the cryogenic or vacuum integration required for many applications. The first portion of this dissertation will detail the development of robust and efficient methods of optically coupling optomechanical resonators to optical fibers, with an emphasis on fabrication processes and optical characterization.

I will then proceed to describe a few experiments enabled by the fiber couplers. The first studies the performance of an optomechanical resonator as a precise sensor for continuous position measurement. The sensitivity of the measurement, limited by the detection efficiency of intracavity photons, is compared to the standard quantum limit imposed by the quantum properties of the laser probe light. The added noise of the measurement is seen to fall within a factor of 3 of the standard quantum limit, representing an order of magnitude improvement over previous experiments utilizing optomechanical crystals, and matching the performance of similar measurements in the microwave domain.
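For scale (illustrative numbers, not taken from the thesis): the standard quantum limit is set by the oscillator's zero-point motion x_zpf = sqrt(hbar / (2 m omega)), which for an assumed ~100-femtogram effective mass at GHz frequencies is a few femtometres:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant, J*s

def x_zpf(mass_kg, omega_rad_s):
    """Zero-point motion amplitude sqrt(hbar / (2 m omega)),
    the length scale against which measurement noise is compared."""
    return math.sqrt(hbar / (2.0 * mass_kg * omega_rad_s))

# Assumed nanobeam parameters: 100 fg effective mass, 5 GHz mode.
print(x_zpf(1e-16, 2 * math.pi * 5e9))  # ~4e-15 m, i.e. a few femtometres
```

Resolving motion at this scale is why the overall photon detection efficiency, from the cavity all the way to the detector, dominates the noise budget.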

The next experiment uses single photon counting to detect individual phonon emission and absorption events within the nanomechanical oscillator. The scattering of laser light from mechanical motion produces correlated photon-phonon pairs, and detection of the emitted photon corresponds to an effective phonon counting scheme. In the process of scattering, the coherence properties of the mechanical oscillation are mapped onto the reflected light. Intensity interferometry of the reflected light then allows measurement of the temporal coherence of the acoustic field. These correlations are measured for a range of experimental conditions, including the optomechanical amplification of the mechanics to a self-oscillation regime, and comparisons are drawn to a laser system for phonons. Finally, prospects for using phonon counting and intensity interferometry to produce non-classical mechanical states are detailed following recent proposals in literature.
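Intensity interferometry of this kind amounts to estimating the normalized second-order correlation g2(tau) of the detected photon stream. A minimal binned-counts estimator (a sketch, not the analysis code used in the thesis):

```python
import numpy as np

def g2(counts, max_lag):
    """Normalized intensity correlation from binned photon counts.

    g2[k] = <n(t) n(t + k)> / <n>^2 for lags k = 1 .. max_lag,
    assuming a stationary signal. Lag 0 is omitted because, for
    single-photon counting, it requires a coincidence treatment.
    """
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    out = [np.mean(counts[:-k] * counts[k:]) / mean**2
           for k in range(1, max_lag + 1)]
    return np.array(out)

# A perfectly anti-bunched toy stream: clicks in alternating bins.
print(g2([1, 0] * 50, 2))  # [0., 2.]
```

For a thermal (chaotic) phonon field the estimator approaches 2 at short lags and decays to 1 over the mechanical coherence time, while above the self-oscillation threshold it flattens toward 1, the laser-like signature discussed above.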

Resumo:

This thesis presents investigations in four areas of theoretical astrophysics: the production of sterile neutrino dark matter in the early Universe, the evolution of small-scale baryon perturbations during the epoch of cosmological recombination, the effect of primordial magnetic fields on the redshifted 21-cm emission from the pre-reionization era, and the nonlinear stability of tidally deformed neutron stars.

In the first part of the thesis, we study the asymmetry-driven resonant production of 7 keV-scale sterile neutrino dark matter in the primordial Universe at temperatures T ≳ 100 MeV. We report final DM phase space densities that are robust to uncertainties in the nature of the quark-hadron transition. We give transfer functions for cosmological density fluctuations that are useful for N-body simulations. We also provide a public code for the production calculation.

In the second part of the thesis, we study the instability of small-scale baryon pressure sound waves during cosmological recombination. We show that for the relevant wavenumbers, inhomogeneous recombination is driven by the transport of ionizing continuum and Lyman-alpha photons. We find a maximum growth factor of less than ≈ 1.2 in 10^7 random realizations of initial conditions. The low growth factors are due to the relatively short duration of the recombination epoch.

In the third part of the thesis, we propose a method of measuring weak magnetic fields, of order 10^-19 G (or 10^-21 G if scaled to the present day), with large coherence lengths in the intergalactic medium prior to and during the epoch of cosmic reionization. The method utilizes the Larmor precession of spin-polarized neutral hydrogen in the triplet state of the hyperfine transition. We perform detailed calculations of the microphysics behind this effect, and take into account all the processes that affect the hyperfine transition, including radiative decays, collisions, and optical pumping by Lyman-alpha photons.
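An order-of-magnitude sketch of the precession rate involved (the g-factor g_F ≈ 1 for the F = 1 hyperfine level is an assumption of this sketch, and none of the numbers are taken from the thesis):

```python
mu_B = 9.274e-24   # Bohr magneton, J/T
h = 6.626e-34      # Planck constant, J*s
g_F = 1.0          # assumed g-factor of the F = 1 level of ground-state H

B_tesla = 1e-19 * 1e-4          # the quoted 10^-19 gauss, in tesla
f_larmor = g_F * mu_B * B_tesla / h
print(f_larmor)                 # ~1.4e-13 Hz
```

A precession frequency this small corresponds to a period of order 10^5 yr, which is why the effect is only detectable statistically, through its cumulative imprint on the 21-cm signal.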

In the final part of the thesis, we study the non-linear effects of tidal deformations of neutron stars (NS) in a compact binary. We compute the largest three- and four-mode couplings among the tidal mode and high-order p- and g-modes of similar radial wavenumber. We demonstrate the near-exact cancellation of their effects, and resolve the question of the stability of the tidally deformed NS to leading order. This result is significant for the extraction of binary parameters from gravitational wave observations.

Resumo:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed data storage, and even has connections to the design of linear measurements used in compressive sensing. But in every context, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
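The group-characterizable construction is compact enough to sketch: for subgroups G_1, ..., G_n of a finite group G, the vector h(S) = log2(|G| / |∩_{i∈S} G_i|) is always a valid entropy vector. The sketch below builds one from Z2 × Z2 and evaluates the Ingleton gap; an abelian example like this necessarily satisfies the inequality, and the violating groups discussed in the text are larger and non-abelian.

```python
from itertools import combinations
from math import log2

def entropy_vector(G, subgroups):
    """h(S) = log2(|G| / |intersection of G_i for i in S|)."""
    h = {}
    n = len(subgroups)
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            inter = set(G)
            for i in S:
                inter &= set(subgroups[i])
            h[S] = log2(len(G) / len(inter))
    return h

def ingleton_gap(h):
    """RHS - LHS of the Ingleton inequality
    I(X1;X2) <= I(X1;X2|X3) + I(X1;X2|X4) + I(X3;X4)
    for four variables indexed 0..3. A negative gap is a violation."""
    def H(*s):
        return h[tuple(sorted(s))]
    lhs = H(0) + H(1) - H(0, 1)
    rhs = (H(0, 2) + H(1, 2) - H(0, 1, 2) - H(2)
           + H(0, 3) + H(1, 3) - H(0, 1, 3) - H(3)
           + H(2) + H(3) - H(2, 3))
    return rhs - lhs

# Z2 x Z2 with its three order-2 subgroups plus the trivial subgroup:
G = [(a, b) for a in (0, 1) for b in (0, 1)]
subs = [[(0, 0), (1, 0)], [(0, 0), (0, 1)], [(0, 0), (1, 1)], [(0, 0)]]
h = entropy_vector(G, subs)
print(ingleton_gap(h))  # 2.0 (nonnegative, as it must be for an abelian group)
```

Searching over subgroups of non-abelian groups with the same two functions is, in spirit, how violating entropy vectors are hunted down.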

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
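The row-selection idea can be illustrated in the abelian (DFT) case: choosing rows of the Fourier matrix indexed by a difference set yields an equiangular frame meeting the Welch lower bound on coherence. A sketch using Z_7 and the quadratic-residue difference set {1, 2, 4}:

```python
import numpy as np

def frame_from_rows(n, rows):
    """Unit-norm frame: the n columns of the selected rows
    of the n x n DFT matrix, each living in C^len(rows)."""
    j = np.arange(n)
    return np.exp(2j * np.pi * np.outer(rows, j) / n) / np.sqrt(len(rows))

def coherence(F):
    """Largest |<f_i, f_j>| over distinct columns of F."""
    G = np.abs(F.conj().T @ F)
    np.fill_diagonal(G, 0.0)
    return G.max()

# Quadratic residues mod 7 form a difference set, giving 7 vectors
# in C^3 whose coherence meets the Welch bound sqrt((n-k)/(k(n-1))).
F = frame_from_rows(7, [1, 2, 4])
print(coherence(F))                      # ~0.4714
print(np.sqrt((7 - 3) / (3 * (7 - 1))))  # Welch bound, the same value
```

The character-theoretic analysis in the text generalizes exactly this computation from cyclic groups to selected rows of an arbitrary group Fourier matrix.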

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.

Resumo:

Experimental investigations were made of the nature of weak superconductivity in a structure having well-defined, controllable characteristics and geometry. Controlled experiments were made possible by using a thin-film structure which was entirely metallic and consisted of a superconducting film with a localized section that was weak in the sense that its transition temperature was depressed relative to the rest of the film. The depression of transition temperature was brought about by underlaying the superconductor with a normal metal.

The DC and AC electrical characteristics of this structure were studied. It was found that the structure exhibited a non-zero, time-averaged supercurrent at finite voltages up to at least 0.2 mV, and generated an oscillating electric potential at the frequency given by the Josephson relation. The DC V-I characteristic and the amplitude of the AC oscillation were found to be consistent with a two-fluid (normal current-supercurrent) model of weak superconductivity based on a thermodynamically irreversible process of repetitive phase slip, featuring a periodic time dependence in the amplitude of the superconducting order parameter.
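The Josephson relation fixes the oscillation frequency from the DC voltage alone, f = 2eV/h. A quick numerical check using fundamental constants only:

```python
# Josephson frequency-voltage relation: f = 2 e V / h.
e = 1.602176634e-19  # elementary charge, C
h = 6.62607015e-34   # Planck constant, J*s

def josephson_freq(voltage_v):
    """Oscillation frequency (Hz) of a weak link biased at voltage_v."""
    return 2 * e * voltage_v / h

# At the 0.2 mV upper end of the observed supercurrent branch:
print(josephson_freq(0.2e-3) / 1e9)  # ~96.7 GHz
```

The 2e reflects the charge of a Cooper pair; observing oscillation at exactly this frequency is the standard fingerprint of a genuine weak-superconductivity effect.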

The observed linewidth of the AC oscillation could be accounted for by incorporating Johnson noise in the two-fluid model.

Experimentally it was found that the behavior of a short weak superconductor (with length on the order of the coherence distance) could be characterized by its critical current and normal-state resistance, and an empirical expression was obtained for the time dependence of the supercurrent and voltage.

It was found that the results could not be explained on the basis of the theory of the Josephson junction.