15 results for SPECKLE NOISE
in CaltechTHESIS
Abstract:
One of the most exciting discoveries in astrophysics of the last decade is the sheer diversity of planetary systems. These include "hot Jupiters", giant planets so close to their host stars that they orbit once every few days; "Super-Earths", planets with sizes intermediate between those of Earth and Neptune, of which no analogs exist in our own solar system; multi-planet systems with planets ranging from smaller than Mars to larger than Jupiter; planets orbiting binary stars; free-floating planets flying through the emptiness of space without any star; and even planets orbiting pulsars. Despite these remarkable discoveries, the field is still young, and there are many areas about which precious little is known. In particular, we do not know what planets orbit the Sun-like stars nearest to our own solar system, and we know very little about the compositions of extrasolar planets. This thesis provides developments in both directions, through two instrumentation projects.
The first chapter of this thesis concerns detecting planets in the Solar neighborhood using precision stellar radial velocities, also known as the Doppler technique. We present an analysis determining the most efficient way to detect planets considering factors such as spectral type, wavelengths of observation, spectrograph resolution, observing time, and instrumental sensitivity. We show that G and K dwarfs observed at 400-600 nm are the best targets for surveys complete down to a given planet mass and out to a specified orbital period. Overall we find that M dwarfs observed at 700-800 nm are the best targets for habitable-zone planets, particularly when including the effects of systematic noise floors caused by instrumental imperfections. Somewhat surprisingly, we demonstrate that a modestly sized observatory, with a dedicated observing program, is up to the task of discovering such planets.
We present just such an observatory in the second chapter: the "MINiature Exoplanet Radial Velocity Array," or MINERVA. We describe the design, which uses a novel multi-aperture approach to increase stability and performance through lower system étendue, while keeping costs and time to deployment down. We present calculations of the expected planet yield, along with data showing the system performance from testing and development on the Caltech campus. We also present the motivation, design, and performance of a fiber coupling system for the array, critical for efficiently and reliably bringing light from the telescopes to the spectrograph. We finish by presenting the current status of MINERVA, now operational at Mt. Hopkins Observatory in Arizona.
The second part of this thesis concerns a very different method of planet detection, direct imaging, which involves the discovery and characterization of planets by collecting and analyzing their light. Directly analyzing planetary light is the most promising way to study planetary atmospheres, formation histories, and compositions. Direct imaging is extremely challenging: it requires a high-performance adaptive optics system to correct the atmospheric blurring of the parent star's point-spread function, a coronagraph to suppress stellar diffraction, and image post-processing to remove non-common-path "speckle" aberrations that can overwhelm any planetary companions.
To this end, we present the "Stellar Double Coronagraph," or SDC, a flexible coronagraphic platform for use with the 200-inch Hale telescope. It has two focal planes and two pupil planes, allowing for a number of different observing modes, including multiple vortex phase masks in series for improved contrast and inner working angle behind the obscured aperture of the telescope. We present the motivation, design, performance, and data reduction pipeline of the instrument. In the following chapter, we present some early science results, including the first image of a companion to the star delta Andromedae, which had been previously hypothesized but never seen.
A further chapter presents a wavefront control code developed for the instrument, using the technique of "speckle nulling," which can remove optical aberrations from the system using the deformable mirror of the adaptive optics system. This code allows for improved contrast and inner working angles, and was written in a modular style so as to be portable to other high contrast imaging platforms. We present its performance on optical, near-infrared, and thermal infrared instruments on the Palomar and Keck telescopes, showing how it can improve contrasts by a factor of a few in less than ten iterations.
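To illustrate the basic speckle-nulling loop described here (a self-contained toy simulation, not the instrument's code; the grid size, aberration strength, control region, and probe grid below are all illustrative assumptions), the following Python sketch injects a few weak phase ripples into a simulated pupil, then iteratively measures the brightest speckle in a one-sided control region and cancels it with a matched sinusoidal ripple on a simulated deformable mirror, choosing the ripple phase by a probe-and-minimize search on the focal-plane intensity.

```python
import numpy as np

N = 256                                   # simulation grid size (illustrative)
x = (np.arange(N) - N / 2) / N
X, Y = np.meshgrid(x, x)                  # pupil-plane coordinates in aperture units
pupil = (X**2 + Y**2 < 0.25).astype(float)

# Quasi-static aberration made of a few weak ripples -> a handful of bright speckles
rng = np.random.default_rng(0)
aberration = np.zeros((N, N))
for _ in range(5):
    fx_ab, fy_ab = rng.integers(6, 24, size=2)
    aberration += 2e-3 * np.cos(2 * np.pi * (fx_ab * X + fy_ab * Y)
                                + rng.uniform(0, 2 * np.pi))
dm = np.zeros((N, N))                     # deformable-mirror phase we build up

def focal_intensity(phase):
    """Fraunhofer propagation: focal-plane intensity for a given pupil phase (radians)."""
    return np.abs(np.fft.fftshift(np.fft.fft2(pupil * np.exp(1j * phase))))**2

peak = focal_intensity(np.zeros((N, N))).max()        # unaberrated stellar peak
ctrl = np.zeros((N, N), dtype=bool)                   # one-sided "dark hole" region
ctrl[N // 2 + 5:N // 2 + 25, N // 2 + 5:N // 2 + 25] = True

for it in range(8):                                   # a few iterations suffice here
    img = focal_intensity(aberration + dm)
    ky, kx = np.unravel_index(np.argmax(np.where(ctrl, img, 0)), img.shape)
    fy, fx = ky - N // 2, kx - N // 2                 # speckle position -> ripple frequency
    amp0 = 2 * np.sqrt(img[ky, kx] / peak)            # ripple amplitude matching the speckle

    best_ripple, best_val = np.zeros((N, N)), img[ky, kx]
    for amp in (0.5 * amp0, amp0, 1.5 * amp0):        # small probe grid (illustrative)
        for ph in np.linspace(0, 2 * np.pi, 8, endpoint=False):
            ripple = amp * np.cos(2 * np.pi * (fx * X + fy * Y) + ph)
            val = focal_intensity(aberration + dm + ripple)[ky, kx]
            if val < best_val:
                best_ripple, best_val = ripple, val
    dm += best_ripple                                 # apply the best anti-speckle ripple
    print(f"iter {it}: mean contrast in dark hole = "
          f"{focal_intensity(aberration + dm)[ctrl].mean() / peak:.2e}")
```

In this toy setup the contrast in the control region improves by well over a factor of a few within a handful of iterations; a real instrument must additionally calibrate the mapping between DM ripple amplitude/phase and focal-plane speckles, which the sketch sidesteps by simulating both in the same model.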
One of the large challenges in direct imaging is sensing and correcting the electric field in the focal plane to remove scattered light that can be much brighter than any planets. In the last chapter, we present a new method of focal-plane wavefront sensing, combining a coronagraph with a simple phase-shifting interferometer. We present its design and implementation on the Stellar Double Coronagraph, demonstrating its ability to create regions of high contrast by measuring and correcting for optical aberrations in the focal plane. Finally, we derive how it is possible to use the same hardware to distinguish companions from speckle errors using the principles of optical coherence. We present results observing the brown dwarf HD 49197b, demonstrating the ability to detect it despite it being buried in the speckle noise floor. We believe this is the first detection of a substellar companion using the coherence properties of light.
Abstract:
Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.
At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.
In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.
In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for the faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
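As a point of reference (this is the textbook behavior of the standard 15-to-1 protocol of Bravyi and Kitaev, quoted for orientation rather than a result of this thesis): one distillation round with perfect Cliffords maps an input magic-state error rate ε to roughly 35ε³, while Clifford faults at rate p add a contribution of order p that distillation cannot remove, which is exactly the kind of floor and convergence-rate interplay described above.

```latex
% 15-to-1 magic-state distillation (Bravyi-Kitaev), leading-order behaviour:
%   ideal Cliffords:   eps_out ~ 35 eps_in^3
%   faulty Cliffords:  an additional O(p) term sets a floor on eps_out
\epsilon_{\text{out}} \;\approx\; 35\,\epsilon_{\text{in}}^{3} + O(p)
```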
Abstract:
We have used the technique of non-redundant masking at the Palomar 200-inch telescope, together with radio VLBI imaging software, to make optical aperture synthesis maps of two binary stars, β Coronae Borealis and σ Herculis. The dynamic range of the map of β CrB, a binary star with a separation of 230 milliarcseconds, is 50:1. For σ Her, we find a separation of 70 milliarcseconds, and the dynamic range of our image is 30:1. These results demonstrate the potential of the non-redundant masking technique for diffraction-limited imaging of astronomical objects with high dynamic range.
We find that the optimal integration time for measuring the closure phase is longer than that for measuring the fringe amplitude. Unlike in radio interferometry, there is not a close relationship between amplitude errors and phase errors. Amplitude self-calibration is less effective at optical wavelengths than at radio wavelengths. The primary beam sensitivity correction made in radio aperture synthesis is not necessary in optical aperture synthesis.
The effects of atmospheric disturbances on optical aperture synthesis have been studied by Monte Carlo simulations based on the Kolmogorov theory of refractive-index fluctuations. For non-redundant masking with τ_c-sized apertures, the simulated fringe amplitude gives an upper bound on the observed fringe amplitude. A smooth transition is seen from the non-redundant masking regime to the speckle regime with increasing aperture size. The fractional reduction of the fringe amplitude with bandwidth is nearly independent of the aperture size. The limiting magnitude of optical aperture synthesis with τ_c-sized apertures, and that with apertures larger than τ_c, are derived.
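As an illustration of the kind of Monte Carlo input such simulations require (a generic sketch, not the thesis's actual code), a Kolmogorov phase screen can be generated with the standard FFT filtering method using the Kolmogorov phase power spectrum Φ(f) ≈ 0.023 r0^(-5/3) f^(-11/3), with f in cycles per meter; the grid size, pixel scale, and Fried parameter r0 below are illustrative assumptions.

```python
import numpy as np

def kolmogorov_phase_screen(n=512, pixel_scale=0.02, r0=0.15, seed=0):
    """Generate an n x n Kolmogorov phase screen (radians) by FFT filtering.

    pixel_scale : meters per pixel (illustrative)
    r0          : Fried parameter in meters (illustrative)
    Filters complex white noise with the square root of the Kolmogorov phase
    power spectrum Phi(f) = 0.023 r0^(-5/3) f^(-11/3), f in cycles/m.
    (Normalization follows one common FFT-synthesis convention; factors of
    sqrt(2) differ between references, so treat absolute levels as indicative.)
    """
    rng = np.random.default_rng(seed)
    f1d = np.fft.fftfreq(n, d=pixel_scale)          # spatial frequencies, 1/m
    fx, fy = np.meshgrid(f1d, f1d)
    f = np.hypot(fx, fy)
    f[0, 0] = np.inf                                # drop the undefined piston (DC) term
    psd = 0.023 * r0**(-5.0 / 3.0) * f**(-11.0 / 3.0)
    df = f1d[1] - f1d[0]                            # frequency-bin width
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return np.real(np.fft.ifft2(noise * np.sqrt(psd) * df) * n * n)

screen = kolmogorov_phase_screen()
print("phase screen rms (rad):", screen.std())
```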
Monte Carlo simulations are also made to study the sensitivity and resolution of the bispectral analysis of speckle interferometry. We present the bispectral modulation transfer function and its signal-to-noise ratio at high light levels. The results confirm the validity of the heuristic interferometric view of the image-forming process in the mid-spatial-frequency range. The signal-to-noise ratio of the bispectrum at arbitrary light levels is derived in the mid-spatial-frequency range.
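As a generic illustration of the bispectral (speckle-masking) quantity referred to here (a sketch under simple assumptions, not the thesis's simulation code): the image bispectrum for a pair of spatial frequencies u1, u2 is B(u1, u2) = Ĩ(u1) Ĩ(u2) Ĩ*(u1+u2), averaged over short-exposure frames, and its phase, like a closure phase, is insensitive to the random linear phase (image shift) introduced by the atmosphere.

```python
import numpy as np

def bispectrum(frames, u1, u2):
    """Frame-averaged image bispectrum B(u1, u2) = I~(u1) I~(u2) conj(I~(u1+u2)).

    frames : array of shape (n_frames, ny, nx), short-exposure images
    u1, u2 : spatial-frequency index pairs (fy, fx) into the unshifted FFT grid
    """
    n_frames, ny, nx = frames.shape
    acc = 0.0 + 0.0j
    for img in frames:
        ft = np.fft.fft2(img)
        f1 = ft[u1[0] % ny, u1[1] % nx]
        f2 = ft[u2[0] % ny, u2[1] % nx]
        f12 = ft[(u1[0] + u2[0]) % ny, (u1[1] + u2[1]) % nx]
        acc += f1 * f2 * np.conj(f12)
    return acc / n_frames

# Toy check: a circular image shift multiplies I~(u) by a pure phase, and the three
# factors' shift phases cancel, so the bispectrum phase is shift-invariant.
rng = np.random.default_rng(1)
obj = rng.random((64, 64))
shifted = np.roll(obj, shift=(3, -5), axis=(0, 1))
b0 = bispectrum(obj[None], (2, 1), (0, 3))
b1 = bispectrum(shifted[None], (2, 1), (0, 3))
print(np.angle(b0), np.angle(b1))    # equal up to numerical precision
```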
The non-redundant masking technique is suitable for imaging bright objects with high resolution and high dynamic range, while the faintest limit will be better pursued by speckle imaging.
Abstract:
In this thesis, I apply detailed waveform modeling to study noise correlations in different environments and to analyze earthquake waveforms for source parameters and velocity structure.
Green's functions from ambient noise correlations have primarily been used for travel-time measurement. In Part I of this thesis, by detailed waveform modeling of noise correlation functions, I retrieve both surface waves and crustal body waves from noise, and use them in improving earthquake centroid locations and regional crustal structures. I also present examples in which the noise correlations do not yield Green's functions, yet the results are still interesting and useful after case-by-case analyses, including non-uniform distribution of noise sources, spurious velocity changes, and noise correlations on the Amery Ice Shelf.
In Part II of this thesis, I study teleseismic body waves of earthquakes for source parameters and near-source structure. With the dense modern global network and improved methodologies, I obtain high-resolution earthquake locations, focal mechanisms, and rupture processes, which provide critical insights into earthquake faulting processes in the shallow and deep parts of subduction zones. Waveform modeling of relatively simple subduction zone events also provides new constraints on the structure of subducted slabs.
In summary, the philosophy behind my approaches to these relatively independent problems is to draw observational insight from seismic waveforms in critical and simple ways.
Abstract:
Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, such that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently-available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.
This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
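For orientation, the two benefits listed above can be read off the textbook (modified) Schawlow-Townes scaling for the quantum-limited linewidth, quoted below with deliberately loose prefactors since conventions vary; here Δν_c is the cold-cavity linewidth set by the total loss, n_sp the spontaneous-emission factor, and P the output power. This is standard laser physics rather than a formula taken from this thesis.

```latex
% Textbook (modified) Schawlow-Townes scaling; prefactors depend on convention:
%   lower cavity loss -> smaller cold-cavity linewidth Delta nu_c (enters squared)
%   more stored photons / higher power P -> each spontaneous event matters less
\Delta\nu \;\sim\; \frac{\pi h\nu \,(\Delta\nu_c)^{2}\, n_{sp}}{P}
```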
Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than those of lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.
Abstract:
This thesis describes the development of low-noise heterodyne receivers at THz frequencies for submillimeter astronomy using Nb-based superconductor-insulator-superconductor (SIS) tunneling junctions. The mixers utilize a quasi-optical configuration consisting of a planar twin-slot antenna and two antisymmetrically fed junctions on an antireflection-coated silicon hyperhemispherical lens. On-chip integrated tuning circuits, in the form of microstrip lines, are used to obtain maximum coupling efficiency in the designed frequency band. To reduce the rf losses in the integrated tuning circuits above the superconducting Nb gap frequency (~700 GHz), normal-metal Al is used in place of Nb in the tuning circuits.
To account for the rf losses in the microstrip lines, we calculated the surface impedance of the Al films using the nonlocal anomalous skin effect for finite-thickness films; the surface impedance of the Nb films was calculated using the Mattis-Bardeen theory in the extreme anomalous limit. Our calculations show that the losses of the Al and Nb microstrip lines are about equal at 830 GHz. For Al-wiring and Nb-wiring mixers both optimized at 1050 GHz, the RF coupling efficiency of the Al-wiring mixer is higher than that of the Nb-wiring one by almost 50%. We have designed both Nb-wiring and Al-wiring mixers below and above the gap frequency.
A Fourier transform spectrometer (FTS) has been constructed especially for the study of the frequency response of SIS receivers. This FTS features a large aperture size (10 inches) and high frequency resolution (114 MHz). The FTS spectra, obtained using the SIS receivers as direct detectors on the FTS, agree quite well with our theoretical simulations. We have also, for the first time, measured the FTS heterodyne response of an SIS mixer at sufficiently high resolution to resolve the LO and the sidebands. Heterodyne measurements of our SIS receivers with Nb-wiring or Al-wiring have yielded results which are among the best reported to date for broadband heterodyne receivers. The Nb-wiring mixers, covering the 400-850 GHz band with four separate fixed-tuned mixers, have uncorrected DSB receiver noise temperatures around 5hν/k_B up to 700 GHz, and better than 540 K at 808 GHz. An Al-wiring mixer designed for the 1050 GHz band has an uncorrected DSB receiver noise temperature of 840 K at 1042 GHz and a 2.5 K bath temperature. Mixer performance analysis shows that Nb junctions can work well up to twice the gap frequency and that the major cause of loss above the gap frequency is the rf loss in the microstrip tuning structures. Further advances in THz SIS mixers may be possible using circuits fabricated with higher-gap superconductors such as NbN; however, this will require high-quality films with low RF surface resistance at THz frequencies.
Abstract:
This work reports investigations of weakly superconducting proximity effect bridges. These bridges, which exhibit the Josephson effects, are produced by bisecting a superconductor with a short (<1 µm) region of material whose superconducting transition temperature is below that of the adjacent superconductors. The bridges are fabricated from layered refractory-metal thin films whose transition temperature depends upon the thickness ratio of the materials involved; the thickness ratio is changed in the area of the bridge to lower its transition temperature. This is done through novel photolithographic techniques described in Chapter 2.
If two such proximity effect bridges are connected in parallel, they form a quantum interferometer. The maximum zero-voltage current through this circuit is periodically modulated by the magnetic flux through the circuit. At a constant bias current, the modulation of the critical current produces a modulation in the dc voltage across the bridge. This change in dc voltage has been found to be the result of a change in the internal dissipation in the device. A simple model using lumped circuit theory and treating the bridges as quantum oscillators of frequency ω = 2eV/ħ, where V is the time-average voltage across the device, has been found to adequately describe the observed voltage modulation.
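For reference, the flux modulation described here is, in the idealized case of two identical junctions and negligible loop inductance, the textbook two-junction interferometer (dc SQUID) result; the standard expression below is quoted for orientation and is not specific to the devices of this thesis.

```latex
% Idealized two-junction interferometer (identical junctions, negligible loop inductance):
% the maximum zero-voltage current is periodic in the applied flux with period Phi_0.
I_c(\Phi) \;=\; 2 I_0 \left| \cos\!\left( \frac{\pi \Phi}{\Phi_0} \right) \right|,
\qquad \Phi_0 = \frac{h}{2e}
```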
The quantum interferometers have been converted into galvanometers through the inclusion of an integral thin-film current path which couples magnetic flux through the interferometer. Thus a change in signal current produces a change in the voltage across the interferometer at a constant bias current. This work is described in Chapter 3 of the text.
The sensitivity of any device incorporating proximity effect bridges will ultimately be determined by the fluctuations in their electrical parameters. We have measured the spectral power density of the voltage fluctuations in proximity effect bridges using room-temperature electronics and a liquid-helium-temperature transformer to match the very low (~0.1 Ω) impedances characteristic of these devices.
We find the voltage noise to agree quite well with that predicted by phonon noise in the normal conduction through the bridge plus a contribution from the superconducting pair current through the bridge which is proportional to the ratio of this current to the time-average voltage across the bridge. The total voltage fluctuations are given by ⟨V²(f)⟩ = 4kTR_d²I/V, where R_d is the dynamic resistance, I the total current, and V the voltage across the bridge. An additional noise source with a strong 1/f^n dependence, 1.5 < n < 2, appears if the bridges are fabricated upon a glass substrate. This excess noise, attributed to thermodynamic temperature fluctuations in the volume of the bridge, increases dramatically on a glass substrate due to the greatly diminished thermal diffusivity of glass as compared to sapphire.
Abstract:
Noise measurements from 140 K to 350 K ambient temperature and between 10 kHz and 22 MHz, performed on a double injection silicon diode as a function of operating point, indicate that the high-frequency noise depends linearly on the ambient temperature T and on the differential conductance g measured at the same frequency. The noise is represented quantitatively by ⟨i²⟩ = α·4kTgΔf. A new interpretation demands Nyquist noise with α ≡ 1 in these devices at high frequencies. This is in accord with an equivalent circuit derived for the double injection process. The effects of diode geometry on the static I-V characteristic as well as on the ac properties are illustrated. Investigation of the temperature dependence of double injection yields measurements of the temperature variation of the common high-level lifetime τ (τ ∝ T²), the hole conductivity mobility µ_p (µ_p ∝ T^(-2.18)), and the electron conductivity mobility µ_n (µ_n ∝ T^(-1.75)).
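As a quick numerical illustration of the Nyquist form ⟨i²⟩ = 4kTgΔf with α = 1 (the temperature, conductance, and bandwidth below are hypothetical values chosen only to show the order of magnitude, not data from the thesis):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K

# Hypothetical operating point (illustrative only)
T       = 300.0         # ambient temperature, K
g       = 1e-3          # differential conductance at the measurement frequency, S
delta_f = 1e6           # measurement bandwidth, Hz

i_sq = 4 * k_B * T * g * delta_f            # <i^2> for alpha = 1, in A^2
print(f"<i^2> = {i_sq:.2e} A^2  ->  rms noise current = {math.sqrt(i_sq)*1e9:.1f} nA")
```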
Abstract:
The LIGO gravitational wave detectors are on the brink of making the first direct detections of gravitational waves. Noise cancellation techniques are described that both simplify the commissioning of these detectors and significantly improve their sensitivity to astrophysical sources. Future upgrades to the ground-based detectors will require further cancellation of Newtonian gravitational noise in order to make the transition from detectors striving to make the first direct detection of gravitational waves to observatories extracting physics from many, many detections. Techniques for this noise cancellation are described, as well as the work remaining in this realm.
Abstract:
The feedback coding problem for Gaussian systems in which the noise is neither white nor statistically independent between channels is formulated in terms of arbitrary linear codes at the transmitter and at the receiver. This new formulation is used to determine a number of feedback communication systems. In particular, the optimum linear code that satisfies an average power constraint on the transmitted signals is derived for a system with noiseless feedback and forward noise of arbitrary covariance. The noisy feedback problem is considered and signal sets for the forward and feedback channels are obtained with an average power constraint on each. The general formulation and results are valid for non-Gaussian systems in which the second order statistics are known, the results being applicable to the determination of error bounds via the Chebychev inequality.
Abstract:
The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first-order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noises. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.
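For orientation, the forward FP equation associated with an equation of this class takes the generic textbook form below, written here under the Itô convention with noise intensities defined by ⟨n_j(t) n_k(t′)⟩ = 2D_{jk} δ(t − t′); the choice of convention for parametric excitation is precisely the controversy noted later in this abstract, so this is offered as a reference point rather than the thesis's own statement.

```latex
% SDE of the class considered:  xdot + f(x) = sum_{j=1}^{m} h_j(x) n_j(t),
% with  <n_j(t) n_k(t')> = 2 D_{jk} delta(t - t').
% Generic forward Fokker-Planck equation for the transition density p(x, t)
% (Ito convention; the Stratonovich form adds the usual drift correction):
\frac{\partial p}{\partial t}
  = \frac{\partial}{\partial x}\!\left[ f(x)\, p \right]
  + \sum_{j,k=1}^{m} D_{jk}\,
    \frac{\partial^{2}}{\partial x^{2}}\!\left[ h_j(x)\, h_k(x)\, p \right]
```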
This method is applied to 4 subclasses: (1) m = 1, h_1 = const. (forcing function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).
Abstract:
In this thesis, I develop velocity and structure models for the Los Angeles Basin and Southern Peru. The ultimate goal is to better understand the geological processes involved in basin and subduction zone dynamics. The results are obtained from seismic interferometry using ambient noise and from receiver functions using earthquake-generated waves. Some unusual signals specific to the local structures are also studied. The main findings are summarized as follows:
(1) Los Angeles Basin
The shear wave velocities range from 0.5 to 3.0 km/s in the sediments, with lateral gradients at the Newport-Inglewood, Compton-Los Alamitos, and Whittier Faults. The basin is a maximum of 8 km deep along the profile, and the Moho rises to a depth of 17 km under the basin. The basin has a stretch factor of 2.6 in the center, decreasing to 1.3 at the edges, and is in approximate isostatic equilibrium. This "high-density" (~1 km spacing), "short-duration" (~1.5 month) experiment may serve as a prototype for covering basins with this type of low-cost survey.
(2) Peruvian subduction zone
Two prominent mid-crust structures are revealed in the 70 km thick crust under the Central Andes: a low-velocity zone interpreted as partially molten rocks beneath the Western Cordillera – Altiplano Plateau, and the underthrusting Brazilian Shield beneath the Eastern Cordillera. The low-velocity zone is oblique to the present trench, and possibly indicates the location of the volcanic arcs formed during the steepening of the Oligocene flat slab beneath the Altiplano Plateau.
The Nazca slab changes from normally dipping (~25 degrees) subduction in the southeast to flat subduction in the northwest of the study area. In the flat subduction regime, the slab subducts to ~100 km depth and then remains flat for ~300 km before resuming a normally dipping geometry. The flat part closely follows the topography of the continental Moho above, indicating a strong suction force between the slab and the overriding plate. A high-velocity mantle wedge exists above the western half of the flat slab, which indicates a lack of melting and thus explains the cessation of volcanism above. The velocity returns to normal values before the slab steepens again, indicating a possible resumption of dehydration and eclogitization.
(3) Some unusual signals
Strong higher-mode Rayleigh waves due to the basin structure are observed at periods less than 5 s. The particle motions provide a good test for distinguishing between the fundamental and higher modes. Precursor and coda waves relative to the interstation Rayleigh waves are observed, and are modeled with a strong scatterer located in the active volcanic area of Southern Peru. In contrast with the usual receiver function analysis, multiples are used extensively in this thesis: in the LA Basin, a good image is obtained only from PpPs multiples, while in Peru, PpPp multiples contribute significantly to the final results.
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically-motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
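To make the data-decomposition step concrete, here is a generic sketch of snapshot proper orthogonal decomposition via the SVD, under the usual assumptions of mean-subtracted, uniformly weighted snapshots; it is not the thesis's code, and empirical resolvent-mode decomposition is not reproduced here. The toy data and array sizes are illustrative.

```python
import numpy as np

def pod(snapshots):
    """Proper orthogonal decomposition of a snapshot matrix via the SVD.

    snapshots : array of shape (n_points, n_snapshots), one flow/force snapshot per column
    Returns (modes, energies, coeffs): spatial POD modes (columns), modal energies,
    and temporal coefficients, for the mean-subtracted data.
    """
    mean = snapshots.mean(axis=1, keepdims=True)
    X = snapshots - mean                          # fluctuations about the mean
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energies = s**2 / X.shape[1]                  # energy captured by each mode
    coeffs = np.diag(s) @ Vt                      # temporal coefficients
    return U, energies, coeffs

# Toy usage: 500 snapshots of a 2000-point field containing two coherent "structures"
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 2000)[:, None]
t = np.linspace(0, 10, 500)[None, :]
data = (np.sin(2 * np.pi * x) * np.cos(3 * t)
        + 0.3 * np.sin(6 * np.pi * x) * np.sin(7 * t)
        + 0.05 * rng.standard_normal((2000, 500)))
modes, energies, coeffs = pod(data)
print("fraction of energy in first two modes:", energies[:2].sum() / energies.sum())
```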
Abstract:
A large portion of the noise in the light output of a laser oscillator is associated with the noise in the laser discharge. The effect of the discharge noise on the laser output has been studied. The discharge noise has been explained through an ac equivalent circuit of the laser discharge tube.
The discharge noise corresponds to time-varying spatial fluctuations in the electron density, the inverted population density and the dielectric permittivity of the laser medium from their equilibrium values. These fluctuations cause a shift in the resonant frequencies of the laser cavity. When the fluctuation in the dielectric permittivity of the laser medium is a longitudinally traveling wave (corresponding to the case in which moving striations exist in the positive column of the laser discharge), the laser output is frequency modulated.
The discharge noise has been analyzed by representing the laser discharge by an equivalent circuit. An appropriate ac equivalent circuit of a laser discharge tube has been obtained by considering the frequency spectrum of the current response of the discharge tube to an ac voltage modulation. It consists of a series ρLC circuit, which represents the discharge region, in parallel with a capacitance C', which comes mainly from the stray wiring. The equivalent inductance and capacitance of the discharge region have been calculated from the resonant frequencies measured over a range of discharge currents, gas pressures, and positive-column lengths. The experimental data provide a set of typical values, and their dependence on the discharge parameters, for the equivalent inductance and capacitance of a discharge under laser operating conditions. It has been concluded from the experimental data that the equivalent inductance originates mainly from the positive column, while the equivalent capacitance is due to the discharge region other than the positive column.
The ac equivalent circuit of the laser discharge has been shown analytically and experimentally to be applicable to analyzing the internal discharge noise. Experimental measurements have been made on the frequency of moving striations in a laser discharge. Its experimental dependence on the discharge current agrees very well with the expected dependence obtained from an analysis of the circuit and the experimental data on the equivalent circuit elements. The agreement confirms the validity of representing a laser discharge tube by its ac equivalent circuit in analyzing the striation phenomenon and other low frequency noises. Data have also been obtained for the variation of the striation frequency with an externally-applied longitudinal magnetic field and the increase in frequency has been attributed to a decrease in the equivalent inductance of the laser discharge.
Abstract:
The experimental portion of this thesis estimates the power spectral density of very low frequency semiconductor noise, from 10^(-6.3) cps to 1 cps, with greater accuracy than that achieved in previous similar attempts. It is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but with α appearing to have a value of about 1 in the lowest decade. The noise sources are, among others, the first-stage circuits of a grounded-input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.
In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
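The factor-of-10 claim follows from the standard variance argument for averaging independent spectral estimates, sketched below for orientation and assuming, as stated, that the 10 sources have similar spectra.

```latex
% For N statistically independent estimates \hat S_i(f) of a common spectrum S(f):
\operatorname{Var}\!\left[\frac{1}{N}\sum_{i=1}^{N}\hat S_i(f)\right]
   \;=\; \frac{1}{N}\,\operatorname{Var}\!\left[\hat S_i(f)\right]
% Since the variance of a single smoothed estimate at fixed resolution scales roughly
% as 1/T (record length T), averaging N = 10 simultaneously recorded sources gives the
% accuracy of a single record about 10 times longer, i.e. one tenth of the data-taking time.
```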
In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.
Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data, were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain these data is included.
The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are its implications for the statistical properties of the noise? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".
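For reference, the Blackman and Tukey estimate mentioned above amounts to Fourier-transforming a lag-windowed sample autocovariance; the following generic sketch (with an arbitrary Hann lag window and illustrative parameters, not the author's implementation) shows the basic algorithm.

```python
import numpy as np

def blackman_tukey_psd(x, max_lag, fs=1.0):
    """Blackman-Tukey spectral estimate: FFT of the lag-windowed sample autocovariance.

    x       : real time series
    max_lag : maximum autocorrelation lag retained (sets the spectral resolution)
    fs      : sampling rate
    Returns (frequencies, psd) for non-negative frequencies.
    """
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Biased sample autocovariance for lags 0 .. max_lag
    acov = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])
    # Hann lag window (one common choice; the method leaves the window choice open)
    w = 0.5 * (1 + np.cos(np.pi * np.arange(max_lag + 1) / max_lag))
    acov_w = acov * w
    # Symmetric extension to negative lags, then transform
    full = np.concatenate([acov_w, acov_w[-2:0:-1]])
    psd = np.real(np.fft.rfft(full)) / fs
    freqs = np.fft.rfftfreq(len(full), d=1.0 / fs)
    return freqs, psd

# Toy usage: estimate the spectrum of a noisy sinusoid (illustrative parameters)
rng = np.random.default_rng(0)
fs, n = 100.0, 4000
t = np.arange(n) / fs
x = np.sin(2 * np.pi * 5.0 * t) + rng.standard_normal(n)
freqs, psd = blackman_tukey_psd(x, max_lag=200, fs=fs)
print("spectral peak found near:", freqs[np.argmax(psd)], "Hz")
```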