15 results for Noise Pollution.

in CaltechTHESIS


Relevance: 20.00%

Abstract:

Quantum computing offers powerful new techniques for speeding up the calculation of many classically intractable problems. Quantum algorithms can allow for the efficient simulation of physical systems, with applications to basic research, chemical modeling, and drug discovery; other algorithms have important implications for cryptography and internet security.

At the same time, building a quantum computer is a daunting task, requiring the coherent manipulation of systems with many quantum degrees of freedom while preventing environmental noise from interacting too strongly with the system. Fortunately, we know that, under reasonable assumptions, we can use the techniques of quantum error correction and fault tolerance to achieve an arbitrary reduction in the noise level.

In this thesis, we look at how additional information about the structure of noise, or "noise bias," can improve or alter the performance of techniques in quantum error correction and fault tolerance. In Chapter 2, we explore the possibility of designing certain quantum gates to be extremely robust with respect to errors in their operation. This naturally leads to structured noise where certain gates can be implemented in a protected manner, allowing the user to focus their protection on the noisier unprotected operations.

In Chapter 3, we examine how to tailor error-correcting codes and fault-tolerant quantum circuits in the presence of dephasing biased noise, where dephasing errors are far more common than bit-flip errors. By using an appropriately asymmetric code, we demonstrate the ability to improve the amount of error reduction and decrease the physical resources required for error correction.
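
To make the bias-tailoring idea concrete, here is a toy Monte Carlo sketch (not one of the codes analyzed in the thesis): a 3-qubit phase-flip repetition code under noise in which dephasing (Z) errors occur with probability p_z and bit-flip (X) errors with a much smaller probability p_x; all numbers are illustrative.

```python
# Toy Monte Carlo for a 3-qubit phase-flip repetition code under biased noise
# (illustrative only; not one of the codes analyzed in the thesis).
# With stabilizers X1X2 and X2X3, any single Z error is detected and corrected,
# so the dominant dephasing channel is suppressed to O(p_z^2); an odd number of
# X errors acts as a logical X, so the rare bit-flip channel grows to ~3*p_x.
import random

def logical_error_rates(p_z, p_x, trials=200_000, seed=0):
    rng = random.Random(seed)
    z_fail = x_fail = 0
    for _ in range(trials):
        n_z = sum(rng.random() < p_z for _ in range(3))
        n_x = sum(rng.random() < p_x for _ in range(3))
        z_fail += n_z >= 2        # majority vote over Z errors fails
        x_fail += n_x % 2 == 1    # odd number of X errors is undetectable
    return z_fail / trials, x_fail / trials

p_z, p_x = 1e-2, 1e-4             # assumed bias: dephasing 100x more likely
lz, lx = logical_error_rates(p_z, p_x)
print(f"physical p_z={p_z:.0e}, p_x={p_x:.0e}  ->  logical p_Z~{lz:.1e}, p_X~{lx:.1e}")
```

The dominant channel is suppressed to second order while the rare channel is only mildly amplified, which is the trade-off an asymmetric code exploits.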

In Chapter 4, we analyze a variety of protocols for distilling magic states, which enable universal quantum computation, in the presence of faulty Clifford operations. Here again there is a hierarchy of noise levels, with a fixed error rate for faulty gates and a second rate, for errors in the distilled states, which decreases as the states are distilled to better quality. The interplay of these different rates sets limits on the achievable distillation and on how quickly states converge to that limit.
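
A minimal numerical sketch of that interplay, assuming the standard 15-to-1 distillation scaling (output error roughly 35ε³ for ideal Cliffords) with a crude additive floor standing in for the fixed Clifford error rate; the constants and rates are illustrative, not the protocols analyzed in the thesis.

```python
# Illustrative recursion for magic-state distillation with faulty Cliffords
# (constants assumed: the ideal 15-to-1 protocol gives eps_out ~ 35*eps^3, and
# a crude additive floor p_c stands in for errors from the Clifford gates).
def distill_round(eps, p_clifford):
    return 35.0 * eps**3 + p_clifford

eps, p_c = 1e-2, 1e-7        # initial state error and fixed Clifford error (assumed)
for k in range(1, 7):
    eps = distill_round(eps, p_c)
    print(f"round {k}: eps = {eps:.2e}")
# eps converges within a few rounds to a floor of order p_c: the fixed Clifford
# error rate limits the achievable distillation, as described above.
```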

Relevance: 20.00%

Abstract:

In this thesis, I apply detailed waveform modeling to study noise correlations in different environments and to extract source parameters and velocity structure from earthquake waveforms.

Green's functions from ambient noise correlations have primarily been used for travel-time measurement. In Part I of this thesis, by detailed waveform modeling of noise correlation functions, I retrieve both surface waves and crustal body waves from noise, and use them in improving earthquake centroid locations and regional crustal structures. I also present examples in which the noise correlations do not yield Green's functions, yet the results are still interesting and useful after case-by-case analyses, including non-uniform distribution of noise sources, spurious velocity changes, and noise correlations on the Amery Ice Shelf.
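
The core operation behind this retrieval is the cross-correlation of long noise records between station pairs; a minimal synthetic sketch (station records, delay, and noise levels are all assumed, and this is not the thesis' processing chain):

```python
# Synthetic sketch of ambient-noise cross-correlation: a common noise wavefield
# recorded at two stations with a relative travel-time delay produces a peak in
# their cross-correlation at that lag, the basis of Green's function retrieval.
import numpy as np

rng = np.random.default_rng(0)
fs, n = 10.0, 100_000                  # sampling rate (Hz) and record length, assumed
source = rng.standard_normal(n)        # "ambient noise" source time series

delay = 25                             # inter-station travel time in samples, assumed
sta_a = source + 0.5 * rng.standard_normal(n)
sta_b = np.roll(source, delay) + 0.5 * rng.standard_normal(n)

# Circular cross-correlation via FFT; the peak lag estimates the travel time.
xcorr = np.fft.irfft(np.fft.rfft(sta_b) * np.conj(np.fft.rfft(sta_a)), n=n)
lag = int(np.argmax(xcorr))
print(f"recovered delay: {lag} samples = {lag / fs:.1f} s (true: {delay} samples)")
```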

In Part II of this thesis, I study teleseismic body waves of earthquakes for source parameters or near-source structure. With the dense modern global network and improved methodologies, I obtain high-resolution earthquake locations, focal mechanisms, and rupture processes, which provide critical insights into earthquake faulting processes in the shallow and deep parts of subduction zones. Waveform modeling of relatively simple subduction zone events also provides new constraints on the structure of subducted slabs.

In summary, although these problems are relatively independent, the unifying philosophy behind my approaches is to draw observational insight from seismic waveforms in critical and simple ways.

Relevance: 20.00%

Abstract:

This work concerns itself with the possibility of solutions, both cooperative and market-based, to pollution abatement problems. In particular, we are interested in pollutant emissions in Southern California and possible solutions to the abatement problems enumerated in the 1990 Clean Air Act. A tradable pollution permit program has been implemented to reduce emissions, creating property rights associated with various pollutants.

Before we discuss the performance of market-based solutions to LA's pollution woes, we consider the existence of cooperative solutions. In Chapter 2, we examine pollutant emissions as a transboundary public bad. We show that for a class of environments in which pollution moves in a bi-directional, acyclic manner, there exists a sustainable coalition structure and associated levels of emissions. We do so via a new core concept, one more appropriate to modeling cooperative emissions agreements (and potential defection from them) than the standard definitions.

However, this leaves the question of implementing pollution abatement programs unanswered. While the existence of a cost-effective permit market equilibrium has long been understood, the implementation of such programs has been difficult. The design of Los Angeles' REgional CLean Air Incentives Market (RECLAIM) alleviated some of these implementation problems and in part exacerbated others. For example, it created two overlapping cycles of permits and two zones of permits for different geographic regions. While these design features create a market that allows some measure of regulatory control, they establish a very difficult trading environment, with the potential for inefficiency arising from transaction costs and from the illiquidity induced by the myriad assets and relatively few participants in this market.

It was with these concerns in mind that the ACE market (Automated Credit Exchange) was designed. The ACE market utilizes an iterated combined-value call market (CV Market). Before discussing the performance of the RECLAIM program in general and the ACE mechanism in particular, we test experimentally whether a portfolio trading mechanism can overcome market illiquidity. Chapter 3 experimentally demonstrates the ability of a portfolio trading mechanism to overcome portfolio rebalancing problems, thereby inducing sufficient liquidity for markets to fully equilibrate.

With experimental evidence in hand, we consider the CV Market's performance in the real world. We find that as the allocation of permits falls to the level of historical emissions, prices increase. As of April of this year, prices are roughly equal to the cost of the Best Available Control Technology (BACT). This took longer than expected, due both to tendencies to misreport emissions under the old regime and to abatement technology advances encouraged by the program. We also find that the ACE market provides liquidity where needed to encourage long-term planning on behalf of polluting facilities.

Relevance: 20.00%

Abstract:

Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, such that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently-available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.

This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
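
The scaling argument can be summarized compactly. The notation below is assumed (Γ_a for the normalized modal intensity at the active region), and the linewidth expression is the textbook Schawlow-Townes proportionality rather than the thesis' full quantum-mechanical treatment; the fixed-output-power assumption is also mine.

```latex
% Sketch of the scaling argument (notation assumed: \Gamma_a is the normalized
% modal intensity at the active region, N_2 the excited-carrier density).
\[
  g_{\mathrm{mod}} \propto \Gamma_a N_2, \qquad
  R_{\mathrm{sp}}  \propto \Gamma_a N_2, \qquad
  \alpha_{\mathrm{tot}} \approx \Gamma_a\, \alpha_{\text{III-V}}
  \quad\text{(absorption near the active region dominant)}.
\]
% The threshold condition g_mod = alpha_tot is then independent of \Gamma_a, so
% the threshold inversion is unchanged, while a Schawlow--Townes-type linewidth
\[
  \Delta\nu \;\propto\; \frac{R_{\mathrm{sp}}}{4\pi\, n_p},
  \qquad n_p \propto \alpha_{\mathrm{tot}}^{-1}\ \text{at fixed output power},
\]
% falls as modal energy is shifted into the low-loss silicon.
```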

Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.

Relevance: 20.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969, and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
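
As a toy illustration of the least-cost structure, the emission-level problem can be posed as a small linear program; the control measures, costs, and per-measure reductions below are hypothetical, not the thesis' data (only the base 1975 emission levels are taken from the text).

```python
# Toy least-cost abatement LP (illustrative numbers): choose the fraction x_i of
# each control measure to apply so that RHC and NOx emissions are brought below
# target levels at minimum annualized cost.
from scipy.optimize import linprog

# Three hypothetical control measures: [used cars, aircraft, stationary sources]
cost = [60.0, 20.0, 90.0]           # annualized cost in $M per fully applied measure
rhc_cut = [250.0, 40.0, 180.0]      # tons/day of RHC removed per measure
nox_cut = [150.0, 20.0, 200.0]      # tons/day of NOx removed per measure

base_rhc, base_nox = 670.0, 790.0        # base 1975 emissions (tons/day), from the text
target_rhc, target_nox = 450.0, 550.0    # hypothetical emission targets

# Constraint base - sum(cut_i * x_i) <= target  becomes  -sum(cut_i * x_i) <= target - base
A_ub = [[-r for r in rhc_cut], [-n for n in nox_cut]]
b_ub = [target_rhc - base_rhc, target_nox - base_nox]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
print("control fractions:", res.x.round(2))
print(f"minimum annualized cost: ${res.fun:.1f}M")
```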

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 20.00%

Abstract:

This thesis describes the development of low-noise heterodyne receivers at THz frequencies for submillimeter astronomy using Nb-based superconductor-insulator-superconductor (SIS) tunneling junctions. The mixers utilize a quasi-optical configuration which consists of a planar twin-slot antenna and two antisymmetrically-fed junctions on an antireflection-coated silicon hyperhemispherical lens. On-chip integrated tuning circuits, in the form of microstrip lines, are used to obtain maximum coupling efficiency in the designed frequency band. To reduce the rf losses in the integrated tuning circuits above the superconducting Nb gap frequency (~700 GHz), normal-metal Al is used in place of Nb for the tuning circuits.

To account for the rf losses in the microstrip lines, we calculated the surface impedance of the Al films using the nonlocal anomalous skin effect for finite-thickness films; for the Nb films, the surface impedance was calculated using the Mattis-Bardeen theory in the extreme anomalous limit. Our calculations show that the losses of the Al and Nb microstrip lines are about equal at 830 GHz. For Al-wiring and Nb-wiring mixers both optimized at 1050 GHz, the RF coupling efficiency of the Al-wiring mixer is higher than that of the Nb-wiring one by almost 50%. We have designed both Nb-wiring and Al-wiring mixers below and above the gap frequency.

A Fourier transform spectrometer (FTS) has been constructed especially for the study of the frequency response of SIS receivers. This FTS features a large aperture (10 inch) and high frequency resolution (114 MHz). The FTS spectra, obtained using the SIS receivers as direct detectors on the FTS, agree quite well with our theoretical simulations. We have also, for the first time, measured the FTS heterodyne response of an SIS mixer at sufficiently high resolution to resolve the LO and the sidebands. Heterodyne measurements of our SIS receivers with Nb-wiring or Al-wiring have yielded results which are among the best reported to date for broadband heterodyne receivers. The Nb-wiring mixers, covering the 400-850 GHz band with four separate fixed-tuned mixers, have uncorrected DSB receiver noise temperatures around 5hν/k_B up to 700 GHz, and better than 540 K at 808 GHz. An Al-wiring mixer designed for the 1050 GHz band has an uncorrected DSB receiver noise temperature of 840 K at 1042 GHz and a 2.5 K bath temperature. Mixer performance analysis shows that Nb junctions can work well up to twice the gap frequency and that the major cause of loss above the gap frequency is the rf loss in the microstrip tuning structures. Further advances in THz SIS mixers may be possible using circuits fabricated with higher-gap superconductors such as NbN. However, this will require high-quality films with low RF surface resistance at THz frequencies.

Relevance: 20.00%

Abstract:

This work reports investigations of weakly superconducting proximity effect bridges. These bridges, which exhibit the Josephson effects, are produced by bisecting a superconductor with a short (<1 µm) region of material whose superconducting transition temperature is below that of the adjacent superconductors. The bridges are fabricated from layered refractory metal thin films whose transition temperature depends upon the thickness ratio of the materials involved. The thickness ratio is changed in the area of the bridge to lower its transition temperature. This is done through novel photolithographic techniques described in Chapter 2.

If two such proximity effect bridges are connected in parallel, they form a quantum interferometer. The maximum zero-voltage current through this circuit is periodically modulated by the magnetic flux through the circuit. At a constant bias current, the modulation of the critical current produces a modulation in the dc voltage across the bridge. This change in dc voltage has been found to be the result of a change in the internal dissipation in the device. A simple model, using lumped circuit theory and treating the bridges as quantum oscillators of frequency ω = 2eV/ħ, where V is the time-average voltage across the device, has been found to adequately describe the observed voltage modulation.
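
For orientation, the flux modulation of the maximum zero-voltage current takes the textbook two-junction form for identical junctions; a minimal sketch with assumed parameters (this is not the thesis' lumped-circuit dissipation model):

```python
# Minimal sketch of the flux modulation of a two-junction interferometer's
# critical current (standard relation for identical junctions, parameters
# assumed): I_c(Phi) = 2*I_0*|cos(pi*Phi/Phi_0)|.
import numpy as np

I0 = 50e-6                      # critical current of each bridge (A), assumed
phi = np.linspace(-2, 2, 9)     # applied flux in units of the flux quantum
Ic = 2 * I0 * np.abs(np.cos(np.pi * phi))
for p, i in zip(phi, Ic):
    print(f"Phi/Phi_0 = {p:+.1f}  ->  I_c = {i * 1e6:6.1f} uA")
# At a constant bias current just above I_c, this periodic modulation of the
# critical current appears as a periodic modulation of the dc voltage.
```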

The quantum interferometers have been converted to a galvanometer through the inclusion of an integral thin film current path which couples magnetic flux through the interferometer. Thus a change in signal current produces a change in the voltage across the interferometer at a constant bias current. This work is described in Chapter 3 of the text.

The sensitivity of any device incorporating proximity effect bridges will ultimately be determined by the fluctuations in their electrical parameters. We have measured the spectral power density of the voltage fluctuations in proximity effect bridges using room-temperature electronics and a liquid-helium-temperature transformer to match the very low (~0.1 Ω) impedances characteristic of these devices.

We find the voltage noise to agree quite well with that predicted by Johnson noise in the normal conduction through the bridge plus a contribution from the superconducting pair current through the bridge, which is proportional to the ratio of this current to the time-average voltage across the bridge. The total voltage fluctuations are given by ⟨V^2(f)⟩ = 4kTR_d^2 I/V, where R_d is the dynamic resistance, I the total current, and V the voltage across the bridge. An additional noise source appears, with a strong 1/f^n dependence, 1.5 < n < 2, if the bridges are fabricated upon a glass substrate. This excess noise, attributed to thermodynamic temperature fluctuations in the volume of the bridge, increases dramatically on a glass substrate due to the greatly diminished thermal diffusivity of glass as compared to sapphire.
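
For a feel of the magnitudes involved, the quoted formula can be evaluated for representative parameter values (assumed, not the measured ones from the thesis):

```python
# Numerical feel for the quoted noise formula <V^2(f)> = 4*k*T*R_d^2*I/V
# (representative values assumed, not measured data from the thesis).
k_B = 1.380649e-23   # J/K
T   = 4.2            # liquid-helium bath temperature (K)
R_d = 0.1            # dynamic resistance (ohm), typical of these bridges per the text
I   = 100e-6         # total current (A), assumed
V   = 5e-6           # time-average voltage across the bridge (V), assumed

S_V = 4 * k_B * T * R_d**2 * I / V     # V^2/Hz
print(f"S_V = {S_V:.2e} V^2/Hz  ({S_V**0.5 * 1e12:.2f} pV/sqrt(Hz))")
```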

Relevance: 20.00%

Abstract:

Noise measurements from 140°K to 350°K ambient temperature and between 10 kHz and 22 MHz performed on a double injection silicon diode as a function of operating point indicate that the high frequency noise depends linearly on the ambient temperature T and on the differential conductance g measured at the same frequency. The noise is represented quantitatively by 〈i^2〉 = α•4kTgΔf. A new interpretation demands Nyquist noise with α ≡ 1 in these devices at high frequencies. This is in accord with an equivalent circuit derived for the double injection process. The effects of diode geometry on the static I-V characteristic as well as on the ac properties are illustrated. Investigation of the temperature dependence of double injection yields measurements of the temperature variation of the common high-level lifetime τ (τ ∝ T^2), the hole conductivity mobility µ_p (µ_p ∝ T^(-2.18)), and the electron conductivity mobility µ_n (µ_n ∝ T^(-1.75)).

Relevance: 20.00%

Abstract:

The LIGO gravitational wave detectors are on the brink of making the first direct detections of gravitational waves. Noise cancellation techniques are described that simplify the commissioning of these detectors and significantly improve their sensitivity to astrophysical sources. Future upgrades to the ground-based detectors will require further cancellation of Newtonian gravitational noise in order to make the transition from detectors striving to make the first direct detection of gravitational waves to observatories extracting physics from many, many detections. Techniques for this noise cancellation are described, as well as the work remaining in this realm.

Relevance: 20.00%

Abstract:

The feedback coding problem for Gaussian systems in which the noise is neither white nor statistically independent between channels is formulated in terms of arbitrary linear codes at the transmitter and at the receiver. This new formulation is used to determine a number of feedback communication systems. In particular, the optimum linear code that satisfies an average power constraint on the transmitted signals is derived for a system with noiseless feedback and forward noise of arbitrary covariance. The noisy feedback problem is considered and signal sets for the forward and feedback channels are obtained with an average power constraint on each. The general formulation and results are valid for non-Gaussian systems in which the second order statistics are known, the results being applicable to the determination of error bounds via the Chebychev inequality.

Relevance: 20.00%

Abstract:

The Fokker-Planck (FP) equation is used to develop a general method for finding the spectral density for a class of randomly excited first order systems. This class consists of systems satisfying stochastic differential equations of the form ẋ + f(x) = Σ_{j=1}^{m} h_j(x) n_j(t), where f and the h_j are piecewise linear functions (not necessarily continuous), and the n_j are stationary Gaussian white noises. For such systems, it is shown how the Laplace-transformed FP equation can be solved for the transformed transition probability density. By manipulation of the FP equation and its adjoint, a formula is derived for the transformed autocorrelation function in terms of the transformed transition density. From this, the spectral density is readily obtained. The method generalizes that of Caughey and Dienes, J. Appl. Phys., 32.11.

This method is applied to 4 subclasses: (1) m = 1, h_1 = const. (forcing function excitation); (2) m = 1, h_1 = f (parametric excitation); (3) m = 2, h_1 = const., h_2 = f, n_1 and n_2 correlated; (4) the same, uncorrelated. Many special cases, especially in subclass (1), are worked through to obtain explicit formulas for the spectral density, most of which have not been obtained before. Some results are graphed.
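
Such analytical spectra are easy to cross-check numerically; a minimal sketch (not part of the thesis) simulates subclass (1) with f(x) = x and h_1 = 1 by Euler-Maruyama and estimates the spectrum with Welch's method, which should recover the known Lorentzian for this linear case.

```python
# Numerical cross-check (not from the thesis): simulate subclass (1) with
# f(x) = x, h1 = 1 by Euler-Maruyama and estimate the spectrum with Welch's
# method; the result should approach the known one-sided Lorentzian
# P(f) = 2*h^2 / (1 + (2*pi*f)^2)  (normalization convention assumed here).
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(1)
dt, nsteps, h = 1e-2, 500_000, 1.0
x = np.empty(nsteps)
x[0] = 0.0
for k in range(nsteps - 1):
    # dx = -f(x) dt + h dW,  with f(x) = x and dW ~ N(0, dt)
    x[k + 1] = x[k] - x[k] * dt + h * np.sqrt(dt) * rng.standard_normal()

freq, P_est = welch(x, fs=1.0 / dt, nperseg=8192)
P_theory = 2 * h**2 / (1.0 + (2 * np.pi * freq)**2)
i = 5                                   # a low-frequency bin away from DC
print(f"f = {freq[i]:.3f} Hz: estimated {P_est[i]:.3f}, theory {P_theory[i]:.3f}")
```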

Dealing with parametrically excited first order systems leads to two complications. There is some controversy concerning the form of the FP equation involved (see Gray and Caughey, J. Math. Phys., 44.3); and the conditions which apply at irregular points, where the second order coefficient of the FP equation vanishes, are not obvious but require use of the mathematical theory of diffusion processes developed by Feller and others. These points are discussed in the first chapter, relevant results from various sources being summarized and applied. Also discussed is the steady-state density (the limit of the transition density as t → ∞).

Relevance: 20.00%

Abstract:

In this thesis, I develop velocity and structure models for the Los Angeles Basin and Southern Peru. The ultimate goal is to better understand the geological processes involved in basin and subduction zone dynamics. The results are obtained from seismic interferometry using ambient noise and from receiver functions using earthquake-generated waves. Some unusual signals specific to the local structures are also studied. The main findings are summarized as follows:

(1) Los Angeles Basin

The shear wave velocities range from 0.5 to 3.0 km/s in the sediments, with lateral gradients at the Newport-Inglewood, Compton-Los Alamitos, and Whittier Faults. The basin is a maximum of 8 km deep along the profile, and the Moho rises to a depth of 17 km under the basin. The basin has a stretch factor of 2.6 in the center decreasing to 1.3 at the edges, and is in approximate isostatic equilibrium. This "high-density" (~1 km spacing) "short-duration" (~1.5 month) experiment may serve as a prototype experiment that will allow basins to be covered by this type of low-cost survey.

(2) Peruvian subduction zone

Two prominent mid-crust structures are revealed in the 70 km thick crust under the Central Andes: a low-velocity zone interpreted as partially molten rocks beneath the Western Cordillera – Altiplano Plateau, and the underthrusting Brazilian Shield beneath the Eastern Cordillera. The low-velocity zone is oblique to the present trench, and possibly indicates the location of the volcanic arcs formed during the steepening of the Oligocene flat slab beneath the Altiplano Plateau.

The Nazca slab changes from normal dipping (~25 degrees) subduction in the southeast to flat subduction in the northwest of the study area. In the flat subduction regime, the slab subducts to ~100 km depth and then remains flat for ~300 km before it resumes a normal dipping geometry. The flat part closely follows the topography of the continental Moho above, indicating a strong suction force between the slab and the overriding plate. A high-velocity mantle wedge exists above the western half of the flat slab, which indicates a lack of melting and thus explains the cessation of volcanism above. The velocity returns to normal values before the slab steepens again, indicating a possible resumption of dehydration and eclogitization.

(3) Some unusual signals

Strong higher-mode Rayleigh waves due to the basin structure are observed at periods less than 5 s. The particle motions provide a good test for distinguishing between the fundamental and higher modes. Precursor and coda waves relative to the interstation Rayleigh waves are observed and are modeled with a strong scatterer located in the active volcanic area in Southern Peru. In contrast with the usual receiver function analysis, multiples are used extensively in this thesis: in the LA Basin, a good image is obtained only from PpPs multiples, while in Peru, PpPp multiples contribute significantly to the final results.

Relevance: 20.00%

Abstract:

Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers around certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models.

First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering.

Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high-gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
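
Of the two decomposition techniques mentioned, proper orthogonal decomposition reduces in practice to a singular value decomposition of the mean-subtracted snapshot matrix; a generic sketch on synthetic data (not the LES ensemble used in the thesis):

```python
# Generic POD-via-SVD sketch on synthetic snapshot data: each column of X is one
# snapshot with the temporal mean removed; the left singular vectors are the POD
# modes and the squared singular values rank their energy content.
import numpy as np

rng = np.random.default_rng(2)
npoints, nsnap = 400, 120
# Synthetic data: two coherent spatial structures plus broadband noise.
x = np.linspace(0, 2 * np.pi, npoints)
t = np.linspace(0, 10, nsnap)
X = (np.outer(np.sin(x), np.cos(2 * np.pi * 0.5 * t))
     + 0.4 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * 1.2 * t))
     + 0.1 * rng.standard_normal((npoints, nsnap)))

X -= X.mean(axis=1, keepdims=True)          # remove the temporal mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
energy = s**2 / np.sum(s**2)
print("energy fraction of first 4 POD modes:", np.round(energy[:4], 3))
```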

Relevance: 20.00%

Abstract:

A large portion of the noise in the light output of a laser oscillator is associated with the noise in the laser discharge. The effect of the discharge noise on the laser output has been studied. The discharge noise has been explained through an ac equivalent circuit of the laser discharge tube.

The discharge noise corresponds to time-varying spatial fluctuations in the electron density, the inverted population density and the dielectric permittivity of the laser medium from their equilibrium values. These fluctuations cause a shift in the resonant frequencies of the laser cavity. When the fluctuation in the dielectric permittivity of the laser medium is a longitudinally traveling wave (corresponding to the case in which moving striations exist in the positive column of the laser discharge), the laser output is frequency modulated.

The discharge noise has been analyzed by representing the laser discharge by an equivalent circuit. An appropriate ac equivalent circuit of a laser discharge tube has been obtained by considering the frequency spectrum of the current response of the discharge tube to an ac voltage modulation. It consists of a series ρLC circuit, which represents the discharge region, in parallel with a capacitance C', which comes mainly from the stray wiring. The equivalent inductance and capacitance of the discharge region have been calculated from the values of the resonant frequencies measured at various discharge currents, gas pressures, and lengths of the positive column. The experimental data provide a set of typical values and dependences on the discharge parameters for the equivalent inductance and capacitance of a discharge under laser operating conditions. It has been concluded from the experimental data that the equivalent inductance originates mainly from the positive column, while the equivalent capacitance is due to the discharge region other than the positive column.
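
The frequency response of such an equivalent circuit is easy to reproduce numerically; a minimal sketch with purely illustrative element values (not the measured ones):

```python
# Frequency response of the assumed ac equivalent circuit: a series rho-L-C
# branch (the discharge region) in parallel with a stray capacitance C'.
# Element values are purely illustrative, not the measured ones.
import numpy as np

rho, L, C, Cp = 200.0, 5e-3, 2e-9, 20e-12   # ohm, H, F, F (assumed)
f = np.logspace(3, 6, 2000)                  # 1 kHz .. 1 MHz
w = 2 * np.pi * f

Z_series = rho + 1j * w * L + 1.0 / (1j * w * C)    # discharge branch
Z_total = 1.0 / (1.0 / Z_series + 1j * w * Cp)      # in parallel with C'

f_res = f[np.argmin(np.abs(Z_total))]                # series-resonance dip
print(f"series resonance near {f_res / 1e3:.1f} kHz "
      f"(1/(2*pi*sqrt(L*C)) = {1 / (2 * np.pi * np.sqrt(L * C)) / 1e3:.1f} kHz)")
```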

The ac equivalent circuit of the laser discharge has been shown analytically and experimentally to be applicable to analyzing the internal discharge noise. Experimental measurements have been made on the frequency of moving striations in a laser discharge. Its experimental dependence on the discharge current agrees very well with the expected dependence obtained from an analysis of the circuit and the experimental data on the equivalent circuit elements. The agreement confirms the validity of representing a laser discharge tube by its ac equivalent circuit in analyzing the striation phenomenon and other low frequency noises. Data have also been obtained for the variation of the striation frequency with an externally-applied longitudinal magnetic field and the increase in frequency has been attributed to a decrease in the equivalent inductance of the laser discharge.

Relevance: 20.00%

Abstract:

The experimental portion of this thesis attempts to estimate the power spectral density of very low frequency semiconductor noise, from 10^-6.3 cps to 1 cps, with greater accuracy than that achieved in previous similar attempts: it is concluded that the spectrum is 1/f^α with α approximately 1.3 over most of the frequency range, but appearing to have a value of about 1 in the lowest decade. The noise sources are, among others, the first stage circuits of a grounded input silicon epitaxial operational amplifier. This thesis also investigates a peculiar form of stationarity which seems to distinguish flicker noise from other semiconductor noise.

In order to decrease by an order of magnitude the pernicious effects of temperature drifts, semiconductor "aging", and possible mechanical failures associated with prolonged periods of data taking, 10 independent noise sources were time-multiplexed and their spectral estimates were subsequently averaged. If the sources have similar spectra, it is demonstrated that this reduces the necessary data-taking time by a factor of 10 for a given accuracy.
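
The variance-reduction argument behind the multiplexing can be illustrated with synthetic 1/f^α noise (α = 1.3 assumed, matching the measured spectra); the generator and the plain periodogram below are generic sketches, not the Blackman-Tukey estimator used in the thesis.

```python
# Synthetic illustration of the averaging idea: averaging the spectral estimates
# of 10 independent 1/f^alpha sources shrinks the estimator scatter by ~sqrt(10),
# equivalent to a 10x longer record for a single source.
import numpy as np

rng = np.random.default_rng(3)
alpha, n, nsrc = 1.3, 4096, 10

def one_over_f_noise(alpha, n):
    """Shape white Gaussian noise in the frequency domain to a 1/f^alpha spectrum."""
    f = np.fft.rfftfreq(n)
    amp = np.zeros_like(f)
    amp[1:] = f[1:] ** (-alpha / 2.0)      # amplitude ~ f^(-alpha/2)
    spec = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
    return np.fft.irfft(spec, n=n)

periodograms = np.array([np.abs(np.fft.rfft(one_over_f_noise(alpha, n)))**2
                         for _ in range(nsrc)])

band = slice(50, 500)                            # mid-band bins, away from DC/Nyquist
expected = np.fft.rfftfreq(n)[band] ** (-alpha)  # known spectral shape, up to a constant
scatter_single = np.std(np.log10(periodograms[0, band] / expected))
scatter_mean = np.std(np.log10(periodograms.mean(axis=0)[band] / expected))
print(f"log10 scatter: single source {scatter_single:.2f}, 10-source average {scatter_mean:.2f}")
```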

In view of the measured high temperature sensitivity of the noise sources, it was necessary to combine the passive attenuation of a special-material container with active control. The noise sources were placed in a copper-epoxy container of high heat capacity and medium heat conductivity, and that container was immersed in a temperature controlled circulating ethylene-glycol bath.

Other spectra of interest, estimated from data taken concurrently with the semiconductor noise data were the spectra of the bath's controlled temperature, the semiconductor surface temperature, and the power supply voltage amplitude fluctuations. A brief description of the equipment constructed to obtain the aforementioned data is included.

The analytical portion of this work is concerned with the following questions: What is the best final spectral density estimate given 10 statistically independent ones of varying quality and magnitude? How can the Blackman and Tukey algorithm, which is used for spectral estimation in this work, be improved upon? How can non-equidistant sampling reduce data processing cost? Should one try to remove common trends shared by supposedly statistically independent noise sources and, if so, what are the mathematical difficulties involved? What is a physically plausible mathematical model that can account for flicker noise, and what are the mathematical implications for its statistical properties? Finally, the variance of the spectral estimate obtained through the Blackman-Tukey algorithm is analyzed in greater detail; the variance is shown to diverge for α ≥ 1 in an assumed power spectrum of k/|f|^α, unless the assumed spectrum is "truncated".