21 results for 900 MHz

in CaltechTHESIS


Abstract:

Among the branches of astronomy, radio astronomy is unique in that it spans the largest portion of the electromagnetic spectrum, roughly from 10 MHz to 300 GHz. On the other hand, due to scientific priorities as well as technological limitations, radio astronomy receivers have traditionally covered only about an octave of bandwidth. This approach of "one specialized receiver for one primary science goal" is, however, not only becoming too expensive for next-generation radio telescopes comprising thousands of small antennas, but is also inadequate to answer some of today's scientific questions, which require simultaneous coverage of very large bandwidths.

This thesis presents significant improvements on the state of the art of two key receiver components in pursuit of decade-bandwidth radio astronomy: 1) reflector feed antennas; 2) low-noise amplifiers on compound-semiconductor technologies. The first part of this thesis introduces the quadruple-ridged flared horn, a flexible, dual linear-polarization reflector feed antenna that achieves 5:1-7:1 frequency bandwidths while maintaining near-constant beamwidth. The horn is unique in that it is the only wideband feed antenna suitable for radio astronomy that: 1) can be designed to have nominal 10 dB beamwidth between 30 and 150 degrees; 2) requires one single-ended 50 Ohm low-noise amplifier per polarization. Design, analysis, and measurements of several quad-ridged horns are presented to demonstrate its feasibility and flexibility.

The second part of the thesis focuses on modeling and measurements of discrete high-electron-mobility transistors (HEMTs) and their applications in wideband, extremely low-noise amplifiers. The transistors and monolithic microwave integrated circuit low-noise amplifiers described herein have been fabricated on two state-of-the-art HEMT processes: 1) 35 nm indium phosphide; 2) 70 nm gallium arsenide. DC and microwave performance of transistors from both processes at room and cryogenic temperatures is included, as well as the first reported detailed noise characterization of these sub-micron HEMTs at both temperatures. Design and measurements of two low-noise amplifiers covering 1-20 and 8-50 GHz, fabricated on both processes, are also provided, which show that the 1-20 GHz amplifier improves the state of the art in cryogenic noise and bandwidth, while the 8-50 GHz amplifier achieves noise performance only slightly worse than the best published results but does so over nearly a decade of bandwidth.
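
As a rough, purely illustrative aid (not taken from the thesis), the snippet below computes the band-edge ratio and fractional bandwidth for the two amplifier bands quoted above, which is one way to see how they compare with an octave (2:1) or a decade (10:1) of coverage:

```python
# Illustrative only: band-edge ratio and fractional bandwidth of the two
# amplifier bands quoted in the abstract.
def band_metrics(f_lo_ghz, f_hi_ghz):
    ratio = f_hi_ghz / f_lo_ghz                              # 2:1 = octave, 10:1 = decade
    fractional = 2 * (f_hi_ghz - f_lo_ghz) / (f_hi_ghz + f_lo_ghz)
    return ratio, fractional

for lo, hi in [(1.0, 20.0), (8.0, 50.0)]:
    ratio, frac = band_metrics(lo, hi)
    print(f"{lo:g}-{hi:g} GHz: {ratio:.2f}:1 ratio, {frac:.0%} fractional bandwidth")
```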

Abstract:

In the preparation of small organic paramagnets, target structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3-cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane), using Berson TMMs and cyclobutanediyls as SCs, are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.

The poor coupling of 1,3-cyclopentane has enabled a study of the variable-temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. By fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From these data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.
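
For orientation only, the sketch below shows the kind of two-state Boltzmann population model typically used in such variable-temperature fits, assuming a quintet ground state (degeneracy 5) and an excited triplet (degeneracy 3) separated by a gap; both the simple two-state form and the energy value used are illustrative assumptions, not the thesis's actual fitting procedure:

```python
import numpy as np

# Minimal sketch of a two-state Boltzmann model: fraction of molecules in an
# excited triplet lying delta_kcal above a quintet ground state.
R_KCAL = 1.987e-3  # gas constant in kcal/(mol*K)

def triplet_fraction(T, delta_kcal):
    w = 3.0 * np.exp(-delta_kcal / (R_KCAL * T))   # Boltzmann weight of the triplet (degeneracy 3)
    return w / (5.0 + w)                           # quintet degeneracy 5

for T in (4, 10, 25, 50, 100):
    # 0.9 kcal/mol is used only as an example energy scale of the order quoted above
    print(T, "K:", round(triplet_fraction(T, 0.9), 3))
```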

The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
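
As an illustration of the kind of calculation such a package performs (a simplified, assumption-laden sketch, not the package described above), the following computes a first-order powder pattern for a triplet with axial zero-field splitting, using the standard angular dependence of the two allowed resonance fields:

```python
import numpy as np

# Simplified sketch (not the thesis package): first-order resonance fields of the
# two DeltaMs = +/-1 transitions of a triplet with axial zero-field splitting
# (E = 0), B_res = B0 -/+ (D'/2)*(3*cos(theta)^2 - 1), with D' in field units.
# A histogram over random orientations approximates the powder lineshape.
rng = np.random.default_rng(0)
B0 = 3400.0       # center field for g ~ 2 at X-band, in gauss (assumed)
D_FIELD = 200.0   # zero-field splitting expressed in gauss (assumed)

cos_theta = rng.uniform(0.0, 1.0, 200_000)            # uniform orientations (cos(theta) uniform)
angular = (3.0 * cos_theta**2 - 1.0) / 2.0
fields = np.concatenate([B0 - D_FIELD * angular, B0 + D_FIELD * angular])

counts, edges = np.histogram(fields, bins=200)
# 'counts' vs 'edges' approximates the absorption powder pattern, with weak outer
# shoulders near B0 +/- D_FIELD and intense perpendicular features near B0 -/+ D_FIELD/2.
```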

Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.

Source listings for all computer programs are given in the appendix.

Abstract:

The field of cavity-optomechanics explores the interaction of light with sound in an ever increasing array of devices. This interaction allows the mechanical system to be both sensed and controlled by the optical system, opening up a wide variety of experiments including the cooling of the mechanical resonator to its quantum mechanical ground state and the squeezing of the optical field upon interaction with the mechanical resonator, to name two.

In this work we explore two very different systems with different types of optomechanical coupling. The first system consists of two microdisk optical resonators stacked on top of each other and separated by a very small slot. The interaction of the disks causes their optical resonance frequencies to be extremely sensitive to the gap between the disks. By careful control of the gap between the disks, the optomechanical coupling can be made quadratic to first order, which is uncommon in optomechanical systems. With this quadratic coupling the light field is sensitive to the energy of the mechanical resonator and can directly control the potential energy trapping the mechanical motion. This ability to directly control the spring constant without modifying the energy of the mechanical system, unlike in linear optomechanical coupling, is explored.

Next, the bulk of this thesis deals with a high-mechanical-frequency optomechanical crystal which is used to coherently convert photons between different frequencies. This is accomplished via the engineered linear optomechanical coupling in these devices. Both classical and quantum systems utilize the interaction of light and matter across a wide range of energies. These systems are often not naturally compatible with one another and require a means of converting photons of dissimilar wavelengths to combine and exploit their different strengths. Here we theoretically propose and experimentally demonstrate coherent wavelength conversion of optical photons using photon-phonon translation in a cavity-optomechanical system. For an engineered silicon optomechanical crystal nanocavity supporting a 4 GHz localized phonon mode, optical signals in a 1.5 MHz bandwidth are coherently converted over an 11.2 THz frequency span between one cavity mode at a wavelength of 1460 nm and a second cavity mode at 1545 nm, with a 93% internal (2% external) peak efficiency. The thermal and quantum limiting noise involved in the conversion process are also analyzed and, in terms of an equivalent photon-number signal level, are found to correspond to internal noise levels of only 6 × 10^-3 and 4 × 10^-3 quanta, respectively.
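
As a quick, purely arithmetic check of the frequency span quoted above (using the rounded wavelengths given in the abstract):

```python
# Illustrative check: optical frequency span between the two cavity modes,
# from the (rounded) wavelengths quoted above.
c = 299_792_458.0                 # speed of light, m/s
f_1460 = c / 1460e-9              # ~205.3 THz
f_1545 = c / 1545e-9              # ~194.0 THz
print(f"{(f_1460 - f_1545) / 1e12:.1f} THz")   # ~11.3 THz, consistent with the ~11.2 THz quoted
```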

We begin by developing the requisite theoretical background to describe the system. A significant amount of time is then spent describing the fabrication of these silicon nanobeams, with an emphasis on understanding the specifics and motivation. The experimental demonstration of wavelength conversion is then described and analyzed. It is determined that the method of coupling photons into and out of the cavity is a fundamental limiting factor in the overall efficiency. Finally, a new coupling scheme is designed, fabricated, and tested that provides a means of coupling greater than 90% of photons into and out of the cavity, addressing one of the largest obstacles in the initial wavelength conversion experiment.

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for the study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin-lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
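
For readers unfamiliar with ray-parameter units, the following small conversion (a standard relation, not thesis code) shows how a dT/dΔ value in s/deg maps to an apparent horizontal velocity at the surface; the 10 s/deg input is a hypothetical example:

```python
import math

# Standard conversion: apparent velocity = (arc length of one degree) / (dT/dDelta)
R_EARTH_KM = 6371.0
KM_PER_DEG = 2.0 * math.pi * R_EARTH_KM / 360.0    # ~111.19 km per degree of arc

def apparent_velocity_km_s(dtddelta_s_per_deg):
    return KM_PER_DEG / dtddelta_s_per_deg

print(apparent_velocity_km_s(10.0))   # hypothetical 10 s/deg -> ~11.1 km/s
```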

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities from 200 to 350 km that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in the rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Abstract:

The technique of variable-angle, electron energy-loss spectroscopy has been used to study the electronic spectroscopy of the diketene molecule. The experiment was performed using incident electron beam energies of 25 eV and 50 eV, and at scattering angles between 10° and 90°. The energy-loss region from 2 eV to 11 eV was examined. One spin-forbidden transition has been observed at 4.36 eV and three others that are spin-allowed have been located at 5.89 eV, 6.88 eV and 7.84 eV. Based on the intensity variation of these transitions with impact energy and scattering angle, and through analogy with simpler molecules, the first three transitions are tentatively assigned to an n → π* transition, a π → σ*(3s) Rydberg transition and a π → π* transition.

Thermal decomposition of chlorodifluoromethane, chloroform, dichloromethane and chloromethane under flash-vacuum pyrolysis conditions (900-1100°C) was investigated by the technique of electron energy-loss spectroscopy, using an impact energy of 50 eV and a scattering angle of 10°. The pyrolytic reaction follows a hydrogen-chloride α-elimination pathway. The difluoromethylene radical was produced from chlorodifluoromethane pyrolysis at 900°C and identified by its X^1A_1 → A^1B_1 band at 5.04 eV.

Finally, a number of exploratory studies have been performed. The thermal decomposition of diketene was studied under flash-vacuum pressures (1-10 mTorr) and temperatures ranging from 500°C to 1000°C. The complete decomposition of the diketene molecule into two ketene molecules was achieved at 900°C. The pyrolysis of trifluoromethyl iodide at 1000°C produced an electron energy-loss spectrum with several sharp iodine-atom peaks and only a small shoulder at 8.37 eV as a possible trifluoromethyl radical feature. The electron energy-loss spectrum of trichlorobromomethane at 900°C mainly showed features from atomic bromine, molecular chlorine and tetrachloroethylene. Hexachloroacetone decomposed partially at 900°C, but showed well-defined features from chlorine, carbon monoxide and tetrachloroethylene molecules. Bromodichloromethane was investigated at 1000°C and produced a congested electron energy-loss spectrum with bromine-atom, hydrogen-bromide, hydrogen-chloride and tetrachloroethylene features.

Abstract:

This thesis describes the theoretical solution and experimental verification of phase conjugation via nondegenerate four-wave mixing in resonant media. The theoretical work models the resonant medium as a two-level atomic system with the lower state of the system being the ground state of the atom. Working initially with an ensemble of stationary atoms, the density matrix equations are solved by third-order perturbation theory in the presence of the four applied electro-magnetic fields which are assumed to be nearly resonant with the atomic transition. Two of the applied fields are assumed to be non-depleted counterpropagating pump waves while the third wave is an incident signal wave. The fourth wave is the phase conjugate wave which is generated by the interaction of the three previous waves with the nonlinear medium. The solution of the density matrix equations gives the local polarization of the atom. The polarization is used in Maxwell's equations as a source term to solve for the propagation and generation of the signal wave and phase conjugate wave through the nonlinear medium. Studying the dependence of the phase conjugate signal on the various parameters such as frequency, we show how an ultrahigh-Q isotropically sensitive optical filter can be constructed using the phase conjugation process.

In many cases the pump waves may saturate the resonant medium so we also present another solution to the density matrix equations which is correct to all orders in the amplitude of the pump waves since the third-order solution is correct only to first-order in each of the field amplitudes. In the saturated regime, we predict several new phenomena associated with degenerate four-wave mixing and also describe the ac Stark effect and how it modifies the frequency response of the filtering process. We also show how a narrow bandwidth optical filter with an efficiency greater than unity can be constructed.

In many atomic systems the atoms are moving at significant velocities such that the Doppler linewidth of the system is larger than the homogeneous linewidth. The latter linewidth dominates the response of the ensemble of stationary atoms. To better understand this case, the density matrix equations are solved to third order by perturbation theory for an atom of velocity v. The solution for the polarization is then integrated over the velocity distribution of the macroscopic system, which is assumed to be a Gaussian distribution of velocities since that is an excellent model of many real systems. Using the Doppler-broadened system, we explain how a tunable optical filter can be constructed whose bandwidth is limited by the homogeneous linewidth of the atom while the tuning range of the filter extends over the entire Doppler profile.
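
A minimal numerical sketch of the structure of that calculation (assumed parameter values, not the thesis derivation): a homogeneously broadened, Lorentzian single-atom response is averaged over a Gaussian distribution of Doppler shifts, yielding a Voigt-like profile whose width is set by the Doppler distribution:

```python
import numpy as np

# Average a Lorentzian single-atom response over a Gaussian distribution of
# Doppler shifts (all numbers are assumed, for illustration only).
gamma = 10e6           # homogeneous half-width, Hz
sigma_doppler = 500e6  # 1-sigma Doppler spread of the resonance, Hz

rng = np.random.default_rng(1)
shifts = rng.normal(0.0, sigma_doppler, 20_000)        # per-atom resonance shifts
detunings = np.linspace(-2e9, 2e9, 801)                # probe detuning from line center, Hz

def lorentzian(delta):
    return gamma**2 / (delta**2 + gamma**2)

response = np.array([lorentzian(d - shifts).mean() for d in detunings])
# 'response' vs 'detunings' is a Doppler-broadened (Voigt-like) profile whose
# wings are set by sigma_doppler rather than by the homogeneous width.
```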

Since it is a resonant system, sodium vapor is used as the nonlinear medium in our experiments. The relevant properties of sodium are discussed in great detail. In particular, the wavefunctions of the 3S and 3P states are analyzed and a discussion of how the 3S-3P transition models a two-level system is given.

Using sodium as the nonlinear medium we demonstrate an ultrahigh-Q optical filter using phase conjugation via nondegenerate four-wave mixing as the filtering process. The filter has a FWHM bandwidth of 41 MHz and a maximum efficiency of 4 × 10^-3. However, our theoretical work and other experimental work with sodium suggest that an efficient filter with both gain and a narrower bandwidth should be quite feasible.

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and where treatment of HIV/AIDS is headed.

Chapter Two describes the datasets that were used for the analyses. The primary database was one that I collected from an outpatient HIV clinic; it spans 1984 to the present. The second database was from the Multicenter AIDS Cohort Study (MACS) public dataset. The data from the MACS cover the time between 1984 and October 1992. Comparisons are made between both datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded a new era characterized by a new class of drugs and new technology changed the way that we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test antiretroviral regimen efficacy. Protease inhibitors, which attacked a different region of HIV than reverse transcriptase inhibitors, when used in combination with other antiretroviral agents were found to dramatically and significantly reduce the HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system as measured by CD4 T cell counts would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which looks at the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, where the state of HIV is going. This section first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs for HAART are estimated. It is estimated that the direct lifetime costs for treating each HIV-infected patient with HAART are between $353,000 and $598,000, depending on how long HAART prolongs life. If one looks at the incremental cost per year of life saved, it is only $101,000. This is comparable with the incremental cost per year of life saved from coronary artery bypass surgery.
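
To make the cost-effectiveness arithmetic concrete, here is a generic incremental cost-effectiveness calculation; the input numbers are hypothetical placeholders chosen only to reproduce a figure of the same magnitude as the $101,000 per life-year quoted above, not data from the thesis:

```python
# Generic incremental cost-effectiveness ratio (ICER); all inputs are hypothetical.
def icer(cost_with, cost_without, life_years_with, life_years_without):
    return (cost_with - cost_without) / (life_years_with - life_years_without)

print(icer(cost_with=450_000, cost_without=96_500,
           life_years_with=7.0, life_years_without=3.5))   # -> 101000.0 per life-year saved
```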

Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Abstract:

This thesis describes a series of experimental studies of lead chalcogenide thermoelectric semiconductors, mainly PbSe. Focusing on a well-studied semiconductor and reporting good but not extraordinary zT, this thesis distinguishes itself by answering the following questions that had not been answered: What represents the thermoelectric performance of PbSe? Where does the high zT come from? How (and by how much) can we make it better? For the first question, samples were made with the highest quality. Each transport property was carefully measured, cross-verified and compared with both historical and contemporary reports to overturn the commonly believed underestimation of zT. For n- and p-type PbSe, zT at 850 K can reach 1.1 and 1.0, respectively. For the second question, a systematic approach based on the quality factor B was used. In n-type PbSe, zT benefits from a high-quality conduction band that combines good degeneracy, low band mass and low deformation potential, whereas the zT of p-type PbSe is boosted when two mediocre valence bands converge (in band edge energy). In both cases the thermal conductivity of the PbSe lattice is inherently low. For the third question, the use of solid-solution lead chalcogenide alloys was first evaluated. Simple criteria were proposed to help quickly evaluate the potential of improving zT by introducing atomic disorder. For both PbTe1-xSex and PbSe1-xSx, the impacts on electron and phonon transport compensate each other; thus, zT in each case was roughly the average of the two binary compounds. In p-type Pb1-xSrxSe alloys an improvement of zT from 1.1 to 1.5 at 900 K was achieved, due to a band engineering effect that moves the two valence bands closer in energy. To date, making n-type PbSe better has not been accomplished, but possible strategies are discussed.
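
For reference, the dimensionless figure of merit referred to throughout is the standard zT = S²σT/κ; the sketch below evaluates it with assumed, order-of-magnitude property values (not measured thesis data) that happen to give a zT near the 1.1 quoted for n-type PbSe:

```python
# Standard thermoelectric figure of merit, zT = S^2 * sigma * T / kappa.
def figure_of_merit(seebeck_V_K, conductivity_S_m, kappa_W_mK, T_K):
    return seebeck_V_K**2 * conductivity_S_m * T_K / kappa_W_mK

# Assumed illustrative values (roughly the right order for a good thermoelectric near 850 K):
# S = 250 uV/K, sigma = 2.5e4 S/m, kappa = 1.2 W/(m*K)
print(round(figure_of_merit(250e-6, 2.5e4, 1.2, 850), 2))   # ~1.1
```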

Abstract:

The laminar-to-turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements. Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and are also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors (NTr) for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities. Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots, and convection rates at 90%, 76%, and 63% of the boundary layer edge velocity are observed for the leading edge, centroid, and trailing edge of the spots, respectively. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance. Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates and compared with both no-injection and chemically inert argon injection cases. While CO2 injection seems to delay transition, and argon injection seems to promote it, the experimental results are inconclusive, and matching computations do not predict a reduction in N factor for any CO2 injection condition computed.
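
For context on the N factors quoted above (a standard e^N interpretation, not thesis code): an N factor is the natural logarithm of the integrated amplification of the instability amplitude, so the range of transition N factors reported corresponds to very different amplification ratios:

```python
import math

# Amplitude amplification ratio implied by a transition N factor (e^N method).
for N in (5.4, 13.2):
    print(f"N = {N}: amplification ratio ~ {math.exp(N):.3g}")
```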

Abstract:

This thesis consists of two separate parts. Part I (Chapter 1) is concerned with seismotectonics of the Middle America subduction zone. In this chapter, stress distribution and Benioff zone geometry are investigated along almost 2000 km of this subduction zone, from the Rivera Fracture Zone in the north to Guatemala in the south. Particular emphasis is placed on the effects on stress distribution of two aseismic ridges, the Tehuantepec Ridge and the Orozco Fracture Zone, which subduct at seismic gaps. Stress distribution is determined by studying seismicity distribution, and by analysis of 190 focal mechanisms, both new and previously published, which are collected here. In addition, two recent large earthquakes that have occurred near the Tehuantepec Ridge and the Orozco Fracture Zone are discussed in more detail. A consistent stress release pattern is found along most of the Middle America subduction zone: thrust events at shallow depths, followed down-dip by an area of low seismic activity, followed by a zone of normal events at over 175 km from the trench and 60 km depth. The zone of low activity is interpreted as showing decoupling of the plates, and the zone of normal activity as showing the breakup of the descending plate. The portion of subducted lithosphere containing the Orozco Fracture Zone does not differ significantly, in Benioff zone geometry or in stress distribution, from adjoining segments. The Playa Azul earthquake of October 25, 1981, Ms = 7.3, occurred in this area. Body and surface wave analysis of this event shows a simple source with a shallow thrust mechanism and gives M0 = 1.3 × 10^27 dyne-cm. A stress drop of about 45 bars is calculated; this is slightly higher than that of other thrust events in this subduction zone. In the Tehuantepec Ridge area, only minor differences in stress distribution are seen relative to adjoining segments. For both ridges, the only major difference from adjoining areas is the infrequency or lack of occurrence of large interplate thrust events.
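
As a quick consistency check (illustrative, not part of the thesis text), the standard Hanks-Kanamori relation converts the seismic moment quoted above into a moment magnitude close to the reported Ms = 7.3:

```python
import math

# Hanks-Kanamori moment magnitude: Mw = (2/3) * log10(M0 [dyne-cm]) - 10.7
M0_dyne_cm = 1.3e27               # Playa Azul earthquake moment quoted above
Mw = (2.0 / 3.0) * math.log10(M0_dyne_cm) - 10.7
print(round(Mw, 2))               # ~7.38, close to the reported Ms = 7.3
```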

Part II involves upper mantle P wave structure studies, for the Canadian shield and eastern North America. In Chapter 2, the P wave structure of the Canadian shield is determined through forward waveform modeling of the phases Pnl, P, and PP. Effects of lateral heterogeneity are kept to a minimum by using earthquakes just outside the shield as sources, with propagation paths largely within the shield. Previous mantle structure studies have used recordings of P waves in the upper mantle triplication range of 15-30°; however, the lack of large earthquakes in the shield region makes compilation of a complete P wave dataset difficult. By using the phase PP, which undergoes triplications at 30-60°, much more information becomes available. The WKBJ technique is used to calculate synthetic seismograms for PP, and these records are modeled almost as well as the P. A new velocity model, designated S25, is proposed for the Canadian shield. This model contains a thick, high-Q, high-velocity lid to 165 km and a deep low-velocity zone. These features combine to produce seismograms that are markedly different from those generated by other shield structure models. The upper mantle discontinuities in S25 are placed at 405 and 660 km, with a simple linear gradient in velocity between them. Details of the shape of the discontinuities are not well constrained. Below 405 km, this model is not very different from many proposed P wave models for both shield and tectonic regions.

Chapter 3 looks in more detail at recordings of Pnl in eastern North America. First, seismograms from four eastern North American earthquakes are analyzed, and seismic moments for the events are calculated. These earthquakes are important in that they are among the largest to have occurred in eastern North America in the last thirty years, yet in some cases were not large enough to produce many good long-period teleseismic records. A simple layer-over-a-halfspace model is used for the initial modeling, and is found to provide an excellent fit for many features of the observed waveforms. The effects on Pnl of varying lid structure are then investigated. A thick lid with a positive gradient in velocity, such as that proposed for the Canadian shield in Chapter 2, will have a pronounced effect on the waveforms, beginning at distances of 800 or 900 km. Pnl records from the same eastern North American events are recalculated for several lid structure models, to survey what kinds of variations might be seen. For several records it is possible to see likely effects of lid structure in the data. However, the dataset is too sparse to make any general observations about variations in lid structure. This type of modeling is expected to be important in the future, as the analysis is extended to more recent eastern North American events, and as broadband instruments make more high-quality regional recordings available.

Abstract:

This thesis describes the development of low-noise heterodyne receivers at THz frequencies for submillimeter astronomy using Nb-based superconductor-insulator-superconductor (SIS) tunneling junctions. The mixers utilize a quasi-optical configuration which consists of a planar twin-slot antenna and two antisymmetrically fed junctions on an antireflection-coated silicon hyperhemispherical lens. On-chip integrated tuning circuits, in the form of microstrip lines, are used to obtain maximum coupling efficiency in the designed frequency band. To reduce the rf losses in the integrated tuning circuits above the superconducting Nb gap frequency (~700 GHz), normal-metal Al is used to replace Nb in the tuning circuits.

To account for the rf losses in the microstrip lines, we calculated the surface impedance of the Al films using the nonlocal anomalous skin effect for finite-thickness films. The surface impedance of the Nb films was calculated using the Mattis-Bardeen theory in the extreme anomalous limit. Our calculations show that the losses of the Al and Nb microstrip lines are about equal at 830 GHz. For Al-wiring and Nb-wiring mixers both optimized at 1050 GHz, the RF coupling efficiency of the Al-wiring mixer is higher than that of the Nb-wiring one by almost 50%. We have designed both Nb-wiring and Al-wiring mixers below and above the gap frequency.

A Fourier transform spectrometer (FTS) has been constructed especially for the study of the frequency response of SIS receivers. This FTS features a large aperture size (10 inch) and high frequency resolution (114 MHz). The FTS spectra, obtained using the SIS receivers as direct detectors on the FTS, agree quite well with our theoretical simulations. We have also, for the first time, measured the FTS heterodyne response of an SIS mixer at sufficiently high resolution to resolve the LO and the sidebands. Heterodyne measurements of our SIS receivers with Nb-wiring or Al-wiring have yielded results which are among the best reported to date for broadband heterodyne receivers. The Nb-wiring mixers, covering the 400-850 GHz band with four separate fixed-tuned mixers, have uncorrected DSB receiver noise temperatures of around 5hν/k_B up to 700 GHz, and better than 540 K at 808 GHz. An Al-wiring mixer designed for the 1050 GHz band has an uncorrected DSB receiver noise temperature of 840 K at 1042 GHz and a 2.5 K bath temperature. Mixer performance analysis shows that Nb junctions can work well up to twice the gap frequency and that the major cause of loss above the gap frequency is the rf loss in the microstrip tuning structures. Further advances in THz SIS mixers may be possible using circuits fabricated with higher-gap superconductors such as NbN. However, this will require high-quality films with low RF surface resistance at THz frequencies.
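
For scale (a quick illustrative calculation, not from the thesis), the photon-noise temperature unit hν/k_B used in the "5hν/k_B" figure above evaluates as follows at 700 GHz:

```python
# hv/kB at 700 GHz, and five times that value.
H_PLANCK = 6.62607015e-34     # J*s
K_BOLTZMANN = 1.380649e-23    # J/K

nu = 700e9                    # Hz
print(H_PLANCK * nu / K_BOLTZMANN)        # ~33.6 K per quantum
print(5 * H_PLANCK * nu / K_BOLTZMANN)    # ~168 K, i.e. "around 5 hv/kB" at 700 GHz
```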

Abstract:

Experimental demonstrations and theoretical analyses of a new electromechanical energy conversion process which is made feasible only by the unique properties of superconductors are presented in this dissertation. This energy conversion process is characterized by a highly efficient direct energy transformation from microwave energy into mechanical energy or vice versa, and can be achieved at high power levels. It is an application of a well established physical principle known as the adiabatic theorem (Boltzmann-Ehrenfest theorem), and in this case time-dependent superconducting boundaries provide the necessary interface between the microwave energy on one hand and the mechanical work on the other. The mechanism which brings about the conversion is another known phenomenon - the Doppler effect. The resonant frequency of a superconducting resonator undergoes continuous infinitesimal shifts when the resonator boundaries are adiabatically changed in time by an external mechanical mechanism. These small frequency shifts can accumulate coherently over an extended period of time to produce a macroscopic shift when the resonator remains resonantly excited throughout this process. In addition, the electromagnetic energy inside the resonator, which is proportional to the oscillation frequency, is also changed accordingly, so that a direct conversion between electromagnetic and mechanical energies takes place. The intrinsically high efficiency of this process is due to the electromechanical interactions involved in the conversion rather than a process of a thermodynamic nature, and it is therefore not limited by the usual thermodynamic limit.

A highly reentrant superconducting resonator resonating in the range of 90 to 160 MHz was used for demonstrating this new conversion technique. The resonant frequency was mechanically modulated at a rate of two kilohertz. Experimental results showed that the time evolution of the electromagnetic energy inside this frequency modulated (FM) superconducting resonator indeed behaved as predicted and thus demonstrated the unique features of this process. A proposed usage of FM superconducting resonators as electromechanical energy conversion devices is given along with some practical design considerations. This device seems to be very promising in producing high power (~10W/cm^3) microwave energy at 10 - 30 GHz.
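
A minimal sketch of the adiabatic-theorem bookkeeping described above (assumed modulation depth; not the thesis model): for a slowly modulated, resonantly excited resonator the ratio E/ω is an adiabatic invariant, so the stored electromagnetic energy simply tracks the instantaneous resonant frequency:

```python
import numpy as np

# E/omega is an adiabatic invariant, so E(t) = E0 * f(t)/f0 for slow modulation.
f0 = 120e6        # resonant frequency, Hz (within the 90-160 MHz range above)
E0 = 1.0          # initial stored energy, arbitrary units
mod_rate = 2e3    # mechanical modulation rate, Hz (the two-kilohertz rate above)
depth = 0.05      # fractional frequency modulation depth (assumed)

t = np.linspace(0.0, 2e-3, 2001)                       # a few modulation periods
f_t = f0 * (1.0 + depth * np.sin(2.0 * np.pi * mod_rate * t))
E_t = E0 * f_t / f0                                    # stored energy tracks frequency
# The 5% frequency swing moves 5% of the stored energy between the microwave
# field and the mechanical drive every modulation cycle.
```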

A weakly coupled FM resonator system is also studied analytically for its potential applications. This system shows an interesting switching characteristic with which the spatial distribution of microwave energy can be manipulated by external means. It was found that if the modulation was properly applied, a high degree (>95%) of unidirectional energy transfer from one resonator to the other could be accomplished. Applications of this characteristic to the fabrication of high-efficiency energy-switching devices and high-power microwave pulse generators are also found to be feasible with present superconducting technology.

Abstract:

The initial probabilities of activated, dissociative chemisorption of methane and ethane on Pt(110)-(1 x 2) have been measured. The surface temperature was varied from 450 to 900 K with the reactant gas temperature held constant at 300 K. Under these conditions, we probe the kinetics of dissociation via a trapping-mediated (as opposed to 'direct') mechanism. It was found that the probabilities of dissociation of both methane and ethane were strong functions of the surface temperature, with apparent activation energies of 14.4 kcal/mol for methane and 2.8 kcal/mol for ethane, which implies that the methane and ethane molecules have fully accommodated to the surface temperature. Kinetic isotope effects were observed for both reactions, indicating that C-H bond cleavage is involved in the rate-limiting step. A mechanistic model based on the trapping-mediated mechanism is used to explain the observed kinetic behavior. The activation energies for C-H bond dissociation of the thermally accommodated methane and ethane on the surface extracted from the model are 18.4 and 10.3 kcal/mol, respectively.
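
To make the Arrhenius analysis concrete, the sketch below generates a synthetic set of initial dissociation probabilities obeying the 14.4 kcal/mol apparent activation energy quoted for methane and recovers that value from a ln(P) versus 1/T fit; the prefactor and probability values are invented for illustration and are not thesis data:

```python
import numpy as np

# Synthetic Arrhenius fit: recover an apparent activation energy from P0(Ts).
R_KCAL = 1.987e-3                                    # kcal/(mol*K)
EA_APPARENT = 14.4                                   # kcal/mol, the methane value quoted above
Ts = np.array([450.0, 550.0, 650.0, 750.0, 900.0])   # surface temperatures, K
P0 = 1e-2 * np.exp(-EA_APPARENT / (R_KCAL * Ts))     # synthetic initial probabilities

slope, intercept = np.polyfit(1.0 / Ts, np.log(P0), 1)
print("apparent Ea =", round(-slope * R_KCAL, 1), "kcal/mol")   # -> 14.4
```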

The studies of the catalytic decomposition of formic acid on the Ru(001) surface with thermal desorption mass spectrometry following the adsorption of DCOOH and HCOOH on the surface at 130 and 310 K are described. Formic acid (DCOOH) chemisorbs dissociatively on the surface via both the cleavage of its O-H bond to form a formate and a hydrogen adatom, and the cleavage of its C-O bond to form a carbon monoxide, a deuterium adatom and a hydroxyl (OH). The former is the predominant reaction. The rate of desorption of carbon dioxide is a direct measure of the kinetics of decomposition of the surface formate. It is characterized by a kinetic isotope effect, an increasingly narrow FWHM, and an upward shift in peak temperature with θ_T, the coverage of the dissociatively adsorbed formic acid. The FWHM and the peak temperature change from 18 K and 326 K at θ_T = 0.04 to 8 K and 395 K at θ_T = 0.89. The increase in the apparent activation energy of the C-D bond cleavage is largely a result of self-poisoning by the formate, the presence of which on the surface alters the electronic properties of the surface such that the activation energy of the decomposition of formate is increased. The variation of the activation energy for carbon dioxide formation with θ_T accounts for the observed sharp carbon dioxide peak. The coverage of surface formate can be adjusted over a relatively wide range so that the activation energy for C-D bond cleavage in the case of DCOOH can be adjusted to be below, approximately equal to, or well above the activation energy for the recombinative desorption of the deuterium adatoms. Accordingly, the desorption of deuterium was observed to be governed completely by the desorption kinetics of the deuterium adatoms at low θ_T, jointly by the kinetics of deuterium desorption and C-D bond cleavage at intermediate θ_T, and solely by the kinetics of C-D bond cleavage at high θ_T. The overall branching ratio of the formate to carbon dioxide and carbon monoxide is approximately unity, regardless of the initial coverage θ_T, even though the activation energy for the production of carbon dioxide varies with θ_T. The desorption of water, which implies C-O bond cleavage of the formate, appears at approximately the same temperature as that of carbon dioxide. These observations suggest that the cleavage of the C-D bond and that of the C-O bond of two surface formates are coupled, possibly via the formation of a short-lived surface complex that is the precursor to the decomposition.

The measurement of steady-state rates is demonstrated here to be valuable in determining the kinetics associated with short-lived, molecularly adsorbed precursors to further reactions on the surface, by determining the kinetic parameters of the molecular precursor to the dissociation of formaldehyde on the Pt(110)-(1 x 2) surface.

Overlayers of nitrogen adatoms on Ru(001) have been characterized both by thermal desorption mass spectrometry and low-energy electron diffraction, as well as chemically via the postadsorption and desorption of ammonia and carbon monoxide.

The nitrogen-adatom overlayer was prepared by decomposing ammonia thermally on the surface at a pressure of 2.8 x 10^(-6) Torr and a temperature of 480 K. The saturated overlayer prepared under these conditions has associated with it a (√247/10 x √247/10)R22.7° LEED pattern, has two peaks in its thermal desorption spectrum, and has a fractional surface coverage of 0.40. Annealing the overlayer to approximately 535 K results in a rather sharp (√3 x √3)R30° LEED pattern with an associated fractional surface coverage of one-third. Annealing the overlayer further to 620 K results in the disappearance of the low-temperature thermal desorption peak and the appearance of a rather fuzzy p(2x2) LEED pattern with an associated fractional surface coverage of approximately one-fourth. In the low coverage limit, the presence of the (√3 x √3)R30° N overlayer alters the surface in such a way that the binding energy of ammonia is increased by 20% relative to the clean surface, whereas that of carbon monoxide is reduced by 15%.

A general methodology for the indirect, relative determination of absolute fractional surface coverages has been developed and was utilized to determine the saturation fractional coverage of hydrogen on Ru(001). Formaldehyde was employed as a bridge to lead us from the known reference point of the saturation fractional coverage of carbon monoxide to the unknown reference point of the fractional coverage of hydrogen on Ru(001), which is then used to determine accurately the saturation fractional coverage of hydrogen. We find that θ_sat,H = 1.02 (±0.05), i.e., the surface stoichiometry is Ru : H = 1 : 1. The relative nature of the method, which cancels systematic errors, together with the utilization of a glass envelope around the mass spectrometer, which reduces spurious contributions in the thermal desorption spectra, results in high accuracy in the determination of absolute fractional coverages.

Abstract:

The surface resistance and the critical magnetic field of lead electroplated on copper were studied at 205 MHz in a half-wave coaxial resonator. The observed surface resistance at a low field level below 4.2°K could be well described by the BCS surface resistance with the addition of a temperature independent residual resistance. The available experimental data suggest that the major fraction of the residual resistance in the present experiment was due to the presence of an oxide layer on the surface. At higher magnetic field levels the surface resistance was found to be enhanced due to surface imperfections.
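
The fitting form implied by that description is, in the standard low-temperature approximation, a BCS term plus a constant residual term; the sketch below uses placeholder amplitudes and an assumed gap of Δ/k_B ≈ 16 K for lead, so only the functional form (not the numerical values) is meaningful:

```python
import numpy as np

# Rs(T) ~ (A * f^2 / T) * exp(-Delta/(kB*T)) + R_res, valid well below Tc.
DELTA_OVER_KB = 16.0                       # K, assumed rough value for lead

def bcs_term(T, A_f2=1.0):                 # A*f^2 lumped into one placeholder constant
    return (A_f2 / T) * np.exp(-DELTA_OVER_KB / T)

R_res = 0.05 * bcs_term(4.2)               # residual set (arbitrarily) to 5% of the BCS term at 4.2 K
for T in (4.2, 3.0, 2.2):
    print(T, bcs_term(T) + R_res)          # the residual term takes over as the BCS term freezes out
```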

The attainable rf critical magnetic field between 2.2°K and T_c of lead was found to be limited not by the thermodynamic critical field but rather by the superheating field predicted by the one-dimensional Ginzburg-Landau theory. The observed rf critical field was very close to the expected superheating field, particularly in the higher reduced temperature range, but showed somewhat stronger temperature dependence than the expected superheating field in the lower reduced temperature range.

The rf critical magnetic field was also studied at 90 MHz for pure tin and indium, and for a series of SnIn and InBi alloys spanning both type I and type II superconductivity. The samples were spherical with typical diameters of 1-2 mm, and a helical resonator was used to generate the rf magnetic field in the measurement. The results for pure samples of tin and indium showed that a vortex-like nucleation of the normal phase was responsible for the superconducting-to-normal phase transition in the rf field at temperatures up to about 0.98-0.99 T_c, where the ideal superheating limit was being reached. The results for the alloy samples showed that the attainable rf critical fields near T_c were well described by the superheating field predicted by the one-dimensional GL theory in both the type I and type II regimes. The measurement was also made at 300 MHz, resulting in no significant change in the rf critical field. Thus it was inferred that the nucleation time of the normal phase, once the critical field was reached, was small compared with the rf period in this frequency range.

Abstract:

Semiconductor technology scaling has enabled drastic growth in the computational capacity of integrated circuits (ICs). This constant growth drives an increasing demand for high bandwidth communication between ICs. Electrical channel bandwidth has not been able to keep up with this demand, making I/O link design more challenging. Interconnects which employ optical channels have negligible frequency dependent loss and provide a potential solution to this I/O bandwidth problem. Apart from the type of channel, efficient high-speed communication also relies on generation and distribution of multi-phase, high-speed, and high-quality clock signals. In the multi-gigahertz frequency range, conventional clocking techniques have encountered several design challenges in terms of power consumption, skew and jitter. Injection-locking is a promising technique to address these design challenges for gigahertz clocking. However, its small locking range has been a major contributor in preventing its ubiquitous acceptance.

In the first part of this dissertation we describe a wideband injection locking scheme in an LC oscillator. Phase locked loop (PLL) and injection locking elements are combined symbiotically to achieve wide locking range while retaining the simplicity of the latter. This method does not require a phase frequency detector or a loop filter to achieve phase lock. A mathematical analysis of the system is presented and the expression for new locking range is derived. A locking range of 13.4 GHz–17.2 GHz (25%) and an average jitter tracking bandwidth of up to 400 MHz are measured in a high-Q LC oscillator. This architecture is used to generate quadrature phases from a single clock without any frequency division. It also provides high frequency jitter filtering while retaining the low frequency correlated jitter essential for forwarded clock receivers.

To improve the locking range of an injection-locked ring oscillator, a quadrature locked loop (QLL) is introduced. The inherent dynamics of an injection-locked quadrature ring oscillator are used to improve its locking range from 5% (7-7.4 GHz) to 90% (4-11 GHz). The QLL is used to generate accurate clock phases for a four-channel optical receiver using a forwarded clock at quarter rate. The QLL drives an injection-locked oscillator (ILO) at each channel, without any repeaters, for local quadrature clock generation. Each local ILO has deskew capability for phase alignment. The optical receiver uses the inherent frequency-to-voltage conversion provided by the QLL to dynamically body-bias its devices. The wide locking range of the QLL helps to achieve a reliable data rate of 16-32 Gb/s, and adaptive body biasing aids in maintaining an ultra-low power consumption of 153 pJ/bit.
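
As a quick arithmetic check of the locking ranges quoted in this paragraph and the previous one (illustrative only):

```python
# Fractional locking range = (f_high - f_low) / f_center.
def fractional_range(f_lo_ghz, f_hi_ghz):
    return (f_hi_ghz - f_lo_ghz) / ((f_hi_ghz + f_lo_ghz) / 2.0)

print(f"{fractional_range(13.4, 17.2):.0%}")   # ~25%: the LC-oscillator range above
print(f"{fractional_range(7.0, 7.4):.0%}")     # ~6%, roughly the "5%" quoted for the bare ring oscillator
print(f"{fractional_range(4.0, 11.0):.0%}")    # ~93%, roughly the "90%" quoted with the QLL
```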

From the optical receiver we move on to discussing a non-linear equalization technique for a vertical-cavity surface-emitting laser (VCSEL) based optical transmitter, to enable low-power, high-speed optical transmission. A non-linear time-domain optical model of the VCSEL is built and evaluated for accuracy. The modeling shows that, while conventional FIR-based pre-emphasis works well for LTI electrical channels, it is not optimum for the non-linear optical frequency response of the VCSEL. Based on simulations of the model, an optimum equalization methodology is derived. The equalization technique is used to achieve a data rate of 20 Gb/s with a power efficiency of 0.77 pJ/bit.