10 results for EHF (30-300 GHz)

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Among the branches of astronomy, radio astronomy is unique in that it spans the largest portion of the electromagnetic spectrum, e.g., from about 10 MHz to 300 GHz. On the other hand, due to scientific priorities as well as technological limitations, radio astronomy receivers have traditionally covered only about an octave bandwidth. This approach of "one specialized receiver for one primary science goal" is, however, not only becoming too expensive for next-generation radio telescopes comprising thousands of small antennas, but also is inadequate to answer some of the scientific questions of today which require simultaneous coverage of very large bandwidths.

This thesis presents significant improvements on the state of the art of two key receiver components in pursuit of decade-bandwidth radio astronomy: 1) reflector feed antennas; 2) low-noise amplifiers on compound-semiconductor technologies. The first part of this thesis introduces the quadruple-ridged flared horn, a flexible, dual linear-polarization reflector feed antenna that achieves 5:1-7:1 frequency bandwidths while maintaining near-constant beamwidth. The horn is unique in that it is the only wideband feed antenna suitable for radio astronomy that: 1) can be designed to have a nominal 10 dB beamwidth between 30 and 150 degrees; 2) requires one single-ended 50 Ohm low-noise amplifier per polarization. Design, analysis, and measurements of several quad-ridged horns are presented to demonstrate the feasibility and flexibility of the design.

The second part of the thesis focuses on modeling and measurements of discrete high-electron-mobility transistors (HEMTs) and their applications in wideband, extremely low-noise amplifiers. The transistors and monolithic microwave integrated circuit low-noise amplifiers described herein have been fabricated on two state-of-the-art HEMT processes: 1) 35 nm indium phosphide; 2) 70 nm gallium arsenide. DC and microwave performance of transistors from both processes at room and cryogenic temperatures is presented, as well as the first reported detailed noise characterization of these sub-micron HEMTs at both temperatures. Design and measurements of two low-noise amplifiers, covering 1-20 GHz and 8-50 GHz and fabricated on both processes, are also provided; these show that the 1-20 GHz amplifier improves the state of the art in cryogenic noise and bandwidth, while the 8-50 GHz amplifier achieves noise performance only slightly worse than the best published results but does so with nearly a decade bandwidth.
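
The bandwidth figures quoted in this abstract are easiest to compare on a logarithmic scale. As an illustration (not part of the thesis), the following sketch converts a traditional octave receiver and the two amplifier ranges mentioned above into octaves and decades:

    # Sketch (illustrative only): compare quoted bandwidths in octaves and decades.
    import math

    def octaves(f_lo_ghz, f_hi_ghz):
        return math.log2(f_hi_ghz / f_lo_ghz)

    def decades(f_lo_ghz, f_hi_ghz):
        return math.log10(f_hi_ghz / f_lo_ghz)

    for lo, hi in [(1.0, 2.0), (1.0, 20.0), (8.0, 50.0)]:
        print(f"{lo}-{hi} GHz: {octaves(lo, hi):.2f} octaves, "
              f"{decades(lo, hi):.2f} decades")
    # 1-2 GHz  : 1.00 octaves, 0.30 decades (a traditional octave receiver)
    # 1-20 GHz : 4.32 octaves, 1.30 decades (more than a decade)
    # 8-50 GHz : 2.64 octaves, 0.80 decades (nearly a decade)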

Relevance: 30.00%

Abstract:

Much of the chemistry that affects life on planet Earth occurs in the condensed phase. The terahertz (THz), or far-infrared (far-IR), region of the electromagnetic spectrum (from 0.1 THz to 10 THz, 3 cm^-1 to 300 cm^-1, or 3000 μm to 30 μm) has been shown to provide unique possibilities in the study of condensed-phase processes. The goal of this work is to expand the possibilities available in the THz region and undertake new investigations of fundamental interest to chemistry. Since we are fundamentally interested in condensed-phase processes, this thesis focuses on two areas where THz spectroscopy can provide new understanding: astrochemistry and solvation science. To advance these fields, we had to develop new instrumentation that would enable the experiments necessary to answer new questions in either astrochemistry or solvation science. We first developed a new experimental setup capable of studying astrochemical ice analogs in both the THz, or far-IR, region (0.3-7.5 THz; 10-250 cm^-1) and the mid-IR (400-4000 cm^-1). The importance of astrochemical ices lies in their key role in the formation of complex organic molecules, such as amino acids and sugars, in space. Thus, the instruments are capable of performing a variety of spectroscopic studies that can provide especially relevant laboratory data to support astronomical observations from telescopes such as the Herschel Space Telescope, the Stratospheric Observatory for Infrared Astronomy (SOFIA), and the Atacama Large Millimeter Array (ALMA). The experimental apparatus uses a THz time-domain spectrometer, with a 1750/875 nm plasma source and a GaP detector crystal, to cover the bandwidth mentioned above with ~10 GHz (~0.3 cm^-1) resolution.
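
The abstract moves freely between frequency (THz), wavenumber (cm^-1), and wavelength (μm); as a convenience (an illustration, not part of the thesis), the conversions behind those numbers are:

    # Sketch (illustrative only): conversions between THz, cm^-1 and micrometres.
    C_CM_PER_S = 2.99792458e10   # speed of light in cm/s

    def thz_to_wavenumber(f_thz):
        return f_thz * 1e12 / C_CM_PER_S          # cm^-1

    def thz_to_wavelength_um(f_thz):
        return C_CM_PER_S * 1e4 / (f_thz * 1e12)  # micrometres

    for f in (0.1, 1.0, 7.5, 10.0):
        print(f"{f:5.1f} THz = {thz_to_wavenumber(f):6.1f} cm^-1 "
              f"= {thz_to_wavelength_um(f):7.1f} um")
    # 0.1 THz ~ 3.3 cm^-1 ~ 3000 um and 10 THz ~ 334 cm^-1 ~ 30 um, matching the
    # ranges quoted above; 1 THz ~ 33 cm^-1 (the water-ice feature discussed below).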

Using the above instrumentation, experimental spectra of astrochemical ice analogs of water and carbon dioxide in pure, mixed, and layered ices were collected at different temperatures under high vacuum conditions with the goal of investigating the structure of the ice. We tentatively observe a new feature in both amorphous solid water and crystalline water at 33 cm^-1 (1 THz). In addition, our studies of mixed and layered ices show how it is possible to identify the location of carbon dioxide as it segregates within the ice by observing its effect on the THz spectrum of water ice. The THz spectra of mixed and layered ices are further analyzed by fitting their spectral features to those of pure amorphous solid water and crystalline water ice to quantify the effects of temperature changes on structure. From the results of this work, THz spectroscopy appears well suited to studying thermal transformations within the ice.
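
The fitting step described above amounts to a linear least-squares decomposition of each mixed or layered spectrum into the two pure-component spectra. A minimal sketch, assuming the pure-component spectra are available on the same frequency grid (the synthetic arrays below are hypothetical stand-ins for measured data):

    # Sketch (hypothetical data): decompose a mixed-ice THz spectrum into
    # amorphous solid water (ASW) and crystalline ice components.
    import numpy as np

    def fit_ice_fractions(mixed, pure_asw, pure_crystalline):
        """Least-squares weights of ASW and crystalline ice that best
        reproduce a mixed- or layered-ice spectrum."""
        basis = np.column_stack([pure_asw, pure_crystalline])
        weights, *_ = np.linalg.lstsq(basis, mixed, rcond=None)
        return weights

    freq = np.linspace(10, 250, 500)                  # cm^-1
    pure_asw = np.exp(-((freq - 160.0) / 60.0) ** 2)  # broad amorphous band
    pure_cry = np.exp(-((freq - 220.0) / 15.0) ** 2)  # sharper crystalline band
    mixed = 0.7 * pure_asw + 0.3 * pure_cry
    print(fit_ice_fractions(mixed, pure_asw, pure_cry))   # -> about [0.7, 0.3]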

To advance the study of liquids with THz spectroscopy, we developed a new ultrafast nonlinear THz spectroscopic technique: heterodyne-detected, ultrafast THz Kerr effect (TKE) spectroscopy. We implemented a heterodyne-detection scheme into a TKE spectrometer that uses a stilbazolium-based THz emitter, 4-N,N-dimethylamino-4'-N'-methyl-stilbazolium 2,4,6-trimethylbenzenesulfonate (DSTMS), and high-numerical-aperture optics, which together generate THz electric fields in excess of 300 kV/cm in the sample. This allows us to report the first measurement of quantum beats at terahertz (THz) frequencies that result from vibrational coherences initiated by the nonlinear, dipolar interaction of a broadband, high-energy, (sub)picosecond THz pulse with the sample. Our instrument improves on both the frequency coverage and the sensitivity previously reported; it also ensures a backgroundless measurement of the THz Kerr effect in pure liquids. For liquid diiodomethane, we observe a quantum beat at 3.66 THz (122 cm^-1), in exact agreement with the fundamental transition frequency of the ν4 vibration of the molecule. This result provides new insight into dipolar vs. Raman selection rules at terahertz frequencies.
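
A quantum beat at 3.66 THz appears as an oscillation in the time-domain Kerr signal, and its frequency can be read off from a Fourier transform of that trace. A minimal sketch, with a synthetic damped cosine standing in for measured TKE data (the time step, scan length, and damping time are assumptions):

    # Sketch (synthetic signal in place of measured TKE data): locate the
    # quantum-beat frequency via the Fourier transform of the time trace.
    import numpy as np

    dt = 10e-15                          # 10 fs time step (assumed)
    t = np.arange(0.0, 20e-12, dt)       # 20 ps scan (assumed)
    beat = np.cos(2 * np.pi * 3.66e12 * t) * np.exp(-t / 2e-12)  # damped beat

    freqs = np.fft.rfftfreq(t.size, dt)          # Hz
    spectrum = np.abs(np.fft.rfft(beat))
    f_peak = freqs[np.argmax(spectrum)]
    print(f_peak / 1e12, "THz")                  # close to 3.66 THz
    print(f_peak / 2.99792458e10, "cm^-1")       # close to 122 cm^-1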

To conclude, we discuss future directions for nonlinear THz spectroscopy in the Blake lab. We report the first results from an experiment using a plasma-based THz source for nonlinear spectroscopy, which has the potential to enable nonlinear THz spectra with sub-100 fs temporal resolution, and we discuss how the optics involved in the plasma mechanism can enable THz pulse shaping. Finally, we discuss how a single-shot THz detection scheme could improve the acquisition of THz data and how such a scheme could be implemented in the Blake lab. The instruments developed herein will hopefully remain a part of the group's core competencies and serve as building blocks for the next generation of THz instrumentation that pushes the frontiers of both chemistry and the scientific enterprise as a whole.

Relevance: 30.00%

Abstract:

The objective of this investigation has been a theoretical and experimental understanding of ferromagnetic resonance phenomena in ferromagnetic thin films, and a consequent understanding of several important physical properties of these films. Significant results have been obtained by ferromagnetic resonance, hysteresis, torque magnetometer, He ion backscattering, and X-ray fluorescence measurements for nickel-iron alloy films.

Taking into account all relevant magnetic fields, including the applied, demagnetizing, effective anisotropy, and exchange fields, the spin wave resonance condition applicable to the thin film geometry is presented. On the basis of the simple exchange interaction model, it is concluded that the normal resonance modes of an ideal film are expected to be unpinned. The possibility of nonideality near the surface of a real film was considered by means of surface-anisotropy-field, inhomogeneous-demagnetizing-field, and inhomogeneous-magnetization models. Numerical results obtained for reasonable parameters show that, in all cases, these nonidealities negligibly perturb the resonance fields and the higher-order mode shapes from those of the unpinned modes of ideal films for thicknesses greater than 1000 Å. On the other hand, for films thinner than 1000 Å the resonance-field deviations can be significant even though the modes are very nearly unpinned. A previously unnoticed but important feature of all three models is that the interpretation of the first resonance mode as the uniform mode of an ideal film allows an accurate measurement of the average effective demagnetizing field over the film volume. Furthermore, it is demonstrated that it is possible to choose parameters which give indistinguishable predictions for all three models, making it difficult to uniquely ascertain the source of spin pinning in real films from resonance measurements alone.
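
For orientation, the spin wave resonance condition for unpinned modes in the perpendicular (field normal to the film plane) geometry is commonly written as the standard form below; this is quoted for reference, while the condition presented in the thesis also accounts for the anisotropy field and both field orientations described above:

    \frac{\omega}{\gamma} = H_0 - 4\pi M_{\mathrm{eff}} + \frac{2A}{M_s}\left(\frac{n\pi}{d}\right)^{2}, \qquad n = 0, 1, 2, \ldots

where H_0 is the applied field, 4πM_eff the effective demagnetizing field, A the exchange constant, M_s the saturation magnetization, d the film thickness, and n the spin wave mode number.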

Spin wave resonance measurements of 81% Ni-19% Fe coevaporated films 30 to 9000 Å thick have been performed at frequencies from 1 to 8 GHz, at room temperature, with the static magnetic field both parallel and perpendicular to the film plane. A self-consistent analysis of the results for films thicker than 1000 Å, in which multiple excitations can be observed, shows for the first time that a unique value of the exchange constant A can be obtained only by the use of unpinned mode assignments. This evidence and the resonance behavior of films thinner than 1000 Å strongly imply that the magnetization at the surfaces of permalloy films is very weakly pinned. However, resonance measurements alone cannot determine whether this pinning is due to a surface anisotropy, an inhomogeneous demagnetizing field, or an inhomogeneous magnetization. The above analysis yields 4πM = 10,100 Oe and A = (1.03 ± 0.05) × 10^-6 erg/cm for this alloy. The ability to obtain a unique value of A suggests that spin wave resonance can be used to accurately characterize the exchange interaction in a ferromagnet.
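
Given a resonance condition of the form quoted above, A follows from the slope of the measured resonance fields H_n plotted against n^2. A minimal sketch with hypothetical numbers (not data from the thesis; thickness and fields are invented for illustration):

    # Sketch (hypothetical resonance fields): extract the exchange constant A
    # from the spacing of perpendicular-resonance spin wave modes.
    import numpy as np

    d_cm = 3000e-8                  # film thickness: 3000 Angstrom (hypothetical)
    four_pi_Ms = 10100.0            # Oe, effective demagnetizing field from above
    Ms = four_pi_Ms / (4 * np.pi)   # saturation magnetization, emu/cm^3

    n = np.array([0, 1, 2, 3, 4])                 # unpinned mode numbers
    H_n = np.array([13000.0, 12972.0, 12888.0,    # hypothetical resonance
                    12748.0, 12552.0])            # fields, Oe

    slope, _ = np.polyfit(n ** 2, H_n, 1)         # H_n is linear in n^2
    A = -slope * Ms * d_cm ** 2 / (2 * np.pi ** 2)
    print(A)   # ~1.0e-6 erg/cm for these illustrative numbers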

In an effort to resolve the ambiguity of the source of pinning of the magnetization, a correlation of the ratio of magnetic moment to X-ray film thickness with the value of the effective demagnetizing field 4πNM determined from resonance has been performed for films 45 to 300 Å thick. The remarkable agreement of the two quantities, and a comparison with the predictions of five distinct models, strongly imply that the thickness dependence of both quantities is related to a thickness-dependent average saturation magnetization, which is far below 10,100 Oe for very thin films. However, a series of complementary experiments shows that this large decrease of the average saturation magnetization cannot be simply explained by either oxidation or interdiffusion processes. It can only be satisfactorily explained by an intrinsic decrease of the average saturation magnetization for very thin films, an effect which cannot be justified by any simple physical considerations.

Recognizing that this decrease of average saturation magnetization could be due to an oxidation process, a correlation of resonance, He ion backscattering, X-ray fluorescence, and torque magnetometer measurements has been performed for films 40 to 3500 Å thick. On the basis of these measurements it is unambiguously established that the oxide layer on the surface of purposefully oxidized 81% Ni-19% Fe evaporated films is predominantly Fe oxide, and that in the oxidation process Fe atoms are removed from the bulk of the film to depths of thousands of angstroms. Extrapolation of results for pure Fe films indicates that the oxide is most likely α-Fe2O3. These conclusions are in agreement with results from earlier metallurgical studies of high-temperature oxidation of bulk Fe and Ni-Fe alloys. However, X-ray fluorescence results for films oxidized at room temperature show that, although the preferential oxidation of Fe also takes place in these films, the extent of this process is far too small to explain the large variation of their average saturation magnetization with film thickness.

Relevance: 20.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
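
For reference, the source-finiteness (directivity) factor of Ben-Menahem for a unilateral rupture of length L propagating at velocity v, observed at azimuth θ measured from the rupture direction, is commonly written in the standard form below (quoted for orientation, not as the thesis's exact notation):

    F(\omega) = \frac{\sin X}{X}\, e^{-iX}, \qquad X = \frac{\omega L}{2}\left(\frac{1}{v} - \frac{\cos\theta}{c}\right)

where c is the phase velocity of the Rayleigh wave at angular frequency ω; the inversion described above searches for the rupture extent and direction that minimize the misfit to the observed spectra.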

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
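
Because the moment tensor inversion is linear in the tensor elements, it reduces to a complex least-squares problem. A minimal sketch, in which the kernel matrix G and data vector d are random placeholders rather than the thesis's excitation kernels and observed spectra:

    # Sketch (placeholder kernels and data): linear least-squares moment
    # tensor inversion of complex source spectra.
    import numpy as np

    def invert_moment_tensor(G, d):
        """Solve G m = d in the least-squares sense for the independent
        moment tensor elements."""
        m, *_ = np.linalg.lstsq(G, d, rcond=None)
        return m

    rng = np.random.default_rng(0)
    G = rng.standard_normal((200, 5)) + 1j * rng.standard_normal((200, 5))
    m_true = np.array([1.0, -0.5, 0.2, 0.8, -0.3])
    d = G @ m_true
    print(invert_moment_tensor(G, d).real)   # recovers m_true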

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by Student's t test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.
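
The quoted intervals are asymmetric, so the thesis's construction evidently differs from the simplest symmetric form; the sketch below only illustrates the basic Student's t computation, with hypothetical depth estimates standing in for the actual data:

    # Sketch (hypothetical depth estimates in km): a symmetric 90% Student's t
    # confidence interval for a centroid depth.
    import numpy as np
    from scipy import stats

    depths = np.array([10.0, 13.0, 9.0, 15.0, 12.0, 11.0])   # hypothetical
    n = depths.size
    mean = depths.mean()
    sem = depths.std(ddof=1) / np.sqrt(n)
    t90 = stats.t.ppf(0.95, df=n - 1)            # two-sided 90% interval
    print(mean - t90 * sem, mean + t90 * sem)    # roughly (9.9, 13.4) km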

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki and 1977 Sumbawa, Indonesia, earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 20.00%

Abstract:

The purpose of this thesis is to present new observations of thermal-infrared radiation from asteroids. Stellar photometry was performed to provide standards for comparison with the asteroid data. The details of the photometry and the data reduction are discussed in Part 1. A system of standard stars is derived for wavelengths of 8.5, 10.5 and 11.6 µm and a new calibration is adopted. Sources of error are evaluated and comparisons are made with the data of other observers.

The observations and analysis of the thermal-emission observations of asteroids are presented in Part 2. Thermal-emission lightcurve and phase effect data are considered. Special color diagrams are introduced to display the observational data. These diagrams are free of any model-dependent assumptions and show that asteroids differ in their surface properties.

On the basis of photometric models, (4) Vesta is thought to have a bolometric Bond albedo of about 0.1, an emissivity greater than 0.7, and a true radius close to the model value of 300^(+50)_(-30) km. Model albedos and model radii are given for asteroids 1, 2, 4, 5, 6, 7, 15, 19, 20, 27, 39, 44, 68, 80, 324, and 674. The asteroid (324) Bamberga is extremely dark, with a model (~bolometric Bond) albedo in the 0.01-0.02 range, which is thought to be the lowest albedo yet measured for any solar-system body. The crucial question about such low-albedo asteroids concerns their number and the distribution of their orbits.
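
The model albedos and radii rest on the standard radiometric argument: for a body of radius R, the reflected visible flux and the thermally re-emitted infrared flux scale, schematically, as

    F_{\mathrm{vis}} \propto p\,R^{2}, \qquad F_{\mathrm{IR}} \propto (1 - A)\,R^{2}

where p is the geometric albedo and A the bolometric Bond albedo, so measuring both fluxes for the same object fixes the albedo and the radius simultaneously. (This is a schematic statement of the method; the emissivity and the surface temperature distribution enter the thesis's detailed photometric models.)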

Relevance: 20.00%

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their implications for core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided, and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 20.00%

Abstract:

An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
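
In symbols, the model described above can be written as follows (one natural formalization of the prose, not a quotation from the thesis):

    \min_{E}\; C(E) \quad \text{subject to} \quad Q_j(E) \le S_j,\; j = 1, \dots, m, \qquad E \ge 0

where E is the vector of primary contaminant emission levels, C(E) the least cost of attaining them, Q_j(E) the resulting air quality measure (here, the expected number of days per year that standard j is exceeded), and S_j the required air quality level.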

The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.

The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on devices", are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969, and (670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).

"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).

The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).

Relevance: 20.00%

Abstract:

Experimental demonstrations and theoretical analyses of a new electromechanical energy conversion process, made feasible only by the unique properties of superconductors, are presented in this dissertation. This energy conversion process is characterized by a highly efficient direct transformation of microwave energy into mechanical energy, or vice versa, and can be achieved at high power levels. It is an application of a well-established physical principle known as the adiabatic theorem (Boltzmann-Ehrenfest theorem); in this case, time-dependent superconducting boundaries provide the necessary interface between the microwave energy on one hand and the mechanical work on the other. The mechanism which brings about the conversion is another well-known phenomenon: the Doppler effect. The resonant frequency of a superconducting resonator undergoes continuous infinitesimal shifts when the resonator boundaries are adiabatically changed in time by an external mechanical mechanism. These small frequency shifts can accumulate coherently over an extended period of time to produce a macroscopic shift when the resonator remains resonantly excited throughout this process. In addition, the electromagnetic energy inside the resonator, which is proportional to the oscillation frequency, is also changed accordingly, so that a direct conversion between electromagnetic and mechanical energy takes place. The intrinsically high efficiency of this process is due to the electromechanical nature of the interactions involved in the conversion rather than to a process of a thermodynamic nature, and it is therefore not subject to thermodynamic efficiency limits.
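
The statement that the stored energy is proportional to the oscillation frequency is the adiabatic invariant of the resonator: for a slow (adiabatic) change of the boundary,

    \frac{E}{\omega} = \text{const} \quad\Longrightarrow\quad \Delta E = \frac{E}{\omega}\,\Delta\omega

so the electromagnetic energy follows the frequency shift, and the difference is exchanged with the mechanical drive that moves the boundary.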

A highly reentrant superconducting resonator resonating in the range of 90 to 160 MHz was used to demonstrate this new conversion technique. The resonant frequency was mechanically modulated at a rate of two kilohertz. Experimental results showed that the time evolution of the electromagnetic energy inside this frequency-modulated (FM) superconducting resonator indeed behaved as predicted and thus demonstrated the unique features of this process. A proposed use of FM superconducting resonators as electromechanical energy conversion devices is given along with some practical design considerations. This device seems very promising for producing high-power (~10 W/cm^3) microwave energy at 10-30 GHz.

A weakly coupled FM resonator system is also studied analytically for its potential applications. This system shows an interesting switching characteristic with which the spatial distribution of microwave energy can be manipulated by external means. It was found that if the modulation was properly applied, a high degree (>95%) of unidirectional energy transfer from one resonator to the other could be accomplished. Applications of this characteristic to high-efficiency energy-switching devices and high-power microwave pulse generators also appear feasible with present superconducting technology.

Relevance: 20.00%

Abstract:

I. Foehn winds of southern California.
An investigation of the hot, dry, and dust-laden winds occurring in the late fall and early winter in the Los Angeles Basin, attributed in the past to the influence of the desert regions to the north, revealed that these currents are of a foehn nature. Their properties were found to be entirely due to the dynamical heating produced in the descent from the high-level areas in the interior to the lower Los Angeles Basin. Any dust associated with the phenomenon was found to be acquired from the Los Angeles area rather than transported from the desert. It was found that a mild foehn of this type occurs frequently enough during this season to warrant its classification as a winter monsoon. This results from the topography of the Los Angeles region, which allows air from the interior easy entrance by way of the low-level mountain passes north of the area. This monsoon provides the mild winter climate of southern California, since temperatures associated with the foehn currents are far higher than those experienced when maritime air from the adjacent Pacific Ocean occupies the region.
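
The dynamical heating invoked above is dry-adiabatic compression on descent; roughly (generic numbers, not the thesis's),

    T_{\mathrm{basin}} \approx T_{\mathrm{plateau}} + \Gamma_d\,\Delta z, \qquad \Gamma_d \approx 9.8\ ^{\circ}\mathrm{C\,km^{-1}}

so air descending about 1 km from the interior high country into the Los Angeles Basin arrives roughly 10 °C warmer than it started, with no desert heat source required.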

II. Foehn wind cyclogenesis.
Intense anticyclones frequently build up over the high-level regions of the Great Basin and Columbia Plateau, which lie between the Sierra Nevada and Cascade Mountains to the west and the Rocky Mountains to the east. The outflow from these anticyclones produces extensive foehns east of the Rockies in the comparatively low-level areas of the middle west and the Canadian provinces of Alberta and Saskatchewan. Normally at this season of the year very cold polar continental air masses are present over this territory, and with the occurrence of these foehns marked discontinuity surfaces arise between the warm foehn current, which is obliged to slide over a colder mass, and the Pc air to the east. Cyclones are easily produced by this phenomenon and take the form of unstable waves which propagate along the discontinuity surface between the two dissimilar masses. A continual series of such cyclones was found to occur as long as the Great Basin anticyclone was maintained with undiminished intensity.

III. Weather conditions associated with the Akron disaster.
This situation illustrates the speedy development and propagation of young disturbances in the eastern United States during the spring of the year under the influence of the conditionally unstable tropical maritime air masses which characterize the region. It also furnishes an excellent example of the superiority of air-mass and frontal methods of weather prediction for aircraft operation over the older methods based upon pressure distribution.

IV. The Los Angeles storm of December 30, 1933 to January 1, 1934.
This discussion points out some of the fundamental interactions occurring between air masses of the North Pacific Ocean in connection with Pacific Coast storms and the value of topographic and aerological considerations in predicting them. Estimates of rainfall intensity and duration from analyses of this type may be made and would prove very valuable in the Los Angeles area in connection with flood control problems.

Relevance: 20.00%

Abstract:

In four chapters, various aspects of the earthquake source are studied.

Chapter I

Surface displacements that followed the Parkfield, 1966, earthquakes were measured for two years with six small-scale geodetic networks straddling the fault trace. The logarithmic rate and the periodic nature of the creep displacement recorded on a strain meter made it possible to predict creep episodes on the San Andreas fault. Some individual earthquakes were related directly to surface displacement, while in general, slow creep and aftershock activity were found to occur independently. The Parkfield earthquake is interpreted as a buried dislocation.

Chapter II

The source parameters of earthquakes between magnitude 1 and 6 were studied using field observations, fault plane solutions, and surface wave and S-wave spectral analysis. The seismic moment, M_0, was found to be related to the local magnitude, M_L, by log M_0 = 1.7 M_L + 15.1. The source length versus magnitude relation for the San Andreas system was found to be M_L = 1.9 log L - 6.7. The surface wave envelope parameter AR gives the moment according to log M_0 = log AR300 + 30.1, and the stress drop, τ, was found to be related to the magnitude by τ = 0.54 M - 2.58. The relation between surface wave magnitude M_S and M_L is proposed to be M_S = 1.7 M_L - 4.1. It is proposed to estimate the relative stress level (and possibly the strength) of a source region by the amplitude ratio of high-frequency to low-frequency waves. An apparent stress map for Southern California is presented.
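
The empirical relations above can be collected into a few helper functions; this is a direct transcription of the formulas quoted in the abstract (constants and units as given there), not an independent calibration:

    # Sketch: the empirical scaling relations quoted above, as functions.
    import math

    def log_moment_from_ml(ml):          # log M_0 = 1.7 M_L + 15.1
        return 1.7 * ml + 15.1

    def ml_from_source_length(L_km):     # M_L = 1.9 log L - 6.7
        return 1.9 * math.log10(L_km) - 6.7

    def log_moment_from_ar300(ar300):    # log M_0 = log AR300 + 30.1
        return math.log10(ar300) + 30.1

    def stress_drop(m):                  # tau = 0.54 M - 2.58 (as quoted)
        return 0.54 * m - 2.58

    def ms_from_ml(ml):                  # M_S = 1.7 M_L - 4.1
        return 1.7 * ml - 4.1

    # Example: a magnitude M_L = 5 event
    print(10 ** log_moment_from_ml(5.0))   # moment implied by the first relation
    print(ms_from_ml(5.0))                 # M_S ~ 4.4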

Chapter III

Seismic triggering and seismic shaking are proposed as two closely related mechanisms of strain release which explain observations of the character of the P wave generated by the Alaskan earthquake of 1964 and of the distant fault slippage observed after the Borrego Mountain, California, earthquake of 1968. The Alaska, 1964, earthquake is shown to be adequately described as a series of individual rupture events. The first of these events had a body wave magnitude of 6.6 and is considered to have initiated or triggered the whole sequence. The propagation velocity of the disturbance is estimated to be 3.5 km/sec. On the basis of circumstantial evidence it is proposed that the Borrego Mountain, 1968, earthquake caused the release of tectonic strain along three active faults at distances of 45 to 75 km from the epicenter. It is suggested that this mechanism of strain release is best described as "seismic shaking."

Chapter IV

The changes of apparent stress with depth are studied in the South American deep seismic zone. For shallow earthquakes the apparent stress is 20 bars on the average, the same as for earthquakes in the Aleutians and on Oceanic Ridges. At depths between 50 and 150 km the apparent stresses are relatively high, approximately 380 bars, and around 600 km depth they are again near 20 bars. The seismic efficiency is estimated to be 0.1. This suggests that the true stress is obtained by multiplying the apparent stress by ten. The variation of apparent stress with depth is explained in terms of the hypothesis of ocean floor consumption.
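
The factor of ten follows directly from the definition of apparent stress in terms of the seismic efficiency η:

    \sigma_a = \eta\,\bar{\sigma} \quad\Longrightarrow\quad \bar{\sigma} = \frac{\sigma_a}{\eta} = 10\,\sigma_a \quad \text{for } \eta = 0.1

where σ̄ is the average (true) stress acting during faulting.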