20 results for FITS

in CaltechTHESIS


Relevance: 10.00%

Abstract:

Data were taken in 1979-80 by the CCFRR high energy neutrino experiment at Fermilab. A total of 150,000 neutrino and 23,000 antineutrino charged current events in the approximate energy range 25 < E_ν < 250 GeV are measured and analyzed. The structure functions F_2 and xF_3 are extracted for three assumptions about R = σ_L/σ_T: R = 0, R = 0.1, and R given by a QCD-based expression. Systematic errors are estimated and their significance is discussed. Comparisons of the x and Q^2 behaviour of the structure functions with results from other experiments are made.

We find that statistical errors currently dominate our knowledge of the valence quark distribution, which is studied in this thesis. xF_3 from different experiments has, within errors and apart from level differences, the same dependence on x and Q^2, except for the HPWF results. The CDHS F_2 shows a clear fall-off at low x relative to the CCFRR and EMC results, again apart from level differences, which are calculable from the cross-sections.

The result for the GLS sum rule is found to be 2.83 ± 0.15 ± 0.09 ± 0.10, where the first error is statistical, the second is an overall level error, and the third covers the rest of the systematic errors. QCD studies of xF_3 to leading and second order have been done. The QCD evolution of xF_3, which is independent of R and the strange sea, does not depend on the gluon distribution, and fits yield

Λ_(LO) = 88^(+163)_(-78) ^(+113)_(-70) MeV

The systematic errors are smaller than the statistical errors. Second order fits give somewhat different values of Λ, although α_s (at Q^2_0 = 12.6 GeV^2) is not so different.

A fit using the better-determined F_2 in place of xF_3 for x > 0.4, i.e., assuming q = 0 in that region, gives

Λ_(LO) = 266^(+114)_(-104) ^(+85)_(-79) MeV

Again, the statistical errors are larger than the systematic errors. An attempt to measure R was made and the measurements are described. Utilizing the inequality q(x) ≥ 0, we find that in the region x > 0.4, R is less than 0.55 at the 90% confidence level.
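For context, the Gross-Llewellyn Smith (GLS) sum rule that this measurement tests can be written, to leading order in the strong coupling, as

∫_0^1 xF_3(x, Q^2) dx/x = 3 [1 − α_s(Q^2)/π + O(α_s^2)],

so a value somewhat below the naive quark-parton prediction of 3, such as the 2.83 found here, is the expected sign of the QCD correction.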

Relevance: 10.00%

Abstract:

The brain is perhaps the most complex system to have ever been subjected to rigorous scientific investigation. The scale is staggering: over 10^11 neurons, each making an average of 10^3 synapses, with computation occurring on scales ranging from a single dendritic spine, to an entire cortical area. Slowly, we are beginning to acquire experimental tools that can gather the massive amounts of data needed to characterize this system. However, to understand and interpret these data will also require substantial strides in inferential and statistical techniques. This dissertation attempts to meet this need, extending and applying the modern tools of latent variable modeling to problems in neural data analysis.

It is divided into two parts. The first begins with an exposition of the general techniques of latent variable modeling. A new, extremely general optimization algorithm is proposed, called Relaxation Expectation Maximization (REM), that may be used to learn the optimal parameter values of arbitrary latent variable models. This algorithm appears to alleviate the common problem of convergence to local, sub-optimal likelihood maxima. REM leads to a natural framework for model size selection; in combination with standard model selection techniques, the quality of fits may be further improved, while the appropriate model size is automatically and efficiently determined. Next, a new latent variable model, the mixture of sparse hidden Markov models, is introduced, and approximate inference and learning algorithms are derived for it. This model is applied in the second part of the thesis.
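REM itself is specific to this thesis, but the expectation-maximization iteration it builds on is standard. As a minimal illustrative sketch (the baseline EM loop only, not the thesis's REM algorithm; all names are illustrative), EM for a two-component one-dimensional Gaussian mixture looks like:

import numpy as np

def em_gaussian_mixture(x, n_iter=50):
    # Standard EM for a two-component 1-D Gaussian mixture; REM wraps a
    # loop of this kind in a relaxation scheme to escape poor local maxima.
    mu = np.array([x.min(), x.max()])      # crude initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])              # mixture weights
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = pi / np.sqrt(2 * np.pi * var) * np.exp(
            -0.5 * (x[:, None] - mu) ** 2 / var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and variances
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return pi, mu, var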

The second part brings the technology of part I to bear on two important problems in experimental neuroscience. The first is known as spike sorting; this is the problem of separating the spikes from different neurons embedded within an extracellular recording. The dissertation offers the first thorough statistical analysis of this problem, which then yields the first powerful probabilistic solution. The second problem addressed is that of characterizing the distribution of spike trains recorded from the same neuron under identical experimental conditions. A latent variable model is proposed. Inference and learning in this model leads to new principled algorithms for smoothing and clustering of spike data.

Relevance: 10.00%

Abstract:

Inelastic neutron scattering (INS) and nuclear-resonant inelastic x-ray scattering (NRIXS) were used to measure phonon spectra of FeV as a B2-ordered compound and as a bcc solid solution. Contrary to the behavior of ordering alloys studied to date, the phonons in the B2-ordered phase are softer than in the solid solution. Ordering increases the vibrational entropy, which stabilizes the ordered phase to higher temperatures. Ab initio calculations show that the number of electronic states at the Fermi level increases upon ordering, enhancing the screening between ions and reducing the interatomic force constants. The effect of screening is larger at the V atomic sites than at the Fe atomic sites.
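The link between the measured phonon spectra and the vibrational entropy is the standard harmonic expression (quoted here for context, with g(ε) the normalized phonon DOS and n(ε) the Bose-Einstein occupancy at temperature T):

S_vib = 3k_B ∫ g(ε) [(n(ε) + 1) ln(n(ε) + 1) − n(ε) ln n(ε)] dε

Softer phonons shift g(ε) to lower energies, raising the occupancies n(ε) and hence S_vib, which is how ordering-induced softening stabilizes the B2 phase at high temperature.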

The phonon spectra of Au-rich alloys of fcc Au-Fe were also measured. The main effect on the vibrational entropy of alloying comes from a stiffening of the Au partial phonon density of states (DOS) with Fe concentration that increases the miscibility gap temperature. The magnitude of the effect is non-linear and it is reduced at higher Fe concentrations. Force constants were calculated for several compositions and show a local stiffening of Au–Au bonds close to Fe atoms, but Au–Au bonds that are farther away do not show this effect. Phonon DOS curves calculated from the force constants reproduced the experimental trends. The Au–Fe bond is soft and favors ordering, but a charge transfer from the Fe to the Au atoms stiffens the Au–Au bonds enough to favor unmixing. The stiffening is attributed to two main effects comparable in magnitude: an increase in electron density in the free-electron-like states, and stronger sd-hybridization.

INS and NRIXS measurements were performed at elevated temperatures on B2-ordered FeTi, and NRIXS measurements were performed at high pressures. The high-pressure behavior is quasi-harmonic. The softening of the phonon DOS curves with temperature is strongly nonharmonic. Calculations of the force constants and Born-von Karman fits to the experimental data show that the bonds between second nearest neighbors (2nn) are much stiffer than those between first nearest neighbors (1nn), but fits to the high temperature data show that the former soften at a faster rate with temperature. The Fe–Fe bond softens more than the Ti–Ti bond. The unusual stiffness of the 2nn bond is explained by the calculated charge distribution, which is highly aspherical and localized preferentially in the t2g orbitals. Ab initio molecular dynamics (AIMD) simulations show a charge transfer from the t2g orbitals to the eg orbitals at elevated temperatures. The asphericity decreases linearly with temperature and is more severe at the Fe sites.

Relevance: 10.00%

Abstract:

Metallic glasses have typically been treated as a “one size fits all” type of material. Every alloy is considered to have high strength, high hardness, a large elastic limit, corrosion resistance, etc. However, as with traditional crystalline materials, properties depend strongly on the constituent elements, how the alloy was processed, and the conditions under which it will be used. An important distinction can be made between metallic glasses and their composites. Charpy impact toughness measurements are performed to determine the effect that processing and microstructure have on bulk metallic glass matrix composites (BMGMCs). Samples are suction cast, machined from commercial plates, and semi-solidly forged (SSF). The SSF specimens are found to have the highest impact toughness, due to the coarsening of the dendrites that occurs during the semi-solid processing stages. Ductile-to-brittle transition (DTBT) temperatures are measured for a BMGMC. While at room temperature the BMGMC is highly toughened compared to a fully glassy alloy, it undergoes a DTBT by 250 K; at this point, its impact toughness mirrors that of the constituent glassy matrix. In the following chapter, BMGMCs are shown to be capable of being capacitively welded to form single, monolithic structures. Shear measurements are performed across welded samples and, at sufficient weld energies, the welds are found to retain the strength of the parent alloy. Cross-sections are inspected via SEM, and no visible crystallization of the matrix occurs.

Next, metallic glasses and BMGMCs are formed into sheets, and eggbox structures are tested in hypervelocity impacts. Metallic glasses are ideal candidates for protection against micrometeorite orbital debris due to their high hardness and relatively low density. A flat, single-layer BMG is compared to a BMGMC eggbox; the latter creates a more diffuse projectile cloud after penetration. A three-tiered eggbox structure is also tested by firing a 3.17 mm aluminum sphere at it at 2.7 km/s. The projectile penetrates the first two layers but is successfully contained by the third.

A large series of metallic glass alloys is created and their wear loss is measured in a pin-on-disk test. Wear is found to vary dramatically among different metallic glasses, with some considerably outperforming the current state-of-the-art crystalline material (most notably Cu₄₃Zr₄₃Al₇Be₇) and others suffering extensive wear loss. Commercially available Vitreloy 1 lost nearly three times as much mass in wear as the same alloy prepared in a laboratory setting. No conclusive correlations can be found between wear loss and any set of mechanical properties (hardness, density, elastic, bulk, or shear modulus, Poisson’s ratio, frictional force, and run-in time). Heat treatments are performed on Vitreloy 1 and Cu₄₃Zr₄₃Al₇Be₇. Anneals near the glass transition temperature are found to increase hardness slightly but to decrease wear loss significantly. Crystallization of both alloys leads to dramatic increases in wear resistance. Finally, wear tests under vacuum are performed on the two alloys above: Vitreloy 1 experiences a dramatic decrease in wear loss, while Cu₄₃Zr₄₃Al₇Be₇ shows a moderate increase. Meanwhile, gears are fabricated through three techniques: electrical discharge machining of 1 cm by 3 mm cylinders, semi-solid forging, and copper mold suction casting. Initial testing finds the pin-on-disk test to be an accurate predictor of wear performance in gears.

The final chapter explores an exciting technique in the field of additive manufacturing. Laser engineered net shaping (LENS) is a method whereby small amounts of metallic powder are melted by a laser so that shapes and designs can be built layer by layer into a final part. The technique is extended to mixing different powders during melting, so that compositional gradients can be created across a manufactured part. Two compositional gradients are fabricated and characterized. Ti-6Al-4V to pure vanadium was chosen for its combination of high strength and light weight on one end, and high melting point on the other. It was inspected by cross-sectional x-ray diffraction, and only the anticipated phases were present. 304L stainless steel to Invar 36 was created both as a pillar and as a radial gradient; it combines strength and weldability with a material having a near-zero coefficient of thermal expansion. Only the austenite phase is found to be present via x-ray diffraction. The coefficient of thermal expansion is measured for four compositions and is found to be tunable depending on composition.

Relevance: 10.00%

Abstract:

The applicability of the white-noise method to the identification of a nonlinear system is investigated. Subsequently, the method is applied to certain vertebrate retinal neuronal systems and nonlinear, dynamic transfer functions are derived which describe quantitatively the information transformations starting with the light-pattern stimulus and culminating in the ganglion response which constitutes the visually-derived input to the brain. The retina of the catfish, Ictalurus punctatus, is used for the experiments.

The Wiener formulation of the white-noise theory is shown to be impractical and difficult to apply to a physical system. A different formulation based on cross-correlation techniques is shown to be applicable to a wide range of physical systems provided certain considerations are taken into account. These considerations include the time-invariance of the system, an optimum choice of the white-noise input bandwidth, nonlinearities that allow a representation in terms of a small number of characterizing kernels, the memory of the system, and the temporal length of the characterizing experiment. Error analysis of the kernel estimates is made, taking into account various sources of error such as noise at the input and output, the bandwidth of the white-noise input, and the truncation of the Gaussian by the apparatus.
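As an illustrative sketch of this cross-correlation formulation (the Lee-Schetzen approach; the function name and discretization are assumptions, not the thesis's code), the first-order kernel of a system driven by band-limited white noise can be estimated as:

import numpy as np

def first_order_kernel(x, y, max_lag, dt):
    # Cross-correlation estimate: h1(tau) = E[y(t) x(t - tau)] / P,
    # where x is the white-noise input, y the measured response, and
    # P the power level of the band-limited white-noise input.
    y0 = y - y.mean()              # remove the zeroth-order (mean) term
    P = np.var(x) * dt             # approximate spectral level of the input
    n = len(x)
    h1 = np.empty(max_lag)
    for lag in range(max_lag):
        h1[lag] = np.mean(y0[lag:] * x[:n - lag]) / P
    return h1                      # kernel sampled at lags 0 .. max_lag-1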

Nonlinear transfer functions are obtained, as sets of kernels, for several neuronal systems: Light → Receptors, Light → Horizontal, Horizontal → Ganglion, Light → Ganglion and Light → ERG. The derived models can predict, with reasonable accuracy, the system response to any input. Comparison of model and physical system performance showed close agreement for a great number of tests, the most stringent of which is comparison of their responses to a white-noise input. Other tests include step and sine responses and power spectra.

Many functional traits are revealed by these models. Some are: (a) the receptor and horizontal cell systems are nearly linear (small signal) with certain "small" nonlinearities, and become faster (latency-wise and frequency-response-wise) at higher intensity levels, (b) all ganglion systems are nonlinear (half-wave rectification), (c) the receptive field center to ganglion system is slower (latency-wise and frequency-response-wise) than the periphery to ganglion system, (d) the lateral (eccentric) ganglion systems are just as fast (latency and frequency response) as the concentric ones, (e) (bipolar response) = (input from receptors) - (input from horizontal cell), (f) receptive field center and periphery exert an antagonistic influence on the ganglion response, (g) implications about the origin of ERG, and many others.

An analytical solution is obtained for the spatial distribution of potential in the S-space, which fits experimental data very well. Different synaptic mechanisms of excitation for the external and internal horizontal cells are implied.

Relevance: 10.00%

Abstract:

An area of about 25 square miles in the western part of the San Gabriel Mountains was mapped at a scale of 1000 feet to the inch. Special attention was given to the structural geology, particularly the relations between the different systems of faults, of which the San Gabriel fault system and the Sierra Madre fault system are the most important. The present distribution and relations of the rocks suggest that the southern block has tilted northward against a more stable mass of old rocks which was raised up during a Pliocene or post-Pliocene orogeny. It is suggested that this northward tilting of the block resulted in the group of thrust faults which comprise the Sierra Madre fault system. It is shown that this hypothesis fits the present distribution of the rocks and occupies a logical place in the geologic history of the region as well as or better than any other hypothesis previously offered to explain the geology of the region.

Relevance: 10.00%

Abstract:

The Madden-Julian Oscillation (MJO) is a pattern of intense rainfall and associated planetary-scale circulations in the tropical atmosphere, with a recurrence interval of 30-90 days. Although the MJO was first discovered 40 years ago, it is still a challenge to simulate the MJO in general circulation models (GCMs), and even with simple models it is difficult to agree on the basic mechanisms. This deficiency is mainly due to our poor understanding of moist convection—deep cumulus clouds and thunderstorms, which occur at scales that are smaller than the resolution elements of the GCMs. Moist convection is the most important mechanism for transporting energy from the ocean to the atmosphere. Success in simulating the MJO will improve our understanding of moist convection and thereby improve weather and climate forecasting.

We address this fundamental subject by analyzing observational datasets, constructing a hierarchy of numerical models, and developing theories. Parameters of the models are taken from observation, and the simulated MJO fits the data without further adjustments. The major findings include: 1) the MJO may be an ensemble of convection events linked together by small-scale high-frequency inertia-gravity waves; 2) the eastward propagation of the MJO is determined by the difference between the eastward and westward phase speeds of the waves; 3) the planetary scale of the MJO is the length over which temperature anomalies can be effectively smoothed by gravity waves; 4) the strength of the MJO increases with the typical strength of convection, which increases in a warming climate; 5) the horizontal scale of the MJO increases with the spatial frequency of convection; and 6) triggered convection, where potential energy accumulates until a threshold is reached, is important in simulating the MJO. Our findings challenge previous paradigms, which consider the MJO as a large-scale mode, and point to ways for improving the climate models.

Relevance: 10.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work from Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at a greater redshift in the observed near-infrared, in order to discern if this evolution continues, or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we only achieve secure detections in two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would only occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of wider Hubble Space Telescope surveys to construct luminosity functions at both z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous surveys, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint-end slopes at these redshifts, finding very steep values of α_(z~7) = −1.87 ± 0.18 and α_(z~8) = −1.94 ± 0.23. We discuss these results in the context of cosmic reionization, and show that, given reasonable assumptions about the ionizing spectra and escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_UV = −13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
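For reference, the Schechter form whose faint-end slope α is being constrained is

φ(L) dL = φ* (L/L*)^α exp(−L/L*) d(L/L*),

so steeper (more negative) values of α mean that faint galaxies contribute a proportionally larger share of the ionizing photon budget, which is why the luminosity function must be integrated down to limits such as M_UV = −13.0.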

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission, and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_(Lyα) = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field, infra-red photometry over the GOODS-N and S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With this new data, we derive accurate ultraviolet spectral slopes for a sample of 468 3 < z < 6 star-forming galaxies, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method which accurately accounts for contamination and obscuration by skylines to derive a relationship between the UV-slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply this data to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy at z = 7.62, as well as a first detection of the CIII]1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.

Relevance: 10.00%

Abstract:

In the five chapters that follow, I delineate my efforts over the last five years to synthesize structurally and chemically relevant models of the Oxygen Evolving Complex (OEC) of Photosystem II. The OEC is nature’s only water oxidation catalyst, in that it forms the dioxygen in our atmosphere necessary for oxygenic life. Therefore understanding its structure and function is of deep fundamental interest and could provide design elements for artificial photosynthesis and manmade water oxidation catalysts. Synthetic endeavors towards OEC mimics have been an active area of research since the mid 1970s and have mutually evolved alongside biochemical and spectroscopic studies, affording ever-refined proposals for the structure of the OEC and the mechanism of water oxidation. This research has culminated in the most recent proposal: a low symmetry Mn4CaO5 cluster with a distorted Mn3CaO4 cubane bridged to a fourth, dangling Mn. To give context for how my graduate work fits into this rich history of OEC research, Chapter 1 provides a historical timeline of proposals for OEC structure, emphasizing the role that synthetic Mn and MnCa clusters have played, and ending with our Mn3CaO4 heterometallic cubane complexes.

In Chapter 2, the triarylbenzene ligand framework used throughout my work is introduced, and trinuclear clusters of Mn, Co, and Ni are discussed. The ligand scaffold consistently coordinates three metals in close proximity while leaving coordination sites open for further modification through ancillary ligand binding. The ligands coordinated could be varied, with a range of carboxylates and some less coordinating anions studied. These complexes’ structures, magnetic behavior, and redox properties are discussed.

Chapter 3 explores the redox chemistry of the trimanganese system more thoroughly in the presence of a fourth Mn equivalent, finding a range of oxidation states and oxide incorporation dependent on oxidant, solvent, and Mn salt. Oxidation states from Mn^II_4 to Mn^III Mn^IV_3 were observed, with 1-4 O^2− ligands incorporated, modeling the photoactivation of the OEC. These complexes were studied by X-ray diffraction, EPR, XAS, magnetometry, and CV.

As Ca^2+ is a necessary component of the OEC, Chapter 4 discusses synthetic strategies for making highly structurally accurate models of the OEC containing both Mn and Ca in the Mn3CaO4 cubane + dangling Mn geometry. Structural and electrochemical characterization of the first Mn3CaO4 heterometallic cubane complex—and comparison to an all-Mn Mn4O4 analog—suggests a role for Ca^2+ in the OEC. Modification of the Mn3CaO4 system by ligand substitution affords low symmetry Mn3CaO4 complexes that are the most accurate models of the OEC to date.

Finally, in Chapter 5 the reactivity of the Mn3CaO4 cubane complexes toward O-atom transfer is discussed. The identity of the metal M strongly affects the reactivity. The mechanisms of O-atom transfer and water incorporation from and into Mn4O4 and Mn4O3 clusters, respectively, are studied through computation and 18O-labeling studies. The μ3-oxos of the Mn4O4 system prove fluxional, lending support for proposals of O^2− fluxionality within the OEC.

Relevance: 10.00%

Abstract:

The 1-6 MeV electron flux at 1 AU has been measured for the time period October 1972 to December 1977 by the Caltech Electron/Isotope Spectrometers on the IMP-7 and IMP-8 satellites. The non-solar interplanetary electron flux reported here covered parts of five synodic periods. The 88 Jovian increases identified in these five synodic periods were classified by their time profiles. The fall time profiles were consistent with an exponential fall with τ ≈ 4-9 days. The rise time profiles displayed a systematic variation over the synodic period. Exponential rise time profiles with τ ≈ 1-3 days tended to occur in the time period before nominal connection, diffusive profiles predicted by the convection-diffusion model around nominal connection, and abrupt profiles after nominal connection.

The times of enhancements in the magnetic field, |B|, at 1 AU showed a better correlation than corotating interaction regions (CIR's) with Jovian increases and other changes in the electron flux at 1 AU, suggesting that |B| enhancements indicate the times that barriers to electron propagation pass Earth. Time sequences of the increases and decreases in the electron flux at 1 AU were qualitatively modeled by using the times that CIR's passed Jupiter and the times that |B| enhancements passed Earth.

The electron data observed at 1 AU were modeled by using a convection-diffusion model of Jovian electron propagation. The synodic envelope formed by the maxima of the Jovian increases was modeled by the envelope formed by the predicted intensities at a time less than that needed to reach equilibrium. Even though the envelope shape calculated in this way was similar to the observed envelope, the required diffusion coefficients were not consistent with a diffusive process.

Three Jovian electron increases at 1 AU for the 1974 synodic period were fit with rise time profiles calculated from the convection-diffusion model. For the fits without an ambient electron background flux, the values of the diffusion coefficients consistent with the data were k_x = 1.0-2.5 × 10^21 cm^2/sec and k_y = 1.6-2.0 × 10^22 cm^2/sec. For the fits that included the ambient electron background flux, the values of the diffusion coefficients consistent with the data were k_x = 0.4-1.0 × 10^21 cm^2/sec and k_y = 0.8-1.3 × 10^22 cm^2/sec.
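Schematically (a sketch of the standard form of such models, not necessarily the exact equation used in the thesis), a two-dimensional convection-diffusion model for the electron density n has the form

∂n/∂t = k_x ∂²n/∂x² + k_y ∂²n/∂y² − V ∂n/∂x,

with x along and y transverse to the average interplanetary magnetic field and V the solar-wind convection speed; the fitted k_x and k_y quoted above are the two diffusion coefficients in an equation of this kind.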

Relevance: 10.00%

Abstract:

The differential energy spectra of cosmic-ray protons and He nuclei have been measured at energies up to 315 MeV/nucleon using balloon- and satellite-borne instruments. These spectra are presented for solar quiet times for the years 1966 through 1970. The data analysis is verified by extensive accelerator calibrations of the detector systems and by calculations and measurements of the production of secondary protons in the atmosphere.

The spectra of protons and He nuclei in this energy range are dominated by the solar modulation of the local interstellar spectra. The transport equation governing this process includes as parameters the solar-wind velocity, V, and a diffusion coefficient, K(r,R), which is assumed to be a scalar function of heliocentric radius, r, and magnetic rigidity, R. The interstellar spectra, j_D, enter as boundary conditions on the solutions to the transport equation. Solutions to the transport equation have been calculated for a broad range of assumed values for K(r,R) and j_D and have been compared with the measured spectra.
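For reference, this transport equation is conventionally written (Parker's equation, for the omnidirectional distribution function f(r, p, t)) as

∂f/∂t = ∇·(K∇f) − V·∇f + (1/3)(∇·V) p ∂f/∂p,

where the last term produces the adiabatic deceleration discussed below.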

It is found that the solutions may be characterized in terms of a dimensionless parameter, ψ(r,R) = ∫_r^D V dr′/K(r′,R). The amount of modulation is roughly proportional to ψ. At high energies or far from the Sun, where the modulation is weak, the solution is determined primarily by the value of ψ (and the interstellar spectrum) and is not sensitive to the radial dependence of the diffusion coefficient. At low energies and for small r, where the effects of adiabatic deceleration are found to be large, the spectra are largely determined by the radial dependence of the diffusion coefficient and are not very sensitive to the magnitude of ψ or to the interstellar spectra. This lack of sensitivity to j_D implies that the shape of the spectra at Earth cannot be used to determine the interstellar intensities at low energies.

Values of ψ determined from electron data were used to calculate the spectra of protons and He nuclei near Earth. Interstellar spectra of the form j_D ∝ (W − 0.25m)^−2.65 for both protons and He nuclei were found to yield the best fits to the measured spectra for these values of ψ, where W is the total energy and m is the rest energy. A simple model for the diffusion coefficient was used in which the radial and rigidity dependences are separable and K is independent of radius inside a modulation region which has a boundary at a distance D. Good agreement was found between the measured and calculated spectra for the years 1965 through 1968, using typical boundary distances of 2.7 and 6.1 A.U. The proton spectra observed in 1969 and 1970 were flatter than in previous years. This flattening could be explained in part by an increase in D, but also seemed to require that a noticeable fraction of the observed protons at energies as high as 50 to 100 MeV be attributed to quiet-time solar emission. The turn-up in the spectra at low energies observed in all years was also attributed to solar emission. The diffusion coefficient used to fit the 1965 spectra is in reasonable agreement with that determined from the power spectra of the interplanetary magnetic field (Jokipii and Coleman, 1968). We find a factor of roughly 3 increase in ψ from 1965 to 1970, corresponding to the roughly order-of-magnitude decrease in the proton intensity at 250 MeV. The change in ψ might be attributed to a decrease in the diffusion coefficient, or, if the diffusion coefficient is essentially unchanged over that period (Mathews et al., 1971), might be attributed to an increase in the boundary distance, D.

Relevance: 10.00%

Abstract:

Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.

Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.

An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike N 20°W and dip 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.

A study of several large strike-slip continental earthquakes identifies characteristics which are common to them and may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events results in good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves to be too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 × 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field. The complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size for a station 20 km from the fault.

Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.

An aftershock of the great 1960 Chilean earthquake on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events; the duration of the whole sequence being longer than 1 hour. This is indeed a "slow earthquake".

Perhaps one of the most complex of events is the recent Tangshan, China event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast, 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.

What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.

Relevance: 10.00%

Abstract:

A general review of stochastic processes is given in the introduction; definitions, properties and a rough classification are presented together with the position and scope of the author's work as it fits into the general scheme.

The first section presents a brief summary of the pertinent analytical properties of continuous stochastic processes and their probability-theoretic foundations which are used in the sequel.

The remaining two sections (II and III), comprising the body of the work, are the author's contribution to the theory. It turns out that a very inclusive class of continuous stochastic processes is characterized by a fundamental partial differential equation and its adjoint (the Fokker-Planck equations). The coefficients appearing in those equations assimilate, in a most concise way, all the salient properties of the process, freed from boundary value considerations. The writer’s work consists in characterizing the processes through these coefficients without recourse to solving the partial differential equations.
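In one dimension, with drift coefficient a(x, t) and diffusion coefficient b(x, t), the pair of equations in question is the forward (Fokker-Planck) equation

∂p/∂t = −∂/∂x [a(x, t) p] + (1/2) ∂²/∂x² [b(x, t) p]

and its adjoint, the backward equation

−∂p/∂s = a(y, s) ∂p/∂y + (1/2) b(y, s) ∂²p/∂y²,

where p = p(y, s; x, t) is the transition density; a and b are the coefficients through which the processes are characterized in what follows.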

First, a class of coefficients leading to a unique, continuous process is presented, and several facts are proven to show why this class is restricted. Then, in terms of the coefficients, the unconditional statistics are deduced, these being the mean, variance, and covariance. The most general class of coefficients leading to the Gaussian distribution is deduced, and a complete characterization of these processes is presented. By specializing the coefficients, all the known stochastic processes may be readily studied, and some examples are presented, viz. the Einstein process, Bachelier process, Ornstein-Uhlenbeck process, etc. The calculations are effectively reduced to ordinary first-order differential equations, and in addition to giving a comprehensive characterization, the derivations are materially simpler than solving the original partial differential equations.

In the last section the properties of the integral process are presented. After an expository section on the definition, meaning, and importance of the integral process, a particular example is carried through starting from basic definition. This illustrates the fundamental properties, and an inherent paradox. Next the basic coefficients of the integral process are studied in terms of the original coefficients, and the integral process is uniquely characterized. It is shown that the integral process, with a slight modification, is a continuous Markoff process.

The elementary statistics of the integral process are deduced: means, variances, and covariances, in terms of the original coefficients. It is shown that the integral process of a non-degenerate process is never temporally homogeneous.

Finally, in terms of the original class of admissible coefficients, the statistics of the integral process are explicitly presented, and the integral process of all known continuous processes are specified.

Relevance: 10.00%

Abstract:

The propagation of waves in an extended, irregular medium is studied under the "quasi-optics" and the "Markov random process" approximations. Under these assumptions, a Fokker-Planck equation satisfied by the characteristic functional of the random wave field is derived. A complete set of the moment equations with different transverse coordinates and different wavenumbers is then obtained from the characteristic functional. The derivation does not require Gaussian statistics of the random medium and the result can be applied to the time-dependent problem. We then solve the moment equations for the phase correlation function, angular broadening, temporal pulse smearing, intensity correlation function, and the probability distribution of the random waves. The necessary and sufficient conditions for strong scintillation are also given.

We also consider the problem of diffraction of waves by a random, phase-changing screen. The intensity correlation function is solved in the whole Fresnel diffraction region and the temporal pulse broadening function is derived rigorously from the wave equation.

The method of smooth perturbations is applied to interplanetary scintillations. We formulate and calculate the effects of the solar-wind velocity fluctuations on the observed intensity power spectrum and on the ratio of the observed "pattern" velocity and the true velocity of the solar wind in the three-dimensional spherical model. The r.m.s. solar-wind velocity fluctuations are found to be ~200 km/sec in the region about 20 solar radii from the Sun.

We then interpret the observed interstellar scintillation data using the theories derived under the Markov approximation, which are also valid for strong scintillation. We find that the Kolmogorov power-law spectrum with an outer scale of 10 to 100 pc fits the scintillation data and that the ambient averaged electron density in the interstellar medium is about 0.025 cm^−3. It is also found that there exists a region of strong electron density fluctuation with thickness ~10 pc and mean electron density ~7 cm^−3 between the pulsar PSR 0833-45 and the Earth.
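The Kolmogorov form referred to here is the power-law spectrum of electron-density fluctuations

Φ(q) ∝ q^(−11/3), for 2π/L_0 ≲ q ≲ 2π/l_0,

with the outer scale L_0 (here inferred to be 10 to 100 pc) setting the low-wavenumber cutoff.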

Relevance: 10.00%

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~ 0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 × 10^−3 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by a technique which minimizes both the travel time residuals and the model perturbations.
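As a sketch of the stabilization idea described here (not the actual TTINV code; names are illustrative), the damped least-squares update that jointly minimizes residuals and model perturbations can be written as:

import numpy as np

def damped_least_squares_step(G, residuals, damping):
    # G: partial derivatives of travel time with respect to the model
    #    parameters (n_data x n_params); residuals: observed minus
    #    computed travel times.
    # The damping term penalizes large model perturbations, avoiding the
    # ill-conditioning of ordinary least squares.
    n_params = G.shape[1]
    lhs = G.T @ G + damping * np.eye(n_params)
    rhs = G.T @ residuals
    return np.linalg.solve(lhs, rhs)   # model perturbation to apply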

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.