12 results for underdetermined blind source separation

in CaltechTHESIS


Relevance: 20.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
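
For illustration only, here is a minimal numerical sketch of a Ben-Menahem-type finiteness (directivity) factor of the general kind described above; the unilateral-rupture form F(ω) = sin(X)/X · exp(-iX) with X = (ωL/2)(1/v_r - cos θ/c) is the textbook version, and every parameter value in the example (rupture length, rupture velocity, phase velocity) is an assumed placeholder rather than a value from the thesis.

```python
import numpy as np

def finiteness_factor(omega, L, v_r, c, theta):
    """Ben-Menahem-type directivity factor for a unilateral rupture.

    omega : angular frequency (rad/s)
    L     : rupture length (km)
    v_r   : rupture velocity (km/s)
    c     : phase velocity of the surface wave (km/s)
    theta : station azimuth measured from the rupture direction (rad)
    """
    X = 0.5 * omega * L * (1.0 / v_r - np.cos(theta) / c)
    return np.sinc(X / np.pi) * np.exp(-1j * X)   # np.sinc(x) = sin(pi*x)/(pi*x)

# Example: amplitude of the finiteness factor at a 200 s period for a 200 km
# rupture, viewed along (theta = 0) and against (theta = pi) the rupture direction.
omega = 2 * np.pi / 200.0
for theta in (0.0, np.pi):
    amp = abs(finiteness_factor(omega, L=200.0, v_r=2.5, c=4.0, theta=theta))
    print(f"theta = {theta:.2f} rad, |F| = {amp:.3f}")
```

The strong azimuthal variation of |F| at long periods is what makes the spectra sensitive to the extent and direction of rupture.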

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses) determined by Student's t test are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.
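
As a schematic of the linear (moment-tensor) inversion step described in Chapter 2, the sketch below solves a small least-squares problem for one trial source depth; the excitation kernels, data, and dimensions are random placeholders, not the thesis's Green's functions or observed spectra.

```python
import numpy as np

# Hypothetical setup: d = G @ m, where d stacks the real and imaginary parts of the
# observed source spectra over stations and frequencies, m holds the independent
# moment-tensor elements, and G holds the excitation kernels for a trial centroid depth.
rng = np.random.default_rng(0)
n_obs, n_m = 40, 5
G = rng.standard_normal((n_obs, n_m))               # placeholder excitation kernels
m_true = np.array([1.0, -0.4, 0.2, 0.7, -0.1])      # placeholder moment tensor
d = G @ m_true + 0.05 * rng.standard_normal(n_obs)  # noisy "observations"

# Least-squares moment tensor for this trial depth, and the residual misfit.
m_hat, residual, *_ = np.linalg.lstsq(G, d, rcond=None)
print("recovered m:", np.round(m_hat, 3))
print("misfit:", float(residual[0]) if residual.size else 0.0)

# In a depth search one would repeat this for a range of trial depths and keep
# the depth whose least-squares residual is smallest.
```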

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki, Japan, and 1977 Sumbawa, Indonesia, earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from the moment tensor inversion and fault inversion. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 20.00%

Abstract:

This thesis is in two parts. In Part I the independent variable θ in the trigonometric form of Legendre's equation is extended to the range (-∞, ∞). The associated spectral representation is an infinite integral transform whose kernel is the analytic continuation of the associated Legendre function of the second kind into the complex θ-plane. This new transform is applied to the problems of waves on a spherical shell, heat flow on a spherical shell, and the gravitational potential of a sphere. In each case the resulting alternative representation of the solution is more suited to direct physical interpretation than the standard forms.
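
For reference, the trigonometric form of the associated Legendre equation that results from the substitution x = cos θ is shown below; the thesis's exact notation may differ.

```latex
\frac{d^{2}y}{d\theta^{2}} + \cot\theta\,\frac{dy}{d\theta}
  + \left[\nu(\nu+1) - \frac{\mu^{2}}{\sin^{2}\theta}\right] y = 0 .
```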

In Part II separation of variables is applied to the initial-value problem of the propagation of acoustic waves in an underwater sound channel. The Epstein symmetric profile is taken to describe the variation of sound speed with depth. The spectral representation associated with the separated depth equation is found to contain an integral and a series. A point source is assumed to be located in the channel. The nature of the disturbance at a point in the vicinity of the channel far removed from the source is investigated.

Relevance: 20.00%

Abstract:

In the preparation of small organic paramagnets, the target structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3-cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane) using Berson TMMs and cyclobutanediyls as SCs are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.

The poor coupling of 1,3-cyclopentane has enabled a study of the variable-temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. By fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From these data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.
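
A minimal sketch of this kind of Boltzmann fit follows, assuming the common Curie-law form for the EPR intensity of a thermally populated triplet lying ΔE above a quintet ground state; the functional form, the temperature points, and the synthetic data are all illustrative assumptions, not the thesis's data or fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.987  # gas constant in cal / (mol K)

def triplet_intensity(T, dE, A):
    """Curie-law EPR intensity of a thermally populated triplet lying dE (cal/mol)
    above a quintet ground state (Boltzmann weights with degeneracies 3 and 5)."""
    boltz = 3.0 * np.exp(-dE / (R * T))
    return A / T * boltz / (5.0 + boltz)

# Hypothetical intensity-vs-temperature data (placeholder values only).
T_obs = np.array([10.0, 15.0, 20.0, 30.0, 40.0, 50.0, 65.0])
I_obs = triplet_intensity(T_obs, dE=250.0, A=1.0)
I_obs *= 1 + 0.02 * np.random.default_rng(1).standard_normal(T_obs.size)

(dE_fit, A_fit), _ = curve_fit(triplet_intensity, T_obs, I_obs, p0=(100.0, 1.0))
print(f"fitted quintet-triplet gap: {dE_fit:.0f} cal/mol")
```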

The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
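
The core operation in such a powder simulation is diagonalizing the spin Hamiltonian, H = gμ_B B·S + D(S_z² − S(S+1)/3) + E(S_x² − S_y²), over many field orientations. The sketch below does only that step for an S = 1 (triplet) center; the zero-field parameters and field value are placeholders, and the thesis's actual package certainly differs in scope and detail (it also treats quintets and builds full lineshapes).

```python
import numpy as np

# S = 1 spin matrices (units of hbar)
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Sy = np.array([[0, -1j, 0], [1j, 0, -1j], [0, 1j, 0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])
I3 = np.eye(3)

g, mu_B = 2.0023, 13.996e-3  # mu_B in GHz/mT, so energies come out in GHz

def triplet_levels(B_mT, D, E, theta, phi):
    """Eigenvalues (GHz) of the Zeeman + zero-field (dipolar) Hamiltonian for a
    field of magnitude B_mT along the direction (theta, phi)."""
    n = np.array([np.sin(theta) * np.cos(phi), np.sin(theta) * np.sin(phi), np.cos(theta)])
    H = g * mu_B * B_mT * (n[0] * Sx + n[1] * Sy + n[2] * Sz)
    H = H + D * (Sz @ Sz - (2.0 / 3.0) * I3) + E * (Sx @ Sx - Sy @ Sy)
    return np.linalg.eigvalsh(H)

# Powder average: sample orientations uniformly and collect the level energies.
D, E, B = 0.30, 0.01, 340.0            # zero-field terms (GHz) and field (mT); placeholders
rng = np.random.default_rng(0)
thetas = np.arccos(rng.uniform(-1, 1, 2000))
phis = rng.uniform(0, 2 * np.pi, 2000)
levels = np.array([triplet_levels(B, D, E, t, p) for t, p in zip(thetas, phis)])
# Delta m_s = +/-1 transition energies, to be histogrammed into a stick spectrum.
trans = np.concatenate([levels[:, 1] - levels[:, 0], levels[:, 2] - levels[:, 1]])
print("transition energy range (GHz):", trans.min(), "to", trans.max())
```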

Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.

Source listings for all computer programs are given in the appendix.

Relevance: 20.00%

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (the number of unknowns exceeds the number of equations). However, in recent times, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
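
As a concrete illustration of how a sparse array geometry can support identification of more sources than sensors from second-order statistics, the sketch below builds the difference coarray of a prototype coprime array (coprime and nested geometries are named explicitly in the next paragraph); the particular coprime pair (M = 3, N = 5) is chosen only for the example and is not a design from the thesis.

```python
import numpy as np

def coprime_array(M, N):
    """Sensor positions (in units of the base spacing) of a prototype coprime array:
    2M sensors spaced by N units and N sensors spaced by M units, sharing the origin."""
    sub1 = np.arange(2 * M) * N
    sub2 = np.arange(N) * M
    return np.unique(np.concatenate([sub1, sub2]))

positions = coprime_array(M=3, N=5)
# Difference coarray: the set of all pairwise differences of sensor positions.
lags = np.unique((positions[:, None] - positions[None, :]).ravel())
print("physical sensors:  ", positions.size)
print("distinct coarray lags:", lags.size)
# The coarray has far more distinct lags than there are physical sensors, which is
# what allows correlation-based methods to resolve more sources than sensors.
```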

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
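
For context, here is a generic sketch of greedy sparse support recovery in an underdetermined system via orthogonal matching pursuit; this is the standard textbook formulation, not the correlation-aware algorithm or the specific guarantees developed in the thesis, and all problem sizes are arbitrary.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x, A underdetermined."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        support.append(j)
        # Re-fit on the selected support and update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x, sorted(support)

# Underdetermined example: 30 measurements, 100 unknowns, 4-sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[[7, 23, 55, 90]] = [1.5, -2.0, 1.0, 0.8]
y = A @ x_true
x_hat, supp = omp(A, y, k=4)
print("recovered support:", supp)   # should match {7, 23, 55, 90} with high probability
```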

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance: 20.00%

Abstract:

Viruses possess very specific methods of targeting and entering cells. These methods would be extremely useful if they could also be applied to drug delivery, but little is known about the molecular mechanisms of the viral entry process. In order to gain further insight into mechanisms of viral entry, chemical and spectroscopic studies in two systems were conducted, examining hydrophobic protein-lipid interactions during Sendai virus membrane fusion, and the kinetics of bacteriophage λ DNA injection.

Sendai virus glycoprotein interactions with target membranes during the early stages of fusion were examined using time-resolved hydrophobic photoaffinity labeling with the lipid-soluble carbene generator 3-(trifluoromethyl)-3-(m-[^(125)I]iodophenyl)diazirine (TID). The probe was incorporated in target membranes prior to virus addition and photolysis. During Sendai virus fusion with liposomes composed of cardiolipin (CL) or phosphatidylserine (PS), the viral fusion (F) protein is preferentially labeled at early time points, supporting the hypothesis that hydrophobic interaction of the fusion peptide at the N-terminus of the F_1 subunit with the target membrane is an initiating event in fusion. Correlation of the hydrophobic interactions with independently monitored fusion kinetics further supports this conclusion. Separation of proteins after labeling shows that the F_1 subunit, containing the putative hydrophobic fusion sequence, is exclusively labeled, and that the F_2 subunit does not participate in fusion. Labeling shows temperature and pH dependence consistent with a need for protein conformational mobility and fusion at neutral pH. Higher amounts of labeling during fusion with CL vesicles than during virus-PS vesicle fusion reflect membrane packing regulation of peptide insertion into target membranes. Labeling of the viral hemagglutinin/neuraminidase (HN) at low pH indicates that HN-mediated fusion is triggered by hydrophobic interactions, after titration of acidic amino acids. HN labeling under nonfusogenic conditions reveals that viral binding may involve hydrophobic as well as electrostatic interactions. Controls for diffusional labeling exclude a major contribution from this source. Labeling during reconstituted Sendai virus envelope-liposome fusion shows that functional reconstitution involves protein retention of the ability to undergo hydrophobic interactions.

Examination of Sendai virus fusion with erythrocyte membranes indicates that hydrophobic interactions also trigger fusion between biological membranes, and that HN binding may involve hydrophobic interactions as well. Labeling of the erythrocyte membranes revealed close membrane association of spectrin, which may play a role in regulating membrane fusion. The data show that hydrophobic fusion protein interaction with both artificial and biological membranes is a triggering event in fusion. Correlation of these results with earlier studies of membrane hydration and fusion kinetics provides a more detailed view of the mechanism of fusion.

The kinetics of DNA injection by bacteriophage λ into liposomes bearing reconstituted receptors were measured using fluorescence spectroscopy. LamB, the bacteriophage receptor, was extracted from bacteria and reconstituted into liposomes by detergent-removal dialysis. The DNA-binding fluorophore ethidium bromide was encapsulated in the liposomes during dialysis. Enhanced fluorescence of ethidium bromide upon binding to injected DNA was monitored, and showed that injection is a rapid, one-step process. The bimolecular rate law, determined by the method of initial rates, revealed that injection occurs several times faster than indicated by earlier studies employing indirect assays.
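
A minimal sketch of the method of initial rates for a bimolecular rate law of the form rate = k[phage][liposome] follows; all concentrations and initial-rate values are invented placeholders, included only to show how a rate constant would be extracted, not data from the thesis.

```python
import numpy as np

# Hypothetical initial-rate data: initial slope of the ethidium fluorescence rise
# (arbitrary units / s) at several phage and receptor-liposome concentrations.
phage = np.array([1.0, 2.0, 2.0, 4.0, 4.0]) * 1e-10   # M, placeholder
lipo  = np.array([1.0, 1.0, 2.0, 2.0, 4.0]) * 1e-9    # M, placeholder
v0    = np.array([0.8, 1.6, 3.3, 6.5, 13.2]) * 1e-3   # initial rates, placeholder

# For rate = k [phage][liposome], plotting v0 against the concentration product
# gives a straight line through the origin whose slope is the apparent k.
product = phage * lipo
k = float(np.sum(product * v0) / np.sum(product ** 2))  # zero-intercept least-squares slope
print(f"apparent bimolecular rate constant k ≈ {k:.3g} (units of v0 per M^2)")
```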

It is hoped that these studies will increase the understanding of the mechanisms of virus entry into cells and facilitate the development of virus-mimetic drug delivery strategies.

Relevance: 20.00%

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California recorded at regional (200-1400 km) and teleseismic (> 30°) distances are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary, for example in a basin with a low-velocity lid.

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model due to the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms implying similar source parameters. By comparing recent well studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1937, 1942, and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected value. An aftershock of the 1942 earthquake appears to be larger than previously thought.

Relevance: 20.00%

Abstract:

Computing technology has dramatically changed the world around us; you can hardly find an area where cell phones have not saturated the market, yet there has been a notable lack of breakthroughs in integrating computers with biological environments. This is largely the result of the incompatibility of the materials used in the two settings; biological experiments tend to require aqueous environments. To help bridge this divide, chemists, engineers, physicists, and biologists have begun to develop microfluidics. Unfortunately, such microfluidic devices have required large external support equipment to run them. This thesis presents several microfluidic methods that help integrate engineering and biology by exploiting nanotechnology, pushing the field of microfluidics back toward its intended purpose: small, integrated biological and electrical devices. I demonstrate this goal by developing methods and devices to (1) separate membrane-bound proteins with the use of microfluidics, (2) use optical technology to make fiber optic cables into protein sensors, (3) generate new fluidic devices using semiconductor material to manipulate single cells, and (4) develop a new genetic microfluidic-based diagnostic assay that works with current PCR methodology to provide faster and cheaper results. All of these methods and systems can be used as components to build a self-contained biomedical device.

Relevance: 20.00%

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.

Chief among these are: the supply of food, water, and shelter; the development of industry and markets; the prevention and removal of downtown congestion; and the protection of life and property. These, of course, are the problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.

Relevance: 20.00%

Abstract:

The cataphoretic purification of helium was investigated for binary mixtures of He with Ar, Ne, N2, O2, CO, and CO2 in a DC glow discharge. An experimental technique was developed to continuously measure the composition in the anode end-bulb without sample withdrawal. Discharge currents ranged from 10 mA to 100 mA. Total gas pressure ranged from 2 torr to 9 torr. Initial compositions of the minority component in He ranged from 1.2 mole percent to 7.5 mole percent.

The cataphoretic separation of Ar and Ne from He was found to be in agreement with previous investigators. The cataphoretic separation of N2, O2, and CO from He was found to be similar to noble gas systems in that the steady-state separation improved with (1) increasing discharge current, (2) increasing gas pressure, and (3) decreasing initial composition of the minority component. In the He-CO2 mixture, the CO2 dissociated to CO plus O2. The fraction of CO2 dissociated was directly proportional to the current and pressure and independent of initial composition.

The experimental results for the separation of Ar, Ne, N2, O2, and CO from He were interpreted in the framework of a recently proposed theoretical model involving an electrostatic Peclet number. In the model the electric field was assumed to be constant. This assumption was checked experimentally and the maximum variation in electric field was 35% in time and 30% in position. Consequently, the assumption of constant electric field introduced no more than 55% variation in the electrostatic Peclet number during a separation.

To aid in the design of new cataphoretic systems, the following design criteria were developed and tested in detail: (1) electric field independent of discharge current, (2) electric field directly proportional to total pressure, (3) ion fraction of impurity directly proportional to discharge current, and (4) ion fraction of impurity independent of total pressure. Although these assumptions are approximate, they enabled the steady-state concentration profile to be predicted to within 25% for 75% of the data. The theoretical model was also tested with respect to the characteristic time associated with transient cataphoresis. Over 80% of the data was within a factor of two of the calculated characteristic times.
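
To illustrate the role of the electrostatic Peclet number, the sketch below evaluates the steady-state profile of the standard one-dimensional convection-diffusion balance in a closed tube, c(x) = c̄·Pe·exp(Pe·x/L)/(e^Pe − 1); this generic form is an assumption used here only for illustration and is not necessarily the profile expression of the thesis's model.

```python
import numpy as np

def steady_profile(x, L, Pe, c_avg=1.0):
    """Steady-state minority-species concentration in a closed tube of length L for the
    1-D convection-diffusion balance with Peclet number Pe = uL/D (zero net flux at
    both ends; drift directed toward x = L)."""
    return c_avg * Pe * np.exp(Pe * x / L) / np.expm1(Pe)

L = 1.0
x = np.linspace(0.0, L, 5)
for Pe in (0.13, 1.0, 4.33):       # spans the range of Peclet numbers reported below
    print(f"Pe = {Pe:4.2f}:", np.round(steady_profile(x, L, Pe), 3))
# Larger Pe concentrates the impurity toward one end of the tube, i.e. better
# cataphoretic separation at steady state.
```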

The electrostatic Peclet number ranged in value from 0.13 to 4.33. Back-calculated ion fractions of the impurity component ranged in value from 4.8 × 10^-6 to 178 × 10^-6.

Relevance: 20.00%

Abstract:

Described in this thesis are measurements made of the thick-target neutron yield from the reaction ^13C(α, n)^16O. The yield was determined for laboratory bombarding energies between 0.475 and 0.700 MeV, using a stilbene crystal neutron detector and pulse-shape discrimination to eliminate gamma rays. Stellar temperatures between 2.5 and 4.5 × 10^8 °K are involved in this energy region. From the neutron yield was extracted the astrophysical cross-section factor S(E), which was found to fit a linear function: S(E) = [(5.48 ± 1.77) + (12.05 ± 3.91)E] × 10^5 MeV-barns, center-of-mass system. The stellar rate of the ^13C(α, n)^16O reaction is calculated, and discussed with reference to helium burning and neutron production in the core of a giant star.
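
For orientation, here is a small sketch converting the fitted S(E) above into a cross section through the standard definition σ(E) = S(E)·exp(−2πη)/E; the numerical Sommerfeld-parameter approximation 2πη ≈ 31.29·Z₁Z₂·√(μ/E[keV]) and the sample energies are textbook conveniences, not quantities taken from the thesis.

```python
import numpy as np

Z1, Z2 = 6, 2                                   # 13C + alpha
mu = 4.0026 * 13.0034 / (4.0026 + 13.0034)      # reduced mass in amu

def S_factor(E_MeV):
    """Fitted astrophysical S-factor quoted above (center-of-mass), in MeV-barns."""
    return (5.48 + 12.05 * E_MeV) * 1e5

def cross_section_barns(E_MeV):
    """sigma(E) = S(E) exp(-2*pi*eta) / E, with 2*pi*eta ~ 31.29 Z1 Z2 sqrt(mu / E[keV])."""
    two_pi_eta = 31.29 * Z1 * Z2 * np.sqrt(mu / (E_MeV * 1e3))
    return S_factor(E_MeV) * np.exp(-two_pi_eta) / E_MeV

for E in (0.40, 0.45, 0.50):                    # illustrative center-of-mass energies, MeV
    print(f"E_cm = {E:.2f} MeV  ->  sigma ~ {cross_section_barns(E):.3e} b")
```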

Results are also presented of measurements carried out on the reaction ^9Be(α, n)^12C, taken with a thin Be target. The bombarding energy range covered was from 0.340 to 0.680 MeV, with excitation curves for the ground- and first excited-state neutrons being reported. Some angular distributions were also measured. Resonances were found at bombarding energies of E_LAB = 0.520 MeV (E_CM = 0.360 MeV, Γ ~ 55 keV CM, ωγ = 3.79 eV CM) and E_LAB = 0.600 MeV (E_CM = 0.415 MeV, Γ < 4 keV CM, ωγ = 0.88 eV CM). The astrophysical rate of the ^9Be(α, n)^12C reaction due to these resonances is calculated.

Relevance: 20.00%

Abstract:

I. PHOSPHORESCENCE AND THE TRUE LIFETIME OF TRIPLET STATES IN FLUID SOLUTIONS

Phosphorescence has been observed in a highly purified fluid solution of naphthalene in 3-methylpentane (3-MP). The phosphorescence lifetime of C10H8 in 3-MP at -45 °C was found to be 0.49 ± 0.07 sec, while that of C10D8 under identical conditions is 0.64 ± 0.07 sec. At this temperature 3-MP has the same viscosity (0.65 centipoise) as that of benzene at room temperature. It is believed that even these long lifetimes are dominated by impurity quenching mechanisms. Therefore it seems that the radiationless decay times of the lowest triplet states of simple aromatic hydrocarbons in liquid solutions are sensibly the same as those in the solid phase. A slight dependence of the phosphorescence lifetime on solvent viscosity was observed in the temperature region, -60° to -18°C. This has been attributed to the diffusion-controlled quenching of the triplet state by residual impurity, perhaps oxygen. Bimolecular depopulation of the triplet state was found to be of major importance over a large part of the triplet decay.
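
The mixed unimolecular/bimolecular decay mentioned in the last sentence corresponds to dN/dt = −k₁N − k₂N², which has a closed-form solution; the sketch below fits that form to synthetic data, with every rate constant and data point an invented placeholder rather than a measurement from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def triplet_decay(t, N0, k1, k2):
    """Closed-form solution of dN/dt = -k1*N - k2*N**2
    (unimolecular decay plus bimolecular triplet-triplet annihilation)."""
    e = np.exp(-k1 * t)
    return N0 * k1 * e / (k1 + k2 * N0 * (1.0 - e))

# Hypothetical phosphorescence decay trace (placeholder values only).
t = np.linspace(0.0, 2.0, 60)                                   # seconds
N_obs = triplet_decay(t, N0=1.0, k1=1.0 / 0.55, k2=1.2)
N_obs *= 1 + 0.01 * np.random.default_rng(2).standard_normal(t.size)

(N0_f, k1_f, k2_f), _ = curve_fit(triplet_decay, t, N_obs, p0=(1.0, 2.0, 1.0))
print(f"first-order lifetime ~ {1.0 / k1_f:.2f} s, bimolecular term k2 ~ {k2_f:.2f}")
```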

The lifetime of triplet C10H8 at room temperature was also measured in highly purified benzene by means of both phosphorescence and triplet-triplet absorption. The lifetime was estimated to be at least ten times shorter than that in 3-MP. This is believed to be due not only to residual impurities in the solvent but also to small amounts of impurities produced through unavoidable irradiation by the excitation source. In agreement with this idea, lifetime shortening caused by intense flashes of light is readily observed. This latter result suggests that experiments employing flash lamp techniques are not suitable for these kinds of studies.

The theory of radiationless transitions, based on Robinson's theory, is briefly outlined. A simple theoretical model derived from Fano's theory of autoionization gives an identical result.

II. WHY IS CONDENSED OXYGEN BLUE?

The blue color of oxygen is mostly derived from double transitions. This paper presents a theoretical calculation of the intensity of the double transition (a 1Δg) (a 1Δg)←(X 3Σg-) (X 3Σg-), using a model based on a pair of oxygen molecules at a fixed separation of 3.81 Å. The intensity enhancement is assumed to be derived from the mixing (a 1Δg) (a 1Δg) ~~~ (X 3Σg-) (X 3Σu-) and (a 1Δg) (1Δu) ~~~ (X 3Σg-) (X 3Σg-). Matrix elements for these interactions are calculated using a π-electron approximation for the pair system. Good molecular wavefunctions are used for all but the perturbing (B 3Σu-) state, which is approximated in terms of ground state orbitals. The largest contribution to the matrix elements arises from large intramolecular terms multiplied by intermolecular overlap integrals. The strength of interaction depends not only on the intermolecular separation of the two oxygen molecules, but also as expected on the relative orientation. Matrix elements are calculated for different orientations, and the angular dependence is fit to an analytical expression. The theory therefore not only predicts an intensity dependence on density but also one on phase at constant density. Agreement between theory and available experimental results is satisfactory considering the nature of the approximation, and indicates the essential validity of the overall approach to this interesting intensity enhancement problem.

Relevance: 20.00%

Abstract:

This thesis discusses simulations of earthquake ground motions using prescribed ruptures and dynamic failure. Introducing sliding degrees of freedom led to an innovative technique for numerical modeling of earthquake sources. This technique allows efficient implementation of both prescribed ruptures and dynamic failure on an arbitrarily oriented fault surface. Off the fault surface, the solution of the three-dimensional, dynamic elasticity equation uses well-known finite-element techniques. We employ parallel processing to efficiently compute the ground motions in domains containing millions of degrees of freedom.

Using prescribed ruptures we study the sensitivity of long-period near-source ground motions to five earthquake source parameters for hypothetical events on a strike-slip fault (Mw 7.0 to 7.1) and a thrust fault (Mw 6.6 to 7.0). The directivity of the ruptures creates large displacement and velocity pulses in the ground motions in the forward direction. We found a good match between the severity of the shaking and the shape of the near-source factor from the 1997 Uniform Building Code for strike-slip faults and thrust faults with surface rupture. However, for blind thrust faults the peak displacement and velocities occur up-dip from the region with the peak near-source factor. We assert that a simple modification to the formulation of the near-source factor improves the match between the severity of the ground motion and the shape of the near-source factor.

For simulations with dynamic failure on a strike-slip fault or a thrust fault, we examine what constraints must be imposed on the coefficient of friction to produce realistic ruptures under the application of reasonable shear and normal stress distributions with depth. We found that variation of the coefficient of friction with the shear modulus and the depth produces realistic rupture behavior in both homogeneous and layered half-spaces. Furthermore, we observed a dependence of the rupture speed on the direction of propagation and fluctuations in the rupture speed and slip rate as the rupture encountered changes in the stress field. Including such behavior in prescribed ruptures would yield more realistic ground motions.
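
As one common way such friction constraints are parameterized in dynamic-rupture modeling (not necessarily the formulation adopted in this thesis), the sketch below evaluates a linear slip-weakening friction law together with a lithostatic estimate of normal stress versus depth; all coefficients are generic placeholder values.

```python
import numpy as np

def slip_weakening_mu(slip, mu_s=0.6, mu_d=0.5, d0=0.4):
    """Linear slip-weakening friction: mu drops from the static value (mu_s) to the
    dynamic value (mu_d) over a critical slip distance d0, then stays at mu_d."""
    return np.where(slip < d0, mu_s - (mu_s - mu_d) * slip / d0, mu_d)

# Frictional strength versus depth, assuming lithostatic normal stress (placeholders;
# pore pressure and tectonic loading are ignored in this illustration).
rho, g = 2700.0, 9.8                        # kg/m^3, m/s^2
depth = np.array([1.0, 5.0, 10.0]) * 1e3    # m
sigma_n = rho * g * depth                   # Pa
for z, s in zip(depth, sigma_n):
    print(f"z = {z/1e3:4.1f} km: static strength ~ {0.6 * s / 1e6:.0f} MPa, "
          f"dynamic ~ {0.5 * s / 1e6:.0f} MPa")
```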