27 results for Spectrally bounded

in CaltechTHESIS


Relevance:

20.00%

Abstract:

The warm plasma resonance cone structure of the quasistatic field produced by a gap source in a bounded magnetized slab plasma is determined theoretically. This is initially determined for a homogeneous or mildly inhomogeneous plasma with source frequency lying between the lower hybrid frequency and the plasma frequency. It is then extended to the complicated case of an inhomogeneous plasma with two internal lower hybrid layers present, which is of interest to radio frequency heating of plasmas.

In the first case, the potential is obtained as a sum of multiply reflected warm plasma resonance cones, each of which has a similar structure, but a different size, amplitude, and position. An important interference between nearby multiply-reflected resonance cones is found. The cones are seen to spread out as they move away from the source, so that this interference increases and the individual resonance cones become obscured far away from the source.

In the second case, the potential is found to be expressible as a sum of multiply-reflected, multiply-tunnelled, and mode-converted resonance cones, each of which has a unique but similar structure. The effects of both collisional and collisionless damping are included, and their influence on the decay of the cone structure is studied. Various properties of the cones are determined, such as how they move into and out of the hybrid layers, pass through the evanescent region, and transform at the hybrid layers. It is found that cones can tunnel through the evanescent layer if the layer is thin, and that the effect of a thin evanescent layer is to subdue the secondary maxima of the cone relative to the main peak, while slightly broadening the main peak and shifting it closer to the cold plasma cone line.

Energy theorems for quasistatic fields are developed and applied to determine the power flow and absorption along the individual cones. This reveals the points of concentration of the flow and the various absorption mechanisms.

Relevance:

10.00%

Abstract:

A model equation for water waves has been suggested by Whitham to study, qualitatively at least, the different kinds of breaking. This is an integro-differential equation which combines a typical nonlinear convection term with an integral for the dispersive effects and is of independent mathematical interest. For an approximate kernel of the form e^(-b|x|) it is shown first that solitary waves have a maximum height with sharp crests and secondly that waves which are sufficiently asymmetric break into "bores." The second part applies to a wide class of bounded kernels, but the kernel giving the correct dispersion effects of water waves has a square root singularity and the present argument does not go through. Nevertheless the possibility of the two kinds of breaking in such integro-differential equations is demonstrated.
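The model equation referred to above can be sketched in standard form as follows (the normalization of the exponential kernel here is an assumption; Whitham's exact kernel is the one defined through its Fourier transform, with the square-root singularity):

```latex
u_t + u\,u_x + \int_{-\infty}^{\infty} K(x-\xi)\,u_\xi(\xi,t)\,d\xi = 0,
\qquad \hat{K}(k) = \sqrt{\frac{g \tanh kh}{k}},
\qquad K_{\mathrm{approx}}(x) = \frac{b}{2}\,e^{-b|x|}.
```

The nonlinear convection term u u_x steepens waves while the integral term disperses them; the balance between the two governs which kind of breaking occurs.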

Difficulties arise in finding variational principles for continuum mechanics problems in the Eulerian (field) description. The reason is found to be that continuum equations in the original field variables lack a mathematical "self-adjointness" property which is necessary for Euler equations. This is a feature of the Eulerian description and occurs in non-dissipative problems which have variational principles for their Lagrangian description. To overcome this difficulty a "potential representation" approach is used which consists of transforming to new (Eulerian) variables whose equations are self-adjoint. The transformations to the velocity potential or stream function in fluids or the scalar and vector potentials in electromagnetism often lead to variational principles in this way. As yet no general procedure is available for finding suitable transformations. Existing variational principles for the inviscid fluid equations in the Eulerian description are reviewed and some ideas on the form of the appropriate transformations and Lagrangians for fluid problems are obtained. These ideas are developed in a series of examples which include finding variational principles for Rossby waves and for the internal waves of a stratified fluid.

Relevance:

10.00%

Abstract:

Storage systems are widely used and have played a crucial role in both consumer and industrial products, for example, personal computers, data centers, and embedded systems. However, such systems face issues of cost, restricted lifetime, and reliability with the emergence of new systems and devices, such as distributed storage and flash memory. Information theory, on the other hand, provides fundamental bounds and solutions to fully utilize resources such as data density, information I/O, and network bandwidth. This thesis bridges these two topics and proposes to solve challenges in data storage using a variety of coding techniques, so that storage becomes faster, more affordable, and more reliable.

We consider the system level and study the integration of RAID schemes and distributed storage. Erasure-correcting codes are the basis of the ubiquitous RAID schemes for storage systems, where disks correspond to symbols in the code and are located in a (distributed) network. Specifically, RAID schemes are based on MDS (maximum distance separable) array codes that enable optimal storage and efficient encoding and decoding algorithms. With r redundancy symbols, an MDS code can sustain any r erasures. For example, consider an MDS code that can correct two erasures. It is clear that when two symbols are erased, one needs to access and transmit all the remaining information to rebuild the erasures. However, an interesting and practical question is: what is the smallest fraction of information that one needs to access and transmit in order to correct a single erasure? In Part I we will show that the lower bound of 1/2 is achievable and that the result can be generalized to codes with an arbitrary number of parities and optimal rebuilding.
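As a toy illustration of the MDS property (this is a minimal Reed-Solomon sketch over the small field GF(7), not the array codes constructed in Part I): with k = 2 data symbols and r = 2 parities, any two erasures can be rebuilt by Lagrange interpolation from the survivors.

```python
P = 7  # toy prime field GF(7); real systems use GF(2^8) or larger

def inv(a):
    return pow(a, P - 2, P)  # modular inverse via Fermat's little theorem

def encode(data, n):
    """Evaluate the data polynomial at x = 0..n-1; the evaluations form
    an MDS codeword (any k of the n symbols determine the data)."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(data)) % P
            for x in range(n)]

def rebuild(codeword, k):
    """Recover the k data symbols from any k surviving positions
    (erasures marked None) by Lagrange interpolation."""
    pts = [(x, y) for x, y in enumerate(codeword) if y is not None][:k]
    coeffs = [0] * k
    for xj, yj in pts:
        basis, denom = [1], 1
        for xm, _ in pts:
            if xm == xj:
                continue
            denom = denom * (xj - xm) % P
            new = [0] * (len(basis) + 1)
            for i, b in enumerate(basis):       # multiply basis by (x - xm)
                new[i] = (new[i] - xm * b) % P
                new[i + 1] = (new[i + 1] + b) % P
            basis = new
        scale = yj * inv(denom) % P
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % P
    return coeffs

word = encode([3, 5], 4)   # k = 2 data symbols, r = 2 parities
word[1] = word[3] = None   # two erasures
assert rebuild(word, 2) == [3, 5]
```

Note that this naive rebuild reads all k survivors even for a single erasure; the point of Part I is that specially structured array codes can get away with accessing only a 1/2 fraction.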

We consider the device level and study coding and modulation techniques for emerging non-volatile memories such as flash memory. In particular, rank modulation is a novel data representation scheme proposed by Jiang et al. for multi-level flash memory cells, in which a set of n cells stores information in the permutation induced by the different charge levels of the individual cells. It eliminates the need for discrete cell levels, as well as overshoot errors, when programming cells. In order to decrease the decoding complexity, we propose two variations of this scheme in Part II: bounded rank modulation, where only small sliding windows of cells are sorted to generate permutations, and partial rank modulation, where only some of the n cells are used to represent data. We study limits on the capacity of bounded rank modulation and propose encoding and decoding algorithms. We show that overlaps between windows increase capacity. We present Gray codes spanning all possible partial-rank states using only "push-to-the-top" operations. These Gray codes turn out to solve an open combinatorial problem, the universal cycle problem: finding a sequence of integers that generates all possible partial permutations.
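A minimal sketch of the rank-modulation idea and its bounded variant (window size, step, and function names here are illustrative, not the thesis's constructions):

```python
def permutation(levels):
    """Rank modulation: the stored symbol is the permutation induced by
    the cell charge levels, highest-charged cell first."""
    return sorted(range(len(levels)), key=lambda i: -levels[i])

def bounded_permutations(levels, window, step):
    """Bounded rank modulation: only small sliding windows of cells are
    sorted, so decoding never compares charges of far-apart cells."""
    return [permutation(levels[i:i + window])
            for i in range(0, len(levels) - window + 1, step)]

charges = [0.7, 0.2, 0.9, 0.4, 0.1, 0.8]   # analog charge levels of 6 cells
assert permutation(charges) == [2, 5, 0, 3, 1, 4]
# non-overlapping windows of 3 cells; overlapping windows would raise capacity
assert bounded_permutations(charges, window=3, step=3) == [[2, 0, 1], [2, 0, 1]]
```

Only the relative order of charges matters, which is why no discrete threshold levels are needed and overshoot during programming is harmless.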

Relevance:

10.00%

Abstract:

Seismic reflection methods have been extensively used to probe the Earth's crust and suggest the nature of its formative processes. The analysis of multi-offset seismic reflection data extends the technique from a reconnaissance method to a powerful scientific tool that can be applied to test specific hypotheses. The treatment of reflections at multiple offsets becomes tractable if the assumptions of high-frequency rays are valid for the problem being considered. Their validity can be tested by applying the methods of analysis to full wave synthetics.

Three studies illustrate the application of these principles to investigations of the nature of the crust in southern California. A survey shot by the COCORP consortium in 1977 across the San Andreas fault near Parkfield revealed events in the record sections whose arrival time decreased with offset. The reflectors generating these events are imaged using a multi-offset three-dimensional Kirchhoff migration. Migrations of full wave acoustic synthetics having the same limitations in geometric coverage as the field survey demonstrate the utility of this back projection process for imaging. The migrated depth sections show the locations of the major physical boundaries of the San Andreas fault zone. The zone is bounded on the southwest by a near-vertical fault juxtaposing a Tertiary sedimentary section against uplifted crystalline rocks of the fault zone block. On the northeast, the fault zone is bounded by a fault dipping into the San Andreas, which includes slices of serpentinized ultramafics, intersecting it at 3 km depth. These interpretations can be made despite complications introduced by lateral heterogeneities.

In 1985 the Calcrust consortium designed a survey in the eastern Mojave desert to image structures in both the shallow and the deep crust. Preliminary field experiments showed that the major geophysical acquisition problem to be solved was the poor penetration of seismic energy through a low-velocity surface layer. Its effects could be mitigated through special acquisition and processing techniques. Data obtained from industry showed that quality data could be obtained from areas having a deeper, older sedimentary cover, causing a re-definition of the geologic objectives. Long offset stationary arrays were designed to provide reversed, wider angle coverage of the deep crust over parts of the survey. The preliminary field tests and constant monitoring of data quality and parameter adjustment allowed 108 km of excellent crustal data to be obtained.

This dataset, along with two others from the central and western Mojave, was used to constrain rock properties and the physical condition of the crust. The multi-offset analysis proceeded in two steps. First, an increase in reflection peak frequency with offset is indicative of a thinly layered reflector. The thickness and velocity contrast of the layering can be calculated from the spectral dispersion, to discriminate between structures resulting from broad scale or local effects. Second, the amplitude effects at different offsets of P-P scattering from weak elastic heterogeneities indicate whether the signs of the changes in density, rigidity, and Lamé's parameter at the reflector agree or are opposed. The effects of reflection generation and propagation in a heterogeneous, anisotropic crust were contained by the design of the experiment and the simplicity of the observed amplitude and frequency trends. Multi-offset spectra and amplitude trend stacks of the three Mojave Desert datasets suggest that the most reflective structures in the middle crust are strong Poisson's ratio (σ) contrasts. Porous zones or the juxtaposition of units of mutually distant origin are indicated. Heterogeneities in σ increase towards the top of a basal crustal zone at ~22 km depth. The transitions to the basal zone and to the mantle include increases in σ. The Moho itself includes ~400 m layering having a velocity higher than that of the uppermost mantle. The Moho maintains the same configuration across the Mojave despite 5 km of crustal thinning near the Colorado River. This indicates that Miocene extension there either thinned just the basal zone, or that the basal zone developed regionally after the extensional event.

Relevance:

10.00%

Abstract:

Many engineering applications face the problem of bounding the expected value of a quantity of interest (performance, risk, cost, etc.) that depends on stochastic uncertainties whose probability distribution is not known exactly. Optimal uncertainty quantification (OUQ) is a framework that aims at obtaining the best bound in these situations by explicitly incorporating available information about the distribution. Unfortunately, this often leads to non-convex optimization problems that are numerically expensive to solve.

This thesis focuses on efficient numerical algorithms for OUQ problems. It begins by investigating several classes of OUQ problems that can be reformulated as convex optimization problems. Conditions on the objective function and information constraints under which a convex formulation exists are presented. Since the size of the optimization problem can become quite large, solutions for scaling up are also discussed. Finally, the capability of analyzing a practical system through such convex formulations is demonstrated by a numerical example of energy storage placement in power grids.

When an equivalent convex formulation is unavailable, it is possible to find a convex problem that provides a meaningful bound for the original problem, also known as a convex relaxation. As an example, the thesis investigates the setting used in Hoeffding's inequality. The naive formulation requires solving a collection of non-convex polynomial optimization problems whose number grows doubly exponentially. After structures such as symmetry are exploited, it is shown that both the number and the size of the polynomial optimization problems can be reduced significantly. Each polynomial optimization problem is then bounded by its convex relaxation using sums-of-squares. These bounds are found to be tight in all the numerical examples tested in the thesis and are significantly better than Hoeffding's bounds.
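For reference, the classical Hoeffding bound that these sums-of-squares bounds are compared against can be sketched as follows (a standard statement for independent bounded variables, not the thesis's sharpened bounds):

```python
import math

def hoeffding_tail(t, intervals):
    """Hoeffding's inequality: for independent X_i supported on [a_i, b_i],
    P(sum(X_i - E[X_i]) >= t) <= exp(-2 t^2 / sum((b_i - a_i)^2))."""
    span = sum((b - a) ** 2 for a, b in intervals)
    return math.exp(-2.0 * t ** 2 / span)

# ten independent variables on [0, 1]; deviation of 3 above the mean
bound = hoeffding_tail(3.0, [(0.0, 1.0)] * 10)
assert abs(bound - math.exp(-1.8)) < 1e-12   # exp(-2 * 9 / 10)
```

The OUQ approach described above instead optimizes over all distributions consistent with the stated information, which is why its bounds can be significantly tighter than this closed form.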

Relevance:

10.00%

Abstract:

This thesis presents a study of the dynamical, nonlinear interaction of colliding gravitational waves, as described by classical general relativity. It is focused mainly on two fundamental questions: First, what is the general structure of the singularities and Killing-Cauchy horizons produced in the collisions of exactly plane-symmetric gravitational waves? Second, under what conditions will the collisions of almost-plane gravitational waves (waves with large but finite transverse sizes) produce singularities?

In the work on the collisions of exactly-plane waves, it is shown that Killing horizons in any plane-symmetric spacetime are unstable against small plane-symmetric perturbations. It is thus concluded that the Killing-Cauchy horizons produced by the collisions of some exactly plane gravitational waves are nongeneric, and that generic initial data for the colliding plane waves always produce "pure" spacetime singularities without such horizons. This conclusion is later proved rigorously (using the full nonlinear theory rather than perturbation theory), in connection with an analysis of the asymptotic singularity structure of a general colliding plane-wave spacetime. This analysis also proves that asymptotically the singularities created by colliding plane waves are of inhomogeneous-Kasner type; the asymptotic Kasner axes and exponents of these singularities in general depend on the spatial coordinate that runs tangentially to the singularity in the non-plane-symmetric direction.

In the work on collisions of almost-plane gravitational waves, first some general properties of single almost-plane gravitational-wave spacetimes are explored. It is shown that, by contrast with an exact plane wave, an almost-plane gravitational wave cannot have a propagation direction that is Killing; i.e., it must diffract and disperse as it propagates. It is also shown that an almost-plane wave cannot be precisely sandwiched between two null wavefronts; i.e., it must leave behind tails in the spacetime region through which it passes. Next, the occurrence of spacetime singularities in the collisions of almost-plane waves is investigated. It is proved that if two colliding, almost-plane gravitational waves are initially exactly plane-symmetric across a central region of sufficiently large but finite transverse dimensions, then their collision produces a spacetime singularity with the same local structure as in the exact-plane-wave collision. Finally, it is shown that a singularity still forms when the central regions are only approximately plane-symmetric initially. Stated more precisely, it is proved that if the colliding almost-plane waves are initially sufficiently close to being exactly plane-symmetric across a bounded central region of sufficiently large transverse dimensions, then their collision necessarily produces spacetime singularities. In this case, nothing is now known about the local and global structures of the singularities.

Relevance:

10.00%

Abstract:

Sources and effects of astrophysical gravitational radiation are explained briefly to motivate discussion of the Caltech 40 meter antenna, which employs laser interferometry to monitor proper distances between inertial test masses. Practical considerations in construction of the apparatus are described. Redesign of test mass systems has resulted in a reduction of noise from internal mass vibrations by up to two orders of magnitude at some frequencies. A laser frequency stabilization system was developed which corrects the frequency of an argon ion laser to a residual fluctuation level bounded by the spectral density √s_v(f) ≤ 60µHz/√Hz, at fluctuation frequencies near 1.2 kHz. These and other improvements have contributed to reducing the spectral density of equivalent gravitational wave strain noise to √s_h(f)≈10^(-19)/√ Hz at these frequencies.

Finally, observations made with the antenna in February and March of 1987 are described. Kilohertz-band gravitational waves produced by the remnant of the recent supernova are shown to be theoretically unlikely at the strength required for confident detection in this antenna (then operating at poorer sensitivity than that quoted above). A search for periodic waves in the recorded data, comprising Fourier analysis of four 10^5-second samples of the antenna strain signal, was used to place new upper limits on periodic gravitational radiation at frequencies between 305 Hz and 5 kHz. In particular, continuous waves of any polarization are ruled out above strain amplitudes of 1.2 x 10^(-18) R.M.S. for waves emanating from the direction of the supernova, and 6.2 x 10^(-19) R.M.S. for waves emanating from the galactic center, between 1.5 and 4 kilohertz. Between 305 Hz and 5 kHz no strains greater than 1.2 x 10^(-17) R.M.S. were detected from either direction. Limitations of the analysis and potential improvements are discussed, as are prospects for future searches.

Relevance:

10.00%

Abstract:

We develop new algorithms which combine the rigorous theory of mathematical elasticity with the geometric underpinnings and computational attractiveness of modern tools in geometry processing. We develop a simple elastic energy based on the Biot strain measure, which improves on state-of-the-art methods in geometry processing. We use this energy within a constrained optimization problem to, for the first time, provide surface parameterization tools which guarantee injectivity and bounded distortion, are user-directable, and which scale to large meshes. With the help of some new generalizations in the computation of matrix functions and their derivatives, we extend our methods to a large class of hyperelastic stored energy functions quadratic in piecewise analytic strain measures, including the Hencky (logarithmic) strain, opening up a wide range of possibilities for robust and efficient nonlinear elastic simulation and geometry processing by elastic analogy.
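A sketch of the energy family described above (the coefficients μ, λ are illustrative; the thesis's exact weights are not reproduced here): with the polar decomposition of the deformation gradient, the Biot strain is the rotation-free stretch minus identity, and a quadratic energy in it reads

```latex
F = R\,U \;\;(\text{polar decomposition},\; R^\top R = I,\; U = U^\top \succ 0),
\qquad E_{\mathrm{Biot}} = U - I,
```
```latex
\Psi(F) \;=\; \mu\,\lVert U - I \rVert_F^2 \;+\; \frac{\lambda}{2}\,\operatorname{tr}^2(U - I).
```

The Hencky variant mentioned above is obtained by replacing U − I with the matrix logarithm log U, which is where the generalized matrix-function derivatives are needed.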

Relevance:

10.00%

Abstract:

The low-thrust guidance problem is defined as the minimum terminal variance (MTV) control of a space vehicle subjected to random perturbations of its trajectory. To accomplish this control task, only bounded thrust level and thrust angle deviations are allowed, and these must be calculated based solely on the information gained from noisy, partial observations of the state. In order to establish the validity of various approximations, the problem is first investigated under the idealized conditions of perfect state information and negligible dynamic errors. To check each approximate model, an algorithm is developed to facilitate the computation of the open loop trajectories for the nonlinear bang-bang system. Using the results of this phase in conjunction with the Ornstein-Uhlenbeck process as a model for the random inputs to the system, the MTV guidance problem is reformulated as a stochastic, bang-bang, optimal control problem. Since a complete analytic solution seems to be unattainable, asymptotic solutions are developed by numerical methods. However, it is shown analytically that a Kalman filter in cascade with an appropriate nonlinear MTV controller is an optimal configuration. The resulting system is simulated using the Monte Carlo technique and is compared to other guidance schemes of current interest.
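A minimal scalar Kalman filter sketch of the estimation half of that cascade (the dynamics, noise levels, and observations below are illustrative; the nonlinear MTV controller is not shown):

```python
def kalman_step(x_hat, p, z, a=1.0, q=0.04, r=1.0):
    """One predict/update cycle for the scalar model x' = a x + w,
    z = x + v, with process variance q and measurement variance r."""
    x_pred = a * x_hat                 # predict the state
    p_pred = a * a * p + q             # predict the error covariance
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)  # correct with the innovation
    p_new = (1.0 - k) * p_pred        # updated error covariance
    return x_new, p_new

x_hat, p = 0.0, 1.0
for z in [0.9, 1.1, 1.0, 0.95, 1.05]:   # noisy observations of x near 1
    x_hat, p = kalman_step(x_hat, p, z)
assert 0.8 < x_hat < 1.2   # estimate is pulled toward the observations
assert p < 1.0             # covariance contracts toward its steady state
```

In the thesis's configuration this filter supplies the state estimate from noisy, partial observations, and the MTV controller acts on that estimate; the cascade is shown to be optimal.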

Relevance:

10.00%

Abstract:

A composite stock of alkaline gabbro and syenite is intrusive into limestone of the Del Carmen, Sue Peake and Santa Elena Formations at the northwest end of the Christmas Mountains. There is abundant evidence of solution of wallrock by magma but nowhere are gabbro and limestone in direct contact. The sequence of lithologies developed across the intrusive contact and across xenoliths is gabbro, pyroxenite, calc-silicate skarn, marble. Pyroxenite is made up of euhedral crystals of titanaugite and sphene in a leucocratic matrix of nepheline, wollastonite and alkali feldspar. The uneven modal distribution of phases in pyroxenite and the occurrence of nepheline syenite dikes, intrusive into pyroxenite and skarn, suggest that pyroxenite represents an accumulation of clinopyroxene "cemented" together by late-solidifying residual magma of nepheline syenite composition. Assimilation of limestone by gabbroic magma involves reactions between calcite and magma and/or crystals in equilibrium with magma and crystallization of phases in which the magma is saturated, to supply energy for the solution reaction. Gabbroic magma was saturated with plagioclase and clinopyroxene at the time of emplacement. The textural and mineralogic features of pyroxenite can be produced by the reaction 2(1-X) CALCITE + An_X Ab_(1-X) = (1-X) NEPHELINE + 2(1-X) WOLLASTONITE + X ANORTHITE + 2(1-X) CO2. Plagioclase in pyroxenite has corroded margins and is rimmed by nepheline, suggestive of resorption by magma. Anorthite and wollastonite enter solid solution in titanaugite. For each mole of calcite dissolved, approximately one mole of clinopyroxene was crystallized. Thus the amount of limestone that may be assimilated is limited by the concentration of potential clinopyroxene in the magma. Wollastonite appears as a phase when magma has been depleted in iron and magnesium by crystallization of titanaugite.
The predominance of mafic and ultramafic compositions among contaminated rocks and their restriction to a narrow zone along the intrusive contact provides little evidence for the generation of a significant volume of desilicated magma as a result of limestone assimilation.

Within 60 m of the intrusive contact with the gabbro, nodular chert in the Santa Elena Limestone reacted with the enveloping marble to form spherical nodules of high-temperature calc-silicate minerals. The phases wollastonite, rankinite, spurrite, tilleyite and calcite form a series of sharply-bounded, concentric monomineralic and two-phase shells which record a step-wise decrease in silica content from the core of a nodule to its rim. Mineral zones in the nodules vary with distance from the gabbro as follows:

0-5 m CALCITE + SPURRITE + RANKINITE + WOLLASTONITE
5-16 m CALCITE + TILLEYITE ± SPURRITE + RANKINITE + WOLLASTONITE
16-31 m CALCITE + TILLEYITE + WOLLASTONITE
31-60 m CALCITE + WOLLASTONITE
>60 m CALCITE + QUARTZ

The mineral of a one-phase zone is compatible with the phases bounding it on either side, but these phases are incompatible in the same volume of P-T-X_CO2 space.

Growth of a monomineralic zone is initiated by reaction between minerals of adjacent one-phase zones which become unstable with rising temperature to form a thin layer of a new single phase that separates the reactants and is compatible with both of them. Because the mineral of the new zone is in equilibrium with the phases at both of its contacts, gradients in the chemical potentials of the exchangeable components are established across it. Although zone boundaries mark discontinuities in the gradients of bulk composition, two-phase equilibria at the contacts demonstrate that the chemical potentials are continuous. Hence, Ca, Si and CO2 were redistributed in the growing nodule by diffusion. A monomineralic zone grows at the expense of an adjacent zone by reaction between diffusing components and the mineral of the adjacent zone. Equilibria between two phases at zone boundaries buffer the chemical potentials of the diffusing species. Thus, within a monomineralic zone, the chemical potentials of the diffusing components are controlled external to the local assemblage by the two-phase equilibria at the zone boundaries.

Mineralogically zoned calc-silicate skarn occurs as a narrow band that separates pyroxenite and marble along the intrusive contact and forms a rim on marble xenoliths in gabbro. Skarn consists of melilite or idocrase pseudomorphs of melilite, one or two stoichiometric calc-silicate phases and accessory Ti-Zr garnet, perovskite and magnetite. The sequence of mineral zones from pyroxenite to marble, defined by a characteristic calc-silicate, is wollastonite, rankinite, spurrite, calcite. Mineral assemblages of adjacent skarn zones are compatible and the set of zones in a skarn band defines a facies type, indicating that the different mineral assemblages represent different bulk compositions recrystallized under identical conditions. The number of phases in each zone is less than the number that might be expected to result from metamorphism of a general bulk composition under conditions of equilibrium, trivariant in P, T and µCO2. The "special" bulk composition of each zone is controlled by reaction between phases of the zones bounding it on either side. The continuity of the gradients of composition of melilite and garnet solid solutions across the skarn is consistent with the local equilibrium hypothesis and verifies that diffusion was the mechanism of mass transport. The formula proportions of Ti and Zr in garnet from skarn vary antithetically with that of Si, which systematically decreases from pyroxenite to marble. The chemical potential of Si in each skarn zone was controlled by the coexisting stoichiometric calc-silicate phases in the assemblage. Thus the formula proportion of Si in garnet is a direct measure of the chemical potential of Si from point to point in skarn. Reaction between gabbroic magma saturated with plagioclase and clinopyroxene produced nepheline pyroxenite and melilite-wollastonite skarn. The calc-silicate zones result from reaction between calcite and wollastonite to form spurrite and rankinite.

Relevance:

10.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
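A sketch of the per-harmonic attenuation idea behind such "noise filters" (a Wiener-style gain of the form SNR/(1+SNR) per spectral bin; the thesis's exact weighting is not reproduced, and the record below is illustrative):

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for a short illustration)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT; the input is conjugate-symmetric, so take real parts."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def noise_filter(record, noise_power):
    """Attenuate each harmonic by snr/(1+snr): bins dominated by noise are
    suppressed, while bins with strong signal pass nearly unchanged."""
    out = []
    for Xk in dft(record):
        signal_power = max(abs(Xk) ** 2 - noise_power, 0.0)
        snr = signal_power / noise_power
        out.append(Xk * snr / (1.0 + snr))
    return idft(out)

record = [1.0, 0.6, -0.9, -1.1, 0.8, 1.2, -0.7, -1.0]  # toy noisy record
cleaned = noise_filter(record, noise_power=0.5)
assert len(cleaned) == len(record)
# every gain is below 1, so the filtered record carries less energy
assert sum(c * c for c in cleaned) < sum(r * r for r in record)
```

As the abstract notes, a gain of this kind is least effective at low frequencies, where the SNR of digitized accelerograms is poor, so drifts in the integrated histories survive it.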

Relevance:

10.00%

Abstract:

We simulate incompressible MHD turbulence using a pseudo-spectral code. Our major conclusions are as follows.

1) MHD turbulence is most conveniently described in terms of counter propagating shear Alfvén and slow waves. Shear Alfvén waves control the cascade dynamics. Slow waves play a passive role and adopt the spectrum set by the shear Alfvén waves. Cascades composed entirely of shear Alfvén waves do not generate a significant measure of slow waves.

2) MHD turbulence is anisotropic, with energy cascading more rapidly along k_⊥ than along k_∥, where k_⊥ and k_∥ refer to the wavevector components perpendicular and parallel to the local magnetic field. Anisotropy increases with increasing k_⊥ such that excited modes are confined inside a cone bounded by k_∥ ∝ k_⊥^γ with γ < 1. The opening angle of the cone, θ(k_⊥) ∝ k_⊥^-(1-γ), defines the scale-dependent anisotropy.

3) MHD turbulence is generically strong in the sense that the waves which comprise it suffer order unity distortions on timescales comparable to their periods. Nevertheless, turbulent fluctuations are small deep inside the inertial range. Their energy density is less than that of the background field by a factor θ²(k_⊥) ≪ 1.

4) MHD cascades are best understood geometrically. Wave packets suffer distortions as they move along magnetic field lines perturbed by counter propagating waves. Field lines perturbed by unidirectional waves map planes perpendicular to the local field into each other. Shear Alfvén waves are responsible for the mapping's shear and slow waves for its dilatation. The amplitude of the former exceeds that of the latter by 1/θ(k_⊥), which accounts for the dominance of the shear Alfvén waves in controlling the cascade dynamics.

5) Passive scalars mixed by MHD turbulence adopt the same power spectrum as the velocity and magnetic field perturbations.

6) Decaying MHD turbulence is unstable to an increase of the imbalance between the flux of waves propagating in opposite directions along the magnetic field. Forced MHD turbulence displays order unity fluctuations with respect to the balanced state if excited at low k by δ(t) correlated forcing. It appears to be statistically stable to the unlimited growth of imbalance.

7) Gradients of the dynamic variables are focused into sheets aligned with the magnetic field whose thickness is comparable to the dissipation scale. Sheets formed by oppositely directed waves are uncorrelated. We suspect that these are vortex sheets which the mean magnetic field prevents from rolling up.

8) Items (1)-(5) lend support to the model of strong MHD turbulence put forth by Goldreich and Sridhar (1995, 1997). Results from our simulations are also consistent with the GS prediction γ = 2/3. The sole notable discrepancy is that the 1D power-law spectra, E(k_⊥) ∝ k_⊥^{-α}, determined from our simulations exhibit α ≈ 3/2, whereas the GS model predicts α = 5/3.
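The distinction between the measured and predicted slopes in item (8) rests on fitting a power law to the 1D spectra. A minimal sketch of such a fit, using an idealized synthetic spectrum in place of actual simulation output, is:

```python
import numpy as np

# Synthetic 1D spectrum standing in for simulation output:
# E(k) ∝ k^(-5/3) over the inertial range, as the GS model predicts.
k = np.logspace(0, 3, 200)          # perpendicular wavenumbers
E = k ** (-5.0 / 3.0)               # idealized, noise-free spectrum

# Fit log E = -alpha * log k + const by least squares; the slope
# of the fit estimates -alpha.
slope, intercept = np.polyfit(np.log(k), np.log(E), 1)
alpha = -slope

print(round(alpha, 3))              # → 1.667, i.e. recovers 5/3
```

On real simulation spectra the fit would be restricted to the inertial range, away from the forcing and dissipation scales.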

Resumo:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering: the error covariance matrix of the Kalman filter satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random, drawn independently and identically distributed from a given distribution whose probability density function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the resulting probability distribution.

We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time, so the number of states grows exponentially with the size of the network. The Markov chain has a unique stationary distribution, in which all nodes are healthy with probability 1. Since the probability distribution of a finite-state Markov chain converges to its stationary distribution, the Markov chain model implies that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal probabilities of infection of each node in the network. Convergence to the origin under the epidemic map implies extinction of the epidemic. The nonlinear model is upper bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable. When the linear upper bound is unstable, the nonlinear model has a second fixed point; we analyze the stability of this second fixed point for both discrete-time and continuous-time models.
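The random Riccati recursion studied here can be sketched numerically. In the toy example below, the system matrices and the i.i.d. Gaussian regressor distribution are illustrative assumptions, not the setup used in the dissertation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative system matrices (assumptions, not the thesis's setup).
A = np.array([[1.0, 0.1], [0.0, 0.9]])   # state transition
Q = 0.1 * np.eye(2)                       # process noise covariance
R = 1.0                                   # scalar measurement noise variance

P = 10.0 * np.eye(2)                      # initial error covariance
for _ in range(500):
    # Random regressor (observation row vector), drawn i.i.d. Gaussian:
    # a continuous density supported on the whole space with light tails.
    h = rng.standard_normal((1, 2))
    # Riccati update for the Kalman filter error covariance.
    S = h @ P @ h.T + R                   # innovation variance
    K = A @ P @ h.T / S                   # gain (premultiplied by A)
    P = A @ P @ A.T + Q - K @ (h @ P @ A.T)

# P is random, but the update keeps it symmetric positive definite;
# the convergence results concern the distribution of such iterates.
print(np.all(np.linalg.eigvalsh(P) > 0))  # True
```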
Returning to the Markov chain model, we argue that the stability of the linear upper bound on the nonlinear model is closely related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
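The linear upper bound and its spectral stability condition can be illustrated with a standard discrete-time SIS mean-field map; the contact network and the infection and recovery rates below are hypothetical:

```python
import numpy as np

# Hypothetical undirected contact network (adjacency matrix).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
beta, delta = 0.05, 0.4          # infection / recovery rates (assumed)

def sis_map(p):
    """One step of a discrete-time SIS mean-field map; p[i] is the
    marginal probability that node i is infected."""
    no_infection = np.prod(1.0 - beta * A * p, axis=1)  # P(no neighbor infects i)
    return (1.0 - delta) * p + (1.0 - p) * (1.0 - no_infection)

# Linear upper bound at the origin: p(t+1) <= M p(t) elementwise,
# with M = (1-delta) I + beta A.  If the spectral radius of M is
# below 1, the origin is globally stable and the epidemic dies out.
M = (1.0 - delta) * np.eye(4) + beta * A
rho = max(abs(np.linalg.eigvals(M)))
print(rho < 1.0)                 # True for these rates

# Iterating the nonlinear map from a fully infected state:
p = np.ones(4)
for _ in range(200):
    p = sis_map(p)
print(p.max() < 1e-6)            # True: the epidemic has died out
```

The elementwise bound follows because 1 - prod(1 - x_j) <= sum(x_j), so the nonlinear map is dominated by the linear map M near (and in fact away from) the origin.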

Resumo:

Complexity in the earthquake rupture process can result from many factors. This study investigates the origin of such complexity by examining several recent, large earthquakes in detail. In each case the local tectonic environment plays an important role in understanding the source of the complexity.

Several large shallow earthquakes (Ms > 7.0) along the Middle American Trench have similarities and differences between them that may lead to a better understanding of fracture and subduction processes. They are predominantly thrust events consistent with the known subduction of the Cocos plate beneath N. America. Two events occurring along this subduction zone close to triple junctions show considerable complexity. This may be attributable to a more heterogeneous stress environment in these regions and as such has implications for other subduction zone boundaries.

An event which looks complex but is actually rather simple is the 1978 Bermuda earthquake (Ms ~ 6). It is located predominantly in the mantle. Its mechanism is one of pure thrust faulting with a strike of N 20°W and a dip of 42°NE. Its apparent complexity is caused by local crustal structure. This is an important event in terms of understanding and estimating seismic hazard on the eastern seaboard of N. America.

A study of several large strike-slip continental earthquakes identifies characteristics common to them that may be useful in determining what to expect from the next great earthquake on the San Andreas fault. The events are the 1976 Guatemala earthquake on the Motagua fault and two events on the Anatolian fault in Turkey (the 1967 Mudurnu Valley and 1976 E. Turkey events). An attempt to model the complex P-waveforms of these events yields good synthetic fits for the Guatemala and Mudurnu Valley events. However, the E. Turkey event proves too complex, as it may have associated thrust or normal faulting. Several individual sources occurring at intervals of between 5 and 20 seconds characterize the Guatemala and Mudurnu Valley events. The maximum size of an individual source appears to be bounded at about 5 × 10^26 dyne-cm. A detailed source study including directivity is performed on the Guatemala event. The source time history of the Mudurnu Valley event illustrates its significance in modeling strong ground motion in the near field: the complex source time series of the 1967 event produces amplitudes greater by a factor of 2.5 than a uniform model scaled to the same size, for a station 20 km from the fault.

Three large and important earthquakes demonstrate an important type of complexity --- multiple-fault complexity. The first, the 1976 Philippine earthquake, an oblique thrust event, represents the first seismological evidence for a northeast dipping subduction zone beneath the island of Mindanao. A large event, following the mainshock by 12 hours, occurred outside the aftershock area and apparently resulted from motion on a subsidiary fault since the event had a strike-slip mechanism.

An aftershock of the great 1960 Chilean earthquake, on June 6, 1960, proved to be an interesting discovery. It appears to be a large strike-slip event at the main rupture's southern boundary. It most likely occurred on the landward extension of the Chile Rise transform fault, in the subducting plate. The results for this event suggest that a small event triggered a series of slow events, the whole sequence lasting longer than 1 hour. This is indeed a "slow earthquake".

Perhaps one of the most complex of events is the recent Tangshan, China, event. It began as a large strike-slip event. Within several seconds of the mainshock it may have triggered thrust faulting to the south of the epicenter. There is no doubt, however, that it triggered a large oblique normal event to the northeast 15 hours after the mainshock. This event certainly contributed to the great loss of life sustained as a result of the Tangshan earthquake sequence.

What has been learned from these studies has been applied to predict what one might expect from the next great earthquake on the San Andreas. The expectation from this study is that such an event would be a large complex event, not unlike, but perhaps larger than, the Guatemala or Mudurnu Valley events. That is to say, it will most likely consist of a series of individual events in sequence. It is also quite possible that the event could trigger associated faulting on neighboring fault systems such as those occurring in the Transverse Ranges. This has important bearing on the earthquake hazard estimation for the region.

Resumo:

Part 1 of this thesis is about the 24 November, 1987, Superstition Hills earthquakes. The Superstition Hills earthquakes occurred in the western Imperial Valley in southern California. The earthquakes took place on a conjugate fault system consisting of the northwest-striking right-lateral Superstition Hills fault and a previously unknown Elmore Ranch fault, a northeast-striking left-lateral structure defined by surface rupture and a lineation of hypocenters. The earthquake sequence consisted of foreshocks, the M_s 6.2 first main shock, and aftershocks on the Elmore Ranch fault followed by the M_s 6.6 second main shock and aftershocks on the Superstition Hills fault. There was dramatic surface rupture along the Superstition Hills fault in three segments: the northern segment, the southern segment, and the Wienert fault.

In Chapter 2, M_L ≥ 4.0 earthquakes from 1945 to 1971 that have Caltech catalog locations near the 1987 sequence are relocated. It is found that none of the relocated earthquakes occur on the southern segment of the Superstition Hills fault, while many occur at the intersection of the Superstition Hills and Elmore Ranch faults. Some other northeast-striking faults may also have been active during that time.

Chapter 3 discusses the Superstition Hills earthquake sequence using data from the Caltech-U.S.G.S. southern California seismic array. The earthquakes are relocated and their distribution correlated to the type and arrangement of the basement rocks. The larger earthquakes occur only where continental crystalline basement rocks are present. The northern segment of the Superstition Hills fault has more aftershocks than the southern segment.

Chapter 4 presents an inversion of long-period teleseismic data from the second mainshock of the 1987 sequence, on the Superstition Hills fault. Most of the long-period seismic energy seen teleseismically is radiated from the southern segment of the Superstition Hills fault. The fault dip is near vertical along the northern segment and steeply southwest-dipping along the southern segment.

Chapter 5 is a field study of slip and afterslip measurements made along the Superstition Hills fault following the second mainshock. Slip and afterslip measurements were started only two hours after the earthquake. In some locations, afterslip more than doubled the coseismic slip. The northern and southern segments of the Superstition Hills fault differ in the proportion of coseismic and postseismic slip to the total slip.

The northern segment of the Superstition Hills fault had more aftershocks, more historic earthquakes, released less teleseismic energy, and had a smaller proportion of afterslip to total slip than the southern segment. The boundary between the two segments lies at a step in the basement that separates a deeper metasedimentary basement to the south from a shallower crystalline basement to the north.

Part 2 of the thesis deals with the three-dimensional velocity structure of southern California. In Chapter 7, an a priori three-dimensional crustal velocity model is constructed by partitioning southern California into geologic provinces, with each province having a consistent one-dimensional velocity structure. The one-dimensional velocity structures of the provinces were then assembled into a three-dimensional model, which was calibrated by forward modeling of explosion travel times.
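The province-based construction can be sketched as a simple lookup that assigns each grid column the one-dimensional structure of its province. The province map and velocity profiles below are invented placeholders, not the thesis's calibrated model:

```python
import numpy as np

# Hypothetical 1-D velocity structures, one per geologic province:
# (depth to top of layer in km, Vp in km/s).  Values are placeholders.
province_models = {
    "basin":     [(0.0, 2.5), (4.0, 5.5), (16.0, 6.3), (30.0, 7.8)],
    "batholith": [(0.0, 5.5), (5.0, 6.2), (28.0, 7.8)],
}

def velocity(province, depth_km):
    """Look up Vp at a given depth from that province's 1-D structure."""
    v = province_models[province][0][1]
    for top, vp in province_models[province]:
        if depth_km >= top:
            v = vp
    return v

# Assembling a 3-D model: each grid column takes the 1-D structure of
# the province it falls in (the province map here is a placeholder).
province_map = [["basin", "basin"], ["batholith", "basin"]]
depths = [1.0, 10.0, 20.0]
model = np.array([[[velocity(province_map[i][j], d) for d in depths]
                   for j in range(2)] for i in range(2)])
print(model[0, 0])   # basin column: Vp = 2.5, 5.5, 6.3 km/s at 1, 10, 20 km
```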

In Chapter 8, the three-dimensional velocity model is used to locate earthquakes. For about 1000 earthquakes relocated in the Los Angeles basin, the travel time residuals from the three-dimensional model have a variance 47 per cent lower than those of the catalog locations found using a standard one-dimensional velocity model. Other than the 1987 Whittier earthquake sequence, little correspondence is seen between these earthquake locations and elements of a recent structural cross section of the Los Angeles basin. The Whittier sequence involved rupture of a north-dipping thrust fault bounded on at least one side by a strike-slip fault. The 1988 Pasadena earthquake was a deep left-lateral event on the Raymond fault. The 1989 Montebello earthquake was a thrust event on a structure similar to that on which the Whittier earthquake occurred. The 1989 Malibu earthquake was a thrust or oblique-slip event adjacent to the 1979 Malibu earthquake.

At least two of the largest recent thrust earthquakes (San Fernando and Whittier) in the Los Angeles basin have had the extent of their thrust plane ruptures limited by strike-slip faults. This suggests that the buried thrust faults underlying the Los Angeles basin are segmented by strike-slip faults.

Earthquake and explosion travel times are inverted for the three-dimensional velocity structure of southern California in Chapter 9. The inversion reduced the variance of the travel time residuals by 47 per cent compared to the starting model, a reparameterized version of the forward model of Chapter 7. The Los Angeles basin is well resolved, with seismically slow sediments atop a crust of granitic velocities. Moho depth is between 26 and 32 km.
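The variance reduction quoted above is a simple statistic of the travel time residuals. With hypothetical residual vectors standing in for the real data, it would be computed as:

```python
import numpy as np

# Hypothetical travel-time residuals (seconds) before and after
# the 3-D inversion; values are illustrative only.
res_start = np.array([0.8, -0.5, 0.3, -0.9, 0.6, -0.4])
res_final = np.array([0.5, -0.3, 0.2, -0.6, 0.4, -0.2])

# Per cent reduction in residual variance relative to the starting model.
reduction = 100.0 * (1.0 - np.var(res_final) / np.var(res_start))
print(round(reduction, 1))
```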