15 results for Upper bound method

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Sufficient conditions are derived for the validity of approximate periodic solutions of a class of second-order nonlinear ordinary differential equations. An approximate solution is defined to be valid if an exact solution exists in a neighborhood of the approximation.

Two classes of validity criteria are developed. Existence is obtained using the contraction mapping principle in one case, and the Schauder-Leray fixed point theorem in the other. Both classes of validity criteria make use of symmetry properties of periodic functions, and both yield an upper bound on a norm of the difference between the approximate and exact solutions. This bound is used in a procedure which establishes sufficient stability conditions for the approximate solution.
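The flavor of such an error bound can be illustrated with the standard contraction-mapping estimate; the following is a generic sketch rather than the thesis's precise criterion, assuming the periodic-solution operator T is a contraction with constant q < 1 on a neighborhood of the approximation x̃:

    \[
    \|x^{*} - \tilde{x}\| \;\le\; \frac{\|T\tilde{x} - \tilde{x}\|}{1 - q},
    \qquad x^{*} = T x^{*}, \quad 0 \le q < 1,
    \]

so a small residual of the approximate solution under T, together with the contraction constant, gives an explicit upper bound on the distance to the exact solution.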

Application to a system with piecewise linear restoring force (bilinear system) reveals that the approximate solution obtained by the method of averaging is valid away from regions where the response exhibits vertical tangents. A narrow instability region is obtained near one-half the natural frequency of the equivalent linear system. Sufficient conditions for the validity of resonant solutions are also derived, and two-term harmonic balance approximate solutions which exhibit ultraharmonic and subharmonic resonances are studied.
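For orientation, a generic bilinear (piecewise linear) restoring force of the kind referred to above can be written as follows; the stiffnesses k_1, k_2, break point d, and harmonic forcing are illustrative assumptions, not the thesis's exact parameterization:

    \[
    \ddot{x} + g(x) = F\cos\omega t,
    \qquad
    g(x) =
    \begin{cases}
      k_1 x, & |x| \le d,\\[2pt]
      k_1 d\,\operatorname{sgn}(x) + k_2\bigl(x - d\,\operatorname{sgn}(x)\bigr), & |x| > d.
    \end{cases}
    \]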

Relevance: 90.00%

Abstract:

Flash memory is a leading storage medium with excellent features such as random access and high storage density. However, it also faces significant reliability and endurance challenges. In flash memory, the charge level in the cells can be easily increased, but removing charge requires an expensive erasure operation. In this thesis we study rewriting schemes that enable the data stored in a set of cells to be rewritten by only increasing the charge level in the cells. We consider two types of modulation schemes: a conventional modulation based on the absolute levels of the cells, and a recently proposed scheme based on the relative cell levels, called rank modulation. The contributions of this thesis to the study of rewriting schemes for rank modulation include the following: we

•propose a new method of rewriting in rank modulation, beyond the previously proposed method of “push-to-the-top” (a minimal illustration of this operation follows the list below);

•study the limits of rewriting with the newly proposed method, and derive a tight upper bound of 1 bit per cell;

•extend the rank-modulation scheme to support rankings with repetitions, in order to improve the storage density;

•derive a tight upper bound of 2 bits per cell for rewriting in rank modulation with repetitions;

•construct an efficient rewriting scheme that asymptotically approaches the upper bound of 2 bits per cell.
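As a concrete illustration of the push-to-the-top operation mentioned in the first bullet above, here is a minimal Python sketch; representing a ranking as a list of cell indices ordered from highest to lowest charge is an assumption made for illustration, not the thesis's own formalism:

    def push_to_top(ranking, cell):
        """Move `cell` to the top of the ranking (highest relative charge).

        `ranking` is a list of cell indices ordered from highest to lowest
        charge level; physically, the rewrite raises the charge of `cell`
        above all other cells, which never requires an erasure.
        """
        return [cell] + [c for c in ranking if c != cell]

    # Example: cells ranked (2, 0, 1); pushing cell 1 to the top gives (1, 2, 0).
    print(push_to_top([2, 0, 1], 1))  # [1, 2, 0]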

The next part of this thesis studies rewriting schemes for conventional absolute-level modulation. The considered model is called “write-once memory” (WOM). We focus on WOM schemes that achieve the capacity of the model. In recent years several capacity-achieving WOM schemes were proposed, based on polar codes and randomness extractors. The contributions of this thesis to the study of WOM schemes include the following: we

•propose a new capacity-achieving WOM scheme based on sparse-graph codes, and show its attractive properties for practical implementation;

•improve the design of polar WOM schemes to remove the reliance on shared randomness and include an error-correction capability.
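To make the WOM model above concrete, here is a minimal Python sketch of the classical Rivest-Shamir code, which writes 2 bits twice into 3 binary cells whose levels may only increase; it illustrates the write-once constraint and is not one of the capacity-achieving constructions studied in the thesis:

    # First-write codewords for the four 2-bit messages (Rivest-Shamir WOM code).
    FIRST = {(0, 0): (0, 0, 0), (0, 1): (1, 0, 0),
             (1, 0): (0, 1, 0), (1, 1): (0, 0, 1)}

    def write(state, message):
        """Write a 2-bit message; each cell may only go from 0 to 1."""
        target = FIRST[message]
        if all(s <= t for s, t in zip(state, target)):
            return target                                  # first write (or unchanged)
        target = tuple(1 - b for b in FIRST[message])      # second write: complement
        assert all(s <= t for s, t in zip(state, target)), "cells cannot decrease"
        return target

    def read(state):
        """Decode: weight <= 1 is a first-generation codeword, else use the complement."""
        if sum(state) <= 1:
            return next(m for m, c in FIRST.items() if c == state)
        return next(m for m, c in FIRST.items() if tuple(1 - b for b in c) == state)

    s = write((0, 0, 0), (1, 0))    # -> (0, 1, 0)
    s = write(s, (0, 1))            # -> (0, 1, 1): cell levels only increased
    assert read(s) == (0, 1)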

The last part of the thesis studies the local rank-modulation (LRM) scheme, in which a sliding window going over a sequence of real-valued variables induces a sequence of permutations. The LRM scheme is used to simulate a single conventional multi-level flash cell. The simulated cell is realized by a Gray code traversing all the relative-value states where, physically, the transition between two adjacent states in the Gray code is achieved by using a single “push-to-the-top” operation. The main results of the last part of the thesis are two constructions of Gray codes with asymptotically-optimal rate.

Relevance: 80.00%

Abstract:

In this thesis, I will discuss how information-theoretic arguments can be used to produce sharp bounds in the study of quantum many-body systems. The main advantage of this approach, as opposed to the conventional field-theoretic argument, is that it depends very little on the precise form of the Hamiltonian. The main idea behind this thesis rests on a number of results concerning the structure of quantum states that are conditionally independent. Depending on the application, some of these statements are generalized to quantum states that are approximately conditionally independent. These structures can be readily used in the study of gapped quantum many-body systems, especially those in two spatial dimensions. A number of rigorous results are derived, including (i) a universal upper bound for the maximal number of topologically protected states that is expressed in terms of the topological entanglement entropy, (ii) a first-order perturbation bound for the topological entanglement entropy that decays superpolynomially with the size of the subsystem, and (iii) a correlation bound between an arbitrary local operator and a topological operator constructed from a set of local reduced density matrices. I also introduce exactly solvable models supported on a three-dimensional lattice that can be used as a reliable quantum memory.
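The notion of conditional independence invoked above can be phrased through the quantum conditional mutual information; the following is the standard statement (strong subadditivity and its equality condition), included for orientation rather than as a result of the thesis:

    \[
    I(A : C \mid B)_{\rho} \;=\; S(AB) + S(BC) - S(B) - S(ABC) \;\ge\; 0,
    \]

with equality exactly when \rho_{ABC} is conditionally independent, i.e., forms a quantum Markov chain A-B-C.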

Relevance: 80.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
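A minimal numerical sketch of the symmetric-allocation trade-off described above, under simplifying assumptions not taken from the thesis (a unit-size data object, an integer budget T spread evenly over m of the nodes, and a collector that accesses each node independently with probability p, so recovery succeeds when the accessed nodes jointly hold at least one unit of data):

    from math import comb

    def symmetric_recovery_prob(T, m, p):
        """P(recovery) when budget T is spread evenly over m nodes (each stores T/m)
        and each node is accessed independently with probability p; recovery needs
        at least ceil(m/T) accessed nonempty nodes."""
        k_min = -(-m // T)  # ceil(m / T), assuming integer T >= 1
        return sum(comb(m, k) * p ** k * (1 - p) ** (m - k) for k in range(k_min, m + 1))

    # Large budget favors maximal spreading; small budget favors minimal spreading.
    for T in (4, 1):
        print(T, {m: round(symmetric_recovery_prob(T, m, 0.3), 3) for m in (1, 2, 5, 10)})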

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models with a limited number of erasures per coding window or per sliding window, and for models with erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.

Relevance: 80.00%

Abstract:

We have used the technique of non-redundant masking at the Palomar 200-inch telescope and radio VLBI imaging software to make optical aperture synthesis maps of two binary stars, β Coronae Borealis and σ Herculis. The dynamic range of the map of β CrB, a binary star with a separation of 230 milliarcseconds, is 50:1. For σ Her, we find a separation of 70 milliarcseconds, and the dynamic range of our image is 30:1. These results demonstrate the potential of the non-redundant masking technique for diffraction-limited imaging of astronomical objects with high dynamic range.

We find that the optimal integration time for measuring the closure phase is longer than that for measuring the fringe amplitude. Unlike in radio interferometry, there is no close relationship between amplitude errors and phase errors. Amplitude self-calibration is less effective at optical wavelengths than at radio wavelengths, and the primary beam sensitivity correction made in radio aperture synthesis is not necessary in optical aperture synthesis.

The effects of atmospheric disturbances on optical aperture synthesis have been studied by Monte Carlo simulations based on the Kolmogorov theory of refractive-index fluctuations. For non-redundant masking with τ_c-sized apertures, the simulated fringe amplitude gives an upper bound on the observed fringe amplitude. A smooth transition is seen from the non-redundant masking regime to the speckle regime with increasing aperture size. The fractional reduction of the fringe amplitude with bandwidth is nearly independent of the aperture size. The limiting magnitude of optical aperture synthesis with τ_c-sized apertures and that with apertures larger than τ_c are derived.
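A minimal sketch of the kind of Monte Carlo ingredient described above: a Kolmogorov phase screen generated by filtering white Gaussian noise with the textbook Kolmogorov phase power spectrum Φ(f) = 0.023 r0^(-5/3) f^(-11/3). The parameters and the overall normalization convention are illustrative assumptions, not values taken from the thesis:

    import numpy as np

    def kolmogorov_phase_screen(n, pixel_scale, r0, seed=0):
        """n x n phase screen (radians); pixel_scale and Fried parameter r0 in meters.
        Normalization conventions (real-part factor, frequency units) vary by
        factors of order unity between references; this is only an illustration."""
        rng = np.random.default_rng(seed)
        f = np.fft.fftfreq(n, d=pixel_scale)                 # spatial frequency (1/m)
        fx, fy = np.meshgrid(f, f)
        fr = np.hypot(fx, fy)
        fr[0, 0] = np.inf                                    # suppress the DC term
        psd = 0.023 * r0 ** (-5.0 / 3.0) * fr ** (-11.0 / 3.0)
        noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        screen = np.fft.ifft2(noise * np.sqrt(psd)) * n / pixel_scale
        return screen.real

    phi = kolmogorov_phase_screen(n=256, pixel_scale=0.01, r0=0.1)
    print(phi.shape, phi.std())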

Monte Carlo simulations are also made to study the sensitivity and resolution of the bispectral analysis of speckle interferometry. We present the bispectral modulation transfer function and its signal-to-noise ratio at high light levels. The results confirm the validity of the heuristic interferometric view of the image-forming process in the mid-spatial-frequency range. The signal-to-noise ratio of the bispectrum at arbitrary light levels is derived in the mid-spatial-frequency range.

The non-redundant masking technique is suitable for imaging bright objects with high resolution and high dynamic range, while the faintest limit will be better pursued by speckle imaging.

Relevance: 80.00%

Abstract:

This dissertation studies the long-term behavior of random Riccati recursions and of a mathematical epidemic model. Riccati recursions arise in Kalman filtering, where the error covariance matrix satisfies a Riccati recursion. Convergence conditions for time-invariant Riccati recursions are well studied. We focus on the time-varying case and assume that the regressor matrix is random and independent and identically distributed according to a given distribution whose probability distribution function is continuous, supported on the whole space, and decaying faster than any polynomial. We study the geometric convergence of the probability distribution. We also study the global dynamics of epidemic spread over complex networks for various models. For instance, in the discrete-time Markov chain model, each node is either healthy or infected at any given time. In this setting, the number of states increases exponentially with the size of the network. The Markov chain has a unique stationary distribution in which all nodes are healthy with probability 1. Since the probability distribution of a Markov chain on a finite state space converges to the stationary distribution, this model predicts that the epidemic dies out after a long enough time. To analyze the Markov chain model, we study a nonlinear epidemic model whose state at any given time is the vector of marginal infection probabilities of the nodes in the network. Convergence to the origin in the epidemic map implies extinction of the epidemic. The nonlinear model is upper-bounded by its linearization at the origin. As a result, the origin is the globally stable unique fixed point of the nonlinear model if the linear upper bound is stable; when the linear upper bound is unstable, the nonlinear model has a second fixed point. We analyze the stability of this second fixed point for both discrete-time and continuous-time models. Returning to the Markov chain model, we argue that the stability of the linear upper bound for the nonlinear model is strongly related to the extinction time of the Markov chain. We show that a stable linear upper bound is a sufficient condition for fast extinction, and that the probability of survival is bounded by the nonlinear epidemic map.
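A minimal sketch of the linear upper bound described above, for a generic discrete-time SIS-type map on the vector of marginal infection probabilities; the adjacency matrix, infection rate beta, and recovery rate delta below are illustrative placeholders, not the thesis's model:

    import numpy as np

    def sis_map(p, A, beta, delta):
        """One step of a generic SIS map: node i is healthy next step if it
        recovers/stays healthy (prob 1 - (1-delta)*p_i) and is not infected
        by any neighbor (prob prod_j (1 - beta*A_ij*p_j))."""
        avoid = np.prod(1.0 - beta * A * p[None, :], axis=1)
        return 1.0 - (1.0 - (1.0 - delta) * p) * avoid

    def linear_bound_stable(A, beta, delta):
        """The map above satisfies p(t+1) <= M p(t) entrywise, with
        M = (1-delta) I + beta A; spectral radius of M below 1 forces extinction."""
        M = (1.0 - delta) * np.eye(A.shape[0]) + beta * A
        return np.max(np.abs(np.linalg.eigvals(M))) < 1.0

    A = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
    beta, delta = 0.1, 0.4
    print("linear bound stable:", linear_bound_stable(A, beta, delta))
    p = np.full(4, 0.5)
    for _ in range(50):
        p = sis_map(p, A, beta, delta)
    print("marginals after 50 steps:", p)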

Relevance: 80.00%

Abstract:

This thesis consists of three essays in the areas of political economy and game theory, unified by their focus on the effects of pre-play communication on equilibrium outcomes.

Communication is fundamental to elections. Chapter 2 extends canonical voter turnout models, where citizens, divided into two competing parties, choose between costly voting and abstaining, to include any form of communication, and characterizes the resulting set of Aumann's correlated equilibria. In contrast to previous research, high-turnout equilibria exist in large electorates and uncertain environments. This difference arises because communication can coordinate behavior in such a way that citizens find it incentive compatible to follow their correlated signals to vote more. The equilibria have expected turnout of at least twice the size of the minority for a wide range of positive voting costs.

In Chapter 3 I introduce a new equilibrium concept, called subcorrelated equilibrium, which fills the gap between Nash and correlated equilibrium, extending the latter to multiple mediators. Subcommunication equilibrium similarly extends communication equilibrium for incomplete information games. I explore the properties of these solutions and establish an equivalence between a subset of subcommunication equilibria and Myerson's quasi-principals' equilibria. I characterize an upper bound on expected turnout supported by subcorrelated equilibrium in the turnout game.

Chapter 4, co-authored with Thomas Palfrey, reports a new study of the effect of communication on voter turnout using a laboratory experiment. Before voting occurs, subjects may engage in various kinds of pre-play communication through computers. We study three communication treatments: No Communication, a control; Public Communication, where voters exchange public messages with all other voters; and Party Communication, where messages are exchanged only within one's own party. Our results point to a strong interaction effect between the form of communication and the voting cost. With a low voting cost, party communication increases turnout, while public communication decreases turnout. The data are consistent with correlated equilibrium play. With a high voting cost, public communication increases turnout. With communication, we find essentially no support for the standard Nash equilibrium turnout predictions.

Relevance: 80.00%

Abstract:

We develop a logarithmic potential theory on Riemann surfaces which generalizes logarithmic potential theory on the complex plane. We show the existence of an equilibrium measure and examine its structure. This leads to a formula for the structure of the equilibrium measure which is new even in the plane. We then use our results to study quadrature domains, Laplacian growth, and Coulomb gas ensembles on Riemann surfaces. We prove that the complement of the support of the equilibrium measure satisfies a quadrature identity. Furthermore, our setup allows us to naturally realize weak solutions of Laplacian growth (for a general time-dependent source) as an evolution of the support of equilibrium measures. When applied to the Riemann sphere this approach unifies the known methods for generating interior and exterior Laplacian growth. We later narrow our focus to a special class of quadrature domains which we call Algebraic Quadrature Domains. We show that many of the properties of quadrature domains generalize to this setting. In particular, the boundary of an Algebraic Quadrature Domain is the inverse image of a planar algebraic curve under a meromorphic function. This makes the study of the topology of Algebraic Quadrature Domains an interesting problem. We briefly investigate this problem and then narrow our focus to the study of the topology of classical quadrature domains. We extend the results of Lee and Makarov and prove (for n ≥ 3) c ≤ 5n-5, where c and n denote the connectivity and degree of a (classical) quadrature domain. At the same time we obtain a new upper bound on the number of isolated points of the algebraic curve corresponding to the boundary and thus a new upper bound on the number of special points. In the final chapter we study Coulomb gas ensembles on Riemann surfaces.
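For reference, the quadrature identity mentioned above takes the following classical form in the planar case; this is the standard definition, included for orientation:

    \[
    \int_{\Omega} h \, dA \;=\; \sum_{k=1}^{m} c_k\, h(z_k)
    \qquad \text{for every integrable harmonic } h \text{ on } \Omega,
    \]

for fixed points z_k in Ω and coefficients c_k (more generally, the right-hand side may also involve derivatives of h at the z_k).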

Relevance: 80.00%

Abstract:

This thesis studies Frobenius traces in Galois representations from two different directions. In the first problem we explore how often they vanish in Artin-type representations. We give an upper bound for the density of the set of vanishing Frobenius traces in terms of the multiplicities of the irreducible components of the adjoint representation. Towards that, we construct an infinite family of representations of finite groups with an irreducible adjoint action.

In the second problem we partially extend to Hilbert modular forms a result of Coleman and Edixhoven that the Hecke eigenvalues a_p of classical elliptic modular newforms f of weight 2 are never extremal, i.e., a_p is strictly less than 2√p. The generalization currently applies only to prime ideals p of degree one, though we expect it to hold for p of any odd degree. However, an even-degree prime can be extremal for f. We prove our result in each of the following instances: when one can move to a Shimura curve defined by a quaternion algebra, when f is a CM form, when the crystalline Frobenius is semi-simple, and when the strong Tate conjecture holds for a product of two Hilbert modular surfaces (or quaternionic Shimura surfaces) over a finite field.
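The extremality in question is measured against the Eichler-Shimura/Weil bound for weight-2 newforms, which in the classical elliptic case reads (a standard statement, included for orientation):

    \[
    |a_p| \;\le\; 2\sqrt{p} \qquad \text{for every prime } p \text{ not dividing the level},
    \]

with "extremal" meaning that equality a_p = 2√p holds; the Coleman-Edixhoven result is that this equality never occurs in weight 2.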

Relevance: 80.00%

Abstract:

A Riesz space with a Hausdorff, locally convex topology determined by Riesz seminorms is called a locally convex Riesz space. A sequence {x_n} in a locally convex Riesz space L is said to converge locally to x ∈ L if for some topologically bounded set B and every real r > 0 there exists N(r) such that n ≥ N(r) implies x − x_n ∈ rB. Local Cauchy sequences are defined analogously, and L is said to be locally complete if every local Cauchy sequence converges locally. Then L is locally complete if and only if every monotone local Cauchy sequence has a least upper bound. This is a somewhat more general form of the completeness criterion for Riesz-normed Riesz spaces given by Luxemburg and Zaanen. Locally complete, bound, locally convex Riesz spaces are barrelled. If the space is metrizable, local completeness and topological completeness are equivalent.

Two measures of the non-Archimedean character of a non-Archimedean Riesz space L are the smallest ideal A_0(L) such that the quotient space is Archimedean, and the ideal I(L) = {x ∈ L : for some 0 ≤ v ∈ L, n|x| ≤ v for n = 1, 2, …}. In general A_0(L) ⊇ I(L). If L is itself a quotient space, a necessary and sufficient condition that A_0(L) = I(L) is given. There is an example where A_0(L) ≠ I(L).

A necessary and sufficient condition that a Riesz space L have every quotient space Archimedean is that for every 0 ≤ u, v ∈ L there exist u_1 = sup(inf(nv, u) : n = 1, 2, …), v_1 = sup(inf(nu, v) : n = 1, 2, …), and real numbers m_1 and m_2 such that m_1 u_1 ≥ v_1 and m_2 v_1 ≥ u_1. If, in addition, L is Dedekind σ-complete, then L may be represented as the space of all functions which vanish off finite subsets of some non-empty set.

Relevance: 80.00%

Abstract:

An explicit formula is obtained for the coefficients of the cyclotomic polynomial F_n(x), where n is the product of two distinct odd primes. A recursion formula, a lower bound, and an improvement of Bang's upper bound for the coefficients of F_n(x) are also obtained, where n is the product of three distinct primes. The cyclotomic coefficients are also studied when n is the product of four distinct odd primes. A recursion formula and upper bounds for its coefficients are obtained. The last chapter takes a different approach to the cyclotomic coefficients. A connection is obtained between a certain partition function and the cyclotomic coefficients when n is the product of an arbitrary number of distinct odd primes. Finally, an upper bound for the coefficients is derived when n is the product of an arbitrary number of distinct odd primes.
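The coefficient behavior described above is easy to inspect numerically; a small illustrative sketch using SymPy (the bounds themselves are, of course, the analytic content of the thesis):

    from sympy import cyclotomic_poly, Poly
    from sympy.abc import x

    def cyclotomic_coeffs(n):
        """Coefficient list of the n-th cyclotomic polynomial."""
        return Poly(cyclotomic_poly(n, x), x).all_coeffs()

    # Two distinct odd primes: coefficients stay in {-1, 0, 1}.
    print(set(cyclotomic_coeffs(3 * 5)))       # {-1, 0, 1}
    # Three distinct odd primes: larger coefficients appear, e.g. -2 in Phi_105.
    print(min(cyclotomic_coeffs(3 * 5 * 7)))   # -2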

Relevance: 30.00%

Abstract:

Epidural electrostimulation holds great potential for improving therapy for patients with spinal cord injury (SCI) (Harkema et al., 2011). Further promising results from combined therapies using electrostimulation have also been obtained recently (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, the challenge of choosing good stimuli is quite substantial, even for expert human experimenters. To develop a fully implantable, autonomous device which can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method can be used in a clinical setting, by caregivers who are not experts in the neurostimulator's use, and allows the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB, Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB, Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle. These experiments show that the algorithm is capable of directing the stimulation experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.
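A minimal sketch of the GP-UCB-style selection rule that underlies the GP-BUCB/GP-AUCB family discussed above, on a toy one-dimensional stimulus space; the kernel, the value of beta, and the surrogate objective are illustrative assumptions, not the settings used in the rat experiments:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def gp_ucb_step(X_obs, y_obs, candidates, beta=4.0):
        """Fit a GP to past stimulus/response pairs and pick the candidate
        maximizing the upper confidence bound mu(x) + sqrt(beta) * sigma(x)."""
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-2)
        gp.fit(X_obs, y_obs)
        mu, sigma = gp.predict(candidates, return_std=True)
        return candidates[np.argmax(mu + np.sqrt(beta) * sigma)]

    f = lambda x: np.exp(-(x - 0.6) ** 2 / 0.02)       # stand-in for the evoked response
    rng = np.random.default_rng(0)
    candidates = np.linspace(0, 1, 201).reshape(-1, 1)
    X, y = [[0.1], [0.9]], [f(0.1), f(0.9)]            # two seed measurements
    for _ in range(10):
        x_next = gp_ucb_step(np.array(X), np.array(y), candidates)
        X.append(list(x_next))
        y.append(f(x_next[0]) + 0.01 * rng.normal())   # noisy "experiment"
    print("best stimulus found:", X[int(np.argmax(y))])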

Relevance: 30.00%

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity, and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth, and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin-lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevance: 30.00%

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehman, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 × 10⁻³ both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250 °C in the temperature or 6 % in the iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by a technique which minimizes both the travel time residuals and the model perturbations.
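A minimal sketch of a damped least-squares update of the kind described for TTINV, in which both the travel-time residuals and the size of the model perturbation are minimized; the notation (G for the matrix of partial derivatives, the damping value) is an illustrative assumption, not taken from the thesis:

    import numpy as np

    def damped_least_squares_step(G, residuals, damping):
        """Solve min ||G dm - r||^2 + damping * ||dm||^2 for the model update dm;
        the damping term keeps the perturbation small and regularizes the
        otherwise ill-conditioned normal equations."""
        n = G.shape[1]
        return np.linalg.solve(G.T @ G + damping * np.eye(n), G.T @ residuals)

    rng = np.random.default_rng(1)
    G = rng.normal(size=(5, 3))   # partial derivatives dT/dm at the current model
    r = rng.normal(size=5)        # observed minus predicted travel times
    print(damped_least_squares_step(G, r, damping=0.1))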

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.

Relevance: 30.00%

Abstract:

A large array has been used to investigate the P-wave velocity structure of the lower mantle. Linear array processing methods are reviewed and a method of nonlinear processing is presented. Phase velocities, travel times, and relative amplitudes of P waves have been measured with the large array at the Tonto Forest Seismological Observatory in Arizona for 125 earthquakes in the distance range of 30 to 100 degrees. Various models are assumed for the upper 771 km of the mantle and the Wiechert-Herglotz method is applied to the phase velocity data to obtain a velocity-depth structure for the lower mantle. The phase velocity data indicate the presence of a second-order discontinuity at a depth of 840 km, another at 1150 km, and less pronounced discontinuities at 1320, 1700 and 1950 km. Phase velocities beyond 85 degrees are interpreted in terms of a triplication of the phase velocity curve, and this results in a zone of almost constant velocity between depths of 2670 and 2800 km. Because of the uncertainty in the upper mantle assumptions, a final model cannot be proposed, but it appears that the lower mantle is more complicated than the standard models and there is good evidence for second-order discontinuities below a depth of 1000 km. A tentative lower bound of 2881 km can be placed on the depth to the core. The importance of checking the calculated velocity structure against independently measured travel times is pointed out. Comparisons are also made with observed PcP times and the agreement is good. The method of using measured values of the rate of change of amplitude with distance shows promising results.
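For orientation, the Wiechert-Herglotz method referred to above converts a ray-parameter curve p(Δ) into a velocity-depth profile; a minimal numerical sketch under the usual assumptions (spherically symmetric Earth, monotonically decreasing p(Δ), no low-velocity zone), with a toy input curve and with the contribution of the assumed upper-mantle model omitted:

    import numpy as np

    def herglotz_wiechert(delta, p, R=6371.0):
        """Invert p(delta) (p in s/rad, delta in radians, p decreasing) using
            ln(R / r1) = (1/pi) * int_0^Delta1 arccosh(p(Delta)/p1) dDelta,
            v(r1) = r1 / p1,
        where r1 is the turning radius of the ray with parameter p1.
        Here the integral starts at the first datum; in practice the part
        from 0 to that distance comes from an assumed upper-mantle model."""
        radii, velocities = [], []
        for i in range(1, len(p)):
            p1 = p[i]
            integrand = np.arccosh(np.maximum(p[: i + 1] / p1, 1.0))
            r1 = R * np.exp(-np.trapz(integrand, delta[: i + 1]) / np.pi)
            radii.append(r1)
            velocities.append(r1 / p1)
        return np.array(radii), np.array(velocities)

    delta = np.radians(np.linspace(30.0, 100.0, 50))   # toy distance range (rad)
    p = np.linspace(600.0, 250.0, 50)                  # toy ray parameters (s/rad)
    r, v = herglotz_wiechert(delta, p)
    print(r[:3], v[:3])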