13 results for Upper Bounds

in CaltechTHESIS


Relevance: 70.00%

Abstract:

Network information theory and channels with memory are two important but difficult frontiers of information theory. In this two-part dissertation, we study these two areas, each comprising one part. For the first area we study the so-called entropy vectors via finite group theory, and the network codes constructed from finite groups. In particular, we identify the smallest finite group that violates the Ingleton inequality, an inequality respected by all linear network codes but not satisfied by all entropy vectors. Based on the analysis of this group we generalize it to several families of Ingleton-violating groups, which may be used to design good network codes. In that regard, we study the network codes constructed from finite groups, and in particular show that linear network codes are embedded in the group network codes constructed with these Ingleton-violating families. Furthermore, such codes are strictly more powerful than linear network codes, as they are able to violate the Ingleton inequality while linear network codes cannot. For the second area, we study the impact of memory on channel capacity through a novel communication system: the energy harvesting channel. Unlike traditional communication systems, the transmitter of an energy harvesting channel is powered by an exogenous energy harvesting device and a finite-sized battery. As a consequence, at each channel use the system can only transmit a symbol whose energy consumption is no more than the energy currently available. This new type of power supply introduces an unprecedented input constraint for the channel, which is random, instantaneous, and has memory. Furthermore, the energy harvesting process is naturally observed causally at the transmitter, but no such information is provided to the receiver. Both of these features pose great challenges for the analysis of the channel capacity.
In this work we use techniques from channels with side information and finite state channels to obtain lower and upper bounds on the capacity of the energy harvesting channel. In particular, we study the stationarity and ergodicity conditions of a surrogate channel in order to compute and optimize the achievable rates for the original channel. In addition, for practical code design of the system we study the pairwise error probabilities of the input sequences.
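For reference, the Ingleton inequality discussed above can be stated in entropy form for four jointly distributed random variables A, B, C, D (this is the standard form from the network-coding literature, not notation taken from this abstract):

```latex
h(A) + h(B) + h(C,D) + h(A,B,C) + h(A,B,D)
\;\le\;
h(A,B) + h(A,C) + h(A,D) + h(B,C) + h(B,D)
```

Ranks of subspaces, and hence the entropies arising from linear network codes, always satisfy this inequality, while general entropy vectors need not; this is what makes group-based violations useful for going beyond linear codes.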

Relevance: 60.00%

Abstract:

This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties.
In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material. Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in a manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, moreover, ostensibly collapse, with allowances made for experimental scatter, onto a master curve dependent on the hardening exponent but otherwise material independent.

Relevance: 60.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
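The abstract does not reproduce Algorithm 1 itself, but the general pattern of distributed gradient descent for demand shaping is easy to sketch. In the toy version below (all names and parameter values are hypothetical, and box limits on power are omitted for brevity), each load holds a power profile with a fixed total energy, receives the aggregate net demand as a broadcast signal, takes a gradient step on the aggregate's squared magnitude, and projects back onto its own energy constraint:

```python
import numpy as np

def schedule_deferrable_loads(base_demand, energy_requests, steps=15, lr=0.1):
    """Sketch of distributed gradient descent for deferrable-load shaping.

    Each load n keeps a profile p[n] (power at each of T slots) with a fixed
    total energy energy_requests[n].  The coordinator broadcasts the aggregate
    net demand; each load descends the gradient of sum_t d(t)^2 and projects
    back onto its own total-energy constraint (box limits are omitted here).
    """
    T = len(base_demand)
    N = len(energy_requests)
    p = np.array([[e / T] * T for e in energy_requests])  # flat start
    for _ in range(steps):
        aggregate = base_demand + p.sum(axis=0)   # broadcast signal
        grad = 2 * aggregate                      # d/dp[n,t] of sum_t d(t)^2
        p -= lr * grad / N                        # local gradient step
        # project each profile back onto {sum_t p[n,t] = e_n}
        p += (np.array(energy_requests) - p.sum(axis=1))[:, None] / T
    return p
```

Each iteration shrinks the aggregate's deviation from its mean by a constant factor, so the net demand flattens geometrically, which is consistent with convergence in a small number of iterations.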

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 treats updated predictions of renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expected total energy request of future deferrable loads.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other seeking a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but is numerically much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
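The abstract does not reproduce the relaxation itself; as background (standard in the branch-flow-model literature, not notation from this thesis), the only nonconvex constraint in the single-phase branch flow model couples the squared branch current ℓ, squared voltage v, and branch power flow (P, Q), and the SOCP relaxation replaces the equality with a rotated second-order cone:

```latex
\ell_{ij} \;=\; \frac{P_{ij}^2 + Q_{ij}^2}{v_i}
\qquad\longrightarrow\qquad
\ell_{ij}\, v_i \;\ge\; P_{ij}^2 + Q_{ij}^2 .
```

The relaxation is called exact when an optimal solution of the relaxed problem satisfies the original equality, which is the property that the guarantees above characterize.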

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated from a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 achieves a speedup of more than 70x over the convex relaxation approach, at the cost of a suboptimality within numerical precision.

Relevance: 60.00%

Abstract:

An explicit formula is obtained for the coefficients of the cyclotomic polynomial F_n(x), where n is the product of two distinct odd primes. A recursion formula, a lower bound, and an improvement of Bang's upper bound for the coefficients of F_n(x) are also obtained, where n is the product of three distinct primes. The cyclotomic coefficients are also studied when n is the product of four distinct odd primes. A recursion formula and upper bounds for its coefficients are obtained. The last chapter takes a different approach to the cyclotomic coefficients. A connection is obtained between a certain partition function and the cyclotomic coefficients when n is the product of an arbitrary number of distinct odd primes. Finally, an upper bound for the coefficients is derived when n is the product of an arbitrary number of distinct odd primes.
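The coefficients in question are easy to compute for small n, which makes the objects of study concrete. A minimal sketch (not from the thesis) uses the recursion Phi_n(x) = (x^n - 1) / prod over proper divisors d of Phi_d(x); for example, at n = 105 = 3 * 5 * 7, the smallest product of three distinct odd primes, a coefficient of magnitude 2 famously appears:

```python
from functools import lru_cache

def poly_div(num, den):
    """Exact division of integer polynomials (coefficient lists, lowest
    degree first); den must be monic, so the quotient is integral."""
    num = list(num)
    q = [0] * (len(num) - len(den) + 1)
    for i in range(len(q) - 1, -1, -1):
        q[i] = num[i + len(den) - 1]
        for j, c in enumerate(den):
            num[i + j] -= q[i] * c
    return tuple(q)

@lru_cache(maxsize=None)
def cyclotomic(n):
    """Coefficients of the n-th cyclotomic polynomial, lowest degree first,
    via Phi_n(x) = (x^n - 1) / prod_{d | n, d < n} Phi_d(x)."""
    coeffs = tuple([-1] + [0] * (n - 1) + [1])   # x^n - 1
    for d in range(1, n):
        if n % d == 0:
            coeffs = poly_div(coeffs, cyclotomic(d))
    return coeffs
```

With this, `cyclotomic(105)` has degree phi(105) = 48 and contains the coefficient -2 (at x^7), illustrating why bounds on the coefficients become nontrivial once n has three or more distinct odd prime factors.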

Relevance: 20.00%

Abstract:

In this thesis, I will discuss how information-theoretic arguments can be used to produce sharp bounds in the study of quantum many-body systems. The main advantage of this approach, as opposed to the conventional field-theoretic argument, is that it depends very little on the precise form of the Hamiltonian. The main idea behind this thesis lies in a number of results concerning the structure of quantum states that are conditionally independent. Depending on the application, some of these statements are generalized to quantum states that are approximately conditionally independent. These structures can be readily used in the study of gapped quantum many-body systems, especially those in two spatial dimensions. A number of rigorous results are derived, including (i) a universal upper bound on the maximal number of topologically protected states, expressed in terms of the topological entanglement entropy, (ii) a first-order perturbation bound for the topological entanglement entropy that decays superpolynomially with the size of the subsystem, and (iii) a correlation bound between an arbitrary local operator and a topological operator constructed from a set of local reduced density matrices. I also introduce exactly solvable models supported on a three-dimensional lattice that can be used as a reliable quantum memory.

Relevance: 20.00%

Abstract:

Epidural electrostimulation holds great potential for improving therapy for patients with spinal cord injury (SCI) (Harkema et al., 2011). Further promising results from combined therapies using electrostimulation have also been recently obtained (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, the challenge of choosing good stimuli is quite substantial, even for expert human experimenters. To develop a fully implantable, autonomous device which can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method can be used in a clinical setting by caregivers who are not experts in the neurostimulator's use, and can allow the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB, Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB, Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed-loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle.
These experiments show that the algorithm is capable of directing these experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.
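The core GP-UCB idea is compact enough to sketch. The following is a minimal, illustrative single-query GP-UCB loop on a 1-D grid with a squared-exponential kernel; the function name, length scale, and parameter values are hypothetical choices for illustration, not the GP-BUCB/GP-AUCB implementations developed in the dissertation:

```python
import numpy as np

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel matrix between 1-D point sets a and b."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_ucb(f, grid, rounds=20, beta=4.0, noise=1e-4):
    """Minimal GP-UCB loop: fit a GP posterior to the observations so far,
    then query the grid point maximizing mean + sqrt(beta) * std."""
    X, y = [], []
    for _ in range(rounds):
        if X:
            Xa = np.array(X)
            K = rbf(Xa, Xa) + noise * np.eye(len(X))
            k = rbf(Xa, grid)
            alpha = np.linalg.solve(K, np.array(y))
            mu = k.T @ alpha                       # posterior mean
            var = 1.0 - np.einsum('ij,ij->j', k, np.linalg.solve(K, k))
            ucb = mu + np.sqrt(beta) * np.sqrt(np.clip(var, 0.0, None))
        else:
            ucb = np.zeros_like(grid)              # no data yet: all points tie
        x = grid[int(np.argmax(ucb))]
        X.append(x)
        y.append(f(x))
    return max(y)
```

Each round queries the point maximizing the posterior mean plus a multiple of the posterior standard deviation, trading exploitation against exploration; the batch and adaptive variants differ mainly in how they handle pending, not-yet-observed queries.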

Relevance: 20.00%

Abstract:

Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.

Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.

Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.

Relevance: 20.00%

Abstract:

This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. 
The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
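The impossibility of exact recovery can be made concrete with a toy calculation: a constant baseline offset in the digitized acceleration, indistinguishable from signal at low frequency, integrates to a linear drift in velocity and a quadratic drift in displacement. The synthetic record below is hypothetical, not one of the Volume II test records:

```python
import numpy as np

# A synthetic accelerogram a(t) = cos(t) with a small baseline offset eps,
# mimicking uncorrected digitization noise.  The exact velocity is sin(t);
# the offset integrates to a drift eps * t in velocity (and eps * t**2 / 2
# in displacement), which is why low-frequency noise dominates the error
# in integrated time histories.
eps = 0.01
t = np.linspace(0.0, 40.0, 4001)
accel = np.cos(t) + eps

def cumtrapz(y, t):
    """Cumulative trapezoidal integral of y over t, starting at zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))
    return out

veloc = cumtrapz(accel, t)
drift = veloc - np.sin(t)   # error relative to the exact velocity
# at t = 40 the drift is close to eps * 40 = 0.4, far above the signal noise
```

A 1% baseline error thus produces an order-of-magnitude larger error in velocity by the end of a 40 s record, motivating the probabilistic treatment described above.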

Relevance: 20.00%

Abstract:

This thesis consists of two separate parts. Part I (Chapter 1) is concerned with seismotectonics of the Middle America subduction zone. In this chapter, stress distribution and Benioff zone geometry are investigated along almost 2000 km of this subduction zone, from the Rivera Fracture Zone in the north to Guatemala in the south. Particular emphasis is placed on the effects on stress distribution of two aseismic ridges, the Tehuantepec Ridge and the Orozco Fracture Zone, which subduct at seismic gaps. Stress distribution is determined by studying seismicity distribution, and by analysis of 190 focal mechanisms, both new and previously published, which are collected here. In addition, two recent large earthquakes that have occurred near the Tehuantepec Ridge and the Orozco Fracture Zone are discussed in more detail. A consistent stress release pattern is found along most of the Middle America subduction zone: thrust events at shallow depths, followed down-dip by an area of low seismic activity, followed by a zone of normal events at over 175 km from the trench and 60 km depth. The zone of low activity is interpreted as showing decoupling of the plates, and the zone of normal activity as showing the breakup of the descending plate. The portion of subducted lithosphere containing the Orozco Fracture Zone does not differ significantly, in Benioff zone geometry or in stress distribution, from adjoining segments. The Playa Azul earthquake of October 25, 1981, Ms = 7.3, occurred in this area. Body and surface wave analysis of this event shows a simple source with a shallow thrust mechanism and gives Mo = 1.3 x 10^27 dyne-cm. A stress drop of about 45 bars is calculated; this is slightly higher than that of other thrust events in this subduction zone. In the Tehuantepec Ridge area, only minor differences in stress distribution are seen relative to adjoining segments.
For both ridges, the only major difference from adjoining areas is the infrequency or lack of occurrence of large interplate thrust events.

Part II involves upper mantle P wave structure studies, for the Canadian shield and eastern North America. In Chapter 2, the P wave structure of the Canadian shield is determined through forward waveform modeling of the phases Pnl, P, and PP. Effects of lateral heterogeneity are kept to a minimum by using earthquakes just outside the shield as sources, with propagation paths largely within the shield. Previous mantle structure studies have used recordings of P waves in the upper mantle triplication range of 15-30°; however, the lack of large earthquakes in the shield region makes compilation of a complete P wave dataset difficult. By using the phase PP, which undergoes triplications at 30-60°, much more information becomes available. The WKBJ technique is used to calculate synthetic seismograms for PP, and these records are modeled almost as well as the P arrivals. A new velocity model, designated S25, is proposed for the Canadian shield. This model contains a thick, high-Q, high-velocity lid to 165 km and a deep low-velocity zone. These features combine to produce seismograms that are markedly different from those generated by other shield structure models. The upper mantle discontinuities in S25 are placed at 405 and 660 km, with a simple linear gradient in velocity between them. Details of the shape of the discontinuities are not well constrained. Below 405 km, this model is not very different from many proposed P wave models for both shield and tectonic regions.

Chapter 3 looks in more detail at recordings of Pnl in eastern North America. First, seismograms from four eastern North American earthquakes are analyzed, and seismic moments for the events are calculated. These earthquakes are important in that they are among the largest to have occurred in eastern North America in the last thirty years, yet in some cases were not large enough to produce many good long-period teleseismic records. A simple layer-over-a-halfspace model is used for the initial modeling, and is found to provide an excellent fit for many features of the observed waveforms. The effects on Pnl of varying lid structure are then investigated. A thick lid with a positive gradient in velocity, such as that proposed for the Canadian shield in Chapter 2, will have a pronounced effect on the waveforms, beginning at distances of 800 or 900 km. Pnl records from the same eastern North American events are recalculated for several lid structure models, to survey what kinds of variations might be seen. For several records it is possible to see likely effects of lid structure in the data. However, the dataset is too sparse to make any general observations about variations in lid structure. This type of modeling is expected to be important in the future, as the analysis is extended to more recent eastern North American events, and as broadband instruments make more high-quality regional recordings available.

Relevance: 20.00%

Abstract:

The two-pulse stimulated radiation of dense (10^9/cm^3 < n_e ≤ 10^11/cm^3) nonuniform neon and argon afterglow plasma columns longitudinally immersed in a magnetic field is studied. The magnetic field is very homogeneous over the plasma volume (ΔB/B ~ 0.01%). If the S-band microwave pulses' center frequency is such that they resonantly excite a narrow band of plasma upper hybrid oscillations close to the maximum upper hybrid frequency of the column, strong two-pulse echoes are observed. This new echo process is called the upper hybrid echo. The echo spectrum, echo power and echo width were studied as a function of the pulse peak power P, pulse separation τ, relative density (ω_po/ω)^2, and relative cyclotron frequency (ω_c/ω). The complex but systematic variations of the echo properties as a function of the above-mentioned parameters are found to be in qualitative agreement with those predicted by a theory of Gould and Blum based upon a simple nonuniform unidimensional cold plasma slab model. The possible effects of electron-neutral and electron-ion collisions not retained in the theoretical model are discussed.

The existence of a new type of cyclotron echo, different from that of Hill and Kaplan and not predicted by the Blum and Gould model, is documented. It too is believed to be a collective effect, and can probably be described by a theory retaining some hot plasma effects.

Relevance: 20.00%

Abstract:

The electromagnetic scattering and absorption properties of small (kr ~ 1/2) inhomogeneous magnetoplasma columns are calculated via the full set of Maxwell's equations with a tensor dielectric constitutive relation. The cold plasma model with collisional damping is used to describe the column. The equations are solved numerically, subject to boundary conditions appropriate to an infinite parallel strip line and to an incident plane wave. The results are similar for several density profiles and exhibit semiquantitative agreement with measurements in a waveguide. The absorption is spatially limited, especially for small collision frequency, to a narrow hybrid resonant layer and is essentially zero when there is no hybrid layer in the column. The reflection is also enhanced when the hybrid layer is present, but the value of the reflection coefficient is strongly modified by the presence of the glass tube. The nature of the solutions and an extensive discussion of the conditions under which the cold collisional model should yield valid results are presented.

Relevance: 20.00%

Abstract:

Several types of seismological data, including surface wave group and phase velocities, travel times from large explosions, and teleseismic travel time anomalies, have indicated that there are significant regional variations in the upper few hundred kilometers of the mantle beneath continental areas. Body wave travel times and amplitudes from large chemical and nuclear explosions are used in this study to delineate the details of these variations beneath North America.

As a preliminary step in this study, theoretical P wave travel times, apparent velocities, and amplitudes have been calculated for a number of proposed upper mantle models, those of Gutenberg, Jeffreys, Lehmann, and Lukk and Nersesov. These quantities have been calculated for both P and S waves for model CIT11GB, which is derived from surface wave dispersion data. First arrival times for all the models except that of Lukk and Nersesov are in close agreement, but the travel time curves for later arrivals are both qualitatively and quantitatively very different. For model CIT11GB, there are two large, overlapping regions of triplication of the travel time curve, produced by regions of rapid velocity increase near depths of 400 and 600 km. Throughout the distance range from 10 to 40 degrees, the later arrivals produced by these discontinuities have larger amplitudes than the first arrivals. The amplitudes of body waves, in fact, are extremely sensitive to small variations in the velocity structure, and provide a powerful tool for studying structural details.

Most of eastern North America, including the Canadian Shield, has a Pn velocity of about 8.1 km/sec, with a nearly abrupt increase in compressional velocity of ~0.3 km/sec at a depth varying regionally between 60 and 90 km. Variations in the structure of this part of the mantle are significant even within the Canadian Shield. The low-velocity zone is a minor feature in eastern North America and is subject to pronounced regional variations. It is 30 to 50 km thick, and occurs somewhere in the depth range from 80 to 160 km. The velocity decrease is less than 0.2 km/sec.

Consideration of the absolute amplitudes indicates that the attenuation due to anelasticity is negligible for 2 Hz waves in the upper 200 km along the southeastern and southwestern margins of the Canadian Shield. For compressional waves the average Q for this region is > 3000. The amplitudes also indicate that the velocity gradient is at least 2 x 10^-3 both above and below the low-velocity zone, implying that the temperature gradient is < 4.8°C/km if the regions are chemically homogeneous.

In western North America, the low-velocity zone is a pronounced feature, extending to the base of the crust and having minimum velocities of 7.7 to 7.8 km/sec. Beneath the Colorado Plateau and Southern Rocky Mountains provinces, there is a rapid velocity increase of about 0.3 km/sec, similar to that observed in eastern North America, but near a depth of 100 km.

Complicated travel time curves observed on profiles with stations in both eastern and western North America can be explained in detail by a model taking into account the lateral variations in the structure of the low-velocity zone. These variations involve primarily the velocity within the zone and the depth to the top of the zone; the depth to the bottom is, for both regions, between 140 and 160 km.

The depth to the transition zone near 400 km also varies regionally, by about 30-40 km. These differences imply variations of 250°C in temperature or 6% in iron content of the mantle, if the phase transformation of olivine to the spinel structure is assumed responsible. The structural variations at this depth are not correlated with those at shallower depths, and follow no obvious simple pattern.

The computer programs used in this study are described in the Appendices. The program TTINV (Appendix IV) fits spherically symmetric earth models to observed travel time data. The method, described in Appendix III, resembles conventional least-squares fitting, using partial derivatives of the travel time with respect to the model parameters to perturb an initial model. The usual ill-conditioned nature of least-squares techniques is avoided by minimizing both the travel time residuals and the model perturbations.
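The technique described for TTINV, minimizing the travel time residuals and the model perturbations together, is what is now usually called damped (Tikhonov) least squares. A minimal sketch, with hypothetical names and no claim to match the original program, stacks damping rows under the partial-derivative matrix so an ordinary least-squares solver handles the regularized problem:

```python
import numpy as np

def damped_least_squares(G, d, lam):
    """Solve min || G m - d ||^2 + lam^2 || m ||^2 by stacking lam * I
    under G; this keeps the normal equations well-conditioned even when
    G's columns are nearly dependent, at the cost of biasing the model
    perturbation m toward zero."""
    n = G.shape[1]
    G_aug = np.vstack([G, lam * np.eye(n)])
    d_aug = np.concatenate([d, np.zeros(n)])
    m, *_ = np.linalg.lstsq(G_aug, d_aug, rcond=None)
    return m
```

The solution satisfies the damped normal equations (G^T G + lam^2 I) m = G^T d, and its norm never exceeds that of the undamped solution, which is exactly the trade-off the appendix describes.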

Spherically symmetric earth models, however, have been found inadequate to explain most of the observed travel times in this study. TVT4, a computer program that performs ray theory calculations for a laterally inhomogeneous earth model, is described in Appendix II. Appendix I gives a derivation of seismic ray theory for an arbitrarily inhomogeneous earth model.

Relevance: 20.00%

Abstract:

The matrices studied here are positive stable (or, briefly, stable). These are matrices, real or complex, whose eigenvalues have positive real parts. A theorem of Lyapunov states that A is stable if and only if there exists H > 0 such that AH + HA* = I. Let A be a stable matrix. Three aspects of the Lyapunov transformation L_A : H → AH + HA* are discussed.

1. Let C1(A) = {AH + HA* : H ≥ 0} and C2(A) = {H : AH + HA* ≥ 0}. The problems of determining the cones C1(A) and C2(A) are still unsolved. Using solvability theory for linear equations over cones it is proved that C1(A) is the polar of C2(A*), and it is also shown that C1(A) = C1(A^-1). The inertia assumed by matrices in C1(A) is characterized.

2. The index of dissipation of A was defined to be the maximum number of equal eigenvalues of H, where H runs through all matrices in the interior of C2(A). Upper and lower bounds, as well as some properties of this index, are given.

3. We consider the minimal eigenvalue of the Lyapunov transform AH + HA*, where H varies over the set of all positive semi-definite matrices whose largest eigenvalue is less than or equal to one. Denote it by ψ(A). It is proved that if A is Hermitian and has eigenvalues μ1 ≥ μ2 ≥ ... ≥ μn > 0, then ψ(A) = -(μ1 - μn)^2 / (4(μ1 + μn)). The value of ψ(A) is also determined in case A is a normal, stable matrix. Then ψ(A) can be expressed in terms of at most three of the eigenvalues of A. If A is an arbitrary stable matrix, then upper and lower bounds for ψ(A) are obtained.
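Lyapunov's criterion from the opening paragraph is straightforward to check numerically. The sketch below (the helper name is hypothetical, not from the thesis) solves AH + HA* = I by vectorization, using the column-major identities vec(AH) = (I kron A) vec(H) and vec(HA*) = (conj(A) kron I) vec(H), and then H > 0 can be verified for a positive stable A:

```python
import numpy as np

def lyapunov_solve(A):
    """Solve A H + H A* = I for H via the Kronecker (vec) trick.
    By Lyapunov's theorem, the solution H is positive definite exactly
    when every eigenvalue of A has positive real part."""
    n = A.shape[0]
    # vec(A H) = (I kron A) vec(H);  vec(H A*) = (conj(A) kron I) vec(H)
    M = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))
    h = np.linalg.solve(M, np.eye(n).flatten(order='F'))
    return h.reshape(n, n, order='F')
```

For instance, A = [[2, 1], [0, 3]] has eigenvalues 2 and 3, so it is positive stable and the resulting H is Hermitian positive definite, as the theorem asserts.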