10 results for Source code

in CaltechTHESIS


Relevance: 20.00%

Abstract:

In this thesis, a method to retrieve the source finiteness, depth of faulting, and the mechanisms of large earthquakes from long-period surface waves is developed and applied to several recent large events.

In Chapter 1, source finiteness parameters of eleven large earthquakes were determined from long-period Rayleigh waves recorded at IDA and GDSN stations. The basic data set is the seismic spectra of periods from 150 to 300 sec. Two simple models of source finiteness are studied. The first model is a point source with finite duration. In the determination of the duration or source-process times, we used Furumoto's phase method and a linear inversion method, in which we simultaneously inverted the spectra and determined the source-process time that minimizes the error in the inversion. These two methods yielded consistent results. The second model is the finite fault model. Source finiteness of large shallow earthquakes with rupture on a fault plane with a large aspect ratio was modeled with the source-finiteness function introduced by Ben-Menahem. The spectra were inverted to find the extent and direction of the rupture of the earthquake that minimize the error in the inversion. This method is applied to the 1977 Sumbawa, Indonesia, 1979 Colombia-Ecuador, 1983 Akita-Oki, Japan, 1985 Valparaiso, Chile, and 1985 Michoacan, Mexico earthquakes. The method yielded results consistent with the rupture extent inferred from the aftershock area of these earthquakes.
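
As a rough illustration of the finite-fault inversion described above (not the thesis's actual code), the sketch below grid-searches a rupture length L and rupture azimuth that minimize the misfit between observed spectral amplitudes and point-source spectra scaled by a Ben-Menahem-type |sin X / X| finiteness factor; the rupture velocity, phase velocity, and search ranges are placeholder assumptions.

    import numpy as np

    def finiteness(omega, L, v_r, c, theta):
        """|sin X / X| factor for a unilateral rupture of length L (km).

        omega: angular frequencies (rad/s); v_r, c: rupture and phase
        velocities (km/s); theta: angle between the rupture direction and
        the direction to the station (rad)."""
        X = 0.5 * omega * L * (1.0 / v_r - np.cos(theta) / c)
        return np.abs(np.sinc(X / np.pi))       # np.sinc(x) = sin(pi x)/(pi x)

    def grid_search(omega, obs, point, station_az, v_r=2.5, c=4.0):
        """obs, point: (n_stations, n_freqs) spectral amplitudes;
        station_az: station azimuths from the source (rad)."""
        best = (np.inf, None, None)
        for L in np.arange(20.0, 400.0, 20.0):                   # rupture length, km
            for phi in np.radians(np.arange(0.0, 360.0, 10.0)):  # rupture azimuth
                theta = np.asarray(station_az) - phi
                pred = point * finiteness(omega, L, v_r, c, theta[:, None])
                err = np.sum((obs - pred) ** 2)
                if err < best[0]:
                    best = (err, L, np.degrees(phi))
        return best                                              # (misfit, L, azimuth)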

In Chapter 2, the depths and source mechanisms of nine large shallow earthquakes were determined. We inverted the data set of complex source spectra for a moment tensor (linear) or a double couple (nonlinear). By solving a least-squares problem, we obtained the centroid depth or the extent of the distributed source for each earthquake. The depths and source mechanisms of large shallow earthquakes determined from long-period Rayleigh waves depend on the models of source finiteness, wave propagation, and the excitation. We tested various models of the source finiteness, Q, the group velocity, and the excitation in the determination of earthquake depths.
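
The linear (moment tensor) step of such an inversion reduces to ordinary least squares. A minimal sketch, assuming the excitation kernels for the six independent moment-tensor elements at a trial depth are stacked into a matrix G and the observed complex spectra into a vector d (both placeholder names):

    import numpy as np

    def invert_moment_tensor(G, d):
        """Least-squares moment tensor; G: (n_data, 6) complex, d: (n_data,)."""
        m, *_ = np.linalg.lstsq(G, d, rcond=None)
        misfit = np.linalg.norm(d - G @ m) / np.linalg.norm(d)
        return m, misfit

    def centroid_depth(depths, kernels_at_depth, d):
        """Repeat the inversion over trial depths; keep the best-fitting one."""
        return min(depths, key=lambda h: invert_moment_tensor(kernels_at_depth(h), d)[1])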

The depth estimates obtained using the Q model of Dziewonski and Steim (1982) and the excitation functions computed for the average ocean model of Regan and Anderson (1984) are considered most reasonable. Dziewonski and Steim's Q model represents a good global average of Q determined over the period range of the Rayleigh waves used in this study. Since most of the earthquakes studied here occurred in subduction zones, Regan and Anderson's average ocean model is considered most appropriate.

Our depth estimates are in general consistent with the Harvard CMT solutions. The centroid depths and their 90% confidence intervals (numbers in parentheses), determined by Student's t test, are: Colombia-Ecuador earthquake (12 December 1979), d = 11 km, (9, 24) km; Santa Cruz Is. earthquake (17 July 1980), d = 36 km, (18, 46) km; Samoa earthquake (1 September 1981), d = 15 km, (9, 26) km; Playa Azul, Mexico earthquake (25 October 1981), d = 41 km, (28, 49) km; El Salvador earthquake (19 June 1982), d = 49 km, (41, 55) km; New Ireland earthquake (18 March 1983), d = 75 km, (72, 79) km; Chagos Bank earthquake (30 November 1983), d = 31 km, (16, 41) km; Valparaiso, Chile earthquake (3 March 1985), d = 44 km, (15, 54) km; Michoacan, Mexico earthquake (19 September 1985), d = 24 km, (12, 34) km.

In Chapter 3, the vertical extent of faulting of the 1983 Akita-Oki, Japan, and 1977 Sumbawa, Indonesia, earthquakes is determined from fundamental and overtone Rayleigh waves. Using fundamental Rayleigh waves, the depths are determined from moment tensor and fault inversions. The observed overtone Rayleigh waves are compared to synthetic overtone seismograms to estimate the depth of faulting of these earthquakes. The depths obtained from overtone Rayleigh waves are consistent with the depths determined from fundamental Rayleigh waves for the two earthquakes. Appendix B gives the observed seismograms of fundamental and overtone Rayleigh waves for eleven large earthquakes.

Relevance: 20.00%

Abstract:

The work presented in this thesis revolves around erasure correction coding, as applied to distributed data storage and real-time streaming communications.

First, we examine the problem of allocating a given storage budget over a set of nodes for maximum reliability. The objective is to find an allocation of the budget that maximizes the probability of successful recovery by a data collector accessing a random subset of the nodes. This optimization problem is challenging in general because of its combinatorial nature, despite its simple formulation. We study several variations of the problem, assuming different allocation models and access models, and determine the optimal allocation and the optimal symmetric allocation (in which all nonempty nodes store the same amount of data) for a variety of cases. Although the optimal allocation can have nonintuitive structure and can be difficult to find in general, our results suggest that, as a simple heuristic, reliable storage can be achieved by spreading the budget maximally over all nodes when the budget is large, and spreading it minimally over a few nodes when it is small. Coding would therefore be beneficial in the former case, while uncoded replication would suffice in the latter case.
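
For intuition, the symmetric case admits a closed-form success probability. A minimal sketch under assumed conventions (file size normalized to 1, budget T spread equally over m of the n nodes, and a collector that reads r nodes chosen uniformly at random):

    from math import ceil, comb

    def recovery_prob(n, m, T, r):
        """Probability that r randomly accessed nodes hold at least one unit of
        data when m nodes each store T/m (assumes T >= 1); hypergeometric sum."""
        need = ceil(m / T)                        # nonempty nodes required
        return sum(comb(m, k) * comb(n - m, r - k)
                   for k in range(need, min(m, r) + 1)) / comb(n, r)

Sweeping m for fixed n, T, and r makes the trade-off behind the heuristic above easy to see.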

Second, we study how distributed storage allocations affect the recovery delay in a mobile setting. Specifically, two recovery delay optimization problems are considered for a network of mobile storage nodes: the maximization of the probability of successful recovery by a given deadline, and the minimization of the expected recovery delay. We show that the first problem is closely related to the earlier allocation problem, and solve the second problem completely for the case of symmetric allocations. It turns out that the optimal allocations for the two problems can be quite different. In a simulation study, we evaluated the performance of a simple data dissemination and storage protocol for mobile delay-tolerant networks, and observed that the choice of allocation can have a significant impact on the recovery delay under a variety of scenarios.
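
A toy Monte Carlo, purely illustrative of the mobile recovery-delay problem (Poisson meetings and uniform node encounters are assumptions here, not the thesis's model):

    import random
    from math import ceil

    def mean_recovery_delay(n, m, T, lam=1.0, trials=10000):
        """Collector meets distinct nodes at Exp(lam) intervals; recovery completes
        once ceil(m/T) of the m nonempty nodes have been met (assumes T >= 1)."""
        need = ceil(m / T)
        total = 0.0
        for _ in range(trials):
            t, got = 0.0, 0
            for node in random.sample(range(n), n):   # random meeting order
                t += random.expovariate(lam)
                got += node < m                       # nodes 0..m-1 are nonempty
                if got >= need:
                    total += t
                    break
        return total / trials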

Third, we consider a real-time streaming system where messages created at regular time intervals at a source are encoded for transmission to a receiver over a packet erasure link; the receiver must subsequently decode each message within a given delay from its creation time. For erasure models containing a limited number of erasures per coding window, per sliding window, and containing erasure bursts whose maximum length is sufficiently short or long, we show that a time-invariant intrasession code asymptotically achieves the maximum message size among all codes that allow decoding under all admissible erasure patterns. For the bursty erasure model, we also show that diagonally interleaved codes derived from specific systematic block codes are asymptotically optimal over all codes in certain cases. We also study an i.i.d. erasure model in which each transmitted packet is erased independently with the same probability; the objective is to maximize the decoding probability for a given message size. We derive an upper bound on the decoding probability for any time-invariant code, and show that the gap between this bound and the performance of a family of time-invariant intrasession codes is small when the message size and packet erasure probability are small. In a simulation study, these codes performed well against a family of random time-invariant convolutional codes under a number of scenarios.
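
For the i.i.d. model, the decoding probability of a per-message (intrasession) allocation has a simple binomial form. A hedged illustration, assuming a message of k symbols is spread over w packet slots within its decoding window and is decodable whenever at least k of them arrive (the codes in the thesis split each packet among several active messages, which this ignores):

    from math import comb

    def decode_prob(k, w, p):
        """P(decode) when each of w packets is erased independently with prob p."""
        q = 1.0 - p
        return sum(comb(w, j) * q ** j * p ** (w - j) for j in range(k, w + 1))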

Finally, we consider the joint problems of routing and caching for named data networking. We propose a backpressure-based policy that employs virtual interest packets to make routing and caching decisions. In a packet-level simulation, the proposed policy outperformed a basic protocol that combines shortest-path routing with least-recently-used (LRU) cache replacement.
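
The flavor of a backpressure forwarding rule can be sketched as follows (illustrative only; the virtual-interest accounting of the proposed policy is more involved): each node forwards an interest for an object over the neighbor link with the largest positive backlog differential in virtual interest packets.

    def choose_next_hop(node, obj, neighbors, vip_count):
        """vip_count[x][obj]: queued virtual interest packets for obj at node x
        (hypothetical bookkeeping). Returns None if no positive differential."""
        best_link, best_diff = None, 0
        for nbr in neighbors[node]:
            diff = vip_count[node][obj] - vip_count[nbr][obj]
            if diff > best_diff:
                best_link, best_diff = nbr, diff
        return best_link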

Relevance: 20.00%

Abstract:

The Northridge earthquake of January 17, 1994, highlighted two previously known problems: premature fracturing of connections and the damaging capability of near-source ground motion pulses. Such large ground motions had not previously been experienced in a city with tall steel moment-frame buildings. Some steel buildings exhibited fracture of welded connections or other types of structural degradation.

A sophisticated three-dimensional nonlinear inelastic program is developed that can accurately model many nonlinear properties commonly ignored or approximated in other programs. The program can assess and predict severely inelastic response of steel buildings due to strong ground motions, including collapse.

Three-dimensional fiber and segment discretization of elements is presented in this work. This element and its two-dimensional counterpart are capable of modeling various geometric and material nonlinearities such as moment amplification, spread of plasticity and connection fracture. In addition to introducing a three-dimensional element discretization, this work presents three-dimensional constraints that limit the number of equations required to solve various three-dimensional problems consisting of intersecting planar frames.
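
The idea of fiber discretization can be sketched in a few lines: each cross-section is divided into fibers, plane-section kinematics gives each fiber a strain, a uniaxial material law gives its stress, and section forces follow by summation. The bilinear steel law and section bookkeeping below are illustrative assumptions, not the element formulation developed in the thesis.

    def bilinear_stress(eps, E=200e3, fy=345.0, b=0.01):
        """Bilinear steel law (MPa): elastic up to fy, then hardening slope b*E."""
        eps_y = fy / E
        eps_el = max(-eps_y, min(eps_y, eps))
        return E * eps_el + b * E * (eps - eps_el)

    def section_forces(fibers, eps0, kappa_y, kappa_z):
        """fibers: iterable of (area, y, z). Returns axial force and two moments."""
        N = My = Mz = 0.0
        for A, y, z in fibers:
            eps = eps0 - kappa_z * y + kappa_y * z    # plane sections remain plane
            s = bilinear_stress(eps)
            N += s * A
            My += s * A * z
            Mz += -s * A * y
        return N, My, Mz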

Two buildings damaged in the Northridge earthquake are investigated to verify the ability of the program to match the measured level of response and the extent and location of damage. The program is then used to predict the response to larger near-source ground motions, using the properties determined from the matched response.

A third building is studied to assess three-dimensional effects on a realistic irregular building in the inelastic range of response considering earthquake directivity. Damage levels are observed to be significantly affected by directivity and torsional response.

Several strong recorded ground motions clearly exceed code-based levels. Properly designed buildings can have drifts exceeding code-specified levels under these ground motions. The strongest ground motions caused collapse if fracture was included in the model. Near-source ground displacement pulses can cause columns to yield prior to weaker-designed beams. Damage in tall buildings correlates better with peak-to-peak displacements than with peak-to-peak accelerations.

Dynamic response of tall buildings shows that higher mode response can cause more damage than first mode response. Leaking of energy between modes in conjunction with damage can cause torsional behavior that is not anticipated.

Various response parameters are used for all three buildings to determine what correlations can be made for inelastic building response. Damage levels can be dramatically different based on the inelastic model used. Damage does not correlate well with several common response parameters.

Realistic modeling of material properties and structural behavior is of great value for understanding the performance of tall buildings due to earthquake excitations.

Relevance: 20.00%

Abstract:

The long- and short-period body waves of a number of moderate earthquakes occurring in central and southern California, recorded at regional (200-1400 km) and teleseismic (> 30°) distances, are modeled to obtain the source parameters: focal mechanism, depth, seismic moment, and source time history. The modeling is done in the time domain using a forward modeling technique based on ray summation. A simple layer-over-a-half-space velocity model is used, with additional layers added if necessary (for example, in a basin with a low-velocity lid).
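
A minimal sketch of the ray-summation idea (not the thesis's implementation): each ray contributes a delayed, scaled spike, and the spike train is convolved with the source time function; travel times and amplitudes would come from the layered velocity model and are treated here as given inputs.

    import numpy as np

    def synthetic(src, dt, arrivals, nsamp):
        """src: source time function samples; arrivals: list of (time_s, amplitude)."""
        trace = np.zeros(nsamp)
        for t, amp in arrivals:
            i = int(round(t / dt))
            if i < nsamp:
                trace[i] += amp                  # spike train of ray arrivals
        return np.convolve(trace, src)[:nsamp]   # convolve with source time function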

The earthquakes studied fall into two geographic regions: 1) the western Transverse Ranges, and 2) the western Imperial Valley. Earthquakes in the western Transverse Ranges include the 1987 Whittier Narrows earthquake, several offshore earthquakes that occurred between 1969 and 1981, and aftershocks to the 1983 Coalinga earthquake (these actually occurred north of the Transverse Ranges but share many characteristics with those that occurred there). These earthquakes are predominantly thrust faulting events with the average strike being east-west, but with many variations. Of the six earthquakes which had sufficient short-period data to accurately determine the source time history, five were complex events. That is, they could not be modeled as a simple point source, but consisted of two or more subevents. The subevents of the Whittier Narrows earthquake had different focal mechanisms. In the other cases, the subevents appear to be the same, but small variations could not be ruled out.

The recent Imperial Valley earthquakes modeled include the two 1987 Superstition Hills earthquakes and the 1969 Coyote Mountain earthquake. All are strike-slip events, and the second 1987 earthquake is a complex event with non-identical subevents.

In all the earthquakes studied, and particularly the thrust events, constraining the source parameters required modeling several phases and distance ranges. Teleseismic P waves could provide only approximate solutions. P_(nl) waves were probably the most useful phase in determining the focal mechanism, with additional constraints supplied by the SH waves when available. Contamination of the SH waves by shear-coupled PL waves was a frequent problem. Short-period data were needed to obtain the source time function.

In addition to the earthquakes mentioned above, several historic earthquakes were also studied. Earthquakes that occurred before the existence of dense local and worldwide networks are difficult to model because of the sparse data set. It has been noticed that earthquakes that occur near each other often produce similar waveforms, implying similar source parameters. By comparing recent, well-studied earthquakes to historic earthquakes in the same region, better constraints can be placed on the source parameters of the historic events.

The Lompoc earthquake (M=7) of 1927 is the largest offshore earthquake to occur in California this century. By direct comparison of waveforms and amplitudes with the Coalinga and Santa Lucia Banks earthquakes, the focal mechanism (thrust faulting on a northwest striking fault) and long-period seismic moment (10^(26) dyne cm) can be obtained. The S-P travel times are consistent with an offshore location, rather than one in the Hosgri fault zone.

Historic earthquakes in the western Imperial Valley were also studied. These events include the 1942 and 1954 earthquakes. The earthquakes were relocated by comparing S-P and R-S times to those of recent earthquakes. It was found that only minor changes in the epicenters were required, but that the Coyote Mountain earthquake may have been more severely mislocated. The waveforms, as expected, indicated that all the events were strike-slip. Moment estimates were obtained by comparing the amplitudes of recent and historic events at stations which recorded both. The 1942 event was smaller than the 1968 Borrego Mountain earthquake, although some previous studies suggested the reverse. The 1954 and 1937 earthquakes had moments close to the expected values. An aftershock of the 1942 earthquake appears to be larger than previously thought.

Relevance: 20.00%

Abstract:

This thesis examines the collapse risk of tall steel braced frame buildings using rupture-to-rafters simulations for a suite of San Andreas earthquakes. Two key advancements in this work are the development of (i) a rational methodology for assigning scenario earthquake probabilities and (ii) an approach to broadband ground motion simulation that is free of artificial corrections. The work can be divided into the following sections: earthquake source modeling, earthquake probability calculations, ground motion simulations, building response, and performance analysis.

As a first step, kinematic source inversions of past earthquakes in the magnitude range 6-8 are used to simulate 60 scenario earthquakes on the San Andreas fault. For each scenario earthquake, a 30-year occurrence probability is calculated, and we present a rational method to redistribute the forecast earthquake probabilities from UCERF to the simulated scenario earthquakes. We illustrate the inner workings of the method through an example involving earthquakes on the San Andreas fault in southern California.
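
One simple way to do the redistribution bookkeeping (an assumption for illustration; the weighting used in the thesis may differ) is to split the UCERF 30-year probability assigned to a magnitude bin on the fault equally among the simulated scenarios falling in that bin:

    def redistribute(ucerf_prob_by_bin, scenarios_by_bin):
        """ucerf_prob_by_bin: {mag_bin: 30-yr probability};
        scenarios_by_bin: {mag_bin: [scenario ids]}."""
        probs = {}
        for mag_bin, p_bin in ucerf_prob_by_bin.items():
            scen = scenarios_by_bin.get(mag_bin, [])
            for s in scen:
                probs[s] = p_bin / len(scen)      # equal weight within the bin
        return probs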

Next, three-component broadband ground motion histories are computed at 636 sites in the greater Los Angeles metropolitan area by superposing short-period (0.2 s to 2.0 s) empirical Green's function synthetics on long-period (> 2.0 s) seismograms computed from the kinematic source models using the spectral element method.
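
The hybrid combination amounts to matched low-pass/high-pass filtering about the crossover period and summation. A minimal sketch, with the filter type and order assumed (a 0.5 Hz crossover corresponds to the 2.0 s band limit quoted above):

    import numpy as np
    from scipy.signal import butter, filtfilt

    def broadband(long_period, short_period, dt, fc=0.5, order=4):
        """Sum a low-passed spectral-element synthetic and a high-passed
        empirical Green's function synthetic, both sampled at dt seconds."""
        nyq = 0.5 / dt
        b_lo, a_lo = butter(order, fc / nyq, btype="low")
        b_hi, a_hi = butter(order, fc / nyq, btype="high")
        return filtfilt(b_lo, a_lo, long_period) + filtfilt(b_hi, a_hi, short_period)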

Using the ground motions at the 636 sites for the 60 scenario earthquakes, 3-D nonlinear analyses of several variants of an 18-story steel braced frame building, designed for three soil types using the 1994 and 1997 Uniform Building Code provisions, are conducted. Model performance is classified into one of five performance levels: Immediate Occupancy, Life Safety, Collapse Prevention, Red-Tagged, and Model Collapse. The results are combined with the 30-year occurrence probabilities of the San Andreas scenario earthquakes using the PEER performance-based earthquake engineering framework to determine the probability of exceedance of these limit states over the next 30 years.
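
The final combination step can be summarized schematically: given each scenario's 30-year occurrence probability and whether the analysis of a building variant reached a given performance level under that scenario, treat the scenarios as independent events (an assumption for this sketch) and compute the 30-year exceedance probability.

    def prob_exceed_30yr(scenario_probs, exceeds_level):
        """scenario_probs[i]: 30-yr probability of scenario i;
        exceeds_level[i]: True if the analysis reached the limit state."""
        p_none = 1.0
        for p, hit in zip(scenario_probs, exceeds_level):
            if hit:
                p_none *= (1.0 - p)
        return 1.0 - p_none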

Relevance: 20.00%

Abstract:

The rapid growth and development of Los Angeles City and County has been one of the phenomena of the present age. The growth of a city from 50,600 to 576,000, an increase of over 1000% in thirty years, is an unprecedented occurrence. It has given rise to a variety of problems of increasing magnitude.

Chief among these are the supply of food, water, and shelter; the development of industry and markets; the prevention and removal of downtown congestion; and the protection of life and property. These, of course, are problems that any city must face. But in the case of a community which doubles its population every ten years, radical and heroic measures must often be taken.

Relevance: 20.00%

Abstract:

Described in this thesis are measurements of the thick-target neutron yield from the reaction ^13C(α, n)^16O. The yield was determined for laboratory bombarding energies between 0.475 and 0.700 MeV, using a stilbene crystal neutron detector and pulse-shape discrimination to eliminate gamma rays. Stellar temperatures between 2.5 and 4.5 x 10^8 K correspond to this energy region. From the neutron yield the astrophysical cross-section factor S(E) was extracted and found to fit a linear function: S(E) = [(5.48 ± 1.77) + (12.05 ± 3.91)E] x 10^5 MeV-barns, center-of-mass system. The stellar rate of the ^13C(α, n)^16O reaction is calculated and discussed with reference to helium burning and neutron production in the core of a giant star.
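
To show how the fitted S(E) enters a stellar rate, the sketch below numerically integrates the standard non-resonant rate integrand S(E) exp(-E/kT - b/sqrt(E)), with b = 0.9895 Z1 Z2 sqrt(mu) MeV^(1/2); overall constants and unit conversions are omitted, so only the temperature dependence is illustrated (this does not reproduce the thesis's full calculation).

    import numpy as np
    from scipy.integrate import quad

    Z1, Z2 = 6, 2                        # 13C + alpha
    mu = 13.0 * 4.0 / 17.0               # reduced mass, amu
    b = 0.9895 * Z1 * Z2 * np.sqrt(mu)   # Coulomb-barrier constant, MeV**0.5

    def S(E):                            # linear fit quoted above, MeV-barns
        return (5.48 + 12.05 * E) * 1e5

    def rate_shape(T9):
        """Unnormalized <sigma v> shape; T9 = temperature / 1e9 K."""
        kT = 0.08617 * T9                # MeV
        val, _ = quad(lambda E: S(E) * np.exp(-E / kT - b / np.sqrt(E)), 1e-3, 2.0)
        return val / kT ** 1.5

    # e.g. rate_shape(0.45) / rate_shape(0.25) compares 4.5e8 K to 2.5e8 K.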

Results are also presented of measurements carried out on the reaction ^9Be(α, n)^12C, taken with a thin Be target. The bombarding energy range covered was 0.340 to 0.680 MeV, with excitation curves reported for the ground-state and first excited-state neutrons. Some angular distributions were also measured. Resonances were found at bombarding energies of E_LAB = 0.520 MeV (E_CM = 0.360 MeV, Γ ~ 55 keV CM, ωγ = 3.79 eV CM) and E_LAB = 0.600 MeV (E_CM = 0.415 MeV, Γ < 4 keV CM, ωγ = 0.88 eV CM). The astrophysical rate of the ^9Be(α, n)^12C reaction due to these resonances is calculated.

Relevance: 20.00%

Abstract:

Proper encoding of transmitted information can improve the performance of a communication system. To recover the information at the receiver, it is necessary to decode the received signal. For many codes the complexity and slowness of the decoder are so severe that the code is not feasible for practical use. This thesis considers the decoding problem for one such class of codes, the comma-free codes related to the first-order Reed-Muller codes.

A factorization of the code matrix is found which leads to a simple, fast, minimum-memory decoder. The decoder is modular, and only n modules are needed to decode a code of length 2^n. The relevant factorization is extended to any code defined by a sequence of Kronecker products.
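
The Kronecker factorization behind such a decoder is the same one that underlies the fast Hadamard transform: the ±1 image of a length-2^n first-order Reed-Muller code consists of the rows of H_2 (x) ... (x) H_2 and their negatives, so all codeword correlations can be computed with n identical butterfly stages (about n·2^n additions) instead of a full matrix multiply. A minimal sketch of that idea, applied here to maximum-correlation decoding of the related Reed-Muller code rather than the comma-free codes themselves:

    import numpy as np

    def fast_hadamard(r):
        """Fast Hadamard (Walsh) transform of a length-2**n vector."""
        r = np.array(r, dtype=float)
        h = 1
        while h < r.size:
            for i in range(0, r.size, 2 * h):
                for j in range(i, i + h):
                    a, b = r[j], r[j + h]
                    r[j], r[j + h] = a + b, a - b
            h *= 2
        return r

    def decode_rm1(received_pm1):
        """Pick the basis codeword (and sign) with the largest correlation."""
        c = fast_hadamard(received_pm1)
        idx = int(np.argmax(np.abs(c)))
        return idx, int(np.sign(c[idx]))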

The problem of monitoring the correct synchronization position is also considered. A general answer seems to depend upon more detailed knowledge of the structure of comma-free codes. However, a technique is presented which gives useful results in many specific cases.

Relevance: 20.00%

Abstract:

This thesis discusses simulations of earthquake ground motions using prescribed ruptures and dynamic failure. Introducing sliding degrees of freedom led to an innovative technique for numerical modeling of earthquake sources. This technique allows efficient implementation of both prescribed ruptures and dynamic failure on an arbitrarily oriented fault surface. Off the fault surface, the solution of the three-dimensional, dynamic elasticity equation uses well-known finite-element techniques. We employ parallel processing to efficiently compute the ground motions in domains containing millions of degrees of freedom.

Using prescribed ruptures we study the sensitivity of long-period near-source ground motions to five earthquake source parameters for hypothetical events on a strike-slip fault (Mw 7.0 to 7.1) and a thrust fault (Mw 6.6 to 7.0). The directivity of the ruptures creates large displacement and velocity pulses in the ground motions in the forward direction. We found a good match between the severity of the shaking and the shape of the near-source factor from the 1997 Uniform Building Code for strike-slip faults and thrust faults with surface rupture. However, for blind thrust faults the peak displacements and velocities occur up-dip from the region with the peak near-source factor. We assert that a simple modification to the formulation of the near-source factor improves the match between the severity of the ground motion and the shape of the near-source factor.

For simulations with dynamic failure on a strike-slip fault or a thrust fault, we examine what constraints must be imposed on the coefficient of friction to produce realistic ruptures under the application of reasonable shear and normal stress distributions with depth. We found that variation of the coefficient of friction with the shear modulus and the depth produces realistic rupture behavior in both homogeneous and layered half-spaces. Furthermore, we observed a dependence of the rupture speed on the direction of propagation and fluctuations in the rupture speed and slip rate as the rupture encountered changes in the stress field. Including such behavior in prescribed ruptures would yield more realistic ground motions.

Relevance: 20.00%

Abstract:

Part I. Complexes of Biological Bases and Oligonucleotides with RNA

The physical nature of complexes of several biological bases and oligonucleotides with single-stranded ribonucleic acids has been studied by high-resolution proton magnetic resonance spectroscopy. The importance of various forces in the stabilization of these complexes is also discussed.

Previous work has shown that purine forms an intercalated complex with single-stranded nucleic acids. This complex formation led to severe and stereospecific broadening of the purine resonances. From the field dependence of the linewidths, T1 measurements of the purine protons and nuclear Overhauser enhancement experiments, the mechanism for the line broadening was ascertained to be dipole-dipole interactions between the purine protons and the ribose protons of the nucleic acid.

The interactions of ethidium bromide (EB) with several RNA residues have been studied. EB forms vertically stacked aggregates with itself as well as with uridine, 3'-uridine monophosphate and 5'-uridine monophosphate and forms an intercalated complex with uridylyl (3' → 5') uridine and polyuridylic acid (poly U). The geometry of EB in the intercalated complex has also been determined.

The effect of the chain length of oligo-A-nucleotides on their mode of interaction with poly U in D2O at neutral pD has also been studied. Below room temperature, ApA and ApApA form a rigid triple-stranded complex involving a stoichiometry of one adenine to two uracil bases, presumably via specific adenine-uracil base pairing and cooperative base stacking of the adenine bases. While no evidence was obtained for the interaction of ApA with poly U above room temperature, ApApA exhibited 1:1 complex formation with poly U by forming Watson-Crick base pairs. The thermodynamics of these systems are discussed.

Part II. Template Recognition and the Degeneracy of the Genetic Code

The interaction of ApApG and poly U was studied as a model system for the codon-anticodon interaction of tRNA and mRNA in vivo. ApApG was shown to interact with poly U below ~20°C. The interaction was of a 1:1 nature which exhibited the Hoogsteen bonding scheme. The three bases of ApApG are in an anti conformation and the guanosine base appears to be in the lactim tautomeric form in the complex.

Due to the inadequacies of previous models for the degeneracy of the genetic code in explaining the observed interactions of ApApG with poly U, the "tautomeric doublet" model is proposed as a possible explanation of the degenerate interactions of tRNA with mRNA during protein synthesis in vivo.