12 results for MULTIPLE DISPLACEMENT AMPLIFICATION

in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

Despite years of research on low-angle detachments, much about them remains enigmatic. This thesis addresses some of the uncertainty regarding two particular detachments, the Mormon Peak detachment in Nevada and the Heart Mountain detachment in Wyoming and Montana.

Constraints on the geometry and kinematics of emplacement of the Mormon Peak detachment are provided by detailed geologic mapping of the Meadow Valley Mountains, along with an analysis of structural data within the allochthon in the Mormon Mountains. Identifiable structures well suited to constrain the kinematics of the detachment include a newly mapped, Sevier-age monoclinal flexure in the hanging wall of the detachment. This flexure, including the syncline at its base and the anticline at its top, can be readily matched to the base and top of the frontal Sevier thrust ramp, which is exposed in the footwall of the detachment to the east in the Mormon Mountains and Tule Springs Hills. The ~12 km of offset of these structural markers precludes the radial sliding hypothesis for emplacement of the allochthon.

The role of fluids in slip along faults is a widely investigated topic, but the use of carbonate clumped-isotope thermometry to investigate these fluids is new. Fault rocks from within ~1 m of the Mormon Peak detachment, including veins, breccias, gouges, and host rocks, were analyzed for carbon, oxygen, and clumped-isotope compositions. The data indicate that much of the carbonate breccia and gouge material along the detachment is comminuted host rock, as expected. Measurements on vein material indicate that the fluid system is dominated by meteoric water, whose temperatures indicate circulation to substantial depths (c. 4 km) in the upper crust near the fault zone.
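The abstract does not give the calibration used; for orientation, clumped-isotope temperatures are conventionally obtained by inverting a calibration of the form Δ47 = A·10⁶/T² + B. A minimal sketch of that inversion follows, with placeholder coefficients chosen only so the example lands in the ambient range quoted below, not the thesis's calibration:

```python
# Illustrative conversion of a clumped-isotope measurement (Delta_47) to an
# apparent carbonate formation temperature. The standard calibration has the
# form Delta_47 = A * 1e6 / T**2 + B (T in kelvin); the coefficients below
# are placeholders for illustration only, not those used in the thesis.
import math

A = 0.04   # 1e6 * K^2 per permil -- hypothetical calibration slope
B = 0.23   # permil -- hypothetical calibration intercept

def clumped_isotope_temperature_c(delta_47: float) -> float:
    """Invert Delta_47 = A*1e6/T^2 + B for temperature in deg C."""
    t_kelvin = math.sqrt(A * 1e6 / (delta_47 - B))
    return t_kelvin - 273.15

# A vein carbonate with Delta_47 ~ 0.62 permil maps to roughly ambient
# upper-crustal temperatures under this illustrative calibration.
print(f"{clumped_isotope_temperature_c(0.62):.0f} deg C")
```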

Slip along the subhorizontal Heart Mountain detachment is particularly enigmatic, and many different mechanisms for failure have been proposed, predominantly involving catastrophic failure. Textural evidence of multiple slip events is abundant, and includes multiple brecciation events and cross-cutting clastic dikes. Footwall deformation is observed in numerous exposures of the detachment. Stylolitic surfaces and alteration textures within and around “banded grains,” previously interpreted as an indicator of high-temperature fluidization along the fault, suggest instead that the grains formed via low-temperature dissolution and alteration processes. There is abundant textural evidence of the significant role of fluids along the detachment via pressure solution. The process of pressure-solution creep may be responsible for enabling multiple slip events on the low-angle detachment, via a local rotation of the stress field.

Clumped-isotope thermometry of fault rocks associated with the Heart Mountain detachment indicates that despite its location on the flanks of a volcano that was active during slip, the majority of carbonate along the Heart Mountain detachment does not record significant heating above ambient temperatures (c. 40-70°C). Instead, cold meteoric fluids infiltrated the detachment breccia, and carbonate precipitated under ambient temperatures controlled by structural depth. Locally, fault gouge does preserve hot temperatures (>200°C), as is observed in both the Mormon Peak detachment and Heart Mountain detachment areas. Samples with very hot temperatures attributable to frictional shear heating are present but rare. They appear to be best preserved in hanging wall structures related to the detachment, rather than along the main detachment.

Evidence is presented for the prevalence of relatively cold, meteoric fluids along both shallow crustal detachments studied, and for protracted histories of slip along both detachments. Frictional heating is evident from both areas, but is a minor component of the preserved fault rock record. Pressure solution is evident, and might play a role in initiating slip on the Heart Mountain fault, and possibly other low-angle detachments.

Relevance:

30.00%

Publisher:

Abstract:

Over the last century, the silicon revolution has enabled us to build faster, smaller and more sophisticated computers. Today, these computers control phones, cars, satellites, assembly lines, and other electromechanical devices. Just as electrical wiring controls electromechanical devices, living organisms employ "chemical wiring" to make decisions about their environment and control physical processes. Currently, the big difference between these two substrates is that while we have the abstractions, design principles, verification and fabrication techniques in place for programming with silicon, we have no comparable understanding or expertise for programming chemistry.

In this thesis we take a small step towards the goal of learning how to systematically engineer prescribed non-equilibrium dynamical behaviors in chemical systems. We use the formalism of chemical reaction networks (CRNs), combined with mass-action kinetics, as our programming language for specifying dynamical behaviors. Leveraging the tools of nucleic acid nanotechnology (introduced in Chapter 1), we employ synthetic DNA molecules as our molecular architecture and toehold-mediated DNA strand displacement as our reaction primitive.
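To make the "CRNs as a programming language" idea concrete, the sketch below compiles a small reaction list into its mass-action ODE right-hand side; the example network and rate constants are assumptions for illustration, not reactions from the thesis.

```python
# A minimal sketch of mass-action semantics: each reaction contributes
# rate = k * product(reactant concentrations) to the species ODEs.
# The example network and rate constants are illustrative.
import numpy as np

# Reactions as (reactants, products, rate constant)
reactions = [
    ({"A": 1, "B": 1}, {"B": 2}, 1.0),   # autocatalytic: A + B -> 2B
    ({"B": 1}, {}, 0.1),                 # decay: B -> (nothing)
]
species = ["A", "B"]

def mass_action_rhs(x):
    """Return dx/dt for concentration vector x under mass-action kinetics."""
    conc = dict(zip(species, x))
    dxdt = {s: 0.0 for s in species}
    for reactants, products, k in reactions:
        rate = k * np.prod([conc[s] ** n for s, n in reactants.items()])
        for s, n in reactants.items():
            dxdt[s] -= n * rate
        for s, n in products.items():
            dxdt[s] += n * rate
    return np.array([dxdt[s] for s in species])

print(mass_action_rhs(np.array([1.0, 0.01])))  # initial net rates
```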

Abstraction, modular design and systematic fabrication can work only with well-understood and quantitatively characterized tools. Therefore, we embark on a detailed study of the "device physics" of DNA strand displacement (Chapter 2). We present a unified view of strand displacement biophysics and kinetics by studying the process at multiple levels of detail, using an intuitive model of a random walk on a 1-dimensional energy landscape, a secondary structure kinetics model with single base-pair steps, and a coarse-grained molecular model that incorporates three-dimensional geometric and steric effects. Further, we experimentally investigate the thermodynamics of three-way branch migration. Our findings are consistent with previously measured or inferred rates for hybridization, fraying, and branch migration, and provide a biophysical explanation of strand displacement kinetics. Our work paves the way for accurate modeling of strand displacement cascades, which would facilitate the simulation and construction of more complex molecular systems.
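The one-dimensional energy-landscape picture lends itself to a simple numerical illustration: treating branch migration as a random walk over N steps with absorbing boundaries, the probability of completing displacement satisfies a tridiagonal linear system. The step count and hopping rates below are illustrative, not the thesis's fitted values.

```python
# Sketch of the 1-D random-walk picture of strand displacement: branch
# migration is a walk over N positions with forward/backward hopping rates,
# absorbing at position N (displacement completes) and at 0 (the invader is
# ejected). Absorption probabilities satisfy a tridiagonal linear system.
import numpy as np

N = 20          # branch-migration steps (illustrative domain length)
k_fwd = 1.0     # forward hopping rate (illustrative)
k_bwd = 1.0     # backward hopping rate (illustrative; toehold bias omitted)

# p[i] = probability of reaching N before 0, starting from interior site i.
# Balance: (k_fwd + k_bwd) p[i] = k_fwd p[i+1] + k_bwd p[i-1]
A = np.zeros((N - 1, N - 1))
b = np.zeros(N - 1)
for row, i in enumerate(range(1, N)):
    A[row, row] = -(k_fwd + k_bwd)
    if i - 1 >= 1:
        A[row, row - 1] = k_bwd          # p[0] = 0, so the i=1 term drops
    if i + 1 <= N - 1:
        A[row, row + 1] = k_fwd
    else:
        b[row] -= k_fwd * 1.0            # p[N] = 1 (success boundary)

p = np.linalg.solve(A, b)
print(f"P(success from first step) = {p[0]:.3f}")  # 1/N for an unbiased walk
```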

In Chapters 3 and 4, we identify and overcome the crucial experimental challenges involved in using our general DNA-based technology for engineering dynamical behaviors in the test tube. In this process, we identify important design rules that inform our choice of molecular motifs and our algorithms for designing and verifying DNA sequences for our molecular implementation. We also develop flexible molecular strategies for "tuning" our reaction rates and stoichiometries in order to compensate for unavoidable non-idealities in the molecular implementation, such as imperfectly synthesized molecules and spurious "leak" pathways that compete with desired pathways.

We successfully implement three distinct autocatalytic reactions, which we then combine into a de novo chemical oscillator. Unlike biological networks, which use sophisticated evolved molecules (like proteins) to realize such behavior, our test tube realization is the first to demonstrate that Watson-Crick base pairing interactions alone suffice for oscillatory dynamics. Since our design pipeline is general and applicable to any CRN, our experimental demonstration of a de novo chemical oscillator could enable the systematic construction of CRNs with other dynamic behaviors.
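The abstract does not list the three autocatalytic reactions; a standard CRN with that structure is the "rock-paper-scissors" network, which oscillates under mass-action kinetics. The hedged sketch below simulates it; rate constants and initial concentrations are illustrative.

```python
# Rock-paper-scissors CRN of three autocatalytic reactions:
#   A + B -> 2B,  B + C -> 2C,  C + A -> 2A
# Under mass-action kinetics the concentrations cycle rather than settle.
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # common rate constant (illustrative)

def rps_crn(t, x):
    a, b, c = x
    return [k * (c * a - a * b),   # A produced from C + A, consumed by A + B
            k * (a * b - b * c),   # B produced from A + B, consumed by B + C
            k * (b * c - c * a)]   # C produced from B + C, consumed by C + A

sol = solve_ivp(rps_crn, (0.0, 50.0), [1.2, 1.0, 0.8], dense_output=True)
print(sol.y[:, -1])  # concentrations keep cycling rather than settling
```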

Relevance:

20.00%

Publisher:

Abstract:

Cells in the lateral intraparietal cortex (LIP) of rhesus macaques respond vigorously and in spatially tuned fashion to briefly memorized visual stimuli. Responses to stimulus presentation, memory maintenance, and task completion are seen, in varying combination from neuron to neuron. To help elucidate this functional segmentation, a new system for simultaneous recording from multiple neighboring neurons was developed. The two parts of this dissertation discuss the technical achievements and scientific discoveries, respectively.

Technology. Simultaneous recordings from multiple neighboring neurons were made with four-wire bundle electrodes, or tetrodes, which were adapted to the awake behaving primate preparation. Signals from these electrodes were partitionable into a background process with a 1/f-like spectrum and foreground spiking activity spanning 300-6000 Hz. Continuous voltage recordings were sorted into spike trains using a state-of-the-art clustering algorithm, producing a mean of 3 cells per site. The algorithm classified 96% of spikes correctly when tetrode recordings were validated against simultaneous intracellular signals. Recording locations were verified with a new technique that creates electrolytic lesions visible in magnetic resonance imaging, eliminating the need for histological processing. In anticipation of future multi-tetrode work, the chronic chamber microdrive, a device for long-term tetrode delivery, was developed.

Science. Simultaneously recorded neighboring LIP neurons were found to have similar preferred targets in the memory saccade paradigm, but dissimilar peristimulus time histograms (PSTHs). A majority of neighboring cell pairs had a difference in preferred directions of under 45°, while the trial time of maximal response showed a broader distribution, suggesting homogeneity of tuning with heterogeneity of function. A continuum of response characteristics was present, rather than a set of specific response types; however, a mapping experiment suggests this may be because a given cell's PSTH changes shape as well as amplitude through the response field. Spike train autocovariance was tuned over target and changed through trial epoch, suggesting different mechanisms during memory versus background periods. Mean frequency-domain spike-to-spike coherence was concentrated below 50 Hz with a significant maximum of 0.08; mean time-domain coherence had a narrow peak in the range ±10 ms with a significant maximum of 0.03. Time-domain coherence was found to be untuned for short lags (10 ms), but significantly tuned at larger lags (50 ms).
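A coherence analysis of this kind can be sketched as follows: bin two simultaneously recorded spike trains at 1 ms and estimate magnitude-squared coherence with Welch's method. The synthetic trains below stand in for real tetrode data; all parameters are assumptions.

```python
# Frequency-domain spike-to-spike coherence between two binned spike trains.
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(0)
fs = 1000.0                          # 1 ms bins -> 1 kHz sampling
t = np.arange(0, 60.0, 1.0 / fs)     # 60 s of data
shared = 0.02 * (1 + np.sin(2 * np.pi * 20 * t))   # common 20 Hz drive
train1 = rng.random(t.size) < shared + 0.01        # binary spike counts
train2 = rng.random(t.size) < shared + 0.01

f, Cxy = coherence(train1.astype(float), train2.astype(float),
                   fs=fs, nperseg=1024)
low = f < 50.0
print(f"peak coherence below 50 Hz: {Cxy[low].max():.3f}")
```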

Relevance:

20.00%

Publisher:

Abstract:

The Northridge earthquake of January 17, 1994, highlighted the two previously known problems of premature fracturing of connections and the damaging capabilities of near-source ground motion pulses. Large ground motions had not been experienced in a city with tall steel moment-frame buildings before. Some steel buildings exhibited fracture of welded connections or other types of structural degradation.

A sophisticated three-dimensional nonlinear inelastic program is developed that can accurately model many nonlinear properties commonly ignored or approximated in other programs. The program can assess and predict severely inelastic response of steel buildings due to strong ground motions, including collapse.

Three-dimensional fiber and segment discretization of elements is presented in this work. This element and its two-dimensional counterpart are capable of modeling various geometric and material nonlinearities such as moment amplification, spread of plasticity and connection fracture. In addition to introducing a three-dimensional element discretization, this work presents three-dimensional constraints that limit the number of equations required to solve various three-dimensional problems consisting of intersecting planar frames.
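A minimal sketch of the fiber-discretization concept, assuming an elastic-perfectly-plastic rectangular steel section (the dimensions and material values are illustrative, not from the thesis): slice the section into fibers, impose a curvature, evaluate each fiber's uniaxial stress, and integrate for the section moment.

```python
# Fiber-section moment-curvature for a rectangular steel section.
import numpy as np

E, fy = 200e9, 345e6           # steel modulus (Pa), yield stress (Pa)
b, h, n_fib = 0.30, 0.60, 100  # section width, depth (m), fiber count

y = (np.arange(n_fib) + 0.5) / n_fib * h - h / 2   # fiber centroids
area = b * h / n_fib                               # fiber areas

def section_moment(curvature: float) -> float:
    strain = -curvature * y                        # plane sections remain plane
    stress = np.clip(E * strain, -fy, fy)          # elastic-perfectly-plastic
    return float(np.sum(-stress * y * area))       # integrate stress * lever arm

for phi in (0.001, 0.005, 0.02):                   # curvatures in 1/m
    print(f"phi = {phi:.3f} 1/m  ->  M = {section_moment(phi)/1e3:.0f} kN*m")
```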

Two buildings damaged in the Northridge earthquake are investigated to verify the ability of the program to match the level of response and the extent and location of damage measured. The program is used to predict response of larger near-source ground motions using the properties determined from the matched response.

A third building is studied to assess three-dimensional effects on a realistic irregular building in the inelastic range of response considering earthquake directivity. Damage levels are observed to be significantly affected by directivity and torsional response.

Several strong recorded ground motions clearly exceed code-based levels. Properly designed buildings can have drifts exceeding code-specified levels due to these ground motions. The strongest ground motions caused collapse if fracture was included in the model. Near-source ground displacement pulses can cause columns to yield prior to the intentionally weaker beams. Damage in tall buildings correlates better with peak-to-peak displacements than with peak-to-peak accelerations.

Dynamic response of tall buildings shows that higher mode response can cause more damage than first mode response. Leaking of energy between modes in conjunction with damage can cause torsional behavior that is not anticipated.

Various response parameters are used for all three buildings to determine what correlations can be made for inelastic building response. Damage levels can be dramatically different based on the inelastic model used. Damage does not correlate well with several common response parameters.

Realistic modeling of material properties and structural behavior is of great value for understanding the performance of tall buildings due to earthquake excitations.

Relevance:

20.00%

Publisher:

Abstract:

The Supreme Court’s decision in Shelby County has severely limited the power of the Voting Rights Act. I argue that Congressional attempts to pass a new coverage formula are unlikely to gain the necessary Republican support. Instead, I propose a new strategy that takes a “carrot and stick” approach. As the stick, I suggest amending Section 3 to eliminate the need to prove that discrimination was intentional. For the carrot, I envision a competitive grant program similar to the highly successful Race to the Top education grants. I argue that this plan could pass the currently divided Congress.

Without Congressional action, Section 2 is more important than ever before. A successful Section 2 suit requires evidence that voting in the jurisdiction is racially polarized. Accurately and objectively assessing the level of polarization has been and continues to be a challenge for experts. Existing ecological inference methods require estimating polarization levels in individual elections. This is a problem because the Courts want to see a history of polarization across elections.

I propose a new 2-step method to estimate racially polarized voting in a multi-election context. The procedure builds upon the Rosen, Jiang, King, and Tanner (2001) multinomial-Dirichlet model. After obtaining election-specific estimates, I suggest regressing those results on election-specific variables, namely candidate quality, incumbency, and ethnicity of the minority candidate of choice. This allows researchers to estimate the baseline level of support for candidates of choice and test whether the ethnicity of the candidates affected how voters cast their ballots.
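A minimal sketch of the proposed second stage, assuming election-specific support estimates are already in hand from the first-stage ecological inference; the synthetic data and coefficient values below are placeholders, and the covariates mirror those named above.

```python
# Stage 2: regress election-specific estimates of minority support for the
# candidate of choice on election-level covariates. Data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_elections = 30
quality = rng.normal(0, 1, n_elections)         # candidate quality score
incumbent = rng.integers(0, 2, n_elections)     # incumbency indicator
coethnic = rng.integers(0, 2, n_elections)      # minority candidate of choice?
support = (0.55 + 0.03 * quality + 0.05 * incumbent
           + 0.10 * coethnic + rng.normal(0, 0.02, n_elections))

X = np.column_stack([np.ones(n_elections), quality, incumbent, coethnic])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(f"baseline support: {beta[0]:.3f}, co-ethnicity effect: {beta[3]:.3f}")
```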

Relevance:

20.00%

Publisher:

Abstract:

Detection of biologically relevant targets, including small molecules, proteins, DNA, and RNA, is vital for fundamental research as well as clinical diagnostics. Sensors with biological elements provide a natural foundation for such devices because of the inherent recognition capabilities of biomolecules. Electrochemical DNA platforms are simple, sensitive, and do not require complex target labeling or expensive instrumentation. Sensitivity and specificity are added to DNA electrochemical platforms when the physical properties of DNA are harnessed. The inherent structure of DNA, with its stacked core of aromatic bases, enables DNA to act as a wire via DNA-mediated charge transport (DNA CT). DNA CT is not only robust over long molecular distances of at least 34 nm, but is also especially sensitive to anything that perturbs proper base stacking, including DNA mismatches, lesions, or DNA-binding proteins that distort the π-stack. Electrochemical sensors based on DNA CT have previously been used for single-nucleotide polymorphism detection, hybridization assays, and DNA-binding protein detection. Here, improvements to (i) the structure of DNA monolayers and (ii) the signal amplification with DNA CT platforms for improved sensitivity and detection are described.

First, improvements to the control over DNA monolayer formation are reported through the incorporation of copper-free click chemistry into DNA monolayer assembly. As opposed to conventional film formation involving the self-assembly of thiolated DNA, copper-free click chemistry enables DNA to be tethered to a pre-formed mixed alkylthiol monolayer. The total amount of DNA in the final film is directly related to the amount of azide in the underlying alkylthiol monolayer. DNA monolayers formed with this technique are significantly more homogeneous and lower density, with a larger amount of individual helices exposed to the analyte solution. With these improved monolayers, significantly more sensitive detection of the transcription factor TATA binding protein (TBP) is achieved.

Using low-density DNA monolayers, two-electrode DNA arrays were designed and fabricated to enable the placement of multiple DNA sequences onto a single underlying electrode. To pattern DNA onto the primary electrode surface of these arrays, a copper precatalyst for click chemistry was electrochemically activated at the secondary electrode. The location of the secondary electrode relative to the primary electrode enabled the patterning of up to four sequences of DNA onto a single electrode surface. As opposed to conventional electrochemical readout from the primary, DNA-modified electrode, a secondary microelectrode, coupled with electrocatalytic signal amplification, enables more sensitive detection with spatial resolution on the DNA array electrode surface. Using this two-electrode platform, arrays have been formed that facilitate differentiation between well-matched and mismatched sequences, detection of transcription factors, and sequence-selective DNA hybridization, all with the incorporation of internal controls.

For effective clinical detection, the two working electrode platform was multiplexed to contain two complementary arrays, each with fifteen electrodes. This platform, coupled with low density DNA monolayers and electrocatalysis with readout from a secondary electrode, enabled even more sensitive detection from especially small volumes (4 μL per well). This multiplexed platform has enabled the simultaneous detection of two transcription factors, TBP and CopG, with surface dissociation constants comparable to their solution dissociation constants.
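A surface dissociation constant of the kind quoted above is typically extracted by fitting a titration series to a Langmuir binding isotherm; the sketch below illustrates that fit with synthetic data (S_max, K_D, and the concentration series are assumptions, not values from the thesis).

```python
# Fit signal change vs. protein concentration to a Langmuir isotherm,
# signal = S_max * [P] / (K_D + [P]), to extract K_D. Data are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def langmuir(conc, s_max, k_d):
    return s_max * conc / (k_d + conc)

conc_nM = np.array([1, 3, 10, 30, 100, 300, 1000], dtype=float)
true_smax, true_kd = 1.0, 50.0                   # illustrative values
rng = np.random.default_rng(2)
signal = langmuir(conc_nM, true_smax, true_kd) + rng.normal(0, 0.02, conc_nM.size)

popt, pcov = curve_fit(langmuir, conc_nM, signal, p0=[1.0, 100.0])
print(f"fitted K_D = {popt[1]:.0f} nM")
```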

With the sensitivity and selectivity obtained from the multiplexed, two working electrode array, an electrochemical signal-on assay for the activity of the human methyltransferase DNMT1 was incorporated. DNMT1 is the most abundant human methyltransferase, and its aberrant activity has been linked to the development of cancer. However, current methods to monitor methyltransferase activity are either ineffective with crude samples or impractical to develop for clinical applications due to a reliance on radioactivity. Electrochemical detection of methyltransferase activity, in contrast, circumvents these issues. The signal-on detection assay translates methylation events into electrochemical signals via a methylation-specific restriction enzyme. Using the two working electrode platform combined with this assay, DNMT1 activity from tumor and healthy adjacent tissue lysate was evaluated. Our electrochemical measurements revealed significant differences in methyltransferase activity between tumor tissue and healthy adjacent tissue.

As differential activity was observed between colorectal tumor tissue and healthy adjacent tissue, ten tumor sets were subsequently analyzed for DNMT1 activity both electrochemically and by tritium incorporation. These results were compared to expression levels of DNMT1, measured by qPCR, and total DNMT1 protein content, measured by Western blot. The only trend detected was hyperactivity in the tumor samples relative to the healthy adjacent tissue when measured electrochemically. These advances in DNA CT-based platforms have propelled this class of sensors from the purely academic realm into the realm of clinically relevant detection.

Relevance:

20.00%

Publisher:

Abstract:

Part I: Synthesis of L-Amino Acid Oxidase by a Serine- or Glycine-Requiring Strain of Neurospora

Wild-type cultures of Neurospora crassa growing on minimal medium contain low levels of L-amino acid oxidase, tyrosinase, and nicotinamide adenine dinucleotide glycohydrase (NADase). The enzymes are derepressed by starvation and by a number of other conditions which are inhibitory to growth. L-amino acid oxidase is, in addition, induced by growth on amino acids. A mutant which produces large quantities of both L-amino acid oxidase and NADase when growing on minimal medium was investigated. Constitutive synthesis of L-amino acid oxidase was shown to be inherited as a single gene, called P110, which is separable from constitutive synthesis of NADase. P110 maps near the centromere on linkage group IV.

L-amino acid oxidase produced constitutively by P110 was partially purified and compared to partially purified L-amino acid oxidase produced by derepressed wild-type cultures. The enzymes are identical with respect to thermostability and molecular weight as judged by gel filtration.

The mutant P110 was shown to be an incompletely blocked auxotroph which requires serine or glycine. None of the enzymes involved in the synthesis of serine from 3-phosphoglyceric acid or glyceric acid was found to be deficient in the mutant, however. An investigation of the free intracellular amino acid pools of P110 indicated that the mutant is deficient in serine, glycine, and alanine, and accumulates threonine and homoserine.

The relationship between the amino acid requirement of P110 and its synthesis of L-amino acid oxidase is discussed.

Part II: Studies Concerning Multiple Electrophoretic Forms of Tyrosinase in Neurospora

Supernumerary bands shown by some crude tyrosinase preparations in paper electrophoresis were investigated. Genetic analysis indicated that the location of the extra bands is determined by the particular T allele present. The presence of supernumerary bands varies with the method used to derepress tyrosinase production, and with the duration of derepression. The extra bands are unstable and may convert to the major electrophoretic band, suggesting that they result from modification of a single protein. Attempts to isolate the supernumerary bands by continuous flow paper electrophoresis or density gradient zonal electrophoresis were unsuccessful.

Relevance:

20.00%

Publisher:

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher, as well as the level of confidence in the model being analyzed, is greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free-vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, the comparisons were repeated in a program more capable of highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free-vibration analyses could not be conducted for the three-story one-bay chevron-braced frame, the two-bay chevron-braced frame, or the twenty-story moment frame. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
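As a sketch of what the free-vibration comparison entails, natural frequency and damping can be recovered from a decaying displacement trace via peak spacing and the logarithmic decrement. This is a generic identification recipe applied to a synthetic trace, not the STEEL or ETABS implementation.

```python
# Recover natural frequency and damping ratio from a free-vibration decay.
import numpy as np

fn, zeta, fs = 1.5, 0.02, 200.0                 # Hz, damping ratio, sample rate
t = np.arange(0, 20, 1 / fs)
wd = 2 * np.pi * fn * np.sqrt(1 - zeta**2)      # damped natural frequency
x = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(wd * t)

peaks = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1   # local maxima
period = np.mean(np.diff(t[peaks]))
delta = np.mean(np.log(x[peaks][:-1] / x[peaks][1:]))             # log decrement
zeta_est = delta / np.sqrt(4 * np.pi**2 + delta**2)
print(f"f_n ~ {1/period:.2f} Hz, damping ratio ~ {zeta_est:.3f}")
```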

Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis failed to converge following the onset of inelastic behavior. However, for the small number of time steps during which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. Such problems could be alleviated, however, by choosing a simpler material model.

Relevance:

20.00%

Publisher:

Abstract:

An experimental method combined with boundary layer theory is given for evaluating the added mass of a sphere moving along the axis of a circular cylinder filled with water or oil. The real fluid effects are separated from ideal fluid effects.

The experimental method consists essentially of a magnetic steel sphere propelled from rest by an electromagnetic coil in which the current is accurately controlled so that force is supplied only for a short time interval lying within the laminar-flow regime of the fluid. The motion of the sphere as a function of time is recorded on single-frame photographs using a short-arc multiple-flash lamp with accurately controlled time intervals between flashes.

A concept of the effect of boundary layer displacement on the fluid flow around a sphere is introduced to evaluate the real fluid effects on the added mass. Surprisingly accurate agreement between experiment and theory is achieved.
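In outline, the extraction reduces to Newton's second law during the impulsive start, m_a = F/a − m, compared against the unbounded ideal-fluid reference value of half the displaced mass, (1/2)ρV; wall proximity in the cylinder raises the added mass. The numbers in this sketch are illustrative, not the thesis's measurements.

```python
# Added mass of a sphere from a force balance during the impulsive start.
import math

rho = 1000.0    # water density, kg/m^3
r = 0.01        # sphere radius, m (illustrative)
m = 0.033       # steel sphere mass, kg (illustrative)
F = 0.42        # applied coil force, N (illustrative)
a_meas = 12.0   # measured initial acceleration, m/s^2 (illustrative)

V = 4.0 / 3.0 * math.pi * r**3
m_a_ideal = 0.5 * rho * V          # unbounded ideal-fluid added mass
m_a_meas = F / a_meas - m          # inferred from the measured motion
print(f"ideal: {m_a_ideal*1e3:.2f} g, measured: {m_a_meas*1e3:.2f} g")
```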

Relevance:

20.00%

Publisher:

Abstract:

The present work deals with the problem of the interaction of electromagnetic radiation with a statistical distribution of nonmagnetic dielectric particles immersed in an infinite, homogeneous, isotropic, nonmagnetic medium. The wavelength of the incident radiation can be less than, equal to, or greater than the linear dimension of a particle. The distance between any two particles is several wavelengths. A single particle in the absence of the others is assumed to scatter like a Rayleigh-Gans particle, i.e., interaction between the volume elements (self-interaction) is neglected. The interaction of the particles is taken into account (multiple scattering), and conditions are set up for the case of a lossless medium which guarantee that the multiple-scattering contribution is more important than the self-interaction one. These conditions relate the wavelength λ to the linear dimensions of a particle a and of the region occupied by the particles D. It is found that for constant λ/a, D is proportional to λ, and that |Δχ|, where Δχ is the difference in dielectric susceptibility between particle and medium, has to lie within a certain range.

The total scattered field is obtained as a series whose terms represent the corresponding multiple-scattering orders, the first term being the single-scattering contribution. The ensemble average of the total scattered intensity is then obtained as a series with no cross terms between different orders. Thus the waves corresponding to different orders are independent, and their Stokes parameters add.

The second- and third-order intensity terms are explicitly computed, and the method used suggests a general approach for computing any order. It is found that, in general, the first-order scattering intensity pattern (or phase function) peaks in the forward direction Θ = 0. The second order tends to smooth out the pattern, giving a maximum in the Θ = π/2 direction and minima in the Θ = 0 and Θ = π directions. This ceases to be true if ka (where k = 2π/λ) becomes large (> 20); for large ka the forward direction is further enhanced. Similar features are expected from the higher orders, even though the critical value of ka may increase with the order.
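The forward peaking and its growth with ka can be illustrated with the Rayleigh-Gans form factor of a sphere, F(q) = 3(sin qa − qa·cos qa)/(qa)³ with q = 2k·sin(Θ/2); the single-scattering intensity goes as |F|² (the Rayleigh dipole factor is omitted here for brevity). A minimal sketch:

```python
# Rayleigh-Gans sphere form factor vs. scattering angle for several ka.
import numpy as np

def form_factor(theta, ka):
    qa = 2.0 * ka * np.sin(theta / 2.0)
    qa = np.where(qa < 1e-3, 1e-3, qa)          # avoid 0/0 at theta = 0
    return 3.0 * (np.sin(qa) - qa * np.cos(qa)) / qa**3

theta = np.linspace(0.0, np.pi, 7)              # index 3 is theta = pi/2
for ka in (1.0, 5.0, 20.0):
    pattern = form_factor(theta, ka) ** 2
    print(f"ka = {ka:4.1f}: I(0)/I(pi/2) = {pattern[0] / pattern[3]:.1f}")
```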

The first-order polarization of the scattered wave is determined. The ensemble average of the Stokes parameters of the scattered wave is explicitly computed for the second order, and a similar method can be applied for any order. It is found that the polarization of the scattered wave depends on the polarization of the incident wave. If the latter is elliptically polarized, then the first-order scattered wave is elliptically polarized, except in the Θ = π/2 direction, where it is linearly polarized. If the incident wave is circularly polarized, the first-order scattered wave is elliptically polarized except in the directions Θ = π/2 (linearly polarized) and Θ = 0, π (circularly polarized). The handedness of the Θ = 0 wave is the same as that of the incident wave, whereas the handedness of the Θ = π wave is opposite. If the incident wave is linearly polarized, the first-order scattered wave is also linearly polarized. The second order makes the total scattered wave elliptically polarized for any Θ, no matter what the incident wave is; however, the handedness of the total scattered wave is not altered by the second order. Higher orders have effects similar to the second order.

If the medium is lossy, the general approach employed for the lossless case is still valid; only the algebra increases in complexity. It is found that the results of the lossless case are insensitive to first order in k_im D, where k_im is the imaginary part of the wave vector k and D is a linear characteristic dimension of the region occupied by the particles. Thus moderately extended regions and small losses make (k_im D)² ≪ 1, and the lossy character of the medium does not alter the results of the lossless case. In general, the presence of losses tends to reduce the forward scattering.

Relevance:

20.00%

Publisher:

Abstract:

The wave-theoretical analysis of acoustic and elastic waves refracted by a spherical boundary across which both velocity and density increase abruptly and thence either increase or decrease continuously with depth is formulated in terms of the general problem of waves generated at a steady point source and scattered by a radially heterogeneous spherical body. A displacement potential representation is used for the elastic problem that results in high frequency decoupling of P-SV motion in a spherically symmetric, radially heterogeneous medium. Through the application of an earth-flattening transformation on the radial solution and the Watson transform on the sum over eigenfunctions, the solution to the spherical problem for high frequencies is expressed as a Weyl integral for the corresponding half-space problem in which the effect of boundary curvature maps into an effective positive velocity gradient. The results of both analytical and numerical evaluation of this integral can be summarized as follows for body waves in the crust and upper mantle:

1) In the special case of a critical velocity gradient (a gradient equal and opposite to the effective curvature gradient), the critically refracted wave reduces to the classical head wave for flat, homogeneous layers.

2) For gradients more negative than critical, the amplitude of the critically refracted wave decays more rapidly with distance than the classical head wave.

3) For positive, null, and gradients less negative than critical, the amplitude of the critically refracted wave decays less rapidly with distance than the classical head wave, and at sufficiently large distances, the refracted wave can be adequately described in terms of ray-theoretical diving waves. At intermediate distances from the critical point, the spectral amplitude of the refracted wave is scalloped due to multiple diving wave interference.

These theoretical results, applied to published amplitude data for P-waves refracted by the major crustal and upper mantle horizons (the Pg, P*, and Pn travel-time branches), suggest that the 'granitic' upper crust, the 'basaltic' lower crust, and the mantle lid all have negative or near-critical velocity gradients in the tectonically active western United States. On the other hand, the corresponding horizons in the stable eastern United States appear to have null or slightly positive velocity gradients. The distribution of negative and positive velocity gradients correlates closely with high heat flow in tectonic regions and normal heat flow in stable regions. The velocity gradients inferred from the amplitude data are generally consistent with those inferred from ultrasonic measurements of the effects of temperature and pressure on crustal and mantle rocks and probable geothermal gradients. A notable exception is the strong positive velocity gradient in the mantle lid beneath the eastern United States (2 × 10⁻³ sec⁻¹), which appears to require a compositional gradient to counter the effect of even a small geothermal gradient.

New seismic-refraction data were recorded along an 800-km profile extending due south from the Canadian border across the Columbia Plateau into eastern Oregon. The source for the seismic waves was a series of 20 high-energy chemical explosions detonated by the Canadian government in Greenbush Lake, British Columbia. The first arrivals recorded along this profile are on the Pn travel-time branch. In northern Washington and central Oregon their travel time is described by T = Δ/8.0 + 7.7 sec, but in the Columbia Plateau the Pn arrivals are as much as 0.9 sec early with respect to this line. An interpretation of these Pn arrivals together with later crustal arrivals suggests that the crust under the Columbia Plateau is thinner by about 10 km and has a higher average P-wave velocity than the 35-km-thick, 6.2-km/sec crust under the granitic-metamorphic terrain of northern Washington. A tentative interpretation of later arrivals recorded beyond 500 km from the shots suggests that a thin 8.4-km/sec horizon may be present in the upper mantle beneath the Columbia Plateau and that this horizon may form the lid to a pronounced low-velocity zone extending to a depth of about 140 km.
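The thickness inference can be sanity-checked with the standard single-layer refraction formula τ = 2h·√(v₂² − v₁²)/(v₁v₂): using the values quoted above gives a thickness of the right order, though the thesis interpretation involves more structure than this simple model.

```python
# Single-layer head-wave check: a layer of velocity v1 over a half-space of
# velocity v2 has intercept time tau = 2 h sqrt(v2^2 - v1^2) / (v1 v2).
import math

v1, v2, tau = 6.2, 8.0, 7.7   # km/s, km/s, s (values from the abstract)

h = tau * v1 * v2 / (2.0 * math.sqrt(v2**2 - v1**2))
print(f"single-layer crustal thickness ~ {h:.0f} km")   # ~38 km

def pn_travel_time(delta_km: float) -> float:
    """Pn travel time T = delta / v2 + tau (the line quoted in the abstract)."""
    return delta_km / v2 + tau

print(f"T(400 km) = {pn_travel_time(400.0):.1f} s")
```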

Relevance:

20.00%

Publisher:

Abstract:

Structural design is a decision-making process in which a wide spectrum of requirements, expectations, and concerns needs to be properly addressed. Engineering design criteria are considered together with societal and client preferences, and most of these design objectives are affected by the uncertainties surrounding a design. Therefore, realistic design frameworks must be able to handle multiple performance objectives and incorporate uncertainties from numerous sources into the process.

In this study, a multi-criteria based design framework for structural design under seismic risk is explored. The emphasis is on reliability-based performance objectives and their interaction with economic objectives. The framework has analysis, evaluation, and revision stages. In the probabilistic response analysis, seismic loading uncertainties as well as modeling uncertainties are incorporated. For evaluation, two approaches are suggested: one based on preference aggregation and the other based on socio-economics. Both implementations of the general framework are illustrated with simple but informative design examples to explore the basic features of the framework.

The first approach uses concepts similar to those found in multi-criteria decision theory, and directly combines reliability-based objectives with others. This approach is implemented in a single-stage design procedure. In the socio-economics based approach, a two-stage design procedure is recommended in which societal preferences are treated through reliability-based engineering performance measures, but emphasis is also given to economic objectives because these are especially important to the structural designer's client. A rational net asset value formulation including losses from uncertain future earthquakes is used to assess the economic performance of a design. A recently developed assembly-based vulnerability analysis is incorporated into the loss estimation.
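A minimal sketch of a net-asset-value calculation of this general kind, assuming earthquake losses are summarized by an expected annual loss and discounted continuously over the building lifetime; all dollar figures and rates are illustrative, not the thesis's formulation.

```python
# Net asset value = benefits - construction cost - discounted expected losses.
import math

cost = 10.0e6            # construction cost, $ (illustrative)
benefit_rate = 1.2e6     # net annual income, $/yr (illustrative)
eal = 0.15e6             # expected annual earthquake loss, $/yr (illustrative)
r = 0.05                 # real discount rate, 1/yr
life = 50.0              # planning horizon, yr

pv = (1.0 - math.exp(-r * life)) / r    # continuous-discounting annuity factor
nav = -cost + (benefit_rate - eal) * pv
print(f"net asset value ~ ${nav/1e6:.1f}M")
```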

The presented performance-based design framework allows investigation of various design issues and their impact on a structural design. The framework is flexible, readily allowing the incorporation of new methods and concepts in seismic hazard specification, structural analysis, and loss estimation.