13 results for Persistence of ground cover

in CaltechTHESIS


Relevance: 100.00%

Abstract:

Current earthquake early warning systems usually make magnitude and location predictions and send out a warning to the users based on those predictions. We describe an algorithm that assesses the validity of the predictions in real-time. Our algorithm monitors the envelopes of horizontal and vertical acceleration, velocity, and displacement. We compare the observed envelopes with the ones predicted by Cua & Heaton's envelope ground motion prediction equations (Cua 2005). We define a "test function" as the logarithm of the ratio between observed and predicted envelopes at every second in real-time. Once the envelopes deviate beyond an acceptable threshold, we declare a misfit. Kurtosis and skewness of a time-evolving test function are used to rapidly identify a misfit. Real-time kurtosis and skewness calculations are also inputs to both probabilistic (Logistic Regression and Bayesian Logistic Regression) and nonprobabilistic (Least Squares and Linear Discriminant Analysis) models that ultimately decide if there is an unacceptable level of misfit. The algorithm is designed to work across a wide range of amplitude scales. When tested with synthetic and actual seismic signals from past events, it works for both small and large events.
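
For illustration, a minimal Python sketch of the test function and its running kurtosis and skewness follows; the envelope arrays, one-sample-per-second spacing, 30 s window, and all variable names are assumptions for the sketch, not values taken from the thesis.

import numpy as np
from scipy.stats import kurtosis, skew

def test_function(observed_env, predicted_env, eps=1e-12):
    # Logarithm of the ratio between observed and predicted envelopes.
    return np.log10((observed_env + eps) / (predicted_env + eps))

def misfit_statistics(tf, window=30):
    # Running kurtosis and skewness of the test function over a trailing window.
    out = []
    for t in range(window, len(tf) + 1):
        seg = tf[t - window:t]
        out.append((kurtosis(seg), skew(seg)))
    return np.array(out)

# Synthetic example, one envelope sample per second (illustrative values only):
rng = np.random.default_rng(0)
predicted = np.exp(-0.05 * np.arange(120.0))          # smooth predicted envelope
observed = predicted * rng.lognormal(0.0, 0.1, 120)   # observed with mild scatter
observed[80:] *= 5.0                                  # abrupt deviation -> misfit
print(misfit_statistics(test_function(observed, predicted))[-1])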

Relevance: 100.00%

Abstract:

In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.

For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear-wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.

To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with a suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse. Otherwise, it is expected to survive the ground motion.
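
A minimal Python sketch of this check for the two-dimensional (scalar capacity) case is given below; the filter order, cutoff frequency, capacity value, and the synthetic record are placeholders, whereas the thesis ties the order to the ground-motion type and the cutoff to the building's natural frequency and ductility.

import numpy as np
from scipy.signal import butter, filtfilt

def peak_filtered_acceleration(acc, dt, order=4, cutoff_hz=0.5):
    # Low-pass filter the acceleration history and return the peak absolute value.
    b, a = butter(order, cutoff_hz / (0.5 / dt), btype="low")
    return np.max(np.abs(filtfilt(b, a, acc)))

def predicted_to_collapse(acc, dt, capacity):
    # Collapse is predicted when the filtered peak exceeds the (scalar) capacity.
    return peak_filtered_acceleration(acc, dt) > capacity

# Synthetic long-period record sampled at 0.02 s, values in m/s^2 (placeholders):
dt = 0.02
t = np.arange(0.0, 40.0, dt)
acc = 3.0 * np.sin(2 * np.pi * 0.3 * t) * np.exp(-0.05 * t)
print(predicted_to_collapse(acc, dt, capacity=2.0))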

The parameters used in the PFA model, which include fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.

The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.

Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.

We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.

Relevance: 100.00%

Abstract:

A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
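
For orientation, a short Python sketch of the basic digital procedure (baseline correction of the digitized accelerogram followed by two numerical integrations) is given below; the linear detrend used here is just one common adjustment and is not claimed to be the standardized method the thesis recommends.

import numpy as np
from scipy.integrate import cumulative_trapezoid

def ground_motion_from_accelerogram(acc, dt):
    # Remove a least-squares linear baseline to limit drift from digitizing errors,
    # then integrate twice to obtain ground velocity and displacement.
    t = np.arange(len(acc)) * dt
    acc = acc - np.polyval(np.polyfit(t, acc, 1), t)
    vel = cumulative_trapezoid(acc, dx=dt, initial=0.0)
    disp = cumulative_trapezoid(vel, dx=dt, initial=0.0)
    return vel, disp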

Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.

Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.

Relevance: 100.00%

Abstract:

This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
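
The sketch below illustrates the interacting-Gaussian-process idea in Python: independent trajectory samples for each agent are re-weighted by an interaction potential that penalizes close approaches, and the robot's plan is read from the highest-weight joint sample. The Gaussian samples around straight-line means stand in for full GP posteriors, and the values of alpha and h are illustrative assumptions, not parameters from the thesis.

import numpy as np

rng = np.random.default_rng(1)
T, n_agents, n_samples = 20, 3, 500   # time steps, agents (robot = index 0), samples

# Stand-in predictive means: straight-line 2-D paths for each agent (placeholders).
starts = np.array([[0.0, 0.0], [5.0, 0.0], [2.5, 4.0]])
goals = np.array([[5.0, 5.0], [0.0, 5.0], [2.5, -1.0]])
tgrid = np.linspace(0.0, 1.0, T)[:, None]
means = np.stack([(1 - tgrid) * s + tgrid * g for s, g in zip(starts, goals)])

# Independent "GP" samples around each mean (noise scale is illustrative).
samples = means[None] + 0.3 * rng.standard_normal((n_samples, n_agents, T, 2))

def interaction_potential(joint, alpha=0.99, h=0.5):
    # Product over agent pairs and times of (1 - alpha * exp(-d^2 / (2 h^2))),
    # which down-weights joint trajectories where any two agents come close.
    w = 1.0
    for i in range(joint.shape[0]):
        for j in range(i + 1, joint.shape[0]):
            d2 = np.sum((joint[i] - joint[j]) ** 2, axis=-1)
            w *= np.prod(1.0 - alpha * np.exp(-d2 / (2.0 * h ** 2)))
    return w

weights = np.array([interaction_potential(s) for s in samples])
robot_plan = samples[np.argmax(weights), 0]   # robot trajectory from best joint sample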

Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators in crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.

Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.

Relevance: 100.00%

Abstract:

The geology and structure of two crustal scale shear zones were studied to understand the partitioning of strain within intracontinental orogenic belts. Movement histories and regional tectonic implications are deduced from observational data. The two widely separated study areas bear the imprint of intense Late Mesozoic through Middle Cenozoic tectonic activity. A regional transition from Late Cretaceous-Early Tertiary plutonism, metamorphism, and shortening strain to Middle Tertiary extension and magmatism is preserved in each area, with contrasting environments and mechanisms. Compressional phases of this tectonic history are better displayed in the Rand Mountains, whereas younger extensional structures dominate rock fabrics in the Magdalena area.

In the northwestern Mojave desert, the Rand Thrust Complex reveals a stack of four distinctive tectonic plates offset along the Garlock Fault. The lowermost plate, Rand Schist, is composed of greenschist facies metagraywacke, metachert, and metabasalt. Rand Schist is structurally overlain by Johannesburg Gneiss (= garnet-amphibolite grade orthogneisses, marbles and quartzites), which in turn is overlain by a Late Cretaceous hornblende-biotite granodiorite. Biotite granite forms the fourth and highest plate. Initial assembly of the tectonic stack involved a Late Cretaceous? south or southwest vergent overthrusting event in which Johannesburg Gneiss was imbricated and attenuated between Rand Schist and hornblende-biotite granodiorite. Thrusting postdated metamorphism and deformation of the lower two plates in separate environments. A post-kinematic stock, the Late Cretaceous Randsburg Granodiorite, intrudes deep levels of the complex and contains xenoliths of both Rand Schist and mylonitized Johannesburg? gneiss. Minimum shortening implied by the map patterns is 20 kilometers.

Some low angle faults of the Rand Thrust Complex formed or were reactivated between Late Cretaceous and Early Miocene time. South-southwest directed mylonites derived from Johannesburg Gneiss are commonly overprinted by less penetrative north-northeast vergent structures. Available kinematic information at shallower structural levels indicates that late disturbance(s) culminated in northward transport of the uppermost plate. Persistence of brittle fabrics along certain structural horizons suggests a possible association of late movement(s) with regionally known detachment faults. The four plates were juxtaposed and significant intraplate movements had ceased prior to Early Miocene emplacement of rhyolite porphyry dikes.

In the Magdalena region of north central Sonora, components of a pre-Middle Cretaceous stratigraphy are used as strain markers in tracking the evolution of a long lived orogenic belt. Important elements of the tectonic history include: (1) Compression during the Late Cretaceous and Early Tertiary, accompanied by plutonism, metamorphism, and ductile strain at depth, and thrust driven? syntectonic sedimentation at the surface. (2) Middle Tertiary transition to crustal extension, initially recorded by intrusion of leucogranites, inflation of the previously shortened middle and upper crustal section, and surface volcanism. (3) Gravity induced development of a normal sense ductile shear zone at mid crustal levels, with eventual detachment and southwestward displacement of the upper crustal stratigraphy by Early Miocene time.

Elucidation of the metamorphic core complex evolution just described was facilitated by fortuitous preservation of a unique assemblage of rocks and structures. The "type" stratigraphy utilized for regional correlation and strain analysis includes a Jurassic volcanic arc assemblage overlain by an Upper Jurassic-Lower Cretaceous quartz pebble conglomerate, in turn overlain by marine strata with fossiliferous Aptian-Albian limestones. The Jurassic strata, comprised of (a) rhyolite porphyries interstratified with quartz arenites, (b) rhyolite cobble conglomerate, and (c) intrusive granite porphyries, are known to rest on Precambrian basement north and east of the study area. The quartz pebble conglomerate is correlated with the Glance Conglomerate of southeastern Arizona and northeastern Sonora. The marine sequence represents part of an isolated arm? of the Bisbee Basin.

Crosscutting structural relationships between the pre-Middle Cretaceous supracrustal section, younger plutons, and deformational fabrics allow the tectonic sequence to be determined. Earliest phases of a Late Cretaceous-Early Tertiary orogeny are marked by emplacement of the 78 ± 3 Ma Guacomea Granodiorite (U/Pb zircon, Anderson et al., 1980) as a sill into deep levels of the layered Jurassic series. Subsequent regional metamorphism and ductile strain are recorded by a penetrative schistosity and lineation, and east-west trending folds. These fabrics are intruded by post-kinematic Early Tertiary? two mica granites. At shallower crustal levels, the orogeny is represented by north directed thrust faulting, formation of a large intermontane basin, and development of a pronounced unconformity. A second important phase of ductile strain followed Middle Tertiary? emplacement of leucogranites as sills and northwest trending dikes into intermediate levels of the deformed section (surficial volcanism was also active during this transitional period to regional extension). Gravitational instabilities resulting from crustal swelling via intrusion and thermal expansion led to development of a ductile shear zone within the stratigraphic horizon occupied by a laterally extensive leucogranite sill. With continued extension, upper crustal brittle normal faults (detachment faults) enhanced the uplift and tectonic denudation of this mylonite zone, ultimately resulting in southwestward displacement of the upper crustal stratigraphy.

Strains associated with the two ductile deformation events have been successfully partitioned through a multifaceted analysis. R_f/φ measurements on various markers from the "type" stratigraphy allow a gradient representing cumulative strain since Middle Cretaceous time to be determined. From this gradient, noncoaxial strains accrued since emplacement of the leucogranites may be removed. Irrotational components of the postleucogranite strain are measured from quartz grain shapes in deformed granites; rotational components (shear strains) are determined from S-C fabrics and from restoration of rotated dike and vein networks. Structural observations and strain data are compatible with a deformation path of: (1) coaxial strain (pure shear?), followed by (2) injection of leucogranites as dikes (perpendicular to the minimum principal stress) and sills (parallel to the minimum principal stress), then (3) southwest directed simple shear. Modeling the late strain gradient as a simple shear zone permits a minimum displacement of 10 kilometers on the Magdalena mylonite zone/detachment fault system. Removal of the Middle Tertiary noncoaxial strains yields a residual (or pre-existing) strain gradient representative of the Late Cretaceous-Early Tertiary deformation. Several partially destrained cross sections, restored to the time of leucogranite emplacement, illustrate the idea that the upper plate of the core complex has been detached from a region of significant topographic relief. 50% to 100% bulk extension across a 50 kilometer wide corridor is demonstrated.
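
For context, the displacement estimate quoted above follows the standard relation for a heterogeneous simple shear zone, in which displacement is the shear-strain profile integrated across the zone's width; this textbook form is included for orientation and is not quoted from the thesis:

D = \int_{0}^{w} \gamma(x)\, dx ,

where \gamma(x) is the shear strain measured at position x across a zone of width w.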

Late Cenozoic tectonics of the Magdalena region are dominated by Basin and Range style faulting. Northeast and north-northwest trending high angle normal faults have interacted to extend the crust in an east-west direction. Net extension for this period is minor (10% to 15%) in comparison to the Middle Tertiary detachment related extensional episode.

Relevance: 100.00%

Abstract:

The initial objective of Part I was to determine the nature of upper mantle discontinuities, the average velocities through the mantle, and differences between mantle structure under continents and oceans by the use of P'dP', the seismic core phase P'P' (PKPPKP) that reflects at depth d in the mantle. In order to accomplish this, it was found necessary to also investigate core phases themselves and their inferences on core structure. P'dP' at both single stations and at the LASA array in Montana indicates that the following zones are candidates for discontinuities with varying degrees of confidence: 800-950 km, weak; 630-670 km, strongest; 500-600 km, strong but interpretation in doubt; 350-415 km, fair; 280-300 km, strong, varying in depth; 100-200 km, strong, varying in depth, may be the bottom of the low-velocity zone. It is estimated that a single station cannot easily discriminate between asymmetric P'P' and P'dP' for lead times of about 30 sec from the main P'P' phase, but the LASA array reduces this uncertainty range to less than 10 sec. The problems of scatter of P'P' main-phase times, mainly due to asymmetric P'P', incorrect identification of the branch, and lack of the proper velocity structure at the velocity point, are avoided and the analysis shows that one-way travel of P waves through oceanic mantle is delayed by 0.65 to 0.95 sec relative to United States mid-continental mantle.

A new P-wave velocity core model is constructed from observed times, dt/dΔ's, and relative amplitudes of P'; the observed times of SKS, SKKS, and PKiKP; and a new mantle-velocity determination by Jordan and Anderson. The new core model is smooth except for a discontinuity at the inner-core boundary determined to be at a radius of 1215 km. Short-period amplitude data do not require the inner core Q to be significantly lower than that of the outer core. Several lines of evidence show that most, if not all, of the arrivals preceding the DF branch of P' at distances shorter than 143° are due to scattering as proposed by Haddon and not due to spherically symmetric discontinuities just above the inner core as previously believed. Calculation of the travel-time distribution of scattered phases and comparison with published data show that the strongest scattering takes place at or near the core-mantle boundary close to the seismic station.

In Part II, the largest events in the San Fernando earthquake series, initiated by the main shock at 14 00 41.8 GMT on February 9, 1971, were chosen for analysis from the first three months of activity, 87 events in all. The initial rupture location coincides with the lower, northernmost edge of the main north-dipping thrust fault and the aftershock distribution. The best focal mechanism fit to the main shock P-wave first motions constrains the fault plane parameters to: strike, N 67° (± 6°) W; dip, 52° (± 3°) NE; rake, 72° (67°-95°) left lateral. Focal mechanisms of the aftershocks clearly outline a downstep of the western edge of the main thrust fault surface along a northeast-trending flexure. Faulting on this downstep is left-lateral strike-slip and dominates the strain release of the aftershock series, which indicates that the downstep limited the main event rupture on the west. The main thrust fault surface dips at about 35° to the northeast at shallow depths and probably steepens to 50° below a depth of 8 km. This steep dip at depth is a characteristic of other thrust faults in the Transverse Ranges and indicates the presence at depth of laterally-varying vertical forces that are probably due to buckling or overriding that causes some upward redirection of a dominant north-south horizontal compression. Two sets of events exhibit normal dip-slip motion with shallow hypocenters and correlate with areas of ground subsidence deduced from gravity data. Several lines of evidence indicate that a horizontal compressional stress in a north or north-northwest direction was added to the stresses in the aftershock area 12 days after the main shock. After this change, events were contained in bursts along the downstep and sequencing within the bursts provides evidence for an earthquake-triggering phenomenon that propagates with speeds of 5 to 15 km/day. Seismicity before the San Fernando series and the mapped structure of the area suggest that the downstep of the main fault surface is not a localized discontinuity but is part of a zone of weakness extending from Point Dume, near Malibu, to Palmdale on the San Andreas fault. This zone is interpreted as a decoupling boundary between crustal blocks that permits them to deform separately in the prevalent crustal-shortening mode of the Transverse Ranges region.

Relevance: 100.00%

Abstract:

Thrust fault earthquakes are investigated in the laboratory by generating dynamic shear ruptures along pre-existing frictional faults in rectangular plates. A considerable body of evidence suggests that dip-slip earthquakes exhibit enhanced ground motions in the acute hanging wall wedge as an outcome of broken symmetry between hanging and foot wall plates with respect to the earth surface. To understand the physical behavior of thrust fault earthquakes, particularly ground motions near the earth surface, ruptures are nucleated in analog laboratory experiments and guided up-dip towards the simulated earth surface. The transient slip event and emitted radiation mimic a natural thrust earthquake. High-speed photography and laser velocimeters capture the rupture evolution, outputting a full-field view of photo-elastic fringe contours proportional to maximum shearing stresses as well as continuous ground motion velocity records at discrete points on the specimen. Earth surface-normal measurements validate selective enhancement of hanging wall ground motions for both sub-Rayleigh and super-shear rupture speeds. The earth surface breaks upon rupture tip arrival to the fault trace, generating prominent Rayleigh surface waves. A rupture wave is sensed in the hanging wall but is, however, absent from the foot wall plate: a direct consequence of proximity from fault to seismometer. Signatures in earth surface-normal records attenuate with distance from the fault trace. Super-shear earthquakes feature greater amplitudes of ground shaking profiles, as expected from the increased tectonic pressures required to induce super-shear transition. Paired stations measure fault parallel and fault normal ground motions at various depths, which yield slip and opening rates through direct subtraction of like components. Peak fault slip and opening rates associated with the rupture tip increase with proximity to the fault trace, a result of selective ground motion amplification in the hanging wall. Fault opening rates indicate that the hanging and foot walls detach near the earth surface, a phenomenon promoted by a decrease in magnitude of far-field tectonic loads. Subsequent shutting of the fault sends an opening pulse back down-dip. In case of a sub-Rayleigh earthquake, feedback from the reflected S wave re-ruptures the locked fault at super-shear speeds, providing another mechanism of super-shear transition.
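
A minimal Python sketch of the paired-station reduction mentioned above follows; the array names are illustrative, and the only operation is the direct subtraction of like velocity components recorded on the hanging wall and foot wall sides of the fault.

import numpy as np

def slip_and_opening_rates(hw_parallel, fw_parallel, hw_normal, fw_normal):
    # Fault slip rate: difference of fault-parallel particle velocities across the fault.
    # Fault opening rate: difference of fault-normal particle velocities across the fault.
    slip_rate = np.asarray(hw_parallel) - np.asarray(fw_parallel)
    opening_rate = np.asarray(hw_normal) - np.asarray(fw_normal)
    return slip_rate, opening_rate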

Relevance: 100.00%

Abstract:

Part I of this thesis deals with three topics concerning the luminescence from bound multi-exciton complexes in Si. Part II presents a model for the decay of electron-hole droplets in pure and doped Ge.

Part I.

We present high resolution photoluminescence data for Si doped with Al, Ga, and In. We observe emission lines due to recombination of electron-hole pairs in bound excitons and satellite lines which have been interpreted in terms of complexes of several excitons bound to an impurity. The bound exciton luminescence in Si:Ga and Si:Al consists of three emission lines due to transitions from the ground state and two low lying excited states. In Si:Ga, we observe a second triplet of emission lines which precisely mirror the triplet due to the bound exciton. This second triplet is interpreted as due to decay of a two exciton complex into the bound exciton. The observation of the second complete triplet in Si:Ga conclusively demonstrates that more than one exciton will bind to an impurity. Similar results are found for Si:Al. The energies of the lines show that the second exciton is less tightly bound than the first in Si:Ga. Other lines are observed at lower energies. The assumption of ground-state to ground-state transitions for the lower energy lines is shown to produce a complicated dependence of binding energy of the last exciton on the number of excitons in a complex. No line attributable to the decay of a two exciton complex is observed in Si:In.

We present measurements of the bound exciton lifetimes for the four common acceptors in Si and for the first two bound multi-exciton complexes in Si:Ga and Si:Al. These results are shown to be in agreement with a calculation by Osbourn and Smith of Auger transition rates for acceptor bound excitons in Si. Kinetics determine the relative populations of complexes of various sizes and work functions, at temperatures which do not allow them to thermalize with respect to one another. It is shown that kinetic limitations may make it impossible to form two-exciton complexes in Si:In from a gas of free excitons.

We present direct thermodynamic measurements of the work functions of bound multi-exciton complexes in Al, B, P and Li doped Si. We find that in general the work functions are smaller than previously believed. These data remove one obstacle to the bound multi-exciton complex picture which has been the need to explain the very large apparent work functions for the larger complexes obtained by assuming that some of the observed lines are ground-state to ground-state transitions. None of the measured work functions exceed that of the electron-hole liquid.

Part II.

A new model for the decay of electron-hole-droplets in Ge is presented. The model is based on the existence of a cloud of droplets within the crystal and incorporates exciton flow among the drops in the cloud and the diffusion of excitons away from the cloud. It is able to fit the experimental luminescence decays for pure Ge at different temperatures and pump powers while retaining physically reasonable parameters for the drops. It predicts the shrinkage of the cloud at higher temperatures which has been verified by spatially and temporally resolved infrared absorption experiments. The model also accounts for the nearly exponential decay of electron-hole-droplets in lightly doped Ge at higher temperatures.

Relevance: 100.00%

Abstract:

Few credible source models are available for large-magnitude past earthquakes. A stochastic source model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures as imaged in laboratory earthquakes with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.90 earthquake and a kinematic finite-source inversion of an equivalent magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault. Two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings hypothetically located at each of the 636 sites under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding Immediate Occupancy (IO), Life-Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
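
A small Python sketch of the final probability combination is shown below; it assumes, purely for illustration, that scenario occurrences can be treated as independent and that each 3-D analysis yields a yes/no exceedance flag per site, with all numbers being placeholders rather than results from the thesis.

import numpy as np

def prob_exceedance_30yr(event_probs, exceeds):
    # event_probs: 30-year occurrence probabilities apportioned to the scenario ruptures.
    # exceeds: True where the building response exceeds the performance level for that scenario.
    event_probs = np.asarray(event_probs, dtype=float)
    exceeds = np.asarray(exceeds, dtype=bool)
    # Under an independence assumption, exceedance occurs unless none of the
    # damaging scenarios occurs within the 30-year window.
    return 1.0 - np.prod(1.0 - event_probs[exceeds])

# Placeholder probabilities and flags, not values from the thesis:
print(prob_exceedance_30yr([0.02, 0.01, 0.005], [True, False, True]))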

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and displacement (PGD) in Los Angeles and surrounding basins due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) primarily rupturing the mid-section of the San Andreas fault, are considered for the PGV and PGD data.

Relevance: 100.00%

Abstract:

Topological superconductors are particularly interesting in light of the active ongoing experimental efforts for realizing exotic physics such as Majorana zero modes. These systems have excitations with non-Abelian exchange statistics, which provides a path towards topological quantum information processing. Intrinsic topological superconductors are quite rare in nature. However, one can engineer topological superconductivity by inducing effective p-wave pairing in materials which can be grown in the laboratory. One possibility is to induce the proximity effect in topological insulators; another is to use hybrid structures of superconductors and semiconductors.

The proposal of interfacing s-wave superconductors with quantum spin Hall systems provides a promising route to engineered topological superconductivity. Given the exciting recent progress on the fabrication side, identifying experiments that definitively expose the topological superconducting phase (and clearly distinguish it from a trivial state) becomes an increasingly important problem. With this goal in mind, we proposed a detection scheme to get an unambiguous signature of topological superconductivity, even in the presence of ordinarily detrimental effects such as thermal fluctuations and quasiparticle poisoning. We considered a Josephson junction built on top of a quantum spin Hall material. This system allows the proximity effect to turn edge states into effective topological superconductors. Such a setup is promising because experimentalists have demonstrated that supercurrents indeed flow through quantum spin Hall edges. To demonstrate the topological nature of the superconducting quantum spin Hall edges, theorists have proposed examining the periodicity of Josephson currents with respect to the phase across a Josephson junction. The periodicity of tunneling currents of ground states in a topological superconductor Josephson junction is double that of a conventional Josephson junction. In practice, this modification of periodicity is extremely difficult to observe because noise sources, such as quasiparticle poisoning, wash out the signature of topological superconductors. For this reason, we propose a new, relatively simple DC measurement that can compellingly reveal topological superconductivity in such quantum spin Hall/superconductor heterostructures. More specifically, we develop a general framework for capturing the junction's current-voltage characteristics as a function of applied magnetic flux. Our analysis reveals sharp signatures of topological superconductivity in the field-dependent critical current. These signatures include the presence of multiple critical currents and a non-vanishing critical current for all magnetic field strengths, providing a reliable identification scheme for topological superconductivity.
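
For reference, the doubled periodicity referred to above is conventionally summarized by the current-phase relations (standard textbook forms, not expressions taken from the thesis):

I_{\mathrm{trivial}}(\varphi) = I_c \sin\varphi \quad (2\pi\text{-periodic}), \qquad I_{\mathrm{topological}}(\varphi) \propto \pm\sin(\varphi/2) \quad (4\pi\text{-periodic}),

where the 4π-periodic branch survives only in the absence of quasiparticle poisoning, which is why the DC scheme described above avoids relying on it directly.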

This system becomes more interesting as interactions between electrons are involved. By modeling edge states as a Luttinger liquid, we find conductance provides universal signatures to distinguish between normal and topological superconductors. More specifically, we use renormalization group methods to extract universal transport characteristics of superconductor/quantum spin Hall heterostructures where the native edge states serve as a lead. Interestingly, arbitrarily weak interactions induce qualitative changes in the behavior relative to the free-fermion limit, leading to a sharp dichotomy in conductance for the trivial (narrow superconductor) and topological (wide superconductor) cases. Furthermore, we find that strong interactions can in principle induce parafermion excitations at a superconductor/quantum spin Hall junction.

Having identified the existence of a topological superconductor, we can take a step further. One can use a topological superconductor to realize Majorana modes by breaking time-reversal symmetry. An advantage of 2D topological insulators is that the networks required for braiding Majoranas along the edge channels can be obtained by adjoining 2D topological insulators to form corner junctions. Physically cutting quantum wells for this purpose, however, presents technical challenges. For this reason, I propose a more accessible means of forming networks that relies on dynamically manipulating the location of edge states inside of a single 2D topological insulator sheet. In particular, I show that edge states can effectively be dragged into the system's interior by gating a region near the edge into a metallic regime and then removing the resulting gapless carriers via proximity-induced superconductivity. This method allows one to construct rather general quasi-1D networks along which Majorana modes can be exchanged by electrostatic means.

Apart from 2D topological insulators, Majorana fermions can also be generated in other more accessible materials such as semiconductors. Following up on a suggestion by experimentalist Charlie Marcus, I proposed a novel geometry to create Majorana fermions by placing a 2D electron gas in proximity to an interdigitated superconductor-ferromagnet structure. This architecture evades several manufacturing challenges by allowing single-side fabrication and widening the class of 2D electron gases that may be used, such as the surface states of bulk semiconductors. Furthermore, it naturally allows one to trap and manipulate Majorana fermions through the application of currents. Thus, this structure may lead to the development of a circuit that enables fully electrical manipulation of topologically-protected quantum memory. To reveal these exotic Majorana zero modes, I also proposed an interference scheme to detect Majorana fermions that is broadly applicable to any 2D topological superconductor platform.

Relevance: 100.00%

Abstract:

The purpose of this investigation was to determine whether landslides could be predicted for hill slopes of known inclinations from data secured by laboratory tests performed on samples of the ground under consideration. Specifically, the investigation was to show whether a correlation existed between experimentally determined values for friction and cohesion of ground and calculated values based upon the configuration of earth masses that had slid. The ability to determine the stability of slopes from experimental data is of obvious significance.
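
For orientation, the friction-cohesion calculation described above is conventionally framed with the Mohr-Coulomb strength criterion and, for a dry planar slide, an infinite-slope factor of safety; these standard relations are included for illustration and are not asserted to be the formulation used in the thesis:

\tau_f = c + \sigma_n \tan\phi, \qquad \mathrm{FS} = \frac{c + \gamma z \cos^2\beta \,\tan\phi}{\gamma z \sin\beta \cos\beta},

where c is the cohesion, \phi the friction angle, \gamma the unit weight of the ground, z the depth to the slip surface, and \beta the slope inclination; sliding is indicated when \mathrm{FS} < 1.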

Relevance: 100.00%

Abstract:

The cytolytic interaction of Polyoma virus with mouse embryo cells has been studied by radiobiological methods known to distinguish temperate from virulent bacteriophage. No evidence for "temperate" properties of Polyoma was found. During the course of these studies, it was observed that the curve of inactivation of Polyoma virus by ultraviolet light had two components: a more sensitive one at low doses, and a less sensitive one at higher doses. Virus which survives a low dose has an eclipse period similar to that of unirradiated virus, while virus surviving higher doses shows a significantly longer eclipse period. If puromycin is present during the early part of the eclipse period, the survival curve becomes a single exponential with the sensitivity of the less sensitive component. These results suggest a repair mechanism in mouse cells which operates more effectively if virus development is delayed.

A comparison of the rates of inactivation of the cytolytic and transforming abilities of Polyoma by ultraviolet light, X-rays, nitrous acid treatment, or the decay of incorporated P32, showed that the transforming ability has a target size roughly 60% of that of the plaque-forming ability. It is thus concluded that only a fraction of the viral genes are necessary for causing transformation.
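
The target-size comparison rests on the usual single-hit relation between dose and survival, in which the inactivation constant scales with the sensitive volume; this standard form is included for orientation, not quoted from the thesis:

S(D) = e^{-kD}, \qquad \frac{V_{\text{transform}}}{V_{\text{plaque}}} \approx \frac{k_{\text{transform}}}{k_{\text{plaque}}} \approx 0.6 .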

The appearance of virus-specific RNA in productively infected mouse kidney cells has been followed by means of hybridization between pulse-labelled RNA from the infected cells and the purified virus DNA. The results show a sharp increase in the amount of virus-specific RNA around the time of virus DNA synthesis. The presence of a small amount of virus-specific RNA in virus-free transformed cells has also been shown. This result offers strong evidence for the persistence of at least part of the viral genome in transformed cells.

Relevance: 100.00%

Abstract:

I. PHOSPHORESCENCE AND THE TRUE LIFETIME OF TRIPLET STATES IN FLUID SOLUTIONS

Phosphorescence has been observed in a highly purified fluid solution of naphthalene in 3-methylpentane (3-MP). The phosphorescence lifetime of C10H8 in 3-MP at -45 °C was found to be 0.49 ± 0.07 sec, while that of C10D8 under identical conditions is 0.64 ± 0.07 sec. At this temperature 3-MP has the same viscosity (0.65 centipoise) as that of benzene at room temperature. It is believed that even these long lifetimes are dominated by impurity quenching mechanisms. Therefore it seems that the radiationless decay times of the lowest triplet states of simple aromatic hydrocarbons in liquid solutions are sensibly the same as those in the solid phase. A slight dependence of the phosphorescence lifetime on solvent viscosity was observed in the temperature region, -60° to -18°C. This has been attributed to the diffusion-controlled quenching of the triplet state by residual impurity, perhaps oxygen. Bimolecular depopulation of the triplet state was found to be of major importance over a large part of the triplet decay.
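
The competing decay channels described above can be summarized in a single rate expression; this is a standard kinetic form written here for orientation, with the diffusion-controlled quenching constant carrying the T/\eta viscosity dependence, and it is not quoted from the thesis:

-\frac{d[T_1]}{dt} = \bigl(k_0 + k_q[Q]\bigr)[T_1] + k_{TT}[T_1]^2, \qquad k_q \propto \frac{T}{\eta},

where k_0 is the unimolecular (radiative plus radiationless) decay constant, k_q[Q] the diffusion-controlled quenching by residual impurity such as oxygen, and k_{TT} the bimolecular triplet-triplet term.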

The lifetime of triplet C10H8 at room temperature was also measured in highly purified benzene by means of both phosphorescence and triplet-triplet absorption. The lifetime was estimated to be at least ten times shorter than that in 3-MP. This is believed to be due not only to residual impurities in the solvent but also to small amounts of impurities produced through unavoidable irradiation by the excitation source. In agreement with this idea, lifetime shortening caused by intense flashes of light is readily observed. This latter result suggests that experiments employing flash lamp techniques are not suitable for these kinds of studies.

The theory of radiationless transitions, based on Robinson's theory, is briefly outlined. A simple theoretical model derived from Fano's theory of autoionization gives an identical result.

II. WHY IS CONDENSED OXYGEN BLUE?

The blue color of oxygen is mostly derived from double transitions. This paper presents a theoretical calculation of the intensity of the double transition (a 1Δg) (a 1Δg)←(X 3Σg-) (X 3Σg-), using a model based on a pair of oxygen molecules at a fixed separation of 3.81 Å. The intensity enhancement is assumed to be derived from the mixing (a 1Δg) (a 1Δg) ~~~ (X 3Σg-) (X 3Σu-) and (a 1Δg) (1Δu) ~~~ (X 3Σg-) (X 3Σg-). Matrix elements for these interactions are calculated using a π-electron approximation for the pair system. Good molecular wavefunctions are used for all but the perturbing (B 3Σu-) state, which is approximated in terms of ground state orbitals. The largest contribution to the matrix elements arises from large intramolecular terms multiplied by intermolecular overlap integrals. The strength of interaction depends not only on the intermolecular separation of the two oxygen molecules, but also as expected on the relative orientation. Matrix elements are calculated for different orientations, and the angular dependence is fit to an analytical expression. The theory therefore not only predicts an intensity dependence on density but also one on phase at constant density. Agreement between theory and available experimental results is satisfactory considering the nature of the approximation, and indicates the essential validity of the overall approach to this interesting intensity enhancement problem.