876 results for fault propagation


Relevance: 20.00%

Abstract:

Earthquake zones in the upper crust are usually more conductive than the surrounding rocks, and electrical geophysical measurements can be used to map these zones. Magnetotelluric (MT) measurements across fault zones that run parallel to the coast and lie not too far from it can also provide important information about the lower crust. The long-period electric currents originating in the ocean gradually leak into the mantle, but the lower crust is usually very resistive and very little leakage takes place. If a lower crustal zone is less resistive it acts as a leakage zone, and this can be detected because the MT phase changes as the ocean currents leave the upper crust. The San Andreas Fault is parallel to the ocean boundary and close enough to it that a substantial amount of ocean-derived current crosses the fault zone. After the earthquake, the Loma Prieta zone showed strong leakage of ocean electric currents, suggesting that the lower crust beneath the fault zone was much more conductive than normal. It is unlikely that the water responsible for this conductivity had time to penetrate the lower crustal zone, so it was probably always there, but not well connected. If this is true, the poorly connected water would be at a pressure close to the rock pressure, and it may play a role in modifying the fluid pressure in the upper crustal fault zone. We also have telluric measurements across the San Andreas Fault near Palmdale from 1979 to 1990; beginning in 1985 we observed changes in the telluric signals on the fault zone and east of it relative to the signals west of the fault zone. These measurements probably reflect an improving connection of the lower crustal fluids, which may result in fluid flow from the lower crust to the upper crust and could be a factor in changing the strength of the upper crustal fault zone.
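
For context, the quantities monitored in MT surveys are the impedance, apparent resistivity, and phase; these are standard magnetotelluric definitions, not expressions taken from this study:

Z(\omega) = E_x(\omega) / H_y(\omega), \qquad \rho_a(\omega) = |Z(\omega)|^2 / (\mu_0 \omega), \qquad \phi(\omega) = \arg Z(\omega).

A conductive leakage path in the lower crust therefore appears as a frequency-dependent shift in the phase \phi as the ocean-derived currents exit the upper crust.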

Relevance: 20.00%

Abstract:

The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip-rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, the scaling of Dc is presently an open question, and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from the sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs as a function of time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes, including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
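
For reference, a widely used form of such a rate- and state-dependent friction law is the Dieterich-Ruina formulation with the "aging" state evolution law (standard notation, not symbols defined in this abstract beyond Dc):

\tau = \sigma \left[ \mu_0 + a \ln(V/V_0) + b \ln(V_0 \theta / D_c) \right], \qquad \frac{d\theta}{dt} = 1 - \frac{V \theta}{D_c},

where V is the slip rate, \theta the state variable, \sigma the effective normal stress, and D_c the characteristic sliding distance over which the state, and hence the frictional strength, evolves.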

Relevance: 20.00%

Abstract:

Based on recent high-resolution laboratory experiments on propagating shear rupture, the constitutive law that governs shear rupture processes is discussed in view of physical principles and constraints, and a specific constitutive law is proposed for shear rupture. It is demonstrated that nonuniform distributions of the constitutive law parameters on the fault are necessary for creating the nucleation process, which consists of two phases: (i) a stable, quasi-static phase, and (ii) the subsequent accelerating phase. Physical models of the breakdown zone and the nucleation zone are presented for shear rupture in the brittle regime. The constitutive law for shear rupture explicitly includes a scaling parameter Dc that enables a common interpretation of both small-scale rupture in the laboratory and large-scale rupture as an earthquake source in the Earth. Both the breakdown zone size Xc and the nucleation zone size L are prescribed and scaled by Dc, which in turn is prescribed by a characteristic length λc representing geometrical irregularities of the fault. The models presented here make it possible to understand the earthquake generation process, from nucleation to unstable, dynamic rupture propagation, in physical terms. Since the nucleation process itself is an immediate earthquake precursor, a deep physical understanding of the nucleation process is crucial for short-term (or immediate) earthquake prediction.
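
As a minimal illustration of a slip-dependent constitutive law of this kind (a simple linear slip-weakening form, not necessarily the specific law proposed in this work), the shear traction \tau may be written as a function of slip D:

\tau(D) = \tau_p - (\tau_p - \tau_r) \, D / D_c \quad (0 \le D \le D_c), \qquad \tau(D) = \tau_r \quad (D > D_c),

where \tau_p is the peak strength, \tau_r the residual strength, and D_c the breakdown (slip-weakening) distance; the breakdown zone is the region behind the rupture front where the slip is still below D_c and the traction is still degrading.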

Relevance: 20.00%

Abstract:

We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. This results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events, but have not been found to show small-event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small-event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
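
As a rough guide, and only as a scaling estimate rather than the exact expressions used in this work, the nucleation (coherent slip patch) size scales as

h^* \sim G \, D_c / \Delta\tau_b, \qquad \Delta\tau_b \approx (b - a)\,\sigma \ \text{for rate- and state-dependent friction},

where G is the shear modulus, D_c the characteristic slip-weakening distance, and \Delta\tau_b the breakdown strength drop; the dimensionless prefactor depends on the particular friction law and geometry. The numerical cell size must be several times smaller than h^* to resolve the continuum problem, whereas cells larger than h^* make the model inherently discrete.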

Relevance: 20.00%

Abstract:

Although models of homogeneous faults develop seismicity that has a Gutenberg-Richter distribution, this is only a transient state that is followed by events strongly influenced by the nature of the boundaries. Models with geometrical inhomogeneities of fracture thresholds can limit the sizes of earthquakes but then favor the characteristic earthquake model for large earthquakes. The character of the seismicity is extremely sensitive to the distributions of inhomogeneities, suggesting that statistical rules for large earthquakes in one region may not be applicable to large earthquakes in another region. Model simulations on simple networks of faults with inhomogeneities of threshold develop episodes of lacunarity on all members of the network. There is no validity to the popular assumption that the average rate of slip on individual faults is constant. Intermediate-term precursory activity, such as local quiescence and increases in intermediate-magnitude activity at long range, is simulated well under the assumption that the dominant process is strong weakening of faults by the injection of fluids and the weakening of asperities in inhomogeneous models of fault networks; the heat flow paradox, the orientation of the stress field, and the low average stress drop in some earthquakes are understood in terms of the asperity model of inhomogeneous faulting.
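
For context, the Gutenberg-Richter frequency-size relation referred to here is

\log_{10} N(\ge M) = a - b M,

where N(\ge M) is the number of events with magnitude at least M, a measures the overall activity rate, and b (typically near 1) sets the relative abundance of small versus large events; the characteristic earthquake model departs from this through an excess of events at a preferred, fault-specific magnitude.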

Relevance: 20.00%

Abstract:

The interdependence between the geometry of a fault system, its kinematics, and its seismicity is investigated. A quantitative measure is introduced for the inconsistency between a fixed configuration of faults and the slip rates on each fault. This measure, named geometric incompatibility (G), summarizes the instability near the fault junctions: their divergence or convergence ("unlocking" or "locking up") and the accumulation of stress and deformation. Accordingly, changes in G are connected with the dynamics of seismicity. Apart from geometric incompatibility, we consider the deviation K from the well-known Saint Venant condition of kinematic compatibility. This deviation summarizes unaccounted stress and strain accumulation in the region and/or internal inconsistencies in a reconstruction of the block-and-fault system (its geometry and movements). The estimates of G and K provide a useful tool for bringing together data on different types of movement in a fault system. An analog of the Stokes formula is found that allows determination of the total values of G and K in a region from data on its boundary. The phenomenon of geometric incompatibility implies that the nucleation of strong earthquakes is to a large extent controlled by processes near fault junctions. Junctions that have been locked up may act as transient asperities, and unlocked junctions may act as transient weakest links. Tentative estimates of K and G are made for each end of the Big Bend of the San Andreas fault system in Southern California. The recent strong Landers (1992, M = 7.3) and Northridge (1994, M = 6.7) earthquakes both reduced K but had opposite impacts on G: Landers unlocked the area, whereas Northridge locked it up again.
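
For reference, the classical Saint Venant compatibility condition for a small-strain field \varepsilon_{ij}, whose integrated violation K quantifies here, reads in its standard differential form (not the block-and-fault form used in the paper):

\partial^2 \varepsilon_{ij} / \partial x_k \partial x_l + \partial^2 \varepsilon_{kl} / \partial x_i \partial x_j - \partial^2 \varepsilon_{ik} / \partial x_j \partial x_l - \partial^2 \varepsilon_{jl} / \partial x_i \partial x_k = 0,

which guarantees that the strain field can be derived from a single-valued displacement field.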

Relevance: 20.00%

Abstract:

Adenoviral vectors are widely used as highly efficient gene transfer vehicles in a variety of biological research strategies including human gene therapy. One of the limitations of the currently available adenoviral vector system is the presence of the majority of the viral genome in the vector, resulting in leaky expression of viral genes particularly at high multiplicity of infection and limited cloning capacity of exogenous sequences. As a first step to overcome this problem, we attempted to rescue a defective human adenovirus serotype 5 DNA, which had an essential region of the viral genome (L1, L2, VAI + II, pTP) deleted and replaced with an indicator gene. In the presence of wild-type adenovirus as a helper, this DNA was packaged and propagated as transducing viral particles. After several rounds of amplification, the titer of the recombinant virus reached at least 4 x 10^6 transducing particles per ml. The recombinant virus could be partially purified from the helper virus by CsCl equilibrium density-gradient centrifugation. The structure of the recombinant virus around the marker gene remained intact after serial propagation, while the pBR sequence inserted in the E1 region was deleted from the recombinant virus. Our results suggest that it should be possible to develop a helper-dependent adenoviral vector, which does not encode any viral proteins, as an alternative to the currently available adenoviral vector systems.

Relevance: 20.00%

Abstract:

We have examined the dynamical behavior of the kink solutions of the one-dimensional sine-Gordon equation in the presence of a spatially periodic parametric perturbation. Our study clarifies and extends the currently available knowledge on this and related nonlinear problems in four directions. First, we present the results of a numerical simulation program that are not compatible with the existence of a radiative threshold predicted by earlier calculations. Second, we carry out a perturbative calculation that helps interpret those previous predictions, enabling us to understand in depth our numerical results. Third, we apply the collective coordinate formalism to this system and demonstrate numerically that it reproduces accurately the observed kink dynamics. Fourth, we report on the occurrence of length-scale competition in this system and show how it can be understood by means of linear stability analysis. Finally, we conclude by summarizing the general physical framework that arises from our study.
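
For reference, the unperturbed sine-Gordon equation and its kink solution are, in dimensionless form,

u_{tt} - u_{xx} + \sin u = 0, \qquad u_K(x, t) = 4 \arctan \exp\!\left[ (x - vt) / \sqrt{1 - v^2} \right],

and a spatially periodic parametric perturbation can be indicated schematically as u_{tt} - u_{xx} + [1 + \epsilon \cos(kx)] \sin u = 0 (the precise form studied in the paper may differ); length-scale competition arises between the kink width and the perturbation period 2\pi/k.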

Relevance: 20.00%

Abstract:

This research presents the development and implementation of fault location algorithms for power distribution networks with distributed generation units installed along their feeders. The proposed algorithms locate the fault using voltage and current signals recorded by intelligent electronic devices installed at the ends of the feeder sections, the information needed to estimate the loads connected to these feeders and their electrical characteristics, and the operating status of the network. In addition, this work studies analytical models of distributed generation and load technologies that could contribute to the performance of the proposed fault location algorithms. The algorithms were implemented in MATLAB and validated through computer simulations using network models implemented in ATP.
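
Below is a minimal sketch of the classical single-ended, impedance-based (reactance) method for locating a fault from phasors recorded at one end of a feeder section; it is not the algorithm proposed in this work, and all names and numerical values are illustrative assumptions.

# Minimal sketch of a single-ended, impedance-based fault-location estimate.
# This is NOT the algorithm proposed in the work above; it only illustrates the
# general idea of locating a fault from voltage/current phasors recorded at one
# end of a feeder section. All names and values are illustrative assumptions.

import cmath

def estimate_fault_distance(v_phasor, i_phasor, z_line_per_km):
    """Return the estimated distance to the fault in km.

    v_phasor, i_phasor : complex voltage (V) and current (A) phasors measured
                         at the sending end during the fault.
    z_line_per_km      : complex positive-sequence line impedance per km (ohm/km).
    """
    z_apparent = v_phasor / i_phasor  # apparent impedance seen from the terminal
    # The reactance method uses only the imaginary part, which is less sensitive
    # to (resistive) fault resistance than the full impedance magnitude.
    return z_apparent.imag / z_line_per_km.imag

if __name__ == "__main__":
    v = cmath.rect(11e3 / 3**0.5 * 0.45, 0.0)  # sagged phase voltage during the fault
    i = cmath.rect(950.0, -1.05)               # fault current lagging by about 60 degrees
    z_km = complex(0.32, 0.39)                 # ohm/km, typical overhead feeder
    print(f"Estimated fault distance: {estimate_fault_distance(v, i, z_km):.2f} km")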

Relevance: 20.00%

Abstract:

Poster presented at SPIE Photonics Europe, Brussels, 16-19 April 2012.

Relevance: 20.00%

Abstract:

We rigorously analyze the propagation of localized surface waves at the boundary between a semi-infinite layered metal-dielectric (MD) nanostructure, cut normally to the layers, and an isotropic medium. It is demonstrated that Dyakonov-like surface waves (also coined dyakonons) with hybrid polarization may propagate over a wide angular range. As a consequence, dyakonon-based wave packets (DWPs) may feature sub-wavelength beamwidths. Owing to the hyperbolic-dispersion regime in plasmonic crystals, the supported DWPs remain in the canalization regime. The apparent quadratic beam spreading, however, is driven by dissipation effects in the metal.
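
In the long-wavelength (effective-medium) limit commonly used to describe such layered MD stacks (a standard approximation, not necessarily the rigorous treatment employed here), the structure behaves as a uniaxial medium with

\varepsilon_{\parallel} = f \varepsilon_m + (1 - f) \varepsilon_d, \qquad \varepsilon_{\perp} = \left[ f/\varepsilon_m + (1 - f)/\varepsilon_d \right]^{-1},

where f is the metal filling fraction and \varepsilon_m, \varepsilon_d are the metal and dielectric permittivities; hyperbolic dispersion, and hence canalization-like propagation, occurs when \varepsilon_{\parallel} and \varepsilon_{\perp} have opposite signs.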

Relevance: 20.00%

Abstract:

The design of fault-tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on the selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in search of the best trade-offs between reliability, cost, and performance. The first tool is driven by a genetic algorithm that can simultaneously fulfill many design goals thanks to the use of the NSGA-II multi-objective algorithm. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
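
The following is a minimal sketch of the Pareto-dominance filtering at the heart of an NSGA-II-style multi-objective exploration; it is not the authors' tool, and the objective names and numbers are illustrative assumptions (all three objectives are to be minimized: one minus fault coverage, code-size overhead, execution-time overhead).

# Minimal sketch of the Pareto-dominance filtering underlying NSGA-II-style
# multi-objective exploration. Candidate hardened versions are scored on three
# objectives to be minimized: (1 - fault coverage), code-size overhead, and
# execution-time overhead. Names and numbers are illustrative, not from the paper.

from typing import List, Tuple

Objectives = Tuple[float, float, float]  # (1 - coverage, size overhead, time overhead)

def dominates(a: Objectives, b: Objectives) -> bool:
    """True if a is at least as good as b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: List[Objectives]) -> List[Objectives]:
    """Return the non-dominated candidates (the current Pareto front)."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

if __name__ == "__main__":
    # Hypothetical hardened versions of the same program.
    versions = [
        (0.08, 1.90, 1.75),  # heavy redundancy: high coverage, high overheads
        (0.20, 1.30, 1.20),  # selective hardening of critical variables only
        (0.22, 1.45, 1.35),  # dominated by the previous candidate
        (0.40, 1.05, 1.03),  # almost no protection
    ]
    print(pareto_front(versions))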