984 results for Earthquake magnitude
Abstract:
Studies with 15N indicate that appreciable generation of NH4+ from endogenous sources accompanies the uptake and assimilation of exogenous NH4+ by roots. To identify the source of NH4+ generation, maize (Zea mays L.) seedlings were grown on 14NH4+ and then exposed for 3 d to highly labeled 15NH4+. More of the entering 15NH4+ was incorporated into the protein-N fraction of roots in darkness (approximately 25%) than in the light (approximately 14%). Although the 14NH4+ content of roots declined rapidly to less than 1 μmol per plant, efflux of 14NH4+ continued throughout the 3-d period at an average daily rate of 14 μmol per plant. As a consequence, cumulative 14NH4+ efflux during the 3-d period accounted for 25% of the total 14N initially present in the root. Although soluble organic 14N in roots declined during the 3-d period, insoluble 14N remained relatively constant. In shoots both soluble organic 14N and 14NH4+ declined, but a comparable increase in insoluble 14N was noted. Thus, total 14N in shoots remained constant, reflecting little or no net redistribution of 14N between shoots and roots. Collectively, these observations reveal that catabolism of soluble organic N, not protein N, is the primary source of endogenous NH4+ generation in maize roots.
Abstract:
Earthquake prediction research has searched for two kinds of phenomena: informational phenomena, which provide information about earthquake hazards useful to the public, and causal phenomena, which are causally related to the physical processes governing failure on a fault and so improve our understanding of those processes. Neither class is a subset of the other. I propose a classification of potential earthquake predictors into informational, causal, and predictive phenomena, where predictors are causal phenomena that provide more accurate assessments of the earthquake hazard than can be obtained by assuming a random distribution of events. Achieving higher, more accurate probabilities than a random distribution requires much more information about a precursor than just that it is causally related to the earthquake.
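The distinction the abstract draws, between a true predictor and a merely causal phenomenon, can be illustrated with a probability-gain calculation against a random (Poisson) baseline. This is a generic sketch, not the author's classification scheme; the recurrence rate and the conditional probability below are hypothetical numbers.

```python
import math

def poisson_prob(rate_per_year: float, window_years: float) -> float:
    """Probability of at least one event in the window under a Poisson model."""
    return 1.0 - math.exp(-rate_per_year * window_years)

# Hypothetical numbers: a fault producing one M >= 7 event per 150 years on
# average, and a candidate precursor after which the event probability in the
# next year is estimated at 5%.
baseline = poisson_prob(1.0 / 150.0, 1.0)   # random-occurrence baseline
with_precursor = 0.05                        # estimated conditional probability

gain = with_precursor / baseline
print(f"baseline P = {baseline:.4f}, probability gain = {gain:.1f}x")
```

A phenomenon qualifies as a predictor in this sense only when the gain exceeds one; a causal phenomenon that leaves the probability at the baseline conveys no predictive information.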
Abstract:
For the average citizen and the public, "earthquake prediction" means "short-term prediction," a prediction of a specific earthquake on a relatively short time scale. Such prediction must specify the time, place, and magnitude of the earthquake in question with sufficiently high reliability. For this type of prediction, one must rely on some short-term precursors. Examinations of strain changes just before large earthquakes suggest that consistent detection of such precursory strain changes cannot be expected. Other precursory phenomena such as foreshocks and nonseismological anomalies do not occur consistently either. Thus, reliable short-term prediction would be very difficult. Although short-term predictions with large uncertainties could be useful for some areas if their social and economic environments can tolerate false alarms, such predictions would be impractical for most modern industrialized cities. A strategy for effective seismic hazard reduction is to take full advantage of the recent technical advancements in seismology, computers, and communication. In highly industrialized communities, rapid earthquake information is critically important for emergency services agencies, utilities, communications, financial companies, and media to make quick reports and damage estimates and to determine where emergency response is most needed. Long-term forecast, or prognosis, of earthquakes is important for development of realistic building codes, retrofitting existing structures, and land-use planning, but the distinction between short-term and long-term predictions needs to be clearly communicated to the public to avoid misunderstanding.
Abstract:
Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study.
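The 30-year probabilistic estimates mentioned above typically come from a renewal model in which large events on a fault segment recur quasi-periodically rather than at Poisson random. A minimal sketch of such a conditional-probability calculation, assuming a lognormal recurrence-time distribution and entirely hypothetical parameters (median recurrence, log-space spread, elapsed time), might look like:

```python
import math

def lognormal_cdf(t, median, sigma):
    """CDF of a lognormal recurrence-time distribution (sigma in log space)."""
    if t <= 0:
        return 0.0
    z = (math.log(t) - math.log(median)) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def conditional_prob(elapsed, window, median, sigma):
    """P(event in the next `window` years | segment quiet for `elapsed` years)."""
    survive = 1.0 - lognormal_cdf(elapsed, median, sigma)
    if survive <= 0.0:
        return 1.0
    return (lognormal_cdf(elapsed + window, median, sigma)
            - lognormal_cdf(elapsed, median, sigma)) / survive

# Hypothetical segment: median recurrence 150 yr, sigma 0.4, 100 yr elapsed.
p30 = conditional_prob(elapsed=100.0, window=30.0, median=150.0, sigma=0.4)
print(f"30-year conditional probability: {p30:.2f}")
```

Because the distribution is peaked rather than memoryless, the conditional probability grows as elapsed time approaches the median recurrence, which is what makes quasi-periodicity useful for forecasting.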
Abstract:
The current status of geochemical and groundwater observations for earthquake prediction in Japan is described. The development of the observations is discussed in relation to the progress of the earthquake prediction program in Japan. Three major findings obtained from our recent studies are outlined. (i) Long-term radon observation data over 18 years at the SKE (Suikoen) well indicate that the anomalous radon change before the 1978 Izu-Oshima-kinkai earthquake can with high probability be attributed to precursory changes. (ii) It is proposed that certain sensitive wells exist which have the potential to detect precursory changes. (iii) The appearance and nonappearance of coseismic radon drops at the KSM (Kashima) well reflect changes in the regional stress state of an observation area. In addition, some preliminary results of chemical changes of groundwater prior to the 1995 Kobe (Hyogo-ken nanbu) earthquake are presented.
Abstract:
Based on recent high-resolution laboratory experiments on propagating shear rupture, the constitutive law that governs shear rupture processes is discussed in view of physical principles and constraints, and a specific constitutive law is proposed for shear rupture. It is demonstrated that nonuniform distributions of the constitutive law parameters on the fault are necessary for creating the nucleation process, which consists of two phases: (i) a stable, quasistatic phase, and (ii) the subsequent accelerating phase. Physical models of the breakdown zone and the nucleation zone are presented for shear rupture in the brittle regime. The constitutive law for shear rupture explicitly includes a scaling parameter Dc that enables one to give a common interpretation to both small-scale rupture in the laboratory and large-scale rupture as an earthquake source in the Earth. Both the breakdown zone size Xc and the nucleation zone size L are prescribed and scaled by Dc, which in turn is prescribed by a characteristic length lambda c representing geometrical irregularities of the fault. The models presented here make it possible to understand the earthquake generation process, from nucleation to unstable dynamic rupture propagation, in physical terms. Since the nucleation process itself is an immediate earthquake precursor, a deep physical understanding of that process is crucial for short-term (or immediate) earthquake prediction.
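The role of the scaling parameter Dc can be illustrated with the simplest member of the slip-weakening family of constitutive laws, in which fault strength decays linearly from a peak to a residual value over the characteristic slip distance Dc. This linear form is a common textbook simplification, not the specific law proposed in the paper, and the stress values are illustrative:

```python
def slip_weakening_stress(slip, tau_p, tau_r, d_c):
    """Linear slip-weakening law: strength falls from peak tau_p to
    residual tau_r over the characteristic slip distance d_c."""
    if slip >= d_c:
        return tau_r
    return tau_r + (tau_p - tau_r) * (1.0 - slip / d_c)

# Illustrative values: breakdown from 10 MPa to 6 MPa over Dc = 0.5 m.
for d in (0.0, 0.25, 0.5, 1.0):
    print(d, slip_weakening_stress(d, tau_p=10.0, tau_r=6.0, d_c=0.5))
```

For this linear law the breakdown (fracture) energy dissipated in the breakdown zone is the triangular area 0.5 (tau_p - tau_r) Dc, which is why Dc sets the scale of both Xc and the nucleation zone size.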
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
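The Gutenberg-Richter frequency-size statistics referred to above can be stated independently of any fault model: the number of events with magnitude at least M follows log10 N(>=M) = a - b M, i.e. magnitudes are exponentially distributed above a completeness threshold. A minimal illustration (not tied to the elastodynamic models in the paper) samples synthetic magnitudes and recovers b with Aki's maximum-likelihood estimator:

```python
import math
import random

random.seed(0)
b_true, m_c = 1.0, 3.0  # assumed b-value and completeness magnitude

# Draw magnitudes from the Gutenberg-Richter law by inverse-transform
# sampling: M = m_c - log10(U) / b with U uniform on (0, 1).
mags = [m_c - math.log10(random.random()) / b_true for _ in range(50_000)]

# Aki (1965) maximum-likelihood estimate: b = log10(e) / (mean(M) - m_c).
b_est = math.log10(math.e) / (sum(mags) / len(mags) - m_c)
print(f"estimated b = {b_est:.2f}")
```

The point of the abstract is precisely which fault models do or do not reproduce this self-similar statistic for small events; the sketch only defines the statistic itself.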
Abstract:
We study a simple antiplane fault of finite length embedded in a homogeneous isotropic elastic solid to understand the origin of seismic source heterogeneity in the presence of nonlinear rate- and state-dependent friction. All the mechanical properties of the medium and of friction are assumed homogeneous. Friction includes a characteristic length that is longer than the grid size, so that our models have a well-defined continuum limit. Starting from a heterogeneous initial stress distribution, we apply a slowly increasing uniform stress load far from the fault and simulate the seismicity for a few thousand events. The style of seismicity produced by this model is determined by a control parameter associated with the degree of rate dependence of friction. For classical rate-independent friction models, no complexity appears and seismicity is perfectly periodic. For weakly rate-dependent friction, large ruptures are still periodic, but small seismicity becomes increasingly nonstationary. When friction is highly rate-dependent, seismicity becomes nonperiodic and ruptures of all sizes occur inside the fault. Highly rate-dependent friction destabilizes the healing process, producing premature healing of slip and partial stress drop. Partial stress drop produces large variations in the state of stress, which in turn produce earthquakes of different sizes. Similar results have been found by other authors using the Burridge-Knopoff model. We conjecture that all models in which the static stress drop is only a fraction of the dynamic stress drop produce stress heterogeneity.
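The "degree of rate dependence" controlling such models can be made concrete with the steady-state limit of standard rate- and state-dependent friction, where the sign of a - b decides between velocity strengthening (stable sliding) and velocity weakening (potentially unstable). The parameter values below are typical laboratory-scale numbers chosen for illustration, not those of the paper:

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.015, v0=1e-6):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) ln(v / v0).
    With b > a the coefficient falls with slip rate (velocity weakening)."""
    return mu0 + (a - b) * math.log(v / v0)

for v in (1e-9, 1e-6, 1e-3):
    print(f"v = {v:.0e} m/s -> mu_ss = {steady_state_friction(v):.4f}")
```

With b > a, as here, faster sliding is weaker, which is the ingredient that permits stick-slip instability; taking b closer to or below a suppresses it.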
Abstract:
We summarize recent evidence that models of earthquake faults with dynamically unstable friction laws but no externally imposed heterogeneities can exhibit slip complexity. Two models are described here. The first is a one-dimensional model with velocity-weakening stick-slip friction; the second is a two-dimensional elastodynamic model with slip-weakening friction. Both exhibit small-event complexity and chaotic sequences of large characteristic events. The large events in both models are composed of Heaton pulses. We argue that the key ingredients of these models are reasonably accurate representations of the properties of real faults.
Abstract:
Although models of homogeneous faults develop seismicity that has a Gutenberg-Richter distribution, this is only a transient state that is followed by events strongly influenced by the nature of the boundaries. Models with geometrical inhomogeneities of fracture thresholds can limit the sizes of earthquakes but then favor the characteristic earthquake model for large earthquakes. The character of the seismicity is extremely sensitive to the distribution of inhomogeneities, suggesting that statistical rules for large earthquakes in one region may not be applicable to large earthquakes in another. Model simulations on simple networks of faults with inhomogeneities of threshold develop episodes of lacunarity on all members of the network. There is no validity to the popular assumption that the average rate of slip on individual faults is constant. Intermediate-term precursory activity, such as local quiescence and increases in intermediate-magnitude activity at long range, is simulated well by the assumption that strong weakening of faults by injection of fluids and weakening of asperities on inhomogeneous models of fault networks is the dominant process; the heat flow paradox, the orientation of the stress field, and the low average stress drop in some earthquakes can all be understood in terms of the asperity model of inhomogeneous faulting.
Abstract:
A model based on the nonlinear Poisson-Boltzmann equation is used to study the electrostatic contribution to the binding free energy of a simple intercalating ligand, 3,8-diamino-6-phenylphenanthridine, to DNA. We find that the nonlinear Poisson-Boltzmann model accurately describes both the absolute magnitude of the pKa shift of 3,8-diamino-6-phenylphenanthridine observed upon intercalation and its variation with bulk salt concentration. Since the pKa shift is directly related to the total electrostatic binding free energy of the charged and neutral forms of the ligand, the accuracy of the calculations implies that the electrostatic contributions to binding are accurately predicted as well. Based on our results, we have developed a general physical description of the electrostatic contribution to ligand-DNA binding in which the electrostatic binding free energy is described as a balance between the coulombic attraction of a ligand to DNA and the disruption of solvent upon binding. Long-range coulombic forces associated with highly charged nucleic acids provide a strong driving force for the interaction of cationic ligands with DNA. These favorable electrostatic interactions are, however, largely compensated for by unfavorable changes in the solvation of both the ligand and the DNA upon binding. The formation of a ligand-DNA complex removes both charged and polar groups at the binding interface from pure solvent while it displaces salt from around the nucleic acid. As a result, the total electrostatic binding free energy is quite small. Consequently, nonpolar interactions, such as tight packing and hydrophobic forces, must play a significant role in ligand-DNA stability.
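The stated link between the pKa shift and the electrostatic binding free energies of the charged and neutral ligand forms follows from a thermodynamic cycle: the shift is proportional to the difference of the two binding free energies, scaled by 2.303 RT. A small sketch with hypothetical free energies (the paper's computed Poisson-Boltzmann values are not reproduced here):

```python
import math

R = 8.314    # gas constant, J/(mol K)
T = 298.15   # temperature, K

def delta_pka(dg_bind_neutral, dg_bind_charged):
    """pKa shift upon binding from the thermodynamic cycle: if the charged
    (protonated) form binds more tightly than the neutral form, the pKa of
    the bound ligand rises. Binding free energies in kJ/mol."""
    return (dg_bind_neutral - dg_bind_charged) * 1000.0 / (math.log(10) * R * T)

# Hypothetical: the charged form binds 5.7 kJ/mol more favorably.
print(f"pKa shift: {delta_pka(-20.0, -25.7):+.1f}")
```

A difference of about 5.7 kJ/mol at room temperature corresponds to roughly one pKa unit, which is why measured pKa shifts are such a sensitive probe of electrostatic binding contributions.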
Abstract:
After the 2010 Haiti earthquake, which struck Port-au-Prince, the capital of Haiti, a multidisciplinary working group of specialists (seismologists, geologists, engineers, and architects) from several Spanish universities and from Haiti joined efforts under the SISMO-HAITI project (financed by the Universidad Politecnica de Madrid) with one objective: the evaluation of seismic hazard and risk in Haiti and its application to seismic design, urban planning, and emergency and resource management. In this paper, as a first step toward estimating the structural damage from future earthquakes in the country, damage functions have been calibrated by means of a two-stage procedure. After compiling a database of the damage observed in the city after the earthquake, the exposure model (building stock) was classified, and through an iterative two-step calibration process a specific set of damage functions for the country is proposed. Additionally, Next Generation Attenuation (NGA) models and Vs30 models were analysed to choose the most appropriate for seismic risk estimation in the city. Finally, in a subsequent paper, these functions will be used to estimate a seismic risk scenario for a future earthquake.
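The calibrated damage functions themselves are not given in the abstract; a common parametric form for such functions is a lognormal fragility curve, giving the probability of reaching or exceeding a damage state as a function of ground motion. The building class, median capacity, and dispersion below are hypothetical:

```python
import math

def fragility(pga_g, median_g, beta):
    """Lognormal fragility curve: probability of reaching or exceeding a
    damage state given peak ground acceleration (both in units of g)."""
    if pga_g <= 0:
        return 0.0
    z = math.log(pga_g / median_g) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical curve for a vulnerable masonry class: median 0.3 g, beta 0.6.
for pga in (0.1, 0.3, 0.6):
    print(f"PGA = {pga} g -> P(damage) = {fragility(pga, 0.3, 0.6):.2f}")
```

Calibration of this kind of curve against an observed-damage database amounts to adjusting the median and dispersion per building class until predicted damage distributions match the survey, which is broadly the role of the two-step procedure described above.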