604 results for Earthquakes.


Relevance:

10.00%

Publisher:

Abstract:

The subject of this thesis is the study of the structural response of pipelines subjected to static and dynamic loads, with special attention to seismic design loads. Pipelines, like pipes in general, are used primarily for the transport of fluids such as water, gas or oil, hence the importance of adequate design and structural behaviour. The pipe must be able to withstand the effects of static loads, such as those due to self-weight or earth pressure, as well as the different types of dynamic loads that occur during a seismic event, such as those due to passing waves or fault displacements. In the first part of the thesis, general aspects of pipelines and their use are described, and a brief history of their usage in industry and in urban supply networks is given. Among other aspects, the advantages and disadvantages of different pipe materials are discussed. In the second part of the thesis, the equilibrium equations of a transverse section of the pipe under static loads, such as internal pressure, self-weight, earth pressure and external loads, are developed. A number of different load combinations are analysed by means of Matlab programs developed specifically for this purpose, and the results are compared with those obtained with the commercial finite element code Ansys.
In the third part, the dynamic response of pipelines during earthquakes is presented, covering the effects of passing waves and fault displacements. Relevant soil characteristics, such as wave propagation velocities, are presented, as well as methods to estimate the maximum fault displacements. A parametric study illustrates the influence of these parameters on the structural response of the pipe. To this end, two methods have been used: the pseudostatic method and the simplified method. In the last part of the thesis, finite element models are developed that adequately simulate the nonlinear behaviour of the soil and the pipe. The results are compared with those obtained by a frequently used simplified method proposed by Kennedy in 1977. Parametric studies examine the validity of the hypotheses on which Kennedy's method is based. The thesis concludes with recommendations indicating in which cases the results obtained with Kennedy's method are conservative, and when it is preferable to use finite element models to estimate the response of pipelines during earthquakes.
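The static load cases mentioned above include internal pressure. As a minimal illustration of the kind of cross-section check involved (not the thesis's full equilibrium equations), the thin-wall Barlow formula gives the circumferential (hoop) stress; all numerical values below are hypothetical.

```python
# Thin-walled hoop stress check for a pipeline under internal pressure --
# an illustrative sketch only; the thesis derives full cross-section
# equilibrium equations, which this simple Barlow formula does not replace.

def hoop_stress(p_internal, diameter, wall_thickness):
    """Barlow's formula: circumferential stress in a thin-walled pipe (Pa)."""
    return p_internal * diameter / (2.0 * wall_thickness)

# Hypothetical gas pipeline: 0.6 m diameter, 10 mm wall, 8 MPa internal pressure.
sigma = hoop_stress(8.0e6, 0.6, 0.010)
print(f"hoop stress = {sigma / 1e6:.0f} MPa")  # 240 MPa
```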

Abstract:

Studies of earthquakes over the last 50 years and the examination of dynamic soil behaviour reveal that soil behaviour is highly nonlinear and hysteretic even at small strains. The nonlinear behaviour of soils during a seismic event plays a predominant role in current site response analysis. One-dimensional seismic ground response analyses are often performed using equivalent-linear procedures, which require few, generally well-known parameters. Nonlinear analyses have the potential to simulate soil behaviour more accurately, but their implementation in practice has been limited by poorly documented and unclear parameter selection, as well as by inadequate documentation of the benefits of nonlinear modelling relative to equivalent-linear modelling. In soil analysis, soil behaviour is approximated as a Kelvin-Voigt solid with an elastic shear modulus and viscous damping. In both linear and nonlinear analyses, more complex geometries and more complex rheological models are being considered: the former by adopting richer parametrizations of the linearized behaviour, the latter by using multi-mode spring-dashpot elements with a possible fractional damper. The use of fractional calculus is motivated in large part by the fact that fewer parameters are required to achieve an accurate approximation of experimental data. Based on the Kelvin-Voigt model, viscoelasticity is revisited from its most standard formulation to more advanced descriptions involving frequency-dependent damping (or viscosity), analysing the effects of using fractional derivatives to represent these viscous contributions. We show that this choice results in richer models that can accommodate different constraints related to the dissipated power, the response amplitude and the phase angle. Moreover, fractional derivatives make it possible to accommodate in parallel, within a generalized Kelvin-Voigt analogue, many dashpots that increase the modelling flexibility available for describing experimental findings.
These rich models obviously involve many parameters: those associated with the behaviour and those related to the fractional derivatives. The parametric analysis of such models requires efficient numerical techniques capable of simulating complex behaviours. The Proper Generalized Decomposition (PGD) is the perfect candidate for producing this kind of parametric solution. The parametric solution for the soil deposit can be computed off-line, for all parameters of the model; once such a solution is available, the problem can be solved in real time, because no new calculation is needed: the solver only needs to particularize on-line the parametric solution computed off-line, which significantly alleviates the solution procedure. Within the PGD framework, material parameters and the different derivation powers can be introduced as extra-coordinates in the solution procedure. Fractional calculus and the new model reduction method called Proper Generalized Decomposition have been applied in this thesis both to the linear analysis and to the nonlinear soil response analysis using an equivalent-linear method.
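The frequency-dependent damping described above can be sketched with the complex modulus of a fractional Kelvin-Voigt element, G*(ω) = G + η(iω)^α; the numerical values of G, η and α below are illustrative assumptions, not taken from the thesis.

```python
# Frequency response of a fractional Kelvin-Voigt element -- a minimal
# sketch of the idea described above. The shear modulus G, viscosity eta
# and fractional order alpha are illustrative values only.
import cmath

def complex_modulus(omega, G, eta, alpha):
    """G*(w) = G + eta * (i w)^alpha for a fractional Kelvin-Voigt solid."""
    return G + eta * (1j * omega) ** alpha

# alpha = 1 recovers the classical viscous dashpot; alpha < 1 gives the
# weaker frequency dependence of damping often observed in soils.
for alpha in (1.0, 0.5):
    Gs = complex_modulus(10.0, G=50e6, eta=1e6, alpha=alpha)
    print(f"alpha={alpha}: |G*|={abs(Gs):.3e} Pa, loss angle={cmath.phase(Gs):.4f} rad")
```

The single fractional order α replaces a whole chain of classical dashpots, which is the parameter economy the abstract refers to.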

Abstract:

This work presents the development of a methodology, and the corresponding algorithm, for obtaining a universe of Green's functions to estimate tsunami wave heights along the west coast of Mexico as a function of the seismic moment and the extent of the rupture area of interplate earthquakes located between the coast and the Middle America Trench. Taking as a case study the earthquake that occurred on 9 October 1995 on the Jalisco-Colima coast, the hydrodynamic effects of the tsunami in the Port of Manzanillo, Mexico, were studied with a methodology comprising the following steps. The first step was the application of the tsunami inverse method to constrain the parameters of the seismic source through the construction of a universe of Green's functions for the west coast of Mexico. The seismic moment, as well as the location and extent of the earthquake rupture area, is prescribed on fault-plane segments of 30 x 30 km. To each of these fault-plane segments corresponds a set of Green's functions located on the 100 m isobath, at 172 locations along the coast separated by 12 km on average. The second step was the study of the hydrodynamics caused by the tsunami (current velocities and directions and sea levels within the port, and the run-up on Las Brisas beach), which was investigated in a fixed-bed hydraulic model and in a numerical model, taking as the initial condition a synthetic tsunami at a depth of 34 m and propagating it to the coast as a solitary wave signal.
Based on the resulting hydrodynamics of the Port of Manzanillo, a risk analysis was carried out to define the operating conditions of the port in terms of the velocities inside and outside it; taking the initial conditions of the 1995 Manzanillo earthquake and tsunami as a starting point, the limiting operating conditions for ships inside and outside the port were defined.
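The solitary-wave initial condition mentioned above can be sketched with the standard Boussinesq profile η = H sech²(k(x − ct)); the depth d = 34 m matches the value quoted in the abstract, while the wave height H is a hypothetical value.

```python
# Solitary-wave profile of the kind used to propagate the synthetic
# tsunami shoreward -- an illustrative sketch; H is hypothetical, while
# d = 34 m matches the depth quoted in the abstract.
import math

def solitary_wave(x, t, H, d, g=9.81):
    """Boussinesq solitary wave: eta = H sech^2(k (x - c t))."""
    c = math.sqrt(g * (d + H))                # wave celerity
    k = math.sqrt(3.0 * H / (4.0 * d ** 3))   # effective wavenumber
    return H / math.cosh(k * (x - c * t)) ** 2

H, d = 2.0, 34.0
print(f"crest height: {solitary_wave(0.0, 0.0, H, d):.2f} m")
print(f"celerity    : {math.sqrt(9.81 * (d + H)):.1f} m/s")
```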

Abstract:

Long before its establishment as an independent nation, the Chilean territory has been prone to the impact of nature, which is an inherent and damaging feature of this land. That impact is represented above all by earthquakes, the most damaging of these hazards. Chilean society has yet to understand that this damage reflects an imbalance in the coexistence between society and nature, since human beings, who live in and inhabit this territory, are themselves part of nature. Therefore, each time the territory is shaken by earthquakes, society learns new lessons in order to respond better to the next event. The 2010 earthquake, which rated 8.8 on the Richter scale, was the second largest after the most powerful earthquake ever recorded: the Valdivia earthquake of 22 May 1960, which rated 9.5. Societies are not static; they change and are dynamic. The 2010 earthquake struck a society that had adopted a free-market economic model 35 years earlier. Around 1990, some 40 per cent of the population lived in poverty; by 2010 the figure had fallen to 14 per cent. A magnitude 7.8 earthquake had also struck during the military regime, in the early days of that model. The 2010 earthquake therefore allows conclusions to be drawn within the context of this economic model.
The results are interesting in that there were few fatalities but significant economic losses. This thesis examines the impact of the 2010 earthquake on the built housing stock, on social housing, and on the poorest and most vulnerable inhabitants. It is the first research on earthquakes and social housing conducted in Chile. The hypothesis is that certain variables, on the one hand, and an anti-seismic culture, on the other, have permeated the popular sectors over the last 50 years, and that this may underlie the results obtained. A kind of "happy marriage" between the inhabitant and public housing policy is proposed. From this, recommendations are derived for furthering progress on the problem investigated, contextualized with reference to the theoretical framework developed. However, the progress achieved so far does not guarantee good results in the next event; the lessons learned therefore feed new ones that will accompany Chilean society in its essence and identity as a nation.

Abstract:

This final degree project analyses the formation of the alluvial fans located in the province of Murcia, between the towns of Lorca and Totana and their immediate surroundings, that is, the Sierra de la Tercia to the NW and part of the Guadalentín depression to the SE. A key objective is to determine whether, from the data obtained, the degradation of the alluvial fans affected by faults can be established. An initial study of the terrain is carried out using previous fault-trace data, marking possible indications of whether or not the fault passes through each zone. Drawing on part of the work carried out within the national R&D project "Searching the record of past earthquakes in South Iberia: Advanced technologies in terrestrial and marine paleoseismology" (SHAKE), the data used in this project were obtained in August 2013 from a flight using LiDAR technology combined with digital photogrammetry techniques incorporating aerial images of the Region of Murcia provided by the Instituto Geográfico Nacional (PNOA 2010). LiDAR (Light Detection And Ranging) is a geophysical scanning and mapping technique based on an airborne laser sensor that sweeps the land surface, collecting millions of distance measurements between the sensor and the target, whose position is calculated by differential GPS and an inertial navigation system. Each laser pulse yields multiple distance measurements along a single beam, the first return coming from the top of the local vegetation and the last from the ground surface. The result is a point cloud from which digital terrain models (DTMs) are generated using the MDTopX software. This flight (covering the Alhama fault) spans an area of approximately 282 km² and a flight perimeter of 139 km.
To obtain the densest possible LiDAR coverage, the flight was planned at a height of 1500 m above the terrain, yielding an average density of 4 points/m² and an average point spacing of 0.5 m. Different maps (slope, aspect and contour maps) are then produced, from which all possible information is extracted in order to classify the different indications, as explained later. Finally, a new trace of the alluvial fans is produced using the above results, classifying them by, among other attributes, age, materials and degree of degradation.
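The two survey figures quoted above are consistent with each other: for a roughly regular point distribution, the mean spacing is approximately the inverse square root of the point density. A quick sketch:

```python
# Relation between LiDAR point density and mean point spacing -- a quick
# check of the figures quoted above (4 points/m^2 and ~0.5 m spacing).
import math

def mean_point_spacing(density_pts_per_m2):
    """Approximate point spacing of a regular grid with the given density."""
    return 1.0 / math.sqrt(density_pts_per_m2)

density = 4.0  # points/m^2, as reported for the survey
print(f"mean spacing ~ {mean_point_spacing(density):.2f} m")  # ~0.50 m
```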

Abstract:

In the EU context, the extraction of shale gas and oil by hydraulic fracturing (fracking) differs from country to country in terms of legislation and implementation. While fossil fuel extraction using this technology is currently taking place in the UK, Germany and France have adopted moratoria. In between lies the Spanish case, where hydrocarbon extraction projects involving fracking must undergo mandatory and routine environmental assessment in accordance with the latest changes to environmental regulations. Spain now stands at a crossroads with respect to the future of this technology. We foresee a social conflict in our country, since the positions and strategies of the confronted social actors involved (national, regional and local authorities, energy companies, scientists, NGOs and other social organizations) are going to play key and likely divergent roles in its industrial implementation and public acceptance. In order to improve knowledge of how to address such controversial situations from within the engineering context itself, the affiliated units of the Higher Technical School of Mines and Energy Engineering at UPM have been working on a transversal programme to teach values and ethics. Over the past seven years, this pioneering experience has shown the usefulness of applying a consequentialist ethics, based on a case-by-case approach and on cost-benefit analysis of both action and inaction. As a result of this initiative, a theoretical concept has arisen and crystallized in this field, named inter-ethics. This theoretical perspective can be very helpful in complex situations with multiple stakeholders and a plurality of interests, where ethical management requires interaction between the respective ethics of each group; the professional ethics of a single group is not enough.
Under this inter-ethics theoretical framework, and applying content analysis techniques, this paper explores the articulation of the discourse for and against fracking technology, and its underlying values, as manifested in the Spanish traditional mass media and in emerging social media such as YouTube. Results show that Spanish public discourse on fracking includes cost-benefit analysis to communicate how the natural resources of local communities may be affected by these facilities through environmental, health and economic consequences. Furthermore, the technology is represented as a solution to the "demand of energy" in the optimistic discourse while, from a pessimistic view, fracking is often framed as a source of "environmental problems" and even of natural disasters such as earthquakes. In the latter case, this negative representation may have been influenced by the closure of a macro-project to store injected natural gas in the Mediterranean Sea using the old facilities of an oil exploitation off Amposta (Proyecto Cástor). That project was closed because of the occurrence of earthquakes whose intensity was higher than originally expected by the experts during the assessment stage of the project.

Abstract:

Progress in long- and intermediate-term earthquake prediction is reviewed emphasizing results from California. Earthquake prediction as a scientific discipline is still in its infancy. Probabilistic estimates that segments of several faults in California will be the sites of large shocks in the next 30 years are now generally accepted and widely used. Several examples are presented of changes in rates of moderate-size earthquakes and seismic moment release on time scales of a few to 30 years that occurred prior to large shocks. A distinction is made between large earthquakes that rupture the entire downdip width of the outer brittle part of the earth's crust and small shocks that do not. Large events occur quasi-periodically in time along a fault segment and happen much more often than predicted from the rates of small shocks along that segment. I am moderately optimistic about improving predictions of large events for time scales of a few to 30 years although little work of that type is currently underway in the United States. Precursory effects, like the changes in stress they reflect, should be examined from a tensorial rather than a scalar perspective. A broad pattern of increased numbers of moderate-size shocks in southern California since 1986 resembles the pattern in the 25 years before the great 1906 earthquake. Since it may be a long-term precursor to a great event on the southern San Andreas fault, that area deserves detailed intensified study.
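The quasi-periodic recurrence of large events mentioned above is what makes 30-year conditional probabilities meaningful. As an illustrative sketch (not the actual method behind the California estimates), a lognormal recurrence-time model gives the probability of rupture in the next 30 years given the time already elapsed; all numerical values are hypothetical.

```python
# Conditional 30-year rupture probability under quasi-periodic (lognormal)
# recurrence -- an illustrative sketch; mean recurrence time, variability
# and elapsed time below are hypothetical values.
import math

def lognorm_cdf(t, mu, sigma):
    """CDF of a lognormal recurrence-time distribution."""
    return 0.5 * (1.0 + math.erf((math.log(t) - mu) / (sigma * math.sqrt(2.0))))

def conditional_prob(elapsed, window, mean_T, cov):
    """P(rupture within `window` yr | segment quiet for `elapsed` yr)."""
    sigma = math.sqrt(math.log(1.0 + cov ** 2))
    mu = math.log(mean_T) - 0.5 * sigma ** 2     # chosen so E[T] = mean_T
    survive = 1.0 - lognorm_cdf(elapsed, mu, sigma)
    hit = lognorm_cdf(elapsed + window, mu, sigma) - lognorm_cdf(elapsed, mu, sigma)
    return hit / survive

# Hypothetical segment: 150 yr mean recurrence, aperiodicity 0.5, 120 yr elapsed.
print(f"30-yr conditional probability: {conditional_prob(120, 30, 150, 0.5):.2f}")
```

Unlike a Poisson model, the conditional probability here grows as the quiet interval lengthens, reflecting the quasi-periodic behaviour described in the abstract.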

Abstract:

The recent discovery of a low-velocity, low-Q zone with a width of 50-200 m reaching to the top of the ductile part of the crust, through observations of seismic guided waves trapped in the fault zone of the Landers earthquake of 1992, and its identification with the shear zone inferred from the distribution of tension cracks observed at the surface, support the existence of a characteristic scale length of the order of 100 m affecting various earthquake phenomena in southern California, as evidenced earlier by the kink in the magnitude-frequency relation at about M3, the constant corner frequency for earthquakes with M below about 3, and the source-controlled f_max of 5-10 Hz for major earthquakes. The temporal correlation between coda Q⁻¹ and the fractional rate of occurrence of earthquakes in the magnitude range 3-3.5, the geographical similarity of coda Q⁻¹ and seismic velocity at a depth of 20 km, and the simultaneous change of coda Q⁻¹ and conductivity in the lower crust support the hypothesis that coda Q⁻¹ may represent the activity of creep fracture in the ductile part of the lithosphere occurring over cracks with a characteristic size of the order of 100 m. The existence of such a characteristic scale length cannot be consistent with the overall self-similarity of earthquakes unless we postulate a discrete hierarchy of such characteristic scale lengths. The discrete hierarchy of characteristic scale lengths is consistent with recently observed logarithmic periodicity in precursory seismicity.

Abstract:

An earthquake of magnitude M and linear source dimension L(M) is preceded within a few years by certain patterns of seismicity in the magnitude range down to about (M - 3) in an area of linear dimension about 5L-10L. Prediction algorithms based on such patterns may allow one to predict approximately 80% of strong earthquakes with alarms occupying altogether 20-30% of the time-space considered. An area of alarm can be narrowed down to 2L-3L when observations include lower magnitudes, down to about (M - 4). In spite of their limited accuracy, such predictions open a possibility to prevent considerable damage. The following findings may provide for further development of prediction methods: (i) long-range correlations in fault system dynamics and accordingly large size of the areas over which different observed fields could be averaged and analyzed jointly, (ii) specific symptoms of an approaching strong earthquake, (iii) the partial similarity of these symptoms worldwide, (iv) the fact that some of them are not Earth specific: we probably encountered in seismicity the symptoms of instability common for a wide class of nonlinear systems.
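The prediction performance quoted above (roughly 80% of strong earthquakes predicted with alarms covering 20-30% of the time-space) is commonly scored on the error, or Molchan, diagram: miss rate against fraction of time-space on alarm. A minimal sketch, with hypothetical counts:

```python
# Scoring a prediction algorithm on the error (Molchan) diagram -- a
# sketch of the trade-off quoted above; the event counts are hypothetical.

def molchan_point(events_predicted, events_total, alarm_fraction):
    """Return (alarm fraction, miss rate); random guessing has miss = 1 - alarm."""
    miss_rate = 1.0 - events_predicted / events_total
    return alarm_fraction, miss_rate

tau, nu = molchan_point(events_predicted=8, events_total=10, alarm_fraction=0.25)
skill = (1.0 - tau) - nu   # > 0 means better than random guessing
print(f"alarm fraction={tau:.2f}, miss rate={nu:.2f}, skill={skill:.2f}")
```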

Abstract:

Predictions of earthquakes that are based on observations of precursory seismicity cannot depend on the average properties of the seismicity, such as the Gutenberg-Richter (G-R) distribution. Instead they must depend on the fluctuations in seismicity. We summarize the observational data on the fluctuations of seismicity in space, in time, and in a coupled space-time regime over the past 60 yr in Southern California, to provide a basis for determining whether these fluctuations are correlated with the times and locations of future strong earthquakes on an appropriate time and space scale. The simple extrapolation of the G-R distribution must lead to an overestimate of the risk due to large earthquakes. There may be two classes of earthquakes: the small earthquakes that satisfy the G-R law and the larger and large ones. Most observations of fluctuations of seismicity are of the rate of occurrence of smaller earthquakes. Large earthquakes are observed to be preceded by significant quiescence on the faults on which they occur and by an intensification of activity at a distance. It is likely that the fluctuations are due to the nature of fractures on individual faults of the network of faults. There are significant inhomogeneities on these faults, which we assume will have an important influence on the nature of self-organization of seismicity. The principal source of the inhomogeneity on the large scale is the influence of geometry--i.e., of the nonplanarity of faults and the system of faults.
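The G-R distribution referred to above is the frequency-magnitude relation log₁₀ N(≥M) = a − bM. A minimal sketch with illustrative a and b values shows the extrapolation to large magnitudes that the abstract argues overestimates their rate:

```python
# Gutenberg-Richter frequency-magnitude relation, log10 N(>=M) = a - b*M --
# a minimal sketch; a and b are hypothetical regional constants, and the
# abstract argues extrapolating this to large M overestimates their rate.

def gr_rate(M, a, b):
    """Expected number of earthquakes per year with magnitude >= M."""
    return 10.0 ** (a - b * M)

a, b = 4.0, 1.0
for M in (3.0, 5.0, 7.0):
    print(f"M >= {M}: {gr_rate(M, a, b):g} events/yr")
```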

Relevância:

10.00%

Publicador:

Resumo:

This survey of well-documented repeated fault rupture confirms that some faults have exhibited a "characteristic" behavior during repeated large earthquakes--that is, the magnitude, distribution, and style of slip on the fault has repeated during two or more consecutive events. In two cases faults exhibit slip functions that vary little from earthquake to earthquake. In one other well-documented case, however, fault lengths contrast markedly for two consecutive ruptures, but the amount of offset at individual sites was similar. Adjacent individual patches, 10 km or more in length, failed singly during one event and in tandem during the other. More complex cases of repetition may also represent the failure of several distinct patches. The faults of the 1992 Landers earthquake provide an instructive example of such complexity. Together, these examples suggest that large earthquakes commonly result from the failure of one or more patches, each characterized by a slip function that is roughly invariant through consecutive earthquake cycles. The persistence of these slip-patches through two or more large earthquakes indicates that some quasi-invariant physical property controls the pattern and magnitude of slip. These data seem incompatible with theoretical models that produce slip distributions that are highly variable in consecutive large events.
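The patch behavior described above can be caricatured numerically: two adjacent patches, each with a fixed "characteristic" slip function, fail singly in one event and in tandem in the next, so rupture length varies while offsets at individual sites repeat. The patch geometry and slip values below are invented for illustration and do not come from the survey.

```python
# Cartoon of quasi-invariant slip patches: the offset at a given site is
# the same whether its patch fails alone or together with its neighbor.

patch_A = {0: 1.0, 5: 2.0, 10: 0.5}    # site position (km) -> slip (m)
patch_B = {10: 0.5, 15: 1.5, 20: 1.0}  # adjacent patch, shares km 10

def rupture(*patches):
    """Combine the slip of whichever patches fail together in one event."""
    slip = {}
    for p in patches:
        for x, s in p.items():
            slip[x] = max(slip.get(x, 0.0), s)  # patches overlap only at endpoints
    return slip

event1 = rupture(patch_A)            # patch A fails singly: 10-km rupture
event2 = rupture(patch_A, patch_B)   # both fail in tandem: 20-km rupture
print(event1[5], event2[5])  # 2.0 2.0 -> same offset at km 5 in both events
```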

Relevância:

10.00%

Publicador:

Resumo:

Requirements for testing include advance specification of the conditional rate density (probability per unit time, area, and magnitude) or, alternatively, probabilities for specified intervals of time, space, and magnitude. Here I consider testing fully specified hypotheses, with no parameter adjustments or arbitrary decisions allowed during the test period. Because it may take decades to validate prediction methods, it is worthwhile to formulate testable hypotheses carefully in advance. Earthquake prediction generally implies that the probability will be temporarily higher than normal. Such a statement requires knowledge of "normal behavior"--that is, it requires a null hypothesis. Hypotheses can be tested in three ways: (i) by comparing the number of actual earthquakes to the number predicted, (ii) by comparing the likelihood score of actual earthquakes to the predicted distribution, and (iii) by comparing the likelihood ratio to that of a null hypothesis. The first two tests are purely self-consistency tests, while the third is a direct comparison of two hypotheses. Predictions made without a statement of probability are very difficult to test, and any test must be based on the ratio of earthquakes in and out of the forecast regions.
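Test (i), comparing the observed number of earthquakes to the number predicted, is often implemented by assuming the forecast implies a Poisson count (an assumption here; the abstract's framework is more general). A sketch of such a number test:

```python
import math

# Number test for a fully specified forecast: given a predicted Poisson
# rate mu, how surprising is the observed count n? Small tail
# probabilities in either direction reject the forecast.

def poisson_pmf(k, mu):
    return math.exp(-mu) * mu ** k / math.factorial(k)

def n_test(n_observed, mu_predicted):
    """Return P(N <= n) and P(N >= n) under Poisson(mu_predicted)."""
    p_le = sum(poisson_pmf(k, mu_predicted) for k in range(n_observed + 1))
    p_ge = 1.0 - sum(poisson_pmf(k, mu_predicted) for k in range(n_observed))
    return p_le, p_ge

p_le, p_ge = n_test(n_observed=12, mu_predicted=5.0)
print(p_le, p_ge)  # large p_le, small p_ge: this forecast under-predicted
```

Tests (ii) and (iii) replace the count with a likelihood score over the specified time-space-magnitude bins, but the logic is the same: the hypothesis must be fixed before the test period begins.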

Relevância:

10.00%

Publicador:

Resumo:

The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that large changes of earthquake probabilities result from stress changes. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
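The Omori decay the model reproduces is usually written in its modified form, n(t) = K / (c + t)^p, with the aftershock rate falling off roughly as 1/t for p near 1. The parameter values below are illustrative only; the point of the abstract is that the model gives these parameters a physical interpretation rather than treating them as empirical fits.

```python
# Modified Omori law for aftershock rate. K, c, p are illustrative.

def omori_rate(t, K=100.0, c=0.1, p=1.0):
    """Aftershock rate (events/day) at time t days after the mainshock."""
    return K / (c + t) ** p

print(omori_rate(1.0))   # ~90.9 events/day one day after the mainshock
print(omori_rate(10.0))  # ~9.9 events/day after ten days: roughly 1/t decay
```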

Relevância:

10.00%

Publicador:

Resumo:

We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale or for which the numerical procedure imposes an abrupt strength drop at the onset of slip have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
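The distinction the abstract draws hinges on comparing the numerical cell size to the nucleation size h*. One common order-of-magnitude estimate for rate- and state-dependent friction scales h* as G*Dc / ((b - a)*sigma); this scaling and all parameter values below are assumptions for illustration, since exact prefactors differ among formulations.

```python
# Rough estimate of the coherent slip-patch size h* and a grid-size check:
# cell << h* puts the model in the continuum class; cell > h* makes it
# "inherently discrete". Scaling and values are illustrative assumptions.

def nucleation_size(G=3e10, Dc=1e-4, b_minus_a=0.004, sigma=1e8):
    """h* in meters: shear modulus G [Pa], slip-weakening distance Dc [m],
    friction parameter (b - a), effective normal stress sigma [Pa]."""
    return G * Dc / (b_minus_a * sigma)

h_star = nucleation_size()
cell = 1.0  # numerical cell size in meters
print(h_star, cell < h_star)  # 7.5 True: cell is well below h* here
```

Note the limiting case the abstract describes: classical friction without a weakening length gives Dc = 0, hence h* = 0, so no grid can resolve the nucleation process and the model is inherently discrete regardless of cell size.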

Relevância:

10.00%

Publicador:

Resumo:

We study a simple antiplane fault of finite length embedded in a homogeneous isotropic elastic solid to understand the origin of seismic source heterogeneity in the presence of nonlinear rate- and state-dependent friction. All the mechanical properties of the medium and friction are assumed homogeneous. Friction includes a characteristic length that is longer than the grid size so that our models have a well-defined continuum limit. Starting from a heterogeneous initial stress distribution, we apply a slowly increasing uniform stress load far from the fault and we simulate the seismicity for a few thousand events. The style of seismicity produced by this model is determined by a control parameter associated with the degree of rate dependence of friction. For classical friction models with rate-independent friction, no complexity appears and seismicity is perfectly periodic. For weakly rate-dependent friction, large ruptures are still periodic, but small seismicity becomes increasingly nonstationary. When friction is highly rate-dependent, seismicity becomes nonperiodic and ruptures of all sizes occur inside the fault. Highly rate-dependent friction destabilizes the healing process, producing premature healing of slip and partial stress drop. Partial stress drop produces large variations in the state of stress that in turn produce earthquakes of different sizes. Similar results have been found by other authors using the Burridge and Knopoff model. We conjecture that all models in which the static stress drop is only a fraction of the dynamic stress drop produce stress heterogeneity.
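The closing conjecture can be illustrated with a cartoon fault: if each failure releases only a variable fraction of the full stress drop (premature healing), an initially uniform stress field becomes heterogeneous under uniform loading, seeding events of different sizes. The cellular fault and update rules below are an invented toy, not the paper's continuum model.

```python
import random

# Toy illustration of partial stress drop generating stress heterogeneity.
# Cells load uniformly; a failing cell releases a random fraction of the
# nominal drop, mimicking premature healing. All rules are a cartoon.

random.seed(1)
ncell = 50
stress = [0.0] * ncell   # initially uniform stress on each fault cell
strength = 1.0           # uniform failure threshold
drop_fraction = 0.7      # only part of the dynamic drop is released

for step in range(10000):
    stress = [s + 0.001 for s in stress]  # slow uniform loading
    for i in range(ncell):
        if stress[i] >= strength:         # cell fails
            stress[i] -= drop_fraction * strength * random.uniform(0.5, 1.0)

spread = max(stress) - min(stress)
print(round(spread, 3))  # nonzero spread: the stress field is heterogeneous
```

With drop_fraction = 1.0 and no randomness the same loop stays perfectly periodic and uniform, which parallels the rate-independent, full-stress-drop limit described in the abstract.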