868 results for Fault severity
Abstract:
Earthquake zones in the upper crust are usually more conductive than the surrounding rocks, and electrical geophysical measurements can be used to map these zones. Magnetotelluric (MT) measurements across fault zones that run parallel to the coast, and not too far from it, can also provide important information about the lower crustal zone. Long-period electric currents induced in the ocean gradually leak into the mantle, but the lower crust is usually so resistive that very little leakage takes place. A less resistive lower crustal zone therefore acts as a leakage path, and this can be seen because the MT phase changes as the ocean currents leave the upper crust. The San Andreas Fault is parallel to the ocean boundary and close enough for substantial ocean-induced currents to cross the zone. After the earthquake, the Loma Prieta zone showed strong leakage of ocean electric currents, suggesting that the lower crust under the fault zone was much more conductive than normal. It is hard to believe that the water responsible for this conductivity had time to enter the lower crustal zone after the earthquake, so it was probably always there, but poorly connected. If so, the poorly connected water would be at a pressure close to the rock pressure, and it may play a role in modifying the fluid pressure in the upper crustal fault zone. We also have telluric measurements across the San Andreas Fault near Palmdale from 1979 to 1990; beginning in 1985, the telluric signals on and east of the fault zone changed relative to those west of it. These measurements were probably recording an improving connection of the lower crustal fluids, which may result in fluid flow from the lower crust into the upper crust. This could be a factor in changing the strength of the upper crustal fault zone.
Abstract:
The rate- and state-dependent constitutive formulation for fault slip characterizes an exceptional variety of materials over a wide range of sliding conditions. This formulation provides a unified representation of diverse sliding phenomena including slip weakening over a characteristic sliding distance Dc, apparent fracture energy at a rupture front, time-dependent healing after rapid slip, and various other transient and slip rate effects. Laboratory observations and theoretical models both indicate that earthquake nucleation is accompanied by long intervals of accelerating slip. Strains from the nucleation process on buried faults generally could not be detected if laboratory values of Dc apply to faults in nature. However, scaling of Dc is presently an open question and the possibility exists that measurable premonitory creep may precede some earthquakes. Earthquake activity is modeled as a sequence of earthquake nucleation events. In this model, earthquake clustering arises from sensitivity of nucleation times to the stress changes induced by prior earthquakes. The model gives the characteristic Omori aftershock decay law and assigns physical interpretation to aftershock parameters. The seismicity formulation predicts that stress changes produce large changes of earthquake probabilities. Two mechanisms for foreshocks are proposed that describe the observed frequency of occurrence of foreshock-mainshock pairs by time and magnitude. With the first mechanism, foreshocks represent a manifestation of earthquake clustering in which the stress change at the time of the foreshock increases the probability of earthquakes at all magnitudes including the eventual mainshock. With the second model, accelerating fault slip on the mainshock nucleation zone triggers foreshocks.
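The rate- and state-dependent formulation summarized above can be sketched numerically. The fragment below uses the Dieterich ("aging") form of the state evolution law; all parameter values are illustrative laboratory-scale assumptions, not numbers from the abstract:

```python
import math

def friction(mu0, a, b, V, V0, theta, Dc):
    """Rate- and state-dependent friction coefficient (Dieterich form)."""
    return mu0 + a * math.log(V / V0) + b * math.log(V0 * theta / Dc)

def evolve_theta(theta, V, Dc, dt):
    """One forward-Euler step of the aging law: dtheta/dt = 1 - V*theta/Dc."""
    return theta + dt * (1.0 - V * theta / Dc)

# Illustrative (assumed) laboratory-scale values; b > a gives velocity weakening.
mu0, a, b, V0, Dc = 0.6, 0.010, 0.015, 1e-6, 1e-5
V = 1e-5                 # slip rate after a step up from V0 (m/s)
theta = Dc / V0          # state starts at the old steady-state value
dt = 1e-3
for _ in range(100_000):
    theta = evolve_theta(theta, V, Dc, dt)

# After the transient, friction relaxes to the analytic steady state
# mu_ss = mu0 + (a - b) ln(V/V0), and theta relaxes to Dc/V.
mu_ss = mu0 + (a - b) * math.log(V / V0)
print(friction(mu0, a, b, V, V0, theta, Dc), mu_ss)
```

Since b > a here, steady-state friction decreases with slip rate (velocity weakening), the regime in which the accelerating-slip nucleation described in the abstract arises.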
Abstract:
We summarize studies of earthquake fault models that give rise to slip complexities like those in natural earthquakes. For models of smooth faults between elastically deformable continua, it is critical that the friction laws involve a characteristic distance for slip weakening or evolution of surface state. That results in a finite nucleation size, or coherent slip patch size, h*. Models of smooth faults, using a numerical cell size properly small compared to h*, show periodic response or complex and apparently chaotic histories of large events but have not been found to show small event complexity like the self-similar (power law) Gutenberg-Richter frequency-size statistics. This conclusion is supported in the present paper by fully inertial elastodynamic modeling of earthquake sequences. In contrast, some models of locally heterogeneous faults with quasi-independent fault segments, represented approximately by simulations with cell size larger than h* so that the model becomes "inherently discrete," do show small event complexity of the Gutenberg-Richter type. Models based on classical friction laws without a weakening length scale, or for which the numerical procedure imposes an abrupt strength drop at the onset of slip, have h* = 0 and hence always fall into the inherently discrete class. We suggest that the small-event complexity that some such models show will not survive regularization of the constitutive description, by inclusion of an appropriate length scale leading to a finite h*, and a corresponding reduction of numerical grid size.
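The resolution criterion discussed above (numerical cell size small compared with h*) can be illustrated with the commonly quoted scaling h* ~ μDc/((b − a)σ). Order-one prefactors differ between formulations, so this is an order-of-magnitude sketch with assumed values, not any specific paper's definition:

```python
def h_star_scale(mu, Dc, b_minus_a, sigma):
    """Order-of-magnitude nucleation (coherent slip patch) size:
    h* ~ mu * Dc / ((b - a) * sigma).  Formulation-dependent
    prefactors of order one are omitted."""
    return mu * Dc / (b_minus_a * sigma)

# Assumed values: crustal shear modulus, laboratory Dc, typical (b - a),
# mid-crustal effective normal stress.
h = h_star_scale(mu=3e10, Dc=1e-5, b_minus_a=0.005, sigma=1e8)
print(f"h* ~ {h:.1f} m")   # sub-metre patch for laboratory Dc

# Per the abstract, a grid with cells larger than h* is "inherently discrete".
cell = 100.0  # assumed grid cell size, m
print("inherently discrete" if cell > h else "resolves nucleation")
```

The tiny h* produced by laboratory Dc values is why the abstract stresses that resolving nucleation numerically, rather than under-resolving it, changes the character of the simulated seismicity.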
Abstract:
Although models of homogeneous faults develop seismicity that has a Gutenberg-Richter distribution, this is only a transient state that is followed by events that are strongly influenced by the nature of the boundaries. Models with geometrical inhomogeneities of fracture thresholds can limit the sizes of earthquakes but now favor the characteristic earthquake model for large earthquakes. The character of the seismicity is extremely sensitive to the distribution of inhomogeneities, suggesting that statistical rules for large earthquakes in one region may not be applicable to large earthquakes in another region. Model simulations on simple networks of faults with inhomogeneities of threshold develop episodes of lacunarity on all members of the network. There is no validity to the popular assumption that the average rate of slip on individual faults is a constant. Intermediate-term precursory activity, such as local quiescence and increases in intermediate-magnitude activity at long range, is simulated well by the assumption that the dominant process is strong weakening of faults by injection of fluids and weakening of asperities on inhomogeneous models of fault networks; the heat flow paradox, the orientation of the stress field, and the low average stress drop in some earthquakes are understood in terms of the asperity model of inhomogeneous faulting.
Abstract:
The interdependence between the geometry of a fault system, its kinematics, and seismicity is investigated. A quantitative measure is introduced for the inconsistency between a fixed configuration of faults and the slip rates on each fault. This measure, named geometric incompatibility (G), summarily depicts the instability near fault junctions: their divergence or convergence ("unlocking" or "locking up") and the accumulation of stress and deformation. Accordingly, changes in G are connected with the dynamics of seismicity. Apart from geometric incompatibility, we consider the deviation K from the well-known Saint-Venant condition of kinematic compatibility. This deviation summarily depicts unaccounted-for stress and strain accumulation in the region and/or internal inconsistencies in a reconstruction of the block-and-fault system (its geometry and movements). The estimates of G and K provide a useful tool for bringing together data on different types of movement in a fault system. An analog of the Stokes formula is found that allows determination of the total values of G and K in a region from data on its boundary. The phenomenon of geometric incompatibility implies that the nucleation of strong earthquakes is to a large extent controlled by processes near fault junctions. Junctions that have been locked up may act as transient asperities, and unlocked junctions may act as transient weakest links. Tentative estimates of K and G are made for each end of the Big Bend of the San Andreas fault system in Southern California. The recent strong earthquakes Landers (1992, M = 7.3) and Northridge (1994, M = 6.7) both reduced K but had opposite impacts on G: Landers unlocked the area, whereas Northridge locked it up again.
Abstract:
Induction motors play an important role in industry, a fact that highlights the importance of correctly diagnosing and classifying faults while still in the early stages of their evolution, enabling productivity gains and, above all, preventing serious damage to processes and machines. This thesis therefore proposes an intelligent multiclassifier for diagnosing healthy motors, stator-winding short-circuit faults, rotor faults, and bearing faults in three-phase induction motors driven by different models of frequency inverters, based on the analysis of the amplitudes of the stator current signals in the time domain. To assess classification accuracy across different fault severity levels, the performance of four distinct machine learning techniques was compared, namely: (i) Fuzzy ARTMAP network, (ii) Multilayer Perceptron network, (iii) Support Vector Machine, and (iv) k-Nearest Neighbors. Experimental results obtained from 13,574 experimental trials are presented to validate the study, considering a wide range of operating frequencies as well as load torque regimes on 5 different motors.
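Of the four techniques compared in the thesis, k-Nearest Neighbors is simple enough to sketch without dedicated libraries. The fragment below runs a plain kNN vote on synthetic Gaussian stand-ins for the four motor conditions; the data, feature count, and cluster layout are all invented for illustration and are not the thesis' stator-current features:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for time-domain stator-current amplitude features:
# one Gaussian cluster per condition (healthy, stator short-circuit,
# rotor fault, bearing fault).
centers = np.array([[0, 0], [3, 0], [0, 3], [3, 3]], dtype=float)
X = np.vstack([c + 0.5 * rng.standard_normal((100, 2)) for c in centers])
y = np.repeat(np.arange(4), 100)

def knn_predict(Xtr, ytr, Xte, k=5):
    """Plain k-nearest-neighbours majority vote, one of the four
    classifiers compared in the thesis."""
    d = np.linalg.norm(Xte[:, None, :] - Xtr[None, :, :], axis=2)
    nearest = np.argsort(d, axis=1)[:, :k]
    votes = ytr[nearest]
    return np.array([np.bincount(v, minlength=4).argmax() for v in votes])

# Hold out every 4th sample as a test set.
mask = np.zeros(len(y), bool)
mask[::4] = True
pred = knn_predict(X[~mask], y[~mask], X[mask])
acc = (pred == y[mask]).mean()
print(f"kNN accuracy: {acc:.2f}")
```

With well-separated clusters the accuracy is near perfect; the thesis' contribution lies in comparing such classifiers on real inverter-driven motor data across fault severity levels.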
Abstract:
This research presents the development and implementation of fault location algorithms for power distribution networks with distributed generation units installed along their feeders. The proposed algorithms locate the fault based on voltage and current signals recorded by intelligent electronic devices installed at the ends of the feeder sections; information used to compute the loads connected to these feeders and their electrical characteristics; and the operating status of the network. In addition, this work presents a study of analytical models of distributed generation and load technologies that could contribute to the performance of the proposed fault location algorithms. The algorithms were validated through computer simulations using network models implemented in ATP, whereas the algorithms themselves were implemented in MATLAB.
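The thesis' algorithms themselves depend on IED records, load models, and network status, but the basic idea of locating a fault from voltage and current phasors at one end of a feeder section can be shown with the textbook reactance method; the line impedance, fault current, and distance below are assumed values, not data from the work:

```python
def distance_to_fault(V, I, z_per_km):
    """Estimate the distance (km) to a bolted fault from the voltage and
    current phasors seen at one end of a feeder section, using the
    reactance method: d = Im(V/I) / Im(z_per_km)."""
    return (V / I).imag / z_per_km.imag

z = complex(0.3, 0.4)          # assumed line impedance, ohm/km
d_true = 7.5                   # assumed fault distance, km
V_f = d_true * z * 200.0       # voltage drop for an assumed 200 A bolted fault
print(round(distance_to_fault(V_f, 200.0, z), 2))  # → 7.5
```

Real distribution feeders add fault resistance, laterals, load current, and distributed generation infeed, which is precisely why the thesis develops more elaborate algorithms than this single-ended estimate.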
Abstract:
Widely held clinical assumptions about self-harming eating disorder patients were tested in this project. Specifically, the present study had two aims: (1) to confirm research that suggests patients with self-injurious behavior exhibit greater severity in eating disorder symptomatology; and (2) to document the treatment course for these patients (e.g. reported change in eating disorder attitudes, beliefs, and behaviors) from admission to discharge. Data from 43 participants who received treatment at a Partial Hospitalization Program (PHP) for Eating Disorders were used in the current study. The length of treatment required for study inclusion reflected mean lengths of stay (Williamson, Thaw, & Varnardo-Sullivan, 2001) and meaningful treatment lengths in prior research (McFarlane et al., 2013; McFarlane, Olmsted, & Trottier, 2008): five to eight weeks. Scores on the Eating Disorder Inventory-III (Garner, 2004) at the time of admission and discharge were compared. These results suggest that there are no significant differences between eating-disordered patients who engage in self-injury and those who do not in terms of symptom severity or pathology at admission. The results further suggest that patients in both groups see equivalent reductions in symptoms from admission to discharge across domains and also share non-significant changes in emotional dysregulation over the course of treatment. Importantly, these results also suggest that general psychological maladjustment is higher at discharge for eating-disordered patients who engage in self-injury.
Abstract:
The competitive, globalized environment in which companies operate, especially since the beginning of the twenty-first century, together with ever-shorter product life cycles, strict quality requirements, environmental-protection policies demanding lower energy and water consumption, and legal requirements for better working conditions, has produced a paradigm shift in production processes as previously conceived. One answer to this new production scenario is the extensive use of industrial automation, which has led to increasingly complex systems, both structurally, because of the large number of components, and in terms of the complexity of their control systems. Predicting every possible state of the system becomes practically impossible. Among the possible states are fault states which, depending on the severity of the effect associated with their occurrence, can cause serious harm to people, the environment, and the installations themselves if they are not correctly diagnosed and handled. Recent catastrophes involving production systems show the need to implement measures to prevent and mitigate the effects of faults, with the aim of avoiding such catastrophes. According to specialists, Safety Instrumented Systems (SIS), covered by standards such as IEC 61508 and IEC 61511, are a solution to this type of problem. Published work addresses methods for implementing prevention SIS layers, but work on mitigation SIS layers is scarce. Because the dynamics of a system in a fault state are unknown, traditional modeling techniques become infeasible.
In this case, artificial intelligence techniques, such as fuzzy logic, can become a solution for developing the control algorithm, combined with tools for editing, modeling, and generating the control code. This work proposes a systematic approach for implementing a control system to mitigate critical faults in production systems, with reference to IEC 61508/61511, acting in anticipation of catastrophes.
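The fuzzy-logic control idea mentioned above can be sketched with hand-written triangular membership functions. The rule names, thresholds, and outputs below are illustrative assumptions, not the systematics proposed in the work:

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def mitigation_level(severity):
    """Weighted-average (Sugeno-style) defuzzification over three rules:
    low severity -> monitor (0.0), medium -> partial shutdown (0.5),
    high -> full trip (1.0).  All thresholds are invented for illustration."""
    rules = [
        (tri(severity, -0.5, 0.0, 0.5), 0.0),   # rule: severity is LOW
        (tri(severity,  0.0, 0.5, 1.0), 0.5),   # rule: severity is MEDIUM
        (tri(severity,  0.5, 1.0, 1.5), 1.0),   # rule: severity is HIGH
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0

print(mitigation_level(0.2))  # partial activation of "low" and "medium"
print(mitigation_level(0.9))  # dominated by "high"
```

The appeal of this kind of controller in the fault-mitigation setting described above is that the rules encode expert knowledge directly, without requiring a dynamic model of the system in its fault state.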
Abstract:
PURPOSE: We sought to analyze whether the sociodemographic profile of battered women varies according to the level of severity of intimate partner violence (IPV), and to identify possible associations between IPV and different health problems taking into account the severity of these acts. METHODS: A cross-sectional study of 8,974 women (18-70 years) attending primary healthcare centers in Spain (2006-2007) was performed. A compound index was calculated based on frequency, types (physical, psychological, or both), and duration of IPV. Descriptive analyses were performed, and multivariate logistic regression models were fitted. RESULTS: Women affected by low severity IPV and those affected by high severity IPV were found to have a similar sociodemographic profile. However, divorced women (odds ratio [OR], 8.1; 95% confidence interval [CI], 3.2-20.3), those without tangible support (OR, 6.6; 95% CI, 3.3-13.2), and retired women (OR, 2.7; 95% CI, 1.2-6.0) were more likely to report high severity IPV. Women experiencing high severity IPV were also more likely to suffer from poor health than were those who experienced low severity IPV. CONCLUSIONS: The distribution of low and high severity IPV seems to be influenced by the social characteristics of the women involved and may be an important indicator for estimating health effects. This evidence may contribute to the design of more effective interventions.
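For readers unfamiliar with the statistics reported above, an odds ratio and its Wald 95% confidence interval can be computed from a 2×2 table as below. The counts are invented for illustration, and the study's multivariate logistic models additionally adjust for covariates, which this unadjusted sketch does not:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for an exposure vs. high-severity-IPV outcome.
or_, lo, hi = odds_ratio_ci(30, 70, 20, 380)
print(f"OR = {or_:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```

Wide intervals such as the paper's 3.2-20.3 for divorced women typically reflect small cell counts in the exposed-case group, exactly as the standard-error formula above suggests.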
Abstract:
The design of fault tolerant systems is gaining importance in large domains of embedded applications where design constraints are as important as reliability. New software techniques, based on selective application of redundancy, have shown remarkable fault coverage with reduced costs and overheads. However, the large number of different solutions provided by these techniques, and the costly process of assessing their reliability, make design space exploration a very difficult and time-consuming task. This paper proposes the integration of a multi-objective optimization tool with a software hardening environment to perform an automatic design space exploration in the search for the best trade-offs between reliability, cost, and performance. The first tool is driven by the NSGA-II multi-objective genetic algorithm, which can pursue many design goals simultaneously. The second is a compiler-based infrastructure that automatically produces selectively protected (hardened) versions of the software and generates accurate overhead reports and fault coverage estimations. The advantages of our proposal are illustrated by means of a complex and detailed case study involving a typical embedded application, the AES (Advanced Encryption Standard).
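At the heart of NSGA-II is non-dominated (Pareto) sorting of candidate solutions. A minimal sketch over hypothetical (runtime overhead, uncovered-fault percentage) objective pairs, both to be minimized — the candidate values are invented stand-ins for hardened software configurations:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every objective and strictly
    better in at least one (all objectives minimized)."""
    return (all(pi <= qi for pi, qi in zip(p, q))
            and any(pi < qi for pi, qi in zip(p, q)))

def pareto_front(points):
    """Return the non-dominated subset (the first NSGA-II front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (runtime overhead %, uncovered faults %) per hardening choice.
candidates = [(5, 40), (10, 25), (20, 10), (15, 30), (30, 9), (20, 12)]
print(sorted(pareto_front(candidates)))  # → [(5, 40), (10, 25), (20, 10), (30, 9)]
```

Full NSGA-II adds successive front peeling, crowding-distance diversity, and genetic operators on top of this dominance test; the point of the paper's integration is that each candidate's objectives come from real overhead reports and fault coverage estimations.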
Abstract:
This study analyses the effect of successional stage after farmland terrace abandonment on post-fire plant recovery in a Mediterranean landscape. Specific objectives of the study were to (1) compare fuel characteristics and fire severity in three successional stages after farmland abandonment – dry grassland, dense shrubland and pine stands; (2) analyse the effect of pre-fire successional stage and fire severity on vegetation recovery and (3) analyse the relative vulnerability (i.e. potential for ecosystem shift and soil degradation) to wildfires of the successional stages. We assessed 30 abandoned terraces (15 unburned and 15 burned), with diverse successional stages, on the Xortà Range (south-east Spain). Post-fire recovery was measured 1, 4 and 7 years after fire. The successional stages varied in aboveground biomass, litter amount, vertical structure and continuity of plant cover, and flammability. Dry grassland showed the lowest fire severity, whereas no differences in severity were found between shrubland and pine stands. One year after fire, plant cover was inversely related to fire severity; this relationship attenuated with time after fire. Post-fire recovery of pine stands and shrubland led in both cases to shrublands, contributing to landscape homogenisation. The pine stands showed the largest changes in composition due to fire and the lowest post-fire plant recovery – a sign of high vulnerability to fire.
Abstract:
This work presents a 3D geometric model of growth strata cropping out in a fault-propagation fold associated with the Crevillente Fault (Abanilla-Alicante sector) in the Bajo Segura Basin (eastern Betic Cordillera, southern Spain). Analysis of this 3D model enables us to unravel the along-strike and along-section variations of the growth strata, providing constraints on fold development and, hence, on the fault's kinematic evolution in space and time. We postulate that the observed along-strike dip variations are related to lateral variation in fault displacement. Along-section variations of the progressive unconformity opening angles indicate greater fault slip in the upper Tortonian–Messinian time span; from the Messinian on, quantitative analysis of the unconformity indicates constant or lower tectonic activity of the Crevillente Fault (Abanilla-Alicante sector), and the lower abundance of striated pebbles in the Pliocene-Quaternary units could be interpreted as a decrease in the stress magnitude and consequently in the tectonic activity of the fault. At a regional scale, comparison of the growth successions cropping out at the northern and southern limits of the Bajo Segura Basin points to a southward migration of deformation in the basin. This means that the Bajo Segura Fault became active after the Crevillente Fault (Abanilla-Alicante sector), at a time when, according to our data, activity on the latter was probably already decreasing. Consequently, we propose that the seismic hazard at the northern limit of the Bajo Segura Basin should be lower than at the southern limit.
Abstract:
Much has been made of the divide that opened up in 2015 between eastern and western member states as a result of acrimonious discussions on how to handle the refugee crisis and distribute asylum applicants across the EU. Against the prevailing political sentiment in certain member state capitals, Germany and France pushed through a plan devised by the European Commission to relocate 120,000 refugees, by a qualified majority vote in the Council. Rather than creating an east/west divide, however, the vote split the group of (relatively) new Central and Eastern European countries (CEECs) of the EU into two factions: Romania, Czechia, Slovakia and Hungary voted against the plan, whereas several other CEECs, namely Poland, Bulgaria and the Baltic states, joined the controversial motion on the side of the other (northern, southern and western) member states. Finland abstained. Few member states have shifted their positions in the meantime. If anything, in fact, they have coalesced among the Visegrad 4, following a change of government in Poland; and they have hardened, as a result of new proposals by the Commission to fine member states that refuse to accept refugees. With Hungary’s referendum on the Commission’s relocation scheme scheduled for October 2nd, tensions are set to intensify even further.