976 results for Stacking fault energy (SFE)
Abstract:
Faults can slip either aseismically or through episodic seismic ruptures, but we still do not understand the factors that determine the partitioning between these two modes of slip. This challenge can now be addressed thanks to the dense geodetic and seismological networks deployed in various tectonically active areas. Data from such networks, together with modern remote sensing techniques, allow the spatial and temporal variability of the slip mode to be documented. This is the approach taken in this study, which focuses on the Longitudinal Valley Fault (LVF) in Eastern Taiwan. This fault is particularly appropriate since its very fast slip rate (about 5 cm/yr) is accommodated by both seismic and aseismic slip. Deformation of anthropogenic features shows that aseismic creep accounts for a significant fraction of fault slip near the surface, but the fault has also released energy seismically, producing five M_w>6.8 earthquakes in 1951 and 2003. Moreover, owing to the thrust component of slip, the fault zone is exhumed, which allows investigation of deformation mechanisms. In order to place constraints on the factors that control the mode of slip, we apply a multidisciplinary approach combining modeling of geodetic observations, structural analysis, and numerical simulation of the "seismic cycle". Analyzing a dense set of geodetic and seismological data across the Longitudinal Valley, including campaign-mode GPS, continuous GPS (cGPS), leveling, accelerometric, and InSAR data, we document the partitioning between seismic and aseismic slip on the fault. For the period 1992 to 2011, we find that about 80-90% of slip on the LVF in the 0-26 km seismogenic depth range is actually aseismic. The clay-rich Lichi Mélange is identified as the key factor promoting creep at shallow depth.
Microstructural investigations show that deformation within the fault zone must have resulted from a combination of frictional sliding at grain boundaries, cataclasis, and pressure solution creep. Numerical modeling of earthquake sequences has been performed to investigate the possibility of reproducing the results of the kinematic inversion of geodetic and seismological data on the LVF. We first examine the different modeling strategies developed to explore the role and relative importance of the various factors governing how slip accumulates on faults. Comparing quasi-dynamic and fully dynamic simulations, we conclude that ignoring transient wave-mediated stress transfers would be inappropriate. We therefore carry out fully dynamic simulations and succeed in qualitatively reproducing the wide range of observations for the southern segment of the LVF. We conclude that the spatio-temporal evolution of fault slip on the Longitudinal Valley Fault over 1997-2011 is consistent, to first order, with the predictions of a simple model in which a velocity-weakening patch is embedded in a velocity-strengthening area.
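The velocity-weakening versus velocity-strengthening distinction invoked above comes from rate-and-state friction. A minimal sketch of the steady-state law (parameter values are illustrative, not fitted to the LVF):

```python
import math

def steady_state_friction(v, mu0=0.6, a=0.010, b=0.014, v0=1e-6):
    """Steady-state rate-and-state friction: mu_ss = mu0 + (a - b) * ln(v/v0).

    a - b < 0: velocity weakening (stick-slip possible);
    a - b > 0: velocity strengthening (stable creep favoured).
    Parameter values here are illustrative, not fitted to the LVF.
    """
    return mu0 + (a - b) * math.log(v / v0)

# With b > a the patch weakens as slip accelerates, the ingredient that
# lets a velocity-weakening patch nucleate earthquakes in such models.
mu_slow = steady_state_friction(1e-9)   # slow creep rate (m/s)
mu_fast = steady_state_friction(1e-3)   # coseismic-like rate (m/s)
```

The sign of `a - b` alone decides which regime a fault patch falls into, which is why the simple patch-in-matrix model above can reproduce the observed mixture of creep and earthquakes.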
Abstract:
Bayesian-formulated neural networks are implemented using the hybrid Monte Carlo method for probabilistic fault identification in cylindrical shells. Each of 20 nominally identical cylindrical shells is divided into three substructures. Holes of (12±2) mm in diameter are introduced in each of the substructures and vibration data are measured. Modal properties and the Coordinate Modal Assurance Criterion (COMAC) are used to train the two modal-property neural networks; the COMAC values are calculated by treating the natural-frequency vector as an additional mode. Modal energies are calculated by integrating the real and imaginary components of the frequency response functions over bandwidths of 12% of the natural frequencies. The modal energies and the Coordinate Modal Energy Assurance Criterion (COMEAC) are used to train the two frequency-response-function neural networks. The averages of the two sets of trained networks (COMAC with COMEAC, and modal properties with modal energies) form two committees of networks. The COMEAC and COMAC prove to be better identification data than the modal properties and modal energies used directly. The committee approach gives lower standard deviations than the individual methods. The main advantage of the Bayesian formulation is that it yields damage identities together with their respective confidence intervals.
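The COMAC used as training data above is a standard per-coordinate correlation between two sets of mode shapes. A small sketch of the formula (the paper's network training is not reproduced here):

```python
import numpy as np

def comac(phi_a, phi_b):
    """Coordinate Modal Assurance Criterion per degree of freedom.

    phi_a, phi_b: (n_dof, n_modes) mode-shape matrices for the reference and
    test structure. Returns an (n_dof,) vector in [0, 1]; values well below 1
    flag coordinates whose motion changed, i.e. candidate damage locations.
    """
    num = np.sum(np.abs(phi_a * phi_b), axis=1) ** 2
    den = np.sum(phi_a**2, axis=1) * np.sum(phi_b**2, axis=1)
    return num / den

# Identical mode shapes give COMAC = 1 at every coordinate.
modes = np.random.default_rng(0).normal(size=(6, 4))
```

The COMEAC has the same structure with modal energies substituted for mode-shape entries.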
Abstract:
A fully integrated 0.18 μm DC-DC buck converter using a low-swing "stacked driver" configuration is reported in this paper. A high switching frequency of 660 MHz allows the filter components to fit on chip, but incurs high switching losses. These losses are reduced by: 1) low-swing drivers; 2) supply stacking; and 3) a charge transfer path that delivers excess charge from the positive metal-oxide-semiconductor (PMOS) drive chain to the load, thereby recycling the charge. The working prototype converts 2.2 V to 0.75-1.0 V at 40-55 mA. The design and simulation of an improved circuit are also included; it further improves efficiency by enhancing the charge-recycling path, providing automated zero-voltage-switching (ZVS) operation, and synchronizing the half-swing gating signals. © 2009 IEEE.
Abstract:
This paper advocates 'reduce, reuse, recycle' as a complete energy-savings strategy. While reduction has been common to date, there is a growing need to emphasize reuse and recycling as well. We design a DC-DC buck converter to demonstrate the three techniques: reduce with low swing and zero voltage switching (ZVS), reuse with supply stacking, and recycle with regulated delivery of excess energy to the output load. The efficiency gained from these three techniques helps offset the loss of operating drivers at the very high switching frequencies needed to move the output filter completely on-chip. A prototype fabricated in 0.18 μm CMOS operates at 660 MHz and converts 2.2 V to 0.75-1.0 V at ∼50 mA. © 2008 IEEE.
Abstract:
The dependence of the Raman spectrum on the excitation energy has been investigated for ABA- and ABC-stacked few-layer graphene in order to establish the fingerprint of the stacking order and the number of layers, which affect the transport and optical properties of few-layer graphene. Five different excitation sources with energies of 1.96, 2.33, 2.41, 2.54, and 2.81 eV were used. The position and line shape of the Raman 2D, G*, N, M, and other combination modes depend on the excitation energy as well as on the stacking order and the thickness. One can unambiguously determine the stacking order and the thickness by comparing the 2D band spectra measured with two different excitation energies, or by carefully comparing weaker combination Raman modes such as the N, M, or LOLA modes. Criteria for the unambiguous determination of the stacking order and the number of layers, up to 5 layers, are established.
Abstract:
By means of low-temperature photoluminescence and synchrotron radiation X-ray diffraction, the existence of stacking faults has been determined in epitaxial lateral overgrowth GaN grown by metalorganic chemical vapor deposition.
Abstract:
A series of block copolymers containing a nonconjugated spacer and a 3D π-π stacking structure with simultaneous blue-, green-, and yellow-emitting units has been synthesized and characterized. The dependence of the energy transfer and electroluminescence (EL) properties of these block copolymers on the content of oligo(phenylenevinylene)s has been investigated. The block copolymer (GEO8-BEO-YEO4) with 98.8% blue-emitting oligomer (BEO), 0.8% green-emitting oligomer (GEO), and 0.4% yellow-emitting oligomer (YEO) showed the best electroluminescent performance, exhibiting a maximum luminance of 2309 cd/m² and an efficiency of 0.34 cd/A. The single-layer polymer light-emitting diode device based on GEO2-BEO-YEO4 emitted greenish-white light with CIE coordinates of (0.26, 0.37) at 10 V. The synergetic effect of the efficient energy transfer and the 3D π-π stacking of these block copolymers on the photoluminescent and electroluminescent properties is investigated.
Abstract:
Since Wireless Sensor Networks (WSNs) are subject to failures, fault-tolerance is an important requirement for many WSN applications. Fault-tolerance can be enabled in different areas of WSN design and operation, including the Medium Access Control (MAC) layer and the initial topology design. To be robust to failures, a MAC protocol must be able to adapt to traffic fluctuations and topology dynamics. We design ER-MAC, which can switch from energy-efficient operation during normal monitoring to reliable and fast delivery for emergency monitoring, and vice versa. It can also prioritise high-priority packets and guarantee fair packet delivery from all sensor nodes. Topology design supports fault-tolerance by ensuring that there are alternative acceptable routes to the data sinks when failures occur. We provide solutions for four topology planning problems: Additional Relay Placement (ARP), Additional Backup Placement (ABP), Multiple Sink Placement (MSP), and Multiple Sink and Relay Placement (MSRP). Our solutions use a local search technique based on Greedy Randomized Adaptive Search Procedures (GRASP). GRASP-ARP deploys relays for (k,l)-sink-connectivity, where each sensor node must have k vertex-disjoint paths of length ≤ l. To count how many disjoint paths a node has, we propose Counting-Paths. GRASP-ABP deploys fewer relays than GRASP-ARP by focusing only on the most important nodes, those whose failure has the worst effect. To identify such nodes, we define Length-constrained Connectivity and Rerouting Centrality (l-CRC). Greedy-MSP and GRASP-MSP place minimal-cost sinks to ensure that each sensor node in the network is double-covered, i.e. has two length-bounded paths to two sinks. Greedy-MSRP and GRASP-MSRP deploy sinks and relays with minimal cost to make the network double-covered and non-critical, i.e. all sensor nodes must have length-bounded alternative paths to sinks when an arbitrary sensor node fails. We then evaluate the fault-tolerance of each topology in data-gathering simulations using ER-MAC.
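The (k,l)-sink-connectivity check above asks how many vertex-disjoint, length-bounded paths a node has. A greedy heuristic sketch of that idea (repeated shortest-path BFS with interior-vertex removal; this is not the thesis's exact Counting-Paths algorithm, and exact counting needs a length-bounded flow formulation):

```python
from collections import deque

def count_disjoint_bounded_paths(adj, src, dst, max_len):
    """Greedy lower bound on vertex-disjoint src->dst paths of length <= max_len.

    Repeatedly takes a shortest path by BFS, then deletes its interior
    vertices. adj: dict node -> set of successors. Assumes src != dst.
    """
    adj = {u: set(vs) for u, vs in adj.items()}
    count = 0
    while True:
        parent, q, found = {src: None}, deque([(src, 0)]), False
        while q:
            u, d = q.popleft()
            if u == dst:
                found = True
                break
            if d == max_len:
                continue
            for v in adj.get(u, ()):
                if v not in parent:
                    parent[v] = u
                    q.append((v, d + 1))
        if not found:
            return count
        count += 1
        if parent[dst] == src:               # direct edge path: consume the edge
            adj[src].discard(dst)
        else:                                # delete the path's interior vertices
            node = parent[dst]
            while node != src:
                adj.pop(node, None)
                for vs in adj.values():
                    vs.discard(node)
                node = parent[node]

# Example: two interior-disjoint 2-hop routes from 's' to 't'.
g = {'s': {'a', 'b'}, 'a': {'t'}, 'b': {'t'}, 't': set()}
```

With `max_len=2` the example graph yields two disjoint paths; with `max_len=1` it yields none, which is exactly the length-bounded distinction (k,l)-sink-connectivity cares about.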
Abstract:
This paper proposes a decoupled fault ride-through strategy for a doubly fed induction generator (DFIG) to enhance network stability during grid disturbances. In the decoupled operation, the DFIG operates as an induction generator (IG) while the converter unit acts as a reactive power source during a fault condition. The transition power characteristics of the DFIG have been analyzed to derive the capability of the proposed strategy under various system conditions. The optimal crowbar resistance is obtained to exploit the maximum power capability of the DFIG during decoupled operation. Methods have been established to ensure proper coordination between the IG mode and reactive power compensation from the grid-side converter during decoupled operation. The viability and benefits of the proposed strategy are demonstrated using different test network structures and different wind penetration levels. Control performance has been benchmarked against existing grid code standards and commercial wind generator systems, based on the optimal network support required (i.e., voltage or frequency) by the system operator from a wind farm installed at a particular location.
Abstract:
Wavelet transforms provide basis functions for time-frequency analysis and have properties that are particularly useful for the compression of analogue point-on-wave transient and disturbance power system signals. This paper evaluates the compression properties of the discrete wavelet transform using actual power system data. The results presented in the paper indicate that compression ratios of up to 10:1 with acceptable distortion are achievable. The paper discusses the application of the data-reduction method for expedient fault analysis and protection assessment.
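The compression mechanism is transform-then-threshold: most wavelet coefficients of a disturbance record are near zero and can be dropped with bounded distortion. A minimal sketch with a hand-rolled one-level Haar DWT on a synthetic record (the signal, sample rate, and threshold are illustrative, not the paper's data):

```python
import numpy as np

def haar_step(x):
    """One level of the orthonormal Haar DWT (x must have even length)."""
    s2 = np.sqrt(2.0)
    return (x[0::2] + x[1::2]) / s2, (x[0::2] - x[1::2]) / s2

def inverse_haar_step(approx, detail):
    """Exact inverse of haar_step."""
    s2 = np.sqrt(2.0)
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / s2
    x[1::2] = (approx - detail) / s2
    return x

# Synthetic "point-on-wave" record: a 50 Hz wave plus a step disturbance.
t = np.arange(1024) / 6400.0
sig = np.sin(2 * np.pi * 50 * t) + 0.5 * (t > 0.0801)

approx, detail = haar_step(sig)
detail[np.abs(detail) < 0.05] = 0.0      # discard near-zero detail coefficients
kept = np.count_nonzero(approx) + np.count_nonzero(detail)
recon = inverse_haar_step(approx, detail)
```

Only the detail coefficient straddling the disturbance survives the threshold, so roughly half the samples suffice while the reconstruction error stays below the threshold; deeper decompositions (e.g. via a library such as PyWavelets) push the ratio toward the 10:1 reported above.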
Abstract:
This paper presents a novel detection method for the broken rotor bar (BRB) fault in induction motors based on the Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) and a Simulated Annealing Algorithm (SAA). The performance of ESPRIT is tested with the simulated stator current signal of an induction motor with BRB. Even with short-time measurement data, the technique correctly identifies the frequencies of the BRB characteristic components, but with low accuracy in their amplitudes and initial phases. SAA is therefore used to determine the amplitudes and initial phases, with satisfactory results. Finally, experiments on a 3 kW, 380 V, 50 Hz induction motor are conducted to demonstrate the effectiveness of the ESPRIT-SAA-based method in detecting BRB with short-time measurement data. The results show that the proposed method is a promising choice for BRB detection in induction motors operating with small slip and fluctuating load.
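The frequency-identification step can be illustrated with a textbook ESPRIT on a synthetic two-tone signal (this is only the subspace step; the paper's SAA refinement of amplitudes and initial phases is not reproduced, and the tone frequencies below are stand-ins, not actual BRB sidebands):

```python
import numpy as np

def esprit_freqs(x, p, m=None):
    """Estimate p complex-exponential frequencies (cycles/sample) via ESPRIT:
    Hankel data matrix -> SVD -> signal subspace -> shift (rotational)
    invariance. Textbook sketch, noiseless case."""
    m = m or len(x) // 2
    hankel = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
    u, _, _ = np.linalg.svd(hankel.T, full_matrices=False)
    us = u[:, :p]                                   # signal subspace
    # Solve us[:-1] @ phi = us[1:]; eigenvalues of phi are exp(j*2*pi*f).
    phi, *_ = np.linalg.lstsq(us[:-1], us[1:], rcond=None)
    return np.angle(np.linalg.eigvals(phi)) / (2 * np.pi)

# Two closely spaced tones in a short 200-sample record.
n = np.arange(200)
x = np.exp(2j * np.pi * 0.10 * n) + 0.3 * np.exp(2j * np.pi * 0.13 * n)
freqs = sorted(esprit_freqs(x, p=2))
```

Even on this short record the two frequencies are resolved essentially exactly, which is the property the paper exploits before handing amplitudes and phases to the SAA.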
Abstract:
This paper investigates a flexible fault ride-through strategy for power systems in China with high wind power penetration. The strategy comprises adaptive fault ride-through requirements and maximum power restrictions for wind farms with weak fault ride-through capabilities. Slight and moderate faults, which occur with high probability, are the main defending objective of the strategy. The adaptive fault ride-through requirement consists of two sub-requirements: a temporary slight-voltage ride-through requirement corresponding to a slight fault incident, and a moderate-voltage ride-through requirement corresponding to a moderate fault. The temporary overloading capability of the wind farm is reflected in both requirements, both to enhance the capability to defend against slight faults and to avoid tripping when the crowbar is disconnected after moderate faults are cleared. For wind farms that cannot meet the adaptive fault ride-through requirement, restrictions are placed on the maximum power output. Simulation results show that the flexible fault ride-through strategy increases the fault ride-through capability of the wind farm clusters and reduces wind power curtailment during faults.
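A ride-through requirement of this kind is, mechanically, a check of the voltage trace against a piecewise-linear boundary. A minimal sketch (the boundary breakpoints and trace values below are illustrative, not taken from any specific Chinese grid code):

```python
import bisect

def rides_through(trace, boundary):
    """True if a per-unit voltage trace stays on or above a piecewise-linear
    low-voltage ride-through boundary. trace and boundary are both lists of
    (time_s, voltage_pu) points; the boundary is interpolated linearly."""
    times = [t for t, _ in boundary]

    def floor_v(t):
        i = bisect.bisect_right(times, t) - 1
        if i < 0:
            return boundary[0][1]
        if i >= len(boundary) - 1:
            return boundary[-1][1]
        (t0, v0), (t1, v1) = boundary[i], boundary[i + 1]
        return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

    return all(v >= floor_v(t) for t, v in trace)

# Illustrative boundary: 0.2 pu tolerated for 0.625 s, then a ramp to 0.9 pu.
lvrt = [(0.0, 0.2), (0.625, 0.2), (2.0, 0.9)]
slight_fault = [(0.0, 0.8), (0.5, 0.85), (1.0, 0.95), (2.0, 1.0)]
deep_fault = [(0.0, 0.1), (0.5, 0.1), (1.0, 0.6), (2.0, 0.95)]
```

An adaptive scheme like the one proposed would switch between different `boundary` curves depending on fault severity.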
Abstract:
This paper proposes a new thermography-based maximum power point tracking (MPPT) scheme to address photovoltaic (PV) partial-shading faults. Solar power generation utilizes a large number of PV cells connected in series and in parallel in an array, physically distributed across a large field. When a PV module is faulted or partial shading occurs, the PV system exhibits a nonuniform distribution of generated electrical power and of the thermal profile, and multiple maximum power points (MPPs) appear. If left untreated, this reduces the overall power generation, and severe faults may propagate, resulting in damage to the system. In this paper, a thermal camera is employed for fault detection, and a new MPPT scheme is developed to shift the operating point to an optimized MPP. Extensive data mining is conducted on the thermal camera images in order to locate the global MPP. Based on this, a virtual MPPT is set out to find the global MPP; this reduces the MPPT time and is used to calculate the MPP reference voltage. Finally, the proposed methodology is experimentally implemented and validated by tests on a 600 W PV array.
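The multiple-MPP problem can be made concrete with a toy two-hump P-V curve (all names, the power model, and the parameter values are illustrative; the paper's thermography-driven search is not reproduced):

```python
def pv_power(v, strings):
    """Toy P-V model: each (i_sc, v_oc) string contributes one power hump.
    Purely illustrative; real partial-shading curves arise from
    bypass-diode behaviour in series strings."""
    return sum(max(0.0, i_sc * v * (1.0 - (v / v_oc) ** 8))
               for i_sc, v_oc in strings)

def global_mpp(strings, v_max=40.0, step=0.1):
    """Exhaustive voltage sweep for the global MPP. The paper instead infers
    the global MPP region from thermal-camera data; a plain sweep is shown
    here only to illustrate why local hill-climbing trackers get stuck on
    the wrong hump."""
    best_v, best_p, v = 0.0, 0.0, 0.0
    while v <= v_max:
        p = pv_power(v, strings)
        if p > best_p:
            best_v, best_p = v, p
        v += step
    return best_v, best_p

# Two humps: a local maximum near 27 V and the global one near 12 V.
v_star, p_star = global_mpp([(8.0, 15.0), (3.0, 35.0)])
```

A conventional perturb-and-observe tracker started near 27 V would converge to the local hump; the thermography-derived reference voltage is what lets the proposed scheme jump directly to the global one.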
Abstract:
As the complexity of computing systems grows, reliability and energy are two crucial challenges calling for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption, and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multicore processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique such as Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
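The voltage-frequency trade-off underlying such an analysis follows from the first-order CMOS energy model. A small sketch (the constants `c_eff` and `p_static` are illustrative, not measurements of the ARM platform studied in the paper):

```python
def energy_joules(f_ghz, v_volts, cycles, c_eff=1.0e-9, p_static=0.2):
    """First-order CMOS model: E = (C_eff * V^2 * f + P_static) * (cycles / f).
    C_eff (F) and P_static (W) are illustrative constants, not fitted to
    any real processor."""
    f_hz = f_ghz * 1e9
    t = cycles / f_hz
    return (c_eff * v_volts**2 * f_hz + p_static) * t

# Nominal point vs a near-threshold-ish point: the slow point spends less
# dynamic energy (quadratic in V) but runs 4x longer, accumulating static
# energy; NTVC recovers the slowdown through extra hardware concurrency.
e_fast = energy_joules(2.0, 1.0, 1e10)   # 2.0 GHz at 1.0 V
e_slow = energy_joules(0.5, 0.6, 1e10)   # 0.5 GHz at 0.6 V
```

In this toy setting the near-threshold point wins on energy but loses 4x on time, which is exactly the gap the paper's analysis proposes to close with increased concurrency, at the cost of a higher error rate.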