896 results for Fault compensation
Abstract:
As the complexity of computing systems grows, reliability and energy consumption are two crucial challenges calling for holistic solutions. In this paper, we investigate the interplay among concurrency, power dissipation, energy consumption and voltage-frequency scaling for a key numerical kernel for the solution of sparse linear systems. Concretely, we leverage a task-parallel implementation of the Conjugate Gradient method, equipped with a state-of-the-art preconditioner embedded in the ILUPACK software, and target a low-power multicore processor from ARM. In addition, we perform a theoretical analysis of the impact of a technique such as Near Threshold Voltage Computing (NTVC) from the points of view of increased hardware concurrency and error rate.
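For reference, the kernel referred to above is the preconditioned Conjugate Gradient (CG) iteration. The following minimal Python sketch shows the sequential method with a simple Jacobi preconditioner standing in for ILUPACK's multilevel ILU; names and tolerances are illustrative, not taken from the paper.

```python
import numpy as np

def pcg(A, b, M_inv, x0=None, tol=1e-8, max_iter=1000):
    """Minimal preconditioned Conjugate Gradient sketch.

    A     : symmetric positive-definite matrix (numpy array)
    M_inv : callable applying the preconditioner, z = M^{-1} r
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x                 # initial residual
    z = M_inv(r)                  # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Example with a Jacobi preconditioner as a simple stand-in:
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))
```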
Abstract:
The end of Dennard scaling has promoted low power consumption into a first-order concern for computing systems. However, conventional power conservation schemes such as voltage and frequency scaling are reaching their limits when used in performance-constrained environments. New technologies are required to break the power wall while sustaining performance on future processors. Low-power embedded processors and near-threshold voltage computing (NTVC) have been proposed as viable solutions to tackle the power wall in future computing systems. Unfortunately, these technologies may also compromise per-core performance and, in the case of NTVC, reliability. These limitations would make them unsuitable for HPC systems and datacenters. In order to demonstrate that emerging low-power processing technologies can effectively replace conventional technologies, this study relies on ARM’s big.LITTLE processors as both an actual and an emulation platform, together with state-of-the-art implementations of the CG solver. For NTVC in particular, the paper describes how efficient algorithm-based fault tolerance schemes preserve the power and energy benefits of very low voltage operation.
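As a hedged illustration of what algorithm-based fault tolerance can look like for CG (not necessarily the scheme used in the paper), one inexpensive check compares the recurrence-maintained residual against the true residual b - Ax at regular intervals; a silent error makes the two diverge and triggers a cheap recovery.

```python
import numpy as np

def cg_with_abft_check(A, b, tol=1e-8, max_iter=1000, check_every=10, drift_tol=1e-6):
    """CG with a periodic ABFT-style consistency check (illustrative sketch)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    bnorm = np.linalg.norm(b)
    for k in range(1, max_iter + 1):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if k % check_every == 0:
            true_r = b - A @ x          # independent recomputation of the residual
            if np.linalg.norm(true_r - r) > drift_tol * bnorm:
                # Fault detected: restore a consistent state and restart the direction.
                r = true_r
                p = r.copy()
                rr = r @ r
                continue
        rr_new = r @ r
        if np.sqrt(rr_new) < tol * bnorm:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x
```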
Abstract:
Methane-derived authigenic carbonate (MDAC) mound features at the Codling Fault Zone (CFZ), located in shallow waters (50-120 m) of the western Irish Sea, were investigated and provide a comparison to deep-sea MDAC settings. Carbonates consisted of aragonite as the major mineral phase, with δ13C depletion to -50‰ and δ18O enrichment to ~2‰. These isotope signatures, together with the co-precipitation of framboidal pyrite, confirm that anaerobic oxidation of methane (AOM) is an important process mediating methane release to the water column and the atmosphere in this region. The 18O enrichment could be a result of MDAC precipitation with seawater in colder-than-present-day conditions, or precipitation with 18O-enriched water transported from deep petroleum sources. The 13C depletion of bulk carbonate and sampled gas (-70‰) suggests a biogenic source, but significant mixing of thermogenic gas and depletion of the original isotope signature cannot be ruled out. Active seepage was recorded from one mound and, together with extensive areas of reduced sediment, confirms that seepage is ongoing. The mounds appear to be composed of stacked pavements that are largely covered by sand and extensively eroded. The CFZ mounds are colonized by abundant Sabellaria polychaetes and possible Nemertesia hydroids, which benefit indirectly from the available hard substrate. In contrast to deep-sea MDAC settings, where seep-related macrofauna are commonly reported, seep-specialist fauna appear to be lacking at the CFZ. In addition, unlike MDAC in deep waters, where organic carbon input from photosynthesis is limited, lipid biomarkers and isotope signatures related to marine planktonic production (e.g. sterols, alkanols) were most abundant. Evidence for microbes involved in AOM was limited in the samples taken, possibly due to this dilution effect from organic matter derived from the photic zone, and will require further investigation.
Abstract:
Electric vehicles (EVs) and hybrid electric vehicles (HEVs) can reduce greenhouse gas emissions, and the switched reluctance motor (SRM) is one of the most promising motors for such applications. This paper presents a novel SRM fault-diagnosis and fault-tolerance operation solution. Building on the traditional asymmetric half-bridge topology for SRM driving, central-tapped windings of the SRM in a modular half-bridge configuration are introduced to provide fault-diagnosis and fault-tolerance functions, which remain idle in normal conditions. Fault diagnosis is achieved by detecting the characteristics of the excitation and demagnetization currents. An SRM fault-tolerance operation strategy is also realized by the proposed topology, which compensates for the missing phase torque under an open-circuit fault and reduces the unbalanced phase current caused by the uncontrolled faulty phase under a short-circuit fault. Furthermore, the current-sensor placement strategy is discussed, yielding two placement methods aimed at low cost or a modular structure. Simulation results in MATLAB/Simulink and experiments on a 750-W SRM validate the effectiveness of the proposed strategy, which may have significant implications for improving the reliability of EVs/HEVs.
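As a rough illustration of the current-signature idea (thresholds and names below are hypothetical, not taken from the paper): an open-circuit fault appears as a phase that draws essentially no current while it should be excited, whereas a short-circuit fault appears as an abnormally large, uncontrolled current.

```python
import numpy as np

def diagnose_phase(i_phase, excited, i_rated, open_frac=0.05, short_frac=1.5):
    """Classify one SRM phase from its sampled current (illustrative sketch).

    i_phase : phase-current samples over one electrical cycle [A]
    excited : boolean array, True where the phase should be conducting
    i_rated : rated phase current [A]; open_frac / short_frac are hypothetical thresholds
    """
    i_on = np.abs(i_phase[excited])
    if i_on.size and i_on.max() < open_frac * i_rated:
        return "open-circuit fault"       # excited, yet almost no current flows
    if np.abs(i_phase).max() > short_frac * i_rated:
        return "short-circuit fault"      # uncontrolled, excessive current
    return "healthy"
```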
Abstract:
Background: Large-scale biological jobs on high-performance computing systems require manual intervention if one or more of the computing cores on which they execute fail. This imposes not only the cost of maintaining the job, but also the cost of the time taken to reinstate the job and the risk of losing data and work accomplished by the job before it failed. Approaches that can proactively detect computing-core failures and take action to relocate a core's job onto reliable cores can make a significant step towards automating fault tolerance. Method: This paper describes an experimental investigation into the use of multi-agent approaches for fault tolerance. Two approaches are studied, the first at the job level and the second at the core level. The approaches are investigated for single-core failure scenarios that can occur in the execution of parallel reduction algorithms on computer clusters. A third approach is proposed that incorporates multi-agent technology at both the job and core level. Experiments are pursued in the context of genome searching, a popular computational biology application. Result: The key conclusion is that the proposed approaches are feasible for automating fault tolerance in high-performance computing systems with minimal human intervention. In a typical experiment in which fault tolerance is studied, centralised and decentralised checkpointing approaches add on average 90% to the actual job execution time, whereas the multi-agent approaches add only 10% to the overall execution time.
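A minimal sketch of the core-level idea, with hypothetical classes and a hypothetical failure indicator rather than the authors' implementation: an agent guards one core and relocates the job to a spare core when failure is predicted.

```python
class CoreAgent:
    """Illustrative agent guarding one computing core (hypothetical sketch)."""

    def __init__(self, core_id, temp_limit=85.0):
        self.core_id = core_id
        self.temp_limit = temp_limit   # hypothetical proactive-failure indicator

    def failure_predicted(self, temperature):
        return temperature > self.temp_limit

    def relocate(self, job, spare_cores):
        """Move the job onto a reliable spare core before the failure occurs."""
        target = spare_cores.pop(0)
        job["core"] = target
        return target

# Usage sketch: one agent per core, polled by a job-level agent.
job = {"name": "genome_search", "core": 3}
agent = CoreAgent(core_id=3)
if agent.failure_predicted(temperature=91.0):
    new_core = agent.relocate(job, spare_cores=[7, 9])
```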
Abstract:
With the maturation of strategic human resource management scholarship, there appears to be a greater call to move from monolithic workforce management to a more strategic and differentiated emphasis on employees with the greatest capacity to enhance competitive advantage. There has been little consideration in the literature as to whether organizations formally identify key groups of employees based on their impact on organizational learning and core competences. Using survey evidence from 260 multinational companies (MNCs), this paper explores the extent to which key groups of employees are formally recognized and whether they are subject to differential compensation practices. The results demonstrate that just over half of these MNCs identify a key group. There was considerable differentiation in compensation practices between these key groups, managers and the largest occupational group in the workforce. The results give rise to questions worthy of future investigation, namely whether the differentiated approaches used lead to improved performance outcomes.
Abstract:
In this paper, we consider the uplink of a single-cell multi-user single-input multiple-output (MU-SIMO) system with in-phase and quadrature-phase imbalance (IQI). In particular, we investigate the effect of receive (RX) IQI on the performance of MU-SIMO systems with large antenna arrays employing maximum-ratio combining (MRC) receivers. In order to study how IQI affects channel estimation, we derive a new channel estimator for the IQI-impaired model and show that the higher the signal-to-noise ratio (SNR), the greater the impact of IQI on the spectral efficiency (SE). Moreover, a novel pilot-based joint estimator of the augmented MIMO channel matrix and the IQI coefficients is described, and a low-complexity IQI compensation scheme is then proposed that is based on the estimated IQI coefficients and is independent of the channel gain. The performance of the proposed compensation scheme is analytically evaluated by deriving a tractable approximation of the ergodic SE, assuming transmission over Rayleigh fading channels with large-scale fading. Furthermore, we investigate how many MSs should be scheduled in massive multiple-input multiple-output (MIMO) systems with IQI and show that the highest SE loss occurs at the optimal operating point. Finally, by deriving asymptotic power scaling laws and proving that the SE loss due to IQI is asymptotically independent of the number of BS antennas, we show that massive MIMO is resilient to the effect of RX IQI.
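The compensation principle can be made concrete as follows: RX IQI maps the ideal baseband signal x to y = K1·x + K2·conj(x), and once K1 and K2 are estimated, x can be recovered exactly and independently of the channel gain. The sketch below uses synthetic imbalance values and assumes perfect coefficient estimates; it is not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# RX IQ imbalance model: y = K1*x + K2*conj(x), with
# K1 = (1 + g*exp(-1j*phi))/2 and K2 = (1 - g*exp(1j*phi))/2
g, phi = 1.05, np.deg2rad(3.0)            # synthetic gain and phase imbalance
K1 = 0.5 * (1 + g * np.exp(-1j * phi))
K2 = 0.5 * (1 - g * np.exp(1j * phi))

x = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
y = K1 * x + K2 * np.conj(x)              # IQI-impaired observation

# Compensation from the (estimated) coefficients: exact and channel-independent.
x_hat = (np.conj(K1) * y - K2 * np.conj(y)) / (np.abs(K1)**2 - np.abs(K2)**2)
print(np.max(np.abs(x_hat - x)))          # ~ numerical precision
```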
Abstract:
This paper presents the application of the on-load exciting current Extended Park's Vector Approach for diagnosing incipient turn-to-turn winding faults in operating power transformers. Experimental and simulated test results demonstrate the effectiveness of the proposed technique, which is based on the spectral analysis of the AC component of the on-load exciting current Park's Vector modulus.
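For reference, the quantities inspected by the EPVA, namely the Park's Vector of the three exciting currents and the spectrum of the AC component of its modulus, can be computed as in the following illustrative sketch.

```python
import numpy as np

def epva_spectrum(i_a, i_b, i_c, fs):
    """Extended Park's Vector Approach quantities (illustrative sketch).

    i_a, i_b, i_c : sampled on-load exciting currents of the three phases
    fs            : sampling frequency [Hz]
    Returns the frequency axis and amplitude spectrum of the AC component
    of the Park's Vector modulus.
    """
    # Park's Vector components (standard transformation)
    i_d = np.sqrt(2/3) * i_a - (1/np.sqrt(6)) * i_b - (1/np.sqrt(6)) * i_c
    i_q = (1/np.sqrt(2)) * i_b - (1/np.sqrt(2)) * i_c
    modulus = np.sqrt(i_d**2 + i_q**2)

    ac = modulus - modulus.mean()                 # keep only the AC component
    spectrum = np.abs(np.fft.rfft(ac)) / len(ac)  # fault signatures show up here
    freqs = np.fft.rfftfreq(len(ac), d=1/fs)
    return freqs, spectrum
```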
Abstract:
This work addresses the joint compensation of IQ imbalances and carrier phase synchronization errors in zero-IF receivers. The compensation scheme is based on blind source separation, which provides a simple yet potent means to jointly compensate for these errors independently of the modulation format and constellation size used. The low complexity of the algorithm makes it a suitable option for real-time deployment as well as practical for integration into monolithic receiver designs.
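The flavour of such a blind scheme can be sketched with second-order statistics alone: for a properly balanced (circular) signal, the I and Q branches are uncorrelated and of equal power, so whitening the 2×1 real observation restores gain and phase balance blindly, leaving only a common rotation (the carrier phase term) for a subsequent BSS contrast or decision stage. The generic sketch below is not the paper's algorithm.

```python
import numpy as np

def blind_iq_rebalance(y):
    """Blind gain/phase rebalancing of I/Q via whitening (generic illustration).

    y : complex baseband samples impaired by IQ imbalance.
    Assumes the ideal signal is circular (I, Q uncorrelated, equal power).
    A residual common rotation (carrier phase) is left for a later stage.
    """
    z = np.vstack([y.real, y.imag])              # 2 x N real observation
    z = z - z.mean(axis=1, keepdims=True)        # remove any DC offset
    C = z @ z.T / z.shape[1]                     # sample covariance of [I; Q]
    d, V = np.linalg.eigh(C)
    W = V @ np.diag(1.0 / np.sqrt(d)) @ V.T      # symmetric whitening, C^{-1/2}
    z_bal = W @ z                                # uncorrelated, equal-power branches
    return z_bal[0] + 1j * z_bal[1]
```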
Abstract:
I and Q channel phase and gain mismatches are of great concern in communications receiver design. In this paper we analyse the effects of I and Q channel mismatches and propose a low-complexity blind adaptive algorithm to mitigate this problem. The proposed solution consists of two 2-tap adaptive filters arranged in an Adaptive Noise Canceller (ANC) set-up, with the output of one cross-fed to the input of the other. The system works as a de-correlator, eliminating I and Q mismatch errors.
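The de-correlation principle can be illustrated with a heavily simplified, single-sided variant (one LMS weight instead of the proposed two cross-coupled 2-tap filters): the Q branch adaptively subtracts whatever is correlated with the I branch, i.e. the phase-mismatch leakage, and a power normalization then equalizes the gains. This is a sketch of the principle only, not the proposed ANC structure.

```python
import numpy as np

def lms_iq_decorrelator(i_branch, q_branch, mu=0.01):
    """Single-weight LMS de-correlator for I/Q mismatch (simplified illustration).

    Adaptively removes the component of Q that is correlated with I
    (phase-mismatch leakage), then rescales Q to match the power of I.
    """
    w = 0.0
    q_clean = np.empty_like(q_branch)
    for n in range(len(i_branch)):
        q_clean[n] = q_branch[n] - w * i_branch[n]   # cancel the correlated part
        w += mu * q_clean[n] * i_branch[n]           # LMS weight update
    # Gain equalization after de-correlation
    q_clean *= np.sqrt(np.mean(i_branch**2) / np.mean(q_clean**2))
    return i_branch, q_clean
```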
Abstract:
Global navigation satellite system (GNSS) receivers require solutions that are compact, cheap and low-power in order to enable their widespread proliferation into consumer products. Furthermore, the interoperability of GNSS with non-navigation systems, especially communication systems, will gain importance in providing value-added services in a variety of sectors, offering seamless quality of service for users. An important step into the market for Galileo is the timely availability of these hybrid multi-mode terminals for consumer applications. However, receiver architectures that are amenable to high levels of integration will inevitably suffer from RF impairments, hindering their widespread use in commercial products. This paper studies and presents analytical evaluations of the performance degradation due to these RF impairments and develops algorithms that can compensate for them in the DSP domain at baseband with reduced hardware overhead, hence paving the way for low-power, highly integrated multi-mode GNSS receivers.
Abstract:
This paper presents the compensation of all undesired effects (Power Amplifier (PA) nonlinearity, transmitter and receiver antenna crosstalk, before-PA nonlinear crosstalk, and Multiple Input Multiple Output (MIMO) channel fading and crosstalk) in MIMO Orthogonal Frequency Division Multiplex (OFDM) wireless systems. It is demonstrated that a reduced-complexity Crossover Digital Predistortion (CO-DPD) algorithm on the transmitter side and a matrix-inversion algorithm on the receiver side can suppress almost all undesired effects introduced by the transmitter, channel and receiver in a 4×4 MIMO OFDM system suitable for modern wireless applications. A significant complexity reduction is achieved because the Digital Signal Processing (DSP) in the CO-DPD stage on the transmitter side is performed with real instead of complex numbers.
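On the receiver side, the matrix-inversion step amounts to per-subcarrier zero-forcing equalization of the 4×4 MIMO channel. The hedged sketch below assumes ideal channel knowledge and omits noise and the predistortion stage.

```python
import numpy as np

rng = np.random.default_rng(1)
n_tx, n_rx, n_sub = 4, 4, 64                       # 4x4 MIMO, 64 OFDM subcarriers

# Frequency-domain channel matrices H[k] and transmitted symbols X[k] (synthetic)
H = (rng.standard_normal((n_sub, n_rx, n_tx)) +
     1j * rng.standard_normal((n_sub, n_rx, n_tx))) / np.sqrt(2)
X = np.sign(rng.standard_normal((n_sub, n_tx))) + \
    1j * np.sign(rng.standard_normal((n_sub, n_tx)))   # QPSK-like symbols

Y = np.einsum('kij,kj->ki', H, X)                  # received signal per subcarrier

# Matrix-inversion (zero-forcing) recovery: X_hat[k] = H[k]^{-1} Y[k]
X_hat = np.einsum('kij,kj->ki', np.linalg.inv(H), Y)
print(np.max(np.abs(X_hat - X)))                   # ~ numerical precision without noise
```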