17 results for Unreliable Equipment

in QUB Research Portal - Research Directory and Institutional Repository for Queen's University Belfast


Relevance: 20.00%

Abstract:

A Design of Experiments (DoE) analysis was undertaken to generate a list of configurations for CFD numerical simulation of an aircraft crown compartment. Fitted regression models were built to predict the convective heat transfer coefficients of thermally sensitive dissipating elements located inside this compartment, namely the SEPDC and the Route G. These are currently positioned close to the fuselage, and it is of interest to optimise their heat transfer for reliability and performance purposes. Their locations and the external fuselage surface temperature were selected as input variables for the DoE. The models fit the CFD data with coefficient-of-determination values ranging from 0.878 to 0.978, and predict that heat transfer is optimal when the elements are positioned as close to the crown floor as possible, where they come into direct contact with the air flow from the cabin ventilation system, and as close to the centreline as possible. The methodology employed allows aircraft thermal designers to optimise equipment placement in confined areas of an aircraft during the design phase. The resulting models should be incorporated into global aircraft numerical models to improve accuracy and reduce model size and computational time.
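To make the methodology concrete, below is a minimal sketch of fitting a second-order response-surface regression to a DoE table of CFD samples and querying it for a candidate placement. The input factors, sample values and the scikit-learn pipeline are illustrative assumptions, not the paper's actual data or tooling.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Hypothetical DoE table: rows = CFD runs, columns = input factors
# (height above crown floor [m], lateral offset from centreline [m],
# external fuselage surface temperature [K]). Values are illustrative.
X = np.array([
    [0.05, 0.00, 310.0],
    [0.05, 0.30, 330.0],
    [0.20, 0.00, 330.0],
    [0.20, 0.30, 310.0],
    [0.12, 0.15, 320.0],   # centre point
])
# Convective heat transfer coefficient from each CFD run (illustrative).
h = np.array([14.2, 12.8, 9.1, 8.3, 10.9])

# Second-order response surface, a typical choice for DoE regression models.
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, h)
print("R^2 on the DoE points:", model.score(X, h))

# Query the fitted surface at a candidate equipment placement.
print("Predicted h:", model.predict([[0.05, 0.05, 320.0]])[0])
```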

Relevance: 20.00%

Abstract:

In this paper, we propose a design paradigm for energy-efficient and variation-aware operation of next-generation multicore heterogeneous platforms. The main idea behind the proposed approach lies in the observation that not all operations are equally important in shaping the output quality of various applications and of the overall system. Based on this observation, we suggest that all levels of the software design stack, including the programming model, compiler, operating system (OS) and run-time system, should identify the critical tasks and ensure their correct operation by assigning them to dynamically adjusted reliable cores/units. Specifically, based on error rates and operating conditions identified by a sense-and-adapt (SeA) unit, the OS selects and sets the right mode of operation for the overall system. The run-time system identifies critical and less-critical tasks based on special directives and schedules them to the appropriate units, which are dynamically adjusted for highly accurate or approximate operation by tuning their voltage/frequency. Units that execute less significant operations can, if required, operate at voltages below those needed for fully correct operation and thus consume less power, since such tasks, unlike the critical ones, do not need to be always exact. Such a scheme can lead to energy-efficient and reliable operation, while reducing the design cost and overheads of conventional circuit/micro-architecture level techniques.
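A minimal sketch of the criticality-aware scheduling idea follows, assuming tasks carry a programmer-supplied criticality flag and cores are split into reliable (nominal voltage/frequency) and approximate (scaled-down) pools. All names and the simple round-robin policy are illustrative, not the paper's actual run-time system.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    critical: bool   # set via a programmer directive in the real proposal

@dataclass
class Core:
    name: str
    reliable: bool   # True: nominal voltage/frequency; False: scaled-down

def schedule(tasks, cores):
    """Map critical tasks to reliable cores and the rest to approximate
    ones, falling back to reliable cores if no approximate pool exists."""
    reliable = [c for c in cores if c.reliable]
    approx = [c for c in cores if not c.reliable]
    placement = {}
    for i, t in enumerate(tasks):
        pool = reliable if (t.critical or not approx) else approx
        placement[t.name] = pool[i % len(pool)].name  # simple round-robin
    return placement

tasks = [Task("fft_control", True), Task("pixel_filter", False),
         Task("checksum", True), Task("frame_blur", False)]
cores = [Core("big0", True), Core("little0", False), Core("little1", False)]
print(schedule(tasks, cores))
```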

Relevance: 20.00%

Abstract:

In this paper, we investigate the impact of circuit misbehavior due to parametric variations and voltage scaling on the performance of wireless communication systems. Our study reveals the inherent error resilience of such systems and argues that sufficiently reliable operation can be maintained even in the presence of unreliable circuits and manufacturing defects. We further show how selective application of more robust circuit design techniques is sufficient to deal with high defect rates at low overhead and improve energy efficiency with negligible system performance degradation.
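The flavour of such a study can be sketched by injecting random bit flips into the quantized samples of a toy BPSK link and observing how gracefully the bit-error rate degrades. The 8-bit sample format, the independent bit-flip model and all parameters below are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def flip_bits(samples_q, p_flip, width=8):
    """Flip each bit of the quantized samples with probability p_flip,
    emulating timing/voltage-induced datapath errors."""
    masks = rng.random((samples_q.size, width)) < p_flip
    flips = (masks * (1 << np.arange(width))).sum(axis=1).astype(samples_q.dtype)
    return samples_q ^ flips

bits = rng.integers(0, 2, 100_000)
tx = 2.0 * bits - 1.0                          # BPSK mapping
rx = tx + rng.normal(0, 0.5, bits.size)        # AWGN channel
q = np.clip(np.round(rx * 32) + 128, 0, 255).astype(np.uint8)  # 8-bit samples

for p in (0.0, 1e-4, 1e-3, 1e-2):
    noisy = flip_bits(q, p)
    decided = (noisy.astype(int) - 128) > 0    # sign decision
    print(f"p_flip={p:g}  BER={np.mean(decided != bits):.4f}")
```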

Relevance: 20.00%

Abstract:

Future digital signal processing (DSP) systems must provide robustness at the algorithm and application level against the reliability issues that accompany implementation in modern semiconductor process technologies. In this paper, we address this issue by investigating the impact of unreliable memories on general DSP systems. In particular, we propose a novel framework to characterize the effects of unreliable memories, which enables us to devise novel methods to mitigate the associated performance loss. We propose to deploy specifically designed data representations, which can substantially improve system reliability compared to the conventional data representations used in digital integrated circuits, such as 2's-complement or sign-magnitude number formats. To demonstrate the efficacy of the proposed framework, we analyze the impact of unreliable memories on coded communication systems and show that the deployment of optimized data representations substantially improves the error-rate performance of such systems.
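To illustrate why the choice of data representation matters under memory faults, the toy comparison below measures the mean squared error caused by random bit flips when a zero-mean signal is stored in 2's-complement versus sign-magnitude format. The bit-flip model and signal statistics are assumptions; the paper's optimized representations go beyond these two conventional formats.

```python
import numpy as np

rng = np.random.default_rng(1)

def to_twos(x):    return (x & 0xFF).astype(np.uint8)
def from_twos(b):  return b.astype(np.int8).astype(int)

def to_signmag(x):
    return np.where(x < 0, 0x80 | (-x), x).astype(np.uint8)
def from_signmag(b):
    mag = (b & 0x7F).astype(int)
    return np.where(b & 0x80, -mag, mag)

def mse_under_flips(x, enc, dec, p_flip, trials=20):
    """Average squared error after independent bit flips in storage."""
    errs = []
    for _ in range(trials):
        stored = enc(x)
        masks = rng.random((x.size, 8)) < p_flip
        flips = (masks * (1 << np.arange(8))).sum(axis=1).astype(np.uint8)
        errs.append(np.mean((dec(stored ^ flips) - x) ** 2))
    return np.mean(errs)

# Zero-mean, small-magnitude signal (typical of DSP state variables).
x = np.clip(np.round(rng.laplace(0, 8, 50_000)), -100, 100).astype(int)
for name, enc, dec in [("two's complement", to_twos, from_twos),
                       ("sign-magnitude", to_signmag, from_signmag)]:
    print(f"{name:17s} MSE @ p=1e-3: {mse_under_flips(x, enc, dec, 1e-3):.2f}")
```

For near-zero values, flipping the sign bit of a sign-magnitude word causes only a small error, whereas small negative 2's-complement words have many significant bits set, so the same flip rate produces much larger distortion.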

Relevance: 20.00%

Abstract:

Simulation of disorders of respiratory mechanics shown by spirometry provides insight into the pathophysiology of disease, but some clinically important disorders have not been simulated and none have been formally evaluated for education. We have designed simple mechanical devices which, along with existing simulators, enable all the main dysfunctions with diagnostic value in spirometry to be simulated and clearly explained with visual and haptic feedback. We modelled the airways as Starling resistors, using a clearly visible mechanical action to simulate intra- and extra-thoracic obstruction. A narrow tube was used to simulate fixed large-airway obstruction and inelastic bands to simulate restriction. We hypothesized that using simulators whose action explains disease promotes learning, especially in the higher-domain educational objectives. The main features of obstruction and restriction were correctly simulated. Simulation of variable extra-thoracic obstruction caused blunting and plateauing of inspiratory flow, and simulation of intra-thoracic obstruction caused limitation of expiratory flow with marked dynamic compression. Multiple-choice tests were created with questions allocated to lower (remember and understand) or higher cognitive domains (apply, analyse and evaluate). In a cross-over design, overall mean scores increased after 1½ h of simulation spirometry (43-68%, effect size 1.06, P < 0.0001). In the higher cognitive domains the mean score was lower beforehand and increased more than in the lower domains (Δ 30% vs 20%, higher vs lower effect size 0.22, P < 0.05). In conclusion, the devices successfully simulate various patterns of obstruction and restriction, and using them medical students achieved marked enhancement of learning, especially in the higher cognitive domains.
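The Starling-resistor behaviour underlying the devices can be captured by the classic "waterfall" model, sketched below under the simplifying assumption of a single linear flow resistance; the function and the pressure values are illustrative, not a model taken from the paper.

```python
def starling_flow(p_up, p_down, p_ext, resistance):
    """Flow through a collapsible (Starling) segment.

    When external pressure exceeds downstream pressure, the segment
    collapses and flow becomes independent of downstream pressure
    (flow limitation) -- the mechanism behind the expiratory and
    inspiratory flow plateaus seen in spirometry.
    """
    if p_ext <= p_down:                 # tube fully open
        return (p_up - p_down) / resistance
    if p_ext < p_up:                    # partially collapsed: flow-limited
        return (p_up - p_ext) / resistance
    return 0.0                          # collapsed shut

# Forced expiration with intra-thoracic obstruction (pressures in cmH2O):
# external (pleural) pressure rises with effort, so beyond a point,
# pushing harder (higher p_up) no longer increases flow.
for p_up in (10, 20, 40, 80):
    print(p_up, starling_flow(p_up, p_down=0, p_ext=p_up - 15, resistance=1.0))
```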

Relevance: 20.00%

Abstract:

Embedded memories account for a large fraction of the overall silicon area and power consumption in modern SoCs. While embedded memories are typically realized with SRAM, alternative solutions such as embedded dynamic memories (eDRAM) can provide higher density and/or reduced power consumption. One major challenge that impedes the widespread adoption of eDRAM is that it requires frequent refreshes, which can reduce the availability of the memory in periods of high activity and consume a significant amount of power. Reducing the refresh rate can lower this power overhead, but if refreshes are not performed in a timely manner, some cells may lose their content, resulting in memory errors. In this paper, we consider extending the refresh period of gain-cell based dynamic memories beyond the worst-case point of failure, assuming that the resulting errors can be tolerated when the use cases lie in the domain of inherently error-resilient applications. For example, we observe that for various data mining applications, a large number of memory failures can be accepted with tolerable imprecision in output quality. In particular, our results indicate that by allowing as many as 177 errors in a 16 kB memory, the maximum loss in output quality is 11%. We use this failure limit to study the impact of relaxing reliability constraints on memory availability and retention power for different technologies.
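A minimal sketch of the refresh-relaxation trade-off follows: given a hypothetical per-cell retention-time distribution, it finds the longest refresh period that keeps the number of failing cells within the tolerated error budget. The log-normal retention model and its parameters are assumptions for illustration; only the 177-error budget comes from the abstract.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-cell retention times; a log-normal spread is a common
# modelling assumption for gain-cell eDRAM, parameters illustrative only.
n_cells = 16 * 1024 * 8                       # 16 kB of bit-cells
retention_us = rng.lognormal(mean=np.log(400.0), sigma=0.5, size=n_cells)

def failures(refresh_period_us):
    """Cells whose retention time is shorter than the refresh period
    lose their content before being refreshed."""
    return int(np.sum(retention_us < refresh_period_us))

budget = 177                                   # tolerated errors (from the paper)
worst_case = retention_us.min()                # conventional refresh target
periods = np.linspace(worst_case, 400.0, 1000)
relaxed = max(p for p in periods if failures(p) <= budget)
print(f"worst-case period: {worst_case:.1f} us, relaxed: {relaxed:.1f} us "
      f"({relaxed / worst_case:.1f}x fewer refreshes)")
```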

Relevance: 20.00%

Abstract:

In this paper, we introduce a statistical data-correction framework that aims at improving DSP system performance in the presence of unreliable memories. The proposed signal-processing framework implements best-effort error mitigation for signals that are corrupted by defects in unreliable storage arrays, using a statistical correction function extracted from the signal statistics, a data-corruption model, and an application-specific cost function. An application example from communication systems demonstrates the efficacy of the proposed approach.
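As a sketch of how such a correction function can be built, the example below combines an assumed signal prior, an independent bit-flip corruption model and a squared-error cost, yielding a lookup table that maps each read word to its posterior-mean estimate. All distributions and parameters are illustrative assumptions, not the paper's.

```python
import numpy as np

# Corruption model: each of the 4 stored bits flips independently with p.
p, width = 0.05, 4
values = np.arange(2 ** width)                 # codewords 0..15

# Assumed signal prior: small values are more likely.
prior = 0.6 ** values
prior /= prior.sum()

def likelihood(r, s):
    """P(read r | stored s) under the independent bit-flip model."""
    d = bin(r ^ s).count("1")                  # Hamming distance
    return (p ** d) * ((1 - p) ** (width - d))

# Correction table: for each read word, output the posterior mean,
# i.e. the minimizer of the expected squared-error cost.
correction = np.empty(2 ** width)
for r in values:
    post = np.array([likelihood(r, s) * prior[s] for s in values])
    post /= post.sum()
    correction[r] = np.dot(post, values)

for r in (0, 8, 15):
    print(f"read {r:2d} -> corrected estimate {correction[r]:.2f}")
```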

Relevance: 20.00%

Abstract:

In this paper, we investigate the impact of faulty memory bit-cells on the performance of LDPC and Turbo channel decoders, based on realistic memory failure models. Our study examines the inherent resilience of such codes to potential memory faults affecting the decoding process. We develop two mitigation mechanisms that reduce the impact of memory faults rather than correcting every single error. We show how protection of only a few bit-cells is sufficient to deal with high defect rates. In addition, we show how the use of repair iterations specifically helps to mitigate the impact of faults that occur inside the decoder itself.
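A minimal sketch of the selective-protection idea, assuming decoder messages are held as raw memory words and only the most significant bit positions are implemented in hardened cells; the word width, fault rate and fault model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
WIDTH = 6                                      # bits per stored message (illustrative)

def inject_faults(words, p_fail, protected_bits=()):
    """Flip each bit with probability p_fail, except positions that
    are implemented with hardened (protected) cells."""
    out = words.copy()
    for b in range(WIDTH):
        if b in protected_bits:
            continue
        flips = rng.random(words.size) < p_fail
        out[flips] ^= (1 << b)
    return out

words = rng.integers(0, 2 ** WIDTH, 100_000)   # raw memory words

for protected in [(), (WIDTH - 1,), (WIDTH - 1, WIDTH - 2)]:
    faulty = inject_faults(words, p_fail=1e-2, protected_bits=protected)
    err = np.mean(np.abs(faulty - words))
    print(f"protected bits {protected}: mean absolute perturbation {err:.3f}")
```

Hardening just the top one or two bit positions removes the largest-magnitude perturbations, which is the intuition behind protecting only a few bit-cells.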

Relevance: 20.00%

Abstract:

Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error-correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories that meets a target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations of lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables quality to be traded off against power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area when applied to various data mining applications in a 28nm process technology.
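A minimal sketch of the bit-shuffling idea, assuming one known faulty (stuck-at-0) column per word and a per-word rotation that places the data LSB on the faulty cell; the fault model and word width are illustrative assumptions.

```python
WIDTH = 8

def rotl(x, k, w=WIDTH):
    return ((x << k) | (x >> (w - k))) & ((1 << w) - 1)

def rotr(x, k, w=WIDTH):
    return rotl(x, w - k, w)

def write_word(value, faulty_bit):
    """Rotate the word so its LSB lands on the faulty physical cell,
    then model the fault as stuck-at-0 at that position."""
    stored = rotl(value, faulty_bit)           # data LSB -> faulty column
    stored &= ~(1 << faulty_bit)               # stuck-at-0 fault
    return stored

def read_word(stored, faulty_bit):
    return rotr(stored, faulty_bit)            # undo the shuffle

value, faulty_bit = 0b10110101, 5
plain = value & ~(1 << faulty_bit)             # no shuffling: bit 5 corrupted
shuffled = read_word(write_word(value, faulty_bit), faulty_bit)
print(f"original {value}, unshuffled read {plain} (err {abs(value - plain)}), "
      f"shuffled read {shuffled} (err {abs(value - shuffled)})")
```

With shuffling, the fault always corrupts the least significant bit of the data, bounding the error magnitude at 1 instead of 2^faulty_bit.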