881 results for Soft errors
Abstract:
With technology scaling, vulnerability to soft errors in random logic is increasing. There is a need for on-line error detection and protection for logic gates even at sea level. The error checker is the key element of an on-line detection mechanism. We compare three different checkers for error detection from the point of view of area, power and false error detection rates. We find that the double sampling checker (used in Razor) is the simplest and most area- and power-efficient, but suffers from very high false detection rates of 1.15 times the actual error rates. We also find that the alternative approaches of triple sampling and the integrate-and-sample (I&S) method can be designed to have zero false detection rates, but at increased area, power and implementation complexity. The triple sampling method has about 1.74 times the area and twice the power of the double sampling method and also needs a complex clock generation scheme. The I&S method needs about 16% more power with 0.58 times the area of double sampling, but comes with more stringent implementation constraints, as it requires detection of small voltage swings.
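A minimal behavioural sketch in C (illustrative only, using a toy waveform; not the circuits evaluated in the paper) shows where the double sampling checker's false detections come from: the same node is sampled at the clock edge and again a short delay later, and any transition inside that window, whether a radiation-induced glitch or a legitimate late data change, raises the error flag.

```c
/*
 * Hypothetical behavioural model of a double-sampling (Razor-style) checker.
 * The observed node is sampled at the clock edge and again after a small
 * delay; a mismatch raises the error flag.  A legitimate data transition
 * inside the sampling window is indistinguishable from a glitch, which is
 * the source of the false detections discussed in the abstract.
 */
#include <stdbool.h>
#include <stdio.h>

/* value of the observed logic node at a given time step (toy waveform) */
static bool node_value(int t)
{
    if (t == 7)          /* transient glitch at t = 7 (soft error)   */
        return true;
    return t >= 12;      /* legitimate data transition at t = 12     */
}

static bool double_sampling_check(int clk_edge, int shadow_delay)
{
    bool main_sample   = node_value(clk_edge);
    bool shadow_sample = node_value(clk_edge + shadow_delay);
    return main_sample != shadow_sample;   /* true -> error flagged */
}

int main(void)
{
    const int shadow_delay = 2;
    for (int clk = 0; clk < 20; clk += 5) {
        bool err = double_sampling_check(clk, shadow_delay);
        printf("clock edge %2d: %s\n", clk,
               err ? "ERROR flagged" : "no error");
    }
    return 0;
}
```

In this toy run the flag at clock edge 5 corresponds to the injected glitch, while the flag at clock edge 10 is a false detection caused by the legitimate transition at t = 12; suppressing exactly this kind of false positive is what the triple sampling and I&S checkers are designed for.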
Abstract:
Commercial off-the-shelf microprocessors are the core of low-cost embedded systems due to their programmability and cost-effectiveness. Recent advances in electronic technologies have allowed remarkable improvements in their performance. However, they have also made microprocessors more susceptible to transient faults induced by radiation. These non-destructive events (soft errors) may cause a microprocessor to produce a wrong computation result or lose control of a system, with catastrophic consequences. Therefore, soft error mitigation has become a compulsory requirement for an increasing number of applications, which operate from space down to ground level. In this context, this paper uses the concept of selective hardening, which aims at designing reduced-overhead and flexible mitigation techniques. Following this concept, a novel flexible version of the software-based fault recovery technique known as SWIFT-R is proposed. Our approach makes it possible to select different register subsets from the microprocessor register file to be protected in software. Thus, the design space is enriched with a wide spectrum of new partially protected versions, which offer more flexibility to designers. This makes it possible to find the best trade-offs between performance, code size, and fault coverage. Three case studies have been developed to show the applicability and flexibility of the proposal.
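The selective-hardening idea behind this flexible SWIFT-R variant can be illustrated with a short C sketch (a generic software-TMR scheme, not the authors' implementation): only the variables mapped to the register subset chosen for protection are triplicated, and a bit-wise majority vote recovers their value before use.

```c
/*
 * Illustrative sketch of selective, SWIFT-R-style register protection in
 * software (not the authors' code).  A variable assigned to a "protected"
 * register is kept in three copies; a bit-wise majority vote recovers the
 * value before it is used, masking a single upset in any one copy.
 */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t a, b, c;            /* three redundant copies */
} tmr_u32;

static void tmr_write(tmr_u32 *r, uint32_t v) { r->a = r->b = r->c = v; }

static uint32_t tmr_read(tmr_u32 *r)
{
    uint32_t voted = (r->a & r->b) | (r->a & r->c) | (r->b & r->c);
    r->a = r->b = r->c = voted;  /* recovery: re-synchronise the copies */
    return voted;
}

int main(void)
{
    tmr_u32 acc;                 /* register selected for protection      */
    uint32_t loop_cnt = 0;       /* register left unprotected by choice   */

    tmr_write(&acc, 41);
    acc.b ^= 0x4u;               /* simulated single-bit upset in copy b  */

    loop_cnt++;
    printf("voted acc = %u, loop_cnt = %u\n",
           (unsigned)tmr_read(&acc), (unsigned)loop_cnt);
    return 0;
}
```

Choosing which variables get the `tmr_u32` treatment is the knob that trades fault coverage against code size and performance, which is the design-space exploration the abstract describes.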
Abstract:
Integrity assurance of configuration data has a significant impact on the reliability of microcontroller-based systems. This is especially true when running event-driven applications whose behavior is tightly coupled to this kind of data. This work proposes a new hybrid technique that combines hardware and software resources for detecting and recovering from soft errors in system configuration data. Our approach is based on the use of a common built-in microcontroller resource (a timer) that works jointly with a software-based technique responsible for periodically refreshing the configuration data. The experiments demonstrate that non-destructive single event effects can be effectively mitigated with reduced overheads. Results show an important increase in fault coverage for SEUs and SETs, of about one order of magnitude.
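A minimal sketch of the refresh mechanism, assuming simulated configuration registers and a hypothetical timer tick (none of the names come from the paper): a golden copy of each configuration word, kept in protected storage, is compared against the live register on every timer interrupt and written back if they differ, so an upset persists for at most one refresh period.

```c
/*
 * Sketch of a timer-driven configuration-data refresh.  The register names
 * and values are made up for illustration; plain variables stand in for
 * memory-mapped peripheral registers.
 */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* simulated configuration registers (memory-mapped on a real device) */
static volatile uint32_t uart_cfg = 0x0000002Bu;
static volatile uint32_t adc_cfg  = 0x00000113u;

typedef struct {
    volatile uint32_t *reg;     /* live configuration register          */
    uint32_t           golden;  /* known-good value in protected storage */
} cfg_entry_t;

static const cfg_entry_t cfg_table[] = {
    { &uart_cfg, 0x0000002Bu },
    { &adc_cfg,  0x00000113u },
};

/* body of the periodic timer ISR: detect and repair corrupted entries */
static void cfg_refresh_tick(void)
{
    for (size_t i = 0; i < sizeof cfg_table / sizeof cfg_table[0]; i++)
        if (*cfg_table[i].reg != cfg_table[i].golden)
            *cfg_table[i].reg = cfg_table[i].golden;
}

int main(void)
{
    adc_cfg ^= 0x40u;            /* simulated SEU in a configuration register */
    cfg_refresh_tick();          /* next timer tick detects and repairs it    */
    printf("adc_cfg = 0x%08X\n", (unsigned)adc_cfg);
    return 0;
}
```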
Abstract:
The implementation of semiconductor circuits and systems in nano-technology makes it possible to achieve higher speed, lower voltage levels and smaller area. The unintended and undesirable result of this scaling is that it makes integrated circuits susceptible to soft errors, normally caused by alpha particle or neutron hits. These radiation strikes result in bit upsets, referred to as single event upsets (SEUs), which are of increasing concern for reliable circuit operation in the field. Storage elements are the worst hit by this phenomenon. As we scale down further, there is greater interest in the reliability of circuits and systems, beyond the performance, power and area aspects. In this paper we propose an improved 12T SEU-tolerant SRAM cell design. The proposed SRAM cell is economical in terms of area overhead and is easy to fabricate compared to earlier designs. Simulation results show that the proposed cell is highly robust, as it does not flip even for a transient pulse with 62 times the critical charge (Qcrit) of a standard 6T SRAM cell.
Abstract:
The new generations of SRAM-based FPGA (field programmable gate array) devices are the preferred choice for the implementation of reconfigurable computing platforms intended to accelerate processing in real-time systems. However, the FPGA's vulnerability to hard and soft errors is a major weakness for robust configurable system design. In this paper, a novel built-in self-healing (BISH) methodology, based on run-time self-reconfiguration, is proposed. A soft microprocessor core implemented in the FPGA is responsible for the management and execution of all the BISH procedures. Fault detection and diagnosis is followed by repair actions, taking advantage of the dynamic reconfiguration features offered by new FPGA families. Meanwhile, modular redundancy ensures that the system keeps working correctly.
Abstract:
SRAM-based FPGAs are sensitive to radiation effects. Soft errors can appear and accumulate, potentially defeating mitigation strategies deployed at the application layer. Therefore, configuration memory scrubbing is required to improve the radiation tolerance of such FPGAs in space applications. Virtex FPGAs allow runtime scrubbing by means of dynamic partial reconfiguration. Even with scrubbing, intra-FPGA TMR systems are subject to common-mode errors affecting more than one design domain. This is solved in inter-FPGA TMR systems at the expense of higher cost, power and mass. In this context, a self-reference scrubber for device-level TMR systems based on Xilinx Virtex FPGAs is presented. This scrubber allows fast SEU/MBU detection and correction by peer frame comparison, without needing to access a golden configuration memory.
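A toy C model of the self-reference principle (plain arrays stand in for the Virtex configuration readback and frame write interfaces): the same frame is read from the three devices of the TMR set, a bit-wise majority frame is computed, and any deviating device is rewritten from it, which is why no golden configuration memory is needed.

```c
/*
 * Toy model of a self-reference scrubber for a device-level TMR set.
 * Plain arrays stand in for configuration frame readback; each frame is
 * read from the three FPGAs, a bit-wise majority frame is computed, and
 * any device that deviates is repaired from that majority.
 */
#include <stdint.h>
#include <stdio.h>

#define FRAME_WORDS 4   /* words per configuration frame (toy value) */

static void scrub_frame(uint32_t f0[], uint32_t f1[], uint32_t f2[])
{
    for (int w = 0; w < FRAME_WORDS; w++) {
        uint32_t maj = (f0[w] & f1[w]) | (f0[w] & f2[w]) | (f1[w] & f2[w]);
        if (f0[w] != maj) f0[w] = maj;   /* SEU/MBU correction by writeback */
        if (f1[w] != maj) f1[w] = maj;
        if (f2[w] != maj) f2[w] = maj;
    }
}

int main(void)
{
    uint32_t fpga_a[FRAME_WORDS] = { 0xDEADBEEFu, 0x01234567u, 0x89ABCDEFu, 0x0u };
    uint32_t fpga_b[FRAME_WORDS] = { 0xDEADBEEFu, 0x01234567u, 0x89ABCDEFu, 0x0u };
    uint32_t fpga_c[FRAME_WORDS] = { 0xDEADBEEFu, 0x01234567u, 0x89ABCDEFu, 0x0u };

    fpga_b[2] ^= 0x00030000u;    /* simulated multi-bit upset in one device */
    scrub_frame(fpga_a, fpga_b, fpga_c);
    printf("repaired word: 0x%08X\n", (unsigned)fpga_b[2]);
    return 0;
}
```

Because every device is compared against its two peers frame by frame, the faulty frame is both detected and repaired in the same pass.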
Abstract:
Software-based techniques offer several advantages to increase the reliability of processor-based systems at very low cost, but they cause performance degradation and an increase in code size. To meet constraints in performance and memory, we propose SETA, a new control-flow software-only technique that uses assertions to detect errors affecting the program flow. SETA is an independent technique, but it was conceived to work together with previously proposed data-flow techniques that aim at reducing performance and memory overheads. Thus, SETA is combined with such data-flow techniques and submitted to a fault injection campaign. Simulation and neutron-induced SEE tests show high fault coverage with performance and memory overheads lower than the state of the art.
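A compact C sketch of assertion-based control-flow checking (a generic predecessor-check scheme in the spirit of SETA, not its exact formulation): each basic block asserts on entry that the previously executed block is one of its legal predecessors, so an illegal change of flow is caught at the next block boundary.

```c
/*
 * Generic sketch of assertion-based control-flow checking (illustrative of
 * the idea behind SETA-like techniques, not the exact SETA algorithm).
 * Each basic block has an identifier; on entry it asserts that the block
 * executed before it is one of its legal predecessors, so an illegal jump
 * caused by a fault is detected.
 */
#include <stdio.h>
#include <stdlib.h>

static int prev_block = 0;                 /* runtime signature (last block) */

static void cfc_assert(int block, const int legal_preds[], int n)
{
    for (int i = 0; i < n; i++)
        if (prev_block == legal_preds[i]) {
            prev_block = block;            /* update signature and continue */
            return;
        }
    fprintf(stderr, "control-flow error entering block %d\n", block);
    exit(EXIT_FAILURE);                    /* hand over to an error handler */
}

int main(void)
{
    /* block 1: program entry, only legal predecessor is the start marker 0 */
    { const int preds[] = { 0 };    cfc_assert(1, preds, 1); }

    /* block 2: reachable from block 1 only */
    { const int preds[] = { 1 };    cfc_assert(2, preds, 1); }

    /* block 3: reachable from blocks 1 or 2 */
    { const int preds[] = { 1, 2 }; cfc_assert(3, preds, 2); }

    puts("program flow verified");
    return 0;
}
```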
Abstract:
On the one hand, this thesis attempts to develop and empirically test an ethically defensible theorization of the relationship between human resource management (HRM) and competitive advantage. The specific empirical evidence indicates that at least part of HRM's causal influence on employee performance may operate indirectly through a social architecture and then through psychological empowerment. However, in particular the evidence concerning a potential influence of HRM on organizational performance seems to put in question some of the rhetorics within the HRM research community. On the other hand, the thesis tries to explicate and defend a certain attitude towards the philosophically oriented debates within organization science. This involves suggestions as to how we should understand meaning, reference, truth, justification and knowledge. In this understanding it is not fruitful to see either the problems or the solutions to the problems of empirical social science as fundamentally philosophical ones. It is argued that the notorious problems of social science, in this thesis exemplified by research on HRM, can be seen as related to dynamic complexity in combination with both the ethical and pragmatic difficulty of "laboratory-like experiments". Solutions … can only be sought by informed trial and error, depending on the perceived familiarity with the object(s) of research. The odds are against anybody who hopes for clearly adequate social scientific answers to more complex questions. Social science is in particular unlikely to arrive at largely accepted knowledge of the kind "if we do this, then that will happen", or even "if we do this, then that is likely to happen". One of the problems probably facing most of the social scientific research communities is to specify and agree upon the "this" and the "that" and provide convincing evidence of how they are (causally) related. On most more complex questions the role of social science seems largely to remain that of contributing to a (critical) conversation, rather than to arrive at more generally accepted knowledge. This is ultimately what is both argued and, in a sense, demonstrated using research on the relationship between HRM and organizational performance as an example.
Abstract:
A dynamical instability is observed in experimental studies on micro-channels of rectangular cross-section with smallest dimension 100 and 160 μm, in which one of the walls is made of soft gel. There is a spontaneous transition from an ordered, laminar flow to a chaotic and highly mixed flow state when the Reynolds number increases beyond a critical value. The critical Reynolds number, which decreases as the elasticity modulus of the soft wall is reduced, is as low as 200 for the softest wall used here (in contrast to 1200 for a rigid-walled channel). The instability onset is observed by the breakup of a dye stream introduced in the centre of the micro-channel, as well as by the onset of wall oscillations detected through laser scattering from fluorescent beads embedded in the wall of the channel. The mixing time across a channel of width 1.5 mm, measured by dye-stream and outlet conductance experiments, is smaller by a factor of 10^5 than that for a laminar flow. The increased mixing rate comes at very little cost, because the pressure drop (the energy requirement to drive the flow) increases continuously and modestly at transition. The deformed shape is reconstructed numerically, and computational fluid dynamics (CFD) simulations are carried out to obtain the pressure gradient and the velocity fields for different flow rates. The pressure difference across the channel predicted by the simulations is in agreement with the experiments (within experimental errors) for flow rates where the dye stream is laminar, but the experimental pressure difference is higher than the simulation prediction after dye-stream breakup. A linear stability analysis is carried out using the parallel-flow approximation, in which the wall is modelled as a neo-Hookean elastic solid, and the simulation results for the mean velocity and pressure gradient from the CFD simulations are used as inputs. The stability analysis accurately predicts the Reynolds number (based on flow rate) at which an instability is observed in the dye stream, and it also predicts that the instability first takes place at the downstream converging section of the channel, and not at the upstream diverging section. The stability analysis also indicates that the destabilization is due to the modification of the flow and the local pressure gradient by the wall deformation; if we assume a parabolic velocity profile with the pressure gradient given by the plane Poiseuille law, the flow is always found to be stable.
Abstract:
There is a growing literature examining the impact of research on informing policy, and of research and policy on practice. Research and policy do not have the same types of impact on practice but can be evaluated using similar approaches. Sometimes the literature provides a platform for methodological debate but mostly it is concerned with how research can link to improvements in the process and outcomes of education, how it can promote innovative policies and practice, and how it may be successfully disseminated. Whether research-informed or research-based, policy and its implementation are often assessed on such 'hard' indicators of impact as changes in the number of students gaining five or more A to C grades in national examinations or a percentage fall in the number of exclusions in inner city schools. Such measures are necessarily crude, with large samples smoothing out errors and disguising instances of significant success or failure. Even when 'measurable' in such a fashion, however, the impact of any educational change or intervention may require a period of years to become observable. This paper considers circumstances in which short-term change may be implausible or difficult to observe. It explores how impact is currently theorized and researched and promotes the concept of 'soft' indicators of impact in circumstances in which the pursuit of conventional quantitative and qualitative evidence is rendered impractical within a reasonable cost and timeframe. Such indicators are characterized by their avowedly subjective, anecdotal and impressionistic provenance and have particular importance in the context of complex community education issues where the assessment of any impact often faces considerable problems of access. These indicators include the testimonies of those on whom the research intervention or policy focuses (for example, students, adult learners), the formative effects that are often reported (for example, by head teachers, community leaders) and media coverage. The collation and convergence of a wide variety of soft indicators (Where there is smoke …) is argued to offer a credible means of identifying subtle processes that are often neglected as evidence of potential and actual impact (… there is fire).
Abstract:
Doctoral thesis, Electronic Engineering and Computing - Signal Processing, Universidade do Algarve, 2008
Abstract:
Segment poses and joint kinematics estimated from skin markers are highly affected by soft tissue artifact (STA) and its rigid motion component (STARM). While four marker-clusters could decrease the non-rigid motion of the STA during gait, other data, such as marker location or STARM patterns, would be crucial to compensate for STA in clinical gait analysis. The present study proposed 1) to devise a comprehensive average map illustrating the spatial distribution of STA over the lower limb during treadmill gait and 2) to analyze STARM from four marker-clusters assigned to areas extracted from this spatial distribution. All experiments were performed using a stereophotogrammetric system to track the skin markers and a bi-plane fluoroscopic system to track the knee prosthesis. The spatial distribution of STA was computed on 19 subjects using 80 markers placed on the lower limb. Three different areas were extracted from the distribution map of the thigh. The marker displacement reached a maximum of 24.9 mm and 15.3 mm in the proximal areas of the thigh and shank, respectively. STARM was larger on the thigh than on the shank, with RMS errors in cluster orientation between 1.2° and 8.1°. The translation RMS errors were also large (3.0 mm to 16.2 mm). No marker-cluster correctly compensated for STARM. However, the coefficients of multiple correlation exhibited excellent scores between skin and bone kinematics, as well as for STARM between subjects. These correlations highlight dependencies between STARM and the kinematic components. This study provides new insights for modeling STARM during gait.
Abstract:
The General Packet Radio Service (GPRS) has been developed for the mobile radio environment to allow the migration from the traditional circuit-switched connection to a more efficient packet-based communication link, particularly for data transfer. GPRS requires the addition of not only the GPRS software protocol stack, but also more baseband functionality for the mobile, as new coding schemes have been defined, along with uplink status flag detection, multislot operation and dynamic coding scheme detection. This paper concentrates on evaluating the performance of the GPRS coding scheme detection methods in the presence of a multipath fading channel with a single co-channel interferer, as a function of various soft-bit data widths. It has been found that compressing the soft-bit data widths at the output of the equalizer to save memory can influence the likelihood decision of the coding scheme detection function and hence contribute to the overall performance loss of the system. Coding scheme detection errors can therefore force the channel decoder either to select the incorrect decoding scheme or to have no clear decision on which coding scheme to use, resulting in the decoded radio block failing the block check sequence and contributing to the block error rate. For correct performance simulation, the full coding scheme detection must therefore be taken into account.
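The soft-bit width effect can be illustrated with a generic saturating quantiser in C (an illustrative model, not the receiver implementation evaluated here): compressing the equalizer's soft outputs to fewer bits clips and coarsens the reliability values on which both the coding scheme detection and the channel decoder depend.

```c
/*
 * Generic saturating quantiser illustrating soft-bit width compression
 * (not the receiver implementation evaluated in the paper).  Equalizer
 * soft outputs are scaled, rounded and saturated to a given word width;
 * the coarser the representation, the less information remains for the
 * likelihood-based coding scheme decision and for channel decoding.
 */
#include <stdio.h>
#include <math.h>

/* quantise a soft value in [-1, 1] to a signed fixed-point word of 'bits' bits */
static int quantise_soft(double soft, int bits)
{
    int max_level = (1 << (bits - 1)) - 1;       /* e.g. 7 for 4-bit soft data */
    int q = (int)lround(soft * max_level);
    if (q >  max_level) q =  max_level;          /* saturate */
    if (q < -max_level) q = -max_level;
    return q;
}

int main(void)
{
    const double soft[] = { 0.93, -0.08, 0.52, -0.76 };
    for (int bits = 8; bits >= 2; bits -= 2) {
        printf("%d-bit:", bits);
        for (int i = 0; i < 4; i++)
            printf(" %4d", quantise_soft(soft[i], bits));
        printf("\n");
    }
    return 0;
}
```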
Abstract:
The use of microprocessor-based systems is gaining importance in application domains where safety is a must. For this reason, there is a growing concern about the mitigation of SEU and SET effects. This paper presents a new hybrid technique aimed at protecting both the data and the control flow of embedded applications running on microprocessors. On the one hand, the approach is based on software redundancy techniques for correcting errors produced in the data. On the other hand, control-flow errors can be detected by reusing the on-chip debug interface present in most modern microprocessors. Experimental results show an important increase in system reliability, even superior to two orders of magnitude, in terms of mitigation of both SEUs and SETs. Furthermore, the overheads incurred by our technique are perfectly acceptable for low-cost systems.
Abstract:
Purpose: This study investigated how aberration-controlling, customised soft contact lenses corrected higher-order ocular aberrations and visual performance in keratoconic patients compared to other forms of refractive correction (spectacles and rigid gas-permeable lenses). Methods: Twenty-two patients (16 rigid gas-permeable contact lens wearers and six spectacle wearers) were fitted with standard toric soft lenses and customised lenses (designed to correct 3rd-order coma aberrations). In the rigid gas-permeable lens-wearing patients, ocular aberrations were measured without lenses, with the patient's habitual lenses and with the study lenses (Hartmann-Shack aberrometry). In the spectacle-wearing patients, ocular aberrations were measured both with and without the study lenses. LogMAR visual acuity (high-contrast and low-contrast) was evaluated with the patients wearing their habitual correction (either spectacles or rigid gas-permeable contact lenses) and with the study lenses. Results: In the contact lens wearers, the habitual rigid gas-permeable lenses and the customised lenses provided significant reductions in 3rd-order coma root-mean-square (RMS) error, 3rd-order RMS and higher-order RMS error (p ≤ 0.004). In the spectacle wearers, the standard toric lenses and customised lenses significantly reduced 3rd-order RMS and higher-order RMS errors (p ≤ 0.005). The spectacle wearers showed no significant differences in visual performance between their habitual spectacles and the study lenses. However, in the contact lens wearers, the habitual rigid gas-permeable lenses and the standard toric lenses provided significantly better high-contrast acuities than the customised lenses (p ≤ 0.006). Conclusions: The customised lenses provided substantial reductions in ocular aberrations in these keratoconic patients; however, the poorer visual performance achieved with these lenses is most likely due to small, on-eye lens decentrations. © 2014 The College of Optometrists.