976 results for single event latchup
Abstract:
Cell biology is characterised by low molecule numbers and coupled stochastic chemical reactions, with intrinsic noise permeating and dominating the interactions between molecules. Recent work [9] has shown that in such environments there are hard limits on the accuracy with which molecular populations can be controlled and estimated. These limits are predicated on a continuous diffusion approximation of the target molecule (although the remainder of the system is non-linear and discrete). The principal result of [9] assumes that the birth rate of the signalling species depends linearly on the target molecule population size. In this paper, we investigate the situation when the entire system is kept discrete and arbitrary non-linear coupling is allowed between the target molecule and downstream signalling molecules. In this case it is possible, by relying solely on the event-triggered nature of control and signalling reactions, to define non-linear reaction-rate modulation schemes that achieve improved performance in certain parameter regimes. These schemes would not appear to be biologically relevant, raising the question of what constitutes an appropriate set of assumptions for obtaining biologically meaningful results. © 2013 EUCA.
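For intuition, the kind of system described in this abstract, a discrete target species coupled to a signalling species through a non-linear production rate, can be simulated exactly with the standard Gillespie algorithm. The sketch below is illustrative only and is not the modulation scheme from the paper; the Hill-type coupling `hill` and all rate constants are assumed values.

```python
import random

def gillespie_two_species(f, k_x=1.0, g_x=0.1, g_y=0.1, t_end=100.0, seed=0):
    """Simulate a target X (birth rate k_x, degradation g_x*x) and a signalling
    species Y produced at rate f(x) and degraded at rate g_y*y, using the
    Gillespie stochastic simulation algorithm."""
    rng = random.Random(seed)
    t, x, y = 0.0, 0, 0
    trajectory = [(t, x, y)]
    while t < t_end:
        rates = [k_x, g_x * x, f(x), g_y * y]   # X birth, X death, Y birth, Y death
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)             # waiting time to the next reaction
        r = rng.uniform(0, total)               # pick which reaction fires
        if r < rates[0]:
            x += 1
        elif r < rates[0] + rates[1]:
            x -= 1
        elif r < rates[0] + rates[1] + rates[2]:
            y += 1
        else:
            y -= 1
        trajectory.append((t, x, y))
    return trajectory

# Example of an arbitrary non-linear (Hill-type) coupling between target and signal.
hill = lambda x: 5.0 * x**4 / (10.0**4 + x**4)
traj = gillespie_two_species(hill)
```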
Abstract:
Silicon-on-insulator (SOI) technologies have been developed for radiation-hardened military and space applications. The use of SOI has been motivated by the full dielectric isolation of individual transistors, which prevents latch-up. The sensitive region for charge collection in SOI technologies is much smaller than in bulk-silicon devices, potentially making SOI devices much less susceptible to single event upset (SEU). In this study, 64 kB SOI SRAMs were exposed to several heavy ions (Cu, Br, I and Kr). Experimental results show that the heavy-ion SEU threshold linear energy transfer (LET) in the 64 kB SOI SRAMs is about 71.8 MeV·cm²/mg. Based on the experimental results, the single event upset rates (SEUR) in space orbits were calculated; they are on the order of 10⁻¹³ upsets/(bit·day).
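For context, heavy-ion SEU cross sections of the kind reported here are commonly summarised with a Weibull fit of cross section versus LET, from which on-orbit rates are then derived. The sketch below reuses the 71.8 MeV·cm²/mg threshold quoted above, but the saturation cross section, width and shape parameters are assumed values for illustration, not fits to these SRAMs.

```python
import math

def weibull_cross_section(let, let_th, sigma_sat, width, shape):
    """Commonly used Weibull fit of SEU cross section vs. linear energy
    transfer (LET, MeV·cm²/mg); returns cross section in cm²/bit."""
    if let <= let_th:
        return 0.0
    return sigma_sat * (1.0 - math.exp(-((let - let_th) / width) ** shape))

# Hypothetical sigma_sat, width and shape, chosen only to show the curve's behaviour.
for let in (60, 75, 90, 120):
    sigma = weibull_cross_section(let, let_th=71.8, sigma_sat=1e-8, width=30.0, shape=1.5)
    print(f"LET {let:>4} MeV·cm²/mg -> sigma = {sigma:.2e} cm²/bit")
```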
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming more valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components used in space applications, and as a result they are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA, and a detailed analysis of the effect of a fault on a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR has shown high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed, without modifying the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was developed specifically for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results on fault coverage, area and performance for the discussed techniques.
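For reference, the two protection schemes compared in the thesis can be stated very compactly: TMR masks a single fault through a bitwise majority vote over three copies, whereas duplication with comparison only detects the mismatch. A behavioural sketch (Python is used purely for illustration; the function names are ours, not the thesis's):

```python
def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority vote of three redundant module outputs: a fault in any
    single copy is masked because the other two copies agree on every bit."""
    return (a & b) | (a & c) | (b & c)

def dwc_check(a: int, b: int) -> tuple[int, bool]:
    """Duplication with comparison: returns one copy plus an error flag; unlike
    TMR it detects a single fault but cannot mask it on its own."""
    return a, a != b

# A single-bit flip in one TMR copy is masked by the other two.
assert tmr_vote(0b1011, 0b1111, 0b1011) == 0b1011
```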
Abstract:
This paper presents a methodology to emulate Single Event Upsets (SEUs) in FPGA flip-flops (FFs). Since the content of a FF is not modifiable through the FPGA configuration memory bits, a dedicated design is required for fault injection in the FFs. The method proposed in this paper is a hybrid approach that combines FPGA partial reconfiguration with extra logic added to the circuit under test, without modifying its operation. This approach has been integrated into a fault-injection platform, named NESSY (Non intrusive ErrorS injection SYstem), developed by our research group. Finally, this paper includes results on a Virtex-5 FPGA demonstrating the validity of the method on the ITC'99 benchmark set and a Feed-Forward Equalization (FFE) filter. In comparison with other approaches in the literature, this methodology reduces the resource overhead introduced to carry out fault injection in FFs, at the cost of a very small time overhead (1.6 μs per fault).
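The injection flow described above can be outlined as a host-side loop: halt the design, flip the target flip-flop, rerun the workload, and compare against a golden run. The sketch below is purely schematic; the `board` object and all of its methods are invented placeholders, not the actual NESSY interfaces.

```python
def fault_injection_campaign(board, ff_list, golden_output):
    """Hypothetical sketch of an SEU-emulation campaign on FPGA flip-flops.
    For each target FF: halt the design under test, flip the bit, rerun the
    workload, classify the outcome, and restore the original state."""
    results = {}
    for ff in ff_list:
        board.halt()                      # stop the circuit under test
        board.flip_ff(ff)                 # inject the bit-flip (hypothetical call)
        output = board.run_workload()     # resume and capture the outputs
        if output == golden_output:
            results[ff] = "silent"        # fault masked by the circuit
        else:
            results[ff] = "failure"       # observable error at the outputs
        board.restore(ff)                 # undo the injection before the next run
    return results
```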
Abstract:
This paper presents an experimental study of the sensitivity to 15-MeV neutrons of Advanced Low Power SRAMs (A-LPSRAM) at a low bias voltage slightly above the minimum value that allows data retention. This family of memories is characterized by a 3D structure, which minimizes the area penalty and copes with latchup, and by integrated capacitors that hinder the occurrence of single event upsets. In low-voltage static tests, classical single event upsets were a minor source of errors, but other unexpected phenomena, such as clusters of bitflips and hard errors, turned out to be the origin of hundreds of bitflips. In addition, no errors were observed in dynamic tests at nominal voltage. This behavior is clearly different from that of standard bulk CMOS SRAMs, where thousands of errors have been reported.
Abstract:
This report presents observations, findings, and recommendations from an engineering reconnaissance trip following the May 20th, 2013 tornado that struck Moore, Oklahoma. A team of faculty, research scientists, professional engineers, and civil engineering students was tasked with investigating and documenting the performance of critical facility buildings and residences (IBC Occupancy Category II, III, and IV) in Moore, OK. The Enhanced Fujita (EF) 5 tornado created a 17-mile-long damage swath, destroying over 12,000 buildings and killing 24 people. The total economic loss from this single event was estimated at $3 billion. The May 20th tornado was the third major tornado to hit Moore in the previous 15 years.
Abstract:
This paper describes the limitations of using the International Statistical Classification of Diseases and Related Health Problems, Tenth Revision, Australian Modification (ICD-10-AM) to characterise patient harm in hospitals. Limitations were identified during a project to use diagnoses flagged by Victorian coders as hospital-acquired to devise a classification of 144 categories of hospital-acquired diagnoses (the Classification of Hospital Acquired Diagnoses, or CHADx). CHADx is a comprehensive data monitoring system designed to allow hospitals to monitor their complication rates month to month using a standard method. Difficulties in identifying a single event from linear sequences of codes, due to the absence of code linkage, were the major obstacle to developing the classification. Obstetric and perinatal episodes also presented challenges in distinguishing condition onset, that is, whether conditions were present on admission or arose after formal admission to hospital. Used appropriately, the CHADx allows hospitals to identify areas for future patient safety and quality initiatives. The value of timing information and code linkage should be recognised in the planning stages of any future electronic systems.
Abstract:
The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by currents in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10⁻¹¹ m s⁻². For a single event, the detection threshold is higher, conservatively about 10⁻⁹ m s⁻². Due to its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given. The data processing methods and the development of the local tidal model at Metsähovi are presented. The T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 operating stations. The data of the T020 and of the other participating SGs are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseism at Metsähovi are presented. Free oscillations excited by three large earthquakes were analyzed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode 0S0, the "football mode" 0S2, and the toroidal mode 0T2. The very low level (0.01 nm s⁻¹) incessant excitation of the Earth's free oscillations was detected with the T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects, the most important of which is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity. The influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases the gravity at Metsähovi by 31 nm s⁻² and produces a vertical deformation of -11 mm. The regression coefficient for sea level is 27 nm s⁻² m⁻¹, which is 87% of the uniform model. These studies are complemented by temporal height variations derived from GPS data of the Metsähovi permanent station. Results from the long time series at Metsähovi demonstrate the high quality of the data and of the applied offset and drift corrections. The superconducting gravimeter T020 has proven to be an excellent and versatile tool for studies of Earth dynamics.
Abstract:
The implementation of semiconductor circuits and systems in nanoscale technologies makes it possible to achieve higher speed, lower voltage levels and smaller area. The unintended and undesirable result of this scaling is that it makes integrated circuits susceptible to soft errors, normally caused by alpha-particle or neutron hits. These radiation strikes, which result in bit upsets referred to as single event upsets (SEUs), are of increasing concern for reliable circuit operation in the field. Storage elements are the worst hit by this phenomenon. As we scale down further, there is growing interest in the reliability of circuits and systems, in addition to the performance, power and area aspects. In this paper we propose an improved 12T SEU-tolerant SRAM cell design. The proposed SRAM cell is economical in terms of area overhead and easier to fabricate than earlier designs. Simulation results show that the proposed cell is highly robust, as it does not flip even for a transient pulse carrying 62 times the critical charge (Q_crit) of a standard 6T SRAM cell.
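For context, whether a particle strike flips a cell is conventionally judged against the critical charge Q_crit: the strike is modelled as a double-exponential current pulse and the charge it deposits is compared with Q_crit. A minimal sketch with assumed pulse parameters and an assumed Q_crit (these are not values for the 12T or 6T cells above):

```python
import math

def double_exp_current(t, i_peak, tau_rise, tau_fall):
    """Common double-exponential model of a single-event transient current (A)."""
    return i_peak * (math.exp(-t / tau_fall) - math.exp(-t / tau_rise))

def deposited_charge(i_peak, tau_rise, tau_fall, t_end=2e-9, steps=20000):
    """Numerically integrate the transient current to get the deposited charge (C)."""
    dt = t_end / steps
    return sum(double_exp_current(k * dt, i_peak, tau_rise, tau_fall) * dt
               for k in range(steps))

# Hypothetical numbers for illustration: a ~0.5 mA pulse with ~100 ps fall time.
q = deposited_charge(i_peak=5e-4, tau_rise=1e-11, tau_fall=1e-10)
q_crit = 5e-15   # assumed critical charge of 5 fC; the cell upsets if q exceeds it
print(f"deposited charge = {q * 1e15:.1f} fC, upset = {q > q_crit}")
```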
Abstract:
4 p.
Abstract:
28 p.
Abstract:
Earthquake early warning (EEW) systems have developed rapidly over the past decade. The Japan Meteorological Agency (JMA) has an EEW system that was operating during the 2011 M9 Tohoku earthquake in Japan, which increased awareness of EEW systems around the world. While longer-term earthquake prediction still faces many challenges before it becomes practical, the availability of short-term EEW opens a new door for earthquake loss mitigation. After an earthquake fault begins rupturing, an EEW system uses the first few seconds of recorded seismic waveform data to quickly predict the hypocenter location, magnitude, origin time and the expected shaking intensity level around the region. This early warning information is broadcast to different sites before the strong shaking arrives. The warning lead time of such a system is short, typically a few seconds to a minute or so, and the information is uncertain. These factors limit the scope for human intervention to activate mitigation actions, and this must be addressed for engineering applications of EEW. This study applies a Bayesian probabilistic approach, along with machine learning techniques and decision theory from economics, to improve different aspects of EEW operation, including extending it to engineering applications.
Existing EEW systems are often based on a deterministic approach and typically assume that only a single event occurs within a short period of time, an assumption that led to many false alarms after the Tohoku earthquake in Japan. This study develops a probability-based EEW algorithm, built on an existing deterministic model, that extends the EEW system to the case of concurrent events, which are often observed during the aftershock sequence of a large earthquake.
To overcome the challenges of uncertain information and the short lead time of EEW, this study also develops an earthquake probability-based automated decision-making (ePAD) framework to make robust decisions for EEW mitigation applications. A cost-benefit model that can capture the uncertainties in the EEW information and in the decision process is used. This approach, called Performance-Based Earthquake Early Warning, builds on the PEER Performance-Based Earthquake Engineering method. The use of surrogate models is suggested to improve computational efficiency. New models are also proposed to incorporate the influence of lead time into the cost-benefit analysis. For example, a value-of-information model is used to quantify the potential value of delaying the activation of a mitigation action in exchange for a possible reduction in the uncertainty of the EEW information at the next update. Two practical examples, evacuation alerts and elevator control, are studied to illustrate the ePAD framework. Potential advanced EEW applications, such as multiple-action decisions and the synergy of EEW with structural health monitoring systems, are also discussed.
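In its simplest form, the cost-benefit criterion described above triggers a mitigation action whenever its expected cost is lower than the expected loss it averts under the uncertain shaking prediction. The sketch below is a toy illustration of that rule, not the ePAD implementation; the probability, costs and mitigation factor are all assumed values.

```python
def expected_loss_without_action(p_exceed, loss_if_damage):
    """Expected loss with no mitigation: the probability that shaking exceeds the
    damaging level times the loss incurred in that case."""
    return p_exceed * loss_if_damage

def should_act(p_exceed, loss_if_damage, action_cost, mitigation_factor=0.1):
    """Trigger the action when its fixed cost plus the residual (mitigated) loss
    is lower than the expected loss of doing nothing."""
    loss_no_action = expected_loss_without_action(p_exceed, loss_if_damage)
    loss_with_action = action_cost + mitigation_factor * loss_no_action
    return loss_with_action < loss_no_action

# Example: 30% chance of damaging shaking, $1M potential loss, $50k action cost.
print(should_act(p_exceed=0.30, loss_if_damage=1_000_000, action_cost=50_000))
# -> True, since 50k + 0.1 * 300k = 80k is less than the 300k expected loss.
```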