903 results for single event effects
Abstract:
Silicon-on-insulator (SOI) technologies have been developed for radiation-hardened military and space applications. The use of SOI has been motivated by the full dielectric isolation of individual transistors, which prevents latch-up. The sensitive region for charge collection in SOI technologies is much smaller than in bulk-silicon devices, potentially making SOI devices much harder against single event upset (SEU). In this study, 64 kB SOI SRAMs were exposed to different heavy ions, such as Cu, Br, I, and Kr. Experimental results show that the heavy-ion SEU threshold linear energy transfer (LET) of the 64 kB SOI SRAMs is about 71.8 MeV·cm²/mg. According to the experimental results, the single event upset rates (SEUR) in space orbits were calculated; they are on the order of 10⁻¹³ upsets/(bit·day).
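The on-orbit rate quoted above is typically obtained by folding the measured cross-section curve into the LET spectrum of the target orbit. As a rough, purely illustrative sketch (the abstract gives neither the fit parameters nor the environment model used), the snippet below integrates a hypothetical Weibull cross-section against a toy differential LET spectrum; every number in it is a placeholder, not a value from the study.

```python
# Illustrative only: upset rate ~ integral over LET of sigma(LET) * differential flux(LET).
# All parameters and the toy spectrum are hypothetical placeholders, not values from the paper.
import numpy as np

def weibull_cross_section(let, sigma_sat, l0, width, shape):
    """Per-bit SEU cross-section [cm^2/bit] versus LET [MeV*cm^2/mg], Weibull form."""
    x = np.clip((np.asarray(let, dtype=float) - l0) / width, 0.0, None)
    return sigma_sat * (1.0 - np.exp(-(x ** shape)))

def toy_differential_flux(let):
    """Toy differential LET flux [ions/(cm^2*day) per unit LET]; stands in for a real orbit model."""
    return 1.0e-3 * np.asarray(let, dtype=float) ** -3

lets = np.linspace(72.0, 120.0, 500)          # only LET above the ~71.8 threshold contributes
sigma = weibull_cross_section(lets, sigma_sat=1e-9, l0=71.8, width=20.0, shape=1.5)
integrand = sigma * toy_differential_flux(lets)
rate = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(lets)))  # trapezoid rule
print(f"toy estimate: {rate:.2e} upsets/(bit*day)")
```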
Abstract:
We extended genetic linkage analysis, a method widely used in quantitative genetics, to 3D images in order to analyze single-gene effects on brain fiber architecture. We collected 4 Tesla diffusion tensor images (DTI) and genotype data from 258 healthy adult twins and their non-twin siblings. After high-dimensional fluid registration, at each voxel we estimated the genetic linkage between the single nucleotide polymorphism (SNP) Val66Met (dbSNP number rs6265) of the BDNF gene (brain-derived neurotrophic factor) and fractional anisotropy (FA) derived from each subject's DTI scan, by fitting structural equation models (SEM) from quantitative genetics. We also examined how image filtering affects the effect sizes for genetic linkage, by assessing how the overall significance of voxelwise effects varied with the full width at half maximum (FWHM) of the Gaussian smoothing applied to the FA images. Raw FA maps with no smoothing yielded the greatest sensitivity for detecting gene effects when corrected for multiple comparisons using the false discovery rate (FDR) procedure. The BDNF polymorphism contributed significantly to the variation in FA in the posterior cingulate gyrus, where it accounted for around 90-95% of the total variance in FA. Our study generated the first maps to visualize the effect of the BDNF gene on brain fiber integrity, suggesting that common genetic variants may strongly determine white matter integrity.
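The multiple-comparison step mentioned above is well defined independently of the imaging data: the Benjamini-Hochberg procedure keeps the largest p-value whose rank-scaled threshold it still satisfies. The sketch below is a generic implementation applied to simulated voxelwise p-values, not the authors' code.

```python
# Generic Benjamini-Hochberg FDR thresholding of voxelwise p-values (not the authors' code).
import numpy as np

def bh_fdr_threshold(p_values, q=0.05):
    """Return the largest p-value cutoff satisfying p_(k) <= (k/m)*q, or None if none does."""
    p = np.sort(np.asarray(p_values, dtype=float))
    m = p.size
    passes = p <= (np.arange(1, m + 1) / m) * q
    return p[np.nonzero(passes)[0][-1]] if passes.any() else None

rng = np.random.default_rng(0)
voxel_p = rng.uniform(size=10_000)     # stand-in for one p-value per FA voxel
cutoff = bh_fdr_threshold(voxel_p, q=0.05)
n_sig = 0 if cutoff is None else int((voxel_p <= cutoff).sum())
print("voxels declared significant:", n_sig)
```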
Abstract:
Cell biology is characterised by low molecule numbers and coupled stochastic chemical reactions, with intrinsic noise permeating and dominating the interactions between molecules. Recent work [9] has shown that in such environments there are hard limits on the accuracy with which molecular populations can be controlled and estimated. These limits are predicated on a continuous diffusion approximation of the target molecule (although the remainder of the system is non-linear and discrete). The principal result of [9] assumes that the birth rate of the signalling species is linearly dependent on the target molecule population size. In this paper, we investigate the situation in which the entire system is kept discrete and arbitrary non-linear coupling is allowed between the target molecule and downstream signalling molecules. In this case it is possible, by relying solely on the event-triggered nature of control and signalling reactions, to define non-linear reaction rate modulation schemes that achieve improved performance in certain parameter regimes. These schemes would not appear to be biologically relevant, raising the question of what set of assumptions is appropriate for obtaining biologically meaningful results. © 2013 EUCA.
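To make the discrete, event-triggered setting concrete, the sketch below runs a minimal Gillespie simulation in which the signalling species is produced at a rate that depends non-linearly on the target population through a Hill function. The coupling form and all rate constants are arbitrary illustrations, not the modulation schemes analysed in the paper.

```python
# Minimal Gillespie (SSA) illustration of non-linear rate coupling between a target
# species X and a signalling species S. The Hill-type coupling and all constants are
# arbitrary choices for illustration, not the scheme analysed in the paper.
import random

def gillespie(t_end=100.0, seed=1):
    random.seed(seed)
    x, s, t = 10, 0, 0.0
    k_birth_x, k_death_x, k_death_s = 1.0, 0.1, 0.2
    v_max, k_half, hill_n = 5.0, 8.0, 2.0
    while t < t_end:
        rates = [
            k_birth_x,                                         # X birth (constant)
            k_death_x * x,                                     # X death
            v_max * x**hill_n / (k_half**hill_n + x**hill_n),  # S birth, non-linear in X
            k_death_s * s,                                     # S death
        ]
        total = sum(rates)
        t += random.expovariate(total)                         # time to the next reaction event
        r, pick = random.uniform(0.0, total), 0
        while r > rates[pick]:                                 # choose which reaction fires
            r -= rates[pick]
            pick += 1
        if pick == 0: x += 1
        elif pick == 1: x -= 1
        elif pick == 2: s += 1
        else: s -= 1
    return x, s

print(gillespie())
```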
Abstract:
This thesis presents the study and development of fault-tolerant techniques for programmable architectures, the well-known Field Programmable Gate Arrays (FPGAs), customizable by SRAM. FPGAs are becoming increasingly valuable for space applications because of their high density, high performance, reduced development cost and re-programmability. In particular, SRAM-based FPGAs are very valuable for remote missions because they can be reprogrammed by the user as many times as necessary in a very short period. SRAM-based FPGAs and micro-controllers represent a wide range of components in space applications and, as a result, are the focus of this work, more specifically the Virtex® family from Xilinx and the architecture of the 8051 micro-controller from Intel. Triple Modular Redundancy (TMR) with voters is a common high-level technique to protect ASICs against single event upsets (SEUs), and it can also be applied to FPGAs. The TMR technique was first tested in the Virtex® FPGA architecture using a small design based on counters. Faults were injected in all sensitive parts of the FPGA, and a detailed analysis of the effect of a fault in a TMR design synthesized on the Virtex® platform was performed. Results from fault injection and from a radiation ground test facility showed the efficiency of TMR for the case study circuit. Although TMR showed high reliability, the technique presents some limitations, such as area overhead, three times more input and output pins and, consequently, a significant increase in power dissipation. Aiming to reduce TMR costs and improve reliability, an innovative high-level technique for designing fault-tolerant systems in SRAM-based FPGAs was developed that requires no modification of the FPGA architecture. This technique combines time and hardware redundancy to reduce overhead and to ensure reliability. It is based on duplication with comparison and concurrent error detection. The new technique proposed in this work was specifically developed for FPGAs to cope with transient faults in the user combinational and sequential logic, while also reducing pin count, area and power dissipation. The methodology was validated by fault injection experiments on an emulation board. The thesis presents comparative results on fault coverage, area and performance for the discussed techniques.
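As a purely conceptual, software-level illustration of the two redundancy styles compared in the thesis (TMR with a majority voter versus duplication with comparison), the sketch below applies them to plain integer words; it is not the thesis' FPGA implementation.

```python
# Conceptual software model of the two redundancy schemes discussed above; this is not
# the thesis' FPGA implementation, it only illustrates the masking/detection logic.
from typing import Tuple

def tmr_vote(a: int, b: int, c: int) -> int:
    """Bitwise majority of three redundant copies (classic TMR voter)."""
    return (a & b) | (a & c) | (b & c)

def duplicate_and_compare(a: int, b: int) -> Tuple[int, bool]:
    """Duplication with comparison: returns one copy plus an error flag.
    Detection only; unlike TMR it cannot tell which copy is wrong."""
    return a, a != b

golden = 0b1011_0010
corrupted = golden ^ 0b0000_0100                        # single bit flip in one copy
assert tmr_vote(golden, corrupted, golden) == golden    # TMR masks the flip
value, error = duplicate_and_compare(golden, corrupted)
assert error                                            # duplication only detects it
```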
Abstract:
Nowadays, integrated circuit reliability is challenged by both variability and operating conditions. Environmental radiation has become a major issue when ensuring correct circuit behavior. The irradiation and subsequent analysis required for circuit boards are expensive in both funding and time. The lack of tools that support pre-manufacturing radiation-hardness analysis hinders circuit designers' work. This paper describes an extensively customizable simulation tool for the characterization of radiation effects on electronic systems. The proposed tool can produce an in-depth analysis of a complete circuit in almost any kind of radiation environment within affordable computation times.
Abstract:
This paper presents a methodology to emulate Single Event Upsets (SEUs) in FPGA flip-flops (FFs). Since the content of an FF is not modifiable through the FPGA configuration memory bits, a dedicated design is required for fault injection in the FFs. The method proposed in this paper is a hybrid approach that combines FPGA partial reconfiguration with extra logic added to the circuit under test, without modifying its operation. This approach has been integrated into a fault-injection platform, named NESSY (Non intrusive ErrorS injection SYstem), developed by our research group. Finally, this paper includes results on a Virtex-5 FPGA demonstrating the validity of the method on the ITC'99 benchmark set and a Feed-Forward Equalization (FFE) filter. In comparison with other approaches in the literature, this methodology reduces the resource consumption introduced to carry out the fault injection in FFs, at the cost of adding a very small time overhead (1.6 μs per fault).
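NESSY performs the injection in hardware through partial reconfiguration plus the extra logic described above, but the overall campaign structure (a golden run, then one run per injected flip, each compared against the golden outputs) can be sketched generically in software, as below; the toy "circuit" and its state are invented for illustration.

```python
# Generic SEU-emulation campaign structure: golden run, then one bit flip per run.
# Plain-software sketch of the idea only; the platform described above injects the
# faults in hardware via partial reconfiguration and added logic.
from typing import List, Optional

def run_circuit(ff_state: List[int], inputs: List[int], flip_at: Optional[int] = None) -> List[int]:
    """Toy 'circuit': a 4-bit accumulator register; flip_at inverts one FF before the run."""
    state = list(ff_state)
    if flip_at is not None:
        state[flip_at] ^= 1                      # emulate the SEU in one flip-flop
    acc = int("".join(map(str, state)), 2)
    outputs = []
    for value in inputs:
        acc = (acc + value) & 0xF                # 4-bit accumulate
        outputs.append(acc)
    return outputs

stimuli = [1, 2, 3, 4, 5]
golden = run_circuit([0, 0, 0, 0], stimuli)
failures = sum(run_circuit([0, 0, 0, 0], stimuli, flip_at=ff) != golden for ff in range(4))
print(f"{failures}/4 injected flips produced an observable error")
```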
Abstract:
This paper presents the vulnerabilities to single event effects (SEEs) simulated with heavy ions on the ground and observed on the SJ-5 research satellite in space for static random access memories (SRAMs). A single event upset (SEU) prediction code has been used to estimate the proton-induced upset rates based on the ground test curve of SEU cross-section versus heavy ion linear energy transfer (LET). The result agrees with the flight data.
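The "ground test curve of SEU cross-section versus LET" referred to above is usually summarised by fitting a Weibull function before any rate prediction is made. The sketch below fits such a curve to invented data points with SciPy; neither the data nor the fitted parameters come from the SJ-5 ground tests.

```python
# Hypothetical illustration of fitting a Weibull curve to heavy-ion SEU cross-section
# data, the usual way a ground-test curve is summarised before rate prediction.
# The data points below are invented, not taken from the SJ-5 ground tests.
import numpy as np
from scipy.optimize import curve_fit

def weibull(let, sigma_sat, l0, width, shape):
    """sigma(LET) = sigma_sat * (1 - exp(-((LET - L0)/W)^s)) for LET > L0, else 0."""
    x = np.clip((np.asarray(let, dtype=float) - l0) / width, 0.0, None)
    return sigma_sat * (1.0 - np.exp(-(x ** shape)))

let_meas   = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 60.0])         # MeV*cm^2/mg
sigma_meas = np.array([0.0, 1e-9, 5e-9, 1.2e-8, 1.8e-8, 2.0e-8])  # cm^2/bit

popt, _ = curve_fit(weibull, let_meas, sigma_meas,
                    p0=[2e-8, 1.0, 15.0, 1.5], bounds=(0, np.inf), maxfev=10000)
print("fitted [sigma_sat, L0, W, s] =", popt)
```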
Abstract:
Single event effects, one of the space environment effects, pose a serious threat to the safe in-orbit operation of spacecraft. To understand the mechanisms of this effect clearly, it can be studied by ground-based simulation with a heavy-ion accelerator. To this end, the first domestic ground-based single event effect simulation facility based on a heavy-ion accelerator was built and refined, and a single event effect experiment irradiating the IDT71256 with Xe ions was carried out on it. The facility, based on the Lanzhou heavy-ion accelerator, comprises a vacuum target chamber system and a beam monitoring system. The performance of the whole facility was tested with ⁴⁰Ar ions provided by the Lanzhou heavy-ion accelerator. The results show that the facility meets the needs of single event effect research well: the vacuum target chamber system allows devices to be exchanged, the tilt angle to be changed, and the ion incident energy to be varied without breaking vacuum; the fluence detector achieves non-intercepting measurement of weak beams; the energy detector measures Ar ions with a deviation of less than 2%; and the position resolution of the beam-uniformity detector is better than 0.34 mm. On this facility, single event effects in two IDT71256 devices were studied with ¹³⁶Xe ions provided by the Lanzhou heavy-ion accelerator. Single event upsets and single event latch-up were observed in both devices, and both devices exhibited a clear upset asymmetry, with 1→0 upsets accounting for about 90% of the total upsets. The influence of the ion incidence angle, the effective LET, and the ion range on the upset and latch-up cross-sections was examined; it was found that, besides the effective LET, the ion range in the device also strongly affects the upset and latch-up cross-sections.
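The effective LET mentioned above is the standard cosine approximation for tilted irradiation; a minimal generic sketch (not the facility's analysis code) is shown below. Its validity assumes the ion range comfortably exceeds the tilted path through the sensitive volume, which is exactly the range caveat the abstract raises.

```python
# Standard cosine approximation for tilted heavy-ion irradiation (generic sketch,
# not the facility's analysis code). It assumes the ion range is much larger than
# the tilted path through the sensitive volume.
import math

def effective_let(surface_let, tilt_deg):
    """LET_eff = LET / cos(theta): longer path through the sensitive volume per unit depth."""
    return surface_let / math.cos(math.radians(tilt_deg))

def effective_fluence(normal_fluence, tilt_deg):
    """Fluence seen by the tilted device scales with the projected area: phi * cos(theta)."""
    return normal_fluence * math.cos(math.radians(tilt_deg))

for angle in (0, 30, 45, 60):
    print(f"{angle:2d} deg: LET multiplied by {1 / math.cos(math.radians(angle)):.2f}")
```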
Abstract:
The purpose of the work was to realize a high-speed digital data transfer system for the RPC muon chambers in the CMS experiment at CERN's new LHC accelerator. This large-scale system took many years and many stages of prototyping to develop, and required the participation of tens of people. The system interfaces to the Frontend Boards (FEB) at the 200,000-channel detector and to the trigger and readout electronics in the control room of the experiment. The distance between these two is about 80 metres, and the speed required for the optical links was pushing the limits of available technology when the project was started. Here, as in many other aspects of the design, it was assumed that the features of readily available commercial components would develop in the course of the design work, just as they did. By choosing a high speed, it was possible to multiplex the data from some of the chambers into the same fibres to reduce the number of links needed. Further reduction was achieved by employing zero suppression and data compression, and a total of only 660 optical links were needed. Another requirement, which conflicted somewhat with choosing the components as late as possible, was that the design needed to be radiation tolerant to an ionizing dose of 100 Gy and to have a moderate tolerance to Single Event Effects (SEEs). This required some radiation test campaigns, and eventually led to ASICs being chosen for some of the critical parts. The system was made to be as reconfigurable as possible. The reconfiguration needs to be done remotely, as the electronics is not accessible except during some short and rare service breaks once the accelerator starts running. Therefore reconfigurable logic is used extensively, and the firmware development for the FPGAs constituted a sizable part of the work. Some special techniques needed to be used there too, to achieve the required radiation tolerance. The system has been demonstrated to work in several laboratory and beam tests, and now we are waiting to see it in action when the LHC starts running in the autumn of 2008.
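Zero suppression of sparse detector channels, one of the data-reduction steps mentioned above, can be sketched generically: only hit channels are kept as (index, value) pairs. The snippet below is such a sketch, not the actual link-board data format.

```python
# Generic zero-suppression sketch for sparse detector channels (not the actual
# RPC link-board data format): only hit channels are kept as (index, value) pairs.
from typing import Dict, List

def zero_suppress(channels: List[int]) -> Dict[int, int]:
    """Keep only non-zero channels, indexed by channel number."""
    return {i: v for i, v in enumerate(channels) if v != 0}

def expand(hits: Dict[int, int], n_channels: int) -> List[int]:
    """Rebuild the full channel list from the suppressed representation."""
    return [hits.get(i, 0) for i in range(n_channels)]

frame = [0] * 96
frame[7], frame[42] = 1, 1          # two hit strips in an otherwise empty frame
packed = zero_suppress(frame)
assert expand(packed, len(frame)) == frame
print(f"{len(frame)} channels -> {len(packed)} (index, value) pairs")
```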