42 results for stacking faults
Abstract:
As chip implementation technologies evolve to deliver more performance, processors use smaller components, pack transistors more densely, and operate at lower supply voltages. All of these factors make chips less robust and increase the probability of transient faults. A transient fault may occur once and never recur in the same way during a system's lifetime. Its consequences vary: the operating system may abort execution if the change produced by the fault manifests as misbehavior of the application, but the biggest risk is that the fault causes an undetected data corruption that silently modifies the application's final result (for example, a bit flip in some crucial data). To study transient faults in processor registers and memory, we have developed an extension of COTSon, the full-system simulation environment developed jointly by HP and AMD. This extension allows the injection of faults that flip a single bit in the processor registers or memory of the simulated machine. The resulting fault injection system makes it possible to evaluate the effects of single-bit-flip transient faults on an application, analyze an application's robustness against such faults, and validate fault detection mechanisms and strategies.
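The core operation described above is a single-bit flip in a register or memory word. A minimal sketch of such an injection is shown below; the `sim.read_register` / `sim.write_register` accessors are hypothetical stand-ins for a simulator interface, not the actual COTSon API.

```python
import random

def flip_single_bit(value: int, width: int = 64) -> int:
    """Return `value` with one randomly chosen bit inverted."""
    bit = random.randrange(width)          # pick a random bit position
    return value ^ (1 << bit)              # XOR toggles exactly that bit

def inject_register_fault(sim, reg_name: str) -> None:
    """Inject a single-bit-flip transient fault into one register.

    `sim.read_register` / `sim.write_register` are hypothetical accessors
    standing in for the simulator's real register interface.
    """
    original = sim.read_register(reg_name)
    corrupted = flip_single_bit(original)
    sim.write_register(reg_name, corrupted)
```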
Abstract:
All-optical label swapping (AOLS) is a key technology for implementing all-optical packet switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e. the number of labels in use), since a dedicated optical device is needed for each label recognized at every node. Label space sizes are affected by the way in which demands are routed: shortest-path routing, for instance, uses fewer labels but leads to high link utilization, whereas minimum interference routing leads to the opposite. This paper studies all-optical label stacking (AOLStack), an extension of the AOLS architecture that aims at reducing label spaces while easing the compromise with link utilization. An integer linear program is proposed to analyze how AOLStack softens the aforementioned trade-off, and a heuristic that finds good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with only a small increase in link utilization or, equivalently, b) makes better use of the residual bandwidth to decrease the number of labels even further.
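To make the routing/label-space trade-off concrete, the sketch below routes a small set of demands over shortest paths and counts, per node, how many distinct labels that node would have to recognize; the topology, the demands, and the simplification of one label per demand traversing a node are illustrative assumptions, not the paper's actual AOLS model.

```python
import networkx as nx
from collections import defaultdict

# Illustrative topology and demands (not taken from the paper).
g = nx.Graph()
g.add_edges_from([("A", "B"), ("B", "C"), ("C", "D"), ("A", "D"), ("B", "D")])
demands = [("A", "C"), ("A", "D"), ("B", "D")]

labels_per_node = defaultdict(set)   # node -> labels it must recognize
link_load = defaultdict(int)         # link -> number of demands crossing it

for label, (src, dst) in enumerate(demands):
    path = nx.shortest_path(g, src, dst)
    for u, v in zip(path, path[1:]):
        labels_per_node[u].add(label)          # simplification: one label per demand
        link_load[frozenset((u, v))] += 1

print({node: len(labels) for node, labels in labels_per_node.items()})
print(dict(link_load))
```

Comparing these two counts for different routing strategies is essentially the trade-off that the integer linear program and the heuristic in the paper explore.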
Abstract:
Whereas numerical modeling using finite-element methods (FEM) can provide the transient temperature distribution in a component with sufficient accuracy, the development of compact dynamic thermal models that can be used for electrothermal simulation is of the utmost importance. While in most cases single power sources are considered, here we focus on the simultaneous presence of multiple sources. The thermal model takes the form of a thermal impedance matrix containing the thermal impedance transfer functions between any two ports: each individual transfer function element Z_ij is obtained from the analysis of the temperature transient at node i after a power step at node j. Different options for multiexponential transient analysis are detailed and compared. Among the options explored, small thermal models can be obtained by constrained nonlinear least squares (NLSQ) methods if the model order is selected properly using validation signals. The methods are applied to the extraction of dynamic compact thermal models for a new ultrathin chip stack technology (UTCS).
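As an illustration of the multiexponential fitting step, the sketch below fits a thermal step response of the form Z(t) = sum_k R_k (1 - exp(-t/tau_k)) to transient data using SciPy's bounded (constrained) nonlinear least squares. The model order, bounds, and synthetic "measurement" are illustrative assumptions, not the paper's extraction procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def z_step(t, *params):
    """Multiexponential step response: sum_k R_k * (1 - exp(-t / tau_k))."""
    r = np.asarray(params[0::2])            # amplitudes R_k (K/W)
    tau = np.asarray(params[1::2])          # time constants tau_k (s)
    return np.sum(r * (1.0 - np.exp(-t[:, None] / tau)), axis=1)

order = 3                                    # assumed model order
t = np.logspace(-4, 1, 200)                  # time grid (s)
z_meas = z_step(t, 2.0, 1e-3, 5.0, 0.05, 3.0, 1.0)   # synthetic "measurement"
z_meas += np.random.normal(scale=0.02, size=t.size)  # measurement noise

p0 = [1.0, 1e-3] * order                     # initial guess per (R_k, tau_k) pair
bounds = (0.0, np.inf)                       # constrain R_k and tau_k to be positive
popt, _ = curve_fit(z_step, t, z_meas, p0=p0, bounds=bounds)
print(popt.reshape(order, 2))                # fitted (R_k, tau_k) pairs
```

In practice the order would be chosen by repeating the fit for several orders and keeping the smallest one that still reproduces the validation signals, which is the selection criterion mentioned in the abstract.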
Abstract:
The Quaternary Active Faults Database of Iberia (QAFI) is an initiative led by the Institute of Geology and Mines of Spain (IGME) to build a public repository of scientific data on faults with documented activity during the last 2.59 Ma (the Quaternary). QAFI also addresses the need to transfer geological knowledge to seismic hazard and risk practitioners in Iberia by identifying and characterizing seismogenic fault sources. QAFI is populated with information freely provided by more than 40 Earth science researchers and currently stores a total of 262 records. In this article we describe the development and evolution of the database as well as its internal architecture. Additionally, a first global analysis of the data is provided, with special focus on the fault length and slip-rate parameters. Finally, the completeness of the database and the internal consistency of the data are discussed. Even though QAFI v.2.0 is the most current resource for calculating fault-related seismic hazard in Iberia, the database is still incomplete and requires further review.
Abstract:
The northwestern margin of the Valencia trough is an area of low strain characterized by slow normal faults and low to moderate seismicity. Since the mid-1990s this area has been the subject of a number of studies on active tectonics, which have proposed different approaches to locating active faults and to calculating the parameters that describe their seismic cycle. Fifty-six active faults have been identified and classified according to their characteristics: a) faults with clear evidence of large paleo-, historical or instrumental earthquakes (2/56); b) faults with evidence of accumulated activity during the Plio-Quaternary and with associated instrumental seismicity (7/56); c) faults with evidence of accumulated activity during the Plio-Quaternary and without associated instrumental seismicity (17/56); d) faults with associated instrumental seismicity and without evidence of accumulated activity during the Plio-Quaternary (30/56); and e) faults without evidence of activity, or inactive faults. The parameters that describe the seismic cycle of these faults have been evaluated by different methods that use the geological data obtained for each fault, except where paleoseismological studies were available. Because of the simplicity of the approaches adopted, this classification can be applied to other areas with low-slip faults. This study reviews the different approaches proposed and describes the active faults located, highlighting the need a) to better understand active faults in slow strain zones through paleoseismological studies, and b) to include them in seismic hazard studies.
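The five-category scheme above maps directly onto a few boolean criteria per fault. A minimal sketch of that mapping, with hypothetical field names chosen purely for illustration, could look as follows.

```python
from dataclasses import dataclass

@dataclass
class FaultEvidence:
    # Hypothetical flags summarizing the evidence available for one fault.
    large_earthquake: bool            # clear paleo-, historical or instrumental event
    plio_quaternary_activity: bool    # evidence of accumulated Plio-Quaternary activity
    instrumental_seismicity: bool     # associated instrumental seismicity

def classify(f: FaultEvidence) -> str:
    """Return the category (a-e) following the classification in the abstract."""
    if f.large_earthquake:
        return "a"
    if f.plio_quaternary_activity and f.instrumental_seismicity:
        return "b"
    if f.plio_quaternary_activity:
        return "c"
    if f.instrumental_seismicity:
        return "d"
    return "e"

print(classify(FaultEvidence(False, True, False)))   # -> "c"
```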
Abstract:
Report for the scientific sojourn at the Department of Information Technology (INTEC) of Ghent University, Belgium, from January to June 2007. All-Optical Label Swapping (AOLS) is a key technology for implementing All-Optical Packet Switching (AOPS) nodes for the future optical Internet. The capital expenditure of deploying AOLS increases with the size of the label space (i.e. the number of labels in use), since a dedicated optical device is needed for each label recognized at every node. Label space sizes are affected by the way in which demands are routed: shortest-path routing, for instance, uses fewer labels but leads to high link utilization, whereas minimum interference routing leads to the opposite. This project studies and proposes All-Optical Label Stacking (AOLStack), an extension of the AOLS architecture that aims at reducing label spaces while easing the compromise with link utilization. An Integer Linear Program is proposed to analyze how AOLStack softens the aforementioned trade-off, and a heuristic that finds good solutions in polynomial time is proposed as well. Simulation results show that AOLStack either a) reduces the label spaces with only a small increase in link utilization or, equivalently, b) makes better use of the residual bandwidth to decrease the number of labels even further.
Abstract:
The demand for computational power has driven the improvement of the High Performance Computing (HPC) area, typically represented by distributed systems such as clusters of computers running parallel applications. In this area, fault tolerance plays an important role in providing high availability by isolating the application from the effects of faults. For some kinds of applications, performance and availability are inseparable requirements, so fault-tolerant solutions must take both constraints into consideration when they are designed. In this dissertation we present some side-effects that certain fault-tolerant solutions may introduce when recovering a failed process. These effects may degrade the system, affecting mainly its overall performance and availability. We introduce RADIC-II, a fault-tolerant architecture for message passing based on the RADIC (Redundant Array of Distributed Independent Fault Tolerance Controllers) architecture. RADIC-II preserves as far as possible the RADIC features of transparency, decentralization, flexibility and scalability, while incorporating a flexible dynamic redundancy feature that allows some recovery side-effects to be mitigated or avoided.
Abstract:
Fault tolerance has become a major issue for computer and software engineers because the occurrence of faults increases the cost of using a parallel computer. RADIC is a fault tolerance architecture for message passing systems that is transparent, decentralized, flexible and scalable. This master's thesis presents the methodology used to implement the RADIC architecture over Open MPI, a well-known and widely used message passing library, while preserving the RADIC architecture's characteristics. To validate the implementation we executed a synthetic ping program, and to evaluate its performance we used the NAS Parallel Benchmarks. The results show that the performance of the RADIC architecture depends on the communication pattern of the running parallel application. Furthermore, our implementation demonstrates that the RADIC architecture can be implemented over an existing message passing library.
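For reference, a synthetic ping(-pong) test of the kind mentioned above can be written in a few lines. The sketch below uses mpi4py as a stand-in for the C-level Open MPI interface used in the thesis; the message size and iteration count are arbitrary choices.

```python
# Run with, e.g.: mpirun -n 2 python ping.py
from mpi4py import MPI
import time

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
payload = bytearray(1 << 10)      # 1 KiB message (arbitrary size)
iterations = 1000

comm.Barrier()
start = time.perf_counter()
for _ in range(iterations):
    if rank == 0:
        comm.Send(payload, dest=1, tag=0)     # ping
        comm.Recv(payload, source=1, tag=0)   # pong
    elif rank == 1:
        comm.Recv(payload, source=0, tag=0)
        comm.Send(payload, dest=0, tag=0)
elapsed = time.perf_counter() - start

if rank == 0:
    print(f"average round-trip time: {elapsed / iterations * 1e6:.1f} us")
```

Running such a test with and without the fault tolerance layer gives a first estimate of the overhead introduced by message logging and protection.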
Abstract:
In order to improve the water self-sufficiency of the Sakya Tashi Ling Buddhist monastery in the Garraf, an assessment is made of the state of the water resources of this system, as well as of its uses and points of consumption. The assessment was carried out by integrating and using environmental, hydrological and architectural parameters. The estimation of water inputs and consumption, together with the calculations performed, allowed the current state of the system to be diagnosed. By drawing up an inventory of the equipment and devices installed at the water consumption points, shortcomings in water efficiency were detected, such as the scarce deployment of water-saving devices and the lack of rainwater harvesting. The diagnosis of these shortcomings guided the improvement proposals applicable to the system, which focus mainly on increasing water savings through the installation of water-saving devices and on harvesting rainwater by means of a collection, storage and distribution network.
Abstract:
As chip implementation technologies evolve to obtain more performance, the probability of transient faults increases. Because this probability grows and on-chip solutions are expensive or tend to degrade processor performance, there is a growing effort to deal with transient faults at higher levels, such as the operating system or even the application level. Most of these efforts try to avoid silent data corruption by using hardware-, software- and hybrid-based techniques that add redundancy in order to detect the errors generated by transient faults. This work presents our proposal to improve the robustness of applications through source-code transformations that add redundancy, while taking into account the trade-off between the improved robustness and the overhead generated by the added redundancy.
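As a toy illustration of redundancy added at the source level, the sketch below executes a computation twice on duplicated data and compares the results to detect a silent data corruption. Full duplication with comparison is a generic example of such a transformation, not necessarily the exact one proposed in this work.

```python
class SilentDataCorruption(Exception):
    """Raised when redundant computations disagree."""

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_redundant(a, b):
    # Source-level transformation: duplicate the inputs and the computation,
    # then compare the two results before using them.
    a_copy, b_copy = list(a), list(b)
    r1 = dot(a, b)
    r2 = dot(a_copy, b_copy)
    if r1 != r2:                      # mismatch -> a transient fault corrupted data
        raise SilentDataCorruption(f"{r1} != {r2}")
    return r1

print(dot_redundant([1, 2, 3], [4, 5, 6]))   # 32
```

The duplicated work is exactly the overhead the abstract refers to, which is why the choice of what to duplicate must balance robustness against performance.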
Abstract:
Process supervision is the activity of monitoring the operation of a process in order to deduce the conditions needed to maintain normal behavior, including when faults are present. Depending on the number, distribution and heterogeneity of the variables, behavior situations, sub-processes, etc. of a process, human operators and engineers cannot easily handle the information, which leads to the need to automate supervision activities. Nevertheless, the difficulty of dealing with this information complicates the design and development of software applications. We present an approach called "integrated supervision systems", which proposes the coordination of multiple supervisors to supervise multiple sub-processes whose interactions make it possible to supervise the global process.
Abstract:
Fault location has been studied in depth for transmission lines because of its importance in power systems. Nowadays the problem of fault location in distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application developed in Matlab that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which suits the highly branched and non-homogeneous nature of distribution systems, and it handles single-phase, two-phase, two-phase-to-ground, and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
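The N-ary tree mentioned above maps naturally onto a feeder whose line sections branch at each node. A minimal sketch of such a structure, with hypothetical section attributes and an invented example feeder, is given below.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Section:
    """One line section of a distribution feeder (hypothetical attributes)."""
    name: str
    length_km: float
    children: List["Section"] = field(default_factory=list)   # downstream branches

    def walk(self, depth_km: float = 0.0):
        """Yield (section, distance of its far end from the substation)."""
        depth_km += self.length_km
        yield self, depth_km
        for child in self.children:
            yield from child.walk(depth_km)

# Illustrative feeder: a main line with two laterals.
feeder = Section("main-1", 1.2, [
    Section("lateral-A", 0.6),
    Section("main-2", 0.8, [Section("lateral-B", 0.4)]),
])

for section, dist in feeder.walk():
    print(f"{section.name}: end at {dist:.1f} km from the substation")
```

A fault-location algorithm can traverse such a tree, evaluating the candidate fault distance on each branch, which is why the structure suits highly branched, non-homogeneous networks.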
Abstract:
This paper focuses on the problem of locating single-phase faults in mixed distribution systems with overhead lines and underground cables, using voltage and current measurements at the sending end and the sequence model of the network. Since calculating the series impedance of underground cables is not as simple as it is for overhead lines, the paper proposes a methodology to estimate the zero-sequence impedance of underground cables from previous single-phase faults that occurred in the system, in which an electric arc appeared at the fault location. For this reason, the signal is first pre-processed to eliminate its voltage peaks so that the analysis can be performed on a signal as close to a sine wave as possible.
Abstract:
A model-based approach for fault diagnosis is proposed in which fault detection is based on checking the consistency of the Analytical Redundancy Relations (ARRs) using an interval tool. The tool accounts for uncertainty in the parameters and the measurements by means of intervals. Faults are explicitly included in the model, which allows additional information to be exploited. This information is obtained from partial derivatives computed from the ARRs; the signs of the residuals are used to prune the candidate space when performing the fault diagnosis task. The method is illustrated with a two-tank example, in which these aspects are shown to have an impact on diagnosis and fault discrimination, since the proposed method goes beyond structural methods.
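The consistency test above amounts to evaluating each ARR residual with interval arithmetic and checking whether zero lies inside the resulting interval. A minimal sketch of that check, using an invented single mass-balance residual rather than the paper's two-tank model, is shown below.

```python
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float
    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)
    def contains_zero(self) -> bool:
        return self.lo <= 0.0 <= self.hi

# Toy residual r = q_in - q_out (steady-state mass balance, invented example).
q_in = Interval(0.98, 1.02)     # measured inflow with its uncertainty
q_out = Interval(1.10, 1.15)    # measured outflow with its uncertainty

residual = q_in - q_out
if residual.contains_zero():
    print("ARR consistent: no fault detected")
else:
    print("ARR violated: fault detected", residual)

# The sign of the residual interval (entirely negative here) can then be used
# to discard fault candidates whose modeled effect would have the opposite sign.
```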
Abstract:
We report a new empirical density functional that is constructed on the basis of the performance of OPBE and PBE for spin states and SN2 reaction barriers, and of how these are affected by different regions of the reduced gradient expansion. In a previous study [Swart, Solà, and Bickelhaupt, J. Comput. Methods Sci. Eng. 9, 69 (2009)] we reported how, by switching between OPBE and PBE, one could obtain both the good performance of OPBE for spin states and reaction barriers and that of PBE for weak interactions within one and the same (SSB-sw) functional. Here we fine-tune this functional and include a portion of the KT functional and Grimme's dispersion correction to account for π–π stacking. Our new SSB-D functional is found to be a clear improvement and performs very well for biological applications (hydrogen bonding, π–π stacking, spin-state splittings, accuracy of geometries, reaction barriers).