72 results for Fault tolerant computing
Abstract:
Deformation twins and stacking faults have been observed in nanocrystalline Ni, for the first time under uniaxial tensile test conditions. These partial-dislocation-mediated deformation mechanisms are enhanced at cryogenic test temperatures. Our observations highlight the effects of deformation conditions, temperature in particular, on deformation mechanisms in nanograins.
Abstract:
Generalized planar fault energy (GPFE) curves have been used to predict partial-dislocation-mediated processes in nanocrystalline materials, but their validity has not been evaluated experimentally. We report experimental observations of a large quantity of both stacking faults and twins in nc Ni deformed at relatively low stresses in a tensile test. The experimental findings indicate that the GPFE curves can reasonably explain the formation of stacking faults, but they alone were not able to adequately predict the propensity of deformation twinning.
Abstract:
The LURR (Load/Unload Response Ratio) theory is a relatively new approach to earthquake prediction that has achieved good results within mainland China and in regions of America, Japan, and Australia. However, expanding the prediction region requires a finer longitude-latitude grid and a longer time period, so the required computation grows rapidly and the data volume reaches the order of gigabytes, which is very difficult to handle on a single CPU. In this paper, we introduce a new method to solve this problem: adopting domain decomposition and parallelizing with MPI, we developed a new parallel spatio-temporal scanning program.
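A minimal sketch of the domain-decomposition idea described above, written with mpi4py. The grid resolution, the per-cell computation, and the function name compute_lurr are illustrative assumptions, not the authors' program.

```python
# Sketch: 1-D domain decomposition of a spatio-temporal scanning grid over MPI ranks.
# compute_lurr and the grid bounds below are placeholders (assumptions).
from mpi4py import MPI
import numpy as np

def compute_lurr(lon, lat):
    """Placeholder for the per-cell LURR computation (assumed interface)."""
    return np.hypot(lon, lat)  # dummy value standing in for the real calculation

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Scanning grid (illustrative resolution and extent).
lons = np.arange(70.0, 135.0, 0.5)
lats = np.arange(15.0, 55.0, 0.5)

# Each rank takes a contiguous block of longitudes.
my_lons = np.array_split(lons, size)[rank]

# Each rank scans only its own sub-domain.
local = np.array(
    [[compute_lurr(lon, lat) for lat in lats] for lon in my_lons]
).reshape(len(my_lons), len(lats))

# Root gathers the partial results and reassembles the full map.
pieces = comm.gather(local, root=0)
if rank == 0:
    lurr_map = np.vstack(pieces)
    print(lurr_map.shape)
```

Because each process holds only its own block of cells, memory use and compute time per process shrink roughly linearly with the number of ranks.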
Abstract:
The stress release model, a stochastic version of the elastic rebound theory, is applied to the large events from four synthetic earthquake catalogs generated by models with various levels of disorder in the distribution of fault zone strength (Ben-Zion, 1996). They include models with uniform properties (U), a Parkfield-type asperity (A), fractal brittle properties (F), and multi-size-scale heterogeneities (M). The results show that the degree of regularity or predictability in the assumed fault properties, based on both the Akaike information criterion and simulations, follows the order U, F, A, and M, which is in good agreement with that obtained by pattern recognition techniques applied to the full set of synthetic data. Data simulated from the best-fitting stress release models reproduce, both visually and in distributional terms, the main features of the original catalogs. The differences in character and quality of prediction between the four cases are shown to depend on two main aspects: the parameter controlling the sensitivity to departures from the mean stress level, and the frequency-magnitude distribution, which differs substantially between the four cases. In particular, it is shown that the predictability of the data is strongly affected by the form of the frequency-magnitude distribution, being greatly reduced if a pure Gutenberg-Richter form is assumed to hold out to high magnitudes.
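For readers unfamiliar with the model, a minimal simulation sketch of a stress release process via thinning, assuming the usual exponential conditional intensity lambda(t) = exp{a + b[X0 + rho*t - S(t)]}, where S(t) is the stress released by past events. All parameter values and the magnitude-to-stress-drop scaling are illustrative assumptions, not the fitted models from the study.

```python
# Sketch: simulate a stress release point process by Ogata-style thinning.
import numpy as np

rng = np.random.default_rng(0)
a, b, X0, rho = -2.0, 0.05, 0.0, 1.0   # illustrative parameters
T, lookahead = 200.0, 1.0               # horizon and thinning window

def intensity(t, released):
    return np.exp(a + b * (X0 + rho * t - released))

t, released, events = 0.0, 0.0, []
while t < T:
    # Between events the stress (and hence the rate) only grows, so the rate
    # at the end of a short lookahead window is a valid local upper bound.
    lam_max = intensity(t + lookahead, released)
    w = rng.exponential(1.0 / lam_max)
    if w > lookahead:
        t += lookahead                          # no event in this window
        continue
    t += w
    if rng.random() < intensity(t, released) / lam_max:      # thinning acceptance
        mag = 4.0 + rng.exponential(1.0 / np.log(10))        # Gutenberg-Richter-like tail (b ~ 1)
        released += 10 ** (0.75 * (mag - 4.0))               # assumed stress-drop scaling
        events.append((t, mag))

print(f"simulated {len(events)} events")
```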
Abstract:
Negabinary (base -2) is a positional number system. A complete set of negabinary arithmetic operations is presented, including the basic addition/subtraction logic, the two-step carry-free addition/subtraction algorithm based on negabinary signed-digit (NSD) representation, parallel multiplication, and the fast conversion from NSD to normal negabinary in the carry-look-ahead mode. All the arithmetic operations can be performed with binary logic. By programming the binary reference bits, addition and subtraction can be realized in parallel with the same binary logic functions. This offers a technique to perform space-variant arithmetic-logic functions with space-invariant instructions. Multiplication can be performed in a tree structure and is simpler than its modified signed-digit (MSD) counterpart. The parallelism of the algorithms is well suited to optical implementation. Correspondingly, a general-purpose optical logic system using an electron-trapping device is suggested. Various complex logic functions can be performed by programming the illumination of the data arrays without additional temporal latency for intermediate results. The system can be compact. These properties make the proposed negabinary arithmetic-logic system a strong candidate for future applications in digital optical computing with the development of smart pixel arrays. (C) 1999 Society of Photo-Optical Instrumentation Engineers. [S0091-3286(99)00803-X].
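A small Python sketch of plain negabinary (base -2) conversion, for readers unfamiliar with the representation. It illustrates only the number system itself, not the paper's carry-free NSD or optical algorithms.

```python
def to_negabinary(n):
    """Convert a signed integer to its base -2 (negabinary) digit string.
    Every integer, positive or negative, gets an unsigned string of 0s and 1s."""
    if n == 0:
        return "0"
    digits = []
    while n != 0:
        n, r = divmod(n, -2)
        if r < 0:          # force the remainder into {0, 1}
            r += 2
            n += 1
        digits.append(str(r))
    return "".join(reversed(digits))

def from_negabinary(s):
    """Evaluate a negabinary digit string back to a signed integer (Horner's rule)."""
    value = 0
    for d in s:
        value = value * -2 + int(d)
    return value

# Quick round-trip check: note that no sign bit is ever needed.
for n in (-7, -2, 0, 3, 6, 13):
    nb = to_negabinary(n)
    assert from_negabinary(nb) == n
    print(f"{n:>3} -> {nb}")
```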
Abstract:
We introduce a four-pass laser pulse compressor design based on two grating apertures with two gratings per aperture that is tolerant to some alignment errors and, importantly, to grating-to-grating period variations. Each half-beam samples each grating in a diamond-shaped compressor that is symmetric about a central bisecting plane. For any given grating, the two half-beams impinge on opposite sides of its surface normal. It is shown that the two split beams have no pointing difference from paired gratings with different periods. Furthermore, no phase shift between half-beams is incurred as long as the planes containing a grating line and the surface normal for each grating of the pair are parallel. For grating pairs satisfying this condition, the grating surfaces need not lie in the same plane, as changes in the gap between the two can compensate to bring the beams back into phase. © 2008 Optical Society of America.
Abstract:
Photosynthetic activity during rehydration at four temperatures (5, 15, 25, 35 degrees C) was studied in a terrestrial, highly drought-tolerant cyanobacterium, Nostoc flagelliforme. At all temperatures, the optimum quantum yield Fv/Fm increased rapidly within 1 h and then increased slowly during the rest of rehydration. The increase in Fv/Fm at 25 and 35 degrees C was larger than that at 5 and 15 degrees C. In addition, the changes of initial fluorescence (F0) and variable fluorescence (Fv) were more significant at 25 and 35 degrees C than at 5 and 15 degrees C. Chlorophyll a content increased with increasing temperature during the course of rehydration, and this was more pronounced at 25 and 35 degrees C. The photosynthetic rates at 25 and 35 degrees C were higher than those at 5 and 15 degrees C. Induction of chlorophyll fluorescence with sustained rewetting at 5 and 15 degrees C had two phases of transformation, whereas at 25 and 35 degrees C it had a third peak kinetic phase and showed the typical chlorophyll fluorescence steps after rewetting for 24 h, representing a normal physiological state. A comparison of the chlorophyll fluorescence parameters, chlorophyll a content, and the chlorophyll fluorescence induction led to the conclusion that N. flagelliforme had a more rapid and complete recovery at 25 and 35 degrees C than at 5 and 15 degrees C, although it could recover its photosynthetic activity at any of the four temperatures. (c) 2007 Published by Elsevier Ltd.
Abstract:
The ribosomal RNA molecule is an ideal model for evaluating the stability of a gene product under desiccation stress. We isolated 8 Nostoc strains that can withstand desiccation in their habitats and sequenced their 16S rRNA genes. The stabilities of 16S rRNA secondary structures, indicated by the free energy change of folding, were compared among Nostoc and other related species. The results suggested that the 16S rRNA secondary structures of the desiccation-tolerant Nostoc strains were more stable than those of planktonic Nostocaceae species. The stabilizing mutations were divided into two categories: (1) those causing GC to replace other types of base pairs in stems and (2) those causing extension of stems. By mapping stabilizing mutations onto the Nostoc phylogenetic tree based on the 16S rRNA gene, it was shown that most of the stabilizing mutations had evolved during adaptive radiation among Nostoc spp. The evolution of 16S rRNA along the Nostoc lineage is suggested to be selectively advantageous under desiccation stress.
Abstract:
High dimensional biomimetic informatics (HDBI) is a novel theory of informatics developed in recent years. Its primary objects of research are points in high dimensional Euclidean space, and its exploratory and solution procedures are based on simple geometric computations. However, the mathematical description and computation of geometric objects are inconvenient because of the nature of geometric representation. As the dimension grows and the variety of geometric objects increases, these descriptions become even more complicated and prolix, especially in high dimensional space. In this paper, we give some definitions and mathematical symbols, and systematically discuss symbolic computing methods in high dimensional space from the viewpoint of HDBI. With these methods, some multi-variable problems in high dimensional space can be solved easily. Three detailed algorithms are presented as examples to show the efficiency of our symbolic computing methods: an algorithm for determining the center of a circle given three points on the circle, an algorithm for judging whether two points lie on the same side of a hyperplane, and an algorithm for judging whether a point lies in a simplex constructed from points in high dimensional space. Two experiments, in blurred image restoration and uneven lighting image correction, are presented for these algorithms to show their good performance.
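For reference, a conventional-coordinate sketch of two of the predicates mentioned above: the same-side-of-a-hyperplane test and the point-in-simplex test. This uses ordinary linear algebra rather than the paper's HDBI symbolic formulation; the tolerance and the example data are illustrative.

```python
import numpy as np

def same_side(p, q, normal, offset):
    """True if points p and q lie strictly on the same side of the hyperplane
    {x : normal . x = offset} in R^n."""
    return (np.dot(normal, p) - offset) * (np.dot(normal, q) - offset) > 0

def in_simplex(p, vertices, tol=1e-12):
    """True if p lies inside the simplex spanned by n+1 vertices in R^n.
    Solves for barycentric coordinates and checks they are all non-negative."""
    V = np.asarray(vertices, dtype=float)      # shape (n+1, n)
    A = np.vstack([V.T, np.ones(len(V))])      # append the sum-to-one constraint
    rhs = np.append(np.asarray(p, dtype=float), 1.0)
    bary = np.linalg.solve(A, rhs)
    return bool(np.all(bary >= -tol))

# 3-D example: unit tetrahedron.
tet = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(in_simplex((0.1, 0.2, 0.3), tet))                            # True
print(in_simplex((0.6, 0.6, 0.6), tet))                            # False
print(same_side((1, 1, 1), (2, 2, 2), np.array([1, 0, 0]), 0.5))   # True
```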
Abstract:
Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.
Abstract:
The Double Synapse Weighted Neuron (DSWN) is a general-purpose neuron model that can be configured as a Hyper-Sausage Neuron (HSN). After introducing the hardware design of the DSWN synapse, this paper proposes a DSWN-based special-purpose neural computing device, CASSANN-IIspr. As an application, a rigid-body recognition system was developed on CASSANN-IIspr, which achieved better performance than the RIBF-SVMs system.