196 results for NEURAL-TUBE DEFECTS
Abstract:
Based on the embedded atom method (EAM) and molecular dynamics (MD) simulations, the deformation behaviour of Cu nanowires containing different single defects under dynamic compression has been studied. The mechanical behaviour of the perfect nanowire is examined first: the critical stress decreases as the nanowire length increases, in good agreement with modified Euler theory. We then consider how different defects affect the buckling behaviour. Every defect considered produces an obvious decrease in the critical stress, with the largest decrease found in the nanowire containing a vertical surface defect; surface defects are found to exert a larger influence than internal defects. The buckling duration is shortened by every defect except the horizontal surface defect, which also produces the largest deflection. Different deflection patterns are observed for the differently defected nanowires: surface defects lead to deflection in one direction only, whereas internal defects give rise to more complex deflection behaviour.
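As a rough illustration of the length dependence described above (not taken from the paper), the classical Euler buckling formula can be evaluated for a square-cross-section column; the modulus, cross-section size, and end-condition factor below are assumed example values, not the paper's parameters.

```python
import numpy as np

# Illustrative sketch, not from the paper: classical Euler buckling of a
# square-cross-section column, showing why the critical stress falls as the
# wire gets longer. All parameter values are assumed examples.

E = 110e9        # assumed Young's modulus of Cu, Pa
a = 2e-9         # assumed square cross-section side, m
K = 0.5          # assumed effective-length factor (both ends clamped)

A = a ** 2            # cross-sectional area
I = a ** 4 / 12.0     # second moment of area of a square section

for L in np.linspace(10e-9, 50e-9, 5):          # wire length, m
    P_cr = np.pi ** 2 * E * I / (K * L) ** 2    # Euler critical load
    sigma_cr = P_cr / A                         # critical stress
    print(f"L = {L * 1e9:4.1f} nm  ->  sigma_cr = {sigma_cr / 1e9:6.2f} GPa")
```

The 1/L² scaling of the critical stress is what produces the decrease with increasing nanowire length that the abstract reports.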
Abstract:
Molecular dynamics (MD) simulations have been carried out to investigate the effect of defects on the mechanical properties of copper nanowires with different crystallographic orientations under tensile deformation. Three crystallographic orientations have been considered, and the deformation mechanism is discussed in detail. The Young's modulus is found to be insensitive to the defect, regardless of the nanowire's crystallographic orientation. However, the yield strength and yield strain show a large decrease because of the defect. The defects act as dislocation sources: slips or stacking faults are first generated around the defect locations. The necking locations are also affected by the defects. A surface defect strongly influences the plastic deformation of the <001>/{110} and <110> oriented nanowires, while its influence on the <111> nanowire is relatively small.
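For context on the quantities discussed above (not the paper's own analysis), a minimal sketch of extracting the Young's modulus and yield point from a stress-strain curve is shown below; the curve is synthetic stand-in data rather than MD output, and the elastic window is an assumed choice.

```python
import numpy as np

# Illustrative sketch, not the paper's analysis: estimate the Young's modulus
# as the slope of the initial linear part of a stress-strain curve and read
# off the yield point. The curve is synthetic stand-in data, not MD output.

strain = np.linspace(0.0, 0.10, 101)
stress = np.where(strain < 0.06, 70.0 * strain, 70.0 * 0.06)   # GPa, toy curve

elastic = strain < 0.02                                  # assumed elastic window
young_modulus = np.polyfit(strain[elastic], stress[elastic], 1)[0]

yield_idx = int(np.argmax(stress))                       # first stress maximum
print(f"Young's modulus ~ {young_modulus:.1f} GPa")
print(f"yield strain ~ {strain[yield_idx]:.3f}, "
      f"yield strength ~ {stress[yield_idx]:.2f} GPa")
```

Because the modulus comes from the initial slope while the yield point comes from where the curve departs from that slope, a defect can leave the former unchanged while lowering the latter, which is the pattern the abstract reports.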
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
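The bound quoted in the abstract can be written out in symbols as follows; this is only a sketch of the statement, where \hat{\epsilon}_m stands for the training-set error estimate (related to the squared error), and constants and the suppressed log A and log m factors are omitted.

```latex
% Sketch of the bound as stated in the abstract: A bounds the per-unit sum of
% weight magnitudes, n is the input dimension, m the number of training patterns.
\Pr[\mathrm{misclassification}]
    \;\le\; \hat{\epsilon}_m + O\!\left( A^{3}\sqrt{\tfrac{\log n}{m}} \right)
```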
Abstract:
This important work describes recent theoretical advances in the study of artificial neural networks. It explores probabilistic models of supervised learning problems, and addresses the key statistical and computational questions. Chapters survey research on pattern classification with binary-output networks, including a discussion of the relevance of the Vapnik-Chervonenkis dimension, and of estimates of the dimension for several neural network models. In addition, Anthony and Bartlett develop a model of classification by real-output networks, and demonstrate the usefulness of classification with a "large margin." The authors explain the role of scale-sensitive versions of the Vapnik-Chervonenkis dimension in large margin classification, and in real prediction. Key chapters also discuss the computational complexity of neural network learning, describing a variety of hardness results, and outlining two efficient, constructive learning algorithms. The book is self-contained and accessible to researchers and graduate students in computer science, engineering, and mathematics.