33 results for ONE-LAYER MODEL
Abstract:
This paper presents a practical, destruction-free parameter extraction methodology for a new physics-based buffer-layer Integrated Gate Commutated Thyristor (IGCT) model intended for circuit simulators. Most of the key parameters needed for this model can be extracted from a single clamped inductive-load switching experiment. To validate the extraction method, a clamped inductive-load switching experiment was performed, and corresponding simulations were carried out using the IGCT model with parameters extracted through the presented methodology. Good agreement was obtained between the experimental data and the simulation results.
Abstract:
A parallel processing network derived from Kanerva's associative memory theory (Kanerva, 1984) is shown to be able to train rapidly on connected speech data and recognize further speech data with a label error rate of 0.68%. This modified Kanerva model can be trained substantially faster than other networks with comparable pattern-discrimination properties. Kanerva presented his theory of a self-propagating search in 1984 and showed theoretically that large-scale versions of his model would have powerful pattern-matching properties. This paper describes how the design of the modified Kanerva model is derived from Kanerva's original theory. Several designs are tested to discover which form can be implemented fastest while still maintaining versatile recognition performance. A method is developed to deal with the time-varying nature of the speech signal by recognizing static patterns together with a fixed quantity of contextual information. In order to recognize speech features in different contexts, a network must be able to model disjoint pattern classes. This type of modelling cannot be performed by a single layer of links. Network research was once held back by the inability of single-layer networks to solve this sort of problem and by the lack of a training algorithm for multi-layer networks. Rumelhart, Hinton & Williams (1985) provided one solution by demonstrating the "back propagation" training algorithm for multi-layer networks. An alternative solution is used in the modified Kanerva model: a non-linear fixed transformation maps the pattern space into a space of higher dimensionality in which the speech features are linearly separable, and a single-layer network is then used to perform the recognition. The advantage of this solution over the multi-layer approach lies in the greater power and speed of the single-layer training algorithm. © 1989.
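The abstract above names the key mechanism: a fixed non-linear expansion into a higher-dimensional space where the classes become linearly separable, followed by a trainable single-layer network. The following is a minimal sketch of that idea, assuming Kanerva-style random binary "hard location" addresses activated by a Hamming-distance threshold and a simple delta-rule output layer; all dimensions, thresholds, and the learning rule are illustrative assumptions, not details taken from the paper.

```python
# Sketch of the modified-Kanerva idea: fixed non-linear expansion, then a
# single trainable layer. All sizes and the update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 128        # bits in one speech frame plus fixed context (assumed)
n_locations = 4096    # number of fixed "hard locations" (assumed)
n_classes = 10        # number of phonetic labels (assumed)
hamming_radius = 52   # activation threshold on Hamming distance (assumed)

# Fixed random binary addresses: this is the non-linear transformation that
# maps input patterns into a higher-dimensional space where the classes are
# (approximately) linearly separable.
addresses = rng.integers(0, 2, size=(n_locations, n_inputs))

def expand(x_bits):
    """Map a binary input pattern to a sparse high-dimensional activation vector."""
    distances = np.sum(addresses != x_bits, axis=1)      # Hamming distances
    return (distances <= hamming_radius).astype(float)   # active locations

# Single-layer network on top of the expanded representation.
weights = np.zeros((n_classes, n_locations))

def train_step(x_bits, label, lr=0.1):
    """Delta-rule update -- only this single output layer is trained."""
    h = expand(x_bits)
    scores = weights @ h
    target = np.zeros(n_classes)
    target[label] = 1.0
    weights[:] += lr * np.outer(target - scores, h)

def predict(x_bits):
    return int(np.argmax(weights @ expand(x_bits)))

# Tiny usage example with random data (illustrative only).
x = rng.integers(0, 2, size=n_inputs)
train_step(x, label=3)
print(predict(x))
```

Because only the output layer is trained, each update is a single outer-product step over the expanded vector, which is the source of the speed advantage the abstract attributes to the single-layer training algorithm.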
Abstract:
A model of graphite that is easy to comprehend and simple to implement for the simulation of scanning tunneling microscopy (STM) images is described. The model simulates the atomic density of the graphite layers, which in turn correlates with the local density of states. The mechanism and construction of the model are explained with all the necessary details, which have not been explicitly reported before. The model is applied to the investigation of rippling fringes that have been experimentally observed on a superlattice, and it is found that the rippling fringes are not related to the superlattice itself. A superlattice with an abnormal interaction between the topmost layers is simulated, and the result affirms the validity of the moiré rotation pattern assumption. The "odd-even" transition along the atomic rows of a superlattice is also simulated, and the result shows that when there is more than one rotated layer at the top, the "odd-even" transition is not manifest. © 2005 The Japan Society of Applied Physics.
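As a rough illustration of the kind of model this abstract describes, the sketch below sums localized atomic-density contributions from stacked graphene layers, with the top layer rotated by a small angle so that a moiré superlattice appears in the resulting image; the summed density stands in for the local density of states that STM images. The Gaussian atomic profile, layer weights, patch size, and rotation angle are all illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of an atomic-density model of graphite for simulating STM
# images: each carbon atom contributes a localized density, the stacked layers
# are summed, and rotating the top layer produces a moiré superlattice.
# Profile shape, weights, patch size and angle are illustrative assumptions.
import numpy as np

A = 2.46  # in-plane graphite lattice constant, angstroms

def graphene_atoms(n=12):
    """Atom positions of a (2n x 2n)-cell patch of a single graphene layer."""
    a1 = np.array([A, 0.0])
    a2 = np.array([A / 2.0, A * np.sqrt(3.0) / 2.0])
    basis = [np.zeros(2), (a1 + a2) / 3.0]   # two atoms per unit cell
    pts = [i * a1 + j * a2 + b
           for i in range(-n, n)
           for j in range(-n, n)
           for b in basis]
    return np.array(pts)

def rotate(pts, angle_deg):
    """Rotate atom positions in the plane (used for the topmost layer(s))."""
    t = np.radians(angle_deg)
    r = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    return pts @ r.T

def density_image(layers, weights, grid=200, extent=15.0, sigma=0.5):
    """Sum Gaussian atomic densities of all layers on a square grid."""
    xs = np.linspace(-extent, extent, grid)
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros_like(X)
    for pts, w in zip(layers, weights):
        for x0, y0 in pts:
            img += w * np.exp(-((X - x0) ** 2 + (Y - y0) ** 2) / (2.0 * sigma ** 2))
    return img

# Two-layer example: unrotated bottom layer plus a top layer rotated by a few
# degrees; the rotation produces the moiré (superlattice) corrugation, and the
# top layer is weighted more strongly because STM is most sensitive to it.
bottom = graphene_atoms()
top = rotate(graphene_atoms(), angle_deg=3.0)
image = density_image([bottom, top], weights=[0.5, 1.0])
print(image.shape, image.max())
```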