56 results for deep learning, convolutional neural network, computer aided detection, mammography
Abstract:
This paper describes a formal component language used to support automated component-based program development. The components, referred to as templates, are machine-processable, meaning that appropriate tool support, such as retrieval support, can be developed. The templates are highly adaptable, meaning that they can be applied to a wide range of problems. Some of the main features of the language are described, including: higher-order parameters; state variable declarations; specification statements and conditionals; applicability conditions and theories; meta-level placeholders; and abstract data structures.
Abstract:
We introduce a novel way of measuring the entropy of a set of values undergoing changes. Such a measure becomes useful when analyzing the temporal development of an algorithm designed to numerically update a collection of values such as artificial neural network weights undergoing adjustments during learning. We measure the entropy as a function of the phase-space of the values, i.e. their magnitude and velocity of change, using a method based on the abstract measure of entropy introduced by the philosopher Rudolf Carnap. By constructing a time-dynamic two-dimensional Voronoi diagram using Voronoi cell generators with coordinates of value- and value-velocity (change of magnitude), the entropy becomes a function of the cell areas. We term this measure teleonomic entropy since it can be used to describe changes in any end-directed (teleonomic) system. The usefulness of the method is illustrated when comparing the different approaches of two search algorithms, a learning artificial neural network and a population of discovering agents. (C) 2004 Elsevier Inc. All rights reserved.
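As a rough illustration of the phase-space construction described above, the sketch below builds a Voronoi diagram over (value, value-velocity) points and computes an entropy from the normalized cell areas. The Shannon-style formula, the toy weight vectors, and the choice to skip unbounded cells are assumptions standing in for the paper's Carnap-based measure, not the authors' implementation.

```python
# Minimal sketch (not the paper's implementation) of a phase-space entropy:
# points are (value, value-velocity) pairs, a Voronoi diagram is built over
# them, and the entropy is computed from the normalized finite cell areas.
import numpy as np
from scipy.spatial import Voronoi, ConvexHull

def phase_space_entropy(values, velocities):
    """Entropy over finite Voronoi cell areas of (value, velocity) points."""
    points = np.column_stack([values, velocities])
    vor = Voronoi(points)
    areas = []
    for region_idx in vor.point_region:
        region = vor.regions[region_idx]
        if len(region) == 0 or -1 in region:      # skip unbounded cells
            continue
        # Voronoi cells are convex, so the hull "volume" is the cell area.
        areas.append(ConvexHull(vor.vertices[region]).volume)
    p = np.array(areas) / np.sum(areas)           # normalize to a distribution
    return -np.sum(p * np.log(p))                 # Shannon-style form (assumption)

# Toy usage: weights at two consecutive steps of some update rule.
rng = np.random.default_rng(0)
w_prev = rng.normal(size=50)
w_curr = w_prev - 0.1 * rng.normal(size=50)       # stand-in for a learning step
print(phase_space_entropy(w_curr, w_curr - w_prev))
```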
Abstract:
This paper is concerned with the use of scientific visualization methods for the analysis of feedforward neural networks (NNs). Inevitably, the kinds of data associated with the design and implementation of neural networks are of very high dimensionality, presenting a major challenge for visualization. A method is described using the well-known statistical technique of principal component analysis (PCA). This is found to be an effective and useful method of visualizing the learning trajectories of many learning algorithms such as back-propagation and can also be used to provide insight into the learning process and the nature of the error surface.
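A minimal sketch of the PCA-trajectory idea, assuming the procedure is to record the full weight vector at every training step, fit PCA to the stack of snapshots, and plot the projection onto the first two components. The quadratic toy loss and plain gradient-descent loop below are stand-ins for an actual back-propagation run.

```python
# Project a recorded sequence of weight vectors onto its first two principal
# components and plot the resulting 2D learning trajectory.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
w = rng.normal(size=20)                  # stand-in for a network's weight vector
A = rng.normal(size=(20, 20))
A = A.T @ A / 20                         # positive semi-definite quadratic "loss"

trajectory = []
for _ in range(200):                     # record the weights at every step
    trajectory.append(w.copy())
    w -= 0.05 * (A @ w)                  # gradient step on 0.5 * w^T A w

coords = PCA(n_components=2).fit_transform(np.array(trajectory))
plt.plot(coords[:, 0], coords[:, 1], marker=".")
plt.xlabel("PC 1")
plt.ylabel("PC 2")
plt.title("Learning trajectory in the first two principal components")
plt.show()
```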
Abstract:
The long short-term memory (LSTM) is not the only neural network that learns a context-sensitive language. Second-order sequential cascaded networks (SCNs) can also induce, from a finite fragment of a context-sensitive language, the means to process strings outside the training set. The dynamical behavior of the SCN is qualitatively distinct from that observed in LSTM networks. Differences in performance and dynamics are discussed.
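For readers unfamiliar with second-order networks, the sketch below shows the multiplicative state update characteristic of sequential cascaded networks, in which the effective input-to-state weights depend on the previous state. The layer sizes, sigmoid nonlinearity, random weight tensor, and one-hot alphabet encoding are illustrative assumptions, not the trained networks studied in the paper.

```python
# Minimal second-order recurrent state update:
# s_i(t) = sigmoid( sum_{j,k} W[i, j, k] * s_j(t-1) * x_k(t) )
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_state, n_input = 4, 3
W = rng.normal(scale=0.5, size=(n_state, n_state, n_input))   # second-order tensor

def step(state, x):
    """One multiplicative (second-order) state update."""
    return sigmoid(np.einsum("ijk,j,k->i", W, state, x))

# Drive the network with a one-hot-encoded string over the alphabet {a, b, c}.
symbols = {"a": np.eye(3)[0], "b": np.eye(3)[1], "c": np.eye(3)[2]}
state = np.full(n_state, 0.5)
for ch in "aabbcc":                       # a fragment of a^n b^n c^n
    state = step(state, symbols[ch])
print(state)
```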
Abstract:
The article describes an attempt to improve student learning outcomes in a computer networks course by making lectures more active learning experiences. Quick quizzes, group and individual exercises, the review of student questions, and multiple breaks were incorporated into the weekly three-hour lectures. Student responses to the modified lectures were overwhelmingly positive: over 85% of respondents agreed that the lectures aided understanding, and large majorities of the respondents found the individual activities useful to their learning. Although student examination performance improved over the previous year, performance on an examination question designed to assess deep understanding remained unchanged.
Abstract:
Recent work by Siegelmann has shown that the computational power of recurrent neural networks matches that of Turing Machines. One important implication is that complex language classes (infinite languages with embedded clauses) can be represented in neural networks. Proofs are based on a fractal encoding of states to simulate the memory and operations of stacks. In the present work, it is shown that similar stack-like dynamics can be learned in recurrent neural networks from simple sequence prediction tasks. Two main types of network solutions are found and described qualitatively as dynamical systems: damped oscillation and entangled spiraling around fixed points. The potential and limitations of each solution type are established in terms of generalization on two different context-free languages. Both solution types constitute novel stack implementations - generally in line with Siegelmann's theoretical work - which supply insights into how embedded structures of languages can be handled in analog hardware.
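To make the fractal stack idea concrete, here is a minimal sketch of one standard interval-based encoding, assuming a binary stack alphabet: the whole stack is packed into a single number in [0, 1), pushing contracts the value into a sub-interval, and popping expands it back. It illustrates the general principle behind such encodings rather than the specific dynamics learned by the networks in the paper.

```python
# Fractal (interval) encoding of a binary stack in a single number s in [0, 1).
def push(s, bit):
    """Push a bit (0 or 1): the new value lands in a distinct sub-interval."""
    return (s + bit) / 2.0

def pop(s):
    """Recover the most recently pushed bit and the remaining stack encoding."""
    bit = int(s >= 0.5)
    return bit, 2.0 * s - bit

s = 0.0
for b in [1, 0, 1]:        # push 1, then 0, then 1
    s = push(s, b)

bits = []
for _ in range(3):         # pop everything back off in LIFO order
    b, s = pop(s)
    bits.append(b)
print(bits)                # -> [1, 0, 1]
```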