9 results for Special Functions and Pathways

at Massachusetts Institute of Technology


Relevance: 100.00%

Abstract:

We had previously shown that regularization principles lead to approximation schemes, such as Radial Basis Functions, which are equivalent to networks with one layer of hidden units, called Regularization Networks. In this paper we show that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models, Breiman's hinge functions, and some forms of Projection Pursuit Regression. In the probabilistic interpretation of regularization, the different classes of basis functions correspond to different classes of prior probabilities on the approximating function spaces, and therefore to different types of smoothness assumptions. In the final part of the paper, we also show a relation between activation functions of the Gaussian and sigmoidal type.
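
A minimal sketch of the equivalence (my own illustration, not code from the paper, with a Gaussian kernel and hypothetical parameter values): fitting the regularization network reduces to solving (G + lambda*I)c = y, which is exactly a network with one layer of hidden Gaussian units, one centered on each example.

    # Sketch of a regularization network with a Gaussian radial basis kernel.
    import numpy as np

    def fit_regularization_network(X, y, sigma=1.0, lam=1e-3):
        # Gram matrix G[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))
        d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        G = np.exp(-d2 / (2 * sigma ** 2))
        # Regularized fit: one coefficient per hidden unit / example.
        return np.linalg.solve(G + lam * np.eye(len(X)), y)

    def predict(X_train, c, X_new, sigma=1.0):
        d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2)) @ c

    # Example: noisy samples of sin(x) on [0, 2*pi].
    rng = np.random.default_rng(0)
    X = np.linspace(0, 2 * np.pi, 40)[:, None]
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    c = fit_regularization_network(X, y)
    print(predict(X, c, np.array([[np.pi / 2]])))  # close to sin(pi/2) = 1

A larger lambda corresponds to a stronger smoothness prior, which is the probabilistic reading described above.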

Relevance: 100.00%

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
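
To make the contrast with strict interpolation concrete, here is a hedged GRBF sketch (the parameter choices and center-selection rule are mine, not the paper's): with far fewer centers than examples, the centers play the role of prototypes and the weights are an optimal least-squares combination.

    # GRBF sketch: n_centers << N, so weights come from least squares,
    # not an exact interpolation solve.
    import numpy as np

    def grbf_fit(X, y, n_centers=8, sigma=0.5, seed=0):
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), n_centers, replace=False)]  # prototypes
        d2 = np.sum((X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        Phi = np.exp(-d2 / (2 * sigma ** 2))            # N x n_centers design matrix
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # optimal combination
        return centers, w

    def grbf_predict(centers, w, X_new, sigma=0.5):
        d2 = np.sum((X_new[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2)) @ w

    X = np.linspace(-1, 1, 100)[:, None]
    y = np.abs(X[:, 0])                                 # target function
    centers, w = grbf_fit(X, y)
    print(grbf_predict(centers, w, np.array([[0.5]])))  # roughly 0.5

In the full GRBF scheme the centers themselves are also adjusted during learning; fixing them, as here, keeps the sketch short.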

Relevance: 100.00%

Abstract:

Template matching by means of cross-correlation is common practice in pattern recognition. However, its sensitivity to deformations of the pattern and the broad and unsharp peaks it produces are significant drawbacks. This paper reviews some results on how these shortcomings can be removed. Several techniques (Matched Spatial Filters, Synthetic Discriminant Functions, Principal Components Projections and Reconstruction Residuals) are reviewed and compared on a common task: locating eyes in a database of faces. New variants are also proposed and compared: least squares Discriminant Functions and the combined use of projections on eigenfunctions and the corresponding reconstruction residuals. Finally, approximation networks are introduced in an attempt to improve filter design by the introduction of nonlinearity.
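
For reference, a small sketch of the baseline being critiqued (my own code, assuming grayscale images as NumPy arrays): template matching by normalized cross-correlation, whose argmax locates the best match but whose peak is typically broad.

    # Normalized cross-correlation of a template against every image patch.
    import numpy as np

    def normalized_cross_correlation(image, template):
        th, tw = template.shape
        t = template - template.mean()
        t_norm = np.linalg.norm(t)
        H, W = image.shape
        out = np.zeros((H - th + 1, W - tw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                patch = image[i:i + th, j:j + tw]
                p = patch - patch.mean()
                denom = np.linalg.norm(p) * t_norm
                out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
        return out  # scores in [-1, 1]

    img = np.random.default_rng(0).random((32, 32))
    tpl = img[10:18, 12:20]                  # the template occurs at (10, 12)
    scores = normalized_cross_correlation(img, tpl)
    print(np.unravel_index(scores.argmax(), scores.shape))  # (10, 12)

Plotting the scores around the maximum shows the broad, unsharp peak that the reviewed techniques set out to sharpen.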

Relevance: 100.00%

Abstract:

We wish to design a diagnostic for a device from knowledge of its structure and function. The diagnostic should both cover the faults that can occur in the device and strive for specificity in its diagnosis when it detects a fault. A system is described that uses a simple model of hardware structure and function, representing the device in terms of its internal primitive functions and connections. The system designs a diagnostic in three steps. First, an extension of path sensitization is used to design a test for each of the connections in the device. Next, the resulting tests are improved by increasing their specificity. Finally, the tests are ordered so that each relies on the fewest possible connections. We describe an implementation of this system and show examples of the results for some simple devices.
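
A toy sketch of the first step (hypothetical and far simpler than the system described): model the device as primitive functions on named connections, and accept an input vector as a test for a connection if a stuck-at fault on that connection changes the observable output.

    # Detecting a stuck-at fault on one connection of a tiny gate-level device.
    def simulate(gates, inputs, stuck=None):
        # gates: (output_net, op, input_nets), listed in topological order;
        # stuck: optional (net, forced_value) fault to inject.
        values = dict(inputs)
        if stuck and stuck[0] in values:
            values[stuck[0]] = stuck[1]
        for out, op, ins in gates:
            a = [values[n] for n in ins]
            values[out] = (not a[0]) if op == "NOT" else {"AND": all, "OR": any}[op](a)
            if stuck and out == stuck[0]:
                values[out] = stuck[1]
        return values

    # Device: c = (a AND b) OR (NOT a)
    gates = [("inv", "NOT", ["a"]), ("ab", "AND", ["a", "b"]), ("c", "OR", ["ab", "inv"])]
    vec = {"a": True, "b": False}
    good = simulate(gates, vec)["c"]
    faulty = simulate(gates, vec, stuck=("ab", True))["c"]  # "ab" stuck-at-1
    print("detected" if good != faulty else "missed")      # detected

Path sensitization amounts to searching for such a vector systematically; the specificity and ordering steps then refine the resulting test set.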

Relevance: 100.00%

Abstract:

This thesis presents a theory of human-like reasoning in the general domain of designed physical systems, and in particular, electronic circuits. One aspect of the theory, causal analysis, describes how the behavior of individual components can be combined to explain the behavior of composite systems. Another aspect, teleological analysis, describes how the notion that the system has a purpose can be used to aid this causal analysis. The theory is implemented as a computer program which, given a circuit topology, can construct by qualitative causal analysis a mechanism graph describing the functional topology of the system. This functional topology is then parsed by a grammar for common circuit functions. Ambiguities are introduced by the approximate, qualitative nature of the analysis: there are often several possible mechanisms that might describe the circuit's function. These are disambiguated by teleological analysis. The requirement that each component be assigned an appropriate purpose in the functional topology imposes a severe constraint that eliminates all the ambiguities. Since both analyses are based on heuristics, the chosen mechanism is a rationalization of how the circuit functions, and does not guarantee that the circuit actually functions. This type of coarse understanding of circuits is useful for analysis, design, and troubleshooting.
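
A minimal sketch of qualitative causal propagation in this spirit (my illustration; the component rules and signal names are invented): each quantity carries a sign, and local rules propagate a disturbance through the topology, yielding the kind of causal chain a mechanism graph records.

    # Qualitative propagation: values are signs (+1, -1, 0).
    def propagate(rules, known):
        # rules: (effect, cause, gain_sign) -- effect moves as sign(cause) * gain.
        values, trace = dict(known), []
        changed = True
        while changed:
            changed = False
            for effect, cause, gain in rules:
                if cause in values and effect not in values:
                    values[effect] = values[cause] * gain
                    trace.append(f"{cause} -> {effect}")
                    changed = True
        return values, trace

    # Common-emitter stage, qualitatively: Vin up -> Ib up -> Ic up -> Vout down.
    rules = [("Ib", "Vin", +1), ("Ic", "Ib", +1), ("Vout", "Ic", -1)]
    values, trace = propagate(rules, {"Vin": +1})
    print(values["Vout"], trace)  # -1, via Vin -> Ib -> Ic -> Vout

When several rule sets could explain the same behavior, teleological analysis prefers the one in which every component is assigned a purpose.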

Relevance: 100.00%

Abstract:

In most classical frameworks for learning from examples, it is assumed that examples are randomly drawn and presented to the learner. In this paper, we consider the possibility of a more active learner who is allowed to choose his/her own examples. Our investigations are carried out in a function approximation setting. In particular, using arguments from optimal recovery (Micchelli and Rivlin, 1976), we develop an adaptive sampling strategy (equivalent to adaptive approximation) for arbitrary approximation schemes. We provide a general formulation of the problem and show how it can be regarded as sequential optimal recovery. We demonstrate the application of this general formulation to two special cases of functions on the real line: 1) monotonically increasing functions and 2) functions with bounded derivative. An extensive investigation of the sample complexity of approximating these functions is conducted, yielding both theoretical and empirical results on test functions. Our theoretical results (stated in PAC style), along with the simulations, demonstrate the superiority of our active scheme over both passive learning and classical optimal recovery. The analysis of active function approximation is conducted in a worst-case setting, in contrast with other Bayesian paradigms obtained from optimal design (Mackay, 1992).
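
For the monotone case, a sketch of the flavor of the adaptive strategy (simplified from the setup above; the width-times-rise score below is an illustrative proxy for the worst-case error, not the paper's exact criterion): the learner repeatedly queries the midpoint of the interval where a monotone function is least constrained.

    # Active sampling of a monotonically increasing function on [a, b].
    import numpy as np

    def active_sample_monotone(f, n_queries, a=0.0, b=1.0):
        xs, ys = [a, b], [f(a), f(b)]
        for _ in range(n_queries):
            # Uncertainty proxy per interval: width * rise.
            scores = [(xs[i + 1] - xs[i]) * (ys[i + 1] - ys[i])
                      for i in range(len(xs) - 1)]
            i = int(np.argmax(scores))
            x_new = 0.5 * (xs[i] + xs[i + 1])   # query the worst interval
            xs.insert(i + 1, x_new)
            ys.insert(i + 1, f(x_new))
        return np.array(xs), np.array(ys)

    # A monotone function with a sharp rise near x = 0.7.
    f = lambda x: 1.0 / (1.0 + np.exp(-40.0 * (x - 0.7)))
    xs, _ = active_sample_monotone(f, 12)
    print(np.round(xs, 3))  # queries cluster near the rise at 0.7

A passive learner spreads the same budget uniformly over [a, b], which is the gap the sample-complexity comparison quantifies.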

Relevance: 100.00%

Abstract:

This paper presents a new paradigm for signal reconstruction and superresolution, Correlation Kernel Analysis (CKA), that is based on the selection of a sparse set of bases from a large dictionary of class-specific basis functions. The basis functions that we use are the correlation functions of the class of signals we are analyzing. To choose the appropriate features from this large dictionary, we use Support Vector Machine (SVM) regression and compare this to traditional Principal Component Analysis (PCA) for the tasks of signal reconstruction, superresolution, and compression. The testbed we use in this paper is a set of images of pedestrians. This paper also presents results of experiments in which we use a dictionary of multiscale basis functions and then use Basis Pursuit De-Noising to obtain a sparse, multiscale approximation of a signal. The results are analyzed and we conclude that 1) when used with a sparse representation technique, the correlation function is an effective kernel for image reconstruction and superresolution, 2) for image compression, PCA and SVM have different tradeoffs, depending on the particular metric that is used to evaluate the results, 3) in sparse representation techniques, L_1 is not a good proxy for the true measure of sparsity, L_0, and 4) the L_epsilon norm may be a better error metric for image reconstruction and compression than the L_2 norm, though the exact psychophysical metric should take into account high-order structure in images.
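
A minimal sketch of the sparse-coding step (my own, not the paper's CKA pipeline; ISTA is used here as a simple solver for the Basis Pursuit De-Noising objective 0.5*||D a - x||_2^2 + lam*||a||_1 over an overcomplete dictionary D):

    # Iterative soft-thresholding (ISTA) for Basis Pursuit De-Noising.
    import numpy as np

    def ista(D, x, lam=0.05, n_iter=500):
        L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            g = D.T @ (D @ a - x)              # gradient of the quadratic term
            z = a - g / L
            a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return a

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))         # overcomplete dictionary
    D /= np.linalg.norm(D, axis=0)
    a_true = np.zeros(256)
    a_true[[3, 70, 200]] = [1.0, -2.0, 1.5]    # 3-sparse signal
    x = D @ a_true + 0.01 * rng.standard_normal(64)
    a_hat = ista(D, x)
    print(np.flatnonzero(np.abs(a_hat) > 0.1)) # should recover atoms 3, 70, 200

The L_1 penalty is the tractable surrogate here; conclusion 3) above is the caution that it can diverge from the true L_0 sparsity it stands in for.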

Relevance: 100.00%

Abstract:

We analyze a finite horizon, single product, periodic review model in which pricing and production/inventory decisions are made simultaneously. Demands in different periods are random variables that are independent of each other, and their distributions depend on the product price. Pricing and ordering decisions are made at the beginning of each period, and all shortages are backlogged. Ordering cost includes both a fixed cost and a variable cost proportional to the amount ordered. The objective is to find an inventory policy and a pricing strategy maximizing expected profit over the finite horizon. We show that when the demand model is additive, the profit-to-go functions are k-concave and hence an (s,S,p) policy is optimal. In such a policy, inventory in each period is managed based on the classical (s,S) policy, and price is determined based on the inventory position at the beginning of the period. For more general demand functions, i.e., multiplicative plus additive functions, we demonstrate that the profit-to-go function is not necessarily k-concave and an (s,S,p) policy is not necessarily optimal. We introduce a new concept, symmetric k-concave functions, and apply it to provide a characterization of the optimal policy.
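
To illustrate the recursion (hypothetical cost and demand numbers, with a discretized and truncated state space; a sketch of the model, not the paper's analysis): each period the planner jointly picks an order-up-to level y and a price p, pays the fixed cost K only when ordering, and recurses on the profit-to-go.

    # Finite-horizon joint pricing/inventory DP with fixed ordering cost.
    import numpy as np

    K, c, h, b = 5.0, 1.0, 0.2, 2.0      # fixed cost, unit cost, holding, backlog
    prices = [4.0, 5.0, 6.0]
    demand = {4.0: 6, 5.0: 4, 6.0: 2}    # additive model: d(p) + noise
    noise = [-1, 0, 1]                   # equally likely demand shocks
    levels = list(range(-5, 15))         # inventory positions (backlog < 0)

    def solve(T):
        V = {x: 0.0 for x in levels}     # terminal profit-to-go
        for _ in range(T):
            V_new = {}
            for x in levels:
                best = -np.inf
                for y in [l for l in levels if l >= x]:   # order up to y
                    for p in prices:
                        profit = -K * (y > x) - c * (y - x)
                        for eps in noise:
                            d = demand[p] + eps
                            x2 = max(min(y - d, levels[-1]), levels[0])
                            profit += (p * d - h * max(x2, 0)
                                       - b * max(-x2, 0) + V[x2]) / len(noise)
                        best = max(best, profit)
                V_new[x] = best
            V = V_new
        return V

    print(round(solve(3)[0], 2))  # optimal 3-period expected profit from x = 0

Recording the maximizing (y, p) per state, rather than just the value, exposes the (s,S,p) structure when demand is additive.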

Relevance: 100.00%

Abstract:

Amphiphilic polymers are a class of polymers that self-assemble into different types of microstructure, depending on the solvent environment and external stimuli. Self-assembled structures can take many forms, such as spherical micelles, rod-like micelles, bilayers, vesicles, and bicontinuous structures. Most biological systems are composed of many of these organized structures arranged in an intelligent manner, which imparts function and life to the system. We have adopted the atom transfer radical polymerization (ATRP) technique to synthesize various block copolymer systems that self-assemble into different microstructures when subjected to an external stimulus, such as pH or temperature. The systems we have studied are: (1) pH-responsive fullerene (C60)-containing poly(methacrylic acid) (PMAA-b-C60); (2) pH- and temperature-responsive fullerene-containing poly[2-(dimethylamino)ethyl methacrylate] (C60-b-PDMAEMA); (3) other responsive water-soluble fullerene systems. By varying temperature, pH, and salt concentration, different types of microstructure can be produced. In the presence of inorganic salts, fractal patterns at nano- to microscopic dimensions were observed for negatively charged PMAA-b-C60, while no such structure was observed for positively charged PDMAEMA-b-C60. We demonstrated that negatively charged fullerene-containing polymeric systems can serve as excellent nano-templates for the controlled growth of inorganic crystals at the nano- to micrometer length scale, and a possible mechanism is proposed. The physical properties and self-assembly characteristics of these systems will be discussed, and their implications for chemical and biomedical applications will be highlighted.