87 results for Variational approximation


Relevance: 20.00%

Publisher:

Abstract:

Artificial Neural Networks (ANNs) are widely used in engineering applications, such as the solution of nonlinear problems. Implementing this technique in reconfigurable devices is a great challenge for researchers because of several factors, such as floating-point precision, the nonlinear activation function, performance, and the area used in the FPGA. The contribution of this work is the approximation of a nonlinear function used in ANNs, the popular hyperbolic tangent activation function. The system architecture comprises several scenarios that provide a trade-off among performance, precision, and FPGA area. The results for the different scenarios are compared with each other and with the current literature in terms of error, area, and system performance. © 2013 IEEE.
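The abstract does not spell out the approximation scheme itself. As an illustrative sketch only (not the architecture of the paper), a piecewise-linear lookup of tanh with quantized slopes and offsets shows the kind of precision/area trade-off such an FPGA design faces; the segment count and fractional bit width below are arbitrary choices:

```python
import numpy as np

def tanh_pwl(x, n_segments=16, x_max=4.0, frac_bits=8):
    """Hypothetical piecewise-linear tanh approximation with quantized
    slopes/offsets, mimicking limited fixed-point precision (illustrative only)."""
    # Breakpoints on [0, x_max]; tanh is odd, so only the positive half is tabulated.
    xs = np.linspace(0.0, x_max, n_segments + 1)
    scale = 1 << frac_bits
    # Quantize segment slopes and intercepts to 'frac_bits' fractional bits.
    slopes = np.round(np.diff(np.tanh(xs)) / np.diff(xs) * scale) / scale
    offsets = np.round(np.tanh(xs[:-1]) * scale) / scale

    sign = np.sign(x)
    a = np.minimum(np.abs(x), x_max - 1e-9)       # saturate beyond the table
    idx = (a / x_max * n_segments).astype(int)    # segment index
    return sign * (offsets[idx] + slopes[idx] * (a - xs[idx]))

x = np.linspace(-6.0, 6.0, 2001)
err = np.abs(tanh_pwl(x) - np.tanh(x))
print(f"max abs error: {err.max():.4e}")  # shrinks as segments/bits grow
```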

Relevance: 20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 20.00%

Publisher:

Abstract:

Graduate Program in Materials Science and Technology - FC (Pós-graduação em Ciência e Tecnologia de Materiais)

Relevance: 20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 20.00%

Publisher:

Abstract:

A variational analysis of the spiked harmonic oscillator Hamiltonian operator −d²/dx² + x² + l(l+1)/x² + λ|x|^(−α), where α is a real positive parameter, is reported in this work. The formalism makes use of the functional space spanned by the solutions of the Schrödinger equation for the linear harmonic oscillator Hamiltonian supplemented by a Dirichlet boundary condition, and a standard procedure for diagonalizing symmetric matrices. The eigenvalues obtained by increasing the dimension of the basis set provide accurate approximations for the ground state energy of the model system, valid for positive and relatively large values of the coupling parameter λ. Additionally, a large-coupling perturbative expansion is carried out and the contributions to the ground state energy up to fourth order are explicitly evaluated. Numerical results are compared for the special case α = 5/2. © 1989 American Institute of Physics.
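As a rough sketch of this kind of Rayleigh–Ritz calculation (my own illustration, not the paper's formalism or data), one can expand in odd harmonic-oscillator eigenfunctions, which already satisfy the Dirichlet condition at the origin, and diagonalize the resulting matrix; the coupling value and basis sizes below are arbitrary:

```python
import numpy as np
from math import factorial, pi, sqrt
from scipy.integrate import quad
from scipy.special import eval_hermite

def phi(k, x):
    """Eigenfunction of -d^2/dx^2 + x^2 (energy 2k + 1),
    normalized on the half line for odd k, where it vanishes at x = 0."""
    norm = 1.0 / sqrt(2.0**k * factorial(k) * sqrt(pi))
    return sqrt(2.0) * norm * eval_hermite(k, x) * np.exp(-x * x / 2.0)

def ground_state(lam, alpha=2.5, nbasis=12):
    """Lowest eigenvalue of -d^2/dx^2 + x^2 + lam * x**(-alpha) on (0, inf)
    with psi(0) = 0, in a truncated basis of odd oscillator states (sketch)."""
    ks = [2 * n + 1 for n in range(nbasis)]       # odd states only
    H = np.diag([2.0 * k + 1.0 for k in ks])      # unperturbed energies
    for i, ki in enumerate(ks):
        for j, kj in enumerate(ks[: i + 1]):
            integrand = lambda x: phi(ki, x) * phi(kj, x) * x**(-alpha)
            val, _ = quad(integrand, 0.0, 30.0, limit=200)
            H[i, j] += lam * val
            H[j, i] = H[i, j]
    return np.linalg.eigvalsh(H)[0]

# The estimate should converge from above as the basis grows (variational bound).
for n in (4, 8, 12):
    print(n, ground_state(lam=10.0, nbasis=n))
```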

Relevance: 20.00%

Publisher:

Abstract:

The optimized δ-expansion is used to study vacuum polarization effects in the Walecka model. The optimized δ-expansion is a nonperturbative approach for field-theoretic models which combines the techniques of perturbation theory and the variational principle. Vacuum effects on self-energies and on the energy density of nuclear matter are studied up to O(δ²). When exchange diagrams are neglected, the traditional relativistic Hartree approximation (RHA) results are exactly reproduced and, using the same set of parameters that saturate nuclear matter in the RHA, a new stable, tightly bound state at high density is found.
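The Walecka-model calculation itself does not fit in a snippet, but the mechanism of the optimized δ-expansion (an interpolating "action", truncation in δ, and fixing the variational parameter by minimal sensitivity) can be illustrated on a toy one-dimensional integral; the example below is entirely illustrative and unrelated to the paper's model:

```python
import numpy as np
from scipy.integrate import quad

# Toy model: I(g) = \int dx exp(-x^2/2 - g x^4).
# Interpolating action: S_delta = Omega x^2/2 + delta [ (1 - Omega) x^2/2 + g x^4 ].
# Expand exp(-S_delta) to first order in delta, set delta = 1, then fix the
# arbitrary variational mass Omega by the principle of minimal sensitivity (PMS).

def I_first_order(Omega, g):
    # Gaussian moments with weight exp(-Omega x^2/2): <x^2> = 1/Omega, <x^4> = 3/Omega^2
    return np.sqrt(2.0 * np.pi / Omega) * (
        1.0 - 0.5 * (1.0 - Omega) / Omega - 3.0 * g / Omega**2
    )

def I_exact(g):
    val, _ = quad(lambda x: np.exp(-0.5 * x**2 - g * x**4), -np.inf, np.inf)
    return val

g = 1.0
# PMS condition dI_1/dOmega = 0 reduces to Omega^2 - Omega - 10 g = 0 for this toy model.
Omega_pms = 0.5 * (1.0 + np.sqrt(1.0 + 40.0 * g))
print("Omega* =", Omega_pms)
print("first-order optimized delta-expansion:", I_first_order(Omega_pms, g))
print("exact:", I_exact(g))
```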

Relevance: 20.00%

Publisher:

Abstract:

A radial basis function network (RBFN) circuit for function approximation is presented. Simulation and experimental results show that the network has good approximation capabilities. The radial basis function was a squared hyperbolic secant with three adjustable parameters: amplitude, width, and center. To test the network, a sinusoidal (sine) function was approximated.
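The published circuit details are not given in the abstract; a purely software sketch of the same idea (with a hypothetical number of units and fitting procedure) approximates a sine with a sum of squared-hyperbolic-secant units, each with its own amplitude, width, and center:

```python
import numpy as np
from scipy.optimize import curve_fit

def rbf_sech2(x, *p):
    """Sum of squared-hyperbolic-secant radial basis functions; each unit has
    an amplitude a, width w and center c (three adjustable parameters)."""
    y = np.zeros_like(x)
    for a, w, c in zip(p[0::3], p[1::3], p[2::3]):
        y += a / np.cosh((x - c) / w) ** 2
    return y

# Target: one period of a sine (an illustrative test, echoing the abstract).
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x)

n_units = 6
# Initial guess: unit amplitudes and widths, centers spread over the interval.
p0 = []
for c in np.linspace(0.0, 2.0 * np.pi, n_units):
    p0 += [1.0, 1.0, c]

popt, _ = curve_fit(rbf_sech2, x, y, p0=p0, maxfev=20000)
rmse = np.sqrt(np.mean((rbf_sech2(x, *popt) - y) ** 2))
print(f"RMSE with {n_units} sech^2 units: {rmse:.4f}")
```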

Relevance: 20.00%

Publisher:

Abstract:

Function approximation is a very important task in environments where computation has to be based on extracting information from data samples of real-world processes, so the development of new mathematical models is an important activity for the evolution of the function approximation area. In this sense, we present the Polynomial Powers of Sigmoid (PPS) as a linear neural network. In this paper we introduce a series of practical results for the Polynomial Powers of Sigmoid, showing some advantages of using powers of sigmoid functions, compared with traditional MLP-backpropagation networks and polynomials, in function approximation problems.
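The abstract does not reproduce the PPS construction itself; as a minimal reading (with a hypothetical target function and degree), the approximator can be written as a linear combination of integer powers of a sigmoid, fitted by ordinary least squares:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pps_design_matrix(x, degree):
    """Columns are the powers sigmoid(x)^0 .. sigmoid(x)^degree, so the model
    is linear in its coefficients (a 'linear neural network' over sigmoid powers)."""
    s = sigmoid(x)
    return np.vstack([s**k for k in range(degree + 1)]).T

# Illustrative target (not from the paper): f(x) = x * sin(x) on [-3, 3].
x = np.linspace(-3.0, 3.0, 400)
y = x * np.sin(x)

degree = 8
A = pps_design_matrix(x, degree)
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)  # linear least-squares fit

y_hat = A @ coeffs
print("max abs error:", np.max(np.abs(y_hat - y)))
```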