16 results for Radishchev, Aleksandr Nikolaevich, 1749-1802.

in the Cambridge University Engineering Department Publications Database


Relevance: 10.00%

Abstract:

The effects of multiple scattering on the acoustic manipulation of spherical particles using helicoidal Bessel beams are discussed. A closed-form analytical solution is developed to calculate the acoustic radiation force exerted by a Bessel beam on an acoustically reflective sphere, in the presence of an adjacent spherical particle, immersed in an unbounded fluid medium. The solution is based on the standard Fourier decomposition method, and the effect of multiple scattering is taken into account using the addition theorem for spherical coordinates. Of particular interest here is the effect of multiple scattering on the emergence of negative axial forces. To investigate this, the radiation force applied to the target particle by a helicoidal Bessel beam of different azimuthal indices (m = 1 to 4), at different conical angles, is computed. Results are presented for soft and rigid spheres of various sizes, separated by a finite distance. The results show that the emergence of negative-force regions is very sensitive to the level of cross-scattering between the particles. They also show that in multiple-scattering media the negative axial force may occur at much smaller conical angles than previously reported for single particles, and that acoustic manipulation of soft spheres in such media may also become possible.
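
As a rough illustration of the beam parameters varied in this study (the azimuthal index m and the conical angle beta), the sketch below evaluates the standard incident pressure field of a helicoidal Bessel beam, p = p0 J_m(k rho sin(beta)) exp(i m phi) exp(i k z cos(beta)), on a transverse grid. It shows only the incident field, not the radiation-force calculation itself; the frequency, sound speed and grid dimensions are illustrative assumptions and are not taken from the paper.

import numpy as np
from scipy.special import jv

# Incident pressure field of a helicoidal (vortex) Bessel beam of azimuthal
# index m and conical angle beta, evaluated in the z = 0 plane.
# All numerical values below are illustrative assumptions.

p0 = 1.0                 # pressure amplitude [Pa]
c = 1500.0               # sound speed in water [m/s]
f = 1.0e6                # drive frequency [Hz]
k = 2 * np.pi * f / c    # wavenumber [1/m]
m = 2                    # azimuthal index (topological charge)
beta = np.deg2rad(60.0)  # conical angle

# Transverse grid at z = 0
xs = np.linspace(-5e-3, 5e-3, 401)
X, Y = np.meshgrid(xs, xs)
rho = np.hypot(X, Y)
phi = np.arctan2(Y, X)

# p(rho, phi, z=0) = p0 * J_m(k rho sin beta) * exp(i m phi)
p = p0 * jv(m, k * rho * np.sin(beta)) * np.exp(1j * m * phi)

print("on-axis |p| (vanishes for m != 0):", abs(p[200, 200]))
print("peak |p| off axis:", abs(p).max())

The on-axis null for m >= 1 is the hollow core characteristic of helicoidal beams; the radial position of the first bright ring shrinks as the conical angle grows, which is why the conical angle is the natural sweep parameter when looking for negative axial forces.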

Relevance: 10.00%

Abstract:

Although it is widely believed that reinforcement learning is a suitable tool for describing behavioral learning, the mechanisms by which it can be implemented in networks of spiking neurons are not fully understood. Here, we show that different learning rules emerge from a policy gradient approach depending on which features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect. We use the framework of Williams (1992) to derive learning rules for arbitrary neural codes. For illustration, we present policy-gradient rules for three different example codes - a spike count code, a spike timing code and the most general "full spike train" code - and test them on simple model problems. In addition to classical synaptic learning, we derive learning rules for intrinsic parameters that control the excitability of the neuron. The spike count learning rule has structural similarities with established Bienenstock-Cooper-Munro rules. If the distribution of the relevant spike train features belongs to the natural exponential family, the learning rules have a characteristic shape that raises interesting prediction problems.
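
As a concrete, minimal illustration of a policy-gradient rule for one of the example codes mentioned above (the spike count code), the sketch below applies a REINFORCE-style three-factor update to a single Poisson neuron whose expected spike count per trial is lam = exp(w . x), a member of the natural exponential family. The reward function, input pattern and constants are toy assumptions for illustration and are not taken from the paper; the gradient of the count log-likelihood, (n - lam) x, plays the role of the eligibility factor.

import numpy as np

rng = np.random.default_rng(0)

# One Poisson neuron: expected spike count lam = exp(w . x).
# Policy-gradient (REINFORCE) update for a spike-count code:
#     dw = eta * (R - baseline) * d/dw log P(n | w, x)
#        = eta * (R - baseline) * (n - lam) * x
# Toy task (an assumption, not from the paper): reward is highest
# when the count hits a fixed target for a fixed input pattern.

dim = 5
x = rng.normal(size=dim)             # fixed input pattern
w = rng.normal(scale=0.1, size=dim)  # synaptic weights
eta = 0.01                           # learning rate
target_count = 4                     # reward peaks at n == target_count
baseline = 0.0                       # running-average reward baseline

for trial in range(3000):
    lam = np.exp(w @ x)                 # expected spike count
    n = rng.poisson(lam)                # sampled spike count this trial
    reward = -abs(n - target_count)     # toy scalar reward
    eligibility = (n - lam) * x         # d/dw log P(n | w, x) for a Poisson count
    w += eta * (reward - baseline) * eligibility
    baseline += 0.05 * (reward - baseline)  # slow reward average reduces variance

print("final expected count:", np.exp(w @ x))  # should approach target_count

The (n - lam) factor is the "observed minus expected" term typical of exponential-family likelihoods, which is the characteristic shape the abstract refers to; learning rules for the other codes differ in which spike-train feature replaces the count.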