2 results for parallel computer systems
at Bucknell University Digital Commons - Pennsylvania - USA
Abstract:
The performance of the parallel vector implementation of one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors considered in comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing execution speed by the ratio of the number of multiplications and additions is no longer effective for vector implementations; the structure of the algorithm must also be considered when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included. (34 refs)
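
To make concrete what computing a transform from an "actual or modified FFT kernel" means, the sketch below is a minimal NumPy/SciPy illustration (not the paper's Cray X-MP vector code) that derives two of the listed transforms from a single complex FFT call: the DHT falls out of the real and imaginary parts of the DFT, and the unnormalized DCT-II follows from Makhoul's even/odd input reordering plus a twiddle factor. The function names and the SciPy check are assumptions added for illustration.

import numpy as np
from scipy.fft import dct   # reference implementation, used only to check the sketch

def dht_via_fft(x):
    # Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*k*n/N),
    # which equals Re(X[k]) - Im(X[k]) for the DFT X of a real input x.
    X = np.fft.fft(x)
    return X.real - X.imag

def dct2_via_fft(x):
    # Unnormalized DCT-II via Makhoul's reordering: feed [x0, x2, ..., x3, x1]
    # (even samples, then odd samples reversed) to an N-point FFT, then rotate
    # each output bin by exp(-i*pi*k/(2N)) and take twice the real part.
    N = len(x)
    v = np.concatenate([x[0::2], x[1::2][::-1]])
    V = np.fft.fft(v)
    k = np.arange(N)
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2.0 * N)) * V)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
assert np.allclose(dct2_via_fft(x), dct(x, type=2, norm=None))
print(dht_via_fft(x)[:4])

Both routines lean on the same FFT kernel, so their raw multiply/add counts are nearly identical; what differs is the surrounding data movement (the input reorder and the post-twiddle), which is the kind of structural factor the abstract argues must enter any speed comparison on a vector machine.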
Abstract:
The means through which the nervous system perceives its environment is one of the most fascinating questions in contemporary science. Our endeavors to comprehend the principles of neural science provide an instance of how biological processes may inspire novel methods in mathematical modeling and engineering. The application of mathematical models towards understanding neural signals and systems represents a vibrant field of research that has spanned over half a century. During this period, multiple approaches to neuronal modeling have been adopted, and each approach is adept at elucidating a specific aspect of nervous system function. Thus, while biophysical models have strived to comprehend the dynamics of actual physical processes occurring within a nerve cell, the phenomenological approach has conceived models that relate the ionic properties of nerve cells to transitions in neural activity. Furthermore, the field of neural networks has endeavored to explore how distributed parallel processing systems may become capable of storing memory. Through this project, we strive to explore how some of the insights gained from biophysical neuronal modeling may be incorporated within the field of neural networks. We specifically study the capabilities of a simple neural model, the Resonate-and-Fire (RAF) neuron, whose derivation is inspired by biophysical neural modeling. While reflecting greater biological plausibility, the RAF neuron is also analytically tractable, and thus may be implemented within neural networks. In the following thesis, we provide a brief overview of the different approaches that have been adopted towards comprehending the properties of nerve cells, along with the framework under which our specific neuron model relates to the field of neuronal modeling. Subsequently, we explore some of the time-dependent neurocomputational capabilities of the RAF neuron, and we utilize the model to classify logic gates and solve the classic XOR problem. Finally, we explore how the resonate-and-fire neuron may be implemented within neural networks, and how such a network could be adapted through the temporal backpropagation algorithm.
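
For readers unfamiliar with the model, the following sketch is an illustrative Python/NumPy simulation following Izhikevich's resonate-and-fire formulation (not the thesis's own code): the subthreshold state is a complex number z obeying dz/dt = (b + i*omega)*z + I(t), and a spike is emitted when Im(z) crosses a threshold. The parameters, the reset rule, and the pulse amplitudes are assumptions chosen for illustration; they demonstrate the timing sensitivity that lets RAF units compute on input intervals.

import numpy as np

def simulate_raf(inputs, b=-0.1, omega=2.0 * np.pi * 10.0, threshold=1.0, dt=1e-3):
    # Resonate-and-fire neuron: complex state z with dz/dt = (b + i*omega)*z + I(t);
    # a spike is emitted when Im(z) reaches `threshold`, after which z is reset.
    # The linear part is integrated exactly per step; the reset to zero is a simple
    # illustrative choice, not necessarily the rule used in the thesis.
    decay = np.exp((b + 1j * omega) * dt)
    z = 0.0 + 0.0j
    spikes = []
    for step, current in enumerate(inputs):
        z = z * decay + dt * current
        if z.imag >= threshold:
            spikes.append(step * dt)
            z = 0.0 + 0.0j
    return np.array(spikes)

# Two input pulses one period (~0.1 s here) apart add coherently and drive the
# neuron over threshold; the same pulses half a period apart cancel and produce
# no spike, even though the total injected current is identical.
steps = 2000
resonant, off_beat = np.zeros(steps), np.zeros(steps)
resonant[[100, 200]] = 600.0    # arbitrary units; each pulse adds dt * 600 = 0.6 to z
off_beat[[100, 150]] = 600.0
for name, drive in [("resonant", resonant), ("off-beat", off_beat)]:
    print(name, "spike times:", simulate_raf(drive))

The in-phase versus out-of-phase distinction shown here is the kind of time-dependent behavior the abstract refers to when it describes using RAF units for logic gates and the XOR problem.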