4 results for Medial Rectus Recession
in the Research Open Access Repository of the University of East London.
Abstract:
Sound localization can be defined as the ability to identify the position of an incoming sound source and is considered a powerful aspect of mammalian perception. For low-frequency sounds, i.e., in the range 270 Hz-1.5 kHz, the mammalian auditory pathway achieves this by extracting the Interaural Time Difference (ITD) between the sound signals received by the left and right ears. This processing is performed in a region of the brain known as the Medial Superior Olive (MSO). This paper presents a Spiking Neural Network (SNN) based model of the MSO. The network model is trained with the Spike Timing Dependent Plasticity (STDP) learning rule using experimentally observed Head Related Transfer Function (HRTF) data from an adult domestic cat. The results presented demonstrate that the proposed SNN model is able to perform sound localization with an accuracy of 91.82% when an error tolerance of +/-10 degrees is used. For angular resolutions down to 2.5 degrees, it is demonstrated that software-based simulations of the model incur significant computation times. The paper therefore also addresses a preliminary implementation on a Field Programmable Gate Array (FPGA) based hardware platform to accelerate system performance.
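To make the ITD cue itself concrete, the following short Python sketch (not the authors' SNN model; the sample rate, cat-like head radius and 500 Hz test tone are assumptions chosen for illustration) estimates the interaural time difference of a low-frequency tone by cross-correlating the left- and right-ear signals and compares it with the ITD predicted by a simple spherical-head approximation.

import numpy as np

# Illustrative sketch of the ITD cue, not of the authors' SNN model.
# Sample rate, head radius and the 500 Hz test tone are assumed values.

FS = 44_100             # sample rate (Hz), assumed
HEAD_RADIUS = 0.04      # approximate feline head radius (m), assumed
SPEED_OF_SOUND = 343.0  # m/s

def itd_from_azimuth(azimuth_deg):
    """Woodworth-style spherical-head approximation of the ITD."""
    theta = np.radians(azimuth_deg)
    return HEAD_RADIUS / SPEED_OF_SOUND * (theta + np.sin(theta))

def estimate_itd(left, right):
    """Estimate the ITD (s) by cross-correlating the two ear signals."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)
    return lag / FS

# Synthesise a 500 Hz tone arriving from 30 degrees to the right, so the
# right-ear signal leads the left-ear signal by the true ITD.
t = np.arange(0, 0.05, 1 / FS)
true_itd = itd_from_azimuth(30.0)
left = np.sin(2 * np.pi * 500 * t)
right = np.sin(2 * np.pi * 500 * (t + true_itd))

print(f"true ITD      : {true_itd * 1e6:6.1f} us")
print(f"estimated ITD : {estimate_itd(left, right) * 1e6:6.1f} us")

The estimate is quantised to one sample period, which is one reason sub-degree angular resolution pushes such simulations towards fine time steps and, in turn, towards hardware acceleration.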
Abstract:
Sound localisation is defined as the ability to identify the position of a sound source. To achieve this on the horizontal plane, the brain employs two cues: the interaural time difference (ITD), processed by neurons in the medial superior olive (MSO), and the interaural intensity difference (IID), processed by neurons of the lateral superior olive (LSO), both located in the superior olivary complex of the auditory pathway. This paper presents spiking neuron architectures of the MSO and LSO. An implementation of the Jeffress model using spiking neurons is presented as a representation of the MSO, while a spiking neuron architecture is discussed that shows how neurons of the medial nucleus of the trapezoid body (MNTB) interact with LSO neurons to determine the azimuthal angle. Experimental results to support this work are presented.
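As an illustration of the coincidence-detection principle behind the Jeffress model, the Python sketch below (the parameters, delay taps and input spike trains are assumptions for illustration, not the architecture reported in the paper) feeds left and right spike trains through complementary internal delay lines into a row of leaky integrate-and-fire detectors; only the detector whose internal delay difference cancels the external ITD receives coincident input and fires.

import numpy as np

# Minimal sketch of Jeffress-style coincidence detection with leaky
# integrate-and-fire (LIF) neurons. All parameters, delay taps and input
# spike trains are illustrative assumptions, not the paper's architecture.

DT = 1e-5                       # simulation step (s)
TAU = 3e-5                      # membrane time constant (s), assumed
THRESHOLD = 1.5                 # needs two near-coincident input spikes
DELAYS = np.arange(6) * 2e-5    # internal axonal delay taps, 0..100 us

def coincidence_spikes(left, right, d_left, d_right):
    """Output spike count of one LIF coincidence detector."""
    dl, dr = int(round(d_left / DT)), int(round(d_right / DT))
    v, count = 0.0, 0
    for i in range(len(left)):
        inp = (left[i - dl] if i >= dl else 0.0) + \
              (right[i - dr] if i >= dr else 0.0)
        v += (-v / TAU) * DT + inp
        if v >= THRESHOLD:
            count += 1
            v = 0.0
    return count

# Periodic input spike trains: the left ear leads the right by 60 us.
n_steps = 5000
left, right = np.zeros(n_steps), np.zeros(n_steps)
left[::100] = 1.0               # one spike per millisecond
right[6::100] = 1.0             # same train, delayed by 6 steps (60 us)

# Each detector delays the left input by d and the right by (100 us - d);
# the detector whose internal delay difference matches the external ITD wins.
for k, d in enumerate(DELAYS):
    n = coincidence_spikes(left, right, d, DELAYS[-1] - d)
    print(f"detector {k}: tuned ITD {(2 * d - DELAYS[-1]) * 1e6:+4.0f} us -> {n} spikes")

Running the sketch shows the detector tuned to +60 us firing on every input period while its neighbours stay silent, which is the place-code readout of azimuth that the Jeffress model proposes.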
Abstract:
This article examines prison education in England and Wales arguing that a disjuncture exists between the policy rhetoric of entitlement to education in prison at the European level and the playing out of that entitlement in English and Welsh prisons. Caught between conflicting discourses around a need to combat recidivism and a need for incarceration, prison education in England exists within a policy context informed, in part, by an international human rights agenda on the one hand and global recession, financial cutbacks, and a moral panic about crime on the other. The European Commission has highlighted a number of challenges facing prison education in Europe including over‐crowded institutions, increasing diversity in prison populations, the need to keep pace with pedagogical changes in mainstream education and the adoption of new technologies for learning (Hawley et al., 2013). These are challenges confronting all policy makers involved in prison education in England and Wales in a policy context that is messy, contradictory and fiercely contested. The article argues that this policy context, exacerbated by socio‐economic discourses around neo‐liberalism, is leading to a race‐to‐the‐bottom in the standards of educational provision for prisoners in England and Wales.
Abstract:
In this paper, a spiking neural network (SNN) architecture that simulates the sound localization ability of the mammalian auditory pathways using the interaural intensity difference (IID) cue is presented. The lateral superior olive (LSO) was the inspiration for the architecture, which required the integration of an auditory periphery (cochlea) model and a model of the medial nucleus of the trapezoid body (MNTB). The SNN uses leaky integrate-and-fire excitatory and inhibitory spiking neurons, facilitating synapses and receptive fields. Experimentally derived head-related transfer function (HRTF) acoustical data from adult domestic cats were employed to train and validate the localization ability of the architecture; training used the supervised learning algorithm known as the remote supervision method to determine the azimuthal angles. The experimental results demonstrate that the architecture performs best when localizing high-frequency sound data, in agreement with the biology, and also shows a high degree of robustness when the HRTF acoustical data are corrupted by noise.
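The excitation-inhibition interplay that underlies the IID cue can be sketched in Python as follows (a minimal illustration under assumed parameters, not the paper's LSO architecture): a single leaky integrate-and-fire neuron receives excitatory Poisson input driven by the ipsilateral sound level and inhibitory Poisson input, relayed MNTB-style, driven by the contralateral level, so its firing rate grows with the IID in favour of the ipsilateral ear.

import numpy as np

# Minimal sketch of the IID cue with one LSO-like leaky integrate-and-fire
# neuron: excitation follows the ipsilateral sound level, inhibition (relayed
# MNTB-style) follows the contralateral level. All parameters and the
# level-to-rate mapping are assumptions, not the paper's architecture.

DT = 1e-4                  # time step (s)
TAU = 2e-2                 # membrane time constant (s), assumed
THRESHOLD = 0.5
W_EXC = W_INH = 0.05       # synaptic weights, assumed

def poisson_train(rate_hz, n_steps, rng):
    """Boolean spike train with Poisson statistics at the given mean rate."""
    return rng.random(n_steps) < rate_hz * DT

def lso_rate(iid_db, duration=1.0, seed=0):
    """Output firing rate (Hz) of the model LSO neuron for a given IID (dB)."""
    rng = np.random.default_rng(seed)
    n = int(duration / DT)
    # Assumed mapping from interaural level difference to input firing rates.
    ipsi_rate = 300.0 * 10 ** (iid_db / 40)
    contra_rate = 300.0 * 10 ** (-iid_db / 40)
    exc = poisson_train(ipsi_rate, n, rng)
    inh = poisson_train(contra_rate, n, rng)
    v, spikes = 0.0, 0
    for e, i in zip(exc, inh):
        v += (-v / TAU) * DT + W_EXC * e - W_INH * i
        if v >= THRESHOLD:
            spikes += 1
            v = 0.0
    return spikes / duration

# Output rate rises as the sound moves towards the ipsilateral side.
for iid in (-20, -10, 0, 10, 20):
    print(f"IID {iid:+3d} dB -> {lso_rate(iid):5.1f} spikes/s")

Because the IID cue relies on level differences created by head shadowing, which are only pronounced at high frequencies, a readout of this kind naturally performs best on high-frequency sounds, consistent with the result reported in the abstract.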