3 results for Consonance dissonance sounds

in Boston University Digital Common


Relevance:

20.00%

Publisher:

Abstract:

This paper attempts two tasks. First, it sketches how the natural sciences (including especially the biological sciences), the social sciences, and the scientific study of religion can be understood to furnish complementary, consonant perspectives on human beings and human groups. This suggests that it is possible to speak of a modern secular interpretation of humanity (MSIH) to which these perspectives contribute (though not without tensions). MSIH is not a comprehensive interpretation of human beings, if only because it adopts a posture of neutrality with regard to the reality of religious objects and the truth of theological claims about them. MSIH is certainly an impressively forceful interpretation, however, and it needs to be reckoned with by any perspective on human life that seeks to insert its truth claims into the arena of public debate. Second, the paper considers two challenges that MSIH poses to specifically theological interpretations of human beings. On the one hand, in spite of its posture of religious neutrality, MSIH is a key element in a class of wider, seemingly antireligious interpretations of humanity, including especially projectionist and illusionist critiques of religion. It is their consonance with MSIH that makes these critiques such formidable competitors for traditional theological interpretations of human beings. On the other hand, taking the religiously neutral posture of MSIH at face value, theological accounts of humanity that seek to coordinate the insights of MSIH with positive religious visions of human life must find ways to overcome or manage such dissonance as arises. The goal of synthesis is defended as important, and strategies for managing these challenges, especially in light of the pluralism of extant philosophical and theological interpretations of human beings, are advocated.

Relevance:

10.00%

Publisher:

Abstract:

Auditory signals of speech are speaker-dependent, but representations of language meaning are speaker-independent. Such a transformation enables speech to be understood across different speakers. A neural model is presented that performs speaker normalization to generate a pitch-independent representation of speech sounds, while also preserving information about speaker identity. This speaker-invariant representation is categorized into unitized speech items, which input to sequential working memories whose distributed patterns can be categorized, or chunked, into syllable and word representations. The proposed model fits into an emerging model of auditory streaming and speech categorization. The auditory streaming and speaker normalization parts of the model both use multiple strip representations and asymmetric competitive circuits, thereby suggesting that these two circuits arose from similar neural designs. The normalized speech items are rapidly categorized and stably remembered by Adaptive Resonance Theory circuits. Simulations use synthesized steady-state vowels from the Peterson and Barney [J. Acoust. Soc. Am. 24, 175-184 (1952)] vowel database and achieve accuracy rates similar to those achieved by human listeners. These results are compared to behavioral data and other speaker normalization models.
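To make the idea of a pitch-independent representation concrete, here is a minimal sketch, not the paper's actual circuit: spectral energy is re-binned on a log-frequency axis measured relative to an estimated pitch, so the same vowel spoken at different pitches yields (nearly) the same pattern. All function and variable names below are illustrative assumptions, not the model's terminology.

```python
# Toy illustration of pitch-relative speaker normalization (an assumed
# simplification, not the paper's strip/competitive circuitry).
import numpy as np

def normalize_spectrum(freqs_hz, energies, pitch_hz, n_bins=64):
    """Re-bin spectral energy on a log-frequency axis relative to the
    speaker's pitch, yielding a pitch-independent pattern."""
    rel = np.log2(np.asarray(freqs_hz) / pitch_hz)   # octaves above pitch
    bins = np.linspace(0.0, 5.0, n_bins + 1)         # cover ~5 octaves
    pattern = np.zeros(n_bins)
    idx = np.digitize(rel, bins) - 1                 # bin index per partial
    for i, e in zip(idx, energies):
        if 0 <= i < n_bins:
            pattern[i] += e
    total = pattern.sum()
    return pattern / total if total > 0 else pattern

# Two "speakers" producing the same vowel-like harmonic stack:
for f0 in (120.0, 220.0):                            # low vs. high pitch
    harmonics = f0 * np.arange(1, 21)                # first 20 harmonics
    energies = 1.0 / np.arange(1, 21)                # 1/n spectral rolloff
    p = normalize_spectrum(harmonics, energies, pitch_hz=f0)
    print(f"f0={f0:5.1f} Hz -> peak bin {p.argmax()}")
```

Running the toy example prints the same peak bin for the low- and high-pitched renderings of the same harmonic stack, which is the kind of invariance the model's normalization stage is built to achieve.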

Relevance:

10.00%

Publisher:

Abstract:

This article describes a neural network model that addresses the acquisition of speaking skills by infants and the subsequent motor-equivalent production of speech sounds. The model learns two mappings during a babbling phase. A phonetic-to-orosensory mapping specifies a vocal tract target for each speech sound; these targets take the form of convex regions in orosensory coordinates defining the shape of the vocal tract. The babbling process wherein these convex region targets are formed explains how an infant can learn phoneme-specific and language-specific limits on acceptable variability of articulator movements. The model also learns an orosensory-to-articulatory mapping wherein cells coding desired movement directions in orosensory space learn articulator movements that achieve these orosensory movement directions. The resulting mapping provides a natural explanation for the formation of coordinative structures. This mapping also makes efficient use of redundancy in the articulator system, thereby providing the model with motor-equivalent capabilities. Simulations verify the model's ability to compensate automatically, without new learning, for constraints or perturbations applied to the articulators, and to explain contextual variability seen in human speech production.
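As a rough illustration of the motor-equivalence claim, consider the following sketch, which is not the article's network: a desired movement direction in a low-dimensional orosensory space is mapped to articulator velocities through a pseudoinverse of a made-up Jacobian (an assumed stand-in for the learned orosensory-to-articulatory mapping), so that clamping one articulator is compensated by the others without any new learning.

```python
# Toy illustration of motor equivalence via redundancy (an assumed
# pseudoinverse formulation, not the article's learned mapping).
import numpy as np

J = np.array([[1.0, 0.5, 0.2],       # 2 orosensory dims, 3 articulators;
              [0.1, 0.8, 0.6]])      # values are arbitrary for the demo

def articulator_velocity(desired_dir, blocked=None):
    """Least-norm articulator velocities achieving a desired orosensory
    direction; a 'blocked' articulator is clamped via a zeroed column."""
    Jc = J.copy()
    if blocked is not None:
        Jc[:, blocked] = 0.0          # that articulator cannot move
    return np.linalg.pinv(Jc) @ desired_dir

goal = np.array([1.0, 0.0])           # move along first orosensory dim
free = articulator_velocity(goal)
jammed = articulator_velocity(goal, blocked=0)
print("unconstrained:", free, "-> achieves", J @ free)
print("articulator 0 blocked:", jammed, "-> achieves", J @ jammed)
```

Both calls reach the same orosensory goal; the blocked case simply redistributes the movement across the remaining articulators, which is the kind of automatic compensation the simulations demonstrate.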