4 results for Multimodal översättningsanalys

in National Center for Biotechnology Information - NCBI


Relevance:

10.00%

Publisher:

Abstract:

Dynamic importance weighting is proposed as a Monte Carlo method that has the capability to sample relevant parts of the configuration space even in the presence of many steep energy minima. The method relies on an additional dynamic variable (the importance weight) to help the system overcome steep barriers. A non-Metropolis theory is developed for the construction of such weighted samplers. Algorithms based on this method are designed for simulation and global optimization tasks arising from multimodal sampling, neural network training, and the traveling salesman problem. Numerical tests on these problems confirm the effectiveness of the method.
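The abstract does not spell out the update rules, so the following is only a minimal sketch of a dynamically weighted Monte Carlo move in the spirit described above: each state carries a weight that absorbs part of the Boltzmann factor instead of relying on Metropolis rejection, which helps the chain climb steep barriers. The double-well energy, the symmetric proposal, and the parameter theta are illustrative assumptions, and the weight-control refinements used in practice are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(x):
    """Toy double-well energy with steep minima near x = -2 and x = +2."""
    return 0.5 * (x**2 - 4.0) ** 2

def dynamically_weighted_chain(n_steps=200_000, step=0.5, theta=1.0, temperature=1.0):
    """Run one chain whose state carries an importance weight w.

    A proposed move is accepted with probability r / (r + theta), where
    r = w * exp(-dE / T) for a symmetric proposal.  The weight is updated on
    both acceptance and rejection so that weighted averages of observables
    remain (proportionally) correct even though uphill moves are often taken.
    """
    x, w = -2.0, 1.0
    xs = np.empty(n_steps)
    ws = np.empty(n_steps)
    for i in range(n_steps):
        y = x + step * rng.normal()
        r = w * np.exp(-(energy(y) - energy(x)) / temperature)
        if rng.random() < r / (r + theta):
            x, w = y, r + theta              # accepted: weight absorbs the ratio
        else:
            w = w * (r + theta) / theta      # rejected: weight compensates
        xs[i], ws[i] = x, w
    return xs, ws

xs, ws = dynamically_weighted_chain()
# Self-normalized weighted averages replace ordinary sample averages; both wells
# can contribute because the weight, rather than outright rejection, carries most
# of the Boltzmann penalty for climbing the barrier.
print("weighted mean of x:       ", np.sum(ws * xs) / np.sum(ws))
print("fraction of time at x > 0:", np.mean(xs > 0))
```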

Relevance:

10.00%

Publisher:

Abstract:

The auditory system of monkeys includes a large number of interconnected subcortical nuclei and cortical areas. At subcortical levels, the structural components of the auditory system of monkeys resemble those of nonprimates, but the organization at cortical levels is different. In monkeys, the ventral nucleus of the medial geniculate complex projects in parallel to a core of three primary-like auditory areas, AI, R, and RT, constituting the first stage of cortical processing. These areas interconnect and project to the homotopic and other locations in the opposite cerebral hemisphere and to a surrounding array of eight proposed belt areas as a second stage of cortical processing. The belt areas in turn project in overlapping patterns to a lateral parabelt region with at least rostral and caudal subdivisions as a third stage of cortical processing. The divisions of the parabelt distribute to adjoining auditory and multimodal regions of the temporal lobe and to four functionally distinct regions of the frontal lobe. Histochemically, chimpanzees and humans have an auditory core that closely resembles that of monkeys. The challenge for future researchers is to understand how this complex system in monkeys analyzes and utilizes auditory information.
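As a purely illustrative aid, the cortical stages named above can be written as a small projection graph. Only the structure stated in the abstract is encoded; the individual belt-area and frontal-region labels are placeholders, since the abstract gives only their numbers.

```python
# Illustrative projection graph of the cortical stages described above.
# Belt and frontal-region labels are placeholders; only their counts come from the text.
auditory_stages = {
    "MGv (ventral medial geniculate)": ["AI", "R", "RT"],                # parallel thalamic input to the core
    "core (AI, R, RT)": [f"belt_area_{i}" for i in range(1, 9)],         # first -> second stage
    "belt (8 proposed areas)": ["parabelt_rostral", "parabelt_caudal"],  # second -> third stage
    "parabelt": ["temporal auditory/multimodal regions"]
                + [f"frontal_region_{i}" for i in range(1, 5)],          # targets of the third stage
}

for source, targets in auditory_stages.items():
    print(source, "->", ", ".join(targets))
```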

Relevance:

10.00%

Publisher:

Abstract:

One of the fascinating properties of the central nervous system is its ability to learn: the ability to alter its functional properties adaptively as a consequence of the interactions of an animal with the environment. The auditory localization pathway provides an opportunity to observe such adaptive changes and to study the cellular mechanisms that underlie them. The midbrain localization pathway creates a multimodal map of space that represents the nervous system's associations of auditory cues with locations in visual space. Various manipulations of auditory or visual experience, especially during early life, that change the relationship between auditory cues and locations in space lead to adaptive changes in auditory localization behavior and to corresponding changes in the functional and anatomical properties of this pathway. Traces of this early learning persist into adulthood, enabling adults to reacquire patterns of connectivity that were learned initially during the juvenile period.
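A toy model can make the kind of adaptive change described above concrete: a simple delta rule that adjusts the mapping from an auditory cue to a location whenever the visually signaled location disagrees with it. The cue range, the visual-displacement framing, the learning rate, and the linear form of the map are illustrative assumptions, not details from the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)

TRUE_GAIN = 1.0  # degrees of azimuth per unit of auditory cue (illustrative)

def visual_location(cue, shift_deg):
    """Location signaled by vision; a displaced visual scene shifts it by a fixed angle."""
    return TRUE_GAIN * cue + shift_deg

def adapt(gain, bias, shift_deg, n_trials=5000, learning_rate=0.01):
    """Delta-rule adjustment of the auditory cue-to-location map toward the visual signal."""
    for _ in range(n_trials):
        cue = rng.uniform(-30.0, 30.0)               # auditory cue sampled across azimuth
        predicted = gain * cue + bias                # location read off the current map
        error = visual_location(cue, shift_deg) - predicted
        gain += learning_rate * error * cue / 900.0  # divide by max cue squared to keep the step small
        bias += learning_rate * error
    return gain, bias

# "Juvenile" adaptation to a 20-degree visual displacement, then recovery after it is removed.
gain, bias = adapt(1.0, 0.0, shift_deg=20.0)
print(f"after displaced visual experience: gain={gain:.2f}, bias={bias:.1f} deg")
gain, bias = adapt(gain, bias, shift_deg=0.0)
print(f"after normal visual experience:    gain={gain:.2f}, bias={bias:.1f} deg")
```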

Relevance:

10.00%

Publisher:

Abstract:

Optimism is growing that the near future will witness rapid growth in human-computer interaction using voice. System prototypes have recently been built that demonstrate speaker-independent, real-time recognition and understanding of naturally spoken utterances, with vocabularies of 1000 to 2000 words and larger. Already, computer manufacturers are building speech recognition subsystems into their new product lines. However, before this technology can be broadly useful, a substantial knowledge base is needed about human spoken language and performance during computer-based spoken interaction. This paper reviews application areas in which spoken interaction can play a significant role, assesses potential benefits of spoken interaction with machines, and compares voice with other modalities of human-computer interaction. It also discusses information that will be needed to build a firm empirical foundation for the design of future spoken and multimodal interfaces. Finally, it argues for a more systematic and scientific approach to investigating spoken input and performance with future language technology.