2 results for Imagery (Psychology)
at Universidad Politécnica de Madrid
Abstract:
It has been suggested that different pathways through the brain are followed depending on the type of information being processed. Although it is now known that information is continuously exchanged between the two hemispheres, language is considered to be processed by the left hemisphere, where Broca's and Wernicke's areas are located, whereas music is thought to be processed mainly by the right hemisphere. According to Sininger and Cone-Wesson (2004), there is a similar but contralateral specialization of the human ears, because the auditory pathways cross over at the brainstem. A previous study showed an effect of musical imagery on spontaneous otoacoustic emissions (SOAEs) (Perez-Acosta and Ramos-Amezquita, 2006), providing evidence of an efferent influence of the auditory cortex on the basilar membrane. Building on these results, the present work is a comparative study between the left and right ears of a population of eight musicians who presented SOAEs. A familiar musical tune was chosen, and the subjects were trained in the task of evoking it after having heard it. Samples of ear-canal signals were recorded and processed to extract the frequency and amplitude of the SOAEs. This procedure was carried out before, during and after the musical image creation task, and the results were analyzed to compare the SOAE responses of the left and right ears. A clear asymmetry in the SOAE response to the musical imagery task was found between the left and right ears: significant changes in SOAE amplitude related to the imagery task were observed only in the right ear of the subjects. These results may suggest predominant left-hemisphere activity related to a melodic image creation task.
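The abstract does not detail the signal-processing pipeline used to extract the SOAE data. The sketch below is a minimal Python illustration of one plausible approach (Welch power spectral density followed by peak picking and peak matching across conditions); the sampling rate, FFT segment length, SNR threshold, matching tolerance and all function names are assumptions for illustration, not values or code from the study.

```python
import numpy as np
from scipy.signal import welch, find_peaks

FS = 44_100  # assumed sampling rate of the ear-canal microphone signal (Hz)

def soae_peaks(signal, fs=FS, min_snr_db=6.0):
    """Estimate candidate SOAE frequencies (Hz) and levels (dB) from one recording.

    Welch PSD plus peak picking is one common way to locate narrow-band
    emissions; the actual study may have used a different estimator.
    """
    freqs, psd = welch(signal, fs=fs, nperseg=8192)
    psd_db = 10 * np.log10(psd + 1e-20)
    noise_floor = np.median(psd_db)                  # crude noise-floor estimate
    peaks, _ = find_peaks(psd_db, height=noise_floor + min_snr_db)
    return freqs[peaks], psd_db[peaks]

def amplitude_changes(before, during, fs=FS, tol_hz=10.0):
    """Match SOAE peaks between two conditions and report amplitude shifts (dB)."""
    f_b, a_b = soae_peaks(before, fs)
    f_d, a_d = soae_peaks(during, fs)
    changes = []
    for f, a in zip(f_b, a_b):
        if f_d.size == 0:
            continue
        j = np.argmin(np.abs(f_d - f))
        if abs(f_d[j] - f) <= tol_hz:
            changes.append((f, a_d[j] - a))          # positive = amplitude grew
    return changes

# Example: compare right-ear recordings taken before vs. during the imagery task.
# right_before, right_during = load_ear_canal_samples(...)  # hypothetical loader
# print(amplitude_changes(right_before, right_during))
```

Running the same comparison on the left-ear recordings would then allow the kind of left/right contrast reported in the abstract.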
Abstract:
Low-cost systems that can obtain a high-quality foreground segmentation almost independently of the existing illumination conditions for indoor environments are very desirable, especially for security and surveillance applications. In this paper, a novel foreground segmentation algorithm that uses only a Kinect depth sensor is proposed to satisfy the aforementioned system characteristics. This is achieved by combining a mixture-of-Gaussians-based background subtraction algorithm with a new Bayesian network that robustly predicts the foreground/background regions between consecutive time steps. The Bayesian network explicitly exploits the intrinsic characteristics of the depth data by means of two dynamic models that estimate the spatial and depth evolution of the foreground/background regions. The most remarkable contribution is the depth-based dynamic model that predicts the changes in the foreground depth distribution between consecutive time steps. This is a key difference with regard to visible imagery, where the color/gray distribution of the foreground is typically assumed to be constant. Experiments carried out on two different depth-based databases demonstrate that the proposed combination of algorithms is able to obtain a more accurate segmentation of the foreground/background than other state-of-the-art approaches.
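The abstract does not specify the structure of the Bayesian network or its dynamic models. The sketch below only illustrates the first ingredient, a mixture-of-Gaussians background subtractor applied to Kinect depth frames, combined with a simplified per-pixel temporal depth-consistency check that stands in for, and is not equivalent to, the authors' Bayesian prediction step. The depth range, thresholds and MOG2 parameters are assumptions for illustration.

```python
import numpy as np
import cv2

class DepthForegroundSegmenter:
    def __init__(self, max_depth_mm=8000, depth_jump_mm=150):
        # MOG2 is OpenCV's mixture-of-Gaussians background subtractor; shadow
        # detection is disabled because it is meaningless for depth data.
        self.mog = cv2.createBackgroundSubtractorMOG2(
            history=300, varThreshold=16, detectShadows=False)
        self.max_depth_mm = max_depth_mm    # assumed Kinect working range (mm)
        self.depth_jump_mm = depth_jump_mm  # assumed tolerated per-frame depth change
        self.prev = None

    def segment(self, depth_mm):
        """Return a binary foreground mask for one 16-bit Kinect depth frame (mm)."""
        depth = depth_mm.astype(np.float32)

        # 1) MoG background subtraction on the depth image rescaled to 8 bits.
        depth_8u = np.clip(depth / self.max_depth_mm * 255.0, 0, 255).astype(np.uint8)
        mog_mask = self.mog.apply(depth_8u) > 0

        # 2) Simplified temporal cue standing in for the paper's Bayesian depth
        #    dynamic model (NOT the authors' method): pixels whose depth jumped
        #    sharply since the previous frame are treated as likely foreground.
        if self.prev is not None:
            moved = np.abs(depth - self.prev) > self.depth_jump_mm
        else:
            moved = np.zeros_like(mog_mask)
        self.prev = depth

        # Combine the two cues and clean up with a small morphological opening.
        fg = ((mog_mask | moved).astype(np.uint8)) * 255
        return cv2.morphologyEx(fg, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

# Usage, per incoming frame (uint16 depth in millimetres):
#     mask = segmenter.segment(depth_frame)
```

The point of the sketch is only to show why depth helps: the temporal cue compares raw distances rather than color, so it keeps working when the illumination changes, which is the property the paper exploits in a far more principled way.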