2 results for Field evidence

at Bucknell University Digital Commons - Pennsylvania - USA


Relevance:

30.00%

Publisher:

Abstract:

We present a near-infrared (0.9-2.4 μm) spectroscopic study of 73 field ultracool dwarfs with spectroscopic and/or kinematic evidence of youth (≈10-300 Myr). Our sample comprises 48 low-resolution (R ≈ 100) spectra and 41 moderate-resolution (R ≳ 750-2000) spectra. First, we establish a method for spectral typing M5-L7 dwarfs at near-IR wavelengths that is independent of gravity. We find that both visual and index-based classification in the near-IR yield spectral types consistent with optical spectral types, though with a small systematic offset in the case of visual classification in the J and K bands. Second, we examine features in the spectra of ~10 Myr ultracool dwarfs to define a set of gravity-sensitive indices based on FeH, VO, K I, Na I, and the H-band continuum shape. We then create an index-based method for classifying the gravities of M6-L5 dwarfs that gives results consistent with gravity classifications from optical spectroscopy. Our index-based classification can distinguish young objects from dusty ones. Guided by the resulting classifications, we propose a set of low-gravity spectral standards for the near-IR. Finally, we estimate the ages corresponding to our gravity classifications.
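The index-based classification described above boils down to measuring flux ratios over narrow wavelength windows around gravity-sensitive features and comparing them against type- or gravity-dependent thresholds. The Python sketch below is a minimal illustration of that general idea only: the band edges, the "FeH-like" label, and the flat stand-in spectrum are placeholders of our own, not the index definitions or data used in the study.

```python
import numpy as np

def spectral_index(wavelength, flux, num_band, den_band):
    """Simple flux-ratio spectral index.

    wavelength : array of wavelengths (microns)
    flux       : flux array on the same grid
    num_band, den_band : (lo, hi) wavelength windows in microns
    """
    num = np.nanmedian(flux[(wavelength >= num_band[0]) & (wavelength <= num_band[1])])
    den = np.nanmedian(flux[(wavelength >= den_band[0]) & (wavelength <= den_band[1])])
    return num / den

# Illustrative use with a flat stand-in spectrum; the window edges are
# hypothetical, not the published FeH index definition.
wav = np.linspace(0.9, 2.4, 3000)
flx = np.ones_like(wav)
feh_like = spectral_index(wav, flx, num_band=(0.98, 0.99), den_band=(1.02, 1.03))
print(f"FeH-like index: {feh_like:.3f}")
```

In practice such an index would be computed for each object and compared against empirical boundaries for field-gravity versus low-gravity dwarfs of the same spectral type.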

Relevance:

30.00%

Publisher:

Abstract:

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
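Segmentation in statistical-learning studies of this kind is conventionally modeled as tracking transitional probabilities between adjacent syllables and placing word boundaries where those probabilities dip. The sketch below is a minimal, illustrative Python implementation of that standard idea; the toy syllable stream, the syllables themselves, and the boundary threshold are assumptions for demonstration and do not reproduce the authors' stimuli or analysis.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """P(next | current) for each adjacent syllable pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): c / first_counts[a] for (a, b), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Insert a word boundary wherever the transitional probability dips below threshold."""
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Toy stream built from two made-up "words" (pa-bi-ku, ti-bu-do) in varied order,
# so within-word transitions are more probable than across-word transitions.
stream = ["pa", "bi", "ku", "ti", "bu", "do", "ti", "bu", "do",
          "pa", "bi", "ku", "pa", "bi", "ku", "ti", "bu", "do"]
tps = transitional_probabilities(stream)
print(segment(stream, tps))  # recovers 'pabiku' and 'tibudo' as words
```

On this view, the McGurk manipulation matters because the syllable identities entering such statistics depend on whether the auditory signal alone or the integrated audiovisual percept is what the learner tracks.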