3 results for 740500 Special Education

in DigitalCommons@University of Nebraska - Lincoln


Relevance:

80.00%

Publisher:

Abstract:

A systematic social skills training intervention to teach reciprocal sharing was designed and implemented with triads of preschool-age children, each including one child with an autism spectrum disorder (ASD) and two untrained classroom peers with no delays or disabilities. A multiple-baseline research design was used to evaluate the effects of the social skills training intervention on the social-communication and sharing behaviors exhibited by the participants with ASD during interactive play activities with peers. The social-communication behaviors measured included contact and distal gestures, touching peers, and speaking. Four sharing behaviors were also measured: sharing toys and objects, receiving toys and objects, asking others to share, and giving requested items. Results indicated considerable gains in overall social-communication behaviors, with the greatest improvements observed in the participants’ use of contact gestures and speaking. Slightly increasing trends suggested that participants with ASD made modest gains in the sharing skills taught during social skills training lessons. Social validity data indicated that both the participants with ASD and the peer participants found the intervention appropriate and acceptable, and staff perception ratings indicated significant changes in the social skills of participants with ASD. Study outcomes have practical implications for educational practitioners seeking to enhance the social communication and social interactions of young children with ASD. Study limitations and future directions for research are discussed.

Relevance:

80.00%

Publisher:

Abstract:

Masticatory muscle contraction causes both jaw movement and tissue deformation during function. Natural chewing data from 25 adult miniature pigs were studied by means of time series analysis. The data set included simultaneous recordings of electromyography (EMG) from the bilateral masseter (MA), zygomaticomandibularis (ZM) and lateral pterygoid muscles, bone surface strains from the left squamosal bone (SQ), condylar neck (CD) and mandibular corpus (MD), and linear deformation of the capsule of the jaw joint measured bilaterally using differential variable reluctance transducers. Pairwise relationships were examined by calculating cross-correlation functions. Jaw-adductor muscle activity of the MA and ZM was highly cross-correlated with CD and SQ strains but only weakly with MD strain. No muscle’s activity was strongly linked to capsular deformation of the jaw joint, nor were bone strains and capsular deformation tightly linked. Homologous muscle pairs showed the greatest synchronization of signals, but their signals were not significantly more correlated than those of non-homologous muscle pairs. These results suggested that bone strains and capsular deformation are driven by different mechanical regimes: muscle contraction and the ensuing reaction forces are probably responsible for bone strains, whereas capsular deformation is more likely a product of jaw movement.
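The abstract does not include the analysis code; as a minimal illustration of the pairwise cross-correlation approach it describes, the sketch below uses Python with NumPy and SciPy on synthetic stand-in signals. The function name `xcorr`, the 1 kHz sampling rate, and the built-in 50 ms lag are assumptions for the example, not values from the study.

```python
import numpy as np
from scipy import signal

def xcorr(x, y, fs):
    """Approximately normalized cross-correlation between two equally sampled signals.

    x, y : 1-D arrays of the same length (e.g., rectified masseter EMG and
           condylar-neck strain); fs : sampling rate in Hz.
    Returns lag values in seconds and the cross-correlation function.
    """
    x = (x - x.mean()) / (x.std() * len(x))
    y = (y - y.mean()) / y.std()
    ccf = signal.correlate(x, y, mode="full")
    lags = signal.correlation_lags(len(x), len(y), mode="full") / fs
    return lags, ccf

# Synthetic stand-ins: a rhythmic "EMG" burst train and a delayed "strain" trace
fs = 1000                                   # assumed 1 kHz sampling
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
emg = np.abs(np.sin(2 * np.pi * 1.5 * t)) + 0.1 * rng.standard_normal(t.size)
strain = np.roll(emg, 50) + 0.1 * rng.standard_normal(t.size)   # ~50 ms lag

lags, ccf = xcorr(emg, strain, fs)
k = np.argmax(ccf)
print(f"peak correlation {ccf[k]:.2f} at lag {lags[k] * 1000:.0f} ms")
```

In this framing, each muscle-strain or muscle-capsule pairing is summarized by the peak of its cross-correlation function, i.e., how strongly and at what delay the two signals co-vary.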

Relevance:

80.00%

Publisher:

Abstract:

This study investigated the influence of top-down and bottom-up information on speech perception in complex listening environments. Specifically, the effects of listening to different types of processed speech were examined on intelligibility and on simultaneous visual-motor performance, with the goal of extending the generalizability of speech-perception results to environments outside the laboratory. The effect of bottom-up information was evaluated with natural, cell phone, and synthetic speech; the effect of simultaneous tasks was evaluated with concurrent visual-motor and memory tasks. Earlier work on the perception of speech during simultaneous visual-motor tasks has shown inconsistent results (Choi, 2004; Strayer & Johnston, 2001). In the present experiments, two dual-task paradigms were constructed to mimic non-laboratory listening environments. In the first two experiments, an auditory word repetition task was the primary task and a visual-motor task was the secondary task. Participants were presented with different kinds of speech in a background of multi-speaker babble and were asked to repeat the last word of every sentence while performing the simultaneous tracking task; word accuracy and visual-motor task performance were measured. Taken together, the results of Experiments 1 and 2 showed that natural speech was more intelligible than synthetic speech, and that synthetic speech was better perceived than cell phone speech. The visual-motor measure provided independent, supplemental information and allowed a fuller understanding of the speech perception process. Experiment 3 was conducted to determine whether the automaticity of the tasks (Schneider & Shiffrin, 1977) helped to explain the results of the first two experiments. Cell phone speech allowed better simultaneous pursuit rotor performance only at low intelligibility levels, when participants ignored the listening task, and simultaneous task performance improved dramatically for natural speech when intelligibility was good. Overall, knowledge of intelligibility alone is insufficient to characterize the processing of different speech sources; additional measures, such as attentional demands and performance on simultaneous tasks, are also important in characterizing the perception of different kinds of speech in complex listening environments.
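The abstract does not specify how the dependent measures were computed; as a purely illustrative sketch, assuming word accuracy is the percentage of final words repeated correctly and tracking performance is scored as the proportion of time on target, the dual-task scoring for each speech condition might be tallied like this (the trial records and field names are hypothetical):

```python
from collections import defaultdict

# Hypothetical trial records: (speech type, target final word, participant response,
# proportion of time the cursor stayed on target during the tracking task)
trials = [
    ("natural",    "garden", "garden", 0.91),
    ("synthetic",  "garden", "guard",  0.74),
    ("cell_phone", "garden", "garden", 0.80),
    # ... one record per sentence presented in multi-speaker babble
]

word_correct = defaultdict(list)
time_on_target = defaultdict(list)
for speech_type, target, response, tracking in trials:
    word_correct[speech_type].append(response.strip().lower() == target.lower())
    time_on_target[speech_type].append(tracking)

for speech_type in word_correct:
    acc = 100 * sum(word_correct[speech_type]) / len(word_correct[speech_type])
    trk = 100 * sum(time_on_target[speech_type]) / len(time_on_target[speech_type])
    print(f"{speech_type:10s}  word accuracy {acc:5.1f}%   time on target {trk:5.1f}%")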