115 results for Regional speech
in University of Queensland eSpace - Australia
Abstract:
Background. The mechanisms by which the abdominal muscles move and control the lumbosacral spine are not clearly understood. Descriptions of abdominal morphology are also conflicting and the regional anatomy of these muscles has not been comprehensively examined. The aim of this study was to investigate the morphology of regions of transversus abdominis and obliquus internus and externus abdominis. Methods. Anterior and posterolateral abdominal walls were dissected bilaterally in 26 embalmed human cadavers. The orientation, thickness and length of the upper, middle and lower fascicles of transversus abdominis and obliquus internus abdominis, and the upper and middle fascicles of obliquus externus abdominis were measured. Findings. Differences in fascicle orientation, thickness and length were documented between the abdominal muscles and between regions of each muscle. The fascicles of transversus abdominis were horizontal in the upper region, with increasing inferomedial orientation in the middle and lower regions. The upper and middle fascicles of obliquus internus abdominis were oriented superomedially and the lower fascicles inferomedially. The mean vertical dimension of transversus abdominis that attaches to the lumbar spine via the thoracolumbar fascia was 5.2 (SD 2.1) cm. Intramuscular septa were observed between regions of transversus abdominis, and obliquus internus abdominis could be separated into two distinct layers in the lower and middle regions. Interpretation. This study provides quantitative data on morphological differences between regions of the abdominal muscles, which suggest variation in function between muscle regions. Precise understanding of abdominal muscle anatomy is required for incorporation of these muscles into biomechanical models. Furthermore, regional variation in their morphology may reflect differences in function. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
The primary objective of this study was to assess the lingual kinematic strategies used by younger and older adults to increase rate of speech. It was hypothesised that the strategies used by the older adults would differ from those of the younger adults, either as a direct result of, or in response to a need to compensate for, age-related changes in the tongue. Electromagnetic articulography was used to examine the tongue movements of eight young (M = 26.7 years) and eight older (M = 67.1 years) females during repetitions of /ta/ and /ka/ at a controlled moderate rate and then as fast as possible. The younger and older adults were found to significantly reduce consonant durations and increase syllable repetition rate by similar proportions. To achieve these reduced durations both groups appeared to use the same strategy, that of reducing the distances travelled by the tongue. Further comparisons at each rate, however, suggested a speed-accuracy trade-off and increased speech monitoring in the older adults. The results may assist in differentiating articulatory changes associated with normal aging from pathological changes found in disorders that affect the older population.
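The kinematic measures described (distance travelled by the tongue and gesture duration from articulograph traces) can be summarised with a short script. The sketch below uses a hypothetical tongue-coil trace with assumed names and units; it is not the study's analysis pipeline, only an illustration of how per-gesture distance, duration, and peak speed might be derived from sampled x/y positions.

```python
# A minimal sketch of per-gesture kinematics from a sampled tongue-coil trace.
# The trace, its sampling rate, and the units (mm, s) are assumptions.
import numpy as np

def movement_summary(t, x, y):
    """Path distance, duration, and peak speed of one stroke, given time (s)
    and coil x/y position (mm) between two movement onsets."""
    path = np.sum(np.hypot(np.diff(x), np.diff(y)))          # distance travelled
    speed = np.hypot(np.gradient(x, t), np.gradient(y, t))   # instantaneous speed
    return {"distance_mm": path,
            "duration_s": t[-1] - t[0],
            "peak_speed_mm_s": speed.max()}

# Hypothetical ~100 Hz trace of a single /ta/ closing gesture
t = np.linspace(0, 0.12, 13)
x = np.linspace(0, 4.0, 13)            # 4 mm forward excursion
y = 2.0 * np.sin(np.pi * t / 0.12)     # 2 mm raise-and-lower
print(movement_summary(t, x, y))
```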
Abstract:
Spectral peak resolution was investigated in normal hearing (NH), hearing impaired (HI), and cochlear implant (CI) listeners. The task involved discriminating between two rippled noise stimuli in which the frequency positions of the log-spaced peaks and valleys were interchanged. The ripple spacing was varied adaptively from 0.13 to 11.31 ripples/octave, and the minimum ripple spacing at which a reversal in peak and trough positions could be detected was determined as the spectral peak resolution threshold for each listener. Spectral peak resolution was best, on average, in NH listeners, poorest in CI listeners, and intermediate for HI listeners. There was a significant relationship between spectral peak resolution and both vowel and consonant recognition in quiet across the three listener groups. The results indicate that the degree of spectral peak resolution required for accurate vowel and consonant recognition in quiet backgrounds is around 4 ripples/octave, and that spectral peak resolution poorer than around 1–2 ripples/octave may result in highly degraded speech recognition. These results suggest that efforts to improve spectral peak resolution for HI and CI users may lead to improved speech recognition.
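To make the stimulus construction concrete, the sketch below generates rippled noise whose spectral peaks are log-spaced in frequency, plus a peak/trough-reversed counterpart, as in the discrimination task. The sampling rate, bandwidth, and ripple depth are assumptions for illustration, not the study's parameters.

```python
# A minimal sketch of log-spaced rippled noise and its peak/trough-inverted pair.
import numpy as np

def rippled_noise(ripples_per_octave, inverted=False, fs=44100, dur=0.5,
                  f_lo=100.0, f_hi=5000.0, depth_db=30.0, seed=0):
    """Noise whose spectrum ripples sinusoidally on a log-frequency axis;
    inverted=True swaps the positions of the spectral peaks and troughs."""
    n = int(fs * dur)
    rng = np.random.default_rng(seed)
    spec = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)            # log-frequency position
    phase = np.pi if inverted else 0.0               # pi shift interchanges peaks and troughs
    level_db = 0.5 * depth_db * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    gain = np.zeros_like(freqs)
    gain[band] = 10.0 ** (level_db / 20.0)
    return np.fft.irfft(spec * gain, n)

# One trial: a "standard" and a peak/trough-reversed stimulus at the same ripple density.
standard = rippled_noise(2.0, inverted=False)
reversed_ripple = rippled_noise(2.0, inverted=True)
# Adaptively, the density is raised after correct responses and lowered after
# errors until the listener's threshold (in ripples/octave) is reached.
```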
Abstract:
The purpose of this study was to explore the potential advantages, both theoretical and applied, of preserving low-frequency acoustic hearing in cochlear implant patients. Several hypotheses are presented that predict that residual low-frequency acoustic hearing along with electric stimulation for high frequencies will provide an advantage over traditional long-electrode cochlear implants for the recognition of speech in competing backgrounds. A simulation experiment in normal-hearing subjects demonstrated a clear advantage for preserving low-frequency residual acoustic hearing for speech recognition in a background of other talkers, but not in steady noise. Three subjects with an implanted "short-electrode" cochlear implant and preserved low-frequency acoustic hearing were also tested on speech recognition in the same competing backgrounds and compared to a larger group of traditional cochlear implant users. Each of the three short-electrode subjects performed better than any of the traditional long-electrode implant subjects for speech recognition in a background of other talkers, but not in steady noise, in general agreement with the simulation studies. When compared to a subgroup of traditional implant users matched according to speech recognition ability in quiet, the short-electrode patients showed a 9-dB advantage in the multitalker background. These experiments provide strong preliminary support for retaining residual low-frequency acoustic hearing in cochlear implant patients. The results are consistent with the idea that better perception of voice pitch, which can aid in separating voices in a background of other talkers, was responsible for this advantage.
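One common way such acoustic-plus-electric simulations are built for normal-hearing listeners is to pair low-pass filtered speech (standing in for residual acoustic hearing) with noise-vocoded speech above the crossover (standing in for electric stimulation). The sketch below is a minimal version of that idea; the crossover frequency, filter orders, and channel count are assumptions and may differ from the processing used in the study.

```python
# A minimal sketch of a combined acoustic + electric (EAS) simulation:
# low-pass speech below the crossover, noise-vocoded speech above it.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def eas_simulation(speech, fs, crossover_hz=500.0, n_channels=6, top_hz=6000.0):
    # "acoustic" part: low-pass filtered speech
    lp = sosfiltfilt(butter(4, crossover_hz, "low", fs=fs, output="sos"), speech)
    # "electric" part: noise vocoder over log-spaced channels above the crossover
    edges = np.geomspace(crossover_hz, top_hz, n_channels + 1)
    carrier = np.random.default_rng(0).standard_normal(len(speech))
    vocoded = np.zeros_like(speech)
    for lo_f, hi_f in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo_f, hi_f], "bandpass", fs=fs, output="sos")
        env = np.abs(hilbert(sosfiltfilt(sos, speech)))   # channel envelope
        vocoded += sosfiltfilt(sos, carrier) * env        # envelope-modulated noise band
    return lp + vocoded

fs = 16000
speech = np.random.default_rng(1).standard_normal(fs)    # 1 s placeholder signal
mixed = eas_simulation(speech, fs)
```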
Abstract:
The purpose of the present study was to examine the benefits of providing audible speech to listeners with sensorineural hearing loss when the speech is presented in a background noise. Previous studies have shown that when listeners have a severe hearing loss in the higher frequencies, providing audible speech (in a quiet background) to these higher frequencies usually results in no improvement in speech recognition. In the present experiments, speech was presented in a background of multitalker babble to listeners with various severities of hearing loss. The signal was low-pass filtered at numerous cutoff frequencies and speech recognition was measured as additional high-frequency speech information was provided to the hearing-impaired listeners. It was found in all cases, regardless of hearing loss or frequency range, that providing audible speech resulted in an increase in recognition score. The change in recognition as the cutoff frequency was increased, along with the amount of audible speech information in each condition (articulation index), was used to calculate the "efficiency" of providing audible speech. Efficiencies were positive for all degrees of hearing loss. However, the gains in recognition were small, and the maximum score obtained by any listener was low, due to the noise background. An analysis of error patterns showed that due to the limited speech audibility in a noise background, even severely impaired listeners used additional speech audibility in the high frequencies to improve their perception of the "easier" features of speech, including voicing.
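The "efficiency" measure described relates the gain in recognition score to the gain in audible speech information (articulation index, AI) as the low-pass cutoff is raised. The sketch below shows one plausible form of that computation using hypothetical numbers; the study's exact formulation and normalisation may differ.

```python
# A minimal sketch of score-gain-per-AI-gain "efficiency" (hypothetical data).
cutoffs_hz = [1400, 2000, 2800, 4000]    # low-pass cutoff conditions
scores_pct = [38.0, 45.0, 49.0, 52.0]    # hypothetical recognition scores in babble
ai         = [0.18, 0.24, 0.29, 0.33]    # hypothetical articulation index values

for i in range(1, len(cutoffs_hz)):
    d_score = scores_pct[i] - scores_pct[i - 1]
    d_ai = ai[i] - ai[i - 1]
    # percentage-point gain per unit of added AI, scaled so ~1.0 means the
    # added audibility was fully used (one plausible normalisation)
    efficiency = d_score / (100.0 * d_ai)
    print(f"{cutoffs_hz[i-1]}->{cutoffs_hz[i]} Hz: +{d_score:.1f}% score, "
          f"+{d_ai:.2f} AI, efficiency {efficiency:.2f}")
```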
Abstract:
The purpose of this paper is to provide a cross-linguistic survey of the variation of coding strategies that are available for the grammatical distinction between direct and indirect speech representation with a particular focus on the expression of indirect reported speech. Cross-linguistic data from a sample of 42 languages will be provided to illustrate the range of available grammatical coding strategies.
Abstract:
To the Editor: The increase in medical graduates expected over the next decade presents a huge challenge to the many stakeholders involved in providing their prevocational and vocational medical training.1 Increased numbers will add significantly to the teaching and supervision workload for registrars and consultants, while specialist training and access to advanced training positions may be compromised. However, this predicament may also provide opportunities for innovation in the way internships are delivered. Although facing these same challenges, regional and rural hospitals could use this situation to enhance their workforce by creating opportunities for interns and junior doctors to acquire valuable experience in non-metropolitan settings. We surveyed a representative sample (n = 147; 52% of total cohort) of Year 3 Bachelor of Medicine and Bachelor of Surgery students at the University of Queensland about their perceptions and expectations of their impending internship and the importance of its location (ie, urban/metropolitan versus regional/rural teaching hospitals) to their future training and career plans. Most students (n = 127; 86%) reported a high degree of contemplation about their internship choice. Issues relating to career progression and support ranked highest in their expectations. Most perceived internships in urban/metropolitan hospitals as more beneficial to their future career prospects compared with regional/rural hospitals, but, interestingly, felt that they would have more patient responsibility and greater contact with and supervision by senior staff in a regional setting (Box). Regional and rural hospitals should try to harness these positive perceptions and act to address any real or perceived shortcomings in order to enhance their future workforce.2 They could look to establish partnerships with rural clinical schools3 to enhance recruitment of interns as early as Year 3. To maximise competitiveness with their urban counterparts, regional and rural hospitals need to offer innovative training and career progression pathways to junior doctors, to combat the perception that internships in urban hospitals are more beneficial to future career prospects. Partnerships between hospitals, medical schools and vocational colleges, with input from postgraduate medical councils, should provide vertical integration4 in the important period between student and doctor. Work is underway to more closely evaluate and compare the intern experience across regional/rural and urban/metropolitan hospitals, and track student experiences and career choices longitudinally. This information may benefit teaching hospitals and help identify the optimal combination of resources necessary to provide quality teaching and a clear career pathway for the expected influx of new interns.
Abstract:
Parkinson's disease (PD) is a neurodegenerative movement disorder primarily due to basal ganglia dysfunction. While much research has been conducted on Parkinsonian deficits in the traditional arena of musculoskeletal limb movement, research in other functional motor tasks is lacking. The present study examined articulation in PD with increasingly complex sequences of articulatory movement. Of interest was whether dysfunction would affect articulation in the same manner as in limb-movement impairment. In particular, since very similar (homogeneous) articulatory sequences (the tongue twister effect) are more difficult for healthy individuals to achieve than dissimilar (heterogeneous) gestures, while the reverse may apply for skeletal movements in PD, we asked which factor would dominate when PD patients articulated various grades of artificial tongue twisters: the influence of disease or a possible difference between the two motor systems. Execution was especially impaired when articulation involved a sequence of motor programs heterogeneous in terms of place of articulation. The results are suggestive of a hypokinetic tendency in complex sequential articulatory movement, as in limb movement. It appears that PD patients do show abnormalities in articulatory movement which are similar to those of the musculoskeletal system. The present study suggests that an underlying disease effect modulates movement impairment across different functional motor systems. (C) 1998 Academic Press.
Abstract:
Background. To examine the role of long-term swimming exercise on regional and total body bone mineral density (BMD) in men. Methods. Experimental design: cross-sectional. Setting: musculoskeletal research laboratory at a medical center. Participants: we compared elite collegiate swimmers (n=11) to age-, weight-, and height-matched non-athletic controls (n=11). Measures: BMD (g/cm²) of the lumbar spine (L2-4), proximal femur (femoral neck, trochanter, Ward's triangle), total body and various subregions of the total body, as well as regional and total body fat and bone mineral-free lean mass (LM), was assessed by dual-energy X-ray absorptiometry (DXA, Hologic QDR 1000/W). Results. Swimmers, who commenced training at 10.7+/-3.7 yrs (mean+/-SD) and trained for 24.7+/-4.2 hrs per week, had a greater amount of LM (p<0.05), lower fat mass (p<0.001) and lower percent body fat (9.5 vs 16.2%, p<0.001) than controls. There was no significant difference between groups for regional or total body BMD. In stepwise multiple regression analysis, body weight was a consistent independent predictor of regional and total body BMD. Conclusions. These results suggest that long-term swimming is not an osteogenic mode of training in college-aged males. This supports our previous findings in young female swimmers, who displayed no bone mass benefits despite long-standing athletic training.
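As an illustration of the stepwise regression described, the sketch below runs a simple forward selection over hypothetical predictors of BMD (body weight, lean mass, training hours). It scores candidate models by cross-validation rather than the p-value entry criteria typically used in such analyses, and the data are synthetic, so it is an analogy rather than the study's method.

```python
# A minimal sketch of forward stepwise selection of BMD predictors (synthetic data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def forward_stepwise(X, y, names):
    selected, remaining, best = [], list(range(X.shape[1])), -np.inf
    while remaining:
        scores = [(cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                   cv=5).mean(), j) for j in remaining]
        score, j = max(scores)
        if score <= best:              # stop when no candidate improves the fit
            break
        best = score
        selected.append(j)
        remaining.remove(j)
    return [names[j] for j in selected]

rng = np.random.default_rng(1)
n = 22                                           # 11 swimmers + 11 controls
weight = rng.normal(75, 8, n)
lean = rng.normal(60, 6, n)
hours = rng.uniform(0, 25, n)
bmd = 0.6 + 0.007 * weight + rng.normal(0, 0.05, n)   # weight-driven toy outcome
X = np.column_stack([weight, lean, hours])
print(forward_stepwise(X, bmd, ["body_weight", "lean_mass", "training_hours"]))
```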
Abstract:
A major ongoing debate in population ecology has surrounded the causative factors underlying the abundance of phytophagous insects and whether or not these factors limit or regulate herbivore populations. However, it is often difficult to identify mortality agents in census data, and their distribution and relative importance across large spatial scales are rarely understood. Here, we present life tables for egg batches and larval cohorts of the processionary caterpillar Ochrogaster lunifer Herrich-Schaffer, using intensive local sampling combined with extensive regional monitoring to ascertain the relative importance of different mortality factors at different localities. Extinction of entire cohorts (representing the entire reproductive output of one female) at natural localities was high, with 82% of the initial 492 cohorts going extinct. Mortality was highest in the egg and early instar stages due to predation from dermestid beetles, and while different mortality factors (e.g. hatching failure, egg parasitism and failure to establish on the host) were present at many localities, dermestid predation, either directly observed or inferred from indirect evidence, was the dominant mortality factor at 89% of localities surveyed. Predation was significantly higher in plantations than in natural habitats. The second most important mortality factor was resource depletion, with 14 cohorts defoliating their hosts. Egg and larval parasitism were not major mortality agents. A combination of predation and resource depletion consistently accounted for the majority of mortality across localities, suggesting that both factors are important in limiting population abundance. This evidence shows that O. lunifer is not regulated by natural enemies alone; rather, natural enemies and resource patches (Acacia trees) ultimately, and frequently, act together to limit population growth.
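The cohort-level bookkeeping behind such life-table summaries can be shown briefly. The sketch below uses hypothetical cohort records with assumed field names to compute the share of cohorts going extinct and the dominant mortality factor per locality; it is illustrative only, not the study's dataset or analysis.

```python
# A minimal sketch of cohort extinction and dominant-mortality-factor tallies
# (hypothetical records; field names are assumptions).
from collections import Counter, defaultdict

cohorts = [  # one record per cohort: locality, fate, main mortality factor
    {"locality": "A", "extinct": True,  "main_factor": "dermestid predation"},
    {"locality": "A", "extinct": True,  "main_factor": "dermestid predation"},
    {"locality": "A", "extinct": False, "main_factor": "resource depletion"},
    {"locality": "B", "extinct": True,  "main_factor": "hatching failure"},
    {"locality": "B", "extinct": True,  "main_factor": "dermestid predation"},
]

extinct_share = sum(c["extinct"] for c in cohorts) / len(cohorts)
print(f"cohort extinction: {extinct_share:.0%}")

by_locality = defaultdict(Counter)
for c in cohorts:
    by_locality[c["locality"]][c["main_factor"]] += 1
for loc, counts in by_locality.items():
    factor, n = counts.most_common(1)[0]
    print(f"locality {loc}: dominant factor = {factor} ({n}/{sum(counts.values())} cohorts)")
```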