896 results for May <Familie : 14.-21. Jh. : Bern>


Relevance: 40.00%

Abstract:

STUDY QUESTION: Can the number of oocytes retrieved in IVF cycles be predictive of the age at menopause?
SUMMARY ANSWER: The number of retrieved oocytes can be used as an indirect assessment of the extent of ovarian reserve to provide information on the duration of the reproductive life span in women of different ages.
WHAT IS KNOWN ALREADY: Menopause is determined by the exhaustion of the ovarian follicular pool. Ovarian reserve is the main factor influencing ovarian response in IVF cycles. As a consequence, the response to ovarian stimulation with the administration of gonadotrophins in IVF treatment may be informative about the age at menopause.
STUDY DESIGN, SIZE, DURATION: In the present cross-sectional study, participants were 1585 infertile women from an IVF clinic and 2635 menopausal women from a more general population.
PARTICIPANTS/MATERIALS, SETTING, METHODS: For all infertile women, the response to ovarian stimulation with gonadotrophins was recorded. For menopausal women, relevant demographic characteristics were available for the analysis.
MAIN RESULTS AND THE ROLE OF CHANCE: A cubic function described the relationship between mean numbers of oocytes and age, with all terms being statistically significant. From the estimated residual distribution of the actual number of oocytes about this mean, a distribution of the age at which no oocytes would be retrieved following ovarian stimulation was derived. This was compared with the distribution of the age at menopause from the menopausal women, showing that menopause occurred about a year later.
LIMITATIONS, REASONS FOR CAUTION: The retrieved oocyte data were from infertile women, while the menopausal ages were from a more general population.
WIDER IMPLICATIONS OF THE FINDINGS: In the present study, we have shown some similarity between the distributions of the age at which no retrieved oocytes can be expected after ovarian stimulation and the age at menopause. For a given age, the lower the ovarian reserve, the lower the number of retrieved oocytes and the earlier the age at which menopause would occur.
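As an illustration of the curve-fitting step described in this abstract, the sketch below (Python, with hypothetical ages and mean oocyte counts rather than the study's data) fits a cubic to mean oocyte yield against age and reads off the age at which the fitted mean yield reaches zero.

```python
# Minimal sketch with hypothetical data: fit a cubic to mean oocyte yield vs. age,
# then find the age at which the fitted mean yield reaches zero, as an indirect
# proxy for the end of the reproductive life span.
import numpy as np

ages = np.array([25, 28, 31, 34, 37, 40, 43, 46])                      # hypothetical ages (years)
mean_oocytes = np.array([12.1, 11.8, 10.9, 9.4, 7.2, 4.8, 2.3, 0.6])   # hypothetical mean yields

coeffs = np.polyfit(ages, mean_oocytes, deg=3)   # cubic: a3*x^3 + a2*x^2 + a1*x + a0
cubic = np.poly1d(coeffs)

# Scan a plausible age window for the first point where the fitted mean drops to zero
ages_fine = np.linspace(35, 60, 501)
below_zero = ages_fine[cubic(ages_fine) <= 0]
if below_zero.size:
    print(f"Predicted age of zero expected oocyte yield: {below_zero[0]:.1f} years")
else:
    print("Fitted curve does not reach zero within the 35-60 year window")
```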

Relevance: 40.00%

Abstract:

Purpose: To characterize the changes occurring in choroidal thickness (ChT) across the posterior pole during accommodation using enhanced-depth imaging optical coherence tomography (OCT).
Methods: Forty participants (mean age 21 ± 2 years) had measures of ChT and ocular biometry taken during accommodation to 0, 3, and 6 diopter (D) stimuli, with the Spectralis OCT and Lenstar biometer. A Badal optometer and cold mirror system was mounted on both instruments, allowing measurement collection while subjects viewed an external fixation target at varying accommodative demands.
Results: The choroid exhibited significant thinning during accommodation to the 6 D stimulus in both subfoveal (mean change, −5 ± 7 μm) and parafoveal regions (P < 0.001). The magnitude of these changes varied by parafoveal meridian, with the largest changes seen in the temporal (−9 ± 12 μm) and inferotemporal (−8 ± 8 μm) meridians (P < 0.001). Axial length increased with accommodation (mean change, +5 ± 11 μm at 3 D, +14 ± 13 μm at 6 D), and these changes were weakly negatively associated with the choroidal changes (r² = 0.114, P < 0.05).
Conclusions: A small but significant thinning of the choroid was observed at the 6 D accommodation demand, which was greatest in the temporal and inferotemporal parafoveal choroid, and increased with increasing eccentricity from the fovea. The regional variation in the parafoveal thinning corresponds to the distribution of the nonvascular smooth muscle within the uvea, which may implicate these cells as the potential mechanism by which the choroid thins during accommodation.

Relevance: 40.00%

Abstract:

We have genotyped 14,436 nonsynonymous SNPs (nsSNPs) and 897 major histocompatibility complex (MHC) tag SNPs from 1,000 independent cases of ankylosing spondylitis (AS), autoimmune thyroid disease (AITD), multiple sclerosis (MS) and breast cancer (BC). Comparing these data against a common control dataset derived from 1,500 randomly selected healthy British individuals, we report initial association and independent replication in a North American sample of two new loci related to ankylosing spondylitis, ARTS1 and IL23R, and confirmation of the previously reported association of AITD with TSHR and FCRL3. These findings, enabled in part by increased statistical power resulting from the expansion of the control reference group to include individuals from the other disease groups, highlight notable new possibilities for autoimmune regulation and suggest that IL23R may be a common susceptibility factor for the major 'seronegative' diseases.

Relevance: 40.00%

Abstract:

Purpose: To determine (i) the architectural adaptations of the biceps femoris long head (BFlh) following concentric or eccentric strength training interventions, and (ii) the time course of adaptation during training and detraining.
Methods: Participants in this randomized controlled trial (control [n=28], concentric training group [n=14], eccentric training group [n=14]; all male) completed a 4-week control period, followed by 6 weeks of either concentric- or eccentric-only knee flexor training on an isokinetic dynamometer, and finished with 28 days of detraining. Architectural characteristics of the BFlh were assessed at rest and during graded isometric contractions using two-dimensional ultrasonography at 28 days pre-baseline, at baseline, on days 14, 21 and 42 of the intervention, and again following the 28 days of detraining.
Results: BFlh fascicle length was significantly longer in the eccentric training group (p<0.05, d range: 2.65 to 2.98) and shorter in the concentric training group (p<0.05, d range: -1.62 to -0.96) after 42 days of training compared to baseline at all isometric contraction intensities. Following the 28-day detraining period, BFlh fascicle length was significantly reduced in the eccentric training group at all contraction intensities compared to the end of the intervention (p<0.05, d range: -1.73 to -1.55). There was no significant change in fascicle length of the concentric training group following the detraining period.
Conclusions: These results provide evidence that short-term resistance training can lead to architectural alterations in the BFlh. In addition, the eccentric training-induced lengthening of BFlh fascicles was reversed and returned to baseline values following 28 days of detraining. The contraction-mode-specific adaptations in this study may have implications for injury prevention and rehabilitation.
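For readers unfamiliar with the effect sizes (d) quoted above, the following minimal sketch computes a Cohen's d for a baseline-to-day-42 change in fascicle length using hypothetical values; the study's exact effect-size formula may differ.

```python
# Minimal sketch with hypothetical fascicle lengths (cm): Cohen's d for the
# baseline-to-day-42 change, using the pooled standard deviation of the two
# time points. The study's exact effect-size calculation may differ.
import numpy as np

baseline = np.array([9.1, 9.4, 8.8, 9.6, 9.0])
day_42   = np.array([10.3, 10.6, 10.1, 10.9, 10.2])

pooled_sd = np.sqrt((baseline.std(ddof=1) ** 2 + day_42.std(ddof=1) ** 2) / 2)
cohens_d = (day_42.mean() - baseline.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```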

Relevance: 40.00%

Abstract:

Growth rate of abdominal aortic aneurysm (AAA) is thought to be an important indicator of the potential risk of rupture. Wall stress is also thought to be a trigger for rupture. However, how wall stress changes during the expansion of an AAA is unclear. Forty-four patients with AAAs were included in this longitudinal follow-up study. They were assessed by serial abdominal ultrasonography, with computerized tomography (CT) scans performed if a critical size was reached or a rapid expansion occurred. Patient-specific three-dimensional AAA geometries were reconstructed from the follow-up CT images. Structural analysis was performed to calculate the wall stresses of the AAA models at both baseline and the final visit. A non-linear, large-strain finite element method was used to compute the wall stress distribution. The average growth rate was 0.66 cm/year (range 0-1.32 cm/year). A significant positive correlation between shoulder stress at baseline and growth rate was found (r=0.342; p=0.02). Higher shoulder stress is associated with a more rapidly expanding AAA. Shoulder stress may therefore be useful for estimating the expansion of AAAs and for further risk stratification of patients with AAAs.
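The association reported above (r = 0.342, p = 0.02) is an ordinary correlation between baseline shoulder stress and growth rate; the sketch below shows that computation on purely hypothetical values.

```python
# Minimal sketch with hypothetical values: Pearson correlation between baseline
# shoulder stress and subsequent AAA growth rate, the statistic quoted in the
# abstract. The units and numbers here are illustrative only.
import numpy as np
from scipy.stats import pearsonr

shoulder_stress = np.array([180, 210, 195, 250, 230, 205, 260, 190])   # kPa, hypothetical
growth_rate     = np.array([0.3, 0.6, 0.4, 1.1, 0.8, 0.5, 1.2, 0.2])   # cm/year, hypothetical

r, p = pearsonr(shoulder_stress, growth_rate)
print(f"r = {r:.3f}, p = {p:.3f}")
```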

Relevance: 40.00%

Abstract:

Background: Rupture of vulnerable atheromatous plaque in the carotid and coronary arteries often leads to stroke and heart attack, respectively. The role of calcium deposition and its contribution to plaque stability is controversial. This study uses both an idealized and a patient-specific model to evaluate the effect of a calcium deposit on the stress distribution within an atheromatous plaque.
Methods: Using a finite-element method, structural analysis was performed on an idealized plaque model in which the location of a calcium deposit was varied. In addition to the idealized model, in vivo high-resolution MR imaging was performed on 3 patients with carotid atheroma and stress distributions were generated. The individual plaques were chosen because they had calcium at varying locations with respect to the lumen and the fibrous cap.
Results: The predicted maximum stress was increased by 47.5% when the calcium deposit was located in the thin fibrous cap, compared with a model without a deposit. Adding a calcium deposit either within the lipid core or remote from the lumen resulted in almost no increase in maximal stress.
Conclusion: Calcification at the thin fibrous cap may result in high stress concentrations, ultimately increasing the risk of plaque rupture. Assessing the location of calcification may, in the future, aid in the risk stratification of patients with carotid stenosis.

Relevance: 40.00%

Abstract:

The 2008 US election has been heralded as the first presidential election of the social media era, but took place at a time when social media were still in a state of comparative infancy; so much so that the most important platform was not Facebook or Twitter, but the purpose-built campaign site my.barackobama.com, which became the central vehicle for the most successful electoral fundraising campaign in American history. By 2012, the social media landscape had changed: Facebook and, to a somewhat lesser extent, Twitter are now well-established as the leading social media platforms in the United States, and were used extensively by the campaign organisations of both candidates. As third-party spaces controlled by independent commercial entities, however, their use necessarily differs from that of home-grown, party-controlled sites: from the point of view of the platform itself, a @BarackObama or @MittRomney is technically no different from any other account, except for the very high follower count and an exceptional volume of @mentions. In spite of the significant social media experience which Democrat and Republican campaign strategists had already accumulated during the 2008 campaign, therefore, the translation of such experience to the use of Facebook and Twitter in their 2012 incarnations still required a substantial amount of new work, experimentation, and evaluation. This chapter examines the Twitter strategies of the leading accounts operated by both campaign headquarters: the ‘personal’ candidate accounts @BarackObama and @MittRomney as well as @JoeBiden and @PaulRyanVP, and the campaign accounts @Obama2012 and @TeamRomney. Drawing on datasets which capture all tweets from and at these accounts during the final months of the campaign (from early September 2012 to the immediate aftermath of the election night), we reconstruct the campaigns’ approaches to using Twitter for electioneering from the quantitative and qualitative patterns of their activities, and explore the resonance which these accounts have found with the wider Twitter userbase. A particular focus of our investigation in this context will be on the tweeting styles of these accounts: the mixture of original messages, @replies, and retweets, and the level and nature of engagement with everyday Twitter followers. We will examine whether the accounts chose to respond (by @replying) to the messages of support or criticism which were directed at them, whether they retweeted any such messages (and whether there was any preferential retweeting of influential or – alternatively – demonstratively ordinary users), and/or whether they were used mainly to broadcast and disseminate prepared campaign messages. Our analysis will highlight any significant differences between the accounts we examine, trace changes in style over the course of the final campaign months, and correlate such stylistic differences with the respective electoral positioning of the candidates. Further, we examine the use of these accounts during moments of heightened attention (such as the presidential and vice-presidential debates, or in the context of controversies such as that caused by the publication of the Romney “47%” video; additional case studies may emerge over the remainder of the campaign) to explore how they were used to present or defend key talking points, and exploit or avert damage from campaign gaffes. 
A complementary analysis of the messages directed at the campaign accounts (in the form of @replies or retweets) will also provide further evidence for the extent to which these talking points were picked up and disseminated by the wider Twitter population. Finally, we also explore the use of external materials (links to articles, images, videos, and other content on the campaign sites themselves, in the mainstream media, or on other platforms) by the campaign accounts, and the resonance which these materials had with the wider follower base of these accounts. This provides an indication of the integration of Twitter into the overall campaigning process, by highlighting how the platform was used as a means of encouraging the viral spread of campaign propaganda (such as advertising materials) or of directing user attention towards favourable media coverage. By building on comprehensive, large datasets of Twitter activity (as of early October, our combined datasets comprise some 3.8 million tweets) which we process and analyse using custom-designed social media analytics tools, and by using our initial quantitative analysis to guide further qualitative evaluation of Twitter activity around these campaign accounts, we are able to provide an in-depth picture of the use of Twitter in political campaigning during the 2012 US election, yielding detailed new insights into social media use in contemporary elections. This analysis will then also be able to serve as a touchstone for the analysis of social media use in subsequent elections, in the USA as well as in other developed nations where Twitter and other social media platforms are utilised in electioneering.
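The tweeting-style breakdown described above (original messages, @replies, retweets) can be derived mechanically from tweet metadata; a minimal sketch follows, using hypothetical records whose field names follow Twitter API v1.1 conventions rather than the chapter's actual analytics tools.

```python
# Minimal sketch with hypothetical tweet records: classify a campaign account's
# tweets into originals, @replies and retweets, and tally the mixture. Field
# names follow Twitter API v1.1 conventions and are assumptions here.
from collections import Counter

def tweet_style(tweet: dict) -> str:
    if tweet.get("retweeted_status") is not None:
        return "retweet"
    if tweet.get("in_reply_to_status_id") is not None:
        return "@reply"
    return "original"

tweets = [
    {"text": "Four more years.", "retweeted_status": None, "in_reply_to_status_id": None},
    {"text": "@voter Thanks for your support!", "retweeted_status": None, "in_reply_to_status_id": 123},
    {"text": "RT @Obama2012: Get out and vote.", "retweeted_status": {"id": 456}, "in_reply_to_status_id": None},
]

print(Counter(tweet_style(t) for t in tweets))   # Counter({'original': 1, '@reply': 1, 'retweet': 1})
```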

Relevance: 40.00%

Abstract:

Introduction: There is limited understanding of how young adults' driving behaviour varies according to long-term substance involvement. It is possible that regular users of amphetamine-type stimulants (i.e. ecstasy (MDMA) and methamphetamine) have a greater predisposition to engage in drink/drug driving than non-users. We compare offence rates, and self-reported drink/drug driving rates, for stimulant users and non-users in Queensland, and examine contributing factors.
Methods: The Natural History Study of Drug Use is a prospective longitudinal study using population screening to recruit a probabilistic sample of amphetamine-type stimulant (ATS) users and non-users aged 19-23 years. At the 4½-year follow-up, consent was obtained to extract data from participants' Queensland driver records (ATS users: n=217, non-users: n=135). Prediction models of offence rates in stimulant users were developed, controlling for factors such as aggression and delinquency.
Results: Stimulant users were more likely than non-users to have had a drink-driving offence (8.7% vs. 0.8%, p < 0.001). Further, about 26% of ATS users and 14% of non-users self-reported driving under the influence of alcohol during the last 12 months. Among stimulant users, drink-driving was independently associated with last-month high-volume alcohol consumption (incident rate ratio (IRR): 5.70, 95% CI: 2.24-14.52), depression (IRR: 1.28, 95% CI: 1.07-1.52), low income (IRR: 3.57, 95% CI: 1.12-11.38), and male gender (IRR: 5.40, 95% CI: 2.05-14.21).
Conclusions: Amphetamine-type stimulant use is associated with an increased long-term risk of drink-driving, due to a number of behavioural and social factors. Inter-sectoral approaches that target long-term behaviours may reduce offending rates.
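The incident rate ratios above are the exponentiated coefficients of a count-data regression; the sketch below fits a Poisson model on simulated data to show how such IRRs and their confidence intervals are obtained (variable names and data are assumptions, not the study's).

```python
# Minimal sketch with simulated data: Poisson regression of drink-driving
# offence counts; incident rate ratios (IRRs) are exp(coefficients).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "offences": rng.poisson(0.3, n),                 # hypothetical offence counts
    "high_volume_alcohol": rng.integers(0, 2, n),    # hypothetical predictors
    "depression_score": rng.normal(10, 3, n),
    "male": rng.integers(0, 2, n),
})

model = smf.poisson("offences ~ high_volume_alcohol + depression_score + male", data=df).fit(disp=False)
irr = np.exp(model.params)            # incident rate ratios
irr_ci = np.exp(model.conf_int())     # 95% confidence intervals on the IRR scale
print(pd.concat([irr.rename("IRR"), irr_ci], axis=1))
```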

Relevance: 40.00%

Abstract:

Study design: Anterior and posterior vertebral body heights were measured from sequential MRI scans of adolescent idiopathic scoliosis (AIS) patients and healthy controls.
Objective: To measure changes in vertebral body height over time during scoliosis progression, to assess how vertebral body height discrepancies change during growth.
Summary of background data: Relative anterior overgrowth has been proposed as a potential driver for AIS initiation and progression. This theory proposes that the anterior column grows faster, and the posterior column slower, in AIS patients when compared to healthy controls. There is disagreement in the literature as to whether anterior vertebral body heights are proportionally greater than posterior vertebral body heights in AIS patients when compared to healthy controls. To some extent, these discrepancies may be attributed to methodological differences.
Methods: MRI scans of the major curve of 21 AIS patients (mean age 12.5 ± 1.4 years, mean Cobb angle 32.2 ± 12.8°) and of the region between T4 and T12 of 21 healthy adolescents (mean age 12.1 ± 0.5 years) were captured for this study. Of the 21 AIS patients, 14 had a second scan, on average 10.8 ± 4.7 months after the first. Anterior and posterior vertebral body heights were measured from the true sagittal plane of each vertebra such that anterior overgrowth could be quantified.
Results: The difference between anterior and posterior vertebral body height in healthy, non-scoliotic children was significantly greater than in AIS patients with mild to moderate scoliosis. However, there was no significant relationship between the overall anterior-posterior vertebral body height difference in AIS and either the severity of the curve or its progression over time.
Conclusions: Whilst AIS patients have a proportionally longer anterior column than non-scoliotic controls, the degree of anterior overgrowth was not related to the rate of progression or the severity of the scoliotic curve.

Relevance: 40.00%

Abstract:

The principal objective of this study was to determine whether Campylobacter jejuni genotyping methods based upon resolution-optimised sets of single nucleotide polymorphisms (SNPs) and binary genetic markers were capable of identifying epidemiologically linked clusters of chicken-derived isolates. Eighty-eight C. jejuni isolates of known flaA RFLP type were included in the study. They encompassed three groups of ten isolates that were obtained at the same time and place and possessed the same flaA type; these were regarded as being epidemiologically linked. Twenty-six unlinked C. jejuni flaA type I isolates were included to test the ability of SNP and binary typing to resolve isolates that were not resolved by flaA RFLP. The remaining isolates were of different flaA types. All isolates were typed by real-time PCR interrogation of the resolution-optimised sets of SNPs and binary markers. According to each typing method, the three epidemiologically linked clusters were three distinct clones that were well resolved from the other isolates. The 26 unlinked C. jejuni flaA type I isolates were resolved into 14 SNP-binary types, indicating that flaA typing can be unreliable for revealing epidemiological linkage. Comparison of the data with data from a fully typed set of isolates associated with human infection revealed that lineages abundant in the chicken isolates and also found in the human isolates belonged to clonal complex (CC) 21 and CC-353, with the usually rare CC-353 member ST-524 being especially abundant in the chicken collection. The chicken isolates selected to be diverse according to flaA were also diverse according to SNP and binary typing. CC-48 was absent from the chicken isolates despite being very common in Australian human infection isolates, indicating that it may be a major cause of human disease that is not chicken-associated.
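Conceptually, SNP-binary typing assigns isolates that share an identical combined marker profile to the same type; the sketch below shows that grouping step on hypothetical profiles (the real assays interrogate specific resolution-optimised marker sets by real-time PCR).

```python
# Minimal sketch with hypothetical marker profiles: isolates with identical
# combined SNP + binary profiles are assigned to the same SNP-binary type.
from collections import defaultdict

isolates = {
    "chk-01": ("AGTCA", "1011"),   # (SNP profile, binary-marker profile), hypothetical
    "chk-02": ("AGTCA", "1011"),
    "chk-03": ("AGTTA", "1001"),
    "chk-04": ("AGTCA", "0011"),
}

types = defaultdict(list)
for isolate, profile in isolates.items():
    types[profile].append(isolate)

for i, (profile, members) in enumerate(sorted(types.items()), start=1):
    print(f"SNP-binary type {i}: {members}")
```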

Relevance: 40.00%

Abstract:

Weed management is complicated by the presence of soil seed banks. The complexity of soil-seed interactions means that seed persistence in the field is often difficult to measure, let alone predict. Field trials, although accurate in their context, are time-consuming and expensive to conduct for individual species. Some ex situ techniques for estimating seed life expectancy have been proposed, but these fail to simulate the environmental complexity of the field. It has also been questioned whether techniques such as the controlled aging test (CAT) are useful indicators of field persistence. This study aimed to test the validity of the standard CAT (seed aging at 45 °C and 60% relative humidity), as used at the Royal Botanic Gardens, Kew, U.K., for predicting field seed persistence. Comparison of seed persistence and CAT data for 27 northwest European species suggested a significant positive correlation of 0.31. Subsequently, 13 species of emerging and common weeds of Queensland were assessed for their seed longevity using the CAT. The seed longevity data for these species in the CAT were linked with field seed-persistence data according to three broad seed-persistence categories: <1 yr, 1 to 3 yr, and >3 yr. We discuss the scope for using the CAT as a tool for rapid assignment of species to these categories. There is a need for further studies that compare predictions of seed persistence based on the CAT with seed persistence in the field for a larger range of species and environments.
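The two analytical steps described above, correlating CAT longevity with field persistence and binning species into the three persistence categories, are sketched below on hypothetical values; the abstract does not state which correlation coefficient was used, so a rank correlation is shown as one plausible choice.

```python
# Minimal sketch with hypothetical data: rank-correlate CAT longevity with field
# persistence, then assign species to the three persistence categories used in
# the study (<1 yr, 1 to 3 yr, >3 yr).
import numpy as np
from scipy.stats import spearmanr

cat_longevity_days   = np.array([12, 30, 8, 55, 20, 70, 15, 40])             # hypothetical CAT longevity
field_persistence_yr = np.array([0.5, 2.0, 0.8, 4.5, 1.5, 6.0, 0.7, 3.5])    # hypothetical field persistence

rho, p = spearmanr(cat_longevity_days, field_persistence_yr)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

def persistence_category(years: float) -> str:
    if years < 1:
        return "<1 yr"
    return "1 to 3 yr" if years <= 3 else ">3 yr"

print([persistence_category(y) for y in field_persistence_yr])
```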

Relevance: 40.00%

Abstract:

The current study explored the influence of moral values (measured by ethical ideology) on self-reported driving anger and aggressive driving responses. A convenience sample of drivers aged 17-73 years (n = 280) in Queensland, Australia, completed a self-report survey. Measures included sensation seeking, trait aggression, driving anger, endorsement of aggressive driving responses and ethical ideology (Ethical Position Questionnaire, EPQ). Scores on the two underlying dimensions of the EPQ, idealism (highI/lowI) and relativism (highR/lowR), were used to categorise drivers into four ideological groups: Situationists (highI/highR); Absolutists (highI/lowR); Subjectivists (lowI/highR); and Exceptionists (lowI/lowR). Mean aggressive driving scores suggested that Exceptionists were significantly more likely to endorse aggressive responses. After accounting for demographic variables, sensation seeking and driving anger, ethical ideological category added significantly, though modestly, to the prediction of aggressive driving responses. Patterns in the results suggest that drivers in ideological groups characterised by greater concern to avoid affecting others negatively (i.e. highI: Situationists, Absolutists) may be less likely to endorse aggressive driving responses, even when angry. In contrast, Subjectivists (lowI/highR) reported the lowest levels of driving anger yet were significantly more likely to endorse aggressive responses. This provides further insight into why high levels of driving anger may not always translate into more aggressive driving.
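The four-group classification described above follows directly from high/low splits on the two EPQ dimensions; a minimal sketch is given below, with illustrative cut-off scores rather than the study's actual median splits.

```python
# Minimal sketch with illustrative cut-offs: classify a driver into one of the
# four EPQ ideological groups from idealism and relativism scores.
def epq_group(idealism: float, relativism: float,
              idealism_cut: float = 6.0, relativism_cut: float = 6.0) -> str:
    high_i = idealism >= idealism_cut
    high_r = relativism >= relativism_cut
    if high_i and high_r:
        return "Situationist"
    if high_i:
        return "Absolutist"
    if high_r:
        return "Subjectivist"
    return "Exceptionist"

print(epq_group(7.2, 6.8))   # Situationist
print(epq_group(5.1, 6.5))   # Subjectivist
```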

Relevance: 40.00%

Abstract:

As failure to control Rhyzopertha dominica (F.) with phosphine is a common problem in the grain-growing regions of Brazil, a study was undertaken to investigate the frequency, distribution and strength of phosphine resistance in R. dominica in Brazil. Nineteen samples of R. dominica were collected between 1991 and 2003 from central storages where phosphine fumigation had failed to control this species. Insects were cultured without selection until testing in 2005. Each sample was tested for resistance to phosphine on the basis of the response of adults to discriminating concentrations of phosphine (20 and 48 h exposures) and full dose-response assays (48 h exposure). Responses of the Brazilian R. dominica samples were compared with reference susceptible, weak-resistance and strong-resistance strains from Australia in parallel assays. All Brazilian population samples showed resistance to phosphine: five were diagnosed with weak resistance and 14 with strong resistance. Five samples showed levels of resistance similar to the reference strong-resistance strain. A representative highly resistant sample was characterised by exposing mixed-age cultures to a range of constant concentrations of phosphine for various exposure periods. Time to population extinction (TPE) and time to 99.9% suppression of population (LT99.9) values of this sample were generally similar to those of the reference strong-resistance strain. For example, at 0.1, 0.5 and 1.0 mg L-1, LT99.9 values for BR33 and the reference strong-resistance strain were, respectively, 21, 6.4 and 3.7 days and 17, 6.2 and 3.8 days. With both strains, doubling the phosphine concentration to 2 mg L-1 resulted in increased LT99.9 and TPE. The high level and frequency of resistance in all population samples, some of which had been cultured without selection for up to 12 years, suggest little or no fitness deficit associated with phosphine resistance. The present research indicates that widespread phosphine resistance may be developing in Brazil. Fumigation practices should be monitored and resistance management plans implemented to alleviate further resistance development.
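LT99.9 values such as those quoted above are obtained by fitting a time-mortality curve at a fixed concentration and inverting it at 99.9% mortality; the sketch below does this with a logistic fit on hypothetical data (the study's actual probit-style methodology may differ).

```python
# Minimal sketch with hypothetical time-mortality data at one phosphine
# concentration: fit a log-time logistic curve and invert it at 99.9% mortality
# to estimate LT99.9.
import numpy as np
from scipy.optimize import curve_fit

days      = np.array([1, 2, 4, 7, 10, 14, 21])                        # exposure period (days)
mortality = np.array([0.05, 0.15, 0.45, 0.80, 0.93, 0.985, 0.999])    # proportion killed, hypothetical

def logistic(t, t50, slope):
    return 1.0 / (1.0 + np.exp(-slope * (np.log(t) - np.log(t50))))

(t50, slope), _ = curve_fit(logistic, days, mortality, p0=[5.0, 3.0])

target = 0.999
lt999 = np.exp(np.log(t50) + np.log(target / (1 - target)) / slope)
print(f"LT50 = {t50:.1f} days, LT99.9 = {lt999:.1f} days")
```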

Relevance: 40.00%

Abstract:

Rainfall variability is a challenge to sustainable and profitable cattle production in northern Australia. Strategies recommended to manage for rainfall variability, like light or variable stocking, are not widely adopted. This is due partly to the perception that sustainability and profitability are incompatible. A large, long-term grazing trial was initiated in 1997 in north Queensland, Australia, to test the effect of different grazing strategies on cattle production. These strategies are: (i) constant light stocking (LSR) at long-term carrying capacity (LTCC); (ii) constant heavy stocking (HSR) at twice LTCC; (iii) rotational wet-season spelling (R/Spell) at 1.5 LTCC; (iv) variable stocking (VAR), with stocking rates adjusted in May based on available pasture; and (v) a Southern Oscillation Index (SOI) variable strategy, with stocking rates adjusted in November based on available pasture and SOI seasonal forecasts. Animal performance varied markedly over the 10 years for which data are presented, due to pronounced differences in rainfall and pasture availability. Nonetheless, lighter stocking at or about LTCC consistently gave the best individual liveweight gain (LWG), condition score and skeletal growth; mean LWG per annum was thus highest in the LSR (113 kg), intermediate in the R/Spell (104 kg) and lowest in the HSR (86 kg). Mean LWG was 106 kg in the VAR and 103 kg in the SOI but, in all years, the relative performance of these strategies was dependent upon the stocking rate applied. After 2 years on the trial, steers from lightly stocked strategies were 60-100 kg heavier and received appreciable carcass price premiums at the meatworks compared to those under heavy stocking. In contrast, LWG per unit area was greatest at stocking rates of about twice LTCC; mean LWG/ha was thus greatest in the HSR (21 kg/ha), but this strategy required drought feeding in four of the 10 years and was unsustainable. Although LWG/ha was lower in the LSR (mean 14 kg/ha) and in strategies that reduced stocking rates in dry years like the VAR (mean 18 kg/ha) and SOI (mean 17 kg/ha), these strategies did not require drought feeding and appeared sustainable. The R/Spell strategy (mean 16 kg/ha) was compromised by an ill-timed fire, but also performed satisfactorily. The present results provide important evidence challenging the assumption that sustainable management in a variable environment is unprofitable. Further research is required to fully quantify the long-term effects of these strategies on land condition and profitability, and to extrapolate the results to breeder performance at the property level.
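The variable strategies (iv) and (v) amount to an annual decision rule that sets the coming year's stocking rate from pasture availability (and, for the SOI strategy, a seasonal forecast); the sketch below illustrates such a rule with purely hypothetical thresholds, not the trial's actual adjustment rules.

```python
# Minimal sketch with hypothetical thresholds: an annual stocking-rate
# adjustment rule of the kind used in the variable-stocking strategies.
def adjust_stocking_rate(pasture_kg_per_ha: float, ltcc: float = 10.0) -> float:
    """Return next year's stocking rate (animal equivalents per 100 ha)."""
    if pasture_kg_per_ha < 1000:        # poor season: destock below LTCC
        return 0.7 * ltcc
    if pasture_kg_per_ha < 2000:        # average season: hold at LTCC
        return ltcc
    return 1.3 * ltcc                   # good season: stock above LTCC

print(adjust_stocking_rate(850))    # 7.0
print(adjust_stocking_rate(2400))   # 13.0
```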

Relevance: 40.00%

Abstract:

Grain feeding low-bodyweight, cast-for-age (CFA) sheep from pastoral areas of eastern Australia at the end of the growing season can enable critical carcass weight grades to be achieved and thus yield better economic returns. The aim of this work was to compare growth and carcass characteristics for CFA Merino ewes consuming either simple diets based on whole sorghum grain or commercial feed pellets. The experiment also compared various sources of additional nitrogen (N) for inclusion in sorghum diets and evaluated several introductory regimes. Seventeen ewes were killed initially to provide baseline carcass data, and the remaining 301 ewes were gradually introduced to the concentrate diets over 14 days before being fed concentrates and wheaten hay ad libitum for 33 or 68 days. Concentrate treatments were: (i) commercial feed pellets, (ii) sorghum mix (SM; whole sorghum grain, limestone, salt and molasses) + urea and ammonium sulfate (SMU), (iii) SMU + whole cottonseed at 286 g/kg of concentrate dry matter (DM), (iv) SM + cottonseed meal at 139 g/kg of concentrate DM, (v) SMU + virginiamycin (20 mg/kg of concentrate) for the first 21 days of feeding, and (vi) whole cottonseed gradually replaced by SMU over the first 14 days of feeding. The target carcass weight of 18 kg was achieved after only 33 days on feed for the pellets and the SM + cottonseed meal diet. All other whole grain sorghum diets required between 33 and 68 days on feed to achieve the target carcass weight. Concentrates based on whole sorghum grain generally produced significantly (P < 0.05) lower carcass weight and fat score than pellets, and this may have been linked to the significantly (P < 0.05) higher faecal starch concentrations for ewes consuming sorghum-based diets (270 v. 72 g/kg DM on day 51 of feeding for sorghum-based diets and pellets, respectively). Source of N in whole grain sorghum rations and special introductory regimes had no significant (P > 0.05) effects on carcass weight or fat score of ewes, with the exception of carcass weight for SMU + whole cottonseed being significantly lower than for SM + cottonseed meal at day 33. Ewes finished on all diets produced acceptable carcasses, although muscle pH was high in all ewe carcasses (average 5.8 and 5.7 at 33 and 68 days, respectively). There were no significant (P > 0.05) differences between diets in concentrate DM intake, rumen fluid pH, meat colour score, fat colour score, eye muscle area, meat pH or meat temperature.