930 results for Over fifty years
Introducing a new limit states design concept to railway concrete sleepers: An Australian experience
Abstract:
Over the past 50 years, a large number of research and development projects on the use of cementitious and concrete materials for manufacturing railway sleepers have progressed significantly in Australia, Europe, and Japan (Wang, 1996; Murray and Cai, 1998; Wakui and Okuda, 1999; Esveld, 2001; Freudenstein and Haban, 2006; Remennikov and Kaewunruen, 2008). Traditional sleeper materials are timber, steel, and concrete. Cost-efficiency, superior durability, and improved track stability are the main drivers of the widespread adoption of concrete for railway sleepers. The sleepers in a track system, as shown in Figure 1, are subjected to harsh and aggressive external forces and natural environments along the length of the track. Many systemic problems and technical issues associated with concrete sleepers have been tackled over the decades. These include premature failure of sleepers, concrete cancer or ettringite formation, abrasion of railseats and soffits, impact damage by rail machinery, bond-slip damage, longitudinal and lateral instability of the track system, dimensional instability of sleepers, nuisance noise and vibration, and so on (Pfeil, 1997; Gustavson, 2002; Kaewunruen and Remennikov, 2008a,b, 2013). These issues are, however, an emerging risk for many countries (in North and South America, Asia, and the Middle East) that have recently installed large volumes of concrete sleepers in their railway networks (Federal Railroad Administration, 2013). It is therefore vital for researchers and practitioners to critically review and learn from previous experience and lessons from around the world.
Abstract:
Heavy haul railway lines are important and expensive items of infrastructure operating in an environment which is increasingly focussed on risk-based management and constrained profit margins. It is vital that costs are minimised but also that infrastructure satisfies failure criteria and standards of reliability which account for the random nature of wheel-rail forces and of the properties of the materials in the track. In Australia and the USA, concrete railway sleepers/ties are still designed using methods which the rest of the civil engineering world discarded decades ago in favour of the more rational, more economical, and probabilistically based limit states design (LSD) concept. This paper describes an LSD method for concrete sleepers that is based on (a) billions of measurements over many years of the real, random wheel-rail forces on heavy haul lines, and (b) the true capacity of sleepers. The essential principles on which the new method is based are similar to current, widely used LSD-based standards for concrete structures. The paper proposes and describes four limit states which a sleeper must satisfy, namely: strength; operations; serviceability; and fatigue. The method has been applied commercially to two new major heavy haul lines in Australia, where it has saved clients millions of dollars in capital expenditure.
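To make the design check concrete, here is a minimal sketch of the generic limit-states inequality (factored load effect <= capacity reduction factor x capacity) applied once per proposed limit state. All function names, factors, and moment values are illustrative assumptions, not figures from the paper or from any design standard.

```python
# Minimal sketch of a generic limit-states design (LSD) check of the kind the
# abstract describes. All factors and numbers are illustrative assumptions,
# not values from the paper or from any standard.

def passes_limit_state(action_effect_kNm: float, load_factor: float,
                       capacity_kNm: float, capacity_reduction: float) -> bool:
    """Generic LSD inequality: factored demand <= factored capacity."""
    return load_factor * action_effect_kNm <= capacity_reduction * capacity_kNm

# Hypothetical railseat bending check for the four proposed limit states;
# in practice each state would use its own characteristic action and factors.
checks = {
    "strength":       passes_limit_state(28.0, 1.6, 60.0, 0.8),
    "operations":     passes_limit_state(22.0, 1.2, 60.0, 0.8),
    "serviceability": passes_limit_state(15.0, 1.0, 40.0, 1.0),
    "fatigue":        passes_limit_state(12.0, 1.0, 30.0, 1.0),
}
print(checks)  # the sleeper is acceptable only if every limit state passes
```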
Abstract:
The effect of fungal endophyte (Neotyphodium lolii) infection on the performance of perennial ryegrass (Lolium perenne) growing under irrigation in a subtropical environment was investigated. Seed of 4 cultivars, infected with standard (common toxic or wild-type) endophyte or the novel endophyte AR1, or free of endophyte (Nil), was sown in pure swards, which were fertilised with 50 kg N/ha per month. Seasonal and total yield, persistence, and rust susceptibility were assessed over 3 years, along with details of the presence of endophyte and alkaloids in plant shoots. Endophyte occurrence in tillers in both the standard and AR1 treatments was above 95% for Bronsyn and Impact throughout, and rose to that level in Samson by the end of the second year. Meridian AR1 reached only 93%, while, in the standard treatment, the endophyte had mostly died before sowing. Nil-endophyte treatments carried an average of ≤0.6% infection throughout. Infection with the standard endophyte was associated with increased dry matter (DM) yields in all 3 years compared with no endophyte. AR1 also significantly increased yields in the second and third years. Over the full 3 years, standard and AR1 increased yields by 18% and 11%, respectively. Infection with both endophytes was associated with increased yields in all 4 seasons, the effects increasing in intensity over time. There was 27% better persistence in standard-infected plants than in Nil plants at the end of the first year, increasing to 198% by the end of the experiment; for AR1 the improvements were 20% and 134%, respectively. The effect of endophyte on crown rust (Puccinia coronata) infection was inconsistent, with endophyte increasing rust damage on one occasion and reducing it on another. Cultivar differences in rust infection were greater than endophyte effects. Plants infected with the AR1 endophyte had no detectable ergovaline or lolitrem B in leaf, pseudostem, or dead tissue. In standard-infected plants, ergovaline and lolitrem B were highest in pseudostem and considerably lower in leaf. Dead tissue had very low or no detectable ergovaline but high lolitrem B concentrations. Peramine concentration was high and at similar levels in leaf and pseudostem, but not detectable in dead material; it was similar in both AR1- and standard-infected plants. Endophyte presence appeared to have a similar effect in the subtropics to that demonstrated in temperate areas, in terms of improving yields and persistence and increasing tolerance of plants to stress factors.
Abstract:
Single or multiple factors implicated in subsoil constraints, including salinity, sodicity, and phytotoxic concentrations of chloride (Cl), are present in many Vertosols, including those occurring in Queensland, Australia. The variable distribution of, and the complex interactions between, these constraints limit the agronomic or management options available for soils with these subsoil constraints. The identification of crops and cultivars adapted to these adverse subsoil conditions and/or able to exploit subsoil water may be an option for maintaining the productivity of these soils. We evaluated the relative performance of 5 winter crop species, in terms of grain yields, nutrient concentration, and ability to extract soil water, grown on soils with various levels and combinations of subsoil constraints in 19 field experiments over 2 years. Subsoil constraints were measured by levels of soil Cl, electrical conductivity of the saturation extract (ECse), and exchangeable sodium percentage (ESP). Increasing levels of subsoil constraints significantly decreased maximum depth of water extraction, grain yield, and plant-available water capacity for all 5 crops, and more so for chickpea and durum wheat than for bread wheat, barley, or canola. Increasing soil Cl levels had a greater restricting effect on water availability than did ECse or ESP. We developed empirical relationships between soil Cl, ECse, and ESP and the crop lower limit (CLL) for estimating subsoil water extraction by the 5 winter crops. However, the presence of gypsum influenced the ability to predict CLL from levels of ECse. Stronger relationships between apparent unused plant-available water (CLL - LL15, where LL15 is the lower limit at -1.5 MPa) and soil Cl concentrations than with ESP or ECse suggested that the presence of high Cl in these soils most likely inhibited subsoil water extraction by the crops. This was supported by increased sodium (Na) and Cl concentrations, with a corresponding decrease in calcium (Ca) and potassium (K), in the young mature leaf of bread wheat, durum wheat, and chickpea with increasing levels of subsoil constraints. Of the 2 ions, Na and Cl, the latter appears to be more damaging, resulting in plant dieback and reduced grain yields.
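As a sketch of the kind of empirical relationship developed in the study, the snippet below fits a straight line of apparent unused plant-available water (CLL - LL15) against subsoil Cl. All data points and the fitted coefficients are invented for illustration; they are not the paper's measurements.

```python
# Illustrative only: fit unused plant-available water (CLL - LL15, mm)
# against subsoil chloride. Data and coefficients are invented, not measured.
import numpy as np

cl_mg_kg = np.array([200.0, 400.0, 600.0, 800.0, 1000.0, 1200.0])  # hypothetical soil Cl
cll_mm = np.array([210.0, 222.0, 238.0, 251.0, 262.0, 270.0])      # hypothetical crop lower limit
ll15_mm = 200.0                                                    # hypothetical lower limit at -1.5 MPa

unused_paw = cll_mm - ll15_mm  # water apparently left unextracted in the profile
slope, intercept = np.polyfit(cl_mg_kg, unused_paw, 1)
print(f"unused PAW ~ {intercept:.1f} + {slope:.4f} * Cl  (mm, Cl in mg/kg)")
```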
Abstract:
The possibilities of developmental rehabilitation: a study on the construction of work-relatedness and the customer in Aslak rehabilitation. The challenge of work-related rehabilitation is to anticipate the factors threatening work ability and to affect them. The purpose of this study was to analyze how work-related rehabilitation is constructed in practice, and what the challenges, and at the same time the possibilities, of an innovative transformation of rehabilitation are when trying to achieve this goal. The theoretical basis is cultural-historical activity theory and developmental work research. Based on a historical analysis, I studied rehabilitation activity empirically using data gathered from one Aslak programme (Aslak = occupationally oriented medical rehabilitation) over two years. I described and analysed the construction of Aslak using ethnographic data and interviews. The data include audio and video recordings of the Aslak course, field notes, documents, and other materials used in the course. The study aimed to reveal rehabilitation practices from the different perspectives of the stakeholders and participants in the Aslak course. It focused on the Aslak trajectory produced by a multiorganizational subject. I analyzed the rehabilitation activity using the method of ethnographic analysis of infrastructure. The method for analyzing the construction of the object of rehabilitation, the customer, was membership categorization analysis (MCD), based on the ethnomethodological research tradition. I analyzed the meanings denoting customers given by different parties during one Aslak process and the relations between those meanings. Based on this analysis, I studied the disturbances, ruptures, and innovations in the rehabilitation activity. The results of the study show that the infrastructure of Aslak rests on different basic ideas. Aslak is constructed most explicitly on the infrastructure of medical rehabilitation. A second layer has been provided with some tools for identifying and preventing well-defined occupation-specific load factors. However, this has failed to produce a new structure, as Aslak has at the same time encountered rapid changes in working life. The study identified some promising markers representing new kinds of work-related rehabilitation ideas, but they proved to be incomplete and fragile. As a consequence of the multilayered infrastructure, the contents of the Aslak course were split into fragmented phases and disconnected themes, hemmed in by the master idea of medical orientation. Its relationship to work remained weak and obscure. The categorizations of customers in Aslak were manifold and contradictory. According to the results, the possibilities for transforming work-related rehabilitation lie both in reorienting toward the customer in a way more relevant to changing working life and in forging the infrastructural innovations related to this change. The results showed that a new work-relatedness would be difficult but possible to construct. What is needed is the construction of an infrastructure that supports a coherent master idea of work-related rehabilitation over the entire trajectory of a process. A shared idea of the rehabilitation object must be constructed in close collaboration between different stakeholders, such as Kela (the Social Insurance Institution of Finland), occupational health services, work organizations, and rehabilitation institutes.
Key words: Aslak rehabilitation, work-related rehabilitation, development of rehabilitation, customer of rehabilitation, developmental work research, analysis of infrastructure, membership category analysis
Abstract:
Occupational burnout and health. Occupational burnout is assumed to be a negative consequence of chronic work stress. In this study, it was explored in the framework of occupational health psychology, which focusses on psychologically mediated processes between work and health. The objectives were to examine the overlap between burnout and ill health in relation to mental disorders, musculoskeletal disorders, and cardiovascular diseases, which are the three commonest disease groups causing work disability in Finland; to study whether burnout can be distinguished from ill health by its relation to work characteristics and work disability; and to determine the socio-demographic correlates of burnout at the population level. A nationally representative sample of the Finnish working population aged 30 to 64 years (n = 3151-3424) from the multidisciplinary epidemiological Health 2000 Study was used. Burnout was measured with the Maslach Burnout Inventory - General Survey. The diagnoses of common mental disorders were based on the standardized mental health interview (the Composite International Diagnostic Interview), and physical illnesses were determined in a comprehensive clinical health examination by a research physician. Medically certified sickness absences exceeding 9 work days during a 2-year period were extracted from a register of the Social Insurance Institution of Finland. Work stress was operationalized according to the job strain model. Gender, age, education, occupational status, and marital status were recorded as socio-demographic factors. Occupational burnout was related to an increased prevalence of depressive and anxiety disorders and alcohol dependence among the men and women. Burnout was also related to musculoskeletal disorders among the women and cardiovascular diseases among the men, independently of socio-demographic factors, physical strenuousness of work, health behaviour, and depressive symptoms. The odds of having at least one long, medically certified sickness absence were higher for employees with burnout than for their colleagues without burnout. For severe burnout, this association was independent of co-occurring common mental disorders and physical illnesses for both genders, as was also the case for mild burnout among the women. In a subgroup of the men with absences, severe burnout was related to a greater number of absence days than among the women with absences. High job strain was associated with a higher occurrence of burnout and depressive disorders than low job strain was. Of these, the association between job strain and burnout was stronger, and it persisted after controlling for socio-demographic factors, health behaviour, physical illnesses, and various indicators of mental health. In contrast, job strain was not related to depressive disorders after burnout was accounted for. Among the working population over 30 years of age, burnout was positively associated with age. There was also a tendency towards higher levels of burnout among the women with low educational attainment and occupational status, and among the unmarried men. In conclusion, a considerable overlap was found between burnout, mental disorders, and physical illnesses. Still, burnout did not seem to be totally redundant with respect to ill health. Burnout may be more strongly related to stressful work characteristics than depressive disorders are.
In addition, burnout seems to be an independent risk factor for work disability, and it could possibly be used as a marker of health-impairing work stress. However, burnout may represent a different kind of risk factor for men and women, and this possibility needs to be taken into account in the promotion of occupational health.
Abstract:
Reduced supplies of nitrogen (N) in many soils of southern Queensland, cropped exhaustively with cereals over many decades, have been the focus of much research aimed at avoiding declines in the profitability and sustainability of farming systems. A 45-month period of mixed grass (purple pigeon grass, Setaria incrassata Stapf; Rhodes grass, Chloris gayana Kunth.) and legume (lucerne, Medicago sativa L.; annual medics, M. scutellata (L.) Mill. and M. truncatula Gaertn.) pasture was one of several options compared on a fertility-depleted Vertosol at Warra, southern Queensland, for improving grain yields or increasing grain protein concentration of subsequent wheat crops. The objectives of the study were to measure the productivity of a mixed grass and legume pasture grown over 45 months (cut and removed over 36 months) and its effects on the yield and protein concentrations of the following wheat crops. Pasture production (t DM/ha) and aboveground plant N yield (kg/ha) for the grass, legume (including a small amount of weeds), and total components of the pasture responded linearly to total rainfall over the duration of each of the 3 pastures sown in 1986, 1987, and 1988. Averaged over the 3 pastures, each 100 mm of rainfall resulted in 0.52 t/ha of grass, 0.44 t/ha of legume, and 0.97 t/ha of total pasture DM, there being little variation between the 3 pastures. Aboveground plant N yield of the 3 pastures ranged from 17.2 to 20.5 kg/ha per 100 mm rainfall. Aboveground legume N in response to total rainfall was similar (10.6-13.2 kg/ha per 100 mm rainfall) across the 3 pastures in spite of very different populations of legumes and grasses at establishment. Aboveground grass N yield was 5.2-7.0 kg/ha per 100 mm rainfall. In most wheat crops following pasture, wheat yields were similar to those of unfertilised wheat, except in 1990 and 1994, when grain yields were significantly higher, being similar to those of continuous wheat fertilised with 75 kg N/ha. In contrast, grain protein concentrations of most wheat crops following pasture responded positively, being substantially higher than those of unfertilised wheat but similar to those of wheat fertilised with 75 kg N/ha. Grain protein averaged over all years of assay was increased by 25-40% compared with that of unfertilised wheat. Stored water supplies after pasture were <134 mm (<55% of plant-available water capacity); for most assay crops, water storages were 67-110 mm, equivalent to a wet soil depth of only 0.3-0.45 m. Thus, the crop assays of pasture benefits were limited by the low water supply to the wheat crops. Moreover, the severity of common root rot in the wheat crops was not reduced by the pasture-wheat rotation.
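The reported average responses (0.52, 0.44, and 0.97 t/ha of grass, legume, and total DM per 100 mm of rainfall) lend themselves to a quick back-of-envelope predictor; the sketch below assumes the linear response holds over the rainfall range of interest.

```python
# Back-of-envelope predictor from the reported average responses per 100 mm
# of rainfall: 0.52 t/ha grass DM, 0.44 t/ha legume DM, 0.97 t/ha total DM.
# Assumes the linear response holds over the rainfall range of interest.

SLOPES_T_HA_PER_100MM = {"grass": 0.52, "legume": 0.44, "total": 0.97}

def pasture_dm_t_ha(rainfall_mm: float) -> dict:
    """Predicted aboveground DM (t/ha) for a given total rainfall (mm)."""
    return {part: slope * rainfall_mm / 100.0
            for part, slope in SLOPES_T_HA_PER_100MM.items()}

print(pasture_dm_t_ha(1500.0))
# 1500 mm over a pasture's duration -> ~7.8 t/ha grass, ~6.6 legume, ~14.6 total
```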
Abstract:
Highly productive sown pasture systems can result in high growth rates of beef cattle and lead to increases in soil nitrogen and the production of subsequent crops. The nitrogen dynamics and growth of grain sorghum following grazed annual legume leys or a grass pasture were investigated in a no-till system in the South Burnett district of Queensland. Two years of the tropical legumes Macrotyloma daltonii and Vigna trilobata (both self-regenerating annual legumes) and Lablab purpureus (a resown annual legume) resulted in soil nitrate N (0-0.9 m depth) at sorghum sowing ranging from 35 to 86 kg/ha, compared with 4 kg/ha after pure grass pasture. Average grain sorghum production in the 4 cropping seasons following the grazed legume leys ranged from 2651 to 4012 kg/ha. Following the grass pasture, grain sorghum production in the first and second years was <1900 kg/ha, and by the third year grain yield was comparable to that of the legume systems. Simulation studies utilising the farming systems model APSIM indicated that the soil N and water dynamics following the 2-year ley phases could be closely represented over 4 years, and that the prediction of sorghum growth during this time was reasonable. In simulated unfertilised sorghum crops grown from 1954 to 2004, grain yield did not exceed 1500 kg/ha in 50% of seasons following a grass pasture, while following 2-year legume leys, grain yield exceeded 3000 kg/ha in 80% of seasons. It was concluded that mixed farming systems that utilise short-term legume-based pastures for beef production in rotation with crop production enterprises can be highly productive.
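The quoted season frequencies (for example, yield exceeding 3000 kg/ha in 80% of seasons) are simple exceedance fractions computed over a long simulated series. A minimal sketch, using randomly generated placeholder yields rather than actual APSIM output:

```python
# Exceedance fractions over a long simulated yield series (one value per
# season, 1954-2004). The yields below are random placeholders, not APSIM
# output.
import random

random.seed(1)
sim_yields_kg_ha = [random.gauss(3500, 900) for _ in range(51)]

def fraction_exceeding(yields, threshold_kg_ha):
    """Share of simulated seasons whose grain yield exceeds the threshold."""
    return sum(y > threshold_kg_ha for y in yields) / len(yields)

print(f"P(yield > 3000 kg/ha) = {fraction_exceeding(sim_yields_kg_ha, 3000):.0%}")
```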
Abstract:
Whilst the topic of soil salinity has received a substantial research effort over the years, the accurate measurement and interpretation of salinity tolerance data remain problematic. The tolerance of four perennial grass species (non-halophytes) to sodium chloride (NaCl)-dominated salinity was determined in a free-flowing sand culture system. Although the salinity tolerance of non-halophytes is often represented by the threshold salinity model (the 'bent-stick' model), none of the species in the current study displayed any observable salinity threshold. Further, the observed yield decrease was not linear, as the model suggests. On re-examination of earlier datasets, we conclude that the threshold salinity model does not adequately describe the physiological processes limiting growth of non-halophytes in saline soils. Therefore, the threshold salinity model is not recommended for non-halophytes; instead, a model that more accurately reflects the physiological response observed in these saline soils, such as an exponential regression curve, should be used.
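The two competing response models can be stated compactly. In the sketch below, the parameter values (threshold, slope, decay constant) are illustrative assumptions, not values fitted in the study.

```python
# The two salinity-response models contrasted in the abstract, in sketch
# form. Parameter values are illustrative assumptions, not fitted values.
import math

def threshold_model(ec_dS_m, threshold=2.0, slope=7.0):
    """'Bent-stick' model: full relative yield up to a salinity threshold,
    then a linear decline."""
    return max(0.0, 100.0 - slope * max(0.0, ec_dS_m - threshold))

def exponential_model(ec_dS_m, k=0.12):
    """Exponential decay: yield falls continuously from the lowest salinity,
    with no threshold - closer to the response the authors observed."""
    return 100.0 * math.exp(-k * ec_dS_m)

for ec in (0, 2, 4, 8, 16):
    print(ec, round(threshold_model(ec), 1), round(exponential_model(ec), 1))
```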
Abstract:
The effects of the hydrological regime on temporal changes in the physical characteristics of substratum habitat, specifically the texture of surface sediments (<10 cm), were investigated in a subtropical headwater stream over four years. Surface discharge was measured together with vertical hydraulic gradient and groundwater depth in order to explore features of sediment habitat that extend beyond the streambed surface. Whilst the typical discharge pattern was one of intermittent base flows and infrequent flow events associated with monsoonal rain patterns, the study period also encompassed a drought and a one-in-a-hundred-year flood. Rainfall and discharge did not necessarily reflect the actual conditions in the stream. Although surface waters persisted long after discharge ceased, the streambed was completely dry on several occasions. Shallow groundwater was present at variable depths throughout the study period, being absent only at the height of the drought. The streambed sediments were mainly gravels, sand, and clay. Finer sediment fractions showed a marked change in grain size over time, although bedload movement was limited to a single high-discharge event. In response to a low-discharge regime (drought), sediments characteristically showed non-normal distributions and were dominated by finer materials. A high-energy discharge event produced a coarsening of sands and a diminished clay fraction in the streambed. Particulate organic matter from sediments showed trends of build-up and decline with the high and low discharge regimes, respectively. Within the surface sediment interstices, three potential categories of invertebrate habitat were recognised, each with dynamic spatial and temporal boundaries.
Abstract:
The objective of this study was to examine genetic changes in reproduction traits in sows (total number born (TNB), number born alive (NBA), average piglet birth weight (ABW), number of piglets weaned (NW), body weight prior to mating (MW), gestation length (GL), and daily food intake during lactation (DFI)) in lines of Large White pigs divergently selected over 4 years for high and low post-weaning growth rate on a restricted ration. Heritabilities and repeatabilities of the reproduction traits were also determined. The analyses were carried out on 913 litter records using the average information restricted maximum likelihood method applied to single-trait animal models. Estimates of heritability for most traits were small, except for ABW (0·33) and MW (0·35). Estimates of repeatability were slightly higher than those of heritability for TNB, NBA, and NW, but they were almost identical for ABW, MW, GL, and DFI. After 4 years of selection, the high growth line sows had significantly heavier body weight prior to mating and produced significantly more piglets born alive, with heavier average birth weight, than the low line sows. There were, however, no statistical differences between the selected lines in TNB or NW. The lower food intake of high relative to low line sows during lactation was not significant, indicating that the daily food intake differences found between grower pigs in the high and low lines (2·71 v. 2·76 kg/day, s.e.d. 0·024) on ad libitum feeding were not fully expressed in lactating sows. It is concluded that selection for growth rate on the restricted ration resulted in beneficial effects on important measures of reproductive performance of the sows.
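The relation between the reported heritability and repeatability estimates follows from the variance components of a repeatability animal model: h2 = VA/VP and repeatability = (VA + VPE)/VP, so repeatability can never fall below heritability. A minimal sketch with invented component values:

```python
# How heritability and repeatability relate to the variance components of a
# repeatability animal model. Component values are invented for illustration,
# not the paper's REML estimates.

def h2_and_repeatability(var_additive, var_perm_env, var_residual):
    total = var_additive + var_perm_env + var_residual   # phenotypic variance
    h2 = var_additive / total                            # heritability
    rep = (var_additive + var_perm_env) / total          # repeatability >= h2
    return h2, rep

h2, rep = h2_and_repeatability(var_additive=0.35, var_perm_env=0.05,
                               var_residual=0.60)
print(f"h2 = {h2:.2f}, repeatability = {rep:.2f}")  # h2 = 0.35, rep = 0.40
```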
Abstract:
The quality of tropical grasses is a major limitation to animal production in tropical and subtropical areas. This is mainly associated with lower digestibility, because C4 grasses have higher fibre levels. Any improvement in quality would require a reduction in lignin and an increase in the digestibility of the neutral detergent fibre content of these plants (Clark and Wilson 1993). Kikuyu (Pennisetum clandestinum) is an important grass for the dairy and beef industries of the subtropics of Australia, South Africa, and New Zealand (Mears 1970). Increased digestibility could substantially improve animal production in these industries. These experiments investigated the variation in agronomic and quality characteristics of natural populations selected from diverse regions within Australia. Runners of 14 kikuyu selections were collected by project staff or local agronomists from areas considered to have grown kikuyu for over 30 years, while Whittet and Noonan were established from seed. Entries were established as single spaced plants on a 1.5 m grid in a randomised block with 3 replicates and evaluated under irrigation at Mutdapilly (brown podsol) and Wollongbar (red ferrosol). Foliage height, forage production, and runner yield were assessed, along with the crude protein (CP), in vitro dry matter digestibility (IVDMD), metabolisable energy (ME), acid detergent fibre (ADF), and neutral detergent fibre (NDF) content of the leaf in autumn, winter, and spring.
Abstract:
Studying the continuity and underlying mechanisms of temperament change from early childhood through adulthood is clinically and theoretically relevant. Knowledge of the continuity and change of temperament from infancy onwards, especially as perceived by both parents, is, however, still scanty. Only in recent years have researchers become aware that personality, long considered stable in adulthood, may also change. Further, studies that focus on the transactional change of child temperament and parental personality also seem to be lacking, as are studies focusing on transactions between child temperament and more transient parental characteristics, like parental stress. Therefore, this longitudinal study examined the degree of continuity of temperament over five years, from the infant's age of six months to the child's age of five and a half years, as perceived by both biological parents, and also investigated the bidirectional effects between child temperament and parents' personality traits and overall stress experienced during that time. First, moderate to high levels of continuity of temperament from infancy to middle childhood were shown, depicting the developmental links between affectively positive and well-adjusted temperament characteristics, and between characteristics of early and later negative affectivity. The continuity of temperament was quantitatively and qualitatively similar in both parents' ratings. The findings also demonstrate that infant and childhood temperament characteristics cluster to form stable temperament types that resemble the personality types shown in child and adult personality studies. Second, the parental personality traits of extraversion and neuroticism were shown to be highly stable over five years, but evidence of change in relation to parents' views of their child's temperament was also found: an infant's higher positive affectivity predicted an increase in parental extraversion, while the infant's higher activity level predicted a decrease in parental neuroticism over five years. Furthermore, initially higher parental extraversion predicted higher ratings of the child's effortful control, while initially higher parental neuroticism predicted the child's higher negative affectivity. In terms of changes in parental stress, the infant's higher activity level predicted a decrease in maternal overall stress, while initially higher maternal stress predicted a higher level of child negative affectivity in middle childhood. Together, the results demonstrate that the mother- and father-rated temperament of the child shows continuity during the early years of life, but they also support the view that the development of these characteristics is sensitive to important contextual factors such as parental personality and overall stress. While parental personality and experienced stress were shown to have an effect on the child's developing temperament, the reverse was also true: the parents' own personality traits and perceived stress seemed to be highly stable, but also susceptible to their experiences of their child's temperament.
Abstract:
Fibre diameter can vary dramatically along a wool staple, especially in the Mediterranean environment of southern Australia with its dry summers and abundance of green feed in spring. Other research results have shown a very low phenotypic correlation between fibre diameter grown in different seasons. Many breeders use short staples to measure fibre diameter for breeding purposes and also to promote animals for sale. The effectiveness of this practice is determined by the relative response to selection from measuring fibre traits on a full 12-month wool staple compared with measuring them on only part of a staple. If a high genetic correlation exists between the part record and the full record, then using part records may be acceptable for identifying genetically superior animals. No information is available on the effectiveness of part records. This paper investigated whether wool growth and fibre diameter traits of Merino wool grown at different times of the year in a Mediterranean environment are genetically the same traits. The work was carried out on about 7 dyebanded wool sections per animal per year, on ewes from weaning to hogget age, in the Katanning Merino resource flocks over 6 years. Relative clean wool growth of the different sections had very low heritability estimates of less than 0.10, and was phenotypically and genetically poorly correlated with 6- or 12-month wool growth. This indicates that part-record measurement of clean wool growth of these sections will be ineffective as an indirect selection criterion to improve wool growth genetically. Staple length growth, as measured by the length between dyebands, would be more effective, with heritability estimates of between 0.20 and 0.30. However, these measurements were shown to have a low genetic correlation with wool grown for 12 months, which implies that these staple length measurements would be only half as efficient as wool weight for 6 or 12 months in improving total clean wool weight. Heritability estimates of fibre diameter, coefficient of variation of fibre diameter, and fibre curvature were relatively high, and these traits were genetically and phenotypically highly correlated across sections. High positive phenotypic and genetic correlations were also found between fibre diameter, coefficient of variation of fibre diameter, and fibre curvature of the different sections and the same measurements for wool grown over 6 or 12 months. Coefficient of variation of fibre diameter of the sections also had a moderate negative phenotypic and genetic correlation with staple strength of wool staples grown over 6 months, indicating that the coefficient of variation of fibre diameter of any section would be as good an indirect selection criterion to improve staple strength as the coefficient of variation of fibre diameter for wool grown over 6 or 12 months. The results indicate that fibre diameter, coefficient of variation of fibre diameter, and fibre curvature of wool grown over short periods of time have virtually the same heritability as those of wool grown over 12 months, and that the genetic correlation between these traits on part and on full records is very high (rg > 0.85). This indicates that fibre diameter, coefficient of variation of fibre diameter, and fibre curvature measured on part records can be used as selection criteria to improve these traits.
However, part records of greasy and clean wool growth would be much less efficient than fleece weight for wool grown over 6 or 12 months because of the low heritability of part records and the low genetic correlation between these traits on part records and on wool grown for 12 months.
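The efficiency comparisons above follow from the standard expected-response formula for indirect selection: with equal selection intensities, the response in the full-record trait from selecting on a part record, relative to direct selection, is rg x h(part) / h(full). The heritabilities and correlations in the sketch below are rough values in the ranges quoted, not the study's estimates.

```python
# Expected relative efficiency of indirect selection (Falconer): with equal
# selection intensities, CR_y / R_y = r_g * h_x / h_y, where x is the
# measured (part-record) trait and y the target (full-record) trait.
# The parameter values below are rough assumptions in the quoted ranges.
import math

def relative_efficiency(h2_part, h2_full, r_g):
    """Indirect response in the full-record trait per unit of direct response."""
    return r_g * math.sqrt(h2_part) / math.sqrt(h2_full)

# Staple length between dyebands (h2 ~ 0.25, low r_g with 12-month wool
# weight, whose h2 is assumed ~ 0.30): roughly half as efficient.
print(round(relative_efficiency(0.25, 0.30, 0.5), 2))   # ~0.46

# Fibre diameter of a single section vs the 12-month record (similar assumed
# h2, r_g > 0.85): part records are nearly as good as full records.
print(round(relative_efficiency(0.45, 0.45, 0.9), 2))   # 0.9
```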
Abstract:
Pratylenchus thornei and P. neglectus are two species of root-lesion nematode that cause substantial yield losses in wheat. No commercially available wheat variety has resistance to both species. A doubled-haploid population developed from a cross between the synthetic hexaploid wheat line CPI133872 and the bread wheat Janz was used to locate and tag quantitative trait loci (QTLs) associated with resistance to both P. thornei and P. neglectus. Wheat plants were inoculated with both species of nematode in independent replicated glasshouse trials repeated over 2 years. Known locations of wheat microsatellite markers were used to construct a framework map. After an initial single-marker analysis to detect marker-trait linkages, chromosome regions associated with putative QTLs were targeted with microsatellite markers to increase map density in the regions of interest. In total, 148 wheat microsatellite markers and 21 amplified fragment length polymorphism markers were mapped. The codominant microsatellite marker Xbarc183 on the distal end of chromosome 6DS was allelic for resistance to both P. thornei and P. neglectus. The QTLs were designated QRlnt.lrc-6D.1 and QRlnn.lrc-6D.1 for the two traits, respectively. The allele inherited from CPI133872 explained 22.0-24.2% of the phenotypic variation for P. thornei resistance, and the allele inherited from Janz accounted for 11.3-14.0% of the phenotypic variation for P. neglectus resistance. Composite interval mapping identified markers that flank a second major QTL on chromosome 6DL (QRlnt.lrc-6D.2), which explained 8.3-13.4% of the phenotypic variation for P. thornei resistance. An additional major QTL associated with P. neglectus resistance was detected on chromosome 4DS (QRlnn.lrc-4D.1) and explained a further 10.3-15.4% of the phenotypic variation. The identification and tagging of nematode resistance genes with molecular markers will allow appropriate allele combinations to be selected, which will aid the successful breeding of wheat with dual nematode resistance.
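The "percentage of phenotypic variation explained" by a QTL is essentially the R2 of a regression of phenotype on marker genotype class in the mapping population. A minimal sketch on simulated data (not the CPI133872 x Janz phenotypes):

```python
# "% of phenotypic variation explained" as the R^2 of a one-way regression of
# phenotype on marker genotype class. Data are simulated, not the actual
# CPI133872 x Janz doubled-haploid phenotypes.
import random

random.seed(7)
n = 120                                                 # hypothetical DH lines
genotype = [random.choice((0, 1)) for _ in range(n)]    # parental allele at the marker
phenotype = [10 - 3 * g + random.gauss(0, 2.5) for g in genotype]

mean_p = sum(phenotype) / n
groups = {0: [], 1: []}
for g, p in zip(genotype, phenotype):
    groups[g].append(p)

ss_total = sum((p - mean_p) ** 2 for p in phenotype)
ss_marker = sum(len(v) * (sum(v) / len(v) - mean_p) ** 2 for v in groups.values())
print(f"phenotypic variation explained by the marker: {100 * ss_marker / ss_total:.1f}%")
```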