158 results for (A. Schmidt) G. Fryxell and T. P. Watkins
Abstract:
Arbuscular mycorrhizal (AM) fungi, commonly found in long-term cane-growing fields in northern Queensland, are linked with both negative and positive growth responses by sugarcane (Saccharum spp.), depending on P supply. A glasshouse trial was established to examine whether AM density might also have an important influence on these growth responses. Mycorrhizal spores (Glomus clarum), isolated from a long-term cane block in northern Queensland, were introduced into a pasteurised low-P cane soil at 5 densities (0, 0.06, 0.25, 1, and 4 spores/g soil) and 4 P treatments (0, 8.2, 25, and 47 mg/kg). At 83 days after planting, sugarcane tops responded positively to P fertiliser, although responses attributable to spore density were rarely observed. In one case, addition of 4 spores/g led to a 53% yield response over plants without AM at 8.2 mg P/kg, or a relative benefit of 17 mg P/kg. Root colonisation was reduced for plants with nil or 47 mg P/kg. For plants without AM, P concentration in the topmost visible dewlap (TVD) leaf increased significantly with fertiliser P (0.07 v. 0.15%). P concentration increased further in the presence of AM spores. Irrespective of AM, the critical P concentration in the TVD leaf was 0.18%. This study confirms earlier reports that sugarcane is poorly responsive to AM. Spore density, up to 4 spores/g soil, appears unable to influence this responsiveness, either positively or negatively. Attempts to gain P benefits by increasing AM density through rotation therefore seem unlikely to lead to yield increases in sugarcane. Conversely, sugarcane grown in fields with high spore densities and high plant-available P, such as long-term cane-growing soils, is unlikely to suffer a yield reduction from mycorrhizal fungi.
Abstract:
Dairy farms in subtropical Australia use irrigated, annually sown short-term ryegrass (Lolium multiflorum), or mixtures of short-term ryegrass and white (Trifolium repens) and Persian (shaftal) (T. resupinatum) clovers, during the winter-spring period in year-round milk production systems. A series of small-plot cutting experiments was conducted in 3 dairying regions (the tropical uplands of north Queensland, and subtropical south-east Queensland and northern New South Wales) to determine the most effective rate and frequency of application of nitrogen (N) fertiliser. The experiments were not grazed, nor was harvested material returned to the plots after sampling. Rates up to 100 kg N/ha.month (as urea or calcium ammonium nitrate) and up to 200 kg N/ha every 2 months (as urea) were applied to pure stands of ryegrass in 1991. In 1993 and 1994, urea, at rates up to 150 kg N/ha.month and up to 200 kg N/ha every 2 months, was applied to pure stands of ryegrass; urea, at rates up to 50 kg N/ha.month, was also applied to ryegrass-clover mixtures. The results indicate that applications of 50-85 kg N/ha.month can be recommended for short-term ryegrass pastures throughout the subtropics and tropical uplands of eastern Australia, irrespective of soil type. At this rate, dry matter yields will reach about 90% of their potential, forage nitrogen concentration will be increased, there will be minimal risk to stock from nitrate poisoning, and there will be no substantial increase in soil N. The rate of N for ryegrass-clover pastures is slightly higher than for pure ryegrass but, at these rates, the clover component will be suppressed. However, increased ryegrass yields and higher forage nitrogen concentrations will compensate for the reduced clover component. At application rates up to 100 kg N/ha.month, build-up of NO3⁻-N and NH4⁺-N in soil was generally restricted to the surface layers (0-20 cm), but there was a substantial increase throughout the soil profile at 150 kg N/ha.month. The build-up of NO3⁻-N and NH4⁺-N was greater, and occurred at lower application rates, on the lighter soil compared with heavy clays. Generally, most of the soil mineral N was in the NO3⁻-N form and most was in the top 20 cm.
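As a rough illustration of the diminishing-returns reasoning behind the 50-85 kg N/ha.month recommendation above, a standard Mitscherlich response curve can reproduce the reported "about 90% of potential yield" at these rates. The Python sketch below is illustrative only, not the authors' model; the curvature constant c is a hypothetical value chosen so that roughly 75 kg N/ha.month gives about 90% of maximum yield.

    import math

    def relative_yield(n_rate, c=0.03):
        """Mitscherlich diminishing-returns curve: fraction of potential
        dry matter yield at a given N rate (kg N/ha.month).
        The curvature constant c = 0.03 is illustrative, not a fitted value."""
        return 1 - math.exp(-c * n_rate)

    for n in (0, 25, 50, 75, 100, 150):
        print(f"{n:>3} kg N/ha.month -> {relative_yield(n):.0%} of potential yield")

With these assumed parameters, doubling the rate from 75 to 150 kg N/ha.month adds only about 9 percentage points of yield, consistent with the surplus N instead accumulating through the soil profile, as the abstract reports at 150 kg N/ha.month.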
Abstract:
Candidatus Phytoplasma australiense (Ca. P. australiense) is associated with the plant diseases strawberry lethal yellows (SLY), strawberry green petal (SGP), papaya dieback (PDB), Australian grapevine yellows (AGY), and Phormium yellow leaf (PYL; New Zealand). Strawberry lethal yellows disease is also associated with a rickettsia-like organism (RLO) or, infrequently, with the tomato big bud (TBB) phytoplasma, the latter being associated with a wide range of plant diseases throughout Australia. In contrast, the RLO has been identified only in association with SLY disease, and Ca. P. australiense has been detected in only a limited number of plant host species. The aim of this study was to identify plant hosts that are possible reservoirs of Ca. P. australiense and the SLY RLO. Thirty-one plant species showing disease symptoms were collected from south-east Queensland between 2001 and 2003 and, of these, 18 species tested positive using phytoplasma-specific primers. The RLO was detected in diseased Jacksonia scoparia and Modiola caroliniana samples collected at Stanthorpe. The TBB phytoplasma was detected in 16 different plant species, and the Ca. P. australiense Australian grapevine yellows strain was detected in six species. The TBB phytoplasma was detected in plants collected at Nambour, Stanthorpe, Warwick and Brisbane. Ca. P. australiense was detected in plants collected at Nambour, Stanthorpe, Gatton and Allora. All four phytoplasmas were detected in diseased Gomphocarpus physocarpus plants collected at Toowoomba, Allora, Nambour and Gatton. These results indicate that the vector(s) of Ca. P. australiense are distributed throughout south-east Queensland, and the diversity of phytoplasmas detected in G. physocarpus suggests that it is either a feeding source for phytoplasma insect vectors or broadly susceptible to a range of phytoplasmas.
Abstract:
Strawberry lethal yellows (SLY) disease in Australia is associated with the phytoplasmas Candidatus Phytoplasma australiense and tomato big bud, and with a rickettsia-like organism (RLO). Ca. P. australiense is also associated with strawberry green petal (SGP) disease. This study investigated the strength of the association of the different agents with SLY disease. We also documented the location of SLY- or SGP-affected plants and determined whether they were RLO or phytoplasma positive. Symptomatic strawberry plants collected from south-east Queensland (Australia) between January 2000 and October 2002 were screened by PCR for both phytoplasmas and the RLO. Two previously unreported disease symptoms, termed severe fruit distortion (SFD) and strawberry leaves from fruit (SLF), were observed during this study, but there was no clear association between these symptoms and phytoplasmas or the RLO. Only two SGP-diseased plants were observed and collected, compared with 363 plants with SLY disease symptoms. Of the 363 SLY samples, 117 tested positive for the RLO, 67 tested positive for the Ca. P. australiense AGY strain, and 11 tested positive for the Ca. P. australiense PYL variant strain. On runner production farms at Stanthorpe, Queensland, the RLO was detected in SLY-diseased plants more frequently than the phytoplasmas. On fruit production farms on the Sunshine Coast, Queensland, Ca. P. australiense was detected in SLY-diseased plants more frequently than the RLO.
Abstract:
The response of cattle to alterations in social groupings can lead to physiological changes that affect meat quality. Feedlot practices frequently lead to a proportion of cattle in a pen being drafted for slaughter, with the balance retained for a further period until they meet market specifications. An ability to regroup such retained cattle for short periods without consequences for meat quality would facilitate efficient use of feedlot pen space. The current experiment examined the impact on physiological variables and meat quality of regrouping British breed steers 4, 2 or 1 week before dispatch for slaughter. There was little effect of regrouping on physiological variables associated with stress responses. Physical assessment of meat quality indicated that regrouping steers 1 week before slaughter led to higher compression values, and a tendency for higher peak force values, in animals of one genotype than in their respective controls (1.89 v. 1.71 ± 0.05 kg, P = 0.017); however, these differences were not matched by changes in sensory perception of meat quality. Average daily gain during feedlot finishing was negatively related to the temperament measure, flight time. It was also associated with breed, white cell count, plasma cortisol and haemoglobin at the midpoint of the 70-day finishing period. The results confirm the impact of flight time on growth rate during feedlot finishing and indicate that regrouping cattle less than 2 weeks before slaughter may reduce meat quality.
Abstract:
Soils with high levels of chloride (Cl) and/or sodium (Na) in their subsurface layers are often referred to as having subsoil constraints (SSCs). There is growing evidence that SSCs affect wheat yields by increasing the lower limit of a crop's available soil water (CLL) and thus reducing the soil's plant-available water capacity (PAWC). This proposal was tested by simulation of 33 farmers' paddocks in south-western Queensland and north-western New South Wales. The simulated results accounted for 79% of observed variation in grain yield, with a root mean squared deviation (RMSD) of 0.50 t/ha. This result was as close as any achieved for sites without SSCs, thus providing strong support for the proposed mechanism that SSCs affect wheat yields by increasing the CLL and thus reducing the soil's PAWC. To reduce the need to measure the CLL of every paddock or management zone, two additional approaches to simulating the effects of SSCs were tested. In the first approach, the CLL of the 0.3-0.5 m soil layer was taken as the reference CLL of a soil regardless of its level of SSCs, while the CLL values of soil layers below 0.5 m depth were calculated as a function of this reference CLL, of soil depth, and of one of the SSC indices: electrical conductivity (EC), Cl, exchangeable sodium percentage (ESP), or Na. The best estimates of subsoil CLL values were obtained when the effects of SSCs were described by an ESP-dependent function. In the second approach, depth-dependent CLL values were also derived from the CLL of the 0.3-0.5 m soil layer; however, instead of using SSC indices to further modify CLL, the default values of the water-extraction coefficient (kl) of each depth layer were modified as a function of the SSC indices. The strength of this approach was evaluated on the basis of the correlation of observed and simulated grain yields; the best estimates were obtained when the default kl values were multiplied by a Cl-determined function. The kl approach was also evaluated with respect to simulated soil moisture at anthesis and at grain maturity. Results using this approach were highly correlated with soil moisture results obtained from simulations based on the measured CLL values. This research provides strong evidence that the effects of SSCs on wheat yields are accounted for by the effects of these constraints on wheat CLL values. The study also produced two satisfactory methods for simulating the effects of SSCs on CLL and on grain yield. While Cl and ESP proved to be effective indices of SSCs, EC was not, owing to the confounding effect of the presence of gypsum in some of these soils. This study provides the tools necessary for investigating, through simulation studies, the effects of SSCs on wheat crop yields and on natural resource management (NRM) issues such as runoff, recharge, and nutrient loss. It also facilitates investigation of suggested agronomic adaptations to SSCs.
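The second approach described above, multiplying each layer's default water-extraction coefficient (kl) by a Cl-determined function, lends itself to a compact implementation. The Python sketch below is a hypothetical illustration only: the abstract does not give the fitted functional form, so the exponential shape and the constant ALPHA are assumptions, and the profile data are invented for the example.

    import math

    # Hypothetical sensitivity of kl to subsoil chloride (per mg Cl/kg soil);
    # the fitted Cl-determined function from the study is not given here.
    ALPHA = 0.0015

    def adjusted_kl(kl_default, cl_mg_per_kg):
        """Scale a layer's default kl down as subsoil Cl increases
        (assumed exponential form; not the study's fitted function)."""
        return kl_default * math.exp(-ALPHA * cl_mg_per_kg)

    # Invented example profile: (depth interval, default kl, Cl in mg/kg).
    profile = [
        ("0.3-0.5 m", 0.08, 150),
        ("0.5-0.9 m", 0.06, 600),
        ("0.9-1.5 m", 0.04, 1200),
    ]
    for layer, kl, cl in profile:
        print(f"{layer}: default kl {kl:.3f} -> adjusted kl {adjusted_kl(kl, cl):.3f}")

In cropping-systems simulators that use the kl concept (e.g. APSIM), kl is the fraction of a layer's plant-available water that roots can extract per day, so reducing kl with increasing subsoil Cl mimics the restricted water uptake that SSCs impose.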
Abstract:
The main weeds and weed management practices in the broadacre dryland cropping areas of north-eastern Australia have been identified. The information was collected in a comprehensive postal survey of both growers and agronomists, from Dubbo in New South Wales (NSW) through to Clermont in central Queensland, with 237 surveys returned. A very diverse weed flora of 105 weeds from 91 genera was identified for the three cropping zones within the region (central Queensland, southern Queensland and northern NSW). Twenty-three weeds were common to all cropping zones; the major common weeds were Sonchus oleraceus, Rapistrum rugosum, Echinochloa spp. and Urochloa panicoides. The main weeds were identified for both summer and winter fallows, and for sorghum, wheat and chickpea crops, in each of the zones, with some commonality as well as floral uniqueness recorded. More genera were recorded in fallows than in crops, and more in summer fallows than in winter fallows. Across the region, weed management relied heavily on herbicides. In fallows, glyphosate and glyphosate mixes were very common, although the importance of the glyphosate mix partner differed among the cropping zones. Use and importance of pre-emergence herbicides in-crop varied considerably among the zones. In wheat, more graminicides were used in northern NSW than in southern Queensland, and virtually none were used in central Queensland, reflecting the differences in winter grass weed flora across the region. Atrazine was the major herbicide used in sorghum, although metolachlor was also used, predominantly in northern NSW. Fallow and inter-row cultivation were used more often in the southern areas of the region. Grazing of fallows was more prominent in northern NSW. High crop seeding rates were not commonly recorded, indicating that growers are not using crop competition as a tool for weed management. Although many management practices were recorded overall, few growers were using integrated weed management, and herbicide resistance has been, and continues to be, an issue for the region.
Abstract:
In Australia, communities are concerned about atrazine being detected in drinking water supplies. It is important to understand the mechanisms by which atrazine is transported from paddocks to waterways if we are to reduce movement of agricultural chemicals from the site of application. Two paddocks cropped with grain sorghum on a Black Vertosol were monitored for atrazine, potassium chloride (KCl)-extractable atrazine, desethylatrazine (DEA), and desisopropylatrazine (DIA) at 4 soil depths (0-0.05, 0.05-0.10, 0.10-0.20, and 0.20-0.30 m) and in runoff water and runoff sediment. Atrazine + DEA + DIA (total atrazine) had a half-life in soil of 16-20 days, a more rapid dissipation than in many earlier reports. Atrazine extracted in dilute potassium chloride, considered available for weed control, was initially 34% of the total and had a half-life of 15-20 days until day 30, after which it dissipated rapidly with a half-life of 6 days. We conclude that, in this region, atrazine may not pose a risk for groundwater contamination, as only 0.5% of applied atrazine moved deeper than 0.20 m into the soil, where it dissipated rapidly. In runoff (including suspended sediment), atrazine concentrations were greatest during the first runoff event, 57 days after application (85 μg/L), and declined with time. After 160 days, the total atrazine lost in runoff was 0.4% of the initial application. The total atrazine concentration in runoff was strongly related to the total concentration in soil, as expected. Even after 98% of the KCl-extractable atrazine had dissipated (and it no longer provided weed control), runoff concentrations still exceeded the human health guideline value of 40 μg/L. For total atrazine in soil (0-0.05 m), the coefficient of soil sorption (Kd) ranged from 1.9 to 28.4 mL/g and the soil organic carbon sorption coefficient (KOC) from 100 to 2184 mL/g, increasing with time of contact with the soil and with the rapid dissipation of the more soluble, available phase. Partition coefficients in runoff for total atrazine were initially 3, increasing to 32 and 51 with time; values for DEA were half these. To minimise atrazine losses, cultural practices should be adopted that maximise rain infiltration (thereby minimising runoff) and minimise atrazine concentrations at the soil surface.
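The soil half-lives quoted above imply, under the usual first-order dissipation assumption that underlies a quoted half-life, how much total atrazine would remain when runoff first occurred at day 57. A minimal worked sketch in Python (the first-order form is an assumption for illustration; the study reports only the half-life range):

    import math

    def fraction_remaining(days, half_life_days):
        """First-order dissipation: C/C0 = exp(-k*t), with k = ln(2)/DT50."""
        k = math.log(2) / half_life_days
        return math.exp(-k * days)

    # Using the 16-20 day soil half-life reported for total atrazine:
    for dt50 in (16, 20):
        print(f"DT50 = {dt50} d: {fraction_remaining(57, dt50):.1%} remaining at day 57")

Under these assumptions, roughly 8-14% of the applied atrazine would remain in soil at the first runoff event, consistent with the abstract's point that runoff concentrations can still exceed the 40 μg/L guideline well after the herbicide has ceased to provide weed control.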
Abstract:
Two laboratory experiments were carried out to quantify the mortality and physiological responses of juvenile blue swimmer crabs (Portunus pelagicus) after simulated gillnet entanglement, air exposure, disentanglement, and discarding. In both experiments, all but the control crabs were entangled in 1-m² gillnet panels for 1 h, exposed to air for 2 min, subjected to various disentanglement treatments involving the forceful removal of none, one, two, or four appendages, then "discarded" into individual experimental tanks and monitored for 10 d. In Experiment 1, mortalities were associated with the number of appendages removed and the occurrence of unsealed wounds. In Experiment 2, live crabs were sampled for blood at 2 min and at 6, 24, and 72 h post-discarding to test for the effects of disentanglement and appendage removal on total haemocyte counts, clotting times, protein levels (by refractive index), and blood ion concentrations. Compared with crabs that had sealed or no wounds, those with unsealed wounds had lower total haemocyte counts, protein, and calcium concentrations, and increased clotting times and magnesium and sodium levels. Induced autotomy, as opposed to the arbitrary, forceful removal of appendages, has the potential to minimize the mortality and stress of discarded juvenile blue swimmer crabs.
Abstract:
We investigated whether plasticity in growth responses to nutrients could predict invasive potential in aquatic plants by measuring the effects of nutrients on the growth of eight non-invasive native and six invasive exotic aquatic plant species. Nutrients were applied at two levels, approximating those found in urbanized and relatively undisturbed catchments, respectively. To identify systematic differences between invasive and non-invasive species, we compared the growth responses (total biomass, root:shoot allocation, and photosynthetic surface area) of native species with those of related invasive species after 13 weeks' growth. The results were used to seek evidence of invasive potential among four recently naturalized species. There was evidence that invasive species tend to accumulate more biomass than native species (P = 0.0788). Root:shoot allocation did not differ between native and invasive plant species, nor was allocation affected by nutrient addition. However, the photosynthetic surface area of invasive species tended to increase with nutrients, whereas that of native species did not (P = 0.0658). Of the four recently naturalized species, Hydrocleys nymphoides showed the same nutrient-related plasticity in photosynthetic area displayed by known invasive species, whereas Cyperus papyrus showed a strong reduction in photosynthetic area with increased nutrients. H. nymphoides and C. papyrus also accumulated more biomass than their native relatives. H. nymphoides possesses both of the traits we found to be associated with invasiveness and should thus be regarded as likely to be invasive.
Abstract:
Numerous tests have been used to measure beef cattle temperament, but limited research has addressed the relationships between such tests, or whether temperament can be modified. One hundred and forty-four steers were given one of three human handling and yarding experiences on six occasions during a 12-month grazing period post-weaning (backgrounding): Good handling/yarding, Poor handling/yarding and Minimal handling/yarding. At the end of this phase the cattle were lot-fed for 78 days, with no handling/yarding treatments imposed, before being transported for commercial slaughter. Temperament was assessed at the start of the experiment, and during backgrounding and lot-feeding, by flight speed (FS) and a fear-of-humans test, which measured the proximity to a stimulus person (zone average; ZA), the closest approach to the person (CA) and the amount the cattle moved around the test arena (total transitions; TT). During backgrounding, FS decreased for all treatments, and at the end of backgrounding there was no difference between them. The rate of decline, however, was greatest in the Good group and smallest in the Minimal group, with the Poor group intermediate. In contrast, ZA was affected by treatment, with a greater reduction for the Good group than the others (P = 0.012). During lot-feeding, treatment did not affect FS, but all groups showed a decrease in ZA, with the greatest change in the Poor group and the least in the Good group, with the Minimal group intermediate (P = 0.052). CA was positively correlated with ZA (r = 0.18 to 0.66) and negatively with TT (r = −0.18 to −0.66). FS was consistently correlated with TT only (r = 0.17 to 0.49). These findings suggest that FS and TT measure a similar characteristic, as do ZA and CA, but that these two characteristics differ from one another, indicating that temperament is not a unitary trait but has different facets. FS and TT measure one facet, which we suggest is general agitation, whilst ZA and CA measure fear of people. Thus, the cattle became less agitated during backgrounding, but this effect was not permanently influenced by the quantity and quality of handling/yarding. However, Good handling/yarding reduced fearfulness of people. Fear of people was also reduced during lot-feeding, probably as a consequence of frequent exposure to humans in a situation that was neutral or positive for the cattle.
Abstract:
At an international conference on the eradication of invasive species, held in 2001, Simberloff (2002) noted some past successes in eradication—from the global eradication of smallpox (Fenner et al. 1988) to the many successful eradications of populations (mostly mammals) from small islands (e.g. Veitch and Bell 1990; Burbidge and Morris 2002). However, he cautioned that we needed to be more ambitious and aim higher if we are to prevent and reverse the growing threat of the homogenization of global biodiversity. In this chapter we review how the management strategy of eradication—the permanent removal of entire discrete populations—has contributed to the stretch in goals advocated by Simberloff. We also discuss impediments to eradication success, and summarize how some of the lessons learnt during this process have contributed to the other strategies (prevention and sustained control) that are required to manage the wider threat posed by invasive alien species. We concentrate on terrestrial vertebrates and weeds (our areas of expertise), but touch on terrestrial invertebrates and marine and freshwater species in the discussion on emerging issues, to illustrate some of the different constraints these taxa and habitats impose on the feasibility of eradication.
Abstract:
Root-lesion nematodes (Pratylenchus thornei Sher and Allen and P. neglectus (Rensch) Filipjev and Schuurmans Stekhoven) cause substantial yield loss to wheat crops in the northern grain region of Australia. Resistance to P. thornei for use in wheat breeding programs was sought among synthetic hexaploid wheats (2n = 6x = 42, AABBDD) produced through hybridisations of Triticum turgidum L. subsp. durum (Desf.) Husn. (2n = 4x = 28, AABB) with Aegilops tauschii Coss. (2n = 2x = 14, DD). Resistance was determined for the synthetic hexaploid wheats and their durum and Ae. tauschii parents from the numbers of nematodes in the roots of plants grown for 16 weeks in pots of pasteurised soil inoculated with P. thornei. Fifty-nine (32%) of 186 accessions of synthetic hexaploid wheats had lower numbers of nematodes than Gatcher Selection 50a (GS50a), a partially resistant bread wheat. Greater frequencies of partial resistance were present in the durum parents (72% of 39 lines having lower nematode numbers than GS50a) and in the Ae. tauschii parents (55% of 53 lines). The 59 synthetic hexaploids were re-tested in a second experiment along with their parents. In a third experiment, 11 resistant synthetic hexaploid wheats and their F1 hybrids with Janz, a susceptible bread wheat, were tested, and the F1s were found to give nematode counts intermediate between the respective two parents. Synthetic hexaploid wheats with higher levels of resistance resulted from hybridisations in which both the durum and Ae. tauschii parents were partially resistant, rather than only one parent. These results suggest that resistance to P. thornei in synthetic hexaploid wheats is polygenic, with resistances located both in the D genome from Ae. tauschii and in the A and/or B genomes from durum. Five synthetic hexaploid wheats were selected for further study on the basis of (1) a high level of resistance to P. thornei in the synthetic hexaploid wheat and in both its durum and Ae. tauschii parents, (2) being representative of both Australian and CIMMYT (International Maize and Wheat Improvement Center) durums, and (3) being representative of the morphological subspecies and varieties of Ae. tauschii. These 5 synthetic hexaploid wheats were also shown to be resistant to P. neglectus, whereas GS50a and 2 P. thornei-resistant derivatives were quite susceptible. Results of P. thornei resistance of F1s and F2s from a half diallel of these 5 synthetic hexaploid wheats, GS50a, and Janz from another study indicate polygenic additive resistance and better general combining ability for the synthetic hexaploid wheats than for GS50a. Published molecular marker studies on a doubled haploid population between the synthetic hexaploid wheat with the best general combining ability (CPI133872) and Janz have shown quantitative trait loci for resistance located in all 3 genomes. Synthetic hexaploid wheats offer a convenient way of introgressing new resistances to P. thornei and P. neglectus from both durum and Ae. tauschii into commercial bread wheats.
Abstract:
Seed persistence is poorly quantified for invasive plants of subtropical and tropical environments, and Lantana camara, one of the world's worst weeds, is no exception. We investigated germination, seedling emergence, and seed survival of two lantana biotypes (pink and pink-edged red [PER]) in southeastern Queensland, Australia. Controlled experiments were undertaken in 2002 and repeated in 2004, with treatments comprising two differing environmental regimes (irrigated and natural rainfall) and two sowing depths (0 and 2 cm). Seed survival and seedling emergence were significantly affected by all factors (time, biotype, environment, sowing depth, and cohort) (P < 0.001). Seed dormancy varied with treatment (environment, sowing depth, biotype, and cohort) (P < 0.001) but declined rapidly after 6 mo. Significant differential responses by the two biotypes to sowing depth and environment were detected for both seed survival and seedling emergence (P < 0.001). Seed mass was consistently lower in the PER biotype at the population level (P < 0.001), but this variation did not adequately explain the differential responses. Moreover, under natural rainfall the magnitude of the biotype effect was unlikely to result in ecologically significant differences. Seed survival after 36 mo under natural rainfall ranged from 6.8 to 21.3%. Best-fit regression analysis of the decline in seed survival over time yielded a five-parameter exponential decay model with a lower asymptote approaching −0.38: % seed survival = [(55 − (−0.38)) × e^(k·t)] + (−0.38) (R² = 88.5%; 9 df). Environmental conditions and burial significantly affected the slope parameter, or k value (P < 0.01). Seed survival projections from the model were greatest for buried seeds under natural rainfall (11 yr) and least under irrigation (3 yr). Experimental data and model projections suggest that lantana has a persistent seed bank, and this should be considered in management programs, particularly those aimed at eradication.
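The decay model quoted above can be used directly to project seed-bank persistence. The Python sketch below evaluates the model and inverts it to estimate when survival falls below a threshold; the k value shown is hypothetical (the abstract reports that k varied with environment and burial but does not list the fitted values), chosen so the projection approximates the reported 11-year persistence of buried seed under natural rainfall.

    import math

    # Model constants quoted in the abstract; k (< 0) varies with treatment.
    UPPER, ASYMPTOTE = 55.0, -0.38

    def survival(t_months, k):
        """Percent seed survival at t months for decay-rate constant k."""
        return (UPPER - ASYMPTOTE) * math.exp(k * t_months) + ASYMPTOTE

    def months_to_reach(target_pct, k):
        """Invert the model: time at which survival falls to target_pct."""
        return math.log((target_pct - ASYMPTOTE) / (UPPER - ASYMPTOTE)) / k

    K = -0.028  # hypothetical per-month rate (buried seed, natural rainfall)
    print(f"Survival at 36 months: {survival(36, K):.1f}%")
    print(f"Months for survival to fall below 1%: {months_to_reach(1.0, K):.0f}")

With this illustrative k, survival at 36 months (about 20%) sits at the upper end of the 6.8-21.3% range observed under natural rainfall, and the seed bank takes roughly 132 months (about 11 years) to fall below 1% survival, matching the reported projection for buried seed.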
Abstract:
Citrus canker is a disease of citrus and closely related species caused by the bacterium Xanthomonas citri subsp. citri. This disease, previously exotic to Australia, was detected on a single farm in Emerald, Queensland, in July 2004 [designated infested premise 1 (IP1); 'infested premise' (IP) is the terminology used in official biosecurity protocols for a locality at which an exotic plant pest has been confirmed or is presumed to exist, and IPs are numbered sequentially as they are detected]. During the following 10 months the disease was detected on two other farms (IP2 and IP3) within the same area, and studies indicated that the disease first occurred on IP1 and spread to IP2 and IP3. The oldest naturally infected plant tissue observed on any of these farms indicated that the disease was present on IP1 for several months before detection and became established on IP2 and IP3 during the second quarter (i.e. autumn) of 2004. Transect studies on some IP1 blocks showed disease incidences of between 52 and 100% of trees infected. This contrasted with the very low disease incidence, less than 4% of trees within a block, on IP2 and IP3. The mechanisms proposed for disease spread within blocks include weather-assisted dispersal of the bacterium (e.g. wind-driven rain) and movement of contaminated farm equipment, in particular pivot irrigator towers, via mechanical damage in combination with abundant water. Spread between blocks on IP2 was attributed to movement of contaminated farm equipment and/or people. The epidemiological results suggest that: (i) successive surveillance rounds increase the likelihood of disease detection; (ii) surveillance sensitivity is affected by tree size; and (iii) individual destruction zones (for the purpose of eradication) could be determined using disease incidence and severity data rather than a predefined set area.