947 results for Geology -- Queensland -- Burdekin River region
Abstract:
The size at recruitment, temporal and spatial distribution, and abiotic factors influencing abundance of three commercially important species of penaeid prawns in the sublittoral trawl grounds of Moreton Bay (Queensland, Australia) were compared. Metapenaeus bennettae and Penaeus plebejus recruit to the trawl grounds at sizes which are relatively small (14-15 mm carapace length, CL) and below that at which prawns are selected for, and retained in, the fleet's cod-ends. In contrast, Penaeus esculentus recruits at the relatively large size of 27 mm CL from February to May, well above the size ranges selected for. Recruitment of M. bennettae extends over several months, September-October and February-March, and was thus likely to be biannual, while the recruitment period of P. plebejus was distinct, peaking in October-November each year. Size classes of M. bennettae were the most spatially stratified of the three species. Catch rates of recruits were negatively correlated with depth for all three species, and were also negatively correlated with salinity for M. bennettae.
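The correlations reported above are simple bivariate relationships between recruit catch rates and environmental covariates. As a minimal, hedged sketch (not the study's analysis), the Python snippet below computes such correlations on simulated trawl data; the depth, salinity and catch-rate values are entirely hypothetical.

```python
# Sketch: correlating recruit catch rates with depth and salinity.
# All values are simulated; the study's trawl records are not reproduced here.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
depth = rng.uniform(5, 30, 50)        # trawl depth (m), hypothetical
salinity = rng.uniform(30, 36, 50)    # salinity, hypothetical
# Catch rate declining with depth plus noise, mimicking the reported pattern.
catch_rate = 100 - 2.5 * depth + rng.normal(0, 5, 50)

r_depth, p_depth = pearsonr(catch_rate, depth)
r_sal, p_sal = pearsonr(catch_rate, salinity)
print(f"depth:    r = {r_depth:.2f}, p = {p_depth:.3f}")
print(f"salinity: r = {r_sal:.2f}, p = {p_sal:.3f}")
```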
Abstract:
This study used faecal pellets to investigate the broadscale distribution and diet of koalas in the mulgalands biogeographic region of south-west Queensland. Koala distribution was determined by conducting faecal pellet searches within a 30-cm radius of the base of eucalypts on 149 belt transects, located using a multi-scaled stratified sampling design. Cuticular analysis of pellets collected from 22 of these sites was conducted to identify the dietary composition of koalas within the region. Our data suggest that koala distribution is concentrated in the northern and more easterly regions of the study area, and appears to be strongly linked with annual rainfall. Over 50% of our koala records were obtained from non-riverine communities, indicating that koalas in the study area are not primarily restricted to riverine communities, as has frequently been suggested. Cuticular analysis indicates that more than 90% of koala diet within the region consists of five eucalypt species. Our data highlight the importance of residual Tertiary landforms to koala conservation in the region.
Abstract:
Urban encroachment on dense, coastal koala populations has ensured that their management has received increasing government and public attention. The recently developed National Koala Conservation Strategy calls for maintenance of viable populations in the wild. Yet the success of this, and other, conservation initiatives is hampered by the lack of reliable and generally accepted national and regional population estimates. In this paper we address this problem in a potentially large, but poorly studied, regional population in the State that is likely to have the largest wild populations. We draw on findings from previous reports in this series and apply the faecal standing-crop method (FSCM) to derive a regional estimate of more than 59 000 individuals. Validation trials in riverine communities showed that estimates of animal density obtained from the FSCM and direct observation were in close agreement. Bootstrapping and Monte Carlo simulations were used to obtain variance estimates for our population estimates in different vegetation associations across the region. The most favoured habitat was riverine vegetation, which covered only 0.9% of the region but supported 45% of the koalas. We also estimated that between 1969 and 1995 approximately 30% of the native vegetation associations considered potential koala habitat were cleared, leading to a decline of perhaps 10% in koala numbers. Management of this large regional population has significant implications for the national conservation of the species: the continued viability of this population is critically dependent on the retention and management of riverine and residual vegetation communities, and future vegetation-management guidelines should be cognisant of the potential impacts of clearing even small areas of critical habitat. We also highlight eight management implications.
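The population estimate above combines the faecal standing-crop method with bootstrap variance estimation. The sketch below shows, under loudly stated assumptions, how a bootstrap standard error for a pellet-derived density figure can be obtained; the plot counts, pellet deposition rate and decay period are hypothetical, and the real FSCM involves additional corrections not shown here.

```python
# Sketch: bootstrap variance for a pellet-derived koala density estimate.
# Counts and conversion factors below are hypothetical, not the study's data.
import numpy as np

rng = np.random.default_rng(42)
pellets_per_plot = rng.poisson(lam=3.0, size=120)   # hypothetical plot counts
plot_area_ha = 0.05                                  # hypothetical plot size (ha)
pellets_per_koala_day = 150                          # hypothetical deposition rate
decay_days = 100                                     # hypothetical pellet persistence

def density(counts: np.ndarray) -> float:
    """Koalas per hectare implied by the mean pellet standing crop."""
    standing_crop = counts.mean() / plot_area_ha     # pellets per hectare
    return standing_crop / (pellets_per_koala_day * decay_days)

boot = np.array([
    density(rng.choice(pellets_per_plot, size=pellets_per_plot.size, replace=True))
    for _ in range(2000)
])
print(f"density = {density(pellets_per_plot):.3f} koalas/ha, "
      f"bootstrap SE = {boot.std(ddof=1):.3f}")
```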
Abstract:
Mounting levels of insecticide resistance within Australian Helicoverpa spp. populations have resulted in the adoption of non-chemical IPM control practices such as trap cropping with chickpea, Cicer arietinum (L.). However, a new leaf blight disease affecting chickpea in Australia has the potential to limit its use as a trap crop. This paper therefore evaluates the potential of a variety of winter-active legume crops for use as an alternative spring trap crop to chickpea, as part of an effort to improve the area-wide management strategy for Helicoverpa spp. in central Queensland's cotton production region. The densities of Helicoverpa eggs and larvae were compared over three seasons on replicated plantings of chickpea, Cicer arietinum (L.), field pea, Pisum sativum (L.), vetch, Vicia sativa (L.), and faba bean, Vicia faba (L.). Of these treatments, field pea was found to harbour the highest densities of eggs. A partial life table study of the fate of eggs oviposited on field pea and chickpea suggested that a large proportion of the eggs laid on field pea suffered mortality due to dislodgment from the plants after oviposition. Plantings of field pea as a replacement trap crop for chickpea under commercial conditions confirmed the high level of attractiveness of this crop to ovipositing moths. The use of field pea as a trap crop as part of an area-wide management programme for Helicoverpa spp. is discussed.
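The partial life table referred to above apportions egg losses among successive stages. As a minimal sketch with made-up cohort counts (not the study's data), apparent mortality in each stage is simply the fraction of individuals entering that stage that fail to reach the next one:

```python
# Sketch: stage-specific (apparent) mortality from a partial life table.
# Cohort counts are hypothetical; in the study, dislodgment of eggs from field pea
# after oviposition was a major loss factor.
stages = ["eggs laid", "eggs retained on plant", "eggs hatched", "larvae surviving"]
entering = [1000, 430, 390, 250]   # hypothetical numbers entering each stage

for stage, n_in, n_out in zip(stages, entering, entering[1:]):
    deaths = n_in - n_out
    apparent_mortality = deaths / n_in
    print(f"{stage:<24s} n={n_in:4d}  deaths={deaths:4d}  "
          f"apparent mortality={apparent_mortality:.2f}")
```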
Abstract:
Soil nitrogen (N) supply in the Vertosols of southern Queensland, Australia has steadily declined as a result of long-term cereal cropping without N fertiliser application or rotations with legumes. Nitrogen-fixing legumes such as lucerne may enhance soil N supply and therefore could be used in lucerne-wheat rotations. However, lucerne leys in this subtropical environment can create a soil moisture deficit, which may persist for a number of seasons. Therefore, we evaluated the effect of varying the duration of a lucerne ley (for up to 4 years) on soil N increase, N supply to wheat, soil water changes, wheat yields and wheat protein on a fertility-depleted Vertosol in a field experiment between 1989 and 1996 at Warra (26°47'S, 150°53'E), southern Queensland. The experiment consisted of a wheat-wheat rotation, and 8 treatments of lucerne leys starting in 1989 (phase 1) or 1990 (phase 2) for 1, 2, 3 or 4 years' duration, followed by wheat cropping. Lucerne DM yield and N yield increased with increasing duration of lucerne leys. Soil N increased over time following 2 years of lucerne, but there was no further significant increase after 3 or 4 years of lucerne ley. Soil nitrate concentrations increased significantly with all lucerne leys and moved progressively downward in the soil profile from 1992 to 1995. Soil water, especially at 0.9-1.2 m depth, remained significantly lower for the next 3 years after the termination of the 4-year lucerne ley than under continuous wheat. No significant increase in wheat yields was observed from 1992 to 1995, irrespective of the lucerne ley. However, wheat grain protein concentrations were significantly higher under lucerne-wheat than under wheat-wheat rotations for 3-5 years. The lucerne yield and soil water and nitrate-N concentrations were satisfactorily simulated with the APSIM model. Although significant N accretion occurred in the soil following lucerne leys, in drier seasons recharge of the drier soil profile following long-duration lucerne occurred only after 3 years. Consequently, 3- and 4-year lucerne-wheat rotations resulted in more variable wheat yields than wheat-wheat rotations in this region. The remaining challenge in using lucerne-wheat rotations is balancing the N accretion benefits with plant-available water deficits, which are most likely to occur in the highly variable rainfall conditions of this region.
Abstract:
Data on catch sizes, catch rates, length-frequency and age composition from the Australian east coast tailor fishery are analysed by three different population dynamic models: a surplus production model, an age-structured model, and a model in which the population is structured by both age and length. The population is found to be very heavily exploited, with its ability to reproduce dependent on the fishery's incomplete selectivity of one-year-old fish. Estimates of recent harvest rates (proportion of fish available to the fishery that are actually caught in a single year) are over 80%. It is estimated that only 30–50% of one-year-old fish are available to the fishery. Results from the age-length-structured model indicate that both exploitable biomass (total mass of fish selected by the fishery) and egg production have fallen to about half the levels that prevailed in the 1970s, and about 40% of virgin levels. Two-year-old fish appear to have become smaller over the history of the fishery. This is assumed to be due to increased fishing pressure combined with non-selectivity of small one-year-old fish, whereby the one-year-old fish that survive fishing are small and grow into small two-year-old fish the following year. An alternative hypothesis is that the stock has undergone a genetic change towards smaller fish; the true explanation is unknown. The instantaneous natural mortality rate of tailor is hypothesised to be higher than previously thought, with values between 0.8 and 1.3 yr⁻¹ consistent with the models. These values apply only to tailor up to about three years of age, and it is possible that a lower value applies to fish older than three. The analysis finds no evidence that fishing pressure has yet affected recruitment. If a recruitment downturn were to occur, however, under current management and fishing pressure there is a strong chance that the fishery would need a complete closure for several years to recover, and even then recovery would be uncertain. Therefore it is highly desirable to better protect the spawning stock. The major recommendations are:
• an increase in the minimum size limit from 30 cm to 40 cm, in order to allow most one-year-old fish to spawn; and
• an experiment on discard mortality to gauge the proportion of fish between 30 cm and 40 cm that are likely to survive being caught and released by recreational line fishers (the dominant component of the fishery, currently harvesting roughly 1000 t p.a. versus about 200 t p.a. from the commercial fishery).
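Of the three model types fitted, the surplus production model is the simplest. The sketch below steps a logistic (Schaefer) surplus-production model through an illustrative catch series; the growth rate, carrying capacity and catches are hypothetical and are not the tailor assessment's estimates.

```python
# Sketch: logistic (Schaefer) surplus-production dynamics under a fixed catch series.
# Parameters and catches are illustrative only.
r, K = 0.6, 10_000.0        # intrinsic growth rate and carrying capacity (t), hypothetical
biomass = K                 # start at unfished (virgin) biomass
catches = [500, 800, 1200, 1500, 1500, 1200]   # annual catches (t), hypothetical

for year, catch in enumerate(catches, start=1):
    start_biomass = biomass
    surplus = r * start_biomass * (1 - start_biomass / K)
    biomass = max(start_biomass + surplus - catch, 0.0)
    harvest_rate = catch / start_biomass          # proportion of available biomass caught
    print(f"year {year}: biomass = {biomass:7.1f} t, harvest rate = {harvest_rate:.2f}")
```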
Abstract:
The aim of this study is to examine the relationship of the Roman villa to its environment. The villa was an important feature of the countryside, intended both for agricultural production and for leisure. Manuals of Roman agriculture give instructions on how to select a location for an estate. The ideal location was a moderate slope facing east or south in a healthy area and good neighborhood, near good water resources and fertile soils. A road, a navigable river or the sea was needed for transportation of produce. A market for selling the produce, a town or a village, should have been nearby. The research area is the surroundings of the city of Rome, a key area for the development of the villa. The materials used consist of archaeological settlement sites, literary and epigraphical evidence, as well as environmental data. The sites include all settlement sites from the 7th century BC to the 5th century AD, in order to examine changes in the tradition of site selection. Geographical Information Systems were used to analyze the data. Six aspects of location were examined: geology, soils, water resources, terrain, visibility/viewability, and relationship to roads and habitation centers. Geology was important for finding building materials, and the large villas from the 2nd century BC onwards are close to sources of building stones. Fertile soils were sought even in the period of the densest settlement. The area is rich in water, both rainfall and groundwater, and finding a water supply was fairly easy. A certain kind of terrain was sought over very long periods: a small spur or ridge shoulder, preferably facing south, with an open area in front of the site. The most popular villa resorts are located on slopes visible from almost the entire Roman region. A visible villa served the social and political aspirations of the owner, whereas being in the villa created a sense of privacy. The area has a very dense road network ensuring good connectivity from almost anywhere in the region. The best visibility/viewability, dense settlement and most burials by roads coincide, creating a good neighborhood. The locations featuring the most qualities cover nearly a quarter of the area, and more than half of the settlement sites are located in them. The ideal location was based on centuries of practical experience and rationalized by the literary tradition.
Abstract:
Groundwater tables are rising beneath irrigated fields in some areas of the Lower Burdekin in North Queensland, Australia. The soils where this occurs are predominantly sodic clay soils with low hydraulic conductivities. Many of these soils have been treated by applying gypsum or by increasing the salinity of irrigation water by mixing saline groundwater with fresh river water. While the purpose of these treatments is to increase infiltration into the surface soils and improve productivity of the root zone, it is thought that the treatments may have altered the soil hydraulic properties well below the root zone, leading to increased groundwater recharge and rising water tables. In this paper we discuss the use of column experiments and HYDRUS modelling, with major ion reaction and transport and soil water chemistry-dependent hydraulic conductivity, to assess the likely depth, magnitude and timing of the impacts of surface soil amelioration on soil hydraulic properties below the root zone and hence groundwater recharge. In the experiments, columns of sodic clays from the Lower Burdekin were leached for extended periods of time with either gypsum solutions or mixed cation salt solutions, and changes in hydraulic conductivity were measured. Leaching with a gypsum solution for an extended time period, until the flow rate stabilised, resulted in an approximately twenty-fold increase in hydraulic conductivity when compared with a low salinity, mixed cation solution. HYDRUS modelling was used to highlight the role of those factors which might influence the impacts of soil treatment, particularly at depth, including the large amounts of rain during the relatively short wet season and the presence of thick low-permeability clay layers.
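The column experiments above measure saturated hydraulic conductivity, which for a constant-head test is conventionally obtained from Darcy's law, K = QL/(A ΔH). The sketch below applies that formula to hypothetical column dimensions and outflows chosen only to produce a roughly twenty-fold contrast like the one reported; it is not HYDRUS and does not use the paper's measurements.

```python
# Sketch: saturated hydraulic conductivity from a constant-head column test (Darcy's law).
# All dimensions and outflow volumes are hypothetical.
import math

def darcy_k(outflow_ml: float, minutes: float, length_cm: float,
            diameter_cm: float, head_cm: float) -> float:
    """Saturated hydraulic conductivity (cm/min): K = Q * L / (A * dH)."""
    area = math.pi * (diameter_cm / 2) ** 2   # cross-sectional area (cm^2)
    q = outflow_ml / minutes                  # flow rate (cm^3/min; 1 mL = 1 cm^3)
    return q * length_cm / (area * head_cm)

k_gypsum = darcy_k(outflow_ml=24.0, minutes=60, length_cm=10, diameter_cm=5, head_cm=20)
k_saline = darcy_k(outflow_ml=1.2,  minutes=60, length_cm=10, diameter_cm=5, head_cm=20)
print(f"gypsum-leached column:     K = {k_gypsum:.4f} cm/min")
print(f"low-salinity mixed cation: K = {k_saline:.4f} cm/min (ratio ~{k_gypsum / k_saline:.0f}x)")
```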
Capsicum chlorosis virus infecting Capsicum annuum in the East Kimberley region of Western Australia
Abstract:
Capsicum chlorosis virus (CaCV) was detected in field grown Capsicum annuum from Kununurra in northeast Western Australia. Identification of the Kununurra isolate (WA-99) was confirmed using sap transmission to indicator hosts, positive reactions with tospovirus serogroup IV-specific antibodies and CaCV-specific primers, and amino acid sequence comparisons that showed >97% identity with published CaCV nucleocapsid gene sequences. The reactions of indicator hosts to infection with WA-99 often differed from those of the type isolate from Queensland. The virus multiplied best when test plants were grown at warm temperatures. CaCV was not detected in samples collected in a survey of C. annuum crops planted in the Perth Metropolitan area.
Abstract:
Table beet production in the Lockyer Valley of south-eastern Queensland is known to be adversely affected by soilborne root disease from infection by Pythium spp. However, little is known regarding the species or genotypes that are the causal agents of both pre- and post-emergence damping off. Based on RFLP analysis with HhaI, HinfI and MboI of the PCR-amplified ITS region DNA from soil and diseased plant samples, the majority of 130 Pythium isolates could be grouped into three genotypes, designated LVP A, LVP B and LVP C. These groups comprised 43, 41 and 7% of all isolates, respectively. Deoxyribonucleic acid sequence analysis of the ITS region indicated that LVP A was a strain of Pythium aphanidermatum, with greater than 99% similarity to the corresponding P. aphanidermatum sequences from the publicly accessible databases. The DNA sequences from LVP B and LVP C were most closely related to P. ultimum and P. dissotocum, respectively. Lower frequencies of other distinct isolates with unique RFLP patterns were also obtained, with high levels of similarity (>97%) to P. heterothallicum, P. periplocum and genotypes of P. ultimum other than LVP B. Inoculation trials of 1- and 4-week-old beet seedlings indicated that, compared with isolates of the LVP B genotype, a higher frequency of LVP A isolates caused disease. Isolates with the LVP A, LVP B and LVP C genotypes were highly sensitive to the fungicide Ridomil MZ, which suppressed radial growth on V8 agar by approximately four- to thirty-fold at 5 μg/mL metalaxyl and 40 μg/mL mancozeb, concentrations far lower than the recommended field application rate.
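Genotype assignment above ultimately rests on percent identity between the isolates' ITS sequences and reference sequences. The sketch below shows the basic identity calculation on two short, made-up, pre-aligned fragments; real comparisons would use full ITS sequences and a proper aligner such as BLAST.

```python
# Sketch: percent identity between two pre-aligned sequence fragments.
# The fragments are invented for illustration and are not the study's ITS data.
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity over an equal-length, gap-free aligned region."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

isolate_its   = "TTGCGGAAGGATCATTACCACACCTAAAAACTTTCCACGTGAACCGTTTCAA"
reference_its = "TTGCGGAAGGATCATTACCACACCTAAAAACTTTCCACGTGAACTGTTTCAA"
print(f"identity = {percent_identity(isolate_its, reference_its):.1f}%")
```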
Abstract:
The main weeds and weed management practices undertaken in broadacre dryland cropping areas of north-eastern Australia have been identified. The information was collected in a comprehensive postal survey of both growers and agronomists from Dubbo in New South Wales (NSW) through to Clermont in central Queensland, with 237 surveys returned. A very diverse weed flora of 105 weeds from 91 genera was identified for the three cropping zones within the region (central Queensland, southern Queensland and northern NSW). Twenty-three weeds were common to all cropping zones. The major common weeds were Sonchus oleraceus, Rapistrum rugosum, Echinochloa spp. and Urochloa panicoides. The main weeds were identified for both summer and winter fallows, and for sorghum, wheat and chickpea crops in each of the zones, with some commonality as well as floral uniqueness recorded. More genera were recorded in the fallows than in crops, and those in summer fallows exceeded the number in winter. Across the region, weed management relied heavily on herbicides. In fallows, glyphosate and mixes with glyphosate were very common, although the importance of the glyphosate mix partner differed among the cropping zones. The use and importance of pre-emergence herbicides in-crop varied considerably among the zones. In wheat, more graminicides were used in northern NSW than in southern Queensland, and virtually none were used in central Queensland, reflecting the differences in winter grass weed flora across the region. Atrazine was the major herbicide used in sorghum, although metolachlor was also used, predominantly in northern NSW. Fallow and inter-row cultivation were used more often in the southern areas of the region. Grazing of fallows was more prominent in northern NSW. High crop seeding rates were not commonly recorded, indicating that growers are not using crop competition as a tool for weed management. Although many management practices were recorded overall, few growers were using integrated weed management, and herbicide resistance has been, and continues to be, an issue for the region.
Abstract:
Negative potassium (K) balances in all broadacre grain cropping systems in northern Australia are resulting in a decline in the plant-available reserves of K and necessitating a closer examination of strategies to detect and respond to developing K deficiency in clay soils. Grain growers on the Red Ferrosol soils have increasingly encountered K deficiency over the last 10 years due to lower available K reserves in these soils in their native condition. However, the problem is now increasingly evident on the medium-heavy clay soils (Black and Grey Vertosols) and is made more complicated by the widespread adoption of direct-drill cropping systems and the resulting strong stratification of available K reserves in the top 0.05-0.1 m of the soil profile. This paper reports glasshouse studies examining the fate of applied K fertiliser in key cropping soils of the inland Burnett region of south-east Queensland, and uses the resultant understanding of K dynamics to interpret results of field trials assessing the effectiveness of K application strategies in terms of K availability to crop plants. At similar concentrations of exchangeable K (K-exch), soil solution K concentrations and the activity of K in the soil solution (AR(K)) varied by 6-7-fold between soil types. When K-exch arising from different rates of fertiliser application was expressed as a percentage of the effective cation exchange capacity (i.e. K saturation), there was evidence of greater selective adsorption of K on the exchange complex of Red Ferrosols than of Black and Grey Vertosols or Brown Dermosols. Both soil solution K and AR(K) were much less responsive to increasing K-exch in the Black Vertosols; this is indicative of these soils having a high K buffer capacity (KBC). These contrasting properties have implications for the rate of diffusive supply of K to plant roots and the likely impact of K application strategies (banding v. broadcast and incorporation) on plant K uptake. Field studies investigating K application strategies (banding v. broadcasting) and the interaction with the degree of soil disturbance/mixing of different soil types are discussed in relation to the K dynamics derived from the glasshouse studies. A greater propensity to accumulate luxury K in crop biomass was observed in a Brown Ferrosol with a KBC lower than that of a Black Vertosol, consistent with more efficient diffusive supply to plant roots in the Ferrosol. This luxury K uptake, when combined with crops exhibiting low proportional removal of K in the harvested product (i.e. low-K-harvest-index crops such as coarse grains and winter cereals) and residue retention, can lead to rapid re-development of stratified K profiles. There was clear evidence that some incorporation of K fertiliser into soil was required to facilitate root access and crop uptake, although there was no evidence of a need to incorporate K fertiliser any deeper than achieved by conventional disc tillage (i.e. 0.1-0.15 m). Recovery of fertiliser K applied in deep (0.25-0.3 m) bands in combination with N and P to facilitate root proliferation was quite poor in Red Ferrosols and Grey or Black Vertosols with moderate effective cation exchange capacity (ECEC, 25-35 cmol(+)/kg), was reasonable but not enough to overcome K deficiency in a Brown Dermosol (ECEC 11 cmol(+)/kg), but was quite good on a Black Vertosol (ECEC 50-60 cmol(+)/kg).
Collectively, these results suggest that frequent small applications of K fertiliser, preferably with some soil mixing, are an effective fertiliser application strategy on lighter clay soils with low KBC and an effective diffusive supply mechanism. Alternatively, concentrated K bands and enhanced root proliferation around them may be a more effective strategy in Vertosol soils with high KBC and limited diffusive supply. Further studies to assess this hypothesis are needed.
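The K saturation measure used in the abstract is simply exchangeable K expressed as a percentage of the effective cation exchange capacity (ECEC), both in cmol(+)/kg. A minimal sketch with illustrative soil values loosely spanning the ECEC range quoted above (they are not the study's measurements):

```python
# Sketch: K saturation (%) = exchangeable K / ECEC * 100, both in cmol(+)/kg.
# Soil values are illustrative only.
soils = {
    "Red Ferrosol":   {"K_exch": 0.35, "ECEC": 12.0},
    "Brown Dermosol": {"K_exch": 0.30, "ECEC": 11.0},
    "Grey Vertosol":  {"K_exch": 0.55, "ECEC": 30.0},
    "Black Vertosol": {"K_exch": 0.60, "ECEC": 55.0},
}

for name, props in soils.items():
    k_sat = 100.0 * props["K_exch"] / props["ECEC"]
    print(f"{name:<15s} K saturation = {k_sat:.1f}% of ECEC")
```

The same exchangeable K therefore represents a much smaller fraction of the exchange complex on a high-ECEC Black Vertosol than on a low-ECEC Ferrosol or Dermosol, which is one way to see why solution K and diffusive supply differ so strongly between these soils.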
Abstract:
For pasture growth in the semi-arid tropics of north-east Australia, where up to 80% of annual rainfall occurs between December and March, the timing and distribution of rainfall events is often more important than the total amount. In particular, the timing of the 'green break of the season' (GBOS) at the end of the dry season, when new pasture growth becomes available as forage and a live-weight gain is measured in cattle, affects several important management decisions that prevent overgrazing and pasture degradation. Currently, beef producers in the region use a GBOS rule based on rainfall (e.g. 40 mm of rain over three days by 1 December) to define the event and make their management decisions. A survey of 16 beef producers in north-east Queensland showed that three-quarters of respondents use a rainfall amount that occurs in only half or less than half of all years at their location. In addition, only half the producers expect the GBOS to occur within two weeks of the median date calculated by the CSIRO plant growth days model GRIM. This result suggests that in the producer rules, either the rainfall quantity or the period of time over which the rain is expected is unrealistic. Despite only 37% of beef producers indicating that they use a southern oscillation index (SOI) forecast in their decisions, cross-validated LEPS (linear error in probability space) analyses showed that both the average 3-month July-September SOI and the 2-month August-September SOI have significant forecast skill in predicting the probability of both the amount of wet season rainfall and the timing of the GBOS. The communication and implementation of a rigorous and realistic definition of the GBOS, and the likely impacts of anthropogenic climate change on the region, are discussed in the context of the sustainable management of northern Australian rangelands.
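The forecast-skill results above are expressed as cross-validated LEPS scores. A minimal sketch of a LEPS calculation, assuming the commonly cited Potts et al. (1996) formulation and an entirely hypothetical rainfall climatology, forecast and observation (none of the study's data are used):

```python
# Sketch: LEPS (linear error in probability space) score for a single forecast,
# assuming the Potts et al. (1996) form: S = 3(1 - |Pf - Po| + Pf^2 - Pf + Po^2 - Po) - 1,
# where Pf and Po are climatological cumulative probabilities of forecast and observation.
# The climatology and values below are hypothetical.
import numpy as np

def empirical_cdf(climatology: np.ndarray):
    """Return a function mapping a value to its cumulative probability in the climatology."""
    sorted_vals = np.sort(climatology)
    def cdf(x: float) -> float:
        return np.searchsorted(sorted_vals, x, side="right") / sorted_vals.size
    return cdf

def leps_score(p_forecast: float, p_observed: float) -> float:
    return 3 * (1 - abs(p_forecast - p_observed)
                + p_forecast**2 - p_forecast
                + p_observed**2 - p_observed) - 1

rng = np.random.default_rng(1)
wet_season_rain = rng.gamma(shape=4.0, scale=120.0, size=50)   # hypothetical climatology (mm)
cdf = empirical_cdf(wet_season_rain)

forecast_mm, observed_mm = 420.0, 510.0                         # hypothetical values
print(f"LEPS = {leps_score(cdf(forecast_mm), cdf(observed_mm)):.3f}")
```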
Abstract:
The incorporation of sown pastures as short-term rotations into the cropping systems of northern Australia has been slow. The inherent chemical fertility and physical stability of the predominant vertisol soils across the region enabled farmers to grow crops for decades without nitrogen fertiliser, and precluded the evolution of a crop–pasture rotation culture. However, as less fertile and less physically stable soils were cropped for extended periods, farmers began to use contemporary farming and sown pasture technologies to rebuild and maintain their soils. This has typically involved sowing long-term grass and grass–legume pastures on the more marginal cropping soils of the region. In partnership with the catchment management authority, the Queensland Murray–Darling Committee (QMDC) and Landcare, a pasture extension process using the LeyGrain™ package was implemented in 2006 within two Grain & Graze projects in the Maranoa-Balonne and Border Rivers catchments in southern inland Queensland. The specific objectives were to increase the area sown to high quality pasture and to gain production and environmental benefits (particularly groundcover) through improving the skills of producers in pasture species selection, their understanding and management of risk during pasture establishment, and in managing pastures and the feed base better. The catalyst for increasing pasture sowings was a QMDC subsidy scheme for increasing groundcover on old cropping land. In recognising a need to enhance pasture knowledge and skills to implement this scheme, the QMDC and Landcare producer groups sought the involvement of, and set specific targets for, the LeyGrain workshop process. This is a highly interactive action learning process that built on the existing knowledge and skills of the producers. Thirty-four workshops were held with more than 200 producers in 26 existing groups and with private agronomists. An evaluation process assessed the impact of the workshops on the learning and skill development by participants, their commitment to practice change, and their future intent to sow pastures. The results across both project catchments were highly correlated. There was strong agreement by producers (>90%) that the workshops had improved knowledge and skills regarding the adaptation of pasture species to soils and climates, enabling a better selection at the paddock level. Additional strong impacts were in changing the attitudes of producers to all aspects of pasture establishment, and the relative species composition of mixtures. Producers made a strong commitment to practice change, particularly in managing pasture as a specialist crop at establishment to minimise risk, and in the better selection and management of improved pasture species (particularly legumes and the use of fertiliser). Producers have made a commitment to increase pasture sowings by 80% in the next 5 years, with fourteen producers in one group alone having committed to sow an additional 4893 ha of pasture in 2007–08 under the QMDC subsidy scheme. The success of the project was attributed to the partnership between QMDC and Landcare groups who set individual workshop targets with LeyGrain presenters, the interactive engagement processes within the workshops themselves, and the follow-up provided by the LeyGrain team for on-farm activities.
Abstract:
Buffel grass [Pennisetum ciliare (L.) Link] has been widely introduced in the Australian rangelands as a consequence of its value for productive grazing, but tends to establish competitively in non-target areas such as remnant vegetation. In this study, we examined the influence that landscape-scale and local-scale variables had on the distribution of buffel grass in remnant poplar box (Eucalyptus populnea F. Muell.) dominant woodland fragments in the Brigalow Bioregion, Queensland. Buffel grass and variables thought to influence its distribution in the region were measured at 60 sites, which were selected based on the amount of native woodland retained in the landscape and patch size. An information-theoretic modelling approach and hierarchical partitioning revealed that the most influential variable was the percentage of retained vegetation within a 1-km spatial extent. From this, we identified a critical threshold of approximately 30% retained vegetation in the landscape, above which the model predicted buffel grass was not likely to occur in a woodland fragment. Other explanatory variables in the model were site-based, and included litter cover and long-term rainfall. Given the paucity of information on the effect of buffel grass upon biodiversity values, we undertook exploratory analyses to determine whether buffel grass cover influenced the distribution of grass, forb and reptile species. We detected some trends; hierarchical partitioning revealed that buffel grass cover was the most important explanatory variable describing habitat preferences of four reptile species. However, establishing causal links, particularly between native grass and forb species and buffel grass, was problematic owing to possible confounding with grazing pressure. We conclude with a set of management recommendations aimed at reducing the spread of buffel grass into remnant woodlands.
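The approximately 30% threshold above comes from modelling buffel grass presence/absence in fragments against retained vegetation. The sketch below fits a logistic regression to simulated occurrence data constructed to mimic that pattern; it illustrates how such a threshold can be read off a fitted model and is not the study's information-theoretic analysis or hierarchical partitioning.

```python
# Sketch: logistic regression of buffel grass occurrence against % retained vegetation,
# with the 50%-probability point read off as a threshold. Data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
retained_pct = rng.uniform(0, 80, 60)                     # % retained vegetation within 1 km
p_present = 1 / (1 + np.exp(0.25 * (retained_pct - 30)))  # occurrence less likely above ~30%
present = rng.binomial(1, p_present)                      # simulated presence/absence

model = LogisticRegression().fit(retained_pct.reshape(-1, 1), present)
# Retained-vegetation level at which the predicted probability of occurrence is 0.5.
threshold = -model.intercept_[0] / model.coef_[0, 0]
print(f"estimated threshold ~ {threshold:.0f}% retained vegetation")
```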