73 results for Soils, Salts in
at University of Queensland eSpace - Australia
Abstract:
A method for regional assessment of the distribution of saline outbreaks is demonstrated for a large area (68 000 km²) in north Queensland, Australia. Soil samples were used in conjunction with a digital elevation model and a map of potentially saline discharge zones to examine the landscape distribution of soluble salts in the region. The hypothesis of atmospheric accession of salt was tested for the topographically defined catchment regions feeding into each potentially saline discharge area. Most catchments showed a salt distribution consistent with this hypothesis, i.e. %TSS was large near the discharge areas and decreased rapidly with distance uphill from the discharge areas. In some catchments, however, local saline outbreaks were apparent at significant distances uphill from discharge areas. The possibility of geological sources of this salt was examined by comparing random point distributions with the location of saline points with distance downhill from geological units (excluding points near discharge zones). The distribution of some saline outbreaks was consistent with the occurrence of Cambro-Ordovician metasediments, Devonian limestone, Upper Devonian-Lower Carboniferous volcanics, and Triassic sediments. Copyright (C) 2000 John Wiley & Sons, Ltd.
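A sketch of the kind of landscape test this abstract describes, not the study's actual GIS workflow: samples are grouped by distance uphill from the nearest mapped discharge zone and mean %TSS is compared across distance bins, which should decline monotonically under the atmospheric-accession hypothesis. The sample data, bin width, and function name below are illustrative assumptions.

# Group (distance uphill, %TSS) pairs into distance bins and report the mean
# %TSS per bin; values here are invented for illustration.
def mean_tss_by_distance(samples, bin_width_m=500.0):
    """samples: (distance_uphill_m, percent_total_soluble_salts) pairs."""
    bins = {}
    for dist, tss in samples:
        bins.setdefault(int(dist // bin_width_m), []).append(tss)
    return {b * bin_width_m: sum(v) / len(v) for b, v in sorted(bins.items())}

samples = [(50, 0.80), (120, 0.65), (600, 0.30), (750, 0.22),
           (1400, 0.08), (2600, 0.05), (3100, 0.04)]
for start, mean_tss in mean_tss_by_distance(samples).items():
    print(f"{start:>6.0f}-{start + 500:.0f} m uphill: mean %TSS = {mean_tss:.2f}")

Catchments that fail to show this decline are the ones the study goes on to compare against the mapped geological units as possible salt sources.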
Abstract:
Despite the increasing prevalence of salinity world-wide, the measurement of exchangeable cation concentrations in saline soils remains problematic. Two soil types (Mollisol and Vertisol) were equilibrated with a range of sodium adsorption ratio (SAR) solutions at various ionic strengths. The concentrations of exchangeable cations were then determined using several different types of methods, and the measured exchangeable cation concentrations compared to reference values. At low ionic strength (low salinity), the concentration of exchangeable cations can be accurately estimated from the total soil extractable cations. In saline soils, however, the presence of soluble salts in the soil solution precludes the use of this method. Leaching of the soil with a pre-wash solution (such as alcohol) was found to effectively remove the soluble salts from the soil, thus allowing the accurate measurement of the effective cation exchange capacity (ECEC). However, the dilution associated with this pre-washing increased the exchangeable Ca concentrations while simultaneously decreasing exchangeable Na. In contrast, when calculated as the difference between the total extractable cations and the soil solution cations, good correlations were found between the calculated exchangeable cation concentrations and the reference values for both Na (Mollisol: y=0.873x and Vertisol: y=0.960x) and Ca (Mollisol: y=0.901x and Vertisol: y=1.05x). Therefore, for soils with a soil solution ionic strength greater than 50 mM (electrical conductivity of 4 dS/m) (in which exchangeable cation concentrations are overestimated by the assumption they can be estimated as the total extractable cations), concentrations can be calculated as the difference between total extractable cations and soluble cations.
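A minimal sketch of the difference method this abstract reports, assuming hypothetical soil values; the function and data below are illustrative, not taken from the paper.

# Exchangeable cations estimated as total extractable cations minus
# soil-solution (soluble) cations, all in cmol(+)/kg.
def exchangeable_by_difference(total_extractable_cmolc, soluble_cmolc):
    return {ion: total_extractable_cmolc[ion] - soluble_cmolc.get(ion, 0.0)
            for ion in total_extractable_cmolc}

# Hypothetical saline soil in which soluble Na inflates the extractable pool.
total = {"Na": 6.0, "Ca": 9.5, "Mg": 2.1, "K": 0.8}    # cmol(+)/kg
soluble = {"Na": 3.2, "Ca": 0.6, "Mg": 0.3, "K": 0.1}  # cmol(+)/kg
est = exchangeable_by_difference(total, soluble)
print({ion: round(val, 2) for ion, val in est.items()})

At low ionic strength the soluble terms are negligible and the correction changes little; in saline soils it prevents soluble Na in particular from being counted as exchangeable.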
Abstract:
Most soils contain preferential flow paths that can impact on solute mobility. Solutes can move rapidly down the preferential flow paths with high pore-water velocities, but can be held in the less permeable region of the soil matrix with low pore-water velocities, thereby reducing the efficiency of leaching. In this study, we conducted leaching experiments with interruption of the flow and drainage of the main flow paths to assess the efficiency of this type of leaching. We compared our experimental results to a simple analytical model, which predicts the influence of the variations in concentration gradients within a single spherical aggregate (SSA) surrounded by preferential flow paths on leaching. We used large (length: 300 mm, diameter: 216 mm) undisturbed field soil cores from two contrasting soil types. To carry out intermittent leaching experiments, the field soil cores were first saturated with tracer solution (CaBr2), and background solution (CaCl2) was applied to mimic a leaching event. The cores were then drained at 25- to 30-cm suction to empty the main flow paths to mimic a dry period during which solutes could redistribute within the undrained region. We also conducted continuous leaching experiments to assess the impact of the dry periods on the efficiency of leaching. The flow interruptions with drainage enhanced leaching by 10-20% for our soils, which was consistent with the model's prediction, given an optimised equivalent aggregate radius for each soil. This parameter quantifies the time scales that characterise diffusion within the undrained region of the soil, and allows us to calculate the duration of the leaching events and interruption periods that would lead to more efficient leaching. Application of these methodologies will aid development of strategies for improving management of chemicals in soils, needed in managing salts in soils, in improving fertiliser efficiency, and in reclaiming contaminated soils. (C) 2000 Elsevier Science B.V. All rights reserved.
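The following is a minimal sketch, not the authors' single-spherical-aggregate model as published: the classical series solution for diffusion out of a sphere illustrates how an equivalent aggregate radius sets the time scale over which solutes in the undrained region can redistribute during a flow interruption. The diffusion coefficient, radius, and interruption times are assumed values.

import math

def fraction_remaining(t, radius, D, n_terms=200):
    """Fraction of solute still inside a sphere after time t, assuming an
    initially uniform concentration and a well-flushed surface (series solution)."""
    return 6.0 / math.pi ** 2 * sum(
        math.exp(-(n * math.pi / radius) ** 2 * D * t) / n ** 2
        for n in range(1, n_terms + 1))

D = 1e-9   # m^2/s, order-of-magnitude diffusion coefficient for ions in water
a = 0.01   # m, hypothetical equivalent aggregate radius
for hours in (1, 6, 24, 72):
    frac = fraction_remaining(hours * 3600.0, a, D)
    print(f"{hours:>3} h interruption: {frac:.2f} of the salt still inside the aggregate")

The characteristic time a**2/D (about 28 h for these assumed values) indicates how long an interruption needs to be for meaningful redistribution before leaching resumes.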
Abstract:
This review considers the current literature on the macro-mineral nutrition of the soon-to-calve, or transition, dairy cow. Calcium is the main focus, since milk fever (clinical hypocalcaemia) appears to be the most common mineral-related problem faced by the transition cow Australia-wide. The importance of minimising calcium intake and optimising the balance of the key dietary electrolytes, sodium, potassium, sulfate, and chloride, in the weeks before calving is highlighted. Excess dietary potassium can, in some situations, induce milk fever, perhaps even more effectively than excess calcium. Excess sodium remains under suspicion. In contrast, excess dietary chlorine and, to a lesser extent, sulfur can improve the ability of the cow to maintain calcium homeostasis. Diets that promote either hypomagnesaemia or hyperphosphataemia also have the potential to precipitate milk fever at calving. Current prevention strategies focus on the use of forages with moderate to low levels of calcium, potassium, and sodium, and also rely on the addition of chloride and sulfate in the form of 'anionic' feeds. Anionic salts are one example of an anionic feed. However, legitimate questions remain as to the effectiveness of anionic salts in pasture-feeding systems. The causes and prevention of milk fever are considered from the perspective of the variety of Australian feedbases. Impediments to the use of anionic feeds in Australian feeding systems are outlined. The potential for improving maternal reserves of calcium around calving to reduce the risk of milk fever is also discussed.
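For readers unfamiliar with 'anionic' feeds, transition diets are commonly formulated around a dietary cation-anion difference, often written as DCAD = (Na + K) - (Cl + S) in mEq per kg of dry matter; this formula is a common convention rather than something stated in the abstract, and the pasture composition below is hypothetical.

# Milliequivalents per gram from standard atomic masses and valences.
MEQ_PER_G = {"Na": 1000 / 23.0, "K": 1000 / 39.1, "Cl": 1000 / 35.5, "S": 2 * 1000 / 32.1}

def dcad_meq_per_kg_dm(pct_dm):
    """DCAD in mEq/kg DM from mineral contents given as % of dry matter."""
    grams_per_kg = {el: pct * 10.0 for el, pct in pct_dm.items()}   # % DM -> g/kg DM
    meq = {el: grams_per_kg[el] * MEQ_PER_G[el] for el in grams_per_kg}
    return (meq["Na"] + meq["K"]) - (meq["Cl"] + meq["S"])

pasture = {"Na": 0.15, "K": 2.5, "Cl": 0.6, "S": 0.25}   # % of DM, hypothetical pre-calving pasture
print(f"DCAD ~ {dcad_meq_per_kg_dm(pasture):.0f} mEq/kg DM (high-K pasture, strongly positive)")

A strongly positive DCAD, typically driven by pasture potassium, is the situation that anionic feeds are used to counteract before calving.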
Abstract:
Low-temperature (15 K) single-crystal neutron-diffraction structures and Raman spectra of the salts (NX4)2[Cu(OX2)6](SO4)2, where X = H or D, are reported. This study is concerned with the origin of the structural phase change that is known to occur upon deuteration. Data for the deuterated salt were measured in the metastable state, achieved by application of 500 bar of hydrostatic pressure at approximately 303 K followed by cooling to 281 K and the subsequent release of pressure. This allows for the direct comparison between the hydrogenous and deuterated salts, in the same modification, at ambient pressure and low temperature. The Raman spectra provide no intimation of any significant change in the intermolecular bonding. Furthermore, structural differences are few, the largest being for the long Cu-O bond, which is 2.2834(5) and 2.2802(4) Angstrom for the hydrogenous and the deuterated salts, respectively. Calorimetric data for the deuterated salt are also presented, providing an estimate of 0.17(2) kJ/mol for the enthalpy difference between the two structural forms at 295.8(5) K. The structural data suggest that substitution of hydrogen for deuterium gives rise to changes in the hydrogen-bonding interactions that result in a slightly reduced force field about the copper(II) center. The small structural differences suggest different relative stabilities for the hydrogenous and deuterated salts, which may be sufficient to stabilize the hydrogenous salt in the anomalous structural form.
Abstract:
Plastic cracking of cement mortar and concrete is primarily attributable to desiccation by evaporation from unprotected surfaces. This causes high suctions (negative pressures) to develop in the pore water adjacent to these surfaces. Dissolved salts in the pore water can also contribute significantly to suctions. Quantitative expressions are available for all of the components of the total suction. The development of suctions over time is illustrated by the results of desiccation tests conducted on cement mortars, supplemented by data from the literature. It is shown that ambient conditions conducive to plastic cracking can arise almost anywhere, but that the extremely high suctions that develop in mature cement mortar and concrete do not imply that compression failures should occur. A high value of fracture energy derived from the desiccation test data implies that plastic cracking is characterized by a significant zone of plastic straining or microcracking.
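The quantitative expressions the abstract alludes to are not reproduced here; the sketch below instead uses two standard relations (the Kelvin equation for the total suction sustained at a given pore relative humidity, and the van't Hoff relation for the osmotic contribution of dissolved salts) with assumed values, to show how salts add to the suctions involved.

import math

R = 8.314        # J/(mol K)
V_W = 1.8e-5     # m^3/mol, molar volume of water

def total_suction_pa(relative_humidity, temp_k=293.15):
    """Kelvin equation: suction sustained by pore water at a given relative humidity."""
    return -(R * temp_k / V_W) * math.log(relative_humidity)

def osmotic_suction_pa(molarity_mol_per_l, ions_per_formula=2, temp_k=293.15):
    """van't Hoff estimate of the osmotic suction from dissolved salts."""
    return ions_per_formula * molarity_mol_per_l * 1000.0 * R * temp_k

print(f"Total suction at 98% RH: {total_suction_pa(0.98) / 1e6:.1f} MPa")
print(f"Osmotic suction, 0.5 M NaCl-like pore water: {osmotic_suction_pa(0.5) / 1e6:.1f} MPa")

With total suction taken as matric plus osmotic, the difference between the two printed values is the matric part, which is why dissolved salts can contribute appreciably to the suctions driving plastic cracking.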
Abstract:
The movement of chemicals through the soil to groundwater, or their discharge to surface waters, represents a degradation of these resources. In many cases, serious human and stock health implications are associated with this form of pollution. The chemicals of interest include nutrients, pesticides, salts, and industrial wastes. Recent studies have shown that current models and methods do not adequately describe the leaching of nutrients through soil, often underestimating the risk of groundwater contamination by surface-applied chemicals, and overestimating the concentration of resident solutes. This inaccuracy results primarily from ignoring soil structure and nonequilibrium between soil constituents, water, and solutes. A multiple sample percolation system (MSPS), consisting of 25 individual collection wells, was constructed to study the effects of localized soil heterogeneities on the transport of nutrients (NO3-, Cl-, PO43-) in the vadose zone of a clay-dominated agricultural soil. Very significant variations in drainage patterns across a small spatial scale were observed (one-way ANOVA, p < 0.001), indicating considerable heterogeneity in water flow patterns and nutrient leaching. Using data collected from the multiple sample percolation experiments, this paper compares the performance of two mathematical models for predicting solute transport: the advection-dispersion model with a reaction term (ADR), and a two-region preferential flow model (TRM) suitable for modelling nonequilibrium transport. These results have implications for modelling solute transport and predicting nutrient loading on a larger scale. (C) 2001 Elsevier Science Ltd. All rights reserved.
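A minimal sketch of the first of the two models compared above, the advection-dispersion equation with a first-order reaction term, solved with a simple explicit upwind scheme; the parameters are illustrative rather than fitted values from the paper, and the two-region model would add an exchange term between mobile and immobile water.

# dC/dt = D d2C/dx2 - v dC/dx - k C, explicit upwind finite differences.
def adr_step(C, dx, dt, D, v, k):
    new = C[:]
    for i in range(1, len(C) - 1):
        dispersion = D * (C[i + 1] - 2 * C[i] + C[i - 1]) / dx ** 2
        advection = -v * (C[i] - C[i - 1]) / dx          # upwind, flow in +x
        reaction = -k * C[i]
        new[i] = C[i] + dt * (dispersion + advection + reaction)
    new[0] = 0.0                                          # salt-free water applied at the surface
    new[-1] = new[-2]                                     # free-drainage outlet
    return new

dx, dt = 0.01, 200.0                    # m, s (within the explicit stability limits)
D, v, k = 1e-7, 1e-5, 1e-6              # dispersion m^2/s, pore-water velocity m/s, decay 1/s
C = [1.0] * 5 + [0.0] * 96              # initial solute pulse in the top 5 cm of a 1 m profile
for _ in range(250):                    # roughly 14 h of simulated leaching
    C = adr_step(C, dx, dt, D, v, k)
peak = max(C)
print(f"Peak concentration {peak:.2f} at depth {C.index(peak) * dx:.2f} m")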
Abstract:
The measurement of exchangeable cations in saline soils is limited by the difficulty of accurately separating soluble cations from exchangeable cations. A method is examined for saline soils in which exchangeable cations are calculated as the total extractable cations minus the concentration of soil solution (soluble) cations. A further two standard methods were also investigated: one assumes that the total soil extractable cations are exchangeable, while the other uses a pretreatment to remove soluble salts prior to measurement of the remaining (exchangeable) cations. After equilibration with a range of sodium adsorption ratio (SAR) solutions at various ionic strengths, the exchangeable cation concentrations of two soils (Dermosol and Vertosol) were determined by these methods and compared to known values. The assumption that exchangeable cations can be estimated as the total soil extractable cations, although valid at low ionic strength, resulted in an overestimation of exchangeable Na and Ca concentrations at higher ionic strengths due to the presence of soluble salts. Pretreatment with ethanol and glycerol was found to effectively remove soluble salts, thus allowing the accurate measurement of the effective cation exchange capacity (ECEC); however, dilution associated with the pretreatment process decreased concentrations of exchangeable Ca while simultaneously increasing exchangeable Na. Using the proposed method, good correlations were found between known and measured concentrations of exchangeable Na (Dermosol: y=0.873x and Vertosol: y=0.960x) and Ca (Dermosol: y=0.906x and Vertosol: y=1.05x). Therefore, for soils with an ionic strength of approximately 50 mM (ECse of 4 dS/m) or greater (in which exchangeable cation concentrations are overestimated by assuming the total soil cations are exchangeable), concentrations can be calculated as the difference between total extractable cations and soluble cations.
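A small decision sketch built on the threshold stated in this abstract, which pairs an ionic strength of about 50 mM with an ECse of 4 dS/m (roughly 12.5 mM per dS/m); the conversion factor and helper function are illustrative assumptions, not part of the paper's method.

MM_PER_DS_M = 12.5   # approximate ionic strength (mM) per dS/m of ECse, inferred from the 50 mM / 4 dS/m pairing

def suggested_method(ec_se_ds_m):
    """Suggest how to estimate exchangeable cations from the saturation-extract EC."""
    ionic_strength_mm = MM_PER_DS_M * ec_se_ds_m
    if ionic_strength_mm < 50.0:
        return "non-saline: total extractable cations are a reasonable estimate"
    return "saline: subtract soil-solution (soluble) cations from total extractable cations"

for ec in (0.8, 2.0, 4.0, 9.5):
    print(f"ECse {ec:>4} dS/m -> {suggested_method(ec)}")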
Abstract:
Although plant growth is often limited at high pH, little is known about root-induced changes in the rhizospheres of plants growing in alkaline soils. The effect of Mn deficiency in Rhodes grass (Chloris gayana cv. Pioneer) and of legume inoculation in lucerne (Medicago sativa L. cv. Hunter River) on the rhizosphere pH of plants grown in highly alkaline bauxite residue was investigated. Rhizosphere pH was measured quantitatively, with a micro pH electrode, and qualitatively, with an agar/pH indicator solution. Manganese deficiency in Rhodes grass increased root-induced acidification of the rhizosphere in a soil profile in which N was supplied entirely as NO3-. Rhizosphere pH in the Mn-deficient plants was up to 1.22 pH units lower than that of the bulk soil, while it was only 0.90 to 0.62 pH units lower in plants supplied with adequate Mn. When soil N was supplied entirely as NO3-, rhizosphere acidification was more efficient in inoculated lucerne (1.75 pH unit decrease) than in non-inoculated lucerne (1.16 pH unit decrease). This difference in capacity to lower rhizosphere pH is attributable to the ability of the inoculated lucerne to fix atmospheric N2 rather than relying on soil N (NO3-) reserves, as the non-inoculated plants did. Rhizosphere acidification in both Rhodes grass and lucerne was greatest in the meristematic root zone and least in the maturation root zone.
Abstract:
Despite its environmental (and financial) importance, there is no agreement in the literature as to which extractant most accurately estimates the phytoavailability of trace metals in soils. A large dataset was taken from the literature, and the effectiveness of various extractants in predicting the phytoavailability of Cd, Zn, Ni, Cu, and Pb was examined across a range of soil types and contamination levels. The data suggest that, generally, the total soil trace metal content and the trace metal concentrations determined by complexing agents (such as the widely used DTPA and EDTA extractants) or acid extractants (such as 0.1 M HCl and the Mehlich 1 extractant) are only poorly correlated with phytoavailability to plants. Whilst there is no consensus, it would appear that neutral salt extractants (such as 0.01 M CaCl2 and 0.1 M NaNO3) provide the most useful indication of metal phytoavailability across a range of metals of interest, although further research is required.
Abstract:
The magnesium (Mg) status of 52 highly weathered, predominantly acidic, surface soils from tropical and subtropical north-eastern Australia was evaluated in a laboratory study. Soils were selected to represent a range of soil types and management histories. Exchangeable Mg concentrations were generally low (median value 0.37 cmol(+)/kg), with deficient levels (<0.3 cmol(+)/kg) being measured in 22 of the soils, highlighting the potential for Mg deficiency as a limitation to plant growth in the region. Furthermore, acid-extractable Mg concentrations, considered a reserve of potentially available Mg, were generally modest (mean and median values, 1.6 and 0.40 cmol(+)/kg, respectively). The total Mg content of the soils studied ranged from 123 to 7894 mg/kg, the majority present in the mineral pool (mean 71%), with smaller proportions in the acid-soluble (mean 11%) and exchangeable (mean 17%) pools, and a negligible amount associated with organic matter (mean 1%). A range of extractant solutions used to displace exchangeable Mg was compared, and found to yield similar results on soils with exchangeable Mg <4 cmol(+)/kg. However, at higher exchangeable Mg concentrations, dilute extractants (0.01 M CaCl2, 0.0125 M BaCl2) displaced less Mg than concentrated extractants (1 M NH4Cl, 1 M NH4OAc, 1 M KCl). The concentrated extractants displaced similar amounts of Mg, thus the choice of extractant is not critical, provided the displacing cation is sufficiently concentrated. Exchangeable Mg was not significantly correlated to organic carbon (P > 0.05), and only 45% of the variation in exchangeable Mg could be explained by a combination of pH(w) and clay content.
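As a unit check on the pool figures quoted above, exchangeable Mg reported on a charge basis (cmol(+)/kg) can be converted to mg/kg and compared with total Mg; the conversion below uses standard atomic constants, and the example value is simply the median reported in the abstract.

MG_MOLAR_MASS = 24.31   # g/mol
MG_CHARGE = 2

def cmolc_to_mg_per_kg(cmolc_per_kg):
    """Convert a charge-based Mg concentration (cmol(+)/kg) to mg Mg per kg soil."""
    mol_charge = cmolc_per_kg / 100.0                  # cmol of charge -> mol of charge
    return mol_charge / MG_CHARGE * MG_MOLAR_MASS * 1000.0

exchangeable_cmolc = 0.37                              # median value reported above
print(f"{exchangeable_cmolc} cmol(+)/kg exchangeable Mg = "
      f"{cmolc_to_mg_per_kg(exchangeable_cmolc):.0f} mg Mg/kg soil")

Expressing each pool in mg/kg in this way is what allows the exchangeable, acid-soluble, and mineral fractions to be reported as percentages of the total Mg content.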
Abstract:
A glasshouse trial, in which maize (Zea mays L. cv. Pioneer 3270) was grown in 35 north-eastern Australian soils of low magnesium (Mg) status, was undertaken to study the response to applied Mg. Of the soils studied, 20 were strongly acidic (pH(1:5 soil:water) <5.4), and in these soils the response to Mg was studied in both the presence and absence of lime. Magnesium application significantly (P < 0.05) increased dry matter yield in 10 soils, all of which were strongly acidic. However, significant Mg responses were recorded in 6 soils in the presence of lime, indicating that, in many situations, liming strategies may need to include consideration of Mg nutrition. Critical soil test values for 90% relative yield were 0.21 cmol(+)/kg of exchangeable Mg or 7% Mg saturation, whilst the critical (90% yield) plant tissue Mg concentration (whole shoots) was 0.15%.
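A small illustration of applying the critical values reported above; the soil data are hypothetical, and the ECEC-based definition of Mg saturation is an assumption for the example rather than a detail taken from the abstract.

CRIT_EXCH_MG = 0.21   # cmol(+)/kg, critical exchangeable Mg for 90% relative yield
CRIT_MG_SAT = 7.0     # %, critical Mg saturation for 90% relative yield

def likely_mg_responsive(exch_mg_cmolc, ecec_cmolc):
    """Flag a soil as likely to respond to applied Mg if either criterion is unmet."""
    mg_saturation = 100.0 * exch_mg_cmolc / ecec_cmolc
    return exch_mg_cmolc < CRIT_EXCH_MG or mg_saturation < CRIT_MG_SAT

# Hypothetical soils: a strongly acidic soil with low exchangeable Mg on a small
# ECEC, and a better-supplied soil above both critical values.
print(likely_mg_responsive(exch_mg_cmolc=0.15, ecec_cmolc=3.0))   # True  -> Mg response likely
print(likely_mg_responsive(exch_mg_cmolc=0.45, ecec_cmolc=5.0))   # False -> Mg adequate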