969 results for Environmental toxicity
Abstract:
Pastoralists from 37 beef cattle and sheep properties in western Queensland developed and implemented an environmental management system (EMS) over 18 months. The EMS they implemented was customised for the pastoral industry as part of a national EMS pilot project, and staff from this project encouraged and assisted pastoralists during the trial. The 31 pastoralists surveyed at the end of the pilot project identified few benefits of EMS implementation, and these were largely associated with environmental management and sustainability. In terms of the reasons for uptake of an EMS, these pastoralists identified drivers similar to those reported in other primary industry sectors. These included improving property and environmental management, financial incentives, a range of market benefits, assistance with red-tape issues, access to other training opportunities, and assistance and support with the development of their EMS. However, these drivers are weak and are not motivating pastoralists to adopt an EMS. In contrast, barriers to adoption, such as the time involved in developing and implementing an EMS, are tangible and immediate. Given the lack of effective drivers, and that pastoralists are under considerable pressure from ongoing rural adjustment processes, it is not surprising that an EMS is a low priority. It is concluded that widespread uptake and ongoing use of an EMS in the pastoral industry will not occur unless pastoralists are required to adopt one, or are rewarded for doing so, by markets, governments, financiers and regional natural resource management bodies.
Abstract:
The research is related to the Finnish Jabal Harun Project (FJHP), which is part of the research unit directed by Professor Jaakko Frösén. The project consists of two interrelated parts: the excavation of a Byzantine monastery/pilgrimage centre on Jabal Harun, and a multiperiod archaeological survey of the surrounding landscape. It is generally held that the Near Eastern landscape has been modified by millennia of human habitation and activity. Past climatic changes and human activities could also be expected to have significantly changed the landscape of the Jabal Harun area. Therefore it was considered that a study of erosion in the Jabal Harun area could shed light on the environmental and human history of the area. It was hoped that it would be possible to connect the results of the sedimentological studies either to wider climatic changes in the Near East, or to archaeologically observable periods of human activity and land use. As evidence of some archaeological periods is completely missing from the Jabal Harun area, it was also of interest whether catastrophic erosion or unfavourable environmental change, caused either by natural forces or by human agency, could explain the gaps in the archaeological record. Changes in climate and/or land use were expected to be reflected in the sedimentary record. The field research, carried out as part of the FJHP survey fieldwork, included the mapping of wadi terraces and the cleaning of sediment profiles, which were recorded and sampled for laboratory analyses of facies and lithology. OSL (optically stimulated luminescence) dating samples were also collected to obtain a chronology for the sedimentation and erosion phases. The results were compared to the record of the Near Eastern palaeoclimate, and to data from geoarchaeological studies in central and southern Jordan. The picture of the environmental development was then compared to the human history in the area, based on archaeological evidence from the FJHP survey and the published archaeological research in the Petra region, and the question of the relationship between human activity and environmental change was critically discussed. Using the palaeoclimatic data and the results from geoarchaeological studies it was possible to outline the environmental development in the Jabal Harun area from the Pleistocene to the present. It appears that there was a phase of accumulation of sediment before the Middle Palaeolithic period, possibly related to tectonic movement. This phase was later followed by erosion, tentatively suggested to have taken place during the Upper Palaeolithic. A period of wadi aggradation probably occurred during the Late Glacial and continued until the end of the Pleistocene, followed by significant channel degradation, attributed to increased rainfall during the Early Holocene. It seems that during the later Holocene channel incision has been dominant in the Jabal Harun area, although there have also been small-scale channel aggradation phases, two of which were OSL-dated to around 4000-3000 BP and 2400-2000 BP. As there is no evidence of tectonic movements in the Jabal Harun area after the early Pleistocene, it is suggested that climate change and human activity have been the major causes of environmental change in the area. At first glance it seems that many of the changes in settlement and land use in the Jabal Harun area can be explained by climatic and environmental conditions.
However, the responses of human societies to environmental change depend on many factors. Therefore an evaluation of the significance of environmental, cultural, socio-economic and political factors is needed to decide whether certain phenomena are environmentally induced. Comparison with the wider Petra region is also needed to judge whether the phenomena are characteristic of the Jabal Harun area only, or whether they can be connected to social, political and economic development over a wider area.
Abstract:
BACKGROUND: Field studies of diuron and its metabolites 3-(3,4-dichlorophenyl)-1-methylurea (DCPMU), 3,4-dichlorophenylurea (DCPU) and 3,4-dichloroaniline (DCA) were conducted in a farm soil and in stream sediments in coastal Queensland, Australia. RESULTS: During a 38 week period after a 1.6 kg ha^-1 diuron application, 70-100% of detected compounds were within 0-15 cm of the farm soil, and 3-10% reached the 30-45 cm depth. First-order degradation half-lives (t1/2) averaged 49 ± 0.9 days for the 0-15, 0-30 and 0-45 cm soil depths. Farm runoff was collected in the first 13-50 min of episodes lasting 55-90 min. Average concentrations of diuron, DCPU and DCPMU in runoff were 93, 30 and 83-825 µg L^-1 respectively. Their total loading in all runoff was >0.6% of applied diuron. Diuron and DCPMU concentrations in stream sediments were between 3-22 and 4-31 µg kg^-1 soil respectively. The DCPMU/diuron sediment ratio was >1. CONCLUSION: Retention of diuron and its metabolites in farm topsoil indicated their negligible potential for groundwater contamination. Minimal amounts of diuron and DCPMU escaped in farm runoff; nevertheless, this may entail a significant loading into the wider environment at annual rates of application. The concentrations and ratio of diuron and DCPMU in stream sediments indicated that they had prolonged residence times and potential for accumulation in sediments. The higher ecotoxicity of DCPMU compared with diuron, and the combined presence of both compounds in stream sediments, suggest that together they would have a greater impact on sensitive aquatic species than as currently apportioned by assessments based upon diuron alone.
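The degradation described above is first-order, so the reported half-life translates directly into a rate constant and a residual fraction at any time. The following short sketch (an illustrative Python calculation, not code from the study; the 49-day half-life and the 38-week monitoring window are taken from the abstract) shows the arithmetic.

```python
import math

def first_order_fraction_remaining(t_days: float, half_life_days: float) -> float:
    """Fraction of the initial amount remaining after t_days,
    assuming simple first-order (exponential) decay."""
    k = math.log(2) / half_life_days  # first-order rate constant (per day)
    return math.exp(-k * t_days)

# Values taken from the abstract: t1/2 of ~49 days, 38-week monitoring period.
half_life = 49.0
monitoring_days = 38 * 7

remaining = first_order_fraction_remaining(monitoring_days, half_life)
print(f"Fraction of applied diuron remaining after {monitoring_days} days: {remaining:.1%}")
# Roughly 2% of the applied diuron would remain after 38 weeks under ideal
# first-order decay at this half-life.
```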
Abstract:
Synthetic backcrossed-derived bread wheats (SBWs) from CIMMYT were grown in the north-west of Mexico (CIANO) and sites across Australia during 3 seasons. A different set of lines was evaluated each season, as new materials became available from the CIMMYT crop enhancement program. Previously, we have evaluated both the performance of genotypes across environments and the genotype x environment interaction (G x E). The objective of this study was to interpret the G x E for yield in terms of crop attributes measured at individual sites and to identify the potential environmental drivers of this interaction. Groups of SBWs with consistent yield performance were identified, often comprising closely related lines. However, contrasting performance was also relatively common among sister lines or between a recurrent parent and its SBWs. Early flowering was a common feature among lines with broad adaptation and/or high yield in the northern Australian wheatbelt, while yields in the southern region did not show any association with the maturity type. Lines with high yields in the southern and northern regions had cooler canopies during flowering and early grain filling. Among the SBWs with Australian genetic backgrounds, lines best adapted to CIANO were tall (>100 cm), with a slightly higher ground cover. These lines also displayed a higher concentration of water-soluble carbohydrates in the stem at flowering, which was negatively correlated with stem number per unit area when evaluated in southern Australia (Horsham). Possible reasons for these patterns are discussed. Selection for yield at CIANO did not specifically identify the lines best adapted to northern Australia, although they were not the most poorly adapted either. In addition, groups of lines with specific adaptation to the south would not have been selected by choosing the highest yielding lines at CIANO. These findings suggest that selection at CIMMYT for Australian environments may be improved by either trait based selection or yield data combined with trait information. Flowering date, canopy temperature around flowering, tiller density, and water-soluble carbohydrate concentration in the stem at flowering seem likely candidates.
Abstract:
We examine the microchemistry of otoliths of cohorts from a fished population of the large catadromous fish barramundi (Lates calcarifer) from the estuary of a large tropical river. Barramundi from the estuary of the large, heavily regulated Fitzroy River, north-eastern Australia, were analysed by making transects of 87Sr/86Sr isotope and trace metal/Ca ratios from the core to the outer edge. Firstly, we examined the Sr/Ca, Ba/Ca, Mg/Ca and Mn/Ca ratios and 87Sr/86Sr isotope ratios in otoliths of barramundi tagged in either freshwater or estuarine habitats that were caught by the commercial fishery in the estuary. We used 87Sr/86Sr isotope ratios to identify periods of freshwater residency and to assess whether trace metal/Ca ratios varied between habitats. Only Sr/Ca consistently varied between known periods of estuarine or freshwater residency. The relationships between trace metal/Ca ratios and river flow, salinity and temperature were examined in fish tagged and recaptured in the estuary. We found weak and inconsistent relationships between these variables in the majority of fish. These results suggest that both individual movement history within the estuary and the scale of environmental monitoring were reducing our ability to detect any patterns. Finally, we examined fish in the estuary from two dominant age cohorts (4 and 7 yr old) before and after a large flood in 2003 to ascertain whether the flood had enabled fish from freshwater habitats to migrate to the estuary. There was no difference in the proportion of fish in the estuary that had accessed freshwater after the flood. Instead, we found that larger individuals within each age cohort were more likely to have spent a period in freshwater. This highlights the need to maintain freshwater flows in rivers. About half the fish examined had accessed freshwater habitats before capture. Of these, all had spent at least their first two months in marine-salinity waters before entering freshwater, and some did not enter freshwater until four years of age. This contrasts with the results of several previous studies in other parts of the species' range, which found that access to freshwater swamps by larval barramundi was important for enhanced population productivity and recruitment.
Abstract:
BACKGROUND: The psocid Liposcelis bostrychophila Badonnel, a widespread and significant pest of stored commodities, has developed strong resistance to phosphine, the major grain disinfestant. The aim was to develop effective fumigation protocols to control this resistant pest. RESULTS: Time to population extinction of all life stages (TPE), in days, was evaluated at a series of phosphine concentrations and temperatures at two relative humidities. Regression analysis showed that temperature, concentration and relative humidity all contributed significantly to describing TPE (P < 0.001, R2 = 0.95), with temperature being the dominant variable, accounting for 74.4% of the variation. Irrespective of phosphine concentration, TPE was longer at lower temperatures and high humidity (70% RH) and shorter at higher temperatures and low humidity (55% RH). At any concentration of phosphine, a combination of higher temperature and lower humidity provides the shortest fumigation period to control resistant L. bostrychophila. For example, 19 and 11 days of fumigation are required at 15 °C and 70% RH at 0.1 and 1.0 mg L^-1 of phosphine respectively, whereas only 4 and 2 days are required at 35 °C and 55% RH for the same respective concentrations. CONCLUSIONS: The developed fumigation protocols will provide industry with flexibility in the application of phosphine.
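A regression of the kind described above can be sketched in a few lines. The four (temperature, concentration, TPE) combinations below are the examples quoted in the abstract; the log-linear model form is an assumption for illustration, and because relative humidity co-varies with temperature in these examples it is folded into the temperature term rather than estimated separately.

```python
import numpy as np

# Example (temperature in degrees C, phosphine in mg/L, days to population
# extinction) taken directly from the abstract. RH co-varies with temperature
# in these examples (70% at 15 C, 55% at 35 C), so it cannot be separated here.
observations = [
    (15.0, 0.1, 19.0),
    (15.0, 1.0, 11.0),
    (35.0, 0.1, 4.0),
    (35.0, 1.0, 2.0),
]

# Assumed (not from the paper) log-linear form:
#   ln(TPE) = b0 + b1 * temperature + b2 * ln(concentration)
X = np.array([[1.0, t, np.log(c)] for t, c, _ in observations])
y = np.array([np.log(d) for _, _, d in observations])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predicted_tpe(temp_c: float, conc_mg_per_l: float) -> float:
    """Predicted days to population extinction under the assumed model."""
    return float(np.exp(coef @ np.array([1.0, temp_c, np.log(conc_mg_per_l)])))

for temp, conc, observed in observations:
    print(f"{temp:>4} C, {conc} mg/L: observed {observed} d, "
          f"fitted {predicted_tpe(temp, conc):.1f} d")
```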
Abstract:
This special issue of Continental Shelf Research contains 20 papers giving research results produced as part of Australia's Torres Strait Co-operative Research Centre (CRC) Program, which was funded over a three-year period during 2003-2006. Marine biophysical, fisheries, socioeconomic-cultural and extension research in the Torres Strait region of northeastern Australia was carried out to meet three aims: 1) support the sustainable development of marine resources and minimize impacts of resource use in Torres Strait; 2) enhance the conservation of the marine environment and the social, cultural and economic well being of all stakeholders, particularly the Torres Strait peoples; and 3) contribute to effective policy formulation and management decision making. Subjects covered, including commercial and traditional fisheries management, impacts of anthropogenic sediment inputs on seagrass meadows and communication of science results to local communities, have broad applications to other similar environments.
Abstract:
Purpose – The purpose of this study is to explore senior managers' perceptions of, and motivations for, corporate social and environmental responsibility (CSER) reporting in the context of a developing country, Bangladesh. Design/methodology/approach – In-depth semi-structured interviews were conducted with 25 senior managers of companies listed on the Dhaka Stock Exchange. Publicly available annual reports of these companies were also analysed. Findings – The results indicate that senior managers perceive CSER reporting as a social obligation. The study finds that the managers focus mostly on child labour, human resources/rights, responsible products/services, health education, sports and community engagement activities as part of these social obligations. Interviewees identify the lack of a regulatory framework, along with socio-cultural and religious factors, as contributing to the low level of disclosures. These findings suggest that CSER reporting is not merely stakeholder-driven; rather, country-specific social and environmental issues play an important role in CSER reporting practices. Research limitations/implications – This paper contributes to engagement-based studies by focussing on CSER reporting practices in developing countries and is useful for academics, practitioners and policymakers in understanding the reasons behind CSER reporting in developing countries. Originality/value – This paper addresses a gap in the empirical literature on CSER reporting in a developing country such as Bangladesh, by examining managers' motivations for CSER reporting in that context. Managerial perceptions of CSER issues remain largely unexplored in developing countries.
Abstract:
Depression is a complex psychiatric disorder influenced by several genes, environmental factors, and their interplay. The serotonin receptor 2A (HTR2A) and tryptophan hydroxylase 1 (TPH1) genes have been implicated in vulnerability to depression and other psychiatric disorders, but the results have been inconsistent. The present study examined whether these two genes moderated the influence of different depressogenic environmental factors on subthreshold depressive symptoms (assessed on a modified version of Beck's Depression Inventory, BDI) and depression-related temperament, i.e., harm avoidance (assessed on the Temperament and Character Inventory, TCI). The environmental factors included measures of childhood and adolescence exposure, i.e., maternal nurturance and parental socioeconomic status, and adulthood social circumstances, i.e., perceived social support and urban/rural residence. The participants were two randomly selected subsamples (n = 1246, n = 341) from the longitudinal population-based Cardiovascular Risk in Young Finns study (n = 3596). Childhood environmental factors were assessed when the participants were 3 to 18 years of age, and again three years after baseline. Adulthood environmental factors and outcome measures were assessed 17 and 21 years later, when the participants were 21 to 39 years of age. The T102C polymorphism of the HTR2A gene moderated the association between childhood maternal nurturance and adulthood depressive symptoms, such that exposure to high maternal nurturance predicted low depressive symptoms among individuals carrying the T/T or T/C genotypes, but not among those carrying the C/C genotype. Likewise, high parental SES predicted low adulthood harm avoidance in individuals carrying the T/T or T/C genotype, but not in C/C-genotype carriers. Individuals carrying the T/T or T/C genotype were also sensitive to urban/rural residence, such that they had lower depressive symptoms in urban than in rural areas, whereas those carrying the C/C genotype were not sensitive to the urban/rural residence difference. HTR2A did not moderate the influence of social support. The A779C/A218C haplotype of the TPH1 gene was not involved in the association between childhood environment and adulthood outcomes. However, individuals carrying A alleles of the TPH1 haplotype were more vulnerable to a lack of adulthood social support, in terms of high depressive symptoms, than their counterparts carrying no A alleles. Furthermore, individuals living in remote rural areas and carrying the A/A haplotype had higher depressive symptoms than those carrying other genotypes of TPH1. The findings suggest that the HTR2A and TPH1 genes may be involved in the development of depression by influencing an individual's sensitivity to depressogenic environmental influences.
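Gene-by-environment moderation of the kind reported above is conventionally tested with an interaction term in a regression model. The sketch below is a hypothetical illustration only: the variable names are invented, the data are synthetic placeholders generated just to make the example runnable, and the model is a generic OLS moderation test rather than the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic placeholder data purely to make the sketch runnable; the real
# study used the Cardiovascular Risk in Young Finns cohort.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    # hypothetical coding: carriers of the T allele vs C/C homozygotes
    "t_carrier": rng.integers(0, 2, n),
    "maternal_nurturance": rng.normal(0.0, 1.0, n),
})
df["bdi_score"] = rng.normal(10.0, 3.0, n)  # placeholder outcome

# A gene-by-environment moderation test of the kind described in the abstract:
# the interaction term asks whether the nurturance effect differs by genotype.
model = smf.ols("bdi_score ~ t_carrier * maternal_nurturance", data=df).fit()
print(model.summary().tables[1])
```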
Abstract:
The ability to predict phenology and canopy development is critical in crop models used for simulating the likely consequences of alternative crop management and cultivar choice strategies. Here we quantify and contrast the temperature and photoperiod responses for phenology and canopy development of a diverse range of elite Indian and Australian sorghum genotypes (hybrid and landrace). Detailed field experiments were undertaken in Australia and India using a range of genotypes, sowing dates, and photoperiod extension treatments. Measurements of the timing of developmental stages and leaf appearance were taken. The generality of photo-thermal approaches to modelling phenological and canopy development was tested. Environmental and genotypic effects on the rate of progression from emergence to floral initiation (E-FI) were explained well using a multiplicative model, which combined the intrinsic development rate (Ropt) with responses to temperature and photoperiod. Differences in Ropt and the extent of the photoperiod response explained most genotypic effects. Average leaf initiation rate (LIR), leaf appearance rate and the duration of the phase from anthesis to physiological maturity differed among genotypes. The association of total leaf number (TLN) with photoperiod found for all genotypes could not be fully explained by effects on development and LIRs. While a putative effect of photoperiod on LIR would explain the observations, other possible confounding factors, such as the air-soil temperature differential and the nature of the model structure, were considered and discussed. This study found a generally robust predictive capacity of photo-thermal development models across diverse ranges of both genotypes and environments. Hence, they remain the most appropriate models for simulation analysis of genotype-by-management scenarios in environments varying broadly in temperature and photoperiod.
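The multiplicative model structure described above can be written down compactly. In the sketch below, the broken-linear temperature response, the photoperiod response for a short-day crop, and all cardinal temperatures and coefficients are illustrative assumptions, not the specific functions or parameter values used in the study.

```python
def temperature_factor(t_mean, t_base=11.0, t_opt=30.0, t_max=42.0):
    """Assumed broken-linear temperature response, scaled to 1 at the optimum.
    Cardinal temperatures are illustrative placeholders."""
    if t_mean <= t_base or t_mean >= t_max:
        return 0.0
    if t_mean <= t_opt:
        return (t_mean - t_base) / (t_opt - t_base)
    return (t_max - t_mean) / (t_max - t_opt)

def photoperiod_factor(photoperiod_h, p_base=13.5, sensitivity=0.2):
    """Assumed response for a short-day crop: development slows linearly as
    photoperiod exceeds a base photoperiod. Coefficients are placeholders."""
    delay = max(0.0, photoperiod_h - p_base) * sensitivity
    return 1.0 / (1.0 + delay)

def development_rate(r_opt, t_mean, photoperiod_h):
    """Multiplicative structure: intrinsic rate Ropt scaled by temperature and
    photoperiod factors, following the model form described in the abstract."""
    return r_opt * temperature_factor(t_mean) * photoperiod_factor(photoperiod_h)

# Daily rates are accumulated until their sum reaches 1.0, marking the end of
# the emergence-to-floral-initiation (E-FI) phase.
r_opt = 1.0 / 18.0  # placeholder: 18 days E-FI under optimal conditions
rate = development_rate(r_opt, t_mean=27.0, photoperiod_h=14.5)
print(f"Daily development rate: {rate:.4f} "
      f"(about {1.0 / rate:.0f} days E-FI if these conditions persist)")
```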
Abstract:
Environmental heat can reduce conception rates (the proportion of services that result in pregnancy) in lactating dairy cows. The study objectives were to identify periods of exposure relative to the service date in which environmental heat is most closely associated with conception rates, and to assess whether the total time cows are exposed to high environmental heat within each 24-h period is more closely associated with conception rates than is the maximum environmental heat for each 24-h period. A retrospective observational study was conducted in 25 predominantly Holstein-Friesian commercial dairy herds located in Australia. Associations between weather and conception rates were assessed using 16,878 services performed over a 21-mo period. Services were classified as successful based on rectal palpation. Two measures of heat load were defined for each 24-h period: the maximum temperature-humidity index (THI) for the period, and the number of hours in the 24-h period when the THI was >72. Conception rates were reduced when cows were exposed to a high heat load from the day of service to 6 d after service, and in wk -1. Heat loads in wk -3 to -5 were also associated with reduced conception rates. Thus, management interventions to ameliorate the effects of heat load on conception rates should be implemented at least 5 wk before anticipated service and should continue until at least 1 wk after service. High autocorrelations existed between successive daily values in both measures, and associations between day of heat load relative to service day and conception rates differed substantially when ridge regression was used to account for this autocorrelation. This indicates that when assessing the effects of heat load on conception rates, the autocorrelation in heat load between days should be accounted for in analyses. The results suggest that either weekly averages or totals summarizing the daily heat load are adequate to describe heat load when assessing effects on conception rates in lactating dairy cows.
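The two daily heat-load measures defined above (the maximum THI in a 24-h period and the number of hours with THI > 72) can be computed from hourly weather records. The sketch below uses one widely used dairy THI formulation; the abstract does not state which THI equation was applied, so the formula should be read as an assumption, and the example weather values are invented.

```python
from typing import Sequence

def thi(temp_c: float, rh_percent: float) -> float:
    """Temperature-humidity index. One widely used dairy formulation;
    the abstract does not specify which THI equation was applied."""
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26)

def daily_heat_load(hourly_temp_c: Sequence[float],
                    hourly_rh_percent: Sequence[float],
                    threshold: float = 72.0) -> tuple[float, int]:
    """Return the two daily measures used in the study: the maximum THI for the
    24-h period and the number of hours with THI above the threshold."""
    hourly_thi = [thi(t, rh) for t, rh in zip(hourly_temp_c, hourly_rh_percent)]
    max_thi = max(hourly_thi)
    hours_above = sum(1 for v in hourly_thi if v > threshold)
    return max_thi, hours_above

# Invented hourly weather for a single hot day, purely for illustration.
temps = [22, 21, 20, 20, 21, 23, 26, 28, 30, 32, 33, 34,
         35, 35, 34, 33, 31, 29, 27, 26, 25, 24, 23, 22]
humidities = [80, 82, 85, 86, 84, 78, 70, 62, 55, 50, 48, 45,
              44, 43, 45, 48, 52, 58, 63, 68, 72, 75, 78, 80]
print(daily_heat_load(temps, humidities))
```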
Abstract:
In school environments, children are constantly exposed to mixtures of airborne substances derived from a variety of sources, both in the classroom and in the school surroundings. It is important to evaluate the hazardous properties of these mixtures in order to conduct risk assessments of their impact on children's health. Within this context, through the application of a Maximum Cumulative Ratio approach, this study aimed to explore whether health risks due to indoor air mixtures are driven by a single substance or are due to cumulative exposure to various substances. This methodology requires knowledge of the concentration of substances in the air mixture, together with a health-related weighting factor (i.e. reference concentration or lowest concentration of interest), which is necessary to calculate the Hazard Index. Maximum cumulative ratio and Hazard Index values were then used to categorise the mixtures into four groups, based on their hazard potential and, therefore, the appropriate risk management strategies. Air samples were collected from classrooms in 25 primary schools in Brisbane, Australia. Analysis was conducted based on the measured concentrations of these substances in about 300 air samples. The results showed that in 92% of the schools, indoor air mixtures belonged to the 'low concern' group and therefore did not require any further assessment. In the remaining schools, toxicity was mainly governed by a single substance, with a very small number of schools having a multiple-substance mix that required a combined risk assessment. The proposed approach enables the identification of such schools and thus aids in the efficient health risk management of pollution emissions and air quality in the school environment.
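The Maximum Cumulative Ratio calculation described above reduces to a few lines: each substance receives a hazard quotient (measured concentration divided by its reference concentration), the Hazard Index is the sum of these quotients, and the MCR is the Hazard Index divided by the largest single quotient. The sketch below is illustrative only; the substance names, concentrations and category thresholds are hypothetical, and the simple three-way categorisation mirrors the abstract's reasoning rather than reproducing the study's four-group scheme.

```python
def mcr_assessment(concentrations: dict[str, float],
                   reference_concentrations: dict[str, float]) -> dict:
    """Hazard quotients, Hazard Index and Maximum Cumulative Ratio for one mixture."""
    hq = {s: concentrations[s] / reference_concentrations[s] for s in concentrations}
    hazard_index = sum(hq.values())
    mcr = hazard_index / max(hq.values())
    # Illustrative thresholds only, echoing the logic described in the abstract.
    if hazard_index <= 1.0:
        category = "low concern - no further assessment needed"
    elif mcr < 2.0:
        category = "toxicity mainly driven by a single substance"
    else:
        category = "cumulative exposure - combined risk assessment needed"
    return {"HQ": hq, "HI": hazard_index, "MCR": mcr, "category": category}

# Hypothetical classroom mixture (substances and values are illustrative only).
measured = {"formaldehyde": 12.0, "toluene": 35.0, "benzene": 1.0}     # ug/m3
reference = {"formaldehyde": 100.0, "toluene": 260.0, "benzene": 5.0}  # ug/m3

result = mcr_assessment(measured, reference)
print(result["HI"], result["MCR"], result["category"])
```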
Abstract:
This paper describes adoption rates of environmental assurance within meat and wool supply chains, and discusses this in terms of market interest and demand for certified 'environmentally friendly' products, based on phone surveys and personal interviews with pastoral producers, meat and wool processors, wholesalers and retailers, and domestic consumers. Members of meat and wool supply chains, particularly pastoral producers, are both aware of and interested in implementing various forms of environmental assurance, but significant costs combined with few private benefits have resulted in low adoption rates. The main reason for the lack of benefits is that the end user (the consumer) does not value environmental assurance and is not willing to pay for it. For this reason, global food and fibre supply chains, which compete to supply consumers with safe and quality food at the lowest price, resist public pressure to implement environmental assurance. This market failure is further exacerbated by highly variable environmental and social production standards required of primary producers in different countries, and the disparate levels of government support provided to them. Given that it is the Australian general public and not markets that demand environmental benefits from agriculture, the Australian government has a mandate to use public funds to counter this market failure. A national farm environmental policy should utilise a range of financial incentives to reward farmers for delivering general public good environmental outcomes, with these specified and verified through a national environmental assurance scheme.
Abstract:
This paper outlines the expectations of a wide range of stakeholders for environmental assurance in the pastoral industries and agriculture generally. Stakeholders consulted were domestic consumers, rangeland graziers, members of environmental groups, companies within meat and wool supply chains, and agricultural industry, environmental and consumer groups. Most stakeholders were in favour of the application of environmental assurance to agriculture, although supply chains and consumers had less enthusiasm for this than environmental and consumer groups. General public good benefits were more important to environmental and consumer groups, while private benefits were more important to consumers and supply chains. The 'ideal' form of environmental assurance appears to be a management system that provides for continuous improvement in environmental, quality and food safety outcomes, combined with elements of ISO 14024 eco-labelling such as life-cycle assessment, environmental performance criteria, third-party certification, labelling and multi-stakeholder involvement. However, market failure prevents this from being implemented and will continue to do so for the foreseeable future. In the short term, members of supply chains (the people that must implement and fund environmental assurance) want this to be kept simple and low cost, to be built into their existing industry standards and to add value to their businesses. As a starting point, several agricultural industry organisations favour the use of a basic management system, combining continuous improvement, risk assessment and industry best management practice programs, which can be built on over time to meet regulator, market and community expectations.
Abstract:
Species distribution modelling (SDM) typically analyses species' presence together with some form of absence information. Ideally, absences comprise observations or are inferred from comprehensive sampling. When such information is not available, pseudo-absences are often generated from the background locations within the study region of interest containing the presences, or else absence is implied through the comparison of presences to the whole study region, e.g. as is the case in Maximum Entropy (MaxEnt) or Poisson point process modelling. However, the choice of which absence information to include can be both challenging and highly influential on SDM predictions (e.g. Oksanen and Minchin, 2002). In practice, the use of pseudo- or implied absences often leads to an imbalance where absences far outnumber presences. This leaves the analysis highly susceptible to 'naughty noughts': absences that occur beyond the envelope of the species, which can exert a strong influence on the model and its predictions (Austin and Meyers, 1996). Also known as 'excess zeros', naughty noughts can be estimated via an overall proportion in simple hurdle or mixture models (Martin et al., 2005). However, absences, especially those that occur beyond the species envelope, can often be more diverse than presences. Here we consider an extension to excess zero models. The two-staged approach first exploits the compartmentalisation provided by classification trees (CTs) (as in O'Leary, 2008) to identify multiple sources of naughty noughts and simultaneously delineate several species envelopes. SDMs can then be fitted separately within each envelope, and for this stage we examine both CTs (as in Falk et al., 2014) and the popular MaxEnt (Elith et al., 2006). We introduce a wider range of model performance measures to improve the treatment of naughty noughts in SDM. We retain an overall measure of model performance, the area under the Receiver Operating Characteristic (ROC) curve (AUC), but focus on its constituent measures of false negative rate (FNR) and false positive rate (FPR), and how these relate to the threshold in the predicted probability of presence that delimits predicted presence from absence. We also propose error rates more relevant to users of predictions: the false omission rate (FOR), the chance that a predicted absence corresponds to (and hence wastes) an observed presence, and the false discovery rate (FDR), reflecting those predicted (or potential) presences that correspond to absences. A high FDR may be desirable since it could help target future search efforts, whereas a zero or low FOR is desirable since it indicates that none of the (often valuable) presences have been ignored in the SDM. For illustration, we chose Bradypus variegatus, a species previously published as an exemplar species for MaxEnt by Phillips et al. (2006). We used CTs to increasingly refine the species envelope, starting with the whole study region (E0) and eliminating more and more potential naughty noughts (E1–E3). When combined with an SDM fitted within the species envelope, the best CT SDM had similar AUC and FPR to the best MaxEnt SDM, but otherwise performed better. The FNR and FOR were greatly reduced, suggesting that CTs handle absences better. Interestingly, MaxEnt predictions showed low discriminatory performance, with the most common predicted probability of presence being in the same range (0.00-0.20) for both true absences and presences.
In summary, this example shows that SDMs can be improved by introducing an initial hurdle to identify naughty noughts and partition the envelope before applying SDMs. This improvement was barely detectable via AUC and FPR, yet was clearly visible in FOR, FNR, and in the comparison of the distributions of predicted probability of presence for presences and absences.
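The error rates discussed above are simple functions of the confusion matrix obtained once a threshold is applied to the predicted probability of presence. The sketch below (illustrative Python, not code from the study) computes FPR, FNR, FOR and FDR following the definitions given in the abstract.

```python
import numpy as np

def sdm_error_rates(observed: np.ndarray, predicted_prob: np.ndarray,
                    threshold: float = 0.5) -> dict[str, float]:
    """Confusion-matrix error rates for presence/absence predictions.
    observed: 1 = presence, 0 = absence; predicted_prob: predicted probability of presence."""
    predicted = predicted_prob >= threshold
    tp = np.sum((observed == 1) & predicted)
    fp = np.sum((observed == 0) & predicted)
    fn = np.sum((observed == 1) & ~predicted)
    tn = np.sum((observed == 0) & ~predicted)
    return {
        "FPR": fp / (fp + tn),  # absences wrongly predicted as presences
        "FNR": fn / (fn + tp),  # presences wrongly predicted as absences
        "FOR": fn / (fn + tn),  # predicted absences that are actually presences
        "FDR": fp / (fp + tp),  # predicted presences that are actually absences
    }

# Toy example (values are illustrative only).
obs = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
prob = np.array([0.9, 0.7, 0.2, 0.1, 0.4, 0.6, 0.05, 0.3, 0.8, 0.55])
print(sdm_error_rates(obs, prob, threshold=0.5))
```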