84 results for Slash-fell-burn
Abstract:
Pregnant rats were given control (46 mg iron/kg, 61 mg zinc/kg), low-Zn (6.9 mg Zn/kg) or low-Zn plus Fe (168 mg Fe/kg) diets from day 1 of pregnancy. The animals were allowed to give birth and parturition times were recorded. Exactly 24 h after the end of parturition the pups were killed and analysed for water, fat, protein, Fe and Zn contents, and the mothers' haemoglobin (Hb) and packed cell volume (PCV) were measured. There were no differences in weight gain or food intake throughout pregnancy. Parturition times were similar (mean time 123 (SE 15) min) and there were no differences in the number of pups born. Protein, water and fat contents of the pups were similar, but the low-Zn Fe-supplemented group had higher pup Fe than the low-Zn unsupplemented group, and the control group had higher pup Zn than both low-Zn groups. The low-Zn groups had a greater incidence of haemorrhaged or deformed pups, or both, than the controls. In a second experiment, pregnant rats were given diets of adequate Zn level (40 mg/kg) but with varying Fe:Zn ratios (0.8, 1.7, 2.9, 3.7). Zn retention from the diet was measured using ⁶⁵Zn as an extrinsic label on days 3, 10 and 17 of pregnancy with a whole-body gamma-counter. A group of non-pregnant rats was also included as controls. The ⁶⁵Zn content of mothers and pups was measured 24–48 h after birth and at 14, 21 and 24 d of age. In all groups Zn retention was highest from the first meal, fell in the second meal and then rose in the third meal of the pregnant but not the non-pregnant rats. There were no differences between the groups given diets of varying Fe:Zn ratio. Approximately 25% of the ⁶⁵Zn was transferred from the mothers to the pups by the time they were 48 h old, and a further 17% during the first 14 d of lactation. The pup ⁶⁵Zn content did not increase significantly after the first 20 d of lactation, but the maternal ⁶⁵Zn level continued to fall gradually.
Abstract:
This paper examines the implications of policy fracture and arm's length governance within the decision-making processes currently shaping curriculum design within the English education system. In particular it argues that an unresolved ‘ideological fracture’ at government level has been passed down to school leaders, whose response to the dilemma is distorted by the target-driven agenda of arm's length agencies. Drawing upon the findings of a large-scale on-line survey of history teaching in English secondary schools, this paper illustrates the problems that occur when policy making is divorced from curriculum theory, and in particular from any consideration of the nature of knowledge. Drawing on the social realist theory of knowledge elaborated by Young (2008), we argue that the rapid spread of alternative curricular arrangements, implemented in the absence of an understanding of curriculum theory, undermines the value of disciplined thinking to the detriment of many young people, particularly those in areas of social and economic deprivation.
Abstract:
A near real-time flood detection algorithm giving a synoptic overview of the extent of flooding in both urban and rural areas, and capable of working during night-time and day-time even if cloud was present, could be a useful tool for operational flood relief management. The paper describes an automatic algorithm using high resolution Synthetic Aperture Radar (SAR) satellite data that builds on existing approaches, including the use of image segmentation techniques prior to object classification to cope with the very large number of pixels in these scenes. Flood detection in urban areas is guided by the flood extent derived in adjacent rural areas. The algorithm assumes that high resolution topographic height data are available for at least the urban areas of the scene, in order that a SAR simulator may be used to estimate areas of radar shadow and layover. The algorithm proved capable of detecting flooding in rural areas using TerraSAR-X with good accuracy, classifying 89% of flooded pixels correctly, with an associated false positive rate of 6%. Of the urban water pixels visible to TerraSAR-X, 75% were correctly detected, with a false positive rate of 24%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 57% and 18% respectively.
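The detection and false-positive figures quoted above are simple per-pixel ratios. As a hedged illustration (the binary flood masks below are invented for the example, not the paper's TerraSAR-X data), they can be computed like this:

```python
import numpy as np

# Hypothetical ground-truth and classified flood masks (1 = flooded, 0 = dry).
truth = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])
pred  = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])

# Proportion of truly flooded pixels classified as flooded (detection accuracy).
detection_rate = np.mean(pred[truth == 1])

# Proportion of dry pixels wrongly classified as flooded (false positive rate).
false_positive_rate = np.mean(pred[truth == 0])

print(detection_rate, false_positive_rate)  # 0.8 0.2
```

The same ratios computed over rural, urban-visible, and all-urban pixel subsets would reproduce the kind of stratified figures the abstract reports.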
Abstract:
Integrated Arable Farming Systems (IAFS), which involve a reduction in the use of off-farm inputs, are attracting considerable research interest in the UK. The objectives of these systems experiments are to compare their financial performance with that from conventional or current farming practices. To date, this comparison has taken little account of any environmental benefits (or disbenefits) of the two systems. The objective of this paper is to review the assessment methodologies available for the analysis of environmental impacts. To illustrate the results of this exercise, the methodology and environmental indicators chosen are then applied to data from one of the LINK - Integrated Farming Systems experimental sites. Data from the Pathhead site in Southern Scotland are used to evaluate the use of invertebrates and nitrate loss as environmental indicators within IAFS. The results suggest that between 1992 and 1995 the biomass of earthworms fell by 28 kg per hectare on the integrated rotation and rose by 31 kg per hectare on the conventional system. This led to environmental costs ranging between £2.24 and £13.44 per hectare for the integrated system and gains of between £2.48 and £14.88 for the conventional system. In terms of nitrate, the integrated system had an estimated loss of £72.21 per hectare in comparison to £149.40 per hectare on the conventional system. Conclusions are drawn about the advantages and disadvantages of this type of analytical framework. Keywords: Farming systems; IAFS; Environmental valuation; Economics; Earthworms; Nitrates; Soil fauna
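The earthworm figures above imply a unit valuation of roughly £0.08 to £0.48 per kg of biomass (28 kg × £0.08 = £2.24; 28 kg × £0.48 = £13.44). A minimal sketch of that back-calculation, assuming these inferred unit values (they are deduced from the abstract's figures, not stated in the paper):

```python
# Unit values inferred from the abstract's cost ranges (GBP per kg of
# earthworm biomass); a back-calculation, not values stated in the paper.
UNIT_VALUES = (0.08, 0.48)

def biomass_value_range(delta_kg_per_ha):
    """Environmental value range (GBP/ha) of a biomass change (kg/ha)."""
    return tuple(round(abs(delta_kg_per_ha) * v, 2) for v in UNIT_VALUES)

print(biomass_value_range(-28))  # integrated system loss  -> (2.24, 13.44)
print(biomass_value_range(31))   # conventional system gain -> (2.48, 14.88)
```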
Abstract:
Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and drivers of poaching. We present an analysis of trends in, and drivers of, an indicator of poaching across all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. The data analyzed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002 to 2009. Analysis of these observational data poses a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of patrols and the heterogeneity across sites. Adopting a Bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at country level were poor governance and low levels of human development, and at site level, forest cover and area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific evidence-based decision making in the CITES process.
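PIKE itself is a simple proportion per site-year; a minimal sketch (the counts are hypothetical, not MIKE data):

```python
def pike(illegally_killed, total_carcasses):
    """PIKE for one site-year: illegally killed carcasses / all carcasses found."""
    if total_carcasses == 0:
        raise ValueError("PIKE is undefined for site-years with no carcasses")
    return illegally_killed / total_carcasses

# Hypothetical site-year: patrols encountered 40 carcasses, 18 judged illegal.
print(pike(18, 40))  # 0.45
```

The Bayesian hierarchical model in the paper then pools these site-year proportions while accounting for the uneven, non-random patrol effort.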
Abstract:
This paper reports the findings from two large-scale national on-line surveys carried out in 2009 and 2010, which explored the state of history teaching in English secondary schools. Large variation in provision was identified within comprehensive schools in response to national policy decisions and initiatives. Using the data from the surveys and publicly available school-level data, this study examines situated factors, particularly the nature of the school intake, the number of pupils with special educational needs and the socio-economic status of the area surrounding the school, and the impact these have on the provision of history education. The findings show that there is a growing divide between those students who have access to the ‘powerful knowledge’ provided by subjects like history, and those who do not.
Abstract:
Predicting how insect crop pests will respond to global climate change is an important part of increasing crop production for future food security, and will increasingly rely on empirically based evidence. The effects of atmospheric composition, especially elevated carbon dioxide (eCO₂), on insect herbivores have been well studied, but this research has focussed almost exclusively on aboveground insects. However, responses of root-feeding insects to eCO₂ are unlikely to mirror these trends because of fundamental differences between aboveground and belowground habitats. Moreover, changes in secondary metabolites and defensive responses to insect attack under eCO₂ conditions are largely unexplored for root-herbivore interactions. This study investigated how eCO₂ (700 μmol mol⁻¹) affected a root-feeding herbivore via changes to plant growth and concentrations of carbon (C), nitrogen (N) and phenolics, using the root-feeding vine weevil, Otiorhynchus sulcatus, and the perennial crop Ribes nigrum. Weevil populations decreased by 33% and body mass decreased by 23% (from 7.2 to 5.4 mg) under eCO₂. Root biomass decreased by 16% under eCO₂, which was strongly correlated with weevil performance. Although root N concentrations fell by 8%, eCO₂ had no statistically significant effect on root C or N concentrations. Weevil feeding created a sink in the plants, resulting in 8–12% decreases in leaf C concentration following herbivory. There was an interactive effect of CO₂ and root herbivory on root phenolic concentrations, whereby weevils induced an increase at ambient CO₂, suggestive of a defensive response, but caused a decrease under eCO₂. Contrary to predictions, there was a positive relationship between root phenolics and weevil performance. We conclude that impaired root growth underpinned the negative effects of eCO₂ on vine weevils, and speculate that the plant's failure to mount a defensive response under eCO₂ may have intensified these negative effects.
Abstract:
This study compares two sets of measurements of the composition of bulk precipitation and throughfall at a site in southern England with a 20-year gap between them. During this time, SO2 emissions from the UK fell by 82%, NOx emissions by 35% and NH3 emissions by 7%. These reductions were partly reflected in bulk precipitation, with deposition reductions of 56% in SO4, 38% in NO3, 32% in NH4 and 73% in H+. In throughfall under Scots pine, the effects were more dramatic, with an 89% reduction in SO4 deposition and a 98% reduction in H+ deposition. The mean pH under these trees increased from 2.85 to 4.30. Nitrate and ammonium deposition in throughfall increased slightly, however. In the earlier period, the Scots pines were unable to neutralise the high flux of acidity associated with sulphur deposition, even though this was not a highly polluted part of the UK, and deciduous trees (oak and birch) were only able to neutralise it in summer when the leaves were present. In the later period, the sulphur flux had reduced to the point where the acidity could be neutralised by all species; the neutralisation mechanism is thus likely to be largely leaching of base cations and buffering substances from the foliage. The high fluxes are partly due to the fact that these are 60- to 80-year-old trees growing in an open forest structure. The increase in NO3 and NH4 in throughfall in spite of decreased deposition seems likely to be due to a decrease in foliar uptake, perhaps reflecting the increasing nitrogen saturation of the catchment soils. These changes may increase the rate of soil microbial activity as nitrogen increases and acidity declines, with consequent effects on the water quality of the catchment drainage stream.
Abstract:
In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based radar and lidar remote sensing observations of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict too few particles will be active as ice nuclei to account for ice particle concentrations at the observed, near cloud top, temperatures (−7.5 °C). The role of mineral dust particles, consistent with concentrations observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L⁻¹) could be produced by secondary ice particle production, provided the observed small amount of primary ice (about 0.01 L⁻¹) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop process (HM) is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was the process of collision and coalescence of rain drops, which once formed fell rapidly through the cloud, collecting ice particles which caused them to freeze and form instant large riming particles.
The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these model simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared with the observations and parcel modelling.
Abstract:
This paper explores the possibility of combining moderate vacuum frying with post-frying application of a high vacuum during the oil drainage stage, with the aim of reducing the oil content of potato chips. Potato slices were initially vacuum fried under two operating conditions (140 °C, 20 kPa and 162 °C, 50.67 kPa) until the moisture content reached 10 or 15 % (wet basis), prior to holding the samples in the head space under a high vacuum (1.33 kPa). This two-stage process was found to lower the amount of oil taken up by potato chips significantly, by as much as 48 % compared with drainage at the frying pressure. Reducing the pressure to 1.33 kPa lowered the water saturation temperature to about 11 °C, causing the product to continuously lose moisture during the course of drainage. The continuous release of water vapour prevented the occluded surface oil from penetrating into the product structure and released it from the surface of the product. When frying and drainage occurred at the same pressure, the temperature of the product fell below the water saturation temperature soon after it was lifted out of the oil, which resulted in the oil being sucked into the product. Thus, lowering the pressure after frying to a value well below the frying pressure is a promising method of lowering oil uptake by the product.
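The roughly 11 °C saturation temperature of water at 1.33 kPa can be checked with the Antoine equation; a sketch using standard tabulated Antoine constants for water in the 1–100 °C range with pressure in mmHg (these constants are an assumption from reference tables, not from the paper):

```python
from math import log10

# Antoine constants for water (valid roughly 1-100 degC, pressure in mmHg):
# log10(P) = A - B / (C + T)
A, B, C = 8.07131, 1730.63, 233.426

def saturation_temp_c(p_kpa):
    """Water saturation temperature (degC) at pressure p_kpa (inverted Antoine)."""
    p_mmhg = p_kpa * 760.0 / 101.325  # convert kPa to mmHg
    return B / (A - log10(p_mmhg)) - C

print(round(saturation_temp_c(1.33), 1))     # about 11 degC, as in the abstract
print(round(saturation_temp_c(101.325), 1))  # about 100 degC at atmospheric pressure
```

Because the chips leave the fryer well above 11 °C, they keep boiling off moisture during drainage at 1.33 kPa, which is the mechanism the abstract invokes for keeping surface oil out of the pore structure.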
Abstract:
Limnologists had an early preoccupation with lake classification. It gave a necessary structure to the many chemical and biological observations that were beginning to form the basis of one of the earliest truly environmental sciences. August Thienemann was the doyen of such classifiers, and his concept with Einar Naumann of oligotrophic and eutrophic lakes remains central to the world-view that limnologists still have. Classification fell into disrepute, however, as it became clear that there would always be lakes that deviated from the prescriptions that the classifiers made for them. Continua became the de rigueur concept and lakes were seen as varying along many chemical, biological and geographic axes. Modern limnologists are comfortable with this concept. That all lakes are different guarantees an indefinite future for limnological research. For those who manage lakes and the landscapes in which they are set, however, it is not very useful. There may be as many as 300 000 standing water bodies in England and Wales alone, and perhaps as many again in Scotland. More than 80 000 are sizable (> 1 ha). Some classification scheme to cope with these numbers is needed and, as human impacts on them increase, a system of assessing and monitoring change must be built into such a scheme. Although ways of classifying and monitoring running waters are well developed in the UK, the same is not true of standing waters. Sufficient understanding of what determines the nature and functioning of lakes exists to create a system which has intellectual credibility as well as practical usefulness. This paper outlines the thinking behind a system which will be workable on a north European basis and presents some early results.
Abstract:
In order to influence global policy effectively, conservation scientists need to be able to provide robust predictions of the impact of alternative policies on biodiversity and measure progress towards goals using reliable indicators. We present a framework for using biodiversity indicators predictively to inform policy choices at a global level. The approach is illustrated with two case studies in which we project forwards the impacts of feasible policies on trends in biodiversity and in relevant indicators. The policies are based on targets agreed at the Convention on Biological Diversity (CBD) meeting in Nagoya in October 2010. The first case study compares protected area policies for African mammals, assessed using the Red List Index; the second example uses the Living Planet Index to assess the impact of a complete halt, versus a reduction, in bottom trawling. In the protected areas example, we find that the indicator can aid in decision-making because it is able to differentiate between the impacts of the different policies. In the bottom trawling example, the indicator exhibits some counter-intuitive behaviour, due to over-representation of some taxonomic and functional groups in the indicator, and contrasting impacts of the policies on different groups caused by trophic interactions. Our results support the need for further research on how to use predictive models and indicators to credibly track trends and inform policy. If their work is to be useful and relevant, scientists must make testable predictions about the impact of global policy on biodiversity to ensure that targets such as those set at Nagoya catalyse effective and measurable change.