59 results for site-specific
Abstract:
Carbon dioxide capture and geological storage (CCS) has the potential to make a significant contribution to the decarbonisation of the UK. Amid concerns over maintaining security, and hence diversity, of supply, CCS could allow the continued use of coal, oil and gas whilst avoiding the CO2 emissions currently associated with fossil fuel use. This project has explored some of the geological, environmental, technical, economic and social implications of this technology. The UK is well placed to exploit CCS, with a large offshore storage capacity in both disused oil and gas fields and saline aquifers. This capacity should be sufficient to store CO2 from the power sector (at current levels) for at least one century, using well-understood, and therefore likely lower-risk, depleted hydrocarbon fields and contained parts of aquifers. It is very difficult to produce reliable estimates of the (potentially much larger) storage capacity of less well-understood geological reservoirs such as non-confined parts of aquifers. With the majority of its large coal-fired power stations due to be retired during the next 15 to 20 years, the UK is at a natural decision point with respect to the future of power generation from coal; the existence of both national reserves and the infrastructure for receiving imported coal makes clean coal technology a realistic option. The notion of CCS as a ‘bridging’ or ‘stop-gap’ technology (i.e. whilst we develop ‘genuinely’ sustainable renewable energy technologies) needs to be examined critically, especially given the scale of global coal reserves. If CCS plant is built, it is likely that technological innovation will bring down the costs of CO2 capture, such that it could become increasingly attractive. As with any capital-intensive option, there is a danger of becoming ‘locked in’ to a CCS system. The costs of CCS in our model, for UK power stations in the East Midlands and Yorkshire storing to reservoirs in the North Sea, are between £25 and £60 per tonne of CO2 captured, transported and stored. This is between about 2 and 4 times the current traded price of a tonne of CO2 in the EU Emissions Trading Scheme. In addition to meeting technical and economic requirements, CCS must also be socially and environmentally acceptable. Our research has shown that, given an acceptance of the severity and urgency of addressing climate change, CCS is viewed favourably by members of the public, provided it is adopted within a portfolio of other measures. The most commonly voiced public concern is leakage, and this remains perhaps the greatest uncertainty with CCS. It is not possible to make general statements concerning storage security; assessments must be site-specific. The impacts of any potential leakage are also somewhat uncertain, but should be balanced against the deleterious effects of increased ocean acidification, due to uptake of elevated atmospheric CO2, that have already been observed. Provided adequate long-term monitoring can be ensured, any leakage of CO2 from a storage site is likely to have minimal localised impacts, as long as leaks are rapidly repaired. A regulatory framework for CCS will need to include risk assessment of potential environmental and health and safety impacts, accounting and monitoring, and long-term liability.
In summary, although there remain uncertainties to be resolved through research and demonstration projects, our assessment demonstrates that CCS holds great potential for significant cuts in CO2 emissions as we develop long-term alternatives to fossil fuel use. CCS can contribute to reducing emissions of CO2 into the atmosphere in the near term (i.e. ‘peak-shaving’ the future atmospheric concentration of CO2), with the potential to continue to deliver significant CO2 reductions over the long term.
Abstract:
Enantio-specific interactions on intrinsically chiral or chirally modified surfaces can be identified experimentally by comparing the adsorption geometries of similar nonchiral and chiral molecules. Information about the effects of substrate-related and intermolecular interactions on the adsorption geometry of glycine, the only natural nonchiral amino acid, is therefore important for identifying enantio-specific interactions of larger chiral amino acids. We have studied the long- and short-range adsorption geometry and bonding properties of glycine on the intrinsically chiral Cu{531} surface with low-energy electron diffraction, near-edge X-ray absorption fine structure spectroscopy, X-ray photoelectron spectroscopy, and temperature-programmed desorption. For coverages between 0.15 and 0.33 ML (saturated chemisorbed layer) and temperatures between 300 and 430 K, glycine molecules adsorb in two different azimuthal orientations, which are associated with adsorption sites on the {110} and {311} microfacets of Cu{531}. Both types of adsorption sites allow a triangular footprint with surface bonds through the two oxygen atoms and the nitrogen atom. The occupation of the two adsorption sites is equal for all coverages, which can be explained by pair formation due to similar site-specific adsorption energies and the possibility of forming hydrogen bonds between molecules on adjacent {110} and {311} sites. This is not the case for alanine, and points toward higher site specificity in the case of alanine, which is ultimately responsible for the enantiomeric differences observed for the alanine system.
The importance of the relationship between scale and process in understanding long-term DOC dynamics
Abstract:
Concentrations of dissolved organic carbon (DOC) have increased in many, but not all, surface waters across acid-impacted areas of Europe and North America over the last two decades. Over the last eight years several hypotheses have been put forward to explain these increases, but none is yet accepted universally. Research in this area appears to have reached a stalemate between those favouring declining atmospheric deposition, climate change or land management as the key driver of long-term DOC trends. While it is clear that many of these factors influence DOC dynamics in soil and stream waters, their effect varies over different temporal and spatial scales. We argue that regional differences in acid deposition loading may account for the apparent discrepancies between studies. DOC has shown strong monotonic increases in areas which have experienced strong downward trends in pollutant sulphur and/or sea-salt deposition. Elsewhere, climatic factors that strongly influence seasonality have also dominated inter-annual variability, and here long-term monotonic DOC trends are often difficult to detect. Furthermore, in areas receiving similar acid loadings, different catchment characteristics could have affected the site-specific sensitivity to changes in acidity, and therefore the magnitude of DOC release in response to changes in sulphur deposition. We suggest that confusion over these temporal and spatial scales of investigation has contributed unnecessarily to the disagreement over the main regional driver(s) of DOC trends, and that the data behind the majority of these studies are more compatible than is often conveyed.
Abstract:
Virulence in Staphylococcus aureus is regulated via agr-dependent quorum sensing, in which an autoinducing peptide (AIP) activates AgrC, a histidine protein kinase. AIPs are usually thiolactones containing seven to nine amino acid residues, in which the thiol of the central cysteine is linked to the alpha-carboxyl of the C-terminal amino acid residue. The staphylococcal agr locus has diverged such that the AIPs of the four different S. aureus agr groups self-activate but cross-inhibit. Consequently, although the agr system is conserved among the staphylococci, it has undergone significant evolutionary divergence whereby, to retain functionality, any changes in the AIP-encoding gene (agrD) that modify AIP structure must be accompanied by corresponding changes in the AgrC receptor. Since AIP-1 and AIP-4 differ by only a single amino acid, we compared the transmembrane topology of AgrC1 and AgrC4 to identify amino acid residues involved in AIP recognition. As only two of the three predicted extracellular loops exhibited amino acid differences, site-specific mutagenesis was used to exchange the key AgrC1 and AgrC4 amino acid residues in each loop, either singly or in combination. A novel lux-based agrP3 reporter gene fusion was constructed to evaluate the response of the mutated AgrC receptors. The data obtained revealed that while differential recognition of AIP-1 and AIP-4 depends primarily on three amino acid residues in loop 2, loop 1 is essential for receptor activation by the cognate AIP. Furthermore, a single mutation in the AgrC1 loop 2 converted (Ala5)AIP-1 from a potent antagonist into an activator, essentially resulting in the forced evolution of a new AIP group. Taken together, our data indicate that loop 2 constitutes the predicted hydrophobic pocket that binds the AIP thiolactone ring, while the exocyclic amino acid tail interacts with loop 1 to facilitate receptor activation.
Abstract:
Many weeds occur in patches but farmers frequently spray whole fields to control the weeds in these patches. Given a geo-referenced weed map, technology exists to confine spraying to these patches. Adoption of patch spraying by arable farmers has, however, been negligible, partly due to the difficulty of constructing weed maps. Building on previous DEFRA and HGCA projects, this proposal aims to develop and evaluate a machine vision system to automate the weed mapping process. The project thereby addresses the principal technical stumbling block to widespread adoption of site-specific weed management (SSWM). The accuracy of weed identification by machine vision based on a single field survey may be inadequate to create herbicide application maps. We therefore propose to test the hypothesis that sufficiently accurate weed maps can be constructed by integrating information from geo-referenced images captured automatically at different times of the year during normal field activities. Accuracy of identification will also be increased by utilising a priori knowledge of weeds present in fields. To prove this concept, images will be captured from arable fields on two farms and processed offline to identify and map the weeds, focussing especially on black-grass, wild oats, barren brome, couch grass and cleavers. As advocated by Lutman et al. (2002), the approach uncouples the weed mapping and treatment processes and builds on the observation that patches of these weeds are quite stable in arable fields. There are three main aspects to the project. 1) Machine vision hardware. The hardware components of the system are one or more cameras connected to a single-board computer (Concurrent Solutions LLC) and interfaced with an accurate Global Positioning System (GPS) supplied by Patchwork Technology. The camera(s) will take separate measurements for each of the three primary colours of visible light (red, green and blue) in each pixel. The basic proof of concept can be achieved in principle using a single-camera system, but in practice systems with more than one camera may need to be installed so that larger fractions of each field can be photographed. Hardware will be reviewed regularly during the project in response to feedback from other work packages and updated as required. 2) Image capture and weed identification software. The machine vision system will be attached to the toolbars of farm machinery so that images can be collected during different field operations. Images will be captured at different ground speeds, in different directions and at different crop growth stages, as well as against different crop backgrounds. Having captured geo-referenced images in the field, image analysis software will be developed by Murray State and Reading Universities, with advice from The Arable Group, to identify weed species. A wide range of pattern recognition techniques, and in particular Bayesian networks, will be used to advance the state of the art in machine vision-based weed identification and mapping. Weed identification algorithms used by others are inadequate for this project, as we intend to correlate images collected at different growth stages. Plants grown for this purpose by Herbiseed will be used in the first instance. In addition, our image capture and analysis system will include plant characteristics such as leaf shape, size, vein structure, colour and textural pattern, some of which are not detectable by other machine vision systems or are omitted by their algorithms.
Using such a list of features observable with our machine vision system, we will determine those that can be used to distinguish the weed species of interest. 3) Weed mapping. Geo-referenced maps of weeds in arable fields will be produced by Reading University and Syngenta, with advice from The Arable Group and Patchwork Technology. Natural infestations will be mapped in the fields, but we will also introduce specimen plants in pots to facilitate more rigorous system evaluation and testing. Manual weed maps of the same fields will be generated by Reading University, Syngenta and Peter Lutman so that the accuracy of automated mapping can be assessed. The principal hypothesis and concept to be tested is that, by combining maps from several surveys, a weed map with acceptable accuracy for end-users can be produced. If the concept is proved and can be commercialised, systems could be retrofitted at low cost onto existing farm machinery. The outputs of the weed mapping software would then link with the precision farming options already built into many commercial sprayers, allowing their use for targeted, site-specific herbicide applications. Immediate economic benefits would therefore arise directly from reduced herbicide costs. SSWM will also reduce the overall pesticide load on the crop, and so may reduce pesticide residues in food and drinking water and the adverse impacts of pesticides on non-target species and beneficials. Farmers may even choose to leave unsprayed some non-injurious, environmentally beneficial, low-density weed infestations. These benefits fit very well with the anticipated legislation emerging from the new EU Thematic Strategy for Pesticides, which will encourage more targeted use of pesticides and greater uptake of Integrated Crop (Pest) Management approaches, and also with the requirements of the Water Framework Directive to reduce levels of pesticides in water bodies. The greater precision of weed management offered by SSWM is therefore a key element in preparing arable farming systems for a future where policy makers and consumers want to minimise pesticide use and the carbon footprint of farming while maintaining food production and security. The mapping technology could also be used on organic farms to identify areas of fields needing mechanical weed control, thereby reducing both carbon footprints and damage to crops by, for example, spring tines. Objectives: (i) to develop a prototype machine vision system for automated image capture during agricultural field operations; (ii) to prove the concept that images captured by the machine vision system over a series of field operations can be processed to identify and geo-reference specific weeds in the field; and (iii) to generate weed maps from the geo-referenced weed plants/patches identified in objective (ii).
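The pipeline described above starts from per-pixel red/green/blue measurements. As a minimal illustrative sketch, not the project's actual software, the code below shows one standard first step for such systems: separating green plant material from soil with the excess-green index (ExG = 2g − r − b on chromaticity-normalised pixels). The threshold value and the synthetic test image are assumptions.

```python
# Minimal sketch of excess-green vegetation segmentation for RGB field
# imagery. Threshold and test data are illustrative assumptions.
import numpy as np

def excess_green(rgb: np.ndarray) -> np.ndarray:
    """Excess-green index for an H x W x 3 float image with values in [0, 1]."""
    total = rgb.sum(axis=2)
    total[total == 0] = 1.0             # avoid division by zero on black pixels
    chromatic = rgb / total[..., None]  # normalised r, g, b coordinates
    r, g, b = chromatic[..., 0], chromatic[..., 1], chromatic[..., 2]
    return 2.0 * g - r - b

def vegetation_mask(rgb: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Binary mask of likely vegetation pixels (True = plant material)."""
    return excess_green(rgb) > threshold

if __name__ == "__main__":
    # Synthetic 4x4 test image: top half greenish "leaf", bottom half "soil".
    img = np.zeros((4, 4, 3))
    img[:2] = [0.2, 0.6, 0.1]   # green pixels
    img[2:] = [0.4, 0.3, 0.2]   # soil-coloured pixels
    print(vegetation_mask(img))
```

Pixels flagged by such a mask would then feed the shape, vein-structure, colour and texture features the proposal lists, with a classifier such as a Bayesian network assigning species labels.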
Abstract:
An increasing importance is assigned to the estimation and verification of carbon stocks in forests. Forestry practice has several long-established and reliable methods for the assessment of aboveground biomass; however, we still lack accurate predictors of belowground biomass. A major windthrow event exposing the coarse root systems of Norway spruce trees allowed us to assess the effects of contrasting soil stone and water content on belowground allocation. Increasing stone content decreases the root/shoot ratio, while soil waterlogging increases it. We constructed allometric relationships for belowground biomass prediction and were able to show that only soil waterlogging significantly affects the model parameters. We showed that diameter at breast height is a reliable predictor of belowground biomass and that, once site-specific parameters have been developed, it is possible to estimate belowground biomass in Norway spruce accurately.
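The fitted form of the allometric relationships is not given in the abstract; the standard assumption for tree biomass allometry, sketched below, is a power law in diameter at breast height, usually estimated on the log-log scale, with site conditions (here, waterlogging) entering through the parameters:

```latex
% Standard allometric power-law form (an assumption; the paper's exact
% model is not stated in the abstract): belowground biomass B_bg from
% diameter at breast height D, with site-specific parameters a and b.
B_{\mathrm{bg}} = a\,D^{\,b}
\quad\Longleftrightarrow\quad
\ln B_{\mathrm{bg}} = \ln a + b \ln D
```

Under this reading, the reported waterlogging effect would appear as a significant shift in a (and/or b) between freely drained and waterlogged sites, while stone content would leave the fitted parameters unchanged.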
Abstract:
Executive summary
Nature of the problem (science/management/policy)
• Freshwater ecosystems play a key role in the European nitrogen (N) cycle, both as a reactive agent that transfers, stores and processes N loadings from the atmosphere and terrestrial ecosystems, and as a natural environment severely impacted by the increase of these loadings.
Approaches
• This chapter is a review of major processes and factors controlling N transport and transformations for running waters, standing waters, groundwaters and riparian wetlands.
Key findings/state of knowledge
• The major factor controlling N processes in freshwater ecosystems is the residence time of water, which varies widely both in space and in time, and which is sensitive to changes in climate, land use and management.
• The effects of increased N loadings to European freshwaters include acidification in semi-natural environments, and eutrophication in more disturbed ecosystems, with associated loss of biodiversity in both cases.
• An important part of the nitrogen transferred by surface waters is in the form of organic N, as dissolved organic N (DON) and particulate organic N (PON). This part is dominant in semi-natural catchments throughout Europe and remains a significant component of the total N load even in nitrate-enriched rivers.
• In eutrophicated standing freshwaters, N can be a factor limiting or co-limiting biological production, and control of both N and phosphorus (P) loading is often needed in impacted areas if ecological quality is to be restored.
Major uncertainties/challenges
• The importance of storage and denitrification in aquifers is a major uncertainty in the global N cycle, and controls in part the response of catchments to land use or management changes. In some aquifers, the increase of N concentrations will continue for decades even if efficient mitigation measures are implemented now.
• Nitrate retention by riparian wetlands has often been highlighted. However, their use for mitigation must be treated with caution, since their effectiveness is difficult to predict, and side effects include increased DON emissions to adjacent open waters, N2O emissions to the atmosphere, and loss of biodiversity.
• The character and specific spatial origins of DON are not fully understood, and similarly the quantitative importance of indirect N2O emissions from freshwater ecosystems, resulting from N leaching losses from agricultural soils, is still poorly known at the regional scale.
• These major uncertainties remain due to the lack of adequate monitoring (all forms of N at a relevant frequency), especially – but not only – in the southern and eastern EU countries.
Recommendations (research/policy)
• The great variability of transfer pathways, buffering capacity and sensitivity of catchments and freshwater ecosystems calls for site-specific mitigation measures rather than standard ones applied at regional to national scale.
• The spatial and temporal variations of N forms, and the processes controlling the transport and transformation of N within freshwaters, require further investigation if the role of N in influencing freshwater ecosystem health is to be better understood, underpinning the implementation of the EU Water Framework Directive for European freshwaters.
Abstract:
Purpose: The pH discrepancy between healthy and atopic dermatitis skin was identified as a site-specific trigger for delivering hydrocortisone from microcapsules. Methods: Using Eudragit L100, a pH-responsive polymer which dissolves at pH 6, hydrocortisone-loaded microparticles were produced by oil-in-oil microencapsulation or spray drying. Release and permeation of hydrocortisone from the microparticles, alone or in gels, were assessed, and preliminary stability data were determined. Results: Drug release from the microparticles was pH-dependent, though the particles produced by spray drying also gave significant non-pH-dependent burst release, resulting from their porous nature or from drug enrichment on the particle surface. The pH-responsive release was maintained upon incorporation of the oil-in-oil microparticles into Carbopol- and HPMC-based gel formulations. In-vitro studies showed 4- to 5-fold higher drug permeation through porcine skin from the gels at pH 7 compared to pH 5. Conclusions: Permeation studies showed that the oil-in-oil particles deliver essentially no drug at normal (intact) skin pH (5.0–5.5), but that delivery can be triggered and targeted to atopic dermatitis skin, where the pH is elevated. The incorporation of these microparticles into Carbopol- and HPMC-based aqueous gel formulations demonstrated good stability and pH-responsive permeation into porcine skin.
Abstract:
In the winter of 2007, Doug Aitken’s moving image installation, sleepwalkers, was projected onto the exterior walls of the Museum of Modern Art in New York. The project was a collaboration between Aitken, the museum and Creative Time, a New York-based organisation that commissions public art projects. A site-specific version of the installation has been commissioned by the Miami Art Museum for the opening of its new facility, designed by Swiss architects Herzog and de Meuron, in 2013: “sleepwalkers (Miami) will expand the work’s landscape and characters in a manner that reflects the diverse social fabric of Miami.” This essay examines sleepwalkers as an example of the emerging form of film as public art. There are three strands to my argument: first, an examination of the role of film in the redefinition of public art, shifting away from spatial practices concerned with fixed and permanent notions of space, community and art and towards transient and experimental spatial and artistic practices; second, a discussion of the relationship between projection and the built environment and the ways that the qualities of luminescence, transparency, movement and connectivity are transferred from projected images to the surfaces on which they are projected and the spaces around them; and third, an examination of the ways that sleepwalkers uses only certain aspects of narrativity, those concerned with movement and change, and avoids hermeneutic absorption in order to keep the spectators moving (transposing the idea of sleepwalking from characters to spectators). Transience and transparency are key ideas in the conceptualisation of the work, and these are deployed with significant differences in relation to the distinctive characteristics of each city and each museum.
Abstract:
Diffuse pollution, and the contribution from agriculture in particular, has become increasingly important as pollution from point sources has been addressed by wastewater treatment. Land management approaches, such as the construction of field wetlands, provide one group of mitigation options available to farmers. Although field wetlands are widely used for diffuse pollution control in temperate environments worldwide, there is a shortage of evidence for the effectiveness and viability of these mitigation options in the UK. The Mitigation Options for Phosphorus and Sediment Project aims to make recommendations regarding the design and effectiveness of field wetlands for diffuse pollution control in UK landscapes. Ten wetlands have been built on four farms in Cumbria and Leicestershire. This paper focuses on sediment retention within the wetlands, estimated from annual sediment surveys in the first two years, and discusses establishment costs. It is clear that the wetlands are effective in trapping a substantial amount of sediment. Estimates of annual sediment retention suggest higher trapping rates at sandy sites (0.5–6 t ha⁻¹ yr⁻¹) than at silty sites (0.02–0.4 t ha⁻¹ yr⁻¹) and clay sites (0.01–0.07 t ha⁻¹ yr⁻¹). Establishment costs for the wetlands ranged from £280 to £3100 and depended more on site-specific factors, such as fencing and gateways on livestock farms, than on wetland size or design. Wetlands with lower trapping rates would also have lower maintenance costs, as dredging would be required less frequently. The results indicate that field wetlands show promise for inclusion in agri-environment schemes, particularly if capital payments can be provided for establishment to encourage uptake of these multi-functional features.
Abstract:
Sustainable lake management for nutrient-enriched lakes must be underpinned by an understanding of both the functioning of the lake and the origins of changes in nutrient loading from the catchment. To date, limnologists have tended to focus on studying the impact of nutrient enrichment on the lake biota, and the dynamics of nutrient cycling between the water column, biota and sediments within the lake. Relatively less attention has been paid to understanding the specific origins of nutrient loading from the catchment and the nutrient transport pathways linking the lake to its catchment. As such, when devising catchment management strategies to reduce nutrient loading on enriched lakes, assumptions have been made regarding the relative significance of non-point versus point sources in the catchment. These are not always supported by research conducted on catchment nutrient dynamics in other fields of freshwater science. Studies on nutrient enrichment in lakes need to take account of the history of catchment use and management specific to each lake in order to devise targeted and sustainable management strategies to reduce nutrient loading to enriched lakes. Here, a modelling approach that allows quantification of the relative contribution of nutrients from each specific point and non-point catchment source over the course of catchment history is presented. The approach has been applied to three contrasting catchments in the U.K. for the period from 1931 to the present. These are the catchment of Slapton Ley in south Devon, the River Esk in Cumbria and the Deben Estuary in Suffolk. Each catchment showed marked variations in the nature and intensity of land use and management. The model output quantifies the relative importance of point source versus non-point livestock and land use sources in each of the catchments, and demonstrates the necessity for an understanding of site-specific catchment history in devising suitable management strategies for the reduction of nutrient loading on enriched lakes.
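The abstract does not name the modelling approach; export-coefficient models are one common way of quantifying the relative contribution of point and non-point sources from a reconstructed catchment history, and the sketch below assumes that style of formulation. All source categories, coefficients and magnitudes are illustrative.

```python
# Minimal sketch of an export-coefficient style catchment nutrient model.
# The paper's actual formulation is not given in the abstract; the
# coefficients, source categories and numbers here are illustrative.

# Export coefficients: nutrient exported per unit of each source per year.
EXPORT_COEFF = {
    "arable_ha":    0.5,    # kg P per hectare per year (illustrative)
    "grassland_ha": 0.2,
    "cattle_head":  0.1,
    "sewage_pe":    0.25,   # per person-equivalent from point sources
}

def annual_load(sources: dict) -> float:
    """Total nutrient load (kg/yr) as the sum of coefficient * magnitude."""
    return sum(EXPORT_COEFF[name] * mag for name, mag in sources.items())

def source_shares(sources: dict) -> dict:
    """Relative contribution of each source to the annual load."""
    total = annual_load(sources)
    return {name: EXPORT_COEFF[name] * mag / total for name, mag in sources.items()}

if __name__ == "__main__":
    # One year of a hypothetical catchment record (e.g. a 1931 census year).
    catchment_1931 = {"arable_ha": 1200, "grassland_ha": 3000,
                      "cattle_head": 800, "sewage_pe": 500}
    print(f"load: {annual_load(catchment_1931):.0f} kg/yr")
    for name, share in source_shares(catchment_1931).items():
        print(f"  {name}: {share:.1%}")
```

Run year by year over the documented land-use record, this kind of calculation yields the point versus non-point apportionment through time that the abstract describes.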
Abstract:
This paper discusses key contextual differences and similarities in a comparative study on brownfield regeneration in England and Japan. Over the last decade, the regeneration of large-scale ‘flagship’ projects has been a primary focus in England, and previous research has discussed policy issues and key barriers at these sites. However, further research is required to explore specific barriers associated with problematic ‘hardcore’ sites suffering from long-term dereliction due to site-specific obstacles such as contamination and fragmented ownership. In comparison with England, brownfield regeneration is a relatively new urban agenda in Japan. Japan has less experience in terms of promoting redevelopment of brownfield sites at national level and the specific issues of ‘hardcore’ sites have been under-researched. The paper reviews and highlights important issues in comparing the definitions, national policy frameworks and the current stock of brownfields.
Abstract:
As wind generation increases, system impact studies rely on predictions of future generation and effective representation of wind variability. A well-established approach to investigating the impact of wind variability is to simulate generation using observations from 10 m meteorological masts. However, there are problems with relying purely on historical wind-speed records or generation histories: mast data are often incomplete, not sited at relevant wind generation sites, and recorded at the wrong altitude above ground (usually 10 m), each of which may distort the generation profile. A possible complementary approach is to use reanalysis data, where data assimilation techniques are combined with state-of-the-art weather forecast models to produce complete gridded wind time-series over an area. Previous investigations of reanalysis datasets have placed an emphasis on comparing reanalysis to meteorological site records, whereas this paper compares wind generation simulated using reanalysis data directly against historic wind generation records. Importantly, this comparison is conducted using raw reanalysis data (typical resolution ∼50 km), without relying on a computationally expensive “dynamical downscaling” for a particular target region. Although the raw reanalysis data cannot, by the nature of its construction, represent the site-specific effects of sub-gridscale topography, it is nevertheless shown to be comparable to or better than the mast-based simulation in the region considered, and it is therefore argued that raw reanalysis data may offer a number of significant advantages as a data source.
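The abstract does not detail the simulation chain, but mast- or reanalysis-based generation simulation of this kind typically extrapolates near-surface wind speed to hub height and applies a turbine power curve. The sketch below is a minimal illustration under those assumptions; the roughness length, hub height and power-curve parameters are illustrative, not the paper's values.

```python
# Minimal sketch of simulating wind generation from a 10 m wind-speed
# time-series (mast or reanalysis grid cell). Parameters are illustrative.
import numpy as np

def extrapolate_log_law(v10: np.ndarray, hub_height: float = 80.0,
                        z0: float = 0.03) -> np.ndarray:
    """Scale 10 m wind speeds to hub height with the neutral log law."""
    return v10 * np.log(hub_height / z0) / np.log(10.0 / z0)

def power_curve(v: np.ndarray, cut_in: float = 4.0, rated: float = 13.0,
                cut_out: float = 25.0) -> np.ndarray:
    """Generic turbine power curve; output is capacity factor in [0, 1]."""
    p = np.clip((v**3 - cut_in**3) / (rated**3 - cut_in**3), 0.0, 1.0)
    p[(v < cut_in) | (v > cut_out)] = 0.0   # below cut-in or storm shutdown
    return p

if __name__ == "__main__":
    hourly_v10 = np.array([3.0, 6.5, 9.0, 12.0, 26.0])  # m/s at 10 m
    v_hub = extrapolate_log_law(hourly_v10)
    print(power_curve(v_hub))  # simulated hourly capacity factors
```

Aggregating such capacity factors over the turbines in a region gives the simulated generation series that the paper compares against historic generation records.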
Abstract:
Historic geomagnetic activity observations have been used to reveal centennial variations in the open solar flux and the near-Earth heliospheric conditions (the interplanetary magnetic field and the solar wind speed). The various methods are in very good agreement for the past 135 years when there were sufficient reliable magnetic observatories in operation to eliminate problems due to site-specific errors and calibration drifts. This review underlines the physical principles that allow these reconstructions to be made, as well as the details of the various algorithms employed and the results obtained. Discussion is included of: the importance of the averaging timescale; the key differences between “range” and “interdiurnal variability” geomagnetic data; the need to distinguish source field sector structure from heliospherically-imposed field structure; the importance of ensuring that regressions used are statistically robust; and uncertainty analysis. The reconstructions are exceedingly useful as they provide calibration between the in-situ spacecraft measurements from the past five decades and the millennial records of heliospheric behaviour deduced from measured abundances of cosmogenic radionuclides found in terrestrial reservoirs. Continuity of open solar flux, using sunspot number to quantify the emergence rate, is the basis of a number of models that have been very successful in reproducing the variation derived from geomagnetic activity. These models allow us to extend the reconstructions back to before the development of the magnetometer and to cover the Maunder minimum. Allied to the radionuclide data, the models are revealing much about how the Sun and heliosphere behaved outside of grand solar maxima and are providing a means of predicting how solar activity is likely to evolve now that the recent grand maximum (that had prevailed throughout the space age) has come to an end.
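The continuity-based models mentioned in this abstract generally share the form sketched below; this is a generic outline, with the emergence source term S(R) (parameterised by sunspot number R) and the loss timescale τ specified differently by individual models:

```latex
% Generic continuity equation for the open solar flux F_S: emergence,
% parameterised by sunspot number R, minus a loss term. The specific
% forms of S(R) and \tau vary between published models.
\frac{\mathrm{d}F_{S}}{\mathrm{d}t} = S(R) - \frac{F_{S}}{\tau}
```

Because sunspot records extend back far beyond the geomagnetic data, integrating this balance forward from an assumed initial state is what allows the reconstructions to be pushed back through the Maunder minimum.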
Abstract:
Over the last decade, issues related to the financial viability of development have become increasingly important to the English planning system. As part of a wider shift towards the compartmentalisation of planning tasks, expert consultants are required to quantify, in an attempt to rationalise, planning decisions in terms of economic ‘viability’. Often with a particular focus on planning obligations, the results of development viability modelling have emerged as a key part of the evidence base used in site-specific negotiations and in planning policy formation. Focussing on the role of clients and other stakeholders, this paper investigates how development viability is tested in practice. It draws together work on calculative practices in policy formation, on client feedback and influence in real estate appraisals, and on stakeholder engagement and consultation in the planning literature to critically evaluate the role of clients and other interest groups in influencing the production and use of development viability appraisal models. The paper draws upon semi-structured interviews with the main producers of development viability appraisals and concludes that, whilst appraisals have the potential to be biased by client and stakeholder interests, there are important controls on potential opportunistic behaviour. One such control is local authorities’ weak understanding of development viability appraisal techniques, which limits their capacity to question the outputs of appraisal models. However, this is also a concern, given that viability is now a central feature of the town planning system.