63 results for Underdeveloped areas - External debt
in CentAUR: Central Archive University of Reading - UK
Abstract:
Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high-resolution TerraSAR-X Synthetic Aperture Radar (SAR) data to detect flooded regions in urban areas is described. The study uses a TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SAR End-To-End Simulator (SETES) was used in conjunction with airborne scanning laser altimetry (LiDAR) data to estimate regions of the image in which water would not be visible due to shadow or layover caused by buildings and taller vegetation. A semi-automatic algorithm for the detection of floodwater in urban areas is described, together with its validation using the aerial photographs. 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. The algorithm is aimed at producing urban flood extents with which to calibrate and validate urban flood inundation models, and these findings indicate that TerraSAR-X is capable of providing useful data for this purpose.
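As a minimal sketch of how figures of this kind can be derived, the snippet below scores a binary SAR flood map against an aerial-photo flood map over a chosen pixel mask. The array names and the exact definition of the false positive rate are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def detection_scores(sar_flood, photo_flood, mask):
    """Hit rate and false-alarm rate over the pixels selected by mask."""
    det, truth = sar_flood[mask], photo_flood[mask]
    hit_rate = np.sum(det & truth) / np.sum(truth)      # water correctly detected
    false_rate = np.sum(det & ~truth) / np.sum(~truth)  # dry pixels flagged as water
    return hit_rate, false_rate

# Score over pixels visible to the SAR (cf. 76% / 25%), then over all
# urban pixels including shadow/layover regions (cf. 58% / 19%), where
# `visible` is the boolean mask from the SETES shadow/layover simulation:
#   detection_scores(sar_flood, photo_flood, visible)
#   detection_scores(sar_flood, photo_flood, np.ones(visible.shape, bool))
```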
Abstract:
Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high-resolution TerraSAR-X data to detect flooded regions in urban areas is described. An important application for this would be the calibration and validation of the flood extent predicted by an urban flood inundation model. To date, research on such models has been hampered by a lack of suitable distributed validation data. The study uses a 3 m resolution TerraSAR-X image of a 1-in-150 year flood near Tewkesbury, UK, in 2007, for which contemporaneous aerial photography exists for validation. The DLR SETES SAR simulator was used in conjunction with airborne LiDAR data to estimate regions of the TerraSAR-X image in which water would not be visible due to radar shadow or layover caused by buildings and taller vegetation, and these regions were masked out in the flood detection process. A semi-automatic algorithm for the detection of floodwater was developed, based on a hybrid approach. Flooding in rural areas adjacent to the urban areas was detected using an active contour model (snake) region-growing algorithm seeded using the un-flooded river channel network, which was applied to the TerraSAR-X image fused with the LiDAR DTM to ensure the smooth variation of heights along the reach. A simpler region-growing approach was used in the urban areas, initialized using knowledge of the flood waterline in the rural areas. Seed pixels having low backscatter were identified in the urban areas using supervised classification based on training areas for water taken from the rural flood, and non-water taken from the higher urban areas. Seed pixels were required to have heights less than a spatially-varying height threshold determined from nearby rural waterline heights. Seed pixels were clustered into urban flood regions based on their close proximity, rather than requiring that all pixels in the region should have low backscatter. This approach was taken because urban water backscatter values appeared to be corrupted in some pixels, perhaps due to contributions from side-lobes of strong reflectors nearby. The TerraSAR-X urban flood extent was validated using the flood extent visible in the aerial photos: 76% of the urban water pixels visible to TerraSAR-X were correctly detected, with an associated false positive rate of 25%. If all urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19% respectively. These findings indicate that TerraSAR-X is capable of providing useful data for the calibration and validation of urban flood inundation models.
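The urban seed-selection and clustering step lends itself to a short sketch. The code below is illustrative only: the backscatter threshold, height margin and clustering radius are assumed parameters, and a scipy.ndimage dilation-plus-labelling pass stands in for the paper's proximity clustering.

```python
import numpy as np
from scipy import ndimage

def urban_flood_regions(backscatter_db, dtm, water_thresh_db,
                        waterline_height, height_margin=0.5,
                        cluster_radius=5):
    # Seed pixels: low backscatter (threshold trained on rural water
    # areas) AND below the spatially varying height threshold
    # interpolated from nearby rural waterline heights.
    seeds = ((backscatter_db < water_thresh_db) &
             (dtm < waterline_height + height_margin))
    # Cluster seeds that lie close together into flood regions, rather
    # than requiring every pixel in a region to have low backscatter
    # (individual water pixels may be corrupted by nearby side-lobes).
    grown = ndimage.binary_dilation(seeds, iterations=cluster_radius)
    region_labels, n_regions = ndimage.label(grown)
    return region_labels, n_regions
```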
Abstract:
The common GIS-based approach to regional analyses of soil organic carbon (SOC) stocks and changes is to define geographic layers for which unique sets of driving variables are derived, which include land use, climate, and soils. These GIS layers, with their associated attribute data, can then be fed into a range of empirical and dynamic models. Common methodologies for collating and formatting regional data sets on land use, climate, and soils were adopted for the project Assessment of Soil Organic Carbon Stocks and Changes at National Scale (GEFSOC). This permitted the development of a uniform protocol for handling the various inputs to the dynamic GEFSOC Modelling System. Consistent soil data sets for Amazon-Brazil, the Indo-Gangetic Plains (IGP) of India, Jordan and Kenya, the case study areas considered in the GEFSOC project, were prepared using methodologies developed for the World Soils and Terrain Database (SOTER). The approach involved three main stages: (1) compiling new soil geographic and attribute data in SOTER format; (2) using expert estimates and common sense to fill selected gaps in the measured or primary data; (3) using a scheme of taxonomy-based pedotransfer rules and expert rules to derive soil parameter estimates for similar soil units with missing soil analytical data. The most appropriate approach varied from country to country, depending largely on the overall accessibility and quality of the primary soil data available in the case study areas. The secondary SOTER data sets discussed here are appropriate for a wide range of environmental applications at national scale. These include agro-ecological zoning, land evaluation, modelling of soil C stocks and changes, and studies of soil vulnerability to pollution. Estimates of national-scale stocks of SOC, calculated using SOTER methods, are presented as a first example of database application. Independent estimates of SOC stocks are needed to evaluate the outcome of the GEFSOC Modelling System for current conditions of land use and climate. (C) 2007 Elsevier B.V. All rights reserved.
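As an illustration of stage (3), the sketch below fills gaps in a soil record from taxonomy-keyed default values. The soil units, attribute names and numbers are hypothetical, not taken from the SOTER database.

```python
# Hypothetical taxonomy-based pedotransfer rules: default attribute
# values per soil unit, applied only where measured data are missing.
PEDOTRANSFER_RULES = {
    "Ferralsol": {"bulk_density": 1.1, "org_carbon_pct": 1.8},
    "Vertisol":  {"bulk_density": 1.4, "org_carbon_pct": 0.9},
}

def fill_missing_attributes(soil_record):
    """Derive estimates for soil units with missing analytical data
    from expert rules keyed on the soil taxonomic unit."""
    rules = PEDOTRANSFER_RULES.get(soil_record.get("soil_unit"), {})
    filled = dict(soil_record)
    for attr, default in rules.items():
        if filled.get(attr) is None:   # gap in the primary data
            filled[attr] = default     # expert-rule estimate
    return filled

record = {"soil_unit": "Ferralsol", "bulk_density": None, "org_carbon_pct": 2.1}
print(fill_missing_attributes(record))
# {'soil_unit': 'Ferralsol', 'bulk_density': 1.1, 'org_carbon_pct': 2.1}
```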
Abstract:
The aim of this study was to examine interrelationships between functional biochemical and microbial indicators of soil quality, and their suitability to differentiate areas under contrasting agricultural management regimes. The study included five 0.8 ha areas on a sandy-loam soil which had received contrasting fertility and cropping regimes over a 5 year period. These were organically managed vegetable, vegetable-cereal and arable rotations, an organically managed grass-clover ley, and a conventional cereal rotation. The organic areas had been converted from conventional cereal production 5 years prior to the start of the study. All of the biochemical analyses, including light fraction organic matter (LFOM) C and N, labile organic N (LON), dissolved organic N and water-soluble carbohydrates showed significant differences between the areas, although the nature of the relationships between the areas varied between the different parameters, and were not related to differences in total soil organic matter content. The clearest differences were seen in LFOM C and N and LON, which were higher in the organic arable area relative to the other areas. In the case of the biological parameters, there were differences between the areas for biomass-N, ATP, chitin content, and the ratios of ATP:biomass and basal respiration:biomass. For these parameters, the precise relationships between the areas varied. However, relative to the conventionally managed area, areas under organic management generally had lower biomass-N and higher ATP contents. Arbuscular mycorrhizal fungus colonization potential was extremely low in the conventional area relative to the organic areas. Further, metabolic diversity and microbial community level physiological profiles, determined by analysis of microbial community metabolism using Biolog GN plates and the activities of eight key nutrient cycling enzymes, grouped the organic areas together, but separated them from the conventional area. We conclude that microbial parameters are more effective and consistent indicators of management-induced changes to soil quality than biochemical parameters, and that a variety of biochemical and microbial analyses should be used when considering the impact of management on soil quality. (C) 2004 Elsevier Ltd. All rights reserved.
Abstract:
Trace elements may present an environmental hazard in the vicinity of mining and smelting activities. However, the factors controlling their distribution and transfer within the soil and vegetation systems are not always well defined. Total concentrations of up to 15,195 mg·kg⁻¹ As, 6,690 mg·kg⁻¹ Cu, 24,820 mg·kg⁻¹ Pb and 9,810 mg·kg⁻¹ Zn in soils, and 62 mg·kg⁻¹ As, 1,765 mg·kg⁻¹ Cu, 280 mg·kg⁻¹ Pb and 3,460 mg·kg⁻¹ Zn in vegetation were measured. However, unusually for smelters and mines of a similar size, the elevated trace element concentrations in soils were found to be restricted to the immediate vicinity of the mines and smelters (maximum 2-3 km). Parent material, prevailing wind direction, and soil physical and chemical characteristics were found to correlate poorly with the restricted trace element distributions in soils. Hypotheses are given for this unusual distribution: (1) the contaminated soils were removed by erosion or (2) the mines and smelters released large heavy particles that could not have been transported long distances. Analyses of the accumulation of trace elements in vegetation (median ratios: As 0.06, Cu 0.19, Pb 0.54 and Zn 1.07) and the percentage of total trace elements being DTPA-extractable in soils (median percentages: As 0.06%, Cu 15%, Pb 7% and Zn 4%) indicated higher relative trace element mobility in soils with low total concentrations than in soils with elevated concentrations.
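The two indices quoted above reduce to simple ratios; the sketch below shows the arithmetic with hypothetical concentrations (the function names and the example soil value are illustrative, not from the study).

```python
def accumulation_ratio(veg_mg_kg, soil_mg_kg):
    # Vegetation/soil concentration ratio (both in mg·kg⁻¹);
    # values near or above 1 suggest ready plant uptake.
    return veg_mg_kg / soil_mg_kg

def dtpa_extractable_pct(dtpa_mg_kg, total_mg_kg):
    # DTPA-extractable share of the total soil concentration, in
    # percent, used as a proxy for trace element mobility in soil.
    return 100.0 * dtpa_mg_kg / total_mg_kg

# Hypothetical Zn example: 3,460 mg/kg in vegetation over an assumed
# 3,200 mg/kg in soil gives ~1.08, close to the reported Zn median of 1.07.
print(accumulation_ratio(3460.0, 3200.0))
```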
Abstract:
Trace elements may present an environmental hazard in the vicinity of mining and smelting activities. However, the factors controlling trace element distribution in soils around ancient and modern mining and smelting areas are not always clear. Tharsis, Riotinto and Huelva are located in the Iberian Pyrite Belt in SW Spain. The Tharsis and Riotinto mines have been exploited since 2500 B.C., with intensive smelting taking place. Huelva, established in 1970 and using the Flash Furnace Outokumpu process, is currently one of the largest smelters in the world. Pyrite and chalcopyrite ore have been intensively smelted for Cu. However, unusually for smelters and mines of a similar size, the elevated trace element concentrations in soils were found to be restricted to the immediate vicinity of the mines and smelters, being found up to a maximum of 2 km from the mines and smelters at Tharsis, Riotinto and Huelva. Trace element partitioning (over 2/3 of trace elements found in the residual immobile fraction of soils at Tharsis) and examination of soil particles by SEM-EDX showed that trace elements were not adsorbed onto soil particles, but were included within the matrix of large trace element-rich Fe silicate slag particles (i.e. 1 mm ∅, containing at least 1 wt.% As, Cu and Zn, and 2 wt.% Pb). The large size of the slag particles (1 mm ∅) was found to control the geographically restricted trace element distribution in soils at Tharsis, Riotinto and Huelva, since large heavy particles could not have been transported long distances. Distribution and partitioning indicated that impacts to the environment as a result of mining and smelting should remain minimal in the region. (c) 2006 Elsevier B.V. All rights reserved.
Abstract:
This paper describes the results of field research to dissect how social interactions differ between two reserves in Paraguay having very different styles of governance. The two reserves were Mbaracayu Natural Forest Reserve (Reserva Natural del Bosque de Mbaracayu, RNBM) and San Rafael Managed Resource Reserve (Reserva de Recursos Manejados San Rafael, RRMSR). RNBM is a private reserve owned by a non-governmental organisation, while RRMSR is a publicly-managed reserve, albeit with a substantial degree of private land ownership. Both reserves are intended to protect Atlantic Forest, one of the five world biodiversity 'hotspots', and also one of the most highly threatened. Each reserve and its buffer zone comprises a set of stakeholders, including indigenous communities and farmers, and the paper explores the interactions between these and the management regime. Indeed, while the management regimes of the two reserves are different, one being highly top-down (RNBM) and the other more socially inclusive (RRMSR), the issues that they have to deal with are much the same. However, while both management regimes will readily acknowledge the need to address poverty, inequality appears to be a far more sensitive issue. Whereas this may be expected for the privately-owned RNBM, it is perhaps more surprising in RRMSR, even when allowing for the fact that much of the land in the latter is in private hands. It is argued that the origins of this sensitivity rest within the broader features of Paraguayan society, and the prevalence of private land ownership. Yet ironically, it is the inequality in land ownership that is perhaps the most significant threat to conservation in both reserves. Therefore, while reserve-level analyses can provide some insight into the driving forces at play in the interaction between conservation and sustainable management, larger scales may be necessary to gain a fuller appreciation of the dynamics operating at site level. Even in a society with a history of centralised control these dynamics may be surprising. (c) 2005 Elsevier Ltd. All rights reserved.
Abstract:
The development of genetically modified (GM) crops has led the European Union (EU) to put forward the concept of 'coexistence' to give farmers the freedom to plant both conventional and GM varieties. Should a premium for non-GM varieties emerge in the market, 'contamination' by GM pollen would generate a negative externality to conventional growers. It is therefore important to assess the effect of different 'policy variables' on the magnitude of the externality to identify suitable policies to manage coexistence. In this paper, taking GM herbicide-tolerant oilseed rape as a model crop, we start from the model developed in Ceddia et al. [Ceddia, M.G., Bartlett, M., Perrings, C., 2007. Landscape gene flow, coexistence and threshold effect: the case of genetically modified herbicide tolerant oilseed rape (Brassica napus). Ecol. Modell. 205, pp. 169-180], use a Monte Carlo experiment to generate data and then estimate the effect of the number of GM and conventional fields, the width of buffer areas and the degree of spatial aggregation (i.e. the 'policy variables') on the magnitude of the externality at the landscape level. To represent realistic conditions in agricultural production, we assume that detection of GM material in conventional produce might occur at the field level (no grain mixing occurs) or at the silos level (where grain from different fields in the landscape is mixed). In the former case, the magnitude of the externality will depend on the number of conventional fields with average transgenic presence above a certain threshold. In the latter case, the magnitude of the externality will depend on whether the average transgenic presence across all conventional fields exceeds the threshold. In order to quantify the effect of the relevant 'policy variables', we compute the marginal effects and the elasticities. Our results show that when relying on marginal effects to assess the impact of the different 'policy variables', spatial aggregation is far more important when transgenic material is detected at field level, corroborating previous research. However, when elasticity is used, the effectiveness of spatial aggregation in reducing the externality is almost identical whether detection occurs at field level or at silos level. Our results also show that the area planted with GM is the most important 'policy variable' affecting the externality to conventional growers, and that buffer areas on conventional fields are more effective than those on GM fields. The implications of the results for coexistence policies in the EU are discussed. (C) 2008 Elsevier B.V. All rights reserved.
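The field-level versus silos-level distinction is easy to make concrete. Below is a minimal sketch, not the paper's model: the 0.9% threshold echoes the EU labelling limit, and the lognormal draw for per-field transgenic presence is an assumed stand-in for the landscape gene-flow model.

```python
import numpy as np

rng = np.random.default_rng(0)
THRESHOLD = 0.009  # 0.9%, cf. the EU labelling threshold (assumption)

def externality(presence, detection="field"):
    """presence: transgenic fraction in each conventional field.

    "field": grain is tested field by field (no mixing), so the
    externality scales with the number of fields over the threshold.
    "silos": grain from all fields is mixed, so only the landscape
    average is compared with the threshold.
    """
    if detection == "field":
        return int(np.sum(presence > THRESHOLD))
    return int(np.mean(presence) > THRESHOLD)

# Hypothetical draw: 50 conventional fields with skewed presence levels.
presence = rng.lognormal(mean=np.log(0.004), sigma=1.0, size=50)
print(externality(presence, "field"), externality(presence, "silos"))
```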
Abstract:
Given the growing impact of human activities on the sea, managers are increasingly turning to marine protected areas (MPAs) to protect marine habitats and species. Many MPAs have been unsuccessful, however, and lack of income has been identified as a primary reason for failure. In this study, data from a global survey of 79 MPAs in 36 countries were analysed and attempts made to construct predictive models to determine the income requirements of any given MPA. Statistical tests were used to uncover possible patterns and relationships in the data, with two basic approaches. In the first of these, an attempt was made to build an explanatory "bottom-up" model of the cost structures that might be required to pursue various management activities. This proved difficult in practice owing to the very broad range of applicable data, spanning many orders of magnitude. In the second approach, a "top-down" regression model was constructed using logarithms of the base data, in order to address the breadth of the data ranges. This approach suggested that MPA size and visitor numbers together explained 46% of the minimum income requirements (P < 0.001), with area being the slightly more influential factor. The significance of area to income requirements was of little surprise, given its profile in the literature. However, the relationship between visitors and income requirements might go some way to explaining why northern hemisphere MPAs with apparently high incomes still claim to be under-funded. The relationship between running costs and visitor numbers has important implications not only in determining a realistic level of funding for MPAs, but also in assessing from where funding might be obtained. Since a substantial proportion of the income of many MPAs appears to be utilized for amenity purposes, a case may be made for funds to be provided from the typically better resourced government social and educational budgets as well as environmental budgets. Similarly visitor fees, already an important source of funding for some MPAs, might have a broader role to play in how MPAs are financed in the future. (C) 2007 Elsevier Ltd. All rights reserved.
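A "top-down" model of this kind can be sketched in a few lines: regress log income requirements on log area and log visitor numbers, as below. The survey rows here are invented for illustration; only the log-log form and the two predictors follow the abstract.

```python
import numpy as np

# Hypothetical survey rows: MPA area (km^2), annual visitors, and
# minimum required income (USD). Values are illustrative only.
area     = np.array([12.0, 340.0, 5.5, 1200.0, 80.0])
visitors = np.array([3e3, 1.2e5, 5e2, 4e4, 2.5e4])
income   = np.array([4e4, 9e5, 1.5e4, 1.1e6, 3e5])

# Regress log(income) on log(area) and log(visitors) so that data
# spanning several orders of magnitude stay well behaved.
X = np.column_stack([np.ones_like(area), np.log(area), np.log(visitors)])
y = np.log(income)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ coef
r2 = 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)
print(f"log-income = {coef[0]:.2f} + {coef[1]:.2f}*log(area) "
      f"+ {coef[2]:.2f}*log(visitors),  R^2 = {r2:.2f}")
```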
Abstract:
In the present study, spores of the Trichoderma harzianum isolate Th-2 applied to a sandy clay loam soil and incubated continuously for 4 months at 25 °C and 35 °C, at three water potentials (-0.03 MPa, -0.3 MPa and <-50 MPa), significantly reduced (P<0.05) the growth of Fusarium oxysporum ciceri (Foc) on chickpea branch pieces. The pathogen population was greatly reduced in the moist soil (-0.3 MPa) compared with the wet soil (-0.03 MPa) at both temperatures, as indicated by greater colonization and growth of T. harzianum Th-2 on the branch pieces of chickpea plants. The pathogen was completely eradicated from the chickpea branch pieces after 6 months at 35 °C in the moist soil. In air-dry soil (<-50 MPa), Foc survived in 100% of the branch pieces even after 6 months at both temperatures. When chickpea branch pieces carrying the pathogen were sprayed with the antagonistic Trichoderma isolate Th-2, the pathogen was reduced to a minimum level (10-12%) after 5 months at 35 °C in the sandy clay loam soil. It can be concluded that in the rainfed chickpea-growing areas of Pakistan with sandy clay loam soils, Foc can be controlled using specific Trichoderma spp., especially in the summer season: after the crop is harvested, temperatures rise and rainfall during this period keeps the soil moist. This practice can reduce the Foc inoculum during the hot period while fields remain fallow until the next crop is sown, as is common in most rainfed chickpea-growing areas of Pakistan.