168 results for small-scale processing
Abstract:
Why do people engage in artisanal and small-scale mining (ASM) – labour-intensive mineral extraction and processing activity – across sub-Saharan Africa? This paper argues that ‘agricultural poverty’, or hardship induced by an over-dependency on farming for survival, has fuelled the recent rapid expansion of ASM operations throughout the region. The diminished viability of smallholder farming in an era of globalization, and an overreliance on rain-fed crop production restricted by seasonality, have led hundreds of thousands of rural African families to ‘branch out’ into ASM, a move made to secure supplementary incomes. Experiences from Komana West in Southwest Mali and East Akim District in Southeast Ghana are drawn upon to illustrate how a move into the ASM economy has affected farm families economically in many rural stretches of sub-Saharan Africa.
Abstract:
A discrete element model is used to study shear rupture of sea ice under convergent wind stresses. The model includes compressive, tensile, and shear rupture of viscous elastic joints connecting floes that move under the action of the wind stresses. The adopted shear rupture is governed by Coulomb’s criterion. The ice pack is a square domain, 400 km on a side, consisting of floes of 4 km size. In the standard case with tensile strength 10 times smaller than the compressive strength, under uniaxial compression the failure regime is mainly shear rupture, with the most probable scenario corresponding to that with the minimum failure work. The orientation of cracks delineating formed aggregates is bimodal, with peaks around the angles given by wing crack theory, determining diamond-shaped blocks. The ice block (floe aggregate) size decreases as the wind stress gradient increases, since the elastic strain energy grows faster, leading to a higher speed of crack propagation. As the tensile strength grows, shear rupture becomes harder to attain and compressive failure becomes equally important, leading to elongation of blocks perpendicular to the compression direction, and the blocks grow larger. In the standard case, as the wind stress confinement ratio increases the failure mode changes at a confinement ratio within 0.2–0.4, which corresponds to the analytical critical confinement ratio of 0.32. Below this value the cracks are bimodal, delineating diamond-shaped aggregates, while above this value failure becomes isotropic and is determined by small-scale stress anomalies due to irregularities in floe shape.
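The Coulomb criterion and the crack orientations it predicts can be sketched briefly; this is an illustrative toy with assumed parameter names, not the paper's actual model code:

```python
import math

def coulomb_shear_failure(tau, sigma_n, cohesion, mu):
    # Coulomb criterion: a joint ruptures in shear once the shear stress
    # reaches the cohesive strength plus friction times the normal stress
    # (sigma_n taken positive in compression; mu = internal friction).
    return abs(tau) >= cohesion + mu * sigma_n

def preferred_crack_angle(mu):
    # Orientation of the most favourable shear plane relative to the axis
    # of maximum compression: theta = 0.5 * atan(1 / mu).  The two signs
    # +/- theta give bimodal crack directions of the kind that bound the
    # diamond-shaped floe aggregates described above.
    return 0.5 * math.atan(1.0 / mu)
```

For an assumed internal friction coefficient mu = 0.7, this gives theta of roughly 27.5 degrees, so the conjugate crack pair is about 55 degrees apart.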
Abstract:
Activity within caves provides an important element of the later prehistoric and historic settlement pattern of western Scotland. This contribution reports on a small-scale excavation within Croig Cave, on the coast of north-west Mull, that exposed a 1.95m sequence of midden deposits and cave floors dated between c 1700 BC and AD 1400. Midden analysis indicated the processing of a .... 950 BC, a penannular copper bracelet a discrete ritual episode within the cycle of otherwise potentially mundane activities. Lead isotope analysis indicates an Irish origin for the copper ore. A piece of iron slag within later midden deposits, dated to c 400 BC, along with high frequencies of wood charcoal, suggests that smithing or smelting may have occurred within the cave. High zinc levels in the historic levels of the midden c AD 1200 might indicate intensive processing of seaweed.
Abstract:
The large scale urban consumption of energy (LUCY) model simulates all components of anthropogenic heat flux (QF) from the global to individual city scale at 2.5 × 2.5 arc-minute resolution. This includes a database of different working patterns and public holidays, vehicle use and energy consumption in each country. The databases can be edited to include specific diurnal and seasonal vehicle and energy consumption patterns, local holidays and flows of people within a city. If better information about individual cities is available within this (open-source) database, then the accuracy of this model can only improve, to provide the community with data from global-scale climate modelling down to the individual city scale in the future. The results show that QF varied widely through the year, through the day, between countries and urban areas. An assessment of the estimated heat emissions revealed that they are reasonably close to those produced by a global model and a number of small-scale city models, so results from LUCY can be used with a degree of confidence. From LUCY, the global mean urban QF has a diurnal range of 0.7–3.6 W m−2, and is greater on weekdays than weekends. The heat release from buildings is the largest contributor (89–96%) to heat emissions globally. Differences between months are greatest in the middle of the day (up to 1 W m−2 at 1 pm). December to February, the coldest months in the Northern Hemisphere, have the highest heat emissions. July and August are at the higher end. The least QF is emitted in May. The highest individual grid cell heat fluxes in urban areas (W m−2) were located in New York (577), Paris (261.5), Tokyo (178), San Francisco (173.6), Vancouver (119) and London (106.7). Copyright © 2010 Royal Meteorological Society
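The component structure described above (heat from buildings, vehicles and human metabolism summed into QF) can be sketched as follows; the flat hourly profiles are invented for illustration and are not LUCY's database values:

```python
def total_qf(buildings, vehicles, metabolism):
    # Sum hourly anthropogenic heat flux components (W m^-2) into QF.
    return [b + v + m for b, v, m in zip(buildings, vehicles, metabolism)]

def building_share(buildings, vehicles, metabolism):
    # Fraction of the total emitted heat that comes from buildings.
    total = sum(buildings) + sum(vehicles) + sum(metabolism)
    return sum(buildings) / total
```

With illustrative flat profiles of 2.0, 0.1 and 0.05 W m−2 for the three components, the building share comes out near 93%, inside the 89–96% range quoted above.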
Abstract:
The impact of humidity observations on forecast skill is explored by producing a series of global forecasts using initial data derived from the ERA-40 reanalysis system, in which all humidity data have been removed during the data assimilation. The new forecasts have been compared with the original ERA-40 analyses and forecasts made from them. Both sets of forecasts show virtually identical prediction skill in the extratropics and the tropics. Differences between the forecasts are small and grow at a characteristic amplification rate. There are larger differences in temperature and geopotential in the tropics, but the differences are small-scale and unstructured and have no noticeable effect on the skill of the wind forecasts. The results highlight the current very limited impact of the humidity observations, used to produce the initial state, on the forecasts.
Abstract:
Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere with significant differences in both track density and mean intensity. The systems that correspond between the various reanalyses are considerably reduced and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15.
Also shown are results based on using the same techniques to diagnose the tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
Abstract:
Sudden stratospheric warmings (SSWs) are usually considered to be initiated by planetary wave activity. Here it is asked whether small-scale variability (e.g., related to gravity waves) can lead to SSWs given a certain amount of planetary wave activity that is by itself not sufficient to cause an SSW. A highly vertically truncated version of the Holton–Mass model of stratospheric wave–mean flow interaction, recently proposed by Ruzmaikin et al., is extended to include stochastic forcing. In the deterministic setting, this low-order model exhibits multiple stable equilibria corresponding to the undisturbed vortex and SSW state, respectively. Momentum forcing due to quasi-random gravity wave activity is introduced as an additive noise term in the zonal momentum equation. Two distinct approaches are pursued to study the stochastic system. First, the system, initialized at the undisturbed state, is numerically integrated many times to derive statistics of first passage times of the system undergoing a transition to the SSW state. Second, the Fokker–Planck equation corresponding to the stochastic system is solved numerically to derive the stationary probability density function of the system. Both approaches show that even small to moderate strengths of the stochastic gravity wave forcing can be sufficient to cause an SSW for cases for which the deterministic system would not have predicted an SSW.
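The first approach (repeated stochastic integration to collect first-passage times) can be sketched on a generic bistable toy model; the double-well potential below stands in for the two equilibria and is an assumed stand-in, not the Holton–Mass system itself:

```python
import math
import random

def first_passage_time(noise_std, dt=0.01, threshold=1.0, max_steps=100_000, seed=0):
    # Euler-Maruyama integration of dx = -V'(x) dt + sigma dW for the
    # double-well potential V(x) = (x^2 - 1)^2 / 4.  The well at x = -1
    # plays the role of the undisturbed vortex, the well at x = +1 the
    # warmed state; additive noise models quasi-random gravity wave forcing.
    rng = random.Random(seed)
    x = -1.0
    for step in range(max_steps):
        drift = -x * (x * x - 1.0)  # -V'(x)
        x += drift * dt + noise_std * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if x >= threshold:
            return step * dt        # time of the first transition
    return None                     # no transition within this window
```

Without noise the system never leaves the first well; with moderate noise a transition eventually occurs, mirroring the abstract's conclusion that modest stochastic forcing can trigger the regime change. Repeating over many seeds yields the first-passage-time statistics.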
Abstract:
Finite computing resources limit the spatial resolution of state-of-the-art global climate simulations to hundreds of kilometres. In neither the atmosphere nor the ocean are small-scale processes such as convection, clouds and ocean eddies properly represented. Climate simulations are known to depend, sometimes quite strongly, on the resulting bulk-formula representation of unresolved processes. Stochastic physics schemes within weather and climate models have the potential to represent the dynamical effects of unresolved scales in ways of which conventional bulk-formula representations are incapable. The application of stochastic physics to climate modelling is a rapidly advancing, important and innovative topic. The latest research findings are gathered together in the Theme Issue for which this paper serves as the introduction.
Abstract:
Aerosols from anthropogenic and natural sources have been recognized as having an important impact on the climate system. However, the small size of aerosol particles (ranging from 0.01 to more than 10 μm in diameter) and their influence on solar and terrestrial radiation make them difficult to represent within the coarse resolution of general circulation models (GCMs), such that small-scale processes, for example, sulfate formation and conversion, need parameterizing. It is the parameterization of emissions, conversion, and deposition and the radiative effects of aerosol particles that causes uncertainty in their representation within GCMs. The aim of this study was to perturb aspects of a sulfur cycle scheme used within a GCM to represent the climatological impacts of sulfate aerosol derived from natural and anthropogenic sulfur sources. It was found that perturbing volcanic SO2 emissions and the scavenging rate of SO2 by precipitation had the largest influence on the sulfate burden. When these parameters were perturbed the sulfate burden ranged from 0.73 to 1.17 TgS for 2050 sulfur emissions (A2 Special Report on Emissions Scenarios (SRES)), comparable with the range in sulfate burden across all the Intergovernmental Panel on Climate Change SRESs. Thus, the results here suggest that the range in sulfate burden due to model uncertainty is comparable with scenario uncertainty. Despite the large range in sulfate burden there was little influence on the climate sensitivity, which had a range of less than 0.5 K across the ensemble. We hypothesize that this small effect was partly associated with high sulfate loadings in the control phase of the experiment.
Abstract:
This article describes the development and evaluation of the U.K.’s new High-Resolution Global Environmental Model (HiGEM), which is based on the latest climate configuration of the Met Office Unified Model, known as the Hadley Centre Global Environmental Model, version 1 (HadGEM1). In HiGEM, the horizontal resolution has been increased to 0.83° latitude × 1.25° longitude for the atmosphere, and 1/3° × 1/3° globally for the ocean. Multidecadal integrations of HiGEM, and the lower-resolution HadGEM, are used to explore the impact of resolution on the fidelity of climate simulations. Generally, SST errors are reduced in HiGEM. Cold SST errors associated with the path of the North Atlantic drift improve, and warm SST errors are reduced in upwelling stratocumulus regions where the simulation of low-level cloud is better at higher resolution. The ocean model in HiGEM allows ocean eddies to be partially resolved, which dramatically improves the representation of sea surface height variability. In the Southern Ocean, most of the heat transport in HiGEM is achieved by resolved eddy motions, which replace the parameterized eddy heat transport in the lower-resolution model. HiGEM is also able to more realistically simulate small-scale features in the wind stress curl around islands and oceanic SST fronts, which may have implications for oceanic upwelling and ocean biology. Higher resolution in both the atmosphere and the ocean allows coupling to occur on small spatial scales. In particular, the small-scale interaction recently seen in satellite imagery between the atmosphere and tropical instability waves in the tropical Pacific Ocean is realistically captured in HiGEM. Tropical instability waves play a role in improving the simulation of the mean state of the tropical Pacific, which has important implications for climate variability.
In particular, all aspects of the simulation of ENSO (spatial patterns, the time scales at which ENSO occurs, and global teleconnections) are much improved in HiGEM.
Abstract:
Problem structuring methods or PSMs are widely applied across a range of variable but generally small-scale organizational contexts. However, it has been argued that they are seen and experienced less often in areas of wide-ranging and highly complex human activity, specifically those relating to sustainability, environment, democracy and conflict (or SEDC). In an attempt to plan, track and influence human activity in SEDC contexts, the authors in this paper make the theoretical case for a PSM, derived from various existing approaches. They show how it could make a contribution in a specific practical context: within sustainable coastal development projects around the Mediterranean which have utilized systemic and prospective sustainability analysis or, as it is now known, Imagine. The latter is itself a PSM but one which is 'bounded' within the limits of the project to help deliver the required 'deliverables' set out in the project blueprint. The authors argue that sustainable development projects would benefit from a deconstruction of process by those engaged in the project and suggest one approach that could be taken: a breakout from a project-bounded PSM to an analysis that embraces the project itself. The paper begins with an introduction to the sustainable development context and literature and then goes on to illustrate the issues by grounding the debate within a set of projects facilitated by Blue Plan for Mediterranean coastal zones. The paper goes on to show how the analytical framework could be applied and what insights might be generated.
Abstract:
This paper describes some of the results of a detailed farm-level survey of 32 small-scale cotton farmers in the Makhathini Flats region of South Africa. The aim was to assess and measure some of the impacts (especially in terms of savings in pesticide and labour as well as benefits to human health) attributable to the use of insect-tolerant Bt cotton. The study reveals a direct cost benefit for Bt growers of SAR416 ($51) per hectare per season due to a reduction in the number of insecticide applications. Cost savings emerged in the form of lower requirements for pesticide, but also important were reduced requirements for water and labour. The reduction in the number of sprays was particularly beneficial to women who do some spraying and children who collect water and assist in spraying. The increasing adoption rate of Bt cotton appears to have a health benefit measured in terms of reported rates of accidental insecticide poisoning. These appear to be declining as the uptake of Bt cotton increases. However, the understanding of refugia and their management by local farmers are deficient and need improving. Finally, Bt cotton growers emerge as more resilient in absorbing price fluctuations.
Abstract:
This paper summarizes some of the geoarchaeological evidence for early arable agriculture in Britain and Europe, and introduces new evidence for small-scale but very intensive cultivation in the Neolithic, Bronze Age and Iron Age in Scotland. The Scottish examples demonstrate that, from the Neolithic to the Iron Age, midden heaps were sometimes ploughed in situ; this means that, rather than spreading midden material onto the fields, the early farmers simply ran an ard over their compost heaps and sowed the resulting plots. The practice appears to have been common in Scotland, and may also have occurred in England. Neolithic cultivation of a Mesolithic midden is suggested, based on thin-section analysis of the middens at Northton, Harris. The fertility of the Mesolithic middens may partly explain why Neolithic farmers re-settled Mesolithic sites in the Northern and Western Isles.
Abstract:
The promotion of technologies seen to be aiding in the attainment of agricultural sustainability has been popular amongst Northern-based development donors for many years. One of these, botanical insecticides (e.g., those based on neem, pyrethrum and tobacco), has been a particular favorite as they are equated with being 'natural' and hence less damaging to human health and the environment. This paper describes the outcome of interactions between one non-government organisation (NGO), the Diocesan Development Services (DDS), based in Kogi State, Nigeria, and a major development donor based in Europe that led to the establishment of a programme designed to promote the virtues of a tobacco-based insecticide to small-scale farmers. The Tobacco Insecticide Programme (TIP) began in the late 1980s and ended in 2001, absorbing significant quantities of resource in the process. TIP began with exploratory investigations of efficacy on the DDS seed multiplication farm followed by stages of researcher-managed and farmer-managed on-farm trials. A survey in 2002 assessed adoption of the technology by farmers. While yield benefits from using the insecticide were nearly always positive and statistically significant relative to an untreated control, they were not as good as commercial insecticides. However, adoption of the tobacco insecticide by local farmers was poor. The paper discusses the reasons for poor adoption, including relative benefits in gross margin, and uses the TIP example to explore the differing power relationships that exist between donors, their field partners and farmers. (C) 2004 by The Haworth Press, Inc. All rights reserved.
Abstract:
These notes have been issued on a small scale in 1983 and 1987 and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons with some little assistance from a CRAY machine cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point.
The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
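The Lucas-Lehmer test behind the Mp-testing mentioned above is compact enough to sketch; this is the textbook form, not the optimised code used for record hunts:

```python
def lucas_lehmer(p):
    # Lucas-Lehmer test: for an odd prime p, the Mersenne number
    # M_p = 2^p - 1 is prime iff s_(p-2) == 0 (mod M_p), where the
    # sequence starts at s_0 = 4 and iterates s -> s^2 - 2.
    if p == 2:
        return True  # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, p = 11 fails the test (M11 = 2047 = 23 × 89), while known Mersenne prime exponents such as 13, 17, 19, 31 and 127 all pass. For exponents the size of 110503 each of the p − 2 squarings involves numbers of tens of thousands of bits, which is why the record computations rely on fast multiplication and purpose-tuned implementations.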