183 results for small scale wind turbines
Abstract:
The interaction between polynyas and the atmospheric boundary layer is examined in the Laptev Sea using the regional, non-hydrostatic Consortium for Small-scale Modelling (COSMO) atmosphere model. A thermodynamic sea-ice model is used to consider the response of sea-ice surface temperature to idealized atmospheric forcing. The idealized regimes represent atmospheric conditions that are typical for the Laptev Sea region. Cold wintertime conditions are investigated with sea-ice–ocean temperature differences of up to 40 K. The Laptev Sea flaw polynyas strongly modify the atmospheric boundary layer. Convectively mixed layers reach heights of up to 1200 m above the polynyas, with temperature anomalies of more than 5 K. Horizontal transport of heat extends to areas more than 500 km downstream of the polynyas. Strong-wind regimes lead to a shallower mixed layer with strong near-surface modifications, while weaker-wind regimes show a deeper, well-mixed convective boundary layer. Shallow mesoscale circulations occur in the vicinity of ice-free and thin-ice-covered polynyas. They are forced by large turbulent and radiative heat fluxes from the surface of up to 789 W m−2, strong low-level thermally induced convergence, and cold-air flow from the orographic structure of the Taimyr Peninsula in the western Laptev Sea region. Based on the surface energy balance, we derive potential sea-ice production rates between 8 and 25 cm d−1. These production rates are determined mainly by whether the polynyas are ice-free or covered by thin ice, and by the wind strength.
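The step from the surface energy balance to the quoted production rates is a short conversion: the heat a polynya loses to the atmosphere is balanced by the latent heat released as seawater freezes. A minimal sketch of that arithmetic in Python, assuming standard textbook values for sea-ice density and latent heat of fusion (neither value is taken from the study):

    # Potential sea-ice production from a surface heat loss F (W m^-2):
    # freezing releases latent heat, so the growth rate is F / (rho_ice * L_f).

    RHO_ICE = 910.0     # sea-ice density, kg m^-3 (assumed typical value)
    L_FUSION = 3.34e5   # latent heat of fusion, J kg^-1 (assumed typical value)

    def ice_production_cm_per_day(flux_w_m2):
        """Convert a surface heat loss (W m^-2) into ice growth (cm d^-1)."""
        growth_m_per_s = flux_w_m2 / (RHO_ICE * L_FUSION)
        return growth_m_per_s * 86400.0 * 100.0  # m s^-1 -> cm d^-1

    # The peak flux of 789 W m^-2 quoted above gives roughly 22 cm d^-1,
    # consistent with the study's 8-25 cm d^-1 range.
    print(round(ice_production_cm_per_day(789.0), 1))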
Temporal and spatial variability of surface fluxes over the ice edge zone in the northern Baltic Sea
Abstract:
Three land-fast ice stations (one of them the Finnish research icebreaker Aranda) and the German research aircraft Falcon were used to measure the turbulent and radiation fluxes over the ice edge zone in the northern Baltic Sea during the Baltic Air-Sea-Ice Study (BASIS) field experiment from 16 February to 6 March 1998. The temporal and spatial variability of the surface fluxes is discussed. Synoptic weather systems passed the experimental area in rapid sequence and dominated the conditions (wind speed, air–surface temperature difference, cloud field) controlling the variability of the turbulent and radiation fluxes. At the ice stations, the largest upward sensible heat fluxes of about 100 W m−2 were measured during the passage of a cold front, when the air cooled faster (−5 K per hour) than the surface. The largest downward flux of about −200 W m−2 occurred during warm-air advection, when the air temperature reached +10°C but the surface temperature remained at 0°C. Spatial variability of fluxes was observed from the small scale (the scale of ice floes and open-water spots) to the mesoscale (the width of the ice edge zone). The degree of spatial variability depends on the synoptic situation: during melting conditions downward heat fluxes were the same over ice and open water, whereas during strong cold-air advection upward heat fluxes differed by more than 100 W m−2. A remarkable amount of grey ice with intermediate surface temperature was observed. The ice in the Baltic Sea cannot be described by one ice type only.
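Sensible heat fluxes of the kind quoted above are commonly estimated with a bulk aerodynamic formula, H = ρ c_p C_H U (T_s − T_a), with upward fluxes positive. A minimal sketch, assuming illustrative values for the air density and transfer coefficient (these numbers are not BASIS results):

    # Bulk aerodynamic estimate of the sensible heat flux:
    #   H = rho * c_p * C_H * U * (T_surface - T_air), positive upward.

    RHO_AIR = 1.3    # air density near 0 degC, kg m^-3 (assumed)
    CP_AIR = 1005.0  # specific heat of air at constant pressure, J kg^-1 K^-1
    C_H = 1.5e-3     # bulk transfer coefficient over sea ice (assumed)

    def sensible_heat_flux(wind_speed, t_surf, t_air):
        """Bulk sensible heat flux in W m^-2 (positive upward)."""
        return RHO_AIR * CP_AIR * C_H * wind_speed * (t_surf - t_air)

    # Warm-air advection over melting ice: air at +10 C, surface at 0 C and a
    # 10 m s^-1 wind give a strongly downward flux of the order observed.
    print(round(sensible_heat_flux(10.0, 0.0, 10.0)))  # about -196 W m^-2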
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies, the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation, and hence the diabatic heating field, over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind-shear-based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic to its continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
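The PDF diagnostic described above comes down to examining the distribution of the leading principal component within different composites of years. A minimal sketch of that step, assuming the PC time series is already available as a one-dimensional array (the synthetic series below is only a negatively skewed stand-in):

    import numpy as np
    from scipy import stats

    def pc_pdf_and_skewness(pc, bins=25):
        """Histogram-based PDF and sample skewness of a principal-component
        series; negative skewness means more weight on the break-like
        (negative) phase of the intraseasonal mode."""
        density, edges = np.histogram(pc, bins=bins, density=True)
        return density, edges, stats.skew(pc)

    # Synthetic, negatively skewed series standing in for the dominant PC
    # in weak-monsoon years:
    rng = np.random.default_rng(0)
    pc = -(rng.gamma(shape=4.0, scale=1.0, size=500) - 4.0)
    _, _, skew = pc_pdf_and_skewness(pc)
    print(f"skewness = {skew:.2f}")  # < 0: PDF skewed toward break conditions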
Abstract:
Data from four recent reanalysis projects [ECMWF, NCEP-NCAR, NCEP-Department of Energy (DOE), NASA] have been diagnosed at the scale of synoptic weather systems using an objective feature-tracking method. The tracking statistics indicate that, overall, the reanalyses correspond very well in the Northern Hemisphere (NH) lower troposphere, although differences in the spatial distribution of mean intensities show that the ECMWF reanalysis is systematically stronger in the main storm track regions but weaker around major orographic features. A direct comparison of the track ensembles indicates a number of systems with a broad range of intensities that compare well among the reanalyses. In addition, a number of small-scale weak systems are found that have no correspondence among the reanalyses or that only correspond upon relaxing the matching criteria, indicating possible differences in location and/or temporal coherence. These are distributed throughout the storm tracks, particularly in the regions known for small-scale activity, such as secondary development regions and the Mediterranean. For the Southern Hemisphere (SH), agreement is found to be generally less consistent in the lower troposphere, with significant differences in both track density and mean intensity. The systems that correspond between the various reanalyses are considerably reduced, and those that do not match span a broad range of storm intensities. Relaxing the matching criteria indicates that there is a larger degree of uncertainty in both the location of systems and their intensities compared with the NH. At upper-tropospheric levels, significant differences in the level of activity occur between the ECMWF reanalysis and the other reanalyses in both the NH and SH winters. This occurs due to a lack of coherence in the apparent propagation of the systems in ERA15 and appears most acute above 500 hPa. This is probably due to the use of optimal interpolation data assimilation in ERA15. Also shown are results based on using the same techniques to diagnose the tropical easterly wave activity. Results indicate that the wave activity is sensitive not only to the resolution and assimilation methods used but also to the model formulation.
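A direct comparison of track ensembles requires a matching criterion: two tracks from different reanalyses are identified as the same system when they overlap in time and remain within a separation threshold. A minimal sketch of such a criterion (the thresholds and data layout are assumptions, not the exact method used by the objective tracking scheme):

    import math

    def mean_separation_km(track_a, track_b):
        """Mean great-circle separation of two tracks over their common time
        steps; each track maps a time step to a (lat, lon) pair in degrees."""
        common = sorted(set(track_a) & set(track_b))
        if not common:
            return None
        radius = 6371.0  # Earth radius, km
        total = 0.0
        for t in common:
            la1, lo1 = map(math.radians, track_a[t])
            la2, lo2 = map(math.radians, track_b[t])
            c = (math.sin(la1) * math.sin(la2)
                 + math.cos(la1) * math.cos(la2) * math.cos(lo1 - lo2))
            total += radius * math.acos(max(-1.0, min(1.0, c)))
        return total / len(common)

    def tracks_match(track_a, track_b, max_sep_km=500.0, min_overlap=4):
        """Declare two tracks the same system if they overlap long enough and
        stay close; raising max_sep_km mimics relaxing the matching criteria."""
        if len(set(track_a) & set(track_b)) < min_overlap:
            return False
        sep = mean_separation_km(track_a, track_b)
        return sep is not None and sep <= max_sep_km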
Abstract:
Different systems, different purposes – but how do they compare as learning environments? We undertook a survey of students at the University, asking whether they learned from their use of the systems, whether they made contact with other students through them, and how often they used them. Although it was a small-scale survey, the results are quite enlightening and quite surprising. Blackboard is populated with learning material, has all the students on a module signed up to it, offers a safe environment (in terms of Acceptable Use and some degree of staff monitoring) and provides privacy within the learning group (plus lecturer and relevant support staff). Facebook, on the other hand, has no learning material, has only some of the students using the system, and on the face of it offers the opportunity for slips in privacy and potential bullying, because the Acceptable Use policy is more lax than an institutional one and breaches must be dealt with on an exception basis, when reported. So why do more students find people on their courses through Facebook than Blackboard? And why are up to 50% of students reporting that they have learned from using Facebook? Interviews indicate that students in subjects which use seminars are using Facebook to facilitate working groups – they can set up private groups which give them privacy to discuss ideas in an environment which is perceived as safer than the one Blackboard can provide: no staff interference, unless they choose to invite staff in, and the opportunity to select who in the class can engage. The other striking finding is the difference in use between the genders. Males are using Blackboard more frequently than females, whilst the reverse is true for Facebook. Interviews suggest that this may have something to do with needing to access lecture notes… Overall, though, it appears that there is little relationship between the time spent engaging with Blackboard and reports that students have learned from it. Because Blackboard is our central repository for notes, any contact is likely to result in some learning. Facebook, however, shows a clear relationship between frequency of use and perception of learning – and our students post frequently to Facebook. Whilst much of this is probably trivia and social chit-chat, the educational elements of it are, de facto, constructivist in nature. Further questions need to be answered: is the reason the students learn from Facebook because they are creating content which others will see and comment on? Is it because they can engage in a dialogue, without the risk of interruption by others?
Abstract:
Sudden stratospheric warmings (SSWs) are usually considered to be initiated by planetary wave activity. Here it is asked whether small-scale variability (e.g., related to gravity waves) can lead to SSWs given a certain amount of planetary wave activity that is by itself not sufficient to cause a SSW. A highly vertically truncated version of the Holton–Mass model of stratospheric wave–mean flow interaction, recently proposed by Ruzmaikin et al., is extended to include stochastic forcing. In the deterministic setting, this low-order model exhibits multiple stable equilibria corresponding to the undisturbed vortex and SSW state, respectively. Momentum forcing due to quasi-random gravity wave activity is introduced as an additive noise term in the zonal momentum equation. Two distinct approaches are pursued to study the stochastic system. First, the system, initialized at the undisturbed state, is numerically integrated many times to derive statistics of first passage times of the system undergoing a transition to the SSW state. Second, the Fokker–Planck equation corresponding to the stochastic system is solved numerically to derive the stationary probability density function of the system. Both approaches show that even small to moderate strengths of the stochastic gravity wave forcing can be sufficient to cause a SSW for cases for which the deterministic system would not have predicted a SSW.
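The first approach, repeated integrations to collect first-passage times, can be illustrated with any bistable system driven by additive noise. Below is a minimal Euler–Maruyama sketch using a generic double-well potential as a stand-in for the truncated Holton–Mass model; the potential, noise amplitude and threshold are illustrative assumptions, not the paper's equations:

    import numpy as np

    def first_passage_times(n_runs=200, dt=0.01, t_max=500.0, sigma=0.4,
                            x0=-1.0, threshold=0.8, seed=0):
        """Monte Carlo first-passage times for dx = -V'(x) dt + sigma dW with
        V(x) = (x**2 - 1)**2 / 4: stable states at x = -1 (undisturbed vortex)
        and x = +1 (SSW-like state). The additive noise stands in for
        quasi-random gravity-wave forcing in the zonal momentum equation."""
        rng = np.random.default_rng(seed)
        n_steps = int(t_max / dt)
        sqrt_dt = np.sqrt(dt)
        times = []
        for _ in range(n_runs):
            x = x0
            for k in range(n_steps):
                x += x * (1.0 - x * x) * dt + sigma * sqrt_dt * rng.standard_normal()
                if x >= threshold:  # transition to the SSW state
                    times.append((k + 1) * dt)
                    break
        return np.array(times)

    fpt = first_passage_times()
    print(f"{fpt.size} transitions; mean first-passage time {fpt.mean():.0f}")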
Abstract:
Finite computing resources limit the spatial resolution of state-of-the-art global climate simulations to hundreds of kilometres. In neither the atmosphere nor the ocean are small-scale processes such as convection, clouds and ocean eddies properly represented. Climate simulations are known to depend, sometimes quite strongly, on the resulting bulk-formula representation of unresolved processes. Stochastic physics schemes within weather and climate models have the potential to represent the dynamical effects of unresolved scales in ways that conventional bulk-formula representations cannot. The application of stochastic physics to climate modelling is a rapidly advancing, important and innovative topic. The latest research findings are gathered together in the Theme Issue for which this paper serves as the introduction.
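One widely used scheme of this kind multiplies parameterized tendencies by a smoothly varying random factor (stochastically perturbed parameterization tendencies, SPPT). A minimal single-column sketch with an AR(1) process in time; the time step, decorrelation time and amplitude are illustrative assumptions rather than any operational configuration:

    import numpy as np

    def sppt_factors(n_steps, dt=900.0, tau=6 * 3600.0, sigma=0.5, seed=0):
        """Multiplicative perturbation factors (1 + r) for parameterized
        tendencies: r is an AR(1) process with zero mean, standard deviation
        sigma and decorrelation time tau."""
        rng = np.random.default_rng(seed)
        phi = np.exp(-dt / tau)                      # AR(1) autocorrelation
        innov_std = sigma * np.sqrt(1.0 - phi ** 2)  # keeps variance sigma^2
        r = np.empty(n_steps)
        r[0] = sigma * rng.standard_normal()
        for k in range(1, n_steps):
            r[k] = phi * r[k - 1] + innov_std * rng.standard_normal()
        return 1.0 + np.clip(r, -0.9, 0.9)  # keep factors bounded and positive

    # Usage: scale a deterministic parameterized tendency at every time step.
    tendency = -2e-5                          # e.g. parameterized cooling, K s^-1
    perturbed = tendency * sppt_factors(96)   # one day of 15-minute steps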
Abstract:
Aerosols from anthropogenic and natural sources have been recognized as having an important impact on the climate system. However, the small size of aerosol particles (ranging from 0.01 to more than 10 μm in diameter) and their influence on solar and terrestrial radiation make them difficult to represent within the coarse resolution of general circulation models (GCMs), such that small-scale processes, for example sulfate formation and conversion, need parameterizing. It is the parameterization of emissions, conversion, and deposition and the radiative effects of aerosol particles that causes uncertainty in their representation within GCMs. The aim of this study was to perturb aspects of a sulfur cycle scheme used within a GCM to represent the climatological impacts of sulfate aerosol derived from natural and anthropogenic sulfur sources. It was found that perturbing volcanic SO2 emissions and the scavenging rate of SO2 by precipitation had the largest influence on the sulfate burden. When these parameters were perturbed, the sulfate burden ranged from 0.73 to 1.17 TgS for 2050 sulfur emissions under the A2 Special Report on Emissions Scenarios (SRES) scenario, comparable with the range in sulfate burden across all the Intergovernmental Panel on Climate Change SRES scenarios. Thus, the results here suggest that the range in sulfate burden due to model uncertainty is comparable with scenario uncertainty. Despite the large range in sulfate burden, there was little influence on the climate sensitivity, which had a range of less than 0.5 K across the ensemble. We hypothesize that this small effect was partly associated with high sulfate loadings in the control phase of the experiment.
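Mechanically, a perturbed-physics experiment of this kind amounts to drawing each uncertain parameter from a plausible range and running one model integration per draw. A minimal sketch with hypothetical parameter names and ranges (the study's actual parameters and bounds are not reproduced here):

    import numpy as np

    def perturbed_parameter_ensemble(n_members, ranges, seed=0):
        """One parameter set per ensemble member, each uncertain parameter
        drawn uniformly from its plausible range."""
        rng = np.random.default_rng(seed)
        return [{name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}
                for _ in range(n_members)]

    # Hypothetical scalings for the two most influential inputs named above:
    ranges = {"volcanic_so2_scaling": (0.5, 2.0),
              "so2_scavenging_scaling": (0.5, 2.0)}
    for member in perturbed_parameter_ensemble(4, ranges):
        print(member)  # each dict would configure one GCM integration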
Abstract:
Problem structuring methods (PSMs) are widely applied across a range of variable but generally small-scale organizational contexts. However, it has been argued that they are seen and experienced less often in areas of wide-ranging and highly complex human activity, specifically those relating to sustainability, environment, democracy and conflict (SEDC). In an attempt to plan, track and influence human activity in SEDC contexts, the authors make the theoretical case for a PSM derived from various existing approaches. They show how it could make a contribution in a specific practical context: sustainable coastal development projects around the Mediterranean which have utilized systemic and prospective sustainability analysis or, as it is now known, Imagine. The latter is itself a PSM, but one which is 'bounded' within the limits of the project to help deliver the required 'deliverables' set out in the project blueprint. The authors argue that sustainable development projects would benefit from a deconstruction of process by those engaged in the project, and suggest one approach that could be taken: a breakout from a project-bounded PSM to an analysis that embraces the project itself. The paper begins with an introduction to the sustainable development context and literature, and then grounds the debate within a set of projects facilitated by Blue Plan for Mediterranean coastal zones. The paper goes on to show how the analytical framework could be applied and what insights might be generated.
Abstract:
This paper describes some of the results of a detailed farm-level survey of 32 small-scale cotton farmers in the Makhathini Flats region of South Africa. The aim was to assess and measure some of the impacts (especially in terms of savings in pesticide and labour, as well as benefits to human health) attributable to the use of insect-tolerant Bt cotton. The study reveals a direct cost benefit for Bt growers of SAR416 ($51) per hectare per season due to a reduction in the number of insecticide applications. Cost savings emerged in the form of lower requirements for pesticide, but reduced requirements for water and labour were also important. The reduction in the number of sprays was particularly beneficial to women, who do some spraying, and to children, who collect water and assist in spraying. The increasing adoption rate of Bt cotton appears to have a health benefit measured in terms of reported rates of accidental insecticide poisoning, which appear to be declining as the uptake of Bt cotton increases. However, local farmers' understanding and management of refugia are deficient and need improving. Finally, Bt cotton growers emerge as more resilient in absorbing price fluctuations.
Abstract:
This paper summarizes some of the geoarchaeological evidence for early arable agriculture in Britain and Europe, and introduces new evidence for small-scale but very intensive cultivation in the Neolithic, Bronze Age and Iron Age in Scotland. The Scottish examples demonstrate that, from the Neolithic to the Iron Age, midden heaps were sometimes ploughed in situ; this means that, rather than spreading midden material onto the fields, the early farmers simply ran an ard over their compost heaps and sowed the resulting plots. The practice appears to have been common in Scotland, and may also have occurred in England. Neolithic cultivation of a Mesolithic midden is suggested, based on thin-section analysis of the middens at Northton, Harris. The fertility of the Mesolithic middens may partly explain why Neolithic farmers re-settled Mesolithic sites in the Northern and Western Isles.
Abstract:
The promotion of technologies seen to be aiding the attainment of agricultural sustainability has been popular amongst Northern-based development donors for many years. One of these, botanical insecticides (e.g., those based on neem, pyrethrum and tobacco), has been a particular favorite, as they are equated with being 'natural' and hence less damaging to human health and the environment. This paper describes the outcome of interactions between one non-government organisation (NGO), the Diocesan Development Services (DDS), based in Kogi State, Nigeria, and a major development donor based in Europe that led to the establishment of a programme designed to promote the virtues of a tobacco-based insecticide to small-scale farmers. The Tobacco Insecticide Programme (TIP) began in the late 1980s and ended in 2001, absorbing significant quantities of resources in the process. TIP began with exploratory investigations of efficacy on the DDS seed multiplication farm, followed by stages of researcher-managed and farmer-managed on-farm trials. A survey in 2002 assessed adoption of the technology by farmers. While yield benefits from using the insecticide were nearly always positive and statistically significant relative to an untreated control, they were not as good as those from commercial insecticides. However, adoption of the tobacco insecticide by local farmers was poor. The paper discusses the reasons for poor adoption, including relative benefits in gross margin, and uses the TIP example to explore the differing power relationships that exist between donors, their field partners and farmers.
Abstract:
These notes have been issued on a small scale in 1983 and 1987, and on request at other times. This issue follows two items of news. First, Walter Colquitt and Luther Welsh found the 'missed' Mersenne prime M110503 and advanced the frontier of complete Mp-testing to 139,267. In so doing, they terminated Slowinski's significant string of four consecutive Mersenne primes. Secondly, a team of five established a non-Mersenne number as the largest known prime. This result terminated the 1952-89 reign of Mersenne primes. All the original Mersenne numbers with p < 258 were factorised some time ago. The Sandia Laboratories team of Davis, Holdridge & Simmons, with some little assistance from a CRAY machine, cracked M211 in 1983 and M251 in 1984. They contributed their results to the 'Cunningham Project', care of Sam Wagstaff. That project is now moving apace thanks to developments in technology, factorisation and primality testing. New levels of computer power and new computer architectures motivated by the open-ended promise of parallelism are now available. Once again, the suppliers may be offering free buildings with the computer. However, the Sandia '84 CRAY-1 implementation of the quadratic-sieve method is now outpowered by the number-field sieve technique. This is deployed on either purpose-built hardware or large syndicates, even distributed world-wide, of collaborating standard processors. New factorisation techniques of both special and general applicability have been defined and deployed. The elliptic-curve method finds large factors with helpful properties, while the number-field sieve approach is breaking down composites with over one hundred digits. The material is updated on an occasional basis to follow the latest developments in primality-testing large Mp and factorising smaller Mp; all dates derive from the published literature or referenced private communications. Minor corrections, additions and changes merely advance the issue number after the decimal point. The reader is invited to report any errors and omissions that have escaped the proof-reading, to answer the unresolved questions noted and to suggest additional material associated with this subject.
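The complete Mp-testing referred to above rests on the Lucas–Lehmer test, which decides the primality of Mp = 2^p − 1 for an odd prime p using p − 2 modular squarings. A minimal sketch:

    def lucas_lehmer(p):
        """Lucas-Lehmer test: for an odd prime p, M_p = 2**p - 1 is prime
        iff s == 0 after p - 2 iterations of s -> s**2 - 2 (mod M_p), s_0 = 4."""
        if p == 2:
            return True  # M_2 = 3 is prime
        m = (1 << p) - 1
        s = 4
        for _ in range(p - 2):
            s = (s * s - 2) % m
        return s == 0

    # The first Mersenne prime exponents fall out directly:
    print([p for p in (3, 5, 7, 11, 13, 17, 19, 31, 61) if lucas_lehmer(p)])
    # -> [3, 5, 7, 13, 17, 19, 31, 61]  (M_11 = 2047 = 23 * 89 is composite)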
Abstract:
In applications such as radar and wind turbines, it is often necessary to transfer power across a constantly rotating interface. As the rotation is continuous, it would be impossible to use wires to transfer the power as they would soon become twisted and stretched and the system would fail. The widespread solution to this problem is to use a slip-ring.
Abstract:
Chemical and meteorological parameters measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the African Monsoon Multidisciplinary Analysis (AMMA) campaign are presented to show the impact of NOx emissions from recently wetted soils in West Africa. NO emissions from soils have previously been observed in many geographical areas with different types of soil/vegetation cover during small-scale studies, and have been inferred at large scales from satellite measurements of NOx. This study is the first dedicated to showing the emissions of NOx at an intermediate scale between local surface sites and continental satellite measurements. The measurements reveal pronounced mesoscale variations in NOx concentrations closely linked to spatial patterns of antecedent rainfall. Fluxes required to maintain the NOx concentrations observed by the BAe-146 in a number of case studies, and for a range of assumed OH concentrations (1×10⁶ to 1×10⁷ molecules cm−3), are calculated to be in the range 8.4 to 36.1 ng N m−2 s−1. These values are comparable to the range of fluxes from 0.5 to 28 ng N m−2 s−1 reported from small-scale field studies in a variety of non-nutrient-rich tropical and subtropical locations in the review of Davidson and Kingerlee (1997). The fluxes calculated in the present study have been scaled up to cover the area of the Sahel bounded by 10° to 20° N and 10° E to 20° W, giving an estimated emission of 0.03 to 0.30 Tg N from this area for July and August 2006. The observed chemical data also suggest that the NOx emitted from soils takes part in ozone formation, as ozone concentrations exhibit similar fine-scale structure to the NOx, with enhancements over the wet soils. Such variability cannot be explained on the basis of transport from other areas. Delon et al. (2008) is a companion paper to this one which models the impact of soil NOx emissions on the NOx and ozone concentrations over West Africa during AMMA. It employs an artificial neural network to define the emissions of NOx from soils, integrated into a coupled chemistry-dynamics model, and the results are compared to the observed data presented in this paper. Here we compare fluxes deduced from the observed data with the model-derived values from Delon et al. (2008).
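The regional upscaling quoted above is, at heart, a unit conversion from a flux density to a two-month total over a latitude-longitude box. A minimal sketch of that arithmetic; note that applying the fluxes to the whole box for the whole period ignores the restriction of emissions to recently wetted soils, so these naive totals land above the published 0.03 to 0.30 Tg N:

    import math

    def box_area_m2(lat_s, lat_n, lon_w, lon_e, radius=6.371e6):
        """Area of a latitude-longitude box on a sphere, in m^2."""
        dlon = math.radians(lon_e - lon_w)
        return radius ** 2 * dlon * (math.sin(math.radians(lat_n))
                                     - math.sin(math.radians(lat_s)))

    area = box_area_m2(10.0, 20.0, -20.0, 10.0)  # Sahel box: 10-20 N, 20 W-10 E
    seconds = 62 * 86400.0                       # July plus August 2006

    for flux in (8.4, 36.1):                     # ng N m^-2 s^-1, from the text
        total_tg = flux * 1e-9 * area * seconds * 1e-12  # g -> Tg
        print(f"{flux} ng N m^-2 s^-1 over the whole box: {total_tg:.2f} Tg N")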