126 results for Coherent Vortices
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process.
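As an illustration of the probabilistic (aggregate) modelling the abstract recommends, the following sketch draws hypothetical log-normal intakes for three exposure pathways and sums them per simulated individual; the pathway names and distribution parameters are placeholders, not values from any of the models discussed.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # simulated individuals

# Hypothetical log-normal intake distributions for three pathways (mg/kg bw/day);
# the parameters are illustrative placeholders, not measured data.
dietary     = rng.lognormal(mean=np.log(0.010), sigma=0.5, size=n)
consumer    = rng.lognormal(mean=np.log(0.004), sigma=0.8, size=n)
environment = rng.lognormal(mean=np.log(0.001), sigma=0.6, size=n)

# Aggregate exposure sums the pathways for each simulated individual.
aggregate = dietary + consumer + environment

# Express population variability as percentiles rather than a single point value.
p50, p95, p99 = np.percentile(aggregate, [50, 95, 99])
print(f"median={p50:.4f}, P95={p95:.4f}, P99={p99:.4f} mg/kg bw/day")
```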
Abstract:
This study explores the way in which our picture of the Levantine Epipalaeolithic has been created, investigating the constructs that take us from found objects to coherent narrative about the world. Drawing on the treatment of chipped stone, the fundamental raw material of prehistoric narratives, it examines the use of figurative devices - of metaphor, metonymy, and synecdoche - to make the connection between the world and the words we need to describe it. The work of three researchers is explored in a case study of the Middle Epipalaeolithic with the aim of showing how different research goals and methodologies have created characteristics for the period that are so entrenched in discourse as to have become virtually invisible. Yet the definition of distinct cultures with long-lasting traditions, the identification of two separate ethnic trajectories linked to separate environmental zones, and the analysis of climate as the key driver of change all rest on analytical manoeuvres to transform objects into data.
Abstract:
A new calibration curve for the conversion of radiocarbon ages to calibrated (cal) ages has been constructed and internationally ratified to replace IntCal98, which extended from 0-24 cal kyr BP (Before Present, 0 cal BP = AD 1950). The new calibration data set for terrestrial samples extends from 0-26 cal kyr BP, but with much higher resolution beyond 11.4 cal kyr BP than IntCal98. Dendrochronologically-dated tree-ring samples cover the period from 0-12.4 cal kyr BP. Beyond the end of the tree rings, data from marine records (corals and foraminifera) are converted to the atmospheric equivalent with a site-specific marine reservoir correction to provide terrestrial calibration from 12.4-26.0 cal kyr BP. A substantial enhancement relative to IntCal98 is the introduction of a coherent statistical approach based on a random walk model, which takes into account the uncertainty in both the calendar age and the C-14 age to calculate the underlying calibration curve (Buck and Blackwell, this issue). The tree-ring data sets, sources of uncertainty, and regional offsets are discussed here. The marine data sets and calibration curve for marine samples from the surface mixed layer (Marine04) are discussed in brief, but details are presented in Hughen et al. (this issue a). We do not make a recommendation for calibration beyond 26 cal kyr BP at this time; however, potential calibration data sets are compared in another paper (van der Plicht et al., this issue).
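As a minimal sketch of the calibration idea described above (propagating uncertainty in both the measured C-14 age and the calibration curve into a calendar-age probability), the following uses a synthetic stand-in for the curve; the grid, slope and uncertainties are illustrative assumptions, not IntCal04 values.

```python
import numpy as np

# Synthetic stand-in for a calibration curve: calendar age (cal BP) versus the
# curve's radiocarbon age and one-sigma uncertainty. Real work would use the
# IntCal04 data set; these values are illustrative only.
cal_age   = np.arange(11000, 12001)                # cal BP grid
curve_c14 = 9800 + 0.85 * (cal_age - 11000)        # placeholder radiocarbon ages
curve_sig = np.full_like(cal_age, 25.0, dtype=float)

# A hypothetical measurement: 10,200 +/- 40 C-14 yr BP.
meas, meas_sig = 10200.0, 40.0

# Combine measurement and curve uncertainties in quadrature and evaluate a
# Gaussian likelihood at every calendar year on the grid.
sigma = np.sqrt(meas_sig**2 + curve_sig**2)
like  = np.exp(-0.5 * ((meas - curve_c14) / sigma) ** 2) / sigma
post  = like / like.sum()                          # normalise to a probability

# Summarise with the mode and an approximate 95% highest-density interval.
mode = cal_age[np.argmax(post)]
order = np.argsort(post)[::-1]
hdi = cal_age[order[np.cumsum(post[order]) <= 0.95]]
print(f"mode ~ {mode} cal BP, 95% interval ~ {hdi.min()}-{hdi.max()} cal BP")
```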
Abstract:
We report evidence for a major ice stream that operated over the northwestern Canadian Shield in the Keewatin Sector of the Laurentide Ice Sheet during the last deglaciation 9000-8200 (uncalibrated) yr BP. It is reconstructed at 450 km in length and 140 km in width, with an estimated catchment area of 190,000 km². Mapping from satellite imagery reveals a suite of bedforms ('flow-set') characterized by a highly convergent onset zone, abrupt lateral margins, and where flow was presumed to have been fastest, a remarkably coherent pattern of mega-scale glacial lineations with lengths approaching 13 km and elongation ratios in excess of 40:1. Spatial variations in bedform elongation within the flow-set match the expected velocity field of a terrestrial ice stream. The flow pattern does not appear to be steered by topography, and its location on the hard bedrock of the Canadian Shield is surprising. A soft sedimentary basin may have influenced ice-stream activity by lubricating the bed over the downstream crystalline bedrock, but it is unlikely that the ice stream operated over a pervasively deforming till layer. The location of the ice stream challenges the view that ice streams only arise in deep bedrock troughs or over thick deposits of 'soft' fine-grained sediments. We speculate that fast ice flow may have been triggered when a steep ice sheet surface gradient with high driving stresses contacted a proglacial lake. An increase in velocity through calving could have propagated fast ice flow upstream (in the vicinity of the Keewatin Ice Divide) through a series of thermomechanical feedback mechanisms. The ice stream exerted a considerable impact on the Laurentide Ice Sheet, forcing the demise of one of the last major ice centres.
Abstract:
Elucidating the controls on the location and vigor of ice streams is crucial to understanding the processes that lead to fast disintegration of ice flows and ice sheets. In the former North American Laurentide ice sheet, ice stream occurrence appears to have been governed by topographic troughs or areas of soft-sediment geology. This paper reports robust evidence of a major paleo-ice stream over the northwestern Canadian Shield, an area previously assumed to be incompatible with fast ice flow because of the low relief and relatively hard bedrock. A coherent pattern of subglacial bedforms (drumlins and megascale glacial lineations) demarcates the ice stream flow set, which exhibits a convergent onset zone, a narrow main trunk with abrupt lateral margins, and a lobate terminus. Variations in bedform elongation ratio within the flow set match theoretical expectations of ice velocity. In the center of the ice stream, extremely parallel megascale glacial lineations tens of kilometers long with elongation ratios in excess of 40:1 attest to a single episode of rapid ice flow. We conclude that while bed properties are likely to be influential in determining the occurrence and vigor of ice streams, contrary to established views, widespread soft-bed geology is not an essential requirement for those ice streams without topographic control. We speculate that the ice stream acted as a release valve on ice-sheet mass balance and was initiated by the presence of a proglacial lake that destabilized the ice-sheet margin and propagated fast ice flow through a series of thermomechanical feedbacks involving ice flow and temperature.
Abstract:
This paper describes a novel numerical algorithm for simulating the evolution of fine-scale conservative fields in layer-wise two-dimensional flows, the most important examples of which are the Earth's atmosphere and oceans. The algorithm combines two radically different algorithms, one Lagrangian and the other Eulerian, to achieve an unexpected gain in computational efficiency. The algorithm is demonstrated for multi-layer quasi-geostrophic flow, and results are presented for a simulation of a tilted stratospheric polar vortex and of nearly-inviscid quasi-geostrophic turbulence. The turbulence results contradict previous arguments and simulation results that have suggested an ultimate two-dimensional, vertically-coherent character of the flow. Ongoing extensions of the algorithm to the generally ageostrophic flows characteristic of planetary fluid dynamics are outlined.
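The hybrid algorithm itself is not reproduced here, but the following minimal sketch illustrates the Lagrangian ingredient it relies on: material particles carrying a conserved field are advected through an Eulerian velocity field (here a simple prescribed rotation) with a second-order Runge-Kutta step. All names and parameters are illustrative assumptions.

```python
import numpy as np

n_steps, dt = 200, 0.01
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
x, y = 0.5 * np.cos(theta), 0.5 * np.sin(theta)   # a material contour of particles
q = np.ones_like(x)                               # conserved field carried by the particles

def velocity(x, y):
    """Eulerian velocity field sampled at the particle positions (solid-body rotation)."""
    return -y, x

for _ in range(n_steps):
    # Midpoint (second-order Runge-Kutta) advection of the Lagrangian particles.
    u1, v1 = velocity(x, y)
    xm, ym = x + 0.5 * dt * u1, y + 0.5 * dt * v1
    u2, v2 = velocity(xm, ym)
    x, y = x + dt * u2, y + dt * v2

# Advection moves the particles but leaves the conserved field values untouched.
print("field range:", q.min(), q.max(), "| radius spread:", np.ptp(np.hypot(x, y)))
```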
Abstract:
Using a novel numerical method at unprecedented resolution, we demonstrate that structures of small to intermediate scale in rotating, stratified flows are intrinsically three-dimensional. Such flows are characterized by vortices (spinning volumes of fluid), regions of large vorticity gradients, and filamentary structures at all scales. It is found that such structures have predominantly three-dimensional dynamics below a horizontal scale L < LR, where LR is the so-called Rossby radius of deformation, equal to the characteristic vertical scale of the fluid H divided by the ratio of the rotational and buoyancy frequencies f/N. The breakdown of two-dimensional dynamics at these scales is attributed to the so-called "tall-column instability" [D. G. Dritschel and M. de la Torre Juárez, J. Fluid Mech. 328, 129 (1996)], which is active on columnar vortices that are tall after scaling by f/N, or, equivalently, that are narrow compared with LR. Moreover, this instability eventually leads to a simple relationship between typical vertical and horizontal scales: for each vertical wave number (apart from the vertically averaged, barotropic component of the flow) the average horizontal wave number is equal to f/N times the vertical wave number. The practical implication is that three-dimensional modeling is essential to capture the behavior of rotating, stratified fluids. Two-dimensional models are not valid for scales below LR.
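A short worked example of the scales quoted above: the Rossby radius LR = H / (f/N) and the stated relation between average horizontal and vertical wave numbers. The values of f, N and H below are illustrative mid-latitude assumptions.

```python
import numpy as np

# Placeholder mid-latitude values; f (rotational frequency), N (buoyancy
# frequency) and H (characteristic vertical scale) are assumptions.
f = 1.0e-4          # s^-1
N = 1.0e-2          # s^-1
H = 10_000.0        # m

# Rossby radius of deformation: vertical scale divided by f/N, i.e. LR = N H / f.
L_R = H / (f / N)
print(f"L_R = {L_R / 1000:.0f} km")   # ~1000 km for these values

# Quoted scaling: for each vertical wave number m (barotropic mode excluded),
# the average horizontal wave number k_h is (f/N) * m.
m = np.array([1, 2, 4]) * 2 * np.pi / H        # sample vertical wave numbers, rad/m
k_h = (f / N) * m
print("average horizontal wavelengths (km):", 2 * np.pi / k_h / 1000)
```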
Abstract:
Mostly because of a lack of observations, fundamental aspects of the St. Lawrence Estuary's wintertime response to forcing remain poorly understood. The results of a field campaign over the winter of 2002/03 in the estuary are presented. The response of the system to tidal forcing is assessed through the use of harmonic analyses of temperature, salinity, sea level, and current observations. The analyses confirm previous evidence for the presence of semidiurnal internal tides, albeit at greater depths than previously observed for ice-free months. The low-frequency tidal streams were found to be mostly baroclinic in character and to produce an important neap tide intensification of the estuarine circulation. Despite stronger atmospheric momentum forcing in winter, the response is found to be less coherent with the winds than seen in previous studies of ice-free months. The tidal residuals show the cold intermediate layer in the estuary is renewed rapidly (~14 days) in late March by the advection of a wedge of near-freezing waters from the Gulf of St. Lawrence. In situ processes appeared to play a lesser role in the renewal of this layer. In particular, significant wintertime deepening of the estuarine surface mixed layer was prevented by surface stability, which remained high throughout the winter. The observations also suggest that the bottom circulation was intensified during winter, with the intrusion in the deep layer of relatively warm Atlantic waters, such that the 3°C isotherm rose from below 150 m to near 60 m.
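A minimal sketch of the harmonic-analysis step used on the temperature, salinity, sea-level and current records: sinusoids at known tidal frequencies (here just M2 and S2) are fitted to a record by least squares. The sea-level series below is synthetic, and the constituent set, amplitudes and noise are assumptions.

```python
import numpy as np

hours = np.arange(0.0, 30 * 24.0, 0.5)                       # a 30-day record, 30-min sampling
omega = {"M2": 2 * np.pi / 12.4206, "S2": 2 * np.pi / 12.0}  # rad per hour

rng = np.random.default_rng(0)
eta = (1.5 * np.cos(omega["M2"] * hours - 0.4)
       + 0.5 * np.cos(omega["S2"] * hours - 1.1)
       + 0.1 * rng.standard_normal(hours.size))              # synthetic sea level, m

# Design matrix with a cosine and sine column per constituent plus a mean term.
cols = [np.ones_like(hours)]
for w in omega.values():
    cols += [np.cos(w * hours), np.sin(w * hours)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, eta, rcond=None)

# Recover amplitude and phase for each constituent from the fitted coefficients.
for i, name in enumerate(omega):
    a, b = coef[1 + 2 * i], coef[2 + 2 * i]
    amp, phase = np.hypot(a, b), np.arctan2(b, a)
    print(f"{name}: amplitude {amp:.2f} m, phase {phase:.2f} rad")
```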
Abstract:
Three interrelated climate phenomena are at the center of the Climate Variability and Predictability (CLIVAR) Atlantic research: tropical Atlantic variability (TAV), the North Atlantic Oscillation (NAO), and the Atlantic meridional overturning circulation (MOC). These phenomena produce a myriad of impacts on society and the environment on seasonal, interannual, and longer time scales through variability manifest as coherent fluctuations in ocean and land temperature, rainfall, and extreme events. Improved understanding of this variability is essential for assessing the likely range of future climate fluctuations and the extent to which they may be predictable, as well as understanding the potential impact of human-induced climate change. CLIVAR is addressing these issues through prioritized and integrated plans for short-term and sustained observations, basin-scale reanalysis, and modeling and theoretical investigations of the coupled Atlantic climate system and its links to remote regions. In this paper, a brief review of the state of understanding of Atlantic climate variability and achievements to date is provided. Considerable discussion is given to future challenges related to building and sustaining observing systems, developing synthesis strategies to support understanding and attribution of observed change, understanding sources of predictability, and developing prediction systems in order to meet the scientific objectives of the CLIVAR Atlantic program.
Abstract:
A suite of climate change indices derived from daily temperature and precipitation data, with a primary focus on extreme events, was computed and analyzed. By setting an exact formula for each index and using specially designed software, analyses done in different countries have been combined seamlessly. This has enabled the presentation of the most up-to-date and comprehensive global picture of trends in extreme temperature and precipitation indices, using results from a number of workshops held in data-sparse regions and high-quality station data supplied by numerous scientists worldwide. Seasonal and annual indices for the period 1951-2003 were gridded. Trends in the gridded fields were computed and tested for statistical significance. Results showed widespread significant changes in temperature extremes associated with warming, especially for those indices derived from daily minimum temperature. Over 70% of the global land area sampled showed a significant decrease in the annual occurrence of cold nights and a significant increase in the annual occurrence of warm nights. Some regions experienced a more than doubling of these indices. This implies a positive shift in the distribution of daily minimum temperature throughout the globe. Daily maximum temperature indices showed similar changes but with smaller magnitudes. Precipitation changes showed a widespread and significant increase, but the changes are much less spatially coherent compared with temperature change. Probability distributions of indices derived from approximately 200 temperature and 600 precipitation stations, with near-complete data for 1901-2003 and covering a very large region of the Northern Hemisphere midlatitudes (and parts of Australia for precipitation), were analyzed for the periods 1901-1950, 1951-1978 and 1979-2003. Results indicate a significant warming throughout the 20th century. Differences in temperature indices distributions are particularly pronounced between the most recent two periods and for those indices related to minimum temperature. An analysis of those indices for which seasonal time series are available shows that these changes occur for all seasons, although they are generally least pronounced for September to November. Precipitation indices show a tendency toward wetter conditions throughout the 20th century.
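A schematic version (not the project's software) of one such index: the annual occurrence of warm nights, counted as days on which the daily minimum temperature exceeds a base-period percentile threshold. The data, base period and trend below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1951, 2004)
# Synthetic daily minimum temperatures (years x 365 days) with a weak warming trend.
tmin = 5.0 + 0.02 * (years[:, None] - 1951) + 3.0 * rng.standard_normal((years.size, 365))

# 90th-percentile threshold per calendar day, computed from a 1961-1990 base period.
base = (years >= 1961) & (years <= 1990)
threshold = np.percentile(tmin[base], 90, axis=0)        # one value per calendar day

# Annual occurrence of warm nights, expressed as a percentage of days per year.
warm_nights = 100.0 * (tmin > threshold).mean(axis=1)

# Linear trend over 1951-2003 (percent of days per decade).
slope = np.polyfit(years, warm_nights, 1)[0] * 10
print(f"warm-night trend: {slope:.2f} % of days per decade")
```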
Abstract:
We present results from fast-response wind measurements within and above a busy intersection between two street canyons (Marylebone Road and Gloucester Place) in Westminster, London, taken as part of the DAPPLE (Dispersion of Air Pollution and Penetration into the Local Environment; www.dapple.org.uk) 2007 field campaign. The data reported here were collected using ultrasonic anemometers on the roof-top of a building adjacent to the intersection and at two heights on a pair of lamp-posts on opposite sides of the intersection. Site characteristics, data analysis and the variation of intersection flow with the above-roof wind direction (θref) are discussed. Evidence of both flow channelling and recirculation was identified within the canyon, only a few metres from the intersection, for along-street and across-street roof-top winds respectively. Results also indicate that for oblique roof-top flows, the intersection flow is a complex combination of bifurcated channelled flows, recirculation and corner vortices. Asymmetries in local building geometry around the intersection and small changes in the background wind direction (changes in 15-min mean θref of 5–10 degrees) were also observed to have profound influences on the behaviour of intersection flow patterns. Consequently, short time-scale variability in the background flow direction can lead to highly scattered in-street mean flow angles, masking the true multi-modal features of the flow and thus further complicating modelling challenges.
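The point about scattered in-street mean flow angles can be illustrated with a small, entirely synthetic example: 15-min mean flow directions computed as circular (vector) averages reveal a bimodal, channelled flow that a single overall mean angle would hide. Sampling rate, modes and noise level are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
n_blocks, hz, block_s = 96, 10, 900                  # one day of 15-min blocks at 10 Hz
modes = rng.choice([80.0, 260.0], size=n_blocks)     # channelling in either along-street sense
samples = modes[:, None] + 20.0 * rng.standard_normal((n_blocks, hz * block_s))

# Circular (vector) mean flow angle for each 15-min block.
rad = np.deg2rad(samples)
block_mean = np.rad2deg(np.arctan2(np.sin(rad).mean(axis=1),
                                   np.cos(rad).mean(axis=1))) % 360

# A single overall average hides the two channelled modes that a histogram reveals.
hist, edges = np.histogram(block_mean, bins=36, range=(0, 360))
print(f"overall mean of block angles: {block_mean.mean():.1f} deg")
print("occupied 10-degree sectors:", edges[:-1][hist > 0])
```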
Abstract:
The Military Intelligence (Research) Department of the British War Office was tasked in 1940 with encouraging and supporting armed resistance in occupied Europe and the Axis-controlled Middle East. The major contention of this paper is that, in doing so, MI(R) performed a key role in British strategy in 1940-42 and in the development of what are now known as covert operations. MI(R) developed an organic, but coherent doctrine for such activity which was influential upon the Special Operations Executive (SOE) and its own sub-branch, G(R), which applied this doctrine in practice in East Africa and the Middle East in 1940-41. It was also here that a number of key figures in the development of covert operations and special forces first cut their teeth, the most notable being Major Generals Colin Gubbins and Orde Wingate.
Abstract:
The Bureau International des Poids et Mesures, the BIPM, was established by Article 1 of the Convention du Mètre, on 20 May 1875, and is charged with providing the basis for a single, coherent system of measurements to be used throughout the world. The decimal metric system, dating from the time of the French Revolution, was based on the metre and the kilogram. Under the terms of the 1875 Convention, new international prototypes of the metre and kilogram were made and formally adopted by the first Conférence Générale des Poids et Mesures (CGPM) in 1889. Over time this system developed, so that it now includes seven base units. In 1960 it was decided at the 11th CGPM that it should be called the Système International d’Unités, the SI (in English: the International System of Units). The SI is not static but evolves to match the world’s increasingly demanding requirements for measurements at all levels of precision and in all areas of science, technology, and human endeavour. This document is a summary of the SI Brochure, a publication of the BIPM which is a statement of the current status of the SI. The seven base units of the SI, listed in Table 1, provide the reference used to define all the measurement units of the International System. As science advances, and methods of measurement are refined, their definitions have to be revised. The more accurate the measurements, the greater the care required in the realization of the units of measurement.
Abstract:
More data will be produced in the next five years than in the entire history of human kind, a digital deluge that marks the beginning of the Century of Information. Through a year-long consultation with UK researchers, a coherent strategy has been developed, which will nurture Century-of-Information Research (CIR); it crystallises the ideas developed by the e-Science Directors' Forum Strategy Working Group. This paper is an abridged version of their latest report which can be found at: http://wikis.nesc.ac.uk/escienvoy/Century_of_Information_Research_Strategy which also records the consultation process and the affiliations of the authors. This document is derived from a paper presented at the Oxford e-Research Conference 2008 and takes into account suggestions made in the ensuing panel discussion. The goals of the CIR Strategy are to facilitate the growth of UK research and innovation that is data and computationally intensive and to develop a new culture of 'digital-systems judgement' that will equip research communities, businesses, government and society as a whole, with the skills essential to compete and prosper in the Century of Information. The CIR Strategy identifies a national requirement for a balanced programme of coordination, research, infrastructure, translational investment and education to empower UK researchers, industry, government and society. The Strategy is designed to deliver an environment which meets the needs of UK researchers so that they can respond agilely to challenges, can create knowledge and skills, and can lead new kinds of research. It is a call to action for those engaged in research, those providing data and computational facilities, those governing research and those shaping education policies. The ultimate aim is to help researchers strengthen the international competitiveness of the UK research base and increase its contribution to the economy. The objectives of the Strategy are to better enable UK researchers across all disciplines to contribute world-leading fundamental research; to accelerate the translation of research into practice; and to develop improved capabilities, facilities and context for research and innovation. It envisages a culture that is better able to grasp the opportunities provided by the growing wealth of digital information. Computing has, of course, already become a fundamental tool in all research disciplines. The UK e-Science programme (2001-06)—since emulated internationally—pioneered the invention and use of new research methods, and a new wave of innovations in digital-information technologies which have enabled them. The Strategy argues that the UK must now harness and leverage its own, plus the now global, investment in digital-information technology in order to spread the benefits as widely as possible in research, education, industry and government. Implementing the Strategy would deliver the computational infrastructure and its benefits as envisaged in the Science & Innovation Investment Framework 2004-2014 (July 2004), and in the reports developing those proposals. 
To achieve this, the Strategy proposes the following actions: support the continuous innovation of digital-information research methods; provide easily used, pervasive and sustained e-Infrastructure for all research; enlarge the productive research community which exploits the new methods efficiently; generate capacity, propagate knowledge and develop skills via new curricula; and develop coordination mechanisms to improve the opportunities for interdisciplinary research and to make digital-infrastructure provision more cost effective. To gain the best value for money, strategic coordination is required across a broad spectrum of stakeholders. A coherent strategy is essential in order to establish and sustain the UK as an international leader of well-curated national data assets and computational infrastructure, which is expertly used to shape policy, support decisions, empower researchers and to roll out the results to the wider benefit of society. The value of data as a foundation for wellbeing and a sustainable society must be appreciated; national resources must be more wisely directed to the collection, curation, discovery, widening access, analysis and exploitation of these data. Every researcher must be able to draw on skills, tools and computational resources to develop insights, test hypotheses and translate inventions into productive use, or to extract knowledge in support of governmental decision making. This foundation plus the skills developed will launch significant advances in research, in business, in professional practice and in government, with many consequent benefits for UK citizens. The Strategy presented here addresses these complex and interlocking requirements.
Abstract:
The Indian Ocean water that ends up in the Atlantic Ocean detaches from the Agulhas Current retroflection predominantly in the form of Agulhas rings and cyclones. Using numerical Lagrangian float trajectories in a high-resolution numerical ocean model, the fate of coherent structures near the Agulhas Current retroflection is investigated. It is shown that within the Agulhas Current, upstream of the retroflection, the spatial distributions of floats ending in the Atlantic Ocean and floats ending in the Indian Ocean are to a large extent similar. This indicates that Agulhas leakage occurs mostly through the detachment of Agulhas rings. After the floats detach from the Agulhas Current, the ambient water quickly loses its relative vorticity. The Agulhas rings thus seem to decay and lose much of their water in the Cape Basin. A cluster analysis reveals that most water in the Agulhas Current is within clusters of 180 km in diameter. Halfway through the Cape Basin there is an increase in the number of larger clusters with low relative vorticity, which carry the bulk of the Agulhas leakage transport through the Cape Basin. This upward cascade with respect to the length scales of the leakage, in combination with a power-law decay of the magnitude of relative vorticity, might be an indication that the decay of Agulhas rings is somewhat comparable to the decay of two-dimensional turbulence.
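A minimal sketch of testing the power-law decay mentioned above: the magnitude of relative vorticity along a (synthetic) float record is fitted with a straight line in log-log space to estimate the decay exponent. The series, exponent and noise are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(1, 201)                                   # days since detachment
# Synthetic vorticity magnitude decaying as a power law with multiplicative noise.
zeta = 1e-5 * days**-0.8 * np.exp(0.1 * rng.standard_normal(days.size))

# Least-squares fit of log|zeta| = log(a) + b*log(t); b is the power-law exponent.
b, log_a = np.polyfit(np.log(days), np.log(zeta), 1)
print(f"fitted exponent b = {b:.2f} (decay as t^b)")
```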