823 results for tidal current
Abstract:
Birds are vulnerable to collisions with human-made fixed structures. Despite ongoing development and increases in infrastructure, we have few estimates of the magnitude of collision mortality. We reviewed the existing literature on avian mortality associated with transmission lines and derived an initial estimate for Canada. Estimating mortality from collisions with power lines is challenging due to the lack of studies, especially from sites within Canada, and due to uncertainty about the magnitude of detection biases. Detection of bird collisions with transmission lines varies due to habitat type, species size, and scavenging rates. In addition, birds can be crippled by the impact and subsequently die, although crippling rates are poorly known and rarely incorporated into estimates. We used existing data to derive a range of estimates of avian mortality associated with collisions with transmission lines in Canada by incorporating detection, scavenging, and crippling biases. There are 231,966 km of transmission lines across Canada, mostly in the boreal forest. Mortality estimates ranged from 1 million to 229.5 million birds per year, depending on the bias corrections applied. We consider our most realistic estimate, taking into account variation in risk across Canada, to range from 2.5 million to 25.6 million birds killed per year. Data from multiple studies across Canada and the northern U.S. indicate that the most vulnerable bird groups are (1) waterfowl, (2) grebes, (3) shorebirds, and (4) cranes, which is consistent with other studies. Populations of several groups that are vulnerable to collisions are increasing across Canada (e.g., waterfowl, raptors), which suggests that collision mortality, at current levels, is not limiting population growth. However, there may be impacts on other declining species, such as shorebirds and some species at risk, including Alberta’s Trumpeter Swans (Cygnus buccinator) and western Canada’s endangered Whooping Cranes (Grus americana). Collisions may be more common during migration, which underscores the need to understand impacts across the annual cycle. We emphasize that these estimates are preliminary, especially considering the absence of Canadian studies.
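The bias-correction logic described above can be illustrated with a minimal sketch; the per-kilometre collision rate and the correction factors below are hypothetical placeholders, not values from the study (only the 231,966 km line length is taken from the abstract).

```python
# Minimal sketch of a bias-corrected collision-mortality estimate.
# All rates below are illustrative placeholders, not values from the study.

def corrected_mortality(carcasses_per_km, km_of_line,
                        detection_rate, carcass_persistence, crippling_rate):
    """Scale raw carcass counts up for detection, scavenging and crippling biases."""
    observed = carcasses_per_km * km_of_line
    # Divide by the probability that a killed bird both persists (is not scavenged)
    # and is detected by the searcher.
    adjusted = observed / (detection_rate * carcass_persistence)
    # Add birds crippled by the impact that die away from the line.
    return adjusted * (1.0 + crippling_rate)

# Example with placeholder inputs (231,966 km is the line length cited above).
estimate = corrected_mortality(carcasses_per_km=2.0, km_of_line=231_966,
                               detection_rate=0.6, carcass_persistence=0.7,
                               crippling_rate=0.2)
print(f"Illustrative estimate: {estimate:,.0f} birds per year")
```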
Abstract:
Atmospheric electricity measurements were made at Lerwick Observatory in the Shetland Isles (60°09′N, 1°08′W) during most of the 20th century. The Potential Gradient (PG) was measured from 1926 to 1984 and the air-earth conduction current (Jc) was measured during the final decade of the PG measurements. Daily Jc values (1978–1984) observed at 15 UT are presented here for the first time, with independently obtained PG measurements used to select valid data. The 15 UT Jc (1978–1984) spans 0.5–9.5 pA/m2, with a median of 2.5 pA/m2; the columnar resistance at Lerwick is estimated as 70 PΩ m2. Smoke measurements confirm the low-pollution properties of the site. Analysis of the monthly variation of the Lerwick Jc data shows that winter (DJF) Jc is significantly greater than summer (JJA) Jc, by 20%. The Lerwick atmospheric electricity seasonality differs from the global lightning seasonality, but Jc has a similar seasonal phasing to that observed in Nimbostratus clouds globally, suggesting a role for non-thunderstorm rain clouds in the seasonality of the global circuit.
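The 70 PΩ m2 figure follows from treating the atmospheric column as a single resistor in the global circuit; a minimal worked form is sketched below, assuming a typical ionospheric potential of about 175 kV (an assumed value, not one reported in the abstract):

```latex
% Ohm's law for one atmospheric column (sketch; V_I ~ 175 kV is an assumed typical value)
R_c \approx \frac{V_I}{J_c}
    \approx \frac{1.75\times10^{5}\,\mathrm{V}}{2.5\times10^{-12}\,\mathrm{A\,m^{-2}}}
    = 7\times10^{16}\,\Omega\,\mathrm{m^{2}}
    = 70\,\mathrm{P\Omega\,m^{2}}
```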
Abstract:
Voltage-dependent Ca2+ channels (VDCCs) have emerged as targets to treat neuropathic pain; however, amongst VDCCs, the precise role of the CaV2.3 subtype in nociception remains unproven. Here, we investigate the effects of partial sciatic nerve ligation (PSNL) on Ca2+ currents in small/medium diameter dorsal root ganglia (DRG) neurones isolated from CaV2.3(−/−) knock-out and wild-type (WT) mice. DRG neurones from CaV2.3(−/−) mice had significantly reduced sensitivity to SNX-482 versus WT mice. DRGs from CaV2.3(−/−) mice also had increased sensitivity to the CaV2.2 VDCC blocker ω-conotoxin. In WT mice, PSNL caused a significant increase in ω-conotoxin sensitivity and a reduction in SNX-482 sensitivity. In CaV2.3(−/−) mice, PSNL caused a significant reduction in ω-conotoxin sensitivity and an increase in nifedipine sensitivity. PSNL-induced changes in Ca2+ current were not accompanied by effects on the voltage-dependence of activation in either CaV2.3(−/−) or WT mice. These data suggest that CaV2.3 subunits contribute to, but do not fully underlie, drug-resistant (R-type) Ca2+ current in these cells. In WT mice, PSNL caused adaptive changes in CaV2.2- and CaV2.3-mediated Ca2+ currents, supporting roles for these VDCCs in nociception during neuropathy. In CaV2.3(−/−) mice, PSNL induced changes in CaV1 and CaV2.2 Ca2+ currents, consistent with alternative adaptive mechanisms occurring in the absence of CaV2.3 subunits.
Abstract:
The global atmospheric electrical circuit sustains a vertical current density between the ionosphere and the Earth's surface, the existence of which is well established from measurements made in fair-weather conditions. In overcast but non-thunderstorm, non-precipitating conditions, the current travels through any cloud present, despite cloud layers having low electrical conductivity. For extensive layer clouds, this leads to space charge at the upper and lower cloud boundaries. Using a combination of atmospheric electricity and solar radiation measurements at three UK sites, vertical current measurements have been categorised into clear, broken, and overcast cloud conditions. This approach shows that the vertical "fair weather" current is maintained despite the presence of cloud. In fully overcast conditions with thick cloud, the vertical current is reduced compared to thin-cloud overcast conditions, which is associated with the cloud's contribution to the columnar resistance. That contribution depends on both the cloud's thickness and its height.
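The columnar-resistance picture invoked above can be written compactly; the decomposition below is a sketch, with the cloud conductivity and layer geometry treated as assumptions rather than measured quantities:

```latex
% Columnar resistance as the height integral of reciprocal conductivity (sketch;
% sigma_cloud and the layer bounds are illustrative assumptions)
R_c = \int_{0}^{z_{\mathrm{ion}}} \frac{dz}{\sigma(z)}
    \approx \int_{\mathrm{clear\ air}} \frac{dz}{\sigma_{\mathrm{clear}}(z)}
    + \frac{\Delta z_{\mathrm{cloud}}}{\sigma_{\mathrm{cloud}}},
\qquad \sigma_{\mathrm{cloud}} \ll \sigma_{\mathrm{clear}}
```

A thicker cloud layer increases the second term directly, while the height dependence enters because the clear-air conductivity σ_clear(z) displaced by the cloud itself varies with altitude; a larger R_c in turn reduces the vertical current J_c = V_I / R_c under thick overcast.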
Abstract:
Foggy air and clear air have appreciably different electrical conductivities. The conductivity gradient at horizontal droplet boundaries causes droplet charging, as a result of vertical current flow in the global atmospheric electrical circuit. The charging is poorly known, as both the current flow through atmospheric water droplet layers and the air conductivity are poorly characterised experimentally. Surface measurements during three days of continuous fog using new instrument techniques show that a shallow (of order 100 m deep) fog layer still permits the vertical conduction current to pass. Further, the conductivity in the fog is estimated to be approximately 20% lower than in clear air. Assuming a fog transition thickness of one metre, this implies a vertical conductivity gradient of order 10 fS m−2 at the boundary. The actual vertical conductivity gradient at a cloud boundary would probably be greater, due to the presence of larger droplets in clouds compared to fog, and cleaner, more conductive clear air aloft.
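The gradient figure quoted above follows from simple finite-difference arithmetic; the sketch below assumes a typical near-surface clear-air conductivity of a few tens of fS/m (an assumed range, not a value reported in the abstract):

```latex
% Order-of-magnitude estimate of the conductivity gradient across a ~1 m fog boundary
% (sketch; sigma_clear is an assumed typical value)
\frac{d\sigma}{dz} \approx \frac{\Delta\sigma}{\Delta z}
  \approx \frac{0.2\,\sigma_{\mathrm{clear}}}{1\,\mathrm{m}},
\qquad \sigma_{\mathrm{clear}} \sim 10\text{--}50\,\mathrm{fS\,m^{-1}}
\;\Rightarrow\; \frac{d\sigma}{dz} \sim 2\text{--}10\,\mathrm{fS\,m^{-2}}
```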
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs—to our knowledge for the first time—a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss database. The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and Southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of Eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both severity and frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
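The climate-to-loss coupling chain described above (event selection from daily gust maxima, a vulnerability curve, and aggregation to an annual expected loss) can be sketched minimally as follows. The gust threshold, the cubic-type damage function, and the portfolio value are illustrative assumptions, not the Swiss Re model or its calibration.

```python
# Minimal sketch of the climate-to-loss coupling chain: select storm days from
# daily 10 m gust maxima, apply a toy vulnerability curve, sum annual losses.
# Threshold, damage-function form, and portfolio value are illustrative assumptions.
import numpy as np

def select_storm_events(daily_max_gust, threshold=25.0):
    """Step (i): keep days whose domain-wide max 10 m gust exceeds a threshold (m/s)."""
    return [day for day, gust in enumerate(daily_max_gust) if gust >= threshold]

def damage_fraction(gust, v_crit=25.0, v_total=70.0):
    """Toy vulnerability curve: zero below v_crit, rising steeply, saturating at v_total."""
    x = np.clip((gust - v_crit) / (v_total - v_crit), 0.0, 1.0)
    return x ** 3

def annual_expected_loss(daily_max_gust, insured_value):
    """Step (iii)-style aggregation: sum event losses over one year of gust maxima."""
    events = select_storm_events(daily_max_gust)
    return sum(damage_fraction(daily_max_gust[d]) * insured_value for d in events)

# Example: one synthetic year of daily gust maxima (m/s) for a placeholder portfolio.
rng = np.random.default_rng(0)
gusts = rng.gamma(shape=9.0, scale=2.0, size=365)  # placeholder gust climatology
print(f"Illustrative annual expected loss: {annual_expected_loss(gusts, 1e9):,.0f}")
```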
Abstract:
Holocene tidal palaeochannels, Severn Estuary Levels, UK: a search for granulometric and foraminiferal criteria. Proceedings of the Geologists' Association, 117, 329-344. Grain-size characteristics (by laser granulometry) and foraminiferal assemblages have been established for silts accumulated in five dissimilar tidal palaeochannels of mid or late Holocene age in the Severn Estuary Levels, representative of muddy tidal systems. For purposes of general comparison, similar data were obtained from a representative active tidal inlet in the area, but all of these channels have been subject to human interference and are not relied upon as a model for environmental interpretation. Although the palaeochannel deposits differ substantially in their bedding characteristics and stratigraphical relationships from the level-bedded salt-marsh platform and mudflat deposits with which they are associated, and although the channel environment is distinctive morphologically and hydraulically, no critical textural differences could be found between the channel deposits and the associated facies. Similarly, no foraminiferal assemblages distinctive of a tidal channel were encountered. Instead, the assemblages compare with those from mudflats and salt-marsh platforms. It is concluded that the sides of the subfossil channels carried some vegetation, as was observed to be the case in the modern inlet. An alternative approach is necessary if concealed palaeochannel deposits are to be recognized in muddy systems from limited numbers of subsurface samples. Although the palaeochannels afforded no characteristic textural signature, they yield transverse grain-size patterns pointing to coastal movements during their evolution. Concave-up trends suggest outward coastal building, whereas convex-up ones point to marsh-edge retreat.
Abstract:
A common mode whereby destruction of coastal lowlands occurs is frontal erosion. Edge cliffing, nonetheless, is also an inherent aspect of salt marsh development in many northwest European tidal marshes. Quite a few geomorphologists in the first half of the past century recognized such edge erosion as a definite repetitive stage within an autocyclic mode of marsh growth. A shift in research priorities during the past decades (primarily because of coastal management concerns) has resulted in an enhanced focus on sediment-flux measurement campaigns on salt marshes. This somewhat "object-oriented" strategy hindered any further development of the once-established autocyclic growth concept, which has virtually gone into oblivion in recent times. This work makes an attempt to resurrect the notion of autocyclicity by employing its premises to address edge erosion in tidal marshes. Through a review of intertidal morphosedimentology, the underlying framework for autocyclicity is envisaged. The phenomenon is demonstrated in the Holocene salt marsh plain of the Moricambe basin in NW England, which displays several distinct phases of marsh retreat in the form of abandoned clifflets. The suite of abandoned shorelines and terraces has been identified in detailed field mapping that followed analysis of topographic maps and aerial photographs. Vertical trends in marsh plain sediments are recorded in trenches for signs of past marsh front movements. The characteristic sea level history of the area offers an opportunity to differentiate the morphodynamic variability induced in the autocyclic growth of the marsh plain under scenarios of rising and falling sea level and the accompanying change in sediment budget. The ideas gathered are incorporated to construct a conceptual model that links the temporal extent of marsh erosion to inner tidal flat sediment budget and sea level tendency. The review leads to recognition of the necessity of adopting a holistic approach in morphodynamic investigations, where marshes should be treated as a component within the "marsh-mudflat system", as each element apparently modulates the evolution of the other, with an eventual linkage to subtidal channels. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in the modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact, different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and where regulatory remits allow, to achieve a coherent and consistent exposure modelling process. We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research, including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs, including spoil heaps and tunnels draining underground workings. The variety of inputs and the importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed δ18O(SO4–H2O) values from 3.9 to 13.6‰, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high δ18O(SO4–H2O) values in a manner that is consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high δ18O(SO4–H2O) values (11.2–13.6‰) occur in concentrated (Fe = 172–829 mM), low pH (0.88–1.4), ferrous iron (68–91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite-water oxygen equilibrium exchange and, finally, sulphite oxidation to sulphate with O2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7), but had a low δ18O(SO4–H2O) value of 4.0‰ and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway where ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate, and sulphate is released directly from the pyrite surface. However, problems remain with the sulphite-water oxygen exchange model, and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation. (C) 2009 Elsevier B.V. All rights reserved.
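The laboratory-calibrated approach referred to above is commonly expressed as a two-end-member mixing model for the oxygen in sulphate; the form below is a generic sketch of that approach (X, the fraction of sulphate oxygen derived from water, and the ε fractionation terms are fitted or assumed quantities, not values reported in this abstract):

```latex
% Generic two-source mixing model for sulphate oxygen (sketch of the standard
% laboratory-calibrated approach; X and the epsilon terms are fitted/assumed)
\delta^{18}O_{SO_4} \approx X\,\bigl(\delta^{18}O_{H_2O} + \varepsilon_{H_2O}\bigr)
                    + (1 - X)\,\bigl(\delta^{18}O_{O_2} + \varepsilon_{O_2}\bigr)
```

In this framework, higher δ18O(SO4–H2O) implies a larger apparent contribution of molecular oxygen, which is the interpretation the sulphite-exchange mechanism described above is invoked to replace.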
Abstract:
Although numerous field studies have evaluated flow and transport processes in salt marsh channels, the overall role of channels in delivering and removing material from salt marsh platforms is still poorly characterised. In this paper, we consider this issue based on a numerical hydrodynamic model for a prototype marsh system and on a field survey of the cross-sectional geometry of a marsh channel network. Results of the numerical simulations indicate that the channel transfers approximately three times the volume of water that would be estimated from mass balance considerations alone. Marsh platform roughness exerts a significant influence on the partitioning of discharge between the channel and the marsh platform edge, alters flow patterns on the marsh platform through its effects on channel-to-platform transfer, and also controls the timing of peak discharge relative to marsh-edge overtopping. Although peak channel discharges and velocities are associated with the flood tide and marsh inundation, a larger volume of water is transferred by the channel during ebb flows, with a portion of this transfer taking place after the tidal height has fallen below the marsh platform. Detailed surveys of the marsh channels crossing a series of transects at Upper Stiffkey Marsh, north Norfolk, England, show that the total channel cross-sectional area increases linearly with catchment area in the inner part of the marsh, which is consistent with the increase in shoreward tidal prism removed by the channels. Toward the marsh edge, however, a deficit in the total cross-sectional area develops, suggesting that discharge partitioning between the marsh channels and the marsh platform edge may also be expressed in the morphology of marsh channel systems.
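The mass-balance comparison above can be illustrated with a minimal sketch; the catchment area, inundation depth, and routed volume below are hypothetical placeholders chosen only to reproduce a roughly threefold ratio, not figures from the study.

```python
# Minimal sketch of the mass-balance comparison: the tidal-prism volume a channel
# "should" carry for its catchment versus the volume the hydrodynamic model routes
# through it. All numbers are hypothetical placeholders.

def tidal_prism_volume(catchment_area_m2, inundation_depth_m):
    """Volume expected from mass balance alone: catchment area x depth of flooding."""
    return catchment_area_m2 * inundation_depth_m

def channel_transfer_ratio(simulated_channel_volume_m3, catchment_area_m2, depth_m):
    """Ratio of the volume actually conveyed by the channel to the mass-balance estimate."""
    return simulated_channel_volume_m3 / tidal_prism_volume(catchment_area_m2, depth_m)

# Example with placeholder numbers: a 5 ha catchment flooded to 0.3 m, with the
# model routing 45,000 m^3 through the channel over the tide.
ratio = channel_transfer_ratio(45_000.0, 50_000.0, 0.3)
print(f"Channel carries {ratio:.1f}x the simple tidal-prism estimate")
```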
Abstract:
The global monsoon system is so varied and complex that understanding and predicting its diverse behaviour remains a challenge that will occupy modellers for many years to come. Despite the difficult task ahead, an improved monsoon modelling capability has been realized through the inclusion of more detailed physics of the climate system and higher resolution in our numerical models. Perhaps the most crucial improvement to date has been the development of coupled ocean-atmosphere models. From subseasonal to interdecadal time scales, only through the inclusion of air-sea interaction can the proper phasing and teleconnections of convection be attained with respect to sea surface temperature variations. Even then, the response to slow variations in remote forcings (e.g., El Niño–Southern Oscillation) does not result in a robust solution, as there are a host of competing modes of variability that must be represented, including those that appear to be chaotic. Understanding of the links between monsoons and land surface processes is not as mature as that of air-sea interactions. A land surface forcing signal appears to dominate the onset of wet season rainfall over the North American monsoon region, though the relative role of ocean versus land forcing remains a topic of investigation in all the monsoon systems. Also, improved forecasts have been made during periods in which additional sounding observations are available for data assimilation. Thus, there is untapped predictability that can only be attained through the development of a more comprehensive observing system for all monsoon regions. Additionally, improved parameterizations - for example, of convection, cloud, radiation, and boundary layer schemes as well as land surface processes - are essential to realize the full potential of monsoon predictability. A more comprehensive assessment is needed of the impact of black carbon aerosols, which may modulate that of other anthropogenic greenhouse gases. Dynamical considerations require ever-increasing horizontal resolution (probably 0.5° or finer) in order to resolve many monsoon features including, but not limited to, the Mei-Yu/Baiu sudden onset and withdrawal, low-level jet orientation and variability, and orographically forced rainfall. Under anthropogenic climate change, many competing factors complicate making robust projections of monsoon changes. Absent aerosol effects, increased land-sea temperature contrast suggests strengthened monsoon circulation due to climate change. However, increased aerosol emissions will reflect more solar radiation back to space, which may temper or even reduce the strength of monsoon circulations compared to the present day. Precipitation may behave independently of the circulation under warming conditions, in which an increased atmospheric moisture loading, based purely on thermodynamic considerations, could result in increased monsoon rainfall under climate change. The challenge to improve model parameterizations and include more complex processes and feedbacks pushes computing resources to their limit, thus requiring continuous upgrades of computational infrastructure to ensure progress in understanding and predicting current and future behaviour of monsoons.