165 results for Current loop
in CentAUR: Central Archive, University of Reading - UK
Abstract:
The relation between the Agulhas Current retroflection location and the magnitude of Agulhas leakage, the transport of water from the Indian to the Atlantic Ocean, is investigated in a high-resolution numerical ocean model. Sudden eastward retreats of the Agulhas Current retroflection loop are linearly related to the shedding of Agulhas rings, with larger retreats generating larger rings. Using numerical Lagrangian floats, a 37-year time series of the magnitude of Agulhas leakage in the model is constructed. The time series exhibits large amounts of variability, on both weekly and annual time scales. A linear relation is found between the magnitude of Agulhas leakage and the location of the Agulhas Current retroflection, both binned to three-month averages. In this relation, a more westward location of the Agulhas Current retroflection corresponds to an increased transport from the Indian Ocean to the Atlantic Ocean. When the relation is used in a linear regression and applied to almost 20 years of altimetry data, it yields a best estimate of the mean magnitude of Agulhas leakage of 13.2 Sv. The early retroflection of 2000, when Agulhas leakage was probably halved, can be identified using the regression.
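The regression of leakage magnitude on retroflection longitude described above can be sketched as follows. This is a minimal illustration of the technique only: the longitude and leakage values are invented placeholders, not the model's actual three-month-binned time series.

```python
# Hypothetical three-month averages (illustrative placeholder values):
# retroflection longitude (degrees E; smaller = further west) and
# Agulhas leakage magnitude (Sv).
lon = [18.0, 19.5, 17.2, 20.1, 16.8, 18.7, 17.9, 19.0]
leak = [15.1, 12.0, 16.4, 10.8, 17.0, 13.2, 14.6, 12.5]

def linear_fit(x, y):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

a, b = linear_fit(lon, leak)
# A more westward (smaller-longitude) retroflection corresponds to more
# leakage, so for data of this kind the fitted slope is negative.
leak_estimate = a * 18.5 + b   # Sv, for one observed retroflection longitude
```

Applied to an altimetry-derived longitude record, a fit of this form is what yields a leakage estimate for each period.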
Abstract:
The orientation of the heliospheric magnetic field (HMF) in near-Earth space is generally a good indicator of the polarity of HMF foot points at the photosphere. There are times, however, when the HMF folds back on itself (is inverted), as indicated by suprathermal electrons locally moving sunward, even though they must ultimately be carrying the heat flux away from the Sun. Analysis of the near-Earth solar wind during the period 1998–2011 reveals that inverted HMF is present approximately 5.5% of the time and is generally associated with slow, dense solar wind and relatively weak HMF intensity. Inverted HMF is mapped to the coronal source surface, where a new method is used to estimate coronal structure from the potential-field source-surface model. We find a strong association with bipolar streamers containing the heliospheric current sheet, as expected, but also with unipolar or pseudostreamers, which contain no current sheet. Because large-scale inverted HMF is a widely accepted signature of interchange reconnection at the Sun, this finding provides strong evidence for models of the slow solar wind which involve coronal loop opening by reconnection within pseudostreamer belts as well as the bipolar streamer belt. Occurrence rates of bipolar and pseudostreamers suggest that they are equally likely to result in inverted HMF and, therefore, presumably undergo interchange reconnection at approximately the same rate. Given the different magnetic topologies involved, this suggests the rate of reconnection is set externally, possibly by the differential rotation rate which governs the circulation of open solar flux.
Abstract:
A recent area for investigation into the development of adaptable robot control is the use of living neuronal networks to control a mobile robot. The so-called Animat paradigm comprises a neuronal network (the 'brain') connected to an external embodiment (in this case a mobile robot), facilitating potentially robust, adaptable robot control and increased understanding of neural processes. Sensory input from the robot is provided to the neuronal network via stimulation on a number of electrodes embedded in a specialist Petri dish (a Multi-Electrode Array, MEA); accurate control of this stimulation is vital. We present software tools allowing precise, near real-time control of electrical stimulation on MEAs, with fast switching between electrodes and the application of custom stimulus waveforms. These Linux-based tools are compatible with the widely used MEABench data acquisition system. Benefits include rapid stimulus modulation in response to neuronal activity (closed loop) and batch processing of stimulation protocols.
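The closed-loop behaviour described above (sense activity, then stimulate in response) can be sketched as below. The `read_spike_counts` and `stimulate` functions are hypothetical stand-ins for the real MEABench/hardware interfaces, and the waveform format is invented for illustration.

```python
# Hypothetical hardware-interface stubs; the real tools would exchange data
# with the MEA amplifier and stimulator (e.g. through MEABench).
def read_spike_counts(n_electrodes):
    """Return recent spike counts per recording electrode (stubbed)."""
    return [0] * n_electrodes

def stimulate(electrode, waveform):
    """Deliver a custom stimulus waveform on one electrode (stubbed)."""
    pass

# A custom biphasic pulse as (duration_us, amplitude_mV) segments
# (an assumed representation, not MEABench's own format).
BIPHASIC = [(100, 500), (100, -500)]

def closed_loop_step(threshold=5, n_electrodes=60):
    """One closed-loop iteration: read activity, stimulate where it is high."""
    counts = read_spike_counts(n_electrodes)
    for electrode, count in enumerate(counts):
        # Rapid stimulus modulation in response to observed neuronal activity.
        if count >= threshold:
            stimulate(electrode, BIPHASIC)
    return counts
```

In the real system the loop latency and electrode-switching speed are the critical constraints, which is why near real-time control of the stimulator matters.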
Abstract:
Atmospheric electricity measurements were made at Lerwick Observatory in the Shetland Isles (60°09′N, 1°08′W) during most of the 20th century. The Potential Gradient (PG) was measured from 1926 to 1984, and the air-earth conduction current (Jc) was measured during the final decade of the PG measurements. Daily Jc values (1978–1984) observed at 15 UT are presented here for the first time, with independently obtained PG measurements used to select valid data. The 15 UT Jc (1978–1984) spans 0.5–9.5 pA/m², with median 2.5 pA/m²; the columnar resistance at Lerwick is estimated as 70 PΩ m². Smoke measurements confirm the low-pollution properties of the site. Analysis of the monthly variation of the Lerwick Jc data shows that winter (DJF) Jc is significantly greater than summer (JJA) Jc, by 20%. The Lerwick atmospheric electricity seasonality differs from the global lightning seasonality, but Jc has a similar seasonal phasing to that observed in Nimbostratus clouds globally, suggesting a role for non-thunderstorm rain clouds in the seasonality of the global circuit.
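As an order-of-magnitude consistency check (not a calculation from the paper itself), the quoted median Jc and columnar resistance can be combined through Ohm's law for the atmospheric column to recover an implied ionospheric potential:

```python
# Quantities quoted above for Lerwick.
Jc = 2.5e-12   # A/m^2, median air-earth conduction current density at 15 UT
Rc = 70e15     # ohm m^2, estimated columnar resistance

# Ohm's law applied to a unit-area atmospheric column: V = Jc * Rc.
V_ionosphere = Jc * Rc   # volts
# 2.5e-12 * 70e15 = 175e3 V = 175 kV, the right order of magnitude for
# the ionospheric potential of the global atmospheric electrical circuit.
```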
Abstract:
Voltage-dependent Ca2+ channels (VDCCs) have emerged as targets to treat neuropathic pain; however, amongst VDCCs, the precise role of the CaV2.3 subtype in nociception remains unproven. Here, we investigate the effects of partial sciatic nerve ligation (PSNL) on Ca2+ currents in small/medium-diameter dorsal root ganglia (DRG) neurones isolated from CaV2.3(−/−) knock-out and wild-type (WT) mice. DRG neurones from CaV2.3(−/−) mice had significantly reduced sensitivity to SNX-482 versus WT mice. DRGs from CaV2.3(−/−) mice also had increased sensitivity to the CaV2.2 VDCC blocker ω-conotoxin. In WT mice, PSNL caused a significant increase in ω-conotoxin sensitivity and a reduction in SNX-482 sensitivity. In CaV2.3(−/−) mice, PSNL caused a significant reduction in ω-conotoxin sensitivity and an increase in nifedipine sensitivity. PSNL-induced changes in Ca2+ current were not accompanied by effects on the voltage-dependence of activation in either CaV2.3(−/−) or WT mice. These data suggest that CaV2.3 subunits contribute to, but do not fully underlie, the drug-resistant (R-type) Ca2+ current in these cells. In WT mice, PSNL caused adaptive changes in CaV2.2- and CaV2.3-mediated Ca2+ currents, supporting roles for these VDCCs in nociception during neuropathy. In CaV2.3(−/−) mice, PSNL induced changes in CaV1- and CaV2.2-mediated Ca2+ currents, consistent with alternative adaptive mechanisms occurring in the absence of CaV2.3 subunits.
Abstract:
The global atmospheric electrical circuit sustains a vertical current density between the ionosphere and the Earth's surface, the existence of which is well-established from measurements made in fair-weather conditions. In overcast, but non-thunderstorm, non-precipitating conditions, the current travels through the cloud present, despite cloud layers having low electrical conductivity. For extensive layer clouds, this leads to space charge at the upper and lower cloud boundaries. Using a combination of atmospheric electricity and solar radiation measurements at three UK sites, vertical current measurements have been categorised into clear, broken, and overcast cloud conditions. This approach shows that the vertical "fair weather" current is maintained despite the presence of cloud. In fully overcast conditions with thick cloud, the vertical current is reduced compared to thin-cloud overcast conditions, associated with the cloud's resistance contributions. The contribution of cloud to the columnar resistance depends on both the cloud's thickness and its height.
Abstract:
Foggy air and clear air have appreciably different electrical conductivities. The conductivity gradient at horizontal droplet boundaries causes droplet charging, as a result of vertical current flow in the global atmospheric electrical circuit. The charging is poorly known, as both the current flow through atmospheric water droplet layers and the air conductivity are poorly characterised experimentally. Surface measurements during three days of continuous fog using new instrument techniques show that a shallow (of order 100 m deep) fog layer still permits the vertical conduction current to pass. Further, the conductivity in the fog is estimated to be approximately 20% lower than in clear air. Assuming a fog transition thickness of one metre, this implies a vertical conductivity gradient of order 10 fS m−2 at the boundary. The actual vertical conductivity gradient at a cloud boundary would probably be greater, due to the presence of larger droplets in clouds compared to fog, and cleaner, more conductive clear air aloft.
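The gradient estimate follows directly from the quoted 20% conductivity reduction and the assumed one-metre transition thickness. In the sketch below, the clear-air conductivity value is an assumption chosen to reproduce the stated order of magnitude, not a figure from the paper:

```python
sigma_clear = 50e-15   # S/m, assumed clear-air conductivity (illustrative)
reduction = 0.20       # fog conductivity ~20% lower than clear air (quoted)
dz = 1.0               # m, assumed fog transition thickness (quoted)

# Vertical conductivity gradient across the fog boundary, in S/m per metre.
grad = reduction * sigma_clear / dz
# 20% of ~50 fS/m over 1 m gives ~10 fS/m^2, the order quoted above.
```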
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs—to our knowledge for the first time—a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss data base.
The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany and into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and Southern Europe. The highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of Eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10-year loss), 50% (30-year loss), and 104% (100-year loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both severity and frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
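The Monte-Carlo step of the coupling strategy can be sketched in miniature as follows. The damage function, gust threshold and portfolio value below are invented placeholders for illustration; they are not Swiss Re's damage function or calibration.

```python
import random

def damage_fraction(gust, threshold=25.0, exponent=3):
    """Hypothetical damage function: losses grow steeply above a gust threshold."""
    if gust <= threshold:
        return 0.0
    return (gust / threshold - 1.0) ** exponent

def annual_expected_loss(event_gusts, portfolio_value, n_draws=10000, seed=1):
    """Monte-Carlo estimate: resample the storm event set and average the losses."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        gust = rng.choice(event_gusts)   # draw one storm from the event set
        total += damage_fraction(gust) * portfolio_value
    return total / n_draws

# Illustrative event set of daily-maximum 10 m gusts (m/s) and a toy portfolio.
events = [26.0, 31.0, 28.5, 35.0, 27.0]
ael = annual_expected_loss(events, portfolio_value=1e9)
```

A convex damage function of this shape is one simple way to express the disproportionate loss increase for rare high-impact events noted above: a modest rise in gust speed multiplies the loss.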
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper each of these basic types of exposure model are briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and regulatory remits allow, to get a coherent and consistent exposure modelling process. 
We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output, and a closer working relationship between scientists, policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process.
Abstract:
The Rio Tinto river in SW Spain is a classic example of acid mine drainage and the focus of an increasing amount of research, including environmental geochemistry, extremophile microbiology and Mars-analogue studies. Its 5000-year mining legacy has resulted in a wide range of point inputs, including spoil heaps and tunnels draining underground workings. The variety of inputs and the importance of the river as a research site make it an ideal location for investigating sulphide oxidation mechanisms at the field scale. Mass balance calculations showed that pyrite oxidation accounts for over 93% of the dissolved sulphate derived from sulphide oxidation in the Rio Tinto point inputs. Oxygen isotopes in water and sulphate were analysed from a variety of drainage sources and displayed δ18O(SO4–H2O) values from 3.9 to 13.6‰, indicating that different oxidation pathways occurred at different sites within the catchment. The most commonly used approach to interpreting field oxygen isotope data applies water and oxygen fractionation factors derived from laboratory experiments. We demonstrate that this approach cannot explain high δ18O(SO4–H2O) values in a manner consistent with recent models of pyrite and sulphoxyanion oxidation. In the Rio Tinto, high δ18O(SO4–H2O) values (11.2–13.6‰) occur in concentrated (Fe = 172–829 mM), low-pH (0.88–1.4), ferrous-iron-dominated (68–91% of total Fe) waters and are most simply explained by a mechanism involving a dissolved sulphite intermediate, sulphite–water oxygen equilibrium exchange and, finally, sulphite oxidation to sulphate with O2. In contrast, drainage from large waste blocks of acid volcanic tuff with pyritiferous veins also had low pH (1.7), but had a low δ18O(SO4–H2O) value of 4.0‰ and high concentrations of ferric iron (Fe(III) = 185 mM, total Fe = 186 mM), suggesting a pathway where ferric iron is the primary oxidant, water is the primary source of oxygen in the sulphate, and sulphate is released directly from the pyrite surface. However, problems remain with the sulphite–water oxygen exchange model, and recommendations are therefore made for future experiments to refine our understanding of oxygen isotopes in pyrite oxidation.
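The laboratory-derived interpretive approach mentioned above treats sulphate oxygen as a mixture of water-derived and dissolved-O2-derived oxygen, each shifted by a fractionation factor. A minimal sketch of that two-source mixing relation follows; the fractionation factors and end-member values used here are placeholder assumptions of the kind taken from laboratory experiments, not values from this study.

```python
def delta18O_sulfate(x_water, d18O_water, d18O_O2=23.5,
                     eps_water=3.5, eps_O2=-11.4):
    """Two-source mixing for sulphate oxygen, in per mil (VSMOW).

    x_water    -- fraction of sulphate oxygen derived from water
    d18O_water -- delta-18O of the local water
    The default d18O of atmospheric O2 and the fractionation factors
    (eps_*) are illustrative placeholders, not this study's numbers.
    """
    return (x_water * (d18O_water + eps_water)
            + (1.0 - x_water) * (d18O_O2 + eps_O2))

# Ferric-iron pathway: all sulphate oxygen from water, so delta18O(SO4)
# simply tracks the water value plus a small fractionation.
d_ferric = delta18O_sulfate(1.0, d18O_water=-5.0)
```

The field problem described above is that no choice of `x_water` in a relation of this form reproduces the highest observed δ18O(SO4–H2O) values, which is what motivates the sulphite-intermediate, sulphite–water exchange mechanism.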
Abstract:
The global monsoon system is so varied and complex that understanding and predicting its diverse behaviour remains a challenge that will occupy modellers for many years to come. Despite the difficult task ahead, an improved monsoon modelling capability has been realized through the inclusion of more detailed physics of the climate system and higher resolution in our numerical models. Perhaps the most crucial improvement to date has been the development of coupled ocean-atmosphere models. From subseasonal to interdecadal time scales, only through the inclusion of air-sea interaction can the proper phasing and teleconnections of convection be attained with respect to sea surface temperature variations. Even then, the response to slow variations in remote forcings (e.g., El Niño—Southern Oscillation) does not result in a robust solution, as there are a host of competing modes of variability that must be represented, including those that appear to be chaotic. Understanding of the links between monsoons and land surface processes is not as mature as that of air-sea interactions. A land surface forcing signal appears to dominate the onset of wet season rainfall over the North American monsoon region, though the relative role of ocean versus land forcing remains a topic of investigation in all the monsoon systems. Also, improved forecasts have been made during periods in which additional sounding observations are available for data assimilation. Thus, there is untapped predictability that can only be attained through the development of a more comprehensive observing system for all monsoon regions. Additionally, improved parameterizations - for example, of convection, cloud, radiation, and boundary layer schemes as well as land surface processes - are essential to realize the full potential of monsoon predictability.
A more comprehensive assessment is needed of the impact of black carbon aerosols, which may modulate the impact of other anthropogenic greenhouse gases. Dynamical considerations require ever increased horizontal resolution (probably to 0.5 degree or higher) in order to resolve many monsoon features including, but not limited to, the Mei-Yu/Baiu sudden onset and withdrawal, low-level jet orientation and variability, and orographically forced rainfall. Under anthropogenic climate change, many competing factors complicate making robust projections of monsoon changes. Absent aerosol effects, increased land-sea temperature contrast suggests strengthened monsoon circulation due to climate change. However, increased aerosol emissions will reflect more solar radiation back to space, which may temper or even reduce the strength of monsoon circulations compared to the present day. Precipitation may behave independently from the circulation under warming conditions, in which an increased atmospheric moisture loading, based purely on thermodynamic considerations, could result in increased monsoon rainfall under climate change. The challenge to improve model parameterizations and include more complex processes and feedbacks pushes computing resources to their limit, thus requiring continuous upgrades of computational infrastructure to ensure progress in understanding and predicting current and future behaviour of monsoons.