842 results for Flash Events
Abstract:
The relationship between the magnetic field intensity and speed of solar wind events is examined using ∼3 years of data from the ACE spacecraft. No preselection of coronal mass ejections (CMEs) or magnetic clouds is carried out. The correlation between the field intensity and maximum speed is shown to increase significantly when |B| > 18 nT for 3 hours or more. Of the 24 events satisfying this criterion, 50% are magnetic clouds, the remaining half having no ordered field structure. A weaker correlation also exists between southward magnetic field and speed. Sixteen of the events are associated with halo CMEs leaving the Sun 2 to 4 days prior to the leading edge of the events arriving at ACE. Events selected by speed thresholds show no significant correlation, suggesting different relations between field intensity and speed for fast solar wind streams and ICMEs.
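As a rough illustration of the event-selection criterion described in this abstract, the sketch below flags intervals where |B| stays above 18 nT for at least 3 hours and collects each event's peak field strength and peak speed for a correlation. The column names (`time`, `B_mag`, `speed`) and the DataFrame layout are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd

def select_events(df, b_thresh=18.0, min_hours=3.0):
    """Return (peak |B|, peak speed) pairs for runs with |B| > b_thresh lasting >= min_hours."""
    above = df["B_mag"] > b_thresh
    run_id = (above != above.shift()).cumsum()   # label contiguous runs above the threshold
    events = []
    for _, run in df[above].groupby(run_id[above]):
        duration_h = (run["time"].iloc[-1] - run["time"].iloc[0]) / np.timedelta64(1, "h")
        if duration_h >= min_hours:
            events.append((run["B_mag"].max(), run["speed"].max()))
    return events

# usage (hypothetical ACE DataFrame): r = np.corrcoef(*zip(*select_events(ace_df)))[0, 1]
```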
Abstract:
Transpolar voltages observed during traversals of the polar cap by the Defense Meteorological Satellite Program (DMSP) F-13 spacecraft during 2001 are analyzed using the expanding-contracting polar cap model of ionospheric convection. Each of the 10,216 passes is classified by its substorm phase or as a steady convection event (SCE) by inspection of the AE indices. For all phases, we detect a contribution to the transpolar voltage from reconnection at both the dayside magnetopause and the cross-tail current sheet. Detection of the IMF influence is 97% certain during quiet intervals and >99% certain during substorm/SCE growth phases but falls to 75% in substorm expansion phases; it is only 27% during SCEs. Detection of the influence of the nightside voltage is only 19% certain during growth phases, rising during expansion phases to a peak of 96% in recovery phases; during SCEs, it is >99%. The voltage during SCEs is dominated by the nightside, not the dayside, reconnection. On average, substorm expansion phases halt the growth phase rise in polar cap flux rather than reversing it. The main destruction of the excess open flux takes place during the 6- to 10-hour interval after the recovery phase (as seen in AE) and at a rate which is relatively independent of polar cap flux, because the near-Earth neutral line (NENL) has by then retreated to the far tail. The best estimate of the voltage associated with viscous-like transfer of closed field lines into the tail is around 10 kV.
Abstract:
Optical data are compared with EISCAT radar observations of multiple Naturally Enhanced Ion-Acoustic Line (NEIAL) events in the dayside cusp. This study uses narrow field-of-view cameras to observe small-scale, short-lived auroral features. Using multiple-wavelength optical observations, a direct link between NEIAL occurrences and low-energy (about 100 eV) optical emissions is shown. This is consistent with the Langmuir wave decay interpretation of NEIALs being driven by streams of low-energy electrons. Modelling work connected with this study shows that, for the measured ionospheric conditions and precipitation characteristics, growth of unstable Langmuir (electron plasma) waves can occur, which decay into ion-acoustic wave modes. The link with low-energy optical emissions shown here will enable future studies of the shape, extent, lifetime, grouping and motions of NEIALs.
Abstract:
In this paper, Bayesian decision procedures are developed for dose-escalation studies based on bivariate observations of undesirable events and signs of therapeutic benefit. The methods generalize earlier approaches that take into account only the undesirable outcomes. Logistic regression models are used to model the two responses, both of which are assumed to take a binary form. A prior distribution for the unknown model parameters is suggested, and an optional safety constraint can be included. Gain functions to be maximized are formulated in terms of accurate estimation of the limits of a therapeutic window or optimal treatment of the next cohort of subjects, although the approach could be applied to achieve any of a wide variety of objectives. The designs introduced are illustrated through simulation and retrospective application to a completed dose-escalation study. Copyright © 2006 John Wiley & Sons, Ltd.
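A minimal sketch of the kind of bivariate dose-response setup the abstract describes: separate logistic curves for the undesirable event and for therapeutic benefit, from which a crude therapeutic window can be read off. The parameter values and the window thresholds are illustrative assumptions only, not those of the paper.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def p_undesirable(dose, a=-4.0, b=1.2):
    # probability of an undesirable event, increasing with log-dose (illustrative parameters)
    return logistic(a + b * np.log(dose))

def p_benefit(dose, a=-2.5, b=1.0):
    # probability of a sign of therapeutic benefit, also increasing with log-dose
    return logistic(a + b * np.log(dose))

# crude therapeutic window: benefit likely (>= 0.5) while the undesirable event stays rare (<= 0.3)
doses = np.linspace(1.0, 100.0, 400)
window = doses[(p_benefit(doses) >= 0.5) & (p_undesirable(doses) <= 0.3)]
if window.size:
    print("illustrative therapeutic window:", window.min(), "to", window.max())
```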
Abstract:
Pharmacovigilance, the monitoring of adverse events (AEs), is an integral part of the clinical evaluation of a new drug. Until recently, attempts to relate the incidence of AEs to putative causes have been restricted to the evaluation of simple demographic and environmental factors. The advent of large-scale genotyping, however, provides an opportunity to look for associations between AEs and genetic markers, such as single nucleotide polymorphisms (SNPs). It is envisaged that a very large number of SNPs, possibly over 500,000, will be used in pharmacovigilance in an attempt to identify any genetic difference between patients who have experienced an AE and those who have not. We propose a sequential genome-wide association test for analysing AEs as they arise, allowing evidence-based decision-making at the earliest opportunity. This gives us the capability of quickly establishing whether there is a group of patients at high risk of an AE based upon their DNA. Our method provides a valid test which takes account of linkage disequilibrium and allows for the sequential nature of the procedure. The method is more powerful than using a correction, such as Šidák's, that assumes that the tests are independent. Copyright © 2006 John Wiley & Sons, Ltd.
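The Šidák correction mentioned above assumes the per-marker tests are independent; a quick back-of-the-envelope calculation (using the 500,000-SNP figure from the abstract; the 5% family-wise error rate is an assumed example) shows how stringent the resulting per-test threshold becomes.

```python
m = 500_000          # number of SNPs tested (figure quoted in the abstract)
alpha_family = 0.05  # assumed family-wise error rate, for illustration only

alpha_per_test = 1 - (1 - alpha_family) ** (1 / m)  # Sidak per-test significance level
print(f"Sidak per-test significance level: {alpha_per_test:.3e}")  # ~1.03e-07
```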
Abstract:
Ancient DNA (aDNA) research has long depended on the power of PCR to amplify trace amounts of surviving genetic material from preserved specimens. While PCR permits specific loci to be targeted and amplified, in many ways it can be intrinsically unsuited to damaged and degraded aDNA templates. PCR amplification of aDNA can produce highly skewed distributions with significant contributions from miscoding lesion damage and non-authentic sequence artefacts. As traditional PCR-based approaches have been unable to fully resolve the molecular nature of aDNA damage over many years, we have developed a novel single primer extension (SPEX)-based approach to generate more accurate sequence information. SPEX targets selected template strands at defined loci and can generate a quantifiable redundancy of coverage, providing new insights into the molecular nature of aDNA damage and fragmentation. SPEX sequence data reveal inherent limitations in both traditional and metagenomic PCR-based approaches to aDNA, which can make current damage analyses and correct genotyping of ancient specimens problematic. In contrast to previous aDNA studies, SPEX provides strong quantitative evidence that C→U-type base modifications are the sole cause of authentic endogenous damage-derived miscoding lesions. This new approach could allow ancient specimens to be genotyped with unprecedented accuracy.
Abstract:
The photochemistry of 1,1-dimethyl- and 1,1,3,4-tetramethylstannacyclopent-3-ene (4a and 4b, respectively) has been studied in the gas phase and in hexane solution by steady-state and 193-nm laser flash photolysis methods. Photolysis of the two compounds results in the formation of 1,3-butadiene (from 4a) and 2,3-dimethyl-1,3-butadiene (from 4b) as the major products, suggesting that cycloreversion to yield dimethylstannylene (SnMe2) is the main photodecomposition pathway of these molecules. Indeed, the stannylene has been trapped as the Sn-H insertion product upon photolysis of 4a in hexane containing trimethylstannane. Flash photolysis of 4a in the gas phase affords a transient absorbing in the 450-520 nm range that is assigned to SnMe2 by comparison of its spectrum and reactivity to those previously reported from other precursors. Flash photolysis of 4b in hexane solution affords results consistent with the initial formation of SnMe2 (λmax ≈ 500 nm), which decays over ∼10 μs to form tetramethyldistannene (5b; λmax ≈ 470 nm). The distannene decays over the next ca. 50 μs to form at least two other longer-lived species, which are assigned to higher SnMe2 oligomers. Time-dependent DFT calculations support the spectral assignments for SnMe2 and Sn2Me4, and calculations examining the variation in bond dissociation energy with substituent (H, Me, and Ph) in disilenes, digermenes, and distannenes rule out the possibility that dimerization of SnMe2 proceeds reversibly. Addition of methanol leads to reversible reaction with SnMe2 to form a transient absorbing at λmax ≈ 360 nm, which is assigned to the Lewis acid-base complex between SnMe2 and the alcohol.
Abstract:
Time resolved gas-phase kinetic studies have contributed a great deal of fundamental information about the reactions and reactivity of heavy carbenes (silylenes, germylenes and stannylenes) during the past two decades. In this article we trace the development of our understanding through the mechanistic themes of intermediate complexes, third body assisted associations, catalysed reactions, non-observed reactions and substituent effects. Ab initio (quantum chemical) calculations have substantially assisted mechanistic interpretation and are discussed where appropriate. Trends in reactivity are identified and some signposts to future studies are indicated. This review, although detailed, is not comprehensive.
Abstract:
When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset (the attentional blink). However, if T2 immediately follows T1, performance is often reported to be as good as that at long lags (the so-called Lag-1 sparing effect). Two experiments investigated the mechanisms underlying this effect. Experiment 1 showed that, at Lag 1, requiring subjects to correctly report both the identity and temporal order of the targets produces relatively good performance on T2 but relatively bad performance on T1. Experiment 2 confirmed that subjects often confuse target order at short lags, especially if the two targets are equally easy to discriminate. The results suggest that, if two targets appear in close succession, they compete for attentional resources. If the two competitors are of unequal strength, the stronger one is more likely to win and be reported at the expense of the other. If the two are equally strong, however, they will often be integrated into the same attentional episode and thus both gain access to attentional resources. But this comes at a cost, as it eliminates information about the targets' temporal order.
Abstract:
Investigation of the anatomical substructure of the medial temporal lobe has revealed a number of highly interconnected areas, which has led some to propose that the region operates as a unitary memory system. Here, however, we outline the results of a number of studies from our laboratories that investigate the contributions of the rat's perirhinal cortex and postrhinal cortex to memory, concentrating particularly on their respective roles in memory for objects. By contrasting patterns of impairment and spared abilities on a number of related tasks, we suggest that perirhinal cortex and postrhinal cortex make distinctive contributions to learning and memory: for example, postrhinal cortex is important in learning about within-scene position and context. We also provide evidence that, despite the strong connectivity between these cortical regions and the hippocampus, the hippocampus (as evidenced by lesions of the fornix) has a distinct function of its own: combining information about objects, positions, and contexts.
Abstract:
The main activity carried out by the geophysicist when interpreting seismic data, in terms of both importance and time spent, is tracking (or picking) seismic events. In practice, this activity turns out to be rather challenging, particularly when the targeted event is interrupted by discontinuities such as geological faults or exhibits lateral changes in seismic character. In recent years, several automated schemes, known as auto-trackers, have been developed to assist the interpreter in this tedious and time-consuming task. The automatic tracking tools available in modern interpretation software packages often employ artificial neural networks (ANNs) to identify seismic picks belonging to target events through a pattern recognition process. The ability of ANNs to track horizons across discontinuities largely depends on how reliably the data patterns characterise these horizons. While seismic attributes are commonly used to characterise the amplitude peaks forming a seismic horizon, some researchers in the field claim that inherent seismic information is lost in the attribute extraction process and advocate instead the use of raw data (amplitude samples). This paper investigates the performance of ANNs using either characterisation method, and demonstrates how the complementarity of seismic attributes and raw data can be exploited in conjunction with other geological information in a fuzzy inference system (FIS) to achieve enhanced auto-tracking performance.
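As a rough sketch of the two characterisation strategies contrasted in this abstract, the snippet below builds, for a candidate pick on a seismic trace, either a window of raw amplitude samples or a small vector of derived attributes; either representation could then be fed to an ANN classifier. The specific attributes and window size are illustrative assumptions, not those used in the paper.

```python
import numpy as np

def raw_window(trace, pick_idx, half_width=8):
    """Raw amplitude samples centred on the candidate pick."""
    return trace[pick_idx - half_width : pick_idx + half_width + 1]

def attribute_vector(trace, pick_idx, half_width=8):
    """A few simple attributes computed from the same window (illustrative only)."""
    w = raw_window(trace, pick_idx, half_width)
    return np.array([
        w[half_width],                              # amplitude at the pick itself
        np.sqrt(np.mean(w ** 2)),                   # RMS energy of the window
        float(np.argmax(np.abs(w)) - half_width),   # offset of the strongest sample
    ])

# either raw_window(...) or attribute_vector(...) serves as the input pattern for the classifier
```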
Abstract:
Most of the dissolved organic carbon (DOC) exported from catchments is transported during storm events. Accurate assessments of DOC fluxes are essential to understand long-term trends in the transport of DOC from terrestrial to aquatic systems, and also the loss of carbon from peatlands, in order to determine changes in the source/sink status of peatland carbon stores. However, many long-term monitoring programmes collect water samples at intervals (e.g. weekly or monthly) longer than the duration of a typical storm event (typically <1–2 days). As widespread observations in catchments dominated by organo-mineral soils have shown that both the concentration and flux of DOC increase during storm events, lower-frequency monitoring could result in substantial underestimation of DOC flux, as the most dynamic periods of transport are missed. However, our intensive monitoring study in a UK upland peatland catchment showed a contrasting response to these previous studies. Our results showed that (i) DOC concentrations decreased during autumn storm events and showed a poor relationship with flow during other seasons; and (ii) this decrease in concentrations during autumn storms caused DOC flux estimates based on weekly monitoring data to be over-estimated, rather than under-estimated, because the flow-weighted mean concentration used in the flux calculations was over-estimated rather than under-estimated. However, as DOC flux is ultimately controlled by discharge volume, and therefore rainfall, and the magnitude of the change in discharge was greater than the magnitude of the decline in concentrations, DOC flux increased during individual storm events. The implications for long-term DOC trends are therefore contradictory, as increased rainfall could increase flux but cause an overall decrease in DOC concentrations in peatland streams. Care needs to be taken when interpreting long-term trends in DOC flux rather than concentration: because flux is calculated from discharge estimates, and discharge is controlled by rainfall, DOC flux and rainfall/discharge will always be well correlated.
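A back-of-the-envelope sketch of the flux bookkeeping behind this result: flux is estimated as a flow-weighted mean DOC concentration multiplied by total discharge, so if infrequent sampling misses the dilution during autumn storms, the flow-weighted mean, and hence the flux estimate, is biased high. All of the numbers below are made up for illustration; they are not data from the study.

```python
import numpy as np

discharge = np.array([1.0, 1.2, 8.0, 6.0, 1.5])     # m3/s, including a storm peak
doc_conc  = np.array([12.0, 11.5, 7.0, 8.0, 11.0])  # mg/L, diluted during the storm

def flow_weighted_mean(conc, q):
    return np.sum(conc * q) / np.sum(q)

true_mean = flow_weighted_mean(doc_conc, discharge)
# sparse (e.g. weekly) sampling that happens to miss the storm sees only baseflow samples
sparse_mean = flow_weighted_mean(doc_conc[[0, 4]], discharge[[0, 4]])
print(true_mean, sparse_mean)  # sparse_mean > true_mean, so the flux estimate is biased high
```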
Abstract:
A key strategy for improving the skill of quantitative predictions of precipitation, as well as of hazardous weather such as severe thunderstorms and flash floods, is to exploit observations of convective activity (e.g. from radar). In this paper, a convection-permitting ensemble prediction system (EPS) aimed at addressing the problems of forecasting localized weather events with relatively short predictability time scales, based on a 1.5 km grid-length version of the Met Office Unified Model, is presented. Particular attention is given to the impact of using predicted observations of radar-derived precipitation intensity in the ensemble transform Kalman filter (ETKF) used within the EPS. Our initial results, based on a 24-member ensemble of forecasts for two summer case studies, show that the convective-scale EPS produces fairly reliable forecasts of temperature, horizontal winds and relative humidity at 1 h lead time, as evident from inspection of rank histograms. On the other hand, the rank histograms also seem to show that the EPS generates too much spread for forecasts of (i) surface pressure and (ii) surface precipitation intensity. This may indicate that, for (i), the surface pressure observation error standard deviation used to generate the surface pressure rank histograms is too large, while (ii) may be the result of non-Gaussian precipitation observation errors. However, further investigation is needed to better understand these findings. Finally, the inclusion of predicted observations of precipitation from radar in the 24-member EPS considered in this paper does not seem to improve the forecast skill at 1 h lead time.
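A minimal sketch of the rank-histogram diagnostic the abstract leans on: for each verification case, the observation's rank among the ensemble members is tallied, and ranks are accumulated over many cases; a flat histogram indicates reliable spread, while a dome-shaped one (central ranks over-populated) indicates too much spread. The synthetic data are for illustration only; only the 24-member ensemble size is taken from the abstract.

```python
import numpy as np

def rank_histogram(ensemble, obs):
    """ensemble: (n_cases, n_members); obs: (n_cases,). Returns counts over n_members+1 rank bins."""
    n_cases, n_members = ensemble.shape
    ranks = np.sum(ensemble < obs[:, None], axis=1)   # how many members fall below each observation
    return np.bincount(ranks, minlength=n_members + 1)

# usage with synthetic data: a 24-member ensemble verified against 500 cases
rng = np.random.default_rng(0)
counts = rank_histogram(rng.normal(size=(500, 24)), rng.normal(size=500))
print(counts)
```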
Abstract:
We test Slobin's (2003) Thinking-for-Speaking hypothesis on data from different groups of Turkish-German bilinguals: those living in Germany and those who have returned to Turkey.