Abstract:
By combining measurements of lightning-induced electrostatic field changes with radio-frequency lightning location, field changes from exceptionally distant lightning events become apparent that are inconsistent with the usual inverse-cube dependence on distance. Furthermore, by using two measurement sites, a transition zone can be identified beyond which the electric field response reverses polarity. For these severe lightning events, we infer a horizontally extensive charge sheet above a thunderstorm, consistent with a mesospheric halo several hundred kilometers in extent.
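The abstract's anomaly test rests on the inverse-cube scaling of field changes from a compact charge rearrangement. A minimal sketch of that expectation, with illustrative distances (not values from the paper):

```python
# Expected ratio of electrostatic field changes for a compact charge
# rearrangement, which falls off as the inverse cube of distance.
# The 100 km / 200 km distances below are illustrative only.

def field_change_ratio(r_near_km: float, r_far_km: float) -> float:
    """Ratio E(far)/E(near) under the usual 1/r^3 dependence."""
    return (r_near_km / r_far_km) ** 3

# A source twice as far away should produce a field change 8x smaller;
# observed changes much larger than this hint at an extended charge sheet.
print(field_change_ratio(100.0, 200.0))  # 0.125
```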
Abstract:
The relative contributions of five variables of virtual reality systems (stereoscopy, screen size, field of view, level of realism and level of detail) to spatial comprehension and presence are evaluated here. Using a variable-centered approach rather than an object-centric view as its theoretical basis, the contributions of these five variables and their two-way interactions are estimated through a 2^(5-1) fractional factorial experiment (screening design) of resolution V with 84 subjects. The experimental design, procedure, measures used, creation of scales and indices, results of the statistical analysis, their meaning and an agenda for future research are elaborated.
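A 2^(5-1) resolution V design halves the full factorial by generating the fifth factor from the four-way interaction. A hedged sketch of how such a design is constructed (the generator E = ABCD is the standard choice; the paper may use a different one):

```python
from itertools import product

# Sketch of a 2^(5-1) resolution V screening design: five two-level
# factors in 16 runs, with the fifth factor generated as E = A*B*C*D
# (defining relation I = ABCDE). Factor order is illustrative.

def fractional_factorial_2_5_1():
    runs = []
    for a, b, c, d in product((-1, 1), repeat=4):
        e = a * b * c * d  # generator confounds E with the ABCD interaction
        runs.append((a, b, c, d, e))
    return runs

design = fractional_factorial_2_5_1()
print(len(design))  # 16 runs instead of the full factorial's 32
```

With resolution V, main effects and two-way interactions are not aliased with each other, which is exactly what a screening study of five variables and their two-way interactions needs.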
Abstract:
We use density functional theory calculations with Hubbard corrections (DFT+U) to investigate electronic aspects of the interaction between ceria surfaces and gold atoms. Our results show that Au adatoms at the (111) surface of ceria can adopt Au0, Au+ or Au− electronic configurations depending on the adsorption site. The strongest adsorption sites are on top of the surface oxygen and in a bridge position between two surface oxygen atoms, and in both cases charge transfer from the gold atom to one of the Ce cations at the surface is involved. Adsorption at other sites, including the hollow sites of the surface, and an O–Ce bridging site, is weaker and does not involve charge transfer. Adsorption at an oxygen vacancy site is very strong and involves the formation of an Au− anion. We argue that the ability of gold atoms to stabilise oxygen vacancies at the ceria surface by moving into the vacancy site and attracting the excess electrons of the defect could be responsible for the enhanced reducibility of ceria surfaces in the presence of gold. Finally, we rationalise the differences in charge transfer behaviour from site to site in terms of the electrostatic potential at the surface and the coordination of the species.
Abstract:
Charged aerosol particles and water droplets are abundant throughout the lower atmosphere, and may influence interactions between small cloud droplets. This note describes a small, disposable sensor for the measurement of charge in non-thunderstorm cloud, which is an improvement of an earlier sensor [K. A. Nicoll and R. G. Harrison, Rev. Sci. Instrum. 80, 014501 (2009)]. The sensor utilizes a self-calibrating current measurement method. It is designed for use on a free balloon platform alongside a standard meteorological radiosonde, measuring currents from 2 fA to 15 pA and is stable to within 5 fA over a temperature range of 5 °C to −60 °C. During a balloon flight with the charge sensor through a stratocumulus cloud, charge layers up to 40 pC m−3 were detected on the cloud edges.
Abstract:
We present simultaneous multicolor infrared and optical photometry of the black hole X-ray transient XTE J1118+480 during its short 2005 January outburst, supported by simultaneous X-ray observations. The variability is dominated by short timescales, ~10 s, although a weak superhump also appears to be present in the optical. The optical rapid variations, at least, are well correlated with those in X-rays. Infrared JHKs photometry, as in the previous outburst, exhibits especially large-amplitude variability. The spectral energy distribution (SED) of the variable infrared component can be fitted with a power law of slope α = −0.78 ± 0.07, where F_ν ~ ν^α. There is no compelling evidence for evolution in the slope over five nights, during which time the source brightness decayed along almost the same track as seen in variations within the nights. We conclude that both short-term variability and longer timescale fading are dominated by a single component of constant spectral shape. We cannot fit the SED of the IR variability with a credible thermal component, either optically thick or thin. This IR SED is, however, approximately consistent with optically thin synchrotron emission from a jet. These observations therefore provide indirect evidence to support jet-dominated models for XTE J1118+480 and also provide a direct measurement of the slope of the optically thin emission, which is impossible based on the average spectral energy distribution alone.
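A power-law slope like α in F_ν ~ ν^α is commonly estimated by a least-squares fit in log-log space. A minimal sketch with synthetic data (the frequencies and fluxes below are illustrative, not the paper's photometry):

```python
import math
from statistics import mean

# Sketch: estimating the power-law slope alpha in F_nu ~ nu^alpha by a
# least-squares fit in log-log space. The data below are synthetic,
# generated to follow F_nu = nu^(-0.78) exactly.

def power_law_slope(freqs, fluxes):
    xs = [math.log(f) for f in freqs]
    ys = [math.log(s) for s in fluxes]
    xbar, ybar = mean(xs), mean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den  # slope of log F vs log nu

nus = [1.4e14, 1.8e14, 2.4e14, 3.7e14]   # rough Ks/H/J/optical frequencies
fluxes = [nu ** -0.78 for nu in nus]
print(round(power_law_slope(nus, fluxes), 2))  # -0.78
```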
Abstract:
The question of what explains variation in expenditures on Active Labour Market Programs (ALMPs) has attracted significant scholarship in recent years. Significant insights have been gained with respect to the role of employers, unions and dual labour markets, openness, and partisanship. However, significant disagreements remain with respect to key explanatory variables such as the role of unions or the impact of partisanship. Qualitative studies have shown that there are both good conceptual reasons and historical evidence that different ALMPs are driven by different dynamics. There is little reason to believe that vastly different programs, such as training and employment subsidies, are driven by similar structural, interest-group or indeed partisan dynamics. The question is therefore whether different ALMPs correlate in the same way with the key explanatory variables identified in the literature. Using regression analysis, this paper shows that these explanatory variables relate differently to distinct ALMPs. This refinement adds significant analytical value and shows that disagreements are at least partly due to a dependent-variable problem of ‘over-aggregation’.
Abstract:
Bayesian analysis is given of an instrumental variable model that allows for heteroscedasticity in both the structural equation and the instrument equation. Specifically, the approach for dealing with heteroscedastic errors in Geweke (1993) is extended to the Bayesian instrumental variable estimator outlined in Rossi et al. (2005). Heteroscedasticity is treated by modelling the variance for each error using a hierarchical prior that is Gamma distributed. The computation is carried out by using a Markov chain Monte Carlo sampling algorithm with an augmented draw for the heteroscedastic case. An example using real data illustrates the approach and shows that ignoring heteroscedasticity in the instrument equation when it exists may lead to biased estimates.
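In the Geweke (1993) treatment, each error gets its own variance scale ω_i whose inverse has a Gamma full conditional, which is what the augmented MCMC draw updates. A hedged sketch of that single sub-step, not the paper's full sampler; the degrees of freedom ν, σ² and the residuals below are illustrative:

```python
import random

# Sketch of a Geweke (1993)-style augmentation draw: error i has variance
# sigma^2 * omega_i, and omega_i^-1 is drawn from its Gamma full
# conditional given the current residual e_i. All numbers are illustrative.

def draw_omegas(residuals, sigma2, nu, rng=random):
    """Draw omega_i^-1 ~ Gamma(shape=(nu+1)/2, rate=(nu + e_i^2/sigma2)/2)."""
    omegas = []
    for e in residuals:
        shape = (nu + 1.0) / 2.0
        rate = (nu + e * e / sigma2) / 2.0
        inv_omega = rng.gammavariate(shape, 1.0 / rate)  # scale = 1/rate
        omegas.append(1.0 / inv_omega)
    return omegas

rng = random.Random(0)
omegas = draw_omegas([0.1, -2.5, 0.3], sigma2=1.0, nu=5.0, rng=rng)
# Large residuals tend to receive large variance-inflation factors,
# which is how the sampler accommodates heteroscedastic errors.
```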
Abstract:
The hypothesis that pronouns can be resolved via either the syntax or the discourse representation has played an important role in linguistic accounts of pronoun interpretation (e.g. Grodzinsky & Reinhart, 1993). We report the results of an eye-movement monitoring study investigating the relative timing of syntactically-mediated variable binding and discourse-based coreference assignment during pronoun resolution. We examined whether ambiguous pronouns are preferentially resolved via either the variable binding or coreference route, and in particular tested the hypothesis that variable binding should always be computed before coreference assignment. Participants’ eye movements were monitored while they read sentences containing a pronoun and two potential antecedents, a c-commanding quantified noun phrase and a non c-commanding proper name. Gender congruence between the pronoun and either of the two potential antecedents was manipulated as an experimental diagnostic for dependency formation. In two experiments, we found that participants’ reading times were reliably longer when the linearly closest antecedent mismatched in gender with the pronoun. These findings fail to support the hypothesis that variable binding is computed before coreference assignment, and instead suggest that antecedent recency plays an important role in affecting the extent to which a variable binding antecedent is considered. We discuss these results in relation to models of memory retrieval during sentence comprehension, and interpret the antecedent recency preference as an example of forgetting over time.
Abstract:
The new compounds [Ru(R-DAB)(acac)2] (R-DAB = 1,4-diorganyl-1,4-diazabuta-1,3-diene; R = tert-butyl, 4-methoxyphenyl, 2,6-dimethylphenyl; acac– = 2,4-pentanedionate) exhibit intrachelate ring bond lengths 1.297
Abstract:
Over Arctic sea ice, pressure ridges and the edges of floes and melt ponds all introduce discrete obstructions to the flow of air or water past the ice and are a source of form drag. In current climate models form drag is only accounted for by tuning the air–ice and ice–ocean drag coefficients, that is, by effectively altering the roughness length in a surface drag parameterization. This skin-drag tuning approach is poorly constrained by observations and fails to describe correctly the physics associated with the air–ice and ocean–ice drag. Here, the authors combine recent theoretical developments to deduce the total neutral form drag coefficients from properties of the ice cover such as ice concentration, vertical extent and area of the ridges, freeboard and floe draft, and the size of floes and melt ponds. The drag coefficients are incorporated into the Los Alamos Sea Ice Model (CICE) to show the influence of the new drag parameterization on the motion and state of the ice cover, the most noticeable effect being a depletion of sea ice along the west boundary of the Arctic Ocean and over the Beaufort Sea. The new parameterization allows the drag coefficients to be coupled to the sea ice state and therefore to evolve spatially and temporally. The range of values predicted for the drag coefficients agrees with the range measured in several regions of the Arctic. Finally, the implications of the new form drag formulation for the spin-up or spin-down of the Arctic Ocean are discussed.
Abstract:
BACKGROUND: Low plasma 25-hydroxyvitamin D (25[OH]D) concentration is associated with high arterial blood pressure and hypertension risk, but whether this association is causal is unknown. We used a mendelian randomisation approach to test whether 25(OH)D concentration is causally associated with blood pressure and hypertension risk. METHODS: In this mendelian randomisation study, we generated an allele score (25[OH]D synthesis score) based on variants of genes that affect 25(OH)D synthesis or substrate availability (CYP2R1 and DHCR7), which we used as a proxy for 25(OH)D concentration. We meta-analysed data for up to 108 173 individuals from 35 studies in the D-CarDia collaboration to investigate associations between the allele score and blood pressure measurements. We complemented these analyses with previously published summary statistics from the International Consortium on Blood Pressure (ICBP), the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium, and the Global Blood Pressure Genetics (Global BPGen) consortium. FINDINGS: In phenotypic analyses (up to n=49 363), increased 25(OH)D concentration was associated with decreased systolic blood pressure (β per 10% increase, -0·12 mm Hg, 95% CI -0·20 to -0·04; p=0·003) and reduced odds of hypertension (odds ratio [OR] 0·98, 95% CI 0·97-0·99; p=0·0003), but not with decreased diastolic blood pressure (β per 10% increase, -0·02 mm Hg, -0·08 to 0·03; p=0·37). In meta-analyses in which we combined data from D-CarDia and the ICBP (n=146 581, after exclusion of overlapping studies), each 25(OH)D-increasing allele of the synthesis score was associated with a change of -0·10 mm Hg in systolic blood pressure (-0·21 to -0·0001; p=0·0498) and a change of -0·08 mm Hg in diastolic blood pressure (-0·15 to -0·02; p=0·01). 
When D-CarDia and consortia data for hypertension were meta-analysed together (n=142 255), the synthesis score was associated with a reduced odds of hypertension (OR per allele, 0·98, 0·96-0·99; p=0·001). In instrumental variable analysis, each 10% increase in genetically instrumented 25(OH)D concentration was associated with a change of -0·29 mm Hg in diastolic blood pressure (-0·52 to -0·07; p=0·01), a change of -0·37 mm Hg in systolic blood pressure (-0·73 to 0·003; p=0·052), and an 8·1% decreased odds of hypertension (OR 0·92, 0·87-0·97; p=0·002). INTERPRETATION: Increased plasma concentrations of 25(OH)D might reduce the risk of hypertension. This finding warrants further investigation in an independent, similarly powered study.
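The instrumental variable analysis described above rests on the ratio (Wald) estimator familiar from mendelian randomisation: the gene-outcome effect divided by the gene-exposure effect. A minimal sketch with illustrative coefficients (not the study's actual per-allele estimates):

```python
# Sketch of the Wald ratio estimator underlying mendelian randomisation:
# causal effect of exposure (e.g. 25(OH)D) on outcome (e.g. blood
# pressure) = gene-outcome effect / gene-exposure effect.
# The coefficients below are illustrative only.

def wald_ratio(beta_gene_outcome: float, beta_gene_exposure: float) -> float:
    return beta_gene_outcome / beta_gene_exposure

# If one allele raises log-25(OH)D by 0.02 and lowers systolic blood
# pressure by 0.10 mm Hg, the implied effect per unit increase in
# log-25(OH)D is -5.0 mm Hg:
print(round(wald_ratio(-0.10, 0.02), 2))  # -5.0
```

The estimator is unbiased only under the usual instrument assumptions: the variant affects the outcome solely through the exposure, with no confounding of the gene-outcome association.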
Abstract:
The analysis step of the (ensemble) Kalman filter is optimal when (1) the distribution of the background is Gaussian, (2) state variables and observations are related via a linear operator, and (3) the observational error is additive in nature and Gaussian distributed. When these conditions are largely violated, a pre-processing step known as Gaussian anamorphosis (GA) can be applied. The objective of this procedure is to obtain state variables and observations that better fulfil the Gaussianity conditions in some sense. In this work we analyse GA from a joint perspective, paying attention to the effects of transformations in the joint state variable/observation space. First, we study transformations for state variables and observations that are independent of each other. Then, we introduce a targeted joint transformation with the objective of obtaining joint Gaussianity in the transformed space. We focus primarily on the univariate case, and briefly comment on the multivariate one. A key point of this paper is that, when conditions (1)-(3) are violated, the analysis step of the EnKF will not recover the exact posterior density, regardless of any transformations one may perform. These transformations, however, provide approximations of different quality to the Bayesian solution of the problem. Using an example in which the Bayesian posterior can be computed analytically, we assess the quality of the analysis distributions generated after applying the EnKF analysis step in conjunction with different GA options. The value of the targeted joint transformation is particularly clear when the prior is Gaussian, the marginal density of the observations is close to Gaussian, and the likelihood is a Gaussian mixture.
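The per-variable ("independent") transform discussed above is often implemented as an empirical quantile mapping through the standard-normal inverse CDF. A minimal sketch of that marginal anamorphosis, not the paper's targeted joint transform; the sample data are illustrative:

```python
from statistics import NormalDist

# Sketch of a marginal Gaussian anamorphosis: map each value to its
# empirical quantile and then through the standard-normal inverse CDF,
# so the transformed sample is approximately N(0, 1).

def gaussian_anamorphosis(sample):
    n = len(sample)
    ranks = {}
    for r, v in enumerate(sorted(sample), start=1):
        ranks.setdefault(v, r)              # keep the first rank for ties
    nd = NormalDist()
    # (r - 0.5) / n keeps the quantiles strictly inside (0, 1)
    return [nd.inv_cdf((ranks[v] - 0.5) / n) for v in sample]

skewed = [0.1, 0.4, 1.2, 3.5, 9.7]          # illustrative skewed sample
z = gaussian_anamorphosis(skewed)
# The transform is monotone, so the ordering of the sample is preserved,
# and the median of the input maps to 0 in the transformed space.
```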
Abstract:
A basic data requirement of a river flood inundation model is a Digital Terrain Model (DTM) of the reach being studied. The scale at which modeling is required determines the accuracy required of the DTM. For modeling floods in urban areas, a high resolution DTM such as that produced by airborne LiDAR (Light Detection And Ranging) is most useful, and large parts of many developed countries have now been mapped using LiDAR. In remoter areas, it is possible to model flooding on a larger scale using a lower resolution DTM, and in the near future the DTM of choice is likely to be that derived from the TanDEM-X Digital Elevation Model (DEM). A variable-resolution global DTM obtained by combining existing high and low resolution data sets would be useful for modeling flood water dynamics globally, at high resolution wherever possible and at lower resolution over larger rivers in remote areas. A further important data resource used in flood modeling is the flood extent, commonly derived from Synthetic Aperture Radar (SAR) images. Flood extents become more useful if they are intersected with the DTM, when water level observations (WLOs) at the flood boundary can be estimated at various points along the river reach. To illustrate the utility of such a global DTM, two examples of recent research involving WLOs at opposite ends of the spatial scale are discussed. The first requires high resolution spatial data, and involves the assimilation of WLOs from a real sequence of high resolution SAR images into a flood model to update the model state with observations over time, and to estimate river discharge and model parameters, including river bathymetry and friction. The results indicate the feasibility of such an Earth Observation-based flood forecasting system. The second example is at a larger scale, and uses SAR-derived WLOs to improve the lower-resolution TanDEM-X DEM in the area covered by the flood extents. The resulting reduction in random height error is significant.
Abstract:
Understanding the origin of the properties of metal-supported metal thin films is important for the rational design of bimetallic catalysts and other applications, but it is generally difficult to separate effects related to strain from those arising from interface interactions. Here we use density functional theory (DFT) to examine the structure and electronic behavior of few-layer palladium films on the rhenium (0001) surface, where interfacial strain is negligible and other effects can therefore be isolated. Our DFT calculations predict stacking sequences and interlayer separations in excellent agreement with quantitative low-energy electron diffraction experiments. By theoretically simulating the Pd core-level X-ray photoemission spectra (XPS) of the films, we are able to interpret and assign the basic features of both low-resolution and high-resolution XPS measurements. The core levels at the interface shift to more negative energies, rigidly following the shifts in the same direction of the valence d-band center. We demonstrate that the valence band shift at the interface is caused by charge transfer from Re to Pd, which occurs mainly to valence states of hybridized s-p character rather than to the Pd d-band. Since the d-band filling is roughly constant, there is a correlation between the d-band center shift and its bandwidth. The resulting effect of this charge transfer on the valence d-band is thus analogous to the application of a lateral compressive strain on the adlayers. Our analysis suggests that charge transfer should be considered when describing the origin of core and valence band shifts in other metal/metal adlayer systems.