104 results for Turbocharger Lag
in CentAUR: Central Archive at the University of Reading, UK
Abstract:
A method is presented for determining the time to first division of individual bacterial cells growing on agar media. Bacteria were inoculated onto agar-coated slides and viewed by phase-contrast microscopy. Digital images of the growing bacteria were captured at intervals and the time to first division estimated by calculating the "box area ratio". This is the area of the smallest rectangle that can be drawn around an object, divided by the area of the object itself. The box area ratios of cells were found to increase suddenly during growth at a time that correlated with cell division as estimated by visual inspection of the digital images. This was caused by a change in the orientation of the two daughter cells that occurred when sufficient flexibility arose at their point of attachment. This method was used successfully to generate lag time distributions for populations of Escherichia coli, Listeria monocytogenes and Pseudomonas aeruginosa, but did not work with the coccoid organism Staphylococcus aureus. This method provides an objective measure of the time to first cell division, whilst automation of the data processing allows a large number of cells to be examined per experiment.
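As an illustrative sketch (not the authors' code), the box area ratio can be computed directly from an object's pixel coordinates; an axis-aligned bounding box is used here for simplicity in place of the paper's smallest enclosing rectangle, and the rod/bent cell shapes are invented toy data:

```python
def box_area_ratio(pixels):
    """Box area ratio: area of the bounding rectangle around an object,
    divided by the object's own area (its pixel count). An axis-aligned
    box is used here as a simplification of the paper's smallest
    enclosing rectangle."""
    xs = [x for x, y in pixels]
    ys = [y for x, y in pixels]
    box_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return box_area / len(pixels)

# A straight 1x4 "rod" cell fills its bounding box exactly:
rod = [(0, 0), (1, 0), (2, 0), (3, 0)]
# An L-shaped pair of daughter cells leaves empty box corners,
# so the ratio jumps above 1 -- the division signal:
bent = [(0, 0), (1, 0), (2, 0), (2, 1)]
print(box_area_ratio(rod))   # 1.0
print(box_area_ratio(bent))  # 1.5
```

The sudden jump in the ratio, rather than its absolute value, is what marks the division time in the method described above.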
Abstract:
Optical density measurements were used to estimate the effect of heat treatments on the single-cell lag times of Listeria innocua fitted to a shifted gamma distribution. The single-cell lag time was subdivided into repair time (the shift of the distribution, assumed to be uniform for all cells) and adjustment time (varying randomly from cell to cell). After heat treatments in which all of the cells recovered (sublethal), the repair time and the mean and the variance of the single-cell adjustment time increased with the severity of the treatment. When the heat treatments resulted in a loss of viability (lethal), the repair time of the survivors increased with the decimal reduction of the cell numbers independently of the temperature, while the mean and variance of the single-cell adjustment times remained the same irrespective of the heat treatment. Based on these observations and modeling of the effect of time and temperature of the heat treatment, we propose that the severity of a heat treatment can be characterized by the repair time of the cells whether the heat treatment is lethal or not, an extension of the F value concept for sublethal heat treatments. In addition, the repair time could be interpreted as the extent or degree of injury with a multiple-hit lethality model. Another implication of these results is that the distribution of the time for cells to reach unacceptable numbers in food is not affected by the time-temperature combination resulting in a given decimal reduction.
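A minimal sketch of the shifted-gamma model, assuming illustrative parameter values (repair time 2, gamma shape 4, scale 0.5, in arbitrary time units) rather than the paper's fitted ones:

```python
import random

def sample_single_cell_lag(repair_time, shape, scale, n=10000, seed=1):
    """Shifted-gamma model of single-cell lag: a fixed repair time
    (the shift, common to all cells) plus a gamma-distributed
    adjustment time that varies from cell to cell.
    All parameter values here are illustrative assumptions."""
    rng = random.Random(seed)
    return [repair_time + rng.gammavariate(shape, scale) for _ in range(n)]

lags = sample_single_cell_lag(repair_time=2.0, shape=4.0, scale=0.5)
mean_lag = sum(lags) / len(lags)   # expect ~ shift + shape*scale = 4.0
```

Under this decomposition, a more severe sublethal treatment shifts the whole distribution right (larger repair time) and also widens it (larger adjustment-time mean and variance).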
Abstract:
When people monitor a visual stream of rapidly presented stimuli for two targets (T1 and T2), they often miss T2 if it falls into a time window of about half a second after T1 onset: the attentional blink. However, if T2 immediately follows T1, performance is often reported to be as good as that at long lags: the so-called Lag-1 sparing effect. Two experiments investigated the mechanisms underlying this effect. Experiment 1 showed that, at Lag 1, requiring subjects to correctly report both the identity and the temporal order of the targets produces relatively good performance on T2 but relatively bad performance on T1. Experiment 2 confirmed that subjects often confuse target order at short lags, especially if the two targets are equally easy to discriminate. Results suggest that, if two targets appear in close succession, they compete for attentional resources. If the two competitors are of unequal strength, the stronger one is more likely to win and be reported at the expense of the other. If the two are equally strong, however, they will often be integrated into the same attentional episode and thus both gain access to attentional resources. But this comes at a cost, as it eliminates information about the targets' temporal order.
Abstract:
Measurements of the ionospheric E-region during total solar eclipses have been used to provide information about the evolution of the solar magnetic field and EUV and X-ray emissions from the solar corona and chromosphere. By measuring levels of ionisation during an eclipse and comparing these measurements with an estimate of the unperturbed ionisation levels (such as those made during a control day, where available) it is possible to estimate the percentage of ionising radiation being emitted by the solar corona and chromosphere. Previously unpublished data from the two eclipses presented here are particularly valuable as they provide information that supplements the data published to date. The eclipse of 23 October 1976 over Australia provides information in a data gap that would otherwise have spanned the years 1966 to 1991. The eclipse of 4 December 2002 over Southern Africa is important as it extends the published sequence of measurements. Comparing measurements from eclipses between 1932 and 2002 with the solar magnetic source flux reveals that changes in the solar EUV and X-ray flux lag the open source flux measurements by approximately 1.5 years. We suggest that this unexpected result comes about from changes to the relative size of the limb corona between eclipses, with the lag representing the time taken to populate the coronal field with plasma hot enough to emit the EUV and X-rays ionising our atmosphere.
Abstract:
In an adaptive equaliser, the time lag is an important parameter that significantly influences performance. Only with the optimum time lag, the one corresponding to the best minimum-mean-square-error (MMSE) performance, can the best use be made of the available resources. Many designs, however, choose the time lag either from assumptions about the channel or simply from average experience. The relation between the MMSE performance and the time lag is investigated using a new interpretation of the MMSE equaliser, and a novel adaptive time lag algorithm based on gradient search is then proposed. The proposed algorithm converges to the optimum time lag in the mean, as verified by the numerical simulations provided.
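To make the dependence on the time lag concrete, the sketch below brute-forces the MMSE of a linear equaliser over every decision delay for a toy channel; the paper's contribution is to replace such an exhaustive search with a gradient-based adaptive algorithm, which is not reproduced here:

```python
import numpy as np

def mmse_vs_delay(h, num_taps, noise_var=0.01):
    """Compute the MSE of a linear MMSE (Wiener) equaliser for every
    decision delay, illustrating that performance depends strongly on
    the chosen time lag. Unit-variance i.i.d. symbols are assumed."""
    L = len(h)
    # Channel convolution matrix H: num_taps x (num_taps + L - 1)
    H = np.zeros((num_taps, num_taps + L - 1))
    for i in range(num_taps):
        H[i, i:i + L] = h
    R = H @ H.T + noise_var * np.eye(num_taps)  # received autocorrelation
    mses = []
    for d in range(num_taps + L - 1):
        p = H[:, d]                   # cross-correlation for delay d
        w = np.linalg.solve(R, p)     # MMSE equaliser taps for this delay
        mses.append(1.0 - p @ w)      # resulting minimum MSE
    return mses

mses = mmse_vs_delay(h=[0.5, 1.0, 0.5], num_taps=7)
best_delay = int(np.argmin(mses))
```

For this symmetric dispersive channel, the MSE is markedly worse at the edge delays than at the optimum, which is the effect the adaptive time-lag algorithm exploits.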
Abstract:
This paper investigates how to choose the optimum tap-length and decision delay for the decision feedback equalizer (DFE). Although the feedback filter length can be set to the channel memory, there is no closed-form expression for the feedforward filter length and decision delay. In this paper, we first show analytically that the two-dimensional search for the optimum feedforward filter length and decision delay can be simplified to a one-dimensional search, and then describe a new adaptive DFE in which the optimum structural parameters are self-adapted.
Abstract:
Systematic natural ventilation effects on measured temperatures within a standard large wooden thermometer screen are investigated under summer conditions, using well-calibrated platinum resistance thermometers. Under low ventilation (2 m wind speed u2 < 1.1 m s⁻¹), the screen slightly underestimates daytime air temperature but overestimates air temperature nocturnally by 0.2 °C. The screen's lag time L lengthens with decreasing wind speed, following an inverse power law relationship between L and u2. For u2 > 2 m s⁻¹, L ∼ 2.5 min, increasing, when calm, to at least 15 min. Spectral response properties of the screen to air temperature fluctuations vary with wind speed because of the lag changes. Ventilation effects are particularly apparent at the higher (>25 °C) temperatures, both through the lag effect and from solar heating. For sites where wind speed decreases with increasing daytime temperature, thermometer screen temperatures may consequently show larger uncertainties at the higher temperatures. Under strong direct beam solar radiation (>850 W m⁻²) the radiation effect is likely to be <0.4 °C.
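The inverse power-law relationship can be fitted by ordinary least squares in log-log space; the wind-speed/lag pairs below are invented, chosen only to resemble the reported range (L ∼ 2.5 min for u2 > 2 m s⁻¹, rising toward 15 min when calm):

```python
import math

def fit_power_law(u, L):
    """Least-squares fit of L = a * u**(-b) in log-log space,
    illustrating an inverse power-law dependence of screen lag time
    on wind speed. The data points passed in are synthetic."""
    x = [math.log(v) for v in u]
    y = [math.log(v) for v in L]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    b = -slope                          # power-law exponent
    a = math.exp(ybar - slope * xbar)   # prefactor
    return a, b

# Invented points: lag (min) versus 2 m wind speed (m/s):
u2 = [0.2, 0.5, 1.0, 2.0, 4.0]
lag = [12.0, 6.5, 4.0, 2.6, 1.7]
a, b = fit_power_law(u2, lag)
```

The fitted exponent b then quantifies how quickly the screen's thermal response degrades as ventilation drops.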
Abstract:
Many studies have reported long-range synchronization of neuronal activity between brain areas, in particular in the beta and gamma bands with frequencies in the range of 14–30 and 40–80 Hz, respectively. Several studies have reported synchrony with zero phase lag, which is remarkable considering the synaptic and conduction delays inherent in the connections between distant brain areas. This result has led to many speculations about the possible functional role of zero-lag synchrony, such as for neuronal communication, attention, memory, and feature binding. However, recent studies using recordings of single-unit activity and local field potentials report that neuronal synchronization may occur with non-zero phase lags. This raises the questions whether zero-lag synchrony can occur in the brain and, if so, under which conditions. We used analytical methods and computer simulations to investigate which connectivity between neuronal populations allows or prohibits zero-lag synchrony. We did so for a model where two oscillators interact via a relay oscillator. Analytical results and computer simulations were obtained for both type I Mirollo–Strogatz neurons and type II Hodgkin–Huxley neurons. We have investigated the dynamics of the model for various types of synaptic coupling and importantly considered the potential impact of Spike-Timing Dependent Plasticity (STDP) and its learning window. We confirm previous results that zero-lag synchrony can be achieved in this configuration. This is much easier to achieve with Hodgkin–Huxley neurons, which have a biphasic phase response curve, than for type I neurons. STDP facilitates zero-lag synchrony as it adjusts the synaptic strengths such that zero-lag synchrony is feasible for a much larger range of parameters than without STDP.
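A minimal sketch of the relay motif, using generic Kuramoto-style phase oscillators rather than the paper's Mirollo-Strogatz or Hodgkin-Huxley neurons: two outer oscillators with no direct link lock with zero phase lag through the central relay.

```python
import math

def relay_sync(K=1.0, dt=0.01, steps=20000):
    """Toy phase-oscillator sketch of the relay motif: outer
    oscillators a and b couple only to the central relay r, never to
    each other. With identical natural frequencies and attractive
    sine coupling, the only stable state is zero-lag synchrony.
    This illustrates the network configuration, not the paper's
    neuron models or STDP mechanism."""
    a, r, b = 0.3, 1.2, 2.0   # arbitrary initial phases (radians)
    w = 2 * math.pi           # common natural frequency (1 Hz)
    for _ in range(steps):
        da = w + K * math.sin(r - a)
        dr = w + K * (math.sin(a - r) + math.sin(b - r))
        db = w + K * math.sin(r - b)
        a, r, b = a + da * dt, r + dr * dt, b + db * dt
    # Phase difference of the outer oscillators, wrapped to [-pi, pi]:
    return math.atan2(math.sin(a - b), math.cos(a - b))

outer_phase_lag = abs(relay_sync())
```

With conduction delays, heterogeneous frequencies, or type I phase response curves added, this zero-lag state can be lost, which is the parameter dependence the paper maps out.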
Abstract:
In the absence of market frictions, the cost-of-carry model of stock index futures pricing predicts that returns on the underlying stock index and the associated stock index futures contract will be perfectly contemporaneously correlated. Evidence suggests, however, that this prediction is violated, with clear evidence that the stock index futures market leads the stock market. It is argued that traditional tests, which assume that the underlying data generating process is constant, might be prone to overstate the lead-lag relationship. Using a new test for lead-lag relationships based on cross correlations and cross bicorrelations, it is found that, contrary to results from the traditional methodology, periods where the futures market leads the cash market are few and far between, and when any lead-lag relationship is detected, it does not last long. Overall, the results are consistent with the prediction of the standard cost-of-carry model and market efficiency.
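The second-order half of such a test reduces to the sample cross-correlation at each lag; the sketch below, with invented "futures" and "cash" return series in which the cash series echoes the futures series one step later, recovers the lead at lag 1. The bicorrelation extension to third-order dependence is not shown.

```python
import random

def cross_corr(x, y, lag):
    """Sample cross-correlation corr(x[t], y[t+lag]); a peak at a
    positive lag means x leads y."""
    n = len(x) - abs(lag)
    xs = x[:n] if lag >= 0 else x[-lag:]
    ys = y[lag:lag + n] if lag >= 0 else y[:n]
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    sx = (sum((a - mx) ** 2 for a in xs) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in ys) / n) ** 0.5
    return cov / (sx * sy)

# Synthetic returns: "cash" echoes "futures" one period later.
rng = random.Random(0)
fut = [rng.gauss(0, 1) for _ in range(500)]
cash = [0.0] + [0.8 * f + rng.gauss(0, 0.5) for f in fut[:-1]]
peak_lag = max(range(-3, 4), key=lambda k: cross_corr(fut, cash, k))
```

The paper's point is that, in windowed real data, such a peak appears only sporadically and briefly once the constancy assumption is dropped.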
Abstract:
The impact of systematic model errors on a coupled simulation of the Asian Summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the GCM. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the GCM, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Nino. In part this is related to changes in the characteristics of El Nino, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
A new look at stratospheric sudden warmings. Part III. Polar vortex evolution and vertical structure
Abstract:
The evolution of the Arctic polar vortex during observed major mid-winter stratospheric sudden warmings (SSWs) is investigated for the period 1957-2002, using European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-40 Ertel's potential vorticity (PV) and temperature fields. Time-lag composites of vertically weighted PV, calculated relative to the SSW onset time, are derived for both vortex displacement SSWs and vortex splitting SSWs, by averaging over the 15 recorded displacement and 13 splitting events. The evolving vertical structure of the polar vortex during a typical SSW of each type is clearly illustrated by plotting an isosurface of the composite PV field, and is shown to be very close to that observed during representative individual events. Results are verified by comparison with an elliptical diagnostic vortex moment technique. For both types of SSW, little variation is found between individual events in the orientation of the developing vortex relative to the underlying topography, i.e. the location of the vortex during SSWs of each type is largely fixed in relation to the Earth's surface. During each type of SSW, the vortex is found to have a distinctive vertical structure. Vortex splitting events are typically barotropic, with the vortex split occurring near-simultaneously over a large altitude range (20-40 km). In the majority of cases, of the two daughter vortices formed, it is the 'Siberian' vortex that dominates over its 'Canadian' counterpart. In contrast, displacement events are characterized by a very clear baroclinic structure; the vortex tilts significantly westward with height, so that the top and bottom of the vortex are separated by nearly 180° longitude before the upper vortex is sheared away and destroyed.
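The compositing step can be sketched as follows, with a 1-D toy series standing in for the vertically weighted PV fields:

```python
def time_lag_composite(series, onsets, lags):
    """Average a field (a 1-D time series here for simplicity) at
    fixed lags relative to each event onset -- the compositing step
    used to build the SSW evolution, shown with toy data."""
    out = []
    for lag in lags:
        vals = [series[t + lag] for t in onsets if 0 <= t + lag < len(series)]
        out.append(sum(vals) / len(vals))
    return out

# Toy "PV" series with an identical bump at three onset times:
series = [0.0] * 100
for onset in (20, 50, 80):
    for k, v in zip(range(-2, 3), (1, 2, 3, 2, 1)):
        series[onset + k] += v

composite = time_lag_composite(series, [20, 50, 80], lags=[-2, -1, 0, 1, 2])
# -> [1.0, 2.0, 3.0, 2.0, 1.0]: the common signal emerges at lag 0
```

Averaging over the 15 displacement or 13 splitting events in the same way yields the composite PV evolution relative to onset.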
Abstract:
We have developed an ensemble Kalman Filter (EnKF) to estimate 8-day regional surface fluxes of CO2 from space-borne CO2 dry-air mole fraction observations (XCO2) and evaluate the approach using a series of synthetic experiments, in preparation for data from the NASA Orbiting Carbon Observatory (OCO). The 32-day duty cycle of OCO alternates every 16 days between nadir and glint measurements of backscattered solar radiation at short-wave infrared wavelengths. The EnKF uses an ensemble of states to represent the error covariances to estimate 8-day CO2 surface fluxes over 144 geographical regions. We use a 12×8-day lag window, recognising that XCO2 measurements include surface flux information from prior time windows. The observation operator that relates surface CO2 fluxes to atmospheric distributions of XCO2 includes: a) the GEOS-Chem transport model that relates surface fluxes to global 3-D distributions of CO2 concentrations, which are sampled at the time and location of OCO measurements that are cloud-free and have aerosol optical depths <0.3; and b) scene-dependent averaging kernels that relate the CO2 profiles to XCO2, accounting for differences between nadir and glint measurements, and the associated scene-dependent observation errors. We show that OCO XCO2 measurements significantly reduce the uncertainties of surface CO2 flux estimates. Glint measurements are generally better at constraining ocean CO2 flux estimates. Nadir XCO2 measurements over the terrestrial tropics are sparse throughout the year because of either clouds or smoke. Glint measurements provide the most effective constraint for estimating tropical terrestrial CO2 fluxes by accurately sampling fresh continental outflow over neighbouring oceans. 
We also present results from sensitivity experiments that investigate how the flux estimates change with 1) biased and unbiased errors, 2) alternative duty cycles, 3) measurement density and correlations, 4) the spatial resolution of the estimated fluxes, and 5) reducing the length of the lag window and the size of the ensemble. At the revision stage of this manuscript, the OCO instrument failed to reach its orbit after it was launched on 24 February 2009. The EnKF formulation presented here is also applicable to GOSAT measurements of CO2 and CH4.
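A generic, textbook-style sketch of one stochastic EnKF analysis step (perturbed observations), far smaller than the paper's 144-region, 8-day-window system:

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One stochastic-EnKF analysis step: update an ensemble of state
    vectors X (columns = members) with observation vector y, linear
    observation operator H and observation error covariance R.
    A generic sketch; the paper's observation operator involves
    GEOS-Chem transport and scene-dependent averaging kernels."""
    n = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    HA = H @ A
    Pxy = A @ HA.T / (n - 1)                    # state-obs covariance
    Pyy = HA @ HA.T / (n - 1) + R               # obs-space covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    # Perturbed observations, one set per ensemble member:
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n).T
    return X + K @ (Y - H @ X)

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0, 0.5])              # toy "flux" state
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])                 # observe first two states
R = 0.01 * np.eye(2)
X = rng.normal(0.0, 1.0, size=(3, 200))         # prior ensemble
Xa = enkf_update(X, H @ truth, H, R, rng)
```

With accurate observations, the analysed ensemble mean for the observed components moves close to the truth while its spread collapses, which is the uncertainty reduction the paper quantifies for XCO2.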
Abstract:
Mecoprop-p [(R)-2-(4-chloro-2-methylphenoxy)propanoic acid] is widely used in agriculture and poses an environmental concern because of its susceptibility to leach from soil to water. We investigated the effect of soil depth on mecoprop-p biodegradation and its relationship with the number and diversity of tfdA-related genes, which are the most widely known genes involved in the degradation of the phenoxyalkanoic acid group of herbicides by bacteria. The mecoprop-p half-life (DT50) was approximately 12 days in soil sampled from <30 cm depth, and increased progressively with soil depth, reaching over 84 days at 70–80 cm. In sub-soil there was a lag period of between 23 and 34 days prior to a phase of rapid degradation. No lag phase occurred in top-soil samples prior to the onset of degradation. The maximum degradation rate was the same in top-soil and sub-soil samples. Although diverse tfdAα and tfdA genes were present prior to mecoprop-p degradation, real-time PCR revealed that degradation was associated with proliferation of tfdA genes. The number of tfdA genes and the most probable number of mecoprop-p-degrading organisms in soil prior to mecoprop-p addition were below the limits of quantification and detection respectively. Melting curves from the real-time PCR analysis showed that prior to mecoprop-p degradation both class I and class III tfdA genes were present in top- and sub-soil samples. However, at all soil depths only class III tfdA genes proliferated during degradation. Denaturing gradient gel electrophoresis confirmed that class III tfdA genes were associated with mecoprop-p degradation. Degradation was not associated with the induction of novel tfdA genes in top- or sub-soil samples, and there were no apparent differences in tfdA gene diversity with soil depth prior to or following degradation.
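For a first-order decay phase, DT50 = ln 2 / k; the sketch below adds the sub-soil lag period as a simple delay before decay begins, with illustrative numbers drawn from the ranges quoted above:

```python
import math

def dt50(k):
    """Half-life of first-order decay C(t) = C0 * exp(-k*t)."""
    return math.log(2) / k

def mecoprop_remaining(t, c0, k, lag):
    """Lag-then-first-order sketch: no degradation until the lag
    period ends, then exponential decay at rate k. The rate and lag
    used below are illustrative, not fitted values from the paper."""
    return c0 if t < lag else c0 * math.exp(-k * (t - lag))

k = math.log(2) / 12                 # rate giving the ~12-day top-soil DT50
# Inside a 30-day sub-soil lag, the concentration is unchanged;
# one DT50 after the lag ends, half remains:
c_day20 = mecoprop_remaining(20, c0=100.0, k=k, lag=30)   # 100.0
c_day42 = mecoprop_remaining(42, c0=100.0, k=k, lag=30)   # 50.0
```

The observation that the maximum degradation rate is depth-independent corresponds to a fixed k, with only the lag term growing with depth.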
Abstract:
A new spectral-based approach is presented to find orthogonal patterns from gridded weather/climate data. The method is based on optimizing the interpolation error variance. The optimally interpolated patterns (OIP) are then given by the eigenvectors of the interpolation error covariance matrix, obtained using the cross-spectral matrix. The formulation of the approach is presented, and the application to low-dimension stochastic toy models and to various reanalysis datasets is performed. In particular, it is found that the lowest-frequency patterns correspond to the largest eigenvalues, that is, variances, of the interpolation error matrix. The approach has been applied to the Northern Hemispheric (NH) and tropical sea level pressure (SLP) and to the Indian Ocean sea surface temperature (SST). Two main OIP patterns are found for the NH SLP, representing respectively the North Atlantic Oscillation and the North Pacific pattern. The leading tropical SLP OIP represents the Southern Oscillation. For the Indian Ocean SST, the leading OIP pattern shows a tripole-like structure having one sign over the eastern and north- and southwestern parts and the opposite sign in the remaining parts of the basin. The pattern is also found to have a high lagged correlation with the Niño-3 index at a 6-month lag.
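The final eigen-decomposition step can be sketched generically: the patterns are the leading eigenvectors of a symmetric covariance-type matrix, sorted by eigenvalue (here a toy matrix stands in for the interpolation error covariance built from the cross-spectral matrix):

```python
import numpy as np

def leading_patterns(C, n_patterns=2):
    """Extract orthogonal patterns as the leading eigenvectors of a
    symmetric covariance-type matrix, sorted so the largest variance
    comes first. In the OIP method, C is the interpolation error
    covariance; here it is a toy stand-in."""
    vals, vecs = np.linalg.eigh(C)       # eigh returns ascending order
    order = np.argsort(vals)[::-1]       # largest eigenvalue first
    return vals[order][:n_patterns], vecs[:, order[:n_patterns]]

# Toy 3x3 symmetric "error covariance" with one dominant direction:
C = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])
vals, patterns = leading_patterns(C)
```

Because the eigenvectors of a symmetric matrix are mutually orthogonal, the extracted patterns are orthogonal by construction, which is the property the abstract emphasises.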