997 results for FREQUENCY OSCILLATORY VENTILATION
Abstract:
Global controls on month-by-month fractional burnt area (2000–2005) were investigated by fitting a generalised linear model (GLM) to Global Fire Emissions Database (GFED) data, with 11 predictor variables representing vegetation, climate, land use and potential ignition sources. Burnt area is shown to increase with annual net primary production (NPP), number of dry days, maximum temperature, grazing-land area, grass/shrub cover and diurnal temperature range, and to decrease with soil moisture, cropland area and population density. Lightning showed an apparent (weak) negative influence, but this disappeared when pure seasonal-cycle effects were taken into account. The model predicts observed geographic and seasonal patterns, as well as the emergent relationships seen when burnt area is plotted against each variable separately. Unimodal relationships with mean annual temperature and precipitation, population density and gross domestic product (GDP) are reproduced too, and are thus shown to be secondary consequences of correlations between different controls (e.g. high NPP with high precipitation; low NPP with low population density and GDP). These findings have major implications for the design of global fire models, as several assumptions in current models – most notably, the widely assumed dependence of fire frequency on ignition rates – are evidently incorrect.
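The model-fitting approach described above can be illustrated with a minimal sketch: synthetic predictors standing in for NPP, dry days and soil moisture, and a GLM-style fit on the logit scale. All variable names, coefficients and data here are invented for illustration and are not the paper's GFED data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
# Hypothetical predictors standing in for NPP, dry days and soil moisture
npp = rng.uniform(0, 1, n)
dry_days = rng.uniform(0, 30, n)
soil_moisture = rng.uniform(0, 1, n)

# Synthetic burnt fraction with known signs: +NPP, +dry days, -soil moisture
eta = -3.0 + 2.0 * npp + 0.05 * dry_days - 1.5 * soil_moisture
frac = 1.0 / (1.0 + np.exp(-eta))   # logistic link keeps fractions in (0, 1)

# A GLM with a logit link and these noise-free data reduces to least squares
# on the logit scale, so the signs (and here the values) are recovered exactly
X = np.column_stack([np.ones(n), npp, dry_days, soil_moisture])
y = np.log(frac / (1.0 - frac))
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 2))
```

The signs of the fitted coefficients are what support statements such as "burnt area increases with NPP and decreases with soil moisture" in the abstract.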
Abstract:
We describe some recent advances in the numerical solution of acoustic scattering problems. A major focus of the paper is the efficient solution of high frequency scattering problems via hybrid numerical-asymptotic boundary element methods. We also make connections to the unified transform method due to A. S. Fokas and co-authors, analysing particular instances of this method, proposed by J. A. De-Santo and co-authors, for problems of acoustic scattering by diffraction gratings.
Abstract:
This paper describes the hydrochemistry of a lowland, urbanised river system, The Cut in England, using in situ sub-daily sampling. The Cut receives effluent discharges from four major sewage treatment works serving around 190,000 people. These discharges consist largely of treated water, originally abstracted from the River Thames and returned via the water supply network, substantially increasing the natural flow. The hourly water quality data were supplemented by weekly manual sampling with laboratory analysis to check the hourly data and measure further determinands. Mean phosphorus and nitrate concentrations were very high, breaching standards set by EU legislation. Though 56% of the catchment area is agricultural, the hydrochemical dynamics were significantly impacted by effluent discharges, which accounted for approximately 50% of the annual catchment phosphorus input load and, on average, 59% of river flow at the monitoring point. Diurnal dissolved oxygen data demonstrated high in-stream productivity. A comparison of high-frequency and conventional monitoring data indicated that primary production was dominated by benthic algae, largely diatoms. Despite the high productivity and nutrient concentrations, the river water did not become anoxic and major phytoplankton blooms were not observed. The strong diurnal and annual variation observed showed that assessments of water quality made under the Water Framework Directive (WFD) are sensitive to the time and season of sampling. It is recommended that specific sampling time windows be specified for each determinand, and that WFD targets be applied in combination to help identify periods of greatest ecological risk.
Abstract:
A practical single-carrier (SC) block transmission with frequency-domain equalisation (FDE) system can generally be modelled as a Hammerstein system that includes the nonlinear distortion of the high power amplifier (HPA) at the transmitter. For such Hammerstein channels, the standard SC-FDE scheme no longer works. We propose a novel B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. In particular, we model the nonlinear HPA, which represents the complex-valued static nonlinearity of the Hammerstein channel, by two real-valued B-spline neural networks: one for the nonlinear amplitude response of the HPA and the other for its nonlinear phase response. We then develop an efficient alternating least squares algorithm for estimating the parameters of the Hammerstein channel, including the channel impulse response coefficients and the parameters of the two B-spline models. Moreover, we use a third real-valued B-spline neural network to model the inverse of the HPA's nonlinear amplitude response; the parameters of this inverting B-spline model can be estimated with the standard least squares algorithm, using pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel is then accomplished by the usual one-tap linear equalisation in the frequency domain together with the inverse B-spline neural network model applied in the time domain. The effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels is demonstrated in a simulation study.
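Once the channel has been identified and the HPA nonlinearity compensated, what remains is the standard one-tap frequency-domain equalisation step mentioned in the abstract. A minimal sketch of that final linear step, assuming a known channel and the cyclic prefix already removed (the channel taps and block length are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 64
h = np.array([1.0, 0.5, 0.2])     # hypothetical channel impulse response
x = rng.choice([-1.0, 1.0], N)    # one SC block of BPSK symbols

# A cyclic prefix makes the linear channel act as a circular convolution
y = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, N)).real

# One-tap FDE: divide each frequency bin by the channel frequency response
H = np.fft.fft(h, N)
x_hat = np.fft.ifft(np.fft.fft(y) / H).real
print(np.allclose(x_hat, x))  # True
```

The per-bin division is the "one tap": a single complex multiplication per frequency bin, rather than a time-domain filter spanning the whole channel memory.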
Abstract:
In multiple-input multiple-output (MIMO) radar systems, the transmitters emit orthogonal waveforms to increase spatial resolution. We propose new frequency hopping (FH) codes based on chaotic sequences. Chaotic sequences offer good encryption, anti-jamming and anti-intercept properties. The construction of the chaotic FH codes is based on queuing theory. Owing to their sensitivity to initial conditions, these sequences achieve good Hamming auto-correlation while preserving good average correlation. Simulation results show that the proposed FH signals achieve lower autocorrelation sidelobe levels and peak cross-correlation levels as the number of iterations increases. Compared with LFM signals, the proposed sequences also provide higher range-Doppler resolution.
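A common way to build such chaotic hop codes is to iterate a chaotic map and quantise its orbit onto the available frequency slots. The abstract does not specify which map or parameters are used; the sketch below uses the logistic map with invented values purely to illustrate the sensitivity to initial conditions:

```python
def chaotic_fh(x0, r=3.99, n_slots=16, length=64):
    """Iterate the logistic map and quantise the orbit onto hop slots."""
    seq, x = [], x0
    for _ in range(length):
        x = r * x * (1.0 - x)          # chaotic for r near 4
        seq.append(int(x * n_slots) % n_slots)
    return seq

a = chaotic_fh(0.3)
b = chaotic_fh(0.3 + 1e-7)  # tiny change in the initial condition
# Sensitivity to initial conditions: the two hop sequences soon diverge
print(sum(i != j for i, j in zip(a, b)))
```

Because nearby initial conditions yield rapidly diverging hop patterns, a large family of low-cross-correlation codes can be generated from a single map.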
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% that of the monthly for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the mean downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles. 
Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
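The subsampling idea described above can be sketched directly: generate hourly data with annual and diel cycles, thin it to weekly and monthly series, and compare percentile estimates. The series below is synthetic, with invented amplitudes and noise levels, not the rivers' data:

```python
import numpy as np

rng = np.random.default_rng(2)
hours = 24 * 365
t = np.arange(hours)
# Hypothetical hourly water temperature: annual cycle + diel cycle + noise
temp = (12
        + 6 * np.sin(2 * np.pi * t / hours)
        + 1.5 * np.sin(2 * np.pi * t / 24)
        + rng.normal(0, 0.5, hours))

true_p98 = np.percentile(temp, 98)
weekly = temp[::24 * 7]    # one sample per week -> 53 values
monthly = temp[::24 * 30]  # roughly one per month -> 13 values

# With only 13 values, the 98th percentile is poorly constrained
print(np.percentile(weekly, 98) - true_p98,
      np.percentile(monthly, 98) - true_p98)
```

Estimating a 98th percentile from 13 monthly samples effectively takes the maximum of a small sample, which is why low-frequency monitoring is unsuitable for standards expressed as high percentiles.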
Abstract:
Real estate securities have a number of distinct characteristics that differentiate them from stocks generally. Key among them is that the firms are underpinned by both real and investment assets. The connections between the underlying macro-economy and listed real estate firms are therefore clearly demonstrated and of heightened importance. To examine the linkages with underlying macro-economic fundamentals, we extract the 'low-frequency' volatility component from aggregate volatility shocks in 11 international markets over the 1990–2014 period, using Engle and Rangel's (2008) Spline-Generalized Autoregressive Conditional Heteroskedasticity (Spline-GARCH) model. The estimated low-frequency volatility is then examined together with low-frequency macro data in a fixed-effects pooled regression framework. The analysis reveals that the low-frequency volatility of real estate securities has a strong, positive association with most of the macroeconomic risk proxies examined, including interest rates, inflation, GDP and foreign exchange rates.
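The notion of a low-frequency volatility component can be illustrated with a much cruder stand-in than Spline-GARCH: a rolling standard deviation of synthetic returns whose true volatility drifts slowly. This is not the paper's method (Spline-GARCH jointly fits a smooth spline trend and a GARCH component); all numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
days = 1000
# Hypothetical daily returns whose true volatility varies slowly over time
slow_vol = 0.01 * (1 + 0.5 * np.sin(2 * np.pi * np.arange(days) / days))
returns = rng.normal(0, slow_vol)

# Crude low-frequency component: rolling std over ~ one quarter (63 days).
# Engle and Rangel's Spline-GARCH instead estimates this trend jointly with
# a GARCH component; this rolling estimate is only a stand-in.
window = 63
lf_vol = np.array([returns[max(0, i - window):i + 1].std()
                   for i in range(days)])
corr = np.corrcoef(lf_vol[window:], slow_vol[window:])[0, 1]
print(round(corr, 2))
```

The extracted slow component tracks the underlying volatility trend; in the paper it is this trend, not the day-to-day volatility, that is regressed on macroeconomic proxies.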
Abstract:
Animal studies find that prenatal stress is associated with increased physiological and emotional reactivity later in life, mediated via fetal programming of the HPA axis through decreased glucocorticoid receptor (GR) gene expression. Postnatal behaviours, notably licking and grooming in rats, reduce behavioural indices of fear and HPA axis reactivity via increased GR gene expression. Postnatal maternal behaviours may therefore be expected to modify prenatal effects, but this has not previously been examined in humans. We examined whether, according to self-report, maternal stroking over the first weeks of life modified associations between prenatal depression and physiological and behavioural outcomes in infancy, hence mimicking the effects of rodent licking and grooming. From a general population sample of 1233 first-time mothers recruited at 20 weeks gestation, we drew a stratified random sample of 316 for assessment at 32 weeks, stratified by reported inter-partner psychological abuse, a risk to child development. Of these, 271 provided data at 5, 9 and 29 weeks post-delivery. Mothers reported how often they stroked their babies at 5 and 9 weeks. At 29 weeks, vagal withdrawal to a stressor, a measure of physiological adaptability, and maternal-reported negative emotionality were assessed. There was a significant interaction between prenatal depression and maternal stroking in the prediction of vagal reactivity to a stressor (p = .01), and of maternal reports of infant anger proneness (p = .007) and fear (p = .043). Increasing maternal depression was associated with decreasing physiological adaptability and with increasing negative emotionality only in the presence of low maternal stroking.
These initial findings in humans indicate that maternal stroking in infancy, as reported by mothers, has effects strongly resembling the effects of observed maternal behaviours in animals, pointing to future studies of the epigenetic, physiological and behavioral effects of maternal stroking.
Abstract:
While eye movements have been used widely to investigate how skilled adult readers process written language, relatively little research has used this methodology with children. This is unfortunate because, as we discuss here, eye-movement studies have significant potential to inform our understanding of children's reading development. We consider some of the empirical and theoretical issues that arise when using this methodology with children, illustrating our points with data from an experiment examining word frequency effects in 8-year-old children's sentence reading. Children showed significantly longer gaze durations for low-frequency than for high-frequency words, demonstrating that the linguistic characteristics of text drive children's eye movements as they read. We discuss these findings within the broader context of how eye-movement studies can inform our understanding of children's reading and can assist with the development of appropriately targeted interventions to support children as they learn to read.
Abstract:
In this article we assess the abilities of a new electromagnetic (EM) system, the CMD Mini-Explorer, for prospecting of archaeological features in Ireland and the UK. The Mini-Explorer is an EM probe which is primarily aimed at the environmental/geological prospecting market for the detection of pipes and geology. It has long been evident from the use of other EM devices that such an instrument might be suitable for shallow soil studies and applicable for archaeological prospecting. Of particular interest for the archaeological surveyor is the fact that the Mini-Explorer simultaneously obtains both quadrature (‘conductivity’) and in-phase (relative to ‘magnetic susceptibility’) data from three depth levels. As the maximum depth range is probably about 1.5 m, a comprehensive analysis of the subsoil within that range is possible. As with all EM devices the measurements require no contact with the ground, thereby negating the problem of high contact resistance that often besets earth resistance data during dry spells. The use of the CMD Mini-Explorer at a number of sites has demonstrated that it has the potential to detect a range of archaeological features and produces high-quality data that are comparable in quality to those obtained from standard earth resistance and magnetometer techniques. In theory the ability to measure two phenomena at three depths suggests that this type of instrument could reduce the number of poor outcomes that are the result of single measurement surveys. The high success rate reported here in the identification of buried archaeology using a multi-depth device that responds to the two most commonly mapped geophysical phenomena has implications for evaluation style surveys.
Abstract:
A recent study by Blocken et al. (Numerical study on the existence of the Venturi effect in passages between perpendicular buildings. Journal of Engineering Mechanics, 2008, 134: 1021–1028) challenged the popular view of the existence of the ‘Venturi effect’ in building passages, since the wind is exposed to an open boundary. The present research extends the work of Blocken et al. (2008a) to a more general setup, with the building orientation varying from 0° to 180°, using CFD simulations. Our results reveal that the passage flow is mainly determined by the combination of corner streams. It is also shown that converging passages have a stronger wind-blocking effect than diverging passages, reflected in a lower wind speed and a higher drag coefficient. Fluxes on the top plane of the passage volume reverse from outflow to inflow in the cases of α = 135°, 150° and 165°. A simple mathematical expression relating the flux ratio to the geometric parameters has been developed to aid wind design in urban neighbourhoods. A converging passage with α = 15° is recommended for urban wind design in cold and temperate climates, since the passage flow changes smoothly and a relatively lower wind speed is expected compared with the no-building case. For high-density urban areas in (sub)tropical climates such as Hong Kong, where more wind is desired, a diverging passage with α = 150° is a better choice to promote ventilation at the pedestrian level.
Abstract:
This paper shows that radiometer channel radiances for cloudy atmospheric conditions can be simulated with an optimised frequency grid derived under clear-sky conditions. A new clear-sky optimised grid is derived for AVHRR channel 5 (12 μm, 833 cm⁻¹). For HIRS channel 11 (7.33 μm, 1364 cm⁻¹) and AVHRR channel 5, radiative transfer simulations using an optimised frequency grid are compared with simulations using a reference grid, where the optimised grid has roughly 100–1000 times fewer frequencies than the full grid. The root mean square error between the optimised and the reference simulation is found to be less than 0.3 K for both comparisons, with the magnitude of the bias less than 0.03 K. The simulations have been carried out with the radiative transfer model Atmospheric Radiative Transfer Simulator (ARTS), version 2, using a backward Monte Carlo module for the treatment of clouds. With this module, the optimised simulations are more than 10 times faster than the reference simulations: although the number of photons is the same, the smaller number of frequencies reduces the overhead of preparing the optical properties for each frequency. With deterministic scattering solvers, the relative decrease in runtime would be even greater. The results enable new radiative transfer applications, such as the development of new retrievals, because it becomes much quicker to carry out a large number of simulations. The conclusions are applicable to any downlooking infrared radiometer.
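The idea behind an optimised (representative) frequency grid can be sketched in a few lines: select a handful of frequencies, fit weights so that their weighted combination reproduces the full-grid channel average on training spectra, then check the error on held-out spectra. The spectra below are synthetic toy functions, not ARTS output:

```python
import numpy as np

rng = np.random.default_rng(4)
n_freq, n_train, n_test = 200, 50, 20
f = np.linspace(0.0, 1.0, n_freq)

def spectra(n):
    # Toy "radiances": smooth spectra whose shape varies with atmospheric state
    states = rng.uniform(0.5, 1.5, (n, 3))
    return states[:, :1] + states[:, 1:2] * f + states[:, 2:3] * np.sin(3 * f)

train, test = spectra(n_train), spectra(n_test)
reference = train.mean(axis=1)  # full-grid channel average

# Keep only three frequencies; fit weights reproducing the full-grid average
picks = [10, 80, 150]
w, *_ = np.linalg.lstsq(train[:, picks], reference, rcond=None)

# Apply the optimised grid to held-out spectra
err = test[:, picks] @ w - test.mean(axis=1)
print(np.abs(err).max())
```

Because these toy spectra vary in only three degrees of freedom, three frequencies suffice; real channels need more, but the cost still scales with the number of retained frequencies rather than the full grid.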
Abstract:
The El Niño/Southern Oscillation is Earth’s most prominent source of interannual climate variability, alternating irregularly between El Niño and La Niña, and resulting in global disruption of weather patterns, ecosystems, fisheries and agriculture [1–5]. The 1998–1999 extreme La Niña event that followed the 1997–1998 extreme El Niño event [6] switched extreme El Niño-induced severe droughts to devastating floods in western Pacific countries, and vice versa in the southwestern United States [4,7]. During extreme La Niña events, cold sea surface conditions develop in the central Pacific [8,9], creating an enhanced temperature gradient from the Maritime Continent to the central Pacific. Recent studies have revealed robust changes in El Niño characteristics in response to simulated future greenhouse warming [10–12], but how La Niña will change remains unclear. Here we present climate modelling evidence, from simulations conducted for the Coupled Model Intercomparison Project phase 5 (ref. 13), for a near doubling in the frequency of future extreme La Niña events, from one in every 23 years to one in every 13 years. This occurs because projected faster mean warming of the Maritime Continent than the central Pacific, enhanced upper ocean vertical temperature gradients, and increased frequency of extreme El Niño events are conducive to development of the extreme La Niña events. Approximately 75% of the increase occurs in years following extreme El Niño events, thus projecting more frequent swings between opposite extremes from one year to the next.
Abstract:
El Niño events are a prominent feature of climate variability with global climatic impacts. The 1997/98 episode, often referred to as ‘the climate event of the twentieth century’ [1,2], and the 1982/83 extreme El Niño [3], featured a pronounced eastward extension of the west Pacific warm pool and development of atmospheric convection, and hence a huge rainfall increase, in the usually cold and dry equatorial eastern Pacific. Such a massive reorganization of atmospheric convection, which we define as an extreme El Niño, severely disrupted global weather patterns, affecting ecosystems [4,5], agriculture [6], tropical cyclones, drought, bushfires, floods and other extreme weather events worldwide [3,7–9]. Potential future changes in such extreme El Niño occurrences could have profound socio-economic consequences. Here we present climate modelling evidence for a doubling in the occurrences in the future in response to greenhouse warming. We estimate the change by aggregating results from climate models in the Coupled Model Intercomparison Project phases 3 (CMIP3; ref. 10) and 5 (CMIP5; ref. 11) multi-model databases, and a perturbed physics ensemble [12]. The increased frequency arises from a projected surface warming over the eastern equatorial Pacific that occurs faster than in the surrounding ocean waters [13,14], facilitating more occurrences of atmospheric convection in the eastern equatorial region.
Abstract:
Observed and predicted changes in the strength of the westerly winds blowing over the Southern Ocean have motivated a number of studies of the response of the Antarctic Circumpolar Current and Southern Ocean Meridional Overturning Circulation (MOC) to wind perturbations, and led to the discovery of the "eddy-compensation" regime, wherein the MOC becomes insensitive to wind changes. In addition to the MOC, tracer transport also depends on mixing processes. Here we show, in a high-resolution process model, that isopycnal mixing by mesoscale eddies is strongly dependent on the wind strength. This dependence can be explained by mixing-length theory and is driven by increases in eddy kinetic energy; the mixing length itself does not change strongly in our simulation. Simulation of a passive ventilation tracer (analogous to CFCs or anthropogenic CO2) demonstrates that variations in tracer uptake across experiments are dominated by changes in isopycnal mixing, rather than changes in the MOC. We argue that, to properly understand tracer uptake under different wind-forcing scenarios, the sensitivity of isopycnal mixing to winds must be accounted for.
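Mixing-length theory makes the claimed scaling explicit: if the mixing length stays fixed, the isopycnal diffusivity grows like the square root of eddy kinetic energy. A back-of-envelope sketch, where the efficiency Γ and mixing length L_mix are illustrative values, not numbers from the simulations:

```python
import math

# Mixing-length estimate of isopycnal eddy diffusivity: K ~ Γ * sqrt(EKE) * L_mix
# Γ (mixing efficiency) and L_mix (mixing length) values here are illustrative
gamma_eff, L_mix = 0.35, 50e3   # dimensionless, metres
for eke in (1e-3, 2e-3, 4e-3):  # m^2/s^2; stronger winds -> higher EKE
    K = gamma_eff * math.sqrt(eke) * L_mix
    print(f"EKE = {eke:.0e} m^2/s^2 -> K = {K:.0f} m^2/s")
```

Quadrupling EKE doubles K with L_mix held fixed, which is why wind-driven increases in eddy energy alone can dominate tracer uptake even when the MOC is eddy-compensated.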