989 results for Frequency dependence parameters
Abstract:
OBJECTIVE: To determine the effect of altering meal frequency on postprandial lipaemia and associated parameters. DESIGN: A randomized open cross-over study to examine the programming effects of altering meal frequency. A standard test meal was given on three occasions: (i) following the normal diet; (ii) following two weeks on a nibbling diet; and (iii) following two weeks on a gorging diet. SETTING: Free-living subjects associated with the University of Surrey. SUBJECTS: Eleven female volunteers (age 22 +/- 0.89 y) were recruited. INTERVENTIONS: The subjects were requested to consume the same foods on either a nibbling diet (12 meals per day) or a gorging diet (three meals per day) for a period of two weeks. The standard test meal, containing 80 g fat, 63 g carbohydrate and 20 g protein, was administered on the day prior to the dietary intervention and on the day following each period of intervention. MAJOR OUTCOME MEASURES: Fasting and postprandial blood samples were taken for the analysis of plasma triacylglycerol, non-esterified fatty acids, glucose, immunoreactive insulin, glucose-dependent insulinotropic polypeptide (GIP) and glucagon-like peptide-1 (GLP-1) levels; fasting total, low density lipoprotein (LDL)- and high density lipoprotein (HDL)-cholesterol concentrations; and postheparin lipoprotein lipase (LPL) activity. Plasma paracetamol was measured following administration of a 1.5 g paracetamol load with the meal as an index of gastric emptying. RESULTS: Compliance with the two dietary regimes was high and there were no significant differences between the nutrient intakes on the two intervention diets. There were no significant differences in fasting or postprandial plasma concentrations of triacylglycerol, non-esterified fatty acids, glucose, immunoreactive insulin, GIP or GLP-1 in response to the standard test meal following the nibbling or gorging dietary regimes.
There were no significant differences in fasting total or LDL-cholesterol concentrations, or in the 15 min postheparin lipoprotein lipase activity measurements. There was a significant increase in HDL-cholesterol in the subjects following the gorging diet compared to the nibbling diet. DISCUSSION: The results suggest that previous meal frequency over a period of two weeks in young healthy women does not alter the fasting or postprandial lipid or hormonal response to a standard high-fat meal. CONCLUSIONS: The findings of this study did not confirm previous studies suggesting that nibbling is beneficial in reducing lipid and hormone concentrations. The more rigorous control of diet content and composition in the present study, compared with others, suggests that reported effects of meal frequency may have been due to unintentional alterations in nutrient and energy intake in previous studies.
Abstract:
We present a new method to determine mesospheric electron densities from partially reflected medium frequency radar pulses. The technique uses an optimal estimation inverse method and retrieves both an electron density profile and a gradient electron density profile. As well as accounting for the absorption of the two magnetoionic modes formed by ionospheric birefringence of each radar pulse, the forward model of the retrieval parameterises possible Fresnel scatter of each mode by fine electronic structure, phase changes of each mode due to Faraday rotation and the dependence of the amplitudes of the backscattered modes upon pulse width. Validation results indicate that known profiles can be retrieved and that χ2 tests upon retrieval parameters satisfy validity criteria. Application to measurements shows that retrieved electron density profiles are consistent with accepted ideas about seasonal variability of electron densities and their dependence upon nitric oxide production and transport.
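The optimal estimation inverse method described above is, at its core, a maximum a posteriori linear-algebra update. The sketch below illustrates that update with an arbitrary linear forward model and illustrative covariances; none of the matrices or dimensions are taken from the paper.

```python
import numpy as np

# Sketch of a linear optimal-estimation (MAP) retrieval, assuming a known
# forward-model Jacobian K; all values here are illustrative, not the paper's.
rng = np.random.default_rng(0)

n_state, n_meas = 5, 8
K = rng.standard_normal((n_meas, n_state))      # forward-model Jacobian
x_true = rng.standard_normal(n_state)           # "true" state (e.g. electron density)
y = K @ x_true                                  # noiseless synthetic measurement

x_a = np.zeros(n_state)                         # a priori state
S_a = 1e6 * np.eye(n_state)                     # weak (broad) prior covariance
S_e = np.eye(n_meas)                            # measurement-error covariance

# MAP solution: x_hat = x_a + (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1 (y - K x_a)
Se_inv = np.linalg.inv(S_e)
G = np.linalg.solve(K.T @ Se_inv @ K + np.linalg.inv(S_a), K.T @ Se_inv)
x_hat = x_a + G @ (y - K @ x_a)
```

With a weak prior and noiseless measurements the retrieval essentially reproduces the true state; in a real retrieval the prior and error covariances control the trade-off between measurement and a priori information.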
Abstract:
Aircraft flying through cold ice-supersaturated air produce persistent contrails which contribute to the climate impact of aviation. Here, we demonstrate the importance of the weather situation, together with the route and altitude the aircraft takes through it, in estimating contrail coverage. The results have implications for determining the climate impact of contrails as well as potential mitigation strategies. Twenty-one years of re-analysis data are used to produce a climatological assessment of conditions favorable for persistent contrail formation between 200 and 300 hPa over the north Atlantic in winter. The seasonal-mean frequency of cold ice-supersaturated regions is highest near 300 hPa, and decreases with altitude. The frequency of occurrence of ice-supersaturated regions varies with large-scale weather pattern; the most common locations are over Greenland, on the southern side of the jet stream and around the northern edge of high pressure ridges. Assuming aircraft take a great circle route, as opposed to a more realistic time-optimal route, is likely to lead to an error in the estimated contrail coverage, which can exceed 50% for westbound north Atlantic flights. The probability of contrail formation can increase or decrease with height, depending on the weather pattern, indicating that the generic suggestion that flying higher leads to fewer contrails is not robust.
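The great-circle routing assumption mentioned above can be made concrete with the standard haversine formula; the coordinates below are illustrative approximations of two airports, not flight data from the study.

```python
import math

def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Haversine great-circle distance between two points (degrees in, km out)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))

# Example: rough New York -> London separation along the great circle
d = great_circle_km(40.64, -73.78, 51.47, -0.45)
```

A time-optimal route deviates from this shortest path to exploit or avoid the jet stream, which is why the two routings can sample quite different ice-supersaturated regions.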
Abstract:
Measurements from ground-based magnetometers and riometers at auroral latitudes have demonstrated that energetic (~30–300 keV) electron precipitation can be modulated in the presence of magnetic field oscillations at ultra-low frequencies. It has previously been proposed that an ultra-low frequency (ULF) wave would modulate field and plasma properties near the equatorial plane, thus modifying the growth rates of whistler-mode waves. In turn, the resulting whistler-mode waves would mediate the pitch-angle scattering of electrons resulting in ionospheric precipitation. In this paper, we investigate this hypothesis by quantifying the changes to the linear growth rate expected due to a slow change in the local magnetic field strength for parameters typical of the equatorial region around 6.6 RE radial distance. To constrain our study, we determine the largest possible ULF wave amplitudes from measurements of the magnetic field at geosynchronous orbit. Using nearly ten years of observations from two satellites, we demonstrate that the variation in magnetic field strength due to oscillations at 2 mHz does not exceed ±10% of the background field. Modifications to the plasma density and temperature anisotropy are estimated using idealised models. For low temperature anisotropy, there is little change in the whistler-mode growth rates even for the largest ULF wave amplitude. Only for large temperature anisotropies can whistler-mode growth rates be modulated sufficiently to account for the changes in electron precipitation measured by riometers at auroral latitudes.
Abstract:
In this paper we propose and analyze a hybrid $hp$ boundary element method for the solution of problems of high frequency acoustic scattering by sound-soft convex polygons, in which the approximation space is enriched with oscillatory basis functions which efficiently capture the high frequency asymptotics of the solution. We demonstrate, both theoretically and via numerical examples, exponential convergence with respect to the order of the polynomials, moreover providing rigorous error estimates for our approximations to the solution and to the far field pattern, in which the dependence on the frequency of all constants is explicit. Importantly, these estimates prove that, to achieve any desired accuracy in the computation of these quantities, it is sufficient to increase the number of degrees of freedom in proportion to the logarithm of the frequency as the frequency increases, in contrast to the at least linear growth required by conventional methods.
Abstract:
Recent research into sea ice friction has focussed on ways to provide a model which maintains much of the clarity and simplicity of Amontons' law, yet also accounts for memory effects. One promising avenue of research has been to adapt the rate- and state-dependent models which are prevalent in rock friction. In such models it is assumed that there is some fixed critical slip displacement, which is effectively a measure of the displacement over which memory effects might be considered important. Here we show experimentally that a fixed critical slip displacement is not a valid assumption in ice friction, whereas a constant critical slip time appears to hold across a range of parameters and scales. As a simple rule of thumb, memory effects persist to a significant level for 10 s. We then discuss the implications of this finding for modelling sea ice friction and for our understanding of friction in general.
Abstract:
Single-carrier (SC) block transmission with frequency-domain equalisation (FDE) offers a viable transmission technology for combating the adverse effects of long dispersive channels encountered in high-rate broadband wireless communication systems. However, for high bandwidth-efficiency and high power-efficiency systems, the channel can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such nonlinear Hammerstein channels, the standard SC-FDE scheme no longer works. This paper advocates a complex-valued (CV) B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. Specifically, we model the nonlinear HPA, which represents the CV static nonlinearity of the Hammerstein channel, by a CV B-spline neural network, and we develop two efficient alternating least squares schemes for estimating the parameters of the Hammerstein channel, including both the channel impulse response coefficients and the parameters of the CV B-spline model. We also use another CV B-spline neural network to model the inversion of the nonlinear HPA, and the parameters of this inverting B-spline model can easily be estimated using the standard least squares algorithm based on the pseudo training data obtained as a natural byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in frequency domain as well as the inverse B-spline neural network model obtained in time domain. Extensive simulation results are included to demonstrate the effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels.
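The alternating least squares idea in the abstract can be illustrated on a toy Hammerstein channel. The sketch below substitutes a real-valued polynomial basis for the paper's complex-valued B-spline network purely to keep things short; the alternation between the linear filter estimate and the nonlinearity estimate is the same in spirit.

```python
import numpy as np

# Toy alternating-least-squares identification of a Hammerstein channel:
# static nonlinearity f(s) = c0*s + c1*s^3 followed by an FIR filter h.
# A real polynomial basis stands in for the paper's CV B-spline network.
rng = np.random.default_rng(1)

n = 2000
s = rng.standard_normal(n)                       # transmitted symbols
c_true = np.array([1.0, 0.2])                    # nonlinearity coefficients
h_true = np.array([1.0, 0.5, -0.3])              # channel impulse response

def basis(s):
    return np.stack([s, s**3], axis=1)           # basis signals: s and s^3

u = basis(s) @ c_true
y = np.convolve(u, h_true)[:n]                   # noiseless channel output

c = np.array([1.0, 0.0])                         # initial nonlinearity guess
for _ in range(50):
    # 1) fix c, solve for h by least squares on delayed copies of f(s)
    u_hat = basis(s) @ c
    U = np.stack([np.concatenate([np.zeros(i), u_hat[:n - i]]) for i in range(3)], axis=1)
    h, *_ = np.linalg.lstsq(U, y, rcond=None)
    # 2) fix h, solve for c: regressors are each basis signal filtered through h
    B = np.stack([np.convolve(basis(s)[:, k], h)[:n] for k in range(2)], axis=1)
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    h, c = h * c[0], c / c[0]                    # fix scale ambiguity: c0 = 1

err = np.linalg.norm(np.convolve(basis(s) @ c, h)[:n] - y) / np.linalg.norm(y)
```

Each sub-problem is an ordinary least squares solve, so the cost decreases monotonically; the `c0 = 1` normalisation removes the inherent gain ambiguity between the nonlinearity and the filter.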
Abstract:
Experiments with CO2 instantaneously quadrupled and then held constant are used to show that the relationship between the global-mean net heat input to the climate system and the global-mean surface-air-temperature change is nonlinear in Coupled Model Intercomparison Project phase 5 (CMIP5) Atmosphere-Ocean General Circulation Models (AOGCMs). The nonlinearity is shown to arise from a change in strength of climate feedbacks driven by an evolving pattern of surface warming. In 23 out of the 27 AOGCMs examined the climate feedback parameter becomes significantly (95% confidence) less negative – i.e. the effective climate sensitivity increases – as time passes. Cloud feedback parameters show the largest changes. In the AOGCM-mean approximately 60% of the change in feedback parameter comes from the tropics (30°N–30°S). An important region involved is the tropical Pacific where the surface warming intensifies in the east after a few decades. The dependence of climate feedbacks on an evolving pattern of surface warming is confirmed using the HadGEM2 and HadCM3 atmosphere GCMs (AGCMs). With monthly evolving sea-surface-temperatures and sea-ice prescribed from its AOGCM counterpart each AGCM reproduces the time-varying feedbacks, but when a fixed pattern of warming is prescribed the radiative response is linear with global temperature change or nearly so. We also demonstrate that the regression and fixed-SST methods for evaluating effective radiative forcing are in principle different, because rapid SST adjustment when CO2 is changed can produce a pattern of surface temperature change with zero global mean but non-zero change in net radiation at the top of the atmosphere (~ -0.5 W m-2 in HadCM3).
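The regression method for effective radiative forcing mentioned above fits a straight line to net heat input N against temperature change ΔT: the intercept estimates the forcing F and the slope the feedback parameter. A minimal sketch with synthetic numbers (not CMIP5 output; F and λ below are merely illustrative):

```python
import numpy as np

# Gregory-style regression: N = F + lambda * dT, so a linear fit of
# N against dT recovers the forcing (intercept) and feedback (slope).
rng = np.random.default_rng(2)

F = 7.4          # W m-2, illustrative 4xCO2-like forcing
lam = -1.1       # W m-2 K-1, feedback parameter (negative = stabilising)
dT = np.linspace(0.5, 6.0, 150)                         # warming trajectory, K
N = F + lam * dT + 0.2 * rng.standard_normal(dT.size)   # noisy "model" output

slope, intercept = np.polyfit(dT, N, 1)
```

A time-varying feedback, as found in most of the AOGCMs, shows up as curvature in the N–ΔT scatter, so a single straight-line fit then depends on the period regressed over.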
Abstract:
Global controls on month-by-month fractional burnt area (2000–2005) were investigated by fitting a generalised linear model (GLM) to Global Fire Emissions Database (GFED) data, with 11 predictor variables representing vegetation, climate, land use and potential ignition sources. Burnt area is shown to increase with annual net primary production (NPP), number of dry days, maximum temperature, grazing-land area, grass/shrub cover and diurnal temperature range, and to decrease with soil moisture, cropland area and population density. Lightning showed an apparent (weak) negative influence, but this disappeared when pure seasonal-cycle effects were taken into account. The model predicts observed geographic and seasonal patterns, as well as the emergent relationships seen when burnt area is plotted against each variable separately. Unimodal relationships with mean annual temperature and precipitation, population density and gross domestic product (GDP) are reproduced too, and are thus shown to be secondary consequences of correlations between different controls (e.g. high NPP with high precipitation; low NPP with low population density and GDP). These findings have major implications for the design of global fire models, as several assumptions in current models – most notably, the widely assumed dependence of fire frequency on ignition rates – are evidently incorrect.
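A GLM of the kind described is typically fitted by iteratively reweighted least squares (IRLS). The sketch below fits a log-link GLM with Poisson-type weights to synthetic data; the predictors, coefficients, and data are illustrative, not the GFED model.

```python
import numpy as np

# Minimal IRLS fit of a log-link GLM, of the kind used to relate burnt
# area to climate/vegetation predictors. Synthetic data, not GFED values.
rng = np.random.default_rng(3)

n, p = 500, 3
X = np.column_stack([np.ones(n), rng.standard_normal((n, p))])  # intercept + predictors
beta_true = np.array([-2.0, 0.8, -0.5, 0.3])
mu = np.exp(X @ beta_true)          # expected response under the log link
y = mu.copy()                       # noiseless response for a clean check

m = y + 0.1                         # standard IRLS initialisation
eta = np.log(m)
for _ in range(25):
    z = eta + (y - m) / m           # working response
    WX = X * m[:, None]             # working weights W = m (Poisson variance)
    beta = np.linalg.solve(X.T @ WX, WX.T @ z)
    eta = X @ beta
    m = np.exp(eta)
```

In the burnt-area setting, apparent unimodal responses to single predictors can emerge from such a fit even though each predictor enters monotonically, exactly as the abstract describes.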
Abstract:
A practical single-carrier (SC) block transmission with frequency domain equalisation (FDE) system can generally be modelled by the Hammerstein system that includes the nonlinear distortion effects of the high power amplifier (HPA) at the transmitter. For such Hammerstein channels, the standard SC-FDE scheme no longer works. We propose a novel B-spline neural network based nonlinear SC-FDE scheme for Hammerstein channels. In particular, we model the nonlinear HPA, which represents the complex-valued static nonlinearity of the Hammerstein channel, by two real-valued B-spline neural networks, one for modelling the nonlinear amplitude response of the HPA and the other for the nonlinear phase response of the HPA. We then develop an efficient alternating least squares algorithm for estimating the parameters of the Hammerstein channel, including the channel impulse response coefficients and the parameters of the two B-spline models. Moreover, we also use another real-valued B-spline neural network to model the inversion of the HPA’s nonlinear amplitude response, and the parameters of this inverting B-spline model can be estimated using the standard least squares algorithm based on the pseudo training data obtained as a byproduct of the Hammerstein channel identification. Equalisation of the SC Hammerstein channel can then be accomplished by the usual one-tap linear equalisation in frequency domain as well as the inverse B-spline neural network model obtained in time domain. The effectiveness of our nonlinear SC-FDE scheme for Hammerstein channels is demonstrated in a simulation study.
Abstract:
The EU Water Framework Directive (WFD) requires that the ecological and chemical status of water bodies in Europe should be assessed, and action taken where possible to ensure that at least "good" quality is attained in each case by 2015. This paper is concerned with the accuracy and precision with which chemical status in rivers can be measured given certain sampling strategies, and how this can be improved. High-frequency (hourly) chemical data from four rivers in southern England were subsampled to simulate different sampling strategies for four parameters used for WFD classification: dissolved phosphorus, dissolved oxygen, pH and water temperature. These data sub-sets were then used to calculate the WFD classification for each site. Monthly sampling was less precise than weekly sampling, but the effect on WFD classification depended on the closeness of the range of concentrations to the class boundaries. In some cases, monthly sampling for a year could result in the same water body being assigned to three or four of the WFD classes with 95% confidence, due to random sampling effects, whereas with weekly sampling this was one or two classes for the same cases. In the most extreme case, the same water body could have been assigned to any of the five WFD quality classes. Weekly sampling considerably reduces the uncertainties compared to monthly sampling. The width of the weekly sampled confidence intervals was about 33% of the monthly width for P species and pH, about 50% for dissolved oxygen, and about 67% for water temperature. For water temperature, which is assessed as the 98th percentile in the UK, monthly sampling biases the estimate downwards by about 1 °C compared to the true value, due to problems of assessing high percentiles with limited data. Low-frequency measurements will generally be unsuitable for assessing standards expressed as high percentiles.
Confining sampling to the working week compared to all 7 days made little difference, but a modest improvement in precision could be obtained by sampling at the same time of day within a 3 h time window, and this is recommended. For parameters with a strong diel variation, such as dissolved oxygen, the value obtained, and thus possibly the WFD classification, can depend markedly on when in the cycle the sample was taken. Specifying this in the sampling regime would be a straightforward way to improve precision, but there needs to be agreement about how best to characterise risk in different types of river. These results suggest that in some cases it will be difficult to assign accurate WFD chemical classes or to detect likely trends using current sampling regimes, even for these largely groundwater-fed rivers. A more critical approach to sampling is needed to ensure that management actions are appropriate and supported by data.
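The effect of diel variation on fixed-hour sampling can be illustrated directly: sampling a diurnally cycling parameter always at the same hour recovers a value that depends on where in the cycle that hour falls. The series below is synthetic (a sinusoidal stand-in for dissolved oxygen), not data from the four rivers.

```python
import numpy as np

# Why sampling time-of-day matters for a parameter with a strong diel cycle:
# fixed-hour sampling of a sinusoidal "dissolved oxygen" series gives
# systematically different values depending on the hour chosen.
hours = np.arange(24 * 365)                              # one year of hourly values
diel = 2.0 * np.sin(2 * np.pi * (hours % 24 - 8) / 24)   # cycle peaking mid-afternoon
conc = 9.0 + diel                                        # mg/L, illustrative series

sample_4am = conc[hours % 24 == 4]                       # always sampled at 04:00
sample_2pm = conc[hours % 24 == 14]                      # always sampled at 14:00

bias = sample_2pm.mean() - sample_4am.mean()             # offset purely from timing
```

Here the two fixed-hour strategies differ by several mg/L even though they sample the same river, which is why specifying a consistent time window in the sampling regime improves precision.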
Abstract:
The first size-resolved airborne measurements of dust fluxes and the first dust flux measurements from the central Sahara are presented and compared with a parameterization by Kok (2011a). High-frequency measurements of dust size distribution were obtained from 0.16 to 300 µm diameter, and eddy covariance fluxes were derived. This size range is more than an order of magnitude larger than that of previous flux estimates. Links to surface emission are provided by analysis of particle drift velocities. Number flux is described by a −2 power law between 1 and 144 µm diameter, significantly larger than the 12 µm upper limit suggested by Kok (2011a). For small particles, the deviation from a power law varies with terrain type and the large size cutoff is correlated with atmospheric vertical turbulent kinetic energy, suggesting control by vertical transport rather than emission processes. The measured mass flux mode is in the range 30–100 µm. The turbulent scales important for dust flux are from 0.1 km to 1–10 km. The upper scale increases during the morning as boundary layer depth and eddy size increase. All locations where large dust fluxes were measured had large topographical variations. These features are often linked with highly erodible surface features, such as wadis or dunes. We also hypothesize that upslope flow and flow separation over such features enhance the dust flux by transporting large particles out of the saltation layer. The tendency to locate surface flux measurements in open, flat terrain means these favored dust sources have been neglected in previous studies.
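The eddy covariance technique behind these flux estimates reduces to a covariance of fluctuations after Reynolds decomposition: the vertical turbulent flux of a scalar is mean(w'c'). A minimal sketch on synthetic turbulence (not the campaign data):

```python
import numpy as np

# Eddy-covariance flux sketch: remove the means from vertical wind w and
# scalar concentration c, then the flux is the mean product of fluctuations.
rng = np.random.default_rng(4)

n = 20000
w = 0.6 * rng.standard_normal(n)                 # vertical wind fluctuations, m/s
c = 150.0 + 40.0 * w + rng.standard_normal(n)    # dust concentration, correlated with w

w_prime = w - w.mean()                           # Reynolds decomposition
c_prime = c - c.mean()
flux = np.mean(w_prime * c_prime)                # upward turbulent flux (arbitrary units)
```

The sensitivity to averaging scale noted in the abstract (0.1 km up to 1–10 km) corresponds to how much of the low-frequency covariance is retained when the means are computed over longer or shorter windows.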
Abstract:
The objective of this study was to select the optimal operational conditions for the production of instant soy protein isolate (SPI) by pulsed fluid bed agglomeration. The spray-dried SPI was characterized as being a cohesive powder, presenting cracks and channeling formation during its fluidization (Geldart type A). The process was carried out in a pulsed fluid bed, and aqueous maltodextrin solution was used as liquid binder. Air pulsation, at a frequency of 600 rpm, was used to fluidize the cohesive SPI particles and to allow agglomeration to occur. Seventeen tests were performed according to a central composite design. Independent variables were (i) feed flow rate (0.5-3.5 g/min), (ii) atomizing air pressure (0.5-1.5 bar) and (iii) binder concentration (10-50%). Mean particle diameter, process yield and product moisture were analyzed as responses. Surface response analysis led to the selection of optimal operational parameters, following which larger granules with low moisture content and high process yield were produced. Product transformations were also evaluated by the analysis of size distribution, flowability, cohesiveness and wettability. When compared to raw material, agglomerated particles were more porous and had a more irregular shape, presenting a wetting time decrease, free-flow improvement and cohesiveness reduction. (C) 2010 Elsevier B.V. All rights reserved.
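Surface response analysis over a central composite design, as used here, amounts to fitting a second-order polynomial to the measured responses by least squares. The sketch below uses a two-factor design with a synthetic response surface; the design points and coefficients are illustrative, not the SPI agglomeration data.

```python
import numpy as np

# Response-surface sketch: fit a full quadratic model to data laid out on a
# two-factor central composite design (factorial + axial + centre points).
a = np.sqrt(2)                                   # axial distance in coded units
pts = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                [-a, 0], [a, 0], [0, -a], [0, a], [0, 0]])
x1, x2 = pts[:, 0], pts[:, 1]

def surface(x1, x2):
    # "true" response, known here by construction (synthetic, e.g. yield)
    return 50 + 4*x1 + 2*x2 - 3*x1**2 - 2*x2**2 + 1.5*x1*x2

y = surface(x1, x2)

# quadratic model terms: 1, x1, x2, x1^2, x2^2, x1*x2
M = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
coef, *_ = np.linalg.lstsq(M, y, rcond=None)
```

With the fitted coefficients in hand, optimal operating conditions are read off from the stationary point (or constrained optimum) of the quadratic surface, which is how the optimal feed flow rate, atomizing pressure, and binder concentration would be selected.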
Abstract:
This work is an assessment of the frequency of extreme values (EVs) of daily rainfall in the city of São Paulo, Brazil, over the period 1933-2005, based on the peaks-over-threshold (POT) and Generalized Pareto Distribution (GPD) approach. Usually, a GPD model is fitted to a sample of POT values selected with a constant threshold. However, in this work we use time-dependent thresholds, composed of relatively large p-quantiles (for example, p = 0.97) of daily rainfall amounts computed from all available data. Samples of POT values were extracted with several values of p. Four different GPD models (GPD-1, GPD-2, GPD-3 and GPD-4) were fitted to each of these samples by the maximum likelihood (ML) method. The shape parameter was assumed constant for the four models, but time-varying covariates were incorporated into the scale parameter of GPD-2, GPD-3 and GPD-4, describing an annual cycle in GPD-2, a linear trend in GPD-3, and both an annual cycle and a linear trend in GPD-4. GPD-1, with constant scale and shape parameters, is the simplest model. For identification of the best model among the four we used the rescaled Akaike Information Criterion (AIC) with second-order bias correction. This criterion isolates GPD-3 as the best model, i.e. the one with a positive linear trend in the scale parameter. The slope of this trend is significant compared to the null hypothesis of no trend, at about the 98% confidence level. The non-parametric Mann-Kendall test also showed the presence of a positive trend in the annual frequency of excesses over high thresholds, with the p-value being virtually zero. Therefore, there is strong evidence that high quantiles of daily rainfall in the city of São Paulo have been increasing in magnitude and frequency over time. For example, the 0.99 quantile of daily rainfall amount has increased by about 40 mm between 1933 and 2005. Copyright (C) 2008 Royal Meteorological Society
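The GPD machinery in the abstract can be sketched compactly: draw threshold excesses from a GPD by inverse-CDF sampling and recover the shape and scale parameters. The paper fits by maximum likelihood; method-of-moments estimators are substituted below purely for brevity, and all parameter values are illustrative.

```python
import numpy as np

# Peaks-over-threshold sketch: simulate GPD excesses and recover
# (shape xi, scale sigma) by method of moments. Moments require xi < 1/2.
rng = np.random.default_rng(5)

xi, sigma = 0.1, 12.0                           # illustrative shape and scale (mm)
u = rng.uniform(size=50000)
excess = sigma / xi * ((1 - u) ** (-xi) - 1)    # GPD inverse CDF applied to uniforms

m, v = excess.mean(), excess.var()
# From mean = sigma/(1-xi) and var = mean^2/(1-2*xi):
xi_hat = 0.5 * (1 - m**2 / v)
sigma_hat = m * (1 - xi_hat)
```

A time-dependent threshold or trend, as in GPD-3, is handled by letting sigma depend on covariates (e.g. time) inside the likelihood rather than treating it as a single constant.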
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
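The driver of the bias discussed above is that a diffraction-limited PSF width scales as λ/D, so inside a broad band the effective PSF depends on the object's spectral energy distribution (SED). The sketch below uses loosely Euclid-like but entirely illustrative numbers (1.2 m aperture, a ~550-900 nm band) and simple linear SEDs:

```python
import numpy as np

# SED-weighted effective PSF width for a diffraction-limited telescope:
# red objects weight the long-wavelength (wider-PSF) end of the band more
# than blue objects, so their effective PSFs differ. Illustrative numbers.
D = 1.2                                            # telescope aperture, m
lam = np.linspace(550e-9, 900e-9, 200)             # wavelengths across the band, m

fwhm = 1.22 * lam / D * 206265.0                   # Airy-scale PSF width, arcsec

sed_blue = np.linspace(2.0, 1.0, lam.size)         # blue object: flux falls with lambda
sed_red = np.linspace(1.0, 2.0, lam.size)          # red object: flux rises with lambda

psf_blue = np.average(fwhm, weights=sed_blue)      # SED-weighted effective PSF width
psf_red = np.average(fwhm, weights=sed_red)
```

The two correction strategies in the abstract correspond to matching this weighting either empirically (stars of the same colour as the galaxies) or by reconstructing the weights from an estimated SED plus a telescope model.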