63 results for expense
Abstract:
A radiometric analysis of the light coupled by optical fiber amplitude-modulating extrinsic-type reflectance displacement sensors is presented. Uncut fiber sensors show the largest range but the lowest responsivity. Single-cut fiber sensors exhibit an improvement in responsivity at the expense of range. A further increase in responsivity, as well as a reduction in the operational range, is obtained when the double-cut sensor configuration is implemented. The double-cut configuration is particularly suitable in applications where feedback action is applied to the moving reflector surface. © 2000 American Institute of Physics.
Abstract:
The UK Food Standards Agency convened a group of expert scientists to review current research investigating the optimal dietary intake for n-9 cis-monounsaturated fatty acids (MUFA). The aim was to review the mechanisms underlying the reported beneficial effects of MUFA on CHD risk, and to establish priorities for future research. The issue of optimal MUFA intake is contingent upon optimal total fat intake; however, there is no consensus of opinion on what the optimal total fat intake should be. Thus, it was recommended that a large multi-centre study should look at the effects on CHD risk of MUFA replacement of saturated fatty acids in relation to varying total fat intakes; this study should be of sufficient size to take account of genetic variation, sex, physical activity and stage of life factors, as well as being of sufficient duration to account for adaptation to diets. Recommendations for studies investigating the mechanistic effects of MUFA were also made. Methods of manipulating the food chain to increase MUFA at the expense of saturated fatty acids were also discussed.
Abstract:
Cloud-resolving numerical simulations of airflow over a diurnally heated mountain ridge are conducted to explore the mechanisms and sensitivities of convective initiation under high pressure conditions. The simulations are based on a well-observed convection event from the Convective and Orographically Induced Precipitation Study (COPS) during summer 2007, where an isolated afternoon thunderstorm developed over the Black Forest mountains of central Europe, but they are idealized to facilitate understanding and reduce computational expense. In the conditionally unstable but strongly inhibited flow under consideration, sharp horizontal convergence over the mountain acts to locally weaken the inhibition and moisten the dry midtroposphere through shallow cumulus detrainment. The onset of deep convection occurs not through the deep ascent of a single updraft but rather through a rapid succession of thermals that are vented through the mountain convergence zone into the deepening cloud mass. Emerging thermals rise through the saturated wakes of their predecessors, which diminishes the suppressive effects of entrainment and allows for rapid glaciation above the freezing level as supercooled cloud drops rime onto preexisting ice particles. These effects strongly enhance the midlevel cloud buoyancy and enable rapid ascent to the tropopause. The existence and vigor of the convection is highly sensitive to small changes in background wind speed U0, which controls the strength of the mountain convergence and the ability of midlevel moisture to accumulate above the mountain. Whereas vigorous deep convection develops for U0 = 0 m s−1, deep convection is completely eliminated for U0 = 3 m s−1. Although deep convection is able to develop under intermediate winds (U0 = 1.5 m s−1), its formation is highly sensitive to small-amplitude perturbations in the initial flow.
Abstract:
In the earth sciences, data are commonly cast on complex grids in order to model irregular domains such as coastlines, or to evenly distribute grid points over the globe. It is common for a scientist to wish to re-cast such data onto a grid that is more amenable to manipulation, visualization, or comparison with other data sources. The complexity of the grids presents a significant technical difficulty to the regridding process. In particular, the regridding of complex grids may suffer from severe performance issues, in the worst case scaling with the product of the sizes of the source and destination grids. We present a mechanism for the fast regridding of such datasets, based upon the construction of a spatial index that allows fast searching of the source grid. We discover that the most efficient spatial index under test (in terms of memory usage and query time) is a simple look-up table. A kd-tree implementation was found to be faster to build and to give similar query performance at the expense of a larger memory footprint. Using our approach, we demonstrate that regridding of complex data may proceed at speeds sufficient to permit regridding on-the-fly in an interactive visualization application, or in a Web Map Service implementation. For large datasets with complex grids the new mechanism is shown to significantly outperform algorithms used in many scientific visualization packages.
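As a rough illustration of the look-up-table index described above (the class and parameter names below are assumptions for the sketch, not the paper's implementation), the idea is to bucket the source-grid points into a coarse regular grid so that each destination point only searches a handful of nearby candidates rather than the whole source grid:

```python
# Illustrative sketch (not the paper's implementation): a regular-grid
# look-up table used as a spatial index over an irregular source grid,
# so each destination point scans only a few nearby source points.
import numpy as np

class LookupTableIndex:
    def __init__(self, src_lon, src_lat, nbins=100):
        self.lon = np.asarray(src_lon)
        self.lat = np.asarray(src_lat)
        self.nbins = nbins
        # Map each source point into a coarse regular bucket grid.
        self.lon_edges = np.linspace(self.lon.min(), self.lon.max(), nbins + 1)
        self.lat_edges = np.linspace(self.lat.min(), self.lat.max(), nbins + 1)
        ix = np.clip(np.digitize(self.lon, self.lon_edges) - 1, 0, nbins - 1)
        iy = np.clip(np.digitize(self.lat, self.lat_edges) - 1, 0, nbins - 1)
        self.buckets = {}
        for k, (i, j) in enumerate(zip(ix, iy)):
            self.buckets.setdefault((int(i), int(j)), []).append(k)

    def nearest(self, lon, lat):
        # Scan the containing bucket and its neighbours for the closest point.
        i = int(np.clip(np.digitize(lon, self.lon_edges) - 1, 0, self.nbins - 1))
        j = int(np.clip(np.digitize(lat, self.lat_edges) - 1, 0, self.nbins - 1))
        candidates = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                candidates += self.buckets.get((i + di, j + dj), [])
        if not candidates:
            candidates = list(range(len(self.lon)))   # fall back to brute force
        d2 = (self.lon[candidates] - lon) ** 2 + (self.lat[candidates] - lat) ** 2
        return candidates[int(np.argmin(d2))]
```

A kd-tree (for example scipy.spatial.cKDTree) could be swapped in for the bucket table; as the abstract reports, that tends to build faster and query comparably at the cost of a larger memory footprint.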
Abstract:
The performance of an international real estate investment can be critically affected by currency fluctuations. While survey work suggests that large international investors with multi-asset portfolios tend to hedge their overall currency exposure at portfolio level, smaller and specialist investors are more likely to hedge individual investments and face considerable specific risk. This presents particular problems in direct real estate investment due to the lengthy holding period. Prior research investigating the issue relies on ex post portfolio measures, understating the risk faced. This paper examines individual risk using a forward-looking simulation approach to model uncertain cash flows. The results suggest that a US investor can greatly reduce the downside currency risk inherent in UK real estate by using a swap structure – but at the expense of dampening upside potential.
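A minimal Monte Carlo sketch of the general idea (all numbers and dynamics below are illustrative assumptions, not the paper's cash-flow model) is to simulate GBP/USD paths, convert an expected UK cash flow to dollars either at the simulated spot rates or at a rate locked in by a swap, and compare the tails of the two distributions:

```python
# Toy Monte Carlo sketch (assumed parameters, not the paper's model):
# compare USD outcomes for a UK property cash flow with and without a
# currency swap that locks in today's GBP/USD rate on the expected amounts.
import numpy as np

rng = np.random.default_rng(0)
n_paths, years = 10_000, 5
gbp_cashflow = 100.0            # expected annual GBP income (assumed)
fx0 = 1.30                      # spot GBP/USD rate (assumed)
fx_vol, fx_drift = 0.10, 0.0    # assumed FX dynamics

# Simulate log-normal GBP/USD paths.
shocks = rng.normal(fx_drift - 0.5 * fx_vol**2, fx_vol, size=(n_paths, years))
fx_paths = fx0 * np.exp(np.cumsum(shocks, axis=1))

# Unhedged: convert each year's GBP income at the simulated spot rate.
unhedged_usd = (gbp_cashflow * fx_paths).sum(axis=1)

# Swap-hedged: the expected GBP amounts are exchanged at the fixed rate fx0,
# so in this stylised case the dollar outcome is certain.
hedged_usd = gbp_cashflow * fx0 * years * np.ones(n_paths)

for name, x in [("unhedged", unhedged_usd), ("hedged", hedged_usd)]:
    print(f"{name}: mean={x.mean():.1f}  5th pct={np.percentile(x, 5):.1f}  "
          f"95th pct={np.percentile(x, 95):.1f}")
```

In this stylised setting the hedged distribution collapses to its expectation, mirroring the abstract's point that the swap removes most of the downside while also giving up the upside; a fuller model would also simulate the uncertain GBP cash flows themselves, leaving residual exposure.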
Abstract:
An important part of strategic planning's purpose should be to attempt to forecast the future, not simply to belatedly respond to events, or accept the future as inevitable. This paper puts forward a conceptual approach for seeking to achieve these aims and uses the Bournemouth and Poole area in Dorset as a vehicle for applying the basic methodology. The area has been chosen because of the significant issues that it currently faces in planning terms, and because of its future development possibilities. In order that alternative future choices for the area – different 'developmental trajectories' – can be evaluated, they must be carefully and logically constructed. Four Futures for Bournemouth/Poole have been put forward; they are titled and colour-coded: Future One is Maximising Growth – Golden Prospect, which seeks to achieve the highest level of economic prosperity for the area; Future Two is Incremental Growth – Solid Silver, which attempts to facilitate a steady, continuing, controlled pattern of development for the area; Future Three is Steady State – Cobalt Blue, which suggests that people in the area could be more concerned with preserving their quality of life in terms of their leisure and recreation than with increasing wealth; Future Four is Environment First – Jade Green, which makes the area's environmental protection its top priority, even at the possible expense of economic prosperity. The scenarios proposed here are not sacrosanct. Nor are they simply confined to the Bournemouth and Poole area: in theory, suitably modified, they could be used in a variety of different contexts. Consideration of the scenarios, wherever located, might then generate other, additional scenarios; these are called hybrids, alloys and amalgams. Likewise, it might identify some of them as inappropriate or impossible. Most likely, careful consideration of the scenarios will suggest hybrid scenarios, in which features from different scenarios are combined to produce alternative or additional futures for consideration. The real issue then becomes how best to fashion such a future for the particular area under consideration.
Abstract:
Based on the potential benefits to human health there is interest in increasing 18:3n-3, 20:5n-3, 22:6n-3, and cis-9,trans-11 conjugated linoleic acid (CLA) in ruminant foods. Four Aberdeen Angus steers (406 ± 8.2 kg BW) fitted with rumen and duodenal cannulae were used in a 4 × 4 Latin square experiment with 21 d periods to examine the potential of fish oil (FO) and linseed oil (LO) in the diet to increase ruminal outflow of trans-11 18:1 and total n-3 polyunsaturated fatty acids (PUFA) in growing cattle. Treatments consisted of a control diet (60:40 forage:concentrate ratio, on a DM basis) based on maize silage, or the same basal ration containing 30 g/kg DM of FO, LO or a mixture (1:1, w/w) of FO and LO (LFO). Diets were offered as total mixed rations and fed at a rate of 85 g DM/kg BW^0.75/d. Oils had no effect (P = 0.52) on DM intake. Linseed oil had no effect (P > 0.05) on ruminal pH or VFA concentrations, while FO shifted rumen fermentation towards propionate at the expense of acetate. Compared with the control, LO increased (P < 0.05) 18:0, cis 18:1 (Δ9, 12-15), trans 18:1 (Δ4-9, 11-16), trans 18:2, geometric isomers of ∆9,11, ∆11,13, and ∆13,15 CLA, trans-8,cis-10 CLA, trans-10,trans-12 CLA, trans-12,trans-14 CLA, and 18:3n-3 flow at the duodenum. Inclusion of FO in the diet resulted in higher (P < 0.05) flows of cis-9 16:1, trans 16:1 (Δ6-13), cis 18:1 (Δ9, 11, and 13), trans 18:1 (Δ6-15), trans 18:2, 20:5n-3, 22:5n-3, and 22:6n-3, and lowered (P < 0.001) 18:0 at the duodenum relative to the control. For most fatty acids at the duodenum, responses to LFO were intermediate between those to FO and LO. However, LFO resulted in higher (P = 0.04) flows of total trans 18:1 than LO and increased (P < 0.01) trans-6 16:1 and trans-12 18:1 at the duodenum compared with FO or LO. Biohydrogenation of cis-9 18:1 and 18:2n-6 in the rumen was independent of treatment, but both FO and LO increased (P < 0.001) the extent of 18:3n-3 biohydrogenation compared with the control. Ruminal 18:3n-3 biohydrogenation was higher (P < 0.001) for LO and LFO than FO, while biohydrogenation of 20:5n-3 and 22:6n-3 in the rumen was marginally lower (P = 0.05) for LFO than FO. In conclusion, LO and FO at 30 g/kg DM altered the biohydrogenation of unsaturated fatty acids in the rumen, causing an increase in the flow of specific intermediates at the duodenum, but the potential of these oils fed alone or as a mixture to increase n-3 PUFA at the duodenum in cattle appears limited.
Abstract:
The human large intestine is an intensively colonised area containing bacteria that are health promoting, as well as pathogenic. This has led to functional food developments that fortify the former at the expense of the latter. Probiotics have a long history of use in humans as live microbial feed additions. In contrast, a prebiotic is a non-digestible food ingredient that beneficially affects the host by targeting indigenous components thought to be positive. Dietary carbohydrates, such as fibres, are candidate prebiotics, but most promise has been realised with oligosaccharides. As prebiotics exploit non-viable food ingredients, their applicability in diets is wide ranging. As gastrointestinal disorders are prevalent in terms of human health, both probiotics and prebiotics serve an important role in the prophylactic management of various acute and chronic gut-derived conditions. Examples include protection from gastroenteritis and some inflammatory conditions.
Abstract:
If philosophy and poetry are to illuminate each other, we should first understand their tendencies to mutual antipathy. Examining (and, where possible, correcting) mutual misapprehension is part of this task. J. L. Austin's remarks on poetry offer one such point of entry: they are often cited by poets and critics as an example of philosophy's blindness to poetry (I). These remarks are complex and their purpose obscure—more so than those who take exception to them usually allow or admit (II). But it is reasonable to think that, for all his levity at their expense, what Austin offers poets is exemption from forms of commitment. Since such exemption is precisely what poets and critics have sought, this diagnosis is eirenic (III). This exemption has a price, but it may be affordable (IV).
Abstract:
This paper constructs a housing market model to analyse conditions for different generations of households in the UK. Previous policy work has suggested that baby-boomers have benefitted at the expense of younger generations. The model relies on a form of financial accelerator in which existing homeowners reinvest a proportion of the capital gains on moving home. The model is extended to look at homeownership probabilities. It also explains why an increasing share of mortgages has gone to existing owners, despite market liberalisation and securitisation. In addition, the model contributes to the explanation of volatility.
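One way to picture the accelerator mechanism (a deliberately crude toy, with parameters chosen purely for illustration rather than taken from the paper's calibration) is a price recursion in which movers plough a fraction of last period's realised capital gain back into housing demand:

```python
# Minimal toy sketch of the capital-gains feedback described above
# (all parameters are illustrative assumptions, not the paper's model).
import numpy as np

rng = np.random.default_rng(1)
T = 40
price = np.empty(T)
price[0], price[1] = 100.0, 102.0
alpha = 0.6    # share of realised capital gains reinvested by movers (assumed)
beta = 0.2     # pass-through of extra housing demand into prices (assumed)

for t in range(2, T):
    gain = max(price[t-1] - price[t-2], 0.0)      # gain realised on moving home
    demand_boost = alpha * gain                   # equity reinvested by movers
    income_shock = rng.normal(0.0, 1.0)           # exogenous demand shock
    price[t] = price[t-1] + beta * demand_boost + income_shock

print(np.round(price, 1))
```

Because the reinvested gain depends on past price growth, positive shocks are partially re-amplified, which is the volatility channel the abstract alludes to.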
Abstract:
This dissertation deals with aspects of sequential data assimilation (in particular ensemble Kalman filtering) and numerical weather forecasting. In the first part, the recently formulated Ensemble Kalman-Bucy filter (EnKBF) is revisited. It is shown that the previously used numerical integration scheme fails when the magnitude of the background error covariance grows beyond that of the observational error covariance in the forecast window. Therefore, we present a suitable integration scheme that handles the stiffening of the differential equations involved without adding further computational expense. Moreover, a transform-based alternative to the EnKBF is developed: under this scheme, the operations are performed in the ensemble space instead of in the state space. Advantages of this formulation are explained. For the first time, the EnKBF is implemented in an atmospheric model. The second part of this work deals with ensemble clustering, a phenomenon that arises when performing data assimilation using deterministic ensemble square root filters (EnSRFs) in highly nonlinear forecast models. Namely, an M-member ensemble detaches into an outlier and a cluster of M-1 members. Previous works may suggest that this issue represents a failure of EnSRFs; this work dispels that notion. It is shown that ensemble clustering can also be reversed by nonlinear processes, in particular the alternation between nonlinear expansion and compression of the ensemble in different regions of the attractor. Some EnSRFs that use random rotations have been developed to overcome this issue; these formulations are analyzed and their advantages and disadvantages with respect to common EnSRFs are discussed. The third and last part contains the implementation of the Robert-Asselin-Williams (RAW) filter in an atmospheric model. The RAW filter is an improvement to the widely popular Robert-Asselin filter that successfully suppresses spurious computational waves while avoiding any distortion in the mean value of the function. Using statistical significance tests both at the local and field level, it is shown that the climatology of the SPEEDY model is not modified by the changed time stepping scheme; hence, no retuning of the parameterizations is required. It is also found that the accuracy of the medium-term forecasts is increased by using the RAW filter.
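For the third part, the RAW filter itself is compact enough to sketch: the Robert-Asselin displacement d = (nu/2)(x[n-1] - 2 x[n] + x[n+1]) is split between the current and newest time levels with a weight alpha, where alpha = 1 recovers the classical Robert-Asselin filter and values near 0.5 largely preserve the three-level mean (0.53 is a commonly quoted compromise). The leapfrog test problem and parameter values below are illustrative assumptions, not taken from the SPEEDY experiments:

```python
# Minimal sketch of leapfrog time stepping with the RAW filter (Williams'
# modification of the Robert-Asselin filter); the test ODE dx/dt = i*omega*x
# and the parameter values are illustrative assumptions.
import numpy as np

def leapfrog_raw(f, x0, dt, nsteps, nu=0.2, alpha=0.53):
    x_prev = x0
    x_curr = x0 + dt * f(x0)          # simple first step (forward Euler)
    out = [x_prev, x_curr]
    for _ in range(nsteps - 1):
        x_next = x_prev + 2.0 * dt * f(x_curr)           # leapfrog step
        d = 0.5 * nu * (x_prev - 2.0 * x_curr + x_next)  # RA displacement
        x_curr_f = x_curr + alpha * d                    # filter current level
        x_next_f = x_next + (alpha - 1.0) * d            # RAW also nudges new level
        x_prev, x_curr = x_curr_f, x_next_f
        out.append(x_curr)
    return np.array(out)

# Example: oscillation dx/dt = i*omega*x; setting alpha=1 recovers the RA filter.
omega = 1.0
sol = leapfrog_raw(lambda x: 1j * omega * x, 1.0 + 0j, dt=0.1, nsteps=200)
print(abs(sol[-1]))   # amplitude should stay close to 1 with the RAW filter
```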
Abstract:
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Abstract:
When performing data fusion, one often measures where targets were and then wishes to deduce where targets currently are. There has been recent research on the processing of such out-of-sequence data. This research has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships among the algorithms so that any approximations made are explicit. Results for a multi-sensor scenario involving out-of-sequence data association are used to illustrate the utility of this approach in a specific context.
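As a point of reference for the computational-expense trade-off, the simplest exact treatment is to buffer measurements and re-filter whenever a delayed one arrives; the sketch below (a generic linear Kalman baseline with assumed names, not one of the reviewed algorithms, and with a fixed one-step transition for brevity) shows that approach:

```python
# Illustrative baseline sketch (not one of the paper's algorithms): handle
# out-of-sequence measurements exactly by buffering all measurements and
# re-running a linear Kalman filter from the initial state in time order.
# x0, P0, F, Q, H, R are NumPy arrays; one fixed transition step is assumed
# per measurement for simplicity.
import numpy as np

class ReprocessingKalman:
    def __init__(self, x0, P0, F, Q, H, R):
        self.x0, self.P0 = x0, P0
        self.F, self.Q, self.H, self.R = F, Q, H, R
        self.buffer = []                      # list of (timestamp, measurement)

    def _update(self, x, P, z):
        # One predict/update cycle of the standard Kalman recursion.
        x, P = self.F @ x, self.F @ P @ self.F.T + self.Q
        S = self.H @ P @ self.H.T + self.R
        K = P @ self.H.T @ np.linalg.inv(S)
        x = x + K @ (z - self.H @ x)
        P = (np.eye(len(x)) - K @ self.H) @ P
        return x, P

    def insert(self, t, z):
        # A late measurement simply lands in its correct place in the buffer;
        # exactness is bought at the computational expense of re-filtering.
        self.buffer.append((t, np.asarray(z)))
        self.buffer.sort(key=lambda m: m[0])
        x, P = self.x0.copy(), self.P0.copy()
        for _, zi in self.buffer:
            x, P = self._update(x, P, zi)
        return x, P
```

Dedicated out-of-sequence algorithms avoid this growing re-processing cost by retrodicting the current state back to the delayed measurement's time, which is where the approximations discussed in the paper enter.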
Abstract:
In data fusion systems, one often encounters measurements of past target locations and then wishes to deduce where the targets are currently located. Recent research on the processing of such out-of-sequence data has culminated in the development of a number of algorithms for solving the associated tracking problem. This paper reviews these different approaches in a common Bayesian framework and proposes an architecture that orthogonalises the data association and out-of-sequence problems such that any combination of solutions to these two problems can be used together. The emphasis is not on advocating one approach over another on the basis of computational expense, but rather on understanding the relationships between the algorithms so that any approximations made are explicit.
Abstract:
A rapid-distortion model is developed to investigate the interaction of weak turbulence with a monochromatic irrotational surface water wave. The model is applicable when the orbital velocity of the wave is larger than the turbulence intensity, and when the slope of the wave is sufficiently high that the straining of the turbulence by the wave dominates over the straining of the turbulence by itself. The turbulence suffers two distortions. Firstly, vorticity in the turbulence is modulated by the wave orbital motions, which leads to the streamwise Reynolds stress attaining maxima at the wave crests and minima at the wave troughs; the Reynolds stress normal to the free surface develops minima at the wave crests and maxima at the troughs. Secondly, over several wave cycles the Stokes drift associated with the wave tilts vertical vorticity into the horizontal direction, subsequently stretching it into elongated streamwise vortices, which come to dominate the flow. These results are shown to be strikingly different from turbulence distorted by a mean shear flow, when 'streaky structures' of high and low streamwise velocity fluctuations develop. It is shown that, in the case of distortion by a mean shear flow, the tendency for the mean shear to produce streamwise vortices by distortion of the turbulent vorticity is largely cancelled by a distortion of the mean vorticity by the turbulent fluctuations. This latter process is absent in distortion by Stokes drift, since there is then no mean vorticity. The components of the Reynolds stress and the integral length scales computed from turbulence distorted by Stokes drift show the same behaviour as in the simulations of Langmuir turbulence reported by McWilliams, Sullivan & Moeng (1997). Hence we suggest that turbulent vorticity in the upper ocean, such as produced by breaking waves, may help to provide the initial seeds for Langmuir circulations, thereby complementing the shear-flow instability mechanism developed by Craik & Leibovich (1976). The tilting of the vertical vorticity into the horizontal by the Stokes drift tends also to produce a shear stress that does work against the mean straining associated with the wave orbital motions. The turbulent kinetic energy then increases at the expense of energy in the wave. Hence the wave decays. An expression for the wave attenuation rate is obtained by scaling the equation for the wave energy, and is found to be broadly consistent with available laboratory data.