934 results for Orthogonal polynomials on the real line
Abstract:
During the Last Glacial Maximum, the climate was substantially colder and the carbon cycle was clearly different from that of the late Holocene. According to proxy data, deep oceanic δ13C was very low and the atmospheric CO2 concentration was also reduced. Several mechanisms have been proposed to explain these changes, but none can fully explain the data, especially the very low deep ocean δ13C values. Oceanic core data show that the deep ocean was very cold and salty, which would lead to enhanced deep ocean stratification. We show that such an enhanced stratification in the coupled climate model CLIMBER-2 helps to produce very low deep oceanic δ13C values. Indeed, the simulated δ13C reaches values as low as −0.8‰, in line with proxy evidence. Moreover, it increases the oceanic carbon reservoir, leading to a small, yet robust, atmospheric CO2 drop of approximately 10 ppm.
Abstract:
I consider the possibility that respondents to the Survey of Professional Forecasters round their probability forecasts of the event that real output will decline in the future, as well as their reported output growth probability distributions. I make various plausible assumptions about respondents’ rounding practices, and show how these impinge upon the apparent mismatch between probability forecasts of a decline in output and the probabilities of this event implied by the annual output growth histograms. I find that rounding accounts for about a quarter of the inconsistent pairs of forecasts.
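One way to make the rounding argument concrete: treat a reported probability as consistent with a histogram-implied probability whenever their rounding intervals overlap. A minimal sketch, assuming hypothetical rounding grids of 5 and 1 percentage points (the grids and function names are illustrative, not taken from the paper):

```python
# A minimal sketch of a rounding-reconciliation check: a reported probability
# and a histogram-implied probability are deemed consistent under rounding if
# their rounding intervals overlap. The grids (nearest 5 and nearest 1
# percentage point) are illustrative assumptions about respondents' practice.
def interval(p, step):
    # set of true values that would round to the reported p
    return (p - step / 2, p + step / 2)

def consistent(reported, implied, step_reported=5.0, step_implied=1.0):
    lo1, hi1 = interval(reported, step_reported)
    lo2, hi2 = interval(implied, step_implied)
    return max(lo1, lo2) <= min(hi1, hi2)   # intervals overlap

print(consistent(20.0, 22.4))   # True: 22.4 could round to 20 on a 5-point grid
print(consistent(20.0, 31.0))   # False: no rounding can reconcile the pair
```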
Abstract:
Quasi-uniform grids of the sphere have become popular recently since they avoid parallel scaling bottlenecks associated with the poles of latitude–longitude grids. However, quasi-uniform grids of the sphere are often non-orthogonal. A version of the C-grid for arbitrary non-orthogonal grids is presented which gives some of the mimetic properties of the orthogonal C-grid. Exact energy conservation is sacrificed for improved accuracy and the resulting scheme numerically conserves energy and potential enstrophy well. The non-orthogonal nature means that the scheme can be used on a cubed sphere. The advantage of the cubed sphere is that it does not admit the computational modes of the hexagonal or triangular C-grids. On various shallow-water test cases, the non-orthogonal scheme on a cubed sphere has accuracy less than or equal to that of the orthogonal scheme on an orthogonal hexagonal icosahedron. A new diamond grid is presented, consisting of quasi-uniform quadrilaterals, which is more nearly orthogonal than the equal-angle cubed sphere but with otherwise similar properties. It performs better than the cubed sphere in every way and should be used instead in codes which allow a flexible grid structure.
Abstract:
Climate change in the UK is expected to cause increases in temperatures, altered precipitation patterns and more frequent and extreme weather events. In this review we discuss climate effects on dissolved organic matter (DOM) and how altered DOM and water physico-chemical properties will affect treatment processes, and we assess the utility of techniques used to remove DOM and monitor water quality. A critical analysis of the literature has been undertaken with a focus on catchment drivers of DOM character, removal of DOM via coagulation and the formation of disinfection by-products (DBPs). We suggest that: (1) upland catchments recovering from acidification will continue to produce more DOM with a greater hydrophobic fraction as solubility controls decrease; (2) greater seasonality in DOM export is likely in future due to altered precipitation patterns; (3) changes in species diversity and water properties could encourage algal blooms; and (4) land management and vegetative changes may have significant effects on DOM export and treatability but require further research. Increases in DBPs may occur where catchments are strongly influenced by peatlands or where algal blooms become an issue. To increase resilience to variable DOM quantity and character we suggest that one or more of the following steps be undertaken at the treatment works: a) ‘enhanced coagulation’ optimised for DOM removal; b) switching from aluminium to ferric coagulants and/or incorporating coagulant aids; c) use of magnetic ion-exchange (MIEX) pre-coagulation; and d) activated carbon filtration post-coagulation. Fluorescence and UV absorbance techniques are highlighted as potential methods for low-cost, rapid, on-line process optimisation to improve DOM removal and minimise DBPs.
Abstract:
This paper examines the lead–lag relationship between the FTSE 100 index and index futures price employing a number of time-series models. Using 10-min observations from June 1996–1997, it is found that lagged changes in the futures price can help to predict changes in the spot price. The best forecasting model is of the error-correction type, allowing for the theoretical difference between spot and futures prices according to the cost-of-carry relationship. This predictive ability is in turn utilised to derive a trading strategy, which is tested under real-world conditions to search for systematic profitable trading opportunities. It is revealed that, although the model's forecasts produce significantly higher returns than a passive benchmark, the model is unable to outperform the benchmark once transaction costs are allowed for.
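A minimal sketch of the error-correction specification on simulated, cointegrated stand-in series (the single-lag design, variable names and simulated data are our assumptions, not the paper's model or its 10-minute FTSE 100 data):

```python
# Regress spot changes on the lagged spot-futures basis (the error-correction
# term) and the lagged futures change; a negative basis coefficient pulls the
# spot back toward the cost-of-carry relationship.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
futures = np.cumsum(rng.normal(size=n))
spot = futures + rng.normal(scale=0.5, size=n)   # cointegrated by construction

d_spot = np.diff(spot)[1:]                       # dependent variable
basis = (spot - futures)[1:-1]                   # lagged error-correction term
d_fut_lag = np.diff(futures)[:-1]                # lagged futures change

X = sm.add_constant(np.column_stack([basis, d_fut_lag]))
print(sm.OLS(d_spot, X).fit().params)
```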
Abstract:
This paper employs a vector autoregressive model to investigate the impact of macroeconomic and financial variables on a UK real estate return series. The results indicate that unexpected inflation and the interest rate term spread have explanatory power for the property market. However, the most significant influences on the real estate series are its own lagged values. We conclude that identifying the factors that have determined UK property returns over the past twelve years remains a difficult task.
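As a sketch of the modelling approach (a vector autoregression fitted to invented stand-in series via statsmodels, not the paper's data or exact specification):

```python
# Fit a VAR to real estate returns alongside macro series and inspect the
# coefficients of the returns equation; the simulated returns depend mostly
# on their own lag, mirroring the qualitative finding of the abstract.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(1)
n = 200
unexpected_inflation = rng.normal(size=n)
term_spread = rng.normal(size=n)
returns = np.zeros(n)
for t in range(1, n):
    returns[t] = (0.6 * returns[t - 1]
                  + 0.1 * unexpected_inflation[t - 1]
                  + rng.normal(scale=0.5))

data = pd.DataFrame({"returns": returns,
                     "unexpected_inflation": unexpected_inflation,
                     "term_spread": term_spread})
results = VAR(data).fit(maxlags=2)
print(results.params["returns"])   # coefficients in the returns equation
```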
Abstract:
In this article, we investigate how the choice of the attenuation factor in an extended version of Katz centrality influences the centrality of the nodes in evolving communication networks. For given snapshots of a network, observed over a period of time, recently developed communicability indices aim to identify the best broadcasters and listeners (receivers) in the network. Here we explore the attenuation factor constraint, in relation to the spectral radius (the largest eigenvalue) of the network at any point in time, and its computation in the case of large networks. We compare three different communicability measures: standard, exponential, and relaxed (where the spectral radius bound on the attenuation factor is relaxed and the adjacency matrix is normalised, in order to maintain the convergence of the measure). Furthermore, using a vitality-based measure of both standard and relaxed communicability indices, we look at ways of establishing the most important individuals for broadcasting and receiving messages related to community bridging roles. We compare those measures with the scores produced by an iterative version of the PageRank algorithm and illustrate our findings with three examples of real-life evolving networks: the MIT reality mining data set, consisting of daily communications between 106 individuals over the period of one year; a UK Twitter mentions network, constructed from direct tweets between 12.4k individuals during one week; and a subset of the Enron email data set.
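As a rough sketch of the constraint the abstract describes, the resolvent form of Katz centrality converges only when the attenuation factor is below the reciprocal of the spectral radius. A minimal Python illustration, where the toy adjacency matrix and the choice alpha = 0.85/rho(A) are assumptions of ours rather than values from the article:

```python
# Katz centrality with an attenuation factor tied to the spectral radius.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

rho = max(abs(np.linalg.eigvals(A)))    # spectral radius: largest |eigenvalue|
alpha = 0.85 / rho                      # must satisfy alpha < 1/rho to converge

# Katz resolvent (I - alpha*A)^(-1); row sums rank broadcasters, column sums
# rank receivers (they differ only when A is directed, i.e. non-symmetric).
Q = np.linalg.inv(np.eye(A.shape[0]) - alpha * A)
broadcast = Q.sum(axis=1)
receive = Q.sum(axis=0)
print(broadcast, receive)
```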
Abstract:
The present study aims to contribute to an understanding of the complexity of lobbying activities within the accounting standard-setting process in the UK. The paper reports detailed content analysis of submission letters to four related exposure drafts. These preceded two accounting standards that set out the concept of control used to determine the scope of consolidation in the UK, except for reporting under international standards. Regulation on the concept of control provides rich patterns of lobbying behaviour due to its controversial nature and its significance to financial reporting. Our examination is conducted by dividing lobbyists into two categories, corporate and non-corporate, which are hypothesised (and demonstrated) to lobby differently. In order to test the significance of these differences we apply ANOVA techniques and univariate regression analysis. Corporate respondents are found to devote more attention to issues of specific applicability of the concept of control, whereas non-corporate respondents tend to devote more attention to issues of general applicability of this concept. A strong association between the issues raised by corporate respondents and their line of business is revealed. Both categories of lobbyists are found to advance conceptually-based arguments more often than economic consequences-based or combined arguments. However, when economic consequences-based arguments are used, they come exclusively from the corporate category of respondents.
Abstract:
Using monthly time-series data for 1999–2013, the paper shows that markets for agricultural commodities provide a yardstick for real purchasing power, and thus a reference point for the real value of fiat currencies. The daily need for each adult to consume about 2800 food calories is universal; data from FAO food balance sheets confirm that the world basket of food consumed daily is non-volatile in comparison with the volatility of currency exchange rates, and so the replacement cost of food consumed provides a consistent indicator of economic value. Food commodities are storable for short periods, but ultimately perishable, and this exerts continual pressure for markets to clear in the short term; moreover, food calories can be obtained from a very large range of foodstuffs, and so most households are able to use arbitrage to select a near-optimal weighting of quantities purchased. The paper proposes an original method to enable a standard of value to be established, definable in physical units on the basis of actual worldwide consumption of food goods, with an illustration of the method.
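A toy numerical illustration of the basket idea (all goods, quantities and prices below are invented for the sketch; the paper works from FAO food balance sheet data):

```python
# Price a fixed 2800-kcal daily food basket in two currencies and use its
# replacement cost as the yardstick for their real exchange rate.
calories = {"wheat": 1400, "rice": 900, "oil": 500}        # kcal/day per good
kcal_per_unit = {"wheat": 3400, "rice": 3600, "oil": 8800}  # kcal per kg/litre
price_a = {"wheat": 0.40, "rice": 0.90, "oil": 1.60}        # currency A per unit
price_b = {"wheat": 3.20, "rice": 7.50, "oil": 13.0}        # currency B per unit

def basket_cost(prices):
    # cost of replacing one day's calories at the given prices
    return sum(prices[g] * calories[g] / kcal_per_unit[g] for g in calories)

# Implied food-parity exchange rate: units of B per unit of A.
print(basket_cost(price_b) / basket_cost(price_a))
```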
Abstract:
The role of Distribution Network Operators (DNOs) is becoming more difficult as electric vehicles and electric heating penetrate the network, increasing demand. As a result, it becomes harder for the distribution network's infrastructure to remain within its operating constraints. Energy storage is a potential alternative to conventional network reinforcement such as upgrading cables and transformers. The research presented in this paper shows that, due to the volatile nature of the LV network, the control approach used for energy storage has a significant impact on performance. This paper presents and compares control methodologies for energy storage where the objective is to achieve the greatest possible peak demand reduction across the day from a pre-specified storage device. The results presented show the benefits and detriments of specific types of control on a storage device connected to a single phase of an LV network, using aggregated demand profiles based on real smart meter data from individual homes. The research demonstrates an important relationship between how predictable an aggregation is and the control methodology best suited to achieving the objective.
Abstract:
Reinforcing the Low Voltage (LV) distribution network will become essential to ensure it remains within its operating constraints as demand on the network increases. The deployment of energy storage in the distribution network provides an alternative to conventional reinforcement. This paper presents a control methodology for energy storage to reduce peak demand in a distribution network based on day-ahead demand forecasts and historical demand data. The control methodology pre-processes the forecast data prior to a planning phase to build in resilience to the inevitable errors between the forecasted and actual demand. The algorithm uses no real-time adjustment and so has an economic advantage over traditional storage control algorithms. Results show that peak demand on a single phase of a feeder can be reduced even when there are differences between the forecasted and the actual demand. In particular, results are presented that demonstrate that, when the algorithm is applied to a large number of single-phase demand aggregations, it is possible to identify which of these aggregations are the most suitable candidates for the control methodology.
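As a hedged illustration of forecast-based peak shaving (not the paper's algorithm, which additionally pre-processes the forecast for resilience), one can choose a shaving threshold so that the forecast energy above it matches the storage capacity; the demand profile, capacity and half-hourly resolution below are invented, and a real schedule would also respect power-rating and efficiency limits:

```python
# Pick a shaving threshold by bisection so that the forecast energy above the
# threshold equals what the store can deliver, then discharge the excess.
import numpy as np

forecast = np.array([2.0, 2.2, 2.5, 3.8, 4.6, 4.2, 3.1, 2.4])  # kW per period
energy = 2.0                            # kWh available from the store
dt = 0.5                                # hours per period

lo, hi = forecast.min(), forecast.max()
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if np.sum(np.maximum(forecast - mid, 0.0)) * dt > energy:
        lo = mid                        # threshold too low: store can't cover it
    else:
        hi = mid

discharge = np.maximum(forecast - hi, 0.0)  # kW drawn from storage each period
print("threshold %.2f kW, shaved profile:" % hi, forecast - discharge)
```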
Abstract:
This paper discusses ECG signal classification after parametrizing the ECG waveforms in the wavelet domain. Signal decomposition using perfect-reconstruction quadrature mirror filter banks can provide a very parsimonious representation of ECG signals. In the current work, the filter parameters are adjusted by a numerical optimization algorithm in order to minimize a cost function associated with the filter cut-off sharpness. The goal is to achieve a better compromise between frequency selectivity and time resolution at each decomposition level than standard orthogonal filter banks such as those of the Daubechies and Coiflet families. Our aim is to optimally decompose the signals in the wavelet domain so that they can subsequently be used as inputs for training a neural network classifier.
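For context, a minimal sketch of the decomposition step the abstract builds on, using PyWavelets with a standard orthogonal Daubechies filter bank; the paper's numerically optimised QMF coefficients are not available here, so the signal, sampling rate and wavelet choice are all illustrative assumptions:

```python
# Multilevel wavelet decomposition of an ECG-like signal with a standard
# orthogonal filter bank (Daubechies-4). The concatenated coefficients play
# the role of the parsimonious feature vector fed to a classifier.
import numpy as np
import pywt

fs = 360                                # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
signal = np.sin(2 * np.pi * 1.3 * t)    # stand-in for an ECG waveform

# 4-level decomposition: one approximation band plus four detail bands.
coeffs = pywt.wavedec(signal, 'db4', level=4)
features = np.concatenate(coeffs)
print([c.shape for c in coeffs])
```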
Abstract:
The interpretation of structure in cusp ion dispersions is important for helping to understand the temporal and spatial structure of magnetopause reconnection. “Stepped” and “sawtooth” signatures have been shown to be caused by temporal variations in the reconnection rate under the same physical conditions for different satellite trajectories. The present paper shows that, even for a single satellite path, a change in the amplitude of any reconnection pulses can alter the observed signature and even turn sawtooth into stepped forms and vice versa. On 20 August 1998, the Defense Meteorological Satellite Program (DMSP) craft F-14 crossed the cusp just to the south of Longyearbyen, returning on the following orbit. The two passes by the DMSP F-14 satellite have very similar trajectories, and the open-closed field line boundary (OCB) crossings, as estimated from the SSJ/4 precipitating particle data and Polar UVI images, imply a similarly shaped polar cap, yet the cusp ion dispersion signatures differ substantially. The cusp crossing at 08:54 UT displays a stepped ion dispersion, previously considered typical of a meridional pass, whereas the crossing at 10:38 UT shows a sawtooth-form ion dispersion, previously considered typical of a satellite travelling longitudinally with respect to the OCB. It is shown that this change in the dispersed ion signature is likely to be due to a change in the amplitude of the pulses in the reconnection rate, which gives rise to the stepped signature. Modelling of the low-energy ion cutoff under different conditions has reproduced the forms of signature observed.
Abstract:
The open magnetosphere model of cusp ion injection, acceleration and precipitation is used to predict the dispersion characteristics for fully pulsed magnetic reconnection at a low-latitude magnetopause X-line. The resulting steps, as would be seen by a satellite moving meridionally and normal to the ionospheric projection of the X-line, are compared with those seen by satellites moving longitudinally, along the open/closed boundary. It is shown that two observed cases can be explained by similar magnetosheath and reconnection characteristics, and that the major differences between them are well explained by the different satellite paths through the events. Both cases were observed in association with poleward-moving transient events seen by ground-based radar, as also predicted by the theory. The results show that the reconnection is pulsed but strongly imply that it cannot also be spatially patchy, in the sense of isolated X-lines that are independently and intermittently active. Furthermore, they show that the reconnection pulses responsible for the poleward-moving events and the cusp ion steps must cover at least 3 h of magnetic local time, although propagation of the active reconnection region may mean that it does not extend this far at any one instant of time.
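The stepped signatures discussed here rest on the standard velocity-filter effect; a short worked relation in textbook form (the symbols s and t_o are our notation, not the paper's):

```latex
% Velocity filter: an ion injected when the field line opened at time t_o,
% with field-aligned speed v_\parallel, reaches the ionosphere a distance s
% along the field line only if
v_{\parallel} \ge \frac{s}{t - t_{o}} ,
% so the low-energy cutoff observed at time t is
E_{\min}(t) = \frac{1}{2}\, m_{i} \left(\frac{s}{t - t_{o}}\right)^{2} ,
% and each reconnection pulse resets t_o, stepping E_min discontinuously.
```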
Abstract:
Assessment is made of the effect of the assumed form of the ion velocity distribution function on estimates of three-dimensional ion temperature from one-dimensional observations. Incoherent scatter observations by the EISCAT radar at a variety of aspect angles are used to demonstrate features of ion temperature determination and to study the ion velocity distribution function. One form of the distribution function, which has recently been widely used in the interpretation of EISCAT measurements, is found to be consistent with the data presented here, in that no deviation from a Maxwellian can be detected for observations along the magnetic field line and the ion temperature and its anisotropy are accurately predicted. It is shown that theoretical predictions of the anisotropy by Monte Carlo computations are very accurate, the observed value being greater by only a few percent. It is also demonstrated for the case studied that errors of up to 93% are introduced into the ion temperature estimate if the anisotropy is neglected. Observations at an aspect angle of 54.7°, which are not subject to this error, have a much smaller uncertainty (less than 1%) due to the adopted form of the distribution of line-of-sight velocity.
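The 54.7° figure is the well-known "magic angle" for a bi-Maxwellian distribution, at which the line-of-sight temperature is independent of the anisotropy. A short worked equation, standard theory rather than a derivation reproduced from the paper:

```latex
% Line-of-sight ion temperature of a bi-Maxwellian at aspect angle \theta to B:
T_{\mathrm{los}} = T_{\parallel}\cos^{2}\theta + T_{\perp}\sin^{2}\theta .
% At \theta = 54.7^{\circ}, \cos^{2}\theta = 1/3, so
T_{\mathrm{los}} = \tfrac{1}{3}\left(T_{\parallel} + 2T_{\perp}\right) = T_{i},
% the three-dimensional average temperature, whatever the anisotropy
% T_{\perp}/T_{\parallel}; hence observations at this angle are insensitive
% to the assumed anisotropy of the distribution.
```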