65 results for Centrality measures
Abstract:
Using UK equity index data, this paper considers the impact of news on time-varying measures of beta, the usual measure of undiversifiable risk. The empirical model implies that beta depends on news about the market and news about the sector. The asymmetric response of beta to news about the market is consistent across all sectors considered. Recent research is divided as to whether abnormalities in equity returns arise from changes in expected returns in an efficient market or from over-reactions to new information. The evidence suggests that such abnormalities may be due to changes in expected returns caused by time variation and asymmetry in beta.
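The abstract does not give the estimation details of the empirical model. Purely as a rough illustration of the idea, the sketch below computes a rolling-window beta and then splits the estimate by the sign of market returns as a crude proxy for "good" versus "bad" market news; the function names and the window length are hypothetical, not the paper's method.

```python
import pandas as pd

def rolling_beta(sector: pd.Series, market: pd.Series, window: int = 250) -> pd.Series:
    """Time-varying beta as rolling Cov(sector, market) / Var(market)."""
    cov = sector.rolling(window).cov(market)
    var = market.rolling(window).var()
    return cov / var

def asymmetric_betas(sector: pd.Series, market: pd.Series, window: int = 250):
    """Crude asymmetry check: estimate separate betas over days with
    positive and negative market returns ('good' vs 'bad' market news)."""
    up = market > 0
    beta_up = rolling_beta(sector[up], market[up], window)
    beta_down = rolling_beta(sector[~up], market[~up], window)
    return beta_up, beta_down
```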
Abstract:
Classical measures of network connectivity are the number of disjoint paths between a pair of nodes and the size of a minimum cut. For standard graphs, these measures can be computed efficiently using network flow techniques. However, in the Internet at the level of autonomous systems (ASs), referred to as the AS-level Internet, routing policies impose restrictions on the paths that traffic can take in the network. These restrictions can be captured by the valley-free path model, which assumes a special directed graph model in which edge types represent relationships between ASs. We consider the adaptation of the classical connectivity measures to the valley-free path model, where they are NP-hard to compute. Our first main contribution is a set of algorithms for computing disjoint paths and minimum cuts in the valley-free path model. These algorithms are useful for ASs that want to evaluate different options for selecting upstream providers to improve the robustness of their connection to the Internet. Our second main contribution is an experimental evaluation of our algorithms on four types of directed graph models of the AS-level Internet produced by different inference algorithms. Most importantly, the evaluation shows that our algorithms can compute optimal solutions to realistically sized instances of the connectivity problems in the valley-free path model in reasonable time. Furthermore, our experimental results provide information about the characteristics of the directed graph models of the AS-level Internet produced by different inference algorithms. It turns out that (i) we can quantify the difference between the undirected AS-level topology and the directed graph models with respect to fundamental connectivity measures, and (ii) the different inference algorithms yield topologies that are similar with respect to connectivity but differ with respect to the types of paths that exist between pairs of ASs.
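For readers unfamiliar with the valley-free path model, the sketch below checks the defining property of a valid path: edges climb from customers to providers, cross at most one peering link, and then only descend. The edge labels and function name are illustrative, not taken from the paper.

```python
def is_valley_free(edge_types):
    """Check the valley-free property of a path given as a sequence of
    edge labels: 'up' (customer->provider), 'peer', 'down' (provider->customer).
    A valid path matches up* (peer)? down* -- climb first, cross at most
    one peering link, then descend, and never climb again after that."""
    phase = 0  # 0 = climbing, 1 = just crossed a peer link, 2 = descending
    for t in edge_types:
        if t == 'up':
            if phase > 0:
                return False          # climbing after peer/down: a valley
        elif t == 'peer':
            if phase > 0:
                return False          # at most one peer edge, before any descent
            phase = 1
        elif t == 'down':
            phase = 2
        else:
            raise ValueError(f"unknown edge type: {t}")
    return True

# Example: climb to a provider, cross one peering link, then descend.
assert is_valley_free(['up', 'peer', 'down', 'down'])
assert not is_valley_free(['down', 'up'])  # descending then climbing: a valley
```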
Abstract:
Experiences from the Mitigation Options for Phosphorus and Sediment (MOPS) projects, which aim to determine the effectiveness of measures to reduce pollutant loading from agricultural land to surface waters, have been used to contribute to the findings of a recent paper (Kay et al., 2009, Agricultural Systems, 99, 67–75), which reviewed the efficacy of contemporary agricultural stewardship measures for ameliorating the water pollution problems of key concern to the UK water industry. MOPS1 is a recently completed three-year research project on three different soil types in the UK, which focused on mitigation options for winter cereals. MOPS1 demonstrated that tramlines can be the major pathway for sediment and nutrient transfer from arable hillslopes. Although minimum tillage, crop residue incorporation, contour cultivation, and beetle banks also have the potential to be cost-effective mitigation options, tramline management is one of the most promising treatments for mitigating diffuse pollution losses: it reduced sediment and nutrient losses by 72–99% in four out of five site-years trialled. Using information from the MOPS projects, this paper builds on the findings of Kay et al. to provide an updated picture of the evidence available and the immediate research needs in this area.
Abstract:
“Point and click” interactions remain one of the key features of graphical user interfaces (GUIs). People with motion impairments, however, often have difficulty with accurate control of standard pointing devices. This paper discusses work that aims to reveal the nature of these difficulties through analyses that consider the cursor’s path of movement. A range of cursor measures was applied, and a number of them were found to be significant in capturing the differences between able-bodied users and motion-impaired users, as well as the differences between a haptic force feedback condition and a control condition. The cursor measures found in the literature, however, do not make up a comprehensive list, but provide a starting point for analysing cursor movements more completely. Six new cursor characteristics for motion-impaired users are introduced to capture aspects of cursor movement not covered by those already proposed.
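The paper's six new characteristics are not listed in the abstract. As a generic illustration of what a path-based cursor measure looks like, the sketch below computes two common examples, path efficiency and horizontal direction reversals, from a recorded cursor trajectory; neither is claimed to be one of the paper's measures.

```python
import numpy as np

def path_measures(xs, ys):
    """Two generic cursor-path measures (illustrative only):
    - efficiency: straight-line start-to-end distance / actual path length
      (1.0 for a perfectly direct movement, lower for meandering paths)
    - x_reversals: sign changes in horizontal velocity."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    steps = np.hypot(np.diff(xs), np.diff(ys))
    path_len = steps.sum()
    straight = np.hypot(xs[-1] - xs[0], ys[-1] - ys[0])
    efficiency = straight / path_len if path_len > 0 else 1.0
    vx = np.diff(xs)
    reversals = int(np.sum(np.sign(vx[:-1]) * np.sign(vx[1:]) < 0))
    return {"efficiency": efficiency, "x_reversals": reversals}
```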
Abstract:
People with motion impairments often have difficulty with accurate control of standard pointing devices for computer input. The nature of the difficulties may vary, so to be most effective, methods of assisting cursor control must be suited to each user's needs. The work presented here involves a study of cursor trajectories as a means of assessing the requirements of motion-impaired computer users. A new cursor characteristic is proposed that attempts to capture difficulties with moving the cursor in a smooth trajectory. A study was conducted to see whether haptic tunnels could improve performance in "point and click" tasks. Results indicate that the tunnels reduced times to target for those users identified by the new characteristic as having the most difficulty moving in a smooth trajectory. This suggests that cursor characteristics can be used to assess a user's cursor control capabilities, which can then inform the choice of appropriate methods of assistance.
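The abstract does not define the proposed smoothness characteristic. As a stand-in, the sketch below uses mean squared jerk, a standard smoothness proxy from the motor-control literature; the sampling interval dt is a hypothetical parameter.

```python
import numpy as np

def mean_squared_jerk(xs, ys, dt=0.01):
    """A standard smoothness proxy (not necessarily the paper's measure):
    mean squared jerk of the cursor trajectory, where jerk is the third
    finite difference of position. Higher values mean a less smooth path."""
    pos = np.column_stack([xs, ys]).astype(float)
    jerk = np.diff(pos, n=3, axis=0) / dt ** 3
    return float(np.mean(np.sum(jerk ** 2, axis=1)))
```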
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems that have a singular measure. Using the block maxima approach described in Faranda et al. [2011], we show numerically that the Extreme Value distribution for these maps can be associated with the Generalised Extreme Value family, where the parameters scale with the information dimension. The numerical analyses are performed on a few low-dimensional maps. For the middle-third Cantor set and the Sierpinski triangle obtained using Iterated Function Systems, the experimental parameters show very good agreement with the theoretical values. For strange attractors such as the Lozi and Hénon maps, a slower convergence to the Generalised Extreme Value distribution is observed. Even with large statistics, the observed convergence is slower than for maps that have an absolutely continuous invariant measure. Nevertheless, within the computed uncertainty range, the results are in good agreement with the theoretical estimates.
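A minimal numerical sketch of the block maxima approach, under stated assumptions: the orbit is generated on the middle-third Cantor set with its Iterated Function System, the observable is the distance observable g(x) = -log|x - x0| commonly used in this literature, and the Generalised Extreme Value fit uses SciPy. Orbit length, block size, and the reference point x0 are arbitrary choices, not the paper's settings.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)

# Orbit on the middle-third Cantor set via its Iterated Function System:
# at each step apply x -> x/3 or x -> x/3 + 2/3, chosen at random.
n = 500_000
branch = rng.integers(0, 2, n)
x = np.empty(n)
x[0] = rng.random()
for i in range(1, n):
    x[i] = x[i - 1] / 3 + (2 / 3) * branch[i]

# Block maxima of g(x) = -log|x - x0|, which peaks whenever the orbit
# comes close to the reference point x0.
x0 = 1 / 3  # a point of the Cantor set, chosen arbitrarily
g = -np.log(np.abs(x - x0) + 1e-300)
block = 5_000
maxima = g[: (n // block) * block].reshape(-1, block).max(axis=1)

# Fit the Generalised Extreme Value family (SciPy's shape c equals -xi).
# For this observable the scale parameter is expected to behave like 1/D,
# with D the information dimension (log 2 / log 3 for the Cantor set).
c, loc, scale = genextreme.fit(maxima)
print(f"xi = {-c:.3f}, loc = {loc:.3f}, scale = {scale:.3f} "
      f"(1/D = {np.log(3) / np.log(2):.3f})")
```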
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models increase and more and more non-linear observation operators are used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. The article concludes with a discussion of the appropriateness of these measures of observation impact for different situations.
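The paper's derivation uses a Gaussian-mixture prior; the sketch below only reproduces the scalar Gaussian baseline, where all three impact measures have closed forms. In this baseline the sensitivity (the Kalman gain) and mutual information are constant in the observed value, while relative entropy is not; the abstract's point is that a mixture prior makes the sensitivity value-dependent as well.

```python
import numpy as np

def gaussian_impact(mu_b, var_b, y, var_o):
    """Closed-form observation-impact measures for a scalar Gaussian prior
    N(mu_b, var_b) and an observation y = x + eps with eps ~ N(0, var_o):
    sensitivity of the posterior mean to y (the Kalman gain), mutual
    information, and relative entropy KL(posterior || prior)."""
    k = var_b / (var_b + var_o)       # Kalman gain = d(posterior mean)/dy
    mu_a = mu_b + k * (y - mu_b)      # posterior mean
    var_a = (1 - k) * var_b           # posterior variance
    mi = 0.5 * np.log(var_b / var_a)  # independent of the value of y
    kl = 0.5 * (np.log(var_b / var_a) + var_a / var_b
                + (mu_a - mu_b) ** 2 / var_b - 1.0)  # depends on y
    return {"sensitivity": k, "mutual_information": mi, "relative_entropy": kl}

print(gaussian_impact(mu_b=0.0, var_b=2.0, y=1.5, var_o=1.0))
```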
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, and this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n=41) at two different proficiency levels and one group of native speakers of French (n=23). The importance of careful lemmatization in studies of lexical diversity involving highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed here. The paper concludes that limiting the range of text lengths, or even keeping text length constant, is the safest option in analysing lexical diversity.
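A simplified sketch of MTLD, one of the two measures under study, is given below: the text is scanned for maximal token runs whose type-token ratio stays above 0.72, and the mean run length is reported. This follows the published description only loosely; the partial-factor handling in particular is a simplification.

```python
def mtld(tokens, ttr_threshold=0.72):
    """Simplified MTLD (Measure of Textual Lexical Diversity), after
    McCarthy and Jarvis (2010): mean length of sequential token runs that
    keep the type-token ratio (TTR) above the threshold. Tokens should be
    lemmatized first for highly inflected languages such as French."""
    def one_pass(seq):
        factors, types, count = 0.0, set(), 0
        for tok in seq:
            count += 1
            types.add(tok)
            if len(types) / count <= ttr_threshold:
                factors += 1.0          # run's TTR fell to the cut-off
                types, count = set(), 0
        if count > 0:                   # partial factor for the leftover run
            ttr = len(types) / count
            factors += (1.0 - ttr) / (1.0 - ttr_threshold)
        return len(seq) / factors if factors > 0 else float('inf')
    # Average a forward and a backward pass, as in the original measure.
    return (one_pass(tokens) + one_pass(list(reversed(tokens)))) / 2.0
```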
Abstract:
For decades, regulators in the energy sector have focused on facilitating the maximisation of energy supply to meet demand through liberalisation and the removal of market barriers. The debate on climate change has emphasised a new type of risk in the balance between energy demand and supply: excessively high energy demand brings about significantly negative environmental and economic impacts. This is because, if a vast number of users are consuming electricity at the same time, energy suppliers have to activate older, dirtier power plants with higher greenhouse gas emissions and higher system costs. The creation of a Europe-wide electricity market requires a systematic investigation into the risk of aggregate peak demand. This paper draws on the e-Living Time-Use Survey database to assess the risk of aggregate peak residential electricity demand for European energy markets. Findings highlight in which countries and for which activities the risk of aggregate peak demand is greatest. The discussion highlights which approaches energy regulators have started considering to persuade users of the risks of consuming too much energy during peak times. These include 'nudging' approaches such as the roll-out of smart meters, incentives for shifting the timing of energy consumption, differentiated time-of-use tariffs, regulatory financial incentives and consumption data sharing at the community level.
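Purely as an illustration of the aggregate-peak notion (synthetic profiles, not the e-Living data), the sketch below sums household load curves over half-hourly slots, locates the system peak, and shows how shifting a fraction of peak-time consumption, as time-of-use tariffs aim to do, lowers it. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n_households, slots = 1000, 48  # half-hourly slots over one day

# Synthetic household loads with a shared evening activity peak.
base = rng.gamma(shape=2.0, scale=0.15, size=(n_households, slots))
base[:, 36:40] += 1.0  # evening slots (18:00-20:00) draw extra load

# Aggregate peak demand: the maximum, over slots, of the summed load.
aggregate = base.sum(axis=0)
print(f"aggregate peak: {aggregate.max():.0f} kW in slot {aggregate.argmax()}")

# A simple time-of-use response: move 20% of evening load to night slots.
shifted = aggregate.copy()
moved = 0.2 * shifted[36:40]
shifted[36:40] -= moved
shifted[2:6] += moved
print(f"peak after shifting: {shifted.max():.0f} kW")
```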
Abstract:
Many different performance measures have been developed to evaluate field predictions in meteorology. However, a researcher or practitioner encountering a new or unfamiliar measure may have difficulty in interpreting its results, which may lead them to avoid new measures and rely on those that are familiar. In the context of evaluating forecasts of extreme events for hydrological applications, this article aims to promote the use of a range of performance measures. Some of the main types of performance measures are introduced in order to demonstrate a six-step approach to tackling a new measure. Using the example of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble precipitation predictions for the Danube floods of July and August 2002, it is shown how to apply new performance measures with this approach and how to choose between different performance measures based on their suitability for the task at hand. Copyright © 2008 Royal Meteorological Society
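The six-step approach itself is not reproduced in the abstract. As an example of one widely used probabilistic performance measure to which such an approach might be applied, the sketch below computes the Brier score of ensemble precipitation forecasts for a threshold-exceedance event; the toy numbers are invented.

```python
import numpy as np

def brier_score(ensemble, observed, threshold):
    """Brier score for the event 'precipitation exceeds threshold': the mean
    squared difference between the ensemble-derived event probability and
    the binary observed outcome (0 is a perfect score)."""
    ensemble = np.asarray(ensemble, float)      # shape (n_cases, n_members)
    observed = np.asarray(observed, float)      # shape (n_cases,)
    prob = (ensemble > threshold).mean(axis=1)  # forecast probability per case
    outcome = (observed > threshold).astype(float)
    return float(np.mean((prob - outcome) ** 2))

# Toy example: 3 forecast cases, 5 ensemble members each (mm of rain).
ens = [[2, 11, 14, 8, 20], [0, 1, 3, 2, 0], [25, 30, 12, 18, 22]]
obs = [12, 0, 28]
print(brier_score(ens, obs, threshold=10.0))  # ~0.053
```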
Abstract:
Background: Exposure to solar ultraviolet-B (UV-B) radiation is a major source of vitamin D3. Chemistry climate models project decreases in ground-level solar erythemal UV over the current century. It is unclear what impact this will have on vitamin D status at the population level. The purpose of this study was to measure the association between ground-level solar UV-B and serum concentrations of 25-hydroxyvitamin D (25(OH)D) using a secondary analysis of the 2007 to 2009 Canadian Health Measures Survey (CHMS). Methods: Blood samples collected from individuals aged 12 to 79 years sampled across Canada were analyzed for 25(OH)D (n=4,398). Solar UV-B irradiance was calculated for the 15 CHMS collection sites using the Tropospheric Ultraviolet and Visible Radiation Model. Multivariable linear regression was used to evaluate the association between 25(OH)D and solar UV-B adjusted for other predictors and to explore effect modification. Results: Cumulative solar UV-B irradiance averaged over 91 days (91-day UV-B) prior to blood draw correlated significantly with 25(OH)D. Independent of other predictors, a 1 kJ/m² increase in 91-day UV-B was associated with a significant 0.5 nmol/L (95% CI 0.3–0.8) increase in mean 25(OH)D (P = 0.0001). The relationship was stronger among younger individuals and those spending more time outdoors. Based on current projections of decreases in ground-level solar UV-B, we predict less than a 1 nmol/L decrease in mean 25(OH)D for the population. Conclusions: In Canada, cumulative exposure to ambient solar UV-B has a small but significant association with 25(OH)D concentrations. Public health messages to improve vitamin D status should target safe sun exposure with sunscreen use, and also enhanced dietary and supplemental intake and maintenance of a healthy body weight.
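A minimal sketch of the kind of multivariable linear regression described, run on synthetic data rather than the CHMS sample: the generating slope of 0.5 nmol/L per kJ/m² mirrors the reported estimate, but the sample size, covariates, and noise level are invented for illustration.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic data only, not CHMS: serum 25(OH)D regressed on cumulative
# 91-day UV-B plus one covariate (age), mirroring the model described.
rng = np.random.default_rng(42)
n = 500
uvb_91d = rng.uniform(0, 60, n)   # cumulative 91-day UV-B, kJ/m^2
age = rng.uniform(12, 79, n)
vitd = 55 + 0.5 * uvb_91d - 0.05 * age + rng.normal(0, 15, n)

X = sm.add_constant(np.column_stack([uvb_91d, age]))
model = sm.OLS(vitd, X).fit()
print(model.params)      # UV-B slope recovers roughly 0.5 nmol/L per kJ/m^2
print(model.conf_int())  # 95% confidence intervals
```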