Abstract:
“Point and click” interactions remain one of the key features of graphical user interfaces (GUIs). People with motion-impairments, however, can often have difficulty with accurate control of standard pointing devices. This paper discusses work that aims to reveal the nature of these difficulties through analyses that consider the cursor’s path of movement. A range of cursor measures was applied, and a number of them were found to be significant in capturing the differences between able-bodied users and motion-impaired users, as well as the differences between a haptic force feedback condition and a control condition. The cursor measures found in the literature, however, do not make up a comprehensive list, but provide a starting point for analysing cursor movements more completely. Six new cursor characteristics for motion-impaired users are introduced to capture aspects of cursor movement different from those already proposed.
Abstract:
People with motion-impairments can often have difficulty with accurate control of standard pointing devices for computer input. The nature of the difficulties may vary, so to be most effective, methods of assisting cursor control must be suited to each user's needs. The work presented here involves a study of cursor trajectories as a means of assessing the requirements of motion-impaired computer users. A new cursor characteristic is proposed that attempts to capture difficulties with moving the cursor in a smooth trajectory. A study was conducted to see if haptic tunnels could improve performance in "point and click" tasks. Results indicate that the tunnels reduced times to target for those users identified by the new characteristic as having the most difficulty moving in a smooth trajectory. This suggests that cursor characteristics have potential applications in performing assessments of a user's cursor control capabilities which can then be used to determine appropriate methods of assistance.
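The abstract does not define the proposed cursor characteristic, so the following is only a hedged illustration of the general idea of a trajectory-based cursor measure: the ratio of actual cursor path length to the straight-line distance between start and end of a movement (1.0 means a perfectly direct trajectory; larger values indicate wandering or jerky movement). The sample coordinates are hypothetical.

```python
# Illustrative trajectory-based cursor measure (not the paper's own metric):
# path directness = path length / straight-line distance.
import math

def path_directness(points):
    """points: list of (x, y) cursor samples from movement start to end."""
    path = sum(math.dist(p, q) for p, q in zip(points, points[1:]))
    straight = math.dist(points[0], points[-1])
    return path / straight

smooth = [(0, 0), (50, 10), (100, 20)]            # near-direct movement
jerky = [(0, 0), (40, 30), (30, -20), (100, 20)]  # wandering movement
print(f"directness (smooth): {path_directness(smooth):.2f}")
print(f"directness (jerky):  {path_directness(jerky):.2f}")
```

A measure of this kind could, as the abstract suggests, flag users with the least smooth trajectories as candidates for assistance such as haptic tunnels.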
Abstract:
In this paper we perform an analytical and numerical study of Extreme Value distributions in discrete dynamical systems that have a singular measure. Using the block maxima approach described in Faranda et al. [2011], we show numerically that the Extreme Value distribution for these maps can be associated with the Generalised Extreme Value family, where the parameters scale with the information dimension. The numerical analysis is performed on a few low-dimensional maps. For the middle-third Cantor set and the Sierpinski triangle obtained using Iterated Function Systems, experimental parameters show very good agreement with the theoretical values. For strange attractors such as the Lozi and Hénon maps, a slower convergence to the Generalised Extreme Value distribution is observed. Even in the presence of large statistics, the observed convergence is slower than for maps that have an absolutely continuous invariant measure. Nevertheless, within the computed uncertainty range, the results are in good agreement with the theoretical estimates.
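The block maxima procedure referenced above can be sketched as follows. This is a generic illustration only: it uses the logistic map at r = 4 as a stand-in (the paper studies IFS fractals and the Lozi and Hénon maps), and fitting via `scipy.stats.genextreme` is an assumption about tooling, not the paper's own numerical procedure.

```python
# Block-maxima sketch: iterate a map, form an observable, take maxima over
# fixed-size blocks, and fit the Generalised Extreme Value family to them.
import numpy as np
from scipy.stats import genextreme

def logistic_orbit(x0, n):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        x = 4.0 * x * (1.0 - x)  # logistic map, fully chaotic regime
        xs[i] = x
    return xs

# Observable g(x) = -log|x - x*| around a reference point x*, a common
# choice in dynamical-systems extreme value theory.
orbit = logistic_orbit(0.3, 200_000)
obs = -np.log(np.abs(orbit - 0.7) + 1e-12)

# Block maxima: split the series into blocks and keep the max of each.
block = 1000
maxima = obs[: len(obs) // block * block].reshape(-1, block).max(axis=1)

# Fit the GEV family to the block maxima.
shape, loc, scale = genextreme.fit(maxima)
print(f"GEV fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")
```

In the paper's setting, the fitted parameters are then compared against values predicted from the information dimension of the invariant measure.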
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
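The three observation-impact measures named above can be made concrete in the simplest scalar linear-Gaussian case (identity observation operator, hypothetical numbers — a sketch, not the paper's Gaussian-mixture derivation). It reproduces the stated contrast: relative entropy and the posterior-mean sensitivity are tied to the update, while mutual information does not depend on the observed value.

```python
# Scalar Gaussian sketch of three observation-impact measures:
# sensitivity of the posterior mean, mutual information, relative entropy.
import math

def gaussian_posterior(mu_b, var_b, y, var_o):
    """Kalman update for a scalar state with H = 1."""
    k = var_b / (var_b + var_o)        # gain = sensitivity dE[x|y]/dy
    mu_a = mu_b + k * (y - mu_b)
    var_a = (1.0 - k) * var_b
    return mu_a, var_a, k

def relative_entropy(mu_a, var_a, mu_b, var_b):
    """KL divergence of the Gaussian posterior from the Gaussian prior."""
    return 0.5 * (math.log(var_b / var_a) + var_a / var_b
                  + (mu_a - mu_b) ** 2 / var_b - 1.0)

mu_b, var_b, var_o = 0.0, 2.0, 1.0
for y in (0.5, 3.0):
    mu_a, var_a, k = gaussian_posterior(mu_b, var_b, y, var_o)
    mi = 0.5 * math.log(var_b / var_a)   # mutual information, Gaussian case
    re = relative_entropy(mu_a, var_a, mu_b, var_b)
    print(f"y={y}: sensitivity={k:.3f}, MI={mi:.3f}, rel. entropy={re:.3f}")
```

Running this shows identical mutual information for both observation values but a much larger relative entropy for the more surprising observation, mirroring the behaviour described in the abstract.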
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n=41) of two different proficiency levels and one group of native speakers of French (n=23). The importance of careful lemmatization in studies of lexical diversity which involve highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed in this paper. The paper concludes that limiting the range of text lengths or even keeping text length constant is the safest option in analysing lexical diversity.
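To make the kind of measure under discussion concrete, here is a simplified, forward-pass-only sketch of MTLD (Measure of Textual Lexical Diversity). The 0.72 type-token-ratio threshold follows McCarthy and Jarvis; the full measure averages forward and backward passes and includes a partial-factor correction, both omitted here for brevity, and the toy French sentence is invented.

```python
# Simplified MTLD: count "factors" -- stretches of text over which the
# running type-token ratio stays above a threshold -- then divide the
# total token count by the number of factors.
def mtld_forward(tokens, threshold=0.72):
    factors = 0
    types = set()
    count = 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1          # TTR dipped: close this factor and reset
            types.clear()
            count = 0
    if factors == 0:
        return float(len(tokens))  # TTR never dipped below the threshold
    return len(tokens) / factors

text = ("le chat noir dort et le chien brun court et le chat revient "
        "puis le chien dort").split()
print(f"MTLD (forward pass only) ~= {mtld_forward(text):.1f}")
```

Because MTLD is built from threshold crossings rather than a raw type-token ratio, it is less sensitive to text length — though, as the abstract notes, dependence on segment size is not eliminated.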
Abstract:
For decades regulators in the energy sector have focused on facilitating the maximisation of energy supply in order to meet demand through liberalisation and removal of market barriers. The debate on climate change has emphasised a new type of risk in the balance between energy demand and supply: excessively high energy demand brings about significantly negative environmental and economic impacts. This is because if a vast number of users is consuming electricity at the same time, energy suppliers have to activate dirty old power plants with higher greenhouse gas emissions and higher system costs. The creation of a Europe-wide electricity market requires a systematic investigation into the risk of aggregate peak demand. This paper draws on the e-Living Time-Use Survey database to assess the risk of aggregate peak residential electricity demand for European energy markets. Findings highlight in which countries and for what activities the risk of aggregate peak demand is greater. The discussion highlights which approaches energy regulators have started considering to convince users about the risks of consuming too much energy during peak times. These include ‘nudging’ approaches such as the roll-out of smart meters, incentives for shifting the timing of energy consumption, differentiated time-of-use tariffs, regulatory financial incentives and consumption data sharing at the community level.
Abstract:
Many different performance measures have been developed to evaluate field predictions in meteorology. However, a researcher or practitioner encountering a new or unfamiliar measure may have difficulty interpreting its results, which may lead them to avoid new measures and rely on those that are familiar. In the context of evaluating forecasts of extreme events for hydrological applications, this article aims to promote the use of a range of performance measures. Several types of performance measures are introduced in order to demonstrate a six-step approach for tackling a new measure. Using the example of European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble precipitation predictions for the Danube floods of July and August 2002, it is shown how to apply new performance measures with this approach and how to choose between different performance measures based on their suitability for the task at hand. Copyright © 2008 Royal Meteorological Society
Abstract:
Background: Exposure to solar ultraviolet-B (UV-B) radiation is a major source of vitamin D3. Chemistry climate models project decreases in ground-level solar erythemal UV over the current century. It is unclear what impact this will have on vitamin D status at the population level. The purpose of this study was to measure the association between ground-level solar UV-B and serum concentrations of 25-hydroxyvitamin D (25(OH)D) using a secondary analysis of the 2007 to 2009 Canadian Health Measures Survey (CHMS). Methods: Blood samples collected from individuals aged 12 to 79 years sampled across Canada were analyzed for 25(OH)D (n=4,398). Solar UV-B irradiance was calculated for the 15 CHMS collection sites using the Tropospheric Ultraviolet and Visible Radiation Model. Multivariable linear regression was used to evaluate the association between 25(OH)D and solar UV-B adjusted for other predictors and to explore effect modification. Results: Cumulative solar UV-B irradiance averaged over 91 days (91-day UV-B) prior to blood draw correlated significantly with 25(OH)D. Independent of other predictors, a 1 kJ/m² increase in 91-day UV-B was associated with a significant 0.5 nmol/L (95% CI 0.3-0.8) increase in mean 25(OH)D (P = 0.0001). The relationship was stronger among younger individuals and those spending more time outdoors. Based on current projections of decreases in ground-level solar UV-B, we predict less than a 1 nmol/L decrease in mean 25(OH)D for the population. Conclusions: In Canada, cumulative exposure to ambient solar UV-B has a small but significant association with 25(OH)D concentrations. Public health messages to improve vitamin D status should target safe sun exposure with sunscreen use, and also enhanced dietary and supplemental intake and maintenance of a healthy body weight.
Abstract:
In Europe, agri-environmental schemes (AES) have been introduced in response to concerns about farmland biodiversity declines. Yet, as AES have delivered variable results, a better understanding of what determines their success or failure is urgently needed. Focusing on pollinating insects, we quantitatively reviewed how environmental factors affect the effectiveness of AES. Our results suggest that the ecological contrast in floral resources created by schemes drives the response of pollinators to AES but that this response is moderated by landscape context and farmland type, with more positive responses in croplands (vs. grasslands) located in simple (vs. cleared or complex) landscapes. These findings inform us how to promote pollinators and associated pollination services in species-poor landscapes. They do not, however, present viable strategies to mitigate loss of threatened or endangered species. This indicates that the objectives and design of AES should distinguish more clearly between biodiversity conservation and delivery of ecosystem services.
Abstract:
We extend recent work that included the effect of pressure forces to derive the precession rate of eccentric accretion discs in cataclysmic variables to the case of double degenerate systems. We find that the logical scaling of the pressure force in such systems results in predictions of unrealistically high primary masses. Using the prototype AM CVn as a calibrator for the magnitude of the effect, we find that there is no scaling that applies consistently to all the systems in the class. We discuss the reasons for the lack of a superhump period to mass ratio relationship analogous to that known for SU UMa systems and suggest that this is because these secondaries do not have a single valued mass-radius relationship. We highlight the unreliability of mass-ratios derived by applying the SU UMa expression to the AM CVn binaries.
Abstract:
The catchment of the River Thames, the principal river system in southern England, provides the main water supply for London but is highly vulnerable to changes in climate, land use and population. The river is eutrophic with significant algal blooms with phosphorus assumed to be the primary chemical indicator of ecosystem health. In the Thames Basin, phosphorus is available from point sources such as wastewater treatment plants and from diffuse sources such as agriculture. In order to predict vulnerability to future change, the integrated catchments model for phosphorus (INCA-P) has been applied to the river basin and used to assess the cost-effectiveness of a range of mitigation and adaptation strategies. It is shown that scenarios of future climate and land-use change will exacerbate the water quality problems, but a range of mitigation measures can improve the situation. A cost-effectiveness study has been undertaken to compare the economic benefits of each mitigation measure and to assess the phosphorus reductions achieved. The most effective strategy is to reduce fertilizer use by 20% together with the treatment of effluent to a high standard. Such measures will reduce the instream phosphorus concentrations to close to the EU Water Framework Directive target for the Thames.
Abstract:
As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F<2>) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
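The binary pattern measure at issue can be computed directly: the Critical Success Index is F = A / (A + B + C), where A is the area flooded in both model and observation, B the overpredicted area, and C the underpredicted area. The toy grids below are hypothetical, chosen to illustrate the overprediction bias the abstract describes.

```python
# Critical Success Index on binary wet/dry grids.
import numpy as np

def critical_success_index(model_wet, obs_wet):
    a = np.sum(model_wet & obs_wet)    # hits: wet in both
    b = np.sum(model_wet & ~obs_wet)   # false alarms: overprediction
    c = np.sum(~model_wet & obs_wet)   # misses: underprediction
    return a / (a + b + c)

obs = np.zeros((10, 10), dtype=bool)
obs[2:8, 2:8] = True                   # observed flood extent: 36 cells

over = np.zeros_like(obs)
over[1:9, 1:9] = True                  # overpredicts by 28 cells
under = np.zeros_like(obs)
under[3:7, 3:7] = True                 # underpredicts by 20 cells

print(f"F (overprediction)  = {critical_success_index(over, obs):.3f}")
print(f"F (underprediction) = {critical_success_index(under, obs):.3f}")
```

Here the overpredicting model misclassifies more cells (28 vs. 20) yet scores higher (0.563 vs. 0.444), because false alarms and misses enter the denominator symmetrically while only the model's wet area can generate hits — a simple instance of the bias noted above.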
Abstract:
As one of the key indicators of a firm’s ability to leverage its resources and capabilities successfully in the international context, export performance has been one of the most extensively studied phenomena. A plethora of studies has been conducted aiming to provide a better understanding of the factors (firm- or environment-specific) and behaviours (e.g., export strategy) that make exporting a successful venture. Following the comprehensive literature review undertaken in this study, the current state of the export performance literature can be summarised as: (i) methodologically fragmented, in that there is a variety of analytical and methodological approaches; (ii) conceptually diverse, in that a large number of determinants have been identified as having a direct or indirect influence on the firm’s export performance, and a large number of indicators have been used to conceptualise and operationalise export performance measures; and (iii) inconclusive, in that the studies have produced inconsistent results on the impact of different determinants on export performance.