45 results for Sensor of electric measures
in CentAUR: Central Archive University of Reading - UK
Abstract:
“Point and click” interactions remain one of the key features of graphical user interfaces (GUIs). People with motion-impairments, however, can often have difficulty with accurate control of standard pointing devices. This paper discusses work that aims to reveal the nature of these difficulties through analyses that consider the cursor’s path of movement. A range of cursor measures was applied, and a number of them were found to be significant in capturing the differences between able-bodied users and motion-impaired users, as well as the differences between a haptic force feedback condition and a control condition. The cursor measures found in the literature, however, do not make up a comprehensive list, but provide a starting point for analysing cursor movements more completely. Six new cursor characteristics for motion-impaired users are introduced to capture aspects of cursor movement different from those already proposed.
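The abstract does not list the individual cursor measures, but a minimal sketch of one common path-based measure (path efficiency: straight-line distance divided by distance actually travelled) illustrates the kind of analysis involved. The function name and choice of measure are illustrative assumptions, not taken from the paper.

```python
import math

def path_efficiency(points):
    """Ratio of the straight-line distance from the first to the last
    cursor sample to the length of the path actually travelled.
    1.0 indicates a perfectly direct movement; lower values indicate
    the larger deviations often produced by motion-impaired users."""
    path_len = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / path_len if path_len else 1.0
```

A three-sided detour around a unit square, for instance, scores 1/3, while any monotone straight-line movement scores 1.0.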
Abstract:
The catchment of the River Thames, the principal river system in southern England, provides the main water supply for London but is highly vulnerable to changes in climate, land use and population. The river is eutrophic with significant algal blooms with phosphorus assumed to be the primary chemical indicator of ecosystem health. In the Thames Basin, phosphorus is available from point sources such as wastewater treatment plants and from diffuse sources such as agriculture. In order to predict vulnerability to future change, the integrated catchments model for phosphorus (INCA-P) has been applied to the river basin and used to assess the cost-effectiveness of a range of mitigation and adaptation strategies. It is shown that scenarios of future climate and land-use change will exacerbate the water quality problems, but a range of mitigation measures can improve the situation. A cost-effectiveness study has been undertaken to compare the economic benefits of each mitigation measure and to assess the phosphorus reductions achieved. The most effective strategy is to reduce fertilizer use by 20% together with the treatment of effluent to a high standard. Such measures will reduce the instream phosphorus concentrations to close to the EU Water Framework Directive target for the Thames.
Abstract:
The effect of the direction of an external electric field on the shear stress of an electrorheological (ER) fluid has been studied by molecular-dynamics simulation. Owing to the formation of inclined chains, the shear stress depends strongly on the direction of the field and can become very large for certain field directions. A theoretical model of the ideal microstructure of ER fluids confirms this result. The ER effect may therefore be greatly enhanced simply by choosing an optimum field direction, without any additional requirements, suggesting a promising route to the practical application of ER fluids.
Abstract:
ABSTRACT Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
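As a point of reference for the Gaussian approximation discussed above, here is a minimal sketch of mutual information for a scalar linear-Gaussian update. The scalar setting and function name are assumptions for illustration; the paper itself works with Gaussian-mixture priors and a particle filter.

```python
import math

def gaussian_mutual_info(prior_var, obs_var):
    """Mutual information (in nats) between a scalar state and a direct
    observation of it, when prior and observation error are both Gaussian.

    The posterior variance of the linear-Gaussian update is
    1 / (1/prior_var + 1/obs_var), and MI = 0.5 * ln(prior_var / post_var).
    Note that MI is independent of the observed value itself, unlike the
    sensitivity and relative-entropy measures discussed in the abstract."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    return 0.5 * math.log(prior_var / post_var)
```

For example, an observation as accurate as the prior (equal variances) halves the variance and carries 0.5 ln 2, roughly 0.35 nats.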
Abstract:
In this study two new measures of lexical diversity are tested for the first time on French. The usefulness of these measures, MTLD (McCarthy and Jarvis 2010, and this volume) and HD-D (McCarthy and Jarvis 2007), in predicting different aspects of language proficiency is assessed and compared with D (Malvern and Richards 1997; Malvern, Richards, Chipere and Durán 2004) and Maas (1972) in analyses of stories told by two groups of learners (n=41) of two different proficiency levels and one group of native speakers of French (n=23). The importance of careful lemmatization in studies of lexical diversity which involve highly inflected languages is also demonstrated. The paper shows that the measures of lexical diversity under study are valid proxies for language ability in that they explain up to 62 percent of the variance in French C-test scores, and up to 33 percent of the variance in a measure of complexity. The paper also provides evidence that dependence on segment size continues to be a problem for the measures of lexical diversity discussed in this paper. The paper concludes that limiting the range of text lengths or even keeping text length constant is the safest option in analysing lexical diversity.
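A minimal sketch of the MTLD computation (the mean length of sequential token strings that keep the type-token ratio above 0.72, averaged over a forward and a backward pass, following McCarthy and Jarvis's published description). The simplified handling of the unfinished final segment is an assumption of this sketch.

```python
def mtld_one_pass(tokens, threshold=0.72):
    """Count MTLD 'factors': each time the running type-token ratio (TTR)
    of the current segment drops to the threshold, a factor is complete
    and the segment restarts."""
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok)
        if len(types) / count <= threshold:
            factors += 1
            types, count = set(), 0
    if count:  # partial credit for the unfinished final segment
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))

def mtld(tokens, threshold=0.72):
    """MTLD is the mean of a forward and a backward pass over the text."""
    fwd = mtld_one_pass(tokens, threshold)
    bwd = mtld_one_pass(list(reversed(tokens)), threshold)
    return (fwd + bwd) / 2
```

A maximally repetitive text completes a factor every few tokens and scores low, while a text of all-distinct tokens never completes a factor and scores its own length, which is the intended "higher = more diverse" behaviour.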
Abstract:
Many different performance measures have been developed to evaluate field predictions in meteorology. However, a researcher or practitioner encountering a new or unfamiliar measure may have difficulty in interpreting its results, which may lead them to avoid new measures and rely on those that are familiar. In the context of evaluating forecasts of extreme events for hydrological applications, this article aims to promote the use of a range of performance measures. Several types of performance measure are introduced in order to demonstrate a six-step approach to tackling a new measure. Using the example of the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble precipitation predictions for the Danube floods of July and August 2002, we show how to apply new performance measures with this approach and how to choose between different performance measures based on their suitability for the task at hand. Copyright © 2008 Royal Meteorological Society
Abstract:
Bright aurorae can be excited by the acceleration of electrons into the atmosphere in violation of ideal magnetohydrodynamics. Modelling studies predict that the accelerating electric potential consists of electric double layers at the boundaries of an auroral acceleration region (AAR), but observations suggest that particle acceleration occurs throughout this region. Using multi-spacecraft observations from Cluster, we have examined two upward current regions on 14 December 2009. Our observations show that the potential difference below C4 and C3 changed by up to 1.7 kV between their respective crossings, which were separated by 150 s. The field-aligned current density observed by C3 was also larger than that observed by C4. The potential drop above C3 and C4 was approximately the same in both crossings. Using a novel technique of quantitatively comparing the electron spectra measured by Cluster 1 and 3, which were separated in altitude, we determine when these spacecraft made effectively magnetically conjugate observations and use these conjugate observations to determine the instantaneous distribution of the potential drop in the AAR. Our observations show that an average of 15% of the potential drop in the AAR was located between C1 at 6235 km and C3 at 4685 km altitude, with a maximum potential drop between the spacecraft of 500 V, and that the majority of the potential drop was below C3. By assuming spatial invariance along the length of the upward current region, we discuss these observations in terms of temporal changes and the vertical structure of the electrostatic potential drop, in the context of existing models and previous single- and multi-spacecraft observations.
Abstract:
Urban regeneration programmes in the UK over the past 20 years have increasingly focused on attracting investors, middle-class shoppers and visitors by transforming places and creating new consumption spaces. Ensuring that places are safe and are seen to be safe has taken on greater salience as these flows of income are easily disrupted by changing perceptions of fear and the threat of crime. At the same time, new technologies and policing strategies and tactics have been adopted in a number of regeneration areas which seek to establish control over these new urban spaces. Policing space is increasingly about controlling human actions through design, surveillance technologies and codes of conduct and enforcement. Regeneration agencies and the police now work in partnerships to develop their strategies. At its most extreme, this can lead to the creation of zero-tolerance, or what Smith terms 'revanchist', measures aimed at particular social groups in an effort to sanitise space in the interests of capital accumulation. This paper, drawing on an examination of regeneration practices and processes in one of the UK's fastest-growing urban areas, Reading in Berkshire, assesses policing strategies and tactics in the wake of a major regeneration programme. It documents and discusses the discourses of regeneration that have developed in the town and the ways in which new urban spaces have been secured. It argues that, whilst security concerns have become embedded in institutional discourses and practices, the implementation of security measures has been mediated, in part, by the local socio-political relations in and through which they have been developed.
Abstract:
Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index which is correlated to its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert’s curve, as it possesses good locality-preserving properties. However, there exists little comparison between Hilbert’s curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. Hilbert’s curve, Sammon’s mapping and Principal Component Analysis have been used to generate a 1d space with locality-preserving properties. This work provides empirical evidence to support the use of Hilbert’s curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial 2d network model and with a realistic network topology model with a typical power-law distribution of node connectivity in the Internet. Nearest neighbour analysis confirms Hilbert’s curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvements and better techniques to preserve locality information are required.
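The 2-d case of the landmark-to-index mapping described above can be sketched with the classic Hilbert-curve bit-twiddling algorithm. The quantisation step, the `lat_max` bound and the function names are illustrative assumptions; the paper works with higher-dimensional landmark vectors.

```python
def xy2d(order, x, y):
    """Map 2-d grid coordinates to a distance along a Hilbert curve of
    the given order (grid side = 2**order), using the standard
    quadrant-rotation formulation."""
    d = 0
    s = 2 ** (order - 1)
    while s > 0:
        rx = 1 if (x & s) > 0 else 0
        ry = 1 if (y & s) > 0 else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant so recursion stays consistent
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        s //= 2
    return d

def latency_index(latencies, lat_max=200.0, order=8):
    """Quantize a 2-d landmark latency vector (ms) onto a 2**order grid,
    then use the Hilbert distance as a locality-preserving peer index."""
    side = 2 ** order
    x, y = (min(int(v / lat_max * (side - 1)), side - 1) for v in latencies)
    return xy2d(order, x, y)
```

Nodes with similar latency vectors land on nearby grid cells, and the Hilbert distance keeps most such neighbours close in the resulting 1d identifier space, which is the locality-preserving property the study evaluates.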
Abstract:
Background: The present paper investigates the question of a suitable basic model for the number of scrapie cases in a holding, and applies this knowledge to the estimation of scrapie-affected holding population sizes and the adequacy of control measures within holdings. Is the number of scrapie cases proportional to the size of the holding, in which case it should be incorporated into the parameter of the error distribution for the scrapie counts? Or is there a different, potentially more complex, relationship between case count and holding size, in which case the information about the size of the holding would be better incorporated as a covariate in the modelling? Methods: We show that this question can be appropriately addressed via a simple zero-truncated Poisson model in which the hypothesis of proportionality enters as a special offset model. Model comparisons can be achieved by means of likelihood ratio testing. The procedure is illustrated by means of surveillance data on classical scrapie in Great Britain. Furthermore, the model with the best fit is used to estimate the size of the scrapie-affected holding population in Great Britain by means of two capture-recapture estimators: the Poisson estimator and the generalized Zelterman estimator. Results: No evidence could be found for the hypothesis of proportionality. In fact, there is some evidence that this relationship follows a curved line which increases for small holdings up to a maximum, after which it declines again. Furthermore, it is pointed out how crucial the correct model choice is when applied to capture-recapture estimation on the basis of zero-truncated Poisson models as well as the generalized Zelterman estimator. Estimators based on the proportionality model return very different and unreasonable estimates for the population sizes.
Conclusion: Our results stress the importance of an adequate modelling approach to the association between holding size and the number of cases of classical scrapie within holdings. Reporting artefacts and speculative biological effects are hypothesized as the underlying causes of the observed curved relationship. The lack of adjustment for these artefacts might well render the current strategies for the control of the disease ineffective.
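The plain (non-generalized) Zelterman capture-recapture estimator mentioned in the Methods can be sketched as follows; this omits the covariate adjustment of the generalized version used in the paper, and the function name is an assumption.

```python
import math
from collections import Counter

def zelterman_estimate(counts):
    """Estimate the total (observed + unobserved) holding population
    from zero-truncated case counts, one positive count per observed
    holding.

    Zelterman's robust Poisson-rate estimate uses only the frequencies
    of ones (f1) and twos (f2): lambda_hat = 2 * f2 / f1. The population
    estimate n / (1 - exp(-lambda_hat)) then inflates the observed n to
    account for holdings whose cases went undetected."""
    n = len(counts)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)
    if f1 == 0:
        raise ValueError("estimator needs at least one count of 1")
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))
```

With, say, 50 holdings reporting one case and 25 reporting two, lambda_hat = 1 and the 75 observed holdings are inflated to an estimated population of about 119.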
Abstract:
ES-62 is a phosphorylcholine-containing glycoprotein secreted by filarial nematodes. This molecule has been shown to reduce the severity of inflammation in collagen-induced arthritis (CIA) in mice, a model of rheumatoid arthritis, via down-regulation of anti-collagen type 1 immune responses. Malaria parasites induce a pro-inflammatory host immune response and many of the symptoms of malaria are immune system-mediated. Therefore we have asked whether the immunomodulatory properties of ES-62 can down-regulate the severity of malaria infection in BALB/c mice infected with Plasmodium chabaudi. We have found that ES-62 has no significant effect on the course of P. chabaudi parasitaemia, and does not significantly affect any of the measures of malaria-induced pathology taken throughout infection.
Abstract:
This study investigated, for the D-2 dopamine receptor, the relation between the ability of agonists and inverse agonists to stabilise different states of the receptor and their relative efficacies. K-i values for agonists were determined in competition versus the binding of the antagonist [H-3]spiperone. Competition data were fitted best by a two-binding-site model (with the exception of bromocriptine, for which a one-binding-site model provided the best fit), and agonist affinities for the higher-affinity (K-h) (G protein-coupled) and lower-affinity (K-l) (G protein-uncoupled) sites were determined. K-i values for agonists were also determined in competition versus the binding of the agonist [H-3]N-propylnorapomorphine (NPA) to provide a second estimate of K-h. Maximal agonist effects (E-max) and their potencies (EC50) were determined from concentration-response curves for agonist stimulation of guanosine-5'-O-(3-[S-35]thiotriphosphate) ([S-35]GTPgammaS) binding. The ability of agonists to stabilise the G protein-coupled state of the receptor (K-l/K-h, determined from ligand-binding assays) did not correlate with either of two measures of relative efficacy (relative E-max, K-l/EC50) of agonists determined in [S-35]GTPgammaS-binding assays when the data for all of the compounds tested were analysed. For a subset of compounds, however, there was a relation between K-l/K-h and E-max. Competition-binding data versus [H-3]spiperone and [H-3]NPA for a range of inverse agonists were fitted best by a one-binding-site model. K-i values for the inverse agonists tested were slightly lower in competition versus [H-3]NPA than versus [H-3]spiperone. These data do not provide support for the idea that inverse agonists act by binding preferentially to the ground state of the receptor. (C) 2004 Elsevier Inc. All rights reserved.
Abstract:
Geological carbon dioxide storage (CCS) has the potential to make a significant contribution to the decarbonisation of the UK. Amid concerns over maintaining security, and hence diversity, of supply, CCS could allow the continued use of coal, oil and gas whilst avoiding the CO2 emissions currently associated with fossil fuel use. This project has explored some of the geological, environmental, technical, economic and social implications of this technology. The UK is well placed to exploit CCS with a large offshore storage capacity, both in disused oil and gas fields and saline aquifers. This capacity should be sufficient to store CO2 from the power sector (at current levels) for at least one century, using well-understood, and therefore likely lower-risk, depleted hydrocarbon fields and contained parts of aquifers. It is very difficult to produce reliable estimates of the (potentially much larger) storage capacity of the less well understood geological reservoirs such as non-confined parts of aquifers. With the majority of its large coal-fired power stations due to be retired during the next 15 to 20 years, the UK is at a natural decision point with respect to the future of power generation from coal; the existence of both national reserves and the infrastructure for receiving imported coal makes clean coal technology a realistic option. The notion of CCS as a ‘bridging’ or ‘stop-gap’ technology (i.e. whilst we develop ‘genuinely’ sustainable renewable energy technologies) needs to be examined somewhat critically, especially given the scale of global coal reserves. If CCS plant is built, then it is likely that technological innovation will bring down the costs of CO2 capture, such that it could become increasingly attractive. As with any capital-intensive option, there is a danger of becoming ‘locked-in’ to a CCS system.
The costs of CCS in our model for UK power stations in the East Midlands and Yorkshire to reservoirs in the North Sea are between £25 and £60 per tonne of CO2 captured, transported and stored. This is between about 2 and 4 times the current traded price of a tonne of CO2 in the EU Emissions Trading Scheme. In addition to the technical and economic requirements of the CCS technology, it should also be socially and environmentally acceptable. Our research has shown that, given an acceptance of the severity and urgency of addressing climate change, CCS is viewed favourably by members of the public, provided it is adopted within a portfolio of other measures. The most commonly voiced concern from the public is that of leakage and this remains perhaps the greatest uncertainty with CCS. It is not possible to make general statements concerning storage security; assessments must be site specific. The impacts of any potential leakage are also somewhat uncertain but should be balanced against the deleterious effects of increased acidification in the oceans due to uptake of elevated atmospheric CO2 that have already been observed. Provided adequate long term monitoring can be ensured, any leakage of CO2 from a storage site is likely to have minimal localised impacts as long as leaks are rapidly repaired. A regulatory framework for CCS will need to include risk assessment of potential environmental and health and safety impacts, accounting and monitoring and liability for the long term. In summary, although there remain uncertainties to be resolved through research and demonstration projects, our assessment demonstrates that CCS holds great potential for significant cuts in CO2 emissions as we develop long term alternatives to fossil fuel use. CCS can contribute to reducing emissions of CO2 into the atmosphere in the near term (i.e. peak-shaving the future atmospheric concentration of CO2), with the potential to continue to deliver significant CO2 reductions over the long term.
Abstract:
Background: Population monitoring has been introduced in UK primary schools in an effort to track the growing obesity epidemic. It has been argued that parents should be informed of their child's results, but is there evidence that moving from monitoring to screening would be effective? We describe what is known about the effectiveness of monitoring and screening for overweight and obesity in primary school children and highlight areas where evidence is lacking and research should be prioritised. Design: Systematic review with discussion of evidence gaps and future research. Data sources: Published and unpublished studies (any language) from electronic databases (inception to July 2005), clinical experts, Primary Care Trusts and Strategic Health Authorities, and reference lists of retrieved studies. Review methods: We included any study that evaluated measures of overweight and obesity as part of a population-level assessment and excluded studies whose primary outcome measure was prevalence. Results: There were no trials assessing the effectiveness of monitoring or screening for overweight and obesity. Studies focussed on the diagnostic accuracy of measurements. Information on the attitudes of children, parents and health professionals to monitoring was extremely sparse. Conclusions: Our review found a lack of data on the potential impact of population monitoring or screening for obesity and more research is indicated. Identification of effective weight reduction strategies for children and clarification of the role of preventative measures are priorities. It is difficult to see how screening to identify individual children can be justified without effective interventions.