920 results for user tests
Abstract:
This article describes work undertaken by the VERA project to investigate how archaeologists work with information technology (IT) on excavation sites. We used a diary study to research the usual patterns of behaviour of archaeologists digging the Silchester Roman town site during the summer of 2007. Although recording had previously been undertaken using pen and paper, during the 2007 season part of the dig was dedicated to trials of IT, and archaeologists used digital pens and paper and Nokia N800 handheld PDAs to record their work. The goal of the trial was to see whether it was possible to record data from the dig whilst still on site, rather than waiting until after the excavation to enter it into the Integrated Archaeological Database (IADB), and to determine whether the archaeologists found the new technology helpful. The digital pens were a success; the N800s, however, were not, given the extreme conditions on site. Our findings confirmed that it was important that technology should fit in well with the work being undertaken rather than being used for its own sake, and should respect established workflows. We also found that the quality of the data being entered was a recurrent concern, as was the reliability of the infrastructure and equipment.
Abstract:
Book review of 'The prosthetic impulse: from a posthuman present to a biocultural future', edited by M. Smith and J. Morra.
Abstract:
A large fraction of papers in the climate literature includes erroneous uses of significance tests. A Bayesian analysis is presented to highlight the meaning of significance tests and why typical misuse occurs. The significance statistic is not a quantitative measure of how confident we can be of the ‘reality’ of a given result. It is concluded that a significance test very rarely provides useful quantitative information.
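To make this distinction concrete, here is a minimal numerical sketch (not from the paper; the Gaussian model, unit-information prior and sample numbers are all assumed) of the classic situation, known as Lindley's paradox, where a "significant" p-value coexists with a Bayes factor that still favours the null:

```python
import numpy as np
from scipy import stats

# Assumed toy numbers: a very large sample with a tiny observed mean.
n, xbar, sigma, tau = 100_000, 0.01, 1.0, 1.0

z = xbar / (sigma / np.sqrt(n))            # z ~= 3.16
p_value = 2 * stats.norm.sf(abs(z))        # two-sided p ~= 0.0016

# BF01: marginal likelihood of the data under H0 (mu = 0) divided by
# that under H1 (mu ~ N(0, tau^2), a unit-information prior).
r = n * tau**2 / sigma**2
bf01 = np.sqrt(1 + r) * np.exp(-0.5 * z**2 * r / (1 + r))

# p ~= 0.0016 ("highly significant"), yet BF01 ~= 2.1 favours the null:
# the p-value does not measure confidence in the reality of the effect.
print(f"p = {p_value:.4f}, BF01 = {bf01:.2f}")
```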
Abstract:
In the decade since OceanObs '99, great advances have been made in the field of ocean data dissemination. The use of Internet technologies has transformed the landscape: users can now find, evaluate and access data rapidly and securely using only a web browser. This paper describes the current state of the art in dissemination methods for ocean data, focussing particularly on ocean observations from in situ and remote sensing platforms. We discuss current efforts being made to improve the consistency of delivered data and to increase the potential for automated integration of diverse datasets. An important recent development is the adoption of open standards from the Geographic Information Systems community; we discuss the current impact of these new technologies and their future potential. We conclude that new approaches will indeed be necessary to exchange data more effectively and forge links between communities, but these approaches must be evaluated critically through practical tests, and existing ocean data exchange technologies must be used to their best advantage. Investment in key technology components, cross-community pilot projects and the enhancement of end-user software tools will be required in order to assess and demonstrate the value of any new technology.
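As an illustration of the kind of GIS-community open standard discussed here, the sketch below builds OGC Web Map Service (WMS) requests using nothing but HTTP query parameters; the endpoint and layer name are placeholders, not services named in the paper:

```python
from urllib.parse import urlencode

BASE = "https://example.org/wms"   # hypothetical ocean-data WMS endpoint

# Discovery: ask the server what layers, formats and CRSs it offers.
capabilities_url = BASE + "?" + urlencode({
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetCapabilities",
})

# Retrieval: request a rendered map of one layer as a plain image.
map_url = BASE + "?" + urlencode({
    "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
    "LAYERS": "sea_surface_temperature",   # assumed layer name
    "STYLES": "", "CRS": "CRS:84",         # CRS:84 = lon/lat axis order
    "BBOX": "-180,-90,180,90",
    "WIDTH": "1024", "HEIGHT": "512",
    "FORMAT": "image/png",
})

print(capabilities_url)
print(map_url)
```

Because every parameter is standardised, any WMS-aware client can integrate such a layer without server-specific code, which is the interoperability argument the paper makes.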
Abstract:
Resistance baselines were obtained for the first generation anticoagulant rodenticides chlorophacinone and diphacinone using laboratory, caesarian-derived Norway rats (Rattus norvegicus) as the susceptible strain and the blood clotting response test method. The ED99 estimates for a quantal response were: chlorophacinone, males 0.86 mg kg⁻¹, females 1.03 mg kg⁻¹; diphacinone, males 1.26 mg kg⁻¹, females 1.60 mg kg⁻¹. The dose-response data also showed that chlorophacinone was significantly (p < 0.0001) more potent than diphacinone for both male and female rats, and that male rats were more susceptible than females to both compounds (p < 0.002). The ED99 doses were then given to groups of five male and five female rats of the Welsh and Hampshire warfarin-resistant strains. Twenty-four hours later, prothrombin times were slightly elevated in both strains, but all the animals were classified as resistant to the two compounds, indicating cross-resistance from warfarin to diphacinone and chlorophacinone. When rats of the two resistant strains were fed for six consecutive days on baits containing either diphacinone or chlorophacinone, many animals survived, indicating that their resistance might enable them to survive treatments with these compounds in the field.
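For readers unfamiliar with quantal dose-response analysis, the sketch below shows how an ED99 of this kind can be estimated with a probit fit; the dose-response counts are invented for illustration and are not the paper's data:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical quantal data: number of animals responding out of n at
# each dose (mg/kg).  The paper's raw data are not reproduced here.
dose = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
n = np.array([10, 10, 10, 10, 10])
responded = np.array([1, 3, 7, 9, 10])

# Probit regression of response probability on log10(dose).
X = sm.add_constant(np.log10(dose))
model = sm.GLM(
    np.column_stack([responded, n - responded]), X,
    family=sm.families.Binomial(link=sm.families.links.Probit()),
)
a, b = model.fit().params

# ED99: the dose at which the fitted probit predicts a 99 % response.
ed99 = 10 ** ((norm.ppf(0.99) - a) / b)
print(f"ED99 ~= {ed99:.2f} mg/kg")
```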
Abstract:
This paper derives some exact power properties of tests for spatial autocorrelation in the context of a linear regression model. In particular, we characterize the circumstances in which the power vanishes as the autocorrelation increases, thus extending the work of Krämer (2005). More generally, the analysis in the paper sheds new light on how the power of tests for spatial autocorrelation is affected by the matrix of regressors and by the spatial structure. We mainly focus on the problem of residual spatial autocorrelation, in which case it is appropriate to restrict attention to the class of invariant tests, but we also consider the case when the autocorrelation is due to the presence of a spatially lagged dependent variable among the regressors. A numerical study aimed at assessing the practical relevance of the theoretical results is included.
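A simple Monte Carlo sketch of the quantity studied here (an assumed design with a circulant weights matrix and a Moran-type statistic on OLS residuals, not the paper's exact setup or its analytical results) shows how the rejection rate of a residual spatial autocorrelation test can be traced as the autocorrelation parameter grows:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 30, 2000

# Circular "rook" weights: each unit's neighbours are i-1 and i+1,
# row-standardised so each row sums to one.
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.5

X = np.column_stack([np.ones(n), rng.normal(size=n)])
M = np.eye(n) - X @ np.linalg.inv(X.T @ X) @ X.T   # residual-maker matrix

def moran(e):
    # Moran-type ratio; with row-standardised W the n/S0 factor is 1.
    return (e @ W @ e) / (e @ e)

def sim(rho):
    A = np.linalg.inv(np.eye(n) - rho * W)          # SAR(1) transform
    return np.array([moran(M @ (A @ rng.normal(size=n)))
                     for _ in range(reps)])

crit = np.quantile(sim(0.0), 0.95)   # 5 % critical value under rho = 0
for rho in (0.2, 0.5, 0.8, 0.95):
    print(f"rho = {rho:.2f}: power ~= {np.mean(sim(rho) > crit):.2f}")
```

Replacing X or W in this sketch shows empirically the dependence of power on the regressors and the spatial structure that the paper treats analytically.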
Abstract:
Based on insufficient evidence and inadequate research, Floridi and his students report inaccuracies and draw false conclusions in their Minds and Machines evaluation, which this paper aims to clarify. Acting as invited judges, Floridi et al. participated in nine of the ninety-six Turing tests staged in the finals of the 18th Loebner Prize for Artificial Intelligence in October 2008. From the transcripts it appears that they used power over solidarity as an interrogation technique. As a result, they were fooled on several occasions into believing that a machine was a human and that a human was a machine. Worse still, they did not realise their mistake. This resulted in a combined correct identification rate of less than 56%. In their paper they assumed that they had made correct identifications when in fact they had been incorrect.
Abstract:
In recent years there has been a rapid growth of interest in exploring the relationship between nutritional therapies and the maintenance of cognitive function in adulthood. Emerging evidence reveals an increasingly complex picture with respect to the benefits of various food constituents on learning, memory and psychomotor function in adults. However, to date, there has been little consensus in human studies on the range of cognitive domains to be tested or the particular tests to be employed. To illustrate the potential difficulties that this poses, we conducted a systematic review of existing human adult randomised controlled trial (RCT) studies that have investigated the effects of 24 days to 36 months of supplementation with flavonoids and micronutrients on cognitive performance. There were thirty-nine studies employing a total of 121 different cognitive tasks that met the criteria for inclusion. Results showed that fewer than half of these studies reported positive effects of treatment, with some important cognitive domains either under-represented or not explored at all. Although there was some evidence of sensitivity to nutritional supplementation in a number of domains (for example, executive function, spatial working memory), interpretation is currently difficult given the prevailing 'scattergun approach' to selecting cognitive tests. Specifically, this practice means that it is often difficult to distinguish between a boundary condition for a particular nutrient and a lack of task sensitivity. We argue that for significant future progress to be made, researchers need to pay much closer attention to existing human RCT and animal data, as well as to more basic issues surrounding task sensitivity, statistical power and type I error.
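The type I error concern raised here is easy to quantify: under independence, the probability that at least one of k tests at alpha = 0.05 is falsely positive grows quickly with k (illustrative arithmetic only, not figures drawn from the review):

```python
# Family-wise error rate for k independent tests at alpha = 0.05:
# P(at least one false positive) = 1 - (1 - alpha)^k.
alpha = 0.05
for k in (1, 5, 10, 20):
    fwer = 1 - (1 - alpha) ** k
    print(f"{k:2d} tests: P(>=1 false positive) = {fwer:.2f}")
# 1 test: 0.05, 5 tests: 0.23, 10 tests: 0.40, 20 tests: 0.64
```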
Abstract:
This paper suggests a method for identifying individuals who are most suited to using virtual reality (VR) systems. The aim is to help both individuals and employers decide where an individual's skills and abilities would be best deployed. By considering a potential user's competence and temperament, a graphical representation is introduced that may then be used to crudely separate high-aptitude participants from those with lesser capabilities. By introducing standard tests for competence and a standard classifier for temperament, and by further weighting each measure with respect to the technology currently available and the application, a detailed representation of the effectiveness of different users is developed.
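A toy sketch of the kind of weighted combination described (the measure names, scales and weights below are invented for illustration; the paper's actual scoring scheme is not reproduced here):

```python
def vr_aptitude(competence, temperament, weights):
    """Weighted combination of competence-test scores and temperament
    measures (each in [0, 1]) into a single aptitude value.

    weights: application- and technology-specific weight per measure.
    """
    scores = {**competence, **temperament}
    total = sum(weights.values())
    return sum(weights[m] * scores[m] for m in weights) / total

# Hypothetical user, weighted for a spatially demanding VR application.
score = vr_aptitude(
    competence={"spatial": 0.8, "motor": 0.6},
    temperament={"openness": 0.7},
    weights={"spatial": 2.0, "motor": 1.0, "openness": 1.0},
)
print(f"aptitude = {score:.2f}")   # higher -> better suited to this task
```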
Abstract:
Where users are interacting in a distributed virtual environment, the actions of each user must be observed by peers with sufficient consistency and within a limited delay so as not to be detrimental to the interaction. The consistency control issue may be split into three parts: update control; consistent enactment and evolution of events; and causal consistency. The delay in the presentation of events, termed latency, is primarily dependent on the network propagation delay and the consistency control algorithms. The latency induced by the consistency control algorithm, in particular causal ordering, is proportional to the number of participants. This paper describes how the effect of network delays may be reduced and introduces a scalable solution that provides sufficient consistency control while minimising its effect on latency. The principles described have been developed at Reading over the past five years. Similar principles are now emerging in the simulation community through the HLA standard. This paper attempts to validate the suggested principles within the schema of distributed simulation and virtual environments, and to compare and contrast them with the principles described in the HLA definition documents.
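To see why the cost of causal ordering scales with the number of participants, consider the standard vector-clock delivery rule sketched below (a generic textbook technique used for illustration, not the specific Reading or HLA algorithm): every update carries an N-entry timestamp, so per-message bookkeeping, and hence the induced delay, grows with N.

```python
from dataclasses import dataclass

N = 4  # number of participants; timestamp size and checks are O(N)

@dataclass
class Event:
    sender: int
    clock: list  # vector timestamp attached to the update

class Participant:
    def __init__(self, pid: int):
        self.pid = pid
        self.clock = [0] * N

    def send(self) -> Event:
        self.clock[self.pid] += 1
        return Event(self.pid, self.clock.copy())

    def deliverable(self, e: Event) -> bool:
        # Causal delivery rule: e must be the next event from its sender,
        # and we must already have seen everything its sender had seen.
        return (e.clock[e.sender] == self.clock[e.sender] + 1 and
                all(e.clock[i] <= self.clock[i]
                    for i in range(N) if i != e.sender))

    def deliver(self, e: Event) -> None:
        self.clock = [max(a, b) for a, b in zip(self.clock, e.clock)]

a, b = Participant(0), Participant(1)
e = a.send()
if b.deliverable(e):   # True: no missing causal predecessors
    b.deliver(e)
print(b.clock)         # [1, 0, 0, 0]
```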