969 results for Standard setting
Abstract:
We introduce and explore an approach to estimating statistical significance of classification accuracy, which is particularly useful in scientific applications of machine learning where high dimensionality of the data and the small number of training examples render most standard convergence bounds too loose to yield a meaningful guarantee of the generalization ability of the classifier. Instead, we estimate statistical significance of the observed classification accuracy, or the likelihood of observing such accuracy by chance due to spurious correlations of the high-dimensional data patterns with the class labels in the given training set. We adopt permutation testing, a non-parametric technique previously developed in classical statistics for hypothesis testing in the generative setting (i.e., comparing two probability distributions). We demonstrate the method on real examples from neuroimaging studies and DNA microarray analysis and suggest a theoretical analysis of the procedure that relates the asymptotic behavior of the test to the existing convergence bounds.
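The permutation-testing idea described above can be sketched in a few lines: the class labels are repeatedly shuffled, the classifier is re-evaluated on each shuffled labeling, and the p-value is the fraction of permutations that match or exceed the observed accuracy. The nearest-centroid classifier and all names below are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def nearest_centroid_accuracy(X, y):
    # Leave-one-out accuracy of a simple nearest-centroid classifier
    # (a stand-in for whatever classifier the study actually uses).
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        c0 = X[mask & (y == 0)].mean(axis=0)
        c1 = X[mask & (y == 1)].mean(axis=0)
        pred = 0 if np.linalg.norm(X[i] - c0) <= np.linalg.norm(X[i] - c1) else 1
        correct += (pred == y[i])
    return correct / len(y)

def permutation_p_value(X, y, n_perm=1000):
    # Estimate how likely the observed accuracy is under the null
    # hypothesis that labels and data patterns are unrelated.
    observed = nearest_centroid_accuracy(X, y)
    null_count = 0
    for _ in range(n_perm):
        y_perm = rng.permutation(y)
        if nearest_centroid_accuracy(X, y_perm) >= observed:
            null_count += 1
    # Add-one correction keeps the estimate conservative and nonzero.
    return (null_count + 1) / (n_perm + 1)
```

A small p-value indicates that the observed accuracy is unlikely to arise from spurious correlations alone, which is exactly the guarantee the abstract argues is more useful than loose convergence bounds in high-dimensional, small-sample regimes.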
Robust Standard Details for Part E: submitted by request of the Office of the Deputy Prime Minister.
Abstract:
Traditionally, language speakers are categorised as mono-lingual, bilingual, or multilingual. It is traditionally assumed in English language education that the ‘lingual’ is something that can be ‘fixed’ in form, written down to be learnt, and taught. Accordingly, the ‘mono’-lingual will have a ‘fixed’ linguistic form. Such a ‘form’ differs according to a number of criteria or influences including region or ‘type’ of English (for example, World Englishes) but is nevertheless assumed to be a ‘form’. ‘Mono-lingualism’ is defined and believed, traditionally, to be ‘speaking one language’, wherever that language is, or whatever that language may be. In this chapter, grounded in an individual subjective philosophy of language, we question this traditional definition. Viewing language from philosophical perspectives such as those of Bakhtin and Voloshinov, we argue that the prominence of ‘context’ and ‘consciousness’ in language means that to ‘fix’ the form of a language goes against the very spirit of how it is formed and used. We thus challenge the categorisation of ‘mono’-lingualism, proposing that such a categorisation is actually a category error, or a case ‘in which a property is ascribed to a thing that could not possibly have that property’ (Restivo, 2013, p. 175), in this case the property of ‘mono’. Using this proposition as a starting point, we suggest that more time be devoted to language in its context and as per its genuine use as a vehicle for consciousness. We theorise that this can be done through a ‘literacy’-based approach which fronts the context of language use rather than the language itself. We outline how we envision this working for teachers, students and developers of English Language Education materials in a global setting. To do this we consider Scotland’s Curriculum for Excellence as an exemplar to promote conscious language use in context.
Abstract:
Depression is a common but frequently undiagnosed feature in individuals with HIV infection. To find a strategy to detect depression in a non-specialized clinical setting, the overall performance of the Hospital Anxiety and Depression Scale (HADS) and the depression identification questions proposed by the European AIDS Clinical Society (EACS) guidelines was assessed in a descriptive cross-sectional study of 113 patients with HIV infection. The clinician asked the two screening questions that were proposed under the EACS guidelines and requested patients to complete the HADS. A psychiatrist or psychologist administered semi-structured clinical interviews to yield psychiatric diagnoses of depression (gold standard). A receiver operating characteristic (ROC) analysis for the HADS-Depression (HADS-D) subscale indicated that the best sensitivity and specificity were obtained between the cut-off points of 5 and 8, and the ROC curve for the HADS-Total (HADS-T) indicated that the best cut-off points were between 12 and 14. There were no statistically significant differences in the correlations of the EACS (considering positive responses to one [A] or both questions [B]), the HADS-D ≥ 8 or the HADS-T ≥ 12 with the gold standard. The study concludes that both approaches (the two EACS questions and the HADS-D subscale) are appropriate depression-screening methods in the HIV population. We believe that using the EACS-B and the HADS-D subscale in a two-step approach allows for rapid, feasible and accurate clinical diagnosis in non-psychiatric hospital settings.
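The cut-off selection behind a ROC analysis like the one above can be sketched as a scan over candidate thresholds, scoring each by its sensitivity and specificity against the gold-standard diagnosis. The scores, diagnoses and the Youden-index criterion below are illustrative assumptions, not the study's data or exact method.

```python
def sens_spec_at_cutoff(scores, diagnosed, cutoff):
    # Treat score >= cutoff as a positive screen; compare to gold standard.
    tp = sum(1 for s, d in zip(scores, diagnosed) if s >= cutoff and d)
    fn = sum(1 for s, d in zip(scores, diagnosed) if s < cutoff and d)
    tn = sum(1 for s, d in zip(scores, diagnosed) if s < cutoff and not d)
    fp = sum(1 for s, d in zip(scores, diagnosed) if s >= cutoff and not d)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return sensitivity, specificity

def best_cutoff(scores, diagnosed, cutoffs):
    # Youden's J (sensitivity + specificity - 1) is one common criterion
    # for picking the cut-off that best balances the two.
    return max(cutoffs,
               key=lambda c: sum(sens_spec_at_cutoff(scores, diagnosed, c)) - 1)
```

In practice the full ROC curve would be inspected rather than a single summary statistic, but the scan above captures why the analysis reports a *range* of good cut-off points (e.g. 5 to 8 for the HADS-D): several adjacent thresholds can achieve near-identical trade-offs.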
Abstract:
The exploding demand for services like the World Wide Web reflects the potential that is presented by globally distributed information systems. The number of WWW servers world-wide has doubled every 3 to 5 months since 1993, outstripping even the growth of the Internet. At each of these self-managed sites, the Common Gateway Interface (CGI) and Hypertext Transfer Protocol (HTTP) already constitute a rudimentary basis for contributing local resources to remote collaborations. However, the Web has serious deficiencies that make it unsuited for use as a true medium for metacomputing --- the process of bringing hardware, software, and expertise from many geographically dispersed sources to bear on large scale problems. These deficiencies are, paradoxically, the direct result of the very simple design principles that enabled its exponential growth. There are many symptoms of the problems exhibited by the Web: disk and network resources are consumed extravagantly; information search and discovery are difficult; protocols are aimed at data movement rather than task migration, and ignore the potential for distributing computation. However, all of these can be seen as aspects of a single problem: as a distributed system for metacomputing, the Web offers unpredictable performance and unreliable results. The goal of our project is to use the Web as a medium (within either the global Internet or an enterprise intranet) for metacomputing in a reliable way with performance guarantees. We attack this problem on four levels: (1) Resource Management Services: Globally distributed computing allows novel approaches to the old problems of performance guarantees and reliability. 
Our first set of ideas involves setting up a family of real-time resource management models organized by the Web Computing Framework with a standard Resource Management Interface (RMI), a Resource Registry, a Task Registry, and resource management protocols that allow resource needs and availability information to be collected and disseminated, so that a family of algorithms with varying computational precision and accuracy of representations can be chosen to meet real-time and reliability constraints. (2) Middleware Services: Complementary to techniques for allocating and scheduling available resources to serve application needs under real-time and reliability constraints, the second set of ideas aims at reducing communication latency, traffic congestion, server work load, etc. We develop customizable middleware services to exploit application characteristics in traffic analysis to drive new server/browser design strategies (e.g., exploit self-similarity of Web traffic), derive document access patterns via multiserver cooperation, and use them in speculative prefetching, document caching, and aggressive replication to reduce server load and bandwidth requirements. (3) Communication Infrastructure: Finally, to achieve any guarantee of quality of service or performance, one must get at the network layer that can provide the basic guarantees of bandwidth, latency, and reliability. Therefore, the third area is a set of new techniques in network service and protocol designs. (4) Object-Oriented Web Computing Framework: A useful resource management system must deal with job priority, fault-tolerance, quality of service, complex resources such as ATM channels, probabilistic models, etc., and models must be tailored to represent the best tradeoff for a particular setting. This requires a family of models, organized within an object-oriented framework, because no one-size-fits-all approach is appropriate. 
This presents a software engineering challenge requiring integration of solutions at all levels: algorithms, models, protocols, and profiling and monitoring tools. The framework captures the abstract class interfaces of the collection of cooperating components, but allows the concretization of each component to be driven by the requirements of a specific approach and environment.
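The speculative-prefetching idea in the middleware layer above can be illustrated with a minimal first-order access-pattern model: observe which document tends to follow which in the access log, and prefetch the most frequent successor. The class and method names are hypothetical, not part of the project's actual design.

```python
from collections import defaultdict

class PrefetchPredictor:
    """Toy first-order model of document access patterns."""

    def __init__(self):
        # transitions[a][b] counts how often doc b was requested right after a.
        self.transitions = defaultdict(lambda: defaultdict(int))
        self.last = None

    def record_access(self, doc):
        # Feed each request from the access log to update transition counts.
        if self.last is not None:
            self.transitions[self.last][doc] += 1
        self.last = doc

    def predict_next(self, doc):
        # Candidate for speculative prefetch: the most frequent successor,
        # or None if this document has never been followed by anything.
        followers = self.transitions.get(doc)
        if not followers:
            return None
        return max(followers, key=followers.get)
```

A real middleware service would weigh the predicted benefit against bandwidth cost and cache pressure before prefetching, which is where the abstract's points about server load and bandwidth requirements come in.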
Abstract:
Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement to pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer vision based strategy where features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated to pointer movement. The main contributions of this paper are (1) enhancing a video based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
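The core of contribution (1) — mapping small tracked-feature displacements to pointer movement that can still reach every screen area — can be sketched as an amplified relative mapping with screen clamping. The gain model and all names below are illustrative assumptions, not the authors' exact mapping.

```python
def map_feature_to_pointer(feature_dx, feature_dy, pointer_x, pointer_y,
                           gain=8.0, screen_w=1920, screen_h=1080):
    """Translate a frame-to-frame feature displacement into pointer motion.

    A gain factor amplifies small physical movements so users with very
    limited motion can still reach every screen area; the result is
    clamped to the screen bounds.
    """
    new_x = min(max(pointer_x + gain * feature_dx, 0), screen_w - 1)
    new_y = min(max(pointer_y + gain * feature_dy, 0), screen_h - 1)
    return new_x, new_y
```

In a full system the gain would typically be user-customizable (matching the paper's emphasis on a customizable framework), and the feature displacement would come from a per-frame visual tracker rather than being passed in directly.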
Abstract:
This research aimed to investigate the main concern facing nurses in minimising risk within the perioperative setting and to generate an explanatory substantive theory of how they resolve this through anticipatory vigilance. In the context of the perioperative setting, nurses encounter challenges in minimising risks for their patients on a continuous basis. Current explanations of minimising risk in the perioperative setting offer insights into how perioperative nurses undertake their work. Currently, research in minimising risk is broadly related to dealing with errors as opposed to preventing them. To date, little is known about how perioperative nurses practice and maintain safety. This study was guided by the principles of classic grounded theory as described by Glaser (1978, 1998, 2001). Data was collected through individual unstructured interviews with thirty-seven perioperative nurses (with varying lengths of experience of working in the area) and thirty-three hours of non-participant observation within eight different perioperative settings in the Republic of Ireland. Data was simultaneously collected and analysed. The theory of anticipatory vigilance emerged as the pattern of behaviour through which nurses deal with their main concern of minimising risk in a high risk setting. Anticipatory vigilance is enacted through orchestrating, routinising and momentary adapting within a spirit of trusting relations within the substantive area of the perioperative setting. This theory offers an explanation of how nurses resolve their main concern of minimising risk within the perioperative setting. The theory of anticipatory vigilance will be useful to nurses in providing a comprehensive framework for explaining and understanding how nurses deal with minimising risk in the perioperative setting. The theory links perioperative nursing, risk and vigilance together. 
Clinical improvements through understanding and awareness of the theory of anticipatory vigilance will result in an improved quality environment, leading to safe patient outcomes.
Abstract:
The world’s population is rapidly aging, which affects healthcare budgets, resources, pensions and social security systems. Although most older adults prefer to live independently in their own homes for as long as possible, smart living solutions to support elderly people at home have not yet reached mass adoption. To support people to age in place, a Living Lab has been established in one of the metropolitan areas in the Netherlands. The main goal of the Living Lab is to develop an online health and wellbeing platform that matches service providers, caretakers and users, and to implement that platform in one particular city district. In this paper we describe the narrative of the action design research process, which will give researchers insight into how to deal with complex multi-stakeholder design projects as well as cooperation issues when developing an artifact in a real-life setting.