225 results for authentication test
Abstract:
Most current computer systems authorise the user only at the start of a session and do not detect whether the current user is still the initially authorised user, a substitute user, or an intruder pretending to be a valid user. A system that continuously and unobtrusively verifies the identity of the user throughout the session is therefore needed. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, and most of these techniques are based on biometrics. These continuous biometric authentication systems (CBAS) are driven by user traits and characteristics. One of the main biometric types is keystroke dynamics, which has been widely tried and accepted for providing continuous user authentication. Keystroke dynamics is appealing for several reasons. First, it is unobtrusive, since users will be typing on the computer keyboard anyway. Second, it requires no extra hardware. Finally, keystroke data remain available after the authentication step at the start of the computer session. Currently, research on CBAS with keystroke dynamics is insufficient. To date, most existing schemes ignore the continuous authentication scenarios, which limits their practicality in different real-world applications. Also, contemporary CBAS with keystroke dynamics approaches use character sequences as features representative of user typing behavior, but their feature-selection criteria do not guarantee features with strong statistical significance, which may lead to a less accurate statistical representation of the user. Furthermore, their selected features do not inherently incorporate user typing behavior. Finally, existing CBAS based on keystroke dynamics typically depend on pre-defined user-typing models for continuous authentication.
This dependency restricts such systems to authenticating only known users whose typing samples have been modelled. This research addresses these limitations by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. The research also proposes four statistical feature-selection techniques that yield features with the highest statistical significance and encompass different user typing behaviors, representing user typing patterns effectively. Finally, the research proposes a user-independent threshold approach that can authenticate a user accurately without needing any pre-defined user-typing model, and enhances the technique to detect an impostor or intruder who may take over at any point during the computer session.
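The keystroke-dynamics abstract above describes extracting statistical features from typing behavior. As a minimal illustration of the general idea (the event format, function names, and feature choice here are illustrative assumptions, not the thesis's actual feature set), the following sketch computes per-digraph (key-pair) latency statistics from a stream of timestamped key events:

```python
from collections import defaultdict

def digraph_latencies(events):
    """Collect key-pair (digraph) latencies from a list of
    (key, press_time_ms) events -- the raw material for
    statistical features such as per-digraph means."""
    latencies = defaultdict(list)
    for (k1, t1), (k2, t2) in zip(events, events[1:]):
        latencies[(k1, k2)].append(t2 - t1)
    return latencies

def digraph_means(events):
    """Mean latency per digraph: one simple per-user feature vector."""
    return {pair: sum(v) / len(v)
            for pair, v in digraph_latencies(events).items()}

# Example: the user types "the" twice with slightly different timing.
sample = [("t", 0), ("h", 120), ("e", 250),
          ("t", 900), ("h", 1010), ("e", 1150)]
features = digraph_means(sample)
print(features[("t", "h")])  # 115.0 -- mean of the two t->h latencies
```

A real CBAS would compare such feature vectors against a threshold continuously during the session rather than once at login.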
Abstract:
This chapter is a tutorial that teaches you how to design extended finite state machine (EFSM) test models for a system that you want to test. EFSM models are more powerful and expressive than simple finite state machine (FSM) models, and are one of the most commonly used styles of models for model-based testing, especially for embedded systems. There are many languages and notations in use for writing EFSM models, but in this tutorial we write our EFSM models in the familiar Java programming language. To generate tests from these EFSM models we use ModelJUnit, which is an open-source tool that supports several stochastic test generation algorithms, and we also show how to write your own model-based testing tool. We show how EFSM models can be used for unit testing and system testing of embedded systems, and for offline testing as well as online testing.
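The tutorial abstract above describes EFSM models with guarded transitions and stochastic test generation (in Java, with ModelJUnit). As a language-neutral sketch of the same idea, and not of ModelJUnit's actual API, the following Python fragment (all class and function names are invented for illustration) models a tiny bounded counter as an EFSM and generates a test sequence by a seeded random walk over the enabled transitions:

```python
import random

class BoundedCounterModel:
    """A tiny EFSM test model: a counter bounded to 0..2.
    The counter is an extended state variable; guards decide
    which actions are enabled in the current state."""
    def __init__(self):
        self.count = 0          # extended state variable
        self.trace = []         # actions taken, i.e. the generated test

    def inc_guard(self):
        return self.count < 2

    def inc(self):
        self.count += 1
        self.trace.append("inc")

    def dec_guard(self):
        return self.count > 0

    def dec(self):
        self.count -= 1
        self.trace.append("dec")

def random_walk(model, steps, seed=0):
    """Stochastic test generation: repeatedly pick one enabled action."""
    rng = random.Random(seed)
    actions = [(model.inc_guard, model.inc),
               (model.dec_guard, model.dec)]
    for _ in range(steps):
        enabled = [act for guard, act in actions if guard()]
        rng.choice(enabled)()
    return model.trace

m = BoundedCounterModel()
print(random_walk(m, 10))
```

In online testing the chosen action would also be executed against the system under test and the outputs compared; here the walk only produces the test sequence.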
Abstract:
The IEEE Subcommittee on the Application of Probability Methods (APM) published the IEEE Reliability Test System (RTS) [1] in 1979. This system provides a consistent and generally acceptable set of data that can be used both in generation capacity and in composite system reliability evaluation [2,3]. The test system provides a basis for the comparison of results obtained by different people using different methods. Prior to its publication, there was no general agreement on either the system or the data that should be used to demonstrate or test the various techniques developed to conduct reliability studies. Development of reliability assessment techniques and programs is highly dependent on the intent behind the development, as the experience of one power utility with its system may be quite different from that of another utility. The development and utilization of a reliability program are, therefore, greatly influenced by the experience of a utility and the intent of the system manager, planner and designer conducting the reliability studies. The IEEE-RTS has proved extremely valuable in highlighting and comparing the capabilities (or incapabilities) of programs used in reliability studies, the differences in the perception of various power utilities and the differences in the solution techniques. The IEEE-RTS contains a reasonably large power network, which can be difficult to use for initial studies in an educational environment.
Abstract:
The IEEE Reliability Test System (RTS) developed by the Application of Probability Methods Subcommittee has been used to compare and test a wide range of generating capacity and composite system evaluation techniques and subsequent digital computer programs. A basic reliability test system is presented which has evolved from the reliability education and research programs conducted by the Power System Research Group at the University of Saskatchewan. The basic system data necessary for adequacy evaluation at the generation and composite generation and transmission system levels are presented together with the fundamental data required to conduct reliability-cost/reliability-worth evaluation.
Abstract:
A set of basic reliability indices at the generation and composite generation and transmission levels for a small reliability test system is presented. The test system and the results presented have evolved from reliability research and teaching programs. The indices presented are for fundamental reliability applications which should be covered in a power system reliability teaching program. The RBTS test system and the basic indices provide a valuable reference for faculty and students engaged in reliability teaching and research.
Abstract:
This paper reports a study that explored a new construct: 'climate of fear'. We hypothesised that climate of fear would vary across work sites within organisations, but not across organisations. This is in contrast to measures of organisational culture, which were expected to vary both within and across organisations. To test our hypotheses, we developed a new 13-item measure of perceived fear in organisations and tested it in 20 sites across two organisations (N = 209). The culture variables measured were innovative leadership culture and communication culture. As anticipated, climate of fear varied across sites in both organisations, while differences across organisations were not significant. Organisational culture, however, varied between the organisations, and within one of the organisations. The climate of fear scale exhibited acceptable psychometric properties.
Abstract:
We blend research from human-computer interface (HCI) design with computationally based cryptographic provable security. We explore the notion of practice-oriented provable security (POPS), moving the focus to a higher level of abstraction (POPS+) for use in providing provable security for security ceremonies involving humans. In doing so we highlight some challenges and paradigm shifts required to achieve meaningful provable security for a protocol which includes a human. We move the focus of security ceremonies from being protocols in their context of use, to the protocols being cryptographic building blocks in a higher-level protocol (the security ceremony), to which POPS can be applied. To illustrate the need for our approach, we analyse both a protocol proven secure in theory and a similar protocol implemented by a financial institution, from both HCI and cryptographic perspectives.
Abstract:
Security of RFID authentication protocols has received considerable interest recently. However, an important aspect of such protocols that has received less attention is the efficiency of their communication. In this paper we investigate the efficiency benefits of pre-computation for time-constrained applications in small to medium RFID networks. We also outline a protocol utilizing this mechanism in order to demonstrate the benefits and drawbacks of the approach. The proposed protocol shows promising results, as it offers the security of untraceable protocols while requiring only time comparable to that of more efficient but traceable protocols.
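The abstract does not specify the protocol's construction, but the general pre-computation mechanism it refers to can be sketched as follows (a hedged illustration under assumed names and a simple hash-based response; the actual protocol may differ): the back-end server pre-computes each tag's expected response for the coming epoch, so identifying a tag at query time is a single table lookup instead of a per-tag search.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Hash used by both tag and server."""
    return hashlib.sha256(data).digest()

def precompute_table(tag_secrets, epoch: int):
    """Server side: pre-compute each tag's expected response for
    the given epoch, so identification is O(1) at query time."""
    return {h(secret + epoch.to_bytes(4, "big")): tag_id
            for tag_id, secret in tag_secrets.items()}

def tag_response(secret: bytes, epoch: int) -> bytes:
    """Tag side: the response looks random without the secret,
    so responses are unlinkable across epochs."""
    return h(secret + epoch.to_bytes(4, "big"))

secrets = {"tag-1": b"k1", "tag-2": b"k2"}
table = precompute_table(secrets, epoch=7)
resp = tag_response(b"k2", epoch=7)
print(table[resp])  # tag-2, identified with a single lookup
```

The trade-off the paper examines follows directly: the lookup shifts work from the time-constrained authentication step to an offline pre-computation phase, at the cost of storing and refreshing the table each epoch.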
Abstract:
We introduce a lightweight biometric solution for user authentication over networks using online handwritten signatures. The algorithm proposed is based on a modified Hausdorff distance and has favorable characteristics such as low computational cost and minimal training requirements. Furthermore, we investigate an information theoretic model for capacity and performance analysis for biometric authentication which brings additional theoretical insights to the problem. A fully functional proof-of-concept prototype that relies on commonly available off-the-shelf hardware is developed as a client-server system that supports Web services. Initial experimental results show that the algorithm performs well despite its low computational requirements and is resilient against over-the-shoulder attacks.
Abstract:
PURPOSE: To test the reliability of Timed Up and Go Tests (TUGTs) in cardiac rehabilitation (CR) and compare TUGTs to the 6-Minute Walk Test (6MWT) for outcome measurement. METHODS: Sixty-one of 154 consecutive community-based CR patients were prospectively recruited. Subjects undertook repeated TUGTs and 6MWTs at the start of CR (start-CR), postdischarge from CR (post-CR), and 6 months postdischarge from CR (6 months post-CR). The main outcome measurements were TUGT time (TUGTT) and 6MWT distance (6MWD). RESULTS: Mean (SD) TUGTT1 and TUGTT2 at the 3 assessments were 6.29 (1.30) and 5.94 (1.20); 5.81 (1.22) and 5.53 (1.09); and 5.39 (1.60) and 5.01 (1.28) seconds, respectively. A reduction in TUGTT occurred between each outcome point (P ≤ .002). Repeated TUGTTs were strongly correlated at each assessment, intraclass correlation (95% CI) = 0.85 (0.76–0.91), 0.84 (0.73–0.91), and 0.90 (0.83–0.94), despite a reduction between TUGTT1 and TUGTT2 of 5%, 5%, and 7%, respectively (P ≤ .006). Relative decreases in TUGTT1 (TUGTT2) occurred from start-CR to post-CR and from start-CR to 6 months post-CR of −7.5% (−6.9%) and −14.2% (−15.5%), respectively, while relative increases in 6MWD1 (6MWD2) occurred, 5.1% (7.2%) and 8.4% (10.2%), respectively (P < .001 in all cases). Pearson correlation coefficients for 6MWD1 to TUGTT1 and TUGTT2 across all times were −0.60 and −0.68 (P < .001) and the intraclass correlations (95% CI) for the speeds derived from averaged 6MWDs and TUGTTs were 0.65 (0.54, 0.73) (P < .001). CONCLUSIONS: Similar relative changes occurred for the TUGT and the 6MWT in CR. A significant correlation between the TUGTT and 6MWD was demonstrated, and we suggest that the TUGT may provide a related or a supplementary measurement of functional capacity in CR.
Abstract:
Population-wide associations between loci due to linkage disequilibrium can be used to map quantitative trait loci (QTL) with high resolution. However, spurious associations between markers and QTL can also arise as a consequence of population stratification. Statistical methods that cannot differentiate loci associations due to linkage disequilibria from those caused in other ways can render false-positive results. The transmission-disequilibrium test (TDT) is a robust test for detecting QTL. The TDT exploits within-family associations that are not affected by population stratification. However, some TDTs are formulated in a rigid form, with reduced potential applications. In this study we generalize the TDT using mixed linear models to allow greater statistical flexibility. Allelic effects are estimated with two independent parameters: one exploiting the robust within-family information and the other the potentially biased between-family information. A significant difference between these two parameters can be used as evidence for spurious association. This methodology was then used to test the effects of the fourth melanocortin receptor (MC4R) on production traits in the pig. The new analyses supported the previously reported results; i.e., the studied polymorphism is either causal or in very strong linkage disequilibrium with the causal mutation, and provided no evidence for spurious association.
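For intuition behind the within-family information the abstract builds on, the classic TDT statistic (which the paper generalizes via mixed linear models; the counts below are invented) compares how often heterozygous parents transmit one allele versus the other:

```python
def tdt_statistic(b, c):
    """Classic TDT (McNemar-type) statistic.
    b = number of times heterozygous parents transmitted allele A,
    c = number of times they transmitted the alternative allele.
    Under no linkage/association, transmission is 50:50 and the
    statistic is approximately chi-square with 1 df."""
    return (b - c) ** 2 / (b + c)

# Example: 60 transmissions of A vs 40 of the alternative allele.
stat = tdt_statistic(60, 40)
print(stat)  # 4.0 -- exceeds the 3.84 cutoff for P < 0.05 at 1 df
```

Because both counts come from transmissions within the same families, a skew cannot be produced by population stratification, which is what makes the test robust.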
Abstract:
This paper describes an empirical study to test the proposition that all construction contract bidders are homogeneous, i.e., they can be treated as behaving collectively in an identical (statistical) manner. Examination of previous analyses of bidding data reveals a flaw in the method of standardising bids across contracts of different sizes, and a new procedure is proposed which involves the estimation of a contract datum. Three independent sets of bidding data were then subjected to this procedure and estimates of the necessary distributional parameters obtained. These were then tested against the bidder homogeneity assumption, resulting in the conclusion that the assumption may be appropriate for a three-parameter log-normal shape, but not for scale and location.
Abstract:
We test competing linear and curvilinear predictions between board diversity and performance. The predictions were tested using archival data on 288 organizations listed on the Australian Securities Exchange. The findings provide additional evidence on the business case for board gender diversity and refine the business case for board age diversity.
Abstract:
Objective: To investigate the validity of the Trendelenburg test (TT) using an ultrasound-guided nerve block (UNB) of the superior gluteal nerve and determine whether the reduction in hip abductor muscle (HABD) strength would result in the theorized mechanical compensatory strategies measured during the TT. Design: Quasi-experimental. Setting: Hospital. Participants: Convenience sample of 9 healthy men. Only participants with no current or previous injury to the lumbar spine, pelvis, or lower extremities, and no previous surgeries were included. Interventions: Ultrasound-guided nerve block. Main Outcome Measures: Hip abductor muscle strength (percent body weight [%BW]), contralateral pelvic drop (cPD), change in contralateral pelvic drop (ΔcPD), ipsilateral hip adduction, and ipsilateral trunk sway (TRUNK) measured in degrees. Results: The median age and weight of the participants were 31 years (interquartile range [IQR], 22-32 years) and 73 kg (IQR, 67-81 kg), respectively. An average 52% reduction of HABD strength (z = 2.36, P = 0.02) resulted after the UNB. No differences were found in cPD or ΔcPD (z = 0.01, P = 0.99; z = −0.67, P = 0.49, respectively). Individual changes in biomechanics showed no consistency between participants and nonsystematic changes across the group. One participant demonstrated the mechanical compensations described by Trendelenburg. Conclusions: The TT should not be used as a screening measure for HABD strength in populations demonstrating strength greater than 30% BW but should be reserved for use with populations with marked HABD weakness. Clinical Relevance: This study presents data regarding a critical level of HABD strength required to support the pelvis during the TT.