381 results for clock face drawing test
Abstract:
This work details the results of a face authentication test (FAT2004) (http://www.ee.surrey.ac.uk/banca/icpr2004) held in conjunction with the 17th International Conference on Pattern Recognition. The contest was held on the publicly available BANCA database (http://www.ee.surrey.ac.uk/banca) according to a defined protocol (E. Bailly-Bailliere et al., June 2003). The competition also had a sequestered part in which institutions had to submit their algorithms for independent testing. Results were submitted for 13 different verification algorithms from 10 institutions. In addition, a standard set of face recognition software packages from the Internet (http://www.cs.colostate.edu/evalfacerec) was used to provide a baseline performance measure.
Abstract:
Self-regulation is often promoted as a coping strategy that may allow older drivers to drive safely for longer. Self-regulation depends upon drivers making an accurate assessment of their own ability and having a willingness to practice self-regulatory behaviors to compensate for changes in ability. The current study explored the relationship between older drivers’ cognitive ability, their driving confidence and their use of self-regulation. An additional study aim was to explore the relationship between these factors and older drivers’ interest in driving programs. Seventy Australian drivers aged 65 years and over completed a questionnaire about their driving and a brief screening measure of cognitive ability (an untimed Clock Drawing Test). While all participants reported high levels of confidence regarding their driving ability, and agreed that they would continue driving in the foreseeable future, a notable proportion performed poorly on the Clock Drawing Test. Compared to older drivers who successfully completed the Clock Drawing Test, those who failed the cognitive test were significantly less likely to report driving self-regulation, and showed significantly less interest in being involved in driving programs. Older drivers with declining cognitive abilities may not be self-regulating their driving. This group also appears to be unlikely to self-refer to driving programs.
Abstract:
The number of doctorates being awarded around the world has almost doubled over the last ten years, propelling doctoral education from a small elite enterprise into a large and ever-growing international market. Within the context of increasing numbers of doctoral students, this book examines the new doctorate environment and the challenges it is starting to face. Drawing on research from around the world, the individual authors contribute to a previously under-represented focus of theorising the emerging practices of doctoral education and the shape of change in this arena. Key aspects, expertly discussed by contributors from the UK, USA, Australia, New Zealand, China, South Africa, Sweden and Denmark, include:
- the changing nature of doctoral education
- the need for systematic and principled accounts of doctoral pedagogies
- the importance of disciplinary specificity
- the relationship between pedagogy and knowledge generation
- issues of transdisciplinarity.
Reshaping Doctoral Education provides rich accounts of traditional and more innovative pedagogical practices within a range of doctoral systems in different disciplines, professional fields and geographical locations, providing the reader with a trustworthy and scholarly platform from which to design the doctoral experience. It will prove an essential resource for anyone involved in doctorate studies, whether as students, supervisors, researchers, administrators, teachers or mentors.
Abstract:
This paper presents a new method of eye localisation and face segmentation for use in a face recognition system. By using two near infrared light sources, we have shown that the face can be coarsely segmented, and the eyes can be accurately located, increasing the accuracy of the face localisation and improving the overall speed of the system. The system is able to locate both eyes within 25% of the eye-to-eye distance in over 96% of test cases.
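As an aside on the accuracy criterion quoted above, the following is a minimal Python sketch of how an eye-localisation result can be scored against the 25%-of-eye-to-eye-distance rule; the function name, coordinates and tolerance default are illustrative and are not taken from the paper.

```python
import numpy as np

def eye_localisation_correct(pred_left, pred_right, gt_left, gt_right, tol=0.25):
    """Return True if both predicted eye centres fall within `tol` times the
    ground-truth eye-to-eye distance of their true positions (the criterion
    quoted in the abstract)."""
    gt_left, gt_right = np.asarray(gt_left, float), np.asarray(gt_right, float)
    pred_left, pred_right = np.asarray(pred_left, float), np.asarray(pred_right, float)
    interocular = np.linalg.norm(gt_right - gt_left)   # eye-to-eye distance
    err_left = np.linalg.norm(pred_left - gt_left)     # localisation errors
    err_right = np.linalg.norm(pred_right - gt_right)
    return max(err_left, err_right) <= tol * interocular

# Example: errors of a few pixels against a 60-pixel inter-ocular distance pass the criterion.
print(eye_localisation_correct((105, 100), (162, 101), (100, 100), (160, 100)))  # True
```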
Abstract:
Background: People with cardiac disease and type 2 diabetes have higher hospital readmission rates (22%) compared to those without diabetes (6%). Self-management is an effective approach to achieve better health outcomes; however, there is a lack of specifically designed programs for patients with these dual conditions. This project aims to extend the development and pilot testing of a Cardiac-Diabetes Self-Management Program incorporating user-friendly technologies and the preparation of lay personnel to provide follow-up support. Methods/Design: A randomised controlled trial will be used to explore the feasibility and acceptability of the Cardiac-Diabetes Self-Management Program, incorporating DVD case studies and trained peers who provide follow-up support by telephone and text messaging. A total of 30 cardiac patients with type 2 diabetes will be randomised either to the usual care group or to the intervention group. Participants in the intervention group will receive the Cardiac-Diabetes Self-Management Program in addition to their usual care. The intervention consists of three face-to-face sessions as well as telephone and text-messaging follow-up. The face-to-face sessions will be provided by a trained Research Nurse, commencing in the Coronary Care Unit and continuing after discharge by trained peers. Peers will follow up patients for up to one month after discharge using text messages and telephone support. Data collection will be conducted at baseline (Time 1) and at one month (Time 2). The primary outcomes include self-efficacy, self-care behaviour and knowledge, measured by well-established, reliable tools. Discussion: This paper presents the study protocol of a randomised controlled trial to pilot test a Cardiac-Diabetes Self-Management Program and the feasibility of incorporating peers in the follow-up. Results of this study will provide direction for using such a mode of delivery for a self-management program for patients with both a cardiac condition and diabetes. Furthermore, it will provide valuable information for refinement of the intervention program.
Abstract:
Given global demand for new infrastructure, governments face substantial challenges in funding new infrastructure and simultaneously delivering Value for Money (VfM). The paper begins with an update on a key development in a new early/first-order procurement decision-making model that deploys production cost/benefit theory and theories concerning transaction costs from the New Institutional Economics, in order to identify a procurement mode that is likely to deliver the best ratio of production costs and transaction costs to production benefits, and therefore deliver superior VfM relative to alternative procurement modes. In doing so, the new procurement model is also able to address the uncertainty concerning the relative merits of Public-Private Partnerships (PPP) and non-PPP procurement approaches. The main aim of the paper is to develop competition as a dependent variable/proxy for VfM and a hypothesis (overarching proposition), as well as to develop a research method to test the new procurement model. Competition reflects both production costs and benefits (absolute level of competition) and transaction costs (level of realised competition) and is a key proxy for VfM. Using competition as a proxy for VfM, the overarching proposition is given as: when the actual procurement mode matches the predicted (theoretical) procurement mode (informed by the new procurement model), then actual competition is expected to match potential competition (based on actual capacity). To collect data to test this proposition, the research method developed in this paper combines a survey and case study approach. More specifically, data collection instruments for the surveys to collect data on actual procurement, actual competition and potential competition are outlined. Finally, plans for analysing this survey data are briefly mentioned, along with the planned use of analytical pattern matching in deploying the new procurement model and in developing the predicted (theoretical) procurement mode.
Abstract:
Queensland's new State Planning Policy for Coastal Protection, released in March and approved in April 2011 as part of the Queensland Coastal Plan, stipulates that local governments prepare and implement adaptation strategies for built-up areas projected to be subject to coastal hazards between the present day and 2100. Urban localities within the delineated coastal high hazard zone (as determined by models incorporating a 0.8 metre rise in sea level and a 10% increase in maximum cyclone activity) will be required to re-evaluate their plans to accommodate growth, revising land use plans to minimise the impacts of anticipated erosion and flooding on developed areas and infrastructure. While implementation of such strategies would aid in the avoidance or minimisation of risk exposure, communities are likely to face significant challenges in such implementation, especially as development in Queensland is so intensely focussed upon its coasts, with these new policies directing development away from highly desirable waterfront land. This paper examines models of planning theory to understand how we plan when faced with technically complex problems, towards the formulation of a framework for evaluating and improving practice.
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and also timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application for this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements. Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results.
Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond. The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below its target of 1 ms.
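For context on the clock-offset estimation the thesis builds on, below is a minimal Python sketch of the two-way message exchange used by IEEE-1588-style protocols to estimate a slave clock's offset from a master; the timestamp names and example values are illustrative and are not taken from the BabelFuse firmware.

```python
def ptp_offset_and_delay(t1, t2, t3, t4):
    """Estimate slave clock offset and one-way path delay from a single
    IEEE-1588-style exchange.

    t1: master sends Sync          (master clock)
    t2: slave receives Sync        (slave clock)
    t3: slave sends Delay_Req      (slave clock)
    t4: master receives Delay_Req  (master clock)

    Assumes a symmetric path; any asymmetry appears directly as offset error.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Example: a slave running 40 microseconds ahead over a 120 microsecond path.
offset, delay = ptp_offset_and_delay(t1=0.0, t2=160e-6, t3=300e-6, t4=380e-6)
print(offset, delay)  # ~4e-05 s offset, ~1.2e-04 s delay
```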
Abstract:
This paper presents an efficient face detection method suitable for real-time surveillance applications. Improved efficiency is achieved by constraining the search window of an AdaBoost face detector to pre-selected regions. Firstly, the proposed method takes a sparse grid of sample pixels from the image to reduce whole-image scan time. A fusion of foreground segmentation and skin colour segmentation is then used to select candidate face regions. Finally, a classifier-based face detector is applied only to selected regions to verify the presence of a face (the Viola-Jones detector is used in this paper). The proposed system is evaluated using 640 x 480 pixel test images and compared with other relevant methods. Experimental results show that the proposed method reduces the detection time to 42 ms, where the Viola-Jones detector alone requires 565 ms (on a desktop processor). This improvement makes the face detector suitable for real-time applications. Furthermore, the proposed method requires 50% of the computation time of the best competing method, while reducing the false positive rate by 3.2% and maintaining the same hit rate.
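The region-constrained detection idea described above can be sketched roughly with OpenCV as follows. This is an illustrative approximation, not the authors' implementation: the cascade file, skin-colour thresholds and minimum region size are assumptions, and the sparse pixel-grid sampling step is omitted.

```python
import cv2

# Assumed components: a standard OpenCV Haar cascade and a MOG2 background model.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
bg_subtractor = cv2.createBackgroundSubtractorMOG2()

def detect_faces_constrained(frame):
    """Run the Viola-Jones detector only inside candidate regions selected by
    fusing foreground and skin-colour masks (a rough sketch of the idea in the
    abstract, not the published method)."""
    fg_mask = bg_subtractor.apply(frame)                              # foreground segmentation
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    skin_mask = cv2.inRange(ycrcb, (0, 135, 85), (255, 180, 135))     # illustrative skin range
    candidate = cv2.bitwise_and(fg_mask, skin_mask)                   # fuse the two cues

    faces = []
    contours, _ = cv2.findContours(candidate, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 400:                                               # skip tiny regions
            continue
        roi = cv2.cvtColor(frame[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(roi, 1.1, 3):
            faces.append((x + fx, y + fy, fw, fh))                    # map back to frame coords
    return faces
```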
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
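One possible reading of the above pipeline, sketched in Python: code the test sample over the training dictionary, take class-wise reconstruction residuals, then normalise the raw residual against a polynomial fit over the cohort residuals. The solver choice, the polynomial-fit formulation and all parameter values below are assumptions; the cited paper's exact normalisation may differ.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_class_residuals(y, D, labels):
    """Sparse-representation residuals: code test sample y over dictionary D
    (columns = training samples) and measure class-wise reconstruction error."""
    labels = np.asarray(labels)
    lasso = Lasso(alpha=0.01, fit_intercept=False, max_iter=10000)
    x = lasso.fit(D, y).coef_                           # sparse coding via l1 regression
    residuals = {}
    for c in np.unique(labels):
        xc = np.where(labels == c, x, 0.0)              # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - D @ xc)
    return residuals

def cohort_normalise(raw_residual, cohort_residuals, degree=2):
    """One plausible reading of polynomial-regression cohort normalisation:
    fit a degree-`degree` polynomial to the sorted cohort residual profile and
    express the raw residual relative to the smoothed span of that profile."""
    cohort = np.sort(np.asarray(cohort_residuals, dtype=float))
    ranks = np.linspace(0.0, 1.0, len(cohort))
    coeffs = np.polyfit(ranks, cohort, degree)          # smoothed cohort profile
    lo, hi = np.polyval(coeffs, 0.0), np.polyval(coeffs, 1.0)
    return (raw_residual - lo) / (hi - lo + 1e-12)      # 0 ≈ best cohort, 1 ≈ worst cohort
```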
Abstract:
In recent years face recognition systems have been applied in various useful applications, such as surveillance, access control, criminal investigations, law enforcement, and others. However, face biometric systems can be highly vulnerable to spoofing attacks, where an impostor tries to bypass the face recognition system using a photo or video sequence. In this paper, a novel liveness detection method based on the 3D structure of the face is proposed. By processing the 3D curvature of the acquired data, the proposed approach allows a biometric system to distinguish a real face from a photo, increasing the overall performance of the system and reducing its vulnerability. In order to test the real capability of the methodology, a 3D face database has been collected simulating spoofing attacks, i.e. using photographs instead of real faces. The experimental results show the effectiveness of the proposed approach.
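As a rough illustration of the curvature cue described above (and only that; this is not the paper's algorithm): a planar photograph presented to a 3D sensor shows almost no second-order depth variation, whereas a real face does. A minimal Python heuristic along those lines, with an assumed depth-map input and an uncalibrated threshold, is sketched below.

```python
import numpy as np

def curvature_liveness_score(depth_map):
    """Crude proxy for the 3D-curvature cue: a planar photo has almost no
    second-order depth variation, while a real face does. Illustrative only."""
    z = np.asarray(depth_map, dtype=float)
    zxx = np.gradient(np.gradient(z, axis=1), axis=1)   # second derivatives of depth
    zyy = np.gradient(np.gradient(z, axis=0), axis=0)
    return float(np.mean(np.abs(zxx) + np.abs(zyy)))    # higher = more curved surface

def is_live(depth_map, threshold=0.05):
    """The threshold is an assumption and would need calibration on real 3D data."""
    return curvature_liveness_score(depth_map) > threshold
```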
Abstract:
In this 1972 documentary, The Computer Generation, by John Musilli, artist Stan Vanderbeek talks about the possibility of computers as an artist's tool. My aim in drawing on this documentary is to compare the current state of transmedia with previous significant changes in media history, and to illustrate how diverse the current state of transmedia is.