956 results for ACCURATE
Abstract:
Most current computer systems authorise the user only at the start of a session and do not detect whether the current user is still the initially authorised user, a substitute user, or an intruder pretending to be a valid user. A system that continuously and non-intrusively verifies the identity of the user throughout the session is therefore needed. Such a system is called a continuous authentication system (CAS). Researchers have applied several approaches to CAS, and most of these techniques are based on biometrics. These continuous biometric authentication systems (CBAS) are driven by user traits and characteristics. One of the main biometric modalities is keystroke dynamics, which has been widely tried and accepted for providing continuous user authentication. Keystroke dynamics is appealing for many reasons. First, it is less obtrusive, since users will be typing on the computer keyboard anyway. Second, it does not require extra hardware. Finally, keystroke dynamics remain available after the authentication step at the start of the computer session. Currently, research on CBAS with keystroke dynamics is insufficient. To date, most existing schemes ignore the continuous authentication scenarios, which may limit their practicality in different real-world applications. Also, contemporary CBAS with keystroke dynamics approaches use character sequences as features representative of user typing behavior, but their feature selection criteria do not guarantee features with strong statistical significance, which may lead to a less accurate statistical user representation. Furthermore, their selected features do not inherently incorporate user typing behavior. Finally, existing CBAS based on keystroke dynamics typically depend on pre-defined user typing models for continuous authentication.
This dependency restricts the systems to authenticating only known users whose typing samples have been modelled. This research addresses these limitations of existing CBAS schemes by developing a generic model to better identify and understand the characteristics and requirements of each type of CBAS and continuous authentication scenario. The research also proposes four statistical feature selection techniques that yield features with the highest statistical significance and encompass different user typing behaviors, representing user typing patterns effectively. Finally, the research proposes a user-independent threshold approach that can authenticate a user accurately without needing any pre-defined user typing model a priori, and enhances the technique to detect an impostor or intruder who may take over at any point during the computer session.
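The digraph-latency features underlying keystroke dynamics can be illustrated with a minimal sketch. The key-press timestamps and the threshold value below are hypothetical; the thesis's actual feature selection and user-independent threshold approach are more sophisticated than this deviation score.

```python
# Sketch: digraph (key-pair) latency features for keystroke dynamics,
# compared against an illustrative user-independent threshold. All
# timings and the threshold value here are hypothetical.

def digraph_latencies(key_down_times):
    """Latency between consecutive key-press events, in milliseconds."""
    return [t2 - t1 for t1, t2 in zip(key_down_times, key_down_times[1:])]

def anomaly_score(session, reference):
    """Mean absolute deviation of a session's latencies from a reference profile."""
    n = min(len(session), len(reference))
    return sum(abs(s - r) for s, r in zip(session, reference)) / n

# Hypothetical key-press timestamps (ms) for the same phrase typed twice.
reference_profile = digraph_latencies([0, 110, 230, 340, 460])
current_session   = digraph_latencies([0, 115, 225, 345, 455])

THRESHOLD = 20.0  # illustrative user-independent cut-off (ms)
score = anomaly_score(current_session, reference_profile)
is_genuine = score <= THRESHOLD
```

A real system would aggregate many such scores over a sliding window before deciding that the session has been taken over by an intruder.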
Abstract:
Motor unit number estimation (MUNE) is a method which aims to provide a quantitative indicator of the progression of diseases that lead to loss of motor units, such as motor neurone disease. However, the development of a reliable, repeatable and fast real-time MUNE method has hitherto proved elusive. Ridall et al. (2007) implement a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm to produce a posterior distribution for the number of motor units, using a Bayesian hierarchical model that takes into account biological information about motor unit activation. However, we find that the approach can be unreliable for some datasets, since it can suffer from poor cross-dimensional mixing. Here we focus on improved inference by marginalising over latent variables to create the likelihood. In particular, we explore how this can improve the RJMCMC mixing and investigate alternative approaches that utilise the likelihood (e.g. DIC (Spiegelhalter et al., 2002)). For this model the marginalisation is over latent variables which, for a larger number of motor units, is an intractable summation over all combinations of a set of latent binary variables whose joint sample space increases exponentially with the number of motor units. We provide a tractable and accurate approximation for this quantity and also investigate simulation approaches incorporated into RJMCMC using the results of Andrieu and Roberts (2009).
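The exponential summation described above can be made concrete with a brute-force sketch. The observation model below (Gaussian noise, a fixed firing probability, made-up unit amplitudes) is purely illustrative and much simpler than the hierarchical model of Ridall et al.; its point is that the marginal likelihood sums over all 2^N firing patterns.

```python
# Sketch: brute-force marginalisation over binary motor-unit firing
# indicators. With N units the sum has 2**N terms, which is exactly the
# exponential blow-up the thesis must approximate. The model details
# (Gaussian noise, fixed firing probability) are illustrative only.
import itertools
import math

def marginal_likelihood(y, unit_amplitudes, p_fire=0.5, sigma=0.1):
    """p(y) = sum over all firing patterns z of p(y | z) p(z)."""
    n = len(unit_amplitudes)
    total = 0.0
    for z in itertools.product([0, 1], repeat=n):   # enumerates 2**n patterns
        mean = sum(a for a, zi in zip(unit_amplitudes, z) if zi)
        prior = (p_fire ** sum(z)) * ((1 - p_fire) ** (n - sum(z)))
        lik = math.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
        total += prior * lik
    return total

p = marginal_likelihood(y=0.8, unit_amplitudes=[0.3, 0.5, 0.7])
```

For three units the loop visits 8 patterns; for 30 units it would visit over a billion, which is why a tractable approximation to this sum is needed.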
Abstract:
Acepromazine (ACP) is a useful therapeutic drug, but is a prohibited substance in competition horses. The illicit use of ACP is difficult to detect due to its rapid metabolism, so this study investigated the ACP metabolite 2-(1-hydroxyethyl)promazine sulphoxide (HEPS) as a potential forensic marker. Acepromazine maleate, equivalent to 30 mg of ACP, was given IV to 12 racing-bred geldings. Blood and urine were collected for 7 days post-administration and analysed for ACP and HEPS by liquid chromatography–mass spectrometry (LC–MS). Acepromazine was quantifiable in plasma for up to 3 h with little reaching the urine unmodified. Similar to previous studies, there was wide variation in the distribution and metabolism of ACP. The metabolite HEPS was quantifiable for up to 24 h in plasma and 144 h in urine. The metabolism of ACP to HEPS was fast and erratic, so the early phase of the HEPS emergence could not be modelled directly, but was assumed to be similar to the rate of disappearance of ACP. However, the relationship between peak plasma HEPS and the y-intercept of the kinetic model was strong (P = 0.001, r2 = 0.72), allowing accurate determination of the formation pharmacokinetics of HEPS. Due to its rapid metabolism, testing of forensic samples for the parent drug is redundant with IV administration. The relatively long half-life of HEPS and its stable behaviour beyond the initial phase make it a valuable indicator of ACP use, and by determining the urine-to-plasma concentration ratios for HEPS, the approximate dose of ACP administration may be estimated.
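Elimination kinetics of the kind estimated above are conventionally fitted by log-linear regression on a mono-exponential model C(t) = C0·exp(−kt). The sketch below uses hypothetical concentration values, not data from this study.

```python
# Sketch: estimating an elimination half-life from plasma concentrations
# by log-linear regression on the mono-exponential model
# C(t) = C0 * exp(-k * t). Concentration values are hypothetical,
# not data from the study.
import math

times = [2, 4, 8, 12, 24]            # hours post-dose
conc  = [8.0, 6.3, 4.0, 2.5, 0.63]   # ng/mL (illustrative)

logs = [math.log(c) for c in conc]
n = len(times)
mean_t = sum(times) / n
mean_l = sum(logs) / n
# Least-squares slope of ln C against t gives -k.
k = -sum((t - mean_t) * (l - mean_l) for t, l in zip(times, logs)) / \
     sum((t - mean_t) ** 2 for t in times)
half_life = math.log(2) / k          # t1/2 = ln 2 / k, in hours
```

With real data, only the terminal (post-distribution) phase would be included in the regression, which is why the erratic early phase of HEPS emergence had to be handled separately in the study.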
Abstract:
A fundamental prerequisite of population health research is the ability to establish an accurate denominator, which in turn requires that every individual in the study population is counted. However, this seemingly simple principle has become a point of conflict between researchers whose aim is to produce evidence of disparities in population health outcomes and governments whose policies promote (intentionally or not) the inequalities that are the underlying causes of health disparities. Research into the health of asylum seekers is a case in point. There is a growing body of evidence documenting the adverse effects of recent changes in asylum-seeking legislation, including mandatory detention. However, much of this evidence has been dismissed by some governments as unsound, biased and unscientific because, it is argued, it is derived from small samples or from case studies. Yet it is the policies of governments that are the key barrier to the conduct of rigorous population health research on asylum seekers. In this paper, the authors discuss the challenges of counting asylum seekers and the limitations of data reported in some industrialized countries. They argue that the lack of accurate statistical data on asylum seekers has been an effective neo-conservative strategy for erasing the health inequalities in this vulnerable population, indeed a strategy that renders this population invisible. They describe some alternative strategies that researchers may use to obtain denominator data on hard-to-reach populations such as asylum seekers.
Abstract:
Introduction: Delirium is a serious issue associated with high morbidity and mortality in older hospitalised people. Early recognition enables diagnosis and treatment of the underlying cause/s, which can lead to improved patient outcomes. However, research shows that nurse knowledge and accurate recognition of delirium are poor, and a lack of education appears to be a key issue underlying this problem. Thus, the purpose of this randomised controlled trial (RCT) was to evaluate, in a sample of registered nurses, the usability and effectiveness of a web-based learning site, designed using constructivist learning principles, to improve acute care nurse knowledge and recognition of delirium. Prior to undertaking the RCT, preliminary phases were completed, involving validation of vignettes, video-taping of five of the validated vignettes, website development and pilot testing. Methods: The cluster RCT involved consenting registered nurse participants (N = 175) from twelve clinical areas within three acute health care facilities in Queensland, Australia. Data were collected through a variety of measures and instruments. Primary outcomes were improved ability of nurses to recognise delirium, using written validated vignettes, and improved knowledge of delirium, using a delirium knowledge questionnaire. The secondary outcomes were aimed at determining nurse satisfaction and usability of the website. Primary outcome measures were taken at baseline (T1), directly after the intervention (T2) and two months later (T3). The secondary outcomes were measured at T2 by participants in the intervention group. Following baseline data collection, the remaining participants were assigned to either the intervention (n=75) or control (n=72) group. Participants in the intervention group were given access to the learning intervention, while the control group continued to work in their clinical area and at that time did not receive access to the learning intervention.
Data from the primary outcome measures were examined in mixed model analyses. Results: Overall, the effects of the online learning intervention over time, comparing the intervention group and the control group, were positive. The intervention group's scores were higher and the change over time was statistically significant [T3 vs. T1 (t=3.78, p<0.001) and T2 vs. T1 baseline (t=5.83, p<0.001)]. Statistically significant improvements were also seen for delirium recognition when comparing T2 and T1 results (t=2.58, p=0.012) between the control and intervention groups, but not for changes in delirium recognition scores between the two groups from T3 and T1 (t=1.80, p=0.074). The majority of the participants rated the website highly on the visual, functional and content elements. Additionally, nearly 80% of the participants liked the overall website features, and there were self-reported improvements in delirium knowledge and recognition by the registered nurses in the intervention group. Discussion: Findings from this study support the concept that online learning is an effective and satisfying method of information delivery. Embedded within a constructivist learning environment, the site produced a high level of satisfaction and usability for the registered nurse end-users. Additionally, the results showed that the website significantly improved delirium knowledge and recognition scores, and the improvement in delirium knowledge was retained at a two-month follow-up. Given the strong effect of the intervention, the online delirium intervention should be utilised as a way of providing information to registered nurses. It is envisaged that this knowledge would lead to improved recognition of delirium as well as improvement in patient outcomes; however, translation of this knowledge attainment into clinical practice was outside the scope of this study.
A critical next step is demonstrating the effect of the intervention in changing clinical behaviour, and improving patient health outcomes.
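Group-by-time comparisons of the kind reported above can be illustrated with a simplified change-score analysis. The study itself used mixed models; the scores below are hypothetical, and the two-sample t statistic is only a sketch of the comparison's logic.

```python
# Sketch: comparing knowledge change scores (post minus baseline) between
# intervention and control groups with Welch's two-sample t statistic.
# The scores are hypothetical; the study used mixed model analyses.
import math

def change_scores(baseline, followup):
    return [f - b for b, f in zip(baseline, followup)]

def two_sample_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical knowledge scores at baseline (T1) and post-intervention (T2).
intervention = change_scores([10, 12, 11, 9, 13], [15, 16, 14, 13, 17])
control      = change_scores([11, 10, 12, 9, 13], [11, 11, 12, 10, 13])
t_stat = two_sample_t(intervention, control)
```

A mixed model additionally accounts for clustering of nurses within wards and for repeated measures per participant, which a simple change-score t-test ignores.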
Abstract:
A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM), where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment due to the use of non-deterministic, readily-available hardware (such as 802.11-based wireless) and inaccurate clock synchronisation protocols (such as the Network Time Protocol (NTP)). As a result, the synchronisation of the clocks between robots can be out by tens to hundreds of milliseconds, making correlation of data difficult and preventing the units from performing synchronised actions such as triggering cameras or intricate swarm manoeuvres. In this thesis, a complete data fusion unit is designed, implemented and tested. The unit, named BabelFuse, is able to accept sensor data from a number of low-speed communication buses (such as RS232, RS485 and CAN Bus) and to timestamp events that occur on General Purpose Input/Output (GPIO) pins, referencing a submillisecond-accurate, wirelessly-distributed "global" clock signal. In addition to its timestamping capabilities, it can also be used to trigger an attached camera at a predefined start time and frame rate. This functionality enables the creation of a wirelessly-synchronised distributed image acquisition system over a large geographic area; a real-world application of this functionality is the creation of a platform to facilitate wirelessly-distributed 3D stereoscopic vision. A 'best-practice' design methodology is adopted within the project to ensure the final system operates according to its requirements.
Initially, requirements are generated, from which a high-level architecture is distilled. This architecture is then converted into a hardware specification and low-level design, which is then manufactured. The manufactured hardware is then verified to ensure it operates as designed, and firmware and Linux Operating System (OS) drivers are written to provide the features and connectivity required of the system. Finally, integration testing is performed to ensure the unit functions as per its requirements. The BabelFuse system comprises a single Grand Master unit, which is responsible for maintaining the absolute value of the "global" clock. Slave nodes then determine their local clock offset from that of the Grand Master via synchronisation events which occur multiple times per second. The mechanism used for synchronising the clocks between the boards wirelessly makes use of specific hardware and a firmware protocol based on elements of the IEEE-1588 Precision Time Protocol (PTP). With the key requirement of the system being submillisecond-accurate clock synchronisation (as a basis for timestamping and camera triggering), automated testing is carried out to monitor the offsets between each Slave and the Grand Master over time. A common strobe pulse is also sent to each unit for timestamping; the correlation between the timestamps of the different units is used to validate the clock offset results. Analysis of the automated test results shows that the BabelFuse units are almost three orders of magnitude more accurate than their requirement; the clocks of the Slave and Grand Master units do not differ by more than three microseconds over a running time of six hours, and the mean clock offset of Slaves to the Grand Master is less than one microsecond. The common strobe pulse used to verify the clock offset data yields a positive result, with a maximum variation between units of less than two microseconds and a mean value of less than one microsecond.
The camera triggering functionality is verified by connecting the trigger pulse output of each board to a four-channel digital oscilloscope and setting each unit to output a 100 Hz periodic pulse with a common start time. The resulting waveform shows a maximum variation between the rising edges of the pulses of approximately 39 µs, well below the target of 1 ms.
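The offset estimation at the heart of an IEEE-1588-style synchronisation exchange can be sketched from the four timestamps of a sync/delay-request round trip. The timestamp values below are hypothetical; the real system adds hardware timestamping and filtering of repeated measurements.

```python
# Sketch: PTP-style clock-offset estimation from a two-way timestamp
# exchange, the mechanism that protocols like IEEE-1588 build on.
# Timestamps (microseconds) are hypothetical.

def ptp_offset(t1, t2, t3, t4):
    """
    t1: master sends sync          (master clock)
    t2: slave receives sync        (slave clock)
    t3: slave sends delay request  (slave clock)
    t4: master receives delay req. (master clock)
    Assumes a symmetric path delay in each direction.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2    # one-way path delay
    return offset, delay

# Here the slave clock runs 150 us ahead and the one-way delay is 50 us.
offset, delay = ptp_offset(t1=0, t2=200, t3=1000, t4=900)
```

The symmetric-delay assumption is why non-deterministic links such as 802.11 degrade NTP-style synchronisation, and why dedicated hardware timestamping is used to push the residual offset below a microsecond.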
Abstract:
Objective: Food insecurity may be associated with a number of adverse health and social outcomes; however, our knowledge of its public health significance in Australia has been limited by the use of a single-item measure in the Australian National Health Surveys (NHS) and, more recently, the exclusion of food security items from these surveys. The current study compares prevalence estimates of food insecurity in disadvantaged urban areas of Brisbane using the one-item NHS measure with three adaptations of the United States Department of Agriculture Food Security Survey Module (USDA-FSSM). Design: Data were collected by postal survey (n = 505, 53% response). Food security status was ascertained by the measure used in the NHS and by the 6-, 10- and 18-item versions of the USDA-FSSM. Demographic characteristics of the sample, prevalence estimates of food insecurity and the different levels of food insecurity estimated by each tool were determined. Setting: Disadvantaged suburbs of Brisbane city, Australia, 2009. Subjects: Individuals aged ≥ 18 years. Results: Food insecurity was prevalent in socioeconomically disadvantaged urban areas, estimated at 19.5% using the single-item NHS measure. This was significantly less than the 24.6% (P < 0.01), 22.0% (P = 0.01) and 21.3% (P = 0.03) identified using the 18-item, 10-item and 6-item versions of the USDA-FSSM, respectively. The proportion of the sample reporting more severe levels of food insecurity was 10.7%, 10% and 8.6% for the 18-, 10- and 6-item USDA measures respectively; this degree of food insecurity could not be ascertained using the NHS measure. Conclusions: The measure of food insecurity employed in the NHS may underestimate its prevalence and public health significance. Future monitoring and surveillance efforts should seek to employ a more accurate measure.
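The multi-item USDA-FSSM scoring contrasted above with the single-item NHS measure works by counting affirmative responses against published cut-points. The responses below are hypothetical; the cut-points follow the USDA 6-item short form, though the exact category labels vary between module versions.

```python
# Sketch: scoring the 6-item USDA food-security module by counting
# affirmative responses. Category cut-points follow the published USDA
# 6-item scheme (0-1 secure/marginal, 2-4 low, 5-6 very low food
# security); the responses themselves are hypothetical.

def classify_6item(affirmatives):
    """Map a raw affirmative count (0-6) to a food-security category."""
    if affirmatives <= 1:
        return "high or marginal food security"
    if affirmatives <= 4:
        return "low food security"
    return "very low food security"

responses = [True, True, False, True, False, False]   # illustrative household
status = classify_6item(sum(responses))
```

Because the graded count separates "low" from "very low" food security, the multi-item modules can report severity levels that a single yes/no item cannot.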
Abstract:
There is a growing interest in the use of megavoltage cone-beam computed tomography (MV CBCT) data for radiotherapy treatment planning. To calculate accurate dose distributions, knowledge of the electron density (ED) of the tissues being irradiated is required. In the case of MV CBCT, it is necessary to determine a calibration relating CT number to ED, using the photon beam produced for MV CBCT. A number of different parameters can affect this calibration. This study was undertaken on the Siemens MV CBCT system, MVision, to evaluate the effect of the following parameters on the reconstructed CT pixel value to ED calibration: the number of monitor units (MUs) used (5, 8, 15 and 60 MUs), the image reconstruction filter (head and neck, and pelvis), the reconstruction matrix size (256 by 256 and 512 by 512), and the addition of extra solid water surrounding the ED phantom. A Gammex electron density CT phantom containing EDs from 0.292 to 1.707 was imaged under each of these conditions. A linear relationship between MV CBCT pixel value and ED was demonstrated for all MU settings and over the full range of EDs. Changes in MU number did not dramatically alter the MV CBCT ED calibration. The use of different reconstruction filters was found to affect the MV CBCT ED calibration, as was the addition of solid water surrounding the phantom. Dose distributions from treatment plans calculated with simulated image data from a 15 MU head-and-neck reconstruction filter MV CBCT image showed small and clinically insignificant differences whether the ED calibration curve was derived from the matching image parameters or from a 15 MU pelvis reconstruction filter. Thus, the use of a single MV CBCT ED calibration curve is unlikely to result in any clinical differences. However, to ensure minimal uncertainties in dose reporting, MV CBCT ED calibration measurements could be carried out using parameter-specific calibration measurements.
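The pixel-value-to-ED calibration described above is a straight-line fit. A minimal sketch with a least-squares line, using hypothetical pixel values spanning the Gammex phantom's stated ED range:

```python
# Sketch: fitting a linear pixel-value-to-electron-density calibration
# by least squares. Pixel values are hypothetical; the EDs span the
# Gammex phantom's 0.292-1.707 range mentioned above.

def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

pixel_values = [120.0, 400.0, 560.0, 700.0, 830.0]    # illustrative
electron_density = [0.292, 0.96, 1.28, 1.53, 1.707]   # relative to water

slope, intercept = fit_line(pixel_values, electron_density)

def ed_of(pixel):
    """Apply the fitted calibration curve to a reconstructed pixel value."""
    return slope * pixel + intercept
```

In practice one such curve would be fitted per acquisition/reconstruction parameter set, which is the choice the study weighs against using a single calibration curve.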
Abstract:
A simulation-based training system for surgical wound debridement was developed, comprising a multimedia introduction, a surgical simulator (tutorial component), and an assessment component. The simulator includes two PCs, a haptic device, and a mirrored display. Debridement is performed on a virtual leg model with a shallow laceration wound superimposed. Trainees are instructed to remove debris with forceps, scrub with a brush, and rinse with saline solution to maintain sterility. Research and development issues currently under investigation include tissue deformation models using mass-spring systems and finite element methods; tissue cutting using a high-resolution volumetric mesh and dynamic topology; and accurate collision detection, cutting, and soft-body haptic rendering for two devices within the same haptic space.
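The mass-spring deformation models named above can be sketched with one explicit-Euler step of a 1-D chain of point masses joined by springs. The stiffness, damping and mass values are illustrative, not the simulator's parameters; a real tissue model works on a 3-D mesh at haptic update rates.

```python
# Sketch: explicit-Euler time stepping of a 1-D mass-spring chain, the
# simplest form of the tissue deformation model named above. All
# physical constants here are illustrative.

def step(positions, velocities, rest_len, k=50.0, damping=0.5, mass=0.1, dt=0.001):
    """Advance a chain of point masses joined by springs by one time step."""
    forces = [0.0] * len(positions)
    for i in range(len(positions) - 1):
        stretch = (positions[i + 1] - positions[i]) - rest_len
        f = k * stretch                  # Hooke's law along the chain
        forces[i] += f                   # spring pulls/pushes both ends
        forces[i + 1] -= f
    new_v = [(v + (f / mass) * dt) * (1 - damping * dt)
             for v, f in zip(velocities, forces)]
    new_p = [p + v * dt for p, v in zip(positions, new_v)]
    return new_p, new_v

# Three nodes with the middle one displaced; the springs pull it back.
pos, vel = [0.0, 0.6, 1.0], [0.0, 0.0, 0.0]
for _ in range(100):
    pos, vel = step(pos, vel, rest_len=0.5)
```

Finite element methods, the alternative mentioned in the abstract, trade this simplicity for physically consistent strain behaviour at a higher computational cost.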
Abstract:
The efficacy of existing articular cartilage defect repair strategies is limited. Native cartilage tissue forms via a series of exquisitely orchestrated morphogenic events spanning from gestation into early childhood. However, defect repair must be achieved in a non-ideal microenvironment over an accelerated time-frame compatible with the normal life of an adult patient. Scaffolds formed from decellularized tissues are commonly utilized to enable the rapid and accurate repair of tissues such as skin, bladder and heart valves. The intact extracellular matrix remaining after decellularization of these relatively low-matrix-density tissues is able to rapidly and accurately guide host cell repopulation. By contrast, the extraordinary density of cartilage matrix limits both the initial decellularization of donor material and its subsequent repopulation. Repopulation of donor cartilage matrix is generally limited to the periphery, with repopulation of lacunae deeper within the matrix mass being highly inefficient. Herein, we review the relevant literature and discuss the trend toward the use of decellularized donor cartilage matrix of microscopic dimensions. We show that 2-µm microparticles of donor matrix rapidly integrate with articular chondrocytes, forming robust cartilage-like composites with enhanced chondrogenic gene expression. Strategies for the clinical application of donor matrix microparticles in cartilage defect repair are discussed.
Abstract:
Several investigators have recently proposed classification schemes for stratospheric dust particles [1-3]. In addition, extraterrestrial materials within stratospheric dust collections may be used as a measure of micrometeorite flux [4]. However, little attention has been given to the problems of the stratospheric collection as a whole. Some of these problems include: (a) determination of accurate particle abundances at a given point in time; (b) the extent of bias in the particle selection process; (c) the variation of particle shape and chemistry with size; (d) the efficacy of proposed classification schemes; and (e) an accurate determination of physical parameters associated with the particle collection process (e.g. minimum particle size collected, collection efficiency, variation of particle density with time). We present here preliminary results from SEM, EDS and, where appropriate, XRD analysis of all of the particles from a collection surface which sampled the stratosphere between 18 and 20 km in altitude. Determinations of particle densities from this study may then be used to refine models of the behavior of particles in the stratosphere [5].
Abstract:
Background: Nurses routinely use pulse oximetry (SpO2) monitoring equipment in acute care. Interpretation of the reading involves physical assessment and awareness of parameters including temperature, haemoglobin, and peripheral perfusion. However, there is little information on whether these clinical signs are routinely measured or used in pulse oximetry interpretation by nurses. Aim: The aim of this study was to review current practice in SpO2 measurement and the associated documentation of the physiological data required for accurate interpretation of the readings. The study reviewed the documentation practices relevant to SpO2 in five medical wards of a tertiary-level metropolitan hospital. Method: A prospective casenote audit was conducted on random days over a three-month period. The audit tool had been validated in a previous study. Results: One hundred and seventy-seven episodes of oxygen saturation monitoring were reviewed. Our study revealed a lack of parameters to validate the SpO2 readings. Only 10% of the casenotes reviewed had sufficient physiological data to meaningfully interpret the SpO2 reading, and only 38% had an arterial blood gas as a comparator. Nursing notes rarely documented clinical interpretation of the results. Conclusion: The audits suggest that medical and nursing staff are not interpreting the pulse oximetry results in context and that the majority of the results were normal with no clinical indication for performing this observation. This reduces the usefulness of such readings and questions the appropriateness of performing "routine" SpO2 monitoring in this context.
Abstract:
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterisation of detection and false alarm sensitivity is essential for understanding performance trade-offs, and system designers can exploit this characterisation to help achieve a desired balance in system performance. In this paper we experimentally evaluate a sky-region, image-based, aircraft collision detection system that is based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for the detection of potential collision threats against a ground clutter background.) A novel collection methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue sky conditions, our proposed system achieved detection ranges greater than 1540 m in 3 flight test cases with no false alarm events in 14.14 hours of non-target data (under cloudy conditions, the system achieved detection ranges greater than 1170 m in 4 flight test cases with no false alarm events in 6.63 hours of non-target data). Importantly, this paper is the first documented presentation of detection range versus false alarm curves generated from airborne target and non-target image data.
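A common choice for the morphological stage of such detection pipelines is a close-minus-open (CMO) filter, which highlights small, dim targets against a smooth sky background. The 1-D sketch below runs on a hypothetical image row; whether the paper's system uses exactly this operator is not stated in the abstract.

```python
# Sketch: a 1-D close-minus-open (CMO) morphological filter for small
# target detection. Grey-scale erosion/dilation over a sliding window;
# the signal is a hypothetical image row with a one-pixel target on a
# smooth sky gradient.

def erode(signal, size=3):
    r = size // 2
    return [min(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def dilate(signal, size=3):
    r = size // 2
    return [max(signal[max(0, i - r):i + r + 1]) for i in range(len(signal))]

def cmo(signal, size=3):
    """Close-minus-open: (dilate then erode) minus (erode then dilate)."""
    closed = erode(dilate(signal, size), size)
    opened = dilate(erode(signal, size), size)
    return [c - o for c, o in zip(closed, opened)]

# Smooth sky gradient with a dim point target at index 5.
row = [10, 11, 12, 13, 14, 40, 15, 16, 17, 18]
response = cmo(row)
target_index = max(range(len(response)), key=response.__getitem__)
```

The temporal stage then accumulates these per-frame responses along candidate tracks, which is what suppresses single-frame noise spikes and keeps the false alarm rate low.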
Abstract:
Purpose. To compare the radiological records of 90 consecutive patients who underwent cemented total hip arthroplasty (THA) with or without use of the Rim Cutter to prepare the acetabulum. Methods. The acetabulum of 45 patients was prepared using the Rim Cutter, whereas the device was not used in the other 45 patients. Postoperative radiographs were evaluated using a digital templating system to measure (1) the positions of the operated hips with respect to the normal, contralateral hips (the centre of rotation of the socket, the height of the centre of rotation from the teardrop, and lateralisation of the centre of rotation from the teardrop) and (2) the uniformity and width of the cement mantle in the three DeLee-Charnley acetabular zones, and the number of radiolucencies in these zones. Results. The study group showed improved radiological parameters: it was closer to the anatomic centre of rotation both vertically (1.5 vs. 3.7 mm, p<0.001) and horizontally (1.8 vs. 4.4 mm, p<0.001), and had consistently thicker and more uniform cement mantles (p<0.001). There were two radiolucent lines in the control group but none in the study group. Conclusion. The Rim Cutter resulted in more accurate placement of the centre of rotation of a cemented prosthetic socket and produced a thicker, more congruent cement mantle with fewer radiolucent lines.
Abstract:
Due to the explosive growth of the Web, the domain of Web personalization has gained great momentum in both research and commercial areas. One of the most popular kinds of Web personalization system is the recommender system. In recommender systems, choosing the user information used to profile users is crucial. In Web 2.0, one facility that can help users organize Web resources of interest is the user tagging system. Exploring user tagging behavior provides a promising way to understand users' information needs, since tags are given directly by users. However, the free and relatively uncontrolled vocabulary means that user self-defined tags lack standardization and suffer from semantic ambiguity. Also, the rich relationships among tags need to be explored, since they could provide valuable information for better understanding users. In this paper, we propose a novel approach for learning a tag ontology based on the widely used lexical database WordNet, capturing the semantics and the structural relationships of tags. We present personalization strategies that disambiguate the semantics of tags by combining the opinion of WordNet lexicographers with users' tagging behavior. To personalize further, clustering of users is performed to generate a more accurate ontology for a particular group of users. To evaluate the usefulness of the tag ontology, we use it in a pilot tag recommendation experiment, improving recommendation performance by exploiting the semantic information in the tag ontology. The initial result shows that the personalized information has improved the accuracy of the tag recommendation.
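The user-clustering step described above can be sketched with a simple Jaccard similarity over tag vocabularies. The users and tags below are hypothetical, and the paper's system additionally consults WordNet to disambiguate each tag before grouping users.

```python
# Sketch: grouping users by overlap of their tag vocabularies using
# Jaccard similarity, one simple way to realise the user-clustering
# step described above. Users and tags are hypothetical.

def jaccard(a, b):
    """Jaccard similarity of two tag sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

user_tags = {
    "u1": {"python", "code", "programming"},
    "u2": {"python", "snake", "reptile"},
    "u3": {"code", "programming", "software"},
}

def most_similar(user, others):
    """The other user whose tag vocabulary overlaps most with `user`."""
    return max(others, key=lambda u: jaccard(user_tags[user], user_tags[u]))

peer = most_similar("u1", ["u2", "u3"])
```

Note how tag-set overlap alone places u1 with u3 even though u1 and u2 share the ambiguous tag "python"; resolving that ambiguity (programming language vs. snake) is exactly what the WordNet-based disambiguation contributes.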