347 results for automated testing
Abstract:
Aromatherapy has been found to have some effectiveness in treating conditions such as postoperative nausea and vomiting; however, unless clinicians are aware of and convinced by this evidence, they are unlikely to choose to use it with their patients. The aim of this study was to test and modify an existing tool, Martin and Furnham's Beliefs About Aromatherapy Scale, to make it relevant and meaningful for use with a population of nurses and midwives working in an acute hospital setting. A Delphi process was used to modify the tool, which was then tested in a population of nurses and midwives, and the results were subjected to exploratory factor analysis. The modified tool is reliable and valid for measuring beliefs about aromatherapy in this population.
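A minimal sketch of the exploratory factor analysis step described above, using the third-party factor_analyzer package; the respondent count, item names, and three-factor solution are hypothetical placeholders, not details from the study.

```python
# Exploratory factor analysis sketch (hypothetical data, not the study's).
# Requires: pip install factor_analyzer pandas numpy
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
# Stand-in for Likert-scale survey responses: 120 respondents x 12 items.
items = pd.DataFrame(rng.integers(1, 6, size=(120, 12)),
                     columns=[f"item_{i}" for i in range(1, 13)])

# Fit an EFA with a hypothetical three-factor solution and varimax rotation.
fa = FactorAnalyzer(n_factors=3, rotation="varimax")
fa.fit(items)

loadings = pd.DataFrame(fa.loadings_, index=items.columns)
print(loadings.round(2))           # item-factor loadings
print(fa.get_factor_variance())    # variance explained per factor
```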
Abstract:
Many researchers in the field of civil structural health monitoring have developed and tested their methods on simple to moderately complex laboratory structures such as beams, plates, frames, and trusses. Field work has also been conducted by many researchers and practitioners on more complex operating bridges. Most laboratory structures, however, do not adequately replicate the complexity of truss bridges. This paper presents preliminary results of experimental modal testing and analysis of the bridge model presented in the companion paper, using the peak picking method, and compares these results with those of a simple numerical model of the structure. Three dominant modes of vibration were experimentally identified below 15 Hz. The mode shapes and order of the modes matched those of the numerical model; however, the frequencies did not match.
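As an illustration of the peak-picking idea the abstract refers to, the sketch below identifies candidate natural frequencies as local maxima of an averaged vibration spectrum. The synthetic signal, sampling rate, and prominence threshold are assumptions; real modal testing would work from measured excitation and response data.

```python
# Peak-picking sketch: locate candidate modal frequencies below 15 Hz
# in an averaged vibration spectrum (synthetic stand-in data).
import numpy as np
from scipy.signal import welch, find_peaks

fs = 256  # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)
# Synthetic acceleration record with three "modes" near 4, 7, and 12 Hz.
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 4 * t) + 0.6 * np.sin(2 * np.pi * 7 * t)
          + 0.4 * np.sin(2 * np.pi * 12 * t)
          + 0.2 * rng.standard_normal(t.size))

freqs, psd = welch(signal, fs=fs, nperseg=4096)
band = freqs < 15  # restrict to the band of interest
peaks, _ = find_peaks(psd[band], prominence=0.05 * psd[band].max())
print("Candidate modal frequencies (Hz):", freqs[band][peaks])
```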
Abstract:
While the implementation of the IEC 61850 standard has significantly enhanced the performance of communications in electrical substations, it has also increased the complexity of the system. Consequently, these added elaborations have introduced new challenges in relation to the skills and tools required for the design, testing and maintenance of 61850-compatible substations. This paper describes a practical experience of testing a protection relay using non-conventional test equipment; in addition, it proposes a third-party software technique to reveal the contents of the packets transferred on the substation network. Using this approach, the standard's objects can be linked to, and interpreted in terms of, what end-users normally see in the IED and test equipment proprietary software programs.
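A hedged illustration of the packet-inspection approach: Wireshark ships a dissector for IEC 61850 GOOSE traffic, and the third-party pyshark wrapper can expose those decoded fields from a capture file. The capture path and the exact field names below are assumptions for illustration, not details from the paper.

```python
# Sketch: list IEC 61850 GOOSE messages from a capture of substation
# network traffic using pyshark (a wrapper around Wireshark's tshark).
# Requires: pip install pyshark, plus a local tshark installation.
import pyshark

# Hypothetical capture file recorded on the substation LAN.
cap = pyshark.FileCapture("substation.pcapng", display_filter="goose")

for pkt in cap:
    goose = pkt.goose  # layer decoded by Wireshark's GOOSE dissector
    # Field names follow the dissector's naming; treat as illustrative.
    print(goose.gocbref, goose.stnum, goose.sqnum)

cap.close()
```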
Abstract:
Diabetic macular edema (DME) is one of the most common causes of visual loss among diabetes mellitus patients. Early detection and successive treatment may improve visual acuity. DME is mainly graded into non-clinically significant macular edema (NCSME) and clinically significant macular edema according to the location of hard exudates in the macula region. DME can be identified by manual examination of fundus images, but this is laborious and resource-intensive. Hence, in this work, automated grading of DME is proposed using higher-order spectra (HOS) of Radon transform projections of the fundus images. We used third-order cumulants and bispectrum magnitude as features and compared their performance; they can capture subtle changes in the fundus image. Spectral regression discriminant analysis (SRDA) reduces feature dimension, and the minimum redundancy maximum relevance method is used to rank the significant SRDA components. Ranked features are fed to various supervised classifiers, viz. Naive Bayes, AdaBoost and support vector machine, to discriminate the No DME, NCSME and clinically significant macular edema classes. The performance of our system is evaluated using the publicly available MESSIDOR dataset (300 images) and also verified with a local dataset (300 images). Our results show that HOS cumulants and bispectrum magnitude obtained average accuracies of 95.56% and 94.39% for the MESSIDOR dataset and 95.93% and 93.33% for the local dataset, respectively.
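A simplified sketch of the pipeline's shape: Radon-transform projections of a fundus image, one higher-order-statistic feature per projection (a third-order cumulant stand-in rather than the paper's full bispectrum), and a supervised classifier. The images, labels, and angle set are hypothetical, and the SRDA and mRMR ranking steps are omitted for brevity.

```python
# Pipeline sketch for automated DME grading (illustrative, not the
# authors' exact feature set): Radon projections -> third-order
# cumulant features -> SVM classifier.
import numpy as np
from skimage.transform import radon
from sklearn.svm import SVC

def cumulant3_features(image, angles=np.arange(0, 180, 10)):
    """One third-order-cumulant value per Radon projection angle."""
    sinogram = radon(image, theta=angles, circle=False)
    feats = []
    for proj in sinogram.T:            # one projection per angle
        x = proj - proj.mean()
        feats.append(np.mean(x ** 3))  # third-order cumulant at zero lag
    return np.array(feats)

# Hypothetical data: grayscale fundus images and grades
# (0 = No DME, 1 = NCSME, 2 = CSME).
rng = np.random.default_rng(0)
images = [rng.random((128, 128)) for _ in range(30)]
labels = rng.integers(0, 3, size=30)

X = np.stack([cumulant3_features(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```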
Abstract:
One of the objectives of this study was to evaluate soil testing equipment based on its capability of measuring in-place stiffness or modulus values. As design criteria transition from empirical to mechanistic-empirical, soil test methods and equipment that measure properties such as stiffness and modulus, and how they relate to Florida materials, are needed. Requirements for the selected equipment are that it be portable, cost effective, reliable, accurate, and repeatable. A second objective is that the selected equipment measure soil properties without the use of nuclear materials.

The current device used to measure soil compaction is the nuclear density gauge (NDG). Equipment evaluated in this research included lightweight deflectometers (LWD) from different manufacturers, a dynamic cone penetrometer (DCP), a GeoGauge, a Clegg impact soil tester (CIST), a Briaud compaction device (BCD), and a seismic pavement analyzer (SPA). Evaluations were conducted over ranges of measured densities and moistures.

Testing (Phases I and II) was conducted in a test box and test pits. Phase III testing was conducted on materials found on five construction projects located in the Jacksonville, Florida, area. Phase I analyses determined that the GeoGauge had the lowest overall coefficient of variance (COV). In ascending order of COV were the accelerometer-type LWD, the geophone-type LWD, the DCP, the BCD, and the SPA, which had the highest overall COV. As a result, the BCD and the SPA were excluded from Phase II testing.

In Phase II, measurements obtained from the selected equipment were compared to the modulus values obtained by the static plate load test (PLT), the resilient modulus (MR) from laboratory testing, and the NDG measurements. To minimize soil and moisture content variability, a single-spot testing sequence was developed. At each location, test results obtained from the portable equipment under evaluation were compared to the values from adjacent NDG, PLT, and laboratory MR measurements. Correlations were developed through statistical analysis. Target values were developed for various soils for verification on similar soils that were field tested in Phase III.

The single-spot testing sequence was also employed in Phase III, field testing performed on A-3 and A-2-4 embankments, limerock-stabilized subgrade, limerock base, and graded aggregate base found on Florida Department of Transportation construction projects. The Phase II and Phase III results provided potential trend information for future research, specifically data collection for in-depth statistical analysis of correlations with the laboratory MR for specific soil types under specific moisture conditions. With the collection of enough data, stronger relationships could be expected between measurements from the portable equipment and the MR values. Based on the statistical analyses and the experience gained from extensive use of the equipment, the combination of the DCP and the LWD was selected for in-place soil testing for compaction control acceptance. Test methods and developmental specifications were written for the DCP and the LWD. The developmental specifications include target values for the compaction control of embankment, subgrade, and base materials.
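For reference, the coefficient of variance used to rank the devices is the standard deviation of repeated readings expressed as a fraction (or percentage) of their mean. A minimal sketch with made-up readings:

```python
# Coefficient of variance (COV) of repeated device readings:
# COV = standard deviation / mean, often reported as a percentage.
import numpy as np

# Hypothetical repeated stiffness readings (MN/m) at one test spot.
readings = np.array([21.4, 22.1, 20.8, 21.9, 21.5])

cov = readings.std(ddof=1) / readings.mean()  # sample standard deviation
print(f"COV = {100 * cov:.1f}%")
```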
Abstract:
Environmental sensors collect massive amounts of audio data. This thesis investigates computational methods to support human analysts in identifying faunal vocalisations from that audio. A series of experiments was conducted to trial the effectiveness of novel user interfaces. This research examines the rapid scanning of spectrograms, decision support tools for users, and cleaning methods for folksonomies. Together, these investigations demonstrate that providing computational support to human analysts increases their efficiency and accuracy; this allows bioacoustics projects to efficiently utilise their valuable human analysts.
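As a hedged illustration of the spectrogram-based review the thesis builds on, the sketch below renders an audio segment as the kind of spectrogram image an analyst would scan for vocalisations. The file path and display parameters are assumptions.

```python
# Sketch: render an audio segment as a spectrogram for visual scanning.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram
import matplotlib.pyplot as plt

fs, audio = wavfile.read("sensor_recording.wav")  # hypothetical file
if audio.ndim > 1:
    audio = audio.mean(axis=1)  # mix down to mono

f, t, sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
plt.pcolormesh(t, f, 10 * np.log10(sxx + 1e-12), shading="auto")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram for analyst review")
plt.show()
```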
Abstract:
Background In the emergency department, portable point-of-care testing (POCT) coagulation devices may facilitate stroke patient care by providing rapid International Normalized Ratio (INR) measurement. The objective of this study was to evaluate the reliability, validity, and impact on clinical decision-making of a POCT device for INR testing in the setting of acute ischemic stroke (AIS). Methods A total of 150 patients (50 healthy volunteers, 51 anticoagulated patients, 49 AIS patients) were assessed in a tertiary care facility. INRs were measured using the Roche CoaguChek S and the standard laboratory technique. Results The intraclass correlation coefficient between overall POCT device and standard laboratory INRs was high, at 0.932 (95% CI 0.69 - 0.78). In the AIS group alone, the correlation coefficient was also high, at 0.937 (95% CI 0.59 - 0.74), and the diagnostic accuracy of the POCT device was 94%. Conclusions When used by a trained health professional in the emergency department to assess INR in acute ischemic stroke patients, the CoaguChek S is reliable and provides rapid results. However, as concordance with laboratory INR values decreases at higher INR values, it is recommended that CoaguChek S INRs in the > 1.5 range be confirmed with a standard laboratory measurement.
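A minimal sketch of the reliability computation described, using the third-party pingouin package's intraclass-correlation routine; the paired INR values and column names are hypothetical, not the study's data.

```python
# Sketch: intraclass correlation between POCT and laboratory INR values.
# Requires: pip install pingouin pandas
import pandas as pd
import pingouin as pg

# Hypothetical paired measurements: each patient measured by two "raters"
# (the POCT device and the standard laboratory method).
df = pd.DataFrame({
    "patient": [1, 1, 2, 2, 3, 3, 4, 4],
    "method":  ["poct", "lab"] * 4,
    "inr":     [1.1, 1.0, 2.4, 2.6, 1.8, 1.9, 3.1, 3.4],
})

icc = pg.intraclass_corr(data=df, targets="patient",
                         raters="method", ratings="inr")
print(icc[["Type", "ICC", "CI95%"]])
```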
Abstract:
Background The aim of this study was to compare surface electromyographic (sEMG) recordings of the maximum voluntary contraction (MVC), elicited by manual muscle testing (MMT), on dry land and in water. Method Sixteen healthy right-handed subjects (8 males and 8 females) participated in measurement of muscle activation of the right shoulder. The selected muscles were the cervical erector spinae, trapezius, pectoralis, anterior deltoid, middle deltoid, infraspinatus and latissimus dorsi. The order of the MVC test conditions (on land/in water) was randomised. Results For each muscle, the MVC test was performed and measured through sEMG to determine differences in muscle activation in both conditions. For all muscles except the latissimus dorsi, no significant differences were observed between land and water MVC scores (p = 0.063–0.679), and good precision (%Diff = 7–10%) was observed between MVC conditions in the trapezius, anterior deltoid and middle deltoid. Conclusions If the procedure for data collection is optimal, under MMT conditions it appears that comparable MVC sEMG values can be achieved on land and in water, and the integrity of the EMG recordings is maintained during water immersion.
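A sketch of the kind of paired land-versus-water comparison reported above, using a non-parametric paired test; the sEMG amplitudes, the choice of test, and the %Diff definition are assumptions for illustration, not taken from the paper.

```python
# Sketch: paired comparison of land vs. water MVC sEMG amplitudes
# for one muscle, plus a percent-difference precision measure.
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical normalised sEMG amplitudes for 16 subjects.
rng = np.random.default_rng(1)
land = rng.normal(1.00, 0.15, size=16)
water = land + rng.normal(0.00, 0.10, size=16)

stat, p = wilcoxon(land, water)  # non-parametric paired test
pct_diff = 100 * np.abs(land - water) / ((land + water) / 2)
print(f"p = {p:.3f}, mean %Diff = {pct_diff.mean():.1f}%")
```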
Abstract:
We developed and validated a new method to create automated 3D parametric surface models of the lateral ventricles in brain MRI scans, providing an efficient approach to monitor degenerative disease in clinical studies and drug trials. First, we used a set of parameterized surfaces to represent the ventricles in four subjects' manually labeled brain MRI scans (atlases). We fluidly registered each atlas and mesh model to MRIs from 17 Alzheimer's disease (AD) patients and 13 age- and gender-matched healthy elderly control subjects, and 18 asymptomatic ApoE4-carriers and 18 age- and gender-matched non-carriers. We examined genotyped healthy subjects with the goal of detecting subtle effects of a gene that confers heightened risk for Alzheimer's disease. We averaged the meshes extracted for each 3D MR data set, and combined the automated segmentations with a radial mapping approach to localize ventricular shape differences in patients. Validation experiments comparing automated and expert manual segmentations showed that (1) the Hausdorff labeling error rapidly decreased, and (2) the power to detect disease- and gene-related alterations improved, as the number of atlases, N, was increased from 1 to 9. In surface-based statistical maps, we detected more widespread and intense anatomical deficits as we increased the number of atlases. We formulated a statistical stopping criterion to determine the optimal number of atlases to use. Healthy ApoE4-carriers and those with AD showed local ventricular abnormalities. This high-throughput method for morphometric studies further motivates the combination of genetic and neuroimaging strategies in predicting AD progression and treatment response.
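For context, the Hausdorff labeling error used in the validation is the largest distance from a point on one surface to its nearest point on the other, symmetrised over both directions. A minimal sketch, assuming the surfaces are represented as arrays of 3D vertex coordinates:

```python
# Sketch: symmetric Hausdorff distance between two 3D point sets
# (e.g., automated vs. expert-labeled ventricle surface vertices).
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
auto_surface = rng.random((500, 3))  # hypothetical surface vertices
expert_surface = auto_surface + rng.normal(0, 0.01, (500, 3))

d_ab = directed_hausdorff(auto_surface, expert_surface)[0]
d_ba = directed_hausdorff(expert_surface, auto_surface)[0]
print("Symmetric Hausdorff distance:", max(d_ab, d_ba))
```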
Abstract:
We developed and validated a new method to create automated 3D parametric surface models of the lateral ventricles, designed for monitoring degenerative disease effects in clinical neuroscience studies and drug trials. First we used a set of parameterized surfaces to represent the ventricles in a manually labeled set of 9 subjects' MRIs (atlases). We fluidly registered each of these atlases and mesh models to a set of MRIs from 12 Alzheimer's disease (AD) patients and 14 matched healthy elderly subjects, and we averaged the resulting meshes for each of these images. Validation experiments on expert segmentations showed that (1) the Hausdorff labeling error rapidly decreased, and (2) the power to detect disease-related alterations monotonically improved as the number of atlases, N, was increased from 1 to 9. We then combined the segmentations with a radial mapping approach to localize ventricular shape differences in patients. In surface-based statistical maps, we detected more widespread and intense anatomical deficits as we increased the number of atlases, and we formulated a statistical stopping criterion to determine the optimal value of N. Anterior horn anomalies in Alzheimer's patients were only detected with the multi-atlas segmentation, which clearly outperformed the standard single-atlas approach.
Abstract:
An automated method for extracting brain volumes from three commonly acquired three-dimensional (3D) MR images (proton density, T1-weighted, and T2-weighted) of the human head is described. The procedure is divided into four levels: preprocessing, segmentation, scalp removal, and postprocessing. A user-provided reference point is the sole operator-dependent input required. The method's parameters were first optimized and then fixed and applied to 30 repeat data sets from 15 normal older adult subjects to investigate its reproducibility. Percent differences between total brain volumes (TBVs) for the subjects' repeated data sets ranged from 0.5% to 2.2%. We conclude that the method is both robust and reproducible and has the potential for wide application.
Abstract:
This paper provides an important and timely overview of a conceptual framework designed to assist with the development and evaluation of persuasive health message content. While an earlier version of this framework was presented in a prior publication by the authors in 2009, important refinements in recent years have seen it evolve, warranting an updated review. This paper outlines the Step approach to Message Design and Testing (SatMDT), setting out the theoretical evidence that underpins each of the framework's steps and the empirical evidence that demonstrates their relevance and feasibility. The development and testing of the framework have thus far been based exclusively within the road safety advertising context; however, the view expressed herein is that the framework may have broader appeal and application in the health persuasion context.
Abstract:
To this point, the collection has provided research-based, empirical accounts of the various and multiple effects of the National Assessment Program – Literacy and Numeracy (NAPLAN) in Australian schooling as a specific example of the global phenomenon of national testing. In this chapter, we want to develop a more theoretical analysis of national testing systems, globalising education policy and the promise of national testing as adaptive, online tests. These future moves claim to provide faster feedback and more useful diagnostic help for teachers. There is a utopian testing dream that one day adaptive, online tests will be responsive in real time, providing integrated personalised testing, pedagogy and intervention for each student. The moves towards these next-generation assessments are well advanced, including the work of Pearson's NextGen Learning and Assessment research group, the Organisation for Economic Co-operation and Development's (OECD) move into assessing affective skills and the Australian Curriculum, Assessment and Reporting Authority's (ACARA) decision to phase in NAPLAN as an online, adaptive test from 2017...
Abstract:
Introduction This book examines a pressing educational issue: the global phenomenon of national testing in schooling and its vernacular development in Australia. The Australian National Assessment Program – Literacy and Numeracy (NAPLAN), introduced in 2008, involves annual census testing of students in Years 3, 5, 7 and 9 in nearly all Australian schools. In a variety of ways, NAPLAN affects the lives of Australia’s 3.5 million school students and their families, as well as more than 350,000 school staff and many other stakeholders in education. This book is organised in relation to a simple question: What are the effects of national testing for systems, schools and individuals? Of course, this simple question requires complex answers. The chapters in this edited collection consider issues relating to national testing policy, the construction of the test, usages of the testing data and various effects of testing in systems, schools and classrooms. Each chapter examines an aspect of national testing in Australia using evidence drawn from research. The final chapter by the editors of this collection provides a broader reflection on this phenomenon and situates developments in testing globally...
Abstract:
Since 2008, Australian schoolchildren in Years 3, 5, 7 and 9 have sat a series of tests each May designed to assess their attainment of basic skills in literacy and numeracy. These tests are known as the National Assessment Program – Literacy and Numeracy (NAPLAN). In 2010, individual school NAPLAN data were first published on the MySchool website, which enables comparisons to be made between individual schools and statistically like schools across Australia. NAPLAN represents the increased centrality of the federal government in education, particularly in regard to education policy. One effect of this has been a recasting of education as an economic, rather than democratic, good. As Reid (2009) suggests, this recasting of education within national productivity agendas mobilises commonsense discourses of accountability and transparency. These are common articles of faith for many involved in education administration and bureaucracy; more and better data, and holding people to account for those data, must improve education...