924 results for Accuracy and precision
Abstract:
Aim To assess the accuracy and reproducibility of biometry undertaken with the Aladdin (Topcon, Tokyo, Japan) in comparison with the current gold-standard device, the IOLMaster 500 (Zeiss, Jena, Germany). Setting University Eye Clinic, Birmingham, UK and Refractive Surgery Centre, Kiel, Germany. Methods The right eyes of 75 patients with cataracts and 22 healthy participants were assessed using the two devices. Measurements of axial length (AL), anterior chamber depth (ACD) and keratometry (K) were undertaken with the Aladdin and IOLMaster 500 in random order by an experienced practitioner. A second practitioner then obtained measurements for each participant using the Aladdin biometer in order to assess interobserver variability. Results No statistically significant differences (p≥0.05) between the two biometers were found for AL (mean difference ±95% CI = 0.01±0.06 mm), ACD (0.00±0.11 mm) or mean K (0.08±0.51 D). Furthermore, interobserver variability was very good for each parameter (weighted κ≥0.85). One patient's IOL powers could not be calculated from either biometer's measurements, and a further three could not be analysed by the IOLMaster 500. The IOL power calculated from the valid measurements was not statistically significantly different between the biometers (p=0.842), with 91% of predictions within ±0.25 D. Conclusions The Aladdin is a quick, easy-to-use biometer that produces valid and reproducible results comparable with those obtained with the IOLMaster 500.
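The paired comparison reported above (mean difference with a 95% confidence interval) can be sketched as follows. The axial-length readings are hypothetical illustration values, not the study's data, and a normal critical value is used where a t critical value would be stricter for a small sample:

```python
import math

# Hypothetical paired axial-length readings (mm) from the two biometers;
# illustrative values only, not the study data.
aladdin = [23.41, 24.02, 22.87, 23.65, 25.10, 23.98]
iolmaster = [23.40, 24.05, 22.85, 23.63, 25.12, 23.97]

diffs = [a - b for a, b in zip(aladdin, iolmaster)]
n = len(diffs)
mean_diff = sum(diffs) / n
# Sample standard deviation of the paired differences.
sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
# 95% CI half-width for the mean difference (normal approximation).
half_width = 1.96 * sd / math.sqrt(n)
print(f"mean difference = {mean_diff:+.3f} mm, 95% CI half-width = {half_width:.3f} mm")
```

A mean difference whose interval comfortably includes zero, as in the study's 0.01±0.06 mm for AL, is what supports the "no statistically significant difference" conclusion.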
Abstract:
An increasing number of publications on the dried blood spot (DBS) sampling approach for the quantification of drugs and metabolites have been spurred on by the inherent advantages of this sampling technique. In the present research, a selective and sensitive high-performance liquid chromatography method for the concurrent determination of multiple antiepileptic drugs (AEDs) [levetiracetam (LVT), lamotrigine (LTG), phenobarbital (PHB), carbamazepine (CBZ) and its active metabolite carbamazepine-10,11-epoxide (CBZE)] in a single DBS has been developed and validated. Whole blood was spotted onto Guthrie cards and dried. Using a standard punch (6 mm diameter), a circular disc was punched from the card and extracted with methanol:acetonitrile (3:1, v/v) containing hexobarbital (internal standard), then sonicated prior to evaporation. The extract was then dissolved in water and vortex-mixed before undergoing solid-phase extraction using HLB cartridges. Chromatographic separation of the AEDs was achieved using a Waters XBridge™ C18 column with a gradient system. The developed method was linear over the concentration ranges studied, with r ≥ 0.995 for all compounds. The lower limits of quantification (LLOQs) were 2, 1, 2, 0.5 and 1 μg/mL for LVT, LTG, PHB, CBZE and CBZ, respectively. Accuracy (%RE) and precision (%CV) values for within- and between-day runs were <20% at the LLOQs and <15% at all other concentrations tested. This method was successfully applied to the analysis of the AEDs in DBS samples taken from children with epilepsy for the assessment of their adherence to prescribed treatments.
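The acceptance criteria above (%RE for accuracy, %CV for precision, <20% at the LLOQ) can be computed as in this minimal sketch; the replicate readings are hypothetical, not the validation data:

```python
import statistics

def accuracy_precision(measured, nominal):
    """Return (%RE, %CV) for replicate measurements at one nominal level."""
    mean = statistics.mean(measured)
    re_pct = (mean - nominal) / nominal * 100          # relative error (accuracy)
    cv_pct = statistics.stdev(measured) / mean * 100   # coefficient of variation (precision)
    return re_pct, cv_pct

# Hypothetical replicate LVT readings (ug/mL) at its 2 ug/mL LLOQ.
re_pct, cv_pct = accuracy_precision([2.1, 1.9, 2.2, 2.0, 1.8], nominal=2.0)
print(f"%RE = {re_pct:.1f}, %CV = {cv_pct:.1f}")
```

Both values here fall well inside the 20% LLOQ criterion quoted in the abstract.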
Abstract:
Accurate protein structure prediction remains an active objective of research in bioinformatics. Membrane proteins comprise approximately 20% of most genomes. They are, however, poorly tractable targets of experimental structure determination. Their analysis using bioinformatics thus makes an important contribution to their ongoing study. Using a method based on Bayesian networks, which provide a flexible and powerful framework for statistical inference, we have addressed the alignment-free discrimination of membrane from non-membrane proteins. The method successfully identifies prokaryotic and eukaryotic α-helical membrane proteins at 94.4% accuracy, β-barrel proteins at 72.4% accuracy, and distinguishes assorted non-membranous proteins with 85.9% accuracy. The method is a potentially important advance in the computational analysis of membrane protein structure. It represents a useful tool for the characterisation of membrane proteins with a wide variety of potential applications.
Abstract:
Contract Law Concentrate is a high-quality revision guide covering the main topics found on undergraduate courses. The clear, succinct coverage of key legal points within a specific topic area, including key cases, enables students to quickly grasp the fundamental principles of contract law. Written by Jill Poole, an experienced teacher and examiner and author of Textbook on Contract Law and Casebook on Contract Law. The book focuses on students' need to pass their exams, with a number of pedagogical features that help with exam preparation and suggest ways to improve marks. Endorsed by students and lecturers for level of coverage, accuracy and exam advice. Online Resource Centre: interactive flashcards, glossary, exam and revision guidance.
Abstract:
The rhythm created by spacing a series of brief tones in a regular pattern can be disguised by interleaving identical distractors at irregular intervals. The disguised rhythm can be unmasked if the distractors are allocated to a separate stream from the rhythm by integration with temporally overlapping captors. Listeners identified which of two rhythms was presented, and the accuracy and rated clarity of their judgments were used to estimate the fusion of the distractors and captors. The extent of fusion depended primarily on onset asynchrony and degree of temporal overlap. Harmonic relations had some influence, but only an extreme difference in spatial location was effective (dichotic presentation). Both preattentive and attentionally driven processes governed performance. (PsycINFO Database Record (c) 2012 APA, all rights reserved)
Abstract:
Purpose – This paper aims to consider how climate change performance is measured and accounted for within the performance framework for local authority areas in England adopted in 2008. It critically evaluates the design of two mitigation and one adaptation indicators that are most relevant to climate change. Further, the potential for these performance indicators to contribute to climate change mitigation and adaptation is discussed. Design/methodology/approach – The authors begin by examining the importance of the performance framework and the related Local Area Agreements (LAAs), which were negotiated for all local areas in England between central government and Local Strategic Partnerships (LSPs). This development is located within the broader literature relating to new public management. The potential for this framework to assist in delivering the UK's climate change policy objectives is researched in a two-stage process. First, government publications and all 150 LAAs were analysed to identify the level of priority given to the climate change indicators. Second, interviews were conducted in spring 2009 with civil servants and local authority officials from the English West Midlands who were engaged in negotiating the climate change content of the LAAs. Findings – Nationally, the authors find that 97 per cent of LAAs included at least one climate change indicator as a priority. The indicators themselves, however, are perceived to be problematic – in terms of appropriateness, accuracy and timeliness. In addition, concerns were identified about the level of local control over the drivers of climate change performance and, therefore, a question is raised as to how LSPs can be held accountable for this. On a more positive note, for those concerned about climate change, the authors do find evidence that the inclusion of these indicators within the performance framework has helped to move climate change up the agenda for local authorities and their partners. 
However, actions by the UK's new coalition government to abolish the national performance framework and substantially reduce public expenditure potentially threaten this advance. Originality/value – This paper offers an insight into a new development for measuring climate change performance at a local level, which is relatively under-researched. It also contributes to knowledge of accountability within a local government setting and provides a reference point for further research into the potential role of local actions to address the issue of climate change.
Abstract:
Bio-impedance analysis (BIA) provides a rapid, non-invasive technique for body composition estimation. BIA offers a convenient alternative to standard techniques such as MRI, CT scan or DEXA scan for selected types of body composition analysis. The accuracy of BIA is limited because it is an indirect method of composition analysis: it relies on linear relationships between measured impedance and morphological parameters such as height and weight to derive estimates. To overcome these underlying limitations of BIA, a multi-frequency segmental bio-impedance device was constructed through a series of iterative enhancements and improvements of existing BIA instrumentation. Key features of the design included an easy-to-construct current source and a compact PCB design. The final device was trialled with 22 human volunteers, and measured impedance was compared against body composition estimates obtained by DEXA scan. This enabled the development of newer techniques for making BIA predictions. To add a ‘visual aspect’ to BIA, volunteers were scanned in 3D using an inexpensive scattered-light gadget (Xbox Kinect controller), and 3D volumes of their limbs were compared with BIA measurements to further improve BIA predictions. A three-stage digital filtering scheme was also implemented to enable extraction of heart-rate data from recorded bio-electrical signals. Additionally, modifications were introduced to measure change in bio-impedance with motion, which could be adapted to further improve the accuracy of limb composition analysis. The findings in this thesis aim to give new direction to the prediction of body composition using BIA. The design development and refinement applied to BIA in this research programme suggest new opportunities to enhance the accuracy and clinical utility of BIA for the prediction of body composition. In particular, the use of bio-impedance to predict limb volumes would provide an additional metric for body composition measurement and help distinguish between fat and muscle content.
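A three-stage filtering chain of the kind mentioned above might look like the following standard-library sketch. The stage choices (moving-average detrending, moving-average smoothing, threshold peak picking) and the 100 Hz sampling rate are assumptions for illustration; the thesis's actual filter designs are not given in the abstract:

```python
import math

FS = 100  # assumed sampling rate, Hz

def moving_average(x, w):
    """Centred moving average with shrinking windows at the edges."""
    half = w // 2
    return [sum(x[max(0, i - half):i + half + 1]) /
            len(x[max(0, i - half):i + half + 1]) for i in range(len(x))]

def heart_rate(signal, fs=FS):
    # Stage 1: remove baseline drift by subtracting a slow moving average.
    baseline = moving_average(signal, fs * 2)
    detrended = [s - b for s, b in zip(signal, baseline)]
    # Stage 2: suppress high-frequency noise with a short moving average.
    smooth = moving_average(detrended, fs // 10)
    # Stage 3: pick local maxima above a threshold; convert spacing to BPM.
    thresh = 0.5 * max(smooth)
    peaks = [i for i in range(1, len(smooth) - 1)
             if smooth[i] > thresh
             and smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]]
    intervals = [(b - a) / fs for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

# Synthetic 1.2 Hz (72 BPM) pulse riding on a slow baseline drift.
sig = [math.sin(2 * math.pi * 1.2 * t / FS) + 0.3 * t / FS for t in range(10 * FS)]
print(round(heart_rate(sig)))
```

The same detrend/smooth/detect pattern applies whatever the concrete filters are; a real implementation would likely use proper IIR or FIR designs rather than moving averages.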
Abstract:
It is well established that speech, language and phonological skills are closely associated with literacy, and that children with a family risk of dyslexia (FRD) tend to show deficits in each of these areas in the preschool years. This paper examines the relationships between FRD and these skills, and whether deficits in speech, language and phonological processing fully account for the increased risk of dyslexia in children with FRD. One hundred and fifty-three 4-6-year-old children, 44 of whom had FRD, completed a battery of speech, language, phonology and literacy tasks. Word reading and spelling were retested 6 months later, and text reading accuracy and reading comprehension were tested 3 years later. The children with FRD were at increased risk of developing difficulties in reading accuracy, but not reading comprehension. Four groups were compared: good and poor readers with and without FRD. In most cases good readers outperformed poor readers regardless of family history, but there was an effect of family history on naming and nonword repetition regardless of literacy outcome, suggesting a role for speech production skills as an endophenotype of dyslexia. Phonological processing predicted spelling, while language predicted text reading accuracy and comprehension. FRD was a significant additional predictor of reading and spelling after controlling for speech production, language and phonological processing, suggesting that children with FRD show additional difficulties in literacy that cannot be fully explained in terms of their language and phonological skills. © 2014 John Wiley & Sons Ltd.
Abstract:
A new LIBS quantitative analysis method based on adaptive analytical line selection and a Relevance Vector Machine (RVM) regression model is proposed. First, a scheme for adaptively selecting analytical lines is put forward to overcome the drawback of high dependency on a priori knowledge. Candidate analytical lines are automatically selected based on the built-in characteristics of the spectral lines, such as spectral intensity, wavelength and width at half height. The analytical lines used as input variables of the regression model are determined adaptively according to the samples used for both training and testing. Second, an LIBS quantitative analysis method based on RVM is presented. The intensities of the analytical lines and the elemental concentrations of certified standard samples are used to train the RVM regression model. The predicted elemental concentrations are given as confidence intervals of a probabilistic distribution, which is helpful for evaluating the uncertainty contained in the measured spectra. Chromium concentration analysis experiments on 23 certified standard high-alloy steel samples were carried out. The multiple correlation coefficient of the prediction was up to 98.85%, and the average relative error of the prediction was 4.01%. The experimental results showed that the proposed LIBS quantitative analysis method achieved better prediction accuracy and modeling robustness than methods based on partial least squares regression, artificial neural networks and standard support vector machines.
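A full RVM is beyond a short sketch, but the idea of reporting a calibration prediction together with a confidence interval can be illustrated with a simple least-squares analogue. The intensity/concentration pairs below are hypothetical, not the paper's 23 steel samples, and a normal critical value is used where a t value would be stricter:

```python
import math

# Hypothetical calibration: chromium line intensity (a.u.) vs certified
# concentration (wt%); illustrative numbers only.
x = [120, 250, 400, 520, 640, 800]
y = [0.10, 0.25, 0.41, 0.52, 0.66, 0.79]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx   # slope
a = my - b * mx                                                # intercept
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 2)                       # residual variance

def predict(x0):
    """Point prediction with an approximate 95% prediction-interval half-width."""
    var = s2 * (1 + 1 / n + (x0 - mx) ** 2 / sxx)
    return a + b * x0, 1.96 * math.sqrt(var)

conc, half = predict(300)
print(f"predicted Cr = {conc:.3f} +/- {half:.3f} wt%")
```

An RVM produces an analogous predictive distribution, but with sparsity over basis functions and a fully Bayesian treatment of the weights.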
Abstract:
Purpose: To assess the inter- and intra-observer variability of subjective grading of the retinal arterio-venous ratio (AVR) using a visual grading, and to compare the subjectively derived grades to an objective method using a semi-automated computer program. Methods: Following intraocular pressure and blood pressure measurements, all subjects underwent dilated fundus photography. Eighty-six monochromatic retinal images with the optic nerve head centred (52 healthy volunteers) were obtained using a Zeiss FF450+ fundus camera. Arterio-venous ratio (AVR), central retinal artery equivalent (CRAE) and central retinal vein equivalent (CRVE) were calculated on three separate occasions by a single observer semi-automatically using the software VesselMap (ImedosSystems, Jena, Germany). Following the automated grading, three examiners graded the AVR visually on three separate occasions in order to assess their agreement. Results: Reproducibility of the semi-automatic parameters was excellent (ICCs: 0.97 (CRAE), 0.985 (CRVE) and 0.952 (AVR)). However, visual grading of AVR showed inter-grader differences as well as discrepancies between subjectively derived and objectively calculated AVR (all p < 0.000001). Conclusion: Grader education and experience lead to inter-grader differences but, more importantly, subjective grading is not capable of picking up subtle differences across healthy individuals and does not represent true AVR when compared with an objective assessment method. Technological advancements mean we no longer rely on ophthalmoscopic evaluation but can capture and store fundus images with retinal cameras, enabling vessel calibre to be measured more accurately than by visual estimation; hence objective measurement should be integrated into optometric practice for improved accuracy and reliability of clinical assessments of retinal vessel calibres. © 2014 Spanish General Council of Optometry.
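The reproducibility figures above are intraclass correlation coefficients. A one-way ICC(1,1) can be computed as in this sketch, with hypothetical repeated AVR measurements rather than the study's data:

```python
def icc_oneway(ratings):
    """One-way random ICC(1,1). ratings: per-subject lists of k repeated measures."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subject and within-subject mean squares.
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical AVR measured three times in each of four subjects.
avr = [[0.82, 0.81, 0.82], [0.70, 0.71, 0.70],
       [0.76, 0.76, 0.77], [0.65, 0.66, 0.65]]
print(round(icc_oneway(avr), 3))
```

Values near 1, as here, indicate that repeat measurements vary far less than subjects do, matching the "excellent reproducibility" reported for the semi-automatic method.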
Abstract:
The best results in applying computer systems to automatic translation are obtained in text processing when the texts pertain to specific thematic areas, with well-defined structures and a concise, limited lexicon. In this article we present a systematic work plan for the analysis and generation of language applied to the domain of pharmaceutical leaflets, a type of document characterized by rigid formatting and precise use of lexicon. We propose a solution based on the use of an interlingua as a pivot language between the source and target languages; in this case of application we consider Spanish and Arabic.
Abstract:
Liquid-level sensing technologies have attracted great prominence because such measurements are essential to industrial applications such as fuel storage, flood warning and the biochemical industry. Traditional liquid-level sensors are based on electromechanical techniques; however, they suffer from intrinsic safety concerns in explosive environments. In recent years, given that optical fiber sensors have many well-established advantages such as high accuracy, cost-effectiveness, compact size and ease of multiplexing, several optical fiber liquid-level sensors have been investigated, based on different operating principles such as side-polishing the cladding and a portion of the core, using a spiral side-emitting optical fiber, or using silica fiber gratings. The present work proposes a novel and highly sensitive liquid-level sensor making use of polymer optical fiber Bragg gratings (POFBGs). The key elements of the system are a set of POFBGs embedded in silicone rubber diaphragms. This is a new development building on the idea of determining liquid level by measuring the pressure at the bottom of a liquid container; however, it has a number of critical advantages. The system features several FBG-based pressure sensors as described above, placed at different depths. Any sensor above the surface of the liquid will read the same ambient pressure, while sensors below the surface will read pressures that increase linearly with depth. The position of the liquid surface can therefore be approximately identified as lying between the first sensor to read an above-ambient pressure and the next higher sensor. This level of precision would not in general be sufficient for most liquid-level monitoring applications; however, a much more precise determination of liquid level can be made by linear regression on the pressure readings from the sub-surface sensors. There are numerous advantages to this multi-sensor approach.
First, linear regression over multiple sensors is inherently more accurate than using a single pressure reading to estimate depth. Second, common-mode temperature-induced wavelength shifts in the individual sensors are automatically compensated. Third, temperature-induced changes in the sensor pressure sensitivity are also compensated. Fourth, the approach provides the possibility to detect and compensate for malfunctioning sensors. Finally, the system is immune to changes in the density of the monitored fluid and even to changes in the effective force of gravity, as might be encountered in an aerospace application. The performance of an individual sensor was characterized and displays a sensitivity of 54 pm/cm, more than twice that of a sensor-head configuration based on a silica FBG published in the literature, resulting from the much lower elastic modulus of POF. Furthermore, the temperature/humidity behaviour and measurement resolution were also studied in detail. The proposed configuration also displays a highly linear response, high resolution and good repeatability. The results suggest the new configuration can be a useful tool in many different applications, such as aircraft fuel monitoring, and biochemical and environmental sensing, where accuracy and stability are fundamental. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
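The regression-based level estimate described above can be sketched as follows: fit pressure against sensor depth for the sub-surface sensors, then extrapolate the fitted line back to ambient pressure to locate the surface. Sensor depths, readings and the ambient pressure are hypothetical illustration values:

```python
def liquid_level(depths_cm, pressures, ambient):
    """Estimate the liquid surface position (cm below the container top)."""
    # Only sub-surface sensors read above ambient pressure.
    pts = [(d, p) for d, p in zip(depths_cm, pressures) if p > ambient]
    n = len(pts)
    mx = sum(d for d, _ in pts) / n
    my = sum(p for _, p in pts) / n
    slope = (sum((d - mx) * (p - my) for d, p in pts)
             / sum((d - mx) ** 2 for d, _ in pts))
    intercept = my - slope * mx
    # The surface is where the fitted line crosses ambient pressure.
    return (ambient - intercept) / slope

depths = [10, 20, 30, 40, 50]                    # sensor positions, cm below top
readings = [100.0, 100.0, 101.0, 102.0, 103.0]   # kPa; ambient = 100 kPa
print(round(liquid_level(depths, readings, ambient=100.0), 1))
```

Because the fit uses all sub-surface sensors, a single noisy or failed reading perturbs the estimate far less than it would in a single-sensor design, and the fluid density only changes the slope, not the extrapolated crossing point.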
Abstract:
This study examines the contribution of early phonological processing (PP) and language skills to later phonological awareness (PA) and morphological awareness (MA), as well as the links among PA, MA, and reading. Children 4-6 years of age with poor PP at the start of school showed weaker PA and MA 3 years later (age 7-9), regardless of their language skills. PA and phonological and morphological strategies predict reading accuracy, whereas MA predicts reading comprehension. Our findings suggest that children with poor early PP are more at risk of developing deficits in MA and PA than children with poor language. They also suggest that there is a direct link between PA and reading accuracy and between MA and reading comprehension that cannot be accounted for by strategy use at the word level.
Abstract:
Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which introduces additional difficulty for assembly accuracy and error estimation. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface in small components. To reduce the uncertainty of the plane measurement, an evaluation index of measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated from an inertia moment matrix. Finally, a practical application is introduced to validate the evaluation index.
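The paper's exact propagation model is not given in the abstract, but for any linearised error model y = J x with independent normally distributed input errors, the output covariance is J·Cov(x)·Jᵀ. The sketch below illustrates this general technique with hypothetical sensitivities and variances, not the paper's values:

```python
def propagate(J, variances):
    """Propagate independent input error variances through a linear model.

    J: m x n sensitivity (Jacobian) matrix as nested lists.
    variances: n input error variances (diagonal input covariance).
    Returns the m x m output covariance J * diag(variances) * J^T.
    """
    m, n = len(J), len(J[0])
    return [[sum(J[i][k] * variances[k] * J[j][k] for k in range(n))
             for j in range(m)] for i in range(m)]

# Two hypothetical error sources (measurement, fixture) mapped to one
# positional output coordinate.
J = [[1.0, 0.5]]
var_in = [0.04, 0.01]            # mm^2: measurement error dominates
cov_out = propagate(J, var_in)
print(cov_out[0][0])             # output position variance, mm^2
```

With these illustrative numbers the measurement term contributes 0.04 mm² of the output variance against 0.0025 mm² from the fixture, mirroring the abstract's finding that planar-surface measurement error dominates the final accuracy.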
Abstract:
Background: Self-testing technology allows people to test themselves for chlamydia without professional support. This may result in reassurance and wider access to chlamydia testing, but anxiety could occur on receipt of positive results. This study aimed to identify factors important in understanding self-testing for chlamydia outside formal screening contexts, to explore the potential impacts of self-testing on individuals, and to identify theoretical constructs to form a framework for future research and intervention development. Methods: Eighteen university students participated in semi-structured interviews; eleven had self-tested for chlamydia. Data were analysed thematically using a Framework approach. Results: Perceived benefits of self-testing included its being convenient, anonymous and not requiring physical examination. There was concern about test accuracy, and some participants lacked confidence in using vulvo-vaginal swabs. While some participants expressed concern about the absence of professional support, all said they would seek help on receiving a positive result. Factors identified in Protection Motivation Theory and the Theory of Planned Behaviour, such as response efficacy and self-efficacy, were found to be highly salient to participants in thinking about self-testing. Conclusions: These exploratory findings suggest that self-testing independently of formal health care systems may impact people no more negatively than being tested by health care professionals. Participants’ perceptions about self-testing behaviour were consistent with psychological theories. Findings suggest that interventions which increase confidence in using self-tests and that provide reassurance of test accuracy may increase self-test intentions.