208 results for: Diagnostic, methods comparison
Abstract:
Background: Using the fastest dental X-ray film available is an easy way of reducing exposure to ionizing radiation. However, the diagnostic ability of fast films for the detection of proximal surface caries must be demonstrated before these films will become universally accepted. Methods: Extracted premolar and molar teeth were arranged to simulate a bitewing examination and radiographed using Ultraspeed and Ektaspeed Plus dental X-ray films. Three different exposure times were used for each film type. Six general dentists assessed the presence and depth of decay in the proximal surfaces of the radiographed teeth. The actual extent of the decay was determined by sectioning the teeth and examining them under a microscope. Results: There was no significant difference between the two films in mean correct diagnosis. However, there was a significant difference between the means for the three exposure times used for Ultraspeed film. The practitioners were not consistent in their diagnostic accuracy, nor in which film yielded their highest rate of correct diagnosis. Conclusions: Ektaspeed Plus dental X-ray film is as reliable as Ultraspeed dental X-ray film for the detection of proximal surface decay. The effect of underexposure was significant for Ultraspeed, but not for Ektaspeed Plus. Patient exposure can therefore be reduced significantly, with no loss of diagnostic ability, by changing from Ultraspeed to Ektaspeed Plus X-ray film.
Abstract:
Objectives: To study the influence of different diagnostic criteria on the prevalence of diabetes mellitus and the characteristics of those diagnosed. Design and setting: Retrospective analysis of data from the general-practice-based Australian Diabetes Screening Study (January 1994 to June 1995). Participants: 5911 people with no previous diagnosis of diabetes, two or more symptoms or risk factors for diabetes, a random venous plasma glucose (PG) level > 5.5 mmol/L and a subsequent oral glucose tolerance test (OGTT) result. Main outcome measure: Prevalence of undiagnosed diabetes based on each of three sets of criteria: 1997 criteria of the American Diabetes Association (ADA), 1996 two-step screening strategy of the Australian Diabetes Society (ADS) (modified according to ADA recommendations about the lowered diagnostic fasting PG level), and 1999 definition of the World Health Organization (WHO). Results: Prevalence estimates for undiagnosed diabetes using the American (ADA), Australian (ADS) and WHO criteria (95% CI) were 9.4% (8.7%-10.1%), 16.0% (15.3%-16.7%) and 18.1% (17.1%-19.1%), respectively. People diagnosed with diabetes by fasting PG level (common to all sets of criteria) were more likely to be male and younger than those diagnosed only by 2 h glucose challenge PG level (Australian and WHO criteria only). The Australian (ADS) stepwise screening strategy detected 88% of those who met the WHO criteria for diabetes, including about three-quarters of those with isolated post-challenge hyperglycaemia. Conclusion: The WHO criteria (which include an OGTT result) are preferable to the American (ADA) criteria (which rely solely on fasting PG level), as the latter underestimated the prevalence of undiagnosed diabetes by almost half. The Australian (ADS) strategy identified most of those diagnosed with diabetes by WHO criteria.
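The prevalence estimates above are reported with 95% confidence intervals. As a minimal illustration, a normal-approximation interval can be computed as below; the case count of 556 is hypothetical, chosen only to reproduce the reported 9.4% ADA point estimate among the 5911 participants.

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence with a normal-approximation 95% confidence interval."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p, p - z * se, p + z * se

# Hypothetical count: 556 of 5911 screened participants (≈9.4%, ADA criteria)
p, lo, hi = prevalence_ci(556, 5911)
```

The resulting interval (roughly 8.7% to 10.2%) is close to the 8.7%-10.1% reported, though the study may have used a different interval method.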
Abstract:
The aim of this study was to compare accumulated oxygen deficit data derived from two different exercise protocols, with a view to producing a less time-consuming test specifically for use with athletes. Six road and four track male endurance cyclists performed two series of cycle ergometer tests. The first series involved five 10 min sub-maximal cycle exercise bouts, a V̇O2peak test and a 115% V̇O2peak test. Data from these tests were used to estimate the accumulated oxygen deficit according to the calculations of Medbo et al. (1988). In the second series, participants performed a 15 min incremental cycle ergometer test followed, 2 min later, by a 2 min variable-resistance test in which they completed as much work as possible while pedalling at a constant rate. Analysis revealed that the accumulated oxygen deficit calculated from the first series was higher (P < 0.02) than that calculated from the second: 52.3 +/- 11.7 and 43.9 +/- 6.4 ml·kg⁻¹, respectively (mean +/- s). Other significant differences between the two protocols were observed for V̇O2peak, total work and maximal heart rate; all were higher during the modified protocol (P
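The Medbo et al. (1988) accumulated oxygen deficit is obtained by fitting a linear relationship between steady-state oxygen uptake and power output across the submaximal bouts, extrapolating that line to the supramaximal intensity to estimate oxygen demand, and subtracting the oxygen actually consumed. A minimal sketch, using entirely hypothetical numbers (the power outputs, V̇O2 values, duration and uptake below are illustrative, not data from the study):

```python
import numpy as np

# Hypothetical steady-state VO2 (L/min) at five submaximal power outputs (W)
power = np.array([150.0, 180.0, 210.0, 240.0, 270.0])
vo2   = np.array([2.10, 2.52, 2.94, 3.36, 3.78])

# Linear VO2-power relationship from the submaximal bouts
slope, intercept = np.polyfit(power, vo2, 1)

supra_power = 350.0   # supramaximal bout intensity, e.g. at 115% VO2peak power
duration_min = 2.5    # bout duration (min)
measured_o2 = 9.0     # oxygen actually consumed during the bout (L)

# Extrapolated demand minus measured uptake gives the deficit
demand = (slope * supra_power + intercept) * duration_min  # estimated O2 demand (L)
aod = demand - measured_o2                                 # accumulated O2 deficit (L)
```

Dividing `aod` by body mass would give the ml·kg⁻¹ values reported in the abstract.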
Abstract:
Recent research suggests that retrospective review of the International Classification of Diseases (ICD-9-CM) codes assigned to a patient episode identifies a similar number of healthcare-acquired surgical-site infections (SSI) as prospective surveillance by infection control practitioners (ICPs). We tested this finding by replicating the methods for 380 surgical procedures. The sensitivity and specificity of the ICPs undertaking prospective surveillance were 80% and 100%, while the sensitivity and specificity of the review of ICD-10-AM codes were 60% and 98.9%. Based on these results, we do not support retrospective review of ICD-10-AM codes in preference to prospective surveillance for SSI. (C) 2004 The Hospital Infection Society. Published by Elsevier Ltd. All rights reserved.
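Sensitivity and specificity figures like the 80% and 100% reported for the practitioners reduce to simple ratios of confusion-matrix counts. A sketch with hypothetical counts chosen only to reproduce those two figures (the study does not report the raw counts here):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split of 380 procedures: 10 true SSIs, 8 flagged by the ICPs
sens, spec = sens_spec(tp=8, fn=2, tn=370, fp=0)
```

With these counts, `sens` is 0.8 and `spec` is 1.0, matching the reported prospective-surveillance performance.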
Abstract:
In this paper, numerical simulations are used in an attempt to find optimal source profiles for high-frequency radiofrequency (RF) volume coils. Biologically loaded, shielded/unshielded circular and elliptical birdcage coils operating at 170 MHz, 300 MHz and 470 MHz are modelled using the FDTD method for both 2D and 3D cases. Taking advantage of the fact that some aspects of the electromagnetic system are linear, two approaches are proposed for determining the drives for individual elements in the RF resonator. The first method is an iterative optimization technique with a kernel for the evaluation of RF fields inside an imaging plane of a human head model, using pre-characterized sensitivity profiles of the individual rungs of a resonator; the second is a regularization-based technique, in which a sensitivity matrix is explicitly constructed and a regularization procedure is employed to solve the ill-posed problem. Test simulations show that both methods can improve the B1-field homogeneity in both focused and non-focused scenarios. While the regularization-based method is more efficient, the iterative optimization method is more flexible, as it can take into account other issues such as controlling SAR or reshaping the resonator structure. It is hoped that these schemes and their extensions will be useful for the determination of multi-element RF drives in a variety of applications.
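The regularization-based approach can be sketched as Tikhonov-regularized least squares: given a complex sensitivity matrix S mapping per-rung drives to the field at sample points, solve min over w of ||S w - b||² + λ||w||² via the normal equations. The matrix below is randomly generated and purely illustrative (the paper's S comes from FDTD-simulated rung sensitivity profiles), and the dimensions and λ are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative complex sensitivity matrix: field at 200 voxels per unit
# drive on each of 8 rungs (a real S would come from FDTD simulation)
S = rng.standard_normal((200, 8)) + 1j * rng.standard_normal((200, 8))
b_target = np.ones(200, dtype=complex)   # uniform target B1 field

lam = 1e-2                               # Tikhonov regularization parameter
# Regularized normal equations: (S^H S + lam I) w = S^H b
A = S.conj().T @ S + lam * np.eye(8)
w = np.linalg.solve(A, S.conj().T @ b_target)   # regularized drive vector

residual = np.linalg.norm(S @ w - b_target)     # field-homogeneity error
```

Increasing `lam` trades field fidelity for a smaller (better-conditioned) drive vector, which is the usual way an ill-posed inversion like this is stabilized.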
Abstract:
The comparative ability of different methods to assess the virulence of Listeria species was investigated in ten Listeria strains. All strains were initially subjected to pulsed-field gel electrophoresis analysis to determine their relatedness. Virulence characteristics were then tested by (i) determining the presence of six virulence genes by polymerase chain reaction; (ii) testing for the production of listeriolysin O, phosphatidylcholine phospholipase C, and phosphatidylinositol-specific phospholipase C; (iii) investigating the hydrophobicity of the strains; and (iv) determining the strains' ability to attach to, enter, and replicate within Caco-2 cells. Variations in most of the virulence characteristics were evident across the strains for the range of tests performed, and a wide range of anomalous results among methods was apparent. In particular, the presence of virulence genes was found to be unrelated to the production of virulence-associated proteins in vitro, while virulence protein production and hydrophobicity in Listeria monocytogenes were found to be unrelated, or only marginally related, to the ability to invade the Caco-2 cell line. It was concluded that the methods investigated were unable to consistently and unequivocally measure differences in the virulence properties of the strains.
Abstract:
Forty-four soils from under native vegetation and a range of management practices following clearing were analysed for 'labile' organic carbon (OC) using both the particulate organic carbon (POC) and the 333 mM KMnO4 (MnoxC) methods. Although there was some correlation between the two methods, the POC method was more sensitive, by about a factor of 2, to rapid loss of OC as a result of management or land-use change. Unlike the POC method, the MnoxC method was insensitive to rapid gains in total organic carbon (TOC) following establishment of pasture on degraded soil. The MnoxC method was shown to be particularly sensitive to the presence of lignin or lignin-like compounds; it is therefore likely to be very sensitive to the nature of the vegetation present at or near the time of sampling, which explains its insensitivity to OC gain under pasture. The presence of charcoal is an issue with both techniques, but whereas the charcoal contribution to the POC fraction can be assessed, the MnoxC method cannot distinguish between charcoal and most biomolecules found in soil. Because of these limitations, the MnoxC method should not be applied indiscriminately across different soil types and management practices.
Abstract:
Absolute calibration relates the measured (arbitrary) intensity to the differential scattering cross section of the sample, which contains all of the quantitative information specific to the material. The importance of absolute calibration in small-angle scattering experiments has long been recognized. This work details the absolute calibration procedure of a small-angle X-ray scattering instrument from Bruker AXS. The absolute calibration presented here was achieved by using a number of different types of primary and secondary standards. The samples were: a glassy carbon specimen, which had been independently calibrated from neutron radiation; a range of pure liquids, which can be used as primary standards as their differential scattering cross section is directly related to their isothermal compressibility; and a suspension of monodisperse silica particles for which the differential scattering cross section is obtained from Porod's law. Good agreement was obtained between the different standard samples, provided that care was taken to obtain significant signal averaging and all sources of background scattering were accounted for. The specimen best suited for routine calibration was the glassy carbon sample, due to its relatively intense scattering and stability over time; however, initial calibration from a primary source is necessary. Pure liquids can be used as primary calibration standards, but the measurements take significantly longer and are, therefore, less suited for frequent use.
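For a pure liquid, the forward differential scattering cross section follows directly from density fluctuations: dΣ/dΩ(q→0) = ρ_e² r_e² k_B T κ_T, where ρ_e is the electron density and κ_T the isothermal compressibility. A quick numerical check using approximate handbook values for water at room temperature (the constants below are assumptions, not values from this work):

```python
# dSigma/dOmega(q -> 0) = rho_e^2 * r_e^2 * k_B * T * kappa_T   (cgs units)
k_B = 1.380649e-16      # Boltzmann constant, erg/K
r_e = 2.818e-13         # classical electron radius, cm
T = 298.0               # temperature, K
rho_e = 3.34e23         # electron density of water, electrons/cm^3
kappa_T = 4.52e-11      # isothermal compressibility of water, cm^2/dyn

dsigma = rho_e**2 * r_e**2 * k_B * T * kappa_T   # cm^-1
```

This evaluates to roughly 0.016 cm⁻¹, consistent with the commonly quoted absolute level of water scattering used for primary calibration.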
Abstract:
Background: Determination of the subcellular location of a protein is essential to understanding its biochemical function. This information can provide insight into the function of hypothetical or novel proteins. These data are difficult to obtain experimentally but have become especially important since many whole genome sequencing projects have been finished and many resulting protein sequences are still lacking detailed functional information. In order to address this paucity of data, many computational prediction methods have been developed. However, these methods have varying levels of accuracy and perform differently based on the sequences that are presented to the underlying algorithm. It is therefore useful to compare these methods and monitor their performance. Results: In order to perform a comprehensive survey of prediction methods, we selected only methods that accepted large batches of protein sequences, were publicly available, and were able to predict localization to at least nine of the major subcellular locations (nucleus, cytosol, mitochondrion, extracellular region, plasma membrane, Golgi apparatus, endoplasmic reticulum (ER), peroxisome, and lysosome). The selected methods were CELLO, MultiLoc, Proteome Analyst, pTarget and WoLF PSORT. These methods were evaluated using 3763 mouse proteins from SwissProt that represent the source of the training sets used in development of the individual methods. In addition, an independent evaluation set of 2145 mouse proteins from LOCATE with a bias towards the subcellular localization underrepresented in SwissProt was used. The sensitivity and specificity were calculated for each method and compared to a theoretical value based on what might be observed by random chance. Conclusion: No individual method had a sufficient level of sensitivity across both evaluation sets that would enable reliable application to hypothetical proteins. 
All methods showed lower performance on the LOCATE dataset and variable performance on individual subcellular localizations was observed. Proteins localized to the secretory pathway were the most difficult to predict, while nuclear and extracellular proteins were predicted with the highest sensitivity.
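The per-localization sensitivity compared across the five predictors is, for each true compartment, simply the fraction of its proteins that were predicted correctly. A minimal sketch with toy labels (the label lists below are illustrative, not from either evaluation set):

```python
from collections import Counter

def per_class_sensitivity(true_locs, pred_locs):
    """Fraction of proteins in each true compartment predicted correctly."""
    total, correct = Counter(), Counter()
    for t, p in zip(true_locs, pred_locs):
        total[t] += 1
        if t == p:
            correct[t] += 1
    return {c: correct[c] / total[c] for c in total}

true_locs = ["nucleus", "nucleus", "cytosol", "extracellular", "cytosol"]
pred_locs = ["nucleus", "cytosol", "cytosol", "extracellular", "cytosol"]
sens = per_class_sensitivity(true_locs, pred_locs)
```

Here `sens` gives 0.5 for the nucleus and 1.0 for cytosol and extracellular, the kind of per-compartment breakdown in which secretory-pathway classes scored lowest.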
Abstract:
Objective: To compare the level of agreement in results obtained from four physical activity (PA) measurement instruments in use in Australia and around the world. Methods: 1,280 randomly selected participants answered two sets of PA questions by telephone: 428 answered the Active Australia (AA) and National Health Survey (NHS) questions, 427 answered the AA and CDC Behavioural Risk Factor Surveillance System (BRFSS) surveys, and 425 answered the AA survey and the short International Physical Activity Questionnaire (IPAQ). Results: Among the three pairs of survey items, the difference in mean total PA time was lowest when the AA and NHS items were asked (difference = 24 (SE: 17) minutes, compared with 144 (SE: 21) minutes for AA/BRFSS and 406 (SE: 27) minutes for AA/IPAQ). Correspondingly, prevalence estimates for 'sufficiently active' were similar for AA and NHS (56% and 55% respectively), but about 10% higher when BRFSS data were used, and about 26% higher when the IPAQ items were used, compared with estimates from the AA survey. Conclusions: The findings clearly demonstrate that there are large differences in reported PA times, and hence in prevalence estimates of 'sufficient activity', across these four measures. Implications: It is important to use the same survey consistently for population monitoring purposes. As the AA survey has now been used three times in national surveys, its continued use for population surveys is recommended so that trend data over a longer period of time can be established.
Abstract:
The BR algorithm is a novel and efficient method for finding all eigenvalues of upper Hessenberg matrices, and it has not previously been applied to eigenanalysis for power system small-signal stability. This paper analyzes the differences between the BR and QR algorithms, comparing their performance in terms of CPU time (based on stopping criteria) and storage requirements. The BR algorithm uses accelerating strategies to improve its performance when computing eigenvalues of narrowly banded, nearly tridiagonal upper Hessenberg matrices. These strategies significantly reduce computation time while maintaining a reasonable level of precision. Compared with the QR algorithm, the BR algorithm requires fewer iteration steps and less storage space without sacrificing precision in solving eigenvalue problems of large-scale power systems. Numerical examples demonstrate the efficiency of the BR algorithm for eigenanalysis of 39-, 68-, 115-, 300-, and 600-bus systems. Experimental results suggest that the BR algorithm is the more efficient algorithm for large-scale power system small-signal stability eigenanalysis.
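The BR algorithm itself is not available in standard numerical libraries, but the workflow it accelerates, reducing a system state matrix to upper Hessenberg form and then extracting all eigenvalues, can be sketched with LAPACK-backed routines (QR iteration standing in for BR; the random 20x20 matrix is a stand-in for a real power-system state matrix):

```python
import numpy as np
from scipy.linalg import hessenberg

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 20))   # stand-in for a linearized state matrix

H = hessenberg(A)                   # similarity reduction to upper Hessenberg
eigs = np.linalg.eigvals(H)         # all eigenvalues via QR iteration (LAPACK)

# Small-signal stability requires every mode to have a negative real part
stable = bool(np.all(eigs.real < 0))
```

Because the reduction is a similarity transform, H has the same eigenvalues as A; the BR algorithm's advantage, per the abstract, lies in exploiting the narrow-banded, nearly tridiagonal structure of such Hessenberg matrices.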