909 results for Data accuracy


Relevance: 30.00%

Publisher:

Abstract:

BACKGROUND: Multislice computed tomography (MSCT) is a promising noninvasive method of detecting coronary artery disease (CAD). However, most data have been obtained in selected series of patients. The purpose of the present study was to investigate the accuracy of 64-slice MSCT (64 MSCT) in daily practice, without any patient selection. METHODS AND RESULTS: Using 64-slice MSCT coronary angiography (CTA), 69 consecutive patients, 39 (57%) of whom had previously undergone stent implantation, were evaluated. The mean heart rate during the scan was 72 beats/min, the scan time 13.6 s, and the amount of contrast media 72 mL. The mean time span between invasive coronary angiography (ICAG) and CTA was 6 days. Significant stenosis was defined as a diameter reduction of >50%. Of 966 segments, 884 (92%) were assessable. Compared with ICAG, the sensitivity of CTA for diagnosing significant stenosis was 90%, specificity 94%, positive predictive value (PPV) 89%, and negative predictive value (NPV) 95%. For the 58 stented lesions, the sensitivity, specificity, PPV and NPV were 93%, 96%, 87% and 98%, respectively. On patient-based analysis, the sensitivity, specificity, PPV and NPV of CTA for detecting CAD were 98%, 86%, 98% and 86%, respectively. Eighty-two (8%) segments were not assessable because of irregular rhythm, calcification or tachycardia. CONCLUSION: 64-slice MSCT has high accuracy for the detection of significant CAD in an unselected patient population and can therefore be considered a valuable noninvasive technique.
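The segment-level figures above follow directly from a 2x2 comparison of CTA calls against ICAG. As a minimal sketch (in Python, with made-up counts rather than the study's data), the four metrics can be computed like this:

```python
# Illustrative sketch (not the study's data): per-segment diagnostic metrics
# computed from a hypothetical 2x2 table of CTA vs. ICAG results.

def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return sensitivity, specificity, PPV and NPV as fractions."""
    return {
        "sensitivity": tp / (tp + fn),   # diseased segments correctly flagged
        "specificity": tn / (tn + fp),   # healthy segments correctly cleared
        "ppv": tp / (tp + fp),           # probability of disease given a positive CTA
        "npv": tn / (tn + fn),           # probability of no disease given a negative CTA
    }

# Hypothetical counts chosen only to illustrate the calculation.
print(diagnostic_metrics(tp=90, fp=11, fn=10, tn=180))
```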

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVES: The goal of the present study was to compare the accuracy of in vivo tissue characterization obtained by intravascular ultrasound (IVUS) radiofrequency (RF) data analysis, known as Virtual Histology (VH), with the in vitro histopathology of coronary atherosclerotic plaques obtained by directional coronary atherectomy. BACKGROUND: Vulnerable plaque leading to acute coronary syndrome (ACS) has been associated with specific plaque composition, and its characterization is an important clinical focus. METHODS: VH-IVUS imaging was performed before and after a single debulking cut made by directional coronary atherectomy. The debulked region in the in vivo image was identified by comparing the pre- and post-debulking VH images, and the VH images were analyzed against the corresponding tissue cross sections. RESULTS: Fifteen patients with stable angina pectoris (AP) and 15 with ACS were enrolled. The results of the IVUS RF data analysis correlated well with the histopathologic examination (predictive accuracy across all patients: 87.1% for fibrous, 87.1% for fibro-fatty, 88.3% for necrotic core, and 96.5% for dense calcium regions). In addition, the frequency of necrotic core was significantly higher in the ACS group than in the stable AP group (in vitro histopathology: 22.6% vs. 12.6%, p = 0.02; in vivo virtual histology: 24.5% vs. 10.4%, p = 0.002). CONCLUSIONS: In vivo IVUS RF data analysis shows high accuracy when correlated with histopathology. It is a useful modality for classifying different types of coronary plaque components and may play an important role in the detection of vulnerable plaque.
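The group comparison of necrotic-core burden can be illustrated with a simple two-sample test on per-patient percentages. The abstract does not state which test was used; the sketch below applies a Mann-Whitney U test to simulated values purely to show the shape of the calculation:

```python
# Illustrative sketch: comparing per-patient necrotic-core percentages between
# an ACS group and a stable AP group. The test choice and the data are
# assumptions, not taken from the study.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
acs_pct = rng.normal(24.5, 6.0, size=15)      # hypothetical per-patient % necrotic core
stable_pct = rng.normal(10.4, 5.0, size=15)

stat, p_value = mannwhitneyu(acs_pct, stable_pct, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")
```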

Relevance: 30.00%

Publisher:

Abstract:

In most microarray technologies, a number of critical steps are required to convert raw intensity measurements into the data relied upon by data analysts, biologists and clinicians. These data manipulations, referred to as preprocessing, can influence the quality of the ultimate measurements. In the last few years, the high-throughput measurement of gene expression has been the most popular application of microarray technology. For this application, various groups have demonstrated that the use of modern statistical methodology can substantially improve the accuracy and precision of gene expression measurements, relative to ad hoc procedures introduced by designers and manufacturers of the technology. Currently, other applications of microarrays are becoming more and more popular. In this paper we describe a preprocessing methodology for a technology designed for the identification of DNA sequence variants in specific genes or regions of the human genome that are associated with phenotypes of interest such as disease. In particular, we describe methodology useful for preprocessing Affymetrix SNP chips and obtaining genotype calls from the preprocessed data. We demonstrate how our procedure improves on existing approaches using data from three relatively large studies, including one in which a large number of independent calls are available. Software implementing these ideas is available in the Bioconductor oligo package.
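As a toy illustration of what obtaining genotype calls from preprocessed intensities can look like, the sketch below clusters the A/B allele log-ratio of one SNP into three groups. This is not the algorithm implemented in the Bioconductor oligo package; the intensities, sample size and clustering choice are all hypothetical:

```python
# Toy illustration only: calling genotypes by clustering the A/B log-ratio of
# preprocessed allele intensities into three groups (BB, AB, AA). This merely
# sketches the idea that, after preprocessing, genotype calling reduces to
# separating three clusters; it is NOT the oligo package's method.
import numpy as np
from sklearn.cluster import KMeans

def call_genotypes(a_intensity: np.ndarray, b_intensity: np.ndarray) -> np.ndarray:
    contrast = np.log2(a_intensity) - np.log2(b_intensity)   # >0 favours allele A
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(contrast.reshape(-1, 1))
    # Order clusters by their mean contrast so labels map to BB < AB < AA.
    order = np.argsort(km.cluster_centers_.ravel())
    genotype_of_cluster = {order[0]: "BB", order[1]: "AB", order[2]: "AA"}
    return np.array([genotype_of_cluster[c] for c in km.labels_])

# Hypothetical intensities for one SNP across 9 samples.
a = np.array([900, 850, 880, 500, 480, 520, 60, 55, 70], dtype=float)
b = np.array([50, 60, 45, 510, 490, 530, 870, 900, 860], dtype=float)
print(call_genotypes(a, b))
```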

Relevance: 30.00%

Publisher:

Abstract:

The ability to measure gene expression on a genome-wide scale is one of the most promising accomplishments in molecular biology. Microarrays, the technology that first permitted this, were riddled with problems due to unwanted sources of variability. Many of these problems are now mitigated, after a decade's worth of statistical methodology development. The recently developed RNA sequencing (RNA-seq) technology has generated much excitement, in part due to claims of reduced variability in comparison to microarrays. However, we show that RNA-seq data exhibit unwanted and obscuring variability similar to what was first observed in microarrays. In particular, we find that GC-content has a strong sample-specific effect on gene expression measurements that, if left uncorrected, leads to false positives in downstream results. We also report on commonly observed data distortions that demonstrate the need for data normalization. Here we describe statistical methodology that improves precision by 42% without loss of accuracy. The resulting conditional quantile normalization (CQN) algorithm combines robust generalized regression, to remove systematic bias introduced by deterministic features such as GC-content, with quantile normalization, to correct for global distortions.
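A stripped-down sketch of the two ingredients of CQN follows: removing a sample-specific smooth GC-content trend, then quantile-normalizing across samples. The actual method uses robust generalized (spline) regression; the quadratic fit and the simulated data below are simplifications:

```python
# Simplified sketch of the two ideas behind CQN, not the published implementation:
# (1) remove a sample-specific smooth GC-content effect from log expression,
# (2) quantile-normalize samples to a common distribution. A quadratic fit
# stands in for the robust spline regression of the actual method.
import numpy as np

def remove_gc_effect(log_expr: np.ndarray, gc: np.ndarray) -> np.ndarray:
    """log_expr: genes x samples; gc: per-gene GC fraction."""
    corrected = np.empty_like(log_expr)
    for j in range(log_expr.shape[1]):
        coef = np.polyfit(gc, log_expr[:, j], deg=2)     # crude smooth GC trend per sample
        trend = np.polyval(coef, gc)
        corrected[:, j] = log_expr[:, j] - trend + trend.mean()
    return corrected

def quantile_normalize(x: np.ndarray) -> np.ndarray:
    """Map each sample's sorted values onto the cross-sample mean quantiles."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    mean_quantiles = np.sort(x, axis=0).mean(axis=1)
    return mean_quantiles[ranks]

# Simulated data: 1000 genes, 4 samples, with a sample-specific GC effect.
rng = np.random.default_rng(1)
gc = rng.uniform(0.3, 0.7, size=1000)
log_expr = rng.normal(5, 1, size=(1000, 4)) + np.outer(gc, rng.normal(0, 2, size=4))
normalized = quantile_normalize(remove_gc_effect(log_expr, gc))
print(normalized.shape)
```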

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVE: To review the accuracy of electrocardiography in screening for left ventricular hypertrophy in patients with hypertension. DESIGN: Systematic review of studies of the test accuracy of six electrocardiographic indexes: the Sokolow-Lyon index, Cornell voltage index, Cornell product index, Gubner index, and the Romhilt-Estes score with thresholds for a positive test of ≥4 points or ≥5 points. DATA SOURCES: Electronic databases ((Pre-)Medline, Embase), reference lists of relevant studies and previous reviews, and experts. STUDY SELECTION: Two reviewers scrutinised abstracts and examined potentially eligible studies. Studies comparing an electrocardiographic index with echocardiography in hypertensive patients and reporting sufficient data were included. DATA EXTRACTION: Data on study populations, echocardiographic criteria, and methodological quality of studies were extracted. DATA SYNTHESIS: Negative likelihood ratios, which indicate to what extent the posterior odds of left ventricular hypertrophy are reduced by a negative test, were calculated. RESULTS: 21 studies with data on 5608 patients were analysed. The median prevalence of left ventricular hypertrophy was 33% (interquartile range 23-41%) in primary care settings (10 studies) and 65% (37-81%) in secondary care settings (11 studies). The median negative likelihood ratio was similar across electrocardiographic indexes, ranging from 0.85 (range 0.34-1.03) for the Romhilt-Estes score (threshold ≥4 points) to 0.91 (0.70-1.01) for the Gubner index. Using the Romhilt-Estes score in primary care, a negative electrocardiogram result would reduce the typical pre-test probability from 33% to 31%. In secondary care the typical pre-test probability of 65% would be reduced to 63%. CONCLUSION: Electrocardiographic criteria should not be used to rule out left ventricular hypertrophy in patients with hypertension.
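The pre-test/post-test step quoted in the results is Bayes' theorem applied on the odds scale; a short worked example with a representative negative likelihood ratio of about 0.9 (close to the reported medians) reproduces the size of the quoted changes:

```python
# Worked example of the odds-scale update that links a negative likelihood
# ratio to a post-test probability. The LR value of 0.9 is a representative
# round number, not a figure taken from the review.

def post_test_probability(pre_test_prob: float, negative_lr: float) -> float:
    pre_odds = pre_test_prob / (1.0 - pre_test_prob)
    post_odds = pre_odds * negative_lr            # Bayes' theorem on the odds scale
    return post_odds / (1.0 + post_odds)

# Pre-test probability 33% (primary care) drops only to roughly 31%;
# 65% (secondary care) drops only to about 63% -- too small a change
# to rule out left ventricular hypertrophy.
print(round(post_test_probability(0.33, 0.9), 2))
print(round(post_test_probability(0.65, 0.9), 2))
```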

Relevance: 30.00%

Publisher:

Abstract:

Recent developments in clinical radiology have resulted in additional developments in the field of forensic radiology. After the implementation of cross-sectional radiology and optical surface documentation in forensic medicine, difficulties were experienced in the validation and analysis of the acquired data. To address this problem, and to allow comparison of autopsy and radiological data, a centralized internet-based database for forensic cases was created. The main goals of the database are (1) creation of a digital and standardized documentation tool for forensic-radiological and pathological findings; (2) establishing a basis for validating forensic cross-sectional radiology as a non-invasive examination method in forensic medicine, that is, comparing and evaluating the radiological and autopsy data and analyzing the accuracy of such data; and (3) providing a conduit for continuing research and education in forensic medicine. Considering the infrequent availability of CT or MRI to forensic institutions and the heterogeneous nature of case material in forensic medicine, an evaluation of the benefits and limitations of cross-sectional imaging for specific forensic questions by a single institution may be of limited value. A centralized database permitting international forensic and cross-disciplinary collaborations may provide important support for forensic-radiological casework and research.

Relevance: 30.00%

Publisher:

Abstract:

Computer assisted orthopaedic surgery (CAOS) technology has recently been introduced to overcome problems resulting from acetabular component malpositioning in total hip arthroplasty. Available navigation modules can conceptually be categorized as computed tomography (CT) based, fluoroscopy based, or image-free. The current study presents a comprehensive analysis of the computer assisted placement accuracy of acetabular cups, combining mathematical approaches, in vitro testing environments, and an in vivo clinical trial. A hybrid navigation approach combining image-free and fluoroscopic technology was chosen as the best compromise with respect to CT-based systems: it uses pointer-based digitization for easily accessible landmarks and bi-planar fluoroscopy for deep-seated landmarks. In the in vitro data, maximum deviations were 3.6 degrees for inclination and 3.8 degrees for anteversion relative to a pre-defined test position. The maximum difference between the intraoperatively calculated cup inclination and anteversion and the postoperatively measured position was 4 degrees and 5 degrees, respectively. These data coincide with worst-case scenario predictions from a statistical simulation model. The proper use of navigation technology can reduce the variability of cup placement to well within the surgical safe zone. Surgeons have to attend to a variety of error sources during the procedure, which may explain the reported steep learning curves for CAOS technologies.
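One generic way to express cup placement accuracy is the angle between the planned and the achieved cup axis. The sketch below shows that calculation on hypothetical axes; it is not tied to the specific inclination/anteversion conventions used in the study:

```python
# Minimal sketch of one way to quantify cup placement deviation: the angle
# between a planned and a measured cup axis. Generic illustration only.
import numpy as np

def angular_deviation_deg(planned_axis, measured_axis) -> float:
    a = np.asarray(planned_axis, dtype=float)
    b = np.asarray(measured_axis, dtype=float)
    cos_angle = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

# Hypothetical axes: a planned cup orientation and a slightly tilted result.
planned = [0.0, 0.0, 1.0]
measured = [0.05, 0.06, 0.997]
print(f"deviation: {angular_deviation_deg(planned, measured):.1f} degrees")
```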

Relevance: 30.00%

Publisher:

Abstract:

A patient-specific surface model of the proximal femur plays an important role in planning and supporting various computer-assisted surgical procedures, including total hip replacement, hip resurfacing, and osteotomy of the proximal femur. The common approach to deriving 3D models of the proximal femur is to use imaging techniques such as computed tomography (CT) or magnetic resonance imaging (MRI). However, the high logistical effort, the extra radiation exposure (CT imaging), and the large quantity of data to be acquired and processed make them less practical. In this paper, we present an integrated approach using a multi-level point distribution model (ML-PDM) to reconstruct a patient-specific model of the proximal femur from the sparse data available intra-operatively. Results of experiments performed on dry cadaveric bones using dozens of 3D points are presented, as well as experiments using a limited number of 2D X-ray images, which demonstrate promising accuracy of the present approach.
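The core of fitting a point distribution model to sparse data is solving for the shape-mode coefficients that best explain the digitized points. The sketch below shows a single-level, unregularized least-squares version with a made-up model; the ML-PDM of the paper is multi-level and more elaborate:

```python
# Simplified single-level point distribution model (PDM) fit, sketching the
# core of reconstruction from sparse data: solve for the shape-mode
# coefficients that best explain a handful of digitized points.
import numpy as np

def fit_pdm(mean_shape, modes, point_ids, sparse_points):
    """
    mean_shape    : (3N,) mean model, points stacked as x0,y0,z0,x1,...
    modes         : (3N, k) principal modes of variation
    point_ids     : indices of model points that were digitized
    sparse_points : (len(point_ids), 3) measured coordinates
    Returns the reconstructed full shape as an (N, 3) array.
    """
    rows = np.ravel([[3 * i, 3 * i + 1, 3 * i + 2] for i in point_ids])
    A = modes[rows, :]                             # restrict modes to observed coordinates
    d = sparse_points.ravel() - mean_shape[rows]   # residual of observations vs. mean
    b, *_ = np.linalg.lstsq(A, d, rcond=None)      # least-squares mode coefficients
    return (mean_shape + modes @ b).reshape(-1, 3)

# Tiny hypothetical model: 4 points, 2 modes, 2 digitized points.
mean_shape = np.zeros(12)
modes = np.random.default_rng(2).normal(size=(12, 2))
recon = fit_pdm(mean_shape, modes, point_ids=[0, 2],
                sparse_points=np.array([[1.0, 0.5, 0.2], [0.1, -0.3, 0.4]]))
print(recon.shape)
```

In practice the coefficients are usually regularized by the mode variances; that step is omitted here for brevity.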

Relevance: 30.00%

Publisher:

Abstract:

This paper describes a method for DRR generation and for the projection of volume gradients using hardware-accelerated 2D texture mapping and accumulation buffering, and demonstrates its application in 2D-3D registration of X-ray fluoroscopy to CT images. The robustness of the present registration scheme is ensured by a coarse-to-fine processing of the volume/image pyramids based on cubic B-splines. A human cadaveric spine specimen, together with its ground truth, was used to compare the present scheme with a purely software-based scheme in three respects: accuracy, speed, and capture range. Our experiments revealed equivalent accuracy and capture ranges but a much shorter registration time with the present scheme. More specifically, the results showed a 0.8 mm average target registration error, a 55 second average execution time per registration, and 10 mm and 10° capture ranges for the present scheme when tested on a 3.0 GHz Pentium 4 computer.
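Conceptually, a DRR integrates the CT volume along the rays of the X-ray geometry. The sketch below is a crude CPU stand-in (a parallel-beam sum along one axis of a synthetic volume), not the hardware-accelerated texture-mapping method described in the paper:

```python
# Crude CPU stand-in for DRR generation: a parallel-beam projection obtained
# by summing CT attenuation along one axis. The paper's method uses hardware-
# accelerated 2D texture mapping with the true perspective geometry; this
# sketch only conveys the "integrate the volume along rays" idea.
import numpy as np

def parallel_drr(ct_volume: np.ndarray, axis: int = 0) -> np.ndarray:
    """Sum attenuation values along the chosen axis and rescale to [0, 1]."""
    projection = ct_volume.sum(axis=axis)
    lo, hi = projection.min(), projection.max()
    return (projection - lo) / (hi - lo + 1e-12)

# Hypothetical 64^3 volume with a bright embedded block standing in for bone.
volume = np.zeros((64, 64, 64))
volume[20:40, 25:35, 30:50] = 1.0
drr = parallel_drr(volume, axis=2)
print(drr.shape, drr.max())
```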

Relevance: 30.00%

Publisher:

Abstract:

OBJECTIVES: To assess magnetic resonance colonography (MRC) for the detection of colorectal lesions using two different T1-weighted three-dimensional (3D) gradient-recalled echo (GRE) sequences and integrated parallel data acquisition (iPAT) at a 3.0 Tesla MR unit. MATERIALS AND METHODS: In this prospective study, 34 symptomatic patients underwent dark-lumen MRC at a 3.0 Tesla unit before conventional colonoscopy (CC). After colon distension with tap water, two high-resolution T1w 3D-GRE sequences [3-dimensional fast low angle shot (3D-FLASH), iPAT factor 2, and 3D volumetric interpolated breathhold examination (VIBE), iPAT factor 3] were acquired before and after bolus injection of gadolinium. MRC was evaluated prospectively, and image quality of the two sequences was assessed qualitatively and quantitatively. The findings of the same-day CC served as the standard of reference. RESULTS: MRC identified all polyps >5 mm (16 of 16) and all carcinomas (4 of 4) correctly. Fifty percent of the smaller polyps were detected, and image quality did not differ significantly between the two sequences (p > 0.6). CONCLUSIONS: MRC using 3D-GRE sequences and iPAT is feasible on 3.0 T systems. The high-resolution 3D-FLASH was slightly preferred over the 3D-VIBE because of better image quality, although the two sequences showed no statistically significant difference.

Relevance: 30.00%

Publisher:

Abstract:

INTRODUCTION: Recent advances in medical imaging have brought post-mortem, minimally invasive, computed tomography (CT)-guided percutaneous biopsy to public attention. AIMS: The goals of the present study were to facilitate and automate post-mortem biopsy, to avoid the radiation exposure to the investigator that may occur when sampling tissue under CT guidance, and to reduce the number of needle insertion attempts per target to a single puncture. METHODS AND MATERIALS: Clinically approved and post-mortem tested ACN-III biopsy core needles (14 gauge x 160 mm) with an automatic pistol device (Bard Magnum, Medical Device Technologies, Denmark) were used for probe sampling. The needles were navigated in a gelatine/pea phantom, an ex vivo porcine model and subsequently in two human bodies using a navigation system (MEM centre/ISTB Medical Application Framework, Marvin, Bern, Switzerland) with a guidance frame and a CT scanner (Emotion 6, Siemens, Germany). RESULTS: Biopsy of all peas could be performed within a single attempt. The average distance between the inserted needle tip and the pea centre was 1.4 mm (n=10; SD 0.065 mm; range 0-2.3 mm). The targets in the porcine liver were also accurately punctured; the average distance between the needle tip and the target was 0.5 mm (range 0-1 mm). Biopsies of brain, heart, lung, liver, pancreas, spleen, and kidney were performed on the human corpses, with the biopsy needle inserted only once for each target. The examination of one body, with sampling of tissue probes at the above-mentioned locations, took approximately 45 min. CONCLUSIONS: Post-mortem navigated biopsy can reliably provide tissue samples from different body locations. Since the positional data of the body and the biopsy needle are continuously updated by optical tracking, no control CT images verifying the positional data are necessary and no radiation exposure to the investigator need be taken into account. Furthermore, the number of needle insertions for each target can be reduced to a single one with adequate accuracy, as proven ex vivo, and, in contrast to conventional CT-guided biopsy, the insertion angle may be oblique. Navigation for minimally invasive tissue sampling is a useful addition to post-mortem CT-guided biopsy.

Relevance: 30.00%

Publisher:

Abstract:

In 1998-2001 Finland suffered its most severe recorded insect outbreak, covering over 500,000 hectares. The outbreak was caused by the common pine sawfly (Diprion pini L.) and has continued in the study area, Palokangas, ever since. To find a good method for monitoring this type of outbreak, this study examined the efficacy of multi-temporal ERS-2 and ENVISAT SAR imagery for estimating Scots pine (Pinus sylvestris L.) defoliation. Three methods were tested: unsupervised k-means clustering, supervised linear discriminant analysis (LDA) and logistic regression. In addition, I assessed whether harvested areas could be differentiated from defoliated forest using the same methods. Two different speckle filters were used to determine the effect of filtering on the SAR imagery and the subsequent results. Logistic regression performed best, producing a classification accuracy of 81.6% (kappa 0.62) with two classes (no defoliation, >20% defoliation). LDA accuracy with two classes was at best 77.7% (kappa 0.54) and k-means 72.8% (kappa 0.46). In general, the largest speckle filter, a 5 x 5 image window, performed best. When additional classes were added, accuracy usually degraded step by step. The results were good, but because of the restrictions of the study they should be confirmed with independent data before firm conclusions about their reliability can be drawn. The restrictions include the small amount of field data and the resulting problems with accuracy assessment (no separate testing data), as well as the lack of meteorological data from the imaging dates.
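The supervised part of such an analysis can be reproduced in outline with standard tooling: fit logistic regression and LDA to per-stand SAR features and score overall accuracy and kappa. The features and labels below are simulated stand-ins for the multi-temporal ERS-2/ENVISAT backscatter data:

```python
# Hedged sketch of the reported comparison: logistic regression vs. LDA on
# SAR-derived features, scored with overall accuracy and Cohen's kappa.
# All data here are simulated; the study's predictors were multi-temporal
# ERS-2 / ENVISAT backscatter values.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 200
backscatter = rng.normal(size=(n, 4))      # hypothetical multi-date SAR features
defoliated = (backscatter[:, 0] + 0.5 * backscatter[:, 2] + rng.normal(0, 1, n) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(backscatter, defoliated, random_state=0)

for name, model in [("logistic", LogisticRegression(max_iter=1000)),
                    ("LDA", LinearDiscriminantAnalysis())]:
    pred = model.fit(X_train, y_train).predict(X_test)
    print(name, "accuracy", round(accuracy_score(y_test, pred), 3),
          "kappa", round(cohen_kappa_score(y_test, pred), 3))
```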

Relevance: 30.00%

Publisher:

Abstract:

Riparian zones are dynamic, transitional ecosystems between aquatic and terrestrial ecosystems, with well-defined vegetation and soil characteristics. Because of their high variability, developing an all-encompassing definition for riparian ecotones is challenging; however, all riparian ecotones depend on two primary factors: the watercourse and its associated floodplain. Previous approaches to riparian boundary delineation have utilized fixed-width buffers, but this methodology has proven inadequate because it takes only the watercourse into consideration and ignores critical geomorphology and the associated vegetation and soil characteristics. Our approach offers advantages over previously used methods by utilizing: the geospatial modeling capabilities of ArcMap GIS; a better sampling technique along the watercourse that can distinguish the 50-year floodplain, which is the optimal hydrologic descriptor of riparian ecotones; the Soil Survey Geographic (SSURGO) and National Wetland Inventory (NWI) databases to distinguish contiguous areas beyond the 50-year floodplain; and land use/cover characteristics associated with the delineated riparian zones. The model utilizes spatial data readily available from federal and state agencies and geospatial clearinghouses. An accuracy assessment was performed to assess the impact on the boundary placement of the delineated variable-width riparian ecotones of varying the 50-year flood height, changing the DEM spatial resolution (1, 3, 5 and 10 m), and positional inaccuracies in the National Hydrography Dataset (NHD) streams layer. The result of this study is a robust and automated GIS-based model, attached to ESRI ArcMap software, to delineate and classify variable-width riparian ecotones.
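The floodplain step of such a delineation can be sketched on a raster: flag DEM cells within the 50-year flood height of the stream elevation that are connected to the stream. The sketch below is a simplified stand-in for the ArcMap model, which also folds in the SSURGO, NWI and land-cover layers:

```python
# Simplified raster sketch of the floodplain step only: flag DEM cells whose
# elevation is within a chosen flood height above the stream and that are
# spatially connected to the stream cells. All data are synthetic.
import numpy as np
from scipy import ndimage

def floodplain_mask(dem, stream_mask, flood_height):
    stream_elev = dem[stream_mask].min()                 # crude water-surface proxy
    candidate = dem <= stream_elev + flood_height        # low-enough cells
    labels, _ = ndimage.label(candidate | stream_mask)   # connected regions
    stream_labels = np.unique(labels[stream_mask])
    return np.isin(labels, stream_labels[stream_labels > 0]) & candidate

# Hypothetical 1 m DEM tile shaped like a valley, with a stream down one column.
dem = np.add.outer(np.zeros(50), np.abs(np.arange(50) - 25).astype(float))
stream = np.zeros_like(dem, dtype=bool)
stream[:, 25] = True
print(floodplain_mask(dem, stream, flood_height=3.0).sum(), "cells in the delineated zone")
```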

Relevance: 30.00%

Publisher:

Abstract:

Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual-tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for the bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error (the "naïve" approach) and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor of mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that were clustered. The results show a systematic bias even when all the assumptions made by the authors are met. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual-tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I show that the proposed techniques do increase the accuracy of individual-tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
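The regression calibration step can be illustrated with a minimal simulation: replace the error-prone competition measurement with its best linear predictor given an assumed error variance, then fit the logistic mortality model. The data, variable names and error variance below are hypothetical; the study estimates the error variance with the Stage and Wykoff (1998) approach:

```python
# Hedged sketch of the regression-calibration (RC) idea for an error-prone
# competition covariate under additive classical measurement error.
# Everything here is simulated and assumes a known error variance.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
competition_true = rng.normal(0.0, 1.0, n)                  # unobserved true competition X
error_var = 0.5
competition_meas = competition_true + rng.normal(0.0, np.sqrt(error_var), n)  # observed W
mortality = rng.binomial(1, 1 / (1 + np.exp(-(-2.0 + 1.2 * competition_true))))

# Regression calibration: replace W with its best linear predictor E[X | W].
var_w = competition_meas.var()
reliability = (var_w - error_var) / var_w                   # approx. var(X) / var(W)
competition_rc = competition_meas.mean() + reliability * (competition_meas - competition_meas.mean())

naive = LogisticRegression().fit(competition_meas.reshape(-1, 1), mortality)
calibrated = LogisticRegression().fit(competition_rc.reshape(-1, 1), mortality)
# The RC slope is typically closer to the data-generating value (1.2 here)
# than the attenuated naive slope.
print("naive slope:", round(naive.coef_[0, 0], 2),
      "RC slope:", round(calibrated.coef_[0, 0], 2))
```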

Relevance: 30.00%

Publisher:

Abstract:

Inexpensive, commercially available off-the-shelf (COTS) Global Positioning System (GPS) receivers have a typical accuracy of ±3 meters when augmented by the Wide Area Augmentation System (WAAS). Some applications require position measurements between two moving targets. The focus of this work is to explore the viability of using clusters of COTS GPS receivers for relative position measurements in order to improve their accuracy. An experimental study was performed using two clusters, each with five GPS receivers, separated by a fixed distance of 4.5 m. Although the relative position was fixed, the entire system of ten GPS receivers was mounted on a mobile platform. Data were recorded while moving the system over a rectangular track with a perimeter of 7564 m. The data were post-processed and yielded approximately 1-meter accuracy for the relative position vector between the two clusters.
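The cluster-averaging idea can be sketched in a few lines: average the simultaneous fixes within each cluster, then difference the averages to estimate the relative position vector. The coordinates and noise levels below are made up; in practice much of the gain also comes from common-mode GPS errors cancelling in the difference:

```python
# Minimal sketch of cluster averaging for relative positioning. Coordinates
# are hypothetical local ENU values in metres; the experiment's actual
# post-processing may differ.
import numpy as np

def relative_position(cluster_a_fixes: np.ndarray, cluster_b_fixes: np.ndarray) -> np.ndarray:
    """Each argument is an (n_receivers, 3) array of simultaneous position fixes."""
    return cluster_b_fixes.mean(axis=0) - cluster_a_fixes.mean(axis=0)

rng = np.random.default_rng(5)
truth_offset = np.array([4.5, 0.0, 0.0])            # fixed 4.5 m separation
cluster_a = rng.normal(0.0, 3.0, size=(5, 3))       # 5 receivers, ~3 m noise each
cluster_b = truth_offset + rng.normal(0.0, 3.0, size=(5, 3))
estimate = relative_position(cluster_a, cluster_b)
print(estimate, "error:", np.linalg.norm(estimate - truth_offset))
```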