970 results for Digital medical images
Abstract:
OBJECTIVE: The aim of this article was to apply psychometric theory to develop and validate a visual grading scale for assessing the visual perception of digital image quality in anteroposterior (AP) pelvis radiography. METHODS: Psychometric theory was used to guide scale development. Seven phantom and seven cadaver images of visually and objectively predetermined quality were used to help assess scale reliability and validity. 151 volunteers scored phantom images, and 184 volunteers scored cadaver images. Factor analysis and Cronbach's alpha were used to assess scale validity and reliability. RESULTS: A 24-item scale was produced. Aggregated mean volunteer scores for each image correlated with the rank order of the visually and objectively predetermined image qualities. Scale items had good interitem correlation (≥0.2) and high factor loadings (≥0.3). Cronbach's alpha (reliability) revealed that the scale has acceptable levels of internal reliability for both phantom and cadaver images (α = 0.8 and 0.9, respectively). Factor analysis suggested that the scale is multidimensional (assessing multiple quality themes). CONCLUSION: This study represents the first full development and validation of a visual image quality scale using psychometric theory. It is likely that this scale will have clinical, training and research applications. ADVANCES IN KNOWLEDGE: This article presents data to create and validate visual grading scales for radiographic examinations. The visual grading scale, for AP pelvis examinations, can act as a validated tool for future research, teaching and clinical evaluations of image quality.
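For readers unfamiliar with the reliability coefficient cited above, the following is a minimal sketch, not the authors' code, of how Cronbach's alpha can be computed from a respondents-by-items score matrix; the data shown are invented for illustration.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of scores."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1.0 - item_variances.sum() / total_variance)

# Hypothetical example: 5 volunteers rating 4 scale items on a 1-5 Likert scale
ratings = np.array([
    [4, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 2],
    [3, 3, 3, 4],
    [4, 5, 4, 4],
])
print(round(cronbach_alpha(ratings), 2))
```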
Abstract:
The problem of understanding how humans perceive the quality of a reproduced image is of interest to researchers of many fields related to vision science and engineering: optics and material physics, image processing (compression and transfer), printing and media technology, and psychology. A measure for visual quality cannot be defined without ambiguity because it is ultimately the subjective opinion of an “end-user” observing the product. The purpose of this thesis is to devise computational methods to estimate the overall visual quality of prints, i.e. a numerical value that combines all the relevant attributes of the perceived image quality. The problem is limited to the perceived quality of printed photographs from the viewpoint of a consumer, and moreover, the study focuses only on digital printing methods, such as inkjet and electrophotography. The main contributions of this thesis are two novel methods to estimate the overall visual quality of prints. In the first method, the quality is computed as a visible difference between the reproduced image and the original digital (reference) image, which is assumed to have an ideal quality. The second method utilises instrumental print quality measures, such as colour densities, measured from printed technical test fields, and connects the instrumental measures to the overall quality via subjective attributes, i.e. attributes that directly contribute to the perceived quality, using a Bayesian network. Both approaches were evaluated and verified with real data, and shown to predict the subjective evaluation results well.
Abstract:
Diabetes is a rapidly increasing worldwide problem which is characterised by defective metabolism of glucose that causes long-term dysfunction and failure of various organs. The most common complication of diabetes is diabetic retinopathy (DR), which is one of the primary causes of blindness and visual impairment in adults. The rapid increase of diabetes pushes the limits of current DR screening capabilities, for which digital imaging of the eye fundus (retinal imaging) and automatic or semi-automatic image analysis algorithms provide a potential solution. In this work, the use of colour in the detection of diabetic retinopathy is statistically studied using a supervised algorithm based on one-class classification and Gaussian mixture model estimation. The presented algorithm distinguishes a certain diabetic lesion type from all other possible objects in eye fundus images by estimating only the probability density function of that lesion type. For the training and ground truth estimation, the algorithm combines manual annotations of several experts, for which the best practices were experimentally selected. By assessing the algorithm’s performance while conducting experiments with the colour space selection, both illuminance and colour correction, and background class information, the use of colour in the detection of diabetic retinopathy was quantitatively evaluated. Another contribution of this work is the benchmarking framework for eye fundus image analysis algorithms needed for the development of automatic DR detection algorithms. The benchmarking framework provides guidelines on how to construct a benchmarking database that comprises true patient images, ground truth, and an evaluation protocol. The evaluation is based on standard receiver operating characteristics analysis and it follows medical practice in decision making, providing protocols for image- and pixel-based evaluations. During the work, two public medical image databases with ground truth were published: DIARETDB0 and DIARETDB1. The framework, DR databases and the final algorithm are made public on the web to set the baseline results for automatic detection of diabetic retinopathy. Although deviating from the general context of the thesis, a simple and effective optic disc localisation method is presented. The optic disc localisation is discussed, since normal eye fundus structures are fundamental in the characterisation of DR.
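As a rough illustration of the one-class, density-estimation idea described in this abstract (not the thesis implementation), a Gaussian mixture model can be fitted to colour samples of a single lesion type and new pixels scored by their likelihood under that model; the feature choice, component count and threshold below are assumptions, and the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training data: RGB values of pixels annotated as one lesion type
rng = np.random.default_rng(0)
lesion_rgb = rng.normal(loc=[180, 60, 40], scale=10, size=(500, 3))

# Estimate the probability density of the lesion class only (one-class approach)
gmm = GaussianMixture(n_components=3, covariance_type="full", random_state=0)
gmm.fit(lesion_rgb)

# Score every pixel of a new fundus image by its log-likelihood under the lesion model
image = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
log_likelihood = gmm.score_samples(image.reshape(-1, 3)).reshape(64, 64)

# Pixels above an (illustrative) likelihood threshold become lesion candidates
threshold = -12.0
lesion_mask = log_likelihood > threshold
print(f"candidate lesion pixels: {lesion_mask.sum()}")
```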
Abstract:
With the increasing use of digital media, the need for methods of multimedia protection has become extremely important. The number of solutions to the problem, from encryption to watermarking, is large and growing every year. In this work digital image watermarking is considered, specifically a novel method of digital watermarking of color and spectral images. An overview of existing methods for watermarking color and grayscale images is given in the paper. Methods using independent component analysis (ICA) for detection and those using the discrete wavelet transform (DWT) and discrete cosine transform (DCT) are considered in more detail. The novel watermarking method proposed in this paper allows a color or spectral watermark image to be embedded into a color or spectral image, respectively, and the watermark to be successfully extracted from the resulting watermarked image. A number of experiments have been performed on the quality of extraction depending on the parameters of the embedding procedure. Another set of experiments tested the robustness of the proposed algorithm. Three techniques were chosen for that purpose: the median filter, low-pass filter (LPF) and discrete cosine transform (DCT), which are part of the widely known StirMark image watermarking robustness test. The study shows that the proposed watermarking technique is fragile, i.e. the watermark is altered by simple image processing operations. Moreover, we have found that the contents of the image to be watermarked do not affect the quality of the extraction. The mixing coefficients, which determine the contributions of the key and watermark images to the result, should not exceed 1% of the original. The proposed algorithm has proven to be successful in the task of watermark embedding and extraction.
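The abstract does not give the embedding equations, so the following is only a loose, hypothetical sketch of additive mixing with small coefficients (kept below 1% of the host, as mentioned above) and non-blind extraction; it is not the proposed ICA/DWT/DCT-based method, and all images are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
host = rng.integers(0, 256, size=(128, 128, 3)).astype(float)       # host colour image
watermark = rng.integers(0, 256, size=(128, 128, 3)).astype(float)  # watermark image
key = rng.integers(0, 256, size=(128, 128, 3)).astype(float)        # key image

a_w, a_k = 0.005, 0.005   # mixing coefficients, kept below 1% of the host

# Embedding: mix small fractions of the key and watermark into the host
watermarked = host + a_w * watermark + a_k * key

# Extraction (non-blind): subtract the known host and key, then rescale
recovered = (watermarked - host - a_k * key) / a_w
print(np.allclose(recovered, watermark))  # True: exact recovery when no attack is applied
```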
Abstract:
Fifty bursae of Fabricius (BF) were examined by conventional optical microscopy, and digital images were acquired and processed using Matlab® 6.5 software. An artificial neural network (ANN) was generated using Neuroshell® Classifier software, and the optical and digital data were compared. The ANN was able to make a comparable classification of digital and optical scores. The ANN correctly classified the majority of the follicles, reaching a sensitivity of 89% and a specificity of 96%. When the follicles were scored and grouped in a binary fashion, the sensitivity increased to 90% and the specificity reached 92%. These results demonstrate that the use of digital image analysis and an ANN is a useful tool for the pathological classification of BF lymphoid depletion. In addition, it provides objective results that allow the magnitude of the error in diagnosis and classification to be measured, therefore making comparisons between databases feasible.
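For reference, the sensitivity and specificity figures reported above can be computed from a binary confusion matrix as in this short sketch; the labels are made up and do not come from the study.

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true positive rate) and specificity (true negative rate) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative example: 1 = depleted follicle, 0 = normal follicle
y_true = [1, 1, 1, 0, 0, 0, 1, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```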
Abstract:
The purpose of this investigation was to demonstrate the feasibility of a biopsy technique by performing serial evaluations of tissue samples of the forelimb superficial digital flexor tendon (SDFT) in healthy horses and in horses subjected to superficial digital flexor tendonitis induction. Eight adult horses were evaluated in two different phases (P): control (P1) and tendonitis-induced (P2). At P1, the horses were subjected to five SDFT biopsies of the left forelimb at 24-hour (h) intervals. Clinical and ultrasonographic (US) examinations were performed immediately before the tendonitis induction and 24 and 48 h after the procedure. The biopsied tendon tissues were analyzed through histology. P2 evaluations were carried out three months later, when the same horses were subjected to tendonitis induction by injection of bacterial collagenase into the right forelimb SDFT. P2 clinical and US evaluations and SDFT biopsies were performed before injury induction and at the following times afterwards: 24, 48, 72 and 96 h, and 15, 30, 60, 90, 120 and 150 days. The biopsy technique proved easy and quick to perform and yielded good tendon samples for histological evaluation. At P1, the horses showed no signs of localised inflammation, pain or lameness, nor SDFT US alterations after the biopsies, showing that the biopsy procedure per se did not compromise tendon integrity. Therefore, this procedure is feasible for routine tendon histological evaluations. The P2 findings demonstrate a relation between the US and histology evaluations concerning the evolution of the induced tendonitis. However, the clinical signs of tendonitis poorly reflected the microscopic tissue condition, indicating that clinical presentation is not a reliable parameter for monitoring injury development. The presented method of biopsying SDFT tissue in horses enables the serial collection of material for histological analysis, causing neither clinical signs nor tendon damage visible on US images. Therefore, this technique allows tendonitis to be monitored and can be considered an excellent tool in protocols for evaluating SDFT injury.
Abstract:
Shadow Moiré fringe patterns are level lines of equal depth generated by interference between a master grid and its shadow projected on the surface. In a simplistic approach, the minimum error is on the order of the master grid pitch, that is, always larger than 0.1 mm, resulting in an experimental technique of low precision. The use of phase shifting increases the accuracy of the Shadow Moiré technique. The current work uses the phase shifting method to determine the three-dimensional shape of surfaces using isothamic fringe patterns and digital image processing. The study presents the method and applies it to images obtained by simulation for error evaluation, as well as to a buckled plate, obtaining excellent results. The method proves particularly useful for reducing the errors in the interpretation of Moiré fringes that can adversely affect the calculation of displacements in pieces containing many concave and convex regions in relatively small areas.
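The abstract does not reproduce the phase-shifting equations; a common four-step variant (assumed here purely for illustration, not necessarily the variant used in the study) recovers the wrapped phase from four fringe images shifted by 90° each, after which the depth map follows from the unwrapped phase.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images with 0, 90, 180 and 270 degree phase shifts."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic fringe images for a tilted surface (illustrative only)
x = np.linspace(0, 4 * np.pi, 256)
true_phase = np.tile(x, (256, 1))
shifts = [0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [1.0 + 0.5 * np.cos(true_phase + s) for s in shifts]

wrapped = four_step_phase(*frames)
unwrapped = np.unwrap(wrapped, axis=1)   # depth is proportional to the unwrapped phase
print(wrapped.shape, unwrapped.shape)
```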
Abstract:
Poster at Open Repositories 2014, Helsinki, Finland, June 9-13, 2014
Abstract:
Ventricular late potentials are low-amplitude signals originating from damaged myocardium and detected on the body surface by ECG filtering and averaging. Digital filters present in commercial equipment may interfere with the ability to stratify arrhythmia risk. We compared 40-Hz BiSpec (BI) and classical 40- to 250-Hz band-pass Butterworth bidirectional (BD) filters in terms of their impact on time domain variables and diagnostic properties. In a transverse retrospective age-adjusted case-control study, 221 subjects with sinus rhythm and without bundle branch block were divided into three groups after signal-averaged ECG acquisition: GI (N = 40), clinically normal controls; GII (N = 158), subjects with coronary heart disease without sustained monomorphic ventricular tachycardia (SMVT); and GIII (N = 23), subjects with heart disease and documented SMVT. Conventional variables analyzed from vector magnitude data, after averaging to a final noise level of 0.3 µV, were obtained by applying each filter to the averaged signal and were evaluated in pairs by numerical comparison and by diagnostic agreement assessment, using conventional and optimized thresholds of normality. Significant differences were found between BI and BD variables in all groups, with diagnostic results showing significant disagreement between the two filters [kappa value of 0.61 (P<0.05) for GII and 0.31 for GIII (P = NS)]. Sensitivity for SMVT was lower with BI than with BD (65.2 vs 91.3%, respectively, P<0.05). The filters provided significantly different numerical and diagnostic results, and the BI filter showed only limited clinical application to risk stratification of ventricular arrhythmia.
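As a rough sketch of what a bidirectional (forward-backward) Butterworth band-pass in the 40-250 Hz range looks like in code, scipy's filtfilt can be used; this is a generic illustration, not the commercial equipment's filter, and the filter order, sampling rate and test signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000.0                      # assumed sampling rate, Hz
lowcut, highcut, order = 40.0, 250.0, 4

# Design a Butterworth band-pass and apply it bidirectionally (zero phase distortion)
b, a = butter(order, [lowcut / (fs / 2), highcut / (fs / 2)], btype="band")

t = np.arange(0, 1.0, 1 / fs)
averaged_ecg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 120 * t)
filtered = filtfilt(b, a, averaged_ecg)  # forward-backward filtering, as in a BD filter
print(filtered.shape)
```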
Abstract:
Important biological and clinical features of malignancy are reflected in its transcript pattern. Recent advances in gene expression technology and informatics have provided a powerful new means to obtain and interpret these expression patterns. A comprehensive approach to expression profiling is serial analysis of gene expression (SAGE), which provides digital information on transcript levels. SAGE works by counting transcripts and storing these digital values electronically, providing absolute gene expression levels that make historical comparisons possible. SAGE produces a comprehensive profile of gene expression and can be used to search for candidate tumor markers or antigens in a limited number of samples. The Cancer Genome Anatomy Project has created a SAGE database of human gene expression levels for many different tumors and normal reference tissues and provides online tools for viewing, comparing, and downloading expression profiles. Digital expression profiling using SAGE and informatics have been useful for identifying genes that have a role in tumor invasion and other aspects of tumor progression.
Abstract:
In this research, the effectiveness of Naive Bayes and Gaussian Mixture Model classifiers for segmenting exudates in retinal images is studied, and the results are evaluated with metrics commonly used in medical imaging. In addition, a color variation analysis of retinal images is carried out to find how effectively retinal images can be segmented using only the color information of the pixels.
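As a minimal illustration of pixel-colour classification of the kind studied here (not the author's code or data), a naive Bayes classifier can be trained on labelled RGB samples and applied to every pixel of an image; the class colours and image below are synthetic.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)
# Synthetic training pixels: bright yellowish exudates vs. reddish background
exudate_rgb = rng.normal([220, 200, 80], 15, size=(300, 3))
background_rgb = rng.normal([150, 60, 40], 15, size=(300, 3))
X = np.vstack([exudate_rgb, background_rgb])
y = np.array([1] * 300 + [0] * 300)   # 1 = exudate, 0 = background

clf = GaussianNB().fit(X, y)

# Classify every pixel of a (synthetic) retinal image by its colour alone
image = rng.integers(0, 256, size=(64, 64, 3)).astype(float)
mask = clf.predict(image.reshape(-1, 3)).reshape(64, 64)
print(f"pixels labelled as exudate: {int(mask.sum())}")
```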
Abstract:
The present study describes an auxiliary tool for the diagnosis of left ventricular (LV) segmental wall motion (WM) abnormalities based on color-coded echocardiographic WM images. An artificial neural network (ANN) was developed and validated for grading LV segmental WM using data from color kinesis (CK) images, a technique developed to display the timing and magnitude of global and regional WM in real time. We evaluated 21 normal subjects and 20 patients with LVWM abnormalities revealed by two-dimensional echocardiography. CK images were obtained in two sets of viewing planes. A method was developed to analyze CK images, providing quantitation of fractional area change in each of the 16 LV segments. Two experienced observers analyzed LVWM from two-dimensional images and scored them as: 1) normal, 2) mild hypokinesia, 3) moderate hypokinesia, 4) severe hypokinesia, 5) akinesia, and 6) dyskinesia. Based on expert analysis of 10 normal subjects and 10 patients, we trained a multilayer perceptron ANN using a back-propagation algorithm to provide automated grading of LVWM, and this ANN was then tested on the remaining subjects. Excellent concordance between expert and ANN analysis was shown by ROC curve analysis, with a measured area under the curve of 0.975. An excellent correlation was also obtained for the global LV segmental WM index by expert and ANN analysis (R² = 0.99). In conclusion, the ANN showed high accuracy for automated semi-quantitative grading of WM based on CK images. This technique can be an important aid, improving diagnostic accuracy and reducing inter-observer variability in scoring segmental LVWM.
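Purely to illustrate the workflow of a back-propagation-trained multilayer perceptron evaluated by ROC analysis, and not the authors' network, features or data, a sketch with scikit-learn might look like this; the 16 per-segment features and labels are simulated.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical features: fractional area change per LV segment; label 1 = abnormal WM
X = rng.normal(size=(400, 16))
y = (X[:, :4].mean(axis=1) + 0.3 * rng.normal(size=400) < -0.2).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Multilayer perceptron trained by back-propagation (gradient-based solver)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

auc = roc_auc_score(y_test, mlp.predict_proba(X_test)[:, 1])
print(f"ROC area under the curve: {auc:.3f}")
```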
Abstract:
Computed tomography (CT) images are routinely used to assess ischemic brain stroke in the acute phase. They can provide important clues about whether to treat the patient by thrombolysis with tissue plasminogen activator. However, in the acute phase, the lesions may be difficult to detect in the images using standard visual analysis. The objective of the present study was to determine whether texture analysis techniques applied to CT images of stroke patients could differentiate between normal tissue and affected areas that usually go unperceived under visual analysis. We performed a pilot study in which texture analysis, based on the gray level co-occurrence matrix, was applied to the CT brain images of 5 patients and 5 control subjects, and the results were compared by discriminant analysis. Thirteen regions of interest, corresponding to areas potentially affected by ischemic stroke, were selected for calculation of texture parameters. All regions of interest for all subjects were classified as lesional or non-lesional tissue by an expert neuroradiologist. Visual assessment of the discriminant analysis graphs showed differences in the values of texture parameters between patients and controls, and also between texture parameters for lesional and non-lesional tissue of the patients. This suggests that texture analysis can indeed be a useful tool to help neurologists in the early assessment of ischemic stroke and quantification of the extent of the affected areas.
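Gray level co-occurrence matrix features of the kind used above can be computed, for example, with scikit-image; the patch, offsets and properties below are assumptions, not the study's parameter choices.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(4)
# Synthetic 8-bit region of interest standing in for a CT patch
roi = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)

# Co-occurrence matrix for a one-pixel offset in four directions
glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Classical Haralick-style texture parameters derived from the matrix
for prop in ("contrast", "homogeneity", "energy", "correlation"):
    print(prop, graycoprops(glcm, prop).mean())
```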
Abstract:
The loss of brain volume has been used as a marker of tissue destruction and can serve as an index of the progression of neurodegenerative diseases, such as multiple sclerosis. In the present study, we tested a new method for tissue segmentation based on pixel intensity thresholding, using generalized Tsallis entropy to determine a statistical segmentation parameter for each class of brain tissue. We compared the performance of this method using a range of different q parameters and found a different optimal q parameter for white matter, gray matter, and cerebrospinal fluid. Our results support the conclusion that the differences in structural correlations and scale-invariant similarities present in each tissue class can be captured by generalized Tsallis entropy, yielding the intensity limits for the separation of these tissue classes. To test this method, we used it for analysis of brain magnetic resonance images of 43 patients and 10 healthy controls matched for gender and age. The values found for the entropic q index were 0.2 for cerebrospinal fluid, 0.1 for white matter and 1.5 for gray matter. With this algorithm, we could detect an annual brain volume loss of 0.98% for the patients, in agreement with literature data. Thus, we can conclude that Tsallis entropy adds advantages to the automatic segmentation of tissue classes, which had not been demonstrated previously.
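The generalized Tsallis entropy used above has the form S_q = (1 - sum_i p_i^q) / (q - 1). The following simplified sketch (an assumed illustration, not the authors' algorithm) picks the intensity threshold that maximises the summed Tsallis entropies of the two classes it creates; the q value and test data are arbitrary.

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p**q)) / (q - 1) of a probability vector."""
    p = p[p > 0]
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def tsallis_threshold(image, q=0.8, levels=256):
    """Pick the grey level that maximises the summed Tsallis entropy of both classes."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, levels - 1):
        p_low, p_high = p[:t], p[t:]
        w_low, w_high = p_low.sum(), p_high.sum()
        if w_low == 0 or w_high == 0:
            continue
        score = tsallis_entropy(p_low / w_low, q) + tsallis_entropy(p_high / w_high, q)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

rng = np.random.default_rng(5)
intensities = np.concatenate([rng.normal(60, 10, 2000), rng.normal(160, 10, 2000)]).clip(0, 255)
print(tsallis_threshold(intensities))
```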
Abstract:
We propose to build a 3D digital atlas containing the mean characteristics and the variability of an organ's morphology. Our work is applied in particular to the construction of a 3D digital atlas of the entire human cornea, including the anterior and posterior surfaces, from the topographic maps provided by the Orbscan II topographer. We first normalise an entire population of corneas. In this step, we rely on the ICP (iterative closest point) registration algorithm to simultaneously align the anterior and posterior surfaces of a population of corneas to the anterior and posterior surfaces of a reference cornea. Specifically, we developed a variant of the ICP algorithm adapted to corneal images (maps) that accounts for scale changes during registration and that uses Euclidean-distance nearest-neighbour search to establish point correspondences. We then built the corneal atlas by computing the means of the registered anterior and posterior surface elevations and their associated standard deviations. A population of 100 healthy corneas was used to build the normal corneal atlas. To visualise the atlas, we used colour topographic maps similar to those already provided by current topography systems. Finally, observations were made on the corneal atlas that reflect its accuracy and support a better understanding of corneal anatomy.
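One iteration of a scale-aware ICP of the kind described above can be sketched as follows: Euclidean nearest-neighbour matching, then a similarity transform (rotation, scale, translation) estimated by a Procrustes/Umeyama fit. This is a generic illustration, not the thesis implementation, and the toy point clouds are invented.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_iteration(source, target):
    """One ICP step with scale: match by nearest neighbour, then fit s, R, t."""
    # 1. Euclidean nearest-neighbour correspondence
    matched = target[cKDTree(target).query(source)[1]]

    # 2. Similarity transform (Umeyama fit): rotation R, scale s, translation t
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    src_c, tgt_c = source - mu_s, matched - mu_t
    U, S, Vt = np.linalg.svd(tgt_c.T @ src_c / len(source))
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, 1.0, d])            # guard against reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_t - s * R @ mu_s
    return s * (R @ source.T).T + t

# Toy example: a scaled, shifted copy of a random corneal-like point cloud
rng = np.random.default_rng(6)
reference = rng.normal(size=(200, 3))
moving = 1.2 * reference + np.array([0.5, -0.3, 0.1])
aligned = moving
for _ in range(10):
    aligned = icp_iteration(aligned, reference)
print(np.abs(aligned - reference).mean())   # residual misalignment after 10 iterations
```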