61 results for Image Classification


Relevance: 20.00%

Publisher:

Abstract:

Background: Computed tomography (CT) is one of the most used modalities for diagnostics in paediatric populations, which is a concern as it also delivers a high patient dose. Research has focused on developing computer algorithms that provide better image quality at lower dose. The iterative reconstruction algorithm Sinogram-Affirmed Iterative Reconstruction (SAFIRE) was introduced as a new technique that reduces noise to increase image quality. Purpose: The aim of this study is to compare SAFIRE with the current gold standard, Filtered Back Projection (FBP), and assess whether SAFIRE alone permits a reduction in dose while maintaining image quality in paediatric head CT. Methods: Images of a paediatric head phantom were acquired on a SIEMENS SOMATOM PERSPECTIVE 128 using a modulated acquisition. 54 images were reconstructed using FBP and 5 different strengths of SAFIRE. Objective image quality was determined by measuring SNR and CNR. Visual image quality was determined by 17 observers with different levels of radiographic experience. Images were randomized and displayed using two-alternative forced choice (2AFC) software; observers scored the images by answering 5 questions on a Likert scale. Results: At different dose levels, SAFIRE significantly increased SNR (up to 54%) in the acquired images compared to FBP at 80 kVp (5.2-8.4), 110 kVp (8.2-12.3) and 130 kVp (8.8-13.1). Visual image quality was higher with increasing SAFIRE strength; the highest image quality was scored at SAFIRE level 3 and above. Conclusion: The SAFIRE algorithm is suitable for image noise reduction in paediatric head CT. Our data demonstrate that SAFIRE enhances SNR while reducing noise, with a possible dose reduction of 68%.
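
As a rough illustration of the objective measures mentioned above, the sketch below computes SNR and CNR from mean and standard deviation statistics of square regions of interest (ROIs). The ROI positions, the synthetic image and the exact SNR/CNR definitions are assumptions for illustration only; the abstract does not specify them.

```python
import numpy as np

def roi_stats(image, row, col, size=20):
    """Mean and standard deviation of a square ROI (hypothetical coordinates)."""
    patch = image[row:row + size, col:col + size]
    return patch.mean(), patch.std()

def snr_cnr(image, signal_roi, background_roi):
    """Common SNR/CNR definitions; the study may use different ones."""
    mu_s, _ = roi_stats(image, *signal_roi)
    mu_b, sd_b = roi_stats(image, *background_roi)
    snr = mu_s / sd_b                     # signal mean over background noise
    cnr = abs(mu_s - mu_b) / sd_b         # contrast over background noise
    return snr, cnr

# Synthetic image standing in for a reconstructed CT slice.
img = np.random.normal(100.0, 5.0, (256, 256))
img[100:140, 100:140] += 30.0             # simulated high-attenuation insert
print(snr_cnr(img, signal_roi=(110, 110), background_roi=(10, 10)))
```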

Relevance: 20.00%

Publisher:

Abstract:

Purpose: To investigate whether standard X-ray acquisition factors for orbital radiographs are suitable for the detection of ferromagnetic intra-ocular foreign bodies in patients undergoing MRI. Method: 35 observers, at varied levels of education in radiography, attending a European Dose Optimisation EURASMUS Summer School were asked to score 24 images of varying acquisition factors against a clinical standard (reference image) using a two-alternative forced choice (2AFC) method. The observers were provided with 12 questions and a 5-point Likert scale. Statistical tests were used to validate the scale, and scale reliability was also measured. The images which scored equal to, or better than, the reference image (36) were ranked alongside their corresponding effective dose (E); the lowest-dose image scoring equal to or better than the reference was considered to define the new optimum acquisition factors. Results: Four images emerged as equal to, or better than, the reference in terms of image quality. The images were then ranked in order of E. Only one image that scored the same as the reference had a lower dose. The reference image had a mean E of 3.31 μSv; the image that scored the same had an E of 1.8 μSv. Conclusion: Against the current clinical standard exposure factors of 70 kVp, 20 mAs and the use of an anti-scatter grid, one image proved to have a lower E whilst maintaining the same level of image quality and lesion visibility. It is suggested that the new exposure factors should be 60 kVp, 20 mAs, still including the use of an anti-scatter grid.
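
A minimal sketch of the selection step described above: among images whose observer score is at least that of the reference, pick the one with the lowest effective dose. The scores and doses below are made-up illustrative values, not the study's data.

```python
# Hypothetical image-quality scores and effective doses (uSv).
reference_score = 36.0
images = [
    {"id": "A", "score": 36.0, "dose_uSv": 1.8},
    {"id": "B", "score": 37.5, "dose_uSv": 3.0},
    {"id": "C", "score": 35.0, "dose_uSv": 1.2},   # below the reference, excluded
    {"id": "D", "score": 36.0, "dose_uSv": 3.31},
]

candidates = [img for img in images if img["score"] >= reference_score]
best = min(candidates, key=lambda img: img["dose_uSv"])
print(best["id"], best["dose_uSv"])   # lowest-dose image matching the reference
```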

Relevance: 20.00%

Publisher:

Abstract:

This paper reviews the literature on lowering the dose to paediatric patients through the use of exposure factors and additional filtration. Dose reference levels set by the International Commission on Radiological Protection (ICRP) are considered. Guidance put in place in 1996 requires updating to bring it into line with modern imaging equipment. There is a wide range of literature specifying that grids should not be used on paediatric patients. Although much of the literature advocates additional filtration, views contrast on the relative benefits of aluminium versus copper filtration and on their effects on dose reduction and image quality. Changing kVp and mAs has an effect on the dose to the patient and on image quality. Collimation protects adjacent structures whilst reducing scattered radiation.

Relevance: 20.00%

Publisher:

Abstract:

Purpose: To determine whether using different combinations of kVp and mAs with additional filtration can reduce the effective dose to a paediatric phantom whilst maintaining diagnostic image quality. Methods: 27 images of a paediatric AP pelvis phantom were acquired with different kVp, mAs and additional copper filtration. Images were displayed on quality-controlled monitors under dimmed lighting. Ten diagnostic radiographers (5 students and 5 experienced radiographers) had eye tests to assess visual acuity before rating the images. Each image was rated for visual image quality against a reference image using two-alternative forced choice (2AFC) software with a 5-point Likert scale. Physical measures (SNR and CNR) were also taken to assess image quality. Results: Of the 27 images rated, 13 were of acceptable image quality and had a dose lower than the image with standard acquisition parameters. Two were produced without filtration, 6 with 0.1 mm and 5 with 0.2 mm copper filtration. Statistical analysis found that inter-rater and intra-rater reliability was high. Discussion: It is possible to obtain an image of acceptable image quality with a dose that is lower than published guidelines. Some areas of the study could be improved, including using a wider range of kVp and mAs to give an exact set of parameters to use. Conclusion: Additional filtration has been identified as a major tool for reducing effective dose whilst maintaining acceptable image quality in a 5-year-old phantom.
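
The abstract reports high inter- and intra-rater reliability without naming the statistic used. Purely as an illustration, the sketch below computes Cohen's kappa for one pair of raters on hypothetical 5-point Likert ratings; the study may have used a different agreement measure.

```python
import numpy as np

def cohens_kappa(rater_a, rater_b, n_categories=5):
    """Cohen's kappa for two raters scoring the same items on a 5-point scale."""
    confusion = np.zeros((n_categories, n_categories))
    for a, b in zip(rater_a, rater_b):
        confusion[a - 1, b - 1] += 1
    confusion /= confusion.sum()
    observed = np.trace(confusion)                           # observed agreement
    expected = (confusion.sum(0) * confusion.sum(1)).sum()   # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical Likert ratings by two radiographers on the same 10 images.
print(cohens_kappa([3, 4, 4, 5, 2, 3, 4, 5, 3, 4],
                   [3, 4, 5, 5, 2, 3, 4, 4, 3, 4]))
```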

Relevance: 20.00%

Publisher:

Abstract:

Behavioral biometrics is one of the areas with growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown this to be a promising trait, with the potential to serve as a good complement to other existing, more established modalities, due to its intrinsic characteristics. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work is based on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and involved pre-processing of the ECG signals and evaluation of two classification approaches targeted at real-time or near-real-time applications. Preliminary results show that this system leads to competitive results both for authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
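
The abstract does not disclose the pre-processing steps or the two classifiers that were evaluated. As a generic illustration of an ECG-biometrics pipeline, the sketch below band-pass filters the signal, segments heartbeats around detected R peaks, and identifies the subject by nearest-neighbour matching against per-subject mean templates; the function names, filter settings and sampling rate are assumptions, not the authors' method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def preprocess(ecg, fs=1000.0):
    """Band-pass filter to suppress baseline wander and high-frequency noise."""
    b, a = butter(4, [1.0, 40.0], btype="band", fs=fs)
    return filtfilt(b, a, ecg)

def heartbeat_templates(ecg, fs=1000.0, half_window=0.3):
    """Segment fixed-length windows around detected R peaks."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=np.std(ecg))
    w = int(half_window * fs)
    beats = [ecg[p - w:p + w] for p in peaks if p - w >= 0 and p + w < len(ecg)]
    return np.array(beats)

def identify(probe_beats, enrolled_templates):
    """Nearest-neighbour identification against per-subject mean heartbeats."""
    probe = probe_beats.mean(axis=0)
    distances = {subject: np.linalg.norm(probe - template)
                 for subject, template in enrolled_templates.items()}
    return min(distances, key=distances.get)
```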

Relevance: 20.00%

Publisher:

Abstract:

Crossed classification models are applied in many investigations either taking into consideration interactions between all factors or, alternatively, excluding all interactions, in which case only the main effects and the error term are considered. In this work we use commutative Jordan algebras to study the algebraic structure of these designs and to obtain similar designs in which only some of the interactions are considered. We finish by presenting the expressions for the variance component estimators.
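
The abstract does not state the model; for orientation only, the block below writes out a standard balanced two-factor crossed random-effects model with interaction and the classical ANOVA (method-of-moments) variance component estimators. The paper's general formulation via commutative Jordan algebras is broader than this sketch.

```latex
% Balanced two-factor crossed random-effects model (illustrative):
%   y_{ijk} = \mu + a_i + b_j + (ab)_{ij} + e_{ijk},
%   i = 1,\dots,u, \quad j = 1,\dots,v, \quad k = 1,\dots,n,
% with a_i ~ N(0, \sigma_a^2), b_j ~ N(0, \sigma_b^2),
% (ab)_{ij} ~ N(0, \sigma_{ab}^2), e_{ijk} ~ N(0, \sigma_e^2), all independent.
%
% Classical ANOVA estimators of the variance components:
\[
\hat{\sigma}_e^2 = MS_E, \qquad
\hat{\sigma}_{ab}^2 = \frac{MS_{AB} - MS_E}{n}, \qquad
\hat{\sigma}_a^2 = \frac{MS_A - MS_{AB}}{v\,n}, \qquad
\hat{\sigma}_b^2 = \frac{MS_B - MS_{AB}}{u\,n}.
\]
```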

Relevance: 20.00%

Publisher:

Abstract:

The acquisition of a Myocardial Perfusion Image (MPI) is of great importance for the diagnosis of coronary artery disease, since it allows evaluation of which areas of the heart are not being properly perfused, at rest and under stress. This exam is greatly influenced by photon attenuation, which creates image artifacts and affects quantification. The acquisition of a Computerized Tomography (CT) image makes it possible to obtain an anatomical image that can be used to perform high-quality attenuation correction of the radiopharmaceutical distribution in the MPI image. Studies show that when hybrid imaging is used to diagnose coronary artery disease, there is an increase in specificity when evaluating the perfusion of the right coronary artery (RCA). Using an iterative reconstruction algorithm with resolution recovery software, which balances image quality, administered activity and scanning time, we aim to evaluate the influence of attenuation correction on the MPI image and its outcome in perfusion quantification and image quality.
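
The study applies CT-based attenuation correction within iterative reconstruction, which the abstract does not detail. Purely for intuition, the sketch below applies a first-order (Chang-type) correction to an already reconstructed slice using a CT-derived attenuation (mu) map; the mu-map, pixel size and number of angles are assumptions, and this is not the study's method.

```python
import numpy as np
from scipy.ndimage import rotate

def chang_correction(emission, mu_map, pixel_cm=0.4, n_angles=64):
    """First-order (Chang-type) attenuation correction of a reconstructed slice.

    For each pixel, the transmission exp(-integral of mu along a ray to the edge
    of the field of view) is averaged over n_angles directions; the correction
    factor applied to the emission image is the reciprocal of that average.
    """
    transmission = np.zeros_like(emission, dtype=float)
    for theta in np.linspace(0.0, 360.0, n_angles, endpoint=False):
        mu_rot = rotate(mu_map, theta, reshape=False, order=1, mode="nearest")
        path = np.cumsum(mu_rot, axis=0) * pixel_cm   # integral from the edge to each pixel
        transmission += rotate(np.exp(-path), -theta, reshape=False, order=1, mode="nearest")
    transmission /= n_angles
    correction_factors = 1.0 / np.clip(transmission, 1e-6, None)
    return emission * correction_factors
```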

Relevance: 20.00%

Publisher:

Abstract:

Semi-quantification (SQ) in DaTScan® studies is broadly used in daily clinical practice; however, there are doubts about its discriminative capability and its concordance with the diagnostic classification performed by the physician. Aim: To evaluate the discriminative capability of an adapted database and reference values of healthy controls for dopamine transporters (DAT) with 123I-FP-CIT, named DBRV, adapted to the Nuclear Medicine Department's protocol and population of Infanta Cristina's Hospital, and its concordance with the physician's classification.
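
The abstract does not give the semi-quantification formula or the reference limits. As an illustration only, the sketch below computes a specific (striatal) binding ratio from hypothetical ROI counts and compares it against an assumed lower reference limit; the actual DBRV database and cut-offs are not reproduced here.

```python
# Hypothetical DAT semi-quantification via the specific binding ratio (SBR).
def specific_binding_ratio(striatum_mean, background_mean):
    """SBR = (striatal counts - non-specific counts) / non-specific counts."""
    return (striatum_mean - background_mean) / background_mean

def classify(sbr, lower_reference_limit=1.5):
    """Flag a scan as abnormal when the SBR falls below the reference limit."""
    return "abnormal" if sbr < lower_reference_limit else "within reference range"

putamen_sbr = specific_binding_ratio(striatum_mean=85.0, background_mean=40.0)
print(putamen_sbr, classify(putamen_sbr))
```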

Relevance: 20.00%

Publisher:

Abstract:

Purpose - To develop and validate a psychometric scale for assessing the image quality of chest radiographs.

Relevance: 20.00%

Publisher:

Abstract:

Purpose: This study aims to investigate the influence of tube potential (kVp) variation on perceptual image quality and effective dose for pelvis imaging using automatic exposure control (AEC) and non-AEC modes in a computed radiography (CR) system. Methods and Materials: The effects of using AEC and non-AEC were determined by applying the 10 kVp rule in two experiments on an anthropomorphic pelvis phantom. Images were acquired using 10 kVp increments (60-120 kVp) in both experiments. The first experiment, based on seven AEC combinations, produced 49 images. The mean mAs from each kVp increment was used as a baseline for the second experiment, producing 35 images. A total of 84 images were produced, and a panel of 5 experienced observers scored the images using 2AFC visual grading software. PCXMC software was used to estimate the effective dose. Results: A decrease in perceptual image quality as the kVp increases was observed in both the non-AEC and AEC experiments; however, no statistically significant differences (p > 0.05) were found. Image quality scores from all observers at 10 kVp increments for all mAs values in non-AEC mode demonstrated better scores up to 90 kVp. Effective dose results show a statistically significant decrease (p = 0.000) on the 75th percentile from 0.3 mSv at 60 kVp to 0.1 mSv at 120 kVp when applying the 10 kVp rule in non-AEC mode. Conclusion: No significant reduction in perceptual image quality is observed when increasing kVp, whilst a marked and significant reduction in effective dose is observed.
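
A common statement of the 10 kVp rule referenced above is that the mAs is halved for every 10 kVp increase (and doubled for every 10 kVp decrease) to keep receptor exposure roughly constant. The sketch below applies that rule from an assumed baseline; the study's actual baseline mAs values came from its AEC experiment.

```python
# Illustrative application of the 10 kVp rule (assumed baseline of 60 kVp, 20 mAs).
def mas_for_kvp(target_kvp, baseline_kvp=60, baseline_mas=20.0):
    """Halve the mAs for every 10 kVp increase above the baseline."""
    steps = (target_kvp - baseline_kvp) / 10.0
    return baseline_mas / (2.0 ** steps)

for kvp in range(60, 130, 10):
    print(f"{kvp} kVp -> {mas_for_kvp(kvp):.1f} mAs")
```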

Relevance: 20.00%

Publisher:

Abstract:

Measurements in civil engineering load tests usually require considerable time and complex procedures. Therefore, measurements are usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement of the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load-displacement tests were captured by a digital camera and processed with image correlation algorithms. Three different image processing algorithms were used with real images taken from tests using specimens of PVC and Plexiglas. The data obtained from the image processing algorithms were also compared with the data from physical sensors. Complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
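
The three image correlation algorithms compared in the article are not described in the abstract. As a generic illustration of how displacement can be extracted from a photo sequence, the sketch below tracks a small template between two frames by maximizing the normalized cross-correlation over a search window; the frame data, template position and search range are assumptions.

```python
import numpy as np

def track_displacement(ref, cur, top, left, size=32, search=10):
    """Integer-pixel displacement of a template between two frames, estimated by
    maximizing the normalized cross-correlation over a small search window."""
    template = ref[top:top + size, left:left + size].astype(float)
    template -= template.mean()
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            patch = cur[top + dy:top + dy + size, left + dx:left + dx + size].astype(float)
            patch -= patch.mean()
            denom = np.linalg.norm(template) * np.linalg.norm(patch)
            score = (template * patch).sum() / denom if denom > 0 else -np.inf
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best   # (row, column) displacement in pixels

# Hypothetical frames: the second is the first shifted down by 3 pixels.
ref = np.random.rand(200, 200)
cur = np.roll(ref, 3, axis=0)
print(track_displacement(ref, cur, top=80, left=80))   # expected (3, 0)
```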

Relevance: 20.00%

Publisher:

Abstract:

Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous silicon based optical sensors and a couple of switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results from the optoelectronic characterization of the device and their dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented. © 2014 Elsevier B.V. All rights reserved.

Relevance: 20.00%

Publisher:

Abstract:

We define families of aperiodic words associated to Lorenz knots that arise naturally as syllable permutations of symbolic words corresponding to torus knots. An algorithm to construct symbolic words of satellite Lorenz knots is defined. We prove, subject to the validity of a previous conjecture, that Lorenz knots coded by some of these families of words are hyperbolic, by showing that they are neither satellites nor torus knots and making use of Thurston's theorem. Infinite families of hyperbolic Lorenz knots are generated in this way, to our knowledge, for the first time. The techniques used can be generalized to study other families of Lorenz knots.
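
The paper's algorithm for constructing the symbolic words is not given in the abstract. Purely to illustrate the idea of syllable permutations of an L/R word, the sketch below uses a hypothetical decomposition into syllables of the form L followed by a run of R's and enumerates the distinct words obtained by permuting them; it does not implement the paper's construction or test hyperbolicity.

```python
from itertools import permutations

def syllables(word):
    """Split an L/R word into blocks starting with L (hypothetical 'syllables')."""
    blocks, current = [], ""
    for letter in word:
        if letter == "L" and current:
            blocks.append(current)
            current = ""
        current += letter
    blocks.append(current)
    return blocks

def syllable_permutations(word):
    """All distinct words obtained by permuting the syllables."""
    return sorted({"".join(p) for p in permutations(syllables(word))})

# A sample L/R word and the words obtained by permuting its syllables.
print(syllables("LRRLRLRR"))              # ['LRR', 'LR', 'LRR']
print(syllable_permutations("LRRLRLRR"))
```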

Relevance: 20.00%

Publisher:

Abstract:

In the last decade, local image features have been widely used in robot visual localization. In order to assess image similarity, a strategy exploiting these features compares raw descriptors extracted from the current image with those in the models of places. This paper addresses the ensuing step in this process, where a combining function must be used to aggregate results and assign each place a score. Casting the problem in the multiple classifier systems framework, in this paper we compare several candidate combiners with respect to their performance in the visual localization task. For this evaluation, we selected the most popular methods in the class of non-trained combiners, namely the sum rule and the product rule. A deeper insight into the potential of these combiners is provided through a discriminativity analysis involving the algebraic rules and two extensions of these methods: the threshold and the weighted modifications. In addition, a voting method, previously used in robot visual localization, is assessed. Furthermore, we address the process of constructing a model of the environment by describing how the model granularity impacts upon performance. All combiners are tested on a visual localization task, carried out on a public dataset. It is experimentally demonstrated that the sum rule extensions globally achieve the best performance, confirming the general agreement on the robustness of this rule in other classification problems. The voting method, whilst competitive with the product rule in its standard form, is shown to be outperformed by its modified versions.
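
The abstract names the sum and product rules as the non-trained combiners considered. The sketch below shows how such rules aggregate per-matcher scores into a single score per place and pick the most likely place; the score matrix, its normalization and the final argmax decision are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

def combine_scores(score_matrix, rule="sum"):
    """Combine per-matcher scores (shape: n_matchers x n_places, values in [0, 1])
    into one score per place using the sum or product rule."""
    if rule == "sum":
        combined = score_matrix.sum(axis=0)
    elif rule == "product":
        combined = score_matrix.prod(axis=0)
    else:
        raise ValueError(f"unknown rule: {rule}")
    return combined / combined.sum()          # normalize to a score distribution

def localize(score_matrix, rule="sum"):
    """Assign the query image to the place with the highest combined score."""
    return int(np.argmax(combine_scores(score_matrix, rule)))

# Hypothetical scores from three matchers over four candidate places.
scores = np.array([[0.2, 0.7, 0.05, 0.05],
                   [0.1, 0.6, 0.20, 0.10],
                   [0.3, 0.4, 0.20, 0.10]])
print(localize(scores, "sum"), localize(scores, "product"))   # both pick place 1
```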

Relevance: 20.00%

Publisher:

Abstract:

In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
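
The relevance and redundancy criteria proposed in the paper are not specified in the abstract. For context, the sketch below computes the two baseline relevance measures mentioned as comparisons, mutual information between a discretized feature and the class label and the two-class Fisher ratio, on made-up data; it is not the proposed method.

```python
import numpy as np

def mutual_information(feature_bins, labels):
    """MI (in bits) between a discrete feature and the class label, from the joint histogram."""
    fvals, cvals = np.unique(feature_bins), np.unique(labels)
    joint = np.zeros((len(fvals), len(cvals)))
    for i, f in enumerate(fvals):
        for j, c in enumerate(cvals):
            joint[i, j] = np.mean((feature_bins == f) & (labels == c))
    pf, pc = joint.sum(1, keepdims=True), joint.sum(0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (pf @ pc)[nz])).sum())

def fisher_ratio(feature, labels):
    """Two-class Fisher ratio: squared mean difference over the sum of class variances."""
    x0, x1 = feature[labels == 0], feature[labels == 1]
    return float((x0.mean() - x1.mean()) ** 2 / (x0.var() + x1.var()))

# Made-up discretized feature (3 bins) and binary labels for illustration.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
feature = np.clip(labels + rng.integers(0, 2, 200), 0, 2)
print(mutual_information(feature, labels), fisher_ratio(feature.astype(float), labels))
```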