960 results for Processing image


Relevance:

40.00%

Publisher:

Abstract:

Current state-of-the-art techniques for landmine detection in ground-penetrating radar (GPR) use statistical methods to identify characteristics of a landmine response. This research makes use of 2-D slices of data in which subsurface landmine responses have hyperbolic shapes. Various methods from the field of visual image processing are adapted to the 2-D GPR data, producing superior landmine detection results. The research then develops a physics-based GPR augmentation method motivated by current advances in visual object detection. This GPR-specific augmentation is used to mitigate issues caused by insufficient training sets, and the work shows that augmentation improves detection performance under training conditions that are normally very difficult. Finally, the work introduces convolutional neural networks as a method to learn feature-extraction parameters; these learned convolutional features outperform hand-designed features in GPR detection tasks. Taken together, the methods presented here, both borrowed from and motivated by the substantial body of work in visual image processing, improve overall detection performance and introduce a way to make statistical classification more robust.
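As a concrete illustration of one borrowed visual-processing idea (a sketch, not the dissertation's own detectors), the snippet below correlates a synthetic hyperbolic template against a 2-D GPR B-scan; the template geometry and parameters (width, depth, v) are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def hyperbola_template(width=21, depth=15, v=0.6):
    # Two-way travel time traces a hyperbola: t(x) ~ sqrt(t0^2 + (x/v)^2).
    xs = np.arange(width) - width // 2
    t = np.sqrt(1.0 + (xs / (v * width)) ** 2)
    rows = np.clip(np.round((t - 1.0) * depth * 4).astype(int), 0, depth - 1)
    k = np.zeros((depth, width))
    k[rows, np.arange(width)] = 1.0
    return k - k.mean()                 # zero mean: flat regions score ~0

def detect(bscan, template):
    # Correlation = convolution with a doubly flipped kernel.
    return fftconvolve(bscan, template[::-1, ::-1], mode="same")

bscan = np.random.randn(128, 256)       # stand-in for a real B-scan
score = detect(bscan, hyperbola_template())
print(score.shape, float(score.max()))  # high scores suggest hyperbolae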

Relevance:

40.00%

Publisher:

Abstract:

Coupled map lattices (CML) can describe many relaxation and optimization algorithms currently used in image processing. We recently introduced the "plastic-CML" as a paradigm to extract (segment) objects in an image. Here, the image is treated as a set of forces applied to a metal sheet, which is allowed to undergo plastic deformation parallel to the applied forces. In this paper we present an analysis of our plastic-CML in one and two dimensions, deriving the nature and stability of its stationary solutions. We also detail how to use the CML in image processing and how to set the system parameters, and we present examples of it at work. We conclude that the plastic-CML is able to segment images with large amounts of noise and a large dynamic range of pixel values, and is suitable for very large scale integration (VLSI) implementation.
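For intuition only, here is a minimal one-dimensional coupled-map-lattice sketch with an edge-preserving ("plastic-like") coupling; the update rule, the yield threshold, and the parameter names are assumptions for illustration, not the paper's actual plastic-CML dynamics.

import numpy as np

def plastic_cml(u, steps=200, eps=0.2, yield_=0.5):
    u = u.astype(float).copy()
    for _ in range(steps):
        grad_r = np.roll(u, -1) - u        # right-neighbour difference
        grad_l = np.roll(u, 1) - u         # left-neighbour difference
        # Couple only sub-threshold gradients: noise relaxes away while
        # discontinuities above the "yield" level persist as segment edges.
        flux = (np.where(np.abs(grad_r) < yield_, grad_r, 0.0)
                + np.where(np.abs(grad_l) < yield_, grad_l, 0.0))
        u += eps * flux
    return u

signal = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * np.random.randn(100)
print(plastic_cml(signal).round(2))        # noise smoothed, step preserved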

Relevance:

40.00%

Publisher:

Abstract:

Radio Simultaneous Localization and Mapping (SLAM) consists of simultaneously tracking a target and estimating the surrounding environment, so as to build a map and estimate the target's movements within it. Thanks to its versatility and low cost, it is an increasingly exploited technique for automotive applications, where it improves the localization of obstacles and of the target's movement relative to them; for emergency situations, for example when an environment with limited visibility must be explored with a drone or a robot; and for personal radar applications. To date, such systems have been based on light detection and ranging (lidar) or visual cameras, high-accuracy but expensive approaches that are limited to specific environments and weather conditions. Radar-based systems, by contrast, operate in exactly the same way in smoke, fog or simply darkness. In this thesis activity, the Fourier-Mellin algorithm is analyzed and implemented to verify its applicability to Radio SLAM, in which the radar frames can be treated as images and the radar motion between consecutive frames can be recovered by image registration. Furthermore, a simplified version of the algorithm is proposed, to solve the problems the Fourier-Mellin algorithm encounters when working with real radar images and to improve performance. The INRAS RBK2, a MIMO 2x16 mmWave radar, is used for experimental acquisitions, consisting of multiple tests performed in Lab-E of the Cesena Campus, University of Bologna. The performance of Fourier-Mellin and of its simplified version is also compared with the MatchScan algorithm, a classic algorithm for SLAM systems.
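A compact sketch of the textbook Fourier-Mellin idea follows (the thesis's radar-specific simplified variant is not reproduced): translation is estimated by phase correlation, while rotation and scale become translations of the log-polar resampling of the spectrum magnitude. Grid sizes and interpolation choices here are assumptions.

import numpy as np
from scipy.ndimage import map_coordinates

def phase_correlation(a, b):
    # Peak of the inverse normalized cross-power spectrum gives the shift
    # (indices past half the size wrap to negative shifts in practice).
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    return np.unravel_index(np.argmax(r), r.shape)

def log_polar(img, n_ang=180, n_rad=128):
    cy, cx = (np.asarray(img.shape) - 1) / 2.0
    rho = np.exp(np.linspace(0, np.log(min(cy, cx)), n_rad))[None, :]
    theta = np.linspace(0, 2 * np.pi, n_ang, endpoint=False)[:, None]
    coords = np.array([cy + rho * np.sin(theta), cx + rho * np.cos(theta)])
    return map_coordinates(img, coords, order=1, mode="nearest")

def rotation_scale_peak(a, b):
    # Rotation and scale appear as shifts of the log-polar magnitude
    # spectra, so they are recovered with the same phase correlation.
    A = np.abs(np.fft.fftshift(np.fft.fft2(a)))
    B = np.abs(np.fft.fftshift(np.fft.fft2(b)))
    return phase_correlation(log_polar(A), log_polar(B))

frame0 = np.random.rand(128, 128)     # stand-ins for consecutive radar frames
frame1 = np.roll(frame0, (3, 5), axis=(0, 1))
print(phase_correlation(frame1, frame0))   # ~ (3, 5)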

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel algorithm for viable integrity and authenticity addition and verification of n-frame DICOM medical images using cryptographic mechanisms. The aim of this work is to enhance DICOM security measures, especially for multiframe images, since current approaches have limitations that should be properly addressed. The proposed algorithm uses data encryption, together with a digital signature, to provide integrity and authenticity. Relevant header data and the digital signature are used as inputs to cipher the image, so the original data can be retrieved if and only if both the images and the inputs are correct. The encryption process itself is a cascading scheme in which each frame is ciphered with data related to the previous frames, also generating additional data on image integrity and authenticity. Decryption mirrors encryption and also features the standard security verification of the image. The implementation was done in Java, and a performance evaluation compared the speed of the algorithm with other existing approaches. The evaluation showed good performance, an encouraging result for its use in a real environment.
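The sketch below illustrates the cascading idea only, assuming the pyca cryptography package; the key derivation, data layout, and chaining rule are illustrative assumptions, not the paper's exact scheme.

import hashlib, os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def pad(data):                       # PKCS#7 padding for CBC
    n = BLOCK - len(data) % BLOCK
    return data + bytes([n]) * n

def encrypt_frames(frames, header, signature):
    # Seed the chain with the header and signature, so that decryption
    # succeeds only if these inputs are correct.
    prev = hashlib.sha256(header + signature).digest()
    out = []
    for frame in frames:
        key = hashlib.sha256(prev).digest()        # 32 bytes -> AES-256
        iv = os.urandom(16)
        enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        ct = iv + enc.update(pad(frame)) + enc.finalize()
        out.append(ct)
        prev = hashlib.sha256(prev + ct).digest()  # cascade to next frame
    return out

print(len(encrypt_frames([b"frame-0", b"frame-1"], b"hdr", b"sig")))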

Relevance:

30.00%

Publisher:

Abstract:

With the advent of functional neuroimaging techniques, in particular functional magnetic resonance imaging (fMRI), we have gained greater insight into the neural correlates of visuospatial function. However, it may not always be easy to identify the cerebral regions most specifically associated with performance on a given task. One approach is to examine the quantitative relationships between regional activation and behavioral performance measures. In the present study, we investigated the functional neuroanatomy of two different visuospatial processing tasks, judgement of line orientation and mental rotation. Twenty-four normal participants were scanned with fMRI using blocked periodic designs for experimental task presentation. Accuracy and reaction time (RT) for each trial of both activation and baseline conditions in each experiment were recorded. Both experiments activated dorsal and ventral visual cortical areas as well as dorsolateral prefrontal cortex. More regionally specific associations with task performance were identified by estimating the association between the (sinusoidal) power of the functional response and mean RT in the activation condition; a permutation test based on spatial statistics was used for inference. There was a significant behavioral-physiological association in right ventral extrastriate cortex for the line orientation task and in bilateral (predominantly right) superior parietal lobule for the mental rotation task. Comparable associations were not found between power of response and RT in the baseline conditions of the tasks. These data suggest that one region in a neurocognitive network may be most strongly associated with behavioral performance and may be regarded as the computationally least efficient, or rate-limiting, node of the network.
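For readers unfamiliar with the inferential step, this is a generic permutation test for a power-RT correlation across subjects; the paper's test is based on spatial statistics over activation maps, which this toy version does not implement, and all numbers are synthetic.

import numpy as np

def perm_test(power, rt, n_perm=10000, seed=0):
    # Observed correlation vs. a null built by shuffling the RT labels;
    # two-sided p-value with the +1 correction for the observed statistic.
    rng = np.random.default_rng(seed)
    obs = np.corrcoef(power, rt)[0, 1]
    null = np.array([np.corrcoef(power, rng.permutation(rt))[0, 1]
                     for _ in range(n_perm)])
    return obs, (np.sum(np.abs(null) >= abs(obs)) + 1) / (n_perm + 1)

power = np.random.default_rng(1).normal(size=24)   # 24 subjects, as above
rt = 0.5 * power + np.random.default_rng(2).normal(size=24)
print(perm_test(power, rt))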

Relevance:

30.00%

Publisher:

Abstract:

Radiation dose calculations in nuclear medicine depend on quantification of activity via planar and/or tomographic imaging methods. However, both methods have inherent limitations, and the accuracy of activity estimates varies with object size, background levels, and other variables. The goal of this study was to evaluate the limitations of quantitative imaging with planar and single photon emission computed tomography (SPECT) approaches, with a focus on activity quantification for use in calculating absorbed dose estimates for normal organs and tumors. To do this we studied a series of phantoms of varying geometric complexity, with three radionuclides whose decay schemes varied from simple to complex. Four aqueous concentrations of (99m)Tc, (131)I, and (111)In (74, 185, 370, and 740 kBq mL(-1)) were placed in spheres of four different sizes in a water-filled phantom, with three different levels of activity in the surrounding water. Planar and SPECT images of the phantoms were obtained on a modern SPECT/computed tomography (CT) system. These radionuclide and concentration/background studies were repeated using a cardiac phantom and a modified torso phantom with liver and "tumor" regions containing the radionuclide concentrations and the same varying background levels. Planar quantification was performed using the geometric mean approach, with attenuation correction (AC) and with and without scatter correction (SC and NSC). SPECT images were reconstructed using attenuation maps (AM) for AC; scatter windows were used to perform SC during image reconstruction. For spherical sources with corrected data, good accuracy was observed (generally within +/- 10% of known values) for the largest sphere (11.5 mL) with both planar and SPECT methods for (99m)Tc and (131)I, but accuracy was poorest and deviated most from known values for smaller objects, most notably for (111)In. SPECT quantification was affected by the partial volume effect in smaller objects and generally showed larger errors than the planar results in these cases for all radionuclides. For the cardiac phantom, results were the most accurate of all the experiments for all radionuclides; background subtraction was an important factor influencing these results. The contribution of scattered photons was important in quantification with (131)I: if scatter was not accounted for, activity tended to be overestimated using planar quantification methods. The torso phantom experiments show a clear underestimation of activity compared to the previous experiments with spherical sources for all radionuclides. Despite some variation as the background level increased, the SPECT results were more consistent across different activity concentrations. Planar or SPECT quantification on state-of-the-art gamma cameras with appropriate quantitative processing can provide accuracies of better than 10% for large objects and modest target-to-background concentrations; however, for smaller objects, in the presence of higher background, and for nuclides with more complex decay schemes, SPECT quantification methods generally produce better results. Health Phys. 99(5):688-701; 2010.
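For reference, the geometric-mean (conjugate-view) estimate used in the planar arm has a simple closed form; the sketch below omits the scatter-window corrections, source self-attenuation factor, and camera calibration details the study applies, and the numbers in the example are made up.

import numpy as np

def conjugate_view_activity(counts_ant, counts_post, mu, thickness, sensitivity):
    # A = sqrt(I_ant * I_post / e^(-mu * T)) / S   (geometric mean of the
    # anterior and posterior views, corrected for body attenuation).
    transmission = np.exp(-mu * thickness)
    return np.sqrt(counts_ant * counts_post / transmission) / sensitivity

# Made-up numbers: 99mTc-like mu ~ 0.15 cm^-1, a 20 cm thick body section,
# camera sensitivity S in counts s^-1 per MBq.
print(conjugate_view_activity(5200.0, 4100.0, 0.15, 20.0, 90.0), "MBq")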

Relevance:

30.00%

Publisher:

Abstract:

An efficient representation method for arbitrarily shaped image segments is proposed. The method includes a smart way to select the wavelet basis used to approximate a given image segment, yielding improved image quality and reduced computational load.
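The paper's selection criterion is not spelled out in the abstract; as a stand-in, this sketch scores a few candidate wavelet families by reconstruction error at a fixed coefficient budget. It assumes the PyWavelets package, and the basis list and 10% budget are arbitrary choices.

import numpy as np
import pywt

def best_basis(segment, bases=("haar", "db2", "db4", "bior2.2"), keep=0.10):
    errs = {}
    for w in bases:
        coeffs = pywt.wavedec2(segment, w, level=3)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thr = np.quantile(np.abs(arr), 1.0 - keep)   # keep largest 10%
        arr = np.where(np.abs(arr) >= thr, arr, 0.0)
        rec = pywt.waverec2(
            pywt.array_to_coeffs(arr, slices, output_format="wavedec2"), w)
        rec = rec[:segment.shape[0], :segment.shape[1]]
        errs[w] = float(np.linalg.norm(rec - segment))
    return min(errs, key=errs.get)

segment = np.random.rand(64, 64)       # stand-in for a masked image segment
print(best_basis(segment))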

Relevance:

30.00%

Publisher:

Abstract:

Three-dimensional (3D) synthetic aperture radar (SAR) imaging via multiple-pass processing is an extension of interferometric SAR imaging: it exploits more than two flight passes to achieve a desired resolution in elevation. In this paper, a novel approach is developed to reconstruct a 3D space-borne SAR image with multiple-pass processing. It involves image registration, phase correction and elevational imaging. An image-model-matching technique is developed for multiple-image registration, an eigenvector method is proposed for the phase correction, and the elevational imaging is conducted using a Fourier transform or, to enhance elevational resolution, a super-resolution method. 3D SAR images are obtained by applying the proposed approach to simulated data and to real data from the first European Remote Sensing satellite (ERS-1).
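Of the three stages, the elevational-imaging step in its Fourier-transform form is simple enough to sketch; registration and the eigenvector phase correction are assumed to have already been applied, and the pass geometry is idealized here as a uniform elevation aperture.

import numpy as np

def elevational_image(stack, pad_factor=4):
    # stack: complex samples of shape (n_passes, n_range, n_azimuth),
    # already registered and phase-corrected; the FFT along the pass axis
    # resolves scatterers in elevation (zero-padding refines the grid).
    n = stack.shape[0]
    return np.fft.fftshift(np.fft.fft(stack, n=pad_factor * n, axis=0), axes=0)

passes = np.exp(1j * np.random.uniform(0, 2 * np.pi, (8, 64, 64)))  # toy data
print(np.abs(elevational_image(passes)).shape)   # (32, 64, 64)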

Relevance:

30.00%

Publisher:

Abstract:

A detailed analysis procedure is described for evaluating rates of volumetric change in brain structures based on structural magnetic resonance (MR) images. In this procedure, a series of image processing tools have been employed to address the problems encountered in measuring rates of change based on structural MR images. These tools include an algorithm for intensity non-uniformity correction, a robust algorithm for three-dimensional image registration with sub-voxel precision and an algorithm for brain tissue segmentation. A unique feature of the procedure, however, is a fractional volume model developed to provide a quantitative measure of the partial volume effect. With this model, the fractional constituent tissue volumes are evaluated for voxels at tissue boundaries that manifest the partial volume effect, allowing tissue boundaries to be defined at a sub-voxel level and in an automated fashion. Validation studies are presented on key algorithms, including segmentation and registration. An overall assessment of the method is provided through evaluation of the rates of brain atrophy in a group of normal elderly subjects, for whom the rate of brain atrophy due to normal aging is predictably small. An application of the method is given in Part II, where the rates of brain atrophy in various brain regions are studied in relation to normal aging and Alzheimer's disease. (C) 2002 Elsevier Science Inc. All rights reserved.
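As a toy version of the idea (the paper's actual model and its parameter estimation are more involved), a boundary voxel's intensity can be modelled as a linear mix of two pure-tissue means and inverted for the tissue fraction:

import numpy as np

def tissue_fraction(intensity, mean_a, mean_b):
    # Linear two-compartment mixing: I = f*mean_a + (1 - f)*mean_b,
    # so f = (I - mean_b) / (mean_a - mean_b), clipped to [0, 1].
    return np.clip((intensity - mean_b) / (mean_a - mean_b), 0.0, 1.0)

# Boundary voxels between tissue A (mean 100) and tissue B (mean 30);
# summing fractions times voxel volume yields a sub-voxel volume estimate.
print(tissue_fraction(np.array([30.0, 55.0, 80.0, 100.0]), 100.0, 30.0))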

Relevance:

30.00%

Publisher:

Abstract:

Once in digital form, a radiographic image may be processed in several ways to give its visualization improved diagnostic value. Practitioners should be aware that, depending on the clinical context, digital image processing techniques are available to help unveil visual information that is in fact carried by the bare digital radiograph but might otherwise be neglected. The range of visual enhancement procedures extends from simple techniques for the usual brightness and contrast manipulation up to much more elaborate multi-scale processing that provides customized control over the emphasis given to the relevant finer anatomical details. This chapter is intended to give the reader a practical understanding of image enhancement techniques that may help improve the visual quality of digital radiographs and thus contribute to more reliable and assertive reporting.
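To make that range concrete, here are minimal versions of the two extremes mentioned: a window/level (brightness/contrast) mapping and a single-scale unsharp mask; multi-scale methods generalize the latter by repeating it over a pyramid of scales. Parameter values are illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

def window_level(img, center, width):
    # Brightness/contrast: map [center - width/2, center + width/2] to [0, 1].
    lo = center - width / 2.0
    return np.clip((img - lo) / width, 0.0, 1.0)

def unsharp(img, sigma=2.0, amount=1.5):
    # Single-scale detail boost; multi-scale variants repeat this idea
    # over several sigmas with per-scale gains.
    blurred = gaussian_filter(img.astype(float), sigma)
    return img + amount * (img - blurred)

radiograph = np.random.rand(256, 256) * 4095.0   # stand-in 12-bit image
display = window_level(unsharp(radiograph), center=2048.0, width=3000.0)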

Relevance:

30.00%

Publisher:

Abstract:

Coronary artery disease (CAD) is currently one of the most prevalent diseases in the world population, and calcium deposits in the coronary arteries are a direct risk factor. These can be assessed with the calcium score (CS) application available on computed tomography (CT) scanners, which gives an accurate indication of the development of the disease; however, the ionising radiation dose applied to patients is high. This study aimed to optimise the acquisition protocol in order to reduce the radiation dose, and to explain the flow of procedures used to quantify CAD. The main differences in the clinical results when automated or semi-automated post-processing is used are shown, and the epidemiology, imaging, risk factors and prognosis of the disease are described. The software steps and the values that allow the risk of developing CAD to be predicted are presented. A 64-row dual-source multidetector CT scanner and two phantoms (pig hearts) were used to demonstrate the advantages and disadvantages of the Agatston method. The tube energy was balanced, and two measurements were obtained with each of the three experimental protocols (64, 128 and 256 mAs). The CS values changed considerably with the protocol variation. The predefined standard protocol provided the lowest radiation dose (0.43 mGy). The study found that the variation in radiation dose between protocols, taking into consideration the dose control systems attached to the CT equipment and image quality, was not sufficient to justify changing the default protocol provided by the manufacturer.
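The Agatston rule itself is standard and compact enough to sketch: lesions are regions of at least 130 HU, and each lesion's area is weighted by a density factor from its peak attenuation. The 130 HU threshold and the 1-4 weights are the published ones; the minimum-area cutoff and the handling of slice thickness vary by implementation and are assumptions here.

import numpy as np
from scipy.ndimage import label

def density_weight(max_hu):
    # Published Agatston density factors by peak attenuation.
    return 1 if max_hu < 200 else 2 if max_hu < 300 else 3 if max_hu < 400 else 4

def agatston_slice(hu, pixel_area_mm2, min_area_mm2=1.0):
    lesions, n = label(hu >= 130)          # candidate calcium: >= 130 HU
    score = 0.0
    for i in range(1, n + 1):
        region = lesions == i
        area = region.sum() * pixel_area_mm2
        if area >= min_area_mm2:           # ignore sub-millimetre specks
            score += area * density_weight(hu[region].max())
    return score                           # total CS = sum over all slices

slice_hu = np.zeros((64, 64)); slice_hu[10:14, 10:14] = 320.0
print(agatston_slice(slice_hu, pixel_area_mm2=0.25))   # 16 px * 0.25 * 3 = 12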

Relevance:

30.00%

Publisher:

Abstract:

Measurements in civil engineering load tests usually require considerable time and complex procedures, so measurements are usually constrained by the number of sensors, resulting in a restricted monitored area. Image processing analysis is an alternative that enables measurement over the complete area of interest with a simple and effective setup. In this article, photo sequences taken during load displacement tests were captured with a digital camera and processed with image correlation algorithms. Three different image processing algorithms were used with real images taken from tests on PVC and Plexiglas specimens. The data obtained from the image processing algorithms were also compared with the data from physical sensors, and complete displacement and strain maps were obtained. Results show that the accuracy of the measurements obtained by photogrammetry is equivalent to that of the physical sensors, but with much less equipment and fewer setup requirements. © 2015 Computer-Aided Civil and Infrastructure Engineering.
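None of the three algorithms is named in the abstract; as one plausible representative, the sketch below tracks a small patch between two frames by zero-mean cross-correlation and reports its integer-pixel displacement (sub-pixel refinement, as used in practice, is omitted).

import numpy as np
from scipy.signal import fftconvolve

def track_patch(frame0, frame1, y, x, half=15):
    # Zero-mean cross-correlation of a (2*half+1)^2 patch from frame0
    # against all of frame1; the response peak gives the new position.
    patch = frame0[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    patch -= patch.mean()
    resp = fftconvolve(frame1.astype(float), patch[::-1, ::-1], mode="same")
    py, px = np.unravel_index(np.argmax(resp), resp.shape)
    return py - y, px - x               # integer-pixel displacement

frame0 = np.random.rand(200, 200)       # stand-ins for two test photos
frame1 = np.roll(frame0, (2, -3), axis=(0, 1))
print(track_patch(frame0, frame1, 100, 100))   # ~ (2, -3)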