952 results for image acquisition
Abstract:
Aims: To develop clinical protocols for acquiring PET images, performing CT-PET registration and defining tumour volumes from the PET image data for radiotherapy of lung cancer patients, and then to test these protocols with respect to accuracy and reproducibility. Method: A phantom-based quality assurance study of the processes associated with using registered CT and PET scans for tumour volume definition was conducted to: (1) investigate image acquisition and manipulation techniques for registering and contouring CT and PET images in a radiotherapy treatment planning system, and (2) determine technology-based errors in the registration and contouring processes. The outcomes of the phantom image-based quality assurance study were used to determine clinical protocols. Protocols were developed for (1) acquiring patient PET image data for incorporation into the 3DCRT process, particularly for ensuring that the patient is positioned in their treatment position; (2) CT-PET image registration techniques; and (3) GTV definition using the PET image data. The developed clinical protocols were tested using retrospective clinical trials to assess levels of inter-user variability that may be attributed to the use of these protocols. A Siemens Somatom Open Sensation 20-slice CT scanner and a Philips Allegro stand-alone PET scanner were used to acquire the images for this research. The Philips Pinnacle3 treatment planning system was used to perform the image registration and contouring of the CT and PET images. Results: Both the attenuation-corrected and transmission images obtained from standard whole-body PET staging clinical scanning protocols were acquired and imported into the treatment planning system for the phantom-based quality assurance study. Protocols for manipulating the PET images in the treatment planning system, particularly for quantifying uptake in volumes of interest and setting window levels for accurate geometric visualisation, were determined. The automatic registration algorithms were found to have sub-voxel accuracy, with transmission scan-based CT-PET registration more accurate than emission scan-based registration of the phantom images. Respiration-induced image artifacts were not found to influence registration accuracy, while inadequate pre-registration overlap of the CT and PET images was found to result in large registration errors. A threshold value based on a percentage of the maximum uptake within a volume of interest was found to accurately contour the different features of the phantom despite the lower spatial resolution of the PET images. Appropriate selection of the threshold value is dependent on target-to-background ratios and the presence of respiratory motion. The results from the phantom-based study were used to design, implement and test clinical CT-PET fusion protocols. The patient PET image acquisition protocols enabled patients to be successfully identified and positioned in their radiotherapy treatment position during the acquisition of their whole-body PET staging scan. While automatic registration techniques were found to reduce inter-user variation compared to manual techniques, there was no significant difference between the registration outcomes of transmission and emission scan-based registration of the patient images when the protocol was used.
Tumour volumes contoured on registered patient CT-PET images using the tested threshold values and viewing windows determined from the phantom study demonstrated less inter-user variation for the primary tumour volume contours than those contoured using only the patient's planning CT scans. Conclusions: The developed clinical protocols allow a patient's whole-body PET staging scan to be incorporated, manipulated and quantified in the treatment planning process to improve the accuracy of gross tumour volume localisation in 3D conformal radiotherapy for lung cancer. Image registration protocols that factor in potential software-based errors, combined with adequate user training, are recommended to increase the accuracy and reproducibility of registration outcomes. A semi-automated adaptive threshold contouring technique incorporating a PET windowing protocol accurately defines the geometric edge of a tumour volume using PET image data from a stand-alone PET scanner, including 4D target volumes.
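As a loose illustration of the percentage-of-maximum threshold contouring described in this abstract, the sketch below derives a binary contour mask from a NumPy array of PET uptake values; the array, the volume of interest and the 40% threshold fraction are all illustrative choices, not values taken from the study.

```python
import numpy as np

def threshold_contour(pet: np.ndarray, voi: tuple, fraction: float) -> np.ndarray:
    """Return a binary mask of voxels whose uptake exceeds a percentage
    of the maximum uptake found inside a volume of interest (VOI).

    pet      : 3D array of PET uptake values (e.g. SUV).
    voi      : (zslice, yslice, xslice) bounding the region around the target.
    fraction : threshold as a fraction of the VOI maximum, e.g. 0.40 for 40%.
    """
    voi_max = pet[voi].max()                  # maximum uptake within the VOI
    mask = np.zeros_like(pet, dtype=bool)
    mask[voi] = pet[voi] >= fraction * voi_max
    return mask

# Illustrative use with synthetic data: a bright "tumour" on a dim background.
pet = np.random.uniform(0.5, 1.0, (40, 64, 64))
pet[18:25, 30:38, 30:38] += 6.0               # synthetic high-uptake region
voi = (slice(10, 32), slice(24, 44), slice(24, 44))
gtv_mask = threshold_contour(pet, voi, fraction=0.40)
print("contoured voxels:", gtv_mask.sum())
```

In an adaptive scheme of the kind the abstract describes, the fraction would be chosen from the measured target-to-background ratio rather than fixed.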
Abstract:
This paper presents a low-cost, high-resolution retinal image acquisition system for the human eye. The images acquired by a CMOS image sensor are transferred over a Universal Serial Bus (USB) interface to a personal computer for viewing and further processing. The image acquisition time was estimated to be 2.5 seconds. The system can also be used in telemedicine applications.
Abstract:
This paper presents a study of the effect of blurred images in hand biometrics. Blurred images simulate out-of-focus effects in hand image acquisition, a common consequence of unconstrained, contact-less and platform-free hand biometrics on mobile devices. The proposed biometric system uses a hand image segmentation method based on multiscale aggregation, which is robust to perturbations such as noise and blurriness, together with an innovative feature extraction and template creation scheme designed to keep performance invariant under blurring effects. The results highlight that the proposed system is invariant to low degrees of blurriness, but requires an image quality control to detect and correct images with a high degree of blurriness. The evaluation considered a synthetic database derived from a publicly available database of 120 individuals. In addition, several biometric techniques could benefit from the approach proposed in this paper, since blurriness is a very common effect in biometric techniques involving image acquisition.
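As a rough sketch of the blur simulation and the image quality control the abstract calls for, the snippet below uses OpenCV to emulate out-of-focus acquisition with Gaussian blur and flags heavily blurred images via the variance of the Laplacian, a common focus measure; the input file name and rejection threshold are hypothetical, and this is not the paper's segmentation or feature-extraction pipeline.

```python
import cv2

def simulate_out_of_focus(image, sigma: float):
    """Simulate an out-of-focus acquisition by Gaussian blurring."""
    return cv2.GaussianBlur(image, (0, 0), sigmaX=sigma)

def focus_measure(image) -> float:
    """Variance of the Laplacian: low values indicate strong blur."""
    return cv2.Laplacian(image, cv2.CV_64F).var()

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
for sigma in (0.0, 1.0, 3.0, 6.0):
    blurred = simulate_out_of_focus(img, sigma) if sigma > 0 else img
    fm = focus_measure(blurred)
    # A fixed cut-off is illustrative only; in practice it must be tuned.
    status = "ok" if fm > 100.0 else "too blurred - reject or restore"
    print(f"sigma={sigma:3.1f}  focus={fm:8.1f}  -> {status}")
```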
Abstract:
Current fusion devices comprise multiple diagnostics and hundreds or even thousands of signals. This situation often makes distributed data acquisition systems the best approach. In this type of distributed system, one of the most important issues is the synchronization between signals, so that the temporal correlation between the acquired samples of all channels is as accurate as possible. In recent decades, many fusion devices have used different types of video cameras to provide inside views of the vessel during operation and to monitor plasma behavior. The synchronization between each video frame and the rest of the signals acquired from other diagnostics is essential for correctly following the plasma evolution, since all the information can then be analyzed jointly with accurate knowledge of its temporal correlation. The system described in this paper allows image frames to be timestamped in a real-time acquisition and processing system using IEEE 1588 clock distribution. The system has been implemented using FPGA-based devices together with an IEEE 1588 synchronized timing card (see Fig. 1). The solution builds on a previous system [1] that provides image acquisition and real-time image processing based on PXIe technology. This architecture is fully compatible with the ITER Fast Controllers [2] and offers integration with EPICS to control and monitor the entire system. However, that set-up is not able to timestamp the acquired frames, since the frame grabber module does not provide any type of timing input (IRIG-B, GPS, PTP). To address this limitation, an IEEE 1588 PXI timing device is used to provide an accurate way to synchronize distributed data acquisition systems using the Precision Time Protocol (PTP) IEEE 1588-2008 standard. This local timing device can be connected to a master clock device for global synchronization. The timing device has a timestamp buffer for each PXI trigger line and requires that a software application assign each frame its corresponding timestamp. This step is time-critical and cannot be accomplished if the frame rate is high. To solve this problem, a solution has been designed that distributes the clock from the IEEE 1588 timing card to all FlexRIO devices [3]. This solution uses two PXI trigger lines and provides the capability to assign timestamps to every acquired frame and to register events in hardware in a deterministic way. The system thus provides a means of timestamping frames so that they can be synchronized with the rest of the acquired signals.
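A conceptual sketch of the timestamp-assignment problem described above, written in plain Python rather than the actual PXI/FlexRIO API: a FIFO stands in for the hardware buffer that latches one IEEE 1588 timestamp per frame trigger, and software pairs frames with timestamps in order. The pairing remains correct only while the software keeps up with the frame rate, which is why the authors move it into the FPGA.

```python
from collections import deque

class TimestampFifo:
    """Stands in for a hardware buffer that latches one IEEE 1588
    timestamp (in nanoseconds) per frame-trigger event."""
    def __init__(self):
        self._fifo = deque()

    def latch(self, t_ns: int):
        self._fifo.append(t_ns)

    def pop(self) -> int:
        return self._fifo.popleft()

def assign_timestamps(frames, fifo: TimestampFifo):
    """Pair frames with timestamps in acquisition order.
    Correct only while frames and trigger events stay one-to-one,
    i.e. while the software keeps up with the frame rate."""
    return [(frame, fifo.pop()) for frame in frames]

# Illustrative use: three triggers latched, three frames acquired.
fifo = TimestampFifo()
for t in (1_000_000, 1_040_000, 1_080_000):    # hypothetical PTP timestamps
    fifo.latch(t)
frames = ["frame0", "frame1", "frame2"]
for frame, t_ns in assign_timestamps(frames, fifo):
    print(frame, "@", t_ns, "ns")
```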
Abstract:
Texture analysis and textural cues have been applied to image classification, segmentation and pattern recognition. Dominant texture descriptors include directionality, coarseness, line-likeness, etc. In this dissertation a class of textures known as particulate textures is defined: textures that are predominantly coarse or blob-like. The set of features that characterise particulate textures differs from those that characterise classical textures. These features are micro-texture, macro-texture, size, shape and compaction. Classical texture analysis techniques do not adequately capture particulate texture features. This gap is identified and new methods for analysing particulate textures are proposed. The levels of complexity in particulate textures are also presented, ranging from the simplest images, where blob-like particles are easily isolated from their background, to more complex images, where the particles and the background are not easily separable or the particles are occluded. Simple particulate images can be analysed for particle shapes and sizes. Complex particulate texture images, on the other hand, often permit only the estimation of particle dimensions. Real-life applications of particulate textures are reviewed, including applications to sedimentology, granulometry and road surface texture analysis. A new framework for computing particulate shape is proposed. A granulometric approach for particle size estimation based on edge detection is developed, which can be adapted to the gray level of the images by varying its parameters. This study binds visual texture analysis and road surface macrotexture in a theoretical framework, thus making it possible to apply monocular imaging techniques to road surface texture analysis. Results from the application of the developed algorithm to road surface macro-texture are compared with results based on Fourier spectra, the autocorrelation function and wavelet decomposition, indicating the superior performance of the proposed technique. The influence of image acquisition conditions such as illumination and camera angle on the results was systematically analysed. Experimental data were collected from over 5 km of road in Brisbane, and the estimated coarseness along the road was compared with laser profilometer measurements. A coefficient of determination (R²) exceeding 0.9 was obtained when correlating the proposed imaging technique with the state-of-the-art Sensor Measured Texture Depth (SMTD) obtained using laser profilometers.
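The comparison against the laser profilometer amounts to computing a coefficient of determination between the image-based coarseness estimates and the SMTD values; a minimal NumPy sketch with hypothetical per-segment numbers (not data from the study) is shown below.

```python
import numpy as np

# Hypothetical per-segment measurements: laser-based SMTD vs. image-based coarseness.
smtd = np.array([0.42, 0.55, 0.61, 0.70, 0.83, 0.91])          # mm, laser profilometer
image_based = np.array([0.40, 0.57, 0.60, 0.73, 0.80, 0.93])   # mm, imaging estimate

# For a simple linear regression, R^2 equals the squared Pearson correlation.
r = np.corrcoef(smtd, image_based)[0, 1]
print(f"R^2 = {r**2:.3f}")
```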
Abstract:
We describe an investigation into how Massey University's Pollen Classifynder can accelerate the understanding of pollen and its role in nature. The Classifynder is an imaging microscopy system that can locate, image and classify slide-based pollen samples. Given the laboriousness of purely manual image acquisition and identification, it is vital to exploit assistive technologies like the Classifynder to enable acquisition and analysis of pollen samples. It is also vital that we understand the strengths and limitations of automated systems so that they can be used (and improved) to complement the strengths and weaknesses of human analysts to the greatest extent possible. This article reviews some of our experiences with the Classifynder system and our exploration of alternative classifier models to enhance both accuracy and interpretability. Our experiments in the pollen analysis problem domain have been based on samples from the Australian National University's pollen reference collection (2,890 grains, 15 species) and images bundled with the Classifynder system (400 grains, 4 species). These samples have been represented using the Classifynder image feature set. We additionally work through a real-world case study in which we assess the ability of the system to determine the pollen make-up of samples of New Zealand honey. In addition to the Classifynder's native neural network classifier, we have evaluated linear discriminant, support vector machine, decision tree and random forest classifiers on these data with encouraging results. Our hope is that our findings will help enhance the performance of future releases of the Classifynder and other systems for accelerating the acquisition and analysis of pollen samples.
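A compact sketch of the kind of classifier comparison the abstract describes, using scikit-learn with cross-validation; the random feature matrix and labels merely stand in for the Classifynder image feature set and species labels.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data: rows = pollen grains, columns = Classifynder-style features.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 30))
y = rng.integers(0, 4, size=400)                # 4 hypothetical species labels

classifiers = {
    "LDA": LinearDiscriminantAnalysis(),
    "SVM": SVC(kernel="rbf"),
    "Decision tree": DecisionTreeClassifier(),
    "Random forest": RandomForestClassifier(n_estimators=200),
}

for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(f"{name:14s} accuracy = {scores.mean():.3f} +/- {scores.std():.3f}")
```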
Abstract:
Fluorescence microscopy has become an indispensable tool in cell biology research due to its exceptional specificity and ability to visualize subcellular structures with high contrast. It has the highest impact when applied in 4D mode, i.e. when used to record 3D image information as a function of time, since this allows the study of dynamic cellular processes in their native environment. The main issue in 4D fluorescence microscopy is that the phototoxic effect of fluorescence excitation accumulates during 4D image acquisition to the extent that normal cell functions are altered. Hence, to avoid altering normal cell functioning, the excitation dose used for the individual 2D images constituting a 4D image must be minimized. Consequently, the noise level becomes very high, degrading the resolution. Given the current state of the technology, there is a minimum excitation dose required to ensure a resolution adequate for biological investigations. This minimum is sufficient to damage light-sensitive cells such as yeast if 4D imaging is performed for an extended period of time, for example imaging over a complete cell cycle. Nevertheless, our recently developed deconvolution method resolves this conflict, forming an enabling technology for visualizing dynamic processes in light-sensitive cells for durations longer than ever before without perturbing normal cell functioning. The main goal of this article is to emphasize that there are still possibilities for enabling new kinds of experiments in cell biology research involving even longer 4D imaging, by improving deconvolution methods alone, without any new optical technologies.
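The article refers to the authors' own deconvolution method, which is not specified in the abstract; as a generic stand-in, the sketch below applies standard Richardson-Lucy deconvolution from scikit-image to a synthetic low-dose frame blurred with a Gaussian PSF and corrupted by Poisson noise (all sizes, photon counts and iteration numbers are illustrative).

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import richardson_lucy

def gaussian_psf(size: int = 15, sigma: float = 2.0) -> np.ndarray:
    """Build a small, unit-sum Gaussian point spread function."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return psf / psf.sum()

rng = np.random.default_rng(1)
truth = np.zeros((128, 128))
truth[40:45, 60:65] = 1.0                       # a small bright structure
psf = gaussian_psf()
blurred = fftconvolve(truth, psf, mode="same")

# Low excitation dose -> few photons -> strong Poisson noise.
photons = 50.0
noisy = rng.poisson(blurred * photons) / photons

restored = richardson_lucy(noisy, psf, 30)      # 30 iterations
print("restored peak:", restored.max())
```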
Abstract:
Confocal and two-photon microscopy have become essential tools in biological research, and today many investigations are not possible without their help. The valuable advantage that these two techniques offer is the ability of optical sectioning. Optical sectioning makes it possible to obtain 3D visualization of structures and, hence, valuable information about structural relationships and the geometrical and morphological aspects of the specimen. The achievable lateral and axial resolutions of confocal and two-photon microscopy, as in other optical imaging systems, are both defined by diffraction. Any aberration or imperfection present during imaging results in broadening of the calculated theoretical resolution, blurring and geometrical distortions in the acquired images that interfere with the analysis of the structures, and a reduction in the fluorescence collected from the specimen. The aberrations may have different causes and can be classified by their sources, such as specimen-induced aberrations, optics-induced aberrations, illumination aberrations, and misalignment aberrations. This thesis presents an investigation and study of image enhancement. The goal of this thesis was approached from two different directions. Initially, we investigated the sources of the imperfections. We propose methods to eliminate or minimize aberrations introduced during image acquisition by optimizing the acquisition conditions. The impact on resolution of using a coverslip whose thickness is mismatched with the thickness for which the objective lens is designed was shown, and a novel technique was introduced to determine the proper setting of the correction collar of the lens. The amount of spherical aberration with regard to the numerical aperture of the objective lens was investigated, and it was shown that, depending on the purpose of the imaging task, different numerical apertures must be used. The deformed beam cross-section of the single-photon excitation source was corrected and the resulting enhancement of resolution and image quality was shown. Furthermore, the dependence of the scattered light on the excitation wavelength was shown empirically. In the second part, we continued the study of the image enhancement process with deconvolution techniques. Although deconvolution algorithms are widely used to improve the quality of images, how well a deconvolution algorithm performs depends strongly on the point spread function (PSF) of the imaging system applied to the algorithm and the level of its accuracy. We investigated approaches for obtaining a more precise PSF. Novel methods to improve the pattern of the PSF and reduce the noise are proposed. Furthermore, multiple sources for extracting the PSFs of the imaging system are introduced, and the empirical deconvolution results obtained using each of these PSFs are compared. The results confirm that a greater improvement is attained by applying the in situ PSF during the deconvolution process.
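One common way to obtain the in situ PSF mentioned above is to average sub-volumes cropped around isolated sub-resolution fluorescent beads; the NumPy sketch below shows only that averaging step, assuming the bead centres have already been located (the synthetic volume and coordinates are illustrative, and this is not the thesis's specific method).

```python
import numpy as np

def extract_empirical_psf(stack: np.ndarray, centres, half: int = 8) -> np.ndarray:
    """Average cubic crops around isolated bead centres to estimate the PSF.

    stack   : 3D image volume (z, y, x) containing sub-resolution beads.
    centres : iterable of (z, y, x) bead centre coordinates.
    half    : half-width of each crop in voxels.
    """
    crops = []
    for z, y, x in centres:
        crop = stack[z - half:z + half + 1,
                     y - half:y + half + 1,
                     x - half:x + half + 1].astype(float)
        crop -= crop.min()                     # crude background subtraction
        if crop.sum() > 0:
            crops.append(crop / crop.sum())    # normalise each bead image
    psf = np.mean(crops, axis=0)
    return psf / psf.sum()                     # unit-sum PSF for deconvolution

# Illustrative use with a synthetic volume and two hypothetical bead positions.
volume = np.random.poisson(5.0, size=(40, 128, 128)).astype(float)
volume[20, 40, 40] += 200.0
volume[22, 80, 90] += 200.0
psf = extract_empirical_psf(volume, [(20, 40, 40), (22, 80, 90)])
print("PSF shape:", psf.shape, "sum:", psf.sum())
```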
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
BACKGROUND AND PURPOSE: Daily use of conventional electronic portal imaging devices (EPIDs) for organ tracking is limited by the relatively high dose required for high-quality image acquisition. We studied the use of a novel dose-saving acquisition mode (RadMode), which allows images to be taken with one monitor unit per image, in prostate cancer patients undergoing intensity-modulated radiotherapy (IMRT) and tracking of implanted fiducial gold markers. PATIENTS AND METHODS: Twenty-five patients underwent implantation of three fiducial gold markers prior to the planning CT. Before each treatment of a course of 37 fractions, orthogonal localization images from the antero-posterior and lateral directions were acquired. Portal images of both the setup procedure and the five IMRT treatment beams were analyzed. RESULTS: On average, four localization images were needed for a correct patient setup, resulting in four monitor units of extra dose per fraction. The mean dose delivered to the patient was thereby increased by 1.2%. The procedure was precise enough to reduce the mean displacements prior to treatment to ≤0.3 mm. CONCLUSIONS: The use of a new dose-saving acquisition mode enables daily EPID-based prostate tracking to be performed with a cumulative extra dose of below 1 Gy. This concept is efficiently used in IMRT-treated patients, where separation of setup beams from treatment beams is mandatory.
Abstract:
The verification possibilities of dynamically collimated treatment beams with a scanning liquid ionization chamber electronic portal imaging device (SLIC-EPID) are investigated. The ion concentration in the liquid of a SLIC-EPID, and therefore the read-out signal, is determined by two parameters of a differential equation describing the creation and recombination of the ions. Owing to the form of this equation, the portal imaging detector behaves as a nonlinear dynamic system with memory. In this work, the parameters of the differential equation were experimentally determined for the particular chamber in use and for an incident open 6 MV photon beam. The mathematical description of the ion concentration was then used to predict portal images of intensity-modulated photon beams produced by a dynamic delivery technique, the sliding window approach. Owing to the nature of the differential equation, a mathematical condition for 'reliable leaf motion verification' in the sliding window technique can be formulated. It is shown that the time constants for both formation and decay of the equilibrium concentration in the chamber are of the order of seconds. In order to guarantee reliable leaf motion verification, these time constants impose a constraint on the rapidity of the image read-out for a given maximum leaf speed. For a leaf speed of 2 cm/s, a minimum image acquisition frequency of about 2 Hz is required. Current SLIC-EPID systems are usually too slow, since they need about a second to acquire a portal image. However, if the condition is fulfilled, the memory property of the system can be used to reconstruct the leaf motion. It is shown that a simple edge-detecting algorithm can be employed to determine the leaf positions. The method is also very robust against image noise.
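The abstract does not write out the differential equation; a common form for such a model, assuming ion creation proportional to the instantaneous dose rate and second-order volume recombination (the symbols below are illustrative, not taken from the paper), is

```latex
\frac{dn(t)}{dt} = q\,\dot{D}(t) - \alpha\, n(t)^{2}
```

where n(t) is the ion concentration in the liquid, q and α are the two experimentally determined parameters (creation and recombination), and the read-out signal is taken to be proportional to n(t). With a model of this form, the equilibrium concentration and its seconds-scale formation and decay time constants follow from the two parameters, which is what couples the required read-out frequency to the maximum leaf speed.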
Abstract:
In external beam radiotherapy, electronic portal imaging is becoming an increasingly indispensable tool for the verification of patient setup. For the safe clinical introduction of high-dose conformal radiotherapy such as intensity-modulated radiation therapy, on-line patient setup verification is a prerequisite to ensure that the planned dosimetric coverage of the tumor volume is actually realized in the patient. Since the direction of setup fields often deviates from the direction of the treatment beams, extra dose is delivered to the patient during the acquisition of these portal images, which may reach clinical relevance. The aim of this work was to develop a new acquisition mode for the PortalVision aS500 electronic portal imaging device from Varian Medical Systems that allows portal images to be taken with reduced dose while keeping good image quality. The new acquisition mode, called RadMode, selectively enables and disables beam pulses during image acquisition, allowing one to stop wasting valuable dose during the initial acquisition of "reset frames." Images of excellent quality can be taken with only 1 MU. This low dose per image facilitates daily setup verification with considerably reduced extra dose.
Abstract:
PURPOSE: The aim of this study is to implement augmented reality in real-time image-guided interstitial brachytherapy to allow intuitive real-time intraoperative orientation. METHODS AND MATERIALS: The developed system consists of a conventional video projector, two high-resolution charge-coupled device (CCD) cameras, and an off-the-shelf notebook. The projector was used as a scanning device by projecting coded-light patterns to register the patient and superimpose the operating field with planning data and additional information in arbitrary colors. Subsequent movements of the non-fixed patient were detected by stereoscopically tracking passive markers attached to the patient. RESULTS: In a first clinical study, we evaluated the whole process chain from image acquisition to data projection and determined the overall accuracy in 10 patients undergoing implantation. The described method enabled the surgeon to visualize planning data on top of any preoperatively segmented and triangulated surface (skin) with a direct line of sight during the operation. Furthermore, the tracking system allowed dynamic adjustment of the data to the patient's current position and therefore eliminated the need for rigid fixation. Because of soft-part displacement, we obtained an average deviation of 1.1 mm when moving the patient, whereas changing the projector's position resulted in an average deviation of 0.9 mm. The mean deviation of all needles of an implant was 1.4 mm (range, 0.3-2.7 mm). CONCLUSIONS: The developed low-cost augmented-reality system proved to be accurate and feasible in interstitial brachytherapy. The system meets clinical demands and enables intuitive real-time intraoperative orientation and monitoring of needle implantation.
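Re-aligning the planning data to the tracked marker positions is, at its core, a rigid point-set registration; the sketch below shows the standard Kabsch/SVD solution for the best-fit rotation and translation between corresponding marker positions (an illustrative stand-in, not the authors' implementation).

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Best-fit rotation R and translation t mapping src points onto dst
    (both N x 3 arrays of corresponding marker positions), via SVD."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)         # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

# Illustrative use: markers at planning vs. at the patient's current position.
planning = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0], [0.0, 40.0, 10.0]])
angle = np.deg2rad(5.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
current = planning @ Rz.T + np.array([2.0, -1.0, 3.0])   # rotated and shifted
R, t = rigid_fit(planning, current)
residual = np.linalg.norm(planning @ R.T + t - current, axis=1).max()
print("max residual (mm):", round(float(residual), 6))
```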