946 results for image registration system
Abstract:
OBJECTIVES: To demonstrate the feasibility of panoramic image subtraction for implant assessment. STUDY DESIGN: Three titanium implants were inserted into a fresh pig mandible. One intraoral and two panoramic images were obtained at baseline and after each of six incremental (0.3, 0.6, 1.0, 1.5, 2.0, 2.5 mm) removals of bone. For each incremental removal of bone, the mandible was removed from and replaced in the holding device. Images representing incremental bone removals were registered by computer with the baseline images and subtracted. Assessment of the subtraction images was based on visual inspection and analysis of structured noise. RESULTS: Incremental bone removals were more visible in intraoral than in panoramic subtraction images; however, computer-based registration of panoramic images reduced the structured noise and enhanced the visibility of the incremental removals. CONCLUSION: The feasibility of panoramic image subtraction for implant assessment was demonstrated.
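As a rough illustration of the registration-and-subtraction step described above, the sketch below aligns a follow-up radiograph to the baseline by phase correlation (translation only) and produces a mid-gray-offset difference image. It assumes both images are grayscale numpy arrays of equal size; real panoramic geometry needs a richer (e.g. projective or elastic) transform, so this is a minimal sketch of the principle, not the authors' method.

```python
import numpy as np

def phase_correlation_shift(baseline, follow_up):
    """Estimate the integer (row, col) shift that aligns follow_up to baseline."""
    F1 = np.fft.fft2(baseline)
    F2 = np.fft.fft2(follow_up)
    cross_power = F1 * np.conj(F2)
    cross_power /= np.abs(cross_power) + 1e-12
    correlation = np.fft.ifft2(cross_power).real
    shift = np.array(np.unravel_index(np.argmax(correlation), correlation.shape))
    shape = np.array(baseline.shape)
    shift[shift > shape // 2] -= shape[shift > shape // 2]  # wrap to signed offsets
    return shift

def subtraction_image(baseline, follow_up):
    """Register follow_up to baseline (translation only) and subtract around mid-gray."""
    dy, dx = phase_correlation_shift(baseline, follow_up)
    aligned = np.roll(follow_up, shift=(dy, dx), axis=(0, 1))
    diff = baseline.astype(np.float32) - aligned.astype(np.float32)
    return np.clip(diff + 128.0, 0, 255).astype(np.uint8)  # 128 = no detectable change
```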
Abstract:
BACKGROUND: In this paper we present a landmark-based augmented reality (AR) endoscope system for endoscopic paranasal and transnasal surgeries, along with fast and automatic calibration and registration procedures for the endoscope. METHODS: Preoperatively, the surgeon selects natural landmarks or defines new landmarks in the CT volume. After registration of the preoperative CT to the patient, these landmarks are overlaid on the endoscopic video stream. The specified name of the landmark, along with its selected colour and its distance from the endoscope tip, is also augmented. The endoscope optics are calibrated and registered by fast and automatic methods. The accuracy of the system was evaluated in a metallic grid and a cadaver set-up. RESULTS: The root mean square (RMS) error of the system was 0.8 mm in a controlled laboratory set-up (metallic grid) and 2.25 mm during cadaver studies. CONCLUSIONS: A novel landmark-based AR endoscope system was implemented and its accuracy evaluated. The augmented landmarks help the surgeon orient and navigate within the surgical field. The studies demonstrate the capability of the system for the proposed application. Further clinical studies are planned in the near future.
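A minimal sketch of the landmark-overlay step, assuming a pinhole camera model with known endoscope intrinsics and a rigid CT-to-camera transform obtained from calibration and registration; the matrix values and function names below are hypothetical and only illustrate how a CT-defined landmark and its distance from the endoscope tip could be projected into the video frame.

```python
import numpy as np

def project_landmark(p_ct, K, R_ct_to_cam, t_ct_to_cam):
    """Return the (u, v) pixel position and the distance of a CT landmark from the tip."""
    p_cam = R_ct_to_cam @ p_ct + t_ct_to_cam    # landmark in camera coordinates (mm)
    distance_mm = float(np.linalg.norm(p_cam))  # distance from the endoscope tip
    uvw = K @ p_cam                             # pinhole projection
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    return (u, v), distance_mm

# Hypothetical calibration values for illustration only.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 50.0])
(u, v), d = project_landmark(np.array([5.0, -3.0, 10.0]), K, R, t)
print(f"overlay at pixel ({u:.1f}, {v:.1f}), {d:.1f} mm from tip")
```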
Abstract:
A pilot study was conducted to detect volume changes of cerebral structures in growth hormone (GH)-deficient adults treated with GH, using serial 3D MR image processing, and to assess the need for segmentation prior to registration.
Abstract:
Quantitative characterisation of carotid atherosclerosis and classification of plaques as symptomatic or asymptomatic are crucial in planning the optimal treatment of atheromatous plaque. The computer-aided diagnosis (CAD) system described in this paper can analyse ultrasound (US) images of the carotid artery and classify them as symptomatic or asymptomatic based on their echogenicity characteristics. The CAD system consists of three modules: a) the feature extraction module, where first-order statistical (FOS) features and Laws' texture energy are estimated; b) the dimensionality reduction module, where the number of features is reduced using analysis of variance (ANOVA); and c) the classifier module, consisting of a neural network (NN) trained by a novel hybrid method based on genetic algorithms (GAs) combined with the back-propagation algorithm. The hybrid method is able to select the most robust features, to automatically adjust the NN architecture and to optimise the classification performance. Performance is measured by accuracy, sensitivity, specificity and the area under the receiver-operating characteristic (ROC) curve. The CAD design and development are based on images from 54 symptomatic and 54 asymptomatic plaques. This study demonstrates the ability of a CAD system based on US image analysis and a hybrid-trained NN to identify atheromatous plaques at high risk of stroke.
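The sketch below illustrates two of the three modules in simplified form: first-order statistical (FOS) feature extraction from a plaque ROI and ANOVA-based feature selection. Laws' texture energy and the GA/back-propagation-trained neural network are omitted; the function names and the significance threshold are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import f_oneway

def first_order_statistics(roi):
    """First-order statistical (FOS) features of a grayscale plaque ROI."""
    x = roi.astype(np.float64).ravel()
    mean, std = x.mean(), x.std()
    skewness = np.mean(((x - mean) / (std + 1e-12)) ** 3)
    kurtosis = np.mean(((x - mean) / (std + 1e-12)) ** 4) - 3.0
    return np.array([mean, std, skewness, kurtosis])

def anova_select(features, labels, alpha=0.05):
    """Keep feature columns whose symptomatic/asymptomatic group means differ (ANOVA).

    features: (n_plaques, n_features) array; labels: array of class labels (0/1).
    """
    keep = []
    for j in range(features.shape[1]):
        groups = [features[labels == c, j] for c in np.unique(labels)]
        _, p = f_oneway(*groups)
        if p < alpha:
            keep.append(j)
    return keep
```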
Abstract:
Television and movie images have been altered ever since it became technically possible. Nowadays, embedding advertisements or incorporating text and graphics in TV scenes is common practice, but these elements cannot be considered an integrated part of the scene. This paper discusses the introduction of new services for interactive augmented television. We analyse the main aspects related to the whole chain of augmented reality production. Interactivity is one of the most important added values of digital television: this paper aims to break the model in which all TV viewers receive the same final image. Thus, we introduce and discuss the new concept of interactive augmented television, i.e., the real-time composition of video and computer graphics - e.g. a real scene and freely selectable images or spatially rendered objects - edited and customized by the end user within the context of the user's set-top box and TV receiver.
Abstract:
For broadcasting purposes, MIXED REALITY, the combination of real and virtual scene content, has become ubiquitous. Mixed Reality recording still requires expensive studio setups and is often limited to simple color keying. We present a system for Mixed Reality applications that uses depth keying and provides three-dimensional mixing of real and artificial content. It features enhanced realism through automatic shadow computation, which we consider a core issue for obtaining realism and a convincing visual perception, besides the correct alignment of the two modalities and correct occlusion handling. Furthermore, we present a possibility to support the placement of virtual content in the scene. The core feature of our system is the incorporation of a TIME-OF-FLIGHT (TOF)-camera device. This device delivers real-time depth images of the environment at a reasonable resolution and quality. The camera is used to build a static environment model, and it also allows correct handling of mutual occlusions between real and virtual content, shadow computation and enhanced content planning. The presented system is inexpensive, compact, mobile, flexible and provides convenient calibration procedures. Chroma keying is replaced by depth keying, which is efficiently performed on the GRAPHICS PROCESSING UNIT (GPU) using an environment model and the current ToF-camera image. Automatic extraction and tracking of dynamic scene content is thereby performed, and this information is used for planning and alignment of virtual content. An additional sustainable feature is that depth maps of the mixed content are available in real time, which makes the approach suitable for future 3DTV productions. The paper gives an overview of the whole system approach, including camera calibration, environment model generation, real-time keying and mixing of virtual and real content, shadowing for virtual content and dynamic object tracking for content planning.
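The depth-keying idea can be summarized in a few lines: at every pixel, whichever layer (real or virtual) is closer to the camera wins, which also yields correct mutual occlusions. The CPU sketch below assumes per-pixel depth maps for both the ToF-measured real scene and the rendered virtual content; the actual system performs this per fragment on the GPU.

```python
import numpy as np

def depth_key_composite(real_rgb, real_depth, virt_rgb, virt_depth):
    """At each pixel, keep whichever layer is closer to the camera.

    real_rgb/virt_rgb: (H, W, 3) color images; real_depth/virt_depth: (H, W) depth maps.
    """
    virtual_wins = virt_depth < real_depth      # virtual content occludes the real scene
    out = real_rgb.copy()
    out[virtual_wins] = virt_rgb[virtual_wins]  # composite by per-pixel depth comparison
    return out
```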
Abstract:
PURPOSE Images from computed tomography (CT), combined with navigation systems, improve the outcomes of local thermal therapies that depend on accurate probe placement. Although the use of CT is desirable, its availability for time-consuming radiological interventions is limited. Alternatively, three-dimensional images from C-arm cone-beam CT (CBCT) can be used. The goal of this study was to evaluate the accuracy of navigated CBCT-guided needle punctures, controlled with CT scans. METHODS Five series of five navigated punctures were performed on a nonrigid phantom using a liver-specific navigation system and a CBCT volumetric dataset for planning and navigation. To mimic targets, five titanium screws were fixed to the phantom. Target positioning accuracy (TPECBCT) was computed from control CT scans and divided into lateral and longitudinal components. Additionally, CBCT-CT guidance accuracy was derived by performing CBCT-to-CT image coregistration and measuring TPECBCT-CT from the fused datasets. Image coregistration was evaluated using the fiducial registration error (FRECBCT-CT) and the target registration error (TRECBCT-CT). RESULTS Positioning accuracies in the lateral directions pertaining to CBCT (TPECBCT = 2.1 ± 1.0 mm) were found to be better than those achieved in a previous study using CT (TPECT = 2.3 ± 1.3 mm). The image coregistration error was 0.3 ± 0.1 mm, resulting in an average TRE of 2.1 ± 0.7 mm (N = 5 targets) and an average Euclidean TPECBCT-CT of 3.1 ± 1.3 mm. CONCLUSIONS Stereotactic needle punctures might be planned and performed on volumetric CBCT images and controlled with multidetector CT, with positioning accuracy similar to or higher than that of punctures performed using CT scanners.
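Fiducial and target registration errors of the kind reported above are commonly computed from a point-based rigid registration. The sketch below uses a generic Kabsch/SVD solution and assumes corresponding fiducial and target coordinates in both image spaces; it illustrates the error definitions rather than the navigation system's own registration algorithm.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points (rows = points)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_errors(fid_src, fid_dst, tgt_src, tgt_dst):
    """Mean fiducial (FRE) and target (TRE) registration errors after rigid registration."""
    R, t = rigid_register(fid_src, fid_dst)
    fre = np.linalg.norm((fid_src @ R.T + t) - fid_dst, axis=1)
    tre = np.linalg.norm((tgt_src @ R.T + t) - tgt_dst, axis=1)
    return fre.mean(), tre.mean()
```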
Abstract:
OBJECTIVE Angiographic C-arm CT may allow percutaneous stereotactic tumor ablations to be performed in the interventional radiology suite. Our purpose was to evaluate the accuracy of using C-arm CT for single- and multimodality image fusion and to compare the targeting accuracy for liver lesions with the reference standard of MDCT. MATERIALS AND METHODS C-arm CT and MDCT scans were obtained of a nonrigid rapid-prototyping liver phantom containing five 1-mm targets that were placed under skin-simulating deformable plastic foam. Target registration errors of image fusion were evaluated for single-modality and multimodality image fusions. A navigation system and a stereotactic aiming device were used to evaluate target positioning errors on postinterventional scans with the needles in place, fused with the C-arm CT or MDCT planning images. RESULTS The target registration error of the image fusion showed no significant difference (p > 0.05) between the two modalities. In five series with a total of 25 punctures for each modality, the lateral target positioning error (i.e., the lateral distance between the needle tip and the planned trajectory) was similar for C-arm CT (mean [± SD], 1.6 ± 0.6 mm) and MDCT (1.82 ± 0.97 mm) (p = 0.33). CONCLUSION In a nonrigid liver phantom, angiographic C-arm CT may provide similar image fusion accuracy for the comparison of intra- and postprocedure control images with the planning images and enables stereotactic targeting accuracy similar to that of MDCT.
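The lateral target positioning error quoted above is the perpendicular distance between the achieved needle tip and the planned trajectory. A small sketch with hypothetical coordinates:

```python
import numpy as np

def lateral_tpe(entry, target, needle_tip):
    """Perpendicular distance (mm) from needle_tip to the planned entry->target line."""
    axis = target - entry
    axis = axis / np.linalg.norm(axis)
    offset = needle_tip - entry
    along = np.dot(offset, axis) * axis           # component along the planned path
    return float(np.linalg.norm(offset - along))  # lateral component

# Hypothetical coordinates (mm) for illustration only.
entry = np.array([0.0, 0.0, 0.0])
target = np.array([0.0, 0.0, 100.0])
tip = np.array([1.2, -0.8, 99.0])
print(f"lateral TPE = {lateral_tpe(entry, target, tip):.2f} mm")
```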
Abstract:
The successful management of cancer with radiation relies on the accurate deposition of a prescribed dose to a prescribed anatomical volume within the patient. Treatment set-up errors are inevitable because the alignment of field-shaping devices with the patient must be repeated daily, up to eighty times during the course of a fractionated radiotherapy treatment. With the invention of electronic portal imaging devices (EPIDs), a patient's portal images can be visualized daily in real time after only a small fraction of the radiation dose has been delivered to each treatment field. However, the accuracy of human visual evaluation of low-contrast portal images has been found to be inadequate. The goal of this research is to develop automated image analysis tools to detect both treatment field shape errors and patient anatomy placement errors with an EPID. A moments method has been developed to align treatment field images to compensate for the lack of repositioning precision of the image detector. A figure of merit has also been established to verify the shape and rotation of the treatment fields. Following proper alignment of treatment field boundaries, a cross-correlation method has been developed to detect shifts of the patient's anatomy relative to the treatment field boundary. Phantom studies showed that the moments method aligned the radiation fields to within 0.5 mm of translation and 0.5° of rotation, and that the cross-correlation method aligned anatomical structures inside the radiation field to within 1 mm of translation and 1° of rotation. A new procedure of generating and using digitally reconstructed radiographs (DRRs) at megavoltage energies as reference images was also investigated. The procedure allowed a direct comparison between a designed treatment portal and the actual patient setup positions detected by an EPID. Phantom studies confirmed the feasibility of the methodology. Both the moments method and the cross-correlation technique were implemented within an experimental radiotherapy picture archival and communication system (RT-PACS) and were used clinically to evaluate the setup variability of two groups of cancer patients treated with and without an alpha-cradle immobilization aid. The tools developed in this project have proven to be very effective and have played an important role in detecting patient alignment errors and field-shape errors in treatment fields formed by a multileaf collimator (MLC).
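As an illustration of the moments-based alignment idea, the sketch below estimates translation from centroid differences and rotation from the principal-axis angle given by second-order central moments of a binary treatment-field mask. It is a generic formulation under these assumptions, not the thesis implementation.

```python
import numpy as np

def field_moments(mask):
    """Centroid (row, col) and principal-axis angle (degrees) of a binary field mask."""
    rows, cols = np.nonzero(mask)
    r0, c0 = rows.mean(), cols.mean()
    mu20 = np.mean((rows - r0) ** 2)
    mu02 = np.mean((cols - c0) ** 2)
    mu11 = np.mean((rows - r0) * (cols - c0))
    angle = 0.5 * np.degrees(np.arctan2(2.0 * mu11, mu20 - mu02))
    return (r0, c0), angle

def field_alignment(mask_reference, mask_daily):
    """Translation (pixels) and rotation (degrees) bringing the daily field onto the reference."""
    (r_ref, c_ref), a_ref = field_moments(mask_reference)
    (r_day, c_day), a_day = field_moments(mask_daily)
    translation = (r_ref - r_day, c_ref - c_day)
    rotation = a_ref - a_day
    return translation, rotation
```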
Abstract:
Extensive experience with the analysis of human prophase chromosomes and studies into the complexity of prophase GTG-banding patterns have suggested that at least some prophase chromosomal segments can be accurately identified and characterized independently of the morphology of the chromosome as a whole. In this dissertation, the feasibility of identifying and analyzing specified prophase chromosome segments was therefore investigated as an alternative to prophase chromosome analysis based on whole-chromosome recognition. Through the use of prophase idiograms at the 850-band stage (FRANCKE, 1981) and a comparison system based on the calculation of cross-correlation coefficients between idiogram profiles, we have demonstrated that it is possible to divide the 24 human prophase idiograms into a set of 94 unique band sequences. Each unique band sequence has a banding pattern that is recognizable and distinct from any other non-homologous chromosome portion. Using chromosomes 11p and 16 through 22 to demonstrate unique band sequence integrity at the chromosome level, we found that prophase chromosome banding pattern variation can be compensated for and that a set of unique band sequences very similar to those at the idiogram level can be identified on actual chromosomes. The use of a unique band sequence approach to prophase chromosome analysis is expected to increase efficiency and sensitivity through more effective use of the available banding information. The unique band sequence approach is discussed both at the routine level, for use by cytogeneticists, and at the image processing level, with a semi-automated approach to prophase chromosome analysis.
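The profile comparison described above can be sketched as a normalized cross-correlation between 1-D band-density traces: a query band sequence is slid along an idiogram profile, and the position with the highest correlation coefficient is taken as the best match. The code below assumes equally sampled profiles and is purely illustrative.

```python
import numpy as np

def correlation_coefficient(profile_a, profile_b):
    """Pearson-style normalized cross-correlation of two equal-length band profiles."""
    a = (profile_a - profile_a.mean()) / (profile_a.std() + 1e-12)
    b = (profile_b - profile_b.mean()) / (profile_b.std() + 1e-12)
    return float(np.mean(a * b))

def best_matching_segment(query, idiogram_profile):
    """Slide a query band sequence along an idiogram profile and return the best match."""
    n = len(query)
    scores = [correlation_coefficient(query, idiogram_profile[i:i + n])
              for i in range(len(idiogram_profile) - n + 1)]
    best = int(np.argmax(scores))
    return best, scores[best]
```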
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best-performing components of the BoF architecture and to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features, using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy of approximately 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
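A compact sketch of the BoF pipeline described above: dense local descriptors, a k-means visual dictionary, histogram encoding and a linear SVM. For brevity, raw grayscale patches stand in for dense SIFT on the HSV color space and the dictionary is far smaller than the 10,000 words used in the study; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

def dense_descriptors(image, patch=8, step=8):
    """Flattened grayscale patches sampled on a dense grid (stand-in for dense SIFT)."""
    descs = []
    for r in range(0, image.shape[0] - patch + 1, step):
        for c in range(0, image.shape[1] - patch + 1, step):
            descs.append(image[r:r + patch, c:c + patch].astype(np.float32).ravel())
    return np.array(descs)

def encode(image, vocabulary):
    """Histogram of visual-word occurrences, L1-normalized."""
    words = vocabulary.predict(dense_descriptors(image))
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-12)

def train_bof(train_images, train_labels, n_words=200):
    """Build the visual dictionary, encode the training images and fit a linear SVM."""
    all_descs = np.vstack([dense_descriptors(img) for img in train_images])
    vocabulary = MiniBatchKMeans(n_clusters=n_words).fit(all_descs)
    X = np.array([encode(img, vocabulary) for img in train_images])
    classifier = LinearSVC().fit(X, train_labels)
    return vocabulary, classifier
```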
Abstract:
A major component of minimally invasive cochlear implantation is atraumatic scala tympani (ST) placement of the electrode array. This work reports on a semiautomatic planning paradigm that uses anatomical landmarks and cochlear surface models to compute the cochleostomy target and insertion trajectory. The method was validated in a human whole-head cadaver model (n = 10 ears). Cochleostomy targets were generated by an automated script and used for subsequent planning of a direct cochlear access (DCA) drill trajectory from the mastoid surface to the inner ear. An image-guided robotic system was used to perform both DCA and cochleostomy drilling. Nine of the 10 implanted specimens showed complete ST placement. One case of scala vestibuli insertion occurred due to a registration/drilling error of 0.79 mm. The presented approach indicates that a safe cochleostomy target and insertion trajectory can be planned using conventional clinical imaging modalities, which lack sufficient resolution to identify the basilar membrane.
Abstract:
Quantification of protein expression based on immunohistochemistry (IHC) is an important step in clinical diagnosis and translational tissue-based research. Manual scoring systems are used to evaluate protein expression based on staining intensities and distribution patterns. However, visual scoring remains an inherently subjective approach. The aim of our study was to explore whether digital image analysis is an alternative or even superior tool for quantifying the expression of membrane-bound proteins. We analyzed five membrane-bound biomarkers (HER2, EGFR, pEGFR, β-catenin, and E-cadherin) and performed IHC on tumor tissue microarrays from 153 esophageal adenocarcinoma patients from a single-center study. The tissue cores were scored visually, applying an established routine scoring system, as well as by digital image analysis, which yields a continuous spectrum of average staining intensity. We then compared both assessments using survival analysis as an end point. There were no significant correlations with patient survival for the visual scoring of β-catenin, E-cadherin, pEGFR, or HER2. In contrast, the digital image analysis approach showed significant associations with disease-free survival for β-catenin, E-cadherin, pEGFR, and HER2 (P = 0.0125, P = 0.0014, P = 0.0299, and P = 0.0096, respectively). For EGFR, the association with patient survival was stronger with digital image analysis than with visual scoring (visual: P = 0.0045, image analysis: P < 0.0001). The results of this study indicate that digital image analysis is superior to visual scoring: it is more sensitive and therefore better able to detect biological differences within the tissues. This increased sensitivity improves the quality of quantification.
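One way to obtain a continuous average staining intensity of the kind used in the digital analysis is to deconvolve each TMA core into hematoxylin/eosin/DAB channels and average the DAB channel over the tissue area. The sketch below uses scikit-image's rgb2hed for this and illustrates the principle only; it is not the specific software pipeline used in the study.

```python
import numpy as np
from skimage.color import rgb2hed

def average_dab_intensity(core_rgb, tissue_mask=None):
    """Mean DAB (brown) channel value over the tissue area of one TMA core."""
    dab = rgb2hed(core_rgb)[..., 2]  # channel 2 = DAB after color deconvolution
    if tissue_mask is None:
        tissue_mask = np.ones(dab.shape, dtype=bool)
    return float(dab[tissue_mask].mean())
```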
Abstract:
For patients with extensive bilobar colorectal liver metastases (CRLM), initial surgery may not be feasible, and a multimodal approach including microwave ablation (MWA) provides the only chance for prolonged survival. Intraoperative navigation systems may improve the accuracy of ablation and of surgical resection of so-called "vanishing lesions", ultimately improving patient outcome. The clinical application of intraoperative navigated liver surgery is illustrated in a patient undergoing combined resection/MWA for multiple, synchronous, bilobar CRLM. Regular follow-up with computed tomography (CT) allowed the temporal development of the ablation zones to be monitored. Of the ten lesions detected in the preoperative CT scan, the largest lesion was resected and the others were ablated using an intraoperative navigation system. Twelve months post-surgery, a new lesion (Seg IVa) was detected and treated by trans-arterial embolization. Nineteen months post-surgery, new liver and lung metastases were detected and palliative chemotherapy was started. The patient passed away four years after the initial diagnosis. For patients with extensive CRLM not treatable by standard surgery, navigated MWA/resection may provide excellent tumor control, improving longer-term survival. Intraoperative navigation systems provide precise, real-time information to the surgeon, aiding the decision-making process and substantially improving the accuracy of both ablation and resection. Regular follow-up including 3D modeling allows early discrimination between ablation zones and recurrent tumor lesions.