33 results for Single system image
Abstract:
Purpose Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to deriving 3D models is direct segmentation of CT or MRI datasets. However, these have the disadvantages of being expensive and time-consuming and/or of exposing the patient to high radiation doses. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image.
Methods Our technique is based on a hybrid 2D/3D deformable registration strategy combining a landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows the different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the associated ground truth model.
Results Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was found to be 1.0 mm.
Conclusion An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
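The mean and RMS surface-distance errors reported above can be computed, once scale and rigid pose have been recovered, as nearest-neighbour distances from the reconstructed vertices to the ground-truth surface points. A minimal sketch, not the authors' exact evaluation pipeline (a real comparison would use dense meshes and point-to-surface rather than point-to-point distances):

```python
import numpy as np

def surface_errors(reconstructed, ground_truth):
    """Mean and RMS nearest-neighbour distance from each reconstructed
    vertex to the ground-truth point cloud; both inputs are (N, 3) arrays.
    Brute-force distance matrix; a KD-tree would be used at real scale."""
    d = np.linalg.norm(reconstructed[:, None, :] - ground_truth[None, :, :],
                       axis=2)
    nearest = d.min(axis=1)          # distance to closest ground-truth point
    return nearest.mean(), np.sqrt(np.mean(nearest ** 2))
```

For a perfect reconstruction both values are zero; the RMS error is always at least the mean error, which matches the reported ranges (0.7-1.3 mm mean versus 1.0-1.7 mm RMS).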
Abstract:
Background Image-guided systems have recently been introduced for application in liver surgery. We aimed to identify and propose suitable indications for image-guided navigation systems in the domain of open oncologic liver surgery and, more specifically, in the setting of liver resection with and without microwave ablation.
Method A retrospective analysis was conducted of patients undergoing liver resection with and without microwave ablation using an intraoperative image-guided stereotactic system during three stages of technological development (accuracy: 8.4 ± 4.4 mm in phase I and 8.4 ± 6.5 mm in phase II versus 4.5 ± 3.6 mm in phase III). We evaluated for which indications image-guided surgery was used at the different stages of technical development.
Results Between 2009 and 2013, 65 patients underwent image-guided surgical treatment: resection alone (n = 38), ablation alone (n = 11), or a combination thereof (n = 16). With increasing accuracy of the system, image guidance was progressively used for atypical resections and for combined microwave ablation and resection instead of formal liver resection (p < 0.0001).
Conclusion Clinical application of image guidance is feasible, while its efficacy depends on accuracy. The concept of image guidance has been shown to be increasingly efficient for selected indications in liver surgery. As the accuracy of the available technology increases with ongoing technological advancement, more and more previously untreatable scenarios, such as multiple small bilobar lesions and so-called vanishing lesions, come within reach.
Abstract:
Progress toward elucidating the 3D structures of eukaryotic membrane proteins has been hampered by the lack of appropriate expression systems. Recent work using the Xenopus oocyte as a novel expression system for structural analysis demonstrates that it can provide not only the substantial protein yields required for structural work but also the expression of eukaryotic membrane proteins in a more native and functional conformation. There is a long history of using the oocyte expression system as an efficient tool for membrane transporter and channel expression in direct functional analysis, but improvements in robotic injection systems and protein yield optimization now allow expressed proteins to be rapidly scaled up, purified, and characterized in physiologically relevant structural states. Traditional overexpression systems (yeast, bacteria, and insect cells), by comparison, require chaotropic conditions over several steps of extraction, solubilization, and purification. By contrast, overexpressing within the oocyte system for subsequent negative-stain transmission electron microscopy studies provides a single system that can functionally assess and purify eukaryotic membrane proteins in fewer steps while maintaining the physiological properties of the membrane protein.
Abstract:
Dynamically typed languages lack information about the types of variables in the source code. Developers care about this information as it supports program comprehension. Basic type inference techniques are helpful, but may yield many false positives or negatives. We propose to mine information from the software ecosystem on how frequently given types are inferred unambiguously, in order to improve the quality of type inference for a single system. This paper presents an approach that augments existing type inference techniques by supplementing the information available in the source code of a project with data from other projects written in the same language. For all available projects, we track how often messages are sent to instance variables throughout the source code. Predictions for the type of a variable are made based on the messages sent to it. The evaluation of a proof-of-concept prototype shows that this approach works well for types that are sufficiently popular, like those from the standard library, and tends to create false positives for unpopular or domain-specific types. The false positives are, in most cases, fairly easy to identify. The evaluation data also shows a substantial increase in the number of correctly inferred types compared to the non-augmented type inference.
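The core idea above — predicting a variable's type from the messages sent to it, with ecosystem-wide frequency breaking ties between otherwise indistinguishable candidates — can be sketched as follows. The interface table, Smalltalk-style selector names, and frequency counts are invented for illustration and are not data from the paper:

```python
from collections import Counter

# Hypothetical miniature "ecosystem": for each type, the messages its
# instances understand, and how often the type was unambiguously inferred
# across other projects (all values are illustrative).
INTERFACES = {
    "OrderedCollection": {"add:", "do:", "size", "removeFirst"},
    "Set":               {"add:", "do:", "size", "includes:"},
    "Dictionary":        {"at:put:", "do:", "size", "keys"},
}
ECOSYSTEM_FREQ = Counter({"OrderedCollection": 120, "Dictionary": 80, "Set": 35})

def infer_type(messages_sent):
    """Rank candidate types: keep those whose interface covers every message
    sent to the variable, then order by ecosystem popularity."""
    candidates = [t for t, iface in INTERFACES.items()
                  if set(messages_sent) <= iface]
    return sorted(candidates, key=lambda t: -ECOSYSTEM_FREQ[t])
```

For example, `{"add:", "do:"}` is ambiguous between `OrderedCollection` and `Set` on interface alone; the ecosystem frequency breaks the tie in favour of the more commonly inferred type, which is exactly the effect that helps popular standard-library types but hurts rare domain-specific ones.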
Abstract:
Defocus blur is an indicator of the depth structure of a scene. However, given a single input image from a conventional camera, one cannot distinguish between blurred objects lying in front of or behind the focal plane, as they may be subject to exactly the same amount of blur. In this paper we address this limitation by exploiting coded apertures. Previous work in this area focuses on setups where the scene is placed either entirely in front of or entirely behind the focal plane. We demonstrate that asymmetric apertures result in unique blurs for all distances from the camera. To exploit asymmetric apertures, we propose an algorithm that can unambiguously estimate scene depth and texture from a single input image. One of the main advantages of our method is that, within the same depth range, we can work with less blurred data than other methods. The technique is tested on both synthetic and real images.
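The central observation — that a symmetric aperture blurs a point in front of the focal plane exactly like one behind it, while an asymmetric aperture does not — can be illustrated with a 1-D toy model. The kernels below are illustrative stand-ins for real point-spread functions, which in practice also scale with distance from the focal plane:

```python
import numpy as np

def blur(signal, kernel):
    # 1-D stand-in for optical defocus: convolution with the aperture PSF
    return np.convolve(signal, kernel, mode="same")

sharp = np.zeros(21)
sharp[10] = 1.0                                  # a single point light source

symmetric  = np.array([0.25, 0.5, 0.25])         # e.g. a conventional aperture
asymmetric = np.array([0.1, 0.3, 0.6])           # e.g. an off-centre opening

# Crossing the focal plane mirrors the point-spread function.
front_sym,  behind_sym  = blur(sharp, symmetric),  blur(sharp, symmetric[::-1])
front_asym, behind_asym = blur(sharp, asymmetric), blur(sharp, asymmetric[::-1])

print(np.allclose(front_sym, behind_sym))    # True  -> sign of depth ambiguous
print(np.allclose(front_asym, behind_asym))  # False -> front/behind separable
```

Because the mirrored asymmetric kernel produces a measurably different blur, a depth-estimation algorithm can test candidate kernels against the observed image and recover both the magnitude and the sign of the distance to the focal plane.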
Abstract:
An algorithm is presented for the real-time registration of a retinal video sequence, captured with a scanning digital ophthalmoscope (SDO), to a retinal composite image. This method is designed for a computer-assisted retinal laser photocoagulation system to compensate for retinal motion and hence enhance the accuracy, speed, and patient safety of retinal laser treatments. The procedure combines intensity- and feature-based registration techniques. For the registration of an individual frame, the translational frame-to-frame motion between the preceding and current frame is detected by normalized cross correlation. Next, vessel points on the current video frame are identified, and an initial transformation estimate is constructed from the calculated translation vector and the quadratic registration matrix of the previous frame. The vessel points are then iteratively matched to the segmented vessel centerline of the composite image to refine the initial transformation and register the video frame to the composite image. Criteria for image quality and algorithm convergence are introduced, which govern the exclusion of single frames from the registration process and trigger a loss-of-tracking signal if necessary. The algorithm was successfully applied to ten different video sequences recorded from patients. It achieved an average accuracy of 2.47 ± 2.0 pixels (∼23.2 ± 18.8 μm) over 2764 evaluated video frames and demonstrated that it meets the clinical requirements.
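The first stage, estimating the translational frame-to-frame motion by normalized cross correlation, can be sketched with an exhaustive integer-shift search. This is a toy version under simplifying assumptions (wrap-around shifts, whole-pixel search); a real-time system would plausibly use FFT-based correlation and subpixel refinement instead:

```python
import numpy as np

def ncc_shift(prev, curr, max_shift=5):
    """Estimate the integer (dy, dx) translating `prev` onto `curr` by
    maximising normalized cross correlation over a small search window.
    np.roll wraps around at the borders, which is fine for this sketch."""
    best, best_score = (0, 0), -np.inf
    b = curr - curr.mean()
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(prev, dy, axis=0), dx, axis=1)
            a = shifted - shifted.mean()
            score = (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)
                                     + 1e-12)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best
```

The recovered translation vector then seeds the iterative vessel-point matching, exactly as the abstract describes: a cheap global estimate first, a refined quadratic transformation second.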
Abstract:
In this paper we propose a new system that allows reliable acetabular cup placement when total hip arthroplasty (THA) is performed via the lateral approach. Conceptually, it combines the accuracy of computer-generated, patient-specific morphological information with an easy-to-use mechanical guide that effectively uses natural gravity as the angular reference. The former is achieved by using a statistical shape model-based 2D-3D reconstruction technique that can generate a scaled, patient-specific 3D shape model of the pelvis from a single conventional anteroposterior (AP) pelvic X-ray radiograph. The reconstructed 3D shape model facilitates a reliable and accurate co-registration of the mechanical guide with the patient's anatomy in the operating theater. We validated the accuracy of our system by conducting experiments on placing seven cups in four pelvises with different morphologies. Taking the measurements from an image-free navigation system as the ground truth, our system showed an average accuracy of 2.1 ± 0.7° for inclination and an average accuracy of 1.2 ± 1.4° for anteversion.
Abstract:
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer-assisted intervention can reduce the number of scans, but requires handling, matching, and visualization of two different datasets: one dataset is used for target definition according to metabolism, while the other is used for instrument guidance according to anatomical structures. No navigation system is available that can handle such data and perform PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue, such as bone and soft tissue. These two tissues require different clinical protocols for puncture and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are rarely found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during the transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers.
We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
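The user-error metric described above, the distance from the planned target to the actual needle tip averaged per tissue type, amounts to a simple Euclidean statistic. A minimal sketch with invented coordinates (the arrays and labels are illustrative, not the study's data):

```python
import numpy as np

def mean_targeting_error(planned, actual, tissue, kind):
    """Mean Euclidean planned-target-to-needle-tip distance for one
    tissue type ('bone' or 'soft'); planned/actual are (N, 3) arrays in mm."""
    mask = np.asarray(tissue) == kind
    return np.linalg.norm(planned[mask] - actual[mask], axis=1).mean()
```

Applied to the study's measurements, this kind of per-tissue aggregation is what yields the reported 4.23 mm (bone) versus 3.07 mm (soft tissue) overall averages and lets the error be decomposed by puncture type.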