36 results for Vision Based Navigation
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
INTRODUCTION: Recent advances in medical imaging have brought post-mortem minimally invasive computed tomography (CT) guided percutaneous biopsy to public attention. AIMS: The goals of this study were to facilitate and automate post-mortem biopsy, to avoid the radiation exposure to the investigator that may occur during tissue sampling under CT guidance, and to reduce the number of needle insertion attempts to a single puncture per target. METHODS AND MATERIALS: Clinically approved and post-mortem tested ACN-III biopsy core needles (14 gauge x 160 mm) with an automatic pistol device (Bard Magnum, Medical Device Technologies, Denmark) were used for probe sampling. The needles were navigated in a gelatine/peas phantom, an ex vivo porcine model, and subsequently in two human bodies using a navigation system (MEM centre/ISTB Medical Application Framework, Marvin, Bern, Switzerland) with a guidance frame and a CT scanner (Emotion 6, Siemens, Germany). RESULTS: Biopsy of all peas could be performed within a single attempt. The average distance between the inserted needle tip and the pea centre was 1.4 mm (n=10; SD 0.065 mm; range 0-2.3 mm). The targets in the porcine liver were also accurately punctured. The average distance between the needle tip and the target was 0.5 mm (range 0-1 mm). Biopsies of brain, heart, lung, liver, pancreas, spleen, and kidney were performed on human corpses. For each target, the biopsy needle was inserted only once. The examination of one body, with sampling of tissue probes at the above-mentioned locations, took approximately 45 min. CONCLUSIONS: Post-mortem navigated biopsy can reliably provide tissue samples from different body locations. Since the positional data of the body and the biopsy needle are continuously updated using optical tracking, no control CT images verifying the positional data are necessary and no radiation exposure to the investigator needs to be taken into account. Furthermore, the number of needle insertions for each target can be reduced to a single one with adequate accuracy, as demonstrated ex vivo, and, in contrast to conventional CT-guided biopsy, the insertion angle may be oblique. Navigation for minimally invasive tissue sampling is a useful addition to post-mortem CT-guided biopsy.
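The core of such optical navigation is to express both the planned target (defined in the CT volume) and the tracked needle tip in a common tracker coordinate frame and to monitor their distance continuously. The sketch below illustrates that update step; the transform names, 4x4 poses, and the example values are assumptions for illustration, not the Marvin framework's actual API.

```python
import numpy as np

def to_homogeneous(p):
    """Append 1 to a 3D point so it can be multiplied by a 4x4 pose."""
    return np.append(np.asarray(p, dtype=float), 1.0)

def needle_tip_to_target_distance(T_tracker_from_ct, target_ct,
                                   T_tracker_from_needle, tip_in_needle):
    """Distance (mm) between a CT-planned target and the tracked needle tip.

    T_tracker_from_ct     : 4x4 registration of the CT volume to the tracker frame
    target_ct             : planned target in CT coordinates (mm)
    T_tracker_from_needle : 4x4 pose of the needle marker reported by the tracker
    tip_in_needle         : calibrated tip offset in the needle marker frame (mm)
    """
    target_tracker = T_tracker_from_ct @ to_homogeneous(target_ct)
    tip_tracker = T_tracker_from_needle @ to_homogeneous(tip_in_needle)
    return float(np.linalg.norm(target_tracker[:3] - tip_tracker[:3]))

# Toy example: report how far the tip is from the planned pea centre.
if __name__ == "__main__":
    T_reg = np.eye(4)                              # placeholder registration
    T_needle = np.eye(4); T_needle[:3, 3] = [10.0, 0.0, 0.0]
    d = needle_tip_to_target_distance(T_reg, [11.0, 0.5, 0.0],
                                      T_needle, [0.0, 0.0, 0.0])
    print(f"tip-to-target distance: {d:.1f} mm")
```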
Abstract:
Background: Individuals with type 1 diabetes (T1D) have to count the carbohydrates (CHO) of their meals to estimate the prandial insulin dose needed to compensate for the meal's effect on blood glucose levels. CHO counting is very challenging but also crucial, since an error of 20 grams can substantially impair postprandial control. Method: The GoCARB system is a smartphone application designed to support T1D patients with CHO counting of non-packed foods. In a typical scenario, the user places a reference card next to the dish and acquires 2 images with his/her smartphone. From these images, the plate is detected and the different food items on the plate are automatically segmented and recognized, while their 3D shape is reconstructed. Finally, the food volumes are calculated and the CHO content is estimated by combining the previous results and using the USDA nutritional database. Results: To evaluate the proposed system, a set of 24 multi-food dishes was used. For each dish, 3 pairs of images were taken, and for each pair the system was applied 4 times. The mean absolute percentage error in CHO estimation was 10 ± 12%, which led to a mean absolute error of 6 ± 8 grams of CHO for normal-sized dishes. Conclusion: The laboratory experiments demonstrated the feasibility of the GoCARB prototype system, since the error was below the initial goal of 20 grams. However, further improvements and evaluation are needed prior to launching a system able to accommodate inter- and intracultural eating habits.
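Once each food item's volume is known, the carbohydrate estimate reduces to volume x density x carbohydrate fraction per item, summed over the plate. The sketch below illustrates that final step; the per-food density and carbohydrate values are illustrative placeholders, not entries taken from the USDA database, and the function is not the GoCARB implementation.

```python
# Hypothetical nutrient table: grams per millilitre and CHO grams per gram of food.
# The values are illustrative placeholders, not USDA figures.
NUTRIENTS = {
    "pasta":    {"density_g_per_ml": 0.60, "cho_per_g": 0.25},
    "potatoes": {"density_g_per_ml": 0.65, "cho_per_g": 0.17},
    "beans":    {"density_g_per_ml": 0.70, "cho_per_g": 0.14},
}

def estimate_cho(segmented_items):
    """Sum carbohydrate grams over the recognized food items.

    segmented_items: list of (food_label, volume_ml) pairs produced by the
    recognition and 3D-reconstruction stages.
    """
    total = 0.0
    for label, volume_ml in segmented_items:
        entry = NUTRIENTS[label]
        weight_g = volume_ml * entry["density_g_per_ml"]
        total += weight_g * entry["cho_per_g"]
    return total

print(f"{estimate_cho([('pasta', 180.0), ('beans', 90.0)]):.0f} g CHO")
```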
Abstract:
In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system comprising two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and recorded in vivo rabbit data.
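The proximity cue itself is simple: if the tool tip's disparity approaches the disparity of the retinal surface at the same image location, the tip is close to the retina in depth. A minimal sketch of that test follows; the function name and threshold are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def tool_near_retina(tool_disparity_px, tip_xy, retina_disparity_map,
                     threshold_px=2.0):
    """Flag proximity when the tool tip's stereo disparity is close to the
    retinal surface disparity at the tip's image location.

    tool_disparity_px    : disparity of the tracked tool tip (pixels)
    tip_xy               : (x, y) tip position in the left image
    retina_disparity_map : dense disparity map of the retinal surface (pixels)
    threshold_px         : disparity gap below which a collision risk is raised
    """
    x, y = int(round(tip_xy[0])), int(round(tip_xy[1]))
    surface_disparity = retina_disparity_map[y, x]
    return abs(tool_disparity_px - surface_disparity) < threshold_px

# Toy example: a flat retinal surface at 15 px disparity, tool tip at 16 px.
disparity_map = np.full((480, 640), 15.0)
print(tool_near_retina(16.0, (320, 240), disparity_map))  # True -> warn surgeon
```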
Abstract:
In this chapter, a low-cost surgical navigation solution for periacetabular osteotomy (PAO) surgery is described. Two commercial inertial measurement units (IMU; Xsens Technologies, The Netherlands) are attached to a patient's pelvis and to the acetabular fragment, respectively. Registration of the patient with a pre-operatively acquired computer model is done by recording the orientation of the patient's anterior pelvic plane (APP) using one IMU. A custom-designed device is used to record the orientation of the APP in the reference coordinate system of the IMU. After registration, the two sensors are mounted to the patient's pelvis and acetabular fragment, respectively. Once the initial position is recorded, the orientation is measured and displayed on a computer screen. A patient-specific computer model generated from a pre-operatively acquired computed tomography (CT) scan is used to visualize the updated orientation of the acetabular fragment. Experiments with plastic bones (7 hip joints) performed in an operating room, comparing a previously developed optical navigation system with our inertial-based navigation system, showed no statistically significant difference in the measurement of acetabular component reorientation (anteversion and inclination). In six out of seven hip joints the mean absolute difference was below five degrees for both anteversion and inclination.
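With one IMU on the pelvis and one on the fragment, the quantity of interest is the fragment's rotation relative to the pelvis, obtained by composing the inverse of one sensor orientation with the other and comparing the result against the pose stored at the start of the reorientation. A minimal quaternion sketch using SciPy's rotation utilities is shown below; the frame conventions and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def relative_rotation(q_world_pelvis, q_world_fragment):
    """Rotation of the fragment expressed in the pelvis frame.

    Both inputs are unit quaternions (x, y, z, w) reported by the IMUs in a
    common world frame (gravity / magnetic north).
    """
    return R.from_quat(q_world_pelvis).inv() * R.from_quat(q_world_fragment)

def reorientation_since_start(q_pelvis_0, q_fragment_0, q_pelvis_t, q_fragment_t):
    """Change of the fragment's orientation relative to the pelvis since the
    initial position was recorded (degrees about the net rotation axis)."""
    start = relative_rotation(q_pelvis_0, q_fragment_0)
    now = relative_rotation(q_pelvis_t, q_fragment_t)
    delta = start.inv() * now
    return float(np.degrees(delta.magnitude()))
```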
Abstract:
Computed tomography (CT) based navigation for endoscopic sinus surgery is used with increasing frequency despite major public concern about the risk of iatrogenic radiation-induced cancer. Studies on dose reduction for CAS-CT are almost nonexistent. We validate the use of radiation-dose-reduced CAS-CT for clinically applied surface registration.
Abstract:
A total knee arthroplasty performed with navigation results in more accurate component positioning with fewer outliers. It is not known whether image-based or image-free systems are preferable, or whether navigation of only one component leads to the same accuracy in leg alignment as navigation of both components. We evaluated the results of total knee arthroplasties performed with femoral navigation. We studied 90 knees in 88 patients who had conventional total knee arthroplasties, image-based total knee arthroplasties, or total knee arthroplasties with image-free navigation. We compared the patients' perioperative times, component alignment accuracy, and short-term outcomes. The total surgical time was longer in the image-based total knee arthroplasty group (109 +/- 7 minutes) compared with the image-free (101 +/- 17 minutes) and conventional total knee arthroplasty groups (87 +/- 20 minutes). The mechanical axis of the leg was within 3 degrees of neutral alignment, although the conventional total knee arthroplasty group showed more variance (10.6 degrees) than the navigated groups (5.8 degrees and 6.4 degrees, respectively). We found a positive correlation between femoral component malalignment and the total mechanical axis in the conventional group. Our results suggest image-based navigation is not necessary, and image-free femoral navigation may be sufficient for accurate component alignment.
Abstract:
Surgical navigation systems visualize the positions and orientations of surgical instruments and implants as graphical overlays onto a medical image of the operated anatomy on a computer monitor. Orthopaedic surgical navigation systems can be categorized according to the image modalities that are used for the visualization of the surgical action. In so-called CT-based or 'surgeon-defined anatomy' based systems, where a 3D volume or surface representation of the operated anatomy can be constructed from preoperatively acquired tomographic data or from intraoperatively digitized anatomical landmarks, photorealistic rendering of the surgical action has been shown to greatly improve the usability of these navigation systems. However, this may not hold true when the virtual representation of surgical instruments and implants is superimposed onto 2D projection images in a fluoroscopy-based navigation system, due to the so-called image occlusion problem. Image occlusion occurs when the field of view of the fluoroscopic image is occupied by the virtual representation of surgical implants or instruments. In these situations, the surgeon may miss part of the image details, even if transparency and/or wire-frame rendering is used. In this paper, we propose to use non-photorealistic rendering to overcome this difficulty. Laboratory testing results on foamed plastic bones during various computer-assisted fluoroscopy-based surgical procedures, including total hip arthroplasty and long bone fracture reduction and osteosynthesis, are shown.
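One simple non-photorealistic strategy is to overlay only the silhouette of the rendered instrument, so the fluoroscopic pixels underneath remain visible instead of being covered by a filled, photorealistic model. The OpenCV sketch below illustrates that idea on a binary instrument mask; it is a generic edge-overlay example under assumed inputs, not the rendering pipeline used in the paper.

```python
import cv2
import numpy as np

def silhouette_overlay(fluoro_gray, instrument_mask, color=(0, 255, 0)):
    """Draw only the outline of the virtual instrument on a fluoroscopic image.

    fluoro_gray     : 8-bit grayscale fluoroscopic image (H x W)
    instrument_mask : 8-bit binary mask of the rendered instrument (H x W)
    """
    # Keep just the contour of the instrument so the image detail underneath
    # is not occluded by a filled rendering.
    edges = cv2.Canny(instrument_mask, 50, 150)
    overlay = cv2.cvtColor(fluoro_gray, cv2.COLOR_GRAY2BGR)
    overlay[edges > 0] = color
    return overlay

# Toy example: a circular "implant" mask over a synthetic fluoroscopic image.
image = np.full((256, 256), 128, dtype=np.uint8)
mask = np.zeros_like(image)
cv2.circle(mask, (128, 128), 40, 255, thickness=-1)
cv2.imwrite("overlay.png", silhouette_overlay(image, mask))
```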
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best performing components of the BoF architecture, as well as to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset with nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy on the order of 78%, proving the feasibility of the proposed approach on a very challenging image dataset.
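The pipeline maps each image to a histogram over a learned visual vocabulary and then classifies that histogram with a linear SVM. A compact sketch of that flow, using OpenCV SIFT on a dense grid and scikit-learn, is shown below; the vocabulary size and sampling step are scaled down for illustration, flat k-means stands in for the hierarchical clustering, and the helper names are hypothetical rather than the paper's code.

```python
import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.svm import LinearSVC

SIFT = cv2.SIFT_create()

def dense_sift(gray, step=16):
    """SIFT descriptors on a dense grid of an 8-bit grayscale image
    (a stand-in for the dense HSV-channel sampling used in the paper)."""
    kps = [cv2.KeyPoint(float(x), float(y), float(step))
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    _, desc = SIFT.compute(gray, kps)
    return desc

def bof_histogram(desc, vocabulary):
    """Quantize descriptors against the vocabulary and build a normalized
    visual-word histogram."""
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def train(images, labels, n_words=500):
    """Learn a visual vocabulary and a linear SVM from labeled food images."""
    all_desc = [dense_sift(img) for img in images]
    vocabulary = MiniBatchKMeans(n_clusters=n_words).fit(np.vstack(all_desc))
    X = np.array([bof_histogram(d, vocabulary) for d in all_desc])
    return vocabulary, LinearSVC().fit(X, labels)
```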
Abstract:
PURPOSE To evaluate a low-cost, inertial sensor-based surgical navigation solution for periacetabular osteotomy (PAO) surgery without the line-of-sight impediment. METHODS Two commercial inertial measurement units (IMU; Xsens Technologies, The Netherlands) are attached to a patient's pelvis and to the acetabular fragment, respectively. Registration of the patient with a pre-operatively acquired computer model is done by recording the orientation of the patient's anterior pelvic plane (APP) using one IMU. A custom-designed device is used to record the orientation of the APP in the reference coordinate system of the IMU. After registration, the two sensors are mounted to the patient's pelvis and acetabular fragment, respectively. Once the initial position is recorded, the orientation is measured and displayed on a computer screen. A patient-specific computer model generated from a pre-operatively acquired computed tomography scan is used to visualize the updated orientation of the acetabular fragment. RESULTS Experiments with plastic bones (eight hip joints) performed in an operating room, comparing a previously developed optical navigation system with our inertial-based navigation system, showed no statistically significant difference in the measurement of acetabular component reorientation. In all eight hip joints the mean absolute difference was below four degrees. CONCLUSION Using two commercially available inertial measurement units, we show that it is possible to accurately measure the orientation (inclination and anteversion) of the acetabular fragment during PAO surgery and therefore to successfully eliminate the line-of-sight impediment of optical navigation systems.
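In an APP-aligned pelvic coordinate system, the clinically reported angles follow from the direction of the acetabular fragment's cup axis: anteversion and inclination can be read from its components. The sketch below uses a radiographic-style convention; the axis directions and angle definitions are assumptions for illustration, not necessarily the authors' exact conventions.

```python
import numpy as np

def reoriented_cup_axis(R_pelvis_fragment, cup_axis_in_fragment):
    """Express the fragment's cup axis in the pelvic frame using the relative
    rotation (3x3 matrix) measured by the two IMUs."""
    return R_pelvis_fragment @ np.asarray(cup_axis_in_fragment, dtype=float)

def anteversion_inclination(cup_axis):
    """Anteversion and inclination (degrees) of a unit cup axis.

    Assumed pelvic frame: x = medial-lateral, y = anterior-posterior,
    z = superior-inferior (radiographic-style definitions, illustrative only).
    """
    x, y, z = np.asarray(cup_axis, dtype=float)
    anteversion = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))  # tilt out of the coronal plane
    inclination = np.degrees(np.arctan2(abs(x), abs(z)))        # angle from the body axis in the coronal plane
    return anteversion, inclination
```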
Abstract:
What's known on the subject? And what does the study add? We have previously shown that percutaneous radiofrequency ablation guided by image-fusion technology allows for precise needle placement with real-time ultrasound superimposed on pre-loaded imaging, removing the need for real-time CT or MR guidance. Emerging technology also allows real-time tracking of a treatment needle within an organ in a virtually created 3D format. To our knowledge, this is the first study utilising a sophisticated ultrasound-based navigation system that uses both image-fusion and real-time probe-tracking technologies for in vivo renal ablative intervention.
Abstract:
Computer-aided surgery (CAS) allows for real-time intraoperative feedback, resulting in increased accuracy while reducing intraoperative radiation. CAS is especially useful for the treatment of certain pelvic ring fractures, which necessitate the precise placement of screws. Fluoroscopy-based CAS modules have been developed for many orthopedic applications. The integration of the isocentric fluoroscope even enables navigation using intraoperatively acquired three-dimensional (3D) data, although the scan volume and imaging quality are limited. Complicated and comprehensive pathologies in regions like the pelvis can necessitate a CT-based navigation system because of its larger field of view. To be accurate, the patient's anatomy must be registered and matched with the virtual object (CT data). The actual precision within the region of interest depends on the area of the bone where surface matching is performed. Conventional surface matching with a solid pointer requires extensive soft tissue dissection. This contradicts the primary purpose of CAS as a minimally invasive alternative to conventional surgical techniques. We therefore integrated an A-mode ultrasound pointer into the process of surface matching for pelvic surgery and compared it to the conventional method. Accuracy measurements were made in two pelvic models: a foam model submerged in water and one with attached porcine muscle tissue. Three different tissue depths were selected based on CT scans of 30 human pelves. The ultrasound pointer allowed for registration of virtually any point on the pelvis. This method of surface matching could be successfully integrated into CAS of the pelvis.
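Surface matching ultimately comes down to estimating the rigid transform that best aligns points digitized on the patient with the corresponding surface in the CT model. A minimal sketch of the least-squares rigid fit for paired points, the core step inside such matching, is shown below; it is a generic Kabsch/Umeyama-style solution under assumed inputs, not the navigation system's registration algorithm.

```python
import numpy as np

def rigid_fit(points_patient, points_model):
    """Least-squares rigid transform mapping patient-space points onto their
    corresponding CT-model points (Kabsch/Umeyama without scaling).

    Both inputs are (N, 3) arrays of corresponding points.
    Returns a 3x3 rotation R and translation t with: model ~= R @ patient + t.
    """
    P = np.asarray(points_patient, dtype=float)
    M = np.asarray(points_model, dtype=float)
    cp, cm = P.mean(axis=0), M.mean(axis=0)
    H = (P - cp).T @ (M - cm)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    Rm = Vt.T @ D @ U.T                        # guard against reflections
    t = cm - Rm @ cp
    return Rm, t

def registration_rmse(points_patient, points_model, Rm, t):
    """Root-mean-square residual of the fit (a proxy for matching accuracy)."""
    residuals = (Rm @ np.asarray(points_patient, float).T).T + t \
        - np.asarray(points_model, float)
    return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
```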
Abstract:
Purpose: Recently, light and mobile reading devices with high display resolutions have become popular, and they may open new possibilities for reading applications in education, business, and the private sector. The ability to adapt font size may also open new reading opportunities for people with impaired or low vision. Based on their display technology, two major groups of reading devices can be distinguished. One type, predominantly found in dedicated e-book readers, uses electronic paper, also known as e-Ink. Other devices, mostly multifunction tablet PCs, are equipped with backlit LCD displays. While it has long been accepted that reading on electronic displays is slow and associated with visual fatigue, this new generation is explicitly promoted for reading. Since research has shown that, compared to reading on electronic displays, reading on paper is faster and requires fewer fixations per line, one would expect differential effects when comparing reading behaviour on e-Ink and LCD. In the present study, we therefore compared experimentally how well these two display types are suited for reading over an extended period of time. Methods: Participants read for several hours on either e-Ink or LCD, and different measures of reading behaviour and visual strain were recorded regularly. These dependent measures included subjective (visual) fatigue, a letter search task, reading speed, oculomotor behaviour, and the pupillary light reflex. Results: The results suggested that reading on the two display types is very similar in terms of both subjective and objective measures. Conclusions: It is not the technology itself, but rather the image quality, that seems crucial for reading. Compared to the visual display units used in the previous few decades, these more recent electronic displays allow for good and comfortable reading, even for extended periods of time.
Abstract:
BACKGROUND Accurate needle placement is crucial for the success of percutaneous radiological needle interventions. We compared three guiding methods using an optical navigation system: freehand, using a stereotactic aiming device with active depth control, and using a stereotactic aiming device with passive depth control. METHODS For each method, 25 punctures were performed on a non-rigid phantom. Five 1 mm metal screws were used as targets. Time requirements were recorded, and target positioning errors (TPE) were measured on control scans as the distance between the needle tip and the target. RESULTS Time requirements were reduced using the aiming device and passive depth control. The Euclidean TPE was similar for each method (4.6 ± 1.2 to 4.9 ± 1.7 mm). However, the lateral component was significantly lower when an aiming device was used (2.3 ± 1.3 to 2.8 ± 1.6 mm with an aiming device vs 4.2 ± 2.0 mm without). DISCUSSION Using an aiming device may increase the lateral accuracy of navigated needle insertion.
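The Euclidean target positioning error can be split into a component along the planned needle trajectory (depth) and a lateral component perpendicular to it, which is the quantity the aiming device improved. A minimal sketch of that decomposition follows; variable names and example values are illustrative assumptions.

```python
import numpy as np

def decompose_tpe(needle_tip, target, trajectory_direction):
    """Split the target positioning error into depth and lateral components.

    needle_tip, target   : 3D positions from the control scan (mm)
    trajectory_direction : vector along the planned insertion axis
    Returns (euclidean, depth, lateral) errors in mm.
    """
    d = np.asarray(trajectory_direction, dtype=float)
    d = d / np.linalg.norm(d)
    error = np.asarray(needle_tip, dtype=float) - np.asarray(target, dtype=float)
    depth = float(np.dot(error, d))                    # signed error along the axis
    lateral = float(np.linalg.norm(error - depth * d)) # off-axis error
    return float(np.linalg.norm(error)), abs(depth), lateral

# Example: tip 2 mm short of the target and slightly off-axis.
print(decompose_tpe([1.0, 2.0, 48.0], [0.0, 0.0, 50.0], [0.0, 0.0, 1.0]))
```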
Abstract:
Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.
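The error decomposition described above separates, for instance, a user/system error (planned target to actual needle tip) from an error attributable to tissue deformation (planned target to where the lesion actually sits after puncture). A minimal sketch of that bookkeeping is given below; the error taxonomy, names, and inputs are simplified assumptions rather than the paper's exact definitions.

```python
import numpy as np

def decompose_overall_error(planned_target, needle_tip, lesion_after_puncture):
    """Crude split of the overall targeting error into two contributions.

    planned_target        : target defined on the pre-interventional PET/CT (mm)
    needle_tip            : needle tip position in the control scan (mm)
    lesion_after_puncture : lesion centre in the control scan (mm)
    """
    planned = np.asarray(planned_target, dtype=float)
    tip = np.asarray(needle_tip, dtype=float)
    lesion = np.asarray(lesion_after_puncture, dtype=float)
    user_error = float(np.linalg.norm(tip - planned))            # navigation + handling
    deformation_error = float(np.linalg.norm(lesion - planned))  # tissue shift
    overall_error = float(np.linalg.norm(tip - lesion))          # clinically relevant miss
    return {"user": user_error, "deformation": deformation_error,
            "overall": overall_error}
```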
Abstract:
Component malpositioning and postoperative leg length discrepancy are the most common technical problems associated with total hip arthroplasty (THA). Surgical navigation offers the potential to reduce the incidence of these problems. We reviewed 317 patients (344 hips) who underwent THA using computed tomography-based surgical navigation, including 112 THAs using a simplified method of measuring leg length. Guided by the navigation system, cups were placed in 40.8 +/- 2 degrees of operative abduction (range, 35-50 degrees) and 30.8 +/- 3.2 degrees of operative anteversion (range, 19-43 degrees). We subsequently measured radiographic abduction on plain anteroposterior pelvic radiographs and calculated abduction and anteversion. Radiographically, 97.1% of the cups were in the safe zone for abduction and 92.4% for anteversion. The mean incision length was less than 8 cm for 327 of the 344 hips. Leg length change measured intraoperatively was 6.6 +/- 4.1 mm (range, -2 to 22 mm), similar to measurements from the pre- and postoperative magnification-corrected radiographs. Computer assistance during THA increased the consistency of component positioning and allowed reliable measurement of leg length change during surgery.
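Intraoperative leg length change is commonly obtained by comparing, before and after implantation, the position of a tracked femoral landmark relative to the pelvic reference along the body's longitudinal axis. The sketch below shows that simplified computation; the landmark, axis convention, and example values are assumptions for illustration, not the specific simplified method described in the paper.

```python
import numpy as np

def leg_length_change(femur_point_pre, femur_point_post, longitudinal_axis):
    """Leg length change (mm) as the displacement of a tracked femoral
    landmark, expressed in the pelvic reference frame, projected onto the
    longitudinal (superior-inferior) axis.

    femur_point_pre/post : landmark position before/after implantation (mm)
    longitudinal_axis    : vector along the superior direction of the pelvic frame
    """
    axis = np.asarray(longitudinal_axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    displacement = np.asarray(femur_point_post, float) - np.asarray(femur_point_pre, float)
    return float(np.dot(displacement, axis))   # positive = lengthening

# Example: landmark moves 7 mm superiorly and 2 mm laterally -> +7 mm change.
print(leg_length_change([0, 0, 0], [2, 0, 7], [0, 0, 1]))
```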