941 results for diagnosis, heart, real-time, 3D, fetuses, echocardiography
Abstract:
This paper presents methods based on Information Filters for solving matching problems, with emphasis on real-time, or effectively real-time, applications. Both applications discussed in this work deal with ultrasound-based rigid registration in computer-assisted orthopedic surgery. In the first application, the usual workflow of rigid registration is reformulated so that registration algorithms iterate while the surgeon is acquiring ultrasound images of the anatomy to be operated on. Using this effectively real-time approach to registration, the surgeon receives feedback that helps gauge the quality of the final registration outcome. The second application considered in this paper circumvents the need to attach physical markers to bones for anatomical referencing. Experiments using anatomical objects immersed in water are performed to evaluate and compare the different methods presented herein, using both 2D and real-time 3D ultrasound.
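The abstract does not spell out the filter equations, so the following is only a minimal sketch of the incremental-fusion idea behind an information filter: each new measurement adds information (inverse covariance) to the running estimate, which is what lets a registration refine itself while images are still being acquired. The reduction to a pure 3D translation, the noise levels, and all names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def information_filter_update(Y, y, H, R, z):
    """One information-filter (inverse-covariance) update step.

    Y : information matrix (inverse state covariance)
    y : information vector, y = Y @ x
    H : measurement model (z = H @ x + noise)
    R : measurement noise covariance
    z : new measurement
    """
    R_inv = np.linalg.inv(R)
    Y_new = Y + H.T @ R_inv @ H              # information grows with every measurement
    y_new = y + H.T @ R_inv @ z
    x_est = np.linalg.solve(Y_new, y_new)    # current state estimate
    return Y_new, y_new, x_est

# toy example: fuse noisy 3D point observations of a 3-parameter translation
Y = np.eye(3) * 1e-3                         # weak prior
y = np.zeros(3)
H = np.eye(3)
R = np.eye(3) * 0.5**2                       # 0.5 mm measurement noise (assumed)
rng = np.random.default_rng(0)
true_t = np.array([2.0, -1.0, 0.5])
for _ in range(50):                          # measurements arrive as images are acquired
    z = true_t + rng.normal(scale=0.5, size=3)
    Y, y, t_est = information_filter_update(Y, y, H, R, z)
print(t_est)                                 # converges toward true_t
```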
Abstract:
Left ventricular (LV) volumes have important prognostic implications in patients with chronic ischemic heart disease. We sought to examine the accuracy and reproducibility of real-time 3D echo (RT-3DE) compared to Tl-201 single photon emission computed tomography (SPECT) and cardiac magnetic resonance imaging (MRI). Thirty patients (n = 30; age 62±9 years, 23 men) with chronic ischemic heart disease underwent LV volume assessment with RT-3DE, SPECT, and MRI. A novel semi-automated border detection algorithm was used for RT-3DE. End diastolic volumes (EDV) and end systolic volumes (ESV) measured by RT-3DE and SPECT were compared to MRI as the standard of reference. RT-3DE and SPECT volumes showed excellent correlation with MRI (Table). Both RT-3DE and SPECT underestimated LV volumes compared to MRI (ESV: SPECT 74±58 ml versus RT-3DE 95±48 ml versus MRI 96±54 ml; EDV: SPECT 121±61 ml versus RT-3DE 169±61 ml versus MRI 179±56 ml). The degree of ESV underestimation with RT-3DE was not significant.
Abstract:
A traditional photonic-force microscope (PFM) produces huge data sets that require tedious numerical analysis. In this paper, we propose instead an analog signal processor that attains real-time capabilities while retaining the richness of traditional PFM data. Our system is devoted to intracellular measurements and is fully interactive through the use of a haptic joystick. Using our specialized analog hardware along with a dedicated algorithm, we can extract the full 3D stiffness matrix of the optical trap in real time, including the off-diagonal cross-terms. Our system is also capable of simultaneously recording data for subsequent offline analysis. This allows us to check that a good correlation exists between the classical analysis of stiffness and our real-time measurements. We monitor the PFM beads using an optical microscope. The force-feedback mechanism of the haptic joystick helps us interactively guide the bead inside living cells and collect information from its (possibly anisotropic) environment. The instantaneous stiffness measurements are also displayed in real time on a graphical user interface. The whole system has been built and is operational; here we present early results that confirm the consistency of the real-time measurements with offline computations.
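For context on the "classical analysis of stiffness" that the real-time hardware is checked against, a common offline route is the equipartition method: estimate the 3×3 covariance of the bead's position fluctuations and invert it, which naturally yields the off-diagonal cross-terms as well. The sketch below assumes that method and a temperature of 300 K; it is not the paper's analog processing chain.

```python
import numpy as np

kB = 1.380649e-23   # Boltzmann constant, J/K
T  = 300.0          # absolute temperature, K (assumed)

def stiffness_matrix(positions):
    """Estimate the full 3x3 trap stiffness matrix from bead position samples.

    positions : (N, 3) array of bead displacements from the trap centre, in metres.
    Uses the equipartition relation K = kB*T * C^{-1}, where C is the position
    covariance; the off-diagonal terms capture cross-coupling between axes.
    """
    C = np.cov(positions, rowvar=False)      # 3x3 covariance, includes cross-terms
    return kB * T * np.linalg.inv(C)

# toy check with synthetic Gaussian fluctuations of ~20-40 nm rms
rng = np.random.default_rng(1)
cov = np.diag([20e-9, 20e-9, 40e-9]) ** 2
samples = rng.multivariate_normal(np.zeros(3), cov, size=100_000)
print(stiffness_matrix(samples))             # ~ diag(1.0e-5, 1.0e-5, 2.6e-6) N/m
```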
Abstract:
RATIONALE AND OBJECTIVES: The purpose of this study was to investigate the impact of real-time adaptive motion correction on image quality in navigator-gated, free-breathing, double-oblique, three-dimensional (3D) submillimeter right coronary magnetic resonance angiography (MRA). MATERIALS AND METHODS: Free-breathing 3D right coronary MRA with real-time navigator technology was performed in 10 healthy adult subjects with an in-plane spatial resolution of 700 x 700 µm. Identical double-oblique coronary MR angiograms were acquired with navigator gating alone and with combined navigator gating and real-time adaptive motion correction. Quantitative objective parameters of contrast-to-noise ratio (CNR) and vessel sharpness, as well as subjective image quality scores, were compared. RESULTS: Superior vessel sharpness, increased CNR, and superior image quality scores were found with combined navigator gating and real-time adaptive motion correction (vs. navigator gating alone; P < 0.01 for all comparisons). CONCLUSION: Real-time adaptive motion correction objectively and subjectively improves image quality in 3D navigator-gated, free-breathing, double-oblique, submillimeter right coronary MRA.
Abstract:
This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
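The core of the detection step is a single memory lookup per pixel in a 3D RGB look-up table. The sketch below illustrates that mechanism under assumed parameters (a 32×32×32 table and a placeholder "red-dominant" rule standing in for the paper's linear colour models and fruit histograms); it is not the authors' STM32 implementation.

```python
import numpy as np

BITS = 5                                     # 5 bits per channel -> 32x32x32 LUT (assumed size)

def build_lut():
    """Build a 3D RGB look-up table marking 'red peach' colours.

    The membership rule here (red dominant over green and blue) is only a
    placeholder for the linear colour models / histograms described in the abstract.
    """
    levels = np.arange(2 ** BITS) << (8 - BITS)          # representative 8-bit values
    r, g, b = np.meshgrid(levels, levels, levels, indexing="ij")
    lut = (r > 100) & (r > g + 40) & (r > b + 40)        # placeholder linear rule
    return lut.astype(np.uint8)

def classify_image(rgb_image, lut):
    """Label each pixel of an HxWx3 uint8 image with one LUT lookup per pixel."""
    idx = rgb_image >> (8 - BITS)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]    # HxW binary fruit mask

lut = build_lut()
frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
mask = classify_image(frame, lut)
print(mask.sum(), "candidate fruit pixels")
```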
Abstract:
Tensor3D is a geometric modeling program with the capacity to simulate and visualize, in real time, deformation that is specified through a tensor matrix and applied to triangulated models representing geological bodies. 3D visualization allows the study of deformational processes that are traditionally conducted in 2D, such as simple and pure shear. Besides geometric objects that are immediately available in the program window, the program can read other models from disk, and can therefore import objects created with different open-source or proprietary programs. A strain ellipsoid and a bounding box are simultaneously shown and instantly deformed with the main object. The principal axes of strain are visualized as well, to provide graphical information about the orientation of the tensor's normal components. The deformed models can also be saved, retrieved later, and deformed again, in order to study different steps of progressive strain or to make the data available to other programs. The shape of stress ellipsoids and the corresponding Mohr circles defined by any stress tensor can also be represented. The application was written using the Visualization ToolKit, a powerful scientific visualization library in the public domain. This development choice, allied to the use of the Tcl/Tk programming language, which is independent of the host computational platform, makes the program a useful tool for the study of geometric deformations directly in three dimensions in teaching as well as research activities.
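As a minimal illustration of the kind of operation such a program performs, the sketch below applies a deformation-gradient tensor to a vertex array and derives the strain-ellipsoid stretches from the left Cauchy-Green tensor. The simple-shear tensor and the cube vertices are illustrative; this is not Tensor3D's VTK/Tcl-Tk code.

```python
import numpy as np

def deform(vertices, F):
    """Apply a deformation-gradient tensor F (3x3) to an (N, 3) vertex array."""
    return vertices @ F.T

def strain_ellipsoid_axes(F):
    """Principal strain axes from the left Cauchy-Green tensor B = F F^T.

    The square roots of the eigenvalues of B are the semi-axis stretches of
    the strain ellipsoid; the eigenvectors give their orientations.
    """
    B = F @ F.T
    eigvals, eigvecs = np.linalg.eigh(B)
    return np.sqrt(eigvals), eigvecs

# simple shear with gamma = 1 applied to a unit cube's corners (illustrative values)
gamma = 1.0
F = np.array([[1.0, gamma, 0.0],
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], dtype=float)
print(deform(cube, F))
print(strain_ellipsoid_axes(F)[0])   # stretches of the strain ellipsoid
```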
Abstract:
Unmanned aerial vehicles (UAVs) frequently operate in partially or entirely unknown environments. As the vehicle traverses the environment and detects new obstacles, rapid path replanning is essential to avoid collisions. This thesis presents a new algorithm called Hierarchical D* Lite (HD*), which combines the incremental algorithm D* Lite with a novel hierarchical path planning approach to replan paths sufficiently fast for real-time operation. Unlike current hierarchical planning algorithms, HD* does not require map corrections before planning a new path. Directional cost scale factors, path smoothing, and Catmull-Rom splines are used to ensure the resulting paths are feasible. HD* sacrifices optimality for real-time performance. Its computation time and path quality are dependent on the map size, obstacle density, sensor range, and any restrictions on planning time. For the most complex scenarios tested, HD* found paths within 10% of optimal in under 35 milliseconds.
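The abstract names Catmull-Rom splines as one of the ingredients used to turn grid-level paths into feasible trajectories. The sketch below shows only that smoothing step on illustrative waypoints; the HD* planner itself (the hierarchical D* Lite search) is not reproduced here.

```python
import numpy as np

def catmull_rom(p0, p1, p2, p3, n=10):
    """Sample a Catmull-Rom segment running from p1 to p2 (p0 and p3 are neighbours)."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def smooth_path(waypoints, n=10):
    """Smooth a planner's waypoint list by running Catmull-Rom splines through it."""
    pts = [np.asarray(p, dtype=float) for p in waypoints]
    pts = [pts[0]] + pts + [pts[-1]]          # duplicate the ends so every segment has 4 control points
    segs = [catmull_rom(pts[i], pts[i + 1], pts[i + 2], pts[i + 3], n)
            for i in range(len(pts) - 3)]
    return np.vstack(segs)

raw = [(0, 0), (2, 1), (3, 3), (6, 3), (8, 5)]   # e.g. cell centres returned by a grid planner
print(smooth_path(raw, n=5))                     # a denser, smoother polyline through the waypoints
```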
Abstract:
Background: Precise needle puncture of renal calyces is a challenging and essential step for successful percutaneous nephrolithotomy. This work tests and evaluates, through a clinical trial, a real-time navigation system to plan and guide percutaneous kidney puncture. Methods: A novel system, entitled i3DPuncture, was developed to aid surgeons in establishing the desired puncture site and the best virtual puncture trajectory, by gathering and processing data from a tracked needle with optical passive markers. In order to navigate and superimpose the needle onto a preoperative volume, the patient, the 3D image data, and the tracker system were registered intraoperatively using seven points that were strategically chosen based on rigid bone structures and the nearby kidney area. In addition, relevant anatomical structures for surgical navigation were automatically segmented using a multi-organ segmentation algorithm that clusters volumes based on statistical properties and a minimum description length criterion. For each cluster, a rendering transfer function enhanced the visualization of different organs and surrounding tissues. Results: One puncture attempt was sufficient to achieve a successful kidney puncture. The puncture took 265 seconds, and 32 seconds were necessary to plan the puncture trajectory. The virtual puncture path was followed correctly until the needle tip reached the desired kidney calyx. Conclusions: This new solution provided spatial information regarding the needle inside the body and the possibility to visualize surrounding organs. It may offer a promising and innovative solution for percutaneous punctures.
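Registering the patient, image data, and tracker from a handful of corresponding landmarks is typically done with a least-squares rigid fit. The sketch below shows the standard SVD-based (Kabsch) solution on seven synthetic points; the abstract does not state which algorithm i3DPuncture uses, so treat this purely as an illustration of the registration step.

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid transform (R, t) mapping source points onto target points.

    source, target : (N, 3) arrays of corresponding points (e.g. the seven
    anatomical landmarks picked in image space and in tracker space).
    Uses the SVD-based (Kabsch) solution.
    """
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflections
    R = (U @ D @ Vt).T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t

# toy check: recover a known rotation + translation from 7 noiseless points
rng = np.random.default_rng(2)
pts = rng.uniform(-50, 50, size=(7, 3))              # mm, illustrative
angle = np.deg2rad(30)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
t_true = np.array([10.0, -5.0, 20.0])
R_est, t_est = rigid_register(pts, pts @ R_true.T + t_true)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))
```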
Abstract:
The robotics community is concerned with the ability to infer and compare results from researchers in areas such as vision perception and multi-robot cooperative behavior. To accomplish that task, this paper proposes a real-time indoor visual ground-truth system capable of providing accuracy at least one order of magnitude better than the precision of the algorithm to be evaluated. A multi-camera architecture is proposed under the ROS (Robot Operating System) framework to estimate the 3D position of objects, and the implementation and results are contextualized to the RoboCup Middle Size League scenario.
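A multi-camera ground-truth system of this kind ultimately reduces to triangulating an object seen by several calibrated cameras. The sketch below shows a generic linear (DLT) triangulation from two illustrative cameras; the actual ROS node structure and calibration of the proposed system are not described in the abstract and are not reproduced here.

```python
import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3D point from several calibrated cameras.

    proj_mats : list of 3x4 camera projection matrices P = K [R | t]
    pixels    : list of corresponding (u, v) image observations
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                      # homogeneous -> Euclidean

# two illustrative cameras observing the point (1, 2, 10)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])   # 1 m baseline
X_true = np.array([1.0, 2.0, 10.0, 1.0])
px = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], px))             # ~ [1, 2, 10]
```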
Abstract:
One of the major challenges in the development of an immersive system is handling the delay between the tracking of the user’s head position and the updated projection of a 3D image or auralised sound, also called end-to-end delay. Excessive end-to-end delay can result in a general decrement of the “feeling of presence”, the occurrence of motion sickness, and poor performance in perception-action tasks. These latencies must be known in order to provide insights on the technological (hardware/software optimization) or psychophysical (recalibration sessions) strategies to deal with them. Our goal was to develop a new measurement method for end-to-end delay that is both precise and easily replicated. We used a Head and Torso Simulator (HATS) as an auditory signal sensor, a fast-response photo-sensor to detect a visual stimulus response from a motion capture system, and a voltage input trigger as the real-time event. The HATS was mounted on a turntable, which allowed us to precisely change the 3D sound relative to the head position. When the virtual sound source was at 90º azimuth, the corresponding HRTF would set all intensity values to zero; at the same time, a trigger would register the real-time event of turning the HATS to 90º azimuth. Furthermore, with the HATS turned 90º to the left, the motion capture marker visualization would fall exactly on the photo-sensor receptor. This method allowed us to precisely measure the delay from tracking to displaying. Moreover, our results show that the method of tracking, its tracking frequency, and the rendering of the sound reflections are the main predictors of end-to-end delay.
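Once the trigger and the rendered response are recorded on a common time base, the end-to-end delay is simply the time between the two onsets. The sketch below assumes two synchronously sampled channels and simple threshold-crossing onset detection; the sampling rate, thresholds, and the 38 ms lag are illustrative values, not measurements from the study.

```python
import numpy as np

def end_to_end_delay(trigger, response, fs):
    """Estimate end-to-end delay as the time between the trigger edge and the
    first response on the photo-sensor / HRTF-gated channel.

    trigger, response : 1-D signals sampled at fs Hz on the same time base.
    """
    t_on = np.argmax(trigger > 0.5 * trigger.max())      # trigger onset (real-time event)
    r_on = np.argmax(response > 0.5 * response.max())    # rendered-response onset
    return (r_on - t_on) / fs

# synthetic 10 kHz recording with a 38 ms lag between the two edges (illustrative)
fs = 10_000
trig = np.zeros(fs); trig[1000:] = 1.0       # trigger at 100 ms
resp = np.zeros(fs); resp[1380:] = 1.0       # response at 138 ms
print(end_to_end_delay(trig, resp, fs) * 1000, "ms")   # ~38 ms
```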
Abstract:
The acquisition duration of most three-dimensional (3D) coronary magnetic resonance angiography (MRA) techniques is considerably prolonged, thereby precluding breathholding as a mechanism to suppress respiratory motion artifacts. Splitting the acquired 3D volume into multiple subvolumes or slabs serves to shorten individual breathhold duration. Still, problems associated with misregistration due to inconsistent depths of expiration and diaphragmatic drift during sustained respiration remain to be resolved. We propose the combination of an ultrafast 3D coronary MRA imaging sequence with prospective real-time navigator technology, which allows correction of the measured volume position. 3D volume splitting using prospective real-time navigator technology, was successfully applied for 3D coronary MRA in five healthy individuals. An ultrafast 3D interleaved hybrid gradient-echoplanar imaging sequence, including T2Prep for contrast enhancement, was used with the navigator localized at the basal anterior wall of the left ventricle. A 9-cm-thick volume, with in-plane spatial resolution of 1.1 x 2.2 mm, was acquired during five breathholds of 15-sec duration each. Consistently, no evidence of misregistration was observed in the images. Extensive contiguous segments of the left anterior descending coronary artery (48 +/- 18 mm) and the right coronary artery (75 +/- 5 mm) could be visualized. This technique has the potential for screening for anomalous coronary arteries, making it well suited as part of a larger clinical MR examination. In addition, this technique may also be applied as a scout scan, which allows an accurate definition of imaging planes for subsequent high-resolution coronary MRA.
Abstract:
PURPOSE: To implement real-time myocardial strain-encoding (SENC) imaging in combination with tracking the tissue displacement in the through-plane direction. MATERIALS AND METHODS: SENC imaging was combined with the slice-following technique by implementing three-dimensional (3D) selective excitation. Certain adjustments were implemented to reduce scan time to one heartbeat. A total of 10 volunteers and five pigs were scanned on a 3T MRI scanner. Spatial modulation of magnetization (SPAMM)-tagged images were acquired on planes orthogonal to the SENC planes for comparison. Myocardial infarction (MI) was induced in two pigs and the resulting SENC images were compared to standard delayed-enhancement (DE) images. RESULTS: The strain values computed from SENC imaging with slice-following showed a significant difference from those acquired without slice-following, especially during systole (P < 0.01). The strain curves computed from the SENC images with and without slice-following were similar to those computed from the orthogonal SPAMM images with and without, respectively, tracking the tag line displacement in the strain direction. The resulting SENC images showed good agreement with the DE images in identifying MI in infarcted pigs. CONCLUSION: Correction of through-plane motion in real-time cardiac functional imaging is feasible using slice-following. The strain measurements are more accurate than conventional SENC measurements in humans and animals, as validated with conventional MRI tagging.
Abstract:
In this work, image-based estimation methods, also known as direct methods, are studied; these avoid feature extraction and matching completely. Cost functions use raw pixels as measurements, and the goal is to produce precise 3D pose and structure estimates. The cost functions presented minimize the sensor error, because measurements are not transformed or modified. In photometric camera pose estimation, 3D rotation and translation parameters are estimated by minimizing a sequence of image-based cost functions, which are non-linear due to perspective projection and lens distortion. In image-based structure refinement, on the other hand, 3D structure is refined using a number of additional views and an image-based cost metric. Image-based estimation methods are particularly useful in conditions where the Lambertian assumption holds and the 3D points have constant color despite the viewing angle. The goal is to improve image-based estimation methods and to produce computationally efficient methods that can be accommodated into real-time applications. The developed image-based 3D pose and structure estimation methods are finally demonstrated in practice in indoor 3D reconstruction and in a live augmented reality application.
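To make the "raw pixels as measurements" idea concrete, the sketch below evaluates a photometric cost for a candidate pose: known 3D points are projected into the current image, and their bilinearly sampled intensities are compared with intensities recorded in a reference view. The intrinsics, point cloud, and intensities are synthetic placeholders, and the non-linear minimization over (R, t) that the work actually performs is left to an external solver.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample a grayscale image at sub-pixel locations (x, y)."""
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    dx, dy = x - x0, y - y0
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x0 + 1] +
            (1 - dx) * dy * img[y0 + 1, x0] + dx * dy * img[y0 + 1, x0 + 1])

def photometric_cost(cur, pts3d, ref_intensities, R, t, K):
    """Sum of squared raw-pixel residuals for a candidate pose (R, t).

    pts3d           : (N, 3) scene points assumed Lambertian (constant colour)
    ref_intensities : intensities of those points sampled in a reference view
    cur             : current grayscale image; K : 3x3 camera intrinsics
    """
    cam = pts3d @ R.T + t                    # points expressed in the current camera frame
    proj = cam @ K.T
    u, v = proj[:, 0] / proj[:, 2], proj[:, 1] / proj[:, 2]
    residual = bilinear(cur, u, v) - ref_intensities
    return np.sum(residual ** 2)             # a non-linear solver minimises this over (R, t)

# synthetic example: random image, points, and reference intensities
rng = np.random.default_rng(3)
cur = rng.random((240, 320))
K = np.array([[300.0, 0, 160], [0, 300.0, 120], [0, 0, 1]])
pts = np.column_stack([rng.uniform(-0.2, 0.2, 50),
                       rng.uniform(-0.2, 0.2, 50),
                       rng.uniform(1.0, 2.0, 50)])
ref_I = rng.random(50)
print(photometric_cost(cur, pts, ref_I, np.eye(3), np.zeros(3), K))
```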
Abstract:
In this paper we describe how to cope with the delays inherent in a real-time control system for a steerable stereo head/eye platform. A purposive and reactive system requires the use of fast vision algorithms to provide the controller with the error signals to drive the platform. The time-critical implementation of these algorithms is necessary not only to enable short-latency reaction to real-world events, but also to provide sufficiently high-frequency results with small enough delays that the controller remains stable. However, even with precise knowledge of that delay, nonlinearities in the plant make modelling of the plant impossible, thus precluding the use of a Smith Regulator. Moreover, the major delay in the system is in the feedback (image capture and vision processing) rather than the feedforward (controller) loop. Delays ranging between 40 ms and 80 ms are common for the simple 2D processes, but might extend to several hundred milliseconds for more sophisticated 3D processes. The strategy presented gives precise control over the gaze direction of the cameras despite the lack of a priori knowledge of the delays involved. The resulting controller is shown to have a similar structure to the Smith Regulator, but with essential modifications.
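For readers unfamiliar with the Smith Regulator that the abstract contrasts against, the sketch below simulates the classical discrete-time Smith-predictor arrangement on a toy integrating plant with a pure feedback delay: an internal delay-free model lets the controller act on an undelayed prediction, while the delayed measurement only corrects model error. The plant, gains, and delay are invented for illustration; the paper's controller modifies this structure precisely because such a plant model is not available.

```python
import numpy as np

def simulate_smith_predictor(steps=200, delay=4, b=0.1, kp=1.5, setpoint=1.0):
    """Toy discrete-time Smith predictor on an integrating plant x[k+1] = x[k] + b*u[k]
    whose measurement reaches the controller `delay` samples late (the 'vision' delay)."""
    x = 0.0                                  # true plant state
    x_model = 0.0                            # internal, delay-free model state
    meas_buf = [0.0] * delay                 # pipeline modelling the feedback delay
    model_buf = [0.0] * delay
    history = []
    for _ in range(steps):
        y_delayed = meas_buf[0]                          # measurement available now
        y_pred = x_model + (y_delayed - model_buf[0])    # Smith-predictor output
        u = kp * (setpoint - y_pred)                     # simple proportional controller
        x = x + b * u                                    # advance the real plant
        x_model = x_model + b * u                        # advance the internal model
        meas_buf = meas_buf[1:] + [x]
        model_buf = model_buf[1:] + [x_model]
        history.append(x)
    return np.array(history)

print(simulate_smith_predictor()[-5:])       # settles at the setpoint despite the delay
```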
Abstract:
Image stitching is the process of joining several images to obtain a bigger view of a scene. It is used, for example, in tourism to transmit to the viewer the sensation of being in another place. I present an inexpensive solution for automatic real-time video and image stitching with two web cameras as the video/image sources. The proposed solution relies on the use of several markers in the scene as reference points for the stitching algorithm. The implemented algorithm is divided into four main steps: marker detection, camera pose determination (with reference to the markers), video/image size and 3D transformation, and image translation. Wii remote controllers are used to support several steps in the process. The built-in IR camera provides clean marker detection, which facilitates the camera pose determination. The only restriction in the algorithm is that markers have to be in the field of view when capturing the scene. Several tests were made to evaluate the final algorithm. The algorithm is able to perform video stitching at a frame rate between 8 and 13 fps. The joining of the two videos/images is good, with minor misalignments in objects at the same depth as the marker; misalignments in the background and foreground are bigger. The capture process is simple enough that anyone can perform a stitching with a very short explanation. Although real-time video stitching can be achieved by this affordable approach, there are a few shortcomings in the current version. For example, contrast inconsistency along the stitching line could be reduced by applying a color correction algorithm to each source video. In addition, the misalignments in stitched images due to camera lens distortion could be eased by an optical correction algorithm. The work was developed in Apple’s Quartz Composer, a visual programming environment. A library of extended functions was developed using Xcode tools, also from Apple.
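The final "image translation" step can be illustrated with a minimal translate-and-paste compositor that aligns the two frames on a shared marker position. Everything below (image sizes, marker coordinates, drawing the left frame on top of the overlap) is an assumption made for illustration; the actual Quartz Composer implementation, pose correction, and blending are not reproduced.

```python
import numpy as np

def stitch_by_marker(img_left, img_right, marker_left, marker_right):
    """Translate-and-paste stitching sketch using one shared marker as the reference.

    marker_left / marker_right : (x, y) pixel position of the same marker in each image.
    Only the final 'image translation' step is covered; the pose and scale corrections
    described in the abstract are assumed to have been applied already.
    """
    dx = int(round(marker_left[0] - marker_right[0]))
    dy = int(round(marker_left[1] - marker_right[1]))
    h_l, w_l = img_left.shape[:2]
    h_r, w_r = img_right.shape[:2]
    H = max(h_l, dy + h_r) - min(0, dy)
    W = max(w_l, dx + w_r) - min(0, dx)
    oy, ox = -min(0, dy), -min(0, dx)          # offset so all canvas coordinates are positive
    canvas = np.zeros((H, W, 3), dtype=img_left.dtype)
    canvas[oy + dy:oy + dy + h_r, ox + dx:ox + dx + w_r] = img_right
    canvas[oy:oy + h_l, ox:ox + w_l] = img_left   # left image drawn on top of the overlap
    return canvas

left = np.full((240, 320, 3), 80, dtype=np.uint8)
right = np.full((240, 320, 3), 160, dtype=np.uint8)
pano = stitch_by_marker(left, right, marker_left=(300, 120), marker_right=(40, 118))
print(pano.shape)                              # roughly a 240 x 580 panorama
```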