10 results for Robot vision systems
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
Within the next few years, the medical industry will launch increasingly affordable three-dimensional (3D) vision systems for the operating room (OR). This study aimed to evaluate the effect of two-dimensional (2D) and 3D visualization on surgical skills and task performance.
Abstract:
Diet-related chronic diseases severely affect personal and global health. However, managing or treating these diseases currently requires long training and high personal involvement to succeed. Computer vision systems could assist with the assessment of diet by detecting and recognizing different foods and their portions in images. We propose novel methods for detecting a dish in an image and segmenting its contents with and without user interaction. All methods were evaluated on a database of over 1600 manually annotated images. The dish detection achieved an average accuracy of 99% with a run time of 0.2 s/image, while the automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 91%, respectively, with an average run time of 0.5 s/image, outperforming competing solutions.
Abstract:
Diet management is a key factor for the prevention and treatment of diet-related chronic diseases. Computer vision systems aim to provide automated food intake assessment using meal images. We propose a method for the recognition of already segmented food items in meal images. The method uses a 6-layer deep convolutional neural network to classify food image patches. For each food item, overlapping patches are extracted and classified, and the class receiving the majority of votes is assigned to the item. Experiments on a manually annotated dataset with 573 food items justified the choice of the involved components and proved the effectiveness of the proposed system, yielding an overall accuracy of 84.9%.
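A minimal Python sketch of the majority-vote step described above; `classify_patch` stands in for the paper's 6-layer CNN, and the patch size and stride are illustrative assumptions rather than the authors' settings.

```python
from collections import Counter

def extract_patches(item_img, patch=32, stride=16):
    """Slide a window over the segmented food item, yielding overlapping patches.

    Patch size and stride are illustrative, not the paper's values.
    """
    h, w = item_img.shape[:2]
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            yield item_img[y:y + patch, x:x + patch]

def classify_item(item_img, classify_patch):
    """Assign the class that wins the majority vote over all patches.

    `classify_patch` is a hypothetical stand-in for the 6-layer CNN:
    it maps one image patch to a class label.
    """
    votes = Counter(classify_patch(p) for p in extract_patches(item_img))
    label, _ = votes.most_common(1)[0]
    return label
```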
Abstract:
We have developed a haptic-based approach for retraining of interjoint coordination following stroke, called time-independent functional training (TIFT), and implemented this mode in the ARMin III robotic exoskeleton. The ARMin III robot was developed by Drs. Robert Riener and Tobias Nef at the Swiss Federal Institute of Technology Zurich (Eidgenossische Technische Hochschule Zurich, or ETH Zurich), in Zurich, Switzerland. In the TIFT mode, the robot maintains arm movements within the proper kinematic trajectory via haptic walls at each joint. These walls focus training on interjoint coordination with highly intuitive real-time feedback of performance; arm movements advance within the trajectory only if their coordination is correct. In initial testing, 37 nondisabled subjects received a single session of learning of a complex pattern. Subjects were randomized to TIFT, to visual demonstration, or to moving along with the robot as it moved through the pattern (time-dependent [TD] training). We examined visual demonstration to separate the effects of action observation on motor learning from the effects of the two haptic guidance methods. During these training trials, TIFT subjects reduced error and interaction forces between the robot and the arm, while TD subject performance did not change. All groups showed significant learning of the trajectory during unassisted recall trials, but we observed no difference in learning between groups, possibly because this learning task is dominated by vision. Further testing in stroke populations is warranted.
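The abstract does not specify how the haptic walls are implemented; a common realization is a spring-damper penalty force that is zero inside a tolerance corridor around the reference trajectory and pushes the joint back once it leaves the corridor. The sketch below illustrates that idea for a single joint; the corridor width and gains are illustrative assumptions, not ARMin III controller parameters.

```python
def haptic_wall_torque(q, q_dot, q_ref, corridor=0.05, k=40.0, b=2.0):
    """Penalty torque pushing one joint back inside a corridor around q_ref.

    q, q_dot : current joint angle (rad) and velocity (rad/s)
    q_ref    : reference angle on the trained trajectory (rad)
    corridor : half-width of the permitted band around q_ref (rad); assumed
    k, b     : illustrative spring and damping gains; assumed

    Inside the corridor the wall is transparent (zero torque), so the
    subject moves freely as long as coordination stays correct.
    """
    error = q - q_ref
    if abs(error) <= corridor:
        return 0.0
    # Signed penetration depth beyond the wall boundary.
    penetration = error - corridor if error > 0 else error + corridor
    return -k * penetration - b * q_dot
```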
Abstract:
Few real software systems are built completely from scratch nowadays. Instead, systems are built iteratively and incrementally, while integrating and interacting with components from many other systems. Adaptation, reconfiguration and evolution are normal, ongoing processes throughout the lifecycle of a software system. Nevertheless, the platforms, tools and environments we use to develop software are still largely based on an outmoded model that presupposes that software systems are closed and will not significantly evolve after deployment. We claim that in order to enable effective and graceful evolution of modern software systems, we must make these systems more amenable to change by (i) providing explicit, first-class models of software artifacts, change, and history at the level of the platform, (ii) continuously analysing static and dynamic evolution to track emergent properties, and (iii) closing the gap between the domain model and the developers' view of the evolving system. We outline our vision of dynamic, evolving software systems and identify the research challenges to realizing this vision.
Abstract:
HYPOTHESIS A previously developed image-guided robot system can safely drill a tunnel from the lateral mastoid surface, through the facial recess, to the middle ear, as a viable alternative to conventional mastoidectomy for cochlear electrode insertion. BACKGROUND Direct cochlear access (DCA) provides a minimally invasive tunnel from the lateral surface of the mastoid through the facial recess to the middle ear for cochlear electrode insertion. A safe and effective tunnel drilled through the narrow facial recess requires a highly accurate image-guided surgical system. Previous attempts have relied on patient-specific templates and robotic systems to guide drilling tools. In this study, we report on improvements made to an image-guided surgical robot system developed specifically for this purpose and the resulting accuracy achieved in vitro. MATERIALS AND METHODS The proposed image-guided robotic DCA procedure was carried out bilaterally on 4 whole-head cadaver specimens. Specimens were implanted with titanium fiducial markers and imaged with cone-beam CT. A preoperative plan was created using a custom software package wherein relevant anatomical structures of the facial recess were segmented and a drill trajectory targeting the round window was defined. Patient-to-image registration was performed with the custom robot system to reference the preoperative plan, and the DCA tunnel was drilled in 3 stages with progressively longer drill bits. The position of the drilled tunnel was defined as a line fitted to a point cloud of the segmented tunnel using principal component analysis (PCA function in MATLAB). The accuracy of the DCA was then assessed by coregistering preoperative and postoperative image data and measuring the deviation of the drilled tunnel from the plan. The final step of electrode insertion was also performed through the DCA tunnel after manual removal of the promontory through the external auditory canal. RESULTS Drilling error was defined as the lateral deviation of the tool in the plane perpendicular to the drill axis (excluding depth error). Errors of 0.08 ± 0.05 mm and 0.15 ± 0.08 mm were measured on the lateral mastoid surface and at the target on the round window, respectively (n = 8). Full electrode insertion was possible in 7 cases. In 1 case, the electrode was partially inserted, with 1 contact pair external to the cochlea. CONCLUSION The purpose-built robot system was able to perform a safe and reliable DCA for cochlear implantation. The workflow implemented in this study mimics the envisioned clinical procedure, showing the feasibility of future clinical implementation.
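A minimal NumPy sketch of the two geometric steps named in the abstract: fitting a line to the segmented tunnel's point cloud via PCA (computed here via SVD, standing in for the MATLAB PCA call), and measuring lateral deviation in the plane perpendicular to the drill axis. Function and variable names are illustrative, not the authors' code.

```python
import numpy as np

def fit_line_pca(points):
    """Fit a 3D line to an N x 3 point cloud.

    Returns (centroid, unit direction): the first principal component
    of the centered cloud is the tunnel's axis direction.
    """
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[0]

def lateral_deviation(point, plan_point, plan_axis):
    """Deviation of `point` from the planned trajectory, measured in the
    plane perpendicular to the drill axis (depth error excluded)."""
    axis = np.asarray(plan_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    offset = np.asarray(point, dtype=float) - plan_point
    # Remove the along-axis (depth) component, keep only the lateral part.
    lateral = offset - np.dot(offset, axis) * axis
    return np.linalg.norm(lateral)
```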
Abstract:
The aim of direct cochlear access (DCA) is to replace the standard mastoidectomy with a small-diameter tunnel from the lateral bone surface to the cochlea for electrode array insertion. In contrast to previous attempts, the approach described in this work not only achieves unprecedented accuracy, but also contains several safety sub-systems. This paper provides a brief description of the system components and summarizes accuracy results obtained with the system in a cadaver model over the past two years.
Abstract:
Sensitivity to spatial and temporal patterns is a fundamental aspect of vision. Herein, we investigated this sensitivity in adult zebrafish for a wide range of spatial (0.014 to 0.511 cycles/degree [c/d]) and temporal frequencies (0.025 to 6 cycles/s) to better understand their visual system. Measurements were performed at photopic (1.8 log cd m⁻²) and scotopic (-4.5 log cd m⁻²) light levels to assess the optokinetic response (OKR). The resulting spatiotemporal contrast sensitivity (CS) functions revealed that the OKR of zebrafish is tuned to spatial frequency and speed but not to temporal frequency. In doing so, optimal test parameters for CS measurements were identified. At photopic light levels, a spatial frequency of 0.116 ± 0.01 c/d (mean ± SD) and a grating speed of 8.42 ± 2.15 degrees/second (d/s) were ideal; at scotopic light levels, these values were 0.110 ± 0.02 c/d and 5.45 ± 1.31 d/s, respectively. This study makes it possible to better characterize zebrafish mutants with altered vision and, since measurements were performed under different light conditions, to distinguish between defects of rod and cone photoreceptors.
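For reference, the three grating parameters are linked by temporal frequency = spatial frequency × speed, so the reported photopic optimum implies a temporal frequency of roughly 1 cycle/s; a quick check using the abstract's values:

```python
# Temporal frequency (cycles/s) = spatial frequency (c/d) * speed (d/s).
spatial_freq = 0.116   # c/d, photopic optimum from the abstract
speed = 8.42           # d/s, photopic optimum from the abstract
temporal_freq = spatial_freq * speed
print(f"{temporal_freq:.2f} cycles/s")  # ~0.98 cycles/s
```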
Abstract:
In retinal surgery, surgeons face difficulties such as indirect visualization of surgical targets, physiological tremor, and lack of tactile feedback, which increase the risk of retinal damage caused by incorrect surgical gestures. In this context, intraocular proximity sensing has the potential to overcome current technical limitations and increase surgical safety. In this paper, we present a system for detecting unintentional collisions between surgical tools and the retina using the visual feedback provided by the ophthalmic stereo microscope. Using stereo images, proximity between surgical tools and the retinal surface can be detected when their relative stereo disparity is small. For this purpose, we developed a system composed of two modules. The first is a module for tracking the surgical tool position in both stereo images. The second is a disparity tracking module for estimating a stereo disparity map of the retinal surface. Both modules were specially tailored to cope with the challenging visualization conditions in retinal surgery. The potential clinical value of the proposed method is demonstrated by extensive testing using a silicone phantom eye and in vivo data recorded in rabbits.
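A schematic sketch of the disparity-based proximity test: if the tracked tool tip's disparity nearly matches the retinal surface's disparity at the same image location, tool and retina are close in depth. The tool position and the dense disparity map are assumed to come from the two modules described above; the threshold and interfaces are illustrative assumptions, not the paper's implementation.

```python
def proximity_alert(tool_tip_xy, tool_disparity, surface_disparity_map,
                    threshold_px=2.0):
    """Flag proximity when the tool's stereo disparity nearly matches the
    retina's disparity at the tool position: a small relative disparity
    between the two implies a small depth separation.

    tool_tip_xy           : (x, y) pixel position of the tracked tool tip
    tool_disparity        : disparity of the tool tip between stereo views
    surface_disparity_map : dense disparity map of the retinal surface
    threshold_px          : illustrative alert threshold in pixels; assumed
    """
    x, y = tool_tip_xy
    surface_disparity = surface_disparity_map[int(y), int(x)]
    return abs(tool_disparity - surface_disparity) < threshold_px
```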