919 results for Image-based mesh generation


Relevance: 40.00%

Abstract:

Automated identification of vertebrae from X-ray image(s) is an important step for various medical image computing tasks such as 2D/3D rigid and non-rigid registration. In this chapter we present a graphical model-based solution for automated vertebra identification from X-ray image(s). Our solution does not require a training process or training data and can automatically determine the number of vertebrae visible in the image(s). This is achieved by combining a graphical model-based maximum a posteriori (MAP) estimate with mean-shift-based clustering. Experiments conducted on simulated X-ray images, as well as on a low-dose, low-quality spinal X-ray image of a scoliotic patient, verified its performance.
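
As an illustration of the clustering step mentioned above, the following Python sketch runs a plain one-dimensional mean-shift over candidate vertebra-centre coordinates and counts the resulting modes. It is a generic sketch, not the authors' implementation; the bandwidth and the candidate positions are assumed values.

```python
import numpy as np

def mean_shift_1d(points, bandwidth=15.0, tol=1e-3, max_iter=200):
    """Shift every point towards the Gaussian-weighted mean of the data
    until convergence; converged positions (modes) define the clusters."""
    modes = points.astype(float).copy()
    for _ in range(max_iter):
        dist = modes[:, None] - points[None, :]
        w = np.exp(-0.5 * (dist / bandwidth) ** 2)
        new_modes = (w * points[None, :]).sum(axis=1) / w.sum(axis=1)
        if np.abs(new_modes - modes).max() < tol:
            modes = new_modes
            break
        modes = new_modes
    # Merge modes that converged to (almost) the same position
    n_clusters = len(np.unique(np.round(modes / (0.5 * bandwidth))))
    return modes, n_clusters

# Hypothetical y-coordinates (pixels) of candidate vertebra centres along the spine
candidates = np.array([102., 104., 148., 151., 149., 197., 243., 246.])
modes, n_vertebrae = mean_shift_1d(candidates)
print(n_vertebrae)  # estimated number of distinct vertebrae (here: 4)
```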

Relevance: 40.00%

Abstract:

In this paper, we propose a new method for stitching multiple fluoroscopic images taken by a C-arm instrument. We employ a radiolucent X-ray ruler with numbered graduations while acquiring the images, and the image stitching is based on detecting ruler parts in the images and matching them to the corresponding parts of a virtual ruler. To achieve this goal, we first detect the regularly spaced graduations on the ruler and their numbers. After graduation labeling, we have, for each image, the location and the associated number of every graduation on the ruler. Then, we initialize the panoramic X-ray image with the virtual ruler and “paste” each image by aligning the detected ruler part in the original image to the corresponding part of the virtual ruler in the panoramic image. Our method is based on ruler matching and does not require matching similar feature points in pairwise images; thus, we do not necessarily require overlap between the images. We tested our method on eight different datasets of X-ray images, including long bones and a complete spine. Qualitative and quantitative experiments show that our method achieves good results.
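
The ruler-matching idea can be illustrated with a short, hedged sketch: given detected graduation positions and the numbers read next to them, a scale-and-offset fit onto the virtual ruler tells where each image row lands in the panorama. The pixel positions, graduation pitch and panorama resolution below are assumptions for illustration only, not values from the paper.

```python
import numpy as np

# Assumed example: pixel rows of detected graduations in one fluoroscopic image,
# together with the graduation numbers recognised next to them (1 graduation = 10 mm here).
detected_px = np.array([120.0, 220.0, 321.0, 419.0])   # graduation centres in the image
graduation_no = np.array([14, 15, 16, 17])             # numbers read from the ruler

mm_per_graduation = 10.0
panorama_px_per_mm = 10.0                               # resolution chosen for the panorama

# Position of the same graduations on the virtual ruler of the panoramic image
virtual_px = graduation_no * mm_per_graduation * panorama_px_per_mm

# Least-squares scale + offset that maps image rows onto panorama rows
A = np.vstack([detected_px, np.ones_like(detected_px)]).T
scale, offset = np.linalg.lstsq(A, virtual_px, rcond=None)[0]

# Any pixel row r of the image lands at scale * r + offset in the panorama,
# so the image can be "pasted" onto the virtual ruler without needing image overlap.
print(scale, offset)
```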

Relevance: 40.00%

Abstract:

XMapTools is a MATLAB-based graphical user interface program for electron microprobe X-ray image processing, which can be used to estimate the pressure–temperature conditions of crystallization of minerals in metamorphic rocks. This program (available online at http://www.xmaptools.com) provides a method to standardize raw electron microprobe data and includes functions to calculate oxide weight percent compositions for various minerals. A set of external functions is provided to calculate structural formulae from the standardized analyses as well as to estimate pressure–temperature conditions of crystallization, using empirical and semi-empirical thermobarometers from the literature. Two graphical user interface modules, Chem2D and Triplot3D, are used to plot mineral compositions in binary and ternary diagrams. As an example, the software is used to study a high-pressure Himalayan eclogite sample from the Stak massif in Pakistan. The high-pressure paragenesis, consisting of omphacite and garnet, has been retrogressed to a symplectitic assemblage of amphibole, plagioclase and clinopyroxene. Mineral compositions corresponding to ~165,000 analyses yield estimates for the eclogitic pressure–temperature retrograde path from 25 kbar to 9 kbar. The corresponding pressure–temperature maps were plotted and used to interpret the link between the equilibrium conditions of crystallization and the symplectitic microstructures. This example illustrates the usefulness of XMapTools for studying variations in the chemical composition of minerals and for retrieving information on metamorphic conditions at the microscale, towards the computation of continuous pressure–temperature (and relative time) paths in zoned metamorphic minerals not affected by post-crystallization diffusion.
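
The standardization step (converting raw X-ray count maps into oxide weight percent) can be sketched under the assumption of a simple proportional calibration against a few quantitative spot analyses. This is not the actual XMapTools routine, and every value below is made up for illustration.

```python
import numpy as np

# Raw X-ray count map for one element channel, plus a few pixels for which
# quantitative spot analyses (oxide wt%) are available -- all values are invented.
raw_counts = np.random.default_rng(0).poisson(500, size=(200, 200)).astype(float)
spot_rows   = np.array([20, 80, 150])
spot_cols   = np.array([30, 90, 160])
spot_wt_pct = np.array([38.2, 39.0, 37.5])   # e.g. SiO2 wt% from microprobe spots

# Fit a simple proportional calibration wt% = k * counts through the standards
counts_at_spots = raw_counts[spot_rows, spot_cols]
k = np.sum(counts_at_spots * spot_wt_pct) / np.sum(counts_at_spots ** 2)

# Apply the calibration to the whole map to obtain a standardized oxide wt% image
oxide_map = k * raw_counts
print(oxide_map[spot_rows, spot_cols])   # standardized wt% at the calibration pixels
```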

Relevance: 40.00%

Abstract:

The articular cartilage layer of synovial joints is commonly lesioned by trauma or by degenerative joint disease. Attempts to repair the damage frequently involve autologous chondrocyte implantation (ACI): healthy cartilage must first be removed from the joint and then, on a separate occasion, following the isolation of the chondrocytes and their expansion in vitro, implanted within the lesion. The disadvantages of this therapeutic approach include the destruction of healthy cartilage (which may predispose the joint to osteoarthritic degeneration), the necessarily restricted availability of healthy tissue, the limited proliferative capacity of the donor cells (which declines with age) and the need for two surgical interventions. We postulated that it should be possible to induce synovial stem cells, which are characterized by high, age-independent proliferative and chondrogenic differentiation capacities, to lay down cartilage within the outer juxtasynovial space after the transcutaneous implantation of a carrier bearing BMP-2 in a slow-release system. The chondrocytes could then be isolated on-site and immediately used for ACI. To test this hypothesis, Chinchilla rabbits were used as an experimental model. A collagenous patch bearing BMP-2 in a slow-delivery vehicle was sutured to the inner face of the synovial membrane. The neoformed tissue was excised 5, 8, 11 and 14 days postimplantation for histological and histomorphometric analyses. Neoformed tissue was observed within the outer juxtasynovial space as early as the 5th postimplantation day. It contained connective and adipose tissue, and a central nugget of growing cartilage. Between days 5 and 14, the absolute volume of cartilage increased, attaining a value of 12 mm³ at the latter time point. Bone was deposited in measurable quantities from the 11th day onwards, but owing to resorption, the net volume did not exceed 1.5 mm³ (14th day). The findings confirm our hypothesis. The quantity of neoformed cartilage deposited after only 1 week within the outer juxtasynovial space would yield sufficient cells for ACI. Since the BMP-2-bearing patches would be implanted transcutaneously in humans, only one surgical or arthroscopic intervention would be called for. Moreover, and most importantly, sufficient numbers of cells could be generated in patients of all ages.

Relevance: 40.00%

Abstract:

PURPOSE Laser range scanners (LRS) allow a surface scan to be performed without physical contact with the organ, yielding higher registration accuracy for image-guided surgery (IGS) systems. However, the use of LRS-based registration in laparoscopic liver surgery is still limited because current solutions are composed of expensive and bulky equipment that can hardly be integrated into a surgical scenario. METHODS In this work, we present a novel LRS-based IGS system for laparoscopic liver procedures. A triangulation process is formulated to compute the 3D coordinates of laser points using the existing IGS system tracking devices. This allows the use of a compact and cost-effective LRS and therefore facilitates integration into the laparoscopic setup. The 3D laser points are then reconstructed into a surface and registered to the preoperative liver model using a multi-level registration process. RESULTS Experimental results show that the proposed system provides submillimeter scanning precision and accuracy comparable to those reported in the literature. Further quantitative analysis shows that the proposed system is able to achieve a patient-to-image registration accuracy, described as target registration error, of [Formula: see text]. CONCLUSIONS We believe that the presented approach will lead to a faster integration of LRS-based registration techniques into the surgical environment. Further studies will focus on optimizing scanning time and on respiratory motion compensation.
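
One common way to realise such a triangulation is to intersect two tracked rays and take the midpoint of their common perpendicular. The sketch below illustrates that geometric step only, with assumed ray origins and directions; it is not the system's calibrated pipeline.

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Return the midpoint of the shortest segment between two 3-D rays.

    o1, o2 : ray origins (e.g. tracked laser source and camera centre)
    d1, d2 : direction vectors of the rays
    """
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    w0 = o1 - o2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b            # ~0 only if the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = o1 + s * d1                 # closest point on ray 1
    p2 = o2 + t * d2                 # closest point on ray 2
    return 0.5 * (p1 + p2)

# Assumed tracked poses: laser emitter at the origin, camera 100 mm to the right
point = triangulate_midpoint(np.array([0., 0., 0.]), np.array([0.1, 0., 1.]),
                             np.array([100., 0., 0.]), np.array([-0.1, 0., 1.]))
print(point)   # 3-D laser point in tracker coordinates (mm), here ~[50, 0, 500]
```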

Relevance: 40.00%

Abstract:

Pencil beam scanned (PBS) proton therapy has many advantages over conventional radiotherapy, but its effectiveness for treating mobile tumours remains questionable. Gating dose delivery to the breathing pattern is a well-developed method for mitigating tumour motion in conventional radiotherapy, but its clinical efficiency for PBS proton therapy is not yet well documented. In this study, the dosimetric benefits and the treatment efficiency of beam gating for PBS proton therapy have been comprehensively evaluated. A series of dedicated 4D dose calculations (4DDC) has been performed on 9 different 4DCT(MRI) liver data sets, which provide realistic 4DCTs with motion information extracted from 4DMRI. The value of 4DCT(MRI) is its capability to provide not only patient geometries and deformable breathing characteristics, but also variations in the breathing patterns between breathing cycles. In order to monitor target motion and derive a gating signal, we simulate time-resolved beam's eye view (BEV) X-ray images as an online motion surrogate. 4DDCs have been performed using three amplitude-based gating window sizes (10/5/3 mm), with motion surrogates derived from either pre-implanted fiducial markers or the diaphragm. In addition, gating has also been simulated in combination with up to 19 times rescanning, using either volumetric or layered approaches. The quality of the resulting 4DDC plans has been quantified in terms of the plan homogeneity index (HI), total treatment time and duty cycle. Results show that neither beam gating nor rescanning alone can fully restore the plan homogeneity of the static reference plan. Especially for variable breathing patterns, reductions of the effective duty cycle to as low as 10% have been observed with the smallest gating window (3 mm), implying that gating on its own would, in such cases, result in much longer treatment times. In addition, when rescanning is applied on its own, large differences between volumetric and layered rescanning have been observed as a function of the number of re-scans. However, once gating and rescanning are combined, an HI within 2% of the static plan could be achieved in the clinical target volume, with only moderately prolonged treatment times, irrespective of the rescanning strategy used. Moreover, these results are independent of the motion surrogate used. In conclusion, our results suggest that image-guided beam gating, combined with rescanning, is a feasible, effective and efficient motion mitigation approach for PBS-based liver tumour treatments.
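
The relationship between an amplitude-based gating window and the duty cycle can be illustrated with a short sketch on a synthetic breathing trace. The trace, sampling rate and exhale-anchored window below are assumptions for illustration and do not reproduce the 4DCT(MRI) data sets of the study.

```python
import numpy as np

# Synthetic breathing trace (mm), sampled at 20 Hz for 2 minutes: a ~3.75 s breathing
# cycle whose amplitude drifts slowly, mimicking cycle-to-cycle variability.
t = np.arange(0.0, 120.0, 0.05)
amplitude_mm = 12.0 + 3.0 * np.sin(2 * np.pi * t / 40.0)
motion = amplitude_mm * 0.5 * (1 - np.cos(2 * np.pi * t / 3.75))   # 0 mm = exhale

def duty_cycle(motion_mm, window_mm):
    """Fraction of time the target stays inside an amplitude gating window
    anchored at the exhale position (motion == 0), i.e. the beam-on fraction."""
    return float(np.mean(motion_mm <= window_mm))

for window in (10.0, 5.0, 3.0):      # the three gating window sizes used in the study
    print(f"{window:4.1f} mm window -> duty cycle {duty_cycle(motion, window):.0%}")
```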

Relevance: 40.00%

Abstract:

BACKGROUND Patient-to-image registration is a core process of image-guided surgery (IGS) systems. We present a novel registration approach for application in laparoscopic liver surgery, which reconstructs in real time an intraoperative volume of the underlying intrahepatic vessels through an ultrasound (US) sweep process. METHODS An existing IGS system for open liver procedures was adapted, with suitable instrument tracking for laparoscopic equipment. Registration accuracy was evaluated on a realistic phantom by computing the target registration error (TRE) for 5 intrahepatic tumors. The registration workflow was evaluated by measuring the time required to perform the registration. Additionally, a scheme for intraoperative accuracy assessment by visual overlay of the US image with preoperative image data was evaluated. RESULTS The proposed registration method achieved an average TRE of 7.2 mm in the left lobe and 9.7 mm in the right lobe. The average time required for performing the registration was 12 minutes. A positive correlation was found between the intraoperative accuracy assessment and the obtained TREs. CONCLUSIONS The registration accuracy of the proposed method is adequate for laparoscopic intrahepatic tumor targeting. The presented approach is feasible and fast and may, therefore, not be disruptive to the current surgical workflow.
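
A target registration error of this kind is typically computed as the distance between registered target positions and their ground-truth locations. The sketch below shows that calculation for a hypothetical rigid transform and five made-up tumor centres; it does not use the phantom data of the study.

```python
import numpy as np

def target_registration_error(R, t, targets_patient, targets_image):
    """Mean Euclidean distance (mm) between registered patient-space targets
    and their ground-truth positions in image space."""
    mapped = targets_patient @ R.T + t
    return np.linalg.norm(mapped - targets_image, axis=1).mean()

# Assumed example: a small rotation about z plus a translation, and 5 tumor centres
theta = np.deg2rad(2.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([1.5, -0.8, 2.0])

targets_image = np.array([[40.,  10., 60.], [55., -20., 80.], [70.,   5., 50.],
                          [90., -15., 65.], [30.,  25., 75.]])
targets_patient = (targets_image - t) @ R     # exact inverse mapping ...
targets_patient += np.random.default_rng(2).normal(0.0, 3.0, targets_patient.shape)  # ... plus noise

print(f"TRE = {target_registration_error(R, t, targets_patient, targets_image):.1f} mm")
```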

Relevance: 40.00%

Abstract:

PURPOSE Treatment of vascular malformations requires the placement of a needle within vessels that may be as small as 1 mm, with the current state of the art relying exclusively on two-dimensional fluoroscopy images for guidance. We hypothesize that the combination of stereotactic image guidance with existing targeting methods will result in faster and more reproducible needle placements, as well as reduced radiation exposure, when compared to standard methods based on fluoroscopy alone. METHODS The proposed navigation approach was evaluated in a phantom experiment designed to allow direct comparison with the conventional method. An anatomical phantom of the left forearm was constructed, including an independent control mechanism to indicate the attainment of the target position. Three interventionalists (one inexperienced, two who frequently practice the conventional fluoroscopic technique) performed 45 targeting attempts using the combined approach and 45 using the standard approach. RESULTS In all 45 attempts, the users were able to reach the target when utilizing the combined approach. In two cases, targeting was stopped after 15 min without reaching the target when utilizing only the C-arm. The inexperienced user was faster with the combined approach and applied significantly less radiation than with the conventional approach. Conversely, both experienced users were faster when using the conventional approach, in one case significantly so, with no significant difference in radiation dose when compared to the combined approach. CONCLUSIONS This work presents an initial evaluation of a combined navigation-fluoroscopy targeting technique in a phantom study. The results suggest that, especially for inexperienced interventionalists, navigation may help to reduce both the time and the radiation dose. Future work will focus on the improvement and clinical evaluation of the proposed method.

Relevance: 40.00%

Abstract:

ENVISAT ASAR WSM images with a pixel size of 150 × 150 m, acquired under different meteorological, oceanographic and sea ice conditions, were used to detect icebergs in the Amundsen Sea (Antarctica). An object-based method for automatic iceberg detection from SAR data has been developed and applied. The object identification is based on spectral and spatial parameters at 5 scale levels and was verified against manual classification in four polygon areas chosen to represent varying environmental conditions. The algorithm works comparatively well in the freezing temperatures and strong wind conditions that prevail in the Amundsen Sea during the year. The detection rate was 96%, which corresponds to 94% of the area (counting icebergs larger than 0.03 km²), for all seasons. The presented algorithm tends to generate errors in the form of false alarms, mainly caused by the presence of ice floes, rather than misses. This affects the reliability, since false alarms were manually corrected post-analysis.
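
The detection principle can be caricatured in a few lines: threshold bright backscatter, label connected objects and keep those above the 0.03 km² cut-off. The sketch below is only this simplified, single-scale stand-in with an arbitrarily chosen threshold, not the multi-scale object-based algorithm of the study.

```python
import numpy as np
from scipy import ndimage

PIXEL_AREA_KM2 = 0.150 * 0.150        # ENVISAT ASAR WSM pixel: 150 m x 150 m
MIN_AREA_KM2 = 0.03                   # smallest iceberg counted in the study

def flag_iceberg_candidates(sigma0_db, threshold_db=-6.0):
    """Very simplified stand-in for object-based detection: threshold bright
    backscatter, label connected objects and keep those above the area cut-off.
    The threshold value is an assumption for illustration only."""
    bright = sigma0_db > threshold_db
    labels, n = ndimage.label(bright)
    areas_px = ndimage.sum(bright, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(areas_px * PIXEL_AREA_KM2 >= MIN_AREA_KM2) + 1
    return labels, keep               # label image and ids of candidate icebergs

# Synthetic scene: dark sea (~ -14 dB) with two bright patches standing in for icebergs
scene = np.full((300, 300), -14.0) + np.random.default_rng(3).normal(0, 1.0, (300, 300))
scene[100:104, 100:104] = -2.0
scene[200:210, 220:230] = -3.0
labels, candidates = flag_iceberg_candidates(scene)
print(len(candidates))                # expected: 2
```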

Relevance: 40.00%

Abstract:

Providing accurate maps of coral reefs, in which the spatial scale and labels of the mapped features correspond to map units appropriate for examining biological and geomorphic structures and processes, is a major challenge for remote sensing. The objective of this work is to assess the accuracy and relevance of the process used to derive geomorphic zone and benthic community zone maps for three western Pacific coral reefs produced from multi-scale, object-based image analysis (OBIA) of high-spatial-resolution multi-spectral images, guided by field survey data. Three Quickbird-2 multi-spectral data sets from reefs in Australia, Palau and Fiji, together with georeferenced field photographs, were used in a multi-scale segmentation and object-based image classification to map geomorphic zones and benthic community zones. A per-pixel approach was also tested for mapping benthic community zones. Validation of the maps and comparison to past approaches indicated that the multi-scale OBIA process enabled field data, operator field experience and a conceptual hierarchical model of the coral reef environment to be linked, providing output maps at geomorphic zone and benthic community scales on coral reefs. The OBIA mapping accuracies were comparable with previously published work using other methods; however, the classes mapped were matched to a predetermined set of features on the reef.
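
The object-based (as opposed to per-pixel) idea can be sketched briefly: segment the image into objects, aggregate the spectra within each object, then classify the objects. In the hedged sketch below the segmentation is a trivial grid tiling standing in for the multi-scale segmentation used in the study, and the class centroids are invented.

```python
import numpy as np

def grid_objects(height, width, block=8):
    """Toy stand-in for image segmentation: tile the image into square objects."""
    rows = np.arange(height) // block
    cols = np.arange(width) // block
    return rows[:, None] * (width // block + 1) + cols[None, :]

def classify_objects(image, labels, class_centroids):
    """Assign each object the class whose centroid is nearest to the object's
    mean spectrum (a simple nearest-centroid, object-based classifier)."""
    out = np.zeros_like(labels)
    for oid in np.unique(labels):
        mask = labels == oid
        mean_spectrum = image[mask].mean(axis=0)
        dists = np.linalg.norm(class_centroids - mean_spectrum, axis=1)
        out[mask] = np.argmin(dists)
    return out

# Assumed 4-band image and three benthic-community class centroids (made-up values)
rng = np.random.default_rng(4)
image = rng.normal(0.2, 0.05, size=(64, 64, 4))
centroids = np.array([[0.15, 0.18, 0.10, 0.05],    # e.g. coral-dominated
                      [0.25, 0.28, 0.22, 0.15],    # e.g. sand
                      [0.20, 0.22, 0.15, 0.08]])   # e.g. algae/rubble
labels = grid_objects(64, 64, block=8)
class_map = classify_objects(image, labels, centroids)
print(np.bincount(class_map.ravel()))              # pixel count per mapped class
```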

Relevance: 40.00%

Abstract:

This thesis deals with the problem of efficiently tracking 3D objects in sequences of images. We tackle the efficient 3D tracking problem by using direct image registration. This problem is posed as an iterative optimization procedure that minimizes a brightness error norm. We review the most popular iterative methods for image registration in the literature, turning our attention to those algorithms that use efficient optimization techniques. Two forms of efficient registration algorithms are investigated. The first type comprises the additive registration algorithms: these algorithms incrementally compute the motion parameters by linearly approximating the brightness error function. We centre our attention on Hager and Belhumeur’s factorization-based algorithm for image registration. We propose a fundamental requirement that factorization-based algorithms must satisfy to guarantee good convergence, and introduce a systematic procedure that automatically computes the factorization. Finally, we also present two warp functions for registering rigid and nonrigid 3D targets that satisfy the requirement. The second type comprises the compositional registration algorithms, in which the brightness error function is written using function composition. We study the current approaches to compositional image alignment, and we emphasize the importance of the Inverse Compositional method, which is known to be the most efficient image registration algorithm. We introduce a new algorithm, Efficient Forward Compositional image registration: this algorithm avoids the need to invert the warping function and provides a new interpretation of the working mechanisms of inverse compositional alignment. Using this information, we propose two fundamental requirements that guarantee the convergence of compositional image registration methods. Finally, we support our claims with extensive experimental testing on synthetic and real-world data. We propose a distinction between image registration and tracking when using efficient algorithms, and show that, depending on whether the fundamental requirements hold, some efficient algorithms are suitable for image registration but not for tracking.
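
A translation-only sketch of the inverse compositional scheme shows where its efficiency comes from: the steepest-descent images and the Gauss-Newton Hessian are built from the template once, so each iteration needs only one warp and a small linear solve. This is a generic illustration on synthetic data, not the thesis's Efficient Forward Compositional algorithm.

```python
import numpy as np
from scipy import ndimage

def inverse_compositional_translation(image, template, n_iter=50):
    """Estimate a 2-D translation p = (px, py) such that image(x + p) ~ template(x).

    Inverse compositional idea: the steepest-descent images and the Gauss-Newton
    Hessian come from the *template* and are precomputed, so each iteration costs
    only one image warp and a tiny solve.
    """
    gy, gx = np.gradient(template)                    # template gradients (precomputed)
    J = np.stack([gx.ravel(), gy.ravel()], axis=1)    # steepest-descent images, N x 2
    H_inv = np.linalg.inv(J.T @ J)                    # inverse 2 x 2 Hessian (precomputed)

    p = np.zeros(2)
    for _ in range(n_iter):
        warped = ndimage.shift(image, (-p[1], -p[0]), order=1, mode='wrap')  # image(x + p)
        error = (warped - template).ravel()
        dp = H_inv @ (J.T @ error)                    # incremental warp parameters
        p = p - dp                                    # compose W(p) with W(dp)^-1
        if np.linalg.norm(dp) < 1e-4:
            break
    return p

# Synthetic test: the "image" is the template translated so that the true p is (2, -1)
rng = np.random.default_rng(5)
template = ndimage.gaussian_filter(rng.random((64, 64)), 2.0)
image = ndimage.shift(template, (-1.0, 2.0), order=1, mode='wrap')
print(inverse_compositional_translation(image, template))    # close to [2, -1]
```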

Relevance: 40.00%

Abstract:

This thesis addresses on-road vehicle detection and tracking with a monocular vision system. This problem has attracted the attention of the automotive industry and the research community, as it is the first step towards driver assistance and collision avoidance systems and, ultimately, autonomous driving. Although much effort has been devoted to it in recent years, no fully satisfactory solution has yet been devised and it therefore remains an active research issue. The main challenges for vision-based vehicle detection and tracking are the high variability among vehicles, the dynamically changing background due to camera motion and the real-time processing requirement. In this thesis, a unified approach using statistical methods is presented for vehicle detection and tracking that tackles these issues. The approach is divided into three primary tasks, i.e., vehicle hypothesis generation, hypothesis verification and vehicle tracking, which are performed sequentially. Nevertheless, the exchange of information between processing blocks is fostered so that the maximum degree of adaptation to changes in the environment can be achieved and the computational cost is alleviated. Two complementary strategies are proposed to address the first task, hypothesis generation, based respectively on appearance and geometry analysis. To this end, the use of a rectified domain in which the perspective is removed from the original image is especially interesting, as it allows for fast image scanning and coarse hypothesis generation. The final vehicle candidates are produced using a collaborative framework between the original and the rectified domains. A supervised classification strategy is adopted for the verification of the hypothesized vehicle locations. In particular, state-of-the-art methods for feature extraction are evaluated and new descriptors are proposed by exploiting knowledge of vehicle appearance. Due to the lack of appropriate public databases, a new database is generated and the classification performance of the descriptors is extensively tested on it. Finally, a methodology for the fusion of the different classifiers is presented and the best combinations are discussed. The core of the proposed approach is a Bayesian tracking framework using particle filters. Contributions are made on its three key elements: the inference algorithm, the dynamic model and the observation model. In particular, the use of a Markov chain Monte Carlo method is proposed for sampling, which circumvents the exponential complexity increase of traditional particle filters, thus making joint multiple-vehicle tracking affordable. On the other hand, the aforementioned rectified domain allows for the definition of a constant-velocity dynamic model, since it preserves the smooth motion of vehicles on highways. Finally, a multiple-cue observation model is proposed that not only accounts for vehicle appearance but also integrates the available information from the analysis in the previous blocks. The proposed approach is shown to run in near real time on a general-purpose PC and to deliver outstanding results compared to traditional methods.
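
The perspective-removing ("rectified") domain mentioned above is commonly obtained with a ground-plane homography (inverse perspective mapping). The sketch below shows that step with OpenCV; the four image/ground correspondences and the file names are assumed values for illustration, not the thesis's calibration.

```python
import numpy as np
import cv2

# Assumed correspondences between four road points in the camera image (pixels)
# and their positions on a flat ground plane (bird's-eye-view pixels).
image_pts = np.float32([[420, 480], [860, 480], [1010, 700], [270, 700]])
ground_pts = np.float32([[200,   0], [440,   0], [440,  400], [200, 400]])

# Homography that removes the perspective of the road plane
H = cv2.getPerspectiveTransform(image_pts, ground_pts)

frame = cv2.imread("frame.png")                     # hypothetical input frame
if frame is not None:
    birdseye = cv2.warpPerspective(frame, H, (640, 400))
    # In this rectified domain, vehicles on the road keep an approximately constant
    # size and move with roughly constant velocity, which is what makes fast image
    # scanning and a constant-velocity dynamic model reasonable.
    cv2.imwrite("birdseye.png", birdseye)
```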