916 results for Biomedical Image Processing


Relevance: 90.00%

Abstract:

This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
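As an illustration of the image foresting transform machinery underlying this family of methods, the sketch below propagates a path cost from a user anchor over a 4-connected pixel graph with a Dijkstra-style priority queue and backtracks the optimum boundary segment. The additive gradient-based cost is a placeholder assumption; the paper's riverbed connectivity function is not reproduced here, and the function name is ours.

```python
import heapq
import numpy as np

def ift_boundary_path(cost_img, seed, target):
    """Propagate an additive path cost from `seed` over the 4-connected
    pixel graph (image foresting transform / Dijkstra) and return the
    optimum path reaching `target` by backtracking predecessors."""
    h, w = cost_img.shape
    dist = np.full((h, w), np.inf)
    pred = {}
    dist[seed] = 0.0
    heap = [(0.0, seed)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == target:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost_img[nr, nc]   # additive connectivity (placeholder)
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    pred[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # backtrack from target to seed
    path, node = [target], target
    while node != seed:
        node = pred[node]
        path.append(node)
    return path[::-1]
```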

Relevance: 90.00%

Abstract:

Electrical impedance tomography (EIT) is an imaging technique that attempts to reconstruct the impedance distribution inside an object from impedance measurements taken between electrodes placed on the object's surface. The EIT reconstruction problem can be approached as a nonlinear, nonconvex optimization problem in which one tries to maximize the match between a simulated impedance problem and the observed data. This optimization problem is often ill-posed and not well suited to methods that evaluate derivatives of the objective function. It may be approached by simulated annealing (SA), but at a large computational cost, because evaluating the objective function requires a full simulation of the impedance problem at each iteration. A variation of SA is proposed in which the objective function is evaluated only partially, while ensuring bounds on the behavior of the modified algorithm.
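The following is a minimal sketch of the idea of partially evaluating the objective inside a simulated annealing loop, assuming the EIT misfit decomposes into per-measurement terms (names such as candidate_cost_terms and neighbor are illustrative). The random-subset rule used here is a simplified stand-in for the paper's bound-preserving partial-evaluation scheme.

```python
import math
import random

def simulated_annealing_partial(candidate_cost_terms, neighbor, x0,
                                t0=1.0, alpha=0.99, n_iter=1000, batch=8):
    """Generic SA loop in which the objective is a sum of per-measurement
    misfit terms and each acceptance test only evaluates a random subset
    (`batch`) of those terms instead of the full simulation."""
    x, t = x0, t0
    for _ in range(n_iter):
        y = neighbor(x)                       # perturb the impedance distribution
        terms = random.sample(range(len(candidate_cost_terms)), batch)
        # partial objective: sum over the sampled measurement terms only
        fx = sum(candidate_cost_terms[i](x) for i in terms)
        fy = sum(candidate_cost_terms[i](y) for i in terms)
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x = y                             # Metropolis acceptance
        t *= alpha                            # geometric cooling schedule
    return x
```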

Relevance: 90.00%

Abstract:

A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC_max. The output of GC_max coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC_max is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC_max algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC_max runs in linear time with respect to the image size |C|. We show that the output of GC_max constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ‖F_P‖_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms into the realm of graph cut energy minimizers, with energy functions ‖F_P‖_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ‖F_P‖_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that the minimization problem for ‖F_P‖_q, q ∈ [1, ∞), is identical to that for ‖F_P‖_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC_sum solving the ‖F_P‖_1 minimization problem also solves the one for ‖F_P‖_q with q ∈ [1, ∞), so just two algorithms, GC_sum and GC_max, are enough to solve all ‖F_P‖_q minimization problems. We also show that, for any fixed weight assignment, the solutions of the ‖F_P‖_q minimization problems converge to a solution of the ‖F_P‖_∞ minimization problem (‖F_P‖_∞ = lim_{q→∞} ‖F_P‖_q is not enough to deduce that). An experimental comparison of the performance of the GC_max and GC_sum algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of the seeds on the output.
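As a concrete reading of the energy definitions above, the sketch below evaluates ‖F_P‖_q for a binary labeling on a 4-connected grid (separate horizontal and vertical edge-weight arrays are assumed as inputs; names are ours). Setting q = ∞ gives the GC_max/IRFC energy, while raising the weights to the power q before calling any ℓ1 (min-cut/max-flow) solver realizes the w → w^q reduction mentioned in the abstract.

```python
import numpy as np

def boundary_energy(labels, w_right, w_down, q=np.inf):
    """||F_P||_q energy of a binary labeling on a 4-connected grid.
    w_right[i, j] is the weight of edge (i, j)-(i, j+1);
    w_down[i, j]  is the weight of edge (i, j)-(i+1, j).
    q = inf gives the GC_max / IRFC energy; finite q gives the other
    members of the graph-cut energy family discussed in the abstract."""
    cut_r = labels[:, :-1] != labels[:, 1:]      # horizontal boundary edges
    cut_d = labels[:-1, :] != labels[1:, :]      # vertical boundary edges
    fp = np.concatenate([w_right[cut_r], w_down[cut_d]])
    if fp.size == 0:
        return 0.0
    return float(fp.max()) if np.isinf(q) else float((fp ** q).sum() ** (1.0 / q))

def reduce_lq_to_l1(edge_weights, q, l1_solver):
    """Solve the ||F_P||_q problem (1 <= q < inf) by handing w^q to any
    ||F_P||_1 (min-cut/max-flow style) solver, as stated in the abstract."""
    return l1_solver({edge: w ** q for edge, w in edge_weights.items()})
```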

Relevance: 90.00%

Abstract:

OBJECTIVE: To evaluate tools for the fusion of images generated by computed tomography and by structural and functional magnetic resonance imaging. METHODS: Structural and functional magnetic resonance imaging were acquired in a 3-Tesla scanner while a volunteer, who had previously undergone cranial computed tomography, performed motor and somatosensory tasks. Image data were analyzed with different programs, and the results were compared. RESULTS: We constructed a flow chart of computational processes that allowed measurement of the spatial congruence between the methods. No single computational tool contained the entire set of functions necessary to achieve the goal. CONCLUSION: The fusion of the images from the three methods proved to be feasible with the use of four free-access software programs (OsiriX, Register, MRIcro and FSL). Our results may serve as a basis for building software that will be useful as a virtual tool prior to neurosurgery.

Relevance: 90.00%

Abstract:

This thesis deals with visual servoing and the disciplines closely connected to it, such as projective geometry, image processing, robotics and non-linear control. More specifically, the work addresses the problem of controlling a robotic manipulator through one of the most widely used visual servoing techniques: Image Based Visual Servoing (IBVS). In Image Based Visual Servoing the robot is driven by an on-line feedback control loop that is closed directly in the 2D space of the camera sensor. The work considers the case of a monocular system with a single camera mounted on the robot end effector (eye-in-hand configuration). Through IBVS the system can be positioned with respect to a fixed 3D target by minimizing the differences between its initial view and its goal view, corresponding respectively to the initial and the goal system configurations: the robot's Cartesian motion is thus generated only by means of visual information. However, the execution of a positioning control task by IBVS is not straightforward, because singularity problems may occur and local minima may be reached in which the current image is very close to the target one but the 3D positioning task is far from being fulfilled; this happens in particular for large camera displacements, when the initial and the goal target views are noticeably different. To overcome the singularity and local-minima drawbacks while maintaining the robustness of IBVS with respect to modeling and camera calibration errors, suitable image path planning can be exploited. This work deals with the problem of generating suitable image-plane trajectories for the tracked points of the servoing control scheme (a trajectory is made of a path plus a time law). The generated image-plane paths must be feasible, i.e. they must be compliant with the rigid-body motion of the camera with respect to the object, so as to avoid image Jacobian singularities and local-minima problems. In addition, the planned image trajectories must generate camera velocity screws which are smooth and within the allowed bounds of the robot. We show that a scaled 3D motion planning algorithm can be devised in order to generate feasible image-plane trajectories. Since the paths in the image are generated off-line, it is also possible to tune the planning parameters so as to keep the target inside the camera field of view even if, in some unfortunate cases, the target feature points would otherwise leave the camera image due to the 3D robot motion. To test the validity of the proposed approach, both experimental and simulation results are reported, also taking into account the influence of noise on the path planning strategy. The experiments were carried out with a 6-DOF anthropomorphic manipulator with a FireWire camera installed on its end effector: the results demonstrate the good performance and the feasibility of the proposed approach.
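For context, the sketch below shows the classical IBVS point-feature control law (interaction matrix plus pseudo-inverse), which is the feedback loop the thesis builds on; it is not the path-planning contribution itself. Normalized image coordinates and rough depth estimates are assumed, and the function names are ours.

```python
import numpy as np

def interaction_matrix(points, depths):
    """Stack the classical 2x6 point-feature interaction matrices for
    normalized image points (x, y) at estimated depths Z."""
    rows = []
    for (x, y), Z in zip(points, depths):
        rows.append([-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def ibvs_velocity(points, goal_points, depths, gain=0.5):
    """Basic IBVS law v = -lambda * pinv(L) @ (s - s*): the 6-vector camera
    velocity screw driving the current features toward the goal view."""
    s = np.array(points, dtype=float).ravel()
    s_star = np.array(goal_points, dtype=float).ravel()
    L = interaction_matrix(points, depths)
    return -gain * np.linalg.pinv(L) @ (s - s_star)
```

In the planned-trajectory setting described above, the fixed goal features s* would be replaced by a time-varying reference sampled along the planned image-plane path.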

Relevance: 90.00%

Abstract:

Nocturnal frontal lobe epilepsy (NFLE) is characterized by motor seizures arising during sleep. The aim of the project is to study the pathophysiological and morpho-functional causes underlying the motor phenomena in patients with NFLE and to identify structural and/or metabolic alterations using advanced Magnetic Resonance (MR) techniques. We collected a series of patients with NFLE referred to the Epilepsy and Sleep Disorders Center of the Department of Neurological Sciences, University of Bologna. Each patient was matched with a healthy control of corresponding sex and age (± 5 years). All subjects were studied with advanced MR techniques, including proton spectroscopy (1H-MRS), diffusion tensor imaging and high-resolution 3D imaging for morphometric analyses. In particular, 1H-MRS was performed on two volumes of interest located in the thalami and in the anterior cingulate gyrus. Nineteen patients (7 M), mean age 34 years (range 19-50), and 14 controls (6 M), mean age 30 years (range 19-40), were included in the final analysis. In the anterior cingulate, the ratio of the N-acetyl-aspartate concentration to creatine (NAA/Cr) was significantly reduced in patients compared with controls (p=0.021). Regarding the correlation analysis, multiple regression models showed that the NAA/Cr ratio in the anterior cingulate of patients correlated with seizure frequency (p=0.048), being lower in patients with several seizures per week or per day. Only hypotheses can be offered to interpret this finding. NAA is a marker of neuronal integrity, density and function. It is possible that metabolic tissue alterations in specific structures, such as the anterior cingulate gyrus, underlie NFLE. This opens new possibilities for the use of investigation tools based on the analysis of biosignals to characterize the areas involved in the genesis of NFLE, which are still largely unknown, and to further clarify the etiology of this type of epilepsy.

Relevance: 90.00%

Abstract:

This work showed how the potential of nanoparticulate systems, prepared predominantly via the miniemulsion process, could be exploited for drug delivery by releasing a model drug intracellularly in different ways. This was analyzed mainly by confocal laser scanning microscopy (CLSM) in combination with the image-processing software Volocity®.

PBCA nanocapsules were particularly suited to encapsulating hydrophilic substances such as oligonucleotides and thus protecting them from possible degradation on their way into the cells. A release of the oligonucleotides inside the cells, driven by the electrostatic attraction of the mitochondrial membrane potential, could be demonstrated. The combination of the oligonucleotide with a cyanine dye (Cy5) attached at the 5' position of the oligonucleotide sequence was decisive here. Quantitative analysis with Volocity® proved the complete colocalization of the released oligonucleotides with mitochondria, which was discussed in terms of the Manders' colocalization coefficients M1 and M2. FRET studies of doubly labeled oligonucleotides also showed that the oligonucleotides were degraded neither during transport nor upon release. It was furthermore clarified that only the contents of the nanocapsules, i.e. the oligonucleotides, accumulated at mitochondria, whereas the capsule material itself was found in other intracellular regions. A combination of cyanine dyes such as Cy5 with a nucleotide sequence or a drug could therefore provide the basis for targeted drug transport to mitochondria, or create the foundation for ensuring release from capsules into the cytoplasm.

The versatility of the miniemulsion process made it possible to prepare not only capsules but also nanoparticles in which hydrophobic substances could be enclosed in the particle core. This "encapsulation" of a model drug based on hydrophobic interactions, in this case PMI, was exploited for PDLLA and PS nanoparticles stabilized by an HPMA-based block copolymer. It could be shown that the hydrophobic model drug PMI was released into the cells within a very short time and accumulated in so-called lipid droplets, without the nanoparticles themselves having to be taken up. In addition, an intracellular detachment of the stabilizing block copolymer was observed, occurring 8 h after particle uptake, which was likewise supported by analyses with Volocity®. However, this had no influence on the actual particle uptake or on the release of the model drug. A major advantage of using the HPMA-based block copolymer is that time-consuming washing steps such as dialysis after particle preparation could be omitted, since P(HPMA) is a biocompatible polymer. On the other hand, the synthesis route of this block copolymer offers many possibilities for introducing functionalities such as fluorescent markers. Covalent attachment of a drug is also conceivable, which could then be released slowly inside the cell, for example by enzymatic degradation. Nanoparticles stabilized by HPMA-based block copolymers thus offer the possibility of delivering two different drugs into the cells at the same time, one of which could be released quickly and the second over a longer period in a controlled manner.

In addition to nanocapsules and nanoparticles prepared by inverse and direct miniemulsion, respectively, nanohydrogel particles formed by the self-assembly of an amphiphilic block copolymer were also investigated. These nanohydrogel particles served to complex siRNA and were examined with respect to their accumulation in lysosomes. Based on the knockdown studies of Lutz Nuhn, a difference in knockdown efficiency was found depending on whether 100 nm or 40 nm nanohydrogel particles were used. The aim was to determine whether these two particle sizes accumulated in lysosomes at different rates, which could explain the different knockdown efficiencies. CLSM studies and quantitative colocalization studies gave a first indication of this size dependence.

For all nanoparticulate systems used, release of their payload could be demonstrated. They therefore offer great potential as drug carriers for biomedical applications.
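The colocalization analysis mentioned above relies on Manders' coefficients M1 and M2, computed in Volocity®. A minimal sketch of the standard definition is given below; the thresholding convention is an assumption, since the exact settings used in the thesis are not stated.

```python
import numpy as np

def manders_coefficients(ch1, ch2, thr1=0.0, thr2=0.0):
    """Manders' colocalization coefficients for two intensity channels.
    M1: fraction of above-threshold channel-1 signal found in pixels where
    channel 2 is above its threshold; M2: the converse.  The thresholds
    are assumptions (Volocity typically estimates them automatically)."""
    ch1 = np.asarray(ch1, dtype=float)
    ch2 = np.asarray(ch2, dtype=float)
    m1 = ch1[ch2 > thr2].sum() / ch1[ch1 > thr1].sum()
    m2 = ch2[ch1 > thr1].sum() / ch2[ch2 > thr2].sum()
    return m1, m2
```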

Relevance: 90.00%

Abstract:

The discrete cosine transform (DCT) is an important functional block for image processing applications. The implementation of a DCT has been viewed as a specialized research task. We apply a micro-architecture based methodology to the hardware implementation of an efficient DCT algorithm in a digital design course. Several circuit optimization and design space exploration techniques at the register-transfer and logic levels are introduced in class for generating the final design. The students not only learn how the algorithm can be implemented, but also receive insights about how other signal processing algorithms can be translated into a hardware implementation. Since signal processing has very broad applications, the study and implementation of an extensively used signal processing algorithm in a digital design course significantly enhances the learning experience in both digital signal processing and digital design areas for the students.
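As a point of reference for such a course, the following is a compact software model of the separable DCT-II, the kind of golden reference students might check a register-transfer-level design against; it is not the micro-architecture described in the paper, and the function names are ours.

```python
import numpy as np

def dct_1d(x):
    """Orthonormal DCT-II of a 1-D vector (software reference model)."""
    N = x.shape[0]
    k = np.arange(N)[:, None]
    n = np.arange(N)[None, :]
    basis = np.cos(np.pi * (2 * n + 1) * k / (2 * N))
    scale = np.full(N, np.sqrt(2.0 / N))
    scale[0] = np.sqrt(1.0 / N)
    return scale * (basis @ x)

def dct_2d(block):
    """Separable 2-D DCT: 1-D DCT applied to columns, then to rows,
    mirroring the row/column decomposition commonly used in hardware."""
    return np.apply_along_axis(dct_1d, 1, np.apply_along_axis(dct_1d, 0, block))
```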

Relevance: 90.00%

Abstract:

A confocal imaging and image processing scheme is introduced to visualize and evaluate the spatial distribution of spectral information in tissue. The image data are recorded using a confocal laser-scanning microscope equipped with a detection unit that provides high spectral resolution. The processing scheme is based on spectral data, is less error-prone than intensity-based visualization and evaluation methods, and provides quantitative information on the composition of the sample. The method is tested and validated in the context of the development of dermal drug delivery systems, and a quantitative uptake indicator is introduced to compare the performance of different delivery systems. A drug penetration study was performed in vitro. The results show that the method is able to detect, visualize and measure spectral information in tissue. In the penetration study, the uptake efficiencies of different experimental setups could be discriminated and quantitatively described. The developed uptake indicator is a step towards quantitative assessment and, more generally, beyond pharmaceutical research, provides valuable information on tissue composition. It can potentially be used for clinical in vitro and in vivo applications.
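A rough sketch of how such a spectrum-based evaluation could look is given below: per-pixel least-squares unmixing of the recorded spectra against reference spectra, followed by a simple uptake figure. The specific indicator defined in the paper is not reproduced; the mean drug abundance inside a tissue mask is only a placeholder, and all names are illustrative.

```python
import numpy as np

def unmix(stack, endmembers):
    """Per-pixel unconstrained least-squares unmixing of a spectral image
    stack (H x W x B) against reference spectra (K x B); returns H x W x K
    abundance maps."""
    h, w, b = stack.shape
    pixels = stack.reshape(-1, b).T                      # B x (H*W)
    coeffs, *_ = np.linalg.lstsq(endmembers.T, pixels, rcond=None)
    return coeffs.T.reshape(h, w, endmembers.shape[0])

def uptake_indicator(stack, drug_spectrum, tissue_spectrum, mask):
    """Toy uptake indicator: mean drug abundance inside the tissue mask
    (a placeholder for the paper's indicator, which is not specified here)."""
    ab = unmix(stack, np.vstack([drug_spectrum, tissue_spectrum]))
    return float(ab[..., 0][mask].mean())
```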

Relevance: 90.00%

Abstract:

The task considered in this paper is performance evaluation of region segmentation algorithms in the ground-truth-based paradigm. Given a machine segmentation and a ground-truth segmentation, performance measures are needed. We propose to consider the image segmentation problem as one of data clustering and, as a consequence, to use measures for comparing clusterings developed in statistics and machine learning. By doing so, we obtain a variety of performance measures which have not been used before in image processing. In particular, some of these measures have the highly desired property of being a metric. Experimental results are reported on both synthetic and real data to validate the measures and compare them with others.
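One of the clustering-comparison measures alluded to above is the Rand index, which treats each segmentation as a clustering of pixels and counts pairwise agreements. A minimal sketch (contingency-table formulation) follows; it is one example of such a measure, not necessarily the specific set evaluated in the paper.

```python
import numpy as np

def rand_index(seg_a, seg_b):
    """Rand index between two label images, computed from the contingency
    table of region labels (each segmentation viewed as a clustering of
    pixels)."""
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    n = a.size
    # counts of pixels sharing the same (label_a, label_b) combination
    _, counts = np.unique(np.stack([a, b]), axis=1, return_counts=True)
    same_both = (counts * (counts - 1) // 2).sum()
    _, ca = np.unique(a, return_counts=True)
    _, cb = np.unique(b, return_counts=True)
    same_a = (ca * (ca - 1) // 2).sum()
    same_b = (cb * (cb - 1) // 2).sum()
    total = n * (n - 1) // 2
    # pairs grouped together in both + pairs separated in both
    agreements = total + 2 * same_both - same_a - same_b
    return agreements / total
```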

Relevance: 90.00%

Abstract:

Sustainable yields from water wells in hard-rock aquifers are achieved when the well bore intersects fracture networks. Fracture networks are often not readily discernible at the surface. Lineament analysis using remotely sensed satellite imagery has been employed to identify surface expressions of fracturing, and a variety of image-analysis techniques have been successfully applied in “ideal” settings. An ideal setting for lineament detection is where the influences of human development, vegetation, and climatic situations are minimal and hydrogeological conditions and geologic structure are known. There is not yet a well-accepted protocol for mapping lineaments, nor have different approaches been compared in non-ideal settings. A new approach for image-processing/synthesis was developed to identify successful satellite imagery types for lineament analysis in non-ideal terrain. Four satellite sensors (ASTER, Landsat 7 ETM+, QuickBird, RADARSAT-1) and a digital elevation model were evaluated for lineament analysis in Boaco, Nicaragua, where the landscape is subject to varied vegetative cover, a plethora of anthropogenic features, and frequent cloud cover that limit the availability of optical satellite data. A variety of digital image processing techniques were employed and lineament interpretations were performed to obtain 12 complementary image products that were evaluated subjectively to identify lineaments. The 12 lineament interpretations were synthesized to create a raster image of lineament zone coincidence that shows the level of agreement among the 12 interpretations. A composite lineament interpretation was made using the coincidence raster to restrict lineament observations to areas where multiple interpretations (at least 4) agree. Nine of the 11 previously mapped faults were identified from the coincidence raster. An additional 26 lineaments were identified from the coincidence raster, and the locations of 10 were confirmed by field observation. Four manual pumping tests suggest that well productivity is higher for wells proximal to lineament features. Interpretations from RADARSAT-1 products were superior to interpretations from other sensor products, suggesting that quality lineament interpretation in this region requires anthropogenic features to be minimized and topographic expressions to be maximized. The approach developed in this study has the potential to improve the siting of wells in non-ideal regions.
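A minimal sketch of the coincidence-raster step described above follows, assuming the 12 lineament interpretations are available as co-registered binary rasters; the threshold of 4 agreeing interpretations follows the abstract, and the function name is ours.

```python
import numpy as np

def coincidence_raster(interpretations, min_agreement=4):
    """Sum a list of co-registered binary lineament rasters into a
    coincidence raster and keep only cells where at least `min_agreement`
    interpretations agree (the abstract uses 12 rasters and a threshold
    of 4)."""
    stack = np.stack([np.asarray(r, dtype=bool) for r in interpretations])
    agreement = stack.sum(axis=0)            # per-cell number of agreeing interpretations
    return agreement, agreement >= min_agreement
```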

Relevance: 90.00%

Abstract:

All optical systems that operate in or through the atmosphere suffer from turbulence-induced image blur. Both military and civilian surveillance, gun-sighting, and target identification systems are interested in terrestrial imaging over very long horizontal paths, but atmospheric turbulence can blur the resulting images beyond usefulness. My dissertation explores the performance of a multi-frame blind deconvolution technique applied under anisoplanatic conditions for both Gaussian and Poisson noise model assumptions. The technique is evaluated for use in reconstructing images of scenes corrupted by turbulence in long horizontal-path imaging scenarios and compared to other speckle imaging techniques. Performance is evaluated via the reconstruction of a common object from three sets of simulated turbulence-degraded imagery representing low, moderate and severe turbulence conditions. Each set consisted of 1000 simulated, turbulence-degraded images. The MSE performance of the estimator is evaluated as a function of the number of images and the number of Zernike polynomial terms used to characterize the point spread function. I compare the mean-square-error (MSE) performance of speckle imaging methods and a maximum-likelihood, multi-frame blind deconvolution (MFBD) method applied to long-path horizontal imaging scenarios. Both methods are used to reconstruct a scene from simulated imagery featuring anisoplanatic turbulence-induced aberrations. This comparison is performed over three sets of 1000 simulated images each for low, moderate and severe turbulence-induced image degradation. The comparison shows that speckle imaging techniques reduce the MSE by 46 percent, 42 percent and 47 percent on average for the low, moderate, and severe cases, respectively, using 15 input frames under daytime conditions and moderate frame rates. Similarly, the MFBD method provides 40 percent, 29 percent, and 36 percent improvements in MSE on average under the same conditions. The comparison is repeated under low-light conditions (less than 100 photons per pixel), where improvements of 39 percent, 29 percent and 27 percent are available using speckle imaging methods and 25 input frames, and of 38 percent, 34 percent and 33 percent respectively for the MFBD method and 150 input frames. The MFBD estimator is applied to three sets of field data and the results are presented. Finally, a combined bispectrum-MFBD hybrid estimator is proposed and investigated. This technique consistently provides a lower MSE and a smaller variance in the estimate under all three simulated turbulence conditions.
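A small sketch of the figure of merit quoted above follows, under the assumption that the percent MSE reduction is measured against the average MSE of the raw turbulence-degraded frames (the abstract does not state the baseline explicitly); names are illustrative.

```python
import numpy as np

def mse(a, b):
    """Pixelwise mean squared error between two images."""
    return float(np.mean((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def percent_mse_reduction(truth, degraded_frames, reconstruction):
    """Percent reduction in MSE of a reconstruction relative to the mean
    MSE of the raw degraded frames (assumed baseline)."""
    baseline = np.mean([mse(truth, f) for f in degraded_frames])
    return 100.0 * (1.0 - mse(truth, reconstruction) / baseline)
```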

Relevance: 90.00%

Abstract:

Image denoising methods have been implemented in both spatial and transform domains. Each domain has its advantages and shortcomings, which can be complemented by each other. State-of-the-art methods like block-matching 3D filtering (BM3D) therefore combine both domains. However, implementation of such methods is not trivial. We offer a hybrid method that is surprisingly easy to implement and yet rivals BM3D in quality.
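To make the idea of combining domains concrete, here is a toy hybrid built from standard pieces: transform-domain hard thresholding followed by a small spatial filter. It only illustrates the spatial/transform combination and is not the method proposed in the abstract.

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import median_filter

def hybrid_denoise(img, thresh=20.0, size=3):
    """Toy hybrid denoiser: hard-threshold small DCT coefficients
    (transform domain), invert, then apply a small median filter
    (spatial domain)."""
    coeffs = dctn(img, norm="ortho")
    coeffs[np.abs(coeffs) < thresh] = 0.0       # suppress weak (noisy) coefficients
    return median_filter(idctn(coeffs, norm="ortho"), size=size)
```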

Relevance: 90.00%

Abstract:

Morphometric investigations using a point- and intersection-counting strategy in the lung are often not able to reveal the full set of morphologic changes. This happens particularly when structural modifications are not expressed in terms of volume density changes and when rough and fine surface density alterations cancel each other out at different magnifications. Making use of digital image processing, we present a methodological approach that makes it possible to easily and quickly quantify changes in the geometrical properties of the parenchymal lung structure and that closely reflects the visual appreciation of the changes. Randomly sampled digital images from light microscopic sections of lung parenchyma are filtered, binarized, and skeletonized. The lung septa are thus represented as a single-pixel-wide line network with nodal points and end points and the corresponding internodal and end segments. By automatically counting the number of points and measuring the lengths of the skeletal segments, the lung architecture can be characterized and very subtle structural changes can be detected. This new methodological approach to lung structure analysis is highly sensitive to morphological changes in the parenchyma: it detected highly significant quantitative alterations in the structure of lungs of rats treated with a glucocorticoid hormone, where classical morphometry had partly failed.
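A minimal sketch of the binarize/skeletonize/count pipeline described above, using common scientific-Python tools, is given below. The Otsu threshold and the assumption that septa are darker than airspaces are ours, and segment-length measurement is omitted.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def septal_skeleton_stats(gray):
    """Binarize a parenchyma image (Otsu; tissue assumed darker than air),
    skeletonize the septa, and count end points (1 skeleton neighbor) and
    nodal points (3 or more neighbors)."""
    tissue = gray < threshold_otsu(gray)            # assumption: septa are darker
    skel = skeletonize(tissue)
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbors = convolve(skel.astype(int), kernel, mode="constant")
    end_points = int(np.sum(skel & (neighbors == 1)))
    nodal_points = int(np.sum(skel & (neighbors >= 3)))
    return {"end_points": end_points,
            "nodal_points": nodal_points,
            "skeleton_pixels": int(skel.sum())}
```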

Relevance: 90.00%

Abstract:

Image denoising continues to be an active research topic. Although state-of-the-art denoising methods are numerically impressive and approach theoretical limits, they suffer from visible artifacts. While they produce acceptable results for natural images, human eyes are less forgiving when viewing synthetic images. At the same time, current methods are becoming more complex, making analysis and implementation difficult. We propose image denoising as a simple physical process, which progressively reduces noise by deterministic annealing. The results of our implementation are numerically and visually excellent. We further demonstrate that our method is particularly suited to synthetic images. Finally, we offer a new perspective on image denoising using robust estimators.
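To illustrate the flavor of denoising as an annealing-like physical process with a robust estimator, here is a toy iteration in which each pixel moves toward a robustly weighted average of its neighbors while a scale parameter (temperature) is gradually lowered. It is a generic illustration under our own assumptions, not the algorithm proposed in the abstract.

```python
import numpy as np

def anneal_denoise(img, n_iter=20, t0=30.0, cooling=0.85, step=0.5):
    """Toy annealing-style denoiser: each pixel is pulled toward a robustly
    weighted mean of its 4-neighbors (Welsch-type weights), with the robust
    scale lowered every iteration.  Boundaries wrap (np.roll) for brevity."""
    u = np.asarray(img, dtype=float).copy()
    t = t0
    for _ in range(n_iter):
        total = np.zeros_like(u)
        weight = np.zeros_like(u)
        for shift in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            v = np.roll(u, shift, axis=(0, 1))
            w = np.exp(-((v - u) / t) ** 2)      # robust weight: small for outlier differences
            total += w * v
            weight += w
        u += step * (total / weight - u)          # relaxation toward the robust local mean
        t *= cooling                              # lower the temperature
    return u
```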