989 results for "segmentazione immagini mediche algoritmo Canny algoritmo watershed edge detection"


Relevance: 100.00%

Publisher:

Abstract:

Applying Artificial Intelligence (AI) algorithms to medical imaging could bring numerous improvements to the quality of patient care. However, before this potential can be realized, some limitations must still be overcome, tied to the need for large quantities of images acquired from real patients in order to train the algorithms. The main obstacle is the body of regulations protecting the privacy of sensitive data, which includes medical images. Generating large datasets of synthetic images with Deep Learning (DL) algorithms appears to be the solution to these problems.


Every year in Italy, thousands of conventional prostheses are successfully implanted in patients with shoulder arthrosis. However, this type of prosthesis has been shown not to work in patients who also suffer from large rotator cuff tears, who subsequently require a reverse shoulder prosthesis. The surgeon's choice of the most suitable prosthesis is therefore essential to spare the patient future stress and repeat operations. Over the years, surgeons have relied on protocols that do not specifically consider the bone tissue component. This thesis aims to show that medical images can be used to derive patient-specific data and plots on the bone component, in order to optimize the surgeon's choice of prosthesis as well as the pre- and intra-operative phases.


In this paper, methods are presented for automatic detection of the nipple and the pectoral muscle edge in mammograms via image processing in the Radon domain. Radon-domain information was used for the detection of straight-line candidates with high gradient. The longest straight-line candidate was used to identify the pectoral muscle edge. The nipple was detected as the convergence point of breast tissue components, indicated by the largest response in the Radon domain. Percentages of false-positive (FP) and false-negative (FN) areas were determined by comparing the areas of the pectoral muscle regions delimited manually by a radiologist and by the proposed method applied to 540 mediolateral-oblique (MLO) mammographic images. The average FP and FN were 8.99% and 9.13%, respectively. In the detection of the nipple, an average error of 7.4 mm was obtained with reference to the nipple as identified by a radiologist on 1,080 mammographic images (540 MLO and 540 craniocaudal views).
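The Radon-domain line search described above can be sketched with a discrete accumulator: each bright pixel votes for every (angle, offset) line passing through it, and a straight edge such as the pectoral muscle boundary shows up as a single dominant peak. A minimal numpy illustration (toy image and parameters, not the authors' implementation):

```python
import numpy as np

# Toy "gradient image": a single bright 45-degree line.
img = np.zeros((64, 64))
for x in range(64):
    img[x, x] = 1.0

# Discrete Radon/Hough accumulator over (angle, offset): every bright
# pixel votes for the lines through it, weighted by its intensity.
thetas = np.deg2rad(np.arange(180))
ys, xs = np.nonzero(img)
diag = int(np.ceil(np.hypot(*img.shape)))
acc = np.zeros((len(thetas), 2 * diag))
for i, t in enumerate(thetas):
    rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
    np.add.at(acc[i], rho, img[ys, xs])

# The longest straight-line candidate is the largest accumulator response.
best_i, best_r = np.unravel_index(acc.argmax(), acc.shape)
angle_deg = np.degrees(thetas[best_i])
```

For the 45-degree line all 64 votes collapse into one (angle, offset) bin at 135 degrees (the line's normal direction), which is exactly the "largest response in the Radon domain" behavior the paper exploits.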


Dental implant recognition in patients without available records is a time-consuming and not straightforward task. The traditional method is a completely user-dependent process in which the expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual contours of the implants in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. In experiment 1, a Dice coefficient of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
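The contour-validation metrics reported in experiment 1 are standard and easy to reproduce. A small numpy sketch of the Dice coefficient and the symmetric Hausdorff distance, on toy masks rather than the paper's X-ray data:

```python
import numpy as np

def dice(a, b):
    """Dice similarity between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pa, pb):
    """Symmetric Hausdorff distance between two point sets of shape (N, 2)."""
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Toy masks: two overlapping 10x10 squares, offset by one pixel.
a = np.zeros((20, 20), bool); a[5:15, 5:15] = True
b = np.zeros((20, 20), bool); b[6:16, 6:16] = True
pa, pb = np.argwhere(a), np.argwhere(b)
```

The mean absolute distance used in the paper would be the average (rather than the maximum) of the per-point nearest-neighbor distances between the two contours.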


This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. Such systems are a relevant tool for benchmarking and assessing the quality of research in underwater robotics applications. In suitable environments, such as test tanks or clear-water conditions, a stereo vision system can provide accurate positioning at low cost with flexible operation. In this work we present a two-step extrinsic camera-parameter calibration procedure that reduces setup time while providing accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the vanishing points of detected lines in the image to obtain the global pose of the stereo rig in the reference frame. The method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from an accurate 3D LIDAR model of the environment.
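The vanishing-point step can be illustrated in homogeneous coordinates, where both the line through two points and the intersection of two lines are cross products. A toy sketch (illustrative data, not the calibration pipeline itself):

```python
import numpy as np

def line_through(p, q):
    # Homogeneous line through two image points (cross product of points).
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersection(l1, l2):
    # Intersection of two homogeneous lines; normalize when finite.
    p = np.cross(l1, l2)
    return p[:2] / p[2]

# Two image lines from parallel scene edges converge at a vanishing point.
l1 = line_through((0.0, 0.0), (2.0, 1.0))   # y = x/2
l2 = line_through((0.0, 2.0), (2.0, 1.5))   # y = 2 - x/4
vp = intersection(l1, l2)
```

The direction that a vanishing point encodes is what lets the global orientation of the stereo rig be recovered in the reference frame.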


This project was funded under the Applied Research Grants Scheme administered by Enterprise Ireland. The project was a partnership between Galway-Mayo Institute of Technology and an industrial company, Tyco/Mallinckrodt Galway. It aimed to develop a semi-automatic, self-learning pattern recognition system capable of detecting defects on printed circuit boards, such as component vacancy, component misalignment, component orientation, component error, and component weld defects. The research was conducted in three directions: image acquisition, image filtering/recognition, and software development. The image acquisition work studied the process of forming and digitizing images and some fundamental aspects of human visual perception, highlighting the importance of choosing the right camera and illumination system for a given type of problem. Probably the most important step towards image recognition is image filtering: filters are used to correct and enhance images in order to prepare them for recognition. Convolution, histogram equalisation, filters based on Boolean mathematics, noise reduction, edge detection, geometrical filters, cross-correlation filters and image compression are some examples of the filters that were studied and successfully implemented in the software application. The software application developed during the research is customized to meet the requirements of the industrial partner. The application is able to analyze pictures, perform the filtering, build libraries, process images and generate log files. It incorporates most of the filters studied and, together with the illumination system and the camera, provides a fully integrated framework able to analyze defects on printed circuit boards.
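Convolution, the first filter family listed, can be sketched in a few lines: a naive 2-D convolution applied with a Sobel kernel responds strongly at intensity edges. This is a generic illustration, not the project's customized software:

```python
import numpy as np

def convolve2d(img, k):
    """Naive 'valid' 2-D convolution (kernel flipped, no padding)."""
    kh, kw = k.shape
    kf = k[::-1, ::-1]
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i+kh, j:j+kw] * kf).sum()
    return out

# Sobel kernel for horizontal gradients, a classic edge-detection filter.
sx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
img = np.zeros((8, 8)); img[:, 4:] = 1.0   # vertical step edge
gx = convolve2d(img, sx)
```

The response is zero in flat regions and peaks (magnitude 4 for a unit step) at the columns adjacent to the edge, which is exactly what makes such filters useful for locating component boundaries.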


Demosaicking is a particular case of interpolation problems where, from a scalar image in which each pixel has either the red, the green or the blue component, we want to interpolate the full-color image. State-of-the-art demosaicking algorithms perform interpolation along edges, but these edges are estimated locally. We propose a level-set-based geometric method to estimate image edges, inspired by the image in-painting literature. This method has a time complexity of O(S), where S is the number of pixels in the image, and compares favorably with the state-of-the-art algorithms both visually and in most relevant image quality measures.
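For context, a common baseline that the paper improves upon is edge-directed interpolation with locally estimated edges: at each missing green sample, interpolate along the direction of smaller gradient. A toy sketch on an assumed RGGB Bayer layout (this is the baseline idea, not the proposed level-set method):

```python
import numpy as np

def interp_green(bayer):
    """Edge-directed green interpolation at red/blue sites (RGGB mosaic).

    At each non-green site, interpolate green along the direction with the
    smaller local gradient -- a locally estimated edge direction.
    """
    H, W = bayer.shape
    g = bayer.copy()
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            if i % 2 == j % 2:          # red (even,even) or blue (odd,odd) site
                dh = abs(bayer[i, j-1] - bayer[i, j+1])
                dv = abs(bayer[i-1, j] - bayer[i+1, j])
                if dh <= dv:
                    g[i, j] = (bayer[i, j-1] + bayer[i, j+1]) / 2
                else:
                    g[i, j] = (bayer[i-1, j] + bayer[i+1, j]) / 2
    return g

# Toy mosaic: constant green plane sampled at the green sites.
bayer = np.zeros((6, 6))
bayer[::2, 1::2] = 5.0
bayer[1::2, ::2] = 5.0
green = interp_green(bayer)
```

The paper's contribution replaces the purely local `dh`/`dv` decision with a global, level-set-based edge estimate while keeping O(S) cost.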


Aim: We asked whether myocardial flow reserve (MFR) by Rb-82 cardiac PET improves the selection of patients eligible for invasive coronary angiography (ICA). Material and Methods: We enrolled 26 consecutive patients with suspected or known coronary artery disease who underwent dynamic Rb-82 PET/CT and ICA within 60 days; 4 patients who underwent revascularization or had any cardiovascular event between PET and ICA were excluded. Myocardial blood flow at rest (rMBF), at stress with adenosine (sMBF), and myocardial flow reserve (MFR = sMBF/rMBF) were estimated using the 1-compartment Lortie model (FlowQuant) for each coronary artery territory. Stenosis severity was assessed using computer-based automated edge detection (QCA). MFR was divided into 3 groups: G1: MFR<1.5, G2: 1.5≤MFR<2, and G3: MFR≥2. Stenosis severity was graded as non-significant (<50% or FFR≥0.8), intermediate (50%≤stenosis<70%) and severe (≥70%). The correlation between MFR and percentage of stenosis was assessed using a non-parametric Spearman test. Results: In G1 (44 vessels), 17 vessels (39%) had a severe stenosis, 11 (25%) an intermediate one, and 16 (36%) no significant stenosis. In G2 (13 vessels), 2 vessels (15%) presented a severe stenosis, 7 (54%) an intermediate one, and 4 (31%) no significant stenosis. In G3 (9 vessels), no vessel presented a severe stenosis, 1 (11%) an intermediate one, and 8 (89%) no significant stenosis. Of note, among 11 patients with 3-vessel low MFR<1.5 (G1), 9/11 (82%) had at least one severe stenosis and 2/11 (18%) had at least one intermediate stenosis. There was a significant inverse correlation between stenosis severity and MFR among all 66 territories analyzed (rho = -0.38, p = 0.002). Conclusion: Patients with MFR>2 could avoid ICA. Low MFR (G1, G2) on a vessel-based analysis seems to be a poor predictor of severe stenosis. Patients with 3-vessel low MFR would benefit from ICA, as they are likely to present a significant stenosis in at least one vessel.
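The per-territory quantities are simple to compute once the flow estimates exist: MFR is the stress/rest ratio, the groups follow the stated thresholds, and Spearman's rho is the Pearson correlation of ranks. A sketch with made-up numbers (not the study data):

```python
import numpy as np

# Per-territory flows (illustrative values, mL/min/g).
rMBF = np.array([0.8, 1.0, 0.9, 1.1])     # rest
sMBF = np.array([1.0, 2.1, 1.2, 3.0])     # adenosine stress
MFR = sMBF / rMBF                          # flow reserve, as in the abstract

# Grouping used in the study: G1 < 1.5, 1.5 <= G2 < 2, G3 >= 2.
groups = np.digitize(MFR, [1.5, 2.0]) + 1  # -> 1, 2 or 3

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks (no ties here)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return np.corrcoef(rx, ry)[0, 1]

stenosis = np.array([80.0, 40.0, 60.0, 10.0])  # toy % diameter stenosis
rho = spearman(MFR, stenosis)
```

In this toy data the stenosis ordering is exactly the reverse of the MFR ordering, so rho is -1; the study's real-world value was a weaker rho = -0.38.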


BACKGROUND: Direct noninvasive visualization of the coronary vessel wall may enhance risk stratification by quantifying subclinical coronary atherosclerotic plaque burden. We sought to evaluate high-resolution black-blood 3D cardiovascular magnetic resonance (CMR) imaging for in vivo visualization of the proximal coronary artery vessel wall. METHODS AND RESULTS: Twelve adult subjects, including 6 clinically healthy subjects and 6 patients with nonsignificant coronary artery disease (10% to 50% x-ray angiographic diameter reduction), were studied with the use of a commercial 1.5 Tesla CMR scanner. Free-breathing 3D coronary vessel wall imaging was performed along the major axis of the right coronary artery with isotropic spatial resolution (1.0×1.0×1.0 mm³) with the use of a black-blood spiral image acquisition. The proximal vessel wall thickness and luminal diameter were objectively determined with an automated edge detection tool. The 3D CMR vessel wall scans allowed for visualization of the contiguous proximal right coronary artery in all subjects. Both mean vessel wall thickness (1.7±0.3 versus 1.0±0.2 mm) and wall area (25.4±6.9 versus 11.5±5.2 mm²) were significantly increased in the patients compared with the healthy subjects (both P<0.01). The lumen diameter (3.6±0.7 versus 3.4±0.5 mm, P=0.47) and lumen area (8.9±3.4 versus 7.9±3.5 mm², P=0.47) were similar in both groups. CONCLUSIONS: Free-breathing 3D black-blood coronary CMR with isotropic resolution identified an increased coronary vessel wall thickness with preservation of lumen size in patients with nonsignificant coronary artery disease, consistent with a "Glagov-type" outward arterial remodeling. This novel approach has the potential to quantify subclinical disease.
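An automated edge-detection measurement of wall thickness can be illustrated on a 1-D intensity profile taken across the wall: the inner and outer borders sit at the extrema of the discrete gradient. A toy sketch (illustrative profile and pixel spacing; not the actual CMR analysis tool):

```python
import numpy as np

# Toy intensity profile across the vessel wall in black-blood imaging:
# dark lumen, bright wall, dark surroundings; 0.5 mm per sample (assumed).
mm_per_sample = 0.5
profile = np.array([0, 0, 0, 5, 9, 9, 9, 5, 0, 0, 0], float)

# Edges as the extrema of the discrete gradient, a crude stand-in for an
# automated edge-detection tool.
grad = np.diff(profile)
inner = grad.argmax()        # dark-to-bright transition (lumen -> wall)
outer = grad.argmin() + 1    # bright-to-dark transition (wall -> outside)
thickness_mm = (outer - inner) * mm_per_sample
```

In practice such profiles are sampled along many rays around the lumen and the per-ray thicknesses are averaged to yield a mean wall thickness.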


Pressure screening removes impurities from pulp, and in designing pressure screens it is important to understand the phenomena occurring inside the screen. The aim of this work was to develop an imaging-based measurement system for measuring fiber motion; the quantities of interest are the velocities of the fibers and impurities in the pulp suspension. Double exposure during imaging makes it possible to measure the velocities of fibers and debris particles. A system was developed for measuring velocities from the images, and the possibility of automating the measurement was investigated. Individual fibers were made visible in the pulp by brightening them with an optical brightening agent and illuminating them with UV light; fibers were also dyed black and imaged under visible light. Two stroboscopes were used for the double exposure, and the process was imaged with an externally triggered camera. A borescope was used to relay the image to the camera and to illuminate the target. A computer program was written for processing the captured images and measuring the velocities. The light-gathering power of the borescope used was not sufficient for the imaging, but otherwise the setup was found to work. The program was able to compute fiber and debris velocities from images taken without the borescope. Automating the acquisition of measurement data appears feasible with modifications to the imaging hardware.
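The velocity measurement from a double exposure amounts to finding the displacement between the two exposures, which cross-correlation recovers directly. A minimal FFT-based sketch with synthetic frames (illustrative pixel size and strobe interval):

```python
import numpy as np

# Two exposures of the same bright fiber, shifted by a known displacement.
f0 = np.zeros((32, 32)); f0[10:13, 8:11] = 1.0
f1 = np.zeros((32, 32)); f1[14:17, 13:16] = 1.0   # shifted by (dy, dx) = (4, 5)

# Circular cross-correlation via FFT; the peak location is the displacement.
corr = np.fft.ifft2(np.fft.fft2(f1) * np.conj(np.fft.fft2(f0))).real
dy, dx = np.unravel_index(corr.argmax(), corr.shape)

# Velocity from displacement, pixel size and strobe interval (assumed values).
mm_per_px, dt_s = 0.1, 1e-3
speed_mm_s = np.hypot(dy, dx) * mm_per_px / dt_s
```

With both exposures recorded in a single frame, the autocorrelation of that frame yields the same displacement peak (plus a trivial zero-lag peak), which is the usual double-exposure analysis.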


This thesis deals with measuring the volume of pulpwood logs by means of color machine vision. The color images were obtained from the groundwood mill of a forest-industry company located in Simpele. The mathematical theory behind the image-processing methods used, such as classification, noise removal and log segmentation, is presented in detail. The methods presented were implemented in practice, and the results obtained with the different methods were compared with each other. The image-processing algorithms were implemented with Matlab 6.0, mainly using the latest Image Processing Toolbox, version 3.0. The perspective of this work is mainly applied, since the forest industry in Finland is at a high level and there are many companies in the field that could make use of the method developed in this work.
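Once the logs are segmented, volume can be estimated by integrating cross-sections along the log, e.g. treating each pair of adjacent slices as a truncated cone. A numpy sketch of that geometric step with made-up measurements (the thesis itself used Matlab):

```python
import numpy as np

# Diameters measured from segmented cross-sections along the log (toy data).
diam_cm = np.array([20.0, 19.0, 18.5, 18.0])
step_cm = 50.0                         # spacing between measurement slices

r = diam_cm / 2.0
# Frustum (truncated cone) between consecutive slices:
#   V = h * pi/3 * (r1^2 + r1*r2 + r2^2)
vol_cm3 = (step_cm * np.pi / 3.0 *
           (r[:-1]**2 + r[:-1] * r[1:] + r[1:]**2)).sum()
vol_litres = vol_cm3 / 1000.0
```

Denser slice spacing (every segmented image column) makes the piecewise-frustum sum converge to the true solid-of-revolution volume.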


The paper summarizes the design and implementation of a quadratic edge detection filter, based on the Volterra series, for enhancing calcifications in mammograms. The proposed filter can account for much of the polynomial nonlinearity inherent in the input mammogram image and can replace conventional edge detectors such as the Laplacian and Gaussian operators. The filter gives rise to improved visualization and early detection of microcalcifications, which, if left undetected, can lead to breast cancer. The performance of the filter is analyzed and found to be superior to that of conventional spatial edge detectors.
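One well-known member of this quadratic (second-order Volterra) family is the 2-D Teager-like operator: its output is a polynomial in local pixel products, it vanishes on constant regions, and it responds strongly to edges and small bright details such as calcifications. A sketch of that generic operator (from the quadratic-filter literature, not necessarily the paper's exact filter):

```python
import numpy as np

def teager2d(img):
    """2-D Teager-like quadratic Volterra operator (interior pixels only).

    y = 3*x^2 - x_left*x_right - x_up*x_down
        - 0.5*x_nw*x_se - 0.5*x_ne*x_sw
    The coefficients sum to zero, so flat regions map to zero.
    """
    c = img[1:-1, 1:-1]
    return (3.0 * c**2
            - img[1:-1, :-2] * img[1:-1, 2:]      # horizontal product
            - img[:-2, 1:-1] * img[2:, 1:-1]      # vertical product
            - 0.5 * img[:-2, :-2] * img[2:, 2:]   # one diagonal
            - 0.5 * img[:-2, 2:] * img[2:, :-2])  # other diagonal
```

Because every term is a product of at most two pixels, this is exactly a homogeneous second-order Volterra filter, unlike the linear Laplacian or Gaussian-derivative detectors it is compared against.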


The basic concepts of digital signal processing are taught to students in engineering and science, with the focus on linear, time-invariant systems. The question of what happens when the system is governed by a quadratic or cubic equation remains unanswered in the vast majority of the signal processing literature. Light was shed on this problem when John V. Mathews and Giovanni L. Sicuranza published the book Polynomial Signal Processing, which opened up an unseen vista of polynomial systems for signal and image processing. The book presented the theory and implementation of both adaptive and non-adaptive FIR and IIR quadratic systems, which offer improved performance over conventional linear systems. The theory of quadratic systems is a pristine, largely unexplored area of research that calls for computationally intensive work. Once the area of research was selected, the next issue was the choice of software tool to carry out the work. Conventional languages like C and C++ were easily eliminated, as they are not interpreted and lack good-quality plotting libraries. MATLAB proved to be very slow, as did SCILAB and Octave. The search for a scientific computing language that was as fast as C but had a good-quality plotting library ended with Python, a distant relative of LISP, which proved ideal for scientific computing. An account of the use of Python, its scientific computing package scipy and the plotting library pylab is given in the appendix. Initially, the work focused on designing predictors that exploit the polynomial nonlinearities inherent in speech generation mechanisms. Soon the work moved into medical image processing, which offered more potential for the use of quadratic methods. The major focus in this area is on quadratic edge detection methods for retinal images and fingerprints, as well as de-noising raw MRI signals.
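The quadratic systems discussed here have the general second-order Volterra form y[n] = Σᵢ h1[i]·x[n−i] + Σᵢⱼ h2[i,j]·x[n−i]·x[n−j]. A direct-form sketch in Python, the language the thesis advocates (the kernels below are illustrative):

```python
import numpy as np

def volterra2(x, h1, h2):
    """Second-order Volterra FIR filter (direct form, zero initial state).

    y[n] = sum_i h1[i]*x[n-i] + sum_{i,j} h2[i,j]*x[n-i]*x[n-j]
    """
    M = len(h1)
    xp = np.concatenate([np.zeros(M - 1), x])   # zero-padded history
    y = np.zeros(len(x))
    for n in range(len(x)):
        w = xp[n:n + M][::-1]                   # [x[n], x[n-1], ...]
        y[n] = h1 @ w + w @ h2 @ w
    return y

# Illustrative kernels: a linear FIR part plus a 0.5*x[n]^2 quadratic term.
x = np.array([1.0, 0.0, 0.0])
h1 = np.array([1.0, 2.0])
h2 = np.array([[0.5, 0.0], [0.0, 0.0]])
y = volterra2(x, h1, h2)
```

Setting h2 to zero recovers the familiar linear, time-invariant FIR filter, which is why quadratic systems are a strict generalization of the standard coursework material.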


Texture provides one cue for identifying the physical cause of an intensity edge, such as occlusion, shadow, surface orientation or reflectance change. Marr, Julesz, and others have proposed that texture is represented by small lines or blobs, called 'textons' by Julesz [1981a], together with their attributes, such as orientation, elongation, and intensity. Psychophysical studies suggest that texture boundaries are perceived where distributions of attributes over neighborhoods of textons differ significantly. However, these studies, which deal with synthetic images, neglect to consider two important questions: How can these textons be extracted from images of natural scenes? And how, exactly, are texture boundaries then found? This thesis proposes answers to these questions by presenting an algorithm for computing blobs from natural images and a statistic for measuring the difference between two sample distributions of blob attributes. As part of the blob detection algorithm, methods for estimating image noise are presented, which are applicable to edge detection as well.
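The boundary statistic asked for in the second question can be a chi-square-style distance between histograms of a blob attribute gathered over the two neighborhoods. A sketch with synthetic orientation samples (an illustrative statistic, not necessarily the thesis's exact one):

```python
import numpy as np

def chi2_stat(a, b, bins=8, rng_=(0.0, np.pi)):
    """Chi-square-style distance between two samples of a blob attribute
    (e.g. orientation), compared via normalized histograms."""
    ha, _ = np.histogram(a, bins=bins, range=rng_)
    hb, _ = np.histogram(b, bins=bins, range=rng_)
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    denom = ha + hb
    mask = denom > 0                      # skip empty bins (avoid 0/0)
    return ((ha - hb) ** 2 / denom)[mask].sum()

rng = np.random.default_rng(0)
# Same attribute distribution on both sides: small statistic, no boundary.
same = chi2_stat(rng.uniform(0, np.pi, 500), rng.uniform(0, np.pi, 500))
# Disjoint orientation distributions: maximal statistic, clear boundary.
diff = chi2_stat(rng.uniform(0.0, 0.5, 500), rng.uniform(2.5, 3.0, 500))
```

A texture boundary would then be declared wherever the statistic between adjacent neighborhoods exceeds a threshold calibrated from the estimated image noise.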


We investigate the properties of feedforward neural networks trained with Hebbian learning algorithms. A new unsupervised algorithm is proposed which produces statistically uncorrelated outputs. The algorithm causes the weights of the network to converge to the eigenvectors of the input correlation matrix with the largest eigenvalues. The algorithm is closely related to the technique of Self-supervised Backpropagation, as well as other algorithms for unsupervised learning. Applications of the algorithm to texture processing, image coding, and stereo depth edge detection are given. We show that the algorithm can lead to the development of filters qualitatively similar to those found in primate visual cortex.
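The described behavior matches Sanger's generalized Hebbian algorithm: a Hebbian term plus a lower-triangular decorrelation term drives the weight rows to the leading eigenvectors of the input correlation, so the outputs become uncorrelated. A numpy sketch on toy Gaussian data (illustrative rates and dimensions; not necessarily the paper's exact rule):

```python
import numpy as np

rng = np.random.default_rng(1)
C = np.diag([5.0, 2.0, 0.5])                     # toy input correlation
X = rng.multivariate_normal(np.zeros(3), C, size=4000)

# Sanger's rule: dW = eta * (y x^T - tril(y y^T) W), with y = W x.
# Rows of W converge to the leading eigenvectors of C (here e1 and e2).
W = 0.1 * rng.normal(size=(2, 3))
eta = 0.005
for _ in range(3):                               # a few passes for stability
    for x in X:
        y = W @ x
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
```

The `tril` term is what distinguishes this family from plain Oja learning: each output unit is decorrelated from the ones before it, yielding an ordered set of principal components rather than a single dominant one.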