5 results for Image reconstruction

at Universidade Federal de Uberlândia


Relevance:

80.00%

Publisher:

Abstract:

The objective of this work is to use algorithms known as Boltzmann Machines to reconstruct and classify patterns such as images. These algorithms have a structure similar to that of an Artificial Neural Network, but their nodes make stochastic, probabilistic decisions. This work presents the theoretical framework of the main Artificial Neural Networks, the General Boltzmann Machine algorithm, and a variation of this algorithm known as the Restricted Boltzmann Machine. Computer simulations are performed comparing the backpropagation Artificial Neural Network with the General Boltzmann Machine and Restricted Boltzmann Machine algorithms, analyzing the execution times of each algorithm and the bit hit percentage of trained patterns that are later reconstructed. Finally, binary images with and without noise are used to train the Restricted Boltzmann Machine; these images are then reconstructed and classified according to the bit hit percentage of the reconstruction. The Boltzmann Machine algorithms were able to classify the trained patterns and showed excellent results in reconstructing them with fast runtimes, and can therefore be used in applications such as image recognition.
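To make the pattern-reconstruction idea concrete, here is a minimal Restricted Boltzmann Machine sketch in Python, trained with one step of contrastive divergence (CD-1). It is an illustration only, not the thesis code: the network sizes, learning rate, iteration count, and 6-bit toy pattern are all assumptions.

```python
# Minimal RBM sketch (illustrative assumptions, not the thesis code):
# binary visible/hidden units trained with CD-1, then used to
# reconstruct a noisy copy of a trained pattern.
import numpy as np

rng = np.random.default_rng(0)

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def _sample(self, p):
        # Stochastic node: fires with probability p.
        return (rng.random(p.shape) < p).astype(float)

    def train_step(self, v0):
        # Positive phase: hidden activations given the data.
        ph0 = self._sigmoid(v0 @ self.W + self.c)
        h0 = self._sample(ph0)
        # Negative phase: one Gibbs step (CD-1).
        pv1 = self._sigmoid(h0 @ self.W.T + self.b)
        v1 = self._sample(pv1)
        ph1 = self._sigmoid(v1 @ self.W + self.c)
        # Update: data correlations minus model correlations.
        self.W += self.lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
        self.b += self.lr * (v0 - v1)
        self.c += self.lr * (ph0 - ph1)

    def reconstruct(self, v):
        ph = self._sigmoid(v @ self.W + self.c)
        return self._sigmoid(ph @ self.W.T + self.b)

# Toy usage: memorize a 6-bit pattern, then reconstruct a noisy copy.
pattern = np.array([1., 0., 1., 1., 0., 1.])
rbm = RBM(n_visible=6, n_hidden=4)
for _ in range(500):
    rbm.train_step(pattern)
noisy = pattern.copy()
noisy[0] = 0.                                   # flip one bit
recon = (rbm.reconstruct(noisy) > 0.5).astype(float)
bit_hit = (recon == pattern).mean()             # "bit hit percentage"
print(f"bit hit percentage: {bit_hit:.0%}")
```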

Relevance:

20.00%

Publisher:

Abstract:

Lung cancer is the most common of the malignant tumors, with 1.59 million new cases worldwide in 2012. Early detection is the main factor determining the survival of patients affected by this disease. Furthermore, correct classification is important to define the most appropriate therapeutic approach, as well as to suggest the prognosis and the clinical evolution of the disease. Among the exams used to detect lung cancer, computed tomography (CT) has been the most indicated. However, CT images are naturally complex, and even expert physicians are subject to errors in detection or classification. In order to assist the detection of malignant tumors, computer-aided diagnosis systems have been developed to help reduce the number of biopsies caused by false positives. In this work, an automatic classification system for pulmonary nodules in CT images was developed using Artificial Neural Networks. Morphological, texture, and intensity attributes were extracted from lung nodules cut from tomographic images using elliptical regions of interest, which were subsequently segmented by the Otsu method. These features were selected through statistical tests that compare populations (Student's t-test and the Mann-Whitney U test), from which a ranking was derived. The selected features were then fed into backpropagation Artificial Neural Networks to compose two classifiers: one to classify nodules as malignant or benign (network 1), and another to classify two types of malignancy (network 2), forming a cascade classifier. The best networks were combined, and their performance was measured by the area under the ROC curve, where network 1 and network 2 achieved performances of 0.901 and 0.892, respectively.
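The feature-ranking step can be sketched as below. This is a hypothetical illustration, not the thesis pipeline: the data is synthetic, and the rule of choosing Student's t-test versus the Mann-Whitney U test via a Shapiro-Wilk normality check is an assumption about how one might pick between the two population tests.

```python
# Hypothetical sketch of ranking nodule attributes by how well they
# separate benign (y=0) and malignant (y=1) populations.
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu, shapiro

rng = np.random.default_rng(1)
# Synthetic stand-in: 100 nodules x 5 attributes, binary labels.
X = rng.normal(size=(100, 5))
y = rng.integers(0, 2, size=100)
X[y == 1, 0] += 1.5   # make attribute 0 genuinely discriminative

ranking = []
for j in range(X.shape[1]):
    a, b = X[y == 0, j], X[y == 1, j]
    # Assumed rule: use the t-test when both populations look normal,
    # otherwise fall back to the nonparametric Mann-Whitney U test.
    if shapiro(a).pvalue > 0.05 and shapiro(b).pvalue > 0.05:
        p = ttest_ind(a, b, equal_var=False).pvalue
    else:
        p = mannwhitneyu(a, b).pvalue
    ranking.append((p, j))

ranking.sort()  # smallest p-value = strongest class separation
print("attributes ranked by p-value:", [j for _, j in ranking])
```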

Relevance:

20.00%

Publisher:

Abstract:

Nowadays, the number of customers shopping on websites is increasing greatly, mainly due to the ease and speed of this form of consumption. Websites, unlike physical stores, can make virtually anything available to customers. In this context, Recommender Systems (RS) have become indispensable in helping consumers find products that may please or be useful to them. These systems often use Collaborative Filtering (CF) techniques, whose main underlying idea is that products are recommended to a given user based on the purchase information and past evaluations of a group of users similar to the one requesting the recommendation. One of the main challenges faced by this technique is that the user must provide some information about her preferences on products in order to get further recommendations from the system. When there are items with no ratings, or with very few ratings available, the recommender system performs poorly. This problem is known as the new-item cold-start problem. In this paper, we investigate to what extent information on visual attention can help produce more accurate recommendation models. We present a new CF strategy, called IKB-MS, that uses visual attention to characterize images and alleviate the new-item cold-start problem. In order to validate this strategy, we created a clothing image database and used three well-known visual attention extraction algorithms on these images. An extensive set of experiments shows that our approach is efficient and outperforms state-of-the-art CF RS.
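As a rough illustration of how visual features can alleviate the new-item cold-start problem (a generic sketch, not the IKB-MS algorithm itself; the descriptors, cosine similarity, and the `cold_start_score` helper are assumptions), a new unrated item can be scored by comparing its visual features against items the user has already rated:

```python
# Generic content-assisted CF sketch for new-item cold start:
# score an unrated item as a similarity-weighted average of the
# user's ratings on visually similar items.
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def cold_start_score(new_item_feat, rated_feats, ratings, k=3):
    # rated_feats: visual features of items the user rated.
    sims = np.array([cosine(new_item_feat, f) for f in rated_feats])
    top = np.argsort(sims)[-k:]            # k visually nearest rated items
    return sims[top] @ ratings[top] / (sims[top].sum() + 1e-12)

# Toy usage with made-up 4-d visual-attention descriptors.
rng = np.random.default_rng(2)
rated = rng.random((5, 4))
scores = np.array([5., 3., 4., 2., 5.])
new_item = rated[0] + 0.05 * rng.random(4)  # visually close to item 0
print(f"predicted rating: {cold_start_score(new_item, rated, scores):.2f}")
```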

Relevance:

20.00%

Publisher:

Abstract:

Image super-resolution is defined as a class of techniques that enhance the spatial resolution of images. Super-resolution methods can be subdivided into single-image and multi-image methods. This thesis focuses on developing algorithms based on mathematical theories for single-image super-resolution problems. Indeed, in order to estimate an output image, we adopt a mixed approach: we use both a dictionary of patches with sparsity constraints (typical of learning-based methods) and regularization terms (typical of reconstruction-based methods). Although the existing methods already perform well, they do not take the geometry of the data into account when regularizing the solution, clustering data samples (samples are often clustered using algorithms with the Euclidean distance as a dissimilarity metric), or learning dictionaries (they are often learned using PCA or K-SVD). Thus, state-of-the-art methods still suffer from shortcomings. In this work, we proposed three new methods to overcome these deficiencies. First, we developed SE-ASDS (a structure-tensor-based regularization term) in order to improve the sharpness of edges; SE-ASDS achieves much better results than many state-of-the-art algorithms. Then, we proposed the AGNN and GOC algorithms for determining a local subset of training samples from which a good local model can be computed for reconstructing a given input test sample, taking into account the underlying geometry of the data; AGNN and GOC outperform spectral clustering, soft clustering, and geodesic-distance-based subset selection in most settings. Next, we proposed the aSOB strategy, which takes into account the geometry of the data and the dictionary size; aSOB outperforms both the PCA and PGA methods. Finally, we combined all our methods into a single algorithm, named G2SR. Our proposed G2SR algorithm shows better visual and quantitative results when compared to state-of-the-art methods.
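For readers unfamiliar with the dictionary-plus-sparsity approach mentioned above, the sketch below shows a generic sparse-coding super-resolution step (it is not SE-ASDS, AGNN, GOC, aSOB, or G2SR; the random coupled dictionaries and patch sizes are stand-in assumptions): a low-resolution patch is sparsely coded over an LR dictionary, and the same code synthesizes the output patch from the coupled HR dictionary.

```python
# Generic sparse-coding super-resolution step (a sketch under stated
# assumptions, not the thesis pipeline).
import numpy as np
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(3)
n_atoms, lr_dim, hr_dim = 64, 9, 36   # 3x3 LR patches -> 6x6 HR patches

# Coupled dictionaries: in a real system these are learned from paired
# LR/HR training patches (e.g., via K-SVD); here they are random stand-ins.
D_lr = rng.normal(size=(lr_dim, n_atoms))
D_lr /= np.linalg.norm(D_lr, axis=0)   # unit-norm atoms
D_hr = rng.normal(size=(hr_dim, n_atoms))

lr_patch = rng.normal(size=lr_dim)     # observed LR patch (toy input)
# Sparse code of the LR patch over the LR dictionary (OMP solver).
code = orthogonal_mp(D_lr, lr_patch, n_nonzero_coefs=5)
hr_patch = D_hr @ code                 # synthesized HR patch
print("nonzero atoms used:", np.count_nonzero(code))
```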

Relevance:

20.00%

Publisher:

Abstract:

We seek, through this work, to understand the construction of pirates' images from the Golden Age of Piracy (late seventeenth to early eighteenth century) by observing the circulation of these images, which are not limited to a single field of knowledge. We take into account the importance of the book "A General History of the robberies and murders of the most notorious pirates..." written by Charles Johnson to these constructions, which are not only literary but also historiographical, given that stories of pirates and piracy gained ground in historiography from the twentieth century on. We also seek to show that this historiographical space arises in opposition to an apparent historiographical silence about these stories that lasted for about two centuries, related to a new way of writing history in the aesthetic regime, in which history arises as a science through a poetics of knowledge, on which the philosopher Jacques Rancière helps us reflect. Lastly, reflecting upon how these images of pirates circulate nowadays, we seek to understand the historicity of pirates' images within that aesthetic regime based on some scenes of the film series Pirates of the Caribbean by Disney™.