866 results for Computer Vision and Pattern Recognition
Abstract:
Background and aims: Machine learning techniques for the text mining of cancer-related clinical documents have not been sufficiently explored. Here some techniques are presented for the pre-processing of free-text breast cancer pathology reports, with the aim of facilitating the extraction of information relevant to cancer staging.
Materials and methods: The first technique was implemented using the freely available software RapidMiner to classify the reports according to their general layout: ‘semi-structured’ and ‘unstructured’. The second technique was developed using the open source language engineering framework GATE and aimed at the prediction of chunks of the report text containing information pertaining to the cancer morphology, the tumour size, its hormone receptor status and the number of positive nodes. The classifiers were trained and tested respectively on sets of 635 and 163 manually classified or annotated reports, from the Northern Ireland Cancer Registry.
Results: The best result, 99.4% accuracy – with only one semi-structured report predicted as unstructured – was produced by the layout classifier with the k-nearest-neighbour algorithm, using the binary term-occurrence word-vector type with a stopword filter and pruning. For chunk recognition, the best results were obtained with the PAUM algorithm using the same parameters in all cases, except for the prediction of chunks containing cancer morphology. For semi-structured reports the performance ranged from 0.94 to 0.97 in precision and from 0.83 to 0.92 in recall, while for unstructured reports it ranged from 0.64 to 0.91 in precision and from 0.41 to 0.68 in recall. Poor results were obtained when the classifier was trained on semi-structured reports but tested on unstructured ones.
Conclusions: These results show that it is possible and beneficial to predict the layout of reports and that the accuracy of prediction of which segments of a report may contain certain information is sensitive to the report layout and the type of information sought.
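The layout-classification step described above can be sketched as follows. This is a minimal illustration, not the authors' RapidMiner workflow: documents are represented as binary term-occurrence vectors over a fixed vocabulary and labelled with a nearest-neighbour rule; the toy "reports" are invented.

```python
def binary_vector(text, vocabulary):
    """Map a document to a binary term-occurrence vector over a fixed vocabulary."""
    tokens = set(text.lower().split())
    return [1 if term in tokens else 0 for term in vocabulary]

def knn_predict(query, training, k=1):
    """Classify `query` by majority vote among its k nearest training vectors."""
    def hamming(u, v):
        return sum(a != b for a, b in zip(u, v))
    neighbours = sorted(training, key=lambda item: hamming(item[0], query))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

# Hypothetical toy reports standing in for the pathology corpus.
docs = [
    ("tumour size : 12 mm er status : positive", "semi-structured"),
    ("nodes positive : 2 morphology : ductal", "semi-structured"),
    ("the specimen shows an invasive ductal carcinoma measuring twelve millimetres",
     "unstructured"),
]
vocab = sorted({t for text, _ in docs for t in text.lower().split()})
train = [(binary_vector(text, vocab), label) for text, label in docs]

print(knn_predict(binary_vector("er status : negative tumour size : 8 mm", vocab), train))
```

The query report, dominated by field-like tokens, lands nearest to the semi-structured examples.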
Abstract:
This paper presents recognition results based on a PCA representation and classification with SVMs and temporal coherence.
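The PCA-plus-SVM pipeline named above can be sketched as follows. The data are synthetic, scikit-learn is an assumed stand-in for the classifier, and the paper's temporal-coherence component is not reproduced:

```python
import numpy as np
from sklearn.svm import SVC

# Two well-separated synthetic classes standing in for the recognition data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 20)), rng.normal(3.0, 1.0, (50, 20))])
y = np.array([0] * 50 + [1] * 50)

# PCA via SVD of the mean-centred data; keep the top 5 principal components.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:5].T

# Classify the PCA projections with a linear SVM.
clf = SVC(kernel="linear").fit(Z, y)
print(clf.score(Z, y))
```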
Abstract:
Computer game technology produces compelling ‘immersive environments’ where our digitally-native youth play and explore. Players absorb visual, auditory and other signs and process these in real time, making rapid choices on how to move through the game-space to experience ‘meaningful play’. How can immersive environments be designed to elicit perception and understanding of signs? In this paper we explore game design and gameplay from a semiotic perspective, focusing on the creation of meaning for players as they play the game. We propose a theory of game design based on semiotics.
Abstract:
The article shows how both Petrarch and Wordsworth draw on the theories of the Church Father Augustine and on personal examples taken from his Confessions.
Abstract:
This article presents an interdisciplinary experience that brings together two areas of computer science: its didactics and its philosophy. As such, it introduces a relatively unexplored area of research, not only in Uruguay but across the whole Latin American region. Reflecting on the ontological status of computer science, on its epistemic and educational problems, and on its relationship with technology allows us to develop a critical analysis of the discipline and of its social perception as a basic science.
Abstract:
Brain activity can be monitored through electroencephalography and used as a bioelectric indicator. This article shows how a low-cost, easily accessible device can be used to develop applications based on brain-computer interfaces (BCI). The results obtained show that the MindWave device can indeed be used to acquire signals related to brain activity during various tasks and under the influence of different stimuli. The use of the wavelet transform is also proposed for conditioning the EEG signals, with the aim of applying artificial-intelligence algorithms and pattern-recognition techniques to distinguish brain responses.
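The wavelet-based conditioning proposed above can be sketched with a single-level discrete wavelet transform. The abstract does not fix a wavelet family; the Haar wavelet and the synthetic 10 Hz signal below are assumptions for illustration:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the orthonormal Haar discrete wavelet transform:
    split an even-length signal into approximation (low-frequency) and
    detail (high-frequency) coefficients."""
    x = np.asarray(signal, dtype=float)
    pairs = x.reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)
    return approx, detail

t = np.arange(256) / 256.0
eeg = np.sin(2 * np.pi * 10 * t)          # synthetic 10 Hz "alpha-band" component
approx, detail = haar_dwt(eeg)
print(len(approx), len(detail))
```

Each level halves the signal length; the resulting coefficient bands can then be fed to a pattern-recognition stage.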
Abstract:
This study aimed at determining the incidence and pattern of pneumonia in slaughtered goats at the Kumasi abattoir, Ghana. One thousand three hundred and fifty goat lungs (1,012 from Sahelian and 338 from West African Dwarf (WAD) goats) of different ages (under one year to over four years) were used in this study. The frequency of occurrence of pneumonia, the degree of consolidation as a percentage of total lung volume, and histological features were determined by standard techniques. Fifty-five (55) lungs (39 Sahelian, 16 WAD) were pneumonic (4.07% prevalence). The right lungs had a significantly higher average percentage of lung consolidation (19.11%), and the right cranial lobes were the most affected (9.37%). Among WAD goats, animals of 1-2 years were the most affected, with an average consolidation of 11.73%, while among Sahelian goats those above four years of age were the most affected, with 32.59% consolidation. Does of both breeds were more affected, and Sahelian goats showed higher consolidation than WAD goats. Histological examination revealed giant-cell, fibrinous and suppurative bronchointerstitial pneumonia, suggesting complicated viral pneumonia, which was observed to be the most important caprine pneumonia in slaughtered goats in Ghana. Transportation and pregnancy stress were the major contributory factors to the pneumonia observed; hence effective ante-mortem examination will help to minimise the slaughter of pregnant does and reduce transportation stress in Ghana.
Abstract:
The very nature of computer science, with its constant change, forces those who wish to keep up to adapt and react quickly. Large companies invest in staying up to date in order to generate revenue and stay active on the market. Universities, on the other hand, need to apply the same practice of staying up to date with industry needs in order to produce industry-ready engineers. By interviewing former students, now engineers in industry, and current university staff, this thesis aims to learn whether there is room for enhancing the education through different lecturing approaches and/or curriculum adaptation and development. To address these concerns, qualitative research was conducted, focusing on data collected through semi-structured life-world interviews. The method follows the seven stages of research interviewing introduced by Kvale and focuses on collecting and preparing relevant data for analysis. The collected data were transcribed, refined, and then analysed in the "Findings and analysis" chapter. The analysis focused on answering the three research questions: how higher education affects a Computer Science and Informatics engineer's job, how to better manage the transition from studies to working in industry, and how to develop a curriculum that supports the previous two. Unaltered quoted extracts are presented and individually analysed. To paint a fuller picture, a theme-wise analysis is presented, summarising the themes that recurred throughout the interviewing phase. The findings imply that several factors directly influence the quality of education: from the student side, mostly expectations of and dedication to the studies; from the university side, commitment to the curriculum development process.
Due to time and resource limitations this research provides findings from a narrowed scope, although it can serve as a foundation for further development, possibly as PhD research.
Abstract:
Feature selection is a central problem in machine learning and pattern recognition. On large datasets (in terms of dimension and/or number of instances), using search-based or wrapper techniques can be computationally prohibitive. Moreover, many filter methods based on relevance/redundancy assessment also take a prohibitively long time on high-dimensional datasets. In this paper, we propose efficient unsupervised and supervised feature selection/ranking filters for high-dimensional datasets. These methods use low-complexity relevance and redundancy criteria, applicable to supervised, semi-supervised, and unsupervised learning, and can act as pre-processors for computationally intensive methods, focusing their attention on smaller subsets of promising features. The experimental results, with up to 10^5 features, show the time efficiency of our methods, which achieve lower generalization error than state-of-the-art techniques while being dramatically simpler and faster.
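The structure of a relevance/redundancy filter of the kind described above can be sketched as follows. The specific criteria are illustrative assumptions, not the paper's: features are ranked by a given relevance score, then features highly correlated with an already-selected one are greedily discarded.

```python
import numpy as np

def select_features(X, relevance, max_corr=0.9):
    """Greedy relevance/redundancy filter: visit features by decreasing
    relevance, keeping a feature only if its absolute correlation with every
    previously selected feature stays below `max_corr`."""
    order = np.argsort(relevance)[::-1]          # most relevant first
    selected = []
    for j in order:
        if all(abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) < max_corr
               for s in selected):
            selected.append(int(j))
    return selected

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=200)  # feature 3 duplicates feature 0
relevance = np.array([3.0, 2.0, 1.0, 2.5])       # hypothetical relevance scores
print(select_features(X, relevance))
```

The redundant copy (feature 3) is dropped despite its high relevance score, which is the behaviour such filters rely on.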
Abstract:
In machine learning and pattern recognition tasks, the use of feature discretization techniques may have several advantages. The discretized features may hold enough information for the learning task at hand, while ignoring minor fluctuations that are irrelevant or harmful for that task. The discretized features have more compact representations that may yield both better accuracy and lower training time, as compared to the use of the original features. However, in many cases, mainly with medium and high-dimensional data, the large number of features usually implies that there is some redundancy among them. Thus, we may further apply feature selection (FS) techniques on the discrete data, keeping the most relevant features, while discarding the irrelevant and redundant ones. In this paper, we propose relevance and redundancy criteria for supervised feature selection techniques on discrete data. These criteria are applied to the bin-class histograms of the discrete features. The experimental results, on public benchmark data, show that the proposed criteria can achieve better accuracy than widely used relevance and redundancy criteria, such as mutual information and the Fisher ratio.
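A relevance criterion computed on the bin-class histogram of a discretized feature, the setting described above, can be sketched as follows. Mutual information is used here as the illustrative criterion (the paper proposes alternatives to it); the toy bins and labels are invented:

```python
import numpy as np

def mutual_information(feature_bins, labels):
    """I(feature; class) estimated from the joint bin-class histogram."""
    mi = 0.0
    for b in np.unique(feature_bins):
        for c in np.unique(labels):
            p_bc = np.mean((feature_bins == b) & (labels == c))
            p_b = np.mean(feature_bins == b)
            p_c = np.mean(labels == c)
            if p_bc > 0:
                mi += p_bc * np.log2(p_bc / (p_b * p_c))
    return mi

labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
informative = np.array([0, 0, 0, 0, 1, 1, 1, 1])    # bins track the class
uninformative = np.array([0, 1, 0, 1, 0, 1, 0, 1])  # bins independent of class
print(mutual_information(informative, labels) > mutual_information(uninformative, labels))
```

Any criterion defined on the joint counts of (bin, class) pairs plugs into the same loop, which is what makes histogram-based relevance scores cheap to compute.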
Abstract:
New technologies applied to image processing and pattern recognition have made great progress in recent decades. Their application cuts across several areas of science, including forensic ballistics. The study of evidence (cartridge cases and projectiles) found at a crime scene using image processing and analysis techniques is pertinent because, when fired, firearms imprint unique marks on the discharged cartridge cases and projectiles, making it possible to link evidence fired by the same weapon. Manually comparing evidence found at a crime scene against evidence held in a database, in terms of visual parameters, is a time-consuming approach. This work aimed to develop automatic techniques for processing and analysing images of evidence obtained with a comparison optical microscope, based on computational algorithms. These were developed using open-source libraries and tools. Four modalities were defined for acquiring images of ballistic evidence: Planar, Multifocus, Microscan and Multiscan. Processing algorithms developed specifically for this purpose were applied to the acquired images, enabling image segmentation, feature extraction and image alignment. The latter aims to correlate the evidence and obtain a quantitative value (metric) indicating how similar the pieces of evidence are. Based on the work carried out and the results obtained, microscopy image-acquisition protocols were defined that allow the regions of interest to be imaged, along with algorithms that automate the subsequent alignment of evidence images, an advantage over the manual comparison process.
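The final comparison step described above, scoring two aligned evidence images with a similarity metric, can be sketched as follows. Normalised cross-correlation is used here purely as an illustrative metric, and random arrays stand in for the microscope images; the work's actual acquisition and alignment pipeline is more elaborate.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation between two equally sized images:
    1.0 for identical images, near 0 for unrelated ones."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

rng = np.random.default_rng(2)
reference = rng.normal(size=(64, 64))                  # stand-in for a breech-face image
same_source = reference + 0.1 * rng.normal(size=(64, 64))  # same "weapon", sensor noise
different = rng.normal(size=(64, 64))                  # unrelated evidence

print(ncc(reference, same_source) > ncc(reference, different))
```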
Abstract:
Epipolar geometry is a key concept in computer vision, and estimating the fundamental matrix is the only way to compute it. This article surveys several methods of fundamental matrix estimation, classified into linear methods, iterative methods and robust methods. All of these methods have been implemented and their accuracy analysed using real images. A summary, accompanied by experimental results, is given.
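The linear methods surveyed above are typified by the eight-point algorithm: stack one epipolar constraint x2ᵀ F x1 = 0 per correspondence, solve the homogeneous system by SVD, and enforce the rank-2 constraint. The sketch below uses invented cameras and noise-free synthetic points, and omits the coordinate normalisation used in practice:

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate F from >= 8 correspondences given as (N, 2) point arrays."""
    # One row of the design matrix per constraint x2^T F x1 = 0.
    A = np.array([[u2 * u1, u2 * v1, u2, v2 * u1, v2 * v1, v2, u1, v1, 1.0]
                  for (u1, v1), (u2, v2) in zip(x1, x2)])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)          # null vector of A, reshaped to 3x3
    U, S, Vt = np.linalg.svd(F)       # enforce the rank-2 constraint
    return U @ np.diag([S[0], S[1], 0.0]) @ Vt

# Synthetic correspondences from two hypothetical camera matrices.
rng = np.random.default_rng(3)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[1.0], [0.0], [0.0]])])  # pure translation
X = np.hstack([rng.uniform(-1, 1, (12, 2)), rng.uniform(4, 8, (12, 1)),
               np.ones((12, 1))])
x1 = (X @ P1.T)[:, :2] / (X @ P1.T)[:, 2:]
x2 = (X @ P2.T)[:, :2] / (X @ P2.T)[:, 2:]

F = eight_point(x1, x2)
residuals = [abs(np.r_[p2, 1.0] @ F @ np.r_[p1, 1.0]) for p1, p2 in zip(x1, x2)]
print(max(residuals) < 1e-6)
```

On noise-free data the epipolar residuals vanish to machine precision; the iterative and robust methods in the survey exist to cope with noise and outliers that break this linear solution.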
Abstract:
This project aims to use well-known methods such as Viola-Jones (detection) and EigenFaces (recognition) to detect and recognise faces in video images. To accomplish this task, a training data set is required for each of the methods (a database of images and manual annotations). From there, the application must be able to detect faces in new images and recognise them (identify whose face it is).
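The EigenFaces recognition step named above can be sketched as follows: training images are projected onto the principal components of the training set (the "eigenfaces"), and a new face is identified by its nearest neighbour in that subspace. Random vectors stand in for face images, the names are invented, and the Viola-Jones detection stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
faces = rng.normal(size=(6, 64))          # six flattened "face images"
names = ["anna", "anna", "berta", "berta", "carla", "carla"]

# Eigenfaces: principal components of the mean-centred training images.
mean = faces.mean(axis=0)
_, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
eigenfaces = Vt[:5]                        # the five non-trivial components
train_proj = (faces - mean) @ eigenfaces.T

# Recognise a noisy view of the third training face by nearest neighbour.
query = faces[2] + 0.05 * rng.normal(size=64)
q_proj = (query - mean) @ eigenfaces.T
nearest = int(np.argmin(np.linalg.norm(train_proj - q_proj, axis=1)))
print(names[nearest])
```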