983 results for Automatic term extraction
Abstract:
Several lines of research on road extraction have been carried out over the last six years by the Photogrammetry and Computer Vision Research Group (GP-F&VC - Grupo de Pesquisa em Fotogrametria e Visão Computacional). Several semi-automatic road extraction methodologies have been developed, including sequential and optimization techniques. The GP-F&VC has also been developing fully automatic methodologies for road extraction. This paper presents an overview of the GP-F&VC research on road extraction from digital images, along with examples of results obtained by the developed methodologies.
Abstract:
The purpose of this paper is to introduce a methodology for semi-automatic road extraction from aerial digital image pairs using dynamic programming and epipolar geometry. The method uses both images, from which each road feature pair is extracted. The operator identifies the corresponding road features and selects sparse seed points along them. After all road pairs have been extracted, epipolar geometry is applied to establish the automatic point-to-point correspondence between each pair of corresponding features. Finally, each corresponding road pair is georeferenced by photogrammetric intersection. Experiments were carried out with rural aerial images. The results led to the conclusion that the methodology is robust and efficient, even in the presence of shadows of trees and buildings or other irregularities.
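As an illustration of the dynamic-programming step, the sketch below traces a minimum-cost path between two operator-selected seed points over a cost map derived from image brightness. It is a generic shortest-path sketch under the assumption that roads appear bright; the cost function, neighbourhood and function names are hypothetical and not taken from the paper, which additionally handles the stereo pair and the photogrammetric intersection omitted here.

```python
import heapq
import numpy as np

def road_cost(image):
    """Cost map: roads are assumed bright, so dark pixels are expensive.
    `image` is a float array in [0, 1]; this cost is an illustrative choice."""
    return 1.0 + (1.0 - image)

def min_cost_path(cost, start, goal):
    """Dijkstra-style dynamic programming over the pixel grid, returning the
    list of (row, col) pixels joining two operator-selected seed points."""
    rows, cols = cost.shape
    dist = np.full(cost.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1),
                       (-1, -1), (-1, 1), (1, -1), (1, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Usage: a synthetic image with a bright diagonal "road".
img = np.zeros((64, 64))
np.fill_diagonal(img, 1.0)
path = min_cost_path(road_cost(img), (0, 0), (63, 63))
```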
Abstract:
This paper proposes a methodology for edge detection in digital images using the Canny detector associated with a priori focusing of the edge structure by nonlinear anisotropic diffusion via a partial differential equation (PDE). This strategy aims at minimizing the effect of the well-known duality of the Canny detector, under which it is not possible to simultaneously improve insensitivity to image noise and the localization precision of detected edges. The process of anisotropic diffusion via the PDE is used to focus the edge structure a priori, owing to its notable ability to smooth the image selectively, leaving homogeneous regions strongly smoothed while largely preserving the physical edges, i.e., those actually related to objects present in the image. The solution to the mentioned duality consists in applying the Canny detector at a fine Gaussian scale, but only along the edge regions focused by the anisotropic diffusion process. The results have shown that the method is appropriate for applications involving automatic feature extraction, since it allowed high-precision localization of thinned edges, which are usually related to objects present in the image.
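A minimal sketch of the idea, assuming a grayscale image with values in [0, 1]: Perona-Malik anisotropic diffusion highlights a coarse edge band, and a fine-scale Canny detector (here scikit-image's canny) is kept only inside that band. The parameter values and the gradient-quantile band are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np
from skimage import feature, filters

def perona_malik(image, n_iter=30, kappa=0.05, gamma=0.2):
    """Nonlinear anisotropic diffusion (Perona-Malik): smooths homogeneous
    regions while largely preserving strong edges. Parameters are illustrative;
    `image` is assumed to be a float array in [0, 1]."""
    u = image.astype(float)

    def g(d):  # edge-stopping function: damps diffusion across strong edges
        return np.exp(-(d / kappa) ** 2)

    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(u, 1, axis=0) - u
        ds = np.roll(u, -1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + gamma * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

def focused_canny(image, fine_sigma=1.0, band_quantile=0.90):
    """Fine-scale Canny restricted to the edge band highlighted by the
    diffused image: a sketch of combining PDE focusing with precise
    localization."""
    diffused = perona_malik(image)
    grad = filters.sobel(diffused)                    # gradient magnitude
    band = grad > np.quantile(grad, band_quantile)    # coarse edge regions
    edges = feature.canny(image, sigma=fine_sigma)    # fine-scale Canny
    return edges & band                               # keep edges in the band
```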
Abstract:
The use of physical characteristics for human identification is known as biometrics. Among the many biometric traits available, the fingerprint is the most widely used. Fingerprint identification is based on impression patterns, such as the patterns of ridges and minutiae, which are first- and second-level characteristics, respectively. Current identification systems use these two levels of fingerprint features due to the low cost of the sensors. However, recent advances in sensor technology have made it possible to use third-level features present within the ridges, such as perspiration pores. Recent studies show that the use of third-level features can increase security and fraud protection in biometric systems, since they are difficult to reproduce. In addition, recent research has also focused on multibiometric recognition due to its many advantages. The goal of this research project was to apply fusion techniques to fingerprint recognition in order to combine minutia-, ridge- and pore-based methods and thus provide more robust biometric recognition systems, and also to develop an automated fingerprint identification system using these three recognition methods. We evaluated isotropic-based and adaptive-based automatic pore extraction methods, as well as the fusion of the pore-based method with the identification methods based on minutiae and ridges. The experiments were performed on the public PolyU HRF database and showed a reduction of approximately 16% in the EER compared to the best results obtained by the methods individually.
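A rough sketch of score-level fusion of three matchers and of measuring an equal error rate (EER). The min-max normalization, the weighted sum and the weights are assumptions made here for illustration; the project's actual matchers and fusion rules are not reproduced.

```python
import numpy as np

def min_max_normalize(scores):
    """Map one matcher's scores to [0, 1] before fusing (illustrative choice)."""
    scores = np.asarray(scores, dtype=float)
    return (scores - scores.min()) / (scores.max() - scores.min())

def fuse_scores(minutiae, ridges, pores, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum score-level fusion of the three matchers.
    The weights are hypothetical, not values reported by the project."""
    parts = [min_max_normalize(s) for s in (minutiae, ridges, pores)]
    return sum(w * p for w, p in zip(weights, parts))

def equal_error_rate(genuine, impostor):
    """EER: the operating point where the false accept rate (impostor pairs
    accepted) meets the false reject rate (genuine pairs rejected)."""
    thresholds = np.linspace(0.0, 1.0, 1001)
    far = np.array([np.mean(impostor >= t) for t in thresholds])
    frr = np.array([np.mean(genuine < t) for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

# Toy usage: each matcher gives one score per comparison; the first 500
# comparisons are genuine (mated), the last 500 are impostor comparisons.
rng = np.random.default_rng(0)
minu = np.concatenate([rng.normal(0.8, 0.1, 500), rng.normal(0.4, 0.1, 500)])
ridg = np.concatenate([rng.normal(0.7, 0.1, 500), rng.normal(0.3, 0.1, 500)])
pore = np.concatenate([rng.normal(0.75, 0.1, 500), rng.normal(0.35, 0.1, 500)])
fused = fuse_scores(minu, ridg, pore)
print(f"EER = {equal_error_rate(fused[:500], fused[500:]):.3f}")
```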
Abstract:
The thesis analyses relationships between ecological and social systems in the context of coastal ecosystems. It examines human impacts from resource extraction and addresses the management and governance behind resource exploitation. The main premises are that a lack of ecological knowledge leads to poor ecosystem management and that the dichotomy between social and natural systems is an artificial one. The thesis illustrates the importance of basing resource management on the ecological conditions of the resource and its ecosystem. It also demonstrates the necessity of accounting for the human dimension in ecosystem management and the challenges of organising human actions for the sustainable use of ecosystem services in the face of economic incentives that push users towards short-term extraction. Many Caribbean coral reefs have undergone a shift from coral to macroalgal domination. An experiment on Glovers Reef Atoll in Belize manually cleared patch reefs in a no-take zone and a fished zone (Papers I and II). The study hypothesised that overfishing has reduced the herbivorous fish populations that control macroalgal growth. Overall, management had no significant effect on fish abundance, and the impacts of the algal reduction were short-lived. This illustrated that the benefits of setting aside marine reserves in impacted environments should not be taken for granted. Papers III and IV studied the development of the lobster and conch fisheries in Belize and the shrimp farming industry in Thailand, respectively. These studies found that environmental feedback can be masked through sequential exploitation, giving the impression of resource abundance. In both cases inadequate property rights contributed to this unsustainable resource use. The final paper (V) compared the responses to changes in the resource by the lobster fisheries in Belize and Maine in terms of institutions, organisations and their role in management. In contrast to Maine's, the Belize system seems to lack social mechanisms for responding effectively to environmental feedback. The results illustrate the importance of organisational and institutional diversity that incorporates ecological knowledge, responds to ecosystem feedback and provides a social context for learning from and adapting to change.
Abstract:
This study is devoted to the elaboration of juridical and administrative terminology in the Ladin language, specifically in the Ladin idiom spoken in Val Badia. The necessity of this study is closely connected to the fact that in South Tyrol the Ladin language is not merely safeguarded: the drafting of administrative and normative texts in Ladin is guaranteed by law. This means that a unified terminology is needed in order to support translators and editors of specialised texts. The starting points of this study are, on the one hand, this need for a unified terminology and, on the other hand, the translation work done so far by the employees of the public administration working in Ladin. In order to document their efforts, a corpus of digitised administrative and normative documents was built. The first two chapters focus on the state of the art of terminology and corpus linguistics projects for lesser-used languages; this information was collected with the help of institutes, universities and researchers dealing with lesser-used languages. The third chapter focuses on the development of administrative language in Ladin, and the fourth chapter on the creation of the trilingual Italian-German-Ladin corpus of administrative and normative documents. The last chapter deals with the methodologies applied to elaborate the terminology entries in Ladin through the use of the trilingual corpus. Starting from the terminology entry, all steps are described: term extraction, the extraction of equivalents, contexts and definitions, and the elaboration of translation proposals where no equivalents were found. Finally, the problems involved in elaborating terminology in Ladin are illustrated.
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems both in Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success both in science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations such as Google and Facebook for solving face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain more insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform CNN.
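For the supervised side of the comparison, a deliberately small CNN classifier might look like the PyTorch sketch below. The input size, layer widths and the single training step are illustrative assumptions, not the dissertation's architecture; the HTM side depends on Numenta's framework and is not sketched.

```python
import torch
from torch import nn

class SmallCNN(nn.Module):
    """A small convolutional network for object recognition, standing in
    for the supervised (CNN) side of the comparison."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                              # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(64 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One supervised training step on a random batch, just to show the loop shape.
model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(8, 3, 32, 32)      # placeholder image batch
labels = torch.randint(0, 10, (8,))     # placeholder class labels
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```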
Abstract:
Cognitive rehabilitation aims to remediate or alleviate the cognitive deficits that appear after an episode of acquired brain injury (ABI). The purpose of this work is to describe the telerehabilitation platform called Guttmann Neuropersonal Trainer (GNPT), which provides new strategies for cognitive rehabilitation, improves efficiency and access to treatment, and increases knowledge generation from the process. A cognitive rehabilitation process was modeled to design and develop the system, which allows neuropsychologists to configure and schedule rehabilitation sessions consisting of sets of personalized computerized cognitive exercises grounded in neuroscience and plasticity principles. It provides remote continuous monitoring of the patient's performance through an asynchronous communication strategy. An automatic knowledge extraction method has been used to implement a decision support system, improving treatment customization. GNPT has been deployed in 27 rehabilitation centers and in 83 patients' homes, facilitating access to treatment. In total, 1660 patients have been treated. Usability and cost analysis methodologies have been applied to measure efficiency in real clinical environments. The usability evaluation reveals a system usability score higher than 70 for all target users. The cost-efficiency study shows a cost ratio of 1:20 compared to face-to-face rehabilitation. GNPT enables brain-damaged patients to continue and further extend rehabilitation beyond the hospital, improving the efficiency of the rehabilitation process. It allows customized therapeutic plans and provides information for the further development of clinical practice guidelines.
Abstract:
Humans have a great ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures and associating them with high-level meanings. In this way, a semantic description of the scene is obtained. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behavior, but they do not allow unique identification of a person. The computer vision field aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information extraction from images in the form of soft biometrics. The problem is approached in two ways: with unsupervised and with supervised learning methods. The first seeks to group images by automatically learning a feature extraction, using convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes. Here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, when tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens possibilities for applications in diverse areas, such as content-based image and video search and automatic video surveillance, reducing the human effort required for manual annotation and monitoring.
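A very rough sketch of the unsupervised route: convolutional features feed a k-means clustering so that images are grouped (e.g. face versus non-face) without labels. The untrained filters, feature size and cluster count are assumptions made here for illustration; the evolutionary search over the feature extractor described in the thesis is omitted.

```python
import torch
from torch import nn
from sklearn.cluster import KMeans

# An (untrained) convolutional feature extractor followed by k-means: a rough
# stand-in for the unsupervised pipeline described in the abstract.
extractor = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def cluster_images(batch, n_clusters=2):
    """`batch` is an (N, 3, H, W) float tensor; returns one cluster id per
    image (e.g. face vs. non-face) without using any labels."""
    with torch.no_grad():
        feats = extractor(batch).numpy()
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

cluster_ids = cluster_images(torch.randn(20, 3, 64, 64))  # placeholder batch
```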
Abstract:
The extraction of relevant terms from texts is an extensively researched task in Text Mining. Relevant terms have been applied in areas such as Information Retrieval and document clustering and classification. However, relevance has a rather fuzzy nature, since the classification of some terms as relevant or not relevant is not consensual. For instance, while words such as "president" and "republic" are generally considered relevant by human evaluators, and words like "the" and "or" are not, terms such as "read" and "finish" gather no consensus about their semantics and informativeness. Concepts, on the other hand, have a less fuzzy nature. Therefore, instead of deciding on the relevance of a term during the extraction phase, as most extractors do, I propose to first extract from texts what I have called generic concepts (all concepts) and postpone the decision about relevance to downstream applications, according to their needs. For instance, a keyword extractor may assume that the most relevant keywords are the most frequent concepts in the documents. Moreover, most statistical extractors are incapable of extracting single-word and multi-word expressions using the same methodology. These factors led to the development of the ConceptExtractor, a statistical and language-independent methodology which is explained in Part I of this thesis. In Part II, I show that the automatic extraction of concepts has great applicability. For instance, for the extraction of keywords from documents, using the Tf-Idf metric only on concepts yields better results than using Tf-Idf without concepts, especially for multi-word expressions. In addition, since concepts can be semantically related to other concepts, this allows us to build implicit document descriptors. These applications led to published work. Finally, I present some work that, although not yet published, is briefly discussed in this document.
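The keyword-extraction idea, scoring only previously extracted concepts with Tf-Idf, can be sketched with scikit-learn as below. The concept list, documents and parameters are toy assumptions; the thesis obtains the concepts with its own ConceptExtractor, which is not reproduced here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def keywords_from_concepts(documents, concepts, top_k=5):
    """Score only the previously extracted concepts (single- and multi-word)
    with Tf-Idf and keep the top-ranked ones per document."""
    vectorizer = TfidfVectorizer(vocabulary=concepts, ngram_range=(1, 3),
                                 lowercase=True)
    tfidf = vectorizer.fit_transform(documents)
    terms = vectorizer.get_feature_names_out()
    results = []
    for row in tfidf:                       # one sparse row per document
        scores = row.toarray().ravel()
        top = scores.argsort()[::-1][:top_k]
        results.append([terms[i] for i in top if scores[i] > 0])
    return results

# Toy usage with a hypothetical concept list.
docs = ["the president of the republic read the report",
        "the republic elected a new president"]
print(keywords_from_concepts(docs, ["president", "republic", "new president"]))
```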
Abstract:
Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers, author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated using a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of the algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a synonym-based approach. These methods were analysed to evaluate performance and quality of results and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, followed by the Synonym approach. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters' News Corpus. The findings show that authors of Reuters' news articles provide good keyphrases, but that more often than not they do not provide any keyphrases at all.
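Of the compared methods, the C-Value is the least self-explanatory; below is a compact sketch of the standard formulation by Frantzi and Ananiadou, with hypothetical toy candidate frequencies. The paper's exact implementation may differ.

```python
import math
from collections import Counter

def c_value(candidate_freqs):
    """C-Value for multi-word term candidates: log2 of the candidate length
    times its frequency, discounted when the candidate is nested inside
    longer candidates. `candidate_freqs` maps phrases to corpus frequencies."""
    scores = {}
    for cand, freq in candidate_freqs.items():
        length = len(cand.split())
        # Frequencies of longer candidates in which this candidate is nested.
        nesting = [f for other, f in candidate_freqs.items()
                   if other != cand and f" {cand} " in f" {other} "]
        if nesting:
            freq = freq - sum(nesting) / len(nesting)
        # The classic C-Value targets multi-word terms (log2(1) = 0 otherwise).
        scores[cand] = math.log2(length) * freq if length > 1 else 0.0
    return scores

# Toy candidate frequencies (hypothetical).
freqs = Counter({"term extraction": 12, "automatic term extraction": 5,
                 "extraction": 30})
print(sorted(c_value(freqs).items(), key=lambda kv: -kv[1]))
```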
Abstract:
The objective of this study was to evaluate the long-term influence of xenogenic grafts on bone crestal height and radiographic density following the extraction of teeth. The right and left lower third molars of 22 patients were surgically extracted, and one randomly chosen socket was filled with a xenogenic graft (Gent-Tech). The contralateral socket was left to heal naturally, serving as a paired control. Digital intraoral radiographs were taken at surgery and 2, 6, and 24 months afterwards to evaluate bone density (BD) and the distance from the alveolar bone crest to the cementoenamel junction. The data obtained were subjected to two-way analysis of variance and Tukey's test (alpha = 0.05). The significant decrease in the cementoenamel junction distance observed for both groups was limited to the first 6 months. BD values increased significantly in the first 6 months, with no alterations observed up to 24 months for either group. BD was higher for the experimental group at all time points (p < 0.05). Socket grafting with the xenogenic materials tested did not change bone crestal height or bone radiographic density in the long term.
Abstract:
Introduction: The objective of this study was to cephalometrically compare the stability of complete Class II malocclusion treatment with 2 or 4 premolar extractions after a mean period of 9.35 years. Methods: A sample of 57 records from patients with complete Class II malocclusion was selected and divided into 2 groups. Group 1 consisted of 30 patients with an initial mean age of 12.87 years treated with extraction of 2 maxillary premolars. Group 2 consisted of 27 patients with an initial mean age of 13.72 years treated with extraction of 4 premolars. T tests were used to compare the groups' initial cephalometric characteristics and posttreatment changes. Pearson correlation coefficients were calculated to determine the correlation between treatment and posttreatment dental-relationship changes. Results: During the posttreatment period, both groups had similar behavior, except that group 1 had a statistically greater maxillary forward displacement and a greater increase in the apical-base relationship than group 2. On the other hand, group 2 had a statistically greater molar-relationship relapse toward Class II. There were significant positive correlations between the amounts of treatment and posttreatment dentoalveolar-relationship changes. Conclusions: Treatment of complete Class II malocclusions with 2 maxillary premolar extractions or 4 premolar extractions had similar long-term posttreatment stability. (Am J Orthod Dentofacial Orthop 2009;136:154.e1-154.e10)