989 results for Automatic term extraction


Relevance:

80.00%

Publisher:

Abstract:

The use of physical characteristics for human identification is known as biometrics. Among the many biometric traits available, the fingerprint is the most widely used. Fingerprint identification is based on impression patterns such as ridges and minutiae, which are first- and second-level characteristics respectively. Current identification systems use these two levels of fingerprint features because of the low cost of the sensors. However, recent advances in sensor technology have made it possible to use third-level features present within the ridges, such as perspiration pores. Recent studies show that third-level features can increase security and fraud protection in biometric systems, since they are difficult to reproduce. In addition, recent research has also focused on multibiometric recognition because of its many advantages. The goal of this research project was to apply fusion techniques to fingerprint recognition in order to combine minutia-, ridge- and pore-based methods and thus provide more robust biometric recognition systems, and also to develop an automated fingerprint identification system using these three recognition methods. We evaluated isotropic and adaptive automatic pore extraction methods, as well as the fusion of the pore-based method with the identification methods based on minutiae and ridges. The experiments were performed on the public PolyUHRF database and showed a reduction of approximately 16% in the EER compared to the best results obtained by the individual methods.
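The abstract does not specify which fusion rule was applied; a common choice for combining heterogeneous matcher scores is min-max normalization followed by a weighted sum. A minimal sketch under that assumption (the weights and function names are illustrative, not taken from the thesis):

```python
def min_max_normalize(scores):
    """Map one matcher's raw scores onto [0, 1] so matchers become comparable."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(minutia, ridge, pore, weights=(0.4, 0.3, 0.3)):
    """Weighted-sum fusion of three matchers' normalized scores per candidate."""
    normalized = [min_max_normalize(s) for s in (minutia, ridge, pore)]
    return [sum(w * m[i] for w, m in zip(weights, normalized))
            for i in range(len(minutia))]
```

Score-level fusion of this kind needs no access to the matchers' internals, which is why it is a popular way to combine minutia-, ridge- and pore-based pipelines.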

Relevance:

80.00%

Publisher:

Abstract:

The thesis analyses relationships between ecological and social systems in the context of coastal ecosystems. It examines human impacts from resource extraction and addresses the management and governance behind resource exploitation. The main premises are that a lack of ecological knowledge leads to poor ecosystem management and that the dichotomy between social and natural systems is an artificial one. The thesis illustrates the importance of basing resource management on the ecological conditions of the resource and its ecosystem. It also demonstrates the necessity of accounting for the human dimension in ecosystem management and the challenges of organising human actions for sustainable use of ecosystem services in the face of economic incentives that push users towards short-term extraction. Many Caribbean coral reefs have undergone a shift from coral to macroalgal domination. In an experiment on Glovers Reef Atoll in Belize, macroalgae were manually cleared from patch reefs in a no-take zone and a fished zone (Papers I and II). The study hypothesised that overfishing has reduced the herbivorous fish populations that control macroalgal growth. Overall, management had no significant effect on fish abundance, and the impacts of the algal reduction were short-lived. This illustrated that the benefits of setting aside marine reserves in impacted environments should not be taken for granted. Papers III and IV studied the development of the lobster and conch fisheries in Belize, and the shrimp farming industry in Thailand, respectively. These studies found that environmental feedback can be masked through sequential exploitation, giving the impression of resource abundance. In both cases inadequate property rights contributed to this unsustainable resource use. The final paper (V) compared the responses of the lobster fisheries in Belize and Maine to changes in the resource, in terms of institutions, organisations and their role in management.
In contrast to Maine's, the Belize system seems to lack social mechanisms for responding effectively to environmental feedback. The results illustrate the importance of organisational and institutional diversity that incorporates ecological knowledge, responds to ecosystem feedback and provides a social context for learning from and adapting to change.

Relevance:

80.00%

Publisher:

Abstract:

This study concerns the elaboration of juridical and administrative terminology in the Ladin language, specifically in the Ladin idiom spoken in Val Badia. The need for this study is closely connected to the fact that in South Tyrol the Ladin language is not merely safeguarded: the drafting of administrative and normative texts in Ladin is guaranteed by law. This means that a unified terminology is needed to support translators and editors of specialised texts. The starting points of this study are, on the one hand, the need for a unified terminology and, on the other, the translation work done so far by employees of the public administration working in Ladin. To document their efforts, a corpus of digitalized administrative and normative documents was built. The first two chapters focus on the state of the art of terminology and corpus linguistics projects for lesser-used languages; the information was collected with the help of institutes, universities and researchers dealing with such languages. The third chapter traces the development of administrative language in Ladin, and the fourth chapter describes the creation of the trilingual Italian-German-Ladin corpus of administrative and normative documents. The last chapter deals with the methodologies applied to elaborate the terminology entries in Ladin through the use of the trilingual corpus. Starting from the terminology entry, all steps are described: term extraction, the extraction of equivalents, contexts and definitions, and the elaboration of translation proposals where no equivalents were found. Finally, the problems involved in elaborating terminology in Ladin are illustrated.
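The term extraction step over such a corpus can be illustrated with a toy candidate extractor: recurrent word n-grams are collected as term candidates, which a terminologist would then validate and align with the German and Italian equivalents. This is only a schematic sketch, not the tool actually used in the study:

```python
from collections import Counter

def candidate_terms(sentences, max_len=3, min_freq=2):
    """Collect word n-grams (1..max_len words) that recur at least min_freq times."""
    counts = Counter()
    for sent in sentences:
        words = sent.lower().split()
        for n in range(1, max_len + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return {term: c for term, c in counts.items() if c >= min_freq}
```

Frequency thresholds of this kind over-generate, which is why real workflows pair them with linguistic filters and manual validation.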

Relevance:

80.00%

Publisher:

Abstract:

In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches and whether they can only work in the context of High Performance Computing with vast amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". The dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view within the broad umbrella of deep learning, and are well suited to understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed at large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems.
HTM, on the other hand, is an emerging paradigm: a new, mainly unsupervised method that is more biologically inspired. It tries to draw insights from the computational neuroscience community in order to incorporate concepts such as time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis sets out to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
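The building block a CNN stacks into layers is a small learned convolution. A dependency-free sketch of the valid-mode case over nested lists (like most deep learning libraries, it actually computes cross-correlation):

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image and
    sum the elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

In a trained CNN the kernel entries are learned weights; stacking such maps with nonlinearities and pooling yields the automatic feature extraction the abstract refers to.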

Relevance:

80.00%

Publisher:

Abstract:

Cognitive rehabilitation aims to remediate or alleviate the cognitive deficits that appear after an episode of acquired brain injury (ABI). The purpose of this work is to describe the telerehabilitation platform Guttmann Neuropersonal Trainer (GNPT), which provides new strategies for cognitive rehabilitation, improving efficiency and access to treatment, and increasing the knowledge generated by the process. A cognitive rehabilitation process was modeled to design and develop the system, which allows neuropsychologists to configure and schedule rehabilitation sessions consisting of sets of personalized computerized cognitive exercises grounded in neuroscience and plasticity principles. It provides continuous remote monitoring of the patient's performance through an asynchronous communication strategy. An automatic knowledge extraction method was used to implement a decision support system, improving treatment customization. GNPT has been implemented in 27 rehabilitation centers and in 83 patients' homes, facilitating access to treatment; in total, 1660 patients have been treated. Usability and cost analysis methodologies were applied to measure efficiency in real clinical environments. The usability evaluation reveals a system usability score higher than 70 for all target users, and the cost efficiency study shows a cost ratio of 1:20 compared to face-to-face rehabilitation. GNPT enables brain-damaged patients to continue and further extend rehabilitation beyond the hospital, improving the efficiency of the rehabilitation process. It allows customized therapeutic plans and provides information for the further development of clinical practice guidelines.

Relevance:

80.00%

Publisher:

Abstract:

Humans have a great ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. The field of computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that allow high-level information to be extracted from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and supervised learning methods. The first seeks to group images via automatic feature extraction learning, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes. Here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, when tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothes and 90% for lower-body clothes.
The results of these case studies show that the proposed methods are promising, allowing automatic high-level annotation of images. This opens possibilities for the development of applications in diverse areas, such as content-based image and video retrieval and automatic video surveillance, reducing the human effort involved in manual annotation and monitoring.
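The unsupervised branch groups images by clustering their learned feature vectors. The abstract does not name the clustering algorithm; k-means is the standard baseline and easy to sketch (the deterministic initialisation from the first k points is for illustration only):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: alternate nearest-centroid assignment and centroid update."""
    centroids = [list(p) for p in points[:k]]  # deterministic init for the sketch
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        centroids = [[sum(vals) / len(cl) for vals in zip(*cl)] if cl else centroids[j]
                     for j, cl in enumerate(clusters)]
    return centroids, clusters
```

In the thesis setting, `points` would be feature vectors produced by the convolutional/evolutionary feature extraction stage rather than raw pixels.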

Relevance:

50.00%

Publisher:

Abstract:

Doctoral thesis, Informatics (Informatics Engineering), Universidade de Lisboa, Faculdade de Ciências, 2014

Relevance:

50.00%

Publisher:

Abstract:

Keyphrases are added to documents to help identify the areas of interest they contain. However, in a significant proportion of papers the author-selected keyphrases are not appropriate for the document they accompany: for instance, they can be classificatory rather than explanatory, or they are not updated when the focus of the paper changes. As such, automated methods for improving the use of keyphrases are needed, and various methods have been published. However, each method was evaluated on a different corpus, typically one relevant to the field of study of the method's authors. This not only makes it difficult to incorporate the useful elements of these algorithms in future work, but also makes comparing the results of each method inefficient and ineffective. This paper describes the work undertaken to compare five methods across a common baseline of corpora. The methods chosen were Term Frequency, Inverse Document Frequency, the C-Value, the NC-Value, and a Synonym-based approach. These methods were analysed to evaluate performance and quality of results, and to provide a future benchmark. It is shown that Term Frequency and Inverse Document Frequency were the best algorithms, with the Synonym approach following them. Following these findings, a study was undertaken into the value of using human evaluators to judge the outputs. The Synonym method was compared to the original author keyphrases of the Reuters News Corpus. The findings show that the authors of Reuters news articles provide good keyphrases, but that more often than not they do not provide any keyphrases at all.
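Term Frequency and Inverse Document Frequency, the two best-performing methods in the comparison, are commonly combined into the familiar TF-IDF score. A self-contained sketch over whitespace-tokenised documents (the unsmoothed IDF variant is an assumption; the paper may weight the components differently):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Score each word in each document by term frequency x inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()
    for words in tokenized:
        df.update(set(words))  # document frequency: count each word once per document
    n = len(docs)
    scores = []
    for words in tokenized:
        tf = Counter(words)
        scores.append({w: (tf[w] / len(words)) * math.log(n / df[w]) for w in tf})
    return scores
```

Words that appear in every document get an IDF of zero, so candidate keyphrases are those that are frequent locally but rare across the collection.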

Relevance:

40.00%

Publisher:

Abstract:

The automatic extraction of road features from remotely sensed images has been a topic of great interest within the photogrammetric and remote sensing communities for over three decades. Although various techniques have been reported in the literature, it remains challenging to extract road details efficiently as image resolution increases and the demand for accurate, up-to-date road data grows. In this paper we focus on the automatic detection of road lane markings, which are crucial for many applications, including lane-level navigation and lane departure warning. The approach consists of four steps: i) data preprocessing; ii) image segmentation and road surface detection; iii) road lane marking extraction based on the generated road surface; and iv) testing and system evaluation. The proposed approach uses the unsupervised ISODATA image segmentation algorithm, which segments the image into vegetation regions and road surface based only on the Cb component of the YCbCr color space. A shadow detection method based on the YCbCr color space is also employed to detect and recover the shadows cast on the road surface by vehicles and trees. Finally, the lane marking features are detected from the road surface using histogram clustering. Experiments applying the proposed method to an aerial imagery dataset of Gympie, Queensland demonstrate the efficiency of the approach.
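The segmentation step relies only on the Cb (blue-difference chroma) channel of YCbCr. For 8-bit RGB input, the standard ITU-R BT.601 conversion gives Cb as below; this is the textbook formula, though the paper may rely on a library conversion:

```python
def cb_component(r, g, b):
    """Cb (blue-difference chroma) of an 8-bit RGB pixel per ITU-R BT.601.
    Neutral greys map to 128; blue content pushes Cb above 128."""
    return 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
```

Working in a chroma channel rather than raw RGB makes the segmentation less sensitive to illumination changes, which is presumably why only Cb is needed to separate vegetation from road surface.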

Relevance:

40.00%

Publisher:

Abstract:

Automated analysis of the sentiments expressed in online consumer feedback can facilitate both organizations' business strategy development and individual consumers' comparison shopping. Nevertheless, existing opinion mining methods either adopt a context-free sentiment classification approach or rely on a large number of manually annotated training examples to perform context-sensitive sentiment classification. Guided by the design science research methodology, we illustrate the design, development, and evaluation of a novel fuzzy domain ontology based context-sensitive opinion mining system. Our novel ontology extraction mechanism, underpinned by a variant of Kullback-Leibler divergence, can automatically acquire contextual sentiment knowledge across various product domains to improve the sentiment analysis process. Evaluated on a benchmark dataset and on real consumer reviews collected from Amazon.com, our system shows remarkable performance improvement over the context-free baseline.
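The ontology extraction mechanism is built on a variant of Kullback-Leibler divergence. The abstract does not specify the variant, but the base quantity for discrete distributions is straightforward to state:

```python
import math

def kl_divergence(p, q):
    """D_KL(P || Q) for discrete distributions given as aligned probability lists.
    Terms with p_i = 0 contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Intuitively, terms whose distribution over a product domain diverges sharply from the background distribution are good candidates for domain-specific sentiment knowledge.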

Relevance:

40.00%

Publisher:

Abstract:

An automatic approach to road lane marking extraction from high-resolution aerial images is proposed, which can automatically detect road surfaces in rural areas through hierarchical image analysis. The procedure is guided by road centrelines obtained from low-resolution images. The lane markings are then extracted from the generated road surfaces with 2D Gabor filters. The proposed method is applied to aerial images of the Bruce Highway around Gympie, Queensland; evaluation of the generated road surfaces and lane markings on four representative test fields validates the proposed method.
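A 2D Gabor filter is an oriented sinusoid under a Gaussian envelope, which responds strongly to stripe-like structures, such as lane markings, at a chosen orientation and wavelength. A minimal real-valued kernel generator (this parameterisation is a common textbook form, not necessarily the one used in the paper):

```python
import math

def gabor_kernel(size, theta, wavelength, sigma):
    """Real part of a 2D Gabor filter: a cosine at orientation `theta` with the
    given `wavelength`, windowed by an isotropic Gaussian of width `sigma`."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)   # rotate coordinates
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(envelope * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel
```

Convolving the road surface with a bank of such kernels at several orientations highlights markings regardless of the road's direction in the image.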

Relevance:

40.00%

Publisher:

Abstract:

The commercialization of aerial image processing depends heavily on platforms such as UAVs (Unmanned Aerial Vehicles). However, the lack of an automated UAV forced landing site detection system has been identified as one of the main impediments to allowing UAV flight over populated areas in civilian airspace. This article proposes a UAV forced landing site detection system based on machine learning approaches, including the Gaussian Mixture Model and the Support Vector Machine. A range of learning parameters is analysed, including the number of Gaussian mixtures and the choice of support vector kernel, namely linear, radial basis function (RBF) and polynomial, together with the order of the RBF and polynomial kernels. Moreover, a modified footprint operator is employed during feature extraction to better describe the geometric characteristics of the local area surrounding a pixel. The performance of the presented system is compared to a baseline UAV forced landing site detection system that uses edge features and an Artificial Neural Network (ANN) region-type classifier. Experiments conducted on aerial image datasets captured over typical urban environments reveal that improved landing site detection can be achieved with an SVM classifier with an RBF kernel using a combination of colour and texture features. Compared to the baseline system, the proposed system provides a significant improvement in the chance of detecting a safe landing area, and its performance is more stable than the baseline's in the presence of changes to the UAV altitude.
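The best-performing configuration uses an SVM with a Gaussian RBF kernel, which measures the similarity of two feature vectors through their squared Euclidean distance. A minimal sketch of the kernel itself (the `gamma` default is illustrative; in practice it is tuned):

```python
import math

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian radial basis function kernel: exp(-gamma * ||x - y||^2)."""
    squared_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * squared_dist)
```

The kernel equals 1 for identical vectors and decays towards 0 with distance; `gamma` controls how quickly, which is one of the learning parameters the article analyses.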