911 results for Machine Learning, Natural Language Processing, Descriptive Text Mining, POIROT, Transformer


Relevance:

100.00%

Publisher:

Abstract:

Technological evolution, together with the social changes witnessed in recent decades, has significantly altered the way users interact with institutions: electronic means such as e-mail are now favoured over more traditional channels like the letter and the telephone. In this context, ISEP, a higher-education institution that hosts thousands of students and welcomes hundreds of new ones every year, needs to be able to answer the countless e-mail messages it receives in a timely manner. This need gave rise to a project, named SiRAC, to assist in answering those messages. SiRAC's goal is to answer e-mail messages automatically. It is acknowledged that not every message can be answered; priority is given to the questions recurrently put to the Academic Division. This shortens the communication time between the parties involved, creating a closer relationship between ISEP and the public that contacts it. SiRAC analyses messages and tries to answer them automatically whenever their content can be classified as belonging to a set of questions previously identified as recurrent by the Academic Division's staff and for which a template answer already exists. The questions contained in a message are identified through words and expressions normally associated with the different question types. Sending a reply requires the correct identification of the associated types, in accordance with defined minimum requirements, so as to avoid sending a wrong answer to a message. The deployment of SiRAC frees Academic Division staff previously assigned to answering messages for other duties.

Relevance:

100.00%

Publisher:

Abstract:

The extraction of information from textual descriptions for the procedural modelling of urban environments has been presented as a solution for old buildings. However, this type of building demands greater care with high-level detail. This paper describes a platform with a modular architecture for the expedited generation of 3D models of monumental buildings. The first module extracts information from formal texts by integrating NooJ into a Web Service. In the second module, all the extracted information is mapped onto an ontology that defines the objects to be considered in procedural modelling, a process carried out by the final module, which generates the 3D models in CityGML, also as a Web Service. On top of this platform, a Web prototype was developed for the case study of modelling the churches of the city of Porto. The results obtained gave positive indications about the data model defined and about the flexibility in representing diverse structures, such as doors, windows and other features of churches.

Relevance:

100.00%

Publisher:

Abstract:

Social networks are increasingly part of our daily lives. The recent rise in popularity of this kind of service has brought new features and applications. Users contribute their opinions and knowledge, forming an information repository of great proportions. This information is increasingly used by companies, which see social networks as a way to promote their products to the public or to analyse how those products are perceived. The study presented in this paper applied Sentiment Analysis techniques to verify whether the information available on two social networks (Facebook and Twitter) can be used to estimate the revenue that may be obtained from goods or services about to be launched on the market.
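A lexicon-based scorer is the simplest form of the sentiment analysis mentioned above: count positive and negative words per post and aggregate the scores as a market signal. The lexicon and posts below are invented for illustration; the study's actual method is not specified here.

```python
# Toy lexicon-based sentiment scoring over social-media posts
# (lexicon and posts are invented).

POSITIVE = {"great", "love", "excellent", "good", "amazing"}
NEGATIVE = {"bad", "hate", "terrible", "poor", "awful"}

def sentiment(post: str) -> int:
    """Score one post: +1 per positive word, -1 per negative word."""
    score = 0
    for word in post.lower().split():
        word = word.strip(".,!?")
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score

posts = [
    "I love this product, it is amazing!",
    "Terrible experience, really bad support.",
    "It arrived on time.",
]
scores = [sentiment(p) for p in posts]
print(scores)       # [2, -2, 0]
print(sum(scores))  # aggregate signal to correlate with sales figures
```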

Relevance:

100.00%

Publisher:

Abstract:

Behavioral biometrics is one of the areas with growing interest within the biosignal research community. A recent trend in the field is ECG-based biometrics, where electrocardiographic (ECG) signals are used as input to the biometric system. Previous work has shown this to be a promising trait, with the potential to serve as a good complement to other existing, and already more established modalities, due to its intrinsic characteristics. In this paper, we propose a system for ECG biometrics centered on signals acquired at the subject's hand. Our work is based on a previously developed custom, non-intrusive sensing apparatus for data acquisition at the hands, and involved the pre-processing of the ECG signals, and evaluation of two classification approaches targeted at real-time or near real-time applications. Preliminary results show that this system leads to competitive results both for authentication and identification, and further validate the potential of ECG signals as a complementary modality in the toolbox of the biometric system designer.
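One common near-real-time approach to ECG identification, consistent with the classification step described above though not necessarily the authors' exact method, is template matching: each enrolled subject is represented by a mean heartbeat, and a new beat is assigned to the nearest template. The signals below are synthetic toy vectors; real systems also involve filtering and fiducial-point segmentation not shown here.

```python
# Template-based ECG identification sketch (synthetic data, hypothetical
# subjects; a stand-in for the classifiers evaluated in the paper).

def mean_template(beats):
    """Average a list of equal-length heartbeat segments."""
    n = len(beats)
    return [sum(vals) / n for vals in zip(*beats)]

def distance(a, b):
    """Euclidean distance between two beats."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def identify(beat, templates):
    """Return the enrolled subject whose template is nearest."""
    return min(templates, key=lambda s: distance(beat, templates[s]))

templates = {
    "alice": mean_template([[0.0, 1.0, 0.2], [0.1, 0.9, 0.3]]),
    "bob": mean_template([[0.5, 0.2, 0.8], [0.4, 0.3, 0.7]]),
}
print(identify([0.05, 0.95, 0.25], templates))  # alice
```

For authentication rather than identification, the same distance would be compared against a per-subject acceptance threshold instead of taking the minimum.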

Relevance:

100.00%

Publisher:

Abstract:

This PhD project aims to study paraphrasing, initially understood as the different ways in which the same content is expressed linguistically. We will go into that concept in depth trying to define and delimit its scope more accurately. In that sense, we also aim to discover which kind of structures and phenomena it covers. Although there exist some paraphrasing typologies, the great majority of them only apply to English, and focus on lexical and syntactic transformations. Our intention is to go further into this subject and propose a paraphrasing typology for Spanish and Catalan combining lexical, syntactic, semantic and pragmatic knowledge. We apply a bottom-up methodology trying to collect evidence of this phenomenon from the data. For this purpose, we are initially using the Spanish Wikipedia as our corpus. The internal structure of this encyclopedia makes it a good resource for extracting paraphrasing examples for our investigation. This empirical approach will be complemented with the use of linguistic knowledge, and by comparing and contrasting our results to previously proposed paraphrasing typologies in order to enlarge the possible paraphrasing forms found in our corpus. The fact that the same content can be expressed in many different ways presents a major challenge for Natural Language Processing (NLP) applications. Thus, research on paraphrasing has recently been attracting increasing attention in the fields of NLP and Computational Linguistics. The results obtained in this investigation would be of great interest in many of these applications.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, kernel methods have proven to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimensionality, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement the input-output mapping, but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is here encoded into SVM classifiers by modifying the model's loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
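One way to encode output similarity into a kernel, in the spirit of the modification described above, is a joint input-output kernel that multiplies a standard input kernel by a label-similarity term. The class names and similarity values below are invented for illustration; the paper's exact formulation may differ.

```python
# Joint input-output kernel sketch: identical inputs with related labels
# still score high, while unrelated labels pull the similarity down.
import math

# Hypothetical similarity between land-cover classes (e.g. "meadow" is
# closer to "forest" than to "asphalt").
LABEL_SIM = {
    ("forest", "forest"): 1.0,
    ("forest", "meadow"): 0.7,
    ("meadow", "meadow"): 1.0,
    ("forest", "asphalt"): 0.1,
    ("meadow", "asphalt"): 0.1,
    ("asphalt", "asphalt"): 1.0,
}

def label_kernel(y1, y2):
    return LABEL_SIM.get((y1, y2), LABEL_SIM.get((y2, y1), 0.0))

def rbf(x1, x2, gamma=0.5):
    """Standard RBF input kernel."""
    d2 = sum((a - b) ** 2 for a, b in zip(x1, x2))
    return math.exp(-gamma * d2)

def joint_kernel(x1, y1, x2, y2):
    """Input kernel scaled by how similar the output labels are."""
    return rbf(x1, x2) * label_kernel(y1, y2)

x = [0.2, 0.4]
print(joint_kernel(x, "forest", x, "meadow"))   # 0.7
print(joint_kernel(x, "forest", x, "asphalt"))  # 0.1
```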

Relevance:

100.00%

Publisher:

Abstract:

Research project carried out during a stay at the National University of Singapore (Singapore) between July and October 2007. Given the explosion of music on the Internet and the rapid expansion of digital music collections, a key challenge in the music-information area is the development of efficient and reliable music-processing systems. The goal of the proposed research has been to work on different aspects of the extraction, modelling and processing of musical content. In particular, work was carried out on the extraction, analysis and manipulation of low-level audio descriptors, the modelling of musical processes, the study and development of machine-learning techniques for audio processing, and the identification and extraction of high-level musical attributes. Some audio-analysis components were reviewed and improved, and components for extracting inter-note and intra-note descriptors from monophonic audio recordings were revised. Previous work on tempo was applied to the formalization of different musical tasks. Finally, the high-level, content-based processing of music was investigated. As an example of this, we investigated how professional musicians express and communicate their interpretation of the musical and emotional content of pieces, and used this information to automatically identify performers. Deviations in parameters such as pitch, timing, amplitude and timbre were studied at the inter-note and intra-note level.

Relevance:

100.00%

Publisher:

Abstract:

Automatic diagnostic discrimination is an application of artificial intelligence techniques to solving clinical cases based on imaging. Diffuse liver diseases are widespread in the population and follow an insidious course, giving little sign early in their progression. Early and effective diagnosis is necessary because many of these diseases progress to cirrhosis and liver cancer. The usual technique of choice for an accurate diagnosis is liver biopsy, an invasive procedure that is not free of contraindications. This project proposes an alternative non-invasive method, free of contraindications, based on liver ultrasonography. The images are digitized and then analyzed using statistical and texture-analysis techniques. The results are validated against the pathology report. Finally, we apply artificial intelligence techniques such as fuzzy k-means and Support Vector Machines and compare their significance against the statistical analysis and the clinician's report. The results show that this technique is valid and a promising non-invasive alternative for diagnosing chronic liver disease with diffuse involvement. Artificial intelligence classification techniques significantly improve diagnostic discrimination compared with purely statistical methods.
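The texture-analysis step above can be illustrated with first-order statistical features (mean and variance of grey levels) computed over an image patch, followed by a nearest-centroid rule standing in for the SVM / fuzzy k-means classifiers. All pixel values and centroids below are synthetic; real systems use richer texture descriptors.

```python
# First-order texture features plus nearest-centroid classification
# (synthetic data; a simplified stand-in for the paper's classifiers).

def features(patch):
    """Mean and variance of grey levels in a patch."""
    flat = [p for row in patch for p in row]
    mean = sum(flat) / len(flat)
    var = sum((p - mean) ** 2 for p in flat) / len(flat)
    return (mean, var)

def classify(patch, centroids):
    """Assign the patch to the class with the nearest feature centroid."""
    f = features(patch)
    return min(
        centroids,
        key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centroids[c])),
    )

# Hypothetical class centroids, as if learned from biopsy-validated patches.
centroids = {"healthy": (100.0, 50.0), "diffuse_disease": (140.0, 400.0)}

patch = [[80, 200], [190, 95]]  # bright, highly heterogeneous texture
print(classify(patch, centroids))  # diffuse_disease
```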

Relevance:

100.00%

Publisher:

Abstract:

This work aims to study the semantic properties of a group of verbs of motion and their corresponding arguments. Information about the type of complement each verb requires is important for knowing the syntactic structure of the sentence and for offering practical solutions in Natural Language Processing tasks. The analysis focuses on the verbs conduir ('to drive'), navegar ('to sail') and volar ('to fly'), starting from the basic senses that the Diccionari d'ús dels verbs catalans (DUVC) describes for each of these verbs and from their selectional restrictions. Using around a hundred sentences extracted from the Corpus d'Ús del Català a la Web of Universitat Pompeu Fabra and from the Corpus Textual Informatitzat de la Llengua Catalana of the Institut d'Estudis Catalans, we check whether only the senses and uses described in the DUVC occur in the language, and which are the most frequent. Finally, we describe the nouns heading the arguments in terms of semantic features.

Relevance:

100.00%

Publisher:

Abstract:

In this paper we present the theoretical and methodological foundations for the development of a multi-agent Selective Dissemination of Information (SDI) service model that applies Semantic Web technologies for specialized digital libraries. These technologies make it possible to achieve more efficient information management, improving agent-user communication processes and facilitating accurate access to relevant resources. Other tools used are fuzzy linguistic modelling techniques (which ease the interaction between users and the system) and natural language processing (NLP) techniques for semiautomatic thesaurus generation. Also, RSS feeds are used as "current awareness bulletins" to generate personalized bibliographic alerts.
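The personalized-alert idea can be sketched as matching feed items against a user's interest profile and keeping only the relevant ones for the bulletin. The feed items and profile below are invented, and plain keyword overlap stands in for the fuzzy linguistic matching the paper actually proposes.

```python
# RSS-style alert filtering sketch (invented items and profile; keyword
# overlap as a simple stand-in for fuzzy linguistic matching).

items = [
    {"title": "New advances in semantic web ontologies"},
    {"title": "Library opening hours updated"},
    {"title": "Fuzzy linguistic modelling for recommender systems"},
]

profile = {"semantic", "ontologies", "fuzzy", "recommender"}

def relevant(item, profile, min_overlap=1):
    """Keep an item if its title shares enough words with the profile."""
    words = set(item["title"].lower().split())
    return len(words & profile) >= min_overlap

bulletin = [i["title"] for i in items if relevant(i, profile)]
print(bulletin)  # the first and third titles
```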

Relevance:

100.00%

Publisher:

Abstract:

Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to having richer resources with a broad range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method for automating the merging of resources, with special emphasis on what we call the mapping step. This mapping step, which converts the resources into a common format that later allows the merging, is usually performed with huge manual effort and thus makes the whole process very costly. We therefore propose a method to perform this mapping fully automatically. To test our method, we have addressed the merging of two verb subcategorization frame lexica for Spanish. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.

Relevance:

100.00%

Publisher:

Abstract:

Lexical Resources are a critical component for Natural Language Processing applications. However, the high cost of comparing and merging different resources has been a bottleneck to obtaining richer resources and a broader range of potential uses for a significant number of languages. With the objective of reducing cost by eliminating human intervention, we present a new method towards the automatic merging of resources. This method includes both the automatic mapping of the resources involved to a common format and their merging once in this format. This paper presents how we have addressed the merging of two verb subcategorization frame lexica for Spanish, but our method will be extended to cover other types of Lexical Resources. The results achieved, which almost replicate human work, demonstrate the feasibility of the approach.
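Once two lexica are mapped to a common format, the merging step itself can be as simple as a per-verb union of subcategorization frames. The entries below are invented toy data, far simpler than real Spanish SCF lexica, and the frame labels are hypothetical.

```python
# Toy merge of two verb subcategorization lexica already in a common
# format: verb -> set of frame labels (all entries invented).

lexicon_a = {
    "comer": {"NP", "NP_NP"},
    "hablar": {"NP", "NP_PP(de)"},
}
lexicon_b = {
    "comer": {"NP"},
    "vivir": {"NP_PP(en)"},
}

def merge(a, b):
    """Union the frame sets of every verb found in either lexicon."""
    merged = {}
    for verb in set(a) | set(b):
        merged[verb] = a.get(verb, set()) | b.get(verb, set())
    return merged

merged = merge(lexicon_a, lexicon_b)
print(sorted(merged))           # ['comer', 'hablar', 'vivir']
print(sorted(merged["comer"]))  # ['NP', 'NP_NP']
```

The hard part, as both abstracts note, is the mapping step that produces this common format in the first place, not the union shown here.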

Relevance:

100.00%

Publisher:

Abstract:

This research investigates the phenomenon of translationese in two monolingual comparable corpora of original and translated Catalan texts. Translationese has been defined as the dialect, sub-language or code of translated language. This study aims at giving empirical evidence of translation universals regardless of the source language. Traditionally, research conducted on translation strategies has been mainly intuition-based. Computational Linguistics and Natural Language Processing techniques provide reliable information on lexical frequencies and on morphological and syntactic distribution in corpora. Therefore, they have been applied to observe which translation strategies occur in these corpora. The results seem to support the simplification, interference and explicitation hypotheses, whereas no sign of normalization has been detected with the methodology used. The data collected and the resources created for identifying lexical, morphological and syntactic patterns of translations can be useful for Translation Studies teachers, scholars and students: teachers will have more tools to help students avoid reproducing translationese patterns. The resources developed will help in detecting non-genuine or inadequate structures in the target language, which may imply an improvement in the stylistic quality of translations. Translation professionals can also take advantage of these resources to improve their translation quality.
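One standard way to operationalize the simplification hypothesis mentioned above is a lexical-richness measure such as the type/token ratio (TTR): translated text is expected to show a lower TTR than comparable originals. The two sample "corpora" below are invented single sentences, only meant to show the computation.

```python
# Type/token ratio as a simplification indicator (invented sample text).

def ttr(text):
    """Distinct word forms (types) divided by total words (tokens)."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens)

original = "the old harbour kept its narrow crooked streets and salty air"
translated = "the old port kept its old streets and its old air"

print(round(ttr(original), 2))    # 1.0
print(round(ttr(translated), 2))  # 0.73
print(ttr(translated) < ttr(original))  # consistent with simplification
```

Real studies compute such measures over whole corpora with lemmatization and length normalization, since raw TTR is sensitive to text length.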

Relevance:

100.00%

Publisher:

Abstract:

Since its creation, the Internet has permeated our daily life. The web is omnipresent for communication, research and organization, and this widespread use has driven the rapid development of the Internet. Nowadays, the Internet is the biggest container of resources. Information databases such as Wikipedia, Dmoz and the open data available on the net represent a great informational potential for mankind. Easy and free web access is one of the major features characterizing Internet culture. Ten years ago, the web was completely dominated by English. Today, the web community is no longer only English-speaking but is becoming a genuinely multilingual community. The availability of content is intertwined with the availability of logical organizations (ontologies), for which multilinguality plays a fundamental role. In this work we introduce a very high-level logical organization fully based on semiotic assumptions. We thus present the theoretical foundations as well as the ontology itself, named Linguistic Meta-Model. The most important feature of the Linguistic Meta-Model is its ability to support the representation of different knowledge sources developed according to different underlying semiotic theories. This is possible because most knowledge representation schemata, whether formal or informal, can be put into the context of the so-called semiotic triangle. In order to show the main characteristics of the Linguistic Meta-Model from a practical point of view, we developed VIKI (Virtual Intelligence for Knowledge Induction). VIKI is a work-in-progress system aimed at exploiting the Linguistic Meta-Model structure for knowledge expansion. It is a modular system in which each module accomplishes a natural language processing task, from terminology extraction to knowledge retrieval. VIKI is a supporting system for the Linguistic Meta-Model, and its main task is to provide some empirical evidence regarding the use of the Linguistic Meta-Model, without claiming to be exhaustive.