894 results for Query expansion, Text mining, Information retrieval, Chinese IR
Abstract:
Classification schemes are built at a particular point in time; at inception, they reflect a worldview indicative of that time. This is their strength, but results in potential weaknesses as worldviews change. For example, if a scheme of mathematics is not updated even though the state of the art has changed, then it is not a very useful scheme to users for the purposes of information retrieval. However, change in schemes is a good thing. Changing allows designers of schemes to update their model and serves as a responsible mediator between resources and users. But change does come at a cost. In the print world, we revise universal classification schemes, sometimes in drastic ways, and this means that over time, the power of a classification scheme to collocate is compromised if we do not account for scheme change in the organization of affected physical resources. If we understand this phenomenon in the print world, we can design ameliorations for the digital world.
Abstract:
Many years have passed since Berners-Lee envisioned the Web as it should be (1999), but many information professionals still do not know their precise role in its development, especially concerning ontologies, considered one of its main elements. Why? Could it still be a lack of understanding between the different academic communities involved (namely Computer Science, Linguistics, and Library and Information Science), as reported by Soergel (1999)? The idea behind the Semantic Web is that of several technologies working together to achieve optimum information retrieval performance, based on proper resource description in a machine-understandable way by means of metadata and vocabularies (Greenberg, Sutton and Campbell, 2003). This is obviously something that Library and Information Science professionals can do very well, but are we doing enough? When computer scientists put the ontology paradigm on stage, they were asking for semantically richer vocabularies that could support logical inferences in artificial intelligence as a way to improve information retrieval systems. Which direction should vocabulary development take to contribute better to that common goal? The main objective of this paper is twofold: 1) to identify the main trends, issues and problems concerning ontology research, and 2) to identify possible contributions from the Library and Information Science area to the development of ontologies for the Semantic Web. To do so, the paper is structured as follows. First, the methodology followed in the paper is reported, which is based on a thorough literature review in which the main contributions are analysed. The paper then presents a discussion of the main trends, issues and problems concerning ontology research identified in the literature review. Finally, recommendations of possible contributions from the Library and Information Science area to the development of ontologies for the Semantic Web are presented.
Abstract:
A history of specialties in economics since the late 1950s is constructed on the basis of a large corpus of documents from economics journals. The production of this history relies on a combination of algorithmic methods that avoid subjective assessments of the boundaries of specialties: bibliographic coupling, automated community detection in dynamic networks, and text mining. These methods uncover a structuring of economics around recognizable specialties with some significant changes over the period covered (1956-2014). Among our results, especially noteworthy are (a) the clear-cut existence of 10 families of specialties, (b) the disappearance in the late 1970s of a specialty focused on general economic theory, (c) the dispersal of the econometrics-centered specialty in the early 1990s and the ensuing importance of specific econometric methods for the identity of many specialties since the 1990s, and (d) the low level of specialization of individual economists throughout the period, in contrast to physicists, who were already specialized as early as the late 1960s.
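As a concrete illustration of the first algorithmic step mentioned above, the following minimal Python sketch computes bibliographic coupling weights from reference lists (two articles are coupled in proportion to the references they share). The article ids, reference sets, and cosine normalization are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): bibliographic coupling weights.
# Two articles are "coupled" when their reference lists overlap; the weight
# here is the cosine-normalized size of that overlap.

from itertools import combinations

# Illustrative corpus: article id -> set of cited reference ids
references = {
    "a1": {"r1", "r2", "r3"},
    "a2": {"r2", "r3", "r4"},
    "a3": {"r5"},
}

def coupling_weight(refs_a, refs_b):
    """Cosine-normalized bibliographic coupling strength."""
    shared = len(refs_a & refs_b)
    if shared == 0:
        return 0.0
    return shared / (len(refs_a) ** 0.5 * len(refs_b) ** 0.5)

# Build the weighted coupling graph as an edge list
edges = []
for (a, ra), (b, rb) in combinations(references.items(), 2):
    w = coupling_weight(ra, rb)
    if w > 0:
        edges.append((a, b, w))

print(edges)  # [('a1', 'a2', 0.666...)]
```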
Abstract:
This paper describes our semi-automatic, keyword-based approach to the four topics of the Information Extraction from Microblogs Posted during Disasters task at the Forum for Information Retrieval Evaluation (FIRE) 2016. The approach consists of three phases.
Abstract:
In this thesis we study Zipf's law from both an applied and a theoretical point of view. This empirical law states that the rank-frequency (RF) distribution of the words of a text follows a power law with exponent -1. On the theoretical side, we treat two classes of models capable of producing power laws in their probability distributions. In particular, we consider generalizations of Polya urns and SSR (Sample Space Reducing) processes; for the latter we give a formalization in terms of Markov chains. Finally, we propose a population-dynamics model capable of unifying and reproducing the results of the three SSR processes found in the literature. We then move to the quantitative analysis of the RF behaviour of the words of a corpus of texts. In this case the RF does not follow a pure power law but shows a double regime, which can be represented by a power law with a change of exponent. We investigated whether the analysis of the RF behaviour can be linked to the topological properties of a graph. In particular, starting from a corpus of texts, we built an adjacency network in which every word is connected by a link to the word that follows it. A topological analysis of the structure of this graph yielded results that seem to confirm the hypothesis that its structure is related to the change in slope of the RF curve. This result may lead to developments in the study of language and of the human mind. Moreover, since the structure of the graph appears to contain components that group words according to their meaning, deepening this study could lead to developments in automatic text understanding (text mining).
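As a hedged illustration of the rank-frequency analysis and the word-adjacency graph described above (not the thesis code), the sketch below computes the RF curve of a toy text, estimates the power-law exponent with a log-log least-squares fit, and builds the graph linking each word to its successor.

```python
# Hedged sketch: empirical rank-frequency (RF) curve of a text, a rough
# power-law exponent estimate via a log-log least-squares fit, and the
# word-adjacency graph (each word linked to the word that follows it).
# The sample text is a placeholder.

import math
from collections import Counter

text = "the cat sat on the mat the cat ran"  # placeholder corpus
words = text.split()

# Rank-frequency: word frequencies sorted in decreasing order
freqs = sorted(Counter(words).values(), reverse=True)
ranks = range(1, len(freqs) + 1)

# Least-squares slope on log-log axes; Zipf's law predicts a slope near -1
xs = [math.log(r) for r in ranks]
ys = [math.log(f) for f in freqs]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print(f"estimated RF exponent: {slope:.2f}")

# Word-adjacency graph: a directed edge from each word to its successor
edges = set(zip(words, words[1:]))
print(sorted(edges))
```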
Abstract:
The automatic extraction of biomedical events from the scientific literature has attracted strong interest in recent years, proving able to recognize complex, semantically rich interactions expressed in text. Unfortunately, very few works focus on learning embeddings or similarity metrics for event graphs. This gap leaves biological relations disconnected, preventing the application of machine learning techniques that could make an important contribution to scientific progress. Taking advantage of recent deep graph kernel solutions and pre-trained language models, we propose Deep Divergence Event Graph Kernels (DDEGK), an unsupervised, inductive method that maps events into a vector space while preserving their semantic and structural similarities. Unlike many other systems, DDEGK operates at the graph level and requires neither task-specific labels and features nor known correspondences between nodes. To this end, our solution compares events against a small set of prototype events, trains cross-graph attention networks to identify similarity links between pairs of nodes (strengthening interpretability), and employs transformer-based models to encode continuous attributes. Extensive experiments were carried out on ten biomedical datasets. We show that our representations can be used effectively in tasks such as graph classification, clustering, and visualization and, at the same time, can simplify the task of semantic textual similarity. Empirical results show that DDEGK significantly outperforms the current state-of-the-art models.
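The following is a deliberately simplified, hypothetical sketch of the prototype-comparison idea behind DDEGK, not the model itself: each event graph is embedded as a vector of divergences from a few prototype graphs, with a plain node-label histogram distance standing in for the learned cross-graph attention networks and transformer encoders. Graphs and labels are illustrative.

```python
# Hedged, highly simplified sketch of the prototype-comparison idea
# (not the paper's model): each event graph is represented by its
# divergence from a handful of prototype graphs.

from collections import Counter

def label_histogram_distance(g1, g2):
    """L1 distance between node-label histograms of two event graphs."""
    h1, h2 = Counter(g1["labels"]), Counter(g2["labels"])
    keys = set(h1) | set(h2)
    return sum(abs(h1[k] - h2[k]) for k in keys)

# Illustrative event graphs: node label lists (edges omitted for brevity)
prototypes = [
    {"labels": ["Gene_expression", "Protein"]},
    {"labels": ["Binding", "Protein", "Protein"]},
]
event = {"labels": ["Binding", "Protein"]}

# The event's embedding: one divergence score per prototype
embedding = [label_histogram_distance(event, p) for p in prototypes]
print(embedding)  # [2, 1]
```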
Abstract:
This thesis studies the use of web crawling, web scraping, and Natural Language Processing techniques to automatically build a dataset of documents and a knowledge base of verb-object pairs usable for text classification. After a brief introduction to the techniques employed, the generation method is presented, first in a theoretical form that can be generalized to any classification based on a set of topics, and then specifically through a case study: the SDG Detector software. The latter is a practical application of the proposed method to build a collection of information useful for classifying documents according to the presence of one or more Sustainable Development Goals. The classification part is handled by the co-author of this application; the present work instead focuses on an analysis of correctness and performance based on the expansion of the dataset and of the resulting knowledge base.
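A hedged sketch of the kind of pipeline described above (not the SDG Detector code): fetch a page, strip the markup, and extract verb-object pairs from a dependency parse. The URL is a placeholder, and the spaCy model en_core_web_sm is an assumed dependency that must be installed separately.

```python
# Hedged sketch: scrape a page and collect (verb, object) pairs.
import requests
from bs4 import BeautifulSoup
import spacy

nlp = spacy.load("en_core_web_sm")

def verb_object_pairs(url):
    html = requests.get(url, timeout=10).text
    text = BeautifulSoup(html, "html.parser").get_text(separator=" ")
    doc = nlp(text)
    pairs = set()
    for token in doc:
        # A direct object whose head is a verb gives one (verb, object) pair
        if token.dep_ == "dobj" and token.head.pos_ == "VERB":
            pairs.add((token.head.lemma_, token.lemma_))
    return pairs

if __name__ == "__main__":
    print(verb_object_pairs("https://example.org"))  # placeholder URL
```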
Abstract:
This thesis applies techniques and tools for the automatic analysis of textual data. The aim of the work is to perform text mining and sentiment analysis on a set of messages in order to understand their meaning, with particular attention to the emotions and sentiments they contain, so as to extract information of interest.
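As a minimal, hypothetical illustration of sentiment analysis on short messages (not the approach used in the thesis), the sketch below scores messages with a tiny hand-made lexicon; a real analysis would rely on a full sentiment lexicon or a trained model.

```python
# Hedged illustration: lexicon-based sentiment scoring of short messages.
# The word lists are tiny placeholders.

POSITIVE = {"good", "great", "happy", "love", "excellent"}
NEGATIVE = {"bad", "sad", "hate", "terrible", "awful"}

def sentiment_score(message):
    """Return (#positive - #negative) / #tokens, in [-1, 1]."""
    tokens = message.lower().split()
    if not tokens:
        return 0.0
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    return (pos - neg) / len(tokens)

messages = ["I love this, it is great", "what a terrible day"]
for m in messages:
    print(m, "->", sentiment_score(m))
```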
Abstract:
This thesis develops AI methods as a contribution to computational musicology, an interdisciplinary field that studies music with computers. In systematic musicology a composition is defined as the combination of harmony, melody and rhythm. According to de La Borde, harmony alone "merits the name of composition". This thesis focuses on analysing harmony from a computational perspective. We concentrate on symbolic music representation and address the problem of formally representing chord progressions in western music compositions. Informally, chords are sets of pitches played simultaneously, and chord progressions constitute the harmony of a composition. Our approach combines ML techniques with knowledge-based techniques. We design and implement the Modal Harmony Ontology (MHO), using OWL. It formalises one of the most important theories in western music: Modal Harmony Theory. We propose and experiment with different types of embedding methods to encode chords, inspired by NLP and adapted to the music domain, using statistical (extensional) knowledge from a huge dataset of chord annotations (ChoCo), intensional knowledge from MHO, and a combination of the two. The methods are evaluated on two musicologically relevant tasks: chord classification and music structure segmentation. The former is verified by comparing the results of the Odd One Out algorithm to the classification obtained with MHO, achieving good performance (accuracy = 0.86). For the latter, we feed our embeddings to an RNN. Results show that the best performance (F1 = 0.6) is achieved with embeddings that combine both approaches. Our method outperforms the state of the art (F1 = 0.42) for symbolic music structure segmentation. It is worth noting that embeddings based only on MHO almost equal the best performance (F1 = 0.58), while requiring only the ontology as input, as opposed to other approaches that rely on large datasets.
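As a hedged sketch of the Odd One Out evaluation mentioned above (not the thesis code), the snippet below marks as "odd" the chord whose average cosine similarity to the rest of the group is lowest; the chord names and embedding vectors are placeholders.

```python
# Hedged sketch: Odd One Out over a small group of chord embeddings.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def odd_one_out(chords):
    """chords: dict mapping chord name -> embedding vector."""
    names = list(chords)
    def avg_sim(name):
        others = [cosine(chords[name], chords[o]) for o in names if o != name]
        return sum(others) / len(others)
    return min(names, key=avg_sim)

# Placeholder embeddings: two similar chords and a dissimilar one
chords = {"Cmaj": [1.0, 0.1], "Am": [0.9, 0.2], "F#dim": [-0.8, 0.9]}
print(odd_one_out(chords))  # F#dim
```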
Abstract:
As one of the most popular deep learning models, the convolutional neural network (CNN) has achieved huge success in image information extraction. Traditionally, a CNN is trained by supervised learning with labeled data and used as a classifier by adding a classification layer at the end. Its capability of extracting image features is largely limited by the difficulty of setting up a large training dataset. In this paper, we propose a new unsupervised learning CNN model, which uses a so-called convolutional sparse auto-encoder (CSAE) algorithm to pre-train the CNN. Instead of using labeled natural images for CNN training, the CSAE algorithm can train the CNN with unlabeled artificial images, which enables easy expansion of the training data and unsupervised learning. The CSAE algorithm is especially designed for extracting complex features from specific objects such as Chinese characters. After the features of artificial images are extracted by the CSAE algorithm, the learned parameters are used to initialize the first convolutional layer of the CNN, and the CNN model is then fine-tuned on scene image patches with a linear classifier. The new CNN model is applied to Chinese scene text detection and is evaluated on a multilingual image dataset that labels Chinese, English and numeral texts separately. A detection precision gain of more than 10% is observed over two CNN models.
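The following PyTorch sketch illustrates the pre-training-then-transfer idea described above under stated assumptions; it is not the paper's CSAE implementation. A convolutional auto-encoder is trained on unlabeled patches, and its encoder weights initialise the first convolutional layer of a classification CNN. Shapes, hyper-parameters, and the random stand-in data are illustrative.

```python
# Hedged sketch: auto-encoder pre-training, then weight transfer to a CNN.
import torch
from torch import nn

class ConvAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.decoder = nn.ConvTranspose2d(16, 1, kernel_size=5, padding=2)

    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

# 1) Unsupervised pre-training on unlabeled (here random) patches
auto = ConvAutoEncoder()
opt = torch.optim.Adam(auto.parameters(), lr=1e-3)
patches = torch.rand(64, 1, 32, 32)  # stand-in for unlabeled artificial images
for _ in range(5):
    opt.zero_grad()
    loss = nn.functional.mse_loss(auto(patches), patches)
    loss.backward()
    opt.step()

# 2) Transfer: initialise the CNN's first conv layer with the learned filters
cnn_first_conv = nn.Conv2d(1, 16, kernel_size=5, padding=2)
cnn_first_conv.load_state_dict(auto.encoder.state_dict())
# ... the rest of the CNN would then be fine-tuned on labeled scene patches.
```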
Abstract:
An experimental setup to measure the three-dimensional phase-intensity distribution of an infrared laser beam in the focal region is presented. It is based on the knife-edge method to perform a tomographic reconstruction and on a numerical method based on the transport of intensity equation to obtain the propagating wavefront. This experimental approach allows us to characterize a focused laser beam when the use of imaging or interferometric arrangements is not possible. Thus, we have recovered the intensity and phase of an aberrated beam dominated by astigmatism. The phase evolution is fully consistent with that of the beam intensity along the optical axis. Moreover, the method relies on expanding both the irradiance and the phase in a series of Zernike polynomials. We describe guidelines for choosing a proper set of these polynomials depending on the experimental conditions and show that, by following these criteria, numerical errors can be reduced.
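For reference, these are the standard textbook forms of the transport of intensity equation and of a Zernike expansion of phase and irradiance; the authors' exact notation and normalization may differ.

```latex
% Transport of intensity equation (standard form), with k = 2*pi/lambda:
\[
  k \, \frac{\partial I(x,y,z)}{\partial z}
    = - \nabla_{\perp} \cdot \left[ I(x,y,z)\, \nabla_{\perp} \varphi(x,y,z) \right],
  \qquad k = \frac{2\pi}{\lambda}.
\]
% Zernike expansions of the phase and the irradiance over the unit pupil:
\[
  \varphi(\rho,\theta) \approx \sum_{j=1}^{N} a_j \, Z_j(\rho,\theta),
  \qquad
  I(\rho,\theta) \approx \sum_{j=1}^{N} b_j \, Z_j(\rho,\theta).
\]
```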
Abstract:
Resource selection (or query routing) is an important step in P2P IR. Though analogous to document retrieval in the sense of choosing a relevant subset of resources, resource selection methods have evolved independently from those for document retrieval. Among the reasons for this divergence is that document retrieval targets scenarios where the underlying resources are semantically homogeneous, whereas peers may manage diverse content. We observe that semantic heterogeneity is mitigated at the resource selection layer of the clustered two-tier P2P IR architecture through the use of clustering, and posit that this necessitates a re-examination of the applicability of document retrieval methods for resource selection within such a framework. This paper empirically benchmarks document retrieval models against state-of-the-art resource selection models for the problem of resource selection in the clustered P2P IR architecture, using classical IR evaluation metrics. Our benchmarking study shows that document retrieval models significantly outperform other methods for the task of resource selection in the clustered P2P IR architecture. This indicates that the clustered P2P IR framework can exploit advances in document retrieval methods to deliver corresponding improvements in resource selection, suggesting a potential convergence of these fields for the clustered P2P IR architecture.
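A hedged sketch of the idea being benchmarked above (not the paper's experimental setup): each resource (peer cluster) is collapsed into one pseudo-document, and an ordinary document-retrieval scoring function, a simple TF-IDF here, ranks the resources for a query. The resources, query, and weighting scheme are illustrative.

```python
# Hedged sketch: resource selection via a plain document-retrieval scorer.
import math
from collections import Counter

resources = {
    "peer_cluster_A": ["neural ranking for ad hoc retrieval", "bm25 baselines"],
    "peer_cluster_B": ["distributed hash tables", "p2p overlay maintenance"],
}

# Each resource becomes one bag of words over its concatenated documents
bags = {r: Counter(" ".join(docs).split()) for r, docs in resources.items()}
n_res = len(bags)

def idf(term):
    df = sum(term in bag for bag in bags.values())
    return math.log((n_res + 1) / (df + 1)) + 1

def score(query, bag):
    return sum(bag[t] * idf(t) for t in query.split())

query = "ad hoc retrieval"
ranking = sorted(bags, key=lambda r: score(query, bags[r]), reverse=True)
print(ranking)  # peer_cluster_A should rank first for this query
```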
Abstract:
This paper examines the effects of information request ambiguity and construct incongruence on end users' ability to develop SQL queries with an interactive relational database query language. In this experiment, ambiguity in information requests adversely affected accuracy and efficiency. Incongruities among the information request, the query syntax, and the data representation adversely affected accuracy, efficiency, and confidence. The results for ambiguity suggest that organizations might elicit better query development if end users were sensitized to the nature of ambiguities that could arise in their business contexts. End users could translate natural language queries into pseudo-SQL that could be examined for precision before the queries were developed. The results for incongruence suggest that better query development might ensue if semantic distances could be reduced by giving users data representations and database views that maximize construct congruence for the kinds of queries in typical domains.
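As a small, made-up illustration of the pseudo-SQL idea above (not the study's materials), the snippet below pins the ambiguous request "customers with large recent orders" down to explicit predicates before running it; the schema, thresholds, and data are invented for the example.

```python
# Hedged illustration: resolving an ambiguous request into precise SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount REAL, order_date TEXT);
    INSERT INTO orders VALUES
        ('acme', 1200.0, '2024-05-01'),
        ('acme',   80.0, '2023-01-15'),
        ('zenit', 300.0, '2024-04-20');
""")

# "large" -> amount > 500; "recent" -> on or after 2024-01-01
precise_sql = """
    SELECT DISTINCT customer
    FROM orders
    WHERE amount > 500 AND order_date >= '2024-01-01'
"""
print(conn.execute(precise_sql).fetchall())  # [('acme',)]
```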
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
This paper provides an extended analysis of livelihood diversification in rural Tanzania, with special emphasis on artisanal and small-scale mining (ASM). Over the past decade, this sector of industry, which is labour-intensive and comprises an array of rudimentary and semi-mechanized operations, has become an indispensable economic activity throughout Sub-Saharan Africa, providing employment to a host of redundant public sector workers, retrenched large-scale mine labourers and poor farmers. In many of the region’s rural areas, it is overtaking subsistence agriculture as the primary industry. Such a pattern appears to be unfolding within the Morogoro and Mbeya regions of southern Tanzania, where findings from recent research suggest that a growing number of smallholder farmers are turning to ASM for employment and financial support. It is imperative that national rural development programmes take this trend into account and provide support to these people.