4 results for Semantic Text Analysis
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
While the use of statistical physics methods to analyze large corpora has helped unveil many patterns in texts, no comprehensive investigation has been performed on the interdependence between syntactic and semantic factors. In this study we propose a framework for determining whether a text (e.g., one written in an unknown alphabet) is compatible with a natural language and to which language it could belong. The approach is based on three types of statistical measurements: first-order statistics of word properties in a text, the topology of complex networks representing texts, and intermittency concepts in which the text is treated as a time series. Comparative experiments were performed with the New Testament in 15 different languages and with distinct books in English and Portuguese in order to quantify the dependence of the different measurements on the language and on the story told in the book. The metrics found to be informative in distinguishing real texts from their shuffled versions include the assortativity, degree, and selectivity of words. As an illustration, we analyze an undeciphered medieval manuscript known as the Voynich Manuscript. We show that it is mostly compatible with natural languages and incompatible with random texts. We also obtain candidates for keywords of the Voynich Manuscript, which could be helpful in the effort to decipher it. Because we were able to identify statistical measurements that are more dependent on syntax than on semantics, the framework may also serve for text analysis in language-dependent applications.
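To make the network-based measurements mentioned in the abstract concrete, the following is a minimal sketch (not the authors' code) of how a word co-occurrence network can be built and how degree, selectivity (node strength divided by degree), and assortativity can be compared between a text and a shuffled version of it. It assumes the networkx library; the toy text and helper names are illustrative only.

```python
# Minimal sketch: word co-occurrence network metrics on a text vs. its shuffled version.
# Assumes networkx; illustrative only, not the paper's implementation.
import random
import networkx as nx

def cooccurrence_graph(words, window=1):
    """Link words that appear within `window` positions of each other, with edge weights."""
    g = nx.Graph()
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + 1 + window, len(words))):
            u, v = w, words[j]
            if u == v:
                continue
            if g.has_edge(u, v):
                g[u][v]["weight"] += 1
            else:
                g.add_edge(u, v, weight=1)
    return g

def selectivity(g, node):
    """Selectivity of a word: node strength (sum of edge weights) divided by its degree."""
    deg = g.degree(node)
    return g.degree(node, weight="weight") / deg if deg else 0.0

def summarize(words):
    g = cooccurrence_graph(words)
    hub = max(g.nodes, key=g.degree)  # most connected word
    return {
        "assortativity": nx.degree_assortativity_coefficient(g),
        "max_degree": g.degree(hub),
        "selectivity_of_hub": selectivity(g, hub),
    }

text = ("in the beginning god created the heaven and the earth "
        "and the earth was without form and void").split()
shuffled = text[:]
random.shuffle(shuffled)

print("original:", summarize(text))
print("shuffled:", summarize(shuffled))
```

On a real corpus, the gap between the original and shuffled values of such metrics is what signals compatibility with a natural language, as the abstract describes for the Voynich Manuscript.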
Abstract:
This work points to theoretical references that can contribute to the discussion of the links between communication and language. To that end, it draws on authors who established categories, above all in the verbal domain (of which multicentrality, dialogue, interaction, play, contract, and action are examples), with the potential to allow more proficient analyses of technologically mediated communicational texts.
Abstract:
Assuming that textbooks give literary expression to the cultural and ideological values of a nation or group, we propose the analysis of chemistry textbooks used in Brazilian universities throughout the twentieth century. We analyzed iconographic and textual aspects of 31 textbooks that had significant diffusion in Brazilian universities during that period. As a result of the iconographic analysis, nine categories of images were proposed: (1) laboratory and experimentation, (2) industry and production, (3) graphs and diagrams, (4) illustrations related to daily life, (5) models, (6) illustrations related to the history of science, (7) pictures or diagrams of animal, vegetable or mineral samples, (8) analogies and (9) concepts of physics. The distribution of images among the categories showed shifting emphases in the presentation of chemical content, reflecting a commitment to different conceptions of chemistry over the period. Thus, chemistry was presented as an experimental science in the early twentieth century, the emphasis shifted to the principles of chemistry from the 1950s onward, and the period culminated in a chemistry of undeniable technological influence. Results showed that reflections not only on the history of science, but also on the history of science education, may be useful for the improvement of science education.
Abstract:
Background: The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge associated with the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single analysis activity. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing the exchange of data in a semantically consistent and meaningful way. This work aims to develop an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data.
Results: We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines prescribing how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions: The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and correct interpretation of the exchanged information.
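As an illustration of the connector-with-transformation-rules idea described above, the following is a hypothetical sketch (field names, class names, and the ontology terms are assumptions, not the paper's implementation) of a connector that maps tool-specific gene expression records onto a shared, ontology-aligned vocabulary before handing them to another analysis tool.

```python
# Illustrative sketch only: a software connector applying transformation rules
# so two gene expression tools can exchange semantically consistent data.
# All names here are hypothetical, not taken from the paper.
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass
class TransformationRule:
    source_field: str                          # field name in the source tool's output
    target_term: str                           # term in the shared/reference vocabulary
    convert: Callable[[Any], Any] = lambda v: v  # optional value conversion

class Connector:
    """Mediates data exchange between tools by applying transformation rules."""
    def __init__(self, rules: List[TransformationRule]):
        self.rules = rules

    def translate(self, record: Dict[str, Any]) -> Dict[str, Any]:
        out: Dict[str, Any] = {}
        for rule in self.rules:
            if rule.source_field in record:
                out[rule.target_term] = rule.convert(record[rule.source_field])
        return out

# Example: a microarray tool reports log2 ratios, while the downstream tool
# expects linear fold change under an ontology-aligned term name.
rules = [
    TransformationRule("gene_id", "GeneIdentifier"),
    TransformationRule("log2_ratio", "FoldChange", convert=lambda v: 2 ** v),
]
connector = Connector(rules)
print(connector.translate({"gene_id": "BRCA1", "log2_ratio": 1.0}))
# -> {'GeneIdentifier': 'BRCA1', 'FoldChange': 2.0}
```

In the methodology described by the abstract, such rules would be grounded in a reference ontology for the gene expression domain rather than hard-coded field names, but the mediation pattern is the same.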