983 results for programming language processing


Relevance: 80.00%

Abstract:

This work proposes a comparative study of the use of multimedia infographics by the websites Clarín.com, from Argentina, and Folha.com, from Brazil. The research aims to examine and analyse how these two important Latin American online news outlets have used HTML5 technology to advance the interactive possibilities of the journalistic genre. To that end, the comparative analysis addresses multimedia infographics, a form that has undergone profound technological changes affecting both the format and the content of news. In addition to the theoretical framing and literature review on infographics, newsgames, transmedia storytelling, online journalism, interactivity and the programming languages used to produce multimedia infographics, the work carried out a comparative analysis of the sections Infográficos, published by Folha.com, and Especiales Multimedia, by Clarín.com. The study, both quantitative and qualitative, examined the narrative and informational resources, tools and Internet programming-language technologies employed by the two outlets, based on the analytical model proposed by Alberto Cairo in Infografia 2.0 – visualización interactiva de información en prensa. The research showed that, although Clarín.com used Flash technology in most of the multimedia infographics analysed, the comparative results indicate that the Argentine online newspaper's infographics afforded higher levels of interactivity than the multimedia infographics of Folha.com, which were developed mostly in HTML5.

Relevance: 80.00%

Abstract:

This thesis proposes a space-efficient Prolog implementation based on the work of David H. D. Warren and Hassan Aït-Kaci. Common Lisp is the framework used for the construction of the Prolog system; it was chosen both because it provides a space-efficient environment and because it is a rich programming language, in the sense that it supplies the user with abstractions and new ways of thinking. The resulting system is a new syntax for the initial language that runs on top of the SBCL Common Lisp implementation and can abstract away or exploit the underlying system.

Relevance: 80.00%

Abstract:

The semantic model described in this paper is based on models developed for arithmetic (e.g. McCloskey et al. 1985, Cohen and Dehaene 1995), natural language processing (Fodor 1975, Chomsky 1981) and work by the author on how learners parse mathematical structures. The semantic model highlights the importance of the parsing process and the relationship between this process and the mathematical lexicon/grammar. It concludes by demonstrating that, for a learner to become an efficient, competent mathematician, a process of top-down parsing is essential.

Relevance: 80.00%

Abstract:

The semantic model developed in this research was a response to the difficulty a group of mathematics learners had with conventional mathematical language and with their interpretation of mathematical constructs. In order to develop the model, ideas from linguistics, psycholinguistics, cognitive psychology, formal languages and natural language processing were investigated. This investigation led to the identification of four main processes: the parsing process, syntactic processing, semantic processing and conceptual processing. The model showed the complex interdependency between these four processes and provided a theoretical framework in which the behaviour of the mathematics learner could be analysed. The model was then extended to include the use of technological artefacts in the learning process. To facilitate this aspect of the research, the theory of instrumentation was incorporated into the semantic model. The conclusion of this research was that, although the cognitive processes were interdependent, they could develop at different rates until mastery of a topic was achieved. It also found that the introduction of a technological artefact into the learning environment added another layer of complexity, both in terms of the learning process and of the underlying relationship between the four cognitive processes.

Relevance: 80.00%

Abstract:

This work aims to study the different mental states of the characters in the novels of Tu rostro mañana, by Javier Marías. By examining the narrator's reflections on J. Deza, Peter Wheeler and Francisco Rico, we observe that their mental decline is shown through a kind of "prescience" or momentary lucidity, which can serve to present silence as the ultimate tendency of all discourse. From the moment any fictional story is built upon a discourse, whatever its channel of presentation or its source, that discourse is falsified by time, by people and by any other tool that can be used to tell anything. The conclusions of this work reveal the chimera involved in trying to maintain absolute restraint about what has happened, since that narrative void will be filled by an impersonation that tends to be the most infamous reverse of its actors. This is why the narrator J. Deza remains compelled to tell his stories, even where one would say there can no longer be words enough to translate a fact into fiction.

Relevance: 80.00%

Abstract:

One of the leading motivations behind the multilingual semantic web is to make resources accessible digitally in an online global multilingual context. Consequently, it is fundamental for knowledge bases to find a way to manage multilingualism and thus be equipped with those procedures for its conceptual modelling. In this context, the goal of this paper is to discuss how common-sense knowledge and cultural knowledge are modelled in a multilingual framework. More particularly, multilingualism and conceptual modelling are dealt with from the perspective of FunGramKB, a lexico-conceptual knowledge base for natural language understanding. This project argues for a clear division between the lexical and the conceptual dimensions of knowledge. Moreover, the conceptual layer is organized into three modules, which result from a strong commitment towards capturing semantic knowledge (Ontology), procedural knowledge (Cognicon) and episodic knowledge (Onomasticon). Cultural mismatches are discussed and formally represented at the three conceptual levels of FunGramKB.
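The three-module split of the conceptual layer can be pictured as a trivial data structure. The module names (Ontology, Cognicon, Onomasticon) come from the abstract above; every entry below is an invented example, not actual FunGramKB content:

```python
# Toy model of a three-module conceptual layer in the spirit of FunGramKB;
# the module names come from the paper, but all entries are invented examples.
conceptual_layer = {
    "Ontology": {          # semantic knowledge: concepts and their meaning
        "BUILDING": "a permanent structure with walls and a roof",
    },
    "Cognicon": {          # procedural knowledge: scripted event sequences
        "EATING_AT_A_RESTAURANT": ["enter", "order", "eat", "pay", "leave"],
    },
    "Onomasticon": {       # episodic knowledge: named entities and events
        "PARIS": "capital city of France",
    },
}

def lookup(module, concept):
    """Return the stored knowledge for a concept, or None if absent."""
    return conceptual_layer.get(module, {}).get(concept)
```

A cultural mismatch would then surface as the same concept carrying different entries (e.g. different restaurant scripts) across language-specific layers.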

Relevance: 80.00%

Abstract:

People recommenders are a widespread feature of social networking sites and educational social learning platforms alike. However, when these systems are used to extend learners’ Personal Learning Networks, they often fall short of providing recommendations of learning value to their users. This paper proposes a design of a people recommender based on content-based user profiles, and a matching method based on dissimilarity therein. It presents the results of an experiment conducted with curators of the content curation site Scoop.it!, where curators rated personalized recommendations for contacts. The study showed that matching dissimilarity of interpretations of shared interests is more successful in providing positive experiences of breakdown for the curator than is matching on similarity. The main conclusion of this paper is that people recommenders should aim to trigger constructive experiences of breakdown for their users, as the prospect and potential of such experiences encourage learners to connect to their recommended peers.
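A minimal sketch of dissimilarity-based matching: among candidates who share at least one interest with the user, recommend the one whose content profile overlaps least. The set-based profiles and the Jaccard measure are assumptions for illustration, not the paper's actual matching method:

```python
def jaccard(a, b):
    """Overlap between two sets of profile terms."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, candidates):
    """Among candidates sharing at least one interest with the user, pick the
    one whose content profile is most *dissimilar* to the user's -- a sketch
    of matching on dissimilarity within shared interests."""
    sharing = [c for c in candidates if user["interests"] & c["interests"]]
    return min(sharing, key=lambda c: jaccard(user["profile"], c["profile"]))
```

The shared interest keeps the recommendation relevant, while the dissimilar profile creates the potential for a constructive experience of breakdown.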

Relevance: 80.00%

Abstract:

The current study builds upon a previous study, which examined the degree to which the lexical properties of students' essays could predict their vocabulary scores. We expand on this previous research by incorporating new natural language processing indices related to both the surface and discourse levels of students' essays. Additionally, we investigate the degree to which these NLP indices can account for variance in students' reading comprehension skills. We calculated linguistic essay features using our framework, ReaderBench, an automated text analysis tool that calculates indices related to the linguistic and rhetorical features of text. University students (n = 108) produced timed (25-minute) argumentative essays, which were then analyzed by ReaderBench. Additionally, they completed the Gates-MacGinitie Vocabulary and Reading Comprehension tests. The results of this study indicated that two indices were able to account for 32.4% of the variance in vocabulary scores and 31.6% of the variance in reading comprehension scores. Follow-up analyses revealed that these models further improved when only considering essays that contained multiple paragraphs (R2 values = .61 and .49, respectively). Overall, the results of the current study suggest that natural language processing techniques can help to inform models of individual differences among student writers.
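As a sketch of the variance-explained statistic reported above, the block below computes R2 for a one-predictor least-squares fit of an essay index against a test score. This is a single-variable illustration; the study's actual models regress on multiple ReaderBench indices:

```python
def r_squared(xs, ys):
    """R^2 of the ordinary least-squares line y = a + b*x.
    One-predictor sketch; the study used multiple NLP indices."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Slope and intercept of the least-squares line.
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    a = my - b * mx
    # Proportion of variance in ys explained by the fitted line.
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot
```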

Relevance: 80.00%

Abstract:

Text cohesion is an important element of discourse processing. This paper presents a new approach to modeling, quantifying, and visualizing text cohesion using automated cohesion flow indices that capture semantic links among paragraphs. Cohesion flow is calculated by applying Cohesion Network Analysis, a combination of semantic distances, Latent Semantic Analysis, and Latent Dirichlet Allocation, as well as Social Network Analysis. Experiments performed on 315 timed essays indicated that cohesion flow indices are significantly correlated with human ratings of text coherence and essay quality. Visualizations of the global cohesion indices are also included to support a more facile understanding of how cohesion flow impacts coherence in terms of semantic dependencies between paragraphs.
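To illustrate the idea of semantic links between paragraphs, the sketch below scores every paragraph pair and keeps the pairs above a threshold. Plain word overlap stands in for the paper's LSA/LDA-based semantic distances, and the threshold value is an arbitrary choice:

```python
def jaccard(a, b):
    """Word-overlap similarity between two paragraphs (a crude stand-in
    for the LSA/LDA semantic distances used in Cohesion Network Analysis)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def cohesion_links(paragraphs, threshold=0.2):
    """Return (i, j, similarity) links between paragraph pairs whose
    similarity clears the threshold; the threshold is an assumption."""
    links = []
    for i in range(len(paragraphs)):
        for j in range(i + 1, len(paragraphs)):
            sim = jaccard(paragraphs[i], paragraphs[j])
            if sim >= threshold:
                links.append((i, j, round(sim, 3)))
    return links
```

The resulting link list is the kind of graph over which network-analysis indices of cohesion flow can then be computed.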

Relevance: 80.00%

Abstract:

This presentation summarizes experience with the automated speech recognition and translation approach realised in the context of the European project EMMA.

Relevance: 80.00%

Abstract:

Rhythm analysis of written texts focuses on literary analysis and it mainly considers poetry. In this paper we investigate the relevance of rhythmic features for categorizing texts in prosaic form pertaining to different genres. Our contribution is threefold. First, we define a set of rhythmic features for written texts. Second, we extract these features from three corpora, of speeches, essays, and newspaper articles. Third, we perform feature selection by means of statistical analyses, and determine a subset of features which efficiently discriminates between the three genres. We find that using as little as eight rhythmic features, documents can be adequately assigned to a given genre with an accuracy of around 80 %, significantly higher than the 33 % baseline which results from random assignment.
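The pipeline described above (extract rhythmic features from a text, then discriminate between genres) can be sketched as follows. The three features and the nearest-centroid classifier are illustrative stand-ins, since the paper's actual eight selected features are not listed here:

```python
import re
import statistics

def rhythmic_features(text):
    """Toy rhythmic features: sentence-length mean and spread, plus comma
    density. (Illustrative stand-ins for the paper's feature set.)"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return [
        statistics.mean(lengths),              # mean sentence length
        statistics.pstdev(lengths),            # sentence-length variability
        text.count(",") / max(len(text.split()), 1),  # comma density
    ]

def nearest_genre(features, centroids):
    """Assign the feature vector to the genre with the closest centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda g: dist(features, centroids[g]))
```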

Relevance: 80.00%

Abstract:

Opinion mining and sentiment analysis are important research areas in Natural Language Processing (NLP) and have become viable alternatives for automatically extracting the affective information found in texts. Our aim is to build an NLP model to analyze gamers' sentiments and opinions expressed in a corpus of 9,750 game reviews. A Principal Component Analysis using sentiment analysis features explained 51.2% of the variance of the reviews and provides an integrated view of the major sentiment- and topic-related dimensions expressed in game reviews. A Discriminant Function Analysis based on the emerging components classified game reviews into positive, neutral and negative ratings with 55% accuracy.
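For a sense of what extracting affective information from review text involves, here is a much simpler lexicon-count baseline, not the paper's PCA plus discriminant-function pipeline; the word lists are invented examples:

```python
# Minimal lexicon-count sentiment baseline for game reviews.
# The word lists below are illustrative assumptions, not a published lexicon.
POSITIVE = {"fun", "great", "addictive", "polished", "love"}
NEGATIVE = {"buggy", "boring", "broken", "grindy", "hate"}

def classify_review(text):
    """Label a review by counting positive vs. negative lexicon hits."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A feature matrix of such lexicon counts per review is the kind of input a PCA or discriminant analysis would then operate on.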

Relevance: 80.00%

Abstract:

Taxonomies have gained broad usage in a variety of fields due to their extensibility, as well as their use for classification and knowledge organization. Of particular interest is the digital document management domain, in which their hierarchical structure can be effectively employed to organize documents into content-specific categories. Common or standard taxonomies (e.g., the ACM Computing Classification System) contain concepts that are too general for conceptualizing specific knowledge domains. In this paper we introduce a novel automated approach that combines sub-trees from general taxonomies with specialized seed taxonomies by using specific Natural Language Processing techniques. We provide an extensible and generalizable model for combining taxonomies in the practical context of two very large European research projects. Because the manual combination of taxonomies by domain experts is a highly time-consuming task, our model measures the semantic relatedness between concept labels in CBOW or skip-gram Word2vec vector spaces. A preliminary quantitative evaluation of the resulting taxonomies is performed after applying a greedy algorithm with incremental thresholds used for matching and combining topic labels.
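The label-matching step can be sketched with cosine similarity over word vectors and a similarity threshold. The two-dimensional toy vectors below stand in for real Word2vec CBOW/skip-gram embeddings, and the threshold value is an arbitrary assumption:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def greedy_match(general_labels, seed_labels, vectors, threshold=0.8):
    """For each seed-taxonomy label, keep the most similar general-taxonomy
    label if its cosine similarity clears the threshold (greedy sketch)."""
    matches = {}
    for seed in seed_labels:
        best = max(general_labels, key=lambda g: cosine(vectors[seed], vectors[g]))
        if cosine(vectors[seed], vectors[best]) >= threshold:
            matches[seed] = best
    return matches
```

In the paper's setting the threshold is incremented over several passes; a single fixed threshold is shown here for brevity.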

Relevance: 80.00%

Abstract:

The Semantic Annotation component is a software application that provides support for automated text classification, a process grounded in a cohesion-centered representation of discourse that facilitates topic extraction. The component enables the semantic meta-annotation of text resources, including automated classification, thus facilitating information retrieval within the RAGE ecosystem. It is available in the ReaderBench framework (http://readerbench.com/) which integrates advanced Natural Language Processing (NLP) techniques. The component makes use of Cohesion Network Analysis (CNA) in order to ensure an in-depth representation of discourse, useful for mining keywords and performing automated text categorization. Our component automatically classifies documents into the categories provided by the ACM Computing Classification System (http://dl.acm.org/ccs_flat.cfm), but also into the categories from a high level serious games categorization provisionally developed by RAGE. English and French languages are already covered by the provided web service, whereas the entire framework can be extended in order to support additional languages.

Relevance: 80.00%

Abstract:

Community-driven Question Answering (CQA) systems crowdsource experiential information in the form of questions and answers and have accumulated valuable reusable knowledge. Clustering of QA datasets from CQA systems provides a means of organizing the content to ease tasks such as manual curation and tagging. In this paper, we present a clustering method that exploits the two-part question-answer structure in QA datasets to improve clustering quality. Our method, MixKMeans, composes question- and answer-space similarities in such a way that the space on which the match is higher is allowed to dominate. This construction is motivated by our observation that semantic similarity between question-answer pairs (QAs) can become localized in either space. We empirically evaluate our method on a variety of real-world labeled datasets. Our results indicate that our method significantly outperforms state-of-the-art clustering methods for the task of clustering question-answer archives.
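The similarity composition described above, in which the space with the stronger match dominates, can be sketched as taking the maximum of the question-space and answer-space similarities. The bag-of-words vectors here are a stand-in; the actual representation and clustering machinery of MixKMeans are not reproduced:

```python
import math

def bow(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary (a stand-in
    representation for illustration)."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def composed_similarity(qa1, qa2, vocab):
    """Compare two (question, answer) pairs separately in question space and
    answer space, letting the better-matching space dominate."""
    q_sim = cosine(bow(qa1[0], vocab), bow(qa2[0], vocab))
    a_sim = cosine(bow(qa1[1], vocab), bow(qa2[1], vocab))
    return max(q_sim, a_sim)
```

This captures the observation that two QAs may match strongly on questions but weakly on answers, or vice versa, without the weaker space washing out the match.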