850 results for Text-Based Image Retrieval
Abstract:
Paper presentation at the TEA2016 conference, Tallinn, Estonia.
Abstract:
As a way to gain greater insights into the operation of online communities, this dissertation applies automated text mining techniques to text-based communication to identify, describe and evaluate underlying social networks among online community members. The main thrust of the study is to automate the discovery of social ties that form between community members, using only the digital footprints left behind in their online forum postings. Currently, one of the most common but time-consuming methods for discovering social ties between people is to survey them about their perceived social ties. However, such survey data are difficult to collect due to the high investment of time associated with data collection and the sensitive nature of the questions that may be asked. To overcome these limitations, the dissertation presents a new, content-based method for automated discovery of social networks from threaded discussions, referred to as the ‘name network’. As a case study, the proposed automated method is evaluated in the context of online learning communities. The results suggest that the proposed ‘name network’ method for collecting social network data is a viable alternative to costly and time-consuming collection of users’ data using surveys. The study also demonstrates how social networks produced by the ‘name network’ method can be used to study online classes and to look for evidence of collaborative learning in online learning communities. For example, educators can use name networks as a real-time diagnostic tool to identify students who might need additional help or students who may provide such help to others. Future research will evaluate the usefulness of the ‘name network’ method in other types of online communities.
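To make the ‘name network’ idea concrete, here is a minimal, hypothetical sketch in Python. It assumes the method links a post’s author to other members whose names appear in the post body; the `posts` data, the `KNOWN_NAMES` roster and the matching rule are illustrative assumptions, not the dissertation’s actual algorithm.

```python
import re
from collections import Counter

# Hypothetical forum data: (author, post text). Not taken from the dissertation.
posts = [
    ("alice", "Bob, I think your argument about peer feedback is right."),
    ("bob", "Thanks Alice! Carol raised a similar point last week."),
    ("carol", "Alice and Bob, could either of you share the reading list?"),
]

# Roster of community members; in practice this would come from the forum itself.
KNOWN_NAMES = {"alice", "bob", "carol"}

def name_network(posts):
    """Count directed ties: post author -> member whose name appears in the post."""
    ties = Counter()
    for author, text in posts:
        tokens = {w.lower() for w in re.findall(r"[A-Za-z]+", text)}
        for name in (tokens & KNOWN_NAMES) - {author}:
            ties[(author, name)] += 1
    return ties

if __name__ == "__main__":
    for (src, dst), weight in sorted(name_network(posts).items()):
        print(f"{src} -> {dst} (weight {weight})")
```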
Abstract:
A strategy for document analysis is presented which uses Portable Document Format (PDF, the underlying file structure for Adobe Acrobat software) as its starting point. This strategy examines the appearance and geometric position of text and image blocks distributed over an entire document. A blackboard system is used to tag the blocks as a first stage in deducing the fundamental relationships existing between them. PDF is shown to be a useful intermediate stage in the bottom-up analysis of document structure. Its information on line spacing and font usage gives important clues in bridging the semantic gap between the scanned bitmap page and its fully analysed, block-structured form. Analysis of PDF can yield not only accurate page decomposition but also sufficient document information for the later stages of structural analysis and document understanding.
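As a rough modern analogue of the block-extraction step described above, the sketch below uses the PyMuPDF library (a stand-in chosen here, not the paper’s own toolchain) to pull text and image blocks with their geometric positions from a PDF; the simple position-based tagging rule and the file name "example.pdf" are illustrative assumptions, not the paper’s blackboard system.

```python
import fitz  # PyMuPDF; a stand-in for the paper's own PDF analysis front end

def tagged_blocks(pdf_path):
    """Yield (page_no, tag, kind, bbox, text) for each text/image block."""
    doc = fitz.open(pdf_path)
    for page in doc:
        height = page.rect.height
        # get_text("blocks") returns (x0, y0, x1, y1, text, block_no, block_type);
        # block_type 0 is a text block, 1 an image block.
        for x0, y0, x1, y1, text, _no, btype in page.get_text("blocks"):
            kind = "image" if btype == 1 else "text"
            # Crude positional tagging, a placeholder for the paper's blackboard rules.
            if kind == "image":
                tag = "figure"
            elif y0 < 0.15 * height:
                tag = "header-zone"
            else:
                tag = "body"
            yield page.number, tag, kind, (x0, y0, x1, y1), text.strip()

if __name__ == "__main__":
    for page_no, tag, kind, bbox, text in tagged_blocks("example.pdf"):
        print(page_no, tag, kind, text[:40])
```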
Abstract:
Project work presented to the Escola Superior de Educação de Paula Frassinetti for the degree of Master in Education Sciences, specialization in Animação da Leitura (reading promotion).
Abstract:
Humans have a remarkable ability to extract information from visual data acquired by sight. Through a learning process that starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behavior, but do not allow unique identification of a person. The field of computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that extract high-level information from images in the form of soft biometrics. The problem is approached in two ways, with unsupervised and supervised learning methods. The first seeks to group images via automatically learned feature extraction, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images and learn both the feature extraction and the classification process; here, images are classified according to gender and clothing, divided into the upper and lower parts of the human body. The first approach, when tested with different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for persons versus non-persons. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing. The results of these case studies show that the proposed methods are promising, enabling automatic high-level annotation of images. This opens possibilities for applications in diverse areas such as content-based image and video retrieval and automatic video surveillance, reducing the human effort spent on manual annotation and monitoring.
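For the second, supervised approach, a minimal sketch of the kind of convolutional classifier described (here for the binary gender attribute) is given below in PyTorch; the layer sizes, the 64x64 input resolution and the random input are assumptions for illustration and do not reproduce the thesis's actual architectures or data.

```python
import torch
import torch.nn as nn

class SoftBiometricCNN(nn.Module):
    """Small CNN operating on raw RGB crops, e.g. 64x64 face or person images."""

    def __init__(self, num_classes=2):  # 2 classes, e.g. the gender attribute
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

if __name__ == "__main__":
    model = SoftBiometricCNN()
    dummy = torch.randn(4, 3, 64, 64)  # batch of 4 fake 64x64 RGB crops
    print(model(dummy).shape)          # torch.Size([4, 2])
```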
Abstract:
This thesis compares contemporary anglophone and francophone rewritings of traditional fairy tales for adults. Examining material dating from the 1990s to the present, including novels, novellas, short stories, comics, televisual and filmic adaptations, this thesis argues that while the revisions studied share similar themes and have comparable aims, the methods for inducing wonder (where wonder is defined as the effect produced by the text rather than simply its magical contents) are diametrically opposed, and it is this opposition that characterises the difference between the two types of rewriting. While they all engage with the hybridity of the fairy-tale genre, the anglophone works studied tend to question traditional narratives by keeping the fantasy setting, while francophone works debunk the tales not only in relation to questions of content, but also aesthetics. Through theoretical, historical, and cultural contextualisation, along with close readings of the texts, this thesis aims to demonstrate the existence of this francophone/anglophone divide and to explain how and why the authors in each tradition tend to adopt such different views while rewriting similar material. This division is the guiding thread of the thesis and also functions as a springboard to explore other concepts such as genre hybridity, reader-response, and feminism. The thesis is divided into two parts; the first three chapters work as an in-depth literature review: after examining, in chapters one and two, the historical and contemporary cultural field in which these works were created, chapter three examines theories of fantasy and genre hybridity. The second part of the thesis consists of textual studies and comparisons between francophone and anglophone material and is built on three different approaches. The first (chapter four) looks at selected texts in relation to questions of form, studying the process of world building and world creation enacted when authors combine and rewrite several fairy tales in a single narrative world. The second (chapter five) is a thematic approach which investigates the interactions between femininity, the monstrous, and the wondrous in contemporary tales of animal brides. Finally, chapter six compares rewritings of the tale of ‘Bluebeard’ with a comparison hinged on the representation of the forbidden room and its contents: Bluebeard’s cabinet of wonder is one that he holds sacred, one where he sublimates his wives’ corpses, and it is the catalyst of wonder, terror, and awe. The three contextual chapters and the three text-based studies work towards tracing the tangible existence of the division postulated between francophone and anglophone texts, but also the similarities that exist between the two cultural fields and their roles in the renewal of the fairy-tale genre.
Abstract:
During the lifetime of a research project, different partners develop several research prototype tools that share many common aspects. This is equally true for researchers as individuals and as groups: during a period of time they often develop several related tools to pursue a specific research line. Making research prototype tools easily accessible to the community is of utmost importance to promote the corresponding research, get feedback, and increase the tools’ lifetime beyond the duration of a specific project. One way to achieve this is to build graphical user interfaces (GUIs) that facilitate trying the tools; in particular, with web interfaces one avoids the overhead of downloading and installing the tools. Building GUIs from scratch is a tedious task, in particular for web interfaces, and thus it typically gets low priority when developing a research prototype. Often we opt for copying the GUI of one tool and modifying it to fit the needs of a new related tool. Apart from code duplication, these tools will “live” separately, even though we might benefit from having them all in a common environment, since they are related. This work aims at simplifying the process of building GUIs for research prototype tools. In particular, we present EasyInterface, a toolkit based on a novel methodology that provides an easy way to make research prototype tools available via different common environments such as a web interface, within Eclipse, etc. It includes a novel text-based output language that allows results to be presented graphically without requiring any knowledge of GUI/web programming. For example, the output of a tool could be (a structured version of) “highlight line number 10 of file ex.c” and “when the user clicks on line 10, open a dialog box with the text ...”. The environment interprets this output and converts it into the corresponding visual effects. The advantage of this approach is that the output is interpreted equally by all environments of EasyInterface, e.g., the web interface, the Eclipse plugin, etc. EasyInterface has been developed in the context of the Envisage [5] project and has been evaluated on tools developed in this project, which include static analyzers, test-case generators, compilers, simulators, etc. EasyInterface is open source and available on GitHub.
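To illustrate what a structured version of that example output might look like, the snippet below emits a hypothetical JSON description of the “highlight line 10, then open a dialog on click” actions; the field names are purely illustrative and are not EasyInterface's actual output-language schema.

```python
import json

# Hypothetical structured output in the spirit of EasyInterface's text-based
# output language; the field names below are illustrative, not the real schema.
tool_output = {
    "actions": [
        {"type": "highlight", "file": "ex.c", "line": 10},
        {
            "type": "on-click",
            "file": "ex.c",
            "line": 10,
            "do": {"type": "dialog", "text": "..."},
        },
    ]
}

# The GUI environment (web interface, Eclipse plugin, ...) would parse and render this.
print(json.dumps(tool_output, indent=2))
```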
Abstract:
A computer vision system that has to interact in natural language needs to understand the visual appearance of interactions between objects along with the appearance of objects themselves. Relationships between objects are frequently mentioned in queries of tasks like semantic image retrieval, image captioning, visual question answering and natural language object detection. Hence, it is essential to model context between objects for solving these tasks. In the first part of this thesis, we present a technique for detecting an object mentioned in a natural language query. Specifically, we work with referring expressions which are sentences that identify a particular object instance in an image. In many referring expressions, an object is described in relation to another object using prepositions, comparative adjectives, action verbs etc. Our proposed technique can identify both the referred object and the context object mentioned in such expressions. Context is also useful for incrementally understanding scenes and videos. In the second part of this thesis, we propose techniques for searching for objects in an image and events in a video. Our proposed incremental algorithms use the context from previously explored regions to prioritize the regions to explore next. The advantage of incremental understanding is restricting the amount of computation time and/or resources spent for various detection tasks. Our first proposed technique shows how to learn context in indoor scenes in an implicit manner and use it for searching for objects. The second technique shows how explicitly written context rules of one-on-one basketball can be used to sequentially detect events in a game.
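The incremental-search idea in the second part (using context from already-explored regions to decide where to look next) can be sketched generically as a best-first loop over region proposals; the toy `detector`, `context_boost` rule and scoring below are illustrative stand-ins, not the thesis's learned context models.

```python
def incremental_search(regions, detector, context_boost, budget=10):
    """Best-first exploration of image regions under a fixed computation budget.

    regions: dict region_id -> prior score.
    detector(region_id) -> (label, score) runs the expensive per-region model.
    context_boost(label) -> dict region_id -> bonus, encoding co-occurrence context.
    """
    priority = dict(regions)   # current priority of every unexplored region
    detections = []

    for _ in range(min(budget, len(regions))):
        # Pick the most promising unexplored region.
        rid = max(priority, key=priority.get)
        del priority[rid]
        label, score = detector(rid)
        detections.append((rid, label, score))
        # Use what was just found to re-rank the regions not yet explored.
        for other, bonus in context_boost(label).items():
            if other in priority:
                priority[other] += bonus
    return detections

if __name__ == "__main__":
    regions = {"r1": 0.9, "r2": 0.4, "r3": 0.3}
    detector = lambda rid: ("chair" if rid == "r1" else "table", 0.8)
    # Toy context rule: finding a chair makes region r3 more likely to hold a table.
    context_boost = lambda label: {"r3": 0.5} if label == "chair" else {}
    print(incremental_search(regions, detector, context_boost, budget=3))
```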
Abstract:
Audit firms are organized along industry lines and industry specialization is a prominent feature of the audit market. Yet, we know little about how audit firms make their industry portfolio decisions, i.e., how audit firms decide which set of industries to specialize in. In this study, I examine how the linkages between industries in the product space affect audit firms’ industry portfolio choice. Using text-based product space measures to capture these industry linkages, I find that both Big 4 and small audit firms tend to specialize in industry-pairs that 1) are close to each other in the product space (i.e., have more similar product language) and 2) have a greater number of “between-industries” in the product space (i.e., have a greater number of industries with product language that is similar to both industries in the pair). Consistent with the basic tradeoff between specialization and coordination, these results suggest that specializing in industries that have more similar product language and more linkages to other industries in the product space allows audit firms greater flexibility to transfer industry-specific expertise across industries as well as greater mobility in the product space, hence enhancing their competitive advantage. Additional analysis using the collapse of Arthur Andersen as an exogenous supply shock in the audit market finds consistent results. Taken together, the findings suggest that industry linkages in the product space play an important role in shaping the audit market structure.
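Text-based product space measures of this kind rest on pairwise similarity of industries' product language; the sketch below illustrates the general idea with TF-IDF cosine similarity over toy industry descriptions and a simple count of “between-industries”. The corpus, the threshold and the scoring are illustrative assumptions, not the study's actual construction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy product-language descriptions per industry (illustrative only).
industries = {
    "software": "cloud software platform subscription analytics data",
    "semiconductors": "chip wafer fabrication processor silicon design",
    "it_services": "software consulting cloud integration data analytics",
    "pharma": "drug clinical trial therapy compound biotech",
}

names = list(industries)
tfidf = TfidfVectorizer().fit_transform(industries[n] for n in names)
sim = cosine_similarity(tfidf)  # pairwise product-language similarity

def between_industries(i, j, threshold=0.15):
    """Count industries whose language is similar to BOTH members of the pair."""
    return sum(
        1
        for k in range(len(names))
        if k not in (i, j) and sim[i, k] > threshold and sim[j, k] > threshold
    )

i, j = names.index("software"), names.index("semiconductors")
print(f"similarity(software, semiconductors) = {sim[i, j]:.2f}")
print("between-industries:", between_industries(i, j))
```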
Abstract:
Master's dissertation, Universidade de Brasília, Faculdade de Comunicação, Graduate Program in Communication, 2015.
Abstract:
Taking as its fundamental reference the work developed by UNESCO on the protection of Intangible Cultural Heritage (ICH), in particular the Convention for the Safeguarding of the Intangible Cultural Heritage (2003), this study reflects on the implications that this focus brings to museums. The repercussions of this instrument for the recognition of the importance of ICH at the international level are indisputable, and it has motivated a growing number of initiatives around its safeguarding. Many agents are involved in the preservation of this heritage, yet the International Council of Museums (ICOM) recognizes a central role for museums in this matter. To respond to this challenge, however, museums will have to rethink their strategies so as to engage more closely with ICH, countering a long tradition deeply rooted in material culture. The present study reflects on the possibilities for museums to respond to the challenges of the 2003 Convention, since museum activities offer ways to study and give visibility to this heritage. Depending on the specificities of each museum, strategies for safeguarding ICH can be found, including inventory and documentation (audiovisual, text, audio, image), research, promotion through exhibitions and publications, dissemination through the internet, non-formal education, and other activities. Some museums have already begun to develop integrated approaches to the safeguarding of ICH, and examples of these are presented in this study. This theme raises several challenges, implying innovative museum practices that can reflect the role of museums as promoters of cultural diversity and creativity.
Abstract:
Starting from the reputation of Robert de Clari's La Conquête de Constantinople as naïve, colourful and digressive, this thesis offers a methodical analysis of this vernacular prose account of the Fourth Crusade in order to delineate its moments of continuity and rupture. On the basis of several factors, including their opening and closing formulas, their relation to the time of the crusade, their relative length and their position in the overall economy of the text, the divergent episodes are identified and then analysed by working flexibly with three fundamental characteristics of digression rather than with a core definition of the concept, which makes it possible to distinguish degrees of the digressive and to offer a nuanced overview of the work. To take a broader view of the phenomenon of digression, four other crusade accounts are studied, and all of them, whether written in prose or in verse, in French or in Latin, are in their own way guilty of letting themselves be carried away by their subject into excursuses that betray the personality and convictions of their author. Like Clari, Villehardouin, the author of the Estoire de la guerre sainte, Eudes de Deuil and Albert d'Aix let glimpses of their own history show through when the story they commit to writing strays from the straight path of its narration. The digressions contained in crusade accounts thus constitute a privileged window onto the history of mentalities of the central Middle Ages, a mine of information that can only be adequately exploited through the joint efforts of history and literature.
Abstract:
This article is exploratory in two respects: a line of reflection on the impact of the digital on the humanities, and a reading of the essay « Le nénuphar et l'araignée » by Claire Legendre, published on 4 February 2015 by Les Allusifs. Our hypothesis is that it is necessary to lay the foundations of a theory and a way of thinking about the digital, and to pursue and encourage the implementation of new research tools designed by and for the humanities, directly connected to questions of publishing, dissemination, encoding, text mining, curation, and the visualization and representation of textual, audio and visual data. This article thus offers a first avenue for exploring the use of these new possibilities for Quebec literature.