820 results for Bag-of-words


Relevance:

90.00%

Publisher:

Abstract:

The triangular space between memory, narrative and pictorial representation is the terrain on which this article develops. Taking the art of memory developed by Giordano Bruno (1548–1600) and the art of painting subtly revolutionised by Adam Elsheimer (1578–1610) as test cases, the article shows how both subvert the norms of mimesis and narration prevalent throughout the Renaissance, how disrupted memory creates "incoherent" narratives, and how perspective and the notion of "place" are questioned in a corollary way. Two paintings by Elsheimer are analysed and shown to include, in spite of their supposed "realism", numerous incoherencies, aporias and strange elements, often overlooked. Thus they do not conform to two of the basic rules governing both the classical art of memory and the humanist art of painting: well-defined places and the exhaustive translatability of words into images (and vice versa). In the work of Bruno, both his philosophical claims and the literary devices he uses are analysed as hints of a similar (and contemporaneous) undermining of conventions about the transparency and immediacy of representation.

Relevance:

90.00%

Publisher:

Abstract:

"The purpose of life is to obtain knowledge, use it to live with as much satisfaction as possible, and pass it on with improvements and modifications to the next generation." This may sound philosophical, and the interpretation of the words may be subjective, yet it is fairly clear that this is what all living organisms, from bacteria to human beings, do in their lifetime. Indeed, this can be adopted as the information-theoretic definition of life. Over billions of years, biological evolution has experimented with a wide range of physical systems for acquiring, processing and communicating information. We are now in a position to make the principles behind these systems mathematically precise, and then extend them as far as the laws of physics permit. Therein lies the future of computation, of ourselves, and of life.

Relevance:

90.00%

Publisher:

Abstract:

This paper describes the efforts at the MILE lab, IISc, to create a 100,000-word database each in Kannada and Tamil for the design and development of online handwriting recognition. The data have been collected from over 600 users in order to capture the variations in writing style. We describe features of the scripts and how the number of symbols was reduced so that recognizers can be trained effectively on the data. The word list includes all the characters, Kannada and Indo-Arabic numerals, punctuation marks and other symbols. A semi-automated tool is used for annotating the data from stroke to word level. It segments each word into stroke groups and also acts as a validation mechanism for the segmentation. The tool displays the strokes, stroke groups and aksharas of a word, and hence can be used to study the various styles of writing and delayed strokes, and to assign quality tags to the words. The tool is currently being used for annotating the Tamil and Kannada data. The output is stored in a standard XML format.
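The abstract does not name the XML schema the tool emits. Purely as a hypothetical illustration of the stroke-to-word hierarchy being annotated (all tag and attribute names below are invented, not the lab's actual format), one might serialise an annotated word like this:

```python
# Hypothetical illustration only: the paper's actual XML schema is not given.
# Builds a word -> stroke-group -> stroke hierarchy with the standard library.
import xml.etree.ElementTree as ET

word = ET.Element("word", transcription="example", quality="good")
group = ET.SubElement(word, "strokeGroup", akshara="ka")
stroke = ET.SubElement(group, "stroke", id="1")
stroke.text = "12,34 15,38 19,41"   # sampled (x, y) pen points, invented
print(ET.tostring(word, encoding="unicode"))
```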

Relevance:

90.00%

Publisher:

Abstract:

In this paper, we propose a simple and effective approach to classifying H.264 compressed videos by capturing orientation information from the motion vectors. Our major contribution is the computation of a Histogram of Oriented Motion Vectors (HOMV) for hierarchical, partially overlapping space-time cubes. HOMV is found to be very effective at characterizing the motion in these cubes. We then use a Bag of Features (BoF) approach to represent the video as a histogram of HOMV keywords, obtained using k-means clustering. The video feature thus computed is found to be very effective in classifying videos. We demonstrate our results with experiments on two large publicly available video databases.
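A minimal sketch of this pipeline, assuming motion vectors have already been parsed from the H.264 stream into per-cube (dx, dy) arrays; the bin count and cluster count are illustrative choices, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

def homv(motion_vectors, n_bins=8):
    """Histogram of Oriented Motion Vectors for one space-time cube.

    motion_vectors: (N, 2) array of (dx, dy) vectors inside the cube.
    Returns an L1-normalised orientation histogram.
    """
    angles = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0])
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    total = hist.sum()
    return hist / total if total > 0 else hist.astype(float)

def bof_video_descriptor(cube_descriptors, kmeans):
    """Bag-of-Features step: map each cube's HOMV to its nearest k-means
    'keyword' and describe the video as a histogram over those keywords."""
    words = kmeans.predict(cube_descriptors)
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Usage (illustrative): fit k-means on HOMV descriptors pooled over training
# videos, then featurise each video for a downstream classifier.
# kmeans = KMeans(n_clusters=64).fit(all_training_descriptors)
# video_feature = bof_video_descriptor(one_video_descriptors, kmeans)
```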

Relevance:

90.00%

Publisher:

Abstract:

For sign languages used by deaf communities, linguistic corpora have until recently been unavailable, due to the lack of a writing system and a written culture in these communities, and the very recent advent of digital video. Recent improvements in video and computer technology have now made larger sign language datasets possible; however, large sign language datasets that are fully machine-readable remain elusive. This is due to two challenges: 1. the inconsistencies that arise when signs are annotated by means of spoken/written language, and 2. the fact that many parts of signed interaction are not necessarily fully composed of lexical signs (the equivalent of words), instead consisting of constructions that are less conventionalised. As sign language corpus building progresses, the potential for some standards in annotation is beginning to emerge, but before this project there had been no attempt to standardise these practices across corpora, which is required for comparing data cross-linguistically. This project thus had the following aims: 1. to develop annotation standards for glosses (lexical/word level); 2. to test their reliability and validity; 3. to improve current software tools that facilitate a reliable workflow. Overall, the project aimed not only to set a standard for the whole field of sign language studies throughout the world, but also to make significant advances toward two of the world's largest machine-readable datasets for sign languages: the BSL Corpus (British Sign Language, http://bslcorpusproject.org) and the Corpus NGT (Sign Language of the Netherlands, http://www.ru.nl/corpusngt).

Relevance:

90.00%

Publisher:

Abstract:

This research is concerned with the development of tactual displays to supplement the information available through lipreading. Because voicing carries a high informational load in speech and is not well transmitted through lipreading, the efforts are focused on providing tactual displays of voicing to supplement the information available on the lips of the talker. This research includes exploration of 1) signal-processing schemes to extract information about voicing from the acoustic speech signal, 2) methods of displaying this information through a multi-finger tactual display, and 3) perceptual evaluations of voicing reception through the tactual display alone (T), lipreading alone (L), and the combined condition (L+T). Signal processing for the extraction of voicing information used amplitude-envelope signals derived from filtered bands of speech (i.e., envelopes derived from a lowpass-filtered band at 350 Hz and from a highpass-filtered band at 3000 Hz). Acoustic measurements made on the envelope signals of a set of 16 initial consonants, represented through multiple tokens of C1VC2 syllables, indicate that the onset-timing difference between the low- and high-frequency envelopes (EOA: envelope-onset asynchrony) provides a reliable and robust cue for distinguishing voiced from voiceless consonants. This acoustic cue was presented through a two-finger tactual display such that the envelope of the high-frequency band was used to modulate a 250-Hz carrier signal delivered to the index finger (250-I) and the envelope of the low-frequency band was used to modulate a 50-Hz carrier delivered to the thumb (50-T). The temporal-onset order threshold for these two signals, measured with roving signal amplitude and duration, averaged 34 msec, sufficiently small for use of the EOA cue. Perceptual evaluations of the tactual display of EOA with the speech signal indicated: 1) that the cue was highly effective for discrimination of pairs of voicing contrasts; 2) that the identification of 16 consonants was improved by roughly 15 percentage points with the addition of the tactual cue over L alone; and 3) that no improvements in L+T over L were observed for the reception of words in sentences, indicating the need for further training on this task.
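A minimal sketch of how the EOA cue could be measured, assuming a mono speech signal x sampled at rate fs; the filter orders, envelope smoothing, and the onset-threshold rule here are assumptions for illustration, not the study's exact procedure:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_envelope(x, fs, btype, cutoff, smooth_hz=50.0):
    """Amplitude envelope of one band: filter, rectify, lowpass-smooth."""
    b, a = butter(4, cutoff / (fs / 2), btype=btype)
    band = filtfilt(b, a, x)
    b2, a2 = butter(2, smooth_hz / (fs / 2), btype="low")
    return filtfilt(b2, a2, np.abs(band))

def onset_time(env, fs, frac=0.1):
    """First time the envelope exceeds a fraction of its peak (assumed rule)."""
    return np.argmax(env >= frac * env.max()) / fs

def eoa_ms(x, fs):
    """Envelope-onset asynchrony: onset of the 3000 Hz highpass envelope
    minus onset of the 350 Hz lowpass envelope, in milliseconds."""
    low_env = band_envelope(x, fs, "low", 350.0)
    high_env = band_envelope(x, fs, "high", 3000.0)
    return 1000.0 * (onset_time(high_env, fs) - onset_time(low_env, fs))
```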

Relevance:

90.00%

Publisher:

Abstract:

Does knowledge of language consist of symbolic rules? How do children learn and use their linguistic knowledge? To elucidate these questions, we present a computational model that acquires phonological knowledge from a corpus of common English nouns and verbs. In our model, the phonological knowledge is encapsulated as boolean constraints operating on classical linguistic representations of speech sounds in terms of distinctive features. The learning algorithm compiles a corpus of words into increasingly sophisticated constraints. The algorithm is incremental, greedy, and fast. It yields one-shot learning of phonological constraints from a few examples. Our system exhibits behavior similar to that of young children learning phonological knowledge. As a bonus, the constraints can be interpreted as classical linguistic rules. The computational model can be implemented by a surprisingly simple hardware mechanism. Our mechanism also sheds light on a fundamental AI question: how are signals related to symbols?
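Purely as an illustrative sketch of the flavor of such learning (the model's actual algorithm and feature inventory are richer than this), one can greedily treat every feature-value combination that never occurs in the corpus as forbidden, i.e. as a learned constraint:

```python
# Illustrative only: sounds are dicts of binary distinctive features, and any
# pairwise feature-value combination unattested in the corpus is greedily
# taken to be forbidden. Names and the arity limit are invented.
from itertools import combinations, product

def learn_constraints(words, features, arity=2):
    """words: list of words, each a list of sounds; a sound maps
    feature name -> bool. Returns the set of unattested combinations."""
    observed = set()
    for word in words:
        for sound in word:
            for combo in combinations(sorted(features), arity):
                observed.add(tuple((f, sound[f]) for f in combo))
    universe = set()
    for combo in combinations(sorted(features), arity):
        for values in product((True, False), repeat=arity):
            universe.add(tuple(zip(combo, values)))
    return universe - observed   # unattested combinations act as constraints
```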

Relevance:

90.00%

Publisher:

Abstract:

One finding of user studies is that information on meaning tends to be what dictionary users want most from their dictionaries. This is consistent with the traditional image of the dictionary as a repository of the meanings of words, and it is also borne out in definitions of the item DICTIONARY itself as given in dictionaries. While this popular view has not changed much, the growing role of electronic dictionaries can change the lexicographers' approach to meaning representation. Traditionally, paper dictionaries have explained words with words, using either a definition or an equivalent, and occasionally a line-drawn picture. However, a prominent feature of the electronic medium is its multimodality, and this offers potential for the description of meaning. While it is much easier to include pictorial content, electronic dictionaries can also hold media objects which paper cannot carry, such as audio, animation or video. Publishers are drawn by the attraction of these new options, but are they always functionally useful for dictionary users? In this article, the existing evidence is examined, and informed guesses are offered where evidence is not yet available.

Relevance:

90.00%

Publisher:

Abstract:

Undergraduates were asked to generate a name for a hypothetical new exemplar of a category. They produced names that had the same numbers of syllables, the same endings, and the same types of word stems as existing exemplars of that category. In addition, novel exemplars, each consisting of a nonsense syllable root and a prototypical ending, were accurately assigned to categories. The data demonstrate the abstraction and use of surface properties of words.

Relevance:

90.00%

Publisher:

Abstract:

The percentage of subjects recalling each unit in a list or prose passage is considered as a dependent measure. When the same units are recalled in different tasks, processing is assumed to be the same; when different units are recalled, processing is assumed to be different. Two collections of memory tasks are presented, one for lists and one for prose. The relations found in these two collections are supported by an extensive reanalysis of the existing prose memory literature. The same set of words was learned by 13 different groups of subjects under 13 different conditions. Included were intentional free-recall tasks, incidental free recall following lexical decision, and incidental free recall following ratings of orthographic distinctiveness and emotionality. Although the nine free-recall tasks varied widely with regard to the amount of recall, the relative probability of recall for the words was very similar among the tasks. Imagery encoding and recognition produced relative probabilities of recall that were different from each other and from the free-recall tasks. Similar results were obtained with a prose passage. A story was learned by 13 different groups of subjects under 13 different conditions. Eight free-recall tasks, which varied with respect to incidental or intentional learning, retention interval, and the age of the subjects, produced similar relative probabilities of recall, whereas recognition and prompted recall produced relative probabilities of recall that were different from each other and from the free-recall tasks. A review of the prose literature was undertaken to test the generality of these results. Analysis of variance is the most common statistical procedure in this literature. If the relative probability of recall of units varied across conditions, a units-by-condition interaction would be expected. For the 12 studies that manipulated retention interval, an average of 21% of the variance was accounted for by the main effect of retention interval, 17% by the main effect of units, and only 2% by the retention-interval-by-units interaction. Similarly, for the 12 studies that varied the age of the subjects, 6% of the variance was accounted for by the main effect of age, 32% by the main effect of units, and only 1% by the age-by-units interaction. (ABSTRACT TRUNCATED AT 400 WORDS)

Relevance:

90.00%

Publisher:

Abstract:

Whilst analysis of 'risk' (in its many conceptual shapes) has loomed large in both medicine and social sciences over the past 25 years, detailed investigations as to how risk assessments are actually put together (in either lay or professional contexts) are few in number. The studies that are available usually focus on the use of words or everyday conversation in assembling risk. Talking about risk is, of course, important, but what tends to be ignored is the fact that risk can be and is often made visible. For example, it can be made visible through the use of tables, charts, diagrams and various kinds of sophisticated laboratory images. This paper concentrates on the role of such images in the context of a cancer genetics clinic and its associated laboratory. Precisely how these images are tied into the production of risk estimates, how professionals discuss and use such images in clinical work, and how professionals reference them to display facts about risk is the focus of the paper. The paper concludes by highlighting the significance of different kinds of visibility for an understanding of genetic abnormalities and how such differences might impact on the attempts of lay people to get to grips with risk.

Relevance:

90.00%

Publisher:

Abstract:

We present a new way of extracting policy positions from political texts that treats texts not as discourses to be understood and interpreted but rather as data in the form of words. We compare this approach to previous methods of text analysis and use it to replicate published estimates of the policy positions of political parties in Britain and Ireland, on both economic and social policy dimensions. We "export" the method to a non-English-language environment, analyzing the policy positions of German parties, including the PDS as it entered the former West German party system. Finally, we extend its application beyond the analysis of party manifestos, to the estimation of political positions from legislative speeches. Our "language-blind" word scoring technique successfully replicates published policy estimates without the substantial costs of time and labor that these require. Furthermore, unlike any previous method for extracting policy positions from political texts, ours provides uncertainty measures for the estimates, allowing analysts to make informed judgments of the extent to which differences between two estimated policy positions can be viewed as significant or merely as products of measurement error.
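In essence, the word scoring works as sketched below (this follows the published idea of scoring words from reference texts with known positions, but omits the score transformations and the uncertainty computation the abstract mentions):

```python
import numpy as np

def word_scores(ref_counts, ref_positions):
    """ref_counts: (R, V) word counts for R reference texts over V words.
    ref_positions: (R,) known policy positions of the reference texts.
    Each word's score is the expected position of a text given that word."""
    rel_freq = ref_counts / ref_counts.sum(axis=1, keepdims=True)  # P(w|r)
    denom = rel_freq.sum(axis=0)
    denom[denom == 0] = 1.0          # words absent everywhere score 0
    p_r_given_w = rel_freq / denom   # P(r|w), column-normalised
    return p_r_given_w.T @ np.asarray(ref_positions, dtype=float)

def text_score(counts, scores):
    """Score a new ('virgin') text as the frequency-weighted mean of the
    scores of the words it contains."""
    mask = counts > 0
    return counts[mask] @ scores[mask] / counts[mask].sum()
```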

Relevance:

90.00%

Publisher:

Abstract:

Objective: To explore, using functional magnetic resonance imaging (fMRI), the functional organisation of phonological processing in young adults born very preterm.

Subjects: Six right-handed male subjects with radiological evidence of thinning of the corpus callosum were selected from a cohort of very preterm subjects. Six normal right-handed male volunteers acted as controls.

Method: Blood oxygenation level dependent contrast echoplanar images were acquired over five minutes at 1.5 T while subjects performed the tasks. During the ON condition, subjects were visually presented with pairs of non-words and asked to press a key when a pair rhymed (phonological processing). This task alternated with the OFF condition, which required subjects to make letter-case judgments of visually presented pairs of consonant letter strings (orthographic processing). Generic brain activation maps were constructed from individual images by sinusoidal regression and non-parametric testing. Between-group differences in the mean power of experimental response were identified on a voxel-wise basis by analysis of variance.

Results: Compared with controls, the subjects with thinning of the corpus callosum showed significantly reduced power of response in the left hemisphere, including the peristriate cortex and the cerebellum, as well as in the right parietal association area. Significantly increased power of response was observed in the right precentral gyrus and the right supplementary motor area.

Conclusions: The data show evidence of increased frontal and decreased occipital activation in male subjects with neurodevelopmental thinning of the corpus callosum, which may be due to the operation of developmental compensatory mechanisms.

Relevance:

90.00%

Publisher:

Abstract:

In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., the documents in which a word is present) or from syntactic/semantic types of words (e.g., the dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build VSMs for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
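A minimal sketch of the combination, assuming the topic-based and type-based vectors are already available as word-to-vector maps; additive composition is shown only as the simplest of the composition functions one might evaluate, not as the paper's preferred model:

```python
import numpy as np

def combined_vector(word, topic_vsm, type_vsm):
    """Concatenate a word's topic-based and type-based vectors.
    topic_vsm / type_vsm: dicts mapping word -> np.ndarray."""
    return np.concatenate([topic_vsm[word], type_vsm[word]])

def compose_phrase(adj, noun, topic_vsm, type_vsm):
    """Additive composition of the combined adjective and noun vectors."""
    return (combined_vector(adj, topic_vsm, type_vsm)
            + combined_vector(noun, topic_vsm, type_vsm))
```

The composed vector can then be compared (e.g., by cosine similarity) against an observed corpus vector for the phrase, or regressed against brain-activity data as in the evaluation task described above.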