6 results for Dictionaries.
in CentAUR: Central Archive, University of Reading - UK
Abstract:
Illustrations are an integral part of many dictionaries, but the selection, placing, and sizing of illustrations is often highly conservative, and can appear to reflect the editorial concerns and technological constraints of previous eras. We might start with the question ‘why not illustrate?’, especially when we consider the ability of an illustration to simplify the definition of technical terms. How do illustrations affect the reader’s view of a dictionary as objective, and how do they reinforce the pedagogic aims of the dictionary? By their graphic nature, illustrations stand out from the surrounding field of text, and they can immediately indicate to the reader the level of seriousness or popularity of the book’s approach, or the age range it is intended for. Illustrations are also expensive to create and can add to printing costs, so it is not surprising that there is much direct and indirect copying from dictionary to dictionary, as well as simple re-use. This article surveys developments in illustrating dictionaries, considering the difference between distributing individual illustrations through the text of the dictionary and grouping them into larger synoptic illustrations; the graphic style of illustrations; and the role of illustrations in ‘feature-led’ dictionary marketing.
Abstract:
Word sense disambiguation is the task of determining which sense of a word is intended from its context. Previous methods have found the lack of training data and the restrictiveness of dictionaries' choices of senses to be major stumbling blocks. A novel, robust algorithm is presented that uses multiple dictionaries, the Internet, clustering, and triangulation to attempt to discern the most useful senses of a given word and learn how they can be disambiguated. The algorithm is explained, and some promising sample results are given.
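The abstract does not give the algorithm's details, but the core move of scoring candidate senses against a context using glosses pooled from multiple dictionaries can be sketched as a simple Lesk-style overlap disambiguator. All dictionary data, sense names, and glosses below are invented for illustration; the paper's use of the Internet, clustering, and triangulation is not reproduced here:

```python
# Hypothetical mini "dictionaries": each maps a word to candidate senses,
# each sense carrying a short gloss (all data invented for illustration).
DICTIONARIES = [
    {"bank": {"finance": "institution that accepts money deposits and lends",
              "river": "sloping land beside a body of water"}},
    {"bank": {"finance": "a place where money is kept and paid out",
              "river": "the edge of a river or stream"}},
]

def disambiguate(word, context):
    """Pick the sense whose glosses (pooled across all dictionaries)
    share the most words with the context: a simple Lesk-style score."""
    context_words = set(context.lower().split())
    scores = {}
    for dictionary in DICTIONARIES:
        for sense, gloss in dictionary.get(word, {}).items():
            overlap = len(set(gloss.split()) & context_words)
            scores[sense] = scores.get(sense, 0) + overlap
    return max(scores, key=scores.get) if scores else None

print(disambiguate("bank", "we walked along the river to the water"))  # river
```

Pooling glosses from several dictionaries, as above, is one way multiple sense inventories can reinforce each other: a sense that only one dictionary describes well can still accumulate enough overlap to win.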
Abstract:
Dictionary compilers and designers use punctuation to structure and clarify entries and to encode information. Dictionaries with a relatively simple structure can make do with simple typography and simple punctuation; as dictionaries grew more complex and encountered the space constraints of the printed page, complex encoding systems were developed, using punctuation and symbols. Two recent trends have emerged in dictionary design: eliminating punctuation, and sometimes using a larger number of fonts, so that the boundaries between elements are indicated by a change of font rather than by punctuation.
Abstract:
This latest issue of the series of Typography papers opens with a beautifully illustrated article by the type designer Gerard Unger on ‘Romanesque’ letters. A further instalment of Eric Kindel’s path-breaking history of stencil letters is published in contributions by him, Fred Smeijers, and James Mosley. Maurice Göldner writes the first history of an early twentieth-century German typefounder, Brüder Butter. William Berkson and Peter Enneson recover the notion of ‘readability’ through a history of the collaboration between Matthew Luckiesh and the Linotype Company. Paul Luna discusses the role of pictures in dictionaries. Titus Nemeth describes a new form of Arabic type for metal composition. The whole gathering shows the remarkable variety and vitality of typography now.
Abstract:
Traditional dictionary learning algorithms find a sparse representation of high-dimensional data by transforming samples into one-dimensional (1D) vectors. This 1D model loses the inherent spatial structure of the data. An alternative is to employ tensor decomposition for dictionary learning on the data's original structural form (a tensor), learning a dictionary along each mode and the corresponding sparse representation with respect to the Kronecker product of these dictionaries. To learn the mode dictionaries, all existing methods update each dictionary iteratively in an alternating manner. Because atoms from each mode dictionary jointly contribute to the sparsity of the tensor, these methods, by treating each mode dictionary independently, ignore the correlations between atoms of different mode dictionaries. In this paper, we propose a joint multiple dictionary learning method for tensor sparse coding, which exploits atom correlations in the sparse representation and updates multiple atoms from each mode dictionary simultaneously. In this algorithm, the Frequent-Pattern Tree (FP-tree) mining algorithm is employed to discover frequent atom patterns in the sparse representation. Inspired by the idea of K-SVD, we develop a new dictionary update method that jointly updates the elements in each pattern. Experimental results demonstrate that our method outperforms other tensor-based dictionary learning algorithms.
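A minimal sketch of the tensor model the abstract builds on: for a 2-mode signal, reconstruction from mode dictionaries D1 and D2 with a coefficient matrix S (dense here; sparse in actual tensor sparse coding) is equivalent to a 1D model whose dictionary is the Kronecker product of the mode dictionaries. The dimensions and the use of NumPy are illustrative assumptions; the paper's FP-tree mining and joint K-SVD-style atom update are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mode dictionaries for a 2-mode (matrix) signal: D1 acts on rows, D2 on columns.
D1 = rng.standard_normal((4, 6))   # mode-1 dictionary with 6 atoms
D2 = rng.standard_normal((5, 7))   # mode-2 dictionary with 7 atoms
S = rng.standard_normal((6, 7))    # coefficients (sparse in real tensor coding)

# Tensor (Tucker-2) form of the reconstruction: X = D1 @ S @ D2.T
X = D1 @ S @ D2.T

# Equivalent vectorised 1D form: vec(X) = (D2 kron D1) vec(S),
# using column-major (Fortran-order) vectorisation.
x_vec = np.kron(D2, D1) @ S.flatten(order="F")

print(np.allclose(x_vec, X.flatten(order="F")))  # True: the two forms agree
```

The identity vec(A B C) = (Cᵀ ⊗ A) vec(B) is what ties the per-mode view to the flattened view: an atom of the Kronecker dictionary is a product of one atom from each mode dictionary, which is why correlations between mode dictionaries matter for the overall sparsity.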