96 results for Tokens


Relevance:

10.00%

Publisher:

Abstract:

This thesis explores how to represent image texture in order to obtain information about the geometry and structure of surfaces, with particular emphasis on locating surface discontinuities. Theoretical and psychophysical results lead to the following conclusions for the representation of image texture: (1) A texture edge primitive is needed to identify texture change contours, which are formed by an abrupt change in the 2-D organization of similar items in an image. The texture edge can be used for locating discontinuities in surface structure and surface geometry and for establishing motion correspondence. (2) Abrupt changes in attributes that vary with changing surface geometry (orientation, density, length, and width) could be used to identify discontinuities in surface geometry and surface structure. (3) Texture tokens are needed to separate the effects of different physical processes operating on a surface. They represent the local structure of the image texture. Their spatial variation can be used in the detection of texture discontinuities and texture gradients, and their temporal variation may be used for establishing motion correspondence. What precisely constitutes the texture tokens is unknown; it appears, however, that the intensity changes alone will not suffice, but local groupings of them may. (4) The above primitives need to be assigned rapidly over a large range in an image.


A procedure that uses fuzzy ARTMAP and K-Nearest Neighbor (K-NN) categorizers to evaluate intrinsic and extrinsic speaker normalization methods is described. Each classifier is trained on preprocessed, or normalized, vowel tokens from about 30% of the speakers of the Peterson-Barney database, then tested on data from the remaining speakers. Intrinsic normalization methods included one nonscaled, four psychophysical scales (bark, bark with end-correction, mel, ERB), and three log scales, each tested on four different combinations of the fundamental (F0) and the formants (F1, F2, F3). For each scale and frequency combination, four extrinsic speaker adaptation schemes were tested: centroid subtraction across all frequencies (CS), centroid subtraction for each frequency (CSi), linear scale (LS), and linear transformation (LT). A total of 32 intrinsic and 128 extrinsic methods were thus compared. Fuzzy ARTMAP and K-NN showed similar trends, with K-NN performing somewhat better and fuzzy ARTMAP requiring about 1/10 as much memory. The optimal intrinsic normalization method was bark scale, or bark with end-correction, using the differences between all frequencies (Diff All). The order of performance for the extrinsic methods was LT, CSi, LS, and CS, with fuzzy ARTMAP performing best using bark scale with Diff All, and K-NN choosing psychophysical measures for all except CSi.
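As a concrete illustration of the preprocessing families compared above, the Python sketch below implements one common variant of the bark transform and the simplest extrinsic scheme, centroid subtraction across all frequencies (CS). The function names are assumed for illustration, and the paper's exact scale variants (e.g. the end-correction) are not reproduced here.

```python
import math

def hz_to_bark(f):
    # One common (Zwicker-style) bark transform; the paper's exact scale
    # and its end-corrected variant may differ.
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def centroid_subtract(tokens):
    # Extrinsic "CS" scheme sketch: subtract a single centroid computed
    # across all frequencies (F0..F3) of one speaker's vowel tokens.
    values = [v for tok in tokens for v in tok]
    centroid = sum(values) / len(values)
    return [[v - centroid for v in tok] for tok in tokens]
```

A per-frequency variant (CSi) would instead compute one centroid per formant position before subtracting.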


Our ability to track an object as the same persisting entity over time and motion may primarily rely on spatiotemporal representations which encode some, but not all, of an object's features. Previous researchers using the 'object reviewing' paradigm have demonstrated that such representations can store featural information of well-learned stimuli such as letters and words at a highly abstract level. However, it is unknown whether these representations can also store purely episodic information (i.e. information obtained from a single, novel encounter) that does not correspond to pre-existing type-representations in long-term memory. Here, in an object-reviewing experiment with novel face images as stimuli, observers still produced reliable object-specific preview benefits in dynamic displays: a preview of a novel face on a specific object speeded the recognition of that particular face at a later point when it appeared again on the same object compared to when it reappeared on a different object (beyond display-wide priming), even when all objects moved to new positions in the intervening delay. This case study demonstrates that the mid-level visual representations which keep track of persisting identity over time (e.g. 'object files', in one popular framework) can store not only abstract types from long-term memory, but also specific tokens from online visual experience.


The Zipf curves of log of frequency against log of rank for a large English corpus of 500 million word tokens and 689,000 word types, and for a large Spanish corpus of 16 million word tokens and 139,000 word types, are shown to have the usual slope close to –1 for ranks less than 5,000, but then turn at higher ranks to give a slope close to –2. This is apparently due mainly to foreign words and place names. Zipf curves for two highly inflected Indo-European languages, Irish and ancient Latin, are also given. Because of their larger number of word types per lemma, these remain flatter than the English curve, maintaining a slope of –1 until turning points of about rank 30,000 for Irish and 10,000 for Latin. A formula which calculates the number of tokens given the number of types is derived in terms of the rank at the turning point: 5,000 for both English and Spanish, 30,000 for Irish, and 10,000 for Latin.
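The two-regime behaviour described above can be made concrete with a small model. The sketch below is an assumed illustrative form, not the paper's derived formula: it sums a rank-frequency curve with log-log slope –1 up to the turning rank and slope –2 beyond it (the two regimes meet continuously at the turning point), giving an estimated token count from a type count.

```python
def zipf_tokens(num_types, turn_rank, top_freq):
    # Illustrative two-regime Zipf model: frequency ~ top_freq / r for
    # r <= turn_rank, then ~ top_freq * turn_rank / r**2 beyond it, so
    # the log-log slope changes from -1 to -2 at the turning point.
    total = 0.0
    for r in range(1, num_types + 1):
        if r <= turn_rank:
            total += top_freq / r
        else:
            total += top_freq * turn_rank / r ** 2
    return total
```

With `turn_rank` equal to `num_types` this reduces to a plain slope –1 Zipf sum; pushing the turning point earlier lowers the estimated token count, matching the observed drop below Zipf's law at high ranks.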


Experiments show that for a large corpus, Zipf’s law does not hold for all ranks of words: the frequencies fall below those predicted by Zipf’s law for ranks greater than about 5,000 word types in English and about 30,000 word types in the inflected languages Irish and Latin. It also does not hold for syllables or words in the syllable-based languages Chinese and Vietnamese. However, when single words are combined together with word n-grams in one list and put in rank order, the frequency of tokens in the combined list extends Zipf’s law with a slope close to –1 on a log-log plot in all five languages. Further experiments have demonstrated the validity of this extension of Zipf’s law to n-grams of letters, phonemes or binary bits in English. It is shown theoretically that probability theory alone can predict this behavior in randomly created n-grams of binary bits.
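A minimal sketch of the combined-list construction described above, assuming plain whitespace-tokenized text and a small maximum n-gram length (single words are simply the n = 1 case):

```python
from collections import Counter

def combined_rank_list(words, max_n=3):
    # Count word n-grams for n = 1..max_n over one token sequence,
    # merge all counts into a single list, and sort by frequency,
    # giving the combined rank-frequency list.
    counts = Counter()
    for n in range(1, max_n + 1):
        for i in range(len(words) - n + 1):
            counts[tuple(words[i:i + n])] += 1
    return counts.most_common()
```

Plotting log frequency against log rank of the returned list on a large corpus is what, per the abstract, recovers a slope close to –1 across all ranks.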


In noise repetition-detection tasks, listeners have to distinguish trials of continuously running noise from trials in which noise tokens are repeated in a cyclic manner. Recently, it has been shown that using the exact same noise token across several trials (“reference noise”) facilitates the detection of repetitions for this token [Agus et al. (2010). Neuron 66, 610–618]. This was attributed to perceptual learning. Here, the nature of the learning was investigated. In experiment 1, reference noise tokens were embedded in trials with or without cyclic presentation. Naïve listeners reported repetitions in both cases, thus responding to the reference noise even in the absence of an actual repetition. Experiment 2, with the same listeners, showed a similar pattern of results even after the design of the experiment was made explicit, ruling out a misunderstanding of the task. Finally, in experiment 3, listeners reported repetitions in trials containing the reference noise, even before ever hearing it presented cyclically. The results show that listeners were able to learn and recognize noise tokens in the absence of an immediate repetition. Moreover, the learning mandatorily interfered with listeners' ability to detect repetitions. It is concluded that salient perceptual changes accompany the learning of noise.
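The two trial types described above can be sketched as stimulus construction code; the token length and cycle count below are arbitrary illustrative values, not the parameters used in the experiments.

```python
import random

def running_noise(n_samples, rng):
    # Continuously running Gaussian noise (no repetition).
    return [rng.gauss(0.0, 1.0) for _ in range(n_samples)]

def cyclic_noise(token, n_cycles):
    # Repeated-noise trial: the same token presented cyclically,
    # as in the repetition-detection task.
    return token * n_cycles

rng = random.Random(0)
token = running_noise(500, rng)   # one illustrative noise token
trial = cyclic_noise(token, 4)    # four seamless repetitions of it
```

Reusing the exact same `token` across several trials yields the "reference noise" condition whose learning effects the experiments probe.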


In most previous research on distributional semantics, Vector Space Models (VSMs) of words are built either from topical information (e.g., documents in which a word is present), or from syntactic/semantic types of words (e.g., dependency parse links of a word in sentences), but not both. In this paper, we explore the utility of combining these two representations to build a VSM for the task of semantic composition of adjective-noun phrases. Through extensive experiments on benchmark datasets, we find that even though a type-based VSM is effective for semantic composition, it is often outperformed by a VSM built using a combination of topic- and type-based statistics. We also introduce a new evaluation task wherein we predict the composed vector representation of a phrase from the brain activity of a human subject reading that phrase. We exploit a large syntactically parsed corpus of 16 billion tokens to build our VSMs, with vectors for both phrases and words, and make them publicly available.
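A minimal sketch of the two ideas, combining representations by concatenation and composing an adjective-noun phrase by addition, with assumed list-based vectors; the paper's actual combination and composition functions may well differ.

```python
def concat_vsm(topic_vec, type_vec):
    # One simple way to combine topic-based and type-based statistics
    # for a word: concatenate the two vectors into a single VSM entry.
    return topic_vec + type_vec

def compose_addition(adj_vec, noun_vec):
    # A common baseline for semantic composition: element-wise addition
    # of the adjective and noun vectors.
    return [a + n for a, n in zip(adj_vec, noun_vec)]
```

The evaluation then compares such composed vectors against held-out phrase vectors (or, in the new task, against brain-activity-derived representations).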


The WS-* family of specifications defines a security model for web services based on the concepts of claim, security token, and Security Token Service (STS). In this model, the security information of message originators (identity, privileges, etc.) is represented as sets of claims contained within security tokens. Message originators obtain these security tokens either through legacy protocols or through special services, called Security Token Services, using the operations and protocols defined in the WS-Trust specification. The Security Token Service concept is not restricted to web services: proposals such as the Information Cards model, applicable to web applications, also use it. Security Token Services play several roles, depending on the information present in the issued token: for example, Identity Provider, when the issued tokens contain identity information, or Policy Decision Point, when the issued tokens define authorizations. This document describes the design of a software library for building Security Token Services, as defined in the WS-Trust standard, targeting the .NET 3.5 platform. A flexible and extensible architecture is proposed, able to support new versions of the standards and the several variants that Security Token Services admit, namely: the type of the issued security tokens and of the claims they contain, the inference of claims, and the authentication methods for requesting entities. Implementation aspects of this architecture are presented, namely its integration with the WCF platform, its extensibility, and its support for models and systems outside the standard. Finally, the test platforms implemented to validate the library are described, together with the library's extension modules for supporting the Information Cards model, the OpenID model, and integration with the Authorization Manager.
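The claim/security-token/STS model described above can be illustrated in a language-neutral way. The Python sketch below uses hypothetical names and shows only the core issuance step; the WS-Trust message exchange, token signing, requester authentication, and WCF integration of the actual library are all omitted.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    # One piece of security information about a requester,
    # e.g. an identity attribute or an authorization.
    claim_type: str
    value: str

@dataclass
class SecurityToken:
    # A security token carries a set of claims about its subject.
    issuer: str
    claims: list = field(default_factory=list)

class SecurityTokenService:
    # Minimal STS sketch: look up (or infer) the requester's claims
    # and issue a token containing them.
    def __init__(self, name, claim_store):
        self.name = name
        self.claim_store = claim_store  # requester -> list of Claims

    def issue(self, requester):
        claims = self.claim_store.get(requester, [])
        return SecurityToken(issuer=self.name, claims=claims)
```

In this picture, an STS whose issued claims carry identity attributes acts as an Identity Provider, while one whose claims encode authorizations acts as a Policy Decision Point.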


Animal Cognition, V.6, pp. 259–267


The aim of this research is to survey the academic work produced between 2005 and 2011 on reparations within the legal framework of transitional justice in Colombia, namely the Justice and Peace Law 975 of 2005 and the Victims and Land Restitution Law 1448 of 2011. This effort showed that only after the enactment of a law with transitional-justice content did research on reparations achieve sustained development and continuity. To reach this aim, study sheets were prepared for each of the publications cited throughout the research, together with other analytical tools, which resulted in the classification of the academic output into three major research trends.


An information processing paradigm in the brain is proposed, instantiated in an artificial neural network using biologically motivated temporal encoding. The network locates, within the external-world stimulus, the target memory, defined by a specific pattern of micro-features. The proposed network is robust and efficient. Akin in operation to the swarm-intelligence paradigm of stochastic diffusion search, it finds the best fit to the memory with linear time complexity. Information multiplexing enables neurons to process knowledge as 'tokens' rather than 'types'. The network illustrates the possible emergence of cognitive processing, such as memory retrieval based on partial matching, from low-level interactions. (C) 2007 Elsevier B.V. All rights reserved.
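The search procedure the network is likened to, stochastic diffusion search, can be sketched directly: agents hold hypotheses about where the target pattern sits in the search space, each tests one randomly chosen micro-feature, and inactive agents copy hypotheses from randomly polled active agents. The parameter values below are illustrative assumptions.

```python
import random

def sds_search(space, target, n_agents=50, n_iters=200, rng=None):
    # Stochastic diffusion search sketch: find the best match of
    # `target` (a pattern of micro-features) inside `space`.
    rng = rng or random.Random(0)
    positions = list(range(len(space) - len(target) + 1))
    hyps = [rng.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(n_iters):
        # Test phase: each agent checks one randomly chosen
        # micro-feature of the target at its hypothesised position.
        for i, h in enumerate(hyps):
            j = rng.randrange(len(target))
            active[i] = space[h + j] == target[j]
        # Diffusion phase: each inactive agent polls a random agent and
        # copies its hypothesis if active, else restarts at random.
        for i in range(n_agents):
            if not active[i]:
                k = rng.randrange(n_agents)
                hyps[i] = hyps[k] if active[k] else rng.choice(positions)
    # The largest cluster of agents marks the best-fit position.
    return max(set(hyps), key=hyps.count)
```

Because each agent only ever tests a single micro-feature per iteration, partial matches elsewhere in the space are tolerated, which is the partial-matching retrieval behaviour the abstract describes.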


The paper begins with the assumption that psychological event tokens are identical to or constituted from physical events. It then articulates a familiar apparent problem concerning the causal role of psychological properties. If they do not reduce to physical properties, then either they must be epiphenomenal or any effects they cause must also be caused by physical properties, and hence be overdetermined. It then argues that both epiphenomenalism and overdeterminationism are prima facie perfectly reasonable and relatively unproblematic views. The paper proceeds to argue against Kim’s (Kim, 2000, 2005) attempt to articulate a plausible version of reductionism. It is then argued that psychological properties, along with paradigmatically causally efficacious macro-properties, such as toughness, are causally inefficacious in respect of their possessors' typical effects, because they are insufficiently distinct from those effects. It is finally suggested that the distinction between epiphenomenalism and overdeterminationism may be more terminological than real.


This article summarizes the results of a longer study of address forms in Ancient Greek, based on 11,891 address tokens from a variety of sources. It argues that the Greek evidence appears to contradict two tendencies, found in address forms in other languages, which have been claimed as possible sociolinguistic universals: the tendency toward T/V distinctions, and the principle that “What is new is polite.” It is suggested that these alleged universals should perhaps be re-examined in light of the Greek evidence, and that ancient languages in general have more to contribute to sociolinguistics than is sometimes realized. (Address, Ancient Greek, T/V distinctions)


This doctoral dissertation analyzes two novels by the American novelist Robert Coover as examples of hypertextual writing on the book-bound page, as tokens of hyperfiction. The complexity displayed in the novels, John's Wife and The Adventures of Lucky Pierre, integrates the cultural elements that characterize the contemporary condition of capitalism and the technologized practices that have fostered a different subjectivity evidenced in hypertextual writing and reading: posthuman subjectivity. The models that account for the complexity of each novel are drawn from the concept of strange attractors in Chaos Theory and from the concept of the rhizome in Nomadology. The transformations the characters undergo in the degree of their corporeality set the plane on which to discuss turbulence and posthumanity. The notions of dynamic patterns and strange attractors, along with the concepts of the Body without Organs and the Rhizome, are interpreted, leading to a revision of narratology and to analytical categories appropriate to the study of the novels. The reading exercised throughout this dissertation enacts Daniel Punday's corporeal reading. The changes in the characters' degree of materiality are associated with the stages of order, turbulence and chaos in the story, bearing on the constitution of subjectivity within and along the reading process. Coover's inscription of planes of consistency to counter linearity and accommodate hypertextual features to paper-supported narratives describes the characters' trajectories as rhizomatic. The study led to the conclusion that narrative today stands more as a regime in a rhizomatic relation with other regimes in cultural practice than as an exclusively literary form and genre. Besides this, posthuman subjectivity emerges as a class identity, holding hypertextual novels as its literary form of choice.


As digital systems move away from traditional desktop setups, new interaction paradigms are emerging that better integrate with users’ real-world surroundings, and better support users’ individual needs. While promising, these modern interaction paradigms also present new challenges, such as a lack of paradigm-specific tools to systematically evaluate and fully understand their use. This dissertation tackles this issue by framing empirical studies of three novel digital systems in embodied cognition, an exciting new perspective in cognitive science where the body and its interactions with the physical world take a central role in human cognition. This is achieved by, first, focusing the design of all these systems on tangible interaction, a contemporary interaction paradigm that emphasizes physical interaction; and second, by comprehensively studying user performance in these systems through a set of novel performance metrics grounded on epistemic actions, a relatively well-established and studied construct in the literature on embodied cognition. The first system presented in this dissertation is an augmented Four-in-a-row board game. Three different versions of the game were developed, based on three different interaction paradigms (tangible, touch and mouse), and a repeated-measures study involving 36 participants measured the occurrence of three simple epistemic actions across these three interfaces. The results highlight the relevance of epistemic actions in such a task and suggest that the different interaction paradigms afford instantiation of these actions in different ways. Additionally, the tangible version of the system supports the most rapid execution of these actions, providing novel quantitative insights into the real benefits of tangible systems. The second system presented in this dissertation is a tangible tabletop scheduling application.
Two studies with single and paired users provide several insights into the impact of epistemic actions on the user experience when these are performed outside of a system’s sensing boundaries. These insights are clustered by the form, size and location of ideal interface areas for such offline epistemic actions to occur, as well as by how physical tokens can be designed to better support them. Finally, and based on the results obtained to this point, the last study presented in this dissertation directly addresses the lack of empirical tools to formally evaluate tangible interaction. It presents a video-coding framework grounded on a systematic literature review of 78 papers, and evaluates its value as a metric through a 60-participant study performed across three different research laboratories. The results highlight the usefulness and power of epistemic actions as a performance metric for tangible systems. In sum, through the use of such novel metrics in each of the three studies presented, this dissertation provides a better understanding of the real impact and benefits of designing and developing systems that feature tangible interaction.